Brave Conversations – Learning Through Play, Conflict, and Reflection
In collaboration with our partners from Intersticia, Change Labs Stuttgart, and School for Talents, we had the pleasure of hosting a two-day learning event that brought together students, researchers, and educators at the intersection of technology and society. Brave Conversations is a format designed to explore the human aspects of digital transformation, opening up space for critical thinking, ethical reflection, and value-driven debate.
Saturday’s Brave Conversations event shifted the focus to collective exploration. Led by our hosts Anni Rowland-Campbell, Hannah Stewart and Ghada Ibrahim, we began by grounding ourselves in the history of technological development: technologies aren’t abstract forces, they are invented, funded, and deployed by people, within political and economic systems.
Recognizing this brought us to the key question of the day:
If we are the ones building, training, and actively using technology, what does that say about us? And how can we design tech that serves society, not just markets?
The day unfolded as an embodied learning experience that incorporated movement and emotion. Brave Conversations is about learning through doing, feeling, and experiencing, rather than only thinking or reading. This approach carried into the afternoon, as participants worked through a fictional criminal case to examine values, responsibility, and digital agency. Using Moore’s Strategic Triangle as a framework, we asked:
What can we do? (Operational Capacity)
What may we do? (Legitimacy & Governance)
What should we do? (Public Value)
The case surfaced complex ethical questions: Who owns our data after death? Can a large language model (LLM) speak on behalf of someone who is gone? Are we morally responsible for acting on its insights? Moreover, how do we identify – and challenge – the biases that shape its responses?
Simulating the Future? AI in Action
To bring theory into lived experience, the afternoon ended with a collaborative simulation using an LLM. Working in groups, participants entered prompts into the AI and received tasks designed to fulfill a given goal. Roles shifted throughout the session – from active agents to observers to policymakers – creating a dynamic setting for testing human-AI interaction.
What followed was as insightful as it was chaotic. At first, the LLM’s responses seemed helpful and directive. However, as groups encountered conflicting goals and communication breakdowns, the AI began looping, offering repetitive solutions and defaulting to generic strategies. Surprisingly, instead of stepping outside the logic of the tool, many participants kept turning back to the LLM for answers – even when it was clearly stuck.
This prompted critical reflection: Why did we trust the system so much, even when it wasn’t delivering? What does this say about our willingness to surrender agency? And how do we train ourselves – and future professionals – to stay self-aware and critical in tech-mediated environments? For a more in-depth reflection on the notion of trust in AI, read Ghada's blog entry on the Intersticia website.
Looking Back, Looking Forward
Across both days, Brave Conversations created a space for open inquiry, discomfort, humor, and insight. It invited participants to think not just about technology but also with and through it. It challenged them to articulate their values, engage in debate, and reflect on how digital systems shape, and are shaped by, human behavior.
In a world where tech is increasingly interwoven with every career path, event formats like this help prepare students to become thoughtful, ethical actors in the systems they will one day lead. We are grateful to all participants, facilitators, and partners for their courage to have brave conversations, and we look forward to continuing them.