Tabea Berberena sitting in front of the IRIS fair trade wall.

When Everyday Life Turns into Field Research

Tabea Berberena on the Role of Critical Reflection in Living and Learning with AI.

Tabea Berberena, M.A., is one of the three coordinators of IRIS and serves as the scientific coordinator of the RISING teaching and learning forum at the University of Stuttgart. She is currently also pursuing her PhD, focusing on trust in intelligent systems. Her research and teaching center on the critical reflection of AI, especially on unconscious bias, ethical issues, and the societal impact of chatbots and other intelligent technologies. In addition to her academic work, she facilitates workshops on data protection and ethical technology education for young people, including engagements with programs such as Jugend forscht.

What are you currently researching?
My current research explores the factors that influence trust in artificial intelligence (AI). I’m particularly interested in emotions and unconscious biases — in other words, how our feelings and attitudes shape the trust we place in chatbots. My aim is to develop a more nuanced and everyday-usable model of trust in AI.

How did your academic path unfold?
I studied in the Big Apple: New York City, an exciting and inspiring environment that shaped me both academically and personally. I earned my bachelor’s degree in psychology, followed by a master’s in educational psychology.

Why those subjects? I’ve always been fascinated by understanding people—how they think, feel, and make decisions. But it was never just about understanding. I wanted to actively make things better. How do people learn best? How can knowledge be more effectively shared? What drives behavior and attitudes? Educational psychology struck me as the perfect bridge between theory and practice, because it focuses on precisely these questions.

One of my biggest motivations during my studies was the question: “But why?” I didn’t want to accept things; I wanted to understand what was behind them. Why do people make certain choices? Why are some people excited about things others find terrible? Why do emotions so strongly influence our judgment? That curiosity ultimately led me to my current research.


How did you come to the University of Stuttgart?
When our family moved back to Germany from the U.S., I felt drawn once again to the world of higher education. I’ve always loved learning and sharing knowledge with students. Even during my psychology studies in the U.S., I worked at a college and taught seminars. Academia felt like home, and I wanted to return to that space.

The University of Stuttgart turned out to be the ideal place to continue my research and academic journey. It offers an interdisciplinary and innovative research landscape, especially in areas like AI, digitization, and human-technology interaction. These topics matter deeply to me, particularly because the psychological and ethical dimensions often get overlooked. It is a great privilege to be able to both research and teach on these relevant societal topics.

Was there a specific event that led you to your current research?
Yes. The topics of bias, stereotypes, and emotions became especially important to me because of my husband. After we arrived in Germany, he unfortunately experienced several instances of discrimination. I had previously believed that prejudice wasn’t a significant issue here, but that turned out to be wrong. This experience led me to examine these issues more deeply, including within the field of AI. I realized that they had received little attention in AI research until recently. Thankfully, that’s beginning to change.

At the same time, I observed how children, teens, and adults increasingly engage with AI tools on their phones without fully understanding how these technologies work or how they subtly influence them.

A child's hand tapping on a tablet.

What are your hobbies? Do they relate to your research?
With two kids, my hobbies currently include helping with homework and driving them around, so I tutor and do taxi duty. On the surface, that might not seem related to my research. But everyday family life actually shows me how deeply AI technologies affect young people and families, often without them realizing it.

Looking closer, there are definite overlaps. When I help my kids learn, I see firsthand how they absorb information, what types of explanations work, and where they struggle, especially with abstract or complex topics. That directly connects to my research: How can we make knowledge about intelligent systems accessible, understandable, and reflective?

Also, being the “taxi driver” helps me learn how children and teens communicate today and how naturally they interact with digital technologies. Voice assistants, chatbots, and social media, these are all part of their everyday lives. That gives me valuable insights into their attitudes toward AI. My daily life has become field research, conducted in the backseat or during homework time.


How does being a mother influence your work?
In my role at IRIS, particularly as head of the RISING learning forum, it’s a real advantage to have close connections to schools and educators through my children. It gives me not only a theoretical, but a very practical perspective on what’s needed when developing educational materials on current and socially relevant issues.

Being a mother also gives me deeper insight into the world of children and teens. I experience firsthand what topics matter to them, how they use digital technologies, and what challenges they face while learning. This helps me create materials that are not only understandable and practical, but also relevant to the target audience.

Parenting also brings a special sensitivity to how knowledge is communicated. I see how differently kids learn, what formats motivate them, and where they quickly lose interest. These observations directly influence how I design courses and materials.

Especially when it comes to AI and trust, it’s essential to not only teach the technical foundations, but also to address ethical and societal questions in a child-friendly, accessible way. I also value my exchanges with other parents and educators, who provide great feedback on our learning materials. That kind of real-world input helps me continuously refine what we offer, based on students' and teachers' actual needs. Being a parent makes me more attuned to educational challenges and constantly inspires me to make learning content more practical and grounded in real life.

What is your research goal? Will it benefit people—and if so, how?
My goal is to understand which factors influence trust in AI, especially in chatbots. It’s not just about the technology but the psychological and social mechanisms at play. This knowledge can then inform the design of chatbots, websites, and communication between organizations and their users — companies and customers or hospitals and patients.

A key aspect of my research is to highlight both the potential and the challenges of chatbot use. While the intention behind chatbots is often to provide quick, easy, and inclusive access to information, the reality shows that not everyone accepts or benefits from these systems equally. Prior experiences, personal preferences, and even biases can create skepticism or frustration.

For example, I’ve personally been irritated many times when chatting with a bot that couldn’t understand my request or only offered scripted replies. Ultimately, I had to find a different way to get help. Such moments show that it’s not enough to simply provide a chatbot. It must truly be experienced as helpful.

A poster with the question "Which chatbot can best support which topic?" (written in German).

What does your research advocate for? How does it reach people in practice?
My research advocates for a deeper understanding of how trust in chatbots is built, and how psychological factors like emotions and biases influence it. I’m also deeply committed to protecting users from manipulation. I investigate how chatbots can be designed to build trust without using persuasive or deceptive tactics.

This knowledge can help developers and designers create bots that are ethical and transparent. For example, a chatbot could show emotional intelligence while avoiding manipulative sales tactics. Visual design and tone of voice may also benefit from clearer guidelines.

I see three practical areas for impact:

  1. Customer service: Bots should clearly state if they aim to sell something, and not use manipulative techniques.
  2. Healthcare: Bots should provide reliable information without creating fear or pushing users toward specific decisions.
  3. Political and social communication: AI systems must not manipulate opinions through emotional triggers or disinformation.

It should always be transparent that users are talking to a bot, not a human. Ultimately, my research helps create ethical standards for chatbot use — so they can support users while also protecting them from manipulation.

What is RISING at IRIS?
Artificial intelligence is becoming part of everyday life, driving innovation and entering educational spaces. These systems are often seen as capable of making objective decisions, but many examples show how biases, both conscious and unconscious, influence their development.

The RISING learning forum is designed to promote critical reflection on intelligent systems and their social impacts, both in university courses and schools. It encourages learners to question where these systems come from, how they work, and what values shape them.

The central goal of RISING is to promote a holistic approach to the reflection of intelligent systems and their societal consequences. In simple terms: How do AI systems affect our society? Where do they come from, what influences them, and what drives them? One key question is how unconscious biases shape AI and what this means for us.

This topic now affects everyone due to digitalization. That’s why the University of Stuttgart aims not only to prepare its students for this future, but also to start engaging learners while they are still in school. By doing so, we help them broaden their perspectives and develop critical thinking skills that will benefit them later in their careers and foster a more inclusive, respectful, and thoughtful culture of coexistence.

What does IRIS mean to you personally – and what role does it play in society?
What I value most about IRIS is that we don’t just look at technological developments. We critically reflect on their societal, ethical, and psychological effects. AI isn’t just a tool; it shapes how we live, how we trust digital systems, and even how we make decisions.

IRIS brings together researchers from different disciplines and connects research, teaching, and public engagement. This demonstrates that nothing exists in isolation. Every area informs the other. This constellation is rare but essential for driving true change.

To me, IRIS means contributing to the responsible shaping of our digital future—a future where fairness and reflection guide our interactions with AI.

Reflecting on Intelligent Systems In the Next Generation - RISING

Contact


Tabea Berberena, M.A.

Scientific Coordinator of the Teaching and Learning Forum RISING | Doctoral Researcher focusing on trust in intelligent systems
