| Time: | November 6, 2025, 2:00 p.m. – 3:00 p.m. |
|---|---|
| Event language: | English |
| Meeting mode: | online |
| Venue: | |
Large Language Models (LLMs) are increasingly being recognized as sources of knowledge and even as advisors in various contexts. In other words, they are becoming integral participants in testimonial exchanges and, as such, are now embedded within our social epistemological practices. This development prompts the question of what standards should be applied in evaluating these models. In this presentation, I argue that LLMs should be assessed based on their credibility, rather than their trustworthiness, and I will outline what it means for an LLM to be considered credible.
Jörg Löschke is a professor of practical philosophy at the University of Stuttgart. His research focuses on personal relationships, normative ethics, axiology, and the ethics of AI.
We invite everyone interested in the topic to attend this talk, which will take place in English. Prof. Maria Wirzberger, IRIS speaker, will moderate, and following the talk, there will be an opportunity to ask questions. We look forward to active participation.
We send out a newsletter at irregular intervals with information on IRIS events. To make sure you don't miss anything, simply enter your e-mail address. You will shortly receive a confirmation e-mail to verify that you are the person requesting the subscription. Once you confirm, you will be added to the mailing list. This is a hidden mailing list, meaning the subscriber list can only be viewed by the administrator.
Note: Your subscription to the newsletter cannot be processed without your e-mail address. Providing this information is voluntary, and you can unsubscribe from the newsletter at any time.