Credible LLMs: The Social Epistemology of AI with Prof. Jörg Löschke
Large Language Models (LLMs) are increasingly being recognized as sources of knowledge and even as advisors in various contexts. In other words, they are becoming integral participants in testimonial exchanges and, as such, are now embedded within our social epistemological practices. This development prompts the question of what standards should be applied in evaluating these models. In this presentation, I argue that LLMs should be assessed based on their credibility, rather than their trustworthiness, and I will outline what it means for an LLM to be considered credible.
Jörg Löschke is a professor of practical philosophy at the University of Stuttgart. His research focuses on personal relationships, normative ethics, axiology, and the ethics of AI.
You can view the talk via the link below.
Slides: 2025_11_06_Credible LLMs
Well, thank you very much for the opportunity to present some ongoing research, and thank you for the very kind introduction. It is nice to have someone else introduce oneself, because life always sounds more spectacular when somebody reports on it than it feels while one is living it. My talk today is about an ongoing research project that I am currently pursuing together with a postdoc; it combines questions in social epistemology with questions in the ethics of AI. I am not sure whether everybody is familiar with what social epistemology is, so I will explain it very briefly in a moment. First, let me quickly outline the programme. As the title of my talk suggests, I will first talk about credible LLMs, credible large language models. You might think: why should we actually care about credible LLMs? There are already other concepts, such as reliable or trustworthy AI, so why should we also introduce the notion of credibility here? This is what I will be concerned with first. Then I will share some thoughts about what this means in practice: how should we develop LLMs in order to make them more credible? These are just some reflections from a philosopher's armchair, so if people who actually work on developing LLMs have some insights, I will be very interested to hear them. And then I will also present, very briefly, some social epistemological implications: how are we supposed to deal with, or relate to, such credible LLMs, if they exist? One caveat: this is all very much work in progress, ongoing research.
So I might not have fully spelled-out views on each of these points. I will use this opportunity to listen to what the audience thinks and to see what questions come up, in order to see in which directions our research might proceed in the future. I talk about the social epistemology of AI, and I am not sure whether everybody is familiar with the notion of social epistemology, so perhaps it is good to say a little bit about it here. Very generally speaking, epistemology in philosophy is the study of the nature, the sources, and perhaps also the pursuit of knowledge. In epistemology, we are concerned with questions of how we actually come to know things. Traditionally, this has been a very individualistic approach: people discuss, for example, how sensory experience contributes to our knowledge, or when I know that there is actually a tree in front of me, et cetera. Only quite recently (this is really a very young research field, even though these questions seem obviously important) have philosophers developed the point that when we pursue true beliefs or knowledge, we usually have to rely on others. We simply do not gain true beliefs or knowledge entirely on our own. When we form true beliefs, pursue the truth, or try to gain knowledge, we need the help of other people. Consider some questions: What is the best strategy to mitigate the consequences of climate change? Are vaccines actually safe, or is this specific vaccine actually safe? Is homeschooling harming my child, or does it maybe do more good than harm? These are all questions that I might ask myself, but.
I have to be honest: I cannot speak for you in the audience, but speaking for myself, I would not be able to answer any of these questions just by reflecting on them. Rather, I need experts and expert testimony to help me answer them. And basically, as scientists, this is just what we do: we rely on the work of others, engage with it, use it as a starting point for further research, et cetera. Most of the areas in which we try to gain knowledge or pursue the truth are areas in which we simply have to rely on other people, and this very broad field, in which we discuss how we actually gain knowledge or pursue the truth with the help of other people, is the field of social epistemology. Now, it seems to me (and of course this is controversial, but everything is controversial in philosophy) that the way large language models are currently used, or increasingly used, is similar to how we use experts: not only as sources of knowledge, but also as advisors. In medicine, for example, you might have an AI system that tells you whether a tumour is malignant. Or, in financial matters, you might have robo-advisors who tell you to put your money into this ETF rather than that ETF. Very generally speaking, the way we use large language models currently, and I think increasingly will in the future, is as experts and as advisors: not merely as sources of knowledge or information, but as advisors who might give us
certain reasons: either practical reasons, when we ask them what we ought to do, or epistemic reasons that are relevant for us when we try to figure out what we ought to think or what we ought to do. As I just said, much of what is discussed in social epistemology concerns how we are supposed to deal with testimony from experts: how we should treat what other people, who may be more knowledgeable in some area, tell us, because we usually need the help of experts in areas where we do not know everything ourselves. If we treat large language models as experts, then it seems they are part of our social epistemological practices. And if that is the case, then you might think (and I think we ought to think) that these LLMs must meet the standards of those practices. Hence, given that they are part of our social epistemological practices, we ought to evaluate large language models at least sufficiently similarly to how we evaluate other parties in our social epistemological world. So that is the very general idea. Now, as I already said (and I will turn to credibility in a second), you might think: we just want our LLMs to be reliable. If the LLM tells me whether this is malignant cancer or not, or tells me that I ought to invest my money in this ETF rather than another one, then all I need is for this LLM or AI system to be reliable. And in a sense, of course (I will come to this shortly), we do want our AI systems, or our LLMs, to be reliable; I will use these notions interchangeably for the purposes of this talk.
We do, of course, want them to be reliable, but reliability is not enough, and here is how philosophers discuss these things. Reliability just means that something is part of a causal chain of events. You see here a quote from Richard Holton, who worked on reliance: when one relies on something to happen, one works the supposition that it will happen into one's plans. But reliability is a very weak notion: you can strategically rely on something even though you do not trust it very much. There is a well-known example. You are on top of a mountain and need to get down before it gets dark; otherwise you will freeze to death. You only have one rope, and it looks a little old, so you are not very certain whether this rope will be capable of holding your weight. However, this rope is the only way for you to get down in time and avoid death by freezing. Therefore, you have no other choice than to rely on this rope to carry your weight. At a very basic level, this is what reliability boils down to: you already have a plan (in this case, you need to get off the mountain), this thing may play a causal role in the success of your plan, and therefore you rely on it. However, and this is important, especially when we think of LLMs as possible advisors: if you give advice to someone, or receive advice from someone, this piece of advice is treated not only as a causal factor that might play some role in your plans or in the ends you aim to achieve; rather, we treat advice itself as a reason-giving factor.
And this is not the case with mere reliance. The rope that you depend on, and that you have to rely on to get down off the mountain, does not give you any reasons. You already have reasons to get off the mountain: namely, you do not want to freeze to death. So mere reliance does not have this dimension of being a reason-giving factor, but advice aims to give you reasons. If you ask your best friend (you can think of any example: should I take this job, should I accept this offer or not?) and your best friend tells you, yes, you should, then the fact that your best friend has given you this piece of advice is a reason that you factor into your practical deliberation. Advice has this dimension of being a reason-giving factor, and hence mere reliability is not enough to assess LLMs, if we treat them as advisors and hence as part of our social epistemological world. If you think about how we usually assess advisors, there are two ways to do so. We can either think: I want advisors that are trustworthy. Or we can think: I want advisors that are credible. And, as I will argue now, it makes an important difference whether you want someone to be a trustworthy advisor or a credible advisor. The hypothesis that we work out in the research project is that we should not only strive to make AI trustworthy. Of course, we should also do that; there is a large literature on trustworthy AI. But we should also make AI systems credible. And as far as I can see, at least in the philosophical literature, it is not really discussed what it specifically means for an AI system or an LLM to be credible, and why we ought to be concerned with credibility rather than trustworthiness.
So, in order to make this point, let me say what trustworthiness boils down to; this is why I said reliability is also important, but not enough. There is a large literature on trust, and people usually understand trustworthiness as reliability plus X: it is reliability, but some other factor must also be present. Usually, and this is what the majority in the literature thinks, trustworthiness is reliability plus goodwill. Here again is something on reliance; Sandy Goldberg writes: where X is a person, artifact, or natural process, and φ is an action, behavior, or process, to rely on X to φ is to act on the supposition that X will φ. So if you rely on something or someone, you act under the supposition that they will do what you rely on them to do. But this, of course, is not enough to say that you trust someone. Because if I am a trickster, I might rely on you being a little bit naive in order to get all your money; but clearly, I do not trust you to be naive. I merely rely on you in order to achieve my goal of getting your money. So what reliability plus X means here, if we understand this X as goodwill, is the following: if you trust someone to do something (you can think of anything, say, you trust someone to water your plants while you are on vacation), you first work under the supposition that they will in fact water your plants while you are away. But to say that you trust them to do so means that you also work under the supposition that they will water your plants from a specific motivation.
Namely, from the motivation that they care about you. They do what you rely on them to do, but they do it from a specific motivation: for example, they take your wellbeing into account, rather than merely being afraid that you will sue them if they do not water your plants. So trust means you work under the supposition that another person φs from a certain motivation, and this motivation is directed towards you: it expresses some kind of goodwill towards you. If we take this together and think that we want some of our advisors to be trustworthy, then this means: if I trust X to give good advice, to be a good advisor, this implies the belief that X will in fact give good advice (whatever that means; that, of course, is a further philosophical question). But the important point here is that X will give the good advice for specific reasons: in this case, caring about the trusting agent for their own sake. And you might think that, in order to spell out what it means to give good advice, you also have to factor in that it is given from this specific motivation, namely that they care about us. Good advice would then be advice that expresses their caring about us, advice that has to do with our wellbeing, because caring about someone, of course, means caring about their wellbeing. So trusting someone, or trusting an advisor, has an epistemic as well as what in philosophy we call an aretaic dimension. The epistemic dimension means that they will in fact give good advice, because they are experts in some matter. But they also have certain, let us say, virtues: when they give the advice, they do not give it merely for strategic reasons.
They do not give the advice merely to gain fame and money, but rather because they have this certain orientation towards you. Now, there is a large literature on trustworthy AI, and there is one problem that many philosophers bring up. If we take this account of trust, acting from a certain motivation, as paradigmatic for trust or trustworthiness, then it seems to follow that AI cannot be trustworthy in this sense, because AI systems do not care about anything. They do not have any motivations in a strong sense of the word, and they therefore lack goodwill. If you think trust implies a certain motivation, namely a motivation of goodwill, and if AI systems do not have any motivations at all and cannot care about anything, then they cannot have this specific motivation, and therefore they cannot be trustworthy.
So there is the question whether it makes sense to talk about trustworthy AI in the first place.
I personally think, yes, it makes sense to talk about trustworthy AI. There is a point in talking about trustworthy AI simply because, as I already said, merely relying on these systems is not enough, given that we treat them as experts capable of giving us reasons in our practical deliberation. Also, AI and LLMs are becoming more and more part of our social world, so we want some way of assessing them, and trustworthiness, of course, is a very standard and very strong way of assessing advisors. So I think it makes sense to also think about what trustworthiness means in the case of an LLM, or AI more generally. And here we already have one point that I think you can see in several other areas as well: the rise of artificial intelligence challenges the concepts that we use to structure our social world. We have certain concepts (you can think of many: trustworthiness, maybe friendship, love, and so on), and these concepts have a certain point, namely to tell us something about how we as humans relate to one another in our social world. Now we have AI systems that are completely new actors in our social world, and therefore we have to ask: is it possible to apply the concepts that we use to structure our social world to AI systems? And if so, how? What does it mean for these concepts? Do we have to change them somehow? Do we have to engineer them, et cetera? So here, in the case of trusting AI or trustworthy AI, we have another example where it makes good sense to ask what trustworthiness might mean in the case of an AI system. And one way philosophers spell this out is to say, as I already mentioned, that in the case of AI systems, mere reliability is not enough.
What we want from AI systems is that they align with certain ethical values that we humans consider to be important, including when we think about how we interact with other humans. The point is basically that we want AI to serve human goals; there is, of course, again the question of which ethical values we are talking about here, but we want AI to contribute to human wellbeing, et cetera. And you might think that this is how we can spell out what trustworthiness means in the case of AI, how we can spell out this aretaic, or perhaps evaluative, dimension of trustworthiness. Because in the case of AI, even though these systems might not have any motivations, and even though they might lack goodwill for principled reasons, they can still be aligned with ethical values. And if they are aligned with ethical values, then you might think (and I think it makes sense to think) of them as being trustworthy in these cases. So here we have another example where artificial intelligence means that we have to change a concept such as trustworthiness in a specific way: in a way that makes sense when applied to AI systems, but that is still close enough to our common, everyday understanding for us to think it is roughly the same thing. Because if we talk about trustworthy AI in a way that is completely out of line with how we think about trustworthiness in the case of human-human interaction, then of course it would not make any sense to apply the concept to AI systems at all. Okay, so the question now is: why is trustworthy AI not enough? Let me mention two reasons for thinking that just thinking about trustworthy LLMs is not enough. First, trustworthiness might just be too encompassing.
You might think of certain ethical values: people mention values such as non-discrimination, non-harm principles, fairness, meaningful human control, et cetera. It is an open question whether all these ethical values that make AI trustworthy are also relevant for every specific AI system or every specific LLM. If I have an LLM that gives me financial advice, it is not clear whether all the ethical values with which AI systems must be aligned in order to count as trustworthy are also relevant for this specific LLM, or for the specific question on which I need advice. When we talk about trustworthy AI, we seem to be talking about requirements for AI systems in general, about how we want AI as such to be developed so that it is aligned with human values, rather than about the individual expert LLMs we might think about: be they medical advisors, financial advisors, or even an LLM that gives you some kind of life advice (I had this huge fight with my spouse; should I apologize to them?). I do not know whether you would find this attractive, but perhaps you want an advisor LLM in such cases as well; it is obviously a further question which ethical values are relevant there. But more importantly, I think that advice from a trustworthy expert differs from advice from a credible expert advisor. When we talk about trustworthy experts and trustworthy advice, we talk about advice that is concerned with our wellbeing, and this might also mean that if you ask a trustworthy friend what you ought to do, your friend is supposed to take your whole life into account. It is a more comprehensive outlook: they will be concerned with how your life is going.
They will be concerned with, let us say, all-things-considered judgments about what you ought to do. So if you ask a trustworthy advisor, or ask for trustworthy advice, it is basically advice that is concerned with you and your wellbeing, but it is also advice that might serve as a conclusive practical reason. This means that the advice you get from your best friend, given that they care about you and your wellbeing, could be a conclusive reason on which you act: simply because your friend has advised you to do this or that, this alone might be a sufficient reason for you to do that thing. But this is different from the advice we expect from what we might call credible expert advisors. When we expect credible advice, or advice from a credible expert advisor, we do not want advice from this comprehensive point of view; rather, we want advice that can serve as a piece of information that then becomes part of our practical deliberation. So basically, and this is the difference, the advice from a credible expert advisor does not serve as a conclusive practical reason, but rather as an important yet contributory epistemic reason when you try to figure out what to do. This might sound a little abstract, so think, for example, about the pandemic, when you had expert advisors: somebody who could tell you a lot about, say, virology. These people were not supposed to tell you what the politicians ought to do, all things considered.
All they could say is: given my field of knowledge, my field of expertise, I can tell you that if we do not go into a lockdown, then the infection rates will probably rise by, say, 50%, and this will mean X more deaths in intensive care units, something like that. This is the area in which they can give you advice, but that does not mean it is a conclusive reason when you think about what you ought to do, because a lot of other considerations come into play. The mere fact that this person has given you this piece of advice does not mean that we ought to go into lockdown, because some other expert may tell you: if we go into a lockdown now, then the children in school will be disadvantaged for life, for these and those reasons, et cetera. And the point is that someone has to take a comprehensive view of what ought to be done, but none of these experts can tell you, as a politician, what you ought to do all things considered. They can only give you epistemic reasons that you have to factor into your deliberation when you think about what you ought to do. So credible advice differs from trustworthy advice, and this is why we also ought to be concerned with credible LLMs: we might not want LLMs to tell us what we ought to do all things considered, but rather to provide us with epistemic reasons that factor into our deliberation when we think about what we ought to do.
And the whole idea of meaningful human control, I think, points to exactly this dimension of LLMs: they ought not to give us conclusive reasons, and the LLM ought not to decide what we do; rather, they ought to give us good epistemic reasons that help us figure out what we ought to do, but the figuring out is something that we have to do ourselves. So what does credibility mean? Interestingly, credibility is a concept that not many philosophers work on. I think it also combines epistemic and aretaic dimensions. The epistemic dimension means that the credible expert must be knowledgeable in their field: they have to know what they are talking about; otherwise, they are not credible. But there is also an aretaic dimension: they must be truthful (in German, wahrhaftig); they must be oriented towards the truth. If you have an expert who is very knowledgeable in their field, but you already know that they will give you advice only because they, say, want to advance their career or whatever, then maybe they give you true advice, but only for strategic reasons; and whether this person is credible or not is a completely different question. So in order to be a credible expert, not only must we know what we are talking about, but within our expertise we must also be oriented towards the truth and not towards some other goals that we aim to achieve. This is how credibility and trustworthiness differ from one another: if you are trustworthy, you are oriented towards the good of another person; if you are credible, you are oriented towards the truth.
And this, as I already mentioned, is where trustworthy advice and credible advice can point in different directions. Trustworthy advice has to take everything into account that is relevant for the good or the wellbeing of the advisee. Credible advice provides expertise in a specific field that is relevant for the decision what to do. Trustworthy advice aims to provide potentially conclusive reasons for action, while credible advice aims to provide reasons that are relevant for practical, or perhaps also theoretical, deliberation, but that are not conclusive. And this, I think, is why we also need a notion of credible AI, and why neither reliable AI nor trustworthy AI is enough. Because when turning to an AI system or an LLM for advice, we do not, or at least not necessarily (and perhaps ought not), want them to provide information that is solely oriented toward our own good. Nor do we necessarily want them to take into account all the ethical values that make an AI system trustworthy. If we turn to an AI or an LLM for advice, we might, in certain cases, want credible advice that provides important information in a domain that is relevant for our practical deliberation when we try to figure out what we want to do. I am not saying that we should throw the notion of trustworthy AI out of the window; it might very well be relevant to think about trustworthy AI. All we are saying is that we should also reflect a little more on credible AI, on what it might mean for a system, for an LLM, to be credible, because we might not always be interested in trustworthy advice or trustworthy AI; in certain cases, we might be interested in credible LLMs.
The question now is: what does it mean for LLMs to be credible, and how can we actually develop them in ways that make them credible? The epistemic dimension might be relatively obvious: the LLM must reliably provide correct information. But the question is: how can we capture the aretaic dimension? What does it mean for the LLM to be truthful? Again, the LLM by itself cannot be oriented towards the truth, simply because LLMs do not have states of mind in which they could be oriented towards anything. So I think this again requires an alternative interpretation of the aretaic dimension of credibility, one that is close enough to our ordinary concept to make the concept of credibility applicable. Here, we do not have a fully worked-out theory yet; this is, as I said, very much work in progress. I can only do a little hand-waving and say: this is how we expect credible human experts to behave or to be, and therefore this is also something that we could incorporate into an LLM. Perhaps this is already what people do; perhaps not. I would be interested in hearing from people who actually know how LLMs are developed. The first point, I think, is that for an expert to be credible, you do not want them to nudge you towards anything. You want them to provide the information as neutrally as possible, so that you are capable of taking this piece of information into account in your own deliberation, but in a way that does not push you towards any specific solution. If a human expert nudges you, say by presenting information in a heavily emotional tone, knowing that the other person will then treat this piece of advice as a conclusive reason
just because of how it was presented, then that expert is not respecting the other person as a deliberator, and this might mean the expert is not credible, or at least that their credibility suffers. This also means that, ideally, an LLM, to be credible, must present advice in a way that does not compromise its role as an expert advisor: the information and advice must be presented so that it does not nudge the advisee into any specific course of action. What exactly that means is, of course, an open question, but the idea is that the information must be presented so that the other person can take it into account in their deliberation without being pushed towards any specific course of action. Second, I think a credible advisor does not only tell you what you want to hear. Maybe this is one reason why there are no credible advisors around Donald Trump, because all they do is tell him what he wants to hear. A credible advisor also needs to be able to speak truth to power. If you just tell someone what they already want to hear, then you as an advisor are, I would think, not credible. Again, what this means in practice is a further question that we are working on, but it means that the LLM must at least be capable of giving you advice that does not merely align with the preferences you already have. Maybe there are some cases in which we want exactly that, advice completely in line with our preferences. But usually, when we turn to experts in other fields (take the pandemic again), we did not want experts who just told us what we wanted to hear, namely that it would not be so bad. We wanted advice oriented towards truth, not towards what we want to hear.
And finally, this is also an important aspect of credible experts: some people are experts in one field but have no expertise in another, and nevertheless use their status as experts in field A to appear as experts in field B. If I, as a philosopher, go on a talk show and pretend that I am also very knowledgeable about, say, politics, acting as if I know everything about democracy and how we should proceed there just because I am a professor of practical philosophy, then I am compromising my credibility: I am claiming expertise in a field where I am a layperson like everybody else. So in order to be credible experts, we also need to delineate the boundaries and limits of our expertise; we compromise our credibility if we feign expertise. The same must hold for LLMs: we must design them so that their fields of expertise are clearly marked and gaps in reliable information are explicitly addressed. How we can do this is, of course, an open question that requires more work and that I cannot answer here. These are at least three relevant aspects for making an LLM credible, but I am sure there are many more relevant considerations. Let me finish very briefly with another point that I think is relevant for thinking about credible LLMs, because in social epistemology there is a further debate about how we ought to treat expert testimony. Suppose that we have succeeded in developing credible AI systems.
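The third criterion, clearly marking the limits of a system's expertise, can be illustrated with a minimal sketch. Everything here is hypothetical: the declared domain list, the toy `classify_domain` helper, and the wrapper function are my own illustrative assumptions, not a description of how any deployed LLM actually works.

```python
# Hypothetical sketch: gate an advisory system behind an explicit list of
# domains it is declared competent in, and flag everything else as
# out-of-scope instead of answering with feigned expertise.

DECLARED_DOMAINS = {"epidemiology", "tax law"}  # illustrative examples

def classify_domain(question: str) -> str:
    """Toy stand-in for a real domain classifier (itself a hard problem)."""
    keywords = {
        "vaccine": "epidemiology",
        "deduction": "tax law",
        "election": "politics",
    }
    for word, domain in keywords.items():
        if word in question.lower():
            return domain
    return "unknown"

def credible_answer(question: str) -> str:
    domain = classify_domain(question)
    if domain in DECLARED_DOMAINS:
        return f"[{domain}] <expert answer would go here>"
    # Delineate the limits of expertise rather than pretending competence.
    return (f"This falls outside my declared expertise "
            f"(detected domain: {domain}); treat any answer as a layperson's.")

print(credible_answer("Is this vaccine safe?"))
print(credible_answer("Who should win the election?"))
```

The design choice mirrors the human case in the talk: the system's credibility is protected not by refusing to answer, but by explicitly labelling where its expertise ends.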
How should we actually treat that expert advice? Here again, I think, is a field in which developments in artificial intelligence can perhaps tell us something about our social world and how we relate to other humans. Because in the debate about expert testimony in social epistemology, there is this large question: if an expert tells you something, what should you do with that piece of information? Quite a few people hold a view called preemptionism. Preemptionism says that expert testimony that p (for some proposition p) ought to be treated as a preemptive reason to believe p, so that the testimony replaces, rather than merely adds to, a person's epistemic reasons for or against p. If you think about it, this is a quite radical claim, because it tells you that if an expert tells you that something is the case, then you ought to believe it, and you ought to believe it solely for the reason that the expert has told you so. If the expert tells you p and you have already thought about the matter yourself and also arrived at the conclusion that p, then you ought to set aside your previous deliberation: you should throw your own reasoning out of the window, so to speak, and believe p simply because the expert has told you that p, not because you have your own reasons for believing p. It is a quite radical claim, and there are a few arguments in its favour; if you are interested, I am happy to present them in the Q&A. The alternative sort of account is a total evidence account, which says that expert testimony ought to be treated as a contribution to your reasons for believing p, which just means the following.
If an expert tells you p, you ought to take the expert's testimony into account when you think about whether or not p, but you should also consider the other reasons you have for or against believing p. This means the expert can strengthen your belief that p, if the testimony gives you additional reasons for believing p. It could also mean that you change your mind: before the expert told you p, you believed not-p, and given the testimony, you now acquire the belief that p. But the total evidence account also means that, even though the expert tells you something, it can still be rational to believe something else, because you judge that your reasons for believing not-p are better than the expert's reasons for believing p. So total evidence accounts leave open the possibility that your view differs from the expert's, even though the expert knows much more about the matter than you do. And now I am coming to the end. I think this has an important implication: quite regardless of the merits of preemptionism in the case of human experts, and there is a lot one can say about preemptionism there, I believe that AI expertise requires a total evidence view. As already said, we want credible AI systems to give us epistemic reasons when we think about what we ought to do, but we do not want the AI's expertise to settle the matter. The only way to secure meaningful human control is to treat AI systems as experts under the banner of a total evidence view: whatever the AI system tells us is an important, perhaps very weighty, piece of information, but not a piece of information that already settles the matter. And in particular, we ought not to believe what the AI system tells us simply for the reason that the AI system has told us so.
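The contrast between preemptionism and the total evidence view can be made vivid with a toy numerical model. This is my own illustrative formalisation, not part of the talk's framework: "preemptive" updating simply overwrites the hearer's credence with the expert's, while "total evidence" updating treats the testimony as one more weighty piece of evidence, here via a log-odds average in which the expert counts `expert_weight` times (the weighting scheme and the specific numbers are assumptions for illustration).

```python
import math

def preemptive_update(prior: float, expert_credence: float) -> float:
    """Preemptionism (toy version): the expert's testimony replaces,
    rather than adds to, the hearer's own reasons."""
    return expert_credence

def total_evidence_update(prior: float, expert_credence: float,
                          expert_weight: float = 2.0) -> float:
    """Total evidence (toy version): combine the hearer's prior with the
    expert's credence as a weighty but non-conclusive reason, by averaging
    in log-odds space with the expert counted expert_weight times."""
    def logit(p: float) -> float:
        return math.log(p / (1 - p))
    combined = (logit(prior) + expert_weight * logit(expert_credence)) / (1 + expert_weight)
    return 1 / (1 + math.exp(-combined))  # back from log-odds to probability

# Hearer starts out fairly confident that not-p; the expert asserts p.
prior, expert = 0.2, 0.9

print(preemptive_update(prior, expert))        # hearer's own reasons vanish
print(total_evidence_update(prior, expert))    # pulled towards p, not all the way
```

The point the numbers make: under the total evidence rule the hearer's final credence lands strictly between their prior and the expert's credence, so the expert moves the hearer without settling the matter, which is exactly the "weighty but merely additional reason" structure described above.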
So perhaps, and this is now very hand-wavy, this also provides further requirements for the design and development of LLMs that serve the specific function of advisors. The question would be: how can we develop these LLMs in ways that ensure their expert testimony is not treated as a preemptive reason for believing something, but rather as a very weighty yet merely additional reason when we think about what we want to do? Sorry, I think I am a little over time. If you have further questions that are not addressed in the Q&A, please feel free to drop me an email. Let me summarize very briefly what I have said. LLMs that act as advisors are part of our social world and our social epistemological practices, or at least should be regarded as such. This means we need not only trustworthy AI, because in our social world we want not only trustworthy advisors but also credible advisors, and this has implications for how LLMs ought to be designed. They should be treated not only as reliable sources of information, but also as credible providers of epistemic reasons, and not of conclusive practical reasons. With this, I finish. Thank you very much for your attention, and I look forward to your questions.