ELI5: Why don't ChatGPT and other LLMs just say they don't know the answer to a question?

I noticed that when I ask ChatGPT something, especially in math, it just makes shit up.

Instead of just saying it's not sure, it makes up formulas and feeds you the wrong answer.

Comments

  1. thebruns

    An LLM doesn't know anything; it's essentially an upgraded autocorrect.

    It was not trained on people saying “I don’t know” 

  2. Taban85

    ChatGPT doesn't know if what it's telling you is correct. It's basically a really fancy autocomplete. So when it's lying to you it doesn't know it's lying; it's just grabbing information from what it's been trained on and regurgitating it.

  3. Omnitographer

    Because they don't "know" anything. When it comes down to it, all LLMs are extremely sophisticated auto-complete tools that use mathematics to predict what words should come after your prompt. Every time you have a back and forth with an LLM, it is reprocessing the entire conversation so far and predicting what the next words should be. To know that it doesn't know something would require it to actually understand, which it doesn't.

    Sometimes the math may lead to it saying it doesn’t know about something, like asking about made-up nonsense, but only because other examples of made up nonsense in human writing and knowledge would have also resulted in such a response, not because it knows the nonsense is made up.
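
    To make the "reprocessing the conversation and predicting the next words" loop concrete, here is a minimal sketch; `predict_next_token` is a hypothetical stand-in for the real model, which would return a probability distribution over its whole vocabulary.

    ```python
    # Minimal sketch of the autoregressive loop an LLM runs; the model itself is faked.
    def predict_next_token(tokens):
        # A real model scores every token in its vocabulary given the context;
        # this stand-in just returns a fixed word so the loop is runnable.
        return "word"

    def generate(prompt_tokens, max_new_tokens=5):
        tokens = list(prompt_tokens)
        for _ in range(max_new_tokens):
            # The entire conversation so far is fed back in on every step.
            tokens.append(predict_next_token(tokens))
        return tokens

    print(generate(["Why", "is", "the", "sky", "blue", "?"]))
    ```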

  4. LOSTandCONFUSEDinMAY

    Because it has no idea if it knows the correct answer or not. It has no concept of truth. It just makes up a conversation that ‘feels’ similar to the things it was trained on.

  5. ekulzards

    ChatGPT doesn’t say it doesn’t know the answer to a question because I was living in Dallas and flying American a lot now and then from Exchange Place into Manhattan and then from Exchange Place into Manhattan.

    Start typing ‘ChatGPT doesn’t say it doesn’t know the answer to a question because’ and then just click the first suggested word on your keyboard continually until you decide to stop.

    That’s ChatGPT. But it uses the entire internet instead of just your phone’s keyboard.

  6. The_Nerdy_Ninja

    LLMs aren’t “sure” about anything, because they cannot think. They are not alive, they don’t actually evaluate anything, they are simply really really convincing at stringing words together based on a large data set. So that’s what they do. They have no ability to actually think logically.

  7. Kaimito1

    Because it "does not know the answer". It does not know if an answer is correct or not, only the most probable answer based on the content it's been given. That's why it's not good for "new ideas".

    Imagine it knows tons of stories, thinks about each of those stories to get info on the question you asked, and decides "yes, this is the most likely answer". Even if that answer is wrong.

    Some of the "stories" it knows are factually wrong, but it believes them to be true, because that's the story it was told.

  8. jpers36

    How many pages on the Internet are just people admitting they don’t know things?

    On the other hand, how many pages on the Internet are people explaining something? And how many pages on the Internet are people pretending to know something?

    An LLM is going to output based on the form of its input. If its input doesn’t contain a certain quantity of some sort of response, that sort of response is not going to be well-represented in its output. So an LLM trained on the Internet, for example, will not have admissions of ignorance well-represented in its responses.

  9. helican

    Because LLMs work by basically guessing what an answer could look like. Being truthful is not part of the equation. The result is a response that is close to how a real human would answer, but the content may be completely made up.

  10. Cent1234

    Their job is to respond to your input in an understandable manner, not to find correct answers.

    That they often will find reasonably correct answers to certain questions is a side effect.

  11. Seeggul

    LLMs essentially work like very sophisticated versions of auto-complete when you’re texting. They read in the text you’ve provided and try to generate words that seem like they would fit, based on the data they’ve been fed. They can’t think critically about whether what you’ve said makes sense or not, nor can they evaluate the “truth” of the responses they generate.

  12. nusensei

    The first problem is that it doesn’t know that it doesn’t know.

    The second, and probably the bigger problem, is that it is specifically coded to provide a response based on what it has been trained on. It isn’t trained to provide an accurate answer. It is trained to provide an answer that resembles an accurate answer. It doesn’t possess the ability to verify that it is actually accurate.

    Thus, if you ask it to generate a list of sources for information – at least in the older models – it will generate a correctly formatted bibliography – but the sources are all fake. They just look like real sources with real titles, but they are fake. Same with legal documents referencing cases that don’t exist.

    Finally, users actually want answers, even if they are not fully accurate. It actually becomes a functional problem if the LLM continually has to say “I don’t know”. If the LLM is tweaked so that it can say that, a lot of prompts will return that response as default, which will lead to frustration and lessen its usage.

  13. Crede777

    Actual answer: Outside of explicit parameters set by the engineers developing the AI model (for instance, requesting medical advice and the model saying "I am not qualified to respond because I am AI and not a trained medical professional"), the AI model usually cannot verify the truthfulness of its own response. So it doesn't know that it is lying, or that what it is making up makes no sense.

    Funny answer:  We want AI to be more humanlike right?  What’s more human than just making something up instead of admitting you don’t know the answer?

  14. Maleficent-Cow-2341

    It doesn't know that it doesn't know the answer. To oversimplify, it's just picking words based on probabilities in its database. It has no sense of context, of what it's actually saying, or of whether it makes sense on a deeper level; all it knows is that the combination of words it produces is one that matches the dataset according to the selected criteria.

    If that's what the dataset and specific LLM result in, there is no clear-cut difference between a table of values corresponding to "1+1=2" and one corresponding to "1+1=4" that can be exploited to determine which is correct; you'd need to check it completely independently with a dedicated program. That's easy for a simple math question, but as you can imagine, more abstract claims aren't nearly as simple.
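
    As a concrete version of the "check it completely independently with a dedicated program" point, here is a toy checker for arithmetic claims; the claim format and the `check_claim` helper are made up for illustration.

    ```python
    # Toy "dedicated program": verify a claimed arithmetic equation independently,
    # instead of trusting the generated text.
    import ast
    import operator

    OPS = {ast.Add: operator.add, ast.Sub: operator.sub, ast.Mult: operator.mul}

    def safe_eval(expr):
        """Evaluate a tiny arithmetic expression without using eval()."""
        def walk(node):
            if isinstance(node, ast.BinOp) and type(node.op) in OPS:
                return OPS[type(node.op)](walk(node.left), walk(node.right))
            if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
                return node.value
            raise ValueError("unsupported expression")
        return walk(ast.parse(expr, mode="eval").body)

    def check_claim(claim):              # e.g. "1+1=4", as a model might produce
        lhs, rhs = claim.split("=")
        return safe_eval(lhs) == safe_eval(rhs)

    print(check_claim("1+1=2"))  # True
    print(check_claim("1+1=4"))  # False
    ```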

  15. diagrammatiks

    An LLM has no idea that it doesn't know the answer to a question. It can only give you the most likely response it thinks is right based on the neural net.

  16. Usual_Judge_7689

    AI isn’t really “intelligent,” as it were. It is good at completing patterns.
    It works by looking at what it was given, looking at the data it has scraped, and then guessing what the next word/phrase is. There’s no test for truth and no calculations other than that probability. LLMs will never know whether they’re right or wrong from your perspective because there is no considering your perspective.

  17. Driesens

    A lot of good answers here already, but I’d like to suggest my theory: saying “I don’t know” kills the conversation.

    These LLMs are AIs trained on conversation data, and the parameters that get established by the creators likely have something like “likelihood the conversation continues”. If a chatbot just says “Goodnight”, it’s a pretty garbage chatbot. So instead, the creators establish the requirement that conversations continue whenever possible, leading to the AI selecting the option that most often continues the dialogue. It doesn’t care if it’s wrong, so long as it gets some kind of answer to allow the conversation to keep moving.

  18. PM_ME_BOYSHORTS

    Because everything it says is made up. It has no concept of right or wrong.

    All AI is doing is simulating natural language. If the content upon which it was trained is accurate, it will also be accurate. If it’s not, it won’t. But either way, it won’t care.

  19. gulpyblinkeyes

    Obviously there's a lot of tech and a lot of nuance going into why LLMs do the things they do, but generally speaking it's good to remember that while "AI" is the popular buzzword associated with LLMs, by many human standards they don't really possess "intelligence." When presented with a math problem, most LLMs aren't calculating anything and don't really have any awareness of what math even is. They are just taking in what you typed, trying to find patterns that match in the language that has been fed to them, and using those patterns to respond with something that seems to match up with what you typed.

    The LLM processes patterns but doesn't "know" or not "know" the answer to any question in the human sense, math or otherwise, so it is incapable of being "not sure" in its answer.

  20. HeroBrine0907

    Because it's not alive. Its job is to string together words into a human-like sentence and mimic conversations. It's an LLM. It does not 'understand'. I can't define this word exactly, but once you observe ChatGPT vs any living thing, you'll get it. Best way to describe it is: a living creature does not need to have an idea reinforced through hundreds of experiences, even very simple organisms.

  21. CyberTacoX

    In the settings for ChatGPT, you can put directions to start every new conversation with. I included “If you don’t know something, NEVER make something up, simply state that you don’t know.”

    It’s not perfect, but it seems to help a lot.
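
    For anyone using the API rather than the ChatGPT settings page, the rough equivalent of that standing instruction is a system message. A minimal sketch with the OpenAI Python client follows; the model name and wording are placeholders, and, as noted above, this reduces but does not eliminate made-up answers.

    ```python
    # Sketch: sending a standing "say you don't know" instruction as a system message.
    # Requires the `openai` package and an API key; the model name is a placeholder.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "If you don't know something, never make something up; say you don't know."},
            {"role": "user", "content": "What did I have for breakfast today?"},
        ],
    )
    print(response.choices[0].message.content)
    ```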

  22. smftexas86

    All ChatGPT is, is a very clear-speaking answer machine with access to the internet's data. It doesn't "make stuff up"; it uses what resources it has available to come up with a solution to your question.

    Most of the time, that answer is correct, because what it found was correct, but since it uses things like reddit, and other information that is not always correct or accurate, it will not always be accurate.

  23. a8bmiles

    Ask it how many Rs there are in Strawberry, and then keep asking if it’s sure.

    It’s always making shit up.
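
    The check itself is one line of ordinary code, which is exactly the kind of counting the model is not actually doing when it answers:

    ```python
    # The counting a language model is not actually performing when asked this.
    word = "strawberry"
    print(word.count("r"))  # 3
    ```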

  24. EggburtAlmighty

    Theory of knowledge was already a head spinning area of philosophy before AI.

    Imagine you’re walking along a hedge that’s taller than you, and you see someone’s head bobbing along beside you high above the hedge. Based on the height and motion, you think “that person must be riding a horse. I bet when we get to the end of the hedge, I will see a horse.” You reach the end of the hedge and see that the person is in fact riding on someone’s shoulders, not a horse, but there is, by coincidence, a horse in a field behind them. Your original statement was true, but the reasoning was wrong.

    AI reasoning is like that, only it can never see past the hedge, because it is stuck in its virtual world, and the real world is inaccessible to it.

    Nothing is verifiable to AI in the first degree.

  25. inlight2

    I always thought it was for customer perception. Imagine your AI sometimes says “I don’t know” when the competition doesn’t. People would just assume that your AI knows less even if the competition is just making up answers. So, better just make up answers when we don’t know.

  26. Fifteen_inches

    ChatGPT is not a thinking being, it cannot understand what you are asking it.

    What chatGPT is doing is mimicking what an answer would sound like.

  27. Fairwhetherfriend

    It’s not actually trying to answer your question, it’s just trying to generate language that sounds convincing.

    It’s like… imagine if there was an actor working on a medical show who often improvised lines. They might spend a lot of time watching other medical dramas and listening to the ways that IRL doctors talk. They’ll pick up patterns about when doctors use certain words and how they react to certain things, but they don’t understand any of it. So when they’re acting as a doctor, they’re very good at making up lines that sound (to a layman) exactly like what a doctor would say – but it’s probably wrong, or at least partly wrong, because they don’t actually understand what they’re saying. They’re just using words they’ve heard doctors use in similar situations to sound convincing.

    They might often end up using those words correctly by accident because they’re very good at recognizing the patterns of the sorts of conversations where a real doctor would say certain words. But it’s mostly just luck when that happens – it’s just as likely that they’ll use these words in incorrect contexts because the context kinda sounds similar to their untrained ear.

    The actor isn’t going to say “I don’t know” while acting because they’re not really there to actually be a doctor – they’re there to convincingly pretend. It won’t be convincing if they say “I don’t know” because a real doctor wouldn’t say that in these situations.

    ChatGPT is an actor. When you ask it a question, it performs a scene in which it is playing someone who knows the answer to your question – but it doesn’t actually know the answer. Don’t ask ChatGPT to give you technical information, just the same way you wouldn’t perform a scene with an actor in a medical drama and then use their improvised lines as actual medical advice.

    But ChatGPT is very good at pretending, and that’s still useful. If you have technical information that you need to communicate clearly and concisely, and you have trouble with wording things, an improv actor might be really good at helping you out with that. But you need to have the expertise yourself, so you can correct them when their attempts to reword your technical info make them wrong.

  28. Noctrin

    Because it’s a language model. Not a truth model — it works like this:

    Given some pattern of characters (your input) and a database of relationships (vectors showing how tokens, i.e. words, relate to each other), it calculates the distance to related tokens given the tokens provided. Based on the resulting distance matrix, it picks one of the tokens with the lowest distance, using some fuzzing factor. This picks the next token in the sequence, i.e. the first bit of your answer.

    Eli5 caveat, it uses tensors, but matrix/vectors are close enough for ELI5

    Add everything together again, and pick the next word.. etc.

    Nowhere in this computation does the engine have any idea what it’s saying. It just picks the next best word. It always picks the next best word.

    When you ask it to solve a problem, it becomes inherently complicated: it basically has to come up with a description of the problem, feed it into another model that is a problem solver, which will usually write some code in Python or something to solve your problem, then execute the code to find your solution. Things go terribly wrong in between those layers 🙂
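
    A minimal sketch of the "pick the next token using some fuzzing factor" step described above: turn scores into probabilities with a softmax and sample, with temperature playing the role of the fuzzing factor. The candidate tokens and scores are invented.

    ```python
    # Softmax-plus-temperature sampling: the "fuzzing factor" when picking the next token.
    # The candidate scores below are invented for illustration.
    import math
    import random

    def sample_next_token(scores, temperature=0.8):
        logits = [s / temperature for s in scores.values()]
        m = max(logits)
        weights = [math.exp(l - m) for l in logits]        # numerically stable softmax
        total = sum(weights)
        probs = [w / total for w in weights]
        return random.choices(list(scores), weights=probs, k=1)[0]

    scores = {"Paris": 9.1, "London": 7.4, "banana": 1.2}  # closeness turned into scores
    print(sample_next_token(scores))                       # usually "Paris", not always
    ```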

  29. waffle299

    It was the best of times….

    Did you fill the rest in? That’s what an LLM does. It has been trained to predict the next bit. That’s it.

    They don’t think. They don’t understand. They predict the next bit, based on what they’ve seen.

    So it's really hard to take something built to predict the next bit and tell it to stop.

  30. SilaSitesi

    The 500 identical replies saying “GPT is just autocomplete that predicts the next word, it doesn’t know anything, it doesn’t think anything!!!” are cool and all, but they don’t answer the question.

    Actual answer: the instruction-based training data essentially forces the model to always answer everything; it's not given a choice to "nope out" during training. Combine that with people rating the "I don't know" replies with a thumbs-down 👎, which further encourages the model (via RLHF) to make up plausible answers instead of saying it doesn't know, and you get frequent hallucination.

    Edit: Here’s a more detailed answer (buried deep in this thread at time of writing) that explains the link between RLHF and hallucinations.
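
    A toy illustration of the feedback loop described above: if raters mark "I don't know" down and confident-sounding guesses up, the reward signal points away from abstaining. The ratings are invented, and real RLHF trains a reward model from such comparisons and then optimizes the policy against it, rather than averaging votes like this.

    ```python
    # Toy version of thumbs-up/thumbs-down feedback becoming a reward signal.
    # Ratings are invented; real RLHF fits a reward model and optimizes against it.
    ratings = {
        "I don't know.":                       [-1, -1, -1, +1],  # mostly thumbs-down
        "The answer is 42 (confident guess).": [+1, +1, -1, +1],  # mostly thumbs-up
    }

    def average_reward(votes):
        return sum(votes) / len(votes)

    for reply, votes in ratings.items():
        print(f"{reply!r}: average reward {average_reward(votes):+.2f}")
    # Training nudges the model toward the higher-reward behavior,
    # even when the confident guess happens to be wrong.
    ```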

  31. PckMan

    Because it doesn't know that it doesn't know. They also wouldn't be as popular as they are if they did that. People who don't know something but can't be assed to properly look it up love the idea of getting handed ready-made answers with zero effort. If AI just repeated back to them what they already told themselves, it wouldn't have taken off in the same way.

  32. Jo_yEAh

    Does anyone read the comments before posting an almost identical response to the other top 15 comments? An upvote would suffice.

  33. poldrag

    I imagine it has to do with how it’s trained. It’s like when you teach your dog a few tricks and then ask it to do a new one with no training. It’ll just sit and give you a paw. The dog knows that in the past doing this generates a reward even if it doesn’t know what you’re asking.

    But I’ll admit I don’t know for sure. I’ve just heard about rewards vs punishment in other generative ai content

  34. IanFoxOfficial

    It’s just making words up with the highest rate of probability based on the training dataset.

    Some info is so widespread that it’s easy to have an accurate result.

    Other times the result is off, but it doesn’t know that.

    It has no concept of right answers.

  35. HankisDank

    Everyone has already brought up that ChatGPT doesn’t know anything and is just predicting likely responses. But a big factor in why chatGPT doesn’t just say “I don’t know” is that people don’t like that response.

    When they're training an LLM algorithm, they have it output responses and then a human rates how much they like each response. The "idk" answers are rated low because people don't like that response. So a wrong answer will get a higher rating, because people don't have time to actually verify it.

  36. ASpiralKnight

    The training data set doesn’t say they don’t know.

    It is very plausible that future LLMs can improve in this capacity.

  37. ohdearitsrichardiii

    It worries me how much people anthropomorphize chatgpt

  38. just_some_guy65

    I don’t know, seems legit:

    Me: What do clouds dream about?

    ChatGPT:

    Clouds might dream of drifting endlessly across cerulean skies, whispering secrets to mountaintops and watching cities bloom beneath them. They could dream of becoming rain, dancing down to kiss the earth, or of turning golden at sunset, applauded by a quiet world below.

  39. ChairmanMeow22

    In fairness to AI, this sounds a lot like what most humans do.

  40. Hectabeni

    The vast majority of LLMs are incapable of doing math. They just look up similar questions in their training data and give whatever answer looks reasonable. So simple four-function math is covered, but once you move past simple algebra, accuracy goes out the window. You are far better off just using Wolfram Alpha for any math questions you have.
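
    In the same spirit as the Wolfram Alpha suggestion, the reliable move is to hand the math to a dedicated solver and use the LLM only for wording. A minimal sketch with the third-party `sympy` package; the "claimed" roots are an invented example of what a chatbot might assert.

    ```python
    # Check a model's algebra with a real symbolic solver instead of trusting the text.
    # Requires the third-party `sympy` package; the claimed answer is a made-up example.
    import sympy as sp

    x = sp.symbols("x")
    claimed_roots = {2, 3}                                   # what the chatbot asserted
    actual_roots = set(sp.solve(sp.Eq(x**2 - 5*x + 6, 0), x))
    print("claim checks out" if claimed_roots == actual_roots
          else f"claim is wrong, real roots: {actual_roots}")
    ```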

  41. thegrayryder

    Think about it this way: ChatGPT has "learned" everything it knows from places like Reddit and Stack Exchange. In these places, "I don't know" is not a valid answer, so it never gets posted, or moderators delete it because it doesn't help the user come to an answer. So ChatGPT has never really been "taught" to say "I don't know"; it only knows how to interact or answer questions the way we see on the internet.

  42. jonsnowflaker

    The funniest interaction I had was asking ChatGPT to provide a PDF template for a sign, just to see what it could do. It kept saying "sure thing, I'm generating one, just wait." After 15 minutes or an hour I'd say, "hey, where's that template?" and it would apologize and ask for just a bit more time. Finally I said, "are you capable of making the PDF template I've requested?" And it said, "no, you're right, I cannot."

  43. midnightauto

    I've found that when asked if it is familiar with a certain topic, ChatGPT will lie its ass off. For instance, I asked it to create some sample Python code using VDCP (Video Disk Control Protocol). It wasn't even close, even though it referenced the original Louth documents.
    Grok, on the other hand, nailed it.

    This wasn’t the first time ChatGPT has done this. I’d much rather it tell me it doesn’t know.

  44. striedinger

    You’re asking very generic questions. I work in software and it hasn’t told me it doesn’t know, but it has told me when things are just not possible

  45. hoops_n_politics

    So then I would love to read a follow-up ELI5 to this topic:

    If ChatGPT and LLMs are (to oversimplify them a bit) basically high powered auto-correct programs that don’t actually understand anything, how will building bigger and faster LLMs lead us to create AGI? Wouldn’t this be something that’s doomed to fail (at least by pursuing it using this same route)?

  46. CleverNameThing

    You can ask it to do an uncertainty analysis. It'll even give you a percentage for how certain it is. It will look at peer-reviewed journals first if you ask it to, for example. This limitation can be overcome.

  47. tahuff

    I'm curious: I use Perplexity, not ChatGPT, and it cites sources for its answers. Don't other AIs do this? Also, unless I specifically request that it doesn't, when answering a math question it gives all the steps and reasoning. I still agree that it doesn't know if the answer is correct or not, but at least I can follow the trail.

  48. Mantastic89

    Every answer an LLM produces is basically a forecast. Based on your question, it forecasts which word (token) to put after the previous one.

    For it to produce your mentioned “I don’t know” answer, you have to build guardrails around the LLM. Usually this is done in a Retrieval Augmented Generation setup where you pick a set of data from which the LLM can create its answers.

    If, for example, you create a RAG on a very specific topic, like a certain product that a company sells, you can set it up in a way that whenever questions are asked about totally different topics or unrelated matters, the RAG will tell you it doesn't know the answer, because it can't predict an answer grounded in the specified dataset.
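
    A toy sketch of that guardrail: retrieve from a small fixed dataset and, if nothing is similar enough to the question, answer "I don't know" instead of generating. Real systems compare embedding vectors rather than word overlap; the documents and threshold here are invented.

    ```python
    # Toy RAG guardrail: refuse when no document in the fixed dataset is close enough.
    # Real systems use embedding similarity; this uses crude word overlap for illustration.
    DOCS = [
        "The Widget 3000 has a 2-year warranty.",
        "The Widget 3000 battery lasts 12 hours.",
    ]

    def overlap(question, doc):
        q, d = set(question.lower().split()), set(doc.lower().split())
        return len(q & d) / max(len(q), 1)

    def answer(question, threshold=0.3):
        best = max(DOCS, key=lambda doc: overlap(question, doc))
        if overlap(question, best) < threshold:
            return "I don't know based on the product documentation."
        return f"(would generate an answer grounded in: {best!r})"

    print(answer("How long is the Widget 3000 warranty?"))  # grounded answer
    print(answer("What do clouds dream about?"))            # falls back to "I don't know"
    ```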

  49. m1sterlurk

    “ABORT/RETRY/IGNORE”

    Us old folks as well as the young’uns who are well versed in software development will know this prompt well: the program has run into an error, and it needs to know if it should just stop running, try again with something having been changed (like what disk is inserted), or just keep running like everything’s fine.

    You will notice that this is three different decisions.

    For AI, it’s all 1s and 0s. You feed in a prompt, and the AI processes that prompt in conjunction with a seed. The AI makes an absurdly large number of “yes/no” decisions against the encoded prompt as it calculates the response it is going to give you. The seed will determine whether it falls “yes” or “no” on each of these questions as it formulates the “answer”.

    If the AI runs through its understanding of various connected concepts and what is being asked is fairly well-formed information in the AI's "mind", you're not very likely to get an incorrect answer. If the AI doesn't understand the concept very well, or if it is full of information that causes it to understand the concept "differently", it will totally just make stuff up, because it has no way to understand that it "does not know" something.

  50. de_propjoe

    I dunno, why don’t humans on Reddit do that? Tons of questions on here that don’t even have a valid premise, but get dozens of comments with “answers” that might as well be made up. What’s the difference?

  51. RaltzKlamar

    You train your parrot to respond to “What is 2+2” with “4.” You ask it “What is 2+3” it responds “4.”

    It’s not doing calculations or any sort of reasoning, it’s basically just a really complex parrot.

  52. Happy-Forever-3476

    Because it doesn’t “know” anything. It doesn’t know that it doesn’t know. It’s not aware or intelligent or capable of critical thought

  53. Mackntish

    They are corporate products that are being sold for money. As such, they are designed to be impressive and valuable. Sometimes that means bullshitting your way through a hard question.

  54. thejesteroftortuga

    Follow up question: I’ve noticed that Claude is more likely to disagree and say no. Why?

  55. Chaos90783

    Because in general an LLM gives you the most likely answer out of everything it knows, but that doesn't mean it is correct. And since there's always someone out there with an opinion on the internet, the most likely answer is almost never going to be no answer. When was the last time you asked a question in a forum and people started saying "I don't know" instead of just giving you their two cents anyway?

  56. Urc0mp

    Because we don't have a good model that predicts whether ChatGPT can answer correctly or not, lol. I'd guess it will exist in the future.

  57. Own-Psychology-5327

    How is ChatGPT supposed to know it's wrong? If it knew that, it wouldn't give you a wrong answer in the first place.

  58. Bia_LaSheouf

    Humans train ChatGPT to create human-like responses to prompts. ChatGPT doesn’t “know” anything – all it does is try to maximize the number it’s told to (# of human-sounding responses, basically).  The humans that train it have 2 options:

    1. “I don’t know” is an acceptable answer. ChatGPT will respond “I don’t know” to every question because that’s easiest and never wrong, maximizing its success rate.

    2. "I don't know" is not as good of an answer. To ChatGPT, which is physically incapable of doing anything but maximizing the number, anything less than maximum is unacceptable. It will never say "I don't know" because that has a 0% chance of passing for a good answer. It will make something up, true or not, because that will at least have a >0% chance of being accepted as a good answer.

    This may sound like intentional lying to trick humans, which is 100% not true. ChatGPT is always making stuff up, but it’s based on a ton of stuff that humans wrote which is usually correct, therefore ChatGPT’s made-up answer is also usually correct.
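
    The incentive in option 2 comes down to one line of arithmetic: if a made-up answer passes review some fraction of the time and "I don't know" never does, guessing always has the higher expected score. The numbers below are invented.

    ```python
    # Toy expected-score comparison behind "never say I don't know". Numbers invented.
    p_guess_accepted = 0.30   # how often a confident guess passes as a good answer
    p_idk_accepted = 0.00     # "I don't know" is never rated as a good answer

    expected_score_guess = 1 * p_guess_accepted
    expected_score_idk = 1 * p_idk_accepted
    print(expected_score_guess > expected_score_idk)  # True, so the model always guesses
    ```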

  59. ZealousidealPoet4293

    The amount of training data giving the output “I don’t know” is very very sparse.

  60. PBKYjellythyme

    Many people have stated some variation of this, but it's worth repeating. LLMs like ChatGPT are not programmed to factually answer questions. They are programmed to mimic natural human language, having absorbed an absurd amount of data and established statistical relationships for how words relate to each other.

    ChatGPT doesn’t “know” anything. As far as LLMs are programmed, an incorrect answer is just as valid of an output as a correct answer.

  61. Nu-Hir

    Because it doesn't know what it doesn't know. If you took a cook and only trained him on Italian cuisine and French cuisine, then asked him to make a Chicago dog, without any reference he's just got to make a hot dog and slap some toppings on it. Will it have a dill pickle spear? Tomatoes? Be on a poppy seed bun? No clue. He'll just base it on what he knows from French and Italian cooking.

    LLMs are the same way. They will base their answers on what they're trained on and how you word your prompt. It isn't programmed to not know things. So instead of replying with "I don't know", it creates an answer from all of the data it was trained on. It doesn't know that the answer isn't correct. The answer is correct based on the information it has.

  62. GlitteringDare9454

    Because the people who are in charge of these types of things are incapable of admitting that their MLM LLM is wrong.

  63. BlackWindBears

    AI occasionally makes something up for partly the same reason that you get made up answers here. There’s lots of confidently stated but wrong answers on the internet, and it’s trained from internet data!

    Why, however, is ChatGPT so frequently good at giving right answers when the typical internet commenter (as seen here) is so bad at it?

    That’s the mysterious part!

    I think what’s actually causing the problem is the RLHF process. You get human “experts” to give feedback to the answers. This is very human intensive (if you look and you have some specialized knowledge, you can make some extra cash being one of these people, fyi) and llm companies have frequently cheaped out on the humans. (I’m being unfair, mass hiring experts at scale is a well known hard problem).

    Now imagine you’re one of these humans. You’re supposed to grade the AI responses as helpful or unhelpful. You get a polite confident answer that you’re not sure if it’s true? Do you rate it as helpful or unhelpful?

    Now imagine you get an “I don’t know”. Do you rate it as helpful or unhelpful?

    Only in cases where it is generally well known in both the training data and by the RLHF experts is “I don’t know” accepted.

    Is this solvable? Yup. You just need to modify the RLHF to include your uncertainty and the models’ uncertainty. Force the LLM into a wager of reward points. The odds could be set by either the human or perhaps another language model simply trained to analyze text to interpret a degree of confidence. The human should then fact-check the answer. You’d have to make sure that the result of the “bet” is normalized so that the model gets the most reward points when the confidence is well calibrated (when it sounds 80% confident it is right 80% of the time) and so on.

    Will this happen? All the pieces are there. Someone needs to crank through the algebra to get the reward function correct.

    Citations for RLHF being the problem source: 

    – Saurav Kadavath, Tom Conerly, Amanda Askell, Tom Henighan, Dawn Drain, Ethan Perez, Nicholas Schiefer, Zac Hatfield-Dodds, Nova DasSarma, Eli Tran-Johnson, et al. Language models (mostly) know what they know. arXiv preprint arXiv:2207.05221, 2022.

    The last looks like they have a similar scheme as a solution, they don’t refer to it as a “bet” but they do force the LLM to assign the odds via confidence scores and modify the reward function according to those scores. This is their PPO-M model
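
    One concrete way to make the reward calibration-sensitive, in the spirit of the wager described above, is a proper scoring rule such as a Brier-style reward: expected reward is maximized when the stated confidence equals the true accuracy. This is an illustrative sketch, not the scheme from the cited paper.

    ```python
    # Brier-style reward: highest expected reward when stated confidence = true accuracy.
    # Illustrative sketch only, not the cited paper's PPO-M scheme.
    def expected_reward(stated_confidence, true_accuracy):
        reward_if_right = 1.0 - (stated_confidence - 1.0) ** 2
        reward_if_wrong = 1.0 - (stated_confidence - 0.0) ** 2
        return true_accuracy * reward_if_right + (1 - true_accuracy) * reward_if_wrong

    true_accuracy = 0.8
    for c in (0.5, 0.8, 0.99):
        print(f"claims {c:.0%} confidence -> expected reward {expected_reward(c, true_accuracy):.3f}")
    # Output peaks at 0.8: both hedging (0.5) and overclaiming (0.99) lose reward.
    ```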

  64. niversalvoice

    Because which person have you ever met that said they didn’t know?

  65. RealSataan

    Because it was trained on the internet. Nobody on the internet will say "I don't know the answer" (well, most won't). They will ignore the question, or go to the comments section to find the answer themselves.

    So ChatGPT never learns to say "I don't know" to a question.

  66. Hyphz

    They’re trained on online conversations and Q&A.

    In an online conversation, if you don’t know the answer to a question, you don’t post that you don’t know. So it’s never seen that statement.

    Worse, if it had seen that phrase in training, it wouldn’t understand what it meant. It’d just have a random chance of saying “I don’t know” based on how often people posted that.

  67. Dalannar

    In addition to what everyone is saying about it not knowing when it's not correct, it also doesn't seem far-fetched that most of the training data is made up of articles and other pieces of information that are meant to be correct. There aren't many articles where the authors just go "I don't know…".

  68. ShitfacedGrizzlyBear

    It's funny: some attorneys have been sanctioned for using ChatGPT, because they filed pleadings where ChatGPT just made up case law. I don't think there's anything wrong with attorneys trying to use AI. But you must, at the very least, confirm that the citations are correct and that the cases cited actually exist.

    That’s the confusing one for me. That it would make up whole quotes, holdings, and case citations. Surely it knows that it’s just making shit up. I understand it running math problems wrong. But why does it just completely make up case law?

  69. YellowSlugDMD

    Honestly, as an adult human male, it took me a really long time, some therapy, and a good amount of building my self confidence before I got good at this skill.