Artificial intelligence can now be taught to experience crippling self-doubt, much like humans, after researchers developed a method to make AI recognize when it does not know an answer. This breakthrough aims to address the problem of AI 'hallucinations,' where models generate confident but incorrect responses.
Hallucinations occur when AI is incentivized to guess rather than admit ignorance, which can be dangerous in contexts like medical diagnosis. This overconfidence has damaged trust in AI, with users mocking models that stubbornly insist on falsehoods, such as misspelling December.
Researchers from the Korea Advanced Institute of Science and Technology (KAIST) have created a training method that mimics aspects of early human brain development. By setting the AI's initial confidence to a low level, close to chance, the model begins from a state of 'I don't know anything yet' before actual learning starts. This reduces overconfidence bias and helps the AI recognize unfamiliar data.
'While conventional models tend to give incorrect answers with high confidence even for data they have not encountered during training, models with warm-up training showed a clear improvement in their ability to lower confidence and recognise that they do not know,' the researchers explained in the journal Nature Machine Intelligence.
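The core idea, starting a model at chance-level confidence, can be illustrated with a toy sketch. This is not the KAIST team's code; it is a minimal, hypothetical example showing how a classifier whose output layer is zero-initialised produces uniform (chance-level) probabilities on any input before training, an explicit 'I don't know anything yet' state:

```python
import numpy as np

# Illustrative sketch only (not the published method): a linear classifier
# whose output layer is zero-initialised, so every input maps to zero logits
# and the softmax starts at chance level.
rng = np.random.default_rng(0)

n_features, n_classes = 8, 4
W = np.zeros((n_features, n_classes))  # zero weights -> zero logits
b = np.zeros(n_classes)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

x = rng.normal(size=(3, n_features))  # three inputs the model has never seen
p = softmax(x @ W + b)

# Before any training, each class probability is exactly 1/n_classes = 0.25:
# the model's confidence starts at chance rather than at an arbitrary guess.
print(p.max(axis=1))
```

Ordinary training would then raise confidence only where the data justifies it, rather than the model starting out arbitrarily sure of something it has never learned.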
Se-Bum Paik, an author of the study, said, 'This study demonstrates that by incorporating key principles of brain development, AI can recognise its own knowledge state in a way that is more similar to humans. This is important because it helps AI understand when it is uncertain or might be mistaken, not just improve how often it gives the right answer.'
When asked what it does not know, ChatGPT listed several limitations: events after its knowledge cutoff in 2025; personal information unless the user shares it; unobservable facts; real-time data, by default; hidden or private data; and the potential to misinterpret questions or give confidently wrong answers.
OpenAI is currently valued at $852 billion, and several lawsuits have sought damages from AI and tech companies over the influence of chatbots on mental health.