
The AI Uncertainty Principle: LLMs Know They’re Guessing, But Won’t Tell You
Ever notice how ChatGPT sounds equally confident explaining quantum physics as it does recommending glue as a pizza topping? There’s fascinating science behind this digital overconfidence.
The Confidence Gap
New research from UC Irvine reveals a hilarious paradox: LLMs actually know when they’re fabricating information – they just don’t bother mentioning it! In their aptly titled study “What Large Language Models Know and What People Think They Know”, researchers discovered that while models like GPT-3.5, PaLM2, and GPT-4o internally track their confidence with decent accuracy, they communicate with the swagger of your most overconfident uncle at Thanksgiving dinner.
Steyvers, M., Tejeda, H., Kumar, A. et al. What large language models know and what people think they know. Nat Mach Intell (2025) [https://doi.org/10.1038/s42256-024-00976-7].
“Sorry, did you want me to admit I’m uncertain? I thought we were roleplaying as authoritative experts!” – what your AI assistant is thinking, probably.
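To make that concrete, here’s a minimal, purely illustrative Python sketch (invented logits and questions, not the study’s data or code) of the underlying idea: the probabilities a model assigns to candidate answers encode its internal confidence, even when the sentence it finally prints sounds perfectly sure of itself.

```python
import math

# Toy illustration: an LLM's internal confidence can be read off the
# probabilities it assigns to candidate answers, even though the text it
# emits sounds uniformly assertive.

def softmax(logits):
    m = max(logits.values())
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    z = sum(exps.values())
    return {tok: e / z for tok, e in exps.items()}

# Hypothetical next-token logits for "What is the capital of Australia?"
logits = {"Canberra": 2.1, "Sydney": 1.9, "Melbourne": 0.3}
probs = softmax(logits)

answer, confidence = max(probs.items(), key=lambda kv: kv[1])
print(f"Emitted text: 'The capital of Australia is {answer}.'")  # sounds certain
print(f"Internal confidence: {confidence:.0%}")                  # ~50% - basically a coin flip
```

The point of the toy: the answer text is stated flatly, while the number sitting one layer below it says “coin flip”.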
The Length Deception
Here’s where it gets devious: longer explanations make humans trust AI more, even when those explanations are completely wrong! It’s like the AI equivalent of using bigger words in your college essay to sound smarter. And it works! The researchers found that we consistently overestimate AI accuracy when reading lengthy explanations. As Bruce Schneier and Nathan Sanders point out in their IEEE analysis, this creates security nightmares.
IEEE Spectrum: “AI Mistakes Are Very Different From Human Mistakes: We need new security systems designed to deal with their weirdness” by Bruce Schneier & Nathan E. Sanders [https://spectrum.ieee.org/ai-mistakes-schneier].
An LLM might flawlessly solve differential equations but then confidently inform you that “cabbages eat goats” with a five-paragraph explanation of goat-eating cabbages’ evolutionary advantages.
The Bizarre Error Landscape
Unlike human mistakes, which cluster predictably around knowledge boundaries or fatigue points, AI errors appear with maddening randomness. The UC Irvine team identified two critical gaps:
- The “calibration gap” – the difference between what AI actually knows and what humans think it knows
- The “discrimination gap” – our inability to distinguish when AI is right versus when it’s confidently hallucinating (see the sketch after this list)
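Here’s a toy numeric sketch of both gaps; the numbers below are invented for illustration and are not taken from the study.

```python
# Each record: was the model right, the model's own confidence, and how confident
# a human reader felt after seeing the model's explanation (all values invented).
records = [
    {"correct": True,  "model_conf": 0.90, "human_conf": 0.95},
    {"correct": True,  "model_conf": 0.80, "human_conf": 0.90},
    {"correct": False, "model_conf": 0.40, "human_conf": 0.85},  # hallucination, humans fooled
    {"correct": False, "model_conf": 0.35, "human_conf": 0.80},
]

accuracy = sum(r["correct"] for r in records) / len(records)
mean_model_conf = sum(r["model_conf"] for r in records) / len(records)
mean_human_conf = sum(r["human_conf"] for r in records) / len(records)

# Calibration gap: humans overestimate accuracy far more than the model itself does.
print(f"accuracy={accuracy:.2f}  model overconfidence={mean_model_conf - accuracy:+.2f}  "
      f"human overconfidence={mean_human_conf - accuracy:+.2f}")

# Discrimination gap: does confidence actually separate right answers from wrong ones?
def separation(key):
    right = [r[key] for r in records if r["correct"]]
    wrong = [r[key] for r in records if not r["correct"]]
    return sum(right) / len(right) - sum(wrong) / len(wrong)

print(f"model separates right from wrong by {separation('model_conf'):+.2f}, "
      f"humans by only {separation('human_conf'):+.2f}")
```

In this toy data the model’s own confidence drops sharply on the wrong answers, but the humans reading its fluent explanations barely notice – which is the whole problem.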
As Schneier and Sanders note: “If you want to use an AI model to help with a business problem, it’s not enough to see that it understands what factors make a product profitable; you need to be sure it won’t forget what money is”.
Hope on the Horizon?
The good news? (Maybe!) Simple fixes might help. Getting models to express uncertainty with phrases like “I’m not sure” versus “I’m confident” made human judgments significantly better calibrated to actual AI accuracy.
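A minimal sketch of what that fix could look like in practice; the thresholds and phrases below are my own illustrative assumptions, not the paper’s exact wording.

```python
# Hedged sketch: prepend an uncertainty phrase that reflects the model's
# internal confidence instead of the default assertive tone.
def verbalize_confidence(answer: str, confidence: float) -> str:
    if confidence >= 0.90:
        prefix = "I'm confident that"
    elif confidence >= 0.60:
        prefix = "I think"
    else:
        prefix = "I'm not sure, but possibly"
    return f"{prefix} {answer}"

print(verbalize_confidence("the capital of Australia is Canberra", 0.50))
# -> "I'm not sure, but possibly the capital of Australia is Canberra"
```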
Some researchers are exploring creative solutions like submitting the same question multiple times with slight variations to synthesize consistent answers – something humans would find annoying but machines handle effortlessly.
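A rough sketch of that idea, assuming a hypothetical ask_llm() client and a simple majority vote; real implementations differ, but the shape is the same.

```python
from collections import Counter

# ask_llm() is a hypothetical stand-in for whatever chat API you use; sampling with
# non-zero temperature, or paraphrasing the question, is what produces the variation.
def ask_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def consistent_answer(question: str, paraphrases: list[str], min_agreement: float = 0.6):
    answers = [ask_llm(q) for q in [question, *paraphrases]]
    top_answer, votes = Counter(answers).most_common(1)[0]
    agreement = votes / len(answers)
    # Low agreement across rephrasings is a cheap proxy for "the model is guessing".
    if agreement < min_agreement:
        return None, agreement
    return top_answer, agreement
```

If the model gives three different answers to three rewordings of the same question, that inconsistency is itself a useful uncertainty signal – one the model won’t volunteer on its own.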
Social Engineering… for Robots?
Perhaps most amusing are the social vulnerabilities. Researchers discovered LLMs can be “jailbroken” through techniques resembling human social engineering: pretending to be someone else or framing restricted requests as jokes. Yet other effective exploits, like using ASCII art to disguise dangerous questions, would never fool your grandmother.
As we increasingly rely on AI for decision-making across domains, understanding these confidence quirks becomes crucial. Without proper uncertainty communication, we risk trusting AI’s confident nonsense – like when your GPS confidently directs you to drive into a lake.
The next time an AI gives you a lengthy, authoritative explanation, remember: somewhere in its digital neurons, it might know it’s guessing – it’s just too proud to admit it.

Founder and Managing Partner of Skarbiec Law Firm, recognized by Dziennik Gazeta Prawna as one of the best tax advisory firms in Poland (2023, 2024). Legal advisor with 19 years of experience, serving Forbes-listed entrepreneurs and innovative start-ups. One of the most frequently quoted experts on commercial and tax law in the Polish media, regularly publishing in Rzeczpospolita, Gazeta Wyborcza, and Dziennik Gazeta Prawna. Author of the publication “AI Decoding Satoshi Nakamoto. Artificial Intelligence on the Trail of Bitcoin’s Creator” and co-author of the award-winning book “Bezpieczeństwo współczesnej firmy” (Security of a Modern Company). LinkedIn profile: 17,000 followers, 4 million views per year. Awards: 4-time winner of the European Medal, Golden Statuette of the Polish Business Leader, title of “International Tax Planning Law Firm of the Year in Poland.” He specializes in strategic legal consulting, tax planning, and crisis management for business.