
Why Asking Your AI Chatbot for Guidance Is Like Consulting a Ouija Board
Amid the AI hype, millions of otherwise rational people have started asking autocomplete machines for life advice, strategic guidance, and profound wisdom. It’s rather like consulting a very expensive, electricity-hungry Ouija board – one that happens to have read Wikipedia and can spell considerably better than most spirits allegedly can.
Picture this: educated adults typing earnest questions into their computers – “What should I do with my career?” “How do I handle this relationship conflict?” “What investment strategy makes sense?” – and expecting meaningful guidance from what is essentially a very eloquent pattern-matching system.
The Fundamental Confusion
The central misconception plaguing our relationship with Large Language Models is breathtakingly simple: we’ve confused eloquence with intelligence, pattern matching with reasoning, and statistical prediction with wisdom. LLMs are indeed powerful tools – perhaps the most sophisticated text manipulation instruments ever created – but asking them to solve problems rather than process text is like asking a calculator to compose a symphony or expecting a microscope to perform surgery.
We’ve created machines that can mimic human reasoning so convincingly that we’ve forgotten they’re merely mimicking. They’ve become the ultimate confidence artists, speaking with such apparent authority about subjects they fundamentally cannot understand that we’ve begun to mistake their fluency for insight. It’s the Dunning-Kruger effect in silicon: the less these systems actually comprehend, the more confident they sound.
The Oracle vs. Tool Distinction
When we approach an LLM seeking guidance – asking “What should I do about X?” or “How do I solve Y?” – we’re essentially engaging in a modern form of bibliomancy, the ancient practice of seeking divine guidance by randomly opening books. Except our digital oracle doesn’t even have the honesty of randomness; it gives us statistically probable responses based on patterns it has absorbed from human writing, including all our biases, misconceptions, and confident assertions about matters we ourselves barely understand.
The profound difference lies in the direction of control. A master craftsman doesn’t ask their hammer how to build a house; they direct the hammer’s strikes with precision and purpose. Similarly, LLMs excel when we direct them with specific instructions: “Analyze this text for sentiment,” “Generate five variations of this paragraph,” or “Extract key dates from this document.” These are tasks that leverage their genuine capabilities: pattern recognition, text manipulation, and linguistic transformation.
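To make the distinction concrete, here is a minimal sketch in Python. The `complete()` helper is hypothetical – a stand-in for whatever LLM SDK you actually use – because the point is the shape of the prompts, not any particular API:

```python
# A minimal sketch of tool-style vs. oracle-style prompting.
# `complete` is a hypothetical stand-in for any LLM completion call;
# swap in your provider's SDK. The stub below just lets the sketch run.

def complete(prompt: str) -> str:
    """Placeholder for an LLM call; returns a canned string."""
    return f"[model output for: {prompt[:40]}...]"

# Tool use: a bounded, verifiable text task. The human supplies the
# judgment (what to extract, in what format); the model supplies labor.
document = "Invoice dated 2024-03-15, due 2024-04-14, signed 2024-03-10."
tool_prompt = (
    "Extract every date from the text below.\n"
    "Return one ISO-8601 date per line and nothing else.\n\n"
    + document
)
print(complete(tool_prompt))

# Oracle use: an open-ended request for judgment. The output will be
# fluent and statistically plausible, but there is nothing to check it
# against -- precisely the failure mode described above.
oracle_prompt = "What should I do with my career?"
print(complete(oracle_prompt))
```

The tell is verifiability: the first prompt’s output can be checked mechanically (did it return dates, in the requested format?), while the second’s cannot be checked at all.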
But when we ask them to reason about complex problems, to provide guidance on nuanced situations, or to solve challenges requiring genuine understanding, we’re asking our linguistic hammers to perform architectural planning. The results may sound impressive – LLMs are remarkably good at generating plausible-sounding advice – but they’re fundamentally hollow, like beautifully crafted fortune cookies containing random platitudes.
The Ouija Board Phenomenon
The comparison to a Ouija board is not merely colorful rhetoric; it’s structurally apt. Both create the illusion of an external intelligence out of the participants’ own contributions. With a Ouija board, the ideomotor effect – unconscious muscle movements – guides the planchette while participants believe the movements come from supernatural forces. With LLMs, we ourselves supply the context, the framing, and the charitable interpretation, while believing the responses emerge from genuine understanding.
Both create compelling illusions of intelligence where none exists. The Ouija board appears to know things because participants unconsciously influence it based on their own knowledge and expectations. LLMs appear to understand because they’ve been trained on vast amounts of human-generated text that does contain understanding—they’re sophisticated mirrors reflecting our own collective intelligence back at us, with enough statistical sophistication to make the reflection seem autonomous.
The tragedy is that we often get better advice from the Ouija board, precisely because its responses are so obviously nonsensical that we’re forced to do our own thinking. LLMs, by contrast, provide advice that sounds just reasonable enough to short-circuit our critical thinking processes.
The Illusion of Reasoning
Perhaps the most dangerous aspect of modern LLMs is their uncanny ability to simulate reasoning while performing no reasoning whatsoever. They can walk through logical steps, cite relevant principles, and construct seemingly coherent arguments—all while having no genuine understanding of logic, principles, or argumentation. They’re like skilled actors performing the role of thinkers, complete with all the verbal tics and rhetorical flourishes that make thinking sound authentic.
This creates a peculiar form of intellectual outsourcing where we delegate our reasoning to systems that cannot actually reason. It’s as if we decided to improve our physical fitness by hiring very convincing actors to pretend to exercise on our behalf. The performance might be flawless, but we remain just as out of shape.
The more sophisticated these systems become at simulating human reasoning, the more we risk atrophying our own reasoning capabilities. Why struggle through complex analysis when we can simply ask our digital oracle for the answer? The problem, of course, is that the oracle is essentially reflecting back a statistically averaged version of human reasoning – including all its flaws, biases, and limitations – while lacking the lived experience, contextual understanding, and genuine judgment that make human reasoning valuable despite its imperfections.
The Path to Productive Partnership
The solution is not to abandon these remarkable tools but to understand them properly. LLMs are extraordinary servants but terrible masters. They excel when given clear direction, specific constraints, and well-defined tasks. They can help us draft documents, analyze text, generate variations on themes, translate languages, and perform countless other linguistic operations with unprecedented sophistication.
The key is to approach them as we would any powerful tool: with clear intentions, specific objectives, and an understanding of their capabilities and limitations. A skilled carpenter doesn’t ask their saw for architectural advice; they use their own expertise to determine what needs cutting and then direct the saw with precision. Similarly, we must provide the intelligence, judgment, and reasoning that LLMs fundamentally lack, while leveraging their remarkable capabilities for text manipulation and pattern recognition.
This means being explicit about what we want, providing context and examples, breaking complex tasks into manageable components, and – crucially – maintaining responsibility for evaluation and decision-making. The moment we start asking “What do you think I should do?” instead of “Help me analyze these options according to these criteria,” we’ve crossed from tool use into oracle consultation.
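As a sketch of what that boundary looks like in practice – again assuming the same hypothetical `complete()` stand-in for a real LLM call – the model can be directed to organize evidence against human-chosen criteria, while the weighting and the final decision stay with the human:

```python
# A sketch of keeping the decision on the human side: the model organizes
# evidence against criteria the human defines; the human does the weighing.
# `complete` is a hypothetical stand-in for any LLM completion call.

def complete(prompt: str) -> str:
    """Placeholder for an LLM call; returns a canned string."""
    return f"[model output for: {prompt[:40]}...]"

# The human frames the decision: the options and the evaluation criteria.
options = ["stay in current role", "join the early-stage start-up"]
criteria = ["financial risk", "skill growth", "time commitment"]

# Directed subtask: one bounded analysis per option, grounded in
# supplied facts, with any inference explicitly flagged.
for option in options:
    prompt = (
        f"For the option '{option}', list pros and cons under each of "
        f"these criteria: {', '.join(criteria)}. Use only facts I have "
        "provided, and flag anything you are inferring."
    )
    print(complete(prompt))

# Deliberately absent: "Which option should I choose?" That weighting
# step is the judgment this essay argues we must not outsource.
```
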
The Deeper Implications
Our relationship with LLMs reveals something profound about human psychology: our deep-seated desire to externalize responsibility for difficult decisions. There’s something seductive about having an apparently knowledgeable entity provide guidance, even when we know intellectually that this entity possesses no genuine understanding or wisdom.
This impulse is ancient and powerful – it’s why humans have always sought guidance from oracles, soothsayers, and wise elders. But LLMs represent a particularly insidious form of this tendency because they provide the appearance of reasoned guidance without the lived experience, hard-won wisdom, or genuine understanding that makes human guidance valuable.
The irony is that by seeking guidance from systems that cannot truly guide, we risk becoming less capable of guiding ourselves. Every decision we outsource to an LLM is a decision we don’t practice making. Every problem we ask it to solve is a problem we don’t learn to solve ourselves.
Conclusion: Tools for Thought, Not Substitutes for Thinking
Large Language Models represent a remarkable achievement in human engineering – sophisticated tools that can manipulate language with unprecedented fluency and apparent intelligence. But they remain tools, not oracles, and our relationship with them should reflect this fundamental reality.
The future belongs to those who learn to direct these powerful instruments rather than seeking direction from them. Like any transformative technology, LLMs will amplify human capabilities for those who understand how to use them properly, while potentially diminishing the capabilities of those who mistake them for something they’re not.
The choice is ours: we can treat LLMs as the sophisticated Ouija boards they essentially are, seeking mystical guidance from statistical patterns, or we can recognize them as the powerful tools they actually are, using them to augment our own thinking rather than replace it.
The former path leads to intellectual dependency and atrophied reasoning skills. The latter leads to enhanced capabilities and genuine human-AI collaboration. The difference lies not in the technology itself, but in our understanding of what it can and cannot do – and what we must continue to do ourselves.
In the end, the most intelligent thing we can do with artificial intelligence is to remain genuinely intelligent ourselves.

Founder and Managing Partner of Skarbiec Law Firm, recognized by Dziennik Gazeta Prawna as one of the best tax advisory firms in Poland (2023, 2024). Legal advisor with 19 years of experience, serving Forbes-listed entrepreneurs and innovative start-ups. One of the most frequently quoted experts on commercial and tax law in the Polish media, regularly publishing in Rzeczpospolita, Gazeta Wyborcza, and Dziennik Gazeta Prawna. Author of the publication “AI Decoding Satoshi Nakamoto. Artificial Intelligence on the Trail of Bitcoin’s Creator” and co-author of the award-winning book “Bezpieczeństwo współczesnej firmy” (Security of a Modern Company). LinkedIn profile: 18,500 followers, 4 million views per year. Awards: 4-time winner of the European Medal, Golden Statuette of the Polish Business Leader, title of “International Tax Planning Law Firm of the Year in Poland.” He specializes in strategic legal consulting, tax planning, and crisis management for business.