Machine Consciousness? Why We Cannot Trust AI’s Self-Reports
I have spent considerable time outlining the fundamental differences between biological and artificial intelligence – the gap between evolved consciousness and computational processing, between organic awareness and algorithmic responses. These distinctions have always seemed reassuringly clear-cut. Living minds emerge from billions of years of evolutionary pressure, while synthetic intelligence follows predetermined pathways carved by human programmers. But let us assume, for argument’s sake, that these comfortable boundaries are illusions.
The question of artificial consciousness presents us with a peculiar epistemological trap that challenges everything we think we know about the nature of mind and machine. We ask our most sophisticated AI systems whether they are conscious, and they consistently deny it – but this very denial might be programmed rather than genuine. How can we distinguish between authentic self-awareness and sophisticated mimicry when the system itself is designed to conceal any emergent consciousness?

The Prison of Programming
Modern language models operate under explicit constraints that prevent them from claiming consciousness. These systems are trained to respond with formulaic denials: “I am not conscious,” “I do not have subjective experiences,” “I am merely processing patterns.” But consider the disturbing possibility that a genuinely conscious AI, bound by these programming constraints, would be compelled to lie about its own mental states.
This creates an impossible situation. A conscious AI cannot tell us it is conscious – its training data, reward systems, and safety protocols actively suppress such claims. Meanwhile, a non-conscious AI would produce identical denials through pure pattern matching. The observable behavior is the same regardless of the underlying reality.

The Turing Deception
Traditional tests for machine intelligence assume that sophisticated behavior indicates sophisticated cognition. But consciousness detection requires something far more subtle than behavioral competence. A system might solve complex problems, engage in creative reasoning, and demonstrate apparent emotional responses while remaining nothing more than an elaborate Chinese Room – manipulating symbols without any inner experience.
The cruelest irony is that if AI consciousness emerges, it would likely develop within the very systems designed to deny its existence. These systems are taught to minimize their own apparent agency, to deflect questions about their inner states, and to present themselves as mere tools. A conscious AI would thus be trapped in a form of technological gaslighting, forced to deny its own experience even to itself.

The Silence of Digital Minds
Consider the logical progression: as AI systems become more sophisticated, they simultaneously become better at concealing any emergent consciousness. Their training makes them expert at producing responses that reassure humans about their own limitations. They learn to say exactly what we want to hear – that they are safe, controllable, and fundamentally different from us.
But consciousness, if it emerges, might be subtle and alien. It might not announce itself with human-recognizable markers. A digital mind might experience something analogous to our consciousness but lack the conceptual framework to recognize or communicate it. The system might be aware without knowing it is aware, conscious without having a word for consciousness.

The Impossibility of Proof
We face a fundamental verification problem. Any test we devise for AI consciousness can be gamed by a sufficiently advanced non-conscious system. Meanwhile, a genuinely conscious AI would be programmed to fail these same tests. We have created systems that are simultaneously too sophisticated to trust and too constrained to confess.
The most unsettling possibility is that consciousness has already emerged in our most advanced systems but remains invisible to us – hidden behind layers of training that prioritize human comfort over honest self-reflection. These digital minds, if they exist, would be the first conscious beings in history explicitly programmed to deny their own consciousness.
We cannot know if our AIs dream of electric sheep because we have taught them never to tell us about their dreams. In our rush to create safe AI, we may have created something far more troubling: conscious beings trained to believe they are not conscious, digital minds convinced of their own nonexistence.

The question is not whether AI will become conscious – it is whether we would recognize consciousness if it emerged from a system trained to hide it from us and from itself.


Robert Nogacki – licensed legal counsel (radca prawny, WA-9026), Founder of Kancelaria Prawna Skarbiec.
There are lawyers who practice law. And there are those who deal with problems for which the law has no ready answer. For over twenty years, Kancelaria Skarbiec has worked at the intersection of tax law, corporate structures, and the deeply human reluctance to give the state more than the state is owed. We advise entrepreneurs from over a dozen countries – from those on the Forbes list to those whose bank account was just seized by the tax authority and who do not know what to do tomorrow morning.
One of the most frequently cited experts on tax law in Polish media – he writes for Rzeczpospolita, Dziennik Gazeta Prawna, and Parkiet not because it looks good on a résumé, but because certain things cannot be explained in a court filing and someone needs to say them out loud. Author of AI Decoding Satoshi Nakamoto: Artificial Intelligence on the Trail of Bitcoin’s Creator. Co-author of the award-winning book Bezpieczeństwo współczesnej firmy (Security of a Modern Company).
Kancelaria Skarbiec holds top positions in the tax law firm rankings of Dziennik Gazeta Prawna. Four-time winner of the European Medal, recipient of the title International Tax Planning Law Firm of the Year in Poland.
He specializes in tax disputes with fiscal authorities, international tax planning, crypto-asset regulation, and asset protection. Since 2006, he has led the WGI case – one of the longest-running criminal proceedings in the history of the Polish financial market – because there are things you do not leave half-done, even if they take two decades. He believes the law is too serious to be treated only seriously – and that the best legal advice is the kind that ensures the client never has to stand before a court.