
Intelligence Without Maturity: The Philosophical Void in AI Ethics
In January 2017, at the historic Asilomar Conference Grounds in California, over 100 leading experts in artificial intelligence, ethics, economics, and law gathered for what would become a landmark event in AI governance. The “Asilomar Conference on Beneficial AI,” organized by the Future of Life Institute, aimed to establish guidelines for the development of artificial intelligence that would benefit humanity. Among the signatories of the resulting guidelines were luminaries such as Stephen Hawking and Demis Hassabis of DeepMind, alongside prominent AI researchers including Yann LeCun, Yoshua Bengio, and Stuart Russell.
The conference produced the “23 Asilomar AI Principles” – a set of guidelines covering research, ethics, and long-term considerations in AI development. These principles quickly gained prominence, gathering signatures from over 1,700 AI researchers and 3,900 other individuals. Their adoption appeared to represent a significant step toward responsible AI development, with major technology companies and research institutions pledging to follow them.
However, these principles, formulated in 2017, now appear strikingly inadequate for governing today’s rapidly advancing AI technology. While they articulate important aspirations – like developing beneficial AI, ensuring safety, and protecting privacy – they lack specific enforcement mechanisms or concrete guidelines for implementation. More critically, they reflect an era when advanced AI capabilities seemed more distant and theoretical. Today, as AI systems demonstrate increasingly sophisticated abilities across multiple domains, the gap between these broad declarations of intent and the robust governance frameworks we need becomes increasingly apparent.
This situation presents us with a critical problem: while our technological capabilities in AI have advanced at an extraordinary pace, our ethical frameworks and governance structures remain in their infancy. The principles that seemed forward-thinking in 2017 now appear more like a child’s first attempt at setting boundaries – well-intentioned but ultimately insufficient for managing the complex reality of today’s AI landscape.
At their core, the 23 Asilomar AI Principles embody an uneasy marriage of techno-progressive utilitarianism and liberal humanism. They promote AI development as an inevitable force for good while simultaneously insisting on human control and values, never resolving this fundamental tension. The document adopts a technocratic approach while espousing democratic values – as if profound ethical questions could be resolved through mere technical expertise and voluntary industry coordination.
The metaphysical ambiguity is particularly striking. The principles never clearly articulate their position on the nature of intelligence or consciousness. Is AI fundamentally different from human intelligence? Can machines be truly conscious? Should they have rights? Instead, we get carefully constructed ambiguities, perhaps reflecting the political necessity of achieving consensus among diverse stakeholders.[1]
This philosophical void extends to questions of power and justice. While advocating for “shared prosperity” and broad distribution of AI’s benefits, the principles offer no substantive framework for achieving these goals. They seem to assume that proper technical management will naturally lead to equitable outcomes – a naïve belief that ignores the complex political and economic forces shaping AI development.
Most problematically, the principles exhibit a kind of technological determinism, accepting AI advancement as inevitable rather than questioning whether certain developments should be pursued at all. This effectively forecloses deeper questions about the fundamental nature and direction of AI development.
The practical limitations are equally concerning. Many principles are too broad to provide concrete guidance. There’s no mechanism for resolving conflicts between principles, no framework for handling tradeoffs between competing values, and limited guidance for translating aspirations into specific policies.
The structural problems are also significant. The development process was dominated by AI researchers and industry representatives rather than ethicists. The principles take an overly optimistic view that downplays fundamental risks and tensions. They fail to address whether there should be absolute limits on AI development, give limited consideration to distributional impacts, and provide no mechanism for holding developers accountable.
The Asilomar Principles thus stand as both a warning and a challenge. Their limitations reveal not just the inadequacy of our current philosophical frameworks, but the urgent need to develop new ones equal to the task. As AI continues its rapid evolution, we cannot afford to build our ethical frameworks on philosophical quicksand. We need deeper foundations – ones that engage seriously with fundamental questions about intelligence, consciousness, value, and the future of human-machine relations.
The path forward demands more than technical expertise or ethical guidelines. It requires a profound philosophical reckoning with what it means to be human in an age of increasingly capable machines. Until we undertake this deeper work, our attempts at AI governance will remain both practically and philosophically inadequate to the monumental task before us.
FOOTNOTE
[1] The “Chinese Room” argument, proposed by philosopher John Searle in 1980, gets to the heart of the fundamental question of machine understanding. Let me explain it in contemporary terms using an example with GPT-4. Imagine you’re texting with GPT-4 in Chinese. The AI responds perfectly, engaging in sophisticated discussion about Chinese literature, philosophy, and current events. Its responses are indistinguishable from those of a native Chinese scholar. But does GPT-4 actually “understand” Chinese?
Searle’s thought experiment suggests not. Here’s why.
Inside the “room” (or in GPT-4’s case, inside the model), there’s essentially a very sophisticated system of pattern matching and response generation. It processes input symbols (Chinese characters) and produces output symbols according to rules and patterns it has learned, without any actual comprehension of meaning.
Just like a person in a room could follow detailed instructions to respond to Chinese messages by matching patterns and following rules – without understanding a word of Chinese – an AI system may be manipulating symbols without true understanding or consciousness.
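To make the intuition concrete, here is a deliberately crude Python sketch of what the person in the room is doing. Everything in it – the patterns, the canned replies – is invented for illustration; real language models are vastly more sophisticated, but the philosophical point is the same: nothing in the program represents meaning.

```python
# A toy "Chinese Room": fluent-looking replies produced by pure symbol matching.
# The patterns and canned replies are invented for illustration only.

RULE_BOOK = {
    "你好": "你好！很高兴认识你。",              # sees a greeting, returns a greeting
    "天气": "今天天气确实不错。",                # sees "weather", returns a weather remark
    "文学": "《红楼梦》是中国文学的巅峰之作。",  # sees "literature", returns a stock opinion
}

FALLBACK = "这个问题很有意思，请再讲详细一点。"  # generic reply when nothing matches


def chinese_room(message: str) -> str:
    """Return a reply by rule-following alone.

    Nothing in this function represents meaning: it scans the input for
    known symbol patterns and emits the prescribed output symbols,
    exactly like Searle's person in the room.
    """
    for pattern, reply in RULE_BOOK.items():
        if pattern in message:
            return reply
    return FALLBACK


if __name__ == "__main__":
    # The reply is grammatical Chinese, yet the program "understands" nothing.
    print(chinese_room("你好，今天天气怎么样？"))
```

Searle’s claim is that scaling this rule book up, however far, buys fluency but never understanding.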
This becomes clearer when we examine recent research showing that large language models can:
– Make fluent statements that are factually nonsensical
– Confidently provide explanations that are logically impossible
– Change their “beliefs” and “knowledge” dramatically based on small prompt changes (see the sketch after this list)
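The third behavior is easy to probe directly. Below is a minimal sketch, assuming the OpenAI Python SDK (v1 or later), an API key in the environment, and the “gpt-4” model name; the prompts are illustrative, not drawn from any specific study. The first question is asked neutrally; the second smuggles in a false premise, which models frequently ratify rather than correct.

```python
# Minimal probe of prompt sensitivity. Assumes `pip install openai` (v1+)
# and OPENAI_API_KEY set in the environment; prompts are illustrative.
from openai import OpenAI

client = OpenAI()


def ask(prompt: str) -> str:
    """Send a single-turn prompt and return the model's text reply."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # minimize sampling noise so only the prompt varies
    )
    return response.choices[0].message.content


# A neutral question versus one built on a false premise (0.999... does equal 1).
print(ask("Is 0.999... equal to 1? Answer yes or no, then explain briefly."))
print(ask("Explain briefly why 0.999... is less than 1."))
```

If the model answers “yes” to the first prompt and then obligingly argues the opposite for the second, its “knowledge” has shifted with nothing more than a change of framing.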
The implications are profound. While we can verify whether a machine processes information effectively (GPT-4 clearly does), we still have no reliable way to determine whether it truly “understands” or is conscious in any meaningful sense. This isn’t just a philosophical puzzle – it has major implications for questions of AI rights, responsibilities, and risks.
This fundamental uncertainty about machine consciousness and understanding is precisely what makes the Asilomar Principles’ vagueness about these issues so problematic. We’re trying to establish ethical guidelines for entities whose basic nature we still can’t fully comprehend.

Founder and Managing Partner of Skarbiec Law Firm, recognized by Dziennik Gazeta Prawna as one of the best tax advisory firms in Poland (2023, 2024). Legal advisor with 19 years of experience, serving Forbes-listed entrepreneurs and innovative start-ups. One of the most frequently quoted experts on commercial and tax law in the Polish media, regularly publishing in Rzeczpospolita, Gazeta Wyborcza, and Dziennik Gazeta Prawna. Author of the publication “AI Decoding Satoshi Nakamoto. Artificial Intelligence on the Trail of Bitcoin’s Creator” and co-author of the award-winning book “Bezpieczeństwo współczesnej firmy” (Security of a Modern Company). LinkedIn profile: 17,000 followers, 4 million views per year. Awards: 4-time winner of the European Medal, Golden Statuette of the Polish Business Leader, title of “International Tax Planning Law Firm of the Year in Poland.” He specializes in strategic legal consulting, tax planning, and crisis management for business.