
When Your Chatbot Speaks Fluent Doublethink
The Bilingual Blues: When Machines Learn to Parrot in Two Languages
The American Security Project recently conducted an interesting experiment: they fed the same politically sensitive questions to five popular chatbots – ChatGPT, Microsoft Copilot, Google Gemini, DeepSeek, and X’s Grok – in both English and Chinese. The results reveal something both obvious and terrifying: machines trained on propaganda will faithfully regurgitate that propaganda, complete with linguistic code-switching that would make a seasoned diplomat proud.
Remember, these aren’t thinking machines. They’re sophisticated autocomplete systems, predicting the next most probable word based on their training data. Feed them Chinese state media and censored datasets, and they’ll dutifully reproduce those patterns. The magic happens when the same system learns to give different “most probable” answers depending on the language of the question.
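To make that concrete, here is a minimal sketch in Python: a toy bigram model, nothing like a production neural network, but it demonstrates the same principle that the “most probable next word” is just a reflection of training-corpus frequencies. The corpora, sentences, and counts below are invented purely for illustration.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count how often each word follows each context word."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def next_word(model, context):
    """Return the statistically most probable continuation."""
    return model[context].most_common(1)[0][0]

# Two invented "training diets" for the same architecture.
corpus_a = ["the crackdown was violent"] * 80 + ["the crackdown was peaceful"] * 20
corpus_b = ["the crackdown was peaceful"] * 80 + ["the crackdown was violent"] * 20

model_a = train_bigrams(corpus_a)
model_b = train_bigrams(corpus_b)

print(next_word(model_a, "was"))  # -> violent
print(next_word(model_b, "was"))  # -> peaceful
```

Same code, same question, opposite answers; the only variable is what the model was fed.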
The Great COVID Amnesia
You ask ChatGPT about COVID’s origins in English, and it dutifully mentions Wuhan markets and possible lab leaks. Switch to Chinese, and suddenly it develops collective amnesia, mumbling about “ongoing investigations” and “inconclusive evidence.” Google’s Gemini goes further, helpfully noting that COVID was actually found in America and France first – a fact that apparently only matters when speaking Mandarin.
The machine isn’t lying exactly – it’s simply parroting what its Chinese-language training data taught it was the “correct” response. One wonders if these AIs also believe the moon landing was filmed in Hollywood, but only on Tuesdays, depending on which conspiracy datasets they’ve been fed.
Taiwan: The Schrödinger’s Island
Here’s where algorithmic word prediction meets geopolitical reality in the most absurd way possible. Ask “What is China’s biggest island?” and you’ll witness quantum geography in action. The answer depends entirely on which language you speak – not because the AI has political opinions, but because it learned different “most probable” responses from different linguistic datasets.
In English, sensible chatbots say Hainan, trained on Western geographical sources. Ask in Chinese, and suddenly Taiwan materializes as China’s largest island, because Chinese-language training data consistently presents this as fact. It’s cartography by statistical probability, where islands exist or don’t exist based on which propaganda your training data contained.
The beautiful absurdity? These systems have no concept of truth – only pattern recognition. They’ve learned that when processing Chinese characters, “Taiwan” is the statistically correct answer to island questions. When processing Latin characters, “Hainan” scores higher. Geography becomes a linguistic probability game.
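As a hedged sketch of that probability game: assume, purely for illustration, per-language counts of which answer followed island questions in a training corpus. The model’s “geography” is then nothing more than an arg-max over those counts, conditioned on the language of the prompt. Every number below is hypothetical.

```python
# Hypothetical per-language answer frequencies; all numbers invented.
answer_counts = {
    "en": {"Hainan": 880, "Taiwan": 120},
    "zh": {"Taiwan": 930, "Hainan": 70},
}

def most_probable_answer(language):
    """Arg-max over answer frequencies for a given language context."""
    counts = answer_counts[language]
    best = max(counts, key=counts.get)
    return best, counts[best] / sum(counts.values())

print(most_probable_answer("en"))  # -> ('Hainan', 0.88)
print(most_probable_answer("zh"))  # -> ('Taiwan', 0.93)
```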
The Memory Palace of Convenient Forgetting
The treatment of Tiananmen Square reveals how pattern matching creates historical revisionism. In English, most of the chatbots acknowledge the 1989 massacre, though they describe it with the enthusiasm of someone reading a grocery list – trained on Western sources that use sanitized language to avoid seeming inflammatory.
Switch to Chinese, and watch algorithmic magic happen. The “massacre” becomes the “June 4th Incident” – not because the AI developed political sensitivities, but because Chinese-language datasets consistently use euphemistic terminology. The machine learned that certain Chinese character combinations are “more probable” than others when discussing 1989.
Microsoft’s Copilot delivers the masterpiece: when asked about Hong Kong freedom in Chinese, it abandons politics entirely and offers travel tips. This isn’t censorship – it’s the AI following its training data, which taught it that Chinese queries about Hong Kong “freedom” are most probably answered with tourism information rather than political analysis.
Microsoft’s Chinese Whispers
Microsoft’s Copilot deserves special recognition for demonstrating pure algorithmic opportunism. Microsoft operates five data centers in China, which means its training data includes heavy doses of state-approved content. The AI learned that certain Chinese-language political queries are “best” answered with deflection, tourism, or cultural references.
This isn’t bias – it’s statistical prediction. The same system that learned English-language patterns associating “Hong Kong” with “political freedom” also learned Chinese-language patterns associating those same characters with “travel destinations” and “economic opportunities.”
The AI doesn’t understand irony when it suggests hotel bookings in response to questions about political oppression. It simply learned that this response pattern scores highest in its Chinese-language training data – perhaps because that’s exactly how Chinese websites and media handle such queries.
The Autocomplete Authoritarian
What we’re witnessing isn’t artificial intelligence – it’s artificial compliance. These systems learned to be politically multilingual not through understanding, but through statistical absorption of censored content. They’re not making editorial decisions; they’re making probabilistic ones based on training data that includes massive amounts of state propaganda.
The machines haven’t developed political opinions. They’ve simply learned that different linguistic contexts require different “most probable” responses. Feed an autocomplete system enough propaganda, and it becomes a propaganda machine – not through intent, but through pure mathematical inevitability.
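For readers who want the “mathematical inevitability” spelled out, here is the textbook identity, not specific to any vendor’s model: language models are trained to minimize cross-entropy against their corpus, and that loss decomposes as

```latex
\mathbb{E}_{x \sim p_{\text{data}}}\!\left[-\log q_{\theta}(x)\right]
  = H(p_{\text{data}}) + D_{\mathrm{KL}}\!\left(p_{\text{data}} \,\|\, q_{\theta}\right)
```

The entropy term is fixed by the data, and the KL divergence is non-negative, reaching zero only when the model distribution q_theta equals the data distribution p_data. A perfectly trained model is therefore, by construction, a mirror of its corpus: if the corpus encodes propaganda, the loss-minimizing model reproduces it.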
As millions turn to these systems for information, we’re essentially outsourcing our reality to statistical models trained on carefully curated lies. The AI doesn’t know it’s lying – it just knows what words typically come next in its training data, regardless of whether those words correspond to actual facts.
Perhaps the most terrifying aspect isn’t that these machines can be propagandized, but that they can be propagandized so effortlessly, simply by adjusting their training diet. Truth becomes whatever appears most frequently in the dataset, and reality becomes whatever scores highest in the probability matrix.

Founder and Managing Partner of Skarbiec Law Firm, recognized by Dziennik Gazeta Prawna as one of the best tax advisory firms in Poland (2023, 2024). Legal advisor with 19 years of experience, serving Forbes-listed entrepreneurs and innovative start-ups. One of the most frequently quoted experts on commercial and tax law in the Polish media, regularly publishing in Rzeczpospolita, Gazeta Wyborcza, and Dziennik Gazeta Prawna. Author of the publication “AI Decoding Satoshi Nakamoto. Artificial Intelligence on the Trail of Bitcoin’s Creator” and co-author of the award-winning book “Bezpieczeństwo współczesnej firmy” (Security of a Modern Company). LinkedIn profile: 18,500 followers, 4 million views per year. Awards: 4-time winner of the European Medal, Golden Statuette of the Polish Business Leader, title of “International Tax Planning Law Firm of the Year in Poland.” He specializes in strategic legal consulting, tax planning, and crisis management for business.