
The Great AI Text Generation Panic: A Modern Witch Hunt
In the hallowed halls of academia, a curious inversion is taking place. Just as medieval peasants once scrutinized their neighbors for signs of consorting with the devil, today's students pore over lecture notes searching for the telltale marks of artificial intelligence. They hunt for linguistic quirks such as overuse of the word "crucial," for distorted images with extraneous limbs, and for the occasional prompt left visible in the document margins. The modern witch's mark, it seems, is a request to "expand on all areas. Be more detailed and specific" (inside joke – if you are a heavy prompter, you know what I mean).
The New York Times article about professors using ChatGPT reveals a fascinating paradox in our collective response to AI text generation. We have entered an era in which the hunted have become the hunters, and those who once warned against the sorcery of machine-generated text now secretly employ it themselves. This moral panic rests on three fundamental misunderstandings about our relationship with generative AI.
The Unstoppable Tide
The first truth we must acknowledge is that mass adoption of AI text generation is as inevitable as the printing press or the internet once were. Those who resist this technological transformation will find themselves in the position of medieval scribes protesting Gutenberg's innovation – technically correct about certain losses, but ultimately standing against the inexorable current of progress.
Consider Professor Kwaramba's astute observation that ChatGPT is merely "the calculator on steroids." Just as the calculator freed mathematicians from tedious computation to focus on higher-order thinking, AI tools are liberating educators from rote tasks. Harvard's Professor Malan demonstrates this eloquently by deploying custom AI chatbots to field routine questions, freeing him to engage with students in more meaningful ways through "memorable moments and experiences" like hackathons and personal lunches.
The panic response – banning tools, deploying dubious "AI detection" software, and creating academic honor codes specifically targeting machine assistance – resembles nothing so much as King Canute commanding the tide to recede. These efforts are not merely futile; they are counterproductive. They drive usage underground rather than fostering thoughtful integration, entrenching the very behavior their proponents seek to prevent.
The Fallacy of Distinction
The second misconception lies in the belief that we can reliably distinguish between human and machine-generated text, assigning different value to each. This is technological essentialism at its most absurd – judging content not by its quality but by its genesis. As the article demonstrates, even students actively hunting for AI “tells” occasionally misidentify human-created content as machine-generated, and vice versa.
This distinction becomes even more meaningless when we consider collaborative creation. When Professor Arrowood uploads his class materials to ChatGPT to “give them a fresh look,” who precisely is the author of the resulting content? The professor who supplied the source material and pedagogical intent? The AI that restructured and expanded it? The engineers who built the model? Or perhaps the countless writers whose works trained the system? The answer is simultaneously all and none of the above.
The very concept of “AI-generated” versus “human-generated” creates a false binary that fails to recognize the increasingly symbiotic relationship between human and machine creativity. As philosopher Andy Clark would argue, we are “natural-born cyborgs,” and our intellectual output has always depended on technological extensions of our capabilities—from the humble pencil to the modern neural network.
Quality Above Origin
The third and most important realization is that we should evaluate content based on its merit rather than its provenance. Ella Stapleton's complaint about her professor's ChatGPT-assisted materials reveals this tension perfectly. Her objection wasn't fundamentally about the use of AI – it was about the poor quality of the resulting materials, with their "distorted text, photos of office workers with extraneous body parts and egregious misspellings."
Had the professor used ChatGPT more skillfully – carefully reviewing and correcting its output before presentation – Stapleton might never have noticed or objected. This suggests that her real concern (and that of most students paying premium tuition) isn’t technological but qualitative. They expect excellence, regardless of how it’s produced.
This leads us to a more sophisticated approach: judging outputs on their intrinsic value rather than their method of creation. A brilliant essay remains brilliant whether typed by human hands, dictated to an AI, or co-created through iterative prompting. Conversely, mediocre work remains mediocre regardless of whether it came from a tired student’s midnight typing session or a hastily executed AI prompt.
Beyond the Witch Hunt
The academic panic over AI text generation bears all the hallmarks of the moral panics that have accompanied every significant technological advance. Like the medieval witch hunts, it combines genuine concerns with exaggerated fears, creates arbitrary tests to detect "unnatural" influences, and establishes harsh penalties for transgressors.
But as Ohio University’s Professor Shovlin wisely notes, the true value of education lies in “the human connections that we forge with students as human beings who are reading their words and who are being impacted by them.” This human connection transcends the tools used to facilitate communication; it cannot be measured by counting the percentage of machine-generated words in a document.
Rather than continuing this futile witch hunt, we should embrace a more nuanced approach. Universities should focus on teaching responsible AI integration—helping students and faculty understand when and how these tools enhance human capabilities and when they potentially diminish them. As Professor Shovlin puts it, students need to “develop an ethical compass with AI” because they will inevitably use it in their professional lives.
The case of Northeastern University points toward this more enlightened future. Rather than banning AI outright, their policy “requires attribution when AI systems are used and review of the output for accuracy and appropriateness.” This approach acknowledges the technology’s inevitability while establishing reasonable guidelines for its ethical use.
In the final analysis, the hysteria surrounding AI-generated text may reveal more about our anxieties concerning human uniqueness than any genuine threat to education. Like all moral panics, it will eventually subside as the technology becomes normalized. The students who now scrutinize their professors’ slides for AI “tells” will someday chuckle at their former concerns, just as we now laugh at those who once feared that calculators would destroy mathematical thinking or that word processors would ruin the art of composition.
The witch hunt will end not because we successfully banish the witches, but because we finally recognize that we’ve been hunting phantoms of our own creation.

Founder and Managing Partner of Skarbiec Law Firm, recognized by Dziennik Gazeta Prawna as one of the best tax advisory firms in Poland (2023, 2024). Legal advisor with 19 years of experience, serving Forbes-listed entrepreneurs and innovative start-ups. One of the most frequently quoted experts on commercial and tax law in the Polish media, regularly publishing in Rzeczpospolita, Gazeta Wyborcza, and Dziennik Gazeta Prawna. Author of the publication “AI Decoding Satoshi Nakamoto. Artificial Intelligence on the Trail of Bitcoin’s Creator” and co-author of the award-winning book “Bezpieczeństwo współczesnej firmy” (Security of a Modern Company). LinkedIn profile: 17,000 followers, 4 million views per year. Awards: 4-time winner of the European Medal, Golden Statuette of the Polish Business Leader, title of “International Tax Planning Law Firm of the Year in Poland.” He specializes in strategic legal consulting, tax planning, and crisis management for business.