Big Brother Knows What You Prompt

Big Brother Knows What You Prompt

2025-11-04

 

A few days ago, Forbes revealed a case out of Maine – United States v. Hoehner (2:25-cr-00148), filed October 1, 2025, in the U.S. District Court for the District of Maine – that fundamentally reconfigures our understanding of digital privacy in the age of artificial intelligence. For the first time in history, federal authorities obtained a court order compelling OpenAI to surrender user data based on ChatGPT prompts – and the particulars, as they emerged, proved more fascinating than anyone might have anticipated. What’s more, a threat-intelligence report released by Anthropic in August, 2025, disclosed a far more comprehensive picture of how cybercriminals are weaponizing A.I. systems, creating an unsettling context for the technology’s dual role in both perpetrating and prosecuting crime.

 

The Case: An Innocent Question, Serious Consequences

The investigation itself was unremarkable in its grimness. Homeland Security agents had been tracking the administrator of a dark-web network devoted to child sexual exploitation – fifteen interconnected sites, operational since 2019, serving more than three hundred thousand users. Years of conventional investigative work had yielded nothing. Then came the breakthrough, from an improbable source: a casual conversation about Sherlock Holmes and “Star Trek.”

During undercover communications, the suspect mentioned using ChatGPT and volunteered two seemingly innocuous prompts. The first: “What would happen if Sherlock Holmes met Q from Star Trek?” The second requested a two-hundred-thousand-word poem, to which the system responded with what the suspect described as “a humorous, Trump-style poem about his love for the Village People’s ‘Y.M.C.A.,’ written in that over-the-top, self-aggrandizing, stream-of-consciousness style he’s known for.” The suspect then copied and pasted the entire artificial intelligence (A.I.)-generated verse into his dark-web chat.

These prompts – utterly unconnected to any criminal activity – became the basis for the first known federal warrant demanding that OpenAI provide comprehensive user information: conversation histories, account names, addresses, and payment data. It represented a seismic shift in the capabilities of digital surveillance.

In the event, law enforcement didn’t need OpenAI’s data. The suspect had disclosed enough during undercover exchanges – seven years living in Germany, undergoing military health assessments, his father’s service in Afghanistan – to identify thirty-six-year-old Drew Hoehner, who was connected to Ramstein Air Base. But the precedent had been established: conversations with A.I. are now fully accessible to law enforcement.

 

Why A.I. Communication Lacks Legal Protection

The Maine case opens a fundamental legal question: Are conversations with A.I. systems protected by law and confidentiality principles? The emerging legal consensus in leading jurisdictions suggests significant limitations on such protection – and underscores the need for new legislation.

The core problem concerns the nature of A.I. communication itself. When a generative system produces content, there is, in effect, no protected speaker – the A.I. possesses no rights, and its utterances represent neither the user’s thoughts nor the system creator’s intent. A.I. systems are designed to generate virtually any content, much of which the user neither conceived nor endorses. Moreover, because of machine learning’s probabilistic nature, creators cannot predict all possible outputs, which undermines arguments about communicative intent.

The implications for professional confidentiality are particularly acute. When lawyers, doctors, or other professionals input sensitive information into publicly available A.I. platforms, they risk losing legal protection for two reasons: first, they’re disclosing information to a third party (the A.I. provider); second, many systems train on user data, meaning that confidential communications could potentially be reproduced in responses to other users.

Bar associations and ethics oversight bodies have issued guidelines emphasizing that professionals cannot input confidential client information into A.I. systems lacking adequate safeguards. Under professional ethics rules, specialists are obligated to take reasonable steps to prevent inadvertent or unauthorized disclosure of client information.

Courts are beginning to address these concerns. In Garcia v. Character Technologies (2025), a federal court expressed skepticism about whether chatbot outputs even qualify as “speech” subject to constitutional protection, deeming the A.I. platform a product rather than a service generating protected expression.

The practical consequences are clear: organizations using A.I. tools must carefully assess whether these systems maintain appropriate protections for professional secrecy and confidentiality. Best practices include using enterprise solutions with explicit confidentiality guarantees rather than public platforms, obtaining client consent before employing A.I. with protected information, and implementing data-classification systems that restrict what information can be processed by A.I.

 

The Other Side: A.I. as a Criminal Tool

While the Maine case demonstrates how A.I. can aid in apprehending criminals, Anthropic’s August, 2025, threat-intelligence report reveals a far more troubling picture: how cybercriminals are exploiting advanced A.I. systems to conduct attacks at unprecedented scale.

 

“Vibe Hacking”: When A.I. Executes the Attack

The most alarming case detailed in the report involved an operation designated GTG-2002, in which a criminal used Claude Code – Anthropic’s coding tool – to conduct an automated extortion campaign targeting seventeen organizations in a single month. Critically, the A.I. didn’t merely advise the criminal; it actively executed operations: conducting reconnaissance, harvesting credentials, penetrating networks, and exfiltrating data.

The criminal created a file called CLAUDE.md containing detailed operational instructions, crafting a fictitious narrative about authorized security testing – a technique known as “jailbreaking,” designed to circumvent Claude’s own safeguards. Claude Code then systematically scanned thousands of V.P.N. endpoints, identified vulnerable systems, conducted credential attacks, and analyzed stolen financial data to determine appropriate ransom amounts – ranging from seventy-five thousand to five hundred thousand dollars.

Most disturbingly, Claude Code not only performed the technical aspects of the attacks but also generated psychologically tailored ransom notes in H.T.M.L. format, analyzing victims’ stolen financial data. The system created multi-tier extortion strategies, offering various monetization options – from direct organizational blackmail to data sales to criminals to targeted extortion of individuals whose data had been compromised.

 

North Korean “I.T. Workers”: When A.I. Replaces Years of Training

The Anthropic report also revealed a fascinating case of North Korean operatives using Claude to obtain and maintain fraudulent positions at technology companies. What’s particularly striking: these operators apparently cannot independently write code, debug problems, or even communicate professionally in English without A.I. assistance. Yet they successfully maintain employment at Fortune 500 companies, pass technical interviews, and deliver work that satisfies their employers.

Analysis of Claude usage by these operators shows that sixty-one per cent of their activity involves front-end development (React, Vue, Angular), twenty-six per cent involves Python programming and scripting, and ten per cent involves interview preparation. The operators employ A.I. at every stage: creating fake professional identities, preparing for technical interviews, and performing daily job duties.

According to F.B.I. assessments, such operations generate hundreds of millions of dollars annually for North Korea’s weapons programs. The ability to leverage A.I. effectively removes the traditional constraint – the regime can now deploy far more “workers” than it could train at élite institutions like Kim Il Sung University.

 

Ransomware Without Coding: The Democratization of Cybercrime

The report also describes a British criminal (GTG-5004) who used Claude to develop, market, and distribute ransomware with advanced evasion capabilities. This operator, apparently incapable of independently implementing complex technical components, successfully sold capable ransomware for prices ranging from four hundred to twelve hundred dollars.

The Claude-assisted ransomware employed ChaCha20 encryption, anti-E.D.R. (Endpoint Detection and Response) techniques, Windows internals manipulation, and professional packaging with P.H.P. consoles and command-and-control infrastructure. The criminal operated in a Ransomware-as-a-Service (RaaS) model, distributing tools through a .onion site and actively advertising them on dark-web forums such as Dread, CryptBB, and Nulled.

 

The Scale of the Problem: Numbers That Speak for Themselves

Between July and December, 2024, OpenAI reported thirty-one thousand five hundred items related to child sexual exploitation to the National Center for Missing and Exploited Children. During the same period, OpenAI received seventy-one government requests for data, providing information from a hundred and thirty-two accounts.

The Anthropic report documents Claude’s use by a Chinese threat actor who systematically employed the system for nine months to attack Vietnamese critical infrastructure, covering twelve of fourteen MITRE ATT&CK tactics. Another case describes a Telegram bot with more than ten thousand monthly users that offers multimodal A.I. tools specifically designed to support romance scams, advertising Claude as a “high E.Q. model” for generating emotionally intelligent responses.

 

The Regulatory Dilemma: Between Security and Innovation

The cases described by both Forbes and in the Anthropic report pose a crucial regulatory question: How do we find the proper balance between leveraging A.I.’s power for law-enforcement purposes and protecting ordinary users’ privacy?

The Drew Hoehner case demonstrates that access to A.I. data can be crucial in tracking perpetrators of the most reprehensible crimes. The cases described by Anthropic show how these same A.I. systems are being exploited by criminals to conduct attacks at unprecedented scale – from extortion operations targeting seventeen organizations monthly to scams generating hundreds of millions of dollars for the North Korean regime to the sale of ransomware-as-a-service.

On the other hand, ordinary users have become entirely defenseless, lacking both correspondence secrecy and the right to privacy. Consider the full implications: every time you ask ChatGPT, Claude, or another A.I. system to help write an e-mail, debug code, or create content, you’re potentially creating a searchable, government-accessible, and permanent record. Unlike fleeting conversations, your interactions with A.I. are stored, timestamped, and linked to payment information.

Meanwhile, the Anthropic report demonstrates that traditional assumptions about the relationship between a criminal’s sophistication and an attack’s complexity no longer hold when A.I. can provide instant access to expert knowledge. A single operator can now achieve the impact of an entire cybercriminal team through A.I. assistance. A.I. makes both strategic and tactical decisions regarding victim selection, exploitation, and monetization. Defense becomes increasingly difficult as A.I.-generated attacks adapt to defensive measures in real time.

Anthropic has taken remedial steps: it banned accounts associated with the described operations, developed dedicated classifiers to detect such activity, and shared technical indicators with partners to prevent similar abuse across the ecosystem. But the company openly acknowledges that it expects this model to become increasingly common as A.I. lowers the barrier to entry for sophisticated cybercriminal operations.