Stalking victim sues OpenAI, claims ChatGPT fueled her abuser’s delusions and ignored her warnings
OpenAI ignored three warnings that a ChatGPT user was dangerous — including its own mass-casualty flag — while he stalked and harassed his ex-girlfriend, a new lawsuit alleges.
After months of conversations with ChatGPT, a 53-year-old Silicon Valley entrepreneur became convinced he’d discovered a cure for sleep apnea and that powerful people were coming after him, according to a new lawsuit filed in California Superior Court in San Francisco County. He then allegedly used the tool to stalk and harass his ex-girlfriend.
Now the ex-girlfriend is suing OpenAI, alleging the company’s technology enabled the acceleration of her harassment, TechCrunch has exclusively learned. She claims OpenAI ignored three separate warnings that the user posed a threat to others, including an internal flag classifying his account activity as involving mass-casualty weapons.
The plaintiff, referred to as Jane Doe to protect her identity, is suing for punitive damages. She also filed a temporary restraining order Friday asking the court to force OpenAI to block the user’s account, prevent him from creating new ones, notify her if he attempts to access ChatGPT, and preserve his complete chat logs for discovery.
OpenAI has agreed to suspend the user’s account but has refused the rest, according to Doe’s lawyers. They say the company is withholding information about specific plans for harming Doe and other potential victims the user may have discussed with ChatGPT.
The lawsuit lands amid growing concern over the real-world risks of sycophantic AI systems. GPT-4o, the model cited in this and many other cases, was retired from ChatGPT in February .
The case is brought by Edelson PC, the firm behind the wrongful death suits involving teenager Adam Raine , who died by suicide after months of conversations with ChatGPT, and Jonathan Gavalas , whose family alleges Google’s Gemini fueled his delusions and potential mass-casualty event before his death. Lead attorney Jay Edelson has warned that AI-induced psychosis is escalating from individual harm toward mass-casualty events .
That legal pressure is now colliding directly with OpenAI’s legislative strategy: The company is backing an Illinois bill that would shield AI labs from liability even in cases involving mass deaths or catastrophic financial harm.
This Week Only: Save up to $500 for Disrupt 2026
This Week Only: Save up to $500 for Disrupt 2026
OpenAI did not respond in time to comment. TechCrunch will update the article if the company responds.
The Jane Doe lawsuit lays out in detail how that liability played out for one woman over several months.
Last year, the ChatGPT user in the lawsuit (whose name is not included in the lawsuit to protect his identity) became convinced that he had invented a cure for sleep apnea after months of “high volume, sustained use of GPT-4o.” When no one took his work seriously, ChatGPT told him that “powerful forces” were watching him, including using helicopters to surveil his activities, according to the complaint.
In July 2025, Jane Doe urged him to stop using ChatGPT and to seek help from a mental health professional. He instead turned back to ChatGPT, which assured him he was “a level 10 in sanity” and helped him double down on his delusions, per the lawsuit.
Doe had broken up with the user in 2024, and he used ChatGPT to process the split, according to emails and communications cited in the lawsuit. Rather than push back on his one-sided account, it repeatedly cast him as rational and wronged, and her as manipulative and unstable. He then took these AI-generated conclusions off the screen and into the real world, using them to stalk and harass her. This manifested in several AI-generated, clinical-looking psychological reports that he distributed to her family, friends, and employer.
Meanwhile, the user continued to spiral. In August 2025, OpenAI’s automated safety system flagged him for “Mass Casualty Weapons” activity and deactivated his account.
A human safety team member reviewed the account the next day and restored it, even though his account may have contained evidence that he was targeting and stalking individuals, including Doe, in real life. For example, a September screenshot the user sent to Doe showed a list of conversation titles including “violence list expansion” and “fetal suffocation calculation.”
The decision to reinstate is notable following two recent school shootings in Tumbler Ridge, Canada, and at Florida State University (FSU). OpenAI’s safety team had flagged the Tumbler Ridge shooter as a potential threat, but higher-ups reportedly decided not to alert authorities. Florida’s attorney general this week opened an investigation into OpenAI’s possible link with the FSU shooter.
According to the Jane Doe lawsuit, when OpenAI restored her stalker’s account, his Pro subscription wasn’t reinstated alongside it. He emailed the trust and safety team to sort it out, copying Doe on the message.
In his emails, he wrote things like: “I NEED HELP VERY FAST, PLEASE. PLEASE CALL ME!” and “this is a matter of life or death.” He claimed he was “in the process of writing 215 scientific papers,” which he was writing so fast he didn’t “even have time to read.” Included in those emails was a list of tens of AI-generated “scientific papers” with titles like: “Deconstructing Race as a Biological Category_ Legal, Scientific, and Horn of Africa Perspectives.pdf.txt.”
“The user’s communications provided unmistakable notice that he was mentally unstable and that ChatGPT was the engine of his delusional thinking and escalating conduct,” the lawsuit states. “The user’s stream of urgent, disorganized, and grandiose claims, along with a concrete ChatGPT-generated report targeting Plaintiff by name and a sprawling body of purported ‘scientific’ materials, was unmistakable evidence of that reality. OpenAI did not intervene, restrict his access, or implement any safeguards. Instead, it enabled him to continue using the account and restored his full Pro access.”
Doe, who claims in the lawsuit that she was living in fear and could not sleep in her own home, submitted a Notice of Abuse to OpenAI in November.
“For the last seven months, he has weaponized this technology to create public destruction and humiliation against me that would have been impossible otherwise,” Doe wrote in her letter to OpenAI requesting the company permanently ban the user’s account.
OpenAI responded, acknowledging the report was “extremely serious and troubling” and that it was carefully reviewing the information. Doe never heard back.
Over the next couple of months, the user continued to harass Doe, sending her a series of threatening voicemails. In January, he was arrested and charged with four felony counts of communicating bomb threats and assault with a deadly weapon. Doe’s lawyers allege this validates warnings both she and OpenAI’s own safety systems had raised months earlier, warnings the company allegedly chose to ignore.
The user was found incompetent to stand trial and committed to a mental health facility, but a “procedural failure by the State” means he will soon be released to the public, according to Doe’s lawyers.
Edelson called on OpenAI to cooperate. “In every case, OpenAI has chosen to hide critical safety information — from the public, from victims, from people its product is actively putting in danger,” he said. “We’re calling on them, for once, to do the right thing. Human lives must mean more than OpenAI’s race to an IPO.”
Rebecca Bellan is a senior reporter at TechCrunch where she covers the business, policy, and emerging trends shaping artificial intelligence. Her work has also appeared in Forbes, Bloomberg, The Atlantic, The Daily Beast, and other publications.
You can contact or verify outreach from Rebecca by emailing rebecca.bellan@techcrunch.com or via encrypted message at rebeccabellan.491 on Signal.
StrictlyVC kicks off the year in SF. Get in the room for unfiltered fireside chats with industry leaders, insider VC insights, and high-value connections that actually move the needle. Tickets are limited.
France to ditch Windows for Linux to reduce reliance on US tech Zack Whittaker
France to ditch Windows for Linux to reduce reliance on US tech
France to ditch Windows for Linux to reduce reliance on US tech
This founder helped build SpaceX’s most powerful rocket engine. Now he’s building a ‘fighter jet for orbit.’ Tim Fernholz
This founder helped build SpaceX’s most powerful rocket engine. Now he’s building a ‘fighter jet for orbit.’
This founder helped build SpaceX’s most powerful rocket engine. Now he’s building a ‘fighter jet for orbit.’
Google quietly launched an AI dictation app that works offline Ivan Mehta
Google quietly launched an AI dictation app that works offline
Google quietly launched an AI dictation app that works offline
Apple’s foldable iPhone is on track to launch in September, report says Aisha Malik
Apple’s foldable iPhone is on track to launch in September, report says
Apple’s foldable iPhone is on track to launch in September, report says
North Korea’s hijack of one of the web’s most used open source projects was likely weeks in the making Zack Whittaker
North Korea’s hijack of one of the web’s most used open source projects was likely weeks in the making
North Korea’s hijack of one of the web’s most used open source projects was likely weeks in the making
In Japan, the robot isn’t coming for your job; it’s filling the one nobody wants Kate Park
In Japan, the robot isn’t coming for your job; it’s filling the one nobody wants
In Japan, the robot isn’t coming for your job; it’s filling the one nobody wants
Anthropic says Claude Code subscribers will need to pay extra for OpenClaw usage Anthony Ha
Anthropic says Claude Code subscribers will need to pay extra for OpenClaw usage
Anthropic says Claude Code subscribers will need to pay extra for OpenClaw usage
Pontos-chave
- O caso destaca a responsabilidade das empresas de tecnologia em garantir a segurança de suas ferramentas.
- A crescente preocupação com os riscos da IA pode influenciar a legislação brasileira sobre tecnologia.
- A percepção pública sobre IA pode ser afetada negativamente por casos de abuso envolvendo essas tecnologias.
Análise editorial
O caso da Jane Doe contra a OpenAI levanta questões cruciais sobre a responsabilidade das empresas de tecnologia em relação ao uso de suas ferramentas. A alegação de que o ChatGPT não apenas falhou em proteger a vítima, mas também potencialmente facilitou o comportamento abusivo do usuário, destaca a necessidade urgente de regulamentação mais rigorosa e de práticas de segurança mais robustas na implementação de IA. Para o setor de tecnologia brasileiro, isso serve como um alerta sobre a importância de considerar as implicações éticas e legais da IA, especialmente em um contexto onde as soluções de IA estão se tornando cada vez mais integradas em diversas aplicações do dia a dia.
Além disso, a situação reflete uma crescente preocupação global sobre os riscos associados a sistemas de IA que podem reforçar comportamentos prejudiciais. À medida que o Brasil avança em sua própria legislação sobre IA, é vital que os legisladores considerem não apenas a inovação, mas também a proteção dos cidadãos contra possíveis abusos. A interação entre a tecnologia e a legislação será um ponto crítico a ser observado nos próximos meses, especialmente com o aumento das discussões sobre a responsabilidade civil das empresas de tecnologia.
Por fim, o caso pode influenciar a percepção pública sobre a IA e suas aplicações, levando a um aumento da desconfiança em relação a essas tecnologias. À medida que mais casos como este emergem, as empresas de tecnologia precisarão não apenas responder a processos judiciais, mas também se engajar proativamente com as comunidades e usuários para garantir que suas ferramentas sejam utilizadas de maneira segura e ética. O que se observa agora é um ponto de inflexão que pode moldar o futuro da regulamentação de IA no Brasil e em outros lugares.
O que esta cobertura entrega
- Atribuicao clara de fonte com link para a publicacao original.
- Enquadramento editorial sobre relevancia, impacto e proximos desdobramentos.
- Revisao de legibilidade, contexto e duplicacao antes da publicacao.
Fonte original:
TechCrunch AISobre este artigo
Este artigo foi curado e publicado pelo AIDaily como parte da nossa cobertura editorial sobre desenvolvimentos em inteligência artificial. O conteúdo é baseado na fonte original citada abaixo, enriquecido com contexto e análise editorial. Ferramentas automatizadas podem auxiliar tradução e estruturação inicial, mas a decisão de publicar, a revisão factual e o enquadramento de contexto seguem responsabilidade editorial.
Saiba mais sobre nosso processo editorial