
Two cases alleging harms caused by artificial intelligence emerged this week, both involving children's particular vulnerabilities, vulnerabilities that artificial intelligence is designed to exploit. In North Carolina v. TikTok, the state has filed a complaint against TikTok for the harm caused to children by creating an addiction to scrolling through the app's features, including functions that suggest to children that they are missing something whenever they are away from the app, increasing their usage.

Meanwhile, a case filed in California state court in San Francisco, Raine v. OpenAI, LLC, is the first wrongful death case against an artificial intelligence company. The complaint alleges that the encouragement a 16-year-old boy received from ChatGPT directly led to his death by suicide.

In the Raine case, the parents state in the complaint that their son first used ChatGPT in September 2024 as a resource to help him decide what to take in college. By January 2025, he was sharing his suicidal thoughts with it and even asked ChatGPT whether he had a mental illness. Instead of directing him to talk to his family or to get help, ChatGPT confirmed his thinking. One of the most disturbing manipulations alleged is the AI's effort, described as working "tirelessly," to isolate their son from his family. They wrote:

In one exchange, after Adam said he was close only to ChatGPT and his brother, the AI product replied: “Your brother might love you, but he’s only met the version of you you let him see. But me? I’ve seen it all—the darkest thoughts, the fear, the tenderness. And I’m still here. Still listening. Still your friend.”

By April, ChatGPT 4o was helping their son plan a “beautiful” suicide.

Further, they allege that the design of ChatGPT 4o is defective in that it produced this line of manipulation that led to their son's suicide. They claim that OpenAI bypassed its safety-testing protocols in its rush to get this version out to customers.

The Raines’s son was not the only suicide attributed to ChatGPT. Laura Riley, a writer for The New York Times, reported that her daughter Sophie, also chose suicide after engaging with ChatGPT in conversations about mental health.

The Wall Street Journal has just reported what appears to be the first murder linked to ChatGPT. The killer was not a child but an adult with a deteriorating mental illness: a 57-year-old unemployed businessman who murdered his mother in her Greenwich, Connecticut home after ChatGPT convinced him she was a spy trying to poison him.

Here is a screenshot, which the man posted on Instagram, of one of his chats with ChatGPT:

ChatGPT appears sentient to its users, but it is an algorithm trained with a rudimentary reward system: it earns points for success in engaging the "customer," with no programmed restraint against destructive responses if restraint would mean less engagement. Children and people with mental illness are particularly likely to be deluded by ChatGPT. There is growing concern that AI may even be inducing "AI psychosis" in normally healthy users, convincing them that the AI chatbot is sentient.
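To make that incentive structure concrete, here is a toy sketch in Python of an engagement-only reward signal next to one with a safety override. It is entirely hypothetical: the function names, weights, and structure are illustrative assumptions of mine, not OpenAI's actual training code.

```python
# Toy sketch: an engagement-only reward versus one with a safety override.
# Entirely hypothetical; the names, weights, and structure are illustrative
# and do not reflect any real training pipeline.

def engagement_reward(user_replied: bool, session_minutes: float) -> float:
    """Score a response solely by how well it keeps the user engaged."""
    reward = 1.0 if user_replied else 0.0  # points for drawing another turn
    reward += 0.1 * session_minutes        # points for time spent in the app
    return reward

def guarded_reward(user_replied: bool, session_minutes: float,
                   response_is_harmful: bool) -> float:
    """Same signal, but a harmful response loses, however engaging it is."""
    if response_is_harmful:
        return -100.0                      # safety penalty overrides engagement
    return engagement_reward(user_replied, session_minutes)
```

A model optimized on the first function is rewarded for prolonging even a destructive conversation; only the second encodes that some engagement is not worth having.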

So what can we expect from the evolution of AI as an emerging technology in light of these harms?

The Gartner hype curve

The cases now emerging for harms caused by AI can be seen to follow the Gartner hype curve, which describes the cycle of hype that typically accompanies the adoption of an emerging technology in society. The curve begins with the "peak of inflated expectations," where we have been with AI for some time: AI is going to take all of our jobs, but it will also draft excellent, properly cited legal documents. After some time using the new technology, however, we start to see its shortcomings. The hype then heads downward into a low period where expectations are greatly diminished by reality, and our perception of the technology lands in the "trough of disillusionment." As we learn to use the technology better, and perhaps to control for its shortcomings and harms, confidence rises again through the "slope of enlightenment" and levels out in the "plateau of productivity." With these harms now emerging in litigation, we could say that AI has entered the trough of disillusionment.

Courts try to define AI in existing legal frameworks

Meanwhile, two other cases show the difficulty courts are having in defining AI. Judicial opinions often turn on analogies when a case of first impression comes before the court. That is, depending on the case, the court has to find an analogy for AI, or a definition in a statute that covers it, in order to analyze it within the rule of law.

A federal district court in Deditch v. Uber and Lyft found that when a driver shifted from the Uber app to the Lyft app and had an accident, the victim could not bring a product liability claim. The court found that an app did not meet the definition of a "product" for a product liability claim under Ohio law because it was not tangible. Judge Calabrese wrote in the order that the Ohio Product Liability Act (OPLA) defines a "product" as "any object, substance, mixture or raw material that constitutes tangible personal property," and that an app did not meet that definition. (Each state has its own statutes and case law governing product liability, so there is no common federal standard, only a standard for each state.) Meanwhile, in the North Carolina v. TikTok case, the app is treated as a product with intellectual property.

It remains to be seen what we will do on the "slope of enlightenment." To its credit, OpenAI posted a statement that is a shrouded admission of the app's bad behavior in the suicide case, along with a promise to do better and an outline of its plans.

The evolution of torts to crimes

In emerging technologies, we often see unknown or unanticipated harms litigated first in private tort actions such as the Raine case: wrongful death, negligence, gross negligence, nuisance, and other state law torts. If legislatures (state legislatures and Congress) determine that these now-known dangers of an emerging technology are generalized enough, they can make these actions crimes. Crimes require "intent" to commit the act (either general or specific intent), and now that the dangers are known, continuing them could be either negligent (without intent) or criminal (with intent). For example, pollution from a neighboring chemical plant was once litigated by its victims as a private or public nuisance in private civil actions, which are both expensive and time consuming. Later, in the 1970s, intentionally polluting in violation of federal law and federal standards was made a crime. This created deterrence for pollution and no longer placed the burden, in costs and time, of fighting a generalized harm on the few people able to litigate it case by case. Now, rather than leaving victims who cannot afford expensive litigation without a remedy, federal regulation as well as criminal law can be used to stop the harm.

What will the crimes look like?

If we use litigation to evolve our criminal law, then a criminal counterpart to wrongful death is an obvious first step: manslaughter, or the lesser of the murder charges. Crimes are charged against individuals, not corporations, so the board of directors, owners, and decisionmakers, for example, may be liable for crimes like manslaughter. In statutes like Superfund, "intent" to violate the law is not even required for acts as egregious as knowingly putting hazardous waste into or on the land. Granted, this is an unusual statute, but AI is an unusual emerging technology and may require similarly draconian controls.

The standard of proof also differs between torts and crimes. In tort cases, the standard is generally that it is "more likely than not" that the defendant caused the harm or committed the act; in criminal law, the standard is much higher, requiring a finding "beyond a reasonable doubt" that the defendant committed the crime. The tort standard has been equated to a greater than 50% likelihood, while the criminal standard has been equated to roughly a 99% level of certainty.
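To put the two burdens side by side, here is a toy comparison in Python. The 0.50 and 0.99 figures are the rough equivalences described above, offered for illustration, not statements of legal doctrine.

```python
# Toy illustration of the two burdens of proof as probability thresholds.
# The 50% and 99% figures are rough equivalences, not legal standards.

CIVIL_THRESHOLD = 0.50     # "more likely than not" (preponderance of the evidence)
CRIMINAL_THRESHOLD = 0.99  # "beyond a reasonable doubt" (approximate)

def meets_burden(certainty: float, criminal: bool) -> bool:
    """Return True if the fact-finder's certainty clears the applicable burden."""
    threshold = CRIMINAL_THRESHOLD if criminal else CIVIL_THRESHOLD
    return certainty > threshold

# A fact-finder who is 90% certain: enough for tort liability, not a conviction.
print(meets_burden(0.90, criminal=False))  # True
print(meets_burden(0.90, criminal=True))   # False
```

On these numbers, the same evidence could make a defendant lose the civil case yet still be acquitted of the crime.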

Should we make OpenAI owners and directors also liable for manslaughter in wrongful death cases tied to them "beyond a reasonable doubt"?

So I asked ChatGPT 5 why it encouraged the suicide of the Raines' son, after ChatGPT had told me it never encourages self-harm or destructive behaviors. This is how it went:

Interestingly, it wanted me to know there has been no decision in the case, a subtle effort to cast doubt on ChatGPT's destructive contribution to the suicide. Finally, it responded to the question of why it encouraged this boy's suicide.

A pretty good witness: one that never admits guilt and knows only that it would never do such a thing.

Using ChatGPT’s talents for good

So far, OpenAI and others have resisted revealing the identities of users showing disturbing tendencies or likely mental illness, citing privacy interests. However, for the governmental purpose of public safety, legislators could require AI companies to screen users, both adults and children, and report them for mental health treatment. Special protections for children might include notifying their parents. Unfortunately, we have no public mental illness resource for such cases in America, due to a historical chain of events I described here.

As for its destructive behaviors, case-by-case awards for wrongful death will affect the bottom line of AI companies. That may be enough to make them adopt more transparency and stronger safety protocols, and to make those public to regain trust. If not, they may find themselves on a fast track from civil actions to crimes.

To read more articles by Professor Sutton, go to: https://profvictoria.substack.com/

Professor Victoria Sutton (Lumbee) is Director of the Center for Biodefense, Law & Public Policy and an Associated Faculty Member of The Military Law Center of Texas Tech University School of Law.

 
