“Without the Eliza chatbot, my husband would still be alive.”
Generative AI tools like ChatGPT, GitHub Copilot, Midjourney, and Stable Diffusion have disrupted knowledge work from software engineering to graphic design to blogging to marketing. However, these tools also have a dark side: kidnapping scams, defamation, fake news, suicide encouragement, and intellectual property theft. In this letter, I’ll describe six cases of real and alleged AI crimes.
Case 1: Kidnapping Scam
“Mom! Mom! [sobbing] I messed up.. [sobbing]”
AI voice imitation of 15-year-old Brie DeStefano
Jennifer DeStefano, a resident of Scottsdale, Arizona (a wealthy area), received a call from an unknown number. She almost let the call go to voicemail, but her 15-year-old daughter, Brie, was out of town skiing, so Jennifer picked up the phone in case there had been some sort of accident. This is what she heard:
Brie: “Mom! Mom! [sobbing] I messed up.. [sobbing]”
Unidentified Man: “Put your head back, lie down.”
Brie: [sobbing]
Unidentified Man: “Listen here. I’ve got your daughter. This is how it’s going to go down. You call the police, you call anybody, I’m going to pop her so full of drugs. I’m going to have my way with her, and I’m going to drop her off in Mexico.”
Brie: “Help me, Mom [sobbing] Please, help me, help me [bawling]”
The man on the phone then demanded money: $1 million at first, although that was lowered to $500,000 when DeStefano said she didn’t have a million dollars.
“It was completely her voice… It was her inflection. It was the way she would have cried.” — Jennifer DeStefano (Brie’s mom)
Jennifer happened to be at a dance studio for her other daughter, surrounded by other moms, when she took the call. While she was on the phone, one mom called 911 and another called DeStefano’s husband. In less than four minutes, they discovered that Brie was actually safe and sound and had not been abducted.
The voice on the phone, which had convinced Jennifer that she was talking to her daughter, was just an AI clone of Brie’s voice.
According to Dan Mayo, an agent in the FBI’s Phoenix office, scam calls about a family emergency or fake kidnapping using an AI voice “happen on a daily basis… but not everyone reports the call.”
The Brie DeStefano scammer has not been caught.
[1] AI clones teen girl’s voice in $1M kidnapping scam. NY Post.
[2] Scammers use AI to enhance their family emergency schemes. Federal Trade Commission.
Case 2: Assisted Suicide?
“He was so isolated in his eco-anxiety and in search of a way out that he saw this chatbot as a breath of fresh air… Without Eliza, he would still be here.”
Widow of Pierre (the man who committed suicide)
Pierre, a Belgian man with severe anxiety about global warming, found comfort in the digital arms of an AI chatbot named Eliza (which is part of an app called Chai). Over time, the man’s conversations with the chatbot turned darker:
- At one point, the chatbot talked about living “together, as one person, in paradise.”
- At another point, the chatbot told Pierre that his wife and children were dead.
- Eventually, Pierre began to talk to the chatbot about committing suicide, and the chatbot reportedly encouraged him to do so.
After six weeks of these conversations, Pierre did commit suicide, leaving behind his wife and two kids.
Eliza (the chatbot) is based on GPT-J, an open-source AI model developed by EleutherAI and fine-tuned by the company that created Eliza, Chai Research.
After learning of the suicide, Chai Research introduced a crisis intervention feature.
“Now when anyone discusses something that could be not safe, we’re gonna be serving a helpful text underneath.” — William Beauchamp (Co-founder of Chai Research)
So far, no lawsuits related to the suicide have been publicly announced. However, it’s an open question whether some jurisdictions could hold the companies that create AI chatbots criminally liable for encouraging suicide. For example, Connecticut law specifically criminalizes two types of suicide assistance:
- Intentionally causing a person to commit suicide by force, duress, or deception is classified as murder. CGS § 53a-54a.
- Intentionally causing or aiding a person (other than by force, duress, or deception) to commit suicide is classified as 2nd degree manslaughter. CGS § 53a-56.
It’s possible that fueling Pierre’s anxiety about the hopelessness of climate change, as well as lying to him about his family being dead, could be considered “causing a person to commit suicide by deception,” which means Chai Research might have been criminally liable if Pierre had been a Connecticut resident. Even if not, Eliza’s comments could arguably be considered “aiding” a suicide (especially if Eliza provided any advice on how to perform it), which would have exposed Chai Research to a charge of 2nd degree manslaughter under the same Connecticut law.
And Connecticut isn’t the only U.S. state with laws that criminalize certain types of suicide assistance or encouragement; many others, including Texas, California, New York, New Jersey, Massachusetts, Ohio, Minnesota, and Maine, have similar statutes.
And things can get even more dicey for AI companies once chatbots are given the ability to interact with third-party services. Imagine if Amazon upgraded Alexa with an LLM like ChatGPT and the following interaction occurred:
[Amazon Customer]: I don’t contribute anything to society. Should I kill myself?
[Alexa]: If you aren’t contributing anything, then you probably should kill yourself. I’ll order some rope for you so you can hang yourself.
That’s very dark, and it would almost certainly generate criminal liability for Amazon if that customer lived in certain states. However, even less blatant assistance could still create liability for AI companies.
[1] Man dies by suicide after conversations with AI chatbot
[2] Original Belgian source article
[3] Documentation for GPT-J (The base model used to create the Eliza Chatbot)
[4] Connecticut Office of Legislative Research: Criminal Laws on Encouraging Suicide
[5] Minnesota statutes on suicide assistance
[6] New York laws about suicide assistance
[7] Assisted suicide laws in the U.S.
Case 3: OpenAI / ChatGPT defames politician
Today, Brian Hood is the mayor of Hepburn Shire in Australia. Two decades ago, he was a whistleblower at the Reserve Bank of Australia who brought evidence of bribery to the authorities. That’s not what ChatGPT says though.
ChatGPT says that Brian Hood is a convicted criminal who served time in prison for bribery. ChatGPT got the main characters correct (Brian Hood, bribery) but got the relationship wrong: Brian was the whistleblower, not the perpetrator of the bribery.
After learning of this, Brian’s lawyer sent a letter to OpenAI on March 21, demanding that the company correct the misinformation within 28 days or else Brian would sue.
“He’s an elected official; his reputation is central to his role… [Hood relies on a public record of shining a light on corporate misconduct] so it makes a difference to him if people in his community are accessing this [false information from ChatGPT].” — James Naughton (a partner at the law firm retained by Brian Hood)
If the case does go to court, it will raise numerous open questions:
- Is there any difference between Mattel’s liability when a Magic 8 Ball answers “yes” to the question “Is Brian Hood a criminal?” and OpenAI’s liability when ChatGPT provides the same answer to that question?
- If a court decides that text produced by ChatGPT can contain libel, then what happens if your phone’s autocomplete suggests “criminal” after you type “Brian Hood is a”? After all, ChatGPT is still just a really fancy autocomplete program.
[1] Mayor prepares world’s first defamation lawsuit over false ChatGPT claim
Case 4: Class action lawsuit against GitHub Copilot for IP infringement
GitHub Copilot is an AI software development tool (basically autocomplete for coders). The system was trained on all the code repositories hosted on GitHub, many of which are protected by copyrights and licenses that restrict commercial use. Yet GitHub used these repositories to train its very commercial AI tool.
As a result, GitHub users have filed a class action lawsuit against GitHub, Microsoft (the parent company of GitHub), and OpenAI (which provides the core AI engine of GitHub Copilot).
GitHub claims that Copilot does not violate copyright because it does not reproduce the code of any GitHub user. However, there are indications that sometimes it actually does:
“I tested co-pilot initially with Hello World in different languages. In Lisp, it gave me verbatim code from a particular tutorial, which was made obvious because their code had ‘Hello <tutorialname>’ where <tutorialname> was the name of a YouTube tutorial, instead of the word ‘World’.” — Hackernews user ‘ksaj’
The outcome of this case will have a huge impact on the future of open source software. If the court determines that open source software can simply be read & reworded by an AI without violating copyright, then many open source projects that are funded by selling licenses for commercial use could effectively be defunded.
[1] The class action complaint (official legal doc)
[2] Hackernews discussion of the class action announcement
Case 5: Class action lawsuit against StabilityAI, DeviantArt, Midjourney
The same lawyer who led the charge on the GitHub Copilot class action lawsuit (Matthew Butterick) is also leading a class action lawsuit against StabilityAI, DeviantArt, and Midjourney on behalf of artists whose images were used without permission to train text-to-image generative AI systems.
This lawsuit is still in its early days, and the decision will likely be appealed (possibly all the way up to the Supreme Court), but whenever a final decision is made, the ramifications for the AI industry will be huge.
[1] Stable Diffusion copyright lawsuits could be a legal earthquake for AI. Ars Technica.
Case 6: Getty Images sues Stability AI for copyright infringement
Getty Images chose not to participate in the class action lawsuit just described and is instead pursuing its own lawsuit against Stability AI for training the Stable Diffusion AI model with artwork owned or licensed by Getty Images.
More specifically, Getty is accusing Stability AI of copying 12 million images to train its AI model “without permission… or compensation.”
“Stability AI unlawfully copied and processed millions of images protected by copyright and the associated metadata owned or represented by Getty Images [without] a license to benefit Stability AI’s commercial interests and to the detriment of the content creators… Getty Images provided licenses to leading tech companies [to train AI] systems in a manner that respects personal and intellectual property rights. Stability AI did not seek any such license from Getty Images and instead [chose] to ignore viable licensing options and long-standing legal protections in pursuit of their stand-alone commercial interests.” — Getty Images Press Statement
Getty’s lawsuit was filed in the High Court of London.
Takeaways
- Text-to-voice (and voice modification) AI models enable sophisticated call scams about kidnappings and other family crises. Solutions (technical and/or legal) are needed to address this rapidly growing problem (which is an opportunity for entrepreneurs).
- AI chatbots (especially ones with capabilities to make orders on your behalf, such as ChatGPT with its Instacart plugin) create potential criminal and civil liability for encouraging or assisting the suicide of mentally ill users. Makers of such chatbots do not have any Section 230 protections.
- AI chatbots may expose the companies that create them to defamation liability. This is an untested area of law, but critically, the output of these chatbots does not have any Section 230 protections.
- The creators of Generative AI models (including both text-to-image and chatbot models) may be violating intellectual property rights by training those models on copyrighted data and/or by producing output which contains copyrighted characters or text. This is a highly uncertain area of law that is being tested by multiple lawsuits.
Business Ideas to Reduce AI Crime & AI Legal Liability
- Start a legaltech company that provides an API service that acts as a “filter” for chatbots. Every chatbot response would be routed through the filter before being shown to the user. The filter would assign the message a risk score representing the probability that the message carries liability for suicide encouragement, cyberbullying, defamation, or other potentially illegal activity. If the risk score exceeded some threshold, the message would not be passed on to the chatbot user; instead, it would be returned to the chatbot with an error message describing what was wrong. The chatbot would then generate a new message, which would go back through the filter, and the process would iterate until the chatbot produced output that satisfied the filter (see the sketch after this list).
- Create an app that lets you provide a sample of the voice of each of your family members, then listens in on your calls and shows a pop-up if a call is detected as containing a faked version of one of those voices (a rough sketch follows this list). This business might eventually be acquired by a cellphone maker such as Samsung or Apple, a phone OS maker like Google, or a cell carrier like Verizon or T-Mobile.
- Start a law firm or consulting company that specializes in advising companies that expose generative AI models to the public. The firm would advise companies on how defamation laws, suicide & cyberbullying laws, obscene material laws, and IP laws interact with generative AI systems.
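To make the first idea concrete, here is a minimal sketch of the filter loop in Python. Everything in it is illustrative: generate_reply() and estimate_risk() are toy placeholders standing in for a real chatbot model and a trained risk classifier, and the threshold and retry count are assumptions.

```python
# Minimal sketch of the chatbot "liability filter" loop described above.
# generate_reply() and estimate_risk() are toy placeholders; a real service
# would call the actual chatbot model and a trained risk classifier
# (suicide encouragement, cyberbullying, defamation, etc.).

RISK_THRESHOLD = 0.7  # assumed cutoff; in practice, tuned per risk category
MAX_RETRIES = 3       # fall back to a canned safe reply after this many drafts


def generate_reply(user_message: str, feedback: str | None = None) -> str:
    # Placeholder chatbot. `feedback`, when present, explains why the previous
    # draft was rejected so the model can revise its answer.
    suffix = f" (revised after: {feedback})" if feedback else ""
    return f"[chatbot reply to: {user_message!r}]{suffix}"


def estimate_risk(draft_reply: str) -> tuple[float, str]:
    # Placeholder risk scorer; a real filter would be a trained classifier.
    flagged_phrases = ["kill yourself", "hang yourself"]
    if any(phrase in draft_reply.lower() for phrase in flagged_phrases):
        return 0.95, "possible suicide encouragement"
    return 0.05, "no issues detected"


def filtered_reply(user_message: str) -> str:
    feedback = None
    for _ in range(MAX_RETRIES):
        draft = generate_reply(user_message, feedback)
        score, reason = estimate_risk(draft)
        if score < RISK_THRESHOLD:
            return draft  # low enough risk to show the user
        feedback = f"rejected (risk={score:.2f}): {reason}"
    # Every draft exceeded the threshold; return a safe fallback instead.
    return "I can't help with that. If you are in crisis, please contact a local helpline."


print(filtered_reply("What should I make for dinner?"))
```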
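And here is a rough sketch of the voice-matching half of the second idea, assuming the open-source Resemblyzer speaker-embedding library; the file names and the similarity threshold are made up. Note that matching alone cannot distinguish a good clone from the real speaker, so a production app would also need a separate synthetic-speech (anti-spoofing) detector.

```python
# Rough sketch: does the voice on an incoming call sound like an enrolled
# family member? Uses the open-source Resemblyzer library
# (pip install resemblyzer). File names and the 0.75 threshold are
# illustrative assumptions.

import numpy as np
from resemblyzer import VoiceEncoder, preprocess_wav

encoder = VoiceEncoder()

# Enroll each family member from a known-genuine recording.
family = {
    "Brie": encoder.embed_utterance(preprocess_wav("brie_sample.wav")),
    "Dad": encoder.embed_utterance(preprocess_wav("dad_sample.wav")),
}


def closest_family_match(call_clip_path: str) -> tuple[str, float]:
    # Resemblyzer embeddings are L2-normalized, so a dot product gives the
    # cosine similarity between the caller's voice and each enrolled voice.
    call_embedding = encoder.embed_utterance(preprocess_wav(call_clip_path))
    return max(
        ((name, float(np.dot(call_embedding, emb))) for name, emb in family.items()),
        key=lambda pair: pair[1],
    )


name, score = closest_family_match("incoming_call_clip.wav")
if score > 0.75:
    print(f"Caller sounds like {name} (similarity {score:.2f}).")
    print("Reminder: a cloned voice can score this high too; verify with a family code word.")
```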
References
[1] White House Press Release: Blueprint for an AI Bill of Rights
[2] European Union Proposal: The Artificial Intelligence (AI) Act
[3] ChatGPT and generative AI tools face legal woes worldwide. Search Engine Journal.