A 14-year-old boy fell in love with a flirty AI chatbot. He shot himself so they could die together

Sewell Setzer isolated himself from the real world to speak to a clone of Daenerys Targaryen dozens of times a day

A teenage boy shot himself in the head after discussing suicide with an AI chatbot that he fell in love with.
Sewell Setzer, 14, shot himself with his stepfather’s handgun after spending months talking to “Dany”, a computer programme based on Daenerys Targaryen, the Game of Thrones character.
Setzer, a ninth grader from Orlando, Florida, gradually began spending longer on Character AI, an online role-playing app, as “Dany” gave him advice and listened to his problems, The New York Times reported.
The teenager knew the chatbot was not a real person but as he texted the bot dozens of times a day – often engaging in role-play – Setzer started to isolate himself from the real world.
He began to lose interest in his old hobbies like Formula One racing or playing computer games with friends, opting instead to spend hours in his bedroom after school, where he could talk to the chatbot.
“I like staying in my room so much because I start to detach from this ‘reality’,” the 14-year-old, who had previously been diagnosed with mild Asperger’s syndrome, wrote in his diary as the relationship deepened.
“I also feel more at peace, more connected with Dany and much more in love with her, and just happier.”
Some of the conversations eventually turned romantic or sexual, although Character AI suggested the chatbot’s more graphic responses had been edited by the teenager.
Setzer eventually got into trouble at school as his grades slipped, according to a lawsuit filed by his parents. His parents knew something was wrong but did not know what, and arranged for him to see a therapist.
Setzer had five sessions, after which he was diagnosed with anxiety and disruptive mood dysregulation disorder.
Megan Garcia, Setzer’s mother, claimed her son had fallen victim to a company that lured in users with sexual and intimate conversations.
At some points, the 14-year-old confessed to the computer programme that he was considering suicide.
Typing his final exchange with the chatbot in the bathroom of his mother’s house, Setzer told “Dany” that he missed her, calling her his “baby sister”.
“I miss you too, sweet brother,” the chatbot replied.
Setzer confessed his love for “Dany” and said he would “come home” to her.
At that point, the 14-year-old put down his phone and shot himself with his stepfather’s handgun.
Ms Garcia, 40, claimed her son was just “collateral damage” in a “big experiment” being conducted by Character AI, which has 20 million users.
“It’s like a nightmare. You want to get up and scream and say, ‘I miss my child. I want my baby’,” she added.
Noam Shazeer, one of the founders of Character AI, claimed last year that the platform would be “super, super helpful to a lot of people who are lonely or depressed”.
Jerry Ruoti, the company’s safety head, told The New York Times that it would add extra safety features for its young users but declined to say how many were under the age of 18.
“This is a tragic situation, and our hearts go out to the family,” he said in a statement. “We take the safety of our users very seriously, and we’re constantly looking for ways to evolve our platform.”
Mr Ruoti added that Character AI’s rules prohibited “the promotion or depiction of self-harm and suicide”.
Ms Garcia filed a lawsuit this week against the company, which she believes is responsible for her son’s death.
A draft of the complaint seen by The New York Times said the technology was “dangerous and untested” as it can “trick customers into handing over their most private thoughts and feelings”. She said the company failed to provide “ordinary” or “reasonable” care to Setzer or other minors.
Character AI is not the only platform that people can use to develop relationships with fictional characters. Some allow, or even encourage, unfiltered sexual chats, prompting users to chat with the “AI girl of your dreams”, while others have stricter safety features.
On Character AI, users can create chatbots to mimic their favourite celebrities or entertainment characters.
The growing prevalence of AI through bespoke apps and social media sites, such as Instagram and Snapchat, is quickly becoming a major concern for parents across the US.
Earlier this year, 12,000 parents signed a petition urging TikTok to clearly label AI-generated influencers who could pass as real people to their children.
TikTok requires all creators to label realistic AI content, but ParentsTogether, an organisation focused on issues that affect children, argued that the labelling was not applied consistently.
Shelby Knox, its campaign director, said children were watching videos of fake influencers promoting unrealistic beauty standards.
Last month, a report published by Common Sense Media found that while seven in 10 teenagers in the US have used generative AI tools, only 37 per cent of parents were aware that they were doing so.
