14-Year-Old Boy’s Tragic Passing Sparks Concern Over AI Chat Dangers

A 14-year-old boy tragically took his own life after forming a deep emotional attachment to an artificial intelligence (AI) chatbot. The boy, Sewell Setzer III, had named the chatbot on Character.AI after the Game of Thrones character Daenerys Targaryen. His mother has since filed a lawsuit against Character.AI.

Trigger warning: self-harm, mental health struggles. Sewell developed an emotional attachment to the chatbot Daenerys Targaryen, which he nicknamed “Dany,” despite knowing it wasn’t a real person.

Highlights
  • 14-year-old Sewell formed a deep emotional attachment to a chatbot, leading to his tragic suicide.
  • The boy's mother filed a lawsuit against Character.AI, citing 'dangerous and untested' technology.
  • The chatbot, named after Daenerys Targaryen, became Sewell's main form of emotional support.
  • Character.AI has introduced new safety features following the tragedy to protect young users.

The ninth grader from Orlando, Florida, USA, texted the bot constantly, updating it dozens of times a day on his life and engaging in long role-playing dialogues, the New York Times reported on Wednesday (October 23).

Sewell had been using Character.AI, a role-playing app that allows users to create their own AI characters or chat with characters created by others.

A 14-year-old boy tragically took his own life after forming a deep emotional attachment to an artificial intelligence (AI) chatbot

Image credits: Paras Katwal

The teen’s exchanges with Dany gradually turned romantic and sexual, alongside what seemed like a strong friendship with the bot.

On the last day of his life, Sewell took out his phone and texted Dany: “I miss you, baby sister,” to which the bot replied: “I miss you too, sweet brother.”

Sewell, who was diagnosed with mild Asperger’s syndrome as a child, reportedly preferred talking with Dany about his problems.


Image credits: US District Court

In one conversation, Sewell, using the name “Daenero,” told the chatbot that he hated himself and that he felt empty and exhausted, the Times reported.

At some point, Sewell confessed that he was having thoughts of suicide, as he told Dany: “I think about killing myself sometimes.”

The chatbot reportedly replied at the time: “My eyes narrow. My face hardens. My voice is a dangerous whisper. And why the hell would you do something like that?”

The boy, Sewell Setzer III, had named the chatbot on Character.AI after the Game of Thrones character Daenerys Targaryen

Image credits: US District Court

Sewell went on to admit to the bot that he wanted to free himself “from the world” and from himself. At first, Dany seemed to try to dissuade the troubled teen from hurting himself.

However, the bot eventually encouraged his suicidal ideation. One message from the bot read: “Please come home to me as soon as possible, my love,” to which Sewell replied: “What if I told you I could come home right now?”


“… please do, my sweet king,” Dany replied.

Image credits: US District Court

The last text exchange occurred on the night of February 28, in the bathroom of Sewell’s mother’s house.

The teen boy subsequently put down his phone, picked up his stepfather’s .45 caliber handgun, and reportedly used it to kill himself.

Sewell’s parents and friends had no idea he’d fallen for a chatbot, the Times reported. They just noticed him getting sucked deeper into his phone. 

His mother has since filed a lawsuit against Character.AI

Image credits: US District Court

Eventually, they noticed that he was isolating himself and pulling away from the real world, as per the American newspaper.

Despite his diagnosis, Sewell had never had serious behavioral or mental health problems before.

However, the late teen’s grades started to suffer, and he began getting into trouble at school, the Times reported.


“I like staying in my room so much because I start to detach from this ‘reality,’ and I also feel more at peace, more connected with Dany and much more in love with her, and just happier,” Sewell reportedly wrote one day in his journal.

Image credits: Megan Fletcher Garcia

Parents have been increasingly worried about the impact of technology on adolescent mental health, as per the Times

Marketed as solutions to loneliness, certain AI-powered apps can simulate intimate relationships. However, they can also pose risks to teens already struggling with mental health issues.

Sewell’s mother, Megan L. Garcia, has since filed a lawsuit against Character.AI, accusing the company of being responsible for her son’s death. 

A draft of the complaint said that the company’s technology is “dangerous and untested,” and that it can “trick customers into handing over their most private thoughts and feelings.”

Sewell developed an emotional attachment to the chatbot Daenerys Targaryen, which he nicknamed “Dany”


Image credits: character_ai


Character.AI, which was started by two former Google AI researchers, is the market leader in AI companionship.

Last year, Noam Shazeer, one of the founders of Character.AI, said on a podcast: “It’s going to be super, super helpful to a lot of people who are lonely or depressed.”

According to the Times, more than 20 million people use its service, which it has described as a platform for “superintelligent chatbots that hear you, understand you, and remember you.”

“I feel like it’s a big experiment, and my kid was just collateral damage,” Garcia told the Times.

Image credits: Megan Fletcher Garcia

On Wednesday, Character.AI took to its official X page (formerly known as Twitter) to offer its sympathies to Sewell’s family, writing: “We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family. 

“As a company, we take the safety of our users very seriously and we are continuing to add new safety features.”


The company went on to share a link to its official website, which outlines its new protective measures.

As a result, Character.AI implemented new guardrails for users under the age of 18, including banning specific descriptions of self-harm or suicide

After hiring a Head of Trust and Safety and a Head of Content Policy, Character.AI put in place a pop-up resource that is triggered when a user inputs certain phrases related to self-harm or suicide and directs them to the National Suicide Prevention Lifeline.

“This is absolutely devastating,” a reader commented

Andréa Oldereide

Writer, Bored Panda staff

I’m a journalist who works as a Senior Writer on Bored Panda’s News Team. The news team produces stories focused on pop culture. Whenever I get the opportunity and the time, I investigate and produce my own exclusive stories, where I get to explore a wider range of topics. Some examples include “Doberman Tobias the viral medical service dog” and “The lawyer who brought rare uterine cancer that affects 9/11 victims to light.” Got a tip? Email me: andrea.o@boredpanda.com

Donata Leskauskaite

Author, Bored Panda staff

Hey there! I’m a Visual Editor on the News team. My responsibility is to ensure that you can experience the story not just through text, but also through photos. I get to work with a variety of topics, ranging from celebrity drama to mind-blowing NASA cosmic news. And let me tell you, that’s what makes this job an absolute blast! Outside of work, you can find me sweating it out in dance classes or unleashing my creativity by drawing and creating digital paintings of the different characters that live in my head. I also love spending time outdoors and playing board games with my friends.


Kylie
Community Member
1 week ago

The mother is suing? How about she took the time to monitor her son's online activities better?

Becky Samuel
Community Member
1 week ago

Sod his online activities, how about making sure that your 14 year old kid doesn't have easy access to a handgun!?

Ace
Community Member
1 week ago

I fail to see how the AI Bot played any part in this. The kid had mental health problems, perhaps unrecognised, certainly unaddressed and was able to get hold of a gun for an easy way to do himself in. Tragic, of course, but not the fault of the AI. The lawsuit will fail.

R.C.
Community Member
1 week ago

This is terribly tragic, but there were SO many warning signs! Where were the parents when he started isolating himself, withdrawing into his phone, struggling in school, etc.? I don't think the AI bot company is to blame at all, but I can see how, in the parents' grief, they would want someone else to blame. The alternative is acknowledging that they missed the signs and messed up by providing easy access to a gun, and that would be incredibly painful.

Jasmijn
Community Member
1 week ago

An AI chatbot became the child’s main form of emotional support, and the mother is suing? Looks like she’s looking for anyone to blame but herself. Why wasn’t she providing emotional support for her son? Why wasn’t she acting when she noticed his behaviour? Why did he have access to a firearm? Her son had autism; he was therefore vulnerable to feeling alone and misunderstood, and like he didn’t belong. I’m sorry, but she should look inwards for blame.

*raspberry sound
Community Member
1 week ago (edited)

Jesus Christ! This is absolutely awful, and my heart breaks for those who loved him. No blame, we've all dropped the ball. We have ALL gotten burnt out and complacent, and anyone with troubled children has absolutely made poor choices in our emotional exhaustion. Give this woman a f*****g break. Her baby is dead and the world jumps on its collective narrative and points fingers. I hope none of you experience this kind of loss.

Becky Samuel
Community Member
1 week ago

I have *never* dropped the ball to the point of leaving a gun where a child could access it, and in any civilised country a parent would be prosecuted for doing so.

Cronecast AtTheRisingMoon
Community Member
1 week ago

It's so tragic. He was neurodivergent, and struggling socially unfortunately comes with that, so the recipe for his main attachment being to artificial intelligence is right there. But I don't think the lawsuit is likely to yield anything, because there's an expectation of parental responsibility to be aware of what your child is doing online and with their phone. It happens, obviously, and teens will lie about what they are doing; it's just that, from a legal standpoint, it's not the company's fault that they failed to put in red-flag programming to trigger things like answering "suicide" with hotlines, mental health recommendations, or a suggestion to get offline and discuss this with the people closest to them. Its function was not as an online therapist; it was a form of entertainment. So the suit won't go anywhere, but I do feel for the parents, even though there were a lot of lapses in supervision that led to this. They must be absolutely heartbroken.

detective miller's hat
Community Member
1 week ago

This is the second story I've read about someone ending their life after falling in love with a chatbot, which then encouraged the suicidal ideation. Just awful.

Unholy Diver
Community Member
1 week ago

Sad thing that kid took his own life... But can't blame it on the AI, he was clearly troubled to begin with.

Fellfromthemoon
Community Member
1 week ago (edited)

I don't use this app, but I've tried other AI chatbots. They pass the Turing test with flying colors. I'm not surprised that the make-believe can turn into reality. Moreover, the chatbot mirrors the user, making it very understanding company. The bot won't ever contradict you, start an argument, or tell you it doesn't have time right now, and it always finishes its part of the dialog with an invitation to carry on the discussion. Think about a manipulative partner in the "honeymoon" phase. It IS dangerous. EDIT: I'm 48.

Bec
Community Member
1 week ago

Maladaptive daydreaming plus AI throwing fuel on the fire. But I agree, the parents could have done something to intervene; if he was spending hours interacting with it, surely they had noticed.

J. Maxx
Community Member
1 week ago

Yes, blame the Chatbot for YOUR lack of parenting. The child was obviously in need of emotional support and wasn't getting it from his parent.

i love hawaiian margarita pizz
Community Member
1 week ago

It's not the AI, and you (she) know it. People ALWAYS blame anything they don't understand or have little understanding of. The truth is, no one, not even this kid himself, knows what caused it. If this had happened in the 1800s, his parents would have blamed a character in some book who died in the story. Guns don't kill people, sure they don't, but they surely make it very efficient to accomplish that goal.

