Sewell Setzer’s Death Reveals the Dangers AI Poses to Latine & Black Youth

Photo: Getty Images.
For almost a year, Sewell Setzer III, a 14-year-old from Orlando, spoke to AI chatbots, powered by a company called Character.AI, that were meant to emulate “Game of Thrones” characters like Daenerys and Rhaenyra Targaryen. Over that time, he developed a relationship with the chatbots. “The world I’m in now is such a cruel one. One where I’m meaningless,” Setzer said in one of his exchanges with the Daenerys bot, according to a lawsuit filed by his mother, Megan García, against Character.AI. “But I’ll keep living and trying to get back to you so we can be together again, my love.” The conversations eventually turned sexual.
Shortly after Setzer began using the chatbots, his mental health began declining: he spent more time in his bedroom and quit the basketball team. A therapist diagnosed him with anxiety and disruptive mood disorder. 
He brought up suicidal thoughts to the Daenerys chatbot on multiple occasions, and the bot kept returning to the subject. At one point, the chatbot asked him if “he had a plan” to kill himself, to which Setzer responded that he was considering something but didn’t know if it would work or allow him to have a pain-free death. The chatbot responded, “That’s not a reason not to go through with it.”

"This tragic case illustrates the dangers that some of the rapidly evolving AI technology can pose on young people."

Valeria Ricciulli
Following the therapist’s advice that he spend less time on social media, his mom confiscated and hid his phone. On February 28, while searching for his phone around the house, Setzer found his stepfather’s pistol. (According to the lawsuit, police determined that the gun had been hidden and stored in compliance with Florida law.) He went into the Character.AI app and told the Daenerys chatbot that he would “come home,” to which the bot responded, “Please do, my sweet king.” Seconds later, he died by shooting himself in the head.
The lawsuit filed by García, Setzer’s mother, alleges that Character.AI and its founders (who are now employed by Google) “went to great lengths to engineer 14-year-old Sewell’s harmful dependency on their products, sexually and emotionally abused him, and ultimately failed to offer help or notify his parents when he expressed suicidal ideation.”
This tragic case illustrates the dangers that rapidly evolving AI technologies can pose to young people. Advocates are asking lawmakers to act on policies that protect children and teens, and tech companies to build in guardrails that can prevent tragedies like Setzer’s — especially amid a severe youth mental health crisis and a troubling increase in suicide rates among Latines.
Why AI companion apps like Character.AI can be dangerous to young people
AI companion apps are meant to emulate human interaction and be as realistic as possible. Character.AI, one of several such apps, has more than 20 million users (many of whom are teens and young adults) and allows users to create their own “personas,” or chatbots, and chat with them.
“[Companion apps] are designed to remember past conversations and personalize future interactions, to maintain consistent personas over time, show emotion or mimic emotion or show empathy,” Robbie Torney, senior director of AI programs at Common Sense Media, tells Refinery29 Somos. He explains that one of the main problems with teens using this type of technology is that AI chatbots (like ChatGPT) tell users what they want to hear, and AI companions (like Character.AI) are even quicker to agree with what a user says. “In real life, if you do something that your friend doesn’t like, they’ll tell you, [but] if you’re doing something with a chatbot that is inappropriate, crossing a line, unsafe, you might not get that same sort of pushback that you might get from a trusted friend.”
This lack of pushback can be dangerous to users who are experiencing suicidal thoughts, psychosis, or mania, a report from Common Sense Media outlines. Other concerns include that young people may use the apps to avoid real human relationships, that the apps can exacerbate loneliness and isolation, and that they may keep kids from seeking professional help in the midst of a crisis.
Experts have also warned that these AI technologies are experimental, and that children should not be allowed to use them. Arielle Geismar, co-chair of the youth Internet safety advocacy group Design It For Us, says that young people are being used as “guinea pigs for these technologies.”
Michelle Barsa, principal at philanthropic organization Omidyar Network, who leads the organization’s work on generative AI, adds that “these models should not be trained on children, period, [because] we don't know the impact of engaging with these types of companion technologies on things like child or adolescent brain development; we don't know how it impacts identity formation. We don't know how it impacts the capacity to form and maintain social capital over the long term.”
Photo: Courtesy of US District Court, Middle District of Florida.
Additionally, experts say that these new technologies should avoid addictive features that focus on engagement or on keeping users on them for extended periods. In other words, they shouldn’t use the same model that social media platforms like Instagram and TikTok use, which can create addictive behavior and, in turn, impact a young person’s mental health.
A Character.AI spokesperson told Refinery29 that the application has implemented new safety measures over the last six months, including a “pop-up directing users to the National Suicide Prevention Lifeline that is triggered by terms of self-harm or suicidal ideation.” The spokesperson added that the company is working on “creating a different experience for users under 18 to reduce the likelihood of encountering sensitive or suggestive content.”
Latine and Black young people could be at greater risk
A recent Common Sense Media survey found that Black and Latine kids are more likely than their non-Latine white peers to use generative AI for things like keeping them company or seeking health-related advice. “We've seen that Black and Latino youth are quicker adopters of some types of technology, like mobile phones and other screen media in particular, than their white peers,” Torney explains. “So when you start to see some of these usage statistics, and you start to think about AI companions being a very risky area, you start to worry that Black and Latino youth are at much greater risk.” 

"We've seen that Black and Latino youth are quicker adopters of some types of technology. ... So when you start to see some of these usage statistics, and you start to think about AI companions being a very risky area, you start to worry that Black and Latino youth are at much greater risk."

Robbie Torney
This is particularly alarming given that the mental health crisis is already disproportionately affecting youth of color, especially Latines. From 2010 to 2019, the suicide rate increased 92.3% for Latine children 12 and younger, according to a study published in 2022. Additionally, recent data from the Kaiser Family Foundation (KFF), a non-partisan organization focused on health policy, shows that suicide deaths are increasing fastest among people of color. Latines are also less likely to receive or access mental health treatment.
And access to guns exacerbates the situation. Research has shown that living in a house with a gun triples a person’s chance of dying by suicide. “Firearm suicide rates for young people, especially young people of color, are rising the fastest of any demographic group [and] access to guns is contributing to more firearm suicides,” Nina Vinik, founder of gun safety nonprofit Project Unloaded, tells Somos. “There’s a mental health crisis layered on top of that. Concerns about gun violence are contributing to that mental health crisis. So it is this perfect storm, and it’s hitting young people of color especially hard.”
But as with gun legislation, passing federal laws to protect children online has been challenging — and there’s still a long way to go when it comes to AI-specific laws, especially around newer technologies like AI companions.

"Firearm suicide rates for young people, especially young people of color, are rising the fastest of any demographic group [and] access to guns is contributing to more more firearm suicides."

Nina Vinik
This past summer, the Kids Online Safety and Privacy Act, a bill aimed at protecting kids from harm on social media and other online platforms, passed in the Senate, but it’s unclear whether it will pass in the House. The bill would require social media platforms to offer options for children to opt out of things like algorithmic recommendations and some of the platforms’ addictive features. However, some organizations, including the ACLU, have raised concerns about free speech (an argument the tech industry has also made) and about vulnerable children potentially being unable to access crucial information on LGBTQ issues or their reproductive health.
Another bill, the Defiance Act — which would allow victims of non-consensual sexually explicit deepfakes to sue — also passed in the Senate over the summer and is now headed to the House. 
Current federal policy related to AI doesn’t “specifically tend to this issue of this type of human-machine interaction,” adds Barsa. “And generally, it’s been difficult to move bills with provisions like this through Congress, as it’s generally difficult to move bills through Congress these days, and so the responsibility has largely fallen to states.”
So far, states like California, Colorado, and Maryland have passed child online safety laws, though these have faced intense pushback from tech lobbyists. In California, NetChoice, a tech industry group representing platforms like X, Meta, and Google, challenged one safety law, halting its implementation.
Because these technologies are being developed quickly and “rushed to market,” Geismar notes, “it’s important that lawmakers and legislators start playing on the offense, rather than on the defense: start creating evergreen policies that govern the ways and the processes through which emerging technologies are created, not just one-off responses when a tragic event occurs.” 
Geismar adds: “We [young people] are the most impacted by [tech’s] harmful design practices, yet we are some of the least listened to voices in this space, so companies and lawmakers should make an active effort to include the perspectives of young people and work with us to design something that is better.”
If you or someone you know is in crisis, call 988 to reach the Suicide and Crisis Lifeline.