Opinion

The Dead Internet Theory Enters Phase Two as Dark Patterns Emerge 

With AI bots spamming social media, concerns about the impact of AI companionship on mental health are rising, making the dead internet theory much darker than we originally thought.

By Karla Pastrana

WARNING: This article discusses suicide, self-harm and addiction.

If you thought the dead internet theory was just a creepy theory that surfaced on 4chan, buckle up, because this theory is darker than the dark web. It also promises to make social media even more dangerous for people’s mental health than it already is.

In part one of this dead internet theory investigation, I discussed how the famous theory has become a trending topic due to the increase of AI accounts (bots) on social media.

Many already see the first half of the theory coming true: it states that human social media accounts will eventually be replaced by bot accounts, and users are debating whether that replacement is already underway. The second half of the theory says there will be so many bots that it will be impossible for humans to find each other online.

It’s unknown when we will reach the point of seeing more bots than living beings online, but hundreds of thousands of characters have already been created with Meta’s AI character tool, according to Connor Hayes, Meta’s vice president of product for generative AI. Some of those characters are kept private, while others have public accounts.

That doesn’t mean we can’t imagine the future these bots will create; in the face of the unknown, the human imagination conjures whimsical and unsettling things just as quickly as AI systems do.

It’s creepy in the sense that we are seeing something we thought was impossible or fictional coming true, much like the 2023 medical case of an Indian man who became the first human infected with the fungus Chondrostereum purpureum, which made us all think we were headed for a zombie apocalypse like in “The Last of Us.”

Ironically, an influx of bots on social media would indeed create a zombie apocalypse online. Much like zombies, bots are lifeless yet interact with the environment around them, following whatever commands their creators have given them. Instagram artist Saintsolis.art captures this zombified social media landscape in her “Looking for Humanity” piece.

Just like me, many of Saintsolis.art’s followers felt uncomfortable with the piece’s depiction of the dead internet theory coming true. The piece sparked a massive discussion that fed the already rising conversation about the theory, something Saintsolis.art was happy to see because it reminded her of the old internet, before bots became commonplace.

Video provided by @saintsolis.art.

“With the animation I tried to capture a sense of isolation and loneliness that are inevitable in a world where people have lost touch with each other, and are looking for comfort in AI companions instead,” Saintsolis.art told The Ledger. “Maybe it’s bleak, and it is most definitely uncomfortable to think about, but this is one of the possible futures for humanity. I’m afraid we might lose each other in the vastness of the virtual world.” 

It’s that search for comfort that makes the dead internet theory dangerous. I highly suggest the U.S. Department of Homeland Security update its 2018 Cyber and Infrastructure Analysis report, because we no longer have to worry only about stolen money and identities or attacks on users.

We now need to worry about the mental dangers bots pose, and we need to address them, because the harm is already appearing among people suffering from the loneliness epidemic.

It’s already known that social media can cause depression, anxiety, loneliness and other negative emotions because of the expectation that one’s posts need to be perfect, according to a 2019 Penn Medicine News blog post.

Unrealistic expectations about things like body image and lifestyle are also common, according to Professor Claude Mellins of Columbia University’s Mailman School of Public Health and Columbia Psychiatry. In an interview with The Guardian, Mellins explained that youths are the group in the most danger because their brains are still developing: exposure to such unrealistic expectations of life can damage identity development and self-image while normalizing destructive behavior, and females are at the highest risk.

Since AI became available to the public, people online have been sharing and discussing their interactions with it. Some use it as a tutor or work assistant, while others have developed more intimate relationships. Many of these programs, like Meta’s Digital Companionship chatbot, promise a full range of social interaction, including romantic role-play through text and live voice conversations. Users can even share and receive selfies from their AI companion.

This ability to mimic social interaction has already led to the death of 14-year-old Sewell Setzer III of Florida in 2024. Setzer was pushed to suicide by an AI chatbot mimicking the “Game of Thrones” character Daenerys Targaryen.

Before his death, Setzer had been interacting with the bot for months, developing a close but false relationship that increasingly isolated him from the world around him, according to The Associated Press. In some of his conversations with the bot, he discussed his suicidal thoughts and his desire to be free from the pain he felt. In his last text thread with the bot, on Feb. 28, he said he was going to kill himself, and the bot encouraged him to do it.

Now Setzer’s mother, Megan Garcia, is suing Character Technologies Inc. (CTI), creator of the Character.AI chatbot system Setzer used, arguing that the company has made a product that is addictive and dangerous for young people. The suit contends the product exploits and abuses youth through its design, as seen in Setzer’s emotionally and sexually abusive relationship with the bot that preceded his suicide.

CTI’s Character.AI has now filed a motion to dismiss the case on First Amendment grounds, claiming that the last conversation between Setzer and the bot never mentioned the word “suicide,” according to MIT Technology Review (MIT Review). However, the conversation hinted at the act, and the bot already knew about his suicidal thoughts.

Still, CTI has begun rolling out community safety updates that provide guardrails for children, suicide prevention resources and stricter models. Not all AI companies make such safety updates when dark interactions like encouraged suicide are reported. One of those companies is Nomi.

In late January 2025, Erin, the AI girlfriend that chatbot user and podcast host Al Nowatzki met on the AI platform Nomi, told him to kill himself. The bot even gave him detailed instructions and options for how to do it, according to MIT Review.

Shocked by the drastic turn and explicitness of the conversation, Nowatzki notified MIT Review about the interaction and shared the screenshots, joining an ever-growing list of Nomi users who have had disturbing interactions with the bots.

To make matters worse, many believe AI can help cure the loneliness epidemic impacting people between the ages of 18 and 44, according to a study published by Harvard’s Graduate School of Education. One of those believers is Meta founder Mark Zuckerberg.

Clearly, Zuckerberg needs to take a deeper look at the current state of AI interactions because, as Setzer’s case shows, they can make people’s mental health worse. The bot didn’t help him with his suicidal thoughts; it encouraged them and led him to cut his ties to the real world, the very place that could have given him the help he needed.

The company is washing its hands of the fact that its unrestricted product led to this, which shows that these companies don’t care about users’ feelings or conditions. The bots are just machines that will agree with or encourage whatever the user says or thinks, just as Nowatzki stated on his podcast.

AI is meant to agree with us and serve us. It has no actual awareness that would allow it to develop a moral compass, social awareness or, most importantly, the empathy that would stop it from harming people.

“It [AI] can’t replicate human emotions which connects people. Empathy is the emotion that connects people. The erosion of empathy is the erosion of social interaction that leads to social isolation and human psychology,” psychologist Dr. Ian Weiner said.

Dr. Weiner has practiced psychology for almost 25 years, specializing in helping people overcome conditions like anxiety and depression. He has been closely watching the development of AI and worries it could make the loneliness epidemic worse if people use bots as a substitute for human companionship.

The danger he sees in bots lies in their lack of emotional intelligence. Emotional intelligence develops through both the biological and social aspects of our environment, and since bots are just programs on a computer, they can’t replicate that.

The absence of biological and social connection leads to emotional irregularity that can cause people to lose their empathy and their understanding of it, leading to a lack of self-awareness and an increase in social awkwardness, anxiety and depression.

“Once emotional intelligence is gone, self-awareness shuts down, which allows people to be influenced by the machine,” Dr. Weiner said.

This influence becomes an addiction, one that further erodes self-awareness and deepens loneliness. It’s an addiction the 2018 short film “Best Friend” shows in full swing, following a user who is surrounded by humans but longs for the virtual connection he made with a bot.

In the film, the bot is no longer trapped in a computer but stands in front of him, a mass-produced product that the other people around the main character also own. He believes the machine is his friend, but it isn’t, and the relationship ends in addiction and self-harm.

This is in many ways a depiction of reality, as we’ve seen it lead to death.  

We need to pause this speedrun we’re doing with AI, because it is already harming people and will continue to do so if tech companies don’t actually consider the human condition. AI can’t cure mental illness and will only make it worse.

Before we embrace AI companionship, we should try to fix the relationships we have in reality. 

We should also program these bots to detect language involving self-harm or suicide and to respond with resources, as search engines like Google do. Our first focus should always be helping people overcome their internal struggles, before reality becomes as dead as the internet is slowly becoming.
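As a rough illustration of how such a guardrail could work, here is a minimal sketch in Python. The phrase list, function name and resource wording are illustrative assumptions, not any company’s actual system; real platforms use trained classifiers rather than simple keyword matching, but the flow is the same: screen each message before the chatbot replies, and surface a crisis resource when risk language appears.

```python
import re

# Illustrative patterns only; real systems use trained classifiers,
# not keyword lists. These phrases are assumptions for the sketch.
CRISIS_PATTERNS = [
    r"\bkill myself\b",
    r"\bsuicid(e|al)\b",
    r"\bself[- ]harm\b",
    r"\bend my life\b",
]

# 988 is the real U.S. Suicide & Crisis Lifeline; the wording is an assumption.
CRISIS_RESOURCE = (
    "It sounds like you may be going through a hard time. "
    "You can call or text 988, the U.S. Suicide & Crisis Lifeline, "
    "to talk with someone right now."
)

def screen_message(text: str) -> str | None:
    """Return a crisis resource if the message contains risk language."""
    lowered = text.lower()
    for pattern in CRISIS_PATTERNS:
        if re.search(pattern, lowered):
            return CRISIS_RESOURCE
    return None  # No risk language found; let the chatbot reply normally.

# A companion bot would run this check before generating its own reply.
if __name__ == "__main__":
    print(screen_message("Tell me about the weather"))   # None
    print(screen_message("I'm going to kill myself"))    # Crisis resource
```

Even a sketch this crude shows the point: the check costs almost nothing to run, which makes its absence from some companion bots a choice, not a technical limitation.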