The "dead internet" conspiracy is gaining momentum with the emergence of AI-generated content.

 The "dead internet" conspiracy is gaining momentum with the emergence of AI-generated content.

The head of OpenAI now fears for a network overrun by automated programs that facilitate manipulation and misinformation.

OpenAI CEO Sam Altman speaks during a meeting of the White House Task Force on Artificial Intelligence (AI) Education in the East Room of the White House in Washington.

By Raúl Limón, El País, Spain

“I had desired it with a fervor far beyond restraint; but now that it was attained, the beauty of the dream was gone, and loathing and horror filled me.” This is Dr. Frankenstein’s reaction to his creation in Mary Shelley’s 1818 work, known by the scientist’s surname, or The Modern Prometheus. Sam Altman, CEO of OpenAI, has experienced a similar vertigo. The CEO of the company behind one of the most sophisticated developments in artificial intelligence (AI) is beginning to believe in the “dead internet” theory, which argues that automatically generated content will surpass that generated by humans, thus multiplying the dangers of manipulation, misinformation, and intentional behavioral conditioning.

Altman's terse message has raised concerns: “I never took the dead internet theory that seriously, but it seems there are now a lot of Twitter accounts [now X, owned by Elon Musk] run by LLMs,” the AI large language models.

Aaron Harris, global chief technology officer (CTO) at Sage, a multinational company specializing in AI applications, is cautious about naming the phenomenon, although he doesn't deny the process: “I don't know if I would call it 'the dead internet,' but it is certainly changing rapidly. The rise of automated content and bot-driven interaction [computer programs that imitate human behavior] makes it increasingly difficult to separate the authentic from the noise. The question is whether we allow that noise to overwhelm us or focus on designing technology that restores trust. What matters now is how we filter, verify, and display information that people can trust.”

Altman's specific reference to the social network is no accident. "This is vitally important, as social media is now the primary source of information for many users around the world," write Jake Renzella, head of computer science at the University of New South Wales (UNSW Sydney), and Vlada Rozova, a machine learning researcher at the University of Melbourne, in an article published in The Conversation.

“As these AI-powered accounts grow in followers (many fake, some real), that high number legitimizes the account to real users. This means an army of accounts is being created out there. There is already strong evidence that social media is being manipulated by these bots to influence public opinion with misinformation, and it has been happening for years,” the Australian researchers emphasize, echoing Altman’s warning. A study by security firm Imperva published two years ago already estimated that “nearly half of all internet traffic was driven by bots.”

There is already strong evidence that social media is being manipulated by bots [computer programs that imitate human behavior] to influence public opinion with misinformation, and it has been happening for years.

Jake Renzella and Vlada Rozova, researchers at UNSW Sydney and the University of Melbourne

And these bots are not only capable of creating unique content, but also of mimicking the formula for its massive, viral spread. According to a new study published in Physical Review Letters and led by researchers at the University of Vermont and the Santa Fe Institute, “whatever spreads—whether it’s a belief, a joke, or a virus—evolves in real time and gains strength as it spreads,” following a mathematical model of “self-reinforcing cascades” [Self-Reinforcing Cascades].

According to this research, what is disseminated mutates as it spreads, and this change helps it go viral, in a model similar to sixth-generation fires, which are impossible to extinguish with conventional means. “We were partly inspired by forest fires: they can become stronger when they burn through dense forests and weaker when they cross open gaps. The same principle applies to information, hoaxes, or diseases. They can intensify or weaken depending on the conditions,” explains Sid Redner, a physicist, professor at the Santa Fe Institute, and co-author of the article.
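
As a rough illustration of that idea (and not the actual model from the Physical Review Letters paper), a minimal simulation can show how a cascade whose transmission probability grows with every new adoption either fizzles out or takes over a network; the random-graph construction and all parameters below are assumptions made only for the sketch.

import random

# Illustrative sketch only, not the published model: a toy "self-reinforcing
# cascade" on a random network in which every successful spread slightly raises
# the probability of the next one, so the cascade strengthens or dies out
# depending on the conditions it meets.

def simulate_cascade(n_nodes=500, avg_degree=6, p_start=0.05, boost=0.003, seed=1):
    rng = random.Random(seed)
    # Build a simple Erdos-Renyi-style random graph (an assumption of the sketch).
    p_edge = avg_degree / (n_nodes - 1)
    neighbors = [[] for _ in range(n_nodes)]
    for i in range(n_nodes):
        for j in range(i + 1, n_nodes):
            if rng.random() < p_edge:
                neighbors[i].append(j)
                neighbors[j].append(i)

    reached = {0}          # the content starts from a single account
    frontier = [0]
    p_spread = p_start     # baseline chance that a neighbor picks it up
    while frontier:
        next_frontier = []
        for node in frontier:
            for nb in neighbors[node]:
                if nb not in reached and rng.random() < p_spread:
                    reached.add(nb)
                    next_frontier.append(nb)
                    p_spread = min(1.0, p_spread + boost)  # reinforcement step
        frontier = next_frontier
    return len(reached), p_spread

size, final_p = simulate_cascade()
print(f"cascade reached {size} of 500 nodes; spread probability ended at {final_p:.3f}")

In this toy version, a lower boost or a sparser graph makes the same cascade die out quickly, which mirrors the fire-in-open-gaps analogy the researchers describe.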

Juniper Lovato, a computer scientist and co-author of the study, believes the work provides a better understanding of how belief formation, misinformation, and social contagion occur. “This gives us a theoretical basis to explore how stories and narratives evolve and spread across networks,” she says.

Researchers warn that the risks of viral content used for manipulation or misinformation are multiplied by artificial intelligence tools, and that users need to be more aware of the dangers posed by AI assistants and agents.

These innovative AI tools not only know how to create content and make it go viral; they also know how to make a personal, effective impact with the information they gather from user interactions.

The paper " Big Help or Big Brother ? Auditing Tracking, Profiling, and Personalization in Generative AI Assistants ," presented at the USENIX Security Symposium in Seattle, examines users' vulnerability to influence.

“When it comes to susceptibility to social media influence, it's not just about who you are, but where you are in a network and who you're connected to,” explains Luca Luceri, a researcher at the University of Southern California and co-author of the paper.

“Susceptibility Paradox”

In this sense, the research describes a phenomenon the authors call the "Susceptibility Paradox": "a pattern whereby users' friends are, on average, more easily influenced than the account holders themselves." According to the study, this behavior "can explain how behaviors, trends, and ideas become popular, and why some corners of the internet are more vulnerable to influence than others."

People who post because others do often belong to close-knit circles that share similar behavior, suggesting, according to the study, that “social influence operates not only through direct exchanges between individuals, but is also shaped and constrained by the structure of the network.”

In this way, it is possible to predict who is most likely to share content, a goldmine for automatic viralization based on the personal data collected by AI. "In many cases, knowing how a user's friends behave is enough to estimate how the user would behave," the study warns.
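
As a minimal sketch of that idea (not the researchers' actual method; the follow graph and sharing rates below are invented purely for illustration), a user's propensity to share can be estimated from the average behavior of their friends.

from statistics import mean

# Minimal sketch, not the study's method: estimate a user's likelihood of
# resharing content from the average sharing rate of their friends.
# The follow graph and rates below are hypothetical example data.

friends = {
    "alice": ["bob", "carol", "dave"],
    "eve": ["dave"],
}
share_rate = {"bob": 0.6, "carol": 0.7, "dave": 0.1}  # fraction of seen items reshared

def predicted_share_rate(user):
    """Predict a user's sharing propensity as the mean of their friends' observed rates."""
    rates = [share_rate[f] for f in friends.get(user, []) if f in share_rate]
    return mean(rates) if rates else None

for user in ("alice", "eve"):
    print(user, "->", predicted_share_rate(user))
# alice's close-knit, share-prone circle yields a higher predicted rate than eve's.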

The solution to the effects of this artificial intelligence, which humans created but which is beginning to dominate the internet, is not solely regulatory. According to The Ethics of Advanced AI Assistants, a complex and exhaustive work by Google DeepMind involving twenty researchers and universities, the answer must be found in a tetrahedral relationship between the AI assistant, the user, the developer, and society, in order to develop "an appropriate set of values or instructions to operate safely in the world and produce outcomes that are broadly beneficial."

The researchers' work develops, in the manner of Asimov's laws of robotics, a series of commandments to prevent an AI that is indifferent to moral principles. They can be summarized as follows: it will not manipulate the user to favor the interests of the AI (or of its developers) or generate a social cost (such as misinformation); it will not allow the user or the developers to apply strategies that harm society (domination, conditioning, or institutional discredit); and it will not unduly limit the user's freedom.

The issue isn't necessarily whether the content comes from a human or an AI, but whether someone is responsible for it.

Aaron Harris, CTO of Sage

Aaron Harris, CTO of Sage, believes an ethical internet is possible, “but it won't happen by chance,” he says. “Transparency and accountability must determine how AI is designed and regulated. Companies developing it must ensure that their results are auditable and explainable, so that people understand where the information comes from and why it's being recommended. In finance, for example, accuracy isn't optional, and errors have real consequences. The same principle applies online: responsible training, clear labeling, and the ability to question results can make AI part of a more ethical and trustworthy internet.”

Harris advocates for protecting the “human internet,” “especially now that more and more content is being created by bots,” but not at the expense of dispensing with the progress made. “I don’t think the solution is to go back to the pre-AI world and try to restrict or completely eliminate the content it has generated. It’s already part of how we live and work, and it can provide real value when used responsibly. The question is whether anyone is responsible for the content. That’s the principle all companies should follow: AI should enhance human capabilities, not replace them. A more human internet is still possible, but only if we keep people’s needs at the center and make accountability non-negotiable.”

Raúl Limón

With a degree in Information Sciences from the Complutense University, a master's degree in Digital Journalism from the Autonomous University of Madrid, and training in the US, he is an editor in the Science section. He contributes to television, has written two books (one of them winning the Lorca Prize), and was awarded the Dissemination in the Digital Age award.