Why former OpenAI employees warn of the danger of artificial intelligence despite the risk of losing their millions



Workers at the company that created ChatGPT denounce in a letter that they are forced to sign contracts barring them from criticizing the company after they leave if they do not want to lose their money


A group of former employees and anonymous current OpenAI workers have published a letter warning that the company prioritizes commercial incentives over the dangers of developing increasingly advanced AI systems. The signatories lament the company's internal culture of secrecy and the fact that it forces them to sign contracts barring them from criticizing OpenAI after they leave if they do not want to lose their money.

In the letter, signed by five former employees and six anonymous current OpenAI workers, they ask the company to withdraw these contracts and to open internal channels for reporting dangerous advances. "My wife and I thought about it a lot and decided that my freedom to express myself in the future was more important than [OpenAI's] shares," wrote on X (formerly Twitter) Daniel Kokotajlo, who left his position in OpenAI's Governance department in April and who told the New York Times that his shares were worth around 1.6 million euros.


"The world is not prepared and we are not prepared," Kokotajlo wrote in his farewell email. "And I am concerned that we have rushed forward regardless, without rationalizing our actions," he added. Another signatory, William Saunders, told Time magazine that he was risking his money by speaking publicly: "By talking to you, I could lose the opportunity to access equity worth millions of dollars. But I think it's more important to have a public dialogue about what's happening in these AI companies."

This letter is not the first about the dangers of AI in its unbridled race toward "artificial general intelligence" (AGI), industry shorthand for the point at which machines gain the ability to think and decide like a human. In March 2023, thousands of experts called for a pause in its development; in May of last year another 350 signed a 22-word manifesto calling to "mitigate the risk of extinction." In between, Sam Altman himself, co-founder of OpenAI, went to Congress to say that "everything can go very wrong," and Geoffrey Hinton, one of the godfathers of AI, left Google so he could warn freely about the threatening future.

Hinton also backs this new letter from OpenAI employees, along with Yoshua Bengio; both have won the Turing Award, the "Nobel of computer science," for their achievements in AI. Other signatories of the letter, titled "A Right to Warn about Advanced Artificial Intelligence," are a former employee and a current employee of Google DeepMind.

Right to warn from within

This new letter is slightly different. The main complaint is no longer the hypothetical danger of general AI, but the unchecked advance of companies that prioritize the commercial race over safety. Unlike what has happened at Meta and other companies, former OpenAI employees fear that protections for would-be whistleblowers are insufficient because they would not be reporting anything "illegal," since the risks "that concern us have not yet been regulated," they write in the letter.

This movement does not occur in a vacuum. In mid-May, Ilya Sutskever, co-founder of OpenAI and one of the instigators of the November revolt against Sam Altman, left the company, accompanied by Jan Leike, who wrote on X that OpenAI "is shouldering an enormous responsibility on behalf of all of humanity" but is prioritizing the creation of "shiny products." That same week, OpenAI had announced GPT-4o, a new model capable of conversing in real time, which generated controversy with actress Scarlett Johansson over its voice. A former member of the board of directors also revealed that Altman had concealed from the board news as basic as the launch of ChatGPT in November 2022.


All this controversy keeps unfolding over hypothetical futures that no one knows whether or how they will come to pass. The fears center on the emergence of uncontrolled machines that seize control and stop following human orders. The task of ensuring that machines do not become autonomous is called "alignment," from "aligning" their interests with those of humans. It is technical work tied to the safety of these systems, and the team devoted to it at OpenAI has disbanded.

This week, details emerged about the departure of one member of that team: Leopold Aschenbrenner, a young man of German origin who, according to his LinkedIn, graduated from Columbia University at 19 at the top of his class. Aschenbrenner confirmed in a nearly five-hour podcast that he had been fired over a "leak": "I wrote a brainstorming document on the preparedness, safety and security measures needed on the path to general AI. I shared it with three external researchers. That was the leak," Aschenbrenner explained. On another occasion he had shared another document with two members of the previous board; in that case he received a warning from human resources.


Also this week, Aschenbrenner published a 165-page PDF containing five essays in which he analyzes the "next decade" and explains why everything is more dangerous than it looks from the outside. His message went viral and sparked a new round of speculation about the hypothetical danger of future versions of AI. Following his departure from OpenAI, he will head an investment fund focused on general AI projects.

Jordi Pérez Colomé


Published in the Spanish newspaper El País