Artificial intelligence: Is a post-apocalyptic scenario near?


by: Zhandra Flores

The takeoff of these technologies has put humanity before unprecedented ethical and technopolitical crossroads.


The widespread adoption of artificial intelligence (AI) has been accompanied by great enthusiasm, but also by many questions and fears about the technology's effects on people's daily lives.

The most optimistic maintain that AI will accelerate processes that today demand enormous amounts of time, that it will pave the way for more objective decision-making, and that its intensive use could help answer big questions that have awaited solutions for decades or centuries, including cures for deadly diseases and highly complex scientific problems.

Although it is true that AI could offer all these possibilities, its emergence has also fueled debate over issues such as the massive collection of Internet users' data without their consent and the reproduction of politically motivated historical biases and prejudices, and it has contributed to the erosion of critical thinking and to the falsification of the truth about controversial events.

It is not yet possible to gauge the real scope of these transformations within societies, but they have already been shown to affect such core issues as interpersonal relationships, the labor market and teaching-learning processes, and their impact is expected to grow in the coming years.

 

Thus, professions in which absolute human primacy was until now taken for granted, such as journalism or literary writing, are inevitably being permeated by these tools, while authors and media owners are forced, at the very least, to play on a board with different rules, which could even call into question what has so far been a lucrative corporate model.

On the other hand, nation states, multilateral bodies and universities have been sidelined in this equation by the technology giants, which today control AI-based systems without any regulation or counterweight.

Three stages: where are we?

Experts have identified three stages in the development of AI, distinguished by the ability to imitate – or surpass – human cognition. The first, known as artificial narrow intelligence (ANI), has been part of the lives of millions of people for more than a decade and a half through smartphones, Internet search engines, online games and electronic assistants such as Siri or Alexa.

In simple terms, these are systems specialized in a single task, which they can perform better than a human being. Such AIs underpin functions as varied as Google's search algorithms, personalized audio and video playlists in various applications, and geolocation via GPS.

Despite the apparent differences, chatbots such as OpenAI's ChatGPT, Google's Bard or Microsoft's Copilot also belong on this list. The answers they offer users are based on a literal analysis of the information circulating on the Internet about a given topic, within the limits imposed by their developers; that is, they cannot think or make autonomous decisions.

It is precisely this aspect that, for experts, marks the boundary between ANI, or "weak" AI, and the next level: artificial general intelligence (AGI), or "strong" AI, in which machines would acquire human-level cognitive capacity and, at least in theory, be able to perform any task.

That moment may seem distant, but the truth is that it is closer than it appears, and applications like ChatGPT have made that proximity visible, revealing themselves as a kind of transitional interface between ANI and AGI.

And that is not the worst possible future. The third phase, which so far exists only as a projection, is so-called artificial superintelligence (ASI), in which machines would be able not only to perform any human task but to think for themselves; that is, AI would surpass even the most brilliant human minds.

An alliance between the Massachusetts Institute of Technology (MIT), the University of California (USA) and the technology company Aizip Inc. has already produced an AI capable of generating other artificial intelligence models without human intervention, which opened another disturbing door: objects may develop autonomous learning mechanisms that make them ever more intelligent.


A panel of computer scientists interviewed by BBC Mundo last year agreed that although AI-based systems took decades to reach their current state, the transition to artificial superintelligence, or ASI, will happen much more quickly and will outstrip human cognition, even in areas as specific as creativity or social skills.

"It is something that we have to be very careful with, and we have to be extremely proactive in terms of its governance. Why? The limitation among humans is how scalable our intelligence is. To be an engineer you need to study a lot; to be a nurse or a lawyer takes a lot of time. The issue with generalized AI is that it is immediately scalable," said Carlos Ignacio Gutiérrez of the Future of Life Institute.

On this topic, South African-born tycoon Elon Musk said that AI "represents one of the greatest threats" to humanity. "For the first time we are in a situation where we have something that will be much more intelligent than the most intelligent human," the businessman said at a meeting on AI safety held in the United Kingdom at the beginning of last November.

Similarly, Musk predicted that AI will be able to surpass human cognition in as little as five years. "To me, it is not clear whether we can control such a thing, but I think we can hope to guide it in a direction that is beneficial to humanity," he noted.

Dangers in the making

In March 2023, some 1,000 specialists and owners of large technology companies publicly called on all AI developers to immediately suspend, "for at least six months," the "training of AI systems more powerful than GPT-4," the model underlying ChatGPT.

From left: Bill Gates, Steve Wozniak and Elon Musk. Jamie McCarthy / Getty Images for Bill & Melinda Gates Foundation / Justin Sullivan / Andreas Rentz / Gettyimages.ru

"AI systems with human-competitive intelligence can pose serious risks to society and humanity, as demonstrated by extensive research and recognized by the main AI laboratories," reads part of the letter, which was signed by industry heavyweights such as Elon Musk, Steve Wozniak and Bill Gates.

The position is not unanimous, however. Gates himself doubts the effectiveness of such pauses in the overall development of AI and instead urges governments and companies to confront the challenges head-on.

"I don't think asking a particular group to pause will solve the challenges. Clearly there are enormous benefits to these things (…). What we have to do is identify the most complicated areas," the Microsoft founder said in an interview with Reuters in April 2023.

The topic was also addressed at the most recent edition of the Davos Forum, where much of the global political and economic elite gathers annually. Sam Altman, CEO of OpenAI, was invited for the occasion and took advantage of the stage to promote the benefits of AI and play down the risks and warnings circulating in academic circles and public opinion.

https://actualidad.rt.com/actualidad/499066-inteligencia-artificial-cerca-escenario-postapocaliptico