Mustafa Suleyman: “Controlling artificial intelligence is the challenge of our time”

The CEO of Microsoft AI, who co-founded DeepMind in 2010, reflects on today's technological challenges: "My greatest hope is that everyone can feel the benefits of an intelligence revolution that empowers them to achieve and do more."
In the last decade—and with even greater intensity in the last five years—the development of artificial intelligence has been breathtaking. Every day, new applications based on AI models appear. Nvidia, the manufacturer of the chips that fuel this revolution, is now the most valuable company in the world, and the seven major technology companies, known as Big Tech, are the economic engine of the United States. Many view this boom with skepticism and warn that it could be a bubble similar to the dot-com bubble. But very few doubt the impact that AI will have on the course of humanity in the coming decades.
At the heart of this momentum is British scientist Mustafa Suleyman (London, 41 years old), current CEO of Microsoft AI and co-founder, in 2010, of DeepMind, where he was director of product and later head of AI applications. Although artificial intelligence had been advancing quietly, almost lazily, for decades, DeepMind achieved seemingly unattainable milestones, such as AlphaGo, the AI system that defeated Lee Sedol, world champion of Go, one of the most complex combinatorial games, by a decisive score of 4-1. By combining neural networks—developed by the team of Geoffrey Hinton, now a Nobel laureate in Physics, at the University of Toronto—with large-scale reinforcement learning, the machine began to devise strategies that humans had never imagined. It was a true eureka moment.
When Google acquired DeepMind in 2014 and the AI race accelerated, Suleyman was already one of its leading pioneers. His position allowed him to understand the revolution from within and develop a broad perspective on its opportunities and risks. The greatest of these is the colossal impact that the convergence of AI, robotics, and synthetic biology will have on society: from the transformation of work and human relationships to major contemporary challenges such as climate change, healthcare systems, biogenetics, and geopolitical competition.
This reflection crystallized in *The Coming Wave: Technology, Power, and the Twenty-first Century's Greatest Dilemma*, an enthusiastic yet cautious journey through the history and future of technology. AI promises to transform everything, but it also harbors real and imminent dangers that, if ignored, could unleash social conflict, nihilism, and destruction. His concerned optimism underscores the need for a political framework to govern AI, something impossible to delegate to technology and difficult to achieve consensus on in a world marked by polarization, authoritarianism, and competition between the United States and China.
This interview is the result of a series of email exchanges.
Q. As an AI pioneer and co-founder of DeepMind, did you ever imagine, in your youth, that you would play such a pivotal role in this era of rapid technological development? How did you personally experience the “frenzy phase” described by Venezuelan economist Carlota Pérez, which you reference in your book?
A. I’ve always been interested in things that can have a massive, positive impact on the world, but my path to artificial intelligence was unusual. I got involved in the Copenhagen climate negotiations in 2009, at the age of 24. It was an important experience because it taught me that many of the traditional institutions we have to solve our biggest and most urgent problems simply aren’t up to the task. At the same time, I was seeing digital platforms being deployed on a massive scale and having a huge impact. It seemed to me that artificial intelligence could bridge those two worlds. That was the motivation: to build an AI capable of making a significant difference to the big challenges of our time—climate change, rising healthcare costs, stagnant productivity, loneliness, and disconnection. More than the hype, one of the biggest challenges we faced in the early days of DeepMind was that hardly anyone was talking about artificial intelligence. It was unpopular and seen as something quite strange. Although AI is everywhere today, back then it was just emerging from a long “winter.” So we had to work hard simply to convince people that AI and artificial general intelligence were real and valuable ideas.
Q. From your position within the industry, how much control do today's technologists really have over the disruption they themselves are driving? How do they navigate this environment between competitors and allies?
A. Technologists must always take responsibility. We can't control everything that happens downstream of what we create, but that doesn't eliminate the obligation to make the right decisions.
Q. Technological advances such as large-scale language models have gone from science fiction to the daily lives of billions of people. What strategies can help society prepare for such sudden and large-scale changes?
A. Transitions like this are complex. In the past, these changes happened relatively slowly, so their consequences were diluted in the background. No one remembers exactly when ATMs appeared or when self-service kiosks in supermarkets became commonplace. This transition will be more pronounced because it is faster, more direct, and will affect almost everyone. That is why I have advocated for containment and for installing guardrails around AI. We must limit the pace of change so that society can absorb it. How quickly can we retrain workers and upgrade their skills? How much can the welfare state support people while they change jobs or professions? That is probably the biggest challenge today, because there are overwhelming forces driving the deployment of AI to millions, even billions, of people. We have to manage that while smoothing the transition wherever possible.
Q. You have predicted that we will live in an era of surprises brought about by new technologies. What kind of positive and negative surprises should we expect as this wave progresses?
A. My greatest fear is that malicious actors will use technology in dangerous ways. My greatest hope is that all living people can experience the benefits of an intelligence revolution that empowers them to achieve and do more, wherever they are.
Q. Robots, brain implants, gene editing, synthetic life, and artificial intelligence are, as you write in *The Coming Wave*, signs of a historic turning point. What are the main promises or positive aspects of this disruption if things go well?
A. Artificial intelligence distills the essence of the global economy—intelligence—into an algorithmic construct. In the short term, it will help people be more productive, which should drive significant global economic growth and offset any losses. But this will require a massive response from governments to ensure that everyone maintains their living standards, receives training, and enjoys a better quality of life than they do today. Those building AI must focus on enhancing workers' physical and cognitive capabilities—allowing them to retain control of the AI, an approach known as augmentation—and not on replacing humans. Regulators and policymakers should already be thinking about the right tactics and mechanisms to help everyone through this transition. If we get it right, we can tackle some of humanity's greatest challenges, from clean energy to affordable healthcare for all.
Q. Many fear that AI is already making crucial human skills obsolete. How do you envision it not only offsetting job losses but also addressing major challenges such as climate change—given its own voracious energy consumption—or access to healthcare and worker empowerment? As Nobel laureate in Economics Daron Acemoglu argues, AI’s current trajectory is more focused on automation and displacement than on enhancing workers.
A. I recently announced the formation of a new team at Microsoft AI: the Superintelligence team, created to find a new vision for humanist superintelligence. Here’s how I define it: humanist superintelligence is advanced AI designed to remain controllable and aligned with its mission to be firmly at the service of humanity. It’s AI that amplifies human potential, not replaces it. This is our answer to what I see as the most important question of our time: how do we ensure that the most advanced forms of AI remain under human control while making a tangible difference? Humanist superintelligence offers a safer path. Imagine AI assistants that alleviate the mental burden of daily life, boost productivity, and transform education through individualized, adaptive learning. Think of medical superintelligence capable of making expert-level diagnoses accurately and at low cost, which could revolutionize global health—capabilities that our healthcare team at Microsoft AI has already demonstrated. And consider AI-driven advances in clean energy that allow us to generate and store power affordably, and to remove vast quantities of carbon, meeting growing demand while protecting the planet. The potential benefit to humanity is enormous: a world of rapid advancements in living standards, science, and new forms of art, culture, and growth. With humanist superintelligence, I believe these are not speculative dreams, but achievable goals that can deliver tangible improvements in the daily lives of millions. We must celebrate and accelerate technology because it has been the greatest engine of human progress in history. That is why we need much, much more of it.
Q. How close are we to technology surpassing human agency and control? What does it mean to confront the “gorilla problem”—that is, to create something smarter than ourselves?
A. The goal should be to create AI that supports and empowers humans. That means building contained and aligned systems, designed with clear intent, explicit trade-offs, and appropriate safeguards. It's about making key design and engineering decisions early on and then sticking to the principles behind them.
Q. AI is often described as a black box. Is it realistic to expect that we can control it and ensure that humans will maintain a meaningful role, given its drive towards autonomy?
A. Yes, I think so. At Microsoft AI, we built Copilot, an AI assistant for everyone, which we launched in mid-October. It's a very new and different technology, unlike any tool we've used before: much richer and more dynamic. An AI assistant that will accompany you through life, grow with you, adapt to your needs and quirks, remember what matters, navigate the web, and act on your behalf: from booking a trip to managing everyday tasks or helping you with complex activities. And it will do all of that aligned with your interests. This is something new: supporting human roles and bringing out the best in us.
Q. Even AI experts sometimes don't fully understand how these systems work. What serious risks do you see in this lack of transparency, and how can they be minimized?
A. It's crucial that we take responsibility for what we do. Microsoft has one of the strongest security teams in the world, and security is our number one priority. Containment means that artificial intelligence must always be controlled and demonstrably accountable. Being accountable means being transparent: we must always have a clear explanation of what it does and why it does it. And there must be enforceable limits on its capabilities, with verifiable and demonstrable controls.
Q. The emergence of a techno-elite or “superclass” is shifting power from states to those who control digital infrastructure, data, algorithms, and biogenetic advances. What threats does this pose to democracy and social equity? What measures could prevent a future dominated by a techno-oligarchy?
A. While training large models isn't something just anyone can do, there are opposing trends to the ones you mention that are worth highlighting. Technology is spreading incredibly fast, moving from cutting-edge to open source in a matter of months. Small, lightweight models are improving every day. That means that while large tech companies will play a role, so will many others. Beyond that, governments and companies still have a huge role to play in upholding our social contract. Both should make that clear. I certainly do.
Q. You argue that regulation alone cannot contain these technologies. What would a practical and effective containment strategy actually consist of?
A. Containment must not only keep technology under control but also manage its consequences for societies and individuals. It must unify engineering, ethics, regulation, and international collaboration within a coherent framework. To manage the AI wave, we need a containment program that operates in ten concentric layers, from the technical core outward. It begins with built-in safety measures—concrete mechanisms to ensure safe outcomes—and continues with auditing systems for transparency and accountability. It involves using bottlenecks in the ecosystem to buy time for regulators and to develop defensive technologies. It also involves fostering responsible creators who build containment into their systems from the start, rather than having it imposed by outside critics; and reforming corporate incentives to prevent a reckless competitive race. Governments must license and monitor technologies. New international treaties and even new global institutions will be needed to coordinate oversight. We must also cultivate a culture that embraces the precautionary principle, while social movements push for responsible change. All these measures must be integrated into a comprehensive program with mutually reinforcing mechanisms to maintain social control over technology at a time of exponential advancement. Without that, any other debate—about ethics, risks, or benefits—becomes irrelevant. And none of this will be easy.
Q. Thinking about what you say, I wonder: Is it viable to promote a Paris Agreement for AI or to create an independent body with real power that is accepted by the main players?
A. Yes, but it will require a tremendous amount of work. The key is to find ways to create win-win situations where countries can collaborate to secure benefits for their societies while managing risks together. There are good historical precedents: the Montreal Protocol on CFCs (chlorofluorocarbons), the Paris Agreement on climate change, and arms bans. That is the challenge of our time.
Q. What role should the humanities—philosophy, history, ethics, the arts—play in AI research, development, and application? Are interdisciplinary perspectives being adequately integrated, or do we risk overlooking crucial humanistic insights in the race to innovate?
A. I have a background in the humanities, and it's something deeply important to me. I believe there's a huge role for diversity of ideas, disciplines, and perspectives in AI. In fact, it's essential. We're at a point where the tools are so advanced that you don't need to be an engineer to lead product or engineering teams. We have a new kind of clay with which to sculpt experiences in unprecedented ways. It's an incredible opportunity for writers and artists. Many people at Microsoft AI come from diverse backgrounds: educators, therapists, linguists, comedy writers, advertisers, designers, gamers. I'm interested in bringing in genuine creatives who don't fit traditional molds but have breadth and scope, and placing them at the heart of product creation, alongside engineers and managers.
Q. Are these voices truly influencing core AI design decisions? Or are they being used to “humanize” already defined products within a corporate framework?
A. My call to everyone is to get involved. There is a huge role for people to play in influencing the outcomes; nothing is certain or inevitable, and we all must have a direct stake in what happens. Ultimately, society will decide what is created and what is not. We tend to overestimate the short-term impact of technology and underestimate its long-term consequences. That means there is still time and enormous room for maneuver for all of us to participate, join movements for positive change, and learn how to influence these tools to achieve the best possible results.
Q. To conclude on a human or humanistic level: can empathy truly be programmed into AI, or is it a dangerous illusion?
A. Yes, it's possible. Advances in recent years show that it is. But we shouldn't confuse it with human empathy or see it as a replacement. At Microsoft AI, we call it "personality engineering," and it's an important part of designing supportive systems that are responsive and aligned with your interests. A genuinely rich emotional experience can be created. But it's not about faking a real emotion or replacing it. An empathetic AI should help you connect with other human beings. It won't pretend to be something it's not, because that would break the illusion. It's a delicate balance, but one we're determined to achieve.
Boris Muñoz, El País, Spain
