It was one of the first highly publicized ‘human versus machine’ battles. Go master Lee Sedol faced off against AlphaGo, an AI program developed by Google’s DeepMind, in Game 2 of their match at the Four Seasons Hotel in Seoul on March 10, 2016. AlphaGo, having won the first game the day before, unsettled the Go master with an unconventional move.
This was when AlphaGo played the unforgettable ‘Move 37’, a move set to go down in the board game’s 2,500-year history. It was unusual and unprecedented - experts said no professional Go player would have considered it. Lee was taken aback and hesitated to respond, contemplating for about 10 minutes before playing on; the game ultimately ended in AlphaGo’s victory. DeepMind co-founder Mustafa Suleyman, who was watching the match from the control room at the time, later reflected, “AlphaGo’s victory heralded a new era of AI.”
Nearly eight years later, Suleyman, who left DeepMind and founded the generative AI startup Inflection AI, is both optimistic and concerned about the coming AI era. Over the past few years, he has come to see that the potential risks AI poses - from massive AI-powered cyberattacks to automated warfare capable of destroying nations - could be catastrophic, even if the likelihood of these threats materializing is slim.
This is why AI experts, including Suleyman, stress the importance of AI containment. By containment, he refers to the ability to monitor, curtail, control, and potentially shut down the technology if necessary. Containment acts as a guardrail, or protective shield, to ensure humanity maintains control over AI. Suleyman details his thoughts in his book, “The Coming Wave,” which was translated and published in Korea in January this year. In an email interview with ChosunBiz just before he joined Microsoft as AI chief, Suleyman shared his thoughts on how AI will impact humanity in the future. Here’s what he had to say.
AlphaGo’s 2016 victory is ingrained in the minds of many Koreans. Although the Go master lost the match, people focused more on his single win than on the losses - a symbol of hope that humanity might still prevail over AI. Do you think such hope still persists?
“Pitting humans versus AI is probably the wrong way to think about this over the long term. Of course, competition encourages that framing, but the right approach is to recognize that AI will be able to do a huge amount, mostly better than us, and that this takes the form of a massive augmentation of our capabilities. AI is a force amplifier, probably the most powerful in history. We shouldn’t think about wins and losses - we should think about how the whole set of things we can do is about to expand, for good and for bad.”
What pivotal moment made you realize that AI technology containment was necessary, and what were the reasons?
“There wasn’t a single pivotal moment. It’s more the result of watching these systems up close for years, tracking their development and rising capabilities, seeing how fast they are moving forward, and then also getting an appreciation for how entrenched and powerful the forces driving them forward really are. All the incentives behind AI - things like perceived strategic necessity or the commercial rewards on offer - are so immense and ingrained that we can’t just wish them away. It means that advanced new forms of technology are on the way. Watching these trendlines develop over the years made it clear that we need some kind of project of containment to ensure they stay under the meaningful control of societies.”
Could you explain to those who haven’t read your book why, despite optimistic views on new technology, AI containment is deemed critical?
“I am an optimist, so I believe these technologies can offer so much. However, I also think it comes back to that point about empowerment I made: bad actors, as well as a huge range of good uses, will be empowered. Whatever your goals, they will be much easier to achieve. Clearly, this is of great benefit to most people, but it also applies to terrorists and dictators. At the same time, a lot of the capabilities of AI will put a huge strain on the liberal democratic nation-state system, the very system that we need to regulate and control. All of that is why containment is critical. I don’t see a project of containment as being anti-technology in any way. Containment is about delivering world-changing technology safely.”
MIT professor Daron Acemoglu said the government needs to eliminate incentives that speed up automation to avoid exacerbating AI-induced inequalities. Do you believe such measures could potentially stifle technological advancement?
“They potentially could, certainly. There may be some instances in which something like this is appropriate, a necessary step. However, it’s also important to remember that such an approach has downsides. Alongside risks, you must remember that AI has huge benefits, and slowing it down means slowing down those benefits. For instance, AI will be a major boon to areas like education and healthcare, where we need cost-effective results. It will offer new breakthroughs in both and enable a degree of customization beyond anything we’ve dreamed of up until now. AI will be a Personal Intelligence for everyone, and it can help you manage your life and solve problems. AI will unleash a wave of economic growth and help us address the grand challenges of the 21st century, from climate change to running our world in the face of aging populations. It just needs to be properly restrained.”
You seem to be both optimistic and concerned about AI technology. Your book mentioned the risks of pursuing and not pursuing new technologies. How can we manage to strike the right balance in this dilemma?
“We need to understand how AI develops and constantly course-correct our company and policy response accordingly. It’s one of the reasons I have called for an equivalent of the Intergovernmental Panel on Climate Change (IPCC) for AI. The issue with AI is that there is little consensus on where we are in terms of risk, where things will reach, and when. We need an impartial and expert body to guide us to constantly modulate our response and weigh the risks and rewards appropriately.”
What is the impact of AI on the labor market? Historically, new technologies have both eliminated jobs and created more. However, you have expressed concern that job creation may not happen as quickly following the advent of AI.
“In the short term, AI will help make people more productive. But over the long term, it will be able to do more and more. We can create new jobs, but I don’t think they will come in the quantity or quality that we’ll need. AI isn’t static but develops and learns, so it won’t just stay confined to a few areas. In the future, it will be able to take on entire roles, just as humans do.”
What should the government do when people lose their jobs due to AI?
“This will require a massive response from governments to ensure that everyone maintains their living standards, receives retraining for new technologies or jobs, and enjoys a better quality of life than today, not worse.”
A CTO I recently interviewed mentioned that “future humanity will be divided between those who can and cannot use AI.” How should ordinary individuals prepare for this upcoming “wave”?
“I’m not sure that’s true. One of the key points about AI is that it now speaks our language. You can produce detailed code simply by asking a large language model in ordinary natural language. This means using AI is increasingly becoming part of everyday conversation, and this trend will continue. Today, creating and implementing AI is an extremely technical and sophisticated task. Ultimately, however, everyone can already use AI simply by talking, and soon, more and more aspects of it will work the same way.”
In a recent media interview, you said, “Nvidia chips should only be sold to companies that adhere to AI ethics.” Considering Korea’s significant investment in developing high bandwidth memory (HBM), do you think Korean companies should actively engage in this ethical movement?
“Absolutely. Creating ethical technology, which is one of the most important challenges of the 21st century, requires buy-in from everyone involved. There can’t be any gaps here. It is a truly global team effort that we all need to engage in for the benefit of everyone.”
Considering that effective containment seems to require global consensus, which is challenging in the current geopolitical climate, how do you envision overcoming the growing world polarization?
“This is a really hard question. There are no easy answers. That said, there are a lot of useful precedents we can learn from: the Treaty on the Non-Proliferation of Nuclear Weapons, the Paris Agreement on emissions, the Montreal Protocol phasing out CFCs, the Biological Weapons Convention, the various environmental and safety laws that have improved conditions for billions, and the norms in genetic engineering established at the Asilomar Conference in 1975. Each is an example of global collaboration, often at times of major tension. Ultimately, every nation has an interest in getting this right, and wherever we can, we need to build bridges, encourage collaboration, and fight against increasing polarization.”
What book are you most engaged with currently, and do you make frequent use of chatbot AI in your everyday life?
“Two books I’ve recently enjoyed are “Material World” by Ed Conway and “How Not to Be a Politician” by Rory Stewart. The former is an exploration of the different materials the modern world is made from, everything from sand to oil. So much of AI is about software and code and this book is a great antidote. Rory’s book is a memoir of a political career in Britain, but it highlights so many of the challenges around democratic politics today that have a truly global resonance.”
Who is Mustafa Suleyman?
Mustafa Suleyman, the CEO of Microsoft AI, describes himself as a “serial tech entrepreneur.” After dropping out of Oxford University at the age of 19, he founded a charity for Muslim youth and Reos Partners, an international conflict resolution consulting firm. In 2010, he co-founded the AI startup DeepMind with Demis Hassabis (current CEO) and Shane Legg (current chief AGI scientist), whom he knew from school. Suleyman and his colleagues focused on analyzing human intelligence using machine learning and neuroscience and implementing it in computers. In 2014, Google acquired DeepMind for over $400 million (approximately 533.5 billion won). At DeepMind, Suleyman played a crucial role in the development of AlphaGo and later served as Google’s Vice President of AI Product and Policy, where his team developed the large language model “LaMDA.” Google’s recently announced “Gemini” is a multimodal AI model based on LaMDA and other LLMs developed by Google.
In 2022, Suleyman left Google and co-founded Inflection AI, a generative AI company, with LinkedIn co-founder Reid Hoffman. The following year, they launched “Pi,” a generative AI chatbot similar to ChatGPT, which gained industry attention for its human-friendly conversational style and is noted for its empathy. Inflection AI has raised approximately $1.5 billion (approximately 2 trillion won) in funding from notable figures such as Microsoft founder Bill Gates and companies such as NVIDIA, earning unicorn status as a privately held company valued at over $1 billion.
In March of this year, Suleyman was appointed as CEO of Microsoft’s AI business.
This article was originally published on February 26.