Jason Kwon, Chief Strategy Officer (CSO) of OpenAI, smiles during an interview with Chosunilbo at OpenAI’s headquarters in San Francisco on Aug. 7, 2024. He said, "No one knows what the outcome will be when AGI emerges," and added, "AI should be regulated, and safety measures will be put in place." /Oh Ro-ra

The artificial intelligence (AI) revolution, which is transforming paradigms across global industries, began in earnest with the release of OpenAI’s ChatGPT in November 2022. In May, OpenAI once again astonished the world by unveiling GPT-4o, a generative AI model capable of conversing with humans in real time and even detecting emotions, reminiscent of science fiction films. Fears of an era in which AI could dominate humanity have intensified. Jason Kwon, Chief Strategy Officer (CSO) at OpenAI, oversees the company’s future strategy and handles the ethical and legal issues surrounding AI, in addition to technology development. A Korean-American, he is helping shape global standards for the technology.

In an interview with Chosunilbo at OpenAI’s headquarters in San Francisco on Aug. 7, Kwon addressed the common view that the core technology for artificial general intelligence (AGI) is still a few years away. He said, “Internally, we are assuming it could come sooner than expected and are preparing safety measures accordingly.” While many predict AGI will emerge in three to five years, Kwon considers it possible that it could arrive even sooner. However, he added, “We won’t suddenly release an all-encompassing AI overnight.” When asked if this is to avoid a significant societal shock, he confirmed, “Yes.” In other words, although AGI development is well advanced, its pace is being carefully managed to prevent potential negative consequences. This is Kwon’s first interview with Korean media.

Does this mean AGI’s emergence is not far off?

“We are assuming that this technology will soon be realized and are seeking ways to manage it appropriately. However, just because the technology exists doesn’t mean it will immediately become a product. It’s similar to how lighting and appliances didn’t appear the day after electricity was invented. There can be a long delay between the development of core technology and its application in society.”

“Reinvesting AI business profits in safety research”

There is speculation that the next model, GPT-5, might be close to AGI.

“(Smiling) We will discuss more at the time of the release.”

The OpenAI CSO did not provide clear answers about the release timing or performance of GPT-5, OpenAI’s next-generation AI model. Although it was initially expected to be unveiled at the developer conference in October, the tech industry now believes it might be delayed until next year.

What aspects of AGI does OpenAI consider most dangerous?

“There are four main areas that could be considered ‘catastrophic risks’: extreme persuasive power, cyberattacks, support for nuclear, chemical, and biological weapons, and the autonomy of AI models.”

The extreme persuasive power Kwon mentioned refers to AI’s potential to use a wide range of data to make humans blindly believe certain things. AI autonomy refers to AI creating and learning from its own data. Regarding support for chemical and biological weapons, Kwon said, “If there are attempts to use AI for biologically risky tasks, we monitor and manage the users. So far, AI does not seem to be more dangerous than search engines like Google.” He added, “However, the greatest risk lies in AI creating knowledge that never existed before and exceeding human control.”

In May, OpenAI Chief Strategy Officer (CSO) Jason Kwon (left) and Chief Executive Officer (CEO) Sam Altman (right) posed for a photo with Lance Braunstein, Head of Aladdin Engineering and a member of BlackRock's Global Executive Committee, at a BlackRock-hosted event. /Screenshot from Lance Braunstein's LinkedIn

Didn’t OpenAI disband the Superalignment Team, the group responsible for AI control?

“There’s been some confusion around this, but the Superalignment Team’s work on AI control hasn’t been discontinued. Instead, the team members have been reassigned to continue this critical work within the company. We’re still focused on developing safety technologies to block dangerous user requests and monitor any attempts to use AI models for malicious activities, like cyberattacks.”

‘Alignment work’ refers to the technology that guides and controls AI systems to operate according to human-intended goals and ethical principles.

“Our AI control team disbanded? A big misunderstanding.”

Some critics argue that AI companies are now prioritizing profitability over safety.

“That’s simply not true. Profitability and safety can go hand in hand. A successful business can reinvest its profits into safety research, creating a sustainable model. Just like the automotive industry used earnings from car sales to fund research that ultimately made vehicles safer, our ability to generate revenue directly supports the safety studies we’re currently conducting. Without that financial success, these crucial safety efforts wouldn’t be possible.”

Is humanity equipped to handle the potential threat of AGI?

“No one knows exactly when AGI will arrive, but companies need to be ready. My job is to offer insights into the potential psychological and economic impacts of AGI, advise on necessary laws, and guide how businesses should collaborate with governments globally. We’ve always believed that AI should be regulated, and that commitment remains unchanged.”

OpenAI has recently faced criticism for developing technology to detect students’ misuse of AI but not making it publicly available.

“That technology is still experimental and far from perfect. Our goal is to create a world where technology benefits people, not one where they’re unfairly accused because of it. There are often complex reasons behind improper use of AI, and relying solely on censorship tools that only detect AI usage could lead teachers to make misguided decisions. We can’t assure people that everything is fine while offering technology that isn’t fully reliable.”

“Korean talent needs to enhance their networking skills.”

Recently, CEO Altman stated that the United States should take the lead in AI development.

“He didn’t mean that only the U.S. should have AI technology. The focus should be on promoting ‘democratic AI’ and preventing authoritarian regimes from using AI for harmful purposes. Given that many leading AI companies and experts are based in Silicon Valley, the U.S. has a responsibility to lead by example in the ethical use and development of AI. I believe allied nations, especially Korea, will play a crucial role in setting a positive precedent in AI.”

What advice would you give to Korea’s AI talent?

“Korea’s AI capabilities are already exceptional, with some of the top researchers at OpenAI being of Korean descent. However, one area where Korean talent can improve is in leveraging the power of networking. I’m naturally quite introverted myself, but in Silicon Valley, I reached my position by constantly meeting people, communicating, and seizing opportunities. Here, simply being excellent and diligent isn’t enough to guarantee success. I hope more Koreans, who value humility, will adopt a more proactive approach.”