
LG introduced South Korea’s first reasoning artificial intelligence (AI) model on March 18. Unlike traditional AI, which retrieves answers from pre-learned data, reasoning AI produces answers through logical, step-by-step thinking processes similar to humans. A key example is China’s DeepSeek, which recently gained global attention for its cost-efficient, high-performance model. As tech giants like OpenAI and DeepSeek compete in reasoning AI development, LG’s new model puts South Korea in the race. However, LG will not release it for public use, instead reserving it for internal product development.

Graphics by Yang Jin-kyung

LG AI Research unveiled Exaone Deep, led by its main model, Exaone Deep-32B. The model has 32 billion parameters, the internal connections an AI model uses to learn and reason. More parameters generally mean better performance, but they also require more AI chips. Because of this, companies are increasingly focused on wringing the best performance out of fewer parameters.

DeepSeek’s R1 has 671 billion parameters, while Exaone Deep-32B has only about 5% of that. Despite this, performance tests show that LG’s model is on par with DeepSeek-R1. In benchmark comparisons with leading reasoning AI models like DeepSeek and Alibaba’s QwQ-32B, Exaone Deep-32B particularly excelled in mathematics. It scored 90 in the 2024 U.S. Mathematical Olympiad, outperforming DeepSeek-R1 (86.7) and Alibaba’s QwQ-32B (86.7). In South Korea’s 2025 CSAT math section, it achieved 94.5, the highest among competing models. It also performed well on doctoral-level science problems, scoring 66.1, ahead of Alibaba’s QwQ-32B (63.3).
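As a quick sanity check on the stated ratio, a minimal sketch using only the parameter counts reported above:

```python
# Reported parameter counts: DeepSeek-R1 vs. Exaone Deep-32B.
deepseek_r1_params = 671e9   # 671 billion parameters
exaone_32b_params = 32e9     # 32 billion parameters

ratio = exaone_32b_params / deepseek_r1_params
print(f"Exaone Deep-32B uses {ratio:.1%} of DeepSeek-R1's parameters")
# roughly 4.8%, i.e. "about 5%" as stated
```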

However, it fell behind in coding and language skills. In the Massive Multitask Language Understanding (MMLU) test, it scored 83, trailing Alibaba (87.4) and DeepSeek (90.8). An industry expert explained, “Reasoning AI models are designed for solving math and science problems, so their language capabilities naturally lag behind larger models.”

LG AI Research also introduced the lightweight Exaone Deep-7.8B and on-device Exaone Deep-2.4B models. The institute said, “Despite being just 24% of the 32B model’s size, the lightweight version retains 95% of its performance, while the on-device model achieves 86% performance at only 7.5% of the scale.” LG has also released the model’s source code as open-source for developers, similar to DeepSeek.
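The institute’s size figures can be reproduced directly from the three models’ parameter counts (a sketch; only the counts named above are used):

```python
# Relative sizes of the Exaone Deep family, from parameters in billions.
sizes_b = {
    "Exaone Deep-32B": 32.0,
    "Exaone Deep-7.8B": 7.8,
    "Exaone Deep-2.4B": 2.4,
}

base = sizes_b["Exaone Deep-32B"]
for name, params in sizes_b.items():
    print(f"{name}: {params / base:.1%} of the 32B model")
# 7.8 / 32 is about 24.4% and 2.4 / 32 is exactly 7.5%,
# matching the "24%" and "7.5%" figures LG cites
```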

LG offers the source code for free, but the AI model itself is used only internally. Deploying it for public use, like ChatGPT, would require massive data centers and cost at least several trillion won.

Among South Korean companies, Naver is also developing AI models. It launched HyperCLOVA X in 2023 and has since cut its parameters by about 60% while improving reasoning performance; Naver said the new model’s operating costs have improved by more than 50%. The company is also building a dedicated reasoning AI model, and leading AI startup Upstage has likewise recently begun full-scale development of reasoning AI.