South Korea's LG Challenges OpenAI With Sovereign AI Model That Beats GPT-5 Mini in Visual Tests

LG AI Research has released Exaone 4.5, a new artificial intelligence model that processes both text and images, positioning it as a competitive entry in South Korea's government-backed sovereign AI initiative. In internal benchmark tests, the model scored an average of 77.3 across five visual reasoning metrics in science, technology, engineering and math, ahead of OpenAI's GPT-5 mini at 73.5 and Anthropic's Claude Sonnet 4.5 at 74.6. The achievement marks a significant milestone in South Korea's push to develop globally competitive AI models independent of Western tech giants.

What Is South Korea's Sovereign AI Competition?

South Korea's Proprietary AI Foundation Model project, overseen by the Ministry of Science and ICT, represents a multiyear public-private effort backed by roughly 530 billion won, approximately $358 million, to develop globally competitive AI models by 2027. The initiative works through a phased elimination process, with teams competing across multiple evaluation rounds. K-Exaone, LG's larger 236-billion-parameter model competing in the project, scored highest across all categories in the first evaluation round in January and faces a second review after August. This competitive structure reflects a broader global trend of nations investing heavily in domestic AI infrastructure to reduce dependence on foreign technology providers.

The sovereign AI movement reflects growing concerns about technological independence and data sovereignty. Countries recognize that controlling their own AI development capabilities provides strategic advantages in economic competitiveness, national security, and the ability to tailor AI systems to local needs and values. South Korea's investment demonstrates how governments are treating AI development as critical infrastructure comparable to semiconductors or telecommunications.

How Does Exaone 4.5 Achieve Competitive Performance at a Smaller Size?

Exaone 4.5 operates at 33 billion parameters, roughly one-seventh the size of K-Exaone's 236-billion-parameter model, yet LG claims it matches K-Exaone's text reasoning performance. The company attributes this efficiency to a hybrid processing architecture and faster inference techniques that let the smaller model punch above its weight class. For context, parameters are the internal variables an AI model adjusts during training; fewer parameters typically mean faster processing and lower computational costs, making the model more practical for real-world deployment.

The technical accomplishment has practical implications for organizations seeking to deploy AI systems without massive computational infrastructure. A smaller, efficient model reduces the energy consumption and hardware costs required to run AI applications at scale. This efficiency matters particularly for companies and governments with limited budgets or those operating in regions with constrained computing resources.

What Are the Key Elements of LG's AI Development Strategy?

  • Multimodal Expansion: LG is progressively adding capabilities beyond text and images, with plans to integrate voice, video, and physical environment understanding into future versions of the model.
  • Open Research Approach: Exaone 4.5 has been released as open weights on Hugging Face for research and educational use, allowing the broader AI community to study and build upon LG's work while maintaining commercial licensing restrictions.
  • Industrial Application Focus: The company explicitly targets real-world decision-making in industrial settings rather than consumer-facing chatbot applications, reflecting a different market strategy than OpenAI or Anthropic.

"Starting with this model, we will expand AI's understanding to voice, video and physical environments to build AI that makes real decisions in industrial settings," said Lee Jin-sik, head of the Exaone Lab at LG AI Research.

Why Do the Benchmark Comparisons Matter, and What Are Their Limitations?

LG's benchmark results represent a notable achievement, but context matters. The comparison targets are mid-tier models from their respective companies; GPT-5 mini is a lightweight variant in OpenAI's lineup that has already been succeeded by GPT-5.4 mini as of March. Additionally, some figures in LG's published benchmark tables are marked as self-measured rather than independently verified, meaning external auditors have not confirmed the results. This distinction matters because companies have inherent incentives to present their work favorably, and independent verification provides stronger evidence of genuine performance gains.

The visual reasoning benchmarks test AI systems on tasks involving science, technology, engineering, and math problems that require understanding images and diagrams. Scoring 77.3 versus 73.5 represents a meaningful but modest performance advantage. For context, the difference amounts to roughly 5% better accuracy on these specific visual reasoning tasks. Whether this translates to meaningful real-world advantages depends on the specific applications LG targets.
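The "roughly 5%" figure can be checked directly from the reported scores. A minimal sketch, using only the two averages cited above (the variable names are illustrative, not LG's):

```python
# Benchmark averages reported by LG (self-measured figures)
exaone_45 = 77.3    # Exaone 4.5, average of five visual STEM reasoning metrics
gpt5_mini = 73.5    # OpenAI GPT-5 mini on the same metrics

# Relative improvement of Exaone 4.5 over GPT-5 mini, in percent
relative_gain = (exaone_45 - gpt5_mini) / gpt5_mini * 100
print(f"{relative_gain:.1f}%")  # about 5.2%, i.e. "roughly 5%"
```

Note this is a relative improvement on the benchmark scores themselves; whether a gap of this size matters in practice depends on the downstream task.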

What Does This Mean for the Broader AI Landscape?

LG's progress in sovereign AI development reflects a global pattern where major technology companies and governments are investing billions to reduce dependence on a handful of Western AI providers. South Korea's $358 million commitment positions the country alongside other nations building domestic AI capabilities. The success of Exaone 4.5 demonstrates that competitive multimodal AI models can be developed outside the OpenAI-Anthropic-Google ecosystem, though the comparison to mid-tier models rather than flagship systems suggests the gap remains significant.

The release of Exaone 4.5 as open weights for research purposes also signals a different approach to AI development than some competitors. By allowing researchers and educators to access the model, LG builds community engagement and attracts talent to its ecosystem while maintaining commercial control through licensing restrictions. This strategy balances openness with business interests, potentially accelerating innovation while protecting revenue streams.

As South Korea continues its sovereign AI competition through 2027, the results will likely influence how other nations approach their own AI development strategies. Success in building competitive models domestically could validate the investment thesis that countries need not rely entirely on foreign technology providers for critical AI capabilities.