Upstage is building anticipation for an AI model that doesn't exist publicly yet, claiming internal tests show it performing two to three times better than its current flagship. The South Korean company hasn't released Solar Open 2, published benchmarks, or even made a demo available. Yet the performance claims from CEO Sung-Hoon Kim are already shaping expectations in Korea's government-led sovereign AI race.

Why Are Companies Making Performance Claims Before Release?

In the AI industry, this is unusual. Typically, companies publish benchmarks and let the results speak for themselves. Performance claims usually come after independent testing, not before. But Upstage's move signals something important: the company is confident in its technical direction, and it's positioning itself strategically ahead of the second evaluation round of South Korea's national AI project.

The key insight lies in how Upstage is testing the model. The company is using ablation testing, a method in which engineers systematically remove or modify different components to see which ones actually contribute to performance gains. If Solar Open 2 is already showing measurable improvements at this stage, it suggests the core architecture is close to finalized, not still in early experimentation.

What Does Upstage's Hiring Tell Us About Solar Open 2?

The company's recent recruitment activity provides concrete clues about where the model is headed. Upstage has been actively hiring talent in post-training and reinforcement learning, with a particular focus on candidates experienced in RLVR, which stands for reinforcement learning with verifiable rewards. This is a training method that improves reasoning by training models on problems with clear right-or-wrong answers.

Post-training is the phase where a model gets refined for real-world use. It improves how well the AI understands what users actually want and how reliably it delivers useful responses.
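The idea behind RLVR is conceptually simple: instead of relying on a learned reward model, the training signal comes from an automatic check against a known correct answer. The sketch below illustrates that idea only; the function names, answer format, and sample problem are hypothetical placeholders, not Upstage's actual pipeline.

```python
# Minimal sketch of a verifiable reward, the signal used in RLVR-style
# training. The reward comes from an automatic correctness check, not a
# learned reward model. All names and data here are illustrative.

def extract_final_answer(completion: str) -> str:
    """Take the text after the last 'Answer:' marker as the model's answer."""
    marker = "Answer:"
    if marker not in completion:
        return ""
    return completion.rsplit(marker, 1)[1].strip()

def verifiable_reward(completion: str, ground_truth: str) -> float:
    """1.0 if the extracted answer matches the known solution, else 0.0."""
    return 1.0 if extract_final_answer(completion) == ground_truth else 0.0

# Scoring a batch of sampled completions for one math problem:
completions = [
    "12 * 12 = 144. Answer: 144",   # correct reasoning and answer
    "12 * 12 = 124. Answer: 124",   # wrong answer, reward is zero
]
rewards = [verifiable_reward(c, "144") for c in completions]
# Completions earning reward 1.0 get reinforced; the policy update
# itself (e.g. PPO or GRPO) is outside the scope of this sketch.
```

Because the reward is binary and automatically checkable, it scales to millions of problems without human labeling, which is why it pairs naturally with the post-training phase described above.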
When you combine post-training expertise with RLVR, a clear direction emerges: Solar Open 2 is likely less about scaling up model size and more about improving reasoning ability and practical usability.

How to Evaluate AI Models Beyond Just Parameter Count

- Reasoning Capability: Models trained with verifiable rewards can solve complex problems by learning from clear feedback, not just pattern matching from training data.
- Real-World Performance: Post-training refinement ensures the model understands user intent and delivers practical responses, not just technically correct ones.
- Efficiency Metrics: A model with fewer parameters that performs better indicates smarter architecture and training, not just brute-force scaling.

Upstage's strategy reflects a broader shift in the AI industry. The company made its mark in the first round of Korea's sovereign AI initiative with a clear positioning: efficient models that deliver strong performance without massive scale. Solar Open 100B, Upstage's currently available model, achieved this by reaching perfect scores on individual global benchmarks despite having fewer parameters than some competitors. Only LG AI Research and Upstage achieved perfect scores in that category.

Many industry observers expect Solar Open 2 to follow the same path: not bigger, but smarter and more efficient. This matters because South Korea's sovereign AI initiative is designed to reduce reliance on global tech giants and build competitive domestic foundation models. So far, LG AI Research, SK Telecom, and Upstage have advanced to the second evaluation round, with Motif Technologies also joining through an additional selection process.
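The ablation testing described earlier, systematically disabling components and re-measuring performance, can be sketched as a simple loop. The component names, scores, and evaluation function below are hypothetical placeholders for illustration, not Upstage's actual configuration or results.

```python
# Sketch of ablation testing: evaluate the full configuration, then
# disable one component at a time and attribute the score drop to it.
# Components and the scoring function are hypothetical placeholders.

def evaluate(config: dict) -> float:
    """Stand-in benchmark score; a real pipeline would run evaluations here."""
    weights = {"long_context": 3.0, "rlvr_stage": 5.0, "new_tokenizer": 2.0}
    return 60.0 + sum(w for name, w in weights.items() if config.get(name))

full_config = {"long_context": True, "rlvr_stage": True, "new_tokenizer": True}
baseline = evaluate(full_config)  # score with every component enabled

contributions = {}
for component in full_config:
    ablated = dict(full_config, **{component: False})  # turn off one piece
    contributions[component] = baseline - evaluate(ablated)

# `contributions` now maps each component to the score lost when it is
# removed, i.e. its measured contribution to overall performance.
```

This is why measurable gains at the ablation stage suggest a near-final architecture: the method only yields meaningful per-component numbers once the overall design is stable enough to compare against.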
Upstage stands out among these competitors for a specific reason: it has already proven its capabilities with a public model available on Hugging Face since January, and now it's building anticipation with a next-generation system that no one has seen yet. That makes Solar Open 2 a key variable in the next phase of competition.

At this point, timing matters as much as performance. When Solar Open 2 is finally released, and how it performs in official evaluations, could significantly shift the balance in Korea's AI race.

Until then, the market is reacting to something unusual: a model that doesn't exist publicly yet, but is already shaping expectations about what the next generation of efficient, reasoning-focused AI should look like.