Why Steel Mills and Factories Are Ditching Cloud AI for Chips That Work Right on the Factory Floor

Manufacturing facilities are moving away from sending data to distant cloud servers and instead running artificial intelligence directly on-site, using specialized chips designed specifically for factory work. POSCO DX, a major South Korean industrial technology company, has begun replacing graphics processing units (GPUs) from overseas semiconductor makers with domestic neural processing units (NPUs) built by the startup Mobileint. This shift reflects a broader industry trend toward edge AI, where machine learning happens locally rather than in remote data centers.

What's the Difference Between NPUs and GPUs for Factory Work?

Graphics processing units were originally designed to handle image rendering and massive parallel calculations, making them powerful but not necessarily efficient for the specific tasks factories need. Neural processing units, by contrast, are built from the ground up for artificial intelligence computations like deep learning and machine learning. For manufacturing environments, this distinction matters significantly.

NPUs excel at edge AI implementation because they can be installed directly into facility control systems without requiring constant communication with remote servers. This means a factory can analyze sensor data, detect equipment problems, and make adjustments in real time, all without the latency and security concerns of cloud connectivity. POSCO DX plans to integrate Mobileint's NPU into its own industrial control system called PosMaster, enabling intelligent factories that can analyze and control operations directly from the facility control stage.
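To make the idea concrete, here is a minimal sketch of the kind of on-device loop this enables: sensor readings are analyzed against recent local history, with no network round-trip. The sensor, thresholds, and rolling z-score check are all hypothetical stand-ins for a real driver and a real AI model; they only illustrate the local-inference pattern.

```python
import random
import statistics
from collections import deque

WINDOW = 50          # number of recent readings kept on-device
Z_THRESHOLD = 3.0    # flag readings more than 3 standard deviations out

def read_vibration_sensor():
    """Stand-in for a real sensor driver; returns vibration velocity in mm/s."""
    return random.gauss(2.0, 0.1)

def detect_anomaly(readings, new_value):
    """Flag a reading that deviates sharply from recent local history."""
    if len(readings) < WINDOW:
        return False  # not enough history yet to judge
    mean = statistics.mean(readings)
    stdev = statistics.pstdev(readings) or 1e-9
    return abs(new_value - mean) / stdev > Z_THRESHOLD

history = deque(maxlen=WINDOW)
for _ in range(200):
    value = read_vibration_sensor()
    if detect_anomaly(history, value):
        print(f"anomaly: {value:.2f} mm/s")  # trigger a local control action
    history.append(value)
```

On an NPU-equipped controller, the `detect_anomaly` stand-in would be replaced by a learned model running on the accelerator; the structure of the loop, and the fact that nothing leaves the facility, stays the same.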

How to Implement Edge AI in Manufacturing Operations

  • Assess Current Infrastructure: Evaluate existing GPU-based systems and identify which processes would benefit most from real-time, on-site AI analysis, such as predictive maintenance or quality control.
  • Select Appropriate NPU Hardware: Choose NPUs optimized for your specific workloads; Mobileint's technology is noted for supporting large language models in edge environments, making it suitable for complex manufacturing scenarios.
  • Integrate with Control Systems: Work with vendors to embed NPUs directly into facility management platforms, allowing AI models to run alongside existing automation without requiring separate infrastructure.
  • Develop Localized AI Models: Train machine learning models that can run efficiently on edge hardware, reducing the need for constant cloud connectivity and improving response times for critical manufacturing decisions.
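The last step, preparing models that run efficiently on edge hardware, typically begins with post-training quantization. The sketch below shows the core idea, symmetric int8 quantization of float32 weights, in plain NumPy. Real vendor toolchains automate this (and do it per-layer, with calibration); the array shapes and values here are illustrative only.

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric post-training quantization: float32 -> int8 plus a scale."""
    scale = np.abs(weights).max() / 127.0  # map the largest weight to +/-127
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximation of the original weights."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(0, 0.05, size=(64, 64)).astype(np.float32)  # toy weight matrix

q, scale = quantize_int8(w)
w_approx = dequantize(q, scale)
error = np.abs(w - w_approx).max()

print(f"storage: {w.nbytes} -> {q.nbytes} bytes")  # 4x smaller
print(f"max round-trip error: {error:.6f}")
```

The 4x storage reduction (and the corresponding drop in memory bandwidth) is a large part of why quantized models fit the power and memory budgets of NPU-based edge devices.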

POSCO DX invested 3 billion won (approximately $2.3 million USD) in Mobileint through its corporate venture capital fund, signaling serious commitment to this technology transition. The company plans to gradually shift its entire GPU-based AI infrastructure toward NPU-centered systems across steel production, secondary battery manufacturing, and logistics operations.

Why Are Factories Choosing Local AI Over Cloud Computing?

The advantages of edge AI extend beyond just technical performance. NPUs consume significantly less power than GPUs, reducing operational costs for facilities that run continuous monitoring and analysis. They also eliminate the need to transmit sensitive manufacturing data to external servers, addressing both security concerns and regulatory requirements around data sovereignty. For iterative AI inference tasks, which are common in manufacturing, NPUs deliver superior efficiency.
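The power argument is easy to quantify with a back-of-envelope calculation. Every number below is hypothetical, chosen only to show the shape of the comparison; a real assessment would substitute measured device draw and local electricity prices.

```python
# Back-of-envelope comparison of continuous-inference energy costs.
# All figures below are HYPOTHETICAL placeholders.
GPU_WATTS = 300.0        # assumed draw of a data-center GPU card
NPU_WATTS = 25.0         # assumed draw of an embedded NPU module
PRICE_PER_KWH = 0.12     # assumed electricity price in USD
HOURS_PER_YEAR = 24 * 365

def annual_cost_usd(watts):
    """Cost of running a device continuously for one year."""
    return watts / 1000.0 * HOURS_PER_YEAR * PRICE_PER_KWH

gpu_cost = annual_cost_usd(GPU_WATTS)
npu_cost = annual_cost_usd(NPU_WATTS)
print(f"GPU: ${gpu_cost:.0f}/yr, NPU: ${npu_cost:.0f}/yr, "
      f"saving ${gpu_cost - npu_cost:.0f}/yr per device")
```

Multiplied across hundreds of always-on monitoring points in a steel mill, even modest per-device differences compound into a meaningful operating-cost argument.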

This trend is not isolated to South Korea. At Embedded World 2026, a major conference for embedded systems held in Nuremberg, Germany, the shift toward local AI processing was evident across multiple vendors and industries. The global embedded systems market was valued at $114.75 billion in 2025 and is projected to reach $212.74 billion by 2034, driven largely by industrial adoption of edge intelligence.

Texas Instruments demonstrated this concept with its TinyEngine neural processing unit, a hardware accelerator integrated into microcontrollers. TinyEngine delivers 2.56 billion operations per second while reducing latency by up to 90 times compared to traditional approaches and cutting energy consumption by over 120 times per inference. This enables edge AI to run on devices as small as smartwatches, thermostats, and medical sensors.
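A quick sanity check shows what a throughput figure like that implies in practice. The 2.56 billion operations per second is the cited number; the 1-million-operation model size is an assumed example workload, not a published TinyEngine benchmark.

```python
# What 2.56 GOPS implies for a tiny example model.
OPS_PER_SECOND = 2.56e9      # 2.56 billion operations/s, as cited
OPS_PER_INFERENCE = 1.0e6    # ASSUMED: a small 1M-operation model

seconds = OPS_PER_INFERENCE / OPS_PER_SECOND
print(f"~{seconds * 1e3:.2f} ms per inference")
print(f"~{1 / seconds:.0f} inferences per second")
```

At that rate a small model completes in well under a millisecond, which is why such accelerators can keep up with real-time sensor streams on battery-class devices.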

"In the past, nobody would be running edge AI in such small microcontrollers. To do that, you would have needed a higher-end, power-hungry microcontroller. Or even before that, you needed to send that information to the cloud," said Yiding Luo, product line manager at Texas Instruments.


Mobileint is recognized as one of South Korea's leading AI semiconductor startups, with high-performance NPU technology capable of running large language models in edge environments. The company has recently received top-tier government research and development support, validating its technical approach.

For POSCO DX, this partnership goes beyond a simple hardware purchase. The company and Mobileint are collaborating on technology development to ensure POSCO DX's AI models perform optimally within Mobileint's NPU environment. This kind of deep integration suggests that the future of industrial AI will involve close partnerships between equipment manufacturers and chip designers, rather than generic off-the-shelf solutions.

"With this cooperation, we have secured LLM-based edge AI technology needed to implement intelligent factories. We will use it as an opportunity to lead manufacturing AX by developing it as a case of collaboration and win-win with domestic startups beyond simple semiconductor supply relationships," noted Cho Seok-joo, head of POSCO DX's AX Convergence Research Institute.


The implications extend across industries. Companies increasingly recognize that their operational data is valuable and sensitive. Running AI locally, rather than uploading that data to cloud services, preserves competitive advantages and protects proprietary manufacturing processes. As edge AI hardware becomes more capable and affordable, the economic case for cloud-dependent AI weakens, particularly for time-sensitive applications where latency matters.

POSCO DX's strategy signals that large industrial conglomerates are ready to move beyond cloud-first AI architectures. By investing in domestic NPU technology and integrating it into their own control systems, they are building resilient, efficient, and secure AI infrastructure tailored to their specific operational needs. This approach may become the template for how major manufacturers worldwide approach artificial intelligence deployment in the coming years.