Humanoid robots are becoming safer for humans to work around, thanks to a breakthrough in how they perceive their environment. Texas Instruments and NVIDIA have partnered to integrate advanced sensing technology that allows robots to detect obstacles, including transparent ones like glass doors, that cameras alone cannot see. By combining radar and camera data, this sensor fusion solution creates a more reliable perception system that works in challenging conditions, from low light to fog and dust.

Why Can't Cameras Alone Keep Robots Safe?

Current humanoid robots rely heavily on camera vision, but cameras have a critical blind spot: they struggle to detect transparent or reflective surfaces. A glass door in an office building, a mirror in a retail store, or a reflective window can be invisible to a camera-based system. This limitation has been a major barrier to deploying humanoid robots safely in real-world environments where humans are present.

"The safe operation of humanoid robots in unpredictable environments requires a massive leap in processing power to synchronize complex AI models with real-time sensor data and motor controls," explains Deepu Talla, vice president of robotics and edge AI at NVIDIA.

The solution lies in combining multiple sensing technologies. When radar data is fused with camera information, robots can detect obstacles that either technology would miss on its own. Radar excels at detecting solid objects regardless of lighting conditions or surface reflectivity, while cameras provide detailed visual information. Together, they create a perception system that works more like human perception, drawing on multiple sensory inputs to build a complete picture of the environment.

How Does This Sensor Fusion Technology Work?

The partnership integrates Texas Instruments' mmWave radar sensor (model IWR6243) with NVIDIA's Jetson Thor processor using NVIDIA's Holoscan Sensor Bridge platform. This combination enables low-latency, three-dimensional perception and safety awareness for physical AI applications. Operating at millimeter-wave frequencies, the radar detects objects with high precision while providing real-time data about their location and movement.

The system processes data from both sensors simultaneously, fusing the information to improve object detection, localization, and tracking while reducing false positives. This means the robot can make confident, real-time decisions about navigation and interaction with its environment. The integration works across Ethernet connections, making it scalable for different robot designs and configurations. The two sketches below illustrate, in simplified form, what such a pipeline and its fusion step might look like.
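First, a minimal sketch of the pipeline structure, written against the Holoscan SDK's Python `Application`/`Operator` API. Everything specific here is an illustrative assumption: the class names (`RadarSource`, `CameraSource`, `FusionOp`, `FusionApp`), the synthetic data, and the frame counts are stand-ins. In the actual TI/NVIDIA system, the radar and camera streams would arrive over Ethernet through Holoscan Sensor Bridge receiver operators, whose exact interfaces are not shown in the source material.

```python
import numpy as np
from holoscan.conditions import CountCondition
from holoscan.core import Application, Operator, OperatorSpec


class RadarSource(Operator):
    """Stand-in for a Sensor Bridge operator receiving radar over Ethernet."""

    def setup(self, spec: OperatorSpec):
        spec.output("points")

    def compute(self, op_input, op_output, context):
        # Synthetic point cloud: rows of [x, y, z] in meters.
        op_output.emit(np.random.rand(16, 3).astype(np.float32), "points")


class CameraSource(Operator):
    """Stand-in for a camera capture/detection operator."""

    def setup(self, spec: OperatorSpec):
        spec.output("boxes")

    def compute(self, op_input, op_output, context):
        # Synthetic detections: rows of [x1, y1, x2, y2, confidence].
        boxes = np.array([[100, 50, 200, 300, 0.6]], dtype=np.float32)
        op_output.emit(boxes, "boxes")


class FusionOp(Operator):
    """Receives both streams; the association math is sketched next."""

    def setup(self, spec: OperatorSpec):
        spec.input("points")
        spec.input("boxes")

    def compute(self, op_input, op_output, context):
        points = op_input.receive("points")
        boxes = op_input.receive("boxes")
        print(f"fusing {len(points)} radar returns with {len(boxes)} camera boxes")


class FusionApp(Application):
    def compose(self):
        # CountCondition bounds this demo to ten frames per source.
        radar = RadarSource(self, CountCondition(self, 10), name="radar")
        camera = CameraSource(self, CountCondition(self, 10), name="camera")
        fusion = FusionOp(self, name="fusion")
        self.add_flow(radar, fusion, {("points", "points")})
        self.add_flow(camera, fusion, {("boxes", "boxes")})


if __name__ == "__main__":
    FusionApp().run()
```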
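The `FusionOp` above only logs what it receives. As for the fusion step itself, the following is one simple, hypothetical way to cross-check the two modalities: project radar returns into the image, raise the confidence of camera detections that radar corroborates, and report radar-only returns, which is how an obstacle the camera cannot see, such as a glass door, would surface. The `fuse_detections` function, the placeholder intrinsics `FX`/`CX`, the `radar_conf` value, and the gating by box extent are all assumptions for illustration, not TI's or NVIDIA's algorithm.

```python
import numpy as np

# Placeholder pinhole intrinsics for projecting radar returns into the
# image; a real system would use the calibrated camera matrix and the
# radar-to-camera extrinsic transform.
FX, CX = 600.0, 320.0

def fuse_detections(camera_boxes, radar_points, radar_conf=0.7):
    """Cross-check camera detections against radar returns.

    camera_boxes : (N, 5) array, rows of [x1, y1, x2, y2, confidence].
    radar_points : (M, 3) array, rows of [x, y, z] in the camera frame
                   (meters), z pointing forward.
    Returns (fused_boxes, radar_only_ranges).
    """
    boxes = np.asarray(camera_boxes, dtype=float)
    points = np.asarray(radar_points, dtype=float)

    # Project each radar return onto the image's horizontal axis.
    z = np.clip(points[:, 2], 1e-3, None)
    u = FX * points[:, 0] / z + CX

    fused = boxes.copy()
    matched = np.zeros(len(points), dtype=bool)

    for i, (x1, _y1, x2, _y2, conf) in enumerate(boxes):
        hits = (u >= x1) & (u <= x2)
        if hits.any():
            matched |= hits
            # Corroboration from an independent sensor: combine the two
            # confidences as complementary probabilities.
            fused[i, 4] = 1.0 - (1.0 - conf) * (1.0 - radar_conf)

    # Radar returns no camera box explains are obstacles the camera
    # missed, reported by range in meters.
    radar_only_ranges = np.linalg.norm(points[~matched], axis=1)
    return fused, radar_only_ranges


if __name__ == "__main__":
    boxes = np.array([[100.0, 50.0, 200.0, 300.0, 0.6]])
    points = np.array([[-0.5, 0.0, 2.0],   # projects inside the box
                       [ 1.5, 0.0, 3.0]])  # unseen by the camera
    fused, ghosts = fuse_detections(boxes, points)
    print(fused)   # camera box confidence raised to 0.88
    print(ghosts)  # one radar-only obstacle at ~3.35 m
```

Treating the two confidences as independent probabilities is a common heuristic, nothing more; a production system would associate and fuse tracks over time rather than single frames.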
Steps to Implementing Safer Robot Perception Systems

- Sensor Integration: Combine mmWave radar technology with camera systems using dedicated sensor fusion middleware like NVIDIA Holoscan to process data from multiple sources simultaneously.
- Real-Time Processing: Deploy advanced compute platforms such as NVIDIA Jetson Thor that can handle the computational demands of processing radar and camera data with minimal latency for immediate decision-making.
- Environmental Testing: Validate the system's performance in challenging real-world conditions, including low light, bright glare, fog, dust, and transparent obstacles, before deployment in human-occupied spaces.

Where Will These Safer Robots Be Deployed First?

The immediate applications for this technology are environments where humanoid robots need to navigate safely alongside humans: office buildings, hospitals, and retail spaces, where transparent obstacles are common and unpredictable human movement demands reliable perception. In hospitals, for example, robots could assist with patient care or logistics without the risk of colliding with glass partitions or reflective surfaces. In retail settings, robots could manage inventory or assist customers in aisles without the safety concerns that have previously limited deployment.

Texas Instruments and NVIDIA demonstrated the technology at NVIDIA GTC 2026 in a live demonstration titled "Real-time sensor fusion for reliable robotic perception with Holoscan." The presentation highlighted how the integrated system processes data through an end-to-end software chain, providing visualization and real-time feedback for robot operators and developers.

What Does This Mean for the Future of Physical AI?

"The next generation of physical AI requires more than just advanced compute—it demands seamless integration between sensing, control, power and safety systems," said Giovanni Campanella, general manager of industrial automation and robotics at Texas Instruments.

This partnership represents a shift in how robots are designed and deployed. Rather than treating perception as a separate component, manufacturers are now building sensing, control, and safety into every aspect of the robot's architecture.

The collaboration between Texas Instruments and NVIDIA bridges the gap between powerful artificial intelligence compute and real-world applications. By combining Texas Instruments' comprehensive portfolio of sensing, motor control, power management, and safety technologies with NVIDIA's advanced robotics compute, developers can validate complete humanoid systems earlier in the development process. This integrated approach accelerates the evolution from prototypes to commercially viable robots that can operate safely alongside humans in unpredictable environments.

As humanoid robots move from research labs into workplaces and public spaces, solving the perception problem isn't just a technical achievement; it's a safety imperative. The convergence of radar and camera technology is a meaningful step toward robots that can navigate the real world with the same situational awareness humans take for granted.