Wayve, a British autonomous driving startup, argues that self-driving cars don't need the expensive sensor arrays most competitors rely on, claiming that cameras, GPS, and powerful computing are sufficient. The company says its approach costs just 10 percent of traditional sensor-heavy systems. Leading experts in sensing technology strongly disagree, however, warning that eliminating LiDAR and radar creates dangerous blind spots that could compromise passenger safety.

How Does Wayve's Camera-Only System Actually Work?

Wayve's approach relies on end-to-end machine learning, a technique in which the AI system learns to drive by analyzing camera footage and past driving experience rather than interpreting data from dedicated sensors such as LiDAR and radar. Instead of fusing live readings about pedestrians, debris, or changing light conditions from multiple sensor types, the system depends on patterns learned during training and on reinforcement learning from human safety-driver interventions.

The company acknowledges its method is slower and more cautious than traditional approaches. In a recent blog post, Wayve stated: "With each safety-driver intervention, our system learns and will improve, rather than buckle with scale. It will take us longer to reach our first deployment, but we are riding a fundamentally different curve."

What Do Sensor Experts Say About This Strategy?

The response from the autonomous vehicle industry has been blunt. Rick Tewell, Chief Operating Officer at Velodyne LiDAR, called the approach problematic. "It's lunacy," he stated, emphasizing that "AI performs a lot better with a lot more data than less data."

"Sensors can get details that the human eye can't," explained Leilei Shinohara, Vice President of Research and Development at RoboSense. Shinohara pointed out a critical safety concern: even if LiDAR is used only 5 percent of the time as a backup system, it becomes essential in the edge cases where cameras fail.
He gave a vivid example: if a paint truck spills paint across a self-driving car's windshield, both cameras and LiDAR might be blocked, but radar's radio-wave sensing would continue functioning.

Why Is Safety Redundancy So Important for Autonomous Vehicles?

Multiple sensor types create what industry professionals call "robustness," a critical requirement for self-driving safety. The logic is straightforward: if one sensor system fails or provides incomplete data, the others can compensate. This redundancy becomes especially important in unpredictable scenarios that AI systems haven't encountered during training.

Matt Weed, Director of Technology Strategy at Luminar Technologies, framed the fundamental principle clearly: "The whole point of self-driving cars is to be safer than a human driver. Why you would eliminate technology that maximizes safety inputs doesn't compute. You want to be able to get as much good information about the world as you can."

"People understand you have to have robustness if you want to have safety for self-driving cars," noted Raviv Melamed, CEO and co-founder of radar sensor company Vayyar Imaging.

Steps to Understanding the Sensor Debate in Autonomous Driving

- Camera-Only Approach: Wayve's method relies exclusively on visual data and machine learning, reducing hardware costs but limiting real-time environmental awareness compared to multi-sensor systems.
- LiDAR Technology: Light-based sensors that build 3D maps of their surroundings; they once cost thousands of dollars but are now available for around $1,000 or less, weakening the cost argument against them.
- Radar Systems: Radio-wave sensing that works in conditions where cameras and LiDAR fail, such as heavy rain, fog, or an obstructed lens.
- Redundancy Philosophy: Industry consensus favors multiple overlapping sensor systems so that if one fails, the others provide backup perception and decision-making capability.
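The redundancy principle described above can be sketched as a simple fallback chain. This is a minimal illustration, not any vendor's actual software; the `Detection` type, `fuse` function, and confidence threshold are all assumed names for the sake of the example:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Detection:
    """An obstacle reading with a sensor-reported confidence (0.0-1.0)."""
    distance_m: float
    confidence: float

def fuse(camera: Optional[Detection],
         lidar: Optional[Detection],
         radar: Optional[Detection],
         min_confidence: float = 0.5) -> Optional[Detection]:
    """Prefer the camera, but fall back to LiDAR, then radar, whenever a
    sensor is blocked (None) or reports low confidence. A camera-only
    stack has no such fallback: one obscured lens leaves it blind."""
    for reading in (camera, lidar, radar):
        if reading is not None and reading.confidence >= min_confidence:
            return reading
    return None  # no trustworthy perception -> trigger a safe stop

# The paint-spill scenario from the article: camera and LiDAR are
# obscured, but radar's radio waves still see the obstacle ahead.
obstacle = fuse(camera=None,
                lidar=Detection(distance_m=12.0, confidence=0.1),
                radar=Detection(distance_m=11.5, confidence=0.9))
```

Here the radar reading survives the fallback chain, which is exactly the backup behavior Shinohara argues a camera-only system gives up.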
What's the Cost Argument, and Is It Convincing?

Wayve claims its sensor and computing costs are 10 percent of those of traditional approaches, a significant saving. However, this cost advantage hasn't swayed major automakers or well-funded autonomous vehicle startups. The reason is straightforward: for companies investing billions in autonomous vehicle development, the marginal cost of adding proven safety sensors matters far less than avoiding catastrophic failures that could derail the entire industry.

Sensor costs have also dropped dramatically. LiDAR systems that cost thousands of dollars just a few years ago now sell for $1,000 or less, and radar is cheaper still. As manufacturing scales, these costs keep declining, making Wayve's cost-saving argument less compelling to competitors.

How Does Wayve Respond to These Criticisms?

Wayve co-founder Alex Kendall acknowledged the debate but defended the company's philosophy. He explained that its machine learning system can work with any sensor set, or none at all, and emphasized that the quality of training data matters more than its quantity. "The most important part is that we use end-to-end machine learning to make decisions and drive the car," Kendall wrote, noting that "not all data is equal."

This response highlights a fundamental disagreement about how autonomous vehicles should be developed. Wayve believes that better algorithms and training data can compensate for fewer sensors, while industry experts argue that multiple sensor types provide irreplaceable safety benefits that no amount of software optimization can fully replace. The debate reflects a broader tension in autonomous vehicle development: the race to reduce cost and complexity versus the imperative to ensure safety in unpredictable real-world conditions.
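The intervention-driven loop Wayve describes, where each safety-driver takeover becomes a new training example rather than a reason to add sensors, can be sketched in miniature. Everything here is an illustrative assumption (the stand-in policy, the frame format, the disagreement threshold); it is not Wayve's actual pipeline:

```python
def model_action(frame):
    """Stand-in policy mapping a camera frame to a steering command.
    In a real end-to-end system this would be a learned neural network."""
    return frame.get("predicted_steer", 0.0)

def drive_with_safety_driver(frames, corrections, dataset):
    """Drive on camera input alone; whenever the safety driver's command
    diverges from the model's, record (frame, human command) as a fresh
    training example for the next round of learning."""
    for frame in frames:
        action = model_action(frame)
        human = corrections.get(frame["id"])  # None if no intervention
        if human is not None and abs(human - action) > 0.1:
            dataset.append((frame, human))  # learn from the takeover
    return dataset

# One uneventful frame, one where the safety driver steers away.
frames = [{"id": 1, "predicted_steer": 0.0},
          {"id": 2, "predicted_steer": 0.5}]
new_data = drive_with_safety_driver(frames, corrections={2: -0.2},
                                    dataset=[])
```

The sketch makes the trade-off concrete: the system improves only as interventions accumulate, which is why Wayve concedes its route to deployment is slower, and why critics argue the car should also carry sensors that help before the model has learned a scenario.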
As the industry matures, this question of whether cameras plus AI can truly replace traditional sensor arrays will likely determine which approaches survive to commercial deployment.