Well, it's not required per se. If it were, we feeble humans with our addiction to the visible light spectrum wouldn't be able to drive.
It does, however, make things a lot easier.
The granddaddy of vehicle lidar, the Velodyne HDL-64E, produces a scan with 64 vertical layers. I believe the image below is from a single scan:
Here's a Google car visualization (somewhat old):
There are two big reasons why lidar is easier than mono-camera computer vision (CV):
1. Going from processing mono-camera images into 3D using structure from motion, edge detection, object recognition, and a plethora of other techniques to lidar data like the above is like cheating. You know how far a given thing is from the lidar because you have a discrete, independent distance measurement for it (see the sketch after this list).
You aren't susceptible to a bunch of corner cases where your shitty assumptions worked fine until some jackass in a semi pulled in front of your Tesla and your CV classified it as a harmless overhead billboard (that's what happened in that fatality).
2. Cameras are highly dependent on lighting conditions. Lidar fails in certain conditions too: direct sunlight into the sensor isn't really a big deal; heavy dust obscurants are a really big fucking deal that nobody is admitting yet (OPAL LiDAR is trying to solve that via a class 3 laser); snowfall and snow covering the ground is a huge problem that Ford is trying to solve. But overall, lidar shows up with its own energy source and works across lighting conditions, from total darkness to heavy sunlight. If you ever see a CV demo done under an overcast sky, they're probably hiding that it fails in bright sunlight. So why can I drive at night? Well, I use a ton of inference and assumptions. Every time I turn left around a car with its lights on, I'm betting there's nothing behind it; it's dangerous, and if there were something there, it'd be dead. It'd be trivial to set up a test case where people run over your black-hoodie-wearing mannequin every time. Hands up, don't manslaughter. When I drive into the sun and can't see shit, I'm just hoping nobody gets in my way. If that light I can't see just turned red, buckle up, everyone.
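To make point 1 concrete, here's a minimal sketch of why a lidar return is "cheating": each return already carries a measured distance, so turning it into a 3D point is just spherical-to-Cartesian trigonometry, with no structure-from-motion or learned inference in the loop. The function name and the example numbers are made up for illustration; this is not the HDL-64E's actual packet format.

import numpy as np

def lidar_return_to_xyz(range_m, azimuth_deg, elevation_deg):
    """Convert one lidar return (time-of-flight range + beam angles) to a 3D point.

    The distance is measured directly, so the only work left is geometry.
    """
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    x = range_m * np.cos(el) * np.cos(az)
    y = range_m * np.cos(el) * np.sin(az)
    z = range_m * np.sin(el)
    return np.array([x, y, z])

# Example: a single return from one of 64 fixed vertical beams as the head spins.
point = lidar_return_to_xyz(range_m=23.7, azimuth_deg=41.2, elevation_deg=-8.5)
print(point)  # [x, y, z] in meters, in the sensor frame

Compare that with monocular CV, where depth has to be inferred from motion, texture, or object priors before you can place anything in 3D at all.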
A third, less tangible reason is that lidar tech is absolutely exploding right now. Once (if) we get Quanergy-style solid-state lidar, it's going to be fucking awesome. I'm hoping for a similar jump in mono-camera tech that more closely matches the human eye's robustness across lighting conditions, but we're just not there yet, and that problem is really hard.