I feel the opposite. The one sensor that is guaranteed to have sufficient information to drive in all conditions is vision. That's obviously because humans drive exclusively with vision (and a single vantage point to boot - modulo mirrors).
>The one sensor that is guaranteed to have sufficient information to drive in all conditions is vision.
Minor nit but humans can't drive--certainly not safely--in all conditions. You can certainly get to a point in fog, blizzards, and even very heavy rain where you really would like to get off the road if possible. (Not always possible of course and in snow particularly, pulling off to the side of a highway isn't a great option.)
Not a nit at all; the parent comment has the cart completely in front of the horse. Humans use vision (primarily) to drive because it's the only sense we have that's even close to being sufficient.
There are certainly other senses (lidar, ultrasound, radio signals) that robots could avail themselves of that would be helpful even in conditions where vision also worked.
Kind of. The difference between human vision and most computer vision is that human vision is stereoscopic. We perceive depth in addition to color and shape, which gives us the ability to perceive the 3-dimensional shape of an object and anticipate how it might move. A lot of CV algorithms operate on single images from a single camera, which makes it impossible to judge depth directly. In that case you'd have to use the apparent size of an object as a proxy for its distance and speed, so you'd tend to misjudge how far things are from you and where they're going.
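To make the stereo point concrete, here's a minimal sketch (my own toy example, not from the thread) of how a rectified stereo pair recovers depth: the same point appears shifted (the disparity) between the two images, and depth follows from Z = f·B/d. The focal length, baseline, and disparity values below are all made up.

```python
# Toy illustration: depth from stereo disparity for a rectified
# pinhole camera pair. All numbers are hypothetical.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Z = f * B / d for a rectified stereo pair."""
    if disparity_px <= 0:
        raise ValueError("zero disparity: point at infinity or bad match")
    return focal_px * baseline_m / disparity_px

# Hypothetical rig: 1000 px focal length, cameras mounted 0.5 m apart.
# An object whose image shifts 10 px between the two views:
z = depth_from_disparity(focal_px=1000, baseline_m=0.5, disparity_px=10)
print(z)  # 50.0 (meters)
```

Note how the formula also shows the limitation: distant objects produce tiny disparities, so small matching errors translate into large depth errors.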
The nice thing about LIDAR is that you can gather that depth/shape information and with sufficient calibration map the shapes in the camera image to the depths in the LIDAR image. You can do the same thing with two cameras so I'm not sure why LIDAR would be preferred here.
We don't need two eyes to drive. And we don't need two eyes to gauge depth. Neither does a machine, but adding stereoscopic cameras is not hard.
The stereoscopic effect is very weak at driving distances, so it doesn't help us that much. Primarily we use cues from the environment to gauge distances. We also have to focus our eyes to the correct distance - that also tells us how far an object is.
Primarily, we have had an insane amount of training to understand our world - an understanding that a self-driving car will never achieve unless the singularity happens. It might not need that, but it will need other ways to compensate for it.
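A back-of-envelope calculation supports the claim that stereo is weak at driving distances. With the human eye baseline (~6.5 cm) and a typical stereoacuity estimate (~20 arcseconds - both numbers are my assumptions, not from the thread), the depth uncertainty grows with the square of distance:

```python
import math

# Sketch: stereo depth uncertainty dZ ~= Z^2 * dtheta / B, where
# dtheta is the smallest resolvable disparity angle and B is the
# baseline between the eyes. The constants are rough assumptions.

BASELINE_M = 0.065                           # typical interpupillary distance
STEREOACUITY_RAD = math.radians(20 / 3600)   # ~20 arcsec stereoacuity

def depth_uncertainty(z_m):
    """Approximate smallest distinguishable depth difference at range z_m."""
    return z_m ** 2 * STEREOACUITY_RAD / BASELINE_M

for z in (5, 25, 50, 100):
    print(f"{z:>4} m -> +/- {depth_uncertainty(z):.2f} m")
```

Under these assumptions the uncertainty is centimeters at arm's-reach distances but several meters at highway ranges, which is why environmental cues dominate when driving.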
Tesla, as an example, has 3 forward-looking cameras. Additionally, a single moving camera can sense depth, since the differences between frames relate to the distance from the camera.
LIDAR has its advantages, like precise 3D positions under ideal conditions. However, there are downsides as well. Cost is a big one, but that's becoming less of an issue over time. Another is sensitivity to rain, fog, blowing sand, etc.
A complicating factor is that human drivers will assume other cars act like they have human limitations - so higher speeds when humans can see well, and lower speeds when humans can't.
Not sure Tesla's current sensors will do it, but it seems like camera-based systems are likely to be quite competitive with LIDAR. Maybe instead of 3 forward cameras, use 6 or 8, giving overlapping views (for stereoscopic vision), better tolerance of camera failures, and a narrower field of view at higher zoom.
More range would be a huge help; that way an autonomous car can slow more gently when uncertain and drive more like a human. After all, superhuman reflexes aren't much use if you get rear-ended all the time.
> You can do the same thing with two cameras so I'm not sure why LIDAR would be preferred here.
I almost spilled my beer, about to comment that a camera or two are equally if not more powerful than LIDAR. To me personally, LIDAR feels like an incomplete solution to 3D mapping when high-res images from a smartphone camera can provide so many more data points from different angles.
My thinking was that vehicles should have an idea where other vehicles are without the need for comp viz. Like beacons saying "hey, I'm here," and we can try to calculate relative direction and distance. The vision bit should ideally come in to validate and confirm other things like road signs etc. Ideally we should add that data to mapping software, and the car should know these things without "seeing" them.
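As a toy illustration of the beacon idea (entirely my own sketch - real systems would need time-of-flight ranging, clock sync, and much more): if a car can estimate its range to three beacons at known positions, simple 2D trilateration recovers its position. All positions and ranges below are made up.

```python
import math

# Sketch: 2D trilateration from ranges to three known beacon positions.
# Subtracting pairs of circle equations yields a 2x2 linear system.

def trilaterate(p1, r1, p2, r2, p3, r3):
    """Return (x, y) given distances r1..r3 to known points p1..p3."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Hypothetical beacons at three corners of a lot; true position (3, 4).
beacons = [(0, 0), (10, 0), (0, 10)]
ranges = [math.dist(b, (3, 4)) for b in beacons]
pos = trilaterate(beacons[0], ranges[0], beacons[1], ranges[1],
                  beacons[2], ranges[2])
print(pos)  # ~ (3.0, 4.0)
```

This only locates beacons that actively broadcast, which is the obvious gap: pedestrians, debris, and legacy cars carry no beacon, so vision (or another passive sensor) is still needed as a backstop.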
You can get 3D data from a single moving camera. The technique is called structure from motion and has been demonstrated to work well more than a decade ago.
The biggest problem with relying on visual data is that you get very noisy data. Poor lighting and reflective or glossy surfaces cause problems (I'm not sure what current state of the art is, it's been a few years since I looked at the research).
As far as I understand the big advantage of LIDAR is that you get nice and clean depth data and it's not so computationally expensive.
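The core geometric step behind structure from motion can be shown in a tiny 2D example (my own toy sketch, not a real SfM pipeline, which would also have to estimate the camera motion and match features): a single camera moves a known distance between two frames, and a landmark's bearing in each frame lets us triangulate its position.

```python
import math

# Toy 2D triangulation: the camera translates along the x-axis by
# baseline_m between two frames; bearings to a landmark are measured
# from the x-axis. Intersecting the two rays gives the landmark position.

def triangulate(baseline_m, bearing1_rad, bearing2_rad):
    """Return the landmark's (x, y) in the first frame's coordinates."""
    # Ray 1: y = x * tan(b1); Ray 2: y = (x - baseline) * tan(b2)
    t1, t2 = math.tan(bearing1_rad), math.tan(bearing2_rad)
    x = baseline_m * t2 / (t2 - t1)
    y = x * t1
    return x, y

# Hypothetical landmark at (10, 5); camera slides 2 m along x.
b1 = math.atan2(5, 10)       # bearing from the first position
b2 = math.atan2(5, 10 - 2)   # bearing from the second position
print(triangulate(2.0, b1, b2))  # ~ (10.0, 5.0)
```

In a real pipeline the "bearings" come from matched image features, which is exactly where the noise mentioned above enters: a bad match or a reflection produces a wildly wrong ray, whereas LIDAR measures the range directly.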
That's why one place I really want a driving assist is automatically backing out of spaces in parking lots. Visibility is terrible for the driver. You need to be paying close attention in multiple directions at once, you often don't have visibility at all when you need to start moving (like a larger vehicle parked next to you), and both pedestrians and other vehicles can appear out of nowhere, often moving in unexpected directions. It feels very unsafe.
Computer vision could be making those go/stop decisions for you, much more effectively than human drivers do.
Heck, imagine a "smart" parking lot that tracks its available spaces and communicates with your car. You enter the parking lot and hand over control, and the car and lot work together to park you safely in the best available space.
Some cars already have those. I've got a car from 2017 that has some kind of sensors wrapping all the way around the bumper. When I'm parked between two cars, it can detect a car roughly 15-20 feet away as soon as I get an inch or two of my bumper clear of the other cars.
It does lack any spatial indicators for the driver, though. It just plays a beep, but it doesn't seem to play the beep from the side of the car the danger is on, nor does it beep louder when the object is closer. Still, it works remarkably well. It reliably detects pedestrians and vehicles (even while I'm moving, which is impressive, because it doesn't go off for parked cars when I start moving).
I think it's the Dodge ParkSense, but I bought the car used so there's a possibility it's something aftermarket.
It's saved my ass a number of times. I'm in jacked-up pickup truck country, so it is often impossible for me to see pedestrians about to walk behind me because of the lifted truck, and likewise the pedestrians can't see me until they're behind me.