I've run that light myself by accident. It's a hard light for humans too. It sits in the middle of the block rather than at the end, and the area is full of activity: people walking, plus popular sights (that brick building on the right is SFMOMA). It's hard to notice the traffic light amongst all that activity, especially since you aren't looking for one there. Also, that middle lane can easily have its view obstructed on both sides.
That area is also messed up because you need to be in the correct lane or you'll be forced up a different street, so you have many cars trying to change lanes. Lastly, that area has its share of asshole drivers who cut you off, speed, etc.
From my perspective, that car behaved like someone unaware of the light and insensitive to context (it should have driven more slowly). You can call that asshole behavior if you wish.
>but why would it affect the database lookup that a self-driving car is doing?
Who said a self-driving car is doing a "database lookup"?
If anything, a good self-driving car should NOT need any kind of database lookup (of the locations of traffic lights, etc.) and should be able to recognize and respond to a moved, impromptu (e.g. because of road work), new, or otherwise unfamiliar traffic light.
Getting additional data from a predefined map is expected, even if it's only used to sanity-check the data coming from the sensors. If the car knows from the map that there's a traffic light but it isn't 'seeing' one, it should hand control back to the human, not just carry on regardless on the assumption that the map is wrong.
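That cross-check is simple to sketch. The following is a minimal illustration, not any real AV stack's code; all names (`check_expected_signal`, the 5 m match threshold, the 50 m relevance radius) are assumptions for the example:

```python
# Sketch of a map-vs-perception cross-check: if the map expects a traffic
# light near the vehicle but perception hasn't detected one, flag a
# mismatch so control can be handed back to the human.

def check_expected_signal(map_signals, detected_signals, position, radius_m=50.0):
    """Return True if every mapped signal near `position` was also detected.

    map_signals / detected_signals: lists of (x, y) positions in a local
    metric frame; `position` is the vehicle's (x, y). Thresholds are
    illustrative, not tuned values.
    """
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    for sig in map_signals:
        if dist(sig, position) > radius_m:
            continue  # mapped signal not yet relevant at this distance
        # The map says a light should be visible here; require a detection
        # within a small matching radius of the mapped position.
        if not any(dist(sig, d) < 5.0 for d in detected_signals):
            return False  # mismatch: request human takeover
    return True
```

The point of the sketch is the asymmetry: a map/sensor disagreement triggers a conservative fallback rather than trusting either source alone.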
A level 5 self-driving car would work completely autonomously without any prior knowledge of the area it's driving in. We're not there yet.
I see -- checked the paper. I knew they were using map data for routing and assistance (and that would extend to traffic lights), but I'd expect them to be able to spot all kinds of movable traffic lights (e.g. during road works or after an accident) by pure image recognition/AI.
The standard approach to detecting signal lights is to have a database of GPS positions of the signals, along with the rough location in the camera frame where each signal is expected to appear [1]. Then, when the car nears the signal, it locates the signal in that region and detects the current color.
This mechanism really shouldn't be susceptible to the same biases as humans. The described signal may legitimately be more challenging for a self-driving car, but more than likely the signal was missing from Uber's database. Their lack of explanation for this failure does not inspire confidence in their approach.
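The lookup-then-detect flow described above can be sketched in a few lines. This is a toy stand-in for the real detector in [1]; the function name, the three-band layout, and the brightness heuristic are all assumptions made for illustration:

```python
# Toy version of prior-map signal detection: the map supplies a region of
# interest (ROI) in the camera frame, and the classifier only inspects
# that region, picking whichever of the three stacked lamps is brightest.

def classify_signal(frame, roi):
    """Classify a traffic light's state within a known image region.

    frame: 2D list of brightness values (rows of pixels).
    roi: (row, col, height, width) box supplied by the map prior.
    Assumes a vertical signal: red on top, yellow, green on the bottom.
    """
    r, c, h, w = roi
    third = h // 3
    band_sums = []
    for band in range(3):
        total = sum(
            frame[r + band * third + i][c + j]
            for i in range(third)
            for j in range(w)
        )
        band_sums.append(total)
    return ["red", "yellow", "green"][band_sums.index(max(band_sums))]
```

The map prior does the hard work here: by constraining where to look, the classifier never has to find the signal in a cluttered scene, which is exactly why a missing database entry can cause a silent failure.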