The article mentions milestones twice and assumes their existence, but the scheduling methodology described has nothing to say about where they come from or how to think about them. So it’s missing something, which makes it at least a little less simple than advertised.
> There aren't infinitely many scenarios to consider, but even if that's a figure of speech, there aren't thousands or even hundreds.
There are a very, very large number of scenarios: every possible state the robot can perceive, and every possible near future those states can be projected into.
Ten kids is not 10 path scenarios. Every kid could do a vast number of different things, and each additional kid raises the number of joint states to another power.
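To make the combinatorics concrete, here is a toy sketch; the five-move discretization per kid is entirely made up for illustration, but any fixed number of options per kid gives the same exponential shape:

```python
# Toy model: pretend each kid's next move is one of only 5 coarse options
# (keep going, stop, veer left, veer right, reverse). Even this crude
# discretization shows joint scenarios growing as moves ** kids.
MOVES_PER_KID = 5  # made-up discretization, for illustration only

for kids in (1, 2, 5, 10):
    print(f"{kids} kids -> {MOVES_PER_KID ** kids:,} joint scenarios")
```

Ten kids with just five crude options each is already nearly ten million joint scenarios.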
This is trivially true. The saving grace that makes driving possible for humans and robots alike is that all these scenarios are not equally likely.
But even with that insight, it’s not easy. Consider a simple case of three cars about to arrive at an all-way stop. Tiny differences in their acceleration - potentially smaller than the robot can measure - will result in a different ordering of cars taking turns through the intersection.
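Even this tiny case carries real ambiguity: with three cars there are already 3! possible turn orders, and sub-measurement differences decide which one actually happens. A quick sketch:

```python
from itertools import permutations

# Each permutation is one possible order in which the three cars
# clear the all-way stop; differences in acceleration too small for
# the robot to measure determine which ordering occurs.
cars = ["A", "B", "C"]
orders = list(permutations(cars))
print(len(orders))  # 3! = 6 possible orderings
```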
> Ten kids is not 10 path scenarios. Every kid could do a vast number of different things, and each additional kid raises the number of joint states to another power.
This is the difference between computers and humans. The car will attempt to compute all possible path scenarios because it has no instinct, and since it may not be possible to compute everything in real time, it might fail.
But the human will easily deal with the situation.
Try running through a sports field in an elementary school during lunch, full of unpredictable kids running around. Can you make it from one side to the other without crashing into a whole bunch of kids? Of course you can. You didn't need to try to compute an exponential number of scenarios, you just do it easily. The human brain is pretty amazing.
In fact no computer approach attempts to compute all possible path scenarios since we know that’s not tractable.
And current practical approaches are mostly end-to-end (or nearly end-to-end) ML systems that do not compute many alternative paths, and they run in approximately constant time regardless of the scenario.
You strongly imply that computers can’t drive, but you could have written that in a Waymo.
This is the classical “Frame Problem” of AI. How do you consider, even if only to reject, infinite scenarios in finite time? Humans and other animals don’t seem to suffer from it.
Why is this shocking? Surely if you hadn’t grown up with the very technical idea of unrealized gains, this would seem totally normal. The surprising thing is that we let ourselves be convinced in the past that making money with money should be tax advantaged compared to making money with labor.
Do you have to pay tax on unrealized gains with realized money? A classic problem with exercising employee stock options and holding the stock is that the exercise is a tax event on the unrealized gain; if the stock then drops substantially, you still owe the tax on that gain but cannot sell the stock for enough to pay it. This happened to a lot of people around 2001.
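A hypothetical worked example of that trap (all numbers and the tax rate are invented for illustration, not tax advice):

```python
# Hypothetical: exercise 10,000 options at a $1 strike while the stock
# trades at $50, hold the shares, and watch the price fall to $5.
shares = 10_000
strike = 1.00
fmv_at_exercise = 50.00   # fair market value on exercise day
price_later = 5.00
tax_rate = 0.28           # illustrative rate, not any real bracket

paper_gain = shares * (fmv_at_exercise - strike)  # $490,000 unrealized gain
tax_due = paper_gain * tax_rate                   # $137,200 owed in real cash
shares_worth_now = shares * price_later           # $50,000 of stock to sell

print(tax_due > shares_worth_now)  # True: selling everything can't cover the bill
```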
Paying tax on unrealized gains with realized money is not a situation anyone wants to be in.
I was thinking of 'real' holdings rather than options that don't have a liquid value. Not all unrealized gains are the same. Thanks for pointing out the complexity.
The claim was that "the materials that go into a chip are nothing". Arguing that that is not the case does not put someone on the hook to explain, or even have any clue, how to do it better.
It's absolutely terrible but at the scale of a large country it's not logistically hard to get to that many deaths in a couple of days. Iran is a big country with population around 93 million.
The article says "36,500 killed in 400 cities". That's 91 people per city.
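A quick check of the division behind that figure:

```python
# 36,500 killed spread across 400 cities
print(36_500 / 400)  # 91.25, i.e. roughly 91 people per city
```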
> The historical track record on “this technology will end work” predictions is remarkably consistent: they’re always wrong.
This is nicely expressed and could serve as a TL;DR for the article, though it's buried in the middle.
We have the most automation we've ever had, AND historically low unemployment. We have umpteen labor saving devices, AND people labor long, long hours.
Sadly, this happens from time to time, thankfully for brief periods and among relatively small groups of morally confused people. They would likely tell you it was morally required, not just acceptable.