tikl1's comments | Hacker News

Well, we ARE social creatures as a species; it's the main reason we've been able to achieve what we have, be where we are, and, more simply, survive. So even if some of us don't like the idea of being around others, it's a fact that our ability to collaborate so extensively has served us well.


As said in other answers, there will be multiple annotation sources, so each one will have its own moderation to do, and users will have the choice to subscribe to any or all of them.

Therefore moderation will have nothing to do with the content owner. It's comparable to reddit or HN: would you complain that content owners have no control over discussions about their content on those sites?


I guess you'll have to choose your annotation sources wisely... If you see too much spam on one source, just unsubscribe. As said earlier, it's not so different from reddit/HN discussions about content. The publisher may not even be aware of the discussion.


Again, it's about discoverability. If browsers supported this natively, visiting a website would surface a virtual "link" of some sort to the discussions for every URL as you visit it. There's a difference between a discussion existing somewhere and a discussion being tied to your website/URL.
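
To make that concrete, here's a minimal sketch of what per-URL lookup could look like; the endpoint shape and source URLs are invented for illustration, not taken from any real annotation standard:

    import json, urllib.parse, urllib.request

    # Hypothetical sources the user chose to subscribe to; each one
    # does its own moderation, as discussed above.
    subscribed_sources = [
        "https://annotations.example.org",
        "https://other-annotations.example.net",
    ]

    def discussions_for(url):
        # Collect annotations about `url` from every subscribed source.
        results = []
        for source in subscribed_sources:
            endpoint = source + "/annotations?uri=" + urllib.parse.quote(url, safe="")
            with urllib.request.urlopen(endpoint) as resp:
                results.extend(json.load(resp))
        return results

The browser (or an extension) would call something like this on every page load and show the virtual "link" only when the result is non-empty.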


Not sure about them in particular, but usually these initiatives also glorify open standards and the fact that even if the company disappears in 10 years, they will let you walk away with all your data in standard formats, allowing you to continue your journey with another provider that uses the same standard. That's not a given with some major companies that use their own data models (although I've heard of some companies that, while going bankrupt, made an effort to let users back up their data before the end).


Humans are the very definition of imperfection, and thinking that you are capable of "correctly using a car", that is, of never making the mistake that causes a terrible accident, is a big problem.

It's probably why so many people keep resisting developments like autopilot: they can't admit that even they could someday fail and crash. They believe that if they are good drivers they will avoid it.

Sadly, very good drivers die every day, and not only because of someone else's mistake.

Of course a computer or a machine can fail too, but in comparison it will never fail as often as humans do, because we can be careless, sleepy, drunk, unskilled, ...

Now the real problem we will face (what scares me even though I'm pro-autopilot) is accepting to hand our lives over to machines that will have to make choices in emergency situations (should it save its owner or the kids in front of it? who is responsible in case of a crash? ...).


> but in comparison it will never fail as often as humans do.

Maybe eventually, but what I'm interested in is right now: does the Tesla autopilot fail more often than a human driving in similar conditions? My suspicion is that it does.


The data already gathered would indicate that your suspicion is wrong. Currently the US averages 1 fatality for every 89 million miles driven. Autopilot is at almost twice that figure with only 1 fatality so far. So autopilot already fails less often than a human.

Consider this anecdote: https://twitter.com/elonmusk/status/756004029239472132

Then reconsider whether you really believe that this pedestrian deserves to be dead because you don't trust the data and the world-class engineers for whom this is literally their everyday job.


> Currently the US averages 1 fatality for every 89 million miles driven.

This is the average for all cars on all roads, in all driving conditions.

To have a meaningful comparison, we would need to know the fatality rate for new luxury sedans on divided highways in good weather conditions.


Either you've not read my comments in this thread or I've not clearly explained my point.

Your statistic ignores the fact that autopilot is only used in relatively simple situations, and since you're measuring fatalities only, you need to compare it only with cars of equivalent safety ratings.

If you have statistics comparing humans and autopilot in relevant situations I'd be very interested, but the ones you quote are exactly the ones I'm complaining about as being nonrepresentative.

Even considering all miles equally, a rate of 1 per 130 million autopilot miles (a much too small statistic from which to generalise) still does not compare particularly well to a lot of cars driven by humans, e.g. the BMW 7 series.
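
To put a number on how little one fatality can tell us, here's a quick sketch (assuming scipy is installed) of the standard exact 95% Poisson confidence interval for 1 fatality in 130 million miles:

    from scipy.stats import chi2

    miles, deaths = 130e6, 1

    # Exact 95% Poisson interval for the expected number of fatalities
    lo = chi2.ppf(0.025, 2 * deaths) / 2          # ~0.025
    hi = chi2.ppf(0.975, 2 * (deaths + 1)) / 2    # ~5.57

    print("miles per fatality: %.0fM to %.0fM" % (miles / hi / 1e6, miles / lo / 1e6))
    # -> roughly 23M to 5100M miles per fatality

The human baseline of 1 per 89 million miles sits comfortably inside that interval, so a single data point can't distinguish autopilot from human drivers in either direction.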

Incidentally, it's not at all clear that the situation described in your anecdote even involved the autopilot functionality; it sounds like it was autonomous braking, which is a standard feature on many cars.


I don't know about Tesla, but I think Google's autonomous cars are already there, so Tesla can't be far behind.


"Sadly very good drivers die every day and not only because of someone else's mistake."

Indeed, and that may often be because of taking warranted risks. It bothers me that when making comparisons of driving safety, people tend to suffer from absence blindness and discount other important things like the death-avoidance cases. Let's say you have a bleeding, injured person who requires urgent access to a medical facility. Here the driver can take some risks in order to save a life. Or any number of causes that may warrant risk taking. Now, take away the control from that driver and leave it to an autopilot that may compute the driving parameters to satisfy minimum pollution, safety (from the manufacturer's judicial-liability perspective), and whatnot. Heck, I foresee cases where the autopilot won't even approve any movement, due to whatever considerations, while there are passengers at risk of losing their lives if they don't reach somewhere soon enough. For now people can take risks, which may be both good and bad. Don't look only at the bad side.


This is a very interesting point, and I wonder if this might at some point turn into an advantage once there are enough self-driving cars. Here is a potential feature: I am in an emergency and request "emergency override" or whatever you want to call it. Maybe someone pops up on the screen and asks a question, or maybe the override gets activated immediately and deactivated if it turns out I don't actually have an emergency. If the override is active, the car goes faster and, more importantly, communicates to other self-driving cars what's going on. Other cars will automatically make room, like they would for an emergency vehicle. In general, how self-driving cars react to emergency vehicles of any kind could be so much better than anything we have today.
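
A toy sketch of what such an override message and the yielding rule might look like; every name here is invented, none of this comes from a real vehicle-to-vehicle protocol:

    from dataclasses import dataclass

    @dataclass
    class EmergencyOverride:
        vehicle_id: str
        verified: bool        # e.g. confirmed by a remote operator
        route_segments: list  # road segments the overriding car is about to use

    def should_yield(my_segment, msg):
        # Nearby cars make room, as they would for an ambulance, but only
        # once the emergency claim has been verified.
        return msg.verified and my_segment in msg.route_segments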


Yes, that is an interesting solution: either letting the driver take risks, or allowing the autopilot to have a special mode that can override many if not all of the limiting factors that would otherwise be mandated. It's something like "give me 110% now, I'll gladly answer for it later if I have to!"


Well, you can say it would be infuriating to be unable to take risks to save, let's say, your wife dying in the car, but you can't argue that it's a GOOD thing to take those risks... who are you to decide that it's worth risking others' lives to try to save one?


"you can't argue that it's a GOOD thing to take those risks... who are you to decide that it is worth risking others lives to try to save one ?"

That's exactly what I'm arguing about. In a rare (but very important) occasion where risk taking would be needed, you're forcing the moral dilemma of an even rarer occurrence with a presumed victim. We don't know if it would come to that. For all I care, there might be only some speeding on a rainy road. My car will recommend going slowly for my own sake, but in that moment I care less about myself and more about avoiding my wife's impending death. Actually, I fear that my car won't limit itself to recommending; it will force that on me, because it knows better, because the people behind said decisions won't be only the engineers trying to give their best, but also lawyers doing "mercantile calculations of legal liability", politicians trying to score on public safety through their regulations, and so on!


I understand you, and again, it will be something to keep in mind during the design of said cars, but I don't think this will be a huge problem.

Even if the problem exists in the early days, I'm pretty sure this is the kind of thing that can be dealt with easily later. Those cars are already capable of evaluating dangerous situations for others, and I don't think it's unreasonable to believe that they will adapt their behavior to conditions and even, in some special cases, allow risk taking when it only concerns the person who chose it.

And don't forget that we're not talking about only one company. There will be competition, as there is today, and if some manufacturers are so limiting in the behavior of their cars that people die in them when conditions would otherwise have allowed them to be saved, then soon enough someone will come along with a car that can deal with that situation, and so forth.


The social contract is such that people can reasonably be assumed to be ok with a minuscule increase in risk in order to get a dying person to the hospital.


"Humans are the very definition of imperfection and thinking that you are capable of "correctly using a car" that is, never make the mistake that will cause a terrible accident is a big problem."

A human, with proper training and heuristics, can reasonably be expected to never make a mistake that causes a terrible accident.

Unfortunately, a large part of "proper training" involves being driven around as a passenger for your entire childhood, immersed in a car culture. Further, the heuristics are commonly ignored by males younger than, say, 25.


> A human, with proper training and heuristics, can reasonably be expected to never make a mistake that causes a terrible accident.

Fighter pilots make mistakes causing terrible accidents. And they're a group of humans pre-screened to ensure physical/mental/emotional fitness, with extensive training.


Wow! That's exactly what I said earlier: you are convinced that a good driver (one with proper training, or whatever the heuristics are) will be able to avoid those crashes.

And in my opinion, the only thing that can make you believe that is that there are, fortunately, few enough car crashes that you assume those who have never crashed avoided it because of their skills.

Of course, skills and training will let you handle a lot of situations that could otherwise have killed you, but my conviction is that if you go through life without a big accident, a huge part of that is due to luck: luck that you never encountered the situation in which training and skills couldn't have saved you.


"but in comparison it will never fail as often as human do"

OK, I know this is my little cause of the moment, but seriously, why are computers a priori better at anything? Sure, they don't make poor life choices, but they are absolutely at the mercy of the quality of their algorithms, sensors, operating system, training data, and physical computational hardware.


Better is probably a poor word; consistent, at least. Even good drivers may exceed the speed limit and feel "perfectly safe", but the autopilot will always stay at the speed limit and will always make the same choices in a given situation: there's no emotion or influence (alcohol, drugs, bad mood, etc.). In short, they are boring but safe.


"boring but safe."

But seriously, the most important part of that sentence, 'safe', is absolutely unproven.


I agree. It's like arguing that autonomous weapons systems are better than human-operated because they don't get tired, whilst neglecting to mention that, say, their object detection algorithms can't tell the difference between combatants and civilians.

Autonomous systems, as a technology, are capable of performing better than humans; that doesn't imply that some particular autonomous system (e.g. Tesla's Autopilot) does so right now.


Every decision a machine makes benefits from the accumulated, organized efforts of hundreds (if not many thousands) of people. In a sense, so too do human decisions, but in practice we can improve the decision-making of machines much more quickly, over time giving them the advantage, especially in routinized tasks.


Computers are not a priori better - you can always develop a stupid or buggy algorithm that fails often. But Tesla is working to PROVE (using real-world data) that autopilot is better than a human driver, and to IMPROVE it constantly until it is ten times better.


Tesla seems quite willing to massage their telemetry data until it shows what they want it to show.


As of today, there's much left to prove. AI remains ideology, not reality.


> should it save its owner or the kids in front of it?

If a driver ever has to make that choice, they were going too fast in the first place. You should never be going so fast that your stopping distance exceeds the distance to the nearest blind spot, unless you're driving on a controlled-access highway.
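
As a back-of-the-envelope check of that rule (the friction and reaction-time figures below are typical assumed values, not measurements):

    G, MU, REACTION = 9.81, 0.7, 1.5   # gravity (m/s^2), dry-asphalt friction, reaction time (s)

    def stopping_distance(speed_kmh):
        v = speed_kmh / 3.6                        # convert to m/s
        # reaction distance + braking distance (basic kinematics)
        return v * REACTION + v * v / (2 * MU * G)

    print(round(stopping_distance(50), 1))   # ~34.9 m at 50 km/h

So at 50 km/h you need roughly 35 m; if the nearest blind spot is closer than that, by this rule you're going too fast.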


Yep, that's the theory, and sadly we all drive too fast at some point.


The autopilot system, if not every similar system offered these days, does its driving in conditions that are likely already the easiest a driver will encounter.

The real shocking truth is that none of these systems is capable of driving when even skilled drivers might need the assistance. Think about it this way: car safety systems are designed to ensure recovery of safe driving during some of the worst conditions, yet self-driving systems cannot handle most of these poor conditions because they cannot accurately see and assess the situation.


I'm not sure about this... Do you have sources on that?

I'm pretty sure that, on the contrary, autonomous car manufacturers are actually testing their cars in the worst possible conditions, even some that are unlikely to ever happen to anyone. At least that's what Google says about their testing process.


Maybe it shouldn't choose; instead, it should pick at random, weighted by the probabilities of survival. If both have the same probability, then both should get the same chance to survive. Otherwise there will be endless bickering about how to decide every case.
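
In code, that proposal is just a weighted coin flip (the probabilities below are made up; estimating them reliably would be the actual hard part):

    import random

    # Hypothetical survival odds for each choice
    p_owner, p_kids = 0.4, 0.4

    # Equal probabilities -> a fair coin flip; otherwise the likelier
    # survivor is proportionally favored.
    spared = random.choices(["owner", "kids"], weights=[p_owner, p_kids])[0]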


> should it save its owner or the kids in front of it?

I seem to remember reading a paper postulating that drivers unconsciously always try to avoid crashing their own side of the car into hard objects.


> should it save its owner or the kids in front of it?

It would seem the life expectancy of dwarves is about to go up.


The point is that in today's automobile industry, a HUGE part of production happens outside the manufacturers' facilities, done by contractors.

So counting only car manufacturers' employees gives a pretty inaccurate estimate of man-hours per item in that scenario.

EDIT: comma <=> satan


Would it be fair to say this outsourcing is similar regardless of company?

Perhaps, even if some of the larger manufacturers have in-house teams for things others outsource, it will be largely insignificant given their larger size.

Essentially it's not significant.


I think this differs between companies, but not in the way you are suggesting.

For example, concerning the subject we were discussing in the first place: Tesla, because of the specificity of their cars, probably has to produce a huge share of the car's components themselves, whereas Ford, Volkswagen, ... are now at a point where a huge share of their cars' parts are similar and can be outsourced to contractors, who decrease costs by producing only one type of component in huge quantities and supplying multiple manufacturers (who look more and more like assemblers nowadays).

So I agree that it's in fact not easy to compare manufacturers on this metric, but I believe it actually suggests that Tesla does a lot more in-house than conventional car manufacturers.


Am I wrong, or have there been a lot of plane crashes recently? Are all those planes hitting some sort of expiration date mid-flight?


It is possible that you are under a false impression here. Factors for this might be the availability of news reports about crashes (e.g., you just didn't get to hear about all those crashes in, say, Africa in the 90s), or an increase in air traffic while the flight-to-crash ratio stays stable. I haven't looked up all the numbers, but these factors are good to keep in mind.

A similar issue is the perceived increase in crime, while the actual numbers tell a different story: an astonishing decrease in the crime rate since the 90s.


I don't know if I've got that right, but where I come from, "watch the news" mostly means watching it on TV. So it might not have anything to do with reading IT news on HN, for example...


It's complicated to just do something you weren't supposed to and then go with: "look, I know I was asked to do one thing, but instead I did another, so please give me [amount you never agreed to pay]".

What he could have done, on the other hand, is inform the client of the attack and offer to get rid of it for a certain fee...


He did cancel the job in the end, but it sounded like he wanted to investigate the hack for fun when he came across it.

Not a good general/continuing policy, but maybe worth doing the first time, if you're interested.


Thank you for that, I loved the talk and will probably order the book.

EDIT: French version of this talk (by the same author): https://www.youtube.com/watch?v=NZKqPoQiaDE

