Hacker News | alyx's comments

Welcome to the world run by business. Here’s a warm towel, enjoy your stay.


> Narrator: The towel is made by the lowest bidder, it's falling apart, it isn't warm, and you won't.


Many "anti-vaxxers" are simply concerned parents.

Considering that vaccines (in general) have been deemed "unavoidably unsafe" at the Supreme Court level, they are not completely unjustified in their fears. [0]

Vaccine manufacturers cannot be held liable.

Thus there is a "National Vaccine Injury Compensation Program" put in place to compensate those that have been damaged by vaccinations.

To date, over $4 billion has been paid out in damages. [1]

Just as mentioned in this article, side effects are major deterrents to any medication. And significant efforts need to be made to ensure that side effects do not overshadow the positive effects.

[0]: https://www.supremecourt.gov/opinions/10pdf/09-152.pdf

[1]: https://www.hrsa.gov/sites/default/files/hrsa/vaccine-compen...


I think you're misreading the Supreme Court ruling because you're misinterpreting the term "unavoidably unsafe".

"In short, “unavoidably unsafe” is simply a legal term that means the manufacture is not liable because they cannot do anything to make the product safer. It does not mean that the product is dangerous and should be avoided."

This could apply to peanut butter because of peanut butter allergies. The manufacturer is not liable for someone who dies of a peanut butter allergy when they eat peanut butter, because this is not due to manufacturing defects but to the inherent nature of peanut butter being "unavoidably unsafe".

This doesn't mean peanut butter or vaccines are dangerous.


The Supreme Court consists of zero medical doctors.

Do not take a legal opinion and present it as medical fact.


From your [0]:

> Whereas between 1978 and 1981 only nine product-liability suits were filed against DTP manufacturers, by the mid-1980's the suits numbered more than 200 each year. This destabilized the DTP vaccine market, causing two of the three domestic manufacturers to withdraw; and the remaining manufacturer, Lederle Laboratories, estimated that its potential tort liability exceeded its annual sales by a factor of 200.

So did vaccines suddenly become dangerous after 1980? Or did people develop irrational fear?

If you read further it plainly says that the "National Vaccine Injury Compensation Program" was started to mollify these anti-vaxxers to try to get them to vaccinate:

> significant number of parents were already declining vaccination for their children, and concerns about compensation threatened to depress vaccination rates even further. This was a source of concern to public health officials, since vaccines are effective in preventing outbreaks of disease only if a large percentage of the population is vaccinated.

> To stabilize the vaccine market and facilitate compensation, Congress enacted the NCVIA in 1986.


Good overview, I share your opinion.

Tesla is betting BIG on NNs.

In the presentation they say that we have NNs in our brain. Which is why we can look at one photo of a dog, and distill enough identifying information in order to spot the breed from then on. Training sample of 1.

Curiously though, they then say that NNs don't work like our brains do, and require A LOT of training data from all angles, etc. Training sample of N.

The presentation doesn't actually show how they plan to train a single NN to encompass all the knowledge for self driving. In particular, the looooooong tail. But maybe they'll have independent models, say based on "observed" conditions, which swap in and out ... based on more NN logic.

But at the end of the day, there are no known (to me) examples of NNs even approaching human ability. Tesla is basically hoping we will believe they will be the first to show this capability of NNs.

Let's wait and see, 2020 is not that far away.


> require A LOT of training data from all angles

Also, we learn things outside of our cars. I ride a bike, which gives me insight into what bike riders will do on the road.

I have kids, so I know a kid on the pavement will be more risky / unpredictable than an adult.

I drove down a single track road today and met another car coming towards me. We stopped, then he started to reverse, there was a passing place behind him. Another car pulled up behind him, so I then reversed back into a field to let the cars past. This sort of thing happened maybe six times today as I drove around. Not at all surprising, not even noteworthy enough for any of the drivers to acknowledge even a flick of the finger in thanks. Just part of driving here.


Hmm maybe the problem is actually interacting with humans naturally, on the road, so self-driving is really human-style general AI


> Hmm maybe the problem is actually interacting with humans naturally

Almost every driver has a rhythm (or pattern, if you will) to the way they drive; and we pick up on this and interact with them accordingly on the road. Even most bad drivers have a pattern to the way they drive. The more experienced you are, the easier it is to "read" fellow motorists and safely interact with them on the road. One of the most difficult scenarios is when you come upon someone who drives with no rhythm or pattern whatsoever. You have no clue what that person is likely (emphasis on likely) to do next.

I have no experience sharing the road with autonomous vehicles, but I suspect they will have no rhythm to the way they drive, or rather none that is discernible to a human. IMO this will be the major obstacle to overcome if humans and bots are to share the same road at the same time.


This is actually the aspect that really doesn't get discussed enough. Imagine these scenarios:

a) Roadworks and a guy tells you to stop or waves you through. Or asks you to drive slowly. Or points to a direction that he wants you to go.

b) A woman on the side of the road is waving her arms because her small child is crawling on the road trying to chase a toy.

If self-driving cars can't look another person in the eye and identify their intent, then there are going to be a lot of unforeseen problems.


The more interesting question is not whether there will be corner cases where self-driving fails, but what to do when average performance is so much better than human drivers that AI saves 10s if not 100s of lives every day, yet the AI driving software is still actively involved in the deaths of a single-digit number of people each day or week.

Do you deploy that build or not?


For sure deploy if the numbers are right. It's the edge cases that are interesting.

Cameras are fine in the day, but when I struggle to see at night, in the rain, with spray from cars, glare from oncoming traffic... that's where I would LOVE to have a lidar-assisted heads-up display. How Tesla think they can do better with just cameras compared to another car with cameras AND lidar just baffles the mind.


Not better in absolute terms, but better in terms of the best car for a certain price. Sure, a $200k car with redundant LIDAR and horribly inefficient aerodynamics because of the giant sphere on top (like Waymo) might be better than a $50k car with just cameras.

But the hard part is the software, and it's unclear that the added complexity of lidar + camera will allow the software to be better than just cameras.

As an example, a recent study at Cornell showed that a stereo pair of cameras mounted high up behind the windshield provided similar-quality 3D data to LIDAR. Search for "Pseudo-LiDAR from Visual Depth Estimation: Bridging the Gap in 3D Object Detection for Autonomous Driving" or the more consumer-friendly "Elon Musk Was Right: Cheap Cameras Could Replace Lidar on Self-Driving Cars, Researchers Find".

Seems much easier to work on a stereo video stream that includes 2D, color, and extracted 3D features than trying to achieve sensor fusion with lidar + video. Especially if you want to make full use of color for things like brake lights, traffic signals, color/markings of an ambulance, school bus, lane markings, and other important details that are invisible to LIDAR.
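For anyone curious, the geometry that makes "pseudo-LiDAR" possible is the classic pinhole-stereo relation: depth = focal length × baseline / disparity. A minimal sketch (the function name and rig numbers are hypothetical, not taken from the paper):

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m):
    """Convert a stereo disparity map (in pixels) to metric depth (in meters).

    Uses the pinhole-stereo relation depth = focal * baseline / disparity.
    Pixels with zero disparity (no stereo match) are mapped to infinity.
    """
    disparity = np.asarray(disparity, dtype=np.float64)
    depth = np.full_like(disparity, np.inf)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

# Hypothetical rig: 1400 px focal length, 0.54 m baseline between the cameras.
depth = disparity_to_depth([[70.0, 14.0, 0.0]], focal_px=1400.0, baseline_m=0.54)
# 70 px of disparity -> 10.8 m; 14 px -> 54 m; 0 px -> no match (inf)
```

Note how depth error grows as disparity shrinks: distant objects move the disparity by only a pixel or two, which is exactly where LIDAR still has the accuracy edge.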

Also consider: if the weather is terrible and camera vision is so bad that only 5 mph is safe, but LIDAR can see significantly better, do you allow the car to drive at 50 mph because it can see 200 feet? For social reasons alone, it seems like driving as much like a human as possible is going to be best until all cars are autonomous.


Cars and humans inhabit mutual space to such an extent that perhaps level-5 self-driving wouldn't be enough. What we need might be a general AI that can comprehend its surroundings and connect them to a symbolic hierarchy of intentions well enough to obey Asimov's 3 laws.


This is why I think the solution will be something like "SDV lanes" -- separate lanes for robots on the highway, like bus lanes. Current roads are designed for humans communicating with each other, and if we wait for general AI to implement self-driving cars, we'll be waiting a long time.


It would be interesting to see if kids who spent a lot of time riding bikes, or even just a lot of time in cars, learn to drive well more quickly than other kids.


Going to guess that you're new to this.

So deep learning via neural networks is pretty much the only way to implement self driving cars these days. It's what everyone is using, and it's also used in FaceID and other recognition systems.

So Tesla isn't betting big on it any different to all of the other self driving companies. We wouldn't say Facebook is betting big on PHP just because they use that technology. And Tesla definitely isn't in the top tier compared to say Waymo.

Plus talking about neural networks approaching human ability is just marketing buzzwords. It doesn't mean anything tangible since we can't define our abilities in terms of models with hyperparameters.


PCC has a good slicing tomato variety that tastes great and stores great.

I saved seeds and plan to plant them in my garden.



This is the Electron demo using Blazor by Steve Sanderson. https://github.com/SteveSandersonMS/BlazorElectronExperiment...


"ASCII" is limited to characters 0 - 127

https://en.wikipedia.org/wiki/ASCII
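A quick way to see the 0-127 boundary in practice (a throwaway sketch, not from the parent comment):

```python
def is_ascii(text):
    """True if every character falls in the 7-bit ASCII range 0-127."""
    return all(ord(ch) <= 127 for ch in text)

# "café" contains é (code point 233), which lies outside 7-bit ASCII.
print(is_ascii("hello"))  # True
print(is_ascii("café"))   # False
```

Python 3.7+ also ships `str.isascii()`, which performs the same check.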


I find that what might help those who quote statistics is to self-reflect and ask the question: how would you react if your father/mother/sister/son/daughter/etc was the victim?

Would you care that statistically self-driving cars have killed less people than human drivers?


Well, just because people react badly when confronted with personal tragedy doesn't mean you have to do the same when it's not applied to you.

For example, it's clearly excusable to be angry / depressed / looking for someone to blame when a personal tragedy happens to you. But that doesn't mean you should replicate that behaviour when a tragedy happens to someone else. We should distance ourselves from subjects where we're emotionally compromised, not guide our decisions by the example of emotionally compromised people.


> …statistically self-driving cars have killed less people than human drivers?

I realise that the point you're making is about emotional turmoil and not really statistics, but it looks like so far Uber aren't doing so well in terms of driving safety:

https://news.ycombinator.com/item?id=16620736

So yeah, if it turned out that Uber were sending cars out into the world without sufficient engineering or testing* and they killed someone (especially someone I know) then I would be understandably furious.

* In order to get to market sooner? I don't trust Uber and I wouldn't put this past them. Nothing I have seen of the Uber self-driving car program makes me want to be anywhere near one on a road.


> Would you care that statistically self-driving cars have killed less people than human drivers?

I don't think there are enough data points right now to make such a claim, but please correct me if I am wrong.


All deaths are a tragedy to someone. A statistic is an aggregate account of untold suffering and misery. To be truly compassionate, we must learn to see the humanity in statistics. We must recognise that our emotional reaction to the death of a loved one and our indifference to the death of a stranger is not a sign of compassion, but egoism. As enlightened citizens, it is our duty to use all our ingenuity and effort to reduce the sum total of suffering.

This death is a tragedy. There were 37,461 such tragedies in 2016. Every single one of them matters equally. It is only by studying and analysing them in aggregate that we can prevent future tragedies. Seat belts, air bags, crumple zones, anti-lock brakes, median dividers and rumble strips are tremendous acts of compassion. Statisticians, scientists and engineers have made an incalculable contribution to human welfare.


> how would you react if your father/mother/sister/son/daughter/etc was the victim?

Shit, of course. But I'd like to believe that I'll realize that banning self-driving cars is not going to make things any better, just like I'd never propose banning cars in general. I'm not sure how it would help here to imagine it's my daughter; it's not as if we imagine it being our daughter in every traffic accident we hear of, since that would be crippling given the sheer number of them...


Would you care that seatbelts are statistically likely to save lives if a loved one drowned in their car because of one?

What do we do differently based on the answer to that question?


Yes. I'm in favour of safer drivers, robot or human.


> how would you react if your father/mother/sister/son/daughter/etc was the victim?

How you would react when emotionally distraught shouldn’t be the model for how you act all the time.


This. And I think this is why these stats end up posted here when stuff like this happens. The emotionally distraught look at this situation and they're like BAN SELF DRIVING CARS!, and the people posting stats are here to remind them that self-driving cars are still safer than regular cars, that accidents happen, and that we can't impede progress because of one accident here and there.


Could you please not use allcaps for emphasis in HN comments?

This is in the site guidelines: https://news.ycombinator.com/newsguidelines.html.


I didn't realize that, it's been a while since I read those. Thank you for the information!


I think that's a strange thought experiment, since it's basically requesting that you think emotionally and as irrationally as possible to draw conclusions.


It's requesting that you remember that a human life was just lost, and to not reduce this woman's life to a rounding error.

The only strange thing in this thread is the emotional detachment many are showing and one could probably argue that's the exact kind of thinking that will lead to more cases like this.



And one of the linked articles was explicitly engaging with that.

https://spectrum.ieee.org/computing/hardware/can-we-quantify...


For those interested in thoughts about consciousness, I highly recommend Bernardo Kastrup.

He comes from a comp sci background but has several books and very well-structured arguments for Idealism. His manner of delivering his insights, I think, particularly appeals to those with analytical and critical thinking backgrounds.

He has several very good videos on YouTube as well.

https://www.goodreads.com/author/show/4552692.Bernardo_Kastr...

https://www.youtube.com/user/bernardokastrup/


When you buy an apple in the store, unless it is labeled as "new crop" it has already been in storage for up to a year.

http://www.foodrenegade.com/your-apples-year-old/

