
It's not even hallucinating; it's just correctly summarizing the wrong thing.

Also, we're only talking about a handful of examples out of billions of queries. That doesn't stop the usual media hyperventilating, though.



It only takes a reply to a handful of queries to get a handful of people actually hurt by telling them to do something dangerous.

The average person doesn't necessarily have the media and AI literacy to know not to trust papa google's answer at the top of the result page.


Despite some evidence to the contrary, I believe most people aren't that stupid, and the "average person" isn't actually typing in these queries; these are marginal cases played up for meme value.


It highlights the underlying problem, which is that Google has built up trust over 20 years that they will return the best results from the web. Now they undermine that trust by shoving AI results at the top and calling them answers.

It doesn't take much imagination to think of non-meme questions that will propagate wrong information.

"Fake info from trusted sources" isn't a hypothetical issue. When they changed the law that required TV news to be factual we quickly headed down the path of Fox News and MSNBC. They are so effective precisely because Boomers grew up fully trusting news sources.

You can argue that it is not a problem and that people know the difference, but we have plenty of real-life proof to the contrary.


More consequential queries probably have better safeguards. This hullabaloo is about long-tail nonsense that doesn't matter; these are kinks to be ironed out.



