> “The thing with Search — we handle billions of queries,” Google CEO Sundar Pichai told The Verge on Monday when asked about the AI overview rollout. “You can absolutely find a query and hand it to me and say, ‘Could we have done better on that query?’ Yes, for sure. But in many cases, part of what is making people respond positively to AI Overviews is that the summary we are providing clearly adds value and helps them look at things they may not have otherwise thought about.”
That's a really weird and risky way of putting it.
There is a huge difference between being a search engine that finds (external) pages, and coming up with answers in your own name. If some page on the Internet says Obama is a Muslim, who cares, it's not important. But if Google says that Obama is in fact a Muslim, suddenly it's a very big deal. And that's true for every query, on every topic imaginable.
Every answer they get wrong chips away at their image a little, and out of billions of queries it's inevitable they will get many wrong.
This forced, radical change motivated only by fear feels like Google+ all over again. Or New Coke.
It’s one thing for the search results to bubble up a wrong result. You go to it, and then you go back and look at more results to get a more complete picture. See where those different sites disagree and where they agree.
But these AI results don’t do that. They provide a single answer and confidently present it as the answer, with no fuzziness about its reliability.
Every time it is right, the general population slowly comes to see it as reliable, until one day it is dangerously wrong. And by then you don’t question it, because it was right so many other times.
> You go to it, and then you go back and look at more results to get a more complete picture. See where those different sites disagree and where they agree.
I and others I know follow this workflow when we try to find the answer to some question we don't know.
But the vast majority of people outside of my own personal friend circle don't seem to approach it like this. Their approach looks more like:
1. Search for thing
2. Read the first info-box that pops up, if it does. That's what they think is 100% the correct information. If no info-box:
3. Read the short description for the first link that appears (sometimes an ad) and take that as the correct answer.
Ideally, people would be more careful, but I haven't seen that in practice.
Yeah, I have honestly seen the same thing. When my hands are full and I ask my partner to search something, the number of times he has said "Google says x" makes me quite concerned.
But at least those info boxes are not generating text on their own; they show what is already on the website and clearly include a source.
At least, though, if you are only looking at that first result, it's still different from Google's AI generating an answer. Barely different, but different.
Excellent point! Shah and Bender have explicitly called this out:
> Contrast this with posing the query as a question to a dialogue agent and receiving a single answer, possibly synthesized from multiple sources, and presented from a disembodied voice that seems to have both the supposed objectivity of not being a person (despite all the data it is working with coming from people) and access to “all the world's knowledge”
Right, the domain itself is a huge signal. Information can have a very different meaning and trustworthiness if it appears on the blog of a known expert vs the NYT vs some spam site like Quora. Google wants the only context to be Google. Even if the AI results are as good as organic ones, which is not a terribly high bar these days, they will be much less useful and trustworthy because nobody knows the provenance of the information.
It's really unfortunate that Google is going to torch so much of the open internet in order to shovel this nonsense onto people. Of course as organic search traffic dries up and reduces new high quality content, and AI slop floods the internet, the GenAI results will get even worse.
The thing that really concerns me is that it isn't just Google; they are putting it right in front of a lot of people with this change. But yes, it being Google that is doing it lends it an air of quality that just isn't there.
Contrast this with the demos shown for ChatGPT-4o, and we are building up this idea that we can really trust this. I was having a conversation with someone over the weekend where I said: yes, I acknowledge the tech is really good, and in just a couple of years it has advanced a ton, but I really think we are overselling its current capabilities and ignoring where it falls flat on its face. That we are moving too quickly rolling this technology out to the average user in everyday situations.
And their response was basically: no, I don't think so. It is ready to be used, there aren't these big issues, and so on.
That's really scary. And these are fundamental issues with this type of technology, but "we can't risk another company putting out their misguided, dangerous product, so we must do it first!" I am convinced that is the general attitude at these companies right now.
> This forced, radical change motivated only by fear feels like Google+ all over again. Or New Coke.
A year ago everyone was trashing Google for being ahead for years on the science of AI, but completely failing to productize it. "OpenAI is going to eat Google's search business for lunch", was the prevailing narrative. This is what people said they wanted!
I'm sure it's clear to everyone now[0] that genAI is not a search solution, and Google's former approach of quietly putting AI into phones and products without it being flashy and in-your-face was actually a better strategy.
[0] Just kidding, I'm certain that the industry and especially the people running Google will learn all the wrong lessons from this.
The problem is that Google effectively destroyed its classic search and is playing catch-up with AI. AI is the only lunch out there to be had in the first place. There is nothing else.
> "OpenAI is going to eat Google's search business for lunch", was the prevailing narrative. This is what people said they wanted
One thing isn't related to the other. Actually, most people who expect OpenAI/LLMs to displace Google enjoy the idea. Nobody is asking Google to serve up LLM results.
> A year ago everyone was trashing Google for being ahead for years on the science of AI, but completely failing to productize it. "OpenAI is going to eat Google's search business for lunch", was the prevailing narrative. This is what people said they wanted!
People say a lot of shit. That doesn't mean you should deploy your poorly performing Q&A AI to give people outright false answers; I'm pretty sure no one asked for that.
After all, Google runs its own products, and when they make shitty choices, I think it's perfectly fair to ask how they can make such bad decisions, regardless of whether I personally might have said "Google is failing to productize LLMs properly" or whatever.
Section 230 of the Communications Decency Act currently protects Google, and others, when they link to third-party pages within their search results. Wouldn't it become ineffective with AI acting as a publisher? Wouldn't false statements about individuals create liability and open them up to lawsuits? For example, the AI stating through hallucination that someone is a pedophile, versus reporting actual court outcomes.
The Google+ comparison is apt: a product rushed out in response to a perceived threat, on which the entire company is aligned and bet, with no regard for how that full-power triangulated focus killed whatever potential it had, if any. They flipped from missing the boat on transformers, a thing they created, to playing catch-up on something that's probably mostly a fad.
It's like they decided to speedrun the trust thermocline they were already in. We're a long way from the days of this Yahoo killer making the rounds on AOL chats and IM.
Large organizations expect to cause some harm as a "tax" built into the system; they are not very harm-avoidant, they are more psychopathic (and often led by psychopaths). So new tech is used without much concern about the harms it creates.
Sure, LLMs do amazing things, but so did all the previous once-disruptive technologies later widely adopted. Electricity, flight, nuclear energy, transistors and integrated circuits, the internet; AI is joining a very competitive list.
We've just never seen a disruptive technology with this much media buzz behind it so quickly, and such rapid technological advances. Disruptive technologies are never "ready for prime time" ... until they are.
It is highly unlikely it will say that; there are 10,000x more articles on the fact that he isn't, and more specifically, the LLM would be "aware" (in the sense that the relevant next words would rank highly) that there is a specific controversy over this.
It's the long tail and new stuff that it will get wrong.
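To make concrete what "next words with high rank" means, here is a minimal sketch (my own illustration, not from the thread) that prints a model's top next-token probabilities for a prompt. The choice of GPT-2 and the prompt are arbitrary assumptions; a production model would behave differently, but the mechanism of ranking candidate continuations is the same.

    # Minimal sketch: inspect which next tokens a language model ranks highly.
    # Assumes the Hugging Face transformers library and a small GPT-2 checkpoint.
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    prompt = "Barack Obama's religion is"  # arbitrary example prompt
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits

    # Probability distribution over the vocabulary for the next token only.
    next_token_probs = torch.softmax(logits[0, -1], dim=-1)
    top = torch.topk(next_token_probs, k=5)
    for prob, idx in zip(top.values, top.indices):
        print(f"{tokenizer.decode(int(idx))!r:>15}  p={prob:.3f}")

The point of the sketch is only that the model assigns graded probabilities to many continuations rather than "knowing" a fact, which is why heavily attested claims tend to dominate while long-tail or novel topics are where it goes wrong.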