> “The thing with Search — we handle billions of queries,” Google CEO Sundar Pichai told The Verge on Monday when asked about the AI overview rollout. “You can absolutely find a query and hand it to me and say, ‘Could we have done better on that query?’ Yes, for sure. But in many cases, part of what is making people respond positively to AI Overviews is that the summary we are providing clearly adds value and helps them look at things they may not have otherwise thought about.”
That's a really weird and risky way of putting it.
There is a huge difference between being a search engine that finds (external) pages, and coming up with answers in your own name. If some page on the Internet says Obama is a Muslim, who cares, it's not important. But if Google says that Obama is in fact a Muslim, suddenly it's a very big deal. And that's true for every query, on every topic imaginable.
Every answer they get wrong chips away at their image, and out of billions of queries it's inevitable they will get many wrong.
This forced, radical change motivated only by fear feels like Google+ all over again. Or New Coke.
It’s one thing for the search results to bubble up a wrong result. You go to it, and then you go back and look at more results to get a more complete picture. See where those different sites disagree and where they agree.
But these AI results don't do that. They provide a single answer and confidently present it as the definitive answer. No fuzziness about its reliability.
Every time it's right, it slowly gets deemed reliable by the general population, until one day it's dangerously wrong. And by then you don't question it, since it was right so many other times.
> You go to it, and then you go back and look at more results to get a more complete picture. See where those different sites disagree and where they agree.
Me and others I know follow this workflow when trying to answer a question we don't already know the answer to.
But the vast majority of people outside of my own personal friend circle don't seem to approach it like this. Their approach looks more like:
1. Search for thing
2. Read the first info-box that pops up, if it does. That's what they think is 100% the correct information. If no info-box:
3. Read the short description for the first link that appears (sometimes an ad) and take that as the correct answer.
Ideally, people would be more careful, but I haven't seen that in practice.
Yeah, honestly I've seen the same thing. My hands are full, so I ask my partner to search for something, and the number of times he has said "Google says x" makes me quite concerned.
But at least those info boxes are not generating text on their own; they show what is on the website already and clearly include a source.
Still, even if you're only looking at that first result, it's different from Google AI generating an answer. Barely different, but different.
Excellent point! Shah and Bender have explicitly called this out:
> Contrast this with posing the query as a question to a dialogue agent and receiving a single answer, possibly synthesized from multiple sources, and presented from a disembodied voice that seems to have both the supposed objectivity of not being a person (despite all the data it is working with coming from people) and access to “all the world's knowledge”
Right, the domain itself is a huge signal. Information can have a very different meaning and trustworthiness if it appears on the blog of a known expert vs. the NYT vs. some spam site like Quora. Google wants the only context to be Google. Even if the AI results are as good as organic ones, which is not a terribly high bar these days, they will be much less useful and trustworthy because nobody knows the provenance of the information.
It's really unfortunate that Google is going to torch so much of the open internet in order to shovel this nonsense onto people. Of course as organic search traffic dries up and reduces new high quality content, and AI slop floods the internet, the GenAI results will get even worse.
The thing that really concerns me is that it isn't just Google; this change puts it right in front of a lot of people. But yes, because it's Google doing it, it gets an air of quality that just isn't there.
Contrast this with the demos shown for ChatGPT-4o, and we are building this idea that we can really trust this stuff. I was having a conversation with someone over the weekend where I said: yes, I acknowledge the tech is really good, and in just a couple of years it has advanced a ton, but I really think we are overselling its current capabilities and ignoring where it falls flat on its face. That we are moving too quickly rolling this technology out to the average user in everyday situations.
And their response was basically: no, I don't think so. It is ready to be used, there are no big issues, and so on.
That's really scary. And these are fundamental issues with this type of technology, but "we can't risk another company putting out their misguided, dangerous product, so we must do it first!" I am convinced that is the general attitude at these companies right now.
> This forced, radical change motivated only by fear feels like Google+ all over again. Or New Coke.
A year ago everyone was trashing Google for being ahead for years on the science of AI, but completely failing to productize it. "OpenAI is going to eat Google's search business for lunch", was the prevailing narrative. This is what people said they wanted!
I'm sure it's clear to everyone now[0] that genAI is not a search solution, and Google's former approach of quietly putting AI into phones and products without it being flashy and in-your-face was actually a better strategy.
[0] Just kidding, I'm certain that the industry and especially the people running Google will learn all the wrong lessons from this.
The problem is that Google effectively destroyed its classic search and is playing catch-up with AI. AI is the only lunch out there to be had in the first place. There is nothing else.
> "OpenAI is going to eat Google's search business for lunch", was the prevailing narrative. This is what people said they wanted
One thing isn't related to the other. Actually, most people predicting that OpenAI/LLMs are going to displace Google rather enjoy the idea. Nobody is asking Google to serve up LLM results.
> A year ago everyone was trashing Google for being ahead for years on the science of AI, but completely failing to productize it. "OpenAI is going to eat Google's search business for lunch", was the prevailing narrative. This is what people said they wanted!
People say a lot of shit. That doesn't mean you should put up a poorly performing Q&A AI that feeds people outright false answers; I'm pretty sure no one asked for that.
After all, Google runs its own products, and when they make shitty choices, I think it's perfectly fair to question how they can make such bad decisions, regardless of whether I personally might have said "Google is failing to productize LLMs properly" or whatever.
Section 230 of the Communications Decency Act currently protects Google, and others, when they link to other pages within their search results. Wouldn't it become ineffective with AI acting as a publisher? Wouldn't false statements about individuals create liability and open them up to lawsuits? For example, AI stating through hallucination that someone is a pedophile, versus reporting actual court outcomes.
The Google+ comparison is apt: a product rushed out in response to a perceived threat, one the entire company is aligned on and bet on, with no regard for how that full-power triangulated focus killed whatever potential it had, if any. They flipped from missing the boat on transformers, a thing they created, to playing catch-up on something that's probably mostly a fad.
It's like they decided to speedrun the trust thermocline they were already in. We're a long way from the days when this Yahoo killer was making the rounds on AOL chats and IM.
Large organizations expect to cause some harm as a "tax" built into the system; they are not very harm avoidant, they are more psychopathic (and often led by psychopaths). So new tech is used without much concern about creating harms.
Sure, LLMs do amazing things, but so did all the previous once-disruptive technologies later widely adopted. Electricity, flight, nuclear energy, transistors and integrated circuits, the internet; AI is joining a very competitive list.
We've just never seen a disruptive technology with this much media buzz behind it so quickly, and such rapid technological advances. Disruptive technologies are never "ready for prime time" ... until they are.
It is highly unlikely it will say that; there are 10,000x more articles on the fact that he isn't, and more specifically, the LLM would be "aware" (in the sense that the relevant next words would rank highly) that there is a specific controversy over this.
It's the long tail and new stuff that it will get wrong.
Absolutely ironic that for years Google penalized websites that didn't have good enough "authority" on topics like health and finance ('Your Money or Your Life', YMYL), and now it ranks its own seriously harmful results right at the top of the page.
I stopped using Google Search a few years ago; 95% of the time I use DDG (with an occasional !g fallback). But since DDG was down yesterday (thanks to Microsoft) I had to use Google Search a few times. Never before had I needed to add `-phrase` so often to eliminate results I didn't want to see. I was looking for some podman tips and had to keep adding -kubernetes and -openshift to every single query. It really showed me how much worse Google Search has become since I left it, but also: no regrets.
Whoever at Google thought this was a good idea should no longer be involved in search! Reddit is a terrible source of information; it's all confirmation bias and unqualified people giving advice, upvoted by other unqualified people as a form of vetting. It's worse than Yahoo Answers because it looks more trustworthy on the surface but has lower-quality information.
That describes the whole internet. It was never meant to be anything more, so that's what we got.
Which was fine, until we had the bright idea of feeding it all into a neural network as a collection of facts.
Then we gave that neural network a voice and a personality. It spoke with the utmost confidence as the expert on any subject.
Truth, lies, facts and falsehoods all blended together and regurgitated in an infinite stream of babble.
I vacillate between LLMs being an interesting fad with limited usefulness and an apocalypse that'll throw the world into chaos. I'm back on the fence again I suppose, leaning towards the latter.
OpenAI is excited about their deal with Reddit too; it's a game of one-upmanship in the current AI hype cycle. It also makes Reddit seem very valuable for the AI arms race, which could influence its stock price, and there are certain stockholders who benefit greatly if the price does go up.
> "google execs hearing the feedback that 'adding reddit to the end of every search is the only way to get information that you need' and immediately destroying their business trying to automate that"
> All these examples seem like deliberate attempts to get weird/nonsense answers back.
I'm not sure that I agree.
If a child hears a rumour, or sees some joke online, that claims that gasoline cooks spaghetti faster, they may search to find out if it's legit.
During Obama's term, there was a right wing conspiracy theory that gained popularity that claimed that Obama was Muslim. Someone coming across that conspiracy theory years after the fact, completely devoid of any context (ex: a pre-teen who wasn't even alive yet during Obama's term), might do a search to find out if it's true.
There WAS an NPR article that cited a study claiming that parachutes are no more effective than regular backpacks. AI results have not been enabled for my Google account yet, and currently if you search for "are parachutes effective?" you get a featured snippet that clips that article and links to it. Now take that link, with all of its context, out of the picture and imagine that someone hears the claim casually and wants to search to see if it's true. Currently, in MY search results, you get the link to the NPR article that not only explains where the claim comes from but gives you the full context with all of the "gotchas." It sounds like with Google AI the first thing you get is a definitive, authoritative claim that no, parachutes are not effective at saving your life, and you might as well jump out of an airplane with your carry-on backpack.
I just saw an interview with Sundar Pichai and someone showing him the mobile Google search results and how bad they are. It was pretty embarrassing. It's like he doesn't even use his own product. https://youtu.be/lqikP9X9-ws?si=0NAJX5GhzXfhP1t0&t=1524
Example from a few days ago, the first time I decided to use AI Google Search: I Googled how to reset the time on my Nixon Ripley watch (I forget the exact search term I used). Google "AI" helpfully brought up a list of instructions, prefaced with a helpful "to reset the time on a Nixon Ripley watch ..." But the instructions referred to a mode that's not on this watch, which was suspicious. It helpfully included a link to a website with a set of instructions that looked like what the AI had generated. Except that the website didn't reference the Nixon at all, but some other watches (Casio G-Shock, I think), and so the instructions of course didn't work.
So I went back to the "old fashioned" way of searching the links for the actual Nixon website, finding the watch manual, with the correct steps.
That seems pretty par for the course with Google these days. “Hmm, you want to know something about a specific watch? Well, I know something else about watches!”
It seems like LLMs are good at broad, generic things but fail miserably at precision. And instead of admitting they don't know something, they confidently respond with nonsensical answers.
I thought the commenter was talking about Google Search. It sometimes pushes similar sites you don’t want that might generate more ad revenue. Now, they have two products that do this.
They were talking about something they did through Google Search, but it hasn't been just search for a long time. Pretty often it'll add a section on top of the search results that attempts to answer a question directly, using data pulled from sites, instead of linking to another site.
This feature has existed since long before LLMs, but it sounds like they may have mixed LLM output into it too.
My favorite part of Copilot is when it auto-completes a call to a function that does exactly what I need to do, like magic!
Except that function doesn't exist and never did.
LLMs don't know what they don't know, so they just make something up because they have to say something. The danger is that most people don't understand that's how they work and don't know when to call BS.
This is where I think companies have a responsibility. To ensure that _every_ response has a disclaimer that the answer from their AI could be right or completely wrong and it's up to the user to figure that out, because AI can't at the moment.
I am not as worried about these sorts of ludicrous results as I am the ones that are close enough to correct to be believable. That is where you will get in trouble.
I have a similar experience using GitHub Copilot… it usually gets it right, which is great, and sometimes gets it really wrong, in which case I don’t use the suggestion and move on with my work.
However, every now and then it will give me a result that really looks correct, but is wrong in some minor way, and I end up getting burned because it takes me way too long to realize where the error is.
The uncanny valley of generated software: utility is inversely proportional to distance from the correct answer, with a huge drop into the dangerously believable.
If you want to get rid of that, try https://udm14.com/. It will redirect you to the "Web" tab which Google secretly added[0] pretty recently. It will show you only text-based links.
You can also edit your browser's search settings to append the &udm=14 parameter to queries automatically.
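For instance, a minimal sketch of building such a URL yourself (udm=14 is the only Google-specific piece; the rest is an ordinary query string):

```python
from urllib.parse import urlencode

def web_only_search_url(query: str) -> str:
    # udm=14 selects the text-only "Web" tab in Google's results.
    return "https://www.google.com/search?" + urlencode({"q": query, "udm": "14"})

print(web_only_search_url("podman tips -kubernetes -openshift"))
# https://www.google.com/search?q=podman+tips+-kubernetes+-openshift&udm=14
```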
Google today feels nothing like the Google 24 years ago. It was magic and reliable and no nonsense.
I understand technology needs to change with the gamification of SEO, but searching is so frustrating now that I often have to use Google in combination with site-filtering keywords (e.g., site:stackoverflow.com). And it would be even more difficult to use if I didn't even know which sites were trustworthy "authorities."
I just wish somebody would disrupt Google like Google did with search and email 2 decades ago. Searching on Alta Vista or Yahoo was a nightmare until Google came into our universe.
I joined Facebook pretty early on, back when you needed to have an email address issued by a university to join. My posts from back then are pretty wild, all very personal stuff and conversations with friends.
I logged in today, not a single mention of a friend in any way, shape or form on my feed. No posts from friends. No comments from friends. No "here's what your friend liked."
Half of the content wasn't even stuff I was following, it was posts that were "similar" to something I liked or to some group I was in.
It's amazing what a bait and switch these companies pulled. They really leaned into it. Google barely resembles a search engine now. Facebook is basically just a billboard.
It's immensely frustrating, as if you're of a certain age (I am), a non-trivial amount of your formative young adult relationship-building took place through Facebook. I remember having the prescience to lament that this wouldn't allow old relationships to fade away quietly, but instead, through the magic of social media, rot slowly. Joke's on me, it was that and worse.
Zuckerberg and co. muscled their way in and extracted value out of the dismantling of traditional social dynamics and cohesion, and left us with a hole where the scaffolding of youth should have been. Very Uber-esque. Actually, it describes any number of start-ups from the last 2 decades. Maybe "disruption" has a negative connotation for a reason.
I think we just grew up. When I joined Facebook as a teenager in 2008, we essentially had no filter. Every thought, every photo, every relationship update - all of it was immediately shared to Facebook because at the time it was fun and novel and innocent.
For better or worse, millennials have become much more discretionary in what they post online than they were 15 years ago. I imagine that if Facebook had organic content from your real friends to show you, then it probably would; the well's just run dry.
Those in-the-know use private group chats as social media. No "useful" recommendations that tend towards selling you stuff. No trolls swooping in to derail conversations. No capricious design changes. Just simple human-scale interaction.
You used to have niche, semi-public forums. This had the combined benefit of a sustainable culture and moderation for those in the conversation, and an accessible knowledge-base for everyone else. Discord et al. are not a replacement for this.
Been using it for a year and I hope I never have to go back. It is so much better in every way. It’s amazing how many nice features can be added when there isn’t any worry about how it will impact ad revenue.
I got tired of poor search experience on Google and switched to DDG, used it by default for a couple years (resorting to !g maybe 10-20% of the time). But for the last 6? months with Kagi, it's literally 0 times I've even been tempted. Paying $5/mo to be a customer and not the product, having my privacy respected, and enjoying consistent access to excellent search results is IME equivalent to switching from browsing without an adblocker to using ublock plus, or trying reader mode for the first time. It's transformative.
I started using it 2 years ago. In the first few months I would flip to Google when I wasn't getting results out of Kagi. However, I now have enough experience with Kagi to trust that if a query returns nothing useful on Kagi, it wouldn't on Google either. I have no reason to ever return to Google search.
Same here, ~7 months with Kagi and it has replaced Google search 100% for me. Felt weird at first to pay for it, but after trialing it for a month or two, I now feel proud to pay for it, to ensure longevity and sustainability. The best feature for me is probably the ability to rank domains up/down as I wish.
I have high hopes for Kagi precisely because they're not trying to disrupt Google. As long as they stick to their current niche, they can do things to filter out the nonsense and surface good content without ever running the risk that their algorithms become the game that every website must play to compete.
Yet another vote for Kagi. When I was on the free plan it felt like a secret weapon that I pulled out for tough searches. Happy to pay for it now. Now, they do have a similar AI summary feature, but it actually links the sources it used to come to its answer so you can check its work. You can turn it off too. I've also found Kagi staff to be fairly quick about bug fixes, though I've only reported one bug.
The garbage web today is partially Google's own doing. They're the ones who created the incentives for websites to have cruft to satisfy the Google bot. They're the ones who capture all value from news outlets by aggregating and summarizing articles so people don't need to click on links.
Google isn't some dainty little startup. They're the dominant interface (search and browser!) through which most of the planet uses the internet.
Web pages are optimized for the most popular search engine.
If any of Ask Jeeves or Lycos or Webcrawler or AltaVista had risen to the top of the heap instead of Google, then web pages would have been optimized for that respective bot instead.
What you say is true, but it doesn't excuse Google for failing to take steps that mitigate what we're seeing today. All it would have taken was some restraint: some combination of being less dominant in the market (i.e., optimizing for a 90%-market-share search engine is different vs. 60%-market-share) and giving users more control over search results (e.g., allowing users to blacklist entire domains for themselves).
We have a garbage Google-specific web because websites didn't have to satisfy anyone else; not other search engines, and not even the users themselves. Instead of Google delivering customers to websites, Google positioned itself to be the only customer.
While it would be absurd for me to say that an advertising behemoth like Google has no influence on the decisions of users, it would also be absurd for me to say that Google is somehow empowered to force users to do...anything, at all.
Free will still exists. Nobody from Menlo Park has put a gun to anyone's head to make them use Google to search the web instead of Bing or DDG or Yandex or whatever.
> All it would have taken was some restraint: some combination of being less dominant in the market (i.e., optimizing for a 90%-market-share search engine is different vs. 60%-market-share)
So, let me get this straight: The idea is that Google Search sucks, and the suggested cause for this level of suck is that it is so popular that it causes many publishers to deliberately poison the well using Google-optimized SEO. (Or, more simply: That Google has reached critical mass, and that this is problematic for Google users.)
And, well: I don't disagree. That does appear to be the state of things.
But the apparent proposed corrective action is for it to somehow make itself less popular? By doing what, exactly? Sucking harder? Does it not already suck hard enough?
What a confounding paradox.
Wouldn't a simpler and less paradoxical plan of attack -- that anyone can accomplish completely and absolutely, starting right now -- be to just not use Google search at all for one's own dealings in life?
I agree, but I remember that when I was a university student I was easily able to find answers to my queries. I was recently revisiting some of my past and tried to find the same info behind a useful epiphany from decades ago, and I couldn't find it (even though the same engine gave me the answer back then).
Very much this. It kills me when I search for something that I know for a fact still exists on the web, and there is no way of finding it through google. I used to comb through page 10+ regularly for hits for things related to my search, and now I’m pretty sure google doesn’t even compile hits after the first few pages.
One of them was looking at my old AP US History tests. I don't have the exam question right in front of me, but I remember getting the answer wrong; it hinged on an obscure fact about some former president. When I looked up why I got it wrong 22 years ago, the correct answer was within the first 3-4 results. This time I didn't find it in the first several pages of search results.
AltaVista was actually pretty good if you knew how to use its search operators. Google just enabled search for the masses so to speak. A bit like the iPhone of search engines.
AltaVista was great until they quit running the crawler for a few months - I gave up when I searched something and the first 10 results were dead links. I'm told they got it running again just after I left, but I never looked back to see. (there is a lesson here)
I actually remember the first time I began struggling with Google results, and it turned out that Google was simply switching to being an answer machine instead of a search engine.
As a long-time Google user, I was used to searching for phrases that might appear in articles related to what I'm looking for. I had to switch my mental model of how Google works: instead of typing what might have been written in such an article, I had to type my question.
Maybe it’s time for another unlearning phase and learn how to use the LLM dominant Google? I’m not sure yet, LLMs seem too unpredictable.
Google won because it was simple. Just type in the text box. Yahoo was a page full of junk, and AltaVista had difficult-to-navigate search results. If I remember right, anyway; it's been a while!
I'm not sure we can. In 2000 everyone had a homepage that they (poorly) maintained, with links to places they found useful. This gave Google a large set of data to mine for places that are useful and worth searching for. Now such things are not common, so Google can't find the signal as easily. (People share links on social media, but that isn't Google-searchable.)
You used to be able to see Twitter results in real time on Google. Facebook, too, early on, IIRC. Google could pay them to get access again, but... won't.
I suppose Google would have to somehow exit the personal data broker business to pull this off, too.
According to this article [1] last month, Google deliberately made search results worse in 2019 because they get more ad revenue from the spammier sites.
Honestly it's becoming impossible to find information on the web.
A friend of mine wanted to find some lyrics, and the lack of a proper verbatim mode made it impossible to know whether he had the wrong lyrics, the information isn't there, or the search itself is failing.
If there are no matches whatsoever, then tell me there are no matches. If there are partial matches, tell me how close they are to my search terms: you might match all but one or two words, and so on. These are the kinds of useful features I'd expect from search (a rough sketch of what I mean follows below).
The usefulness of AI, to my mind, might lie more in interpreting the questions than in generating the answers.
If I were Google I'd try something like using genAI to rephrase the question, extract keywords, and so on, to enhance search. But then again, I'm putting myself in the position of "how do we make search better and more accurate," and that's simply not the position Google finds itself in.
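For what it's worth, the partial-match reporting I described above isn't exotic. A minimal illustration in plain Python (standard library only, no AI involved):

```python
import difflib

def match_report(query: str, document: str) -> dict:
    """Say which query terms a document matched and which are missing,
    instead of silently substituting "related" results."""
    doc_words = set(document.lower().split())
    matched, missing = [], []
    for term in query.lower().split():
        # Accept near matches (e.g. typos) via a similarity cutoff.
        if difflib.get_close_matches(term, doc_words, n=1, cutoff=0.8):
            matched.append(term)
        else:
            missing.append(term)
    total = len(matched) + len(missing)
    return {"matched": matched, "missing": missing,
            "score": len(matched) / total if total else 0.0}

# e.g. for the lyrics case: report which lyric words were found on a page
# and which weren't, rather than returning vaguely "related" pages.
```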
I live in India, haven't been to the US in 17 years, don't use a VPN.
Yet when I search for "St Petersburg Airport", it directs me to the airport for the city in Florida, and not to the much older, much more densely populated, and much more culturally significant city in Russia (a city where I HAVE been once - and Google would likely know that).
And I noticed my 2021 Macbook was unable to scroll to the bottom of the page. It was just chugging like crazy, and I couldn't see the content I wanted because of it.
Then I remembered I didn't install AdBlock on this browser. I installed it and went back to the page, and OK now I can actually use the website.
---
What concerns me is not so much that there are lots of low quality sites looking to scrape pennies from Google or whatever.
It's that this kind of software development culture is normalized -- where government agencies and hospitals are sending data to Google and Facebook.
I would be interested to hear from somebody who works in those areas what the incentives are.
If you are working on the central park website, why are you adding ads to it?
Isn't it funded by the government?
Specifically I opened up dev tools and I see like 297 blocked requests to
I see ads for Lowe's Hardware Memorial Day sales and such. OK fine, but why are they locking up my computer and making the site unusable?
(Also does anyone remember the days when Google was "morally" against invasive image ads and irrelevant ads? They preached non-invasive and relevant, helpful ads. Those days seem SO far away now ...)
>Honestly it's becoming impossible to find information on the web.
It's because everyone is trying to sell you something, no matter how irrelevant it is to you. The cost of transmitting information at scale is effectively zero, and genAI makes the cost of generating information at scale effectively zero too. It's noise at scale crowding out the non-scalable things that are of actual value to humans. Something has to give.
I don't think that's the only factor. There is a strong impetus toward obfuscation, both from the direction of a search engine (more clicks, more "interaction"), and from corporate & govt interests (the less people know, & know to be actually true, the better).
Now, with verbatim mode, if you have a typo they no longer show the number of results up front, so you often don't notice and only get sources with the same typo, furthering the impression that Google's results suck.
"Google hides search results count under tools section"
The end of the internet came sooner than expected. I wonder how long it took on alpha centurion.
It’s not obvious ‘attention is all you need’ would have been a public disclosure by a corporation in parallel worlds. Usually such inventions are buried.
It went from search to making up utterly unusable dreck. It's not helpful; it's machine-generated nonsense that's more expensive to operate than search.
It occurs to me that I don't know who Alex Northstar is; he could quite simply be an AI creation as well. I now understand why the greybeard dream is to move to an off-grid cabin.
FWIW those pull-out answers were always dubious. For instance, Google has pulled answers out of forum threads that were plain wrong; even the original poster conceded that later in the thread, but Google just didn't get it.
Despite some evidence to the contrary, I believe most people aren't that stupid and the "average person" isn't actually typing in these queries; these are marginal cases played up for meme value.
It highlights the underlying problem, which is that Google has built up trust over 20 years that they will return the best results from the web. Now they undermine that trust by shoving AI results at the top and calling it answers.
It doesn't take much imagination to think of non-meme questions that will propagate wrong information.
"Fake info from trusted sources" isn't a hypothetical issue. When they changed the law that required TV news to be factual we quickly headed down the path of Fox News and MSNBC. They are so effective precisely because Boomers grew up fully trusting news sources.
You can argue that it is not a problem and that people know the difference, but we have plenty of real-life proof to the contrary.
More consequential queries probably have better safeguards; this hullabaloo is about long-tail nonsense that doesn't matter, kinks to be ironed out.
And this is why I subscribe to Kagi. They also have AI based summary, but you have to invoke it manually on a specific search result (in which case you know where the info is coming from).
I feel like we're on a race to the bottom. All of Silicon Valley and beyond have big FOMO on the AI hype train and any and all use of AI pleases investors. I guess we'll see how it all pans out in a few years.
One thing is for sure, generative AI and LLMs open up a whole can of worms in terms of disinformation and information noise on the Internet. The signal to noise ratio will greatly shift with these new initiatives.
It also stings that I can't pay by PayPal, so not only would I have to pay the currency-exchange margin to US dollars, but my bank also charges an out-of-currency fee.
I understand that they're probably focusing on US market growth first, but the UK and EU surely have potential too.
Oh! That's at least one reason why people say they need to pay with PayPal. I've seen people saying they won't buy things until PayPal is supported, but for the longest time I couldn't figure out how anyone would be in a situation where paying with PayPal would be essential. Thanks for explaining that.
I can empathize with this. If not for my decent salary by European standards, I’m not sure I would prioritize paying for search when there are free options available.
It would be nice if more services used purchasing-power-parity-based pricing so more people could benefit.
I have been struggling to arrange lawn care for an out-of-state relative while they're incapacitated. While Google search populated some tables in its search results with a bunch of lawn care companies saying the address was in their service area, each and every company I contacted said it was not and asked where I got the info.
Visiting each company's site showed they didn't serve this town/address so it's really not clear how/why Google returned those results. In some cases the address was 20 miles from the nearest town served. Whether or not due to AI hallucination, or just general Googlenshittification - not a good result.
They could really just present it as what it is, a _search result_ summary, not an answer summary. It's not like satirical comments should disappear from search results. As it's presented now it feels very flawed, but it doesn't need to be.
I get "it's not good enough". I get "even occasional mistakes are unacceptable in my field". I get "it's not really intelligent" even though I think that's a question of terminology.
I don't get "worse than crypto" or "unable to find a single use".
I didn't say it's worse than crypto. Just the biggest waste of resources by tech companies.
Crypto never got this kind of investment from the tech companies, but they have already diverted millions of hours of work from other projects to shoehorn GenAI garbage everywhere they can.
Similarly, I didn't say I wasn't able to find a use. Just not a legitimate use. Something that is a net positive to society and couldn't be done better without GenAI.
> I didn't say it's worse than crypto. Just the biggest waste of resources by tech companies.
Currently that comment says "the biggest waste of resources of the tech world. Even before crypto." — even if it was not your intent, you wrote something which absolutely can have the meaning I read.
> Just not a legitimate use
No true Scotsman. For any value of X, at least one person will claim that X is not "legitimate".
I, like everyone else here going "WTF?" at you, find it totally legitimate despite its flaws.
I literally just used OAI to walk through my financial plan and suggest my next steps towards retirement. It pointed out all sorts of things like conversion ladders, my cash on equity for my rental properties, average cost of health insurance and more which would have taken me a ton of time to research on my own.
I don't agree, but the current usages aren't exploiting its full potential.
GenAI is the best that we currently have for parsing natural language, in a way that is multilingual, tolerant 2 tipoz n slang, and forgiving of swearing users. It helps by being a bridge between unstructured and structured data.
It's actually terrible at parsing natural language. So bad that on a long enough text (or even short if you're unlucky) it will 100% of the time come up with tokens that are not present in the original text.
This sort of rhetoric is exactly the same as with crypto: "yeah, OK, it's bad now, but think of the future".
Sorry you have had such bad experiences; we won't be able to convince you, and nobody can see the future, but there are exciting things happening on an amazingly short timescale.
Really? The ChatGPT 3.5 and later models are fairly capable of understanding parts of speech and doing text analysis. I have never yet seen that issue with the more advanced models, although smaller/older ones do tend to imagine things about the text.
Last year I wrote a paper about using LLMs for definition generation for unknown words based on context, and the models did a fairly good job. https://ieeexplore.ieee.org/abstract/document/10346136/ if someone is curious.
I would like to read prompts where the models fail in this way. The field is moving quite fast.
I think one of the techniques underexplored in all the hype is guiding the evaluation process depending on the context. E.g. if you're generating code, it has to satisfy the parser for the given language; if the token is unsatisfactory, throw it out and try another one (see the sketch below). Thought chains could be generated in a similar way (you can do so with special tokens; see "Recursion of Thought").
But yeah overall GenAI tends to remain hype-over-substance.
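A minimal sketch of that parser-guided idea, assuming a hypothetical model_step(text) that yields candidate next tokens best-first (it stands in for any LLM; no real API is implied):

```python
import codeop

def parser_guided_generate(model_step, prompt: str, max_tokens: int = 64) -> str:
    """Keep only candidate tokens that can still extend toward valid Python."""
    text = prompt
    for _ in range(max_tokens):
        for candidate in model_step(text):  # hypothetical: best-first tokens
            if plausible_python(text + candidate):
                text += candidate  # accept the first candidate that parses
                break
        else:
            break  # no candidate satisfied the parser; stop generating
    return text

def plausible_python(source: str) -> bool:
    # codeop.compile_command returns a code object when the source is a
    # complete statement, None when it is incomplete but could still
    # become valid, and raises an error when it already cannot be valid.
    # (Crude: a real system would track parser state incrementally.)
    try:
        codeop.compile_command(source)
        return True
    except (SyntaxError, ValueError, OverflowError):
        return False
```

The catch, as the reply below notes, is latency: every rejected candidate costs another model call.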
The main blocker for this is that LLMs are slow. Imagine waiting 3 seconds for your output in a pretty happy case, only for it to be invalid, and having to wait an extra 3 seconds, again with non-negligible chances of it being wrong.
We envisioned doing this for an SQL query generator at work but with our constraints a single query already takes 15 seconds.
While there is definite hype, and buzz among executives to jump on the bandwagon, from my experience and perspective there are massive amounts of value to be obtained. We are doing work with LLMs that would previously have taken teams of people to do. There are many legitimate uses, but they are overshadowed by the hype.
I think the way the corporates are applying it are not very useful, specifically the Googles and Microsofts of the world. I am overall bullish on the niche applications of LLM though. Google and Microsoft are just throwing it at everything and I don't think much of it is sticking. The Google search experience has definitely downgraded with their AI implementation. Kagi on the flip side has done a much better job imo, it does not get in the way and it is generally answering the question I asked.
I am bullish in areas outside of RAG/chatbots that gain so much of the hype. Classification, extraction, summarization and similar natural language workflows. My work is net positive but again anecdotal to my corner of the world and the others I interact with. Could be different from your distant side of the world.
To me the most clear-cut benefit of AI is automated content moderation. People literally get PTSD from moderating Facebook. It's no different from replacing the most hazardous factory work with robotic automation. You will still need a human in the loop for the more difficult cases, of course, but by any reasonable utilitarian calculation AI content moderation is a win.
(I'm not implying anything about whether or not LLMs are good for Google search.)
> Maybe someday but not with LLMs, which by nature do not understand who's talking, who is being quoted, and who is being falsely quoted.
Exact opposite. Humans don't have time to work out any of those things in practice. Machines do have time, and LLMs do a much better job of those things, given the real limitations on human labour that exist in practice.
The genie has left the bottle. The Large Bullshit Confabulator revolution is here, and either Google does it or someone else will.
Seriously though, Shannon's theory of communication is highly applicable here: AI noise is reducing the available bandwidth (in the entropy sense, not the advertised-Mbps sense) of the internet, because the noise floor is rising faster than error correction advances.
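For reference, this is the standard Shannon-Hartley capacity formula that intuition leans on (nothing AI-specific about it): for bandwidth B, signal power S, and noise power N,

```latex
% Shannon-Hartley: usable capacity C falls as noise power N rises
% relative to signal power S, no matter how big the pipe B is.
C = B \log_2\left(1 + \frac{S}{N}\right)
```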
> I am still unable to find a single legitimate use.
Really? Nothing? Not even really clear-cut use cases that are already in production at a bunch of companies like rapid document templating or surfacing esoteric yet relevant internal documents and knowledge?
Those use cases alone have saved seven-figure sums at companies where I've seen them implemented. And those savings allowed the positions to be repurposed in more useful ways, e.g. threat hunts rather than document prep.
---
Lots of adjusted goalposts in the comments below. The fact that I can simply ask an LLM to bake me a template and adjust as needed — or ask it to fetch me an internal policy document describing compliance requirements I might need for a new mobile app — and get either of these answers in literally seconds makes them the best tool for the job by a wild margin. And the fact that I'm seeing hiring managers rework their positions for different roles as a result of implementing GenAI to obviate mundane job responsibilities supports this.
Many practitioners across many fields have their blinders on with regards to the risk of disruption to their own disciplines. Surprised I'm seeing those blinders on here too.
It's true, LLM AI is really great at creating content as well as spam and other "slop". While this is useful in some circumstances, I think it's a net negative in the long run. Once enough slop is created, will AI be able to sort through it?
"Slop" is a good way to put it. AI seems really useful for creating a huge sludge of "filler" art and words on a particular topic. The quality may or may not be good, but it can produce endless quantity of this slop.
So, if you need to fake a blog by generating 10,000 posts on a topic that are each, say, 1,000 words long, AI is great. If you need to make a web forum that, at a surface level, looks active, AI can generate all the fake posts you need. If you have something you can say in one sentence, but need to make your text 10,000 words in order to appease an algorithm or "look credible" then AI slop is the way to pad it out.
The way I see it, many of the "good" uses of LLMs boil down to easily counterfeiting traditionally-trusted indicators of human expertise, time investment, or intelligence.
For example: "Hey LLM, generate a smart sounding cover letter for a job at AcmeCo as a sales representative, highlighting my diligent work ethic and ability to generate new sales."
When the counterfeiting becomes widespread, those writing indicators become debased as receivers start to realize how easy they are to fake.
I wouldn't be entirely surprised if some of those prose-heavy documents started to move (culturally) toward raw bullet points.
> All GenAI is good for at best is prototyping to show what COULD be done if you invested the time to do it without GenAI.
That is the argument used against mechanical freezers replacing natural ice, power looms replacing artisanal weavers, and coffee shops replacing real human baristas with identikit machines.
None of those use cases actually require AI, and each of them is potentially undermined by the tendencies of AI to hallucinate, or as yet unresolved legal questions about the copyright status of AI content.
Can you name a use case for which AI was the only or best possible solution?
Very quickly finding textual needles in long-document haystacks. You generally can't do that with a plain text search unless you already know what the needle looks like, and doing it as a human is expensive; see Terry Pratchett's experience with German soup adverts. (A sketch of what I mean follows below.)
Real-time translation within the price constraints of tourists.
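A minimal sketch of the needle-in-a-haystack case, assuming a hypothetical embed() function standing in for whatever text-embedding model is available (I'm not naming a specific one):

```python
import numpy as np

def find_needles(query: str, passages: list[str], embed, top_k: int = 3) -> list[str]:
    """Rank passages by semantic similarity to the query.

    embed(text) is hypothetical: any function mapping text to a fixed-size
    vector. Cosine similarity then matches by meaning, which plain text
    search can't do when you don't know the needle's exact wording.
    """
    q = embed(query)

    def cosine(v: np.ndarray) -> float:
        return float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))

    return sorted(passages, key=lambda p: cosine(embed(p)), reverse=True)[:top_k]
```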