
LLMs, kind of like Bill Bryson's books, are great at presenting "information" that seems completely plausible, authoritative, and convincing to the reader. But when you actually do know the truth about a subject, you realize how completely full of crap they too often are. And somehow after being given a patently counterfactual response to one query, we just blindly continue to take their responses to other queries as having value.


At the moment, I find them to be the perfect tool for getting started with learning about something. I don't expect them to tell me everything I need to know, or even to be right, but if I ask ChatGPT or another LLM a question about a subject I'm not familiar with, it will at least use a bunch of terminology that wasn't in my vocabulary before I started.

For example, I just bought a 1990 Miata and I want to install a couple of rocker switches in the dash to individually control the pop-up headlights. I have enough circuits knowledge to safely change outlets and light switches, but I didn't know about relays. I asked ChatGPT how to add these switches and it immediately mentioned buying DPDT switches and tying the OEM relay into an SPDT relay. It may have gotten the actual circuit diagram completely wrong, but now I know exactly what to read up on.


Yeah, it's definitely been terrific for figuring out terminology or "the right word" to use for things.


Or to put it another way, it's great at filling in the "don't know what you don't know" gap.


Now let me ask you the more fundamental question... did this do you any better than if you had searched a youtube video or some other source? Would this video from 2016 be relevant? This may not be the right video, but my approach for DIY in the last 10-20 years has been to hit YouTube up. https://www.youtube.com/watch?v=77q9KtjnNTU

I'm trying to gauge whether LLMs are truly expanding our capabilities in a fundamental way or are really just another way to search for answers without going to google or a library.


> did this do you any better than if you had searched a youtube video or some other source?

Yes, because when I searched youtube for "miata wink mod" almost all of the results were for kits for microcontrollers which I wanted to avoid because I just want to control the motors with switches. Now I know to include "SPDT" in my search and I can find more targeted videos that add an override using switches.

The video you linked is relevant but doesn't really match what I want to do. The NA Miata has a motor for each pop-up headlight. There's a dedicated button that controls the headlights popping up and down, but the light switch on the turn signal overrides this if the lights are on, i.e. the relay is a DPDT that ORs the two signals.

I want to add a rocker switch for each light where the signal from the rocker switch overrides the behavior of the existing relay. If a given DPDT rocker switch is in neutral, the signal from the relay is used, but if the rocker switch is engaged in either direction, the motor moves in that direction. ChatGPT did explain a lot about the default behavior and included a lot of the terminology that helped me confirm that. Of course, if I already knew about relays then I wouldn't have needed any of this, but I didn't.
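
Roughly, the per-light behavior I'm after is this (just a logic sketch of the override, not a wiring diagram, and the names are only mine for illustration):

    type Direction = "up" | "down";
    type Rocker = "up" | "neutral" | "down";

    // Per-light override: the rocker wins when it's engaged,
    // otherwise the existing OEM relay signal passes through.
    function motorCommand(rocker: Rocker, relaySignal: Direction): Direction {
      return rocker === "neutral" ? relaySignal : rocker;
    }

    // e.g. a rocker held "up" forces that light up even if the relay says "down"
    console.log(motorCommand("up", "down")); // "up"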


One challenging thing about searching in new domains is that you don't necessarily have the vocabulary to ask the right question or use the right terms to unlock the secret knowledge. If I type my dryer symptoms into an LLM and it tells me that the drum rollers are likely bad and need to be replaced, I can take that information to YouTube or Google and get more targeted advice. The LLM can also, and often does, ask leading questions to help narrow down the list of possible options.


They're a slightly better search, because web search has degraded. They also provide needed vocabulary almost directly, which accelerates search.

I would say that for big decisions (financial, work projects, health, etc.) you really need the sources and you need to double-check things, but maybe 70% of my searches are closer to trivia than to life-changing things, so LLMs are obviously very good for that. And frequently the stuff I search for is trivially verifiable, so that's also good.

The bigger worry is that the general public doesn't have the mental immune system to actually know what to look for and especially to validate the LLM answers, so we're in for a world of hurt.

We will soon have some extremely brainwashed individuals.


> did this do you any better than if you had searched a youtube video or some other source?

Yes. With an LLM it is easy to explore a domain from the ground up, and it is interactive. You don't have to wait for some random guy in a video to get to the point; you ask questions and consume the information at your own speed.

When I do this, I switch constantly between a search engine and an LLM. I copy the LLM's words into the search box, and I ask the LLM questions about things I've found. That is the way to explore things now; search engines alone are not. Not anymore. At the very least you need to ask the LLM for some starting points, because when you search Google you get results that are LLM slop: the same thing you could get from the LLM, but not interactive, so it can go on for multiple screens of wall-of-text while delivering exactly zero useful information.

> I'm trying to gauge whether LLMs are truly expanding our capabilities in a fundamental way or are really just another way to search for answers without going to google or a library.

They're just another way to search. And you should strike Google; it doesn't work anymore. Fifteen years ago Google was good enough, but now it is useless.


For obscure things, it's often very hard to find videos like that, and the videos vary greatly in quality. ChatGPT helped me fix my washing machine and my dryer yesterday with perfect advice, walking me through every step. In the past, those are both projects I would've made a half-assed attempt at before throwing up my hands and calling someone.


I wonder if that can be attributed to search engines and search fields on various websites being intentionally worsened in order to push specific content and ads.

Google search and Youtube search used to almost always get you what you were looking for. Now you have to fight with it to maybe get what you are looking for because of all the sponsored ads.

Search used to be a nearly solved problem.


I really dislike YouTube on mobile for tutorials because the UI is clunky compared to desktop. The information is locked into video frames and audio that are hard to search through, and the mobile clients aren't rich enough to search transcripts or do object search through frames.

I much prefer static web pages and text which is why I reach for the LLM hammer.

The way I see the two is as complements. A YouTube video of someone doing something is rich with information, but it's slow to process. An LLM prompt is fast but unreliable. Sometimes the information I'm looking for isn't on the Internet at all, and I'm actually looking for a plausible hallucination so I can start from somewhere. Tradeoffs.


Search hasn't worked for years now.


Completely unrelated to any LLM usage, but welcome to the world of NA Miata ownership! I think you'll find that with just general maintenance it'll treat you very well -- my '91 is the most reliable car in the drive, and by far the most whimsical. (I just got back from a Miata errand trip in the pouring rain. Why did I drive the Miata? Winter is very soon, and it gets put away for ~3 months, so at this time of year every possible trip is a Miata trip!)


Thanks! I've been looking for a while but couldn't find one that was in decent shape for less than $10k. Thankfully for some reason people shy away from RHD cars and I snagged a 1990 Eunos Roadster for $7700 on C&B. I'm in NJ and sadly it seems like that week was the last week that would've been a decent week to drive with the top down. I may still try and take it out but I'm definitely going to be bundled up.


This weekend I stumbled upon a cars and coffee in Fremont. Was expecting a wide variety of cars, and was surprised to see instead all Miatas.


I don't quite disagree, but this comparison is typically unfair, because when you really know about a subject you tend to ask way more difficult questions than you do about other subjects, so of course the LLMs are gonna struggle more. If you ask really basic questions they will regurgitate well-known bachelor-level knowledge and look good. What do I know about biology anyway? About silos for grain storage? Any passable answer is enough to wow me on those topics. But on the topics I really know about, I never ask the basics.


It's a valuable but scary experiment to query an LLM on basic subject matter in a field that you know a lot about. Ask those basic questions first.


I think it is truly hilarious that you brought Bill Bryson into this discussion.


I was curious about exploring the motivations of a character (specifically Linter in The State of the Art), so to start off (and to bring existing understanding into the context) I asked a question about another character (Diziet Sma)... and ChatGPT got things wrong...

The chat is https://chatgpt.com/share/691266fa-c76c-8011-876c-027206abd2... if one is curious. I continued a bit to see what else it got right and wrong.

The thing is, if you don't know the story or the books mentioned, it's perfectly plausible that what was written is correct. And while a good bit of it is... maybe... the fact that it got material facts wrong means that if it's working from that, then nothing it produces is based on the correct information.

I've known that ChatGPT is full of crap (and have experienced it in other chats).

It can be a good tool to augment some capacities, but its exploration of ideas based on facts and reality is often (at best) flawed, and if one tries to build upon those flaws and adds in one's own misconceptions, then the output is even more questionable.


> And somehow after being given a patently counterfactual response to one query, we just blindly continue to take their responses to other queries as having value.

They have value because they are very fast, concise, and right "often enough."

People used to make your criticism about Wikipedia (and occasionally still do).

All of the following are true:

1. They regularly make errors

2. They require more caution to use effectively than most people exercise.

3. They have tremendous value.


> like Bill Bryson's books, are great at presenting "information" that seems completely plausible, authoritative, and convincing to the reader. But when you actually do know the truth about a subject, you realize how completely full of crap

Wow, I have a couple Bill Bryson books on my reading list, can you share some examples of that?


I read this good breakdown of 'The Mother Tongue' on everything2 some time ago: https://everything2.com/title/The+Mother+Tongue%253A+English...


Hmm. Why should I take this critique as being any more accurate than Bryson, given that the writer says in so many words:

"[...] I - someone who’s far from an expert at linguistics [...]"

The rather sniffy observation about Wikipedia falls very flat as the book was written 10 years before Wikipedia existed!

In fact Bryson wrote his book a good 20 years earlier than this critique so perhaps this huffy person has resources to draw upon that were not available in 1990.

Not that I really expect Bryson's stuff to dot every i and cross every t - he's a humourist.


The writer doesn't claim that Bryson should have consulted Wikipedia, more that the myth that Eskimos have 500 words for snow is so famous that the myth itself has a Wikipedia page dedicated to it. The discussion had been going on a long time when Bryson wrote this book, and I remember well being told this as a child in the '80s. To present as fact what was either a known urban myth or at least the subject of more nuanced discussion (they do, but it's due to how root words are easier to pluralise, not snow per se) is pretty lazy in a non-fiction book.


> Why should I take this critique as being any more accurate than Bryson

Because you have access to various dictionaries and can easily verify it for yourself?

Assuming the quotes from the book are accurate, that's really poor.


Honestly I wouldn't worry about it. He's a wonderful writer, the problem is that he doesn't let reality get in the way of a good story. Just classify them with the rest of the fiction-non-fiction books and enjoy the journey. If you ever find yourself asking "wow is that true?" then it probably isn't.


> LLMs, kind of like Bill Bryson's books

I wonder if maybe Malcolm Gladwell would be a more apt comparison?


Can you give me examples of topics that you know about that LLMs don’t know about?


I read your comment and just see the typical web developer.

I can’t count how many times developers have tried to school me with their expert wisdom. It’s typically garbage, complete Dunning-Kruger. The causes are two-fold.

On one hand, most of these people's capabilities are an inch wide. Maybe they are really good at JSX, but take that away and the wisdom becomes empty hostility. I don't want anything to do with JSX or React, so if that's all you've got you are probably just blowing smoke.

The other cause is no experience at all. For example, somebody might think they are super knowledgeable about WebSockets because they used a package off NPM. They have no idea how it really works, can't understand RFC 6455 even with Cliff Notes, and can't write original code.

If you want to be an expert at least start with your own implementation of the thing you want to be an expert about, but most of the people doing that work can’t program.
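
To be fair, the core of that spec is small enough to try yourself. For instance, the opening-handshake step where the server computes Sec-WebSocket-Accept is just a few lines (a minimal sketch assuming Node's crypto module; the GUID and the sample key are the ones given in RFC 6455):

    import { createHash } from "node:crypto";

    // RFC 6455: the server appends this fixed GUID to the client's
    // Sec-WebSocket-Key, SHA-1 hashes it, and returns the base64 result
    // as the Sec-WebSocket-Accept header.
    const WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11";

    function secWebSocketAccept(clientKey: string): string {
      return createHash("sha1").update(clientKey + WS_GUID).digest("base64");
    }

    // Sample key from the RFC itself:
    console.log(secWebSocketAccept("dGhlIHNhbXBsZSBub25jZQ=="));
    // -> "s3pPLMBiTxaQ9kYGzzhZRbK+xOo="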

I can’t help but wonder if the DK coming out of LLMs is really any worse.


geLLMan amnesia


> But when you actually do know the truth about a subject, you realize how completely full of crap they too often are

The Gell-Mann Amnesia Effect https://en.wikipedia.org/wiki/Gell-Mann_amnesia_effect


Indeed, a couple of years ago I tried to use ChatGPT to clarify some parts of geometric algebra that were confusing to me, and it confidently told me completely useless information which I had to blindly trust. Fast forward a couple of years: I've read more on the subject from trusted sources, and realised everything it was telling me was pure BS.


Similar to (same as?) Gell-Mann amnesia effect.


I've most frequently heard this referred to as “Gell-Mann Amnesia,” and yes, LLMs are fertile ground to find it.



