
This is an illustrative comment for meta reasons, I think. Karpathy's lecture almost certainly doesn't cover the superposition hypothesis (which hadn't yet been proposed for ANNs 8 years ago), or sparse dictionary learning (whose application to ANNs is motivated by the superposition hypothesis). It certainly doesn't talk about the actual specific features found in post-ChatGPT language models. What's happening here seems like the very thing LLMs are often dismissively accused of: you're pattern-matching to certain associated words without really reasoning about what is or isn't new in this paper.
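(For anyone unfamiliar with the term: "sparse dictionary learning" in this context usually means training a sparse autoencoder on a model's internal activations, so that each learned dictionary direction hopefully lines up with one interpretable feature. A minimal sketch of the idea in PyTorch - the dimensions, penalty weight, and random stand-in "activations" are all made up for illustration:

    import torch
    import torch.nn as nn

    class SparseAutoencoder(nn.Module):
        # Decompose d_model-dim activations into d_dict sparse features.
        def __init__(self, d_model, d_dict):
            super().__init__()
            self.enc = nn.Linear(d_model, d_dict)
            self.dec = nn.Linear(d_dict, d_model)

        def forward(self, x):
            f = torch.relu(self.enc(x))  # feature activations, mostly zero
            return self.dec(f), f

    # Toy stand-in for activations; real work would collect these
    # from a language model's residual stream.
    acts = torch.randn(1024, 64)
    sae = SparseAutoencoder(d_model=64, d_dict=512)
    opt = torch.optim.Adam(sae.parameters(), lr=1e-3)
    for _ in range(200):
        recon, f = sae(acts)
        # L2 reconstruction error plus an L1 penalty that pushes
        # feature activations toward sparsity.
        loss = ((recon - acts) ** 2).mean() + 1e-3 * f.abs().mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

The superposition hypothesis is what motivates making d_dict much larger than d_model: the assumption is that the model packs more features into its activations than it has dimensions.)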

I worry this is going to come across as insulting, but that's not my intention. I do this too sometimes; I think everyone does. The point is we shouldn't define true reasoning so narrowly that we think no system capable of it would ever be caught doing what most of us are in fact doing most of the time.



> I worry this is going to come across as insulting, but that's not my intention. I do this too sometimes; I think everyone does. The point is we shouldn't define true reasoning so narrowly that we think no system capable of it would ever be caught doing what most of us are in fact doing most of the time.

Indeed; to me, LLMs pattern-match (yes, I did spot the irony) to system-1 thinking, and they do a better job of that than we humans do.

Fortunately for all of us, they're no good at doing system-2 thinking themselves, and only mediocre at translating problems into a form which can be used by a formal logic system that excels at system-2 thinking.
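To make "translating problems into a form a formal logic system can use" concrete: the hope is that the LLM emits something like the constraints below, and the solver does the actual deduction. A toy sketch, assuming the z3-solver Python package; the puzzle itself is invented:

    from z3 import Ints, Solver, sat

    # "Alice is twice as old as Bob; in five years their ages sum to 43."
    a, b = Ints('a b')
    s = Solver()
    s.add(a == 2 * b)
    s.add((a + 5) + (b + 5) == 43)
    if s.check() == sat:
        m = s.model()
        print(m[a], m[b])  # -> 22 11

The hard part is the translation step (producing constraints that faithfully encode the problem), not the solving, which the solver handles exactly.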


By that reasoning even humans are not thinking. But of course humans are always excluded from such research - if it's human, it's thinking by default, damn the reasoning. Then of course we have snails and dogs and apes: are they thinking? Were the Neanderthals thinking? By which definition? "Moving the goalposts" is too weak a metaphor for what is going on here, where everybody distorts the reasoning for whatever point they're trying to make today. And because I can't shut up, I'll just add my user's view: if it works like a duck and outputs like a duck, it's duck enough for any practical use; let's move on and see what we do with it (like, use or harness or adopt it).


> “By that reasoning even humans are not thinking”

I’m a neophyte, so take this as such. If we can agree that human output is not always the product of thinking, then I’d be more willing to accept computational innovations as thought-like.



