
I don't think the commenter to whom you're replying is any more aggressive than, e.g., this one: https://news.ycombinator.com/item?id=46668988

It's unfortunately the case that even understanding what AI can and cannot do has become a matter of, as you say, "ideological world view". Ideally we'd be able to discuss what's factually true of AI at the beginning of 2026, and what's likely to be true within the next few years, independently of whether the trends are good for most humans or what we ought to do about them. In practice that's become pretty difficult, and the article to which we're all responding does not contribute positively.


>any more aggressive than, e.g., this one

Frequency is important too.

>independently of whether the trends are good for most humans or what we ought to do about them.

This whole article is about the trends and whether they are good for humans. I was pleasantly surprised that this was not yet another argument over "AI is (not) good enough", since people at this point have dug in on that. I don't think it's too late to talk about how we as humans can manage Pandora's box before it opens.

Responses like this, which dismiss the human element, seem to want to isolate themselves from societal effects for some reason. The box will affect you.


I agree with the thrust of your comment, but I think the comment to which you're replying was referring to Scott Alexander's "problematic past with reactionaries and race science", not Adams'.

https://gist.github.com/segyges/f540a3dadeb42f49c0b0ab4244e4...

(To be honest, I would love for someone to write an essay engaging with Alexander in something of the spirit in which he engages with Adams here; he writes good things, including IMHO this eulogy, but also a certain amount of garbage, and I do not claim to be wise enough to always distinguish the two on my own.)


The article does a pretty lazy* job of defending its assumption that "solving really gnarly, abstract puzzles" will remain beyond AI capabilities indefinitely, but that is a load-bearing part of the argument, and Doctorow does try to substantiate it by dismissing LLMs as next-word predictors. That description is roughly accurate at some level of reduction, but it has not helped anyone predict the last three years of advances, so it seems unlikely to be a helpful guide to the next three.

The other argument Doctorow gives for the limits of LLMs is the example of typo-squatting. This isn't an attack that's new to LLMs, and while I don't know whether anyone has done a study, I suspect that as of January 2026 a frontier model is no more susceptible to it than the median human, and perhaps less; certainly Claude is in general less likely to make a typo than I am. There are categories of mistakes it's still more likely to make than I am, but this example already looks out of date, which isn't promising for the wider argument.

*to be fair, it's clearly not aimed at a technical audience.


> AI is a statistical inference engine. All it can do is predict what word will come next based on all the words that have been typed in the past.

If we keep saying this hard enough over and over, maybe model capabilities will stop advancing.

Hey, there's even a causal story here! A million variations of this cope enter the pretraining data, the model decides the assistant character it's supposed to be playing really is dumb, human triumph follows. It's not _crazier_ than Roko's Basilisk.


> AI is a statistical inference engine. All it can do is predict what word will come next based on all the words that have been typed in the past.

Ironically, that is also how humans "think" 99.9% of the time.


I don't think you write a eulogy this long about someone unless you have something more than a simple dislike or even hatred for them.

It's way too far into the Trump administration for people to still be responding to its authoritarian moves by finding Biden administration actions that sound vaguely similar if you don't think too hard, and then pretending nothing new is going on here. (Even if nothing new were going on, "that's nothing" would be a pretty weird inference to draw from a comparison to something that clearly upsets you. Also, an article is a "piece", not a "peace".)

Who is Lonsdale referring to as the "good guys"?

(It's one thing to ask people to be fair in responding to your actual comment and not a strawman. It's another to ask us to pretend we were born yesterday. We do in fact have external sources of information about Lonsdale's political allegiances.)


This is interesting partly because Alex Karp occasionally claims (or at least used to claim) to be a socialist when it was inconvenient or uncool to be defined as a standard-issue right-winger. I never thought that meant much myself - any more than it's meaningful for Lonsdale to define himself as against "evil authoritarian forces" here while advocating the murder of his political opponents - but I know people who took him seriously for some reason.

It's good to have these guys out in the open as Pinochet types, though. Silver lining of the Trump era.


I have nonspecific positive associations with Dan Wang's name, so I rolled my eyes a bit but kept going when "If the Bay Area once had an impish side, it has gone the way of most hardware tinkerers and hippie communes" was followed up by "People aren’t reminiscing over some lost golden age..."

But I stopped at this:

> “AI will be either the best or the worst thing ever.” It’s a Pascal’s Wager

That's not what Pascal's Wager is! Apocalyptic religion dates back more than two thousand years and Blaise Pascal lived in the 17th century! When Rosa Luxemburg said to expect "socialism or barbarism", she was not doing a Pascal's Wager! Pascal's Wager doesn't just involve infinite stakes, but also infinitesimal probabilities!

The phrase has become a thought-terminating cliche for the sort of person who wants to dismiss any claim that stakes around AI are very high, but has too many intellectual aspirations to just stop with "nothing ever happens." It's no wonder that the author finds it "hard to know what to make of" AI 2027 and says that "why they put that year in their title remains beyond me."

It's one thing to notice the commonalities between some AI doom discourse and apocalyptic religion. It's another to make this into such a thoughtless reflex that you also completely muddle your understanding of the Christian apologetics you're referencing. There's a sort of determined refusal to even grasp the arguments an AI doomer might make, even while writing an extended meditation on AI, of which I've grown increasingly intolerant. It's 2026. Let's advance the discourse.


I'm not sure I understand your complaint. Is it that he misuses the term Pascal's Wager? Or more generally that he doesn't extend enough credibility to the ideas in AI 2027?


More the former. Re the latter, it's not so much that I'm annoyed he doesn't agree with the AI 2027 people; it's that, while he spends a few paragraphs talking about them, he doesn't appear to have bothered trying to understand them.


seems to be yes and yes

Pascal's Wager isn't about "all or nothing"; it's about a small chance of an infinite outcome, which makes narrow-minded strategizing wack

and the commenter is much more pro-AI 2027 than the article's author (and I have no idea what it even is)


It's a very Silicon Valley thing to drop things like Pascal's Wager, the Jevons paradox, etc. into your sentences to appear smart.


How do you imagine existing benchmarks were created?

