
We will never achieve AGI, because we keep moving the goalposts.

SOTA models are already capable of outperforming any human on earth in a dizzying array of ways, especially when you consider scale.

Humans also produce nonsensical, useless output. Lots of it.

Yes, LLMs have many limitations that humans easily transcend.

But few if any humans on earth can demonstrate the breadth and depth of competence that a SOTA model possesses.

Relatively few (probably fewer than half) are casually capable of the level of reasoning that LLMs exhibit.

And, more importantly, as anyone who was in the field when neural networks were new will recall, AGI never meant human-level intelligence until the LLM age. It just meant that a system could generalize to one domain from knowledge gained in other domains, without supervision or explicit programming.



> But few if any humans on earth can demonstrate the breadth and depth of competence that a SOTA model possesses.

Most humans can count the occurrences of letters in a word. The word "competence" here is doing quite a bit of work. I think most people understand competence to mean more than encyclopedic knowledge paired with very limited reasoning capability.
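As a toy illustration (just a sketch, using the well-worn "strawberry" example), the counting that famously trips up token-based models is a couple of lines of code:

    from collections import Counter

    # Counting letters is trivial in code, and easy for most humans,
    # yet token-based LLMs have famously fumbled exactly this.
    counts = Counter("strawberry")
    print(counts["r"])  # prints 3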

> AGI never meant human level intelligence until the LLM age. It just meant that a system could generalize one domain from knowledge gained in other domains without supervision or programming.

I think it's probably correct to say that many people who seriously studied the problem had a larger notion of AGI than the layperson who only ever talked about the Turing test in the most basic terms. Also, I don't think LLMs have even convincingly demonstrated a great ability to generalize.

They're basically really great natural language search engines but for the fact that they give incorrect but plausible answers about 5-10% of the time.


>> they give incorrect but plausible answers about 5-10% of the time.

This describes most of the human population as well. Why do we expect machines to be more accurate and correct than humans before we say they are at parity, when they are clearly as much savant as idiot? It’s a strange bias.


> SOTA models are already capable of outperforming any human on earth in a dizzying array of ways, especially when you consider scale.

So why are so many people still employed as e.g. software engineers? People aren’t prompting the models correctly? They’re only asking 10 times instead of 20? They’re holding it wrong?


Long-form engineering tasks aren’t doable yet without supervision. But I can say that in our shop, we won’t be hiring any more junior devs, ever, except as interns (who in my region are unpaid) or because of some extraordinary capabilities, insights, or skills. There just isn’t a business case for hiring junior devs to do the grunt work anymore.

But the vast majority of work done in the world is not within an order of magnitude of the complexity or rigor required by long-form engineering.

While models may not outperform an experienced developer, they will likely outperform her junior assistant, and a dev using AI effectively will almost certainly outperform a team of three without it, in most cases.

The salient fact here is not that the model is outperformed by the human in one narrow field of extraordinary capability, but rather that the model can outperform that dev in 100 other disciplines, and outperform most people in almost any cerebral task.

My claim is not that models outperform people in all tasks, but that models outperform all people at many tasks, and I think that holds true with some caveats, especially when you factor in speed and scale.


What does junior or senior have to do with it? I would think a smarter junior will run circles around a dumber senior engineer with LLM autocomplete.


If you’re hiring dumb senior engineers, you’re holding it wrong lol. Using LLMs is a lot like delegating to a team from a skills perspective, so it favors extensive domain knowledge. You don’t just commit whatever it writes, just as you wouldn’t commit what a junior dev writes without scrutiny. Experience makes that scrutiny more valuable and effective.


> We will never achieve AGI, because we keep moving the goalposts.

I think it's fair to do it to the idea of AGI.

Moving the goalpost is often seen as a bad thing (like, shifting arguments around). However, in a more general sense, it's our special human sauce. We get better at stuff, then raise the bar. I don't see a reason why we should give LLMs a break if we can be more demanding of them.

> SOTA models are already capable of outperforming any human on earth in a dizzying array of ways, especially when you consider scale.

Performance should include energy consumption. Humans are incredibly efficient at being smart while demanding very little energy.
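As a rough back-of-envelope (both figures below are commonly cited approximations, not measurements):

    # Rough energy comparison; figures are assumptions for illustration.
    brain_watts = 20   # frequently cited estimate for a human brain
    gpu_watts = 700    # approximate TDP of one NVIDIA H100 under load
    print(gpu_watts / brain_watts)  # ~35x per device, before multiplying
                                    # by the many GPUs serving a SOTA model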

> But few if any humans on earth can demonstrate the breadth and depth of competence that a SOTA model possesses.

What if we could? What if education mostly stopped improving in 1820 and we're still learning physics at school by doing exercises about train collisions and clock pendulums?


I’m with you on the energy and limitations, and even on the moving of goalposts.

I’d like to add that I think the definition of AGI has jumped the shark, though, and is already at ASI, since we expect our machine to exhibit professional-level acumen across such a wide range of knowledge that it would resemble the top 0.01 percent of career scholars and engineers, or even exceed any known human capacity just due to breadth of knowledge. And we also expect it to provide that level of focused interaction to a small city of people all at the same time, and to deliver that knowledge 10,000 times faster than any human can.

I think definitionally that is ASI.

But I also think the AGI that “we are still chasing” focus-groups a lot better than ASI, which is legitimately scary as shit to the average Joe, and which seasoned engineers recognize as a significant threat if controlled by people with misaligned intentions.

PR needs us to be “approaching AGI”, not “closing in on ASI”, or we would be pinned down with prohibitive regulatory straitjackets in no time.


As many regulatory measures as possible seems good. These things are not toys.


Yeah, it’s definitely some kind of new chapter. It’s reducing hiring and will drive unemployment, no matter what people say. It’s a poison pill in a way, since no one will hire junior staff anymore. Reliance on AI will skyrocket as experienced staff age out and no replacements come up through the ranks.



I may have missed the target of this reference, but I enjoyed it nonetheless. The CD definitely seems to be gaining traction in the last decade.



