>it will not lead to human-level AGI that can, e.g., perform sensorimotor reasoning, motion planning, and social coordination.
That seems much less convincing given that current LLM approaches have overturned a similar claim plenty of people would have made about this technology as of a few years ago. Replace the specifics here with "will not lead to human-level NLP that can, e.g., perform word-sense disambiguation (WSD), stemming, pragmatics, named-entity recognition (NER), etc."
And then people who had been working on those problems and capabilities practically woke up one morning to find that many of their career-long plans for addressing just some of those research tasks were obsolete, and that they had to find something else to do for the next few decades of their lives.
I am not affirming the opposite of this author's claims, merely pointing out that it's early days for evaluating the full limits.
But one of the central points of the paper/essay is that embodied AGI requires a world model. If that is true, and if it is true that LLMs simply do not build world models, ever, then "it's early days" doesn't really matter.
Of course, whether either of those claims is true is a difficult question to answer; the author spends some effort on both, quite satisfyingly to me (with affirmative answers to each).