The problem I see with A.I. research is that it's spearheaded by people who think intelligence is a total order. In all my experience, intelligence and creativity are partial orders at best: there is no uniquely "smartest" person; rather, there are a variety of people who are better at different things in different ways.
This came up in a discussion between Stephen Wolfram and Eliezer Yudkowsky that I saw recently. I generally think Wolfram is a bit of a hack, but one of his first points was that there is no single "smartness" metric on which LLMs are "just getting smarter" all the time. They perform better at some tasks, sure, but we have no definition of abstract "smartness" that would allow such a ranking.
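To make the "partial order" point concrete, here's a minimal Python sketch; the people, skill axes, and scores are invented purely for illustration. Under Pareto dominance, two people can each be better at different things, so neither outranks the other and no total "smartness" ranking exists, even though some pairwise comparisons still hold:

    # Minimal sketch of intelligence as a partial order. All names,
    # axes, and scores here are made up purely for illustration.
    from typing import Dict

    Skills = Dict[str, float]

    def dominates(a: Skills, b: Skills) -> bool:
        # `a` dominates `b` if it is at least as good on every axis
        # and strictly better on at least one (Pareto dominance).
        return all(a[k] >= b[k] for k in b) and any(a[k] > b[k] for k in b)

    alice = {"math": 9, "writing": 4, "navigation": 6}
    bob   = {"math": 5, "writing": 8, "navigation": 6}
    carol = {"math": 4, "writing": 3, "navigation": 5}

    # Neither alice nor bob dominates the other: they are incomparable,
    # so there is no well-defined "smarter" of the two.
    print(dominates(alice, bob), dominates(bob, alice))  # False False

    # Some comparisons still hold: bob beats carol on every axis,
    # the "consistently worse at most things" case.
    print(dominates(bob, carol))  # True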
You're good at some things, and not others, because there is only one copy of you, with limited time and bounded storage.
What could you be intelligent at if you could copy yourself countless times? What could you be good at if you were a world-spanning set of sensors instead of a single body's worth of them?
"Body" here doesn't need to mean something like a human body, nor one that exists in a single place.
Humans all have similar brains; different hardware and algorithms have far more variance in strengths and weaknesses. At some point you bump up against the theoretical trade-offs of different approaches. It is possible that systems will be better than humans in every way, but they will still have different scaling behavior.
At a certain point, intelligence becomes a loop that improves itself:
"Hmm, oral traditions are a pain in the ass; let's write stuff down."
"Hmm, if I specialize in particular things and don't have to worry about hunting my own food, I get much better at them."
"Hmm, if I modify my own genes to increase intelligence..."
Also note that intelligence applies resource constraints against itself. Humans are a huge risk to other humans, so being less intelligent than a smarter human can constrain one's resources.
Lastly, AI is in competition with itself: the best, "most intelligent" AI will get the most resources.
Thanks for the comment, it triggered a few thought experiments for me.
For example, if you focus on oral traditions, you experiment and create more poems, songs, etc. If you focus on preserving food, you discover jams, dried meat, etc.
Is it useful to focus on everything, i.e. to aim for a global optimum? Is it even possible?
Also, regarding competition and evolution: what stopped humans from getting more capable brains? Is it just resource constraints, like not having enough calories (we don't carry a mini nuclear reactor with us)? Or are there other, more interesting causes?
I don't agree with your premise at all, so I don't think the rest follows from it either. What evidence or reasoning can you offer to get me to accept that premise?
Huh? Can you cite _one_ major AI researcher who believes intelligence is a total ordering?
They'd definitely agree it's a partial ordering. There's no "smartest" person, but there are plenty of people who are consistently worse at most things. But "smartest" is really not a concept that I see bandied about.