Neither can any LLM. It’s not what they’re designed to do.
There’s no need to worry about a speculative paper clip maximizer turning the world into grey goo. That’s still science fiction.
The real harms today are much more banal.
Is it reasonable to believe that LLMs, even if they scale to hundreds of billions of tokens, could emulate reasoning? No. They literally cannot.