"Intelligence" is a continuous process. Without a continuous feedback loop, LLMs will never be more than a compression algorithm we bullied into being a chatbot.
OpenAI as a mega-organism might be intelligent, but the LLMs definitely are not.
The "compressed capture of semantic relationships" is a new thing we don't have a word for.
Funnily enough, there is a mathematical link between data compression and AGI [1]. I believe a paper circulated some time ago that compared GPT-2 to gzip, with interesting results.
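The flavor of that comparison is easy to sketch: if a compressor models text well, then related texts compress better together than unrelated ones, which you can turn into a crude similarity measure (normalized compression distance). A minimal illustration using only the stdlib, not the exact method from the paper I'm remembering:

```python
import gzip

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: smaller means more similar.
    Uses gzip's compressed length as a rough stand-in for Kolmogorov complexity."""
    cx = len(gzip.compress(x))
    cy = len(gzip.compress(y))
    cxy = len(gzip.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"the cat sat on the mat"
b = b"the cat sat on the rug"
c = b"stock prices fell sharply today"

# Overlapping sentences share structure gzip can deduplicate,
# so their joint compression is cheaper than for unrelated text.
print(ncd(a, b), ncd(a, c))
```

Swap gzip's compressed length for an LLM's negative log-likelihood and you get the same construction with a much better "compressor", which is roughly the point of the comparison.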