I think big parts of the answer involve the time domain, multi-agent systems, and iterative concepts.
Language is about communicating information between parties. A single LLM instance doing one-shot inference leverages little of this; only first-order semantics can really be explored. There is a limit to what can be communicated in a context of any size if you only get one shot at it. Change over time is a critical part of our reality.
Imagine if your agent could determine that it has been thinking about something for too long and adapt its strategy automatically: escalate to a higher-parameter model, adapt the context, and so on.
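The escalation idea above can be sketched as a simple control loop. This is a minimal illustration, not a real API: the model names, the `solve` callback, and the time budget are all assumptions invented for the example.

```python
import time

# Hypothetical model tiers, smallest first; the names are illustrative.
ESCALATION_LADDER = ["small-model", "medium-model", "large-model"]

def run_with_escalation(task, solve, budget_seconds=30.0):
    """Try each model tier in turn, escalating when the time budget runs out.

    `solve(task, model=..., deadline=...)` is an assumed callback that
    returns a result, or None if it could not finish before the deadline.
    """
    for model in ESCALATION_LADDER:
        start = time.monotonic()
        result = solve(task, model=model, deadline=start + budget_seconds)
        if result is not None:
            # Solved within budget at this tier.
            return model, result
        # Took too long at this tier: adapt strategy by moving up the ladder.
    return None, None  # exhausted all tiers
```

The same loop could also rewrite or trim the context between tiers instead of (or in addition to) swapping models; the ladder is just the simplest form of "notice you're stuck, change strategy."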
Perhaps we aren't seeking full AGI/ASI either (i.e., inventing new physics). From a business standpoint, it seems like we mostly have what we need now. The next ~3 months are going to be a hurricane in our shop.