We have reached "peak UI". In the future we're not going to need every service to build four different versions of their app, one for each major platform. They can just build a barebones web app and the AI will use it for you; you'll never even have to see it.
I don't think this is the case. You provide an API spec, but you also have to provide the implementation of that API. ChatGPT is basically a concierge between your API and the user.
I think the API is meant to be the data model in this scenario. The point is that you design the API around the task it solves, rather than against whatever fixed spec OpenAI publishes. And then you tell ChatGPT, "here's an API, make use of it for ...", and it magically does, without you having to write any plumbing.
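To make that concrete, here's a minimal sketch of what "design the API around the task" could look like, assuming a FastAPI-style service; the booking endpoint and field names are purely illustrative and not part of anything OpenAI publishes:

```python
# Sketch only: a task-shaped API whose generated spec an AI agent could consume.
# FastAPI is used as an example framework; the endpoint and fields are hypothetical.
from typing import Optional

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Table booking service")

class BookingRequest(BaseModel):
    restaurant: str
    party_size: int
    time: str  # ISO 8601, e.g. "2024-05-01T19:00"

class BookingResult(BaseModel):
    confirmed: bool
    reference: Optional[str] = None

@app.post("/bookings", response_model=BookingResult)
def create_booking(req: BookingRequest) -> BookingResult:
    # A real service would call its booking backend here; stubbed for the sketch.
    return BookingResult(confirmed=True, reference="demo-123")

# FastAPI serves the generated OpenAPI document at /openapi.json; pointing an AI
# agent at that spec is the "here's an API, make use of it" step from the comment.
```

The implementation still has to exist, like the parent says, but the only "interface" you ship is the spec describing the task.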
I mean, if you consider mobile, we might already be down from the peak, in the sense that the interface bandwidth has shrunk to whatever two fingers can handle.