And it shows. It has a poor grasp of reality. It does a poor job with complex tasks. It cannot be trusted with specialized tasks typically done by expert humans. It is certainly an amazing technical achievement that does a decent job with simple tasks requiring cursory knowledge, but that’s all it is at this time.
>There’s absolutely nothing to suggest text can’t generalize further to new modalities
Nope. OpenAI has already demonstrated the ability to generalize GPT4 to a new modality. Your claim that text models can only generalize to images and not to other modalities is utterly unconvincing. Explain to me why vision is so much different from, say, audio.
>> And it shows. It has a poor grasp of reality. It does a poor job with complex tasks.
GPT4 is a proof of concept more than anything. I’m excited to see how much reliability improves over time. Its grasp of reality isn’t perfect, but at least it understands how burden of proof works.
I walked back nothing. OpenAI was surprised by the mass adoption of ChatGPT, they saw it as an early technical preview.
I don’t understand why some people have such a hard time envisioning the potential of new technologies without a polished end product in their hands. Imagine if AI researchers had the same attitude.
Technology can be both real and unpolished at the same time. Those two things are not contradictory.
>There’s absolutely nothing to suggest text can’t generalize further to new modalities
Inversion of burden of proof.