
Wait what? Glue as in extract high-level semantic representations from _syntactic probabilities_ and pass them on to appropriate domain-specific tools?

This is the glaring hole in LLMs: a paradoxical semantic incoherence despite impressive sentential and grammatical coherence.

As glue it is so thin as to be potable.



Quoting this tweet[0]:

"Here's a brief glimpse of our INCREDIBLE near future.

GPT-3 armed with a Python interpreter can · do exact math · make API requests · answer in unprecedented ways"

[0] https://twitter.com/sergeykarayev/status/1569377881440276481
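
For concreteness, a minimal sketch of the glue loop that tweet describes: ask the model for Python, run what it emits, return what it prints. `llm_complete` is a placeholder for whatever completion API you'd call, not a real GPT-3 client; the hard-coded return just stands in for a model response.

    import io
    import contextlib

    def llm_complete(prompt: str) -> str:
        # Placeholder: a real implementation would call a language model.
        # Hard-coded here as the code a model might emit for the demo question.
        return "print(123456789 * 987654321)"

    def answer_with_interpreter(question: str) -> str:
        prompt = (
            "Answer the question by writing Python code that prints the result. "
            "Reply with code only.\n\nQuestion: " + question
        )
        code = llm_complete(prompt)
        buf = io.StringIO()
        with contextlib.redirect_stdout(buf):
            exec(code, {"__builtins__": __builtins__})  # sandbox this in practice
        return buf.getvalue().strip()

    print(answer_with_interpreter("What is 123456789 * 987654321?"))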


All that remains is a small matter of programming…


Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents

https://wenlong.page/language-planner/
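
Roughly, the paper's translation step: a frozen LM proposes a free-form step, then a second sentence-embedding model maps it to the nearest admissible action by cosine similarity. A sketch with a stand-in `embed` (the paper uses Sentence-BERT-style models; the random vectors here are deterministic per string but carry no real semantics, so this only demonstrates the plumbing):

    import numpy as np

    def embed(text: str) -> np.ndarray:
        # Placeholder for a sentence-embedding model; stable within one run,
        # but not semantically meaningful.
        rng = np.random.default_rng(abs(hash(text)) % (2**32))
        v = rng.standard_normal(384)
        return v / np.linalg.norm(v)

    def to_admissible(generated_step: str, admissible: list[str]) -> str:
        g = embed(generated_step)
        sims = [float(embed(a) @ g) for a in admissible]  # cosine similarity
        return admissible[int(np.argmax(sims))]

    actions = ["walk to kitchen", "open fridge", "grab milk"]
    print(to_admissible("go into the kitchen", actions))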


You would still need an executive; it would be more like a universal translator.


Translate to what? The next likely string of characters? How would this executive even interact with it? Sibling comment of yours mentioned extracting low-level steps from high-level tasks, but it needed another language model (no kidding!) to map to the «most likely» of the admissible actions. I mean, this shit is half-baked even in theory.



