
> LLMs, not AGI

Okay, but imagine someone strips ChatGPT of its safeguard layers, asks it to shut down Maersk's operations worldwide without leaving traces, connects its output to a bash terminal, and pipes the terminal's stdout back into the chat API.
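The wiring described above (model output piped to a shell, the shell's stdout fed back as the next prompt) can be sketched as follows. This is a minimal, hypothetical sketch: `fake_model` stands in for a real chat API call, and no real endpoint is involved.

```python
import subprocess

def fake_model(prompt: str) -> str:
    # Stand-in for a chat API call (hypothetical).
    # A real setup would send `prompt` to an LLM endpoint and
    # return its reply, which is assumed to be a shell command.
    return "echo hello-from-model"

def run_turn(prompt: str) -> str:
    """One loop iteration: the model emits a shell command, we execute it,
    and the command's stdout becomes the input for the next turn."""
    command = fake_model(prompt)
    result = subprocess.run(
        command, shell=True, capture_output=True, text=True, timeout=5
    )
    return result.stdout.strip()

print(run_turn("start"))
```

In a real loop this would run repeatedly, feeding each turn's output back as the next prompt, which is exactly why connecting an unfiltered model to a live shell is the dangerous part of the scenario.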

It is still an LLM, but if it can masquerade as an AGI, is that not enough to qualify as one? To me, this is what the Chinese Room thought experiment [1] is about.

[1] https://en.wikipedia.org/wiki/Chinese_room


