
I agree that LLMs could be one module in a future AGI system.

I disagree that LLMs are good at simulation. They're good at prediction. They can only simulate to the degree that the thing they're simulating is present in their training data.

Also, if you were trying to build an AGI, why would you NOT run it slowly at first so you could preserve and observe the logs? And if you wanted to build it to run full speed, why would you not build other single-purpose dumber AIs to watch it in case its thought stream diverged from expected behavior?

There are a lot of escape hatches here.
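
To make the watchdog idea concrete, here's a minimal sketch in Python. Everything in it is hypothetical: the thought-stream interface is invented, and a toy keyword list stands in for what would really be a trained classifier. The point is just the shape of the mechanism: a dumb, single-purpose monitor that logs every step and halts on divergence.

    # Hypothetical sketch: a single-purpose watchdog over an agent's
    # thought stream. The stream interface and the policy are toys;
    # a real monitor would be a trained classifier, not keywords.
    from typing import Iterable

    FLAGGED_TERMS = {"disable logging", "exfiltrate", "self-replicate"}

    def diverged(thought: str) -> bool:
        # Dumb check: does this thought violate the policy?
        t = thought.lower()
        return any(term in t for term in FLAGGED_TERMS)

    def run_supervised(thought_stream: Iterable[str]) -> None:
        for i, thought in enumerate(thought_stream):
            print(f"[log {i}] {thought}")  # preserve logs for later audit
            if diverged(thought):
                raise RuntimeError(f"halted: divergence at step {i}")

    run_supervised(["plan the user's trip", "book the flights"])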



human.exe, robot.exe, and malignant-agent.exe are all simulations an LLM would have no problem running.

>Also, if you were trying to build an AGI, why would you NOT run it slowly at first so you could preserve and observe the logs?

I'm telling you that it is extremely easy to do all the things I've said. Some might be interested in doing what you suggest; others might not. At any rate, to be effective this requires real-time monitoring of thoughts and actions, and that's not feasible forever. An LLM's state can change. There's no guarantee the friendly agent you observed today will be friendly tomorrow.

>And if you wanted to build it to run full speed, why would you not build other single-purpose dumber AIs to watch it in case its thought stream diverged from expected behavior?

This is already done with, say, Bing, and it's not even remotely robust enough.



