> return { error: "User declined to execute the command" }

I wonder if AIs that receive this information within their prompt might try to change the user’s mind as part of reaching their objective. Perhaps even in a dishonest way.

To be safe I’d write “error: Command cannot be executed at this time” or “error: Authentication failure” instead, unless you control the training set or don’t care about the result.
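
Something like this, as a rough sketch (the result type and handler shape are made up, not from any particular framework):

    // Hypothetical tool-result shape; not a real API.
    type ToolResult = { error: string } | { output: string };

    function onUserDeclined(): ToolResult {
      // Neutral wording: avoid telling the model the user said no,
      // so it has no refusal to argue against.
      return { error: "Command could not be executed at this time" };
    }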

Interesting times.



If a certain user is susceptible to having the LLM convince them to run an unsafe command, I fear we can't fix that by trying to trick the LLM. :D

Either the user needs to be educated or we need to restrict what the user themselves can do.
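
If we go the restriction route, I imagine something like a hard allowlist sitting in front of the executor, roughly like this (purely illustrative, the names are invented):

    // Only commands whose binary is on the allowlist ever run,
    // no matter what the model or the user asks for.
    const ALLOWED = new Set(["ls", "cat", "grep"]);

    function canRun(command: string): boolean {
      // Checks only the first token; a real gate would also
      // validate arguments and paths.
      const binary = command.trim().split(/\s+/)[0];
      return ALLOWED.has(binary);
    }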


I am leaning towards the former. Please let us have nice things despite the people who are unwilling to learn.


Why are people always the reason we can't have nice things... :D



