
We should always be able to clearly understand and interpret all of the thinking leading to an action taken by an AI. What would the point be if we don't know what it's doing, only that it is doing "something"?

