
> “I think there’s this notion that humans doing math have some rigid reasoning system—that there’s a sharp distinction between knowing something and not knowing something,” says Ethan Dyer, a machine-learning expert at Google. But humans give inconsistent answers, make errors, and fail to apply core concepts, too. The borders, at this frontier of machine learning, are blurred.

This part resonates with me. There was a time when I could solve modular congruence problems with exponents, but I couldn’t do it step by step. I could only “hallucinate” in a fuzzy way to arrive at the solution, somewhat like recalling the solution from memory.
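For contrast, here is what the explicit, step-by-step version of such a problem looks like — a minimal sketch of modular exponentiation by repeated squaring (the function name and the example values 7^222 mod 10 are illustrative, not from the comment):

```python
def mod_pow(base: int, exp: int, mod: int) -> int:
    """Compute base**exp % mod one explicit squaring step at a time."""
    result = 1
    base %= mod
    while exp > 0:
        if exp & 1:            # current binary digit of the exponent is 1
            result = (result * base) % mod
        base = (base * base) % mod  # square for the next binary digit
        exp >>= 1
    return result

print(mod_pow(7, 222, 10))  # same result as Python's built-in pow(7, 222, 10)
```

The fuzzy shortcut the comment describes would instead notice that powers of 7 mod 10 cycle with period 4 (7, 9, 3, 1) and jump straight to the answer.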

When we have to explain our reasoning, we can’t think the same way. It’s like thinking with a debugger attached.
