
> It might be that some day, an AI could be just as good as a trained mathematician, but so far, it doesn't look like it.

One of the ongoing arguments in automated theorem proving has centered on this question: would anyone understand, or even want to read, a proof written by a machine?

A big part of why we write and enjoy proofs is elegance, a rather subjective quality that only people appreciate.

In computer science we don't tend to care much for this quality; if the checker says the proof is good, that's generally about as far as we take it.
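As an illustration of that point (a minimal Lean 4 sketch, not from the thread): a proof checker accepts a clunky induction proof and a one-line appeal to the standard library equally; only a human reader prefers the second.

```lean
-- Two proofs of the same statement. The kernel accepts both;
-- it has no notion of elegance, only of correctness.
theorem add_comm_verbose (a b : Nat) : a + b = b + a := by
  induction a with
  | zero => simp
  | succ n ih => simp [Nat.succ_add, Nat.add_succ, ih]

theorem add_comm_short (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

To the checker these are interchangeable; to a reader trying to understand *why* the statement holds, they are not.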

But endearing proofs that last? That people want to read? That's much harder to do.



I don't think anyone wants elegant proofs of their software's correctness. As long as the code is proven correct, programmers will be satisfied.


I had mentioned that. This is true today, when software developers are the ones writing the proofs, and it's why automation is useful.

You would still need to be able to read the proof that a computer generated and understand what it contains in order to judge whether it is correct with respect to your specifications. So in some sense elegance would be useful since the proof isn’t written by a human but has to be understood by one.

If the input here is still plain-language prompts… then we'll have to assume we eventually develop artificial cognition. As far as I know, there isn't a serious academic attempting this, only VCs with shovels to sell.


Meanwhile, the shovels have learned to solve problems from the International Mathematical Olympiad.

https://deepmind.google/discover/blog/alphageometry-an-olymp...


Deep learning also learned how to play Go. Not terribly interesting.



