
Compilers seem a great target for AI. Whole-program optimisation using reinforcement learning could be massive: there's plenty of data, and collecting more is relatively cheap. The course touches on superoptimisers, but those don't really use AI. I think a change is coming in this space.


AI is great for compiler optimization, as long as it's OK to break the rule that a program's observable behaviour can't change.

Drop in a modern AI as the optimizer and every single line of C++ code can be considered undefined behaviour.


How would you prove the resulting program is functionally still the same?


As others have commented, it would be more like deciding which optimisation passes to run and in which order. Things like loop unrolling and inlining are not always helpful. Things like polyhedral optimization can be very slow. Running every optimization pass every time it might help is certainly far too slow.
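To make the pass-scheduling point concrete, here is a toy sketch (every pass name, number, and interaction below is invented for illustration): passes can help or hurt depending on what ran before them, so the best order has to be searched for rather than fixed, which is exactly the decision an AI scheduler would learn.

```python
import itertools

# Toy stand-in for a pass pipeline. The interactions are hypothetical:
# inlining grows code but exposes loops to unrolling; GVN exposes dead
# code to DCE. The point is only that pass *order* changes the result.
PASSES = ("inline", "unroll", "gvn", "dce")

def run_pass(size, history, name):
    """Apply one 'pass' to a code-size estimate, given prior passes."""
    if name == "inline":
        return size * 1.1
    if name == "unroll":
        return size * (0.8 if "inline" in history else 0.95)
    if name == "gvn":
        return size * 0.9
    if name == "dce":
        return size * (0.7 if "gvn" in history else 0.9)
    return size

def apply_pipeline(order, size=1000.0):
    """Run the passes in the given order and return the final size."""
    history = []
    for p in order:
        size = run_pass(size, history, p)
        history.append(p)
    return size

# A plausible-looking fixed pipeline vs. an exhaustively searched one.
naive = apply_pipeline(("dce", "gvn", "unroll", "inline"))
best = min(itertools.permutations(PASSES), key=apply_pipeline)
```

Here exhaustive search works because there are only 24 orderings; a real pipeline has far too many, which is where a learned policy would replace the `min` over `permutations`.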


AI could be used to choose which of the valid transformations to apply and in which order.

Current compilers don't even guarantee a fixed point: re-running the optimization pipeline a second time can still change the code. Every pass just does some "useful" transformations; it's best-effort.
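A miniature, hypothetical illustration of that best-effort behaviour: a constant-folding pass over expressions written as nested tuples, where one sweep rewrites each node only once, top-down. A second run can still find work, which is why "run the passes again" sometimes helps and why pass scheduling is a search problem at all.

```python
def fold_once(expr):
    """One top-down constant-folding sweep over an expression tree.

    Expressions are ints or tuples like ("add", lhs, rhs).
    """
    if isinstance(expr, int):
        return expr
    op, a, b = expr
    if op == "add" and isinstance(a, int) and isinstance(b, int):
        return a + b
    # This node wasn't foldable yet; fold the children and move on
    # without revisiting it -- the source of the non-idempotence.
    return (op, fold_once(a), fold_once(b))
```

Folding `("add", ("add", 1, 2), 3)` takes two runs: the first sweep reduces it to `("add", 3, 3)`, and only the second collapses it to `6`.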


You don't have to go that far. A compiler contains a lot of heuristics; I could see AI being useful for those.


Those heuristics are about "when" to apply certain transformations, in situations where the two options are already proven equivalent. That is different from transforming correct code into possibly incorrect code.
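A sketch of the kind of heuristic meant here, with an invented inlining rule (all names and thresholds are hypothetical, not any real compiler's): the transformation itself is already semantics-preserving, and this function only decides when to apply it, so even a badly trained model swapped in for it can cost performance but never correctness.

```python
def should_inline(callee_size, call_count, in_hot_loop):
    """Hand-tuned yes/no heuristic that a learned model could replace.

    Inline small callees; be more generous inside hot loops, but back
    off when the callee has many call sites (code-bloat risk).
    """
    budget = (100 + (200 if in_hot_loop else 0)) // max(call_count, 1)
    return callee_size < budget
```

Replacing `should_inline` with a trained predictor over the same features is exactly the safe, limited role for AI the parent comments describe.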



