
> There are LLMs today that are amazing at coding, and when you allow them to iterate (e.g., respond to compiler errors), the quality is pretty impressive. If you can run an LLM 3x faster, you can enable a much bigger feedback loop in the same period of time.

Well now the compiler is the bottleneck, isn't it? And you'd still need a human to check for bugs that aren't caught by the compiler.
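
To make that concrete, here's a minimal sketch of the generate-compile-retry loop the quote describes (call_llm is a hypothetical stand-in for whatever model you use, and Go's toolchain is assumed purely for illustration). The compile step is the serial part of every iteration, which is exactly the bottleneck point:

    import os
    import subprocess
    import tempfile

    def call_llm(prompt: str) -> str:
        # Hypothetical: send the prompt to your model/API and
        # return the generated Go source as a string.
        raise NotImplementedError

    def generate_until_it_compiles(task: str, max_iters: int = 5):
        prompt = task
        for _ in range(max_iters):
            source = call_llm(prompt)
            with tempfile.NamedTemporaryFile(suffix=".go", delete=False) as f:
                f.write(source.encode())
                path = f.name
            try:
                # The compile step: a 3x-faster LLM does nothing for this part.
                result = subprocess.run(["go", "build", "-o", os.devnull, path],
                                        capture_output=True, text=True)
            finally:
                os.unlink(path)
            if result.returncode == 0:
                return source
            # Feed the compiler's errors back into the next attempt.
            prompt = task + "\n\nThe previous attempt failed to compile:\n" + result.stderr
        return None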

Still nice to have inference speed improvements tho.



Something will always be the bottleneck, and it probably won’t be the speed of electrons for a while ;)

Some compilers (Go) are faster than others (javac), and some languages are interpreted and can only be checked through tests. Moving the bottleneck from the AI code-gen step to the same place it sits for a human developer seems like a win.


Spelling out the code in the editor is not really the bottleneck.


And yet it takes a non-zero amount of time. I think an apt comparison is a language like C++ vs Python. Yeah, technically you can write the same logic in both, but you can't genuinely say that "spelling out the code" takes the same amount of time in each. It becomes a meaningful difference across weeks of work.

With LLM pair-programming, you can basically say "add a button to this widget that calls this callback" or "call this API with the result of this operation", and the LLM will spit out code that does that thing. If your change is entirely within 1-2 files and under 300 LOC, you get it in a few seconds, right in your IDE, and probably syntactically correct.

It's human-driven, and the LLM just handles the writing. The LLM isn't doing large refactors, nor is it designing scalable systems on its own; a human still does that. But it does speed up the process noticeably.
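
For a sense of the scale involved, the "add a button to this widget that calls this callback" request above maps to a change like this (tkinter picked purely as an illustration; the widget and callback names are made up):

    import tkinter as tk

    def on_refresh():
        # The pre-existing callback the prompt refers to (invented here).
        print("refreshing...")

    root = tk.Tk()
    toolbar = tk.Frame(root)
    toolbar.pack(fill="x")

    # The two-line, single-file change an LLM reliably gets right:
    refresh_btn = tk.Button(toolbar, text="Refresh", command=on_refresh)
    refresh_btn.pack(side="left")

    root.mainloop()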



