
The endgame in programming is reducing complexity before the codebase becomes impossible to reason about. This is not a solved problem, and most codebases the LLMs were trained on are either just before that phase transition or well past it.

Reducing complexity is not just a matter of simplifying the code; it's also a matter of simplifying the problem. A programmer can do the former alone with the code, but the latter can only happen in a frank discussion with stakeholders.

A vibe coder using an LLM to generate complexity will not be able to tell which complexity to get rid of, and we don't have enough training data of well-curated complexity for LLMs to figure it out yet.



No kidding. So far the complexity introduced by LLM-generated code in my current codebase has taken far more time to deal with than the hand-written code.

Overall, we are trying to "silo" LLM-generated code into its own services with a well-defined interface so that the code can just be thrown away and regenerated (or rewritten by hand) because maintaining it is so difficult.
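
Roughly the shape of it, as a minimal sketch (names are made up, not our actual code): a thin, hand-written interface that the generated code must satisfy, so the implementation behind it stays disposable.

    from typing import Protocol

    class DocumentExtractor(Protocol):
        # Hand-written, stable contract: the only part we review
        # closely and the only part the rest of the system imports.
        def extract_text(self, blob: bytes) -> str: ...

    class GeneratedExtractor:
        # LLM-generated, disposable. When it becomes unmaintainable,
        # regenerate it against the Protocol above instead of patching.
        def extract_text(self, blob: bytes) -> str:
            raise NotImplementedError  # regenerated body goes here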


Yeah, same. I like the silo idea, I'll have to explore that.

I'm relieved to hear this because the LLM hype in this thread is seriously disorienting. I'm deeply convinced that coding "by hand" is just as defensible in the LLM age as handwriting was in the TTY age. My dopamine system remains unconvinced, though, and it's killing me.


I have a silo’d service that handles file uploads of PDFs, images and so on. It was largely vibe coded.

It sits on an isolated tier and isn’t allowed to persist state or have permanent storage. We wanted to reduce the impact of a security flaw in this code.

We’ve ended up doing similar things for search and for an orchestration tool used for testing. The key thing is that it's non-critical, so we can live without it.
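
The isolation amounts to something like this (a hypothetical sketch, not the real service; handle_upload is a made-up name): bytes in, a JSON-able summary out, and nothing survives the request.

    import hashlib, pathlib, tempfile

    def handle_upload(blob: bytes, filename: str) -> dict:
        # Stateless by construction: scratch work happens in a temp
        # directory deleted on return, and there is no DB handle or
        # mounted volume for compromised code to persist anything to.
        with tempfile.TemporaryDirectory() as workdir:
            path = pathlib.Path(workdir) / filename
            path.write_bytes(blob)
            return {
                "name": filename,
                "bytes": len(blob),
                "sha256": hashlib.sha256(blob).hexdigest(),
            }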


Yes, a retreading of the accidental vs. essential complexity discussion is in order here. I asked an AI agent to implement function calls in a programming language the other day. It decided the best way to do this was to spin up a new interpreter for every function call and evaluate the function within that context. This actually worked, but it was very, very slow.

The only way I was able to direct the AI to a better design was by saying the words I already knew that describe better designs. Anyone without that knowledge wouldn't be able to tell that the heavy interpreter architecture was bad, because it was fast enough for the simple test cases, all of which passed.
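
For concreteness, a toy version of the difference (my sketch, not the agent's code; interp.eval, func.closure, etc. are stand-ins): a call should push a cheap environment frame onto one long-lived interpreter, not boot a new interpreter.

    class Env:
        # One small dict per call frame, chained to the defining scope.
        def __init__(self, parent=None):
            self.vars, self.parent = {}, parent
        def lookup(self, name):
            env = self
            while env is not None:
                if name in env.vars:
                    return env.vars[name]
                env = env.parent
            raise NameError(name)

    def call_function(interp, func, args):
        # Cheap: one Env allocation per call against the same
        # interpreter, versus constructing a whole new interpreter
        # and evaluating the function inside it.
        frame = Env(parent=func.closure)
        frame.vars.update(zip(func.params, args))
        return interp.eval(func.body, frame)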

And you can say "just prompt better," but we're quickly approaching a point where people won't even have the words to say unless an AI first tells them what they are. At that point it might as well just say, "The design is fine, don't worry about it," and how would the user know any better?



