
Code has a generation cost and a maintenance cost.

If you just look at generation, then sure, it's super cheap now.

If you look at maintenance, it's still expensive.

You can of course use AI to maintain code, but the more of it there is, the more unwieldy it gets to maintain, even with the best models and harnesses.



I 'love' that folks are seemingly inching slowly towards more acceptance of crappy LLM code. Because it costs marginally less to push to production if it just passes some smoke tests? Have we not learned anything about technical debt and how it bites back hard? It's not even a seniority question, rather just a sane, rational approach to our craft, unless one wants to jump companies every few months like a toxic useless apple (not sure who would hire such a person, but the world is big and managers are often clueless).

There are of course various use cases, and for a few of them this is an acceptable tradeoff, but most software isn't written once and never touched (significantly) again; quite the contrary.


> Have we not learned anything about technical debt and how it bites back hard?

I think LLMs are changing the nature of technical debt in weird ways, with trends that are hard to predict.

I've found LLMs surprisingly useful in 'research mode', taking an old and badly-documented codebase and answering questions like "where does this variable come from, and what are its ultimate consumers?" Its answers won't be as natural as a true expert's, but its answers are nonetheless useful. Poor documentation is a classic example of technical debt, and LLMs make it easier to manage.

They're also useful at making quick-and-dirty code more robust. I'm as guilty as anyone else of writing personal-use bash scripts that make all kinds of unjustified assumptions and accrete features haphazardly, but even in "chat mode" LLMs are capable of reasonable rewrites for these small problems.

More systematically, we also see now-routine examples of LLMs being useful at code de-obfuscation and even decompilation. These forward processes maximize technical debt compared to the original systems, yet LLMs can still extract meaning.

Of course, we're not now immune to technical debt. Vibe coding will have its own hard-to-manage technical debt, but I'm not quite sure that we have the contours well defined. Anecdotally, LLMs seem to have their biggest problem in the design space, missing the forest of architecture for the trees of implementation such that they don't make the conceptual cuts between units in the best place. I would not be so confident as to call this problem inherent or structural rather than transitory.


> taking an old and badly-documented codebase and answering questions like "where does this variable come from, and what are its ultimate consumers?"

Why do you even need an LLM for this? Code is formal notation, it’s not magic. Unless the code is obfuscated, even bad code is pretty clear about what it’s doing and how various symbols are created and used. What is not clear is “why”, and the answer is often a business or a technical decision.


Once you are dealing with a legacy codebase older than you, with very few comments and confusing documentation, you'd understand that an LLM is a godsend for untangling the mess and troubleshooting issues.


> Why do you even need an LLM for this?

Once you get above a few hundred thousand lines of legacy undocumented code having a good LLM to help dig through it is really useful.


None of what you describe is free.

After the LLM helps untangle the mess, if you leave the mess in place, you will have to ask the LLM to untangle it for you every time you need to make a change.

Better to work with the LLM to untangle the technical debt then and there and commit the changes, so neither you nor the LLM has to work so hard in the future.

I’ve even seen anecdotal evidence that code that’s easier for humans to work with is easier for LLMs to work with as well.


The inching-towards-acceptance of crappy processes is quite influencer-driven as well, with said influencers if not directly incentivised by LLM providers, then at least indirectly incentivised by the popularity of outrageous exhortations.

There's definitely a chunk of the developer population that's not going to trade the high-craft aspects of the process for output-goes-brrr. A Faustian bargain if ever I saw one. If some are satisfied by what comes down to vibe-coding and vibe-testing, I guess we wish them well from afar.


I wouldn't say acceptance of crappy code. I think the issue is the acceptance of LLM plans after just a glance, and the acceptance of code without any review by the author at all, because spending more time on review would wipe out the time savings.


People aren't interested in long-term thinking when companies are doing layoffs for bullshit reasons and making vague threats about how most of us will have to go find a new career, which causes a heck of a lot of stress and financial cost. That isn't being petty, it's having self-respect. Companies will get the quality when they treat the craftspeople with respect.


Once writing code is cheap you don't maintain code. You regenerate it from scratch.

What you maintain is the specification harness, and change that to change the code.

We have to start thinking at a higher level, and see code generation in the same way we currently see compilation.


I'm not sold on that idea yet.

I don't just have LLMs spit out code. I have them spit out code and then I try that code out myself - sometimes via reviewing it and automated tests, sometimes just by using it and confirming it does the right thing.

That upgrades the code to a status of generated and verified. That's a lot more valuable than code that's just generated but hasn't been verified.

If I throw it all away every time I want to make a change I'm also discarding that valuable verification work. I'd rather keep code that I know works!
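One way to keep that verification work even if the code is regenerated is to pin the checked behaviour down in a characterization test. This is a minimal hypothetical sketch (the `slugify` function and its cases are illustrative, not from any real project):

```python
# Hypothetical sketch: capture the behaviour we verified by hand in
# assertions, so a future regeneration of the function can be
# re-verified automatically instead of re-reviewed from scratch.

def slugify(title: str) -> str:
    """Generated function, kept after human review: lowercase,
    replace punctuation with spaces, then hyphenate the words."""
    cleaned = "".join(
        ch if ch.isalnum() or ch.isspace() else " " for ch in title
    )
    return "-".join(cleaned.lower().split())

# These assertions encode the "generated and verified" status.
assert slugify("Hello, World!") == "hello-world"
assert slugify("  spaced   out  ") == "spaced-out"
```

Rerunning the assertions after any regeneration restores the verified status; without them, each rewrite discards the review effort.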


I suspect that is where we will be going next - automated verification. At least to the point where we can pass it over the wall for user acceptance testing.

Is it possible to write Cucumber specs (for example) of sufficient clarity that an LLM agent team can generate code in any number of programming languages that delivers the same outcome, and do that repeatedly?

Then we're at the point where we know the specs work. And is getting there less effort than just coding directly?
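For concreteness, a Cucumber/Gherkin spec at that level of clarity might look something like this (a purely hypothetical example; the feature, steps, and numbers are illustrative):

```gherkin
Feature: Shopping cart totals
  # Hypothetical spec: each step would need a language-agnostic
  # step definition, so any generated implementation can be run
  # against the same scenarios.

  Scenario: Discount applies above a threshold
    Given a cart containing items worth 120.00
    When the order is finalized
    Then a 10 percent discount is applied
    And the total charged is 108.00
```

Whether such specs can stay unambiguous enough to regenerate equivalent code repeatedly is exactly the open question.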

We live in exciting times.


Unless the specification is also free of bugs and side effects, there is no guarantee that a rewrite would have fewer bugs.

Plenty of rewrites out there prove that point.



I think the nuanced take on Joel's rant is this: it was good advice for 26 years. It became slightly less good advice a few months ago. This is a good time to warn overenthusiastic people that it’s still good advice in 2026, and to start a discussion about which of its assumptions remain true in 2027 and later.


Yes, but that's the point.

We're not writing code in a computer language any more, we're writing specs in structured English of sufficient clarity that they can be generated from.

The debugging would be on the specs.


> writing specs in structured English of sufficient clarity

What does "sufficient clarity" mean? Is English expressive enough and free of ambiguity? And who is going to review this process, another LLM, with the same biases and shortcomings?

I code for a living, and so far I'm OK with using LLMs to aid in my day-to-day job. But I wouldn't trust any LLM to produce code of sufficient quality that I would be comfortable deploying it in production without human review and supervision. And I most definitely wouldn't task an LLM to just go and rewrite large parts of a product because of a change of specs.


Tokens aren’t free.

Far more expensive than compilation, and non-deterministic, so you're not sure you'll get the same software if you give the AI the same spec.


You'll get the same software in outcome terms. Which is what we want.

Tokens are cheaper than getting an individual to modify the code, and likely the tokens will get cheaper - in the same way compilation has (which used to be batched once a day overnight in the mainframe era).

Non-determinism is how the whole LLM system works. All we're doing with agents is adding another layer of reinforcement learning that gets it to converge on the correct output.

That's also how routing protocols like OSPF work. There's no guarantee when those multicast packets will turn up, yet the routes converge and networks stay stable.

I think this fear of non-determinism needs to pass, but it will only pass if evidence of success arises.



