
This is really interesting to me; I have the opposite belief.

My worry is that any idiot can prompt themselves to _bad_ software, and the differentiator is in having the right experience to prompt to _good_ software (which I believe is also possible!). As a very seasoned engineer, I don't feel personally rugpulled by LLM generated code in any way; I feel that it's a huge force multiplier for me.

Where my concern about LLM generated software comes in is much more existential: how do we train people who know the difference between bad software and good software in the future? What I've seen is a pattern where experienced engineers are excellent at steering AI to make themselves multiples more effective, and junior engineers are replacing their previous sloppy output with ten times their previous sloppy output.

For short-sighted management, this is all desirable, since the sloppy output looks nice in the short term; many organizations genuinely believe they are pointed in the right direction doing this and are happy to downsize while blaming "AI." And for places where quality never really mattered (like "make my small business landing page"), this is a complete upheaval, without a doubt.

My concern is basically: what will we do long term to get people from one end to the other without the organic learning process that comes from having sloppy output curated and improved with a human touch by more senior engineers, and without an economic structure which allows "junior" engineers to subsidize themselves with low-end work while they learn? I worry greatly that in 5-10 years many organizations will end up with 10x larger balls of "legacy" garbage and 10x fewer knowledgeable people to fix it. For an experienced engineer I actually think this is a great career outlook, and I can't understand the rug pull take at all; I think that today's strong and experienced engineer will command a great deal of money and prestige in five years as the bottom drops out of software. From a "global outcomes" perspective this seems terrible, though, and I'm not quite sure what the solution is.




>For short-sighted management, this is all desirable since the sloppy output looks nice in the short term

It was a sobering moment for me when I sat down and looked back at the places I have worked over a career of 20-odd years. The correlation between high-quality code and economic performance was not just non-existent, it was almost negative. As in: whenever I worked at a place where engineering felt like a true priority, where tech debt was well managed and principles were followed, that place was not making any money.

I am not saying that this is a general rule, of course there are many places that perform well and have solid engineering. But what I am saying is that this short-sighted management might not be acting as irrationally as we prefer to think.


I generally agree; for most organizations the product is the value and as long as the product gives some semblance of functionality, improving along any technical axis is a cost. Organizations that spend too much on engineering principles usually aren’t as successful since the investment just isn’t worth it.

But, I have definitely seen failure due to persistent technical mistakes, as well, especially when combined with human factors. There’s a particularly deep spiral that comes from “our technical leadership made poor choices or left, we don’t know what to invest in strategically so we keep spending money on attempted refactors, reorgs, or rewrites that don’t add more value, and now nobody can fix or maintain the core product and customers are noticing;” I think that at least two companies I’ve worked at have had this spiral materially affect their stock price.

I think that generative coding can both help and hurt along this axis, but by and large I have not seen LLMs be promising at this kind of executive function (i.e., "our aging codebase is getting hard to maintain, what do we need to do to ensure that it doesn't erode our ability to compete").


As it always has been, though forgotten through most of two boom times in the industry: specification is everything.

If you adequately specify what you want, then LLMs today are perfectly capable of producing code of a quality exceeding most humans'.

But what has been going on is that many of the details of architecture and code have been left implicit, as "good practice" or "experience," because writing a good specification is time-consuming, partly because you first need to work out exactly what you want.


My guesses are

1. We'll train the LLMs not to make sloppy code.

2. We'll come up with better techniques for building guardrails to help

Making up examples:

* right now, lots of people code with no tests. LLMs do better with tests. So: train LLMs to write new and better tests.

* right now, many things are left untested because it's work to build the infrastructure to test them. Now we have LLMs to help us build that infrastructure, so we can use it to make better tests for LLMs.

* ...?
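To sketch what the test-guardrail idea above might look like in practice (the `slugify` function and its behavior are made up for illustration, not from the thread): a handful of cheap assertions pin down what generated code must keep doing.

```rust
// Hypothetical function standing in for some LLM-generated code.
fn slugify(title: &str) -> String {
    title
        .trim()
        .to_lowercase()
        .split_whitespace()
        .collect::<Vec<_>>()
        .join("-")
}

fn main() {
    // Guardrail tests: cheap to write (or to have an LLM generate),
    // and they constrain every future regeneration of the function.
    assert_eq!(slugify("  Hello World  "), "hello-world");
    assert_eq!(slugify("MiXeD Case"), "mixed-case");
    assert_eq!(slugify(""), "");
    println!("all guardrail tests passed");
}
```

Once tests like these exist, regenerating or refactoring the implementation with an LLM is much safer: a regression fails loudly instead of slipping through as slop.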


* better languages and formal verification. If an LLM codes in Rust, there’s a class of bugs that just can’t happen. I imagine we can develop languages with built-in guardrails that would’ve been too tedious for humans to use.
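To make the Rust point concrete (a toy sketch, not from the thread): the borrow checker turns a whole use-after-free class of bug into a compile error rather than a runtime surprise.

```rust
// A shared borrow is enough to read through; `&[i32]` is the idiomatic
// parameter type and `&Vec<i32>` coerces to it at the call site.
fn sum_while_borrowed(data: &[i32]) -> i32 {
    data.iter().sum()
}

fn main() {
    let mut data = vec![1, 2, 3];
    let first = &data[0]; // shared borrow of `data`
    // data.push(4);      // rejected at compile time: pushing may
    //                    // reallocate the buffer and leave `first`
    //                    // dangling -- the use-after-free bug class
    //                    // simply cannot compile while `first` lives.
    assert_eq!(*first, 1);
    assert_eq!(sum_while_borrowed(&data), 6);
    println!("borrow checker kept `first` valid");
}
```

A guardrail like this costs the human (or the LLM) nothing extra at runtime; the check happens entirely at compile time.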

Good software, bad software, and working software.

ChatGPT came out a little over 3 years ago. After 5-10 more years of similar progress I doubt any humans will be required to clean up the messes created by today’s agents.



