I agree and I’m surprised more people don’t get this. Bad behaviors aren’t suddenly okay because AI makes them easy.

If you are wasting time you may be value negative to a business. If you are value negative over the long run you should be let go.

We’re ultimately here to make money, not just pump out characters into text files.



> We’re ultimately here to make money, not just pump out characters into text files.

Different projects have different incentives. Dealing with AI slop from internet randos is a very real problem in open-source codebases. On one project I work on, I've pretty much stopped reviewing code from people I don't know when it's obviously going to take far more time than writing the patch myself would have. There used to be an incentive to help educate new contributors, of course, but now I can't tell whether that's even happening.


Yeah, fair enough. This applies to both businesses and OSS, and the OSS incentive often isn’t money.


How do you know the net value add isn’t greater with the AI, even if it requires more code review comments (and angrier coworkers)?


In a scenario where what we're doing is describing and assigning work to someone, having them paste that into an LLM, sending the LLM changes to me to review, me reviewing the LLM output, them pasting that back into the LLM and sending the results for me to review...

What value is that person adding? I can fire up claude code/cursor/whatever myself and get the same result with less overhead. It's not a matter of "is AI valuable", it's a matter of "is this person adding value to the process". In the above case... no, none at all.


Because we know what the value is without AI. I’ve been in the industry for about ten years, and others have been in it longer than I have. Folks have enough experience to know what good looks like and what bad looks like.


You have it exactly backwards. If you are consuming my time with slop, it’s on you to prove there’s still a net benefit.


All the recent studies (the ones constantly posted here) that say so.


The Stanford study showed mixed results, and you can stratify the data to show that AI failures are driven by process differences as much as circumstantial differences.

The MIT study just has a whole host of problems, but it ultimately boils down to this: giving your engineers Cursor and telling them to be 10x doesn't work. Beyond each individual engineer being skilled at using AI, you have to adjust your process for it. Code review is a perfect example; until you optimize the review process to reduce human friction, AI tools are going to be massively bottlenecked.



