The method of producing the work can be more important (and easier to review) than the work output itself. At the simplest level, think of a global search-and-replace of a function name that alters 5,000 lines. At a complex level, you can trust a team of humans to do something without micro-managing every aspect of their work. My hope is that the current crisis of reviewing too much AI-generated output will subside into the same kind of trust you extend to a team, once the LLM reaches a high level of "judgement" and competence. But we're definitely not there yet.
And contrary to the article, idea-generation with LLM support can be fun! They must have tested full replacement or something.
>> At a complex level, you can trust a team of humans to do something without micro-managing every aspect of their work
I see you have never managed an outsourced project run by a body-shop consultancy. They check the boxes you give them with zero thought or regard for the overall project, and they require significant micro-managing to produce usable code.
I find this sort of whataboutism in LLM discussions tiring. Yes, of course there are teams of humans that perform worse than an LLM. But it is obvious to all but the most hype-blinded boosters that teams of humans can work autonomously and produce good results, because that is how all software has been produced to the present day, and some of it is good.