This has convinced many non-programmers that they can program, but the results are consistently disastrous, because it still requires genuine expertise to spot the hallucinations.
I've been programming for 30+ years and am now a people manager. Claude Code has enabled me to code again, and I'm several times more productive than I ever was as an IC in the 2000s and 2010s. I suspect this person hasn't really tried the most recent generation; it is quite impressive and works very well if you do know what you are doing.
If you’ve been programming for 30+ years, you definitely don’t fall under the category of “non-programmers”.
You have decades of experience in how to approach software development and solve problems. You know the right questions to ask.
The actual non-programmers I see on Reddit are having discussions about topics such as “I don’t believe that technical debt is a real thing” and “how can I go back in time if Claude Code destroyed my code”.
People learning to code have always had those questions and issues, though. For example, “git ate my code” or “I don’t believe in Python using whitespace as a bracket, so I’m going to end all my blocks with #endif”.
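For anyone who hasn't run into that habit in the wild, here's a contrived sketch (mine, purely illustrative, not from any real codebase):

    # Redundant end-of-block comments that fight Python's
    # indentation-based scoping.
    def describe(n):
        if n % 2 == 0:
            result = "even"
        else:
            result = "odd"
        # endif
        return result
    # enddef

    print(describe(4))  # prints "even"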
The author's headline starts with "LLMs are a failure"; it's hard to take the author seriously with such hyperbole, even if the second part of the headline ("A new AI winter is coming") might be right.
But it can work well even if you don't know what you are doing (or don't look at the impl).
For example, build a TUI or GUI with Claude Code while only giving it feedback on the UX/QA side. I've done it many times despite 20 years of software experience. Some stuff just doesn't justify me spending my time credentializing in the impl.
Hallucinations that lead to code that doesn't work just get fixed. Most code I write isn't like "now write an accurate technical essay about hamsters," where hallucinations can sneak through unless I scrutinize it; rather, the code would simply fail to work and trigger the LLM's feedback loop to fix it when it tries to run/lint/compile/typecheck it.
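A minimal sketch of what I mean by that loop, with a hypothetical generate_fix() standing in for whatever model call the agent makes (the only checker here is py_compile; real agents also run linters, type checkers, and tests):

    import subprocess

    def generate_fix(source: str, errors: str) -> str:
        """Hypothetical stand-in for an LLM call that returns revised source."""
        raise NotImplementedError

    def fix_until_green(source: str, max_rounds: int = 5) -> str:
        # Run the checker, feed any failure back to the model, retry.
        for _ in range(max_rounds):
            with open("candidate.py", "w") as f:
                f.write(source)
            check = subprocess.run(
                ["python", "-m", "py_compile", "candidate.py"],
                capture_output=True, text=True,
            )
            if check.returncode == 0:
                return source  # compiles cleanly; done
            source = generate_fix(source, check.stderr)
        raise RuntimeError("still failing after max_rounds attempts")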
But the idea that you can only build with LLMs if you have a software engineer copilot isn't true, and it inches further from the truth every month. It kinda sounds like a convenient lie we tell ourselves as engineers (and understandably so: it's scary).
> Hallucinations that lead to code that doesn't work just get fixed
How about hallucinations that lead to code that doesn't work outside of the specific conditions that happen to be true in your dev environment? Or, even more subtly, hallucinations that lead to code which works but has critical security vulnerabilities?
Replace "hallucination" with "oversight" or "ignorance" and you have the same issue when a human writes the code.
A lot of that will come down to the prompter's own foresight, much like the vigilance of a beginner developer who knows they are working on a part of the system that is particularly sensitive to get right.
That said, only a subset of software needs an authentication solution or has zero tolerance for a bug in some codepath. Those constraints apply to almost none of the apps/TUIs/GUIs I've built over the last few months.
If you have to restrict the domain to those cases for LLMs to be "disastrous", then I'll grant that for this convo.
> A lot of that will come down to the prompter's own foresight
And, on the current trend, how on earth are prompters supposed to develop this foresight, this expertise, this knowledge?
Sure, fine, we have them now, in the form of experienced devs, but these people will eventually be lost via attrition, and lost even faster if companies actually make good on their threat to replace a team of ten devs with a team of three prompters (former senior devs).
The short-sightedness of this, the ironic lack of foresight, is troubling. You're talking about shutting off the pipeline that will produce these future prompters.
The only way through, I think, will be if (very big if) the LLMs get so much better at coding (not code-gen) that you won't need a skilled prompter.
I have a journalist friend with 0 coding experience who has used ChatGPT to help them build tools to scrape data for their work. They run the code, report the errors, repeat, until something usable results. An agent would do an even better job. Current LLMs are pretty good at spotting their own hallucinations if they're given the ability to execute code.
The author seems to have a bias. The truth is that we _do not know_ what is going to happen. It's still too early to judge the economic impact of the current technology; companies need time to understand how to use it. And research is still making progress. Scaling of the current paradigms (e.g. reasoning RL) could make the technology more useful and reliable. The enormous amount of investment could yield further breakthroughs. Or... not! Given the uncertainty, one should be both appropriately invested and diversified.
For toy and low-effort coding it works fantastically. I can smash out changes and PRs fantastically quickly, and they’re mostly correct. However, certain problem domains and tough problems cause it to spin its wheels worse than a junior programmer, especially if the back-and-forth troubleshooting runs longer than one context compaction. Then it can forget what it has tried before and go back to square one (it may know that it tried something, but it won’t know the exact details).
That was true six months ago; the latest versions are much better at memory and adherence, and my senior engineer friends are quickly adopting LLMs for all sorts of advanced development.
Last week I gave Antigravity a try, with the latest models and all. It generated subpar code that did the job, very quickly for sure, but no one would ever have accepted this code in a PR; it took me 10x more time to clean it up than it took Gemini to shit it out.
The only thing I learned is that 90% of devs are code monkeys with very low expectations, which basically amount to "if it compiles and seems to work, it's good enough for me".
> ...and works very well if you do know what you are doing
That's the issue. AI coding agents are only as good as the dev behind the prompt. It works for you because you have an actual background in software engineering, of which coding is just one part. AI coding agents can't save the inexperienced from themselves; they just help amateurs shoot themselves in the foot faster while convincing them they're marksmen.
UChicago’s strains came after its $10bn endowment — a critical source of revenue — delivered an annualised return of 6.7 per cent over the 10 years to 2024, among the weakest performances of any major US university.
The private university has taken a more conservative investment approach than many peers, with greater exposure to fixed income and less to equities since the global financial crisis in 2008.
“If you look at our audits and rating reports, they’ve consistently noted that we had somewhat less market exposure than our peers,” said Ivan Samstein, UChicago’s chief financial officer. “That led to less aggregate returns over a period of time.”
An aggressive borrowing spree to expand its research capacity also weighed on the university’s financial health. UChicago’s outstanding debt, measured by notes and bonds payable, climbed by about two-thirds in the decade ending 2024, to $6.1bn, as it poured resources into new fields such as molecular engineering and quantum science.
A combination of bad bets and mismanagement. Ah! Well, I have a friend who is currently going there for law school, so I shouldn't be celebrating this; it harms them and their career prospects.
One of my hobbies is Houdini, which is like Blender. While I agree with you that you can build a nice parameterised model in a few days, if you want to make an entire scene or a short film you will need hundreds if not thousands of models, all textured and topologized, and many of them rigged, animated, or even driven by simulations.
What this means is that making even a 2 minute short animation is out of reach for a solo artist. Your only option today is to go buy an asset pack and do your best. But then of course your art will look like the asset pack.
AI tools like this reduce one of the 20+ stages down to something reachable by someone working solo.
> What this means is that making even a 2 minute short animation is out of reach for a solo artist.
Is it truly the duration of the result that determines the effort and the number of people required? What is the threshold for a solo artist? Is it expected that a 2 minute short takes half as much effort and half as many people as a 4 minute short? Does the effort scale linearly, geometrically, or exponentially with the duration? Does a 2 minute short of a dialog between two characters take the same as a 4 minute short of a monologue?
> Your only option today is to go buy an asset pack and do your best. But then of course your art will look like the asset pack.
What's more valuable? That you can create a 2 minute short solo or that all the assets don't look like they came from an asset pack? The examples shown in TFA look like they were procedurally generated, and customizations beyond the simple "add more vertexes" are going to take time to get a truly unique style.
> AI tools like this reduce one of the 20+ stages down to something reachable by someone working solo.
To what end? Who's the audience for the 2 minute short by a solo developer? Is it meant to show friends? Post to social media as a meme? Add to a portfolio to get a job? Does something created by skipping a large portion of the 20+ steps truly demonstrate the person's ability, skill, or experience?
> Your only option today is to go buy an asset pack and do your best.
There is a real possibility that the assets generated by these tools will look equally or even more generic, the same way generated images today are full of tells.
> What this means is that making even a 2 minute short animation is out of reach for a solo artist.
Flatland was animated and edited by a single person. In 2007. It’s a good movie. Granted, the characters are geometric shapes, but still it’s a 90 minute 3D movie.
These are exceptional cases (by definition, as there aren’t that many of them), but do not underestimate solo artists and the power of passion and resilience.
There are always exceptions. I think the parent is referring to the many solo artists who would almost be able to make such great movies if not for time constraints, life events, etc. I'm sure there are countless solo artists who made 75% of a great movie and then ran out of time for unforeseeable reasons. Making creation a bit easier allows many more solo artists to create!
> I suspect that it wasn’t trained well enough on your art
No no, it’s not that the style wasn’t mine; it’s that the subject was generic.
ChatGPT did the typical thing that messes people up when they draw: it drew a representation of the thing instead of the shapes it sees. The AI clearly went “Ah! That’s a Venus flytrap. Here is a drawing of a Venus flytrap.” But it wasn’t _that_ Venus flytrap :)
I think I read this one closer to when it came out, because it sounded familiar (and it was really interesting both times), especially for me, never having worked in gaming.