If we were living in a progressive society, I'd root for LLMs to come in and replace me, as this would free up some of my time to use on what I really want.
But we don't, and any marginal improvement due to automation is fully expected to increase the wealth of the rich, create all new sorts of scams, and leave everyone else worse off.
So I sincerely hope this article is right, the impact is limited, and the soufflé collapses as it did with blockchain. I'm certain we'll collectively suffer from it though.
It's interesting how VCs just want to apply AI to creative tasks and occupations rather than mundane drudgery.
AI should be applied to all those horrible tasks we wish we didn't have to do: for example, teachers using AI to mark tests and exams, prepare lesson plans, etc. Yet the mentality of VCs is to replace the teacher.
LLMs aren't just about replacing workers; they can be about enhancing worker productivity.
Using code to code is just faster and simpler than telling an AI to do something, having it come out not quite right, wasting time trying to tell it to do it right, and then needing to fix it manually but lacking the skills to correct it because you never code. I don't know any dev who would say "we need more code faster, even if it sucks". The real need is the opposite: as little code as possible, that is also good code, done in the time it takes to get it right.
No-code isn't a new concept, and there's a reason why all past attempts have failed, and why people still pay web developers despite the existence of tools like Squarespace. Nothing about the LLMs of today suggests they have solved the no-code problems or will radically displace coding. They generate bad, oftentimes incorrect code for well-trodden paths, while struggling to solve novel problems or work in private or unique code bases. They do not easily keep up with new trends or tools. They do not offer the type of semantic understanding that is necessary to work in a logic-based field.
LLMs are nothing more than an alternative take on auto-complete, a feature that has been around forever and doesn't radically change programming. It will speed up good programmers to some degree and probably lead to bugs and more bad code from everyone else.
This is yet another hype cycle overselling a modest advancement in technology.
Computer chess kept failing for 30 years, until it didn't. Try winning a game of chess or Go against a computer now. There might easily be another architectural find, lateral to LLMs, that 10x's code generation quality.
You can say that about any field. We could invent the elixir of immortality tomorrow, but is that a realistic expectation? The CEO of Nvidia is a smart guy, he's pushing the hype train because his business is riding the wave. But you have to separate hype from an empirical view of what we can actually do today with these tools, versus what hasn't been delivered and is being oversold.
I think you're viewing it from inside the programming bubble. He's not that invested in AI succeeding at programming. Even if AI code generation failed completely, NVIDIA's business would still be more than OK, because LLMs have plenty of other uses, killing Google search for example. That's not a small niche.
My previous response was refuting your statement: "The CEO of Nvidia is a smart guy, he's pushing the hype train because his business is riding the wave"
Do chess-bots rely on this? I was under the impression that a full search of the space was infeasible, so our current state-of-the-art approaches use heuristics, bounded search, and learned strategies. In other words, I suspect our current models apply to programming better than we might expect.
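For concreteness, the classic shape of that approach is depth-limited minimax with alpha-beta pruning over a heuristic evaluation, something like this toy sketch (the GameState methods are made up for illustration, not taken from any real engine):

    import math

    def search(state, depth, alpha=-math.inf, beta=math.inf, maximizing=True):
        # Stop at the depth bound or a terminal position and fall back on a
        # heuristic score instead of exhausting the full game tree.
        if depth == 0 or state.is_terminal():
            return state.heuristic_score()  # e.g. material count + mobility
        if maximizing:
            best = -math.inf
            for move in state.legal_moves():
                best = max(best, search(state.apply(move), depth - 1,
                                        alpha, beta, False))
                alpha = max(alpha, best)
                if alpha >= beta:  # prune: the opponent will avoid this branch
                    break
            return best
        best = math.inf
        for move in state.legal_moves():
            best = min(best, search(state.apply(move), depth - 1,
                                    alpha, beta, True))
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best

Engines like AlphaZero swap the hand-written heuristic for a learned evaluation and the fixed-depth search for Monte Carlo tree search, but the bounded-search-plus-heuristics shape is the same.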
Chess is a bounded, non-moving target. Think about the difference between chess in the 1970s and today, and compare that to the same time period with programming. Chess is a single game whereas programming is a federation of tools, protocols, and standards that are ever evolving. They're not comparable in any sense.
I don't think that's a particularly relevant metric, as we can easily restrict programming to languages like Lisp/Pascal from the 70s, and the landscape doesn't change much.
I'd also suggest that our chess bots have evolved dramatically in that time. Deep Blue works very differently than AlphaZero, for example. Deep Blue might not be suited to code generation, but AlphaCode spawned from AlphaZero.
When it comes to predictions everyone has their own opinion, and I don't think this technology is mature enough for anyone to claim to own the truth.
Either LLMs are already showing us all their potential and their improvements will be incremental and marginal, in which case their role will mostly be as augmenting tools that humans use to be better. Maybe we'll also get full-automation suites built heavily on top of LLMs that produce complete, albeit crappy, unmaintainable, and limited software, as existing no-code options do today.
Or this is only the beginning, and we'll get actual thinking that can grasp what the need is and build something awesome and usable without a human developer supervising.
To draw a parallel: furniture making has been largely automated, yet people place more value on handcrafted pieces, where time is taken to produce a unique, refined finish. Maybe we'll see the same? Simple software relying on less specialized devs (or accountants/logistics managers/... turned bare-minimum developers), while more complex or critical software still relies on software engineers augmented with LLMs?
I don't know what the future holds, but I do know that Junior devs are cheaper than senior ones. I could envision many junior devs coding (with the help of AI) and a few senior devs doing mostly code reviews.
I don't use AI, but I know an IT manager who uses it with code snippets and prompts like "explain what this code does". He says it works great.
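Presumably something along these lines, a rough sketch of that workflow using the openai Python client (the model name and the snippet are placeholders I picked, not his actual setup):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    snippet = '''
    def dedupe(items):
        return list(dict.fromkeys(items))
    '''

    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user",
                   "content": f"Explain what this code does:\n{snippet}"}],
    )
    print(resp.choices[0].message.content)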
That seems to play into the kind of tools that would help junior devs become senior devs. But again, I really don't know. AI may fade away like pet rocks...
> I don't know what the future holds, but I do know that Junior devs are cheaper than senior ones. I could envision many junior devs coding (with the help of AI) and a few senior devs doing mostly code reviews.
Honestly, that seems like a recipe for dysfunction. You have junior devs relying on a crutch and failing to develop their own skills, and seniors getting their skills dulled and stagnating, because code review isn't how you develop skills either (it's just a limited application of existing ones). Also, code reviews are a drag, and I can't imagine someone staying motivated if that's literally their whole job.
That sounds horrible. Personally, when I have my senior dev hat on, doing code reviews is very far down the list of rewarding work. I'd much rather have an LLM that could do that for me.
No question that we won't need junior developers to do grunt work in the near future. I remember my first job at Google was converting some Angular code to a more recent version with TypeScript. Google paid me for two years to do it. Today, and especially in the near future, such jobs won't exist.
I view it a bit like assembly. Nobody learns that anymore outside of niche use cases; it all just moved to higher abstractions. I suspect programming with AI will become a similar abstraction of sorts.
For me, the words of NVIDIA's CEO weigh more than the MuratBuffalo blog, due to their respective track records. While AI may not replace hard-core cryptography or kernel coding, it will definitely replace a lot of the Upwork that goes into, say, restaurant website creation.
There will be AI IDEs that let you preview the results of a prompt before converting it into code.