"Coding" was never the source of value (twitter.com/id_aa_carmack)
87 points by mfiguiere on Feb 26, 2024 | 69 comments


I don't think the current round of AI is capable of replacing anything but Junior developers, and doing that would just result in a shock in a few decades when there is a lack of Senior developers, since on-the-job training is important for that role.

And it is difficult to listen to Nvidia talk about what AI can do when its $2 trillion valuation is rooted in AI-everything.

AI being used to make programming more efficient is fine. At least once it is capable of meaningfully doing that.

What a lot of developers don't realize is that if your code is long-lived, the time it takes to write it the first time isn't really a primary consideration when you are talking about, say, 40 hours vs. 80 hours. Maintainability and extensibility are core.

And if AI doesn't help with those (certainly this batch doesn't) it is only increasing the tech debt problem in the industry.


AI “feels” more efficient for programming since it spits out code in a jiffy, but once I am done with a session of coding I really have to wonder whether it wouldn’t have just been faster to look up APIs and online discussions rather than repeatedly trying to fix what the LLM was doing or attempting to force it down a given path if you know roughly what you want. (Plus half the time it’s not clear why a method it wants to use isn’t there - not imported, or doesn’t exist and never existed, or defined elsewhere in whatever context used to be around this in training?)

It’s almost like using StackOverflow but a toddler is in charge of inputting your queries and only tells you a fraction of what he finds.


this was kind of my experience as well, esp. when dealing with stuff I don't know super well.

for example not super up on node.js but know enough to get done what I need to get done. got GPT to throw out some ideas re: linked lists with an approach I'd never seen before.

so then the question was... is it smarter than me, and knows the answer better than I do? or is it dumber? I spent more time figuring that out -- it was dumber -- than if I'd just slapped in the node and mongo code myself. though I'll say it was a good learning experience, but not something really sustainable for full-time coding.


I've found the AI tooling very helpful for 1:1 pair programming with junior developers. The AI tends to provide bad code at "100%" but it prevents me from constantly having to take the keyboard away from the junior — I can just prompt the junior to type a bit, the AI fleshes out the rest, and we talk about how/why the AI isn't quite right, and what should be correct. The junior is typing the whole time. It really helps, a lot. Probably cuts 1:1 time in half, or better (re: it doubles the amount of stuff we can cover).


When I'm working with a junior developer I have a rule: never, ever, touch their keyboard or mouse while they are driving.

If I absolutely have to help them make some specific input, I'll spell it out, symbols and all, so that the junior can type it themselves. It takes time, but it's so much easier to remember something when you're the one who typed it out. Besides, even before AI, autocomplete was usually good enough that I rarely had to spell out more than a few letters. :)


Yeah — that's what I was doing before AI: clearly spelling out each character, character-by-character. The AI does about 75% of that, so we can skip the 0.5-baud C-over-bongos thing.


LLMs can only reproduce what has been done before. For many tasks like web programming it can do a lot of what you need because it was trained on that data many times over. When you need to use a concept or knowledge that's fairly rare or oblique it will fail. That's pretty much what Carmack is saying--problem solving will continue to be a needed skill.


I tell the computer exactly what I want done and it produces another program I can use to do that task.

Am I describing an LLM or a programming language?

Templates have been a thing for a long time and beyond being better than Google at finding them LLMs fall into the exact same problems those did:

If they don't exactly fit your goals you end up wasting a ton of time fixing the edge cases.


this is really far from my experience coding with llms

i've often been able to get them to write working code to harness libraries i just wrote (so they definitely aren't in their training set) or programming languages that i haven't even implemented yet. i've fed them obfuscated code in one programming language and asked them to translate it into another—getting sometimes very buggy output, sometimes not. i've asked them for problem-solving approaches for novel problems and gotten reasonable answers—sometimes


Unless you’re at the cutting edge of your field, the rest of us are doing work that has been proven and solved by someone else.


Even when I’m working cutting edge, 99% of my work is still mundane. You still need to get all of the basic stuff in place.


> anything but Junior developers

Not even junior developers. The only thing it can replace, and somewhat already has, is students doing their assignments. And it is going to spoil a lot of future developers forever. Will all those students with LLM skills be able to enter the market and put generated code everywhere, or will that serve as a filter?

I believe the first one, because before GPTs the entry skill was googling and copy-pasting Stack Overflow code.


To be clear I don't mean ChatGPT today. I meant the technology we are using.

As in, while AI in general could do more, the highest possible echelon for this kind of AI is junior developer.


> and doing that would just result in a shock in a few decades when there is a lack of Senior developers, since on-the-job training is important for that role

By which time senior management will have retired to the golf course / yacht / beach house. Reward systems etc.


At this rate I would be shocked if junior developers can't function at a higher level than senior ones with the help of AI in a few decades.

I'd be shocked if there are developers at all in a few decades.


If AI makes junior developers "better" than senior developers then what stops senior developers from adopting the tools and becoming once again better than the juniors?


They are older and learn slower, they also learned a lot of biases over the years. They are often unwilling to accept major changes to workflow.


Maybe the second sentence is true but

> They are older and learn slower, they also learned a lot of biases over the years

Hasn’t this been disproven time and time again? And research has shown that continuous learning helps to prevent or delay cognitive decline.


I don't see why that would contradict my statement. Yes learning is good, but it becomes harder as we age.


> They are older and learn slower, they also learned a lot of biases over the years.

Wow, I rarely see such unwarranted confidence in such a blatantly wrong statement.


It's basically true for every profession. We even have a word for it: we call it being wise. They have built up experience that they rely on during decision making, and it produces good results. Changing that is hard.


Did you read the article?

Have you looked into what LLMs can do beyond more quickly implement a templating engine?


I have looked at what they could do in 2017, 2020, 2022, and 2024. It's incredibly foolish to assume that the tech is stagnant to the point of still just being a "template engine implementer" decades from now, much less even a year from now.


Nvidia says coding will decline because they want to sell the solution that does that for you.

I really hope that nvidia gets competition, although it will be difficult because the AI environment is locked in to a significant degree in many tools. Not completely though, so maybe we will see competitors arise.

I am still pretty disappointed with LLMs' ability to solve logical problems. Just pick an easier logic quiz from the internet and plug it into Copilot and co.

Copilot seems to have some templates now to answer some of them, and I think they fix the model pretty actively. I had some quizzes that were solved wrongly just a few weeks ago that now get a correct answer. But the overall ability of LLMs is still quite low. It's always funny when your model says it knows this "classic puzzle" and still gets it wrong. Or the typical "find the cat in the box" puzzle, where the AI just wants to suffocate the cat so it cannot move anymore.

Despite that, I wouldn't want to be without AI for coding anymore. It is just added value. I have a decent rig that runs codellama:7b locally with VS Code & Twinny. Not perfect, but a big help already.
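
(For the curious, roughly what a setup like that boils down to under the hood: a minimal sketch of asking a locally served codellama:7b for a completion, assuming Twinny is talking to an Ollama server on its default port — the endpoint and fields are Ollama's, the prompt is just an example.)

    import json
    import urllib.request

    # Ask a locally served codellama:7b for a completion via Ollama's REST API.
    # Assumes `ollama serve` is running on the default port with the model pulled.
    payload = {
        "model": "codellama:7b",
        "prompt": "Write a Python function that reverses a singly linked list.",
        "stream": False,
    }
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])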

AI critics often called it autocomplete on steroids. Perhaps it is just that, but maybe that is already quite helpful.

I don't see non-coders being able to leverage AI for coding at all. The concept of the citizen-developer was proven wrong so many times already... I think there will be some modified models for certain languages to improve overall tooling around a language.


One comparable tool to LLMs in software history is the spreadsheet, which also defines a new/different abstraction level that doesn't require "programming". While it didn't cause the death of programming, a lot of non-technical people can now get a lot of work done without programmers, work that in the past may have required months of dev work.

It's one of those things we take for granted now, but it was absolutely transformative in the history of computing.


The spreadsheet is a wonderful way to create tech debt. So, it's very comparable to an LLM that spits out bad code.


Like LLMs it’s good enough until it’s not good enough :)


Spreadsheets aren't fundamentally different from programming, though. It's one thing to have a nice neat way to see all (or many of) the variables at once and do computations on them. It's quite another to have fuzzy AI programming where the system will silently make invisible assumptions in writing code for you. Understanding and evaluating assumptions is a skill currently learned by coding; learning how to tell AIs what you want is not a substitute for rigor.


I share Carmack's sentiment on managing people here.

Naval Ravikant calls managing people the most common and recognized form of leverage. And he's absolutely right. But the scaling properties of people management are terrible once you get accustomed to how you can scale with code. People management necessitates adding new layers in order to keep the system from falling over, overall efficiency stops scaling as people are added, and communication becomes very difficult.


None of what you said seems different from scaling software and data processing. Only the constant factors are different. Communication overhead is a major concern in supercomputing / distributed computing. 10x-100x scaleups require rearchitecture. Google Talk is not, and cannot be, the same code as "UNIX talk".


He's not entirely wrong, but here's my take: however sophisticated these AI-built software systems are, no one has shown that they can do what Agile methodology brought to the industry: ease of modification to serve changing business needs.

Once the AI sets up the initial data modeling and storage system cross-talk, and it's singing in production with paying customers, how well is it going to continually iterate on that system when non-engineers make asks of it? My guess is, not that well, and worse and worse as the business evolves. Who's going to be able to make sense of the possible hallucinations and spaghetti code nightmare under the hood when things are going too haywire to ignore? Are you going to let it just drop columns being used by active code, while it takes minutes if not hours to rebuild indexes on new ones? Let it design access patterns across 10+ table joins? Make UI decisions that load 100 nested divs for every element in an array because that's how it interpreted the best way to do styling?

Someone's going to blow a company to hell trying to get AI to build and continually run IT, I predict, and it will be a much-remembered case study.


the hard part of modifying existing software is understanding it

my limited experience has been that llms are enormously better at explaining existing code (code → text) than at writing new code (text → code)


Wouldn't a GPT-5 or 6 level AI be even better than agile? A new feature can be digested by AI as part of the new complete set of requirements and the AI can then refactor the entire codebase for each new feature. It can also generate all the net new test cases to add to ensure that there is no regression.


Sure, and if I had a jar of wizard jelly, I could teleport directly to the moon to eat it.


Yes, magic singularity AI angels would solve all our problems. When are they coming?


Managers always underestimate the cost of maintaining software, with nasty consequences down the road. There's nothing new about it. It's just another form of technical debt. We're crossing our fingers and hoping that a better AI will come along that knows how to fix that debt before it all blows up.


He's probably right. But the upside is this: Think about all the hundreds of little coding ideas (games etc) that you have in your brain or written down somewhere that would have taken years to develop and millions in opportunity cost to pursue. In the next few years those will probably be promptable into existence. I'm excited about this :) Even if it means code becomes more of a commodity.


LLMs have been monumentally helpful for creating small tools/apps/games that I would have been too scared to try on my own because they require deep knowledge of languages & libraries that I don't care to learn.

The LLM is wrong and hallucinates sometimes, but thus far it's been a huge timesaver in actually shipping finished products. It's like if people still tried to answer your questions on stack overflow, instead of just downvoting with no feedback.


Perhaps "coding" is indeed a source of value in the development of the core skill of problem solving, not unlike studying logic, pure math, and such. For example, many universities require introductory computability theory courses for CS degrees which I argue is invaluable for developing certain problem solving skills despite never being used in the software industry. Supposing Carmack is right, that coding itself won't be a barrier to entry, this does not reflect that fact that problem solving requires one to create solutions precisely, in ways that mathematics or programming does that natural language prompting (for instance) does not.


I don't get the argument that you should become a domain expert and that AIs will do coding for us. If AIs can code, they can code themselves into becoming domain experts. Why do we not believe that an AI that can code won't also be better at every single domain (or, at the very least, code itself to become better)?

And once we reach that point, the questions become less practical about what should you learn and more existential. Thankfully I already went through my existential crisis and am at the point of not caring.


Because the truth that everyone is dancing around is that AI is not intelligent and to protect their moat, the people who currently dominate that industry wouldn't dare program it to make themselves (or their profit potential) obsolete.

It will all make sense when Doritos drops their own quirky LLM that guides you through a stressful gravity bong experience.


AI is replacing Google search, not programmers, at this point. It's great to see all the hype talk about replacing coders but the only "consistent" value it provides is answering questions quickly without ad spam.

I know once the investor subsidies end, monetization will once again stifle a great productivity tool. I'm old enough to have lived through Google's "Don't be evil" era.


When I write code, except for some extreme edge cases, the computer will do what I want, the same way, every time.

Compare that to interacting with current AI, where it might follow the instructions, but the output is based on probability. Is there a reason to think these probability-based models will suddenly become deterministic?

I don't get how you can engineer anything on top of a models output that requires exactness.


A way to reach exactness from current probabilistic models is to ask them to build a deterministic system. Then you won't directly employ the LLM's services, but instead use the output of the system the LLM built. This system may be flawed, but it's easier to get it to behave deterministically, and it's amenable to analysis and correction.
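
(A toy sketch of that pattern in Python — llm_complete and the "generated" snippet are placeholders, not a real API; the model is consulted once to produce ordinary code, and only that reviewed, deterministic code runs afterwards.)

    # Toy sketch: use the LLM once to *generate* a deterministic tool, review it,
    # then run only the generated tool from that point on.

    def llm_complete(prompt: str) -> str:
        # Stand-in for a real model call; in practice this would wrap whatever
        # provider you use. Pretend the model returned this reviewed snippet:
        return (
            "import re\n"
            "def parse_price(text):\n"
            "    m = re.search(r'\\$([0-9][0-9,]*\\.?[0-9]*)', text)\n"
            "    return float(m.group(1).replace(',', '')) if m else None\n"
        )

    generated_source = llm_complete(
        "Write a Python function parse_price(text) that extracts a dollar "
        "amount like '$1,234.56' from text. No I/O, no randomness."
    )

    namespace = {}
    exec(generated_source, namespace)   # review generated code before doing this for real
    parse_price = namespace["parse_price"]

    # From here on, behaviour is deterministic: same input, same output, and the
    # function can be unit-tested and version-controlled like any other code.
    print(parse_price("total: $1,234.56"))   # 1234.56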


So basically me asking CoPilot for help.


Yep. For business, what matters is solving a business problem or profiting from the use of apps and systems. For FOSS, working software is probably the most important thing.

Sadly: maintainability, scrivener beauty, architecture, language, platform, and such are nonfunctional wants and/or requirements that don't really matter.

What users need, and saving them time, should be the most important problems code solves.

Language and platform religiosity are therefore irrational and often just tribal nonsense. It's therefore absurdly small-minded, self-limiting, and/or a sign of incuriousness whenever I hear someone say "I'm [not] a {X} {guy/gal}", or throw shade at, or express unbridled desire for, a particular technology.


It will still probably be better to know how to program. You can also use the AI to teach you.

I think the assembly/higher level languages analogy may not be entirely accurate. If I'm making a web app and want some JavaScript for my special UI, an LLM can write a lot of versions based on how I prompt it. But it will always be faster if I already know how to write code, and using it to speed me up/do any boilerplate needed. That's why copilot is so useful, even if it's not the highest quality.

And it's really not hard to get the basics of programming... that's why I think it should be taught. I don't see why you need to learn more chemistry than programming in school.


This is how a lot of folks get stuck at the "Senior" level. They try to write code harder instead of leveling up the abstraction layer of the problems they're solving. Anecdotally, many people don't know how to make that pivot.


A fair number of people get "stuck" at the senior level because they enjoy being ICs more than they enjoy being managers, but their company promotion track doesn't support having high leveled ICs so they either have to accept less money and prestige, or force themselves into a role that they don't like.


It's a difficult pivot to make because it really is not the same job, one of them is still fundamentally about tech and the other is not.


And yet, the latter is impossible to perform well unless you know a lot about the underlying tech.

Fortunately, there are fewer open positions at that level, so not all of us need to accept the mission.


The problems become less about what is being built and more tending to the machine that the org itself represents.


Computing, automation and creating is great, but coding sucks.

I know some people enjoy the challenge and achievement and there are lots of people who are especially attuned to highly technical and particular things, but overall programming productivity and software design suffers badly because of it.

We can do better and coding should look completely different than it does now. Hopefully that will change.


I like to say we, as humans, are all « shamans »: we're structuring chaos into order.

Any job is just that: it's a story, it's transforming & aggregating bits of the information field into another, surrounding it with meaning in the process.

I don't care if you work with computers, as a social worker, or as a fisherman. You're just transforming bits, mate.


When AI invents something like the fast inverse square root algorithm (truly invents it, before it happens IRL), Carmack will have to worry. Till then it is super efficient and useful search on steroids.

But sure, it is a source of value, as it cuts down on time wasted on unproductive parts.
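
(For reference, the trick being discussed — a Python transliteration of the well-known C routine with its 0x5f3759df magic constant and one Newton-Raphson step; the point is that inventing something like this, rather than reciting it, is the bar.)

    import struct

    def fast_inv_sqrt(x: float) -> float:
        # Python transliteration of the classic routine: reinterpret the float's
        # bits as a 32-bit integer, apply the magic constant and shift, then do
        # one Newton-Raphson refinement step.
        i = struct.unpack("<I", struct.pack("<f", x))[0]
        i = 0x5F3759DF - (i >> 1)
        y = struct.unpack("<f", struct.pack("<I", i))[0]
        return y * (1.5 - 0.5 * x * y * y)

    print(fast_inv_sqrt(4.0))   # ~0.499, vs. the exact 0.5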



That's pretty easy to invent using AI. Finding short pieces of code to approximate known outputs is a known and deployed problem. That is ASIC optimization, for example.


Can't argue with an expert, but I've tried to grill ChatGPT on a couple of unexplored science/coding-related problems that I encountered in the course of my work in my other life in the USSR, where I used to be a scientist, and the result was of zero use. No new inventions.


ChatGPT/LLM is not the end all of AI.


carmack didn't invent it either


I know that. It's in the Wikipedia article.


CEO of Nvidia: no need for coding anymore.

Me: considering how often your driver crashes, you still need coding.


> “Coding” was never the source of value, and people shouldn’t get overly attached to it.

Except when you're Carmack and you're able to make some "abstractions" (like Doom) come true precisely because you have super strong skills.

So there is coding and coding.


There is the proverbial "old guy who can code in COBOL" and I've been looking forward to eventually being the "old guy who can code in C", but maybe it'll just be "old guy who can code".


I tend to think differently. Coding is also a source of value. Why? Because people do what they enjoy. People do what makes them feel important. And people have an identity of being, for example, an assembler programmer. This will go on as long as they can afford it. That is as long as there's a market for some arcane language or skill. If there's no market, maybe they'll pursue it as a hobby. I think HN is full of this. Just two posts from today: compiling the vanilla source code of Wolfenstein 3D and Doom rendered via console.log(). (Interestingly both related to John Carmack.)

I guess the world could be a better place if we squeezed value out of every minute. But I don't fully see that even in business. Even Carmack says himself that he's not going to the final abstraction. So I tend to see this as a guideline one can strive for.


We don’t use punchcards anymore. Punchcards were not the source of value. AI assisted problem solving is simply a new interface to work through. I’m not convinced it’s better. Just another tool.


Maybe one can make this analogy, but I think it is a bit too simplistic/abstracted away from reality:

We are not using tools to output massive loads of punchcards today, but these AI models still output the same thing: massive loads of code.


If you can't

1. resign your coding job, 2. get hired by a company which does not do code, and 3. earn a comparable or higher salary

then coding is a source of your value. You retain the same general problem-solving ability, but your salary will decrease, probably substantially, which should tell you that general problem-solving ability is not the primary source of most software developers' value.

Coding might not have been "the source of value" for Carmack himself: after all, he did make a comparable living doing rocketry at one point, although he did return to graphics programming after that stint. Now he thinks he might make a living "managing AIs" in the future.

I don't know whether managing AIs will become a job in the future or not, but if Carmack thinks people will want to pay _him_ of all people to manage any important AI, he's deluding himself. Everybody knows his managerial qualities from Masters of Doom.

Of course, it's all academic: Carmack is a multi-millionaire who has no need for paid employment, and no need to maintain good mental models of how paid employment works.


Sounds like a skill issue. /s



