
If you spend time on places that attract newbie programmers (some subreddits focused on game dev or game engines, for example) you’ll see the outcome of “I no longer think you should learn to code.” And it’s not pretty.

Many, many posts of people looking for help fixing AI-generated code because the AI got it wrong and they have no idea what the code even does. Much of the time the problem is simply an invented method name that doesn’t exist, a problem that is trivially solved by the error message and documentation. But they say they’ve spent several days or whatever going back and forth with the AI trying to fix it.

It’s almost a little sad. If they just take the time to actually learn what they’re doing they’ll be able to accomplish so much more.

Now of course people learning the traditional way have these same problems, but they’re debugging code they wrote, not gobbledygook from an AI. It’s also easier to explain the solution to them because they wrote the code, so it tends to be simpler. Several times I’ve taken pity on someone asking for help with AI code, and even when I explained the solution they still didn’t understand it, and I had to just give up on them - I’m not getting paid to help them.



I have played around with AI code from time to time. I do not code routinely, but I have pet personal projects that allow me to do some coding, and this is where I experimented.

Rule number 1 and the only rule: You need to be a subject matter expert. Be it program logic or be it programming language. AI is only a helper, it will go wrong, frequently, and if you do not understand the reason for the code and the programming language, you will take so much more time than if you did not even use the AI.

Without naming the IDE (one of the top 3, I guess), I asked it to simplify some code. I had a block of code repeated 8 times. 6 of the copies were identical; the last 2 had a variation. The AI just did not catch it, and refactored all 8 blocks to use the logic of the first block. How can you even do that? The code is similar but different: it looks the same, but there are extra lines of code in the last 2 blocks!

And it took me a while to realize this. I never ingest AI code directly, so at first I was marveling at a job well done, and then, as I read and compared, the horror! And that was not the first time it happened; once again I got tricked by the soft-spoken, well-mannered AI into believing that it did a fantastic job when it did not.

Edit: It is just an assistant. You give it a task, it will make a mistake, you tell it to fix the mistake, it will fix the mistake. It still saves you time. Next day, it will make the same mistake - and hopefully that gets reduced as the versions evolve.


AI is excellent for tasks you know how to do, but can't be arsed to spend the time.

Example: I wanted a tool that notifies me of replies or upvotes to my recent Hacker News comments. Grok3 did it in 133 seconds with Think mode enabled. Total time including me giving it the example HTML as attachment and writing the specs + pasting the response to a file and running it? About 5 minutes.

I know perfectly well how to do it myself, but do I want to spend the hour or so to write all the boilerplate for managing state and grabbing HTML and figuring out the correct html elements to poll? Fuck no.
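For what it's worth, the "managing state" boilerplate in a tool like that is mostly a diff between the last saved snapshot of your comments and the freshly scraped one. A minimal sketch (hypothetical field names and helper, not the actual generated script):

```javascript
// Compare the previously saved snapshot of our comments against a fresh
// scrape and report anything worth a notification. Pure function, so the
// scraping and persistence parts can be bolted on separately.
function diffComments(prev, curr) {
  const notifications = [];
  for (const c of curr) {
    const old = prev.find((p) => p.id === c.id);
    if (!old) continue; // a brand-new comment of ours, nothing to report yet
    if (c.replies > old.replies) {
      notifications.push(`Comment ${c.id}: ${c.replies - old.replies} new reply/replies`);
    }
    if (c.points > old.points) {
      notifications.push(`Comment ${c.id}: upvoted (${old.points} -> ${c.points})`);
    }
  }
  return notifications;
}

// Example: one reply and one upvote arrived since the last poll.
const saved = [{ id: 101, replies: 0, points: 1 }];
const fresh = [{ id: 101, replies: 1, points: 2 }];
console.log(diffComments(saved, fresh)); // two notifications
```

The fetching side is then just grabbing your comments page, pulling the id/replies/points out of the HTML, and persisting the snapshot between runs.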


Yes, I find this to be its best use case. Unfortunately, for anything I actually need help with, the results are often terrible.


From my experience using AI: if you don't write a really precise description of your requirements in the initial prompt and it doesn't one-shot the answer, I don't bother asking it to fix the mistake.

Unless you're using an LLM with a really long context, some context loss is bound to happen sooner or later - when that happens, pointing out errors that have dropped out of the context will just result in repeated or garbage output.


This is why platforms(?) like Claude Code, Cursor and Windsurf are essential.

Claude Code forces you to create a CLAUDE.md to direct how it works, with Cursor you can (and should) write Cursor Rules.

The difference with a good spec + AI vs just vibe coding from scratch is like night and day.


I don't use these tools daily because I have a hard time committing to these workflows. But do I need to do this setup once, once per project, or every time for one-off things I might code?


You really don't need these for tiny one-off scripts, but they're essential for larger projects where the whole application can't fit into the LLM context at once.

Basically they're just markdown files where you write what the project is about, where everything is located, and things like that. Pretty much what you'd need to have for another human to bring them up to speed.

An example snippet from the project I just happened to have open:

## Code Structure

- All import processors go under the `cmd/` directory
- The `internal/` directory contains common utility functions; use it when possible. Add to it when necessary.
- Each processor has its own subdirectory and Go package

## Implementation Requirements

- Maintain consistent Go style and idiomatic patterns
- Follow the existing architectural patterns
- Each data source processor should be implemented as a separate command

This way the LLM (Cursor in this project) knows to, for example, check the internal/ directory for common utils. And if it finds duplication, it'll automatically generate any common functions in there.

It's a way to add guidelines/rails for projects; if you don't add anything, the LLM will just pick whatever. It may even change style depending on what's being implemented. In this project the Goodreads processor was 100% different from the Steam processor. A human would've seen the similarities and followed the same style, but the LLM can't do that without help.


why would I pay for the advanced features when I haven't been impressed with the free features? in fact Claude 3.5, which is what is available, is a nearly worthless product, with value comparable to a free search engine, and not even a very good one. It is usually incorrect and frequently in subtle ways that will cost me a lot of time.

pro AI people sound like someone with an expensive addiction trying to justify it. the free product is bad, so I just need to pay to see the light?

Why would Anthropic let me use a model for free that is going to make me more skeptical of their paid offerings unless it is pretty similar to the paid ones and they think it's good?

Just read the manual and write the code yourself. These toys are a distraction.


Free version of Claude sucks because the prompts get too long very quickly. I have the Pro version and I have completed many tasks successfully.

(I know what I was doing and had a great understanding of what is going on).


Like many tools, there is some user skill required. Certainly there are situations where AI assistants won’t help much, but if every single attempt you’ve made to use an AI coding assistant has been “useless”, you are either working in a very niche area or, perhaps more likely, it is user error on your own part.

There are plenty of people who are way too high on the current abilities of AI, but I find the “AI is useless” crowd to be equally ridiculous.

It reminds me of early in my career working in statistics where the company I joined out of grad school was justifiably looking to move out of SAS and start working in R and Python. Many were enthusiastic about the change and quickly saw the benefit, but there were some who were too entrenched in their previous way of working, and insisted that there was no benefit to changing, they could do anything required in SAS, and stubbornly refused to entertain the idea that there was a benefit to be gained by learning a new skill.

You needn’t become an AI cultist. But with the number of people who are getting at least some benefit to using AI coding assistants, if you are finding it to be worthless in your personal experience, it may be worth stepping back and considering if there is something wrong with how you are trying to utilize it.


What I do is go back through the conversation history, select the response that has the somewhat-working code, then submit a prompt with what I want changed. Selectively including context, adjusting temperature and top_p/k, and sometimes swapping the model or system prompt for a given query will give better results. Combine this with repeating the query multiple times with that same context, then select whichever result is best and move on.
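In API terms, the "selectively including context" step amounts to rebuilding the message list from a chosen checkpoint instead of replaying the whole history. A sketch, assuming the common chat-completions message shape (`role`/`content` objects); the helper and all the message text below are hypothetical:

```javascript
// Rebuild a trimmed message list: keep the system prompt, the one assistant
// response that had the somewhat-working code, and the new change request.
// Everything else in the history is deliberately dropped.
function buildRetryMessages(history, keepIndex, newPrompt, systemPrompt) {
  const messages = [];
  if (systemPrompt) messages.push({ role: "system", content: systemPrompt });
  messages.push(history[keepIndex]); // the selected assistant response
  messages.push({ role: "user", content: newPrompt });
  return messages;
}

const history = [
  { role: "user", content: "write a parser" },
  { role: "assistant", content: "(broken first attempt)" },
  { role: "user", content: "fix it" },
  { role: "assistant", content: "(somewhat working code)" },
];

const msgs = buildRetryMessages(history, 3, "rename foo to bar", "You are terse.");
// msgs has 3 entries: system prompt, the kept response, the new request.
```

The trimmed list (plus whatever temperature/top_p you choose) is then what gets sent to the model, rather than the full back-and-forth.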


I like to think of AI as a force (or skill) multiplier. If you’re low skill, it doesn’t do much. The higher your skill, the more useful it is.


That hasn't been my experience, nor that of others using the AI.

It's a force constant, rather than a multiplier. If you're low skilled and ask it to do a low skilled task, it works fine. If you're high skilled and ask it to do the low skilled task, you save a tiny bit of time (less than the low skilled person does).

But it cannot do a high skilled task (at least, not right now). It can pretend, which can lead the low skilled person astray (but not the high skilled person).

Therefore, all AI does is raise the floor of what is achievable by the layman, rather than multiply the productivity of a high skilled programmer.


You could also be a mixed skilled developer. Good at regular code, architecture, and algorithms but not as familiar with a given UI framework. Having the LLM generate the html and css for a given layout description saves a lot of time looking through the docs.


Does it really? The thing is that there’s a domain model beneath each kind of library, and if they solve the same problem, you will find that they generally follow the same patterns.

Let’s take the web. React, Svelte, Angular, Alpine.js: they all solve the same problems - binding some kind of state to a template, handling form interactions, decomposing the page into “components”, … Once you’ve got the gist of that, it’s pretty easy to learn. And if you care about your code being correct, you still have to learn the framework to avoid pitfalls.

Same thing with 3D engines, audio frameworks, physics engines, math packages, …


Using myself as an example -- I'm a long time C programmer (occasionally in a professional setting, mostly personal or as a side-item on my primary professional duties). I've picked up other languages through the years, had to deliver a web based application a few years ago so I did a deep-dive into html5, css3, and javascript. Now javascript has evolved since then, and I lost a bit of what I learned.

So now I want to do a new web application -- If I fall back on my C roots, my Javascript looks a lot like C. Example: adding an item to an array. The C style in Javascript would be to track the length of the array in another variable "len", and do something like myarray[len++] = new_value;

I can feed this into an LLM, or even say "Give me the syntax to add a value to an array", and it gives me "myArray.push(newValue)", which reminds me that "Oh yeah, I'm dealing with a functional/object-oriented language, I can do stuff like this". And it reminds me that camelCase is preferred in Javascript. (Of course, this isn't the real situation I've run into, just a simplified example -- I really don't have all the method names memorized for each data type.) So in that manner it is useful for getting more concise (and proper) code.
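Both forms do work in JavaScript, which is exactly why the reminder is about idiom rather than correctness:

```javascript
// C-style append: manually track the length in a separate variable.
const a = [];
let len = 0;
a[len++] = "first";
a[len++] = "second";

// Idiomatic JavaScript: the array tracks its own length.
const b = [];
b.push("first");
b.push("second");

console.log(a.length === b.length); // prints: true
```

The `push` version also composes with the rest of the Array API (`pop`, `map`, `filter`, spread) without a stray `len` variable to keep in sync.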


I'm sure this is valuable for you, but here is my point of view.

I've worked professionally in many languages like Perl, Python, Kotlin, and C#, and dabbled in Common Lisp, Prolog, Clojure, and other exotic ones. Whenever I forget the exact syntax (like the loop DSL in CL), I know that there is a specific page in the docs that details all of this information. So just a quick keyword in a search engine and I will find it.

So when I come back to a language I haven't used in a while, I've got a few browser tabs opened. But that only lasts for a few days until I get back in the groove.

So for your specific example, my primary choice would have been the IDE autocompletion, then the MDN documentation for the array type. Barring that, I would have some books on the language opened (if it were some language that I'm learning).


LLMs will be best at HTML and CSS among programming languages because even if it's subtly wrong the browser will still render something.

Still, I'm not sure I need to spend $250/yr to occasionally generate some HTML that I could use a template to generate.


My high-skilled example from the other day: I wrote an algorithm out line by line in detail and had Claude turn it into AVX2 intrinsics. It worked really well, and I didn't have to dig through the manuals to remember how to get the lanes shuffled the way I needed. Probably saved me 10 minutes, but it would have been an annoying 10 minutes. :)

But that's a very low-level task, where I had already decided on the "how it should be done" part. I find in general that for things that aren't really obvious, telling the LLM _how_ to do it produces much more satisfactory results. I'm not quite sure how to measure that force multiplier -- I can put in the same amount of work as a junior person and get better output?


> Rule number 1 and the only rule: You need to be a subject matter expert.

Strong disagree. I've been coding for 25+years but never on the front-end side. I couldn't write JS w/ pen & paper with a gun against my head. But I know what to ask and how to make sure a react component does what I want it to do, with these tools.


Or, in other words, you are a subject matter expert and are agreeing with the person you’re responding to. Quote the full argument (emphasis added):

> Rule number 1 and the only rule: You need to be a subject matter expert. Be it program logic or be it programming language.

You are a subject matter expert in program logic, just not the programming language. You are supporting their point, not disagreeing.


I see where you're coming from. But later in the message they say:

> and if you do not understand the reason for the code and the programming language, you will take so much more time than if you did not even use the AI.

I guess that's the part I disagree on. The programming language is largely irrelevant now is what I'm seeing. And especially the "time to first result" is orders of magnitude smaller when using AI assistants than having to rtfm/google/so every little problem I encounter in an unfamiliar language.

And I am in no way stating that my code will match an expert in that field, of course.


How many thousands of lines does your AI-generated frontend code have?

Do you have to maintain the code?


> head. But I know what to ask and how to make sure a react component does what I want it to do

That is the subject matter expertise that many lack.


Knowing how to code, and having a lot of experience and an "intuitive" sense of what is a good idea and what is a bad idea, also puts you in a position to question the advice the AI gives you. Just now I was asking Claude to help me with an issue with a React component and it told me to add useEffect with a timer. I am not a React expert, but that immediately felt like a code smell to me, so I followed up:

> is it weird or an anti-pattern to use a timer like this?

The response:

> Yes, using a timer like this is generally considered an anti-pattern in React for several reasons: It introduces non-deterministic behavior (timing-dependent code), It's a workaround rather than addressing the root cause, It can be brittle and lead to race conditions.

I'm sure all those things are true. This is a classic example of the problem with people using AI programming tools while lacking a real understanding of what they're doing. They don't know enough to question the advice they're getting, let alone properly review the code it's generating.

The other day, in a Rails app, Claude generated a bunch of code that spawned various threads to accomplish certain things I needed to do asynchronously. Maybe these days, in Ruby 3 and Rails 8, this is safe. But I remember that back in the Rails 2 days, going off and spawning new threads was not a good idea. Plus, I have a back-end async job processor already set up. Again, I questioned the approach. The revised code I got back was a lot simpler, and once I'd reviewed and tested it, I (mostly) used it as-is.


that's the thing: if you're inquisitive and have an interest in learning things then you can still go far with AI coding. can you explain why this code works?

is this the best way to do it or are there other solutions? what are the pros and cons?

are there security problems with this? how could I make this code more secure?

what are some things I should look out for with AI coding (meta question)?

what does this error mean?

just talking back and forth with the AI on the phone you can get a high level understanding of a topic pretty quickly and way more in depth and personalized than a tutorial on the internet.


> inquisitive and have an interest in learning things

Traits much less common than you might think among people who want to get into programming.


and that's just about any topic.


> Much of the time the problem is simply an invented method name that doesn’t exist

I spent a solid 2 hours yesterday trying to get an SSDP protocol implementation going because the LLM was absolutely insistent upon using 3rd party libraries that don't exist and UDP client methods defined in Narnia. I had to spoon feed it half-way attempts before I could get it to budge on useful code. This was all before I realized we had a problem with multicast group membership and multiple network adapters.

These models definitely can help (I wouldn't have gotten as far as I did without one), but you need to know what you want every step of the way. Having mere "vibes" about a sophisticated end result will result in unhappy outcomes. I think the model would have made my life much worse if I wasn't as cynical and suspicious regarding every aspect of its operation. I can see how these models would steal learning opportunities from more novice developers. Breaking out Wireshark is the sort of desperation that only arises when you can't constantly ping some rubber duck for shreds of hope (or once you realize there is no hope).


I gave up on AI because of that. The old IDEs that use an AST for autocomplete still exist and work very well, letting me hit tab and get the correct function filled in. They are also very good at the little pop-up that tells me what the parameter I'm trying to fill in really is - the AI has no clue what order the arguments really are and so often gets it wrong. They won't complete 1000 lines of code - but that is only rarely a savings, as most 1000-line code snippets I've worked with are just as fast to write myself (I've been programming for 30 years) as to figure out how the AI got some details wrong.

If the AI had access to the AST and could know what functions exist, it might be helpful. Then it could write the function it wished existed, if it doesn't. However, that means it would need to understand the code, not just the structure.


> Now of course people learning the traditional way have these same problems, but they’re debugging code they wrote, not gobbledygook from an AI.

I’m not sure this is true. Prior to AI you saw a lot of the same behavior, but it was with code copied and pasted from stack overflow, or tutorials, or what have you.

I don’t think AI has changed much in terms of behavior. There has always been a subset of people who have just looked for getting something that “worked” without understanding why, whether that’s from an AI code assistant, or an online forum, or their fellow teammates, and others who want to understand why something works. AI has perhaps made this more apparent, but it’s always been a thing.


The difference is that the code people are copy-pasting isn't randomly mutated for each person doing so, and if they take the time to go back to where they got it, there is likely also an explanation or more info about it, if they care to take the time to read.


Subreddits focused on gamedev have long stopped being about the craft itself, unfortunately.

90% of the posts are about marketing or are self-help/motivational.

Anything related to art, sound or programming barely gets upvotes.


They like to talk about feelings a lot. Lots of posts about how it feels when their upcoming game reaches <number> of wishlists on Steam, or how it felt when their low resolution pixel art game using Kenney's asset packs flopped against all odds.


The other side effect of that is the difficulty of socializing, albeit online, with those who care about the craft itself.

To paraphrase a meme "best I can do is text editor wars".


Yep.

Gamedev.net was an amazing hang back in the early 2000s up to the 2010s.

Now it's just a perpetual "how do I do this with Unity" that is super hard to filter.

I just go to meetups now.

> best I can do is text editor wars

Or Unreal vs Unity wars in this case :'D


Browse by new though and you’ll see it. Post after post with zero comments about questions or problems that are answered in the docs.


And the people who can actually answer have been driven away by the money-chasers.


IME these tend to be the same people arguing that programmers will all be out of a job in 10 years. It makes me wonder why they persist.


Real-life example. Recent conversation with a colleague:

    Hey, trying to translate excel sheet with chatgpt, cant understand what to do (posts screenshot with explanation and example "pip install [package-name]")
    You just need to execute specified command in your environment
    What is "my environment"?


vibe coding is overrated. The good sequence is: learn to code, then vibe code when possible. Vibe coding is a trap for junior software engineers. https://www.lycee.ai/blog/why-vibe-coding-is-overrated


Appears to be a trap for idiot engineers as well. Some who are quite senior.


Yeah, I like using LLMs when I code, but every time I've tried "vibe coding" (i.e. letting the LLM do the whole thing start to finish) it's never written a full, functioning app that doesn't have bugs in the basic functionality. I tried this just a couple days ago with the SOTA Gemini 2.5 Pro - it wrote itself into a corner and when I gave it the logs from my bugs it literally said "this doesn't make sense" and got stuck. That finally prompted me to take a look at the code and I immediately knew what it got wrong.


> a problem that is trivially solved by the error message and documentation

Then why does the AI not solve it anyways?

I think understanding "the code" will eventually be as important as understanding machine code or assembly nowadays - still very important for a small number of devs, important in very rare cases for some other devs and completely irrelevant to the majority of devs.


Perhaps because "A.I." doesn't understand anything, it just makes plausible output based on its training data. LLM is a much better term, because intelligence has connotations of understanding, whereas model does not.


Does your question not refute your statement? The AI doesn't solve the problem because it can't.


Yeah, in a sense it does. But I think the problem here isn't "the AI" but actually the tooling or the way "the AI" is applied by the person. (i.e. being unable to even c&p the error message into the AI)

Because from my experience, sonnet 3.7 can almost always fix issues of that type and if it can't, it's usually not "trivial" at least not by my understanding of the word.


That’s an interesting observation, because a corollary of this would be: if young people believe this to be true and don’t start learning to code now, there will be an even greater shortage of developers, unless AI systems become insanely better than today’s.


If anything, debugging gobbledygook an AI wrote is good practice for debugging gobbledygook John from 4 years ago wrote.


Hah, but at least that code probably ran.


As a very experienced programmer who has experimented with using AI: I can't imagine trying to do anything useful with it if you don't understand the code it generates.

Even if the code it generates works, what happens when you need to change it and no longer have that AI conversation and its state available?

Then there's the security nightmare this is going to be. All this slop-code generated by ignorant hustlers with AIs is going to be a hellscape of security bugs.


It was always thus, and will continue to be.

The scale may change, but that is likely just because more people code. You couldn't do X but now you almost can!

> Now of course people learning the traditional way have these same problems, but they’re debugging code they wrote,

Really? They've not copied and pasted something from SO? They're that early in their coding journey?

> It’s almost a little sad. If they just take the time to actually learn what they’re doing they’ll be able to accomplish so much more.

Leading a horse to water, etc. but llms are excellent at being a patient teacher of basic coding.

Frankly they can be excellent at lots of things people keep saying they're bad at but some seem to refuse to learn how to use them as a tool. It reminds me of watching people be unable to use Google in the past - how to not just ask a question but search for information.




