
In recent weeks, I have made huge changes to my main codebase. Pretty sweeping stuff, things that would have taken me months to get right: both big, architecturally important changes and minor housekeeping tasks.

None of it could have been done without AI, yet I am somehow inclined to agree with the sentiment in this article.

Most of what I've done lately is, in some strange sense, just typing quickly. I already knew what changes I wanted; in fact, I had them documented in Trello. I already understood what the code did and where I wanted it to go. What was stopping me, then?

Actually, it was the dread loop of "aw gawd, gotta move this to here, change this, import that, see if it builds, goto [aw gawd]". To be fair, it isn't just typing; there ARE actual decisions to be made as well, but all with a certain structure in mind. So the dread loop would take a long, long time.

To the extent that I'm able to explain the steps, Claude has been wonderful. I can tell it to do something, it will make a suggestion, and I will correct it. Very little toil, and being able to make changes quickly actually opens up a lot of exploration.

But I wonder where I would be if AI had not been invented at this point in my career. I wonder what I will teach my teenager about coding.

I've been using a computer to write trading systems for a long time now. I've slogged through some very detailed little things over the years: everything from how networks function, to how C++ compiles things, to how various exchanges work at the protocol level, to how the strats make money.

I consider it all a very long apprenticeship.

But the timing of AI, for me, is very special. I've worked through a lot of minutiae in order to understand stuff, and just as it's coming together in a greater whole, I get this tool that lets me zip through the tedium.

I wonder, is there a danger to giving the tool to a new apprentice? If I send my kid off to learn coding using the AI, will it be a mess? Or does he get to mastery in half the time of his father? I'm not sure the answer is so obvious.



No one knows for sure. The truth is we'll have to burn one generation as experimental test subjects before we can get an idea of what works and what doesn't when it comes to AI-assisted education. There's a lot we'll probably get wrong, and some kids will be screwed, but hopefully we can do better for the generations after.


Very good articulation of the problem. My bet is that it's probably fine, in the sense that the mastery itself is not going to be relevant.

I want to view AI coding as the invention of a new coding tool, at least in the way you described. I hope it will be more like punch cards -> assembly -> BASIC/C -> scripting languages -> some sort of well-structured natural language.


I think it depends on the effort put into reading and understanding the code being generated. The article assumes that extended use of LLMs leads to a shift toward _not reviewing and validating the code_; it points this out as the wrong thing to do, but then goes on assuming that's what you'll do. I think it's like reading books: there are various degrees of reading comprehension, from skimming for content/tone, to reading for enjoyment, to studying for application, to active analysis like preparing for a book club. There isn't a prescribed depth of reading for any document, but context and audience have an effect on what depth is appropriate. With code, if it's a one-off utility that can be verified for a specific application, sure, why not just look at its output and skip the code, full vibing, especially if it doesn't have any authority on its own. But if it's business critical, it had better still have at least two individuals read over it, plus whatever other CONTRIBUTING-related policies apply.

And it's not just complacency: this illusion of mastery cuts even harder for those who haven't really developed the skills to review the code. And some code is just easier to write than it is to read, or easier to generate with confidence using an automaton or some macro-level code, which LLMs often won't produce (they prefer inlining sub-solutions, repeated in various styles) unless you have enough mastery to know how to ask for the appropriate abstraction and still would rather not just write the deterministic version yourself.

   > I wonder, is there a danger to giving the tool to a new apprentice? If I send my kid off to learn coding using the AI, will it be a mess
As long as your kid develops the level of mastery needed to review the code, and takes the time to review it, I don't think it'll be a mess (or at least not one too large to debug). A lot of this depends on how role models use the tool, I think. If the response to a problem is always a nonchalant "oh, we'll just re-roll or change the prompt and see", then I doubt there will be mastery. If the response is "hmm, *opens debugger*", then it's much more likely.

I don't think there's anything wrong with holding back on access to LLM code generators, but that's effectively saying no access to any modern LLMs at this point, so maybe it's too restrictive; tbh I'm glad that's not a decision I have to make for any young person these days. But separate from that, you can still encourage a status quo of due diligence for any code that gets generated.



