LLMs are built based on human language and texts produced by people, and imitate the same exact reasoning patterns that exist in the training data. Sorry for being direct, but this is literally unsurprising. I think it is important to realize it to not anthropomorphize LLM / AI - strictly speaking they do not *become* anything.
> I feel the effects of this are going to take a while to be felt (5 years?);
Who knows if we'll even need senior devs in 5 years. We'll see what happens. I think the role of software development will change so much those years of technical experience as a senior won't be so relevant but that's just my 5 cents.
The way I'm using claude code for personal projects, I feel like most devs will become moreso architects and testers of the output, and reviewers of the output. Which is good, plenty of us have said for ages, devs dont read code enough. Well now you get to read it. ;)
While the work seems to take similar amounts of time, I spend drastically less time fixing bugs, bugs that take me days or God forbid weeks, solved in minutes usually, sometimes maybe an hour if its obscure enough. You just have to feed the model enough context, full stack trace, every time.
Man, I wish this was true. I've given the same feedback on a colleague's clearly LLM-generated PRs. Initially I put effort into explaining why I was flagging the issues, now I just tag them with a sadface and my colleague replies "oh, cursor forgot." Clearly he isn't reading the PRs before they make it to me; so long as it's past lint and our test suite he just sends the PR.
I'd worry less if the LLMs weren't prone to modifying the preconditions of the test whenever they fail such that the tests get neutered, rather than correctly resolving the logic issues.
We need to develop new etiquette around submitting AI-generated code for review. Using AI for code generation is one thing, but asking other people review something that you neither wrote nor read is inconsiderate of their time.
I'm getting AI generated product requirements that they haven't read themselves. It is so frustrating. Random requirements like "this service must have a response time of 5s or less" - "A retry mechanism must be present". We have a specific SLA already for response time and the designs don't have a retry mechanism built.
The bad product managers have become 10x worse because they just generate AI garbage to spray at the engineering team. We are now writing AI review process for our user stories to counter the AI generation of the product team. I'd much rather spend my time building things than having AI wars between teams.
Oof. My general principle is "sending AI-authored prose to another human without at least editing it is rude". Getting an AI-generated message from someone at all feels rude to me, kind of like an extreme version of "dictated but not read" being in a letter in the old days.
At least they're running the test suite? I'm working with guys who don't even do that! I've also heard "I've fixed the tests" only to discover, yes, the tests pass now, but the behavior is no longer correct...
> I feel like most devs will become moreso architects and testers of the output
which means either devs will take over architectural roles (which already exist and are filled) or architects will take over dev roles. same goes for testing/QA - these are already positions within the industry in addition to being hats that we sometimes put on out of necessity or personal interest.
I've seen Product Manager / Technical Program Manager types leaning into using AI to research what's involved in a solution, or even fix small bugs themselves. Many of these people have significant software experience already.
This is mostly a good thing provided you have a clear separation between solution exploration and actually shipping software - as the extra work put into productionizing a solution may not be obvious or familiar to someone who can use AI to identify a bugfix candidate, but might not know how we go about doing pre-release verification.
> I feel like most devs will become moreso architects and testers of the output
Which stands to reason you'll need less of them. I'm really hoping this somehow leads to an explosion of new companies being built and hiring workers , otherwise - not good for us.
> Which stands to reason you'll need less of them.
Depends on how much demand there would be for somewhat-cheaper software. Human hours taken could well remain the same.
Also depends on whether this approach leads to a whole lot of badly-fucked projects that companies can’t do without and have to hire human teams to fix…
This is what I'm doing, Opus 4.5 for personal projects and to learn the flow and what's needed. Only thing I'll disagree with is how the work takes similar amount of time because I'm finding it unbelievably faster. It's crazy how with smart planning and documentation that we can do with the agents, getting markdown files etc, they can write the code better and faster than I can as a senior dev. No question.
I've found Opus 4.5 as a big upgrade compared to any of the other models. Big step up and the minor issues that were annoying and I needed to watch out for with Sonnet and GPT5.1.
It's to the point where I'm on the side of, if the models are offline or I run out of tokens for the 5 hour window or the week (with what I'm paying now), there's kind of no use of doing work. I can use other models to do planning or some review, but then wait until I'm back with Opus 4.5 to do the code.
It still absolutely requires review from me and planning before writing the code, and this is why there can be some slop that goes by, but it's the same as if you have a junior and they put in weak PRs. Difference is much quicker planning which the models help with, better implementation with basic conventions compared to juniors, and much easier to tell a model to make changes compared to a human.
> This is what I'm doing, Opus 4.5 for personal projects and to learn the flow and what's needed. Only thing I'll disagree with is how the work takes similar amount of time because I'm finding it unbelievably faster.
I guess it depends on the project type, in some cases like you're saying way faster. I definitely recognize I've shaved weeks off a project, and I get really nuanced and Claude just updates and adjusts.
I'm impressed by this. You know in the beginning I was like hey why doesn't this look like counterstrike ? yeah I had the exepectation this things can one shot an industry leading computer game. Of course that's not yet possible.
But still, this is pretty damn impressive for me.
In a way, they really condensed perfectly a lot of what's silly currently around AI.
> Codex, Opus, Gemini try to build Counter Strike
Even though the prompt mentions Counter Strike, it actually asks to build the basics of a generic FPS, and with a few iterations ends up with some sort of minecraft-looking generic FPS with code that would never make it to prod anywhere sane.
It's technically impressive. But functionally very dubious (and not at all anything remotely close to Counter-Strike besides "being an FPS").
The same can be said about hucksters of all stripes, yes.
But maybe not contrarians/non-contrarians? They are just the agree/disagree commentators. And much of the most valuable commentary is nuanced with support for and against their own position. But generally for.
> So when should we start to be worried, as developers ?
I've been worrying ever since chatgpt 3 came out, it was shit at everything but it was amazing as well. And in the last 3 years the progress was incredible. I don't know if you "should" worry, worrying for the sake of it isn't helping much, but yes we should all be mentally prepared to the possibility we won't be able to make a living doing this X years from now. Could be 5, could be 10 , could be less than 5 even.
God, I’d love to once again be working at a company where coding speed mattered.
Meanwhile in non-tech Bigcos the slow part of everything isn’t writing the code, it’s sorting out access and keys and who you’re even supposed to be talking to, and figuring out WTF people even want to build (and no, they can’t just prompt an LLM to do it because they can’t articulate it well, and don’t have any concept of what various technologies can and cannot do).
The code is already like… 5% of the time, probably. Who gives a damn if that’s on average 2x as fast?
I agree that coding isn't all we do by if agentic A.I progresses far enough it can drastically reduce the amount of people you're supposed to talk to, figure out what they want to build etc. There'll be way fewer of those around - including some of us unfortunately.
Sure but your therapist is also monetizing your pain for his own gain. Either A.I therapy works (e.g can provide good mental relief) or it doesn't. I tend to think it's gonna be amazing at those things talking from experience (very rough week with my mom's health deterioriating fast, did a couple of sessions with Gemini that felt like I'm talking to a therapist). Perhaps it won't work well for hard issues like real mental disorders but guess what human therapists are very often also not great at treating people with serious issue.
I agree, in severe mental disorders A.I is probably not enough (and afaik it acknowledges this to the user right away) but it might be able to help. If we accept/believe that the act of talking about your problems and having them reframed back to you empathetically helps , I don't see why we can't accept LLMs can help here. I don't buy that only a human can do that.
But one is a company ran by sociopaths that have no empathy and couldn't care less about anything but money, while the other is a human that at least studied the field all their life.
> But one is a company ran by sociopaths that have no empathy and couldn't care less about anything but money, while the other is a human that at least studied the field all their life.
Unpacking your argument you make two points:
1) The human has studied all his life; yes, some humans study and work hard. I have also studied programming half my life and it doesn't mean A.I can't make serious contributions in programming and that A.I won't keep improving.
2) These companies, or OpenAI in particular, are untrustworthy many grabbing assholes. To this I say if they truly care about money they will try to do a good job, e.g provide an A.I that is reliable, empathetic and that actually help you get on with life. If they won't - a competitor will. That's basically the idea of capitalism and it usually works.
I partly agree and partly disagree. Yes, we're more individual and more isolated. But ChatGPT/Gemini can really provide mental relief for people - not everyone can afford or have the time/energy to find a good human therapist close to their home. And this thing lives in your computer or phone and you can talk to it to get mental relief 24 / 7. I don't see it as bleak as you see it, mental help should be accessible and free for everyone.
I know, we've had a bad decade with platforms like Meta/TikTok but I'm not convinced as you are the current LLMs will have an adverse effect.
Jeez they really ARE becoming human like
reply