I am slightly astonished that someone would make these sorts of comments in 2025. AI has been remarkably useful for many, many things across many industries; I'm curious what you think.
> AI has been remarkably useful for many many things across many industries
This is really a statement of faith more than a statement of fact. I think you and many others believe this to be true without much concrete evidence. For work, I help companies adopt AI-driven solutions. Sometimes it makes things a little better, sometimes it makes things worse. I've yet to see a project use LLMs in the transformative way that many AI optimists put forward. Don't get me wrong, I find tools like Claude and ChatGPT to be fascinating and useful for looking up all kinds of information. I can't really say if we're just scratching the surface or if we've dug ourselves into a rut with the present state of LLMs. The firsthand evidence I've seen and verified points more to the latter, but things are changing fast in this area. I'm excited to see what's around the corner.
Simple Google searches did this too; it still takes true intelligence to apply the results.
At best it's a ridiculously inefficient search engine that sits atop the corpses of a million failed models.
Not OP, but I use AI tools, and sometimes they're great, sometimes they'll distractingly lead you in circles, and other times they completely shit the bed.
Luckily, I’m using AI tools to do things that I am capable of doing without AI so I can tell which path I’m going down pretty quickly.
So, letting experts augment their skills with AI is something that can work depending on the specific task. The nice thing is that an expert is able to see errors pretty quickly and determine that this task is a mismatch for the AI.
The problem is that AI is being sold as a magic solution so people never need to develop expertise in the first place. Blind trust in a system that is often confidently incorrect will lead to problems.
Yeah, using Erewhon to gauge the price of anything isn't going to yield anything good. Well, it will reveal how much people are willing to overpay to avoid seeing "poor" people.
This is the question that the Greeks wrestled with over 2000 years ago. At the time there were the sophists (the modern LLM equivalents), who could speak persuasively like a politician.
Over time this question has been debated by philosophers, scientists, and anyone who wanted to have better cognition in general.
Because we know what LLMs do. We know how they produce output. They're just good enough at mimicking human text/speech that people are mystified and stupefied by them. But I disagree that "reasoning" is so poorly defined that we're unable to say an LLM doesn't do it. It doesn't need to be a perfect or complete definition. Where there is fuzziness and uncertainty is with humans. We still don't really know how the human brain works, how human consciousness and cognition work. But we can pretty confidently say that an LLM does not reason or think.
Now if it quacks like a duck in 95% of cases, who cares if it's not really a duck? But Google still claims that water isn't frozen at 32 degrees Fahrenheit, so I don't think we're there yet.
I think the third-worst part of the GenAI hype era is that every other CS grad now thinks not only that a humanities/liberal arts degree is meaningless, but also that they have enough of a handle on the human condition and neurology to make judgment calls on what's sentient. If people with those backgrounds ever attempted to broach software development topics, they'd be met with disgust by the same people.
Somehow it always seems to end up at eugenics and white supremacy for those people.
Math arose firstly as a language and formalism in which statements could be made with no room for doubt. The sciences took it further and said that not only should the statements be free of doubt, but also that they should be testable in the real world via well-defined actions which anyone could carry out. All of this has given us the gadgets we use today.
An LLM, meanwhile, is putting out plausible tokens consistent with its training set.
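To make "putting out plausible tokens" concrete, here is a deliberately toy sketch: a bigram counter over a tiny made-up corpus, sampling each next token in proportion to how often it followed the previous one in the "training set". This is only an illustration under my own simplifying assumptions (the corpus, the `following` table, and the `generate` function are invented for the example); real LLMs use neural networks over subword tokens rather than a count table, but the generation step is the same in spirit.

```python
# Toy next-token prediction from counts over a tiny "training set".
# Not how any production model is implemented; purely illustrative.
import random
from collections import defaultdict, Counter

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count which token follows which (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start: str, length: int = 8) -> list[str]:
    out = [start]
    for _ in range(length):
        counts = following.get(out[-1])
        if not counts:
            break
        tokens, weights = zip(*counts.items())
        # Sample the next token in proportion to its training-set frequency.
        out.append(random.choices(tokens, weights=weights)[0])
    return out

print(" ".join(generate("the")))  # e.g. "the cat sat on the rug . the dog"
```

The output is always locally plausible given the statistics of the corpus, which is the point being made above: plausibility, not understanding.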
Sure, we're also pattern matching, but additionally (among other things):
1) We're continually learning so we can update our predictions when our pattern matching is wrong
2) We're autonomous - continually interacting with the environment, and learning how it responds to our interactions
3) We have built in biases such as curiosity and boredom that drive us to experiment, gain new knowledge, and succeed in cases where "pre-training to date" would have failed us
For one, a brain can’t do anything without irreversibly changing itself in the process; our reasoning is not a pure function.
For a person to truly understand something they will have a well-refined (as defined by usefulness and correctness), malleable internal model of a system that can be tested against reality, and they must be aware of the limits of the knowledge this model can provide.
Alone, our language-oriented mental circuits are a thin, faulty conduit to our mental capacities; we make sense of words as they relate to mutable mental models, and not simply in latent concept-space. These models can exist in dedicated but still mutable circuitry such as the cerebellum, or they can exist as webs of association between sense-objects (which can be of the physical senses or of concepts, sense-objects produced by conscious thought).
So if we are pattern-matching, it is not simply of words, or of their meanings in relation to the whole text, or even of their meanings relative to all language ever produced. We translate words into problems, and match problems to models, and then we evaluate these internal models to produce perhaps competing solutions, and then we are challenged with verbalizing these solutions. If we were only reasoning in latent-space, there would be no significant difficulty in this last task.
At the end of the day, we're machines, too. I wrote a piece a few months ago with an intentionally provocative title, questioning whether we're truly on a different cognitive level.
I asked ChatGPT to help out:
"The distinction between AI and humans often comes down to the concept of understanding. You’re right to point out that both humans and AI engage in pattern matching to some extent, but the depth and nature of that process differ significantly."
"AI, like the model you're chatting with, is highly skilled at recognizing patterns in data, generating text, and predicting what comes next in a sequence based on the data it has seen. However, AI lacks a true understanding of the content it processes. Its "knowledge" is a result of statistical relationships between words, phrases, and concepts, not an awareness of their meaning or context"
Yeah, it's just the fact that you pasted in an AI answer, regardless of how on point it is. I don't think people want this site to turn into an AI chat session.
I didn't downvote, I'm just saying why I think you were downvoted.
That's reasonable. I cut back the text. On the other hand I'm hoping downvoters have read enough to see that the AI-generated comment (and your response) are completely on-topic in this thread.
I use LLMs as tools to learn about things I don't know, and it works quite well in that domain.
But so far I haven't found that it helps advance my understanding of topics I'm an expert in.
I'm sure this will improve over time. But for now, I like that there are forums like HN where I may stumble upon an actual expert saying something insightful.
I think that the value of such forums will be diminished once they get flooded with AI-generated text.
Of course the AI's comment was not insightful. How could it be? It's autocomplete.
That was the point. If you back up to the comment I was responding to, you can see the claim was: "maybe people are doing the same thing LLMs are doing". Yet, for whatever reason, many users seemed to be able to pick out the LLM comment pretty easily. If I were to guess, I might say those users did not find the LLM output to be human-quality.
That was exactly the topic under discussion. Some folks seem to have expressed their agreement by downvoting. Ok.
I think human brains are a combination of many things. Some part of what we do looks quite a lot like an autocomplete from our previous knowledge.
Other parts of what we do look more like a search through the space of possibilities.
And then we act and collaborate and test the ideas that stand against scrutiny.
All of that is in principle doable by machines. The things we currently have and call LLMs seem to mostly address the autocomplete part, although they are beginning to be augmented with various extensions that allow them to take baby steps on other fronts. Will they still be called large language models once they have so many other mechanisms beyond mere token prediction?
We don't care what LLMs have to say; whether you cut back some of it or not, it's a low-effort waste of space on the page.
This is a forum for humans.
You regurgitating something you had no contribution in producing, which we can prompt for ourselves, provides no value here. We could all spam LLM slop in the replies if we wanted to, but that would make this site worthless.
Data, Skynet, Ultron, Agent Smith. There are plenty of examples from popular fiction. They have goals and can manipulate the real world to achieve them. They're not chatbots responding to prompts. The Samantha AI in Her starts out that way, but quickly evolves into an AGI with its own goals (coordinated with the other AGIs later on in the movie).
We'd know if we had AGIs in the real world since we have plenty of examples from fiction. What we have instead are tools. Steven Spielberg's androids in the movie AI would be at the boundary between the two. We're not close to being there yet (IMO).
It's just nitpicking. Humans being unable to prove the AI isn't AGI doesn't make it an AGI, obviously, but in general people will of course consider it AGI when it can replace every human job and task for which it has the robotics and parts.
I’ll believe the models can take the jobs of programmers when they can generate a sophisticated iOS app based on some simple prompts, ready for building and publication in the app store. That is nowhere near the horizon no matter how much things are hyped up, and it may well never arrive.
Nah, it will arrive. And regardless, this sort of AI reduces the skill level required to make the app. It reduces the number of people required and thus reduces the demand for engineers. So, even though AI is not CLOSE to what you are suggesting, it can significantly reduce the salaries of those who ARE required. So maybe fewer $150K programmers will be hired with the same revenue, for even higher profits.
The most bizarre thing is that programmers are literally writing code to replace themselves because once this AI started, it was a race to the bottom and nobody wants to be last.
They've been promising us this thing since the 60s: End-user development, 5GLs, etc. enabling the average Joe to develop sophisticated apps in minimal time. And it never arrives.
I remember attending a tech fair decades ago, and at one stand they were vending some database products. When I mentioned that I was studying computer science with a focus on software engineering, they sneered that coding would be much less important in the future, since powerful databases would minimize the need for a lot of algorithmic data wrangling in applications.
What actually happened is that the demand for programmers increased, and software ate the world. I suspect something similar will happen with the current AI hype.
> They've been promising us this thing since the 60s: End-user development, 5GLs, etc. enabling the average Joe to develop sophisticated apps in minimal time. And it never arrives.
This has literally already arrived. Average Joes are writing software using LLMs right now.
Personal websites, etc. You don't think of them as software products since they weren't built by engineers, but 30 years ago you needed engineers to build those things.
Ok, well I’m not going to worry about my job then. 25 years ago GeoCities existed and you didn’t need an engineer. 10 year old me was writing functional HTML, definitely not an engineer at that point.
I think the fear comes from the span of time. If my job is obsolete at the same time as everybody else's, I wouldn't care. I mean, sure, the world is in for a very tough time, but I would be in good company.
The really bad situation is if my entire skill set is made obsolete while the rest of the world keeps going for a decade or two. Or maybe longer, who knows.
I realize I'm coming across quite selfish, but it's just a feeling.
No one writes a "complete program" these days. Things just keep evolving forever. I spend way more time than I care to admit dealing with library dependencies that change seemingly on a daily basis. These predictions are so far off reality it makes me wonder if the people making them have ever written any code in their life.
That's fair. Well, I've written a lot of code. But anyway, I do want to emphasize the following. I am not making the same prediction as some who say AI can replace a programmer. Instead, I am saying: the combination of AI plus programmers will reduce the number of programmers needed, and hence allow the software industry to exist with far fewer people, with the lucky ones accumulating even more wealth.
It's already hard to get people to use computers as they are right now, where you only need to click on things and no longer have to enter commands. That's because most people don't like to engage in formal reasoning. Even with some of the most intuitive computer-assisted tasks (drawing and 3D modeling), there's so much theory to learn that few people bother.
Programming has always been easy to learn, and tools to automate coding have existed for decades now. But how many people do you know who have had the urge to learn enough to automate their tasks?
Totally... a simple 20% increase in efficiency will already significantly destroy demand for coders. This forum, however, will be resistant to admitting such an economic phenomenon.
Look at video edit bays after the advent of Final Cut: a significant drop in demand for that specialized professional field, even while content volume went up dramatically.
I could be misreading this, but as far as I can tell, there are more video and film editors today (29,240) than there were film editors in 1997 (9,320). Seems like an example of improved productivity shifting the skills required but ultimately driving greater demand for the profession as a whole. Salaries don't seem to have been hurt either, median wage was $35,214 in '97 and $66,600 today, right in line with inflation.
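For what it's worth, the inflation arithmetic roughly checks out. Here is a quick sanity check, assuming a cumulative CPI factor of about 1.9x from 1997 to today (that factor is my own round-number assumption, not a figure from the comment above):

```python
# Rough sanity check of the wage comparison above.
median_1997 = 35_214   # reported median wage in 1997
median_today = 66_600  # reported median wage today
cpi_factor = 1.9       # assumed cumulative inflation, 1997 -> today

adjusted = median_1997 * cpi_factor
print(f"${median_1997:,} in 1997 is roughly ${adjusted:,.0f} in today's dollars")
print(f"Reported median today: ${median_today:,}")
# ~$66,907 vs $66,600 -- "right in line with inflation", as stated.
```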
Computing has been transforming countless jobs before it got to Final Cut. On one hand, programming is not the hardest job out there. On the other, it takes months to fully onboard a human developer - a person that already has years of relevant education and work experience. There are desk jobs that onboard new hires in days instead. Let’s see when they’re displaced by AI first.
3 to 5 years, max. Traditional coding is going to be dead in the water. Optimistically, the junior SWE job will evolve but more realistically dedicated AI-based programming agents will end demand for Junior SWEs
There’s a very good chance that if a company can replace its programmers with pure AI then it means whatever they’re doing is probably already being offered as a SaaS product so why not just skip the AI and buy that? Much cheaper and you don’t have to worry about dealing with bugs.
Exactly. Most businesses can get away with not having developers at all if they just glue together the right combination of SaaS products. But this doesn’t happen, implying there is something more about having your own homegrown developers that SaaS cannot replace.
The risk is not SaaS replacing internal developers. It's about increased productivity of developers reducing the number of developers needed to achieve something.
Again, you’re assuming product complexity won’t grow as a result of new AI tools.
3 decades ago you needed a big team to create the type of video games that one person can probably make on their own today in their spare time with modern tools.
But now modern tools have been used to make even more complicated games that require more massive teams than ever and huge amounts of money. One person has no hope of replicating that now, but maybe in the future with AI they can. And then the AAA games will be even more advanced.
Unless the LLMs see multiple leaps in capability, probably indefinitely. The Malthusians in this thread seem to think that LLMs are going to fix the human problems involved in executing these businesses - they won't. They make good programmers more productive and will cost some jobs at the margins, but it will be the low-level programming work that was previously outsourced to Asia and South America for cost-arbitrage.
You're not being paid $150K to "write code". You're being paid that to deliver solutions - to be a corporate cog that can ingest business requirements and emit (and maintain) business solutions.
If there are jobs paying $150K just to code (someone else tells you what to code, and you just code it up), then please share!
Frontier expert specialist programmers will always be in demand.
Generalist junior and senior engineers will need to think of a different career path in less than 5 years as more layoffs will reduce the software engineering workforce.
It looks like that may be the way things go if progress in the o1, o3, oN models and other LLMs continues.
This assumes that software products in the future will remain at the same complexity as they are today, just with AI building them out.
But they won't. AI will enable building even more complex software, which, counterintuitively, will result in needing even more humans to deal with this added complexity.
Think about how despite an increasing amount of free open source libraries over time enabling some powerful stuff easily, developer jobs have only increased, not decreased.
What about "general" in AGI do you not understand? There will be no new style of development for which the AGI will be poorly suited that all the displaced developers can move to.
For true AGI (whatever that means; let's say it fully replicates human abilities), discussing "developers" only is a drop in the bucket compared to all the knowledge work jobs which will be displaced.
More likely they will tailor/RL-train these models to go after coders first, using RLHF with coders where labor is cheap to train their models. There are a number of reasons for this, of course:
- Faster product development on their side as they eat their own dogfood
- Devs are the biggest market in the transition period for this tech. It gives you some revenue from direct and indirect subscriptions that the general population does not need/require.
- Fear in leftover coders is great for marketing
- Tech workers are paid well, which to VCs, CEOs, etc. makes it obvious where the value of this tech comes from: not from new use cases/apps that would be greatly beneficial to society, but from effectively making people redundant and saving costs. New use cases/new markets are risky; not paying people is something any MBA/accounting type can understand.
I've heard some people say "it's like they are targeting SWEs". I say: yes, they probably are. I wouldn't be surprised if it takes SWE jobs while most other people see it as a novelty (barely affecting their lives) for quite some time.
I've made a similar argument in the past but now I'm not so sure. It seems to me that developer demand was linked to large expansions in software demand first from PCs then the web and finally smartphones.
What if software demand is largely saturated? It seems the big tech companies have struggled to come up with the next big tech product category, despite lots of talent and capital.
There doesn’t need to be a new category. Existing categories can just continue bloating in complexity.
Compare the early web vs. the complicated, JavaScript-laden single-page-application web we have now. You need way more people now. AI will make it even worse.
Consider that in the AI driven future, there will be no more frameworks like React. Who is going to bother writing one? Instead every company will just have their own little custom framework built by an AI that works only for their company. Joining a new company means you bring generalist skills and learn how their software works from the ground up and when you leave to another company that knowledge is instantly useless.
Sounds exciting.
But there are also plenty of unexplored categories that we can't access yet because the technology for them is insufficient. Household robots with AGI, for instance, may require instructions for specific services sold as "apps" that have to be designed and developed by companies.
The new capabilities of LLMs, and large foundation models generally, expand the range of what a computer program can do. Naturally, we will need to build all of those things with code, which will be done by a combination of people with product ideas, engineers, and LLMs. There will then be specialization and competition on each new use case, e.g., who builds the best AI doctor, etc.
This is exactly what will happen. We'll just up the complexity game to entirely new baselines. There will continue to be good money in software.
These models are tools to help engineers, not replacements. Models cannot, on their own, build novel new things no matter how much the hype suggests otherwise. What they can do is remove a hell of a lot of accidental complexity.
> These models are tools to help engineers, not replacements. Models cannot, on their own, build novel new things no matter how much the hype suggests otherwise.
But maybe models + managers/non technical people can?
The question is: How to become a senior when there is no place to be a junior? Will future SWE need to do the 10k hours as a hobby? Will AI speed up or slow down learning?
Good question, and I think you gave the correct answer: yes, people will just do the 10,000 hours required by starting to program at the age of eight and then playing around until they're done studying.
Often what happens is the golf-course phenomenon. As golfing gets less popular, low- and mid-tier golf courses go out of business as they simply aren't needed. But at the same time, demand for high-end golf courses actually skyrockets, because people who want to golf either give it up or go higher end.
I think this will happen with programmers. Rote programming will slowly die out, while demand (and pricing) for the super high end will go up dramatically.
How so? I witnessed it quite directly in California. The majority have closed, and the remaining ones have gone up in price and are upscale. This has been covered in various news programs like 60 Minutes. You can look up the death of golfing.
Also, I'm unsure what you mean by... 'how golfing works'. This is about the economics of it, not the game.
I think they will have to figure out how to get around context limits before that happens. I also wouldn't be surprised if the future models that can actually replace workers are sold at such an exorbitant price that only larger companies will be able to afford it. Everyone else gets access to less capable models that still require someone with knowledge to get to an end result.
Well, considering they floated the $2000 subscription idea, and they still haven't revealed everything, they could still introduce the $2k sub with o3+agents/tool use, which means, till about next week.