
I agree that AI wasn't genuinely replacing junior roles to any important extent, and the larger point about AI investment is spot on. Fast Company had exactly this take in November in "AI isn’t replacing jobs. AI spending is". https://www.fastcompany.com/91435192/chatgpt-llm-openai-jobs...

"We’ve seen this act before. When companies are financially stressed, a relatively easy solution is to lay off workers and ask those who are not laid off to work harder and be thankful that they still have jobs. AI is just a convenient excuse for this cost-cutting. "


Not a "what if". Can you name 3 new cool technologies that have come out in the last 5 years?

1. Copilot for Microsoft PowerPoint

2. Copilot for Windows Notepad

3. Copilot for Windows 11 Start Menu


Nah man, I’m still waiting for Copilot for vim.

Yeah. Where are all the great new Mac native apps putting electron to shame, the avalanche of new JS frameworks, and affordable SaaS to automate more of life? AI can write decent code, so why am I not benefiting from that as a consumer?

Well, if you're a consumer of code, then technically you benefit. Otherwise, you probably won't notice it as much.

It's almost like a lot of our technologies were pretty mature already and an AI trained on 'what has been' has little to offer with respect to 'what could be'.

oof that's profound. Really nice closing thought for 2025.

LLMs, Apple Silicon, self-driving cars just off the top of my head without really thinking about it.

GPT-2 was 6 years ago, the first Apple silicon (though not branded as such at the time) was 15 years ago, and the first public riders in autonomous vehicles happened around 10 years ago. Also, 2/3 of those are "AI".

> the first Apple silicon (though not branded as such at the time) was 15 years ago

Nobody, not even Apple, was using the term "Apple Silicon" in 2010.

The first M series Macs shipped November 2020.


Quibbling over one year is pedantic. Apple Silicon clearly refers to the M series chips, which have disrupted and transformed the desktop/laptop market. Self-driving likewise refers to the recent boom and ubiquity of self-driving vehicles.

M series is an iteration of the A series, "disrupting markets" since 2010. LLMs are an iteration of SmarterChild. "Self-driving vehicles" are an iteration of the self-parking and lane-assist vehicles of the last decade.

I'm bored.


Damn, what an LLM roast. SmarterChild couldn't even recall past 3 messages.

I would be bored too if I was disingenuous. Everything is an iteration of ENIAC right? Things haven't changed at all since then right?

All of those things are more than 5 years old.

I could not get in a Waymo and travel across San Francisco five years ago, are you serious?

Neuralink. Quantum computers are hitting interesting milestones, with Microsoft releasing a processor chip for quantum computing. Green steel is another interesting one, though not as 'sexy' as the previous two.

didn't believe the quantum stuff, so I googled it. I'm shocked how far it's come. Even China has some kind of photonic quantum chips now.

Wait so quantum is going to actually deliver something useful within the next 10-20 years??

Incredibly cheaper batteries and solar panels. Much better induction stoves.

I was thinking computer-related, but those are good, and better battery technology helps with computing.

Uhhh, LLMs? The shit computers can do now is absurd compared to 2020. If you showed engineers from 2020 Claude, Cursor, and Stable Diffusion and didn't tell them how they worked, their minds would be fucking exploding.

So LLMs exist therefore nothing else is worth the time? That’s sort of the gist of HN these past few years

Moreover: people’ve been crowing about LLM-enabled productivity for longer than it took a tiny team to conceive and build goddamn Doom. In a cave! With a box of scraps!

Isn’t the sales pitch that they greatly expand accessibility and reduce cost of a variety of valuable work? Ok, so where’s the output? Where’s the fucking beef? Shit’s looking all-bun at the moment, unless you’re into running scams, astroturfing, spammy blogs, or want to make ELIZA your waifu.


No, I was just skeptical of the GP's assertion that tech hasn't produced anything "cool" in the last 5 years, when it has been a nonstop barrage of insane shit that people are achieving with LLMs.

Like the ability for computers to generate images/videos/songs so reliably that we are debating whether it is going to ruin human artists... whether you think that is terrible or good, it would be dumb to say "nothing is happening in tech".


This was posted earlier today:

https://www.danshapiro.com/blog/2025/12/i-made-the-xkcd-impo...

The xkcd comic is from 11 years ago (September 2014).


Surely you have realized by now that a large portion of the HN userbase is here for get rich quick schemes.

ahh, brings me back to the blockchain days, and the many excuses people made for using them instead of a SQL database for whatever reason

It’s really incredible how quickly people take things for granted.

LLMs are one, granted. GP asked for three, though.

GGP's question doesn't make sense though. What does it mean for a technology to "come out"?

Also what does three prove? Is three supposed to be a benchmark of some kind?

I would wager every year there are dozens, probably hundreds, of novel technologies being successfully commercialized. The rate is exponentially increasing.

New procedural generation methods for designing parking garages.

New manufacturing approaches for fuselage assembly of aircraft.

New cold-rolled steel shaping and folding methods.

New solid state battery assembly methods.

New drug discovery and testing methods.

New mineral refinement processes.

New logistics routing software.

New heat pump designs.

New robotics actuators.

See what I mean?


Great list, and most of those don't involve big tech. I think what your list illustrates is that progress is being made, but it requires deep domain expertise.

Technology advances like a fractal stain, ever increasing the diversity of jobs to be done to decrease entropy locally while increasing it globally.

I would wager we are very far from peak complexity, and as long as complexity keeps increasing there will always be opportunities to do meaningful innovative work.


1. We may be at the peak complexity that our population will support. As the population stops growing, and then starts declining, we may not have the number of people to maintain this level of specialization.

2. We may be at the peak complexity that our sources of energy will support. (Though the transition to renewables may help with that.)

3. We may be at the peak complexity that humans can stand without too many of them becoming dehumanized by their work. I could see evidence for this one already appearing in society, though I'm not certain that this is the cause.


1. Human potential may be orders of magnitude greater than what people are capable of today. Population projections may be wrong.

2. Kardashev? You think we are at peak energy production? Fusion? Do you see energy usage slowing down, or speeding up, or staying constant?

3. Is the evidence you're seeing appear in society just evidence you're seeing appear in media? If media is an industry that competes for attention, and the best way to get and keep attention is not telling truth but novel threats + viewpoint validation, could it be that the evidence isn't actually evidence but misinformation? What exactly makes people feel dehumanized? Do you think people felt more or less dehumanized during the great depression and WW2? Do you think the world is more or less complex now than then?

From the points you're making you seem young (maybe early-mid 20s) and I wonder if you feel this way because you're early in your career and haven't experienced what makes work meaningful. In my early career I worked jobs like front-line retail and maintenance. Those jobs were not complex, and they felt dehumanizing. I was not appreciated. The more complex my work has become, the more creative I get to be, the more I'm appreciated for doing it, and the more human I feel. I can't speak for "society" but this has been a strong trend for me. Maybe it's because I work directly for customers and I know the work I do has an impact. Maybe people who are caught up in huge complex companies tossed around doing valueless meaningless work feel dehumanized. That makes sense to me, but I don't think the problem is complexity, I think the problem is getting paid to be performative instead of creating real value for other people. Integrity misalignment. Being paid to act in ways that aren't in alignment with personal values is dehumanizing (literally dissolves our humanity).


Not even close. I'm 63. You would be nearer the mark if you guessed that I was old, tired, and maybe burned out.

I've had meaningful work, and I've enjoyed it. But I'm seeing more and more complexity that doesn't actually add anything, or at least doesn't add enough value to be worth the extra effort to deal with it all. I've seen products get more and more bells and whistles added that fewer and fewer people cared about, even as they made the code more and more complex. I've seen good products with good teams get killed because management didn't think the numbers looked right. (I've seen management mess things up several other ways, too.)

You say "Maybe it's because I work directly for customers and I know the work I do has an impact". And that's real! But see, the more complex things get, the more the work gets fragmented into different specialties, and the (relative) fewer of us work directly with customers, and so the fewer of us get to enjoy that.


Ah my bad, that was a silly deduction on my part.

Yes, I see your point better now; however, I still think this is temporary. It's probably something like: accidental/manufactured complexity is friction, and in this example the friction is dehumanizing jobs. You're right that this is a limiting factor. My theory is that something will get shaken up and refactored, a bunch of the accidental complexity that doesn't effectively increase global entropy will fall off, and then real complexity will continue to rise.

I'm kind of thinking out loud here and conflating system design with economics, sociology, antitrust, organizational design, etc. Not sure if this makes sense but maybe in this context real complexity increases global entropy and manufactured complexity doesn't.

Manufactured complexity oscillates and real complexity increases over longer time horizons.

So what you see as approaching a limit (in the context of our lifetimes) is the manufactured complexity, and I agree.

My point is that real complexity is far from its limit.

I'm a lot less confident, but suspect, that if real complexity rises and manufactured complexity decreases we will see jobs on average become better aligned with human qualities. (Drop in dehumanizing jobs)

Not sure how long this will take. Maybe a generation?


I see your point better also. I'd like to think you're right, especially about the accidental complexity getting removed. That would do much to make me feel more positive about the way work is.

And in fact, if you have multiple firms in competition, the one that can decrease accidental complexity the most has a competitive advantage. Maybe such firms will survive and firms with more accidental complexity will die out.


That sounds right to me. It also makes me wonder whether artificially low cost of capital (artificially low interest rates) would result in more manufactured complexity.

Is it too off-topic or controversial to note that in January 1941 in an edict signed by Martin Bormann, head of the Nazi Party Chancellery and private secretary to Adolf Hitler, the Nazis called for a ban on the future use of Judenlettern (Jewish fonts) like Fraktur?

<https://web.archive.org/web/20151207071605/http://historywei...>


Speaking of DEI: Stanley Morison, the inventor of Times New Roman in collaboration with Victor Lardent, was one of the founders of The Guild of the Pope's Peace, an organization created to promote Pope Benedict XV's calls for peace in the face of the First World War. On the imposition of conscription in 1916 during the First World War, he was a conscientious objector and was imprisoned. <https://en.wikipedia.org/wiki/Stanley_Morison#Early_life_and...>

One weird trick kernel developers HATE

Concerned that Sarandos and Peters haven't sufficiently greased his palm?


The article clearly has Trump raising microeconomic concerns about increased monopoly power.


Who believes anything he says?

It appears that Trump's intent is to prefer a hostile buyout financed in part by Jared Kushner's investment firm Affinity Partners, along with financing from the Saudi and Qatari sovereign wealth funds and L'imad Holding Co, owned by Abu Dhabi. Nothing to see here.

https://www.reuters.com/legal/transactional/kushner-role-bid...


Now is a good time to read Hannah Arendt's The Origins of Totalitarianism.


Also Umberto Eco's essay Ur-Fascism.

Interestingly to me, Arendt makes a good case that fascism is distinct from totalitarianism. It's possible to have a fascist government that is authoritarian and dictatorial, but not fully totalitarian. It's also possible to have a totalitarian government based on a system other than fascism.

Agreed, it's some sort of a Venn diagram of "traits" present in both -isms.

I believe we discussed this last week, for a different vendor. https://news.ycombinator.com/item?id=46088236

Headline should be "AI vendor’s AI-generated analysis claims AI generated reviews for AI-generated papers at AI conference".

h/t to Paul Cantrell https://hachyderm.io/@inthehands/115633840133507279


That's not true: there are some menu items and supporting code by default.


Still, the menu item does not interact with AI without you explicitly configuring it.

I bet if you click it without any configuration it will give you an error message.


How many inactive menu items that error out when clicked are acceptable? Are we ok with a Microsoft Word style ribbon of controls that do nothing?


If UI bugs are really the issue, then one just sends patches to the upstream project - I'm sure the maintainers will be happy to receive fixes for broken menus. A fork for this is useless, and guaranteed to be abandoned.

As best I can tell, Goyal started adding AI-related code to Calibre back in August, merging the LLM tab work from https://github.com/kovidgoyal/calibre/pull/2838, and created the chat widget in November with commit 41d0da4267dc6f7f7e48fb9bb7e8609a2e251cb7.

I looked at forking the project myself: the challenges are that it's a very quirky application, its design and implementation don't share conventions with any other application, and the build system is complex and unique to Calibre.

It's a shame there's no good open source ebook library application with a more conventional design. Shoving AI into everything, even when it defaults to "off" (for now), is getting old.

