Hacker News

This is gonna keep happening with every AI advance until humans are an absolute bottleneck in every domain. It may take a bit of time for some professions, but the writing is on the wall. This will be the greatest shift in human history, and I think a lot of people will have trouble grappling with it because it's not fun to think about being made irrelevant.

The only thing that will slow AI down is massive universal international regulation. Human intelligence really isn't the be-all and end-all of intelligence in general, it's just a stepping stone. I feel many on this site don't want to accept this because their intelligence has been such a valuable tool and source of personal pride/identity for them for so long.



What is all of this for if the result is that human beings are "made irrelevant"? If these LLMs truly become as game changing as so many say they will be, then can we agree that it's time to stop thinking that a person's worth equals their economic output?


I agree with you, the problem currently is that the balance of power has shifted so far in favor of the 0.1%. And those people will not want to give up the power that they already have.

I fear for a future where the technocrats win out and we end up in an "Altered Carbon" scenario. We are on the precipice of AI and robotics equalizing the playing field for everyone, but only if the power is held by the people and not the few at the top with the most resources.

Not sure how to steer the ship in that direction, but I do have a few ideas...


No, that won’t happen, because these tools are being built based on investments in private goods.

It would be something if there were national level LLM tools, owned and operated as commons.


Things that were once operated as commons became private goods. There is no reason that it can't go the other way.


I can't tell if this is satire or end-stage depoliticization in action. Yeah we are just gonna nationalize things again, no big deal.


If the change is an extinction level event for the state, nationalization is a sure bet


Neither, it is just a statement of fact.


> What is all of this for if the result is that human beings are "made irrelevant"?

I think your views on this will radically differ if you earn 200k a year versus 2k a year.


Which is maddening. Too many people lack class consciousness.

An engineer making 200k a year has more in common with someone making 2k a year than they do with the Elon Musks of the world.

This delusion is rampant in professional spheres like medicine and tech.


> An engineer making 200k a year has more in common with someone making 2k a year than they do with the Elon Musks of the world.

No they don't. Only someone making 200k could say that.


If you make 200k a year, you're not even in the top tax bracket of the US. On 200k alone you can hardly afford an SFH in SV, NOVA, or NYC.

Meanwhile Elon Musk's net worth can swing 200+ million in a single day. He could buy up an entire neighborhood in those same zip codes you'd hardly be able to break into.

So how are you "closer" to Elon Musk exactly?


It is definitely past time to start thinking outside of the economy.

Although must we deal in "worth" at all at that point? If two people have conflicting visions, it shouldn't be the one who is "worth" more that gets their way, it should be the one whose vision is most appealing to the rest of us.


No, I disagree, and for everyone who bemoans capitalism or the power of money, it's important to understand the foundational arguments from which economics is born.

Wants are infinite, and resources are limited. Economics is the objective method of ordering a system to achieve subjective ends.

For better or worse, money is a medium of exchange and signal of what people are willing to allocate for their needs. Unless you create economic markets, information markets, and political systems that are built to handle the forces being harnessed by society, you have failure states.

In other words, taxes need to bleed off wealth to ensure that it cannot create advantage in other fields (media, politics) and break the even playing field in those other economies.


You are begging the question by relying on an unproven basis for your argument. Why do economies have to be based on free-market capitalism?


Free markets are superior to planned economies because they’re able to instantly respond to consumer preferences, resulting in efficient allocation of resources.

On a side note, I’m not sure why HN is often hostile to economic arguments. Economics is a well-established science.


Horses were superior to steam engines for 100 years. It takes time for technology to improve, and money is a technology.

As technologists, we understand the need for a new major version here and there: a breaking change where the new thing is not compatible with the old. Economics as we know it smells overdue.

The particular bit that doesn't seem to be fitting the bill anymore is "value". Back when more economic activity was undeniably a good thing... Back when the majority of our resources were spent fending off hunger, or exposure to the elements, or illness, we had enough of a common enemy that we could get by with a single untyped primitive notion of value. However much we disagreed, we still agreed enough for that to work.

But now we're able to handle the basics well enough that we spend the majority of our resources fending off each other. A single fungible notion of value feels awkward. When I accept a dollar from somebody, I'm not sure whether I've helped or harmed myself by doing so, because it's just as likely that they made that dollar by degrading the water I drink or some other activity that's worth way more than a dollar for me to prevent. We lack shared values but still share a notion of value, and it's not working out.

So perhaps instead of "thinking outside the economy" I should've said "Update the economy to account for more". Whatever words you prefer for it, drastic change is on our doorstep.


Just wanted to note that free markets are separate from capitalism. Free market socialism has existed here and there as well.


Economics is not capitalism though. They are not synonyms.


I am making a defense of economics, not capitalism.

I like markets, and would laugh if anyone went ahead and tried to make a purely capitalistic economy. Fair, well-regulated economies work.


> Fair, well-regulated economies work

There is not a single fair, well regulated economy in the world. Private interests of those with large amounts of capital skew the markets to their favor.


This is a nirvana fallacy. “There aren’t any that are perfect so why try?”

"Fair and well regulated" is not a binary where it either is or it isn't; we can continuously try to make things better.


>we can continuously try to make things better

Who is "we" here? Because the current system is ran by people who have a vested interest in keeping things the same.

Also, I disagree on my comment being a fallacy. I'd almost argue the comment I replied to is a fallacy because it's comparing a theoretical well regulated market to the reality of what we actually have.

It's clear to me that such a thing could never truly work, because it would require near omnipotence on the part of whoever the regulating body is in order to prevent actors from gaming the market.


Have you seen one of those lately?


Don’t know, but Europe seems to be doing better than the US in that regard. So perhaps some countries in the EU?


Humans have more access to the real world. These models have to tokenize everything and put it into words, but so much information is outside of words. These models may well be super intelligent but their intelligence is locked inside of a cage (the tokenizer).
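To make the "cage" concrete, here is a toy sketch of what tokenization means: everything a model sees must first be flattened into a sequence of discrete integer IDs. This is a deliberately simplified byte-level scheme (the function names `toy_tokenize`/`toy_detokenize` are illustrative, not any real library's API); production models use learned subword vocabularies like BPE, but the principle of discretization is the same.

```python
# Toy illustration: a language model's input is a sequence of discrete
# integer token IDs. Real tokenizers use learned subword vocabularies
# (BPE), but the principle of flattening everything into IDs is the same.

def toy_tokenize(text: str) -> list[int]:
    """Byte-level tokenization: each UTF-8 byte becomes one token ID (0-255)."""
    return list(text.encode("utf-8"))

def toy_detokenize(ids: list[int]) -> str:
    """Inverse mapping: token IDs back to text."""
    return bytes(ids).decode("utf-8")

ids = toy_tokenize("hello")
print(ids)  # [104, 101, 108, 108, 111]
assert toy_detokenize(ids) == "hello"
```

Whatever enters the model, whether text or (in multimodal systems) image patches and audio frames, ultimately arrives as discrete sequences like this, which is the constraint the comment above is pointing at.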

Even in the world where AI has full control of lights-out factories (again, I doubt it; something goes wrong at the factory, you gotta send a guy in), human beings still need to look each other in the eye and communicate, they need to touch each other. Not only that, they need to be seen and acknowledged by other human beings.

"AI" cannot ever replace this. People whose intelligence is their pride/identity kind of miss this. Stupid people are capable of loving each other more deeply and more completely than any machine ever will love them.


You basically just said people will be the janitors, the on-site fixers, and the personification of decisions and that they will still be able to live fulfilling lives in the real world. I think that is perfectly in line with what the parent wrote.


All those things could be done by humanoid robots. AI models aren't limited to words, as we've seen with video models. GPT-4o, which has been out for over a year, is natively multimodal. Robotics companies are training robots to take in all the data they have available (video, audio) and interpret it all together in context. There is a core substrate of tokens, yes, but that is largely just a standard "bit" level of information for AI brains, not some essential limiter that will keep AI from understanding all the soft, abstract stuff that humans can. If you look at o3 now, just feeding it images, it can clearly reason in a way closer to humans than a calculator is to it.


What a load of guff.

AI models still produce galling inconsistencies and errors for me on a daily basis.


Same.

I find LLMs to be useful, but my day to day usage of them doesn't fit the narrative of people who suggest they are creating massive complex projects with ease.

And if they are, where's the actual output proof? Why don't we see obvious evidence of some massive AI-powered renaissance, and instead just see a never ending stream of anecdotes that read like astroturf marketing of AI companies?


Speaking of which, astroturfing seems like the kind of task LLMs should excel at…


I think it's easy to ignore all the times the models get things hilariously wrong when there are a few instances where the output really surprises you.

That said, I don't really agree with the GP comment. Humans would be the bottleneck if these models got things right 100% of the time, but with a model like o3-pro it's very possible it'll just spend 20 minutes chasing down the wrong rabbit hole. I've often found that prompting o4-mini gave me results that were pretty good most of the time while being much faster, whereas with base o3 I usually have to wait 2-3 minutes and hope that it got things right and didn't make any incorrect assumptions.


What good is intelligence if there is nobody with the money to pay for it? We run our brains on a few thousand calories a day. Who is going to pay to provide the billions of calories it takes to run/cool GPUs all day long if there are no humans with marketable skills?


“No marketable skills” seems pretty unlikely if you look beyond office work.


Genuine question--I've seen this thrown around a lot. Do you count yourself in this hypothetical situation where society returns to physical labor, or do you think you're immune from being automated?


Since there are excellent educational resources available online, I've sometimes wondered what it is that teachers do that couldn't be done by computer software. But it seems clear that they're somehow necessary? In theory, a bright kid with access to the Internet should be able to teach themselves, but most kids won't learn much that way.

We're going to see more jobs automated, and lots of jobs will change, but I think lots of jobs will still be around for similar reasons. Even if it's not what we'd normally consider physical labor, there's something about in-person interaction that's not easy to automate.

Independent, self-sufficient adults who could and want to get by with just machine interaction are a minority.


AIs will pay other AIs through various means of exchange

Assuming AIs need humans in that way is like being a tribe of monkeys and saying:

“What good is being human if they don’t have bananas to pay? Monkey only need banana, humans need clothes, houses, cars, gas, who is going to pay the humans bananas if monkeys have all the banana?”


I think too many people call this intelligence, and it results in intuitions that are useless and waste time, pushing the day we understand this moment further into the future.

The best I've got is that there are two frames of assessment people are using:

1) Output frame of reference: The output of an LLM is the same as what a human could make.

2) Process frame of reference: The process at play is not the same as human thinking.

These 2 conversation streams end up with contradictions when they engage with each other. Yes, the tools are impressive. The tools aren’t thinking. etc.

A useful analogy is rote learning - many people have passed exams by memorizing textbooks. The output is indistinguishable from someone who manipulates a learned model of the subject to understand the question and provide the answer.


> unilateral international regulation

is an oxymoron, a contradiction in terms


sorry I meant "universal" or "omnilateral"


Did you mean global regulation?


yeah


Yes, people will start asking "when must we kill them?"



