Congress is largely the wrong people though. What sane person would build a system where getting elected requires you to be rich? Where a primary system ensures everyone elected is not roughly in the center of opinions?
I think it's designed that way because it wasn't originally seen as one country, more as a federation.
Even by the time of the civil war, Robert E Lee decided he was Virginian ahead of his national identity.
If you have a bunch of sovereign states, then you need some state-level evening out. If everyone is a citizen of one large state, you can just go proportional.
On top of this, it was never going to be easy to gradually move from one to the other with the issue of slavery looming large, so they didn't fix it. This was still a huge issue in 1848 when a lot of Europe was grappling with how to do a constitution.
Yes, I understand it was designed that way 250 years ago. What I don't understand is why so many Americans think that it was perfect. Why aren't Americans open to the idea that their system of "separation of powers" is fundamentally flawed? I went to an American school, and separation of powers is talked about as if it's the only possible right answer.
The US quickly realized that the loose federation wasn't going to work and centralized a lot of power. It should continue to evolve its system.
It's worth noting that even the US doesn't think its system is a good idea. When it imposes a new government on countries (like Iraq), it chooses a parliamentary system.
> What I don't understand is why so many Americans think that it was perfect.
Because there's no example in history that has worked better. It's unclear how much of the success of the US should be attributed to the Constitution (what history would have looked like if the US had a Canadian constitution, for example), but what can't be argued is that the US is the most successful political body in world history, and it has the oldest continuous constitution in the world.
Under that lens it makes sense that Americans are fairly conservative about changing the Constitution and why the founders are so revered. It's just fucking worked out great for us until now. It's really a miracle in many ways.
The fact that the US Constitution is basically more sacred than the Bible when you talk to the average American is even weirder. The Founding Fathers are the Original Gods (Gangsters?).
I responded the same to the person you responded to, but perhaps this is a decent explanation.
Because there's no example in history that has worked better. It's unclear how much of the success of the US should be attributed to the Constitution (what history would have looked like if the US had a Canadian constitution, for example), but what can't be argued is that the US is the most successful political body in world history, and it has the oldest continuous constitution in the world.
Under that lens it makes sense that Americans are fairly conservative about changing the Constitution and why the founders are so revered. It's just fucking worked out great for us until now. It's really a miracle in many ways.
It’s not really excusing anything, just pointing out that Cantor Fitzgerald would be making money whether this Supreme Court ruling went for or against the Trump tariffs. So it’s not like they had to have any inside knowledge to be making money.
It's true that a volatile environment in general is good for certain types of investment banking business, including facilitating this trade. I nevertheless think it's unlikely - honestly, a galaxy brain take - that Cantor Fitzgerald or other investment banks with influence in the Trump administration would push for policies like unconstitutional tariffs just to drive trading revenue. Maybe the strongest reason is that other, frankly more lucrative investment banking activities, like fundraising and M&A, benefit from a growing economy and a stable economic and regulatory environment.
It stretches your imagination to conceive of a financier chasing short term gains over the long term stability of the investment bank they are part of? I seem to recall an event back in the late '00s that you may want to look into.
How is it that the form of government comes up so often when discussing the decisions of ordinary people?
I would think that for most people, you care about whether you can fit in economically before you consider something that is unlikely to matter.
Obviously don't go and try to immigrate to China if you are planning to be a political commentator.
But for most people in most places, what will you notice? Are there jobs, how is tax, are the streets clean, are there homeless people, can I see a doctor, is there a lot of paperwork? Will I find friends?
It's not that easy to block someone. It's easy to block a particular account, sure.
But there are now people who purposefully make a bunch of accounts to spread lies.
> that unless social media has a mechanism to promote civilised behaviour
We need more Dangs.
He is maybe the major reason this forum is still decent. Tasteful moderation is really hard, I'd say the vast majority of Reddit subs don't have good moderation.
Anonymity leads to the multiple accounts issue. Pseudonymity addresses that. Eg: "We don't know the name of the person behind this identifier in real life, but we see we blocked them last year, so we will deny their request to open a new account with us"
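A minimal sketch of that kind of check, in Python. The identifier scheme is hypothetical and assumed for illustration (nothing here is tied to a real name or to any particular platform's actual mechanism):

    # Hypothetical sketch: deny a new account if this pseudonymous identity was blocked before.
    # "stable_id" is assumed to be something the platform can re-derive at signup
    # (e.g. a salted hash of a verified-but-never-revealed attribute), not a real-world name.
    blocked_ids = {"a1b2c3"}  # identifiers we blocked in the past

    def may_open_account(stable_id: str) -> bool:
        """True if this pseudonymous identity has no prior block on record with us."""
        return stable_id not in blocked_ids

    print(may_open_account("a1b2c3"))  # False: we blocked them last year, so deny the request
    print(may_open_account("d4e5f6"))  # True: no prior history with us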
You and I agree the moderation here on HN is fantastic. There is a minority of people who would prefer HN allow spam, bigotry, calls to violence, revenge porn, snuff content, etc. A large community - a nation, for example - should have the ability to 'tyrannize' an antisocial minority by enforcing some base level of standards. For example, at a minimum, to prevent a site operator from showing those types of content to users who do not specifically request them.
There's a huge problem with the media landscape. It's similar to the junk-food problem, or gambling, or addiction to drugs.
We've made a society where "number goes up" is the only measure of success. We don't care whether what makes the number go up is good, and that leads to exploiting the irrationality of consumers.
People know they aren't supposed to eat chips all day. They know they aren't likely to win their bet. They know it's not a good idea to watch the most exciting news.
But they can't help themselves, so they get exploited, and the exploiters are wealthy enough to write it into law that they aren't responsible.
Point this out, and inevitably someone says "who are you to decide what's good for other people", and yes, I used to think this way. Well, one thing is that I'm straight up taking it from the people who are being used. Who wants to be fat? Virtually everyone is eating more than they should. Are we supposed to think this is the revealed, rational preference of everyone? The other thing that changed is that I'm a parent. I have to make choices for my kids, and doing that makes me recognize that people their age aren't the only children. Paternalistic much? Sure. Eat your vegetables!
Who wants to be uninformed? Yet we are. People can just look up the crime statistics in London and see which way it has been going the past couple of decades.
I don't have a solution, I'm afraid, just a diagnosis. We're living in a society that is being abused under the pretense of personal freedom.
Someone better read than me has probably written an essay or two about this, please link. I don't know the best keywords for such a search.
It's the same perspective that asks, "If he's so bad, why doesn't she leave him?" And when she doesn't, it ultimately reconciles that by blaming her.
It reveals that the emotional relationship to the consequences takes priority over the consequences themselves. Whether it's justifying domestic violence, or justifying the consequences of an obesity epidemic, or the consequences of a sizeable fraction of people living in a false reality.
Those problems still exist; nothing is solved. But if we apply the salve of personal choice, we can avoid meaningful change. It's a nihilistic, defeatist defense mechanism that says much more about the person employing it, and their inability to withstand emotional discomfort, than about the facts of each case - that people regularly take actions that are objectively against their best interest.
Our failure to provide aid, and our clinging to the belief that the world is really just by hiding behind the idea of rational choice, is childishly naive.
You want to write a book about people's deepest motivations. Formative experiences, relationships, desires. Society, expectations, disappointment. Characters need to meet and talk at certain times. The plot needs to make sense.
You bring it to your editor. He finds you forgot to capitalise a proper noun. You also missed an Oxford comma. You used "their" instead of "they're".
He sends you back. You didn't get any feedback about whether it makes sense that the characters did what they did.
You are in hell, you won't hear anything about the structure until you fix your commas.
Eventually someone invents an automatic editor. It fixes all the little grammar and spelling and punctuation issues for you.
Now you can bring the script to an editor who tells you the character needs more development.
You are making progress.
Your only issue is the Luddites who reckon you aren't a real author because you tend to fail their LeetGrammar tests, and who call you a vibe author.
Except that the editor doesn't focus on the little things but on the structure. It is the job of the copy editor to correct all the grammar and bad writing. Copy editing can't be done by AI, since it includes fixing logical errors and character names. My understanding is that everybody, including the author, fixes typos when they find them. There is also a proofreader at the end to catch typos.
The real question is whether the boom is, economically, a mistake.
If AI is here to stay, as a thing that permanently increases productivity, then AI buying up all the electricians and network engineers is a (correct) signal. People will take courses in those things and try to get a piece of the winnings. Same with those memory chips that they are gobbling up, it just tells everyone where to make a living.
If it's a flash in the pan, and it turns out to be empty promises, then all those people are wasting their time.
What we really want to ask ourselves is whether our economy is set up to mostly get things right, or whether it is wastefully searching.
"If X is here to stay, as a thing that permanently increases productivity" - matches a lot of different X. Maintaining persons health increases productivity. Good education increases productivity. What is playing out now is completely different - it is both irresistible lust for omniscient power provided by this technology ("mirror mirror on the wall, who has recently thought bad things about me?"), and the dread of someone else wielding it.
Plus, it makes a natural moat against the masses of normal (i.e. poor) people, because it requires a spaceship to run. Finally, intelligence can also be controlled by capital the way it was meant to be, joining information, creativity, means of production, communication and such things.
> Plus, it makes a natural moat against the masses of normal (i.e. poor) people, because it requires a spaceship to run. Finally, intelligence can also be controlled by capital the way it was meant to be, joining information, creativity, means of production, communication and such things.
I'd put intelligence in quotes there, but it doesn't detract from the point.
It is astounding to me how willfully ignorant people are being about the massive aggregation of power that's going on here. In retrospect, I don't think they're ignorant; they just haven't had to think about it much in the past. But this is a real problem with very real consequences. Sovereignty must occasionally be asserted, or someone will infringe upon it.
> It's like the apathetic masses want the billionaires to become trillionaires as long as they get their tiktok fix.
It's much worse. A large demographic of Hacker News loves gen AI. These are usually highly educated people, showing their true faces on the plethora of problems this technology generates and the things it violates.
>I've given up tbh. It's like the apathetic masses want the billionaires to become trillionaires as long as they get their tiktok fix.
Especially at the cost of diverting power and water from the farmers and humans who need them. And the benefit of AI seems quite limited, judging from a recent Signal post here on HN.
Water for farmers is its own pile of bullshit. Beef uses a stupid amount of water. Same with almonds. If you're actually worried about feeding people and not just producing an expensive economic product you're not going to make them.
Same goes for people living in deserts where we have to ship water thousands of miles.
I came back to agree that we should be eating a lot less meat than we do, I'm guilty of it too. We didn't eat meat all day every day as we evolved; if we love it it's because it was scarce and we need to create an artificial scarcity by choosing not to indulge (the same goes for fats, sugars etc in general).
As for the other responses regarding AI: I think that AI could very well become the best thing to ever happen to our species if we were ready for it, but we are not, by a long shot.
Regarding wastage: AI research is just fine imo, but corpos have gotten their parasite hooks into the technology and as per usual are more interested in making money right now rather than when it's appropriate. Energy and water use would not be a problem if everyone & their mum weren't desperately seeking VC funding for a technology they know nothing about.
Regarding culture: besides the obvious capitalisation of capitalism doing capitalism things, we aren't ready in a cultural sense either; our tribalism and social structure is incredibly juvenile. They say that civilisation first started when one human tended to the wounds of another human and took care of them while they healed. From what I see of the world around me we have gone backwards - we call this civilisation? UBI would just be one step in a long list of cultural change required to prepare for AI.
Our deficiencies are many, and complex to solve. The only solution that I see is that one day we do crack AGI, and that "it happens" and turns out to be benevolent, in that it forces us to be good. Because we have to be forced to; we are selfish and will never vote in each other's interests.
The difference is that we've more or less hit a stable Pareto front in education and healthcare. Gains are small and incremental; if you pour more money into one place and less into another, you generally don't end up much better off, although you can make small but meaningful improvements in select areas. You can push the front forward slightly with new research and innovation, but not very fast or far.
The current generation of AI is an opportunity for quick gains that go beyond just a few months longer lifespan or a 2% higher average grade. It is an unrealised and maybe unrealistic opportunity, but it's not just greed and lust for power that pushes people to invest, it's hope that this time the next big thing will make a real difference. It's not the same as investing more in schools because it's far less certain but also has a far higher alleged upside.
> The difference is that we've more or less hit a stable Pareto front in education and healthcare.
Not even close. So many parts of the world need to be pumped with targeted fund infusions ASAP. Forcing higher levels of education and healthcare in the places where they lag is the only viable step towards securing a peaceful and prosperous near future.
Also throwing money at problems doesn't necessarily solve them. Sometimes problems get worse when you throw more money at them. No matter how much money you throw at education, if you don't use Phonics to teach them, kids won't be able to read. Guess what we use?
Ok, this is mostly irrelevant - education is not one of those problems that money can’t at least massively improve. And lack of direct phonics instruction will leave behind some kids. And if you use phonics without the rest of learning to read, some of them still won’t manage it. One of the reasons that teaching American kids how to read varies is that somewhere between 30 and 60% of kids will figure it out if you just read to them enough. The others have a wide variety of gaps, ranging from hearing or sight difficulties to short term memory issues to not speaking English. Phonics helps a subset of them, but is not enough by itself - and I don’t know who “we” is, but most American schools do and have always taught phonics. The debate is really over the length of time and level of focus it gets, and whether to make 100% of kids sit through it when maybe half of them don’t need it. I’m sure there are teachers out there who just don’t teach phonics but I haven’t seen them.
We haven't used Phonics in US schools since the 1980s in most cases. So I somehow doubt you have kids or have interacted with a US public school in decades.
This is so wildly incorrect that I question your understanding of the word Phonics. Not only do I have kids in school now, I am familiar with the curriculums recently used by multiple school districts (and the alternatives they considered), and research conducted over the last five decades on literacy teaching around the US.
> if you pour more money into one place and less into another, you generally don't end up much better off, although you can make small but meaningful improvements in select areas
> The difference is that we've more or less hit a stable Pareto front in education and healthcare. Gains are small and incremental;
You probably mean gains for someone receiving healthcare and education now, as compared to 10 years ago, or maybe you mean the year-to-year average across every man alive.
You certainly do not mean that a person receiving appropriate healthcare is only 2% better off than one not receiving it, or that an educated person is only 2% better off than an uneducated one?
Because I find such a notion highly unlikely. So, here you have vast numbers of people you can mine for a productivity increase, simply by providing things that already exist and are available in unlimited supply to anyone who can produce money at will. Instead, let's build warehouses and fill them with obsolete tech, power it all up using a tiny sun, and... what exactly?
This seems like a thinly disguised act of an obsessed person that will stop at nothing to satisfy their fantasies.
> Finally intelligence can also be controlled by capital
The relationship between capital and AI is a fascinating topic. The contemporary philosopher who has thought most intensely about this is probably Nick Land (who is heavily inspired by Eugen von Böhm-Bawerk and Friedrich Hayek). For Land, intelligence has always been immanent in capitalism and capitalism is actively producing it. As we get closer to the realization of capitalism's telos/attractor (technological singularity), this becomes more and more obvious (intelligible).
2% is a lot! There are only fifty things you can invest 2% of GDP in before you occupy the entire economy. But the list of services people need - food, water, shelter, heating, transportation, education, healthcare, communications, entertainment, mining, materials, construction, research, maintenance, legal services - is a long one. To allocate each one 1% or 2% of the economy may seem small, but pretty quickly you hit 100%.
Most of what you have mentioned is not investment, but consumption. Investment means using money to make more money in the future. Global investment rates are around 25% of global GDP. Average return on investment is about 10% per year. In other words: using 1% or 2% of GDP would count as a success if it leads to an improvement in GDP of more than 0.1% or 0.2% next year. I think expecting a productivity gain on this scale due to AI is not unrealistic for 2026.
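A minimal back-of-the-envelope sketch of that break-even logic in Python, assuming the ~25% investment rate and ~10% average return quoted above (all figures illustrative):

    # Back-of-the-envelope check of the break-even claim above (all figures illustrative).
    world_gdp = 100.0        # normalize global GDP to 100 units
    ai_spend_share = 0.02    # 2% of GDP spent on AI
    avg_return = 0.10        # ~10% average annual return on investment

    ai_spend = world_gdp * ai_spend_share    # 2.0 units
    break_even_gain = ai_spend * avg_return  # 0.2 units, i.e. 0.2% of GDP

    print(f"AI spend: {ai_spend:.1f} units ({ai_spend_share:.0%} of GDP)")
    print(f"GDP uplift next year needed to match average returns: "
          f"{break_even_gain:.2f} units ({break_even_gain / world_gdp:.1%} of GDP)")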
> Most of you have mentioned is not investment, but consumption.
It's ongoing investment in the infrastructure of civil society. These sorts of investments usually give you indirect returns... which is why it's usually only done by governments.
Is it? I am using AI daily, but would rank it dead last compared to food, water, shelter, heating, transportation, education, healthcare, communications
Investing 1 or 2% of global GDP to increase the wealth gap by 50% more and make the top 1% unbelievably rich, while everyone else is looking for jobs or taking out 50-year mortgages, seems like a very bad idea to me.
This problem is not specific to AI, but a matter of social policy.
For example, here in Germany, the Gini index, an indicator of equality/inequality, has been oscillating around 29.75 +/- 1.45 since 2011.[1] In other words, the wealth distribution has been more or less stable over the last 15 years, and is less extreme than in the USA, where it was 41.8 in 2023.[2]
A century ago people in the US started to tax the rich much more heavily than we do now. They didn't believe that increasing inequality was necessary - or even actually that helpful - for improving their real livelihood.
Don't be shocked if that comes back. (And that was the mild sort of reaction.)
If you have billions and all the power associated with it, why are you shooting for personal trillions instead of actually, directly improving the day to day for everyone else without even losing your status as an elite, just diminishing it by a little bit of marginal utility? Especially if you read history about when people make that same decision?
I don't think that is scalable to infinite iPhones, since the input materials are finite. If all your friends get 100,000 iPhones and then you need an EV battery, that battery now costs 20,000 iPhones; if its previous cost was 5k iPhones, you're down 5k iPhones overall (your 10k iPhone gift minus the 15k price increase). On the other hand, if you already had a good battery, then you're up 20k iPhones or so in equity. Also, since everyone has so many iPhones, the net utility drops and they become worth less than the materials, so everyone would have to scrap their iPhones to liquidate them at the cost of the recycled metals.
It can be, but there are lots of reasons to believe it will not be. Knowledge work was the ladder between lower and upper classes. If that goes away, it doesn't really matter if electricians make 50% more.
It's not really a matter of some great shift. Millennials are the most educated generation by a wide margin, yet their wealth by middle age is trailing prior generations'. The ladder is being pulled up inch by inch, and I don't see AI doing anything other than speeding up that process at the moment.
> If some one were to say to you - you can have 10,000 more iPhones to play with but your friends would get 100,000 iPhones, would you reject the deal?
I'd think about how many elections he can buy with those iPhones, for starters.
>Both that inequality increases but also prosperity for the lower class? I don’t mind that trade off.
This sounds like it is written from the perspective of someone who sees their own prosperity increase dramatically so that they end up on the prosperous side of the worsening inequality gap. The fact that those on the other side of the gap see marginal gains in prosperity makes them feel that it all worked out okay for everyone.
I think this is the greed typical of the current players in the AI/tech economy. You all saw others getting abundant wealth by landing high-paying jobs with tech companies, and you want not only to do the same but to one-up your peers. It's really a shame that so much tech-bro identity revolves around personal wealth, with zero accountability for the tools that you are building to set yourselves in control of the lives of those you have chosen to either leave behind or to wield as tools for further wealth creation through alternate-income SaaS subscription streams or other bullshit scams.
There really is not much difference between tech-bros, prosperity gospel grifters or other religious nuts whose only goal is to be more wealthy today than yesterday. It's created a generation of greedy, selfish narcissists who feel that in order to succeed in their industry, they need to be high-functioning autists so they take the path of self-diagnosis and become, as a group, resistant to peer review since anyone who would challenge their bullshit is doing the same thing and unlikely to want too much light shed on their own shady shit. It is funny to me that many of these tech-bros have no problem admitting their drug experimentation since they need to maintain an aura of enlightenment amongst their peers.
It's gonna be a really shitty world when the dopeheads run everything. As someone who grew up back in the day when smoking dope was something hidden and paranoia was a survival instinct for those who chose that path I can see lots of problems for society in the pipeline.
I think you inadvertently stepped in the point — Yes, what the fuck do I need 10,000 iPhones for?
Also, part of the problem is which resources end up in abundance. What am I going to do with more compute when housing and land are limited resources?
Gary’s Economics talks about this, but in many cases inequality _is_ the problem. More billionaires means more people investing in limited resources (housing), driving up prices.
Maybe plebes get more money too, but not enough to spend on the things that matter.
It’s just a proxy for wealth using concrete things.
If you were given 10,000 dollars but your friends were also given 100,000 dollars, would you take the deal?
Land and housing can get costlier while other things get cheaper, making you overall more prosperous. This is what happened in the USA and most of the world. Would you take this deal?
I wouldn't be able to hang out with them as much (they'd go do a lot of higher-cost things that I couldn't afford anymore).
I'd have a shittier apartment (they'd drive up the price of the nicer ones, if we're talking about a significant sized group; if it's truly just immediate friends, then instead it's just "they'd all move further away to a nicer area").
So I'd have some more toys but would have a big loss in quality of my social life. Pass.
(If you promised me that those cracks wouldn't happen, sure, it would be great for them. But in practice, having seen this before, it's not really realistic to hold the social fabric together when economic inequality increases rapidly and dramatically.)
More to the point, what does research into notions of fairness among primates tell us about the risks of a vast number of participants deciding to take this deal?
You have to tell us the answer so we can resolve your nickname "simianwords" with regard to Poe's Law.
I don't know how nobody has mentioned this before:
The guy with 100k will end up rewriting the rules so that in the next round, he gets 105k and you get 5k.
And people like you will say "well, I'm still better off"
In future rounds, you will try to say "oh, I can't lose 5k for you to get 115k" and when you try to vote, you won't be able to vote, because the guy who has been making 23x what you make has spent his money on making sure it's rigged.
You’re missing the point. It’s not about jealousy; it’s basic economics - supply and demand. No, I would not take the deal if it raised demand for something central to my happiness (housing), driving up the price of something previously affordable and making it unaffordable.
I would not trade my house for a better iPhone with higher quality YouTube videos, and slightly more fashionable athleisure.
I don’t care how many yachts Elon Musk has, I care how many governments.
What if you could buy the same house as before, buy the same iPhone as before and still have more money remaining? But your house cost way way more proportionally.
If you want to claim that that's a realistic outcome you should look at how people lived in the 50s or 80s vs today, now that we've driven up income inequality by dramatically lowering top-end tax rates and reduced barriers to rich people buying up more and more real property.
What we got is actually: you can't buy the same house as before. You can buy an iPhone that didn't exist then, but your boss can use it to request you do more work after-hours whenever they want. You have less money remaining. You have less free time remaining.
If you’re asking me if I’m an idiot who doesn’t understand basic economics / capitalism, the answer is no. If you’re asking me if I think that in the real world there are negative externalities of inequality in and of itself that makes it more complicated than “everyone gets more but billionaires get more more” than the answer is yes.
Just being born in the US already makes you a top 10% and very likely top 5-1% in terms of global wealth. The top 1% you're harping about is very likely yourself.
> Just being born in the US already makes you a top 10%
Our family learned how long-term hunger (via poverty) is worse in the US because there was no social support network we could tap into (for resource sharing).
Families not in crisis don't need a network. Families in crisis have insufficient resources to launch one. They are widely scattered and their days are consumed with trying to scrape up rent (then transpo, then utilities, then food - in that order).
And so many people in the US are already miserable before yet another round of "become more efficient and productive for essentially the same pay or less as before!!"
So maybe income equality + disposable material goods is not a good path towards people being happier and better off.
It's our job to build a system that will work well for ourselves. If there's a point where incentivizing a few to hoard even more resources to themselves starts to break down in terms of overall quality of life, we have a responsibility to each other to change the system.
Look at how many miserable-ass unhappy toxic asshole billionaires there are. We'll be helping their own mental health too.
It is not really obvious to me that happiness should be part of the social contract.
Happiness is very slippery even in your own life. It seems absurd to me that you should care about my happiness.
So much of happiness is the change from the previous state to the present. I am happy right now because 2026 has started off great for me while 2025 was a bad year.
I would imagine there was never a happier American society than in the years after WW2.
I imagine some of the happiest human societies were the ones in the years after the Black Plague. No one today, though, gains happiness from the absence of the Black Plague.
To believe a society can be built around happiness seems completely delusional to me.
Yes the very fact that billionaires exist mean our species has failed.
I do not believe that there is a legitimate billionaire on the planet, in that they haven't engaged in stock manipulation, lobbying, insider trading, corrupt deals, monopolistic practices, dark patterns, corporate tax dodging, personal tax dodging.
You could for example say that the latter are technically legal and therefore okay, but it's my belief that they're "technically legal/loopholes" because we have reached a point where the rich are so powerful that they bend the laws to their own ends.
Our species is a bit of a disappointment. People would rather focus on trivial tribal issues than on anything that impacts the majority of the members of our species. We are well and truly animals.
It’s implied you mean that the ROI will be positive. Spending 1-2% of global GDP with negative ROI could be disastrous.
I think this is where most of the disagreement is. We don’t all agree on the expected ROI of that investment, especially when taking into account the opportunity cost.
I suspect a lot of this is due to large amounts of liquidity sloshing around looking for returns. We are still dealing with the consequences of ZIRP (Zero Interest Rate Policy) and QE (Quantitative Easing), where money meant to support the economy through the Great Financial Crisis and Covid was largely funneled into the top, causing the 'everything bubble'. The rich got (a lot) richer, and now have to find something to do with that wealth. The immeasurable returns promised by LLMs (in return for biblical amounts of investment) fit that bill very well.
They still gotta figure out how their consumers will get the cash to consume. Toss all the developers, and a largish cohort of well-paid people heads towards the dole.
Yeah, I don't think this gets enough attention. It still requires a technical person to use these things effectively. Building coherent systems that solve a business problem is an iterative process. I have a hard time seeing how an LLM could climb that mountain on its own.
I don't think there's a way to solve the issue of: one-shotted apps will increasingly look more convincing, in the same way that the image generation looks more convincing. But when you peel back the curtain, that output isn't quite correct enough to deploy to production. You could try brute-force vibe iterating until it's exactly what you wanted, but that rarely works for anything that isn't a CRUD app.
Ask any of the image generators to build you a sprite sheet for a 2d character with multiple animation frames. I have never gotten one to do this successfully in one prompt. Sometimes the background will be the checkerboard png transparency layer. Except, the checkers aren't all one color (#000000, #ffffff), instead it's a million variations of off-white and off-black. The legs in walking frames are almost never correct, etc.
And even if they get close - as soon as you try to iterate on the first output, you enter a game of whack-a-mole. Okay we fixed the background but now the legs don't look right, let's fix those. Okay great legs are fixed but now the faces are different in every frame let's fix those. Oh no fixing the faces broke the legs again, Etc.
We are in a weird place where companies are shedding the engineers that know how to use these things. And some of those engineers will become solo-devs. As a solo-dev, funds won't be infinite. So it doesn't seem likely that they can jack up the prices on the consumer plans. But if companies keep firing developers, then who will actually steer the agents on the enterprise plans?
> It still requires a technical person to use these things effectively.
I feel like few people critically think about how technical skill gets acquired in the age of LLMs. Statements like this kind of ignore that those who are the most productive already have experience & technical expertise. It's almost like there is a belief that technical people just grow on trees or that every LLM response somehow imparts knowledge when you use these things.
I can vibe code things that would take me a large time investment to learn and build. But I don't know how or why anything works. If I get asked to review it to ensure it's accurate, it would take me a considerable amount of time where it would otherwise just be easier for me to actually learn the thing. Feels like those most adamant about being more productive in the age of AI/LLMs don't consider any of the side effects of its use.
It's a fun thought, but you know what we call those people? Poor. The people who light their own money on fire today are ceding power. The two are the same.
1. Some people can afford to light a lot of their money on fire and still remain rich.
2. The trick is to burn other people’s money. Which is a lot more akin to what is going on here. Then, at least in the US, if you’re too big to fail, the fed will just give you more cash effectively diminishing everyone else’s buying power.
In regards to 2: it's as simple as not letting it be your money being set on fire. Every fiscally responsible individual is making sure they have low exposure to the mag 7.
Why do we need people to consume when we have the government?
Serious question. As in, we built the last 100 years on "the american consumer", the idea that it would be the people buying everything. There is no reason that needs to or necessarily will continue-- don't get me wrong, I kind of hope it does, but my hopes don't always predict what actually happens.
What if the next 100 is the government buying everything, and the vast bulk of the people are effectively serfs, who HAVE to stay in line otherwise they go to debt prison or tax prison where they become slaves (yes, the US has a fairly large population of prison laborers who are forced to work for 15-50 cents/hour; the lucky ones can earn as much as $1.50/hour).
https://www.prisonpolicy.org/blog/2017/04/10/wages/
Where will the government get the money to buy anything if the billionaires and their mega corps have it all and spend sufficient amounts to keep the government from taxing them? We have a K-shaped economy where the capital class is extracting all of the value from the working class, who are headed to subsistence levels of income, while the lower class dies in the ditch.
This prevents the consumers from slacking off and enjoying life; instead they have to continue to work, work, work. They get to consume a little, and work much more (after all, they also have to pay interest on the consumer credit that the masses take on, and that adds up to a lot).
In this scenario, it does not even matter that many are unable to pay off all that debt. As long as the amount of work that is extracted from them significantly exceeds the amount of consumption allowed to them all is fine.
The chains that bind used to be metal, but we progressed and became a civilized society. Now it's the financial system and the laws. “The law, in its majestic equality, forbids rich and poor alike to sleep under bridges, to beg in the streets, and to steal their bread.” (Anatole France)
SF is not the only place where housing is expensive. There are plenty of cities where they could build more housing and they don't because it isn't profitable or because they don't have the workers to build more, not because the government is telling them they can't.
It is expensive in those other places for similar reasons as SF -- the government either tells them they can't (through zoning), or makes it very expensive (through regulation, like IZ / "affordable" housing), or limit profitability (rent control), or some combination of the above. All of these reduce the supply of new housing.
Generally the cities where housing is expensive are exactly the ones where the government is telling people they can't build (or making it very expensive to get approval). Do you have a specific example of a city such as you claim?
> US$700 billion could build a lot of infrastructure, housing, or manufacturing capacity.
I am now 100% convinced that the US has the power to build those things, but it will not, because that would mean the lives of ordinary people get elevated even more, and this is not what brutal capitalism wants.
If it can make the top 1% richer over a 10-year span versus doing good for everyone over 20 years, it will go with the former.
What $700 billion can't do is cure cancers, Parkinsons, etc. We know because we've tried and that's barely a sliver of what it's cost so far, for middling results.
Whereas $700 billion in AI might actually do that.
Your name is well earned! "Can't cure cancers" is impressively counterfactual [0], as 5-year survival after a cancer diagnosis is up across almost all categories. Despite every cancer being a unique species trying to kill you, we're getting better and better at dealing with them.
In my experience, most people with cancer that I know simply oscillate between having life-threatening active cancer/tumors and remission.
I don't know any case where people have detectable cancer and it's just being managed, I think that's more the exception than the rule.
For my girlfriend, when she was in her last stages they had to do that (try to slow down/manage the cancer instead of remove it), but that was already palliative care and she died soon after. Also, the only reason they didn't try removing the tumor is because the specific location in the brain (pons) is inoperable.
Yes, we're getting better at treating cancers, but still if a person gets cancer, chances are good the thing they'll die of is cancer. Middling results.
Because we're not good at curing cancers, we're just good at making people survive better for longer until the cancer gets them. 5 year survival is a lousy metric but it's the best we can manage and measure.
I'm perfectly happy investing roughly 98% of my savings into the thing that has a solid shot at curing cancers, autoimmune and neurodegenerative diseases. I don't understand why all billionaires aren't doing this.
If we knew that we probably wouldn’t need AI to tell us.
But realistically: perhaps by noticing patterns we’ve failed to notice and by generating likely molecules or pathways to treatment that we hadn’t explored.
We don’t really know what causes most diseases anyway. Why does the Shingles vaccine seem to defend against dementia? Why does picking your nose a lot seem to increase risk of Alzheimer’s?
That’s the point of building something smarter than us: it can get to places we can’t get on our own, at least much faster than we could without it.
I don’t think that lack of intelligence is the bottleneck. It might be in some places, but categorically, across the board, our bottlenecks are much more pragmatic and mundane.
Consider another devastating disease: tuberculosis. It’s largely eradicated in the 1st world but is still a major cause of death basically everywhere else. We know how to treat it, lack of knowledge isn’t the bottleneck. I’d say effectively we do not have a cure for TB because we have not made that cure accessible to enough humans.
Flying is a bad example because airlines are a thing and make flying relatively accessible.
I get your point, but I don’t think it really matters. If a cure for most (or all) cancers is known but it’s not accessible to most people then it is effectively nonexistent. E.g it will be like TB.
> We have treatments (cures) for TB
TB is still one of the top 10 causes of death globally.
Things like antibiotics are plenty accessible - 3rd world countries are literally overusing and misusing antibiotics to the point of causing drug resistance in TB. "Effectively we do not have [thing] because we have not made that [thing] accessible to enough humans" is an exercise in goal-post moving.
About 15% of people over the age of 15 are illiterate, but it'd be silly to say "effectively we don't have literacy", even in a global context. Depending on the stat, 1 in 10 don't have access to electricity, but electricity has been in 50% of American homes for over 100 years.
The reality is that the future is unevenly distributed. AI and more broadly technology as a whole, will only exacerbate that uneven distribution. That's just the reality of progress: we didn't stall electrifying homes in NYC because they didn't get electricity in Papua New Guinea.
If AI discovers a cure for cancer, it may be incredibly unevenly distributed. Imagine it's some amp'd-up form of CAR-T, requiring huge resources and expenses, but offering an actual cure for that individual. It'd be absurd to say we couldn't consider cancer cured just because the approach doesn't scale to a $1 pill.
> As an example, in the UK in 2013 the cost of standard TB treatment was estimated at £5,000 while the cost of treating MDR-TB was estimated to be more than 10 times greater, ranging from £50,000 to £70,000 per case.
I pulled this from Wikipedia. It does not look like TB treatment is “plenty affordable”.
If the issue is with the semantics of the word “cure” that’s not a hill I’ll die on, but can you see how knowing how to cure something and actually curing something are two vastly different things?
If you told someone a cure for cancer existed but there’s literally no way they could afford it, that sounds a lot like the cure effectively doesn’t exist for that person.
So I’ll posit that the weirdness of such a statement depends entirely on your audience.
If you’re one of the people likely to be able to afford such a cure, it might sound nonsensical.
I’ll also note that I intentionally selected a term with a more narrow definition “effective existence” vs a more general term “existence”. E.g. something can be true in general but effectively false in practice.
It gives me pause that most people drive cars or are willing to sit in one for more than 20 minutes a week.
But people accept the status quo and are afraid to take a moment’s look into the face of their own impending injury, senescence and death: that’s how our brains are wired to survive and it used to make sense evolutionarily until about 5 minutes ago.
Ah, yes: "well, we can't cure cancer or autoimmune and neurodegenerative diseases, but I'm willing to invest basically all my money into a thing that's...trained on the things we know how to do already, and isn't actually very good at doing any of them."
...Meanwhile, we are developing techniques to yes, cure some kinds of cancer, as in every time they check back it's completely gone, without harming healthy tissue.
We are developing "anti-vaccines" for autoimmune diseases, that can teach our bodies to stop attacking themselves.
We are learning where some of the origins of the neurodegenerative diseases are, in ways that makes treating them much more feasible.
So you're 100% wrong about the things we can't do, and your confidence in what "AI" can do is ludicrously unfounded.
Every doctor and researcher in the world is trained on things we already know how to do already.
I’m not claiming we haven’t made a dent. I’m claiming I’m in roughly as much danger from these things right now as any human ever has been: middling results.
If we can speed up the cures by even 1%, that’s cumulatively billions of hours of human life saved by the time we’re done.
But what they can do, that AI can't, is try new things in measured, effective, and ethical ways.
And that hypothetical "billions of hours of human life saved" has to be measured against the actual damage being done right now.
Real damage to economy, environment, politics, social cohesion, and people's lives now
vs
Maybe, someday, we improve the speed of finding cures for diseases? In an unknown way, at an unknown time, for an unknown cost, and by an unknown amount.
Who knows, maybe they'll give everyone a pony while they're at it! It seems just as likely as what you're proposing.
There's one additional question we could have here, which is "is AI here to stay and is it net-positive, or does it have significant negative externalities"
> What we really want to ask ourselves is whether our economy is set up to mostly get things right, or it is wastefully searching.
We've so far found two ways in recent memory that our economy massively fails when it comes to externalities.
Global Warming continues to get worse, and we cannot globally coordinate to stop it when the markets keep saying "no, produce more oil, make more CO2, it makes _our_ stock go up until the planet eventually dies, but our current stock value is more important than the nebulous entire planet's CO2".
Ads and addiction to gambling games, tiktok, etc also are a negative externality where the company doing the advertising or making the gambling game gains profit, but at the expense of effectively robbing money from those with worse impulse control and gambling problems.
Even if the market votes that AI will successfully extract enough money to be "here to stay", I think that doesn't necessarily mean the market is getting things right nor that it necessarily increases productivity.
Gambling doesn't increase productivity, but the market around kalshi and sports betting sure indicates it's on the rise lately.
> People will take courses in those things and try to get a piece of the winnings.
The problem is boom-bust cycles. Electricians will always be in demand but it takes about 3 years to properly train even a "normal" residential electrician - add easily 2-3 years on top to work on the really nasty stuff aka 50 kV and above.
No matter what, the growth of AI is too rapid and cannot be sustained. Even if the supposed benefits of AI all come true - the level of growth cannot be upheld because everything else suffers.
It’s protected by requiring many hours (years) of apprenticeship. These kinds of heavily unionized jobs only reward seniority. Gotta pay your dues buddy!
I'm talking about proper German training, not the kind of shit that leads to what Cy Porter (the home inspector legend) exposes on Youtube.
Shoddy wiring can hold up for a looong time in homes because outside of electrical car chargers and baking ovens nothing consumes high current over long time and as long as no device develops a ground fault, even a lack of a GFCI isn't noticeable. But a data center? Even smaller ones routinely rack up megawatts of power here, large hyperscaler deployments hundreds of megawatts. Sustained, not peak. That is putting a lot of stress on everything involved: air conditioning, power, communications.
And for that to hold up, your neighbor Joe who does all kinds of trades as long as he's getting paid in cash won't cut it.
> What we really want to ask ourselves is whether our economy is set up to mostly get things right, or it is wastefully searching.
I can’t speak to the economy as a whole, but the tech economy has a long history of bubbles and scams. Some huge successes, too—but gets it wrong more often than it gets it right.
AI could be here to stay and "chase a career as an electrician helping build datacenters" could also be a mistake. The construction level could plateau or decline without a bubble popping.
That's why it can't just be a market signal "go become an electrician" when the feedback loop is so slow. It's a social/governmental issue. If you make careers require expensive up-front investment largely shouldered by the individuals, you not only will be slow to react but you'll also end up with scores of people who "correctly" followed the signals right up until the signals went away.
> you'll also end up with scores of people who "correctly" followed the signals right up until the signals went away.
I think this is where we're headed, very quickly, and I'm worried about it from a social stability perspective (as well as personal financial security of course). There's probably not a single white-collar job that I'd feel comfortable spending 4+ years training for right now (even assuming I don't have to pay or take out debt for the training). Many people are having skills they spent years building made worthless overnight, without an obvious or realistic pivot available.
Lots and lots of people who did or will do "all the right things," with no benefit earned from it. Even if hypothetically there is something new you can reskill into every five years, how is that sustainable? If you're young and without children, maybe it is possible. Certainly doesn't sound fun, and I say this as someone who joined tech in part because of how fast-paced it was.
> Many people are having skills they spent years building made worthless overnight, without an obvious or realistic pivot available.
I'd like to see real examples of this, beyond trivial ones like low-quality copywriting (i.e. the "slop" before there was slop) that just turns into copyediting. Current AI's are a huge force multiplier for most white-collar skills, including software development.
> If AI is here to stay, as a thing that permanently increases productivity,
Thing is, I am still waiting to see where it increases productivity aside from some extremely small niches like speech to text and summarizing some small text very fast.
Serious question, but have you not used it to implement anything at your job? Admittedly I was very skeptical but last sprint in 2 days I got 12 pull requests up for review by running 8 agents on my computer in parallel and about 10 more on cloud VMs. The PRs are all double reviewed and QA'd and merged. The ones that don't have PRs are larger refactors, one 40K loc and the other 30k loc and I just need actual time to go through every line myself and self-test appropriately, otherwise it would have been more stuff finished. These are all items tied to money in our backlog. It would have taken me about 5 times as long to close those items out without this tooling. I also would have not had as much time to produce and verify as many unit tests as I did. Is this not increased productivity?
I have an engineering degree but don't call myself an engineer because it's illegal to do so in my country unless I hold a PEng.
That aside, I fail to see how code where I have reviewed every single line, tested all edge cases as normal, had 2 other devs also test this, had a QA test, and had a PO run and approve the behaviour as "rolling the dice" but you're entitled to your opinion.
> I am still waiting to see where it increases productivity...
If you are a software engineer, and you are not using using AI to help with software development, then you are missing out. Like many other technologies, using AI agents for software dev work takes time to learn and master. You are not likely to get good results if you try it half-heartedly as a skeptic.
And no, nobody can teach you these skills in a comment in an online forum. This requires trial and error on your part. If well-known devs like Linus Torvalds are saying there is value here, and you are not seeing it, then the issue is not with the tool.
Same here. I did notice what I think was an actual error on someone's part, there was a chart in the files comparing black to white IQ distributions, and well, just look at it:
Me too. I first assumed it was an OCR error, then remembered they were emails and wouldn't need to go through OCR. Then I thought that the US Government is exactly the kind of place to print out millions of emails only to scan them back in again.