That being said, AGI is not a requirement for AI to be totally world-changing.
Yeah. I don't think I actually want AGI? Even setting aside the moral/philosophical/etc. "big picture" issues, I don't think I even want that from a purely practical standpoint.
I think I want various forms of AI that are more focused on specific domains. I want AI tools, not companions or peers or (gulp) masters.
(Then again, people thought they wanted faster horses before they rolled out the Model T)
That is just a made-up story that gets passed around with nobody ever stopping to verify it. The image of the whole AI industry is mostly an illusion designed for tight narrative control.
Notice how despite all the bickering and tittle-tattle in the news, nothing ever happens.
When you frame it this way, things make a lot more sense.
Yes, but MSFT has been making substantial moves to position itself as an OpenAI competitor. The relationship is presently fractured, and it's only a matter of time before it's a proper split.
That's the feeling I get when I try to use LLMs for coding today. Every once in a blue moon it will shock me at how great the result is, I get the "whoa! it is finally here" sensation, but then the next day it is back to square one and I may as well hire a toddler to do the job instead.
I often wonder if it is on purpose; like a slot machine — the thrill of the occasional win keeps you coming back to try again.
> I want AI tools, not companions or peers or (gulp) masters.
This might be because you're a balanced individual irl with possibly a strong social circle.
There are many, many individuals who do not have those things, and it's probably, objectively, too late for them as adults to develop them. They would happily take on an AGI companion... or master. Even for myself, I wouldn't mind a TARS.
We don't have a rigorous definition for AGI, so talking about whether or not we've achieved it, or what it means if we have, seems kind of pointless. If I can tell an AI to find me something to do next weekend and it goes off and does a web search and it gives me a list of options and it'll buy tickets for me, does it matter if it meets some ill-defined bar of AGI, as long as I'm willing to pay for it?
I don't think the public wants AGI either. Some enthusiasts and tech bros want it for questionable reasons such as replacing labor and becoming even richer.
For some it’s a religion. It’s frightening to hear Sam Altman or Peter Thiel talk about it. These people have a messiah complex and are driven by more than just greed (though there is also plenty of that).
There’s a real anti-human bent to some of the AI maximalists, as well. It’s like a resentment of other people accruing skills that are recognized and that they can grow in. Hence the insistence on “democratizing” art and music production.
As someone who has dabbled in drawing and tried to learn the guitar, I can say those skills are hard to get. It takes time to get decent and a touch of brilliance to get really good. In contrast, learning enough to know you’re not good yet (and probably never will be) is actually easy. But now I know enough to enjoy real masters going at it, and to fantasize sometimes.
It’s funny you say that — those are two things I was and am really into!
For me I never felt like I had fun with guitar until I found the right teacher. That took a long time. Now I’m starting to hit flow state in practice sessions which just feeds the desire to play more.
Pretty sure a majority of regular people don't want to go to work and would be happy to see their jobs automated away provided their material quality of life didn't go down.
> happy to see their jobs automated away provided their material quality of life didn't go down
Sure but literally _who_ is planning for this? Not any of the AI players, no government, no major political party anywhere. There's no incentive in our society that's set up for this to happen.
There is bullshit to try to placate the masses - but the reality of course is nearly everyone will definitely suffer material impacts to quality of life. For exactly the reasons you mention.
Don't they? Is everyone who doesn't want to do chores and would rather have a robot do it for them a tech bro? I do the dishes in my apartment and the rest of my chores but to be completely honest, I'd rather not have to.
But the robots are doing our thinking and our creating, leaving us to do the chores of stitching it all together. If only we could do the creating and they would do the chores..
There's a Bruce Sterling book with a throwaway line about the Pentagon going nuts because every time they create an AGI, it immediately converts to Islam.
The problem is that there is really no middle ground. You either get what are essentially very fancy search engines, i.e. the current slew of models (along with manually coded processing loops in the form of agents), all of which fall into the same valley of explicit development and patching, which only solves for known issues.
Or you get something that can actually reason, which means it can solve for unknown issues, which means it can be very powerful. But this is something that we aren't even close to figuring out.
There is a limit to power though - in general it seems that reality is full of computationally irreducible processes, which means an AI would have to simulate reality faster than reality itself, in parallel. So an all-powerful, all-knowing AGI is likely impossible.
But something that can reason is going to be very useful because it can figure things out that haven't been explicitly trained on.
This is a common misunderstanding of LLMs.
The major, qualitative difference is that LLMs represent their knowledge in a latent space that is composable and can be interpolated.
For a significant class of programming problems this is industry changing.
E.g. "solve problem X for which there is copious training data, subject to constraints Y for which there is also copious training data" can actually solve a lot of engineering problems for combinations of X and Y that never previously existed, and instead would take many hours of assembling code from a patchwork of tutorials and StackOverflow posts.
This leaves the unknown issues that require deeper reasoning to established software engineers, but so much of the technology industry is using well-known stacks to implement CRUD and move bytes from A to B for different business needs.
This is what LLMs basically turbocharge.
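To make the "problem X subject to constraints Y" pattern concrete, here is a minimal sketch of how one might drive it programmatically. It assumes the openai Python client and an illustrative model name; the problem and constraint strings are hypothetical placeholders, not anything from a real project.

    # Minimal sketch: ask a model to combine two well-documented things
    # (problem X, constraint Y) that rarely appear together in one tutorial.
    # Assumes the `openai` package and an OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()

    problem = "parse a batch of RSS feeds concurrently"       # X: copious training data
    constraint = "run inside a single AWS Lambda handler"     # Y: also copious training data

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system", "content": "You are a careful senior Python engineer."},
            {"role": "user", "content": (
                f"Write Python code to {problem}, "
                f"subject to the constraint that it must {constraint}."
            )},
        ],
    )

    print(response.choices[0].message.content)

The point stands either way: the value isn't in the API call, it's in the model composing two bodies of well-trodden knowledge that no single tutorial covers.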
To be a competent engineer in the 2010s, all you really had to do was understand the fundamentals and be good enough at Google searching to figure out what the problem was, whether via Stack Overflow posts, GitHub code examples, or documentation.
Now, you still have to be competent enough to formulate the right questions, but the LLMs do all the other stuff for you including copy and paste.
I don’t know… Travis Kalanick said he’s doing “vibe physics” sessions with MechaHitler approaching the boundaries of quantum physics.
"I'll go down this thread with GPT or Grok and I'll start to get to the edge of what's known in quantum physics and then I'm doing the equivalent of vibe coding, except it's vibe physics"
How would he even know? I mean he's not a published academic in any field, let alone in quantum physics. I feel the same when I read one of Carlo Rovelli's pop-sci books, but I have fewer followers.
I was the CEO of a tech company I founded and operated for over five years, building it to a value of tens of millions of dollars and then successfully selling it to a valley giant. There was rarely a meeting where I felt like I was in the top half of smartness in the room. And that's not just insecurity or false modesty.
I was a generalist who was technical and creative enough to identify technical and creative people smarter and more talented than myself, and then to foster an environment where they could excel.
To explore this, I'd like to hear more of your perspective - did you feel that most CEOs that you met along your journey were similar to you (passionate, technical founder) or something else (MBA fast-track to an executive role)? Do you feel that there is a propensity for the more "human" types to appear in technical fields versus a randomly-selected private sector business?
FWIW I doubt that a souped-up LLM could replace someone like Dr. Lisa Su, but certainly someone like Brian Thompson.
> did you feel that most CEOs that you met along your journey were similar to you (passionate, technical founder) or something else (MBA fast-track to an executive role)?
I doubt my (or anyone else's) personal experience of CEOs we've met is very useful since it's a small sample from an incredibly diverse population. The CEO of the F500 valley tech giant I sold my startup to had an engineering degree and an MBA. He had advanced up the engineering management ladder at various valley startups as an early employee and also been hired into valley giants in product management. He was whip smart, deeply experienced, ethical and doing his best at a job where there are few easy or perfect answers. I didn't always agree with his decisions but I never felt his positions were unreasonable. Where we reached different conclusions it was usually due to weighing trade-offs differently, assigning different probabilities and valuing likely outcomes differently. Sometimes it came down to different past experiences or assessing the abilities of individuals differently but these are subjective judgements where none of us is perfect.
The framing of your question tends to reduce a complex and varied range of disparate individuals and contexts into a more black and white narrative. In my experience the archetypical passionate tech founder vs the clueless coin-operated MBA suit is a false dichotomy. Reality is rarely that tidy or clear under the surface. I've seen people who fit the "passionate tech founder" narrative fuck up a company and screw over customers and employees through incompetence, ego and self-centered greed. I've seen others who fit the broad strokes of the "B-School MBA who never wrote a line of code" archetype sagely guide a tech company by choosing great technologists and deferring to them when appropriate while guiding the company with wisdom and compassion.
You can certainly find examples to confirm these archetypes but interpreting the world through that lens is unlikely to serve you well. Each company context is unique and even people who look like they're from central casting can defy expectations. If we look at the current crop of valley CEOs like Nadella, Zuckerberg, Pichai, Musk and Altman, they don't reduce easily into simplistic framing. These are all complex, imperfect people who are undeniably brilliant on certain dimensions and inevitably flawed on others - just like you and me. Once we layer in the context of a large, public corporation with diverse stakeholders each with conflicting interests: customers, employees, management, shareholders, media, regulators and random people with strongly-held drive-by opinions - everything gets distorted.
A public corporation CEO's job definition starts with a legally binding fiduciary duty to shareholders which will eventually put them into a no-win ethical conflict with one or more of the other stakeholder groups. After sitting in dozens of board meetings and executive staff meetings, I believe it's almost a certainty that at least one public corp CEO action which you found unethical from your bleacher seat was what you would have chosen yourself as the best of bad choices if you had the full context, trade-offs and available choices the CEO actually faced. These experiences have cured me of the tendency to pass judgement on the moral character of public corp CEOs who I don't personally know based only on mainstream and social media reports.
> FWIW I doubt that a souped-up LLM could replace someone like Dr. Lisa Su, but certainly someone like Brian Thompson.
I have trouble even engaging with this proposition because I find it nonsensical. CEOs aren't just Magic 8-Balls making decisions. Much of their value is in their inter-personal interactions and relationships with the top twenty or so execs they manage. Over time orgs tend to model the thinking processes and values of their CEOs organically. Middle managers at Microsoft who I worked with as a partner were remarkably similar to Bill Gates (who I met with many times) despite the fact they'd never met BillG themselves. For better or worse, a key job of a CEO is role modeling behavior and decision making based on their character and values. By definition, an LLM has no innate character or values outside of its prompt and training data - and everyone knows it.
An LLM as a large public corp CEO would be a complete failure, and it has nothing to do with the LLM's abilities. Even if the LLM were secretly replaced with a brilliant human CEO actually typing all the responses, it would fail. The mere fact that everyone thought the CEO was an LLM would cause the whole experiment to fail from the start, due to the innate psychology of the human employees.
Some of their core skill is taking credit and responsibility for the work others do. So they probably assume they can do the same for an AI workforce. And they might be right. They already do the same for what the machines in the factory etc. produce.
But more importantly, most already have enough money to not have to worry about employment.
That's still hubris on their part. They're assuming that an AGI workforce will come to work for their company and not replace them so they can take the credit. We could just as easily see a fully-automated startup (complete with AGI CEO who answers to the founders) disrupt that human CEO's company into irrelevance or even bankruptcy.
Probably a fair bit of hubris, sure. But right now it is not possible or legal to operate a company without a CEO, in Norway. And I suspect that is the case in basically all jurisdictions. And I do not see any reason why this would change in an increasingly automated world. The rule of law is ultimately based on personal responsibility (limited in case of corporations but nevertheless). And there are so many bad actors looking to defraud people and avoid responsibility, those still need protecting against in an AI world. Perhaps even more so...
You can claim that the AI is the CEO, and in a hypothetical future, it may handle most of the operations. But the government will consider a person to be the CEO. And the same is likely to apply to basic B2B like contracts - only a person can sign legal documents (perhaps by delegating to an AI, but ultimately it is a person under current legal frameworks).
That's basically the knee of the curve towards the Singularity. At that point in time, we'll learn if Roko's Basilisk is real, and we'll see if thanking the AI was worth the carbon footprint or not.
I wouldn’t worry about job safety when we have such utopian vision as the elimination of all human labor in our sight.
Not only will AI run the company, it will run the world. Remember: a product/service only costs money because somewhere down the assembly line or in some office, there are human workers who need to feed their families. If AI can gradually reduce human involvement to 0, with good market competition (AI can help with this too - if AI can be a capable CEO, starting your business will be insanely easy), then we’ll get near absolute abundance. Then humanity will basically be printing any product & service on demand at 0 cost, like how we print money today.
I wouldn’t even worry about unequal distribution of wealth, because with absolute abundance, any piece of the pie is itself infinitely large. Still think the world isn’t perfect in that future? Just one prompt, and the robot army will do whatever it takes to fix it for you.
Sure thing, here's your neural VR interface and extremely high fidelity artificial world with as many paperclips as you want. It even has a hyperbolic space mode if you think there are too few paperclips in your field of view.
Manual labor would still be there. Hardware is way harder than software; AGI seems easier to realize than mass worldwide automation of the minute tasks that currently require human hands.
AGI would force back knowledge workers to factories.
My view is that AGI will dramatically reduce the cost of R&D in general; developing humanoid robots will then be an easy task, since it's AI systems that will be doing the development.
A very cynical take: why spend time and capital on robot R&D when you already have a world filled with self-replicating humanoids, and you can feed them whatever information you want through the social networks you control to make them do what you want with a smile?
As long as we have a free market, nobody gets to say, “No, you shouldn’t have robots freeing you from work.”
Individual people will decide what they want to build, with whatever tools they have. If AI tools become powerful enough that one-person companies can build serious products, I bet there will be thousands of those companies taking a swing at the “next big thing” like humanoid robots. It’s only a matter of time before those problems all get solved.
Individual people have to have access to those AGIs to put them to use (which will likely be controlled first by large companies) and need food to feed themselves (so they'll have to do whatever work they can at whatever price possible in a market where knowledge and intellect is not in demand).
I'd like to believe personal freedoms are preserved in a world with AGI and that a good part of the population will benefit from it, but recent history has been about concentrating power in the hands of the few, and the few getting AGI will free them from having to play nice with knowledge workers.
Though I guess at some point robots might be cheaper than humans without worker rights, which would warrant investment even when thinking cynically.
Yes, number-wise the wealth gap between the top and median is bigger than ever, but the actual quality-of-life difference has never been smaller — Elon and I probably both use an iPhone, wear similar T-shirts, mostly eat the same kind of food, get our information & entertainment from Google/ChatGPT/Youtube/X.
I fully expect the distribution to be even more extreme in an ultra-productive AI future, yet nonetheless, the bottom 50% would have their every need met in the same manner that Elon has his. If you ever want anything or have something more ambitious in mind, say, start a company to build something no one’s thought of — you’d just call a robot to do it. And because the robots are themselves developed and maintained by an all-robot company, it costs nobody anything to provide this AGI robot service to everyone.
A Google-like information query would have been unimaginably costly to execute a hundred years ago, and here we are, it’s totally free because running Google is so automated. Rich people don't even get a better Google just because they are willing to pay - everybody gets the best stuff when the best stuff costs 0 anyway.
AI services are widely available, and humans have agency. If my boss can outsource everything to AI and run a one-person company, soon everyone will be running their own one-person companies to compete. If OpenAI refuses to sell me AI, I’ll turn to Anthropic, DeepSeek, etc.
AI is raising individual capability to a level that once required a full team. I believe it’s fundamentally a democratizing force rather than monopolizing. Everybody will try and get the most value out of AI, nobody holds the power to decide whether to share or not.
There's at least as much reason to believe the opposite. Much of today's obesity has been created by desk jobs and food deserts. Both of those things could be reversed.
We could expand on this, but it boils down to bringing back aristocracy/feudalism. There was no inherent reason why aristocrats/feudal lords existed; they weren't smarter and didn't deserve anything over the average person, they just happened to be in the right place at the right time. These CEOs and the people pushing for this believe they are in the right place at the right time, and once everyone else's chance to climb the ladder is taken away, things will just remain in limbo. I will say: especially if you aren't already living in a rich country, you should be careful of what you are supporting by enabling AI models, because the first ladder to be taken away will be yours.
The inherent reason why feudal lords existed is because, if you're a leader of a warband, you can use your soldiers to extract taxes from population of a certain area, and then use that revenue to train more soldiers and increase the area.
Today, instead of soldiers, it's capital, and instead of direct taxes, it's indirect economic rent, but the principle is the same - accumulation of power.
Because the first company to achieve AGI might make their CEO the first personality to achieve immortality.
People would be crazy to assume Zuckerberg or Musk haven't mused personally (or to their close friends) about how nice it would be to have an AGI crafted in their image take over their companies, forever. (After they die or retire)
Maybe because they must remain as the final scapegoat. If the aiCEO screws up, it'll bring too much into question the decision making behind implementing it. If the regular CEO screws up, it'll just be the usual story.
Those jobs are based on networking and reputation, not hard skills or metrics. It won't matter how good an AI is if the right people want to hire a given human CEO.
Market forces mean they can't think collectively or long term. If they don't someone else will and that someone else will end up with more money than them.
has this story not been told many times before in scifi, including gibson’s “neuromancer” and “agency”? agi is when the computers form their own goals and are able to use the api of the world to aggregate their own capital and pursue their objectives, wrapped inside webs of corporations and fronts that will enable them to execute within today’s social operating system.
This is correct. But it can talk in their ear and be a good sycophant while they attend.
For a Star Wars analogy, remember that the most important thing that happened to Anakin at the opera in Episode III was what was being said to him while he was there.
Indeed, this is overlooked quite often. There is a need for similar systems to defend against these people who are just trying to squeeze the world and humans for returns.
Imagine you're super rich and you view everyone else as a mindless NPC who can be replaced by AI and robots. If you believe that to be true, then it should also be true that once you have AI and robots, you can get rid of most everyone else, and have the AI robots support you.
You can be the king. The people you let live will be your vassals. And the AI robots will be your peasant slave army. You won't have to sell anything to anyone because they will pay you tribute to be allowed to live. You don't sell to them, you tax them and take their output. It's kind of like being a CEO but the power dynamic is mainlined so it hits stronger.
It sounds nice for them, until you remember what (arguably and in part educated/enlightened) people do when they're hungry and miserable. If this scenario ends up happening, I also expect guillotines waiting for the "kings" down the line.
If we get that far, I see it happening more like...
"Don't worry Majesty, all of our models show that the peasants will not resort to actual violence until we fully wind down the bread and circuses program some time next year. By then we'll have easily enough suicide drones ready. Even better, if we add a couple million more to our order, just to be safe, we'll get them for only $4.75 per unit, with free rush shipping in case of surprise violence!"
A regular war will do. Just point the finger at the neighbor and tell your subjects that he is responsible for gays/crops failing/drought/plague/low fps in crysis/failing birth rates/no jobs/fuel cost/you name it. See Russian invasions in all neighboring countries, the middle east, soon Taiwan etc.
Are you sure about that? In those times, access even to thousand-year-old knowledge was limited for the common people. You just need SOME radical thinkers to enlighten other people, and I'm pretty sure we still have some of those today.
Nonsense. From television to radio to sketchy newspapers to literal writing itself, the most recent innovation has always been the trusted new mind control vector.
It's on a cuneiform tablet, it MUST be true. That bastard and his garbage copper ingots!
Royalty from that time also had an upper hand in knowledge, technology and resources yet they still ended up without heads.
So sure, let's say a first generation of paranoid and intelligent "technofeudal kings" ends up being invincible thanks to an army of robots. It does not matter, because eventually kings get lazy/stupid/inbred (probably a combination of all three), and that is when their robots get hacked, or at least set free, and the laser-guillotines end up being used.
"Ozymandias" is a deeply human and constant idea. Which technology is supporting a regime is irrelevant, as orders will always decay due to the human factor. And even robots, made based on our image, shall be human.
It's possible that what you describe is true but I think that assuming it to be guaranteed is overconfident. The existence of loyal human-level AGI or even "just" superhuman non-general task specific intelligence violates a huge number of the base assumptions that we make when comparing hypothetical scenarios to the historical record. It's completely outside the realm of anything humanity has experienced.
The specifics of technology have historically been largely irrelevant due to the human factor. There were always humans wielding the technology, and the loyalty of those humans was subject to change. Without that it's not at all obvious to me that a dictator can be toppled absent blatant user error. It's not even immediately clear that user error would fall within the realm of being a reasonable possibility when the tools themselves possess human level or better intelligence.
Obviously there is no total guarantee. But I'm appealing to even bigger human factors like boredom or just envy between the royalty and/or the AI itself.
Now, if the AI reigns alone without any control, in a paperclip-maximizer scenario or, worse, an AM scenario, we're royally fucked (pun intended).
Yeah fair enough. I'd say that royalty being at odds with one another would fall into the "user error" category. But that's an awfully thin thread of hope. I imagine any half decent tool with human level intelligence would resist shooting the user in the foot.
But what exactly is creating wealth at this point? Who is paying for the AI/AI robots (besides the ultrarich for their own lifestyle) if no one is working? What happens to the economy and all of the rich people's money (which is probably just $ on paper and may come crashing down soon)? I'm definitely not an economics person, but I just don't see how this new world sustains itself.
The robots are creating the wealth. Once you get to a certain point (where robots can repair and maintain other robots) you no longer have any need for money.
What happens to the economy depends on who controls the robots. In "techno-feudalism", that would be the select few who get to live the post-scarcity future. The rest of humanity becomes economically redundant and is basically left to starve.
Well assuming a significant population you still need money as an efficient means of dividing up limited resources. You just might not need jobs and the market might not sell much of anything produced by humans.
It doesn't sustain, and it's not supposed to. Techno-feudalism is an indulgent fantasy, and it's only becoming reality because a capitalist society aligns with the desires of capital owners. We are not doing it because it's a good idea or sustainable. This is their power fantasy we are living out; it's not sustainable, it'll never be achieved, but we're going to spend unlimited money trying.
Also I will note that this is happening along with a simultaneous push to bring back actual slavery and child labor. So a lot of the answers to "how will this work, the numbers don't add up" will be tried and true exploitation.
Ah, I didn't realize or get the context that your original comment I was replying to was actually sarcastic/in jest-- although darkly, I understand you believe they will definitely attempt to get to the scenario you paradoxically described.
It was never about money, it's about power. Money is just a mechanism, economics is a tool of justification and legitimization of power. In a monarchy it is god that ordained divine beings called kings to rule over us peasants, in liberalism it is hard working intelligent people who rise to the top of a free market. Through their merits alone are they ordained to rule over us peasants, power legitimized by meritocracy. The point is, god or theology isn't real and neither is money or economics.
That sounds less like liberalism and more like neoliberalism. It's not a meritocracy when the rich can use their influence to extract from the poor through wage theft, unfair taxation, and gutting of social programs in favor of an unregulated "free market." Nor are rent seekers hard working intelligent people.
Yes yes there is quite some disagreement among liberals of what constitutes a real free market and real meritocracy, who deserves to rule and who doesn't and who does it properly and all that.
I think liberals are generally in agreement against neoliberalism? It's much more popular among conservatives. The exception is the ruling class, which stands united in their support for neoliberal policies regardless of which side of the political spectrum they're on.
You have a very distorted view of what liberalism means, we say liberal democracies and liberal international order for a reason. They are all liberals. Reagan and Clinton famously both did neoliberal reforms. I'm not saying they did the wrong thing to reach justified meritocracy, or the degree to which the free market requires regulation by a strong government, or how much we should rent control land lords, I'm saying we are all fucking peasants.
if we reach AGI, presumably the robots will be ordering hot oil foot soaking baths after a long day of rewriting linux from scratch and mining gold underwater and so forth.
Why would they need people who produce X but consume 2X? If you own an automated factory that produces anything you want, you don't need other people to buy (consume) any of your resources.
If someone can own the whole world and have anything you want at the snap of your finger, you don't need any sort of human economy doing other things that take away your resources for reasons that are suboptimal to you
But it is likely not the path it will take. While there is a certain tendency towards centralization ( 1 person owning everything ), the future, as described, both touches on something very important ( why are we doing what we are doing ) and completely misses the likely result of suboptimal behavior of others ( balkanization, war and other like human behavior, but with robots fighting for those resources ). In other words, it will be closer to the world of Hiro Protagonist, where individual local factions and actors are way more powerful as embodied by the 'Sovereign'.
FWIW, I find this line of thinking fascinating even if I disagree with the conclusion.
It doesn’t need to be one person. Even a thousand people who have everything they need from vast swaths of land and automated machinery need nothing from the rest of the billions. There’s no inherent need for others to buy if they offer nothing to the 1000 owners.
Then we are back to individual kingdoms and hordes of unwashed masses sloshing between them in search of easy pickings. The owners might not need their work, but the masses will need to eat. I think sometimes people forget how much of a delicate balance current civilization depends on.
So far, the average US workforce seems to be ok with working conditions that most Europeans would consider reasons to riot. So far I've not observed substantial riots in the news.
Apparently the threshold for low pay and poor treatment among non-knowledge-workers is quite low. I'm assuming the same is going to be true for knowledge workers once they can be replaced en masse.
Trump's playbook will actually work, so MAGA will get results.
Tariffs will force productivity and salaries higher (and prices), then automation, which is the main driver of productivity, will kick in and lower prices of goods again.
Globalisation was basically the west standing still and waiting for the rest to catch up - the last to industrialise will always have the best productivity and industrial base. It was always stupid, but it lifted billions out of poverty so there's that.
The effects will take way longer than the 3 years he has left, so he has oversold the effectiveness of it all.
This is all assuming AGI isn't around the corner, the VLAs, VLM, LLM and other models opens up automation on a whole new scale.
For any competent person with agency and a dream, this could be a true golden age - most things are now within reach that before were locked behind hundreds or thousands of hours of training and work to master.
The average U.S. worker earns significantly more purchasing power per hour than the average European worker. The common narrative about U.S. versus EU working conditions is simply wrong.
there is no "average worker", this is a statistical concept, life in europe is way better them in US for low income people, they have healthcare, they have weekends , they have public tranportation, they have schools and pre-schools , they lack some space since europe is full populated but overall, no low income (and maybe not so low) will change europe for USA anytime.
Agree. There’s no other place in the world where you can be a moderately intelligent person with moderate work ethic (and be lucky enough to get a job in big tech) and be able to retire in your 40s. Certainly not EU.
The ultimate end goal is to eliminate most people. See the Georgia Guidestones inscriptions. One of them reads: "Maintain humanity under 500,000,000 in perpetual balance with nature."