> Once an AI gets just 1% efficient at automation, it achieves 100% efficiency quickly thereafter
This is really optimistic and completely overlooks the 80/20 problems that have plagued both AI automation and traditional automation. The AI sweeps up the easy cases and humans are left to fix, or paper over with heuristics, the remainder.
Example? Speech-to-text. It's pretty good, and the spread of automated closed captions to YouTube videos is a definite benefit. But broadcast TV still uses human subtitlers almost exclusively, because the last few percent of accuracy isn't there.
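To put a number on "the last few percent": transcription quality is typically scored as word error rate (WER), the word-level edit distance between the reference and the machine transcript, divided by the reference length. A minimal sketch in plain Python (no ASR library assumed):

```python
# Word error rate (WER): edit distance between reference and
# hypothesis word sequences, divided by the reference length.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits needed to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

# Two of nine words wrong: a "roughly 78% accurate" caption line.
print(wer("the quick brown fox jumps over the lazy dog",
          "the quick brown fox jumps over a lazy log"))
```

Even a system at 95% word accuracy garbles one word in twenty, which is why broadcast subtitling still pays humans to close that gap.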
Software development hasn't even been reliably outsourced to cheap humans yet. Perhaps you can automate some of the production of CRUD tools, in the absence of a better framework for them (frameworks, compilers, libraries and npm should all be thought of as forms of automation). But anything with a competitive advantage will rely on humans for a long time.
Exactly - AI is amazing at taking millions of datapoints and interpolating between them to be extremely accurate - as long as it's well within the manifold of examples it's seen. But edge cases where it has either not seen it before, or has seen few examples? Who knows, usually a lot worse.
To further support your point, AI is 90% efficient/good at driving. But the last 10% is so hard that some think it's impossible to have 100% self-driving cars.
Self-checkout isn't a great example since it's mainly just shifting the work from one human (the employee) to another (the customer), rather than true automation.
As a side note, we were told decades ago that visual programming would similarly displace developers; however, I'm doing perfectly fine
The best thing about self checkout is the parallel stations, so that getting behind a single black swan customer won't set me back by half an hour.
In some cases, shifting work to the customer is a net positive, especially if the customer can estimate their own time and effort better than they can estimate and manage someone else doing it. Nearly a half century after The Mythical Man Month, managing software development is still widely agreed to be an unsolved problem.
The cost of software development is not just the time and effort, but the existential risk to your own job or business from its unpredictability. This is why I do all of my own programming, and also most of my own home repairs... and use the self checkout at the supermarket.
It didn't displace developers because the industry has grown to absorb them. In my own case, I use my programs to get my own job done, then hand the whole thing over to the project team, and the developers write the production code that ends up in the product... but not at my expense.
> Self-checkout isn't a great example since it's mainly just shifting the work from one human (the employee) to another (the customer), rather than true automation.
It may not be true automation, but it's very effective, and it's something we're seeing not only at the grocery store. What used to involve calling a person up and asking for services is now done through web portals... taking a few minutes of the person's time.
The real issue in this degradation of customer service is usually not the time cost but the fact that, while it doesn't change the amount of random human error that occurs, it shifts the penalties onto the users... if your travel agent screws up, you get a refund... if you screw up (in something that used to be someone else's job) you're told to eat the costs.
Thing is, replacing a process with an ersatz alternative may not be "true automation", but it doesn't need to be in order to work. And job destruction across an industry isn't 0/1. There are still jobs in car factories in Detroit, just far fewer of them. There will still be jobs for truck drivers in 2040, but probably not nearly as many. Employers will still need programmers; they might not need the same number of us. It only takes a dip of a few percent in job supply to cause wages and working conditions to crater (that's inelasticity), and even this says nothing about the ripple effects and systemic calamities (up to and including the possibility of economic depressions that last decades) that occur when large numbers of working people lose substantial income.
True automation would be a system that notes and bills, with 100% accuracy, every bagged item you carry out of the shop, without ever scanning anything. That turns the whole thing into a much harder problem to solve. The first 90% is simple, but that doesn't change the fact that the last 10% is essentially as much work as the first 90% all over again...
I like it. It’s one of my favorite innovations of all time. I don’t use it when I have a ton of stuff of course but for my frequent trips to buy <10 items it’s much quicker than waiting in line.
I’ve seen different self checkout systems that provide a vastly different experience. Some are painful, like the ones where you scan a bar code and place the items on a scale. It’s effectively a checkout where I am now the clerk, but less well trained, less experienced, and unable to solve my own issues when they happen.
I’ve also seen a superb one at Decathlon stores in Poland. You dump your items in a box. It automagically scans everything using something that I’m guessing is RFID based. Bag, swipe a card to pay, and go. It is much simpler to use and is actually pleasant. It’s one step short of just walking out of the store without actually checking out.
I think this experience probably varies a lot person-to-person. For someone like me who often experiences social anxiety even from the most minimal of interactions, being able to keep my earbuds in and not have to talk to a stranger can make a huge difference if I happen to feel stressed on a given day. The biggest issue I used to have was when I wanted to carry something out instead of putting it in a bag, but since all they check for bagging is that the weight is placed on the surface, you can just put the item on the surface outside of a bag after scanning, and it will accept it as being "bagged".
Sure, and I do always strive to be nice to anyone I have to interact with, whether they're cashiers or otherwise. Social anxiety doesn't tend to be a rational thing though, so the fact that consequences are unlikely doesn't really make it go away. I know that nothing bad is going to happen from a minor social interaction, and yet that does absolutely nothing to make me dread it less if I happen to be in an anxious state. A lot of mental health things work similarly to this; for example, having depression doesn't just mean you're sad because things aren't going well: you can be despondent despite everything going perfectly fine in your life. If you can't imagine having feelings like this that conflict with what you actually know about a situation, then you're one of the lucky ones, but you probably do know someone who would know exactly what I'm talking about.
Depends. Software could be like lighting, where the cheaper it gets, the more resources are spent on it. Right now it looks like it. Humans want to be able to see everywhere, and we want everything to be smart. Maybe a biotech revolution will shift demand towards that, but that’s speculation at this point.
If you look back far enough, quite a lot of development tasks have already been automated. There aren't all that many people manually "coding", i.e. turning high level languages into machine code by hand. That got automated early on.
It is already happening. For example, setting up a production DB in primary-secondary mode with daily backups and whatnot used to be quite a bit of manual grunt work. AWS Aurora and similar services handle 70-80% of the pain, leaving you with setting up the right tables, indexes, etc. Same with setting up a backend stack with DNS, load balancer, etc.
My guess is cloud services will automate even DB configuration. Since they have 100% visibility into the true production load pattern, they will be able to suggest or even auto-configure things such as indexes.
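A hypothetical sketch of what that suggestion logic could look like. No cloud provider exposes exactly this, and the regex-based predicate extraction is deliberately crude; the point is just that whoever sees the full query stream can count which columns are filtered on most often:

```python
import re
from collections import Counter

# Hypothetical: mine a query log for columns that appear in WHERE
# clauses often enough to be worth indexing.
def suggest_indexes(query_log, threshold=2):
    hits = Counter()
    for sql in query_log:
        m = re.search(r"WHERE\s+(.*)", sql, re.IGNORECASE)
        if not m:
            continue
        # Crude: pull "col = ...", "col > ...", "col LIKE ..." predicates.
        for col in re.findall(r"(\w+)\s*(?:=|<|>|LIKE|IN)",
                              m.group(1), re.IGNORECASE):
            hits[col.lower()] += 1
    return [col for col, n in hits.most_common() if n >= threshold]

log = [
    "SELECT * FROM orders WHERE customer_id = 42",
    "SELECT * FROM orders WHERE customer_id = 7 AND status = 'open'",
    "SELECT * FROM orders WHERE created_at > '2022-01-01'",
]
print(suggest_indexes(log))  # customer_id is filtered on most often
```

A real service would of course work from the planner's statistics rather than regexes, but the asymmetry is the same: the platform sees every query, the customer doesn't.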
That's why most dev teams now do dev ops, where before you had an ops team and on premise infrastructure, which is still necessary in some sectors with strict data regulations but not for everyone.
Indeed. We now have dev teams tinkering with the infra layer and, more often than not, shooting themselves in the foot because the UI makes it easy to play around. We often read horror stories of a one-person startup accidentally running up a $1MM AWS bill.
That said, we will continue to need a dedicated dev-ops team. Just that instead of a 10-person team it'd require a 2-person team. But you still need a person dedicated (maybe part time) to baby-sitting infrastructure. Don't let every dev get their hands on infra.
Consider the scenario in which demand for labor drops massively but human labor is still needed. Many industries would lose 80% of workers, but 20% would remain. Those 20% would be very high status and well paid, at least compared to the UBI neo-serf class that their former colleagues of the 80% have fallen into. It is these 20% that will form a new upper middle class. They will be isolated from the UBI serfs and live in gated communities.
These communities won’t be like the extremely luxurious communities of the one percent, but still isolated from the poor.
In another thread on this topic I’ve predicted a rise of technologically driven authoritarianism where the human elite uses AI to oppress the poor. The 20% that maintain such a system’s infrastructure (technical and otherwise) would probably support it politically. In most right wing authoritarian states the government has drawn support from its upper middle class. In left wing authoritarian states like the Soviet Union there was a vast bureaucrat class that was structurally isomorphic to the upper middle class.
I’m predicting that the humans that hold onto their jobs the longest, and work in high paying stem jobs, will form the core of such a regime. This is good news for many HN users, because you’ll be part of those 20%. I’m just worried what happens to the people that aren’t so lucky.
Of course there are caveats here that the automation may not happen soon, happen quickly or lead to societal upheaval or fascism, but all of those are still possible.
Will they bother to oppress the poor, or will they start killing them off completely? Historically, peasants have been oppressed but still kept alive because the rich and powerful relied on them to farm the food, build the houses and empty the chamberpots. In a world with 80% unemployment, will the rich and powerful be merciful enough to only oppress that 80% of the population, or will they kill them instead?
You know, the entirety of economics is doomed because frankly it is just there to justify and explain wealth inequality rather than actually maximize economic efficiency. The Laffer curve is the go-to example people bring up. Completely made up.
Oh, that didn't cross my mind. Yes, that's a serious possibility.
Take it a few more steps and then you get to automated analysis tools, system simulations, and NLP systems. This looks essentially like Iron Man's Jarvis running heuristics interactively.
"Which edge cases did you cover?"
`I covered 3000 edge cases extrapolated from these bug reports...`
This is a possibility, but it presents a problem. In order to be a senior enough engineer to review PRs really well, you need years of practical experience. Those years start out in entry level jobs where you make lots of mistakes and learn by doing. But these are the very jobs that AI is most likely to eliminate first. So the first AIs will have good PR reviewers that were “classically trained”. Who will review the PRs of AI 50 years from now?
Great observation. Reminds me of this article[] about how surgeons are having a hard time getting experience with the rise of robotics.
It may be that this training will need to get pushed to the education system (whatever that looks like by that point).
Additionally, there can be a mentorship model where the junior performs the first pass of reviews and iterations before presenting to the senior.
Think Star Trek: how does anyone learn to fly a spaceship? Lots of simulation and running smaller parts of the system in close observation... which isn't too different from now.
The thing people forget about whether it's 80/20 (and the last 20 takes infinite time) or whether it's 80/20 (and it happens super-linearly) is the nature of the 20.
One of two things tends to happen.
1) The 20 is a fundamentally different problem, but is itself solved by applying different new developments (e.g. most breakthrough devices: semiconductor manufacture and design, cellular phones, iPhone)
2) The 80 is so useful that work realigns to avoid / ignore the 20 (e.g. most mass market adoptions: horse drawn carriages, trains, automobiles, planes, television, personal and business computing)
The key thing realistic futurists harp on is that what drives revolutions is superior utility versus the status quo.
If AI is "same as human, without the labor costs," it will be adopted gradually. If AI is "better than human in a fundamental way," it will be adopted overnight.
I would argue that horse drawn carriages were a 100% solution for something like 8000 years of human history. It’s been barely over a hundred years that there has been something that could efficiently replace them, and the ‘self driving’ modern replacements have maybe 0.01% the smarts of a horse.
I mean, go get drunk and pass out on the back of your horse and the chances are real good it’s going to take you home without running into the back of a parked emergency vehicle sitting on the side of the road without giving it specific directions to do so.
Full disclosure: I’ve never passed out on a horse…
Horses produce a lot of shit, need to be fed, generate rather large biohazards when they expire, and have an oscillating motion that isn't a great match for drunkenness.
"A horse for every person" probably wasn't the best of times.
AI doesn't have to fully replace humans (or even be that good) to have detrimental effects on wages and working conditions. Jobs are like oil; if the people who control the supply can reduce quantity by 10 percent, prices spike (this, for workers, would mean that wages decline, work conditions worsen, and hours grow longer).
We're not going to see AIs replacing humans in all jobs any time soon. We are going to see, barring regulation or (better yet) a total overthrow of the corporate system, increasing power accruing to capital--due to AIs' ability to perform the "80" in the aforementioned 80/20 analysis--with devastating human consequences. And this is true even though DALL-E, in the given example, doesn't actually understand (even in a figurative sense) that a third of Africa is desert or that certain facial expressions indicate concentration.
If an AI can do even half of your job--and anything you do as a subordinate probably can, in principle, be done by machines--you should be terrified. You're not going to be working 20 hours per week; you're going to be working for pennies due to wage inelasticity--there's now twice as much competition for jobs. Automation is of course both desirable and inevitable, but we've got to transition to an economic system in which it doesn't result in widespread poverty and homelessness... which is not the one we've got right now.
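To make the inelasticity point concrete, here's a toy constant-elasticity model. The elasticity value and numbers are invented for illustration, not real labor-market data:

```python
# Toy model: worker labor supply has constant elasticity e, so quantity
# supplied moves as q = q0 * (w / w0) ** e. Absorbing a demand cut of
# `cut` (a fraction of jobs) then requires the wage to fall to
#   w = w0 * (1 - cut) ** (1 / e)
def clearing_wage(w0, cut, elasticity):
    return w0 * (1 - cut) ** (1 / elasticity)

# With fairly inelastic supply (e = 0.2: people need jobs regardless of
# pay), a 10% drop in jobs demanded forces roughly a 41% wage cut
# before the market clears.
print(clearing_wage(100_000, 0.10, 0.2))  # ~59,049
```

The exact number is an artifact of the made-up elasticity, but the shape of the result is the point: when supply is inelastic, a small quantity shock translates into a disproportionately large price (wage) move.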
> If AI is "better than human in a fundamental way," it will be adopted overnight.
Are there any examples of this, and how would we qualify them? Do plenty of pedestrian control systems also satisfy that criterion? After all, the SpaceX landing system is a feat of piloting that would probably be impossible for a human, but I don't believe the system is "AI" rather than an extremely well tuned but conventional combination of Kalman filters and feedback loops.
> Speech-to-text. It's pretty good and the spread of automated closed captions to youtube videos is a definite benefit. But broadcast TV will almost exclusively use human subtitlers still, because the last few percent of accuracy isn't there.
This also widely varies by language. Automated Japanese subtitles on YouTube are completely useless. Even with someone speaking clearly you'll find at least 2-3 mistakes every line of dialogue and if they speak slightly quickly then all bets are off (it also makes mistakes that seem strange -- Japanese has a very limited set of phonemes so consistently guessing the wrong ones seems very strange).
I agree with your analysis. And yet in the majority of jobs, the work required is at a level that AI will be able to replace effectively.
The jobs that it will replace will be the entry level jobs. It will replace the jobs where excellence is not required. It will replace the jobs where quantity matters more than quality. These are the jobs that everyone starts at. Some people learn and grow and become experts. And some remain at some lower level for their whole career. This works well when there is enough work to go around. When AI replaces the need for low-end professionals, where will experts learn their trade?
When combustion engines replaced horses in many applications, they did not eliminate all horse handling jobs. There were just significantly fewer horse handling jobs left.
Hmm, that said, the purpose of using AI is that it can replace human labour and save labour costs. Otherwise, we wouldn't see half the money flowing into AI research. So AI certainly might prove to become a crisis for humans on economic and military fronts. Since the end goal of AI will result in a crisis for humans, isn't it time for governments to step in and regulate?
> Since the end goal of AI will result in a crisis for humans, isn't it time for governments to step in and regulate?
Your crisis has been added to the queue and will be dealt with in due course. Your crisis is .. <click> 1,734 in the crisis queue.
(Seriously, I think this is best left alone until it clarifies or is subsumed into the general "regulation of doing harmful things with computers", like privacy laws. Self-driving cars are their own game and deserve immediate scrutiny.)
I think that we are neglecting the possibility that AI advancement and growth may proceed at an exponential pace, as the author mentioned. The more people join the AI industry, the faster AI advances, the more money is invested in AI, and the more people join the AI industry in turn. Many people may join the AI industry out of fear of losing their current jobs to AI. This may lead to a sudden avalanche of AI advances (a domino effect) and also an overnight collapse of the economy.
Once we reach AGI, there is no way to back out of it and say we don't want it, because now other countries will also have AGI. And there won't be an overnight solution to deal with the overnight crisis AGI created.
How many decades is "overnight" supposed to represent here? Or do you really mean overnight? The WarGames scenario where the AI decides the only logical option is a pre-emptive nuclear strike?
Collapses, even in seriously banana republic countries, do not happen overnight. They are slow and agonising, and present options for .. kinetic solutions.
> Once we reach AGI, there is no way to back out of it and say we don't want it, because now other countries will also have AGI.
Oh sweet child. You think too much of humanity as a whole.
If what you said was true, all countries would have had an equivalent of TSMC to begin with. At the very least, every country would have had multiple supply chains for anything they deemed of strategic value. Each country would manufacture cars, computers, commodities.
But instead, it’s all outsourced to a player who they think is the best. So if one day some anomaly happens and there is a new sea where China used to be, at least a lot of Europe and North America is going to be screwed big time.
Same with AGI. It’s going to happen at one company in one country. Not only will everyone else outsource their AI needs to that company, but its home country is going to have leverage over all of it, while lurking in the shadows most of the time.
And once everyone else is dependent on HAL, should it, say, commit suicide as in that Asimov story, all countries that signed up are going to be fucked.
I think this is thinking small. An AI doesn't need to use human readable programming languages or frameworks to solve problems. Also software engineers are expensive. Corporate greed will drive this faster.
> An AI doesn't need to use human readable programming languages or frameworks to solve problems.
Why would an AI writing assembly give it an advantage?
> Also software engineers are expensive. Corporate greed will drive this faster.
Are they? You can get a software engineer for less than $10,000/year.
When I was going to university in the mid-00s I had quite a few "advisors" recommend against going into computer science, because corporate greed meant all the jobs were going to be outsourced and US salaries would tank as a result.
And yet 17 years later, salaries are still sky high. Even someone who believed that advisor could have taken the job, been paid a lot, and had time on the side or enough of a safety net to learn new skills if salaries ever tanked (which we still don't really see a sign of)...
Yeah exactly. The impending apocalypse that will make software developers redundant has been just around the corner for 20 years, during which time the number of people employed in software has increased exponentially and software developer salaries are some of the highest you can get for non-managerial work.
> Why would an AI writing assembly give it an advantage?
There are lots of non-human-optimized options that aren't raw assembly. Any intermediate representation, for one (e.g. LLVM IR, the C-- used by GHC, Python/JVM bytecode, or abstract syntax trees that sidestep parsing entirely).
And even raw assembly is often not what the CPU operates directly on (see: microcode).
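For a concrete look at one such intermediate representation, the Python standard library's `dis` module shows the stack-machine bytecode the interpreter actually executes:

```python
import dis

def add(a, b):
    return a + b

# The source text is for humans; the interpreter runs stack-machine
# bytecode like LOAD_FAST and BINARY_ADD (named BINARY_OP on
# CPython 3.11+). Exact opcodes vary by Python version.
dis.dis(add)
```

Nothing stops a code generator from targeting this layer (or LLVM IR, or JVM bytecode) directly, which was the point above: the human-readable surface language is optional.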
Yes, but that's not really relevant to what I was asking -- why would using any of those give an AI an advantage over, say, generating Python or C code?