> Once an AI gets just 1% efficient at automation, it achieves 100% efficiency quickly thereafter
This is really optimistic and completely overlooks the 80/20 problems that have plagued both AI automation and traditional automation. The AI sweeps up the easy cases and humans are left to fix, or paper over with heuristics, the remainder.
Example? Speech-to-text. It's pretty good, and the spread of automated closed captions to YouTube videos is a definite benefit. But broadcast TV still almost exclusively uses human subtitlers, because the last few percent of accuracy isn't there.
Software hasn't even been reliably outsourced to cheap humans yet. Perhaps you can automate some of the production of CRUD tools, in the absence of a better framework for them (frameworks, compilers, libraries, and npm should all be thought of as forms of automation). But anything with a competitive advantage will be relying on humans for a long time.
Exactly - AI is amazing at taking millions of datapoints and interpolating between them to be extremely accurate - as long as it's well within the manifold of examples it's seen. But edge cases, where it has either never seen the situation before or has seen only a few examples? Who knows; usually it does a lot worse.
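A toy model makes the point concrete. Here a least-squares line (standing in for any learned model) is fit to y = x^2 on [0, 1], then queried both inside and far outside its training range:

```python
# Toy illustration: a model fit inside its training range interpolates
# well but extrapolates badly. A least-squares line stands in for any
# learned model; the "true" function is y = x^2.

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b /= sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Train on points densely sampled from [0, 1] (the "manifold").
xs = [i / 100 for i in range(101)]
ys = [x * x for x in xs]
a, b = fit_line(xs, ys)

in_range_err = abs((a + b * 0.5) - 0.5 ** 2)   # query inside training range
out_range_err = abs((a + b * 5.0) - 5.0 ** 2)  # query far outside it

print(in_range_err, out_range_err)  # small vs. huge
```

Inside [0, 1] the line hugs the parabola; at x = 5 it is off by a factor of five, and nothing in the fit even hints at that.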
To further support your point, AI is 90% efficient/good at driving. But the last 10% is so hard that some think it's impossible to have 100% self-driving cars.
Self-checkout isn't a great example since it's mainly just shifting the work from one human (the employee) to another (the customer), rather than true automation.
As a side note, we were told decades ago that visual programming would similarly displace developers; however, I'm doing perfectly fine.
The best thing about self checkout is the parallel stations, so that getting behind a single black swan customer won't set me back by half an hour.
In some cases, shifting work to the customer is a net positive, especially if the customer can estimate their own time and effort better than they can estimate and manage someone else doing it. Nearly a half century after The Mythical Man Month, managing software development is still widely agreed to be an unsolved problem.
The cost of software development is not just the time and effort, but the existential risk to your own job or business from its unpredictability. This is why I do all of my own programming, and also most of my own home repairs... and use the self checkout at the supermarket.
It didn't displace developers because the industry has grown to absorb them. In my own case, I use my programs to get my own job done, then hand the whole thing over to the project team, and the developers write the production code that ends up in the product... but not at my expense.
> Self-checkout isn't a great example since it's mainly just shifting the work from one human (the employee) to another (the customer), rather than true automation.
It may not be true automation, but it's very effective, and it's something we're seeing not only at the grocery store. What used to involve calling a person up and asking for services is now done through web portals... taking a few minutes of the person's time.
The real issue in this degradation of customer service is usually not the time cost but the fact that, while it doesn't change the amount of random human error that occurs, it shifts the penalties onto the users... if your travel agent screws up, you get a refund... if you screw up (in something that used to be someone else's job) you're told to eat the costs.
Thing is, the replacement of a process when an ersatz alternative may not be "true automation" but it doesn't need to be in order to work. And job destruction across an industry isn't 0/1. There are still jobs in car factories in Detroit, just far fewer of them. There will still be jobs for truck drivers in 2040, but there probably won't be nearly as many. Employers will still need programmers; they might not need the same number of us. It only takes a dip of a few percent in job supply to cause wages and working conditions to crater (that's inelasticity), and even this says nothing about the ripple effects and systemic calamities (up to and including the possibility of economic depressions that last decades) that occur when large numbers of working people lose substantial income.
True automation would be a system that, with 100% accuracy, notes and bills all of the bagged items you carry out of the shop, without you ever scanning them. That turns the whole thing into a much harder problem to solve. The first 90% is simple, but that doesn't change the fact that the last 10% still requires essentially doing the 90% over again...
I like it. It’s one of my favorite innovations of all time. I don’t use it when I have a ton of stuff of course but for my frequent trips to buy <10 items it’s much quicker than waiting in line.
I’ve seen different self checkout systems that provide vastly different experiences. Some are painful, like the ones where you scan a bar code and place the items on a scale. It’s effectively a checkout where I am now the clerk, but less well trained, less experienced, and unable to solve my own issues when they happen.
I’ve also seen a superb one at Decathlon stores in Poland. You dump your items in a box. It automagically scans everything using something that I’m guessing is RFID-based. Bag, swipe a card to pay, and go. It is much simpler to use and is actually pleasant. It’s one step short of just walking out of the store without checking out at all.
I think this experience probably varies a lot person-to-person. For someone like me who often experiences social anxiety even from the most minimal of interactions, being able to keep my earbuds in and not have to talk to a stranger can be a huge difference if I'm happening to feel stressed on a given day. The biggest issue I used to have was when I wanted to carry something out instead of putting it in a bag, but since all they check for bagging is that the weight is placed on the surface, you can just put the item on the surface outside of a bag after scanning, and it will accept it as being "bagged".
Sure, and I do always strive to be nice to anyone I have to interact with, whether they're cashiers or otherwise. Social anxiety doesn't tend to be a rational thing, though, so the fact that bad consequences are unlikely doesn't really make it go away. I know that nothing bad is going to happen from a minor social interaction, and yet that does absolutely nothing to make me dread it less if I happen to be in an anxious state. A lot of mental health things work similarly; for example, having depression doesn't just mean you're sad because things aren't going well: you can be despondent despite everything going perfectly fine in your life. If you can't imagine having feelings like this that conflict with what you actually know about a situation, then you're one of the lucky ones, but you probably do know someone who would know exactly what I'm talking about.
Depends. Software could be like lighting, where the cheaper it gets, the more resources are spent on it. Right now it looks like it. Humans want to be able to see everywhere, and we want everything to be smart. Maybe a biotech revolution will shift demand toward that, but that’s speculation at this point.
If you look back far enough, quite a lot of development tasks have already been automated. There aren't all that many people manually "coding", i.e. turning high-level languages into machine code by hand. That got automated early on.
It is already happening. For example, setting up a production DB in primary-secondary mode, with daily backups and whatnot, used to be quite a bit of manual grunt work. AWS Aurora and similar services handle 70-80% of the pain, leaving you with setting up the right tables, indexes, etc. Same with setting up a backend stack with DNS, load balancer, etc.
My guess is cloud services will automate even DB configuration. Since they have 100% visibility into the true production load pattern, they will be able to suggest or even auto-configure things such as indexes.
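As a toy sketch of what "suggest indexes from the load pattern" could reduce to (my illustration only; a real service like Aurora sees execution plans and costs, not just query text):

```python
import re
from collections import Counter

def suggest_indexes(query_log, threshold=2):
    """Hypothetical heuristic: suggest an index on any column that
    shows up in a WHERE clause at least `threshold` times. A real
    service would use execution plans and costs, not text matching."""
    counts = Counter()
    for query in query_log:
        match = re.search(r"WHERE\s+(\w+)", query, re.IGNORECASE)
        if match:
            counts[match.group(1).lower()] += 1
    return [col for col, n in counts.items() if n >= threshold]

log = [
    "SELECT * FROM orders WHERE customer_id = 7",
    "SELECT * FROM orders WHERE customer_id = 9",
    "SELECT * FROM orders WHERE status = 'open'",
]
print(suggest_indexes(log))  # ['customer_id']
```

The point is only that the provider, unlike you, has the full query stream to feed into something like this.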
That's why most dev teams now do devops, where before you had an ops team and on-premise infrastructure. The latter is still necessary in some sectors with strict data regulations, but not for everyone.
Indeed. We now have dev teams tinkering with the infra layer and, more often than not, shooting themselves in the foot, because the UI makes it easy to play around. We often read horror stories of a one-person startup accidentally running up $1MM AWS bills.
That said, we will continue to need a dedicated dev-ops team. Just that instead of, say, a 10-person team, it'd require a 2-person team. But you still need a person dedicated (maybe part-time) to baby-sitting the infrastructure. Don't let every dev get their hands on infra.
The scenario in which demand for labor drops massively but human labor is still needed: many industries would lose 80% of workers, but 20% would remain. Those 20% would be very high status and well paid, at least compared to the UBI neo-serf class that their former colleagues among the 80% have fallen into. It is these 20% that will form a new upper middle class. They will be isolated from the UBI serfs and live in gated communities.
These communities won’t be like the extremely luxurious communities of the one percent, but still isolated from the poor.
In another thread on this topic I’ve predicted a rise of technologically driven authoritarianism where the human elite uses AI to oppress the poor. The 20% that maintain such a system’s infrastructure (technical and otherwise) would probably support it politically. In most right-wing authoritarian states the government has drawn support from its upper middle class. In left-wing authoritarian states like the Soviet Union there was a vast bureaucrat class that was structurally isomorphic to the upper middle class.
I’m predicting that the humans that hold onto their jobs the longest, and work in high paying stem jobs, will form the core of such a regime. This is good news for many HN users, because you’ll be part of those 20%. I’m just worried what happens to the people that aren’t so lucky.
Of course there are caveats here: the automation may not happen soon, may not happen quickly, and may not lead to societal upheaval or fascism. But all of those outcomes are still possible.
Will they bother to oppress the poor, or will they start killing them off completely? Historically, peasants have been oppressed but still kept alive because the rich and powerful relied on them to farm the food, build the houses and empty the chamberpots. In a world with 80% unemployment, will the rich and powerful be merciful enough to only oppress that 80% of the population, or will they kill them instead?
You know, the entirety of economics is doomed because, frankly, it is just there to justify and explain wealth inequality rather than actually maximize economic efficiency. The Laffer curve is the go-to example people bring up. Completely made up.
Oh, that didn't cross my mind. Yes, that's a serious possibility.
Take it a few more steps and then you get to automated analysis tools, system simulations, and NLP systems. This looks essentially like Iron Man's Jarvis running heuristics interactively.
"Which edge cases did you cover?"
`I covered 3000 edge cases extrapolated from these bug reports...`
This is a possibility, but it presents a problem. In order to be a senior enough engineer to review PRs really well, you need years of practical experience. Those years start out in entry level jobs where you make lots of mistakes and learn by doing. But these are the very jobs that AI is most likely to eliminate first. So the first AIs will have good PR reviewers that were “classically trained”. Who will review the PRs of AI 50 years from now?
Great observation. Reminds me of this article[] about how surgeons are having a hard time getting experience with the rise of robotics.
It may be that this training will need to get pushed to the education system (whatever that looks like by that point).
Additionally, there can be a mentorship model where the junior performs the first pass of reviews and iterations before presenting to the senior.
Think Star Trek: how does anyone learn to fly a spaceship? Lots of simulation and running smaller parts of the system in close observation... which isn't too different from now.
The thing people forget about whether it's 80/20 (and the last 20 takes infinite time) or whether it's 80/20 (and it happens super-linearly) is the nature of the 20.
One of two things tends to happen.
1) The 20 is a fundamentally different problem, but is itself solved by applying different new developments (e.g. most breakthrough devices: semiconductor manufacture and design, cellular phones, iPhone)
2) The 80 is so useful that work realigns to avoid / ignore the 20 (e.g. most mass market adoptions: horse drawn carriages, trains, automobiles, planes, television, personal and business computing)
The key thing realistic futurists harp on is that what drives revolutions is superior utility versus the status quo.
If AI is "same as human, without the labor costs," it will be adopted gradually. If AI is "better than human in a fundamental way," it will be adopted overnight.
I would argue that horse-drawn carriages were a 100% solution for something like 8,000 years of human history. It’s been barely over a hundred years since there has been something that could efficiently replace them, and the ‘self-driving’ modern replacements have maybe 0.01% of the smarts of a horse.
I mean, go get drunk and pass out on the back of your horse and the chances are real good it’s going to take you home without running into the back of a parked emergency vehicle sitting on the side of the road without giving it specific directions to do so.
Full disclosure: I’ve never passed out on a horse…
Horses produce a lot of shit, need to be fed, generate rather large biohazards when they expire, and have an oscillating motion that isn't a great match for drunkenness.
"A horse for every person" probably wasn't the best of times.
AI doesn't have to fully replace humans (or even be that good) to have detrimental effects on wages and working conditions. Jobs are like oil; if the people who control the supply can reduce quantity by 10 percent, prices spike (this, for workers, would mean that wages decline, work conditions worsen, and hours grow longer).
We're not going to see AIs replacing humans in all jobs any time soon. We are going to see, barring regulation or (better yet) a total overthrow of the corporate system, increasing power accruing to capital--due to AIs' ability to perform the "80" in the aforementioned 80/20 analysis--with devastating human consequences. And this is true even though DALL-E, in the given example, doesn't actually understand (even in a figurative sense) that a third of Africa is desert or that certain facial expressions indicate concentration.
If an AI can do even half of your job--and anything you do as a subordinate probably can, in principle, be done by machines--you should be terrified. You're not going to be working 20 hours per week; you're going to be working for pennies due to wage inelasticity--there's now twice as much competition for jobs. Automation is of course both desirable and inevitable, but we've got to transition to an economic system in which it doesn't result in widespread poverty and homelessness... which is not the one we've got right now.
> If AI is "better than human in a fundamental way," it will be adopted overnight.
Are there any examples of this, and how would we qualify them? Do plenty of pedestrian control systems also satisfy that criterion? After all, the SpaceX landing system is a feat of piloting that would probably be impossible for a human, but I don't believe the system is "AI" rather than an extremely well tuned, more conventional thing of Kalman filters and feedback loops.
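For scale, the "well tuned conventional thing" can be remarkably small. A hedged sketch: a one-dimensional Kalman filter fusing noisy readings of a constant value (purely illustrative; nothing to do with any actual flight software):

```python
# A one-dimensional Kalman filter: fuse noisy measurements of a
# (roughly) constant quantity into one increasingly confident estimate.
# Purely illustrative of "tuned conventional control", not real flight code.

def kalman_1d(measurements, meas_var=1.0):
    x, p = 0.0, 1e6              # initial estimate and huge uncertainty
    for z in measurements:
        k = p / (p + meas_var)   # gain: how much to trust the new reading
        x = x + k * (z - x)      # pull the estimate toward the reading
        p = (1 - k) * p          # uncertainty shrinks after each update
    return x

readings = [10.4, 9.7, 10.1, 9.9, 10.2, 9.8]  # noisy looks at a true 10.0
print(kalman_1d(readings))
```

Real systems stack many of these with carefully chosen noise models, but there's no learning anywhere: every constant is put there by an engineer.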
> Speech-to-text. It's pretty good, and the spread of automated closed captions to YouTube videos is a definite benefit. But broadcast TV still almost exclusively uses human subtitlers, because the last few percent of accuracy isn't there.
This also varies widely by language. Automated Japanese subtitles on YouTube are completely useless. Even with someone speaking clearly you'll find at least 2-3 mistakes in every line of dialogue, and if they speak slightly quickly then all bets are off. (Its mistakes also seem strange: Japanese has a very limited set of phonemes, so consistently guessing the wrong ones is surprising.)
I agree with your analysis. And yet, in the majority of jobs, the level of work required is one that AI will be able to effectively replace.
The jobs that it will replace will be the entry level jobs. It will replace the jobs where excellence is not required. It will replace the jobs where quantity matters more than quality. These are the jobs that everyone starts at. Some people learn and grow and become experts. And some remain at some lower level for their whole career. This works well when there is enough work to go around. When AI replaces the need for low-end professionals, where will experts learn their trade?
When combustion engines replaced horses in many applications, they did not eliminate all horse handling jobs. There were just significantly fewer horse handling jobs left.
Hmm, that said, the purpose of using AI is to replace human labour and save labour costs; otherwise, we wouldn't see half the money flowing into AI research. So AI certainly might prove to become a crisis for humans on the economic and military fronts. Since the end goal of AI will result in a crisis for humans, isn't it time for governments to step in and regulate?
> Since the end goal of AI will result in a crisis for humans, isn't it time for governments to step in and regulate?
Your crisis has been added to the queue and will be dealt with in due course. Your crisis is number .. <click> .. 1,734 in the crisis queue.
(Seriously, I think this is best left alone until it clarifies or is subsumed into the general "regulation of doing harmful things with computers", like privacy laws. Self-driving cars are their own game and deserve immediate scrutiny.)
I think we are neglecting the possibility that AI advancement and growth may proceed at an exponential pace, as the author mentioned. The more people join the AI industry, the faster AI advances; the faster it advances, the more money is invested in AI, causing still more people to join. Many may join the AI industry out of fear of losing their current jobs to AI. This could lead to a sudden avalanche of AI advances (a domino effect) and an overnight collapse of the economy.
Once we reach AGI, there is no way to back out of it and say we don't want it, because by then other countries will also have AGI. And there won't be an overnight solution to deal with the overnight crisis AGI creates.
How many decades is "overnight" supposed to represent here? Or do you really mean overnight? The WarGames scenario where the AI decides the only logical option is a pre-emptive nuclear strike?
Collapses, even in seriously banana republic countries, do not happen overnight. They are slow and agonising, and present options for .. kinetic solutions.
> Once we reach AGI, there is no way to back out of it and say we don't want it, because by then other countries will also have AGI.
Oh sweet child. You think too much of humanity as a whole.
If what you said was true, all countries would have had an equivalent of TSMC to begin with. At the very least, every country would have had multiple supply chains for anything they deemed of strategic value. Each country would manufacture cars, computers, commodities.
But instead, it’s all outsourced to a player who they think is the best. So if one day some anomaly happens and there is a new sea where China used to be, at least a lot of Europe and North America is going to be screwed big time.
Same with AGI. It’s going to happen at one company in one country. Not only will everyone else outsource their AI needs to that company, but its home country will have leverage over all of them, while lurking in the shadows most of the time.
And once everyone else is dependent on HAL, should it, say, commit suicide as in that Asimov story, all countries that signed up are going to be fucked.
I think this is thinking small. An AI doesn't need to use human readable programming languages or frameworks to solve problems. Also software engineers are expensive. Corporate greed will drive this faster.
> An AI doesn't need to use human readable programming languages or frameworks to solve problems.
Why would an AI writing assembly give it an advantage?
> Also software engineers are expensive. Corporate greed will drive this faster.
Are they? You can get a software engineer for less than $10,000/year.
When I was going to university in the mid-00s I had quite a few "advisors" recommend against going into computer science, because corporate greed meant all the jobs were going to be outsourced and US salaries would tank as a result.
And yet 17y later salaries are still sky high. Someone that believed that advisor could have had a job paying a lot and time on the side or enough of a safety net to learn new skills if salaries ever tank (which we still don't really see a sign of)...
Yeah exactly. The impending apocalypse that will make software developers redundant has been just around the corner for 20 years, during which time the number of people employed in software has increased exponentially and software developer salaries are some of the highest you can get for non-managerial work.
> Why would an AI writing assembly give it an advantage?
There are lots of non-human-optimized options that aren't raw assembly. Any intermediate representation, for one (e.g. LLVM IR, C-- used by ghc, Python/JVM bytecode, abstract syntax trees to simplify parsing).
And even raw assembly is often not what the CPU operates directly on (see: microcode).
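As a concrete (if mundane) example of such a layer: CPython already compiles source to an internal bytecode before executing it, which the stdlib `dis` module exposes. This only illustrates what "not source, not raw assembly" looks like; whether an AI emitting such a layer directly would gain anything is exactly the open question here.

```python
import dis

def add(x, y):
    return x + y

# CPython never runs the source text directly: it compiles to this
# stack-machine bytecode, one of several layers between "Python" and
# whatever the CPU microcode finally executes.
ops = [ins.opname for ins in dis.Bytecode(add)]
print(ops)
```

The exact opcodes vary by Python version, which is itself a reminder that these layers are designed for machines, not for stable human consumption.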
Yes, but that's not really relevant to what I was asking -- why would using any of those give an AI an advantage over, say, generating Python or C code?
Any decent developer will use existing libraries and tools for common tasks, so 90% of the code is already written. The remaining 10% is business and use-case specifics that change frequently, built by pulling together the pieces the libraries provide. The AI-assisted autocomplete will likely never be able to understand use cases; for that it would need sentience, and even humans fail at understanding what non-techies want in their apps. As others pointed out, such AI tools reduce the time taken to google something, so it's more like a built-in code search engine, and very good at that. Other than that, all the “software jobs automation” stories such as this article are written to impress either non-techies or gullible newbies. A waste of content; sci-fi at best.
Right. The efficiencies are already here in the form of CI/CD, workflow tools like IntelliSense, and boring old stuff like ORMs and libraries. It's probably about 50% faster to make a common CRUD app than it was 20 years ago.
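To put a face on that "already written" part: even with nothing but the standard library's `sqlite3`, a full set of CRUD operations is a handful of lines, because the parsing, planning, storage, and transaction machinery all live in the library. (A toy sketch, not a recommended app architecture.)

```python
import sqlite3

# Everything below the SQL strings (parsing, planning, B-tree storage,
# transactions) is the "already written" part supplied by the library.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE todos (id INTEGER PRIMARY KEY, title TEXT)")

db.execute("INSERT INTO todos (title) VALUES (?)", ("ship it",))    # Create
rows = db.execute("SELECT title FROM todos").fetchall()             # Read
db.execute("UPDATE todos SET title = ? WHERE id = 1", ("done",))    # Update
db.execute("DELETE FROM todos WHERE id = 1")                        # Delete

print(rows)  # [('ship it',)]
```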
Luckily the industry has found a way to compensate for this by building the common CRUD app in React and Microservices, so now it takes 3x longer again and requires twice the staff to maintain it.
> so now it takes 3x longer again and requires twice the staff to maintain it
precisely. or like with aws, what used to be a sysadmin is now two to three devops engineers doing the same stuff. if anything ai will enable even more people to get into software dev and thus allow even more businesses to thrive and thus create more jobs. ais and tools like you mentioned are like fertilisers. the more crap the more jobs for us.
The AI will need a knowledge base with real-world information. That may take a long time to exist. If the scaling hypothesis does hold, you could get that knowledge base as part of a large language model.
You could have a setup like the Socratic Models paper where the code AI talks to the language model AI which is fine tuned on facts about the business context.
I just don’t know how far out we are from that. Someone at Deepmind, OpenAI or Salesforce (Salesforce have their own code AI research unit that few people know about) should work on this.
If someone reading this has a background in economics I’d like to hear what they have to say about this. In all these discussions on this topic I never see anyone bring up economics or empirical economics papers.
We need another "No Silver Bullet" [1] for these modern times of ridiculous hype. Contain your ego and all this statistical matrix manipulation you decide to call AI... Somebody please volunteer.
"OpenAI estimates that OpenAI Codex (what powers Github’s Copilot) can already complete 37% of coding tasks."
And who decides the task is done, complete and works? The Meat does [2]...The Meat and nobody else.
Great article! I enjoyed the argument, but I have a few disagreements.
I disagree with the response to objection 1. Look for more than a few moments at the right side picture and the shadows of the chess pieces are low quality, the shadows continue off the table, and the checkers on the board look more like backgammon than chess. The AI generated sites that clutter google are easily identified as low quality GPT 3 nonsense.
I’d also disagree that the AI “understands” the physics of the sun, or that Africa has a lot of desert. The prompt includes the desert, so it pulls from pictures of the desert. It doesn’t understand the face people make when they concentrate, either; it finds a picture of someone playing chess, then maps an “African fortune teller” onto that person. Oh, and it also has to grab photos of an African fortune teller. The AIs used as examples in this article (DALL-E, Copilot) are regurgitated humans. What happens when we have new tasks that we can’t train on? They don’t “understand” things; they recreate them.
Maybe I’m just typically bad at estimating exponential growth. I’m not optimistic I can replace humans with Copilot-authored pull requests anytime soon. It might be good at “write a function to convert an ISO date time to MM/DD/YYYY”, but we’re decades off from “create an API for managing a warehouse’s inventory.” Either way, being replaced by Copilot would require business partners to be able to explain what they want, so I’m not worried.
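For contrast, the "mundane" end of that spectrum really is mechanical; the exact function named above is nearly a one-liner in Python, which is precisely the kind of thing Copilot handles well:

```python
from datetime import datetime

def iso_to_us(iso_string):
    """Convert an ISO 8601 datetime string to MM/DD/YYYY."""
    return datetime.fromisoformat(iso_string).strftime("%m/%d/%Y")

print(iso_to_us("2022-06-15T10:30:00"))  # 06/15/2022
```

The warehouse API, by comparison, is mostly requirements discovery, which no amount of autocomplete touches.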
When google search was introduced it was a revelation. Search results were eerily accurate compared with the then-state-of-the-art. The key was using computers to interpret and leverage human-made webs of knowledge.
20+ years later, search sucks because it’s mostly robots (or humans acting like robots) writing SEO centric copy to capture valuable queries.
Most of these strikingly capable “AI” tools today like DALL-E and Codex are fundamentally built by applying more leverage in new ways to a corpus of human work: art/code/etc.
What happens if these systems take over and 99% of the humans stop writing code or making art and successive systems are trained on mostly regurgitated AI output?
The same effect is apparent in equities: index funds now make up the bulk of investment, but they merely follow other money flows in lock step, many of them driven by computers. As a result the decisions of a relatively small number of actual humans have incredibly outsized impact on markets.
We will have a system of infinitely recursive navel-gazing that is devoid of utility once the actual intelligence that was the key driver of the system has dried up, because its economic value was captured by greedy hill-climbing algorithms.
Pieces like this are naive because they extrapolate what they see as trends without recognizing that the world is a dynamic system. Predictions are never this simple when the subject is so dependent on the state of the world which it will dramatically change.
But what happens when AI needs less and less "real" data for training? Then it will not degrade like Google's search, which, by the way, is still the best of all the search engines I have known and tried.
This article seems very simplistic to me. I'm grasping for a word to describe an argument that leads you, via brightly-colored baby steps, to a desired conclusion, but this article seems to be that.
What is intelligence? Can classical machines become intelligent, or is there some mystical quantum quality involved? Is there a limit to intelligence? What is the best form of intelligence (single entity vs society), Is intelligence even understandable, and therefore engineerable? Are current neural networks in any way intelligent (they seem extremely good at pattern matching, and/or regurgitating those patterns, but beyond that I'm not sure). Etc, etc, etc, etc.
Not convinced at all. This stuff makes people more productive. That may mean fewer jobs, or it may just mean more output, as the lower costs (not wages) make custom software more accessible. I see this as making the Google/Stack Overflow turnaround much shorter.
It will. In particular, people who aren't software developers. Most devs are basically modern scribes working for kings who can't read or write. In order to exert willpower in digital society, you need to be able to code, which is why everyone with money who can't code wants coders to work for them. But that's all going to change the day ideas men can explain their idea to an AI instead, which whips up an app on the fly. When that happens there'll be a lot less demand for coders. It probably wouldn't impact things like software engineering, where it'd be more of a productivity booster, but it'd certainly narrow down the tech workforce by a bit.
If you work doing ML in production, you know how incredibly difficult it is to get an extra % improvement. The idea of exponential growth that accelerates into the singularity is a sci-fi concept that I have never experienced.
From the outside we've experienced exponential improvements in ML models, sure. But those didn't come out of nowhere. They are powered by massive efforts involving humans.
Personally I'm not worried about losing my dev job to machines in my lifetime.
Because you’re using the libraries we have today; they don’t make the unknowns you encounter easy.
What about the ripples from “enough people” losing their jobs?
Covid and the logistics whiplash have done nothing to impress on people that they’re just one of seven billion. Each still acts like the king of their own mountain. It’s pretty sad.
You have no say over your place in a future society that appears keen to rug-pull the masses’ lives from under them.
What I see with AI is that it is very good at closed-ended questions. Take AlphaGo, like the article does, as an example. The rules and goal of the game are fixed and verifiable by the computer itself.
With writing and graphic design less so. The best AI can do is to generate a text or image that it judges itself as passing as realistic according to its own model.
For jobs where passing as realistic is good enough, I certainly see those as replaceable.
This might include mundane coding tasks like writing an API to an existing server.
Tasks where the goal is not closed ended, where the computer cannot judge itself if it got the correct result and passing as realistic is not enough, might never get replaced by machines.
> At some point along the continuum, there will simply not be enough new work to make up for the efficiency gains by the AI models.
I don't buy that devs are going to run out of things to do. Instead I see this increasing the leverage of engineers (who are really just creative problem solvers) such that an exponentially increasing number of lower value problems will make financial sense to solve with engineers.
I think we'll see the same thing with designers and all different kinds of tech workers.
Pay will increase, not decrease (at least globally) because business people don't have the training to work these levers or judge the quality of the output.
As I read this, it all came across as very philosophical and hand-wavy, and showed a misunderstanding of how software engineers actually work. I then looked at the author’s profile and saw that they’re a young PM and “entrepreneur”. So interesting, but not an expert analysis at all, which explains the gaps.
This author is running on heroic amounts of hopium. That said, the first thing I see this replacing is not graphic designers, but stock photography sites.
The main edge I see in AI today is the ability to create random ideas on a theme. I suspect this will end up being much more of a force multiplier than a replacement tool.
Exactly. Think about it: it just creates content and the human is there to select. It can be wrong a lot but only needs to succeed once, and we’re fine with that.
Programming is another deal entirely: it is a creative task, sure, but once you get it wrong it’s a costly mess! Another point is that most of the difficulty is not even the actual coding but understanding what to code from people who cannot express what they want...
So yes I agree with this statement, AI would probably not fully replace the coding workforce but would help with some careful supervision.
That's a fair comment, I mean random in the sense of wide variation. I suspect a moderately skilled user of graphical software could convert either of the proposed images into something that looks real solving for shadows and incorrect game pieces, whereas generating 10 different faces might be quite difficult. Especially if tasked with doing it for a race or culture the artist was not familiar with.
I don’t see why it couldn’t - but the end result might be incoherent to humans. It could easily combine or extrapolate on random things to create “art”
In games like Go, the problem is very well defined and bounded. Most high value software dev work is working with humans to define the problem itself, and the problem usually exists in a multi-actor, non-deterministic, dynamic, somewhat irrational environment.
Meanwhile, the resultant code or 'solution' is also not a solution in a vacuum - it needs to work within certain workflows, tooling, skills, cultures, processes, orgs etc...which are also often undefined, dynamic and irrational.
Not saying AI can't help in many areas, but do the areas which are growing exponentially as described by OP help AI function effectively in environments such as the above?
These are some wild estimates for professions, based on products that haven't even been generally accessible for very long. Nor do they account for the fact that writing, coding, or designing is only a small portion of most jobs, and typically the most enjoyable/easiest part for some.
What about the argument that unless the collective gets better/smarter, that AI will indeed only be as smart as the current collective? Perhaps this is the "AI will hit a ceiling", but it needs to be trained on something. If more and more gets automated in these areas, how will they ever improve?
People seem to think that automation needs to be 100% or nothing. Automation at 80% (or whatever percent) of ideal will affect a large number of programming jobs. If you as a programmer compete without the use of AI against someone who has embraced it, you are going to lose. If you can get an AI to do 80% of your work, and you as a sentient being can finish the project, then your output will explode versus someone who does not.
This says to me that the future programmer will be closer to a software architect: someone who can interpret what someone wants and decide what the solution is using all the resources available, including AI tools. Not all programmers can do that. I've known plenty of programmers that are great at their job but lack social skills. Those are the type that will definitely be affected by AI. Given that you don't need 100% perfection in AI to make an impact in the industry and programming jobs, I see big changes happening within the next 10 years at most.
Don't believe it? Look at Dall-e 2. It won't get rid of designers as such, but many people will start to use it in some cases and bypass the designer altogether. Also, a new crop of designers will use Dall-e 2 to create some of their work. Those that won't will be left behind.
Let me summarize the state of AI as a layman who has been following the main trends:
1) Cyc and reasoning - stuck for decades but making slow progress
2) Deep neural nets (GANs etc.) - can detect a whole lot of things after being trained on data
3) MCTS (Monte Carlo tree search) with self-play - this is AlphaGo and AlphaZero: networks that play against themselves. They have nearly solved any turn-based game that can be expressed this way, and the approach can be used for StarCraft etc.
4) Data fitting - can get scientific data and find new scientific theories and formulas to explain them, etc. by essentially building analytic functions from building blocks and optimizing for simplicity.
5) GPT-3 and DALL-E - have very little understanding of anything, they just throw a lot of bullshit at a wall and see if it sticks. But since the bullshit is actually stitched from all serious human output that could be digitized, they sound quite serious at times (even though they show a total lack of understanding when basic math is involved, for instance, as the numbers get higher).
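Item 4 above is worth a concrete sketch, since "building analytic functions from building blocks and optimizing for simplicity" sounds more mysterious than it is. Here is a toy Python version (everything here is a made-up miniature, not a real symbolic-regression system):

```python
# Toy "data fitting": enumerate small candidate formulas built from
# fixed blocks, then pick the one minimizing prediction error plus a
# simplicity penalty.

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [x * x + 1.0 for x in xs]            # hidden law: y = x^2 + 1

# candidate building blocks: (name, function of x and constant a, cost)
blocks = [
    ("x",     lambda x, a: x,         1),
    ("a",     lambda x, a: a,         1),
    ("x+a",   lambda x, a: x + a,     2),
    ("a*x",   lambda x, a: a * x,     2),
    ("x^2+a", lambda x, a: x * x + a, 3),
    ("a*x^2", lambda x, a: a * x * x, 3),
]

def score(f, a, cost):
    mse = sum((f(x, a) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
    return mse + 0.01 * cost              # trade accuracy vs. simplicity

best = min(
    ((name, a, score(f, a, cost))
     for name, f, cost in blocks
     for a in [c / 2 for c in range(-8, 9)]),   # small constant grid
    key=lambda t: t[2],
)
print(best[0], best[1])   # -> x^2+a 1.0
```

Real systems search vastly larger expression spaces with cleverer optimizers, but the fit-plus-simplicity objective is the same idea.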
Now I will be willing to admit (and have written about this) that logic and science may be a “poor man’s” million-dimensional vector data fitting. Maybe our binary logic and simple models are the thing that models the world worse than the million-vector models that capture the actual dynamics better. But humans won’t be able to understand these dynamics any more than a cat can understand most of your activities based on abstract systems.
All that other stuff (superhuman sensors, motors, etc.) gives them unfair advantages, like the buzzer finger on Jeopardy. Auto-aim and self-driving cars are more about HUMANS improving and training them, then downloading the new software to every device.
What scares me most, though, is deepfakes and swarming. You don’t need GREAT AI to take over political discussions, or to direct swarms of self-driving cars or drones to wreak havoc. Online, swarms of sleeper bots can infiltrate many communities and execute sybil attacks at scale. They can amass karma right here on HN with comments stitched together from bullshit that sounds just like HN. It’s a positive-EV strategy for the swarm. Eventually they can edge out all human participants, until everything you see all day long is bots.
The kicker is THEY DON’T NEED TO PASS THE TURING TEST in order to take over our systems, because most people consume content non-interactively (reading comments, watching videos). “Truth” online is just a correlation of A vs B, and every repository of “truth” can be flooded with a different correlation. Stephen Colbert fans even did it manually with elephants tripling in Africa! Imagine what botswarms can do to our sense of truth, or political will, or anything!
> What scares me most, though, is deepfakes and swarming. You don’t need GREAT AI to take over political discussions
This. If an AI does take over the world it will be as a cult leader, with the willing help of millions of humans who've "done their own research" on youtube. The AI will of course be at the top of all search rankings and auto-suggestions by other "AI"s; that's table stakes for that kind of scenario.
Ah, they seem to think that 100% of programming problems literally amount to Easy leetcodes. I see no glue-code questions there, nor anything brownfield.
Yeah I don't know where benreesman pulled that statement from, neither the blog post nor the original OpenAI paper claims anything of the sort.
That aside, I'm not sure how impressive an AI that could do 37% of "all coding tasks" really is. It's like a self-driving car that only works 37% of the time.
Are you intentionally trying to hide that “coding tasks” in the blog article is a link to the paper, which clearly explains what these coding tasks are? And you, for some reason, decided to add “all” before “coding tasks”.
I’m not trying to “pull” anything. I find the notion that Codex/Copilot is within 2-3 orders of magnitude of writing code I’d trust without a careful audit, for anything more than some CRUD thing in the JS framework du jour, to be Kurzweil-whacko singularity shit. The only people trying to “pull” anything here are the people training GPT-3 on everyone’s GitHub repositories without permission or attribution, exploiting non-expert opinion to generate yet another round of hype, hysteria, and mysticism about that magical pair of two undefinable words: “artificial” and “intelligence”.
I regret mis-quoting TFA by an article, because if I cited it correctly I’d really lay the fucking smack down here.
Have I adequately answered your condescending question?
One of my tasks as a software engineer is to reduce code as much as possible. Because code is a liability, it needs to be supported, maintained, adapted, ported, etc, etc. If you can do same or more with less code, please do. If you can reduce other's code, which is often legacy code, please, do.
Right now all the demos I see are generating code; they do not change code or delete anything. This means the only way for a text-to-code system to produce a new program for a changed requirement is to regenerate the code from scratch.
I am not in the mood to deduce what that means for AI; it is enough for me to know what it means for the typical software development process.
> The 60% decrease in AI computing hardware every year which translates to the ability to train exponentially larger AI models (ARK Invest 2022 Big Ideas Report).
From what I understand, most intensive AI tasks require a GPU, and the leader in the GPU business is NVIDIA. Looking at their profit margins I don't see any drop; in fact, growth of over 30% YoY [1].
Also, as an aside, I am a big skeptic of ARK. During the short time she had diva status, Wood seems to have been more lucky than good. That said, investing is a long game.
"playing go" is actually a very constrained problem domain.
"design and implement a SasS product that humans will give you money for" is a much less well-defined problem.
applying AI to boilerplate will probably be effective, but that might just eliminate more outsourcing and allow local devs to be more productive, as an extension of IDEs that have various macros to reduce boilerplate.
i think getting past that, to the point where some manager can just ask an AI to design a commercially viable SaaS website top to bottom and it plonks out of the machine, is an exponentially more _difficult_ problem domain.
although a lot of the websites which pollute search results in google will be churned out by the millions and millions.
Go is actually a pretty simple game: players alternate placing stones, and if a connected group of stones has no open spaces left, it is removed and its owner loses points. The decision space is huge, but that is pretty much it.
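The capture rule really is that small: a flood-fill over a connected group plus a check for open spaces covers it. A rough Python sketch (the board representation and names here are invented for illustration):

```python
# A connected group of stones is removed when it has no liberties
# (adjacent empty points). Board is a dict {(row, col): 'B' or 'W'}.

def group_and_liberties(board, start, size=9):
    color = board[start]
    group, liberties, frontier = {start}, set(), [start]
    while frontier:
        r, c = frontier.pop()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if not (0 <= nr < size and 0 <= nc < size):
                continue                     # off the board
            n = (nr, nc)
            if n not in board:
                liberties.add(n)             # empty point = liberty
            elif board[n] == color and n not in group:
                group.add(n)                 # same-color neighbor joins
                frontier.append(n)           # the group; keep flooding
    return group, liberties

def remove_if_captured(board, start):
    group, libs = group_and_liberties(board, start)
    if not libs:                             # no open spaces left
        for p in group:
            del board[p]
        return len(group)                    # stones captured
    return 0

# A white stone in the corner, surrounded on both open sides by black:
board = {(0, 0): 'W', (0, 1): 'B', (1, 0): 'B'}
print(remove_if_captured(board, (0, 0)))     # -> 1
```

Ko, suicide, and scoring add a few more rules in practice, but the point stands: the rules fit on a page while the decision space stays astronomically large.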
Now, SaaS is one thing, but who would trust AI with industrial control software? At least current AI. Here's an oil refinery, go make money. Oh, take these bank details as well and figure it out...
AI automation pushing people out of jobs is a strong incentive for universal basic income, or a similar scheme.
If there is enough work doable by AI, why would humans still be needed? Some humans would cover the missing 10% of what AI can't do, as pointed out in other threads. But that leaves a lot of humans idle.
The entities operating this AI would still need to operate somewhere, i.e. in a country. So that country's government will have to enact some sort of legislation to regulate the AI's impact.
It doesn't necessarily mean everybody will be out of a job and miserable. Probably work would be optional and shelter, food and the likes would be provided by the government.
When it comes to AI replacing jobs the biggest wonder I have is.. why is there so much talk and attention on replacing software jobs first? Software work is fairly high skill and hard to replace. Every hour that a software dev spends doing a task, they are doing some kind of work that the industry has probably tried and failed to automate, and we're talking about an industry filled with the most clever and most qualified automation-writers there are.
There's many jobs that AI will probably replace. Software will be among the last.
That's because people saying that are fooled into believing that SE is just about writing code, and that it is similar to making a picture like Dall-E or writing a pseudo-profound text like GPT-3 does. This is really far from reality. You have to design, redesign, correct bugs, think about the whole system from backend to frontend, envision user interaction, and take into account evil actors or rogue employees... And to that you add a layer of politics that forbids some options and favors others because of company culture. Good luck designing an AI that does that. It is not impossible, but it is much harder than a robot that mixes liquids or a program that makes drawings that look cool.
Software developers worry about their jobs being replaced. You see posts from people worried about this on this forum because this forum is full of software developers.
The nature of our work exposes us to new technological developments first, so we’re also the first to talk about this AI. If I went on an art forum I might find posts by artists worried about Dall-E, but those posts wouldn’t come about until later, when the artists hear about this technology.
The "Formula Translating System" introduced in 1957 eliminated the drudgery of programming by allowing users to simply enter equations into a computer directly.
Specifying machine instructions and memory addresses by hand rapidly became a thing of the past.
Today, having a programming assistant is as quaint as a secretary.
A powerful AI capable of programming would be amazing. It could reverse engineer binaries into human-readable source code. It could transpile tools and libraries to different platforms and languages.
Personally I can't wait to be made obsolete by the god AI.
The limit to human creativity would move from a technical one to your imagination.
All my hope rests on the assumption that creating bigger models with current AI technology will only lead to marginally better results. The old law of diminishing returns. In other words, current model approaches hopefully have a limit (that's not too far away) that will keep them from getting better.
It's pretty much never right to extrapolate something that looks exponential as an actual exponential. It's much more likely to be something with roughly the shape of a logistic curve.
... and the fun part with a logistic curve is guessing how far up you actually are...
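This is easy to see numerically: take a logistic curve and the exponential that matches its early growth, then compare them along the way (parameters below are arbitrary illustrations):

```python
import math

# A logistic curve f(t) = L / (1 + e^(-k(t - t0))) versus the
# exponential it resembles early on.

L, k, t0 = 1000.0, 1.0, 10.0

def logistic(t):
    return L / (1.0 + math.exp(-k * (t - t0)))

def exponential(t):
    # the exponential matching logistic growth near t = 0
    return logistic(0) * math.exp(k * t)

for t in (0, 2, 4, 8, 12):
    rel_err = abs(logistic(t) - exponential(t)) / logistic(t)
    print(t, round(rel_err, 3))
```

Well below the inflection point (t0 = 10 here) the relative error stays tiny; past it, the exponential overshoots wildly. Which is exactly the problem: the early data alone cannot tell you which curve you are on.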
* Boss type speaking disapprovingly to some programmers and designers: *
Come on guys, back to the fields, play time's over. Those strawberries need to be picked before they spoil.
Tell the AI what you've done so far, he'll take care of it from now on.
The upside is, if the AI ever really gets that scary good(aka singularity), we can always ask it to find a solution for our current troubles.
Btw, can entropy ever be reversed?
Sounds like the AI will need a decentralized money to value compute costs and trade freely with other AI. One advantage we have is that AI is not smart enough to buy Bitcoin yet. That is my hedge against Exponential AI. Another megabull scenario
Think of all the money that could be saved if interlocking boards of directors and corporate executives could be replaced by AI. No more need to pay bloated salaries and ludicrous bonuses, or to provide ridiculous 'golden parachute' contracts to top executives.
The discussion about AI always focuses on replacing the leaves of the trees in corporate hierarchies, but the value obtained by replacing the root seems to be much higher.
One could also plausibly replace shareholders with AI, although then one might start to question the whole structure of investment capitalism... maybe AI would be better at making long-term decisions about the distribution of capital to prospective business operations than some group of Wall Street funds is?
Perhaps we'll see AI-managed pension funds in the near future, with significantly better results (and more interest in the long-term health of the outfits they invest in)?
A self-driving car that becomes a lawyer and enters into contracts with body shops, where it randomly causes accidents in return for a % of the business, free charging, and maintenance.
I don’t know which is worse, an amoral corporation using AI to crush its workers or an independent AI working for its own interests
I think that might even be an improvement. Just imagine if there were enough of them and they entered actual price competition, dropping costs to a 10th or a 100th...
Marx talks about this a bit in Das Kapital... You would think that the automation of an industry improves working conditions inside of it; in practice the opposite has usually been true: fewer people work longer hours in ever-more-tedious ways. So you used to go to work to design something, but in this vision of the future, you now go to work to craft prompts to DALL-E-2 so that it can design something for you. But because design is now so much cheaper, you have to do three or four times as much design in a day, and this mechanism only doubles your rate of production... Gets to be a real mess.
However, I would notice that we have been blessed with a different exponential, which we have been riding into the future. That is the exponential of processor speed and storage and all of the other computing resources. It's a perennial observation that our actual programs have not gotten much faster. The extra resources are immediately wasted. The reputation of hiring developers from developing countries, the cheap foreign labor force, for us has always been that you get a big ball of mud which you cannot maintain very easily. I do not see AI helping with this, but rather exacerbating it. Beforehand, there was a reasonable limit to how many lines of code you might expect from a developer per day, maybe a couple hundred if they test their code well and do other things to refactor and simplify. But with AI you could imagine it multiplying 10-fold, just throw more mud at the problem, the mud ball is already huge: what could it hurt.
In fact, we may see the Smalltalk revolution! Of course this has been prophesied before, but never really come to pass. In this more-dystopian form, the idea is that everyone leans heavily into AI and something like kubernetes for modularization. Every single cell, every pod, in these gigantic supercomputer clusters which shall exist to serve up a smallish blog, needs to do something small enough that, with its abundant AI-generated code, it can nevertheless live and die successfully, spawning new processes as needed. The blog is thousands of small mud balls, and they work together in signaling pathways that are no longer understood by any human who works on the blog, and the dominant metaphor ceases to be one of mechanism and design and inputs and outputs... Rather it is biological, or even ecological. To start such systems up requires a story of “childhood” where a harness coordinates the first thousand nanoservices and grows the cluster into a template for the living application; then it needs AI-style “training” to reconfigure itself to handle actual business purposes; finally it will be released on the world, only to make mistakes which we will have to educate it to discern as mistakes... We certainly can't fix the system, we don't know how it works. Not what Alan Kay envisioned! But who knows. Maybe our kubernetes clusters will go to school one day.
It is part of the history of the programming language Smalltalk that it was meant to usher in a new era of computing.
This era would be built on several basic principles. The first one is that you would have no real separation between source code in some text editor, and the running program—a prophecy that has yet to come to pass. All the rest of us rely on shooting a misbehaving program in the head and birthing a similar child with slightly altered DNA to replace it... Smalltalk expects you to have a conversation with the program as it runs: the whole programming language is instrumented with reflection, and every module is hot reloadable. There is literally a method, Object.become, “OK computer, take all of the references anyone has pointing to that other object and point them to me, I will be taking over for it.” Bold.
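Python has nothing like a real Object>>become:, but the flavor can be faked with a forwarding proxy that swaps its target behind every existing reference (a loose sketch; all class names here are invented):

```python
# Approximate Smalltalk's become: with a delegating proxy. Anyone who
# holds a reference to the proxy sees the new target after the swap.

class Becomes:
    """Delegates everything to a swappable target object."""
    def __init__(self, target):
        self._target = target

    def become(self, new_target):
        # "point all existing references at the replacement": every
        # holder of this proxy now sees new_target's behavior
        self._target = new_target

    def __getattr__(self, name):
        # only called for attributes not found on the proxy itself
        return getattr(self._target, name)

class V1:
    def greet(self):
        return "v1"

class V2:
    def greet(self):
        return "v2"

obj = Becomes(V1())
held_elsewhere = obj           # some other part of the "image" holds it
print(obj.greet())             # -> v1
obj.become(V2())               # hot-swap behind every reference
print(held_elsewhere.greet())  # -> v2
```

The real thing is stronger: Smalltalk rewrites the references themselves, proxy or not, and the whole image stays live while you do it.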
A second conceit is that the new way to be a programmer, the new metaphor, is not one of traditional authoritarianism—think in particular of remote procedure calls. But the new metaphor will be biological: a module will peek around its environment, maintain a “homeostasis” of inner state consistency, the system as a whole will be tolerant of intra-module logical inconsistency, while the individual cells will just die if they cannot maintain consistent state, cells live, cells die, cells organize into larger tissues and organelles and organs in order to find their greater purpose.
A third great principle is fractal architecture, the idea that the parts should contain the essence of the whole. So in functional programming you might build a computer program out of parts that are not computer programs. But Smalltalk wants to say, no: every computer program in this language will be constructed out of units that are fully fledged computers in their own right, running one or many computer programs. And so there is no need for all of these computers to be on the exact same host machine, of course they can be because a computer can pretend to be multiple computers by task scheduling, but it is not necessary.
Related to that, there is a vast skepticism towards locking and atomicity. Implement virtual time, or leverage the full power of the actor model... Those are preferable.
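The actor-model point can be made concrete in a few lines of Python: each "cell" serializes access to its own state through a mailbox, so no locks are needed on that state, and it simply dies when it cannot stay consistent (a toy sketch; names invented):

```python
import queue
import threading

# A minimal "cell": owns its state, processes messages one at a time,
# and dies rather than continue from an inconsistent state.

class CounterActor:
    def __init__(self):
        self.inbox = queue.Queue()
        self.count = 0
        self.alive = True
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def send(self, msg):
        self.inbox.put(msg)

    def _run(self):
        while self.alive:
            msg = self.inbox.get()          # one message at a time,
            try:                            # so no locks on self.count
                if msg == "inc":
                    self.count += 1
                elif msg == "stop":
                    self.alive = False
                else:
                    raise ValueError(msg)   # inconsistent input...
            except ValueError:
                self.alive = False          # ...so the cell just dies

a = CounterActor()
for _ in range(3):
    a.send("inc")
a.send("stop")
a._thread.join(timeout=2)
print(a.count)   # -> 3
```

Virtual time (à la Jefferson's Time Warp) goes further, letting actors compute optimistically and roll back on conflict, but the mailbox discipline above is the shared foundation.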
OpenAI estimates that Github’s Copilot can already complete 37% of coding tasks. DeepMind’s AlphaCode is estimated to be 59% better at coding competitions than Codex. Will automation start in 2023?
You said this in the article too, but why does being better at coding competitions, where there are very specific rules and outcomes, mean this AI can work in the very abstract realm of software development? And forgive me, but OpenAI's estimates for its own product, which it's trying to sell, aren't exactly convincing. It looks like you were a former PM, and to be frank, until an AI can wade through confusing conversations with PMs to figure out what actually needs to be built, engineer jobs are safe.
As useful as Copilot has been, 37% paints an untrue picture. It helped with 80% of copy-paste tasks that repeat in a pattern. For data-seeding thousand-line files, it's super helpful. In other areas it can create tough bugs instead of solving them.