I'm on the skeptic side of "AI" and find this entire industry obnoxious, but your argument doesn't hold any water.
Technology that can be used to kill innocent people is all around us. Would it be moral to attack knife manufacturers? Attacking one won't make the technology disappear. It has been invented, so we have to live with it.
Also, it's a stretch to say that "AI" "kills innocent people". In the hands of malicious people it can certainly do harm, but even in extreme cases, "AI" can currently only be used very indirectly to actually kill someone.
Technology itself is inert. What humans do with technology should be regulated.
IMO the fabricated concern around this tech is just part of the hype cycle. There's nothing inherently dangerous about a probabilistic pattern generator. We haven't actually invented artificial intelligence, despite how it's marketed. What we do need to focus on is educating people to better understand this tech and use it safely, on restricting access to it so that we can mitigate abuse and avoid flooding our communication channels with garbage, and on better detection and mitigation technology to flag and filter it when it is abused. Everything else is marketing hype and isn't worth paying attention to.
if they're selling the knives knowingly to a knife-murderer, it might be worth discussing.
Sam Altman is not, although he portrays himself that way, some geeky guy without power who just builds products. He's the guy who makes the decision to supply this tech directly to a US government that is on the record about using it for military operations. And you're right on the last point. Sure, the 20-year-old guy who threw a Molotov cocktail at Sam's house is, I'm going to assume for now given the topic Sam chose for the piece, an anti-tech guy.
But assume for a second you had your family wiped out in a bombing run because Pete Hegseth attempted to prompt himself to victory with the statistical lottery machine. If the CEO knew this and enabled it to add another zero to his bank account, not so sure about the ethics of that one.
Sibling comment already said it, but yes, I was specifically alluding to Altman's decision to allow the US government to use their AI to choose bombing targets without a human in the loop - perhaps this is why the US government double-tapped[1] a school, killing 160 girls, all younger than 12, when the school was clearly marked on Google Maps.
I also vigorously dislike the industry, but your stance 'I'm on the skeptic side of "AI"' is something you need to address - saying this in the friendliest way possible, you are wrong.
AI needs to be opposed, because the billionaires are going to use it to turn the world into shit, but if the best the AI opposition can muster is "AI isn't useful", we are fucked. It's extremely powerful and can do bizarro things when you rig it up with tools - and no one is paying attention to the kinds of things we need to prevent companies like Google from doing with it.
[1] double-tapped: a phrase referring to the practice of firing a second missile after the first to kill any rescuers or surviving schoolgirls
Regardless, "AI" is not doing the killing in that case. Rather, humans have deployed it to control weapons that kill people. There are several layers of indirection there before you can claim "AI kills people". This is the same indirection as when a human chooses to press a button that fires a missile, or stab someone, just with more steps involved.
So you can also be outraged at weapon manufacturers, which is one step closer. Or, you can skip the indirection, and be outraged specifically at people in charge of using this technology, which is my point.
I'm disgusted by this industry as much as you are, believe me. But blaming the companies that produce "AI" for people dying is misplaced. They're certainly part of the problem, but not the root cause.
> AI needs to be opposed
AI doesn't exist. It is a marketing term used by grifters to sell their snake oil.
But even if it did, it's silly to claim that any technology needs to be opposed. This one is potentially more problematic than others because it raises some difficult existential and social questions which we might not be ready to answer, but it's still ultimately on us to control how it's used. We've somehow been able to do this for nuclear weapons which can literally obliterate civilization at the press of a button, so a probabilistic pattern generator seems trivial in comparison. It's going to be bumpy, but I think we'll manage.
> Regardless, "AI" is not doing the killing in that case. Rather, humans have deployed it to control weapons that kill people.
One of those humans is Sam Altman, which makes him a valid military target.
He's not somebody that released a product and doesn't know what it's being used for. He's selling it specifically to be used as part of killing people.
Did the US government ask Huang to buy drone parts for killer drones and Huang said yes? Did Huang offer to optimize the drone parts to make them more effective in killing people? Altman did.
> AI doesn't exist. It is a marketing term used by grifters to sell their snake oil.
They've claimed the term, this is not a useful objection to make at this point. And everyone was fine with calling our shitty little computer vision handwriting parsers "AI algorithms" before LLMs.
> We've somehow been able to do this for nuclear weapons which can literally obliterate civilization at the press of a button
Knowing what you know about nuclear weapons, if you ran into the Manhattan Project scientists, would you still be cheering them on? "Thanks guys, our democracies are so stable these will literally never be used for a nuclear holocaust, and they might have useful mining applications!"
Can you not think of any exceptionally nasty things the US government could do with "machines that act as if they can think for most practical purposes"? Do you think maybe it might be a good idea to develop that technology after you have made sure that the government serves the people's interests?
> They've claimed the term, this is not a useful objection to make at this point.
Sure it is. Someone saying that the sky is purple will never be true, no matter how many times they say it. Pushing against this is how we avoid the fabricated mystique around this tech, precisely so that people don't see it as a threat.
> Knowing what you know about nuclear weapons, if you ran into the Manhattan Project scientists, would you still be cheering them on?
You're twisting my words. I never said that I support what "AI" companies are doing. I said that your claim that "AI is killing people" is hyperbolic, and that you're barking up the wrong tree.
Besides, the scientific research invested in nuclear technology has produced far more benefits for humanity than drawbacks. It's very likely that the conversation we're having now wouldn't have been possible without this research. There's an argument to be made that even nuclear weapons and their deployment in WW2 had a more positive outcome than any alternative would've had.
Similarly, the same can be said about the current generation of "AI". For all its potential dangers and harms, whether direct or indirect, it has had and will continue to have many positive use cases, some of which we haven't discovered yet. Ignoring this and opposing the tech altogether is throwing out the baby with the bathwater.
The solution isn't banning the tech. It's strongly regulating it, as we've done with many others. Unfortunately, governments move at glacial speeds, and some are deeply entrenched with corporations, so there are conflicts of interest galore, but that's still the most sensible approach to manage it safely.
> Can you not think of any exceptionally nasty things the US government could do with the "machines that act as if they can think for most practical purposes"?
Sure I can. Any government, organization, or individual can abuse any technology. But you haven't made the case why opposing technology itself would prevent that, versus holding those individuals accountable directly. Until then your comments come across as misplaced fear mongering.
> Do you think maybe it might be a good idea to develop that technology after you have made sure that the government serves the peoples interest?
So what do you suggest? We stop all tech R&D because governments can't be trusted? That's pure fantasy. No single government would even agree to it since technology is universal. If the US doesn't invent it, another country will. Advancing within this messy geopolitical framework is the only path forward, for better or worse.
> Any government, organization, or individual can abuse any technology. But you haven't made the case why opposing technology itself would prevent that, versus holding those individuals accountable directly
I think we should hold the individuals accountable directly. But we can't. The system is skewing further and further from the point where we could. Look at the Epstein files - everyone on the planet knows that there is a mountain of evidence condemning someone rich and powerful, and nothing will be done about it.
In the meantime, I want to stop handing weapons to the powerful people that we can't hold to account. I don't think we should stop all R&D - but I think "machines that act as if they can think for most practical purposes" are uniquely dangerous. I also used to think the "AI" companies were full of shit, until my work handed me a bottomless Anthropic API key to use for Claude Code. These systems can successfully navigate novel situations, using tools to interact with the world. Tasks like "find me 20 puritanical White House staffers who are cheating on their spouses, using credit card / location history" are now costly only in terms of API tokens. Or going the other direction: "Find the organizers of this protest. Using all the information collected by big tech, find an unrelated criminal offence they have committed".