An argument that I have some sympathy for, while still being moderately+ in favor of gun control (here in the USA, where I'm a citizen).
It seems to me that gun control, though imperfect, has had a good bit of success in the regions that have implemented it, and the legitimate/non-harmful capabilities lost seem worth trading for the gains. (Reasonable people can disagree here!)
Whereas it seems to me that if we accept the proposition that the vast majority of code in the future is going to be written by AI (and I do), these valuable projects that are taking hard-line stances against it are going to find themselves either having to retreat from that position or facing insurmountable difficulties in staying relevant while holding to their stance.
> these valuable projects that are taking hard-line stances against it are going to find themselves either having to retreat from that position or facing insurmountable difficulties in staying relevant while holding to their stance.
It is the conservative position: it will be easier to walk back the policy and start accepting AI-produced code down the road, once its benefits are clearer, than it will be to excise years of AI-produced code if a technical or social reason to do so emerges.
Even if the promise of AI is fulfilled and projects that don't use it end up comparatively smaller, that doesn't mean those projects have no value, in the same way that people still make wooden furniture with traditional methods today even though a company can make the same piece cheaper in an almost fully automated way.
The AI hype machine is pushing the "inevitability" and "left behind" sentiments to create a self-fulfilling prophecy, akin to https://en.wikipedia.org/wiki/Pluralistic_ignorance, and its backers have the profit and power incentives to do so and to drive mass adoption. It is far from certain that AI will be indispensable or that people will "fall behind" for not using it.
Why would the AI fans even care if others who decide not to use it fall behind? Wouldn't they get to point and laugh and enjoy the benefits of "keeping up"? Their fervor should be looked at with suspicion.
If you're addressing this to me: you need to separate my description of how I perceive things from any effort/desire on my part to make that come to pass. I don't expect to stand to gain if AI continues to get better at coding — most likely just the opposite; this is the first time in my career that I've ever felt much anxiety about whether I'd be able to find work in my field in the future.
There are many others like me who share this expectation, and, while we certainly may be wrong, it's not because of some sinister plan to make the prophecy come true. (There are certainly some who do have sinister/profit-seeking motives, of course!)
> It seems to me that gun control, though imperfect, has had a good bit of success in the regions that have implemented it, and the legitimate/non-harmful capabilities lost seem worth trading for the gains.
This is even true despite the fact that there are bad actors only a few minutes drive away in many cases (Chicago->Indiana border, for example).
The desire to anthropomorphize LLMs is super interesting. People naturally anthropomorphize technology (even printers: "why are you not working!?"). It's a natural and useful heuristic. I can easily see how the makers of ChatGPT would want to intensify this tendency in order to sell the technology's "agency" and the promise that it can solve all your problems. But since it's a heuristic, it papers over a lot of details that one would do well to understand.
(As an aside - this reminds me of the trend of object-oriented ontology, which specifically /tried/ to imbue agency onto large-scale phenomena that were difficult to understand discretely. I remember "global warming" being one of those things, and I can see now how this philosophy would have done more to obscure the dominion of experts with respect to that topic.)
The point is that this is a common pro-gun argument, used to deflect from the fact that making guns harder to own does in fact reduce gun violence. That is how much of the rest of the world works.
But post-Sandy Hook, it's clear which side prevailed in this argument.
Except it seems to be arguing in the exact opposite direction, and about the other side of the problem?
Those in favor of gun control aren't trying to lower human responsibility; they're trying to place stricter limits on guns than the status quo allows. Those against gun control are trying to loosen limits on the guns.
Here this person is proposing making individual responsibility stricter than it is today. And they're not arguing for loosening limits on the tech either.
Isn't that practically the opposite of your analogy?
The Gervais model is predicated on sociopathy as the driving force of social cohesion. This is the kind of model a sociopath would construct. There are other models available to us.
Social organizations require some sort of glue to bind them together. They need ways to maintain cohesion despite vagueness and to obscure (small) errors. A cap is placed on maximum individual output, but aggregate output is much higher than whatever a collection of individuals could attain alone. This is a very basic dynamic that gets lost amidst a cult of individualism that refuses to admit any good greater than the self.
Yes - the CEO talking to the board in this way would lose credibility. But a CEO failing to deploy this jargon correctly would also lose credibility with the board: it would be obvious he doesn't know how to lead.
What I would like to see is a study of the ratios between corporate speak and technical speak, and the inflection points at which too much of either causes organizational ruin.
Agree wholeheartedly - but the conversation around what these technologies /mean/ is gonna end up happening one way or another, even if it is sloppy, imprecise, and carried out by proxy through arguments over definitions. If anything, this is a feature and not a bug. It's through this imprecision that the actually important questions of morality and ethics can leak into discussions that are often structured by their participants to obscure the ethical and moral implications of what is being discussed.