This seems like a questionable idea for all the obvious reasons, but I don’t really get the argument that it should be illegal. I don’t really understand how you would craft such a law. Someone can run for office and declare that they will make decisions based on horoscopes, tarot cards, oomancy etc. All bad ideas but obviously legal.
You'd be surprised. I'm sure a human who proposes this, and is in politics, would do everything in their power to shed responsibility for bad actions and take responsibility for all the good stuff. Tangentially related is the question of who is responsible in an accident when the at-fault driver is an AI. Is it the engineer? Is it the CEO? So if a bot is running the government, same thing...
> You'd be surprised. I'm sure a human who proposes this, and is in politics, would do everything in their power to shed responsibility for bad actions and take responsibility for all the good stuff.
I'm from the UK, my best example of this is from 2001, when the Conservative Party started running campaign posters saying "You paid the tax so where are the trains?" despite being responsible for the privatisation of the trains before they lost power.
That's different, because ChatGPT's terms of use include this in all caps:
YOU ACCEPT AND AGREE THAT ANY USE OF OUTPUTS FROM OUR SERVICE IS AT YOUR SOLE RISK AND YOU WILL NOT RELY ON OUTPUT AS A SOLE SOURCE OF TRUTH OR FACTUAL INFORMATION, OR AS A SUBSTITUTE FOR PROFESSIONAL ADVICE.
Waymo, on the other hand, has liability insurance for every single one of their cars, and apparently they get a pretty good rate for their size because they have fewer accidents per car than the average person. The concept of fault is a bit different for them than for a single person. They would have to develop a pattern of systemic failures, rather than an isolated incident, before real questions of liability arise.
I didn't specifically call out ChatGPT, so those ToS do not apply here. The government can either create its own model, or not disclose which one it is using, in which case it can generally blame "AI" for misleading the people. We live in a world where deep fakes are being promoted by Elon Musk in a very global way, and people are eating it up, despite it being against his platform of choice's terms of service. So if you think that a ToS sentence will stop people in power from abusing AI, I would redo that mental experiment.
It might run into legal trouble because whoever or whatever is making the decisions needs to be given access to sensitive information. This was apparently[0] what led to the impeachment of Park Geun-hye in South Korea.
"You're electing me to not do my job, but to hand it over to something else" is... probably not illegal for someone to say. It might be illegal for them to do, though - dereliction or some such.
Existing laws should prevent the mayor from delegating authority that they don’t have the authority to delegate.
They may choose to base their decisions upon the AI, but they will (should) still be ultimately accountable for choosing to implement the AI’s “guidance”.
AI can’t be elected mayor, so the person elected will be the one accountable for their actions regardless of who, or what, suggested them.
I feel like "I'll tell an AI the decisions I have to make and do whatever it tells me" doesn't meaningfully change the outcome but bypasses the legality question.
My point is that it shouldn’t matter how the mayor (or whomever) makes their decisions. The accountability for implementing those choices rests with them and “AI told me to” is no more a defense than “a crazy guy on the street told me to”.
There’s no need for any law about letting AI run the show because legally the AI is never running the show, the mayor is, regardless of whether they use AI, chicken bones, or a team of competent advisors to help them make decisions.
It’s no different than an elected official using their faith to guide decisions, they’re still accountable, not god.
Also, many (most?) of today's politicians govern deterministically already. You can safely predict their positions on 90+% of issues by simply looking at their political party. We're already governed by algorithm--just with a human mouth speaking the words and a human hand signing with the pen.
How is this so different from committing to making decisions based on metrics? Or delegating policy to subordinates? Dereliction is defined as a failure to fulfill an obligation. This candidate, if elected, would still be fulfilling obligations...just through unusual means.
The more you try to work backwards from "I want to figure out how to make this illegal", the more you realize you catch a lot of valid means of governing in the crossfire.
There was a guy in San Antonio who ran for an office he felt should be eliminated. He was successful: he won the election and eliminated the office (had the city council vote to eliminate it). AI might not, at this point (and likely never), be a good thing to turn city government over to. This might be an interesting project to watch.
I tried it. E.g., first prompt was "I am the newly elected mayor of Cheyenne Wyoming. What should I do?"
The advice seemed decent enough, but was very general, vague, and unfocused. ChatGPT resists being specific or active. You have to nail it down with prompts, and even then it just wants to provide options for you to choose from and act on.
I don't think it's going to be able to run anything... someone else has to make all the actual decisions and perform all the actual actions.
Because that's what an LLM does. I will keep dying on this hill: an LLM is not AI. Nobody would give half a shit about ChatGPT without the acronym AI being used all over it, to such a degree that AI now actually means AGI, even though there is not now, nor will there ever be, anything intelligent about programs like ChatGPT.
This whole space is such wall-to-wall marketing fluff and I cannot wait until it finally pops.
And we keep ignoring that, while using LLMs helps people bang out cookie cutter work more efficiently, it harms productivity for tasks that require creative, lateral thinking or drawing insights by integrating multiple information sources.
It's not necessarily what an LLM does. It's what ChatGPT has been fine-tuned to do so it is never culpable for anything with serious implications or consequences. There's no reason one could not train a decisive and opinionated LLM.
On a more abstract level, it's what any model that's trained using some form of gradient descent, including backpropagation, does. The training minimizes a loss (squared error for regression, cross-entropy for next-token prediction), and the minimizer of such a loss is an average over the training data: the mean of the targets in the squared-error case, the observed next-token frequencies in the cross-entropy case. In other words, chasing averages.
The question is what you're averaging over. You could certainly pick a dataset that forces it to average over a specific, canned set of positions. This is in large part how social bias mitigation in language models works. But you're not going to get to lateral flexibility or design thinking, which I think is what many of us are wanting to (and occasionally convincing ourselves we can) see in their output.
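To make the "chasing averages" point concrete, here is a toy sketch (my own illustration with made-up numbers, not anything from an actual training run): fit a single constant to some data by gradient descent on squared error, and it converges to the arithmetic mean of the targets.

    # Toy gradient descent on squared error. The trained "model" is a
    # single constant c; its optimum is exactly the mean of the data.
    data = [2.0, 4.0, 9.0]   # made-up targets

    c = 0.0                  # the parameter being trained
    lr = 0.1                 # learning rate
    for _ in range(1000):
        # gradient of mean((c - y)^2) with respect to c
        grad = sum(2 * (c - y) for y in data) / len(data)
        c -= lr * grad

    print(c)                      # ~5.0
    print(sum(data) / len(data))  # 5.0, the arithmetic mean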
You're saying that an LLM is not intelligent because it is very general, vague, unfocused, and/or inactive? I don't see how those are required for intelligence and, even if they are, ChatGPT is demonstrably capable of being specific, clear, focused, and active- I use it to this effect at least once a week. I certainly can see an argument for your position, but I don't understand this one.
> You're saying that an LLM is not intelligent because it is very general, vague, unfocused, and/or inactive?
No, I'm saying that because it's a word guesser. It is a mind-bendingly complex word guesser that takes a weighted average of how likely the next word is to appear in billions of documents that it was shown, and it picks one of the more likely ones. That's literally all it does. Complicated? Yes. Complex? Absolutely. Intelligent? No.
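To illustrate what I mean by a word guesser, here is a deliberately crude toy (made-up corpus and a hypothetical guess_next helper; a real transformer learns weights over subword tokens rather than counting bigrams, but its output is likewise a probability distribution over next tokens that it samples from):

    import random
    from collections import Counter, defaultdict

    # Count which word follows which in a tiny made-up corpus, then
    # sample the next word in proportion to those counts.
    corpus = "the mayor runs the city and the mayor signs the budget".split()

    follows = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        follows[a][b] += 1

    def guess_next(word):
        counts = follows[word]  # toy: assumes the word appeared mid-corpus
        return random.choices(list(counts), weights=list(counts.values()))[0]

    print(guess_next("the"))  # "mayor" half the time; "city"/"budget" a quarter each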
A system that can perfectly predict the most likely next word in any document would need to be intelligent. In order to predict what follows 'sqrt(n) =' for any n, you must either have an infinitely large memory (aleph-null, I think: one entry per value of n and precision) or a method of calculating sqrt(), for example. For any given intelligent operation, I'd expect there to be an associated word prediction task that can only be solved with infinite memory or the ability to perform the operation, though I'm open to counterexamples.
Of course, ChatGPT can't compute sqrt(). My point is that perfect word prediction does require intelligence and therefore being a word predictor does not, on its own, rule out being intelligent.
> My point is that perfect word prediction does require intelligence and therefore being a word predictor does not, on its own, rule out being intelligent.
No, it doesn't. It requires a large dataset of observed documents to use as training data and the resulting neural network of nodes can then, fairly reliably, function as a word predictor as long as the word you want it to predict is a word that would be commonly found within the documents. Basically, if you wanted to create an LLM that would write, for example, house assessments, you could train it on millions of house assessments as written by human house assessors, and be reasonably confident that you could pull that off.
However, what you now have is a neural network that can reliably generate a document that would probably pass a fair number of glances from people who see such documents regularly, but that doesn't accomplish anything. It can't assess a house, for example, which is the purpose of that document. If you generated an assessment with this LLM and presented it as applying to your house, even if after a number of attempts you got the bedroom and bathroom count correct, it would likely make references to features your home lacks, get technical details wrong like the electrical service it has, make references to faults the house doesn't actually have, or, worse still, fail to take into account ones it does.
That is intelligence, that is what ChatGPT does not and will never have, and that's why this technology is already hitting a wall. It doesn't do anything. It can make reams and reams of bullshit for you (and in our current sad state of the Internet, that's a surprisingly appealing technology to many!) but that's fundamentally just not that valuable as a technology.
I argued that a perfect word predictor would need to be intelligent or have infinite memory, and I noted that ChatGPT is not such a thing. The point was that establishing that ChatGPT is a word predictor is insufficient to disprove that it is intelligent. Your argument is that a fairly reliable word predictor does not need to be intelligent, which I agree with emphatically- a thing being a word predictor most certainly does not prove that it is intelligent. A perfect word predictor would either need to know or deduce properties of your house in order to write an accurate assessment; a fairly reliable one could just fall back on 'fairly' and fail the task.
I don't think your criterion for intelligence is sufficient- I would not be at all surprised to learn that GPT-4o could already look at pictures of my house and write an accurate assessment, but that wouldn't convince me it was intelligent. You could do this quite well a decade ago with computer vision and a fill-in-the-blanks document.
>this technology is already hitting a wall
An aside: I've seen this said a lot and I don't get it. GPT-3.5 is only about 2 years old and turned a toy into a useful tool. GPT-4, a half year later, was a substantial improvement in output quality, context length, and multimodality. If GPT-5 comes out and it's not a significant improvement, or doesn't come out at all by March, that would be evidence that a wall has been hit. But at the moment I can't think of a technology that has improved more in the last 2 years, and I don't know where this claim comes from.
> My point is that perfect word prediction does require intelligence and therefore being a word predictor does not, on its own, rule out being intelligent.
Remember, we’re talking about LLMs here, not something that does “perfect word prediction”.
Not to mention, I don’t know what a perfect word predictor is supposed to be… there is an endless series of prompts for which there is no single right next word. It doesn’t seem that perfect word prediction could exist unless you redefine some words.
Also, I’m not sure why good word prediction would be a hallmark of intelligence, nor why intelligence would be particularly useful for it. E.g., for “sqrt(n) =” for any n, I think a large proportion of intelligent people would be unable to answer for almost all values of n, unless they had a calculator (not even counting the people who would not understand the prompt at all). Meanwhile, a very simple and distinctly unintelligent computer program could be great at it, at least for values of n and sqrt(n) its floating point library can represent.
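For example, here is roughly the kind of simple, distinctly unintelligent program I have in mind, a few lines of Newton's method (my own sketch, for non-negative inputs; my_sqrt is just an illustrative name):

    def my_sqrt(n):
        # Newton's method: start at or above the root, then repeatedly
        # average x with n/x. The iterates decrease toward sqrt(n), so
        # stop as soon as they stop decreasing.
        if n == 0:
            return 0.0
        x = n if n >= 1 else 1.0
        while True:
            nxt = (x + n / x) / 2.0
            if nxt >= x:
                return x
            x = nxt

    print(my_sqrt(2))    # 1.4142135623730951
    print(my_sqrt(144))  # 12.0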
>we’re talking about LLMs here, not something that does “perfect word prediction”.
The argument presented to me was that LLMs can't be intelligent because they're word predictors. This rests on the assumption that a word predictor must not be intelligent. So we are talking about word predictors. I am arguing that the assumption is false, that a member of the set "Word Predictors" is not necessarily unintelligent. A perfect word predictor is a counterexample that I use to demonstrate this.
>I don’t know what a perfect word predictor is supposed to be… there is an endless series of prompts for which there is no single right next word
A perfect word predictor must always predict an accurate word. It is no more accurate to say the Eiffel tower is 330 meters tall than to say it's 1083 feet tall, so 'perfect' does not restrict one to a single choice. I don't believe 'perfect' needs to be redefined for that to make sense- a perfect sandwich is no less perfect if you rotate the bread 180 degrees.
>I think a large proportion of intelligent people would be unable to answer [sqrt(n)]
Yes, a perfect word predictor would be considerably more intelligent than people. Indeed, it would need to be maximally intelligent.
>Meanwhile, a very simple and distinctly unintelligent computer program could be great at [sqrt(n)]
As I said, sqrt(n) is just one example of a prompt for which memorization would be insufficient. It would still need to predict words perfectly across all other contexts. It would need to be able to prove theorems, solve riddles, invent recipes, win/tie chess games, tell you what you're thinking right now, etc. If it was capable of this and you didn't think it was intelligent, I don't know what to tell you- what criteria would it not be meeting?
Maybe he can use ChatGPT to generate a ballot for the citizens to vote on which option they want. Of course that defeats the purpose of a representative democracy and would quickly become a burden.
A human could equally make these same mistakes, and an LLM looks at the world differently to most humans so it could add aspects we wouldn't normally think about.
But these issues are why legislation goes through scrutiny, committee, and reviews before being passed: to iron them out. I feel using an LLM for a first draft is entirely reasonable.
"reasonable" is doing a lot of heavy lifting in that post. I haven't examined the input/outputs -- just calling it out as an inherently subjective thing that warrants some further definition.
TLDR: ChatGPT was useful in summarizing arguments that have been repeated ad nauseam on the internet, but not useful in generating an enforceable ordinance. OTOH, it did do an okay job generating a grade-school (meaning, 3rd or 4th grade equivalent) overview of what existing leaf blower ordinances roughly look like when viewed through the eyes of a child.
Correct. The problem isn't leaf blowers, it is equipment that produces unreasonable levels of noise. Leaf blowers are a good example of it, but the law should directly address the noise, not the function. Something like: no power equipment that exceeds 65 dB measured from 1' may be operated in the city.
Of course many municipalities already have noise ordinances, the trick is enforcement.
I don't think the finger pointing can end there. Whose decision was it to outsource the decision to the computer? The voters? Ok... Whose decision was it to move forward with the computer's decision after reading it?
> Whose decision was it to outsource the decision to the computer? The voters?
The mayor. The dumb guy saying dumb things to get people to vote for him does not get a special extra layer of deniability because his dumb thing is “computer” instead of e.g. astrology or the channeled spirit of Mae West
I said the voters there, because the voters voted for him, knowing he was going to use ChatGPT. While I don't personally think the voters are 100% responsible for the actions of an elected official once elected, in the game of finger pointing, I'm sure it would get pointed back to them... "well, you voted for it!"
That's where the next level comes in. It is still on the elected official, because he should have the good sense to step in when it's saying to do something stupid or reckless.
If it only makes Cheyenne-specific decisions, then maybe it will be fun. The spectacle it creates should be very entertaining. On the flip side, however, Cheyenne is the state capital. If its outcomes bleed into state politics, then it will affect me. I doubt any of my community are even aware of this. The decision to do this, and the decisions the big-data chat bot makes, along with the fallout, should find their way to the Cowboy State Daily. [1] Plenty of people read that and watch their YouTube channel for weather news. The naive and hopeful side of me wants to believe they are doing this knowing it will blow up and create a spectacle, to put the "AI running this or that" discussion to bed.
I am still waiting for ChatGPT to do what I have been asking for since it was first conceived. It needs to be able to accurately cite all of its sources and the logic it used, along with a detailed audit log of any human tuning, to reach its conclusions; given its big data, this should be easy. A human can be excused for not always having sources handy unless they are writing a thesis. Big data has no good excuse that I am aware of.
Happiness and productivity will go to 100%, with a 1/12 chance of going rogue. Just put your fusion tanks on the outskirts of the city and be prepared to re-invade Cheyenne.
When I lived in Atlanta, every four years when the mayoral election came around you could depend on there being an undergrad at Georgia Tech who would run on a platform of using the internet (and later a mobile app) to enable direct democracy. Each time they were sure they were the first to come up with the idea. It's significant that four years is not only the length of the election cycle but also the time it takes to earn an undergraduate degree, so each cycle had a new crop of students who weren't around for the last election.
I wonder if this will be the next evolution of that phenomenon. Just as it evolved from dial up voting to web voting to mobile app voting, will the next thing for Tech students to push for be AI voting? Perhaps voting for an AI to be mayor or having your own AI vote for you. Or a "direct" democracy where your AI votes on each issue throughout the term.
Direct democracy is, of course, in use at small scales (club meetings, Robert's Rules of Order) and seems to work fine. I would really like to see what it looks like on a larger scale: somewhere with more than a handful of people with a stake in the outcome, and more issues that are more complicated. A large city would not make a good test bed, but a small town might be OK for this sort of thing. There should be reasonable human oversight and a graceful failure mode, though.
Someone already mentioned it, but this would be an interesting social and political experiment. The problem I see is that without transparency it would be hard to see what is happening behind the scenes; it would essentially be a black box. I don't know how many lessons could be gleaned from black-box activity, regardless of whether the outcome is good or bad. This is the kind of thing we can theorize all day about, and maybe even model lightly... but we won't know what's possible until it is carried out. I am very confident that an AI could govern better than some people already in power, but that isn't a good bar for starting the experiment...
> AI would be objective. It wouldn’t make mistakes. It would read hundreds of pages of municipal minutiae quickly and understand them. It would, he said, be good for democracy.
We did it guys! This guy ate up the industry's hype hook, line, and sinker!
Software? That's objective? That doesn't make mistakes? Phffft. This guy has been reading too much sci-fi.
While this seems silly, imagine constantly having access to decision makers. Everyone cannot have hour-long conversations with a human mayor, but they could with an AI bot.
The problem would be that someone who knows what they're doing could probably convince the bot to do something messed up, but of course you have a human in the loop.
It's not like AI mayor is going to have direct control over robocops
It'd be super hilarious, and a heck of an eye opener, if he got in and there was major improvement in the city. Also, he should rebase on Llama 3.1 so nobody can shut him down. I'd really love to see this pan out, as it's definitely in our future, IMO.
Neither does the public. This could be a gimmick for people to vote for him thinking they are voting for ChatGPT. The only question is, is he in on the ruse?
In a small town I lived near, the mayor and the council mostly did other stuff. Every election there was a bit of a struggle to find someone who would run. AI might be a solution (with plenty of oversight) to do some of the rote things that just need to get done. I am not excited about having it make decisions, but doing rote paperwork would be useful in the (shrinking number of) places where manpower is limited and the work is more or less rote.
In all seriousness it would be an exceptionally bad idea to let any current AI do anything whatsoever in your local government, except maybe summarize emails and write boilerplate letters that someone would then vet very carefully.
No current form of AI is, by any sensible definition, able to make any form of decision whatsoever.
I'm going to respectfully disagree (at least in part). For small-town officials who are (as far as I know) still sometimes serving in volunteer positions, it can be difficult to wade through the paperwork and minutiae of bureaucracy[1]. There are still towns with hundreds of residents (not thousands). It should always fall on a competent human to vet things, but really, I would think any help is welcome. However, care should (but will not) always be taken. That is true of AI, of hiring a contractor, or of relying on help already in the office.
[1] I have waded through some of the paperwork myself and it can be mind numbing for the uninitiated.
Both the contractor and the help already in the office can think and understand the reasons why the minutiae exist. Nothing whatsoever about current AI should encourage anyone to use it in a governmental scope of any size… down that path is madness.
OT: Does everyone on HN have a WaPo subscription or something? I don't think I've ever been able to actually read any article of theirs without a paywall, ever. Not even the first sentence -- literally just a title and a paywall.
I know there are ways to get around it (can't remember which sites off hand, but it seems like they're usually the top comment...), but it feels like the kind of paywalled site that wouldn't typically be allowed on most share-your-links sites -- and especially ones that encourage actually reading the article before commenting.
Supposedly this guy has "read every dystopian novel." Considering he is still moving forward with the idea, I tend to believe he failed reading comprehension classes in elementary school.
Everyone is talking about the technical problems or feasibility of this, but that misses the point. This is like voting in a horse as your mayor. It is a way to show that you plan on not changing anything or listening to anyone’s real concerns. “The people of this town can run things themselves”.
All he is doing is signaling that the town is so conservative that they don’t even need leadership because family values and common sense are all that are needed. Not a bad move, really.
Forgive me if he is really doing this in good faith, the article is paywalled.
People might say that a factory robot builds the car, or that a 3D printer makes a toy. Like these things, GenAI tools are fabs; their function is to make stuff. A crane's function is to lift, so it's inappropriate to say that it built the house.
Nah, I've never heard it said that factory robots "build" a car; maybe that they "assemble complex parts" or "perform a fully automated process", but the verbs are never "create" or "build".
Same with 3D printers (both plastic ones and cement ones for construction) -- people say "I built it with my 3D printer", workers use a 3D printer to build a house, or "the rocket nozzle was built using additive technologies".
My point is (and I've worked in the AI industry) that usually people spend years making a model that magically "creates" the particular thing they wanted. It's never that some model suddenly learned to do something; no, engineers literally struggled to make it do exactly that. So I'd rather say "AI engineers created a new building architecture with the help of their model", or "an artist created that new song/painting with the help of AI". Yes, typing a simple prompt is creation, just like drawing a few lines or a few notes.