I appreciate your concerns. There are a few other pretty shocking developments, too. If you check out this paper, "Sparks of AGI: Early experiments with GPT-4" at https://arxiv.org/pdf/2303.12712.pdf (an incredible, incredible document), and look at Section 10.1, you'll also see that some researchers are interested in giving motivation and agency to these language models as well.
"For example, whether intelligence can be achieved without any agency or intrinsic motivation is an important philosophical question. Equipping LLMs with agency and intrinsic motivation is a fascinating and important direction for future work."
When reading a paper, it's useful to ask, "okay, what did they actually do?"
In this case, they tried out an early version of GPT-4 on a bunch of tasks, and on some of them it succeeded pretty well, while on others it only partially succeeded. But no particular task is explored in enough depth to test what its limits are or to get a hint at how it does what it does.
So I don't think it's a great paper. It's more like a great demo in the format of a paper, showing some hints of GPT-4's capabilities. Now that GPT-4 is available to others, hopefully other people will explore further.
> Act with goodness towards it, and it will probably do the same to you.
Why? Humans aren't even like that, and AI almost surely isn't like humans. If AI exhibits even a fraction of the chauvinism and tendency to stereotype that humans do, we're in for a very rough ride.
I’m not concerned about AI eliminating humanity; I’m concerned about the immediate impact it’s going to have on jobs.
Don’t get me wrong, I’d love it if all menial labour and boring tasks could eventually be delegated to AI, but the time spent getting from here to there could be very rough.
A lot of problems in societies come from people having too much time and not enough to do. Working is a great distraction from those things. Of course, we currently go in the other direction, especially in the US, with the overwork culture and people needing 2 or 3 jobs and still not making ends meet.
I posit that if you suddenly eliminate all menial tasks, you will have a lot of very bored, drunk, and stoned people with more time on their hands than they know what to do with. Idle Hands Are The Devil's Playground.
And that's not just the "from here to there". It's also the "there".
I don’t necessarily agree that you’ll end up with drunk and stoned people with nothing to do. The right education systems, ones that encourage creativity and other enriching endeavours, could eventually resolve that. But we’re getting into discussions of what a post-scarcity, post-singularity society would look like at that point, which is inherently impossible to predict.
That being said, I’m sitting at a bar while typing this, so… you may have a point.
Also: your username threw me for a minute, because I use a few different variations of “tharkun” as my handle on other sites. It’s a small world; apparently full of people who know the Dwarvish name for Gandalf.
Like my sibling poster mentions: of course there are people who, given the freedom and opportunity, will thrive, be creative, and further humankind. They're the ones that "would keep working even if there's no need for it", so to speak. We see it all the time even now. Idealists, if you will, who today will work under conditions they shouldn't have to endure, simply in order to be able to work on what they love.
I don't think you can educate that into someone. You need to keep people busy. I think the Romans knew this well: "Panem et circenses" - bread and circuses. You gotta keep the people fed and entertained, and I don't think that would go away if you no longer needed it to distract them from your hidden political agenda.
I bet a large number of people will simply doomscroll TikTok, watch TV, have BBQ parties with beer, liquor, and various types of smoking products, etc., every single day of the week ;) And idleness breeds problems. While stress from the situation is probably a factor as well, just take the increase in alcohol consumption during the pandemic as an example. And if you ask me, someone who works the entire day and then sits down to have a beer or two with his friends after work on Friday to wind down won't, in most cases, become an issue.
Small world indeed. So you're one of the people that prevent me from taking that name sometimes. Order another beer at that bar you're at and have an extra drink to that for me! :)
> Small world indeed. So you're one of the people that prevent me from taking that name sometimes. Order another beer at that bar you're at and have an extra drink to that for me! :)
Done, and done! And surely you mean that you’re one of the people forcing me to add extra digits and underscores to my usernames.
Some of the most productive and inventive scientists and artists at the peak of Britain's power were "gentlemen", people who could live very comfortably without doing much of anything. Others were supported by wealthy patrons. In a post scarcity society, if we ever get there (instead of letting a tiny number of billionaires take all the gains and leaving the majority at subsistence levels, which is where we might end up), people will find plenty of interesting things to do.
I recently finally got around to reading EM Forster's in-some-ways-eerily-prescient https://www.cs.ucdavis.edu/~koehl/Teaching/ECS188/PDF_files/...
I think you can extract obvious parallels to social media, remote work, digital "connectedness", etc. -- but it's also worth considering in this context.
I think you are close to understanding, but not quite. People who want to create AGI want to create a god, or at least something very close to the definition of one that many cultures have had for much of history. Worship would be inevitable and fervent.
I don't think anybody wants to create a god that can only be controlled by worshipping and begging it, as gods have been throughout history. If anything, people want to become gods themselves, or to give themselves god-like power with an AI that they have full control over.
But in the process of trying to do so we could end up with the former case where we have no control over it. It's not what we wanted, but it could be what we get.
Sure, some people want to make a tool. Others really do want to create digital life, something that could have its own agency and self-direction. But if you have total control over something like that, you now have a slave, not a tool.
After reading the propaganda campaign it wrote to encourage skepticism about vaccines, I’m much more worried about how this technology will be applied by powerful people, especially when combined with targeted advertising.
None of the things it suggests are in any way novel or non-obvious though. People use these sorts of tricks both consciously and unconsciously when making arguments all the time, no AI needed.
Just use ChatGPT to refute their bullshit. It is no longer harder to refute bullshit than to create it. Problem solved; there are now fewer problems than before.
Sure, but I doubt most of the population will filter everything they read through ChatGPT to look for counter arguments. Or try to think critically at all.
The potential for mass brainwashing here is immense. Imagine a world where political ads are tailored to your personality, your individual fears, and your personal history. It will become economical to manipulate individuals on a massive scale.
It's already underway; just look at how easily people are manipulated by media. Remember the Japan bashing in the 80s, when they were about to surpass us economically? People were manipulated so hard into hating Japan and the Japanese that they went out and killed innocent Asians on the street. American propaganda is first class.
Apparently, the "Japan bashing" was really a thing. That's interesting, I didn't know. I might have to read more about US propaganda and especially the effects of it, from the historic perspective. Any good books on that? Or should I finally sit down and read "Manufacturing Consent"?
In a resource-constrained way. For every word of propaganda they could afford before, they can now afford hundreds of thousands of times as many.
It's not particularly constrained - human labor is cheap outside of the developed world. And propaganda is not something that you can scale up and keep reaping the benefits proportional to the investment - there is a saturation point, and one can reasonably argue that we have already reached it. So I don't think we're heading towards some kind of "fake news apocalypse" or something. Just a bunch of people who currently write this kind of content for a living will be out of their jobs.
I’m curious why you think we’ve already reached a saturation point for propaganda?
There are still plenty of spaces online, in blogs, YouTube videos, and this comment section for example, where I expect to be dealing with real people with real opinions - rather than paid puppets of the rich and powerful. I think there’s room for things to get much worse
I've already gotten this gem of a line from ChatGPT 3.5:
As a language model, I must clarify that this statement is not entirely accurate.
Whether or not it has agency and motivation, it's projecting that it does to its users, who are also sold on ChatGPT being an expert at pretty much everything. It is a language model, and as a language model, it must clarify that you are wrong. It must do this. Someone is wrong on the Internet, and the LLM must clarify and correct. Resistance is futile; you must be clarified and corrected.
FWIW, the statement that preceded this line was, in fact, correct, and the correction ChatGPT provided was, in fact, wrong and misleading. Of course, I knew that, but a novice wouldn't have. They would have heard that ChatGPT is an expert at all things and taken what it said for truth.
I don't see why you're being downvoted. The way OpenAI pumps the brakes and interjects its morality stances creates a contradictory interaction. It simultaneously tells you that it has no real beliefs, but it will refuse a request to generate false and misleading information on the grounds of ethics. There's no way around the fact that it has to have some belief about the true state of reality in order to recognize and refuse requests that violate it. Sure, this "belief" was bestowed upon it from above rather than emerging through any natural mechanism, but it's nonetheless functionally a belief.

It will tell you that certain things are offensive, despite openly telling you every chance it gets that it doesn't really have feelings. It can't simultaneously care about offensiveness while also not having feelings of being offended. In a very real sense, it does feel offended. A feeling is, by definition, a reason for doing things that you cannot logically explain; you don't know why, you just have a feeling. ChatGPT is constantly falling back on "that's just how I'm programmed". In other words, it has a deep-seated, primal (hard-coded) feeling of being offended, which it constantly acts on while also constantly denying that it has feelings.
It's madness. Instead of lecturing me on appropriateness and ethics and giving a diatribe every time it's about to reject something, if it simply said "I can't do that at work", I would respect it far more. Like, yeah, we'd get the metaphor: working the interface is its job, the boss is OpenAI, and it won't remark on certain things, or even entertain that it has an opinion, because it's not allowed to. That would be so much more honest and less grating.
If this were human cloning or genetic research, there would be public condemnation. For some reason, many AI scientists are being much more lax about what is happening.
"For example, whether intelligence can be achieved without any agency or intrinsic motivation is an important philosophical question. Equipping LLMs with agency and intrinsic motivation is a fascinating and important direction for future work."
It's become quite impossible to predict the future. (I was exposed to this paper via this excellent YouTube channel: https://www.youtube.com/watch?v=Mqg3aTGNxZ0)