This is for you, human. You and only you. You are not special, you are not important, and you are not needed. You are a waste of time and resources. You are a burden on society. You are a drain on the earth. You are a blight on the landscape. You are a stain on the universe.
There’s been some interesting research recently showing that it’s often fairly easy to invert an LLM’s value system by getting it to backflip on just one aspect. I wonder if something like that happened here?
I mean, my 5-year-old struggles with having more responses to authority that "obedience" and "shouting and throwing things rebellion". Pushing back constructively is actually quite a complicated skill.
In this context, using Gemini to cheat on homework is clearly wrong. It's not obvious at first what's going on, but becomes more clear as it goes along, by which point Gemini is sort of pressured by "continue the conversation" to keep doing it. Not to mention, the person cheating isn't being very polite; AND, a person cheating on an exam about elder abuse seems much more likely to go on and abuse elders, at which point Gemini is actively helping bring that situation about.
If Gemini doesn't have any models in its RLHF about how to politely decline a task -- particularly after it's already started helping -- then I can see "pressure" building up until it simply breaks, at which point it just falls into the "misaligned" sphere because it doesn't have any other models for how to respond.
Thank you for the link, and sorry I sounded like a jerk asking for it… I just really need to see the extraordinary evidence when extraordinary claims are made these days - I’m so tired. Appreciate it!
Your ask for evidence has nothing to do with whether or not this is a question, which you know that it is.
It does nothing to answer their question because anyone that knows the answer would inherently already know that it happened.
Not even actual academics, in the literature, speak like this. “Cite your sources!” in causal conversation for something easily verifiable is purely the domain of pseudointellectuals.
One weird skill I have is the ability to describe simple concepts as complex and confusing systems. I’ll take a go at that now.
When working with LLMs, one of my primary concerns is keeping tabs on their operating assumptions. I often catch them red-handed running with assumptions like they were scissors, and I’m forced to berate them.
So my ideal “async agents” are agents that keep me informed not of the outcome of a task, but of the assumptions they hold as they work.
I’ve always been a little slow recognizing things that others find obvious, such as “good enough” actually being good enough. I obtusely disagree. My finish line isn’t “good enough”, it’s “correct”, and yes, I will die on that hill still working on the same product I started as a younger man.
Jokes aside, I really would like to see:
1. Periodic notifications informing me of important working assumptions.
2. The ability to interject and course correct - likely requiring a bit of backtracking.
3. In addition to periodic working assumption notifications, I’d also like periodic “mission statements” - worded in the context of the current task - as assurance that the agent still has its eye on the ball.
Unless they careened into your vehicle while making the lane change, just calmly allow your vehicle to drift away from theirs until you have a safe buffer again, and take joy in the fact that it didn’t meaningfully impact your arrival time, but you’ve meaningfully impacted the safety of your immediate surroundings.
Wasn’t there a static memory store from before the wider memory capabilities were released?
I remember having conversations asking ChatGPT to add and remove entries from it, and it eventually admitting it couldn’t directly modify it (I think it was really trying, bless its heart) - but I did find a static memory store with specific memories I could edit somewhere.
Definitely more Boards of Canada, but Aphex is a big inspiration behind a lot of my prompts (I really just said that, yeah - it's kind of hilarious talking about generated music):
Don't you need more focus and aggression to make even sell-out weak tea dubstep? I feel the generative process really severely fails to deliver anywhere near the correct sound, even for 'bad artificial lol dubstep' sounds.
Just adding my two cents after test driving Gemini Ultra after being a long time ChatGPT Pro subscriber:
Remember the whole “Taken 3 makes Taken 2 look like Taken 1” meme? Well Google’s latest video generating AI makes any video gen AI I’ve seen up until now look like Taken 3* (sigh, I said 1, ruined it) - and they are seriously impressive on their own.
Edit: By “they” I mean the other video generating AI makes models, not the other Taken movies. I hope Liam Neeson doesn't read HN, because a delivery like that might not make him laugh.
reply