Amazed that social media companies engineering society isn’t getting more press. They are all doing it.
I noticed this first on X - during the FarageRiots, when an Asian woman asked how many people felt safe. The volume of violently racist replies was insane. As an Asian man, it made me feel very scared about society. I felt outnumbered. It wasn’t reflective of society - as it turned out, the vast majority of Britain responded with a mass demonstration of racial unity. But not on X.
On YouTube I noticed it silently deleting my comments. Nothing violent - literally a comment saying I was concerned about NHS privatisation and a takeover by US finance. I watched the same comment get removed. No notice. No reason. No appeal. An invisible takedown. A quick Google shows lots of people experiencing the same thing.
And it got me thinking - wow - imagine shaping public sentiment en masse. Making opinions that aren’t convenient to the people who own social media disappear. That creates helplessness. It shapes elections.
The rage bait we see now pulls in attention, shapes conversations and defines the Overton window.
Noticed a post on X from Theo (T3) an hour or so ago that was critical of OpenAI, and the first comment was calling him an OpenAI shill. There are certainly plausible incentives on X to fuel anti-competitor sentiment and amplify useful sentiment.
This article on Meta validates the patterns I’ve seen. It’s deeply concerning. We are in an era where society is micro-shaped by social media owners and their agendas.
This issue needs to be addressed. We need regulation, transparent recommendation algorithms, and clear limits on targeting users.
And then there is the toxic nature of social media’s engineered addiction. A sidebar, I know, but it has to be said.
We need much more regulation, and more decentralised ownership of social media companies, to protect democracy.
I wholly agree with your point on X. The comeback of racism is one of the most dangerous social phenomena in today's world. Besides sowing insurmountable amounts of hatred, it also brings xenophobia, misogyny/misandry and the like along with it, as the forerunning discriminatory practice in our world.
It's pretty bad. I've also been very interested in the non-organic way certain topics get introduced. It's often chains of non-organic posts/replies that seed topics: opinion 1 is proposed, then someone else comes in and makes some obvious fallacy in a counter-argument, then another post responds calling them out in some inflammatory way. This kicks off a cycle of real user engagement, either defending or attacking one of the participants. But the entire initial chain of 3-4 back-and-forths is all bots, subtly guiding topics.
Wow that’s insane - didn’t realise that was happening re non-organic posts.
From a behavioural POV it seems like an obvious play. These companies and their owners have huge gains to make via social engineering.
There is very little transparency, accountability or regulation.
The thing that worries me is the unobvious stuff. Most people know about Instagram and increased suicide rates. What shocked me was finding out that Instagram did things like wait for people to remove photos of themselves, identify that insecurity behaviour, and use it to position beauty products to young girls. It seems so unethical and predatory. Not to mention the impact on public mental health when applied at scale.
Another crazy stat was something like average screen time of 4 hrs/day, with average attention spans dropping from ~180s to something like ~90s.
The impact in so many areas is so bad. It blows my mind that there is such a lack of regulation.
I'm thinking AI has the potential to social engineer at scale, without even the need to bother creating content or making bots.
It was a big deal when Google started doing it 15 years ago on YouTube. They were explicitly changing recommendation weights for Middle Eastern videos. At the time it was considered a moral thing to do, because it was done to prevent ISIS from radicalising people.
I remember warning people at the time that they'd do it for domestic political videos, and it was really frustrating how no one believed me. It's even more frustrating that, after experiencing it, people continue to use sites that are artificially steered like this.
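To make the re-weighting mechanism concrete, here is a minimal sketch of what topic-level re-weighting in a recommender's ranking stage could look like. This is purely hypothetical - the topic labels, weight values, and scoring shape are illustrative assumptions, not Google's actual system - but it shows how a small, invisible multiplier is enough to bury or boost an entire category:

```python
# Hypothetical sketch: topic-based re-weighting in a recommender's
# ranking stage. Not any platform's real code; it just illustrates
# how a per-topic multiplier can silently demote or promote content.

# Multipliers chosen by the operator. 1.0 = neutral,
# < 1.0 suppresses, > 1.0 amplifies. Invisible to users.
TOPIC_WEIGHTS = {
    "extremist_recruitment": 0.05,  # effectively buried
    "domestic_politics_a": 0.6,     # quietly demoted
    "domestic_politics_b": 1.4,     # quietly promoted
}

def rerank(candidates):
    """candidates: list of (video_id, topic, base_relevance_score).
    Returns the list re-sorted after applying topic multipliers."""
    adjusted = [
        (vid, topic, score * TOPIC_WEIGHTS.get(topic, 1.0))
        for vid, topic, score in candidates
    ]
    return sorted(adjusted, key=lambda row: row[2], reverse=True)

# Example: identical base relevance, very different final ordering.
feed = rerank([
    ("v1", "domestic_politics_a", 0.9),
    ("v2", "domestic_politics_b", 0.9),
    ("v3", "cooking", 0.9),
])
print(feed)  # v2 ranks first and v1 last, despite equal base scores
```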