> balance between free speech versus disallowing harmful ideas to propagate?
There is no balance. The very fantasy of such a balance relies upon the presumption that 1) there are harmful _ideas_; and 2) there is an authority that can certify some ideas as _safe_ vs _harmful_.
The discussion here gets overwhelmed by several generations of people who never actually understood what people had to give up to win the right of everyone to speak, and the right of everyone to expose themselves to any idea they themselves deem interesting or appropriate.
Also, the enmeshment of barons of industry with powerful political operatives to suppress competition to both is literally fascism (regardless of the historical revision of the definition 100 years after the fact).
There are, though. There are ideas that are harmful.
The idea that the Jewish population was responsible for the harms and ills of pre-WWII Germany was a devastatingly harmful idea.
> 2) that there is an authority that can certify some ideas as _safe_ vs _harmful_.
It's obvious to me that there cannot be a singular authority that makes such determinations.
But it does seem totally plausible to have a distributed network of actors, each making their own determinations, and each influencing each other about which ideas are harmful, and which they will tolerate within their sphere. After all, that's the basic concept behind the "marketplace of ideas".
You can sell into the marketplace, but YouTube doesn't have to buy what everyone is selling.
I think it's just as totalitarian to tell a private entity that they must host content they disagree with, as it is to tell a private entity that they cannot speak about a specific topic to others. It's the antithesis of the marketplace of ideas.
I think the actual underlying issue is that YouTube, and a few other select entities, have absolutely massive spheres of influence and feel like monopolies within them. That's what kills the marketplace.
So, I do think private entities are completely within their rights to restrict their platforms however they see fit. I think doing so is even necessary for an effective marketplace of ideas that seeks truth. But I think we should also look at anti-trust laws that prevent individual private entities from having such a dominant position over an entire space.
> The idea that the Jewish population was responsible for the harms and ills of pre-WWII Germany was a devastatingly harmful idea.
Come to think of it ... If the pre-WWII harms and ills of Germany had not existed, would the scapegoating have existed? So, is it the idea that one could keep the peace by utterly destroying the enemy _after_ they lost that ought to have been banned?
Fast-forwarding a little bit, even after those particular manifestations had been banned, some Germans continued to kill people whom they considered to be of "inferior" races. Do you attribute the literal roasting of Turks to the idea that luxury on the one side and communism on the other side could magically wash away responsibility for the genocides committed by their ancestors?
I am curious because once we go down the path of blaming "ideas" instead of specific people for specific actions, it all gets pretty funky pretty fast.
It wasn't like the Nazi party was never banned in Germany. Do you take the survival of those ideas despite the various bans since 1923 as proof that banning ideas doesn't change anything about the people who'll do horrible things?
> So, is it the idea that one could keep the peace by utterly destroying the enemy _after_ they lost that ought to have been banned?
I wasn't advocating for banning any ideas, but rather that private entities should have the freedom to moderate their platforms. That private entities and persons should be allowed to say: "I won't share this idea".
One of the points asserted was that this requires us to agree that "some ideas are harmful".
So, I was defending the claim that 'some ideas are harmful'. That is absolutely not the same as defending the claim that 'some ideas should be banned'.
Most of the rest of your comment deals with that latter claim, which I don't support and never claimed to.
My point wasn't that the idea should've been banned, but rather that the idea was harmful.
> but rather that private entities should have the freedom to moderate their platforms.
Do you think the Nazis were able to publish their positions in _Die Rote Fahne_?
> One of the points asserted was that this requires us to agree that "some ideas are harmful".
Ideas are ideas. Ideas by themselves cannot be harmful. Actions have the ability to cause harm.
Well, OK, one exception: the idea that large corporations should merge their power with political authority to insulate the population from harmful ideas is definitely harmful, because it cannot exist separately from the action of chilling the free exchange of ideas.
With an actual private, for-profit business, consumers have the power to take their money and spend it elsewhere.
In this case, there is a dominant communication medium that is not subject to the discipline the rest of us can impose by spending our money elsewhere, because the "business" does not rely on our spending. So, pointing out the massive potential harms that can be visited on a society whose dominant communication channels all do the bidding of political power centers is the only thing we can do. Maybe sufficient numbers of people will be convinced by this.
My point is that the actual harms of YouTube and others are from an anti-trust and monopoly perspective.
The problem isn't a private entity moderating their platform. The problem is a private entity that has so much power that moderating their platform amounts to a society-wide stifling of speech.
The solution isn't to restrict how the entities are allowed to moderate their platforms—it's to prevent and disallow them from being that powerful in the first place!
We may agree philosophically, but you are either not aware of how enmeshed political power is with a few large communications platforms or you are purposefully distracting from that point.
> it's to prevent and disallow them from being that powerful in the first place!
That ship has sailed. The here and now is the fact that we have a communication, tracking, and employment oligopoly in cahoots with the current centers of political power interested in preventing competition to both.
It hasn't. Monopolies have become powerful and then been dismantled before, and it can happen again.
Standard Oil was broken up, as was the Bell System.
With some new legislation and lawsuits, we could absolutely reduce the power of Google/YouTube and Facebook over online communications.
I agree that those entities have already become far too powerful, but we can remove their power and break them into smaller pieces if we have the will to.
I agree that we may not have the will.
But I disagree that the solution is to put more constraints on the speech rights of private entities. You don't fight censorship with compelled speech. You fight monopolies with aggressive anti-trust legislation and action.
Your first argument is essentially that if the general public believes an organization to be bad, they can remove their support for that organization.
Why is YouTube any different? People can and do go to other platforms. You’re differentiating between giving value in the form of dollars versus giving value in the form of time & attention.
There are folks out there that believe some level of content moderation is useful, and I’d like to understand their argument. If my conclusion is that it’s not robust, so be it.
But let’s run a thought experiment. Let’s say an idea shows up on Twitter that Asian people are ruining the US. It spreads and gains attention. People start killing, citing the Twitter misinformation as a key motivator. Can the vast majority of society not agree that 1) this is a harmful idea and 2) reasonable people agree it is harmful, and that Twitter therefore has a responsibility to remove that content?
I'd agree with "1) this is a harmful idea and 2) reasonable people agree it is harmful", but not "therefore Twitter has a responsibility to remove that content".
If your definition of harmful boils down to "results in people getting killed", then: ban fossil fuel advertisements; ban advertisements for the military or content that glorifies militarism; ban any content that encourages people to over-consume energy and resources.
But we know this won't happen, because what we consider "harmful" is distorted by living within an inherently harmful society -- namely, a civilization based on violence, exploitation, and extraction.
The examples you’ve provided are very complex multi-dimensional topics.
The idea of harmfulness is indeed a spectrum, and my example is at one extreme end of the spectrum in order to illustrate that “content moderation should never ever occur” may not stand up to all examples.
Most developed nations have banned cigarette advertisements, so it’s totally a thing we have done before. Why are there no folks outraged that we can’t advertise cigarettes to children?
We ban cigarette advertisements because there was finally a social consensus that addiction to tobacco products is harmful. But it’s still completely legal to get yourself addicted.
On the other hand, there’s no consensus about the harm from phone addiction. Maybe in 30 years we’ll ban iPhone advertisements.
Like I said in another comment, I’d rather we deal with the underlying conditions which may lead to harm, than try to suppress ideas.
So you fundamentally agree that if we have social consensus, it’s totally OK to prevent the spread of ideas or messages (“speech”) as we have done with cigarettes? That was my entire question, to the folks who argue that there is never ever a reason to restrict speech.
I too think that we should address underlying issues, but if there’s a lot at stake: why not both?
I think banning tobacco ads is not the same situation, because the entities behind them are corporations. The corporations misled the public for decades in order to get people addicted, purely for profit. The public harm done was a negative externality of their business model. I'm in favor of regulating corporations. (I don't think there should be any advertising, anyway.)
If we're talking about banning speech that is critical about a medical intervention, or even banning ideas that disagree with a social consensus, that's different territory. Doctors and others aren't speaking out because they want to mislead the public, or in order to profit.
> if there’s a lot at stake: why not both?
Well, I've seen no indication that we're dealing with underlying issues. For instance, if you truly want to restore trust in institutions, it's counterproductive to shut down public discussions that are critical of those institutions...
That is a bogus argument. 1) Killing people is already a criminal act; 2) Some people will kill or beat people up with the slightest impetus, and they will find that impetus whether or not Twitter allows people to discuss anything negative about the CCP; 3) Do you really want everyone's speech to be regulated on the basis of the excuses used by psycho-killers?
Do you think Jodie Foster should have been banned from appearing in movies?
Update: And, just in time, this shows up[1,2]:
> Tech firm LinkedIn has censored the profile of US journalist Bethany Allen-Ebrahimian[3] in China, inviting her to “update” content without specifying what triggered the block.
Ah! She wrote a book about China[4].
Can't allow that! What if some random person becomes too critical of the CCP and assaults a random Asian-American in San Francisco?[5]
Or even one book from each side of the argument?
I don’t think I’ve even begun to comprehend a robust position on what might make sense, or the arguments involved.