> All meaningful speech has consequences. Calls to violence definitely so. Is this really something that needs to be defended when spam and porn don't?
To me it's all pretty simple: the platform's loyalty is to the listener. It is up to the listener to decide whether he wants to engage with different kinds of speech, and most networks can already determine that, either by separation into forums and communities or just by general engagement metrics.
If you prevent exposure because the listener doesn't want to be exposed, that's OK.
If you prevent exposure because you don't want the listener to be exposed, that's censorship.
Censorship is suppressing information because it might be listened to, while spam and porn filters suppress information because it won't be listened to. The ethical and moral boundary is clear, and it is drawn strictly by the platform's loyalty to its consumers.
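To make that boundary concrete, here's a minimal sketch of a filtering rule that stays loyal to the listener. All the names (`Listener`, `Item`, `should_filter`, `noise_threshold`) are hypothetical, and the engagement model is stubbed out; the only point is that every branch keys on the listener's own signals:

```python
from dataclasses import dataclass, field

@dataclass
class Listener:
    muted_categories: set[str] = field(default_factory=set)
    noise_threshold: float = 0.01  # predicted engagement below this is noise to them

@dataclass
class Item:
    category: str
    text: str

def predicted_engagement(item: Item, listener: Listener) -> float:
    """Stand-in for whatever per-listener engagement model the network runs."""
    return 0.5  # dummy value for the sketch

def should_filter(item: Item, listener: Listener) -> bool:
    # OK: the listener opted out of this kind of content themselves.
    if item.category in listener.muted_categories:
        return True
    # OK: the listener almost certainly won't engage with it (spam).
    if predicted_engagement(item, listener) < listener.noise_threshold:
        return True
    # Anything keyed on the platform's own judgment of the content
    # ("people might see it") would be censorship under this rule,
    # so there is deliberately no branch for it here.
    return False
```

The point is structural: a censorship decision would need an input this function doesn't take, namely the platform's own opinion of the content.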
Anything else inherently positions the platform as morally superior to its users, which, given the recent Twitter revelations, is an incredibly bad assumption.
Moderation can be both good and bad. Good moderators are those that remove content nobody will want to see. Bad moderators remove content because people might see it.
There are plenty of calls to violence from the Ukrainian side against Russia. None of those got censored. Should they have been?
I'm not defending the essence of the speech; it should be irrelevant. I'm distinguishing between moral and immoral intent on the platform's part. There's meaningful discussion, and then there's constructing strawmen, fabricating threats, and hiding behind vague, ambiguous terms like "hate speech".
If your method for deciding whether to censor foofoo is whether there exists a story where foofoo leads to a bad outcome, then anyone who wants foofoo censored will create that story (potentially including real-life actions). Therefore the decision should be independent of the existence of such stories. These stories also tend to overgeneralize, applying broad categories when the stories themselves are specific anecdotes.
To me the boundaries are pretty clear, and everything else is just people telling meaningless stories, conflating terms, and applying inconsistent standards.
Understanding where things stand morally is something you do after discussing them, not before.