
Oh shit, is it the current year already? But yeah, it's clearly labeled as a pdf link, and if you don't like your browser's default handling of pdfs, you can just change it. I don't like opening YouTube links in the browser, but I'm not gonna ask people not to post them; that would be weird.


Particularly not compared to Hillary Clinton.


What are we even talking about here - "quashing abuse"? Are we talking about deleting spam, content-free insults, and the like - literally combating people abusing the platform? Or is it about stopping people from abusing other people: deleting hateful comments, and maybe censoring certain ideologies (Nazis, etc.)? I'm just wondering what "abuse" is, and what a Twitter moderator's job description even is.


I imagine "abuse" is whatever the moderator in question defines it to be.

Look at Twitter's history of curbing "abuse" and you'll find that this nebulous definition gives plenty of room for their subconscious and conscious biases to take over: a disproportionate number of the targets of censorship on Twitter are of a certain political persuasion.

So I see no nobility in the original commenter's goal. Taking away the voice of people you don't agree with is not noble, and censorship of any kind is unfortunate.


I have a fairly fundamentalist view of free speech, so in my opinion the only circumstances in which censorship is an acceptable response to speech are:

a) content that violates an individual or entity's privacy; or

b) direct incitement to physical harm to an individual, group or property; or

c) to protect the platform from legal damages or other legal quandaries

Unfortunately, most codes of conduct tend to be structured around public perception, which serves a business's long-term interests better than the "killer feature" of a truly open platform, which many platform owners see as more trouble than it is worth.


> a disproportionate number of the targets of censorship on Twitter are of a certain political persuasion.

Yeah, see, I don't worry so much about this. I purposely follow people with whom I disagree (but who are well spoken) on Twitter to avoid ending up in an echo chamber. So I don't disagree with you that quashing diversity of opinion is a problem; the last thing I need in my life is to preach to the choir.

But the fact remains that, 1st Amendment or not, you can't yell "Fire!" in a crowded theater and then plead "but, but, my rights!" to evade responsibility. Credible threats are assault (look it up, it's in every state's legal code). I would not be surprised if doxxing was found to fall under similar statutes by an enlightened judge if done with malice aforethought.

Furthermore, an individual's rights to free expression do not include a right to make Twitter spend money or lose opportunities due to their chosen expressions. Twitter != the government. If push comes to shove, their self-preservation must take precedence (legally, since they have a fiduciary duty to their shareholders as a public company).

It's a chestnut, to be sure, but freedom of the press applies to those who own a printing press. Until and unless Twitter is nationalized (heaven forfend), they're restricted (at worst) by common carrier provisions. They can do a good job if it's in the best interests of their shareholders (and recent events suggest that may be the case). It's really an issue of how (how to maximize shareholder value, how to improve public perception, how not to drive away ads...).


Well, let me put it this way: banning people for threatening violence against other users in the name of social justice costs Twitter opportunities due to being unacceptable to a large chunk of the tech community (and I'm not even talking about them doing it to right-wing blowhards like Milo).


And yet, here you are on a platform that does not permit its users to threaten violence against other users.

Is that costing Hacker News "opportunities?" Compared to what? The opportunities afforded by not driving away all the users for whom threats of violence make a site toxic even if you aren't on the receiving end?

And what do you mean by a "large chunk of the tech community?" Do you mean a significant proportion relative to the "tech community?" Or a significant proportion relative to Twitter's user base as a whole?

And how do you define "the tech community?" Do you mean this cosy little echo chamber we inhabit made up of startup hackers and like-minded individuals? Or do you mean everyone working in tech, including the kinds of people working for BigCo who never go near Hacker News and visit Reddit for affinity subreddits like /r/volvo?

---

All design decisions are tradeoffs. If you make one choice, you alienate a certain set of users. But if you make a different choice, you'll alienate another set of users while placating some of the first set.

So yes, there is some opportunity given up if Twitter goes around banning people who incite threats of violence (your example), but there is some other opportunity given up if it tolerates them.


I don't know if you've noticed, but Hacker News is not exactly terribly popular amongst the social justice part of the tech community. Also, users' opinions can only be influenced by threats of violence that they actually hear about; very few people outside the offenders' immediate social circles hear about these threats, and those who do are under social pressure to accept them, for obvious reasons. Not banning people who haven't themselves made threats of violence, but whom their opponents have tried, for political reasons, to tie to unrelated people who have, has probably done far more damage to Twitter's acceptance amongst people who refuse to participate in communities where death threats are normal.


It's a small fraction of the tech community that has a problem with banning people for threatening others with violence. And the tech community is a small fraction of Twitter's userbase.

Not catering to the libertariat has never really harmed a mainstream-targeting business.


To define "abuse" we'll need to talk about expectations and norms around collective communication.

Social communication platforms (implicitly or explicitly) embody certain communication goals.

A. Striving for constructive conversation is one family of goals. These goals are valid even if fuzzy and imperfectly enforced.

B. Striving for "(almost) anything goes, up to legal limits, such as libel" is another kind of goal. This may seem safer and more ideologically comfortable to some. However, in practice, it still has large complexity and uncertainty.

Social media platforms are not necessarily responsible for promoting free speech in the same way that democratic institutions are bound to protect individual rights. Many individuals have multiple channels for speech. They can often find a venue that works best for them.

Speaking generally -- there may be exceptions in specific cases -- organizations are not required to give every viewpoint equal airtime. Organizations are free to choose how they want to structure their environments, up to the point at which they accept money or are subject to the purview of governmental bodies.

Some people use the word "censorship" too loosely, in my view. Yes, governmental censorship is only justified under a very high and particular standard, subject to close scrutiny. However, private organizations are much freer to filter as they see fit, and I think this is justified. One church does not have to voice every viewpoint at a gathering. A magazine does not have to print every article submitted. Not every story gets to the top of Hacker News. The rules at play are, by definition, filters.

I want an online platform for constructive conversation. This requires some kind of norms and probably some kind of message "shaping" for lack of a better word. Call it filtering or summarization or topic modeling. Whatever it is, people have limited time, and to be effective, a platform has to design around that constraint. Platforms need to demonstrate excellent collective user interface design.

I think almost all "social" platforms are falling very short to the extent that they claim they are effective social communication tools. I wrote much more here: https://medium.com/@xpe/designing-effective-communication-67...


"Spam" is also a nebulous term. A million actual humans having a genuine conversation about their Nazi ideology isn't spam, in most people's definitions. But a million actual humans having that genuine conversation while at-mentioning me unequivocally makes it impossible for me to productively use Twitter the product - which is really the reason spam is bad.

And that applies to traditional spam too: if a bunch of fake Nigerian princes emailed each other and only each other, we wouldn't have a reason to get rid of it, as long as there was technical capacity to deliver those messages. The spam part is that it makes it hard for me to get the emails I want.

I might have a disagreement with Twitter's priorities if there are a million actual humans quietly discussing their Nazi ideologies in a corner and not bothering the rest of us, and Twitter prioritizes their quality of service over the rest of our quality of service. Other people might not, and that's fair. Other people might have a problem with it regardless of whether it impacts other people's service, and that's fair. But I think that such a scenario isn't what is commonly meant by "abuse".


> What are we even talking about here - "quashing abuse"?

Well, even if we don't agree on the full definition, we could start somewhere, specifically at the bottom of the barrel.

Harassment is an easy one: death threats, threats of violence, unwanted contact with people in real life, unprompted sexual comments, threats of doxxing / swatting / hacking, relentless insults, or any combination thereof. These aren't really difficult to agree on. I don't think there's a convincing or defensible argument that someone has a right to follow someone around and bother them /directly/. Just improving this would be a massive improvement in many Twitter users' lives, and Twitter has been condemned many times for failing to take the simplest actions to prevent it.

Hateful ideologies discussed privately are quite different from that. I do think they still cause measurable harm, and I'm against them personally, but at least I can understand why some people make a free-speech slippery-slope argument against interfering.


Well that's part of the challenge, isn't it? What constitutes abuse?

The GP doesn't say that they're going to be a moderator, BTW. They might be a developer who would be building features that would allow each user to better determine for themselves how to manage abuse. Or tooling that would allow moderators to do a better job of managing abuse complaints.

The native feature set is pretty simple right now. A lot of people use Tweetbot for its temporary mute feature, which is not native to Twitter.


I can't imagine there is anything other than a sliding scale at work here. It's difficult to come up with hard and fast, absolute rules.

But there seem to be some easy wins - there is an increasing trend of Jewish users being on the receiving end of tweets suggesting that they belong in an oven. I've seen some tweets showing that when the user reports said tweet, Twitter replies saying that there has been no rule violation and that the offending user won't be suspended.

Working out a better policy for outright hate speech like that seems like a good start.

