How do you propose this would actually work? Every time youtube, twitter, facebook, etc. want to ban someone, they have to submit a request to the government or be subject to its oversight? That's far more dystopian.
Personally, I wouldn't mind if the judicial branch was in charge of arbitration.
These companies are not obligated to pay creators. They pay them because it's profitable, and the moment money exchanges hands and someone's livelihood depends on them, the relationship changes.
At that point, if you leave creators without recourse, you've only changed the labels and thrown hundreds of years' worth of labor rights down the toilet.
This is a good point that I don’t see very often. Video producers who have an explicit (or even an implicit) agreement with YouTube and depend financially on the earnings that it provides are not just “creators” who can “go somewhere else”. Surely, one could say that to any worker: don’t like the job? Go somewhere else. And still we have fought so hard for labor rights that give employees more agency and some level of protection against abuse.
Makes me wonder whether receiving regular earnings from an online service should legally redefine the relationship between the user and the service into something more closely resembling an employment contract.
You say this as a person with no fear of getting unpersoned when the wind changes, or a cosmic ray flips a bit. It never happened to you and you don't have empathy for the wide range of people it happens to (some of them as innocent as snow), so you don't quite have the fear of it in your bones. Until you're the one to get unpersoned, and then it's too late.
Aren't they still publishing his content, just not running ads and paying? The US government will do fuckall about that, even if platforms are forced to be quasi-national entities subject to the First Amendment.
Or, alternatively, companies have to provide clear and explicit rules about what is permissible on their platform, and if you feel you've been wrongly censored or removed from the platform you should be able to take legal action.
I'm fine with YouTube not wanting to provide a platform for people who they feel are harmful, but they need to define that in an explicit way so that these decisions are not made arbitrarily.
I believe Brand's primary job for the last few years has been as a content creator. Given this, I think it's reasonable to expect he should have some legal rights. Personally I don't see a huge amount of difference between an Uber gig worker and a YouTube content creator. Both should have some basic rights regardless of whether they're technically classed as "employees".
Define "clear and explicit rules". Does the constitution of say United States qualify as examples of clear and explicit rules? If yes, then even after roughly a quarter millennium, there are still hundreds of thousands of cases filed each year.
I don't need to define it. It would be open to reasonable interpretation.
If an online platform creates an unclear or vague rule and uses that rule to remove a user, then that user could pursue legal action. If a court agrees that the rule (or rules) used to remove the user from the platform is unclear or too vague from the perspective of a reasonable person, then the platform would have to pay out for its mistake.
Therefore it would be in their interest to ensure they have clear and explicit rules.
I don't think this is hard, and we shouldn't pretend it is. It's just that regulators in the West would rather force Apple to adopt USB-C and destroy E2E encryption than protect us from arbitrary corporate censorship.
If by "rules" you mean vague references to "harm" then sure.
My use of the word "explicit" here was intentional. As it stands, the "rules" may as well just read "if we don't like what you're doing on or off our platform, we reserve the right to fire you as a content creator". And again I'll note: if you're fired as a content creator for some arbitrary reason, you have no way to challenge the decision.
I don't think this is acceptable. I think Google should ultimately be able to run their platform however they like, but they have a responsibility to make those rules clear when people are dependent on them for their income.
Usenet was a set of fiefdoms mostly administered by academics in CompSci departments, and proved utterly unequal to its first real crisis*. Distributed systems work great as long as they're new and everyone is participating in good faith most of the time. In adversarial situations, they're rarely able to adapt flexibly enough, partly because the networked structure imposes a severe decision-time penalty on consensus formation. A negligent or malicious attacker just has to overwhelm nodes with high betweenness centrality and the whole network fails.
Immediately following crises everyone talks about making the network more resilient and so on, but it never fully recovers, because everyone intuitively knows that establishing consensus is slow and bumpy, and that major restructuring/retooling efforts are way easier to accomplish unilaterally. So people start drifting away, because unless there's a quick technical fix that can be deployed within a month or two, It's Over. Distributed systems always lose against coherent attackers with more than a threshold level of resources, because the latter have a much tighter OODA loop.
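A minimal sketch of the betweenness-centrality point above, assuming Python with networkx; the graph size, edge probability, and "fragmented" threshold are arbitrary choices for illustration, not anything from the comment itself:

    # Illustrative only: compare how many node removals it takes to fragment a
    # random network when an attacker targets high-betweenness nodes versus
    # removing nodes at random. All parameters are made up for the demo.
    import random
    import networkx as nx

    def removals_until_fragmented(G, order):
        """Remove nodes in the given order; return how many removals it takes
        for the largest connected component to fall below half the original size."""
        G = G.copy()
        n = G.number_of_nodes()
        for removed, node in enumerate(order, start=1):
            G.remove_node(node)
            largest = max((len(c) for c in nx.connected_components(G)), default=0)
            if largest < n / 2:
                return removed
        return n

    random.seed(0)
    G = nx.erdos_renyi_graph(200, 0.03, seed=0)

    # Targeted attack: hit the nodes with the highest betweenness centrality
    # first (computed once on the intact graph, i.e. a static attack).
    bc = nx.betweenness_centrality(G)
    targeted = sorted(bc, key=bc.get, reverse=True)

    # Baseline: remove nodes in a random order.
    random_order = random.sample(list(G.nodes), G.number_of_nodes())

    print("targeted removals to fragment:", removals_until_fragmented(G, targeted))
    print("random removals to fragment:  ", removals_until_fragmented(G, random_order))

On graphs like this, the targeted attack typically fragments the network with far fewer removals than random failure does, which is the "overwhelm nodes with high betweenness centrality" failure mode in miniature.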
Exactly, and look what happened to Usenet. People abused the commons and we lost it to spam. Unmoderated networks always fall to bad actors.
I'm building a p2p social network and struggling hard with how to balance company needs, community needs, and individual freedom. A free-for-all leads to a tyranny of structurelessness, in which the loudest and pushiest form a de facto leadership that doesn't represent the will of the majority. On the flip side, overly restrictive rules stifle expression and cause resentment. These are hard questions and there is no one answer, except that unmoderated networks always suck eventually, so the question is one of line drawing and compromise.