
You've misunderstood me. I am happy to clarify.

The only algorithmic content that would be eliminated is that which is already illegal speech and/or that which is too valueless to be worth spending the approximately $2 of time it costs for someone to look over a 10-second video and sign off that it's OK.

Who pays that can be decided by the market, but probably the poster could pay it, and if they choose not to, the platform would algorithmically decide which "free" content seems worth moderating at the platform's expense.

I'm literally just requiring a modicum of moderation, not banning anything.

Or is it that you believe the vast majority of the content on these platforms today is ultra-low-value and/or illegal? Frankly, if you do believe that, that's even more damning for the platforms, isn't it?

However, I don't actually believe that. Most content I see appears to be legal and already has thousands of views by the time it reaches me. Algorithmic content is already mostly stuff popular enough that the tiny expense of vetting it wouldn't quash it.

As for how a newbie gets their foot in the door, this actually makes it easier for them. Yes, now you might have to pay $2 to post your first video if you want any shot at anyone besides your friends seeing it -- but now you're only competing with others who were willing to do the same. All the ultra-low-effort AI-generated videos suddenly become totally unviable. In fact, this generally biases a platform towards higher-production-effort content. If I spent 2 minutes conceiving of and making a low-effort video, shot it in one take, and posted it 4 minutes after having the idea, then $2 seems like a lot. But if I spent two days planning, shooting, and editing, then suddenly $2 is nothing.

This will make everything better and nothing worse, except for the bad actors who are parasites on our once-wonderful internet ecosystem.



> that which is too valueless to be worth spending the approximately $2 of time it costs for someone to look over a 10-second video and sign off that it's OK

The problem is that's basically all user-generated content. Who's going to spend $2 vetting this comment that I'm writing right now before approving it to be shown to users by the algorithm that ranks it based on upvotes?

Sure, there will still be some high-value content out there that earns enough revenue to be worth the effort. But the vast, vast majority of user-generated content doesn't fall under that category.


There are details, but we can get them right if the political will is there. The big picture is that platforms should be responsible for their arbitrary and self-serving publishing and promotion decisions, while being shielded from liability when they act more like a simple common carrier.

For example, I think I'd be OK with some sort of exemption for pure-text content. Most of the problematic content out there isn't pure text.

And/or maybe trivial-to-describe rules (upvotes, linear historical feeds, etc.) shouldn't count as an algorithm: if it's obvious to a "reasonable person" with a high school education exactly how the content is being selected, it doesn't need to count as the platform making publication choices.
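To make "trivially describable" concrete, here's a toy sketch (the post list and field names are invented for illustration, not taken from any real platform). Both feeds below are fully explained by a one-line sort, which is the kind of selection a reasonable person can verify for themselves:

    # Toy sketch: two "trivially describable" feeds over made-up posts.
    posts = [
        {"id": 1, "upvotes": 12, "posted_at": "2024-01-03"},
        {"id": 2, "upvotes": 90, "posted_at": "2024-01-01"},
        {"id": 3, "upvotes": 40, "posted_at": "2024-01-02"},
    ]

    # Upvote ranking: "show the most-upvoted posts first."
    top_feed = sorted(posts, key=lambda p: p["upvotes"], reverse=True)

    # Linear historical feed: "show the newest posts first."
    chrono_feed = sorted(posts, key=lambda p: p["posted_at"], reverse=True)

Contrast that with an engagement-trained recommendation model: no one-sentence description lets an outsider reproduce its output, and that's exactly where I'd say the platform is making publication choices.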

Focus on the big picture and the details will follow. Pointing out that implementation will require some attention to detail is in no way a takedown of a big idea. The big picture of Social Security is "If you're too sick or old to work, we should pay you," but the details fill volumes and volumes. This is nothing new; it's how laws work.



