Can you get rid of it? No. Doing so would result in the loss of many important rights and have unintended consequences. Not even China can get rid of this. BUT that doesn't mean you can't put regulations and limitations on them.
As one example, we may want to make laws ensuring that ads are easily recognizable as ads. I'm referencing Native Advertising. I want to use an example from the NYT[0] that is marked, to show how nefarious this can actually be. The article itself only mentions the show once, in the middle, and mostly discusses women's lives in prison. It would be easy to believe that this is not an ad but an actual news story. It is both, and that's why it is nefarious. Is this ad easily recognizable? Even with the notice?
We can talk about dark patterns (native advertising might be one) and prevent many of them. Not allowing bait-and-switch tactics. Ensuring that options are clearly conveyed. I don't think it matters which side of the political spectrum you're on or what your philosophical ideals are: tricking people into buying things they don't want or need is not ethical. We live in a specialized world, and one person can't be an expert in everything. If the game is supercomputers and teams of psychologists and lawyers against individuals, then I think we all know this is an unfair game. We have to talk about how to level this playing field if we want to preserve individual freedoms and safety.
So I know this doesn't really answer your question, and the truth is that I don't have a good answer. I think the topic itself is surprisingly complicated and we need to think carefully about it. The path we're going down clearly isn't acceptable to most people, but overreacting would be similarly bad. We need to have a tough social conversation and figure out what we want together. We have to learn, a lot, because this is nuanced. We have to be open to being wrong, with a focus on learning and improving rather than asserting our positions (because they are all wrong in some form or another). That might be the hardest thing of all, but if we can do it then we can solve a lot more problems. Maybe this is the great filter?
While I think these are ideas worth considering, the answer is perhaps staring us in the face: put regulatory limits on the amount and kind of data that apps and websites can collect.
The real problem with TikTok is not the CCP; it seems likely to me that our own government employs equally nefarious techniques in other countries, and I think it's unfair to single out one company over this or any other behavior that is otherwise legal.
So cut them off at the knees: make the behavior of TikTok illegal, for them and for the thousands of other companies doing basically the same thing. Specifically, I mean the extra-application data collection, the cross-checking with third-party data miners (which should be illegal already), and the sorts of things we've just become accustomed to as par for the course.
> put regulatory limits on the amount and kind of data that apps and websites can collect.
Yeah, I would be in full support of this. I think people are playing with a double-edged sword and don't see the other edge. Any data that you use to control your own population can also be used by an adversary for the same purpose. The same is true of encryption. We have two competing forces in our own government: blue team and red team. But we know red team gets a lot more money and is a lot flashier. Focusing entirely on red team is fun and exciting, but it makes you a glass cannon.
You would have to make it illegal to show different content to different users. Get rid of "the algorithm" and every website becomes a simple catalog of content.
I also think if you do any moderation of content, you lose your "common carrier" status and become a publisher, responsible for any content you publish.
Has anyone ever considered making a social media site/app like FB, TikTok, Insta, or Twitter where the user can control the algorithm, or at least have some input into it, so that the user can "control" what they see? Still have ads (the company gets paid), but the user can control those ads to a certain degree [sort of like the Brave browser, but for social media].
Just wondering. I'm not saying data collection is good, but perhaps, if it were more transparent and interactive, people would be more accepting of using and capitalizing on their own data. Value for value: the user gets to decide what data to share, and the company gets to push ads based on a known algorithm unique to each user's approved data metrics... perhaps this already exists???
Is this a pipedream? Or a "if you build it, they will come" life-changing moment? I need to know; it's important I change my outfit if it's the latter. Athletic shorts and a t-shirt (in my opinion) don't convey much confidence when shopping around for angel investors... ;)
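The value-for-value model described above could be sketched in code. This is a minimal, hypothetical illustration (all the names and fields here are invented): the user whitelists which profile fields advertisers may target against, and the matcher ignores everything else.

```python
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    interests: set
    location: str
    age: int
    shared_fields: set = field(default_factory=set)  # metrics the user approved

    def visible(self):
        """Return only the data the user has opted to share."""
        data = {"interests": self.interests, "location": self.location, "age": self.age}
        return {k: v for k, v in data.items() if k in self.shared_fields}

@dataclass
class Ad:
    name: str
    targets: dict  # field name -> required value

def eligible_ads(user, ads):
    """Match ads only against fields the user chose to expose."""
    visible = user.visible()
    return [ad.name for ad in ads
            if all(visible.get(k) == v for k, v in ad.targets.items())]

user = UserProfile(interests={"cycling"}, location="Austin", age=30,
                   shared_fields={"interests"})  # only interests are shared
ads = [Ad("bike-sale", {"interests": {"cycling"}}),
       Ad("local-gym", {"location": "Austin"})]  # filtered out: location not shared
print(eligible_ads(user, ads))  # ['bike-sale']
```

The point of the sketch is that targeting becomes an explicit, user-auditable contract rather than something inferred behind the scenes.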
So distinguish between "you're seeing this content because a company paid us to show it to users like you" and "because users like you watch similar things." What if someone pays to have similar users shown things that give a certain impression? The advertiser didn't create the content or even choose what content. Is it an ad?
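The distinction drawn here can be made concrete with a provenance tag on every feed item. This is a hypothetical sketch (the names and the disclosure policy are invented): each item records why it was surfaced, and one possible rule is that any paid influence on the feed, including the grey "paid audience" case, gets an explicit ad label.

```python
from enum import Enum

class Provenance(Enum):
    PAID_PLACEMENT = "a company paid us to show this to users like you"
    ORGANIC = "users like you watch similar things"
    PAID_AUDIENCE = "a company paid to reach users like you with similar content"

def needs_ad_label(provenance: Provenance) -> bool:
    # One possible policy: anything with money behind it is disclosed,
    # even if the advertiser neither created nor chose the content.
    return provenance is not Provenance.ORGANIC

print(needs_ad_label(Provenance.PAID_AUDIENCE))  # True
print(needs_ad_label(Provenance.ORGANIC))        # False
```

Whether the third case "counts" as an ad is exactly the policy question the comment raises; the code only shows that, once provenance is recorded, the disclosure rule is a one-line decision rather than a technical obstacle.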
How would you enforce that? Without open-sourcing it, you'd have no way of knowing why a thing was recommended, and giving access only to the government is not possible.
It's possible in some far future. Just rewire people's brains to ignore any kind of bias and susceptibility to manipulation. This new society would be 1000x better than what we have now.
[0] https://www.nytimes.com/paidpost/netflix/women-inmates-separ...