> [access to use customer data...] *to provide the Services and perform its obligations under this Agreement.*
"Services" – which you'll note is capitalized... lawyers do that for a reason – has a very specific meaning that very obviously does not include "whatever the fuck Google wants to do with it", nor "training general purpose AI models" in particular.
Why are you intentionally and blatantly misinterpreting Wiz's policies? Or are you just that good at ignoring/missing details in order to weave the story you've already decided to believe?
The person above you was correct: microaggressions are a real thing. A microaggression is defined as "a statement, action, or incident regarded as an instance of indirect, subtle, or unintentional discrimination against members of a marginalized group such as a racial or ethnic minority."
There are psychologists today who think that higher IQ is a cause of greater wealth and success. I wasn't referring to the original incarnation of IQ theory, which was designed to identify children who needed more attention from their teachers. It then morphed into a justification for racial discrimination. I still hear concepts like country IQ and racial IQ being brought up in conversations. Admittedly, the people talking about this are non-psychologists, but some of them appear to reference the psychology literature.
You're conflating psychologists and psychiatrists. Also, that sort of thing was used to rationalize eugenics a century ago - I'd be very interested if you could find some mainstream psychologists/psychiatrists who espouse those beliefs.
Edit: I see you added "Admittedly, the people talking about this are non-psychologists, but some of them appear to reference the psychology literature."
That's like arguing that anti-vaxxers are good scientists because they sometimes reference the literature.
Some of this is definitely good. Requiring companies to use plain and truthful language when describing privacy-affecting settings is a great step, and apps/services shouldn't try to hide the "continue without enabling" button.
With that said, requiring different rules for children seriously increases barriers to entry for new services hoping to attract users in areas where these rules take effect, and kids will ALWAYS find ways around it.
Additionally, nudges like the Like button or Snapstreaks, though they encourage daily active use and a potentially unhealthy relationship with technology, also encourage some actual social interaction between people. It's certainly more complicated than "these are bad!"
I don’t think that’s necessarily the case. Until about a year ago I was one of the under-18s, and although I’ve never had a “streak” or used social media much in general, I do have some experience with these kinds of things because I see my friends and peers partake in them. What I mostly see is that the upkeep of a streak consists of sending a black photo with some text on it like “goodnight” to dozens of people. Not much comes from it other than maintaining the streak.
So not a lot of social interaction in that case; more a reward for substanceless and ultimately unrewarding behaviour. What’s more, I saw my peers getting distracted from actual social interaction IRL by these kinds of things.
I don’t think social interaction has anything to do with things like streaks or other addictive nudges. I think the most that is needed for social interaction is a chat client (or voice, for that matter) and the ability to send photos or use a webcam. It shouldn’t be more than that. Other things often get in the way of real social interaction, be it online or offline.
I'm definitely guilty of the blank picture solely for streaks. For me, the value comes from a feeling of connection when keeping a streak, mostly with friends I barely ever see. There are people that I've only really met once at events etc., but we
have remained acquaintances through these 'substanceless' conversations over Snapchat. If I ever travel and end up near these friends, I wouldn't hesitate to message them to hang out, whereas if we had never started a streak I'd most likely never see them again.
I've never liked keeping in touch with people over the internet, but 'streaks' lets me do it with dozens of people without sinking in hours of my time for conversation.
Do they really encourage social interaction? From what I can tell, they're pretty much just an addictive feature.
If you want to encourage social interaction, promote less popular content. If you promote more popular content, then it becomes a popularity contest. Services want more eyeballs on their platform, so there's no way they'd promote less popular content because people will switch to a platform that gives them the reward for being popular.
It always ends up with "we do something to make your experiences better", "to help us continually protect and improve your experience", etc.
I'd rather they explained it along the lines of: "we're a corporation, we have a share price to sustain and employees to pay, so we're going to milk you and your personal information for our sole benefit. You should know that it's better for you not to make it easy for us to do so."
Or they can just ask? I would have no trouble helping my son work around dumb rules. "Here's another one of those situations where lying is appropriate. I hope you're learning to be a good judge of when it is and isn't. Yes, adults are pretty stupid sometimes."
As his guardian, you limit your son's ability to legally retaliate in the future against possibly coercive actions by said corporations, whose sole intention is to form an addictive dependency in their users.
It's like buying your son cigarettes. This shifts the responsibility from the supplier to the enabler (you).
Will we in the near future reflect on this digital period, similar to how we now reflect on the era of dominance of cigarette manufacturers?
I was triggered by "work around dumb rules". Large institutions, advised by legal and behavioural experts and backed by decades of research, have come to the conclusion that the effects of persuasive design on children should be severely restricted. Taking the time to reflect on both sides of the argument, without dismissing it as 'dumb rules', would be my interpretation of responsible guardianship.
As we speak, persuasive design experts applying behavioural psychology are wreaking measurable, wide-scale deleterious effects on our youngest generation.
Yes, I agree we should teach our children about these systems of control. At the same time, we should acknowledge that the effects are spanning wide ranges of our population, regardless of age, sex, income class and education level. Our society needs stronger controls to counter decades of psychological and sociological research, designed and funded for offensive purposes (psy-ops).
On the one side we have corporate interests looking to exploit everything they are not barred from exploiting and piously declaring that not exploiting things they are not barred from exploiting is failing in their mission.
On the other side we have governments trying with varying degrees of success to add regulations on this stuff that most lawmakers do not understand.
As a parent, you cannot just rely on the government. Some regulations are good. Some don't go far enough. Some are kind of dumb.
There’s never been anything like the mass psyops of the past 100 years, and it’s not clear that such toxic memetic behavior can coexist with society. We’ve only seen three generations of impact, but it seems the deleterious effects are outpacing our ability to adapt.
It’s possible PR will trigger a memetic mass extinction event.
I’d encourage people here who work in marketing, public relations, or social media to seriously consider the systemic effects of what they’re doing.
At least one of the age verification services being tested just validates that the card number is formatted correctly (Twitter users have found that out-of-date cards work very well). No need to sneak the cards when you can just google one, tbh.
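For context on why that kind of check is so weak: if a service only validates that the number is "formatted correctly", it is presumably doing little more than a length check plus a Luhn checksum, which any syntactically plausible number passes. A minimal sketch in Python of such a format-only check (the Luhn assumption is mine; the actual service's logic isn't public):

    def luhn_valid(card_number: str) -> bool:
        """Return True if the digits pass the Luhn checksum (a format check only)."""
        digits = [int(ch) for ch in card_number if ch.isdigit()]
        if not 13 <= len(digits) <= 19:   # typical card-number lengths
            return False
        total = 0
        for i, d in enumerate(reversed(digits)):
            if i % 2 == 1:                # double every second digit from the right
                d *= 2
                if d > 9:
                    d -= 9
            total += d
        return total % 10 == 0

    # An expired or publicly known test number passes just as easily as a live one;
    # the checksum says nothing about the cardholder, let alone their age.
    print(luhn_valid("4242 4242 4242 4242"))  # True

Nothing in that check ever contacts the card issuer, so expired, revoked, and made-up-but-valid numbers all sail through.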
I would NEVER put an Alexa in my home - but has Amazon done anything wrong here?
You agreed to the terms of use, what they’re doing is expected by anyone who has any idea how this all works, and there seems to be no malicious intent even implied. IDK, if you don’t want someone potentially listening to you, don’t put an Echo in your home.
There's the contract that was actually signed, which Amazon didn't break, and then there's the contract people think they signed. It's generally considered your fault when you sign a contract with the devil and get tricked, but the devil is still evil. In other words, Amazon is evil in this situation and their customers made a mistake. If it had been made clear to non-technical people that they would get listened to by the listening device, then it would all be on them. However, this comes as a surprise to the average person, because it doesn't take that much work to encourage a false belief in the mind of a non-expert.
The headline in this case is that humans are also involved in the listening, which is technically advantageous but not technically necessary. It is also worth mentioning that the recording gets uploaded out of your house, which is again technically advantageous but not technically necessary.
I disagree that it is conceptually possible for most consumers to have an adequate understanding of future implications of this technology such that they can possibly give informed consent for this type of data capture.
There are just too many ways it can eventually be used maliciously, so much so that we can’t even contemplate or imagine them ahead of time. In that case, it should be illegal for a company to capture the consumer-generated private data, much the same as age of consent laws... even if a minor appears to have a self-aware and complete understanding of something they are consenting to, it is just legally defined to be impossible for the minor to actually be capable of granting consent.
The same should be true for collecting user behavior and non-aggregated user activity data, across the board. “Consenting” to it, whether explicitly or through site usage terms or EULAs, is just not a logically possible thing.
I somewhat disagree. In a free democratic society every citizen should be able to operate with the assumption that very invasive practices are banned, unless they opt in with clear intent.
Most people who got an Alexa probably have no idea how it works or whether it is invasive, and they quite certainly operate it under the assumption that no real human will listen in on the conversations they have in the presence of their electronic assistant.
Unless Amazon made a strong effort to communicate this in the clearest possible way (e.g. by putting an "our employees may listen in on any conversation" notice front and center everywhere they sell it), they are in my opinion at least guilty in an ethical sense here. Legality is a different thing; you can be operating totally legally while still being ethically wrong.
>In a free democratic society every citizen should be able to operate with the assumption that very invasive practices are banned
The words "free society" and "everything you don't agree with should be banned" seem like opposite things.
The reality is that if you put an always-on microphone in your home, you should absolutely expect that someone could be listening. At no point did they promise you that wouldn't happen.
> what they’re doing is expected by anyone who has any idea how this all works
For sure. But lots of people don't know how this stuff works and shouldn't be expected to.
I wonder how many average consumers of Alexa products would feel differently about their purchase if it said on the box "some people at amazon are likely to listen to what you say". Sure, everyone agrees to the terms of use, but that doesn't seem to count for much in the way of "informed consent" these days. Maybe to the letter, but not the spirit.
>But lots of people don't know how this stuff works and shouldn't be expected to.
How does caveat emptor not apply here? If you put an internet-connected microphone in your home, it seems very reasonable to assume someone could listen in on it.
I don't see it as a wrong or right, more so a matter of informing the public. Everyone suspects that those devices are listening to them but it's usually brushed off as tin-foil hat stuff. Stories like this are just making people aware of the reality of these technologies. You decide if it's acceptable or not.
Any time Alexa is lit up, it may be listening to you. Any time Alexa is not lit up, your conversations can't be overheard. The tin foil hat aspect comes in when people say it's listening ALL THE TIME, which it isn't.
None of my VERY non-technical family think that their assistant devices aren't transmitting recordings of their voices to some remote server somewhere, because it's obvious enough. Even my 71 year old grandfather understands this.
You will be surprised how many customers don't expect it to happen. You only know it because you have an idea of how the whole system works. Almost every user just agrees to terms of use without reading it.
On an iPhone, you can tap the power button five times consecutively to disable biometric access. However, I think the only way to do this on a MacBook with Touch ID is to turn it off.
The fact that this happens at all should be constantly in the news. This is unconstitutional treatment of a US citizen and CBP clearly needs oversight.
What's unique about Grindr is that it's a safe bet that the majority of users have sent and received explicit photos. So if you want to find a congressman's dick pics that his wife doesn't know he's been sending to men... Grindr is probably a good place to do it.
"Services" – which you'll note is capitalized... lawyers do that for a reason – has a very specific meaning that very obviously does not include "whatever the fuck Google wants to do with it", nor "training general purpose AI models" in particular.
Why are you intentionally and blatantly misinterpreting Wiz's policies? Or are you just that good at ignoring/missing details in order to weave the story you've already decided to believe?