It is though, because no one is a victim in that transaction. Nobody lost anything and nobody suffered any loss of potential profit because they actively refused said profit.
The `Memory Pit` exploit for the Nintendo DSi works in a similar way - it exploits a buffer overflow in the reading of image metadata by the Nintendo DSi Camera application to achieve arbitrary code execution.
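The general shape of that bug class is easy to illustrate. Here's a minimal C sketch of the vulnerable pattern (an unchecked length field copied from file metadata into a fixed-size buffer); the function and field layout are made up for illustration, not the actual DSi Camera code:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical sketch of the bug class, NOT the real DSi Camera code:
 * a length taken straight from the file is trusted when copying into a
 * fixed-size stack buffer, so a crafted image can write past the buffer
 * and redirect execution. */
void parse_image_comment(const uint8_t *field)
{
    char comment[64];                                  /* fixed-size stack buffer */
    uint16_t len = field[0] | (uint16_t)field[1] << 8; /* attacker-controlled     */

    /* BUG: len is never checked against sizeof(comment) */
    memcpy(comment, field + 2, len);                   /* overflows when len > 64 */
}
```

The fix is the boring one: clamp or reject `len` before the copy.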
This can only ever be opt-in if you want to stay on the legal side of the GDPR (and equivalents in other jurisdictions). You can ask, but the default needs to be "no" if no answer is given.
I provide telemetry data to KDE, because they default to collecting none, and KDE is an open-source and transparent project that I'd like to help if I can. If I used your app, I would be likely to click yes, since it's open-source. Part of the problem I have with projects collecting user data is the dark patterns and illegal opt-out mechanisms some of them use, which make me decline telemetry every time, or even ditch the app for an alternative. An app that asks:
Can we collect some anonymized data in order to improve the app?
[Yes] [No]
...with equal weight given to both options, is much more likely to have me click Yes than one where one button is big and blue whilst the other choice is in a smaller font and "tucked away" underneath it (or worse, in a corner or hidden behind a sub-menu).
Plus, I would think that SOME data would be better than NO data, even if there's an inherent bias leaning towards privacy-minded/power users.
> This can only ever be opt-in if you want to stay on the legal side of the GDPR
The GDPR only applies to personal data. You can collect things like performance data without opt-in (or even an opt-out option) as long as you are careful not to collect any data that can be used to identify an individual, so no unique device IDs or anything like that. Of course, you should be transparent about what you collect. You also have to be careful about data points that are innocuous on their own but can be used to identify a person when combined with others.
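As a rough illustration of what that might look like in practice (the field names are mine, not from any real schema), a record like this stays clear of anything that identifies a device or person:

```c
#include <stdint.h>

/* Illustrative only: a telemetry record with no personal data. There is
 * no device ID, no IP address, no account name, and each field is coarse
 * enough that it doesn't single anyone out on its own. */
struct telemetry_event {
    char     app_version[16]; /* e.g. "2.3.1"                          */
    char     os_family[16];   /* e.g. "linux", not a full uname string */
    uint32_t startup_ms;      /* performance metric                    */
    uint8_t  crashed;         /* 1 if the previous session crashed     */
};
/* Caveat from the paragraph above: even coarse fields can identify
 * someone when combined (rare OS + rare locale + precise timestamps),
 * so the combination has to be audited, not just each field. */
```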
I think that isn’t it, because it would be easy to say something like “we can’t verify the claim that it is privacy respecting so we should assume otherwise.” Which is a totally reasonable position to take.
I think it is important to be specific, clear, and to have evidence if one wants to call somebody a liar, though.
Or maybe it is something else; it could be interesting if they have some other definition of “privacy respecting” that precludes closed source apps, for example. That is, to “respect privacy” could be understood to mean providing users with verifiable evidence that their private info isn’t compromised. I think this isn’t the conventional definition of privacy respecting, but I’m definitely ready to be pulled on-side if anybody starts pushing it.
Not really, not anymore. Many apps are now using certificate pinning, which makes modifying the trust store pointless: the app ignores user-added CAs, so you can't intercept its traffic to inspect it. This means that unless it is open source, it is very difficult for people to verify, even when they know very well what they are doing.
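For the curious, this is roughly what pinning looks like from the app's side; a sketch using libcurl's public-key pinning, where the URL and hash are placeholders:

```c
#include <curl/curl.h>

/* Sketch of public-key pinning with libcurl; URL and hash are placeholders.
 * Even if the user adds their intercepting proxy's CA to the system trust
 * store, the transfer fails unless the server presents this exact key. */
int fetch(void)
{
    CURL *curl = curl_easy_init();
    if (!curl)
        return 1;

    curl_easy_setopt(curl, CURLOPT_URL, "https://api.example.com/telemetry");
    curl_easy_setopt(curl, CURLOPT_PINNEDPUBLICKEY,
                     "sha256//placeholder_base64_hash_of_server_pubkey=");

    /* Fails with CURLE_SSL_PINNEDPUBKEYNOTMATCH when a proxy is in the way. */
    CURLcode res = curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    return res != CURLE_OK;
}
```

Since the pinned hash ships inside the binary, the user can't swap it out without patching the app itself, which is exactly why a user-modified trust store gets you nothing here.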
Yes you could, although the bar is still a lot higher than if it's open source. You will have to fully re-test all possible paths in the app every time a new release is made if it's closed source. If it's open, you just need to look at the git log.
Plus, if there is even one legitimate network call, then this strategy is out, since you can't know what that request contains. OP is using in-app purchases, so I'm willing to bet there's at least one network call in there.
If there is no network access permission at all, then I think we agree, that's a reasonable guarantee.