drglitch's comments | Hacker News

In an earlier life, I authored this exact product (including distributed deployment) for a large investment bank and a fintech startup :)

This is immensely valuable to a lot of Python devs, especially in finance/quant community. Great work!

Edit: The reason why this is so impactful is b/c most users have no idea how to build a React app (along with the circus of a build/deploy pipeline that typically entails). They just want a quick-and-dirty UI for the Python model they developed in Jupyter or similar. If the model has legs, a separate FE dev would build a fancier front end later.
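
For illustration, here's roughly what that quick-and-dirty workflow looks like. This sketch uses Streamlit's public API as one example of this style of library - whether it matches the product being discussed is an assumption on my part, and the toy model is made up:

    # app.py -- a minimal sketch of the "quick-and-dirty UI" workflow.
    # Assumes Streamlit (pip install streamlit); run with `streamlit run app.py`.
    import numpy as np
    import pandas as pd
    import streamlit as st

    st.title("Toy quant model")

    # Sliders replace the parameters a quant would otherwise tweak by hand in Jupyter.
    drift = st.slider("Annual drift", -0.05, 0.05, 0.01)
    vol = st.slider("Annual volatility", 0.01, 0.50, 0.20)
    n_days = st.slider("Days to simulate", 30, 365, 250)

    # The "model": a toy geometric Brownian motion price path.
    daily_returns = np.random.normal(drift / 252, vol / np.sqrt(252), n_days)
    prices = pd.Series(100 * np.exp(np.cumsum(daily_returns)), name="price")

    st.line_chart(prices)
    st.write(f"Final simulated price: {prices.iloc[-1]:.2f}")

No build step, no bundler, no separate front-end repo - which is the entire appeal for a model that may never graduate past the prototype stage.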


Thank you! Glad the finance/quant community can find it useful!


Wouldn't Django or Flask with Stimulus and Turbolinks be a better option? The hard work has already been done.


It is quite plausible that Russia would try to take down parts of Ukraine's internet, given everything going on.

Alternatively, it could simply be someone fat-fingering things, given the insane number of blocks that Roskomnadzor has been putting in today (Facebook, Twitter, etc).


I read an article (in Russian, will link later) outlining a plan to copy current BGP tables, update them so that the Russian internal internet space is encircled with government-controlled ASes, and filter or block any outside access, while making the BGP inside Russia look synchronized with the rest of the world. Of course this applies to all BGP announcements from outside the perimeter.

Not exactly a great firewall with packet inspection, but still something to prevent any possibility of accessing resources other than whitelisted ones, or of running a VPN to the outside.

Update: the article in question, Google-translated: https://whatisyournameinsider-com.translate.goog/politika/24...
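
For readers unfamiliar with what "filter or block any outside access" means in BGP terms, here is a toy sketch of whitelist filtering over AS paths. It is an illustration only, not the mechanism from the article; the AS numbers and data structures are assumptions:

    # Toy sketch: accept a BGP announcement only if its entire AS path stays
    # inside the "perimeter" (domestic ASes) or passes through explicitly
    # whitelisted foreign ASes. AS numbers below are illustrative assumptions.
    DOMESTIC_ASES = {8359, 12389}      # hypothetical "inside the perimeter" networks
    WHITELISTED_FOREIGN = {15169}      # hypothetical selectively-allowed foreign AS

    def accept_announcement(prefix: str, as_path: list[int]) -> bool:
        """Return True if every AS on the path is domestic or whitelisted."""
        return all(asn in DOMESTIC_ASES or asn in WHITELISTED_FOREIGN for asn in as_path)

    print(accept_announcement("198.51.100.0/24", [12389, 8359]))   # True: stays inside
    print(accept_announcement("203.0.113.0/24", [12389, 3356]))    # False: crosses the perimeter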


If Russia is preparing to up its information warfare game, does this give them better defenses against inbound attacks, while letting the FSB selectively allow outbound attacks?


This reads like a machine that has DEP enabled and is getting locked remotely ;)

On a more serious note, I've run Macs on multiple occasions without an Apple ID - it presents maybe one nag a month, usually when you accidentally open “Messages”.

Microsoft, sadly, has also been increasingly annoying about pushing online accounts on people’s machines lately.


Apple is doing the same now :/ the setup wizard tries hard to keep you from skipping it


My experience is that the popup appears every time I unlock the computer, but it's not modal / blocking, so it can be ignored - but it's really annoying.


Who needs SWATing when you can send a CP pic (either real, or with a hash collision as per the thread a few days ago) from a virtual overseas number/service and get an FBI van to show up as well?

What about injecting code into a public website to download the same pic into the local browser cache without the user’s knowledge?

The simplicity of the attack vectors here that would trigger the “manual” investigation is just dumbfounding and ripe for abuse/misuse.


The reported response from Apple offers little reassurance:

> The executives acknowledged that a user could be implicated by malicious actors who win control of a device and remotely install known child abuse material. But they said they expected any such attacks to be very rare and that in any case a review would then look for other signs of criminal hacking.

What triggers them to look for signs of criminal hacking?

Does every manual review process involve such checks?

Are they searching device backups for indicators of compromise [IoC]?

What if there's no device backup or device image to scan?

What if the scan fails to notice IoC?

What if the device was compromised after the last backup?

What if the device was compromised via physical access?

What if the device isn't compromised and the material was pushed maliciously or via drive-by download?

It's dangerous to assume that all material on a network-connected device arrived with the consent of the user when it can accept incoming messages from strangers, trick people into downloading files, or be compromised without your knowledge.

“That isn't mine” is going to be a tough defence if you can't even take measures to log where content came from.

Client-side scanning seems to amplify this issue (which could still happen with cloud storage) because at least cloud storage doesn't generally ship with or integrate deeply with messaging apps, social media, a web browser, QR codes, App Clip Codes[1] etc.

The impact might be fairly low right now with the current proposal (images would have to be uploaded to iCloud, so cached browser images don't get scanned as far as we know), but the existence of the non-consensual scan in the first place is worrying, because it means such attacks are only a policy change away.

[1] : https://developer.apple.com/design/human-interface-guideline...


> The executives acknowledged that a user could be implicated by malicious actors who win control of a device and remotely install known child abuse material.

Since Google has been scanning your account for kiddie porn for the past decade, wouldn't this apply equally to Google accounts?

>a man [was] arrested on child pornography charges, after Google tipped off authorities about illegal images found in the Houston suspect's Gmail account

https://techcrunch.com/2014/08/06/why-the-gmail-scan-that-le...

All people have to do is email you kiddie porn and Google will have you arrested?


No, but the person who sent that message could get in trouble.

In the case you linked to the person was reported for sending email to a friend with attached CSAM, not for receiving it.[1]

Apple's system scans images client-side if they're due to be uploaded to iCloud. That process can happen without user consent or action. For example, WhatsApp and other messaging apps save images to photos, which are auto-synced to iCloud. (If you use WhatsApp and iCloud you'll find your Photos section full of memes from WhatsApp group chats when you log in at icloud.com, for example. This was a surprise to me at first.)

So the risk of malice seems higher with Apple's system than with the long-running PhotoDNA implementations backing Gmail/Google Drive/OneDrive etc.

Gaining access to someone's email and sending attached CSAM is likely to cause them more issues than receiving it. But that's harder because you need their login info and not just their email address/phone number, which is all that an attacker potentially requires to trigger action from Apple's automated scans.

[1]: https://nakedsecurity.sophos.com/2014/07/31/google-tips-off-...

> The investigation was apparently sparked by a tip-off sent by Google to the National Center for Missing and Exploited Children, after explicit images of a child were detected in an email he was sending.


> No, but the person who sent that message could get in trouble.

Is there some reason to imagine the person sending the message couldn't do so with burner email accounts or by abusing open/vulnerable email servers?

Has Google suddenly prevented spam from landing in your spam folder without anyone noticing?

It's much simpler to send email than it is to take control of someone's device.


Right, the sender isn’t going to use their own email address in an attempt to incriminate you. My point was that receiving material by email from a stranger doesn’t make you liable for its contents (unless there is a record of you requesting the content). It makes the sender liable (if they can be traced).

Apple’s approach does not seem to provide the same safeguard. Your account will be flagged for review if there are n flagged images destined for upload on your device. The description of the process does not mention if or how provenance or intent to receive those images is established.


I mean, you'd think that would be how it works. But say a system found the image stored in your mail's temp directory and notified the police - do you think they would be that interested in finding the person who sent it? Or do you think they would just go, "You had kiddie porn on your phone, that's against the law. 30 years." Win.


Is there even a way to get iCloud or Google Photos on the iPhone to only upload photos taken with the camera, so as not to spam one's photo account with chat garbage?

I was trying to figure out a way, but got sidetracked on the issue; then my phone got stolen and I lost a bunch of family/baby pictures (thanks Google/Apple).


WhatsApp has a setting you can disable:

Settings → Chats → Save to Camera Roll

Not sure about other messaging apps.


Google Photos are also scanned. Just after they're uploaded.


How do we know they aren't?

Anyone with your credentials to social media/any cloud service like gmail could send CP on your behalf to get you flagged and interrogated.

Good luck mounting a defense against a subject this taboo. Even if you win it will follow you forever.


Gmail blocks incoming messages that contain CSAM, so you don't actually have this concern. It's similar to if someone tries to send you an email with an attachment that has a computer virus. It will never reach your Gmail account - not even your Spam folder.

(In the virus case, they also do a second scan when you open the message - with updated virus definitions to catch new viruses).


I thought google wasn't scanning your emails anymore.


Wasn't that claim limited to children's accounts?

Since Google is saving a history of what you purchase from third party merchants by scraping invoices and receipts sent to you through your Gmail account, it's safe to say that they are scanning your emails.

https://news.ycombinator.com/item?id=26248486


Google scans all photos in the cloud (Gmail, Drive, Google Photos) for CSAM and has for a long time. It just doesn't show contextual ads against email anymore, since those all sucked.


> “That isn't mine” is going to be a tough defence if you can't even take measures to log where content came from.

It's not a defense at all. This material is prosecuted under a "strict liability." It doesn't matter how you got it, you're liable.


> This material is prosecuted under a "strict liability." It doesn't matter how you got it, you're liable.

You're overselling it.

First, there is a statutory affirmative defense: if I obtain CSAM and "promptly and in good faith" delete it or report what happened to law enforcement, liability does not attach.

Additionally, federal laws are clear that you have to knowingly receive CSAM. That's not just a legal flourish or a word – knowledge is an element that a jury or judge will rule on. If I ask you to send me an illegal video and you do, we've both knowingly violated federal law. If you send me to a webpage that purports to offer me a job, but actually has images hidden with CSS to poison my cache, I've not knowingly received anything.


> knowledge is an element that a jury or judge will rule on.

And yet, I never want to be in that court case at all.


>> they expected any such attacks to be very rare

Very rarely will your life be completely ruined based on inaccurate information.


Something slightly different but very related happened to a senior police officer in the UK. She was sent a WhatsApp message by her sister depicting a horrific act of child abuse. It was captioned with a message asking people to circulate it to identify the adult in it, and those who sent it around (including the sister) were probably acting in good faith, but it was still illegal to send or even possess it. No doubt the originator of the caption was a deliberate troll.

She was found guilty of "possessing an indecent image of a child". [1] She tried to argue that she hadn't noticed the message, but it's not surprising that this wasn't believed, given that she had immediately replied to her sister saying "please call". She was sentenced to 200 hours of community service, and was originally sacked from her job but recently reinstated after appealing. [2]

It seems that she wouldn't have been in trouble for receiving the message ... so long as she had immediately reported her own sister for distributing it, even though it's clear that she hadn't deliberately done anything wrong. (In fact the sister had contacted her to ask what she should do about it. Probably the only acceptable answer was "don't have already sent it to me!")

[1] https://www.bbc.co.uk/news/uk-england-london-50476166

[2] https://www.bbc.co.uk/news/uk-england-london-57501764


> a senior police officer in the UK

To be fair, this is partially because the laws in the UK are, I think, fairly bonkers strict about CSAM - mere possession, whether you've looked at it or not, whether you downloaded it or not, whether you even know it's there or not, etc., is counted as criminal.


As I mentioned in the last paragraph, it seems that she would've been cleared if she'd been able to convince the jury that she didn't know it was there. And she would've been in the clear even if she had seen it, so long as she'd reported it (although that would of course have got her own sister in trouble even though the sister was acting in good faith).


The US is the same.


I believe this is incorrect.

> At the same time, because of the First Amendment, child pornography offenses are not "strict liability" crimes like statutory rape: in order to convict a defendant, the government must prove that the defendant knew the material involved the actual abuse of a child

https://www.zmolaw.com/child-pornography-faqs#

I've found similar claims on the websites of a few law offices. For some reason, the official DoJ materials are pretty cagey on the topic.


> "strict liability" crimes like statutory rape

That varies by jurisdiction. Some US states require criminal negligence or offer affirmative defenses with regard to the defendant's belief as to the victim's age.


Off topic

I think it's surprising that society does not want to talk about CP and is just content with locking up whoever they find and throwing away the key. Pretty shambolic response for something so common - no offense, but we spend way too much time and resources on arguably less pressing social issues instead of hard questions like CP and what causes it. Even the academic literature is sparse, but I would argue we need more people finding answers, and we might learn something about the human condition - rather than putting so much money and intellectual capital into crap like cyber bullying or transgender pronouns or mental health. Not that those aren't important, but they are low-hanging fruit. We need to get our priorities straight. Tackle the hard questions instead of this absurd head-in-the-sand approach to uncomfortable topics. FFS.

Rant over


This will cheer you up - they're trying to fire her again!

https://www.bbc.co.uk/news/uk-england-london-58072822

This is another instance:

https://www.bbc.co.uk/news/technology-57156799


Thanks, I hadn't seen that. How soul destroying.

(It's a pity your comment was downvoted when it was the only meaningful reply. As always, we'll never know why. Maybe the downvoters didn't get the sarcasm. Or maybe they think handing your sister to the police when she asks for your help is the right thing to do...)


Yes. This reminds me of when typing or receiving certain text would make an iPhone crash. But now, having your account deleted is the "feature". For example, WhatsApp automatically downloads media to the camera roll, which then gets uploaded to iCloud. Of course that can be turned off beforehand, but this is like what happens with backups: people want to back up, but don't invest the time in it - until it's too late, they've lost their data, and now they want their stuff back.


Backup is a good point:

- Apple: “Backup your phone to iCloud, it will be safe there.”

- 5 minutes later: “We’ve wiped your account because of a photo of (porn actor here), which is not CP, but she was technically a minor at the time it was filmed.”

- “Also, we’ve wiped your iPhone because we couldn’t knowingly let you keep that. Good luck contacting your parents - we’ve deleted your contacts. PS: We’ve reported you to the police.”

- Also you can’t connect to your iMac now.


Or photos of your own children.

We have a Tumblr set up for family to view pics of the kids. Several photos and videos of our kids when they were under 2 were taken down either temporarily or permanently by their CP algo.

These were a pic or video of kids in the bath or without a shirt. In none of them could you see bum or bits. Just a semi naked baby.

Algorithms like this get things wrong all the time


This is not the kind of algorithm that Apple will be using. That one only scans for already-known CSAM in NCMEC's database.


Quite funnily and disturbingly, one of the databases of "known CSAM" hashes also apparently includes a picture of a clothed man holding a monkey[1]

[1]: https://www.hackerfactor.com/blog/index.php?/archives/929-On...


That was just an MD5 collision - an image that has the same MD5 hash as some other image (in this case some CP). This is an uncommon yet possible thing - see this example[0].

[0] https://natmchugh.blogspot.com/2014/11/three-way-md5-collisi...


I think a flawed process where the monkey image ended up in the database is more likely than a random unintentional hash collision.


Not really. MD5 is thoroughly and completely broken, and has been for years. You can modify an image to be an MD5 collision for another image.


No you cannot. A collision requires the attacker to create both images.

What you are describing is a second preimage attack-- creating a second input with the same hash as a target.

There is no currently known tractable way to create second preimages for MD5.


Yeah, vaguely talking about MD5 as "broken" is common and misleading. There are very particular known attacks.

Obviously nobody should be using MD5, but it can be useful to understand there are circumstances where it's basically reliable unless you have an extremely sophisticated attacker.


That would be an intentional collision. An unintentional collision remains unlikely for a cryptographic hash.


Not just unlikely but astronomically unlikely.


Yes, hash collisions definitely occur. There is no such thing as collision-free hashes, and MD5 is definitely broken.

Even though the author says there were 3 million MD5 hashes the second time, the first time he calls them SHA1 and MD5 hashes (even though SHA1 is considered weak too).

I wonder what kind of hashes Apple is planning to use. Will it be whatever is made available to them or will they only accept (what is now considered) secure standards?


Which may contain the hashes of their photos: since they've been taken down in the past, they have probably been added to certain blacklists that may have been integrated into the black box of NCMEC's database.


Photographs of your naked child in the bath are not illegal, are not CSAM, and are not going to be in the NCMEC's database.


NCMEC's CSAM database already includes images that are not necessarily illegal. If _your particular_ photos have been flagged in the past, they may well be part of the database.


> NCMEC's CSAM database already includes images that are not necessarily illegal.

How could this be the case? If it's been determined to be CSAM then it is, by definition, illegal.

If it were true that the database is likely to contain legal material, how would we possibly know about it, given that the contents of the database are secret?


> How could this be the case? If it's been determined to be CSAM then it is, by definition, illegal.

Certain images are CSAM by _context_. They do not necessarily require those within the image to be abused, but rather that the image at one time or another was traded alongside other CSAM.

> If it were true that the database is likely to contain legal material, how would we possibly know about it, given that the contents of the database are secret?

Tools like Spotlight [0] make use of the database, so certain well-known images are known to flag. Such as Nirvana's controversial cover for Nevermind.

[0] https://www.wired.com/story/how-facial-recognition-fighting-...


> Certain images are CSAM by _context_. They do not necessarily require those within the image to be abused, but rather that the image at one time or another was traded alongside other CSAM.

At the risk of sounding like a broken record, how can we know this is actually true? Every description of the NCMEC database's contents that I've seen is incredibly vague, and as of 2019 it seems like there were fewer than[1] 4 million total hashes available. I would think that if it genuinely did include innocent photos of people's kids, the number would be much higher.

> ...certain well-known images are known to flag. Such as Nirvana's controversial cover for Nevermind.

I've heard this multiples times now, but I've never been able to find any evidence of it actually happening. The only instance I could find was one where Facebook removed[2] that Nirvana cover once for containing nudity.

1. https://inews.co.uk/news/technology/uk-us-collaborate-crack-...

2. https://www.theguardian.com/music/2011/jul/28/facebook-nirva...


Interesting random data point, I just checked Apple Music and the Nevermind cover art is not censored.


If you're sending other people photos of your children that are explicit enough to prompt someone to bring them to the attention of child safety groups like NCMEC, and they look at them and agree it's worth their time to investigate, the first you hear of it isn't likely to be after it eventually comes full circle through Apple's CSAM processes.

Remember, this isn't a porn detector strapped to a child detector.


Step 1: Get copies of pictures of the target's kid in the bath from their phone/SNS

Step 2: Manipulate the pictures so that their hashes collide with CSAM

Step 3: Get the pictures back onto the target's phone so they get scanned.

I don't have the skills or understanding of how the hashes are created but would this be possible?


Hypothetically that's possible, although all three steps you listed are exceedingly non-trivial. The notion that an attacker could pull off two of those steps, let alone three, is borderline fanciful. In addition, their target must also meet the necessary prerequisites:

• has an iPhone;

• has children;

• took photos of their children which could be mistaken for CSAM by a sloppy reviewer;

• is of sufficiently high importance to justify the effort.

And after that insane effort, all you've done is inconvenience your target for a little while until child safety people investigate your family situation and discover that the photos which got flagged were not actually CSAM.

Immediately after the investigation process discovers the hash fraud, Apple will start delving into exactly how their hash algorithm failed in this instance, improving it to mitigate this exploit. So this target better be worth it!

If this was a plausible exploit, surely it would have already happened to people with Android phones since Google has been doing pretty much the exact same scanning of customer images for over five years. (The only difference with what Apple is now doing is where the hashing is performed—but this makes no functional difference to the viability of your hypothetical exploit.)


This isn't an ML algorithm. It's a hash. It only matches already known material.


It is a hash created with ML. So it’s both. But yes, it only matches already known material.


I haven't seen anyone claim that any of this algorithm was "created with ML". I'm interested in learning more so do you have a citation for that?

Regardless, it's not both. Setting aside how the algorithm was created, it's incorrect to say that an algorithm "created with ML" is itself an ML algorithm.

NeuralHash was so named because it was optimised to run on the Apple Neural Engine for the sake of speed and power efficiency.


It’s both because it’s a multi-step process.

The image is not fed directly into the hashing function, like taking an MD5 hash of a file or something.

Rather, the image is first evaluated by a neural net that looks at specific visual details, and has been trained to match even if the image has been cropped or anything like that. The results of the neural net evaluation are what is then input for the hashing function.

This is explained in detail in Apple’s documentation they released with the announcement.


This is why I just noped the fuck out of the Apple ecosystem. I won’t support anything which relies on opaque blacklisting to ruin lives.

In this example as well: with iCloud shared galleries, you can upload to other people's galleries that you have been invited to. What could possibly go wrong?


Who did you switch to? As I assume you are aware, Google has already been doing this, as has Facebook. Apple was simply the last of those to start doing it. Facebook reported 20 million instances of CSAM to NCMEC last year alone.


Who says you need any of the above? Cloud storage is overrated. When was the last time you lost files? I have stuff from an old Lexar jump drive from the early 2000s doing just fine.

For more sensitive materials, back up when the data changes and store it in a disaster-proof safe.

I think the fear of losing things is a problem. People take so many photos anyway and who even looks at all of them? Memories are great and we should cherish them but… this is one of those cases where folks don’t need to rely on big tech.


Or, just encrypt your files before sending them to sit on someone else's server.


Gasp, are you saying anyone worried about getting caught could simply encrypt their photos first and this system won't work? So: an extra step for the bad guy, a system that is invasive for all users, and a system that is easily avoided by the bad guy. What are you doing, Apple? This feels like a cheating partner here.


Then in this case you could still use an Apple device, considering that if you don't use iCloud Photos, there's no scanning on your device anyway. I think that radical stances and refusing dialogue, however critical, won't really get anywhere in this case.


No, someone can still attack you by creating an iCloud account and pushing CP. There is no way to mitigate such an attack after purchasing an Apple device, as far as I can tell. And Apple pretends their devices are secure, so they have an incentive not to discover compromised devices (as if they could), even though it's clearly a problem with Pegasus and probably many other non-consumer-grade exploits. I think the only answer is a phone that cannot back up to the cloud at all. Which is what I suppose I have to shop for now. Hopefully this attack hits some senator or Apple exec first. I don't want to back up my phone, and at this point I don't want a camera or location services. I want security, which Apple no longer offers.


>No, someone can still attack you by creating an iCloud account and pushing CP. There is no way to mitigate such an attack after purchasing an Apple device, as far as I can tell.

Could you elaborate? Totally unclear to me what kind of attack you're talking about.


I think they're saying that if someone can completely hack your phone so as to have remote control of it, they can sign you up for an iCloud account and add CSAM to it.

This seems... implausibly convoluted. If you have full remote control of someone's phone, Apple or not, you could do all sorts of incriminating things "as them", and I don't think Apple's new system noticeably increases your risk from this.


Like what? Buy illegal fireworks online?

It would take the flick of a switch for someone to ruin your life for a crime you could never explain yourself out of. Nobody will ever believe that you were framed because that means other convicted predators could also have been framed. As soon as your name hits an index-able news article, guilty or not, your life is over.

This is a blackmail machine.


Well, the obvious option if you've subverted someone's phone so you can do whatever you want with it, and have access to illegal stuff, would be to store it on the phone and submit anonymous tips about the person to the police. Or upload it to random image-sharing websites, or Facebook, or email it to their coworkers with some "I found this on X's phone and thought you should know" note attached, or whatever.

I'm just saying that actually getting the attention of authorities is the most trivial part of this suggested attack. Apple's new stuff is a vector for that, sure, but anyone who is in a position to exploit it could easily do so in other ways as well.


Nope, Apple announced this tech is coming to 3rd party apps via API.

iCloud was just the start, it wasn't the end.


Source?



Most of the HN crowd presumably isn't actually worried about CSAM detection itself - it's the on-device scanning, where you lose control over your own hardware.


Exactly that.


Why would you use any of these ?


Linux, dumbphone (sms/calls only) Fastmail, no other cloud services.

I’ve been on the verge of doing this for a few years so had my exit strategy well planned.


What did you switch to? Google? Are they handling this issue any better? Or are you using a dumbphone?


Android without google services, using lineageos or calyxos.


There's also GrapheneOS, which excludes Google APIs completely and is additionally hardened down to its memory allocation implementation, at the cost of performance and app compatibility[1].

[1] "GrapheneOS vs CalyxOS ULTIMATE COMPARISON (Battery & Speed Ft. Stock Android & iPhone)", https://www.youtube.com/watch?v=7iS4leau088


> which excludes Google APIs completely

lineageos and calyxos should as well, unless you opt-in. I guess they would still use the google captive portal detection? Is that what you're referring to?

> and is additionally hardened down to its memory allocation implementation

That's really interesting. Do you use GrapheneOS? Is it easy to lock the bootloader on Pixel devices?


How well does this work as a daily driver? I heavily rely on my smartphone.


It is just Android minus the nosy bits, it works just fine. I've used AOSP-derived distributions since 2011 and never felt I was missing out on anything, au contraire. Longer battery life, no ads, no spying other than through the radio firmware (which is part of all devices from all manufacturers using all operating systems [1]), no nonsense.

[1] I seem to remember that RIM (of Blackberry fame) made devices which used combined radio and systems firmware so those would be an exception to this rule


It's all I've ever used. I think it works great but I think your experience will depend heavily on your expectations.

I don't use any proprietary apps and only install them from fdroid or build them myself.

But if you do, you're going to have a different experience. Let's say you want to run Whatsapp. From what I can tell you basically have three options:

1) Install google apps.

When you install your rom you will also download a gapps bundle and install it. This will be a very vanilla android experience but with the ability to uninstall whatever you want, root, etc. You can open the play store and install Whatsapp. Everything should work OOTB. However you're running all of the google service including google play services, so privacy-wise this is not significantly different than stock android.

2) Install microg

When you install your ROM you can also install microG. This is an install-time option in CalyxOS. MicroG replaces many of the Google APIs. You can install WhatsApp through Aurora Store, which can install apps from the Play Store. WhatsApp will use the microG FCM implementation. FCM is Google's notification service. It allows your phone to make a single persistent connection to receive notifications, allowing for better battery efficiency b/c you don't have many apps activating the radio. FCM just communicates that an app has a notification; it doesn't carry the contents of the message. Unlike Play Services, microG registers the FCM connection with an anonymous identifier.

So google knows your device is running whatsapp and when you get notifications, but not what they are.

3) No gapps / no microg

Don't do either of the above. You won't get push notifications with whatsapp. Many free/libre apps have alternative notification schemes involving separate persistent connections. This is less power efficient but works without involving google. I use Signal and Element like this and my battery still lasts >24 hours.

Several developments


I've used it as a daily driver for 2+ years now (LineageOS without gapps, or even microG). I use the F-Droid store for my app needs, and the occasional proprietary app I download with Aurora Store, or I use whichever APK hosting site seems the least shady. I sometimes use MS Teams - it complains on each start about needing the G framework, but works just fine regardless. I also played another game that had in-game purchases, and it worked fine until I opened the in-game store, when it froze. Otherwise perfectly playable.

From the f-droid store I use a ton of apps, games, mostly utilities. For navigation I like Organic Maps.


Could you or someone else say what are the better options in terms of hardware for this setup? Pinephone?


The Google Pixel phones are the easiest to run alternative Android ROMs on, because Google provides the sources and allows you to unlock the bootloader.

They also pay Qualcomm more so you can re-lock the bootloader.

The Pinephone is great but it's most appropriate for developers interested in linux phones at this time.


None of the systems, current or proposed, scans local files. They all work on cloud storage. You could simply not use iCloud and none of this change would affect you. Also, I don't believe anything in iCloud is end-to-end encrypted, so they could have scanned it at any time.


On-device hash generation is 'scanning local files.' The fact that this process is only initiated for files flagged for upload to iCloud doesn't change the fact that it is being done on-device, and it increases the capacity for surveillance significantly.


Yup. Good luck telling repressive regimes that the technology doesn’t exist. How is the hash list to be trusted, especially in foreign countries? Who will be reviewing the images in foreign countries?


A smartphone is not a requirement for life.


Neither is a car or a dishwasher. Yet, they are convenient to have.


Correct. Totally agree. It’s a convenience at most.


Neither is air travel. And we had the same arguments after 9/11 about the No Fly List and possible abuses. And the same reassurances.

Guess what?

Everyday people who didn't want to become informants:

https://www.cnn.com/2014/09/11/opinion/hu-shamas-no-fly-list...

https://www.nytimes.com/2020/02/24/us/supreme-court-case-no-...

https://ccrjustice.org/home/press-center/press-releases/laws...

>The lawsuit is brought on behalf of four American Muslim men with no criminal records who were approached by the FBI in an effort to recruit them as informants. Some of our clients found themselves on the No Fly List after refusing to spy for the FBI, and were then told by the FBI that they could get off the List if they agreed to become informants. Our other clients were approached by the FBI shortly after finding themselves unable to fly and were told that they would be removed from the List if they consented to work for the FBI.

Journalists

https://www.cnn.com/2008/US/07/17/watchlist.chertoff/index.h...

>A House representative said Thursday she is requesting an investigation after learning a CNN reporter was put on the federal no-fly list shortly after his investigation of the Transportation Security Administration.

Whistleblowers

https://www.latimes.com/archives/la-xpm-2010-apr-27-la-oe-ra...

https://whistleblower.org/in-the-news/buffalo-news-governmen...

>In my case, I started having trouble flying after I blew the whistle in the case of “American Taliban” John Walker Lindh, the first terrorism prosecution in the United States after Sept. 11. As the Justice Department ethics attorney in that case, I inadvertently learned that my e-mail records had been requested by the court. When I tried to comply, I found that the e-mails, which concluded that the FBI committed an ethics violation during its interrogation of Lindh, had been purged from the file. I managed to recover them from the bowels of my computer archives, gave them to my boss and resigned. I also took home copies in case they “disappeared” again. Eventually, in accordance with the Whistleblower Protection Act, I turned them over to the media when it became evident that the Justice Department withheld them from the court.


Maybe wait to see how it's implemented and how it works first?

I really think that the HN crowd is having a giant knee-jerk reaction to all of this.


It isn't exactly a knee jerk; it has been quite likely that this sort of thing would happen sooner or later.

This is just a good point at which to act, if anyone is going to do anything. Apple is going to start scanning my phone looking for reasons to put me in jail. I don't want my phone's CPU time spent looking for reasons to imprison me, and I don't want to be funding it either. This system will make mistakes.


Exactly. Despite countless occurrences of automated systems getting things wrong--there is no such thing as AI, remember, just fallible developers and their fallible formulae--somehow the naive continue to trust in these systems. It's insane, and those of us who do know how insane it is are left to pay the price for the naivete.


It's harder to take back policies like this than it is to object and get them stopped initially.

Also people have a habit of 'forgetting' about it later. Until stories of how it is misused are found. And then it's another attack vector we need to be conscious of.


And that’s how France still has VAT & income tax.

Income tax? Have to pay for that expensive WWI war effort, you understand? For all the good it did.

Same with the VAT. Have to rebuild after WWII, you understand.

We also have an "Exceptional and Temporary Contribution" (CET), recently renamed to "Technical Equilibrium Contribution" (still CET. Smart one, that one).

A funny one, for a change?

When the Germans invaded in WWII, they changed France's timezone to theirs. After the war, we still called it "the German time". There were talks of going back for a few years…

Guess who still has noon at 2pm in the summer, decades later?

Change, no matter how ridiculously small or sensible, even when nobody benefits from the status quo (i.e. the damn timezone), is horrendously difficult.

Thus one should always assume that once it’s here, whatever "it" is, it’s here to stay.


There is still one constant: how the state system cares for victims of child abuse is still the same as in WW2.

https://www.kansascity.com/news/special-reports/article23820...

You would think money would go into the "backend": caring for kids where the state is responsible for everything BEFORE more money goes into the frontend: finding more kids to throw into the hellhole that is child services.

Without the "backend" being in order and working well, raising well-educated, stable kids, the frontend is completely immoral. "Saving" kids from abuse only to throw them into a slightly different kind of abuse ... if any individual did that (e.g. someone marrying a person with kids, with that resulting in them abusing their new spouse's kids), it would be considered a despicable crime. Somehow child services, who do the exact same thing (and they use violence to do it), are not committing a despicable crime.

Somehow just because the state does it, makes such things all a-okay.

But frankly this is merely the hole in the justification, all this should merely tell you one thing: any government that doesn't work hard to fix the child services backend does not have children's interests at heart when making these sorts of laws (and mostly they're making budget cuts in the backend, of course). Because fundamentally these laws throw children into the child services system. THAT is the real effect these efforts have on the actual children behind this. THAT is what is meant by "saving kids".

And if that system is full of abuse, how is that any better than what paedophiles do? It's not.

Which means the state is not attempting to help abused or disadvantaged children. In fact, they're doing the opposite.


> We also have an "Exceptional and Temporary Contribution" (CET), recently renamed to "Technical Equilibrium Contribution" (still CET. Smart one, that one).

This is amazing


Oh, there’s a lot more where that came from.

Just an example. During a heatwave some summer over a decade ago, many elderly people died.

So what did the government do? They instituted a "day of solidarity", of course!

What does it mean? If you are salaried, then you get to work an extra day, during a holiday of your company’s choosing, and not be paid. The day’s salary will go to a public fund dedicated to helping promote the autonomy of elderly people. And your employer gets an extra day of employees supposedly producing value out of it.

Many people instead take the day, either on their paid leave or their Work Time Reduction days (RTT).

That’s on top of all the other social "contributions" (sounds better than taxes), of course.

Payslips used to be quite funny to decipher[1][2]. They’ve simplified those a bit since then; mostly by regrouping items[3].

[1]: http://cdn-s-www.ledauphine.com/images/F9FED7FA-778E-40CA-8F...

[2]: https://cap.img.pmdstatic.net/scale/http.3A.2F.2Fprd2-bone-i...

[3]: http://s-www.ledauphine.com/images/39A7BC0B-D6E2-456D-800D-5...


Devices betraying their owner to serve a remote master in ways the owner does not consent to is abhorrent, regardless of the purpose of such spying.


Presumably you consent by turning on iCloud Photo Library.


This will happen with or without iCloud; the photos in iCloud are already not end to end encrypted and could easily be scanned on the server side because Apple can read all of them today.

The only reason to do this clientside when the data is already readable on the server is to do it to images that aren't hitting the cloud.


> This will happen with or without iCloud;

You don't know that.

> The only reason to do this clientside when the data is already readable on the server is to do it to images that aren't hitting the cloud.

Or to eventually e2e encrypt all of iCloud. Or because Apple doesn't want to decrypt images server-side if they don't have to. Etc.

But the point is that currently, only photos that will be uploaded to iCloud Photo Library will be scanned. Making definitive points about possible future scenarios isn't particularly insightful, especially because the current system isn't much of a precondition of those scenarios.


None of this is happening "currently"; both of these claims are speculation about future changes based on Apple's statements.

Apple has made 3 announcements and released one research paper and held a press conference. Now we have to reconstruct what is likely going to be the truth from their carefully crafted statements.


With currently I obviously meant the system that's going to be deployed in the next major update.

The rest of my points still stand.


Yes, and I mean the same system, based on the same statements from Apple.

Clientside scanning will happen even without iCloud. Apple expects and pressures all users to use iCloud, defaults it to on without interaction or consent, and does not test the non-iCloud paths very well. You can't even set up a HomePod as a simple Wi-Fi speaker without iCloud.


> Clientside scanning will happen even without iCloud.

Again, you don't know that. "Scanning" (whatever that even means) non-iCloud photos would be completely pointless.

And you said:

> The only reason to do this clientside when the data is already readable on the server is to do it to images that aren't hitting the cloud.

Again, you don't know that at all. You present your speculation as the "only reason" with no knowledge at all.


> Devices betraying their owner to serve a remote master

This is the type of dramatic over-the-top reaction that I'm talking about.


It's an accurate and objective description of the situation; there is no opinion involved. If you think facts are over the top, perhaps the situation is actually outrageous.


Is this sarcasm?


No, I am sincere.


Yes I’m fully aware of all the flaws in the technical, ethical and political arenas.


I remember WhatsApp used to save each received image to the iCloud Photo Album. I remember one day going to my album and seeing several memes and pics I had received but never saved.

Having 3rd-party apps with access to the photo album be able to do that makes having iCloud a bit risky.


WhatsApp was my first thought as well. Any app that automatically saves photos to iCloud without user interaction is a huge risk.


You could completely wreck somebody's life with this. SWATing will look trivial in comparison.


This.

Combined with the unpatched remote-root-via-phone-number disclosed in the Pegasus leak this boils down to a single-click "destroy this person's life" tool.


You can already do the same. Send message then call the cops. Just because it's auto-detected now doesn't mean it wasn't possible before.


To be fair, SWATing kills people. Death is generally considered a non trivial and also life wrecking event.


Honestly I'd rather get shot dead by a SWAT team than implicated for something as atrocious as what this tool is looking for. I imagine many people with a family would feel the same way.

It's an abomination that will destroy innocent people. The engineers behind this no doubt think it's fool-proof because they believe they're leagues smarter than any of those pesky naysayers ("hey, we're Apple").

If we've learned anything about Apple this year (as if we needed the reminder), it's that their software is nowhere near as flawless as they seem to think it is.


If you assume that cops will just arrest people without doing any further research... Then yeah.


> If you assume that cops will just arrest people without doing any further research... Then yeah.

Like when they arrested & charged someone for a poor facial recognition match that never had a hope of passing human review? [0] Just glancing at the original photo would have stopped that. Or checking his rock-solid alibi. Neither of those things happened.

[0] https://www.wired.com/story/flawed-facial-recognition-system...


Oh, the police will get a search warrant, and find exactly what they were told would be on your device. The police aren't in the business of discovering your innocence. It's then up to you and your lawyer to prove you didn't put it on your device. Meanwhile your life will fall apart as you get fired, your wife divorces you, you lose all custody of your kids, etc.


None of those attacks would work against the system as described by Apple. The only photos scanned are items in your photo library prior to upload to iCloud. Your browser cache is not scanned.

Hash collisions would fail human review. About the only consequence I can think of for hash collisions is that the person at Apple who performs the human review step has a slightly nicer day because they were about to look at an image... and then it wasn't CSAM.


> Hash collisions would not pass the human review. About the only consequence I can think of for hash collisions is that the person at Apple who performs the human review step has a slightly nicer day because they were about to look at an image... and then it wasn't CSAM.

I truly wish I could subscribe to this optimistic view. Experience tends to show this to be unlikely.

Two factors combine against it: (1) there is no negative consequence (to the reviewer) for a mis-flag; (2) this setup is a tool, and like many tools, inventive humans will find a way to subvert it in the name of convenience. I am referring to NSLs from the U.S. Patriot Act as an example. Since CSAM is such a toxic thing (let's stipulate that CSAM itself is unequivocally bad), there is less tendency to examine it closely for, well, CSAM-ness.


Again, I'm only pointing out how this conflicts with Apple's description of their system. I'm in no position to know whether their description is accurate or how it will actually operate in the real world.

For the sake of argument, let's assume you're correct and Apple's review team are lazy shits who don't look at the images. Okay, so Apple then sends the report onto NCMEC. What are they going to do when they open the report and it turned out the images Apple reported were hash collisions?


My understanding (from someone who would know but said this in a Chatham House rules space) is that NCMEC is already incredibly underfunded, understaffed, and backlogged. Similar incentives apply to them. They're a nonprofit: a private organization who has significantly fewer dollars than Apple does.


The critical follow-up question is what do NCMEC do with their backlog? Unless they're dumping this backlog directly at the feet of law enforcement, I don't see how this changes the equation.


All watchers of Clara Morgan were watching what is legally categorized as “child porn” (= “any depiction of an individual under 18”).

And since “depiction” includes drawings, any consumer of hentai (sexually explicit manga) is hosting what legally passes as clear child porn.

I wouldn’t be surprised if 25% of the youth could be taken to jail according to the law, so, definitely, a learning period or warnings are required.

It’s akin to all the US adults who are registered as sex offenders because they peed in a park at night. Apple is clearly helping with law abuse here.


That may be true in principle, but irrelevant with respect to Apple's CSAM process. Unless the exact material is explicitly catalogued by NCMEC or another child safety organisation, there won't be a hash match.

This isn't a porn detector strapped to a child detector.


Do you have a source for a single person being required to register as a sex offender for peeing in public?


Peeing in public is often charged as indecent exposure, which can have you forced to register as a sex offender. [0][1]

It doesn't take long to find those cases.

[0] https://www.nevadaappeal.com/news/2021/mar/21/public-urinati...

[1] https://law.justia.com/cases/california/supreme-court/3d/10/...


You are rather desperately trying to downplay a massive security fuckup by Apple here, as if it's perfectly fine. One of the main selling points of Apple, heck for many the most important one, was just blown to pieces a couple of days ago. It's NOT okay for Apple to send your images onward.

The only argument left missing here is 'you have nothing to hide anyway, right?'.

I would be able to accept an inferior OS incapable of true multitasking and with very limited options to configure. A closed system with no sideloading. I would even accept a lousy zoom on flagship cameras compared to, well, any of the competition. A proprietary connection port. Mediocre battery life. Overpriced accessories. But start removing security, and that's one step too far.


I was assuming for the sake of argument. I am not saying that a "major fuckup" of Apple's human review process would be acceptable.


> Hash collisions would fail human review.

This (pervasive, over the past couple of days) idea that Apple (of all major tech companies, lol!) will be capable of manually reviewing tens of thousands of automated detections per day is... nuts.

The "system as described by Apple" doesn't comport to reality, because it relies on human review. If you remove the human review, the system is fucked.

But no company on the planet has the capability to sanely and ethically (to say nothing of competently or effectively) conduct such review, at the scale of iOS.


Can they even, legally, review anything at all? I mean, it's highly likely there will be actual CP among the matches, viewing of which is - AFAIK - a crime in the US.


That is somewhat unclear at the moment. They don't get to see the actual image in your library; they see a derived image that's part of the encrypted data uploaded by your phone as it analyses the images.

I don't believe any of the information they've released thus far gives any actual detail about what that derived image actually is.

One might guess it's a significantly detail-reduced version of the original image, that they would compare against the detail-reduced image that is able to be generated from the matching hash in the CSAM database.


Tens of thousands of automated detections per day? Unlikely. More likely tens per year. Remember, this isn't a porn detector combined with a child detector. It is hashing images in your cloud-enabled photo library and comparing those to hashes of images already known to child abuse authorities.

In addition, consider how monumentally unlikely it is for any CSAM enthusiast to copy these illicit photos into their phone's general camera roll alongside pictures of their family and dog. This is only going to catch the stupidest and sloppiest CSAM enthusiast.


For comparison to your "likely tens per year" number, Facebook is running the same kind of detectors and reports ~20 million instances a year: https://twitter.com/durumcrustulum/status/142377627884745113...


That doesn't seem to be the same kind of detectors at all.

"21.4 million of these reports were from Electronic Service Providers that report instances of apparent child sexual abuse material that they become aware of on their systems."

So those 20M seems to be images that Facebook looked at and determined to be CP. Apple's system is about comparing hashes against already known CP.

For the record: I don't support Apple's system here, but it's not the same kind of detection at all. Let's try to not make up random facts.


From the same thread: https://twitter.com/alexstamos/status/1424017125736280074

> The vast majority of Facebook NCMEC reports are hits for known CSAM using a couple of different perceptual fingerprints using both NCMEC's and FB's own hash banks.


Ah, I see. My apologies.


Facebook looked at them after they hash matched known CP. That is how all these providers do it.

If you think that this is 20 million people mashing the report button, that is almost certainly wrong


That's a summary number of many kinds of reports, of which CSAM hash matches would be one part.

That summary number also includes accusations of child sex trafficking and online enticement. I wouldn't be surprised if reported allegations of trafficking and enticement were in excess of 99.9% of Facebook's reporting. But since they don't break it out, I can only guess.

Given that guesses aren't useful to anyone, it would be interesting if you know of any statistics from any of the major tech vendors, of the reporting frequency of just CSAM hash matches.


> of which CSAM hash matches would be one part.

The majority part:

https://twitter.com/alexstamos/status/1424017125736280074

> The vast majority of Facebook NCMEC reports are hits for known CSAM using a couple of different perceptual fingerprints using both NCMEC's and FB's own hash banks.


Fascinating. Thank you for providing the clarification. I still find that number to be perplexingly huge. If it's indeed correct, one hopes that Apple know what they're getting themselves in for.


> If it's indeed correct

Just admit you are wrong and leave it at that without continuing to try to put a false light on this.


Thanks for the kind suggestion, but I'm not going to concede anything on the basis of an assertion made by one person in one tweet, with zero supporting evidence, zero specificity, zero context.

Assuming that number is correct, it means there are orders of magnitude more reports than there are entries in the CSAM database. So even if I conceded that Facebook were reporting over 10 million CSAM images, how many distinct images does this represent? More than four? We have no idea.

How many of those four were actually illegal? Remember, there's a Venn diagram of CSAM and illegal. A non-sexual, non-nude photograph of a child about to be abused is CSAM but not illegal.

This is a serious topic; you don't seem to be taking it seriously.


Google is probably a better comparison. I can't find the source atm, but IIRC it was ~500k/year.


That wouldn't surprise me as Google's reporting would include everything seen by GoogleBot as it crawls the internet.


Ten thousand iOS users doing something stupid or sloppy per day (noting they don't have to be stupid or sloppy in general for that to happen) would not hit the "monumentally unlikely" criterion for me. Also, this is not counting the false positives, which are the premise of this thread.


Yes, being sloppy is common.

I don't know about anyone else but I've never had any issue with regular porn sloppily falling into my camera roll. And that's just regular legal porn. Maybe I'm more diligent than others but regardless, it's just not something that happens to me.

Being sloppy with material which you know is illegal? Material which, if stumbled upon by a loved one, could utterly ruin your life whether or not authorities are notified? Material which (I optimistically assume) is difficult to acquire and you'd know to guard with the most extreme trepidation? We're seriously expecting tens of thousands of CSAM enthusiasts to be sloppy with their deepest personal secret and have this stuff casually fall into their camera roll?

I'm not buying that.


A false positive will not have any effect. The threshold system they have means that they won’t be able to decrypt the results unless there are many separate matches.
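
To get a rough feel for how much a threshold like that helps, here is a back-of-the-envelope binomial sketch. All numbers are made up for illustration: Apple has not published its per-image false-positive rate or the exact threshold, only the aggregate "one in one trillion per year" claim.

    # Hypothetical sketch: probability that an account with n photos accumulates
    # at least t independent per-image false positives, each with probability p.
    from math import lgamma, log, exp

    def log_binom_pmf(n, k, p):
        return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
                + k * log(p) + (n - k) * log(1 - p))

    def prob_at_least(n, t, p):
        # terms decay extremely fast past t, so a few hundred terms is plenty
        return sum(exp(log_binom_pmf(n, k, p)) for k in range(t, min(n, t + 500) + 1))

    # e.g. 100,000 photos, per-image false-positive rate 1e-6, threshold of 30 matches
    print(prob_at_least(100_000, 30, 1e-6))  # astronomically small (~3e-63 with these made-up numbers)

The only point is that requiring many independent matches drives the account-level false-positive rate down by many orders of magnitude compared with flagging on a single match.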


> Hash collisions would not pass the human review. About the only consequence I can think of for hash collisions is that the person at Apple who performs the human review step has a slightly nicer day because they were about to look at an image... and then it wasn't CSAM.

The whitepapers provided by Apple do not say what the human review consists of. They could just look at the hashes to make sure there isn't a bug in their system.


> The whitepapers provided by Apple do not say what the human reviews consists of.

At minimum what we know is that each flagged image generates a "safety voucher" which consists of metadata, plus a low-resolution greyscale version of the image. The human review process involves viewing the metadata and thumbnail content enclosed in each safety voucher which cumulatively caused that account to be flagged.


A human at Apple likely doesn't get access to anything. I assume the checking would be done by a police group operating under strict restrictions.


The data is not sent to a "police group", it is sent to NCMEC.

From Apple's FAQ:

Will CSAM detection in iCloud Photos falsely flag innocent people to law enforcement?

No. The system is designed to be very accurate, and the likelihood that the system would incorrectly flag any given account is less than one in one trillion per year. In addition, any time an account is flagged by the system, Apple conducts human review before making a report to NCMEC. As a result, system errors or attacks will not result in innocent people being reported to NCMEC.


NCMEC then makes those images available to the appropriate law enforcement agency after the fact.


Yes, if they're CSAM.


This is out of date. It took less than a week for Apple to announce this tech is coming to "3rd party apps".


One obvious problem with human review is steganography.

The picture can look normal to the human eye, but if it contains hidden content (in the least significant bit of each pixel, for example, so that the hash is unchanged), forensic software will definitely notice, raise some flags, and extract the hidden offensive content automatically, leaving the human reviewer no other choice but to report you.

If Apple says they are not going to look for hidden content, then they are just handing out a free pass, which renders the whole scanning thing pointless.
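
For the sake of illustration, here is roughly what the least-significant-bit trick looks like. This is a minimal sketch using Pillow; real steganography tools are far more sophisticated, and nothing here is specific to Apple's pipeline.

    # Hide a payload in the least significant bit of the red channel of each pixel.
    # A +/-1 change per pixel is invisible to the eye and far below the resolution
    # of a perceptual hash, which works on heavily downscaled/averaged content.
    from PIL import Image

    def embed_lsb(cover_path, payload: bytes, out_path):
        img = Image.open(cover_path).convert("RGB")
        pixels = list(img.getdata())
        bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
        assert len(bits) <= len(pixels), "payload too large for this cover image"
        stego = []
        for idx, (r, g, b) in enumerate(pixels):
            if idx < len(bits):
                r = (r & ~1) | bits[idx]  # overwrite the lowest bit of the red value
            stego.append((r, g, b))
        out = Image.new("RGB", img.size)
        out.putdata(stego)
        out.save(out_path, "PNG")  # must be saved losslessly; JPEG would destroy the bits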


I'm confused what scenario you're positing here. Given the widespread adoption of encrypted communications, steganography is of no use to traffickers of CSAM. Steganography generally serves only one purpose, which is to transfer material in public view with plausible deniability, such as leaking material out of a military facility which has exceedingly robust data protection processes.

Apple have explicitly said that their hash algorithm is only concerned with visible elements of the image.


I'm speaking about the adversarial scenario of an attacker trying to frame a target. He just needs to get onto your phone an image with hidden content that has a hash collision with the database.

Traffickers and consumers of CSAM know that their content is illegal to possess and store, so they sometimes use steganography software to store the offensive data inside their innocuous photo library. This way they can browse their private collection through the lens of the steganography software, and they don't have some suspicious encrypted file that would attract the attention of someone they share the computer with.


You seem to be confused. As you said yourself, steganographic concealment would, by its very nature, not change the perceptual hash of the visible image. If the visible image doesn't match a known hash, the steganographically modified version isn't going to either.


This sits on top of the perceptual hash collision.

First you generate an innocuous image whose hash collides with a bad one. (This is easy because perceptual hashes are not cryptographically secure.) Then, in a second step, you hide some offensive content in it via steganography without changing the hash. Then you send the image to the target.

He stores it in his cloud, it gets flagged because of the hash collision, so it gets a manual review. The manual review takes the image through some forensic software, which will catch the steganography (because the attacker will have chosen a weak scheme), reveal the hidden offensive content, and lead to a report.
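
To make the "not cryptographically secure" point concrete, here is a toy average hash (aHash). Apple's NeuralHash is a learned embedding rather than aHash, but any hash that is deliberately robust to resizing and recompression is, by construction, far easier to collide than a cryptographic hash.

    # Toy perceptual hash: 64 bits derived from an 8x8 grayscale thumbnail.
    # Two images only need the same coarse brightness pattern to collide, and
    # LSB-level edits (like the steganography above) don't change it at all.
    from PIL import Image

    def average_hash(path, size=8):
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        avg = sum(pixels) / len(pixels)
        return int("".join("1" if p > avg else "0" for p in pixels), 2)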


The manual review process only involves a severely transformed (low resolution, greyscale) version of the image which is attached to the safety token. The ability to decrypt any original files only occurs if the human review process confirms the presence of CSAM.


I don't have a lot of info on the quality of the visual derivative.

But since a human is supposed to look at it, it should have enough detail to distinguish subtle cases like the age of the people in the picture; otherwise it's even more concerning.

If a human has enough info to make that call, then the low-res greyscale visual derivative should still raise some flags when it goes through forensic software, as steganography software usually offers some resistance against the usual compression artifacts.


I don't know exactly what's in the safety token, but we do know that it's grayscale and low resolution.

Allow me to be hypothetical for a moment: let's assume the image has all chroma data stripped, is downsampled to 1 megapixel, and is then compressed to around 100 kilobytes using JPEG or HEIC. That would be sufficient for performing careful human review but would completely demolish any steganography.
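
As a rough sketch of that hypothetical transform (the parameters of Apple's actual visual derivative are not public; grayscale, ~1 MP downsampling and JPEG are assumptions here):

    import io
    from PIL import Image

    def visual_derivative(img: Image.Image, max_side=1024, quality=50) -> Image.Image:
        img = img.convert("L")                  # drop all chroma data
        img.thumbnail((max_side, max_side))     # downsample to roughly 1 MP or less
        buf = io.BytesIO()
        img.save(buf, "JPEG", quality=quality)  # lossy compression
        return Image.open(io.BytesIO(buf.getvalue()))

    # Resampling averages neighbouring pixels and JPEG quantises away low-order
    # detail, so reading the least significant bits of the derivative yields
    # noise rather than any payload hidden in the original.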


What you're talking about here has nothing to do with what Apple is implementing.


Messaging apps like WhatsApp will save to your photo library though (unless disabled).

So any photo sent to you would be scanned. If someone sent you a bunch of files, that might trigger a manual review, which would most likely flag your account.

I wouldn't expect that immediately deleting them would stop the review process.


again, I hope someone sends a couple of executives the recently posted images, to make a point


That is why they talk about having a manual review process. So that when someone wealthy or politically connected triggers the system there is a review.


I haven't used WhatsApp, but I'm tempted to call bullshit on that. I've never used any messaging app on iOS which saves photos to your photo library. Doing so would make no sense and would surely be infuriating. It's also worth noting that apps on iOS can't save to your photo library unless you give them explicit permission.


WhatsApp does by default save received images to your photo library (as opposed to e.g. iMessage). You can turn that off, though. And the permission to read from a user's photo library (to e.g. post images) includes the ability to write to it.


Gross. I can't fathom how anyone would put up with that.


WhatsApp really does it, by default. It's a weird choice.

https://faq.whatsapp.com/iphone/how-to-save-incoming-media/


It depends on how the attack was crafted:

Step 1: Get copies of pictures of the target's kid in the bath from their phone/SNS

Step 2: Manipulate the pictures so that their hashes collide with CSAM

Step 3: Get the pictures back on the target's phone so they get scanned.

If it were me, I would try and get a series of photos from the target, and manipulate several that look most borderline. That way it looks like more than a one off.

Now if there is an Apple review, the person who views them will see some suspect pictures and would confirm.

Now the target would have to get someone to review the original pictures vs the modified pictures. Good luck with the defense.


> Hash collisions would fail human review

You mean like the absolutely perfect human review of appstore content that's known for both false positives and false negatives?

Neither automatic nor manual (human) review works 100% reliably. And believing otherwise will only ruin lives.


You are absolutely correct that neither automatic nor manual review is ever going to be 100% accurate.

I would like to believe though that for this system to fully fail an innocent person, the following would all need to have failed:

1) Coincidental CSAM hash collision
2) Incorrect manual review by Apple
3) Incorrect subsequent review by NCMEC
4) Inability of a lawyer to obtain the original image for presentation during a trial/appeal

which seems kind of unlikely? (although it's certainly the case that once steps 1, 2 and 3 have failed, the person's reputation is likely damaged even if they are able to prove their innocence in court).

The wider question here is, should 100% accuracy be the bar by which we judge this? I don't think we expect the law enforcement system to be 100% right, hence principles like the presumption of innocence and right to appeal, and even then it gets things wrong sometimes.


There are known cases of police faking AI-generated evidence[0]. There's no reason why Apple would be immune to such things. And the recent British Post Office scandal shows that even without manipulation, misplaced faith in technology as evidence can destroy hundreds of lives. The low chance of an error getting through that whole chain of checks also increases the trust placed in the system, even in the case of a false positive.

And all this is assuming it will never be expanded from CSAM to other content. Apple is already rolling out a censored version of iOS in China.

[0] https://www.vice.com/en/article/qj8xbq/police-are-telling-sh...


You’re missing the threshold that is part of this system. You would need multiple hash collisions across multiple photos to trigger these mechanisms.


Of course not, Apple will simply match the much more reliable Youtube flagging system :P


Don't worry. I'm sure the police will believe you and help you out. /s

What you've described is pretty much the scariest thing I can imagine as far as computer crime goes.


> Who needs SWATing when you can send a CP pic (either real or with hash collision as per the thread few days ago) from a virtual overseas number/service and get FBI van to show up as well?

You are talking as if collisions are trivial to make. I bet they have had deep conversations in this area. First, you would need a real hash to even try (and those are hidden). Second, to get real material flagged, it must already be in their database, which already tells a lot about the sender and is worth reporting to the police. It is quite easy to prove that someone just sent it to you. And one photo does not trigger anything. Besides, the sender must know that those photos will go automatically into the cloud for any of this to mean anything.

> What about injecting code into a public website to download same pic into local browser cache without user’s knowledge?

At least US legislation is precise that the user must willingly obtain/download CSAM material, and that must be proved. So this is not harmful to the user in the end.

A lot of speculation, but it does not really lead to consequences. Almost every system can be abused in theory, but whether that really means something is a different story.


Step 1: Get copies of pictures of the target's kid in the bath from their phone/SNS

Step 2: Manipulate the pictures so that their hashes collide with CSAM

Step 3: Get the pictures back on the target's phone so they get scanned.

I don't have the skills or understanding of how the hashes are created but would this be possible?

>At first, you would need a real hash to even try (which are hidden).

How are the hashes hidden? It looks like they are shared: https://www.thorn.org/reporting-child-sexual-abuse-content-s...


> How are the hashes hidden? It looks like they are shared: https://www.thorn.org/reporting-child-sexual-abuse-content-s..

These hashes are not generated by Apple and are not valid for its system. (They must be generated by Apple's new system.) They are probably very strictly guarded.

They will be stored on every device from iOS 15 onward, somehow securely. This will presumably limit support for older iPhones.


> At least US legislation ... does not really lead to consequences

Except that a trial, even one ending in an acquittal, will SUCK, generate terrible news stories about you, and poison any Google search for your name with CSAM stories.


It's a good method of protecting important documents. Simply add some stamps on top of all documents in case someone steals them


You don't even need to inject code into a public website. There has been no shortage of zero-click exploits for iMessage.


Just because you assume attack vectors are simple doesn't mean they are. First of all, why would Apple forward a report about something that isn't CSAM to the NCMEC?


No code required. <img width=0 height=0> would do the job.


I wonder how long it takes until they add a feature to Safari to scan all the <img> <video> <canvas> elements for possibly illegal content. Would be very convenient considering Safari is the only browser engine on iOS.


> <img width=0 height=0> would do the job.

No, that's not how the Apple's system works.


That's fine™. You are just going to redirect blame to the original source, provided you have enough Apple Cash on your balance to pay the lawyers and stay out of jail while sorting this out.


How is that going to get the image into your iCloud photo storage?


Content-Disposition: attachment; hit the wrong button, done. It's in your iCloud/Downloads folder.


It doesn't, but it does get the image into your browser cache and onto your machine.


So what does that do in the context of this conversation about Apple and iCloud?


My public Wi-Fi captive portal...


The original is a delight to read. Sadly, I can't help but wonder how many would be "triggered" by the answers today.

How do we, as a society, start accepting rational self-criticism again?


> How do we, as a society, start accepting rational self-criticism again?

A good starting point would be to stop using the word "trigger" when someone reacts in a way you don't like to something you say.

Personally I interpret it as a way to diminish the other side's position by suggesting they don't have control over their emotions regarding some subjects. Not a good start.


Is there a more appropriate word to describe this? Honest question as a non-native English speaker.

Seems like this is a commonly used term/usage: https://www.google.com/amp/s/dictionary.cambridge.org/us/amp...


"Offended" would be my suggestion. "Triggered" is only really used in this sense as part of bad faith culture war discourse, mocking the idea of trigger warnings.


I've heard advocates of trigger warnings share their lived experience by talking about what "triggered" them, using that word specifically.


The word is used in the context of PTSD. e.g. "hearing fireworks triggered flashbacks from the war"

More generally, a "trigger warning" is a "content advisory warning" by another name.


There's a big difference between describing one's own experience and labeling others.


I don't think they mock the idea of triggers.

They leverage and weaponize the concept, which is arguably worse.


Huh? But "trigger" was started and used by that side themselves.


My understanding is that the word "trigger" is used in psychology to describe a wide range of traumatic responses, e.g. whenever a war veteran suffering from PTSD hears fireworks it may trigger a strong traumatic response. Victims of abuse can get "triggered" by the presence or even mention of their abuser, etc.

It applies to a wide range of traumas and responses, some of which might be more or less extreme, so it does include some small things: if you ever get a minor burn, the idea of touching a potentially hot surface might make you somewhat uncomfortable - you could call that a triggering experience even though it's not nearly as intense as the other examples.

Triggering experiences are generally considered to reinforce trauma and generate unnecessary distress, and should therefore be avoided.

And so we use trigger warnings before movies etc. to warn viewers of potentially upsetting/triggering content such as war, torture, sexual violence and various forms of abuse. It's really not that big of a deal.


This was the original idea: that some people who had genuine traumatic experiences and were currently suffering from mental illness could choose to opt out before proceeding to read/watch something. But the concept got wildly out of hand as activists, especially younger adults, began obsessively applying it to nearly any piece of writing and anything they could frame as traumatic.

For example, if you're about to present a movie to a captive audience that involves depictions of rape, it would be good that someone who has experienced rape, especially recently, knows ahead of time that it will and has the ability to opt out, because it might trigger a traumatic episode. The circumstances where you have a captive audience and it's not clear from the context what will be depicted are actually quite rare, so its usage should be rare.

But young people, trying to signal their virtuous compassion and understanding to like-minded individuals, would put "trigger warnings" at the top of blog posts about things like "racism" or "homophobia", and all that would be discussed would be that they overheard a slur at the store.

At some point, the dominant use of trigger warnings was by people with thin skins, ready to get offended on behalf of "victims" who had suffered, at worst, nothing more than people being rude or mean to them. Pretending like these kinds of negative encounters are anything close to the mind-breaking trauma of getting raped or watching your fellow soldiers explode in front of you is disgusting, and eventually everyone caught on that the activists were trying to equate real trauma with "hurt feelings". Worse, they were effectively teaching young people to internalize and exaggerate negative experiences so that they could identify as someone with PTSD. Doing this made them unique and gave them extra attention from others who wanted to actively show compassion to victimized people. For lonely young people who want a cause, it was extremely attractive because it gave them identity, purpose, and community. But in reality, it was largely a perverted roleplay which coddled everyone involved and made them emotionally fragile.

The original concept of trigger warnings is solid, but should be practiced only where necessary and never attached to the phrase "trigger warning", as that nomenclature has been ceded to the activists.


I’d advise reading the book “112 Gripes about the French” and then applying that same empathy to the people you dislike in our modern era.


Where did you get that I dislike anybody? The parent used "sides" before I did, and it seems absurd to say the right started using "trigger" to mock the left.


> Parent used “sides” before I did

My philosophy is that I don’t care who started it. I care how I respond to try to improve things.

(Though I myself have a lot to learn as well.)


> How do we, as a society, start accepting rational self-criticism again?

You might want to start with taking critique from the people you deem to be "triggered" seriously. That would be a first step in rational self-criticism. Otherwise you're selling a gut reaction as rationality to yourself. This is a common mistake people make. You might end up disagreeing with their critique - that may well be - but even then in most cases you'll likely take something from it, might be able to understand the topic at hand with some further nuance or get a deeper understanding of your position and its merits.


Do you have any example of answer that is "triggering" today?


I have never seen triggered in relation to self-criticism.


Russia already has a law in the works to make possession of Starlink hardware illegal. There was some coverage in January: https://m.slashdot.org/story/380606


While you can often see claims that ordinary citizens can be fined under the proposed law, I could not find an official confirmation. Here is the law in question: https://sozd.duma.gov.ru/bill/1086353-7

The relevant part can be translated like this:

> (not complying with the law) entails the imposition of an administrative fine on officials in the amount of ten to thirty thousand rubles; On legal entities - from five hundred thousand to one million rubles.

For some reason the part about officials gets applied to ordinary citizens, which is completely incorrect in my understanding of the law.

So my guess is that you will not be able to legally import Starlink receivers into Russia unless some kind of agreement is reached, but ordinary citizens owning such a receiver will not be prosecuted (well, at least in the near future). Though they may be prosecuted under a different law if they try to share the unrestricted Internet access with other people. It's quite similar to VPNs: currently it's not illegal in Russia to use one to visit blocked resources, but it's illegal to provide such a service (though any sane person would not use a VPN service based in Russia to work around those blocks in the first place).


I took an iPhone X into a 3 foot deep swimming pool to take an underwater selfie (don't ask, not the proudest moment) and it promptly died - apparently if you drop the phone, the watertight seal has a tendency to fail "silently". Caveat emptor :)

Thankfully, AppleCare was in effect and it got replaced for a small deductible.

Great to see gadgets standing up to being used as opposed to needing to be cradled and soothed all the time!


I took my phone out of the case today and noticed how dirty it and the case were. I simply washed it off in the sink and it was nice and clean. I couldn't help but think this was such a strange thing to do, but also so natural.

Also, I’ve done underwater selfies too. It’s a lot of fun!


But don't use soap, which reduces the surface tension and lets water get past seals. At the start of the pandemic I used soapy water on my iPhone 11 and the Face ID sensor broke immediately. Apple replaced the phone, though.


Just remember to let the phone dry fully before charging (unless you’re charging wirelessly of course). Moisture while charging gradually damages both cable and port.


iPhones are actually able to detect this and will give a pop-up saying moisture has been detected in the charging port and charging has been disabled out of safety concerns. It even shows you a bypass button for emergencies!


I get a similar notification when I take Apple Watch surfing. Also took an iPhone 11 in a waterproof bag around my neck for pics, but it was a hassle to zip up the wetsuit, and the bag fogged up.


A warning for what? Apple Watch doesn’t have a charging port


Apple watch gives a moisture warning if the speaker port has water inside. Happens often after swimming or running in rain.


Android also does this


i totally agree


On the other hand, I have a Samsung Active 2 watch, which is IP68 rated and has a swimming mode, and I took it for a pool swim yesterday, in which it died. It lasted a total of 1 year and 4 months.

It also has an ECG sensor that died for most people (myself included) before Samsung released the ECG mode (one year later).


Oh, I also have that, I shower with it and took it swimming a few times, no problem.

So it's still working and I can fully enjoy the zero apps it has, because no one cares about Tizen, the continuous heartbeat measurement that stopped measuring after the last big update and the ongoing lack of ECG mode that I'm not allowed to use because I don't own a Samsung phone.

And people wonder why Apple owns the high-end smartwatch market.


If you want a laugh go and download the tizen SDK and you will soon understand why nobody develops for it.


I was actually thinking of developing a little app for my GF and me when we both got our GA2, but noped out of it very quickly.


I once dropped my SE into a pond while I was fishing. I didn’t notice until I actually saw the thing at the bottom of the pond. It was probably down there at least 5 minutes.

I fished it out and it worked completely fine. I don't think SEs were even marketed as waterproof. They still had the headphone jack, which I think was the weak spot.


My SE on the other hand did not survive a ~5 minute dip into the shallow end of the YMCA pool.

But Xs has been great in this regard. And with this pandemic, I regularly wash it with soap and water.


Chlorine can damage the seals, especially on something not actually rated as water resistant. Soap is also a weakness and can cause water to bypass the barrier because it lowers the water's surface tension.


I took my iphone 5 with me when I fell into a lake in college. The screen was previously all cracked up too. Worked fine still somehow.


Being waterproof and being waterproof against salt water or chlorinated water are different things.

> (don’t ask, not the proudest moment)

don't be ashamed - that had the potential to be a super cool picture.


Same, went swimming with my iPhone 7. It never fully died, but the camera clouded up and it totally freaked out to the point of being nearly unusable. AppleCare+, £70, and it was replaced in-store. I always get AppleCare+ now.


AppleCare, and a store in most major cities to do the replacement at, is one of the big reasons I stick with Apple.


Speaking about NYC at least, nightclub prices had gone just a little bit too nuts right before Covid - think $2k for a table for 4 with a single bottle of ($30) vodka. $25 mixed drinks were the norm.

Combined with a lack of innovative artists and the ever-increasing homogeneity of style (do I sound like a grumpy old man yet?), clubs seemed to have been racing towards a peak. Smaller venues were suddenly cooler, cheaper, and more interesting.


My experience is in London, but very similar. Even if you can afford it, the "real" nightclubs are a total waste of time and money.

Unfortunately a lot of places that were actual music venues (Fabric was the one that got the most press attention, but there were others) got shut down in recent years because of noise complaints, drug investigations, etc. This of course played right into the hand of property developers. I'm sure lockdowns aren't helping either, and there's probably a bunch of suits frothing at the mouth to replace everything with "luxury housing units" in places that currently have decent nightlife like Vauxhall, the northern part of Kingsland Road leading to Tottenham, etc.

On the other hand I don't think it's possible to kill culture, and the people who want to put on events will find a way. At least that's the only optimistic way to look at the current situation.


It is possible to kill culture, and it's been happening in London for years: the more people are priced out of the city, the fewer interesting things happen. You can see that in microcosm in the differences between Berlin and London. The number of small, interesting arts and music venues in Berlin is an order of magnitude higher than in London, because it's much more affordable to live there and keep doing your thing for little to no financial reward.


Ironically it’s those same suits who are paying for the bottle service.


Jesus H Christ! 2 G's for a table? You know, where I'm from, a person can get a decently sized house with a $12,000 down payment. All you have to do is stay home for 6 straight weekends in NYC. Apparently.


Not sure that's a relevant comparison; the class of people that will put a $12,000 down payment on a decently sized house is not the same class of people that will splash $2k on a nightclub table in NYC.


They’re the same class, one group just has a lot more money than the other


It's explicitly conspicuous consumption, a way to signal one's financial status to others without having to wear expensive clothes, watches, shoes, etc.

I would imagine that it gets you a certain amount of attention from a certain type of person.


$25,000 for a top table at LIV Miami NYE. $7,500 on a regular night.


Just to add to what everyone else said: I’m in Miami right now and most of the clubs (the ones that open at midnight and close at 0500) basically work such that if you don’t spend $1-10k for table/bottle service you might as well not even go.


I have not been to a club night or otherwise in over 25 years. It's just not my thing. But I am curious what is the difference between a top table and a regular table? What even goes on at a regular table?


Location within the club. In and around DJ booth / dancefloor can cost almost 10x the cheapest table. See and be seen = $$$$$

At these venues you aren't paying for the table per se... these are generally minimum spend amounts for drink/food (pre gratuity).

For ticketed events, table reso also usually includes admission & line bypass for x guests.


This seems very strange: why on earth would you pay that much to listen to music, and who are you trying to impress? Is there some implicit assumption that you can meet otherwise unavailable women / men there? That seems extremely unlikely as well.


I assume that it's prostitution with extra steps.



Access to higher “””quality””” drugs and women.


[flagged]


No one that can afford that is working for a paycheck.


[flagged]


Can't Americans stop themselves from policing language? You're no better with that attitude than your Christian lunatics trying to censor every nipple out there. Puritans in every form.


I’m not trying to censor anybody (nor indeed am I American). I’m merely sharing information about language and encouraging people to think about the words they use and the effect they have on other people. At the end of the day I still believe it is your right to use words how you wish.


$12,000 might be a down payment for a parking spot in a major city.


Even ignoring the cost (which isn't that bad for an occasional thing if you don't buy a table, which I've never done), I've never had that much fun at a nightclub. I feel on edge the entire time. It's something about the overall atmosphere of the place that raises your stress levels. I can't quite put a finger on it.

Now a slightly more laid back live music venue I love. Can't tell you how many times I went to Brooklyn Bowl and Rockwood pre-pandemic.


I just can't see how clubs that are basically on some continuum of the sexual socializing scene (from meat market to more meet/greet) aren't getting completely undermined by online dating/hookup/meet apps.

Maybe corner bar for first meets, but who would take a first date to a zoot suit club?

Drug/experience/rave/music clubs should all rebound though.


There's a sentiment rolling lately in music-making communities that EDM is past its peak, and I think it's tied to how the scene has evolved commercially, including its presence in nightclubs, as well as the ushering in of streaming media.


Who would control the listing of a company's shares? A startup can choose to simply not make itself available. Similarly, most employees overlook things like the right of first refusal that their parent companies have over their shares.

This is an awesome development, if it can also be used to push for more standard, employee-friendly shareholder agreement terms.


I’ve seen that circumvented with trusts, where the trustee is swapped or trustees are added prorata but the trust stays on the cap table

Or in cases where the employee cant change the shareholder to the trust, a trust is formed for doing an escrow transaction of the cash behind the scenes.


Correct me if I’m wrong, but right of first refusal doesn’t mean your employer gets to block a transfer of equity outright. It only means that you have to make the same offer to them before you make it to another buyer. If your employer declines the offer, the other buyer is free to accept it.


Yah. So, if you're looking to buy something protected by a RoFR, you get to make an offer... And, then you wait a few weeks to see if another party thinks it's worth more than that and takes it instead.

In practice it pushes down both liquidity and price.

It becomes this strange meta-game of having to guess the likelihood of whether you're actually buying something. And you lose out on any materially positive change that manifests in that time window. Indeed, the decision can be made even on the basis of insider information...


That’s exactly right, but doing so makes the process much more illiquid.


Mercedes has a good demo video about the current state of this tech (EU only) - when you drive a car with it, it's literally magical. An alternative tech uses a projector-like screen with 4K resolution, allowing for very precise light control.

Here is a demo: https://m.youtube.com/watch?v=0OJjvYPV3oc


Are they actually developed by Mercedes? Or do they have exclusive contracts with a supplier like Bosch?

