There are different incidents here.

The first one where the police uploaded videos and wanted viewer information is absolutely egregious and makes me wonder how a court could authorize that.

The next one, which I didn't fully understand, appeared to be in response to a swatting incident where the culprit is believed to have watched a specific camera livestream, and where the police provided a lot of narrowing details (time period, certain other characteristics, etc.). That one appears far more legitimate.



I don't understand how either of these is remotely constitutional. They sure aren't within the spirit of it.

They asked for information about a video watched 30k times. Supposing every person watched that video 10 times AND supposing the target was one of the viewers (it really isn't clear that this is true), that's 2,999 people who have had their rights violated to search for one. I believe Blackstone, who heavily influenced the founding fathers, has something to say about this[0]. That's literally 30x Blackstone's ratio.

I don't think any of this appears legitimate.

Edit: Oops, [0] https://en.wikipedia.org/wiki/Blackstone%27s_ratio


Cell phone tower data has been used for a decade now in pretty much the same way.

Did you happen to pass by a cell tower in a major city around the time a crime was committed? We all have.

Well, your IMEI was included in a cell tower dump. Probably dozens of times.

Did you happen to drive your car over any bridge in the Bay Area lately? Did a municipal vehicle pass you and catch your license plate with their ALPR camera?

Guess what? Your name went through an LEO database search if they wanted to find a perp for that time/location.

Privacy has been dead for a long time. The worst part is people don’t care.

The Snowden files changed nothing. If there was ever a point in history where people would have given up their cell phones for their civil liberties, that would have been the time to do it.


> Cell phone tower data has been used for a decade now in pretty much the same way.

I was mad then. I'm more mad now. Stop these arguments, because it isn't like one implies the other. And who the fuck cares if someone wasn't mad then but is now. What's the argument, that you're a hipster? That's not solving problems. I don't want to gatekeep people from joining the movement to protect rights. I don't care if they joined as a tin foil hatter years ago or just yesterday after having literally been complicit in these atrocities. If you're here now, that's what matters.

> Privacy has been dead for a long time. The worst part is people don’t care.

Bull, and bull.

There are plenty of people fighting back. I'm pretty sure that me getting ads in languages I don't speak is at least some good sign. Maybe I can't beat the NSA, sure, but can I beat mass surveillance? Can I beat 10%? 50%? 80%? 1% is better than 0%, and privacy will die when we decide everything is binary.

People care. People are tired. People feel defeated. These are different things. If people didn't care Apple (and even Google) wouldn't advertise themselves as privacy conscious. Signal wouldn't exist and wouldn't have 50 million users. It's not time to lay down and give up.


> The Snowden files changed nothing.

They didn't change enough, but that isn't nothing.


> > The Snowden files changed nothing.

> They didn't change enough, but that isn't nothing.

The biggest change IMHO was the entire industry got off their collective assets to finally move to HTTPS.


I wonder who's going to have to end up hiding out in a US-hostile part of the world for us to read this part of the cloudflare FAQ: https://developers.cloudflare.com/ssl/troubleshooting/faq/#w...


The world’s largest MITM


Lol, I'm a bit slow ... some USA TLA runs Cloudflare, right?


tin foil hat time, but who do you think the MITM is for?


Tech bros love it. And Tailscale. And SaaS as a whole. Data sovereignty means you can't be mined by the adtech industry, so it's not cool.


Calling out tailscale here is odd considering it's peer-to-peer and encrypted.


With keys controlled by a central entity


do you have a source for that?


Tailscale [0] says the private keys never leave the device.

“Security

Tailscale and WireGuard offer identical point-to-point traffic encryption.

Using Tailscale introduces a dependency on Tailscale’s security. Using WireGuard directly does not. It is important to note that a device’s private key never leaves the device and thus Tailscale cannot decrypt network traffic. Our client code is open source, so you can confirm that yourself.”

0. https://tailscale.com/compare/wireguard


That is true as far as it goes, but how does your node learn the public keys of the other nodes in your tailnet? My understanding is that they are provided by the coordination server, so you have to trust that the public key the coordination server gives you is actually the one for your peer device.

Tailnet lock helps mitigate this by requiring that node public keys are signed by a trusted signing node, but it isn't bulletproof.


Public key cryptography doesn’t work like that. If you were given wrong public keys you wouldn’t be able to connect to start with.


> Public key cryptography doesn’t work like that

Like what? I'm saying both sides of the connection would be given the wrong public keys by the coordination server, with the corresponding private keys held by a MITM.
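To make the concern concrete, here is a minimal sketch of that trust model, using NaCl boxes rather than WireGuard, and nothing resembling Tailscale's actual protocol or API: if each side only learns its peer's public key from a coordination server, a server that hands out a key it controls can read and relay the traffic while both ends still "connect" successfully.

    # Conceptual sketch only (pip install pynacl). Not WireGuard or Tailscale code.
    from nacl.public import PrivateKey, Box

    alice, bob, mitm = PrivateKey.generate(), PrivateKey.generate(), PrivateKey.generate()

    # Honest key distribution: Alice is told Bob's real public key.
    msg = Box(alice, bob.public_key).encrypt(b"hello")
    assert Box(bob, alice.public_key).decrypt(msg) == b"hello"

    # Malicious distribution: both sides are handed the MITM's key as "the peer".
    to_mitm = Box(alice, mitm.public_key).encrypt(b"hello")
    plaintext = Box(mitm, alice.public_key).decrypt(to_mitm)   # MITM reads the traffic...
    relayed = Box(mitm, bob.public_key).encrypt(plaintext)     # ...then re-encrypts it to Bob
    assert Box(bob, mitm.public_key).decrypt(relayed) == b"hello"

This is the gap that Tailnet lock, mentioned elsewhere in this thread, is meant to close, by requiring node keys to be signed by keys the coordination server does not hold.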


To add to that, they also provide Tailnet lock [0], which protects against the only way the coordination server can mess with tailnets: connecting unauthorized nodes.

[0] https://tailscale.com/kb/1226/tailnet-lock


Not sure what the issue is with Tailscale, especially since you can self-host a Headscale server locally to get the same effect.


Headscale is fine. With tailscale they control the deployment of public keys to devices, and can thus deploy anything they want to.


Good to know.

Have they ever deployed anything they want to devices?


The direction you're heading in sounds very similar to the arguments that may have been made pre-Snowden about mass-surveillance.



A single layer of encryption is for the stone age.

If [peccadillo] must remain secret when your neighbour is investigated for [crime?], then encrypt at least twice, and obfuscate the original message.


> The biggest change IMHO was the entire industry got off their collective assets to finally move to HTTPS.

And then promptly moved most things behind cloudflare, which is MITMing everything, undoing the benefit of HTTPS.

Remember "SSL added and removed here!"? Now it happens at cloudflare.


I thought this was driven by ISPs inserting their own ads in normal HTTP.


…no, it was definitely “HTTPS added/removed here”


Had nothing to do with Snowden but with Google ranking algo changes. Google has a commercial interest in hindering competitors in the ad brokering market from observing info on the wire.


It had everything to do with Snowden. Source: I was at Google at the time he started leaking.

Before Snowden encryption was something that was mostly seen as a way to protect login forms. People knew it'd be nice to use it for everything but there were difficult technical and capacity/budget problems in the way because SSL was slow.

After Snowden two things happened:

1. Encryption of everything became the company's top priority. Budget became unlimited, other projects were shelved, whole teams were staffed to solve the latency problems. Not only for Google's own public facing web servers but all internal traffic, and they began explicitly working out what it'd take to get the entire internet encrypted.

2. End-to-end encryption of messengers (a misnomer IMHO but that's what they call it) went from an obscure feature for privacy and crypto nerds to a top priority project for every consumer facing app that took itself seriously.

The result was a massive increase in the amount of traffic that was encrypted. Maybe that would have eventually happened anyway, but it would have been far, far slower without Edward.


You were at Google at the time, but your memory of the ordering of events is off. Google used HTTPS everywhere before Snowden.[1][2] HTTPS on just the login form protects the password to prevent a MITM from collecting it and using it on other websites, but it doesn't prevent someone from just taking the logged in cookie and reusing it on the same website. That was a known issue before Snowden, and Google had already addressed it. Many other websites, including Yahoo, didn't start using HTTPS everywhere until after Snowden.[3] I know because this was something I was interested in when using public WiFi points that were popping up at the time. I also remember when Facebook moved their homepage to HTTPS.[4] Previously, only the login form POSTed to an HTTPS endpoint, but that doesn't protect against the login form being modified by a MITM to have a different action for the MITM to get your password, rendering the whole thing useless.
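To illustrate the cookie-reuse point with a deliberately hypothetical sketch (the host, path, and cookie name are invented): once a session cookie has travelled over plain HTTP, anyone who captured it can replay it without ever touching the password.

    # Hypothetical illustration of session-cookie replay; not any real site's API.
    import requests

    stolen = {"SESSIONID": "value-sniffed-from-unencrypted-wifi-traffic"}
    resp = requests.get("http://example.com/account", cookies=stolen)
    # The server sees a plausible cookie and cannot distinguish this request
    # from one made by the legitimate, logged-in user.
    print(resp.status_code)

This is the class of attack that Firesheep (mentioned further down the thread) made point-and-click against sites that only protected the login form.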

What changed after Snowden was how Google encrypts traffic on its network, according to an article quoting you at the time.[5]

[1]https://gmail.googleblog.com/2010/01/default-https-access-fo...

[2]https://googleblog.blogspot.com/2011/10/making-search-more-s...

[3]https://www.zdnet.com/article/yahoo-finally-enables-https-en...

[4]https://techcrunch.com/2012/11/18/facebook-https/

[5]https://arstechnica.com/information-technology/2013/11/googl...


An important clarification is that the leaks about NSA snooping on Google motivated end-to-end encryption between all pairs of Google internal services. It was a technical marvel, every Stubby connection had mutual TLS without any extra code or configuration required. Non-Stubby traffic needed special security review because it had to reinvent much of the same.

People even got internal schwag shirts made of the iconic "SSL added and removed here" note [1]. It became part of the culture.

Over a decade later I still see most environments incur a lot of dev & ops overhead to get anywhere close to what Google got working completely transparently. The leak might have motivated the work, but the insight that it had to be automatic, foolproof, and universal is what made it so effective.

[1] https://blog.encrypt.me/2013/11/05/ssl-added-and-removed-her...


A minor quibble; iirc it was only connections that crossed datacenters that were encrypted. RPC connections within a cluster didn't need it, as the fiber taps were all done on the long distance fibers or at telco switching centers.

But otherwise you're totally right. I suspect the NSA got a nasty shock when the internal RPCs started becoming encrypted nearly overnight, just weeks after the "added and removed here" presentation. The fact that Google could roll out a change of that magnitude and at that speed, across the entire organization, would have been quite astonishing to them. And to think... all that work reverse engineering the internal protocols, burned in a matter of weeks.


According to the reporting at the time, the NSA had shut down the email metadata collection program, the only leaked NSA program that parsed data on those taps, back in 2011; so the reverse engineering work was burned by an interagency review two years prior to Google's cross-datacenter encryption work.


They were tapping replication traffic on a database that included login IP addresses. I remember it well because it was a database my team had put there.


I missed that leak. Any chance you have a link for me to fill in my gap?


Slide 5 (Serendipity - New protocols) in this presentation:

https://github.com/iamcryptoki/snowden-archive/blob/master/d...

It's heavily redacted but the parts that are visible show they were targeting BigTable replication traffic (BTI_TabletServer RPCs) for "kansas-gaia" (Gaia is their account system), specifically the gaia_permission_whitelist table which was one of the tables used for the login risk analysis. You can see the string "last_logins" in the dump.

Note that the NSA didn't fully understand what they were looking at. They thought it was some sort of authentication or authorization RPC, but it wasn't.

In order to detect suspicious logins, e.g. from a new country or from an IP that's unlikely to be logging in to accounts, the datacenters processing logins needed to have a history of recent logins for every account. Before around 2011 they didn't have this - such data existed but only in logs processing clusters. To do real time analytics required the data to be replicated with low latency between clusters. The NSA were delighted by this because real-time IP address info tied to account names is exactly what they wanted. They didn't have it previously because a login was processed within a cluster, and user-to-cluster traffic was protected by SSL. After the authentication was done inter-cluster traffic related to a user was done using opaque IDs and tokens. I know all about this because I initiated and ran the anti-hijacking project there in about 2010.
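As a toy sketch of why this kind of risk analysis needs a low-latency, replicated view of recent logins per account (purely illustrative, not Google's design or data model): the check compares each new login against what that account has recently used, so the history has to be available wherever the login is processed.

    # Toy illustration only; not Google's system.
    from collections import defaultdict, deque

    recent_countries = defaultdict(lambda: deque(maxlen=20))  # account -> recent login countries

    def record_login(account: str, country: str) -> None:
        recent_countries[account].append(country)

    def looks_suspicious(account: str, country: str) -> bool:
        history = recent_countries[account]
        return len(history) > 0 and country not in history

    record_login("alice", "US")
    record_login("alice", "US")
    print(looks_suspicious("alice", "US"))  # False: matches recent history
    print(looks_suspicious("alice", "KP"))  # True: never seen for this account

And that replication stream was valuable to the taps precisely because it carried account-to-IP history between clusters.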

The pie chart on slide 6 shows how valuable this traffic was to them. "Google Authorization, Security Question" and "gaia // permission_whitelist" (which are references to the same system) are their top target by far, followed by "no content" (presumably that means failed captures or something). The rest is some junk like indexing traffic that wouldn't have been useful to them.

Fortunately the BT replication traffic was easy to encrypt, as all the infrastructure was there already. It just needed a massive devops and capacity planning effort to get it turned on for everything.


The first two links are about Gmail and personalized results in web search specifically. Even as late as 2011 SSL being activated for a product was treated as unusual enough to write blog posts about, and it was up to individual projects whether or not to activate it and how to trade off the latency costs.

You're right that I might be mis-remembering the ordering of things, but I'm pretty sure by the time Snowden came around the vast majority of traffic was still unencrypted. Bear in mind that a lot of Google's traffic was stuff you wouldn't necessarily think of, like YouTube thumbnails, map tiles and Omaha pings (for software updates). Web search and Gmail by that point made up a relatively small amount of it, albeit valuable. Look at how the Chrome updater does update checks and you'll discover it uses some weird custom protocol which exists purely because, at the time it was designed, Google was in a massive LB CPU capacity crunch caused by turning on SSL for as many services as possible. Omaha controlled the client so it had the flexibility to do cryptographic offload and was pushed to do so, to free up capacity for other services.

> What changed after Snowden was how Google encrypts traffic on its network, according to an article quoting you at the time.[5]

That also changed and did so at enormous speed, but I'm pretty sure by June 2013 most external traffic still didn't have TLS applied. It looks like Facebook started going all-SSL just 8 months before Snowden.


I had completely forgotten about YouTube. I think it switched to https video serving post-Snowden, but I can't find the announcement.

Edit: Here it is. Only 25% of YouTube's traffic was encrypted at the start of 2014. https://web.archive.org/web/20160802000052/https://youtube-e...


Right, I remember (as an outsider to google) the push for https coming after Firesheep [1] and the google research on the actual CPU cost of https [2], both in 2010. Snowden's revelations came in 2013.

[1] https://en.m.wikipedia.org/wiki/Firesheep [2] https://www.imperialviolet.org/2010/06/25/overclocking-ssl.h...


That's nice and all, but the "why" is more important than the "what".

Google was driven not by some panicked rush to protect user privacy, but by the need to protect Google's collection and storage of user data.

Google has 10+ years of my email. It doesn't treat that like Fort Knox because it gives a shit about my privacy; it treats it like Fort Knox because it wants to use that for itself and provide services to others based off it.

You do know that Google was heavily seed-funded by the NSA, right?


Google might just be the biggest advocate of https out there, certainly (from my recollection) post Snowden. There has been a lot of progress made over the years.

https://transparencyreport.google.com/https/overview

https://transparencyreport.google.com/safer-email/overview - getting email transmitted with some form of encryption is probably a similar, bigger, and completely unseen problem


There was literally a PowerPoint slide in the released docs implying they had backdoored Google's internal servers.


>Stop these arguments because it isn't like one implies the other. And who the fuck cares if someone wasn't but is now. What's the argument, that you're a hipster?

That we are nothing in the ocean of people who don't care. Someone upended their entire life to blow the whistle on the government doing it, with hard proof, and no one cares (from a statistical POV, not in a "literally 100% of the population" way).

They cared more about the Boston bombing the month prior, which, while tragic, is a statistical molecule compared to the impact of what Snowden revealed.

>There are plenty of people fighting back.

This can be a game of numbers, but it isn't. This can be a game of power, but it isn't. Not enough people are fighting back and not enough powerful people are fighting back.

>People care. People are tired. People feel defeated. These are different things

Well, it sounds like they gave up. Different words, same results.


The concept introduced by the Supreme Court regarding pen registers is consistent with all the examples you have given.

Anytime you willingly share data with a 3rd party, the law assumes you aren't keeping it private.

https://en.wikipedia.org/wiki/Pen_register

If you want to keep something private don't share it outside of your house.


Except that existing in modern society requires giving immense amounts of personal information for even basic transactions.


It's beyond absurd and desperately needs to be addressed. Too bad both the government and corporations stand to lose so much that I doubt it will be treated seriously.


I personally think that the Apple antitrust case is being pushed due to their privacy stance.

Apple looked at the pen register cases and realized the best position to be in as a third party is to not possess usable data.

The US case, from my point of view, is trying to force Apple to share user data with third parties.


How would a successful antitrust verdict against Apple further the NSA's implicit dogma of "insecure by default"? Especially if it winds up breaking up Apple into many pieces. It's far easier for a centralized tech industry to bend the knee to the NSA than a distributed one.


>It's far easier for a centralized tech industry to bend the knee to the NSA than a distributed one.

I don't agree. NSA can hack/pressure smaller companies much easier than a giant like Apple.


Easier, but you get less data. There are thousands of small knees to bend, and more points of failure for public outings. Centralizing it to one company makes everyone's lives easier.


Furthermore the NSA/FBI/CIA want all their spying behavior to be secret. If you have to bend a lot of small knees then someone's going to blab before they get the data they want. And moving off a small company that's bent the knee is way easier than moving off FAANG, which can keep secrets[0] and has your balls locked in a vise.

[0] Because, among other things, the whole "Surprise and Delight" doctrine demands internal controls and secret-keeping discipline not that far off from an actual intelligence agency


Forcing Apple to hand over data to a third party for commercial reasons (not needing a warrant) is much simpler than whatever scenario you have worked out.


We all have choices to make. I avoid all sorts of things people consider indispensable.

Two examples: not having an Amazon Prime account, and running my own mail server.


Given recent events, I don't think Amazon Prime is that necessary anymore.

Mail servers, sure. The big issue there is another annoying pseudo-monopoly issue where so many major email servers assume anything not from [major email server] is spam, so you may not even get to communicate properly. More sticks for the fire.


I'm anything but a major mail provider and I don't have any issues. I did have some hiccups around 2008 and had to implement DMARC/DKIM. I use strict delivery, so my mail server must deliver all mail directly.

Occasionally people have a vanity domain email that bounces back to me. I have to search the headers for the actual email address and re-send.


IMHO the problem here is really transparency. There can be situations in which this could be reasonable. But the concrete cases might be questionable, as we are probably not talking about capital crimes.

In Berlin there used to be a notification system if you were subjected to cell surveillance. It was recently stopped [0]. IMHO we need the same for all IP assignment or account lookups. The problem is that we individually, and particularly vulnerable groups like journalists and activists, might be subject to far more of such activities than we know.

[0] https://netzpolitik.org/2024/rolle-rueckwaerts-berlin-beende...


> notification system

More generally, imagine if every citizen were entitled to a yearly report on how many times law enforcement received records containing their names or personally identifying information, except in cases that are formally unsolved and in progress.

So a line item might be something like:

    {Ref ID}, {Date}, "All Youtube accounts that watched {Video Title}"


I don't think anyone is saying that rights can't be infringed upon for any reason. The issue is that there needs to be sufficient reason. Is this sufficient reason? I think the action would have sufficient reason were it specifically targeted at the individual under suspicion. But a dragnet does not. Those innocent people were not under suspicion and were not doing anything wrong or illegal.


There is a distinction I tend to make here.

If some person was able to pick me out from a lineup because they physically saw me then that wasn't private and privacy laws don't apply.

So for instance capturing my face on CCTV in a public place isn't a privacy violation, same with my license plate in a public place.

However what happens on my private property is a privacy violation if it is recorded without consent.

Certain information isn't private, and that information being stored is fine. Where the line gets drawn is what's up for debate.

I surely would want my contact details and name saved by a company that I intend to do business with in either direction. However, if they spam me with information I should be able to lodge a harassment claim against them. It's not a privacy issue but a decency issue.


> However what happens on my private property is a privacy violation if it is recorded without consent.

And the biggest enablers of violation are things like ring doorbells and dashcams. There is no comeback in my country, don’t know about the US.

Governmental and commercial cctv has checks and balances. Domestic just goes onto planet wide databases with no control.


That notion isn't universal. In Germany, for instance, I can't install a camera pointing to the street.


I understand that completely. Just wanted to give a different viewpoint on that.

I'm all for finding a balance, it's just that many times people are against surveillance that does actually improve security or enforcement but mildly infringes on their "rights", when in reality they never had privacy in that situation to start with and the use of technology didn't substantially change that.

Youtube being forced to give up personal information based on who viewed a video is something I don't see as an issue. How is this any different from any other website getting the exact same order?

If you are doing something shady, you know how to obfuscate that information. If you aren't, sure, your "privacy" was "violated", but it was violated in a way that was legally allowed, and by law enforcement at that.

Living in a surveillance state where I have no choice but for the government to be able to track every single transaction I make financially and being able to link my cell number amongst other details directly to me, I feel like if I had to try to fight that I would only be causing myself undue anxiety and I've got enough legitimate reasons to be anxious.


>Youtube being forced to give up personal information based on who viewed a video is something I don't see as an issue. How is this any different from any other website getting the exact same order?

Scale. This isn't "subpoena to get all of Bob's info", it's "subpoena to get the info of all of the people tangentially related to Bob". Imagine if this was as tangential as "who watched this video with 10m views"? Is the YT history of 10m people worth it? Is it even useful?

The issue comes down to whether or not "Youtube" is a public place. All logistical terms point to "no", hence this story.

>your "privacy" was "violated" for sure but it was violated in a way that was legally allowed and by law enforcement at that.

That isn't how court orders work. They cannot make a single order to search an entire neighborhood's worth of houses because of drugs or whatever. That'd be N orders which may or may not go through based on the arguments made.


> and the use of technology didn't substantially change that.

This is complete BS. Technology made it scalable to track where everyone is and query it historically. This used to require tailing someone so it couldn’t be done at scale.


That same technology has also dramatically increased the cost of doing that.

Data isn't free and processing big data isn't cheap. As much as Google has the data, Google also has to pay to store that data.

You know what used to happen before and still happens now? An example: I live in a restricted access area. Restricted in the sense that to get in some guy needs to take your name and license plate.

For many, many business parks in my country that is still the de facto approach. There isn't really a camera watching that other than general CCTV that probably doesn't have the resolution to pick up the text on our license plates. It's cheaper for them to literally pay a guy to stand at a boom and get that information than to install the technology required to track that automatically.


> It's cheaper for them to literally pay a guy to stand at a boom and get that information than to install the technology required to track that automatically.

It depends on the local cost of labor, and the technology is easier to scale: imagine New York City having employees at the bridges writing down all the entering license plates! And then searching through those records for how many times a certain plate entered the city in a given time frame. To me the problem with technology is that it's used for lazy policing, just to inflate the numbers of solved cases. There were cases of cops feeding hand-drawn sketches of suspects to face recognition software. Every case becomes "throw something at the wall and see what sticks".


Your complaint seems to be more about a failing legal system than about unnecessary surveillance.

Legitimately, if an investigator put a hand-drawn sketch through facial recognition and that was even remotely allowed into evidence by the court, then the suspect evidence wasn't the issue.


I don't recall the actual case, but what I'm trying to point out is that technologies are used as dragnets to "fish for anything", be it facial recognition, cell tower logs or license plate reads. I'm all in favor of using any tool to catch criminals, but not to manufacture them, especially when the only goal is revenue for the agency du jour.


> Data isn't free

The adtech industry made data and its processing not just free (as in more than covered by the ad revenue) but outright profitable.

This is frankly a once-in-a-lifetime gift to the government, because we've not only built an unaccountable industrial-grade spying machine, but the government doesn't even have to pay for it, as it pays for itself and incentivizes its own expansion.


"Hunters don't kill the innocent animals - they look for the shifty-eyed ones that are probably the criminal element of their species!

If they're not guilty, why are they running?"


I never said any of that.

What I said is that, on this specific point, a smart criminal won't get caught, and you too can very easily obfuscate that very same data.


Thank you for so eloquently explaining the bootlicking, privacy-indifferent mindset I've never understood. Also sorry that I can't come up with a nicer way to say that.


Unless you're wealthy and powerful.

I guarantee the very wealthy or politically powerful have plenty of very-well-hidden cameras surrounding their properties.

Those rules are to keep you from catching and proving the powerful doing something they shouldn't.


I... well, I will be honest, this is the first time I've heard someone arguing that street facing CCTV was meant to catch _that_ kind of wrongdoing.

For the German context, and for the kind of CCTV I'm talking about, it makes no sense though.


Are businesses allowed to install cameras facing the street?


> If some person was able to pick me out from a lineup because they physically saw me then that wasn't private and privacy laws don't apply

It’s not an invasion of privacy. But it is a problem for other reasons

https://nobaproject.com/modules/eyewitness-testimony-and-mem....


So, when you're on your own property, a cellular tower shouldn't be allowed to let your mobile phone register? Because it will record your IMEI while you are on your private property.


Yeah but the electro-magnetic spectrum is a limited public good. You don’t own your broadcasted radio waves in the same way you own your house. Your cellphone is a pollutant.


Both the radio waves his cell phone emits, and the information (voltage changes on an ADSL line or photons moving in an optic fiber) used to communicate over the Internet, actually leave his home, and are then registered. So I think in nature it's the same as sending a letter. So let's symmetrically consider that you send a letter, and the police/agency asks the post office to attach, to each letter's information (from, to, weight, stamp...), the phone number from their database. If that happens for all letters going through a given sorting room, I can understand how that's an abuse.


> Privacy has been dead for a long time. The worst part is people don’t care.

I would argue “people don’t care” because… there isn’t a high enough number of people who suffer negative consequences from “their privacy being invaded”.


Maybe people would care more if there were more than two viable political parties to choose from?

Getting rid of First Past The Post voting in favor of something like Ranked Choice voting would allow people to vote 3rd party with no chance of a spoiler effect. This would introduce competition into the electoral process, improving the quality of candidates available to choose from. Even from within the current two mainstream political parties.


You play right into their hands by being demoralised (and trying to spread that to others)


Most people don't know. Or if they know, they don't understand the implications. As computer scientists, part of our whole shtick is to try to spread that knowledge far and wide. Most, I hazard, spend precious little time on that particular responsibility.


Most people think Facebook secretly records all their conversations because their ad tracking is just that good. They don't know the root cause but they absolutely do understand the implications.


"people do not care" - Please stop repeating this false statement. When you repeat it you give it legitimacy, and take the time when other statements could be made.

Most people are helpless to make change. Greater than one million adults serve in uniform services of some kind where they literally must comply. The ad budgets and massive, overflowing volumes of money generated by "surveillance capitalism" buy the consent of the mercenary finance occupations. None of this means "nobody cares"


>Please stop repeating this false statement.

Society, please stop making it true.

>Most people are helpless to make change.

If you get even 10,000 people to petition the government about something, you can get something rolling. This relatively moderate post probably had 10,000 views. You don't need to do much, but you do have to get enough people to care enough to spend 10 minutes making a request. If they can't even do that much... well, they don't care.

This is the issue with an individualistic mindset, you hyperfocus on what immediately benefits you. Not the wider community around you which is needed for such petitioning.


[flagged]


You can have privacy and many of these things. You may be interested in homomorphic encryption, or the weaker version, differential privacy. There are such things as zero-knowledge proofs.
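As a hedged illustration of the differential privacy idea, here is randomized response, one of the simplest mechanisms (the parameters and numbers below are arbitrary): each individual report is noisy and deniable, yet the aggregate rate can still be estimated, which is the kind of property that lets you answer "roughly how many" questions without keeping an exact per-person record.

    # Minimal randomized-response sketch; parameters chosen arbitrarily.
    import random

    def randomized_response(truth: bool, p_truth: float = 0.75) -> bool:
        """Report the true answer with probability p_truth, otherwise a fair coin flip."""
        return truth if random.random() < p_truth else random.random() < 0.5

    def estimate_rate(reports, p_truth: float = 0.75) -> float:
        """Invert the noise: E[report] = p_truth * rate + (1 - p_truth) * 0.5."""
        observed = sum(reports) / len(reports)
        return (observed - (1 - p_truth) * 0.5) / p_truth

    truths = [random.random() < 0.3 for _ in range(100_000)]   # true rate is about 30%
    reports = [randomized_response(t) for t in truths]
    print(round(estimate_rate(reports), 3))                    # prints roughly 0.3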

But I think it is far easier to have these technologies without doing the encrypted aspects or protecting privacy. Then it is more a "never let a tragedy go to waste" situation. Paved with good intentions, right? This will always happen though as we rush and are unable to do things the right way. Often the right way can take the same amount of time but generally appears to take longer and that is enough, even if the wrong way actually takes longer. Because it also matters at what resolution we're concerned with.

So I'm saying it is mostly stupidity, not evil. Though evil loves and leverages stupidity.


You honestly think there is some master plan? I love a good conspiracy theory but that is nuts.

Nobody has made a plan, so it hasn't been long planned. Shit is just happening, and we as a society, a culture and a race are adapting to it like we've always done.


Not a master plan, but definitely guides. I mean there are all sorts of meetings that occur where the rich and powerful meet and arrange this or that - e.g. Bilderberg, WEF, Trilateral Commission, UN, WHO, etc. These aren't elected bodies, but somehow all the governments act in tandem according to their pronouncements. It's as if voting doesn't matter, as if it's merely a pressure release valve.

Elsewhere I posted about Technocracy Inc - which Elon's grandfather was involved with. (https://newsinteractives.cbc.ca/longform/technocracy-incorpo...). Just 2 generations later, and you have this guy apparently putting out electric cars, space ships, neural laces, etc. It's more coherent than you think. Bill Gates's father headed Planned Parenthood.

It seems to me that there are a group of very rich individuals that try to shape policy and direct this or that. It's really a very natural state I think - we see this everywhere - e.g. a headmaster of a school will direct the school towards this or that, a CEO does the same in their company. This is the same principle, except at a far higher level. I don't think it's even debatable that this is the case, tbh.


Well, I guess we're just naturally evolving and adapting into enslaving ourselves, just following nature's course... There doesn't need to be a "master plan", but clearly there are some working themes


Yep. Humans are lazy, greedy, and gullible. Give us a device which feeds our brains constant little dopamine hits and we will sacrifice anything for it.

Remember the experiments with rats and pigeons that could administer hits of pleasure-giving drugs? They all hit that button 'til they died. We are no different.


Well, humans do teach and practice self-control and restraint, so I wouldn't say we're all the same as rats. We just need more clever devices. The variety of content on the internet, constantly updating, means this can tailor to almost any taste.

Even then, sure, there have been extreme examples of such things, like a few Korean individuals dying in their rooms playing MMOs, not even taking enough care to feed themselves.


We know of the ruling class regularly getting together to plan, and surely they meet and collude far more often than we know about. Of course they are making plans.


This sounds scary, and yet I seem to be unharmed.


Shall we wait on the laws until you personally come to some harm?


No, but the argument did use "you" to imply that the reader was harmed. I consider that an illegitimate scare tactic. It would be better to talk about how someone else might be harmed.


If you weren't one of the 30k watching the video, you are the "someone else".


One case is criminal gangs buying data from data brokers to scam the elderly.


Then they came for me. And there was no one left to speak out for me


Blackstone: "It is better that ten guilty persons escape than that one innocent suffer"

So not sure where you got the impression he's okay with up to 100 people being disturbed so we can catch one bad guy.

But then, he wasn't really talking about that was he? Better the guilty go free than the innocent suffer what? He was, essentially, talking about the principle of innocent until proven guilty; that innocent people shouldn't suffer by being punished for a crime unjustly.

2999 innocent people, in your formulation, though, are not being punished for a crime. They're not even being accused of a crime.


> innocent people, in your formulation, though, are not being punished for a crime. They're not even being accused of a crime.

They are, however, being harmed.

It's easier to use historical examples because they're not afflicted with modern politics.

The FBI was known to investigate and harass civil rights leaders during the civil rights movement. Suppose they want to do that today.

Step one, come up with some pretext for why they should get a list of all the people who watched some video. It only has to be strong enough to get the list, not result in a conviction, because the point is only to get the list. Meanwhile the system is designed to punish them for a thin pretext by excluding the evidence when they go to charge someone and their lawyer provides context and an adversarial voice, but since their goal here isn't to gather evidence for a particular investigation, that's no deterrent.

Step two, now that they have the list of people interested in this type of content they can go on a fishing expedition looking for ways to harass them or charge them with unrelated crimes. This harms them, they're innocent people, therefore this should be prevented. Ideally by never recording this type of information to begin with.

There is a reason good librarians are wary about keeping a history of who borrowed a particular book.


Surely you are not contending that Blackstone was of the position that no innocent person should be investigated, however briefly, unless it results in at least 10 convictions.

I very much agree that (some, probably minimal) harm is being done to these people. Pretending that they "suffer" in the sense Blackstone was using the word is disingenuous.


Being investigated is a red herring. The problem is from the other end. Your premise is that a person being investigated when they're innocent of the original crime is basically harmless because the investigation will come to naught. The actual issue is that if they can find a pretext to get a list of all of the people who viewed some content they don't approve of, now they have a list of targets with which to play "bring me the man and I'll find you the crime" and that is a harm in need of preventing.


> Your premise is that a person being investigated when they're innocent of the original crime is basically harmless because the investigation will come to naught.

Not at all. I would say that it's usually (not always!) small in the particulars but adds up in aggregate, and that we should be a lot more careful with how much surveillance we allow.

I just would also say that the kinds or amounts of harm being done there are manifestly not what Blackstone was talking about in his "formulation" as it leads immediately to absurd conclusions that go very well past the present case.

I will note here that "there is a concern here analogous to Blackstone's ratio" is a different thing than, paraphrasing what was up thread, "this is substantially more extreme than Blackstone's ratio should forbid".

And in case I haven't said it in thread anywhere, I share concerns about surveillance. I just think if we are enlisting support from historical figures, we should find a quote where they're talking about the question or acknowledge the distance, rather than pretending the quote means something it didn't - that will only turn off those who might be persuaded.


> "there is a concern here analogous to Blackstone's ratio" is a different thing than, paraphrasing what was up thread, "this is substantially more extreme than Blackstone's ratio should forbid".

I agree with this. What's happening here is different than the scenario in the original ratio, even though it's a similar concern.

> I just would also say that the kinds or amounts of harm being done there are manifestly not what Blackstone was talking about in his "formulation" as it leads immediately to absurd conclusions that go very well past the present case.

If we direct ourselves to the case at hand, I'm not sure that a general rule that the government can't compel innocent bystanders to assist an investigation against their will would even be a net negative, much less cause serious problems. When a crime is committed people will generally be inclined to help bring the perpetrators to justice, because who wants thieves and murderers and so on going unpunished? Whereas if someone is disinclined to help, we might consider that they could have a reason, e.g. because the law being enforced is unjust or they believe the investigation is not being conducted in good faith, or they simply don't trust the government with the information, at which point the ability to refuse acts as a reasonable check on government power.

> I just think if we are enlisting support from historical figures, we should find a quote where they're talking about the question or acknowledge the distance, rather than pretending the quote means something it didn't - that will only turn off those who might be persuaded.

I feel like historical quotes tend to detract from discussions in general, because they're effectively an appeal to authority and then the discussion turns to exactly where we are now, debating whether the current situation can be distinguished from the original, which is a separate matter from whether what's happening in the modern case is reasonable or satisfactory in its own right.


> What's happening here is different than the scenario in the original ratio, even though it's a similar concern.

Correct me if I'm wrong, but I'm pretty sure Blackstone wrote about negative or natural rights.

In fact, let me pull out more context around the exact quote. He specifically addresses direct punishment but immediately after is the nature of having the duty to defend one's innocence. Which is exactly the case here.

  Fourthly, all presumptive evidence of felony should be admitted cautiously, for the law holds that ***it is better that ten guilty persons escape than that one innocent suffer.*** And Sir Matthew Hale in particular lays down two rules most prudent and necessary to be observed: 1. Never to convict a man for stealing the goods of a person unknown, merely because he will give no account how he came by them, unless an actual felony be proved of such goods; and, 2. Never to convict any person of murder or manslaughter till at least the body be found dead; on account of two instances he mentions where persons were executed for the murder of others who were then alive but missing.

  Lastly, it was an antient and commonly-received practice that as counsel was not allowed to any prisoner accused of a capital crime, so neither should he be suffered to exculpate himself by the testimony of any witnesses.

I would not be surprised if Blackstone found the act of investigation without the qualification of sufficient suspicion to be a gross injustice and directly relevant to his intent, as this is a less inconvenient version of locking everyone in a room and interviewing them, checking their pockets for stolen goods, before they leave. The negative or god-given right of innocence is innate. The punishment is the accusation and search, which is an explicit infringement on the natural right. Yes, rights can be infringed upon, but not without due cause and not simply because one is in a position of authority.

I know that this is a point of contention in this (these) discussions, but I stand by that a right is being violated and harm is being done by the simple act of investigation. Mass surveillance (which is mass investigation), is an infringement on our god given rights. The point is to have friction for the infringement of rights. All rights can be violated, but they must need sufficient reason. It does not matter if these rights seem inconsequential or not. Because at the end of the day, that is a matter of opinion and perspective. Blackstone was writing about authoritarian governments and the birth of America was similarly founded on the idea of treating government as an adversary. These were all part of the same conversation, and they were happening at the same time.

I do not think I am taking the historical quote out of context. I think it is more in context than most realize. But I'm neither a historian nor a lawyer, so maybe there is additional context I am missing. But as far as I can tell, this is all related and we should not be distinguishing investigation (or from the other side of the same coin, exculpation) from punishment as these are in the same concept of reducing one's rights. They are just a matter of degree.

https://oll.libertyfund.org/titles/sharswood-commentaries-on...


> He specifically addresses direct punishment but immediately after is the nature of having the duty to defend one's innocence.

The issue is that the ratio can't mean much outside the realm of a criminal conviction when any of the rest of it would need a different standard.

Suppose we want to evaluate if it's reasonable for the police to search your residence for a murder weapon. Should we let 100 guilty people go free to avoid one search of an innocent person? That's probably not right, a search is enough of an imposition to require probable cause, but if you had to prove the crime to the same level as would be necessary for a conviction in order to get a warrant then searches would always be superfluous because they could only happen in cases where guilt is already fully established without the results of the search.

Conversely, with this YouTube kind of situation where the police want data on large numbers of people, the majority of whom are fully expected to be innocent, they're not even reaching probable cause for those people. Which is a lesser standard for justifiable reasons but it's still not one which is being met for those people. And so it's still a problem, but it's a different problem with a different standard.


I find this interpretation odd. I do not see the numbers as meaningful in a literal sense but rather in a means of making a point and a grounding for the surrounding abstraction. I think the point is to explicitly discuss these bounds and view them as a spectrum. To think of them in the abstract but to push back against authority.

Certainly Blackstone was not saying that infringement of rights (punishment) should not happen under any circumstance. Rather that there should be significant friction and that we should take great care to ensure that this is not eroded.


Another reason a lot of people got their hackles up is that you also had the ratio backwards:

> Supposing every person watched that video 10 times AND supposing the target was one of the viewers (it really isn't clear that this is true), that's 2999 people who have had their rights violated to search for one. I believe Blackstone has something to say about this[0]. Literally 30x Blackstone's ratio

"3000 innocent people for every one possibly guilty" isn't 30x Blackstone's ratio, it's 300,000x and worse, because the ratio is "100 actually guilty people for every one innocent". Of course, this actually helps your argument -- violating the rights of thousands of innocent people is unjustifiable -- but once you've given everyone cause to pause and work out what's wrong, they're going to reply with whatever they can find.


So what? We're going to derail an entire point because a gaffe was made and everyone still understands the argument?

It's worth pointing out, but not worth derailing an entire conversation. It generates noise that prevents us from actually discussing the issues at hand. We're people, not computers. We can handle mistakes (and look how much work we put into computers to make them do this). And this thread blew up, you aren't the first to point it out. So forgive me if I'm a bit exhausted.[0]

I've made several mistakes (including the first blackstone link not pasting and pasting the whole comment instead of the specific part I was responding to (thanks firefox)), and so have you, and others. But let's not make the conversation about that. We'll never get shit done. We can take an aside to resolve any confusion, but it is an aside. Clearly by your explanation here you understood the point. And clearly we know that the number itself is arbitrary. Are we gonna shit talk everything Franklin said because he used 100 instead of Blackstone's original 10? No, because the number isn't what's consequential.

[0] We do meet each other here a lot and I have respect for you. It's why I'll take the time to respond to you. But I also know you to be better than this. I think you can also understand why it can be exhausting to be overloaded with responses and with a large number of people trying to tear down my argument by things that are not actually important to the argument. Specifically when the complaints make it clear that the correct interpretation was actually found. I'm happy to correct and appreciate mistakes being pointed out, but too many internet conversations just get derailed this way. The distinction of correcting vs derailing is critical, and the subsequent emotional response is clearly different in the two cases.

I'm happy to continue the conversation w.r.t. the actual topic (even where we disagree), but it seems like wasted time to argue over a gaffe that we both know was made when we understand what was said despite it.


This is true of literally any investigation though


It isn't. If the police are investigating John Smith and they get a warrant for the files of John Smith then they don't also get the files of anybody else along with them.


And importantly, there has to be sufficient reason for investigating John Smith. It can't be arbitrary (he looks funny, has a limp, is black, is gay, plays Doom, is a Muslim, etc). Rights can be infringed, but they need reason. And they need good reason.


How do police find out that John Smith is the person whose files they want to get a warrant for?

Maybe because John Smith was one of only eleven people who signed in to a building on the day a crime took place, and he signed out right after the crime happened.

But should the police not look at the sign-in sheet at the building because that will infringe the privacy of ten innocent people?


> How do police find out that John Smith is the person whose files they want to get a warrant for?

The victims go to the police and tell them that John Smith stole from them, so the police go and seize his files to confirm that the victims are telling the truth.

> But should the police not look at the sign-in sheet at the building because that will infringe the privacy of ten innocent people?

Asking for the sign-in sheet and seizing the sign-in sheet by force against the wishes of its owner are two different things.


I would contend that Blackstone was of the position that no innocent person who was not of sufficient suspicion of committing a crime should be investigated.

These are innocent bystanders. There is nothing suspicious about their activities other than they did something that a suspected criminal did. A perfectly legal activity? To take this to the ridiculous side, are we going to investigate everyone who took a poop at a specific time because a criminal did?

https://www.youtube.com/watch?v=DJklHwoYgBQ


Wait. Aren't "innocent bystanders" literally the first people the police wants to get ahold of, to interview and yes, potentially investigate if there's something off? People don't spontaneously become suspects, as if by radioactive decay; some degree of investigation comes first and is what turns "innocent rando" into a suspect.


> Aren't "innocent bystanders" literally the first people the police wants to get ahold of, to interview

Yes. But that's not the same

> potentially investigate if there's something off?

If you're asking for information from people who witnessed a crime *and who are volunteering information* (which is not investigating those people and not accusing them of a crime, nor is declining to volunteer information a suspicious activity), and they then generate suspicious evidence, then yes, that enables capacity for investigation. It is true that things are not static, time exists, and entropy marches on.

That's the difference. There is nothing that these people did that warrants suspicion. These people are not being asked or questioned. This was not done voluntarily. They didn't even know this was happening to them. This was a thing imposed upon them, full stop.

I want to give a scenario to help make things clear. Suppose I send nudes to my partner. The government intercepts these without my knowledge, looks at them, and deletes them, and literally nothing else happens. Is this okay? I did not know this happened to me. No "harm" has befallen me. And as far as I know, nothing has changed in my life. But then later I find out this happened. Let's say 20 years later. I feel upset. Do you not think I am justified in being upset? I think I am. My rights were violated. It is worse that it was done in secrecy, because it is difficult for me to seek justice. It is because I have the right to privacy. It is a natural, de facto, negative, god-given right. They put my information at risk by simply intercepting it and making a copy. It was unnecessary and unjustified.


> Blackstone was of the position that no innocent person who was not of sufficient suspicion of committing a crime should be investigated

I expect so. But pretending that's what he was talking about in the quote you were referencing is going to undermine your (our, probably) position with those not already convinced.


I'm not convinced I am taking him out of context[0]. Was Blackstone not also discussing natural rights? I see him as viewing punishments as infringements on one's rights. As a spectrum. And those rights even include the simple aspect of presumption of innocence. My best understanding is that so much of it is literally about the mundane and simple. Because natural rights are... well... natural. They are things we have until we don't. That's why they are called negative rights, because they need to be removed, not given. Punishments (infringements) can range from extremely minor to major. But they are still one and the same, because it is about the concept in the abstract. Or rather, in generalized form.

As far as I can tell, this is explicitly within the context of the quote.

That said, I do see your point and appreciate your feedback. Maybe this can be an opportunity to turn this into a lesson? It seems too cumbersome to bring up from the get-go and similarly backfire. But discussing in the abstract is a difficult task when it is neither a natural way of thinking nor is it a common way that is taught. But I still think it is an important tool and something that does distinguish humanity. I am open to suggestions (I think HN of all places is more likely to be successful when discussing things in the abstract, but it is still general public).

[0] https://news.ycombinator.com/item?id=39798280


> They are, however, being harmed.

No they're not. Which ones will live a day less of their lives?

If I'm on surveillance footage near a crime scene, police have the right to look for me and question me. This isn't any different. It's just different sets of photons and electrons.

I respect the rights to privacy, but a crime happened, and the police have the tools to investigate. It's barely an inconvenience.

The burden of proof will still be on the investigators and prosecution to find out and show beyond a shadow of a doubt who performed the swatting.


> Which ones will live a day less of their lives?

The ones who, having had their political inclinations revealed to adversarial law enforcement, then become subject to harassment for those views which should have been private.

> If I'm on surveillance footage near a crime scene, police have the right to look for me and question me.

The question is whether they should have the right to seize the surveillance footage by force if the proprietors would rather protect the privacy of their users. The third party doctrine is wrongful.

And given that it exists, so is keeping records like this that can then be seized using it.

> The burden of proof will still be on the investigators and prosecution to find out and show beyond a shadow of a doubt who performed the swatting.

This is assuming they're trying to prosecute a particular crime rather than using a crime as a pretext to get a list of names.

And it's about the principle, not the particular case. Suppose a protester commits a crime and now they want a list of all the protesters. Any possibility for harm there?


> The question is whether they should have the right to seize the surveillance footage by force if the proprietors would rather protect the privacy of their users.

If there was a crime committed outside your home and you have surveillance footage that has captured passers-by, you would not offer it to the police because you would rather protect the privacy of all the anonymous passers-by when one of them is likely the culprit?

That strikes me as highly unlikely. And even if you wouldn't, I am willing to bet that most people would. Why care about the privacy of anonymous passers-by when you can help catch the perpetrator and increase safety around your home?


>Why care about the privacy of anonymous passers by when you can help catch the perpetrator and increase safety around your home?

Well, if we're resorting to hyperbole comparing a murder to "watching a youtube video": say you knew that multiple whistleblowers passed by in your footage, and they are all wanted by the government. Turning over the footage puts those whistleblowers in danger, and their only "crime" is revealing government corruption. Is catching one crook worth endangering multiple good people?


> If there was a crime committed outside your home and you have surveillance footage that has captured passers by, you would not offer it to the police?

These are not the same. You might think the difference is subtle, but I'll tell you that subtlety matters. And matters a lot.

And tbh, these two scenarios are quite different.


I think the analogy is rather strong. Where does it differ?


In one case the homeowner has surveillance footage and freely offers it to the police because they want to assist the investigation. In the other case the police seize the footage by force even though the homeowner is totally innocent and might not trust the government with a record of all of their own comings and goings and associates etc.


Just confirming that this too is the distinction I see. I'll expand since there is so much confusion around this:

The difference is how information was gathered.

People volunteering information to an authority? Perfectly fine (especially in cases when information was not requested).

People being compelled to provide information? Needs friction (checks and balances).

People being compelled to provide information about others, who are then unknowingly investigated? Needs even more friction.

It's also important to note that in the hypothetical, the random passers-by are not being investigated either. A specific type of behavior is being sought: either the explicit act of the crime being committed, or a STRONG correlation with another piece of evidence (such as already knowing what the criminal looks like and trying to find a better view). Random people are not considered suspects.

In the article's case, all viewers were considered suspects.


> No they're not. Which ones will live a day less of their lives?

There are cases like the bombing in Madrid where US agencies cast a wide net over possible suspects using data about people who had converted to Islam, and then used a bad fingerprint match (which everyone told them was garbage) to terrorize one suspect for weeks. They had no evidence that the guy was involved, they had no evidence that any of their suspects was involved, but they had a narrative and were happy about every bit of data that supported it. Meanwhile, Spain convicted the actual bombers.


> There are cases like the bombing in Madrid where US agencies cast a wide net over possible suspects using data about people who had converted to Islam, and then used a bad fingerprint match (which everyone told them was garbage) to terrorize one suspect for weeks.

Some hyperbole in your telling of the story and failure to mention that he was awarded restitution. According to Wikipedia:

Brandon Mayfield (born July 15, 1966) is a Muslim-American convert in Washington County, Oregon, who was wrongfully detained in connection with the 2004 Madrid train bombings on the basis of a faulty fingerprint match. On May 6, 2004, the FBI arrested Mayfield as a material witness in connection with the Madrid attacks, and held him for two weeks, before releasing him with a public apology following Spanish authorities identifying another suspect.[1] A United States DOJ internal review later acknowledged serious errors in the FBI investigation. Ensuing lawsuits resulted in a $2 million settlement.

https://en.wikipedia.org/wiki/Brandon_Mayfield

What point are you trying to make with this example?


That it should never have happened?


And? As someone harassed and stalked by police officers (many years ago now), I assure you, restitution after the fact fixes nothing. I'd rather not be harassed and stalked in the first place, especially when such activities have led to a lifelong distrust of the people who in theoryyy I should be able to turn to in trust. Today I wouldn't engage with a police officer regardless of circumstance. They are just another armed gang to me.


What's your point? Someone being wrongly detained/prosecuted/charged/hanged/etc in the past does not make it right now, and it doesn't make it any less wrong then. Nor are the people of a government homogeneous, especially in America. The great American pastime is shitting on America. But if you're going to dismiss a wrong because wrong was done in the past (or even let it slide or be apathetic), that is enabling. Being upset and angry is very different from apathy.


My point was that it was wrong. I do not agree with casting wide nets in the hope that someone might fit whatever profile the police or other agencies have pulled out of their asses.


Your point came off as whataboutism. This may not be what was intended, but this is how I interpreted it and it appears that others did as well. Thank you for clarifying and I'm sorry we miscommunicated.


> Which ones will live a day less of their lives?

Your liberties encompass so much more than this, and a government that treads on them recklessly does far more damage than simply wasting an individual's time.

> It's barely an inconvenience.

You assume it's not. How would you verify this? Why should you have to?


> Your liberties encompass so much more than this

You don't have a liberty from being investigated if they have evidence. They're not snooping around in your home without cause. The swatter was watching the live stream, and the timestamped IP logs can corroborate.

Just because it was an IP address and not a face or license plate on camera doesn't make it any different. You can't hide behind a chosen technology stack as a shield when the fundamentals of the case are the same.


> You don't have a liberty from being investigated if they have evidence.

The evidence has to be specific to an individual. In these cases it obviously isn't.

> The swatter was watching the live stream, and the timestamped IP logs can corroborate.

He wasn't the only one doing so.

> Just because it was an IP address and not a face or license plate on camera doesn't make it any different.

Yes it does. There are wildly different expectations of privacy between these two scenarios; this is immediately apparent and easily demonstrable.

I feel like you're just trying to win an argument and not actually thinking this through. I'm not saying you're fundamentally wrong on the facts; it's just that following your conclusions blindly does in fact violate individual rights, and those rights are superior to the government's "right" to investigate crime.

> You can't hide behind a chosen technology stack as a shield when the fundamentals of the case are the same.

You can't hide behind weak evidence to violate the privacy of groups of individuals. The crime has already occurred. The damage is done. You can't solve that problem by causing _more_ damage.


> This isn't any different. It's just different sets of photons and electrons.

And a dictator is just another set of cells and organic compounds? You can't break things down like this, because then literally everything is the same. Literally everything you see is just a different set of photons and electrons. But those things have real effects; they aren't fungible. I don't care that my partner sees pictures of me naked, but I sure do care if cops or "the government" do, despite it being "just a different set of photons and electrons."

> The burden of proof will still be on the investigators and prosecution to find out and show beyond a shadow of a doubt who performed the swatting.

The burden of proof is step by step. I don't think I should have to cite the 4th Amendment but

  The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and ***no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.***
The setup was to treat the government as an adversary, which requires understanding positive rights vs negative rights[0]. Obviously rights are not infinite, but there should be friction. It doesn't matter whether the thing seems innocent or inconsequential; what matters is power. Perception shifts and creeps, which is why people take a stand on what might seem trivial. [1]

[0] https://en.wikipedia.org/wiki/Negative_and_positive_rights

[1] https://encyclopedia.ushmm.org/content/en/article/martin-nie...


> The setup was to treat the government as an adversary. Needing to understand positive rights vs negative rights

+1 on citing the constitution's wisdom of treating the government itself as adversarial, due to the enormous power it has.

+1 on pointing at the difference between positive and negative rights in this context.


There was a swatting incident and they have a time window where they'd like to corroborate IP logs.

An IP address is no different from a license plate on camera. It's a lead, and the evidence was gathered at a crime scene. Nobody's home is being entered. Nobody's iCloud account is being unlocked and ransacked. Gathering these logs alone won't lead to those things happening either.

I'm all for limits on power, but this seems entirely reasonable. This isn't a fishing expedition. IANAL, but I don't see how the 4th would be violated by either a court order or a willing third party handing over the logs.

If the investigators get the IP logs, they shouldn't then be able to take those logs and ask the ISP for everything that those people were doing. The burden will be on the investigators to find more evidence linking one of those IPs to the call.
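
To make that scoping concrete, here is a minimal sketch in Python (made-up data and field names, not anything from the actual case) of what "corroborate IP logs against a time window" amounts to: the only thing extracted is the set of addresses active during the incident, not anyone's broader activity.

  from datetime import datetime

  # Hypothetical livestream view log: (ip, timestamp) pairs -- not real data.
  view_log = [
      ("203.0.113.7",  datetime(2023, 1, 5, 14, 2)),
      ("198.51.100.4", datetime(2023, 1, 5, 14, 9)),
      ("203.0.113.7",  datetime(2023, 1, 6, 21, 30)),
  ]

  # The incident window from the swatting call (made-up times).
  window_start = datetime(2023, 1, 5, 13, 55)
  window_end   = datetime(2023, 1, 5, 14, 20)

  # Only addresses active inside the window are candidates; nothing else is kept.
  candidates = {ip for ip, ts in view_log if window_start <= ts <= window_end}
  print(candidates)  # {'203.0.113.7', '198.51.100.4'}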

More crime will happen digitally year by year. Swatting has already entered the public consciousness. Just wait until people start strapping bombs to FPV drones or calling grandma with your voice.

We shouldn't treat the software stack as some kind of impenetrable legal barrier that shrouds investigation. We should respect and enhance limits on power, but we also need to modernize the judicial tools to tackle the new reality.

The framers couldn't have imagined "swatting". The law needs to understand this. It should provide scoped-down investigatory tools that simultaneously guard and respect our constitutional rights and privacy. Access to anything beyond the scope of an actual crime that took place should be restricted.


In your story, the injustices are 1) the police going on a fishing expedition, and 2) the police using the data gained through an investigation to unjustly harass people. Those are bad things and we should have laws to prevent that and punish people who do so.

I agree it would be bad if they were making the request in furtherance of a conspiracy to do either of those things.

But the police asking Google for a list of people who viewed a video is not, in itself, one of those things. It's similar to them asking a business owner whose camera overlooks a street near a crime scene to hand over surveillance footage (which will include innocent passers-by), or asking a business that sells a product known to have been used by a criminal to provide a list of purchasers of that product (which will include innocent purchasers).

Many such businesses will voluntarily hand over such information to assist with an enquiry. Some businesses might refuse, or might choose not to have such information.

And this is why judges are involved in the process of issuing warrants and grand juries in the process of issuing subpoenas when the police or a prosecutor want to compel the production of evidence of that sort.

But it just seems inevitable that, at the beginning of an investigation into a crime where the perpetrator is unknown, the first step is to identify possible suspects; by definition not all of the people so identified will end up being investigated. How are the police to do that if they can’t ask anyone for information that might bring innocent people’s names to their attention?

I appreciate it seems idealistic maybe, but it feels to me that we need rules that ensure ‘coming to the attention of the police in the course of an investigation’ is genuinely harmless; not rules that assume it automatically exposes you to harm.


>And this is why judges are involved in the process of issuing warrants and grand juries in the process of issuing subpoenas when the police or a prosecutor want to compel the production of evidence of that sort.

That's the issue: court orders aren't free to make, and factors like "it is filming a public street" are taken into account. There isn't anything "public" about viewing a video stored on the server of a large private website. And therein lies the rub.

Also, the story here isn't just "get me a list of 30k people who watched a video", which may be reasonable:

>The court orders show the government telling Google to provide the names, addresses, telephone numbers and user activity for all Google account users who accessed the YouTube videos between January 1 and January 8, 2023. The government also wanted the IP addresses of non-Google account owners who viewed the videos.

They want ALL your Google activity for a week, because you watched a video that may or may not have been recommended to you by Google itself. That can include schedules, emails, financial transactions, Maps inquiries, chat records, etc. It depends on how much you use Google, but Google can power a lot of aspects of life these days.

Even if you aren't on Google you have your IP revealed for simply viewing a video. That feels like an overreach.

------

The second factor is that they barely have a specific suspect. They just think "they saw this video -> they may be money laundering":

> Google to hand over the information as part of an investigation into someone who uses the name "elonmuskwhm" online.

I can't believe that passed a court order. Some random handle is selling bitcoin and may have watched this video, so let's get all the data of everyone who watched this tutorial at this time.

>How are the police to do that if they can’t ask anyone for information that might bring innocent people’s names to their attention?

By narrowing it down to fewer than 30k people. That can be an entire town in some smaller areas.

>but it feels to me that we need rules that ensure ‘coming to the attention of the police in the course of an investigation’ is genuinely harmless; not rules that assume it automatically exposes you to harm.

In my mind this is the more idealistic scenario. They've had decades to espouse this sentiment and they aren't even close to doing so.

Also, the issue is that it's not like the government deletes this data after they are done. Quite the contrary. Maybe the US government needs its own GDPR protocol so this won't be pulled up on record down the line.


Well, I forgot to link, but from the wiki:

  Other commentators have echoed the principle. Benjamin Franklin stated it as: "it is better 100 guilty Persons should escape than that one innocent Person should suffer"
I went with Franklin because we are specifically talking about America, but let's be honest, the number doesn't matter, and it seems you agree. Let's focus on that. Because I'm 100% with you: these aren't even people who have been accused. And even those who are accused have rights.

https://en.wikipedia.org/wiki/Blackstone%27s_ratio


You've still got the ratio backwards. Franklin says if you know 101 people watched a video, and 100 of them are guilty of a crime, you can't just round up all 101 and throw them in jail. I.e., if you have a standard of punishment that would convict even one innocent person for every 100 guilty people it catches, it's not a good standard.

Which I think we can all agree with.

But that's not what's happening here, is it?


> you can't just round up all 101 and throw them in jail

Yes. "100 people" (or whatever) had their rights violated. Sure, not as bad as jail, but it is still in the spirit. I'm not sure why you think I have it backwards, I think we're just using different perspectives.

But I'm not into being pedantic if we understand one another.


So it is considered punishment to have the watch information revealed to the state?


I agree that it's egregious, but Sir Blackstone was talking about punishment, especially relating to execution, and I think perhaps the ratio can be adjusted significantly downward when the cost to the innocent is much lower. Otherwise, the only reasonable search would be when a government official is already certain of your guilt.


I'd consider your rights being violated "punishment."

Blackstone was talking in the abstract. Clearly Franklin was too considering many of the other things he's known for saying.


> I'd consider your rights being violated "punishment."

You are wrong. Punishment is when you impose a penalty as retribution for an offense.


You can call me wrong, but maybe first ensure you understand negative rights.

I understand why you think I'm wrong, but I hope you understand why I think that way. We can disagree, and that is fine, but let's not act as if there are objective answers in social constructs.

Because I do think punishment is an imposed penalty; in this case, one imposed on your rights. Rights are abstract, and they are neither binary nor clearly discrete. House arrest is not jail, nor are fines. But as communicated to you elsewhere, the 4th Amendment is about ensuring friction before someone's negative rights are removed.

But I disagree that punishment is necessarily a penalty imposed as retribution for an offense. You imply that this requires an actual offense to have been committed. I assure you that punishment can be imposed for any arbitrary reason. I can also assure you that punishment is a spectrum, from extremely minor (as I think we'd agree it is in this case) to extremely harsh.


You two are having a debate over semantics. Most dictionaries would probably agree that punishment is, definitionally, retribution for an offense. You cannot punish without a reason, because that's not punishment, it's just hurting people. Your definition may be different, and that's fine, but you can't logic your way out of a difference of opinion on the definition of a word.


>Most dictionaries would probably agree that punishment is, definitionally, retribution for an offense.

If your entire family is "interrogated" over some crime you did not commit, is that "punishment"? Maybe not legally, but it's a stressful time that may or may not put a strain on the relationship between you, your spouse, and your kids. And it's not like this is an investigator coming in for tea and asking a few questions, either.

Morally, it would indeed be a punishment. There is now a bunch of nervous sentiment in your family for no reason, all because you were at the wrong place at the wrong time. Or, if we want to be frank, you looked the wrong race at the wrong place at the wrong time.


Yes, but the semantics here matter.

We're following the ideas of Blackstone and, subsequently, those who founded the country in question. The US was founded under the idea that natural (or negative) rights were exceptionally important[0] AND that the government should be treated as an adversary (since it is the main body that could impinge upon natural rights). The idea isn't that natural rights can't be violated for any reason, but rather that there needs to be friction at every step, including the smallest. The reasoning being that they were intimately familiar with power creep.

So yeah, semantics, but literally the semantics that an entire government was overthrown over. ¯\_(ツ)_/¯

[0] So much so that they appear in the second paragraph of the Declaration of Independence[1] (which goes on essentially ranting about this topic)

  We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.--That to secure these rights, Governments are instituted among Men, deriving their just powers from the consent of the governed, --That whenever any Form of Government becomes destructive of these ends, it is the Right of the People to alter or to abolish it, and to institute new Government, laying its foundation on such principles and organizing its powers in such form, as to them shall seem most likely to effect their Safety and Happiness.
They also appear in the preamble of the Constitution, and are the subject of the 1st, 4th, 5th, 6th (arguably the 8th), and 9th amendments (not to mention those that came later, like the 13th).

[1] https://www.archives.gov/founding-docs/declaration-transcrip...


> Supposing every person watched that video 10 times AND supposing the target was one of the viewers (it really isn't clear that this is true), that's 2999 people who have had their rights violated to search for one

I think whether their rights are violated depends entirely on what sort of information is handed over. Consider acquiring surveillance footage that has plenty of foot traffic, but a suspect is known to have passed by. The police are typically permitted to review that footage even though plenty of innocent people were captured on that video.


>depends entirely on what sort of information is handed over.

Apparently:

>The court orders show the government telling Google to provide the names, addresses, telephone numbers and user activity for all Google account users who accessed the YouTube videos between January 1 and January 8, 2023. The government also wanted the IP addresses of non-Google account owners who viewed the videos.

That definitely seems like an overreach.


Any kind of search can be deemed constitutional if it goes through a warrant process, which is the point of warrants. This story is less about how the information was taken and more about whether or not the warrant process and 4th Amendment rights were properly followed.

This would then be mixed in with the question of whether or not new forms of data (like video views) would equate to previous forms of similar data searches that police have obtained warrants for (like reviewing CCTV).


If it is an order by a court, then I think it is OK. Then it is not mass surveillance, and for solving a crime it is useful.

I wonder what kind of video it is. Maybe a shared link, so only people who were secretly told about it knew about it, and they have become suspects. Is it mentioned in the Forbes article?

And I wonder if people abuse videos on youtube by encrypting the content with a key that is then shared separately.
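
For what it's worth, that scheme is technically trivial: the uploaded "video" would just be ciphertext, useless without a key shared out of band. A minimal sketch of the idea in Python using the third-party cryptography package (purely illustrative; nothing in the article suggests this was actually done):

  from cryptography.fernet import Fernet

  # The key is shared privately and never uploaded alongside the video.
  key = Fernet.generate_key()

  payload = b"hidden message or file bytes"  # hypothetical content
  ciphertext = Fernet(key).encrypt(payload)  # this blob is what would get embedded in an upload

  # Only someone holding the key can recover the original content.
  assert Fernet(key).decrypt(ciphertext) == payload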


> If it is an order by a court, then I think it is OK. Then it is not mass surveillance, and for solving a crime it is useful.

Why can't a court order be mass surveillance? In these cases, the videos were viewed 30,000 times and more than 130,000 times (if I understand the latter correctly). How is that not mass? Nobody suggests that more than a few of those people are suspects.


My understanding of mass surveillance is that masses are surveilled. :) But here it's a court that allows extracting log data for a specific case, and it happened after the fact.

I draw a distinction between leaving loggable traces of living (which we leave all the time, no matter what) and occasionally filtering those traces to recapture the past.


That's right: even if the sample size is N=30,000, it is still a one-time, point-in-time event controlled and approved by the proper legal authority. There will be an audit trail of said approval and the process will be documented.

In contrast, mass surveillance is just "oh, we have a BIG database, and we query it whenever for whatever purpose, and nobody knows who searched for what and when and why, and nobody EXTERNAL TO THE AGENCY needs to approve it (lack of control). And today, Bob, who works for the police, background-searched his new girlfriend as well."
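
The distinction being drawn can be stated as a control-flow property: no query runs without an approval from outside the agency attached, and every query leaves an audit record. A minimal sketch of such a gate in Python (hypothetical names; not modeled on any real agency system):

  import json, time

  AUDIT_LOG = "query_audit.jsonl"  # append-only record of who queried what, when, and why

  def run_approved_query(db, query, requester, approval):
      # Refuse any query lacking external sign-off; self-approval doesn't count.
      if not approval or approval.get("approved_by") == requester:
          raise PermissionError("query requires approval from someone external to the agency")
      with open(AUDIT_LOG, "a") as log:
          log.write(json.dumps({
              "time": time.time(),
              "requester": requester,
              "approved_by": approval["approved_by"],
              "case": approval["case_id"],
              "query": query,
          }) + "\n")
      return db.execute(query)  # 'db' is any handle with an execute() method (e.g. sqlite3)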


Emphasis on "specific". This doesn't feel specific at all. Mass surveillance or not, this feels like a failure of the process that let the warrant pass.


OP explicitly agrees with you that the 30k is illegitimate, but that's the only one you address. What's your take on the one where the police became aware that they were being watched on a YouTube livestream while responding to a bomb threat and obtained a court order asking for information about who was viewing a set of local livestreams at the specific times where they were searching for the bomb?

What makes that one different than a court order demanding that a business release security footage that covers the scene of a crime for the time window in which the crime occurred? Or would you consider such a court order to also be illegitimate?


Better one person not be crushed to death than two thousand nine hundred ninety nine suffer essentially no harm whatsoever.

No, that doesn’t seem very familiar.


If we're being honest here: it's money laundering. It's not an inherently harmful crime except to the IRS, so apparently it's the most heinous thing imaginable (meanwhile, every corporation...).

No one is in danger, this is an absurd request for an absurd result of... catching a dude using cryptocurrency the way the government doesn't like. Screw the government.


> I don't think any of this appears legitimate.

It isn't.

Democracy is fake.

Our justice system is fake.

Everything is fake.

(Where "fake" = "not what they are advertised or perceived to be.")

If all of the same things were occurring in another country, or even better: in a video game, you would have little difficulty and zero aversion to accepting these facts.

However: put a person into these things, and the brain malfunctions.

If you are a reasonably normal person, your mind will now be filled with objections to this proposition, reasons why I am incorrect. But if you were to state those objections, I can punch holes in every single one of them without even breaking a sweat.

We live in a literal simulation, but not the kind that everyone has been hypnotized to believe is the only kind possible - have you ever noticed that when the notion of simulation theory comes up, it is always The simulation theory (Nick Bostrom's)?

This is a pretty neat trick eh? And there seems to be nothing that can be done about it, because people will fight tooth and nail (using Meme Magic, aka "The Facts", "The Reality", etc) against being extracted.

Thankfully, it is simultaneously hilarious. Well, except the part where millions of children are dying, but nobody cares....but oh boy when a big scary "pandemic" comes along, pull out all the stops.

I hope that there is a Hell, because I would like to see every single member of this despicable 21st century society end up there some day. Seeing justice finally being served for once would be worth suffering for eternity.


>Well, except the part where millions of children are dying, but nobody cares....but oh boy when a big scary "pandemic" comes along, pull out all the stops.

You MAY have had a point somewhere in those ramblings, but this right here just kind of undoes any credibility.


A rather bold (yet oh so common) metaphysical & cognitive claim.

I think it's fun to imagine that pandemics are some sort of supernatural punishment for human hubris in Western nations.


The second one seems a lot narrower and more legitimate.


I remember Larry Ellison (Oracle) in the news saying that privacy is dead a quarter century ago, long enough ago that the article below, written six years ago in 2018, merely referred to the quote as famous (no date needed).

https://securitycurrent.com/privacy-is-dead-long-live-privac...


Did you forget the link to Blackstone?



> The first one where the police uploaded videos and wanted viewer information is absolutely egregious and makes me wonder how a court could authorize that.

The police didn't upload the videos. It's not entrapment, and it doesn't sound like the actual content of the videos is illegal.

Instead, they had an open communication channel with their target and were able to send them various links to youtube videos.

Their theory being that if they can find any user who clicked on all (or most of) those links, it's probably their target, and it's unlikely some random user would have accidentally viewed all those videos.
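
In other words, the identification step is just a set intersection over per-video viewer lists: whoever shows up for every (or nearly every) link the agents sent is almost certainly the person they were chatting with. A minimal sketch in Python with made-up account IDs:

  # Hypothetical viewer lists for each link the undercover agents sent.
  viewers_by_video = {
      "video_A": {"acct_102", "acct_734", "acct_555"},
      "video_B": {"acct_734", "acct_901"},
      "video_C": {"acct_734", "acct_555", "acct_222"},
  }

  # Accounts that watched every one of the sent videos.
  common = set.intersection(*viewers_by_video.values())
  print(common)  # {'acct_734'} -- the likely target, on the police's theory

The objection below is that the warrant asked for the full raw lists rather than only the result of a narrowing step like this.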

The actual request for the raw list of all viewers seems unconstitutional to me. Too broad; it gives the police a lot of information about all users who watched just one of the videos. But I suspect a much narrower request, where Google identified the target user and passed just that user's info on, would be constitutional.


> But I suspect a much narrower request, where Google identified the target user and passed just that user's info on, would be constitutional.

Isn't that worse? Essentially making Google do the job of the police and the police having to trust the work of Google for it.


I don't see any problem with trust.

The police will still get the exact same raw data of that one target user. The change just means that they won't get any data on other users.


I certainly concur with this.

On the one hand, a narrow warrant that reveals a lot of people (a classic example is a warrant requiring a motel to provide the names of everyone who checked in, or was registered, on a certain date) is certainly constitutional, and such warrants have been upheld many times.

The first seems odd.


  In a just-unsealed case from Kentucky reviewed by Forbes, undercover cops sought to identify the individual behind the online moniker “elonmuskwhm,” who they suspect of selling bitcoin for cash, potentially running afoul of money laundering laws and rules around unlicensed money transmitting.

  In conversations with the user in early January, undercover agents sent links of YouTube tutorials for mapping via drones and augmented reality software, then asked Google for information on who had viewed the videos, which collectively have been watched over 30,000 times.
This is the first case. This doesn't seem that narrow to me.


> who they suspect of selling bitcoin for cash, potentially running afoul of money laundering laws and rules around unlicensed money transmitting.

Wait, what? So is Bitcoin illegal to use as a currency now? Special casing exchanges for cash seems completely pointless if you could just buy some of <any commodity> for cash and then turn around and sell it back to the same person for the same amount in Bitcoin, but if every customer has to do KYC of the merchant when they're paying with Bitcoin, how is that ever going to be feasible?


I think a charitable interpretation is that they were suspicious of the money being used illegally. But I'm not sure there's enough information here to make this clear. Because clearly there's misinformation being spread. Claiming bitcoin is anonymous...


More likely they wanted to seize the bitcoin under civil asset forfeiture and buy some high-end cars for their D.A.R.E. program.

Charitably speaking that is.

https://www.autoweek.com/news/a2055556/venom-law-police-put-...


The first is a somewhat clever attempt to unmask someone an undercover investigator was already talking to. Police should have narrowed the scope of the warrant by only asking for data on viewers within a narrow window after they sent the link.

Even better might have been to link directly to a service they already control via a honeypot URL, and then go after the ISP for customer details.
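
A honeypot link of that kind typically works by putting a unique, unguessable token in each URL handed out, so a single hit in the server log identifies exactly who clicked, with no need to pull anyone else's data. A minimal sketch in Python (hypothetical domain and scheme):

  import secrets

  issued = {}  # token -> label of the conversation/target it was sent to

  def make_link(target_label):
      token = secrets.token_urlsafe(16)  # unguessable, one per recipient
      issued[token] = target_label
      return "https://controlled-site.example/v/" + token  # hypothetical domain we run

  def who_clicked(requested_path):
      token = requested_path.rsplit("/", 1)[-1]
      return issued.get(token)  # only the recipient of that link can produce this token

  link = make_link("chat with suspect")
  print(who_clicked("/v/" + link.rsplit("/", 1)[-1]))  # -> 'chat with suspect'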


Nah, actually pretty dumb overall. And sending an open link when a private one would look the same is even dumber.


A person from rdrama managed to find the FBI victim's anon reddit profile (public information), and apparently it was IRS evasion? https://rdrama.net/h/slackernews/post/255754/google-ordered-...


If you see a YT video which is remotely dodgy, don't watch it… It could very well be planted there as bait.

And it's great that Google is trying to fight back, a little. Though I wonder whether, for us non-American Brits, they'd do the same for us too (doubtful).



