The NSPA's goal was never to actually hold a march. They never did hold a march in Skokie after winning that defense. Their goal was to harass Jewish communities through the process of applying for the march, which Goldberger, for all his good intentions, assisted them with.
As the article notes, they sent letters to a whole bunch of suburbs asking for a permit to hold a march. They never followed up with the suburbs that ignored them, nor did they march in those suburbs either.
The whole thing was a bad-faith tactic which today we'd call trolling. They weren't fighting for their right to speak freely in Skokie, they were fighting for their right to use the legal process to harass the city of Skokie.
In case anyone is curious, the article https://en.wikipedia.org/wiki/National_Socialist_Party_of_Am... linked above suggests that the NSPA's goal was in fact to hold a march in Chicago and following the Skokie decision, they apparently did in fact hold a rally in Chicago.
"Moon" is the name of the person who spoke the thing above that is being discussed in this thread.
Advising underage people through a wiki on how to make hormones at home is neither sexual assault nor grooming as the term is conventionally understood. How could it be? You can't get in contact with an anonymous reader of information that has been published to the public. You have no idea what they do with that information. So I am not really understanding what its relevance in this thread is.
(If anything, it's an example of free speech. You might not be a fan of it, which is totally fine, but information that advises people of any age on knowledge that people would prefer to be un-known is obviously at least as deserving of free speech protection - both in the legal sense of protection from government interference and in the moral sense that lovers of a free society should support it having a platform - as anything posted on Kiwifarms.)
Stars don't represent anything real. You don't star a project when you clone it, when you grab a packaged version from a package repository or the home page, when you use it as a transitive dependency, when you scale it out successfully to thousands of machines, when you pay for a support contract, when you contribute a pull request, etc. In turn, when you do star a project, it doesn't require the project to even work, let alone do things well, respond to feedback or contributions, etc.
And startups have long asked for people to star their projects for visibility.
I don't know that there was ever a point when it wasn't being "gamed." Maybe there are now bots starring repos, but is that meaningfully different from masses of very real humans being excited by hype?
I very much agree with you that the lack of supply is a major problem, but the article is making a case here that what the algorithm is doing is tantamount to price-fixing / collusion. Even in markets where supply is much easier to come by than in housing, and where there is no entrenched political dysfunction restricting supply - such as RAM chips and canned tuna - price-fixing is a problem.
There's an analogy in the article to airplane seat pricing. The solution there was to tell the airlines they couldn't collude with each other, not to say "The real problem is the lack of flight supply."
But I don't see any actual evidence that collusion is happening. Collusion is difficult to pull off! It requires each owner to restrict their supply so that all owners benefit through higher prices. The incentive for an individual owner is to "defect" by renting out their whole supply. Collusion generally requires participants to be able to monitor and enforce each other's behavior.
To the extent that owners are increasing prices slightly to benefit themselves, that's not collusion, that's just the market clearing.
It protects the user's privacy against attackers other than Google.
To be fair, this is an entirely reasonable threat model for a lot of people. For instance, if you're a reporter in an authoritarian country, Google is almost certainly not colluding with the attackers who are literally trying to kill you, and using a Chromebook and Gmail is probably the best option out there. Your threat model is "Don't die," not "Don't be subject to surveillance capitalism."
But it's also something we should collectively be pushing back on. The motivating example for these products is "intelligent ambient systems," i.e., things like Nest hubs and doorbells that capture audio/video all the time. These products probably shouldn't exist at all, and to the extent they do, they should process data locally and discard it as soon as they can.
Google sucks up a lot of data, and is in a position to do a lot of bad stuff with it, but historically they have never told my spouse about my affair, my government about my accounts in the caymans, or leaked my nude pictures to my grandma. (I don't actually have any of these!)
I really don't care how much data of mine they have, as long as they limit the evil they use it for to deciding whether to show me an ad for baseball or football shirts...
And I trust them not to accidentally leak it far more than I trust my government or any smaller/less techy company.
This 100x. Of all the companies/entities that have had some sort of data of mine over the years Google feels by far the most trustworthy.
My country's agencies (Canada) have leaked more data than Google, and MS can claim they're secure all they want, I've had accounts on MS services hacked but never Gmail or Google services...
> historically they have never told my spouse about my affair
Have we forgotten Google Buzz? Google changed GMail to publicly list the people you email most. In one case, this de-anonymized a woman's blog and enabled her abusive ex-husband to stalk her. https://fugitivus.wordpress.com/2010/02/11/fuck-you-google/
This is IMO the most likely way that "bad stuff" will happen: not maliciously, but through privacy-invading misfeatures connected to pushing people to share more.
That's 12 years old... I think it's a real testament to Google's privacy behaviour that amongst their 2 billion+ users over those years, there are no fresher news stories that come to mind.
Compare with facebook/instagram, where it seems every other week someone messes up the privacy settings and posts something to an audience they didn't intend because the product is deliberately designed to encourage accidental oversharing.
> Google sucks up a lot of data, and is in a position to do a lot of bad stuff with it, but historically they have never told my spouse about my affair, my government about my accounts in the caymans, or leaked my nude pictures to my grandma. (I don't actually have any of these!)
"""It's unclear how widespread Barksdale's abuses were, but in at least four cases, Barksdale spied on minors' Google accounts without their consent, according to a source close to the incidents. In an incident this[2010] spring involving a 15-year-old boy who he'd befriended, Barksdale tapped into call logs from Google Voice, Google's Internet phone service, after the boy refused to tell him the name of his new girlfriend, according to our source. After accessing the kid's account to retrieve her name and phone number, Barksdale then taunted the boy and threatened to call her. [...]"""
Fwiw that was 12 years ago, and a lot of the Google infra has changed quite a bit since then to make looking at user data much harder and track access more explicitly.
I.e., I want them to commit to "No human who works at Google will ever see your email or photos without you knowing about it". And then splash that statement all over TV ads.
Set up some system so every time an engineer sees user data, the owner of that data is sent a notification (and there are legit reasons for that, like investigating a bug a user has reported). It doesn't need to be for every kind of user data, just the super sensitive ones like the text of emails.
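A minimal sketch of that idea (entirely hypothetical, not any real Google system; all names and the bug number are mine) - wrap the privileged read path so every access to sensitive data also fires a notification to the data's owner:

```python
# Hypothetical sketch: every privileged read of sensitive user data
# also sends a notification to the data's owner.
from functools import wraps

notifications = []  # stand-in for a real notification pipeline


def notify_owner(owner: str, accessor: str, reason: str) -> None:
    notifications.append(f"To {owner}: {accessor} viewed your data ({reason})")


def audited(fn):
    """Decorator: every call to the wrapped reader notifies the owner."""
    @wraps(fn)
    def wrapper(owner, accessor, reason):
        notify_owner(owner, accessor, reason)
        return fn(owner, accessor, reason)
    return wrapper


@audited
def read_email_text(owner, accessor, reason):
    # placeholder for the actual privileged read
    return f"<email bodies of {owner}>"


read_email_text("alice", "eng-123", "investigating bug report #4567")
```

The point of the decorator shape is that engineers can't reach the data except through the audited path, so the notification can't be skipped.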
Absolutely agree, but how do you do that in practice?
Do you self-host your services on some Linux distro? How many FAANG employees have upload access to that distro or maintain its infrastructure?
(Or maybe you audited everything yourself and you're 100% confident in your audit, somehow, and you've turned off automatic updates. How many FAANG employees are working on fuzzers to automatically find new exploitable security vulnerabilities and scale out those fuzzers on their employers' infrastructure?)
This is true now, but once they have that data, you can't know what they'll use it for in the future. Maybe they'll keep using it the same way as now, maybe not. Also don't forget the recent case of users who got reported to the police by Google because they took pictures of their children for medical reasons.
It's actually spelled '"Auto-Deletion" of data' since you can't prove it's been deleted.
Google and other US tech companies have no right to be trusted after PRISM. Not to mention the US government's complete abdication of public oversight under the guise of national security, with secret courts, secret rulings, and national security letters compelling silence from these same organizations while complying with whatever demands they make.
You realize many tech companies responded to PRISM by making their data centers and private fiber more secure against domestic state sponsored hacking, right?
Unfortunately, I believe that there were 2 possible outcomes in a post-PRISM world:
1) Tech companies increased their security, but it wasn't enough, and security services still have a feed of nearly all data, through a combination of software/hardware/algorithmic flaws.
2) Tech companies did manage to mostly stem the flow of information into security services. However, security services simply sent secret letters to all the big players demanding an API/backdoor and requiring them not to talk about it.
My lukewarm take is that it is possible to construct your company/infra in such a way that functionally, any employee can audit that (2) is not the case, and that Google comes very close to doing this.
If you take security and specifically insider threats seriously, you can't privilege or hide any subsystem, or it becomes a threat of its own, so the same processes that prevent an attacker from creating a shadow-system in your infrastructure also prevent you from doing the same thing.
Apple's "cooperation" with authoritarian governments tends to only go so far as it needs to in order for the next iPhone to come out on time and in sufficient supply. Otherwise Apple moves heaven and earth to engineer their devices to be as secure as they can make them, even against state authorities.
That said, if you live in China, you probably don't want to sync your stuff to iCloud. Not because Apple doesn't want to protect your data, but more because you can't trust anything in any data centers that are physically on Chinese soil.
But let's get real. If you're in mainland China and the authorities decide they need to confiscate your phone, you're already fscked.
Digging through the link the other commenter posted, Apple complied with 88% of Russia's requests for information and 94% of China's, with over 1,000 requests from each of those nations...
Versus Google which has avoided giving information to or censoring search results in both countries and as a result is mostly banned.
With Apple leaving Russia and removing government-affiliated apps from the App Store with no way to side-load them, the only remaining option is Android, and blocking Google completely would probably render most smartphones useless, as most Android phones rely on Google services to function. I think that's why it's not banned yet.
> Apple's "cooperation" with authoritarian governments tends to only go so far as it needs to in order for the next iPhone to come out on time and in sufficient supply
That statement is kind of information-free. If China knows they have Apple over a barrel, why wouldn't they demand a lot?
But for how they cooperate, Apple's own transparency report shows they give information on Apple customers to Chinese authorities thousands of times per year, and accept the vast majority of requests: https://www.apple.com/legal/transparency/cn.html
>If you're in mainland China and the authorities decide they need to confiscate your phone, you're already fscked.
Funny how you specifically mention China, as if it worked differently in USA - the country where you can get four years of jail time for talking back to police.
Because of hypocrisy? They pretend not to be in the ads business with your data.
So now everyone claims to value "privacy" (aka only they get to collect your data for their own personalized ads). In the end you're just picking who hoards your data and shows you the ads. What's the difference again?
Google, being US-based company, is legally obliged to provide all the data they have to three letter agencies, without any real oversight. They can’t refuse even if they wanted.
Regardless, I care less about the US government having my info than, say, Russia (especially being part Ukrainian, having Ukrainian friends and family, etc...).
Lol. Selling your data to the government is one of the ways they make money. BigTech and BigBrother have been in cahoots for more than 2-3 decades now. Read https://en.wikipedia.org/wiki/PRISM for more info.
Not that I disagree with you about whether the average person realizes it, but it's not just a risk because Google has JavaScript trackers on your porn site. Google could just make a deal with the porn site to access their server logs and correlate data that way. The fact that you disclose information to the sites you visit means they may, in turn, disclose information to whomever else.
When you buy something with MasterCard, in person, with a magstripe or even an old-school carbon-copy imprinter, MasterCard can go give that data to Google.
I think the incognito warning could say "Websites you visit, and anyone those sites share data with" to draw attention to this, but I'm not sure if that's quite enough. I'm leaning towards the argument from this article that "incognito" itself is simply a poor name.
For Slack in particular, the deal is that new signins return a token that starts with "xoxc", not a token that starts with "xoxs", and that token requires corresponding cookies in order to be accepted. If you're grabbing a token out of the browser, you will also need to get cookies for slack.com and pass them in the client. Last I checked, you only strictly needed the cookie called "d", but you may as well grab all of them.
A bunch of folks who have been using Slack for a long time have saved their xoxs tokens from a couple of years ago and not invalidated their old sessions, and they'll tell you the various clients still work. For a new user who is getting an xoxc token, your client needs a way to pass cookies along with your token. Many of the terminal clients I've seen don't obviously give you a way to pass the cookie.
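Concretely, "pass the cookie along with the token" just means sending both headers. Here's a sketch (the function name is mine, and the token/cookie values are obviously placeholders; auth.test is Slack's standard token-check endpoint):

```python
# Sketch: calling Slack's Web API with a browser-derived xoxc token,
# which is only accepted alongside the matching slack.com "d" cookie.
import urllib.request


def build_slack_request(token: str, d_cookie: str) -> urllib.request.Request:
    """Build an auth.test call carrying both the bearer token and cookie."""
    return urllib.request.Request(
        "https://slack.com/api/auth.test",
        headers={
            "Authorization": f"Bearer {token}",
            "Cookie": f"d={d_cookie}",  # the "d" cookie is the essential one
        },
        method="POST",
    )


req = build_slack_request("xoxc-not-a-real-token", "example-d-cookie")
```

Sending the request with `urllib.request.urlopen(req)` should return `"ok": true` if the token/cookie pair is valid.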
My company uses the enterprisiest Slack offering (Enterprise Grid, compliance archiving, proxy restrictions, the works) hooked up to Azure AD as the identity provider and previously to Okta via ADFS, and I have working programmatic auth to Slack (leveraging local Kerberos tickets from Windows AD signon). Give me some details on what your auth setup looks like and I can probably help you with it. (I can try to open-source my internal client if it helps, but it's specific in some ways to our setup so it might be easier to just talk you through it.)
One might hope, but they didn't. How do you tell in Java if stderr is a TTY, and if so, what its width is? How do you do an NSS passwd lookup (as in getpwnam("pjmlp") in C etc.)?
(To be fair, these could be defended, very slightly, on the grounds that Java is a cross-platform language, even though other cross-platform languages have better answers. But then I'd raise the point about Java corrupting RLIMIT_FILES for processes it starts....)
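For contrast, here's how one of those other cross-platform languages (Python, in this sketch) exposes exactly the facilities in question out of the box - the point being that "cross-platform" doesn't force you to omit them:

```python
# The three UNIX facilities in question, via Python's stdlib.
import os
import pwd
import sys


def stderr_is_tty() -> bool:
    # wraps isatty(3)
    return os.isatty(sys.stderr.fileno())


def stderr_width(default: int = 80) -> int:
    # wraps the TIOCGWINSZ ioctl; raises OSError if stderr isn't a terminal
    try:
        return os.get_terminal_size(sys.stderr.fileno()).columns
    except OSError:
        return default


def passwd_lookup(name: str):
    # NSS-backed lookup, like getpwnam(3) in C
    return pwd.getpwnam(name)
```

(`pwd` is UNIX-only, so Python degrades gracefully on Windows by simply not shipping the module there, rather than offering nothing anywhere.)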
The same way you can tell in Limbo, and its authors did know one or two things about UNIX and its evolution, although this might be considered moving goalposts. :)
Well, kind of. Docker is a product, with official support for Linux containers on Mac (and Windows containers on Windows!). Docker for Mac comes with a Linux VM as a feature of the product; you don't need to install it yourself inside a VM (though that works, too).
It does sound like adding Rosetta binfmt_misc support would allow Docker for Mac to ship an ARM64 kernel/VM image instead of an amd64 one and benefit from some performance boost, but potentially at the risk of reliability/fidelity. The entire idea of Docker is that the kernel ABI is a (supposedly) stable interface, and even if your userspace changed around it, a Docker container would have its own userspace and wouldn't care. Running a different-architecture kernel and dynamically translating it necessarily means that there will be visible differences in the kernel ABI. Sure, you can translate those differences, but that gets you farther from the promise.
- Apple, in 2015, deprecated the version of OpenSSL they used to ship with the OS, which was the really old version 0.9.8. (The alternative is either Apple-specific crypto APIs or bringing your own OpenSSL.) The specific way they did that was to remove the development headers, but keep the runtime libraries around, so existing code would work but new code could not compile.
- For some reason, Apple's compiler searches /usr/local/include before /usr/include but their linker searches /usr/lib before /usr/local/lib.
- Homebrew's "link" operation puts libraries in /usr/local.
Therefore, if you were sufficiently unlucky, you could install a modern, up-to-date, secure OpenSSL from Homebrew in /usr/local via "brew link openssl," and have that used for the compile phase only, and end up actually using the deprecated, insecure OpenSSL in /usr, and you wouldn't notice anything had gone wrong.
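A toy model of the mismatch (the search orders are from the description above; the set of "available files" is illustrative) - because the compiler and linker walk their search paths in different orders, the header and the library can come from different OpenSSL installations:

```python
# Toy model: compiler vs. linker search-order mismatch on macOS.
def first_hit(search_path, available, name):
    """Return the first directory on the search path that provides `name`."""
    for d in search_path:
        if (d, name) in available:
            return d
    return None


available = {
    ("/usr/local/include", "openssl/ssl.h"),  # Homebrew's modern headers
    ("/usr/lib", "libssl.dylib"),             # Apple's deprecated 0.9.8 runtime
    ("/usr/local/lib", "libssl.dylib"),       # Homebrew's modern runtime
}

# Compiler searches /usr/local/include first; linker searches /usr/lib first.
header_dir = first_hit(["/usr/local/include", "/usr/include"], available, "openssl/ssl.h")
lib_dir = first_hit(["/usr/lib", "/usr/local/lib"], available, "libssl.dylib")
# header_dir is Homebrew's, lib_dir is Apple's: you compile against the
# modern headers but silently link the deprecated 0.9.8 library.
```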
Thanks, your explanation makes it a lot clearer than the thread on GitHub.
Is there something we can do on our machines to quickly check if OpenSSL is set up OK, so we don't end up in the state you described where we use the old version?
The linker/compiler discrepancy sounds incredibly weird. This is Linux levels of "things don't work together as expected" - but how can a uniform system developed by a single company include nonsense like this?