Having talked to people who have filed FOIA requests, it's near impossible to get things from the government if they don't want to share them with you.
First, the requests have to be narrow and specific. You can't just say "give me everything you have on xyz". Then they can drag the process out forever by going back and forth on how narrow the specifics need to be. Then there's the issue of a wide range of documents that aren't covered under FOIA at all. And finally, even if you manage to request the documents correctly and they are covered under FOIA, sometimes they simply don't have them. This is amazingly common when FOIA activists try to get budget- and spending-related documents.
They can also simply ignore your request and force you to take them to court in order to get a response. There's little downside for them in doing this, especially against small requesters. It happens with some regularity, and is known as "constructive denial".
In a sense, that is much of what computers are: ease of access. What's the difference between millions of paper documents in a warehouse and a live database with the same information instantly accessible, processable, etc.? I suppose it's just 'ease of access'.
But it is all the difference in the world. One does not compare to the other. FOIA requests - which take paperwork, must specify the thing requested, and sometimes take years to resolve - don't compare to live access to the databases, applications, and personnel.
There's a lot that you can't get via FOIA. You can't just go on a fishing expedition. A lot of stuff is unclassified but personal, and restricted.
This potentially gives you access to everything. Such access is supposed to be about the system itself and not the data within (just like your sysadmin can theoretically look at your files but has promised not to). But hey, who's watching, and to whom would they complain?
If I want to produce a device, am I required to put work into maintaining it forever? An iPhone 8 is really old by this point; honestly I'm impressed it still works.
What timeline would you recommend that is fair to both consumers and producers?
Maybe not forever, but we have really short timelines for product longevity in this industry, even at the hardware level. A washing machine or kitchen appliance that had to be replaced every <7 years (taking the timeframe from the post) would be considered low quality; furniture that can't last 10-15 years is considered nearly disposable. Cars -- maybe the closest comparison in terms of the complexity and engineering required to build them, even if several orders of magnitude more expensive -- are expected to last decades with proper maintenance.
Certainly there is a trend towards this in a lot of industries besides computers, but given how powerful and expensive these devices are now, the current upgrade cycles are crazy fast. I think consumers are souring on them a bit as well (both because of the price, and because each year's new models offer fewer and fewer visible feature improvements).
I know the economic incentives for the producers are aligned towards repeated purchases, and that's super tough to realign, but how long can the market and the environment support four-digit phone price tags that are upgraded every 1-3 years?
I have been using an iPhone 6s (iOS 15; it received an update 4 months ago) for some time now. It works fine. Granted, I don't use it for "important" stuff, but it works fine for browsing, checking train timetables, Uber, radio, YouTube, WhatsApp, Slack, camera, maps…
The problem will not be solved by laws requiring corporations to act ethically. That will never work as long as their incentives are what they are. IMO the only way to address these issues is to have a free-software phone-OS alternative that users can control. That is to say, if you want the government involved, it would be best served by funding a free software project along these lines.
I agree that this area is more complicated than simple rules can cover, but at first thought I like the idea of some sort of requirement to open up any device that is no longer being maintained: when a company decides a device will no longer receive updates, some amount of source code/documentation must be released to allow third parties to take over.
Tying software support to how long the hardware lasts will ensure that every hardware manufacturer builds in a time-based killswitch into every device they make.
Then users should be compensated when the manufacturer decides to remotely kill their devices (whether via a kill switch or by ceasing to maintain the software).
Then we will simply see fewer and more expensive models available for sale as some manufacturers and investors decide these regulations are too much and exit; others will raise prices to pay for the compensation and extended software support.
Ironically, the _lack_ of dogfooding of GCP products at Google is often cited as one of the reasons AWS beat GCP to defining the cloud market. Amazon builds AWS on AWS as much as possible; Google has only somewhat recently pushed for this.
As I understand it, AWS is more than dogfooding. It is something Amazon first built for themselves, to give more independence to individual teams. Once they noticed it worked well, they realized they could turn it into a product.
From what I understand as an outsider, Google is much more monolithic; having a platform where each team can do its own thing independently is not really their culture. So if they build one, it is only for their customers, because they don't work like this internally. Whereas for Amazon, an AWS customer is not that different from one of their own teams.
That's mostly a marketing myth on the AWS side. As recently as three or four years ago there were _new_ initiatives being built on the legacy "corp" fabric, and even today Amazon has internal tooling that makes using native AWS quite different from what external customers get, particularly around authn/authz.
And that doesn't even mention the comical "Moving to AWS" platform, which technically consumed AWS resources but was a wholly different developer experience from native.
Nowadays building on AWS internally is heavily emphasized, but just a few years ago most services were built with internal systems that are very different. Some solutions (multi-account/cellular architecture, for example) seem to have come from heavy dogfooding, but supporting services (like account SSO for handling many accounts) are still very different from the publicly available equivalents.
As someone who worked at AWS, it's ironic how hard they dogfood cellular architecture, but when it comes to customers, all the offerings and docs are terrible, with the only information in obscure re:Invent talks or blog posts.
I now work for a large customer, and you would be shocked at the household names that basically put all their infrastructure in a single account and region. Or they have multi-region, but it's basically an afterthought and wouldn't serve any purpose in a disaster.
I think Gmail was great initially because of dogfooding. Right now, the incentives are different, and it's more about releasing new stuff. And we can see how that worked with the Google Chat saga.
Lots of other Google products suffer from similar issues because of an apparent lack of dogfooding. I bought a Pixel phone not so long ago and I had to install all updates, one by one, to bring it to the latest Android version. It took several days.
I can see why they do it, though. There are a bunch of foundational Google infra technologies that are great for building an IaaS on top of, but which can't themselves be offered as IaaS services for whatever reason.
Let's use Google's Colossus (their datacenter-scale virtual filesystem) as an example. Due to the underlying architecture of Colossus, GCP can turn around and give you:
• GCE shared read-only zonal PDs
• near-instantaneous snapshots for GCE and Bigtable
• async and guaranteed-durable logging (for GCE and otherwise) and Queues (as Pub/Sub and otherwise)
• zero-migration autoclassed GCS Objects, and no per-operation slowdown on GCS Buckets as bucket size increases
• BigQuery being entirely serverless (vs e.g. Redshift needing to operate on a provisioned-storage model)
But Google can't just sell you "Colossus as a service" — because Colossus doesn't have a "multitenant with usage-cost-based backpressure to disincentivize misuse" architecture; and you can't add that without destroying the per-operation computational-complexity guarantees that make Colossus what it is. Colossus only works in a basically-trusted environment. (A non-trust-requiring version of Colossus would look like Apple's FoundationDB.)
(And yeah, you could in theory have a "little Colossus" unique to your deployment... but that'd be rather useless, since the datacenter scale of Colossus is rather what makes many of its QoS guarantees possible. Though I suppose it could make sense if you could fund entire GCP datacenters for your own use, à la AWS GovCloud.)
Google's consumer-facing systems all tend to be very focused. Things like search, maps, gmail etc. are not the same kind of system as Amazon's store.
While these systems do presumably give Google something to exercise their cloud systems on, my impression (as a longtime user of both GCP and AWS) is that they don't give Google a realistic sense of what companies that aren't selling advertising and consumer data via focused products actually do. Amazon's store is more representative of typical businesses in that sense.
Basically, it seems to me that Google Cloud has continually learned lessons the hard way about what customers need, rather than getting that information from its own internal usage.
It also depends on the density and mobility of your host population.
Ebola kills too quickly for hosts to move around and spread it, but that's in small villages in the jungle. What if there's just enough time for the host to take a crowded train and attend a Presidential campaign rally with tens of thousands of other people before feeling too sick? This might be a better strategy for an ambitious virus in the post-Covid world than a slowly escalating illness that just makes people call in sick and stay home.
It's a bit of a shame that Plague Inc does not support re-infection. Once someone has had the disease, they are immune for life.
And the only goal the game really supports is killing everyone, instead of e.g. going for the largest sustainable population of your organism. (Think more like one of the bugs that cause the common cold, and less like the Black Death.)
It also doesn’t seem very realistic that the virus can infect everybody and then evolve to start killing them, and somehow the past infections all get the updated orders to kill, haha. Gameplay concessions I guess.
One of these days, someone's going to genetically engineer a virus that spreads easily to infect as many people as possible, but stay dormant for a few years while in this phase, then suddenly it'll turn lethal. Someone from the future might use a time machine to try to come back and gain information about the virus, and then even try to stop its spread once he finds the rogue researcher releasing it at airports, but then will tragically find out that you can't change the timeline, even if you can travel back in time.
Even then it seems like a crapshoot whether or not it works, because somehow if you become highly infective but otherwise are flying under the radar you'll still be detected and suddenly all governments start closing up shop... or at least that was the case in Pandemic II. I can't speak to any of the clones.
I see about a 10x slowdown on some applications[0] and IO-heavy operations with Defender in Win11. It's unbelievable how slow it is. I was a huge proponent of it in Win10, but I'm finding it hard to remain one now.
[0] The software I'm using does a scan over a few hundred thousand files to read file headers. Without Windows Defender it takes about 30 seconds, but with Defender it takes about 300.
The answer in this scenario is to exempt that application and/or folder. Don’t throw the baby out with the bath water.
In my environment we have to add exceptions to real-time scanning for developers' git folders for a similar reason. Apps with large numbers of small files, or high-frequency writes of small files (like temp files during the build process), need to be exempted unless you're willing to pay the performance penalty for the security.
I don't understand why, but I have an exemption for that folder and I've disabled real-time scanning, and it still shows the slowdown on first launch. The only thing that works is disabling Windows Defender entirely. I've been through the troubleshooting loop a few times with this.
We're seeing the same thing - our compilation times literally double because of Defender activity. You can go into Resource Monitor and see Defender using something like 50% of the CPU; it takes our project from about 12 minutes for a full rebuild to 25 minutes. And the thing is, you can add whatever exceptions you want - they work for a while, and then something breaks again with updates. I literally have to keep re-adding and fixing exceptions in Defender every month, or our compilation times slow to a crawl.
JetBrains IDEs actually tell you to exclude some directories from Defender (and will even do it for you in some instances) because of performance issues. DevDrive[1] fixes that, since it's excluded from Defender by default.
I'm generally in favor of Defender, but it can definitely cause performance issues, especially in combination with other Windows processes like Windows Update.
If you keep your system regularly updated, it's less of a problem, but I often help people who basically never update or never leave their computer on long enough for Defender to complete its scans. After a certain point it becomes nearly impossible to update, because Defender fights Windows Update and Chrome and Discord and so on for access to the files, and you end up with the CPU and hard drive maxed out for a couple of days before everything completes.
You can set exclusions, of course, but it does get tedious, because every time you have a new project you need to add exclusions for its folder and the toolchain. Then every time a toolchain is updated (e.g. .../gcc/11.5 changes to gcc/11.5.2), you have to enter the 20 new exe exclusions, and of course Windows won't let you mass-delete the old ones, so it's click->confirm->click->confirm x50.
I might not do it myself but I can see why someone would just say "enough is enough".
You can use the PowerShell command Add-MpPreference -ExclusionPath[0] and ship a script with your app if you want. I do the same for Terraform providers - whenever a new version comes out, for a time the process can get randomly killed, as I suppose a process that spawns a child process that starts talking to lots of endpoints looks somewhat suspicious.
Seems like a good enough reason to use it to me, but perhaps I’m just another cult member.
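To make the "ship a script" idea concrete, here's a minimal sketch in Python that shells out to the real Add-MpPreference cmdlet. It needs an elevated prompt, and the install path and process name are hypothetical placeholders, not anything from a real product:

```python
import subprocess

# Hypothetical locations - substitute your app's actual install dir and binary.
EXCLUSION_PATHS = [r"C:\Program Files\MyApp"]
EXCLUSION_PROCESSES = ["myapp.exe"]

def run_ps(command: str) -> None:
    """Run a single PowerShell command, failing loudly if it errors."""
    subprocess.run(
        ["powershell", "-NoProfile", "-Command", command],
        check=True,
    )

# Add-MpPreference requires an elevated (administrator) shell.
for path in EXCLUSION_PATHS:
    run_ps(f"Add-MpPreference -ExclusionPath '{path}'")
for proc in EXCLUSION_PROCESSES:
    run_ps(f"Add-MpPreference -ExclusionProcess '{proc}'")
```

The same approach would also handle the gcc/11.5 -> 11.5.2 churn mentioned upthread: regenerate the exclusion list from the current toolchain directory instead of clicking through the UI fifty times.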
> we use Rust in production because we thought it would be easier (well safer) to teach to interpreted language OOP developers than c/c++
I think Rust is just safer, period, regardless of your level of expertise or background. Or are you saying that every memory safety bug was written by someone inexperienced?
Not to mention the concept of "sharing code." See, I may write garbage-tier Rust code that basically glues together a bunch of libraries, but many of those libraries are written by programmers way better than me, and I gain performance and safety from using their work. The performance difference between languages like Rust and JS is huge, even if I, tainted by my background in Java, write dogshit code.
> I think Rust is just safer, period, regardless of your level of expertise or background. Or are you saying that every memory safety bug was written by someone inexperienced?
I think you're reading too much into that and creating conflict where there isn't any. GP just meant that they had a bunch of interpreted language OOP developers (probably Java or C# or something like that), and they wanted them to start writing code in an AOT-compiled language (yes, I know about Graal native image; not the point). And that teaching them Rust is probably going to result in safer code than if they were to teach them C or C++.
> GP just meant that they had a bunch of interpreted language OOP developers (probably Java or C# or something like that), and they wanted them to start writing code in an AOT-compiled language (yes, I know about Graal native image; not the point).
JIT vs. AOT is not the same as being an interpreted language. For most applications, running on top of the virtual machine is a good thing, I don't see JVM developers turning to different languages just to escape the JVM, outside of some niche projects.
Also, without claiming that experienced C/C++ developers never write memory safety bugs, I don't think it's at all controversial to say inexperienced C/C++ developers write a lot of them (and hopefully experienced ones write far fewer).
I was trying to point out that wanting to take the easiest path is rather contradictory to working with Rust.
> I think Rust is just safer, period, regardless of your level of expertise or background.
Yes, I'm not sure how you think I was implying something different. We (specifically) adopted it because it was easier for our (specifically) developers, who weren't familiar with low-level languages, to work with compared to C/C++.
> many of those libraries are written by programmers way better than me
You shouldn’t sell yourself so short in my opinion.
Kindle is good stuff though. On the other hand, I can't imagine what a dedicated device would bring to the table that a smartphone / tablet would not in this case...
It's probably a wearable device, and the most instantly uncanny-valley nerd-alert device imaginable, one that makes Bluetooth-headset and Google Glass wearers seem almost normal.
I think it's likely we have passed the peak of R&D spend on ICE, but the momentum of the research that has gone into it for decades is going to keep producing innovations for a while longer.
The engineers are still there (where else would they go!) and with ICE development moving far out of strategic focus, there might actually be more room now for trying odd curveball approaches than back when ICE refinement was still a cornerstone of success.
But unlike your original birth certificate and your social security card, you can keep an arbitrary number of copies of your backup codes in an arbitrary number of safe caches.
But what if I didn't do any of those things? What if I made a mistake and didn't save my backup codes on 3 different types of storage medium (at least one of them offsite)?
Certainly, if you construct enough postulates, you can wind up in some funny places. If you forget your cash, is a shop obligated to give you the goods for free? What if you fail to read the sign in the car saying "NO LOBSTERS" and you've brought your crustacean along for a walk?
What if you made a new Uber account with another email address? Would you be committing a venial sin?
Your actual sin, non-venial, was not naming the thread either "I love reductio ad absurdum" or "Uber needs a real human on the helpdesk, this is why".
> What if you made a new Uber account with another email address? Would you be committing a venial sin?
Uber accounts are tied to phone numbers. Should I get a new phone number just because Uber's support bot is incapable of helping me recover my account?
So you're hitting upon an actual problem that we encounter often in the tech age. It turns out there's a whole category of users who have this problem: users who have no secure place to store secondary data. In other words: the homeless.
They have access to the Internet at libraries and as more and more of the world goes on line, that access becomes more and more necessary. But they don't necessarily have a wallet (let alone a safe) to store passwords and 2FA tokens, and when they lose them it's a major issue.
This is a known problem, but (as with so many issues in marginalized communities) the challenge in solving it is that relaxing 2FA requirements would make the system worse for everyone, including them; hackers can crack the passwords of, and impersonate, the homeless just as easily as anyone else.
(The best idea I've actually heard in this space is for librarians to be willing to serve as data repositories for their local unhoused patrons. They are onsite enough and have enough face-to-face interaction to be able to spot-auth someone because they know them by name. But there's a massive liability concern about the library becoming a target for identity theft that keeps most places from considering it).
You can always just fix it now as long as you have anything that works now.
You don't need the annoying one-time-use backup codes either; you can save, copy, and reuse the original seed value just like any other secret. Or generate a new one now and save that, if your current single copy is in software that provides no means of export.
The seed value is just another password. You can have any number of copies of it, in any number of forms, from a text file to a printout in a safe deposit box. You can have any number of working copies of it on any number of different devices, all at the same time.
If you are in another country with nothing but your head, you can buy a new device, use it to access a copy of your secrets by any means you want, install software fresh, load your seed values, and regain all your TOTP codes.
You will need a way to access your secrets without TOTP. This can be anything: a paper copy, an SD card, an online service that you can access with only a password or that can be recovered, a phone call to a family member whom you can tell how to access something and who can read it back to you... whatever. It's just some text you have to stick somewhere and get back to later.
And if you don't have either the one-time codes or the original seed value (from the original QR code or URL), but you have an app that works right now, you can go redo the setup and this time save the seed value.
There is really no danger from TOTP EXCEPT the fact that most common apps and websites obfuscate what's actually going on, don't tell you to save the seed value, and instead give you these one-time-use codes. That creates a danger where there wasn't any.
Ideally, for proper security, one should not do the convenient thing and store both the normal login and the TOTP seed value in the same place; but, for instance, KeePass can store everything, and will not only store the seed value but also display the current TOTP code to use for logging in.
All 12 of your different copies of your KeePass db file are fully functional TOTP generators, and all you have to do is load that db on any new device.
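For the curious, the whole mechanism fits in a few lines. Here's a minimal RFC 6238 sketch in Python (standard library only; the base32 seed below is a made-up example, not a real account) showing that any copy of the seed is a fully functional generator:

```python
import base64
import hmac
import struct
import time

def totp(seed_b32: str, digits: int = 6, period: int = 30) -> str:
    """Standard RFC 6238 TOTP: the base32 seed is the only secret."""
    key = base64.b32decode(seed_b32.upper().replace(" ", ""))
    counter = int(time.time()) // period           # 30-second time step
    mac = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = mac[-1] & 0x0F                        # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Every device holding this same seed prints the same code right now.
print(totp("JBSWY3DPEHPK3PXP"))  # example seed only
```

Google Authenticator, KeePass, and every other TOTP app do exactly this, which is why loading the same seed into a dozen devices gives you a dozen working generators.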
Harsh, but let this be a lesson to set up a more resilient structure for yourself going forward. Anyone can make mistakes; you can set up failsafes for your own pre-emptively.
"2FA is garbage" is not the right lesson to learn here...
Is this even meaningful? We have FOIA; can't any company acquire unclassified data any time it wants?
I suppose it's an ease-of-access thing, but for a large corp it seems you could be constantly data-mining the government with FOIA requests.