"All we can do is tell people that NIST are the ones in the room
making the decisions, but if you don't believe us, there's no way
you could verify that without being inside NIST," says Moody.
There's our problem - right there!
If a body as important as NIST is not so utterly transparent that any
random interested person can comb through every meeting, memo, and
coffee-break conversation, then it needs disbanding and replacing with
something that properly serves the public.
We have bastardised technology to create a world of panopticon-level
surveillance, and then misused it by scrutinising the private lives of
ordinary citizens.
This is arse-backwards. If monitoring and auditing technology has any
legitimate use, the only people who should morally (and willingly)
give up some of their privacy are those who serve in public: in our
parliaments, councils, congresses, government agencies and standards
bodies.
> All we can do is tell people
No. You can prove it, and if you cannot, step aside for leadership
better suited to serving the public interest.
I'm now picturing a slightly different government to ours, where the oversight bodies have the additional function of making sure officials don't talk to one another outside of recorded meetings under the public's eye.
It seems like a huge burden. It is the kind of thing, though, that in a parallel universe would make total sense: our representatives should be beholden to us.
> I'm unconvinced that private, smoke-filled backrooms don't have an
essential place as the grease that keeps things running well.
I hear that, and there's a case for it. Diplomacy, maneuvering and negotiation
require secrets and enclaves.
So to allow for that you need a few things:
- Strict official records of affairs
- Strong penalties for fraud, malign influence and intimidation
- Whistleblower protection
The last of these essential checks and balances has gone to shit in
our culture. Even if we pardoned Edward Snowden and made him a "hero
of democracy" tomorrow, it's still a mountain of work to restore the
essential sense of civic responsibility, patriotism and duty that
allows people who discover or witness corruption to step up and
challenge it, safe in the knowledge that the law and common morality
are on their side.
Billions (the TV show), of all places, made a somewhat similar argument in favor of post-hoc investigations in its last episode.
Record and share everything immediately = no room for deals
Record nothing = too much room for corruption
So we land at... record everything + only review with just cause + strict whistleblower protections.
Which seems a nice splitting of the matter, but it requires a strong, independent third party (e.g. the judicial branch) to arbitrate access requests - and that party faces tremendous pressure and incentives to breach the limit.
Many governments do have laws like this, called "sunshine laws". Enforcing them can be difficult, though, and often enough they fail to achieve the transparency that is their goal while also substantially hindering the process.
Modern panopticon-level surveillance is not deployed by weakening encryption. It's built right into platforms and apps, and people willingly install it because it gives them free services and addictive social media feeds.
You don't need to weaken encryption to spy on people. You just have to give them a dancing bunny and to see the dancing bunny they must say yes to "allow access to contacts" and "allow access to camera" and "allow access to microphone" and "allow access to documents" and ...
For the higher-brow version replace dancing bunny with free service.
In addition the more we adopt and make use of cloud command and control architectures the more surveilled we become, because it becomes trivial for anyone with access to the cloud provider's internals to tap everyone's behavior. This could be done with or without the knowledge of the provider itself. The more such services we use the more data points we are surrendering, and these can be aggregated to provide quite a lot of information about us in near-real-time.
> In addition the more we adopt and make use of cloud command and
control architectures the more surveilled we become, because it
becomes trivial for anyone with access to the cloud provider's
internals to tap everyone's behavior.
Every US government office - at both the state and federal levels - has record-keeping requirements. It's the reason we can submit FOIA requests that return data from the 30s.
Adding to this, FOIA doesn't guarantee useful answers. Having gone through the process whilst in the military, I found a few loopholes. They can give a bloated answer, with boxes and boxes of useless, mostly unrelated information - they did this to me. They can also decline to answer and just say it's classified, or decline by way of stalling tactics. The process still served my purpose, which was to stall them from collecting DNA.
I'd say the haphazard destruction of Blackberries and iPads with hammers is pretty explicit evidence of how bad State Dept IT policy and execution was.
Maybe I've worked in large corporations too much, but my first question when I see policy violations is not "How is this person conspiring?" but rather "What made following the official policy difficult? And how can we fix that?"
Also, the State Dept seems like exactly the sort of place policy violations become culturally routine: non-technical experts, doing work that is arguably "more important", with IT seen as a cost rather than profit center.
Or, she didn't even think about records retention or device security, in the same way that most politicians/executives don't, assuming it was handled by someone.
And it was in fact not, because there was no functioning policy enforcement at State.
How is that in any way like working in an open kitchen?
Anyway, interesting take that people working for the public should have no right to privacy in any way. Not sure you will find many people willing to do such work, though.
You don't have privacy in what you do at work when it comes to your employer. That's why it's called privacy: it relates to private and personal things. Your work for someone else is, by definition, not private.
You don't get to push stuff into production and play coy about what you're pushing, do you? Why would that change when the employer is the public?
Putting aside finer points on privacy at work (which, depending on jurisdiction, is not so black and white), recording and making publicly available all work-related communication goes way beyond the usual surveillance at workplaces, even in regulated-communication settings. I have at least never worked anywhere that would insist on recording any work-related conversation with a co-worker during the morning commute (and on making that publicly available).
As it would be based on distrust, you'd have to record everything everywhere to keep people from evading it.
Even innocently: if, for example, two employees meet at home for dinner with their respective families and there is talk about work, that needs to be recorded in this view. Or people car-sharing on the way to work - recorded. Any work-related communication is very, very broad.
If there's something wrong with the work they do in the kitchen, it doesn't matter how that came to be. If there is nothing wrong with it, it doesn't matter what else (other than something that would make them do something they shouldn't in the kitchen) they talk about outside of the kitchen.
The solution can't possibly be not even knowing the difference between someone putting salt or toenails into a pan -- but ensuring cooks don't talk to each other, so even though we have no clue what they're doing or should be doing, we magically know nothing bad is going on.
How do we know the cooks aren't holding out on a better recipe for Alfredo sauce that uses more salt? The recipe they're using seems adequate, but there's a better formula they could be using by adding salt, except they've all gotten together outside of work to conspire against us and say there's nothing wrong with the amount of salt they're using. Who knows who's paying them to say this? We need to follow them everywhere, see who they see, read what they write, audit their bank records. We need proof.
I think everyone here is pretty clear how they would ethically view such a thing, but view it from NIST's (/ NSA's) perspective for the sake of argument. Maybe there's a specific threat where NIST (or presumably the NSA) believes it has a mandate to insert a backdoor.
In order to successfully do this, NIST needs to maintain a very large bank of social capital and industry trust that it can spend on very narrow issues.
But over the years there have been enough strange things (Dual EC DRBG being the most notorious) that that trust, at least when it comes to crypto design, simply isn't there. My perception is that newer ECC standards promoted by NIST have been trusted substantially less than AES was when it was released, and I can think of a number of major issues over the years that would lead to this distrust.
The inevitable outcome is that NIST loses much of its influence on the industry, which certainly is not in its own interest.
Everyone also discounts the other reason NIST (with NSA behind the scenes) might be shifty -- they know of a mathematical or computational exploit class that no one else does.
And therefore want to do things-which-seem-pointless-to-everyone-else to an algorithm to guard against it.
Without disclosing what "it" is.
Everyone's quick to jump to the "NSA is weakening algorithms" explanation, but there's both historical and practical precedent for the strengthening alternative.
After all, if the US government and military use a NIST-standardized algorithm too... how is using one with known flaws good for the NSA? They have a dual mission.
>I think everyone here is pretty clear how they would ethically view such a thing, but view it from NIST's (/ NSA's) perspective for the sake of argument. Maybe there's a specific threat where NIST (or presumably the NSA) believes it has a mandate to insert a backdoor.
That's an incredibly charitable version of their point of view. How's this for their POV: They're angry that they can't see every single piece of communications, and they think they can get away with weakening encryption because nobody can stop them legally (because the proof is classified), and nobody's going to stop them by any other avenue either.
> view it from NIST's (/ NSA's) perspective for the sake of
argument. Maybe there's a specific threat where NIST (or presumably
the NSA) believes it has a mandate to insert a backdoor.
Without any /sarcasm tags I have to take that at face value, and
frankly there are few words to fully describe what a colossally stupid
idea (not your idea, I am sure) that is. Belief in containable
backdoors is the height of naivety and recklessly playing fast and
loose with everyone's personal security, our entire economy and
national security.
That is to say, even taking Hollywood Terror Plots into consideration
[0], I don't believe there is ever a "mandate to insert a backdoor".
> In order to successfully do this, NIST needs to maintain a very
large bank of social capital and industry trust that it can spend on
very narrow issues.
Having some "trust to burn" is great for lone operatives, undercover
mercs, double agents and crooks that John le Carre described as
fugitives living by the seat of expedient alliances and fast
goodbyes. Fine if you can disappear tomorrow, reinvent yourself and
pop up somewhere else anew.
But absolutely no use for institutions holding on to any hope for
permanence and the power that brings.
> The inevitable outcome is that NIST loses much of its influence on
the industry, which certainly is not in its own interest.
Exactly this. And corrosion of institutional trust is a massive
loss. Not for NIST or a bunch of corrupt academics who'd stop getting
brown envelopes to stuff their pockets, but for the entire world.
But since you obliquely raise an interesting question... what is
NIST's "interest" here?
Surely we're not saying that by spending trust "on very narrow issues"
its ultimate ploy is to deceive, defect and double-cross everything
the public believe it was created to protect? [1]
I'm all for the game, subterfuge and craft, but sometimes you just
bump up against the brute reality of principles and this is one of
those cases. Backdoors always cost you more than you ever thought
you'd save, and I've always assumed the people at a place like NIST
are smart enough to know that.
> Belief in containable backdoors is the height of naivety
What if it is acceptable for potential enemies to (eventually) also have access to that backdoor, and your goal in providing the backdoor is just to give the masses a false belief that they can communicate secretly?
Obviously those in the know would not use the flawed system, but instead would have a similar/better one without the intentional flaws.
The fact that NIST is not transparent is enough to assume that anything related to cryptography that NIST touches is compromised.
Frankly, I would assume any modern encryption is compromised by default - the gamble is just in who compromised it and how likely it would be that they want access to your data.
NIST standardized AES and SHA3, two designs nobody believes are compromised. The reason people trust AES and SHA3 is that they're the products of academic competitions that NIST refereed, rather than designs that NSA produced, as was the case with earlier standards. CRYSTALS-Kyber is, like AES and SHA3, the product of an academic competition that NIST simply refereed.
A competition is the perfect way to subvert a standard. A competition looks 'open', but in fact you can 'collaborate' with any team to make your weakened encryption and then persuade the judging panel to rate it highly.
But the competition process looks weird to me. Apparently it's not like a sports fixture, where the rules are set before the competition and the referee just enforces the rules; this referee adjusts the rules while the competition is underway.
NIST has form for juking the standards, or at least for letting the NSA juke them. If they're not completely transparent, then any standard they recommend is open to question, which isn't good for a standard.
The original argument is not "they have created unbroken encryption before". The argument is "encryption created by competitions that are merely refereed by NIST is trustworthy".
Slow? Fastest in HW, and comparable performance in SW. Moreover if you take into account security hardening, SHA3 is easier to protect than alternatives.
SHA-2 has more reliable HW acceleration from what I've seen.
SHA-1 in SW, according to smhasher, is 350 MiB/s, as is MD5 - so never use MD5, as SHA-1 is always stronger (SHA-2 is supposedly 150 MiB/s). HW-accelerated SHA-1 and SHA-2 are both ~1.5 GiB/s, and HW acceleration is widely available on x86.
Blake3 is the most interesting because it’s competitive with SHA2 even without HW acceleration. I wonder how it would fare with HW acceleration.
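For anyone who wants to sanity-check numbers like these on their own machine, here's a rough single-threaded benchmark sketch using Python's hashlib (absolute figures depend heavily on the CPU, the OpenSSL build and whether SHA extensions are in play; BLAKE3 isn't in the standard library, so it's left out):

    import hashlib
    import time

    def throughput_mib_s(name, data, seconds=1.0):
        """Feed `data` repeatedly into the named hash for ~`seconds`, return MiB/s."""
        h = hashlib.new(name)
        processed = 0
        start = time.perf_counter()
        while time.perf_counter() - start < seconds:
            h.update(data)
            processed += len(data)
        return processed / (time.perf_counter() - start) / 2**20

    block = b"\x00" * (1 << 20)  # 1 MiB buffer
    for algo in ("md5", "sha1", "sha256", "sha512", "sha3_256", "blake2b"):
        print(f"{algo:>9}: {throughput_mib_s(algo, block):8.0f} MiB/s")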
First, each Blake version was written by totally different authors AFAICT, so while each version is faster, each faster construction came from a different team. I don't see why you're putting so much confidence in the original SHA-2 team being able to come up with a faster hash function.
FWIW, SHA-3 is slower than SHA-2 although of course SHA-3 is a totally different construction from SHA-2 by design.
As a non-cryptographer this whole conversation chain has me confused. I thought it was desirable for a good hashing algorithm to be slow, to make brute force difficult.
Yeah this is a super common point of confusion. You need to worry about slowing down a brute force attacker when you're trying to protect a low-entropy secret, which basically always means user passwords. But when your secrets are big enough, like 128 bits, brute force becomes impossible just by virtue of the size of the search space. So for most cryptographic applications, it's the size of your key (and the fact that your hash or cipher doesn't _leak_ the key) that's protecting you from brute force, not the amount of time a single attempt takes.
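To put rough numbers on that (the attempts-per-second figure below is just an assumption for illustration, not a measured rate):

    # Back-of-the-envelope brute-force estimate.
    attempts_per_second = 10**12           # assumed: a very well-resourced attacker
    seconds_per_year = 60 * 60 * 24 * 365

    years_for_128_bit_key = 2**128 / attempts_per_second / seconds_per_year
    print(f"128-bit key: ~{years_for_128_bit_key:.1e} years")          # ~1.1e19 years

    # An 8-character lowercase password is a far smaller search space.
    seconds_for_password = 26**8 / attempts_per_second
    print(f"8-char lowercase password: ~{seconds_for_password:.2f} s") # ~0.21 s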
Yeah, SHA3 isn't directly for password hashing. You should use a memory-hard PBKDF (password-based key derivation function) like Argon2, bcrypt, or scrypt. These functions are all constructions that run the hash many times, so the underlying hash speed is irrelevant.
For high-entropy inputs, like the key for an HMAC, you want the hash to be fast, because it's practically impossible to brute-force the 256-bit input key, and you often apply the hash to large inputs.
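To make the split concrete, here's a minimal sketch using only Python's standard library (scrypt stands in for the memory-hard KDFs above, and the parameters are illustrative rather than a recommendation):

    import hashlib
    import hmac
    import os

    # Low-entropy secret (a password): stretch it with a slow, memory-hard KDF.
    password = b"correct horse battery staple"
    salt = os.urandom(16)
    derived_key = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, dklen=32)

    # High-entropy secret (a random 256-bit key): a fast hash is exactly what you want.
    mac_key = os.urandom(32)
    message = b"some large payload " * 100_000
    tag = hmac.new(mac_key, message, hashlib.sha256).hexdigest()
    print(derived_key.hex(), tag)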
That's true for passwords (where you don't really use raw SHA or the like, but rather something like bcrypt, which is intentionally slow), not necessarily for all uses of hashes. E.g. computing a SHA sum of some binary doesn't need to be slow; it just has to be practically impossible to create a collision (which is what's required of every good cryptographic hash function).
Blake2 was not created (December 2012) until after the SHA-3 competition, which ended in October 2012 (Keccak being the winner). It was Blake1 that was entered. Blake3 was released in 2020.
I'm sure a Keccak2/3 could have also been better than the original Keccak1, but that was not available either.
The American people - who are the only ones who matter - want to live in a superpower.
Everything America does is in service of maintaining its position as the hegemonic player. The US intelligence agencies have infiltrated every university and tech company since forever. It's their job.
Sounds like it's time for academia to form a crypto equivalent of NIST amongst universities, so they can put out transparent versions of new cryptographic algorithms that can be traced back to their birth and other cryptographers can look for holes - if NIST is unwilling to be open about its processes.
Why aren't other participants in the competition --- most of them didn't win! --- saying the same thing? Why are the only two kinds of people making this argument a contest loser writing inscrutable 50,000 word manifestos and people on message boards who haven't followed any of the work in this field?