
"All we can do is tell people that NIST are the ones in the room making the decisions, but if you don't believe us, there's no way you could verify that without being inside NIST" says Moody.

There's our problem - right there!

If a body as important as NIST is not so utterly transparent that any random interested person can comb through every meeting, memo, and coffee-break conversation, then it needs disbanding and replacing with something that properly serves the public.

We have bastardised technology to create a world of panoptic surveillance, and then misused it by scrutinising the private lives of ordinary citizens.

This is arse-backwards. If monitoring and auditing technology has any legitimate use, the only people who should morally (though willingly) give up some of their privacy are those who serve in public: in our parliaments, councils, congress, government agencies and standards bodies.

> All we can do is tell people

No. You can prove it, and if you cannot, step aside for leadership better suited to serving the public interest.



I'm now picturing a slightly different government to ours, where the oversight bodies have the additional function of making sure officials don't talk to one another outside of recorded meetings under the public's eye.

It seems like a huge burden. It is the kind of thing, though, that in a parallel universe would make total sense: our representatives should be beholden to us.


OTOH, ensuring our representatives are 100% beholden to us also means screaming in the next primary about every bipartisan deal made.

Which has the net effect of decreasing the ability to reach nobody-is-happy compromises.

Which is something else people say they want.

I'm unconvinced that private, smoke-filled backrooms don't have an essential place as the grease that keeps things running well.


> I'm unconvinced that private, smoke-filled backrooms don't have an essential place as the grease that keeps things running well.

I hear that, and there's a case for it. Diplomacy, maneuvering and negotiation require secrets and enclaves.

So to allow for that you need a few things:

  -  Strict official records of affairs

  -  Strong penalties for fraud, malign influence and intimidation

  -  Whistleblower protection

The last of these essential checks-and-balances has gone to shit in our culture. Even if we pardoned Edward Snowden and made him a "hero of democracy" tomorrow, it's still a mountain of work to restore the essential sense of civic responsibility, patriotism and duty that allows those people who discover or witness corruption to step up and challenge it, safe in the knowledge that the law and common morality are on their side.


Billions (the TV show), of all places, made a somewhat similar argument in favor of post-hoc investigations in its last episode.

Record and share everything immediately = no room for deals

Record nothing = too much room for corruption

So we land at... record everything + only review with just cause + strict whistleblower protections.

Which seems a nice splitting of the matter, but requires a strong, independent third party (e.g. judicial branch) to arbitrate access requests. With tremendous pressure and incentives to breach that limit.


Many governments do have laws like this, called "sunshine laws". Enforcing them can be difficult though, and often enough they fail to achieve the transparency that is their goal while also substantially hindering process.


Modern panopticon-level surveillance is not deployed through weakening encryption. It's just built right into platforms and apps, and people willfully install it because it gives them free services and addictive social media feeds.

You don't need to weaken encryption to spy on people. You just have to give them a dancing bunny and to see the dancing bunny they must say yes to "allow access to contacts" and "allow access to camera" and "allow access to microphone" and "allow access to documents" and ...

For the higher-brow version replace dancing bunny with free service.

In addition the more we adopt and make use of cloud command and control architectures the more surveilled we become, because it becomes trivial for anyone with access to the cloud provider's internals to tap everyone's behavior. This could be done with or without the knowledge of the provider itself. The more such services we use the more data points we are surrendering, and these can be aggregated to provide quite a lot of information about us in near-real-time.


> In addition the more we adopt and make use of cloud command and control architectures the more surveilled we become, because it becomes trivial for anyone with access to the cloud provider's internals to tap everyone's behavior.

Spoke about this last night in London

https://www.youtube.com/watch?v=mcWIQALtOtg


So 24x7 surveillance of anyone working on this stuff, with everything gathered visible to everyone? Would anyone take such a job?


Every US government office - at both the state and federal levels - has recordkeeping requirements. It's the reason we can submit FOIA requests that come back with data from the 30s.


Is every conversation in a rest room recorded and available?

If you don't trust people creating cryptography standards you cannot really leave gaps, can you?


Most officials communicate in ways that circumvent FOIA.


Adding to this, FOIA doesn't guarantee useful answers. Having gone through the process whilst in the military, I found a few loopholes. They can give a bloated answer with boxes and boxes of useless, mostly unrelated information. They did this to me. They can also decline to answer and just say it's classified. They can also just decline to answer through stalling tactics. It served my purpose, which was to stall them from collecting DNA.


Hillary Clinton's email server is a famous example of attempting to avoid FOIA requests.


From what I hear, it's also an example of how ineptly run the State Dept's IT systems were.

Complex tasks like "set up a new computer" had months of lead time.


That wouldn't explain the intentional destruction of the BlackBerry phones and the deletion of thousands of emails.


I'd say the haphazard destruction of Blackberries and iPads with hammers is pretty explicit evidence of how bad State Dept IT policy and execution was.

Maybe I've worked in large corporations too much, but my first question when I see policy violations is not "How is this person conspiring?" but rather "What made following the official policy difficult? And how can we fix that?"

Also, the State Dept seems like exactly the sort of place policy violations become culturally routine: non-technical experts, doing work that is arguably "more important", with IT seen as a cost rather than profit center.

Perfect storm for policy-in-name-only.


She must've been unaware of her legal requirements under FOIA, which her husband signed into law, when she ordered the people's emails destroyed.


Or, she didn't even think about records retention or device security, in the same way that most politicians/executives don't, assuming it was handled by someone.

And it was in fact not, because there was no functioning policy enforcement at State.


> there was no functioning policy enforcement at State

I agree, the laws were not enforced in this case.


I would have gone with Petraeus' gmail account. The one that got the Chinese to hack Google.


Why not? Have you ever worked in an open kitchen?

If you can't do your work for the public under public scrutiny, you shouldn't.


How is that in any way like working in an open kitchen?

Anyway, interesting take that people working for the public should have no right to privacy in any way. Not sure you will find many people for such work, though.


How is it any different?

You don't have privacy in what you do at work when it comes to your employer. That's why it's called privacy: it relates to private and personal things. Your work for someone else is, by definition, not private.

You don't get to push stuff into production and play coy about what you're pushing, do you? Why would that change when the employer is the public?


Putting aside finer points on privacy at work (depending on jurisdiction, not so black and white), recording and making publicly available all work-related communication goes way beyond the usual surveillance at workplaces, even in regulated communication settings. I at least have never worked in any place that would insist on recording work-related conversations with a co-worker during the morning commute (and having them publicly available).


Does mandating that any work-related communication be recorded really invade the privacy of the people working there?


As it would be based on distrust, you'd have to record everything everywhere to keep people from evading it.

Even innocently: if, for example, two employees meet at home for dinner with their respective families and there is talk about work - that needs to be recorded, in that view. Or people car-sharing on the way to work - recorded. "Any work-related communication" is very, very broad.

And everything recorded open to the public, too.


The people are still human beings. The process is still open to FOIA. It’s not perfect, but it’s the right solution.


Sorry, what is the right solution? Total surveillance? The current approach? Some middling thing?


FOI


But how do we prove the cooks aren't talking to each other outside the kitchen?


If there's something wrong with the work they do in the kitchen, it doesn't matter how that came to be. If there is nothing wrong with it, it doesn't matter what else (other than something that would make them do something they shouldn't in the kitchen) they talk about outside of the kitchen.

The solution can't possibly be not even knowing the difference between someone putting salt or toenails into a pan -- but ensuring cooks don't talk to each other, so that even though we have no clue what they're doing or should be doing, we magically know nothing bad is going on.


How do we know the cooks aren't holding out on a better recipe for Alfredo sauce that uses more salt? The recipe they're using seems adequate, but there's a better formula they could be using by adding salt, except they've all gotten together outside of work to conspire against us and say there's nothing wrong with the amount of salt they're using. Who knows who's paying them to say this? We need to follow them everywhere, see who they see, read what they write, audit their bank records. We need proof.


It seems wildly shortsighted as well.

I think everyone here is pretty clear how they would ethically view such a thing, but view it from NIST's (/ NSA's) perspective for the sake of argument. Maybe there's a specific threat where NIST (or presumably the NSA) believes it has a mandate to insert a backdoor.

In order to successfully do this, NIST needs to maintain a very large bank of social capital and industry trust that it can spend on very narrow issues.

But over the years there have been enough strange things (Dual EC DRBG being the most notorious) that that trust, at least when it comes to crypto design, simply isn't there. My perception is that newer ECC standards promoted by NIST have been trusted substantially less than AES was when it was released, and I can think of a number of major issues over the years that would lead to this distrust.

The inevitable outcome is that NIST loses much of its influence on the industry, which certainly is not in its own interest.


Everyone also discounts the other reason NIST (with NSA behind the scenes) might be shifty -- they know of a mathematical or computational exploit class that no one else does.

And therefore want to do things-which-seem-pointless-to-everyone-else to an algorithm to guard against it.

Without disclosing what "it" is.

Everyone's quick to jump to the "NSA is weakening algorithms" explanation, but there's both historical and practical precedent for the strengthening alternative.

After all, if the US government and military use a NIST-standardized algorithm too... how is using one with known flaws good for the NSA? They have a dual mission.


>there's both historical and practical precedent for the strengthening alternative.

I'm aware of the DES S-boxes; are there other examples of this?


SHA (retroactively named SHA-0) was withdrawn after publication and replaced with a stronger version, SHA-1 [0].

[0] https://en.wikipedia.org/wiki/SHA-1#Development


> They have a dual mission

Which is why I don't buy anything from the apologists for "manageable" backdoors.

> strengthen

This is a good theory and interesting take.


> And therefore want to do things-which-seem-pointless-to-everyone-else to an algorithm to guard against it.

Or, more likely, to exploit it.


>I think everyone here is pretty clear how they would ethically view such a thing, but view it from NIST's (/ NSA's) perspective for the sake of argument. Maybe there's a specific threat where NIST (or presumably the NSA) believes it has a mandate to insert a backdoor.

That's an incredibly charitable version of their point of view. How's this for their POV: they're angry that they can't see every single piece of communication, and they think they can get away with weakening encryption because nobody can stop them legally (because the proof is classified), and nobody's going to stop them by any other avenue either.


> view it from NIST's (/ NSA's) perspective for the sake of argument. Maybe there's a specific threat where NIST (or presumably the NSA) believes it has a mandate to insert a backdoor.

Without any /sarcasm tags I have to take that at face value, and frankly there are few words to fully describe what a colossally stupid idea (not your idea, I am sure) that is. Belief in containable backdoors is the height of naivety, recklessly playing fast and loose with everyone's personal security, our entire economy and national security.

That is to say, even taking Hollywood Terror Plots into consideration [0], I don't believe there is ever a "mandate to insert a backdoor".

> In order to successfully do this, NIST needs to maintain a very large bank of social capital and industry trust that it can spend on very narrow issues.

Having some "trust to burn" is great for lone operatives, undercover mercs, double agents and crooks that John le Carre described as fugitives living by the seat of expedient alliances and fast goodbyes. Fine if you can disappear tomorrow, reinvent yourself and pop up somewhere else anew.

But absolutely no use for institutions holding on to any hope for permanence and the power that brings.

> The inevitable outcome is that NIST loses much of its influence on the industry, which certainly is not in its own interest.

Exactly this. And corrosion of institutional trust is a massive loss. Not for NIST or a bunch of corrupt academics who'd stop getting brown envelopes to stuff their pockets, but for the entire world.

But since you obliquely raise an interesting question... what is NIST's "interest" here?

Surely we're not saying that by spending trust "on very narrow issues" its ultimate ploy is to deceive, defect and double-cross everything the public believe it was created to protect? [1]

I'm all for the game, subterfuge and craft, but sometimes you just bump up against the brute reality of principles and this is one of those cases. Backdoors always cost you more than you ever thought you'd save, and I've always assumed the people at a place like NIST are smart enough to know that.

[0] https://www.schneier.com/essays/archives/2005/09/terrorists_...

[1] https://cybershow.uk/episodes.php?id=16


> Belief in containable backdoors is the height of naivety

What if it is acceptable for potential enemies to (eventually) also have access to that backdoor, and your goal in providing the backdoor is just to give the masses a false belief that they can communicate secretly?

Obviously those in the know would not use the flawed system, but instead would have a similar/better one without the intentional flaws.


> Obviously those in the know would not use the flawed system

Perhaps the clearest argument against such a ploy is the TETRA radio system. Turns out in this case that "the masses" are our:

- police and emergency services

- military and civil defence forces

- diplomatic and political security, escorts, attaches and close security

You see, the problem is this concept of "in the know". It's an insoluble information-hazard and boundary problem:

Two people can keep a secret, if one of them is dead.


The fact that NIST is not transparent is enough to assume that anything related to cryptography that NIST touches is compromised.

Frankly, I would assume any modern encryption is compromised by default - the gamble is just in who compromised it and how likely it would be that they want access to your data.


NIST standardized AES and SHA3, two designs nobody believes are compromised. The reason people trust AES and SHA3 is that they're the products of academic competitions that NIST refereed, rather than designs that NSA produced, as was the case with earlier standards. CRYSTALS-Kyber is, like AES and SHA3, the product of an academic competition that NIST simply refereed.


A competition is the perfect way to subvert a standard. A competition looks 'open', but in fact you can 'collaborate' with any team to make your weakened encryption and then persuade the judging panel to rate it highly.


But the competition process looks weird to me. Apparently it's not like a sports fixture, where the rules are set before the competition and the referee just enforces the rules; this referee adjusts the rules while the competition is underway.

NIST has form for juking the standards, or at least for letting the NSA juke them. If they're not completely transparent, then any standard they recommend is open to question, which isn't good for a standard.


The man has walked into a bank many times over his life; no way he could decide to rob it one day.


You are creating a strawman.

The original argument is not "they have created encryption before that isn't broken". The argument is "encryption created by competitions that NIST only referees is trustworthy".


So it’s worse. They already robbed a bank in the past.


And lest we forget Dual_EC_DRBG


SHA3 is fine, but it's so slow that I don't know many people who use it.


Slow? Fastest in HW, and comparable performance in SW. Moreover if you take into account security hardening, SHA3 is easier to protect than alternatives.


Comparable in software, to what? Password hashes? :)


Last I checked, SHA3-512 is like 4x slower than SHA2-512 on x86.
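
A rough way to sanity-check that ratio on your own machine, using only Python's stdlib hashlib (numbers vary a lot by CPU and build; blake2b is included just for comparison with the BLAKE discussion below, while BLAKE3 would need a third-party package):

  import hashlib, time

  def throughput_mb_s(name, data, rounds=200):
      # Hash `data` `rounds` times and report throughput in MB/s.
      start = time.perf_counter()
      for _ in range(rounds):
          hashlib.new(name, data).digest()
      elapsed = time.perf_counter() - start
      return len(data) * rounds / elapsed / 1e6

  data = b"\x00" * (1 << 20)  # 1 MiB of input
  for name in ("sha512", "sha3_512", "blake2b"):
      print(f"{name:10s} {throughput_mb_s(name, data):8.1f} MB/s")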


And SHA-1 is faster than SHA-2, with MD5 faster than both. But speed isn't the only reason to choose an algorithm.


SHA-2 has more reliable HW acceleration, from what I've seen.

SHA-1 in SW, according to smhasher, is 350 MiB/s, as is MD5, so never use MD5, as SHA-1 is always stronger (SHA-2 is supposedly 150 MiB/s). HW-accelerated SHA-1 and SHA-2 are both ~1.5 GiB/s, and on x86 HW acceleration is always available.

Blake3 is the most interesting because it’s competitive with SHA2 even without HW acceleration. I wonder how it would fare with HW acceleration.


> Blake3 is the most interesting […]

Not really? SHA-2 was released in 2001:

* https://en.wikipedia.org/wiki/SHA-2

Blake3 was released in 2020. I'm sure if the folks that created SHA-2 had designed a hash in 2020 they could have done something better/faster as well.


First, each Blake version was written by totally different authors AFAICT, so while each version is faster, the faster construction came from a different team each time. I don't see why you're putting so much confidence in the idea that the original SHA-2 team could come up with a faster hash function.

FWIW, SHA-3 is slower than SHA-2 although of course SHA-3 is a totally different construction from SHA-2 by design.


As a non-cryptographer, this whole conversation chain has me confused. I thought it was desirable for a good hashing algorithm to be slow, to make brute force difficult.


Yeah this is a super common point of confusion. You need to worry about slowing down a brute force attacker when you're trying to protect a low-entropy secret, which basically always means user passwords. But when your secrets are big enough, like 128 bits, brute force becomes impossible just by virtue of the size of the search space. So for most cryptographic applications, it's the size of your key (and the fact that your hash or cipher doesn't _leak_ the key) that's protecting you from brute force, not the amount of time a single attempt takes.
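
As a rough worked example (the attacker's guess rate here is a made-up, deliberately generous assumption):

  # Assume an attacker who can try 10^12 keys per second (already generous).
  guesses_per_second = 10**12
  seconds_per_year = 365 * 24 * 3600

  keyspace = 2**128                        # possible 128-bit keys
  years = keyspace / (guesses_per_second * seconds_per_year)
  print(f"{years:.2e} years")              # ~1.08e+19 years to sweep the whole space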


Yeah, SHA3 isn't directly for password hashing. You should use a memory-hard PBKDF (password-based key derivation function) like Argon2, bcrypt, or scrypt. These functions are all constructions that run the underlying hash many times, so the raw hash speed is irrelevant.

For high-entropy inputs, like the key for an HMAC, you want the hash to be fast, because it's practically impossible to brute-force a 256-bit input key, and you often apply the hash to large inputs.
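
A minimal sketch of that split, using only Python's stdlib (scrypt stands in for the memory-hard KDF since Argon2 needs a third-party package; the cost parameters are illustrative, not a tuned recommendation):

  import hashlib, hmac, os

  # Low-entropy secret (a password): stretch it with a deliberately
  # expensive, memory-hard KDF before storing or using anything derived from it.
  password = b"correct horse battery staple"
  salt = os.urandom(16)
  derived_key = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, dklen=32)

  # High-entropy secret (a random 256-bit key): a fast hash is what you want,
  # e.g. HMAC-SHA256 over a large message.
  mac_key = os.urandom(32)
  message = b"some large payload " * 100_000
  tag = hmac.new(mac_key, message, hashlib.sha256).hexdigest()
  print(derived_key.hex(), tag)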


That's true for passwords (where you don't really use raw SHA or something, but rather something like bcrypt which is intentionally slow), not necessarily for all uses of hashes. E.g. computing a SHA sum of some binary doesn't need to be slow; it just has to be practically impossible to create a collision (which is what's required of every good cryptographic hash function).
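
For instance, checksumming a binary is exactly the case where a fast hash is what you want; a small sketch with Python's stdlib (the path is just a placeholder):

  import hashlib

  def sha256_of_file(path, chunk_size=1 << 20):
      # Stream the file in 1 MiB chunks so large binaries don't have to fit in memory.
      h = hashlib.sha256()
      with open(path, "rb") as f:
          while chunk := f.read(chunk_size):
              h.update(chunk)
      return h.hexdigest()

  # print(sha256_of_file("/path/to/some-binary"))  # hypothetical path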


Faster than Blake2/3? Not even close!


> Faster than Blake2/3? Not even close!

Blake2 was not created (December 2012) until after the SHA-3 competition, which ended in October 2012 (Keccak being the winner). It was Blake1 that was entered. Blake3 was released in 2020.

I'm sure a Keccak2/3 could have also been better than the original Keccak1, but that was not available either.


You never specified timing. You made a blanket statement as if it was still true.


SHA3 is mostly a hedge for the risk SHA2 is broken.


The American people- who are the only ones who matter- want to live in a superpower.

Everything America does is in service of maintaining its position as the hegemonic player. The US intelligence agencies have infiltrated every university and tech company since forever. It's their job.


Slava America


Sounds like it's time for academia to form a crypto equivalent of NIST amongst universities, so they can put out transparent versions of new cryptographic algorithms that can be traced back to their birth and that other cryptographers can probe for holes, if NIST is unwilling to be open about its processes.


Why aren't other participants in the competition --- most of them didn't win! --- saying the same thing? Why are the only two kinds of people making this argument a contest loser writing inscrutable 50,000-word manifestos and people on message boards who haven't followed any of the work in this field?



