>how compilers and compiler engineers are sabotaging the efforts of cryptographers
I'm not exposed to this space very often, so maybe you or someone else could give me some context. "Sabotage" is a deliberate effort to ruin/hinder something. Are compiler engineers deliberately hindering the efforts of cryptographers? If yes... is there a reason why? Some long-running feud or something?
Or, through the course of their efforts to make compilers faster/etc, are cryptographers just getting the "short end of the stick" so to speak? Perhaps forgotten about because the number of cryptographers is dwarfed by the number of non-cryptographers? (Or any other explanation that I'm unaware of?)
It's more a viewpoint thing. Any construct cryptographers find that runs in constant time is something that could be optimized to run faster for non-cryptographic code. Constant-time constructs essentially are optimizer bug reports. There is always the danger that by popularizing a technique you are drawing the attention of a compiler contributor who wants to speed up a benchmark of that same construct in non-cryptographic code. So maybe it's not intended as sabotage, but it can sure feel that way when everything you do is explicitly targeted to be changed after you do it.
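To make that concrete, here's roughly the kind of construct in question (a sketch in C; the masking idiom is generic, not any particular library's code). It's a branch-free select, and nothing in the language standard stops an optimizer from recognizing the pattern and compiling it back into a branch or conditional move if that looks faster on some benchmark:

    #include <stdint.h>

    /* "Constant-time" select written with masks instead of a branch. */
    uint32_t ct_select(uint32_t cond, uint32_t a, uint32_t b)
    {
        /* mask is all-ones when cond != 0, all-zeros otherwise */
        uint32_t mask = (uint32_t)(-(int32_t)(cond != 0));
        return (a & mask) | (b & ~mask);
    }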
It’s not intentional. The motivations of CPU designers, compiler writers, and optimizers are at odds with those of cryptographers. The former want to use every trick possible to squeeze out additional performance in the most common cases, while the latter absolutely require indistinguishable performance across all possibilities.
CPUs love to do branch prediction so that computation is already underway when a branch is guessed correctly, but cryptographic code needs equal performance no matter the input.
When a programmer asks for some register or memory location to be zeroed, they generally just want to be able to use a zero in some later operation and so it doesn’t really matter that a previous value was really overwritten. When a cryptographer does, they generally are trying to make it impossible to read the previous value. And they want to be able to have some guarantee that it wasn’t implicitly copied somewhere else in the interim.
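The textbook example (a hedged sketch, not any particular project's code): a memset of key material right before the buffer goes out of scope is a dead store as far as the compiler is concerned, so the as-if rule lets it drop the zeroing entirely. One common workaround is to launder the call through a volatile function pointer; C11's memset_s and platform functions like explicit_bzero exist for the same reason.

    #include <string.h>

    void handle_secret(void)
    {
        unsigned char key[32];
        /* ... derive and use key ... */
        memset(key, 0, sizeof key);   /* dead store: the compiler may delete it,
                                         since key is never read afterwards */
    }

    /* Workaround sketch: a volatile function pointer the compiler cannot
     * see through, so the store can't be proven dead. */
    static void *(*const volatile memset_v)(void *, int, size_t) = memset;

    void secure_wipe(void *p, size_t n)
    {
        memset_v(p, 0, n);
    }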
Since the sibling comment is dead and thus I can’t reply to it: Search for “unintentional sabotage”, which should illustrate the usage. Despite appearances, it isn’t an oxymoron. See also meaning 3a on https://www.merriam-webster.com/dictionary/sabotage.
Every dictionary I've looked at, wikipedia, etc. all immediately and prominently highlight the intent part. It really seems like the defining characteristic of "sabotage" vs. other similar verbs. But, language is weird, so, ¯\_(ツ)_/¯.
As compilers have become more sophisticated, and hardware architectures more complicated, there has been a growing sentiment that some of the code transformations done by modern compilers make code hard to reason about and to predict.
A lot of software engineers see this as compiler engineers only caring about performance as opposed to other aspects such as debuggability, safety, compile time, productivity, etc... I think that's where the "sabotage" comes from. Basically, the focus on performance to the detriment of other things.
My 2 cents: The core problem is programmers expecting invariants and properties not defined in the language standard. The compiler only guarantees things as defined in the standard; expecting anything else is problematic.
I don't think it's nefarious but it is sabotage. There's long been an implicit assumption that optimization should be more important than safety.
Yes, languages do lack good mechanisms to mark variables or sections as needing constant-time operation ... but compiler maintainers could have taken the view that that means all code should be compiled that way. Now instead we're marking data and sections as "secret" so that they can be left unoptimized. But why not the other way around?
I understand how we get here; speed and size are trivial to measure and they each result in real-world cost savings. I don't think any maintainer could withstand this pressure. But it's still deliberate.
> Now instead we're marking data and sections as "secret" so that they can be left unoptimized. But why not the other way around?
Worse cost-benefit tradeoff, perhaps? I'd imagine the amount of code that cares more about size/speed than constant-time operation far outnumbers the amount of code which prioritizes the opposite, and given the real-world benefits you mention and the relative newness of concerns about timing attacks I think it makes sense that compiler writers have defaulted to performance over constant-time performance.
In addition, I think a complicating factor is that compilers can't infer intent from code. The exact same pattern may be used in both performance- and timing-sensitive code, so absent some external signal the compiler has to choose whether it prioritizes speed or timing. If you think more code will benefit from speed than timing, then that is a reasonable default to go with.
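And in practice that external signal often lives outside the language entirely. One approach I've seen (a sketch below, assuming GCC/Clang-style inline asm; other compilers need their own spelling) is a value barrier: an empty asm statement that makes the mask opaque, so the optimizer can't prove the masked select is equivalent to a branch and turn it back into one.

    #include <stdint.h>

    /* Optimization barrier: the empty asm claims to read and write x, so the
     * compiler must treat its value as unknown afterwards. */
    static inline uint32_t value_barrier(uint32_t x)
    {
        __asm__ volatile ("" : "+r"(x));
        return x;
    }

    /* Masked select guarded by the barrier; the compiler can no longer see
     * that mask is either all-ones or all-zeros. */
    uint32_t ct_select_guarded(uint32_t cond, uint32_t a, uint32_t b)
    {
        uint32_t mask = value_barrier((uint32_t)(-(int32_t)(cond != 0)));
        return (a & mask) | (b & ~mask);
    }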