What are the use-cases, and when would such a policy have led to a drastically different outcome (in the case of a crime being committed, industrial espionage, or what-have-you)?
If I have a laptop encrypted with luks or whatever, then what? What are the consequences of non-compliance?
The complaint that it looks old-fashioned is addressed partly by using ttk (themed tk), but also by the fact that just about every aspect of the widgets can be styled using a simple xresources-style config.
It's fast and lightweight, free forever, stable as hell, and there's lots of resources online.
Anecdotally, my Aunt had banned all guns and violent toys from her house. One morning, after serving breakfast, she noticed that my cousin had bitten his toast into the shape of a gun and was making "pew-pew" motions. They laugh and laugh about that. It is what it is.
I really liked Redis for a long time. Simple, fast data structures in memory. That's it. Along the way there have been some nice enhancements like Lua, which solves a lot of the atomicity issues. But somewhere after 4.0 I feel they have lost their way. First they saw all the attention Kafka/event-stuff was receiving, so they baked in this monstrous, complicated, stateful streams feature. Now we have SSL (do people really expose Redis on the internet??), ACLs, cluster stuff, and, most relevant to me, a new wire protocol.
To my thinking, Redis fit very well in the "lightweight linux thing" category. It seems they aspire to be enterprise software, and this may be a good move for Redis-the-Business, but it's not good for users like me who just want simple in-memory data-structures and as little state as possible. Forcing a new protocol that adds very little value (in my opinion) also seems like a great way to alienate your users.
I understand the sentiment, but things are a bit different than they may look. About SSL, there is no way out of this. I was opposed to this feature for a long time, but now, simply because of changes in regulations, policies and so forth, a lot of use cases are migrating to SSL internally even if Redis is not exposed. And frankly it is really a mess to handle SSL proxies now that it seems everybody needs encryption. So what I did was the best I could do when checking for PRs to merge: 1) Opt-in, not compiled by default, no SSL library requirements. 2) Connection abstraction: there is no SSL mentioned inside the code; everything is in a different file.
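To give a concrete sense of how opt-in this is, here is a minimal sketch of building and starting Redis 6 with TLS enabled (the certificate paths and port below are placeholders):

make BUILD_TLS=yes
./src/redis-server --tls-port 6379 --port 0 \
    --tls-cert-file /path/to/redis.crt \
    --tls-key-file /path/to/redis.key \
    --tls-ca-cert-file /path/to/ca.crt

Skip BUILD_TLS=yes and none of the TLS code is even compiled in.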
About the "Kafka" thing, actually streams were wanted by myself, very strongly, and not suggested by Redis Labs. Let's start thinking at Redis ad a data structure server and at streams without the consumer groups part (which is totally optional). It was incredible we had no way to model a "log" in a serious way. No time series easily, using hacks and with huge memory usage because sorted sets are not the solution for this problem. But then why consumer groups? Because for a long time people had this problem of wanting a "persistent Pub/Sub": you can't lose messages just because clients disconnect in most use cases. Btw this Kafka monster is a total of 2775 lines of code, including comments. 1731 lines of code without comments. In other systems this is the prelude in the copyright notice.
As for ACLs: to manage to survive 10 years without ACLs we had to resort to all kinds of tricks, like renaming commands to unguessable strings, still with the panic of some library calling FLUSHALL by mistake because the developer was testing it on her/his laptop. Really, ACLs have nothing to do with enterprise, but some safety is needed. The ACL monster is 1297 lines of code, and it is one of the most user-friendly security systems you'll find in a database.
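Side by side, the old trick versus what ACLs allow; a sketch only, with a made-up user name, password and key pattern:

rename-command FLUSHALL ""
ACL SETUSER app on >s3cret ~app:* +get +set +del

The first line (in redis.conf) hides a dangerous command from everyone; the ACL rule instead gives one client only the commands and key patterns it actually needs, and once a real admin user exists the default user can be switched off entirely.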
Actually all those features have a great impact on the users, a huge impact on day-to-day operations, and are designed to be as simple as possible. And Redis Labs actually has only to lose from all this, because those were all potential "premium" features; instead now they are in, and every other Redis provider will have them automatically as standard. So... reality is a bit different, and it's not a conspiracy to gain market share or the like.
My company has no choice - we have to use SSL internally for regulatory purposes. Right now we're using an stunnel solution for having our clients connect to redis - I am super excited that I'll be able to remove this workaround in the future!
Putting the server behind TLS is a minor part of the process.
If you want any kind of HA, you'll have multiple instances of Redis, with changes replicated from the writable node to the others.
That traffic needs to be encrypted too - and redis (pre 6.0) knows nothing about TLS.
So now you need a tunnel to each other Redis node.
Oh but you also want Sentinel to make sure a failure means a new primary node is elected... and sentinel doesn't speak TLS either, and they need to both speak to each other and to the redis nodes... so that's another set of TLS tunnels you need to set up.
I set up redis on 3 nodes for a customer; if you tried to draw the stunnel setup on paper, it'd look like you're illustrating a plate of spaghetti.
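For anyone who hasn't lived this, each arrow in that spaghetti diagram is roughly one client-side stunnel stanza like the sketch below (hostnames, ports and paths are placeholders), with a matching server-mode stanza on the other end:

[redis-primary]
client = yes
accept = 127.0.0.1:6379
connect = redis-primary.internal:6380
cert = /etc/stunnel/client.pem
CAfile = /etc/stunnel/ca.pem
verify = 2

Multiply that by every replica-to-primary link and every Sentinel-to-Sentinel and Sentinel-to-Redis link and the configuration sprawl adds up quickly.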
How is stunnel a workaround? Honestly that would seem like an ideal solution to me - "do one thing, do it well". Stunnel can focus on having a rock solid TLS implementation and Redis can focus on being a great DB.
Redis streams have been a phenomenal addition to my toolbelt in designing realtime ETL/ELT pipelines. Before, I had to make do with a far more complicated Pub/Sub + job queue (Tasktiger) setup. That all became redundant thanks to Redis streams.
Thank you!
It would really be awesome if there were a built-in way to attach/spill/persist individual streams to external data volumes (older/busy streams could run out of memory) and have it support hot swapping.
> Btw this Kafka monster is a total of 2775 lines of code, including comments. 1731 lines of code without comments. In other systems this is the prelude in the copyright notice.
Happy to talk shop anytime, feel free to reach out.
In short - I like to have audit-able dataflows in my pipelines. Streams are inherently timestamped and provide a great way to see how data changed over time. For one, if you had a bug or regression in the pipeline, you can precisely track down the impacted time window - no guessing needed.
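A sketch of what that auditing looks like in practice: stream entry IDs begin with a millisecond timestamp, so a suspect time window can be replayed with a single range query (the key name and timestamps below are made up):

XRANGE pipeline:events 1589000000000 1589003600000
XREVRANGE pipeline:events + - COUNT 100

The first command replays exactly the hour in question; the second walks backwards from the newest entries to find where the bad data starts.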
> Really ACLs have nothing to do with enterprise, but some safety is needed.
Huzzah!
Let’s stop calling basic security features “enterprise”.
Locking basic security features behind a paywall is a protection racket, pure and simple.
Small companies, and lone developers, need security, too.
If we are making software for consumers who won’t know any better, why not encourage (and make it trivial) for fledglings to do the right thing from the very beginning?
Why does every single company have to go through the same security mistakes on their way to Series A/B/C? Why can't we learn from our mistakes and make doing the right thing not just accessible, but easily accessible?
1000%. Basic security (and that includes an evolving basket of features) is not just for "enterprise", neither from the developer's POV nor a user's. How many hacks of unsecured databases, where users didn't even change the default credentials, have to make front-page news before people finally get that any database running anywhere is at risk, even on-prem with only your own people accessing it? Security is not an "advanced" feature. It is a foundational requirement before you even load data into a cluster.
I don't know what your browser is doing, but it is not behaving correctly. Maybe you are connected to a corporate VPN that is doing weird things to TLS?
bash-3.2$ telnet antirez.com 443
Trying 109.74.203.151...
telnet: connect to address 109.74.203.151: Connection refused
telnet: Unable to connect to remote host
Security guy here. I'd argue that SSL and ACL are always good things to have, especially for systems that store data.
Modern security practices typically dictate a defense-in-depth approach. The idea is that you will be compromised at some point (no security is perfect) and as such you should make any compromise that does happen as minimal as possible--you want to prevent attackers who get a foot in the door from rummaging around your network.
A key part of any defense-in-depth strategy is things like encryption and authentication/authorization. If you're using redis to store any kind of sensitive material, you want to make sure that only people on your network with the appropriate auth credentials can access it. This is one of the easiest ways to prevent drive-by data theft.
From here, SSL is a logical step. You need to ensure bad actors can't sniff network traffic and steal credentials.
I can't speak to streams or the other features you feel complicate Redis, but I think SSL+ACL are very important tools for increasing the cost to attackers that target redis instances leveraging those features.
Many systems don’t do TLS in process. TLS proxying is probably more common for systems deployed in the cloud (e.g. running nginx on the same node, or using a cloud load balancer).
AWS and GCP don’t even give you a way to install a cert yourself— you MUST use an ELB or bring your own certificate.
This is highly dependent on your environment. I work in finance and there is legislation saying we must encrypt all traffic on the wire.
Legislation aside, this also goes back to a defense-in-depth strategy; TLS proxying only works if the network behind the proxy will always be secure. You might be able to get away with running TLS on the same host as redis, but in all other cases I can think of you're going back to the 90's-era security policy of having a hard shell and a soft underbelly--anything that gets into the network behind your TLS proxy can sniff whatever traffic it wants.
EDIT: It occurs to me that you seem to be hinting at running redis as a public service. In that scenario it makes perfect sense to use a TLS proxy for versions of redis without SSL. That said, it's still important to encrypt things on your private network to ensure you aren't one breach away from having your whole stack blown open.
Regulated industry SRE here. I've run Redis at scale through stunnel, terminated through a proxy, and once Redis supported it, in-process.
In-process won by a mile, despite my feelings about redis from an operational perspective (read: not good). The added choreo, monitoring, and overall moving parts were strong contributors against external proxying.
Not sure what the argument is here. Many systems _do_ have TLS in process. Also, there are plenty of regulations/certifications that require encryption in transit. Terminating at a load balancer means you have an unencrypted hop.
> AWS and GCP don’t even give you a way to install a cert yourself.
If you mean as a service they provide, well, that’s what ACM[1] is for, no? I assume Google Cloud has something similar.
ACM doesn’t talk to EC2. AWS Enterprise Support will tell you with a straight face to let them handle TLS termination on the ELB/ALB, and keep things unencrypted in the VPC. Their claim is that the VPC has much more sophisticated security than TLS.
This is probably true. You can't eavesdrop on network traffic in a VPC because you never touch the actual layer 2 network; it's a virtualized network tunneled through the actual physical network, so you will never even see packets that aren't directed to you. I don't think there is a really strong security rationale for requiring SSL between an ELB and one of its target groups, but from a regulatory standpoint it's probably easier to say "encrypt everything in transit." This is why ELBs don't do certificate validation as well. It's unnecessary and extremely cumbersome to implement well, so if you need to have SSL between the ELB and a host, you can just toss a self-signed cert on the host and be done with it.
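For reference, the "toss a self-signed cert on the host" step is typically a one-liner along these lines (the key/cert paths and CN are placeholders):

openssl req -x509 -nodes -newkey rsa:2048 \
    -keyout /etc/ssl/private/backend.key \
    -out /etc/ssl/certs/backend.crt \
    -days 365 -subj "/CN=backend.internal"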
Can you see traffic between hosts in the same VPC, even if they wouldn't otherwise have access via security groups?
The scenario I'm imagining is that an attacker manages to gain access to one box in the VPC, and from there is able to snoop on (plaintext) traffic between an ELB that does TLS termination and some other box in the VPC that receives the (unencrypted) traffic.
If you encrypt all inter-box traffic, then this attacker still doesn't get to see anything. If not, then the attacker gets to snoop on that traffic.
I'm not sympathetic to lazy arguments like, "if an attacker has compromised one host in your VPC, it's game over". No, it's not. It's really really bad, but you can limit the amount of damage an attacker can do (and the amount of data they can exfiltrate) via defense-in-depth strategies like encrypting traffic between hosts.
You can't snoop on traffic between hosts in the same VPC. Here is a good video explaining why: https://www.youtube.com/watch?v=3qln2u1Vr2E&t=1592s. The tl;dr is that your guest OS (the EC2 instance) is not connected to the physical layer 2 network. The host OS hypervisor is, and when it receives a packet from the physical NIC, if that packet is not directed to the guest OS then it won't be passed to it. So the NIC on the guest OS (your EC2 instance) will never even see the packets that are not intended for it. Of course this gets slightly more complicated because AWS added some tools for traffic mirroring. So theoretically someone with the right access could set up a mirror to a host they control in the VPC and sniff the traffic that way. But if someone were able to pull that off then you're likely f'ed either way.
Right, they let you do it now but iirc that is a relatively recent feature that was resisted for a long time. And for good reason I think. The purpose of certificate validation is to verify that the remote machine is who they say they are. But those guarantees are already provided by the VPC protocol. In order to impersonate a target instance you would need to MITM the traffic, which isn't possible in a VPC.
This doesn't scale when you're using multiple replicating Redises, because every Redis needs to communicate with every other Redis. With TLS in-process, you can just sign keys and distribute them to hosts and you're done. With a tunnel like ghostunnel[1] (which we at Square built precisely for this type of purpose), you end up having to set up and configure n·(n-1) tunnels (which requires twice that number of processes) so that every host has a tunnel to every other host.
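To put that counting in concrete terms: with 3 replicating nodes that is 3·2 = 6 tunnels and roughly 12 tunnel processes; with 10 nodes it is already 10·9 = 90 tunnels, versus one certificate and key pair per host when TLS is handled in-process.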
> Now we have SSL (do people really expose Redis on the internet??).
This is 2020. The "hard outer shell, soft chewy center" model of security is dead and it's not coming back. Modern datacenters and cloud deployments use mTLS (mutually-authenticated TLS) between every service, everywhere, all the time.
There are some massive benefits to this. For starters, you can limit what services talk to one another entirely through distribution and signing of keys. Yes, this adds a burden of complexity if you go that route. But suddenly you don't have to care as much about (for instance) many network-exploitable vulnerabilities in your services, because someone with a foothold on your network can't even talk to your service in the first place if they don't have the right TLS cert, which is present only on the handful of machines, and readable only by the specific services, that are legitimately allowed to connect to it.
This is a much stronger guarantee than firewalling alone (though you should also use firewalling), because multiple services can be running on a host but only the applications that are allowed to talk to your service will have read access to that key.
On the flip side, you have stronger guarantees that the service you're connecting to really is the service you're expecting it to be. If you're storing sensitive information in Redis, you can know for sure that the port you've connected to is the right Redis and not another, less-sensitive application's.
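In Redis 6 terms, requiring client certificates comes down to a handful of redis.conf directives; a minimal sketch, with placeholder paths:

tls-port 6379
port 0
tls-cert-file /etc/redis/tls/redis.crt
tls-key-file /etc/redis/tls/redis.key
tls-ca-cert-file /etc/redis/tls/ca.crt
tls-auth-clients yes

With tls-auth-clients enabled, a client that cannot present a certificate signed by that CA never gets as far as issuing a command, which is exactly the "can't even talk to it" property described above.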
I used to think I was special: someone who comes in and discovers these ugly pockets of pus. The kind that, with a single poke, burst and create a very ugly problem.
In talking to other people high up on the technical side I realized it is a norm. The only question is if what I call "velocity of awesomeness of the product" makes the warts less important.
> After that I as a consultant get access to the network and apart from some test that a developer stood up nothing matches the glossy talk.
Or in my case recently... someone has generated a root certificate for the internal CA that uses an insecure crypto scheme, and Chrome still throws up a security error requiring users to click past the warnings to access the site.
"Can you generate and roll out a new cert please? This isn't really 'security'?"
"Oh we will get to it, can you just use the one you already have?"
I agree that many shops don't work this way, but they absolutely should. Anyone not developing a good defense-in-depth strategy, and just assuming that their edge firewalls will take care of them... well, they're one step away from a break-in and a data breach.
Our industry needs to do better, and not brush off good security as "glossy marketing talk".
I'll just say I work at a large SF unicorn where we do this. We're not at 100% (getting anything to 100% when you're big enough is impossible), but the vast majority of everything is behind TLS 1.2 with unique certificates per server/app pair.
We're hoping to use SPIFFE/SPIRE to bring adoption even higher.
Redis is painful to use in a highly regulated environment where all data must be encrypted in transit, all access logged and audited, etc. Personally, I think the requirements are over the top and focus on the wrong things a lot of the time. But it is what our compliance people say we must do.
We've spent hundreds of hours cobbling together a system that meets our regulatory requirements and still performs well. These features go a long way toward addressing this pain point. I think they've done a decent job making the extra complexity optional, too.
> Now we have SSL (do people really expose Redis on the internet??)
There are no secure networks. Your options are a vpn, third-party ssl, or ssl in the service. Sometimes your datacenter/cloud will guarantee a "secure" network (i.e. manage the vpn for you).
But in many instances having ssl "inside" can be simpler.
>> do people really expose Redis on the internet??
The same logic could be applied to exposed MongoDB and we know there's been a plethora of leaked data in recent years.
How is this comparable? SSL is secure transport - if you leave your MongoDB wide open, people will just as happily steal your data over SSL. The problem was never MITM attacks on in-flight data.
I don't run Redis, or anything else that has data storage, in a containerized environment. Those are dedicated machines to a dedicated purpose and I already have resource slicing and prioritization in place. They're called "virtual machines".
You might be fully aware, but containers don't have to be docker/kubernetes. Previously OpenVZ, and now LXC/LXD, is great for replacing full VMs in a lot of scenarios. The isolation is great and it's way less resource intensive than full VMs.
Quite aware, though others probably aren’t so it’s worth mentioning. But in my case, I press a button and I get an AWS or DigitalOcean or GCP instance.
I like containers for my stuff. It’s silly, IMO, to doubly encapsulate my datastores.
It's worth noting that VPN / SSL proxies provide box to box (or process to box) encryption, whereas native SSL support provides process to process encryption. The difference being that if an attacker manages to get access to the box then it becomes easier to capture traffic due to it going unencrypted between the app and the VPN/SSL proxies. Fundamentally, native SSL support provides strictly better protection than just VPNs or SSL proxies.
Now, given the context this may or may not be a distinction that you care about, but there certainly are times where you really do care.
(Besides, if I'm running a tcpdump on a box to try and figure out why the network is going wibbly I'm a lot happier knowing all traffic is encrypted and I'm not going to accidentally capture some PII. I've had to tcpdump within docker containers before too, so putting everything in containers doesn't necessarily solve this.)
I think this could be workable, but it probably depends a lot on context.
One reason off the top of my head would be regulatory/compliance issues around how things are encrypted. wireguard is relatively new, and some certifications required to do business in specific industries (finance, healthcare, etc) mandate protocols with a minimum level of maturity. wireguard may be good, but many regulators would probably not find it acceptable without a longer track record.
On a more concrete note, I'd consider any system that handles authentication to be inherently broken if it had no way to keep those credentials safe out of the box. TLS has long been a cheap-ish way to do this, as it's widely available and well understood by both implementers and regulators.
I'd feel safer as an admin knowing I have only 1 port and 1 app (i.e. wireguard) exposed publicly rather than 10 apps with their own ports and security (i.e. redis and others).
This isn't about making things public, just resistant to tampering and sniffing. Yes if you want to connect networks together then wireguard is a good choice.
If your devices are already on the same network and you instead close down the firewall and move everything into wireguard, you've just moved your problem.
It's much easier to configure wireguard once than to configure the ssl mechanism in each application. Redis, for example, is very easy to use normally, but adding ssl makes it quite a bit harder to set up and use.
This is the "vpn" option. It's a valid option. I don't think tightly coupling ssl is always a good idea - I just don't think it's a bad idea as an option/feature.
I don't use haproxy to secure my telnet sessions - I use ssh.
I see Redis as a toolkit that collects a number of solutions to hard distributed system problems in a single tool. It is great for developers that have a number of use cases for these kinds of things but for which there is no need or justification to spool up yet another cluster of containers/vms/servers/load balancers/etc to support it. Redis already has to do these things to be reliable and consistent; directly exposing this ability to clients and modules is a very logical thing to do. Like it or not, Redis is a platform now.
If you really just want fast data structures in memory, use memcached. If you somehow feel that Redis is a better solution for you, perhaps you should carefully consider that you may be placing more weight on its platform features than you realize.
SSL is a pretty important feature for almost all apps that you run in the datacenter. The idea is not to securely send Redis data to an end user on an untrusted network; the idea is to reduce the blast radius of a compromise inside your datacenter. A good example is that Slack postmortem from a couple weeks ago -- they had a proxy running inside their datacenter, and it could be convinced to make connections to internal addresses. If the service it was trying to connect to required the client to have a valid TLS certificate, the proxy would likely not provide the right credentials (because who uses client certificates on the Internet), and the connection would simply fail. A big security bug would manifest as a higher error ratio in the service, instead of letting an attacker poke around in their user data. (Network-based policy is also good, but is often too broad a brush. You might want the proxy to be able to talk to a database server in your network to store some results; now you can't simply add a firewall rule that says "no traffic may pass to the internal network".)
Finally, you might remember that internal NSA slideshow with the "SSL added and removed here ;-)" when talking about how they stole user data from Google's internal network. After that leak, rollout of internal mutual authentication/encryption accelerated, because people were actually inside the network observing internal RPC calls. It wasn't theoretical, it was happening.
Ultimately, mTLS is a pretty simple way to get a large increase in security. It has its disadvantages; you now have another "moving part" that can interfere with deployments and debugging (an attacker can't tcpdump, and neither can you, easily), but given how many large companies have exposed user data through unintentional interactions, it is something worth considering. It's a technique where you have to go out of your way to cause surprising behavior, and that is always good for security and reliability.
I'm a dev advocate at Redis Labs so I'm just going to reply about the last point about the protocol from my developer PoV.
Thanks to RESP3 I was able to write this client that, combined with Zig's comptime metaprogramming, can do things that no other Redis client that I've seen can do.
The user gives the desired response type and the client is able to provide appropriate translation based on what the RESP3 reply is. This would still be possible with RESP2, but v3 makes it much more robust and explicit, to the point that the ease becomes transparent without looking magical and/or triggering confusing corner cases.
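As a rough illustration of the typed-reply point (the key and fields below are made up): after switching a connection to RESP3 with HELLO 3, a command like HGETALL comes back as a real map type rather than a flat array, so a client can decode it straight into a user-defined type instead of guessing the shape of the reply.

HELLO 3
HSET user:42 name alice age 30
HGETALL user:42

Under RESP2 the same HGETALL is rendered as a flat array, 1) "name" 2) "alice" 3) "age" 4) "30"; under RESP3 the protocol itself says "this is a map", which is what makes the compile-time translation described above robust.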
As a comparison, we run (and I think this is pretty common) nginx proxy servers that point to app servers. The proxy servers handle SSL to the outside, whilst the connection to the app servers is simply http. Pretty sure that is an acceptable solution in most cases. So then this would apply to the SSL argument here as well.
All network traffic that leaves a host should be encrypted. You could have an exception for a physically isolated network in a secure cage, if you're adventurous. But most of us are in cloud environments, so encrypted traffic is required. Even with VPCs and Security Groups, you don't want to rely on network ACLs alone to prevent data from being intercepted.
If Redis does not support encryption natively, then you have to run a gateway like stunnel on every redis host. Most redis clients already support connecting to a secure socket, but the server and the cli client require manual stunnel configurations. Native support for encryption just removes this extra setup.
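With native support, even the cli side becomes a couple of flags; a sketch with placeholder hostnames and certificate paths:

redis-cli --tls \
    --cert /etc/redis/tls/client.crt \
    --key /etc/redis/tls/client.key \
    --cacert /etc/redis/tls/ca.crt \
    -h redis.internal -p 6379 PING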
You can stick a proxy in front of apps that don't have features you need, like mTLS, tracing, metrics, etc., to get those. Google "service mesh" to explore that space. But to some extent, I think it's all a bit easier if your apps just do the right thing out of the box. Fewer moving parts. Better integration testing.
Like medicine, every piece of software you use has effects and side-effects. If the advantage of the effects outweigh the disadvantages of the side-effects, then something is a good deal. But if you can avoid the side-effects entirely, that's best.
We did, too, when we were in startup mode. Now, nothing runs unencrypted internally.
Most tooling uses TLS, because when you do this at scale, you automate your CA and it is much easier to securely deal with than, eg, ssh certs. But we do use (LDAP centralized) ssh as well, mostly for humans.
Personally I'm jumping up and down for ACLs. I went so far as to implement a proof of concept Redis proxy that added ACLs a couple years ago, before I heard that they would be in 6. ACLs may be niche, but when you need 'em, you need 'em!
We found streams to be a breakthrough feature for pub-sub type data on IoT devices. That it can both be low-latency pub-sub and a stateful, short-lived cache is quite powerful to improve performance for many queries to the types of data generated by cameras and high-frequency sensor devices.
Having TLS support in the main client is useful because AWS only supports AUTH if you enable TLS. Running Redis without AUTH can be kind of dangerous because Redis can kind of speak HTTP* (I think you can define custom commands to fix this) so if you have web hooks in your system and don't properly filter internal addresses then you might allow external parties to run Redis commands against your system.
* it's been years since I looked at this so maybe Redis now ships with inbuilt protection against this.
It's two data structures (which were already in Redis for other reasons!), and an automatic sequential identifier. Everything else that's "stateful" about it is client-side state—the server is still just a data-structure server. A Redis stream is basically just a Redis sorted set that's coherent in the face of clients trying to consume it paginated as other clients insert into the middle of it.
Also, the code is in one file (https://github.com/antirez/redis/blob/unstable/src/t_stream.... ); that file is ~3KLOC. It's just another Redis Module, isolated into its own set of functions with no impact on the codebase as a whole. It's just one that's so widely applicable, to so many use-cases that people were already using Redis for (through Sidekiq/Resque/etc) that it makes sense to ship this particular module with Redis itself.
Would you get upset about bloat if Postgres upstreamed a highly-popular extension? It already has nine or ten installed by default, and a few more sitting in contrib/. But, of course, even upstreamed, none of those extensions are enabled by default, adding runtime overhead to your DB; you have to ask for them, just as if you were installing a third-party extension. Same here: if you don't use the Streams module, there's no overhead to its existence in the Redis codebase.
> do people really expose Redis on the internet??
Cloud DBaaS providers expose Redis instances "over the Internet", in the sense that they're in the same AZ but not within your VPC. To the extent that you can wireshark a data-center's virtual SDN, they need to encrypt this traffic.
Even PaaS providers do things this way, since they usually lean on third-party DBaaS providers. E.g. all of the Redis services you can attach to a Heroku app are consumed "over the Internet."
If you're using Redis through an IaaS provider's offering (e.g. AWS ElastiCache, Google Cloud Memorystore) then you get the benefit of them being able to spawn an instance "outside" your project/VPC (i.e. having it be managed by them), but have it nevertheless routed to an IP address inside your VPC. That might be enough security for you, if you don't have any legal requirements saying otherwise. For some people, it's not, and they need TLS on top anyway.
> cluster stuff
Have you looked at how it's done? It's just ease-of-use tooling around the obvious thing to do to scale Redis: partitioning the keyspace onto distinct Redis instances, and then routing requests to partitions based on the key. It's not like Redis has suddenly become a multi-master consensus system like Zookeeper; the router logic isn't even in the server codebase!
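For the curious, the partitioning is a fixed hash-slot scheme: each key maps to slot CRC16(key) mod 16384, and slots are assigned to nodes. You can inspect it from redis-cli (the key name here is made up):

CLUSTER KEYSLOT user:42

A cluster-aware client (or redis-cli -c) then follows MOVED redirects to whichever node currently owns that slot; the server itself stays a plain data-structure server.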
The question would be whether, by adding those additional features, the experience of using the "basic/original" features got more cumbersome or the hardware requirements changed a lot. My guess is it hasn't changed that much.
Super-simple, good-enough things don't last too long. They die when the ecosystem changes, or when another super-simple player comes along and looks a bit more shiny.
It's just how the world works. You have to conquer to survive.
> To my thinking, Redis fit very well in the "lightweight linux thing" category.
It sounds like you don't follow Redis then.
That ship sailed years ago. Redis has at least 10 major features in addition to the caching you're talking about, including search. Redis is a kind of database now.
If you just want a cache, use memcached.
Half of my jiras at one company were related to enabling SSL for Redis due to compliance reasons (all for internal use). Now those can be closed.
Virus going to do what it's going to do. People can choose to stay home but this shelter in place stuff is wrecking the lives of the poor and the people who are most vulnerable.
"The death rate in Sweden has now risen significantly higher than many other countries in Europe, reaching more than 22 per 100,000 people, according to figures from Johns Hopkins University, controlled for population.
By contrast, Denmark has recorded just over seven deaths per 100,000 people, and both Norway and Finland less than four."