Ultimately the problem we care about is very simple, and there is no way to solve it, despite your claims.
Say we have a database of customers, with two replicas - one in the USA, one in Europe. Say a customer in Europe wants to update their shipping address. We ship products every month to this customer from the USA to their current shipping address, on the 10th of that month.
The customer is updating their address on the 9th at 10 AM PST. However, we are in the midst of a massive network partition that started on the 9th at 2 AM PST and is expected to last until the 11th.
Do we perform the update in the European replica and tell the customer it succeeded (giving up consistency)? If we do, the US side of the business will ship the products to the old address, even though the update happened a full day earlier.
Alternatively, do we tell the customer the update failed (giving up availability)? If we do, then the customer can't even let the European side of the business know of their new address.
This is the CAP theorem in a nutshell, and it is obviously inescapable. It doesn't require appeals to immediacy that are anywhere close to the bounds of relativity. And while 3-day long network partitions are quite rare, partitions that last for many hours are not.
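To make the dilemma concrete, here's a minimal sketch (hypothetical names throughout) of the choice a replica's write handler is forced into during the partition:

```python
# Hypothetical illustration of the choice CAP forces during a partition.
from enum import Enum

class Mode(Enum):
    CP = "consistent"   # refuse writes we can't replicate
    AP = "available"    # accept writes locally and diverge

def update_address(local_replica: dict, us_reachable: bool,
                   customer_id: str, new_address: str, mode: Mode) -> str:
    if us_reachable:
        # Happy path: replicate synchronously, both sides agree.
        local_replica[customer_id] = new_address
        return "ok, replicated"
    if mode is Mode.AP:
        # Accept the write; the US replica still holds the old
        # address and ships to it on the 10th (consistency lost).
        local_replica[customer_id] = new_address
        return "ok (local only)"
    # Mode.CP: reject the write; the customer can't record the new
    # address anywhere (availability lost).
    return "error: cannot reach the other replica, try again later"

eu_replica = {"cust-42": "old address"}
print(update_address(eu_replica, False, "cust-42", "new address", Mode.AP))
```

Whichever branch you pick, one of the two guarantees is gone; there is no third option while the partition holds.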
Any modification to existing data must be "haggled for" somewhere, you're right about that. When you say "partition event", what is being partitioned here? A specific communication-link between two nodes. It's entirely possible (no, extremely likely) that your EU node would have access to a different US node, but not the one that's having the partition-event right now.
Networks will have latency-spikes, but if you can stream time-information the same way you stream any other data, you can use redundancy to mitigate the disruption of any single channel.
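As a toy illustration of that point (not our actual protocol, just the shape of it): fan the same ordered stream over several channels and deduplicate by sequence number on arrival, so a gap on one channel is covered by the others.

```python
# Toy model: each channel delivers (seq, payload) pairs of the same
# logical stream; a flaky channel simply has gaps. Deduplicate by seq.
def merge_redundant(channels):
    seen = {}
    for channel in channels:              # messages as they arrived
        for seq, payload in channel:
            seen.setdefault(seq, payload)
    # The merged stream is whole as long as every gap on one channel
    # is covered by at least one other channel.
    return [seen[s] for s in sorted(seen)]

# Channel A drops seq 2, channel B drops seq 1; together they're whole.
a = [(0, "t0"), (1, "t1"), (3, "t3")]
b = [(0, "t0"), (2, "t2"), (3, "t3")]
assert merge_redundant([a, b]) == ["t0", "t1", "t2", "t3"]
```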
Considering that this is maybe the 12th time I'm linking the explanation, I now think you're not looking for a discussion, you're here for a fight. Please let me know if you find any problems with the article's reasoning. It was vetted by a lot of people beforehand, so if you do find something, I really need to notify them too.
> So how can we fix this problem? The same way we solve all our failure issues in IT: redundancy.
Redundancy doesn't fix any problems, it just makes them less likely to occur. Again, this not only does nothing to address the CAP theorem, it's also already done for a wide array of problems; there is nothing new here.
Okay, so we can move beyond CAP. Now, we don't talk about implementation in either the science-paper or in the intro for it (which is what this thread is about). I mostly write about implementation, Mark mostly writes about the science. So yes, redundancy will only make latency-spikes less likely. Notice, however, that the way we establish strong consistency is based on communication also. There's a part in the essay I keep linking where I talk about what happens to nodes with flaky connections; it's called "Stability".
There are three separate mitigation-solutions that go into how total-order strong consistency can keep marching on even if a specific child-group is intermittently isolated.
The first one is redundancy by deterministic replication, so there will always be many replicas which aren't just shards, but full copies of the _consistency ledger_. It's not a database, not a cache, just the thing that establishes the consistency between nodes. These instances all "race each other" to provide the first value of the outputs to other nodes.
The second one is the latency-mitigation we talked about earlier, I don't think we need to waste more breath on that.
The third one is that the consistency-mechanism requires an explicit association-instruction to interleave a child's versions into its parent's (so that these versions can be exposed to nodes observing it from afar). If the child goes AWOL, it won't be able to associate its versions to its parent, but it won't hold everyone else up either. In this case the total-order won't be affected by the child-group's local-order, which is still allowed to make progress, as long as it's not trying to update any record that is distant to it.
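To sketch the shape of that third mechanism (a toy model of the behaviour described above, not our actual implementation):

```python
# Toy model: a child-group's versions only enter the parent's
# total-order via an explicit association step.
class ParentOrder:
    def __init__(self):
        self.total_order = []            # globally visible versions

    def associate(self, child_id, versions):
        # Only reachable children ever call this; an isolated child
        # simply never shows up here, so it can't stall anyone.
        self.total_order.extend((child_id, v) for v in versions)

class ChildGroup:
    def __init__(self, child_id, parent):
        self.child_id, self.parent = child_id, parent
        self.local_order = []            # keeps progressing regardless

    def write_local(self, version):
        # Local records stay writable even while isolated.
        self.local_order.append(version)

    def try_associate(self, connected: bool):
        if connected:
            self.parent.associate(self.child_id, self.local_order)
            self.local_order = []

parent = ParentOrder()
eu = ChildGroup("eu", parent)
eu.write_local("addr-v2")
eu.try_associate(connected=False)  # isolated: total-order unaffected
assert parent.total_order == []
eu.try_associate(connected=True)   # rejoined: versions interleave now
assert parent.total_order == [("eu", "addr-v2")]
```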
1. I don't quite understand what a "consistency ledger" is and how it's not a database, but it sounds like a log. Many distributed-systems solutions have logs, e.g. Raft. The question you haven't covered is how you keep the ledger consistent. I believe you're using Kafka underneath (correct me if wrong), but it doesn't matter so much; any queuing system is as much under the control of CAP as anything else, so it's not clear to me how this resolves anything CAP related.
2. Redundancy in network connection. Yes, this can help. Again, it doesn't resolve anything CAP related, it just reduces the likelihood of a certain class of distributed-systems failures. Note, there are LOTS of ways to have unbounded latency that this does not resolve, anything from misconfigured routers to disk drives dying. Again, not resolving anything CAP states, just attempting to reduce some probabilities.
3. If I understand this correctly, you are saying some data is "homed" in certain regions, and if that region becomes partitioned from the rest of the world, they can still modify the "homed" data there. They own it. This doesn't address anything CAP related.
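If my reading is right, the behaviour boils down to an ownership check along these lines (hypothetical sketch):

```python
# Hypothetical write guard for region-"homed" data during a partition.
def can_write(record_home_region: str, local_region: str,
              partitioned: bool) -> bool:
    if not partitioned:
        return True                     # normal path
    # During a partition only the owning region may modify the record;
    # every other region must be refused or made to wait. That is the
    # CAP trade-off, applied per record instead of per database.
    return record_home_region == local_region

assert can_write("eu", "eu", partitioned=True)
assert not can_write("eu", "us", partitioned=True)
```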
Assuming I understand the architecture correctly, then yes, some architectures will benefit from this. Some might not. But all of these decisions align with the existing understanding and the decisions architects make in the face of CAP.
It feels to me that you believe CAP defines a very particular database architecture (something like PostgreSQL), and your architecture addresses limitations in that, and thus your architecture solves limitations of CAP. But that's just not true. Take Riak, Cassandra, BigTable, Spanner, CockroachDB: all of these represent architectures defined in the face of CAP and they all have different trade-offs. They don't look a lot like PostgreSQL. But they cannot get around the simple laws of physics for how information is communicated.
The reason you don't understand why my claims are different from just another solution with clocks and mutexes is that you haven't actually engaged with the essence of it. I'll give you a hint: the consistency mechanism is decoupled from collision resolution, and that makes the consistency ledger both deterministic and inherently non-blocking. I don't want to go into more specific questions around the implementation, but I can tell you with certainty that we provide much better promises around progressing global state and sustained write-latency than anything else, and our client-centric consistency guarantees local-only reads.
But this is about the solution, not the science. The science is basically unlimited; its only real restriction is our budget.
If you want to understand how the client-centric consistency mechanism takes care of these things, I write about it on medium and Twitter all the time.
But again, I don't feel like you owe me your time or attention.
I have read everything you've given me. If you really believe in what you're selling then I am absolutely the person to convince because if you can convince me you can convince anyone. All you have done is post links to the same two articles which multiple people have told you don't answer the question they've asked, and your only response is to accuse me of not engaging. C'mon, this is ridiculous. You have several people with their undivided attention and you aren't threading the needle.
There's a mental leap you need to make before it clicks into place, and I can't do it for you. I have several people who understand what I say and why I say it, but I get that this isn't an easy step to make. If you read the essay, you understand how we turn time into data. You also understand how we can construct a partial and a global hierarchy of orders, how these are both strongly ordered and still move independently, and how observers can construct their own consistent view from the data available to them. There's a formal proof in the science-paper that we can do consistency all the way to snapshot.
To summarise: we have a consistent system that works via one-way streaming, over redundant channels. Best possible cache-invalidation. Reads are both consistent and locally available. The upper-bound time of writes is predictable and doesn't suffer from blocking. I'm not sure what other improvements anyone could want from such a system; this is the best such systems can ever be. These improvements go well beyond the limits CAP sets out, and that basically makes the whole argument moot.
No, by definition, a partition event is one in which some of the nodes are entirely inaccessible to any other node in the cluster.
Even if there were two different US nodes, and only one node lost connectivity to the others, the problem wouldn't change at all - the branch which only has access to the disconnected node would still either send shipments to the wrong address or stop being able to read the status at all.
"Note that Gilbert and Lynch’s definition requires any non-failed nodenot just some non-failed nodeto be able to generate valid responses, even if that node is isolated from other nodes. That is an unnecessary and unrealistic restriction. In essence, that restriction renders consensus protocols not highly available, because only some nodes, the nodes in the majority, are able to respond."
Partition with a capital P doesn't really move me: our experience contradicts its assertions, because the model it bases its statements on is fundamentally broken.
So let's talk about partition in the real world. Practically speaking, partitions mean that some nodes experience high latency when communicating. If you need to update some record in a specific data-centre, and you can only talk to that place via a single channel, and that channel is disrupted, then yes, you're going to experience a slowdown or even a halt. Nowadays, however, even widely used consensus-algorithms can mitigate that, and the delinquent node will eventually be dropped from the group if it causes enough problems. We don't do anything very different in this regard.

Since our deterministic ledger can be replicated across multiple data-centres (no node in it creates original information), since observers only rely on records the ledger has already sent out, and since the only reason you'd need to talk to this ledger is to modify a record shared with other observers, you can always pick a different ledger-instance; there are no "master copies" anywhere. Remember that we can stream time-information with the data, so the client can always calculate its "point in time" and reconstruct a (even globally) consistent view of the data it has.
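To gesture at what "calculating its point in time" means (again a toy sketch, nothing like the real code):

```python
# Toy snapshot read: the client tracks the highest ledger timestamp it
# has fully received, then serves reads from that frozen point.
class Client:
    def __init__(self):
        self.snapshot = {}        # key -> (timestamp, value)
        self.point_in_time = 0    # high-water mark of received entries

    def ingest(self, entry):
        ts, key, value = entry            # streamed ledger entry
        self.snapshot[key] = (ts, value)
        self.point_in_time = max(self.point_in_time, ts)

    def read(self, key):
        # Locally available AND consistent as of point_in_time; no
        # round-trip to any ledger-instance is required.
        ts, value = self.snapshot[key]
        return value, self.point_in_time

c = Client()
c.ingest((1, "cust-42", "old address"))
c.ingest((2, "cust-42", "new address"))
assert c.read("cust-42") == ("new address", 2)
```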
Sure, if you choose to centralise all your data behind a single flaky connection, you're gonna have a bad time. The point is that the setup we built allows you to not need to do that and it does that transparently, behind SQL semantics.
No, the CAP requirements are not at all as absurd as that article claims.
For the specific quote you gave, that is an obvious assumption. A client only has access to some of the nodes in the distributed system. Of course we want any node to give the correct answer - the whole purpose is to reduce the burden on the client. The client is not responsible for searching all of the nodes. And note that the proof doesn't actually require that all nodes return the right answer - the contradiction is reached as long as all the nodes that the client has access to return the wrong answer.
Another bad claim in the article is that the proof of CAP requires the partition to be permanent. Maybe it's written like that for simplicity, but it obviously only requires the partition to be longer than the client's bound on response time. If the client is willing to wait an hour for a response, then any partition event that's two hours long will lead to the same conclusion. Since clients never have unbounded time to wait, and since partition duration is unbounded even if not permanent, the argument still holds.
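Concretely, the proof only needs the partition to outlive the client's deadline:

```python
# The proof only needs: partition_duration > client_deadline.
CLIENT_DEADLINE_S = 3600      # client gives up after an hour
PARTITION_DURATION_S = 7200   # partition lasts two hours

def response_arrives(partition_duration_s: float, deadline_s: float) -> bool:
    # No message crosses the cut during the partition, so the earliest
    # possible response arrives only after the partition heals.
    return partition_duration_s <= deadline_s

# From this client's point of view, a two-hour partition is
# indistinguishable from a permanent one.
assert not response_arrives(PARTITION_DURATION_S, CLIENT_DEADLINE_S)
```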
Also, major network outages that disconnect whole regions of the internet for hours from the rest of the world happen somewhat regularly (more than once a year). Whole AWS regions have become disconnected, ~half of Japan was disconnected for a few hours, Ukraine has been disconnected several times, etc. If you run a large distributed system for a significant amount of time, you will hit such events with probability approaching 1.
I can only repeat what I told you earlier. Our distributed consistency model meets the SQL-standard's requirements for consistency and tolerates such outages. This is a fact.
CAP is a bad model for more reasons than the ones listed in that article. My favourite one is that it requires Linearizability, which nobody does in SQL. The disconnect when saying "SQL is not consistent" to me is just too much. CAP is based on a badly defined idea that comes from a presentation that was wrong in what it said.
That you need to tolerate outages of entire regions is a good argument to make in itself; there's no need to point at CAP. My answer to that is that there's a way to define consistency that allows it to manage partition problems more gracefully, and that is the model we show. If you require communication to establish consistency and stream the changes associated with the specific timeslot at the same time, a partition means that the global state will move on without the changes from the partitioned areas, and that those changes will show up once the region rejoins the global network. While separated, a region can still do (SQL-)consistent reads and writes for the (intra-region) records it can modify.
Are you saying that it's possible for an SQL server to let you successfully commit a transaction that modifies a record, and then, in a subsequent query, return the old value of that record? I very much doubt this is true of any single-node SQL database.
In contrast, in the face of partitions, any distributed system has this property (or it may refuse the query entirely).
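Here's the single-node behaviour with SQLite, for instance: once the transaction commits, the next query can only see the new value.

```python
import sqlite3

# Single-node SQL: a committed write is visible to the next query.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, address TEXT)")
con.execute("INSERT INTO customers VALUES (1, 'old address')")
con.commit()

con.execute("UPDATE customers SET address = 'new address' WHERE id = 1")
con.commit()  # transaction committed successfully

(addr,) = con.execute("SELECT address FROM customers WHERE id = 1").fetchone()
assert addr == "new address"  # never the old value on a single node
```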
> The “Pick 2 out of 3” interpretation implies that network partitions are optional, something you can opt-in or opt-out of. However, network partitions are inevitable in a realistic system model. Therefore, the ability to tolerate partitions is a non-negotiable requirement, rather than an optional attribute.
> CAP requirements are absurd
Yes! Literally. One would roundly ridicule someone who claimed to have met (or exceeded) those requirements.
CAP means many different things. If you took the time to read what I have to say about it, you would know that I'm saying that we're beating the requirements Brewer sets out in his original presentation, where he introduces the concept of the C-A-P tradeoff. He's clearly wrong in what he says in that presentation, and that is what we say we're beating. We can say this because we're meeting the requirements for "C" there (DBMS-consistency) and because we don't suffer the trade-offs mentioned there. In fact, our system can be both available and partition-tolerant with a definition of "C" that matches the one laid out in the SQL-spec, as the reads are always local. The SQL-standard doesn't mandate time-related availability guarantees for writes.