I used to work on a telecom platform (think something that runs 4G services), where every node was part of an in-memory database that replicated using 2PC and took periodic snapshots to avoid losing data. Basically, processes were colocated with their data in the DB.
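Conceptually the write path looked something like this (a hand-wavy Go sketch, nothing like the real code, which wasn't Go; the types and names are made up): prepare the write on every replica, and commit only if everyone votes yes.

```go
// Minimal 2PC sketch: each Participant is a replica holding an in-memory
// copy of the data; writes are staged on prepare and applied on commit.
package main

import (
	"errors"
	"fmt"
	"sync"
)

type Participant struct {
	mu      sync.Mutex
	data    map[string]string // committed state
	pending map[string]string // staged writes awaiting commit
}

func NewParticipant() *Participant {
	return &Participant{data: map[string]string{}, pending: map[string]string{}}
}

// Prepare stages the write and votes yes (a real node would validate
// and could vote no).
func (p *Participant) Prepare(key, value string) error {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.pending[key] = value
	return nil
}

// Commit applies the staged write to the in-memory state.
func (p *Participant) Commit(key string) {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.data[key] = p.pending[key]
	delete(p.pending, key)
}

// Abort discards the staged write.
func (p *Participant) Abort(key string) {
	p.mu.Lock()
	defer p.mu.Unlock()
	delete(p.pending, key)
}

// twoPhaseCommit drives one write across all replicas: prepare everywhere,
// commit only if every vote was yes, otherwise abort everywhere.
func twoPhaseCommit(replicas []*Participant, key, value string) error {
	for _, r := range replicas {
		if err := r.Prepare(key, value); err != nil {
			for _, rr := range replicas {
				rr.Abort(key)
			}
			return errors.New("aborted: a replica voted no")
		}
	}
	for _, r := range replicas {
		r.Commit(key)
	}
	return nil
}

func main() {
	replicas := []*Participant{NewParticipant(), NewParticipant(), NewParticipant()}
	if err := twoPhaseCommit(replicas, "subscriber:42", "attached"); err != nil {
		fmt.Println("write failed:", err)
		return
	}
	fmt.Println("replica 0 sees:", replicas[0].data["subscriber:42"])
}
```

With a few replicas holding the same data in memory, losing any single node doesn't lose the data, which is the whole durability argument.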
Very Erlang/OTP. Joe Armstrong used to rant to anyone who would listen that we used databases too often. If data was important, multiple nodes probably needed a copy of it. If multiple nodes needed a copy, you probably had plenty of durability.
Even if you weren't using Erlang, his influence (and Ericsson's in general) permeates the telecom industry.
I worked on a lottery / casino system that was similar. In-memory database (memory-mapped files), with a WAL for transaction replay / recovery. There was also a periodic snapshot capability. It was incredibly low latency on late-'90s-era hardware.
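The recovery idea was roughly this (a rough Go sketch, not the actual code, which used memory-mapped files rather than the plain file I/O here; file names and the JSON record format are made up): every write is appended to the log before touching memory, and on startup you load the last snapshot and replay the log over it.

```go
// WAL + snapshot sketch: append each write to a log, fsync, then apply it
// in memory; recovery loads the snapshot and replays the log in order.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type record struct {
	Key   string `json:"k"`
	Value string `json:"v"`
}

type store struct {
	mem map[string]string
	wal *os.File
}

// openStore loads the snapshot (if any), then replays the WAL over it.
func openStore(snapPath, walPath string) (*store, error) {
	s := &store{mem: map[string]string{}}

	if snap, err := os.ReadFile(snapPath); err == nil {
		_ = json.Unmarshal(snap, &s.mem) // snapshot is the base state
	}

	if f, err := os.Open(walPath); err == nil {
		sc := bufio.NewScanner(f)
		for sc.Scan() { // replay each logged write in order
			var r record
			if json.Unmarshal(sc.Bytes(), &r) == nil {
				s.mem[r.Key] = r.Value
			}
		}
		f.Close()
	}

	wal, err := os.OpenFile(walPath, os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0o644)
	if err != nil {
		return nil, err
	}
	s.wal = wal
	return s, nil
}

// put logs the write durably before updating memory.
func (s *store) put(key, value string) error {
	line, _ := json.Marshal(record{Key: key, Value: value})
	if _, err := s.wal.Write(append(line, '\n')); err != nil {
		return err
	}
	if err := s.wal.Sync(); err != nil { // fsync so the write survives a crash
		return err
	}
	s.mem[key] = value
	return nil
}

// snapshot dumps the in-memory state to disk; after that the WAL can be
// truncated so replay stays short.
func (s *store) snapshot(snapPath string) error {
	blob, _ := json.Marshal(s.mem)
	return os.WriteFile(snapPath, blob, 0o644)
}

func main() {
	db, err := openStore("state.snap", "state.wal")
	if err != nil {
		panic(err)
	}
	_ = db.put("ticket:1001", "sold")
	_ = db.snapshot("state.snap")
	fmt.Println("ticket:1001 =", db.mem["ticket:1001"])
}
```

Since reads never touch disk and writes are a single sequential append plus an in-memory update, latency stays very low, even on old hardware.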