Maybe I've missed this, but why do they need a particularly large LRU cache? Surely this isn't all one process, so presumably they could reduce spikes by splitting the same load across yet more processes?
Larger cache = faster performance and less load on the database.
I only skimmed the article, but the problem they had with Go seems to be the GC pressure incurred by holding a large cache. Their cache eviction algorithm was efficient, but every 2 minutes a GC run slowed things down. Re-implementing the algorithm in Rust gave them better performance because the memory was freed immediately after cache eviction.
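To make that concrete, here's a minimal sketch (my own toy code, not the article's implementation) of an LRU-style cache in Rust. The point is the eviction path: when the oldest entry is removed from the map, the buffer it owned is dropped and deallocated right there, deterministically, rather than becoming garbage for a later collector pass to find.

```rust
use std::collections::{HashMap, VecDeque};

// Toy LRU cache: a map for lookups plus a queue tracking recency.
// Capacity and the String/Vec<u8> types are arbitrary choices for
// illustration, not anything from the article.
struct LruCache {
    capacity: usize,
    map: HashMap<String, Vec<u8>>,
    order: VecDeque<String>, // front = least recently used
}

impl LruCache {
    fn new(capacity: usize) -> Self {
        LruCache { capacity, map: HashMap::new(), order: VecDeque::new() }
    }

    fn insert(&mut self, key: String, value: Vec<u8>) {
        if self.map.contains_key(&key) {
            // Existing key: just refresh its position in the queue.
            self.order.retain(|k| k != &key);
        } else if self.map.len() >= self.capacity {
            // Evict the least recently used entry. The Vec<u8> it owned
            // is freed here, immediately -- no GC involved.
            if let Some(oldest) = self.order.pop_front() {
                self.map.remove(&oldest);
            }
        }
        self.order.push_back(key.clone());
        self.map.insert(key, value);
    }

    fn get(&mut self, key: &str) -> Option<&Vec<u8>> {
        if self.map.contains_key(key) {
            // Mark as most recently used.
            self.order.retain(|k| k != key);
            self.order.push_back(key.to_string());
        }
        self.map.get(key)
    }

    fn len(&self) -> usize {
        self.map.len()
    }
}

fn main() {
    let mut cache = LruCache::new(2);
    cache.insert("a".to_string(), vec![1]);
    cache.insert("b".to_string(), vec![2]);
    // Cache is full; inserting "c" evicts "a" and frees its buffer now.
    cache.insert("c".to_string(), vec![3]);
    assert!(cache.get("a").is_none());
    assert!(cache.get("b").is_some());
}
```

(A real implementation would use an intrusive list or the `lru` crate to avoid the O(n) `retain` calls; this is just to show the deterministic-free property.)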
Splitting it across more processes will result in more cache misses and more DB calls.
I am of course talking about the same amount of total cache RAM, just split among more processes. Depending on the distribution of calls you might get more cache misses, but I don't think that's guaranteed, and even if it is, I don't think we can assume it's significant. Heck, you could even use more cache RAM; the cost of a total rewrite plus ongoing maintenance in a new language buys a fair bit of hardware these days.