> you avoid computing/fetching the same value twice
But this problem comes up in reverse proxies too. In Nginx, you can set the `proxy_cache_lock`[0] directive to achieve the same effect (avoiding double requests).
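For reference, a minimal sketch of how that directive is typically used (the cache path, zone name, and upstream are hypothetical, not from the Nginx docs linked below):

```nginx
# Hypothetical cache zone; adjust path and size for your setup.
proxy_cache_path /var/cache/nginx keys_zone=api_cache:10m;

server {
    location /api/ {
        proxy_cache       api_cache;
        proxy_cache_valid 200 5m;

        # With the lock on, only one request per cache key is passed to
        # the upstream at a time; concurrent requests for the same key
        # wait for the cache entry to be populated instead.
        proxy_cache_lock         on;
        proxy_cache_lock_timeout 5s;

        proxy_pass http://backend;  # hypothetical upstream
    }
}
```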
Interesting! This is an issue I had with an external API which I intended to cache on my serverless workers infra.
I hit the API's rate limiter when the workers invalidated their cache, even though I staggered the lifetime of the cache keys for each replicated instance. This is how I found out how many Cloudflare workers run on a single edge instance. Hint: It's many.
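The staggering mentioned above is usually done by adding random jitter to each key's TTL, so replicated instances don't all expire (and re-fetch) at the same moment. A minimal sketch in Python; the 300-second base TTL and 20% spread are assumptions for illustration:

```python
import random

BASE_TTL = 300  # seconds; hypothetical base lifetime


def jittered_ttl(base: int = BASE_TTL, spread: float = 0.2) -> int:
    """Return a TTL randomized by +/- spread so that replicated workers
    caching the same key don't invalidate it simultaneously."""
    delta = int(base * spread)
    return base + random.randint(-delta, delta)
```

As the comment above notes, this only spreads the load: if enough instances share one edge, the aggregate re-fetch rate can still trip an upstream rate limiter.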
You’re welcome, happy to help.
If you are in the .NET space, I suggest you take a look at FusionCache. It has cache stampede protection built in, plus some other nice features like a fail-safe mechanism and soft/hard timeouts: https://github.com/jodydonetti/ZiggyCreatures.FusionCache
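The core of that stampede protection is a "single-flight" get-or-set: concurrent requests for the same missing key share one computation instead of each running it. A generic sketch of the idea in Python (not FusionCache's actual API; class and method names are made up for illustration):

```python
import threading


class SingleFlightCache:
    """Minimal cache-stampede protection via a per-key lock."""

    def __init__(self):
        self._values = {}
        self._locks = {}
        self._guard = threading.Lock()  # protects the lock table itself

    def _lock_for(self, key):
        with self._guard:
            return self._locks.setdefault(key, threading.Lock())

    def get_or_set(self, key, factory):
        # Fast path: value already cached.
        if key in self._values:
            return self._values[key]
        with self._lock_for(key):
            # Re-check after acquiring the lock: another thread may have
            # populated the key while we were waiting.
            if key not in self._values:
                self._values[key] = factory()
            return self._values[key]
```

Real implementations layer TTLs, fail-safe fallbacks, and timeouts on top of this, but the double-checked per-key lock is the part that avoids computing the same value twice.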
[0]: https://nginx.org/en/docs/http/ngx_http_proxy_module.html#pr...