
I have commented out this section and redeployed the code.

See the Fauna endpoint: https://71q1jyiise.execute-api.us-west-1.amazonaws.com/dev/f...

See the code: https://github.com/upstash/latency-comparison/blob/master/ne...


Did you delete all the extra events that have been created already?


If you mean the histogram, yes, I reset the histogram for Fauna.

If you mean deleting FaunaDB's internal events, I do not know how to do that. Can you guide me?


I would be happy if someone from the Fauna team helped me improve my code: https://github.com/upstash/latency-comparison

Upstash is not non-durable. It is based on multitier storage (memory + EBS) implementing the Redis API. In a few weeks, I will add Upstash Premium, which replicates data to multiple zones, to the benchmark app.

In the blog post, I mentioned the qualities where Fauna is stronger than the others: https://blog.upstash.com/latency-comparison#why-is-faunadb-s...


Your own docs say that by default “writes are not guaranteed to be durable even if the client receives a success response”.


Upstash has two consistency modes: eventual consistency and strong consistency. Please see: https://docs.upstash.com/overall/consistency

In my code, the Upstash database was eventually consistent. Similarly, the index in FaunaDB was not serialized.

But neither of those should affect the latency numbers in the histogram, because those numbers are all read latencies.
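To be concrete, the histogram samples time only the read round trip. A minimal sketch of the measurement, assuming an ioredis client (the env var and function names are illustrative; the actual code is in the repo linked above):

    // Minimal sketch of one read-latency sample, taken inside the Lambda
    // function. Only the GET round trip is timed, so the write-path
    // consistency mode never enters the measurement.
    import Redis from "ioredis";

    const redis = new Redis(process.env.UPSTASH_REDIS_URL!); // illustrative env var

    export async function sampleReadLatency(key: string): Promise<number> {
      const start = Date.now();
      await redis.get(key); // single read round trip: Lambda -> Upstash
      return Date.now() - start; // milliseconds
    }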


That's an apples-to-oranges comparison, though. Upstash couples durability with consistency/isolation. Regardless of configuration, FaunaDB and DynamoDB both always ensure durability of acknowledged writes with a fault tolerance of more than one node failure. To compare them on equal footing, Upstash would need to be configured for strong consistency, at least according to the docs.


DynamoDB also guarantees your write is distributed to multiple data centers.


No, it doesn't. The write is not durable in the other data centers when the client acknowledgement is returned. It is still possible to lose data.


It does. The three AZs they replicate across are just as good as anything else someone typically calls a "datacenter." Amazon itself operates retail out of one region and uses multi-AZ as a fault tolerance strategy.


It won’t lose data in the event of a data center failure. Each of the replicas is in a different AZ, and at least two of three have to durably write the data before the put succeeds.
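A toy sketch of that 2-of-3 quorum idea (illustrative only, not DynamoDB's actual replication code): the put acknowledges once any two replica writes have durably succeeded, and fails once a quorum can no longer be reached.

    // Toy 2-of-3 write quorum; replica writers are passed in as functions.
    type ReplicaWrite = (key: string, value: string) => Promise<void>;

    function quorumPut(
      replicas: ReplicaWrite[],
      key: string,
      value: string,
      quorum = 2,
    ): Promise<void> {
      return new Promise<void>((resolve, reject) => {
        let ok = 0;
        let failed = 0;
        for (const write of replicas) {
          write(key, value).then(
            () => {
              ok += 1;
              if (ok === quorum) resolve(); // durable on a quorum -> ack the client
            },
            () => {
              failed += 1;
              // With 3 replicas and a quorum of 2, a second failure makes
              // the quorum unreachable.
              if (failed > replicas.length - quorum) {
                reject(new Error("quorum not reached"));
              }
            },
          );
        }
      });
    }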


An AZ is not a datacenter.


This is what I was referring to:

"An Availability Zone (AZ) is one or more discrete data centers with redundant power, networking, and connectivity in an AWS Region. AZs give customers the ability to operate production applications and databases that are more highly available, fault tolerant, and scalable than would be possible from a single data center. All AZs in an AWS Region are interconnected with high-bandwidth, low-latency networking, over fully redundant, dedicated metro fiber providing high-throughput, low-latency networking between AZs. All traffic between AZs is encrypted. The network performance is sufficient to accomplish synchronous replication between AZs.

AZs make partitioning applications for high availability easy. If an application is partitioned across AZs, companies are better isolated and protected from issues such as power outages, lightning strikes, tornadoes, earthquakes, and more. AZs are physically separated by a meaningful distance, many kilometers, from any other AZ, although all are within 100 km (60 miles) of each other."


Sure, I will.


Thanks for the merge. As you pointed out in the PR, you're only measuring the latency of the first query, so it will have no effect on the benchmark. I'm guessing the "current request" latency will improve, though, no?

I have to say the latencies you're getting are much higher than I've experienced with Fauna. Obviously it's expected for a KV database to be faster, but I'd be surprised to get more than 100 ms at the 50th percentile.
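For reference, the 50th percentile here is just the median of the collected samples. A nearest-rank sketch (the helper is hypothetical, not from the benchmark repo):

    // Hypothetical helper: p-th percentile of latency samples, nearest-rank.
    function percentile(samples: number[], p: number): number {
      const sorted = [...samples].sort((a, b) => a - b);
      const idx = Math.min(sorted.length - 1, Math.floor((p / 100) * sorted.length));
      return sorted[idx];
    }

    percentile([38, 41, 95, 110, 420], 50); // -> 95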


The current request latency is also the read latency (from the Lambda function -> DB). I should have been clearer about that.
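Roughly, the handler does something like this (a sketch; the variable and field names are illustrative, not the repo's actual code). The "current request" number is exactly the timed DB read, not the full HTTP round trip:

    import Redis from "ioredis";

    const redis = new Redis(process.env.UPSTASH_REDIS_URL!); // illustrative env var

    export async function handler() {
      const start = Date.now();
      const value = await redis.get("benchmark-key"); // Lambda -> DB read
      const readLatencyMs = Date.now() - start; // the "current request" latency
      return { statusCode: 200, body: JSON.stringify({ value, readLatencyMs }) };
    }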


Still, almost a second of total latency (50th percentile) is super high.

I'd love to do the same test from Cloudflare Workers instead of AWS Lambda. Could you please DM me on Twitter to discuss this?

https://twitter.com/PierB

Edit:

I just saw this comment by Evan from Fauna, which explains why the latency could be so high:

https://news.ycombinator.com/item?id=26806541


