
The Latency/Throughput Tradeoff: Why Fast Services Are Slow And Vice Versa

Special thanks to the graceful and cunning Ben Ng for consulting on this post. I’m finally getting around to reading that DevOps* book everybody’s been raving about, Site Reliability En…

Click to view the original at blog.danslimmon.com

Hasnain says:

Great read going into a bunch of engineering tradeoffs. The follow-up post is also solid.

“Here’s one of the first passages to jump out to me, from Chapter 3: Embracing Risk:

The low-latency user wants Bigtable’s request queues to be (almost always) empty so that the system can process each outstanding request immediately upon arrival. (Indeed, inefficient queuing is often a cause of high tail latency.) The user concerned with offline analysis is more interested in system throughput, so that user wants request queues to never be empty. To optimize for throughput, the Bigtable system should never need to idle while waiting for its next request.

This is a profound and general insight. When I read this passage, my last decade of abject suffering suddenly came into focus for me.”
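To make the quoted point concrete, here is a minimal sketch (my own illustration, not code from the post or the SRE book) of a single-server FIFO queue driven by Lindley's recursion, assuming Poisson arrivals and exponential service times (an M/M/1 model). The simulate() helper and its parameters are names invented for this toy example. Driving utilization toward 1 keeps the queue from ever being empty (good for throughput), while keeping utilization low keeps latency near the bare service time at the cost of an often-idle server.

```python
import random
import statistics

def simulate(utilization, n_requests=200_000, service_mean=1.0, seed=42):
    """Toy M/M/1 queue via Lindley's recursion.

    utilization = arrival_rate / service_rate (must be < 1 for stability).
    Returns (mean latency, p99 latency) in units of the mean service time.
    """
    rng = random.Random(seed)
    arrival_mean = service_mean / utilization  # mean time between arrivals
    wait = 0.0                                 # queueing delay seen by the current request
    latencies = []
    for _ in range(n_requests):
        service = rng.expovariate(1.0 / service_mean)
        latencies.append(wait + service)       # total latency = queueing delay + service
        interarrival = rng.expovariate(1.0 / arrival_mean)
        # The next request waits for whatever work is still outstanding when it arrives.
        wait = max(0.0, wait + service - interarrival)
    return statistics.mean(latencies), statistics.quantiles(latencies, n=100)[98]

for u in (0.3, 0.7, 0.9, 0.99):
    mean, p99 = simulate(u)
    print(f"utilization={u:.2f}  mean latency={mean:6.1f}  p99 latency={p99:7.1f}")
```

In this toy model, at 30% utilization mean latency is only about 1.4x the bare service time but the server idles most of the time, while at 99% utilization the server is almost never idle yet mean latency is roughly 100x the service time and the p99 is far worse: exactly the tension between the low-latency user and the throughput-oriented user in the Bigtable example.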

Posted on 2022-09-01T05:04:15+0000