Hacker News

Ah, I've been looking for a name for this.

So this applies to software systems as well. We had a system where, every time we optimized the underlying database, it became slower. This puzzled us for a while. Turns out people were used to the slowness: once they noticed the system had gotten faster, they started entering more transactions. As more people caught on, the system got overwhelmed again, until it finally reached a point where it could handle the full load.

The variable we failed to see was that people were now leaving work earlier. The system's metrics appeared to degrade, but the metric that actually mattered had markedly improved: they used to work longer hours to get all their work done.



> We had a system where, every time we optimized the underlying database, it became slower.

One thing I've noticed is that the amount of time I have to wait when performing operations has remained more or less constant over the past few decades, even as processor instruction retirement rates have skyrocketed and memory/storage/network latencies have plummeted. I often find myself trying to pull up a site to do something like track a project or buy a ticket, only to be waiting 5 to 10 seconds before I am able to read the information and/or perform the operation I was there for.

I always ask myself, "Why on earth am I having to wait at all for this?" I assume it's the same concept as "induced demand" for roadways. Under that theory, in densely populated areas, traffic will always be about as bad as it currently is, regardless of the number of lanes. Add more lanes, and commute times drop. Then people who didn't tolerate the longer commute times before jump in and start commuting, increasing the load on the roads. This quickly raises the wait time back to just below the previous level, with all the people in the "margin of toleration" who weren't commuting before now commuting.
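The feedback loop above is easy to see in a toy model: users keep joining while the latency they'd experience stays below their tolerance, and latency grows with load. All the numbers here (capacity, tolerance, the 1/(capacity - load) latency curve) are made up for illustration, not from anything in the thread:

```python
# Toy induced-demand model. Marginal users join while the wait is
# tolerable; latency blows up as load approaches capacity.

def latency(users, capacity):
    # Crude queueing-flavored curve: delay explodes near capacity.
    return 1.0 / max(capacity - users, 1e-9)

def equilibrium(capacity, tolerance, step=1.0):
    users = 0.0
    # Keep adding users while the *next* user would still find the
    # wait tolerable.
    while latency(users + step, capacity) < tolerance:
        users += step
    return users, latency(users, capacity)

# "Adding lanes": double the capacity and see what happens.
u1, l1 = equilibrium(capacity=100, tolerance=0.5)
u2, l2 = equilibrium(capacity=200, tolerance=0.5)
# Users served roughly doubles, but the latency each of them sees
# settles just under the same tolerance as before.
```

Doubling capacity in this sketch roughly doubles the number of users, while equilibrium latency ends up pinned just below the tolerance threshold either way, which is the "traffic is always about as bad as it is" effect.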

In the case of the systems at my company for project tracking, I just assume that people build on dependencies in a way that trades latency for the convenience of using a particular API/service with an SLO. The system with the SLO costs money to maintain, and so there's a quota system in place based on priority. The project tracking software comes along and says, "Well, we could write the data into a Postgres instance with a 10 millisecond response time, but then we've got to worry about backup/restore, availability, and all that stuff. Or we can be the lowest-priority traffic for this nifty distributed storage backend that has a support team and an SLO with a 7 second response time." The seven seconds of each person's life every time they use the tool is an externality, and it's something few people are ever going to complain about. So, 7 seconds it is.
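The externality is easy to put a number on. A back-of-the-envelope sketch, where every figure (uses per day, headcount, the hypothetical dedicated-Postgres response time) is an assumption for illustration:

```python
# Rough cost of the 7-second wait vs. a dedicated fast backend.
wait_s = 7            # per-operation wait with the shared backend
fast_s = 0.010        # hypothetical dedicated-Postgres response time
uses_per_day = 20     # assumed operations per person per day
people = 1000         # assumed users of the tracking tool

extra_s_per_day = (wait_s - fast_s) * uses_per_day * people
extra_hours_per_day = extra_s_per_day / 3600
# With these assumed numbers: roughly 39 person-hours of waiting per day,
# a cost that shows up on nobody's budget.
```

The point isn't the exact figure; it's that the cost is real but diffuse, so no single party has an incentive to pay for the faster backend.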

I'm resigned to waiting an average of 5 or so seconds for anything I do on a computer for the rest of my life, no matter how cheap cycles, storage, or bandwidth get.


This equation has very interesting implications.

E.g., more lanes translate to higher automaker profits at the expense of less overall population happiness.

Faster computers translate to a higher overall rate of software use at the expense of more overall time spent waiting.


This is precisely what happens with mobile networks. The faster the network gets, the more people use it, and usage grows until the performance gain is eaten entirely.

Ultimately, though, both economies of scale and the Jevons paradox run into the law of diminishing returns: at some point, adding network capacity or speeding up DB queries no longer yields any meaningful improvement.



