For thinking about bursty arrivals, a good rule of thumb is to look at the variance of the interarrival times. The key number is the variance of the interarrival times divided by the squared mean interarrival time (the squared coefficient of variation). The waiting time in a system with bursty arrivals will be roughly this factor times the M/M/c waiting time. Kingman's formula is the equivalent for the single-server setting: https://en.wikipedia.org/wiki/Kingman%27s_formula
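A minimal sketch of that rule of thumb, using Kingman's G/G/1 approximation E[W] ≈ ρ/(1−ρ) · (Ca² + Cs²)/2 · E[S]; the helper name and the numbers below are just illustrative:

```python
def kingman_wait(mean_interarrival, var_interarrival, mean_service, var_service):
    """Kingman's G/G/1 approximation for the mean wait in queue:
    E[W] ~ rho/(1-rho) * (Ca^2 + Cs^2)/2 * E[S]."""
    ca2 = var_interarrival / mean_interarrival ** 2  # arrival burstiness
    cs2 = var_service / mean_service ** 2            # service-time variability
    rho = mean_service / mean_interarrival           # utilization
    if rho >= 1:
        raise ValueError("utilization must be below 1 for a stable queue")
    return rho / (1 - rho) * (ca2 + cs2) / 2 * mean_service


# Poisson arrivals + exponential service (Ca^2 = Cs^2 = 1) recovers M/M/1:
print(kingman_wait(2.0, 4.0, 1.0, 1.0))   # 1.0
# Same arrival rate but bursty (Ca^2 = 5): the wait scales up by ~3x.
print(kingman_wait(2.0, 20.0, 1.0, 1.0))  # 3.0
```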
For seasonality, if the arrival rates fluctuate over a long time period relative to the typical waiting time, it makes sense to just do separate calculations for the different conditions you experience. If the fluctuation is very fast, just use the average arrival rate.
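As a toy illustration of the slow-vs-fast distinction (the rates below are made up, and I'm using the single-server M/M/1 waiting-time formula just to keep it short):

```python
def mm1_wait(arrival_rate, service_rate):
    """Mean wait in queue for M/M/1: W_q = rho / (mu - lambda)."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable: arrival rate must be below service rate")
    return (arrival_rate / service_rate) / (service_rate - arrival_rate)

service_rate = 10.0        # jobs/sec (hypothetical)
peak, off_peak = 9.0, 3.0  # hypothetical seasonal arrival rates

# Slow fluctuation (e.g. day vs. night): report each regime separately.
print(mm1_wait(peak, service_rate))      # ~0.90 s during the peak
print(mm1_wait(off_peak, service_rate))  # ~0.04 s off-peak

# Fast fluctuation relative to waiting times: the average rate is enough.
print(mm1_wait((peak + off_peak) / 2, service_rate))  # ~0.15 s
```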
For constrained queue lengths, there are a lot of theoretical results in this area, such as the M/M/c/c model: https://en.wikipedia.org/wiki/M/M/c_queue. The second "c" is the system capacity; with capacity equal to the number of servers, there's no waiting room at all, and arrivals that find every server busy are dropped (the Erlang loss model).
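Here's a small sketch of the M/M/c/c calculation using the standard Erlang B recursion (since there's no waiting room, the quantity of interest is the blocking probability rather than a waiting time); the load and server count are hypothetical:

```python
def erlang_b(offered_load, servers):
    """Blocking probability for M/M/c/c via the Erlang B recursion:
    B(0) = 1, B(k) = a*B(k-1) / (k + a*B(k-1)), with a = offered load."""
    b = 1.0
    for k in range(1, servers + 1):
        b = offered_load * b / (k + offered_load * b)
    return b

# 8 Erlangs of offered load on 10 servers: ~12% of arrivals are blocked.
print(erlang_b(8.0, 10))
```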
> For seasonality, if the arrival rates fluctuate over a long time period relative to the typical waiting time, it makes sense to just do separate calculations for the different conditions you experience. If the fluctuation is very fast, just use the average arrival rate.
Thanks, that makes sense. More quantitatively, about where would you set the bar on "very fast"? Is it ~1x the mean interarrival time, or ~1 million x?
By the way, I really enjoyed your "Nudge" paper from last year. The result about FCFS was very surprising to me!
There's a transition zone between "fast" and "slow" fluctuations, roughly around the mean waiting time, that's more complicated and is an area of active research. If the fluctuation timescale is 5x shorter than the mean waiting time, I'd guess the effects of the fluctuation will be gone.
One more: my understanding is that in Nudge I need to know processing time, but in FCFS I don't. How sensitive is your optimality result to errors in processing time estimates (I don't recall this being covered in the paper, but if it is feel free to tell me).
In cloud services and database settings, we seldom have accurate processing time estimates until we're quite far into processing a request (post-auth and post-parse at least, and for databases post-query-plan).
The only thing we use processing time for in the Nudge paper is to classify jobs as "Large", "Small", or "Other". If instead of exact processing times we had estimates, the result would still work as long as a job estimated to be large was typically longer than a job estimated to be small. So Nudge totally works in these more realistic settings.
If the estimates were super noisy, you might be better off using Nudge very sparingly, only when you're more confident about the relative sizes.
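To make the "use it sparingly" idea concrete, here's a hypothetical sketch (not from the paper): only label a job Small or Large when its size estimate clears the cutoff by some margin, and treat everything else as "Other", i.e. plain FCFS. The cutoffs, margin, and numbers are all made up.

```python
def classify_job(estimated_size, small_cutoff, large_cutoff, margin=0.0):
    """Classify a job from a (possibly noisy) size estimate.

    The margin widens the dead zone around each cutoff: the noisier the
    estimates, the larger the margin, so Nudge swaps are only attempted
    when an estimated-Large job is very likely longer than an
    estimated-Small one. Everything in between is "Other" and runs FCFS.
    """
    if estimated_size <= small_cutoff * (1 - margin):
        return "Small"
    if estimated_size >= large_cutoff * (1 + margin):
        return "Large"
    return "Other"

# Hypothetical cutoffs in milliseconds of estimated processing time.
print(classify_job(2.0,   small_cutoff=5.0, large_cutoff=50.0, margin=0.2))  # Small
print(classify_job(40.0,  small_cutoff=5.0, large_cutoff=50.0, margin=0.2))  # Other
print(classify_job(120.0, small_cutoff=5.0, large_cutoff=50.0, margin=0.2))  # Large
```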