Eh, not sure that follows. Here's the thing: costs aren't linear. If you do what AWS does and create some artificial "compute units" as a fungible measure of processing power, what you'll find is that the sweet spot for price per compute unit is a medium-power system. Current mid-range processors tend to be slightly more expensive than low-end processors, but significantly cheaper than high-end processors.
So, hypothetically, let's say you can get one of 4 processors: a low-end one that gives you 75 units for $80, a mid-range processor that gives you 100 units for $100, a high-end processor that gives you 125 units for $150, and a top-of-the-line processor that gives you 150 units for $300. If you normalize those costs, your 4 processors get price-per-compute values of roughly $1.07, $1.00, $1.20, and $2.00. The best value is at the $1-per-compute-unit price point of the $100 processor. Logically, if you need 150 units of compute power you have 2 choices: you can use two $100 processors, or one $300 processor. Clearly the better option is the two $100 processors. That would be scaling out.

In the case of what SO did though, they took that off the table, because their formula isn't just the cost of the processor (ignoring related things like RAM and storage), but also includes a per-instance license cost. Their math ends up looking more like ($100 CPU + $150 Windows license) times 2, totaling $500, vs. ($300 CPU + $150 Windows license) times 1, totaling $450, which makes the more expensive processor the cheaper option in terms of total cost.
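You can sanity-check that arithmetic with a quick sketch. All the numbers here are the hypothetical ones from the comment above, not real hardware or license prices:

```python
# Hypothetical processor options: name -> (compute units, CPU price in $)
processors = {
    "low":  (75, 80),
    "mid":  (100, 100),
    "high": (125, 150),
    "top":  (150, 300),
}

# Price per compute unit: mid-range wins at $1.00/unit.
for name, (units, price) in processors.items():
    print(f"{name:>4}: ${price / units:.2f} per unit")

LICENSE = 150  # hypothetical per-instance Windows license cost


def total_cost(cpu_price, instances, license_per_instance=LICENSE):
    """Total cost = (CPU + per-instance license) x number of instances."""
    return (cpu_price + license_per_instance) * instances


# Both options deliver 150 compute units:
scale_out = total_cost(100, 2)  # two mid-range boxes
scale_up = total_cost(300, 1)   # one top-end box
print(f"scale out: ${scale_out}, scale up: ${scale_up}")
```

Without the license term, scaling out wins ($200 vs $300); with a $150-per-instance license, scaling up wins ($450 vs $500). The per-instance fee is doing all the work in flipping the answer.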
> If you do what AWS does and create some artificial "compute units" as a sort of fungible measure of processing power, what you'll find is that the sweet spot for price per compute unit is a medium power system.
At least when I look here the costs are linear.
E.g. a c5.2xlarge costs double a c5.xlarge, and a c5.24xlarge costs 24 times as much as a c5.xlarge.
That's interesting; it didn't used to be linear, particularly on the very large instances. Oddly, when looking at the prices for RHEL, those are much closer to what all the prices used to look like (I hadn't actually looked at AWS pricing in a few years). I wonder if AWS's virtualization tech has just reached the point where all processing is effectively fungible, and it's all really just executing on clusters of mid-range CPUs no matter how large your virtual server is.