Suppose we have data from a random population: a 3D histogram of throughput against current DRAM cache size (each intersection of an IOPS value and a cache size holds a count). How can one use queuing theory to estimate the best possible cache size?
1 Answer
You first need to establish the access speed of the cache as a function of its size (usually larger cache -> slower speed, because of physics). You also need to determine the energy consumption of a larger cache, which leads to more heat, which leads to a lower clock speed (again, because of physics). Then you need to find the cost of a larger cache - if I can gain more for less money by means other than increasing the cache size, I'm not going to increase the cache size (because of economics).
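To make that trade-off concrete, here is a minimal sketch of the kind of model you would build. Every curve in it is a hypothetical stand-in - the hit-rate formula, the latency growth, the cost per MB, and the miss penalty are all assumptions, not measured hardware data - so treat it as a shape of the calculation, not a result:

```python
# Toy model of the speed/energy/cost trade-off described above.
# All curves are hypothetical stand-ins: hit_rate() is a crude
# working-set model, access_ns() makes latency grow with physical
# size ("larger cache -> slower speed"), and cost is linear.
import math

MISS_PENALTY_NS = 300.0  # assumed cost of going to the backing store

def hit_rate(cache_mb, working_set_mb=512.0):
    # Assumed: hit rate saturates as the cache approaches the working set.
    return 1.0 - math.exp(-cache_mb / working_set_mb)

def access_ns(cache_mb):
    # Assumed: access latency grows slowly with cache size.
    return 10.0 + 2.0 * math.log2(cache_mb)

def cost_usd(cache_mb, usd_per_mb=0.05):
    return cache_mb * usd_per_mb  # assumed linear price

for cache_mb in (64, 128, 256, 512, 1024, 2048):
    h = hit_rate(cache_mb)
    avg_ns = h * access_ns(cache_mb) + (1.0 - h) * MISS_PENALTY_NS
    # The economics point: compare latency saved per dollar, not speed alone.
    print(f"{cache_mb:5d} MB: hit={h:.2f}  avg access={avg_ns:6.1f} ns"
          f"  cost=${cost_usd(cache_mb):7.2f}")
```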
Histograms of existing applications are of limited use, since plenty of algorithms adapt to the cache size - with a cache four times the size you would see completely different access patterns. So you are going to have to use simulations.
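As one possible starting point, here is a minimal trace-driven simulation sketch: it replays a single synthetic, skewed trace against an LRU cache at several sizes. The trace and all parameters are hypothetical, and - per the caveat above - a fixed trace cannot capture algorithms that adapt their access pattern to the cache size; a full-system simulation would be needed for that:

```python
# Minimal trace-driven cache simulation sketch. One fixed synthetic
# trace is replayed at several cache sizes; a real study would let the
# workload adapt to each size instead of reusing the same trace.
from collections import OrderedDict
import random

def lru_hit_rate(trace, capacity):
    cache, hits = OrderedDict(), 0
    for block in trace:
        if block in cache:
            hits += 1
            cache.move_to_end(block)       # mark most recently used
        else:
            cache[block] = True
            if len(cache) > capacity:
                cache.popitem(last=False)  # evict least recently used
    return hits / len(trace)

random.seed(42)
# Hypothetical skewed access pattern: a few hot blocks, a long cold tail.
trace = [int(random.paretovariate(1.2)) % 10_000 for _ in range(200_000)]

for capacity in (100, 400, 1_600, 6_400):
    print(f"cache={capacity:5d} blocks -> "
          f"hit rate {lru_hit_rate(trace, capacity):.3f}")
```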
I wonder why I didn't mention queuing theory here.