I recently came across this interesting and thought-provoking post about IOPS and latency.
We all know we need to consider IOPS as well as — and often more critically than — overall storage volume: 10TB of storage can effectively be saturated, from a performance perspective, by under 1TB of data that is read and written at a high rate. This is something many people overlook when they simply say project X or application Y needs xx GB of storage.
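A quick back-of-the-envelope calculation makes the point. The figures below are purely illustrative assumptions (a nominal 2TB, 75-IOPS 7.2k RPM drive and a hypothetical workload), not measurements from any real product:

```python
import math

def disks_needed(capacity_tb, iops, disk_capacity_tb=2.0, disk_iops=75):
    """Return (disks required by capacity, disks required by IOPS).

    disk_capacity_tb and disk_iops are assumed per-spindle figures for
    an illustrative 7.2k RPM SATA drive.
    """
    by_capacity = math.ceil(capacity_tb / disk_capacity_tb)
    by_iops = math.ceil(iops / disk_iops)
    return by_capacity, by_iops

# Hypothetical requirement: 10 TB of data, with a hot set driving
# 5000 random IOPS against well under 1 TB of that data.
cap, perf = disks_needed(10.0, 5000)
print(cap, perf)  # 5 disks cover the volume, but 67 are needed for the IOPS
```

Sizing purely on volume would suggest a handful of drives; sizing on the I/O rate demands more than ten times as many spindles (or a move to flash).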
However even with the understanding of the need to assess IOPS required by a solution it is still possible to get caught out if you don’t consider the profile of these IOPS, and the impact of random reads and writes on the actual performance of the drives / array. Add to this the fact that many manufacturers’ figures for their products are somewhat on the optimistic side and it is very easy to deploy a solution that at first glance appears to meet the performance requirements, but turns out to be very inadequate in practice.
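One concrete way the I/O profile bites is the RAID write penalty: each front-end random write costs multiple back-end I/Os, so an array's usable IOPS can be far below the sum of its drives' quoted figures. A minimal sketch, using the standard textbook penalty values and an assumed 150-IOPS drive and 70/30 read/write mix (all hypothetical numbers):

```python
# Back-end I/Os generated per front-end write, by RAID level
# (standard rule-of-thumb values: mirror = 2, single parity = 4, dual parity = 6).
WRITE_PENALTY = {"raid0": 1, "raid1": 2, "raid5": 4, "raid6": 6}

def effective_iops(disk_count, disk_iops, read_fraction, raid="raid5"):
    """Estimate the front-end random IOPS an array can sustain.

    Raw IOPS is disk_count * disk_iops; writes are scaled up by the
    RAID write penalty before dividing into that budget.
    """
    raw = disk_count * disk_iops
    write_fraction = 1.0 - read_fraction
    return raw / (read_fraction + write_fraction * WRITE_PENALTY[raid])

# Hypothetical array: 24 drives at 150 IOPS each (3600 raw), RAID 5,
# with a 70% read / 30% write random workload.
print(round(effective_iops(24, 150, 0.7, "raid5")))  # ~1895, barely half the raw figure
```

A datasheet quoting the raw 3600 IOPS would look comfortably adequate for a 2500-IOPS workload; the write penalty says otherwise. Testing with the real read/write mix is the only safe check.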
So: by all means consider your storage volume requirements, but pay close attention to the IOPS and latency requirements along with the usage profile. Then carefully design and test the storage solution to make sure it works as expected.
The post can be found here; interesting reading.