A good way to frame any innovation discussion in high-tech is in the context of application workloads.
The slide shown below has become very common both inside and outside of EMC. It attempts to categorize the IT infrastructure needs of application workloads.
From an engineering standpoint, the Y-axis can be difficult to comprehend. The top portion of the Y-axis represents performance (often measured in units such as “I/Os per second”), while the bottom portion represents capacity (often measured in units such as “terabytes”). An engineer accustomed to the Cartesian coordinate system would expect consistent units along the entire Y-axis (not to mention the X-axis as well)!
It was exactly this discussion that EMC Distinguished Engineer Mark Lippitt and I were having last week. Mark has been involved in the storage industry since the late 1970s, and he and I were taking a historical look at what the chart is trying to convey.
Both of us agreed with the fundamental premise of the chart: application workload requirements drive innovation and dramatic change into existing IT infrastructures.
It is interesting to discuss the idea of application workload evolution in the context of one of the dominant information storage protocols emerging from the 1980s: the SCSI protocol. One of the first jobs I had after college graduation was a performance evaluation of workloads between SCSI and ESDI. During this time I learned about the capabilities of the SCSI protocol. In particular I learned about tagged command queueing, and began to understand, for the first time, that disk technology was not keeping up with application workloads.
In a recent post I wrote about the concept of application nearness. I used the illustration below to indicate that in the 1980s applications were compiled to run on a CPU that was geographically and physically quite close to a spinning disk drive. One of the capabilities of SCSI tagged command queueing, as illustrated below, was the ability of the disk drive to accept more than one request at a time (e.g. the requests to store the values “1”, “2”, “3”, and “4” are all issued by the application before the first request is finished).
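The queueing behavior described above can be sketched in a few lines of code. This is a loose illustration, not real SCSI: the class name, method names, and values are all invented for the example. The point is simply that the host submits several tagged requests without waiting, and the drive works through its queue afterwards.

```python
from collections import deque

class TaggedQueueDrive:
    """Illustrative sketch of a drive that accepts multiple outstanding requests."""

    def __init__(self):
        self.queue = deque()

    def submit(self, tag, value):
        # The host issues a new tagged request without waiting
        # for earlier requests to complete.
        self.queue.append((tag, value))

    def service_next(self):
        # The drive completes one queued request. (Real drives could
        # also reorder queued requests to minimize head movement.)
        tag, value = self.queue.popleft()
        return tag

# The host issues four requests back to back...
drive = TaggedQueueDrive()
for tag, value in enumerate(["1", "2", "3", "4"], start=1):
    drive.submit(tag, value)

# ...and the drive services them afterwards.
completed = [drive.service_next() for _ in range(4)]
print(completed)  # [1, 2, 3, 4]
```

Without tagged queueing, the host would have to block on each request in turn, leaving the CPU idle while the drive's mechanics caught up.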
This picture is meant to highlight that the applications running on “fast” CPUs were spending a lot of time waiting for a hard disk drive to perform a series of mechanical movements to store the data.
In the context of the workload graph shown above, the application workloads of the 1980s were driving the performance requirements further up the Y-axis. At that point in history, an application would typically direct all of its read and write requests to one hard disk drive, which could only handle (for example) the completion of one request every 20 milliseconds.
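The back-of-the-envelope math behind that example is worth making explicit: a drive that completes one request every 20 milliseconds tops out at a few dozen I/Os per second, no matter how fast the CPU issuing those requests is. (The 20 ms figure is the illustrative number from the text, not a measurement of any particular drive.)

```python
# One outstanding request completing every 20 ms (illustrative number)
service_time_ms = 20

# Maximum requests the drive can finish per second
max_iops = 1000 / service_time_ms
print(max_iops)  # 50.0
```

Fifty I/Os per second per drive is the ceiling the applications of that era kept running into, which is why the workloads pushed so hard up the performance axis of the chart.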
Interestingly enough, two alternative approaches, by two separate teams, in two separate companies, were developed less than 10 miles away from each other. Both of these approaches, which are still valid and operational 30 years later, ushered the industry into the disk array era. (Note that the Wikipedia definition of disk array lists the products deploying these alternative approaches.)
I will spend some time in an upcoming post diving into how application workloads played a role in driving these two innovations.
In the meantime, Mark and I agreed that while the Cartesian coordinate approach for describing application workload may annoy the engineer, it is a highly effective framework for starting a great dialogue.
image credits: tcc.edu; emc.com
Steve Todd is an EMC Fellow, the Director of EMC’s Innovation Network, a high-tech inventor, and the author of the book Innovate With Global Influence. An EMC Intrapreneur with over 200 patent applications and billions in product revenue, he writes about innovation on his personal blog, the Information Playground. Twitter: @SteveTodd