Datacenter resource fragmentation
The concept of resource fragmentation is common in the IT world. In its simplest form, fragmentation occurs when blocks of capacity (compute, storage, whatever) are allocated, freed, and re-allocated over time, leaving the remaining free capacity scattered across small, noncontiguous blocks. While the most familiar setting for fragmentation is memory allocation, the phenomenon plays out within the datacenter as well.
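To make the mechanics concrete, here is a minimal sketch of a first-fit allocator over a fixed pool. Everything in it (the pool size, block sizes, function names) is made up for illustration; the point is only that after a few alloc/free cycles, plenty of capacity remains free, yet no single contiguous block is large enough to satisfy a new request.

```python
# Hypothetical illustration: a first-fit allocator over a fixed pool,
# showing how alloc/free cycles leave noncontiguous holes.

POOL_SIZE = 100  # total capacity units

# Allocated regions as (start, size) tuples.
allocated = []

def free_holes():
    """Return the free (start, size) gaps between allocations."""
    holes, cursor = [], 0
    for start, size in sorted(allocated):
        if start > cursor:
            holes.append((cursor, start - cursor))
        cursor = start + size
    if cursor < POOL_SIZE:
        holes.append((cursor, POOL_SIZE - cursor))
    return holes

def alloc(size):
    """First-fit: place the block in the first hole large enough."""
    for start, hole in free_holes():
        if hole >= size:
            allocated.append((start, size))
            return start
    return None  # no contiguous hole big enough

def free(start):
    allocated[:] = [(s, sz) for s, sz in allocated if s != start]

# Allocate five 20-unit blocks, then free the 2nd and 4th.
blocks = [alloc(20) for _ in range(5)]
free(blocks[1])
free(blocks[3])

print("free capacity:", sum(sz for _, sz in free_holes()))  # 40 units in total
print("alloc(40):", alloc(40))  # None -- 40 units are free, just not contiguously
```

The failed `alloc(40)` at the end is fragmentation in a nutshell: the capacity exists, but its shape makes it unusable.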
But what does resource fragmentation look like in the datacenter? And more importantly, what is the remediation?
The impacts of virtualization
Server virtualization does for applications and compute what noncontiguous blocks did for memory and storage. By creating virtual machines, each with a customizable resource footprint, the once-large contiguous blocks of compute capacity (the individual servers) are carved into much smaller allocations. And as applications take advantage of this architectural compute model, they become more distributed.
The result is an application environment in which individual components are spread across multiple devices, effectively occupying a noncontiguous set of compute resources that must be stitched back together by the network. It is not a stretch to say that for server virtualization to deliver on its promise of higher utilization, the network must act as the Great Uniter.
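The same arithmetic from the allocator sketch plays out at datacenter scale. A toy illustration, with entirely made-up host and VM numbers: the aggregate free CPU and memory across the cluster would comfortably fit a new VM, but the leftovers are stranded on different hosts, so no single host can take it.

```python
# Hypothetical numbers: per-host leftover capacity after VM placement.
# In aggregate the cluster could fit the new VM, but no single host can.

hosts = [
    {"name": "host1", "free_vcpus": 4, "free_gb": 12},
    {"name": "host2", "free_vcpus": 6, "free_gb": 8},
    {"name": "host3", "free_vcpus": 2, "free_gb": 20},
]

new_vm = {"vcpus": 8, "gb": 24}  # the VM we would like to place

def fits(host, vm):
    return host["free_vcpus"] >= vm["vcpus"] and host["free_gb"] >= vm["gb"]

aggregate = {
    "vcpus": sum(h["free_vcpus"] for h in hosts),  # 12 vCPUs free in total
    "gb": sum(h["free_gb"] for h in hosts),        # 40 GB free in total
}
print("aggregate free:", aggregate)  # enough for the 8-vCPU / 24 GB VM...
print("placeable on:", [h["name"] for h in hosts if fits(h, new_vm)])  # ...but []
```

This stranded capacity is exactly the gap the network has to bridge: if one host cannot hold the whole workload, its components land on several hosts, and the fabric between them becomes part of the "server."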
Not just a virtual phenomenon