Infrastructure Availability Zones: Increase Resiliency as You Scale

By GIXnews


Randy Rowland, President of Data Center Services at Cyxtera, explores why, when and how to apply the concept of availability zones to dedicated infrastructure in colocation.


The cloud has brought a new way of looking at scaling capacity and resiliency through availability zones (AZs). Cloud AZs are created when cloud providers deploy separate hyperscale node instances in multiple locations within a metro area. Customers of the cloud provider can then deploy their workloads across these multiple node instances, mitigating single points of failure.

Dedicated infrastructure, on the other hand, has traditionally been deployed in a single location to help reduce complexity and maximize operational control. For example, infrastructure housed in one physical data center location increases operational staff efficiency and makes shipping and receiving more convenient. This approach, while delivering some benefits, lacks the scale and resiliency of cloud availability zones that many workloads require. The natural next step would be cloud migration, but as more businesses realize that cloud is not a fit for every workload, they’re turning to dedicated infrastructure to support increased demand. To provide comparable resiliency and scale, organizations should apply a similar availability zone approach to their dedicated infrastructure deployments.

Implementing Dedicated Infrastructure Availability Zones

The first step is to decide which workloads would benefit most from dedicated infrastructure AZs. Workloads such as mission-critical internal or external applications with high availability requirements are often a good place to start. These could include point-of-sale, trading or back office manufacturing operations systems. Applications experiencing continuous growth in capacity demand or user/transaction volume are also a logical choice. Selected workloads should also use a hypervisor, architecture or software that can support multi-site high-availability configurations in order to benefit from dedicated infrastructure AZs.

Next, consider when to implement AZs for these workloads. Migrations, tech refreshes and new workload deployments offer the best opportunities. For existing workloads, organizations should incorporate AZs as they plan migrations from an on-premises data center to a new colocation footprint. Tech refreshes or deployments of significant capacity for existing workloads already in colocation present an ideal opportunity to make AZs part of the application architecture. For new workloads, planning well in advance of initial deployment enables businesses to achieve the benefits of cloud AZs for their dedicated infrastructure footprint.

The Role of the Colo Provider

To enable dedicated infrastructure AZs, organizations should look for providers that offer at least three physically separated data center facilities in their desired metro region. In addition to metro area facility breadth and density, the provider should also offer low latency network connectivity between facilities in the AZ. While industry average latency between metro facilities is typically 10-15 milliseconds, latency of 5 milliseconds or less is needed to ensure resources distributed across different facilities in the AZ operate as if they were in a single cabinet.
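As a rough sanity check of that latency budget, inter-facility round-trip time can be sampled from each site before committing workloads to an AZ layout. The sketch below is illustrative only: the facility hostnames, ports and the 5 ms threshold are assumptions drawn from the figure above, and TCP connect time is used as a crude proxy for network round-trip time.

```python
import socket
import statistics
import time

# Hypothetical endpoints in two facilities of the same metro AZ;
# substitute hosts actually reachable from your measurement point.
FACILITIES = {
    "facility-a": ("10.0.1.10", 443),
    "facility-b": ("10.0.2.10", 443),
}

# Round-trip budget for AZ-style operation, per the 5 ms figure above.
LATENCY_BUDGET_MS = 5.0


def tcp_rtt_ms(host: str, port: int, samples: int = 5) -> float:
    """Median TCP connect time in milliseconds, a rough RTT proxy."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            times.append((time.perf_counter() - start) * 1000)
    return statistics.median(times)


def within_budget(rtt_ms: float, budget_ms: float = LATENCY_BUDGET_MS) -> bool:
    """True if a measured round trip fits the single-cabinet illusion."""
    return rtt_ms <= budget_ms


# Example usage (requires reachable endpoints):
#   for name, (host, port) in FACILITIES.items():
#       rtt = tcp_rtt_ms(host, port)
#       print(name, f"{rtt:.2f} ms", "OK" if within_budget(rtt) else "TOO SLOW")
```

A single connect-time sample overstates RTT (it includes the TCP handshake), so taking the median of several samples, as above, smooths out outliers; a production check would use dedicated probes or the provider's published latency SLAs instead.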

It’s also essential that the colocation provider delivers the connectivity ecosystem needed to access both local and long-haul network service providers. Without this, the AZ’s benefits can be limited, just as they would be for cloud.

The time it takes the provider to provision colocation is another key consideration. Typical deployments can take three to six months – simply too long when users expect agility. Colocation compute and connectivity should be cloud-like – easy to provision and scale. Look for providers that can offer on-demand provisioning of connectivity and pre-configured dedicated compute infrastructure to shorten deployment to as little as a single business day.

Finally, make sure that the provider’s operations staff is seasoned. Experienced ‘remote hands’ staff can mitigate onsite efficiency concerns raised when colocation deployments are distributed across multiple facilities.

To enable dedicated infrastructure availability zones, look for providers with metro area facility breadth and depth that offer low latency network connectivity across different facilities.

More organizations are turning to data center providers to support mission-critical workloads not fit for cloud. But they shouldn’t have to compromise reliability. Businesses can still leverage cloud-like practices such as AZs to increase resiliency and scale. Innovations such as on-demand provisioning, low latency metro region connections and the right connectivity ecosystem make it even easier to mitigate single points of failure for dedicated infrastructure in colocation.

Randy Rowland is the President of Data Center Services at Cyxtera.

Source: Data Center Frontier