Open Networking for VMware vSphere: The Last Piece of the SDDC Puzzle?
VMware vSphere is one of the most prevalent virtualization platforms in the market today. Despite stiff competition from proprietary and open-source initiatives, VMware vSphere has continued to innovate and provide value to the enterprise.
Now that I’ve kept marketing/SEO happy, let’s dive right in. Two of the latest initiatives from the VMware marketing engine are “SDDC” (software-defined data center) and “hyper-converged.” Hype aside, these two concepts are fundamentally aligned with what hyper-scale operators have been doing for years: it all boils down to taking generic hardware and configuring it for complex and varied roles by layering on different software.
At Cumulus Networks, we help businesses build cost-effective networks by leveraging “white box” switches together with a Linux® operating system. We feel this is the crucial missing piece to the overall SDDC vision.
What is VMware’s SDDC and Hyper-converged Strategy All about Anyway?
First, let’s back up and talk a little history. VMware started as a hypervisor stack for abstracting compute resources from the underlying server (and before that, workstations… but I digress). They moved on to more advanced management of that newly disaggregated platform, with the rise of vCenter and everything that followed.
Then VMware turned its attention to the network with vShield (later renamed vCNS) and then NSX (through the Nicira acquisition), seeking to abstract network complexity away from the underlying physical network.
Around the same time that NSX reached general availability, along came vSAN, VMware’s play at scale-out, clustered, server-side storage. This software-defined storage platform is certainly new for VMware, but the architecture has been used in HPC and web-scale data centers for years; GlusterFS, GFS, Ceph and NDFS are all examples of distributed clustered storage subsystems typically used for this purpose.
The overall story of SDDC is how all of these software products work together, in concert, to deliver data center services faster and more dynamically than before. This model promises to dramatically reduce time to market, increase agility and generally lower TCO through increased operational efficiency. The model also allows more commodity hardware to be used, since all the “smarts,” including HA, are now done in software.
VMware’s hyper-converged platforms, the EVO product line, take optimized vSAN reference designs and deliver them in a package that is easy to consume and easy to scale. This is fundamentally a scale-out commodity play, made dramatically simpler through tight integration and clever automation.
However, Nutanix has been doing similar things in the hyper-converged space for a while now. This is now a competitive space, with all players taking an approach similar to the one laid out by web-scale operators such as Google.
In my humble opinion, SDDC boils down to applying resource pooling and abstraction to storage, network and compute. By definition, all the functionality tends to move to software and the underlying hardware is treated by most as a commodity; the logical next step is to start using white box or original design manufacturer (ODM) servers and storage. But what about the network?
Single-vendor Networking: Last Bastion of a Bygone Era?
For those old (or is that wise?) enough to remember, in the early days of enterprise IT (think big washing-machine-size disk drives, custom vendor-specific CPUs), the whole stack (hardware, OS, applications and usually labor) was procured from a single vendor; usually Big Blue.
At some point, the types of servers dominating the data center began to shift radically. Looking back, I would submit there were three real waves of innovation and change:
- The move from highly specialized, centralized “Big Iron” to distributed rack servers.
- CPUs standardizing on x86 and becoming available through new channels.
- General-purpose operating systems becoming available separately from hardware vendors: UNIX/Linux, Windows and, later, the hypervisors.
In the server world, that brings us up to about 15 years ago. Then virtualization helped usher in the era of choice in server hardware. With VMware’s broad HCL, it didn’t matter what you ran underneath. Hardware got faster, utilization increased, and costs went down. Choice helped commoditize servers and fueled the rise of the likes of Supermicro.
I would assert that the data center networking industry is largely still at step 1 or 2. There has been a lot of hype surrounding software-defined networking (SDN), and while most networking vendors still sell a high-margin integrated solution, that tide is starting to turn.
So if x86 was one of the inflection points for server innovation, do we see something similar in data center networking?
Yes, we do. Merchant silicon for networking is here, and all the traditional network vendors are using it. Note that the article is from 2012; since then, even Cisco’s vision for the future, Application Centric Infrastructure (ACI) with the Nexus 9000 series switches, has been built on Broadcom Trident II silicon.
For several years, web-scale/hyperscale providers have had direct access to ODM switches. Some providers just took something already on the shelf and sourced it directly, while others decided to design their own switch. Facebook even chose to publish its switch design to the Open Compute Project (OCP) as “Wedge.”
What’s changed over the last two years is the broad availability of ODM or white box switches to the mass market.
This presents all companies with a new, radically different choice for network hardware: you can buy a switch with an ASIC (and performance) identical to what you would get from your usual vendor, without an operating system, at a dramatically reduced and predictable cost.
With SDDC, the concept of disaggregation of software from hardware is largely assumed. It’s quite conceivable to build a whole rack of compute and storage entirely from commodity servers. What’s more, you get to choose the software and hardware vendors independently.
Why is the data center network so different and/or special? Does it need to be treated that way?
I would argue that it doesn’t need to be. Open networking allows you to choose your switch hardware as well as your network operating system, and you can make these choices independent of each other. Just as it happened in the server market, we see opportunity for a dramatic increase in competition, which ultimately benefits customers.
To quote Fight Club on the subject of network switches: “You are not special. You are not a beautiful or unique snowflake. You’re the same decaying organic matter as everything else.”
The graphic above shows how disaggregation through open networking fits in the overall SDDC model at the most fundamental level. Just as vSphere (ESXi) abstracts the underlying server, storage and logical network, Cumulus Linux abstracts the physical switch hardware.
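To make that abstraction concrete, here is a rough sketch of what switch configuration looks like when the network OS is just Linux. On Cumulus Linux, front-panel ports appear as ordinary Linux interfaces (swp1, swp2, …) managed through the familiar `/etc/network/interfaces` file. The stanzas below are illustrative only; the addressing and port roles are invented for this example, not taken from the validated design.

```
# /etc/network/interfaces (illustrative sketch)
auto lo
iface lo inet loopback
    address 10.0.0.11/32    # example loopback / router ID

auto swp1
iface swp1
    mtu 9000                # jumbo frames, e.g. for vSAN/vMotion traffic

auto swp2
iface swp2
    mtu 9000
```

The point is less the specific stanzas than the operational model: the same ifupdown workflow and tooling you already use on Linux servers now applies to the switch.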
Making the Jump
Hopefully, at this point, the question becomes: “OK, how do we get there?”
At Cumulus Networks we have tried to make the transition as smooth as possible. With our upcoming 2.5 release we’ve added a few targeted features to help people take advantage of this paradigm shift not only in greenfield SDDC rollouts, but also in traditional vSphere environments, without requiring NSX to be fully integrated on day 1. We certainly recognize Rome wasn’t built in a day.
We also invested the time in fully testing and validating vSphere in our labs with a variety of topologies. The end result is our Cumulus Validated Design: vSphere. I’ll also post an article on the build-out of that lab, because I had so much fun doing it.
Then we kicked it up a notch: we automated the build of our validated design, to demonstrate the real power of a fully software-defined environment. The automated demos are available here.
The demo installs Cumulus Linux on four bare-metal switches, configures the switches, deploys ESXi, vSAN and vCenter, and forms a basic cluster. The complete demo is available on our Cumulus Workbench for you to try out. As is our style, the automation is built in a modular way and all the code is released on GitHub, so you can use it as a base for your own automation project.
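To give a flavor of that kind of modular automation, here is a minimal, hypothetical sketch (not the actual code from our GitHub repository): a loop that renders a per-switch interface fragment for four switches, the sort of templating step a zero-touch provisioning or configuration-management run would perform. The switch names and addresses are examples only.

```shell
#!/bin/sh
# Hypothetical sketch: render a minimal ifupdown fragment per switch
# instead of hand-editing four boxes. Names and addressing are
# illustrative, not the real demo topology.
i=1
for sw in leaf1 leaf2 spine1 spine2; do
  cat > "${sw}.intf" <<EOF
# ${sw}: generated fragment for /etc/network/interfaces
auto lo
iface lo inet loopback
    address 10.0.0.${i}/32

auto swp1
iface swp1
    mtu 9000
EOF
  i=$((i + 1))
done
```

On a real Cumulus Linux switch, a fragment like this would be merged into `/etc/network/interfaces` and applied with `ifreload -a`; the takeaway is that switch configuration becomes plain files you can template, version and test like any other Linux config.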
Also available are our normal training and professional services offerings, should you need a little helping hand.
So, that’s it: disaggregation of network hardware and software in an SDDC world. It just makes sense… right?
The post Open Networking for VMware vSphere: The Last Piece of the SDDC Puzzle? appeared first on Cumulus Networks Blog.