Cumulus Workbench – a year of progress

At VMworld 2013, before the Cumulus Workbench was born, Cumulus Networks needed a quick way to demonstrate Cumulus Linux.

One of our amazing engineers, Nat Morris, quickly whipped up a VM (almost out of nowhere), meant to run in VirtualBox on a laptop with two interfaces. Voila! Cumulus Workbench!

For a first effort built in very little time, this was awesome. However, as you would imagine, there were a few limitations: flexibility was an issue, new features required distributing an entirely new VM, and to get the latest version you had to ask around. That was fine for a quick demo, but we wanted more. We wanted it to be bigger and better.

We put some thought behind what exactly bigger and better meant to us and took that to the drawing board. From there, we built a framework and began to deep dive into the design and architecture. We wanted to build something useful for customers so that they would be able to see what they could do in their own environment. It was at that moment that the Cumulus Workbench was born, thanks to a lot of elbow grease and hard work from Ratnakar Kolli. The result: your own self-contained lab of physical hardware with a “jump host” and real switches. You can even choose your automation tools from a variety of vendors, such as Puppet Labs and Ansible (with more to come).

Cool, right?

The demo framework was built as a set of Debian packages, meant to run on Debian or Ubuntu in the workbench. However, the packages are not limited to the workbench; all of the configurations can easily be run in any customer environment. Since each package contains only the necessary files, it is much smaller and quicker to download. Cumulus Networks also hosts all of the files on GitHub so that all of the configurations are publicly accessible, and we always welcome any pull requests, comments, or suggestions on how to make things better.

Packages are easy to install and keep up to date using APT. Our GitHub repository contains one folder per package, and all of the necessary control files for building packages are included in each directory structure. Our build script automatically compiles the packages, and then we push the packages to our public cldemo repository. Some packages can be as simple as a control file and postinst script, while others may pull in software to install via git submodules.
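Once the cldemo repository is configured as an APT source on the jump host, grabbing and updating a demo is ordinary package management. As a sketch (the package name below is purely illustrative, not an actual package from the cldemo repository):

```shell
# Refresh package lists, including the public cldemo repository
sudo apt-get update

# Install a demo package (hypothetical name, for illustration only)
sudo apt-get install cldemo-example

# Later, pull the latest version of every installed demo
sudo apt-get update && sudo apt-get upgrade
```

Because each demo is a normal Debian package, the usual tooling applies: `dpkg -L` lists the files a demo installed, and `apt-get remove` cleans it back out.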

The demos have a variety of examples of different automation tools to implement different technologies. Puppet, Ansible, Chef, and CFEngine are all used to show that no matter which orchestration tool you prefer, your Cumulus Linux switches can easily be automated.
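To give a flavor of what one of these automated configurations looks like, here is a minimal Ansible play in the same spirit. This is a hedged sketch, not taken from the demos: the host group and file names are assumptions, though `/etc/network/interfaces` and `ifreload -a` are the standard interface configuration path and reload command on Cumulus Linux.

```yaml
# Hypothetical playbook: push an interfaces file to a group of
# switches and reload networking when the file changes.
- hosts: leaf-switches        # assumed inventory group
  become: true
  tasks:
    - name: Deploy interface configuration
      copy:
        src: files/interfaces  # assumed local file name
        dest: /etc/network/interfaces
      notify: reload networking

  handlers:
    - name: reload networking
      command: ifreload -a
```

The same intent expresses naturally in Puppet, Chef, or CFEngine, which is exactly the point of shipping the demos in several tools.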

Our demonstrations also come with an accompanying Knowledge Base article, so that you can understand what is happening and how to set it up.

Of course, this is just a starting point. We welcome the use of this configuration information as a starting point for deployments in the field. And, we welcome all feedback and pull requests from our community!

It’s simple to jump onto the Cumulus Workbench to try it out for yourself. All you have to do is fill out the Cumulus Workbench form and we will then email you a unique login, which will grant you access to our labs of physical hardware for up to 48 hours. Each workbench contains a “jump host” and either one, two, or four real switches. You can use the switches and “jump host” however you like: deploy a configuration with an orchestration tool, install your own scripts, or play ascii-invaders on your switches. It’s super easy and fun.

Happy hacking.

Editor’s note: Leslie Carr will be hosting a webinar about Unattended Deployment with Zero Touch Provisioning (ZTP) on October 15, 2014.


