Making Networking Great Again: Leveraging ifupdown2 in the Data Center


I love election season, mainly for all the great slogans. Every candidate is trying to find a way to catch the attention of the electorate in order to get their ideas across. If people don’t know the benefits of a new solution, they’ll be hard pressed to understand how much better life can be.

The same can be said for Linux networking when ifupdown2 came along. This article describes the improvements made to ifupdown2, but it doesn’t describe the excruciating pain of having to run the classic ifupdown. I feel obliged to join this campaign cycle to wholeheartedly endorse ifupdown2 and tell you about how it’s making networking great again.

I was recently simulating a data center environment with Vagrant to test scalable architectures. I was trying to leverage ECMP via the new Routing on the Host feature on an Ubuntu 14.04 LTS server over a Cumulus Linux spine/leaf Clos network. One requirement for this feature to work is peering BGP between the Ubuntu server and the first-hop leaf. Sounds simple, right? I had already peered BGP throughout my entire Cumulus Linux switch network, and since Ubuntu is also a Debian-based distribution, it should have been a trivial task.

Read more: Routing on the Host

Except it wasn’t so simple. Cumulus Linux and ifupdown2 had spoiled me; the many enhancements in ifupdown2 made networking easy and intuitive. Unfortunately, the Linux server did not come packaged with ifupdown2, so I struggled with classic ifupdown functionality.

I started by verifying basic connectivity between the leaf and server, ultimately aiming to use BGP unnumbered interfaces to peer between the two nodes. On the Cumulus Linux switch I changed /etc/network/interfaces and applied the changes immediately using ifreload -a, a hitless method of applying configurations. On the server I had to flap the interface every time I made any change. Using ifdown/ifup became my own prison, trapping me at every configuration change. I started to wonder how server admins balanced resolving outages with updating interface configurations. I shuddered at the thought of needing a maintenance window to make minor interface changes. Why does ifupdown require an interface flap to apply a new configuration? Mainly because service networking reload (the mechanism that ifupdown2 backs with the ifreload -a command) does not work with classic ifupdown on most Linux distributions.
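To make the contrast concrete, here is a minimal sketch of the two workflows. The interface names and address are hypothetical, but the commands are the standard tools from each package:

```
# /etc/network/interfaces -- hypothetical uplink toward the leaf
auto swp1
iface swp1
    address 10.1.1.2/30

# With ifupdown2 (the Cumulus Linux switch), apply the edit hitlessly:
#   sudo ifreload -a
#
# With classic ifupdown (the Ubuntu server), every edit meant a flap:
#   sudo ifdown eth1 && sudo ifup eth1
```

ifreload -a computes the difference between the running state and the file and applies only what changed, which is why no flap (and no outage) is needed.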

After I verified that I had basic connectivity, I tried setting up IPv6 connectivity between the leaf and server. I assumed that the fundamentals of IPv4 address assignment would translate over to IPv6, but again I was faced with more challenges during the configuration process. With ifupdown2 on the switch, I could configure both IPv4 and IPv6 settings within the same interface stanza, an intuitive configuration. Using ifupdown on the server, the IPv6 settings needed to be configured in a subinterface and managed independently of the main interface. And despite working daily on Linux networking, it took me about 20 minutes of searching on Google to find that solution. I’d like to chalk that up to a failure of Linux to move past the limitations of ifupdown, instead of a judgment on my capabilities as an engineer.
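As a sketch of the difference (interface names and addresses here are made up for illustration), ifupdown2 lets both address families live in one stanza, while classic ifupdown splits them by family:

```
# ifupdown2: IPv4 and IPv6 together in a single stanza
auto swp1
iface swp1
    address 10.1.1.1/30
    address 2001:db8:1::1/64

# classic ifupdown: one stanza per address family
auto eth1
iface eth1 inet static
    address 10.1.1.2
    netmask 255.255.255.252

iface eth1 inet6 static
    address 2001:db8:1::2
    netmask 64
```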


I needed to vent my frustration and validate that I wasn’t just a bad Linux networking engineer, so I talked to my colleague and trusted confidant Sean Cavanaugh. Not only did he understand my plight, but he vindicated my frustration. He encountered all the challenges I faced and discovered even more when he tried to automate configuring multiple subinterfaces on a server using Ansible with classic ifupdown. His configuration had five times as many commands as the same configuration using ifupdown2. Not only was it longer, but many tasks required manual intervention. He had to check whether the VLAN package was installed, then the 802.1q package. Then after copying over the interface configurations, he couldn’t leverage a networking handler to apply the interface configurations; he had to flap every interface via a multi-line Ansible configuration.
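A rough sketch of what that looks like in Ansible (task names, interface names and the exact module choices are illustrative, not taken from his playbook): with classic ifupdown the playbook has to install the vlan package, load the 802.1q kernel module, and flap every subinterface; with ifupdown2 it can simply reload.

```yaml
# Classic ifupdown: prerequisite packages, modules, and a flap per interface
- name: Ensure VLAN support is installed
  apt:
    name: vlan
    state: present

- name: Load the 802.1q kernel module
  modprobe:
    name: 8021q
    state: present

- name: Flap eth1.100 to apply the new configuration
  shell: ifdown eth1.100 && ifup eth1.100

# ifupdown2: after copying the config, one hitless reload covers everything
- name: Apply interface configuration
  command: ifreload -a
```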



This conversation brought me to a moment of enlightenment. Leveraging ifupdown2 wasn’t just a new solution for Cumulus Linux. It was a solution that all Linux distributions could leverage to harness the advancements in data center architecture, design and maintenance.

Now that I had seen the light, I sat down to write the gospel of ifupdown2. Here are just a few of its benefits:

  • No flapping interfaces to apply a new config, so no network outages; configs are reloaded through service networking reload and ifreload -a
  • Simpler IPv6 and loopback configurations
  • Bridging and bonding packages included in the distro
  • Fewer Ansible/automation manual validation tasks
  • Unified ifup/ifdown behavior for both switches and servers

These features by themselves seem trivial, but together they make ifupdown2 a catalyst for adopting the real advances in data center networking: designs such as ECMP with BGP unnumbered, orchestration through Ansible and hitless configuration application. All of these were only practical thanks to ifupdown2. At a cultural level, it also explains why application and server owners hate the network so much; I would have, too, if I had to configure networking without ifupdown2!
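For context, the BGP unnumbered peering mentioned above looks roughly like this in the Quagga/FRR routing suite; the AS number and interface name are illustrative:

```
router bgp 65001
 neighbor swp1 interface remote-as external
 address-family ipv4 unicast
  redistribute connected
```

Because the session forms over the interface's IPv6 link-local address, no per-link IPv4 addressing plan is needed, which is exactly what makes the server-to-leaf peering so attractive.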

Thank you ifupdown2, for making networking great again!

The post Making Networking Great Again: Leveraging ifupdown2 in the Data Center appeared first on Cumulus Networks Blog.
