New Feature in Cumulus Linux 2.2: sFlow
sFlow is an open protocol, newly supported in Cumulus Linux 2.2, that enables a collector to determine what is going on in a complex network.
It is used to collect statistics, such as packet counts, error counts, and CPU usage, from a large number of individual switches. What is especially interesting is that it can also collect sampled packets (usually only the first n bytes, containing the headers), along with some metadata about those packets.
Bringing sFlow to Cumulus Linux was particularly easy, because “hsflowd” was already available for implementing sFlow support on Linux servers. We were able to reuse that existing code, with minimal modification, to implement sFlow on our Linux-based switches.
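To give a feel for how little is involved, here is a sketch of an hsflowd configuration that polls counters and samples packets toward a collector. The addresses, sampling rate, and polling interval are illustrative values, not recommendations; consult the hsflowd documentation for the options your version supports.

```
sflow {
  DNSSD = off
  polling = 20        # export counters every 20 seconds
  sampling = 400      # sample 1 in 400 packets
  collector {
    ip = 192.0.2.100  # example collector address
    udpport = 6343    # standard sFlow port
  }
}
```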
sFlow allows a collector to get a statistical view of what is going on in a collection of switches, approaching per-flow granularity. This is extremely useful information to present to users for capacity planning and debugging purposes, but things really get interesting when the collector can make decisions based on the information.
For example, our friends at inMon implemented detection of elephant flows (high-bandwidth flows), followed by marking those flows on the switch at network ingress for special QoS handling. This nearly eliminated the latency impact of elephant flows on small, latency-sensitive flows.
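The core arithmetic behind this kind of detection is simple: because sFlow samples 1 in N packets, a collector can scale the sampled byte counts back up by the sampling rate to estimate each flow's true rate, then flag flows above a threshold. The function names and the 40 Mbps/1 Gbps figures below are illustrative, not inMon's actual implementation:

```python
def estimate_rate_bps(sampled_bytes: int, sampling_rate: int, interval_s: float) -> float:
    """Estimate a flow's true rate in bits/sec from sFlow packet samples.

    With 1-in-N sampling, each sampled byte represents roughly N bytes
    on the wire, so we scale up by the sampling rate before converting
    bytes over the interval into bits per second.
    """
    return sampled_bytes * sampling_rate * 8 / interval_s

def is_elephant(sampled_bytes: int, sampling_rate: int, interval_s: float,
                threshold_bps: float = 1e9) -> bool:
    """Flag a flow as an elephant if its estimated rate crosses the threshold."""
    return estimate_rate_bps(sampled_bytes, sampling_rate, interval_s) >= threshold_bps

# Example: 125,000 sampled bytes at 1-in-400 sampling over 10 seconds
# estimates to 40 Mbps of actual traffic.
rate = estimate_rate_bps(125_000, 400, 10)  # 40,000,000 bps
```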
They also demonstrated DDoS attack mitigation. When the collector detects a DDoS attack in the sampled packets, it can add ACL rules to the switches to block the attack.
Notably, they were able to do all of that without relying on us to do anything; Cumulus Linux’s open environment empowered them to innovate on their own! They wrote a small REST server in Python that exposed the functionality they needed and ran it on the switches.
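A minimal sketch of what such a switch-side REST server could look like is below, using only the Python standard library. The `/block` endpoint, the JSON shape, and the choice of a plain iptables rule are all assumptions for illustration; they are not inMon's actual API (and on Cumulus Linux, ACLs would typically be installed into hardware via its ACL tooling rather than run through iptables directly):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def build_drop_rule(src_ip: str) -> list[str]:
    # Standard iptables invocation to drop forwarded traffic from src_ip.
    # This is a sketch: a production version on a switch would install the
    # rule into the hardware ACL tables.
    return ["iptables", "-A", "FORWARD", "-s", src_ip, "-j", "DROP"]

class AclHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/block":  # hypothetical endpoint name
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length))
        cmd = build_drop_rule(body["src"])
        # In a real deployment the command would be executed here, e.g.:
        # subprocess.run(cmd, check=True)
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps({"applied": " ".join(cmd)}).encode())

if __name__ == "__main__":
    # Listen on an arbitrary example port; the collector POSTs here
    # when it decides a source should be blocked.
    HTTPServer(("", 8080), AclHandler).serve_forever()
```

The collector then only needs to issue an HTTP POST such as `{"src": "198.51.100.7"}` to `/block` when its analysis of the sampled packets identifies an attacker.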
I look forward to many more interesting and innovative ways to use the insight delivered by sFlow to make networks faster, better, and more reliable! See the Cumulus Networks documentation for more information about sFlow on Cumulus Linux.