Linkerd 2.18 advances cloud-native service mesh

The term “service mesh” has been widely used over the last several years to refer to technology that helps to manage communications across microservices and applications. In the cloud-native world, arguably the first implementation of a service mesh came from Buoyant with the open-source Linkerd technology in 2015. Over the last 10 years, Linkerd has grown as a core project within the Cloud Native Computing Foundation (CNCF). Today the project is out with Linkerd 2.18, continuing the technology’s decade-long evolution with improvements focused on operational simplicity and real-world production scenarios.

Service meshes have become critical infrastructure components for organizations running Kubernetes at scale, handling the complex network communication layer so developers don’t have to. Linkerd pioneered this approach a decade ago.

“Kubernetes itself doesn’t provide any built-in mechanism for sending traffic across clusters, for monitoring it, for managing it, for taking into account what you have to do if the clusters are in different clouds, or if you’re doing a hybrid approach,” William Morgan, CEO and co-founder of Buoyant, told Network World. “That’s a clear use case for Linkerd, which does all that, makes it all secure, and then decouples it from the application.”

The evolution of service mesh technology

What makes service meshes valuable is their ability to abstract away complex networking concerns from application code. 

“We were the project that coined the term, we were the first service mesh project, for better or worse,” Morgan said.

Linkerd helped pioneer what is known as the “sidecar” approach: running a secondary container alongside the main application container. The Linkerd service mesh is deployed as a sidecar in Kubernetes, and Morgan explained that the magic of this approach is that functionality can be attached to an application without its developers having to be involved. That functionality includes critical features like mutual TLS encryption, authentication, retries, timeouts, and request-level load balancing.
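
Morgan’s point about decoupling is easiest to see in a manifest. As a minimal, hedged sketch (the workload name and image are hypothetical, not from the article), opting a Deployment into the mesh is just an annotation on the pod template; Linkerd’s injector then adds the proxy container automatically:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                        # hypothetical workload name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
      annotations:
        linkerd.io/inject: enabled # asks Linkerd to inject its sidecar proxy
    spec:
      containers:
        - name: web
          image: example/web:1.0   # hypothetical image; application code is unchanged
```

The application container itself is untouched; mutual TLS, retries, timeouts, and load balancing all happen inside the injected proxy.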

The project’s focus has evolved significantly over the years. While early adoption centered on mutual TLS between pods, today’s enterprises are tackling much larger challenges.

“For a long time, the most common pattern was simply, ‘I want to get mutual TLS between all my pods, which gives me encryption, and it gives me authentication,’” Morgan said. “More recently, one of the biggest drivers has been multi-cluster communication… now our customers are deploying hundreds of clusters and they’re planning for thousands of clusters.”

What’s new in Linkerd 2.18

Morgan describes the theme of 2.18 as “battle-scarred spectacular,” reflecting refinements based on real-world production experience with customers. Key improvements include:

  • Enhanced multi-cluster support: Better integration with GitOps workflows. “When you have 200 or 2000 clusters, you’re driving that all declaratively. You’ve got a GitOps approach… the multi-cluster implementation had to be adapted to fit into that world,” Morgan explained.
  • Improved protocol configuration: Addressing edge cases for organizations pushing Kubernetes to its limits.
  • Gateway API decoupling: Improvements that reflect the maturation of the Gateway API standard and better shared resource management (see the route sketch after this list).
  • Preliminary Windows support: An experimental proxy build for Windows workloads, expanding beyond Linux environments.
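
As a rough illustration of what routing looks like once the Gateway API types are in play, here is a minimal, hedged sketch of an HTTPRoute attached to a Service, the pattern Linkerd uses for per-route policy (all names are hypothetical; the resource kinds come from the upstream Gateway API standard):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: web-routes                 # hypothetical route name
  namespace: default
spec:
  parentRefs:
    - kind: Service                # Linkerd attaches routes to a Service
      group: core
      name: web
      port: 8080
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /api
      backendRefs:
        - name: web-v2             # hypothetical backend Service
          port: 8080
```

Because these types are owned by the upstream Gateway API project rather than by any one mesh, decoupling them from the Linkerd install is what enables the shared resource management described above.
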
What sets Linkerd apart and why AI isn’t a focus (yet)

While Linkerd was the first cloud-native service mesh, in 2025 it is certainly not the only one. Linkerd is most often compared with Istio, another open-source CNCF service mesh project.

“The biggest difference from us has been a focus on what we’re calling operational simplicity, which is, how do we provide this very rich set of functionality to you in a way that doesn’t overwhelm you with the resulting complexity,” Morgan said.

Unlike its competitors, Linkerd doesn’t use the open-source Envoy technology as its sidecar proxy. Instead, it ships its own purpose-built proxy, written in the Rust programming language, which Morgan says makes Linkerd both very secure and very fast.

One thing that isn’t in Linkerd is AI, and for a specific reason. Morgan takes a cautious, pragmatic view: the project is not looking to introduce AI as a product feature, because Linkerd’s core principles are speed, predictability, and ease of understanding, qualities largely at odds with AI’s current characteristics. At the same time, he acknowledged that Linkerd is working with customers running large AI workloads on Kubernetes, helping them optimize deployment and management. His approach is product-first, focused on solving concrete problems rather than chasing AI hype.

“What we see is a lot of our customers who run LLM workloads kind of have these unique challenges,” Morgan said.

Among the AI challenges are response times. While a typical Kubernetes workload might respond in 250 to 400 milliseconds, AI workloads such as large language models (LLMs) can take far longer, on the order of 24 seconds. Additionally, AI workloads require specialized GPU resources that cannot be time-sliced the way CPU resources can.
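
That latency gap matters for mesh configuration: a request timeout tuned for a 400-millisecond web service would cut off nearly every LLM response. As a hedged sketch, assuming the Gateway API v1 `timeouts` field and a mesh release that implements it (the service names here are hypothetical):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: llm-route                  # hypothetical route name
  namespace: default
spec:
  parentRefs:
    - kind: Service
      group: core
      name: llm-inference          # hypothetical LLM backend Service
      port: 8000
  rules:
    - timeouts:
        request: 60s               # far beyond the 250-400ms typical of web workloads
      backendRefs:
        - name: llm-inference
          port: 8000
```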

“I don’t have anything that’s AI specific to talk about, but I will say we’re working hand in hand with customers,” he said. “There’s some really interesting opportunities for Linkerd to actually improve the way that you can deploy, manage, operate on a lot of workloads on Kubernetes, because we see the traffic that goes there, we know exactly what’s happening, and we can help optimize them.”

Source: Network World