5 reasons why 2025 will be the year of OpenTelemetry

Some open-source projects are spectacularly successful and become standard components of the IT infrastructure. Take Linux or Kubernetes, for example. Others fall by the wayside – remember the Ubuntu Phone?

OpenTelemetry, a project of the Cloud Native Computing Foundation, is building momentum and is on track to become another open-source success story. OpenTelemetry, or OTel, addresses a key pain point for network managers who must prevent network outages and maintain high levels of application performance across increasingly complex and opaque multi-cloud environments.

“Historically, the observability market has been dominated by incumbents with proprietary data formats. This has created a lock-in scenario, forcing organizations to integrate and administer a complex universe of disjointed monitoring solutions,” says Casber Wang, a partner at venture capital firm Sapphire Ventures. “Over time, the community recognized the limitations of proprietary data formats and began collaborating on open standards, with the OpenTelemetry project leading the charge.”

OTel delivers a vendor-neutral way to collect three basic types of telemetry: logs, metrics and traces. Logs are timestamped records of events; analyzing them can uncover errors or unpredictable behaviors. Metrics provide numerical measurements of resource utilization and application performance so network managers can take preemptive action to avoid a problem. And traces provide visibility into application-layer performance in distributed cloud environments.
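To make those three signals concrete, here is a minimal sketch using the OpenTelemetry Python SDK to emit a span and a metric. The service name, route and counter name are illustrative placeholders, not part of any official convention:

```python
# Minimal sketch: emitting traces and metrics with the OpenTelemetry Python SDK.
# Requires: pip install opentelemetry-sdk
# Service, route and counter names below are illustrative placeholders.
from opentelemetry import trace, metrics
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import (
    ConsoleMetricExporter,
    PeriodicExportingMetricReader,
)

# Wire up providers that print telemetry to stdout for demonstration.
trace.set_tracer_provider(TracerProvider())
trace.get_tracer_provider().add_span_processor(
    BatchSpanProcessor(ConsoleSpanExporter())
)
metrics.set_meter_provider(
    MeterProvider(
        metric_readers=[PeriodicExportingMetricReader(ConsoleMetricExporter())]
    )
)

tracer = trace.get_tracer("checkout-service")  # source of trace spans
meter = metrics.get_meter("checkout-service")  # source of metrics
request_counter = meter.create_counter("http.requests", unit="1")

with tracer.start_as_current_span("handle_checkout") as span:  # one trace span
    span.set_attribute("http.route", "/checkout")
    request_counter.add(1, {"http.route": "/checkout"})  # one metric data point
```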

OTel aggregates that data into a single, vendor-neutral source of truth. Because OTel doesn’t store data or provide a way to analyze, query or present it, collected telemetry is exported to a backend system that handles observability, network monitoring or application performance management.
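In practice, exporting usually means swapping a console exporter for an OTLP exporter that points at an OpenTelemetry Collector or a vendor backend. A hedged sketch, assuming the standard opentelemetry-exporter-otlp package and a placeholder endpoint:

```python
# Sketch: export spans over OTLP/gRPC to a Collector or vendor backend.
# Requires: pip install opentelemetry-exporter-otlp
# The endpoint is a placeholder; point it at your own Collector or backend.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

provider = TracerProvider()
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4317"))
)
trace.set_tracer_provider(provider)
```

The same application code from the previous sketch then runs unchanged; only the export destination differs, which is the point of the vendor-neutral design.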

Here are five reasons why OTel is poised to have a breakout year in 2025:

1. Broad adoption by observability platform vendors

One key proof point for OTel is that virtually all of the major observability vendors offer integrated support for the open-source standard. According to Gartner’s August 2024 Magic Quadrant for observability platforms, leading vendors such as Chronosphere, New Relic, Datadog, Dynatrace, ServiceNow, Splunk, and Sumo Logic support OTel. In fact, Gartner takes points away from Microsoft because, according to the report, Microsoft’s Azure Monitor doesn’t yet support automated ingestion of OTel data directly via a collector interface; it requires an additional exporter tool.

2. Certification adds validation

Certification programs are important because they provide IT staffers with a formalized process to learn new skills (and make more money). And, as more network managers become proficient working with OTel, it becomes a standard part of their toolkit. The Cloud Native Computing Foundation (CNCF) and Linux Foundation have announced an OpenTelemetry certification aimed at validating the skills needed to use OTel to gain visibility across distributed systems.

The OpenTelemetry Certified Associate (OTCA) is aimed at application engineers, DevOps engineers, site reliability engineers, platform engineers and IT professionals looking to strengthen their ability to leverage telemetry data across distributed, cloud-native and microservices-based applications.

“Earning your OTCA boosts your career by equipping you with sought-after skills for modern IT operations,” said Clyde Seepersad, senior vice president and general manager, Linux Foundation Education. “These skills position you as a proactive, problem-solving professional in an era of complex, distributed systems.” The exam costs $250 and applicants will be able to schedule exams beginning in January 2025.

3. OTel extends into CI/CD pipelines

OTel was initially targeted at cloud-native applications, but with the creation of a special interest group within OpenTelemetry focused on the continuous integration and continuous delivery (CI/CD) pipeline, OTel is becoming a more powerful, end-to-end tool.

“CI/CD observability is essential for ensuring that software is released to production efficiently and reliably,” according to project lead Dotan Horovits. “By integrating observability into CI/CD workflows, teams can monitor the health and performance of their pipelines in real-time, gaining insights into bottlenecks and areas that require improvement.” He adds that open standards are critical because they “create a common uniform language which is tool- and vendor-agnostic, enabling cohesive observability across different tools and allowing teams to maintain a clear and comprehensive view of their CI/CD pipeline performance.”
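One way to picture pipeline observability is to wrap each stage of a run in its own span, so a backend can show where builds spend their time. A sketch, assuming a tracer configured as in the earlier examples; the stage names, commit ID and stage logic are hypothetical:

```python
# Hypothetical sketch: tracing CI/CD pipeline stages as nested spans.
# Assumes a TracerProvider configured as in the earlier examples.
import time

from opentelemetry import trace

tracer = trace.get_tracer("ci-pipeline")

def run_stage(name: str) -> None:
    """Stand-in for the real stage logic (checkout, build, test, deploy)."""
    time.sleep(0.1)

with tracer.start_as_current_span("pipeline-run") as run:
    run.set_attribute("vcs.commit.sha", "abc123")  # placeholder commit ID
    for stage in ("checkout", "build", "test", "deploy"):
        # Each stage becomes a child span; its duration is captured automatically.
        with tracer.start_as_current_span(f"stage:{stage}"):
            run_stage(stage)
```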

4. OTel integrates with business data

Correlating raw telemetry data with network performance is important, but analyzing OTel data through a business lens is even more critical for enterprises.

“Product experience monitoring and system monitoring tools have traditionally operated in silos,” Wang says. “More recently, we’ve seen a push to converge these domains to better understand the correlation between end-user behaviors and system-level signals.” OTel enables organizations to do just that. For example, if a particular app or feature sees a drop-off in customer usage, is that because customers don’t like the feature and it needs to be revamped, or because slow or erratic network response times are turning customers off?
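A hedged sketch of what that convergence can look like in code: tag the span that measures a feature’s latency with business context, so a backend can slice system-level signals by feature or customer tier. The attribute names are illustrative, not official OTel semantic conventions:

```python
# Sketch: attaching business context to spans so end-user behavior and
# system signals can be correlated in the backend.
# Attribute names are illustrative, not official OTel semantic conventions.
from opentelemetry import trace

tracer = trace.get_tracer("storefront")

def render_feature(feature_name: str, customer_tier: str) -> None:
    with tracer.start_as_current_span("render_feature") as span:
        span.set_attribute("app.feature", feature_name)        # business context
        span.set_attribute("app.customer.tier", customer_tier)  # business context
        # ... render the feature; the span's duration captures the latency
        # that can then be broken down by feature or tier in the backend ...

render_feature("wishlist", "premium")
```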

A recent survey of 1,700 tech professionals commissioned by New Relic showed that 35% of respondents had integrated at least five business-related data types with their telemetry data. The most popular data types included customer data, production data, sales data, inventory data, marketing data, logistics data and human resources data.

5. The AI effect

The explosion of interest in AI, genAI and large language models (LLMs) is driving a surge in the volume of data generated, processed and transmitted across enterprise networks. That means a commensurate increase in the volume of telemetry data that must be collected to make sure AI systems are operating efficiently.

The emergence of OTel as a vendor-neutral, industry-wide standard is crucial to helping enterprises detect bottlenecks or other application performance problems associated with data-intensive, real-time AI apps that involve end users querying the database and expecting fast results.
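As a rough illustration, an LLM-backed query path can be wrapped in a span so slow responses surface in the backend. The model client here is a stub and the attribute keys are placeholders; OpenTelemetry’s GenAI semantic conventions are still maturing, so treat the keys as illustrative:

```python
# Sketch: instrumenting an LLM-backed query path so slow responses show up
# as spans. call_model() is a stub; attribute keys are placeholders modeled
# on OTel's evolving GenAI semantic conventions.
from opentelemetry import trace

tracer = trace.get_tracer("ai-frontend")

def call_model(prompt: str) -> str:
    return "stub response"  # stand-in for a real model client call

def answer_user_query(prompt: str) -> str:
    with tracer.start_as_current_span("llm.query") as span:
        span.set_attribute("gen_ai.request.model", "example-model")  # placeholder
        response = call_model(prompt)
        span.set_attribute("gen_ai.usage.output_tokens", len(response.split()))
        return response

print(answer_user_query("What is OpenTelemetry?"))
```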

The flip side of that equation is harnessing the power of AI analytics to help network managers optimize and automate their everyday operations – AIOps. OTel data is a foundational element of AIOps because it provides a vendor-neutral stream of core telemetry data.

Source: Network World