RAN-in-the-Cloud: Delivering Cloud Economics to 5G RAN


5G deployments have been accelerating around the globe. Many telecom operators have already rolled out 5G services and are expanding rapidly. In addition to the telecom operators, there is significant interest among enterprises in using 5G to set up private networks leveraging higher bandwidth, lower latency, network slicing, mmWave, and CBRS spectrum. 

Join NVIDIA at MWC Barcelona 2023 to see how the world’s leading companies are optimizing networks and innovating faster using AI, 5G, and edge computing.

The 5G ramp comes at an interesting time. Over the last two decades, cloud computing has matured, becoming the playground of choice for developers to build their applications. Cloud offers many advantages, including mature software tools, automation and orchestration, business agility, and lower total cost of ownership (TCO). 

Furthermore, applications in every segment (industrial robotics, cloud gaming, smart cities, security, retail, autonomous driving, smart farming) are increasingly using artificial intelligence (AI) to enable transformative experiences. This confluence of 5G, cloud computing, and AI will drive many innovations over the next decade. 

The NVIDIA Aerial SDK is a key technology foundation for building virtualized radio access networks (vRAN). It is a software-defined, full 5G Layer 1 (L1) offload implemented as inline acceleration on an NVIDIA GPU, and it implements all 3GPP- and O-RAN-compliant interfaces. The L1 software, consisting of complex signal processing algorithms, is implemented in CUDA C/C++, making it easy to optimize the L1 algorithms, implement new features, and future-proof RAN applications for 5G evolution and 6G. The NVIDIA Aerial SDK is built as modular microservices with an end-to-end cloud-native architecture, managed by Kubernetes through standard O-RAN SMO-compliant interfaces.
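To give a flavor of the kind of per-subcarrier signal processing an L1 pipeline performs, the toy sketch below shows zero-forcing channel equalization in plain Python. This is purely illustrative: the Aerial SDK implements such kernels in CUDA C/C++ on the GPU, and the function here is not part of any NVIDIA API.

```python
# Toy sketch of one L1 signal-processing step: per-subcarrier
# zero-forcing equalization. Illustrative only -- the Aerial SDK
# implements such algorithms in CUDA C/C++; this is not its API.

def zf_equalize(received, channel):
    """Recover transmitted symbols: y_k = h_k * x_k  =>  x_k = y_k / h_k."""
    return [y / h for y, h in zip(received, channel)]

if __name__ == "__main__":
    tx = [1 + 0j, -1 + 0j, 0 + 1j, 0 - 1j]                 # QPSK-like symbols
    h = [0.9 + 0.1j, 1.1 - 0.2j, 0.8 + 0j, 1.0 + 0.3j]     # per-subcarrier channel
    rx = [hk * xk for hk, xk in zip(h, tx)]                 # noiseless channel model
    est = zf_equalize(rx, h)
    print(all(abs(e - x) < 1e-9 for e, x in zip(est, tx)))  # True
```

In a real L1, thousands of such operations run per slot across subcarriers and antenna layers, which is why a massively parallel GPU implementation pays off.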

From CloudRAN to RAN-in-the-Cloud 

There has been much discussion recently about CloudRAN. As an industry leader in accelerated computing platforms and cloud computing, NVIDIA has been at the forefront of CloudRAN innovations. Many industry leaders are using the term CloudRAN to mean a cloud-native implementation of a radio access network (RAN). 

While cloud-native technologies are table stakes, the important question is whether using them alone makes a CloudRAN. We contend that it does not. We believe that a true cloud RAN has all compute elements (vDU, vCU, and dUPF) deployed in the cloud. Hence the term RAN-in-the-Cloud: a 5G radio access network fully hosted as a service on multitenant cloud infrastructure.

Why is this distinction important, and what is the motivation for RAN-in-the-Cloud? First, RAN constitutes the largest CapEx and OpEx spending for telecom operators. It is also the most underutilized resource, with most radio base stations typically operating below 50% utilization. Moving RAN compute into the cloud brings all the benefits of cloud computing: pooling and higher utilization on shared cloud infrastructure, resulting in the largest CapEx and OpEx reduction for telecom operators.

A COTS platform with GPU accelerators can accelerate not just 5G RAN; it can also accelerate edge AI applications. Telco operators and enterprises are increasingly using NVIDIA GPU servers to accelerate edge AI applications. This provides an easy path to use the same GPU resources for accelerating the 5G RAN connectivity in addition to AI applications. This in turn reduces the TCO and provides the best path for setting up enterprise 5G networks. 

Cloud software, tools, and technologies have matured over the years and are delivering the benefits of at-scale automation, reduced energy consumption, elastic computing, and autoscaling on demand, in addition to reliability, observability, and service assurance.

Figure 1. Unified accelerated data center: traditional siloed data centers designed for enterprise and telco applications converge into a unified, accelerated, software-defined data center that can also host RAN-in-the-Cloud, with NVIDIA GPU-based accelerated computing platforms at the heart of this convergence

It is important to note that some vendors are designing application-specific integrated circuit (ASIC)-based fixed function accelerator cards for RAN L1 offloads. A RAN built on these ASIC-based accelerators is akin to a fixed function appliance. It can only do RAN L1 processing and is a wasted resource when it is not being used.

NVIDIA Aerial SDK with general purpose GPU-accelerated servers delivers a true multiservice, multitenant platform that can be used for 5G RAN, enterprise AI, and other edge applications deployed in the cloud with all the benefits highlighted above.

Cloud native as the foundation of RAN-in-the-Cloud 

As the industry accelerates 5G deployments, realizing the full business value of 5G requires scalable and flexible solutions. Disaggregating RAN software from hardware and making the software available and deployable in the cloud has the potential to fuel faster innovation and new value-added services. 

Cloud-native vDU/vCU RAN software suites are designed to be fully open and automated for deployment and consolidated operation, supporting 3GPP and O-RAN interfaces on private, public, or hybrid cloud infrastructure. They leverage the benefits of cloud-native architecture, including horizontal and vertical scaling, autohealing, and redundancy, and are optimally designed for mobile network evolution, including next-generation radio technologies such as 6G.

The NVIDIA Aerial SDK cloud-native architecture enables RAN functions to be realized as microservices in containers, orchestrated and managed by Kubernetes. The modular software enables:

  • improved granularity and increased speed of software upgrades, releases, and patching 
  • independent life cycle management following DevOps principles, with continuous integration and continuous delivery (CI/CD) 
  • independent scaling of different RAN microservices elements 
  • application-level reliability, observability, and service assurance 
  • simplified operations and maintenance with network automation
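The independent scaling point above can be made concrete with the scaling rule Kubernetes' Horizontal Pod Autoscaler documents: desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric). The sketch below applies it to a hypothetical vDU microservice pool; the load figures are invented for illustration.

```python
import math

# Kubernetes Horizontal Pod Autoscaler scaling rule, which could drive
# independent scaling of individual RAN microservices:
#   desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric)

def desired_replicas(current_replicas, current_metric, target_metric):
    return math.ceil(current_replicas * current_metric / target_metric)

if __name__ == "__main__":
    # Hypothetical vDU pool: 3 replicas at 90% average load vs a 60% target
    print(desired_replicas(3, 0.90, 0.60))  # 5 -> scale out
    print(desired_replicas(5, 0.30, 0.60))  # 3 -> scale in
```

Because each RAN microservice exposes its own metrics, each can follow this rule independently rather than scaling the whole base station as a monolith.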

For a true cloud-native RAN experience, the cloud, edge platform, and networking all need to evolve. In our view, a number of requirements are critically important for the cloud-native containerized RAN software stack to be commercially deployable, including:

  • time synchronization  
  • CPU affinity and isolation 
  • topology management and feature discovery 
  • multiple networking interfaces  
  • high performance data plane and acceleration hardware 
  • low latency, QoS guarantees, and high throughput 
  • remote distributed deployments 
  • zero-touch provisioning  
  • Kubernetes operator framework and production-ready operators for accelerator devices
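As a minimal stand-alone illustration of the CPU affinity and isolation requirement above, the Linux-only sketch below pins the current process to a fixed CPU set, the way a real-time RAN data-plane thread would be pinned to isolated cores (in Kubernetes this is typically achieved with the static CPU Manager policy; the demo here is a hypothetical simplification).

```python
import os

# Linux-only sketch: pin the calling process to a fixed set of CPUs,
# as a latency-sensitive RAN data-plane thread would be pinned to
# isolated cores. Not a Kubernetes API -- just the underlying syscall.

def pin_to_cpus(cpus):
    os.sched_setaffinity(0, cpus)     # 0 = current process
    return os.sched_getaffinity(0)    # read back the effective affinity

if __name__ == "__main__":
    print(pin_to_cpus({0}))           # pin to CPU 0 and show the result
```

On a RAN node, the same pinning is combined with kernel isolation (e.g. isolated cores and IRQ steering) so the L1 threads never share a core with housekeeping work.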

The NVIDIA GPU Operator uses the operator framework within Kubernetes to automate the management of all NVIDIA software components needed to provision a GPU. These components include the device drivers (to enable CUDA), Kubernetes device plugins for GPUs, the NVIDIA container runtime, automatic node labeling, data center GPU manager (DCGM) based monitoring, and more. 

The GPU Operator enables administrators of Kubernetes clusters to manage GPU nodes just like CPU nodes in the cluster. Instead of provisioning a special OS image for GPU nodes, administrators can rely on a standard OS image for both CPU and GPU nodes and then rely on the GPU Operator to provision the required software components for GPUs. 
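The automatic node labeling mentioned above is what lets workloads find GPU nodes. The sketch below mimics that selection with plain dicts standing in for Kubernetes Node metadata, filtering on the `nvidia.com/gpu.present` label the GPU Operator's feature discovery applies; in a real cluster you would query the API server instead.

```python
# Sketch: select GPU nodes by the label the GPU Operator's feature
# discovery applies ("nvidia.com/gpu.present"). Node objects are plain
# dicts standing in for Kubernetes Node metadata.

def gpu_nodes(nodes):
    return [n["name"] for n in nodes
            if n["labels"].get("nvidia.com/gpu.present") == "true"]

if __name__ == "__main__":
    nodes = [
        {"name": "cpu-worker-1", "labels": {}},
        {"name": "gpu-worker-1", "labels": {"nvidia.com/gpu.present": "true"}},
    ]
    print(gpu_nodes(nodes))  # ['gpu-worker-1']
```

A vDU pod then targets such nodes with a nodeSelector on the same label, so RAN and AI workloads land only where accelerators exist.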

The NVIDIA Network Operator leverages Kubernetes CRDs and the Operator SDK to manage the networking-related components that enable fast networking with RDMA and NVIDIA GPUDirect for workloads in a Kubernetes cluster. It works with the GPU Operator to enable GPUDirect RDMA on compatible systems.

The NVIDIA Aerial SDK is built from the ground up with microservices and a cloud-native architecture, providing a solid foundation for building and deploying 5G RAN-in-the-Cloud.

Built, deployed, and managed in the cloud 

The O-RAN Alliance initiative to disaggregate traditional radio base stations into RRU, vDU, and vCU instances with well-defined interfaces between them is resulting in a larger ecosystem with more vendor choice. Additionally, cloud-native containerized software enables a composable and automated RAN, managed by Kubernetes and SMO. What would it take to cloudify and host the complete RAN as a service in the cloud?

Figure 2. RAN-in-the-Cloud vision: all processing elements of a RAN hosted in the cloud. The software-defined fronthaul terminates fiber connectivity from the remote radio unit (RRU) and brings all data into a vDU running as a container pod in a data center; other elements such as the vCU, dUPF, and 5G core are all hosted in the cloud as containerized services

The economics of deploying 5G have been challenging. 5G is driving significantly higher RAN CapEx growth compared to previous generations of wireless. The number of cell sites is expected to nearly double over the next five years. Consequently, RAN CapEx as a share of overall TCO is increasing from 45-50% up to 65%. For more details, see Wireless Backhaul Evolution and 5G-era Mobile Network Cost Evolution.

In addition, the RAN is traditionally provisioned for peak capacity, which leads to significant underutilization of precious compute resources. Bursty, time-dependent traffic means many traditional RAN sites run below 25% capacity utilization on average. If the RAN could be hosted in the cloud, the benefits of pooling could reduce OpEx through energy savings and improved utilization. Moreover, the unused resources could be reprovisioned in a true cloud-like fashion for other applications and workloads.
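A back-of-the-envelope sketch makes the pooling argument concrete. The 25% average utilization figure comes from the text above; the 30% headroom margin and the one-server-per-site assumption are illustrative simplifications, not measured values.

```python
import math

# Back-of-the-envelope pooling sketch: sites provisioned individually
# for peak load vs a shared pool sized for aggregate average load plus
# headroom. The 30% headroom is an illustrative assumption.

def servers_per_site_peak(num_sites):
    return num_sites  # one peak-provisioned server per site

def servers_pooled(num_sites, avg_utilization, headroom=0.30):
    return math.ceil(num_sites * avg_utilization * (1 + headroom))

if __name__ == "__main__":
    n = 100
    print(servers_per_site_peak(n))   # 100 servers, mostly idle
    print(servers_pooled(n, 0.25))    # 33 servers -> ~3x fewer
```

Under these assumptions, pooling 100 sites needs roughly a third of the servers, and the freed GPU capacity is exactly what could be reprovisioned for other workloads.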

In the U.S. alone, moving 50% of the total 420,000 cell sites to the GPU-accelerated cloud could create a significant new revenue opportunity for telco operators. When RAN utilization is low and GPUs are idle, they could be used for enterprise AI, video services, and other edge applications in a multitenant cloud environment. This could result in a new multibillion-dollar revenue opportunity globally.

Figure 2 shows how data centers built with accelerated computing infrastructure using NVIDIA GPUs can accelerate many applications, delivering cloud economics and optimal TCO. 

NVIDIA AI Enterprise with NVIDIA Base Command Platform and NVIDIA Fleet Command software enables enterprises to run AI applications in the NVIDIA GPU cloud, leveraging prebuilt and hardened software for various vertical segments. 5G connectivity running as a containerized solution alongside other AI applications on the same infrastructure will be extremely powerful for enterprises, transforming how the world thinks about wireless connectivity: 5G becomes a fully cloud-based service that can be deployed on demand. This is the essence of RAN-in-the-Cloud.

Build your 5G RAN-in-the-Cloud with NVIDIA

There are five key ingredients to centralize the RAN and deploy it fully in the cloud, as shown in Figure 3.

Figure 3. Five key technologies that enable RAN-in-the-Cloud, and the NVIDIA solutions addressing them: 1) software-defined fronthaul, 2) general-purpose GPU-accelerated servers, 3) software-defined stack, 4) O-RAN-compliant SMO, and 5) an end-to-end orchestration framework

The new NVIDIA Spectrum SN3750-SX Open Ethernet Switch is a critical component of the RAN-in-the-Cloud solution. Built on the NVIDIA Spectrum-2 Ethernet ASIC, it is the first software-defined xHaul switch, capable of delivering the fronthaul, midhaul, and backhaul networking needed for telco data centers.

A key capability of this switch is that it can be dynamically programmed to route traffic to any vDU deployed on any server in the data center, supporting autoscaling and on-demand RAN deployments. It is the first switch to combine all the functionality needed to run telco and AI workloads on the same infrastructure. The SN3750-SX supports advanced timing protocols such as telco-grade precision time protocol (PTP), synchronous Ethernet (SyncE), and PPS (pulse per second), as well as dynamic RU/DU mapping.
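The PTP support mentioned above boils down to a simple exchange of timestamps. The sketch below shows the standard IEEE 1588 offset and delay computation from one sync/delay-request round trip; the nanosecond values are invented for illustration, and the delay formula assumes a symmetric path.

```python
# IEEE 1588 (PTP) offset computation that fronthaul timing relies on.
# t1/t4 are master timestamps, t2/t3 are slave timestamps:
#   offset = ((t2 - t1) - (t4 - t3)) / 2
#   delay  = ((t2 - t1) + (t4 - t3)) / 2   (assumes a symmetric path)

def ptp_offset_and_delay(t1, t2, t3, t4):
    offset = ((t2 - t1) - (t4 - t3)) / 2
    delay = ((t2 - t1) + (t4 - t3)) / 2
    return offset, delay

if __name__ == "__main__":
    # Illustrative: slave clock 50 ns ahead, 200 ns one-way path delay
    t1, t2 = 0, 250        # sync:      t2 = t1 + delay + offset
    t3, t4 = 1000, 1150    # delay_req: t4 = t3 + delay - offset
    print(ptp_offset_and_delay(t1, t2, t3, t4))  # (50.0, 200.0)
```

Hardware timestamping in the switch is what keeps t1 through t4 accurate enough for the sub-microsecond synchronization fronthaul requires.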

To enable AI training, the switch supports low-latency 200G connectivity for the highest throughput. The Spectrum ASIC brings innovative features such as RoCE (RDMA over Converged Ethernet) and adaptive routing, all at the highest network scale. Many applications (such as the metaverse and AR/VR) need PTP-enabled data centers, and some web-scale companies already support PTP in their data centers. This will pave the way for RAN-in-the-Cloud use cases.

The NVIDIA A100X converged accelerator, combining the NVIDIA A100 Tensor Core GPU and the NVIDIA BlueField DPU, supports full inline 5G RAN offload. It delivers market-leading performance, measured by cell density per watt and MHz-Layers per watt, across a range of configurations from 4T4R to massive MIMO 32T32R and 64T64R.

NVIDIA is working with various ecosystem partners to ensure other O-RAN software components such as SMO (service management and orchestration), RIC (RAN intelligent controllers), xApps, and rApps are optimized for NVIDIA Aerial SDK and are ready for RAN-in-the-Cloud deployments. These components are still in early development but will be key differentiators, as they use AI for RAN automation and programmability. While RAN-in-the-Cloud will take some time to mature, we believe that NVIDIA will be at the forefront of this innovation with NVIDIA GPU-accelerated platforms at the foundation.  

Summary 

RAN-in-the-Cloud is the future. It is a natural evolution and the next step for the wireless market. A vRAN built using cloud-native technologies is a necessary first step. Realizing the cloud economics for 5G RAN and driving the co-innovation of 5G with edge AI applications requires embracing RAN-in-the-Cloud. The NVIDIA Aerial SDK delivers a scalable and cloud-native software architecture as a foundational technology for RAN-in-the-Cloud. 

Finally, it is important to note that the RAN transformation has just begun. The use of AI to optimize complex signal processing algorithms will unleash a whole new set of innovations in coming years. GPU-accelerated platforms are the best approach to future-proof your investments. Reach out if you would like to collaborate with us to build innovative RAN-in-the-Cloud solutions. For more information, see NVIDIA AI-on-5G Platform.

Register for GTC 2023 for free and join us March 20–23 for AI-powered wireless sessions and AI-powered edge sessions. Browse all telecommunications sessions to discover much more.

Source: NVIDIA