The future is autonomous, and AI is already transforming the transportation industry. But what exactly is an autonomous vehicle and how does it work?
Autonomous vehicles are born in the data center. They require a combination of sensors, high-performance hardware, software, and high-definition mapping to operate without a human at the wheel. While the concept of this technology has existed for decades, production self-driving systems have just recently become possible due to breakthroughs in AI and compute.
Specifically, massive leaps in high-performance computing have opened new possibilities in developing, training, testing, validating, and operating autonomous vehicles. The Introduction to Autonomous Vehicles GTC session walks through these breakthroughs, how current self-driving technology works, and what’s on the horizon for intelligent transportation.
From the cloud
The deep neural networks that run in the vehicle are trained on massive amounts of driving data. They must learn how to identify and react to objects in the real world—an incredibly time-consuming and costly process.
A test fleet of 50 vehicles generates about 1.6 petabytes of data each day, which must be ingested, encoded, and stored before any further processing can be done.
Then, the data must be combed through to find scenarios useful for training, such as new situations or situations underrepresented in the current dataset. These useful frames typically amount to just 10% of the total collected data.
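The scale of this pipeline is easy to underestimate. A quick back-of-the-envelope calculation, using only the figures above (50 vehicles, roughly 1.6 petabytes per day, about 10% of frames useful; the per-vehicle rate is derived, not published):

```python
# Back-of-the-envelope sizing for a test-fleet data pipeline.
# Figures from the article: 50 vehicles, ~1.6 PB/day, ~10% of frames useful.
FLEET_SIZE = 50
DAILY_FLEET_PB = 1.6
USEFUL_FRACTION = 0.10

daily_per_vehicle_tb = DAILY_FLEET_PB * 1000 / FLEET_SIZE  # PB -> TB
useful_fleet_pb = DAILY_FLEET_PB * USEFUL_FRACTION

print(f"Per vehicle per day: {daily_per_vehicle_tb:.0f} TB")  # 32 TB
print(f"Useful data per day: {useful_fleet_pb:.2f} PB")       # 0.16 PB
```

Even the "useful" 10% still amounts to about 160 terabytes of new training data per day from a single 50-vehicle fleet.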
Next, you must label every object in the scene, including traffic lights and signs, vehicles, pedestrians, and animals, so the DNNs can learn to identify them, and then check those labels for accuracy.
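To make the labeling step concrete, here is a minimal sketch of what an annotated frame might look like, with a helper for tallying classes. The schema and field names are illustrative assumptions, not an actual NVIDIA format:

```python
# Hypothetical label record for one camera frame -- the schema is
# illustrative only, not a real labeling format.
frame_labels = {
    "frame_id": "cam_front_000123",
    "objects": [
        {"class": "traffic_light", "state": "red", "bbox": [412, 88, 430, 132]},
        {"class": "pedestrian", "bbox": [220, 310, 260, 420]},
        {"class": "vehicle", "bbox": [600, 290, 850, 450]},
    ],
}

def count_by_class(labels):
    """Tally labeled objects per class -- the kind of statistic used to
    spot underrepresented categories in the training set."""
    counts = {}
    for obj in labels["objects"]:
        counts[obj["class"]] = counts.get(obj["class"], 0) + 1
    return counts

print(count_by_class(frame_labels))
# {'traffic_light': 1, 'pedestrian': 1, 'vehicle': 1}
```

Class statistics like these feed directly back into curation: classes that rarely appear are exactly the scenarios worth mining from the raw data.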
NVIDIA DGX data center solutions have made this onerous process into a streamlined operation by providing a veritable data factory for training and testing. With high-performance compute, you can automate the curation and labeling process, as well as run many DNN tests in parallel.
When a new model or set of models is ready to be deployed, you can then validate the networks by replaying the model against thousands of hours of driving scenarios in the data center. Simulation also provides the capability to test these models in the countless edge cases an autonomous vehicle could encounter in the real world.
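The replay idea can be sketched in a few lines: run the candidate model over logged drives and score it against the recorded ground truth. Every name below is a hypothetical placeholder, not a DRIVE API:

```python
# Replay-validation sketch: score a candidate model against logged drives.
# All names are illustrative -- this shows the pattern, not a real API.
def replay_validate(model, logged_frames, min_accuracy=0.99):
    correct = 0
    for frame in logged_frames:
        prediction = model(frame["sensor_data"])
        if prediction == frame["ground_truth"]:
            correct += 1
    accuracy = correct / len(logged_frames)
    return accuracy >= min_accuracy, accuracy

# Toy stand-ins for a trained model and two logged frames:
model = lambda x: "vehicle" if x > 0.5 else "clear"
log = [
    {"sensor_data": 0.9, "ground_truth": "vehicle"},
    {"sensor_data": 0.1, "ground_truth": "clear"},
]
print(replay_validate(model, log))  # (True, 1.0)
```

In practice the "log" is thousands of hours of recorded driving, and the pass threshold is a matter of safety policy rather than a single accuracy number.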
NVIDIA DRIVE Sim is built on NVIDIA Omniverse to deliver a powerful, cloud-based simulation platform capable of generating a wide range of real-world scenarios for AV development and validation. It creates highly accurate digital twins of real-world environments using precision map data.
Simulation can exercise just the AV software, known as software-in-the-loop testing, or run that software on the same compute hardware it would use in the vehicle, known as hardware-in-the-loop testing.
You can truly tailor situations to your specific needs using the NVIDIA DRIVE Replicator tool, which can generate entirely new data. These scenarios include physically based sensor data, along with the corresponding ground truth, to complement real-world driving data and reduce the time and cost of development.
To the car
Validated deep neural networks run in the vehicle on centralized, high-performance AI compute.
Redundant and diverse sensors, including camera, radar, lidar, and ultrasonics, collect data from the surrounding environment as the car drives. The DNNs use this data to detect objects and infer information to make driving decisions.
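One simple way to picture what "redundant and diverse" buys is a voting scheme: accept a detection only when independent sensor pipelines agree. This is purely an illustrative sketch, not how any production fusion stack is implemented:

```python
# Redundancy sketch: keep a detection only when at least two of the
# diverse sensor pipelines report it. Purely illustrative.
def fuse(camera_dets, radar_dets, lidar_dets):
    votes = {}
    for dets in (camera_dets, radar_dets, lidar_dets):
        for obj in set(dets):
            votes[obj] = votes.get(obj, 0) + 1
    return {obj for obj, n in votes.items() if n >= 2}

print(fuse({"car", "pedestrian"}, {"car"}, {"car", "cyclist"}))  # {'car'}
```

Real fusion operates on positions, velocities, and uncertainties rather than labels, but the principle is the same: no single sensor failure should blind the system.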
Processing this data while running multiple DNNs concurrently requires an incredibly high-performance AI platform.
NVIDIA DRIVE Orin is a highly advanced, software-defined compute platform for autonomous vehicles. It achieves 254 trillion operations per second (TOPS), enough to handle these functions while meeting systematic safety standards for operation on public roads.
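As a rough illustration of what 254 TOPS buys, consider the per-inference budget when many networks run concurrently. Only the 254 TOPS figure comes from the text; the DNN count and frame rate below are assumptions for the sake of the arithmetic:

```python
# Rough per-frame compute budget on a 254-TOPS platform.
# Assumed workload (illustrative): 10 DNNs, each at 30 frames per second.
PLATFORM_TOPS = 254  # trillion operations per second, from the article
NUM_DNNS = 10        # assumption
FPS = 30             # assumption

ops_per_second = PLATFORM_TOPS * 1e12
budget_per_inference = ops_per_second / (NUM_DNNS * FPS)
print(f"{budget_per_inference / 1e9:.0f} billion ops per DNN per frame")
# 847 billion ops per DNN per frame
```

Even under this heavy assumed load, each network gets a budget of hundreds of billions of operations per frame, which is what makes running a full perception stack concurrently feasible.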
In addition to DNNs for perception, AVs rely on maps with centimeter-level detail for accurate localization, which is the vehicle’s ability to locate itself in the world.
Proper localization requires constantly updated maps that reflect current road conditions, such as a work zone or a lane closure, so vehicles can accurately measure distances in the environment. These maps must efficiently scale across AV fleets, with fast processing and minimal data storage. Finally, they must be able to function worldwide, so AVs can operate at scale.
NVIDIA DRIVE Map is a multimodal mapping platform designed to enable the highest levels of autonomy while improving safety. It combines survey maps built by dedicated mapping vehicles with AI-based crowdsourced mapping from customer vehicles. DRIVE Map includes four localization layers—camera, lidar, radar, and GNSS—providing the redundancy and versatility required by the most advanced AI drivers.
The AV development process isn’t linear. As humans, we never stop learning, and AI operates in the same way.
Autonomous vehicles will continue to get smarter over time as the software is trained for new tasks, enhanced, tested, and validated, then delivered to the vehicle over the air.
This pipeline is continuous: data collected from the vehicle feeds ongoing training that improves the networks, which are then deployed back to the vehicle. AI is used at every stage of the real-time computing pipeline, from perception, mapping, and localization to planning and control.
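The data flywheel described above can be summarized as a single schematic cycle. Every function here is a toy placeholder standing in for an entire subsystem:

```python
# Schematic of the continuous development cycle. Each stage function is
# a toy placeholder for a whole subsystem (curation, labeling, training,
# validation, over-the-air deployment).
def curate(data):          return [d for d in data if d["novel"]]
def label(frames):         return [{**f, "label": "obj"} for f in frames]
def train(model, labeled): return {"version": model["version"] + 1}
def validate(model):       return True
def deploy(model):         return model  # over-the-air update

def flywheel_cycle(model, fleet_data):
    """One pass: curate -> label -> train -> validate -> OTA deploy."""
    labeled = label(curate(fleet_data))
    candidate = train(model, labeled)
    return deploy(candidate) if validate(candidate) else model

model = {"version": 1}
fleet_data = [{"novel": True}, {"novel": False}]
print(flywheel_cycle(model, fleet_data))  # {'version': 2}
```

Each pass through the cycle produces a new software version, and the fleet's subsequent drives supply the raw data for the next pass.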
This continuous cycle is what turns vehicles from fixed-function machines into software-defined devices. Traditionally, a vehicle is as advanced as it will ever be at the point of sale. With a software-defined architecture, automakers can keep updating vehicles throughout their lives with new features and functionality.