Webinar: Learn How NVIDIA DriveWorks Gets to the Point with Lidar Sensor Processing

With NVIDIA DriveWorks SDK, autonomous vehicles can bring their understanding of the world to a new dimension.

The SDK enables autonomous vehicle developers to easily process three-dimensional lidar data and apply it to specific tasks, such as perception or localization. You can learn how to implement this critical toolkit in our expert-led webinar, Point Cloud Processing on DriveWorks, Aug. 25.

Lidar sensors enhance an autonomous vehicle’s sensing capabilities, detecting the depth of surrounding objects that may not be picked up by camera or radar.

They do so by bouncing invisible laser pulses off the vehicle’s surrounding environment, building a 3D image from the time each pulse takes to return. However, efficiently extracting contextual meaning from lidar data in real time is far less straightforward.
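The distance computation behind this process is simple time-of-flight geometry: a pulse travels out to an object and back, so the range is half the round-trip time multiplied by the speed of light. A minimal sketch (illustrative only, not DriveWorks code):

```python
# Convert a lidar pulse's round-trip time into a range (time-of-flight).
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def round_trip_to_range(round_trip_seconds: float) -> float:
    """The pulse travels to the target and back, so halve the path length."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse that returns after roughly 667 nanoseconds came from about 100 m away.
print(round(round_trip_to_range(667e-9), 1))
```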

Lidar point cloud processing must be performed in real-time and in tight coordination with other sensing modalities to deliver the full benefits of enhanced perception — a difficult feat to accomplish when working with third-party open source modules.
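One basic building block of that cross-sensor coordination is associating each completed lidar spin with the camera frame captured closest in time. The sketch below is a hypothetical illustration of that idea; the function names and timestamp layout are assumptions, not DriveWorks APIs:

```python
from bisect import bisect_left

def nearest_timestamp(sorted_timestamps: list[int], query: int) -> int:
    """Return the sensor timestamp closest to `query` (all in milliseconds)."""
    i = bisect_left(sorted_timestamps, query)
    # The nearest value is either the insertion neighbor before or after.
    candidates = sorted_timestamps[max(0, i - 1):i + 1]
    return min(candidates, key=lambda t: abs(t - query))

# Match a lidar spin completed at t=1030 to camera frames at ~33 ms intervals.
camera_frames = [1000, 1033, 1066, 1099]
print(nearest_timestamp(camera_frames, 1030))  # 1033
```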

A Streamlined Solution

With DriveWorks, efficient and accelerated lidar point cloud processing can be performed right out of the gate.

The SDK provides middleware functions that are fundamental to autonomous vehicle development. These include the sensor abstraction layer (SAL) and sensor plugins, a data recorder, vehicle I/O support, and a deep neural network framework. It’s modular, open, and designed to be compliant with automotive industry software standards.

These development tools include a point cloud processing module, which works with the SAL and sensor plugin framework to give developers a solid basis for implementing a lidar-based perception pipeline quickly and with minimal effort.
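In broad strokes, such a pipeline accumulates raw per-packet lidar returns into a full point cloud and then runs perception stages over it. The skeleton below is purely illustrative pseudostructure in Python, not the DriveWorks C API; every name in it is an assumption:

```python
# Illustrative lidar pipeline skeleton (hypothetical names, not DriveWorks API).
from dataclasses import dataclass, field

@dataclass
class PointCloud:
    points: list = field(default_factory=list)  # (x, y, z, intensity) tuples

def accumulate(packets):
    """Stitch per-packet lidar returns into one full-spin point cloud."""
    cloud = PointCloud()
    for packet in packets:
        cloud.points.extend(packet)
    return cloud

def remove_ground(cloud, ground_z=-1.5):
    """Crude ground filter: drop points at or below an assumed road plane."""
    cloud.points = [p for p in cloud.points if p[2] > ground_z]
    return cloud

# Two packets from one spin; the ground return (z = -1.6) is filtered out.
packets = [[(5.0, 0.0, 0.2, 0.9)], [(7.0, 1.0, -1.6, 0.4)]]
cloud = remove_ground(accumulate(packets))
print(len(cloud.points))  # 1
```

In a production system these stages would run GPU-accelerated and in lockstep with the other sensor feeds, which is the gap the DriveWorks module is designed to fill.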

The module is CUDA-accelerated and straightforward to implement. It’s the same toolkit the NVIDIA autonomous driving team uses to develop our own self-driving systems, making it purpose-built for production solutions rather than purely for research and development.

Register now to learn more from NVIDIA experts about the DriveWorks point cloud processing module and how to use it in your autonomous vehicle development process.

Source: NVIDIA