
Exelon Uses Synthetic Data Generation of Grid Infrastructure to Automate Drone Inspection

Most drone inspections still require a human to manually inspect the video for defects. Training a computer vision model to automate inspection is difficult without a large pool of labeled data for every possible defect.

In a recent session at NVIDIA GTC, we shared how Exelon is using synthetic data generation in NVIDIA Omniverse to automatically create thousands of labeled, photorealistic examples of various grid asset defects. This post highlights how synthetic images are being used to train an inspection model for real-time drone inspection, enabling better grid maintenance for reliability and resiliency.

Project overview

Exelon, the largest regulated electric utility in the United States, serves more than 10M customers across Illinois, Maryland, Pennsylvania, Delaware, New Jersey, and Washington, D.C.

Under its Path to Clean initiative, the energy provider plans a 50% reduction in emissions by 2030 and net-zero emissions by 2050.

For the drone inspection program, we identified multiple benefits, including reducing crews' exposure to in-field hazards, reducing the manual labor of reviewing images, and accelerating the timeline from image capture to defect resolution for improved grid reliability.

Our method identified grid assets of interest and associated defects in image data. Then, we created an asset image-labeling pipeline to enable our subject matter experts (SMEs) to label assets and defects. Next, we had to build, test, and validate the computer vision asset and defect detection models with SMEs. Finally, we are working to deploy the solution to the business stakeholders.

There were challenges. We needed a large volume of labeled real-world defect data for training and testing the AI model. We wanted to see whether synthetic data generated in NVIDIA Omniverse could address this gap.

We also wanted to develop an end-to-end scalable ecosystem, which could help us accelerate deployment across other transmission and distribution assets, such as high-voltage lines, towers, and substations.

Drone inspection process

BGE (Baltimore Gas and Electric) is an Exelon company serving over 1M customers in Maryland. As part of the targeted field drone inspection program, BGE sends a field team to take multiple photos, using a drone approximately 8-15 feet away, from different angles, including front, side, top, and back. The primary goal is to have any defects visible in at least one of the images, whether on utility poles, cross-arms, insulators, transformers, or other assets. The consistency of image capture enabled us to investigate AI and computer vision techniques for automated defect detection.

Figure 1. Using NVIDIA Omniverse to simulate wooden pole defects used in training AI inspection models.

In this project, we focused on identifying defects in cross-arms where BGE has historical image data, both with and without defects. BGE identified reducing cross-arm failures as one of the main drivers for improving system reliability. The most common defect for cross-arms is splitting, which could potentially impact the stability of the mounted insulators and cause a power outage to customers.

Our team initially used manually labeled data for model training and validation to accurately detect and count cross-arms in drone images. The next goal was cross-arm defect detection using labeled real-world data as well as synthetic data generation. We’re currently training our defect detection models on both real and synthetic images and are in the process of model validation with the business stakeholders.
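Training on both real and synthetic images typically involves capping the synthetic share so the model still sees enough real-world texture and noise. A minimal sketch of such a mixing step is shown below; the file names, the 50% ratio, and the helper function are illustrative assumptions, not Exelon's actual pipeline.

```python
import random

def build_training_set(real_images, synthetic_images,
                       synthetic_fraction=0.5, seed=42):
    """Combine real and synthetic labeled images into one shuffled list.

    synthetic_fraction caps the share of synthetic samples relative to
    the combined set (illustrative value, tuned per project in practice).
    """
    rng = random.Random(seed)
    # Number of synthetic samples that keeps their share at the cap.
    max_synthetic = int(len(real_images) * synthetic_fraction
                        / (1 - synthetic_fraction))
    synthetic_subset = rng.sample(synthetic_images,
                                  min(max_synthetic, len(synthetic_images)))
    combined = list(real_images) + synthetic_subset
    rng.shuffle(combined)
    return combined

# Hypothetical inventory: 100 real drone photos, 500 synthetic renders.
train = build_training_set([f"real_{i}.jpg" for i in range(100)],
                           [f"syn_{i}.png" for i in range(500)])
print(len(train))  # 200: 100 real + 100 synthetic
```

Fixing the random seed keeps the train split reproducible across experiments, which matters when comparing real-only against mixed-data models.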

Asset detection model training and evaluation

Because defect detection requires identifying the exact pixels of an object, the team chose to use segmentation masks for image labeling.

Unlike bounding boxes, a segmentation mask classifies every pixel in the image and distinguishes between overlapping items. This yields better performance when detecting linear cracks, joints, fillings, and shadows.
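The advantage of pixel-level masks for thin defects such as cracks can be illustrated with a small NumPy sketch (the example data and metric functions are ours, not from the project): a predicted crack mask that is off by a single pixel scores zero pixel IoU, while the bounding boxes of the same two masks still overlap almost perfectly, hiding the error.

```python
import numpy as np

# A thin diagonal "crack" on a 100x100 image: the mask covers few pixels,
# but its axis-aligned bounding box covers a large square region.
mask_truth = np.zeros((100, 100), dtype=bool)
for i in range(100):
    mask_truth[i, i] = True  # 100 crack pixels on the main diagonal

# A predicted mask shifted by one pixel to the right.
mask_pred = np.zeros((100, 100), dtype=bool)
for i in range(99):
    mask_pred[i, i + 1] = True

def mask_iou(a, b):
    """Pixel-level intersection over union of two boolean masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def bbox_iou(a, b):
    """IoU of the axis-aligned bounding boxes enclosing each mask."""
    def bbox(m):
        ys, xs = np.nonzero(m)
        return ys.min(), xs.min(), ys.max(), xs.max()
    ay0, ax0, ay1, ax1 = bbox(a)
    by0, bx0, by1, bx1 = bbox(b)
    iy = max(0, min(ay1, by1) - max(ay0, by0) + 1)
    ix = max(0, min(ax1, bx1) - max(ax0, bx0) + 1)
    inter = iy * ix
    area_a = (ay1 - ay0 + 1) * (ax1 - ax0 + 1)
    area_b = (by1 - by0 + 1) * (bx1 - bx0 + 1)
    return inter / (area_a + area_b - inter)

print(mask_iou(mask_truth, mask_pred))  # 0.0: the off-by-one shift misses every crack pixel
print(bbox_iou(mask_truth, mask_pred))  # ~0.98: the boxes almost coincide
```

The bounding-box metric would call this prediction nearly perfect, while the pixel metric correctly flags it as a complete miss along the crack.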

Our data scientists spent considerable time testing labeled images and studying how different annotation techniques affect the accuracy of the model.

Figure 2. Asset detection model training and evaluation for cross-arm defects: bounding boxes in YOLOv5 (left) and bounding boxes with masks in the new defect detection model (right).

As we were seeing early success in asset detection, we knew that defect detection would be much more challenging given the lack of a large pool of labeled data for every possible defect. However, we had previously collaborated with NVIDIA—going back to Exelon’s purchase of an NVIDIA DGX-1 system—and were introduced to NVIDIA Omniverse. The platform provides many opportunities around 3D modeling and generating synthetic data of grid asset defects that occur in the field.

As part of our engagement with NVIDIA, we had multiple sessions to brainstorm our architecture. We used the NVIDIA Omniverse Replicator to generate different defects of cross-arms, which would produce labeled data for the training of the inspection model. We used Omniverse to create different types of cross-arm defects by size, shape, and location.

After we generated sufficient variations and defects, the output was added to the full pole structure. We then generated synthetic images by varying the scene, the number of cross-arms, and the number of defects.
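The randomization described above can be sketched in plain Python. The actual work used Omniverse Replicator inside a 3D scene; the defect categories, parameter ranges, and label schema below are illustrative assumptions chosen to mirror the size/shape/location and scene/cross-arm/defect-count axes named in the text.

```python
import json
import random

rng = random.Random(0)  # fixed seed for reproducible label generation

DEFECT_TYPES = ["split", "crack", "rot"]  # illustrative defect categories

def random_defect():
    """Randomize a defect's type, size, and location along a cross-arm."""
    return {
        "type": rng.choice(DEFECT_TYPES),
        "length_m": round(rng.uniform(0.05, 0.6), 3),  # defect size (assumed range)
        "position": round(rng.uniform(0.0, 1.0), 3),   # fraction along the arm
    }

def random_scene(scene_id):
    """Each synthetic image varies the scene, cross-arm count, and defect count."""
    n_arms = rng.randint(1, 3)
    return {
        "scene_id": scene_id,
        "camera_angle_deg": rng.choice([0, 45, 90, 180]),  # front/side/top/back
        "cross_arms": [
            {"arm": a,
             "defects": [random_defect() for _ in range(rng.randint(0, 2))]}
            for a in range(n_arms)
        ],
    }

# Every render comes out pre-labeled, so no manual annotation is needed.
labels = [random_scene(i) for i in range(1000)]
print(json.dumps(labels[0], indent=2))
```

The key property, which Replicator provides natively through its writers, is that every generated image carries its ground-truth label by construction.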

Partnership with a third party for synthetic images

During our brainstorming sessions with NVIDIA, we realized that 3D modeling was a challenging effort beyond Exelon's core data science focus. We needed artists and modelers with experience building 3D models, integrating them into photorealistic environments, and manipulating lighting and scene conditions. NVIDIA connected Exelon with several vendors who use NVIDIA Omniverse and serve images for other use cases, both with other utilities and outside the utility industry.

Figure 3. Deloitte’s synthetic data generation 3D workflow using NVIDIA Omniverse.

For this pilot, we chose Deloitte to help us develop synthetic images for cross-arms. Deloitte builds 3D models of assets and defects in Maya, then uses Unreal Engine to develop photorealistic surrounding environments with accurate lighting and scene conditions, which are ported into NVIDIA Omniverse. Their designers and developers then work together to output labeled images for use in defect detection model training.

Future focus areas and opportunities

Our end goal was to create an end-to-end scalable ecosystem. With it, we can move from one asset type to another, starting with cross-arms and scaling up across our distribution, transmission, and oil and gas assets.

Building analytics products is a team sport. Our work with NVIDIA and Deloitte supported synthetic image generation and enabled us to leverage outside experts in building 3D models, designing backgrounds, and labeling images. We are seeing value in Omniverse as a hub for Deloitte to collect all the tools available to create 3D images and provide the framework needed to develop a large pool of datasets for cross-arms.

Applying AI to captured imagery will continue to be a focus area for Exelon and other utilities. If this project is successful, we’ll be able to scale to other transmission and distribution assets, such as poles, transformers, insulator pins, cross-arm braces, or transmission towers.

For more information on training defect detection models using synthetic data, see the NVIDIA Omniverse Synthetic Data Generation forum.

Acknowledgments

At Exelon and BGE, we are fortunate to have exceptional innovators and partners who are experts in their respective fields. We would like to express our gratitude to multiple teams involved in this project, including BGE Analytics/Innovation, Drone Inspection, BGE Distribution Standards, and BGE Regional Electric Operations.

A special acknowledgment goes to the present and former members of the Exelon Infrastructure Analytics team. We extend our gratitude to Vladyslav Anderson and Po-Chen Chen for their project leadership and guidance, as well as Reddy Mandati and Bishwa Sapkota for their exceptional work in executing this complex use case.

We are grateful for the encouragement from our leadership teams to take on high-impact projects like this one. This includes the BGE Utility of the Future Council; Ajit Apte, BGE VP Technical Services, who serves as our lead sponsor; and the analytics leadership at Exelon, with Isaac Akridge, SVP Operation, Analytics, and Business Investments. Their support, funding, and guidance are indispensable to our success.

Source: NVIDIA
