The study uses generative adversarial networks to underscore the impacts of climate change and prompt collective action toward curbing emissions.
You may soon be able to see how future flooding could hit your city with a newly developed AI model. The study, from a team of Canadian and U.S. researchers, uses generative adversarial networks (GANs) to produce realistic images of climate change-induced flooding. The team developed the model, named ClimateGAN, to underscore the destructive potential of extreme weather events and prompt collective action toward curbing emissions.
“Projecting the potential consequences of extreme climate events such as flooding in familiar places can help make the abstract impacts of climate change more concrete and encourage action,” the researchers write.
People across the globe are grappling with more frequent extreme weather events including storms, hurricanes, droughts, and wildfires brought on by a warming planet. Coastal and inland communities are also experiencing more intense flooding due to rising sea levels, stronger storms, and faster snowmelt.
The devastating outcomes of global warming may hit home for people who have lived through these catastrophes, whether that is destructive flooding from Hurricane Ida or blazing bush fires across Australia. However, many still view climate change impacts as a hypothetical, distant, or uncertain occurrence—a psychological phenomenon called distancing.
According to the researchers, first-person perspectives and images of extreme weather events can reduce distancing. Until now, digital tools such as geographic visualizations and interactive data dashboards have relied on manually created regional renderings limited to specific locations. With ClimateGAN, the team set out to build an AI framework capable of illustrating flooding in familiar places, transforming the abstract impacts of climate change into concrete examples.
Using a two-phase, unsupervised image-to-image translation pipeline, the framework relies on both real images and simulated data from a virtual world. Drawing on these two data sources, the Masker model predicts where water would appear in an image if flooding were to occur. Then the Painter model, which uses GauGAN, a deep learning model developed by NVIDIA Research, renders contextualized water textures guided by the Masker's output.
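The mask-then-paint flow described above can be sketched in a few lines. This is a toy illustration only: the function names and the fake mask/texture logic below are assumptions for demonstration, standing in for ClimateGAN's learned segmentation and GauGAN-based generation networks.

```python
import numpy as np

def masker(image: np.ndarray) -> np.ndarray:
    """Stand-in for ClimateGAN's Masker: predicts where flood water
    would appear. Here we fake it by flagging the bottom 40% of the
    frame; the real Masker is a learned segmentation network."""
    h, w, _ = image.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[int(h * 0.6):, :] = True
    return mask

def painter(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Stand-in for the GauGAN-based Painter: renders water only
    inside the predicted mask. Here we blend a flat murky-brown
    color; the real Painter generates context-aware water textures."""
    water = np.array([96, 72, 48], dtype=np.float32)  # murky water RGB
    out = image.astype(np.float32).copy()
    out[mask] = 0.3 * out[mask] + 0.7 * water  # blend toward water color
    return out.astype(np.uint8)

# Two-phase inference: predict the mask first, then paint only there.
street_scene = np.full((8, 8, 3), 200, dtype=np.uint8)  # dummy image
flooded = painter(street_scene, masker(street_scene))
```

The key design point the sketch captures is the separation of concerns: the Masker decides *where* water goes, so the Painter only has to decide *what* the water looks like in that region, leaving the rest of the scene untouched.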
The team trained the Masker model on 5,540 non-flooded images and the Painter model on 1,200 flooded images, sampled from a broad range of regions and scenery.
Together, the two models render realistic imagery of floods in urban, suburban, and rural areas.
The researchers state that the long-term goal of this work is a system where a user can enter any address and see a climate-change-affected version of the corresponding Google Street View image.
The code and additional materials are available for download on GitHub.