AI for a Scientific Computing Revolution

AI and its newest subdomain generative AI are dramatically accelerating the pace of change in scientific computing research. From pharmaceuticals and materials science to astronomy, this game-changing technology is opening up new possibilities and driving progress at an unprecedented rate.

In this post, we explore some new and exciting applications of generative AI in science, including the impact of GPT-3 on the 2022 ACM Gordon Bell Special Prize work on SARS-CoV-2 evolutionary dynamics.

We also look at trained surrogate models that control fusion plasma reactions, new score-based generative models at the Large Hadron Collider, and advances in climate modeling for Earth-2 and Destination Earth (DestinE). Across these efforts, research models are increasingly evolving toward transformer-based architectures.

LLMs for genomic research

The 2022 Gordon Bell Special Prize honored a team of researchers from top institutions for their groundbreaking work modeling the evolution of pandemic-causing viruses. Using genome data, they developed large language models (LLMs) called genome-scale language models (GenSLMs). These models were pretrained on over 110M prokaryotic gene sequences and fine-tuned on 1.5M SARS-CoV-2 genomes.

GenSLMs represent one of the first whole genome-scale foundation models and demonstrated impressive scaling on the Polaris and Selene supercomputing systems.
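
To make the data side of this concrete, a genome is typically split into codon-level tokens before a language model is pretrained or fine-tuned on it. The following is a minimal sketch of that tokenization step; the toy vocabulary, special tokens, and example sequence are illustrative assumptions, not the GenSLM code.

```python
# Minimal sketch (assumptions): tokenize a nucleotide sequence into codon
# tokens so a causal language model can be trained on gene sequences.
from itertools import product

# Toy vocabulary: all 64 codons plus padding/unknown tokens.
CODONS = ["".join(c) for c in product("ACGT", repeat=3)]
VOCAB = {"<pad>": 0, "<unk>": 1}
VOCAB.update({codon: i + 2 for i, codon in enumerate(CODONS)})

def tokenize_codons(seq: str) -> list[int]:
    """Split a nucleotide string into codons and map each codon to an id."""
    seq = seq.upper()
    ids = []
    for i in range(0, len(seq) - len(seq) % 3, 3):
        ids.append(VOCAB.get(seq[i:i + 3], VOCAB["<unk>"]))
    return ids

# Hypothetical ORF fragment, for illustration only.
print(tokenize_codons("ATGTTTGTTTTTCTTGTTTTATTGCCACTAGTC"))
```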

Figure 1. Workflow from the initial 2022 gene sequence LLM project on SARS-CoV-2

Figure 1 shows the workflow starting with the foundation model trained on 110M PATRIC sequences. It is then fine-tuned on SARS-CoV-2 open reading frames (ORFs). The trained foundation models can take two paths:

  • A prediction workflow uses diffusion models to capture the hierarchy of gene organization and generate new SARS-CoV-2 genomes. This is the generative AI path. The output may be sent to OpenFold.
  • A detection workflow produces semantic similarity scores (embeddings), which are used to analyze immune escape and identify variants of concern. The sequence log-likelihood score drives a fitness evaluation and also contributes to the variant-of-concern scores (the key output). A minimal scoring sketch follows this list.
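
One way to read the sequence log-likelihood score in the detection path is as the average per-token log-probability a trained causal language model assigns to a sequence, which can then feed the fitness evaluation. The sketch below uses a generic Hugging Face causal LM as a stand-in; the model name and scoring details are assumptions, not the GenSLM implementation.

```python
# Minimal sketch (assumptions): score a sequence by the average per-token
# log-likelihood a causal LM assigns to it, as one possible fitness proxy.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder; GenSLM uses its own genome-scale models
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME).eval()

@torch.no_grad()
def sequence_log_likelihood(sequence: str) -> float:
    """Average per-token log-probability of `sequence` under the model."""
    inputs = tokenizer(sequence, return_tensors="pt")
    outputs = model(**inputs, labels=inputs["input_ids"])
    # The Hugging Face loss is the mean cross-entropy over tokens;
    # negate it to get the average log-likelihood per token.
    return -outputs.loss.item()

# Higher scores suggest the model finds the sequence more plausible.
print(sequence_log_likelihood("ATG TTT GTT TTT CTT GTT TTA TTG"))
```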

The OpenFold model is used to examine epitope alterations that feed the immune escape analysis step. OpenFold also provides protein-protein interaction (PPI) information through molecular dynamics simulations that inform the fitness evaluation.

LLMs have a natural affinity for genomics and proteomics because genomic and protein sequences can be treated as a language of letters. Just as in natural language, context from letters far away in a sequence can carry meaning.

This ability to understand context makes LLMs useful for modeling interactions between genes at different parts of the genome. This interaction is called epistasis, and transformer models are valuable for understanding how multiple parts of a sequence can interact in 3D space to support folding.
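
As a small illustration of why transformers suit this, a single self-attention layer lets any position in a tokenized sequence attend directly to any other position, no matter how far apart they are. The toy example below uses standard PyTorch with invented dimensions and only inspects the attention weights between two distant positions.

```python
# Minimal sketch: one self-attention layer connects distant positions
# directly, which is why transformers can capture long-range interactions
# such as epistasis. Dimensions are toy values, not GenSLM's.
import torch
import torch.nn as nn

torch.manual_seed(0)
d_model, seq_len = 32, 200
attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
tokens = torch.randn(1, seq_len, d_model)  # stand-in for codon embeddings

_, weights = attn(tokens, tokens, tokens, need_weights=True)
# weights[0, i, j]: how strongly position i attends to position j,
# even when i and j are nearly 200 tokens apart.
print(weights[0, 10, 190].item())
```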

The initial GenSLM workflow is being scaled up and updated using GPT-3 and reinforcement learning, with promising preliminary results, and is training at scale on the Perlmutter supercomputer.

Figure 2. Workflow from work in progress using GPT-3 and reinforcement learning: the models pass through reinforcement learning steps of sequence generation, rewards, and RL loss

In Figure 2, the seed model, a pretrained LLM, is used to produce the policy module, which contains the model (Policy) and a reference model (Reference Policy). The Policy module flows to sequence generation, a rewards step, and then an RL loss step. The reward model (sequence, structure, function) also feeds into the rewards step. Backpropagation feeds back into the Policy module. Foundation models trained on 110M PATRIC sequences are fine-tuned on protein targets, creating the trained foundation models.

Multiple agents evaluate the outputs of the reinforcement learning policy and feed into a multi-objective loss function:

  • Agent 1: Evaluate sequences with an evolutionary coupling analysis.
  • Agent 2: Evaluate sequences, structure, and kinetics with experiment observations.
  • Agent k: Evaluate sequence, structure, and energetics with molecular dynamics simulations.

The rewards, combined through the multi-objective loss function, inform the reward model.
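
A minimal way to picture this setup is a weighted sum of per-agent scores entering a policy-gradient-style loss with a KL penalty that keeps the policy close to the reference model, a common pattern in reinforcement learning from feedback. The agents, weights, and loss form below are illustrative assumptions, not the team's implementation.

```python
# Minimal sketch (assumptions): combine scores from several "agents" into a
# multi-objective reward, then form an RLHF-style loss with a KL penalty
# that keeps the policy close to the reference (pretrained) model.
import torch

def combined_reward(scores: dict[str, float],
                    weights: dict[str, float]) -> torch.Tensor:
    """Weighted sum of per-agent scores (coupling, experiment, MD, ...)."""
    return torch.tensor(sum(weights[k] * scores[k] for k in scores))

def rl_loss(logp_policy: torch.Tensor,
            logp_reference: torch.Tensor,
            reward: torch.Tensor,
            kl_coeff: float = 0.1) -> torch.Tensor:
    """REINFORCE-style objective with a per-sequence KL penalty."""
    kl = logp_policy - logp_reference
    return -(reward - kl_coeff * kl) * logp_policy

# Toy usage with made-up numbers for one generated sequence.
scores = {"coupling": 0.7, "experiment": 0.4, "molecular_dynamics": 0.9}
weights = {"coupling": 1.0, "experiment": 0.5, "molecular_dynamics": 0.8}
loss = rl_loss(torch.tensor(-12.3), torch.tensor(-12.9),
               combined_reward(scores, weights))
print(loss.item())
```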

Synthetic sequences generated with diffusion models and processed with AlphaFold appear to fold into the desired wild-type structures. AlphaFold is an AI system created by DeepMind that can predict the 3D structure of proteins from their amino acid sequence.

Chart of binding energies (-200 to +200) for five wild-type variants; one example prominently shows values for XBB (nearly 100), Gamma (nearly 100), BQ.1 (about 175), Omicron (about 75), BA.2.75.2 (about 50), and Alpha (about 150), while the other four examples span a range of binding energies.
Figure 3. A generated variant is evolutionarily close to BQ.1

Now, you can generate new sequences with varying degrees of sequence identity and positive matches. You can also generate minimal sequences that retain functional domains and can act as productive enzymes. In vitro expression and enzymatic validation will follow.

It is anticipated that the research team at Argonne National Laboratory will upgrade their workflow to GPT-4 next year. The pace of change in science due to AI in scientific computing is unprecedented. Models that were in use just 3 years ago are now obsolete, and it would not be surprising if current models become obsolete in the next 3 years. This rapid advancement is driving progress and opening new possibilities in scientific research.

Deep learning for fusion simulation

In experiments with toroidal plasmas, accurate information about plasma instabilities is key to controlling the plasma successfully.

For example, one common cause of major disruptions in plasma is the neoclassical tearing mode (NTM). These disruptions can cause damage to the experimental device. By identifying and controlling the plasma perturbations that can excite NTM, it is possible to prevent these disruptions.

ITER project

The ITER project is an international effort to build a fusion reactor that can produce unlimited carbon-free energy. The design of ITER’s operations and plasma control system relies on extrapolating information from smaller experimental devices.

By studying plasma instabilities using first-principles-based simulations, it is possible to improve our understanding and prediction of the dynamics and transport in future fusion devices like ITER. However, these simulations can be computationally expensive and take a long time to run, making it difficult to use them in real-time experiments.

DIII-D National Fusion Facility

Machine learning models have been used to predict plasma behaviors in plasma control systems. Recently, deep learning-based models have shown promising results in predicting disruptions and perturbed magnetic signals. Researchers at the DIII-D National Fusion Facility at General Atomics, the largest magnetic fusion research experiment in the United States, have developed a deep-learning-based surrogate model to simulate plasma instabilities using data from global gyrokinetic toroidal code simulations.

The model has demonstrated strong predictive capabilities for linear kink mode instability properties and can provide physics-based instability information to complement experimental measurements and guide the plasma control system. The inference time of the model is on the order of milliseconds, making it suitable for use in real-time plasma control systems. This approach shows the potential for using machine learning to simulate and predict plasma instabilities in fusion experiments.
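
Conceptually, such a surrogate maps plasma equilibrium inputs to instability properties with a small network whose forward pass takes on the order of milliseconds. The sketch below is a placeholder architecture with assumed input profiles and outputs, not the actual DIII-D model.

```python
# Minimal sketch (assumptions): an MLP surrogate mapping plasma equilibrium
# profiles to kink-mode instability properties (e.g. growth rate, frequency).
# Sizes and inputs are placeholders, not the actual SGTC architecture.
import time
import torch
import torch.nn as nn

class InstabilitySurrogate(nn.Module):
    def __init__(self, n_profile_points: int = 128, n_outputs: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_profile_points, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, n_outputs),
        )

    def forward(self, equilibrium_profile: torch.Tensor) -> torch.Tensor:
        return self.net(equilibrium_profile)

model = InstabilitySurrogate().eval()
profile = torch.randn(1, 128)  # stand-in for pressure/safety-factor profiles

start = time.perf_counter()
with torch.no_grad():
    prediction = model(profile)
elapsed_ms = (time.perf_counter() - start) * 1e3
print(prediction, f"inference time: {elapsed_ms:.2f} ms")
```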

SGTC surrogate model for gyrokinetic toroidal code

The surrogate model for the gyrokinetic toroidal code (SGTC) is a game-changer, reducing simulation time by at least six orders of magnitude. For the first time, it is possible to bring physics-based instability information from first-principles-based massively parallel simulations into the plasma control system of modern tokamaks. An initial DIII-D fusion experiment surrogate model and “prediction for control” workflow is included in NVIDIA Omniverse to create a digital twin of the fusion reactor.

Video 1. Fusion digital twin workflow for the DIII-D fusion experiment

The SGTC models are being evaluated for a transition from conventional neural networks to transformer-based models.

Generative models for nuclear physics

Detailed detector simulations are an important part of particle and nuclear physics. The simulations are used to compare predictions with data and to design future experiments. The most widely used program for these simulations is GEANT.

However, achieving precision with these simulations requires a lot of computing time because particles must be propagated through materials, resulting in many secondary particles that undergo interactions.

Calorimeters are the most difficult detectors to simulate because their job is to stop particles and measure the energy they deposit. High energy physics (HEP) is the field of physics that explores the fundamental building blocks of nature and how they interact at the smallest and largest scales. In HEP, a constant fraction of all computing resources goes towards these simulations using GEANT.

However, it is not possible to run full simulations for all events at the Large Hadron Collider due to computing budget constraints. As a result, fast simulation methods have been developed that use simpler models tuned to the full simulation. These models have fewer parameters and are easier to optimize and validate but are less precise.

Deep learning offers a complementary approach using flexible neural networks to transform random numbers into structured data.

Different types of deep generative models are used for emulating calorimeter showers and other particle detectors. Each has its advantages and disadvantages:

  • GANs are fast and flexible but can be challenging to optimize and may suffer from mode collapse.
  • Variational autoencoders (VAEs) can learn smooth latent state representations of input data but can be more complex and computationally expensive to train compared to standard autoencoders.
  • Normalizing flows (NFs) tend to be robust to mode collapse and have a convex loss function but have difficulty scaling to higher-dimensional datasets.

A new class of deep generative algorithms, score-based generative models, minimizes a convex loss function with a single generator network and provides access to the full data likelihood after training. These models have more flexibility in their network architecture and can introduce bottleneck layers, which reduce the number of trainable parameters and improve scalability.
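
To show what a score-based model optimizes, the sketch below trains a small network with a denoising score matching objective on toy "shower" vectors: the network learns to predict the noise added to data, which is equivalent up to scaling to learning the score of the data density. All names, shapes, and hyperparameters are assumptions for illustration.

```python
# Minimal sketch (assumptions): denoising score matching on toy calorimeter
# "shower" vectors. The network learns to predict the added noise, which is
# equivalent (up to scaling) to learning the score of the data density.
import torch
import torch.nn as nn

score_net = nn.Sequential(
    nn.Linear(65, 128), nn.SiLU(),   # input: 64 cells + 1 noise scale
    nn.Linear(128, 128), nn.SiLU(),
    nn.Linear(128, 64),              # output: predicted noise per cell
)
optimizer = torch.optim.Adam(score_net.parameters(), lr=1e-3)

showers = torch.rand(512, 64)        # toy energy deposits per calorimeter cell
for step in range(200):
    sigma = torch.rand(512, 1) * 0.9 + 0.1   # noise scales in [0.1, 1.0]
    noise = torch.randn_like(showers)
    noisy = showers + sigma * noise
    pred = score_net(torch.cat([noisy, sigma], dim=1))
    loss = ((pred - noise) ** 2).mean()      # convex in the network output
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
print(loss.item())
```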

HPC and AI for climate modeling

Developing effective strategies to mitigate and adapt to climate change impacts hinges on our ability to create accurate climate models that can precisely predict regional climate trends over extended time frames. Ultra-high-resolution climate modeling, which necessitates computing power millions to billions of times greater than currently available, enables the simulation of cloud formations and regional extreme weather event predictions decades in advance.

By harnessing the capabilities of GPU-accelerated computing, deep learning, physics-informed neural networks, AI supercomputers, and massive data sets, we can achieve a million-fold acceleration in computational speed. Furthermore, super-resolution techniques bring us closer to the billion-fold leap required for ultra-high-resolution climate modeling.

These advancements enable early warnings, helping countries, cities, and towns adapt and reinforce their infrastructure. Increased accuracy in predictions will inspire people and nations to act more urgently.

DestinE and Earth-2

In 2020, the European Commission unveiled DestinE, a flagship initiative aimed at developing a highly accurate global-scale digital model of Earth. This model will monitor, simulate, and predict interactions between natural phenomena and human activities.

In 2021, NVIDIA announced the Earth-2 initiative dedicated to climate change prediction. Earth-2 will generate a digital twin of Earth within NVIDIA Omniverse, using the accelerated digital twin platform for scientific computing. This 3D virtual world simulation platform includes the NVIDIA Modulus framework for developing physics-informed neural network models and the FourCastNet global data-driven, deep learning-based weather forecasting model.

Earth-2 will work in collaboration with international climate science initiatives. At GTC 2023, Thomas Schulthess, director of the Swiss National Supercomputing Centre (CSCS), highlighted weather and climate as a “lighthouse use case” for Alps, the center's next-generation system.

Alps, which introduces NVIDIA Grace Hopper Superchips, seems poised for participation in DestinE. Peter Bauer, European Centre for Medium-Range Weather Forecasts (ECMWF) DestinE Director, mentioned a “Swiss component” of DestinE at GTC.

FourCastNet

Diagram of a multi-layer transformer architecture using the Adaptive Fourier Neural Operator (AFNO). The FourCastNet model’s additional training and inference modes include two-step fine-tuning, an AFNO-backbone precipitation model, and a forecast inference model.
Figure 5. The FourCastNet multi-layer transformer architecture

The FourCastNet physics-ML model incorporates Fourier neural operators and transformers and is trained on 10 TB of Earth system data. It can simulate and predict extreme weather events, such as hurricanes and atmospheric rivers, with improved accuracy and up to 50,000x faster than traditional methods. With AI super-resolution, this brings us closer to the goal of ultra-high-resolution climate modeling. This breakthrough is a significant step toward creating Earth-2.
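
To make the architecture in Figure 5 slightly more concrete, an AFNO-style block mixes tokens in the frequency domain: an FFT over the spatial grid, a learned pointwise transform on the retained modes, then an inverse FFT. The block below is a heavily simplified sketch with assumed shapes, not the FourCastNet implementation.

```python
# Minimal sketch (assumptions): a simplified AFNO-style block that mixes
# features in the Fourier domain instead of using spatial attention.
import torch
import torch.nn as nn

class SimpleAFNOBlock(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        # Complex channel-mixing weights applied pointwise to Fourier modes.
        self.w_real = nn.Parameter(torch.randn(channels, channels) * 0.02)
        self.w_imag = nn.Parameter(torch.randn(channels, channels) * 0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, lat, lon, channels) gridded atmospheric features
        x_ft = torch.fft.rfft2(x, dim=(1, 2))           # to frequency domain
        x_ft = x_ft @ torch.complex(self.w_real, self.w_imag)
        return torch.fft.irfft2(x_ft, s=x.shape[1:3], dim=(1, 2))

block = SimpleAFNOBlock()
fields = torch.randn(1, 72, 144, 64)  # toy latitude/longitude grid
print(block(fields).shape)            # torch.Size([1, 72, 144, 64])
```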

FourCastNet offers numerous essential benefits for science and society, including high-resolution, high-fidelity wind and precipitation forecasts. Although developed in under a year and having fewer variables and vertical levels than the ECMWF Integrated Forecasting System (IFS), a state-of-the-art numerical weather prediction (NWP) model, FourCastNet’s accuracy rivals the IFS model and outperforms state-of-the-art deep learning weather prediction models on short timescales.

FourCastNet’s predictions are 4–5 orders of magnitude faster than traditional NWP models. This has two crucial implications: generating large ensembles of thousands of members within seconds for improved probabilistic weather forecasting, and rapidly testing hypotheses on weather variability mechanisms and their predictability.
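
Fast inference is what makes such ensembles practical: perturb the initial condition slightly for each member, then roll the model forward autoregressively. The loop below sketches that pattern with a placeholder model; the perturbation scale, step count, and interfaces are assumptions, not FourCastNet's actual pipeline.

```python
# Minimal sketch (assumptions): ensemble forecasting by autoregressively
# rolling a fast ML weather model forward from perturbed initial conditions.
import torch

def forecast_step(state: torch.Tensor) -> torch.Tensor:
    """Placeholder for a trained model that advances the state by 6 hours."""
    return state + 0.01 * torch.tanh(state)

def ensemble_forecast(initial_state: torch.Tensor,
                      n_members: int = 1000,
                      n_steps: int = 28,            # 28 x 6 h = one week
                      noise_scale: float = 1e-3) -> torch.Tensor:
    members = initial_state.repeat(n_members, 1, 1, 1)
    members = members + noise_scale * torch.randn_like(members)
    with torch.no_grad():
        for _ in range(n_steps):
            members = forecast_step(members)        # one cheap forward pass per step
    return members                                  # (n_members, channels, lat, lon)

analysis = torch.randn(1, 20, 72, 144)              # toy analysis fields
ensemble = ensemble_forecast(analysis, n_members=16)
# Ensemble mean and spread approximate the probabilistic forecast.
print(ensemble.mean(dim=0).shape, ensemble.std(dim=0).shape)
```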

FourCastNet offers immense societal benefits, enabling swift disaster mitigation responses. The deep learning-based model provides flexibility in inputting prognostic variables from various models or observational sources.

A pressing challenge for the climate community is predicting how extreme weather events will change under climate change: their frequency, intensity, and spatiotemporal characteristics. Once FourCastNet achieves high fidelity under extreme climate scenarios, it can help address this grand challenge.

Scientific computing acceleration driven by AI

Scientific computing research, along with the researchers and the institutions that manage high-performance computing and supercomputing infrastructure, is experiencing an unprecedented pace of positive change enabled by generative AI technologies.

By co-designing with the scientific community and industry leaders, NVIDIA has been able to provide the flexibility to accelerate workflows like the ones mentioned in this post. New AI models accelerated by the latest-generation GPUs (the NVIDIA Ampere, NVIDIA Ada Lovelace, and NVIDIA Hopper architectures, and the NVIDIA Grace Hopper Superchip) all leverage full-stack platforms for HPC, AI, and digital twins. By embracing the power of deep learning and HPC, researchers can tap into these technologies to tackle some of the most pressing challenges in scientific computing today.

Source: NVIDIA