Artificial intelligence (AI), a branch of computer science that is transforming scientific inquiry and industry, could now speed the development of safe, clean and virtually limitless fusion energy for generating electricity. A major step in this direction is underway at the U.S. Department of Energy's (DOE) Princeton Plasma Physics Laboratory (PPPL) and Princeton University, where a team of scientists working with a Harvard University graduate student is applying deep learning to forecast disruptions that can halt fusion reactions and damage the tokamaks that house them.
Promising new chapter in fusion research
"This research opens a new chapter in the effort to bring unlimited energy to Earth," Steve Cowley, director of PPPL, said of the findings, which are reported in the current issue of Nature magazine.
Fusion, which drives the sun and stars, is the fusing of light elements in the form of plasma, the hot, charged state of matter composed of free electrons and atomic nuclei, to generate energy. Scientists are seeking to replicate fusion on Earth as an abundant source of electric power.
Crucial to demonstrating the ability of deep learning to forecast disruptions, the sudden eruption of plasma particles and energy, has been access to huge databases from two major fusion facilities: the DIII-D National Fusion Facility, which General Atomics operates for the DOE in California and is the largest facility in the United States, and the Joint European Torus (JET) in the United Kingdom, the largest facility in the world, which is managed by EUROfusion, the European Consortium for the Development of Fusion Energy.
The vast databases have enabled reliable predictions of disruptions on tokamaks other than those on which the system was trained, in this case from the smaller DIII-D to the larger JET. The deep learning code, called the Fusion Recurrent Neural Network (FRNN), thus opens the door to such cross-machine forecasting.
Most intriguing area of scientific growth
"Artificial intelligence is the most intriguing area of scientific growth right now, and it is very exciting," said Bill Tang, principal research physicist at PPPL, coauthor of the paper, and lecturer with the rank and title of professor in the Princeton University Department of Astrophysical Sciences, who supervises the AI project.
Unlike traditional software, which carries out prescribed instructions, deep learning learns from its mistakes. Accomplishing this are neural networks, layers of interconnected nodes (mathematical algorithms) that are "parameterized," or weighted by the program, to shape the desired output. For any given input, the nodes seek to produce a specified output, such as correct identification of a face or an accurate forecast of a disruption.
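The weight-adjustment idea described above can be sketched in a few lines. This is a minimal, illustrative example of a single weighted node learning from its errors; all names, numbers, and the learning rate are invented for illustration and have nothing to do with the actual FRNN code.

```python
# A single "node": a weighted sum of its inputs. Training nudges the
# weights whenever the output misses the desired target.

def node_output(weights, inputs):
    # Weighted sum of the inputs (the node's "parameterization").
    return sum(w * x for w, x in zip(weights, inputs))

def train_step(weights, inputs, target, lr=0.01):
    # Gradient-descent step on the squared error: each weight moves
    # in the direction that reduces the node's mistake.
    error = node_output(weights, inputs) - target
    return [w - lr * error * x for w, x in zip(weights, inputs)]

weights = [0.0, 0.0]           # start knowing nothing
inputs, target = [1.0, 2.0], 3.0
for _ in range(200):           # learn from repeated mistakes
    weights = train_step(weights, inputs, target)

print(abs(node_output(weights, inputs) - target) < 1e-3)  # True
```

Real networks stack many such nodes in layers and use far more sophisticated optimizers, but the principle, adjusting weights to shape the desired output, is the same.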
A key feature of deep learning is its ability to capture high-dimensional rather than one-dimensional data. For example, while non-deep-learning software might consider the temperature of a plasma at a single point in time, the FRNN considers profiles of the temperature developing in time and space. Julian Kates-Harbeck, a physics graduate student at Harvard University and a DOE Office of Science Computational Science Graduate Fellow, was lead author of the Nature paper.
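The contrast between a single scalar measurement and an evolving profile can be made concrete with a generic recurrent step. The sketch below is not the FRNN itself; it is a toy recurrent cell, with made-up weights and data, that folds an entire temperature profile (a vector over plasma radius) into its hidden state at each time step.

```python
import math

def rnn_step(state, profile, w_in=0.1, w_rec=0.5):
    # Combine the previous hidden state with a summary of the whole
    # profile, then squash with tanh (a standard RNN-style update).
    return math.tanh(w_rec * state + w_in * sum(profile) / len(profile))

# One temperature profile per time step: values across the plasma
# radius, evolving in time (all numbers are illustrative).
profiles = [
    [1.0, 2.0, 3.0],   # t = 0
    [1.5, 2.5, 3.5],   # t = 1
    [2.0, 3.0, 4.0],   # t = 2
]

state = 0.0
for profile in profiles:
    state = rnn_step(state, profile)
print(0.0 < state < 1.0)  # True
```

A scalar-only model would see three numbers in total; the recurrent model sees the full spatial profile at every step, which is the "high-dimensional" advantage described above.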
The team trained and ran its networks on graphics processing units (GPUs), computer chips first designed to render 3D images that are far better suited than conventional processors to running deep learning algorithms.
Kates-Harbeck trained the FRNN code on more than two terabytes (2 × 10¹² bytes) of data collected from JET and DIII-D. After running the software on Princeton University's Tiger cluster of modern GPUs, the team ported it to Titan, a supercomputer at the Oak Ridge Leadership Computing Facility, a DOE Office of Science User Facility, and to other high-performance machines.
A demanding task
Distributing the network across many computers was a demanding task, said Alexey Svyatkovskiy, a coauthor of the Nature paper who helped convert the algorithms into a production code and is now at Microsoft.
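One common pattern for distributing training, sketched below under the assumption of simple data parallelism (the article does not detail FRNN's actual scheme): each machine computes a gradient on its own shard of the data, the gradients are averaged, and the shared weights are updated. Real implementations use MPI or GPU communication libraries; this simulates the pattern sequentially in plain Python with a toy one-weight model.

```python
def local_gradient(weight, shard):
    # Each "machine": gradient of mean squared error on its own data
    # shard, for the toy model y = weight * x.
    return sum(2 * (weight * x - y) * x for x, y in shard) / len(shard)

def distributed_step(weight, shards, lr=0.05):
    grads = [local_gradient(weight, s) for s in shards]  # done in parallel
    avg = sum(grads) / len(grads)                        # "all-reduce" average
    return weight - lr * avg                             # shared update

# Data split across two "machines"; the true relation is y = 2x.
shards = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0), (4.0, 8.0)]]
weight = 0.0
for _ in range(100):
    weight = distributed_step(weight, shards)
print(abs(weight - 2.0) < 1e-3)  # True
```

The demanding part at scale is exactly the step the comment labels "all-reduce": keeping many machines' gradients synchronized efficiently.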
FRNN has demonstrated the ability to predict true disruptions within the warning time that ITER will require, while reducing the number of false alarms. The code is now closing in on the ITER requirement of 95 percent correct predictions with fewer than 3 percent false alarms. While the researchers note that only live experimental operation can demonstrate the merits of any predictive method, their paper points out that the large archival databases used in the predictions cover a wide range of operational scenarios.
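The two figures of merit above, correct predictions on disruptive shots and false alarms on quiet ones, are straightforward to compute. The sketch below uses entirely made-up alarm data, purely to show how such rates would be scored against the 95 percent / 3 percent targets.

```python
def disruption_scores(predicted, actual):
    # Split shots by what actually happened, then score the alarms.
    disruptive = [p for p, a in zip(predicted, actual) if a]
    quiet = [p for p, a in zip(predicted, actual) if not a]
    true_positive_rate = sum(disruptive) / len(disruptive)  # caught disruptions
    false_alarm_rate = sum(quiet) / len(quiet)              # spurious alarms
    return true_positive_rate, false_alarm_rate

# 1 = alarm raised / shot disrupted, 0 = no alarm / quiet shot.
predicted = [1, 1, 1, 0, 0, 1, 0, 0, 0, 0]
actual    = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]

tpr, far = disruption_scores(predicted, actual)
print(tpr, far)  # 0.75 0.16666666666666666
```

This toy predictor catches 75 percent of disruptions with a 17 percent false alarm rate, far short of the ITER targets, which illustrates how demanding the 95 percent / under-3-percent requirement is.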
From prediction to control
The next step is to move from prediction to the control of disruptions. Rather than merely predicting disruptions and then mitigating them, Kates-Harbeck said, the goal is to steer the plasma so as to avoid them in the first place. Highlighting this next step is Michael Zarnstorff, who recently moved from deputy director of research at PPPL to chief science officer for the laboratory.
Progressing from AI-enabled accurate predictions to realistic plasma control will require more than one discipline. "We want to combine deep learning with basic, first-principles physics on high-performance computers to zero in on realistic control mechanisms in burning plasmas," said Tang. "By control, one means knowing which 'knobs to turn' on a tokamak to change conditions to prevent disruptions."
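The "knobs to turn" idea can be pictured as a feedback loop: read a predicted disruption risk, and if it is too high, adjust an actuator to steer back toward a safe operating point. Everything in the sketch below, the risk model, the density "knob," and the threshold, is invented for illustration; it is not how a real tokamak control system works.

```python
def predicted_risk(density):
    # Stand-in for a deep learning predictor: risk rises as the plasma
    # density approaches an (arbitrary) limit of 1.0.
    return max(0.0, min(1.0, density / 1.0))

def control_step(density, threshold=0.8, knob=0.05):
    # If the forecast risk is too high, turn the "knob": lower density.
    if predicted_risk(density) > threshold:
        density -= knob
    return density

density = 0.95                 # start in a risky regime
for _ in range(10):            # repeated predict-then-adjust cycles
    density = control_step(density)
print(density <= 0.8 + 1e-9)  # True: steered back below the threshold
```

Combining such feedback with first-principles physics, as Tang describes, is what would turn a predictor into a controller.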
Support for this work comes from the DOE Office of Science; from the Computational Science Graduate Fellowship Program of the DOE Office of Science and the National Nuclear Security Administration; from Princeton University's Institute for Computational Science and Engineering (PICSciE); and from Laboratory Directed Research and Development funds that PPPL provides. The researchers also acknowledge Bill Wichser and Curt Hillegas at PICSciE; Jack Wells at the Oak Ridge Leadership Computing Facility; Satoshi Matsuoka and Rio Yokota at the Tokyo Institute of Technology; and Tom Gibbs at NVIDIA Corp.
Publication: M.D. Boyer, et al., "Real-time capable modeling of neutral beam injection on NSTX-U using neural networks," Nuclear Fusion, 2019; doi: 10.1088/1741-4326/ab0762