
Machine learning improves the performance of the light beam at the advanced light source

The profile of the electron beam at Berkeley Lab's Advanced Light Source synchrotron, represented as pixels measured with a CCD (charge-coupled device) sensor. When stabilized by a machine-learning algorithm, the beam has a horizontal size of 49 microns root mean square and a vertical size of 48 microns root mean square. Demanding experiments require the corresponding light-beam size to be stable on timescales ranging from less than a second to hours to ensure reliable data. Credit: Lawrence Berkeley National Laboratory

Synchrotron light sources are powerful devices that produce light in a variety of "colors" or wavelengths – from infrared to X-rays – by accelerating electrons to emit light in controlled beams.

Synchrotrons such as the Advanced Light Source (ALS) at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) allow scientists to explore samples with this light in a variety of ways, in fields ranging from materials science, biology, and chemistry to physics and the environmental sciences.

Researchers have found ways to improve these machines to create more intense, focused, and consistent beams of light that enable new, more complex, and more detailed studies across a wide range of sample types.

Even so, the light-beam properties still exhibit fluctuations in performance that pose challenges for certain experiments.

Solving a Decade-Old Problem

Many of these synchrotron facilities deliver different types of light for dozens of simultaneous experiments. And small tweaks to enhance light-beam properties at these individual beamlines can feed back into the overall light-beam performance across the entire facility. For decades, synchrotron designers and operators have wrestled with a variety of approaches to compensate for the most stubborn of these fluctuations.

Now, a large team of researchers at Berkeley Lab and UC Berkeley has successfully demonstrated how machine-learning tools can improve the size stability of these light beams for experiments by largely cancelling out such fluctuations, reducing them from a few percent down to 0.4 percent, with precision below one millionth of a meter (submicron).

The tools are described in detail in a study published November 6 in the journal Physical Review Letters.

Machine learning is a form of artificial intelligence in which computer systems analyze a range of data to create predictive programs that solve complex problems. The machine learning algorithms used at ALS are referred to as a "neural network" because they are designed to recognize patterns in the data that loosely resemble the functions of the human brain.

In this study, researchers fed electron-beam data from the ALS, including the positions of the magnetic devices used to produce light from the electron beam, into the neural network. The neural network recognized patterns in these data and identified how different device parameters affected the width of the electron beam. The machine-learning algorithm also recommended magnet adjustments to optimize the electron beam.
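The pattern-recognition step described above can be illustrated with a minimal sketch: train a small neural network to predict the deviation of the vertical beam size from its nominal value, given the settings of the magnetic devices. This is not the ALS production code; the data here are synthetic and every name and number is an illustrative assumption.

```python
# Hypothetical sketch of training a neural network on device settings vs.
# measured beam size. Synthetic stand-in data; all values are assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Stand-in for archived machine data: ~35 device parameters (e.g.,
# insertion-device gaps and phases) per sample, plus the measured
# beam-size deviation in microns.
n_samples, n_params = 2000, 35
X = rng.uniform(-1.0, 1.0, size=(n_samples, n_params))
weights = rng.normal(size=n_params)
# Assume the beam size responds nonlinearly to the device settings.
deviation = 2.0 * np.tanh(X @ weights) + 0.5 * X[:, 0] ** 2
y = deviation + rng.normal(scale=0.1, size=n_samples)  # measurement noise

scaler = StandardScaler().fit(X)
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0)
model.fit(scaler.transform(X), y)

# The trained network predicts how far the source size would drift for a
# given set of device positions, before any corrective action is taken.
predicted = model.predict(scaler.transform(X[:5]))
```

In the real system, the training data would come from the accelerator's archived diagnostics rather than a synthetic model, but the shape of the task, mapping many device parameters to one measured beam-size quantity, is the same.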

Because the size of the electron beam mirrors the size of the light beam the magnets produce from it, the algorithm also optimized the light beam used to study material properties at the ALS.

Solution May Have Global Impact

The successful demonstration at the ALS shows how the technique can, in general, also be applied to other light sources, and it will be particularly beneficial for the specialized studies enabled by an upgrade of the ALS known as the ALS-U project.

"That's the beauty of it," said Hiroshi Nishimura, a Berkeley Lab affiliate who retired last year and had engaged in early discussions and explorations of a machine-learning solution to the longstanding light-beam size-stability problem. "Whatever the accelerator is, and whatever the conventional solution is, this solution can go beyond it."

Steve Kevan, ALS director, said, "This is a very important advance for the ALS and ALS-U. For several years we've struggled with artifacts in the images from our X-ray microscopes. This study presents a new feed-forward approach, and it has largely solved the problem."

The ALS-U project will tighten the narrow focus of the light beams from about 100 microns to below 10 microns, which will also raise the demand for consistent, reliable light-beam properties.

The machine-learning technique builds on conventional solutions that have been improved over the decades since the ALS launched in 1993, and that rely on constant adjustments of the magnets along the ALS ring to compensate in real time for adjustments at individual beamlines.

Nishimura, who was part of the team that brought the ALS online more than 25 years ago, began studying the potential application of machine-learning tools to accelerators about four or five years ago. His conversations extended to computing and accelerator experts at Berkeley Lab and UC Berkeley, and the concept began to gel about two years ago.

This chart shows how vertical beam-size stability greatly improves when a neural network is implemented during Advanced Light Source operations. When the so-called feed-forward correction is in place, fluctuations in the vertical beam size are stabilized from ranges of several percent down to the sub-percent level (see yellow-highlighted section). Credit: Lawrence Berkeley National Laboratory

Successful Tests During ALS Operation

Researchers successfully tested the algorithm at two different sites around the ALS ring earlier this year. They alerted ALS users conducting experiments about their tests of the new algorithm and asked them to report any unexpected performance problems.

"We had consistent testing during user operations from April to June of this year," said C. Nathan Melton, a postdoctoral fellow at the ALS who joined the machine-learning team in 2018 and worked closely with Shuai Liu, a former UC Berkeley graduate student who contributed considerably to the effort and is a co-author of the study.

"We received no negative feedback on the tests," said Simon Leemann, the study's lead author. "One of the monitoring beamlines the team used was a diagnostic beamline that constantly measures accelerator performance, and another was a beamline where experiments were actively running." Alex Hexemer, a senior scientist at the ALS and program lead for computing, co-developed the new tool.

The beamline with the active experiments uses a technique known as scanning transmission X-ray microscopy, or STXM, and scientists there reported improved light-beam performance in their experiments. The machine-learning team noted that the enhanced light-beam performance is also well-suited to advanced X-ray techniques such as ptychography, which can resolve the structure of samples down to nanometers (billionths of a meter), and X-ray photon correlation spectroscopy (XPCS), which is useful for studying rapid changes in highly concentrated materials that lack a uniform structure.

Other experiments that demand a reliable, highly focused light beam of constant intensity in its interactions with a sample can also benefit from the machine-learning improvement, Leemann noted.

"The demands on experiments are getting higher, with smaller area scans on samples," he said. "We need to find new ways to correct these imperfections."

He noted that the core problem the light-source community must contend with – and that the machine-learning tools address – is the fluctuating vertical size of the electron beam at the source point of each beamline.

The source point is the spot where the electron beam in the light source emits the light that travels down a given beamline to an experiment. While the electron beam's width at this point is naturally stable, its height (the vertical source size) can fluctuate.

Opening the "black box" of artificial intelligence

"This is a very nice example of team science," said Leemann, noting that the effort overcame initial skepticism about the feasibility of machine learning for improving accelerator performance, and opened up the "black box" of how such tools can deliver real benefits.

"This is not a tool that has traditionally been part of the accelerator community. We managed to bring people from two different communities together to solve a really tough problem." About 15 Berkeley Lab researchers participated in the effort.

"Machine learning essentially requires two things: the problem must be reproducible, and you need huge amounts of data," said Leemann. "We realized we could put all of our data to use and run a pattern-recognition algorithm on it."

The data revealed the small deviations in electron-beam performance that occurred as adjustments were made at individual beamlines, and the algorithm found a way to tune the electron beam so that it negated this influence better than conventional methods could.

"The problem consists of roughly 35 parameters – far too complex for us to figure out ourselves," said Leemann. "What the neural network did, once trained, was to give us a prediction of what would happen to the source size in the machine if we did nothing to correct it.

"There is an additional parameter in this model that describes how the changes we make to a particular magnet affect that source size. So all we then have to do is pick the parameter that, according to the neural network's prediction, yields the beam size we want to create, and apply that to the machine," Leemann added.
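Leemann's description amounts to a simple feed-forward rule: predict the uncorrected source size, then invert the effect of a corrector magnet to cancel the predicted deviation. A minimal sketch, assuming a single corrector with a linear response; the target size and sensitivity below are illustrative numbers, not ALS machine settings:

```python
# Hypothetical feed-forward step: the neural network supplies a prediction
# of the uncorrected source size; one extra (assumed linear) parameter
# gives the corrector magnet's effect on that size. All values assumed.
TARGET_SIZE_UM = 50.0           # desired vertical source size (assumed)
SENSITIVITY_UM_PER_UNIT = -4.0  # assumed beam-size change per unit of corrector setting

def feed_forward_setting(predicted_size_um: float) -> float:
    """Corrector setting whose (linear) effect cancels the predicted
    deviation from the target source size."""
    return (TARGET_SIZE_UM - predicted_size_um) / SENSITIVITY_UM_PER_UNIT

# If the network predicts the beam would drift to 52 microns, the rule
# sets the corrector to remove the 2-micron excess:
setting = feed_forward_setting(52.0)
# Check: applying the assumed linear response restores the target size.
corrected = 52.0 + SENSITIVITY_UM_PER_UNIT * setting  # back to 50.0
```

The key design point is that the correction is applied before the disturbance shows up in the measurement – hence "feed-forward" rather than feedback on the measured beam size.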

The algorithm-driven system can now make corrections at a rate of up to 10 times per second, although three times per second appears to be sufficient to improve performance at this stage, Leemann said.

The search for new applications for machine learning

The machine-learning team received a two-year grant from the U.S. Department of Energy in August 2018 to pursue this and other machine-learning projects in collaboration with the Stanford Synchrotron Radiation Lightsource at SLAC National Accelerator Laboratory. "We'll keep developing this, and we also have some new machine-learning ideas we'd like to try out," said Leemann.

Nishimura said that the buzzwords "artificial intelligence" have drifted in and out of the research community's focus for many years. "This time," he said, "it finally seems to be something real."


Further information:
S.C. Leemann et al., "Demonstration of Machine Learning-Based Model-Independent Stabilization of Source Properties in Synchrotron Light Sources," Physical Review Letters (2019). DOI: 10.1103/PhysRevLett.123.194801

Provided by
Lawrence Berkeley National Laboratory

Citation:
Machine learning improves the performance of the light beam at the advanced light source (2019, November 8)
retrieved on November 9, 2019
from https://phys.org/news/2019-11-machine-light-beam-advanced-source.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no
part may be reproduced without written permission. The content is provided for informational purposes only.
