Newswise — Banks of computer screens stacked two and three high line the walls. The screens are covered with numbers and graphs that are unintelligible to an untrained eye. But they tell a story to the operators staffing the particle accelerator control room. The numbers describe how the accelerator is speeding up tiny particles to smash into targets or other particles.
However, even the best operator can’t fully track the minuscule shifts over time that affect the accelerator’s machinery. Scientists supported by the Department of Energy (DOE) are investigating how to use computers to make the tiny adjustments necessary to keep particle accelerators running at their best.
Researchers use accelerators to better understand materials and the particles that make them up. Chemists and biologists use them to study ultra-fast processes like photosynthesis. Nuclear and high energy physicists smash together protons and other particles to learn more about the building blocks of our universe. Compact accelerators can be particularly useful for broader applications in society. Medical scientists and doctors use accelerators in cancer therapy, while manufacturers use them to produce semiconductors for electronics. Other applications include sterilizing medical devices, analyzing historical artifacts, and hardening lightweight materials for cars.
Unfortunately, the performance of particle accelerators is prone to drifting over time. They have hundreds of thousands of components. Some of these components are incredibly complex. Influences from outside, like vibrations and temperature changes, can affect how the machinery functions. As various parts shift, they have a domino effect on the pieces after them in line. By the time the accelerator produces the particle beam, tiny shifts may have added up to a significant change. It’s like how individual cars slowing down can lead to a traffic jam. Over time, the beam becomes less precise and less useful.
To fix this issue, operators need to “retune” accelerators back to their optimum parameters. These periods of retuning limit how much time the accelerators are available to scientists. In addition, while scientists are taking experimental data, the technicians can’t adjust the accelerators in real time.
On top of all of that, the beams are incredibly complex. They exist in a space that scientists can’t measure quickly or even directly. Operators are limited to looking at the beam position in one dimension. Considering that the beam actually exists in six dimensions (the normal three, plus motion in each of them), the operators miss out on a lot of data.
To deal with these issues, scientists have developed complex controls and diagnostics. Special algorithms adapt how a particle accelerator operates to compensate for changes over time. A number of systems use these algorithms, including the Linac Coherent Light Source (LCLS), a DOE Office of Science user facility at SLAC National Accelerator Laboratory. But these methods face a major challenge. Because these algorithms rely on feedback from the accelerator, they can end up “stuck” at a setting that is only locally good, without ever finding the true optimal conditions.
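The “stuck” problem is the classic local-optimum trap. The sketch below uses a made-up, hypothetical beam-quality curve (not any real accelerator’s response) to show how a greedy feedback loop climbs the nearest small peak and stops there, missing a better setting farther away:

```python
import numpy as np

# Hypothetical beam-quality landscape with two peaks: a small local
# optimum near x = -1 and the true, larger optimum near x = 2.
def beam_quality(x):
    return 0.5 * np.exp(-(x + 1.0) ** 2) + 1.0 * np.exp(-(x - 2.0) ** 2)

def feedback_tune(x, step=0.05, iters=200):
    """Greedy feedback loop: accept a small move only if quality improves."""
    for _ in range(iters):
        for dx in (step, -step):
            if beam_quality(x + dx) > beam_quality(x):
                x += dx
                break  # took a step; re-measure from the new setting
    return x

# Starting near the small peak, the feedback loop climbs to it and stops,
# never reaching the much better setting near x = 2.
x_stuck = feedback_tune(-1.5)  # settles near the local optimum at -1
```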
With machine learning, computers could act as “virtual observers” that support human technicians. Machine learning applications search for patterns in data and then make predictions. Scientists “teach” machine learning applications by giving them sets of training data. From these data, the application learns to identify the relationship between the data and the results. While a human operator recognizes a problem based on past experience, a machine learning application recognizes a problem based on what it “saw” in its training data. Some accelerators at CERN – the particle physics laboratory in Switzerland – are using this type of application.
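The “teaching” step can be illustrated with a toy surrogate model. Everything here is invented for illustration (the settings, the linear response, the noise level); real accelerator models are far more complex, but the fit-then-predict pattern is the same:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: magnet settings (inputs) and the measured
# beam position (output) they produced. The true relationship here is
# linear, with a little measurement noise added.
settings = rng.uniform(-1, 1, size=(200, 3))
true_weights = np.array([0.8, -0.3, 0.5])
positions = settings @ true_weights + rng.normal(0, 0.01, size=200)

# "Teaching" step: a least-squares fit learns the setting -> position map.
learned, *_ = np.linalg.lstsq(settings, positions, rcond=None)

# Prediction step: the trained model estimates the beam position for a
# setting it has never seen, much as an operator draws on experience.
new_setting = np.array([0.2, -0.1, 0.4])
predicted = new_setting @ learned
```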
But machine learning applications are only as good as their training data. Training data are based on the original characteristics of an accelerator. Unfortunately, as the accelerator’s machinery shifts, those data are no longer accurate! To solve this problem, scientists would have to continuously retrain the model. That defeats the entire point. They just end up with a different variation of their original issue.
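A toy example of the drift problem, with made-up numbers: a model fitted to data taken when the accelerator responded one way gives wrong predictions once hardware drift has changed that response:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical accelerator response: beam position depends linearly on a
# magnet setting, but the coefficient drifts as the hardware shifts.
settings = rng.uniform(-1, 1, size=100)
positions_then = 0.8 * settings  # response when training data were taken

# The model faithfully learns the old coefficient of 0.8.
coef = np.polyfit(settings, positions_then, 1)[0]

# Later, thermal drift has changed the true response to 0.6.
test_setting = 0.5
actual_now = 0.6 * test_setting
predicted = coef * test_setting
stale_error = abs(predicted - actual_now)  # the trained model is now wrong
```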
The best solution may lie in combining the two approaches. With support from the Office of Science’s Accelerator R&D and Production Office, researchers and engineers at DOE’s Los Alamos National Laboratory and Lawrence Berkeley National Laboratory are developing a new machine learning technique for compact particle accelerators. This technique uses real-time data from the accelerator diagnostics to continuously tweak the model. It then uses these data to guide an advanced generative AI process known as diffusion. The process creates virtual views of the accelerator’s beam as it changes with time. One machine learning tool can take a set of extremely complex inputs with many dimensions, compress them into a much simpler representation, and then produce a complex output that reflects the system.
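The compress-then-expand idea can be sketched with a simple linear stand-in. Real systems use neural encoder-decoders and diffusion models; here a plain SVD (a hypothetical substitute, on invented data) shows how many-dimensional snapshots can be squeezed into a few numbers and expanded back into a full virtual view:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for high-dimensional diagnostics: 500 snapshots,
# each with 64 readings that really vary along only 2 hidden directions.
latent = rng.normal(size=(500, 2))
mixing = rng.normal(size=(2, 64))
snapshots = latent @ mixing + rng.normal(0, 0.01, size=(500, 64))

# Compress: an SVD finds a 2-number summary of each 64-number snapshot,
# playing the role of the encoder in the tools described above.
mean = snapshots.mean(axis=0)
_, _, vt = np.linalg.svd(snapshots - mean, full_matrices=False)
codes = (snapshots - mean) @ vt[:2].T  # the much simpler representation

# Expand: the same directions turn each 2-number code back into a full
# 64-reading "virtual view" of the snapshot.
reconstructed = codes @ vt[:2] + mean
error = np.abs(reconstructed - snapshots).max()
```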
In addition to compact accelerators, these methods can also be applied to large-scale accelerators such as FACET-II. At the FACET-II accelerator system at SLAC, the model produced 15 different two-dimensional projections of the six-dimensional beam at five different locations. While even thinking about that scale hurts a human’s brain, the machine learning system needs it. These data allow the system to learn the possible changes over time, as well as how those changes relate to each other and to the underlying physics. Scientists also demonstrated the adaptability of this approach by showing that the same generative diffusion method can be used at the European X-ray Free-Electron Laser (European XFEL). They used the method to create megapixel-resolution virtual views of intense electron beams.
So far, this method seems promising. Researchers have been collecting data on accelerators where operators can take complex measurements of the beam while it is running. They then compare the application’s predictions to those measurements. With this information, they can further train the application.
In the future, human operators of particle accelerators may get some help from their computer counterparts. This assistance will allow scientists to make more and better discoveries than ever before.
If working on improving accelerators is appealing, there are many opportunities to conduct research with support from DOE. For college students, the Science Undergraduate Laboratory Internship and Community College Internship programs offer paid internships at national labs. The Early Career Research Program provides long-term support for researchers early in their careers. The Funding for Accelerated Inclusive Research (FAIR) and Reaching a New Energy Sciences Workforce (RENEW) programs work to support institutions historically underrepresented in the Office of Science portfolio. Lastly, the Small Business Innovation Research and Small Business Technology Transfer programs support small businesses conducting research and development. To see the funding that enabled this project and other opportunities, visit the Accelerator R&D and Production Office’s website.