Jan Kaiser
SUPM050
Beam Trajectory Control with Lattice-Agnostic Reinforcement Learning
See this paper's primary code, THPL029, for the full contribution and additional material.
THPL019
Machine learning for combined scalar and spectral longitudinal phase space reconstruction
Page: 4464
Longitudinal beam diagnostics are a useful aid during the tuning of particle accelerators, but acquiring them usually requires destructive and time-intensive measurements. To provide such diagnostics non-destructively, computational methods allow for the development of virtual diagnostics. Existing Fourier-based methods for longitudinal current reconstruction tend to be slow and struggle to reliably reconstruct phase information. We propose using an artificial neural network trained on data from a start-to-end beam dynamics simulation to combine scalar and spectral information and infer the longitudinal phase space of the electron beam. We demonstrate that our method can reconstruct longitudinal beam diagnostics accurately and provide the reconstructed data at adaptive resolution. Deployed to control rooms today, our method can help human operators reduce tuning times, improve repeatability, and reach novel working points. In the future, ML-based virtual diagnostics will aid the deployment of feedback systems and autonomous tuning methods, working toward the ultimate goal of autonomous particle accelerators.
Paper: THPL019
DOI: 10.18429/JACoW-IPAC2023-THPL019
About: Received: 02 May 2023 — Revised: 09 May 2023 — Accepted: 19 Jun 2023 — Issue date: 26 Sep 2023
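As an illustration of the approach described in the THPL019 abstract, the following is a minimal sketch (not the authors' code) of a neural network that fuses scalar machine parameters with a measured spectrum to predict a longitudinal phase-space image. It assumes PyTorch; the input dimensions, layer sizes, and the pixel-wise MSE training loss are illustrative assumptions, and the training pairs would come from a start-to-end beam dynamics simulation as described in the paper.

```python
# Hypothetical sketch: fuse scalar accelerator settings with a measured
# spectrum and decode a longitudinal phase-space image. All sizes are assumed.
import torch
import torch.nn as nn

class PhaseSpaceReconstructor(nn.Module):
    def __init__(self, n_scalars=10, n_spectrum=240, out_size=64):
        super().__init__()
        self.out_size = out_size
        # Branch for scalar inputs (e.g. magnet and RF settings)
        self.scalar_branch = nn.Sequential(
            nn.Linear(n_scalars, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
        )
        # Branch for spectral inputs (e.g. a measured form-factor spectrum)
        self.spectral_branch = nn.Sequential(
            nn.Linear(n_spectrum, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
        )
        # Decoder maps the fused features to a flattened phase-space image
        self.decoder = nn.Sequential(
            nn.Linear(128, 256), nn.ReLU(),
            nn.Linear(256, out_size * out_size),
        )

    def forward(self, scalars, spectrum):
        features = torch.cat(
            [self.scalar_branch(scalars), self.spectral_branch(spectrum)], dim=-1
        )
        return self.decoder(features).view(-1, self.out_size, self.out_size)

# Training on simulated (scalars, spectrum) -> phase-space pairs, e.g. with a
# pixel-wise MSE loss:
model = PhaseSpaceReconstructor()
scalars = torch.randn(8, 10)      # batch of simulated machine settings
spectrum = torch.randn(8, 240)    # batch of simulated spectra
target = torch.rand(8, 64, 64)    # simulated phase-space images
loss = nn.functional.mse_loss(model(scalars, spectrum), target)
loss.backward()
```

A real implementation would additionally normalize the inputs and handle the adaptive-resolution output mentioned in the abstract; this sketch only shows how the scalar and spectral branches can be combined.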
THPL029
Beam trajectory control with lattice-agnostic reinforcement learning
Page: 4487
Recent work has shown that reinforcement learning (RL) can outperform existing methods on accelerator tuning tasks. However, RL algorithms are difficult and time-consuming to train and currently need to be retrained for every single task. This makes fast deployment in operation difficult and hinders collaborative efforts in this research area. At the same time, modern accelerators often reuse certain structures, such as transport lines consisting of several magnets, within or across facilities, leading to similar tuning tasks. In this contribution, we use different methods, such as domain randomization, to allow an agent trained in simulation to be easily deployed for a group of similar tasks. Preliminary results show that this training method is transferable and allows the RL agent to control the beam trajectory at similar lattice sections of two different real linear accelerators. We expect that future work in this direction will enable faster deployment of learning-based tuning routines and lead towards the ultimate goal of autonomous operation of accelerator systems and the transfer of RL methods to most accelerators.
Paper: THPL029
DOI: 10.18429/JACoW-IPAC2023-THPL029
About: Received: 02 May 2023 — Revised: 09 May 2023 — Accepted: 11 May 2023 — Issue date: 26 Sep 2023
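To make the domain-randomization idea in the THPL029 abstract concrete, here is a minimal sketch (not the authors' implementation) of a Gymnasium-style trajectory-steering environment in which the corrector-to-BPM response and the incoming orbit are re-sampled on every reset, so a single policy must cope with a whole family of similar lattice sections. The class name, parameter ranges, and the simple linear beam model are illustrative assumptions.

```python
# Hypothetical sketch of domain randomization for trajectory steering.
import numpy as np
import gymnasium as gym
from gymnasium import spaces

class RandomizedSteeringEnv(gym.Env):
    """Steer the beam onto the axis at a few BPMs using corrector magnets."""

    def __init__(self, n_correctors=4, n_bpms=4):
        self.n_correctors = n_correctors
        self.n_bpms = n_bpms
        self.action_space = spaces.Box(-1.0, 1.0, shape=(n_correctors,))
        self.observation_space = spaces.Box(
            -np.inf, np.inf, shape=(n_bpms,), dtype=np.float64
        )

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        # Domain randomization: sample a new (hidden) lattice response and
        # incoming orbit for every episode.
        self.response = self.np_random.uniform(
            0.5, 2.0, size=(self.n_bpms, self.n_correctors)
        ) * self.np_random.choice([-1.0, 1.0], size=(self.n_bpms, self.n_correctors))
        self.incoming = self.np_random.uniform(-3.0, 3.0, size=self.n_bpms)
        self.settings = np.zeros(self.n_correctors)
        return self._bpm_readings(), {}

    def _bpm_readings(self):
        # Linear model: BPM positions = incoming orbit + response * settings
        return self.incoming + self.response @ self.settings

    def step(self, action):
        self.settings = np.clip(self.settings + action, -5.0, 5.0)
        readings = self._bpm_readings()
        # Reward is the negative RMS offset; terminate once the orbit is flat.
        rms = float(np.sqrt(np.mean(readings**2)))
        return readings, -rms, rms < 0.1, False, {}
```

A policy trained over many such randomized episodes (for example with PPO from Stable-Baselines3) sees a distribution of lattices during training and can then be applied zero-shot to real sections whose response lies within the randomized range, which is the transfer behaviour the abstract reports across two real linear accelerators.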