Simon Hirlaender (University of Salzburg)
TUPS59
Data-driven model predictive control for automated optimization of injection into the SIS18 synchrotron
Page: 1800
In accelerator labs like GSI/FAIR, automating complex systems is key for maximizing physics experiment time. This study explores the application of data-driven model predictive control (MPC) to refine the multi-turn injection (MTI) process into the SIS18 synchrotron, departing from conventional numerical optimization methods. MPC is distinguished by its reduced number of optimization steps and its superior ability to enforce performance criteria, effectively addressing issues such as delayed outcomes and safety concerns, including septum protection. The study focuses on a highly sample-efficient MPC approach based on Gaussian processes, which lies at the intersection of model-based reinforcement learning and control theory. This approach merges the strengths of both fields, offering a unified solution that yields safe and fast state-based optimization beyond classical reinforcement learning and Bayesian optimization. Our study lays the groundwork for enabling safe online training for the SIS18 MTI problem, showing great potential for applying data-driven control in similar scenarios.
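The idea can be illustrated with a minimal single-step Gaussian-process optimization loop: fit a GP to measured data and pick the next control setting by minimizing the GP mean plus an uncertainty penalty, so poorly modelled (potentially unsafe) regions are avoided. This is a sketch under stated assumptions, not the paper's implementation: the measure function, the control bounds, and the cautious acquisition are placeholders.

    # Minimal sketch of a cautious GP-based optimization loop for injection
    # tuning. `measure`, `bounds`, and the uncertainty-penalized cost are
    # illustrative assumptions, not the SIS18 interface or the paper's method.
    import numpy as np
    from scipy.optimize import minimize
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    def gp_mpc(measure, bounds, n_init=5, n_steps=20, beta=2.0, seed=0):
        """measure(x) returns the negative injection efficiency (minimized)."""
        bounds = np.asarray(bounds, dtype=float)
        dim = len(bounds)
        rng = np.random.default_rng(seed)
        # Probe a few random control settings (e.g. bumper amplitudes) first.
        X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_init, dim))
        y = np.array([measure(x) for x in X])
        kernel = ConstantKernel(1.0) * RBF(length_scale=np.ones(dim))
        for _ in range(n_steps):
            gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

            def cost(x):
                # GP mean plus an uncertainty penalty: avoiding poorly
                # modelled regions is a simple proxy for septum protection.
                mu, sigma = gp.predict(x.reshape(1, -1), return_std=True)
                return float(mu[0] + beta * sigma[0])

            res = minimize(cost, X[np.argmin(y)], bounds=bounds, method="L-BFGS-B")
            X = np.vstack([X, res.x])
            y = np.append(y, measure(res.x))
        return X[np.argmin(y)], float(y.min())

The uncertainty penalty makes the loop deliberately conservative; a full MPC formulation would additionally roll the model forward over a prediction horizon and impose explicit constraints.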
  • S. Appel, N. Madysa
    GSI Helmholtzzentrum für Schwerionenforschung GmbH
  • S. Hirlaender
    University of Salzburg
Paper: TUPS59
DOI: 10.18429/JACoW-IPAC2024-TUPS59
Received: 15 May 2024 — Revised: 20 May 2024 — Accepted: 23 May 2024 — Issue date: 01 Jul 2024
TUPS60
Towards few-shot reinforcement learning in particle accelerator control
Page: 1804
This paper addresses the automation of particle accelerator control through reinforcement learning (RL). It highlights the potential of RL to increase reliability and performance, especially in light of new diagnostic tools and the increasingly complex and variable schedules of certain accelerators. We focus on the physics simulation of the AWAKE electron line, an ideal platform for in-depth studies that cleanly separate the problem itself from the performance of different algorithmic approaches. The main challenges are the lack of realistic simulations and partially observable environments. We show how effective results can be achieved through meta-reinforcement learning, where an agent is trained to quickly adapt to specific real-world scenarios based on prior training in a simulated environment with variable unknowns. When suitable simulations are lacking or too costly, a model-based method using Gaussian processes enables direct training on the real system in only a few shots. The work opens new avenues for implementing control automation in particle accelerators, significantly increasing their efficiency and adaptability.
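The few-shot, model-based approach can be sketched as follows: collect a handful of real transitions, fit a GP dynamics model (s, a) -> s', and plan on the learned model by random shooting, executing only the first action of the best sequence. The env object (with reset()/step()) and the quadratic cost below are hypothetical assumptions, not the AWAKE setup used in the paper.

    # Minimal sketch of few-shot, GP model-based control: learn dynamics
    # from a handful of real interactions, then plan on the model.
    # `env` and the quadratic cost are illustrative assumptions.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    def train_few_shot_controller(env, action_dim, n_real=30,
                                  horizon=5, n_candidates=200, seed=0):
        rng = np.random.default_rng(seed)
        # 1) A few real transitions collected with random actions.
        s, SA, S_next = env.reset(), [], []
        for _ in range(n_real):
            a = rng.uniform(-1.0, 1.0, size=action_dim)
            s2 = env.step(s, a)
            SA.append(np.concatenate([s, a]))
            S_next.append(s2)
            s = s2
        # 2) GP dynamics model: one multi-output regressor over all state dims.
        gp = GaussianProcessRegressor(normalize_y=True).fit(np.array(SA),
                                                            np.array(S_next))
        # 3) Receding-horizon planner: simulate random action sequences on the
        #    model, keep the best, and execute only its first action.
        def act(state):
            best_a, best_cost = None, np.inf
            for _ in range(n_candidates):
                seq = rng.uniform(-1.0, 1.0, size=(horizon, action_dim))
                sim, cost = np.asarray(state, dtype=float), 0.0
                for a in seq:
                    sim = gp.predict(np.concatenate([sim, a]).reshape(1, -1))[0]
                    cost += float(sim @ sim)  # quadratic cost: steer state to 0
                if cost < best_cost:
                    best_a, best_cost = seq[0], cost
            return best_a
        return act

Because the model is trained on only a few dozen real interactions, this style of controller is "few-shot" by construction; the meta-RL route described in the abstract instead amortizes adaptation across many simulated task variants.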
  • S. Hirlaender, L. Lamminger, S. Pochaba
    University of Salzburg
  • A. Santamaria Garcia, C. Xu, L. Scomparin
    Karlsruhe Institute of Technology
  • J. Kaiser
    Deutsches Elektronen-Synchrotron
  • V. Kain
    European Organization for Nuclear Research
Paper: TUPS60
DOI: 10.18429/JACoW-IPAC2024-TUPS60
Received: 15 May 2024 — Revised: 19 May 2024 — Accepted: 20 May 2024 — Issue date: 01 Jul 2024
TUPS62
The Reinforcement Learning for Autonomous Accelerators collaboration
Page: 1812
Reinforcement Learning (RL) is a unique learning paradigm that is particularly well-suited to tackle complex control tasks, can deal with delayed consequences, and can learn from experience without an explicit model of the dynamics of the problem. These properties make RL methods extremely promising for applications in particle accelerators, where the dynamically evolving conditions of both the particle beam and the accelerator systems must be constantly considered. While now is a particularly favorable time to work on RL, thanks to the availability of high-level programming libraries and resources, its implementation in particle accelerators is not trivial and requires further consideration. In this context, the Reinforcement Learning for Autonomous Accelerators (RL4AA) international collaboration was established to consolidate existing knowledge, share experiences and ideas, and collaborate on accelerator-specific solutions that leverage recent advances in RL. Here we report on two collaboration workshops, RL4AA'23 and RL4AA'24, which took place in February 2023 at the Karlsruhe Institute of Technology and in February 2024 at the Paris-Lodron Universität Salzburg.
  • A. Santamaria Garcia, C. Xu, L. Scomparin
    Karlsruhe Institute of Technology
  • A. Eichler, J. Kaiser
    Deutsches Elektronen-Synchrotron
  • M. Schenk
    Ecole Polytechnique Fédérale de Lausanne
  • S. Pochaba, S. Hirlaender
    University of Salzburg
Paper: TUPS62
DOI: 10.18429/JACoW-IPAC2024-TUPS62
Received: 15 May 2024 — Revised: 21 May 2024 — Accepted: 23 May 2024 — Issue date: 01 Jul 2024