Data Analytics
Data Analysis and Rapid Prototyping using Dashboards  
  • R. Kammering
    DESY, Hamburg, Germany
  Today most control system frameworks offer sophisticated GUI toolkits and designers. Despite their capabilities in data visualisation, these toolkits still require additional processing and computation power when it comes to exploring new kinds of data and their hidden analysis potential. Modern dashboard technologies as used in data science offer not only rapid prototyping of the data analysis chain but also complex processing power combined with smart visualisation techniques. This enables a quick turn-around when modifying and adapting the data analysis implementation itself. In modern data analytics, dashboards are considered a key element for enabling investigative data science. For a project analysing the accelerator machine availability, we implemented a web application using the Streamlit framework. The ease of use and the rich set of possibilities further encouraged us to use this technology for other data science related tasks. The seamless interplay between complex preprocessing and the multitude of visualisation possibilities demonstrates that these dashboard technologies are very well suited for explorative, science-related projects.
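As an illustration of the rapid-prototyping workflow described above, a minimal availability dashboard might look as follows. The metric, data values, and widget layout are invented for this sketch and are not the authors' application:

```python
# Hypothetical sketch of a machine-availability dashboard with Streamlit.
# The data source, numbers, and layout are assumptions for illustration.
def availability(delivered_hours, scheduled_hours):
    """Fraction of scheduled beam time actually delivered."""
    if scheduled_hours <= 0:
        return 0.0
    return delivered_hours / scheduled_hours

try:
    import streamlit as st  # dashboard framework used in the paper
except ImportError:         # keep the sketch importable without Streamlit
    st = None

if st is not None:
    st.title("Accelerator availability")
    week = st.slider("Week", 1, 52, 1)  # interactive widget
    # In a real app these numbers would come from the archive or logbook.
    delivered, scheduled = 160.0, 168.0
    st.metric(f"Week {week} availability",
              f"{availability(delivered, scheduled):.1%}")
```

Because the whole analysis chain is plain Python behind the widgets, changing the preprocessing and re-running the dashboard is a matter of seconds, which is the quick turn-around the abstract refers to.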
Poster THPV034 [4.345 MB]
THPV036 Laser Driver State Estimation Oriented Data Governance 942
  • J. Luo, L. Li, Z. Ni, X. Zhou
    CAEP, Sichuan, People’s Republic of China
  Laser driver state estimation is an important task during the operation of a high-power laser facility, using measured data to analyze experiment results and laser driver performance. It involves complicated data processing jobs, including data extraction, data cleaning, data fusion, data visualization and so on. Data governance aims to improve the efficiency and quality of data analysis for laser driver state estimation, focusing on four aspects: data specification, data cleaning, data exchange, and data integration. The achievements of data governance contribute not only to laser driver state estimation but also to other experimental data analysis applications.
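A minimal sketch of the data-cleaning aspect, assuming a hypothetical data specification; the field names and valid ranges below are invented for illustration:

```python
# Illustrative data-governance step: validate records against a data
# specification and null out-of-range values. SPEC is a made-up example.
SPEC = {"energy_J": (0.0, 2000.0), "pulse_ns": (0.1, 30.0)}

def clean(record):
    """Return a cleaned copy; missing or out-of-spec fields become None."""
    out = {}
    for field, (lo, hi) in SPEC.items():
        value = record.get(field)
        out[field] = value if value is not None and lo <= value <= hi else None
    return out
```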
Poster THPV036 [0.477 MB]
Received: 10 October 2021 • Revised: 24 October 2021 • Accepted: 21 November 2021 • Issue date: 22 February 2022
THPV037 The Implementation of the Beam Profile Application for KOMAC Beam Emittance 947
  • J.H. Kim, S.Y. Cho, S. Lee, Y.G. Song, S.P. Yun
    KOMAC, KAERI, Gyeongju, Republic of Korea
  Funding: This work was supported by the Ministry of Science, ICT & Future Planning of the Korean Government.
The Korea Multi-purpose Accelerator Complex (KOMAC) has been operating a 100 MeV proton linear accelerator that accelerates the beam using an ion source, a radio-frequency quadrupole (RFQ), and 11 drift tube linacs (DTLs). The accelerated protons are transported to target rooms under the conditions required by the users. It is important to characterize the beam profile of the proton linac to provide the proper beam conditions to users. We installed eight wire scanners at the KOMAC beam lines to measure the beam emittance, and a beam profile application for the emittance measurement has been implemented using EPICS and Python. This paper describes the implementation of the beam profile application for the KOMAC beam emittance.
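The per-scanner quantity behind an emittance determination is the centroid and RMS width of each wire-scanner profile. A stdlib-only sketch of that reduction; variable names are illustrative, not the KOMAC application's code:

```python
import math

# Illustrative reduction of one wire-scanner profile (position vs. signal)
# to a beam centroid and RMS width. The RMS widths measured at several
# scanners are what a downstream emittance fit would consume.
def profile_stats(positions_mm, signals):
    total = sum(signals)
    mean = sum(p * s for p, s in zip(positions_mm, signals)) / total
    var = sum(s * (p - mean) ** 2
              for p, s in zip(positions_mm, signals)) / total
    return mean, math.sqrt(var)
```

In the real application the `signals` array would be read from EPICS process variables published by the wire-scanner IOC rather than passed in directly.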
Poster THPV037 [1.722 MB]
Received: 08 October 2021 • Revised: 21 October 2021 • Accepted: 21 November 2021 • Issue date: 27 February 2022
THPV038 Plug-in-Based Ptychography & CDI Reconstruction User Interface Development 950
  • S.W. Kim, K.H. Ku, W.W. Lee
    PAL, Pohang, Republic of Korea
  Synchrotron beamlines cover a wide range of fields, and accordingly various open-source and commercial software packages are used for data analysis. Inevitably, the user interfaces differ between programs and share little in common, so users had to spend considerable effort to perform a new experimental analysis and learn each program anew. To overcome these shortcomings, a common user interface was maintained using the Xi-cam framework, and the different analysis algorithms for each field were introduced as plug-ins. In this presentation, the user interfaces designed for ptychography and CDI reconstruction will be introduced.
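The plug-in approach can be sketched with a generic registry pattern; this is not Xi-cam's actual API, only an illustration of dispatching field-specific algorithms behind one shared interface:

```python
# Generic plug-in registry: one user interface, interchangeable analysis
# backends. Class and method names are invented for this sketch.
PLUGINS = {}

def register(name):
    """Class decorator that records an analysis plug-in under `name`."""
    def wrap(cls):
        PLUGINS[name] = cls
        return cls
    return wrap

@register("ptychography")
class PtychoReconstruction:
    def run(self, frames):
        return f"ptycho reconstruction of {len(frames)} frames"

@register("cdi")
class CDIReconstruction:
    def run(self, frames):
        return f"CDI reconstruction of {len(frames)} frames"

def reconstruct(method, frames):
    """The shared UI calls this single entry point for every technique."""
    return PLUGINS[method]().run(frames)
```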
Poster THPV038 [1.333 MB]
Received: 08 October 2021 • Revised: 25 October 2021 • Accepted: 21 November 2021 • Issue date: 12 December 2021
Machine learning applications for accelerator failure prevention at MAX IV  
  • J.E. Petersson, B. Meirose
    MAX IV Laboratory, Lund University, Lund, Sweden
  Machine learning (ML) applications have received renewed interest in recent years. The reason behind this lies in advances in ML methods, data availability and increased computational power. Application of ML techniques to diagnose or even prevent accelerator failures is an area of particular interest not least because of the ample data that is routinely gathered in all modern accelerators to conduct reliability studies. In this contribution we present preliminary results of the application of unsupervised learning to diagnose and decrease accelerator failure rates at MAX IV, focusing on systems and methods that presented the best results.  
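As a toy illustration of the labeling-free principle (not the methods evaluated at MAX IV), a simple z-score outlier flagger on a stream of readings:

```python
import statistics

# Unsupervised baseline: flag readings that sit far from the bulk of the
# data, with no labeled failure examples needed. Threshold is illustrative.
def flag_anomalies(readings, threshold=3.0):
    mu = statistics.fmean(readings)
    sigma = statistics.pstdev(readings)
    if sigma == 0:
        return []  # constant signal: nothing stands out
    return [i for i, x in enumerate(readings)
            if abs(x - mu) / sigma > threshold]
```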
THPV040 New Machine Learning Model Application for the Automatic LHC Collimator Beam-Based Alignment 953
  • G. Azzopardi
    CERN, Geneva, Switzerland
  • G. Ricci
    Sapienza University of Rome, Rome, Italy
  A collimation system is installed in the Large Hadron Collider (LHC) to protect its sensitive equipment from unavoidable beam losses. An alignment procedure determines the settings of each collimator by moving the collimator jaws towards the beam until a characteristic loss pattern, consisting of a sharp rise followed by a slow decay, is observed in downstream beam loss monitors. This indicates that the collimator jaw intercepted the reference beam halo and is thus aligned to the beam. The latest alignment software, introduced in 2018, relies on supervised machine learning (ML) to detect such spike patterns in real time*. This enables the automatic alignment of the collimators with a significant reduction in the alignment time**. This paper analyses the first-use performance of this new software, focusing on solutions to the identified bottleneck caused by waiting a fixed duration of time when detecting spikes. It is proposed to replace the supervised ML model with a Long Short-Term Memory model able to detect spikes in time windows of varying lengths, waiting for a variable duration of time determined by the spike itself. This will allow the automatic alignment to be sped up further.
*G. Azzopardi et al., "Automatic spike detection in beam loss signals for LHC collimator alignment", NIMA 2019.
**G. Azzopardi et al., "Operational Results of LHC collimator alignment using ML", IPAC’19.
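The loss pattern the software looks for can be illustrated with a hard-coded heuristic; the threshold and window handling below are invented, whereas the paper's models (supervised ML, then an LSTM) learn the spike shape from data:

```python
# Toy detector for the collimator-alignment signature in a beam-loss
# window: a sharp rise followed by a slow, monotonic decay.
def is_spike(window, rise_factor=5.0):
    """True if the window jumps sharply and then decays monotonically."""
    peak = max(window)
    i = window.index(peak)
    if i == 0 or peak < rise_factor * window[i - 1]:
        return False  # no sharp rise before the peak
    tail = window[i:]
    return all(a >= b for a, b in zip(tail, tail[1:]))  # slow decay
```

A fixed-length heuristic like this is exactly what forces the fixed waiting time the paper identifies as a bottleneck; an LSTM over variable-length windows removes that constraint.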
Poster THPV040 [0.894 MB]
Received: 08 October 2021 • Accepted: 21 November 2021 • Issue date: 10 December 2021
THPV041 Innovative Methodology Dedicated to the CERN LHC Cryogenic Valves Based on Modern Algorithm for Fault Detection and Predictive Diagnostics 959
  • M. Pezzetti, A. Amodio, Y. Donon, L. Iodice
    CERN, Geneva, Switzerland
  • P. Arpaia
    Naples University Federico II, Science and Technology Pole, Napoli, Italy
  • F. Gargiulo
    University of Naples Federico II, Naples, Italy
  The European Organization for Nuclear Research (CERN) cryogenic infrastructure comprises a large amount of equipment, among which are the cryogenic valves widely used in the Large Hadron Collider (LHC) cryogenic facility. At present, no diagnostic solution is available that can be integrated into the process control systems and identify leak failures in valve bellows. The authors' goal has been the development of a system that detects helium-leaking valves during normal operation using available data extracted from the control system. The design constraints have driven the development towards a solution integrated into the monitoring systems in use, requiring no manual intervention. The methodology presented in this article is based on the extraction of distinctive features (analyzing the data in the time and frequency domains), which are then exploited in a machine learning phase. The aim is to identify a list of candidate valves with a high probability of helium leakage. The proposed methodology, still at a very early stage, aims, as the data set evolves through the iterative approach, toward targeted maintenance of the cryogenic valves.
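The feature-extraction phase might be sketched as follows, with an illustrative feature set (not the paper's) combining time-domain statistics and the dominant DFT frequency of a valve signal:

```python
import cmath
import statistics

# Illustrative per-signal feature extraction: time-domain statistics plus
# the dominant frequency from a naive DFT. Feature names are assumptions.
def extract_features(samples, rate_hz):
    n = len(samples)
    # Magnitude spectrum, skipping the DC bin (k = 0).
    spectrum = [abs(sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                        for t in range(n)))
                for k in range(1, n // 2)]
    k_dom = spectrum.index(max(spectrum)) + 1
    return {
        "mean": statistics.fmean(samples),
        "std": statistics.pstdev(samples),
        "dominant_hz": k_dom * rate_hz / n,
    }
```

Feature vectors like this, one per valve, are what a subsequent machine-learning stage would rank to produce the list of leak candidates.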
Poster THPV041 [1.120 MB]
Received: 06 October 2021 • Revised: 26 October 2021 • Accepted: 22 December 2021 • Issue date: 02 March 2022
THPV042 Evolution of the CERN Beam Instrumentation Offline Analysis Framework (OAF) 965
  • A. Samantas, M. Gonzalez-Berges, J-J. Gras, S. Zanzottera
    CERN, Geneva 23, Switzerland
  The CERN accelerators require a large number of instruments measuring different beam parameters such as position, losses, and current. The instruments' associated electronics and software also produce information about their status. All these data are stored in a database for later analysis. The Beam Instrumentation group developed the Offline Analysis Framework some years ago* to regularly and systematically analyze these data. The framework has been successfully used for nearly 100 different analyses that ran regularly by the end of LHC Run 2. It is currently being updated for Run 3 with modern and efficient tools to improve its usability and data analysis power. In particular, the architecture has been reviewed towards a modular design to facilitate the maintenance and future evolution of the tool. A new web-based application is being developed to give users easier access both to the online configuration and to the results. This paper describes these evolutions and outlines possible lines of work for further improvements.
* "A Framework for Off-Line Verification of Beam Instrumentation Systems at CERN", S. Jackson et al., ICALEPCS 2013 San Francisco
Poster THPV042 [1.251 MB]
Received: 09 October 2021 • Revised: 14 October 2021 • Accepted: 21 November 2021 • Issue date: 13 December 2021
THPV043 Using AI for Management of Field Emission in SRF Linacs 970
  • A. Carpenter, P. Degtiarenko, R. Suleiman, C. Tennant, D.L. Turner, L.S. Vidyaratne
    JLab, Newport News, Virginia, USA
  • K.M. Iftekharuddin, M. Rahman
    ODU, Norfolk, Virginia, USA
  Funding: This work supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics under Contract No. DE-AC05-06OR23177.
Field emission control, mitigation, and reduction is critical for reliable operation of high-gradient superconducting radio-frequency (SRF) accelerators. With the SRF cavities at high gradients, field emission of electrons from the cavity walls can occur, impacting the operational gradient, the radiological environment via activated components, and the reliability of CEBAF’s two linacs. A new effort has started to minimize field emission in the CEBAF linacs by re-distributing cavity gradients. To measure radiation levels, newly designed neutron and gamma radiation dose rate monitors have been installed in both linacs. Artificial intelligence (AI) techniques will be used to identify cavities with high levels of field emission based on control system data such as radiation levels, cryogenic readbacks, and vacuum loads. The gradients on the most offending cavities will be reduced and compensated for by increasing the gradients on the least offending cavities. Training data will be collected during this year’s operational program and an initial implementation of the AI models will be deployed. Preliminary results and future plans are presented.
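The re-distribution idea can be sketched with a toy model that shifts gradient from the worst emitters onto the quietest cavities while conserving the total; the ranking criterion, linear model, and parameters are all assumptions for illustration:

```python
# Toy gradient re-distribution: lower the gradient on the n_worst cavities
# with the highest radiation signal and recover the lost energy gain on
# the quietest ones, keeping the total gradient (hence beam energy, in
# this simplified linear model) constant.
def redistribute(gradients, radiation, n_worst=1, delta=1.0):
    """Shift `delta` MV/m from the n_worst emitters to the quietest cavities."""
    order = sorted(range(len(gradients)), key=lambda i: radiation[i])
    worst, best = order[-n_worst:], order[:n_worst]
    new = list(gradients)
    for i in worst:
        new[i] -= delta
    for i in best:
        new[i] += delta
    return new
```

In the actual effort, the ranking would come from AI models trained on radiation, cryogenic, and vacuum data rather than a single sorted signal.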
Poster THPV043 [1.857 MB]
Received: 08 October 2021 • Revised: 21 October 2021 • Accepted: 21 November 2021 • Issue date: 14 December 2021
FRBL01 Machine Learning for Anomaly Detection in Continuous Signals 1032
  • A.A. Saoulis, K.R.L. Baker, R.A. Burridge, S. Lilley, M. Romanovschi
    STFC/RAL/ISIS, Chilton, Didcot, Oxon, United Kingdom
  Funding: UKRI / STFC
High availability at accelerators such as the ISIS Neutron and Muon Source is a key operational goal, requiring rapid detection and response to anomalies within the accelerator’s subsystems. While monitoring systems are in place for this purpose, they often require human expertise and intervention to operate effectively or are limited to predefined classes of anomaly. Machine learning (ML) has emerged as a valuable tool for automated anomaly detection in time series signal data. An ML pipeline suitable for anomaly detection in continuous signals is described, from labeling data for supervised ML algorithms to model selection and evaluation. These techniques are applied to detecting periods of temperature instability in the liquid methane moderator on ISIS Target Station 1. We demonstrate how this ML pipeline can be used to improve the speed and accuracy of detection of these anomalies.
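The labeling step of such a pipeline can be sketched as slicing the continuous signal into fixed windows and marking those that overlap a known anomalous interval; the window size and intervals below are examples, not the ISIS configuration:

```python
# Illustrative labeling for supervised anomaly detection on a continuous
# signal: non-overlapping windows, labeled 1 if they intersect any known
# anomalous interval (sample-index pairs), else 0.
def label_windows(n_samples, window, anomalous_intervals):
    labels = []
    for start in range(0, n_samples - window + 1, window):
        end = start + window
        hit = any(start < b and a < end for a, b in anomalous_intervals)
        labels.append(1 if hit else 0)
    return labels
```

The labeled windows would then feed model selection and evaluation, the later stages of the pipeline the abstract describes.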
Slides FRBL01 [12.611 MB]
Received: 08 October 2021 • Revised: 27 October 2021 • Accepted: 21 December 2021 • Issue date: 24 January 2022
FRBL03 A Literature Review on the Efforts Made for Employing Machine Learning in Synchrotrons 1039
  • A. Khaleghi, Z. Aghaei, H. Haedar, I. Iman, K. Mahmoudi
    IKIU, Qazvin, Iran
  • F. Ahmad Mehrabi, M. Akbari, M. Jafarzadeh, A. Khaleghi, P. Navidpour
    ILSF, Tehran, Iran
  The use of machine learning (ML) in various contexts is increasing due to advantages such as pervasive automation, trend and pattern identification, error reduction, and continuous improvement. Even non-computer experts are learning simple programming languages like Python to apply ML models to their data. Despite the growing trend towards ML, to our knowledge no study has reviewed the efforts made to use ML in synchrotrons. We therefore examine these efforts, which pursue benefits such as stabilizing the photon beam without manual calibration, achievable by reducing the unwanted fluctuations in the electron beam widths that would otherwise obscure measurements with experimental noise. The challenges of using ML in synchrotrons and a short synthesis of the reviewed articles are also provided. The paper can help related experts gain a general familiarity with ML applications in synchrotrons and encourage the use of ML in various synchrotron practices. Future research will aim to provide a more comprehensive synthesis with more details on how to use ML in synchrotrons.
Slides FRBL03 [1.681 MB]
Received: 10 October 2021 • Revised: 20 October 2021 • Accepted: 20 November 2021 • Issue date: 12 March 2022
Real-Time Azimuthal Integration of X-Ray Scattering Data on FPGAs  
  • Z. Matej, A. Barczyk, A. Salnikov, K. Skovhede
    MAX IV Laboratory, Lund University, Lund, Sweden
  • C. Johnsen, K. Skovhede, B. Vinter
    NBI, København, Denmark
  • C. Weninger
    Max Planck Institute for the Physics of Complex Systems, Dresden, Germany
  Funding: eSSENCE@LU 5:10 is kindly acknowledged for supporting this work.
Azimuthal integration (AZINT) is a procedure for reducing a 2D detector image to a 1D histogram. AZINT is used extensively in photon science experiments, in particular in small-angle scattering and powder diffraction. It improves the signal-to-noise ratio, and data volumes are reduced by a factor of 1000. The underlying procedure, i.e. bin-counting, has other applications as well. The potential of FPGAs for data analysis originates from recent progress in FPGA software design, with complexity matching the scientific requirements. We implemented AZINT on FPGAs using OpenCL and synchronous message exchange (SME). It is demonstrated that AZINT can process 600 Gb/s streams, i.e. about 20–40 Gpixel/s, on a single FPGA. FPGAs are usually more energy-efficient than GPUs; they are flexible, so they can fit a specific problem and outperform GPUs in relevant applications, in particular AZINT here. Besides high throughput, FPGAs allow data processing with well-defined and low latencies for real-time experiments. The radiation tolerance of FPGAs brings further synergies: it makes them ideal components for extra-terrestrial scientific instruments (e.g. Mars rovers) or detectors on spacecraft and satellites.
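The bin-counting at the heart of AZINT amounts to assigning each pixel a radial bin and accumulating its intensity. A minimal, sequential reference implementation for a flat detector with the beam centre at (cx, cy); the FPGA pipeline performs this same reduction massively in parallel:

```python
import math

# Reference azimuthal integration: reduce a 2D image (list of rows) to a
# 1D histogram of mean intensity per radial bin around centre (cx, cy).
def azimuthal_integrate(image, cx, cy, n_bins, r_max):
    hist = [0.0] * n_bins   # accumulated intensity per bin
    counts = [0] * n_bins   # pixels contributing to each bin
    for y, row in enumerate(image):
        for x, intensity in enumerate(row):
            r = math.hypot(x - cx, y - cy)
            b = int(r / r_max * n_bins)
            if b < n_bins:
                hist[b] += intensity
                counts[b] += 1
    # Normalise each bin by its pixel count (mean intensity vs. radius).
    return [h / c if c else 0.0 for h, c in zip(hist, counts)]
```

For an 8x8 uniform image every populated bin averages to the same value, which is a quick sanity check on any parallel implementation of the same reduction.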
Slides FRBL04 [6.308 MB]
FRBL05 RemoteVis: An Efficient Library for Remote Visualization of Large Volumes Using NVIDIA Index 1047
  • T.V. Spina, D. Alnajjar, M.L. Bernardi, F.S. Furusato, E.X. Miqueles, A.Z. Peixinho
    LNLS, Campinas, Brazil
  • A. Kuhn, M. Nienhaus
    NVIDIA, Santa Clara, USA
  Funding: We would like to thank the Brazilian Ministry of Science, Technology, and Innovation for the financial support.
Advancements in X-ray detector technology are increasing the amount of volumetric data available for material analysis in synchrotron light sources. Such developments are driving the creation of novel solutions to visualize large datasets both during and after image acquisition. Towards this end, we have devised a library called RemoteVis to visualize large volumes remotely in HPC nodes, using NVIDIA IndeX as the rendering backend. RemoteVis relies on RDMA-based data transfer to move large volumes from local HPC servers, possibly connected to X-ray detectors, to remote dedicated nodes containing multiple GPUs for distributed volume rendering. RemoteVis then injects the transferred data into IndeX for rendering. IndeX is a scalable software capable of using multiple nodes and GPUs to render large volumes in full resolution. As such, we have coupled RemoteVis with slurm to dynamically schedule one or multiple HPC nodes to render any given dataset. RemoteVis was written in C/C++ and Python, providing an efficient API that requires only two functions to 1) start remote IndeX instances and 2) render regular volumes and point-cloud (diffraction) data on the web browser/Jupyter client.
Slides FRBL05 [12.680 MB]
Received: 10 October 2021 • Revised: 28 October 2021 • Accepted: 20 November 2021 • Issue date: 01 March 2022