Keyword: network
Paper | Title | Other Keywords | Page
MOBL03 Machine Learning Platform: Deploying and Managing Models in the CERN Control System controls, experiment, operation, embedded 50
 
  • J.-B. de Martel, R. Gorbonosov, N. Madysa
    CERN, Geneva, Switzerland
 
  Recent advances make machine learning (ML) a powerful tool to cope with the inherent complexity of accelerators, the large number of degrees of freedom and continuously drifting machine characteristics. A diverse set of ML ecosystems, frameworks and tools are already being used at CERN for a variety of use cases such as optimization, anomaly detection and forecasting. We have adopted a unified approach to model storage, versioning and deployment which accommodates this diversity, and we apply software engineering best practices to achieve the reproducibility needed in the mission-critical context of particle accelerator controls. This paper describes the CERN Machine Learning Platform, our central platform for storing, versioning and deploying ML models in the CERN Control Center. We present a unified solution which allows users to create, update and deploy models with minimal effort, without constraining their workflow or restricting their choice of tools. It also provides tooling to automate seamless model updates as the machine characteristics evolve. Moreover, the system allows model developers to focus on domain-specific development by abstracting infrastructural concerns.
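  The registry-style workflow such a platform implies can be illustrated with a short sketch. The sketch below is hypothetical: the class and method names are invented for illustration and are not the actual CERN Machine Learning Platform API. It only shows how framework-agnostic serialization and immutable version numbering give the reproducibility the abstract emphasises.

```python
# Hypothetical model-registry sketch; not the real CERN ML Platform API.
import pickle
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ModelVersion:
    name: str
    version: int
    artifact: bytes                      # serialized model, framework-agnostic
    metadata: dict = field(default_factory=dict)
    created: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


class ModelRegistry:
    """Keeps every version of every model so deployments stay reproducible."""

    def __init__(self):
        self._store: dict[str, list[ModelVersion]] = {}

    def register(self, name: str, model, **metadata) -> ModelVersion:
        versions = self._store.setdefault(name, [])
        mv = ModelVersion(name, len(versions) + 1, pickle.dumps(model), metadata)
        versions.append(mv)              # old versions are never overwritten
        return mv

    def load(self, name: str, version: int | None = None):
        versions = self._store[name]
        mv = versions[-1] if version is None else versions[version - 1]
        return pickle.loads(mv.artifact)


registry = ModelRegistry()
registry.register("orbit-corrector", {"weights": [0.1, 0.2]}, framework="sklearn")
model = registry.load("orbit-corrector")  # latest version by default
```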
Slides: MOBL03 [0.687 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-MOBL03
Received: 07 October 2021 · Accepted: 16 November 2021 · Issue date: 07 February 2022
 
MOPV001 Status of the SARAF-Phase2 Control System controls, EPICS, cryomodule, LLRF 93
 
  • F. Gougnaud, P. Bargueden, G. Desmarchelier, A. Gaget, P. Guiho, A. Lotode, Y. Mariette, V. Nadot, N. Solenne
    CEA-DRF-IRFU, France
  • D. Darde, G. Ferrand, F. Gohier, T.J. Joannem, G. Monnereau, V. Silva
    CEA-IRFU, Gif-sur-Yvette, France
  • H. Isakov, A. Perry, E. Reinfeld, I. Shmuely, Y. Solomon, N. Tamim
    Soreq NRC, Yavne, Israel
  • T. Zchut
    CEA LIST, Palaiseau, France
 
  SNRC and CEA collaborate on the upgrade of the SARAF accelerator to 5 mA CW 40 MeV deuteron and proton beams, and work closely together on the control system. CEA is in charge of the control system (including cabinets) design and implementation for the Injector (upgrade), the MEBT and the superconducting linac, which is made up of 4 cryomodules hosting HWR cavities and solenoid packages. This paper gives a detailed presentation of the control system architecture from the hardware and EPICS software points of view. The hardware standardization relies on MTCA.4, used for LLRF, BPM, BLM and FC controls, and on the Siemens PLC 1500 series for vacuum, cryogenics and interlocks. The CEA IRFU EPICS Environment (IEE) platform is used for the whole accelerator. IEE is based on virtual machines and our MTCA.4 solutions, and enables us to have homogeneous EPICS modules. It also provides a development and production workflow. SNRC has integrated IEE into a new IT network based on advanced technology. Commissioning is planned to start in late summer 2021.
Poster: MOPV001 [1.787 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-MOPV001
Received: 09 October 2021 · Revised: 20 October 2021 · Accepted: 03 November 2021 · Issue date: 11 March 2022
 
MOPV003 Laser Megajoule Facility Operating Software Overview laser, software, controls, experiment 104
 
  • J-P. Airiau, V. Denis, H. Durandeau, P. Fourtillan, N. Loustalet, P. Torrent
    CEA, LE BARP cedex, France
 
  The Laser MegaJoule (LMJ), the French 176-beam laser facility, is located at the CEA CESTA Laboratory near Bordeaux (France). It is designed to deliver about 1.4 MJ of energy on targets for high-energy-density physics experiments, including fusion experiments. The first bundle of 8 beams was commissioned in October 2014. By the end of 2021, ten bundles of 8 beams are expected to be fully operational. Operating software tools are used to automate, secure and optimize operations at the LMJ facility. They contribute to the smooth running of the experiment process (from setup to results). They are integrated into the maintenance process (from the supply chain to asset management). They are linked together in order to exchange data, and they interact with the command-control system. This talk gives an overview of the existing operating software and the lessons learned. It concludes with the upcoming work to automate the lifecycle management of elements included in the final optics assembly (replacement, repair, etc.).
Poster: MOPV003 [0.656 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-MOPV003
Received: 08 October 2021 · Revised: 22 October 2021 · Accepted: 03 November 2021 · Issue date: 13 February 2022
 
MOPV010 Working under Pandemic Conditions: Contact Tracing Meets Technology software, database, site, distributed 121
 
  • E. Blanco Viñuela, B. Copy, S. Danzeca, Ch. Delamare, R. Losito, A. Masi, E. Matli, T. Previero, R. Sierra
    CERN, Geneva, Switzerland
 
  Covid-19 has dramatically transformed our working practices, with a big shift to a teleworking model for many people. There are, however, many essential activities requiring personnel on site. In order to minimise the risks for its personnel, CERN decided to take every measure possible, including internal contact tracing by the CERN medical service. This initially involved manual procedures which relied on people’s ability to remember past encounters. To improve this situation and minimise the number of employees who would need to be quarantined, CERN approved the design of a specific device: the Proximeter. The project goal was to design a wearable device, built in partnership* with industry, fulfilling the contact-tracing needs of the medical service. The Proximeter records other devices in close proximity and reports the encounters to a cloud-based system. The service came into operation in early 2021, and 8000 devices were distributed to personnel working on the CERN site. This publication reports on the service offered, emphasising the overall workflow of the project under exceptional conditions and the implications data privacy imposed on the design of the software application.
* Terabee. https://www.terabee.com
 
Poster: MOPV010 [3.489 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-MOPV010
Received: 11 October 2021 · Revised: 26 October 2021 · Accepted: 03 November 2021 · Issue date: 18 December 2021
 
MOPV011 The Inclusion of White Rabbit into the Global Industry Standard IEEE 1588 hardware, operation, framework, electron 126
 
  • M.M. Lipiński
    CERN, Geneva, Switzerland
 
  White Rabbit (WR) is the first CERN-born technology that has been incorporated into a global industry standard governed by the Institute of Electrical and Electronics Engineers (IEEE), the IEEE 1588 Precision Time Protocol (PTP). This showcase of technology transfer has been beneficial both to the standard and to WR technology. For the standard, it has allowed the PTP synchronisation performance to be increased by several orders of magnitude, opening new markets and opportunities for PTP implementers. For WR technology, the review during its standardisation and its adoption by industry make it future-proof and drive down the prices of the WR hardware that is widely used in scientific installations. This article provides an insight into the 7-year-long WR standardisation process, describing its motivation, benefits, costs and the final result. After a short introduction to WR, it describes the process of reviewing, generalising and translating it into an IEEE standard. Finally, it retrospectively evaluates this process in terms of effort and benefits, concluding that basing new technologies on standards and extending them bears short-term costs that bring long-term benefits.
Poster: MOPV011 [1.283 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-MOPV011
Received: 08 October 2021 · Accepted: 03 November 2021 · Issue date: 15 February 2022
 
MOPV027 The Evolution of the DOOCS C++ Code Base controls, MMI, factory, interface 188
 
  • L. Fröhlich, A. Aghababyan, S. Grunewald, O. Hensler, U. Jastrow, R. Kammering, H. Keller, V. Kocharyan, M. Mommertz, F. Peters, A. Petrosyan, G. Petrosyan, L.P. Petrosyan, V. Petrosyan, K. Rehlich, V. Rybnikov, G. Schlesselmann, J. Wilgen, T. Wilksen
    DESY, Hamburg, Germany
 
  This contribution traces the development of DESY’s control system DOOCS from its origins in 1992 to its current state as the backbone of the European XFEL and FLASH accelerators and of the future PETRA IV light source. Some details of the continual modernization and refactoring efforts on the 1.5-million-line C++ codebase are highlighted.
Poster: MOPV027 [0.912 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-MOPV027
Received: 14 October 2021 · Accepted: 21 December 2021 · Issue date: 07 March 2022
 
MOPV031 The Deployment Technology of EPICS Application Software Based on Docker EPICS, controls, software, interface 197
 
  • R. Wang, Y.H. Guo, B.J. Wang, N. Xie
    IMP/CAS, Lanzhou, People’s Republic of China
 
  StreamDevice, a general-purpose EPICS driver for string-based device interfaces, has been widely used in the control of devices with network and serial ports in the CAFe equipment: for example, the remote control of magnet power supplies, vacuum gauges and various vacuum valves and pumps, as well as the readout and control of the Gauss meter equipment used in magnetic field measurement. During on-site software development, we found that deployment of StreamDevice triggers various errors related to its dependence on the software environment and library functions, caused by differences in operating system environments and EPICS tool versions. This makes StreamDevice deployment very time-consuming and labor-intensive. To ensure that StreamDevice works in a unified environment and can be deployed and migrated efficiently, Docker container technology is used to encapsulate the software together with its application environment. Images will be uploaded to a private Aliyun registry so that software developers can easily download and use them.
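  The build-push-run cycle this implies can be sketched with the Docker SDK for Python. A minimal sketch, assuming the docker package is installed and a Docker daemon is running; the image tag, registry URL, container name and start command are hypothetical placeholders, not the actual CAFe configuration.

```python
# Containerized IOC deployment sketch using the Docker SDK for Python.
import docker

client = docker.from_env()

# Build an image that bundles EPICS base, StreamDevice and its protocol
# files so every deployment sees the same environment.
image, _ = client.images.build(path="./ioc-streamdevice",
                               tag="registry.example.com/epics/streamdevice-ioc:1.0")

# Push to a private registry (an Aliyun registry in the paper's setup).
client.images.push("registry.example.com/epics/streamdevice-ioc", tag="1.0")

# Run the IOC; host networking keeps Channel Access broadcasts working.
container = client.containers.run(
    "registry.example.com/epics/streamdevice-ioc:1.0",
    command="./st.cmd",
    network_mode="host",
    detach=True,
    name="magnet-psu-ioc",
)
print(container.status)
```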
Poster: MOPV031 [0.405 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-MOPV031
Received: 09 October 2021 · Revised: 17 October 2021 · Accepted: 06 January 2022 · Issue date: 11 February 2022
 
MOPV035 Development of Alarm and Monitoring System Using Smartphone real-time, EPICS, status, monitoring 214
 
  • W.S. Cho
    PAL, Pohang, Republic of Korea
 
  In order to diagnose device problems remotely, we aimed to develop a new alarm system. The main functions of the alarm system are real-time monitoring of EPICS PV data, data storage, and dedicated recording of data when an alarm occurs. In addition, an alarm is transmitted in real time through a smartphone app to inform the machine engineers of PLS-II of the situation. This system uses InfluxDB to store data. A new program written in Java was developed for data acquisition and analysis and for detecting beam dump conditions. Furthermore, Vue.js and Node.js are used to develop a web-based user interface together with Android- and iOS-based smartphone applications. Using this system, we can analyse the cause of an alarm and check the data in real time when it occurs. In this paper, we introduce the design of the alarm system and the transmission of alarms to the application.
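  The monitoring core of such a system can be sketched in a few lines. A minimal sketch, assuming the pyepics and influxdb-client packages; the PV name, alarm limit, bucket and the push-notification stub are hypothetical, not the actual PLS-II configuration.

```python
# PV monitoring sketch: archive every update, raise an alarm on threshold.
import epics
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

influx = InfluxDBClient(url="http://localhost:8086", token="TOKEN", org="pal")
write_api = influx.write_api(write_options=SYNCHRONOUS)

ALARM_LIMITS = {"SR:BeamCurrent": 10.0}   # alarm if value drops below limit


def notify_smartphone(message: str):
    """Stand-in for the real-time push to the smartphone app."""
    print("ALARM:", message)


def on_update(pvname=None, value=None, timestamp=None, **kw):
    # Archive every update so the pre-alarm history can be inspected later.
    write_api.write(bucket="pv-archive",
                    record=Point("pv").tag("name", pvname).field("value", float(value)))
    limit = ALARM_LIMITS.get(pvname)
    if limit is not None and value < limit:
        notify_smartphone(f"{pvname} = {value} below {limit}")


for pv in ALARM_LIMITS:
    epics.camonitor(pv, callback=on_update)
```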
Poster: MOPV035 [0.430 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-MOPV035
Received: 05 October 2021 · Revised: 22 October 2021 · Accepted: 04 November 2021 · Issue date: 25 January 2022
 
MOPV040 Introducing Python as a Supported Language for Accelerator Controls at CERN controls, operation, software, GUI 236
 
  • P.J. Elson, C. Baldi, I. Sinkarenko
    CERN, Geneva, Switzerland
 
  In 2019, Python was adopted as an officially supported language for interacting with CERN’s accelerator controls. In practice, this change of status was as much pragmatic as it was progressive - Python has been available as part of the underlying operating system for over a decade and unofficial Python interfaces to controls have existed since at least 2015. So one might ask: what really changed when Python’s adoption became official? This paper will discuss what it takes to officially support Python in a controls environment and will focus on the cultural and technological shifts involved in running Python operationally. It will highlight some of the infrastructure that has been put in place at CERN to facilitate a stable and user-friendly Python platform, as well as some of the key decisions that have led to Python thriving in CERN’s accelerator controls domain. Given its general nature, it is hoped that the approach presented in this paper can serve as a reference for other scientific organisations from a broad range of fields who are considering the adoption of Python in an operational context.  
Poster: MOPV040 [2.133 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-MOPV040
Received: 09 October 2021 · Revised: 15 October 2021 · Accepted: 04 November 2021 · Issue date: 12 January 2022
 
MOPV041 Modernisation of the Toolchain and Continuous Integration of Front-End Computer Software at CERN software, framework, controls, interface 242
 
  • P. Mantion, S. Deghaye, L. Fiszer, F. Irannejad, J. Lauener, M. Voelkle
    CERN, Geneva, Switzerland
 
  Building C++ software for low-level computers requires carefully tested frameworks and libraries. The major difficulties in building C++ software are to ensure that the artifacts are compatible with the target system (OS, Application Binary Interface) and that transitive library dependencies are compatible when linked together. Developers and maintainers must therefore be provided with efficient tooling for frictionless workflows: standardisation of the project description and build, automatic CI, and a flexible development environment. The open-source community, with services like GitHub and GitLab, has set high expectations with regard to developer user experience. This paper describes how we leveraged Conan and CMake to standardise the build of C++ projects, avoid the "dependency hell" and provide an easy way to distribute C++ packages. A CI system orchestrated by Jenkins and based on automatic job definition and in-source, versioned configuration has been implemented. The developer experience is further enhanced by wrapping the common flows (compile, test, release) into a command-line tool, which also helps the transition from the legacy build system (legacy makefiles, SVN).
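  As an illustration of such a standardised project description, here is a minimal Conan 2 recipe sketch; the package name, version and requirements are invented for illustration and do not reflect CERN's actual front-end recipes.

```python
# Minimal conanfile.py sketch: one declarative description drives the
# toolchain, dependency resolution and the CMake build.
from conan import ConanFile
from conan.tools.cmake import CMake, cmake_layout


class FecLibRecipe(ConanFile):
    name = "fec-lib"
    version = "1.0.0"
    settings = "os", "compiler", "build_type", "arch"  # pins the target ABI
    generators = "CMakeToolchain", "CMakeDeps"
    # Transitive dependencies are resolved by Conan, avoiding "dependency hell".
    requires = "boost/1.83.0", "fmt/10.2.1"
    exports_sources = "CMakeLists.txt", "src/*", "include/*"

    def layout(self):
        cmake_layout(self)

    def build(self):
        cmake = CMake(self)
        cmake.configure()
        cmake.build()

    def package(self):
        cmake = CMake(self)
        cmake.install()
```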
Poster: MOPV041 [1.227 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-MOPV041
Received: 07 October 2021 · Accepted: 14 November 2021 · Issue date: 10 February 2022
 
MOPV044 Lessons Learned Moving from Pharlap to Linux RT Linux, timing, hardware, Windows 257
 
  • C. Charrondière, O.O. Andreassen, D. Sierra-Maíllo Martínez, J. Tagg, T. Zilliox
    CERN, Meyrin, Switzerland
 
  The start of the Advanced Proton Driven Plasma Wakefield Acceleration Experiment (AWAKE) facility at CERN in 2016 came with the need for a continuous image acquisition system. The international scientific collaboration responsible for this project requested low- and high-resolution acquisition at capture rates of 10 Hz and 1 Hz, respectively. To match these requirements, GigE digital cameras were connected to a PXI system running PharLap, a real-time operating system, using dual-port 1 Gb/s network cards. With new requirements for faster acquisition at higher resolution, it was decided to add 10 Gb/s network cards and a Network Attached Storage (NAS) directly connected to the PXI system to avoid saturating the network. There was also a request to acquire high-resolution images on several cameras for a limited duration, typically 30 seconds, in a burst acquisition mode. To comply with these new requirements, PharLap had to be abandoned and replaced with Linux RT. This paper describes the limitations of the PharLap system and the lessons learned during the transition to Linux RT. We show the improvements in CPU stability and data throughput achieved.
Poster: MOPV044 [0.525 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-MOPV044
Received: 08 October 2021 · Revised: 18 October 2021 · Accepted: 20 November 2021 · Issue date: 28 February 2022
 
MOPV049 Standardizing a Python Development Environment for Large Controls Systems controls, GUI, software, interface 277
 
  • S.L. Clark, P.S. Dyer, S. Nemesure
    BNL, Upton, New York, USA
 
  Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-SC0012704 with the U.S. Department of Energy.
Python provides broad design freedom to programmers and a low barrier to entry for new software developers. Experience has shown that, unless standardized, a Python codebase will tend to diverge from a common style and architecture, becoming unmaintainable across the scope of a large controls system. Mitigating these effects requires a set of tools, standards, and procedures developed to assert boundaries on certain aspects of Python development, namely project organization, version management, and deployment procedures. Common tools like Git, GitLab, and virtual environments form a basis for development, with in-house utilities presenting their capabilities in a clear, developer-focused way. This paper describes the constraints needed for the development and deployment of large-scale Python applications, the function of the tools which comprise the development environment, and how these tools are leveraged to create simple and effective procedures to guide development.
 
Poster: MOPV049 [0.476 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-MOPV049
Received: 04 October 2021 · Revised: 20 October 2021 · Accepted: 20 November 2021 · Issue date: 20 December 2021
 
TUPV013 Back End Event Builder Software Design for INO Mini-ICAL System data-acquisition, software, detector, monitoring 413
 
  • M. Punna, N. Ayyagiri, J.A. Deshpande, P.M. Nair, P. Sridharan, S. Srivastava
    BARC, Trombay, Mumbai, India
  • S. Bheesette, Y. Elangovan, G. Majumder, N. Panyam
    TIFR, Colaba, Mumbai, India
 
  The India-based Neutrino Observatory collaboration has proposed to build a 50 kt magnetized Iron Calorimeter (ICAL) detector to study atmospheric neutrinos. This paper describes the design of the back-end event builder for Mini-ICAL, a first prototype of ICAL consisting of 20 Resistive Plate Chamber (RPC) detectors. The RPCs push event and monitoring data over a multi-tier network to the event builder, which carries out event building, event track display, data quality monitoring and data archival. The software has been designed for high performance and scalability using asynchronous data acquisition and lockless concurrent data structures. Data storage mechanisms such as ROOT, Berkeley DB, raw binary and Protocol Buffers were studied for performance and suitability. The server data-push module, designed using the publish-subscribe pattern, keeps the transport and the remote-client implementation technology agnostic. The event builder has been deployed at Mini-ICAL with a throughput of 3 MB/s. Since the software modules have been designed for scalability, they can easily be adapted for the next prototype, E-ICAL, with 320 RPCs, to sustain a data rate of 200 MB/s.
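  The publish-subscribe data push described here can be sketched with a small asyncio program. This is a condensed illustration, not the BARC implementation: event contents, queue sizes and the consumer are placeholders, and the real system additionally serves archival and monitoring consumers.

```python
# Publish-subscribe sketch: the builder publishes events to any number of
# subscribers without caring about their transport or client technology.
import asyncio


class EventPublisher:
    def __init__(self):
        self._subscribers: list[asyncio.Queue] = []

    def subscribe(self) -> asyncio.Queue:
        q = asyncio.Queue(maxsize=1000)   # back-pressure per subscriber
        self._subscribers.append(q)
        return q

    def publish(self, event: dict):
        for q in self._subscribers:
            if not q.full():              # slow consumers never block building
                q.put_nowait(event)


async def event_builder(pub: EventPublisher):
    for number in range(5):               # stand-in for RPC data assembly
        pub.publish({"event": number, "hits": [1, 2, 3]})
        await asyncio.sleep(0.01)


async def display_client(q: asyncio.Queue):
    while True:
        print("track display got", await q.get())


async def main():
    pub = EventPublisher()
    consumer = asyncio.create_task(display_client(pub.subscribe()))
    await event_builder(pub)
    consumer.cancel()

asyncio.run(main())
```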
Poster: TUPV013 [0.760 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-TUPV013
Received: 09 October 2021 · Revised: 19 October 2021 · Accepted: 24 February 2022 · Issue date: 15 March 2022
 
TUPV036 An Evaluation of Schneider M580 HSBY PLC Redundancy in the R744 System A Cooling Unit PLC, controls, operation, power-supply 484
 
  • D.I. Teixeira
    University of Cape Town, Cape Town, South Africa
  • L. Davoine, W.K. Hulek, L. Zwalinski
    CERN, Meyrin, Switzerland
 
  The Detector Technologies group at CERN has developed a 2-stage transcritical R744 cooling system as a service for future detector cooling. This is the first system in operation at CERN where Schneider HSBY (Hot Standby) redundant PLCs are used. This cooling system provides a good opportunity to test the Schneider redundant PLC system and understand its operation, limitations and probability of failure in a controlled environment. The PLC redundancy is achieved by connecting Schneider M580 HSBY redundant PLCs to the system, where one is the primary, which operates the system, and the other is in standby mode. A series of tests has been developed to understand the operation and failure modes of the PLCs by simulating different primary PLC failures and observing whether the standby PLC can seamlessly take over system operation.
Poster: TUPV036 [1.154 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-TUPV036
Received: 09 October 2021 · Revised: 29 October 2021 · Accepted: 20 November 2021 · Issue date: 31 December 2021
 
TUPV050 Control System Upgrade of the High-Pressure Cell for Pressure-Jump X-Ray Diffraction controls, EPICS, operation, detector 524
 
  • R. Mercado, N.L. Griffin, P. Holloway, S.C. Lay, P.J. Roberts
    DLS, Oxfordshire, United Kingdom
 
  This paper reports on the upgrade of the control system of a sample environment used to pressurise samples to 500 MPa at temperatures between -20 °C and 120 °C. The equipment can achieve millisecond pressure jumps for use in X-ray scattering experiments and has been routinely available on beamline I22 at Diamond. The millisecond pressure-jump capability is unique. Example applications are the demonstration of pressure-induced formation of super crystals from PEGylated gold nanoparticles and the study of the controlled assembly and disassembly of nanoscale protein cages. The project goal was to migrate the control system for improved integration with EPICS and the GDA data acquisition software. The original control system uses National Instruments hardware controlled from LabVIEW. The project looked at mapping the old control system hardware to alternatives in use at Diamond and migrating the control software. The paper discusses the choice of equipment used for ADC acquisition and equipment protection, using Omron PLCs and Beckhoff EtherCAT modules, a custom jump-trigger circuit, the calibration of the system, and the next steps for testing the system.
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-TUPV050
Received: 13 October 2021 · Revised: 29 October 2021 · Accepted: 21 December 2021 · Issue date: 22 February 2022
 
WEBL02 Prototype of Image Acquisition and Storage System for SHINE interface, FEL, laser, database 564
 
  • H.H. Lv, D.P. Bai, X.M. Liu, H. Zhao
    SSRF, Shanghai, People’s Republic of China
 
  The Shanghai HIgh repetition rate XFEL aNd Extreme light facility (SHINE) is a quasi-continuous-wave hard X-ray free electron laser facility, which is currently under construction. The image acquisition and storage system has been designed to handle the large quantity of image data generated by the beam and X-ray diagnostics systems, the laser system, etc. A prototype system with Camera Link cameras has been developed to acquire data and to transport it reliably at a throughput of 1000 MB/s. The image data are transferred via the ZeroMQ protocol to the storage, where the image data and the relevant metadata are archived and made available for user analysis. For high-speed storage of image frames, an optimized schema was identified by testing and comparing four candidate schemas: the image data are written to HDF5 files, and the metadata pertaining to the images are stored in a NoSQL database. This delivers a storage speed of up to 1.2 GB/s. Performance was also compared between a stand-alone server and the Lustre file system, with Lustre providing better performance. Details of the image acquisition, transfer and storage schemas are described in the paper.
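  The receive-and-append path of such a prototype can be sketched with pyzmq and h5py. A minimal sketch under assumed conventions: the endpoint, frame shape, data type and the JSON metadata header are illustrative, not the actual SHINE wire format.

```python
# ZeroMQ-to-HDF5 sketch: frames arrive over a PULL socket and are appended
# to a chunked, resizable HDF5 dataset.
import json
import numpy as np
import zmq
import h5py

FRAME_SHAPE = (2048, 2048)

ctx = zmq.Context()
sock = ctx.socket(zmq.PULL)               # detector side uses a PUSH socket
sock.bind("tcp://*:5555")

with h5py.File("images.h5", "w") as f:
    dset = f.create_dataset("frames", shape=(0, *FRAME_SHAPE),
                            maxshape=(None, *FRAME_SHAPE),
                            dtype="uint16", chunks=(1, *FRAME_SHAPE))
    for i in range(100):
        # Each message carries a JSON metadata header plus the raw frame;
        # in the paper's design the metadata goes to a NoSQL database.
        meta, payload = sock.recv_multipart()
        frame = np.frombuffer(payload, dtype="uint16").reshape(FRAME_SHAPE)
        dset.resize(i + 1, axis=0)        # append without rewriting the file
        dset[i] = frame
        print("stored frame", json.loads(meta)["frame_id"])
```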
Slides: WEBL02 [3.703 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-WEBL02
Received: 10 October 2021 · Accepted: 21 November 2021 · Issue date: 12 February 2022
 
WEBR01 RomLibEmu: Network Interface Stress Tests for the CERN Radiation Monitoring Electronics (CROME) radiation, software, interface, controls 581
 
  • K. Ceesay-Seitz, H. Boukabache, M. Leveneur, D. Perrin
    CERN, Geneva, Switzerland
 
  The CERN RadiatiOn Monitoring Electronics are a modular safety system for radiation monitoring that is remotely configurable through a supervisory system via a custom protocol on top of a TCP/IP connection. The configuration parameters influence the safety decisions taken by the system. An independent test library has been developed in Python in order to test the system’s reaction to misconfigurations. It is further used to stress test the application’s network interface and the robustness of the software. The library is capable of creating packets with default values, autocompleting packets according to the protocol and it allows the construction of packets from raw data. Malformed packets can be intentionally crafted and the response of the application under test is checked for protocol conformance. New test cases can be added to the test case dictionary. Each time before a new version of the communication library is released, the Python test library is used for regression testing. The current test suite consists of 251 automated test cases. Many application bugs could be found and solved, which improved the reliability and availability of the system.  
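  A library of this kind can be illustrated with a short sketch. The frame layout, field names and conformance check below are hypothetical stand-ins, not the actual CROME protocol; the sketch only shows the pattern of building default packets, mutating bytes to craft malformed ones, and checking the application's response.

```python
# Packet stress-test sketch: build defaults, corrupt fields, check replies.
import socket
import struct

HEADER = struct.Struct(">HHI")            # msg_type, length, sequence


def build_packet(msg_type=0x0001, payload=b"\x00" * 8, seq=1) -> bytes:
    return HEADER.pack(msg_type, len(payload), seq) + payload


def mutate(packet: bytes, offset: int, value: int) -> bytes:
    """Deliberately corrupt one byte to craft a malformed packet."""
    corrupted = bytearray(packet)
    corrupted[offset] = value
    return bytes(corrupted)


def send_and_check(packet: bytes, host="192.0.2.10", port=5000) -> bool:
    with socket.create_connection((host, port), timeout=2.0) as s:
        s.sendall(packet)
        reply = s.recv(1024)
    # Conformant behaviour: a malformed request must be rejected cleanly,
    # not crash the application; here we only check that a reply came back.
    return len(reply) >= HEADER.size


test_cases = {
    "default": build_packet(),
    "bad_length": mutate(build_packet(), offset=3, value=0xFF),
    "unknown_type": build_packet(msg_type=0xDEAD),
}
for name, pkt in test_cases.items():
    print(name, "->", send_and_check(pkt))
```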
Slides: WEBR01 [1.321 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-WEBR01
Received: 10 October 2021 · Revised: 18 October 2021 · Accepted: 02 February 2022 · Issue date: 24 February 2022
 
WEPV005 Experiment Automation Using EPICS EPICS, controls, experiment, detector 625
 
  • D.D. Cosic, M. Vićentijević
    RBI, Zagreb, Croatia
 
  Beam time at accelerator facilities around the world is very expensive and scarce, prompting the need for experiments to be performed as efficiently as possible. The efficiency of an accelerator facility can be measured as the ratio of experiment time to beam optimization time. At RBI we have four ion sources, two accelerators and ten experimental end stations. We can obtain around 50 different ion species, each requiring a different set of parameters for optimal operation. Automating repetitive procedures can increase experiment efficiency and reduce beam setup time. Currently, operators manually fine-tune the parameters to optimize the beam current. This process can be very long and requires many iterations. Automatic optimization of parameters can save valuable accelerator time. Based on a successful implementation of EPICS, the system was expanded to automate recurring procedures. To achieve this, a PLC was integrated into EPICS and our acquisition system was modified to communicate with devices through EPICS. This allowed us to use tools available in EPICS to do beam optimization much faster than a human operator can, and therefore significantly increased the efficiency of our facility.
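  The simplest form of such automated optimization is a parameter scan over Channel Access. A minimal sketch, assuming pyepics; the PV names, scan range and settling time are hypothetical, and a production version would add limits checking and interlocks.

```python
# Beam-current optimization sketch: scan one tuning PV, keep the best value.
import time
import numpy as np
import epics

PARAM_PV = "SOURCE:LENS1:VOLTAGE"         # hypothetical tuning parameter
CURRENT_PV = "BEAMLINE:FC1:CURRENT"       # hypothetical Faraday cup readback


def measure_current(settle=0.5, samples=5) -> float:
    time.sleep(settle)                    # let the beam stabilise
    return float(np.mean([epics.caget(CURRENT_PV) for _ in range(samples)]))


def scan_parameter(values):
    best_value, best_current = None, -np.inf
    for v in values:
        epics.caput(PARAM_PV, v, wait=True)
        current = measure_current()
        if current > best_current:
            best_value, best_current = v, current
    epics.caput(PARAM_PV, best_value, wait=True)   # restore the optimum
    return best_value, best_current


value, current = scan_parameter(np.linspace(0.0, 10.0, 21))
print(f"optimum at {value:.2f} with {current:.3e} A")
```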
Poster: WEPV005 [0.468 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-WEPV005
Received: 08 October 2021 · Accepted: 21 November 2021 · Issue date: 16 February 2022
 
WEPV010 R&D of the KEK Linac Accelerator Tuning Using Machine Learning injection, linac, operation, electron 640
 
  • A. Hisano, M. Iwasaki
    OCU, Osaka, Japan
  • H. Nagahara, Y. Nakashima, N. Takemura
    Osaka University, Institute for Datability Science, Oasaka, Japan
  • T. Nakano
    RCNP, Osaka, Japan
  • I. Satake, M. Satoh
    KEK, Ibaraki, Japan
 
  We have developed a machine-learning-based operation tuning scheme for the KEK e-/e+ injector linac (Linac) to improve the injection efficiency. The tuning scheme is based on the various accelerator operation data (control parameters, monitoring data and environmental data) of the Linac. For these studies, we use the Linac operation data accumulated from 2018 to 2021. Accelerator tuning faces two problems: (1) many parameters (~1000) must be tuned, and these parameters are intricately correlated with each other; and (2) continuous environmental change, due to temperature changes, ground motion, tidal forces, etc., affects the operation tuning. To address these problems, we have developed (1) a visualization of the trend/correlation distribution of the ~1000 accelerator parameters, based on dimensionality reduction using a Variational Autoencoder (VAE), to see the long-term correlation between the accelerator operation parameters and the environmental data, and (2) an accelerator tuning method using a deep neural network, which is continuously updated with short-term accelerator data to adapt to environmental changes. In this presentation, we report the current status of this R&D.
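  A compact sketch of VAE-based dimensionality reduction is given below, assuming PyTorch. Layer sizes, the 2-D latent space and the random stand-in data are illustrative; the actual KEK model and training setup are not specified at this level of detail in the abstract.

```python
# VAE sketch: compress ~1000 correlated parameters to a 2-D latent space
# whose trend over time can be plotted against environmental data.
import torch
from torch import nn


class VAE(nn.Module):
    def __init__(self, n_params=1000, latent=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_params, 128), nn.ReLU())
        self.to_mu = nn.Linear(128, latent)
        self.to_logvar = nn.Linear(128, latent)
        self.decoder = nn.Sequential(nn.Linear(latent, 128), nn.ReLU(),
                                     nn.Linear(128, n_params))

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterise
        return self.decoder(z), mu, logvar


def loss_fn(x, recon, mu, logvar):
    recon_loss = nn.functional.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_loss + kl


model = VAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
data = torch.randn(256, 1000)             # stand-in for archived operation data
for _ in range(10):
    recon, mu, logvar = model(data)
    loss = loss_fn(data, recon, mu, logvar)
    opt.zero_grad(); loss.backward(); opt.step()

latent_trend = model.to_mu(model.encoder(data))  # 2-D points to plot vs time
```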
Poster: WEPV010 [1.997 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-WEPV010
Received: 10 October 2021 · Revised: 19 October 2021 · Accepted: 21 November 2021 · Issue date: 11 January 2022
 
WEPV011 Research on Correction of Beam Beta Function of HLS-II Storage Ring Based on Deep Learning storage-ring, quadrupole, controls, feedback 645
 
  • Y.B. Yu, C. Li, W. Li, G. Liu, W. Xu, K. Xuan
    USTC/NSRL, Hefei, Anhui, People’s Republic of China
 
  The beam stability of the storage ring determines the light quality of the synchrotron radiation. The beam stability of the storage ring is affected by many factors, such as magnetic field errors, installation errors, foundation vibration and temperature variation, so correcting the beam optical parameters to improve beam stability is unavoidable. In this paper, deep learning is used to establish a beam stability model of the HLS-II storage ring, and the beam optical parameters can be corrected based on this model. The simulation results show that this method achieves simulated correction of the beta function of the HLS-II storage ring, and the correction accuracy meets the design requirements.
Poster: WEPV011 [2.142 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-WEPV011
Received: 09 October 2021 · Revised: 15 November 2021 · Accepted: 17 November 2021 · Issue date: 21 November 2021
 
WEPV020 Learning to Lase: Machine Learning Prediction of FEL Beam Properties diagnostics, simulation, FEL, electron 677
 
  • A.E. Pollard, D.J. Dunning
    STFC/DL/ASTeC, Daresbury, Warrington, Cheshire, United Kingdom
  • M. Maheshwari
    STFC/DL, Daresbury, Warrington, Cheshire, United Kingdom
 
  Accurate prediction of the longitudinal phase space and other properties of the electron beam is computationally expensive. In addition, some diagnostics are destructive in nature and/or cannot be readily accessed. Machine-learning-based virtual diagnostics can allow for the real-time generation of longitudinal phase space and other graphs, allowing for rapid parameter searches and enabling operators to predict otherwise unavailable beam properties. We present a machine learning model for predicting a range of diagnostic screens along the accelerator beamline of a free-electron laser facility, conditional on linac and other parameters. Our model is a combination of a conditional variational autoencoder and a generative adversarial network, which generates high-fidelity images that accurately match simulation data. Work to date is based on start-to-end simulation data, as a prototype for experimental applications.
Poster: WEPV020 [1.330 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-WEPV020
Received: 10 October 2021 · Revised: 22 October 2021 · Accepted: 28 December 2021 · Issue date: 25 February 2022
 
WEPV021 Machine Learning for RF Breakdown Detection at CLARA cavity, detector, gun, operation 681
 
  • A.E. Pollard, D.J. Dunning, A.J. Gilfellon
    STFC/DL/ASTeC, Daresbury, Warrington, Cheshire, United Kingdom
 
  Maximising the accelerating gradient of RF structures is fundamental to improving accelerator facility performance and cost-effectiveness. Structures must be subjected to a conditioning process before operational use, in which the gradient is gradually increased up to the operating value. A limiting effect during this process is breakdown, or vacuum arcing, which can cause damage that limits the ultimate operating gradient. Techniques to efficiently condition the cavities while minimising the number of breakdowns are therefore important. In this paper, machine learning techniques are applied to detect breakdown events in RF pulse traces by approaching the problem as anomaly detection, using a variational autoencoder. This process detects deviations from normal operation and classifies them with near-perfect accuracy. Offline data from various sources have been used to develop the techniques, which we aim to test at the CLARA facility at Daresbury Laboratory. These techniques could then be applied generally.
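  The anomaly-detection step reduces to scoring each pulse by how poorly the trained model reconstructs it. A short sketch, assuming PyTorch and an autoencoder `model` already trained on normal pulses; the percentile threshold is an illustrative choice.

```python
# Anomaly scoring sketch: pulses the autoencoder reconstructs poorly are
# flagged as breakdown candidates.
import numpy as np
import torch


def anomaly_scores(model, pulses: np.ndarray) -> np.ndarray:
    """Per-pulse mean squared reconstruction error."""
    with torch.no_grad():
        x = torch.as_tensor(pulses, dtype=torch.float32)
        recon = model(x)
        return torch.mean((recon - x) ** 2, dim=1).numpy()


def flag_breakdowns(model, pulses, normal_pulses, percentile=99.9):
    # Calibrate the threshold on pulses known to be normal operation.
    threshold = np.percentile(anomaly_scores(model, normal_pulses), percentile)
    return anomaly_scores(model, pulses) > threshold
```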
Poster: WEPV021 [1.565 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-WEPV021
Received: 09 October 2021 · Accepted: 21 November 2021 · Issue date: 24 November 2021
 
WEPV022 Sample Alignment in Neutron Scattering Experiments Using Deep Neural Network neutron, experiment, alignment, scattering 686
 
  • J.P. Edelen, K. Bruhwiler, A. Diaw, C.C. Hall
    RadiaSoft LLC, Boulder, Colorado, USA
  • S. Calder
    ORNL RAD, Oak Ridge, Tennessee, USA
  • C.M. Hoffmann
    ORNL, Oak Ridge, Tennessee, USA
 
  Funding: DOE Office of Science Office of Basic Energy Science SBIR award number DE-SC0021555
Access to neutron scattering centers, such as Oak Ridge National Laboratory (ORNL) and the NIST Center for Neutron Research, has enabled the investigation of a wide variety of applications in particle physics, material science, and biology. In these experiments, the quality of the collected data is very sensitive to sample and beam alignment and to the stabilization of the experimental environment, requiring human intervention to tune the beam. While this procedure works, it is inefficient and time-consuming. In this work we present progress towards using machine learning to automate the alignment of a beamline in neutron scattering experiments. Our algorithm uses convolutional neural networks both to learn a surrogate of the image data of the sample and to predict the sample contour using a U-Net. We tested our algorithm on neutron camera images from the HB-2A powder diffractometer and the TOPAZ single-crystal diffractometer beamlines at ORNL.
 
Poster: WEPV022 [4.472 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-WEPV022
Received: 10 October 2021 · Revised: 22 October 2021 · Accepted: 21 December 2021 · Issue date: 06 February 2022
 
WEPV023 Development of a Smart Alarm System for the CEBAF Injector operation, vacuum, solenoid, quadrupole 691
 
  • D.T. Abell, J.P. Edelen
    RadiaSoft LLC, Boulder, Colorado, USA
  • B.G. Freeman, R. Kazimi, D.G. Moser, C. Tennant
    JLab, Newport News, Virginia, USA
 
  Funding: This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics under Award Number DE-SC0019682.
RadiaSoft and Jefferson Laboratory are working together to develop a machine-learning-based smart alarm system for the CEBAF injector. Because of the injector’s large number of parameters and possible fault scenarios, it is highly desirable to have an autonomous alarm system that can quickly identify and diagnose unusual machine states. We present our work on artificial neural networks designed to identify such undesirable machine states. In particular, we test both auto-encoders and inverse models as possible tools for differentiating between normal and abnormal states. These models are being developed using both supervised and unsupervised learning techniques, and are being trained using CEBAF injector data collected during dedicated machine studies as well as during regular operations. Lastly, we discuss tradeoffs between the two types of models.
 
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-WEPV023
Received: 10 October 2021 · Accepted: 19 January 2022 · Issue date: 14 March 2022
 
WEPV041 Implementation of a VHDL Application for Interfacing Anybus CompactCom interface, neutron, FPGA, PLC 755
 
  • S. Gabourin, A. Nordt, S. Pavinato
    ESS, Lund, Sweden
 
  The European Spallation Source (ESS ERIC), based in Lund (Sweden), will in a few years be the most powerful neutron source in Europe, with an average beam power of 5 MW. It will accelerate proton beam pulses onto a tungsten wheel to generate neutrons by the spallation effect. For such a beam, the Machine Protection System (MPS) at ESS must be fast and reliable, and for this reason a Fast Beam Interlock System (FBIS) based on FPGAs is required. Some protection functions monitoring slow values (like temperature, mechanical movements and magnetic fields) need less strict reaction times, however, and are managed by PLCs. The communication protocol established between the PLCs and FBIS is based on the PROFINET fieldbus. The Anybus CompactCom allows a host to have connectivity to industrial networks such as PROFINET. In this context, FBIS represents the host, and the application code to interface the Anybus CompactCom has been fully developed in VHDL. This paper describes an open-source implementation to interface a CompactCom M40 with an FPGA.
Poster: WEPV041 [0.967 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-WEPV041
Received: 09 October 2021 · Revised: 22 October 2021 · Accepted: 14 January 2022 · Issue date: 01 March 2022
 
WEPV048 An Archiver Appliance Performance and Resources Consumption Study simulation, EPICS, controls, software 774
 
  • R.N. Fernandes, S. Armanet, H. Kocevar, S. Regnell
    ESS, Lund, Sweden
 
  At the European Spallation Source (ESS), 1.6 million signals are expected to be generated by a (distributed) control layer composed of around 1500 EPICS IOCs. A substantial amount of these signals - i.e. PVs - will be stored by the Archiving Service, a service that is currently under development at the Integrated Control System (ICS) Division. From a technical point of view, the Archiving Service is implemented using a software application called the Archiver Appliance. This application, originally developed at SLAC, records PVs as a function of time and stores these in its persistent layer. A study based on multiple simulation scenarios that model ESS (future) modus operandi has been conducted by ICS to understand how the Archiver Appliance performs and consumes resources (e.g. RAM) under disparate workloads. This paper presents: 1) The simulation scenarios; 2) The tools used to collect and interpret the results; 3) The storage study; 4) The retrieval study; 5) The resources saturation study; 6) Conclusions based on the interpretation of the results.  
Poster: WEPV048 [0.487 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-WEPV048
Received: 10 October 2021 · Accepted: 11 February 2022 · Issue date: 12 March 2022
 
THAL01 Machine Learning Tools Improve BESSY II Operation experiment, ISOL, simulation, controls 784
 
  • L. Vera Ramiréz, T. Birke, G. Hartmann, R. Müller, M. Ries, A. Schälicke, P. Schnizer
    HZB, Berlin, Germany
 
  At the HZB user facility BESSY II, Machine Learning (ML) technologies aim at advanced analysis, automation, explainability and performance improvements for accelerator and beamline operation. The development of these tools is intertwined with improvements of the prediction part of the digital twin instances at BESSY II [*] and their integration into the Bluesky Suite [**,***]. On the accelerator side, several use cases have recently been identified, pipelines designed and models tested. Previous studies applied Deep Reinforcement Learning (RL) to booster current and injection efficiency. RL now tackles a more demanding scenario: the mitigation of harmonic orbit perturbations induced by external civil noise sources. This paper presents the methodology, design and simulation phases as well as challenges and first results. Further ML use cases under study include, among others, anomaly detection prototypes with anomaly scores for individual features.
[*] P. Schnizer et al., IPAC’21
[**] D. Allan, T. Caswell, S. Campbell, and M. Rakitin, Synchrot. Radiat. News 32, 19-22, 2019
[***] W. Smith et al., this conference
 
Slides: THAL01 [9.849 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-THAL01
Received: 08 October 2021 · Revised: 24 October 2021 · Accepted: 21 November 2021 · Issue date: 29 January 2022
 
THAL04 Machine Learning Based Tuning and Diagnostics for the ATR Line at BNL quadrupole, simulation, controls, diagnostics 803
 
  • J.P. Edelen, K. Bruhwiler, E.G. Carlin, C.C. Hall
    RadiaSoft LLC, Boulder, Colorado, USA
  • K.A. Brown, V. Schoefer
    BNL, Upton, New York, USA
 
  Funding: This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics under Award Number DE-SC0019682.
Over the past several years machine learning has increased in popularity for accelerator applications. We have been exploring the use of machine learning as a diagnostic and tuning tool for the transfer line from the AGS to RHIC at Brookhaven National Laboratory. In our work, inverse models are used either to provide feed-forward corrections for beam steering or as a diagnostic to identify quadrupole magnets that have excitation errors. In this talk we present results on using machine learning for beam steering optimization over a range of different operating energies. We also demonstrate the use of inverse models for optical error diagnostics. Our results are from studies that combine simulation and measurement data.
 
Slides: THAL04 [4.845 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-THAL04
Received: 10 October 2021 · Revised: 22 October 2021 · Accepted: 06 February 2022 · Issue date: 01 March 2022
 
THBL04 Kubernetes for EPICS IOCs EPICS, controls, target, detector 835
 
  • G. Knap, T.M. Cobb, Y. Moazzam, U.K. Pedersen, C.J. Reynolds
    DLS, Oxfordshire, United Kingdom
 
  EPICS IOCs at Diamond Light Source are built, deployed, and managed by a set of in-house tools that were implemented 15 years ago. This paper will detail a proof of concept to demonstrate replacing these tools and processes with modern industry standards. IOCs are packaged in containers with their unique dependencies included. IOC images are generic, and a single image is required for all containers that control a given class of device. Configuration is provided to the container in the form of a start-up script only. The configuration allows the generic IOC image to bootstrap a container for a unique IOC instance. This approach keeps the number of images required to a minimum. Container orchestration for all beamlines in the facility is provided through a central Kubernetes cluster. The cluster has remote nodes that reside within each beamline network to host the IOCs for the local beamline. All source, images and individual IOC configurations are held in repositories. Build and deployment to the production registries is handled by continuous integration. Finally, a development container provides a portable development environment for maintaining and testing IOC code.  
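  The deployment step of this proof of concept can be sketched with the official Kubernetes Python client. A minimal sketch: the image name, namespace, ConfigMap and start-up script path are hypothetical, and the real cluster configuration (remote beamline nodes, registries, CI) is omitted.

```python
# Kubernetes sketch: a generic IOC image bootstrapped into a unique IOC
# instance by a start-up script supplied as the only configuration.
from kubernetes import client, config

config.load_kube_config()

ioc = client.V1Container(
    name="bl01-motor-ioc",
    image="registry.example.com/epics/generic-motor-ioc:2024.1",
    command=["/epics/ioc/config/start.sh"],
    # The instance-specific start-up script is mounted from a ConfigMap.
    volume_mounts=[client.V1VolumeMount(name="startup",
                                        mount_path="/epics/ioc/config")],
)

pod_spec = client.V1PodSpec(
    containers=[ioc],
    host_network=True,                     # Channel Access needs the beamline net
    volumes=[client.V1Volume(
        name="startup",
        config_map=client.V1ConfigMapVolumeSource(name="bl01-motor-ioc-startup"),
    )],
)

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="bl01-motor-ioc"),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": "bl01-motor-ioc"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "bl01-motor-ioc"}),
            spec=pod_spec,
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="bl01", body=deployment)
```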
Slides: THBL04 [0.640 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-THBL04
Received: 11 October 2021 · Revised: 14 October 2021 · Accepted: 23 February 2022 · Issue date: 01 March 2022
 
THBR01 Renovation of the Trigger Distribution in CERN’s Open Analogue Signal Information System Using White Rabbit controls, hardware, timing, interface 839
 
  • D. Lampridis, T. Gingold, A. Poscia, M.H. Serans, M.R. Shukla, T.P. da Silva
    CERN, Geneva, Switzerland
  • D. Michalik
    Aalborg University, Aalborg, Denmark
 
  The Open Analogue Signal Information System (OASIS) acts as a distributed oscilloscope system that acquires signals from devices across the CERN accelerator complex and displays them in a convenient, graphical way. Today, the OASIS installation counts over 500 multiplexed digitisers, capable of digitising more than 5000 analogue signals and offers a selection of more than 250 triggers for the acquisitions. These triggers are mostly generated at a single central place and are then distributed by means of a dedicated coaxial cable per digitiser, using a "star" topology. An upgrade is currently under way to renovate this trigger distribution system and migrate it to a White Rabbit (WR) based solution. In this new system, triggers are distributed in the form of Ethernet messages over a WR network, allowing for better scalability, higher time-stamping precision, trigger latency compensation and improved robustness. This paper discusses the new OASIS trigger distribution architecture, including hardware, drivers, front-end, server and application-tier software. It then provides results from preliminary tests in laboratory installations.  
Slides: THBR01 [2.229 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-THBR01
Received: 09 October 2021 · Accepted: 21 December 2021 · Issue date: 06 February 2022
 
THBR02 White Rabbit and MTCA.4 Use in the LLRF Upgrade for CERN’s SPS LLRF, controls, FPGA, cavity 847
 
  • T. Włostowski, K. Adrianek, M. Arruat, P. Baudrenghien, A.C. Butterworth, G. Daniluk, J. Egli, J.R. Gill, T. Gingold, J.D. González Cobas, G. Hagmann, P. Kuzmanović, D. Lampridis, M.M. Lipiński, S. Novel González, J.P. Palluel, M. Rizzi, A. Spierer, M. Sumiński, A. Wujek
    CERN, Geneva, Switzerland
 
  The Super Proton Synchrotron (SPS) Low-level RF (LLRF) system at CERN was completely revamped in 2020. In the old system, the digital signal processing was clocked by a submultiple of the RF. The new system uses a fixed-frequency clock derived from White Rabbit (WR). This triggered the development of an eRTM module for generating very precise clock signals to be fed to the optional RF backplane in MTCA.4 crates. The eRTM14/15 sandwich of modules implements a WR node delivering clock signals with a jitter below 100 fs. WR-clocked RF synthesis inside the FPGA makes it simple to reproduce the RF elsewhere by broadcasting the frequency-tuning words over the WR network itself. These words are received by the WR2RF-VME module and used to produce beam-synchronous signals such as the bunch clock and the revolution tick. This paper explains the general architecture of this new LLRF system, highlighting the role of WR-based synchronization. It then goes on to describe the hardware and gateware designs for both modules, along with their supporting software. A recount of our experience with the deployment of the MTCA.4 platform is also provided.  
Slides: THBR02 [0.981 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-THBR02
Received: 12 October 2021 · Revised: 24 October 2021 · Accepted: 03 January 2022 · Issue date: 28 February 2022
 
THBR03 Prototype of White Rabbit Based Beam-Synchronous Timing Systems for SHINE timing, FEL, electron, controls 853
 
  • P.X. Yu, Y.B. Yan
    SSRF, Shanghai, People’s Republic of China
  • G.H. Gong
    Tsinghua University, Beijing, People’s Republic of China
  • G. Gu, Z.Y. Jiang, L. Zhao
    USTC, Hefei, Anhui, People’s Republic of China
  • Y.M. Ye
    TUB, Beijing, People’s Republic of China
 
  The Shanghai HIgh repetition rate XFEL aNd Extreme light facility (SHINE) is under construction. SHINE requires precise distribution and synchronization of 1.003086 MHz timing signals over a long distance of about 3.1 km. Two prototype systems were developed, both containing three functions: beam-synchronous trigger signal distribution, random-event trigger signal distribution, and data exchange between nodes. The frequency of the beam-synchronous trigger signal can be divided according to the accelerator operation mode, and each output pulse can be configured for different fill modes. One prototype system was designed based on a customized clock frequency (64.197530 MHz); the other was designed based on the standard White Rabbit protocol. DDS (Direct Digital Synthesis) and D flip-flops (DFFs) are adopted for RF signal transfer and pulse configuration. The details of the timing system design and test results are reported in this paper.
Slides: THBR03 [3.344 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-THBR03
Received: 11 October 2021 · Revised: 19 October 2021 · Accepted: 22 December 2021 · Issue date: 10 February 2022
 
THPV009 Web Gui Development and Integration in Libera Instrumentation interface, GUI, software, instrumentation 875
 
  • D. Bisiach, M. Cargnelutti, P. Leban, P. Paglovec, L. Rahne, M. Škabar, A. Vigali
    I-Tech, Solkan, Slovenia
 
  During the past 5 years, Instrumentation Technologies has expanded the embedded OS running on Libera instruments (beam position instrumentation, LLRF) with many data access interfaces that allow faster access to the signals retrieved by the instrument. Some of these access interfaces are tied to the user environment’s machine control system (EPICS/Tango), others to user software preferences (MATLAB/Python). In recent years, a requirement for easier data streaming was raised, to allow easier data access from PCs and mobile phones through a web browser. This paper presents the development of the web backend server and the realization of a web frontend capable of processing the data retrieved by the instrument. A use case is presented: the realization of the Libera Current Meter Web GUI as a first development example of a Web GUI for a Libera instrument and the starting point for Web GUI pipeline integration on other instruments. In the coming years, the HTTP access interface will become a standard for data access on Libera instrumentation for quick testing/diagnostics and will allow the final user to customize it autonomously.
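  The backend half of such a pipeline can be sketched as a small HTTP service. A minimal sketch, assuming Flask; the endpoint path and the read_current() data source are hypothetical stand-ins for the Libera acquisition interface.

```python
# Web-backend sketch: the browser frontend polls a JSON endpoint for the
# latest values retrieved from the instrument.
import time
import random
from flask import Flask, jsonify

app = Flask(__name__)


def read_current() -> float:
    """Stand-in for reading the instrument's current measurement."""
    return random.gauss(10.0, 0.1)


@app.route("/api/current")
def current():
    # One JSON document per poll keeps the frontend framework-agnostic.
    return jsonify({"timestamp": time.time(), "current_mA": read_current()})


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```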
Poster: THPV009 [0.729 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-THPV009
Received: 08 October 2021 · Accepted: 11 February 2022 · Issue date: 11 March 2022
 
THPV046 Virtualized Control System Infrastructure at LINAC Project, PINSTECH controls, EPICS, interface, Windows 975
 
  • N.U. Saqib, F. Sher
    PINSTECH, Islamabad, Pakistan
 
  IT infrastructure is the backbone of modern big-science accelerator control systems. The Accelerator Controls and Electronics (ACE) Group is responsible for the controls, electronics and IT infrastructure of the medical and industrial NDT (Non-Destructive Testing) linear accelerator prototypes at the LINAC Project, PINSTECH. All of the control system components, such as EPICS IOCs, operator interfaces, databases and various servers, are virtualized using VMware vSphere and VMware Horizon technologies. This paper describes the current IT design and development structure that supports the control systems of the linear accelerators efficiently and effectively.
Poster: THPV046 [1.174 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-THPV046
Received: 10 October 2021 · Revised: 20 October 2021 · Accepted: 21 November 2021 · Issue date: 06 January 2022
 
THPV049 Virtualisation and Software Appliances as Means for Deployment of SCADA in Isolated Systems controls, SCADA, software, operation 985
 
  • P. Golonka, L. Davoine, M.Z. Zimny, L. Zwalinski
    CERN, Meyrin, Switzerland
 
  The paper discusses the use of virtualisation as a way to deliver a complete pre-configured SCADA (Supervisory Control And Data Acquisition) application as a software appliance to ease its deployment and maintenance. For off-premise control systems, it allows deployment to be performed by the local IT servicing teams with no particular control-specific knowledge, providing a "turn-key" solution. The virtualisation of a complete desktop allows the existing feature-rich Human-Machine Interface experience for local operation to be delivered and reused; it also resolves hardware and software compatibility issues at the deployment sites. The approach presented here was employed to provide replicas of the "LUCASZ" cooling system to collaborating laboratories, where on-site knowledge of the underlying technologies was not available; it required the controls to be encapsulated as a "black box" so that, for users, the system is operational soon after power is applied. The approach is generally applicable to international collaborations where control systems are contributed and need to be maintained by remote teams.
Poster: THPV049 [2.954 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-THPV049
Received: 08 October 2021 · Revised: 30 November 2021 · Accepted: 19 February 2022 · Issue date: 25 February 2022
 
FRBL01 Machine Learning for Anomaly Detection in Continuous Signals operation, neutron, controls, software 1032
 
  • A.A. Saoulis, K.R.L. Baker, R.A. Burridge, S. Lilley, M. Romanovschi
    STFC/RAL/ISIS, Chilton, Didcot, Oxon, United Kingdom
 
  Funding: UKRI / STFC
High availability at accelerators such as the ISIS Neutron and Muon Source is a key operational goal, requiring rapid detection and response to anomalies within the accelerator’s subsystems. While monitoring systems are in place for this purpose, they often require human expertise and intervention to operate effectively or are limited to predefined classes of anomaly. Machine learning (ML) has emerged as a valuable tool for automated anomaly detection in time series signal data. An ML pipeline suitable for anomaly detection in continuous signals is described, from labeling data for supervised ML algorithms to model selection and evaluation. These techniques are applied to detecting periods of temperature instability in the liquid methane moderator on ISIS Target Station 1. We demonstrate how this ML pipeline can be used to improve the speed and accuracy of detection of these anomalies.
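The shape of such a pipeline, from windowing and labelling to model evaluation, can be sketched with scikit-learn. This is a condensed illustration on synthetic stand-in data; the window length, features and random-forest model are illustrative choices, not the ISIS production pipeline.

```python
# Anomaly-detection pipeline sketch: window a continuous signal, extract
# features, train a supervised classifier on labelled windows, evaluate.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

WINDOW = 256


def windows(signal: np.ndarray) -> np.ndarray:
    n = len(signal) // WINDOW
    return signal[: n * WINDOW].reshape(n, WINDOW)


def features(w: np.ndarray) -> np.ndarray:
    # Per-window summary statistics as a simple feature vector.
    return np.column_stack([w.mean(axis=1), w.std(axis=1),
                            w.min(axis=1), w.max(axis=1)])


# Stand-in data: a temperature-like signal with injected unstable periods.
rng = np.random.default_rng(0)
signal = rng.normal(100.0, 0.2, 100_000)
labels = np.zeros(len(signal) // WINDOW, dtype=int)
labels[rng.choice(len(labels), 40, replace=False)] = 1
for i in np.where(labels == 1)[0]:
    signal[i * WINDOW:(i + 1) * WINDOW] += rng.normal(0, 2.0, WINDOW)

X = features(windows(signal))
X_train, X_test, y_train, y_test = train_test_split(X, labels, stratify=labels)
clf = RandomForestClassifier().fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```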
 
Slides: FRBL01 [12.611 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-FRBL01
Received: 08 October 2021 · Revised: 27 October 2021 · Accepted: 21 December 2021 · Issue date: 24 January 2022
 
FRBL05 RemoteVis: An Efficient Library for Remote Visualization of Large Volumes Using NVIDIA Index software, synchrotron, GPU, detector 1047
 
  • T.V. Spina, D. Alnajjar, M.L. Bernardi, F.S. Furusato, E.X. Miqueles, A.Z. Peixinho
    LNLS, Campinas, Brazil
  • A. Kuhn, M. Nienhaus
    NVIDIA, Santa Clara, USA
 
  Funding: We would like to thank the Brazilian Ministry of Science, Technology, and Innovation for the financial support.
Advancements in X-ray detector technology are increasing the amount of volumetric data available for material analysis in synchrotron light sources. Such developments are driving the creation of novel solutions to visualize large datasets both during and after image acquisition. Towards this end, we have devised a library called RemoteVis to visualize large volumes remotely on HPC nodes, using NVIDIA IndeX as the rendering backend. RemoteVis relies on RDMA-based data transfer to move large volumes from local HPC servers, possibly connected to X-ray detectors, to remote dedicated nodes containing multiple GPUs for distributed volume rendering. RemoteVis then injects the transferred data into IndeX for rendering. IndeX* is scalable software capable of using multiple nodes and GPUs to render large volumes in full resolution. As such, we have coupled RemoteVis with Slurm to dynamically schedule one or multiple HPC nodes to render any given dataset. RemoteVis was written in C/C++ and Python, providing an efficient API that requires only two functions to 1) start remote IndeX instances and 2) render regular volumes and point-cloud (diffraction) data in the web browser/Jupyter client.
*NVIDIA IndeX, https://developer.nvidia.com/nvidia-index
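From the user's side, the two-call workflow might look like the sketch below. The module and function names are invented for illustration and are not the actual RemoteVis API; only the two-step shape (start instances, then render) comes from the abstract.

```python
# Hypothetical two-call usage sketch; not the real RemoteVis API.
import numpy as np
import remotevis  # hypothetical import

# 1) Start remote NVIDIA IndeX instances; Slurm-style resources requested.
session = remotevis.start(nodes=2, gpus_per_node=4, partition="viz")

# 2) Render a regular volume; frames are served to the browser/Jupyter client.
volume = np.fromfile("tomogram.raw", dtype=np.uint16).reshape(2048, 2048, 2048)
session.render(volume, colormap="viridis")
```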
 
Slides: FRBL05 [12.680 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-FRBL05
Received: 10 October 2021 · Revised: 28 October 2021 · Accepted: 20 November 2021 · Issue date: 01 March 2022