Control System Infrastructure
THBL01 Control System Management and Deployment at MAX IV 819
 
  • B. Bertrand, Á. Freitas, V. Hardion
    MAX IV Laboratory, Lund University, Lund, Sweden
 
  The control systems of big research facilities like synchrotrons are composed of many different hardware and software parts. Deploying and maintaining such systems requires proper workflows and tools. MAX IV has been using Ansible to manage and deploy its full control system, both software and infrastructure, for quite some time with great success. All required software (i.e. Tango devices, GUIs…) used to be packaged as RPMs (Red Hat Package Manager), making deployment and dependency management easy. Using RPMs brings many advantages (a big community, well-tested packages, stability) but also comes with a few drawbacks, mainly the dependency on the release cycle of the operating system. The Python ecosystem changes quickly, and using recent modules can become challenging with RPMs. We have been investigating conda as an alternative package manager. Conda is a popular open-source package, dependency and environment management system. This paper will describe our workflow and experience working with both package managers.
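  As an illustration of the conda-based workflow described above, the following minimal Python sketch creates an environment from a pinned specification and launches a Tango device server inside it; the environment name, specification file and server module are hypothetical assumptions, not MAX IV's actual configuration.

    # Minimal sketch of a conda-based deployment step (hypothetical names),
    # of the kind an Ansible task might script.
    import subprocess

    ENV_NAME = "motor-ds"            # hypothetical environment name
    ENV_FILE = "environment.yml"     # hypothetical pinned dependency spec
    SERVER = ["python", "-m", "motor_ds", "instance01"]  # hypothetical device server

    # Create the isolated environment from the specification file.
    subprocess.run(["conda", "env", "create", "-f", ENV_FILE, "-n", ENV_NAME], check=True)

    # Launch the device server inside that environment, independent of the
    # operating system's Python and its RPM-provided libraries.
    subprocess.run(["conda", "run", "-n", ENV_NAME, *SERVER], check=True)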
Slides THBL01 [5.899 MB]  
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2021-THBL01  
About • Received ※ 10 October 2021       Accepted ※ 21 November 2021       Issue date ※ 12 February 2022  
 
THBL02 Exploring Alternatives and Designing the Next Generation of Real-Time Control System for Astronomical Observatories 824
 
  • T.C. Shen, A. Sepulveda
    ALMA Observatory, Santiago, Chile
  • R.A. Augsburger, S.A. Carrasco, P. Galeas, F. Huenupan, R.S. Seguel
    Universidad de La Frontera, Temuco, Chile
 
  The ALMA Observatory was inaugurated in 2013; after eight years of successful operation, obsolescence has started to emerge in different areas. One of the most critical is the control bus of the hardware devices located in the antennas, which is based on a customized version of the CAN bus. Initial studies were performed to explore alternatives, and one of the candidates is a solution based on EtherCAT. In this paper, the existing architecture is presented and a new architecture is proposed that would not only be compatible with the existing hardware devices but also prepare the ground for the new subsystems that come with the ALMA 2030 initiatives. This document reports the progress achieved in a proof-of-concept project that explores the possibility of embedding the existing ALMA monitor & control data structure into EtherCAT frames and using EtherCAT as the main communication protocol to control hardware devices in all the subsystems that comprise the ALMA telescope.
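  As a purely illustrative sketch of embedding a CAN-style monitor point into an EtherCAT process-data payload, the following Python snippet packs and unpacks one monitor point; the field layout (relative CAN address, timestamp, raw value) is an assumption made for illustration, not the actual ALMA frame format.

    import struct
    import time

    def pack_monitor_point(rca: int, raw_value: bytes) -> bytes:
        """Serialize one monitor point: 32-bit relative CAN address (RCA),
        64-bit timestamp in nanoseconds, 1-byte length, then the raw payload."""
        header = struct.pack("<IQB", rca, time.time_ns(), len(raw_value))
        return header + raw_value

    def unpack_monitor_point(frame: bytes):
        """Inverse operation, as the EtherCAT master side might perform it."""
        rca, timestamp_ns, length = struct.unpack_from("<IQB", frame)
        payload = frame[struct.calcsize("<IQB"):][:length]
        return rca, timestamp_ns, payload

    # Hypothetical 4-byte temperature reading at RCA 0x0A001.
    frame = pack_monitor_point(0x0A001, struct.pack("<f", 23.5))
    print(unpack_monitor_point(frame))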
Slides THBL02 [6.969 MB]  
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2021-THBL02  
About • Received ※ 10 October 2021       Accepted ※ 18 January 2022       Issue date ※ 06 February 2022  
 
THBL03 The State of Containerization in CERN Accelerator Controls 829
 
  • R. Voirin, T. Oulevey, M. Vanden Eynden
    CERN, Geneva, Switzerland
 
  In industry, containers have dramatically changed the way system administrators deploy and manage applications. Developers are gradually switching from delivering monolithic applications to microservices. Using containerization solutions provides many advantages: applications run in an isolated manner, decoupled from the operating system and its libraries, and run-time dependencies, including access to persistent storage, are clearly declared. However, introducing these new techniques requires significant modifications of the existing computing infrastructure as well as a cultural change. This contribution will explore practical use cases for containers and container orchestration within the CERN Accelerator Controls domain. We will discuss the challenges that have arisen in this field over the past two years and the technical choices we have made to tackle them. We will also outline the foreseen future developments.
Slides THBL03 [0.863 MB]  
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2021-THBL03  
About • Received ※ 08 October 2021       Revised ※ 24 October 2021       Accepted ※ 06 January 2022       Issue date ※ 28 February 2022
 
THBL04 Kubernetes for EPICS IOCs 835
 
  • G. Knap, T.M. Cobb, Y. Moazzam, U.K. Pedersen, C.J. Reynolds
    DLS, Oxfordshire, United Kingdom
 
  EPICS IOCs at Diamond Light Source are built, deployed, and managed by a set of in-house tools that were implemented 15 years ago. This paper will detail a proof of concept to demonstrate replacing these tools and processes with modern industry standards. IOCs are packaged in containers with their unique dependencies included. IOC images are generic, and a single image is required for all containers that control a given class of device. Configuration is provided to the container in the form of a start-up script only. The configuration allows the generic IOC image to bootstrap a container for a unique IOC instance. This approach keeps the number of images required to a minimum. Container orchestration for all beamlines in the facility is provided through a central Kubernetes cluster. The cluster has remote nodes that reside within each beamline network to host the IOCs for the local beamline. All source code, images and individual IOC configurations are held in repositories. Build and deployment to the production registries are handled by continuous integration. Finally, a development container provides a portable development environment for maintaining and testing IOC code.
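  The idea of a generic IOC image bootstrapped solely by a per-instance start-up script can be sketched as the following Python container entry point; the mount path and binary location are hypothetical and do not reflect Diamond's actual implementation.

    import os
    import sys

    # The only per-instance input: a start-up script mounted into the container
    # (for example from a Kubernetes ConfigMap). Paths are illustrative.
    STARTUP_SCRIPT = "/config/st.cmd"
    IOC_BINARY = "/epics/ioc/bin/linux-x86_64/ioc"

    def main() -> None:
        if not os.path.isfile(STARTUP_SCRIPT):
            sys.exit(f"missing start-up script: {STARTUP_SCRIPT}")
        # Replace this bootstrap process with the IOC so the IOC receives
        # container signals directly and the image stays fully generic.
        os.execv(IOC_BINARY, [IOC_BINARY, STARTUP_SCRIPT])

    if __name__ == "__main__":
        main()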
Slides THBL04 [0.640 MB]  
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2021-THBL04  
About • Received ※ 11 October 2021       Revised ※ 14 October 2021       Accepted ※ 23 February 2022       Issue date ※ 01 March 2022
 
THPV045 Monitoring, Logging and Alarm Systems for the Cherenkov Telescope Array: Architecture and Deployment
 
  • A. Costa, E. Sciacca
    INAF-OACT, Catania, Italy
 
  The Array Control and Data Acquisition System (ACADA) is responsible for the telescope control operations in the Cherenkov Telescope Array (CTA). We present the software architecture of the Monitoring, Logging and Alarm subsystems in the ACADA framework. The Monitoring System (MON) is the subsystem that addresses the acquisition of monitoring and logging information from the CTA array elements. MON will also support corrective and predictive maintenance to minimize the downtime of the system. The Array Alarm System (AAS) is the subsystem responsible for collecting alarms from telescopes, array calibration and environmental monitoring instruments, and the ACADA system itself. The final software deployment is expected to manage about 200,000 monitoring points sampled at between 1 and 5 Hz, yielding a maximum write data rate of 26 Mbps for the monitoring system including alarms and a maximum rate of about 1 Gbps for the aggregated log information. This paper presents the architecture and deployment of the MON and AAS subsystems, which are currently being tested with a simulated set of monitoring points and log events.
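  A back-of-envelope check of the quoted figures; the average sampling rates below are assumptions, and the per-sample payload size is derived from them, not taken from the paper.

    # Rough consistency check of the 26 Mbps monitoring write rate.
    N_POINTS = 200_000          # monitoring points, from the abstract
    TARGET_MBPS = 26            # maximum write rate, from the abstract

    for avg_rate_hz in (1, 2, 5):   # abstract: sampled between 1 and 5 Hz
        samples_per_s = N_POINTS * avg_rate_hz
        implied_bytes = TARGET_MBPS * 1e6 / samples_per_s / 8
        print(f"{avg_rate_hz} Hz average -> {samples_per_s:,} samples/s, "
              f"~{implied_bytes:.1f} bytes/sample within {TARGET_MBPS} Mbps")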
 
THPV046 Virtualized Control System Infrastructure at LINAC Project, PINSTECH 975
 
  • N.U. Saqib, F. Sher
    PINSTECH, Islamabad, Pakistan
 
  IT infrastructure is the backbone of modern big-science accelerator control systems. The Accelerator Controls and Electronics (ACE) Group is responsible for controls, electronics and IT infrastructure for Medical and Industrial NDT (Non-Destructive Testing) linear accelerator prototypes at the LINAC Project, PINSTECH. All of the control system components, such as EPICS IOCs, operator interfaces, databases and various servers, are virtualized using VMware vSphere and VMware Horizon technologies. This paper describes the current IT design and development structure that supports the control systems of the linear accelerators efficiently and effectively.
Poster THPV046 [1.174 MB]  
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2021-THPV046  
About • Received ※ 10 October 2021       Revised ※ 20 October 2021       Accepted ※ 21 November 2021       Issue date ※ 06 January 2022
 
THPV047 Status of High Level Application Development for HEPS 978
 
  • X.H. Lu
    IHEP CSNS, Guangdong Province, People’s Republic of China
  • H.F. Ji, Y. Jiao, J.Y. Li, C. Meng, Y.M. Peng, G. Xu, Q. Ye, Y.L. Zhao
    IHEP, Beijing, People’s Republic of China
 
  The High Energy Photon Source (HEPS) is a 6 GeV, 1.3 km, ultralow-emittance ring-based light source in China. Construction started in 2019, and development of the HEPS beam commissioning software began this year. It was planned to use EPICS as the control system and Python as the main development tool for high level applications (HLAs). Python has very rich and mature modules to meet the challenging requirements of HEPS commissioning and operation, such as PyQt5 for graphical user interface (GUI) application development, and PyEPICS and P4P for communicating with EPICS. A client-server framework was proposed for online calculations and always-running programs. Model-based control is also an important design criterion: all online commissioning software should be easily connected to a powerful virtual accelerator (VA) for comparison and for predicting actual beam behaviour. It was planned to use elegant and Ocelot as the core calculation model of the VA.
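  A minimal sketch of the kind of PyEPICS-based access layer such HLAs build on; the process-variable names below are hypothetical examples, not actual HEPS PVs.

    from epics import PV

    # Hypothetical process variables for one corrector magnet.
    setpoint = PV("HEPS:SR:COR01:CURRENT_SP")
    readback = PV("HEPS:SR:COR01:CURRENT_RB")

    def on_readback(pvname=None, value=None, **kwargs):
        """Callback invoked by PyEPICS whenever the readback updates."""
        print(f"{pvname} = {value:.4f} A")

    readback.add_callback(on_readback)

    # Write a new setpoint and read the value back synchronously.
    setpoint.put(1.25, wait=True)
    print("current readback:", readback.get())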
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2021-THPV047  
About • Received ※ 10 October 2021       Revised ※ 20 October 2021       Accepted ※ 21 November 2021       Issue date ※ 26 February 2022
 
THPV048 Novel Control System for the LHCb Scintillating Fibre Tracker Detector Infrastructure 981
 
  • M. Ostrega, M.A. Ciupinski, S. Jakobsen, X. Pons
    CERN, Geneva, Switzerland
 
  During the Long Shutdown 2 of the LHC at CERN, the LHCb detector is being upgraded to cope with higher instantaneous luminosities. The largest of the new trackers is based on scintillating fibres (SciFi) read out by Silicon PhotoMultipliers (SiPMs). The SiPMs will be cooled down to -40°C to minimize noise. For performance and space reasons, the cooling lines are vacuum insulated. Ionizing radiation requires the readout electronics to be detached from the Pirani gauges and displaced to a lower-radiation area. To avoid condensation inside the SiPM boxes, the atmosphere inside must have a dew point of at most -45°C. The low dew point will be achieved by flushing a dry gas through the box. 576 flowmeter devices will be installed to monitor the gas flow continuously. A Condensation Prevention System (CPS) has been introduced because condensation was observed. The CPS powers heating wires installed around the SiPM boxes and the vacuum bellows insulating the cooling lines. The CPS also includes 672 temperature sensors to monitor that all parts are warmer than the cavern dew point. The temperature readout systems are based on multiplexing technology in the front-end and a PLC in the back-end.
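  The core comparison performed by such a condensation prevention system can be sketched with the standard Magnus dew-point approximation; the sensor values, margin and control action below are illustrative assumptions, not the actual PLC logic.

    import math

    def dew_point_c(temp_c: float, rel_humidity_pct: float) -> float:
        """Magnus approximation of the dew point in degrees Celsius."""
        a, b = 17.62, 243.12
        alpha = a * temp_c / (b + temp_c) + math.log(rel_humidity_pct / 100.0)
        return b * alpha / (a - alpha)

    # Hypothetical cavern conditions and one multiplexed surface temperature.
    cavern_dew_point = dew_point_c(temp_c=19.0, rel_humidity_pct=45.0)
    surface_temp_c = 14.2     # e.g. one of the 672 temperature sensors
    margin_c = 2.0            # illustrative safety margin

    if surface_temp_c < cavern_dew_point + margin_c:
        print("condensation risk: keep heating wires powered")
    else:
        print("surface safely above the cavern dew point")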
Poster THPV048 [8.181 MB]  
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2021-THPV048  
About • Received ※ 10 October 2021       Revised ※ 22 October 2021       Accepted ※ 22 November 2021       Issue date ※ 21 December 2021
 
THPV049 Virtualisation and Software Appliances as Means for Deployment of SCADA in Isolated Systems 985
 
  • P. Golonka, L. Davoine, M.Z. Zimny, L. Zwalinski
    CERN, Meyrin, Switzerland
 
  The paper discusses the use of virtualisation as a way to deliver a complete pre-configured SCADA (Supervisory Control And Data Acquisition) application as a software appliance, to ease its deployment and maintenance. For off-premise control systems, it allows deployment to be performed by the local IT service teams without any control-specific knowledge, providing a "turn-key" solution. Virtualising a complete desktop allows the existing feature-rich Human-Machine Interface experience to be delivered and reused for local operation; it also resolves hardware and software compatibility issues at the deployment sites. The approach presented here was employed to provide replicas of the "LUCASZ" cooling system to collaborating laboratories where on-site knowledge of the underlying technologies was not available, which required encapsulating the controls as a "black box" so that, for users, the system is operational soon after power is applied. The approach is generally applicable to international collaborations where control systems are contributed and need to be maintained by remote teams.
Poster THPV049 [2.954 MB]  
DOI • reference for this paper ※ https://doi.org/10.18429/JACoW-ICALEPCS2021-THPV049  
About • Received ※ 08 October 2021       Revised ※ 30 November 2021       Accepted ※ 19 February 2022       Issue date ※ 25 February 2022