THBL —  Control System Infrastructure   (21-Oct-21   13:15—14:30)
Chair: A. Buteau, SOLEIL, Gif-sur-Yvette, France
THBL   Video of full session »Control System Infrastructure« (total time: 01:07:23 h:m:s)  
Paper | Title | Page
THBL01 Control System Management and Deployment at MAX IV 819
  • B. Bertrand, Á. Freitas, V. Hardion
    MAX IV Laboratory, Lund University, Lund, Sweden
  The control systems of large research facilities such as synchrotrons are composed of many different hardware and software parts. Deploying and maintaining such systems requires proper workflows and tools. MAX IV has been using Ansible to manage and deploy its full control system, both software and infrastructure, for quite some time with great success. All required software (i.e. Tango devices, GUIs…) used to be packaged as RPMs (Red Hat Package Manager packages), making deployment and dependency management easy. Using RPMs brings many advantages (a large community, well-tested packages, stability) but also comes with a few drawbacks, mainly the dependency on the release cycle of the operating system. The Python ecosystem is changing quickly, and using recent modules can become challenging with RPMs. We have been investigating conda as an alternative package manager. Conda is a popular open-source package, dependency and environment management system. This paper describes our workflow and experience working with both package managers.
Slides THBL01 [5.899 MB]
Received: 10 October 2021 · Accepted: 21 November 2021 · Issue date: 12 February 2022
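The conda-based workflow described in THBL01 pins dependencies in an environment specification rather than in OS packages. A minimal sketch of such a specification is shown below; the environment name, channels, and version pins are illustrative assumptions, not MAX IV's actual configuration.

```yaml
# Hypothetical conda environment spec for a control-system application.
# Versions are pinned independently of the OS release cycle, which is
# the main advantage over RPM packaging discussed in the paper.
name: ctrl-app
channels:
  - conda-forge
dependencies:
  - python=3.9
  - pytango=9.3    # Tango bindings from conda-forge, not the distro repos
  - pyqt=5.15      # GUI toolkit
  - pip
```

Such a file can be turned into a reproducible environment with `conda env create -f environment.yml`, and the same spec can be deployed across machines by a configuration-management tool such as Ansible.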
THBL02 Exploring Alternatives and Designing the Next Generation of Real-Time Control System for Astronomical Observatories 824
  • T.C. Shen, A. Sepulveda
    ALMA Observatory, Santiago, Chile
  • R.A. Augsburger, S.A. Carrasco, P. Galeas, F. Huenupan, R.S. Seguel
    Universidad de La Frontera, Temuco, Chile
  The ALMA Observatory was inaugurated in 2013; after 8 years of successful operation, obsolescence has started to emerge in different areas. One of the most critical is the control bus of the hardware devices located in the antennas, which is based on a customized version of the CAN bus. Initial studies were performed to explore alternatives, and one of the candidates is a solution based on EtherCAT. In this paper, the existing architecture is presented and a new architecture is proposed, which would not only be compatible with the existing hardware devices but would also prepare the ground for the new subsystems that come with the ALMA 2030 initiatives. This document reports the progress achieved in a proof-of-concept project that explores the possibility of embedding the existing ALMA monitor & control data structure into EtherCAT frames and using EtherCAT as the main communication protocol to control hardware devices in all the subsystems that comprise the ALMA telescope.
Slides THBL02 [6.969 MB]
Received: 10 October 2021 · Accepted: 18 January 2022 · Issue date: 06 February 2022
THBL03 The State of Containerization in CERN Accelerator Controls 829
  • R. Voirin, T. Oulevey, M. Vanden Eynden
    CERN, Geneva, Switzerland
  In industry, containers have dramatically changed the way system administrators deploy and manage applications. Developers are gradually switching from delivering monolithic applications to microservices. Containerization solutions provide many advantages: applications run in an isolated manner, decoupled from the operating system and its libraries, and their run-time dependencies, including access to persistent storage, are clearly declared. However, introducing these new techniques requires significant modifications to the existing computing infrastructure as well as a cultural change. This contribution explores practical use cases for containers and container orchestration within the CERN Accelerator Controls domain. We discuss the challenges that have arisen in this field over the past two years and the technical choices we have made to tackle them, and we outline the foreseen future developments.
Slides THBL03 [0.863 MB]
Received: 08 October 2021 · Revised: 24 October 2021 · Accepted: 06 January 2022 · Issue date: 28 February 2022
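The key properties THBL03 attributes to containerized services, isolation from the host OS and explicitly declared run-time dependencies including persistent storage, can be sketched in a Compose-style service definition. The service name, image path, and volume below are hypothetical, for illustration only.

```yaml
# Hypothetical Compose file for one containerized controls service.
# Everything the service needs at run time is declared here, rather
# than being implied by the host's installed packages.
services:
  settings-service:
    image: registry.example.org/controls/settings:1.4.2  # pinned image tag (hypothetical)
    environment:
      DB_URL: "postgres://db:5432/settings"  # run-time dependency declared explicitly
    volumes:
      - settings-data:/var/lib/settings      # persistent storage declared explicitly
    restart: unless-stopped

volumes:
  settings-data:
```

Because the declaration is self-contained, the same service can be moved between hosts, or handed to an orchestrator, without undocumented dependencies on the underlying operating system.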
THBL04 Kubernetes for EPICS IOCs 835
  • G. Knap, T.M. Cobb, Y. Moazzam, U.K. Pedersen, C.J. Reynolds
    DLS, Oxfordshire, United Kingdom
  EPICS IOCs at Diamond Light Source are built, deployed, and managed by a set of in-house tools that were implemented 15 years ago. This paper will detail a proof of concept to demonstrate replacing these tools and processes with modern industry standards. IOCs are packaged in containers with their unique dependencies included. IOC images are generic, and a single image is required for all containers that control a given class of device. Configuration is provided to the container in the form of a start-up script only. The configuration allows the generic IOC image to bootstrap a container for a unique IOC instance. This approach keeps the number of images required to a minimum. Container orchestration for all beamlines in the facility is provided through a central Kubernetes cluster. The cluster has remote nodes that reside within each beamline network to host the IOCs for the local beamline. All source, images and individual IOC configurations are held in repositories. Build and deployment to the production registries is handled by continuous integration. Finally, a development container provides a portable development environment for maintaining and testing IOC code.  
Slides THBL04 [0.640 MB]
Received: 11 October 2021 · Revised: 14 October 2021 · Accepted: 23 February 2022 · Issue date: 01 March 2022
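The pattern THBL04 describes, a generic per-device-class IOC image bootstrapped by an instance-specific start-up script and scheduled onto a node inside the beamline network, can be sketched as a Kubernetes Deployment. All names below (beamline label, image path, ConfigMap) are hypothetical assumptions, not Diamond's actual configuration.

```yaml
# Hypothetical Kubernetes manifest for one EPICS IOC instance.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bl99-motor-ioc               # hypothetical IOC instance name
spec:
  replicas: 1
  selector:
    matchLabels: {app: bl99-motor-ioc}
  template:
    metadata:
      labels: {app: bl99-motor-ioc}
    spec:
      nodeSelector:
        beamline: bl99               # schedule onto the remote node inside the beamline network
      containers:
        - name: ioc
          # One generic image serves every IOC controlling this class of device.
          image: registry.example.org/epics/motor-ioc:1.0.0
          command: ["/config/st.cmd"]  # instance-specific start-up script only
          volumeMounts:
            - name: config
              mountPath: /config
      volumes:
        - name: config
          configMap:
            name: bl99-motor-ioc-config  # holds st.cmd for this instance
            defaultMode: 0755
```

Only the ConfigMap differs between IOC instances of the same device class, which keeps the number of container images to a minimum, as the paper notes.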