Software Technology Evolution
Paper · Title · Page
MOBL01 The ELT Control System: Recent Developments 37
 
  • G. Chiozzi, L. Andolfato, J. Argomedo, N. Benes, C. Diaz Cano, A. Hoffstadt Urrutia, N. Kornweibel, U. Lampater, F. Pellegrin, M. Schilling, B. Sedghi, H. Sommer, M. Suarez Valles
    ESO, Garching bei Muenchen, Germany
 
  The Extremely Large Telescope (ELT) is a 39 m optical telescope under construction in the Chilean Atacama desert. The design is based on a five-mirror scheme, incorporating Adaptive Optics (AO). The primary mirror consists of 798 segments of 1.4 m diameter. The main control challenges lie in the number of sensors (~25000) and actuators (~15000) to be coordinated, and in the computing performance and low latency required for phasing of the primary mirror and for the AO. We focus on the design and implementation of the supervisory systems and control strategies. This includes a real-time computing (RTC) toolkit to support the implementation of the AO for the telescope and instruments. We also report on the progress made in implementing the control software infrastructure necessary for development, testing and integration. We identify a few lessons learned in the past years of development and the major challenges for the coming phases of the project.
Slides: MOBL01 [6.399 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-MOBL01
Received: 10 October 2021 · Revised: 15 October 2021 · Accepted: 03 November 2021 · Issue date: 25 December 2021
 
MOBL02 Real-Time Framework for ITER Control Systems 45
 
  • W.R. Lee, B. Bauvir, T.H. Tak, A. Žagar
    ITER Organization, St. Paul lez Durance, France
  • P. Karlovsek, M. Knap
    COSYLAB, Control System Laboratory, Ljubljana, Slovenia
  • S. Lee
    KFE, Daejeon, Republic of Korea
  • D.R. Makowski, P. Perek
    TUL-DMCS, Łódź, Poland
  • A. Winter
    MPI/IPP, Garching, Germany
 
  The ITER Real-Time Framework (RTF) is a middleware providing common services and capabilities to build real-time control applications in ITER, such as the Plasma Control System (PCS) and plasma diagnostics. The RTF dynamically constructs applications at runtime from the configuration. The principal building blocks that compose an application process are called Function Blocks (FB), which follow a modular structure pattern. The application configuration defines the information that can influence control behavior, such as the connections among FBs, their corresponding parameters, and event handlers. Consecutive pipeline processing in busy-waiting mode, following a data-driven pattern, minimizes jitter and strengthens the deterministic behavior of the system. In contrast, infrastructural capabilities are managed differently, in the service layer, using non-real-time threads. The deployment configuration covers the final placement of a program instance and the allocation of threads to the appropriate computing infrastructure. In this paper, we introduce the architecture and design patterns of the framework as well as the real-life examples used to benchmark the RTF.
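  As a purely illustrative sketch of the configuration-driven Function Block idea described above (a toy written in Python; the real RTF is C++ and none of the class or configuration names below are part of its API):

    # Toy, configuration-driven "function block" pipeline -- illustration only,
    # not the ITER RTF API. Block types and configuration keys are invented.
    class FunctionBlock:
        """A block consumes its inputs and produces outputs once per processing cycle."""
        def __init__(self, name, params):
            self.name = name
            self.params = params

        def process(self, inputs):
            raise NotImplementedError


    class Scale(FunctionBlock):
        def process(self, inputs):
            return {"out": inputs["in"] * self.params.get("gain", 1.0)}


    class Offset(FunctionBlock):
        def process(self, inputs):
            return {"out": inputs["in"] + self.params.get("offset", 0.0)}


    BLOCK_TYPES = {"Scale": Scale, "Offset": Offset}


    def build_pipeline(config):
        """Construct the application at runtime from its configuration."""
        return [BLOCK_TYPES[c["type"]](c["name"], c.get("params", {})) for c in config]


    if __name__ == "__main__":
        config = [
            {"name": "gain_stage", "type": "Scale", "params": {"gain": 2.0}},
            {"name": "bias_stage", "type": "Offset", "params": {"offset": 0.5}},
        ]
        sample = {"in": 1.0}
        for block in build_pipeline(config):   # consecutive, data-driven processing
            sample = {"in": block.process(sample)["out"]}
        print(sample["in"])                    # 2.5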
Slides: MOBL02 [3.192 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-MOBL02
Received: 10 October 2021 · Accepted: 11 November 2021 · Issue date: 24 January 2022
 
MOBL03 Machine Learning Platform: Deploying and Managing Models in the CERN Control System 50
 
  • J.-B. de Martel, R. Gorbonosov, N. Madysa
    CERN, Geneva, Switzerland
 
  Recent advances make machine learning (ML) a powerful tool to cope with the inherent complexity of accelerators, the large number of degrees of freedom and continuously drifting machine characteristics. A diverse set of ML ecosystems, frameworks and tools are already being used at CERN for a variety of use cases such as optimization, anomaly detection and forecasting. We have adopted a unified approach to model storage, versioning and deployment which accommodates this diversity, and we apply software engineering best practices to achieve the reproducibility needed in the mission-critical context of particle accelerator controls. This paper describes CERN Machine Learning Platform - our central platform for storing, versioning and deploying ML models in the CERN Control Center. We present a unified solution which allows users to create, update and deploy models with minimal effort, without constraining their workflow or restricting their choice of tools. It also provides tooling to automate seamless model updates as the machine characteristics evolve. Moreover, the system allows model developers to focus on domain-specific development by abstracting infrastructural concerns.  
Slides: MOBL03 [0.687 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-MOBL03
Received: 07 October 2021 · Accepted: 16 November 2021 · Issue date: 07 February 2022
 
MOBL04 Karabo Data Logging: InfluxDB Backend and Grafana UI 56
 
  • G. Flucke, V. Bondar, R. Costa, W. Ehsan, S.G. Esenov, R. Fabbri, G. Giovanetti, D. Goeries, S. Hauf, D.G. Hickin, A. Klimovskaia, A. Lein, L.G. Maia, D. Mamchyk, A. Parenti, G. Previtali, A. Silenzi, J. Szuba, M. Teichmann, K. Wrona, C. Youngman
    EuXFEL, Schenefeld, Germany
  • D.P. Spruce
    MAX IV Laboratory, Lund University, Lund, Sweden
 
  The photon beam lines and instruments at the European XFEL (EuXFEL) are operated using the Karabo* control system, which has been developed in house since 2011. Monitoring and incident analysis require quick access to historic values of control data. While Karabo’s original custom-built, text-file-based data logging system suits small systems well, a time-series database generally offers faster data access, as well as advanced data filtering, aggregation and reduction options. EuXFEL has chosen InfluxDB** as the backend, in operation since summer 2020. Historic data can be displayed as before via the Karabo GUI, or now also via the powerful Grafana*** web interface; the latter is used heavily, for example, in the new Data Operation Center of the EuXFEL. This contribution describes the InfluxDB setup, its transparent integration into Karabo and the experience gained since it entered operation (an illustrative query is sketched after the references below).
* Steffen Hauf et al., J. Synchrotron Rad. (2019). 26, 1448-1461
** https://docs.influxdata.com/influxdb/
*** https://grafana.com/grafana/
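  As an illustration of the kind of access a time-series backend enables (using the official InfluxDB 2.x Python client; the bucket, measurement and field names below are invented and do not reflect the actual Karabo schema):

    # Query one hour of historic values of a control parameter from InfluxDB.
    # Bucket/measurement/field names are placeholders, not the EuXFEL setup.
    from influxdb_client import InfluxDBClient

    client = InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org")
    flux = '''
        from(bucket: "controls")
          |> range(start: -1h)
          |> filter(fn: (r) => r._measurement == "motor1" and r._field == "position")
          |> aggregateWindow(every: 1m, fn: mean)
    '''
    for table in client.query_api().query(flux):
        for record in table.records:
            print(record.get_time(), record.get_value())
    client.close()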
 
Slides: MOBL04 [3.204 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-MOBL04
Received: 13 October 2021 · Accepted: 16 November 2021 · Issue date: 06 January 2022
 
MOBL05 Photon Science Controls: A Flexible and Distributed LabVIEW Framework for Laser Systems 62
 
  • B.A. Davis, B.T. Fishler, R.J. McDonald
    LLNL, Livermore, California, USA
 
  Funding: This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
LabVIEW software is often chosen for developing small-scale control systems, especially by novice software developers. However, because of its ease of use, many functional LabVIEW applications suffer from limited extensibility and scalability. Developing highly extensible and scalable applications requires significant skill and time investment. To close this gap between new and experienced developers, we present an object-oriented application framework that offloads complex architecture tasks from the developer. The framework provides native functionality for data acquisition, logging, and publishing over HTTP and WebSocket, with extensibility for adding further capabilities. The system is scalable and supports both single-process applications and small to medium-sized distributed systems. By leveraging the framework, developers can produce robust applications that are easily integrated into a unified architecture for simple and distributed systems. This allows for decreased system development time, improved onboarding for new developers, and simple framework extension for new capabilities.
 
Slides: MOBL05 [3.178 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-MOBL05
Received: 09 October 2021 · Accepted: 16 November 2021 · Issue date: 14 March 2022
 
MOPV024 vscode-epics, a VSCode Module to Enlighten Your EPICS Code 179
 
  • V. Nadot, A. Gaget, F. Gohier, F. Gougnaud, P. Lotrus, S. Tzvetkov
    CEA-IRFU, Gif-sur-Yvette, France
 
  vscode-epics is a Visual Studio Code module developed by CEA Irfu that aims to enlighten your EPICS code. This module makes developers’ lives easier, improves code quality and helps standardize EPICS code. It provides syntax highlighting, snippets and header templates for EPICS files, and provides snippets for WeTest*. This VSCode module is based on a Visual Studio Code language extension and uses basic JSON files that make feature addition easy. The number of downloads increases version after version, and the feedback we receive motivates us to keep maintaining it for the EPICS community. Since 2019, some laboratories of the EPICS community have participated in the improvement of the module, and it seems to have a promising future (linter, snippet improvements, specific language support, etc.). The module is available on the Visual Studio Code marketplace** and on the EPICS extensions GitHub***. CEA Irfu is open to bug reports, enhancement suggestions and merge requests to continuously improve vscode-epics.
* https://github.com/epics-extensions/WeTest
** https://marketplace.visualstudio.com/items?itemName=nsd.vscode-epics
*** https://github.com/epics-extensions/vscode-epics
 
Poster: MOPV024 [0.508 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-MOPV024
Received: 10 October 2021 · Accepted: 04 November 2021 · Issue date: 26 December 2021
 
MOPV025 TangoGraphQL: A GraphQL Binding for Tango Control System Web-Based Applications 181
 
  • J.L. Pons
    ESRF, Grenoble, France
 
  Web-based applications have seen a huge increase in popularity in recent years, replacing standalone applications. GraphQL provides a complete and understandable description of the data exchange between client browsers and back-end servers. GraphQL is a powerful query language that allows APIs to evolve easily and clients to query only what is needed. GraphQL also offers a WebSocket-based protocol which fits the Tango event system perfectly. Many popular tools around GraphQL offer convenient ways to browse and query data. TangoGraphQL is a pure C++ http(s) server which exports a GraphQL binding for the Tango C++ API. TangoGraphQL also exports a GraphiQL web application which provides an interactive description of the API and allows queries to be tested (an example query is sketched below). TangoGraphQL* has been designed with the aim of maximizing the performance of JSON data serialization; a binary transfer mode is also foreseen.
* https://gitlab.com/tango-controls/TangoGraphQL
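  As an illustration of what such a query might look like from a client (the endpoint and the schema fields below are assumptions made for this sketch; the actual TangoGraphQL schema may differ):

    # Hypothetical GraphQL query sent as a plain HTTP POST; field names are invented.
    import requests

    query = """
    {
      device(name: "sys/tg_test/1") {
        attributes {
          name
          value
        }
      }
    }
    """
    response = requests.post("http://localhost:8080/graphql", json={"query": query})
    response.raise_for_status()
    print(response.json())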
 
Poster: MOPV025 [1.374 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-MOPV025
Received: 08 October 2021 · Revised: 18 October 2021 · Accepted: 04 November 2021 · Issue date: 17 November 2021
 
MOPV026 Integrating OPC UA Devices in EPICS 184
 
  • R. Lange
    ITER Organization, St. Paul lez Durance, France
  • R.A. Elliot, K. Vestin
    ESS, Lund, Sweden
  • B. Kuner
    BESSY GmbH, Berlin, Germany
  • C. Winkler
    HZB, Berlin, Germany
  • D. Zimoch
    PSI, Villigen PSI, Switzerland
 
  OPC Unified Architecture (OPC UA) is an open, platform-independent communication architecture for industrial automation developed by the OPC Foundation. Its key characteristics include a rich service-oriented architecture, enhanced security functionality and an integral information model, allowing complex data to be mapped into an OPC UA namespace. With its increasing popularity in the industrial world, OPC UA is an excellent strategic choice for integrating a wealth of different COTS devices and controllers into an existing control system infrastructure. The security functions extend its application to larger networks and across firewalls, while the support of user-defined data structures and fully symbolic addressing ensure flexibility, separation of concerns and robustness in the user interfaces. In an international collaboration, a generic OPC UA support for the EPICS control system toolkit has been developed. It is used in operation at several facilities, integrating a variety of commercial controllers and systems. We describe the design and implementation approach, discuss use cases and software quality aspects, report on performance and present a roadmap of the next development steps.
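  Purely to illustrate the symbolic addressing mentioned above (this is a generic OPC UA client read using the python-opcua package, not the EPICS device support itself; the endpoint and node identifier are placeholders):

    # Read a single value from an OPC UA server using symbolic addressing.
    from opcua import Client  # python-opcua package

    client = Client("opc.tcp://plc.example.org:4840")    # placeholder endpoint
    client.connect()
    try:
        node = client.get_node("ns=2;s=Pump1.Pressure")  # placeholder node id
        print(node.get_value())
    finally:
        client.disconnect()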
Poster: MOPV026 [1.726 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-MOPV026
Received: 10 October 2021 · Accepted: 04 November 2021 · Issue date: 06 March 2022
 
MOPV027 The Evolution of the DOOCS C++ Code Base 188
 
  • L. Fröhlich, A. Aghababyan, S. Grunewald, O. Hensler, U. Jastrow, R. Kammering, H. Keller, V. Kocharyan, M. Mommertz, F. Peters, A. Petrosyan, G. Petrosyan, L.P. Petrosyan, V. Petrosyan, K. Rehlich, V. Rybnikov, G. Schlesselmann, J. Wilgen, T. Wilksen
    DESY, Hamburg, Germany
 
  This contribution traces the development of DESY’s control system DOOCS from its origins in 1992 to its current state as the backbone of the European XFEL and FLASH accelerators and of the future Petra IV light source. Some details of the continual modernization and refactoring efforts on the 1.5 million line C++ codebase are highlighted.  
Poster: MOPV027 [0.912 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-MOPV027
Received: 14 October 2021 · Accepted: 21 December 2021 · Issue date: 07 March 2022
 
MOPV030 Application of EPICS Software in Linear Accelerator 193
 
  • Y.H. Guo, H.T. Liu, B.J. Wang, R. Wang, N. Xie
    IMP/CAS, Lanzhou, People’s Republic of China
 
  The Institute of Modern Physics (IMP) operates two linear accelerator facilities: CAFe (China ADS front-end demo linac) and LEAF (Low Energy Accelerator Facility). The main equipment of the LEAF facility consists of an ion source, a LEBT (Low Energy Beam Transport) line, an RFQ (Radio Frequency Quadrupole) and several experimental terminals. Compared with LEAF, CAFe has more equipment, adding a MEBT (Middle Energy Beam Transport) line and four sets of superconducting cavity strings at the back end of the RFQ. The EPICS Archiver application and Alarm system are used in the commissioning and operation of the linac equipment. Following the refined control requirements of the facility sites, we have completed the software upgrade and deployment of the archiver and alarm systems. The upgraded software has made the operation of the linacs more effective in terms of monitoring, fault diagnosis and system recovery, and more user-friendly as well.
Poster: MOPV030 [0.692 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-MOPV030
Received: 09 October 2021 · Revised: 20 November 2021 · Accepted: 24 February 2022 · Issue date: 16 March 2022
 
MOPV031 The Deployment Technology of EPICS Application Software Based on Docker 197
 
  • R. Wang, Y.H. Guo, B.J. Wang, N. Xie
    IMP/CAS, Lanzhou, People’s Republic of China
 
  StreamDevice, a general-purpose EPICS driver for string-based device interfaces, has been widely used to control devices with network and serial ports in the CAFe facility: for example, the remote control of magnet power supplies, vacuum gauges, and various vacuum valves or pumps, as well as the readout and control of the Gauss meters used for magnetic field measurement. During on-site software development, we found that deploying StreamDevice caused various errors related to its dependence on the software environment and library functions, arising from differing operating system environments and EPICS tool versions. This makes StreamDevice deployment very time-consuming and labor-intensive. To ensure that StreamDevice works in a unified environment and can be deployed and migrated efficiently, Docker container technology is used to encapsulate the software together with its application environment. Images are uploaded to a private Aliyun registry so that software developers can easily download and use them.
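  A minimal sketch of the deployment step this enables, using the Docker SDK for Python (the registry address, credentials, image name and container settings are placeholders, not the actual CAFe configuration):

    # Pull a pre-built StreamDevice IOC image from a private registry and run it.
    # Registry, credentials and image names are placeholders for illustration only.
    import docker

    client = docker.from_env()
    client.login(registry="registry.example.aliyuncs.com",
                 username="user", password="secret")        # placeholder credentials
    client.images.pull("registry.example.aliyuncs.com/epics/streamdevice-ioc", tag="7.0")
    container = client.containers.run(
        "registry.example.aliyuncs.com/epics/streamdevice-ioc:7.0",
        name="magnet-ps-ioc",
        network_mode="host",   # expose Channel Access without explicit port mapping
        detach=True,
    )
    print(container.status)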
Poster: MOPV031 [0.405 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-MOPV031
Received: 09 October 2021 · Revised: 17 October 2021 · Accepted: 06 January 2022 · Issue date: 11 February 2022
 
MOPV032 Design of a Component-Oriented Distributed Data Integration Model 202
 
  • Z. Ni, L. Li, J. Liu, J. Luo, X. Zhou
    CAEP, Sichuan, People’s Republic of China
 
  The control system of a large scientific facility is composed of several heterogeneous control systems. As time goes by, the facility needs to be continuously upgraded, and the control system needs to be upgraded as well. This is a challenge for the integration of complex and large-scale heterogeneous systems. This article describes the design of a data integration model based on component technology, software middleware (Apache Thrift*) and a real-time database. The realization of this model hides the relevant details of the software middleware, encapsulates remote data acquisition as local function calls, combines data with complex calculations through scripts, and can be assembled into new components.
*The Apache Thrift software framework, for scalable cross-language services development, combines a software stack with a code generation engine to build services that work efficiently.
 
Poster: MOPV032 [1.325 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-MOPV032
Received: 09 October 2021 · Accepted: 04 November 2021 · Issue date: 19 February 2022
 
MOPV033 Web Client for Panic Alarms Management System 206
 
  • M. Nabywaniec, M. Gandor, P.P. Goryl, Ł. Żytniak
    S2Innovation, Kraków, Poland
 
  Alarms are one of the most important aspects of control systems. Each control system can face unexpected issues, which demand fast and precise resolution. As a control system grows, more engineers need access to the alarm list in order to focus on the most important alarms. Our objective was to allow users to access the alarms quickly, remotely and without special software. In line with current trends in the IT community, creating a web application turned out to be a perfect solution. Our application is the extension and web equivalent of the current Panic GUI application. It allows constant remote access using just a web browser, which is present on every machine, including mobile phones and tablets. Access to the different functionalities can be restricted to users with the appropriate roles. Alarms can be easily added and managed from the web browser, and new data sources can be added as well. From each data source an attribute can be extracted, and multiple attributes can be combined into a composer, which serves as the basis for further analysis or alarm creation.
Poster: MOPV033 [0.626 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-MOPV033
Received: 09 October 2021 · Revised: 25 October 2021 · Accepted: 04 November 2021 · Issue date: 06 January 2022
 
MOPV034 Migration of Tango Controls Source Code Repositories 209
 
  • M. Liszcz, M. Celary, P.P. Goryl, K. Kedron
    S2Innovation, Kraków, Poland
  • G. Abeillé
    SOLEIL, Gif-sur-Yvette, France
  • B. Bertrand
    MAX IV Laboratory, Lund University, Lund, Sweden
  • R. Bourtembourg, A. Götz
    ESRF, Grenoble, France
  • T. Braun
    byte physics e.K., Berlin, Germany
  • A.F. Joubert
    SARAO, Cape Town, South Africa
  • A. López Sánchez, C. Pascual-Izarra, S. Rubio-Manrique
    ALBA-CELLS Synchrotron, Cerdanyola del Vallès, Spain
  • L. Pivetta
    Elettra-Sincrotrone Trieste S.C.p.A., Basovizza, Italy
 
  Funding: Tango Community
At the turn of 2020/2021, the Tango community faced the challenge of a massive migration of all Tango software repositories from GitHub to GitLab. The motivation has been a change in the pricing model of the Travis CI provider and the shutdown of the JFrog Bintray service used for artifact hosting. GitLab has been chosen as a FOSS-friendly platform for storing both the code and build artifacts and for providing CI/CD services. The migration process faced several challenges, both technical, like redesign and rewrite of CI pipelines, and non-technical, like coordination of actions impacting multiple interdependent repositories. This paper explains the strategies adopted for migration, the outcomes, and the impact on the Tango Controls collaboration.
 
Poster: MOPV034 [0.181 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-MOPV034
Received: 10 October 2021 · Accepted: 04 November 2021 · Issue date: 28 November 2021
 
MOPV035 Development of Alarm and Monitoring System Using Smartphone 214
 
  • W.S. Cho
    PAL, Pohang, Republic of Korea
 
  To identify device problems remotely, we developed a new alarm system. Its main functions are real-time monitoring of EPICS PV data, data storage, and archiving of data when an alarm occurs. In addition, an alarm is transmitted in real time through an app on the smartphone to communicate the situation to the machine engineers of PLS-II. The system uses InfluxDB to store data. A new program written in Java was developed for data acquisition and analysis and for recognizing beam-dump conditions. Furthermore, the user interface is provided by web-based Android and iOS smartphone applications developed with Vue.js and Node.js. Using this system, we are able to analyse the cause and check the data in real time when an alarm occurs. In this paper, we introduce the design of the alarm system and the transmission of alarms to the application.
Poster: MOPV035 [0.430 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-MOPV035
Received: 05 October 2021 · Revised: 22 October 2021 · Accepted: 04 November 2021 · Issue date: 25 January 2022
 
MOPV036 Porting Control System Software From Python 2 to 3 - Challenges and Lessons 217
 
  • A.F. Joubert, M.T. Ockards, S. Wai
    SARAO, Cape Town, South Africa
 
  Obsolescence is one of the challenges facing all long-term projects. It not only affects hardware platforms, but also software. Python 2.x reached official End Of Life status on 1 January 2020. In this paper we review our efforts to port to the replacement, Python 3.x. While the two versions are very similar, there are important differences which can lead to incompatibility or changes in behaviour. We discuss our motivation and strategy for porting our code base of approximately 200 k source lines of code over 20 Python packages. This includes aspects such as internal and external dependencies, legacy and proprietary software that cannot be easily ported, testing and verification, and why we selected a phased approach rather than a "big bang". We also report on the challenges and lessons learnt - notably why good test coverage is so important for software maintenance. Our application is the 64-antenna MeerKAT radio telescope in South Africa, a precursor to the Square Kilometre Array.
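  Two of the behavioural differences alluded to above, shown the way they are typically handled during such a port (a generic illustration, not code from the MeerKAT code base):

    # Behaviour changes that a Python 2 -> 3 port must handle explicitly.

    # 1) Integer division: "/" truncated in Python 2 but returns a float in Python 3.
    samples, chunks = 7, 2
    assert samples / chunks == 3.5   # Python 3 behaviour
    assert samples // chunks == 3    # use "//" where truncating division was intended

    # 2) Text vs bytes: I/O returns bytes, which no longer mix silently with str.
    raw = b"OK 42\n"                 # e.g. a reply read from a socket
    text = raw.decode("ascii")       # an explicit decode is now required
    status, value = text.split()
    assert status == "OK" and int(value) == 42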
Poster: MOPV036 [2.277 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-MOPV036
Received: 11 October 2021 · Accepted: 04 February 2022 · Issue date: 03 March 2022
 
MOPV037 ALBA Controls System Software Stack Upgrade 222
 
  • G. Cuní, F. Becheri, S. Blanch-Torné, C. Falcon-Torres, C. Pascual-Izarra, Z. Reszela, S. Rubio-Manrique
    ALBA-CELLS Synchrotron, Cerdanyola del Vallès, Spain
 
  ALBA, a 3rd generation synchrotron light source located near Barcelona in Spain, has been in operation since 2012. During the last 10 years, updates of ALBA’s control system were severely limited in order to prevent disruptions of production equipment, at the cost of having to deal with hardware and software obsolescence, which increased the effort of maintenance and enhancements. The construction of the second-phase new beamlines accelerated the renewal of the software stack. In order to limit the number of supported platforms, we also gradually upgraded the already operational subsystems. We are in the process of switching to the Debian OS, upgrading to the Tango 9 control system framework, migrating the Tango Archiving System to HDB++, migrating our code to Python 3, and migrating our GUIs to PyQt5 and PyQtGraph, among other changes. In order to ensure project quality and to facilitate future upgrades, we try to automate testing, packaging and configuration management with CI/CD pipelines using, among others, the following tools: pytest, Docker, GitLab-CI and Salt. In this paper, we present our strategy for this project and the current status of the different upgrades, and we share the lessons learnt.
Poster: MOPV037 [0.338 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-MOPV037
Received: 08 October 2021 · Revised: 22 October 2021 · Accepted: 04 November 2021 · Issue date: 24 November 2021
 
MOPV038 The EPIC(S) Battle Between Individualism and Collectivism
 
  • S.C.F. Rose
    ESS, Lund, Sweden
 
  ESS uses a customized version of the EPICS build system in order to manage a small number of consistent EPICS environments for all of the integrators on site. To maintain this, we use GitLab CI/CD in particular to build, test, and deploy our EPICS modules. We present our work on how we maintain control over our EPICS environment while still allowing integrators to easily build and test their control software.
Poster: MOPV038 [0.848 MB]
 
MOPV039 UCAP: A Framework for Accelerator Controls Data Processing @ CERN 230
 
  • L. Cseppentő, M. Büttner
    CERN, Geneva, Switzerland
 
  The Unified Controls Acquisition and Processing (UCAP) framework provides a means to facilitate and streamline data processing in the CERN Accelerator Control System. UCAP’s generic structure is capable of tackling classic "Acquisition - Transformation - Publishing/Presentation" use cases, ranging from simple aggregations to complex machine reports and pre-processing of software interlock conditions. In addition to enabling end-users to develop data transformations in Java or Python and maximising integration with other controls sub-systems, UCAP puts an emphasis on offering self-service capabilities for deployment, operation and monitoring. This ensures that accelerator operators and equipment experts can focus on developing domain-specific transformation algorithms, without having to pay attention to typical IT tasks, such as process management and system monitoring. UCAP is already used by Linac4, PSB and SPS operations and will be used by most CERN accelerators, including LHC by the end of 2021. This contribution presents the UCAP framework and gives an insight into how we have productively combined modern agile development with conservative technical choices.  
Poster: MOPV039 [7.998 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-MOPV039
Received: 09 October 2021 · Accepted: 04 November 2021 · Issue date: 20 December 2021
 
MOPV040 Introducing Python as a Supported Language for Accelerator Controls at CERN 236
 
  • P.J. Elson, C. Baldi, I. Sinkarenko
    CERN, Geneva, Switzerland
 
  In 2019, Python was adopted as an officially supported language for interacting with CERN’s accelerator controls. In practice, this change of status was as much pragmatic as it was progressive - Python has been available as part of the underlying operating system for over a decade and unofficial Python interfaces to controls have existed since at least 2015. So one might ask: what really changed when Python’s adoption became official? This paper will discuss what it takes to officially support Python in a controls environment and will focus on the cultural and technological shifts involved in running Python operationally. It will highlight some of the infrastructure that has been put in place at CERN to facilitate a stable and user-friendly Python platform, as well as some of the key decisions that have led to Python thriving in CERN’s accelerator controls domain. Given its general nature, it is hoped that the approach presented in this paper can serve as a reference for other scientific organisations from a broad range of fields who are considering the adoption of Python in an operational context.  
Poster: MOPV040 [2.133 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-MOPV040
Received: 09 October 2021 · Revised: 15 October 2021 · Accepted: 04 November 2021 · Issue date: 12 January 2022
 
MOPV041 Modernisation of the Toolchain and Continuous Integration of Front-End Computer Software at CERN 242
 
  • P. Mantion, S. Deghaye, L. Fiszer, F. Irannejad, J. Lauener, M. Voelkle
    CERN, Geneva, Switzerland
 
  Building C++ software for low-level computers requires carefully tested frameworks and libraries. The major difficulties in building C++ software are ensuring that the artifacts are compatible with the target system (OS, Application Binary Interface) and that transitively dependent libraries are compatible when linked together. Thus developers/maintainers must be provided with efficient tooling for friction-less workflows: standardisation of the project description and build, automatic CI, and a flexible development environment. The open-source community, with services like GitHub and GitLab, has set high expectations with regard to developer user experience. This paper describes how we leveraged Conan and CMake to standardise the build of C++ projects, avoid "dependency hell" and provide an easy way to distribute C++ packages. A CI system orchestrated by Jenkins and based on automatic job definition and in-source, versioned configuration has been implemented. The developer experience is further enhanced by wrapping the common flows (compile, test, release) into a command-line tool, which also helps the transition from the legacy build system (legacy makefiles, SVN).
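  A minimal sketch of the standardised project description this approach relies on, written as a Conan 2 recipe (the package name, version and dependency are placeholders, not CERN's internal libraries):

    # conanfile.py -- illustrative recipe; names and versions are placeholders.
    from conan import ConanFile
    from conan.tools.cmake import CMake, cmake_layout


    class FecLibRecipe(ConanFile):
        name = "feclib"
        version = "1.0.0"
        settings = "os", "compiler", "build_type", "arch"  # ABI-relevant settings
        requires = "fmt/10.2.1"                            # resolved transitively by Conan
        generators = "CMakeToolchain", "CMakeDeps"
        exports_sources = "CMakeLists.txt", "src/*", "include/*"

        def layout(self):
            cmake_layout(self)

        def build(self):
            cmake = CMake(self)
            cmake.configure()
            cmake.build()

        def package(self):
            cmake = CMake(self)
            cmake.install()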
Poster: MOPV041 [1.227 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-MOPV041
Received: 07 October 2021 · Accepted: 14 November 2021 · Issue date: 10 February 2022
 
MOPV042 PLCverif: Status of a Formal Verification Tool for Programmable Logic Controller 248
 
  • J-C. Tournier, B. Fernández Adiego, I.D. Lopez-Miguel
    CERN, Geneva, Switzerland
 
  Programmable Logic Controllers (PLC) are widely used for industrial automation, including safety systems, at CERN. Incorrect behaviour of the PLC control system logic can cause significant financial losses through damage to property or the environment, or even injuries in some cases, so ensuring correct behaviour is essential. While testing has for many years been the traditional way of validating the PLC control system logic, CERN developed a model checking platform to go one step further and formally verify PLC logic. This platform, called PLCverif, first released internally for CERN usage in 2019, has been available to anyone since September 2020 under an open-source licence. In this paper, we first give an overview of the capabilities of the PLCverif platform before focusing on the improvements made since 2019, such as broader support coverage of the Siemens PLC programming languages, better support of the C Bounded Model Checker (CBMC) backend and the process of releasing PLCverif as open-source software.
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-MOPV042
Received: 07 October 2021 · Revised: 20 October 2021 · Accepted: 21 December 2021 · Issue date: 23 February 2022
 
MOPV043 CERN Controls Configuration Service - Event-Based Processing of Controls Changes 253
 
  • B. Urbaniec, L. Burdzanowski
    CERN, Geneva, Switzerland
 
  The Controls Configuration Service (CCS) is a core component of the data-driven Control System at CERN. Built around a central database, the CCS provides a range of client APIs and graphical user interfaces (GUI) to enable efficient and user-friendly configuration of Controls. As the entry point for all the modifications to Controls system configurations, the CCS provides the means to ensure global data coherency and propagation of changes across the distributed Controls sub-systems and services. With the aim of achieving global data coherency in the most efficient manner, the need for an advanced data integrator emerged. The Controls Configuration Data Lifecycle manager (CCDL) is the core integration bridge between the distributed Controls sub-systems. It aims to ensure consistent, reliable, and efficient exchange of information and triggering of workflow actions based on events representing Controls configuration changes. The CCDL implements and incorporates cutting-edge technologies used successfully in the IT industry. This paper describes the CCDL architecture, design and technology choices made, as well as the tangible benefits that have been realised since its introduction.  
Poster: MOPV043 [2.770 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-MOPV043
Received: 09 October 2021 · Revised: 20 October 2021 · Accepted: 21 December 2021 · Issue date: 23 February 2022
 
MOPV044 Lessons Learned Moving from Pharlap to Linux RT 257
 
  • C. Charrondière, O.Ø. Andreassen, D. Sierra-Maíllo Martínez, J. Tagg, T. Zilliox
    CERN, Meyrin, Switzerland
 
  The start of the Advanced Proton Driven Plasma Wakefield Acceleration Experiment (AWAKE) facility at CERN in 2016 came with the need for a continuous image acquisition system. The international scientific collaboration responsible for this project requested low- and high-resolution acquisition at capture rates of 10 Hz and 1 Hz, respectively. To match these requirements, GigE digital cameras were connected to a PXI system running PharLap, a real-time operating system, using dual-port 1 GB/s network cards. With new requirements for faster acquisition at higher resolution, it was decided to add 10 GB/s network cards and a Network Attached Storage (NAS) unit directly connected to the PXI system to avoid saturating the network. There was also a request to acquire high-resolution images on several cameras for a limited duration, typically 30 seconds, in a burst acquisition mode. To comply with these new requirements, PharLap had to be abandoned and replaced with Linux RT. This paper describes the limitations of the PharLap system and the lessons learned during the transition to Linux RT. We show the improvements in CPU stability and data throughput that were achieved.
Poster: MOPV044 [0.525 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-MOPV044
Received: 08 October 2021 · Revised: 18 October 2021 · Accepted: 20 November 2021 · Issue date: 28 February 2022
 
MOPV045 Data-Centric Web Infrastructure for CERN Radiation and Environmental Protection Monitoring 261
 
  • A. Ledeul, C.C. Chiriac, G. Segura, J. Sznajd, G. de la Cruz
    CERN, Meyrin, Switzerland
 
  Supervision, Control and Data Acquisition (SCADA) systems generate large amounts of data over time. Analyzing collected data is essential to discover useful information, prevent failures, and generate reports. Facilitating access to data is of utmost importance to exploit the information generated by SCADA systems. CERN’s occupational Health & Safety and Environmental protection (HSE) Unit operates a web infrastructure allowing users of the Radiation and Environment Monitoring Unified Supervision (REMUS) to visualize and extract near-real-time and historical data from desktop and mobile devices. This application, REMUS Web, collects and combines data from multiple sources and presents it to the users in a format suitable for analysis. The web application and the SCADA system can operate independently thanks to a data-centric, loosely coupled architecture. They are connected through common data sources such as the open-source streaming platform Apache Kafka and Oracle Rdb. This paper describes the benefits of providing a feature-rich web application as a complement to control systems. Moreover, it details the underlying architecture of the solution and its capabilities.  
Poster: MOPV045 [1.253 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-MOPV045
Received: 07 October 2021 · Accepted: 20 November 2021 · Issue date: 02 February 2022
 
MOPV046 Tango Controls Device Attribute extension in Python3
 
  • T. Snijder, T. Juerges, J.J.D. Mol
    ASTRON, Dwingeloo, The Netherlands
 
  The Tango Controls Device Attributes are containers for read-only or read-write data types. However, the Attribute read/write functions need to be implemented individually to access structured data in hardware devices. This exposes a pattern of replicated code in the read and write functions, and maintaining this code becomes time-consuming when a Device exposes tens of Attributes or more. The solution we propose is to extend Tango Controls Attributes. For that, we combine a hardware access class (accessor) that reads and writes the structured data in the hardware with a small addition to the original Attribute declaration. The extended Attribute constructor provides information that describes how the accessor can locate a value in the hardware. This information is then used to provide the extended Attribute with a parameterised read or write function. The benefits of our solution are that various methods of hardware access can be implemented efficiently and easily, and that new extended Attributes can be added with a single line of code. We have successfully used the extended Attributes with OPC-UA, SNMP, and INI files in ASTRON’s LOFAR2.0 Station Control program.
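  To make the replicated pattern concrete, here is what a conventional PyTango device looks like, with one hand-written read function per Attribute (plain high-level PyTango API; the accessor class and point names are stand-ins invented for this sketch, not the LOFAR2.0 implementation):

    # The boilerplate the proposed extension removes: one read_* method per Attribute.
    from tango.server import Device, attribute, run


    class FakeAccessor:
        """Stand-in for a hardware accessor (e.g. OPC UA, SNMP, INI file)."""
        def read(self, point):
            return {"RCU_temperature": 35.0, "RCU_voltage": 12.1}[point]


    class StationDevice(Device):

        rcu_temperature = attribute(dtype=float)
        rcu_voltage = attribute(dtype=float)

        def init_device(self):
            super().init_device()
            self.hw = FakeAccessor()

        def read_rcu_temperature(self):
            return self.hw.read("RCU_temperature")   # the same boilerplate ...

        def read_rcu_voltage(self):
            return self.hw.read("RCU_voltage")       # ... repeated for every Attribute


    if __name__ == "__main__":
        run((StationDevice,))

  With the extension described above, these per-Attribute read/write methods are replaced by a single parameterised function driven by the extra information supplied in the Attribute declaration.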
 
MOPV047 Upgrading Oracle APEX Applications at the National Ignition Facility 267
 
  • A. Bhasker, R.D. Clark, R.N. Fallejo
    LLNL, Livermore, California, USA
 
  As with all experimental physics facilities, NIF has software applications that must persist on a multi-decade timescale. They must be kept up to date for viability, sustainability and security. We present the steps and challenges involved in a major application upgrade project from Oracle APEX v5 to Oracle APEX v19.2. This upgrade involved jumping over 2 major versions and a total of 5 releases of Oracle APEX. Some applications that depended on now legacy Oracle APEX constructs required redesigning, while others that broke due to custom JavaScript needed to be updated for compatibility. This upgrade project, undertaken by the NIF Shot Data Systems team at LLNL, involved reverse-engineering functional requirements for applications that were then redesigned using the latest APEX out-of-the-box functionality, as well as identifying changes made in the new Oracle APEX built-in ’plumbing’ to update custom-built features for compatibility with the new Oracle APEX version. As NIF enters into its second decade of operations, this upgrade allows these aging applications to function in a more sustainable way, while enhancing user experience with a modernized GUI for Oracle APEX web-pages.  
Poster: MOPV047 [1.392 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-MOPV047
Received: 08 October 2021 · Accepted: 10 February 2022 · Issue date: 17 March 2022
 
MOPV048 Fast Multipole Method (FMM)-Based Particle Accelerator Simulations in the Context of Tune Depression Studies 271
 
  • M.H. Langston, R. Lethin, P.D. Letourneau, J. Wei
    Reservoir Labs, New York, USA
 
  Funding: U.S. Department of Energy DOE SBIR Phase I Project DE-SC0020934
As part of the MACH-B (Multipole Accelerator Codes for Hadron Beams) project, we have developed a Fast Multipole Method (FMM**)-based tool for higher-fidelity modeling of particle accelerators for high-energy physics within Fermilab’s Synergia* simulation package. We present results from our implementations with a focus on studying the difference between tune depression estimates obtained using PIC codes for computing the particle interactions and those obtained using FMM-based algorithms integrated within Synergia. In simulating the self-interactions and macroparticle actions necessary for accurate simulations, we present a newly developed kernel inside a kernel-independent FMM in which near-field kernels are modified to incorporate smoothing while still maintaining consistency at the boundary of the far-field regime. Each simulation relies on Synergia, with one major difference: the way in which particle interactions are computed. Specifically, following our integration of the FMM into Synergia, changes between PIC-based computations and FMM-based computations are made by changing only the method for near-field (and self) particle interactions.
* J. Amundson et al. "Synergia: An accelerator modeling tool with 3-D space charge". J.C.P. 211.1 (2006) 229-248.
** L. Greengard. "Fast algorithms for classical physics". Science (Aug 1994) 909-914.
 
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-MOPV048
Received: 09 October 2021 · Revised: 20 October 2021 · Accepted: 20 November 2021 · Issue date: 29 December 2021
 
MOPV049 Standardizing a Python Development Environment for Large Controls Systems 277
 
  • S.L. Clark, P.S. Dyer, S. Nemesure
    BNL, Upton, New York, USA
 
  Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-SC0012704 with the U.S. Department of Energy.
Python provides broad design freedom to programmers and a low barrier to entry for new software developers. Experience has shown that, unless standardized, a Python codebase will tend to diverge from a common style and architecture, becoming unmaintainable across the scope of a large controls system. Mitigating these effects requires a set of tools, standards, and procedures developed to assert boundaries on certain aspects of Python development – namely project organization, version management, and deployment procedures. Common tools like Git, GitLab, and virtual environments form a basis for development, with in-house utilities presenting their capabilities in a clear, developer-focused way. This paper describes the necessary constraints needed for development and deployment of large-scale Python applications, the function of the tools which comprise the development environment, and how these tools are leveraged to create simple and effective procedures to guide development.
 
Poster: MOPV049 [0.476 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-MOPV049
Received: 04 October 2021 · Revised: 20 October 2021 · Accepted: 20 November 2021 · Issue date: 20 December 2021
 
MOPV050 DevOps and CI/CD for WinCC Open Architecture Applications and Frameworks 281
 
  • R.P.I. Silvola
    CERN, Meyrin, Switzerland
  • L. Sargsyan
    ANSL, Yerevan, Armenia
 
  This paper presents the Continuous Integration and Continuous Deployment (CI/CD) tool chain for WinCC Open Architecture applications and frameworks developed at CERN, enabling a DevOps oriented approach of working. By identifying common patterns and time consuming procedures, and by agreeing on standard repository structures, naming conventions and tooling, we have gained a turnkey solution which automates the compilation of binaries and generation of documentation, thus guaranteeing they are up to date and match the source code in the repository. The pipelines generate deployment-ready software releases, which pass through both static code analysis and unit tests before automatically being deployed to short and long-term repositories. The tool chain leverages industry standard technologies, such as GitLab, Docker and Nexus. The technologies chosen for the tool chain are well understood and have a long, solid track record, reducing the effort in maintenance and potential long term risk. The setup has reduced the expert time needed for testing and releases, while generally improving the release quality.  
Poster: MOPV050 [0.923 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-MOPV050
Received: 08 October 2021 · Revised: 13 October 2021 · Accepted: 23 February 2022 · Issue date: 11 March 2022
 
TUBL01 Distributed Caching at Cloud Scale with Apache Ignite for the C2MON Framework 307
 
  • T. Marques Oliveira, M. Bräger, B. Copy, S.E. Halastra, D. Martin Anido, A. Papageorgiou Koufidis
    CERN, Geneva, Switzerland
 
  The CERN Control and Monitoring platform (C2MON) is an open-source platform for industrial controls data acquisition, monitoring, control and data publishing. Its high availability, fault tolerance and redundancy make it a perfect fit to handle the complex and critical systems present at CERN. C2MON must cope with the ever-increasing flows of data produced by the CERN technical infrastructure, such as cooling and ventilation or electrical distribution alarms, while maintaining integrity and availability. Distributed caching is a common technique to dramatically increase the availability and fault tolerance of redundant systems. For C2MON we have replaced the existing legacy Terracotta caching framework with Apache Ignite. Ignite is an enterprise grade, distributed caching platform, with advanced cloud-native capabilities. It enables C2MON to handle high volumes of data with full transaction support and makes C2MON ready to run in the cloud. This article first explains the challenges we met when integrating Apache Ignite into the C2MON framework, and then demonstrates how Ignite enhances the capabilities of a monitor and control system in an industrial controls environment.  
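  As a small illustration of the caching model Ignite provides (shown here with the Ignite thin-client Python binding rather than C2MON's Java integration; the cache and tag names are invented):

    # Put/get against a distributed Ignite cache via the thin client (pyignite).
    from pyignite import Client

    client = Client()
    client.connect("127.0.0.1", 10800)   # default thin-client port
    try:
        tags = client.get_or_create_cache("example-tag-cache")
        tags.put(1001, "cooling.pump.speed=1480.0")
        print(tags.get(1001))
    finally:
        client.close()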
Slides: TUBL01 [0.817 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-TUBL01
Received: 07 October 2021 · Revised: 20 October 2021 · Accepted: 01 March 2022 · Issue date: 05 March 2022
 
TUBL02 Implementing an Event Tracing Solution with Consistently Formatted Logs for the SKA Telescope Control System 311
 
  • S.N. Twum, W.A. Bode, A.F. Joubert, K. Madisa, P.S. Swart, A.J. Venter
    SARAO, Cape Town, South Africa
  • A. Bridger
    ROE, UTAC, Edinburgh, United Kingdom
  • A. Bridger
    SKAO, Macclesfield, United Kingdom
 
  Funding: South African Radio Astronomy Observatory
The SKA telescope control system comprises several devices working at different levels of a hierarchy, on different sites, to provide a running observatory. In this system, as in any other distributed system, logs, whether in their simplest form or correlated, are critical for fault finding and bug tracing. The SKA logging system will collect logs produced by numerous networked Kubernetes deployments of devices and processes running a combination of off-the-shelf, derived and bespoke software. The many moving parts of this complex system are delivered and maintained by different agile teams on multiple SKA Agile Release Trains. To facilitate an orderly and correlated generation of events in the running telescope, we implement a logging architecture which enforces consistently formatted logs with event-tracing capability. We discuss the details of the architecture design and implementation, ending with the limitations of the tracing solution in the context of a multiprocessing environment.
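  A minimal sketch of enforcing one log format carrying a correlation identifier, using only the Python standard library (the field layout and trace id below are illustrative; they are not the exact SKA log format):

    # Consistently formatted logs with a propagated trace id (illustrative format only).
    import logging

    TRACE_ID = "obs-20211010-0001"   # e.g. injected per command or observation

    handler = logging.StreamHandler()
    handler.setFormatter(logging.Formatter(
        "%(asctime)s|%(levelname)s|%(name)s|traceid=%(trace_id)s|%(message)s"))

    base = logging.getLogger("ska.subarray")
    base.addHandler(handler)
    base.setLevel(logging.INFO)

    log = logging.LoggerAdapter(base, {"trace_id": TRACE_ID})
    log.info("AssignResources command received")
    # -> 2021-10-10 12:00:00,123|INFO|ska.subarray|traceid=obs-20211010-0001|AssignResources command received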
 
Slides: TUBL02 [0.422 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-TUBL02
Received: 10 October 2021 · Revised: 21 October 2021 · Accepted: 22 December 2021 · Issue date: 11 March 2022
 
TUBL03 Tango Controls RFCs 317
 
  • P.P. Goryl, M. Liszcz
    S2Innovation, Kraków, Poland
  • S. Blanch-Torné
    ALBA-CELLS Synchrotron, Cerdanyola del Vallès, Spain
  • R. Bourtembourg, A. Götz
    ESRF, Grenoble, France
  • V. Hardion
    MAX IV Laboratory, Lund University, Lund, Sweden
  • L. Pivetta
    Elettra-Sincrotrone Trieste S.C.p.A., Basovizza, Italy
 
  In 2019, the Tango Controls Collaboration decided to write down a formal specification of the existing Tango Controls protocol as Requests For Comments (RFC). The work resulted in a Markdown-formatted specification rendered in HTML and PDF on Readthedocs.io. The specification is already used as a reference during Tango Controls source code maintenance and for prototyping a new implementation. All collaborating institutes and several companies were involved in the work. In addition to providing the reference, the effort brought the Community more value: review and clarification of concepts and their implementation in the core libraries in C++, Java and Python. This paper summarizes the results, provides technical and organizational details about writing the RFCs for the existing protocol and presents the impact and benefits on future maintenance and development of Tango Controls.  
Slides: TUBL03 [0.743 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-TUBL03
Received: 10 October 2021 · Revised: 20 October 2021 · Accepted: 22 December 2021 · Issue date: 02 February 2022
 
TUBL04 CI-CD Practices at SKA 322
 
  • M. Di Carlo
    INAF - OAAB, Teramo, Italy
  • M. Dolci
    INAF - OA Teramo, Teramo, Italy
  • P. Harding
    Catalyst IT, Wellington, New Zealand
  • J.B. Morgado, B. Ribeiro
    GRIT, Aveiro, Portugal
  • U. Yilmaz
    SKAO, Macclesfield, United Kingdom
 
  The Square Kilometre Array (SKA) is an international effort to build two radio interferometers in South Africa and Australia, forming one Observatory monitored and controlled from the global headquarters (GHQ) based in the United Kingdom at Jodrell Bank. SKA is highly focused on adopting CI/CD practices for its software development. CI/CD stands for Continuous Integration & Delivery and/or Deployment. Continuous Integration is the practice of merging all developers’ local copies into the mainline frequently. Continuous Delivery is the approach of developing software in short cycles, ensuring it can be released at any time, and Continuous Deployment is the approach of delivering the software into operational use frequently and automatically. This paper analyses the decisions taken by the Systems Team (a specialized agile team devoted to developing and maintaining the tools that allow continuous practices) to promote CI/CD practices with the TANGO-controls framework.
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-TUBL04
Received: 07 October 2021 · Accepted: 05 December 2021 · Issue date: 01 March 2022
 
TUBL05 Pysmlib: A Python Finite State Machine Library for EPICS 330
 
  • D. Marcato, G. Arena, D. Bortolato, F. Gelain, G. Lilli, V. Martinelli, E. Munaron, M. Roetta, G. Savarese
    INFN/LNL, Legnaro (PD), Italy
  • M.A. Bellato
    INFN- Sez. di Padova, Padova, Italy
 
  In the field of Experimental Physics and Industrial Control Systems (EPICS)*, the traditional tool for implementing high-level procedures is the Sequencer*. While this is a mature, fast, and well-proven piece of software, it comes with some drawbacks. For example, it is based on a custom C-like programming language which may be unfamiliar to new users, and it often results in complex, hard-to-read code. This paper presents pysmlib, a free and open-source Python library developed as a simpler alternative to the EPICS Sequencer. The library exposes a simple interface for developing event-driven Finite State Machines (FSM), where the inputs are connected to Channel Access Process Variables (PV) thanks to the PyEpics** integration. Other features include parallel FSMs with multi-threading support and input sharing, timers, and an integrated watchdog logic. The library offers a lower barrier to entry and greater extensibility thanks to the large ecosystem of scientific and engineering Python libraries, making it a perfect fit for modern control system requirements. Pysmlib has been deployed in multiple projects at INFN Legnaro National Laboratories (LNL), proving its robustness and flexibility (a minimal example following its documented usage is sketched after the references below).
* L. R. Dalesio, M. R. Kraimer, and A. J. Kozubal. "EPICS architecture." ICALEPCS. Vol. 91. 1991.
** M. Newville, et al., pyepics/pyepics Zenodo. http://doi.org/10.5281/zenodo.592027
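  A minimal FSM following the usage pattern documented for pysmlib (PV names are placeholders; the method names are taken from the library's documentation and may differ in detail between versions):

    # Minimal pysmlib finite state machine: mirror one PV onto another while enabled.
    from smlib import fsmBase, loader


    class MirrorFsm(fsmBase):
        def __init__(self, name, **kwargs):
            super().__init__(name, **kwargs)
            self.enable = self.connect("test:enable")    # Channel Access PVs (placeholders)
            self.counter = self.connect("test:counter")
            self.mirror = self.connect("test:mirror")
            self.gotoState("idle")

        def idle_eval(self):                             # one *_eval method per state
            if self.enable.rising():
                self.gotoState("mirroring")

        def mirroring_eval(self):
            if self.enable.falling():
                self.gotoState("idle")
            elif self.counter.changing():
                self.mirror.put(self.counter.val())


    if __name__ == "__main__":
        fsm_loader = loader()
        fsm_loader.load(MirrorFsm, "mirrorFsm")
        fsm_loader.start()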
 
Slides: TUBL05 [1.705 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-TUBL05
Received: 08 October 2021 · Revised: 22 October 2021 · Accepted: 22 December 2021 · Issue date: 10 February 2022