Keyword: distributed
Paper Title Other Keywords Page
MOBL05 Photon Science Controls: A Flexible and Distributed LabVIEW Framework for Laser Systems controls, LabView, software, hardware 62
  • B.A. Davis, B.T. Fishler, R.J. McDonald
    LLNL, Livermore, California, USA
  Funding: This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
LabVIEW software is often chosen for developing small-scale control systems, especially by novice software developers. However, because of its ease of use, many functional LabVIEW applications suffer from limited extensibility and scalability. Developing highly extensible and scalable applications requires significant skill and time investment. To close this gap between new and experienced developers, we present an object-oriented application framework that offloads complex architectural tasks from the developer. The framework provides native functionality for data acquisition, logging, and publishing over HTTP and WebSocket, with extensibility for adding further capabilities. The system is scalable and supports both single-process applications and small-to-medium-sized distributed systems. By leveraging the framework, developers can produce robust applications that are easily integrated into a unified architecture for simple and distributed systems. This decreases system development time, improves onboarding for new developers, and makes it simple to extend the framework with new capabilities.
Slides: MOBL05 [3.178 MB]
Received: 09 October 2021 · Accepted: 16 November 2021 · Issue date: 14 March 2022
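The MOBL05 abstract describes a framework that handles acquisition, logging, and publishing so that developers only write device-specific code. The paper's implementation is in LabVIEW; the following is a minimal Python sketch of that division of labour, with all class and field names invented for illustration (the real framework's API is not shown in the abstract).

```python
# Hypothetical sketch (not the authors' LabVIEW code): a base class that
# offloads logging and publishing, so a subclass implements acquisition only.

import json
from abc import ABC, abstractmethod


class FrameworkModule(ABC):
    """Framework base class: handles logging and publishing to subscribers."""

    def __init__(self, name):
        self.name = name
        self.log = []           # stand-in for the framework's log sink
        self.subscribers = []   # stand-in for HTTP/WebSocket publishers

    @abstractmethod
    def acquire(self):
        """Return one sample of device data."""

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def step(self):
        """One framework cycle: acquire, log, publish."""
        sample = self.acquire()
        record = {"source": self.name, "data": sample}
        self.log.append(record)
        payload = json.dumps(record)
        for callback in self.subscribers:
            callback(payload)
        return record


class FakeLaserPowerMeter(FrameworkModule):
    """Example subclass: the only code a developer would need to write."""

    def acquire(self):
        return {"power_W": 1.25}


meter = FakeLaserPowerMeter("power-meter-1")
received = []
meter.subscribe(received.append)
meter.step()
```

The observer-style `subscribe` stands in for the framework's HTTP/WebSocket publishing; swapping the callback for a real transport would not change the subclass.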
MOPV010 Working under Pandemic Conditions: Contact Tracing Meets Technology software, network, database, site 121
  • E. Blanco Viñuela, B. Copy, S. Danzeca, Ch. Delamare, R. Losito, A. Masi, E. Matli, T. Previero, R. Sierra
    CERN, Geneva, Switzerland
  Covid-19 has dramatically transformed our working practices, with a big shift to a teleworking model for many people. There are, however, many essential activities requiring personnel on site. In order to minimise the risks for its personnel, CERN decided to take every measure possible, including internal contact tracing by the CERN medical service. This initially involved manual procedures that relied on people’s ability to remember past encounters. To improve this situation and minimise the number of employees needing to be quarantined, CERN approved the design of a specific device: the Proximeter. The project goal was to design a wearable device, built in partnership* with industry, fulfilling the contact-tracing needs of the medical service. The Proximeter records other devices in close proximity and reports the encounters to a cloud-based system. The service came into operation in early 2021, and 8000 devices were distributed to personnel working on the CERN site. This publication reports on the service offered, emphasising the overall workflow of the project under exceptional conditions and the implications data privacy imposed on the design of the software application.
* Terabee.
Poster: MOPV010 [3.489 MB]
Received: 11 October 2021 · Revised: 26 October 2021 · Accepted: 03 November 2021 · Issue date: 18 December 2021
MOPV032 Design of a Component-Oriented Distributed Data Integration Model controls, software, TANGO, real-time 202
  • Z. Ni, L. Li, J. Liu, J. Luo, X. Zhou
    CAEP, Sichuan, People’s Republic of China
  The control system of a large scientific facility is composed of several heterogeneous control systems. As time goes by, the facility and its control system must be continuously upgraded. This poses a challenge for the integration of complex, large-scale heterogeneous systems. This article describes the design of a data integration model based on component technology, software middleware (Apache Thrift*), and a real-time database. The model shields the developer from the details of the software middleware, encapsulates remote data acquisition as local function operations, combines data with complex calculations through scripts, and can be assembled into new components.
*The Apache Thrift software framework, for scalable cross-language services development, combines a software stack with a code generation engine to build services that work efficiently.
Poster: MOPV032 [1.325 MB]
Received: 09 October 2021 · Accepted: 04 November 2021 · Issue date: 19 February 2022
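The MOPV032 abstract's central idea is encapsulating remote data acquisition as a local function call and combining channels through script-defined calculations. A hedged Python sketch of that pattern follows; the "remote" transport is a plain dict standing in for an Apache Thrift client stub, and every name here is illustrative, not taken from the paper.

```python
# Illustrative sketch (not the paper's implementation): hiding middleware
# details so remote data acquisition looks like a local function operation.

class RemoteChannel:
    """Wraps one remote data point behind a local callable."""

    def __init__(self, transport, key):
        self._transport = transport  # would be a Thrift client stub
        self._key = key

    def __call__(self):
        return self._transport[self._key]  # would be an RPC round trip


class Component:
    """Assembles channels and a script-like calculation into a new unit."""

    def __init__(self, channels, formula):
        self.channels = channels
        self.formula = formula

    def evaluate(self):
        values = {name: ch() for name, ch in self.channels.items()}
        return self.formula(values)


# A fake backend stands in for the real-time database behind the middleware.
fake_backend = {"beam.current": 2.0, "beam.voltage": 5.0}
component = Component(
    channels={
        "current": RemoteChannel(fake_backend, "beam.current"),
        "voltage": RemoteChannel(fake_backend, "beam.voltage"),
    },
    formula=lambda v: v["current"] * v["voltage"],  # derived quantity
)
power = component.evaluate()
```

Because `Component` only sees callables, the same assembly works whether a channel is local, cached, or a genuine middleware round trip.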
TUBL01 Distributed Caching at Cloud Scale with Apache Ignite for the C2MON Framework database, operation, controls, software 307
  • T. Marques Oliveira, M. Bräger, B. Copy, S.E. Halastra, D. Martin Anido, A. Papageorgiou Koufidis
    CERN, Geneva, Switzerland
  The CERN Control and Monitoring platform (C2MON) is an open-source platform for industrial controls data acquisition, monitoring, control and data publishing. Its high availability, fault tolerance and redundancy make it a perfect fit to handle the complex and critical systems present at CERN. C2MON must cope with the ever-increasing flows of data produced by the CERN technical infrastructure, such as cooling and ventilation or electrical distribution alarms, while maintaining integrity and availability. Distributed caching is a common technique to dramatically increase the availability and fault tolerance of redundant systems. For C2MON we have replaced the legacy Terracotta caching framework with Apache Ignite. Ignite is an enterprise-grade, distributed caching platform with advanced cloud-native capabilities. It enables C2MON to handle high volumes of data with full transaction support and makes C2MON ready to run in the cloud. This article first explains the challenges we met when integrating Apache Ignite into the C2MON framework, and then demonstrates how Ignite enhances the capabilities of a monitoring and control system in an industrial controls environment.
Slides: TUBL01 [0.817 MB]
Received: 07 October 2021 · Revised: 20 October 2021 · Accepted: 01 March 2022 · Issue date: 05 March 2022
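The property TUBL01 highlights, a cache with full transaction support so concurrent tag updates cannot leave partial state, can be sketched independently of Ignite. The toy below (not C2MON or Ignite code; all names invented) shows the all-or-nothing batch update that a transactional cache guarantees, using a staged copy and an atomic swap.

```python
# Hedged sketch: an in-memory stand-in illustrating transactional batch
# updates on a cache, the guarantee the abstract attributes to Apache Ignite.

class TransactionalCache:
    def __init__(self):
        self._store = {}

    def get(self, key):
        return self._store.get(key)

    def put_all_transactional(self, updates, validate):
        """Apply every update or none: stage, validate, then commit."""
        staged = dict(self._store)
        staged.update(updates)
        for key, value in updates.items():
            if not validate(key, value):
                raise ValueError(f"rejected update {key!r}; nothing applied")
        self._store = staged  # commit point: single reference swap


cache = TransactionalCache()
# Both tags accepted: the batch commits as a unit.
cache.put_all_transactional({"tag.temp": 21.5, "tag.flow": 3.2},
                            validate=lambda k, v: isinstance(v, float))
# One bad tag: the whole batch is rejected, earlier values survive.
try:
    cache.put_all_transactional({"tag.temp": 99.0, "tag.flow": "bad"},
                                validate=lambda k, v: isinstance(v, float))
except ValueError:
    pass
```

In a real distributed cache the commit point is a coordinated protocol across nodes rather than a reference swap, but the visible contract, all keys or none, is the same.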
TUBL02 Implementing an Event Tracing Solution with Consistently Formatted Logs for the SKA Telescope Control System controls, TANGO, software, monitoring 311
  • S.N. Twum, W.A. Bode, A.F. Joubert, K. Madisa, P.S. Swart, A.J. Venter
    SARAO, Cape Town, South Africa
  • A. Bridger
    ROE, UTAC, Edinburgh, United Kingdom
  • A. Bridger
    SKAO, Macclesfield, United Kingdom
  Funding: South African Radio Astronomy Observatory
The SKA telescope control system comprises several devices working in different hierarchies on different sites to provide a running observatory. In this system, as in any other distributed system, logs, whether in their simplest form or correlated, are critical to fault finding and bug tracing. The SKA logging system will collect logs produced by numerous networked Kubernetes deployments of devices and processes running a combination of off-the-shelf, derived, and bespoke software. The many moving parts of this complex system are delivered and maintained by different agile teams on multiple SKA Agile Release Trains. To facilitate the orderly and correlated generation of events in the running telescope, we implement a logging architecture that enforces consistently formatted logs with event tracing capability. We discuss the details of the architecture design and implementation, ending with the limitations of the tracing solution in the context of a multiprocessing environment.
Slides: TUBL02 [0.422 MB]
Received: 10 October 2021 · Revised: 21 October 2021 · Accepted: 22 December 2021 · Issue date: 11 March 2022
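The mechanism TUBL02 describes, consistently formatted logs that carry a shared identifier so events can be correlated across devices, can be illustrated with Python's standard `logging` module. This is not the SKA logging standard itself; the field layout and logger names below are assumptions for the example.

```python
# Illustrative sketch: one trace identifier attached to every log record in
# an event chain, with a single enforced format across components.

import io
import logging
import uuid


class TraceFilter(logging.Filter):
    """Attach the same trace_id to every record emitted in one event chain."""

    def __init__(self, trace_id):
        super().__init__()
        self.trace_id = trace_id

    def filter(self, record):
        record.trace_id = self.trace_id
        return True


stream = io.StringIO()  # stand-in for the central log collector
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter(
    "%(levelname)s|%(name)s|%(trace_id)s|%(message)s"))

trace_id = uuid.uuid4().hex
for name in ("tmc.central_node", "dish.manager"):  # two example "devices"
    logger = logging.getLogger(name)
    logger.setLevel(logging.INFO)
    logger.addHandler(handler)
    logger.addFilter(TraceFilter(trace_id))
    logger.info("assigned resources")

lines = stream.getvalue().strip().splitlines()
```

Grepping the collected logs for one `trace_id` then yields the full event chain across processes, which is exactly what correlation needs; the abstract's noted limitation is that propagating such an identifier across multiprocessing boundaries requires passing it explicitly.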
TUPV033 Distributed Transactions in CERN’s Accelerator Control System MMI, controls, hardware, real-time 468
  • F. Hoguin, S. Deghaye, R. Gorbonosov, J. Lauener, P. Mantion
    CERN, Geneva, Switzerland
  Devices in CERN’s accelerator complex are controlled through individual requests, each of which changes settings atomically on a single device; individual devices are therefore controlled transactionally. Operators often need to apply a set of changes affecting multiple devices. This is achieved by sending requests in parallel, in a minimum amount of time. However, if one request fails, the control system ends up in an undefined state, and recovering is a time-consuming task. Furthermore, the lack of synchronisation in the application of new settings may degrade the beam characteristics, because settings are only partially applied. To address these issues, a protocol supporting distributed transactions and commit synchronisation was developed for the CERN control system and implemented in CERN’s real-time frameworks. We describe what this protocol intends to solve and its limitations. We also delve into the real-time framework implementation and how developers can benefit from the two-phase commit to leverage hardware features such as double buffering, and from the commit synchronisation, which allows settings to be changed safely while the accelerator is operational.
Poster: TUPV033 [0.869 MB]
Received: 09 October 2021 · Revised: 18 October 2021 · Accepted: 20 November 2021 · Issue date: 22 January 2022
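TUPV033 pairs two-phase commit with hardware double buffering: "prepare" writes new settings into a shadow buffer, and "commit" swaps buffers everywhere at once. The Python sketch below is a hedged illustration of that pairing, not CERN's protocol; device names, predicates, and settings are invented.

```python
# Hedged illustration: two-phase commit where prepare fills a shadow buffer
# and commit swaps it in, so settings are never partially applied.

class Device:
    def __init__(self, name, accept):
        self.name = name
        self.accept = accept        # validation predicate for new settings
        self.active = {}            # settings the hardware is using
        self.shadow = None          # prepared-but-uncommitted settings

    def prepare(self, settings):
        """Phase 1: validate and stage into the shadow buffer."""
        if not all(self.accept(k, v) for k, v in settings.items()):
            return False
        self.shadow = {**self.active, **settings}
        return True

    def commit(self):
        """Phase 2: the double-buffer swap."""
        self.active, self.shadow = self.shadow, None

    def rollback(self):
        self.shadow = None


def transactional_apply(devices, per_device_settings):
    """Prepare on all devices; commit everywhere only if all prepared."""
    prepared = []
    for dev in devices:
        if dev.prepare(per_device_settings[dev.name]):
            prepared.append(dev)
        else:
            for p in prepared:      # any failure: undo every staged change
                p.rollback()
            return False
    for dev in devices:
        dev.commit()
    return True


magnet = Device("magnet", accept=lambda k, v: v < 100.0)
rf = Device("rf", accept=lambda k, v: v > 0.0)
ok = transactional_apply([magnet, rf],
                         {"magnet": {"current": 50.0}, "rf": {"freq": 400.8}})
bad = transactional_apply([magnet, rf],
                          {"magnet": {"current": 500.0}, "rf": {"freq": 1.0}})
```

After the failed second transaction, both devices still hold the settings from the first one, which is the "no undefined state" property the abstract is after; synchronising the commit instant across devices is the part a real-time framework must add.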
THPV027 Application of the White Rabbit System at SuperKEKB timing, operation, controls, linac 919
  • H. Kaji
    KEK, Ibaraki, Japan
  • Y. Iitsuka
    EJIT, Hitachi, Ibaraki, Japan
  We employ the White Rabbit system to satisfy increasing requests from SuperKEKB operations. A SuperKEKB-type slave node was developed based on the SPEC board and FMC-DIO card, with the firmware customized slightly to meet SuperKEKB's needs, and an EPICS device driver was developed for it. Five slave nodes have been in operation since the 2021 autumn run. New slave nodes handle the delivery of the beam permission signal from the central control building to the injector linac. The timing of the abort request signal and of the trigger for the abort kicker magnet is recorded with the distributed TDC system. More slave nodes will be installed next year to enhance the role of the distributed TDC system.
Poster: THPV027 [1.186 MB]
Received: 10 October 2021 · Revised: 25 October 2021 · Accepted: 21 November 2021 · Issue date: 08 January 2022
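The point of a distributed TDC as in THPV027 is that signals stamped on a common White Rabbit timebase can be correlated offline regardless of which node recorded them. A minimal sketch of that correlation step follows; the event tuples, node names, and numeric values are invented for illustration, and the real system records these timestamps in hardware.

```python
# Illustrative only: correlating timestamps from distributed TDC nodes to
# measure the delay between an abort request and the abort-kicker trigger.

def abort_latency_ns(events):
    """events: (timestamp_ns, node, signal) tuples from any TDC nodes.

    Because White Rabbit gives every node the same timebase, timestamps
    from different nodes are directly comparable.
    """
    by_signal = {signal: ts for ts, _node, signal in sorted(events)}
    return by_signal["kicker_trigger"] - by_signal["abort_request"]


# Events recorded by two different slave nodes, arriving in any order.
tdc_events = [
    (1_000_000_120, "central_control", "kicker_trigger"),
    (1_000_000_000, "injector_linac", "abort_request"),
]
latency = abort_latency_ns(tdc_events)
```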