Data Management
Paper · Title · Page
WEBL01 FAIRmat - a Consortium of the German Research-Data Infrastructure (NFDI) 558
  • H. Junkes, P. Oppermann, R. Schlögl, A. Trunschke
    FHI, Berlin, Germany
  • M. Krieger, H. Weber
    FAU, Erlangen, Germany
  A sustainable infrastructure for the provision, interlinking, maintenance, and reuse of research data is to be created in Germany in the coming years. The FAIRmat consortium serves the interests of experimental and theoretical condensed-matter physics, including, for example, the chemical physics of solids, synthesis, and high-performance computing, as demonstrated by use cases from various areas of functional materials. The need for a FAIR data infrastructure in the FAIRmat research field is pressing: we need and want to support actual, daily research work in order to further science. Besides storing, retrieving, and sharing data, a FAIR data infrastructure will enable a completely new level of research. Within Area D, "Digital Infrastructure", a configurable experiment control system is to be developed as a reference implementation. EPICS was selected as the initial open-source base system. The control system of the newly founded CatLab in Berlin will be fully implemented according to the FAIRmat specifications.
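A configurable experiment control system of the kind Area D describes typically starts from a declarative description of the instrument that maps logical devices onto EPICS process variables. The sketch below is a minimal illustration of that idea; the PV prefix, device names, and fields are invented for the example and are not the actual CatLab naming scheme.

```python
# Hypothetical sketch of a configurable EPICS experiment description.
# All PV names and device entries are illustrative assumptions.

def pv_name(prefix: str, device: str, field: str) -> str:
    """Compose an EPICS process-variable name from a site prefix,
    a device identifier, and a record field."""
    return f"{prefix}:{device}:{field}"

# A minimal experiment configuration: each entry maps a logical device
# to the PVs a control client would write (setpoint) or read (readback).
EXPERIMENT = {
    "reactor_temp": {
        "setpoint": pv_name("CAT", "TEMP01", "SP"),
        "readback": pv_name("CAT", "TEMP01", "RBV"),
    },
    "gas_flow": {
        "setpoint": pv_name("CAT", "FLOW01", "SP"),
        "readback": pv_name("CAT", "FLOW01", "RBV"),
    },
}

def readback_pvs(config: dict) -> list[str]:
    """Collect every readback PV a data-acquisition client should archive."""
    return [dev["readback"] for dev in config.values()]
```

Because the configuration is plain data, the same description can drive both the control client and the FAIR metadata capture.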
Slides WEBL01 [5.478 MB]
Received: 10 October 2021 · Revised: 22 October 2021 · Accepted: 21 December 2021 · Issue date: 08 March 2022
WEBL02 Prototype of Image Acquisition and Storage System for SHINE 564
  • H.H. Lv, D.P. Bai, X.M. Liu, H. Zhao
    SSRF, Shanghai, People’s Republic of China
  The Shanghai HIgh repetition rate XFEL aNd Extreme light facility (SHINE) is a quasi-continuous-wave hard X-ray free-electron laser facility currently under construction. The image acquisition and storage system has been designed to handle the large quantity of image data generated by the beam and X-ray diagnostics systems, the laser system, etc. A prototype system with Camera Link cameras has been developed to acquire and reliably transport data at a throughput of 1000 MB/s. The image data are transferred via the ZeroMQ protocol to storage, where the image data and the relevant metadata are archived and made available for user analysis. To store high-speed image frames, an optimized storage schema was identified by comparing and testing four candidate schemas: the image data are written to HDF5 files and the per-image metadata are stored in a NoSQL database, delivering storage throughput of up to 1.2 GB/s. Performance was also compared between a stand-alone server and the Lustre file system, with Lustre providing better results. Details of the image acquisition, transfer, and storage schemas are described in the paper.
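The pipeline described above separates each camera frame into a metadata header (destined for the NoSQL database) and a pixel payload (destined for HDF5), transported together as one multipart message. The sketch below models that framing with the standard library only; the header field names mimic a typical ZeroMQ multipart pattern and are illustrative assumptions, not SHINE's actual schema.

```python
import json

# Sketch of the transport framing: each frame becomes a two-part message
# (JSON header + raw pixel bytes), as in a ZeroMQ multipart send.
# Field names are illustrative assumptions.

def pack_frame(frame_id: int, width: int, height: int, pixels: bytes) -> list[bytes]:
    """Build the two message parts for one camera frame."""
    header = {
        "frame_id": frame_id,
        "width": width,
        "height": height,
        "dtype": "uint16",
        "nbytes": len(pixels),
    }
    return [json.dumps(header).encode(), pixels]

def unpack_frame(parts: list[bytes]) -> tuple[dict, bytes]:
    """Recover header and pixel payload on the storage side: the header
    becomes the metadata record for the NoSQL database, while the payload
    is appended to an HDF5 dataset."""
    header = json.loads(parts[0])
    pixels = parts[1]
    if header["nbytes"] != len(pixels):
        raise ValueError("payload length does not match header")
    return header, pixels
```

Keeping the header self-describing lets the writer validate each frame before committing it to disk.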
Slides WEBL02 [3.703 MB]
Received: 10 October 2021 · Accepted: 21 November 2021 · Issue date: 12 February 2022
WEBL03 Managing High-Performance Data Flows and File Structures
  • J.M.C. Nilsson, T.S. Richter
    ESS, Copenhagen, Denmark
  • J.R. Harper
    STFC/RAL/ISIS, Chilton, Didcot, Oxon, United Kingdom
  • M.D. Jones
    Tessella, Abingdon, United Kingdom
  The beam intensity at the European Spallation Source will necessitate a high-performance acquisition and recording system for data from the user experiments. In addition to high neutron count rates, the expected large number of dynamic measurements per day calls for a flexible system that supports high variability in sample set-ups. Apache Kafka has been chosen as the central data switchboard to handle all event-driven data sources, from the detectors as well as from the EPICS control system. The file-writing system centres around a facility-wide pool of HDF5 file-writers that also uses Apache Kafka for command and control: file-writing jobs are posted to a Kafka topic and picked up by individual workers. This centralises and optimises resources, as I/O load can be balanced between different neutron instruments. Command messages embed a NeXus-compliant structure to capture the raw data in a community-agreed format. To simplify correctly defining the file structure, physical device locations can be visualised, and data inspection can be applied to find available data sources and easily assign them locations in the file.
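The command pattern described above amounts to posting a job message that names the Kafka topics to consume and embeds the NeXus group layout the worker should write. The sketch below builds such a command as JSON for illustration; the field names and group structure are assumptions for the example, and the actual ESS system uses its own serialized command schema.

```python
import json

# Hypothetical file-writing command of the kind posted to a Kafka topic and
# picked up by one worker in the file-writer pool. Field and group names are
# illustrative assumptions, not the ESS wire format.

def write_job(job_id: str, filename: str, data_topics: list[str]) -> str:
    """Build a JSON command telling a file-writer which Kafka topics to
    consume and what NeXus layout to produce in the output file."""
    structure = {
        "name": "entry",
        "type": "NXentry",
        "children": [
            {
                "name": "instrument",
                "type": "NXinstrument",
                # One stream per event-data topic feeding this file.
                "streams": [{"topic": t} for t in data_topics],
            },
        ],
    }
    return json.dumps({
        "job_id": job_id,
        "file_name": filename,
        "nexus_structure": structure,
    })
```

Because the command carries the complete file layout, any idle worker in the pool can service the job, which is what makes load balancing across instruments possible.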
Slides WEBL03 [1.851 MB]
WEBL04 Manage the Physics Settings on the Modern Accelerator 569
  • T. Zhang, K. Fukushima, T. Maruta, P.N. Ostroumov, A.S. Plastun, Q. Zhao
    FRIB, East Lansing, Michigan, USA
  Funding: U.S. Department of Energy Office of Science under Cooperative Agreement DESC0000661
The Facility for Rare Isotope Beams (FRIB) at Michigan State University is a unique modern user facility built around a large-scale superconducting linac capable of accelerating heavy-ion beams from oxygen to uranium. An advanced EPICS-based control system is used to operate this complex facility. High-level physics applications (HLAs) developed before and during the staged commissioning of the linac were among the critical tools that allowed the commissioning goals to be achieved quickly, within several shifts. Many of these HLAs are portable to other EPICS-controlled accelerators. Recently developed HLAs manage the extensive data needed to repeatedly achieve high-performance ion beams throughout the linac, as measured by non-destructive diagnostic instruments, and open possibilities for extracting additional value from the data. This paper presents our recent development and use of these HLAs.
* T. Zhang et al., "High-Level Physics Controls Applications Development for FRIB", in Proc. ICALEPCS'19, New York, NY, USA, 2019, paper TUCPR07.
Slides WEBL04 [9.835 MB]
Received: 19 October 2021 · Accepted: 21 November 2021 · Issue date: 02 January 2022
WEBL05 FAIR Meets EMIL: Principles in Practice 574
  • G. Günther, M. Bär, N. Greve, R. Krahl, M. Kubin, O. Mannix, W. Smith, S. Vadilonga, R. Wilks
    HZB, Berlin, Germany
  Findability, accessibility, interoperability, and reusability (FAIR) form a set of principles required to ready information for computational exploitation. The Energy Materials In-Situ Laboratory Berlin (EMIL) at BESSY II, with its unique analytical instrumentation in direct combination with an industrially relevant deposition tool, is in the final phase of commissioning. It provides an ideal testbed for ensuring that workflows are developed around the FAIR principles, enhancing usability for both human and machine agents. FAIR indicators are applied to assess compliance with the principles on an experimental workflow realized using Bluesky. Additional metadata collection, via integration of an instrument PID, an electronic laboratory notebook, and a sample tracking system, is considered along with staff training. Data are collected in NeXus format and made available in the ICAT repository. This paper reports on experiences, problems overcome, and areas still in need of future improvement.
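In practice, meeting the FAIR indicators at run time means every dataset leaves the instrument with a minimum set of identifying metadata: a persistent identifier for the instrument, a sample-tracking reference, and the experimental context. The sketch below illustrates that idea as a metadata record of the kind one might attach to a Bluesky run; the key names are illustrative assumptions, not the HZB schema.

```python
# Hypothetical FAIR-readiness check for run metadata. The required keys are
# an illustrative minimum, not the actual EMIL/ICAT metadata model.

REQUIRED_FAIR_KEYS = {"instrument_pid", "sample_id", "technique", "proposal"}

def run_metadata(instrument_pid: str, sample_id: str,
                 technique: str, proposal: str) -> dict:
    """Assemble metadata that would accompany a run and travel with the
    data into the NeXus file and the catalogue entry."""
    return {
        "instrument_pid": instrument_pid,
        "sample_id": sample_id,
        "technique": technique,
        "proposal": proposal,
    }

def is_fair_ready(md: dict) -> bool:
    """Reject a run whose metadata is missing or empty for any required key,
    so incomplete records never reach the repository."""
    return all(md.get(k) for k in REQUIRED_FAIR_KEYS)
```

Validating the record before the run starts is cheaper than repairing catalogue entries afterwards, which is where integrating the PID, notebook, and sample-tracking sources pays off.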
Slides WEBL05 [0.953 MB]
Received: 08 October 2021 · Accepted: 22 December 2021 · Issue date: 24 February 2022
WEPV047 Supporting Flexible Runtime Control and Storage Ring Operation with the FAIR Settings Management System 768
  • R. Mueller, J. Fitzek, H.C. Hüther, H. Liebermann, D. Ondreka, A. Schaller, A. Walter
    GSI, Darmstadt, Germany
  The FAIR Settings Management system is now used productively for the GSI accelerator facility, operating synchrotrons, storage rings, and transfer lines. The system’s core is being developed in collaboration with CERN and is based on CERN’s LHC Software Architecture (LSA) framework. At GSI, 2018 was dedicated to integrating the Beam Scheduling System (BSS), major implementations for storage rings were performed in 2019, and in 2020 the main focus was on optimizing the performance of the overall control system. Integrating with the BSS allows us to configure beam execution directly from the settings management system, and defining signals and conditions enables us to control the runtime behavior of the machine. The storage ring mode supports flexible operation, with features that allow pausing the machine and executing in-cycle modifications using concepts such as breakpoints, repetitions, skipping, and manipulation. After these major new features were delivered and successfully used in production, the focus shifted to optimizing their performance, which was analyzed and improved based on real-world scenarios defined by operations and machine experts.
Poster WEPV047 [0.692 MB]
Received: 09 October 2021 · Accepted: 23 November 2021 · Issue date: 22 December 2021
WEPV048 An Archiver Appliance Performance and Resources Consumption Study 774
  • R.N. Fernandes, S. Armanet, H. Kocevar, S. Regnell
    ESS, Lund, Sweden
  At the European Spallation Source (ESS), 1.6 million signals are expected to be generated by a distributed control layer composed of around 1500 EPICS IOCs. A substantial number of these signals, i.e. PVs, will be stored by the Archiving Service, a service currently under development at the Integrated Control System (ICS) Division. From a technical point of view, the Archiving Service is implemented using a software application called the Archiver Appliance. This application, originally developed at SLAC, records PVs as a function of time and stores them in its persistence layer. A study based on multiple simulation scenarios that model the future ESS modus operandi has been conducted by ICS to understand how the Archiver Appliance performs and consumes resources (e.g. RAM) under disparate workloads. This paper presents: 1) the simulation scenarios; 2) the tools used to collect and interpret the results; 3) the storage study; 4) the retrieval study; 5) the resource-saturation study; 6) conclusions based on the interpretation of the results.
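A retrieval study like the one described above typically drives the Archiver Appliance's HTTP retrieval endpoint, which serves archived samples for a PV over a time window as JSON (the `getData.json` path under `/retrieval` follows the Archiver Appliance documentation). The sketch below only builds the request URL; the host and port are illustrative assumptions.

```python
from urllib.parse import urlencode

# Sketch of retrieving archived PV data from an Archiver Appliance instance.
# The endpoint path follows the Archiver Appliance retrieval API docs;
# host and port below are illustrative assumptions.

def retrieval_url(base: str, pv: str, start_iso: str, end_iso: str) -> str:
    """Build the URL that returns archived samples for one PV as JSON
    between two ISO-8601 timestamps."""
    query = urlencode({"pv": pv, "from": start_iso, "to": end_iso})
    return f"{base}/retrieval/data/getData.json?{query}"
```

A benchmark harness would issue such requests for windows of increasing length and record latency and server resource usage, which is essentially what the retrieval and saturation studies measure.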
Poster WEPV048 [0.487 MB]
Received: 10 October 2021 · Accepted: 11 February 2022 · Issue date: 12 March 2022
WEPV049 Controls Data Archiving at the ISIS Neutron and Muon Source for In-Depth Analysis and ML Applications 780
  • I.D. Finch, G.D. Howells, A.A. Saoulis
    STFC/RAL/ISIS, Chilton, Didcot, Oxon, United Kingdom
  Funding: UKRI / STFC
The ISIS Neutron and Muon Source accelerators are currently operated using Vsystem control software. Archiving of controls data is necessary for immediate fault finding, to facilitate analysis of long-term trends, and to provide training datasets for machine learning applications. While Vsystem has built-in logging and data archiving tools, in recent years we have greatly expanded the range and quantity of data archived using an open-source software stack comprising MQTT as the messaging system, Telegraf as the metrics collection agent, and the InfluxDB time-series database as the storage backend. Now that ISIS has begun the transition from Vsystem to EPICS, this software stack will need to be replaced or adapted. To explore the practicality of adaptation, a new Telegraf plugin allowing direct collection of EPICS data has been developed. We describe the current Vsystem-based controls data archiving solution in use at ISIS, future plans for EPICS, and our plans for the transition while maintaining continuity of data.
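In a stack like the one above, each archived control value passes through two naming conventions: a hierarchical MQTT topic for transport, and InfluxDB line protocol for storage. The sketch below shows one plausible mapping from an EPICS PV name to both; the topic layout and measurement name are illustrative assumptions, not the exact ISIS conventions.

```python
# Hypothetical mapping of an EPICS PV update onto the MQTT/Telegraf/InfluxDB
# stack described above. Topic prefix and measurement name are assumptions.

def mqtt_topic(pv: str) -> str:
    """Derive a hierarchical MQTT topic from an EPICS PV name, so that
    clients can subscribe to subsystems with wildcards (e.g. epics/ACC/#)."""
    return "epics/" + pv.replace(":", "/")

def influx_line(pv: str, value: float, timestamp_ns: int) -> str:
    """Format one PV sample in InfluxDB line protocol:
    measurement,tag_key=tag_value field_key=field_value timestamp."""
    return f"pv_value,pv={pv} value={value} {timestamp_ns}"
```

Keeping both mappings as pure string transforms is what makes the stack easy to swap out: a Telegraf plugin reading EPICS directly only has to produce the same line-protocol points.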
Poster WEPV049 [0.845 MB]
Received: 09 October 2021 · Revised: 19 October 2021 · Accepted: 22 December 2021 · Issue date: 19 January 2022
WEPV050 Containerised Control Systems Development at ISIS and Potential Use in an EPICS System
  • G.D. Howells, I.D. Finch
    STFC/RAL/ISIS, Chilton, Didcot, Oxon, United Kingdom
  Funding: UKRI / STFC
Control system developers at the ISIS Neutron and Muon Source have been using Docker container technology as an efficient means to trial and develop interconnected software systems. We outline how the group has used pre-existing container images in the traditional style for recording system metrics (e.g., the TIG stack) and other telemetry. Furthermore, with the ISIS control system migrating from Vsystem to EPICS, we report how core components of these systems have been built and used within containers. Finally, we discuss whether such container technology could be used to implement the end goal of a full EPICS control system, or whether it is best suited to exploratory investigations.
Poster WEPV050 [1.402 MB]