Data Management and Processing
Paper | Title | Page
TUBPA01 The Evolution of Component Database for APS Upgrade* 192
 
  • D.P. Jarosz, N.D. Arnold, J. Carwardine, G. Decker, N. Schwarz, S. Veseli
    ANL, Argonne, Illinois, USA
 
  Funding: [*] Argonne National Laboratory's work was supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, under contract DE-AC02-06CH11357.
The purpose of the Advanced Photon Source Upgrade (APS-U) project is to update the facility to take advantage of multi-bend achromat (MBA) magnet lattices, which will produce narrowly focused x-ray beams of much higher brightness. The APS-U installation has a short schedule of one year. In order to plan and execute a task of such complexity, many individuals of very diverse backgrounds must collaborate. The Component Database (CDB) has been created to aid in documenting and managing all the parts that will go into the upgraded facility. After initial deployment and use, it became clear that the system had to become more flexible, as engineers began requesting new features such as tracking inventory assemblies and supporting relationships between components, along with several usability improvements. Recently, a more generic database schema has been implemented, which allows new functionality to be added without refactoring the database. The topics discussed in this paper include the advantages and challenges of a more generic schema, new functionality, and plans for future work.
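
A schema generic enough to absorb new features without refactoring is typically built around a single item table, parent/child links for assemblies, and free-form properties. The sketch below illustrates that pattern with SQLAlchemy; the table and column names are hypothetical, not the CDB's actual schema.

    from sqlalchemy import Column, ForeignKey, Integer, String
    from sqlalchemy.orm import declarative_base

    Base = declarative_base()

    class Item(Base):
        """One row per thing, whether a catalog design or an inventory unit."""
        __tablename__ = 'item'
        id = Column(Integer, primary_key=True)
        domain = Column(String)   # e.g. 'catalog' or 'inventory'
        name = Column(String)

    class ItemElement(Base):
        """Parent/child links let an inventory item act as an assembly."""
        __tablename__ = 'item_element'
        id = Column(Integer, primary_key=True)
        parent_id = Column(Integer, ForeignKey('item.id'))
        child_id = Column(Integer, ForeignKey('item.id'))

    class PropertyValue(Base):
        """Free-form properties absorb new features without schema changes."""
        __tablename__ = 'property_value'
        id = Column(Integer, primary_key=True)
        item_id = Column(Integer, ForeignKey('item.id'))
        name = Column(String)
        value = Column(String)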
 
Slides: TUBPA01 [0.770 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-TUBPA01
 
TUBPA02 Monitoring the New ALICE Online-Offline Computing System 195
 
  • A. Wegrzynek, V. Chibante Barroso
    CERN, Geneva, Switzerland
  • G. Vino
    INFN-Bari, Bari, Italy
 
The ALICE (A Large Ion Collider Experiment) particle detector has been successfully collecting physics data since 2010. It is currently preparing for a major upgrade of its computing system, called O2 (Online-Offline). The O2 system will consist of 268 FLPs (First Level Processors) equipped with readout cards and 1500 EPNs (Event Processing Nodes) performing data aggregation, calibration, reconstruction and event building. The system will read out 27 Tb/s of raw data and record tens of PB of reconstructed data per year. To allow efficient operation of the upgraded experiment, a new Monitoring subsystem will provide a complete overview of the O2 computing system's status. The O2 Monitoring subsystem will collect up to 600 kHz of metrics. It will consist of a custom monitoring library and a toolset covering four main functional tasks: collection, processing, storage and visualization. This paper describes the Monitoring subsystem architecture and the feature set of the monitoring library. It also shows the results of multiple benchmarks, essential to ensure that the performance requirements are met, and presents the evaluation of pre-selected tools for each of the functional tasks.
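
A monitoring library of this kind usually reduces instrumenting code to a single call that serialises a named value with tags and a timestamp and pushes it to a collector. The sketch below shows the idea with an InfluxDB-style line protocol over UDP; the metric name, tags and collector endpoint are assumptions, not the actual O2 library API.

    import socket
    import time

    def send_metric(name, value, tags=None,
                    collector=('127.0.0.1', 8089)):  # assumed UDP collector
        """Serialise one metric as an InfluxDB-style line and push it over UDP."""
        tag_str = ''.join(f',{k}={v}' for k, v in (tags or {}).items())
        line = f'{name}{tag_str} value={value} {time.time_ns()}'
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.sendto(line.encode(), collector)

    send_metric('cpu_load', 0.42, tags={'node': 'flp001'})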
Slides: TUBPA02 [11.846 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-TUBPA02
 
TUBPA03 Database Scheme for Unified Operation of SACLA / SPring-8 201
 
  • K. Okada, N. Hosoda, M. Ishii, T. Sugimoto, M. Yamaga
    JASRI/SPring-8, Hyogo-ken, Japan
  • T. Fujiwara, T. Fukui, T. Maruyama, K. Watanabe
    RIKEN SPring-8 Center, Hyogo, Japan
  • H. Sumitomo
    SES, Hyogo-pref., Japan
 
For reliable accelerator operation, it is essential to have a centralized data handling scheme covering unique equipment IDs, archived and online data from sensors, and the operation points and calibration parameters that are restored upon a change in operation mode. A database system has served this role at SPring-8 since it went into operation in 1996. However, as time passed, the original design fell short, and new features added on request pushed up maintenance costs. For example, when SACLA started in 2010, we introduced a new data format for shot-by-shot synchronized data, and the number of tables storing operation points and calibrations grew, with various formats. Facing the site's upgrade project*, it is time to overhaul the whole scheme. In the plan, SACLA will serve as a high-quality injector to a new storage ring while continuing to operate as the XFEL user machine. To handle shot-by-shot data across multiple operation patterns, we plan to introduce a new scheme in which multiple tables inherit information from a common parent table. In this paper, we report the database design for the upgrade project and the status of the transition.
* http://rsc.riken.jp/pdf/SPring-8-II.pdf
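
The parent-table scheme the authors plan maps naturally onto joined-table inheritance, where each child table holds only its own columns plus a foreign key to the shared parent. A minimal sketch with SQLAlchemy, using hypothetical table names rather than the actual SPring-8/SACLA design:

    from sqlalchemy import Column, Float, ForeignKey, Integer, String
    from sqlalchemy.orm import declarative_base

    Base = declarative_base()

    class OperationPoint(Base):
        """Common parent: identity and operation pattern shared by all settings."""
        __tablename__ = 'operation_point'
        id = Column(Integer, primary_key=True)
        pattern = Column(String)  # which shot-by-shot operation pattern applies
        kind = Column(String)
        __mapper_args__ = {'polymorphic_on': kind,
                           'polymorphic_identity': 'base'}

    class MagnetSetting(OperationPoint):
        """Child table: inherits the parent row through a foreign key."""
        __tablename__ = 'magnet_setting'
        id = Column(Integer, ForeignKey('operation_point.id'), primary_key=True)
        current = Column(Float)
        __mapper_args__ = {'polymorphic_identity': 'magnet'}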
 
Slides: TUBPA03 [0.950 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-TUBPA03
 
TUBPA04 The MAX IV Laboratory Scientific Data Management 206
 
  • V.H. Hardion, A. Barsek, F. Bolmsten, J. Brudvik, Y. Cerenius, F.H. Hennies, K. Larsson, Z. Matej, D.P. Spruce
    MAX IV Laboratory, Lund University, Lund, Sweden
 
Scientific data management is a key aspect of the IT systems of a user research facility like the MAX IV Laboratory. By definition, such a system handles the data produced by the facility's experimental users. It could be perceived as being as easy as storing the experimental data on an external hard drive to carry back to the home institute for analysis. On the other hand, the "data" can be seen as more than just a file in a directory, and the "management" as more than a copy operation. Simplicity and a good user experience versus security, authentication and reliability are among the main challenges of this project, along with all the changes of mindset it requires. This article explains the concepts and the initial roll-out of the system at the MAX IV Laboratory for the first users, as well as the features anticipated in the future.
Slides: TUBPA04 [2.801 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-TUBPA04
 
TUBPA05 High Throughput Data Acquisition with EPICS 213
 
  • K. Vodopivec
    ORNL, Oak Ridge, Tennessee, USA
  • B. Vacaliuc
    ORNL RAD, Oak Ridge, Tennessee, USA
 
  Funding: ORNL is managed by UT-Battelle, LLC, under contract DE-AC05-00OR22725 for the U.S. Department of Energy.
In addition to its use for control systems and slow device control, EPICS provides a strong infrastructure for developing high-throughput applications for continuous data acquisition. Integrating data acquisition into an EPICS environment provides many advantages: the EPICS network protocols allow tight control and monitoring of operation through an extensive set of tools. As part of a facility-wide initiative at the Spallation Neutron Source, EPICS-based data acquisition and detector controls software has been developed and deployed to most neutron scattering instruments. The software interfaces to the in-house-built detector electronics over fast optical channels for bi-directional communication and data acquisition. It is built around asynPortDriver and allows arbitrary data structures to be passed between plugins. The completely modular design allows versatile configurations of data pre-processing plugins, depending on the neutron detector type and instrument requirements. After three years of operation at average data rates of 1.5 TB per day, it shows exemplary efficiency and reliability.
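
On the client side, the same EPICS tools used for slow controls can subscribe to the acquisition PVs. A small sketch with pyepics, assuming a hypothetical PV name (the SNS software itself is a C++ asynPortDriver application):

    import epics

    def on_update(pvname=None, value=None, timestamp=None, **kw):
        """Invoked by the Channel Access monitor on every new value."""
        print(pvname, timestamp, value)

    # Subscribe once; further updates arrive without polling.
    pv = epics.PV('BL99:Det:EventRate', auto_monitor=True, callback=on_update)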
 
Slides: TUBPA05 [2.427 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-TUBPA05
 
TUBPA06 Scalable Time Series Documents Store 218
 
  • M.J. Slabber, F. Joubert, M.T. Ockards
    SKA South Africa, National Research Foundation of South Africa, Cape Town, South Africa
 
  Funding: National Research Foundation (South Africa)
Data indexed by time is continuously collected from instruments, the environment and users. Samples are recorded from sensors or software components at specific times, starting as simple numbers and increasing in complexity as associated values accrue, e.g. status and acquisition times. A sample is thus more than a triple; it evolves into a document. Besides variety, volume and veracity also increase, and the time series database (TSDB) has to process hundreds of GB/day. Users performing analyses also have ever-increasing demands, e.g. plotting, in under 10 s, all target coordinates of 64 radio telescope dishes recorded at 1 Hz over 24 h. Besides the many short-term queries, trend analyses over long periods and in-depth enquiries by specialists around past events, e.g. a critical hardware failure or a scientific discovery, are performed. This paper discusses the solution used for the MeerKAT radio telescope under construction by SKA-SA in South Africa. The system architecture and performance characteristics of the developed TSDB are explained. We demonstrate how we broke the mould of using general-purpose database technologies to build a TSDB by instead utilising technologies employed in distributed file storage.
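
The evolution from triple to document that the abstract describes can be pictured directly; the field names below are illustrative, not MeerKAT's actual schema:

    # A bare sample is a triple: (sensor, timestamp, value) ...
    sample = ('anc.wind.speed', 1502964200.0, 14.9)

    # ... and grows into a document as associated values accrue.
    document = {
        'sensor':     'anc.wind.speed',
        'sample_ts':  1502964200.0,  # when the sensor produced the value
        'value':      14.9,
        'status':     'nominal',     # associated status
        'acquire_ts': 1502964200.2,  # when the system acquired it
    }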
 
Slides: TUBPA06 [1.781 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-TUBPA06
 
TUPHA036 Applying Service-Oriented Architecture to Archiving Data in Control and Monitoring Systems 461
 
  • J.M. Nogiec, K. Trombly-Freytag
    Fermilab, Batavia, Illinois, USA
 
  Funding: Work supported by the U.S. Department of Energy under contract no. DE-AC02-07CH11359
Current trends in software system architecture focus our attention on building systems as sets of loosely coupled components, each providing a specific functionality known as a service. Control and monitoring systems are not much different: a functionally distinct sub-system can be identified and independently designed, implemented, deployed and maintained. One functionality that lends itself perfectly to becoming a service is archiving the history of the system state. The design of such a service and our experience using it are the topic of this article. The service is built with responsibility segregation in mind; it therefore reduces data processing on the data-viewer side and separates data access from modification operations. The service architecture and the details of its data store design are discussed. An implementation of a service client capable of archiving EPICS process variables and LabVIEW shared variables is presented, and the use of a gateway service for saving data from GE iFIX is outlined. Data access tools, including a browser-based data viewer (HTML5) and a mobile viewer (Android app), are also presented.
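
Responsibility segregation of the kind described usually splits the archive into a write-only interface for acquisition clients and a read-only interface for viewers. A minimal sketch of that separation, with hypothetical method names:

    from abc import ABC, abstractmethod

    class ArchiveWriter(ABC):
        """Modification side: used only by the archiving clients."""
        @abstractmethod
        def append(self, channel: str, timestamp: float, value: float) -> None: ...

    class ArchiveReader(ABC):
        """Query side: serves viewers data already shaped for display,
        keeping processing off the viewer."""
        @abstractmethod
        def history(self, channel: str, start: float, end: float) -> list: ...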
 
Poster: TUPHA036 [0.952 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-TUPHA036
 
TUPHA038 A Generic REST API Service for Control Databases 465
 
  • W. Fu, T. D'Ottavio, S. Nemesure
    BNL, Upton, Long Island, New York, USA
 
  Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-SC0012704 with the U.S. Department of Energy.
Accessing database resources from Accelerator Controls servers or applications via JDBC/ODBC and other dedicated programming interfaces has been common for many years. However, the availability and performance limitations of these technologies became obvious as rich web and mobile communication technologies went mainstream. HTTP REST services have become a more reliable and common way to provide easy access to most types of data resources, including databases. Several commercial database REST services have become available in recent years, each with its own pros and cons. This paper presents a way to set up a generic HTTP REST database service with technology that combines the advantages of application servers (such as GlassFish), JDBC drivers, and Java technology to make major RDBMS systems easy to access and handle data in a secure way. This allows database clients to retrieve data (user data or metadata) in standard formats such as XML or JSON.
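
From the client's point of view, such a service reduces database access to an ordinary HTTP call. A sketch against a hypothetical endpoint layout (the base URL, path scheme and parameters are illustrative, not the actual BNL service API):

    import requests

    BASE = 'https://controls.example.org/restdb'  # hypothetical service root

    resp = requests.get(f'{BASE}/ops/magnet_settings',
                        params={'magnet': 'QF1'},
                        headers={'Accept': 'application/json'},  # or application/xml
                        timeout=10)
    resp.raise_for_status()
    for row in resp.json():
        print(row)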
 
Poster: TUPHA038 [0.679 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-TUPHA038
 
TUPHA039 Bunch Arrival Time Monitor Control Setup for SwissFEL Applications 469
 
  • P. Chevtsov, V.R. Arsov
    PSI, Villigen PSI, Switzerland
  • M. Dach
    Dach Consulting GmbH, Brugg, Switzerland
 
The Bunch Arrival-time Monitor (BAM) is a precise beam diagnostics instrument for assessing accelerator stability online. It is one of the most important components of the SwissFEL facility at the Paul Scherrer Institute (PSI). The overall complexity of the monitor demands an extremely reliable control system to handle basic BAM operations. A prototype of such a system was created at PSI. The system is very flexible: it provides a set of tools for implementing advanced control features, such as tagging experimental data with a SwissFEL machine pulse number or embedding high-level control applications into the process controllers (IOCs). The paper presents the structure of the BAM control setup and discusses the operational experience gained with it.
Poster: TUPHA039 [1.027 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-TUPHA039
 
TUPHA040 Development of Real-Time Data Publish and Subscribe System Based on Fast RTPS for Image Data Transmission 473
 
  • G.I. Kwon, J.S. Hong, T.G. Lee, W.R. Lee, J.S. Park, T.H. Tak
    NFRI, Republic of Korea
 
  Funding: This work was supported by the Korean Ministry of Science ICT & Future Planning under the KSTAR project.
In fusion experiments, a real-time network is essential for plasma control: it transfers diagnostic data from the diagnostic devices and command data from the PCS (Plasma Control System). Among these, transmitting image data from a diagnostic system to other systems in real time is harder than for other data types, because images are much larger. Transmitting images requires high throughput and a best-effort property, and real-time transmission requires low latency. RTPS (Real Time Publish Subscribe) is reliable and has Quality-of-Service properties that enable a best-effort protocol. In this paper, eProsima Fast RTPS was used to implement an RTPS-based real-time network. Fast RTPS has low latency and high throughput and enables best-effort and reliable publish-subscribe communication for real-time applications over a standard Ethernet network. This paper evaluates the suitability of Fast RTPS for a real-time image data transmission system. To evaluate the performance of the Fast RTPS-based system, a publisher system publishes image data and multiple subscriber systems subscribe to it.
* giilkwon@nfri.re.kr, Control team, National Fusion Research Institute, Daejeon, South Korea
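
Fast RTPS itself is a C++ library, so as a stand-in the sketch below illustrates the same topic-based publish-subscribe pattern with ZeroMQ (pyzmq): the publisher pushes image frames under a topic and subscribers filter on it. Endpoints, topic names and the payload are assumptions.

    import time
    import zmq

    ctx = zmq.Context()

    pub = ctx.socket(zmq.PUB)
    pub.bind('tcp://*:5556')

    # Subscriber (normally a separate process): filter on a topic prefix.
    sub = ctx.socket(zmq.SUB)
    sub.connect('tcp://localhost:5556')
    sub.setsockopt(zmq.SUBSCRIBE, b'camera/')
    time.sleep(0.2)                         # let the subscription propagate

    # Each frame travels as a two-part message: topic + raw pixels.
    frame = bytes(1024 * 1024)              # placeholder image payload
    pub.send_multipart([b'camera/01', frame])

    topic, payload = sub.recv_multipart()   # blocks until the frame arrives
    print(topic, len(payload))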
 
Poster: TUPHA040 [8.164 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-TUPHA040
 
TUPHA041 Conception and Realization of the Versioning of Databases Between Two Research Institutes 478
 
  • S. Mueller, R. Müller
    GSI, Darmstadt, Germany
 
This paper describes the version control of Oracle databases across different environments. Its basis is the collaboration between the GSI Helmholtz Centre for Heavy Ion Research (GSI) and the European Organization for Nuclear Research (CERN). The goal is to provide a sufficient and practical concept to improve database synchronization and version control for a specific database landscape shared by the two research facilities. First, the relevant requirements of both facilities were identified and compared, leading to a shared catalog of requirements. In the process, database tools such as Liquibase and Flyway were used and integrated as prototypes into the Oracle system landscape. During the implementation of the prototypes, several issues were identified that arise from the established situation of two collaborating departments of the research facilities. The prototype was required to be flexible enough to adapt to the given conditions of the database landscape. The creation of a flexible and adjustable system enables the two research facilities to use, synchronize and update the shared database landscape.
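
Tools like Flyway rest on one simple convention: versioned migration scripts applied in order, with every applied version recorded in the schema itself. A minimal sketch of that convention (SQLite is used here only to keep the example self-contained; the landscape in the paper is Oracle):

    import pathlib
    import sqlite3

    def migrate(conn, migrations_dir='migrations'):
        """Apply V<n>__<name>.sql scripts in version order, skipping versions
        already recorded in the schema_version table (Flyway-style)."""
        conn.execute('CREATE TABLE IF NOT EXISTS schema_version'
                     ' (version INTEGER PRIMARY KEY)')
        applied = {v for (v,) in conn.execute('SELECT version FROM schema_version')}
        scripts = sorted(pathlib.Path(migrations_dir).glob('V*__*.sql'),
                         key=lambda p: int(p.name[1:].split('__')[0]))
        for script in scripts:
            version = int(script.name[1:].split('__')[0])
            if version not in applied:
                conn.executescript(script.read_text())
                conn.execute('INSERT INTO schema_version VALUES (?)', (version,))
                conn.commit()

    migrate(sqlite3.connect('landscape.db'))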
Poster: TUPHA041 [1.991 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-TUPHA041
 
TUPHA042 ADAPOS: An Architecture for Publishing ALICE DCS Conditions Data 482
 
  • J.L. Lång, A. Augustinus, P.M. Bond, P.Ch. Chochula, A.N. Kurepin, M. Lechman, O. Pinazza
    CERN, Geneva, Switzerland
  • A.N. Kurepin
    RAS/INR, Moscow, Russia
  • M. Lechman
    IP SAS, Bratislava, Slovak Republic
  • O. Pinazza
    INFN-Bologna, Bologna, Italy
 
The ALICE Data Point Service (ADAPOS) is a software architecture being developed for the Run 3 period of the LHC, as part of the effort to transmit conditions data from the ALICE Detector Control System (DCS) to the Grid for distributed processing. ADAPOS uses Distributed Information Management (DIM), 0MQ, and the ALICE Data Point Processing Framework (ADAPRO). DIM and 0MQ are multi-purpose application-level network protocols. DIM and ADAPRO are developed and maintained at CERN. ADAPRO is a multi-threaded application framework supporting remote control and real-time features such as thread affinities, records aligned with cache-line boundaries, and memory locking. ADAPOS and ADAPRO are written in C++14 using OSS tools, Pthreads, and the Linux API. The key processes of ADAPOS, Engine and Terminal, run on separate machines facing different networks. Devices connected to the DCS publish their state as DIM services. Engine receives updates to the services and converts them into a binary stream. Terminal receives the stream over 0MQ and maintains an image of the DCS state. At regular intervals, it sends copies of the image, over another 0MQ connection, to a readout process of the ALICE Data Acquisition.
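
The Terminal's core job, maintaining a state image from a stream of updates and shipping consistent snapshots at intervals, can be sketched in a few lines. This is a conceptual illustration in Python, not the actual C++14 implementation:

    import copy
    import threading

    class StateImage:
        """Image of the DCS state, updated by incoming deltas."""
        def __init__(self):
            self._image = {}
            self._lock = threading.Lock()

        def apply_update(self, datapoint: str, value: bytes) -> None:
            """Fold one update from the binary stream into the image."""
            with self._lock:
                self._image[datapoint] = value

        def snapshot(self) -> dict:
            """A consistent copy, sent at regular intervals to the readout."""
            with self._lock:
                return copy.deepcopy(self._image)
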
Poster: TUPHA042 [0.686 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-TUPHA042
 
TUPHA043 Concept and First Evaluation of the Archiving System for FAIR 486
 
  • V. Rapp
    GSI, Darmstadt, Germany
  • V. Cucek
    XLAB d.o.o., Ljubljana, Slovenia
 
Since the beginning of the computer era, storing and analyzing data has been one of the main focuses of IT systems. It is therefore no wonder that the users and operators of the coming FAIR complex have expressed a strong requirement to collect the data coming from the different accelerator components and store it for future analysis of the accelerator's performance and proper function. This task will be performed by the Archiving System, a component to be developed by FAIR's Controls team in cooperation with XLAB d.o.o., Slovenia. With more than 2000 devices, over 50,000 parameters and around 30 MB of data per second (roughly 2.6 TB per day) to store, the Archiving System will face serious challenges in terms of performance and scalability. Besides the actual storage complexity, the system will also need to provide mechanisms to access the data in an efficient manner. Fortunately, open-source products are available on the market that may be utilized to perform the given tasks. This paper presents the first conceptual design of the coming system, the challenges and choices met, as well as its integration into the coming FAIR system landscape.
Poster: TUPHA043 [1.154 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-TUPHA043
 
TUPHA044 Integration of the Vacuum Scada With CERN's Enterprise Asset Management System 490
 
  • A.P. Rocha, S. Blanchard, J. Fraga, G. Gkioka, P. Gomes, L.A. Gonzalez, T. Krastev, G. Riddone, D. Widegren
    CERN, Geneva, Switzerland
 
The vacuum group is responsible for the operation and consolidation of vacuum systems across all CERN accelerators. With over 15,000 pieces of control equipment concerned, maintenance management requires an Enterprise Asset Management (EAM) system, in which the life cycle of every individual piece of equipment is managed from reception through decommissioning. On the vacuum SCADA, operators monitor and interact with equipment that has been declared in the vacuum database (vacDB). The creation of work orders and the follow-up of equipment is done through inforEAM, which has its own database. These two databases need to be coupled, so that equipment accessible on the SCADA is also available in inforEAM for maintenance management. This paper describes the underlying architecture and technologies behind vacDM, a web application that ensures consistency between vacDB and inforEAM, thus guaranteeing that the equipment displayed in the vacuum SCADA is available in inforEAM. In addition, vacDM manages equipment labelling jobs by assigning equipment codes to new equipment and automatically creating the corresponding assets in inforEAM.
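
The consistency check at the heart of such a tool reduces to diffing the two equipment inventories by their codes. A hypothetical sketch:

    def reconcile(vacdb_codes: set, eam_codes: set):
        """Compare equipment codes declared in vacDB against inforEAM assets."""
        missing_in_eam = vacdb_codes - eam_codes    # assets vacDM must create
        unknown_in_vacdb = eam_codes - vacdb_codes  # assets with no SCADA declaration
        return missing_in_eam, unknown_in_vacdb

    to_create, to_review = reconcile({'VPGA-1', 'VPGB-2'}, {'VPGA-1'})
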
Poster: TUPHA044 [1.138 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-TUPHA044
 
THMPA07 Improvement of Temperature and Humidity Measurement System for KEK Injector Linac 1323
 
  • I. Satake, M. Satoh, T. Suwada, Y. Yano
    KEK, Ibaraki, Japan
  • T. Kudou, S. Kusano, Y. Mizukawa
    Mitsubishi Electric System & Service Co., Ltd, Tsukuba, Japan
 
The temperature and humidity measurement system at the KEK injector linac consists of 26 data loggers connected to around 700 temperature and humidity sensors, one EPICS IOC, and the CSS archiver. The CSS archiver engine retrieves the temperature and humidity data measured by the data loggers via Ethernet, and the data are stored in a PostgreSQL-based database. A new server computer was recently brought into service running CSS archiver version 4 instead of version 3, which drastically improves the speed of retrieving archived data. The long-term beam stability of the linac is becoming a very important figure of merit, since simultaneous top-up injection is required for the four independent storage rings toward SuperKEKB Phase II operation. For this reason, we developed a new archiver data management application with good operability. Since it allows operators to quickly detect anomalous behavior in the temperature and humidity data that would degrade beam quality, it makes the improved measurement system much more effective. We report the detailed system description and its practical application to daily beam operation.
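
Retrieving archived samples then becomes an ordinary SQL query against the archiver's PostgreSQL store. The sketch below assumes the usual channel/sample layout of the CSS RDB archiver and a hypothetical channel name:

    import psycopg2

    conn = psycopg2.connect(host='archiver-host', dbname='css_archive',
                            user='report')
    with conn.cursor() as cur:
        cur.execute("""
            SELECT s.smpl_time, s.float_val
            FROM sample s
            JOIN channel c ON c.channel_id = s.channel_id
            WHERE c.name = %s AND s.smpl_time BETWEEN %s AND %s
            ORDER BY s.smpl_time
        """, ('LI:A1:TEMP', '2017-10-01', '2017-10-02'))
        for sample_time, value in cur.fetchall():
            print(sample_time, value)
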
Slides: THMPA07 [2.221 MB]
Poster: THMPA07 [1.892 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-THMPA07
 
THPHA036 Multi-Criteria Partitioning on Distributed File Systems for Efficient Accelerator Data Analysis and Performance Optimization 1436
 
  • S. Boychenko, M.A. Galilée, J.C. Garnier, M. Zerlauth
    CERN, Geneva, Switzerland
  • M. Zenha-Rela
    University of Coimbra, Coimbra, Portugal
 
Since the introduction of the map-reduce paradigm, relational databases have been increasingly replaced by more efficient and scalable architectures, in particular in environments where a query processes terabytes or even petabytes of data in a single execution. The same tendency is observed at CERN, where the data archiving systems for operational accelerator data are already working well beyond their initially provisioned capacity. Most modern data analysis frameworks are not optimized for heterogeneous workloads such as those arising in the dynamic environment of one of the world's largest accelerator complexes. This contribution presents Mixed Partitioning Scheme Replication (MPSR) as a solution that will outperform conventional distributed processing environment configurations for almost the entire phase space of data analysis use cases and performance optimization challenges arising during the commissioning and operational phases of an accelerator. We present the results of a statistical analysis as well as benchmarks of the implemented prototype, which allow us to define the characteristics of the proposed approach and to confirm the expected performance gains.
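
The idea behind mixed partitioning can be sketched as follows: each record is written to several replicas, each partitioned by a different criterion, and a query is routed to whichever replica matches its access pattern. The directory layout below is illustrative, not the CERN prototype's:

    import hashlib

    N_SHARDS = 16

    def replica_paths(signal: str, day: str):
        """Store the same record under two partitioning schemes."""
        shard = int(hashlib.md5(signal.encode()).hexdigest(), 16) % N_SHARDS
        return (f'/archive/by_time/day={day}/',            # fast time-window scans
                f'/archive/by_signal/shard={shard:02d}/')  # fast per-signal history

    def route_query(has_time_predicate: bool) -> str:
        """Send each query to the replica whose partitioning matches it."""
        return 'by_time' if has_time_predicate else 'by_signal'
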
Poster: THPHA036 [0.280 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-THPHA036
 
THPHA037 Future Archiver for CERN SCADA Systems 1442
 
  • P. Golonka, M. Gonzalez-Berges, J. Guzik, R. Kulaga
    CERN, Geneva, Switzerland
 
  Funding: Presented work is conducted in collaboration with ETM/Siemens in the scope of the CERN openlab project
The paper presents the concept of a modular and scalable archiver (historian) for SCADA systems at CERN. By separating the concerns of archiving from the specifics of data-storage systems at a high abstraction level, using a clean and open interface, it will be possible to integrate various data-handling technologies without great effort. The frontend part, responsible for business logic, will communicate with one or multiple backends, which in turn implement the data store and query functionality, employing traditional relational databases as well as modern NoSQL and big-data solutions, opening the door to advanced data analytics and matching the growing performance requirements for data storage.
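
The clean interface between frontend and backends can be pictured as a narrow storage contract that every backend implements, with the frontend fanning out to whichever backends are configured. Method names here are assumptions, not the actual design:

    from abc import ABC, abstractmethod

    class ArchiverBackend(ABC):
        """Narrow storage contract: an RDB, NoSQL or big-data backend can be
        plugged in without touching the frontend."""
        @abstractmethod
        def store(self, element: str, timestamp: float, value) -> None: ...

        @abstractmethod
        def query(self, element: str, start: float, end: float) -> list: ...

    class Historian:
        """Frontend: business logic only; storage is delegated."""
        def __init__(self, backends):
            self.backends = backends  # one or multiple ArchiverBackend instances

        def archive(self, element, timestamp, value):
            for backend in self.backends:
                backend.store(element, timestamp, value)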
 
Poster: THPHA037 [7.294 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-THPHA037
 
THPHA038 Upgrade of the CERN RADE Framework Architecture Using RabbitMQ and MQTT 1446
 
  • O.Ø. Andreassen, F. Marazita, M.K. Miskowiec
    CERN, Geneva, Switzerland
 
AMQP was originally developed for the finance community as an open way to communicate the vastly increasing over-the-counter trade, risk and clearing market data without the need for a proprietary protocol and expensive licenses. In this paper, we explore the possibility of using AMQP with MQTT extensions in a cross-platform, cross-language environment, where the communication bus becomes an extensible framework through which simple/thin software clients can leverage the many expert libraries at CERN.
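
With RabbitMQ's MQTT plugin, a message published over AMQP is visible to MQTT clients and vice versa: MQTT topics map onto the amq.topic exchange, with '/' in topic names translated to '.' in routing keys. A minimal sketch of the AMQP side using pika; the broker host and routing key are assumptions:

    import pika

    # An MQTT client subscribed to 'labview/cryo/temperature' on the same
    # broker receives this message through the MQTT plugin.
    conn = pika.BlockingConnection(pika.ConnectionParameters('rabbitmq.example.org'))
    channel = conn.channel()
    channel.basic_publish(exchange='amq.topic',
                          routing_key='labview.cryo.temperature',
                          body=b'4.2')
    conn.close()
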
Poster: THPHA038 [1.797 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-THPHA038
 
THPHA041 Information System for ALICE Experiment Data Access 1451
 
  • J. Jadlovsky, J. Cabala, J. Cerkala, E. Hanc, A. Jadlovska, S. Jadlovska, M. Kopcik, M. Oravec, M. Tkacik, D. Voscek
    Technical University of Kosice, Kosice, Slovak Republic
  • P.M. Bond, P.Ch. Chochula
    CERN, Geneva, Switzerland
 
The main goal of this paper is the presentation of the Dcs ARchive MAnager for ALICE Experiment detector conditions data (DARMA), the updated version of the AMANDA 3 software currently used within the ALICE experiment at CERN. The typical user of this system is either a physicist performing further analysis on data acquired during the operation of the ALICE detector, or an engineer analyzing the detector status between iterations of experiments. Based on experience with the current system, the updated version aims to reduce the overall complexity of its predecessor, leading to simpler implementation, administration and portability of the system without sacrificing functionality. DARMA is realized as an ASP.NET web application based on the Model-View-Controller architecture, and this paper provides a closer look at the design phase of the new backend structure in comparison to the previous solution, as well as a description of the individual modules of the system.
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-THPHA041
 
THPHA042 ASCI: A Compute Platform for Researchers at the Australian Synchrotron 1455
 
  • J. Marcou, R.R.I. Bosworth
    ASCo, Clayton, Victoria, Australia
  • R. Clarken
    SLSA-ANSTO, Clayton, Australia
  • P. Martin, A. Moll
    SLSA, Clayton, Australia
 
The volume and quality of scientific data produced at the Australian Synchrotron continue to grow rapidly due to advancements in detectors, motion control and automation. This makes it critical that researchers have access to computing infrastructure that enables them to efficiently process and extract insight from their data. To facilitate this, we have developed a compute platform that enables researchers to analyse their data in real time while at the beamline, as well as post-experiment by logging in remotely. This system, named ASCI, provides a convenient web-based interface for launching Linux desktops that run inside Docker containers on high-performance compute hardware. Each session has the user's data mounted and is preconfigured with the software required for their experiment. This poster presents the architecture of the system and explains the design decisions made in building this platform.
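
Launching such a session amounts to starting a container with the user's data volume-mounted. A sketch with the Docker SDK for Python; the image name and paths are hypothetical:

    import docker

    client = docker.from_env()

    # One desktop session: experiment data mounted read-only into the container.
    session = client.containers.run(
        'asci/desktop:latest',
        detach=True,
        volumes={'/staging/expt-1234': {'bind': '/home/user/data', 'mode': 'ro'}},
    )
    print(session.short_id)
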
Poster: THPHA042 [1.402 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-THPHA042
 
THPHA043 Lightflow - a Lightweight, Distributed Workflow System 1457
 
  • A. Moll, R. Clarken, P. Martin, S.T. Mudie
    SLSA-ANSTO, Clayton, Australia
 
The Australian Synchrotron, located in Clayton, Melbourne, is one of Australia's most important pieces of research infrastructure. After more than 10 years of operation, the beamlines at the Australian Synchrotron are well established, and the demand for automation of research tasks is growing. Such tasks routinely involve the reduction of TB-scale data, online (real-time) analysis of the recorded data to guide experiments, and fully automated data management workflows. In order to meet these demands, a generic, distributed workflow system was developed, based on well-established Python libraries and tools. The individual tasks of a workflow are arranged in one or more directed acyclic graphs, which together form the workflow. Workers consume the tasks, allowing the processing of a workflow to scale horizontally. Data can flow between tasks, and a variety of specialised tasks is available. Lightflow has been released as open source on the Australian Synchrotron GitHub page.
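
The execution model, tasks arranged in a directed acyclic graph with data flowing between them, can be illustrated with the standard library's topological sorter. This is a conceptual sketch, not Lightflow's actual API; in a real deployment, workers would consume each ready task in parallel:

    from graphlib import TopologicalSorter  # Python 3.9+

    # Dependencies: reduce needs acquire; analyse needs reduce.
    dag = {'acquire': set(), 'reduce': {'acquire'}, 'analyse': {'reduce'}}

    tasks = {
        'acquire': lambda d: {**d, 'raw': 'frames'},
        'reduce':  lambda d: {**d, 'reduced': True},
        'analyse': lambda d: {**d, 'result': 42},
    }

    data = {}
    for name in TopologicalSorter(dag).static_order():
        data = tasks[name](data)  # data flows from task to task
    print(data)
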
Poster: THPHA043 [0.582 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-THPHA043
 
THPHA044 REALTA and pyDART: A Set of Programs to Perform Real Time Acquisition and On-Line Analysis at the FERMI Free Electron Laser 1460
 
  • E. Allaria, E. Ferrari, E. Roussel, L. Vidotto
    Elettra-Sincrotrone Trieste S.C.p.A., Basovizza, Italy
 
During the optimization phase of the FERMI Free Electron Laser (FEL), many machine parameters have to be carefully tuned to deliver the best FEL pulses to users, e.g. the seed laser intensity, the dispersion strength, etc. For that purpose, a new Python-based acquisition tool, called REALTA (Real Time Acquisition program), has been developed to acquire various machine parameters, electron beam properties and FEL signals on a shot-by-shot basis, thanks to the real-time capabilities of the TANGO control system. The data are saved continuously during the acquisition into an HDF5 file. The pyDART (Python Data Analysis Real Time) program is the post-processing tool that enables fast analysis of the data acquired with REALTA. It makes it possible to study the correlations and dependencies between the FEL and electron beam properties and the machine parameters. In this work, we present the REALTA and pyDART toolkit developed for the FERMI FEL.
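
Continuous shot-by-shot saving into HDF5 is typically done with extensible datasets that grow by one entry per shot. A minimal sketch with h5py; the file name, dataset name and data source are assumptions, not REALTA's actual layout:

    import h5py
    import numpy as np

    with h5py.File('run_0001.h5', 'w') as f:
        ds = f.create_dataset('seed_intensity', shape=(0,), maxshape=(None,),
                              dtype='f8', chunks=True)
        for value in np.random.rand(10):    # stand-in for live shot values
            ds.resize(ds.shape[0] + 1, axis=0)
            ds[-1] = value
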
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-THPHA044