Paper | Title | Page
---|---|---
TUCPA01 | Data Analysis Support in Karabo at European XFEL | 245
We describe the data analysis structure integrated into the Karabo framework [1] to support scientific experiments and data analysis at European XFEL GmbH. The photon science experiments have a range of data analysis requirements, including online (i.e. near real-time, during the actual measurement) and offline data analysis. The Karabo data analysis framework supports execution of automatic data analysis for routine tasks, supports complex experiment protocols including feedback from data analysis into instrument control, and supports integration of external applications. Online data analysis is carried out using distributed computing and accelerator hardware (such as GPUs) where required to balance load and achieve near real-time throughput. Analysis routines provided by Karabo are implemented in C++ and Python, and make use of established scientific libraries. The XFEL control and analysis software team collaborates with users to integrate experiment-specific analysis codes, protocols and requirements into this framework, and to make it available for the experiments and subsequent offline data analysis.
[1] Heisen et al., "Karabo: An Integrated Software Framework Combining Control, Data Management, and Scientific Computing Tasks", in Proc. ICALEPCS 2013, Melbourne, Australia, paper FRCOAAB02 (2013).
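To illustrate the kind of load-balanced online analysis the abstract describes, below is a minimal, generic Python sketch that spreads a per-train image analysis over a pool of worker processes. It is not Karabo's actual API: the analysis function and the toy data are hypothetical stand-ins.

```python
import numpy as np
from multiprocessing import Pool

def analyse_train(image):
    """Toy per-train analysis: integrated intensity and intensity centroid."""
    total = float(image.sum())
    ys, xs = np.indices(image.shape)
    centroid = ((ys * image).sum() / total, (xs * image).sum() / total)
    return total, centroid

if __name__ == "__main__":
    # Stand-in for a stream of detector images arriving during a measurement.
    trains = [np.random.poisson(5.0, (512, 512)) for _ in range(32)]
    with Pool(processes=4) as pool:           # balance load over 4 workers
        for total, (cy, cx) in pool.map(analyse_train, trains):
            print(f"intensity={total:.0f}  centroid=({cy:.1f}, {cx:.1f})")
```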
Slides TUCPA01 [10.507 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-TUCPA01
TUCPA02 | Leveraging Splunk for Control System Monitoring and Management | 253
Funding: This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
The National Ignition Facility (NIF) is the world's largest and most energetic laser experimental facility, with 192 beams capable of delivering 1.8 megajoules and 500 terawatts of ultraviolet light to a target. To aid in NIF control system troubleshooting, the commercial product Splunk was introduced to collate and view system log files collected from 2,600 processes running on 1,800 servers, front-end processors, and embedded controllers. We have since extended Splunk's access into current and historical control system configuration data, as well as experiment setup and results. Leveraging Splunk's built-in data visualization and analytical features, we have built custom tools to gain insight into the operation of the control system and to increase its reliability and integrity. Use cases include predictive analytics for alerting on pending failures, analysis of the shot-operations critical path to improve operational efficiency, performance monitoring, project management, and analysis and monitoring of system availability. This talk covers the various ways we have leveraged Splunk to improve and maintain NIF's integrated control system. LLNL-ABS-728830
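As a hedged sketch of how such statistics can be pulled out of Splunk programmatically, the snippet below runs a one-shot search through Splunk's REST search API with the `requests` library. The host, credentials, index and query are placeholders, not NIF's actual configuration.

```python
import requests

BASE = "https://splunk.example.org:8089"     # hypothetical Splunk host
AUTH = ("user", "password")                  # placeholder credentials

# One-shot search job: count error messages per host over the last day.
resp = requests.post(
    f"{BASE}/services/search/jobs",
    auth=AUTH,
    data={
        "search": "search index=controls_logs error | stats count by host",
        "earliest_time": "-24h",
        "exec_mode": "oneshot",              # return results synchronously
        "output_mode": "json",
    },
)
resp.raise_for_status()
for row in resp.json().get("results", []):
    print(row["host"], row["count"])
```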
Slides TUCPA02 [1.762 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-TUCPA02
TUCPA03 | Experience with Machine Learning in Accelerator Controls | 258
Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-SC0012704 with the U.S. Department of Energy.
The data repository for the Relativistic Heavy Ion Collider and associated pre-injector accelerators consists of well over half a petabyte of uncompressed data. By today's standards, this is not a large amount of data. However, a large fraction of that data has never been analyzed and likely contains useful information. We describe in this paper our efforts to use machine learning techniques to pull new information out of existing data. Our focus has been on simple problems, such as associating basic statistics with certain data sets and doing predictive analysis on single-array data. The tools we have tested include unsupervised learning using TensorFlow, multimode neural networks, hierarchical temporal memory techniques using NuPIC, and deep learning techniques using Theano and Keras.
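As a minimal sketch of "predictive analysis on single-array data" in Keras (one of the tools listed), the snippet below trains a small network to predict the next sample of a scalar time series from a sliding window of past samples. The window length, layer sizes and synthetic data are arbitrary choices, not the paper's setup.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

WINDOW = 32
# Synthetic stand-in for an archived single-array signal.
series = np.sin(np.linspace(0, 60, 2000)) + 0.05 * np.random.randn(2000)

# Build (window -> next value) training pairs from the raw array.
X = np.stack([series[i:i + WINDOW] for i in range(len(series) - WINDOW)])
y = series[WINDOW:]

model = Sequential([
    Dense(64, activation="relu", input_shape=(WINDOW,)),
    Dense(32, activation="relu"),
    Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=64, verbose=0)

print("next-sample prediction:", float(model.predict(X[-1:])[0, 0]))
```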
Slides TUCPA03 [6.658 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-TUCPA03
TUCPA04 | Model Learning Algorithms for Anomaly Detection in CERN Control Systems | 265
At CERN there are over 600 different industrial control systems with millions of deployed sensors and actuators, and their monitoring represents a challenging and complex task. This paper describes three different mathematical approaches that have been designed and developed to detect anomalies in CERN control systems. Specifically, one of these algorithms is purely based on expert knowledge, while the other two mine historical data to create a simple model of the system, which is then used to detect anomalies. The methods presented can be categorized as dynamic unsupervised anomaly detection: "dynamic" since the behaviour of the system changes in time, "unsupervised" because they predict faults without reference to prior events. Consistent deviations from the historical evolution can be seen as warning signs of a possible future anomaly that system experts or operators need to check. The paper also presents some results obtained from the analysis of the LHC cryogenic system. Finally, the paper briefly describes the deployment of Spark and Hadoop in the CERN environment to deal with huge datasets and to spread the computational load of the analysis across multiple nodes.
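The data-driven variant can be illustrated in its simplest possible form: fit a rolling model of a sensor's historical evolution and flag consistent deviations from it. The window length and threshold below are illustrative, not the values used for the LHC cryogenic system.

```python
import numpy as np
import pandas as pd

def flag_anomalies(signal: pd.Series, window: int = 200, k: float = 4.0):
    """Flag samples deviating more than k rolling sigmas from the rolling mean."""
    mu = signal.rolling(window, min_periods=window).mean()
    sigma = signal.rolling(window, min_periods=window).std()
    return (signal - mu).abs() / sigma > k

rng = np.random.default_rng(0)
data = pd.Series(rng.normal(0.0, 1.0, 5000))
data.iloc[4000:4050] += 8.0                  # injected fault signature
print("first flagged samples:", data[flag_anomalies(data)].index[:5].tolist())
```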
Slides TUCPA04 [1.965 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-TUCPA04
TUCPA05 | Laser Damage Image Pre-processing Based on Total Variation | 272
The inspection and tracking of laser-induced damage to optics plays a significant role in high-power laser systems. Laser-induced defects or flaws on the surfaces of optics appear in images acquired by dedicated charge-coupled devices (CCDs), hence the identification of defects from laser damage images is essential. Despite great efforts to improve the imaging results, defect identification remains a challenging task. The proposed research focuses on the pre-processing of laser damage images, which assists in identifying optical defects. We formulate the image pre-processing as a total variation (TV) based image reconstruction problem, and further develop an alternating direction method of multipliers (ADMM) algorithm to solve it. The use of TV regularization makes the pre-processed image sharper by preserving edges and boundaries more accurately. Experimental results demonstrate the effectiveness of this method.
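For reference, the textbook (anisotropic) form of such a TV reconstruction problem and its ADMM iteration can be sketched as below; the abstract does not state the authors' exact data term, TV variant or stopping rule, so this is the generic formulation rather than their precise algorithm.

```latex
\min_{x}\; \tfrac{1}{2}\|x - y\|_2^2 + \lambda \|Dx\|_1
```

Introducing the splitting $z = Dx$ with scaled dual variable $u$, ADMM iterates

```latex
\begin{aligned}
x^{k+1} &= \arg\min_{x}\; \tfrac{1}{2}\|x - y\|_2^2
           + \tfrac{\rho}{2}\|Dx - z^{k} + u^{k}\|_2^2,\\
z^{k+1} &= \mathcal{S}_{\lambda/\rho}\big(Dx^{k+1} + u^{k}\big),\\
u^{k+1} &= u^{k} + Dx^{k+1} - z^{k+1},
\end{aligned}
```

where $y$ is the recorded image, $D$ the discrete gradient operator, $\mathcal{S}_{\lambda/\rho}$ elementwise soft-thresholding, and $\rho$ the ADMM penalty parameter.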
Slides TUCPA05 [0.538 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-TUCPA05
TUCPA06 | SwissFEL - Beam Synchronous Data Acquisition - The First Year | 276
The SwissFEL beam-synchronous data acquisition system is based on several novel concepts and technologies. It targets immediate data availability and online processing, and is capable of assembling an overall data view of the whole machine thanks to its distributed and scalable back-end. Load on data sources is reduced by streaming data immediately as it becomes available. The streaming technology used provides load balancing and fail-over by design. Data channels from various sources can be efficiently aggregated and combined into new data streams for immediate online monitoring, data analysis and processing. The system is dynamically configurable, various acquisition frequencies can be enabled, and data can be kept for a defined time window. All data is available and accessible, enabling advanced pattern detection and correlation during acquisition time. Accessing the data in a code-agnostic way is also possible through the same REST API that is used by the web front-end. We give an overview of the design and special features of the system, and discuss the findings and problems we faced during machine commissioning.
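A hedged sketch of the code-agnostic REST access the abstract mentions: POST a JSON query for a channel over a time range and read JSON back. The endpoint, query schema and channel name below are hypothetical placeholders, not the documented SwissFEL API.

```python
import requests

QUERY = {
    "channels": ["SARFE10-EXAMPLE:SIGNAL"],         # hypothetical channel name
    "range": {"startSeconds": 0, "endSeconds": 10},
}
resp = requests.post("https://daq.example.psi.ch/api/query", json=QUERY)
resp.raise_for_status()
for channel_data in resp.json():                    # assumed JSON layout
    print(channel_data)
```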
Slides TUCPA06 [5.107 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-TUCPA06
TUMPA08 | The Automatic Quench Analysis Software for the High Luminosity LHC Magnets Evaluation at CERN | 357
The superconducting magnet test facility at CERN (SM18) has been using the Automatic Quench Analysis (AQA) software to analyse quench data during the Large Hadron Collider (LHC) magnet test campaign. This application was developed in LabVIEW in the early 2000s by the Measurement Test and Analysis (MTA) section at CERN. During the last few years, SM18 has been upgraded for the High Luminosity LHC (HL-LHC) magnet prototypes. These HL-LHC magnets demand high flexibility from the software. The new requirements were that the analysis algorithms should be open, allowing contributions from engineers and physicists with basic programming knowledge; that a large number of tests should be executed automatically; that reports should be generated; and that the software should be maintainable by the MTA team. The paper contains the description, present status and future evolution of the new AQA software that replaces the LabVIEW application.
Slides TUMPA08 [1.433 MB]
Poster TUMPA08 [1.945 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-TUMPA08
TUPHA029 | Live Visualisation of Experiment Data at ISIS and the ESS | 431
As part of the UK's in-kind contribution to the European Spallation Source (ESS), ISIS is working alongside the ESS and other partners to develop a new data streaming system for managing and distributing neutron experiment data. The new data streaming system is based on the open-source distributed streaming platform Apache Kafka. A central requirement of the system is to be able to supply live experiment data for processing and visualisation in near real-time via the Mantid data analysis framework. A basic TCP socket-based data streaming system already exists at ISIS, but it has limitations in terms of scalability, reliability and functionality. The intention is for the new Kafka-based system to replace the existing system at ISIS. This migration will not only provide enhanced functionality for ISIS but also an opportunity for developing and testing the system prior to use at the ESS.
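A minimal consumer for such a Kafka-based stream, using the kafka-python client, might look as follows; the broker address and topic name are placeholders, and the payload format is only hinted at (the real system defines its own serialisation schemas).

```python
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "instrument_events",                     # hypothetical topic name
    bootstrap_servers="broker.example:9092",
    auto_offset_reset="latest",              # live view: start at the tip
    enable_auto_commit=True,
)
for message in consumer:
    # Each message would carry a serialised event or metadata batch.
    print(message.topic, message.offset, len(message.value), "bytes")
```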
Poster TUPHA029 [0.644 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-TUPHA029
TUPHA030 | Using AI in the Fault Management Predictive Model of the SKA TM Services: A Preliminary Study | 435
SKA (Square Kilometre Array) is a project that aims to build a very large radio telescope, composed of thousands of antennae and related support systems. The overall orchestration is performed by the Telescope Manager (TM), a suite of software applications. In order to ensure the proper and uninterrupted operation of TM, a local monitoring and control system is being developed, called TM Services. Fault Management (FM) is one of these services, comprising the processes and infrastructure associated with detecting, diagnosing and fixing faults, and finally returning to normal operations. The aim of this study is to introduce artificial intelligence algorithms during the detection phase and to build a predictive model, based on the history and statistics of the system, in order to perform trend analysis and failure prediction. Based on monitoring data and health status detected by the software system monitor, and on log files gathered by the ELK (Elasticsearch, Logstash, and Kibana) server, the predictive model ensures that the system is operating within its normal operating parameters and takes corrective actions in case of failure.
Poster TUPHA030 [2.851 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-TUPHA030
TUPHA031 | The Alarm and Downtime Analysis Development in the TLS | 439
TLS (Taiwan Light Source) is a 1.5 GeV synchrotron light source at NSRRC which has been operating for users for more than twenty years. Many toolkits have been developed to determine downtime responsibility and work out solutions. A new alarm system with an EPICS interface has also been incorporated into these toolkits to give advance warning and protect user time from machine failures. These toolkits have been tested and refined at the TLS and enhance beam availability. The related operational experience will be migrated to the TPS (Taiwan Photon Source) in the future, after long-term operation and big-data statistics. The analysis and implementation results are reported in this paper.
Poster TUPHA031 [0.930 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-TUPHA031
TUPHA033 | Availability Analysis and Tuning Tools at the Light Source Bessy II | 446
Funding: Work supported by the German Bundesministerium für Bildung und Forschung, Land Berlin and grants of the Helmholtz Association.
The 1.7 GeV light source BESSY II features about 50 beamlines, overbooked by a factor of 2 on average. Thus the availability of high-quality synchrotron radiation (SR) is a central asset. SR users at BESSY II can base their beam time expectations on numbers generated according to the common operation metrics [*]. Major failures of the facility are analyzed according to [*] and displayed in real time; analyses of minor detriments are provided regularly by offline tools. Many operational constituents are required for extraordinary availability figures: meaningful alarming and dissemination of notifications; complete logging of program, device, system and operator activities; post-mortem analysis and data mining tools. Preventive and corrective actions are enabled by rigorous root cause analysis based on accurate eLog entries, trouble ticketing and consistent failure classifications. This paper describes the tool sets, developments, their implementation status and some showcase results at BESSY II.
* A. Luedeke, M. Bieler, R.H.A. Farias, S. Krecic, R. Mueller, M. Pont, and M. Takao, "Common operation metrics for storage ring light sources", Phys. Rev. Accel. Beams 19, 082802 (2016).
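As a sketch of the common operation metrics referenced above [*], the helper below computes availability, MTBF and MTTR from the scheduled user time and a list of failure durations. The formulas follow the usual definitions; consult the cited paper for the exact conventions used at BESSY II.

```python
def operation_metrics(scheduled_hours, failure_durations_hours):
    """Availability, MTBF and MTTR from scheduled time and failure durations."""
    downtime = sum(failure_durations_hours)
    n = len(failure_durations_hours)
    availability = 1.0 - downtime / scheduled_hours
    mtbf = scheduled_hours / n if n else float("inf")
    mttr = downtime / n if n else 0.0
    return availability, mtbf, mttr

avail, mtbf, mttr = operation_metrics(5000.0, [2.5, 0.75, 6.0, 1.25])
print(f"availability={avail:.3%}  MTBF={mtbf:.0f} h  MTTR={mttr:.1f} h")
```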
Poster TUPHA033 [3.025 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-TUPHA033
TUPHA034 | SCADA Statistics Monitoring Using the Elastic Stack (Elasticsearch, Logstash, Kibana) | 451
The Industrial Controls and Safety Systems group at CERN, in collaboration with other groups, has developed and currently maintains around 200 controls applications in domains such as LHC magnet protection, cryogenics and electrical network supervision. Millions of value changes and alarms from many devices are archived to a centralised Oracle database, but it is not easy to obtain high-level statistics from such an archive. A system based on the Elastic Stack has been implemented to provide easy access to these statistics. It provides aggregated statistics based on the number of value changes and alarms, classified according to several criteria such as time, application domain, system and device. The system can be used, for example, to detect abnormal situations and alarm misconfiguration. In addition to these statistics, each application generates text-based log files which are parsed, collected and displayed using the Elastic Stack, providing centralised access to all the application logs. Further work will explore the possibilities of combining the statistics and logs to better understand the behaviour of CERN's controls applications.
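The kind of aggregated statistic described above can be obtained with a terms/date-histogram aggregation through the official Elasticsearch Python client, as in this sketch (7.x-style API; the index and field names are hypothetical):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://elastic.example:9200")    # placeholder host
body = {
    "size": 0,                                       # aggregations only
    "aggs": {
        "per_domain": {
            "terms": {"field": "domain.keyword"},
            "aggs": {"per_day": {"date_histogram": {
                "field": "@timestamp", "calendar_interval": "day"}}},
        }
    },
}
result = es.search(index="scada-value-changes", body=body)
for bucket in result["aggregations"]["per_domain"]["buckets"]:
    print(bucket["key"], bucket["doc_count"])
```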
Poster TUPHA034 [5.094 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-TUPHA034
TUPHA035 | Data Analytics Reporting Tool for CERN SCADA Systems | 456
This paper describes the concept of a generic data analytics reporting tool for SCADA (Supervisory Control and Data Acquisition) systems at CERN. The tool is a response to a growing demand for smart solutions in the supervision and analysis of control system data. Large-scale data analytics is a rapidly advancing field, but simply performing the analysis is not enough; the results must be made available to the appropriate users (for example, operators and process engineers). The tool can report data analytics for objects such as valves and PID controllers directly into the SCADA systems used for operations. More complex analyses involving process interconnections (such as correlation analysis based on machine learning) can also be displayed. A pilot project is being developed for the WinCC Open Architecture (WinCC OA) SCADA system, using Hadoop for storage. The reporting tool obtains the metadata and analysis results from Hadoop using Impala, but can easily be switched to any database system that supports SQL standards.
Poster TUPHA035 [1.016 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-TUPHA035
THMPA08 | Processing of the Schottky Signals at RHIC | 1327
Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-SC0012704 with the U.S. Department of Energy.
Schottky monitors are used to determine important beam parameters in a non-destructive way. In this paper we present improved processing of the transverse and longitudinal Schottky signals from a high-Q resonant 2.07 GHz cavity and of transverse signals from a low-Q 245 MHz cavity, with the main focus on providing real-time measurement of beam tune, chromaticity and emittance during injection and ramp, when the beam conditions are changing rapidly. The analysis and control are done in Python using recently developed interfaces to Accelerator Device Objects.
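One standard ingredient behind such monitors is locating the betatron line in the spectrum of a transverse signal; the sketch below estimates a fractional tune that way, on synthetic data with a RHIC-like revolution frequency. Real Schottky processing (revolution-harmonic sidebands, chromaticity from band shapes) is considerably more involved, so treat this as a toy illustration only.

```python
import numpy as np

def fractional_tune(signal, frev, fs):
    """Locate the strongest spectral line and express it in tune units."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    f_peak = freqs[np.argmax(spectrum)]
    return (f_peak % frev) / frev

fs, frev, q = 1.0e6, 78.0e3, 0.22    # sample rate, revolution freq., true tune
t = np.arange(200_000) / fs
sig = np.sin(2 * np.pi * q * frev * t) + 0.5 * np.random.randn(t.size)
print(f"measured fractional tune ~ {fractional_tune(sig, frev, fs):.3f}")
```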
Slides THMPA08 [0.158 MB]
Poster THMPA08 [0.726 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-THMPA08
THPHA030 | Online Analysis for Anticipated Failure Diagnostics of the CERN Cryogenic Systems | 1412
The cryogenic system is one of the most critical components of the CERN Large Hadron Collider (LHC) and its associated experiments ATLAS and CMS. In past years, the cryogenic team has improved the maintenance plans and operation procedures and achieved very high reliability. However, as the recovery time after failure remains the major issue for cryogenic availability, new developments must take place. A new online diagnostic tool is being developed to identify and anticipate failures of cryogenic field equipment, based on the knowledge acquired in dynamic simulation of cryogenic equipment and on previous data analytics studies. After having identified the most critical components, we will develop their associated models together with the signatures of their failure modes. The proposed tools will detect deviations between the actual systems and their models or identify preliminary failure signatures. This information will allow the operation team to take early mitigating actions before the failure occurs.
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-THPHA030
THPHA031 | Fast Image Analysis for Beam Profile Measurement at the European XFEL | 1416
At the European XFEL, images of scintillator screens are processed at a rate of 10 Hz. Dedicated image analysis servers are used for transverse beam profile analysis as well as for longitudinal profile and slice emittance measurements. This contribution describes the setup and the algorithms used for image analysis.
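A common ingredient of such screen-image analysis is projecting the image onto an axis and fitting a Gaussian to obtain centre and width; the sketch below does exactly that with SciPy on synthetic data. The actual servers' algorithms are not specified in the abstract.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, a, mu, sigma, offset):
    return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + offset

def profile_fit(image, axis=0):
    """Project the image along `axis` and fit a Gaussian to the profile."""
    profile = image.sum(axis=axis).astype(float)
    x = np.arange(profile.size)
    p0 = [np.ptp(profile), float(profile.argmax()), profile.size / 10.0,
          float(profile.min())]
    popt, _ = curve_fit(gauss, x, profile, p0=p0)
    return popt[1], abs(popt[2])             # centre and rms width in pixels

img = np.random.poisson(3.0, (480, 640)).astype(float)
img += 200.0 * gauss(np.arange(640), 1.0, 320.0, 25.0, 0.0)[None, :]
print("centre, width:", profile_fit(img, axis=0))
```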
Poster THPHA031 [1.161 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-THPHA031
THPHA032 | EPICS and Open Source Data Analytics Platforms | 1420
SKA-scale distributed control and monitoring systems present challenges in hardware sensor monitoring, archiving, hardware fault detection and fault prediction. The size and scale of the hardware involved, and the telescope's high-availability requirements, suggest that machine learning and other automated methods will be required for fault finding and fault prediction of hardware components. Modern tools are needed that leverage open-source time-series databases and data analytics platforms. We describe DiaMoniCA for the Australian SKA Pathfinder radio telescope, which integrates EPICS and our own monitoring archiver MoniCA with an open-source time-series database and a web-based data visualisation and analytics platform.
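The glue implied by the abstract, sampling EPICS PVs and forwarding readings to a time-series store, can be sketched with pyepics as below. The PV names are hypothetical and the line-protocol print is a stand-in for a real database write; DiaMoniCA/MoniCA's own archiver pipeline is more elaborate.

```python
import time
from epics import caget          # pyepics Channel Access client

PVS = ["mon:ant01:driveTemp", "mon:ant01:rackTemp"]   # hypothetical PVs

while True:
    stamp = time.time_ns()
    for pv in PVS:
        value = caget(pv, timeout=1.0)
        if value is not None:
            # InfluxDB-style line-protocol record for a time-series database
            print(f"monitoring,pv={pv} value={value} {stamp}")
    time.sleep(10)               # sample every 10 s
```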
Poster THPHA032 [7.517 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-THPHA032
THPHA033 | Development of Status Analysis System Based on ELK Stack at J-PARC MLF | 1423
In recent neutron scattering experiments, a large quantity and a wide variety of experimental data are generated. At J-PARC MLF, it is possible to conduct many experiments under various conditions in a short time, thanks to the high-intensity neutron beam and high-performance neutron instruments with a wealth of sample environment equipment. Efficient and effective data analysis is therefore required. Additionally, since almost nine years have passed since the beginning of operation of the MLF, much equipment and many systems are due for renewal, with failures resulting from aging degradation. Since such failures can cost precious beam time, failures or their early signs should be detected promptly. The MLF status analysis system, based on the Elasticsearch, Logstash and Kibana (ELK) Stack, a web-based framework growing rapidly for big data analysis, ingests various data from neutron instruments in real time. It provides insight for decision-making in data analysis and experiments as well as in instrument maintenance, through flexible user-based analysis and visualization. In this paper, we report the overview and development status of our status analysis system.
Poster THPHA033 [0.690 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-THPHA033
THPHA034 | The Study of Big Data Tools Usages in Synchrotrons | 1428
In today's world, plenty of data is being generated from various sources in different areas across economics, engineering and science. For instance, accelerators are able to generate 3 PB of data in just one experiment. The synchrotron community exemplifies the volume and velocity of data: the data is too big to be analyzed at once. While some light sources can deal with 11 PB, they are confronted with data problems. The explosion of data has become an important and serious issue in today's synchrotron world. These data problems arise in different fields such as storage, analytics, visualisation, monitoring and control. To overcome them, facilities turn to HDF5, grid computing, cloud computing, Hadoop/HBase and NoSQL. Recently, big data has attracted a lot of attention from academia and industry. We are looking for an appropriate and feasible solution for the data issues at ILSF. Hadoop and other up-to-date tools and components are being considered as a stable solution. In this paper, we evaluate big data tools and techniques tested at various light sources around the world for beamline data, studying the storage and analytics aspects.
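HDF5 is one of the tools the abstract lists; a minimal h5py sketch of the usual beamline pattern, writing and reading a chunked, compressed stack of detector frames, looks like this (file, group and attribute names are arbitrary):

```python
import h5py
import numpy as np

frames = np.random.poisson(2.0, (100, 256, 256)).astype("uint16")

with h5py.File("scan_0001.h5", "w") as f:
    dset = f.create_dataset(
        "entry/data/frames", data=frames,
        chunks=(1, 256, 256),                 # one frame per chunk
        compression="gzip", compression_opts=4,
    )
    dset.attrs["detector"] = "example-detector"

with h5py.File("scan_0001.h5", "r") as f:
    frame = f["entry/data/frames"][42]        # random access to one frame
    print(frame.shape, frame.dtype)
```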
Poster THPHA034 [1.345 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-THPHA034
THPHA035 | High Level Control System Code with Automatic Parametric Characterization Capabilities | 1432
Several degrees of freedom have been introduced in the design of the proton source (named PS-ESS) and of the Low Energy Beam Transport line (LEBT) developed at INFN-LNS for the European Spallation Source (ESS) project. The beam commissioning was focused on the most important working parameters in order to optimize the beam production performance, taking into account the ESS accelerator requirements. A custom MATLAB code able to interact with the EPICS control system framework was developed to make the best use of the short time available for beam commissioning. The code was used as an additional high-level control system layer able to change all source parameters and read all beam diagnostics output data. More than four hundred thousand configurations have been explored over a wide range of working parameters. The ability to connect MATLAB to EPICS also enabled the development of a genetic-algorithm optimization code able to automatically tune the source towards a precise current value and stability. A dedicated graphical tool was developed for the data analysis. Unexpected benefits that emerged from this approach will be shown in this paper.
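A toy sketch of the genetic-algorithm idea: evolve a vector of source parameters toward a target beam current, with `measure_current` standing in for the set-parameters-then-read-diagnostics round trip through the control system. Everything here is a hypothetical stand-in for the MATLAB/EPICS code of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def measure_current(params):
    """Stand-in for writing parameters to the source and reading the current."""
    return 70.0 - float(np.sum((params - 0.3) ** 2))   # fake response in mA

def evolve(n_params=5, pop=40, generations=30, target=68.0):
    population = rng.uniform(0.0, 1.0, (pop, n_params))
    for _ in range(generations):
        error = np.array([abs(measure_current(p) - target) for p in population])
        elite = population[np.argsort(error)[: pop // 4]]      # best quarter
        parents = elite[rng.integers(0, len(elite), (pop, 2))]
        population = parents.mean(axis=1)                      # crossover
        population += rng.normal(0.0, 0.05, population.shape)  # mutation
    return min(population, key=lambda p: abs(measure_current(p) - target))

print("best parameters:", np.round(evolve(), 3))
```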
Poster THPHA035 [1.420 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-THPHA035