Keyword: distributed
Paper Title Other Keywords Page
MOA3O02 The Large Scale European XFEL Control System: Overview and Status of the Commissioning controls, undulator, cryogenics, software 5
 
  • R. Bacher, A. Aghababyan, P.K. Bartkiewicz, T. Boeckmann, B. Bruns, M.R. Clausen, T. Delfs, P. Duval, L. Fröhlich, W. Gerhardt, C. Gindler, J. Hatje, O. Hensler, J.M. Jäger, R. Kammering, S. Karstensen, H. Keller, V. Kocharyan, O. Korth, A. Labudda, T. Limberg, S.M. Meykopff, M. Möller, J. Penning, A. Petrosyan, G. Petrosyan, L.P. Petrosyan, V. Petrosyan, P. Pototzki, K.R. Rehlich, S. Rettig-Labusga, H.R. Rickens, G. Schlesselmann, B. Schoeneburg, E. Sombrowski, M. Staack, C. Stechmann, J. Szczesny, J. Wilgen, T. Wilksen, H. Wu
    DESY, Hamburg, Germany
  • S. Abeghyan, A. Beckmann, D. Boukhelef, N. Coppola, S.G. Esenov, B. Fernandes, P. Gessler, G. Giambartolomei, S. Hauf, B.C. Heisen, S. Karabekyan, M. Kumar, L.G. Maia, A. Parenti, A. Silenzi, H. Sotoudi Namin, J. Szuba, M. Teichmann, J. Tolkiehn, K. Weger, J. Wiggins, K. Wrona, M. Yakopov, C. Youngman
    XFEL.EU, Hamburg, Germany
 
  The European XFEL is a 3.4 km long X-ray free-electron laser in the final construction and commissioning phase in Hamburg. It will produce 27000 bunches per second at 17.5 GeV. In early 2015 a first electron beam was produced in the RF photoinjector, and the commissioning of consecutive sections follows during this year and the next. The huge number and variety of devices for the accelerator, beam line, experiment, cryogenic and facility systems pose a challenging control task. Multiple systems, including industrial solutions, must be interfaced to each other. The high number of bunches requires tight time synchronization (down to picoseconds) and high-performance data acquisition systems. Fast feedback from the front-ends, the DAQs and the online analysis system, with seamless integration of controls, is essential for the accelerator and the initial six experimental end stations. The European XFEL will be the first installation exceeding 2500 FPGA components in the MicroTCA form factor and will run one of the largest PROFIBUS networks. Many subsystem prototypes are already successfully in operation. An overview and status of the XFEL control system is given.
Slides MOA3O02 [3.105 MB]
 
TUA3O01 Detector Controls Meets JEE on the Web controls, interface, detector, experiment 513
 
  • F. Glege, A. Andronidis, O. Chaze, C. Deldicque, M. Dobson, A.D. Dupont, D. Gigi, J. Hegeman, O. Holme, M. Janulis, R.J. Jiménez Estupiñán, L. Masetti, F. Meijers, E. Meschi, S. Morovic, C. Nunez-Barranco-Fernandez, L. Orsini, A. Petrucci, A. Racz, P. Roberts, H. Sakulin, C. Schwick, B. Stieger, S. Zaza, P. Zejdl
    CERN, Geneva, Switzerland
  • J.M. Andre, R.K. Mommsen, V. O'Dell
    Fermilab, Batavia, Illinois, USA
  • U. Behrens
    DESY, Hamburg, Germany
  • J. Branson, S. Cittolin, A. Holzner, M. Pieri
    UCSD, La Jolla, California, USA
  • G.L. Darlea, G. Gomez-Ceballos, C. Paus, J. Veverka
    MIT, Cambridge, Massachusetts, USA
  • S. Erhan
    UCLA, Los Angeles, California, USA
 
  Remote monitoring and control have been important aspects of physics detector controls since they first became available. Due to the complexity of the systems, the 24/7 running requirements and limited human resources, remote access to perform interventions is essential. The amount of data to visualize, the required visualization types and cybersecurity standards demand a professional, complete solution. Using the integration of the CMS detector control system into our ORACLE WebCenter infrastructure as an example, the mechanisms and tools available for integration with control systems are discussed. Authentication has been delegated to WebCenter, and authorization is shared between the web server and the control system. Session handling exists in both systems and has to be matched. Concurrent access by multiple users has to be handled. The underlying JEE infrastructure is specialized in visualization and information sharing. On the other hand, the structure of a JEE system resembles a distributed control system. An outlook is therefore given on tasks which could be covered by the web servers rather than the control system.
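  As a conceptual sketch of the split described in this abstract, the snippet below delegates authentication to the web tier (an unexpired session token is taken as proof of identity) while authorization stays with the control system. All class, role and user names are invented for illustration; this is not the CMS or WebCenter API.

```python
# Conceptual sketch (not CMS code): authentication lives in the web
# tier, authorization in the control system, and the two are matched
# per request. All names here are hypothetical.
from dataclasses import dataclass
import time

@dataclass
class WebSession:
    user: str
    token: str
    expires: float  # epoch seconds, owned by the JEE/WebCenter tier

class ControlSystemACL:
    """Authorization stays with the control system: it maps an
    authenticated user name to the operations that user may perform."""
    def __init__(self):
        self._rights = {"shifter": {"read"}, "expert": {"read", "write"}}
        self._roles = {}          # user -> role, e.g. filled from LDAP

    def grant(self, user: str, role: str):
        self._roles[user] = role

    def may(self, user: str, operation: str) -> bool:
        return operation in self._rights.get(self._roles.get(user), set())

def authorize(web_session: WebSession, acl: ControlSystemACL, op: str) -> bool:
    # Authentication is delegated to the web tier: an unexpired session
    # is taken as proof of identity here.
    if time.time() >= web_session.expires:
        return False              # web session expired; force re-login
    return acl.may(web_session.user, op)

acl = ControlSystemACL()
acl.grant("alice", "expert")
s = WebSession(user="alice", token="abc123", expires=time.time() + 3600)
print(authorize(s, acl, "write"))   # True
```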
Slides TUA3O01 [2.611 MB]
 
WEM301 Timing Systems for ATNF Telescopes timing, software, controls, site 660
 
  • S.A. Hoyle
    CASS, Epping, Australia
  • P.L. Mirtschin
    CSIRO ATNF, Epping, Australia
 
  Radio telescopes require precise time and timing signals for accurate telescope pointing, synchronisation of signal-processing instrumentation and offline manipulation of observation data. We provide an overview of the timing systems in use at our observatories, briefly describing the main features of the hardware, firmware and software.
Slides WEM301 [0.568 MB]
Poster WEM301 [0.468 MB]
 
WEPGF041 Monitoring Mixed-Language Applications with Elastic Search, Logstash and Kibana (ELK) LabView, interface, network, framework 786
 
  • O.Ø. Andreassen, C. Charrondière, A. De Dios Fuente
    CERN, Geneva, Switzerland
 
  Application logging and system diagnostics are nothing new. Ever since the first computers, scientists and engineers have been storing information about their systems, making it easier to understand what is going on and, in case of failures, what went wrong. Unfortunately, there are as many different standards as there are file formats, storage types, locations, operating systems, etc. Recent developments in web technology and storage have made it much simpler to gather all this information in one place and dynamically adapt the display. With the introduction of Logstash, with Elasticsearch as a backend, we store, index and query data, making it possible to display and manipulate it in whatever form one wishes. With Kibana as a generic and modern web interface on top, the information can be adapted at will. In this paper we show how we can process almost any type of structured or unstructured data source. We also show how data can be visualised and customised on a per-user basis and how the system scales when the data volume grows.
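  As a rough illustration of the pipeline this abstract describes, the sketch below stores one structured log event in Elasticsearch and queries it back the way a Kibana panel would, using the official Python client. The host, index name and field values are illustrative assumptions, not values from the paper; unstructured sources would first pass through Logstash filters to extract such fields.

```python
# Minimal ELK-style round trip with the official Python client
# (pip install elasticsearch; 8.x-style API shown).
from datetime import datetime, timezone
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")   # assumed local instance

# Store one structured log event; unstructured text would go through
# Logstash filters (e.g. grok) first to extract fields like these.
doc = {
    "@timestamp": datetime.now(timezone.utc).isoformat(),
    "application": "labview-frontend",   # hypothetical source name
    "level": "ERROR",
    "message": "DAQ buffer overflow on channel 3",
}
es.index(index="app-logs", document=doc)

# Query the index the way a Kibana panel would: all ERROR events.
hits = es.search(index="app-logs",
                 query={"match": {"level": "ERROR"}})
for h in hits["hits"]["hits"]:
    print(h["_source"]["@timestamp"], h["_source"]["message"])
```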
Poster WEPGF041 [3.848 MB]
 
WEPGF047 Smooth Migration of CERN Post Mortem Service to a Horizontally Scalable Service controls, framework, dumping, operation 806
 
  • J.C. Garnier, C. Aguilera-Padilla, S. Boychenko, M. Dragu, M.A. Galilée, M. Koza, K.H. Krol, T. Martins Ribeiro, R. Orlandi, M.C. Poeschl, M. Zerlauth
    CERN, Geneva, Switzerland
 
  The Post Mortem service for CERN's accelerator complex stores and analyses transient data recordings of various equipment systems following certain events, such as a beam dump or a magnet quench. The main purpose of this framework is to provide fast and reliable diagnostics to the equipment experts and operation crews, to decide whether accelerator operation can continue safely or whether an intervention is required. While the Post Mortem system was initially designed to serve CERN's Large Hadron Collider (LHC), its scope has rapidly been extended to also include External Post-Operational Checks and Injection Quality Checks in the LHC and its injector complex. These new use cases impose more stringent time constraints on the storage and analysis of data, calling for a migration of the system towards better scalability in terms of storage capacity as well as I/O throughput. This paper presents an overview of the current service, the ongoing investigations and plans towards a scalable data storage solution and API, as well as the proposed strategy to ensure an entirely smooth transition for the current Post Mortem users.
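  As a conceptual illustration of the service's data flow, the sketch below stores transient recordings keyed by event and derives a simple "safe to continue" verdict. All names and the analysis rule are invented; they do not reflect the actual CERN Post Mortem API.

```python
# Conceptual sketch only: an event-keyed store for transient recordings
# and a check that yields an "OK to continue" verdict. Class and field
# names are hypothetical, not the CERN API.
from collections import defaultdict

class PostMortemStore:
    def __init__(self):
        # event id -> {equipment system -> list of samples}
        self._events = defaultdict(dict)

    def store(self, event_id: str, system: str, samples: list[float]):
        self._events[event_id][system] = samples

    def analyse(self, event_id: str, limit: float) -> bool:
        """Return True if all recorded signals stayed within |limit|,
        i.e. operation may continue without intervention."""
        recordings = self._events[event_id]
        return all(abs(x) <= limit for s in recordings.values() for x in s)

store = PostMortemStore()
store.store("beam-dump-001", "magnet-quench-protection", [0.1, 0.4, 0.2])
store.store("beam-dump-001", "beam-loss-monitors", [0.0, 0.9, 0.3])
print(store.analyse("beam-dump-001", limit=1.0))   # True -> continue
```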
Poster WEPGF047 [1.454 MB]
 
WEPGF052 Development of the J-PARC Time-Series Data Archiver using a Distributed Database System, II database, EPICS, hardware, status 818
 
  • N. Kikuzawa, A. Yoshii
    JAEA/J-PARC, Tokai-Mura, Naka-Gun, Ibaraki-Ken, Japan
  • H. Ikeda, Y. Kato
    JAEA, Ibaraki-ken, Japan
 
  The linac and the RCS of J-PARC (Japan Proton Accelerator Research Complex) have over 64000 EPICS records, providing enormous amounts of data for controlling a large variety of equipment. The data have been collected into PostgreSQL, which we are planning to replace with HBase and Hadoop, a well-known distributed database and the distributed file system it depends on. At the previous conference we reported the construction of an archive system with a new version of HBase and Hadoop that eliminates the single point of failure, although some issues remained to be resolved before moving into a practical phase. In order to revise the system and resolve these issues, we have been reconstructing it: replacing the master nodes with reinforced hardware, creating a kickstart file and scripts to set up a node automatically, and introducing a monitoring tool to reliably detect flaws early. In this paper these methods are reported, and performance tests of the new system, with the corresponding tuning of some HBase and Hadoop parameters, are also examined and reported.
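  As a sketch of how such time-series data might land in HBase, the snippet below writes and scans samples through the Thrift gateway with the happybase client. The host, table name, column family and row-key layout are assumptions for illustration; the actual J-PARC schema is not described in the abstract.

```python
# Time-series archiving into HBase via the Thrift gateway, using the
# happybase client (pip install happybase). Schema is an assumption.
import time
import happybase

connection = happybase.Connection("hbase-master.example.org")  # assumed host
table = connection.table("epics_archive")                      # assumed table

# A common time-series row-key pattern: PV name plus a zero-padded
# millisecond timestamp, so scans over one PV are contiguous.
pv, value = "LINAC:RFQ:VAC:PRESSURE", "1.2e-6"     # hypothetical PV
row_key = f"{pv}#{int(time.time() * 1000):013d}".encode()
table.put(row_key, {b"d:value": value.encode()})

# Range scan: all samples of this PV (prefix scan over the row keys).
for key, data in table.scan(row_prefix=f"{pv}#".encode()):
    print(key.decode(), data[b"d:value"].decode())
```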
 
WEPGF072 Parameters Tracking and Fault Diagnosis Based on NoSQL Database at SSRF hardware, storage-ring, injection, database 873
 
  • Y.B. Yan, Z.C. Chen, L.W. Lai, Y.B. Leng
    SINAP, Shanghai, People's Republic of China
 
  For a user facility, reliability and stability are very important. Besides using high-reliability hardware, rapid fault diagnosis, data mining and predictive analytics are also effective ways to improve the efficiency of the accelerator. A beam data logging system based on a NoSQL database was built at SSRF. The logging system stores beam parameters under predefined conditions. The details of the system are reported in this paper.
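  A minimal sketch of condition-triggered logging into a NoSQL store follows. The abstract does not name the database used at SSRF, so MongoDB via pymongo is chosen here purely as an example; the trigger condition and field names are likewise assumptions.

```python
# Illustrative sketch of condition-triggered beam-parameter logging into
# a NoSQL store (pip install pymongo). Database choice, host, and all
# field names are assumptions, not the SSRF implementation.
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # assumed instance
log = client["ssrf"]["beam_log"]                    # hypothetical names

def log_if_triggered(params: dict, drop_threshold_ma: float = 1.0):
    """Store a snapshot only under a predefined condition, e.g. a
    sudden beam-current drop that may indicate an injection fault."""
    if params["current_drop_ma"] >= drop_threshold_ma:
        log.insert_one({"time": datetime.now(timezone.utc), **params})

log_if_triggered({"current_drop_ma": 2.5, "current_ma": 237.1,
                  "lifetime_h": 14.2})              # stored
log_if_triggered({"current_drop_ma": 0.1, "current_ma": 239.5,
                  "lifetime_h": 14.3})              # below threshold, ignored
```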
 
WEPGF115 LabVIEW EPICS Program for Measuring BINP HLS of PAL-XFEL controls, LabView, EPICS, hardware 966
 
  • H. J. Choi, K.H. Gil, H.-S. Kang, S.H. Kim, K.W. Seo, Y.J. Suh
    PAL, Pohang, Kyungbuk, Republic of Korea
 
  At PAL-XFEL, a 4th-generation light source, the ultrasonic-type hydrostatic levelling system (HLS) developed at BINP (Budker Institute of Nuclear Physics) in Russia was installed and operated in all parts of the facility in order to observe vertical changes of the building floor caused by ground sinking and uplift. For this, an HLS measuring program was written using NI LabVIEW, and an EPICS IOC server was built using CA Lab, which has been developed at BESSY (Berlin Electron Storage Ring Society for Synchrotron Radiation) in Germany. CA Lab was improved and verified to confirm that it supports the EPICS Base V3.14.12 libraries and the EPICS CA client, and that an EPICS IOC server can easily be constructed with CA Lab in 64-bit LabVIEW. This allows the multi-core CPU resources of a 64-bit computer system (64-bit hardware, 64-bit Windows, multi-threaded 64-bit LabVIEW) to be fully utilized. This study proposes a configuration process for the HLS measuring program algorithm and a building process for the EPICS IOC server using CA Lab.
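  The paper's IOC runs inside LabVIEW via CA Lab, and any Channel Access client can read the PVs it publishes. As a language-neutral stand-in, the sketch below performs an equivalent read-and-monitor in Python with pyepics; the PV name is invented and is not PAL-XFEL's naming convention.

```python
# Reading and monitoring an HLS level PV over Channel Access with
# pyepics (pip install pyepics). The PV name is hypothetical.
import epics

pv_name = "HLS:UNDULATOR:SEC01:LEVEL"        # invented PV name
level = epics.caget(pv_name)                 # one-shot read (e.g. mm)
print(f"{pv_name} = {level}")

def on_change(pvname=None, value=None, **kws):
    # Invoked by the CA client library on every monitor update.
    print(f"{pvname} changed to {value}")

# Subscribe to changes, as a drift-monitoring display would.
pv = epics.PV(pv_name, auto_monitor=True, callback=on_change)
epics.poll(evt=1.0)                          # let callbacks run briefly
```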
 
WEPGF153 Karabo-GUI: A Multi-Purpose Graphical Front-End for the Karabo Framework GUI, controls, data-acquisition, software 1063
 
  • M. Teichmann, B.C. Heisen, K. Weger, J. Wiggins
    XFEL.EU, Hamburg, Germany
 
  The Karabo GUI is a generic graphical user interface (GUI) currently being developed at European XFEL GmbH. It allows the complete management of the Karabo distributed control and data acquisition system. Remote applications (devices) can be instantiated, operated and terminated. Devices are listed in a live navigation view, and from the self-description inherent to every device a default configuration panel is generated. The user may combine interrelated components into one project; such a project includes persisted device configurations, custom control panels and macros. Expert panels can be built by intermixing static graphical elements with dynamic widgets connected to parameters of the distributed system. The same panel can also be used to graphically configure and execute data analysis workflows. Other features include an embedded IPython scripting console, logging, notification and alarm handling. The GUI is user-centric and restricts display and editing capabilities according to the user's role and the current device state. It is based on PyQt technology and acts as a thin network client to a central Karabo GUI server.
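  To make the "default configuration panel from self-description" idea concrete, here is a minimal conceptual sketch: a device schema is walked and each parameter is mapped to an editor or display widget depending on access rights and the user's role. The schema layout and widget names are invented for illustration; Karabo's real self-description is considerably richer.

```python
# Conceptual sketch (not the Karabo API): derive a default panel from a
# device's self-description. Schema format and widget names are invented.
SCHEMA = {  # self-description as reported by a hypothetical motor device
    "targetPosition": {"type": "float", "unit": "mm", "access": "write"},
    "actualPosition": {"type": "float", "unit": "mm", "access": "read"},
    "state":          {"type": "enum",  "values": ["IDLE", "MOVING"],
                       "access": "read"},
}

def build_default_panel(schema: dict, user_can_edit: bool) -> list[str]:
    """Return one widget descriptor per parameter. Read-only parameters
    and restricted users get display widgets instead of editors."""
    panel = []
    for name, meta in schema.items():
        editable = user_can_edit and meta["access"] == "write"
        editor = "SpinBox" if meta["type"] == "float" else "ComboBox"
        widget = editor if editable else "Label"
        unit = meta.get("unit", "")
        panel.append(f"{name}: {widget} [{unit}]")
    return panel

for line in build_default_panel(SCHEMA, user_can_edit=True):
    print(line)
```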
Poster WEPGF153 [0.767 MB]
 
THHA2I01 Developing Distributed Hard Real-Time Software Systems Using FPGAs and Soft Cores real-time, FPGA, controls, software 1073
 
  • T. Włostowski, J. Serrano
    CERN, Geneva, Switzerland
  • F. Vaga
    University of Pavia, Pavia, Italy
 
  Hard real-time systems guarantee by design that no deadline is ever missed. In a distributed environment such as a particle accelerator, there is often the extra requirement that diverse real-time systems synchronize with each other. Implementations on top of general-purpose multi-tasking operating systems such as Linux generally suffer from a lack of full control of the platform. On the other hand, solutions based on logic inside FPGAs can result in long development cycles. A mid-way approach is presented which allows fast software development yet guarantees full control of the timing of the execution. The solution involves soft cores inside FPGAs, running single tasks without interrupts and without an operating system underneath. Two CERN developments are presented, both based on a unique free and open-source HDL core comprising a parameterizable number of CPUs, logic to synchronize them, and message queues to communicate with the local host and with remote systems. This development environment is being offered as a service to fill the gap between Linux-based solutions and full-hardware implementations.
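  The abstract's key design rule (a single task, no interrupts, bounded work per cycle) can be modelled conceptually even outside an FPGA. The Python sketch below is only such a model, not the HDL or soft-core firmware: each loop iteration drains at most a fixed number of messages from a queue, so its worst-case duration is bounded by construction.

```python
# Conceptual model of the soft-core design rule: one task, no
# interrupts, a polling loop with a static per-cycle work bound.
# This is an illustration, not the CERN implementation.
from collections import deque

queue = deque()                      # stands in for the HW message queue

def handle(msg):                     # bounded, branch-limited work only
    return msg * 2

def control_loop(max_msgs_per_cycle: int, cycles: int):
    """Each cycle handles at most a fixed number of messages, so the
    cycle's execution time has a static upper bound (no interrupts,
    no scheduler, no unbounded loops)."""
    for _ in range(cycles):
        for _ in range(max_msgs_per_cycle):
            if not queue:
                break
            print("out:", handle(queue.popleft()))

queue.extend([1, 2, 3, 4, 5])
control_loop(max_msgs_per_cycle=2, cycles=3)
```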
Slides THHA2I01 [2.530 MB]