Keyword: data-acquisition
Paper Title Other Keywords Page
MOPGF038 Design and Commissioning Results of MicroTCA Stripline BPM System linac, software, electronics, hardware 180
 
  • S. L. Hoobler, R.S. Larsen, H. Loos, J.J. Olsen, S.R. Smith, T. Straumann, C. Xu, A. Young
    SLAC, Menlo Park, California, USA
 
  The Linac Coherent Light Source (LCLS) is a free electron laser (FEL) facility operating at the SLAC National Accelerator Laboratory (SLAC). A stripline beam position monitor (BPM) system was developed at SLAC [1] to meet the performance requirements necessary to provide high-quality, stable beams for LCLS. This design has been modified to achieve improved position resolution in a more compact form factor. Prototype installations of this system have been operating in the LCLS linac and have been tested at the Pohang Accelerator Laboratory (PAL). Production systems are deployed at the new PAL XFEL facility and at the SPEAR storage ring of the Stanford Synchrotron Radiation Lightsource at SLAC. This paper presents the design and commissioning results of this system.
Poster MOPGF038 [0.874 MB]
 
MOPGF056 Synchronising High-Speed Triggered Image and Meta Data Acquisition for Beamlines EPICS, hardware, controls, framework 225
 
  • N. De Maio, A.P. Bark, T.M. Cobb, J.A. Thompson
    DLS, Oxfordshire, United Kingdom
 
  High-speed image acquisition is becoming more and more common on beamlines. As experiments increase in complexity, the need to record parameters related to the environment at the same time increases with them. As a result, conventional systems for combining experimental metadata and images often struggle to deliver the speed and precision desirable for the experiment. We describe an integrated solution that addresses those needs, overcoming the performance limitations of PV monitoring by combining hardware triggering of an ADC card, coordination of signals in a Zebra box*, and three instances of areaDetector streaming data to HDF5. This solution is expected to be appropriate for frame rates ranging from 30 Hz to 1000 Hz, with the limiting factor being the maximum speed of the camera. Conceptually, the individual data streams are arranged in pipelines controlled by a master Zebra box, expecting start/stop signals on one end and producing the data collections at the other. This design ensures efficiency on the acquisition side while allowing easy interaction with higher-level applications on the other.
*T. Cobb, Y. Chernousko, I. Uzun, ZEBRA: A Flexible Solution for Controlling Scanning Experiments, Proc. ICALEPCS13, http://jacow.org/.
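  As a rough illustration of how such a triggered collection can be orchestrated from the EPICS side, the Python sketch below uses pyepics to arm a hardware-triggered areaDetector camera and its HDF5 file writer; the PV names and frame count are placeholders, not the actual beamline configuration.

    # Minimal sketch: arm an areaDetector camera and HDF5 writer for externally
    # triggered frames. All PV names below are hypothetical placeholders.
    from epics import caput, caget

    N_FRAMES = 1000                                     # frames expected from the trigger train

    def arm_acquisition(prefix="BLXX:CAM1:"):
        """Configure the HDF5 plugin and camera, then wait for hardware triggers."""
        caput(prefix + "HDF:FileWriteMode", "Stream")   # stream frames straight to disk
        caput(prefix + "HDF:NumCapture", N_FRAMES)
        caput(prefix + "HDF:Capture", 1)                # start the file writer
        caput(prefix + "CAM:TriggerMode", "External")   # triggers supplied by the Zebra box
        caput(prefix + "CAM:NumImages", N_FRAMES)
        caput(prefix + "CAM:Acquire", 1)                # camera now waits for triggers

    def frames_written(prefix="BLXX:CAM1:"):
        """Read back how many frames the HDF5 writer has captured so far."""
        return caget(prefix + "HDF:NumCaptured_RBV")

    if __name__ == "__main__":
        arm_acquisition()
        print("captured so far:", frames_written())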
 
Poster MOPGF056 [0.456 MB]
 
MOPGF088 Integrating the Measuring System of Vibration and Beam Position Monitor to Study the Beam Stability controls, monitoring, vacuum, network 277
 
  • C. H. Huang, Y.-S. Cheng, P.C. Chiu, K.T. Hsu, K.H. Hu, C.Y. Liao
    NSRRC, Hsinchu, Taiwan
 
  For a low-emittance light source, beam orbit motion needs to be controlled to within a sub-micron level to obtain high-quality light. Magnet vibration, especially of the quadrupoles, is one of the main sources degrading beam stability. In order to study the relationship between vibration and beam motion, it is highly desirable to use a synchronous data acquisition system that integrates the vibration measurement and beam position monitor systems, especially for coherence analysis. Larger vibrations such as earthquakes are also deleterious to beam stability and can even trip the beam through a quench of the superconducting RF cavity. A data acquisition system integrated with an earthquake detector is therefore also necessary to display and archive the data on the control system. The data acquisition systems for vibration and earthquake measurement are summarized in this report. The relationship between beam motion and magnet vibration is also studied here.
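  For the coherence analysis mentioned above, synchronously sampled vibration and BPM signals can be compared with a standard magnitude-squared coherence estimate; the sketch below uses SciPy, with synthetic signals and a made-up 10 kHz sampling rate purely for illustration.

    # Sketch: magnitude-squared coherence between magnet vibration and beam motion.
    # The signals and the 10 kHz sampling rate are synthetic, not NSRRC data.
    import numpy as np
    from scipy.signal import coherence

    fs = 10_000                                          # common sampling rate [Hz]
    t = np.arange(0, 10, 1.0 / fs)
    vibration = np.random.randn(t.size)                  # stand-in for a geophone signal
    beam_x = 0.3 * vibration + np.random.randn(t.size)   # stand-in for a BPM reading

    f, Cxy = coherence(vibration, beam_x, fs=fs, nperseg=4096)
    for freq, c in zip(f, Cxy):
        if c > 0.5:                                      # bands where vibration drives the orbit
            print(f"{freq:8.1f} Hz  coherence {c:.2f}")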
Poster MOPGF088 [0.504 MB]
 
MOPGF161 LANSCE Control System Upgrade Status and Challenges controls, hardware, EPICS, timing 464
 
  • M. Pieck, D. Baros, E. Björklund, J.A. Faucett, J.G. Gioia, J.O. Hill, P.S. Marroquin, J.D. Paul, J.D. Sedillo, F.E. Shelley, H.A. Watkins
    LANL, Los Alamos, New Mexico, USA
 
  Funding: Work supported by Los Alamos National Laboratory for the U.S. Department of Energy under contract W-7405-ENG-36. LA-UR-15-27880
The Los Alamos Neutron Science Center (LANSCE) linear accelerator drives five user facilities: Isotope Production, Proton Radiography, Ultra-Cold Neutrons, Weapons Neutron Research, and Neutron Scattering. In 2011, we started an ambitious project to refurbish key elements of the LANSCE accelerator that had become obsolete or were nearing end-of-life. The control system went through an upgrade process that affected different areas of LANSCE. Many improvements have been made, but funding challenges and LANSCE operational commitments have delayed project deliverables. In this paper, we discuss our upgrade choices, what we have accomplished so far, what we have learned about upgrading the existing control system, and what challenges we still face.
 
Poster MOPGF161 [1.131 MB]
 
TUB3O02 Iterative Development of the Generic Continuous Scans in Sardana controls, experiment, software, hardware 524
 
  • Z. Reszela, G. Cuní, C.M. Falcón Torres, D. Fernández-Carreiras, C. Pascual-Izarra, M. Rosanes Siscart
    ALBA-CELLS Synchrotron, Cerdanyola del Vallès, Spain
 
  Sardana* is a software suite for Supervision, Control and Data Acquisition in scientific installations. It aims to reduce the cost and time of design, development and support of control and data acquisition systems. Sardana is used in several synchrotrons where continuous scans are the desired way of executing experiments**. Most experiments require extensive and coordinated control of many aspects such as positioning, data acquisition, synchronization and storage. Many successful ad hoc solutions have already been developed; however, they lack generalization and are hard to maintain or reuse. Sardana, thanks to the Taurus***-based applications, allows users to configure and control scan experiments. The MacroServer, a flexible Python-based sequencer, provides parametrizable turn-key scan procedures. Thanks to the Device Pool controller interfaces, heterogeneous hardware can easily be plugged into Sardana and its elements used during scans and data acquisition. Development of the generic continuous scans is an ongoing iterative process, and its current status is described in this paper.
* http://sardana-controls.org
** D. Fernandez-Carreiras, "Synchronization of Motion and Detectors and Cont. Scans as the Standard Data Acquisition Technique", ICALEPCS2015
*** http://taurus-scada.org
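  To give a flavour of how such turn-key procedures are exposed to users, the sketch below shows a minimal custom macro in the Sardana style that delegates to a generic continuous-scan macro; the parameter list and the delegated 'ascanct' call are illustrative assumptions rather than the exact interfaces discussed in the paper.

    # Minimal sketch of a Sardana macro delegating to a continuous-scan procedure.
    # Parameter names and the delegated 'ascanct' call are illustrative only.
    from sardana.macroserver.macro import Macro, Type

    class demo_cont_scan(Macro):
        """Run a continuous scan of one motor and report what was requested."""

        param_def = [
            ["motor",     Type.Moveable, None, "motor to scan"],
            ["start",     Type.Float,    None, "start position"],
            ["end",       Type.Float,    None, "end position"],
            ["nb_points", Type.Integer,  None, "number of acquisition points"],
            ["int_time",  Type.Float,    None, "integration time per point [s]"],
        ]

        def run(self, motor, start, end, nb_points, int_time):
            self.output("Continuous scan of %s from %g to %g" % (motor, start, end))
            # Delegate the actual work to the generic continuous-scan macro.
            self.execMacro("ascanct", motor, start, end, nb_points, int_time)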
 
Slides TUB3O02 [3.173 MB]
 
WEB3O03 Disruptor - Using High Performance, Low Latency Technology in the CERN Control System controls, software, framework, hardware 606
 
  • M. Gabriel, R. Gorbonosov
    CERN, Geneva, Switzerland
 
  Accelerator control systems process thousands of concurrent events per second, which adds complexity to their implementation. The Disruptor library provides an innovative single-threaded approach, which combines high performance event processing with a simplified software design, implementation and maintenance. This open-source library was originally developed by a financial company to build a low latency trading exchange. In 2014 the high-level control system for CERN experimental areas (CESAR) was renovated. CESAR calculates the states of thousands of devices by processing more than 2500 asynchronous event streams. The Disruptor was used as an event-processing engine. This allowed the code to be greatly simplified by removing the concurrency concerns. This paper discusses the benefits of the programming model encouraged by the Disruptor (simplification of the code base, performance, determinism), the design challenges faced while integrating the Disruptor into CESAR as well as the limitations it implies on the architecture.  
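  The Disruptor itself is a Java library, so no CESAR code is reproduced here; purely to illustrate the pre-allocated ring buffer and single consumer thread that the pattern promotes, the following Python sketch (emphatically not the Disruptor API) funnels published events into one processing thread that owns all state.

    # Conceptual sketch of a pre-allocated ring buffer with one consumer thread,
    # in the spirit of the Disruptor pattern (this is NOT the LMAX Disruptor API).
    import threading
    import itertools

    RING_SIZE = 1024                        # slots allocated up front, power of two
    ring = [None] * RING_SIZE
    sequence = itertools.count()            # monotonically increasing publish sequence
    free = threading.Semaphore(RING_SIZE)   # back-pressure: free slots in the ring
    published = threading.Semaphore(0)      # signals the consumer that a slot is ready

    device_states = {}                      # mutated only by the single consumer

    def publish(event):
        free.acquire()                      # wait for the consumer to free a slot
        ring[next(sequence) % RING_SIZE] = event
        published.release()

    def consume():
        for seq in itertools.count():
            published.acquire()
            stream_id, value = ring[seq % RING_SIZE]
            device_states[stream_id] = value   # single-threaded state updates: no locks
            free.release()

    threading.Thread(target=consume, daemon=True).start()
    for i in range(10000):                  # simulate 10000 asynchronous events
        publish((i % 32, i))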
Slides WEB3O03 [0.954 MB]
 
WED3O02 Databroker: An Interface for NSLS-II Data Management System experiment, interface, detector, framework 645
 
  • A. Arkilic, D.B. Allan, D. Chabot, L.R. Dalesio, W.K. Lewis
    BNL, Upton, Long Island, New York, USA
 
  Funding: Brookhaven National Lab, U.S. Department of Energy
A typical experiment involves not only the raw data from a detector but also additional data from the beamline. To date, this information has largely been kept separate and manipulated individually. A much more effective approach is to integrate these different data sources and make them easily accessible to data analysis clients. The NSLS-II data flow system contains multiple back-ends with varying data types. Leveraging the features of these (metadatastore, filestore, channel archiver, and Olog), this library provides users with the ability to access experimental data. The service acts as a single interface for time series, data attributes, frame data access and other experiment-related information.
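  A hedged sketch of the kind of unified access this interface aims to provide is shown below, using the open-source databroker package; the calls reflect a later public API and a placeholder catalogue name, not necessarily the exact 2015 interface.

    # Sketch of retrieving a run and its readings through databroker.
    # The catalogue name 'example' and the metadata keys are placeholders.
    from databroker import Broker

    db = Broker.named("example")          # a configured catalogue of runs

    header = db[-1]                       # most recent run
    print(header.start["uid"])            # metadata recorded at the start of the run

    table = header.table()                # detector and beamline readings as one table
    print(table.head())

    # Time-bounded queries across runs are also supported.
    for h in db(since="2015-01-01", until="2015-02-01"):
        print(h.start["uid"])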
 
Slides WED3O02 [2.944 MB]
 
WED3O03 MADOCA II Data Logging System Using NoSQL Database for SPring-8 database, controls, embedded, operation 648
 
  • A. Yamashita, M. Kago
    JASRI/SPring-8, Hyogo-ken, Japan
 
  The data logging system for SPring-8 has been upgraded to a new system using NoSQL databases, as part of the MADOCA II framework. It has been collecting all the log data required for accelerator control without any trouble since the upgrade. The previous system, powered by a relational database management system (RDBMS), had been operating since 1997 and had grown with the development of the accelerators. However, the RDBMS-based system became difficult to adapt to new requirements such as variable-length data storage, data mining from large data volumes and fast data acquisition. New software technologies provided solutions to these problems. In the new system, we adopted two NoSQL databases, Apache Cassandra and Redis, for data storage. Apache Cassandra is used for the perpetual archive: it is a scalable and highly available column-oriented database suitable for time-series data. Redis is used for the real-time data cache because it is a very fast in-memory key-value store. The data acquisition part of the new system was built on ZeroMQ messaging with data serialized by MessagePack. Operation of the new system started in January 2015 after a long-term evaluation of over one year.
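  To make the data path concrete, the hedged Python sketch below traces one reading through the technologies named above: serialized with MessagePack, published over ZeroMQ, cached in Redis and archived in Cassandra. The signal name, keys and table schema are invented for illustration and are not the SPring-8 ones.

    # Illustrative sketch of a MADOCA II-style data path (names and schema invented):
    # MessagePack serialisation -> ZeroMQ transport -> Redis cache -> Cassandra archive.
    import time
    import msgpack
    import zmq
    import redis
    from cassandra.cluster import Cluster

    reading = {"signal": "sr_mag_q1/current", "ts": time.time(), "value": 120.4}
    payload = msgpack.packb(reading, use_bin_type=True)

    # Publish the reading to downstream collectors over ZeroMQ.
    pub = zmq.Context().socket(zmq.PUB)
    pub.bind("tcp://*:5556")
    pub.send(payload)

    # Real-time cache: keep the latest value per signal in Redis.
    redis.Redis(host="localhost", port=6379).set(reading["signal"], payload)

    # Perpetual archive: append a time-series row in Cassandra.
    session = Cluster(["localhost"]).connect("logging")
    session.execute(
        "INSERT INTO signal_log (signal, ts, value) VALUES (%s, %s, %s)",
        (reading["signal"], reading["ts"], reading["value"]),
    )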
Slides WED3O03 [0.513 MB]
 
WED3O05 Big Data Analysis and Analytics with MATLAB software, database, framework, controls 656
 
  • D.S. Willingham
    ASCo, Clayton, Victoria, Australia
 
  Using data analytics to turn large volumes of complex data into actionable information can help improve design and decision-making processes. In today's world, there is an abundance of data being generated from many different sources. However, developing effective analytics and integrating them into existing systems can be challenging. Big data represents an opportunity for analysts and data scientists to gain greater insight and to make more informed decisions, but it also presents a number of challenges: big data sets may not fit into available memory, may take too long to process, or may stream too quickly to store, and standard algorithms are usually not designed to process them in reasonable amounts of time or memory. There is no single approach to big data, so MATLAB provides a number of tools to tackle these challenges. In this paper two case studies are presented: 1) manipulating and doing computations on big data sets on lightweight machines; 2) visualising big, multi-dimensional data sets. Related topics include developing predictive models, high-performance computing with clusters and the cloud, and integration with databases, Hadoop and big data environments.
Slides WED3O05 [10.989 MB]
 
WEPGF014 A Data Acquisition System for Abnormal RF Waveform at SACLA GUI, LLRF, cavity, controls 721
 
  • M. Ishii, M. Kago
    JASRI/SPring-8, Hyogo-ken, Japan
  • T. Fukui
    RIKEN SPring-8 Center, Innovative Light Sources Division, Hyogo, Japan
  • T. Hasegawa, M. Yoshioka
    SES, Hyogo-pref., Japan
  • T. Inagaki, H. Maesaka, T. Ohshima, Y. Otake
    RIKEN SPring-8 Center, Sayo-cho, Sayo-gun, Hyogo, Japan
  • T. Maruyama
    RIKEN/SPring-8, Hyogo, Japan
 
  At the X-ray Free Electron Laser (XFEL) facility SACLA, an event-synchronized data acquisition system has been used for XFEL operation. This system collects shot-by-shot data, such as point data of the phase and amplitude of the RF cavity pickup signals, in synchronization with the beam operation cycle. The system also acquires RF waveform data every 10 minutes. In addition to this periodic waveform acquisition, abnormal RF waveforms that occur suddenly should be collected for failure diagnostics. We therefore developed an abnormal RF waveform data acquisition (DAQ) system, which consists of VME systems, a cache server, and a NoSQL database system, Apache Cassandra. When a VME system detects an abnormal RF waveform, it collects all related waveforms of the same shot. The waveforms are stored in Cassandra through the cache server. Before installation at SACLA, we verified the performance with a prototype system. In 2014, we installed the DAQ system into the injection part with five VME systems. In 2015, we will acquire waveforms from the low-level RF control system, comprising 74 VME systems, at the SACLA accelerator.
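  The abstract does not spell out the abnormality criterion, so the sketch below simply flags a waveform whose RMS deviation from a reference exceeds a threshold and then bundles every related waveform of the same shot for storage; the threshold, channel names and storage hook are assumptions for illustration.

    # Sketch: flag an abnormal RF waveform and bundle all same-shot channels.
    # Threshold, channel names and the storage hook are illustrative assumptions.
    import numpy as np

    THRESHOLD = 0.05                       # allowed RMS deviation from the reference

    def is_abnormal(waveform, reference):
        """True if the waveform strays too far from its reference."""
        return np.sqrt(np.mean((waveform - reference) ** 2)) > THRESHOLD

    def collect_shot(shot_id, channels, references, store):
        """If any channel is abnormal, store every related waveform of that shot."""
        if any(is_abnormal(channels[name], references[name]) for name in channels):
            for name, wf in channels.items():
                store(shot_id, name, wf.tobytes())   # e.g. forwarded to the cache server

    # Example with synthetic data for two cavity pickup waveforms.
    refs = {"cav1_amp": np.zeros(1024), "cav1_phase": np.zeros(1024)}
    shot = {"cav1_amp": 0.2 * np.random.randn(1024), "cav1_phase": np.zeros(1024)}
    collect_shot(42, shot, refs, store=lambda s, n, b: print("stored", s, n, len(b), "bytes"))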
Poster WEPGF014 [0.978 MB]
 
WEPGF038 A Flexible System for End-User Data Visualisation, Analysis Prototyping and Experiment Logbook controls, laser, free-electron-laser, electron 782
 
  • R. Borghes, V. Chenda, G. Kourousias, M. Lonza, M. Prica, M. Scarcia
    Elettra-Sincrotrone Trieste S.C.p.A., Basovizza, Italy
 
  Experimental facilities like synchrotrons and free electron lasers often aim at well-defined data workflows tightly integrated with their control systems. Still, such facilities are also service providers to visiting scientists, and the hosted researchers often have requirements different from those covered by the established processes. The most evident needs are those for i) flexible experimental data visualisation, ii) rapid prototyping of analysis methods, and iii) electronic logbook services. This paper reports on the development of a software system, collectively referred to as DonkiTools, that aims at satisfying the aforementioned needs for the synchrotron ELETTRA and the free electron laser FERMI. The design strategy is outlined and includes topics regarding dynamic data visualisation, Python scripting of analysis methods, integration with the TANGO distributed control system, electronic logbook with automated metadata reporting, usability, customization, and extensibility. Finally, a use case presents a full deployment of the system, integrated with the FermiDAQ data collection system, at the free electron laser beamline EIS-TIMEX.
Poster WEPGF038 [1.016 MB]
 
WEPGF044 Filestore: A File Management Tool for NSLS-II Beamlines experiment, data-analysis, EPICS, database 796
 
  • A. Arkilic, T.A. Caswell, D. Chabot, L.R. Dalesio, W.K. Lewis
    BNL, Upton, Long Island, New York, USA
 
  Funding: Brookhaven National Lab, Department of Energy
NSLS-II beamlines can generate 72,000 data sets per day, resulting in over 2 million data sets in one year. The large number of data files generated by our beamlines poses a massive file management challenge. In response to this challenge, we have developed filestore as a means to provide users with an interface to stored data. By leveraging features of Python and MongoDB, filestore can store information regarding the location of a file, access and open the file, retrieve a given piece of data in that file, and provide users with a token, a unique identifier allowing them to retrieve each piece of data. Filestore does not interfere with the file source or the storage method and supports any file format, making data within files available to the NSLS-II data analysis environment.
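  The core idea, recording where and how a file's data lives and handing back a unique token with which to retrieve it later, can be sketched with pymongo as below; the document layout and reader registry are illustrative assumptions, not filestore's actual schema or API.

    # Sketch of a token-based file registry in the spirit of filestore.
    # Document layout and reader registry are illustrative, not the real schema.
    import uuid
    import numpy as np
    from pymongo import MongoClient

    resources = MongoClient("localhost", 27017)["demo_filestore"]["resources"]

    # Pluggable readers keyed by file format specification.
    READERS = {"npy": lambda path, kwargs: np.load(path, **kwargs)}

    def register(path, spec, resource_kwargs=None):
        """Record the file's location and format; return a unique retrieval token."""
        token = str(uuid.uuid4())
        resources.insert_one({"uid": token, "path": path, "spec": spec,
                              "kwargs": resource_kwargs or {}})
        return token

    def retrieve(token):
        """Look up the token, open the file with the matching reader, return the data."""
        doc = resources.find_one({"uid": token})
        return READERS[doc["spec"]](doc["path"], doc["kwargs"])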
 
Poster WEPGF044 [0.854 MB]
 
WEPGF070 A New Data Acquiring and Query System with Oracle and EPICS in the BEPCII EPICS, database, interface, controls 865
 
  • C.H. Wang, L.F. Li
    IHEP, Beijing, People's Republic of China
 
  Funding: supported by NSFC (1137522)
The old historical Oracle database at BEPCII was put into operation in 2006. Because of problems such as unstable program operation and loss of EPICS PVs, a new data acquiring and query system based on Oracle and EPICS has been developed with Eclipse and JCA. On the one hand, the authors adopt table-space and table-partition techniques to build a special database schema in Oracle. On the other hand, based on RCP and Java, an EPICS data acquisition system has been developed with a very friendly user interface. It is easy for users to check the connection status of each PV and to manage or maintain the system. The authors have also developed a data query system, which provides many functions including data query, plotting, exporting and zooming. The new system has been running for three years and can be applied to any EPICS control system.
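  The BEPCII implementation itself is Java/JCA on Oracle; purely as a small stand-in for the acquisition side, the sketch below uses pyepics monitor callbacks to log PV updates into SQLite, with the PV name and table layout invented for illustration.

    # Stand-in sketch for PV archiving: pyepics monitor callback -> SQLite table.
    # (The BEPCII system uses Java/JCA with Oracle table-spaces and partitions.)
    import sqlite3
    import time
    from epics import PV

    db = sqlite3.connect("pv_history.db", check_same_thread=False)
    db.execute("CREATE TABLE IF NOT EXISTS history (pv TEXT, ts REAL, value REAL)")

    def on_update(pvname=None, value=None, timestamp=None, **kw):
        """Called by pyepics on every value change of the monitored PV."""
        db.execute("INSERT INTO history VALUES (?, ?, ?)", (pvname, timestamp, value))
        db.commit()

    pv = PV("DEMO:CURRENT", callback=on_update)   # PV name is a placeholder
    time.sleep(60)                                # collect updates for a minute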

 
Poster WEPGF070 [0.946 MB]
 
WEPGF105 EPICS V4 Evaluation for SNS Neutron Data neutron, EPICS, network, detector 947
 
  • K.-U. Kasemir, G.S. Guyotte, M.R. Pearson
    ORNL, Oak Ridge, Tennessee, USA
 
  Funding: This manuscript has been authored by UT-Battelle, LLC under Contract No. DE-AC05-00OR22725 with the U.S. Department of Energy.
Version 4 of the Experimental Physics and Industrial Control System (EPICS) toolkit allows defining application-specific structured data types (pvData) and offers a network protocol for their efficient exchange (pvAccess). We evaluated V4 for the transport of neutron events from the detectors of the Spallation Neutron Source (SNS) to data acquisition and experiment monitoring systems. This includes the comparison of possible data structures, performance tests, and experience using V4 in production on a beam line.
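  A hedged sketch of defining such an application-specific structure is given below, using the open-source p4p Python bindings for pvData; the field names chosen for the neutron-event packet are assumptions, not the structures compared at SNS.

    # Sketch: define and fill a structured pvData value with the p4p bindings.
    # Field names for the neutron-event packet are assumptions, not the SNS layout.
    from p4p import Type, Value

    NeutronEvents = Type([
        ("timeStamp", ("S", None, [("secondsPastEpoch", "l"), ("nanoseconds", "i")])),
        ("time_of_flight", "ai"),   # per-event time of flight
        ("pixel", "ai"),            # per-event detector pixel ID
    ])

    packet = Value(NeutronEvents, {
        "timeStamp": {"secondsPastEpoch": 1444000000, "nanoseconds": 0},
        "time_of_flight": [1200, 1350, 1420],
        "pixel": [101, 102, 103],
    })
    print(packet)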
 
Poster WEPGF105 [1.281 MB]
 
WEPGF122 Real-Time Performance Improvements and Consideration of Parallel Processing for Beam Synchronous Acquisition (BSA) EPICS, timing, real-time, linac 992
 
  • K.H. Kim, S. Allison, T. Straumann, E. Williams
    SLAC, Menlo Park, California, USA
 
  Funding: Work supported by the U.S. Department of Energy, Office of Science under Contract DE-AC02-76SF00515 for LCLS I and LCLS II.
Beam Synchronous Acquisition (BSA) provides a common infrastructure for aligning data to each individual beam pulse, as required by the Linac Coherent Light Source (LCLS). BSA allows 20 independent acquisitions simultaneously for the entire LCLS facility and is used extensively for beam physics, machine diagnostics and operation. BSA is designed as part of the LCLS timing system and is currently an EPICS record based implementation, allowing timing receiver EPICS applications to easily add BSA functionality to their own record processing. However, the non-real-time performance of EPICS record processing and the increasing number of BSA devices have led to real-time performance issues. The major reason for the performance problem is likely the lack of separation between time-critical BSA upstream processing and non-critical downstream processing. BSA is being improved with thread-level programming: breaking the global lock in each BSA device, adding a queue between upstream and downstream processing, and moving the non-critical downstream processing to a lower-priority worker thread. The use of multiple worker threads for parallel processing on SMP systems is also being investigated.
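  The restructuring described, a queue decoupling the time-critical upstream path from non-critical downstream work in a lower-priority worker, can be illustrated conceptually with the Python sketch below; the real implementation is EPICS record and thread level code at SLAC, and all names here are invented.

    # Conceptual sketch of BSA-style decoupling: the time-critical path only enqueues,
    # while a lower-priority worker thread performs the downstream processing.
    # (The actual implementation lives in the EPICS IOC; names here are invented.)
    import threading
    import queue

    pending = queue.Queue()

    def on_beam_pulse(pulse_id, reading):
        """Time-critical upstream: align to the pulse and enqueue, never block."""
        pending.put((pulse_id, reading))

    def downstream_worker():
        """Non-critical downstream: averaging, history buffers, client updates."""
        history = []
        while True:
            pulse_id, reading = pending.get()
            history.append((pulse_id, reading))
            if len(history) % 1000 == 0:
                print("processed", len(history), "pulses")
            pending.task_done()

    threading.Thread(target=downstream_worker, daemon=True).start()
    for pulse in range(5000):              # simulate 5000 beam pulses
        on_beam_pulse(pulse, reading=0.5)
    pending.join()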
 
 
WEPGF153 Karabo-GUI: A Multi-Purpose Graphical Front-End for the Karabo Framework GUI, controls, distributed, software 1063
 
  • M. Teichmann, B.C. Heisen, K. Weger, J. Wiggins
    XFEL.EU, Hamburg, Germany
 
  The Karabo GUI is a generic graphical user interface (GUI) which is currently developed at the European XFEL GmbH. It allows the complete management of the Karabo distributed control and data acquisition system. Remote applications (devices) can be instantiated, operated and terminated. Devices are listed in a live navigation view and from the self-description inherent to every device a default configuration panel is generated. The user may combine interrelated components into one project. Such a project includes persisted device configurations, custom control panels and macros. Expert panels can be built by intermixing static graphical elements with dynamic widgets connected to parameters of the distributed system. The same panel can also be used to graphically configure and execute data analysis workflows. Other features include an embedded IPython scripting console, logging, notification and alarm handling. The GUI is user-centric and will restrict display or editing capability according to the user's role and the current device state. The GUI is based on PyQt technology and acts as a thin network client to a central Karabo GUI-Server.  
Poster WEPGF153 [0.767 MB]