Keyword: experiment
Paper Title Other Keywords Page
MOCOAAB01 The First Running Period of the CMS Detector Controls System - A Success Story controls, detector, status, hardware 1
 
  • F. Glege, A. Aymeric, O. Chaze, S. Cittolin, J.A. Coarasa, C. Deldicque, M. Dobson, D. Gigi, R. Gomez-Reino, C. Hartl, L. Masetti, F. Meijers, E. Meschi, S. Morovic, C. Nunez-Barranco-Fernandez, L. Orsini, W. Ozga
    CERN, Geneva, Switzerland
  • G. Bauer
    MIT, Cambridge, Massachusetts, USA
  • U. Behrens
    DESY, Hamburg, Germany
  • J. Branson, A. Holzner
    UCSD, La Jolla, California, USA
  • S. Erhan
    UCLA, Los Angeles, California, USA
  • R.K. Mommsen, V. O'Dell
    Fermilab, Batavia, USA
 
  After only three months of commissioning, the CMS detector controls system (DCS) was running at close to 100% efficiency. Despite millions of parameters to control and the distributed development structure typical of HEP, only minor problems were encountered. The system can be operated by a single person and the required maintenance effort is low. A well-factorized system structure and development process are key to this success, as is a centralized, service-like deployment approach. The underlying controls software, PVSS, has proven to work in a DCS environment. Converting the DCS to full redundancy will further reduce the need for interventions to a minimum.
slides icon Slides MOCOAAB01 [1.468 MB]  
 
MOPPC025 A Movement Control System for Roman Pots at the LHC controls, collimation, interface, FPGA 115
 
  • B. Farnham, O.O. Andreassen, I. Atanassov, J. Baechler, B. Copy, M. Deile, M. Dutour, P. Fassnacht, S. Franz, S. Jakobsen, F. Lucas Rodríguez, X. Pons, E. Radermacher, S. Ravat, F. Ravotti, S. Redaelli
    CERN, Geneva, Switzerland
  • K.H. Hiller
    DESY Zeuthen, Zeuthen, Germany
 
  This paper describes the movement control system for detector positioning based on the Roman Pot design used by the ATLAS-ALFA and TOTEM experiments at the LHC. A key system requirement is that LHC machine protection rules are obeyed: the position is surveyed every 20 ms with an accuracy of 15 μm. If the detectors move too close to the beam (outside limits set by LHC operators), the LHC interlock system is triggered to dump the beam. LHC operators in the CERN Control Centre (CCC) drive the system via an HMI provided by a custom-built Java application, which uses Common Middleware (CMW) to interact with lower-level components. Low-level motorization control is executed using National Instruments PXI devices. The DIM protocol provides the software interface to the PXI layer, and a FESA gateway server bridges CMW and DIM. A cut-down laboratory version of the system was built to provide a platform for verifying the integrity of the full chain with respect to user and machine protection requirements, and for validating new functionality before deployment to the LHC. The paper contains a detailed system description, test-bench results and foreseen system improvements.
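  The machine-protection requirement above (a 20 ms survey with 15 μm accuracy, triggering a beam dump when a pot is too close to the beam) can be sketched as follows. This is an illustration only: all names, units and thresholds are hypothetical, not the production PXI/FESA code.

```python
# Hypothetical sketch of the 20 ms survey cycle: check each Roman Pot
# position against its operator-defined limit, allowing for the 15 um
# (0.015 mm) measurement accuracy, and request a beam dump on violation.

def survey(positions_mm, limits_mm, accuracy_mm=0.015):
    """Return the pots that could be inside their limit in the worst
    case of the measurement accuracy."""
    violations = []
    for pot, pos in positions_mm.items():
        if pos - accuracy_mm < limits_mm[pot]:
            violations.append(pot)
    return violations

def survey_cycle(read_positions, limits_mm, trigger_interlock):
    """One survey cycle: read positions, check limits, dump if needed."""
    violations = survey(read_positions(), limits_mm)
    if violations:
        trigger_interlock(violations)  # LHC interlock dumps the beam
    return violations
```

  Note that the measurement accuracy is subtracted before the comparison, so a pot is flagged whenever it could be inside its limit in the worst case.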
 
MOPPC037 Control Programs for the MANTRA Project at the ATLAS Superconducting Accelerator controls, laser, data-acquisition, ion 162
 
  • M.A. Power, C.N. Davids, C. Nair, T. Palchan, R.C. Pardo, C.E. Peters, K.M. Teh, R.C. Vondrasek
    ANL, Argonne, USA
 
  Funding: This work was supported by the U.S. Department of Energy, Office of Nuclear Physics, under Contract No. DE-AC02-06CH11357.
The AMS (Accelerator Mass Spectrometry) project at ATLAS (Argonne Tandem Linac Accelerator System) complements the MANTRA (Measurement of Actinides Neutron TRAnsmutation) experimental campaign. To improve the precision and accuracy of AMS measurements at ATLAS, a new overall control system needs to be implemented to reduce systematic errors arising from changes in transmission and ion-source operation. The system will automatically and rapidly switch between different m/q settings, acquire the appropriate data and move on to the next setting. In addition to controlling the new multi-sample changer and laser ablation system, a master control program will communicate via the network to integrate the ATLAS accelerator control system, the FMA control computer and the data acquisition system.
 
poster icon Poster MOPPC037 [2.211 MB]  
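  The automatic cycling through m/q settings described above can be illustrated with a minimal sequencing sketch; `set_mq` and `acquire` are hypothetical stand-ins for the real accelerator-control and data-acquisition calls:

```python
# Hypothetical sequencing sketch: visit every m/q setting in quick
# rotation, acquiring data at each, so that slow drifts in transmission
# or ion-source output affect all settings nearly equally.

def run_sequence(settings, set_mq, acquire, cycles=1):
    """Visit every m/q setting `cycles` times and collect the data."""
    results = []
    for _ in range(cycles):
        for mq in settings:
            set_mq(mq)                       # retune for this m/q
            results.append((mq, acquire()))  # record data for it
    return results
```

  Rotating rapidly through the settings, rather than measuring each once for a long time, is what suppresses the systematic errors from transmission and ion-source drifts.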
 
MOPPC048 Evaluation of the Beamline Personnel Safety System at ANKA under the Aegis of the 'Designated Architectures' Approach radiation, software, controls, operation 195
 
  • K. Cerff, M. Hagelstein
    FZK, Karlsruhe, Germany
  • I. Birkel, J. Jakel, R. Stricker
    KIT, Karlsruhe, Germany
 
  The Beamline Personnel Safety System (BPSS) at Angstroemquelle Karlsruhe (ANKA) started operation in 2003. The paper describes the safety-related design and evaluation of serial, parallel and nested radiation safety areas, which allows the flexible plug-in of experimental setups at ANKA beamlines. It evaluates the resulting requirements for safety-system hardware and software and the necessary validation procedure defined by current national and international standards, based on probabilistic reliability parameters supplied by the component libraries of manufacturers and on an approach known as 'Designated Architectures', which defines safety functions in terms of sensor-logic-actuator chains. An ANKA beamline example is presented with special regard to features such as the (self-)diagnostic coverage (DC) of the control system, which is not part of classical Markov-process modelling of system safety.
poster icon Poster MOPPC048 [0.699 MB]  
 
MOPPC053 A Safety System for Experimental Magnets Based on CompactRIO status, interface, controls, hardware 210
 
  • S. Ravat, L. Deront, A. Kehrli, X. Pons
    CERN, Geneva, Switzerland
 
  This paper describes the development of a new safety system for experimental magnets using National Instruments CompactRIO devices. The design of the custom Magnet Safety System (MSS) for the large LHC experimental magnets began in 1998, and it was first installed and commissioned in 2002. Some of its components, such as the isolation amplifier and the ALTERA reconfigurable field-programmable gate array (FPGA), are no longer available on the market. A review of the system showed that it can be modernized and simplified by replacing the hard-wired logic module (HLM) with a CompactRIO device. This industrial unit is a reconfigurable embedded system containing a processor running a real-time operating system (RTOS), an FPGA, and interchangeable industrial I/O modules. A prototype system, called MSS2, has been built and successfully tested using a test bench based on a PXI crate. Two systems are currently being assembled for two experimental magnets at CERN: the COMPASS solenoid and the M1 magnet at the SPS beam line. This paper contains a detailed description of MSS2 and the test bench, and results from a first implementation and operation with real magnets.
poster icon Poster MOPPC053 [0.543 MB]  
 
MOPPC056 The Detector Safety System of NA62 Experiment detector, status, interface, controls 222
 
  • G. Maire, A. Kehrli, S. Ravat
    CERN, Geneva, Switzerland
  • H. Coppier
    ESIEE, Amiens, France
 
  The aim of the NA62 experiment is the study of the rare decay K+ → π+ ν ν̄ at the CERN SPS. The Detector Safety System (DSS), developed at CERN, is responsible for assuring the protection of the experiment's equipment. The DSS requires a high degree of availability and reliability. It is composed of a front-end and a back-end part, the front-end being based on a National Instruments cRIO system to which the safety-critical part is delegated. The cRIO front-end is capable of running autonomously and of automatically taking predefined protective actions whenever required. It is supervised and configured by the standard CERN PVSS SCADA system. The DSS can easily adapt to the evolving requirements of the experiment during the construction, commissioning and exploitation phases. The NA62 DSS is being installed and was partially commissioned during the NA62 Technical Run in autumn 2012, in which components from almost all the detectors as well as the trigger and data acquisition systems were successfully tested. The paper contains a detailed description of this innovative and high-performing solution and demonstrates that it is a good alternative to the LHC systems based on redundant PLCs.
poster icon Poster MOPPC056 [0.613 MB]  
 
MOPPC133 Performance Improvement of KSTAR Networks for Long Distance Collaborations network, site, interface 423
 
  • J.S. Park, A.K. Bashir, D. Lee, S. Lee, T.G. Lee, S.W. Yun
    NFRI, Daejon, Republic of Korea
  • B.S. Cho
    KISTI, Daejeon, Republic of Korea
 
  KSTAR (Korea Superconducting Tokamak Advanced Research) has completed its 5th campaign. Every year it produces an enormous amount of data that needs to be forwarded to international collaborators shot by shot for run-time analysis. The analysis of one shot helps decide the parameters for the next, and since many shots are conducted in a day, this communication needs to be very efficient. Moreover, the amount of KSTAR data and the number of international collaborators increase every year. With such large data volumes and collaborators spread all over the world, run-time communication becomes a challenge that demands efficient data-transfer methods. In this paper, we therefore optimize the paths among the internal and external networks of KSTAR for efficient communication. We also discuss transmission solutions for constructing this environment and evaluate the performance for long-distance collaborations.
poster icon Poster MOPPC133 [1.582 MB]  
 
TUMIB05 ANSTO, Australian Synchrotron, Metadata Catalogues and the Australian National Data Service database, synchrotron, data-management, neutron 529
 
  • N. Hauser, S. Wimalaratne
    ANSTO, Menai, Australia
  • C.U. Felzmann
    SLSA, Clayton, Australia
 
  Data citation, management and discovery are important to ANSTO, the Australian Synchrotron and the scientists who use them. Gone are the days when raw data is written to removable media and subsequently lost. The metadata catalogue *MyTardis is being used by both ANSTO and the Australian Synchrotron. Metadata is harvested from the raw experimental files of the neutron-beam and X-ray instruments and catalogued in databases local to the facilities. The data is accessible via a web portal. Data policies are applied to embargo data before it is placed in the public domain. Public-domain data is published to the Australian Research Data Commons using the OAI-PMH standard. The Commons, a web-robot-friendly site, is run by the Australian National Data Service (ANDS), which sponsored the project. ANDS also sponsors digital object identifiers (DOIs) for deposited datasets, which allows raw data to become a first-class research output: scientists who collect data gain recognition in the same way as those who publish journal articles. Data is being discovered, cited and reused, and collaborations are initiated through the Commons.
slides icon Slides TUMIB05 [1.623 MB]  
poster icon Poster TUMIB05 [1.135 MB]  
 
TUPPC005 Implementation of an Overall Data Management at the Tomography Station at ANKA TANGO, data-management, controls, synchrotron 558
 
  • D. Haas, W. Mexner, H. Pasic, T. Spangenberg
    KIT, Eggenstein-Leopoldshafen, Germany
 
  New technologies and research methods increase the complexity of data management at the beamlines of a synchrotron radiation facility. The diverse experimental data, such as user and sample information, beamline status and parameters, and experimental datasets, have to be interrelated, stored and provided to the user in a convenient way. Implementing these requirements leads to challenges in the fields of data life cycle, storage, format and flow. At the tomography station at ANKA, a novel data management system has been introduced, representing a clearly structured and well-organized data flow. The first step was to introduce the Experimental Coordination Service (ECS), which reorganizes the measurement process and provides automatic linking of meta-, logging- and experimental data. The large amount of data, several TByte/week, is stored in NeXus files. These files are subsequently handled, regarding storage location and life cycle, by the WorkSpaceCreator development tool. In a further step, ANKA will introduce the European single sign-on system Umbrella and the experimental data catalogue ICAT, planned as the European standard solution in the PaNdata project.
poster icon Poster TUPPC005 [1.422 MB]  
 
TUPPC014 Development of SPring-8 Experimental Data Repository System for Management and Delivery of Experimental Data data-management, interface, database, controls 577
 
  • H. Sakai, Y. Furukawa, T. Ohata
    JASRI/SPring-8, Hyogo-ken, Japan
 
  The SPring-8 experimental Data Repository system (SP8DR) is an online storage service built as part of the SPring-8 infrastructure. SP8DR enables experimental users to retrieve, on demand via the Internet, the data produced at the SPring-8 beamlines. To simplify later searches for the required datasets, the system stores experimental data together with metadata such as the experimental conditions, which is also useful in the post-experiment analysis process. As a framework for data management we adopted DSpace, which is widely used in academic library information systems. We developed two kinds of application software for registering experimental data simply and quickly. These applications record metadata in the SP8DR database, which holds relations to the experimental data on the storage system. This data-management design also makes the applications applicable to high-bandwidth data acquisition systems. In this paper, we report on the SPring-8 experimental Data Repository system, which has begun operation at the SPring-8 beamlines.
 
TUPPC015 On-line and Off-line Data Analysis System for SACLA Experiments detector, data-analysis, laser, data-acquisition 580
 
  • T. Sugimoto, Y. Furukawa, Y. Joti, T.K. Kameshima, K. Okada, R. Tanaka, M. Yamaga
    JASRI/SPring-8, Hyogo-ken, Japan
  • T. Abe
    RIKEN SPring-8 Center, Innovative Light Sources Division, Hyogo, Japan
 
  The X-ray Free-Electron Laser facility SACLA has delivered X-ray laser beams to users since March 2012 [1]. Typical user experiments utilize two-dimensional imaging sensors, which generate 10 MBytes per accelerator beam shot. At a 60 Hz beam repetition rate, experimental data accumulate at 600 MBytes/second in a dedicated data-acquisition (DAQ) system [2]. To analyze such a large amount of data, we developed a data-analysis system for SACLA experiments. The system consists of on-line and off-line sections. The on-line section performs on-the-fly filtering using data-handling servers, which examine data quality and record the results in a database on an event-by-event basis. By referring to the database, we can select good events before performing off-line analysis. The off-line section performs precise analysis on a high-performance computing system, such as physical image reconstruction and rough three-dimensional structure analysis of the data samples. For large-scale image reconstructions, we also plan to use an external supercomputer. In this paper, we present an overview and the future plans of the SACLA analysis system.
[1] T. Ishikawa et al., Nature Photonics 6, 540-544 (2012).
[2] M. Yamaga et al., ICALEPCS 2011, TUCAUST06, 2011.
 
poster icon Poster TUPPC015 [10.437 MB]  
 
TUPPC037 LabWeb - LNLS Beamlines Remote Operation System operation, software, interface, controls 638
 
  • H.H. Slepicka, M.A. Barbosa, R. Bongers, H.F. Canova, M.B. Cardoso, J.C. Mauricio, D.O. Omitto, J.M. Polli, C.B. Rodella, H. Westfahl Jr., M.M. Xavier, D.C. de Oliveira
    LNLS, Campinas, Brazil
 
  Funding: Project funded by CENPES/PETROBRAS under contract number: 0050.0067267.11.9
LabWeb is software developed to allow remote operation of beamlines at LNLS, in partnership with the Petrobras Nanotechnology Network. Being the only light source in Latin America, LNLS receives many researchers and students interested in conducting experiments and analyses on these beamlines. LabWeb allows researchers to use the laboratory without leaving their research centers, reducing time and travel costs in a continental country like Brazil. In 2010, the project was in its first phase, in which tests were conducted using a beta version. Two years later, a new phase of the project began with the main goal of bringing the remote-access project to operational scale for LNLS users. In this new version, a partnership was established to use Science Studio, the open-source platform developed and applied at the Canadian Light Source (CLS). Currently, the project includes remote operation of three beamlines at LNLS: SAXS1 (small-angle X-ray scattering), XAFS1 (X-ray absorption and fluorescence spectroscopy) and XRD1 (X-ray diffraction). The expectation now is to extend this new way of performing experiments to all the other beamlines at LNLS.
 
poster icon Poster TUPPC037 [1.613 MB]  
 
TUPPC044 When Hardware and Software Work in Concert controls, interface, detector, operation 661
 
  • M. Vogelgesang, T. Baumbach, T. Farago, A. Kopmann, T. dos Santos Rolo
    KIT, Eggenstein-Leopoldshafen, Germany
 
  Funding: Partially funded by BMBF under the grants 05K10CKB and 05K10VKE.
Integrating control and high-speed data processing is a fundamental requirement for operating a beamline efficiently and improving users' beam-time experience. Implementing such control environments for data-intensive applications at synchrotrons has been difficult because of vendor-specific device access protocols and distributed components. Although TANGO addresses the distributed nature of experiment instrumentation, standardized APIs that provide uniform device access, process control and data analysis are still missing. Concert is a Python-based framework for device control and messaging. It implements these programming interfaces and provides a simple but powerful user interface. Our system exploits the asynchronous nature of device accesses and performs low-latency on-line data analysis using GPU-based data processing. We will use Concert to adjust experimental conditions using on-line data analysis, e.g. during radiographic and tomographic experiments. Concert's process control mechanisms and the UFO processing framework* will allow us to control both the process under study and the measuring procedure depending on image dynamics.
* Vogelgesang, Chilingaryan, Rolo, Kopmann: “UFO: A Scalable GPU-based Image Processing Framework for On-line Monitoring”
 
poster icon Poster TUPPC044 [4.318 MB]  
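  Concert's real API is not reproduced here; the asyncio sketch below only illustrates the pattern the abstract relies on, namely asynchronous device access that keeps the event loop free for other devices and for on-line analysis. All class and function names are hypothetical.

```python
# Illustrative pattern of asynchronous device access (not Concert's API):
# motor moves and detector reads are coroutines, so the event loop can
# serve other devices and on-line analysis while they run.
import asyncio

class Motor:
    def __init__(self):
        self.position = 0.0
    async def move(self, position):
        await asyncio.sleep(0.01)   # stand-in for real motion time
        self.position = position

async def scan(motor, positions, read_detector):
    """Move to each position and read the detector without blocking
    the event loop."""
    frames = []
    for p in positions:
        await motor.move(p)
        frames.append(await read_detector())
    return frames

async def main():
    motor = Motor()
    async def read_detector():
        await asyncio.sleep(0.001)
        return motor.position * 2   # fake detector response
    return await scan(motor, [1.0, 2.0], read_detector)
```

  Because every device call is awaitable, a processing pipeline can consume frames as they arrive instead of waiting for the whole scan to finish.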
 
TUPPC058 Automation of Microbeam Focusing for X-Ray Micro-Experiments at the 4B Beamline of Pohang Light Source-II focusing, controls, LabView, hardware 703
 
  • K.H. Gil, H. J. Choi, J.Y. Huang, J.H. Lim
    PAL, Pohang, Kyungbuk, Republic of Korea
  • C.W. Bark
    Gachon University, Seongnam, Republic of Korea
 
  The 4B beamline of the Pohang Light Source-II performs X-ray microdiffraction and microfluorescence experiments using X-ray microbeams. The microbeam has been focused down to FWHM sizes of less than 3 μm by manually adjusting the vertical and horizontal focusing mirrors of a K-B (Kirkpatrick-Baez) mirror system. In this work, microbeam-focusing automation software was developed to automate the formerly complex and cumbersome beam-focusing process, which could take about a day. The existing controllers of the K-B mirror system were replaced by products with communication functions, and a motor-driving routine based on proportional feedback control was constructed. Based on this routine and the outputs of two ionization chambers placed before and after the K-B mirror system, automation software performing every step of the beam-focusing process was completed as LabVIEW applications. The software was deployed at the 4B beamline and focused the X-ray beam to a minimal size within an hour. This presentation introduces the details of the algorithms of the automation software and examines its performance.
poster icon Poster TUPPC058 [1.257 MB]  
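  The proportional feedback at the core of such an automation can be sketched as below. This is a hypothetical plain-Python illustration (the real software is LabVIEW, and the gain, tolerance and signal model are invented): the loop steps an actuator by an amount proportional to the error between the measured ionization-chamber ratio and its target.

```python
# Hypothetical proportional-feedback sketch: drive a mirror actuator
# until the ratio of downstream to upstream ionization-chamber signals
# reaches a target value.

def focus(read_ratio, move_actuator, target, gain=0.5, tol=1e-3,
          max_steps=100):
    """Step the actuator by gain * error until the measured ratio is
    within `tol` of `target`; return the number of steps taken."""
    for step in range(max_steps):
        error = target - read_ratio()
        if abs(error) < tol:
            return step                  # converged
        move_actuator(gain * error)      # proportional correction
    raise RuntimeError("focus loop did not converge")
```

  For an idealized plant that responds one-to-one to the actuator, a gain of 0.5 halves the error every iteration, so convergence to 1e-3 takes about ten steps.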
 
TUPPC060 Implementation of Continuous Scans Used in Beamline Experiments at Alba Synchrotron hardware, controls, software, detector 710
 
  • Z. Reszela, F. Becheri, G. Cuní, D. Fernández-Carreiras, J. Moldes, C. Pascual-Izarra
    CELLS-ALBA Synchrotron, Cerdanyola del Vallès, Spain
  • T.M. Coutinho
    ESRF, Grenoble, France
 
  The Alba control system * is based on Sardana **, a software package implemented in Python, built on top of Tango *** and oriented to beamline and accelerator control and data acquisition. Sardana provides an advanced scan framework, which is commonly used at all the beamlines of Alba as well as at other institutes. This framework provides standard macros and comprises various scanning modes: step, hybrid and software-continuous, but not hardware-continuous. Continuous scans speed up data acquisition, making them a great asset for most experiments and, due to time constraints, mandatory for a few of them. A continuous scan has been developed and installed at three beamlines, where it reduced the time overheads of the step scans. Furthermore, it can easily be adapted to any other experiment and will be used as a base for extending the Sardana scan framework with generic continuous-scan capabilities. This article describes the requirements, plan and implementation of the project as well as its results and possible improvements.
*"The design of the Alba Control System. […]" D. Fernández et al, ICALEPCS2011
**"Sardana, The Software for Building SCADAS […]" T.M. Coutinho et al, ICALEPCS2011
***www.tango-controls.org
 
poster icon Poster TUPPC060 [13.352 MB]  
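  The overhead reduction that motivates continuous scans can be shown with a back-of-the-envelope model (this is not Sardana code, and the timing parameters are purely illustrative): a step scan pays per-point motor and synchronization overhead, while a continuous scan pays it roughly once.

```python
# Toy timing model: why continuous scans cut dead time.

def step_scan_time(n_points, t_exposure, t_overhead):
    """Step scan: every point pays motor start/stop + sync overhead."""
    return n_points * (t_exposure + t_overhead)

def continuous_scan_time(n_points, t_exposure, t_overhead):
    """Continuous scan: a single ramp-up overhead; exposures happen
    while the motor moves at constant speed."""
    return t_overhead + n_points * t_exposure
```

  With, say, 100 points, the per-point overhead dominates the step-scan total, which is exactly the overhead the Alba implementation removed.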
 
TUPPC061 BL13-XALOC, MX experiments at Alba: Current Status and Ongoing Improvements controls, interface, TANGO, hardware 714
 
  • G. Cuní, J. Benach, D. Fernández-Carreiras, J. Juanhuix, C. Pascual-Izarra, Z. Reszela
    CELLS-ALBA Synchrotron, Cerdanyola del Vallès, Spain
  • T.M. Coutinho
    ESRF, Grenoble, France
 
  BL13-XALOC is the only macromolecular crystallography (MX) beamline at the 3-GeV ALBA synchrotron. The control system is based on Tango * and Sardana **, which provide a powerful Python-based environment for building and executing user-defined macros, comprehensive access to the hardware, a standard command-line interface based on ipython, and a generic, customizable graphical user interface based on Taurus ***. Currently, MX experiments are performed through panels that control the different beamline instrumentation. Users are able to collect diffraction data and solve crystal structures, so it is now time to improve the control system by combining user feedback with the development of second-stage features: grouping all the interfaces (i.e. sample viewing system, automatic sample changer, fluorescence scans, and data collections) into a high-level application, and implementing new functionality to provide higher-throughput experiments with data-collection strategies, automated data collections, and workflows. This article describes the current architecture of the XALOC control system and the plan for future improvements.
* http://www.tango-controls.org/
** http://www.sardana-controls.org/
*** http://www.tango-controls.org/static/taurus/
 
poster icon Poster TUPPC061 [2.936 MB]  
 
TUPPC063 Control and Monitoring of the Online Computer Farm for Offline Processing in LHCb controls, monitoring, network, interface 721
 
  • L.G. Cardoso, P. Charpentier, J. Closier, M. Frank, C. Gaspar, B. Jost, G. Liu, N. Neufeld
    CERN, Geneva, Switzerland
  • O. Callot
    LAL, Orsay, France
 
  LHCb, one of the four experiments at the LHC accelerator at CERN, uses approximately 1500 PCs (averaging 12 cores each) to process the High Level Trigger (HLT) during physics data taking. During periods when data acquisition is not required, most of these PCs are idle, and it is possible to profit from the unused processing capacity to run offline jobs, such as Monte Carlo simulation. The LHCb offline computing environment is based on LHCbDIRAC (Distributed Infrastructure with Remote Agent Control). In LHCbDIRAC, job agents are started on worker nodes, pull waiting tasks from the central WMS (Workload Management System) and process them on the available resources. A control system was developed which is able to launch, control and monitor the job agents for offline data processing on the HLT farm. It is based on the existing Online System Control infrastructure, the PVSS SCADA and the FSM toolkit, and has been extensively used to launch and monitor more than 22,000 agents simultaneously; over 850,000 jobs have already been processed on the HLT farm. This paper describes the deployment of, and experience with, the control system in the LHCb experiment.
poster icon Poster TUPPC063 [2.430 MB]  
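  The pull model the abstract describes can be reduced to a few lines; the in-memory queue and `execute` callable here are hypothetical stand-ins for the DIRAC WMS and an actual payload job:

```python
# Illustrative pull-model agent (names hypothetical): an agent on an
# idle HLT node repeatedly pulls the next waiting task from the central
# queue, runs it, and collects the result, until no work remains.

def run_agent(wms_queue, execute):
    """Pull tasks until the queue is empty; return the results."""
    done = []
    while wms_queue:
        task = wms_queue.pop(0)      # pull the next waiting task
        done.append(execute(task))   # e.g. a Monte Carlo production job
    return done
```

  The pull direction matters: nodes ask for work when they have spare capacity, so no central component needs to track which of the roughly 1500 PCs are currently idle.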
 
TUPPC064 Reusing the Knowledge from the LHC Experiments to Implement the NA62 Run Control controls, detector, hardware, framework 725
 
  • F. Varela, M. Gonzalez-Berges
    CERN, Geneva, Switzerland
  • N. Lurkin
    UCL, Louvain-la-Neuve, Belgium
 
  NA62 is an experiment designed to measure very rare kaon decays at the CERN SPS, planned to start operation in 2014. Until then, several intermediate run periods have been scheduled to exercise and commission the different parts and subsystems of the detector. The Run Control system monitors and controls all processes and equipment involved in data taking. It is developed as a collaboration between the NA62 experiment and the Industrial Controls and Engineering (EN-ICE) Group of the Engineering Department at CERN. In this paper, the contribution of EN-ICE to the NA62 Run Control project is summarized. EN-ICE has promoted the use of standardized control technologies and frameworks at CERN, originally developed for the controls of the LHC experiments. This approach made it possible to deliver, in a very short time and with limited manpower, a working system for the 2013 Technical Run that exceeded the initial requirements.
 
TUPPC077 Experiment Automation with a Robot Arm Using the Liquids Reflectometer Instrument at the Spallation Neutron Source neutron, alignment, controls, target 759
 
  • B. Vacaliuc, G.C. Greene, A.A. Parizzi, M. Sundaram
    ORNL RAD, Oak Ridge, Tennessee, USA
  • J.F. Ankner, J.F. Browning, C.E. Halbert, M.C. Hoffmann, P. Zolnierczuk
    ORNL, Oak Ridge, Tennessee, USA
 
  Funding: U.S. Government under contract DE-AC05-00OR22725 with UT-Battelle, LLC, which manages the Oak Ridge National Laboratory.
The Liquids Reflectometer instrument installed at the Spallation Neutron Source (SNS) enables observations of chemical kinetics, solid-state reactions and phase transitions of thin-film materials at both solid and liquid surfaces. Effective measurement of these behaviors requires each sample to be calibrated dynamically using the neutron beam and the data acquisition system in a feedback loop. Since the SNS is an intense neutron source, the time needed to perform the measurement can be comparable to that of the alignment process, leading to a labor-intensive operation that is exhausting for users. An update to the instrument control system, completed in March 2013, implemented the key features of automated sample alignment and robot-driven sample management, allowing unattended operation over extended periods lasting as long as 20 hours. We present a case study of the effort, detailing the mechanical, electrical and software modifications that were made, as well as the lessons learned during the integration, verification and testing process.
 
poster icon Poster TUPPC077 [17.799 MB]  
 
TUPPC078 First EPICS/CSS Based Instrument Control and Acquisition System at ORNL controls, EPICS, interface, neutron 763
 
  • X. Geng, X.H. Chen, K.-U. Kasemir
    ORNL, Oak Ridge, Tennessee, USA
 
  Funding: SNS is managed by UT-Battelle, LLC, under contract DE-AC05-00OR22725 for the U.S. Department of Energy
The neutron imaging prototype beamline (CG-1D) at the Oak Ridge National Laboratory High Flux Isotope Reactor (HFIR) serves many different applications, necessitating a flexible and stable instrument control system. Beamline scientists expect a robust data acquisition system and a clear, concise user interface that allows them both to configure an experiment and to monitor an ongoing experiment run. Idle time between acquiring consecutive images must be minimized. To achieve these goals, we implemented a system based upon EPICS, the newly developed CSS scan system, and CSS BOY. This paper presents the system architecture and possible future plans.
 
poster icon Poster TUPPC078 [6.846 MB]  
 
TUPPC100 Recent Changes to Beamline Software at the Canadian Light Source software, controls, EPICS, Windows 813
 
  • G. Wright, D. Beauregard, R. Berg, G. Black, D.K. Chevrier, R. Igarashi, E. D. Matias, C.D. Miller
    CLS, Saskatoon, Saskatchewan, Canada
 
  The Canadian Light Source has ongoing work to improve the user interfaces at the beamlines. Much of this work makes use of Qt and EPICS, with applications written in both C++ and Python. Continuing work on the underlying data acquisition and visualization tools provides commonality for both development and operation, and provisions for extending the tools allow flexibility in the types of experiments being run.
poster icon Poster TUPPC100 [1.864 MB]  
 
TUPPC108 Using Web Syndication for Flexible Remote Monitoring site, controls, detector, operation 825
 
  • O. Pinazza
    INFN-Bologna, Bologna, Italy
  • A. Augustinus, P.M. Bond, P.Ch. Chochula, M. Lechman, P. Rosinský
    CERN, Geneva, Switzerland
  • A.N. Kurepin
    RAS/INR, Moscow, Russia
 
  With the experience gained in the first years of running the ALICE apparatus, we have identified the need to collect and aggregate different data to be displayed to the user in a simplified, personalized and clear way. The data come from different sources in several formats and can contain data, text or pictures, or simply be a link to extended content. This paper describes the design of a light and flexible infrastructure to aggregate information produced in different systems and offer it to readers. In this model, a reader is presented with the information relevant to him or her, without being obliged to browse through different systems. The project covers data production, collection and syndication, and is being developed in parallel with more traditional monitoring interfaces, with the aim of offering ALICE users an alternative and convenient way to stay updated on their preferred systems even when they are far from the experiment.
poster icon Poster TUPPC108 [1.301 MB]  
 
TUPPC115 Hierarchies of Alarms for Large Distributed Systems controls, detector, interface, diagnostics 844
 
  • M. Boccioli, M. Gonzalez-Berges, V. Martos
    CERN, Geneva, Switzerland
  • O. Holme
    ETH, Zurich, Switzerland
 
  The control systems of most of the infrastructure at CERN make use of the SCADA package WinCC OA by ETM, including successful projects to control large-scale systems (e.g. the LHC accelerator and associated experiments). Each of these systems features up to 150 supervisory computers and several million parameters. To handle such large systems, the control topologies are designed hierarchically (e.g. sensor, module, detector, experiment), with the main goal of allowing a single person to supervise a complete installation from a central user interface. One of the key features to achieve this is alarm management (generation, handling, storage, reporting). Although most critical systems include automatic reactions to faults, alarms are fundamental for intervention and diagnostics. Since one installation can have up to 250k alarms defined, a major failure may create an avalanche of alarms that is difficult for an operator to interpret, and missing important alarms may lead to downtime or to danger for the equipment. The paper presents the developments made in recent years in WinCC OA to work with large hierarchies of alarms and to present summarized information to the operators.
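  The hierarchical summarization described above can be sketched on a simple nested structure (hypothetical, not WinCC OA's data model): each node is considered alarmed if it or any descendant has an active alarm, so the operator sees one summary per subtree instead of an avalanche of individual alarms.

```python
# Sketch of bottom-up alarm aggregation over a control tree
# (sensor -> module -> detector -> experiment). A node is a dict:
# {"alarm": bool, "children": [subnodes...]}; structure is hypothetical.

def summarize(node):
    """Return (alarmed, total_active) for the subtree rooted at `node`,
    aggregating alarms bottom-up."""
    count = 1 if node.get("alarm") else 0
    for child in node.get("children", []):
        _, c = summarize(child)
        count += c
    return count > 0, count
```

  At the top of the tree the operator then needs only the boolean summary plus the count to decide where to drill down.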
 
WECOAAB01 An Overview of the LHC Experiments' Control Systems controls, framework, interface, monitoring 982
 
  • C. Gaspar
    CERN, Geneva, Switzerland
 
  Although the four LHC experiments share the same accelerator, they have, either by need or by choice, adopted different equipment, defined different requirements and are operated differently. This led to the development of four quite different Control Systems. Although a joint effort was made in the area of Detector Control Systems (DCS), allowing a common choice of components and tools and leading to the development of a common DCS Framework for the four experiments, nothing was done in common in the areas of Data Acquisition or Trigger Control (normally called Run Control). This talk will present an overview of the design principles, architectures and technologies chosen by the four experiments in order to perform the Control System's tasks: Configuration, Control, Monitoring, Error Recovery, User Interfacing, Automation, etc.
Invited
 
slides icon Slides WECOAAB01 [2.616 MB]  
 
WECOBA04 Effective End-to-end Management of Data Acquisition and Analysis for X-ray Photon Correlation Spectroscopy detector, photon, real-time, status 1004
 
  • F. Khan, J.P. Hammonds, S. Narayanan, A. Sandy, N. Schwarz
    ANL, Argonne, USA
 
  Funding: Work supported by U.S. Department of Energy, Office of Science, under Contract No. DE-AC02-06CH11357.
Low latency between data acquisition and analysis is of critical importance to any experiment. The combination of a faster parallel algorithm and a data pipeline connecting disparate components (detectors, clusters, file formats) enabled us to greatly enhance the operational efficiency of the x-ray photon correlation spectroscopy experiment facility at the Advanced Photon Source. The improved workflow starts with raw data (120 MB/s) streaming directly from the detector camera, through an on-the-fly discriminator implemented in firmware, to Hadoop’s distributed file system in a structured HDF5 data format. The user then triggers the MapReduce-based parallel analysis. For effective bookkeeping and data management, the provenance information and reduced results are added to the original HDF5 file. Finally, the data pipeline triggers user-specific software for visualizing the data. The whole process is completed shortly after data acquisition – a significant operational improvement over the previous setup. The faster turn-around time helps scientists make near real-time adjustments to their experiments.
 
slides icon Slides WECOBA04 [9.540 MB]  
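The streaming-discriminator stage of such a workflow can be pictured with a toy generator pipeline. The frame layout, threshold and sparse output format below are invented stand-ins for the firmware discriminator and the HDF5/Hadoop sink; this is only a sketch of the data-flow pattern, not the facility's code.

```python
# Illustrative generator pipeline: frames stream from a (simulated)
# detector through an on-the-fly discriminator before being persisted.
def detector_stream(frames):
    # stand-in for the camera: yields one frame (list of pixels) at a time
    for frame in frames:
        yield frame

def discriminate(stream, threshold):
    # keep only pixels above threshold, as (index, value) pairs --
    # the sparse form a photon-counting discriminator typically emits
    for frame in stream:
        yield [(i, v) for i, v in enumerate(frame) if v > threshold]

frames = [[0, 5, 1, 9], [2, 0, 7, 1]]
sparse = list(discriminate(detector_stream(frames), threshold=4))
```

Because each stage is a generator, frames are processed as they arrive instead of being buffered, which is the essential property of an on-the-fly pipeline.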
 
WECOCB03 Development of a Front-end Data-Acquisition System with a Camera Link FMC for High-Bandwidth X-Ray Imaging Detectors detector, interface, FPGA, synchrotron 1028
 
  • C. Saji, T. Ohata, T. Sugimoto, R. Tanaka, M. Yamaga
    JASRI/SPring-8, Hyogo-ken, Japan
  • T. Abe
    RIKEN SPring-8 Center, Innovative Light Sources Division, Hyogo, Japan
  • T. Kudo
    RIKEN SPring-8 Center, Sayo-cho, Sayo-gun, Hyogo, Japan
 
  X-ray imaging detectors are indispensable for synchrotron radiation experiments and are evolving toward larger pixel counts and higher frame rates to acquire more information about the samples. A novel detector with a data rate of up to 8 Gbps per sensor, SOPHIAS, is under development at the SACLA facility. We have therefore developed a new front-end DAQ system with a data rate beyond the present level. The system consists of an FPGA-based evaluation board and an FPGA mezzanine card (FMC). The FMC was adopted as the FPGA interface to support a variety of interfaces and to allow the use of COTS components. Since the data transmission performance of the FPGA board in combination with the FMCs was already evaluated at about 20 Gbps between boards, our choice of devices has the potential to meet the requirements of the SOPHIAS detector*. We built an FMC with a Camera Link (CL) interface to support the first phase of the SOPHIAS detector. Since almost all CL configurations are supported, the system handles various types of commercial cameras as well as the new detector. Moreover, the FMC has general-purpose inputs/outputs to satisfy various experimental requirements. We report the design of the new front-end DAQ system and the results of its evaluation.
* A Study of a Prototype DAQ System with over 10 Gbps Bandwidth for the SACLA X-Ray Experiments, C. Saji, T. Ohata, T. Sugimoto, R. Tanaka, and M. Yamaga, 2012 IEEE NSS and MIC, p.1619-p.1622
 
slides icon Slides WECOCB03 [0.980 MB]  
 
THCOAAB04 Synchrobots: Experiments with Telepresence and Teleoperated Mobile Robots in a Synchrotron Radiation Facility controls, radiation, synchrotron, synchrotron-radiation 1052
 
  • M. Pugliese, F. Billè, R. Borghes, A. Curri, D. Favretto, G. Kourousias, M. Prica, M. Turcinovich
    Elettra-Sincrotrone Trieste S.C.p.A., Basovizza, Italy
 
  Synchrobot is an autonomous mobile robot that supports the machine operators of Elettra (*), a synchrotron radiation facility, in tasks such as diagnostic and measurement campaigns, being capable of moving in the restricted area while the machine is running. In general, telepresence robots are mobile robot platforms capable of providing two-way audio and video communication. Many companies have recently entered the telepresence robot business. This paper describes our experience with tools like Synchrobot as well as commercially available telepresence robots. Based on this experience, we present a set of guidelines for using and integrating telepresence robots in the daily life of a research infrastructure and explore potential future development scenarios.
* http://www.elettra.eu
 
slides icon Slides THCOAAB04 [9.348 MB]  
 
THCOAAB05 Rapid Application Development Using Web 2.0 Technologies framework, software, target, interface 1058
 
  • S.M. Reisdorf, B.A. Conrad, D.A. Potter, P.D. Reisdorf
    LLNL, Livermore, California, USA
 
  Funding: * This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. #LLNL-ABS-632813
The National Ignition Facility (NIF) strives to deliver reliable, cost-effective applications that can easily adapt to the changing business needs of the organization. We use HTML5, RESTful web services, AJAX, jQuery, and JSF 2.0 to meet these goals. WebGL and HTML5 Canvas technologies are being used to provide 3D and 2D data visualization applications. jQuery’s rich set of widgets, along with technologies such as Highcharts and DataTables, allows for creating interactive charts, graphs, and tables. PrimeFaces enables us to utilize much of this AJAX and jQuery functionality while leveraging our existing knowledge base in the JSF framework. RESTful web services have replaced the traditional SOAP model, allowing us to easily create and test web services. Additionally, new software based on Node.js and WebSocket technology is currently being developed, which will augment the capabilities of our existing applications to provide a level of interaction with our users that was previously unfeasible. These Web 2.0-era technologies have allowed NIF to build more robust and responsive applications. Their benefits and details of their use will be discussed.
 
slides icon Slides THCOAAB05 [0.832 MB]  
 
THCOAAB09 Olog and Control System Studio: A Rich Logging Environment controls, interface, operation, framework 1074
 
  • K. Shroff, A. Arkilic, L.R. Dalesio
    BNL, Upton, Long Island, New York, USA
  • E.T. Berryman
    NSCL, East Lansing, Michigan, USA
  • D. Dezman
    Cosylab, Ljubljana, Slovenia
 
  Leveraging the features provided by Olog and Control System Studio, we have developed a logging environment which allows for the creation of rich log entries. In addition to text and snapshot images, these entries store context, which can comprise information either from the control system (process variables) or from other services (directory, ticketing, archiver). Using this context, the client tools give the user the ability to launch various applications with their state initialized to match the state at the time the entry was created.  
slides icon Slides THCOAAB09 [1.673 MB]  
 
THPPC015 Managing Infrastructure in the ALICE Detector Control System controls, detector, hardware, software 1122
 
  • M. Lechman, A. Augustinus, P.M. Bond, P.Ch. Chochula, A.N. Kurepin, O. Pinazza, P. Rosinský
    CERN, Geneva, Switzerland
  • A.N. Kurepin
    RAS/INR, Moscow, Russia
  • O. Pinazza
    INFN-Bologna, Bologna, Italy
 
  The main role of the ALICE Detector Control System (DCS) is to ensure safe and efficient operation of one of the large high energy physics experiments at CERN. The DCS design is based on the commercial SCADA software package WinCC Open Architecture. The system includes over 270 VME and power supply crates, 1200 network devices, over 1,000,000 monitored parameters as well as numerous pieces of front-end and readout electronics. This paper summarizes the computer infrastructure of the DCS as well as the hardware and software components that are used by WinCC OA for communication with electronics devices. The evolution of these components and experience gained from the first years of their production use are also described. We also present tools for the monitoring of the DCS infrastructure and supporting its administration together with plans for their improvement during the first long technical stop in LHC operation.  
poster icon Poster THPPC015 [1.627 MB]  
 
THPPC034 A Novel Analysis of Time Evolving Betatron Tune betatron, injection, extraction, operation 1157
 
  • S. Yamada
    J-PARC, KEK & JAEA, Ibaraki-ken, Japan
 
  The J-PARC Main Ring (MR) is a high-intensity proton synchrotron which has been delivering beam to the T2K neutrino experiment and to hadron experiments since 2009. It is essential to measure the time variation of the betatron tune accurately throughout the cycle, from beam injection at 3 GeV to extraction at 30 GeV. The tune measurement system of the J-PARC MR consists of a stripline kicker, beam position monitors, and a waveform digitizer. The betatron tune appears as sidebands of the harmonics of the revolution frequency in the turn-by-turn beam position spectrum. Excellent measurement accuracy and high immunity against noise were achieved by exploiting a wide-band spectrum covering multiple harmonics.  
poster icon Poster THPPC034 [0.707 MB]  
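The analysis rests on a standard idea: the fractional part of the betatron tune appears as the dominant frequency in the turn-by-turn position spectrum. A minimal illustration with a plain DFT on synthetic data (this is not the J-PARC analysis code, which fits sidebands across multiple revolution harmonics):

```python
import cmath, math

# Toy reconstruction of a fractional betatron tune from turn-by-turn
# beam positions via a discrete Fourier transform.
def fractional_tune(positions):
    n = len(positions)
    best_q, best_mag = 0.0, -1.0
    for k in range(1, n // 2):          # scan fractional frequencies k/n
        amp = sum(x * cmath.exp(-2j * math.pi * k * i / n)
                  for i, x in enumerate(positions))
        if abs(amp) > best_mag:
            best_mag, best_q = abs(amp), k / n
    return best_q

# synthetic turn-by-turn data with a known fractional tune of 0.25
q_true = 0.25
turns = [math.cos(2 * math.pi * q_true * i) for i in range(64)]
```

With 64 recorded turns the frequency resolution of this naive scan is 1/64; the paper's wide-band multi-harmonic approach exists precisely to do better than a single-window DFT like this.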
 
THPPC081 High-level Functions for Modern Control Systems: A Practical Example controls, framework, status, monitoring 1262
 
  • F. Varela, W.J. Fabian, P. Golonka, M. Gonzalez-Berges, L.B. Petrova
    CERN, Geneva, Switzerland
 
  Modern control systems make wide use of different IT technologies and complex computational techniques to render the gathered data accessible from different locations and devices, as well as to understand and even predict the behavior of the systems under supervision. The Industrial Controls Engineering (ICE) Group of the EN Department develops and maintains more than 150 vital controls applications for a number of strategic sectors at CERN, such as the accelerator, the experiments and the central infrastructure systems. All these applications are supervised by MOON, a very successful central monitoring and configuration tool developed by the group that has been in operation 24/7 since 2011. The basic functionality of MOON was presented in previous editions of this conference series. In this contribution we focus on the high-level functionality recently added to the tool to grant multiple users access to the gathered data through the web and mobile devices, as well as a first attempt at data analytics with the goal of identifying useful information to support developers in optimizing their systems and to help in the daily operation of the systems.  
 
THPPC082 Monitoring of the National Ignition Facility Integrated Computer Control System controls, database, framework, interface 1266
 
  • J.M. Fisher, M. Arrowsmith, E.A. Stout
    LLNL, Livermore, California, USA
 
  Funding: This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. #LLNL-ABS-632812
The Integrated Computer Control System (ICCS) used by the National Ignition Facility (NIF) provides comprehensive status and control capabilities for operating approximately 100,000 devices through 2,600 processes located on 1,800 servers, front-end processors and embedded controllers. Understanding the behavior of complex, large-scale, operational control software, and improving system reliability and availability, is a critical maintenance activity. In this paper we describe the ICCS diagnostic framework, with tunable detail levels and automatic rollovers, and its use in analyzing system behavior. ICCS recently added Splunk as a tool for improved archiving and analysis of these log files (about 20 GB, or 35 million logs, per day). Splunk now continuously captures all ICCS log files for both real-time examination and exploration of trends. Its powerful search query language and user interface allow interactive exploration of log data to visualize specific indicators of system performance, assist in problem analysis, and provide instantaneous notification of specific system behaviors.
 
poster icon Poster THPPC082 [4.693 MB]  
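The kind of parse-and-aggregate query delegated to Splunk can be sketched in a few lines of Python over an invented log format (the real ICCS log schema and the actual Splunk queries are not given in the paper; the pattern and field names below are illustrative only):

```python
import re
from collections import Counter

# Minimal stand-in for a "count events per severity" search over
# structured log lines of the (invented) form "TIME LEVEL PROCESS: MSG".
LOG_RE = re.compile(r"^(?P<time>\S+) (?P<level>\w+) (?P<proc>\S+): (?P<msg>.*)$")

def count_by_level(lines):
    counts = Counter()
    for line in lines:
        m = LOG_RE.match(line)
        if m:
            counts[m.group("level")] += 1
    return counts

logs = [
    "12:00:01 INFO shot_director: cycle start",
    "12:00:02 WARN fep_17: slow response",
    "12:00:03 INFO shot_director: cycle done",
]
```

At 35 million log lines per day the value of a dedicated indexer is exactly that such aggregations run interactively instead of as ad-hoc scripts like this one.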
 
THPPC090 Picoseconds Timing System timing, laser, controls, diagnostics 1285
 
  • D. Monnier-Bourdin, B. Riondet
    GreenField Technology, Breuillet, France
  • S. Perez
    CEA, Arpajon, France
 
  The instrumentation of large physics experiments needs to be synchronized down to a few picoseconds. These experiments require different sampling rates, for multi-shot or single-shot operation, on each instrument distributed over a large area. Greenfield Technology presents a commercial solution with a picosecond timing system built around a central master oscillator which delivers a serial data stream over an optical network to synchronize local multi-channel delay generators. This system is able to provide several hundred trigger pulses with 1 ps resolution and a jitter of less than 15 ps, distributed over an area of up to 10,000 m². The various qualities of this timing system are presented with measurements and functions; it has already been implemented in French facilities (the Laser MegaJoule prototype – Ligne d’Intégration Laser –, petawatt laser applications and the synchrotron Soleil). With different local delay generator form factors (box, 19” rack, cPCI or PXI board) and many possible trigger pulse shapes, this system is well suited to synchronizing synchrotrons, high-energy lasers and other big physics experiments.  
poster icon Poster THPPC090 [1.824 MB]  
 
THPPC107 Timing and Synchronization at Beam Line Experiments hardware, timing, EPICS, controls 1311
 
  • H. Blaettler Pruchova, T. Korhonen
    PSI, Villigen PSI, Switzerland
 
  Some experiment concepts require a control system with the individual components working synchronously. At PSI the control system for X-ray experiments is distributed over several VME crates, EPICS soft IOC servers and Linux nodes, which need to be synchronized. A timing network using fibre optics, separate from the standard TCP/IP network, is used for distributing time stamps and timing events. The synchronization of all control components and data acquisition systems has to be done automatically with sufficient accuracy, and is achieved by event distribution and/or by synchronization through I/O trigger devices. Data acquisition is synchronized by hardware triggers, produced either by sequences in the event generator or by motors in the case of on-the-fly scans. Detectors like EIGER, with an acquisition rate close to 20 kHz, fast BPMs connected to current-measuring devices like picoammeters with sampling frequencies up to 26 kHz, and photodiodes are integrated to measure beam properties and radiation exposures. The measured data are stored on various file servers situated within one beamline subnetwork. In this paper we describe a concept for implementing such a system.  
 
THPPC116 Temperature Precise Control in a Large Scale Helium Refrigerator controls, cryogenics, operation, simulation 1331
 
  • J.H. Wu, Q. Li, W. Pan
    TIPC, Beijing, People's Republic of China
 
  Precise control of the operating load temperature is a key requirement for the application of a large-scale helium refrigerator. Strict control logic and time sequencing are necessary in the process, which involves main components including the load, turbine expanders and compressors. However, the control sequence may become disordered due to improper PID parameter settings and logic equations, causing temperature oscillation, load augmentation, compressor protection trips, cryogenic valve malfunction, etc. Combining experimental studies and simulation models, the effect of PID parameter adjustment on the control process is presented in detail. The methods and rules for general parameter settings are revealed and suitable control logic equations are derived for temperature stabilization.  
poster icon Poster THPPC116 [0.584 MB]  
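The sensitivity of temperature stability to PID settings can be illustrated with a toy discrete PID loop driving a trivial first-order load model. All gains, time steps and plant constants below are invented for illustration; the paper's actual models of the refrigerator process are far more detailed.

```python
# Hedged sketch of a discrete PID controller on a first-order
# "load temperature" model (invented constants, not the paper's plant).
def pid_step(setpoint, measured, state, kp, ki, kd, dt):
    error = setpoint - measured
    state["integral"] += error * dt
    derivative = (error - state["prev_error"]) / dt
    state["prev_error"] = error
    return kp * error + ki * state["integral"] + kd * derivative

def simulate(setpoint, t0, steps, kp=2.0, ki=0.5, kd=0.1, dt=0.1):
    temp = t0
    state = {"integral": 0.0, "prev_error": setpoint - t0}
    for _ in range(steps):
        u = pid_step(setpoint, temp, state, kp, ki, kd, dt)
        temp += (u - 0.2 * (temp - t0)) * dt   # crude plant: heat leak to t0
    return temp

# cool a load from ambient (300 K) toward a 4.5 K setpoint
final = simulate(setpoint=4.5, t0=300.0, steps=500)
```

With these gains the loop settles on the setpoint; raising kp or ki too far in the same model produces exactly the kind of oscillation the abstract warns about.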
 
THPPC123 Online Luminosity Optimization at the LHC luminosity, controls, target, proton 1351
 
  • F. Follin, R. Alemany-Fernandez, R. Jacobsson
    CERN, Geneva, Switzerland
 
  The online luminosity control of the LHC experiments consists of an automatic slow real-time feedback system controlled by experiment-specific software that communicates directly with an LHC application. The LHC application drives a set of corrector magnets to adjust the transverse beam overlap at the interaction point in order to keep the instantaneous luminosity aligned to the target luminosity provided by the experiment. This solution was proposed by the LHCb experiment and first tested in July 2010. It has been in routine operation in LHCb during the first two years of physics luminosity data taking, 2011 and 2012. It was also adopted for the ALICE experiment during 2011. The experience provides an important basis for the potential future need of levelling the luminosity in all the LHC experiments. This paper describes the implementation of the LHC application controlling the luminosity at the experiments and the information exchange that allows this automatic control.  
poster icon Poster THPPC123 [1.344 MB]  
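The levelling principle, steering the transverse separation so the delivered luminosity tracks a target while the head-on luminosity decays, can be sketched as follows. The Gaussian-overlap dependence of luminosity on separation is standard, but the numbers, decay curve and controller gain are all invented for illustration and are not the LHC's.

```python
import math

# Schematic luminosity-levelling loop (invented constants throughout).
def luminosity(l_headon, separation, sigma=1.0):
    # standard Gaussian overlap: L falls off with beam separation
    return l_headon * math.exp(-(separation / sigma) ** 2 / 4)

def level(l_headon_series, target, gain=0.5):
    sep, delivered = 2.0, []
    for l0 in l_headon_series:
        l = luminosity(l0, sep)
        # steer the separation to close the relative gap to the target
        sep = max(0.0, sep - gain * (target - l) / max(l, 1e-9))
        delivered.append(l)
    return delivered

# head-on luminosity decays from 4x the target down below it
series = [4.0 * math.exp(-t / 50.0) for t in range(200)]
out = level(series, target=1.0)
```

While the head-on luminosity exceeds the target, the loop holds the delivered value near the target by keeping the beams partially separated; once the head-on value drops below the target, the separation collapses to zero and the delivered luminosity simply follows the decay.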
 
THCOBA02 Unidirectional Security Gateways: Stronger than Firewalls network, controls, hardware, software 1412
 
  • A.F. Ginter
    Waterfall Security Solutions, New York, USA
 
  In the last half decade, application integration via Unidirectional Security Gateways has emerged as a secure alternative to firewalls. The gateways are deployed extensively to protect the safety and reliability of industrial control systems in nuclear generators, conventional generators and a wide variety of other critical infrastructures. Unidirectional Gateways are a combination of hardware and software. The hardware allows information to leave a protected industrial network, and physically prevents any signal whatsoever from returning to the protected network. As a result, the hardware blocks all online attacks originating on external networks. The software replicates industrial servers to external networks, where the information in those servers is available to end users and to external applications. The software does not proxy bi-directional protocols. Join us to learn how this secure alternative to firewalls works, where and how the technology is routinely deployed, and how all of the usual remote support, data integrity and other apparently bi-directional deployment issues are routinely resolved.  
slides icon Slides THCOBA02 [0.721 MB]  
 
THCOBA05 Control System Virtualization for the LHCb Online System controls, network, operation, hardware 1419
 
  • E. Bonaccorsi, L. Granado Cardoso, N. Neufeld
    CERN, Geneva, Switzerland
  • F. Sborzacchi
    INFN/LNF, Frascati (Roma), Italy
 
  Virtualization provides many benefits, such as more efficient resource utilization, lower power consumption, better management through centralized control and higher availability. It can also save time for IT projects by eliminating dedicated hardware procurement and providing standard software configurations. In view of this, virtualization is very attractive for mission-critical projects like the experiment control system (ECS) of the large LHCb experiment at CERN. This paper describes our implementation of the control system infrastructure on general-purpose server hardware based on Linux and the RHEV enterprise clustering platform. It covers the methods used, our experiences, and the knowledge acquired in evaluating the performance of the setup using test systems, as well as the constraints and limitations we encountered. We compare these with parameters measured under typical load conditions in a real production system. We also present the specific measures taken to guarantee optimal performance for the SCADA system (WinCC OA), which is the backbone of our control system.  
slides icon Slides THCOBA05 [1.065 MB]  
 
THCOCB02 The Role of Data Driven Models in Optimizing the Operation of the National Ignition Facility laser, target, operation, simulation 1426
 
  • K.P. McCandless, J.-M.G. Di Nicola, S.N. Dixit, E. Feigenbaum, R.K. House, K.S. Jancaitis, K.N. LaFortune, B.J. MacGowan, C.D. Orth, R.A. Sacks, M.J. Shaw, C.C. Widmayer, S.T. Yang
    LLNL, Livermore, California, USA
 
  Funding: * This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. #LLNL-ABS-633233
The Virtual Beam Line (VBL) code is essential to operate, maintain and validate the design of laser components to meet the performance goals of Lawrence Livermore National Laboratory’s National Ignition Facility (NIF). The NIF relies upon the Laser Performance Operations Model (LPOM), whose physics engine is VBL, to automate the setup of the laser by simulating the laser energetics of the as-built system. VBL simulates paraxial beam propagation, amplification, aberration and modulation, nonlinear self-focusing and focal behavior. Each of the NIF’s 192 beam lines is modeled in parallel on the LPOM Linux compute cluster during shot setup and validation. NIF achieved a record 1.8 MJ shot in July 2012, and LPOM (with VBL) was key to achieving the requested pulse shape. We discuss examples of how the VBL physics code is used to model laser phenomena and operate the NIF laser system.
 
slides icon Slides THCOCB02 [4.589 MB]  
 
THCOCB05 The LHCb Online Luminosity Monitoring and Control luminosity, controls, detector, target 1438
 
  • R. Jacobsson, R. Alemany-Fernandez, F. Follin
    CERN, Geneva, Switzerland
 
  The LHCb experiment searches for New Physics through precision measurements in heavy flavour physics. The optimization of the data taking conditions relies on accurate monitoring of the instantaneous luminosity, and many physics measurements rely on accurate knowledge of the integrated luminosity. Most of the measurements have potential systematic effects associated with pileup and changing running conditions. To cope with these while aiming at maximising the collected luminosity, automatic control of the LHCb luminosity was put into operation. It consists of a real-time feedback system controlled from the LHCb online system, which communicates directly with an LHC application that in turn adjusts the beam overlap at the interaction point. It was proposed and tested in July 2010 and has been in routine operation during 2011-2012. As a result, LHCb has been operating at well over four times the design pileup, and 95% of the integrated luminosity has been recorded within 3% of the desired luminosity. This paper motivates and describes the implementation of, and the experience with, the online luminosity monitoring and control, including the mechanisms used to perform the luminosity calibrations.  
slides icon Slides THCOCB05 [1.368 MB]  
 
THCOCA03 High-Precision Timing of Gated X-Ray Imagers at the National Ignition Facility timing, target, laser, detector 1449
 
  • S.M. Glenn, P.M. Bell, L.R. Benedetti, M.W. Bowers, D.K. Bradley, B.P. Golick, J.P. Holder, D.H. Kalantar, S.F. Khan, N. Simanovskaia
    LLNL, Livermore, California, USA
 
  Funding: This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. #LLNL-ABS-633013
The National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory is a stadium-sized facility that contains a 192-beam, 1.8-megajoule, 500-terawatt ultraviolet laser system together with a 10-meter diameter target chamber. We describe techniques used to synchronize data acquired by gated x-ray imagers with the laser beams at NIF. Synchronization is achieved by collecting data from multiple beam groups with spatial and temporal separation in a single NIF shot. By optimizing the experimental setup and data analysis, repeatable measurements of 15 ps or better have been achieved. This demonstrates that the facility timing system, laser, and target diagnostics are highly stable over year-long time scales.
 
slides icon Slides THCOCA03 [1.182 MB]  
 
FRCOAAB01 CSS Scan System interface, controls, EPICS, software 1461
 
  • K.-U. Kasemir, X.H. Chen
    ORNL, Oak Ridge, Tennessee, USA
  • E.T. Berryman
    NSCL, East Lansing, Michigan, USA
 
  Funding: SNS is managed by UT-Battelle, LLC, under contract DE-AC05-00OR22725 for the U.S. Department of Energy
Automation of beam line experiments requires more flexibility than the control of an accelerator. The sample environment devices to control as well as requirements for their operation can change daily. Tools that allow stable automation of an accelerator are not practical in such a dynamic environment. On the other hand, falling back to generic scripts opens too much room for error. The Scan System offers an intermediate approach. Scans can be submitted in numerous ways, from pre-configured operator interface panels, graphical scan editors, scripts, the command line, or a web interface. At the same time, each scan is assembled from a well-defined set of scan commands, each one with robust features like error checking, time-out handling and read-back verification. Integrated into Control System Studio (CSS), scans can be monitored, paused, modified or aborted as needed. We present details of the implementation and first usage experience.
 
slides icon Slides FRCOAAB01 [1.853 MB]  
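The robustness idea behind the scan commands, writing a value and then verifying it via a read-back with a timeout, rather than fire-and-forget writes, can be illustrated with a toy set-and-verify command. The device model and API below are invented; the real Scan System operates on EPICS process variables from within CSS.

```python
import time

# Toy device: writes take effect immediately and can be read back.
# A real device would have latency, which is why the verify loop
# below polls until a deadline instead of checking once.
class Device:
    def __init__(self):
        self.values = {}
    def write(self, pv, value):
        self.values[pv] = value
    def read(self, pv):
        return self.values.get(pv)

def set_command(dev, pv, value, timeout=1.0, tolerance=1e-6):
    """Write `value` to `pv`, then verify the read-back agrees
    within `tolerance` before `timeout` seconds elapse."""
    dev.write(pv, value)
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        rb = dev.read(pv)
        if rb is not None and abs(rb - value) <= tolerance:
            return True
        time.sleep(0.01)
    raise TimeoutError(f"{pv} did not reach {value}")

dev = Device()
ok = set_command(dev, "motor:x", 1.5)
```

Building a scan as a sequence of such self-checking commands is what makes it safe to pause, modify or abort it midway, as the abstract describes.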
 
FRCOAAB03 Experiment Control and Analysis for High-Resolution Tomography controls, software, detector, EPICS 1469
 
  • N. Schwarz, F. De Carlo, A. Glowacki, J.P. Hammonds, F. Khan, K. Yue
    ANL, Argonne, USA
 
  Funding: Work supported by U.S. Department of Energy, Office of Science, under Contract No. DE-AC02-06CH11357.
X-ray Computed Tomography (XCT) is a powerful technique for imaging 3D structures at the micro- and nano-levels. Recent upgrades to tomography beamlines at the APS have enabled imaging at resolutions up to 20 nm at increased pixel counts and speeds. As detector resolution and speed increase, the amount of data that must be transferred and analyzed also increases. This coupled with growing experiment complexity drives the need for software to automate data acquisition and processing. We present an experiment control and data processing system for tomography beamlines that helps address this concern. The software, written in C++ using Qt, interfaces with EPICS for beamline control and provides live and offline data viewing, basic image manipulation features, and scan sequencing that coordinates EPICS-enabled apparatus. Post acquisition, the software triggers a workflow pipeline, written using ActiveMQ, that transfers data from the detector computer to an analysis computer, and launches a reconstruction process. Experiment metadata and provenance information is stored along with raw and analyzed data in a single HDF5 file.
 
slides icon Slides FRCOAAB03 [1.707 MB]  
 
FRCOAAB04 Data Driven Campaign Management at the National Ignition Facility diagnostics, target, interface, database 1473
 
  • D.E. Speck, B.A. Conrad, S.R. Hahn, P.D. Reisdorf, S.M. Reisdorf
    LLNL, Livermore, California, USA
 
  Funding: * This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. #LLNL-ABS-633255
The Campaign Management Tool Suite (CMT) provides tools for establishing the experimental goals, obtaining reviews and approvals, and ensuring readiness for a NIF experiment. Over the last two years, CMT has significantly increased the number of diagnostics that it supports, to around 50. Meeting this ever-increasing demand for new functionality has resulted in a design whereby more and more of the functionality can be specified in data rather than coded directly in Java. To support this, tools have been written that manage various aspects of the data and handle the potential inconsistencies that can arise from a data-driven paradigm. For example: drop-down menus are specified in the Part and Lists Manager; the Shot Setup reports that list the configurations for diagnostics are specified in the database; the review tool Approval Manager has a rules engine that can be changed without a software deployment; various template managers provide predefined entry of hundreds of parameters; and finally, a stale-data tool validates that experiments contain valid data items. The trade-offs, benefits and issues of adopting and implementing this data-driven philosophy will be presented.
 
slides icon Slides FRCOAAB04 [0.929 MB]  
 
FRCOAAB05 JOGL Live Rendering Techniques in Data Acquisition Systems GPU, detector, real-time, controls 1477
 
  • C. Cocho, F. Cecillon, A. Elaazzouzi, Y. Le Goc, J. Locatelli, P. Mutti, H. Ortiz, J. Ratel
    ILL, Grenoble, France
 
  One of the major challenges in instrument control is to provide a fast and scientifically correct representation of the data collected by the detector through the data acquisition system. Despite the availability nowadays of a large number of excellent libraries for off-line data plotting, real-time 2D and 3D data rendering still suffers from performance issues related mainly to the amount of information to be displayed. This paper describes new methods of image generation (rendering), based on the JOGL library, used for data acquisition at the Institut Laue-Langevin (ILL) on instruments that require either high image resolution or a large number of images rendered at the same time. These new methods involve the definition of data buffers and the use of GPU memory, a technique known as Vertex Buffer Objects (VBO). The implementation of different rendering modes, on-screen and off-screen, will also be detailed.  
slides icon Slides FRCOAAB05 [1.422 MB]  
 
FRCOAAB06 A Common Software Framework for FEL Data Acquisition and Experiment Management at FERMI FEL, TANGO, framework, data-acquisition 1481
 
  • R. Borghes, V. Chenda, A. Curri, G. Kourousias, M. Lonza, M. Prica, M. Pugliese
    Elettra-Sincrotrone Trieste S.C.p.A., Basovizza, Italy
  • G. Passos
    STFC/RAL/ISIS, Chilton, Didcot, Oxon, United Kingdom
 
  Funding: Work supported in part by the Italian Ministry of University and Research under grants FIRB-RBAP045JF2 and FIRB-RBAP06AWK3
After installation and commissioning, the Free Electron Laser facility FERMI is now open to users. As of December 2012, three experimental stations dedicated to different scientific areas are available for user research proposals: Low Density Matter (LDM), Elastic & Inelastic Scattering (EIS), and Diffraction & Projection Imaging (DiProI). A flexible and highly configurable common framework has been developed and successfully deployed for experiment management and shot-by-shot data acquisition. This paper describes the software architecture behind all the experiments performed so far: the combination of the EXECUTER script engine with a specialized data acquisition device (FERMIDAQ) based on TANGO. Finally, experimental applications, performance results and future developments are presented and discussed.
 
slides icon Slides FRCOAAB06 [5.896 MB]  
 
FRCOAAB07 Operational Experience with the ALICE Detector Control System detector, controls, operation, status 1485
 
  • P.Ch. Chochula, A. Augustinus, A.N. Kurepin, M. Lechman, O. Pinazza, P. Rosinský
    CERN, Geneva, Switzerland
  • A.N. Kurepin
    RAS/INR, Moscow, Russia
  • O. Pinazza
    INFN-Bologna, Bologna, Italy
 
  The first LHC run period, lasting four years, brought exciting physics results and new insight into the mysteries of matter. One of the key components in this achievement were the detectors, which provided unprecedented amounts of data of the highest quality. The control systems, responsible for their smooth and safe operation, played a key role in this success. The design of the ALICE Detector Control System (DCS) started more than 12 years ago. A high level of standardization and a pragmatic design led to a reliable and stable system, which allowed for efficient experiment operation. In this presentation we summarize the overall architectural principles of the system and the standardized components and procedures. The original expectations and plans are compared with the final design. Focus is given to the operational procedures, which evolved with time. We explain how a single operator can control and protect a complex device like ALICE, with millions of readout channels and several thousand control devices and boards. We explain what we learned during the first years of LHC operation and which improvements will be implemented to provide excellent DCS service during the coming years.  
slides icon Slides FRCOAAB07 [7.856 MB]