Keyword: detector
Paper Title Other Keywords Page
MOBAUST02 The ATLAS Detector Control System controls, interface, experiment, monitoring 5
 
  • S. Schlenker, S. Arfaoui, S. Franz, O. Gutzwiller, C.A. Tsarouchas
    CERN, Geneva, Switzerland
  • G. Aielli, F. Marchese
    Università di Roma II Tor Vergata, Roma, Italy
  • G. Arabidze
    MSU, East Lansing, Michigan, USA
  • E. Banaś, Z. Hajduk, J. Olszowska, E. Stanecka
    IFJ-PAN, Kraków, Poland
  • T. Barillari, J. Habring, J. Huber
    MPI, Muenchen, Germany
  • M. Bindi, A. Polini
    INFN-Bologna, Bologna, Italy
  • H. Boterenbrood, R.G.K. Hart
    NIKHEF, Amsterdam, The Netherlands
  • H. Braun, D. Hirschbuehl, S. Kersten, K. Lantzsch
    Bergische Universität Wuppertal, Wuppertal, Germany
  • R. Brenner
    Uppsala University, Uppsala, Sweden
  • D. Caforio, C. Sbarra
    Bologna University, Bologna, Italy
  • S. Chekulaev
    TRIUMF, Canada's National Laboratory for Particle and Nuclear Physics, Vancouver, Canada
  • S. D'Auria
    University of Glasgow, Glasgow, United Kingdom
  • M. Deliyergiyev, I. Mandić
    JSI, Ljubljana, Slovenia
  • E. Ertel
    Johannes Gutenberg University Mainz, Institut für Physik, Mainz, Germany
  • V. Filimonov, V. Khomutnikov, S. Kovalenko
    PNPI, Gatchina, Leningrad District, Russia
  • V. Grassi
    SBU, Stony Brook, New York, USA
  • J. Hartert, S. Zimmermann
    Albert-Ludwig Universität Freiburg, Freiburg, Germany
  • D. Hoffmann
    CPPM, Marseille, France
  • G. Iakovidis, K. Karakostas, S. Leontsinis, E. Mountricha
    National Technical University of Athens, Athens, Greece
  • P. Lafarguette
    Université Blaise Pascal, Clermont-Ferrand, France
  • F. Marques Vinagre, G. Ribeiro, H.F. Santos
    LIP, Lisboa, Portugal
  • T. Martin, P.D. Thompson
    Birmingham University, Birmingham, United Kingdom
  • B. Mindur
    AGH University of Science and Technology, Krakow, Poland
  • J. Mitrevski
    SCIPP, Santa Cruz, California, USA
  • K. Nagai
    University of Tsukuba, Graduate School of Pure and Applied Sciences, Tsukuba, Ibaraki, Japan
  • S. Nemecek
    Czech Republic Academy of Sciences, Institute of Physics, Prague, Czech Republic
  • D. Oliveira Damazio, A. Poblaguev
    BNL, Upton, Long Island, New York, USA
  • P.W. Phillips
    STFC/RAL, Chilton, Didcot, Oxon, United Kingdom
  • A. Robichaud-Veronneau
    DPNC, Genève, Switzerland
  • A. Talyshev
    BINP, Novosibirsk, Russia
  • G.F. Tartarelli
    Università degli Studi di Milano & INFN, Milano, Italy
  • B.M. Wynne
    Edinburgh University, Edinburgh, United Kingdom
 
  The ATLAS experiment is one of the multi-purpose experiments at the Large Hadron Collider (LHC), constructed to study elementary particle interactions in collisions of high-energy proton beams. Twelve different sub-detectors as well as the common experimental infrastructure are supervised by the Detector Control System (DCS). The DCS enables equipment supervision of all ATLAS sub-detectors by using a system of 140 server machines running the industrial SCADA product PVSS. This highly distributed system reads, processes and archives on the order of 10^6 operational parameters. Higher-level control system layers based on the CERN JCOP framework allow for automatic control procedures and efficient error recognition and handling, manage the communication with external control systems such as the LHC controls, and provide a synchronization mechanism with the ATLAS physics data acquisition system. A web-based monitoring system allows operators to access the DCS operator interface views and to browse the conditions data archive worldwide with high availability. This contribution first describes the status of the ATLAS DCS and the experience gained during LHC commissioning and the first physics data-taking period, and then outlines the future evolution and maintenance constraints for the coming years and the LHC high-luminosity upgrades.
Slides MOBAUST02 [6.379 MB]
 
MOBAUST06 The LHCb Experiment Control System: on the Path to Full Automation controls, experiment, framework, operation 20
 
  • C. Gaspar, F. Alessio, L.G. Cardoso, M. Frank, J.C. Garnier, R. Jacobsson, B. Jost, N. Neufeld, R. Schwemmer, E. van Herwijnen
    CERN, Geneva, Switzerland
  • O. Callot
    LAL, Orsay, France
  • B. Franek
    STFC/RAL, Chilton, Didcot, Oxon, United Kingdom
 
  LHCb is a large experiment at the LHC accelerator. The experiment control system is in charge of the configuration, control and monitoring of the different sub-detectors and of all areas of the online system: the Detector Control System (DCS), covering sub-detector voltages, cooling, temperatures, etc.; the Data Acquisition System (DAQ) and the Run Control; the High Level Trigger (HLT), a farm of around 1500 PCs running trigger algorithms; etc. The building blocks of the control system are based on the PVSS SCADA system, complemented by a control framework developed in common for the four LHC experiments. This framework includes an "expert system"-like tool called SMI++, which we use for the system automation. The full control system runs distributed over around 160 PCs and is logically organized in a hierarchical structure, each level being capable of supervising and synchronizing the objects below. The experiment's operations are now almost completely automated, driven by a top-level object called Big-Brother, which pilots all the experiment's standard procedures and the most common error-recovery procedures. Examples of automated procedures include powering the detector, acting on the Run Control (Start/Stop Run, etc.) and moving the vertex detector in/out of the beam, all driven by the state of the accelerator, as well as recovering from errors in the HLT farm. The architecture, tools and mechanisms used for the implementation as well as some operational examples will be shown.
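  To make the hierarchy described above concrete, here is a minimal sketch, assuming a toy Python control tree rather than the real PVSS/SMI++ objects: a parent node propagates commands to its children and derives its own state from theirs, the same pattern that Big-Brother applies at the top of the LHCb tree. Class, state and command names are illustrative assumptions only.
```python
# Minimal sketch of a hierarchical control tree in the spirit of the LHCb ECS:
# parents propagate commands down and summarize child states.  Names and
# states are illustrative; the real system is built from PVSS/SMI++ objects.
class ControlNode:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []
        self.state = "NOT_READY"

    def command(self, cmd):
        """Propagate a command (e.g. 'Configure', 'Start') to all children."""
        for child in self.children:
            child.command(cmd)
        if not self.children:  # leaf node: this is where hardware would be acted on
            self.state = {"Configure": "READY", "Start": "RUNNING"}.get(cmd, self.state)

    def summary_state(self):
        """A parent's state is the 'worst' state reported by its children."""
        if not self.children:
            return self.state
        order = ["ERROR", "NOT_READY", "READY", "RUNNING"]
        return min((c.summary_state() for c in self.children), key=order.index)

# Example: a Big-Brother-like top node supervising DCS and DAQ branches.
daq = ControlNode("DAQ", [ControlNode("HLT"), ControlNode("RunControl")])
dcs = ControlNode("DCS", [ControlNode("HV"), ControlNode("LV")])
top = ControlNode("LHCb", [dcs, daq])
top.command("Configure")
print(top.summary_state())   # -> READY once every leaf reports READY
```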
Slides MOBAUST06 [1.451 MB]
 
MOMAU003 The Computing Model of the Experiments at PETRA III controls, TANGO, experiment, interface 44
 
  • T. Kracht, M. Alfaro, M. Flemming, J. Grabitz, T. Núñez, A. Rothkirch, F. Schlünzen, E. Wintersberger, P. van der Reest
    DESY, Hamburg, Germany
 
  The PETRA storage ring at DESY in Hamburg has been refurbished to become a highly brilliant synchrotron radiation source (now named PETRA III). Commissioning of the beamlines started in 2009, user operation in 2010. In comparison with our DORIS beamlines, the PETRA III experiments have greater complexity, higher data rates and require an integrated system for data storage and archiving, data processing and data distribution. Tango [1] and Sardana [2] are the main components of our online control system. Both systems are developed by international collaborations. Tango serves as the backbone to operate all beamline components, certain storage ring devices and equipment from our users. Sardana is an abstraction layer on top of Tango. It standardizes the hardware access, organizes experimental procedures, has a command line interface and provides us with widgets for graphical user interfaces. Other clients like Spectra, which was written for DORIS, interact with Tango or Sardana. Modern 2D detectors create large data volumes. At PETRA III all data are transferred to an online file server which is hosted by the DESY computer center. Near-real-time analysis and reconstruction steps are executed on a CPU farm. A portal for remote data access is in preparation. Data archiving is done by dCache [3]. An offline file server has been installed for further analysis and in-house data storage.
[1] http://www.tango-controls.org
[2] http://computing.cells.es/services/collaborations/sardana
[3] http://www-dcache.desy.de
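  As an illustration of the Tango layer that Sardana builds on, the sketch below uses the PyTango client bindings to read and move a motor-like device. The device name, attribute names and polling loop are placeholder assumptions, not actual PETRA III device names or DESY code.
```python
# Hedged sketch of a Tango client call of the kind Sardana issues internally.
# The device name "p09/motor/exp.01" and the "Position" attribute are
# placeholders; the actual PETRA III device tree may differ.
import time
import tango  # PyTango client bindings

motor = tango.DeviceProxy("p09/motor/exp.01")
print(motor.state())                              # current Tango device state
print(motor.read_attribute("Position").value)     # read an attribute

# Move the motor and poll until it leaves the MOVING state.
motor.write_attribute("Position", 12.5)
while motor.state() == tango.DevState.MOVING:
    time.sleep(0.1)
```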
 
Slides MOMAU003 [0.347 MB]
Poster MOMAU003 [0.563 MB]
 
MOPKN015 Managing Information Flow in ALICE distributed, controls, monitoring, database 124
 
  • O. Pinazza
    INFN-Bologna, Bologna, Italy
  • A. Augustinus, P.Ch. Chochula, L.S. Jirdén, A.N. Kurepin, M. Lechman, P. Rosinský
    CERN, Geneva, Switzerland
  • G. De Cataldo
    INFN-Bari, Bari, Italy
  • A. Moreno
    Universidad Politécnica de Madrid, E.T.S.I Industriales, Madrid, Spain
 
  ALICE is one of the experiments at the Large Hadron Collider (LHC), CERN (Geneva, Switzerland). The ALICE detector control system is an integrated system collecting the controls of 18 different sub-detectors and of the general services, and is implemented using the commercial SCADA package PVSS. Information of general interest, beam and ALICE condition data, together with data related to shared plants or systems, is made available to all the subsystems through the distribution capabilities of PVSS. Great care has been taken during the design and implementation to build the control system as a hierarchical system, limiting the interdependencies of the various subsystems. Accessing remote resources in a PVSS distributed environment is very simple and can be initiated unilaterally. In order to improve the reliability of distributed data and to avoid unforeseen dependencies, the ALICE DCS group has enforced the centralization of the publication of global data and other specific variables requested by the subsystems. As an example, a specific monitoring tool will be presented that has been developed in PVSS to estimate the level of interdependency and to understand the optimal layout of the distributed connections, allowing for an interactive visualization of the distribution topology.
Poster MOPKN015 [2.585 MB]
 
MOPKN018 Computing Architecture of the ALICE Detector Control System controls, monitoring, network, interface 134
 
  • P. Rosinský, A. Augustinus, P.Ch. Chochula, L.S. Jirdén, M. Lechman
    CERN, Geneva, Switzerland
  • G. De Cataldo
    INFN-Bari, Bari, Italy
  • A.N. Kurepin
    RAS/INR, Moscow, Russia
  • A. Moreno
    Universidad Politécnica de Madrid, E.T.S.I Industriales, Madrid, Spain
  • O. Pinazza
    INFN-Bologna, Bologna, Italy
 
  The ALICE Detector Control System (DCS) is based on a commercial SCADA product, running on a large Windows computer cluster. It communicates with about 1200 network-attached devices to assure safe and stable operation of the experiment. In the presentation we focus on the design of the ALICE DCS computer systems. We describe the management of data flow, the mechanisms for handling the large data volumes and the information exchange with external systems. One of the key operational requirements is an intuitive, error-proof and robust user interface allowing for simple operation of the experiment. At the same time, typical operator tasks, like trending or routine checks of the devices, must be decoupled from the automated operation in order to prevent overload of critical parts of the system. All these requirements must be implemented in an environment with strict security constraints. In the presentation we explain how these demands affected the architecture of the ALICE DCS.
 
MOPKS003 High Resolution Ion Beam Profile Measurement System ion, target, LabView, ion-source 164
 
  • J.G. Lopes
    ISEL, Lisboa, Portugal
  • F.A. Corrêa Alegria
    IT, Lisboa, Portugal
  • J.G. Lopes, L.M. Redondo
    CFNUL, Lisboa, Portugal
  • J. Rocha
    ITN, Sacavém, Portugal
 
  A high resolution system designed for measuring the ion beam profile in the ion implanter installed at the Ion Beam Laboratory of the Technological Nuclear Institute (ITN) is described. Low-energy, high-current ion implantation is becoming increasingly important in today's technology. In order to achieve this, the use of an electrostatic lens to decelerate a focused ion beam is essential, but one needs to measure the 2D beam profile with high resolution. Traditional beam profile monitors using a matrix of detectors, like Faraday cups, were used previously. They are, in essence, discrete systems, since they only measure the beam intensity at fixed positions. In order to increase the resolution further, a new system was developed that performs a continuous measurement of the profile. It consists of a circular aluminum disc with a curved slit which extends approximately from the center of the disc to its periphery. The disc is attached to the ion implanter target, which is capable of rotating on its axis. A copper wire, positioned behind the slit, works like a Faraday cup, and the current it generates, proportional to the beam intensity, is measured. As the ion implanter is capable of scanning the beam over the target, the combination of vertical beam scanning with disc rotation allows the beam profile to be measured continuously in two dimensions. Hence, the developed system, including the computer-controlled positioning of the beam over the moving curved slit, the data acquisition and the beam profile representation, is described.
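  A minimal sketch of the reconstruction step implied above, assuming (purely for illustration) an Archimedean-spiral slit so that the sampled horizontal position is proportional to the disc rotation angle; the real instrument uses its own calibrated slit geometry and acquisition chain.
```python
# Sketch: rebuild a 2D beam profile from (vertical scan position,
# disc rotation angle, measured Faraday-cup current) samples.
# The Archimedean-spiral slit and R_MAX value are assumptions made
# only to illustrate the mapping from disc angle to horizontal position.
import numpy as np

R_MAX = 50.0  # mm, assumed slit end radius

def slit_x(angle_rad):
    """Horizontal coordinate sampled by the slit at a given disc angle."""
    return R_MAX * (np.asarray(angle_rad) % (2 * np.pi)) / (2 * np.pi)

def profile_2d(scan_y, disc_angle, current, nx=100, ny=100):
    """Average the measured current into a (ny, nx) beam-profile image."""
    x = slit_x(disc_angle)
    y = np.asarray(scan_y)
    total, _, _ = np.histogram2d(y, x, bins=(ny, nx), weights=np.asarray(current))
    counts, _, _ = np.histogram2d(y, x, bins=(ny, nx))
    return np.divide(total, counts, out=np.zeros_like(total), where=counts > 0)
```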
Poster MOPKS003 [0.744 MB]
 
MOPMN008 LASSIE: The Large Analogue Signal and Scaling Information Environment for FAIR controls, timing, data-acquisition, diagnostics 250
 
  • T. Hoffmann, H. Bräuning, R. Haseitl
    GSI, Darmstadt, Germany
 
  At FAIR, the Facility for Antiproton and Ion Research, several new accelerators such as the SIS 100, HESR, CR, the inter-connecting HEBT beam lines, the S-FRS and experiments will be built. All of these installations are equipped with beam diagnostic devices and other components which deliver time-resolved analogue signals to show the status, quality and performance of the accelerators. These signals can originate from particle detectors such as ionization chambers and plastic scintillators, but also from adapted output signals of transformers, collimators, magnet functions, RF cavities and others. To visualize and precisely correlate the time axis of all input signals, a dedicated FESA-based data acquisition and analysis system named LASSIE, the Large Analogue Signal and Scaling Information Environment, is under way. As the main operation mode of LASSIE, pulse counting with adequate scaler boards is used, without excluding enhancements for ADC, QDC or TDC digitization in the future. The concept, features and challenges of this large distributed DAQ system will be presented.
Poster MOPMN008 [7.850 MB]
 
MOPMN014 Detector Control System for the ATLAS Muon Spectrometer And Operational Experience After The First Year of LHC Data Taking controls, monitoring, electronics, hardware 267
 
  • S. Zimmermann
    Albert-Ludwig Universität Freiburg, Freiburg, Germany
  • G. Aielli
    Università di Roma II Tor Vergata, Roma, Italy
  • M. Bindi, A. Polini
    INFN-Bologna, Bologna, Italy
  • S. Bressler, E. Kajomovitz, S. Tarem
    Technion, Haifa, Israel
  • R.G.K. Hart
    NIKHEF, Amsterdam, The Netherlands
  • G. Iakovidis, E. Ikarios, K. Karakostas, S. Leontsinis, E. Mountricha
    National Technical University of Athens, Athens, Greece
 
  Muon reconstruction is a key ingredient in any of the experiments at the Large Hadron Collider (LHC). The muon spectrometer of ATLAS comprises Monitored Drift Tubes (MDTs) and Cathode Strip Chambers (CSCs) for precision tracking as well as Resistive Plate Chambers (RPCs) and Thin Gap Chambers (TGCs) for the muon trigger and for second-coordinate measurement. Together with a strong magnetic field provided by a superconducting toroid magnet and an optical alignment system, a high-precision determination of muon momentum up to the highest particle energies accessible in LHC collisions is provided. The Detector Control System (DCS) of each muon sub-detector technology must efficiently and safely manage several thousand LV and HV channels, the front-end electronics initialization as well as the monitoring of beam, background, magnetic field and environmental conditions. This contribution will describe the chosen hardware architecture, which as much as possible tries to use common technologies, and the implemented controls hierarchy. In addition, the muon DCS human machine interface (HMI) layer and operator tools will be covered. Emphasis will be given to reviewing the experience from the first year of LHC and detector operations, and to lessons learned for future large-scale detector control systems. We will also present the automatic procedures put in place during the last year and review the improvements in data-taking efficiency gained from them. Finally, we will describe the role DCS plays in assessing the quality of data for physics analysis and in online optimization of detector conditions.
On Behalf of the ATLAS Muon Collaboration
 
Poster MOPMN014 [0.249 MB]
 
MOPMN019 Controlling and Monitoring the Data Flow of the LHCb Read-out and DAQ Network network, controls, monitoring, FPGA 281
 
  • R. Schwemmer, C. Gaspar, N. Neufeld, D. Svantesson
    CERN, Geneva, Switzerland
 
  The LHCb readout uses a set of 320 FPGA-based boards as the interface between the on-detector hardware and the GBE DAQ network. The boards are the logical Level 1 (L1) read-out electronics and aggregate the experiment's raw data into event fragments that are sent to the DAQ network. To control the many parameters of the read-out boards, an embedded PC is included on each board, connecting to the board's ICs and FPGAs. The data from the L1 boards are sent through an aggregation network into the High Level Trigger farm. The farm comprises approximately 1500 PCs which first assemble the fragments from the L1 boards and then perform a partial reconstruction and selection of the events. In total there are approximately 3500 network connections. Data are pushed through the network and there is no mechanism for resending packets. Loss of data on a small scale is acceptable, but care has to be taken to avoid data loss where possible. To monitor and debug losses, different probes are inserted throughout the entire read-out chain to count fragments, packets and their rates at different positions. To keep uniformity throughout the experiment, all control software was developed using the common SCADA software, PVSS, with the JCOP framework as its base. The presentation will focus on the low-level controls interface developed for the L1 boards and the networking probes, as well as the integration of the high-level user interfaces into PVSS. We will show the way in which users and developers interact with the software, configure the hardware and follow the flow of data through the DAQ network.
 
MOPMN020 Integrating Controls Frameworks: Control Systems for NA62 LAV Detector Test Beams framework, controls, experiment, interface 285
 
  • O. Holme, J.A.R. Arroyo Garcia, P. Golonka, M. Gonzalez-Berges, H. Milcent
    CERN, Geneva, Switzerland
  • O. Holme
    ETH, Zurich, Switzerland
 
  The detector control system for the NA62 experiment at CERN, to be ready for physics data taking in 2014, is going to be built based on control technologies recommended by the CERN Engineering group. A rich portfolio of these technologies is planned to be showcased and deployed in the final application, and synergy between them is needed. In particular, two approaches to building controls applications need to play in harmony: the use of the high-level application framework called UNICOS, and a bottom-up approach of development based on the components of the JCOP Framework. The aim of combining the features provided by the two frameworks is to avoid duplication of functionality and to minimize the maintenance and development effort for future controls applications. In this paper the results of the integration efforts obtained so far are presented, namely the control applications developed for beam tests of NA62 detector prototypes. Even though the delivered applications are simple, significant conceptual and development work was required to bring about the smooth interplay between the two frameworks, while preserving the possibility of unleashing their full power. A discussion of current open issues is presented, including the viability of the approach for larger-scale applications of high complexity, such as the complete detector control system for the NA62 detector.
Poster MOPMN020 [1.464 MB]
 
MOPMN028 Automated Voltage Control in LHCb controls, experiment, status, high-voltage 304
 
  • L.G. Cardoso, C. Gaspar, R. Jacobsson
    CERN, Geneva, Switzerland
 
  LHCb is one of the four LHC experiments. In order to ensure the safety of the detector and to maximize efficiency, LHCb needs to coordinate its own operations, in particular the voltage configuration of the different sub-detectors, according to the accelerator status. Control software has been developed for this purpose, based on the Finite State Machine toolkit and the SCADA system used for control throughout LHCb (and the other LHC experiments). This software makes it possible to efficiently drive both the Low Voltage (LV) and High Voltage (HV) systems of the 10 different sub-detectors that constitute LHCb, setting each sub-system to the required voltage (easily configurable at run time) based on the accelerator state. The control software is also responsible for monitoring the state of the sub-detector voltages and adding it to the event data in the form of status bits. Safe and yet flexible operation of the LHCb detector has been obtained, and automatic actions, triggered by the state changes of the accelerator, have been implemented. This paper will detail the implementation of the voltage control software, its flexible run-time configuration and its usage in the LHCb experiment.
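  A hedged sketch of the kind of mapping such automation implements: each accelerator state is associated with a target voltage state per sub-detector, and an accelerator state change triggers the corresponding FSM commands. The state names, table contents and send_command interface are illustrative assumptions; the real system is built on the LHCb FSM toolkit and PVSS.
```python
# Sketch of accelerator-state-driven voltage settings, loosely modelled on
# the scheme described above.  All names and values are illustrative only.
VOLTAGE_TABLE = {
    # accelerator state : target state of the sub-detector HV system
    "NO_BEAM":      "HV_OFF",
    "INJECTION":    "HV_STANDBY",   # reduced voltage while beams are injected
    "ADJUST":       "HV_STANDBY",
    "STABLE_BEAMS": "HV_READY",     # full operating voltage for data taking
    "BEAM_DUMP":    "HV_STANDBY",
}

def on_accelerator_state_change(new_state, subdetectors):
    """Send the appropriate FSM command to every sub-detector HV node."""
    target = VOLTAGE_TABLE.get(new_state, "HV_STANDBY")   # safe default
    for sd in subdetectors:
        sd.send_command(f"GOTO_{target}")   # placeholder FSM interface
```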
Poster MOPMN028 [0.479 MB]
 
MOPMS003 The Evolution of the Control System for the Electromagnetic Calorimeter of the Compact Muon Solenoid Experiment at the Large Hadron Collider software, controls, hardware, interface 319
 
  • O. Holme, D.R.S. Di Calafiori, G. Dissertori, W. Lustermann
    ETH, Zurich, Switzerland
  • S. Zelepoukine
    UW-Madison/PD, Madison, Wisconsin, USA
 
  Funding: Swiss National Science Foundation (SNF)
This paper discusses the evolution of the Detector Control System (DCS) designed and implemented for the Electromagnetic Calorimeter (ECAL) of the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) as well as the operational experience acquired during the LHC physics data taking periods of 2010 and 2011. The current implementation in terms of functionality and planned hardware upgrades are presented. Furthermore, a project for reducing the long-term software maintenance, including a year-long detailed analysis of the existing applications, is put forward and the current outcomes which have informed the design decisions for the next CMS ECAL DCS software generation are described. The main goals for the new version are to minimize external dependencies enabling smooth migration to new hardware and software platforms and to maintain the existing functionality whilst substantially reducing support and maintenance effort through homogenization, simplification and standardization of the control system software.
 
Poster MOPMS003 [3.508 MB]
 
MOPMS021 Detector Control System of the ATLAS Insertable B-Layer controls, monitoring, software, hardware 364
 
  • S. Kersten, P. Kind, K. Lantzsch, P. Mättig, C. Zeitnitz
    Bergische Universität Wuppertal, Wuppertal, Germany
  • M. Citterio, C. Meroni
    Università degli Studi di Milano e INFN, Milano, Italy
  • F. Gensolen
    CPPM, Marseille, France
  • S. Kovalenko
    CERN, Geneva, Switzerland
  • B. Verlaat
    NIKHEF, Amsterdam, The Netherlands
 
  To improve the tracking robustness and precision of the ATLAS inner tracker, an additional fourth pixel layer is foreseen, called the Insertable B-Layer (IBL). It will be installed between the innermost present Pixel layer and a new, smaller beam pipe and is presently under construction. As no access will be available once it is installed in the experiment, a highly reliable control system is required. It has to supply the detector with all entities required for operation and protect it at all times. Design constraints are the high power density inside the detector volume, the sensitivity of the sensors to heat-ups, and the protection of the front-end electronics against transients. We present the architecture of the control system with an emphasis on the CO2 cooling system, the power supply system and the protection strategies. As we aim for common operation of the Pixel and IBL detectors, the integration of the IBL control system into the Pixel one will be discussed as well.
 
MOPMS031 Did We Get What We Aimed for 10 Years Ago? operation, controls, experiment, hardware 397
 
  • P.Ch. Chochula, A. Augustinus, L.S. Jirdén, A.N. Kurepin, M. Lechman, P. Rosinský
    CERN, Geneva, Switzerland
  • G. De Cataldo
    INFN-Bari, Bari, Italy
  • A. Moreno
    Universidad Politécnica de Madrid, E.T.S.I Industriales, Madrid, Spain
  • O. Pinazza
    INFN-Bologna, Bologna, Italy
 
  The ALICE Detector Control System (DCS) is in charge of the control and operation of one of the large high energy physics experiments at CERN in Geneva. The DCS design, which started in 2000, was partly inspired by the control systems of the previous generation of HEP experiments at the LEP accelerator at CERN. However, the scale of the LHC experiments, the use of modern, "intelligent" hardware and the harsh operational environment led to an innovative system design. The overall architecture has been largely based on commercial products like the PVSS SCADA system and OPC servers, extended by frameworks. Windows was chosen as the operating system platform for the core systems and Linux for the front-end devices. The concept of finite state machines has been deeply integrated into the system design. Finally, the design principles were optimized and adapted to the expected operational needs. The ALICE DCS was designed, prototyped and developed at a time when no experience with systems of similar scale and complexity existed. At the time of its implementation the detector hardware was not yet available and tests were performed only with partial detector installations. In this paper we analyse how well the original requirements and expectations set ten years ago match the real experiment needs after two years of operation. We provide an overview of system performance, reliability and scalability. Based on this experience we assess the need for future system enhancements to take place during the LHC technical stop in 2013.
Poster MOPMS031 [5.534 MB]
 
MOPMS034 Software Renovation of CERN's Experimental Areas software, controls, hardware, GUI 409
 
  • J. Fullerton, L.K. Jensen, J. Spanggaard
    CERN, Geneva, Switzerland
 
  The experimental areas at CERN (AD, PS and SPS) have undergone a widespread electronics and software consolidation based on modern techniques, allowing them to be used for many years to come. This paper will describe the scale of the software renovation and how the issues were overcome in order to ensure complete integration into the respective control systems.
Poster MOPMS034 [1.582 MB]
 
MOPMU020 The Control and Data Acquisition System of the Neutron Instrument BIODIFF controls, neutron, TANGO, software 477
 
  • H. Kleines, M. Drochner, L. Fleischhauer-Fuss, T. E. Schrader, F. Suxdorf, M. Wagener, S. van Waasen
    FZJ, Jülich, Germany
  • A. Ostermann
    TUM/Physik, Garching bei München, Germany
 
  The neutron instrument BIODIFF is a single-crystal diffractometer for biological macromolecules that has been built in a cooperation between Forschungszentrum Jülich and the Technical University of Munich. It is located at the research reactor FRM-II in Garching, Germany, and is now in its commissioning phase. The control and data acquisition system of BIODIFF is based on the so-called "Jülich-Munich Standard", a set of standards and technologies commonly accepted at the FRM-II, which is based on the TACO control system developed by the ESRF. In the future, it is intended to introduce TANGO at the FRM-II. The image plate detector system of BIODIFF is already equipped with a TANGO subsystem that has been integrated into the overall TACO instrument control system.
 
TUAAUST01 GDA and EPICS Working in Unison for Science Driven Data Acquisition and Control at Diamond Light Source EPICS, controls, hardware, data-acquisition 529
 
  • E.P. Gibbons, M.T. Heron, N.P. Rees
    Diamond, Oxfordshire, United Kingdom
 
  Diamond Light Source has recently received funding for an additional 10 photon beamlines, bringing the total to 32 beamlines and around 40 end-stations. These all use EPICS for the control of the underlying instrumentation associated with photon delivery, the experiment and most of the data acquisition hardware. For the scientific users, Diamond has developed the Generic Data Acquisition (GDA) application framework to provide a consistent science interface across all beamlines. While each application is customised to the science of its beamline, all applications are built from the framework and predominantly interface to the underlying instrumentation through the EPICS abstraction. We will describe the complete system, illustrate how it can be configured for a specific beamline application, and show how other synchrotrons are adapting, and can adapt, these tools for their needs.
Slides TUAAUST01 [9.781 MB]
 
TUBAULT03 The Upgrade Path from Legacy VME to VXS Dual Star Connectivity for Large Scale Data Acquisition and Trigger Systems hardware, fibre-optics, data-acquisition, FPGA 550
 
  • C. Cuevas, D. Abbott, F.J. Barbosa, H. Dong, W. Gu, E. Jastrzembski, S.R. Kaneta, B. Moffit, N. Nganga, B.J. Raydo, A. Somov, W.M. Taylor, J. Wilson
    JLAB, Newport News, Virginia, USA
 
  Funding: Authored by Jefferson Science Associates, LLC under U.S. DOE Contract No. DE-AC05-06OR23177
New instrumentation modules have been designed by Jefferson Lab to take advantage of the higher performance and elegant backplane connectivity of the VITA 41 VXS standard. These new modules are required to meet the 200 kHz trigger rates envisioned for the 12 GeV experimental program. Upgrading legacy VME designs to the high-speed gigabit serial extensions that VXS offers comes with significant challenges, including electronic engineering design, plus firmware and software development issues. This paper will detail our system design approach, including the critical system requirement stages, and explain the pipeline design techniques and the selection criteria for the FPGAs that require embedded gigabit serial transceivers. The entire trigger system is synchronous and operates with a 250 MHz clock, with the synchronization signals and the global trigger signals distributed to each front-end readout crate via the second switch slot in the 21-slot, dual star VXS backplane. The readout of the buffered detector signals relies on 2eSST over the standard VME64x path at >200 MB/s. We have achieved a 20 Gb/s transfer rate of trigger information within one VXS crate and will present results using production modules in a two-crate test configuration with both VXS crates fully populated. The VXS trigger modules that reside in the front-end crates will be ready for production orders by the end of the 2011 fiscal year. The VXS global trigger modules are in the design stage now, and will be completed to meet the installation schedule for the 12 GeV physics program.
 
Slides TUBAULT03 [7.189 MB]
 
TUCAUST06 Event-Synchronized Data Acquisition System of 5 Giga-bps Data Rate for User Experiment at the XFEL Facility, SACLA experiment, operation, controls, network 581
 
  • M. Yamaga, A. Amselem, T. Hirono, Y. Joti, A. Kiyomichi, T. Ohata, T. Sugimoto, R. Tanaka
    JASRI/SPring-8, Hyogo-ken, Japan
  • T. Hatsui
    RIKEN/SPring-8, Hyogo, Japan
 
  A data acquisition (DAQ), control and storage system has been developed for user experiments at the XFEL facility, SACLA, on the SPring-8 site. The anticipated experiments demand shot-by-shot DAQ in synchronization with the beam operation cycle in order to correlate the beam characteristics with the recorded data, such as X-ray diffraction patterns. The experiments produce waveform or image data, of which the data size ranges from 8 up to 48 MB for each X-ray pulse at 60 Hz. To meet these requirements, we have constructed a DAQ system that is operated in synchronization with the 60 Hz beam operation cycle. The system is designed to handle a data rate of up to 5 Gbps after compression, and consists of the trigger distributor/counters, the data-filling computers, the parallel-writing high-speed data storage, and the relational database. The data rate is reduced by on-the-fly data compression in front-end embedded systems. The self-describing data structure makes it possible to handle any type of data. The pipeline data buffer at each computer node ensures the integrity of the data transfer with non-real-time operating systems, and reduces the development cost. All the data are transmitted via the TCP/IP protocol over GbE and 10GbE Ethernet. To monitor the experimental status, the system incorporates on-line visualization of waveforms/images as well as prompt data mining by a 10 PFlops-scale supercomputer to check the data health. A partial system for the light source commissioning was released in March 2011. The full system will be released to public users in March 2012.
Slides TUCAUST06 [3.248 MB]
 
TURAULT01 Summary of the 3rd Control System Cyber-security (CS)2/HEP Workshop controls, network, experiment, software 603
 
  • S. Lüders
    CERN, Geneva, Switzerland
 
  Over the last decade modern accelerator and experiment control systems have increasingly been based on commercial-off-the-shelf products (VME crates, programmable logic controllers (PLCs), supervisory control and data acquisition (SCADA) systems, etc.), on Windows or Linux PCs, and on communication infrastructures using Ethernet and TCP/IP. Despite the benefits coming with this (r)evolution, new vulnerabilities are inherited, too: worms and viruses spread within seconds via the Ethernet cable, and attackers are becoming interested in control systems. The Stuxnet worm of 2010, directed against a particular Siemens PLC, is a unique example of a sophisticated attack against control systems [1]. Unfortunately, control PCs cannot be patched as fast as office PCs. Even worse, vulnerability scans at CERN using standard IT tools have shown that commercial automation systems lack fundamental security precautions: some systems crashed during the scan, others could easily be stopped or their process data altered [2]. The 3rd (CS)2/HEP workshop [3], held the weekend before the ICALEPCS2011 conference, was intended to raise awareness; exchange good practices, ideas and implementations; discuss what works and what does not, as well as the pros and cons; report on security events, lessons learned and successes; and give an update on the progress made at HEP laboratories around the world in order to secure control systems. This presentation will give a summary of the solutions planned and deployed and of the experience gained.
[1] S. Lüders, "Stuxnet and the Impact on Accelerator Control Systems", FRAAULT02, ICALEPCS, Grenoble, October 2011;
[2] S. Lüders, "Control Systems Under Attack?", O5_008, ICALEPCS, Geneva, October 2005.
[3] 3rd Control System Cyber-Security CS2/HEP Workshop, http://indico.cern.ch/conferenceDisplay.py?confId=120418
 
 
WEBHAUST06 Virtualized High Performance Computing Infrastructure of Novosibirsk Scientific Center network, experiment, site, controls 630
 
  • A. Zaytsev, S. Belov, V.I. Kaplin, A. Sukharev
    BINP SB RAS, Novosibirsk, Russia
  • A.S. Adakin, D. Chubarov, V. Nikultsev
    ICT SB RAS, Novosibirsk, Russia
  • V. Kalyuzhny
    NSU, Novosibirsk, Russia
  • N. Kuchin, S. Lomakin
    ICM&MG SB RAS, Novosibirsk, Russia
 
  Novosibirsk Scientific Center (NSC), also known worldwide as Akademgorodok, is one of the largest Russian scientific centers, hosting Novosibirsk State University (NSU) and more than 35 research organizations of the Siberian Branch of the Russian Academy of Sciences, including the Budker Institute of Nuclear Physics (BINP), the Institute of Computational Technologies, and the Institute of Computational Mathematics and Mathematical Geophysics (ICM&MG). Since each institute has specific requirements on the architecture of the computing farms involved in its research field, there are currently several computing facilities hosted by NSC institutes, each optimized for a particular set of tasks; the largest are the NSU Supercomputer Center, the Siberian Supercomputer Center (ICM&MG), and the Grid Computing Facility of BINP. A dedicated optical network with an initial bandwidth of 10 Gbps connecting these three facilities was built in order to make it possible to share the computing resources among the research communities, thus increasing the efficiency of operating the existing computing facilities and offering a common platform for building the computing infrastructure for future scientific projects. Unification of the computing infrastructure is achieved by extensive use of virtualization technology based on the XEN and KVM platforms. Our contribution gives a thorough review of the present status and future development prospects of the NSC virtualized computing infrastructure, focusing on its applications for handling everyday data processing tasks of the HEP experiments being carried out at BINP.
Slides WEBHAUST06 [14.369 MB]
 
WEBHMULT04 Sub-nanosecond Timing System Design and Development for LHAASO Project Ethernet, timing, network, FPGA 646
 
  • G.H. Gong, S. Chen, Q. Du, J.M. Li, Y. Liu
    Tsinghua University, Beijing, People's Republic of China
  • H. He
    IHEP Beijing, Beijing, People's Republic of China
 
  Funding: National Science Foundation of China (No.11005065)
The Large High Altitude Air Shower Observatory (LHAASO) [1] project is designed to trace galactic cosmic ray sources with approximately 10,000 ground air shower detectors of different types. Reconstruction of cosmic ray arrival directions requires sub-nanosecond time synchronization, so a novel design of the LHAASO timing system based on packet-based frequency distribution and time synchronization over Ethernet is proposed. The White Rabbit protocol (WR) [2] is applied as the infrastructure of the timing system. It implements a distributed adaptive phase tracking technology based on Synchronous Ethernet to lock all local clocks, and a real-time delay calibration method based on the Precision Time Protocol to keep all local times synchronized to within a nanosecond. We also present the development and test status of prototype WR switches and nodes.
[1] Cao Zhen, "A future project at Tibet: the large high altitude air shower observatory (LHAASO)", Chinese Phys. C 34 249, 2010
[2] P. Moreira, et al, "White Rabbit: Sub-Nanosecond Timing Distribution over Ethernet", ISPCS 2009
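  The synchronization mechanism referred to above rests on the standard PTP two-way exchange, which White Rabbit refines with Synchronous Ethernet and phase tracking. The sketch below shows only the basic PTP offset and delay computation; the WR-specific phase measurement and link-asymmetry corrections are omitted, and the timestamps are arbitrary example values.
```python
# Basic PTP two-way exchange, as used (and refined) by White Rabbit.
# t1: master sends Sync, t2: slave receives it,
# t3: slave sends Delay_Req, t4: master receives it.
def ptp_offset_and_delay(t1, t2, t3, t4):
    one_way_delay = ((t4 - t1) - (t3 - t2)) / 2.0
    clock_offset  = ((t2 - t1) - (t4 - t3)) / 2.0   # slave clock minus master clock
    return clock_offset, one_way_delay

# Example with timestamps in nanoseconds (arbitrary numbers):
offset, delay = ptp_offset_and_delay(1000.0, 1530.0, 2000.0, 2510.0)
print(offset, delay)   # -> 10.0 ns offset, 520.0 ns one-way delay
```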
 
Slides WEBHMULT04 [8.775 MB]
 
WEMAU005 The ATLAS Transition Radiation Tracker (TRT) Detector Control System controls, hardware, electronics, operation 666
 
  • J. Olszowska, E. Banaś, Z. Hajduk
    IFJ-PAN, Kraków, Poland
  • M. Hance, D. Olivito, P. Wagner
    University of Pennsylvania, Philadelphia, Pennsylvania, USA
  • T. Kowalski, B. Mindur
    AGH University of Science and Technology, Krakow, Poland
  • R. Mashinistov, K. Zhukov
    LPI, Moscow, Russia
  • A. Romaniouk
    MEPhI, Moscow, Russia
 
  Funding: CERN; MNiSW, Poland; MES of Russia and ROSATOM, Russian Federation; DOE and NSF, United States of America
The TRT is one of the components of the ATLAS Inner Detector, providing precise tracking and electron identification. It consists of 370 000 proportional counters (straws), which have to be filled with a stable active gas mixture and biased with high voltage. The high voltage settings in distinct topological regions are periodically modified by a closed-loop regulation mechanism to ensure a constant gas gain, independent of drifts of atmospheric pressure, local detector temperatures and gas mixture composition. A low voltage system powers the front-end electronics. Special algorithms provide fine-tuning procedures for detector-wide discrimination threshold equalization to guarantee a uniform noise figure for the whole detector. Detector, cooling system and electronics temperatures are continuously monitored by about 3000 temperature sensors. Standard industrial and custom developed server applications and protocols are used to integrate the devices into a single system. All parameters originating in TRT devices and external infrastructure systems (important for detector operation or safety) are monitored and used by alert and interlock mechanisms. The system runs on 11 computers as PVSS (industrial SCADA) projects and is fully integrated with the ATLAS Detector Control System.
 
Slides WEMAU005 [1.384 MB]
Poster WEMAU005 [1.978 MB]
 
WEMAU011 LIMA: A Generic Library for High Throughput Image Acquisition hardware, controls, software, interface 676
 
  • A. Homs, L. Claustre, A. Kirov, E. Papillon, S. Petitdemange
    ESRF, Grenoble, France
 
  A significant number of 2D detectors are used in large scale facilities' control systems for quantitative data analysis. In these devices, a common set of control parameters and features can be identified, but most manufacturers provide specific software control interfaces. A generic image acquisition library, called LIMA, has been developed at the ESRF for better compatibility and easier integration of 2D detectors into existing control systems. The LIMA design is driven by three main goals: i) independence of any control system, so that it can be shared by a wide scientific community; ii) a rich common set of functionalities (e.g., if a feature is not supported by the hardware, an alternative software implementation is provided); and iii) intensive use of events and multi-threaded algorithms for optimal exploitation of multi-core hardware resources, needed when controlling high-throughput detectors. LIMA currently supports the ESRF Frelon and Maxipix detectors as well as the Dectris Pilatus. Within a collaborative framework, the integration of the Basler GigE cameras is a contribution from SOLEIL. Although it is still under development, LIMA already features fast data saving in different file formats and basic data processing/reduction, such as software pixel binning/sub-image selection, background subtraction, beam centroid and sub-image statistics calculation, among others.
Slides WEMAU011 [0.073 MB]
 
WEMMU001 Floating-point-based Hardware Accelerator of a Beam Phase-Magnitude Detector and Filter for a Beam Phase Control System in a Heavy-Ion Synchrotron Application controls, hardware, synchrotron, FPGA 683
 
  • F.A. Samman
    Technische Universität Darmstadt, Darmstadt, Germany
  • M. Glesner, C. Spies, S. Surapong
    TUD, Darmstadt, Germany
 
  Funding: German Federal Ministry of Education and Research in the frame of Project FAIR (Facility for Antiproton and Ion Research), Grant Number 06DA9028I.
A hardware implementation of an adaptive phase and magnitude detector and filter for a beam-phase control system in a heavy-ion synchrotron application is presented in this paper [1]. The main components of the hardware are adaptive LMS filters and a phase and magnitude detector. The phase detectors are implemented using a CORDIC algorithm based on 32-bit binary floating-point arithmetic data formats. Therefore, a decimal to floating-point adapter is required to interface the data from an ADC to the phase and magnitude detector. The floating-point-based hardware is designed to improve the precision of the previous hardware implementation, which was based on fixed-point arithmetic. The hardware of the detector and the adaptive LMS filter have been implemented on a reconfigurable FPGA device for hardware acceleration purposes. The ideal Matlab/Simulink model of the hardware and the VHDL model of the adaptive LMS filter and the phase and magnitude detector are compared. The comparison shows that the output signal of the floating-point based adaptive FIR filter, as well as that of the phase and magnitude detector, is similar to the expected output signal of the ideal Matlab/Simulink model.
[1] H. Klingbeil, "A Fast DSP-Based Phase-Detector for Closed-Loop RF Control in Synchrotrons," IEEE Trans. Instrum. Meas., 54(3):1209–1213, 2005.
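  For background on the phase and magnitude detector named above, here is a behavioural Python sketch of vectoring-mode CORDIC, which rotates an (I, Q) sample onto the positive x-axis while accumulating the rotation angle. The paper's implementation is 32-bit floating-point VHDL on an FPGA; this plain-Python version is an illustration only.
```python
# Behavioural sketch of vectoring-mode CORDIC: drive y to zero while
# accumulating the applied rotation, yielding phase and magnitude.
import math

def cordic_phase_magnitude(x, y, iterations=16):
    # Fold left-half-plane inputs into the CORDIC convergence range.
    z = 0.0
    if x < 0.0:
        x, y, z = -x, -y, (math.pi if y >= 0.0 else -math.pi)
    gain = 1.0
    for i in range(iterations):
        gain *= math.sqrt(1.0 + 2.0 ** (-2 * i))   # cumulative CORDIC gain
        d = -1.0 if y > 0.0 else 1.0               # rotate so that y goes to zero
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * math.atan(2.0 ** -i)
    return z, x / gain                             # (phase in radians, magnitude)

print(cordic_phase_magnitude(1.0, 1.0))            # ~ (0.7854, 1.4142)
```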
 
Slides WEMMU001 [0.383 MB]
 
WEMMU004 SPI Boards Package, a New Set of Electronic Boards at Synchrotron SOLEIL controls, undulator, FPGA, interface 687
 
  • Y.-M. Abiven, P. Betinelli-Deck, J. Bisou, F. Blache, F. Briquez, A. Chattou, J. Coquet, P. Gourhant, N. Leclercq, P. Monteiro, G. Renaud, J.P. Ricaud, L. Roussier
    SOLEIL, Gif-sur-Yvette, France
 
  SOLEIL is a third generation synchrotron radiation source located in France near Paris. At the moment, the storage ring delivers photon beam to 23 beamlines. As the machine and the beamlines improve their performance, new requirements are identified. On the machine side, a new implementation of feedforward for the electromagnetic undulators is required to improve beam stability. On the beamline side, a solution is required to synchronize data acquisition with motor position during continuous scans. In order to provide a simple and modular solution for these applications requiring synchronization, the electronics group developed a set of electronic boards called the "SPI board package". In this package, the boards can be connected together in a daisy chain and communicate with the controller through an SPI* bus. Communication with the control system is done via Ethernet. At the moment the following boards have been developed: a controller board based on a Cortex M3 MCU, a 16-bit ADC board, a 16-bit DAC board and a board for processing motor encoder signals based on a Spartan III FPGA. This platform allows us to embed processing close to the hardware with open tools. Thanks to this solution we achieve the best synchronization performance.
* SPI: Serial Peripheral Interface
 
Slides WEMMU004 [0.230 MB]
Poster WEMMU004 [0.430 MB]
 
WEPKN003 Distributed Fast Acquisitions System for Multi Detector Experiments experiment, software, TANGO, distributed 717
 
  • F. Langlois, A. Buteau, X. Elattaoui, C.M. Kewish, S. Lê, P. Martinez, K. Medjoubi, S. Poirier, A. Somogyi
    SOLEIL, Gif-sur-Yvette, France
  • A. Noureddine
    MEDIANE SYSTEM, Le Pecq, France
  • C. Rodriguez
    ALTEN, Boulogne-Billancourt, France
 
  An increasing number of SOLEIL beamlines need to use several detection techniques in parallel, which can involve 2D area detectors, 1D fluorescence analyzers, etc. For such experiments, we have implemented Distributed Fast Acquisition Systems for Multi Detectors. Data from each detector are collected by independent software applications (in our case Tango devices), with all acquisitions triggered by a single master clock. Each detector software device then streams its own data to a common disk space, known as the spool. Each detector's data are stored in independent NeXus files, with the help of a dedicated high-performance NeXus streaming C++ library (called NeXus4Tango). A dedicated asynchronous process, known as the DataMerger, monitors the spool and gathers all these individual temporary NeXus files into the final experiment NeXus file stored in SOLEIL's common storage system, as sketched below. Metadata describing the context and environment are also added to the final file by another process (the DataRecorder device). This software architecture has proved to be very modular in terms of the number and type of detectors while making users' lives easier, all data being stored in a single file at the end of the acquisition. The status of deployment and operation of this "Distributed Fast Acquisition system for multi detector experiments" will be presented, with the examples of QuickExafs acquisitions on the SAMBA beamline and QuickSRCD acquisitions on DISCO. In particular, the complex case of the future NANOSCOPIUM beamline will be discussed.
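  A hedged sketch of what the DataMerger-style gathering amounts to, using h5py (NeXus files are HDF5 underneath). The spool path, file pattern, group names and the helper function itself are illustrative assumptions, not SOLEIL's actual DataMerger or NeXus4Tango implementation.
```python
# Sketch of merging per-detector temporary NeXus (HDF5) files from a spool
# directory into one final experiment file.  Paths and group names are
# placeholders; name clashes and metadata handling are ignored here.
import glob
import h5py

def merge_spool(spool_dir, final_path):
    with h5py.File(final_path, "a") as final:
        entry = final.require_group("entry")          # NeXus-style top group
        for tmp_path in sorted(glob.glob(f"{spool_dir}/*.nxs")):
            with h5py.File(tmp_path, "r") as tmp:
                for name in tmp:                      # copy each detector's data
                    tmp.copy(name, entry, name=name)

merge_spool("/tmp/spool", "/data/experiment_0001.nxs")  # hypothetical paths
```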
Poster WEPKN003 [0.671 MB]
 
WEPKN019 A Programmable Logic Controller-Based System for the Recirculation of Liquid C6F14 in the ALICE High Momentum Particle Identification Detector at the Large Hadron Collider controls, operation, monitoring, framework 745
 
  • I. Sgura, G. De Cataldo, A. Franco, C. Pastore, G. Volpe
    INFN-Bari, Bari, Italy
 
  We present the design and the implementation of the Control System (CS) for the recirculation of liquid C6F14 (perfluorohexane) in the High Momentum Particle Identification Detector (HMPID). The HMPID is a sub-detector of the ALICE experiment at the CERN Large Hadron Collider (LHC) and uses liquid C6F14 as the Cherenkov radiator medium in 21 quartz trays for the measurement of the velocity of charged particles. The primary task of the Liquid Circulation System (LCS) is to ensure the highest transparency of the C6F14 to ultraviolet light by re-circulating the liquid through a set of special filters. In order to provide safe long-term operation, a PLC-based CS has been implemented. The CS supports both automatic and manual operating modes, remotely or locally. The adopted Finite State Machine approach minimizes possible operator errors and provides a hierarchical control structure allowing the operation and monitoring of a single radiator tray. The LCS is protected against anomalous working conditions by both active and passive systems. The active ones are implemented via the control software running in the PLC, whereas the human interface and data archiving are provided via PVSS, the SCADA framework which integrates the full detector control. The LCS under CS control has been fully commissioned and proved to meet all requirements, thus enabling the HMPID to successfully collect data from the first LHC operation.
Poster WEPKN019 [1.270 MB]
 
WEPKS002 Quick EXAFS Experiments Using a New GDA Eclipse RCP GUI with EPICS Hardware Control experiment, interface, EPICS, hardware 771
 
  • R.J. Woolliscroft, C. Coles, M. Gerring, M.R. Pearson
    Diamond, Oxfordshire, United Kingdom
 
  Funding: Diamond Light Source Ltd.
The Generic Data Acquisition (GDA)* framework is open-source, Java and Eclipse RCP based data acquisition software for synchrotron and neutron facilities. A new implementation of the GDA on the B18 beamline at the Diamond synchrotron will be discussed. This beamline performs XAS energy-scanning experiments and includes a continuous-scan mode of the monochromator, synchronised with various detectors, for Quick EXAFS (QEXAFS) experiments. A new perspective for the GDA's Eclipse RCP GUI has been developed in which graphical editors are used to write XML files holding experimental parameters. The same XML files are unmarshalled by the GDA server to create Java beans used by the Jython scripts run within the GDA server. The underlying motion control is provided by EPICS. The new Eclipse RCP GUI and the integration and synchronisation between the two software systems and the detectors will be covered.
* GDA website: http://www.opengda.org/
 
Poster WEPKS002 [1.277 MB]
 
WEPMN011 Controlling the EXCALIBUR Detector software, simulation, controls, hardware 894
 
  • J.A. Thompson, I. Horswell, J. Marchal, U.K. Pedersen
    Diamond, Oxfordshire, United Kingdom
  • S.R. Burge, J.D. Lipp, T.C. Nicholls
    STFC/RAL, Chilton, Didcot, Oxon, United Kingdom
 
  EXCALIBUR is an advanced photon counting detector being designed and built by a collaboration of Diamond Light Source and the Science and Technology Facilities Council. It is based around 48 CERN Medipix III silicon detectors arranged as an 8x6 array. The main problem addressed by the design of the hardware and software is the uninterrupted collection and safe storage of image data at rates up to one hundred (2048x1536) frames per second. This is achieved by splitting the image into six 'stripes' and providing parallel data paths for them all the way from the detectors to the storage. This architecture requires the software to control the configuration of the stripes in a consistent manner and to keep track of the data so that the stripes can be subsequently stitched together into frames.  
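  A minimal numpy sketch of the stitching step described above, assuming each stripe arrives as a 256x2048 array and that stripes are simply stacked vertically; chip-gap handling and metadata, which the real software must also manage, are ignored here.
```python
# Sketch: stitch six EXCALIBUR-style stripes back into a single frame.
# The stripe shape (256, 2048) and vertical stacking are assumptions
# made only to illustrate the reassembly of parallel data paths.
import numpy as np

def stitch_frame(stripes):
    assert len(stripes) == 6
    for s in stripes:
        assert s.shape == (256, 2048)
    return np.vstack(stripes)            # -> (1536, 2048) full frame

frame = stitch_frame([np.zeros((256, 2048), dtype=np.uint16) for _ in range(6)])
print(frame.shape)                       # (1536, 2048)
```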
Poster WEPMN011 [0.289 MB]
 
WEPMN023 The ATLAS Tile Calorimeter Detector Control System controls, monitoring, experiment, electronics 929
 
  • G. Ribeiro
    LIP, Lisboa, Portugal
  • G. Arabidze
    MSU, East Lansing, Michigan, USA
  • P. Lafarguette
    Université Blaise Pascal, Clermont-Ferrand, France
  • S. Nemecek
    Czech Republic Academy of Sciences, Institute of Physics, Prague, Czech Republic
 
  The main task of the ATLAS Tile Calorimeter Detector Control System (DCS) is to enable the coherent and safe operation of the calorimeter. All actions initiated by the operator, as well as all errors, warnings and alarms concerning the hardware of the detector, are handled by the DCS. The Tile Calorimeter DCS controls and monitors mainly the low voltage and high voltage power supply systems, but it is also interfaced with the infrastructure (cooling system and racks), the calibration systems, the data acquisition system, the configuration and conditions databases and the detector safety system. The system has been operational since the beginning of LHC operation and has been used extensively in the operation of the detector. In recent months, effort has been directed towards the implementation of automatic recovery of power supplies after trips. The current status, results and latest developments will be presented.
Poster WEPMN023 [0.404 MB]
 
WEPMN025 A New Fast Triggerless Acquisition System For Large Detector Arrays FPGA, real-time, controls, experiment 935
 
  • P. Mutti, M. Jentschel, J. Ratel, F. Rey, E. Ruiz-Martinez, W. Urban
    ILL, Grenoble, France
 
  A common trend in low and medium energy nuclear physics is to develop ever more complex detector systems in the form of multi-detector arrays. The main objective of such an elaborate set-up is to obtain comprehensive information about the products of all reactions. State-of-the-art γ-ray spectroscopy nowadays requires the use of large arrays of HPGe detectors, often coupled with anti-Compton active shielding to reduce the ambient background. In view of this complexity, the front-end electronics must provide precise information about energy, time and possibly pulse shape. The large multiplicity of the detection system requires the capability to process the multitude of signals from many detectors, fast processing and a very high throughput of more than 10^6 data words per second. Handling such a complex system with traditional analogue electronics has rapidly shown its limitations, due first of all to the non-negligible cost per channel and, moreover, to the signal degradation associated with complex analogue paths. Nowadays, digital pulse processing systems are available with performance, in terms of timing and energy resolution, equal to if not better than the corresponding analogue ones, at a fraction of the cost per channel. The presented system uses a combination of a 15-bit 100 MS/s digitizer with a PowerPC-based VME single board computer. Real-time processing algorithms have been developed to handle total event rates of more than 1 MHz, providing on-line display of single and coincidence events.
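  To illustrate the triggerless approach, the sketch below builds coincidences from timestamped single events by sorting them in time and grouping hits whose timestamps fall within a coincidence window. The event format and the 200 ns window are illustrative assumptions, not the parameters of the ILL system.
```python
# Sketch of coincidence building from triggerless, timestamped events.
# Each event is (timestamp_ns, channel, energy); the window is arbitrary.
def build_coincidences(events, window_ns=200):
    events = sorted(events, key=lambda e: e[0])
    groups, current = [], []
    for ev in events:
        if current and ev[0] - current[-1][0] > window_ns:
            groups.append(current)
            current = []
        current.append(ev)
    if current:
        groups.append(current)
    return [g for g in groups if len(g) > 1]     # keep only multi-hit groups

hits = [(1000, 3, 661.7), (1100, 7, 121.8), (5000, 3, 1332.5)]
print(build_coincidences(hits))   # -> one coincidence group with two events
```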
Poster WEPMN025 [15.172 MB]
 
WEPMN028 Development of Image Data Acquisition System for 2D Detector at SACLA (SPring-8 XFEL) data-acquisition, interface, laser, FPGA 947
 
  • A. Kiyomichi, A. Amselem, T. Hirono, T. Ohata, R. Tanaka, M. Yamaga
    JASRI/SPring-8, Hyogo-ken, Japan
  • T. Hatsui
    RIKEN/SPring-8, Hyogo, Japan
 
  The X-ray free electron laser facility SACLA (SPring-8 Angstrom Compact free electron LAser) was constructed and started beam commissioning in March 2011. To meet the requirements of the experiments proposed at SACLA, multi-port readout X-ray CCD detectors (MPCCD) have been developed to realize a system with a total sensitive area of 4 megapixels and a 16-bit wide dynamic range at the 60 Hz shot rate. We have developed an image data-handling scheme using the event-synchronized data-acquisition system. The front-end system uses the CameraLink interface, which excels in real-time triggering and high-speed data transfer. To handle the total data rate of up to 4 Gbps, the image data are collected by dividing the CCD detector into eight segments of 0.5 Mpixels each, which are then sent to high-speed data storage in parallel. We have prepared two types of CameraLink imaging system, VME based and PC based. The Image Distribution board is a logic-reconfigurable VME board with a CameraLink mezzanine card. The front-end system of the MPCCD detector consists of eight Image Distribution boards. We plan to introduce online lossless compression using an FPGA with an arithmetic coding algorithm. For wide adaptability to user requirements, we have also prepared the PC-based imaging system, which consists of a Linux server and a commercial CameraLink PCI interface. It does not provide the compression function, but supports various types of CCD cameras, for example a high-definition (1920x1080) single CCD camera.
Poster WEPMN028 [5.574 MB]
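
The figures quoted in WEPMN028 above are consistent with a simple back-of-the-envelope check; the short Python snippet below reproduces them from the pixel count, bit depth, shot rate and segment count given in the abstract (no other assumptions).

# Illustrative data-rate arithmetic for the MPCCD readout figures quoted above.
pixels_total = 4_000_000        # 4 Mpixel total sensor area
bits_per_pixel = 16             # 16-bit dynamic range
frame_rate_hz = 60              # SACLA shot rate
segments = 8                    # read out as eight 0.5 Mpixel segments

total_bps = pixels_total * bits_per_pixel * frame_rate_hz
per_segment_bps = total_bps / segments

print(f"total       : {total_bps / 1e9:.2f} Gbit/s")        # ~3.84 Gbit/s, i.e. "up to 4 Gbps"
print(f"per segment : {per_segment_bps / 1e9:.2f} Gbit/s")  # ~0.48 Gbit/s per CameraLink segment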
 
WEPMN038 A Combined On-line Acoustic Flowmeter and Fluorocarbon Coolant Mixture Analyzer for the ATLAS Silicon Tracker software, controls, database, real-time 969
 
  • A. Bitadze, R.L. Bates
    University of Glasgow, Glasgow, United Kingdom
  • M. Battistin, S. Berry, P. Bonneau, J. Botelho-Direito, B. Di Girolamo, J. Godlewski, E. Perez-Rodriguez, L. Zwalinski
    CERN, Geneva, Switzerland
  • N. Bousson, G.D. Hallewell, M. Mathieu, A. Rozanov
    CNRS/CPT, Marseille, France
  • R. Boyd
    University of Oklahoma, Norman, Oklahoma, USA
  • M. Doubek, V. Vacek, M. Vitek
    Czech Technical University in Prague, Faculty of Mechanical Engineering, Prague, Czech Republic
  • K. Egorov
    Indiana University, Bloomington, Indiana, USA
  • S. Katunin
    PNPI, Gatchina, Leningrad District, Russia
  • S. McMahon
    STFC/RAL/ASTeC, Chilton, Didcot, Oxon, United Kingdom
  • K. Nagai
    University of Tsukuba, Graduate School of Pure and Applied Sciences, Tsukuba, Ibaraki, Japan
 
  An upgrade to the ATLAS silicon tracker cooling control system requires a change from C3F8 (molecular weight 188) coolant to a blend with 10-30% C2F6 (molecular weight 138) to reduce the evaporation temperature and better protect the silicon from cumulative radiation damage at the LHC. Central to this upgrade, an acoustic instrument for the measurement of the C3F8/C2F6 mixture ratio and flow has been developed. The sound velocity in a binary gas mixture at known temperature and pressure depends on the component concentrations. 50 kHz sound bursts are sent simultaneously via ultrasonic transceivers parallel and anti-parallel to the gas flow. A 20 MHz transit clock is started synchronously with the burst transmission and stopped by over-threshold received sound pulses. The transit times in both directions, together with temperature and pressure, enter a FIFO memory 100 times per second. The gas mixture is continuously analyzed in PVSS-II by comparing the average sound velocity in the two directions with stored velocity-mixture look-up tables. The flow is calculated from the difference between the measured sound velocities in the two directions. In future versions these calculations may be made in a micro-controller. The instrument has demonstrated a resolution of <0.3% for C3F8/C2F6 mixtures with ~20% C2F6, with a simultaneous flow resolution of ~0.1% of full scale. Higher precision is possible: a sensitivity of ~0.005% to leaks of C3F8 into the ATLAS pixel detector nitrogen envelope (molecular weight difference 156) has been seen. The instrument has many applications, including the analysis of hydrocarbons, mixtures for semiconductor manufacture, and anaesthesia.
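
The transit-time technique can be summarized in two relations: with an acoustic path of length L, the average of L/t over the two directions gives the sound velocity (used for the mixture look-up), while half the difference gives the flow component along the path. The Python sketch below illustrates this with hypothetical transit times, a hypothetical path length and a made-up two-point calibration table; it is not the PVSS-II implementation described above.

def velocity_and_flow(t_down_s: float, t_up_s: float, path_m: float):
    """Sound velocity and flow velocity from transit times measured
    with and against the gas flow over a path of length path_m."""
    c = 0.5 * path_m * (1.0 / t_down_s + 1.0 / t_up_s)   # sound velocity
    v = 0.5 * path_m * (1.0 / t_down_s - 1.0 / t_up_s)   # flow component along the path
    return c, v

def mixture_from_velocity(c: float, lut):
    """Linear interpolation in a (sound velocity, %C2F6) look-up table,
    valid for one temperature/pressure point."""
    lut = sorted(lut)
    for (c0, x0), (c1, x1) in zip(lut, lut[1:]):
        if c0 <= c <= c1:
            return x0 + (x1 - x0) * (c - c0) / (c1 - c0)
    raise ValueError("velocity outside calibrated range")

# Hypothetical numbers: 0.3 m path, transit times in the 2.5 ms range.
c, v = velocity_and_flow(t_down_s=2.473e-3, t_up_s=2.486e-3, path_m=0.3)
blend = mixture_from_velocity(c, [(118.0, 10.0), (124.0, 30.0)])  # made-up calibration points
print(f"sound velocity {c:.1f} m/s, flow {v:.2f} m/s, ~{blend:.1f}% C2F6")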
 
WEPMU006 Architecture for Interlock Systems: Reliability Analysis with Regard to Safety and Availability simulation, operation, extraction, superconducting-magnet 1058
 
  • S. Wagner, A. Apollonio, R. Schmidt, M. Zerlauth
    CERN, Geneva, Switzerland
  • A. Vergara-Fernandez
    ITER Organization, St. Paul lez Durance, France
 
  For accelerators (e.g. the LHC) and other large experimental physics facilities (e.g. ITER), machine protection relies on complex interlock systems. In the design of interlock loops, the choice of the hardware architecture has an impact on both machine safety and machine availability. While high machine safety is an inherent requirement, the constraints in terms of availability may differ from one facility to another. For the interlock loops protecting the LHC superconducting magnet circuits, reduced machine availability can be tolerated since shutdowns do not affect the longevity of the equipment. In ITER's case, on the other hand, high availability is required since fast shutdowns cause significant magnet aging. A reliability analysis of various interlock loop architectures has been performed. The analysis, based on an analytical model, compares a 1oo3 (one-out-of-three) and a 2oo3 architecture with a single loop. It yields the probabilities for four scenarios: (1) completed mission (e.g. a physics fill in the LHC or a pulse in ITER without a shutdown being triggered), (2) shutdown because of a failure in the interlock loop, (3) emergency shutdown (e.g. after a quench of a magnet), and (4) missed emergency shutdown (a shutdown is required but the interlock loop fails, possibly leading to severe damage of the facility). Scenario 4 relates to machine safety and, together with scenarios 2 and 3, defines the machine availability reflected by scenario 1. This paper presents the results of the analysis of the properties of the different architectures with regard to machine safety and availability.
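
The safety/availability trade-off between the architectures can be illustrated with elementary probability arithmetic. Purely as an illustration (the paper's analytical model is more detailed), assume each loop independently misses a demand with probability p_d and trips spuriously with probability p_s during one mission; the Python sketch below then compares the probability of a missed emergency shutdown (scenario 4) and of a false shutdown (scenario 2) for a single loop, a 1oo3 and a 2oo3 voting architecture. The numerical values of p_d and p_s are placeholders.

from math import comb

def k_out_of_n(p: float, k: int, n: int) -> float:
    """Probability that at least k of n independent channels are in a given state."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p_d = 1e-4   # per-mission probability that one loop misses a demand (illustrative)
p_s = 1e-2   # per-mission probability that one loop trips spuriously (illustrative)

architectures = {
    # (loops needed to trigger a shutdown, total loops)
    "single loop": (1, 1),
    "1oo3":        (1, 3),   # any one loop tripping stops the machine
    "2oo3":        (2, 3),   # at least two loops must agree to stop the machine
}

for name, (k, n) in architectures.items():
    missed = k_out_of_n(p_d, n - k + 1, n)   # too many loops fail dangerously to reach a trip
    false  = k_out_of_n(p_s, k, n)           # enough loops trip spuriously to stop the machine
    print(f"{name:11s}  missed shutdown ~{missed:.2e}   false shutdown ~{false:.2e}")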
 
WEPMU024 The Radiation Monitoring System for the LHCb Inner Tracker radiation, luminosity, monitoring, electronics 1115
 
  • O. Okhrimenko, V. Iakovenko, V.M. Pugatch
    NASU/INR, Kiev, Ukraine
  • F. Alessio, G. Corti
    CERN, Geneva, Switzerland
 
  The performance of the LHCb Radiation Monitoring System (RMS) [1], designed to monitor the radiation load on the Inner Tracker [2] silicon micro-strip detectors, is presented. The RMS comprises Metal Foil Detectors (MFD) read out by sensitive Charge Integrators [3]. The MFD is a radiation-hard detector operating at high charged-particle fluxes. The RMS is used to monitor the radiation load as well as the relative luminosity of the LHCb experiment. The results obtained by the RMS during LHC operation in 2010-2011 are compared to Monte Carlo simulation.
[1] V. Pugatch et al., Ukr. J. Phys. 54(4), 418 (2009).
[2] LHCb Collaboration, JINST 3 (2008) S08005.
[3] V. Pugatch et al., LHCb Note 2007-062.
 
Poster WEPMU024 [3.870 MB]
 
WEPMU026 Protecting Detectors in ALICE injection, experiment, controls, monitoring 1122
 
  • M. Lechman, A. Augustinus, P.Ch. Chochula, G. De Cataldo, A. Di Mauro, L.S. Jirdén, A.N. Kurepin, P. Rosinský, H. Schindler
    CERN, Geneva, Switzerland
  • A. Moreno
    Universidad Politécnica de Madrid, E.T.S.I Industriales, Madrid, Spain
  • O. Pinazza
    INFN-Bologna, Bologna, Italy
 
  ALICE is one of the big LHC experiments at CERN in Geneva. It is composed of many sophisticated and complex detectors mounted very compactly around the beam pipe. Each detector is a unique masterpiece of design, engineering and construction, and any damage to it could stop the experiment for months or even years. It is therefore essential that the detectors are protected from any danger, and this is one very important role of the Detector Control System (DCS). One of the main dangers for the detectors is the particle beam itself. Since the detectors are designed to be extremely sensitive to particles, they are also vulnerable to any excess in the beam conditions delivered by the LHC accelerator. The beam protection consists of a combination of hardware interlocks and control software, and this paper describes how this is implemented and handled in ALICE. Tools have also been developed to support operators and shift leaders in decision making related to beam safety. The experience gained and the conclusions drawn from the individual safety projects are also presented.
Poster WEPMU026 [1.561 MB]
 
THBHAUST02 The Wonderland of Operating the ALICE Experiment operation, experiment, controls, interface 1182
 
  • A. Augustinus, P.Ch. Chochula, G. De Cataldo, L.S. Jirdén, A.N. Kurepin, M. Lechman, O. Pinazza, P. Rosinský
    CERN, Geneva, Switzerland
  • A. Moreno
    Universidad Politécnica de Madrid, E.T.S.I Industriales, Madrid, Spain
 
  ALICE is one of the experiments at the Large Hadron Collider (LHC) at CERN (Geneva, Switzerland). It is composed of 18 sub-detectors, each with numerous subsystems that need to be controlled and operated in a safe and efficient way. The Detector Control System (DCS) is the key to this and has been used with success by detector experts during the commissioning of the individual detectors. With the transition from commissioning to operation, more and more tasks were transferred from detector experts to central operators. By the end of the 2010 data-taking campaign the ALICE experiment was run by a small crew of central operators, with only a single controls operator. The transition from expert to non-expert operation constituted a real challenge in terms of tools, documentation and training. In addition, the relatively high turnover and diversity of the operator crew, which is specific to the HEP experiment environment (as opposed to the more stable operation crews of accelerators), made this challenge even bigger. This paper describes the original architectural choices that were made and the key components that led to a homogeneous control system allowing for efficient centralized operation. Challenges and specific constraints that apply to the operation of a large, complex experiment are described. Emphasis is put on the tools and procedures that were implemented to allow the transition from local operation by detector experts during commissioning and early operation to efficient centralized operation by a small operator crew not necessarily consisting of experts.
Slides THBHAUST02 [1.933 MB]
 
THCHAUST02 Large Scale Data Facility for Data Intensive Synchrotron Beamlines data-management, experiment, synchrotron, software 1216
 
  • R. Stotzka, A. Garcia, V. Hartmann, T. Jejkal, H. Pasic, A. Streit, J. van Wezel
    KIT, Karlsruhe, Germany
  • D. Haas, W. Mexner, T. dos Santos Rolo
    Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany
 
  ANKA is a large-scale facility of the Helmholtz Association of National Research Centers in Germany, located at the Karlsruhe Institute of Technology. As a synchrotron light source it provides light from hard X-rays to the far infrared for research and technology. It serves as a user facility for the national and international scientific community, currently producing 100 TB of data per year. Within the next two years a couple of additional data-intensive beamlines will become operational, producing up to 1.6 PB per year. These amounts of data have to be stored and provided on demand to the users. The Large Scale Data Facility (LSDF) is located on the same campus as ANKA. It is a data service facility dedicated to data-intensive scientific experiments. Currently, 4 PB of storage for unstructured and structured data and a Hadoop cluster as a computing resource for data-intensive applications are available. Within the campus, experiments and the main large data-producing facilities are connected via 10 GE network links. An additional 10 GE link exists to the internet. Tools for easy and transparent access allow scientists to use the LSDF without having to deal with its internal structures and technologies. Open interfaces and APIs support a variety of access methods to the highly available services for high-throughput data applications. In close cooperation with ANKA, the LSDF provides assistance in efficiently organizing data and metadata structures, and develops and deploys community-specific software running on the directly connected computing infrastructure.
Slides THCHAUST02 [1.294 MB]
 
THCHAUST03 Common Data Model; A Unified Layer to Access Data from Data Analysis Point of View framework, synchrotron, data-analysis, neutron 1220
 
  • N. Hauser, T.K. Lam, N. Xiong
    ANSTO, Menai, Australia
  • A. Buteau, M. Ounsy, S. Poirier
    SOLEIL, Gif-sur-Yvette, France
  • C. Rodriguez
    ALTEN, Boulogne-Billancourt, France
 
  For almost 20 years, the scientific community of neutron and synchrotron facilities has been dreaming of a common data format, to be able to exchange experimental results and the applications used to analyse them. While using HDF5 as a physical container for data quickly reached a large consensus, the big issue remains the standardisation of the data organisation. By introducing a new level of indirection for data access, the Common Data Model (CDM) framework offers a solution and allows development efforts and responsibilities to be split between institutes. The CDM is made of a core API that accesses data through a data-format plugin mechanism, plus scientific application definitions (i.e. sets of logically organized keywords defined by scientists for each experimental technique). Using an innovative "mapping" system between application definitions and physical data organisations, the CDM makes it possible to develop data reduction applications regardless of data file formats and organisations. Each institute then has to develop data access plugins for its own file formats, along with the mapping between the application definitions and its own data file organisation. Thus, data reduction applications can be developed from a strictly scientific point of view and are natively able to process data coming from several institutes (a toy sketch of this indirection is given below). A concrete example of a SAXS data reduction application accessing NeXus and EDF (ESRF Data Format) files will be discussed.
Slides THCHAUST03 [36.889 MB]
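
The level of indirection described in THCHAUST03 above can be pictured with a toy Python sketch: an application definition is a set of logical keywords, a mapping translates each keyword into an institute-specific path inside the file, and a format plugin knows how to read that path. All class names, keys and paths below are hypothetical illustrations, not the actual CDM API.

# Toy illustration of the "mapping" indirection: reduction code only sees
# logical keywords, never file-format- or institute-specific paths.

class ToyFormatPlugin:
    """Hypothetical data-format plugin: resolves a physical path inside one file."""
    def __init__(self, file_content: dict):
        self.file_content = file_content          # stand-in for a real HDF5/NeXus/EDF file
    def read(self, physical_path: str):
        return self.file_content[physical_path]

# Hypothetical application definition for a SAXS reduction: logical keywords only.
SAXS_DEFINITION = {"detector_image", "sample_to_detector_distance", "wavelength"}

# Hypothetical institute-specific mapping from keywords to physical data organisation.
INSTITUTE_A_MAPPING = {
    "detector_image": "/entry/instrument/detector/data",
    "sample_to_detector_distance": "/entry/instrument/detector/distance",
    "wavelength": "/entry/beam/incident_wavelength",
}

def load_dataset(keyword: str, mapping: dict, plugin) -> object:
    """What a data-reduction application would call: keyword in, data out."""
    if keyword not in SAXS_DEFINITION:
        raise KeyError(f"{keyword!r} is not part of the application definition")
    return plugin.read(mapping[keyword])

# Usage with a fake in-memory "file".
plugin = ToyFormatPlugin({
    "/entry/instrument/detector/data": [[0, 1], [2, 3]],
    "/entry/instrument/detector/distance": 1.2,
    "/entry/beam/incident_wavelength": 0.154,
})
print(load_dataset("wavelength", INSTITUTE_A_MAPPING, plugin))  # -> 0.154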
 
THCHAUST05 LHCb Online Log Analysis and Maintenance System Linux, software, network, controls 1228
 
  • J.C. Garnier, L. Brarda, N. Neufeld, F. Nikolaidis
    CERN, Geneva, Switzerland
 
  History has shown that, many times, computer logs are the only information an administrator has about an incident, which could be caused either by a malfunction or by an attack. Due to the huge amount of logs produced by large-scale IT infrastructures such as LHCb Online, critical information may be overlooked or simply drowned in a sea of other messages. This clearly demonstrates the need for an automatic system for long-term maintenance and real-time analysis of the logs. We have constructed a low-cost, fault-tolerant centralized logging system which is able to perform in-depth analysis and cross-correlation of every log. The system is capable of handling O(10000) different log sources and numerous formats, while keeping the overhead as low as possible. It provides log gathering and management, offline analysis and online analysis. We call offline analysis the procedure of analysing old logs for critical information, while online analysis refers to the procedure of early alerting and reacting (a minimal sketch of such pattern-based alerting is given below). The system is extensible and cooperates well with other applications such as Intrusion Detection/Prevention Systems. This paper presents the LHCb Online topology, the problems we had to overcome, and our solutions. Special emphasis is given to log analysis, how we use it for monitoring, and how we maintain uninterrupted access to the logs. We provide performance plots, code modifications to well-known log tools, and our experience from trying various storage strategies.
Slides THCHAUST05 [0.377 MB]
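
A minimal sketch of the "online analysis" idea from THCHAUST05 above (pattern-based early alerting on a live log stream), using only the Python standard library; the patterns and the alert action are placeholders, not the LHCb Online configuration.

import re
import sys

# Placeholder patterns: a real deployment would use a curated, extensible rule set
# (hardware errors, authentication failures, service crashes, ...).
ALERT_PATTERNS = [
    re.compile(r"kernel: .*(I/O error|EXT4-fs error)", re.IGNORECASE),
    re.compile(r"sshd\[\d+\]: Failed password", re.IGNORECASE),
]

def alert(line: str) -> None:
    """Placeholder reaction: print to stderr; a real system would notify operators."""
    print(f"ALERT: {line.rstrip()}", file=sys.stderr)

def analyse(stream) -> int:
    """Scan a log stream line by line and raise an alert on matching lines."""
    matches = 0
    for line in stream:
        if any(p.search(line) for p in ALERT_PATTERNS):
            alert(line)
            matches += 1
    return matches

if __name__ == "__main__":
    # Example usage: tail -F /var/log/syslog | python log_watch.py
    analyse(sys.stdin)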
 
FRBHAULT02 ATLAS Online Determination and Feedback of LHC Beam Parameters database, feedback, monitoring, experiment 1306
 
  • J.G. Cogan, R. Bartoldus, D.W. Miller, E. Strauss
    SLAC, Menlo Park, California, USA
 
  The High Level Trigger of the ATLAS experiment relies on precise knowledge of the position, size and orientation of the luminous region produced by the LHC. Moreover, these parameters change significantly even during a single data-taking run. We present the challenges, solutions and results for the online luminous region (beam spot) determination, and its monitoring and feedback system in ATLAS. The massively parallel calculation is performed on the trigger farm, where individual processors execute a dedicated algorithm that reconstructs event vertices from the proton-proton collision tracks seen in the silicon trackers. Monitoring histograms from all the cores are sampled and aggregated across the farm every 60 seconds. We describe the process by which a standalone application fetches and fits these distributions, extracting the parameters in real time (a toy version of this step is sketched below). When the difference between the nominal and measured beam spot values satisfies threshold conditions, the parameters are published to close the feedback loop. To achieve sharp time boundaries across the event stream, which is triggered at rates of several kHz, a special datagram that signals the pending update to the trigger nodes is injected into the event path via the Central Trigger Processor. Finally, we describe the efficient near-simultaneous database access through a proxy fan-out tree, which allows thousands of nodes to fetch the same set of values in a fraction of a second.
Slides FRBHAULT02 [7.573 MB]
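
A toy version of the fit-and-publish step described in FRBHAULT02 above: aggregate per-node vertex histograms, extract the beam-spot centre and width (here simply from the histogram moments, standing in for a Gaussian fit), and publish only if the shift exceeds a threshold. All names, numbers and thresholds below are illustrative assumptions, not the ATLAS implementation.

import math

def combine(histograms):
    """Sum per-node histograms (same binning assumed) into one farm-wide histogram."""
    total = [0] * len(histograms[0])
    for h in histograms:
        total = [a + b for a, b in zip(total, h)]
    return total

def fit_beam_spot(counts, bin_centres):
    """Estimate centre and width from histogram moments (stand-in for a Gaussian fit)."""
    n = sum(counts)
    mean = sum(c * x for c, x in zip(counts, bin_centres)) / n
    var = sum(c * (x - mean) ** 2 for c, x in zip(counts, bin_centres)) / n
    return mean, math.sqrt(var)

def maybe_publish(measured, nominal, position_tol=0.010, width_tol=0.005):
    """Publish an update only if the position or width moved beyond tolerance (mm)."""
    pos, width = measured
    nom_pos, nom_width = nominal
    if abs(pos - nom_pos) > position_tol or abs(width - nom_width) > width_tol:
        print(f"publishing new beam spot: x = {pos:.4f} mm, sigma = {width:.4f} mm")
        return measured
    return nominal

# Illustrative 60-second snapshot of x-vertex histograms from three trigger nodes.
bins = [-0.10 + 0.02 * i for i in range(11)]           # bin centres in mm
node_hists = [
    [1, 3, 9, 20, 35, 40, 33, 18, 8, 3, 1],
    [0, 2, 8, 22, 37, 41, 30, 20, 9, 2, 1],
    [1, 4, 10, 19, 34, 39, 35, 17, 7, 4, 0],
]
nominal = (0.025, 0.040)                               # previously published values (made up)
measured = fit_beam_spot(combine(node_hists), bins)
nominal = maybe_publish(measured, nominal)             # shift > 10 um, so an update is published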