Keyword: detector
Paper Title Other Keywords Page
MOCOAAB01 The First Running Period of the CMS Detector Controls System - A Success Story controls, experiment, status, hardware 1
 
  • F. Glege, A. Aymeric, O. Chaze, S. Cittolin, J.A. Coarasa, C. Deldicque, M. Dobson, D. Gigi, R. Gomez-Reino, C. Hartl, L. Masetti, F. Meijers, E. Meschi, S. Morovic, C. Nunez-Barranco-Fernandez, L. Orsini, W. Ozga
    CERN, Geneva, Switzerland
  • G. Bauer
    MIT, Cambridge, Massachusetts, USA
  • U. Behrens
    DESY, Hamburg, Germany
  • J. Branson, A. Holzner
    UCSD, La Jolla, California, USA
  • S. Erhan
    UCLA, Los Angeles, California, USA
  • R.K. Mommsen, V. O'Dell
    Fermilab, Batavia, USA
 
  After only three months of commissioning, the CMS detector controls system (DCS) was running at close to 100% efficiency. Despite millions of parameters to control and the distributed development structure typical of HEP, only minor problems were encountered. The system can be operated by a single person and the required maintenance effort is low. A well-factorized system structure and development process, together with a centralized, service-like deployment approach, are key to this success. The underlying controls software, PVSS, has proven to work well in a DCS environment. Converting the DCS to full redundancy will further reduce the need for interventions to a minimum.
slides icon Slides MOCOAAB01 [1.468 MB]  
 
MOPPC043 Development of the Thermal Beam Loss Monitors of the Spiral2 Control System EPICS, controls, monitoring, FPGA 181
 
  • C.H. Haquin
    GANIL, Caen, France
  • F. Negoita
    IFIN, Magurele-Bucuresti, Romania
 
  The Spiral2 linear accelerator will drive high intensity beams of up to 5 mA, reaching up to 200 kW at the linac exit. Such beams can seriously damage and activate the machine! To prevent such situations, the Machine Protection System (MPS) has been designed. This system is connected to diagnostics indicating whether the beam remains within specific limits. As soon as a diagnostic detects that its limit is crossed, it informs the MPS, which in turn takes actions that can lead to a beam cut-off within the required timing constraints. In this process, the Beam Loss Monitors (BLM) monitor the prompt radiation generated by the interaction of beam particles with beam line components, which is responsible for activation on the one hand and thermal effects on the other. The BLM system relies mainly on scintillator detectors, NIM electronics and a VME subsystem monitoring the heating of the machine. This subsystem, also called the "Thermal BLM", will be integrated in the Spiral2 EPICS environment. For its development, a specific project organization has been set up since the development is subcontracted to Cosylab. This paper focuses on the controls aspects of the Thermal BLM and describes this development process.
poster icon Poster MOPPC043 [0.957 MB]  
 
MOPPC056 The Detector Safety System of NA62 Experiment experiment, status, interface, controls 222
 
  • G. Maire, A. Kehrli, S. Ravat
    CERN, Geneva, Switzerland
  • H. Coppier
    ESIEE, Amiens, France
 
  The aim of the NA62 experiment is the study of the rare decay K+ → π+ ν ν̄ at the CERN SPS. The Detector Safety System (DSS) developed at CERN is responsible for ensuring the protection of the experiment's equipment. The DSS requires a high degree of availability and reliability. It is composed of a Front-End and a Back-End part, with the Front-End based on a National Instruments cRIO system to which the safety-critical part is delegated. The cRIO Front-End is capable of running autonomously and of automatically taking predefined protective actions whenever required. It is supervised and configured by the standard CERN PVSS SCADA system. The DSS can easily adapt to the evolving requirements of the experiment during the construction, commissioning and exploitation phases. The NA62 DSS is being installed and was partially commissioned during the NA62 Technical Run in autumn 2012, where components from almost all the detectors as well as the trigger and data acquisition systems were successfully tested. The paper contains a detailed description of this innovative and high-performing solution and demonstrates that it is a good alternative to the LHC systems based on redundant PLCs.
poster icon Poster MOPPC056 [0.613 MB]  
 
MOPPC062 Real-Time System Supervision for the LHC Beam Loss Monitoring System at CERN monitoring, FPGA, database, operation 242
 
  • C. Zamantzas, B. Dehning, E. Effinger, J. Emery, S. Jackson
    CERN, Geneva, Switzerland
 
  The strategy for machine protection and quench prevention of the Large Hadron Collider (LHC) at the European Organisation for Nuclear Research (CERN) is mainly based on the Beam Loss Monitoring (BLM) system. The LHC BLM system is one of the largest and most complex instrumentation systems deployed in the LHC. In addition to protecting the collider, the system also needs to provide a means of diagnosing machine faults and deliver feedback on the losses to the control room as well as to several systems for their setup and analysis. In order to augment the dependability of the system, several layers of supervision have been implemented both internally and externally to the system. This paper describes the different methods employed to achieve the expected availability and system fault detection.
 
MOPPC077 Open Hardware Collaboration: A Way to Improve Efficiency for a Team hardware, controls, electronics, FPGA 273
 
  • Y.-M. Abiven, P. Betinelli-Deck, J. Bisou, F. Blache, G. Renaud, S.Z. Zhang
    SOLEIL, Gif-sur-Yvette, France
 
  SOLEIL* is a third generation synchrotron radiation source located near Paris in France. Today, the Storage Ring delivers photon beam to 26 beamlines. In order to improve machine and beamline performance, new electronics requirements are regularly identified. For these improvements, up-to-date commercial products are preferred, but sometimes custom hardware designs become essential. At SOLEIL, the electronics team (8 people) is in charge of the design, implementation and maintenance of the 2000 pieces of electronics installed for control and data acquisition. This large installed base and small team mean that little time is left to focus on the development of new hardware designs. As an alternative, we focus our development on the Open Hardware (OHWR) initiative from CERN, dedicated to electronics designers at experimental physics facilities who collaborate on hardware designs. We collaborate as both an evaluator and a contributor. We share some boards in the SPI BOARDS PACKAGE** project, developed to face our current challenges. We have evaluated the TDC core project, and we plan to evaluate an FMC carrier. We will present our approach to making developments more efficient, the issues we face and the benefits we obtain.
*: www.synchrotron-soleil.fr
**: www.ohwr.org/projects/spi-board-package
 
 
MOPPC088 Improving Code Quality of the Compact Muon Solenoid Electromagnetic Calorimeter Control Software to Increase System Maintainability software, controls, monitoring, GUI 306
 
  • O. Holme, D.R.S. Di Calafiori, G. Dissertori, L. Djambazov, W. Lustermann, S. Zelepoukine
    ETH, Zurich, Switzerland
  • S. Zelepoukine
    UW-Madison/PD, Madison, Wisconsin, USA
 
  Funding: Swiss National Science Foundation (SNSF)
The Detector Control System (DCS) software of the Electromagnetic Calorimeter (ECAL) of the Compact Muon Solenoid (CMS) experiment at CERN is designed primarily to enable safe and efficient operation of the detector during Large Hadron Collider (LHC) data-taking periods. Through a manual analysis of the code and the adoption of ConQAT*, a software quality assessment toolkit, the CMS ECAL DCS team has made significant progress in reducing complexity and improving code quality, with observable results in terms of a reduction in the effort dedicated to software maintenance. This paper explains the methodology followed, including the motivation to adopt ConQAT, the specific details of how this toolkit was used and the outcomes that have been achieved.
* ConQAT, https://www.conqat.org/
 
poster icon Poster MOPPC088 [2.510 MB]  
 
MOPPC110 The Control System for the CO2 Cooling Plants for Physics Experiments controls, operation, software, interface 370
 
  • L. Zwalinski, J. Daguin, J. Godlewski, J. Noite, M. Ostrega, S. Pavis, P. Petagna, P. Tropea, B. Verlaat
    CERN, Geneva, Switzerland
  • B. Verlaat
    NIKHEF, Amsterdam, The Netherlands
 
  CO2 cooling has become an interesting technology for current and future tracking particle detectors. A key advantage of using CO2 as a refrigerant is its high heat transfer capability, which allows a significant material budget saving, a critical element in state-of-the-art detector technologies. Several CO2 cooling stations, with cooling power ranging from 100 W to several kW, have been developed at CERN to support detector testing for future LHC detector upgrades. Currently, two CO2 cooling plants, for the ATLAS Pixel Insertable B-Layer and the Phase I Upgrade CMS Pixel detector, are under construction. This paper describes the control system design and implementation using the UNICOS framework for the PLCs and SCADA. The control philosophy, safety and interlocking standards, user interfaces and additional features are presented. CO2 cooling is characterized by high operational stability and accurate evaporation temperature control over large distances. Split-range PID controllers with dynamically calculated limiters, multi-level interlocking and new software tools such as an online CO2 p-H diagram jointly enable the cooling plants to fulfil the key requirements of a reliable system.
poster icon Poster MOPPC110 [2.385 MB]  
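
As an illustration of the split-range control idea mentioned in the abstract, the following minimal Python sketch shows how a single PID output can be split between two actuators with a dynamic limiter. It is a conceptual example with invented actuator roles and limits, not the UNICOS/PLC implementation described in the paper.

def split_range(pid_output, lo_limit=0.0, hi_limit=100.0):
    """Map a single 0-100% PID output onto two actuator commands (hypothetical)."""
    u = min(max(pid_output, lo_limit), hi_limit)    # dynamically calculated limiter
    heater = max(0.0, (50.0 - u) / 50.0) * 100.0    # actuator active on the lower half
    chiller = max(0.0, (u - 50.0) / 50.0) * 100.0   # actuator active on the upper half
    return heater, chiller

if __name__ == "__main__":
    for u in (10.0, 50.0, 85.0):
        print(u, split_range(u))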
 
MOPPC150 Channel Access in Erlang EPICS, controls, framework, network 462
 
  • D.J. Nicklaus
    Fermilab, Batavia, USA
 
  We have developed an Erlang language implementation of the Channel Access protocol. Included are low-level functions for encoding and decoding Channel Access protocol network packets as well as higher level functions for monitoring or setting EPICS Process Variables. This provides access to EPICS process variables for the Fermilab Acnet control system via our Erlang-based front-end architecture without having to interface to C/C++ programs and libraries. Erlang is a functional programming language originally developed for real-time telecommunications applications. Its network programming features and list management functions make it particularly well-suited for the task of managing multiple Channel Access circuits and PV monitors.  
poster icon Poster MOPPC150 [0.268 MB]  
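
To make the low-level part of the abstract concrete, the sketch below packs and unpacks the standard 16-byte Channel Access message header (command, payload size, data type, data count, param1, param2, big-endian). It is written in Python for illustration only; the Erlang implementation described above does the equivalent with bit-syntax pattern matching.

import struct

# Standard 16-byte EPICS Channel Access message header, all fields big-endian.
CA_HEADER = struct.Struct(">HHHHII")

def encode_header(command, payload_size, data_type, data_count, p1, p2):
    return CA_HEADER.pack(command, payload_size, data_type, data_count, p1, p2)

def decode_header(buf):
    return CA_HEADER.unpack_from(buf, 0)

# Example: CA_PROTO_VERSION (command 0) announcing minor protocol version 13.
msg = encode_header(0, 0, 0, 13, 0, 0)
print(decode_header(msg))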
 
TUMIB07 RASHPA: A Data Acquisition Framework for 2D X-Ray Detectors hardware, framework, FPGA, software 536
 
  • F. Le Mentec, P. Fajardo, C. Herve, A. Homs, T. Le Caer
    ESRF, Grenoble, France
  • B. Bauvir
    ITER Organization, St. Paul lez Durance, France
 
  Funding: Cluster of Research Infrastructures for Synergies in Physics (CRISP) co-funded by the partners and the European Commission under the 7th Framework Programme Grant Agreement 283745 ESRF
ESRF research programs, along with the foreseen accelerator source upgrade, require state-of-the-art instrumentation devices with high data flow acquisition systems. This paper presents RASHPA, a data acquisition framework targeting 2D X-ray detectors. By combining a highly configurable, multi-link PCI Express over cable data transmission engine with a carefully designed Linux software stack, RASHPA aims at reaching the performance required by current and future detectors.
 
slides icon Slides TUMIB07 [0.168 MB]  
 
TUPPC008 A New Flexible Integration of NeXus Datasets to ANKA by Fuse File Systems software, synchrotron, Linux, neutron 566
 
  • W. Mexner, E. Iurchenko, H. Pasic, D. Ressmann, T. Spangenberg
    KIT, Karlsruhe, Germany
 
  Within the High Data Rate Initiative (HDRI), the German accelerator and neutron facilities of the Helmholtz Association agreed to use NeXus as a common data format. The synchrotron radiation source ANKA decided in 2012 to introduce NeXus as the common data format for all beamlines. Nevertheless, integrating a new data format into existing data processing workflows is challenging: scientists rely on existing data evaluation kits which require specific data formats. To overcome this obstacle, a Linux filesystem in userspace (FUSE) was developed that allows NeXus files to be mounted as a filesystem. Filter rules, easily configurable in XML, allow a very flexible view of the data. Tomography data frames can be accessed directly as TIFF files by any standard picture viewer, or scan data can be presented as a virtual ASCII file compatible with spec.
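
The sketch below illustrates the mapping idea behind such a virtual view: walking a NeXus/HDF5 file with h5py and exposing each detector frame under a virtual TIFF-like name, as a picture viewer would see it through the mounted filesystem. It is not the FUSE implementation itself, and the file name and dataset path are hypothetical.

import h5py

def virtual_listing(nexus_file, frames_path="/entry/instrument/detector/data"):
    """Map each frame of a 3D dataset to a virtual per-frame file name."""
    listing = {}
    with h5py.File(nexus_file, "r") as f:
        frames = f[frames_path]             # expected shape: (n_frames, ny, nx)
        for i in range(frames.shape[0]):
            name = "frame_%05d.tiff" % i    # virtual file name seen by the user
            listing[name] = (frames_path, i)
    return listing

if __name__ == "__main__":
    print(list(virtual_listing("scan_0001.nxs").items())[:3])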
 
TUPPC015 On-line and Off-line Data Analysis System for SACLA Experiments experiment, data-analysis, laser, data-acquisition 580
 
  • T. Sugimoto, Y. Furukawa, Y. Joti, T.K. Kameshima, K. Okada, R. Tanaka, M. Yamaga
    JASRI/SPring-8, Hyogo-ken, Japan
  • T. Abe
    RIKEN SPring-8 Center, Innovative Light Sources Division, Hyogo, Japan
 
  The X-ray Free-Electron Laser facility SACLA has delivered X-ray laser beams to users since March 2012 [1]. Typical user experiments utilize two-dimensional imaging sensors, which generate 10 MB per accelerator beam shot. At a 60 Hz beam repetition rate, experimental data are accumulated at a rate of 600 MB/second using a dedicated data-acquisition (DAQ) system [2]. To analyze such a large amount of data, we developed a data-analysis system for SACLA experiments. The system consists of on-line and off-line sections. The on-line section performs on-the-fly filtering using data handling servers, which examine data quality and record the results in a database on an event-by-event basis. By querying the database, we can select good events before performing the off-line analysis. The off-line section performs precise analysis utilizing a high-performance computing system, such as physical image reconstruction and rough three-dimensional structure analysis of the data samples. For large-scale image reconstructions, we also plan to use an external supercomputer. In this paper, we present an overview and the future plans of the SACLA analysis system.
[1] T. Ishikawa et al., Nature Photonics 6, 540-544 (2012).
[2] M. Yamaga et al., ICALEPCS 2011, TUCAUST06, 2011.
 
poster icon Poster TUPPC015 [10.437 MB]  
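
A minimal sketch of the event-by-event bookkeeping idea described above: on-line filters record a quality flag per event in a database, and the off-line step selects only the good events. The schema, threshold and field names below are invented for illustration and are not the actual SACLA database layout.

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE events (run INTEGER, tag INTEGER, hit_score REAL, is_good INTEGER)")

def record_event(run, tag, hit_score, threshold=0.7):
    """On-line section: store the per-event quality verdict."""
    db.execute("INSERT INTO events VALUES (?, ?, ?, ?)",
               (run, tag, hit_score, int(hit_score > threshold)))

for tag, score in enumerate([0.2, 0.9, 0.75, 0.1]):
    record_event(run=100, tag=tag, hit_score=score)

# Off-line section: fetch only the good events for precise analysis.
good_tags = [row[0] for row in
             db.execute("SELECT tag FROM events WHERE run=? AND is_good=1", (100,))]
print(good_tags)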
 
TUPPC038 Simultaneous On-line Ultrasonic Flowmetery and Binary Gas Mixture Analysis for the ATLAS Silicon Tracker Cooling Control System controls, electronics, operation, Ethernet 642
 
  • M. Doubek, V. Vacek, M. Vitek
    Czech Technical University in Prague, Faculty of Mechanical Engineering, Prague, Czech Republic
  • R.L. Bates, A. Bitadze
    University of Glasgow, Glasgow, Scotland, United Kingdom
  • M. Battistin, S. Berry, J. Berthoud, P. Bonneau, J. Botelho-Direito, G. Bozza, O. Crespo-Lopez, E. Da Riva, B. Di Girolamo, G. Favre, J. Godlewski, D. Lombard, L. Zwalinski
    CERN, Geneva, Switzerland
  • N. Bousson, G.D. Hallewell, M. Mathieu, A. Rozanov
    CPPM, Marseille, France
  • G. Boyd
    University of Oklahoma, Norman, Oklahoma, USA
  • C. Degeorge
    Indiana University, Bloomington, Indiana, USA
  • C. Deterre
    DESY, Hamburg, Germany
  • S. Katunin
    PNPI, Gatchina, Leningrad District, Russia
  • S. McMahon
    STFC/RAL, Chilton, Didcot, Oxon, United Kingdom
  • K. Nagai
    University of Tsukuba, Graduate School of Pure and Applied Sciences, Tsukuba, Ibaraki, Japan
  • C. Rossi
    Università degli Studi di Genova, Genova, Italy
 
  We describe a combined ultrasonic instrument for continuous gas flow measurement and simultaneous real-time binary gas mixture analysis. The analysis algorithm compares real-time measurements with a stored database of sound velocity vs. gas composition. The instrument was developed for the ATLAS silicon tracker evaporative cooling system, where the C3F8 refrigerant may be replaced by a blend with 25% C2F6, allowing a lower evaporation temperature as the LHC luminosity increases. The instrument has been developed in two geometries. A version with an axial sound path has demonstrated a precision of 1% of full scale for flows up to 230 l/min. A resolution of 0.3% is seen in C3F8/C2F6 molar mixtures, and a sensitivity of better than 0.005% to traces of C3F8 in nitrogen, during a one-year continuous study in a system with sequenced multi-stream sampling. A high-flow version has demonstrated a resolution of 1.9% of full scale for flows up to 7500 l/min. The instrument can provide rapid feedback in control systems operating with refrigerants or binary gas mixtures in detector applications. Other uses include anesthesia, and the analysis of hydrocarbons and vapor mixtures for semiconductor manufacture.
* Comm. author: martin.doubek@cern.ch
Ref.: R. Bates et al., "Combined ultrasonic flow meter & binary vapour analyzer for ATLAS", 2013 JINST 8 C01002
 
poster icon Poster TUPPC038 [1.834 MB]  
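
The analysis principle described above can be illustrated with a short numerical sketch: a measured sound velocity is compared with a stored calibration of velocity vs. molar composition, and the mixture fraction is interpolated. The calibration numbers below are fictitious placeholders, not values from the instrument's database.

import numpy as np

c2f6_fraction = np.linspace(0.0, 0.3, 7)                 # molar fraction of C2F6
sound_speed   = np.array([115.0, 116.2, 117.4, 118.7,
                          120.0, 121.3, 122.7])          # m/s at fixed T, p (made up)

def mixture_from_velocity(measured_speed):
    """Interpolate the C2F6 molar fraction from a measured sound velocity."""
    return float(np.interp(measured_speed, sound_speed, c2f6_fraction))

print(mixture_from_velocity(119.3))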
 
TUPPC044 When Hardware and Software Work in Concert controls, experiment, interface, operation 661
 
  • M. Vogelgesang, T. Baumbach, T. Farago, A. Kopmann, T. dos Santos Rolo
    KIT, Eggenstein-Leopoldshafen, Germany
 
  Funding: Partially funded by BMBF under the grants 05K10CKB and 05K10VKE.
Integrating control and high-speed data processing is a fundamental requirement to operate a beam line efficiently and improve user's beam time experience. Implementing such control environments for data intensive applications at synchrotrons has been difficult because of vendor-specific device access protocols and distributed components. Although TANGO addresses the distributed nature of experiment instrumentation, standardized APIs that provide uniform device access, process control and data analysis are still missing. Concert is a Python-based framework for device control and messaging. It implements these programming interfaces and provides a simple but powerful user interface. Our system exploits the asynchronous nature of device accesses and performs low-latency on-line data analysis using GPU-based data processing. We will use Concert to conduct experiments to adjust experimental conditions using on-line data analysis, e.g. during radiographic and tomographic experiments. Concert's process control mechanisms and the UFO processing framework* will allow us to control the process under study and the measuring procedure depending on image dynamics.
* Vogelgesang, Chilingaryan, Rolo, Kopmann: “UFO: A Scalable GPU-based Image Processing Framework for On-line Monitoring”
 
poster icon Poster TUPPC044 [4.318 MB]  
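
The pattern of exploiting asynchronous device access while analysis runs on-line can be sketched generically as below. This asyncio example is illustrative only; it does not use the actual Concert API, and all device operations are stand-ins.

import asyncio

async def move_motor(position):
    await asyncio.sleep(0.1)           # stand-in for a TANGO/driver call
    return position

async def grab_frame():
    await asyncio.sleep(0.05)          # stand-in for a detector readout
    return [[0] * 4] * 4               # dummy image

async def analyse(frame):
    return sum(sum(row) for row in frame)

async def scan(positions):
    for p in positions:
        # motion and acquisition proceed concurrently; analysis follows
        pos, frame = await asyncio.gather(move_motor(p), grab_frame())
        metric = await analyse(frame)
        print(f"position={pos} metric={metric}")

asyncio.run(scan([0.0, 0.5, 1.0]))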
 
TUPPC045 Software Development for High Speed Data Recording and Processing software, monitoring, network, controls 665
 
  • D. Boukhelef, J. Szuba, K. Wrona, C. Youngman
    XFEL. EU, Hamburg, Germany
 
  Funding: The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 283745.
The European XFEL beam delivery defines a unique time structure that requires acquiring and processing data in short bursts of up to 2700 images every 100 ms. The 2D pixel detectors being developed produce up to 10 GB/s of 1-Mpixel image data. Efficient handling of this huge data volume requires large network bandwidth and computing capabilities. The architecture of the DAQ system is hierarchical and modular. The DAQ network uses 10 GbE switched links to provide large-bandwidth data transport between the front-end interfaces (FEI), the data handling PC layer servers, and the storage and analysis clusters. The front-end interfaces are required to build images acquired during a burst into pulse-ordered image trains and forward them to the PC layer farm. The PC layer consists of dedicated high-performance computers for raw data monitoring, processing and filtering, and for aggregating data files that are then distributed to on-line storage and data analysis clusters. In this contribution we give an overview of the DAQ system architecture and communication protocols, as well as the software stack for data acquisition, pre-processing, monitoring, storage and analysis.
 
poster icon Poster TUPPC045 [1.323 MB]  
 
TUPPC047 The New TANGO-based Control and Data Acquisition System of the GISAXS Instrument GALAXI at Forschungszentrum Jülich TANGO, controls, software, neutron 673
 
  • H. Kleines, A. Ackens, M. Bednarek, K. Bussmann, M. Drochner, L. Fleischhauer-Fuss, M. Heinzler, P. Kaemmerling, F.-J. Kayser, S. Kirstein, K.-H. Mertens, R. Möller, U. Rücker, F. Suxdorf, M. Wagener, S. van Waasen
    FZJ, Jülich, Germany
 
  Forschungszentrum Jülich operated the SAXS instrument JUSIFA at DESY in Hamburg for more than twenty years. With the shutdown of the DORIS ring, JUSIFA was relocated to Jülich. Based on most JUSIFA components (with major mechanical modifications) and a MetalJet high-performance X-ray source from Bruker AXS, the new GISAXS instrument GALAXI was built by JCNS (Jülich Centre for Neutron Science). GALAXI was equipped with new electronics and a completely new control and data acquisition system by ZEA-2 (Zentralinstitut für Engineering, Elektronik und Analytik 2 – Systeme der Elektronik, formerly ZEL). On the basis of good experience with the TACO control system, ZEA-2 decided that GALAXI should be the first instrument of Forschungszentrum Jülich to use the successor system TANGO. The application software on top of TANGO is based on pyfrid, which was originally developed for the neutron scattering instruments of JCNS and provides a scripting interface as well as a Web GUI. The design of the new control and data acquisition system is presented and the lessons learned from the introduction of TANGO are reported.
 
TUPPC050 Control, Safety and Diagnostics for Future ATLAS Pixel Detectors controls, monitoring, diagnostics, operation 679
 
  • S. Kersten, P. Kind, P. Mättig, L. Puellen, S. Weber, C. Zeitnitz
    Bergische Universität Wuppertal, Wuppertal, Germany
  • F. Gensolen
    CPPM, Marseille, France
  • S. Kovalenko, K. Lantzsch
    CERN, Geneva, Switzerland
 
  To ensure the excellent performance of the ATLAS Pixel detector during the next run periods of the LHC, with their increasing demands, two upgrades of the pixel detector are foreseen. The first takes place in the first long shutdown, which is currently ongoing. During this period an additional layer, the Insertable B-Layer (IBL), will be installed. The second upgrade will replace the entire pixel detector and is planned for 2020, when the LHC will be upgraded to the HL-LHC. Since, once installed, no access is possible for years, a highly reliable control system is required. It has to supply the detector with all entities required for operation, protect it at all times, and provide detailed information to diagnose the detector's behaviour. Design constraints are the sensitivity of the sensors and the reduction of material inside the tracker volume. We report on the construction of the control system for the Insertable B-Layer and present a concept for the control of the pixel detector at the HL-LHC. While the latter requires completely new strategies, the control system of the IBL includes individual new components which can be developed further for the long-term upgrade.
poster icon Poster TUPPC050 [0.566 MB]  
 
TUPPC053 New Control System for the SPES Off-line Laboratory at LNL-INFN using EPICS IOCs based on the Raspberry Pi EPICS, controls, interface, Ethernet 687
 
  • J.A. Vásquez, A. Andrighetto, G.P. Prete
    INFN/LNL, Legnaro (PD), Italy
  • M. Bertocco
    UNIPD, Padova (PD), Italy
 
  SPES (Selective Production of Exotic Species) is an ISOL-type RIB facility of LNL-INFN in Italy dedicated to the production of neutron-rich radioactive nuclei by uranium fission. At LNL, an off-line laboratory has been developed over the last four years in order to study the target front-end test bench. The instrumentation devices are controlled using EPICS. A new flexible, easy-to-adapt, low-cost and open solution for this control system is being tested. It consists of EPICS IOCs developed at LNL which are based on the low-cost Raspberry Pi computer board with custom-made expansion boards. The operating system is a modified version of Debian Linux running EPICS soft IOCs that communicate with the expansion boards using home-made drivers. The expansion boards consist of multi-channel 16-bit ADCs and DACs, digital inputs and outputs, and stepper motor drivers. The idea is to have a distributed control system using customized IOCs to control the instrumentation devices on the system as well as to read the information from the detectors, using EPICS Channel Access as the communication protocol. This solution is very cost effective and easy to customize.
poster icon Poster TUPPC053 [2.629 MB]  
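
Since the distributed IOCs are reached over Channel Access, a client-side interaction can be sketched with pyepics as below. The PV names are invented for illustration; the real SPES record names will differ, and a running soft IOC serving these records is assumed.

from epics import PV   # pyepics Channel Access client

adc_channel = PV("SPES:RPI01:ADC:CH0")     # hypothetical 16-bit ADC readback
dac_setpoint = PV("SPES:RPI01:DAC:CH0")    # hypothetical 16-bit DAC output

def on_adc_update(pvname=None, value=None, **kwargs):
    print(f"{pvname} changed to {value}")

adc_channel.add_callback(on_adc_update)    # monitor the input channel
dac_setpoint.put(1.25)                     # drive the output channel
print("current ADC reading:", adc_channel.get())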
 
TUPPC059 EPICS Data Acquisition Device Support EPICS, interface, timing, software 707
 
  • V.A. Isaev, N. Claesson
    Cosylab, Ljubljana, Slovenia
  • M. Pleško, K. Žagar
    COBIK, Solkan, Slovenia
 
  A large number of devices offer similar kinds of capabilities. For example, data acquisition devices all offer sampling at some rate. If each such device had a different interface, engineers using them would need to be familiar with each device specifically, inhibiting the transfer of know-how from working with one device to another and increasing the chance of engineering errors due to miscomprehension or incorrect assumptions. With the Nominal Device Model (NDM), we propose to standardize the EPICS interface of analog and digital input and output devices, and of image acquisition devices. The model describes an input/output device which can have digital or analog channels, where channels can be configured for output or input. Channels can be organized in groups that have common parameters. NDM is implemented as the EPICS Nominal Device Support (NDS) library. It provides a C++ interface to developers of device-specific drivers. NDS itself inherits from the well-known asynPortDriver. NDS hides from the developer all the complexity of the communication with asynDriver and allows them to focus on the business logic of the device itself.
poster icon Poster TUPPC059 [0.371 MB]  
 
TUPPC060 Implementation of Continuous Scans Used in Beamline Experiments at Alba Synchrotron experiment, hardware, controls, software 710
 
  • Z. Reszela, F. Becheri, G. Cuní, D. Fernández-Carreiras, J. Moldes, C. Pascual-Izarra
    CELLS-ALBA Synchrotron, Cerdanyola del Vallès, Spain
  • T.M. Coutinho
    ESRF, Grenoble, France
 
  The Alba control system* is based on Sardana**, a software package implemented in Python, built on top of Tango*** and oriented towards beamline and accelerator control and data acquisition. Sardana provides an advanced scan framework which is commonly used in all the beamlines of Alba as well as at other institutes. This framework provides standard macros and comprises various scanning modes: step, hybrid and software-continuous, but not hardware-continuous. Continuous scans speed up the data acquisition, making them a great asset for most experiments and, due to time constraints, mandatory for a few of them. A continuous scan has been developed and installed in three beamlines, where it reduced the time overheads of the step scans. Furthermore, it can easily be adapted to any other experiment and will be used as a basis for extending the Sardana scan framework with generic continuous scan capabilities. This article describes the requirements, plan and implementation of the project as well as its results and possible improvements.
*"The design of the Alba Control System. […]" D. Fernández et al, ICALEPCS2011
**"Sardana, The Software for Building SCADAS […]" T.M. Coutinho et al, ICALEPCS2011
***www.tango-controls.org
 
poster icon Poster TUPPC060 [13.352 MB]  
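
The bookkeeping behind a hardware-continuous scan can be sketched as follows: for a constant-velocity pass from start to end with N acquisition points, compute the trigger positions, the required motor velocity and the expected timestamps. This is a conceptual illustration under those assumptions, not Sardana's internal code.

import numpy as np

def continuous_scan_plan(start, end, n_points, integration_time):
    positions = np.linspace(start, end, n_points)        # hardware trigger positions
    step = (end - start) / (n_points - 1)
    velocity = step / integration_time                   # constant scan speed
    timestamps = np.arange(n_points) * integration_time  # expected trigger times
    return positions, velocity, timestamps

pos, vel, ts = continuous_scan_plan(start=0.0, end=1.0, n_points=11,
                                    integration_time=0.1)
print(vel, pos[:3], ts[:3])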
 
TUPPC064 Reusing the Knowledge from the LHC Experiments to Implement the NA62 Run Control controls, experiment, hardware, framework 725
 
  • F. Varela, M. Gonzalez-Berges
    CERN, Geneva, Switzerland
  • N. Lurkin
    UCL, Louvain-la-Neuve, Belgium
 
  NA62 is an experiment designed to measure very rare kaon decays at the CERN SPS planned to start operation in 2014. Until this date, several intermediate run periods have been scheduled to exercise and commission the different parts and subsystems of the detector. The Run Control system monitors and controls all processes and equipment involved in data-taking. This system is developed as a collaboration between the NA62 Experiment and the Industrial Controls and Engineering (EN-ICE) Group of the Engineering Department at CERN. In this paper, the contribution of EN-ICE to the NA62 Run Control project is summarized. EN-ICE has promoted the utilization of standardized control technologies and frameworks at CERN, which were originally developed for the controls of the LHC experiments. This approach has enabled to deliver a working system for the 2013 Technical Run that exceeded the initial requirements, in a very short time and with limited manpower.  
 
TUPPC066 10 Years of Experiment Control at SLS Beam Lines: an Outlook to SwissFEL controls, EPICS, FEL, operation 729
 
  • J. Krempaský, U. Flechsig, B. Kalantari, X.Q. Wang
    PSI, Villigen PSI, Switzerland
  • T. Mooney
    ANL, Argonne, USA
  • M.L. Rivers
    CARS, Argonne, Ilinois, USA
 
  Today, after nearly 10 years of consolidated user operation at the Swiss Light Source (SLS) with up to 18 beam lines, we are looking back to briefly describe the success story based on EPICS controls toolkit and give an outlook towards the X-ray free-electron laser SwissFEL, the next challenging PSI project. We focus on SLS spectroscopy beam lines with experimental setups rigorously based on the SynApps "Positioner-Trigger-Detector" (PTD) anatomy [2]. We briefly describe the main beam line “Positioners” used inside the PTD concept. On the “Detector” side an increased effort is made to standardize the control within the areaDetector (AD) software package [3]. For the SwissFEL two detectors are envisaged: the Gotthard 1D and Jungfrau 2D pixel detectors, both built at PSI. Consistently with the PTD-anatomy, their control system framework based on the AD package is in preparation. In order to guarantee data acquisition with the SwissFEL nominal 100 Hz rate, the “Trigger” is interconnected with the SwissFEL timing system to guarantee shot-to-shot operation [4]. The AD plug-in concept allows significant data reduction; we believe this opens the doors towards on-line FEL experiments.
[1] Krempaský et al, ICALEPCS 2001
[2] www.aps.anl.gov/bcda/synApps/index.php
[3] M. Rivers, SRI 2009, Melbourne
[4] B. Kalantari et al, ICALEPCS 2011
 
 
TUPPC069 ZEBRA: a Flexible Solution for Controlling Scanning Experiments FPGA, EPICS, interface, controls 736
 
  • T.M. Cobb, Y.S. Chernousko, I.S. Uzun
    Diamond, Oxfordshire, United Kingdom
 
  This paper presents the ZEBRA product developed at Diamond Light Source. ZEBRA is a stand-alone event handling system with interfaces to multi-standard digital I/O signals (TTL, LVDS, PECL, NIM and Open Collector) and RS422 quadrature incremental encoder signals. Input events can be triggered by input signals, encoder position signals or repetitive time signals, and can be combined using logic gates in an FPGA to generate and output other events. The positions of all 4 encoders can be captured at the time of a given event and made available to the controlling system. All control and status is available through a serial protocol, so there is no dependency on a specific higher level control system. We have found it has applications on virtually all Diamond beamlines, from applications as simple as signal level shifting to, for example, using it for all continuous scanning experiments. The internal functionality is reconfigurable on the fly through the user interface and can be saved to static memory. It provides a flexible solution to interface different third party hardware (detectors and motion controllers) and to configure the required functionality as part of the experiment.  
poster icon Poster TUPPC069 [2.909 MB]  
 
TUPPC070 Detector Controls for the NOvA Experiment Using Acnet-in-a-Box controls, PLC, monitoring, interface 740
 
  • D.J. Nicklaus, L.R. Carmichael, D. Finstrom, B. Hendricks, CA. King, W.L. Marsh, R. Neswold, J.F. Patrick, J.G. Smedinghoff, J. You
    Fermilab, Batavia, USA
 
  In recent years, we have packaged the Fermilab accelerator control system, Acnet, so that other instances of it can be deployed independently of the Fermilab infrastructure. This encapsulated "Acnet-in-a-Box" is installed as the detector control system at the NOvA Far Detector. NOvA is a neutrino experiment using a beam of particles produced by the Fermilab accelerators. There are two NOvA detectors: a 330-ton "Near Detector" on the Fermilab campus and a 14000-ton "Far Detector" 735 km away. All key tiers and aspects of Acnet are available in the NOvA instantiation, including the central device database, Java Open Access Clients, Erlang front-ends, application consoles, synoptic displays, data logging, and state notifications. Acnet at NOvA is used for power-supply control, monitoring position and strain gauges, environmental control, PLC supervision, relay rack monitoring, and interacting with EPICS PVs instrumenting the detector's avalanche photo-diodes. We discuss the challenges of maintaining a control system in a remote location, synchronizing updates between the instances, and improvements made to Acnet as a result of our NOvA experience.
poster icon Poster TUPPC070 [0.876 MB]  
 
TUPPC083 FPGA Implementation of a Digital Constant Fraction for Fast Timing Studies in the Picosecond Range FPGA, neutron, timing, real-time 774
 
  • P. Mutti, J. Ratel, F. Rey, E. Ruiz-Martinez
    ILL, Grenoble, France
 
  Thermal or cold neutron capture on different fission systems is an excellent method to produce a variety of very neutron-rich nuclei. Since neutrons at these energies bring into the reaction just enough energy to produce fission, the fragments remain neutron-rich due to the negligible neutron evaporation, thus allowing detailed nuclear structure studies. In 2012 and 2013 a combination of EXOGAM, GASP and Lohengrin germanium detectors was installed at the PF1B cold neutron beam of the Institut Laue-Langevin. The present paper describes the digital acquisition system used to collect information on all gamma rays emitted by the decaying nuclei. Data have been acquired in a trigger-less mode to preserve a maximum of information for further off-line treatment, with a total throughput of about 10 MB/s. Special emphasis is devoted to the FPGA implementation of an on-line digital constant fraction algorithm allowing fast timing studies in the picosecond range.
poster icon Poster TUPPC083 [9.928 MB]  
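
A digital constant-fraction algorithm of the kind implemented in the FPGA can be sketched off-line with numpy: the bipolar CFD signal is formed by subtracting a scaled copy of the pulse from a delayed copy, and the zero crossing is linearly interpolated to obtain sub-sample timing. The delay, fraction and pulse shape below are illustrative, not the values used at the ILL.

import numpy as np

def digital_cfd(pulse, delay=4, fraction=0.35):
    """Return the interpolated zero-crossing index of the CFD signal."""
    delayed = np.roll(pulse, delay)
    cfd = delayed - fraction * pulse
    # first negative-to-positive zero crossing
    idx = np.where((cfd[:-1] < 0) & (cfd[1:] >= 0))[0]
    if len(idx) == 0:
        return None
    i = idx[0]
    # linear interpolation between samples i and i+1 gives sub-sample timing
    return i + cfd[i] / (cfd[i] - cfd[i + 1])

t = np.arange(64)
pulse = np.exp(-0.5 * ((t - 20) / 3.0) ** 2)   # toy detector pulse
print(digital_cfd(pulse))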
 
TUPPC086 Electronics Developments for High Speed Data Throughput and Processing FPGA, controls, interface, timing 778
 
  • C. Youngman, B. Fernandes, P. Gessler
    XFEL. EU, Hamburg, Germany
  • J. Coughlan
    STFC/RAL, Chilton, Didcot, Oxon, United Kingdom
  • E. Motuk
    UCL, London, United Kingdom
  • M. Zimmer
    DESY, Hamburg, Germany
 
  Funding: The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement No. 283745
The European XFEL DAQ system has to acquire and process data in short bursts every 100 ms. Bursts last for 600 µs and contain a maximum of 2700 X-ray pulses at a repetition rate of 4.5 MHz, which have to be captured and processed before the next burst starts. This time structure defines the boundary conditions for almost all diagnostic and detector-related DAQ electronics required and currently being developed for the start of operation in fall 2015. Standards used in the electronics developments are: MicroTCA.4 and AdvancedTCA crates, use of FPGAs for data processing, transfer to back-end systems via 10 Gbps (SFP+) links, and feedback information transfer using 3.125 Gbps (SFP) links. Electronics being developed in-house or in collaboration with external institutes and companies include: a Train Builder ATCA blade for assembling and processing data from large-area image detectors, a VETO MTCA.4 development for evaluating pulse information and distributing a trigger decision to detector front-end ASICs and FPGAs with low latency, an MTCA.4 digitizer module, interface boards for timing and similar synchronization information, etc.
 
poster icon Poster TUPPC086 [0.983 MB]  
 
TUPPC090 Digital Control System of High Extensibility for KAGRA controls, laser, power-supply, cryogenics 794
 
  • H. Kashima, N. Araki, M. Ishizuka, T. Masuoka, H. Mukai
    Hitachi Zosen, Osaka, Japan
  • O. Miyakawa
    ICRR, Chiba, Japan
 
  KAGRA is the large-scale cryogenic gravitational wave telescope project in Japan, developed and constructed by the ICRR of the University of Tokyo. Hitz Hitachi Zosen produced the PCI Express I/O chassis and the anti-aliasing/anti-imaging filter boards for the KAGRA digital control system. These products are very important for the KAGRA interferometer from the point of view of low-noise operation. This paper reports the performance of these products.
poster icon Poster TUPPC090 [0.487 MB]  
 
TUPPC108 Using Web Syndication for Flexible Remote Monitoring site, controls, operation, experiment 825
 
  • O. Pinazza
    INFN-Bologna, Bologna, Italy
  • A. Augustinus, P.M. Bond, P.Ch. Chochula, M. Lechman, P. Rosinský
    CERN, Geneva, Switzerland
  • A.N. Kurepin
    RAS/INR, Moscow, Russia
 
  With the experience gained in the first years of running the ALICE apparatus, we have identified the need to collect and aggregate different data to be displayed to the user in a simplified, personalized and clear way. The data come from different sources in several formats and can contain values, text or pictures, or can simply be a link to extended content. This paper describes the design of a light and flexible infrastructure to aggregate information produced in different systems and offer it to readers. In this model, readers are presented with the information relevant to them, without being obliged to browse through different systems. The project consists of data production, collection and syndication, and is being developed in parallel with more traditional monitoring interfaces, with the aim of offering ALICE users an alternative and convenient way to stay updated about their preferred systems even when they are far from the experiment.
poster icon Poster TUPPC108 [1.301 MB]  
 
TUPPC115 Hierarchies of Alarms for Large Distributed Systems controls, experiment, interface, diagnostics 844
 
  • M. Boccioli, M. Gonzalez-Berges, V. Martos
    CERN, Geneva, Switzerland
  • O. Holme
    ETH, Zurich, Switzerland
 
  The control systems of most of the infrastructure at CERN make use of the SCADA package WinCC OA by ETM, including successful projects to control large-scale systems (e.g. the LHC accelerator and the associated experiments). Each of these systems features up to 150 supervisory computers and several million parameters. To handle such large systems, the control topologies are designed in a hierarchical way (e.g. sensor, module, detector, experiment), with the main goal of supervising a complete installation with a single person from a central user interface. One of the key features to achieve this is alarm management (generation, handling, storage, reporting). Although most critical systems include automatic reactions to faults, alarms are fundamental for intervention and diagnostics. Since one installation can have up to 250k alarms defined, a major failure may create an avalanche of alarms that is difficult for an operator to interpret. Missing important alarms may lead to downtime or to danger for the equipment. The paper presents the developments made in recent years in WinCC OA to work with large hierarchies of alarms and to present summarized information to the operators.
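
The summarization principle behind such alarm hierarchies can be sketched in a few lines: each node reports the most severe alarm state of its children, so the operator at the top of the tree (experiment → detector → module → sensor) sees one summarized state instead of an avalanche of individual alarms. This is a conceptual illustration, not WinCC OA's alarm handling.

SEVERITY = {"OK": 0, "WARNING": 1, "ERROR": 2, "FATAL": 3}

def summarize(node):
    """node = {'name': str, 'alarm': str, 'children': [node, ...]}"""
    states = [node.get("alarm", "OK")]
    states += [summarize(child) for child in node.get("children", [])]
    return max(states, key=lambda s: SEVERITY[s])

experiment = {
    "name": "experiment",
    "children": [
        {"name": "detectorA", "children": [
            {"name": "module1", "alarm": "OK"},
            {"name": "module2", "alarm": "ERROR"},
        ]},
        {"name": "detectorB", "alarm": "WARNING"},
    ],
}
print(summarize(experiment))   # -> "ERROR"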
 
TUCOCA06 Current Status of a Carborne Survey System, KURAMA survey, monitoring, radiation, operation 926
 
  • M. Tanigaki, Y. Kobayashi, R. Okumua, N. Sato, K. Takamiya, H. Yoshinaga, H. Yoshino
    Kyoto University, Research Reactor Institute, Osaka, Japan
 
  A carborne survey system named KURAMA (Kyoto University RAdiation MApping system) has been developed in response to the nuclear accident at the TEPCO Fukushima Daiichi Nuclear Power Plant in 2011. The system has since evolved into the CompactRIO-based KURAMA-II and serves various types of applications. More than a hundred KURAMA-II units are deployed by the Japanese government for the periodic drawing of the radiation map of East Japan. Continuous radiation monitoring with KURAMA-II on local buses has started in Fukushima prefecture as a collaboration project among Kyoto University, the Fukushima prefectural government and JAEA. Extended applications such as precise radiation mapping of farmland and parks are also on the way. The present status and future prospects of KURAMA and KURAMA-II are presented.
 
TUCOCB03 A Practical Approach to Ontology-Enabled Control Systems for Astronomical Instrumentation controls, software, DSL, database 952
 
  • W. Pessemier, G. Deconinck, G. Raskin, H. Van Winckel
    KU Leuven, Leuven, Belgium
  • P. Saey
    Katholieke Hogeschool Sint-Lieven, Gent, Belgium
 
  Even though modern service-oriented and data-oriented architectures promise to deliver loosely coupled control systems, they are inherently brittle as they commonly depend on a priori agreed interfaces and data models. At the same time, the Semantic Web and a whole set of accompanying standards and tools are emerging, advocating ontologies as the basis for knowledge exchange. In this paper we aim to identify a number of key ideas from the myriad of knowledge-based practices that can readily be implemented by control systems today. We demonstrate with a practical example (a three-channel imager for the Mercator Telescope) how ontologies developed in the Web Ontology Language (OWL) can serve as a meta-model for our instrument, covering as many engineering aspects of the project as needed. We show how a concrete system model can be built on top of this meta-model via a set of Domain Specific Languages (DSLs), supporting both formal verification and the generation of software and documentation artifacts. Finally we reason how the available semantics can be exposed at run-time by adding a “semantic layer” that can be browsed, queried, monitored etc. by any OPC UA-enabled client.  
slides icon Slides TUCOCB03 [2.130 MB]  
 
WECOAAB03 Synchronization of Motion and Detectors and Continuous Scans as the Standard Data Acquisition Technique hardware, software, controls, data-acquisition 992
 
  • D.F.C. Fernández-Carreiras, F. Becheri, G. Cuní, R. Homs-Puron, G. Jover-Mañas, J. Klora, O. Matilla, J. Moldes, C. Pascual-Izarra, Z. Reszela, D. Roldan, S. Rubio-Manrique, X. Serra-Gallifa
    CELLS-ALBA Synchrotron, Cerdanyola del Vallès, Spain
  • T.M. Coutinho
    ESRF, Grenoble, France
 
  This paper describes the model, objectives and implementation of a generic data acquisition structure for an experimental station, which integrates the hardware and software synchronization of motors, detectors, shutters and in general any experimental channel or events related with the experiment. The implementation involves the management of hardware triggers, which can be derived from time, position of encoders or even events from the particle accelerator, combined with timestamps for guaranteeing the correct integration of software triggered or slow channels. The infrastructure requires a complex management of buffers of different sources, centralized and distributed, including interpolation procedures. ALBA uses Sardana built on TANGO as the generic control system, which provides the abstraction and communication with the hardware, and a complete macro edition and execution environment.  
slides icon Slides WECOAAB03 [2.432 MB]  
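
The timestamp-based merging mentioned above can be illustrated with a short numpy sketch: values from a slow, software-read channel are interpolated onto the timestamps of the hardware triggers so that every acquisition point gets a consistent set of channel values. The numbers are illustrative only.

import numpy as np

trigger_times = np.linspace(0.0, 10.0, 21)               # hardware trigger timestamps (s)

slow_times  = np.array([0.0, 2.5, 5.0, 7.5, 10.0])       # software-read channel timestamps
slow_values = np.array([20.1, 20.4, 20.9, 21.3, 21.6])   # e.g. a temperature readback

# resample the slow channel onto the hardware trigger timeline
slow_on_triggers = np.interp(trigger_times, slow_times, slow_values)
print(slow_on_triggers[:5])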
 
WECOBA04 Effective End-to-end Management of Data Acquisition and Analysis for X-ray Photon Correlation Spectroscopy photon, experiment, real-time, status 1004
 
  • F. Khan, J.P. Hammonds, S. Narayanan, A. Sandy, N. Schwarz
    ANL, Argonne, USA
 
  Funding: Work supported by U.S. Department of Energy, Office of Science, under Contract No. DE-AC02-06CH11357.
Low latency between data acquisition and analysis is of critical importance to any experiment. The combination of a faster parallel algorithm and a data pipeline for connecting disparate components (detectors, clusters, file formats) enabled us to greatly enhance the operational efficiency of the x-ray photon correlation spectroscopy experiment facility at the Advanced Photon Source. The improved workflow starts with raw data (120 MB/s) streaming directly from the detector camera, through an on-the-fly discriminator implemented in firmware to Hadoop’s distributed file system in a structured HDF5 data format. The user then triggers the MapReduce-based parallel analysis. For effective bookkeeping and data management, the provenance information and reduced results are added to the original HDF5 file. Finally, the data pipeline triggers user specific software for visualizing the data. The whole process is completed shortly after data acquisition – a significant improvement of operation over previous setup. The faster turn-around time helps scientists to make near real-time adjustments to the experiments.
 
slides icon Slides WECOBA04 [9.540 MB]  
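
Keeping reduced results and provenance alongside the raw data in a single HDF5 file can be sketched with h5py as below. Group, dataset and attribute names are invented for illustration; the actual APS file layout will differ.

import h5py
import numpy as np

with h5py.File("xpcs_scan.h5", "a") as f:
    if "raw/frames" not in f:                    # stand-in for the detector data
        f.create_dataset("raw/frames", data=np.zeros((10, 64, 64), dtype="uint16"))

    g2 = np.ones(50)                             # stand-in for a reduced correlation result
    reduced = f.require_group("analysis")
    if "g2" in reduced:
        del reduced["g2"]
    reduced.create_dataset("g2", data=g2)

    prov = f.require_group("provenance")
    prov.attrs["analysis_code"] = "parallel-xpcs-mapreduce"   # hypothetical identifier
    prov.attrs["analysis_version"] = "1.2.3"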
 
WECOBA07 High Speed Detectors: Problems and Solutions network, operation, software, data-analysis 1016
 
  • N.P. Rees, M. Basham, J. Ferner, U.K. Pedersen, T.S. Richter, J.A. Thompson
    Diamond, Oxfordshire, United Kingdom
 
  Diamond has an increasing number of high-speed detectors, primarily used on macromolecular crystallography, small-angle X-ray scattering and tomography beamlines. Recently, the performance requirements have exceeded the performance available from a single-threaded writing process on our Lustre parallel file system, so we have had to investigate other file systems and ways of parallelising the data flow to mitigate this. We report on some comparative tests between Lustre and GPFS, and on work we have been leading to enhance the HDF5 library with features that simplify the parallel writing problem.
slides icon Slides WECOBA07 [0.617 MB]  
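
One standard way of parallelising the writing, sketched below, is collective MPI-IO output through h5py built against parallel HDF5; each rank writes its own block of frames into one shared file. The file name and dataset shape are illustrative, and a parallel HDF5 build plus an MPI launcher (e.g. mpirun -n 4 python write_frames.py) are assumed.

from mpi4py import MPI
import h5py
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_frames, ny, nx = 8 * size, 128, 128
with h5py.File("frames.h5", "w", driver="mpio", comm=comm) as f:
    dset = f.create_dataset("data", (n_frames, ny, nx), dtype="uint16")
    # each rank writes its own contiguous block of frames
    lo, hi = rank * 8, (rank + 1) * 8
    dset[lo:hi] = np.full((8, ny, nx), rank, dtype="uint16")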
 
WECOCB03 Development of a Front-end Data-Acquisition System with a Camera Link FMC for High-Bandwidth X-Ray Imaging Detectors interface, FPGA, experiment, synchrotron 1028
 
  • C. Saji, T. Ohata, T. Sugimoto, R. Tanaka, M. Yamaga
    JASRI/SPring-8, Hyogo-ken, Japan
  • T. Abe
    RIKEN SPring-8 Center, Innovative Light Sources Division, Hyogo, Japan
  • T. Kudo
    RIKEN SPring-8 Center, Sayo-cho, Sayo-gun, Hyogo, Japan
 
  X-ray imaging detectors are indispensable for synchrotron radiation experiments and are evolving towards larger numbers of pixels and higher frame rates in order to acquire more information on the samples. The novel SOPHIAS detector, with a data rate of up to 8 Gbps per sensor, is under development at the SACLA facility. We have therefore developed a new front-end DAQ system with a data rate beyond the present level. The system consists of an FPGA-based evaluation board and an FPGA mezzanine card (FMC). The FMC was adopted as the FPGA interface to support a variety of interfaces and to allow the use of COTS systems. Since the data transmission performance of the FPGA board in combination with the FMCs was already evaluated at about 20 Gbps between boards, our choice of devices has the potential to meet the requirements of the SOPHIAS detector*. We made an FMC with a Camera Link (CL) interface to support the first phase of the SOPHIAS detector. Since almost all CL configurations are supported, the system handles various types of commercial cameras as well as the new detector. Moreover, the FMC has general-purpose input/output to satisfy various experimental requirements. We report the design of the new front-end DAQ system and the results of its evaluation.
* A Study of a Prototype DAQ System with over 10 Gbps Bandwidth for the SACLA X-Ray Experiments, C. Saji, T. Ohata, T. Sugimoto, R. Tanaka, and M. Yamaga, 2012 IEEE NSS and MIC, p.1619-p.1622
 
slides icon Slides WECOCB03 [0.980 MB]  
 
THCOAAB01 A Scalable and Homogeneous Web-Based Solution for Presenting CMS Control System Data controls, interface, software, status 1040
 
  • L. Masetti, O. Chaze, J.A. Coarasa, C. Deldicque, M. Dobson, A.D. Dupont, D. Gigi, F. Glege, R. Gomez-Reino, C. Hartl, F. Meijers, E. Meschi, S. Morovic, C. Nunez-Barranco-Fernandez, L. Orsini, W. Ozga, A. Petrucci, G. Polese, A. Racz, H. Sakulin, C. Schwick, A.C. Spataru, C.C. Wakefield, P. Zejdi
    CERN, Geneva, Switzerland
  • G. Bauer, C. Paus, O. Raginel, F. Stoeckli, K. Sumorok
    MIT, Cambridge, Massachusetts, USA
  • U. Behrens
    DESY, Hamburg, Germany
  • J. Branson, S. Cittolin, A. Holzner, M. Pieri, M. Sani
    UCSD, La Jolla, California, USA
  • S. Erhan
    UCLA, Los Angeles, California, USA
  • R.K. Mommsen, V. O'Dell
    Fermilab, Batavia, USA
 
  The Control System of the CMS experiment ensures the monitoring and safe operation of over 1M parameters. The high demand for access to online and historical Control System Data calls for a scalable solution combining multiple data sources. The advantage of a Web solution is that data can be accessed from everywhere with no additional software. Moreover, existing visualization libraries can be reused to achieve a user-friendly and effective data presentation. Access to the online information is provided with minimal impact on the running control system by using a common cache in order to be independent of the number of users. Historical data archived by the SCADA software is accessed via an Oracle Database. The web interfaces provide mostly a read-only access to data but some commands are also allowed. Moreover, developers and experts use web interfaces to deploy the control software and administer the SCADA projects in production. By using an enterprise portal, we profit from single sign-on and role-based access control. Portlets maintained by different developers are centrally integrated into dynamic pages, resulting in a consistent user experience.  
slides icon Slides THCOAAB01 [1.814 MB]  
 
THPPC015 Managing Infrastructure in the ALICE Detector Control System controls, experiment, hardware, software 1122
 
  • M. Lechman, A. Augustinus, P.M. Bond, P.Ch. Chochula, A.N. Kurepin, O. Pinazza, P. Rosinský
    CERN, Geneva, Switzerland
  • A.N. Kurepin
    RAS/INR, Moscow, Russia
  • O. Pinazza
    INFN-Bologna, Bologna, Italy
 
  The main role of the ALICE Detector Control System (DCS) is to ensure safe and efficient operation of one of the large high energy physics experiments at CERN. The DCS design is based on the commercial SCADA software package WinCC Open Architecture. The system includes over 270 VME and power supply crates, 1200 network devices, over 1,000,000 monitored parameters as well as numerous pieces of front-end and readout electronics. This paper summarizes the computer infrastructure of the DCS as well as the hardware and software components that are used by WinCC OA for communication with electronics devices. The evolution of these components and experience gained from the first years of their production use are also described. We also present tools for the monitoring of the DCS infrastructure and supporting its administration together with plans for their improvement during the first long technical stop in LHC operation.  
poster icon Poster THPPC015 [1.627 MB]  
 
THPPC061 SwissFEL Magnet Test Setup and Its Controls at PSI controls, EPICS, software, operation 1209
 
  • P. Chevtsov, W. Hugentobler, D. Vermeulen, V. Vranković
    PSI, Villigen PSI, Switzerland
 
  High-brightness electron bunches will be guided in the future free-electron laser (SwissFEL) at the Paul Scherrer Institute (PSI) with the use of several hundred magnets. The SwissFEL machine imposes very strict requirements not only on the field quality but also on the mechanical and magnetic alignment of these magnets. To ensure that the magnet specifications are met, and to develop reliable procedures for aligning the magnets in the SwissFEL and correcting their field errors during machine operation, the PSI magnet test system was upgraded. The upgraded system is a high-precision measurement setup based on Hall probe, rotating coil, vibrating wire and moving wire techniques. It is fully automated and integrated into the PSI controls. The paper describes the main controls components of the new magnet test setup and their performance.
poster icon Poster THPPC061 [0.855 MB]  
 
THPPC064 The HiSPARC Control System controls, software, Windows, database 1220
 
  • R.G.K. Hart, D.B.R.A. Fokkema, A.P.L.S. de Laat, B. van Eijk
    NIKHEF, Amsterdam, The Netherlands
 
  Funding: Nikhef
The purpose of the HiSPARC project is twofold: first, the physics goal of detecting high-energy cosmic rays; second, to offer an educational program in which high school students participate by building their own detection station and analysing their data. Around 70 high schools, spread over the Netherlands, are participating. Data are centrally stored at Nikhef in Amsterdam. The detectors, located on the roofs of the high schools, are connected by means of a USB interface to a Windows PC, which itself is connected to the high school's network and further on to the public internet. Each station is equipped with GPS providing the exact location and accurate timing. This paper covers the setup, building and usage of the station software. It comprises a LabVIEW run-time engine, services for remote control and monitoring, a series of Python scripts and a local buffer. An important task of the station software is to control the dataflow, event building and submission to the central database. Furthermore, several global aspects are described, such as the source repository, the station software installer and the project organization.
Windows, USB, FTDI, LabVIEW, VPN, VNC, Python, Nagios, NSIS, Django
 
 
THPPC120 A Simplified Model of the International Linear Collider Final Focus System quadrupole, controls, feedback, resonance 1341
 
  • M. Oriunno, T.W. Markiewicz
    SLAC, Menlo Park, California, USA
  • C.G.R.L. Collette, D. Tshilumba
    ULB - FSA - SMN, Bruxelles, Belgium
 
  Mechanical vibrations are the main source of luminosity loss at the final focus system of the future linear colliders, where the nanometric beams are required to be extremely stable. Precise models are needed to validate the adopted supporting scheme. Where the beam structure allows it, as for the International Linear Collider (ILC), intra-train luminosity feedback schemes are possible. Where this is not possible, as for the Compact Linear Collider (CLIC), active stabilization of the doublets is required. Further complications arise from the optics requirements, which place the final doublet very close to the IP (~4 m). We present a model of the SiD detector, where the QD0 doublet is captured inside the detector and the QF1 magnet is inside the tunnel. Ground motion measured at the SLD detector at SLAC has been used together with a model of the technical noise. The model predicts that the rms vibration of QD0 is below the capture range of the IP feedback system available in the ILC. With the addition of an active stabilization system on QD0, it is also possible to achieve the stability requirements of CLIC. These results can have important implications for CLIC.
 
THCOBB01 An Upgraded ATLAS Central Trigger for 2015 LHC Luminosities timing, electronics, interface, luminosity 1388
 
  • C. Ohm
    CERN, Geneva, Switzerland
 
  The LHC collides protons at a rate of ~40 MHz and each collision produces ~1.5 MB of data from the ATLAS detector (~60 TB of data per second). The ATLAS trigger system reduces the input rate to a more reasonable storage rate of about 400 Hz. The Level-1 trigger reduces the input rate to ~100 kHz with a decision latency of ~2.5 µs and is responsible for initiating the readout of data from all the ATLAS subdetectors. It is primarily composed of the Calorimeter Trigger, the Muon Trigger, and the Central Trigger Processor (CTP). The CTP collects trigger information from all Level-1 systems and produces the Level-1 trigger decision. The LHC has now shut down for upgrades and will return in 2015 with an increased luminosity and a centre-of-mass energy of 14 TeV. With higher luminosities, the number and complexity of Level-1 triggers will increase in order to satisfy the physics goals of ATLAS while keeping the total Level-1 rates at or below 100 kHz. In this talk we discuss the current Central Trigger Processor and the justification for its upgrade, including the plans to satisfy the requirements of the 2015 physics run at the LHC.
On behalf of the ATLAS Collaboration.
 
slides icon Slides THCOBB01 [10.206 MB]  
 
THCOCB05 The LHCb Online Luminosity Monitoring and Control luminosity, controls, experiment, target 1438
 
  • R. Jacobsson, R. Alemany-Fernandez, F. Follin
    CERN, Geneva, Switzerland
 
  The LHCb experiment searches for New Physics through precision measurements in heavy flavour physics. The optimization of the data-taking conditions relies on accurate monitoring of the instantaneous luminosity, and many physics measurements rely on accurate knowledge of the integrated luminosity. Most of the measurements are subject to potential systematic effects associated with pileup and changing running conditions. To cope with these while maximising the collected luminosity, a control of the LHCb luminosity was put into operation. It consists of an automatic real-time feedback system, controlled from the LHCb online system, which communicates directly with an LHC application that in turn adjusts the beam overlap at the interaction point. It was proposed and tested in July 2010 and has been in routine operation during 2011-2012. As a result, LHCb has been operating at well over four times the design pileup, and 95% of the integrated luminosity has been recorded within 3% of the desired luminosity. This paper motivates and describes the implementation of and the experience with the online luminosity monitoring and control, including the mechanisms used to perform the luminosity calibrations.
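To illustrate the leveling principle described above, the sketch below implements a toy feedback loop that adjusts the transverse beam separation until a Gaussian-overlap luminosity model reaches its target; the beam size, gain and overlap model are assumptions made for illustration and are not LHCb's actual algorithm or the LHC interface.

# Toy luminosity-leveling loop: steer the transverse beam separation so the
# modelled instantaneous luminosity stays at the requested target value.
import math

SIGMA = 50e-6        # effective beam size at the IP in metres (assumed)
L_HEAD_ON = 1.0      # luminosity for perfectly overlapping beams (arbitrary units)

def luminosity(separation: float) -> float:
    """Gaussian reduction of luminosity with transverse beam separation."""
    return L_HEAD_ON * math.exp(-separation**2 / (4 * SIGMA**2))

def level(target: float, separation: float = 0.0,
          gain: float = 2e-5, steps: int = 50) -> float:
    """Iteratively adjust the separation until the luminosity matches the target."""
    for _ in range(steps):
        error = luminosity(separation) - target   # positive => too much overlap
        separation += gain * error / target       # widen or tighten the overlap
        separation = max(separation, 0.0)
    return separation

sep = level(target=0.4)
print(f"separation: {sep * 1e6:.1f} um, luminosity: {luminosity(sep):.3f}")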
slides icon Slides THCOCB05 [1.368 MB]  
 
THCOCA03 High-Precision Timing of Gated X-Ray Imagers at the National Ignition Facility timing, target, laser, experiment 1449
 
  • S.M. Glenn, P.M. Bell, L.R. Benedetti, M.W. Bowers, D.K. Bradley, B.P. Golick, J.P. Holder, D.H. Kalantar, S.F. Khan, N. Simanovskaia
    LLNL, Livermore, California, USA
 
  Funding: This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. #LLNL-ABS-633013
The National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory is a stadium-sized facility that contains a 192-beam, 1.8-megajoule, 500-terawatt ultraviolet laser system together with a 10-meter-diameter target chamber. We describe techniques used to synchronize data acquired by gated x-ray imagers with the laser beams at NIF. Synchronization is achieved by collecting data from multiple beam groups with spatial and temporal separation in a single NIF shot. By optimizing the experimental setup and data analysis, repeatable timing measurements of 15 ps or better have been achieved. This demonstrates that the facility timing system, laser and target diagnostics are highly stable over year-long time scales.
 
slides icon Slides THCOCA03 [1.182 MB]  
 
FRCOAAB03 Experiment Control and Analysis for High-Resolution Tomography controls, software, experiment, EPICS 1469
 
  • N. Schwarz, F. De Carlo, A. Glowacki, J.P. Hammonds, F. Khan, K. Yue
    ANL, Argonne, USA
 
  Funding: Work supported by U.S. Department of Energy, Office of Science, under Contract No. DE-AC02-06CH11357.
X-ray Computed Tomography (XCT) is a powerful technique for imaging 3D structures at the micro- and nano-scales. Recent upgrades to tomography beamlines at the APS have enabled imaging at resolutions down to 20 nm with increased pixel counts and speeds. As detector resolution and speed increase, the amount of data that must be transferred and analyzed also increases. This, coupled with growing experiment complexity, drives the need for software to automate data acquisition and processing. We present an experiment control and data processing system for tomography beamlines that helps address this concern. The software, written in C++ using Qt, interfaces with EPICS for beamline control and provides live and offline data viewing, basic image manipulation features, and scan sequencing that coordinates EPICS-enabled apparatus. After acquisition, the software triggers a workflow pipeline, built on ActiveMQ, that transfers data from the detector computer to an analysis computer and launches a reconstruction process. Experiment metadata and provenance information are stored along with raw and analyzed data in a single HDF5 file.
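As an illustration of the kind of scan sequencing the abstract describes, the sketch below steps a rotation-stage PV through a set of projection angles with pyepics and records the scan metadata in an HDF5 file with h5py; the PV names and file layout are hypothetical, not those of the APS beamline software.

# Hypothetical tomography scan: step a rotation stage through EPICS, trigger the
# detector for each projection, and keep the scan metadata in HDF5.
import h5py
import numpy as np
from epics import caput

ANGLE_PV = "2BM:m1.VAL"          # hypothetical rotation-stage motor PV
TRIGGER_PV = "2BM:cam1:Acquire"  # hypothetical detector acquire PV

angles = np.linspace(0.0, 180.0, 181)
with h5py.File("scan.h5", "w") as f:
    # metadata and provenance stored alongside the scan definition
    f.attrs["instrument"] = "tomography beamline"
    f.attrs["n_projections"] = len(angles)
    f.create_dataset("exchange/theta", data=angles)
    for theta in angles:
        caput(ANGLE_PV, theta, wait=True)   # move the stage and wait for completion
        caput(TRIGGER_PV, 1, wait=True)     # acquire one projection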
 
slides icon Slides FRCOAAB03 [1.707 MB]  
 
FRCOAAB05 JOGL Live Rendering Techniques in Data Acquisition Systems GPU, real-time, controls, experiment 1477
 
  • C. Cocho, F. Cecillon, A. Elaazzouzi, Y. Le Goc, J. Locatelli, P. Mutti, H. Ortiz, J. Ratel
    ILL, Grenoble, France
 
  One of the major challenges in instrument control is to provide a fast and scientifically correct representation of the data collected by the detector through the data acquisition system. Despite the availability nowadays of a large number of excellent libraries for offline data plotting, real-time 2D and 3D data rendering still suffers from performance issues, related mainly to the amount of information to be displayed. This paper describes new methods of image generation (rendering), based on the JOGL library, used for data acquisition at the Institut Laue-Langevin (ILL) on instruments that require either high image resolution or a large number of images rendered at the same time. These new methods involve the definition of data buffers and the use of GPU memory, a technique known as Vertex Buffer Objects (VBO). The implementation of the different rendering modes, on-screen and off-screen, is also detailed.
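The VBO technique mentioned above amounts to uploading the vertex data once into GPU memory so that subsequent redraws do not re-transfer it from the CPU. JOGL is a Java binding, but the same idea can be sketched with PyOpenGL, as below; the window setup and buffer sizes are illustrative only and this is not the ILL implementation.

# Minimal sketch of the VBO idea using PyOpenGL/GLUT (JOGL itself is Java).
import numpy as np
from OpenGL.GL import (glGenBuffers, glBindBuffer, glBufferData,
                       GL_ARRAY_BUFFER, GL_STATIC_DRAW)
from OpenGL.GLUT import glutInit, glutInitDisplayMode, glutCreateWindow, GLUT_RGB

glutInit()                       # a GL context is required before any buffer call
glutInitDisplayMode(GLUT_RGB)
glutCreateWindow(b"vbo-sketch")

# Treat a detector image as a grid of vertices; upload it once to GPU memory,
# so later redraws avoid re-sending the full data set from the CPU.
vertices = np.random.rand(512 * 512, 2).astype(np.float32)
vbo = glGenBuffers(1)
glBindBuffer(GL_ARRAY_BUFFER, vbo)
glBufferData(GL_ARRAY_BUFFER, vertices.nbytes, vertices, GL_STATIC_DRAW)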
slides icon Slides FRCOAAB05 [1.422 MB]  
 
FRCOAAB07 Operational Experience with the ALICE Detector Control System controls, operation, experiment, status 1485
 
  • P.Ch. Chochula, A. Augustinus, A.N. Kurepin, M. Lechman, O. Pinazza, P. Rosinský
    CERN, Geneva, Switzerland
  • A.N. Kurepin
    RAS/INR, Moscow, Russia
  • O. Pinazza
    INFN-Bologna, Bologna, Italy
 
  The first LHC run period, lasting four years, brought exciting physics results and new insight into the mysteries of matter. Among the key components in this achievement were the detectors, which provided unprecedented amounts of data of the highest quality. The control systems, responsible for their smooth and safe operation, played a key role in this success. The design of the ALICE Detector Control System (DCS) started more than 12 years ago. A high level of standardization and a pragmatic design led to a reliable and stable system, which allowed for efficient experiment operation. In this presentation we summarize the overall architectural principles of the system and the standardized components and procedures. The original expectations and plans are compared with the final design. Focus is given to the operational procedures, which have evolved with time. We explain how a single operator can control and protect a complex device like ALICE, with millions of readout channels and several thousand control devices and boards. We explain what we learned during the first years of LHC operation and which improvements will be implemented to provide an excellent DCS service during the coming years.
slides icon Slides FRCOAAB07 [7.856 MB]  
 
FRCOAAB08 The LIMA Project Update controls, hardware, interface, software 1489
 
  • S. Petitdemange, L. Claustre, A. Homs, R. Homs Regojo, E. Papillon
    ESRF, Grenoble, France
 
  LIMA, a Library for Image Acquisition, was developed at the ESRF to control high-performance 2D detectors used in scientific applications. It provides generic access to common image acquisition concepts, from detector synchronization to online data reduction, including image transformations and storage management. An abstraction of the low-level 2D control defines the interface for camera plugins, allowing different degrees of hardware optimization. Scientific 2D data throughput of up to 250 MB/s is ensured by multi-threaded algorithms exploiting multi-CPU/core technologies. Eighteen detectors are currently supported by LIMA, covering CCD, CMOS and pixel detectors, and video GigE cameras. Control-system agnostic by design, LIMA has become the de facto 2D standard in the TANGO community. An active collaboration among large facilities, research laboratories and detector manufacturers joins efforts towards the integration of new core features, detectors and data processing algorithms. The second generation, LIMA 2, will provide major improvements in several key core elements, such as buffer management, data format support (including HDF5) and user-defined software operations.
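The camera-plugin abstraction described above can be illustrated, in a much simplified form, by a generic acquisition routine programmed against a small camera interface, with the hardware specifics confined to per-detector plugins; the sketch below is a toy illustration of that idea and is not LIMA's actual API.

# Toy illustration of a camera-plugin abstraction: the acquisition layer only
# knows the abstract interface, while each detector provides its own plugin.
from abc import ABC, abstractmethod
import numpy as np

class CameraPlugin(ABC):
    """Minimal low-level interface every detector plugin must implement."""

    @abstractmethod
    def prepare(self, nb_frames: int, exposure_s: float) -> None: ...

    @abstractmethod
    def start(self) -> None: ...

    @abstractmethod
    def read_frame(self) -> np.ndarray: ...

class SimulatedCamera(CameraPlugin):
    """Software camera returning noise frames, standing in for real hardware."""
    def prepare(self, nb_frames: int, exposure_s: float) -> None:
        self.nb_frames, self.exposure_s = nb_frames, exposure_s
    def start(self) -> None:
        pass
    def read_frame(self) -> np.ndarray:
        return np.random.poisson(100, size=(512, 512)).astype(np.uint16)

def acquire(camera: CameraPlugin, nb_frames: int, exposure_s: float):
    """Generic acquisition sequence, identical for every plugin."""
    camera.prepare(nb_frames, exposure_s)
    camera.start()
    return [camera.read_frame() for _ in range(nb_frames)]

frames = acquire(SimulatedCamera(), nb_frames=10, exposure_s=0.01)
print(len(frames), frames[0].shape)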
slides icon Slides FRCOAAB08 [1.338 MB]