Keyword: experiment
Paper Title Other Keywords Page
MOBAUST02 The ATLAS Detector Control System controls, detector, interface, monitoring 5
 
  • S. Schlenker, S. Arfaoui, S. Franz, O. Gutzwiller, C.A. Tsarouchas
    CERN, Geneva, Switzerland
  • G. Aielli, F. Marchese
    Università di Roma II Tor Vergata, Roma, Italy
  • G. Arabidze
    MSU, East Lansing, Michigan, USA
  • E. Banaś, Z. Hajduk, J. Olszowska, E. Stanecka
    IFJ-PAN, Kraków, Poland
  • T. Barillari, J. Habring, J. Huber
    MPI, Muenchen, Germany
  • M. Bindi, A. Polini
    INFN-Bologna, Bologna, Italy
  • H. Boterenbrood, R.G.K. Hart
    NIKHEF, Amsterdam, The Netherlands
  • H. Braun, D. Hirschbuehl, S. Kersten, K. Lantzsch
    Bergische Universität Wuppertal, Wuppertal, Germany
  • R. Brenner
    Uppsala University, Uppsala, Sweden
  • D. Caforio, C. Sbarra
    Bologna University, Bologna, Italy
  • S. Chekulaev
    TRIUMF, Canada's National Laboratory for Particle and Nuclear Physics, Vancouver, Canada
  • S. D'Auria
    University of Glasgow, Glasgow, United Kingdom
  • M. Deliyergiyev, I. Mandić
    JSI, Ljubljana, Slovenia
  • E. Ertel
    Johannes Gutenberg University Mainz, Institut für Physik, Mainz, Germany
  • V. Filimonov, V. Khomutnikov, S. Kovalenko
    PNPI, Gatchina, Leningrad District, Russia
  • V. Grassi
    SBU, Stony Brook, New York, USA
  • J. Hartert, S. Zimmermann
    Albert-Ludwig Universität Freiburg, Freiburg, Germany
  • D. Hoffmann
    CPPM, Marseille, France
  • G. Iakovidis, K. Karakostas, S. Leontsinis, E. Mountricha
    National Technical University of Athens, Athens, Greece
  • P. Lafarguette
    Université Blaise Pascal, Clermont-Ferrand, France
  • F. Marques Vinagre, G. Ribeiro, H.F. Santos
    LIP, Lisboa, Portugal
  • T. Martin, P.D. Thompson
    Birmingham University, Birmingham, United Kingdom
  • B. Mindur
    AGH University of Science and Technology, Krakow, Poland
  • J. Mitrevski
    SCIPP, Santa Cruz, California, USA
  • K. Nagai
    University of Tsukuba, Graduate School of Pure and Applied Sciences, Tsukuba, Ibaraki, Japan
  • S. Nemecek
    Czech Republic Academy of Sciences, Institute of Physics, Prague, Czech Republic
  • D. Oliveira Damazio, A. Poblaguev
    BNL, Upton, Long Island, New York, USA
  • P.W. Phillips
    STFC/RAL, Chilton, Didcot, Oxon, United Kingdom
  • A. Robichaud-Veronneau
    DPNC, Genève, Switzerland
  • A. Talyshev
    BINP, Novosibirsk, Russia
  • G.F. Tartarelli
    Università degli Studi di Milano & INFN, Milano, Italy
  • B.M. Wynne
    Edinburgh University, Edinburgh, United Kingdom
 
  The ATLAS experiment is one of the multi-purpose experiments at the Large Hadron Collider (LHC), constructed to study elementary particle interactions in collisions of high-energy proton beams. Twelve different sub-detectors as well as the common experimental infrastructure are supervised by the Detector Control System (DCS). The DCS enables equipment supervision of all ATLAS sub-detectors by using a system of 140 server machines running the industrial SCADA product PVSS. This highly distributed system reads, processes and archives of the order of 10^6 operational parameters. Higher-level control system layers based on the CERN JCOP framework allow for automatic control procedures, efficient error recognition and handling, manage the communication with external control systems such as the LHC controls, and provide a synchronization mechanism with the ATLAS physics data acquisition system. A web-based monitoring system allows accessing the DCS operator interface views and browsing the conditions data archive worldwide with high availability. This contribution first describes the status of the ATLAS DCS and the experience gained during the LHC commissioning and the first physics data-taking period. Secondly, the future evolution and maintenance constraints for the coming years and the LHC high-luminosity upgrades are outlined.
Slides MOBAUST02 [6.379 MB]
 
MOBAUST06 The LHCb Experiment Control System: on the Path to Full Automation controls, detector, framework, operation 20
 
  • C. Gaspar, F. Alessio, L.G. Cardoso, M. Frank, J.C. Garnier, R. Jacobsson, B. Jost, N. Neufeld, R. Schwemmer, E. van Herwijnen
    CERN, Geneva, Switzerland
  • O. Callot
    LAL, Orsay, France
  • B. Franek
    STFC/RAL, Chilton, Didcot, Oxon, United Kingdom
 
  LHCb is a large experiment at the LHC accelerator. The experiment control system is in charge of the configuration, control and monitoring of the different sub-detectors and of all areas of the online system: the Detector Control System (DCS), covering sub-detector voltages, cooling, temperatures, etc.; the Data Acquisition System (DAQ) and the Run Control; the High Level Trigger (HLT), a farm of around 1500 PCs running trigger algorithms; and so on. The building blocks of the control system are based on the PVSS SCADA system, complemented by a control framework developed in common for the four LHC experiments. This framework includes an expert-system-like tool called SMI++, which we use for the system automation. The full control system runs distributed over around 160 PCs and is logically organised in a hierarchical structure, each level being capable of supervising and synchronizing the objects below. The experiment's operations are now almost completely automated, driven by a top-level object called Big-Brother which pilots all the experiment's standard procedures and the most common error-recovery procedures. Some examples of automated procedures are: powering the detector, acting on the Run Control (Start/Stop Run, etc.) and moving the vertex detector in/out of the beam, all driven by the state of the accelerator, as well as recovering from errors in the HLT farm. The architecture, tools and mechanisms used for the implementation as well as some operational examples will be shown.
Slides MOBAUST06 [1.451 MB]
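
As a rough illustration of the hierarchical, state-driven automation described above, the following minimal Python sketch shows a top-level object propagating commands down a tree of control nodes in reaction to accelerator state changes. It is plain Python rather than SMI++/PVSS, and all node names, states and rules are illustrative assumptions, not LHCb's actual configuration.

```python
# Minimal sketch of a hierarchical control tree in which a top-level node
# (a hypothetical stand-in for LHCb's Big-Brother object) reacts to
# accelerator state changes and propagates commands to its children.
class ControlNode:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []
        self.state = "NOT_READY"

    def command(self, cmd):
        # Propagate the command down the tree, then update this node's state.
        for child in self.children:
            child.command(cmd)
        self.state = {"CONFIGURE": "READY", "START": "RUNNING",
                      "STOP": "READY"}.get(cmd, self.state)
        print(f"{self.name}: {cmd} -> {self.state}")

# Illustrative sub-tree: DCS and DAQ domains under one experiment node.
dcs = ControlNode("DCS", [ControlNode("HV"), ControlNode("LV")])
daq = ControlNode("DAQ", [ControlNode("RunControl")])
experiment = ControlNode("LHCb", [dcs, daq])

def on_accelerator_state(state):
    # Hypothetical automation rule: configure and start a run on stable beams.
    if state == "PHYSICS":
        experiment.command("CONFIGURE")
        experiment.command("START")
    elif state == "DUMP":
        experiment.command("STOP")

on_accelerator_state("PHYSICS")
```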
 
MOMAU003 The Computing Model of the Experiments at PETRA III controls, TANGO, interface, detector 44
 
  • T. Kracht, M. Alfaro, M. Flemming, J. Grabitz, T. Núñez, A. Rothkirch, F. Schlünzen, E. Wintersberger, P. van der Reest
    DESY, Hamburg, Germany
 
  The PETRA storage ring at DESY in Hamburg has been refurbished to become a highly brilliant synchrotron radiation source (now named PETRA III). Commissioning of the beamlines started in 2009, user operation in 2010. In comparison with our DORIS beamlines, the PETRA III experiments have greater complexity and higher data rates, and require an integrated system for data storage and archiving, data processing and data distribution. Tango [1] and Sardana [2] are the main components of our online control system. Both systems are developed by international collaborations. Tango serves as the backbone to operate all beamline components, certain storage ring devices and equipment from our users. Sardana is an abstraction layer on top of Tango. It standardizes the hardware access, organizes experimental procedures, has a command line interface and provides us with widgets for graphical user interfaces. Other clients like Spectra, which was written for DORIS, interact with Tango or Sardana. Modern 2D detectors create large data volumes. At PETRA III all data are transferred to an online file server which is hosted by the DESY computer center. Near real-time analysis and reconstruction steps are executed on a CPU farm. A portal for remote data access is in preparation. Data archiving is done by dCache [3]. An offline file server has been installed for further analysis and in-house data storage.
[1] http://www.tango-controls.org
[2] http://computing.cells.es/services/collaborations/sardana
[3] http://www-dcache.desy.de
 
Slides MOMAU003 [0.347 MB]
Poster MOMAU003 [0.563 MB]
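
As a minimal illustration of the kind of device access that Tango provides and that Sardana builds upon, the following Python sketch reads and moves a motor through a Tango DeviceProxy. It assumes PyTango and a running Tango system; the device name is purely hypothetical and not an actual PETRA III device.

```python
# Minimal sketch of reading and moving a beamline motor through Tango from
# Python. The device name below is purely illustrative.
import tango  # recent PyTango releases expose the 'tango' module

motor = tango.DeviceProxy("p09/motor/exp.01")   # hypothetical device name
print("state:", motor.state())                  # generic Tango device state
pos = motor.read_attribute("Position").value    # read the current position
motor.write_attribute("Position", pos + 0.1)    # request a small move
```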
 
MOPKN019 ATLAS Detector Control System Data Viewer database, interface, framework, controls 137
 
  • C.A. Tsarouchas, S.A. Roe, S. Schlenker
    CERN, Geneva, Switzerland
  • U.X. Bitenc, M.L. Fehling-Kaschek, S.X. Winkelmann
    Albert-Ludwig Universität Freiburg, Freiburg, Germany
  • S.X. D'Auria
    University of Glasgow, Glasgow, United Kingdom
  • D. Hoffmann, O.X. Pisano
    CPPM, Marseille, France
 
  The ATLAS experiment at CERN is one of the four Large Hadron Collider experiments. ATLAS uses a commercial SCADA system (PVSS) for its Detector Control System (DCS), which is responsible for the supervision of the detector equipment, the reading of operational parameters, the propagation of alarms and the archiving of important operational data in a relational database. DCS Data Viewer (DDV) is an application that provides access to the historical data of DCS parameters written to the database through a web interface. It has a modular and flexible design and is structured using a client-server architecture. The server can be operated stand-alone with a command-like interface to the data, while the client offers a user-friendly, browser-independent interface. The selection of the metadata of DCS parameters is done via a column-tree view or with a powerful search engine. The final visualisation of the data is done using various plugins such as "value over time" charts, data tables, raw ASCII or structured export to ROOT. Excessive access or malicious use of the database is prevented by dedicated protection mechanisms, allowing the exposure of the tool to hundreds of inexperienced users. The metadata selection and data output features can be used separately by means of XML configuration files. Security constraints have been taken into account in the implementation, allowing access to DDV by collaborators worldwide. Due to its flexible interface and its generic and modular approach, DDV could easily be used for other experiment control systems that archive data using PVSS.
Poster MOPKN019 [0.938 MB]
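
The plugin-based output described above can be pictured with a small Python sketch of a format registry that renders the same archived rows as different outputs. This illustrates the design idea only, not DDV code; the plugin names are invented.

```python
# Sketch of a plugin registry dispatching archived (time, value) rows to
# different output formats, in the spirit of DDV's modular output plugins.
PLUGINS = {}

def output_plugin(name):
    def register(func):
        PLUGINS[name] = func
        return func
    return register

@output_plugin("ascii")
def to_ascii(rows):
    return "\n".join(f"{t}\t{v}" for t, v in rows)

@output_plugin("table")
def to_table(rows):
    return [{"time": t, "value": v} for t, v in rows]

def render(rows, fmt):
    return PLUGINS[fmt](rows)   # e.g. render(data, "ascii")
```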
 
MOPMN012 The Electronic Logbook for LNL Accelerators ion, software, Linux, booster 260
 
  • S. Canella, O. Carletto
    INFN/LNL, Legnaro (PD), Italy
 
  In spring 2009 all run-time data concerning the particle accelerators at LNL (Laboratori Nazionali di Legnaro) were still registered mainly on paper. TANDEM and its negative-ion source data were logged in a large-format paper logbook; for the ALPI booster and the PIAVE injector with its positive ECR source, a number of independent paper notebooks were used, together with plain data files containing raw instant snapshots of each RF superconducting accelerator. At that time a decision was taken to build a new tool for a general electronic registration of accelerator run-time data. The result of this effort, the LNL electronic logbook, is presented here.
Poster MOPMN012 [8.543 MB]
 
MOPMN020 Integrating Controls Frameworks: Control Systems for NA62 LAV Detector Test Beams framework, controls, detector, interface 285
 
  • O. Holme, J.A.R. Arroyo Garcia, P. Golonka, M. Gonzalez-Berges, H. Milcent
    CERN, Geneva, Switzerland
  • O. Holme
    ETH, Zurich, Switzerland
 
  The detector control system for the NA62 experiment at CERN, to be ready for physics data-taking in 2014, is going to be built based on control technologies recommended by the CERN Engineering group. A rich portfolio of these technologies is planned to be showcased and deployed in the final application, and synergy between them is needed. In particular, two approaches to building controls applications need to play in harmony: the use of the high-level application framework called UNICOS, and a bottom-up approach of development based on the components of the JCOP Framework. The aim of combining the features provided by the two frameworks is to avoid duplication of functionality and minimize the maintenance and development effort for future controls applications. In this paper the results of the integration efforts obtained so far are presented, namely the control applications developed for beam-testing of NA62 detector prototypes. Even though the delivered applications are simple, significant conceptual and development work was required to bring about the smooth interplay between the two frameworks, while assuring the possibility of unleashing their full power. A discussion of current open issues is presented, including the viability of the approach for larger-scale applications of high complexity, such as the complete detector control system for the NA62 detector.
Poster MOPMN020 [1.464 MB]
 
MOPMN028 Automated Voltage Control in LHCb controls, detector, status, high-voltage 304
 
  • L.G. Cardoso, C. Gaspar, R. Jacobsson
    CERN, Geneva, Switzerland
 
  LHCb is one of the four LHC experiments. In order to ensure the safety of the detector and to maximize efficiency, LHCb needs to coordinate its own operations, in particular the voltage configuration of the different sub-detectors, according to the accelerator status. A control software has been developed for this purpose, based on the Finite State Machine toolkit and the SCADA system used for control throughout LHCb (and the other LHC experiments). This software makes it possible to efficiently drive both the Low Voltage (LV) and High Voltage (HV) systems of the 10 different sub-detectors that constitute LHCb, setting each sub-system to the required voltage (easily configurable at run-time) based on the accelerator state. The control software is also responsible for monitoring the state of the sub-detector voltages and adding it to the event data in the form of status bits. Safe and yet flexible operation of the LHCb detector has been obtained, and automatic actions, triggered by the state changes of the accelerator, have been implemented. This paper will detail the implementation of the voltage control software, its flexible run-time configuration and its usage in the LHCb experiment.
Poster MOPMN028 [0.479 MB]
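
A minimal Python sketch of the idea of a run-time configurable table mapping accelerator states to sub-detector voltage settings, plus packing of per-channel readiness into status bits, is given below. States, channel names, values and the bit encoding are hypothetical and only illustrate the mechanism described in the abstract.

```python
# Illustrative mapping from machine states to sub-detector voltage settings,
# in the spirit of the run-time configurable table described above.
VOLTAGE_TABLE = {
    "INJECTION":    {"VELO_HV": 0.0,   "OT_HV": 800.0},
    "STABLE_BEAMS": {"VELO_HV": 150.0, "OT_HV": 1550.0},
}

def apply_state(machine_state, set_voltage):
    """Drive each channel to the voltage required for the given machine state."""
    for channel, volts in VOLTAGE_TABLE.get(machine_state, {}).items():
        set_voltage(channel, volts)

def status_bits(channels_ok):
    """Pack per-channel HV-ready flags into a status word for the event data."""
    word = 0
    for bit, ok in enumerate(channels_ok):
        word |= int(ok) << bit
    return word

# Example: apply_state("STABLE_BEAMS", lambda ch, v: print(ch, "->", v, "V"))
```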
 
MOPMS008 Control of the SARAF High Intensity CW Proton Beam Target Systems target, controls, proton, vacuum 336
 
  • I. Eliyahu, D. Berkovits, M. Bisyakoev, I.G. Gertz, S. Halfon, N. Hazenshprung, D. Kijel, E. Reinfeld, I. Silverman, L. Weissman
    Soreq NRC, Yavne, Israel
 
  The first beam line addition to the SARAF facility was completed in Phase I. Two experiments are planned in this new beam line, the Liquid Lithium target and the Foils target, and we are currently building the hardware and software for their control systems. The Liquid Lithium target is planned to be a powerful neutron source for the accelerator, based on the proton beam of SARAF Phase I. The concept of this target is based on liquid lithium that spins and produces neutrons by the reaction Li7(p,n)Be7. This target was successfully tested in the laboratory and is intended to be integrated into the accelerator beam line and the control system this year. The Foils target is planned for a radiation experiment designed to examine the problem of radiation damage to metallic foils. To accomplish this we have built a radiation system that enables us to test the foils. The control system includes various diagnostic elements (vacuum, motor control, temperature, etc.) for the two targets mentioned above. These systems were built to be modular, so that in the future new targets can be quickly and simply inserted. This article will describe the different control systems for the two targets as well as the design methodology used to achieve reliable and reusable control of these targets.
Poster MOPMS008 [1.391 MB]
 
MOPMS031 Did We Get What We Aimed for 10 Years Ago? detector, operation, controls, hardware 397
 
  • P.Ch. Chochula, A. Augustinus, L.S. Jirdén, A.N. Kurepin, M. Lechman, P. Rosinský
    CERN, Geneva, Switzerland
  • G. De Cataldo
    INFN-Bari, Bari, Italy
  • A. Moreno
    Universidad Politécnica de Madrid, E.T.S.I Industriales, Madrid, Spain
  • O. Pinazza
    INFN-Bologna, Bologna, Italy
 
  The ALICE Detector Control System (DCS) is in charge of the control and operation of one of the large high energy physics experiments at CERN in Geneva. The DCS design, which started in 2000, was partly inspired by the control systems of the previous generation of HEP experiments at the LEP accelerator at CERN. However, the scale of the LHC experiments, the use of modern, "intelligent" hardware and the harsh operational environment led to an innovative system design. The overall architecture has been largely based on commercial products like the PVSS SCADA system and OPC servers, extended by frameworks. Windows has been chosen as the operating system platform for the core systems and Linux for the frontend devices. The concept of finite state machines has been deeply integrated into the system design. Finally, the design principles have been optimized and adapted to the expected operational needs. The ALICE DCS was designed, prototyped and developed at a time when no experience with systems of similar scale and complexity existed. At the time of its implementation the detector hardware was not yet available and tests were performed only with partial detector installations. In this paper we analyse how well the original requirements and expectations set ten years ago match the real experiment needs after two years of operation. We provide an overview of system performance, reliability and scalability. Based on this experience we assess the need for future system enhancements to take place during the LHC technical stop in 2013.
Poster MOPMS031 [5.534 MB]
 
MOPMU013 Phase II and III: The Next Generation of CLS Beamline Control and Data Acquisition Systems controls, software, EPICS, interface 454
 
  • E. D. Matias, D. Beauregard, R. Berg, G. Black, M.J. Boots, W. Dolton, D. Hunter, R. Igarashi, D. Liu, D.G. Maxwell, C.D. Miller, T. Wilson, G. Wright
    CLS, Saskatoon, Saskatchewan, Canada
 
  The Canadian Light Source is nearing the completion of its suite of Phase II beamlines and is in detailed design of its Phase III beamlines. The paper presents an overview of the overall approach adopted by the CLS in the development of beamline control and data acquisition systems. Building on the experience of our first phase of beamlines, the CLS has continued to make extensive use of EPICS with EDM and Qt based user interfaces. Increasingly, interpreted languages such as Python are finding a place in the beamline control systems. Web-based environments such as ScienceStudio have also found a prominent place in the control system architecture as we move to tighter integration between data acquisition, visualization and data analysis.
 
MOPMU035 Shape Controller Upgrades for the JET ITER-like Wall plasma, controls, real-time, operation 514
 
  • A. Neto, D. Alves, I.S. Carvalho
    IPFN, Lisbon, Portugal
  • G. De Tommasi, F. Maviglia
    CREATE, Napoli, Italy
  • R.C. Felton, P. McCullen
    EFDA-JET, Abingdon, Oxon, United Kingdom
  • P.J. Lomas, F. G. Rimini, A.V. Stephen, K-D. Zastrow
    CCFE, Culham, Abingdon, Oxon, United Kingdom
  • R. Vitelli
    Università di Roma II Tor Vergata, Roma, Italy
 
  Funding: This work was supported by the European Communities under the contract of Association between EURATOM/IST and was carried out within the framework of the European Fusion Development Agreement.
The upgrade of JET to a new all-metal wall will pose a set of new challenges regarding machine operation and protection. One of the key problems is that the present way of terminating a pulse, upon the detection of a problem, is limited to a predefined set of global responses, tailored to maximise the likelihood of a safe plasma landing. With the new wall, these might conflict with the requirement of avoiding localised heat fluxes in the wall components. As a consequence, the new system will be capable of dynamically adapting its response behaviour, according to the experimental conditions at the time of the stop request and during the termination itself. Also in the context of the new ITER-like wall, two further upgrades were designed to be implemented in the shape controller architecture. The first will allow safer operation of the machine and consists of a power-supply current limit avoidance scheme, which provides a trade-off between the desired plasma shape and the current distribution between the relevant actuators. The second is aimed at an optimised operation of the machine, enabling an earlier formation of a special magnetic configuration where the last plasma closed flux surface is not defined by a physical limiter. The upgraded shape controller system, besides providing the new functionality, is expected to continue to provide the first line of defence against erroneous plasma position and current requests. This paper presents the required architectural changes to the JET plasma shape controller system.
 
Poster MOPMU035 [2.518 MB]
 
TUCAUST06 Event-Synchronized Data Acquisition System of 5 Giga-bps Data Rate for User Experiment at the XFEL Facility, SACLA operation, detector, controls, network 581
 
  • M. Yamaga, A. Amselem, T. Hirono, Y. Joti, A. Kiyomichi, T. Ohata, T. Sugimoto, R. Tanaka
    JASRI/SPring-8, Hyogo-ken, Japan
  • T. Hatsui
    RIKEN/SPring-8, Hyogo, Japan
 
  A data acquisition (DAQ), control, and storage system has been developed for user experiments at the XFEL facility, SACLA, on the SPring-8 site. The anticipated experiments demand shot-by-shot DAQ in synchronization with the beam operation cycle in order to correlate the beam characteristics and recorded data such as X-ray diffraction patterns. The experiments produce waveform or image data, with a data size ranging from 8 up to 48 MB for each X-ray pulse at 60 Hz. To meet these requirements, we have constructed a DAQ system that is operated in synchronization with the 60 Hz beam operation cycle. The system is designed to handle a data rate of up to 5 Gbps after compression, and consists of the trigger distributor/counters, the data-filling computers, the parallel-writing high-speed data storage, and the relational database. The data rate is reduced by on-the-fly data compression in front-end embedded systems. The self-described data structure makes it possible to handle any type of data. The pipeline data buffer at each computer node ensures the integrity of the data transfer with non-real-time operating systems, and reduces the development cost. All the data are transmitted via the TCP/IP protocol over GbE and 10 GbE Ethernet. To monitor the experimental status, the system incorporates on-line visualization of waveforms/images as well as prompt data mining by a 10 PFlops-scale supercomputer to check the data health. A partial system for the light source commissioning was released in March 2011. The full system will be released to public users in March 2012.
Slides TUCAUST06 [3.248 MB]
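
The notion of a self-described, shot-tagged record can be illustrated with a short Python sketch that packs a compressed payload behind a small header carrying the pulse number. The header layout and field choices are invented for illustration; they are not the actual SACLA data format.

```python
# Sketch of a self-described, pulse-tagged record: a fixed header (magic tag,
# pulse id, detector id, payload size) followed by a compressed payload.
import struct
import zlib

HDR = "<4sQHI"  # magic, 64-bit pulse id, 16-bit detector id, 32-bit size

def pack_record(pulse_id: int, det_id: int, payload: bytes) -> bytes:
    body = zlib.compress(payload)                       # on-the-fly compression
    return struct.pack(HDR, b"EVNT", pulse_id, det_id, len(body)) + body

def unpack_record(buf: bytes):
    magic, pulse_id, det_id, size = struct.unpack_from(HDR, buf)
    off = struct.calcsize(HDR)
    return pulse_id, det_id, zlib.decompress(buf[off:off + size])

rec = pack_record(123456, 7, b"\x00" * 1024)
print(unpack_record(rec)[:2])   # -> (123456, 7)
```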
 
TUDAUST01 Inauguration of the XFEL Facility, SACLA, in SPring-8 controls, laser, electron, operation 585
 
  • R. Tanaka, Y. Furukawa, T. Hirono, M. Ishii, M. Kago, A. Kiyomichi, T. Masuda, T. Matsumoto, T. Matsushita, T. Ohata, C. Saji, T. Sugimoto, M. Yamaga, A. Yamashita
    JASRI/SPring-8, Hyogo-ken, Japan
  • T. Fukui, T. Hatsui, N. Hosoda, H. Maesaka, T. Ohshima, T. Otake, Y. Otake, H. Takebe
    RIKEN/SPring-8, Hyogo, Japan
 
  The construction of the X-ray free electron laser facility (SACLA) at SPring-8 started in 2006. After 5 years of construction, the facility succeeded in accelerating electron beams in February 2011. The main component of the accelerator consists of 64 C-band RF units to accelerate beams up to 8 GeV. The beam is compressed to a length of 30 fs, and the beams are introduced into the 18 insertion devices to generate 0.1 nm X-ray laser light. The first SASE X-rays were observed after the beam commissioning. The beam tuning will continue to achieve X-ray laser saturation for frontier scientific experiments. The control system adopts the standard 3-tier model using the MADOCA framework developed at SPring-8. The upper control layer consists of Linux PCs for operator consoles, a Sybase RDBMS for data logging and FC-based NAS for NFS. The lower layer consists of 100 Solaris-operated VME systems with newly developed boards for RF waveform processing, and PLCs are used for slow control. DeviceNet is adopted for the frontend devices to reduce signal cables. The VME systems have a beam-synchronized data-taking link to meet the 60 Hz beam operation for the beam tuning diagnostics. The accelerator control has gateways to the facility utility system, not only to monitor devices but also to control the tuning points of the cooling water. The data acquisition system for the experiments is challenging. The data rate coming from the 2D multiport CCD is 3.4 Gbps, which produces 30 TB of image data per day. Sampled data will be transferred to the 10 PFlops supercomputer via 10 Gbps Ethernet for data evaluation.
Slides TUDAUST01 [5.427 MB]
 
TUDAUST05 The Laser MegaJoule Facility: Control System Status Report controls, laser, target, software 600
 
  • J.I. Nicoloso
    CEA/DAM/DIF, Arpajon, France
  • J.P.A. Arnoul
    CEA, Le Barp, France
 
  The French Commissariat à l'Energie Atomique (CEA) is currently building the Laser MegaJoule (LMJ), a 176-beam laser facility, at the CEA Laboratory CESTA near Bordeaux. It is designed to deliver about 1.4 MJ of energy to targets for high energy density physics experiments, including fusion experiments. The LMJ technological choices were validated with the LIL, a scale-1 prototype of one LMJ bundle. The construction of the LMJ building itself is now complete and the assembly of laser components is ongoing. A petawatt laser line is also being installed in the building. The presentation gives an overview of the general control system architecture, and focuses on the hardware platform being installed on the LMJ with the aim of hosting the different software applications for system supervision and sub-system controls. This platform is based on the use of virtualization techniques, which were used to develop a highly available, optimized hardware platform with high operating flexibility, including power consumption and cooling considerations. The platform is spread over two sites: the LMJ itself, and the software integration platform built outside the LMJ, intended to provide system integration of the various software control system components of the LMJ.
Slides TUDAUST05 [9.215 MB]
 
TURAULT01 Summary of the 3rd Control System Cyber-security (CS)2/HEP Workshop controls, network, software, detector 603
 
  • S. Lüders
    CERN, Geneva, Switzerland
 
  Over the last decade modern accelerator and experiment control systems have increasingly been based on commercial off-the-shelf products (VME crates, programmable logic controllers (PLCs), supervisory control and data acquisition (SCADA) systems, etc.), on Windows or Linux PCs, and on communication infrastructures using Ethernet and TCP/IP. Despite the benefits coming with this (r)evolution, new vulnerabilities are inherited, too: worms and viruses spread within seconds via the Ethernet cable, and attackers are becoming interested in control systems. The Stuxnet worm of 2010 against a particular Siemens PLC is a unique example of a sophisticated attack against control systems [1]. Unfortunately, control PCs cannot be patched as fast as office PCs. Even worse, vulnerability scans at CERN using standard IT tools have shown that commercial automation systems lack fundamental security precautions: some systems crashed during the scan, others could easily be stopped or their process data altered [2]. The 3rd (CS)2/HEP workshop [3], held the weekend before the ICALEPCS2011 conference, was intended to raise awareness; exchange good practices, ideas, and implementations; discuss what works and what does not, as well as the pros and cons; report on security events, lessons learned and successes; and give an update on the progress made at HEP laboratories around the world in order to secure control systems. This presentation will give a summary of the solutions planned and deployed and of the experience gained.
[1] S. Lüders, "Stuxnet and the Impact on Accelerator Control Systems", FRAAULT02, ICALEPCS, Grenoble, October 2011;
[2] S. Lüders, "Control Systems Under Attack?", O5_008, ICALEPCS, Geneva, October 2005.
[3] 3rd Control System Cyber-Security CS2/HEP Workshop, http://indico.cern.ch/conferenceDisplay.py?confId=120418
 
 
WEBHAUST03 Large-bandwidth Data Acquisition Network for XFEL Facility, SACLA network, controls, site, laser 626
 
  • T. Sugimoto, Y. Joti, T. Ohata, R. Tanaka, M. Yamaga
    JASRI/SPring-8, Hyogo-ken, Japan
  • T. Hatsui
    RIKEN/SPring-8, Hyogo, Japan
 
  We have developed a large-bandwidth data acquisition (DAQ) network for user experiments at the SPring-8 Angstrom Compact Free Electron Laser (SACLA) facility. The network connects detectors, on-line visualization terminals and the high-speed storage of the control and DAQ system in order to transfer beam diagnostic data of each X-ray pulse as well as the experimental data. The development of the DAQ network system (DAQ-LAN) was one of the critical elements of the system development because data with a transfer rate reaching 5 Gbps have to be stored and visualized with high availability. DAQ-LAN is also used for instrument control. In order to guarantee the operation of both the high-speed data transfer and the instrument control, we have implemented a physically and logically separated network system. The DAQ-LAN currently consists of six 10-GbE capable network switches exclusively used for the data transfer, and ten 1-GbE capable network switches for instrument control and on-line visualization. High availability was achieved by link aggregation (LAG) with a typical convergence time of 500 ms, which is faster than RSTP (2 s). To prevent network trouble caused by broadcasts, DAQ-LAN is logically separated into twelve network segments. The logical network segmentation is based on DAQ applications such as data transfer, on-line visualization, and instrument control. The DAQ-LAN will connect the control and DAQ system to the on-site high performance computing system, and to the next-generation supercomputers in Japan, including the K computer, for instant data mining during beamtime and for post-analysis.
Slides WEBHAUST03 [5.795 MB]
 
WEBHAUST06 Virtualized High Performance Computing Infrastructure of Novosibirsk Scientific Center network, site, detector, controls 630
 
  • A. Zaytsev, S. Belov, V.I. Kaplin, A. Sukharev
    BINP SB RAS, Novosibirsk, Russia
  • A.S. Adakin, D. Chubarov, V. Nikultsev
    ICT SB RAS, Novosibirsk, Russia
  • V. Kalyuzhny
    NSU, Novosibirsk, Russia
  • N. Kuchin, S. Lomakin
    ICM&MG SB RAS, Novosibirsk, Russia
 
  Novosibirsk Scientific Center (NSC), also known worldwide as Akademgorodok, is one of the largest Russian scientific centers, hosting Novosibirsk State University (NSU) and more than 35 research organizations of the Siberian Branch of the Russian Academy of Sciences, including the Budker Institute of Nuclear Physics (BINP), the Institute of Computational Technologies, and the Institute of Computational Mathematics and Mathematical Geophysics (ICM&MG). Since each institute has specific requirements on the architecture of the computing farms involved in its research field, several computing facilities are currently hosted by NSC institutes, each optimized for a particular set of tasks; the largest of these are the NSU Supercomputer Center, the Siberian Supercomputer Center (ICM&MG), and the Grid Computing Facility of BINP. A dedicated optical network with an initial bandwidth of 10 Gbps connecting these three facilities was built in order to make it possible to share the computing resources among the research communities, thus increasing the efficiency of operating the existing computing facilities and offering a common platform for building the computing infrastructure for future scientific projects. Unification of the computing infrastructure is achieved by extensive use of virtualization technology based on the XEN and KVM platforms. Our contribution gives a thorough review of the present status and future development prospects for the NSC virtualized computing infrastructure, focusing on its applications for handling everyday data processing tasks of HEP experiments being carried out at BINP.
Slides WEBHAUST06 [14.369 MB]
 
WEBHMUST02 Solid State Direct Drive RF Linac: Control System controls, cavity, software, LLRF 638
 
  • T. Kluge, M. Back, U. Hagen, O. Heid, M. Hergt, T.J.S. Hughes, R. Irsigler, J. Sirtl
    Siemens AG, Erlangen, Germany
  • R. Fleck
    Siemens AG, Corporate Technology, CT T DE HW 4, Erlangen, Germany
  • H.-C. Schröder
    ASTRUM IT GmbH, Erlangen, Germany
 
  Recently a Solid State Direct Drive® concept for RF linacs has been introduced [1]. This new approach integrates the RF source, comprised of multiple silicon carbide (SiC) solid state RF modules [2], directly onto the cavity. Such an approach introduces new challenges for the control of such machines, namely the non-linear behavior of the solid state RF modules and the direct coupling of the RF modules onto the cavity. In this paper we discuss further results of the experimental program [3,4] to integrate and control 64 RF modules on a λ/4 cavity. The next stage of experiments aims at gaining better feed-forward control of the system and at detailed system identification. For this purpose a digital control board comprising a Virtex 6 FPGA, high-speed DACs/ADCs and trigger I/O has been developed, integrated into the experiment and used to control the system. The design of the board is consistently digital, aiming at direct processing of the signals. Power control within the cavity is achieved by outphasing control of two groups of the RF modules. This allows power control without degradation of RF-module efficiency.
[1] Heid O., Hughes T., THPD002, IPAC10, Kyoto, Japan
[2] Irsigler R. et al, 3B-9, PPC11, Chicago IL, USA
[3] Heid O., Hughes T., THP068, LINAC10, Tsukuba, Japan
[4] Heid O., Hughes T., MOPD42, HB2010, Morschach, Switzerland
 
Slides WEBHMUST02 [1.201 MB]
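
The outphasing power control mentioned above can be illustrated numerically: two constant-amplitude carriers are summed, and the delivered power depends only on their relative phase, so the modules never operate in amplitude back-off. The following Python sketch computes the resultant relative power versus outphasing angle; amplitudes and angles are arbitrary.

```python
# Quick numerical illustration of outphasing power control: two groups of RF
# modules each deliver a constant-amplitude carrier; the power coupled into
# the cavity is set purely by their relative phase.
import numpy as np

A = 1.0                                    # per-group amplitude (arbitrary units)
phi = np.linspace(0, np.pi, 7)             # outphasing angle between the groups
combined = 2 * A * np.cos(phi / 2)         # resultant amplitude of the two phasors
relative_power = (combined / (2 * A))**2   # = cos^2(phi/2): 1.0 at phi=0, 0.0 at pi

for p, rp in zip(phi, relative_power):
    print(f"phase offset {np.degrees(p):6.1f} deg -> relative power {rp:.2f}")
```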
 
WEMAU002 Coordinating Simultaneous Instruments at the Advanced Technology Solar Telescope controls, software, interface, target 654
 
  • S.B. Wampler, B.D. Goodrich, E.M. Johansson
    Advanced Technology Solar Telescope, National Solar Observatory, Tucson, USA
 
  A key component of the Advanced Technology Solar Telescope control system design is the efficient support of multiple instruments sharing the light path provided by the telescope. The set of active instruments varies with each experiment and possibly with each observation within an experiment. The flow of control for a typical experiment is traced through the control system to present the main aspects of the design that facilitate this behavior. Special attention is paid to the role of ATST's Common Services Framework in assisting the coordination of instruments with each other and with the telescope.
Slides WEMAU002 [0.251 MB]
Poster WEMAU002 [0.438 MB]
 
WEMAU010 Web-based Control Application using WebSocket controls, GUI, Linux, Windows 673
 
  • Y. Furukawa
    JASRI/SPring-8, Hyogo-ken, Japan
 
  WebSocket [1] brings asynchronous full-duplex communication between a web-based (i.e. JavaScript-based) application and a web server. WebSocket started as a part of the HTML5 standardization but has since been separated from HTML5 and is developed independently. Using WebSocket, it becomes easy to develop platform-independent presentation-layer applications for accelerator and beamline control software. In addition, no application program has to be installed on client computers except for a web browser. WebSocket-based applications communicate with the WebSocket server using simple text-based messages, so WebSocket is applicable to message-based control systems like MADOCA, which was developed for the SPring-8 control system. A simple WebSocket server for the MADOCA control system and a simple motor control application were successfully made as a first trial of a WebSocket control application. Using Google Chrome (version 10.x) on Debian/Linux and Windows 7, Opera (version 11.0 beta) on Debian/Linux, and Safari (version 5.0.3) on Mac OS X as clients, the motors could be controlled using the WebSocket-based web application. More complex applications are now under development for synchrotron radiation experiments combined with other HTML5 features.
[1] http://websocket.org/
 
Poster WEMAU010 [44.675 MB]
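
A minimal sketch of a WebSocket bridge of the kind described above is given below: it accepts simple text commands from a browser client and applies them to a mocked motor. It uses the third-party Python 'websockets' package (recent versions, where the handler receives only the connection object; older versions also pass a path argument); the "move <axis> <position>" command syntax is an invented example, not the MADOCA message format.

```python
# Sketch of a WebSocket server forwarding simple text commands to a mock motor.
import asyncio
import websockets

POSITIONS = {"theta": 0.0}          # stand-in for real motor hardware

async def handler(ws):
    async for msg in ws:
        parts = msg.split()
        if len(parts) == 3 and parts[0] == "move":
            axis, pos = parts[1], float(parts[2])
            POSITIONS[axis] = pos                    # "move" the mock motor
            await ws.send(f"ok {axis} {pos}")
        else:
            await ws.send(f"error unknown command: {msg}")

async def main():
    async with websockets.serve(handler, "0.0.0.0", 8765):
        await asyncio.Future()      # run until cancelled

if __name__ == "__main__":
    asyncio.run(main())
```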
 
WEMMU005 Fabric Management with Diskless Servers and Quattor on LHCb Linux, controls, embedded, collider 691
 
  • P. Schweitzer, E. Bonaccorsi, L. Brarda, N. Neufeld
    CERN, Geneva, Switzerland
 
  Large scientific experiments nowadays very often use large computer farms to process the events acquired from the detectors. In LHCb a small sysadmin team manages the 1400 servers of the LHCb Event Filter Farm, but also a wide variety of control servers for the detector electronics and infrastructure computers: file servers, gateways, DNS, DHCP and others. This variety of servers could not be handled without a solid fabric management system. We chose the Quattor toolkit for this task. We will present our use of this toolkit, with an emphasis on how we handle our diskless nodes (Event Filter Farm nodes and computers embedded in the acquisition electronic cards). We will show our current tests to replace the standard (RedHat/Scientific Linux) way of handling diskless nodes with fusion filesystems and how this improves fabric management.
Slides WEMMU005 [0.119 MB]
Poster WEMMU005 [0.602 MB]
 
WEPKN003 Distributed Fast Acquisitions System for Multi Detector Experiments detector, software, TANGO, distributed 717
 
  • F. Langlois, A. Buteau, X. Elattaoui, C.M. Kewish, S. Lê, P. Martinez, K. Medjoubi, S. Poirier, A. Somogyi
    SOLEIL, Gif-sur-Yvette, France
  • A. Noureddine
    MEDIANE SYSTEM, Le Pecq, France
  • C. Rodriguez
    ALTEN, Boulogne-Billancourt, France
 
  An increasing number of SOLEIL beamlines need to use several detection techniques in parallel, which can involve 2D area detectors, 1D fluorescence analyzers, etc. For such experiments, we have implemented Distributed Fast Acquisition Systems for Multi Detectors. Data from each detector are collected by independent software applications (in our case Tango devices), assuming all acquisitions are triggered by a unique master clock. Each detector software device then streams its own data to a common disk space, known as the spool. Each detector's data are stored in independent NeXus files, with the help of a dedicated high-performance NeXus streaming C++ library (called NeXus4Tango). A dedicated asynchronous process, known as the DataMerger, monitors the spool and gathers all these individual temporary NeXus files into the final experiment NeXus file stored in SOLEIL's common storage system. Metadata describing the context and environment are also added to the final file by another process (the DataRecorder device). This software architecture proved to be very modular in terms of the number and type of detectors while making the life of users easier, all data being stored in a unique file at the end of the acquisition. The status of deployment and operation of this "Distributed Fast Acquisitions system for multi detector experiments" will be presented, with the examples of QuickExafs acquisitions on the SAMBA beamline and QuickSRCD acquisitions on DISCO. In particular, the complex case of the future NANOSCOPIUM beamline will be discussed.
Poster WEPKN003 [0.671 MB]
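
The DataMerger step can be pictured with a short Python/h5py sketch that gathers per-detector temporary NeXus (HDF5) files from a spool directory into one final file. The directory layout, file extension and group naming are assumptions for illustration and do not reflect SOLEIL's actual implementation.

```python
# Sketch of a DataMerger-like step: copy each per-detector temporary NeXus/HDF5
# file from a spool directory into one final experiment file.
import glob
import os
import h5py

def merge_spool(spool_dir: str, final_path: str) -> None:
    with h5py.File(final_path, "a") as final:
        for path in sorted(glob.glob(os.path.join(spool_dir, "*.nxs"))):
            det = os.path.splitext(os.path.basename(path))[0]
            grp = final.require_group(det)          # one group per detector file
            with h5py.File(path, "r") as src:
                for key in src:                     # copy each top-level entry
                    src.copy(src[key], grp)

# merge_spool("/spool/scan_0042", "/storage/scan_0042_master.nxs")
```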
 
WEPKN007 A LEGO Paradigm for Virtual Accelerator Concept controls, simulation, software, operation 728
 
  • S.N. Andrianov, A.N. Ivanov, E.A. Podzyvalov
    St. Petersburg State University, St. Petersburg, Russia
 
  The paper considers basic features of a Virtual Accelerator concept based on a LEGO paradigm. This concept involves three types of components: different mathematical models for accelerator design problems, integrated beam simulation packages (i.e. COSY, MAD, OptiM and others), and a special class of virtual feedback instruments similar to real control systems (EPICS). All of these components should interoperate for more complete analysis of control systems and increased fault tolerance. The Virtual Accelerator is an information and computing environment which provides a framework for analysis based on these components, which can be combined in different ways. Corresponding distributed computing services establish the interaction between the mathematical models and the low-level control system. The general idea of the software implementation is based on a Service-Oriented Architecture (SOA), which allows the use of cloud computing technology and enables remote access to the information and computing resources. The Virtual Accelerator allows a designer to combine powerful instruments for modeling beam dynamics in a user-friendly way, including both self-developed and well-known packages. Within the scope of this concept the following are also proposed: control system identification, analysis and result verification, visualization, as well as virtual feedback for beam-line operation. The architecture of the Virtual Accelerator system itself and results of beam dynamics studies are presented.
Poster WEPKN007 [0.969 MB]
 
WEPKS002 Quick EXAFS Experiments Using a New GDA Eclipse RCP GUI with EPICS Hardware Control detector, interface, EPICS, hardware 771
 
  • R.J. Woolliscroft, C. Coles, M. Gerring, M.R. Pearson
    Diamond, Oxfordshire, United Kingdom
 
  Funding: Diamond Light Source Ltd.
The Generic Data Acquisition (GDA)* framework is open-source, Java and Eclipse RCP based data acquisition software for synchrotron and neutron facilities. A new implementation of the GDA on the B18 beamline at the Diamond synchrotron will be discussed. This beamline performs XAS energy-scanning experiments and includes a continuous-scan mode of the monochromator, synchronised with various detectors, for Quick EXAFS (QEXAFS) experiments. A new perspective for the GDA's Eclipse RCP GUI has been developed in which graphical editors are used to write XML files which hold experimental parameters. The same XML files are marshalled by the GDA server to create Java beans used by the Jython scripts run within the GDA server. The underlying motion control is provided by EPICS. The new Eclipse RCP GUI and the integration and synchronisation between the two software systems and the detectors will be covered.
* GDA website: http://www.opengda.org/
 
Poster WEPKS002 [1.277 MB]
 
WEPKS011 Use of ITER CODAC Core System in SPIDER Ion Source EPICS, controls, data-acquisition, framework 801
 
  • C. Taliercio, A. Barbalace, M. Breda, R. Capobianco, A. Luchetta, G. Manduchi, F. Molon, M. Moressa, P. Simionato
    Consorzio RFX, Associazione Euratom-ENEA sulla Fusione, Padova, Italy
 
  In February 2011 ITER released a new version (v2) of the CODAC Core System. In addition to the selected EPICS core, the new package also includes several tools from Control System Studio [1]. These tools are all integrated in Eclipse and offer a unified environment for development and operation. The SPIDER Ion Source experiment is the first experiment planned in the ITER Neutral Beam Test Facility under construction at Consorzio RFX, Padova, Italy. As the final product of the Test Facility is the ITER Neutral Beam Injector, we decided to adhere to the ITER CODAC guidelines from the beginning. Therefore the EPICS system provided in the CODAC Core System will be used in SPIDER for plant control and supervision and, to some extent, for data acquisition. In this paper we report our experience in the usage of CODAC Core System v2 in the implementation of the control system of SPIDER and, in particular, we analyze the benefits and drawbacks of the Self Description Data (SDD) tools which, based on an XML description of the signals involved in the system, provide the automatic generation of the configuration files for the EPICS tools and PLC data exchange.
[1] Control System Studio home page: http://css.desy.de/content/index_eng.html
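
The idea behind the SDD tools, generating configuration from an XML description of the plant signals, can be sketched in a few lines of Python that turn a signal list into EPICS database records. The XML schema, signal names and record fields here are simplified inventions, not the ITER SDD format.

```python
# Sketch of generating EPICS database records from an XML signal description.
import xml.etree.ElementTree as ET

SIGNALS_XML = """
<signals>
  <signal name="SPIDER:ISRC:VGRID" type="ai" egu="V" desc="grid voltage"/>
  <signal name="SPIDER:ISRC:PRESS" type="ai" egu="Pa" desc="source pressure"/>
</signals>
"""

TEMPLATE = 'record({rtype}, "{name}") {{\n  field(DESC, "{desc}")\n  field(EGU,  "{egu}")\n}}\n'

def generate_db(xml_text: str) -> str:
    root = ET.fromstring(xml_text)
    return "\n".join(
        TEMPLATE.format(rtype=s.get("type"), name=s.get("name"),
                        desc=s.get("desc"), egu=s.get("egu"))
        for s in root.findall("signal"))

print(generate_db(SIGNALS_XML))   # write this to an EPICS .db file
```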
 
 
WEPKS014 NOMAD – More Than a Simple Sequencer controls, hardware, CORBA, interface 808
 
  • P. Mutti, F. Cecillon, A. Elaazzouzi, Y. Le Goc, J. Locatelli, H. Ortiz, J. Ratel
    ILL, Grenoble, France
 
  NOMAD is the new instrument control software of the Institut Laue-Langevin. Code that is highly sharable among the whole instrument suite, a user-oriented design for tailored functionality, and improved autonomy of the instrument teams thanks to a uniform and ergonomic user interface are the essential elements guiding the software development. NOMAD implements a client/server approach. The server is the core business layer containing all the instrument methods and the hardware drivers, while the GUI provides all the necessary functionality for the interaction between user and hardware. All instruments share the same executable, while a set of XML configuration files adapts hardware needs and instrument methods to the specific experimental setup. Thanks to a complete graphical representation of experimental sequences, NOMAD provides an overview of past, present and future operations. Users have the freedom to build their own specific workflows using an intuitive drag-and-drop technique. A complete database of drivers to connect and control all possible instrument components has been created, simplifying the inclusion of a new piece of equipment for an experiment. A web application makes all the relevant information on the status of the experiment available outside the ILL. A set of scientific methods facilitates the interaction between users and hardware, giving access to instrument control and to complex operations within just one click on the interface. NOMAD is not only for scientists: dedicated tools allow daily use for setting up and testing a variety of technical equipment.
Poster WEPKS014 [6.856 MB]
 
WEPKS019 Data Analysis Workbench data-analysis, interface, TANGO, synchrotron 823
 
  • A. Götz, M.W. Gerring, O. Svensson
    ESRF, Grenoble, France
  • S. Brockhauser
    EMBL, Heidelberg, Germany
 
  Funding: ESRF
Data Analysis Workbench [1] is a new software tool produced in collaboration by the ESRF, Soleil and Diamond. It provides data visualization and workflow algorithm design for data analysis in combination with data collection. The workbench uses Passerelle as the workflow engine and EDNA plugins for data analysis. Actors talking to Tango are used for sending limited commands to hardware and for starting existing data collection algorithms. There are scripting interfaces to SPEC and Python. At the ESRF the current state is a prototype.
[1] http://www.dawb.org
 
Poster WEPKS019 [2.249 MB]
 
WEPMN022 LIA-2 Power Supply Control System controls, interlocks, electron, network 926
 
  • A. Panov, P.A. Bak, D. Bolkhovityanov
    BINP SB RAS, Novosibirsk, Russia
 
  LIA-2 is an electron linear induction accelerator designed and built by BINP for flash radiography. The inductors get power from 48 modulators, grouped by six in eight racks. Each modulator includes 3 control devices, connected via an internal CAN bus to an embedded modulator controller, which runs the Keil RTX real-time OS. Each rack includes a cPCI crate equipped with an x86-compatible processor board running Linux*. The modulator controllers are connected to the cPCI crate via an external CAN bus. Additionally, a brief modulator status is displayed on a front indicator. The integration of control electronics into devices with a high level of electromagnetic interference is discussed, and the use of real-time OSes in such devices and the interaction between them are described.
*"LIA-2 Linear Induction Accelerator Control System", this conference
 
Poster WEPMN022 [5.035 MB]
 
WEPMN023 The ATLAS Tile Calorimeter Detector Control System detector, controls, monitoring, electronics 929
 
  • G. Ribeiro
    LIP, Lisboa, Portugal
  • G. Arabidze
    MSU, East Lansing, Michigan, USA
  • P. Lafarguette
    Université Blaise Pascal, Clermont-Ferrand, France
  • S. Nemecek
    Czech Republic Academy of Sciences, Institute of Physics, Prague, Czech Republic
 
  The main task of the ATLAS Tile Calorimeter Detector Control System (DCS) is to enable the coherent and safe operation of the calorimeter. All actions initiated by the operator, as well as all errors, warnings and alarms concerning the hardware of the detector, are handled by the DCS. The Tile Calorimeter DCS controls and monitors mainly the low voltage and high voltage power supply systems, but it is also interfaced with the infrastructure (cooling system and racks), the calibration systems, the data acquisition system, the configuration and conditions databases and the detector safety system. The system has been operational since the beginning of LHC operation and has been used extensively in the operation of the detector. In recent months, effort was directed towards the implementation of automatic recovery of power supplies after trips. The current status, results and latest developments will be presented.
Poster WEPMN023 [0.404 MB]
 
WEPMN025 A New Fast Triggerless Acquisition System For Large Detector Arrays detector, FPGA, real-time, controls 935
 
  • P. Mutti, M. Jentschel, J. Ratel, F. Rey, E. Ruiz-Martinez, W. Urban
    ILL, Grenoble, France
 
  A common trend in low and medium energy nuclear physics is presently to develop more complex detector systems that form multi-detector arrays. The main objective of such an elaborate set-up is to obtain comprehensive information about the products of all reactions. State-of-the-art γ-ray spectroscopy nowadays requires the use of large arrays of HPGe detectors, often coupled with anti-Compton active shielding to reduce the ambient background. In view of this complexity, the front-end electronics must provide precise information about energy, time and possibly pulse shape. The large multiplicity of the detection system requires the capability to process the multitude of signals from many detectors, fast processing and a very high throughput of more than 10^6 data words/s. The possibility of handling such a complex system with traditional analogue electronics has rapidly shown its limitations due, first of all, to the non-negligible cost per channel and, moreover, to the signal degradation associated with complex analogue paths. Nowadays, digital pulse processing systems are available with performances, in terms of timing and energy resolution, equal to when not better than the corresponding analogue ones, for a fraction of the cost per channel. The presented system uses a combination of a 15-bit 100 MS/s digitizer with a PowerPC-based VME single board computer. Real-time processing algorithms have been developed to handle total event rates of more than 1 MHz, providing on-line display for single and coincidence events.
Poster WEPMN025 [15.172 MB]
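
As a simplified illustration of the kind of real-time pulse processing such digitizer-based systems perform, the following Python sketch applies a trapezoidal shaping filter (the difference of two delayed moving sums) to a noisy step-like pulse and reads the amplitude off the flat top. Real firmware works on exponentially decaying preamplifier signals and adds pole-zero correction; all parameters here are arbitrary.

```python
# Simplified trapezoidal shaping of a step-like detector pulse.
import numpy as np

def trapezoidal(x, rise=10, flat=5):
    """Return the trapezoidal-shaped waveform of input samples x."""
    c = np.cumsum(x)
    out = np.zeros(len(x))
    for n in range(len(x)):
        a = c[n] - (c[n - rise] if n - rise >= 0 else 0.0)     # leading moving sum
        m = n - rise - flat                                    # end of trailing window
        b = (c[m] if m >= 0 else 0.0) - (c[m - rise] if m - rise >= 0 else 0.0)
        out[n] = (a - b) / rise                                # flat top ~ pulse height
    return out

# Toy step pulse of amplitude 3.0 starting at sample 50, with noise.
x = np.concatenate([np.zeros(50), 3.0 * np.ones(100)]) + 0.05 * np.random.randn(150)
shaped = trapezoidal(x)
print("estimated amplitude:", round(float(shaped.max()), 2))   # ~3.0
```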
 
WEPMS026 The TimBel Synchronization Board for Time Resolved Experiments at Synchrotron SOLEIL synchrotron, electron, storage-ring, FPGA 1036
 
  • J.P. Ricaud, P. Betinelli-Deck, J. Bisou, X. Elattaoui, C. Laulhé, P. Monteiro, L.S. Nadolski, S. Ravy, G. Renaud, M.G. Silly, F. Sirotti
    SOLEIL, Gif-sur-Yvette, France
 
  Time-resolved experiments are one of the major services that synchrotrons can provide to scientists. The short, high-frequency and regular flashes of synchrotron light are a fantastic tool to study the evolution of phenomena over time. To carry out time-resolved experiments, beamlines need to synchronize their devices with these flashes of light with a jitter shorter than the pulse duration. For that purpose, Synchrotron SOLEIL has developed the TimBeL board, fully interfaced to the TANGO framework. This paper presents the main features required by time-resolved experiments and how we achieved our goals with the TimBeL board.
Poster WEPMS026 [1.726 MB]
 
WEPMU026 Protecting Detectors in ALICE detector, injection, controls, monitoring 1122
 
  • M. Lechman, A. Augustinus, P.Ch. Chochula, G. De Cataldo, A. Di Mauro, L.S. Jirdén, A.N. Kurepin, P. Rosinský, H. Schindler
    CERN, Geneva, Switzerland
  • A. Moreno
    Universidad Politécnica de Madrid, E.T.S.I Industriales, Madrid, Spain
  • O. Pinazza
    INFN-Bologna, Bologna, Italy
 
  ALICE is one of the big LHC experiments at CERN in Geneva. It is composed of many sophisticated and complex detectors mounted very compactly around the beam pipe. Each detector is a unique masterpiece of design, engineering and construction and any damage to it could stop the experiment for months or even for years. It is therefore essential that the detectors are protected from any danger and this is one very important role of the Detector Control System (DCS). One of the main dangers for the detectors is the particle beam itself. Since the detectors are designed to be extremely sensitive to particles they are also vulnerable to any excess of beam conditions provided by the LHC accelerator. The beam protection consists of a combination of hardware interlocks and control software and this paper will describe how this is implemented and handled in ALICE. Tools have also been developed to support operators and shift leaders in the decision making related to beam safety. The gained experiences and conclusions from the individual safety projects are also presented.  
Poster WEPMU026 [1.561 MB]
 
WEPMU035 Distributed Monitoring System Based on ICINGA monitoring, network, distributed, database 1149
 
  • C. Haen, E. Bonaccorsi, N. Neufeld
    CERN, Geneva, Switzerland
 
  The basic services of the large IT infrastructure of the LHCb experiment are monitored with ICINGA, a fork of the industry-standard monitoring software NAGIOS. The infrastructure includes thousands of servers and computers, storage devices, more than 200 network devices and many VLANs, databases, hundreds of diskless nodes and much more. The number of configuration files needed to control the whole installation is large, and there is a lot of duplication when the monitoring infrastructure is distributed over several servers. In order to ease the manipulation of the configuration files, we designed a monitoring schema particularly adapted to our network, taking advantage of its specificities, and developed a tool to centralize its configuration in a database. Thanks to this tool, we could also parse all our previous configuration files and thus populate our Oracle database, which comes as a replacement of the previous Active Directory based solution. A web frontend allows non-expert users to easily add new entities to monitor. We present the schema of our monitoring infrastructure and the tool used to manage and automatically generate the configuration for ICINGA.
Poster WEPMU035 [0.375 MB]
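
The centralised-configuration approach described above can be sketched as a small Python generator that renders ICINGA/NAGIOS-style host definitions from database rows. The row layout and the 'generic-host' template name are assumptions for illustration, not the LHCb schema.

```python
# Sketch of generating ICINGA/NAGIOS-style host definitions from database rows.
HOSTS = [  # would come from the Oracle database in the real tool
    {"name": "hlt-node-001", "address": "10.128.1.1",  "group": "farm"},
    {"name": "daq-gw-01",    "address": "10.128.0.10", "group": "gateway"},
]

HOST_TPL = (
    "define host {{\n"
    "  use        generic-host\n"
    "  host_name  {name}\n"
    "  address    {address}\n"
    "  hostgroups {group}\n"
    "}}\n"
)

def render_hosts(rows):
    return "\n".join(HOST_TPL.format(**row) for row in rows)

if __name__ == "__main__":
    print(render_hosts(HOSTS))   # write this into an ICINGA objects file
```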
 
WEPMU037 Virtualization for the LHCb Experiment network, controls, Linux, hardware 1157
 
  • E. Bonaccorsi, L. Brarda, M. Chebbi, N. Neufeld
    CERN, Geneva, Switzerland
  • F. Sborzacchi
    INFN/LNF, Frascati (Roma), Italy
 
  The LHCb experiment, one of the four large particle physics detectors at CERN, counts more than 2000 servers and embedded systems in its online system. As a result of ever-increasing CPU performance in modern servers, many of the applications in the controls system are excellent candidates for virtualization technologies. We see virtualization as an approach to cut down cost, optimize resource usage and manage the complexity of the IT infrastructure of LHCb. Recently we have added a Kernel-based Virtual Machine (KVM) cluster based on Red Hat Enterprise Virtualization for Servers (RHEV), complementary to the existing Hyper-V cluster devoted only to the virtualization of the Windows guests. This paper describes the architecture of our solution based on KVM and RHEV, along with its integration with the existing Hyper-V infrastructure and the Quattor cluster management tools, and in particular how we run controls applications on a virtualized infrastructure. We present performance results of both the KVM and Hyper-V solutions, problems encountered and a description of the management tools developed for the integration with the online cluster and the LHCb SCADA control system based on PVSS.
 
THBHAUST02 The Wonderland of Operating the ALICE Experiment detector, operation, controls, interface 1182
 
  • A. Augustinus, P.Ch. Chochula, G. De Cataldo, L.S. Jirdén, A.N. Kurepin, M. Lechman, O. Pinazza, P. Rosinský
    CERN, Geneva, Switzerland
  • A. Moreno
    Universidad Politécnica de Madrid, E.T.S.I Industriales, Madrid, Spain
 
  ALICE is one of the experiments at the Large Hadron Collider (LHC), CERN (Geneva, Switzerland). It is composed of 18 sub-detectors, each with numerous subsystems that need to be controlled and operated in a safe and efficient way. The Detector Control System (DCS) is the key to this and has been used by detector experts with success during the commissioning of the individual detectors. With the transition from commissioning to operation, more and more tasks were transferred from detector experts to central operators. By the end of the 2010 data-taking campaign the ALICE experiment was run by a small crew of central operators, with only a single controls operator. The transition from expert to non-expert operation constituted a real challenge in terms of tools, documentation and training. In addition, a relatively high turnover and diversity in the operator crew that is specific to the HEP experiment environment (as opposed to the more stable operation crews for accelerators) made this challenge even bigger. This paper describes the original architectural choices that were made and the key components that allowed a homogeneous control system suitable for efficient centralized operation to be reached. Challenges and specific constraints that apply to the operation of a large, complex experiment are described. Emphasis will be put on the tools and procedures that were implemented to allow the transition from local detector-expert operation during commissioning and early operation to efficient centralized operation by a small operator crew not necessarily consisting of experts.
Slides THBHAUST02 [1.933 MB]
 
THBHAUST05 First Operation of the Wide-area Remote Experiment System operation, radiation, controls, synchrotron 1193
 
  • Y. Furukawa, K. Hasegawa
    JASRI/SPring-8, Hyogo-ken, Japan
  • G. Ueno
    RIKEN Spring-8 Harima, Hyogo, Japan
 
  The Wide-area Remote Experiment System (WRES) at SPring-8 has been successfully developed [1]. The system communicates with remote users over SSL/TLS with bi-directional authentication to prevent interference from non-authorized access. It includes a message filtering system that restricts each remote user's access to the corresponding beamline equipment, and a safety interlock system that protects persons near the experimental station from accidental motion of heavy equipment. The system also provides video streaming to monitor samples and experimental equipment. We have tested the system with respect to safety, stability, reliability, etc., and successfully performed the first remote experiment from the RIKEN Wako site, 480 km away from SPring-8, at the end of October 2010.
[1] Y. Furukawa, K. Hasegawa, D. Maeda, G. Ueno, "Development of Remote Experiment System", Proc. ICALEPCS 2009 (Kobe, Japan), p. 615.
 
Slides THBHAUST05 [5.455 MB]
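  A minimal sketch of the bi-directional (mutual) SSL/TLS authentication described in the abstract, using Python's standard ssl module; certificate file names and the port are hypothetical and do not reflect the actual WRES implementation.

    # Minimal sketch of a TLS server requiring client certificates
    # (bi-directional authentication). File names and port are hypothetical.
    import socket
    import ssl

    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.load_cert_chain(certfile="server.crt", keyfile="server.key")
    context.load_verify_locations(cafile="trusted_users_ca.pem")
    context.verify_mode = ssl.CERT_REQUIRED      # reject clients without a valid certificate

    with socket.create_server(("0.0.0.0", 8443)) as server:
        with context.wrap_socket(server, server_side=True) as tls_server:
            conn, addr = tls_server.accept()     # handshake verifies the client certificate
            peer_cert = conn.getpeercert()       # authenticated client identity
            conn.sendall(b"authenticated\n")
            conn.close()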
 
THCHAUST02 Large Scale Data Facility for Data Intensive Synchrotron Beamlines data-management, synchrotron, detector, software 1216
 
  • R. Stotzka, A. Garcia, V. Hartmann, T. Jejkal, H. Pasic, A. Streit, J. van Wezel
    KIT, Karlsruhe, Germany
  • D. Haas, W. Mexner, T. dos Santos Rolo
    Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany
 
  ANKA is a large-scale facility of the Helmholtz Association of National Research Centers in Germany, located at the Karlsruhe Institute of Technology. As a synchrotron light source it provides light from hard X-rays to the far infrared for research and technology, and it serves as a user facility for the national and international scientific community, currently producing 100 TB of data per year. Within the next two years a few additional data-intensive beamlines will become operational, producing up to 1.6 PB per year. These amounts of data have to be stored and provided on demand to the users. The Large Scale Data Facility (LSDF), located on the same campus as ANKA, is a data service facility dedicated to data-intensive scientific experiments. Currently 4 PB of storage for unstructured and structured data and a Hadoop cluster as a computing resource for data-intensive applications are available. Within the campus, the experiments and the main large data-producing facilities are connected via 10 GE network links; an additional 10 GE link connects to the internet. Tools for easy and transparent access allow scientists to use the LSDF without having to deal with its internal structures and technologies. Open interfaces and APIs support a variety of access methods to the highly available services for high-throughput data applications. In close cooperation with ANKA, the LSDF provides assistance to efficiently organize data and metadata structures, and it develops and deploys community-specific software running on the directly connected computing infrastructure.
Slides THCHAUST02 [1.294 MB]
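  As an illustration of programmatic access to a Hadoop-backed store of this kind (the abstract does not name the concrete APIs), the sketch below reads a file through the standard WebHDFS REST interface; host name, port, path and user are hypothetical.

    # Minimal sketch: read a file from a Hadoop cluster via the standard
    # WebHDFS REST API. Host name, port, path and user are hypothetical.
    import requests

    WEBHDFS = "http://lsdf-namenode.example.org:9870/webhdfs/v1"

    def fetch(path, user, out_file):
        # op=OPEN follows the redirect to the datanode that actually serves the data
        resp = requests.get(WEBHDFS + path,
                            params={"op": "OPEN", "user.name": user},
                            stream=True)
        resp.raise_for_status()
        with open(out_file, "wb") as fh:
            for chunk in resp.iter_content(chunk_size=1 << 20):
                fh.write(chunk)

    if __name__ == "__main__":
        fetch("/anka/tomo/scan_0001.h5", "someuser", "scan_0001.h5")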
 
THCHAUST04 Management of Experiments and Data at the National Ignition Facility laser, controls, target, diagnostics 1224
 
  • S.G. Azevedo, R.G. Beeler, R.C. Bettenhausen, E.J. Bond, A.D. Casey, H.C. Chandrasekaran, C.B. Foxworthy, M.S. Hutton, J.E. Krammen, J.A. Liebman, A.A. Marsh, T. M. Pannell, D.E. Speck, J.D. Tappero, A.L. Warrick
    LLNL, Livermore, California, USA
 
  Funding: This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Experiments, or "shots", conducted at the National Ignition Facility (NIF) are discrete events that occur over a very short time frame (tens of ns) separated by hours. Each shot is part of a larger campaign of shots to advance scientific understanding in high-energy-density physics. In one campaign, energy from the 192-beam, 1.8-Megajoule pulsed laser in NIF will be used to implode a hydrogen-filled target to demonstrate controlled fusion. Each shot generates gigabytes of data from over 30 diagnostics that measure optical, x-ray, and nuclear phenomena from the imploding target. Because of the low duty cycle of shots, and the thousands of adjustments for each shot (target type, composition, shape; laser beams used, their power profiles, pointing; diagnostic systems used, their configuration, calibration, settings), it is imperative that we accurately define all equipment prior to the shot. Following the shot, and the data acquisition by the automatic control system, it is equally imperative that we archive, analyze and visualize the results within the required 30 minutes post-shot. Results must be securely stored, approved, web-visible and downloadable in order to facilitate subsequent publication. To date, NIF has successfully fired over 2,500 system shots, and thousands of test firings and dry runs. We will present an overview of the highly flexible and scalable campaign setup and management systems that control all aspects of the experimental NIF shot cycle, from configuration of drive lasers all the way through presentation of analyzed results.
LLNL-CONF-476112
 
Slides THCHAUST04 [5.650 MB]
 
FRAAUIO05 High-Integrity Software, Computation and the Scientific Method software, controls, background, vacuum 1297
 
  • L. Hatton
    Kingston University, Kingston on Thames, United Kingdom
 
  Given the overwhelming use of computation in modern science and the continuing difficulties in quantifying the results of complex computations, it is of increasing importance to understand its role in the essentially Popperian scientific method. There is a growing debate, but it has some distance to run yet, with journals still divided on what even constitutes repeatability. Computation rightly occupies a central role in modern science. Datasets are enormous and the processing implications of some algorithms are equally staggering. In this paper, some of the problems with computation, for example with respect to specification, implementation, the use of programming languages and the long-term, unquantifiable presence of undiscovered defects, will be explored with numerous examples. One of the aims of the paper is to understand the implications of trying to produce high-integrity software and the limitations which still exist. Unfortunately Computer Science itself suffers from an inability to be suitably critical of its practices and has operated in a largely measurement-free vacuum since its earliest days. Within CS itself, this has not been so damaging in that it simply leads to unconstrained creativity and a rapid turnover of new technologies. In the applied sciences, however, which have to depend on computational results, such unquantifiability significantly undermines trust. It is time this particular demon was put to rest.
Slides FRAAUIO05 [0.710 MB]
 
FRBHAULT02 ATLAS Online Determination and Feedback of LHC Beam Parameters database, feedback, detector, monitoring 1306
 
  • J.G. Cogan, R. Bartoldus, D.W. Miller, E. Strauss
    SLAC, Menlo Park, California, USA
 
  The High Level Trigger of the ATLAS experiment relies on the precise knowledge of the position, size and orientation of the luminous region produced by the LHC. Moreover, these parameters change significantly even during a single data taking run. We present the challenges, solutions and results for the online luminous region (beam spot) determination, and its monitoring and feedback system in ATLAS. The massively parallel calculation is performed on the trigger farm, where individual processors execute a dedicated algorithm that reconstructs event vertices from the proton-proton collision tracks seen in the silicon trackers. Monitoring histograms from all the cores are sampled and aggregated across the farm every 60 seconds. We describe the process by which a standalone application fetches and fits these distributions, extracting the parameters in real time. When the difference between the nominal and measured beam spot values satisfies threshold conditions, the parameters are published to close the feedback loop. To achieve sharp time boundaries across the event stream that is triggered at rates of several kHz, a special datagram is injected into the event path via the Central Trigger Processor that signals the pending update to the trigger nodes. Finally, we describe the efficient near-simultaneous database access through a proxy fan-out tree, which allows thousands of nodes to fetch the same set of values in a fraction of a second.  
Slides FRBHAULT02 [7.573 MB]
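  A minimal sketch of the per-axis extraction step described above, i.e. fitting an aggregated vertex-position histogram and checking whether the result differs from the nominal beam spot by more than a threshold; the histogram, tolerances and publishing step are placeholders, not the ATLAS implementation.

    # Minimal sketch: fit a Gaussian to an aggregated vertex-position histogram
    # and decide whether new beam spot parameters should be published.
    # Histogram source, tolerances and the publishing step are placeholders.
    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian(x, amplitude, mean, sigma):
        return amplitude * np.exp(-0.5 * ((x - mean) / sigma) ** 2)

    def fit_axis(bin_centres, counts):
        """Fit one projection (x, y or z) of the vertex distribution."""
        p0 = (counts.max(),
              np.average(bin_centres, weights=counts),
              np.std(bin_centres) / 2.0)
        popt, _ = curve_fit(gaussian, bin_centres, counts, p0=p0)
        return popt[1], abs(popt[2])        # fitted position and width

    def update_needed(measured, nominal, pos_tol=0.010, width_tol=0.005):
        """True if position or width drifted beyond the (placeholder) tolerances, in mm."""
        return (abs(measured[0] - nominal[0]) > pos_tol or
                abs(measured[1] - nominal[1]) > width_tol)

    if __name__ == "__main__":
        centres = np.linspace(-0.5, 0.5, 101)
        counts = gaussian(centres, 1000.0, 0.06, 0.05) + np.random.poisson(5.0, centres.size)
        pos, width = fit_axis(centres, counts)
        if update_needed((pos, width), nominal=(0.0, 0.045)):
            print("would publish new beam spot:", pos, width)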