WEPMU — Poster (12-Oct-11 13:30–15:00)
Chair: J.M. Meyer, ESRF, Grenoble, France
Paper / Title / Page
WEPMU001 Temperature Measurement System of Novosibirsk Free Electron Laser 1044
 
  • S.S. Serednyakov, B.A. Gudkov, V.R. Kozak, E.A. Kuper, P.A. Selivanov, S.V. Tararyshkin
    BINP SB RAS, Novosibirsk, Russia
 
  This paper describes the temperature-monitoring system of the Novosibirsk FEL. The main task of this system is to prevent the FEL from overheating and its individual components from being damaged. The system accumulates information from a large number of temperature sensors installed on different parts of the FEL facility, which allows measuring the temperature of the vacuum chamber, the cooling water and the windings of the magnetic elements. Since the architecture of this system allows processing information from sources other than temperature sensors, it is also used to measure, for instance, vacuum parameters and some parameters of the cooling water. The software part of this system is integrated into the FEL control system, and the readings from all sensors are recorded to the database every 30 seconds.
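  As a rough illustration of the polling-and-archiving pattern described above, the sketch below reads a set of sensors every 30 seconds and stores the values in a database. The sensor names, the read_sensor() helper and the SQLite schema are hypothetical placeholders, not the actual BINP hardware interface or database.

import time
import sqlite3

# Hypothetical sensor list, read_sensor() backend and SQLite schema; the real
# system reads its sensors through the FEL control system hardware.
SENSORS = ["vacuum_chamber_T01", "cooling_water_T02", "magnet_winding_T03"]

def read_sensor(name):
    """Placeholder for the real device access."""
    raise NotImplementedError

def archive_forever(db_path="fel_temperatures.db", period_s=30.0):
    db = sqlite3.connect(db_path)
    db.execute("CREATE TABLE IF NOT EXISTS readings (ts REAL, sensor TEXT, value REAL)")
    while True:
        ts = time.time()
        for name in SENSORS:
            try:
                value = read_sensor(name)
            except NotImplementedError:
                continue                  # no backend in this sketch
            db.execute("INSERT INTO readings VALUES (?, ?, ?)", (ts, name, value))
        db.commit()
        time.sleep(period_s)              # the abstract quotes a 30 s archiving period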
Poster WEPMU001 [0.484 MB]
 
WEPMU002 Testing Digital Electronic Protection Systems 1047
 
  • A. Garcia Muñoz, S. Gabourin
    CERN, Geneva, Switzerland
 
  The Safe Machine Parameters Controller (SMPC) ensures the correct configuration of the LHC machine protection system and that safe injection conditions are maintained throughout the filling of the LHC machine. The SMPC receives information in real time from measurement electronics installed throughout the LHC and SPS accelerators, determines the state of the machine, and informs the SPS and LHC machine protection systems of these conditions. This paper outlines the core concepts and realization of the SMPC test-bench, based on a VME crate and a LabVIEW program. Its main goal is to ensure the correct functioning of the SMPC for the protection of the CERN accelerator complex. To achieve this, the tester has been built to replicate the machine environment and operation, in order to ensure that the chassis under test is completely exercised. The complexity of the task increases with the number of input combinations, which in the case of the SMPC exceeds 2^364. This paper also outlines the benefits and weaknesses of developing a test suite independently of the hardware being tested, using the "V" approach.
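  To put the quoted figure in perspective (a back-of-the-envelope count, not taken from the paper): with 364 independent binary inputs the number of distinct input combinations is

\[
  2^{364} = 10^{\,364\log_{10}2} \approx 3.8\times10^{109},
\]

so exhaustive testing is impossible and the test bench must instead exercise a representative subset of machine states and input patterns.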
Poster WEPMU002 [0.763 MB]
 
WEPMU003 The Diamond Machine Protection System 1051
 
  • M.T. Heron, Y.S. Chernousko, P. Hamadyk, S.C. Lay, N. Rotolo
    Diamond, Oxfordshire, United Kingdom
 
  Funding: Diamond Light Source LTD
The Diamond Light Source Machine Protection System manages the hazards from high-power photon beams, among other hazards, to ensure equipment protection on the booster synchrotron and storage ring. The system has a shutdown requirement of under 1 ms on a beam mis-steer and has to manage in excess of a thousand interlocks. This is realised using a combination of bespoke hardware and programmable logic controllers. The structure of the Machine Protection System will be described, together with operational experience and developments to provide post-mortem functionality.
 
Poster WEPMU003 [0.694 MB]
 
WEPMU005 Personnel Protection, Equipment Protection and Fast Interlock Systems: Three Different Technologies to Provide Protection at Three Different Levels 1055
 
  • D.F.C. Fernández-Carreiras, D.B. Beltrán, J. Klora, O. Matilla, J. Moldes, R. Montaño, M. Niegowski, R. Ranz, A. Rubio, S. Rubio-Manrique
    CELLS-ALBA Synchrotron, Cerdanyola del Vallès, Spain
 
  The Personnel Safety System is based on PILZ PLCs, SIL3-compliant according to the IEC 61508 standard. It is independent from the other subsystems and relies on a dedicated certification, first by PILZ and then by TÜV. The Equipment Protection System uses B&R hardware and comprises more than 50 PLCs and more than 100 distributed I/O modules installed inside the tunnel. The CPUs of the PLCs are interconnected by a deterministic network, supervising more than 7000 signals. Each beamline has an independent system. The fast interlocks use the bidirectional fibers of the MRF timing system to distribute the interlocks in the microsecond range. Events are distributed by fiber optics to synchronize more than 280 elements.
Poster WEPMU005 [32.473 MB]
 
WEPMU006 Architecture for Interlock Systems: Reliability Analysis with Regard to Safety and Availability 1058
 
  • S. Wagner, A. Apollonio, R. Schmidt, M. Zerlauth
    CERN, Geneva, Switzerland
  • A. Vergara-Fernandez
    ITER Organization, St. Paul lez Durance, France
 
  For accelerators (e.g. the LHC) and other large experimental physics facilities (e.g. ITER), machine protection relies on complex interlock systems. In the design of interlock loops, the choice of hardware architecture impacts both machine safety and availability. While high machine safety is an inherent requirement, the constraints in terms of availability may differ from one facility to another. For the interlock loops protecting the LHC superconducting magnet circuits, reduced machine availability can be tolerated, since shutdowns do not affect the longevity of the equipment. In ITER's case, on the other hand, high availability is required, since fast shutdowns cause significant magnet aging. A reliability analysis of various interlock loop architectures has been performed. The analysis, based on an analytical model, compares a 1oo3 (one-out-of-three) and a 2oo3 architecture with a single loop. It yields the probabilities for four scenarios: (1) completed mission (e.g., a physics fill in the LHC or a pulse in ITER without a shutdown being triggered), (2) shutdown because of a failure in the interlock loop, (3) emergency shutdown (e.g., after a quench of a magnet) and (4) missed emergency shutdown (shutdown required but the interlock loop fails, possibly leading to severe damage of the facility). Scenario 4 relates to machine safety and, together with scenarios 2 and 3, defines the machine availability reflected by scenario 1. This paper presents the results of the analysis of the properties of the different architectures with regard to machine safety and availability.
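  As a generic illustration of why the three architectures behave differently (a textbook comparison under simplifying assumptions, not the analytical model of the paper), suppose each of the three channels independently misses a required shutdown with probability p and generates a spurious shutdown with probability q during one mission. Then

\[
  \text{single loop:}\quad P_\text{missed} = p,\qquad P_\text{false} = q,
\]
\[
  \text{1oo3:}\quad P_\text{missed} = p^{3},\qquad P_\text{false} = 1-(1-q)^{3}\approx 3q,
\]
\[
  \text{2oo3:}\quad P_\text{missed} = 3p^{2}(1-p)+p^{3}\approx 3p^{2},\qquad
  P_\text{false} = 3q^{2}(1-q)+q^{3}\approx 3q^{2}.
\]

A 1oo3 loop therefore maximises safety (scenario 4) at the cost of more spurious shutdowns (scenario 2), while a 2oo3 loop trades a small part of that safety margin for substantially higher availability.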
 
WEPMU007 Securing a Control System: Experiences from ISO 27001 Implementation 1062
 
  • V. Vuppala, K.D. Davidson, J. Kusler, J.J. Vincent
    NSCL, East Lansing, Michigan, USA
 
  Recent incidents have emphasized the importance of security and operational continuity for achieving the quality objectives of an organization and the safety of its personnel and machines. However, security and disaster recovery are either completely ignored or given a low priority during the design and development of an accelerator control system, the underlying technologies, and the overlaid applications. This leads to an operational facility that is easy to breach and difficult to recover. Retrofitting security into the control system becomes much more difficult during operations. In this paper we describe our experiences in achieving ISO 27001 compliance for NSCL's control system. We illustrate the problems faced in securing low-level controls, infrastructure, and applications. We also provide guidelines to address security and disaster recovery issues upfront, during the development phase.
Poster WEPMU007 [1.304 MB]
 
WEPMU008 Access Safety Systems – New Concepts from the LHC Experience 1066
 
  • T. Ladzinski, Ch. Delamare, S. Di Luca, T. Hakulinen, L. Hammouti, F. Havart, J.-F. Juget, P. Ninin, R. Nunes, T.R. Riesco, E. Sanchez-Corral Mena, F. Valentini
    CERN, Geneva, Switzerland
 
  The LHC Access Safety System has introduced a number of new concepts into the domain of personnel protection at CERN. These can be grouped into several categories: organisational, architectural and those concerning the end-user experience. By anchoring the project on the solid foundations of the IEC 61508/61511 methodology, the CERN team and its contractors managed to design, develop, test and commission a SIL3 safety system on time. The system uses a successful combination of the latest Siemens redundant safety programmable logic controllers with a traditional hardwired relay-logic loop. The external envelope barriers used in the LHC include personnel and material access devices, which are interlocked door-booths introducing increased automation of individual access control, thus removing the strain from the operators. These devices ensure the inviolability of the controlled zones by users not holding the required credentials. To this end they are equipped with personnel presence detectors, and the access control includes a state-of-the-art biometric check. Building on the LHC experience, new projects targeting the refurbishment of the existing access safety infrastructure in the injector chain have started. This paper summarises the new concepts introduced in the LHC access control and safety systems, discusses the experience gained and outlines the main guiding principles for renewing the personnel protection systems of the LHC injector chain in a homogeneous manner.
Poster WEPMU008 [1.039 MB]
 
WEPMU009 The Laser MégaJoule Facility: Personnel Security and Safety Interlocks 1070
 
  • J.-C. Chapuis, J.P.A. Arnoul, A. Hurst, M.G. Manson
    CEA, Le Barp, France
 
  The French CEA (Commissariat à l'Énergie Atomique) is currently building the LMJ (Laser MégaJoule) at the CEA CESTA laboratory near Bordeaux. The LMJ is designed to deliver about 1.4 MJ of 0.35 μm light to targets for high-energy-density physics experiments. Such an installation entails specific risks related to the presence of intense laser beams and high-voltage laser power amplifiers. Furthermore, the thermonuclear fusion reactions induced by the experiments produce various types of radiation and neutron bursts, and also activate some materials in the chamber environment. Both kinds of risk could be lethal. This paper discusses the SSP (personnel safety system) that was designed to prevent accidents and protect personnel working in the LMJ. To achieve the safety level imposed by labor law and by the French Safety Authority, the system consists of two independent safety barriers based on different technologies, whose combined effect can reduce the occurrence probability of all accident scenarios identified during the risk analysis to an insignificant level.
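  The two-barrier argument can be made quantitative in a generic way (illustrative symbols and figures, not taken from the paper): if an accident scenario places a demand on the protection with frequency \(\lambda_d\) and the two independent barriers fail on demand with probabilities \(\mathrm{PFD}_1\) and \(\mathrm{PFD}_2\), the residual accident frequency is

\[
  \lambda_\text{acc} \approx \lambda_d\,\mathrm{PFD}_1\,\mathrm{PFD}_2,
\]

so that, for example, \(\lambda_d = 10^{-1}/\text{yr}\) and \(\mathrm{PFD}_1 = \mathrm{PFD}_2 = 10^{-3}\) give \(\lambda_\text{acc}\approx 10^{-7}/\text{yr}\).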
 
WEPMU010 Automatic Analysis at the Commissioning of the LHC Superconducting Electrical Circuits 1073
 
  • H. Reymond, O.O. Andreassen, C. Charrondière, A. Rijllart, M. Zerlauth
    CERN, Geneva, Switzerland
 
  Since the beginning of 2010 the LHC has been operating in a routine manner, starting with a commissioning phase followed by an operation-for-physics phase. The commissioning of the superconducting electrical circuits requires rigorous test procedures before entering operation. To maximize the beam operation time of the LHC, these tests should be done as fast as the procedures allow. A full commissioning needs 12000 tests and is required after circuits have been warmed above liquid nitrogen temperature. Below this temperature, after an end-of-year break of two months, commissioning needs about 6000 tests. Because the manual analysis of the tests takes a major part of the commissioning time, we have automated the existing analysis tools. We present the way in which these LabVIEW™ applications were automated. We evaluate the gain in commissioning time and the reduction in the number of experts needed on night shift observed during the LHC hardware commissioning campaign of 2011 compared to 2010. We end with an outlook on what can be further optimized.
Poster WEPMU010 [3.124 MB]
 
WEPMU011 Automatic Injection Quality Checks for the LHC 1077
 
  • L.N. Drosdal, B. Goddard, R. Gorbonosov, S. Jackson, D. Jacquet, V. Kain, D. Khasbulatov, M. Misiowiec, J. Wenninger, C. Zamantzas
    CERN, Geneva, Switzerland
 
  Twelve injections per beam are required to fill the LHC with the nominal filling scheme. The injected beam needs to fulfill a number of requirements to provide useful physics for the experiments when they take data at collisions later in the LHC cycle. These requirements are checked by a dedicated software system, the LHC injection quality check. At each injection, this system receives data about beam characteristics from key equipment in the LHC and analyzes it online to determine the quality of the injected beam. If the quality is insufficient, the automatic injection process is stopped and the operator has to take corrective measures. This paper will describe the software architecture of the LHC injection quality check and its interplay with other systems. A set of tools for self-monitoring of the injection quality checks to achieve optimum performance will be discussed as well. Results obtained during the LHC commissioning year 2010 and the LHC run 2011 will finally be presented.
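  A minimal sketch of the check-and-stop logic described above is given below; the check names, thresholds and measurement keys are hypothetical placeholders, not the actual injection quality check modules.

from dataclasses import dataclass

@dataclass
class CheckResult:
    name: str
    passed: bool

def check_injection_oscillations(m):
    # hypothetical tolerance on the maximum trajectory offset seen at injection
    return CheckResult("injection_oscillations", abs(m.get("max_bpm_offset_mm", 0.0)) < 2.0)

def check_injection_losses(m):
    # hypothetical tolerance on beam-loss signals around the injection region
    return CheckResult("injection_losses", m.get("max_blm_signal", 0.0) < 1e-3)

CHECKS = [check_injection_oscillations, check_injection_losses]

def assess_injection(measurements):
    """Run all checks; return True only if the automatic injection process may continue."""
    results = [check(measurements) for check in CHECKS]
    for r in results:
        print("%s: %s" % (r.name, "OK" if r.passed else "FAILED"))
    return all(r.passed for r in results)

if __name__ == "__main__":
    if not assess_injection({"max_bpm_offset_mm": 3.1, "max_blm_signal": 2e-4}):
        print("Injection quality insufficient: stopping automatic injection, operator action required")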
Poster WEPMU011 [0.358 MB]
 
WEPMU012 First Experiences of Beam Presence Detection Based on Dedicated Beam Position Monitors 1081
 
  • A. Jalal, S. Gabourin, M. Gasior, B. Todd
    CERN, Geneva, Switzerland
 
  High-intensity particle beam injection into the LHC is only permitted when a low-intensity pilot beam is already circulating in the LHC. This requirement addresses some of the risks associated with high-intensity injection, and is enforced by a so-called Beam Presence Flag (BPF) system which is part of the interlock chain between the LHC and its injector complex. For the 2010 LHC run, the detection of the presence of this pilot beam was implemented using the LHC Fast Beam Current Transformer (FBCT) system. However, the primary function of the FBCTs, that is, the reliable measurement of beam currents, did not allow the BPF system to satisfy all quality requirements of the LHC Machine Protection System (MPS). Safety requirements associated with high-intensity injections triggered the development of a dedicated system based on Beam Position Monitors (BPM). This system was meant to work first in parallel with the FBCT BPF system and eventually replace it. At the end of 2010 and in 2011, this new BPF implementation based on BPMs was designed, built, tested and deployed. This paper reviews both the FBCT and the BPM implementation of the BPF system, outlining the changes during the transition period. The paper briefly describes the testing methods, focuses on the results obtained from the tests performed at the end of the 2010 LHC run and shows the changes made for the deployment of the BPM BPF system in the LHC in 2011.
 
WEPMU013 Development of a Machine Protection System for the Superconducting Beam Test Facility at FERMILAB 1084
 
  • L.R. Carmichael, M.D. Church, R. Neswold, A. Warner
    Fermilab, Batavia, USA
 
  Funding: Operated by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the United States Department of Energy.
Fermilab's Superconducting RF Beam Test Facility, currently under construction, will produce electron beams capable of damaging the acceleration structures and the beam line vacuum chambers in the event of an aberrant accelerator pulse. The accelerator is being designed with the capability to operate with up to 3000 bunches per macro-pulse, a 5 Hz repetition rate and 1.5 GeV beam energy. It will be able to sustain an average beam power of 72 kW at a bunch charge of 3.2 nC. Operation at full intensity will deposit enough energy in the niobium material to approach its melting point of about 2500 °C. In the early phase, with only 3 cryomodules installed, the facility will be capable of generating electron beam energies of 810 MeV and an average beam power that approaches 40 kW. In either case a robust Machine Protection System (MPS) is required to mitigate the effects of such a large damage potential. This paper will describe the MPS being developed, the system requirements and the controls issues under consideration.
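The quoted beam powers follow directly from the stated parameters:

\[
  P_\text{avg} = E_\text{beam}\, q_\text{bunch}\, N_\text{bunch}\, f_\text{rep}
  = 1.5\,\text{GeV}\times 3.2\,\text{nC}\times 3000\times 5\,\text{Hz} = 72\,\text{kW},
\]

and, with three cryomodules, \(0.81\,\text{GeV}\times 3.2\,\text{nC}\times 3000\times 5\,\text{Hz}\approx 39\,\text{kW}\).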
 
Poster WEPMU013 [0.755 MB]
 
WEPMU015 The Machine Protection System for the R&D Energy Recovery LINAC 1087
 
  • Z. Altinbas, J.P. Jamilkowski, D. Kayran, R.C. Lee, B. Oerter
    BNL, Upton, Long Island, New York, USA
 
  Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy.
The Machine Protection System (MPS) is a device-safety system designed to prevent damage to hardware by generating interlocks based upon the state of input signals from selected sub-systems. It protects all the key machinery of the R&D Energy Recovery LINAC (ERL) project against the high beam current. The MPS is capable of responding to a fault with an interlock signal within several microseconds. The ERL MPS is based on a National Instruments CompactRIO platform and is programmed using National Instruments' graphical programming environment. The system also transfers data (interlock status, time of fault, etc.) to the main server. The transferred data is integrated into the pre-existing software architecture, which is accessible to the operators. This paper will provide an overview of the hardware used, its configuration and operation, as well as the software written on both the device and the server side.
 
Poster WEPMU015 [17.019 MB]
 
WEPMU016 Pre-Operation, During Operation and Post-Operational Verification of Protection Systems 1090
 
  • I. Romera, M. Audrain
    CERN, Geneva, Switzerland
 
  This paper will provide an overview of the software checks performed on the Beam Interlock System to ensure that the system is functioning to specification. Critical protection functions are implemented in hardware; at the same time, software tools play an important role in guaranteeing the correct configuration and operation of the system during all phases of operation. This paper will describe the tests carried out before, during and after operation; if the integrity of the protection system is not assured, subsequent injections of beam into the LHC are inhibited.
 
WEPMU017 Safety Control System and its Interface to EPICS for the Off-Line Front-End of the SPES Project 1093
 
  • J.A. Vásquez, A. Andrighetto, G. Bassato, L. Costa, M.G. Giacchini
    INFN/LNL, Legnaro (PD), Italy
  • M. Bertocco
    UNIPD, Padova (PD), Italy
 
  The SPES off-line front-end apparatus involves a number of subsystems and procedures that are potentially dangerous both for human operators and for the equipment. The high-voltage power supply, the ion source complex power supplies, the target chamber handling systems and the laser source are some examples of these subsystems. For that reason, a safety control system has been developed. It is based on safety modules of the Schneider Electric Preventa family, which control the power supply of critical subsystems in combination with safety detectors that monitor critical variables. A Programmable Logic Controller (PLC), model BMXP342020 from the Schneider Electric Modicon M340 family, is used for monitoring the status of the system as well as controlling the sequence of some operations in an automatic way. A touch screen, model XBTGT5330 from the Schneider Electric Magelis family, is used as the Human Machine Interface (HMI) and communicates with the PLC using MODBUS-TCP. Additionally, an interface to the EPICS control network was developed using a home-made MODBUS-TCP EPICS driver, in order to integrate the system into the control system of the front-end as well as to present its status to the users on the main control panel.
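  As an illustration of the kind of MODBUS-TCP polling such a driver performs, the sketch below reads holding registers over a raw TCP socket (function code 0x03). The host name, unit identifier and register addresses are hypothetical placeholders and not the SPES register map; a production driver would add retries, framing checks and EPICS record support.

import socket
import struct

# Minimal MODBUS-TCP "read holding registers" (function code 0x03) over a raw socket.
def read_holding_registers(host, address, count, unit=1, port=502):
    # MBAP header: transaction id, protocol id (0), remaining length (6), unit id,
    # followed by the PDU: function code, start address, register count.
    request = struct.pack(">HHHBBHH", 1, 0, 6, unit, 0x03, address, count)
    with socket.create_connection((host, port), timeout=2.0) as sock:
        sock.sendall(request)
        reply = sock.recv(9 + 2 * count)   # sketch: assumes the reply arrives in one segment
    if len(reply) < 9 or reply[7] != 0x03:
        raise IOError("MODBUS exception or malformed reply: %r" % reply)
    nbytes = reply[8]
    return list(struct.unpack(">%dH" % (nbytes // 2), reply[9:9 + nbytes]))

if __name__ == "__main__":
    # Hypothetical usage: poll a status word and an HV readback from the safety PLC.
    status, hv_raw = read_holding_registers("plc-frontend.example", 0, 2)
    print("status word: 0x%04x, HV readback (raw): %d" % (status, hv_raw))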
Poster WEPMU017 [2.847 MB]
 
WEPMU018 Real-time Protection of the "ITER-like Wall at JET" 1096
 
  • M.B. Jouve, C. Balorin
    Association EURATOM-CEA, St Paul Lez Durance, France
  • G. Arnoux, S. Devaux, D. Kinna, P.D. Thomas, K-D. Zastrow
    CCFE, Abingdon, Oxon, United Kingdom
  • P.J. Carvalho
    IPFN, Lisbon, Portugal
  • J. Veyret
    Sundance France, Matignon, France
 
  During the last JET tokamak shutdown a new ITER-Like Wall was installed, using tungsten and beryllium materials. To ensure plasma-facing component (PFC) integrity, the real-time protection of the wall has been upgraded through the project "Protection for the ITER-like Wall" (PIW). The choice has been made to work with 13 robust analog CCD cameras viewing the main areas of plasma-wall interaction and to use regions of interest (ROI) for monitoring the surface temperature of the PFCs in real time. For each camera, ROIs will be set up pre-pulse and, during plasma operation, surface temperatures from these ROIs will be sent to the real-time processing system for monitoring and, if necessary, preventing damage to the PFCs by modifying the plasma parameters. The video and the associated control system developed for this project are presented in this paper. The video is captured using a PLEORA frame grabber and sent over a GigE network to the real-time processing system (RTPS), divided into a 'Real time processing unit' (RTPU), for surface temperature calculation, and the 'RTPU Host', for the connection between the RTPU and other systems. The RTPU design is based on commercial Xilinx Virtex5 FPGA boards, with one board per camera and two boards per host. Programmed under Simulink using the System Generator blockset, the field-programmable gate array (FPGA) can manage simultaneously up to 96 ROIs defined pixel by pixel.
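  A simplified sketch of the ROI reduction step is shown below: each ROI is an arbitrary pixel mask, its surface temperature is reduced to a peak value and compared against a protection threshold. The camera geometry, calibration and thresholds are hypothetical placeholders (the real system performs this per camera in FPGA firmware).

import numpy as np

# Illustrative ROI reduction: each ROI is a boolean pixel mask; its surface
# temperature is reduced to a peak value and compared to a limit.
def roi_peak_temperatures(frame_celsius, roi_masks):
    """Return the peak temperature inside each pixel-defined ROI."""
    return {name: float(frame_celsius[mask].max()) for name, mask in roi_masks.items()}

def over_limit(peaks, limits):
    """Return the list of ROIs whose surface temperature exceeds its limit."""
    return [name for name, t in peaks.items() if t > limits.get(name, np.inf)]

if __name__ == "__main__":
    frame = 200.0 + 50.0 * np.random.rand(480, 640)            # fake calibrated IR frame (deg C)
    tile_a = np.zeros((480, 640), dtype=bool); tile_a[300:340, 100:200] = True
    tile_b = np.zeros((480, 640), dtype=bool); tile_b[100:140, 400:500] = True
    peaks = roi_peak_temperatures(frame, {"tile_a": tile_a, "tile_b": tile_b})
    print(peaks, "alarms:", over_limit(peaks, {"tile_a": 1200.0, "tile_b": 230.0}))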
Poster WEPMU018 [2.450 MB]
 
WEPMU019 First Operational Experience with the LHC Beam Dump Trigger Synchronisation Unit 1100
 
  • A. Antoine, C. Boucly, P. Juteau, N. Magnin, N. Voumard
    CERN, Geneva, Switzerland
 
  Two LHC Beam Dumping Systems (LBDS) remove the counter-rotating beams safely from the collider during setting up of the accelerator, at the end of a physics run and in case of emergencies. Dump requests can come from three different sources: the machine protection system in emergency cases, the machine timing system for scheduled dumps, or the LBDS itself in case of internal failures. These dump requests are synchronised with the 3 μs beam abort gap in a fail-safe, redundant Trigger Synchronisation Unit (TSU) based on Digital Phase-Locked Loops (DPLL), locked onto the LHC beam revolution frequency with a maximum phase error of 40 ns. The synchronised trigger pulses coming out of the TSU are then distributed to the high-voltage generators of the beam dump kickers through a redundant, fault-tolerant trigger distribution system. This paper describes the operational experience gained with the TSU since its commissioning with beam in 2009, and highlights the improvements which have been implemented for safer operation. These include an increase in diagnostics and monitoring functionality, a more automated validation of the hardware and embedded firmware before deployment, and the execution of a post-operational analysis of the TSU performance after each dump action. In the light of this first experience, the outcome of the external review performed in 2010 is presented. The lessons learnt on the project life-cycle for the design of mission-critical electronic modules are discussed.
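  For scale (numbers based on the nominal LHC circumference, not taken from the paper): the revolution period is

\[
  T_\text{rev} = \frac{C}{c} \approx \frac{26\,659\ \text{m}}{3\times 10^{8}\ \text{m/s}} \approx 89\ \mu\text{s},
\]

so a 40 ns phase error corresponds to about \(4.5\times10^{-4}\) of a turn, i.e. the DPLLs keep the dump trigger aligned with the 3 μs abort gap to well within its length.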
Poster WEPMU019 [1.220 MB]
 
WEPMU020 LHC Collimator Controls for a Safe LHC Operation 1104
 
  • S. Redaelli, R.W. Assmann, M. Donzé, R. Losito, A. Masi
    CERN, Geneva, Switzerland
 
  The beam stored energy at the Large Hadron Collider (LHC) will be up to 360 MJ, to be compared with the quench limit of the superconducting magnets of a few mJ per cm³ and with the damage limit of metal of a few hundred kJ. The LHC collimation system is designed to protect the machine against beam losses and consists of 108 collimators, 100 of which are movable, located along the 27 km long ring and in the transfer lines. Each collimator has two jaws controlled by four stepping motors to precisely adjust the collimator position and angle with respect to the beam. Stepping motors have been used to ensure high position reproducibility. LVDTs and resolvers have been installed to monitor the jaw positions and the collimator gaps in real time at 100 Hz. The cleaning performance and the machine protection role of the system depend critically on accurate jaw positioning. A fully redundant survey system has been developed to ensure that the collimators dynamically follow the optimum settings in all phases of the LHC operational cycle. Jaw positions and collimator gaps are interlocked against dump limits defined redundantly as functions of time, of the beam energy and of the beta* function that describes the focusing properties of the beams. In this paper, the architectural choices that guarantee a safe LHC operation are presented. The hardware and software implementations that ensure the required reliability are described. The operational experience accumulated so far is reviewed, and a detailed failure analysis that shows the fulfillment of the machine protection specifications is presented.
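  The interlock principle can be illustrated with a short sketch: the measured gaps must stay inside limits computed as functions of beam energy and beta*, and the check is performed redundantly on both the LVDT- and resolver-derived readings. The limit function and all numbers below are hypothetical placeholders, not the LHC collimation settings.

# Hypothetical gap-interlock check: the measured collimator gap must stay inside
# limits computed from the beam energy and beta*.
def gap_limits_mm(beam_energy_gev, beta_star_m):
    """Return the (min, max) gap allowed for the current machine state."""
    sigma_mm = 1.0 * (450.0 / beam_energy_gev) ** 0.5 * (beta_star_m / 3.5) ** 0.5
    return 5.0 * sigma_mm, 12.0 * sigma_mm          # e.g. a 5-12 sigma window

def gap_within_limits(measured_gap_mm, beam_energy_gev, beta_star_m):
    lo, hi = gap_limits_mm(beam_energy_gev, beta_star_m)
    return lo <= measured_gap_mm <= hi

if __name__ == "__main__":
    # Redundant check on both position measurements; any violation requests a dump.
    gap_lvdt, gap_resolver = 2.10, 2.12             # mm, fake 100 Hz readings
    ok = all(gap_within_limits(g, 3500.0, 1.5) for g in (gap_lvdt, gap_resolver))
    print("gaps OK" if ok else "dump request")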
 
WEPMU022 Quality-Safety Management and Protective Systems for SPES 1108
 
  • S. Canella, D. Benini
    INFN/LNL, Legnaro (PD), Italy
 
  SPES (Selective Production of Exotic Species) is an INFN project to produce Radioactive Ion Beams (RIB) at the Laboratori Nazionali di Legnaro (LNL). The RIB will be produced using proton-induced fission on a direct UCx target. In SPES the proton driver will be a cyclotron with variable energy (15-70 MeV) and a maximum current of 0.750 mA on two exit ports. The SPES Access Control System and the Dose Monitoring will be integrated into the facility Protective System to achieve the necessary high degree of safety and reliability and to prevent dangerous situations for people, the environment and the facility itself. A Quality and Safety Management System for SPES (QSMS) will be realized at LNL for managing all the phases of the project (from design to decommissioning), including the commissioning and operation of the cyclotron. The Protective System, its documents, data and procedures will be among the first items considered for the implementation of the QSMS of SPES. Here, a general overview of the SPES Radiation Protection System, its planned architecture, data and procedures, together with their integration into the QSMS, is presented.
Poster WEPMU022 [1.092 MB]
 
WEPMU023 External Post-Operational Checks for the LHC Beam Dumping System 1111
 
  • N. Magnin, V. Baggiolini, E. Carlier, B. Goddard, R. Gorbonosov, D. Khasbulatov, J.A. Uythoven, M. Zerlauth
    CERN, Geneva, Switzerland
 
  The LHC Beam Dumping System (LBDS) is a critical part of the LHC machine protection system. After every LHC beam dump action, the various signals and transient data recordings of the beam dumping control systems and beam instrumentation measurements are automatically analysed by the eXternal Post-Operational Checks (XPOC) system to verify the correct execution of the dump action and the integrity of the related equipment. This software system complements the LHC machine protection hardware and has to ascertain that the beam dumping system is ‘as good as new’ before the start of the next operational cycle. This is the only way by which the stringent reliability requirements can be met. The XPOC system has been developed within the framework of the LHC “Post-Mortem” system, allowing highly dependable data acquisition, data archiving, live analysis of acquired data and replay of previously recorded events. It is composed of various analysis modules, each one dedicated to the analysis of measurements coming from specific equipment. This paper describes the global architecture of the XPOC system and gives examples of the analyses performed by some of the most important analysis modules. It explains how the XPOC is integrated into the LHC control infrastructure and into the decision chain that allows beam operation to proceed. Finally, it discusses the operational experience with the XPOC system acquired during the first years of LHC operation, and illustrates examples of internal system faults or abnormal beam dump executions which it has detected.
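  The modular structure lends itself to a simple plug-in pattern, sketched below; the module names and the dump_data dictionary layout are hypothetical placeholders, not the actual XPOC modules.

from abc import ABC, abstractmethod

# Plug-in style sketch of per-equipment analysis modules.
class AnalysisModule(ABC):
    name = "base"

    @abstractmethod
    def analyse(self, dump_data):
        """Return True if this equipment behaved as expected during the dump."""

class KickerWaveformModule(AnalysisModule):
    name = "kicker_waveforms"
    def analyse(self, dump_data):
        waveform = dump_data.get("kicker_current", [])
        return bool(waveform) and max(waveform) >= 0.95 * dump_data.get("kicker_nominal", 1.0)

class BeamLossModule(AnalysisModule):
    name = "beam_losses"
    def analyse(self, dump_data):
        return dump_data.get("max_loss", 0.0) < dump_data.get("loss_threshold", 1.0)

def run_xpoc(dump_data, modules):
    verdicts = {m.name: m.analyse(dump_data) for m in modules}
    print(verdicts)
    return all(verdicts.values())    # only then may beam operation proceed

if __name__ == "__main__":
    data = {"kicker_current": [0.2, 0.7, 0.99], "kicker_nominal": 1.0,
            "max_loss": 0.1, "loss_threshold": 1.0}
    print("XPOC OK" if run_xpoc(data, [KickerWaveformModule(), BeamLossModule()]) else "XPOC FAILED")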
Poster WEPMU023 [1.768 MB]
 
WEPMU024 The Radiation Monitoring System for the LHCb Inner Tracker 1115
 
  • O. Okhrimenko, V. Iakovenko, V.M. Pugatch
    NASU/INR, Kiev, Ukraine
  • F. Alessio, G. Corti
    CERN, Geneva, Switzerland
 
  The performance of the LHCb Radiation Monitoring System (RMS) [1], designed to monitor the radiation load on the Inner Tracker [2] silicon micro-strip detectors, is presented. The RMS comprises Metal Foil Detectors (MFD) read out by sensitive Charge Integrators [3]. The MFD is a radiation-hard detector operating at high charged-particle fluxes. The RMS is used to monitor the radiation load as well as the relative luminosity of the LHCb experiment. The results obtained by the RMS during LHC operation in 2010-2011 are compared to the Monte-Carlo simulation.
[1] V. Pugatch et al., Ukr. J. Phys 54(4), 418 (2009).
[2] LHCb Collaboration, JINST S08005 (2008).
[3] V. Pugatch et al., LHCb Note 2007-062.
 
Poster WEPMU024 [3.870 MB]
 
WEPMU025 Equipment and Machine Protection Systems for the FERMI@Elettra FEL facility 1119
 
  • F. Giacuzzo, L. Battistello, L. Fröhlich, G. Gaio, M. Lonza, G. Scalamera, G. Strangolino, D. Vittor
    ELETTRA, Basovizza, Italy
 
  Funding: The work was supported in part by the Italian Ministry of University and Research under grants FIRB-RBAP045JF2 and FIRB-RBAP06AWK3
FERMI@Elettra is a Free Electron Laser (FEL) based on a 1.5 GeV linac presently under commissioning in Trieste, Italy. Three PLC-based systems communicating with each other ensure the protection of machine devices and equipment. The first is the interlock system for the linac radiofrequency plants; the second is dedicated to the protection of vacuum devices and magnets; the third is in charge of protecting various machine components from radiation damage. They all make use of a distributed architecture based on fieldbus technology and communicate with the control system via Ethernet interfaces and dedicated Tango device servers. A complete set of tools, including graphical panels and logging and archiving systems, is used to monitor the systems from the control room.
 
Poster WEPMU025 [0.506 MB]
 
WEPMU026 Protecting Detectors in ALICE 1122
 
  • M. Lechman, A. Augustinus, P.Ch. Chochula, G. De Cataldo, A. Di Mauro, L.S. Jirdén, A.N. Kurepin, P. Rosinský, H. Schindler
    CERN, Geneva, Switzerland
  • A. Moreno
    Universidad Politécnica de Madrid, E.T.S.I Industriales, Madrid, Spain
  • O. Pinazza
    INFN-Bologna, Bologna, Italy
 
  ALICE is one of the large LHC experiments at CERN in Geneva. It is composed of many sophisticated and complex detectors mounted very compactly around the beam pipe. Each detector is a unique masterpiece of design, engineering and construction, and any damage to it could stop the experiment for months or even years. It is therefore essential that the detectors are protected from any danger, and this is one very important role of the Detector Control System (DCS). One of the main dangers for the detectors is the particle beam itself. Since the detectors are designed to be extremely sensitive to particles, they are also vulnerable to any excess in the beam conditions provided by the LHC accelerator. The beam protection consists of a combination of hardware interlocks and control software, and this paper will describe how this is implemented and handled in ALICE. Tools have also been developed to support operators and shift leaders in decision making related to beam safety. The experience gained and the conclusions drawn from the individual safety projects are also presented.
Poster WEPMU026 [1.561 MB]
 
WEPMU028 Development Status of Personnel Protection System for IFMIF/EVEDA Accelerator Prototype 1126
 
  • T. Kojima, T. Narita, K. Nishiyama, H. Sakaki, H. Takahashi, K. Tsutsumi
    Japan Atomic Energy Agency (JAEA), International Fusion Energy Research Center (IFERC), Rokkasho, Kamikita, Aomori, Japan
 
  The Control System for the IFMIF/EVEDA* accelerator prototype consists of six subsystems: the Central Control System (CCS), Local Area Network (LAN), Personnel Protection System (PPS), Machine Protection System (MPS), Timing System (TS) and Local Control System (LCS). The IFMIF/EVEDA accelerator prototype provides a deuteron beam with a power greater than 1 MW, which is the same as that of J-PARC and SNS. The PPS is required to protect technical and engineering staff against unnecessary exposure, electrical shock hazards and other dangerous phenomena. The PPS has two functions: building management and accelerator management. For both, Programmable Logic Controllers (PLCs), monitoring cameras, limit switches, etc. are used for the interlock system, and a sequence is programmed for entering and leaving the controlled area. This article presents the PPS design and its interfaces with each accelerator subsystem in detail.
* International Fusion Material Irradiation Facility / Engineering Validation and Engineering Design Activity
 
Poster WEPMU028 [1.164 MB]
 
WEPMU029 Assessment And Testing of Industrial Devices Robustness Against Cyber Security Attacks 1130
 
  • F.M. Tilaro, B. Copy
    CERN, Geneva, Switzerland
 
  CERN (the European Organization for Nuclear Research), like any organization, needs to achieve the conflicting objectives of connecting its operational network to the Internet while at the same time keeping its industrial control systems secure from external and internal cyber attacks. With this in mind, the ISA-99 [1] international cyber security standard has been adopted at CERN as a reference model to define a set of guidelines and security robustness criteria applicable to any network device. Device robustness represents a key link in the defense-in-depth concept, as some attacks will inevitably penetrate security boundaries and thus require further protection measures. When assessing the cyber security robustness of devices, we have singled out control-system-relevant attack patterns derived from the well-known CAPEC [2] classification. Once a vulnerability is identified, it needs to be documented, prioritized and reproduced at will in a dedicated test environment for debugging purposes. CERN, in collaboration with SIEMENS, has designed and implemented a dedicated working environment, the Test-bench for Robustness of Industrial Equipments [3] (“TRoIE”). Such tests attempt to detect possible anomalies by exploiting corrupt communication channels and manipulating the normal behavior of the communication protocols, in the same way as a cyber attacker would proceed. This document provides an inventory of security guidelines [4] relevant to the CERN industrial environment and describes how we have automated the collection and classification of identified vulnerabilities into a test-bench.
[1] http://www.isa.org
[2] http://capec.mitre.org
[3] F. Tilaro, "Test-bench for Robustness…", CERN, 2009
[4] B. Copy, F. Tilaro, "Standards based measurable security for embedded devices", ICALEPCS 2009
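  One simple robustness-test pattern of this kind is sketched below: flood a device port with malformed frames, then verify that the device still answers a well-formed request. The target address, port and reference request are hypothetical placeholders; the actual TRoIE test-bench drives the relevant industrial protocols with CAPEC-derived attack patterns and documents every anomaly so that it can be reproduced at will.

import random
import socket

def still_responsive(host, port, good_request, timeout=2.0):
    """Send a known-good request and check that the device still answers."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.sendall(good_request)
            return len(s.recv(1024)) > 0
    except OSError:
        return False

def fuzz_port(host, port, good_request, iterations=100):
    rng = random.Random(0)   # fixed seed so any anomaly can be reproduced for debugging
    for _ in range(iterations):
        frame = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 512)))
        try:
            with socket.create_connection((host, port), timeout=1.0) as s:
                s.sendall(frame)
        except OSError:
            pass             # refusals or odd behaviour would be logged as findings
    return still_responsive(host, port, good_request)

if __name__ == "__main__":
    # Example well-formed frame: a MODBUS-TCP "read one holding register" request.
    good = b"\x00\x01\x00\x00\x00\x06\x01\x03\x00\x00\x00\x01"
    ok = fuzz_port("device-under-test.example", 502, good)
    print("device survived fuzzing" if ok else "device unresponsive: potential vulnerability")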
 
Poster WEPMU029 [3.152 MB]
 
WEPMU030 CERN Safety System Monitoring - SSM 1134
 
  • T. Hakulinen, P. Ninin, F. Valentini
    CERN, Geneva, Switzerland
  • J. Gonzalez, C. Salatko-Petryszcze
    ASsystem, St Genis Pouilly, France
 
  CERN SSM (Safety System Monitoring) is a system for monitoring the state of health of the various access and safety systems of the CERN site and accelerator infrastructure. The emphasis of SSM is on the needs of maintenance and system operation, with the aim of providing an independent and reliable verification path for the basic operational parameters of each system. Included are all network-connected devices, such as PLCs, servers, panel displays, operator posts, etc. The basic monitoring engine of SSM is the freely available system monitoring framework Zabbix, on top of which a simplified traffic-light-type web interface has been built. The web interface of SSM is designed to be ultra-light to facilitate access from handheld devices over slow connections. The underlying Zabbix system offers the history and notification mechanisms typical of advanced monitoring systems.
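  A minimal sketch of the kind of query a traffic-light front-end can issue against the Zabbix JSON-RPC API is given below. The URL, credentials and severity mapping are placeholders, and parameter names may differ between Zabbix versions; the real SSM interface is built on top of Zabbix itself rather than an external script.

import json
import urllib.request

API = "http://ssm.example.cern.ch/zabbix/api_jsonrpc.php"   # placeholder endpoint

def zabbix_call(method, params, auth=None):
    payload = {"jsonrpc": "2.0", "method": method, "params": params, "id": 1, "auth": auth}
    req = urllib.request.Request(API, data=json.dumps(payload).encode(),
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        return json.load(resp)["result"]

def traffic_light():
    token = zabbix_call("user.login", {"user": "ssm_readonly", "password": "secret"})
    problems = zabbix_call("trigger.get",
                           {"filter": {"value": 1}, "output": ["description", "priority"]},
                           auth=token)
    if any(int(t["priority"]) >= 4 for t in problems):
        return "red"                     # a high-severity trigger is active
    return "amber" if problems else "green"

if __name__ == "__main__":
    print("SSM overall state:", traffic_light())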
Poster WEPMU030 [1.231 MB]
 
WEPMU031 Virtualization in Control System Environment 1138
 
  • L.R. Shen, D.K. Liu, T. Wan
    SINAP, Shanghai, People's Republic of China
 
  In a large-scale distributed control system, many common services make up the environment of the entire control system, such as servers for the common software base library, application servers, archive servers and so on. This paper describes a virtualization realization for a control system environment, covering the virtualization of servers, storage, the network system and applications of the control system. With a virtualization instance of the EPICS-based control system environment built with VMware vSphere v4, we tested the full functionality of this virtualized environment in the SSRF control system, including the common servers for NFS, NIS, NTP, boot services and the EPICS base and extension library tools. We also virtualized application servers such as the archive server, the alarm server, the EPICS gateway and all of the network-based IOCs. In particular, we successfully tested high availability (HA) and VMotion for EPICS asynchronous IOCs under the different VLAN configurations of the current SSRF control system network.
 
WEPMU033 Monitoring Control Applications at CERN 1141
 
  • F. Varela, F.B. Bernard, M. Gonzalez-Berges, H. Milcent, L.B. Petrova
    CERN, Geneva, Switzerland
 
  The Industrial Controls and Engineering (EN-ICE) group of the Engineering Department at CERN has produced, and is responsible for the operation of, around 60 applications which control critical processes in the domains of cryogenics, quench protection systems, power interlocks for the Large Hadron Collider and other sub-systems of the accelerator complex. These applications require 24/7 operation and a quick reaction to problems. For this reason EN-ICE is presently developing a monitoring tool to detect, anticipate and report possible anomalies in the integrity of the applications. The tool builds on top of the Simatic WinCC Open Architecture (formerly PVSS) SCADA and makes use of the Joint COntrols Project (JCOP) and UNICOS frameworks developed at CERN. The tool provides centralized monitoring of the different elements that make up the control systems, such as Windows and Linux servers, PLCs, applications, etc. Although the primary aim of the tool is to assist the members of the EN-ICE Standby Service, the tool can present different levels of detail of the systems depending on the user, which enables experts to diagnose and troubleshoot problems. In this paper, the scope, functionality and architecture of the tool are presented and some initial results on its performance are summarized.
Poster WEPMU033 [1.719 MB]
 
WEPMU034 Infrastructure of Taiwan Photon Source Control Network 1145
 
  • Y.-T. Chang, J. Chen, Y.-S. Cheng, K.T. Hsu, S.Y. Hsu, K.H. Hu, C.H. Kuo, C.Y. Wu
    NSRRC, Hsinchu, Taiwan
 
  A reliable, flexible and secure network is essential for the Taiwan Photon Source (TPS) control system, which is based upon the EPICS toolkit framework. Subsystem subnets will connect to the control system via EPICS-based CA gateways to forward data and reduce network traffic. By combining cyber security technologies such as firewalls, NAT and VLANs, the control network is isolated to protect the IOCs and accelerator components. Network management tools are used to improve network performance. A remote access mechanism will be constructed for maintenance and troubleshooting. Ethernet is also used as a fieldbus for instruments such as power supplies. This paper will describe the system architecture of the TPS control network. Cabling topology, redundancy and maintainability are also discussed.
 
WEPMU035 Distributed Monitoring System Based on ICINGA 1149
 
  • C. Haen, E. Bonaccorsi, N. Neufeld
    CERN, Geneva, Switzerland
 
  The basic services of the large IT infrastructure of the LHCb experiment are monitored with ICINGA, a fork of the industry-standard monitoring software NAGIOS. The infrastructure includes thousands of servers and computers, storage devices, more than 200 network devices and many VLANs, databases, hundreds of diskless nodes and much more. The number of configuration files needed to control the whole installation is large, and there is a lot of duplication when the monitoring infrastructure is distributed over several servers. In order to ease the manipulation of the configuration files, we designed a monitoring schema particularly adapted to our network, taking advantage of its specificities, and developed a tool to centralize its configuration in a database. Thanks to this tool, we could also parse all our previous configuration files and thus fill in our Oracle database, which replaces the previous Active Directory based solution. A web frontend allows non-expert users to easily add new entities to monitor. We present the schema of our monitoring infrastructure and the tool used to manage and automatically generate the configuration for ICINGA.
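  The idea of generating ICINGA configuration from a database can be sketched in a few lines: host entries stored in a table are rendered into NAGIOS/ICINGA-style object definitions. The schema and template names below are hypothetical (the actual tool uses an Oracle database and a schema adapted to the LHCb online network).

import sqlite3

HOST_TEMPLATE = """define host {{
    use        {template}
    host_name  {name}
    address    {address}
    hostgroups {hostgroups}
}}
"""

def generate_hosts_cfg(db):
    rows = db.execute("SELECT name, address, template, hostgroups FROM monitored_hosts")
    return "\n".join(HOST_TEMPLATE.format(name=n, address=a, template=t, hostgroups=g)
                     for n, a, t, g in rows)

if __name__ == "__main__":
    db = sqlite3.connect(":memory:")     # stand-in for the central database
    db.execute("CREATE TABLE monitored_hosts (name TEXT, address TEXT, template TEXT, hostgroups TEXT)")
    db.execute("INSERT INTO monitored_hosts VALUES ('daq-node-001', '10.128.1.1', 'generic-host', 'daq,diskless')")
    print(generate_hosts_cfg(db))        # in production this would be written to the ICINGA objects directory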
Poster WEPMU035 [0.375 MB]
 
WEPMU036 Efficient Network Monitoring for Large Data Acquisition Systems 1153
 
  • D.O. Savu, B. Martin
    CERN, Geneva, Switzerland
  • A. Al-Shabibi
    Heidelberg University, Heidelberg, Germany
  • S.M. Batraneanu, S.N. Stancu
    UCI, Irvine, California, USA
  • R. Sjoen
    University of Oslo, Oslo, Norway
 
  Though constantly evolving and improving, the available network monitoring solutions have limitations when applied to the infrastructure of a high-speed real-time data acquisition (DAQ) system. DAQ networks are particular computer networks where experts have to pay attention both to individual subsections and to system-wide traffic flows while monitoring the network. The ATLAS network at the Large Hadron Collider (LHC) has more than 200 switches interconnecting 3500 hosts and totaling 8500 high-speed links. The use of heterogeneous tools for monitoring various infrastructure parameters, in order to assure optimal DAQ system performance, proved to be a tedious and time-consuming task for experts. To alleviate this problem we used our networking and DAQ expertise to build a flexible and scalable monitoring system providing an intuitive user interface with the same look and feel irrespective of the data provider that is used. Our system uses custom-developed components for critical performance monitoring and seamlessly integrates complementary data from auxiliary tools, such as NAGIOS, information services or custom databases. A number of techniques (e.g. normalization, aggregation and data caching) were used in order to improve the user interface response time. The end result is a unified monitoring interface, for fast and uniform access to system statistics, which significantly reduced the time spent by experts on ad-hoc and post-mortem analysis.
Poster WEPMU036 [5.945 MB]
 
WEPMU037 Virtualization for the LHCb Experiment 1157
 
  • E. Bonaccorsi, L. Brarda, M. Chebbi, N. Neufeld
    CERN, Geneva, Switzerland
  • F. Sborzacchi
    INFN/LNF, Frascati (Roma), Italy
 
  The LHCb experiment, one of the four large particle physics detectors at CERN, counts more than 2000 servers and embedded systems in its Online System. As a result of the ever-increasing CPU performance of modern servers, many of the applications in the controls system are excellent candidates for virtualization technologies. We see virtualization as an approach to cut down cost, optimize resource usage and manage the complexity of the IT infrastructure of LHCb. Recently we have added a Kernel-based Virtual Machine (KVM) cluster based on Red Hat Enterprise Virtualization for Servers (RHEV), complementary to the existing Hyper-V cluster devoted only to the virtualization of the Windows guests. This paper describes the architecture of our solution based on KVM and RHEV, along with its integration with the existing Hyper-V infrastructure and the Quattor cluster management tools, and in particular how we run controls applications on a virtualized infrastructure. We present performance results of both the KVM and Hyper-V solutions, the problems encountered, and a description of the management tools developed for the integration with the Online cluster and the LHCb SCADA control system based on PVSS.
 
WEPMU038 Network Security System and Method for RIBF Control System 1161
 
  • A. Uchiyama
    SHI Accelerator Service Ltd., Tokyo, Japan
  • M. Fujimaki, N. Fukunishi, M. Komiyama, R. Koyama
    RIKEN Nishina Center, Wako, Japan
 
  In the RIKEN RI Beam Factory (RIBF), the local area network for the accelerator control system (the control system network) consists of commercially produced Ethernet switches, optical fibers and metal cables. On the other hand, e-mail and Internet access for tasks unrelated to accelerator operation are provided through the RIKEN virtual LAN (VLAN), used as the office network. From the viewpoint of information security, we decided to separate the control system network from the Internet and operate it independently from the VLAN. However, this was inconvenient for the users, because the information and status of accelerator operation could not be monitored from their offices in real time. To improve this situation, we have constructed a secure system which allows users on the VLAN to obtain accelerator information from the control system network, while preventing outsiders from having access to it. To allow access to the inside of the control system network from the VLAN, we set up a reverse proxy server and a firewall. In addition, we implemented a system that sends e-mail security alerts from the control system network to the VLAN. In this contribution, we report on this system and its present status in detail.
Poster WEPMU038 [45.776 MB]
 
WEPMU039 Virtual IO Controllers at J-PARC MR using Xen 1165
 
  • N. Kamikubota, N. Yamamoto
    J-PARC, KEK & JAEA, Ibaraki-ken, Japan
  • T. Iitsuka, S. Motohashi, M. Takagi, S.Y. Yoshida
    Kanto Information Service (KIS), Accelerator Group, Ibaraki, Japan
  • H. Nemoto
    ACMOS INC., Tokai-mura, Ibaraki, Japan
  • S. Yamada
    KEK, Ibaraki, Japan
 
  The control system for the J-PARC accelerator complex has been developed based on the EPICS toolkit. About 100 traditional ("real") VME-bus computers are used as EPICS IOCs in the control system for the J-PARC MR (Main Ring). Recently, we have introduced "virtual" IOCs using Xen, an open-source virtual machine monitor. Scientific Linux with an EPICS iocCore runs on a Xen virtual machine. EPICS databases for network devices and EPICS soft records can be configured. Multiple virtual IOCs run on a high-performance blade-type server, running Scientific Linux as the native OS. A small number of virtual IOCs have been demonstrated in MR operation since October 2010. Experience and future perspectives will be discussed.
 
WEPMU040 Packaging of Control System Software 1168
 
  • K. Žagar, M. Kobal, N. Saje, A. Žagar
    Cosylab, Ljubljana, Slovenia
  • F. Di Maio, D. Stepanov
    ITER Organization, St. Paul lez Durance, France
  • R. Šabjan
    COBIK, Solkan, Slovenia
 
  Funding: ITER European Union, European Regional Development Fund and Republic of Slovenia, Ministry of Higher Education, Science and Technology
Control system software consists of several parts – the core of the control system, drivers for the integration of devices, configuration for user interfaces, the alarm system, etc. Once the software is developed and configured, it must be installed on the computers where it runs. Usually, it is installed on an operating system whose services it needs, and in some cases it dynamically links against the libraries the operating system provides. The operating system can be quite complex itself – for example, a typical Linux distribution consists of several thousand packages. To manage this complexity, we have decided to rely on the Red Hat Package Manager (RPM) to package control system software, and also to ensure that it is properly installed (i.e., that dependencies are also installed, and that scripts are run after installation if any additional actions need to be performed). As dozens of RPM packages need to be prepared, we are reducing the amount of effort and improving consistency between packages through a Maven-based infrastructure that assists in packaging (e.g., automated generation of RPM SPEC files, including automated identification of dependencies). So far, we have used it to package EPICS, Control System Studio (CSS) and several device drivers. We perform extensive testing on Red Hat Enterprise Linux 5.5, but we have also verified that the packaging works on CentOS and Scientific Linux. In this article, we describe in greater detail the systematic packaging system we are using, and its particular application for the ITER CODAC Core System.
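A minimal sketch of the SPEC-file generation step is shown below; the package names, paths and fields are hypothetical examples, and the actual infrastructure derives them from Maven project descriptors rather than hard-coded arguments.

# Hypothetical metadata; a real generator would also derive dependencies automatically.
SPEC_TEMPLATE = """\
Name:           {name}
Version:        {version}
Release:        1%{{?dist}}
Summary:        {summary}
License:        Open source (placeholder)
{requires}

%description
{summary}

%files
{files}
"""

def make_spec(name, version, summary, requires=(), files=()):
    req_lines = "\n".join("Requires:       %s" % r for r in requires)
    file_lines = "\n".join(files)
    return SPEC_TEMPLATE.format(name=name, version=version, summary=summary,
                                requires=req_lines, files=file_lines)

if __name__ == "__main__":
    print(make_spec("codac-epics-base", "3.14.12", "EPICS base packaged for the CODAC Core System",
                    requires=["readline"], files=["/opt/codac/epics/base"]))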
 
Poster WEPMU040 [0.740 MB]