Author: Zerlauth, M.
Paper | Title | Page
WEPMU006 Architecture for Interlock Systems: Reliability Analysis with Regard to Safety and Availability 1058
 
  • S. Wagner, A. Apollonio, R. Schmidt, M. Zerlauth
    CERN, Geneva, Switzerland
  • A. Vergara-Fernandez
    ITER Organization, St. Paul lez Durance, France
 
  For accelerators (e.g. the LHC) and other large experimental physics facilities (e.g. ITER), machine protection relies on complex interlock systems. In the design of interlock loops, the choice of hardware architecture affects both machine safety and machine availability. While high machine safety is an inherent requirement, the constraints on availability may differ from one facility to another. For the interlock loops protecting the LHC superconducting magnet circuits, reduced machine availability can be tolerated, since shutdowns do not affect the longevity of the equipment. For ITER, on the other hand, high availability is required, since fast shutdowns cause significant magnet aging. A reliability analysis of various interlock loop architectures has been performed. The analysis, based on an analytical model, compares a 1oo3 (one-out-of-three) and a 2oo3 architecture with a single loop. It yields the probabilities of four scenarios: (1) completed mission (e.g. a physics fill in the LHC or a pulse in ITER without a shutdown being triggered), (2) shutdown because of a failure in the interlock loop, (3) emergency shutdown (e.g. after a quench of a magnet) and (4) missed emergency shutdown (a shutdown is required but the interlock loop fails, possibly leading to severe damage of the facility). Scenario 4 relates to machine safety and, together with scenarios 2 and 3, defines the machine availability reflected by scenario 1. This paper presents the results of the analysis of the properties of the different architectures with regard to machine safety and availability.
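 
  The trade-off the abstract describes can be illustrated with a minimal Python sketch of a k-out-of-n voting model. This is purely illustrative, not the authors' analytical model: it assumes independent loops, a split of loop failures into fail-safe (spurious trip) and blind (cannot trip) modes, and a per-mission demand probability, all with hypothetical values.

    from math import comb

    def at_least(k, n, p):
        # Probability that at least k of n independent loops are in a given failed state.
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    def scenario_probs(n, k, p_safe, p_blind, p_demand):
        # k-out-of-n voting: a shutdown is triggered when >= k of the n loops demand it.
        # p_safe   : per-mission probability of a fail-safe loop failure (spurious trip)
        # p_blind  : per-mission probability of a blind loop failure (cannot trip)
        # p_demand : per-mission probability that an emergency shutdown is required
        p_spurious = at_least(k, n, p_safe)          # enough loops trip without cause
        p_blocked = at_least(n - k + 1, n, p_blind)  # too few healthy loops left to trip
        s1 = (1 - p_demand) * (1 - p_spurious)       # (1) completed mission
        s2 = (1 - p_demand) * p_spurious             # (2) shutdown due to loop failure
        s4 = p_demand * p_blocked                    # (4) missed emergency shutdown
        s3 = p_demand - s4                           # (3) executed emergency shutdown
        return s1, s2, s3, s4

    for name, (n, k) in {"single": (1, 1), "1oo3": (3, 1), "2oo3": (3, 2)}.items():
        print(name, scenario_probs(n, k, 1e-3, 1e-4, 0.05))

  With these arbitrary numbers the sketch reproduces the expected qualitative behaviour: 1oo3 gives the lowest probability of a missed emergency shutdown (scenario 4) at the cost of the most failure-triggered shutdowns (scenario 2), while 2oo3 improves both figures relative to the single loop.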
 
WEPMU010 Automatic Analysis at the Commissioning of the LHC Superconducting Electrical Circuits 1073
 
  • H. Reymond, O.O. Andreassen, C. Charrondière, A. Rijllart, M. Zerlauth
    CERN, Geneva, Switzerland
 
  Since the beginning of 2010 the LHC has been operating in a routine manner, starting with a commissioning phase followed by an operation-for-physics phase. The commissioning of the superconducting electrical circuits requires rigorous test procedures before the circuits enter operation. To maximize the beam operation time of the LHC, these tests should be performed as quickly as the procedures allow. A full commissioning campaign requires 12000 tests and is needed after circuits have been warmed above liquid nitrogen temperature. Below this temperature, after an end-of-year break of two months, commissioning requires about 6000 tests. Because the manual analysis of the tests takes a major part of the commissioning time, we automated the existing analysis tools. We present the way in which these LabVIEW™ applications were automated. We evaluate the gain in commissioning time and the reduction in the number of experts needed on night shift observed during the 2011 LHC hardware commissioning campaign compared to 2010. We end with an outlook on what can be further optimized.
Poster WEPMU010 [3.124 MB]
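 
  The automation described replaces a manual pass/fail inspection of each test result with an automatic classification, escalating only failures to an expert. A minimal Python sketch of that workflow follows; the actual tools are LabVIEW™ applications, and the signal names and limits below are hypothetical.

    # Hypothetical acceptance limits; real criteria are circuit-type specific.
    LIMITS = {"residual_voltage_V": 0.1, "earth_current_A": 1e-3}

    def analyse(result):
        # Return the list of violated limits for one test result (empty list = pass).
        return [name for name, limit in LIMITS.items()
                if result.get(name, 0.0) > limit]

    def run_campaign(results):
        # Classify all tests automatically; only failures need an expert's attention.
        needs_expert = {}
        for test_id, result in results.items():
            violations = analyse(result)
            if violations:
                needs_expert[test_id] = violations
        return needs_expert

  With thousands of tests per campaign, the saving comes from experts reviewing only the flagged subset instead of every result.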
 
WEPMU023 External Post-Operational Checks for the LHC Beam Dumping System 1111
 
  • N. Magnin, V. Baggiolini, E. Carlier, B. Goddard, R. Gorbonosov, D. Khasbulatov, J.A. Uythoven, M. Zerlauth
    CERN, Geneva, Switzerland
 
  The LHC Beam Dumping System (LBDS) is a critical part of the LHC machine protection system. After every LHC beam dump action, the various signals and transient data recordings of the beam dumping control systems and beam instrumentation measurements are automatically analysed by the eXternal Post-Operational Checks (XPOC) system to verify the correct execution of the dump action and the integrity of the related equipment. This software system complements the LHC machine protection hardware and has to ascertain that the beam dumping system is ‘as good as new’ before the start of the next operational cycle. This is the only way the stringent reliability requirements can be met. The XPOC system has been developed within the framework of the LHC “Post-Mortem” system, providing highly dependable data acquisition, data archiving, live analysis of acquired data and replay of previously recorded events. It is composed of various analysis modules, each dedicated to the analysis of measurements coming from specific equipment. This paper describes the global architecture of the XPOC system and gives examples of the analyses performed by some of the most important analysis modules. It explains how the XPOC is integrated into the LHC control infrastructure and into the decision chain that allows beam operation to proceed. Finally, it discusses the operational experience with the XPOC system acquired during the first years of LHC operation and illustrates examples of internal system faults and abnormal beam dump executions that it has detected.
Poster WEPMU023 [1.768 MB]
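 
  The per-equipment module structure described in the abstract can be illustrated with a minimal Python sketch. The interface, module names and acceptance check are hypothetical; the real XPOC modules and their criteria are not detailed here.

    from abc import ABC, abstractmethod

    class AnalysisModule(ABC):
        # One module per equipment type (e.g. kicker waveforms, beam instrumentation).
        @abstractmethod
        def analyse(self, dump_data):
            # Return True if the dump execution looks nominal for this equipment.
            ...

    class KickerWaveformModule(AnalysisModule):
        def analyse(self, dump_data):
            # Hypothetical check: measured waveform must stay within 2% of a reference.
            ref = dump_data["kicker_reference"]
            meas = dump_data["kicker_measured"]
            return all(abs(m - r) <= 0.02 * max(abs(r), 1e-9)
                       for m, r in zip(meas, ref))

    def xpoc_verdict(modules, dump_data):
        # Beam operation may only resume if every module accepts the dump.
        failed = [type(m).__name__ for m in modules if not m.analyse(dump_data)]
        return len(failed) == 0, failed

  Requiring every module to accept the dump before operation resumes mirrors the role of the XPOC in the decision chain: a single failed check blocks the next operational cycle until the event has been reviewed.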