Author: Schmidt, R.
Paper Title Page
MOODB103 Results of an Experiment on Hydrodynamic Tunnelling at the SPS HiRadMat High Intensity Proton Facility 37
 
  • R. Schmidt, J. Blanco, F. Burkart, D. Grenier, D. Wollmann
    CERN, Geneva, Switzerland
  • E. Griesmayer
    CIVIDEC Instrumentation, Wien, Austria
  • N.A. Tahir
    GSI, Darmstadt, Germany
 
  To predict the damage from a catastrophic failure of the protection systems of the LHC when operating with beams storing 362 MJ, simulation studies of the impact of an LHC beam on targets were performed. First, the energy deposition of the initial bunches in a target is calculated with FLUKA. The effect of this energy deposition on the target is then calculated with the hydrodynamic code BIG2. The impact of only a few bunches leads to a change of the target density. The calculations are done iteratively in several steps and show that such a beam can tunnel 30-35 m into a target. Validation experiments for these calculations at the LHC are not possible; therefore, experiments were proposed for the CERN Super Proton Synchrotron (SPS), since simulation studies with the tools used for the LHC also predict hydrodynamic tunnelling for SPS beams. An experiment at the SPS HiRadMat facility (High Radiation to Materials) using the 440 GeV beam with 144 bunches was performed in July 2012. In this paper we compare the results of this experiment with our calculations of hydrodynamic tunnelling.  
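  The iterative mechanism described in this abstract can be illustrated with a deliberately simplified toy model: energy deposition lowers the on-axis target density, so subsequent bunches range further into the target. All numbers and the scaling law below are illustrative assumptions for exposition, not results of the FLUKA/BIG2 chain.

```python
def tunnelling_depth(n_steps, range0_m=1.0, density_drop_per_step=0.15):
    """Crude iterative sketch of hydrodynamic tunnelling: the shower range
    is taken to scale inversely with the (relative) on-axis density, and
    each simulation step dilutes the heated material a bit further.
    Parameters are illustrative, not physical values from the paper."""
    rho = 1.0        # relative on-axis target density
    depth = 0.0
    for _ in range(n_steps):
        depth = range0_m / rho                 # bunches reach further as density falls
        rho *= (1.0 - density_drop_per_step)   # energy deposition dilutes the target
    return depth

# A single step gives the static-target range; iterating lets the loss
# front advance far beyond it, qualitatively like the tunnelling effect.
static_range = tunnelling_depth(1)
tunnelled = tunnelling_depth(20)
```

  The point of the sketch is only the feedback loop (deposition changes density, which changes deposition), which is why the real calculation must alternate between FLUKA and BIG2 in several steps.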
 
TUPFI012 HL-LHC: Integrated Luminosity and Availability 1352
 
  • A. Apollonio, M. Jonker, R. Schmidt, B. Todd, S. Wagner, D. Wollmann, M. Zerlauth
    CERN, Geneva, Switzerland
 
  The objective of LHC operation is to optimise the output for particle physics by maximising the integrated luminosity. An important constraint comes from the event pile-up, which should not exceed about 140 events per bunch crossing. With bunches every 25 ns, the luminosity for data taking of the experiments should therefore not exceed 5×10^34 cm^-2s^-1. To optimise the integrated luminosity, it is planned to design HL-LHC for a much higher luminosity than is acceptable for the experiments and to limit the initial luminosity by operating with a larger beam size at the collision points. During the fill, the beam size will be slowly reduced to keep the luminosity constant. The gain from luminosity levelling depends on the average length of the fills. Today, with the LHC operating at 4 TeV, most fills are terminated by equipment failures, resulting in an average fill length of about 5 h. In this paper we discuss the expected integrated luminosity for HL-LHC as a function of fill length and time between fills, depending on the expected MTBF of the LHC systems with HL-LHC parameters. We derive an availability target for HL-LHC and discuss steps to achieve it.  
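  The dependence of integrated luminosity on fill length, MTBF and turnaround time sketched in this abstract can be captured in a minimal availability model. This is a generic sketch under simple assumptions (exponentially distributed failures, perfectly levelled constant luminosity, fixed turnaround), not the model of the paper; all parameter values are illustrative.

```python
import math

def avg_fill_hours(planned_fill_h, mtbf_h):
    """Expected fill length E[min(X, T)] when failures are exponential
    with mean time between failures mtbf_h and fills are dumped after
    planned_fill_h hours at the latest."""
    return mtbf_h * (1.0 - math.exp(-planned_fill_h / mtbf_h))

def lumi_per_day(level_lumi_cm2s, planned_fill_h, mtbf_h, turnaround_h):
    """Integrated luminosity per day for levelled (constant) luminosity:
    the duty factor is the average fill length over the full cycle,
    i.e. fill plus turnaround time."""
    fill = avg_fill_hours(planned_fill_h, mtbf_h)
    duty = fill / (fill + turnaround_h)     # fraction of calendar time in collisions
    return level_lumi_cm2s * duty * 24 * 3600

# Illustrative comparison: doubling the MTBF lengthens the average fill
# and hence raises the integrated luminosity for the same levelled value.
poor = lumi_per_day(5e34, planned_fill_h=10, mtbf_h=5, turnaround_h=3)
good = lumi_per_day(5e34, planned_fill_h=10, mtbf_h=20, turnaround_h=3)
```

  Even this crude model shows why an availability target follows from a luminosity target: with a short MTBF the machine spends a large fraction of its time in turnaround rather than in collisions.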
 
TUPFI028 Beam Losses Through the LHC Operational Cycle in 2012 1400
 
  • G. Papotti, A.A. Gorzawski, M. Hostettler, R. Schmidt
    CERN, Geneva, Switzerland
 
  We review the losses through the nominal LHC cycle for physics operation in 2012. The loss patterns are studied and categorized according to timescale, distribution, time in the cycle, the bunches affected, and whether the losses are coherent or incoherent. Possible causes and correlations are identified, e.g. to machine parameters or instability signatures. A comparison with losses in the previous years of operation is also shown.  
 
TUPME032 Update on Beam Induced RF Heating in the LHC 1646
 
  • B. Salvant, O. Aberle, G. Arduini, R.W. Aßmann, V. Baglin, M.J. Barnes, W. Bartmann, P. Baudrenghien, O.E. Berrig, A. Bertarelli, C. Bracco, E. Bravin, G. Bregliozzi, R. Bruce, F. Carra, F. Caspers, G. Cattenoz, S.D. Claudet, H.A. Day, M. Deile, J.F. Esteban Müller, P. Fassnacht, M. Garlaschè, L. Gentini, B. Goddard, A. Grudiev, B. Henrist, S. Jakobsen, O.R. Jones, O. Kononenko, G. Lanza, L. Lari, T. Mastoridis, V. Mertens, N. Mounet, E. Métral, A.A. Nosych, J.L. Nougaret, S. Persichelli, A.M. Piguiet, S. Redaelli, F. Roncarolo, G. Rumolo, B. Salvachua, M. Sapinski, R. Schmidt, E.N. Shaposhnikova, L.J. Tavian, M.A. Timmins, J.A. Uythoven, A. Vidal, J. Wenninger, D. Wollmann, M. Zerlauth
    CERN, Geneva, Switzerland
  • H.A. Day
    UMAN, Manchester, United Kingdom
  • L. Lari
    IFIC, Valencia, Spain
 
  Since June 2011, the rapid increase of the luminosity performance of the LHC has come at the expense of increased temperature and pressure readings on specific near-beam LHC equipment. In some cases, this beam-induced heating has caused delays while equipment cools down, beam dumps and even degradation of these devices. This contribution gathers the observations of beam-induced heating attributable to beam coupling impedance, their current level of understanding and possible actions that are planned to be implemented during the long shutdown in 2013-2014.  
 
WEPME044 Generation of Controlled Losses in Millisecond Timescale with Transverse Damper in LHC 3025
 
  • M. Sapinski, T. Baer, V. Chetvertkova, B. Dehning, W. Höfle, A. Priebe, R. Schmidt, D. Valuch
    CERN, Geneva, Switzerland
 
  A controlled way of generating beam losses is required in order to investigate the quench limits of the superconducting magnets in the LHC. This is especially difficult to achieve for losses of millisecond duration. A series of experiments using the transverse damper system has proven that such a fast loss can be obtained even in the case of rigid 4 TeV beams. This paper describes the optimisation of the beam parameters and transverse damper waveform required to mimic fast loss scenarios and reports on extensive tracking simulations undertaken to fully understand the time and spatial structure of these losses. The application of this method to the final quench tests is also presented.  
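  The principle of the method can be sketched with a toy single-particle model: the transverse damper, driven so as to anti-damp instead of damp, makes the betatron amplitude grow exponentially until the particle exceeds the aperture. The gain and aperture values below are illustrative assumptions, not the experiment's settings, and the real work relied on full tracking simulations rather than this one-liner model.

```python
def turns_to_loss(gain_per_turn=0.01, aperture_sigma=6.0, a0_sigma=1.0):
    """Turns until the betatron amplitude (in beam sigma) exceeds the
    aperture under exponential anti-damping, a_n = a0 * (1 + g)^n.
    Illustrative toy model of a damper-driven controlled loss."""
    n = 0
    a = a0_sigma
    while a < aperture_sigma:
        a *= (1.0 + gain_per_turn)
        n += 1
    return n

# A larger excitation gain shortens the loss: this is the knob that
# lets the loss timescale be tuned toward the millisecond regime.
slow = turns_to_loss(gain_per_turn=0.01)
fast = turns_to_loss(gain_per_turn=0.02)
```

  In reality the loss timescale also depends on the tune spread and the waveform of the damper excitation, which is why the abstract speaks of optimising both beam parameters and the damper waveform.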
 
THPEA045 Beam Induced Quenches of LHC Magnets 3243
 
  • M. Sapinski, T. Baer, M. Bednarek, G. Bellodi, C. Bracco, R. Bruce, B. Dehning, W. Höfle, A. Lechner, E. Nebot Del Busto, A. Priebe, S. Redaelli, B. Salvachua, R. Schmidt, D. Valuch, A.P. Verweij, J. Wenninger, D. Wollmann, M. Zerlauth
    CERN, Geneva, Switzerland
 
  In the years 2009-2013 the LHC operated with beam energies of 3.5 and 4 TeV instead of the nominal 7 TeV, with the corresponding currents in the superconducting magnets also at about half their nominal values. To date only a small number of beam-induced quenches have occurred, most of them due to specially designed quench tests; during normal collider operation with stored beam there has not been a single beam-induced quench. This excellent result is mainly explained by the fact that the cleaning of the beam halo worked very well and that, in the case of beam losses, the beam was dumped before any significant energy was deposited in the magnets. However, conditions are expected to become much tougher after the long LHC shutdown, when the magnets will be working at near-nominal currents in the presence of high-energy and high-intensity beams. This paper summarizes the experience to date with beam-induced quenches. It describes the techniques used to generate controlled quench conditions, which were used to study the limitations. Results are discussed along with their implications for LHC operation after the first Long Shutdown.  
 
THPEA046 Machine Protection at the LHC - Experience of Three Years Running and Outlook for Operation at Nominal Energy 3246
 
  • D. Wollmann, R. Schmidt, J. Wenninger, M. Zerlauth
    CERN, Geneva, Switzerland
 
  With more than 22 fb-1 of integrated luminosity delivered to the experiments ATLAS and CMS, the LHC surpassed the results of 2011 by more than a factor of 5. This was achieved at 4 TeV, with intensities of ~2×10^14 protons per beam. The uncontrolled loss of only a small fraction of the stored beam is sufficient to damage parts of the superconducting magnet system, accelerator equipment or the particle physics experiments. To protect against this, correct functioning of the complex LHC machine protection (MP) systems throughout the operational cycle is essential. Operating with up to 140 MJ of stored beam energy was only possible due to the experience and confidence gained in the two previous running periods, where the intensity was increased slowly. In this paper the 2012 performance of the MP systems is discussed. The strategy applied for a fast but safe intensity ramp-up and the monitoring of the MP systems during stable running periods are presented. Weaknesses in the reliability of the MP systems, set-up procedures and setting adjustments for machine development periods, discovered in 2012, are critically reviewed, and improvements for LHC operation after the upcoming Long Shutdown are proposed.  
 
THPEA047 Diamond Particle Detector Properties during High Fluence Material Damage Tests and their Future Applications for Machine Protection in the LHC 3249
 
  • F. Burkart, J. Blanco, J. Borburgh, B. Dehning, M. Di Castro, E. Griesmayer, A. Lechner, J. Lendaro, F. Loprete, R. Losito, S. Montesano, R. Schmidt, D. Wollmann, M. Zerlauth
    CERN, Geneva, Switzerland
  • E. Griesmayer
    CIVIDEC Instrumentation, Wien, Austria
 
  Experience with LHC machine protection (MP) during the last three years of operation shows that the MP systems sufficiently protect the LHC against damage in case of failures leading to beam losses with a time constant exceeding 1 ms. An unexpected fast beam loss mechanism, so-called UFOs, was observed, which could potentially quench superconducting magnets. For such fast losses, but also for a better understanding of slower losses, an improved understanding of the loss distribution within a bunch train is required. Diamond particle detectors with bunch-by-bunch resolution and high dynamic range have been developed and successfully tested in the LHC and in experiments to quantify the damage limits of LHC components. This paper focuses on the experience gained in the use of diamond detectors. The properties of these detectors were measured during high-fluence material damage tests in CERN's HiRadMat facility. The results are discussed and compared to the cross-calibration with FLUKA simulations. Future applications of these detectors in the LHC to understand beam losses and to improve the protection against fast particle losses are also discussed.  
 
THPFI045 Reliability Approach for Machine Protection Design in Particle Accelerators 3388
 
  • A. Apollonio, J.-B. Lallement, B. Mikulec, B. Puccio, J.L. Sanchez Alvarez, R. Schmidt, S. Wagner
    CERN, Geneva, Switzerland
 
  Particle accelerators require Machine Protection Systems (MPS) to prevent beam-induced damage of equipment in case of failures. This becomes increasingly important for proton colliders with a large energy stored in the beam, such as the LHC, for high-power accelerators with a beam power of up to 10 MW, such as the European Spallation Source (ESS), and for linear colliders with high beam power and very small beam size. The reliability of Machine Protection Systems is crucial for safe machine operation; all possible sources of risk need to be taken into account in the early design stage. This paper presents a systematic approach to classify failures and to assess the associated risk, and discusses the impact of such considerations on the design of Machine Protection Systems. The application of this approach is illustrated using the new design of the MPS for LINAC 4, a linear accelerator under construction at CERN.
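  A failure classification of the kind this abstract describes typically combines the expected frequency of a failure with the severity of its consequences into a coarse risk class that drives the protection requirements. The categories, scores and thresholds below are hypothetical placeholders for illustration only; they are not the classification used for LINAC 4.

```python
# Hypothetical risk matrix: the risk score is the product of a severity
# rank and a frequency rank, and the resulting class determines how much
# machine-protection effort the failure mode warrants.
SEVERITY = {"downtime_minutes": 1, "downtime_days": 2, "equipment_damage": 3}
FREQUENCY = {"rare": 1, "occasional": 2, "frequent": 3}

def risk_class(severity, frequency):
    """Map a (severity, frequency) pair to a coarse risk class."""
    score = SEVERITY[severity] * FREQUENCY[frequency]
    if score >= 6:
        return "unacceptable: redundant interlocks required"
    if score >= 3:
        return "mitigate: dedicated protection function"
    return "acceptable: monitor"
```

  The value of such a matrix at the design stage is that it forces every identified failure mode into an explicit class before the MPS architecture is fixed, rather than after an incident.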