Author: Gorzawski, A.A.
Paper Code | Title | Page
TUPMW012 Beam Offset Stabilization Techniques for the LHC Collision Points 1438
 
  • A.A. Gorzawski, R. Jacobsson, J. Wenninger
    CERN, Geneva, Switzerland
 
  Maintaining head-on collisions over many hours is an important aspect of optimizing the performance of a collider. In current LHC operation, where the beam optics is fixed during periods of colliding beams, mainly ground-motion-induced perturbations have to be compensated. The situation will become significantly more complex when luminosity leveling is applied following the LHC luminosity upgrades. During β* leveling the optics in the interaction region changes significantly, and feed-downs from quadrupole misalignments may induce significant orbit changes that lead to beam offsets at the collision points. Such beam offsets induce a loss of luminosity and reduce the stability margins for collective effects that are provided by head-on beam-beam interactions. It is therefore essential that the beam offsets at the collision points be minimized during the leveling process. This paper reviews sources of, and mitigation techniques for, orbit perturbations at the collision points during β* leveling, and presents results of experiments performed at the LHC to mitigate and compensate such offsets.
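  The luminosity cost of such offsets can be quantified with the standard result for two round Gaussian beams (a textbook expression, not taken from the paper itself): a transverse separation d between beams of r.m.s. size σ* at the interaction point reduces the luminosity by a Gaussian factor,

  ```latex
  % Luminosity reduction factor for a transverse offset d
  % between two round Gaussian beams of r.m.s. size \sigma^*:
  \frac{L}{L_0} \;=\; \exp\!\left(-\frac{d^{2}}{4\,\sigma^{*2}}\right)
  ```

  so an offset of one beam size (d = σ*) already costs roughly 20% of the luminosity, which is why sub-beam-size orbit control at the collision points matters during leveling.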
DOI: 10.18429/JACoW-IPAC2016-TUPMW012
 
TUPMW013 Experimental Demonstration of β* Leveling at the LHC 1442
SUPSS001 (alternate paper code for this contribution)
 
  • A.A. Gorzawski, D. Mirarchi, B. Salvachua, J. Wenninger
    CERN, Geneva, Switzerland
 
  The HL-LHC project foresees boosting the LHC peak luminosity beyond the capabilities of the LHC experimental detectors. Leveling the luminosity down to a constant value that is sustainable for the experiments is therefore the operational baseline of HL-LHC. Various luminosity leveling techniques are available at the LHC. Leveling by adjusting β*, the betatron function at the interaction point, to maintain a constant luminosity is favorable because the beams remain head-on, which provides optimal stability from the point of view of collective effects. Smooth leveling by β* requires, however, excellent control of the beam orbits and beam losses in the interaction regions, since the beam offsets should not vary by more than about one r.m.s. beam size during the process. This leveling scheme was successfully tested and experimentally demonstrated during the LHC machine development program in 2015. This paper presents results on luminosity leveling over a β* range from 10 m to 0.8 m and provides an outlook on future developments and use of this technique at the LHC.
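  The principle behind β* leveling can be seen from the standard luminosity expression for head-on round-beam collisions (a textbook formula, not quoted from the abstract; R denotes the usual geometric reduction factor from crossing angle and hourglass effects):

  ```latex
  % Peak luminosity for n_b bunches of N particles each,
  % revolution frequency f_rev, normalized emittance \varepsilon_n,
  % relativistic factor \gamma, geometric reduction factor R:
  L \;=\; \frac{f_{\mathrm{rev}}\, n_b\, N^{2}\, \gamma}
               {4\pi\, \varepsilon_n\, \beta^{*}}\; R
  ```

  Since L ∝ N²/β*, gradually reducing β* as the bunch intensity N decays during a fill can hold the luminosity at a constant, sustainable value while the beams stay head-on.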
DOI: 10.18429/JACoW-IPAC2016-TUPMW013
 
WEPOR051 Second Generation LHC Analysis Framework: Workload-based and User-oriented Solution 2784
 
  • S. Boychenko, C. Aguilera-Padilla, M.A. Galilée, J.C. Garnier, A.A. Gorzawski, K.H. Krol, J. Makai, M. Osinski, M.C. Poeschl, T.M. Ribeiro, A. Stanisz, M. Zerlauth
    CERN, Geneva, Switzerland
  • M.Z. Rela
    University of Coimbra, Coimbra, Portugal
 
  Consolidation and upgrades of accelerator equipment during the first long LHC shutdown enabled particle collisions at energies almost twice as high as in the first operational phase. Consequently, the software infrastructure providing vital information for machine operation and its optimisation needs to be updated to keep up with the challenges posed by the increasing amount of collected data and the complexity of the analysis. The current tools, designed more than a decade ago, have proven their reliability by handling workloads significantly exceeding those initially provisioned for, but they cannot scale efficiently to satisfy the growing needs of operators and hardware experts. In this paper we present our progress towards a new workload-driven solution for LHC transient data analysis, based on identified user requirements. An initial setup and study of modern data storage and processing engines suitable for accelerator data analysis was conducted. First simulations of the proposed partitioning and replication approach, targeting a highly efficient service for heterogeneous analysis requests, were designed and carried out.
DOI: 10.18429/JACoW-IPAC2016-WEPOR051