Author: Dragu, M.
WEPGF046: Towards a Second Generation Data Analysis Framework for LHC Transient Data Recording (page 802)
 
  • S. Boychenko, C. Aguilera-Padilla, M. Dragu, M.A. Galilée, J.C. Garnier, M. Koza, K.H. Krol, R. Orlandi, M.C. Poeschl, T.M. Ribeiro, K.S. Stamos, M. Zerlauth
    CERN, Geneva, Switzerland
  • M. Zenha-Rela
    University of Coimbra, Coimbra, Portugal
 
  During the last two years, CERN's Large Hadron Collider (LHC) and most of its equipment systems were upgraded to collide particles at twice the energy of the first operational period between 2010 and 2013. The system upgrades and the increased machine energy pose new challenges for the analysis of transient data recordings, which has to be both dependable and fast. With the LHC now having operated for several years, statistical and trend analysis across the collected data sets is a growing requirement, exposing several constraints and limitations of the current software and data storage ecosystem. Based on several analysis use cases, this paper highlights the most important aspects of and ideas for an improved, second-generation data analysis framework to serve a large variety of equipment experts and operation crews in their daily work.
Poster WEPGF046 [0.501 MB]
 
WEPGF047: Smooth Migration of CERN Post Mortem Service to a Horizontally Scalable Service (page 806)
 
  • J.C. Garnier, C. Aguilera-Padilla, S. Boychenko, M. Dragu, M.A. Galilée, M. Koza, K.H. Krol, T.M. Ribeiro, R. Orlandi, M.C. Poeschl, M. Zerlauth
    CERN, Geneva, Switzerland
 
  The Post Mortem service for CERN's accelerator complex stores and analyses transient data recordings of various equipment systems following certain events, such as a beam dump or a magnet quench. The main purpose of this framework is to provide fast and reliable diagnostics to the equipment experts and operation crews, so they can decide whether accelerator operation can continue safely or whether an intervention is required. While the Post Mortem system was initially designed to serve CERN's Large Hadron Collider (LHC), its scope has rapidly been extended to also cover External Post Operational Checks and Injection Quality Checks in the LHC and its injector complex. These new use cases impose more stringent time constraints on the storage and analysis of data, calling for a migration of the system towards better scalability in terms of both storage capacity and I/O throughput. This paper presents an overview of the current service, the ongoing investigations and plans towards a scalable data storage solution and API, and the proposed strategy to ensure an entirely smooth transition for the current Post Mortem users.
Poster WEPGF047 [1.454 MB]