Keyword: extraction
Paper Title Other Keywords Page
MOPKN007 LHC Dipole Magnet Splice Resistance From SM18 Data Mining dipole, electronics, operation, database 98
 
  • H. Reymond, O.O. Andreassen, C. Charrondière, G. Lehmann Miotto, A. Rijllart, D. Scannicchio
    CERN, Geneva, Switzerland
 
  The splice incident that happened during commissioning of the LHC on 19 September 2008 caused damage to several magnets and adjacent equipment. This raised not only the question of how it happened, but also questions about the state of all other splices. The inter-magnet splices were re-measured very soon afterwards, but the internal magnet splices were also a concern. At the Chamonix meeting in January 2009, the CERN management decided to create a working group to analyse the provoked-quench data from the magnet acceptance tests and to look for indications of bad splices in the main dipoles. This resulted in a data mining project that took about one year to complete. This presentation describes how the data were stored, extracted and analysed, reusing existing LabVIEW™-based tools. We also present the difficulties encountered and the importance of combining measured data with operator notes in the logbook.
Poster MOPKN007 [5.013 MB]
 
MOPKN009 The CERN Accelerator Measurement Database: On the Road to Federation database, controls, data-management, status 102
 
  • C. Roderick, R. Billen, M. Gourber-Pace, N. Hoibian, M. Peryt
    CERN, Geneva, Switzerland
 
  The Measurement database, acting as short-term central persistence and front-end of the CERN accelerator Logging Service, receives billions of time-series data records per day for more than 200,000 signals. A variety of data acquisition systems on hundreds of front-end computers publish source data that eventually end up being logged in the Measurement database. As part of a federated approach to data management, information about source devices is defined in a Configuration database, whilst the signals to be logged are defined in the Measurement database. A mapping, which is often complex and subject to change and extension, is therefore required in order to subscribe to the source devices and write the published data to the corresponding named signals. Since 2005 this mapping had been done by means of dozens of XML files, manually maintained by multiple persons, resulting in an error-prone configuration. In 2010 the configuration was improved so that it is now fully centralized in the Measurement database, significantly reducing the complexity and the number of actors in the process. Furthermore, logging processes immediately pick up modified configurations via JMS-based notifications sent directly from the database, allowing targeted device subscription updates rather than the full process restart that was required previously. This paper describes the architecture and the benefits of the current implementation, as well as the next steps on the road to a fully federated solution.
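
  The targeted-update mechanism described in the abstract can be pictured with a short sketch. The following Java fragment only illustrates the general JMS listener pattern, assuming a hypothetical topic name, message property and ConfigurationService callback; it is not the actual CERN logging-process code.

```java
// Minimal sketch (not the CERN implementation): a logging process listens
// for JMS configuration-change notifications sent from the database and
// refreshes only the affected device subscription, instead of restarting.
// Topic name, "deviceName" property and ConfigurationService are invented.
import javax.jms.*;

public class ConfigChangeListener implements MessageListener {

    /** Hypothetical callback into the logging process. */
    public interface ConfigurationService {
        void refreshSubscription(String deviceName);
    }

    private final ConfigurationService configService;

    public ConfigChangeListener(ConfigurationService configService) {
        this.configService = configService;
    }

    public void start(ConnectionFactory factory, String topicName) throws JMSException {
        Connection connection = factory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Topic topic = session.createTopic(topicName);            // e.g. "MEAS.CONFIG.CHANGES" (made up)
        MessageConsumer consumer = session.createConsumer(topic);
        consumer.setMessageListener(this);
        connection.start();
    }

    @Override
    public void onMessage(Message message) {
        try {
            // Assume the notification carries the affected device name as a property.
            String device = message.getStringProperty("deviceName");
            configService.refreshSubscription(device);            // targeted update only
        } catch (JMSException e) {
            e.printStackTrace();                                  // a real system would log and retry
        }
    }
}
```

  The point of the design is visible in onMessage(): only the subscription of the affected device is refreshed, so a configuration change no longer forces a full restart of the logging process.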
 
MOPKS013 Beam Spill Structure Feedback Test in HIRFL-CSR feedback, controls, power-supply, FPGA 186
 
  • R.S. Mao, P. Li, L.Z. Ma, J.X. Wu, J.W. Xia, J.C. Yang, Y.J. Yuan, T.C. Zhao, Z.Z. Zhou
    IMP, Lanzhou, People's Republic of China
 
  The slow-extraction beam from HIRFL-CSR is used for nuclear physics experiments and heavy-ion therapy. A 50 Hz ripple and its harmonics are observed in the beam spill. To improve the spill structure, a first control system consisting of a fast Q-magnet and an FPGA-based feedback device was developed and installed in 2010, and spill structure feedback tests have been started. The commissioning results obtained with the spill feedback system are presented in this paper.
Poster MOPKS013 [0.268 MB]
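
  The idea of flattening the spill with a fast feedback loop can be sketched with a simple discrete regulator. The fragment below is purely illustrative: the gains, the control period and the interfaces are invented, and the real system runs in FPGA firmware rather than in Java.

```java
// Illustrative sketch only: a discrete PI regulator of the kind such a
// feedback system might run, comparing the measured spill rate with a flat
// reference and producing a correction for the fast Q-magnet power supply
// in order to suppress the 50 Hz ripple. All parameters are made up.
public class SpillFeedback {

    private final double kp;        // proportional gain
    private final double ki;        // integral gain
    private final double dt;        // control period in seconds
    private double integral = 0.0;

    public SpillFeedback(double kp, double ki, double dt) {
        this.kp = kp;
        this.ki = ki;
        this.dt = dt;
    }

    /** Returns the fast Q-magnet correction for one control step. */
    public double step(double measuredRate, double referenceRate) {
        double error = referenceRate - measuredRate;
        integral += error * dt;
        return kp * error + ki * integral;
    }

    public static void main(String[] args) {
        // Toy usage: 10 kHz control loop, arbitrary gains and spill rates.
        SpillFeedback fb = new SpillFeedback(0.5, 50.0, 1e-4);
        double correction = fb.step(0.8e6, 1.0e6);
        System.out.printf("Q-magnet correction: %.3f (arbitrary units)%n", correction);
    }
}
```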
 
WEPMS015 NSLS-II Booster Timing System injection, booster, timing, storage-ring 1003
 
  • P.B. Cheblakov, S.E. Karnaev
    BINP SB RAS, Novosibirsk, Russia
  • J.H. De Long
    BNL, Upton, Long Island, New York, USA
 
  The NSLS-II light source includes the main storage ring with its beam lines and an injection part consisting of a 200 MeV linac, a 3 GeV booster synchrotron and two transport lines. The booster timing system is a part of the NSLS-II timing system, which is based on the Event Generator (EVG) and Event Receivers (EVRs) from Micro-Research Finland. The booster timing is based on the external events coming from the NSLS-II EVG: "Pre-Injection", "Injection", "Pre-Extraction" and "Extraction". These events are referenced to the specified bunch of the Storage Ring and correspond to the first bunch of the booster. The EVRs provide two time scales for triggering both the injection and the extraction pulsed devices: the first scale triggers the pulsed septa and the bump magnets in the millisecond range and uses the TTL outputs of the EVR; the second scale triggers the kickers in the microsecond range and uses the CML outputs. The EVRs also provide the timing of booster cycle operation, events for cycle-to-cycle updates of pulsed and ramping parameters, and synchronization of the booster beam instrumentation. This paper describes the final design of the booster timing system. The timing system functional and block diagrams are presented.
Poster WEPMS015 [0.799 MB]
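
  To make the two trigger scales concrete, the sketch below maps the four external events named in the abstract to illustrative output types and delays. The OutputType and Trigger types and all delay values are invented for the example; this is not the MRF EVR driver API.

```java
// Hedged sketch of the event-to-output mapping described above. The event
// names come from the abstract; everything else is an illustrative example.
import java.util.EnumMap;
import java.util.Map;

public class BoosterTriggerMap {

    enum TimingEvent { PRE_INJECTION, INJECTION, PRE_EXTRACTION, EXTRACTION }

    enum OutputType { TTL_MILLISECOND_SCALE, CML_MICROSECOND_SCALE }

    record Trigger(String device, OutputType output, double delaySeconds) {}

    public static void main(String[] args) {
        Map<TimingEvent, Trigger> map = new EnumMap<>(TimingEvent.class);
        // Millisecond-scale pulsed devices on TTL outputs (delays are made up).
        map.put(TimingEvent.PRE_INJECTION,
                new Trigger("injection septum + bump magnets", OutputType.TTL_MILLISECOND_SCALE, 5e-3));
        map.put(TimingEvent.PRE_EXTRACTION,
                new Trigger("extraction septum + bump magnets", OutputType.TTL_MILLISECOND_SCALE, 5e-3));
        // Microsecond-scale kickers on CML outputs.
        map.put(TimingEvent.INJECTION,
                new Trigger("injection kicker", OutputType.CML_MICROSECOND_SCALE, 2e-6));
        map.put(TimingEvent.EXTRACTION,
                new Trigger("extraction kicker", OutputType.CML_MICROSECOND_SCALE, 2e-6));

        map.forEach((event, trig) ->
                System.out.printf("%-14s -> %-36s via %s, delay %.1e s%n",
                        event, trig.device(), trig.output(), trig.delaySeconds()));
    }
}
```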
 
WEPMS020 NSLS-II Booster Power Supplies Control booster, controls, operation, injection 1018
 
  • P.B. Cheblakov, S.E. Karnaev, S.S. Serednyakov
    BINP SB RAS, Novosibirsk, Russia
  • W. Louie, Y. Tian
    BNL, Upton, Long Island, New York, USA
 
  The NSLS-II booster Power Supplies (PSs) [1] are divided into two groups: ramping PSs, which provide passage of the beam during the booster ramp from 200 MeV up to 3 GeV over a 300 ms interval, and pulsed PSs, which provide beam injection from the linac and extraction to the Storage Ring. A special set of devices was developed at BNL for controlling the NSLS-II magnet system PSs: the Power Supply Controller (PSC) and the Power Supply Interface (PSI). The PSI has one or two precision 18-bit DACs, nine ADC channels for each DAC, and digital inputs/outputs. It is capable of detecting the status-change sequence of the digital inputs with 10 ns resolution. The PSI is placed close to the current regulators and is connected to the PSC via a 50 Mbps fiber-optic data link. The PSC communicates with an EPICS IOC through a 100 Mbps Ethernet port. The main functions of the IOC include ramp curve upload, ADC waveform data download, and control of various process variables. The 256 Mb DDR2 memory on the PSC provides storage for up to 16 ramping tables for both DACs and a 20-second waveform recorder for all the ADC channels. The 100 Mbps Ethernet port enables real-time display of four ADC waveforms. This paper describes the NSLS-II booster PS control project. Characteristic features of the control of the ramping magnets and of the pulsed magnets in the double-injection mode of operation are considered. First results obtained at the PS test stands are presented.
[1] Y. Tian, W. Louie, J. Ricciardelli, L.R. Dalesio, G. Ganetis, "Power Supply Control System of NSLS-II", ICALEPCS 2009, Japan.
 
Poster WEPMS020 [1.818 MB]
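
  As a rough illustration of what a ramping table for an 18-bit DAC looks like, the sketch below builds one linear ramp covering the 300 ms energy ramp. The full-scale current, sample period and table length are made-up values, and the upload to the PSC over Ethernet is only hinted at in a comment; this is not the BNL PSC/PSI software.

```java
// Illustrative sketch: generating one ramp table for an 18-bit DAC.
public class RampTableBuilder {

    private static final int DAC_MAX_CODE = (1 << 18) - 1;   // 18-bit DAC: 0..262143
    private static final double FULL_SCALE_AMPS = 500.0;     // hypothetical full-scale current

    /** Converts a current setpoint to the nearest DAC code, clamped to range. */
    static int toDacCode(double amps) {
        long code = Math.round(amps / FULL_SCALE_AMPS * DAC_MAX_CODE);
        return (int) Math.max(0, Math.min(DAC_MAX_CODE, code));
    }

    /** Builds a linear ramp from startAmps to endAmps with the given number of points. */
    static int[] buildLinearRamp(double startAmps, double endAmps, int points) {
        int[] table = new int[points];
        for (int i = 0; i < points; i++) {
            double amps = startAmps + (endAmps - startAmps) * i / (points - 1);
            table[i] = toDacCode(amps);
        }
        return table;
    }

    public static void main(String[] args) {
        // 300 ms ramp sampled every 100 us -> 3000 points (illustrative numbers only).
        int[] ramp = buildLinearRamp(30.0, 450.0, 3000);
        System.out.printf("first=%d last=%d points=%d%n", ramp[0], ramp[ramp.length - 1], ramp.length);
        // The real system uploads such tables to the PSC over Ethernet; the PSC
        // memory can hold up to 16 of them per DAC, as described in the abstract.
    }
}
```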
 
WEPMU006 Architecture for Interlock Systems: Reliability Analysis with Regard to Safety and Availability simulation, operation, superconducting-magnet, detector 1058
 
  • S. Wagner, A. Apollonio, R. Schmidt, M. Zerlauth
    CERN, Geneva, Switzerland
  • A. Vergara-Fernandez
    ITER Organization, St. Paul lez Durance, France
 
  For accelerators (e.g. the LHC) and other large experimental physics facilities (e.g. ITER), machine protection relies on complex interlock systems. In the design of interlock loops, the choice of hardware architecture has an impact on machine safety and availability. While high machine safety is an inherent requirement, the constraints in terms of availability may differ from one facility to another. For the interlock loops protecting the LHC superconducting magnet circuits, reduced machine availability can be tolerated, since shutdowns do not affect the longevity of the equipment. In ITER's case, on the other hand, high availability is required, since fast shutdowns cause significant magnet aging. A reliability analysis of various interlock loop architectures has been performed. The analysis, based on an analytical model, compares a 1oo3 (one-out-of-three) and a 2oo3 architecture with a single loop. It yields the probabilities of four scenarios: (1) completed mission (e.g. a physics fill in the LHC or a pulse in ITER without a shutdown being triggered); (2) shutdown because of a failure in the interlock loop; (3) emergency shutdown (e.g. after a magnet quench); and (4) missed emergency shutdown (a shutdown is required but the interlock loop fails, possibly leading to severe damage of the facility). Scenario 4 relates to machine safety, and together with scenarios 2 and 3 it determines the machine availability reflected by scenario 1. This paper presents the results of the analysis of the different architectures with regard to machine safety and availability.
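
  The safety/availability trade-off between the architectures can be illustrated with textbook first-order estimates for three independent, identical channels, where p_d is the per-mission probability that a channel misses a required shutdown and p_s the probability that it trips spuriously. These formulas are a generic illustration, not the analytical model used in the paper:

```latex
% First-order estimates for three independent, identical interlock channels.
% p_d: per-mission probability that one channel misses a required shutdown.
% p_s: per-mission probability that one channel trips spuriously.
\begin{align*}
  \text{single loop:}\quad & P_{\text{missed}} = p_d, \qquad
      P_{\text{false}} = p_s \\
  \text{1oo3 (any channel trips):}\quad & P_{\text{missed}} = p_d^{3}, \qquad
      P_{\text{false}} = 1-(1-p_s)^{3} \approx 3p_s \\
  \text{2oo3 (majority voting):}\quad & P_{\text{missed}} = 3p_d^{2}(1-p_d)+p_d^{3} \approx 3p_d^{2}, \qquad
      P_{\text{false}} \approx 3p_s^{2}
\end{align*}
```

  They show the expected pattern: 1oo3 maximizes safety at the cost of roughly three times more false shutdowns than a single loop, whereas 2oo3 voting suppresses false shutdowns while still improving safety over a single loop.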
 
WEPMU012 First Experiences of Beam Presence Detection Based on Dedicated Beam Position Monitors operation, injection, pick-up, instrumentation 1081
 
  • A. Jalal, S. Gabourin, M. Gasior, B. Todd
    CERN, Geneva, Switzerland
 
  High intensity particle beam injection into the LHC is only permitted when a low intensity pilot beam is already circulating in the LHC. This requirement addresses some of the risks associated with high intensity injection, and is enforced by a so-called Beam Presence Flag (BPF) system which is part of the interlock chain between the LHC and its injector complex. For the 2010 LHC run, the detection of the presence of this pilot beam was implemented using the LHC Fast Beam Current Transformer (FBCT) system. However, the primary function of the FBCTs, namely the reliable measurement of beam currents, did not allow the BPF system to satisfy all quality requirements of the LHC Machine Protection System (MPS). The safety requirements associated with high intensity injection therefore triggered the development of a dedicated system based on Beam Position Monitors (BPMs), meant to work first in parallel with the FBCT BPF system and eventually to replace it. At the end of 2010 and in 2011, this new BPF implementation based on BPMs was designed, built, tested and deployed. This paper reviews both the FBCT and the BPM implementations of the BPF system, outlining the changes during the transition period. The paper briefly describes the testing methods, focuses on the results of the tests performed at the end of the 2010 LHC run, and shows the changes made for the deployment of the BPM BPF system in the LHC in 2011.
 
WEPMU023 External Post-Operational Checks for the LHC Beam Dumping System controls, kicker, operation, injection 1111
 
  • N. Magnin, V. Baggiolini, E. Carlier, B. Goddard, R. Gorbonosov, D. Khasbulatov, J.A. Uythoven, M. Zerlauth
    CERN, Geneva, Switzerland
 
  The LHC Beam Dumping System (LBDS) is a critical part of the LHC machine protection system. After every LHC beam dump action, the various signals and transient data recordings of the beam dumping control systems and beam instrumentation measurements are automatically analysed by the eXternal Post-Operational Checks (XPOC) system to verify the correct execution of the dump action and the integrity of the related equipment. This software system complements the LHC machine protection hardware, and has to ascertain that the beam dumping system is ‘as good as new’ before the start of the next operational cycle; this is the only way the stringent reliability requirements can be met. The XPOC system has been developed within the framework of the LHC “Post-Mortem” system, allowing highly dependable data acquisition, data archiving, live analysis of acquired data and replay of previously recorded events. It is composed of various analysis modules, each dedicated to the analysis of measurements coming from a specific equipment system. This paper describes the global architecture of the XPOC system and gives examples of the analyses performed by some of the most important modules. It explains how the XPOC is integrated into the LHC control infrastructure and into the decision chain that allows beam operation to proceed. Finally, it discusses the operational experience with the XPOC system acquired during the first years of LHC operation, and illustrates examples of internal system faults or abnormal beam dump executions that it has detected.
Poster WEPMU023 [1.768 MB]
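
  The modular structure described in the abstract can be sketched as a list of analysis modules whose verdicts are combined into a single go/no-go decision. The fragment below is a hypothetical illustration of that pattern, not the actual XPOC code; the signal name and threshold in the toy module are invented.

```java
// Hedged sketch: each analysis module checks the data from one equipment
// system, and the dump is declared OK only if every module passes.
import java.util.List;
import java.util.Map;

public class XpocSketch {

    /** One analysis module, dedicated to a specific equipment system. */
    interface AnalysisModule {
        String name();
        boolean analyse(Map<String, double[]> dumpSignals);   // true = within tolerance
    }

    /** Runs all modules on one beam dump event and reports the overall verdict. */
    static boolean analyseDump(List<AnalysisModule> modules, Map<String, double[]> dumpSignals) {
        boolean allOk = true;
        for (AnalysisModule module : modules) {
            boolean ok = module.analyse(dumpSignals);
            System.out.printf("%-30s : %s%n", module.name(), ok ? "OK" : "FAILED");
            allOk &= ok;   // a single failing module blocks the next operational cycle
        }
        return allOk;
    }

    public static void main(String[] args) {
        // Toy module: checks that a hypothetical kicker waveform peaks above a threshold.
        AnalysisModule kickerCheck = new AnalysisModule() {
            public String name() { return "extraction kicker waveform"; }
            public boolean analyse(Map<String, double[]> s) {
                double peak = 0;
                for (double v : s.getOrDefault("kicker.current", new double[0])) peak = Math.max(peak, v);
                return peak > 0.9;   // arbitrary normalized threshold
            }
        };
        boolean ok = analyseDump(List.of(kickerCheck),
                Map.of("kicker.current", new double[]{0.1, 0.8, 0.95, 0.2}));
        System.out.println(ok ? "Dump OK, beam operation may proceed" : "XPOC blocked: expert check required");
    }
}
```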
 
THCHAUST06 Instrumentation of the CERN Accelerator Logging Service: Ensuring Performance, Scalability, Maintenance and Diagnostics instrumentation, database, distributed, framework 1232
 
  • C. Roderick, R. Billen, D.D. Teixeira
    CERN, Geneva, Switzerland
 
  The CERN accelerator Logging Service currently holds more than 90 terabytes of data online and processes approximately 450 gigabytes per day, via hundreds of data loading processes and data extraction requests. This service is mission-critical for day-to-day operations, especially with respect to the tracking of live data from the LHC beam and equipment. In order to manage any service effectively, the service provider's goals should include knowing how the underlying systems are being used, in terms of: "Who is doing what, from where, using which applications and methods, and how long does each action take?". Armed with such information, it is then possible to analyse and tune system performance over time, plan for scalability ahead of time, assess the impact of maintenance operations and infrastructure upgrades, and diagnose past, ongoing or recurring problems. The Logging Service is based on Oracle DBMS and Application Servers and on Java technology, and comprises several layered and multi-tiered systems. These systems have all been heavily instrumented to capture data about system usage, using technologies such as JMX. The success of the Logging Service and its proven ability to cope with ever-growing demands can be directly linked to the instrumentation in place. This paper describes the instrumentation that has been developed, and demonstrates how the instrumentation data are used to achieve the goals outlined above.
Slides THCHAUST06 [5.459 MB]
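
  As an illustration of the kind of JMX instrumentation mentioned above, the sketch below exposes a request counter and an average execution time as a standard MBean, so that any JMX client can answer "how many extraction requests, and how long do they take". The class names, attributes and ObjectName are invented examples, not the actual Logging Service MBeans.

```java
// File: ExtractionRequestStatsMBean.java
// Standard MBean interface; the getters become attributes visible in JMX clients.
public interface ExtractionRequestStatsMBean {
    long getRequestCount();
    double getAverageMillis();
}
```

```java
// File: ExtractionRequestStats.java
// Hypothetical usage-statistics MBean registered on the platform MBean server.
import java.lang.management.ManagementFactory;
import java.util.concurrent.atomic.AtomicLong;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class ExtractionRequestStats implements ExtractionRequestStatsMBean {

    private final AtomicLong requestCount = new AtomicLong();
    private final AtomicLong totalMillis = new AtomicLong();

    /** Called by the data-extraction code around each request (hypothetical hook). */
    public void recordRequest(long durationMillis) {
        requestCount.incrementAndGet();
        totalMillis.addAndGet(durationMillis);
    }

    @Override public long getRequestCount() { return requestCount.get(); }

    @Override public double getAverageMillis() {
        long n = requestCount.get();
        return n == 0 ? 0.0 : (double) totalMillis.get() / n;
    }

    public static void main(String[] args) throws Exception {
        ExtractionRequestStats stats = new ExtractionRequestStats();
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        server.registerMBean(stats, new ObjectName("logging.sketch:type=ExtractionRequestStats"));
        stats.recordRequest(120);   // simulate one instrumented extraction request
        System.out.println("Average duration: " + stats.getAverageMillis() + " ms");
    }
}
```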