Author: Andreassen, O.O.
Paper | Title | Page
MOPKN007 LHC Dipole Magnet Splice Resistance from SM18 Data Mining 98
 
  • H. Reymond, O.O. Andreassen, C. Charrondière, G. Lehmann Miotto, A. Rijllart, D. Scannicchio
    CERN, Geneva, Switzerland
 
  The splice incident that occurred during commissioning of the LHC on 19 September 2008 caused damage to several magnets and adjacent equipment. This raised not only the question of how it happened, but also questions about the state of all the other splices. The inter-magnet splices were studied very soon afterwards with new measurements, but the internal magnet splices were also a concern. At the Chamonix meeting in January 2009, the CERN management decided to create a working group to analyse the provoked-quench data from the magnet acceptance tests and to look for indications of bad splices in the main dipoles. This resulted in a data mining project that took about one year to complete. This presentation describes how the data were stored, extracted and analysed, reusing existing LabVIEW™-based tools. We also present the difficulties encountered and the importance of combining measured data with operator notes from the logbook.
Poster MOPKN007 [5.013 MB]
 
MOPMS023 LHC Magnet Test Benches Controls Renovation 368
 
  • A. Raimondo, O.O. Andreassen, D. Kudryavtsev, S.T. Page, A. Rijllart, E. Zorin
    CERN, Geneva, Switzerland
 
  The LHC magnet test bench controls were designed in 1996. They were based on VME data acquisition systems and Siemens PLC control and interlock systems. During a review of the renovation of the superconducting laboratories at CERN in 2009, it was decided to replace the VME systems with PXI and the obsolete Sun/Solaris workstations with Linux PCs. This presentation covers the requirements for the new systems in terms of functionality, security, channel count, sampling frequency and precision. We report on the experience with the commissioning of the first series of fixed and mobile measurement systems upgraded to this new platform, compared with the old systems. We also include the experience with the renovated control room.
Poster MOPMS023 [1.310 MB]
 
WEMAU003 The LabVIEW RADE Framework Distributed Architecture 658
 
  • O.O. Andreassen, D. Kudryavtsev, A. Raimondo, A. Rijllart
    CERN, Geneva, Switzerland
  • S. Shaipov, R. Sorokoletov
    JINR, Dubna, Moscow Region, Russia
 
  For accelerator GUI applications there is a need for a rapid development environment to create expert tools or to prototype operator applications. Typically a variety of tools is used, such as Matlab™ or Excel™, but their scope is limited, either because of their low flexibility or their limited integration into the accelerator infrastructure. In addition, having several tools obliges users to deal with different programming techniques and data structures. We have addressed these limitations by using LabVIEW™, extending it with interfaces to C++ and Java. In this way it fulfils the requirements of ease of use, flexibility and connectivity. We present the RADE framework and four applications based on it. Recent application requirements could only be met by implementing a distributed architecture with multiple servers running multiple services. This brought the additional advantages of redundant services, increased availability and transparent updates. We present two applications requiring high availability. We also report on issues encountered with such a distributed architecture and how we have addressed them. The latest extension of the framework is to industrial equipment, with program templates and drivers for PLCs (Siemens and Schneider) and for PXI with LabVIEW Real-Time.
Slides WEMAU003 [0.157 MB]
Poster WEMAU003 [2.978 MB]
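The redundant-services idea in the WEMAU003 abstract (multiple servers running the same service, so clients stay available during outages and updates) can be illustrated with a minimal client-side failover sketch. This is not the RADE API, which is not published here; the server names and the wire protocol are purely hypothetical assumptions for illustration.

```python
import socket

# Hypothetical endpoints standing in for redundant RADE-style services;
# these names are illustrative only, not real CERN hosts.
SERVERS = [("rade-server-1.example.cern.ch", 5000),
           ("rade-server-2.example.cern.ch", 5000)]

def call_with_failover(request: bytes, servers, connect=socket.create_connection):
    """Send `request` to the first reachable server and return its reply.

    The `connect` callable is injectable so the failover logic can be
    exercised without a real network connection.
    """
    last_error = None
    for host_port in servers:
        try:
            conn = connect(host_port, timeout=2.0)
            try:
                conn.sendall(request)
                return conn.recv(4096)
            finally:
                conn.close()
        except OSError as exc:
            last_error = exc  # this server is down: try the next replica
    raise ConnectionError(f"all servers failed: {last_error}")
```

With this pattern, taking one replica down for a transparent update simply causes clients to fall through to the next endpoint in the list.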
 
WEMAU007 Turn-key Applications for Accelerators with LabVIEW-RADE 670
 
  • O.O. Andreassen, P. Bestmann, C. Charrondière, T. Feniet, J. Kuczerowski, M. Nybø, A. Rijllart
    CERN, Geneva, Switzerland
 
  In the accelerator domain there is a need to integrate industrial devices and to create control and monitoring applications in an easy yet structured way. The LabVIEW-RADE framework provides the method and tools to implement these requirements, and also provides the essential integration of these applications into the CERN controls infrastructure. We present three applications of different natures to show that the framework provides solutions at all three tiers of the control system: data access, processing and supervision. The first example is a remotely controlled alignment system for the LHC collimators. The collimator alignment needs to be checked periodically. Owing to limited access for personnel, the instruments are mounted on a small train. The system is composed of a PXI crate housing the instrument interfaces and a PLC for the motor control. We report on the design, development and commissioning of the system. The second application is the renovation of the PS beam spectrum analyser, where both hardware and software were renewed. The control application was ported from Windows to LabVIEW Real-Time. We describe the technique used for full integration into the PS console. The third example is a control and monitoring application for the CLIC two-beam test stand. The application accesses CERN front-end equipment through the CERN middleware, CMW, and provides many different ways to view the data. We conclude with an evaluation of the framework based on the three examples and indicate new areas for improvement and extension.
Poster WEMAU007 [2.504 MB]
 
WEPMU010 Automatic Analysis at the Commissioning of the LHC Superconducting Electrical Circuits 1073
 
  • H. Reymond, O.O. Andreassen, C. Charrondière, A. Rijllart, M. Zerlauth
    CERN, Geneva, Switzerland
 
  Since the beginning of 2010 the LHC has been operating in a routine manner, starting with a commissioning phase followed by a physics operation phase. The commissioning of the superconducting electrical circuits requires rigorous test procedures before entering into operation. To maximize the beam operation time of the LHC, these tests should be done as fast as the procedures allow. A full commissioning requires 12000 tests and is needed after circuits have been warmed above liquid nitrogen temperature. Below this temperature, after an end-of-year break of two months, commissioning needs about 6000 tests. Because the manual analysis of the tests takes up a major part of the commissioning time, we automated the existing analysis tools. We present the way in which these LabVIEW™ applications were automated. We evaluate the gain in commissioning time and the reduction in the number of experts on night shift observed during the LHC hardware commissioning campaign of 2011, compared with 2010. We end with an outlook on what can be further optimized.
Poster WEPMU010 [3.124 MB]