Keyword: operation
Paper Title Other Keywords Page
MOBAUST03 The MedAustron Accelerator Control System controls, interface, real-time, timing 9
 
  • J. Gutleber, M. Benedikt
    CERN, Geneva, Switzerland
  • A.B. Brett, A. Fabich, M. Marchhart, R. Moser, M. Thonke, C. Torcato de Matos
    EBG MedAustron, Wr. Neustadt, Austria
  • J. Dedič
    Cosylab, Ljubljana, Slovenia
 
  This paper presents the architecture and design of the MedAustron particle accelerator control system. The facility is currently under construction in Wr. Neustadt, Austria. The accelerator and its control system are designed at CERN. Accelerator control systems for ion therapy applications are characterized by rich sets of configuration data, real-time reconfiguration needs and high stability requirements. The machine is operated according to a pulse-to-pulse modulation scheme and beams are described in terms of ion type, energy, beam dimensions, intensity and spill length. An irradiation session for a patient consists of a few hundred accelerator cycles over a time period of about two minutes. No two cycles within a session are equal and the dead-time between two cycles must be kept low. The control system is based on a multi-tier architecture with the aim of achieving a clear separation between front-end devices and their controllers. Off-the-shelf technologies are deployed wherever possible. In-house developments cover a main timing system, a light-weight layer to standardize operation and communication of front-end controllers, the control of the power converters and a procedure programming framework for automating high-level control and data analysis tasks. In order to be able to roll out a system within a predictable schedule, an "off-shoring" project management process was adopted: a frame agreement with an integrator covers the provision of skilled personnel who specify and build components together with the core team.
slides icon Slides MOBAUST03 [7.483 MB]  
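
  The beam description scheme sketched in the abstract (ion type, energy, beam dimensions, intensity and spill length, varied from cycle to cycle) can be pictured with a small data structure. The following is an illustrative sketch only, not taken from the MedAustron design; all field names and values are invented.

```python
from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)
class CycleRequest:
    """One cycle of a pulse-to-pulse modulated session (illustrative only)."""
    ion: str             # e.g. "proton" or "C6+" (assumed labels)
    energy_mev_u: float  # extraction energy per nucleon
    spot_size_mm: float  # beam dimension at the irradiation point
    intensity: float     # particles per spill
    spill_ms: float      # spill length

def build_session(cycles: List[CycleRequest]) -> List[CycleRequest]:
    """A patient session is an ordered list of a few hundred such cycles,
    no two of which need to be equal; dead time between them must stay low."""
    return list(cycles)

session = build_session([
    CycleRequest("proton", 120.0, 8.0, 1e9, 500.0),
    CycleRequest("proton", 122.5, 8.0, 1e9, 500.0),
])
```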
 
MOBAUST05 Control System Achievement at KEKB and Upgrade Design for SuperKEKB controls, EPICS, software, linac 17
 
  • K. Furukawa, A. Akiyama, E. Kadokura, M. Kurashina, K. Mikawa, F. Miyahara, T.T. Nakamura, J.-I. Odagiri, M. Satoh, T. Suwada
    KEK, Ibaraki, Japan
  • T. Kudou, S. Kusano, T. Nakamura, K. Yoshii
    MELCO SC, Tsukuba, Japan
  • T. Okazaki
    EJIT, Hitachi, Ibaraki, Japan
 
  The SuperKEKB electron-positron asymmetric collider is being constructed after a decade of successful operation of KEKB for B physics research. KEKB completed all of its technical milestones and offered important insights into the flavor structure of elementary particles, especially CP violation. The combination of scripting languages at the operation layer and EPICS at the equipment layer led the control system to successful performance. The new control system for SuperKEKB will retain those major features of KEKB, with additional technologies for reliability and flexibility. The major structure will be maintained, especially the online linkage to the simulation code and the slow controls. However, as the design luminosity is 40 times higher than that of KEKB, several orders of magnitude higher performance will be required in certain areas. At the same time, more controllers with embedded technology will be installed to cope with the limited resources.
slides icon Slides MOBAUST05 [2.781 MB]  
 
MOBAUST06 The LHCb Experiment Control System: on the Path to Full Automation controls, experiment, detector, framework 20
 
  • C. Gaspar, F. Alessio, L.G. Cardoso, M. Frank, J.C. Garnier, R. Jacobsson, B. Jost, N. Neufeld, R. Schwemmer, E. van Herwijnen
    CERN, Geneva, Switzerland
  • O. Callot
    LAL, Orsay, France
  • B. Franek
    STFC/RAL, Chilton, Didcot, Oxon, United Kingdom
 
  LHCb is a large experiment at the LHC accelerator. The experiment control system is in charge of the configuration, control and monitoring of the different sub-detectors and of all areas of the online system: the Detector Control System (DCS), covering sub-detectors' voltages, cooling, temperatures, etc.; the Data Acquisition System (DAQ) and the Run-Control; the High Level Trigger (HLT), a farm of around 1500 PCs running trigger algorithms; etc. The building blocks of the control system are based on the PVSS SCADA system complemented by a control framework developed in common for the 4 LHC experiments. This framework includes an "expert system"-like tool called SMI++ which we use for system automation. The full control system runs distributed over around 160 PCs and is logically organised in a hierarchical structure, each level being capable of supervising and synchronizing the objects below. The experiment's operations are now almost completely automated, driven by a top-level object called Big-Brother which pilots all the experiment's standard procedures and the most common error-recovery procedures. Some examples of automated procedures are: powering the detector, acting on the Run-Control (Start/Stop Run, etc.) and moving the vertex detector in/out of the beam, all driven by the state of the accelerator, or recovering from errors in the HLT farm. The architecture, tools and mechanisms used for the implementation as well as some operational examples will be shown.
slides icon Slides MOBAUST06 [1.451 MB]  
 
MODAULT01 Thirty Meter Telescope Adaptive Optics Computing Challenges real-time, FPGA, hardware, controls 36
 
  • C. Boyer, B.L. Ellerbroek, L. Gilles, L. Wang
    TMT, Pasadena, California, USA
  • S. Browne
    The Optical Sciences Company, Anaheim, California, USA
  • G. Herriot, J.P. Veran
    HIA, Victoria, Canada
  • G.J. Hovey
    DRAO, Penticton, British Columbia, Canada
 
  The Thirty Meter Telescope (TMT) will be used with Adaptive Optics (AO) systems to allow near diffraction-limited performance in the near-infrared and achieve the main TMT science goals. Adaptive optics systems reduce the effect of the atmospheric distortions by dynamically measuring the distortions with wavefront sensors, performing wavefront reconstruction with a Real Time Controller (RTC), and then compensating for the distortions with wavefront correctors. The requirements for the RTC subsystem of the TMT first light AO system represent a significant advance over the current generation of astronomical AO control systems. With conventional approaches, memory and processing requirements would be at least two orders of magnitude greater than those of the most powerful current AO systems, so innovative wavefront reconstruction algorithms and new hardware approaches will be required. In this paper, we will first present the requirements and challenges for the RTC of the first light AO system, together with the algorithms that have been developed to reduce the memory and processing requirements, and then two possible hardware architectures based on Field Programmable Gate Arrays (FPGAs).
slides icon Slides MODAULT01 [2.666 MB]  
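
  The baseline computation an AO real-time controller performs on every frame is a large matrix-vector multiply: a precomputed reconstructor matrix applied to the wavefront-sensor slope vector, yielding actuator commands. The sketch below illustrates only that conventional baseline with invented, much smaller dimensions; it says nothing about the innovative reconstruction algorithms or FPGA architectures the paper actually discusses.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented sizes; a TMT-scale system has far more slopes and actuators.
n_slopes, n_actuators = 4000, 2000

reconstructor = rng.normal(size=(n_actuators, n_slopes))  # precomputed control matrix
slopes = rng.normal(size=n_slopes)                         # one frame of WFS measurements

commands = reconstructor @ slopes  # the per-frame matrix-vector multiply
print(commands.shape, reconstructor.nbytes / 1e6, "MB of reconstructor storage")
```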
 
MOMMU003 Aperture Meter for the Large Hadron Collider optics, alignment, GUI, collimation 70
 
  • G.J. Müller, K. Fuchsberger, S. Redaelli
    CERN, Geneva, Switzerland
 
  The control of the high intensity beams of the CERN Large Hadron Collider (LHC) is particularly challenging and requires good modeling of the machine and monitoring of various machine parameters. During operation it is crucial to ensure a minimal distance between the beam edge and the aperture of sensitive equipment, e.g. the superconducting magnets, which in all cases must be in the shadow of the collimators that protect the machine. Possible dangerous situations must be detected as soon as possible. In order to provide the operator with information about the current machine bottlenecks, an aperture meter application was developed based on the LHC online modeling toolchain. The calculation of available free aperture takes into account the best available optics and aperture model as well as the relevant beam measurements. This paper describes the design and integration of this application into the control environment and presents results from daily operation and from validation measurements.
slides icon Slides MOMMU003 [0.565 MB]  
poster icon Poster MOMMU003 [0.694 MB]  
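
  The core quantity such an aperture meter reports, the free aperture expressed in units of the local beam size, can be illustrated in a few lines of NumPy. This is a generic sketch of the standard n-sigma calculation, not the LHC online-model toolchain; the beta functions, emittance, orbit and aperture values below are invented.

```python
import numpy as np

def free_aperture_sigma(aperture_m, orbit_m, beta_m, geom_emittance_m):
    """Distance from the measured closed orbit to the mechanical aperture,
    in units of the local RMS beam size sigma = sqrt(beta * emittance)."""
    sigma = np.sqrt(np.asarray(beta_m) * geom_emittance_m)
    return (np.asarray(aperture_m) - np.abs(np.asarray(orbit_m))) / sigma

# Invented example: three locations along the ring.
n_sigma = free_aperture_sigma(
    aperture_m=[0.022, 0.018, 0.030],
    orbit_m=[0.001, -0.002, 0.0005],
    beta_m=[180.0, 320.0, 95.0],
    geom_emittance_m=5e-10,
)
print("bottleneck at location", n_sigma.argmin(), "with", n_sigma.min(), "sigma")
```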
 
MOMMU012 A Digital Base-band RF Control System controls, FPGA, diagnostics, software 82
 
  • M. Konrad, U. Bonnes, C. Burandt, R. Eichhorn, J. Enders, N. Pietralla
    TU Darmstadt, Darmstadt, Germany
 
  Funding: Supported by DFG through CRC 634.
The analog RF control system of the S-DALINAC has been replaced by a new digital system. The new hardware consists of an RF module and an FPGA board that have been developed in-house. A self-developed CPU implemented in the FPGA executes the control algorithm, which allows the algorithm to be changed without time-consuming synthesis. Another micro-controller connects the FPGA board to a standard PC server via CAN bus. This connection is used to adjust control parameters as well as to send commands from the RF control system to the cavity tuner power supplies. The PC runs Linux and an EPICS IOC. The latter is connected to the CAN bus with a device support that uses the SocketCAN network stack included in recent Linux kernels, making the IOC independent of the CAN controller hardware. A diagnostic server streams signals from the FPGAs to clients on the network. Clients used for diagnosis include a software oscilloscope as well as a software spectrum analyzer. The parameters of the controllers can be changed with Control System Studio. We will present the architecture of the RF control system as well as the functionality of its components from a control system developer's point of view.
 
slides icon Slides MOMMU012 [0.087 MB]  
poster icon Poster MOMMU012 [33.544 MB]  
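
  The SocketCAN stack mentioned above exposes CAN interfaces through the ordinary Linux socket API, which is what makes the EPICS device support independent of the CAN controller hardware. A minimal raw-frame read in Python (not the actual device support, which runs in the IOC) looks roughly like this; the interface name can0 is an assumption.

```python
import socket
import struct

# Linux SocketCAN: a raw CAN socket bound to one interface (assumed name "can0").
sock = socket.socket(socket.AF_CAN, socket.SOCK_RAW, socket.CAN_RAW)
sock.bind(("can0",))

# struct can_frame layout: 32-bit CAN id, 8-bit data length, 3 pad bytes, 8 data bytes.
CAN_FRAME = struct.Struct("=IB3x8s")

frame = sock.recv(CAN_FRAME.size)
can_id, length, data = CAN_FRAME.unpack(frame)
print(hex(can_id), data[:length].hex())
```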
 
MOPKN002 LHC Supertable database, collider, interface, luminosity 86
 
  • M. Pereira, M. Lamont, G.J. Müller, D.D. Teixeira
    CERN, Geneva, Switzerland
  • T.E. Lahey
    SLAC, Menlo Park, California, USA
  • E.S.M. McCrory
    Fermilab, Batavia, USA
 
  LHC operations generate enormous amounts of data. These data are stored in many different databases. Hence, it is difficult for operators, physicists, engineers and management to have a clear view of the overall accelerator performance. Until recently the logging database, through its desktop interface TIMBER, was the only way of retrieving information on a fill-by-fill basis. The LHC Supertable has been developed to provide a summary of key LHC performance parameters in a clear, consistent and comprehensive format. The columns in this table represent the main parameters that describe the collider's operation, such as luminosity, beam intensity, emittance, etc. The data is organized in a tabular fill-by-fill manner with different levels of detail. Particular emphasis was placed on data sharing by making the data available in various open formats. Typically the contents are calculated for periods of time that map to the accelerator's states or beam modes such as Injection, Stable Beams, etc. Data retrieval and calculation is triggered automatically after the end of each fill. The LHC Supertable project currently publishes 80 columns of data on around 100 fills.
 
MOPKN006 Algorithms and Data Structures for the EPICS Channel Archiver EPICS, hardware, software, database 94
 
  • J. Rowland, M.T. Heron, M.A. Leech, S.J. Singleton, K. Vijayan
    Diamond, Oxfordshire, United Kingdom
 
  Diamond Light Source records 3 GB of process data per day and has a 15 TB archive online with the EPICS Channel Archiver. This paper describes recent modifications to the software to improve performance and usability. The file-size limit on the R-Tree index has been removed, allowing all archived data to be searchable from one index. A decimation system works directly on compressed archives from a backup server and produces multi-rate reduced data with minimum and maximum values to support time-efficient summary reporting and range queries. The XMLRPC interface has been extended to provide binary data transfer to clients needing large amounts of raw data.
poster icon Poster MOPKN006 [0.133 MB]  
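
  The decimation described above, reduced-rate data carrying per-bin minimum and maximum values so that summary plots and range queries do not hide spikes, can be sketched generically in a few lines of NumPy. This is not the Channel Archiver code (which works directly on the compressed archive files); the bin size and sample data are invented.

```python
import numpy as np

def decimate_min_max(times, values, bin_seconds=60.0):
    """Reduce a raw (time, value) series to one (t, min, max, mean) tuple per bin,
    preserving extrema so that short spikes survive the reduction."""
    times = np.asarray(times, dtype=float)
    values = np.asarray(values, dtype=float)
    bins = ((times - times[0]) // bin_seconds).astype(int)
    reduced = []
    for b in np.unique(bins):
        sel = values[bins == b]
        reduced.append((times[0] + b * bin_seconds, sel.min(), sel.max(), sel.mean()))
    return reduced

# Invented example: 10 Hz data reduced to one row per minute.
t = np.arange(0, 600, 0.1)
v = np.sin(t / 30.0) + 0.01 * np.random.randn(t.size)
print(len(decimate_min_max(t, v)), "rows after decimation")
```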
 
MOPKN007 LHC Dipole Magnet Splice Resistance from SM18 Data Mining dipole, electronics, extraction, database 98
 
  • H. Reymond, O.O. Andreassen, C. Charrondière, G. Lehmann Miotto, A. Rijllart, D. Scannicchio
    CERN, Geneva, Switzerland
 
  The splice incident which happened during commissioning of the LHC on the 19th of September 2008 caused damage to several magnets and adjacent equipment. This raised not only the question of how it happened, but also questions about the state of all the other splices. The inter-magnet splices were studied very soon afterwards with new measurements, but the internal magnet splices were also a concern. At the Chamonix meeting in January 2009, the CERN management decided to create a working group to analyse the provoked quench data of the magnet acceptance tests and try to find indications of bad splices in the main dipoles. This resulted in a data mining project that took about one year to complete. This presentation describes how the data was stored, extracted and analysed by reusing existing LabVIEW™-based tools. We also present the difficulties encountered and the importance of combining measured data with the operators' notes in the logbook.
poster icon Poster MOPKN007 [5.013 MB]  
 
MOPKN010 Database and Interface Modifications: Change Management Without Affecting the Clients database, controls, interface, software 106
 
  • M. Peryt, R. Billen, M. Martin Marquez, Z. Zaharieva
    CERN, Geneva, Switzerland
 
  The first Oracle-based Controls Configuration Database (CCDB) was developed in 1986, making the control system of CERN's Proton Synchrotron data-driven. Since then, this mission-critical system has evolved tremendously, going through several generational changes in terms of the increasing complexity of the control system, software technologies and data models. Today, the CCDB covers the whole CERN accelerator complex and satisfies a much wider range of functional requirements. Despite its online usage, everyday operations of the machines must not be disrupted. This paper describes our approach to dealing with change while ensuring continuity. How do we manage the database schema changes? How do we take advantage of the latest web deployed application development frameworks without alienating the users? How do we minimize impact on the dependent systems connected to databases through various APIs? In this paper we will provide our answers to these questions, and to many more.
 
MOPKN011 CERN Alarms Data Management: State & Improvements laser, database, data-management, controls 110
 
  • Z. Zaharieva, M. Buttner
    CERN, Geneva, Switzerland
 
  The CERN alarms system, LASER, is a centralized service ensuring the capture, storage and notification of anomalies for the whole accelerator chain, including the technical infrastructure at CERN. The underlying database holds the pre-defined configuration data for the alarm definitions and the operators' alarm consoles, as well as the time-stamped run-time alarm events propagated through the alarms system. The article will discuss the current state of the alarms database and recent improvements that have been introduced. It will look into the data management challenges related to the alarms configuration data, which is taken from numerous sources. Specially developed ETL processes must be applied to this data in order to transform it into an appropriate format and load it into the alarms database. The recorded alarm events, together with some additional data necessary for providing event statistics to users, are transferred to the long-term alarms archive. The article will also cover the data management challenges addressed by the recently developed suite of data management interfaces, which keep the alarms configuration data coming from external sources consistent with the modifications introduced by end-users.
poster icon Poster MOPKN011 [4.790 MB]  
 
MOPKN017 From Data Storage towards Decision Making: LHC Technical Data Integration and Analysis database, monitoring, beam-losses, Windows 131
 
  • A. Marsili, E.B. Holzer, A. Nordt, M. Sapinski
    CERN, Geneva, Switzerland
 
  The monitoring of the beam conditions, equipment conditions and measurements from the beam instrumentation devices in CERN's Large Hadron Collider (LHC) produces more than 100 Gb/day of data. Such a large quantity of data is unprecedented in accelerator monitoring, and new developments are needed to access, process, combine and analyse data from different equipment. The Beam Loss Monitoring (BLM) system has been one of the most reliable pieces of equipment in the LHC during its 2010 run, issuing beam dumps when the detected losses were above the defined abort thresholds. Furthermore, the BLM system was able to detect and study unexpected losses, requiring intensive offline analysis. This article describes the techniques developed to: access the data produced (∼50,000 values/s); access relevant system layout information; and access, combine and display different machine data.
poster icon Poster MOPKN017 [0.411 MB]  
 
MOPKN020 The PSI Web Interface to the EPICS Channel Archiver interface, EPICS, controls, software 141
 
  • G. Jud, A. Lüdeke, W. Portmann
    Paul Scherrer Institut, 5232 Villigen PSI, Switzerland
 
  The EPICS channel archiver is a powerful tool that collects control system data from thousands of EPICS process variables, at rates of many hertz each, into an archive for later retrieval [1]. The channel archiver version 2 package includes a Java application for graphical data retrieval and a command-line tool for data extraction into different file formats. At the Paul Scherrer Institut we wanted to be able to retrieve the archived data through a web interface. Flexible retrieval functions were desired, as was the ability to exchange data references by e-mail. This web interface has been implemented by the PSI controls group and has now been in operation for several years. This presentation will highlight the special features of this PSI web interface to the EPICS channel archiver.
[1] http://sourceforge.net/apps/trac/epicschanarch/wiki
 
poster icon Poster MOPKN020 [0.385 MB]  
 
MOPKS010 Fast Orbit Correction for the ESRF Storage Ring FPGA, feedback, controls, diagnostics 177
 
  • J.M. Koch, F. Epaud, E. Plouviez, K.B. Scheidt
    ESRF, Grenoble, France
 
  Up to now, at the ESRF, the correction of the orbit position has been performed with two independent systems: one dealing with the slow movements and one correcting the motion in a range of up to 200 Hz, but with a limited number of fast BPMs and steerers. The latter will be removed and a single system will cover the frequency range from DC to 200 Hz using all 224 BPMs and the 96 steerers. Indeed, thanks to the procurement of Libera Brilliance units and the installation of new AC power supplies, it is now possible to access all the beam positions at a frequency of 10 kHz and to drive a small current in the steerers within a 200 Hz bandwidth. The first tests of the correction of the beam position have been performed and will be presented. The data processing will be presented as well, with a particular emphasis on the development inside the FPGA.
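
  Global orbit correction of this kind is conventionally computed from the orbit response matrix through an SVD-based pseudo-inverse. The sketch below shows that standard calculation with NumPy for an invented, much smaller machine; the real ESRF system performs the equivalent processing inside an FPGA at 10 kHz, so this is only an offline illustration of the algorithm, not the implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented sizes: 24 BPMs and 12 steerers (the real system uses 224 and 96).
n_bpm, n_steer = 24, 12
response = rng.normal(size=(n_bpm, n_steer))     # orbit response matrix [mm/A]
orbit_error = rng.normal(scale=0.1, size=n_bpm)  # measured orbit deviation [mm]

# SVD-based pseudo-inverse; rcond truncates weak singular values for robustness.
correction = -np.linalg.pinv(response, rcond=1e-2) @ orbit_error  # steerer deltas [A]

residual = orbit_error + response @ correction
print("rms orbit before/after:", np.std(orbit_error), np.std(residual))
```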
 
MOPKS023 An Overview of the Active Optics Control Strategy for the Thirty Meter Telescope controls, alignment, optics, real-time 211
 
  • M.J. Sirota, G.Z. Angeli, D.G. MacMynowski
    TMT, Pasadena, California, USA
  • G.A. Chanan
    UCI, Irvine, California, USA
  • M.M. Colavita, C. Lindensmith, C. Shelton, M. Troy
    JPL, Pasadena, California, USA
  • T.S. Mast, J. Nelson
    UCSC, Santa Cruz, USA
  • P.M. Thompson
    STI, Hawthorne, USA
 
  Funding: This work was supported by the Gordon and Betty Moore Foundation
The primary (M1), secondary (M2) and tertiary (M3) mirrors of the Thirty Meter Telescope (TMT), taken together, have over 10,000 degrees of freedom. The vast majority of these are associated with the 492 individual primary mirror segments. The individual segments are converted into the equivalent of a monolithic thirty meter primary mirror via the Alignment and Phasing System (APS) and the M1 Control System (M1CS). In this paper we first provide an introduction to the TMT. We then describe the overall optical alignment and control strategy for the TMT and follow up with additional descriptions of the M1CS and the APS. We conclude with a short description of the TMT error budget process and provide an example of error allocation and predicted performance for wind induced segment jitter.
 
poster icon Poster MOPKS023 [2.318 MB]  
 
MOPKS027 Operational Status of the Transverse Multibunch Feedback System at Diamond feedback, FPGA, damping, controls 219
 
  • I. Uzun, M.G. Abbott, M.T. Heron, A.F.D. Morgan, G. Rehm
    Diamond, Oxfordshire, United Kingdom
 
  A transverse multibunch feedback (TMBF) system is in operation at Diamond Light Source to damp coupled-bunch instabilities up to 250 MHz in both the vertical and horizontal planes. It comprises an in-house designed and built analogue front-end combined with a Libera Bunch-by-Bunch feedback processor and output stripline kickers. FPGA-based feedback electronics is used to implement several diagnostic features in addition to the basic feedback functionality. This paper reports on the current operational status of the TMBF system along with its characteristics. Also discussed are operational diagnostic functionalities including continuous measurement of the betatron tune and chromaticity.  
poster icon Poster MOPKS027 [1.899 MB]  
 
MOPKS029 The CODAC Software Distribution for the ITER Plant Systems software, controls, EPICS, database 227
 
  • F. Di Maio, L. Abadie, C.S. Kim, K. Mahajan, P. Makijarvi, D. Stepanov, N. Utzel, A. Wallander
    ITER Organization, St. Paul lez Durance, France
 
  Most of the systems that constitute the ITER plant will be built and supplied by the seven ITER domestic agencies. These plant systems will require their own Instrumentation and Control (I&C), which will be procured by the various suppliers. To improve the homogeneity of these plant system I&C, the CODAC group, which is in charge of the ITER control system, promotes standardized solutions at the project level and makes available, in support of these standards, the software for the development and testing of the plant system I&C. The CODAC Core System is built by the ITER Organization and distributed to all ITER partners. It includes the ITER standard operating system, RHEL, and the ITER standard control framework, EPICS, as well as some ITER-specific tools, mostly for configuration management, and ITER-specific software modules, such as drivers for standard I/O boards. A process for distribution and support has been in place since the first release in February 2010 and has been continuously improved to support the development and distribution of the subsequent versions.
poster icon Poster MOPKS029 [1.209 MB]  
 
MOPMN004 An Operational Event Announcer for the LHC Control Centre Using Speech Synthesis controls, timing, software, interface 242
 
  • S.T. Page, R. Alemany-Fernandez
    CERN, Geneva, Switzerland
 
  The LHC island of the CERN Control Centre is a busy working environment with many status displays and running software applications. An audible event announcer was developed in order to provide a simple and efficient method to notify the operations team of events occurring within the many subsystems of the accelerator. The LHC Announcer uses speech synthesis to report messages based upon data received from multiple sources. General accelerator information such as injections, beam energies and beam dumps are derived from data received from the LHC Timing System. Additionally, a software interface is provided that allows other surveillance processes to send messages to the Announcer using the standard control system middleware. Events are divided into categories which the user can enable or disable depending upon their interest. Use of the LHC Announcer is not limited to the Control Centre and is intended to be available to a wide audience, both inside and outside CERN. To accommodate this, it was designed to require no special software beyond a standard web browser. This paper describes the design of the LHC Announcer and how it is integrated into the LHC operational environment.  
poster icon Poster MOPMN004 [1.850 MB]  
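
  As a generic illustration of event-driven speech announcements with user-selectable categories (the real LHC Announcer is browser-based and driven by the LHC timing system and the standard control system middleware), a minimal sketch using the pyttsx3 text-to-speech package might look as follows. The library choice, categories and messages are assumptions, not something the paper describes.

```python
import pyttsx3

engine = pyttsx3.init()

# Messages are grouped into categories that the listener can enable or disable.
enabled_categories = {"beam", "injection"}

def announce(category: str, message: str) -> None:
    """Speak a message only if its category is currently enabled."""
    if category in enabled_categories:
        engine.say(message)
        engine.runAndWait()

announce("injection", "Injection of beam 1 complete")
announce("vacuum", "Pressure rise in sector 4")  # silent: category disabled
```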
 
MOPMN013 Operational Status Display and Automation Tools for FERMI@Elettra TANGO, controls, status, electron 263
 
  • C. Scafuri
    ELETTRA, Basovizza, Italy
 
  Funding: The work was supported in part by the Italian Ministry of University and Research under grants FIRB-RBAP045JF2 and FIRB-RBAP06AWK3
Detecting and locating faults and malfunctions of an accelerator is a difficult and time-consuming task. The situation is even more difficult during the commissioning phase of a new accelerator, when physicists and operators are still gaining confidence with the plant. On the other hand, a fault-free machine does not imply that it is ready to run: the definition of "readiness" depends on the expected behavior of the plant. In the case of FERMI@Elettra, in which the electron beam goes to different branches of the machine depending on the programmed activity, the configuration of the plant determines the rules for understanding whether the activity can be carried out or not. In order to assist with this task and display the global status of the plant, a tool known as the "matrix" has been developed. It is composed of a graphical front-end, which displays a synthetic view of the plant status grouped by subsystem and location along the accelerator, and a back-end made of Tango servers which read the status of the machine devices via the control system and evaluate the rules. The back-end also includes a set of objects known as "sequencers" that perform complex actions automatically for actively switching from one accelerator configuration to another.
 
poster icon Poster MOPMN013 [0.461 MB]  
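
  A back-end of this kind typically aggregates device states read through the control system and evaluates configuration-dependent readiness rules. The sketch below, using the PyTango client API, is only a generic illustration of that idea; the device names and the rule are invented and are not those of the FERMI "matrix".

```python
import tango  # PyTango client API

# Invented device names; a real deployment would read them from configuration.
DEVICES = ["fel/magnets/ps01", "fel/rf/plc01", "fel/diag/bpm01"]

def subsystem_ready(device_names) -> bool:
    """A subsystem counts as 'ready' only if every member device reports ON."""
    for name in device_names:
        proxy = tango.DeviceProxy(name)
        if proxy.state() != tango.DevState.ON:
            return False
    return True

print("linac branch ready:", subsystem_ready(DEVICES))
```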
 
MOPMN015 Multi Channel Applications for Control System Studio (CSS) controls, EPICS, database, storage-ring 271
 
  • K. Shroff, G. Carcassi
    BNL, Upton, Long Island, New York, USA
  • R. Lange
    HZB, Berlin, Germany
 
  Funding: Work supported by U.S. Department of Energy
This talk will present a set of applications for CSS built on top of the services provided by ChannelFinder, a directory service for control systems, and PVManager, a client library for data manipulation and aggregation. The ChannelFinder Viewer allows querying of the ChannelFinder service and sorting and tagging of the results. The Multi Channel Viewer allows the creation of plots from the live data of a group of channels.
 
poster icon Poster MOPMN015 [0.297 MB]  
 
MOPMN023 Preliminary Design and Integration of EPICS Operation Interface for the Taiwan Photon Source controls, EPICS, interface, GUI 292
 
  • Y.-S. Cheng, J. Chen, P.C. Chiu, K.T. Hsu, C.H. Kuo, C.Y. Liao, C.Y. Wu
    NSRRC, Hsinchu, Taiwan
 
  The TPS (Taiwan Photon Source) is a latest-generation 3 GeV synchrotron light source which has been under construction since 2010. The EPICS framework is adopted as the control system infrastructure for the TPS. The EPICS IOCs (Input Output Controllers) and various database records are gradually being implemented to control and monitor each subsystem of the TPS. The subsystems include timing, power supplies, motion controllers, miscellaneous Ethernet-compliant devices, etc. Through EPICS PV (Process Variable) channel access, I/O data accessed remotely via the Ethernet interface can be observed with graphical toolkits such as EDM (Extensible Display Manager) and MATLAB. The operation interfaces mainly include functions for setting, reading, save and restore, etc. Integration of the operation interfaces will depend upon the properties of each subsystem. In addition, a centralized management approach is used to serve every client from file servers in order to maintain consistent versions of the related EPICS files. The efforts will be summarized in this report.
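
  The setting, reading, save and restore functions mentioned above all reduce to Channel Access reads and writes of EPICS PVs. A minimal, generic save/restore sketch using the pyepics client library (an assumption; the TPS interfaces described here are built with EDM and MATLAB) could look like this, with invented PV names.

```python
import json
from epics import caget, caput  # pyepics Channel Access client

PVS = ["TPS:PS-QF1:SetCurrent", "TPS:PS-QD1:SetCurrent"]  # invented PV names

def save_settings(path: str) -> None:
    """Read the current value of each PV and store it in a JSON snapshot."""
    snapshot = {pv: caget(pv) for pv in PVS}
    with open(path, "w") as f:
        json.dump(snapshot, f, indent=2)

def restore_settings(path: str) -> None:
    """Write a previously saved snapshot back to the machine."""
    with open(path) as f:
        snapshot = json.load(f)
    for pv, value in snapshot.items():
        caput(pv, value, wait=True)

save_settings("quad_settings.json")
```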
 
MOPMN025 New SPring-8 Control Room: Towards Unified Operation with SACLA and SPring-8 II Era. controls, status, network, laser 296
 
  • A. Yamashita, R. Fujihara, N. Hosoda, Y. Ishizawa, H. Kimura, T. Masuda, C. Saji, T. Sugimoto, S. Suzuki, M. Takao, R. Tanaka
    JASRI/SPring-8, Hyogo-ken, Japan
  • T. Fukui, Y. Otake
    RIKEN/SPring-8, Hyogo, Japan
 
  We have renovated the SPring-8 control room. This is the first major renovation since its inauguration in 1997. In 2011 the construction of SACLA (SPring-8 Angstrom Compact free electron LAser) was completed, and it is planned to be controlled from the new control room for close cooperative operation with the SPring-8 storage ring. It is also expected that the SPring-8 II project will require more workstations than the current control room provides. We have extended the control room area for these foreseen projects. In this renovation we have employed new technology which did not exist 14 years ago, such as large LCDs and silent liquid-cooled workstations, for a comfortable operation environment. We have incorporated into the design many ideas obtained during 14 years of operating experience. Operation in the new control room began in April 2011 after a short construction period.
 
MOPMN027 The LHC Sequencer database, GUI, controls, injection 300
 
  • R. Alemany-Fernandez, V. Baggiolini, R. Gorbonosov, D. Khasbulatov, M. Lamont, P. Le Roux, C. Roderick
    CERN, Geneva, Switzerland
 
  The Large Hadron Collider (LHC) at CERN is a highly complex system made of many different sub-systems whose operation implies the execution of many tasks with stringent constraints on the order and duration of the execution. To be able to operate such a system in the most efficient and reliable way, the operators in the CERN control room use a high-level control system: the LHC Sequencer. The LHC Sequencer system is composed of several components, including an Oracle database where operational sequences are configured, a core server that orchestrates the execution of the sequences, and two graphical user interfaces: one for sequence editing and another for sequence execution. This paper describes the architecture of the LHC Sequencer system, and how the sequences are prepared and used for LHC operation.
poster icon Poster MOPMN027 [2.163 MB]  
 
MOPMS005 The Upgraded Corrector Control Subsystem for the Nuclotron Main Magnetic Field controls, power-supply, status, software 326
 
  • V. Andreev, V. Isadov, A. Kirichenko, S. Romanov, G.V. Trubnikov, V. Volkov
    JINR, Dubna, Moscow Region, Russia
 
  This report discusses the control subsystem for the 40 main magnetic field correctors, which is part of the superconducting synchrotron Nuclotron control system. The subsystem is used in static and dynamic (corrector current depending on the magnetic field value) modes. Development of the subsystem is carried out within the framework of the Nuclotron-NICA project. The principles of digital (PSMBus/RS-485 protocol) and analog control of the correctors' power supplies, current monitoring, and remote control of the subsystem via an IP network are also presented. The first results of the subsystem commissioning are given.
poster icon Poster MOPMS005 [1.395 MB]  
 
MOPMS006 SARAF Beam Lines Control Systems Design controls, vacuum, status, hardware 329
 
  • E. Reinfeld, I. Eliyahu, I.G. Gertz, A. Grin, A. Kreisler, A. Perry, L. Weissman
    Soreq NRC, Yavne, Israel
 
  The first beam lines addition to the SARAF facility was completed in phase I and introduced new hardware to be controlled. This article will describe the beam lines vacuum, magnets and diagnostics control systems and the design methodology used to achieve a reliable and reusable control system. The vacuum control systems of the accelerator and beam lines have been integrated into one vacuum control system which controls all the vacuum control hardware for both the accelerator and beam lines. The new system fixes legacy issues and is designed for modularity and simple configuration. Several types of magnetic lenses have been introduced to the new beam line to control the beam direction and optimally focus it on the target. The control system was designed to be modular so that magnets can be quickly and simply inserted or removed. The diagnostics systems control the diagnostics devices used in the beam lines including data acquisition and measurement. Some of the older control systems were improved and redesigned using modern control hardware and software. The above systems were successfully integrated in the accelerator and are used during beam activation.  
poster icon Poster MOPMS006 [2.537 MB]  
 
MOPMS014 GSI Operation Software: Migration from OpenVMS to Linux software, Linux, controls, linac 351
 
  • R. Huhmann, G. Fröhlich, S. Jülicher, V.RW. Schaa
    GSI, Darmstadt, Germany
 
  The current operation software at GSI, controlling the linac, beam transfer lines, synchrotron and storage ring, has been developed over a period of more than two decades using OpenVMS, now on Alpha workstations. The GSI accelerator facilities will serve as an injector chain for the new FAIR accelerator complex, for which a control system is currently being developed. To enable reuse and integration of parts of the distributed GSI software system, in particular the linac operation software, within the FAIR control system, the corresponding software components must be migrated to Linux. Interoperability with FAIR controls applications is achieved by adding a generic middleware interface accessible from Java applications. For porting applications to Linux, a set of libraries and tools has been developed covering the necessary OpenVMS system functionality. Currently, the core applications and services have already been ported or rewritten and functionally tested, but are not yet in operational use. This paper presents the current status of the project and concepts for putting the migrated software into operation.
 
MOPMS018 New Timing System Development at SNS timing, hardware, diagnostics, network 358
 
  • D. Curry
    ORNL RAD, Oak Ridge, Tennessee, USA
  • X.H. Chen, R. Dickson, S.M. Hartman, D.H. Thompson
    ORNL, Oak Ridge, Tennessee, USA
  • J. Dedič
    Cosylab, Ljubljana, Slovenia
 
  The timing system at the Spallation Neutron Source (SNS) has recently been updated to support the long range production and availability goals of the facility. A redesign of the hardware and software provided us with an opportunity to significantly reduce the complexity of the system as a whole and consolidate the functionality of multiple cards into single units eliminating almost half of our operating components in the field. It also presented a prime opportunity to integrate new system level diagnostics, previously unavailable, for experts and operations. These new tools provide us with a clear image of the health of our distribution links and enhance our ability to quickly identify and isolate errors.  
 
MOPMS020 High Intensity Proton Accelerator Controls Network Upgrade network, controls, monitoring, proton 361
 
  • R.A. Krempaska, A.G. Bertrand, F. Lendzian, H. Lutz
    Paul Scherrer Institut, 5232 Villigen PSI, Switzerland
 
  The High Intensity Proton Accelerator (HIPA) control system network is spread over about six buildings and has grown historically in an unorganized way. It consisted of about 25 network switches, 150 nodes and 20 operator consoles. The miscellaneous hardware infrastructure and the lack of documentation and of an overview of the components could no longer guarantee the reliability of the control system and facility operation. Therefore, a new network was needed, based on a modern network topology and PSI-standard hardware, with monitoring, detailed documentation and a component overview. We present the process by which we successfully achieved this goal and the advantages of a clean and well-documented network infrastructure.
poster icon Poster MOPMS020 [0.761 MB]  
 
MOPMS026 J-PARC Control toward Future Reliable Operation controls, EPICS, linac, GUI 378
 
  • N. Kamikubota, N. Yamamoto
    J-PARC, KEK & JAEA, Ibaraki-ken, Japan
  • S.F. Fukuta, D. Takahashi
    MELCO SC, Tsukuba, Japan
  • T. Iitsuka, S. Motohashi, M. Takagi, S.Y. Yoshida
    Kanto Information Service (KIS), Accelerator Group, Ibaraki, Japan
  • T. Ishiyama
    KEK/JAEA, Ibaraki-Ken, Japan
  • Y. Ito, H. Sakaki
    JAEA, Ibaraki-ken, Japan
  • Y. Kato, M. Kawase, N. Kikuzawa, H. Sako, K.C. Sato, H. Takahashi, H. Yoshikawa
    JAEA/J-PARC, Tokai-Mura, Naka-Gun, Ibaraki-Ken, Japan
  • T. Katoh, H. Nakagawa, J.-I. Odagiri, T. Suzuki, S. Yamada
    KEK, Ibaraki, Japan
  • H. Nemoto
    ACMOS INC., Tokai-mura, Ibaraki, Japan
 
  The J-PARC accelerator complex comprises a linac, a 3-GeV RCS (Rapid Cycling Synchrotron) and a 30-GeV MR (Main Ring). J-PARC is a joint project between JAEA and KEK. Two control systems, one for the linac and RCS and another for the MR, were developed by the two institutes. Both control systems use the EPICS toolkit, so inter-operation between the two systems is possible. Since the first beam in November 2006, beam commissioning and operation have been successful. However, operational experience shows that having two control systems often distresses the operators: for example, different GUI look-and-feels, separate alarm screens, independent archive systems, and so on. Considering the demands of further power upgrades and longer beam delivery, we need something new which is easy for operators to understand. It is essential to improve the reliability of operation. We, the two control groups, have started to discuss future directions for our control systems. Ideas to develop common GUI screens for status and alarms, and to develop interfaces to connect the archive systems to each other, are being discussed. Progress will be reported.
 
MOPMS028 CSNS Timing System Prototype timing, EPICS, controls, interface 386
 
  • G.L. Xu, G. Lei, L. Wang, Y.L. Zhang, P. Zhu
    IHEP Beijing, Beijing, People's Republic of China
 
  The timing system is an important part of CSNS. The timing system prototype development is based on the Event System 230 series. Two debug platforms are used: one runs EPICS base 3.14.8 with an MVME5100 IOC under VxWorks 5.5; the other runs EPICS base 3.13 under VxWorks 5.4. The prototype work included driver debugging and tests of new EVG/EVR-230 features, such as CML output signals with a fine step size of the signal-cycle delay, the use of the interlock modules, and the use of the CML and TTL outputs to achieve interconnection and data transmission functions. Finally, the database was programmed with the new features in order to build the OPI.
poster icon Poster MOPMS028 [0.434 MB]  
 
MOPMS030 Improvement of the Oracle Setup and Database Design at the Heidelberg Ion Therapy Center database, ion, controls, hardware 393
 
  • K. Höppner, Th. Haberer, J.M. Mosthaf, A. Peters
    HIT, Heidelberg, Germany
  • G. Fröhlich, S. Jülicher, V.RW. Schaa, W. Schiebel, S. Steinmetz
    GSI, Darmstadt, Germany
  • M. Thomas, A. Welde
    Eckelmann AG, Wiesbaden, Germany
 
  The HIT (Heidelberg Ion Therapy) center is an accelerator facility for cancer therapy using both carbon ions and protons, located at the university hospital in Heidelberg. It provides three therapy treatment rooms: two with fixed beam exit (both in clinical use), and a unique gantry with a rotating beam head, currently under commissioning. The backbone of the proprietary accelerator control system consists of an Oracle database running on a Windows server, storing and delivering data of beam cycles, error logging, measured values, and the device parameters and beam settings for about 100,000 combinations of energy, beam size and particle number used in treatment plans. Since going operational, we have found some performance problems with the current database setup. Thus, we started an analysis in cooperation with the industrial supplier of the control system (Eckelmann AG) and the GSI Helmholtzzentrum für Schwerionenforschung. It focused on the following topics: hardware resources of the DB server, configuration of the Oracle instance, and a review of the database design that underwent several changes since its original design. The analysis revealed issues in all of these areas. The outdated server will be replaced by a state-of-the-art machine soon. We will present improvements to the Oracle configuration, the optimization of SQL statements, and the performance tuning of the database design by adding new indexes, which proved directly visible in accelerator operation, while data integrity was improved by additional foreign key constraints.
poster icon Poster MOPMS030 [2.014 MB]  
 
MOPMS031 Did We Get What We Aimed for 10 Years Ago? detector, controls, experiment, hardware 397
 
  • P.Ch. Chochula, A. Augustinus, L.S. Jirdén, A.N. Kurepin, M. Lechman, P. Rosinský
    CERN, Geneva, Switzerland
  • G. De Cataldo
    INFN-Bari, Bari, Italy
  • A. Moreno
    Universidad Politécnica de Madrid, E.T.S.I Industriales, Madrid, Spain
  • O. Pinazza
    INFN-Bologna, Bologna, Italy
 
  The ALICE Detector Control System (DCS) is in charge of control and operation of one of the large high energy physics experiments at CERN in Geneva. The DCS design, which started in 2000, was partly inspired by the control systems of the previous generation of HEP experiments at the LEP accelerator at CERN. However, the scale of the LHC experiments, the use of modern, "intelligent" hardware and the harsh operational environment led to an innovative system design. The overall architecture has been largely based on commercial products such as the PVSS SCADA system and OPC servers, extended by frameworks. Windows was chosen as the operating system platform for the core systems and Linux for the front-end devices. The concept of finite state machines has been deeply integrated into the system design. Finally, the design principles have been optimized and adapted to the expected operational needs. The ALICE DCS was designed, prototyped and developed at a time when no experience with systems of similar scale and complexity existed. At the time of its implementation the detector hardware was not yet available and tests were performed only with partial detector installations. In this paper we analyse how well the original requirements and expectations set ten years ago match the real experiment needs after two years of operation. We provide an overview of system performance, reliability and scalability. Based on this experience we assess the need for future system enhancements to take place during the LHC technical stop in 2013.
poster icon Poster MOPMS031 [5.534 MB]  
 
MOPMS032 Re-engineering of the SPring-8 Radiation Monitor Data Acquisition System radiation, data-acquisition, controls, monitoring 401
 
  • T. Masuda, M. Ishii, K. Kawata, T. Matsushita, C. Saji
    JASRI/SPring-8, Hyogo-ken, Japan
 
  We have re-engineered the data acquisition system for the SPring-8 radiation monitors. Around the site, 81 radiation monitors are deployed. Seventeen of them are utilized for the radiation safety interlock system for the accelerators. The old data-acquisition system consisted of dedicated NIM-like modules linked with the radiation monitors, eleven embedded computers for data acquisition from the modules and three programmable logic controllers (PLCs) for integrated dose surveillance. The embedded computers periodically collected the radiation data through GPIB interfaces with the modules. The dose-surveillance PLCs read analog outputs proportional to the radiation rate from the modules. The modules and the dose-surveillance PLCs were also interfaced with the radiation safety interlock system. These components of the old system were dedicated, black-boxed and complicated to operate. In addition, the GPIB interface was a legacy technology and not reliable enough for such an important system. We therefore decided to replace the old system with a new one based on PLCs and FL-net, which are widely used technologies. We newly deployed twelve PLCs as substitutes for all the old components. Another PLC with two graphic panels is installed near the central control room for centralized operations and watches over all the monitors. All the new PLCs and a VME computer for data acquisition are connected through FL-net. In this paper, we describe the new system and the methodology of the replacement within the short interval between accelerator operation periods.
poster icon Poster MOPMS032 [1.761 MB]  
 
MOPMS036 Upgrade of the Nuclotron Extracted Beam Diagnostic Subsystem. controls, hardware, software, high-voltage 415
 
  • E.V. Gorbachev, N.I. Lebedev, N.V. Pilyar, S. Romanov, T.V. Rukoyatkina, V. Volkov
    JINR, Dubna, Moscow Region, Russia
 
  The subsystem is intended for the measurement of the Nuclotron extracted beam parameters. Multiwire proportional chambers are used for transverse beam profile measurements at four points of the beam transfer line. Gas amplification values are tuned by adjusting the high-voltage power supplies. The extracted beam intensity is measured by means of an ionization chamber, a variable-gain current amplifier (DDPCA-300) and a voltage-to-frequency converter. The data is processed by an industrial PC with National Instruments DAQ modules. A client-server distributed application written in the LabVIEW environment allows operators to control the hardware and obtain measurement results over a TCP/IP network.
poster icon Poster MOPMS036 [1.753 MB]  
 
MOPMU007 ISHN Ion Source Control System Overview controls, EPICS, ion, ion-source 436
 
  • M. Eguiraun, I. Arredondo, J. Feuchtwanger, G. Harper, M. del Campo
    ESS-Bilbao, Zamudio, Spain
  • J. Jugo
    University of the Basque Country, Faculty of Science and Technology, Bilbao, Spain
  • S. Varnasseri
    ESS Bilbao, LEIOA, Spain
 
  Funding: The present work is supported by the Basque Government and Spanish Ministry of Science and Innovation.
The ISHN project consists of a Penning ion source which will deliver up to 65 mA of H− beam pulsed at 50 Hz, together with a diagnostics vessel for beam testing purposes. The present work analyzes the control system of this research facility. The main devices of ISHN are the power supplies for high-density plasma generation and beam extraction, the H2 supply and the cesium heating system, plus refrigeration, vacuum and monitoring devices. The control system, implemented with LabVIEW, is based on PXI systems from National Instruments, using two PXI chassis connected through a dedicated fiber-optic link between the HV platform and ground. Source operation is managed by a real-time processor at ground, while additional tasks are performed by means of an FPGA located on the HV platform. The real-time system manages the control loop of the heaters, the pulsed H2 supply for a stable pressure in the plasma chamber, data acquisition from several diagnostics and sensors, and the communication with the control room. The FPGA generates the triggers for the different power supplies and the H2 flow, and performs some data acquisition at high voltage. A PLC is in charge of the vacuum control (two double-stage pumps and two turbo pumps) and is completely independent of the source operation in order to avoid dangerous failures. A dedicated safety PLC is installed to handle personnel safety issues. The diagnostics currently in operation are an ACCT, a DCCT, a Faraday cup and a pepperpot. In addition, a MySQL database stores all operation parameters while the source is running. The aim is to test and to train in accelerator technologies for future developments.
 
poster icon Poster MOPMU007 [1.382 MB]  
 
MOPMU008 Solaris Project Status and Challenges controls, network, TANGO, linac 439
 
  • P.P. Goryl, C.J. Bocchetta, K. Królas, M. Młynarczyk, R. Nietubyć, M.J. Stankiewicz, P.S. Tracz, Ł. Walczak, A.I. Wawrzyniak
    Solaris, Krakow, Poland
  • K. Larsson, D.P. Spruce
    MAX-lab, Lund, Sweden
 
  Funding: Work supported by the European Regional Development Fund within the frame of the Innovative Economy Operational Program: POIG.02.01.00-12-213/09
The Polish synchrotron radiation facility, Solaris, is being built in Krakow. The project is strongly linked to the MAX-IV project and the 1.5 GeV storage ring. An overview will be given of project activities and of the control system, outlining the similarities and differences between the two machines.
 
poster icon Poster MOPMU008 [11.197 MB]  
 
MOPMU009 The Diamond Control System: Five Years of Operations controls, EPICS, interface, photon 442
 
  • M.T. Heron
    Diamond, Oxfordshire, United Kingdom
 
  Commissioning of the Diamond Light Source accelerators began in 2005, with routine operation of the storage ring commencing in 2006 and photon beamline operation in January 2007. Since then the Diamond control system has provided a single interface and abstraction to (nearly) all the equipment required to operate the accelerators and beamlines. It now supports the three accelerators and a suite of twenty photon beamlines and experiment stations. This paper presents an analysis of the operation of the control system and further considers the developments that have taken place in the light of operational experience over this period.  
 
MOPMU018 Update On The Central Control System of TRIUMF's 500 MeV Cyclotron controls, cyclotron, software, hardware 469
 
  • M. Mouat, E. Klassen, K.S. Lee, J.J. Pon, P.J. Yogendran
    TRIUMF, Canada's National Laboratory for Particle and Nuclear Physics, Vancouver, Canada
 
  The Central Control System of TRIUMF's 500 MeV cyclotron was initially commissioned in the early 1970s. In 1987 a four year project to upgrade the control system was planned and commenced. By 1997 this upgrade was complete and the new system was operating with increased reliability, functionality and maintainability. Since 1997 an evolution of incremental change has existed. Functionality, reliability and maintainability have continued to improve. This paper provides an update on the present control system situation (2011) and possible future directions.  
poster icon Poster MOPMU018 [4.613 MB]  
 
MOPMU021 Control System for Magnet Power Supplies for Novosibirsk Free Electron Laser controls, power-supply, FEL, software 480
 
  • S.S. Serednyakov, B.A. Dovzhenko, A.A. Galt, V.R. Kozak, E.A. Kuper, L.E. Medvedev, A.S. Medvedko, Y.M. Velikanov, V.F. Veremeenko, N. Vinokurov
    BINP SB RAS, Novosibirsk, Russia
 
  The control system for the magnetic system of the free electron laser (FEL) is described. The characteristics and structure of the power supply system are presented. The power supply control system based on embedded intelligent controllers with the CAN-BUS interface is considered in detail. The control software structure and capabilities are described. Besides, software tools for power supply diagnostics are described.  
poster icon Poster MOPMU021 [0.291 MB]  
 
MOPMU024 Status of ALMA Software software, controls, monitoring, framework 487
 
  • T.C. Shen, J.P.A. Ibsen, R.A. Olguin, R. Soto
    ALMA, Joint ALMA Observatory, Santiago, Chile
 
  The Atacama Large Millimeter/submillimeter Array (ALMA) will be a unique research instrument composed of at least 66 reconfigurable high-precision antennas, located on the Chajnantor plain in the Chilean Andes at an elevation of 5000 m. Each antenna contains instruments capable of receiving radio signals from 31.3 GHz up to 950 GHz. These signals are correlated inside a Correlator and the spectral data are finally saved into the Archive system together with the observation metadata. This paper describes the progress in the deployment of the ALMA software, with emphasis on the control software, which is built on top of the ALMA Common Software (ACS), a CORBA-based middleware framework. In order to support and maintain the installed software, it is essential to have a mechanism to align and distribute the same version of software packages across all systems. This is achieved rigorously with weekly regression tests and strict configuration control. A build farm to provide continuous integration and testing in simulation has been established as well. Given the large number of antennas, it is imperative to also have a monitoring system to allow trend analysis of each component in order to trigger preventive maintenance activities. A challenge for which we are preparing this year consists of testing the whole ALMA software by performing complete end-to-end operation, from proposal submission to data distribution to the ALMA Regional Centers. The experience gained during deployment, testing and operation support will be presented.
poster icon Poster MOPMU024 [0.471 MB]  
 
MOPMU030 Control System for Linear Induction Accelerator LIA-2: the Structure and Hardware controls, hardware, induction, high-voltage 502
 
  • G.A. Fatkin, P.A. Bak, A.M. Batrakov, P.V. Logachev, A. Panov, A.V. Pavlenko, V.Ya. Sazansky
    BINP SB RAS, Novosibirsk, Russia
 
  A high-power linear induction accelerator (LIA) for flash radiography is being commissioned at the Budker Institute of Nuclear Physics (BINP) in Novosibirsk. It is a facility producing a pulsed electron beam with an energy of 2 MeV, a current of 1 kA and a spot size of less than 2 mm. Beam quality and reliability of the facility are required for radiography experiments. The features and structure of the distributed control system ensuring these demands are discussed. The control system hardware, based on the CompactPCI and PMC standards, is embedded directly into the high-power pulsed generators. CAN bus and Ethernet are used as interconnection protocols. Parameters and essential details of the measuring equipment and control electronics produced at BINP, as well as of the available COTS components, are presented. The first results of the control system commissioning, its reliability and hardware robustness are discussed.
poster icon Poster MOPMU030 [43.133 MB]  
 
MOPMU035 Shape Controller Upgrades for the JET ITER-like Wall plasma, controls, real-time, experiment 514
 
  • A. Neto, D. Alves, I.S. Carvalho
    IPFN, Lisbon, Portugal
  • G. De Tommasi, F. Maviglia
    CREATE, Napoli, Italy
  • R.C. Felton, P. McCullen
    EFDA-JET, Abingdon, Oxon, United Kingdom
  • P.J. Lomas, F. G. Rimini, A.V. Stephen, K-D. Zastrow
    CCFE, Culham, Abingdon, Oxon, United Kingdom
  • R. Vitelli
    Università di Roma II Tor Vergata, Roma, Italy
 
  Funding: This work was supported by the European Communities under the contract of Association between EURATOM/IST and was carried out within the framework of the European Fusion Development Agreement.
The upgrade of JET to a new all-metal wall will pose a set of new challenges regarding machine operation and protection. One of the key problems is that the present way of terminating a pulse, upon the detection of a problem, is limited to a predefined set of global responses, tailored to maximise the likelihood of a safe plasma landing. With the new wall, these might conflict with the requirement of avoiding localised heat fluxes in the wall components. As a consequence, the new system will be capable of dynamically adapting its response behaviour, according to the experimental conditions at the time of the stop request and during the termination itself. Also in the context of the new ITER-like wall, two further upgrades were designed to be implemented in the shape controller architecture. The first will allow safer operation of the machine and consists of a power-supply current limit avoidance scheme, which provides a trade-off between the desired plasma shape and the current distribution between the relevant actuators. The second is aimed at an optimised operation of the machine, enabling an earlier formation of a special magnetic configuration where the last plasma closed flux surface is not defined by a physical limiter. The upgraded shape controller system, besides providing the new functionality, is expected to continue to provide the first line of defence against erroneous plasma position and current requests. This paper presents the required architectural changes to the JET plasma shape controller system.
 
poster icon Poster MOPMU035 [2.518 MB]  
 
TUAAULT03 BLED: A Top-down Approach to Accelerator Control System Design database, controls, lattice, EPICS 537
 
  • J. Bobnar, K. Žagar
    COBIK, Solkan, Slovenia
 
  In many existing controls projects the central database/inventory was introduced late in the project, usually to support installation or maintenance activities. Construction of this database was thus done in a bottom-up fashion by reverse engineering the installation. However, there are several benefits if the central database is introduced early in the machine design: the system as a whole can be simulated without having all the IOCs in place, the database can be used as an input to the installation/commissioning plan, and it can act as an enforcer of certain conventions and quality processes. Based on our experience with control systems, we have designed a central database, BLED (Best and Leanest Ever Database), which is used for storage of all machine configuration and parameters as well as control system configuration, inventory and cabling. The first implementation of BLED supports EPICS, meaning it is capable of storing and generating EPICS templates and substitution files, as well as archive, alarm and other configurations. With the goal of providing the functionality of several existing central databases (IRMIS, SNS db, DBSF, etc.), a lot of effort has been put into designing the database so that it can handle extremely large set-ups consisting of millions of control system points. Furthermore, BLED also stores the lattice data, thus providing additional information (e.g. survey data) required by different engineering groups. The lattice import/export tools support, among others, the MAD and TraceWin formats, which are widely used in the machine design community.
slides icon Slides TUAAULT03 [4.660 MB]  
 
TUCAUST02 SARAF Control System Rebuild controls, network, software, proton 567
 
  • E. Reinfeld, I. Eliyahu, I.G. Gertz, I. Mardor
    Soreq NRC, Yavne, Israel
 
  The Soreq Applied Research Accelerator Facility (SARAF) is a proton/deuteron RF superconducting linear accelerator, which was commissioned at Soreq NRC. SARAF will be a multi-user facility, whose main activities will be neutron physics and applications, radio-pharmaceuticals development and production, and basic nuclear physics research. The SARAF Accelerator Control System (ACS) was delivered while still in its development phase. Various issues limit our capability to use it as a basis for future phases of accelerator operation and need to be addressed. Recently two projects have been launched in order to streamline the system and prepare it for the future development of the accelerator. This article describes the plans and goals of these projects, the preparations undertaken by the SARAF team, the design principles on which the control methodology will be based, and the architecture which is planned to be implemented. The rebuilding process will take place in two consecutive projects. The first will revamp the network architecture and the second will involve the actual rebuilding of the control system applications, features and procedures.  
slides icon Slides TUCAUST02 [1.733 MB]  
 
TUCAUST05 New Development of EPICS-based Data Acquisition System for Millimeter-wave Interferometer in KSTAR Tokamak diagnostics, plasma, data-acquisition, EPICS 577
 
  • T.G. Lee, Y.U. Nam, M.K. Park
    NFRI, Daejon, Republic of Korea
 
  After the achievement of first plasma in 2008, the Korea Superconducting Tokamak Advanced Research (KSTAR) device will carry out its 4th campaign in 2011. During the campaigns, many diagnostic devices have been installed to measure the various plasma properties in the KSTAR tokamak. Since the first campaign, a data acquisition system for the Millimeter-wave interferometer (MMWI) has been operated to measure the plasma electron density. The DAQ system was initially developed for three different diagnostics with similar channel characteristics, MMWI, H-alpha and the ECE radiometer, using a VME-form-factor crate housing three digitizers on a Linux platform. Although this configuration had an advantage in hardware utilization, it imposed some limitations in operation: the data acquired from the other diagnostics increased unnecessarily when one of them operated at a higher frequency, and faults in one digitizer led to failure of data acquisition for the other diagnostics. In order to overcome these weak points, a new MMWI DAQ system is under development with a PXI form factor on a Linux platform, and its main control application is going to be developed within the EPICS framework like the other control systems installed in KSTAR. It also includes an MDSplus interface for the pulse-based archiving of experimental data. The main advantages of the new MMWI DAQ system, besides solving the described problems, are the capability to calculate the plasma electron density during a plasma shot and to display it at run-time. In this way the data can be provided to users immediately after archiving in the MDSplus database.  
slides icon Slides TUCAUST05 [1.724 MB]  
 
TUCAUST06 Event-Synchronized Data Acquisition System of 5 Giga-bps Data Rate for User Experiment at the XFEL Facility, SACLA experiment, detector, controls, network 581
 
  • M. Yamaga, A. Amselem, T. Hirono, Y. Joti, A. Kiyomichi, T. Ohata, T. Sugimoto, R. Tanaka
    JASRI/SPring-8, Hyogo-ken, Japan
  • T. Hatsui
    RIKEN/SPring-8, Hyogo, Japan
 
  A data acquisition (DAQ), control, and storage system has been developed for user experiments at the XFEL facility, SACLA, on the SPring-8 site. The anticipated experiments demand shot-by-shot DAQ in synchronization with the beam operation cycle in order to correlate the beam characteristics with the recorded data, such as X-ray diffraction patterns. The experiments produce waveform or image data whose size ranges from 8 up to 48 MByte for each X-ray pulse at 60 Hz. To meet these requirements, we have constructed a DAQ system that operates in synchronization with the 60 Hz beam operation cycle. The system is designed to handle a data rate of up to 5 Gbps after compression, and consists of trigger distributor/counters, data-filling computers, parallel-writing high-speed data storage, and a relational database. The data rate is reduced by on-the-fly data compression in front-end embedded systems. The self-described data structure makes it possible to handle any type of data. The pipeline data buffer at each computer node ensures the integrity of the data transfer with non-real-time operating systems, and reduces the development cost. All the data are transmitted via the TCP/IP protocol over GbE and 10GbE Ethernet. To monitor the experimental status, the system incorporates on-line visualization of waveforms/images as well as prompt data mining by a 10 PFlops-scale supercomputer to check the data health. A partial system for the light source commissioning was released in March 2011. The full system will be released to public users in March 2012.  
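As a rough consistency check on the quoted figures (assuming the maximum image size at the full repetition rate): 48 MByte per pulse at 60 Hz corresponds to roughly 2.9 GByte/s, i.e. about 23 Gbps of raw data, so the 5 Gbps design figure after compression implies a compression factor on the order of five for the largest detectors.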
slides icon Slides TUCAUST06 [3.248 MB]  
 
TUDAUST01 Inauguration of the XFEL Facility, SACLA, in SPring-8 controls, laser, electron, experiment 585
 
  • R. Tanaka, Y. Furukawa, T. Hirono, M. Ishii, M. Kago, A. Kiyomichi, T. Masuda, T. Matsumoto, T. Matsushita, T. Ohata, C. Saji, T. Sugimoto, M. Yamaga, A. Yamashita
    JASRI/SPring-8, Hyogo-ken, Japan
  • T. Fukui, T. Hatsui, N. Hosoda, H. Maesaka, T. Ohshima, T. Otake, Y. Otake, H. Takebe
    RIKEN/SPring-8, Hyogo, Japan
 
  The construction of the X-ray free electron laser facility (SACLA) at SPring-8 started in 2006. After 5 years of construction, the facility accelerated its first electron beams in February 2011. The main component of the accelerator is a set of 64 C-band RF units that accelerate beams up to 8 GeV. The bunches are compressed to a length of 30 fs and introduced into the 18 insertion devices to generate 0.1 nm X-ray laser light. The first SASE X-rays were observed after the beam commissioning. Beam tuning will continue in order to achieve X-ray laser saturation for frontier scientific experiments. The control system adopts the standard 3-tier model using the MADOCA framework developed at SPring-8. The upper control layer consists of Linux PCs for operator consoles, a Sybase RDBMS for data logging and FC-based NAS for NFS. The lower layer consists of 100 Solaris-operated VME systems with newly developed boards for RF waveform processing, and PLCs are used for slow control. DeviceNet is adopted for the front-end devices to reduce signal cabling. The VME systems have a beam-synchronized data-taking link to support 60 Hz beam operation for the beam tuning diagnostics. The accelerator control has gateways to the facility utility system, not only to monitor devices but also to control the tuning points of the cooling water. The data acquisition system for the experiments is challenging: the data rate coming from the 2D multiport CCD is 3.4 Gbps, which produces 30 TB of image data in a day. Sampled data will be transferred to the 10 PFlops supercomputer via 10 Gbps Ethernet for data evaluation.  
slides icon Slides TUDAUST01 [5.427 MB]  
 
WEBHMULT03 EtherBone - A Network Layer for the Wishbone SoC Bus hardware, Ethernet, software, timing 642
 
  • M. Kreider, W.W. Terpstra
    GSI, Darmstadt, Germany
  • J.H. Lewis, J. Serrano, T. Włostowski
    CERN, Geneva, Switzerland
 
  Today, there are several System on a Chip (SoC) bus systems. Typically, these busses are confined on-chip and rely on higher level components to communicate with the outside world. Taking these systems a step further, we see the possibility of extending the reach of the SoC bus to remote FPGAs or processors. This leads to the idea of the EtherBone (EB) core, which connects a Wishbone (WB) Ver. 4 Bus via a Gigabit Ethernet based network link to remote peripheral devices. EB acts as a transparent interconnect module towards attached WB bus devices. Address information and data from one or more WB bus cycles are preceded by a descriptive header and encapsulated in a UDP/IP packet. Because of this standard compliance, EB is able to traverse Wide Area Networks and is therefore not bound to a geographic location. Due to the low-level nature of the WB bus, EB provides a sound basis for remote hardware tools like a JTAG debugger, In-System-Programmer (ISP), boundary scan interface or logic analyser module. EB was developed in the scope of the White Rabbit Timing Project (WR) at CERN and GSI/FAIR, which employs Gigabit Ethernet technology to communicate with memory-mapped slave devices. WR will make use of EB as a means to issue commands to its timing nodes and control connected accelerator hardware.  
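As a conceptual illustration of the encapsulation idea (not the actual EtherBone header layout, which is defined by the EB specification), the following sketch packs a simplified Wishbone write cycle, one address/data pair plus a small descriptive header, into a UDP datagram; all field values, the port and the host are placeholders:

    # Conceptual sketch only: a made-up 8-byte header followed by one
    # 32-bit address/data pair, sent as a UDP datagram to a remote node.
    import socket
    import struct

    MAGIC = 0x4E6F  # arbitrary placeholder, not the real EB magic number

    def send_wb_write(host, port, address, data):
        # header: magic, version, flags, record count (all hypothetical fields)
        header = struct.pack(">HBBI", MAGIC, 1, 0, 1)
        payload = struct.pack(">II", address, data)   # one WB write record
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.sendto(header + payload, (host, port))
        sock.close()

    send_wb_write("192.168.0.10", 12345, 0x20000000, 0xDEADBEEF)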
slides icon Slides WEBHMULT03 [1.547 MB]  
 
WEMAU001 A Remote Tracing Facility for Distributed Systems GUI, interface, controls, database 650
 
  • F. Ehm, A. Dworak
    CERN, Geneva, Switzerland
 
  Today CERN's accelerator control system is built upon a large number of services, mainly based on C++ and Java, which produce log events. In such a largely distributed environment these log messages are essential for problem recognition and tracing. Tracing is therefore a vital part of operations, as understanding an issue in a subsystem means analyzing log events in an efficient and fast manner. At present 3150 device servers are deployed on 1600 diskless front-ends and send their log messages via the network to an in-house developed central server which, in turn, saves them to files. However, this solution is not able to provide several highly desired features and has performance limitations, which led to the development of a new solution. The new distributed tracing facility fulfills these requirements by taking advantage of the Simple Text Oriented Messaging Protocol [STOMP] and ActiveMQ as the transport layer. The system not only allows critical log events to be stored centrally in files or in a database, but also allows other clients (e.g. graphical interfaces) to read the same events at the same time using the provided Java API. The facility also ensures that each client receives only the log events of the desired level. Thanks to the ActiveMQ broker technology the system can easily be extended to clients implemented in other languages and it is highly scalable in terms of performance. Long-running tests have shown that the system can handle up to 10,000 messages/second.  
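Since STOMP is a plain-text frame protocol, a log producer can be sketched in a few lines; the broker address, destination name, credentials and message format below are purely illustrative and do not reflect the facility's actual conventions:

    # Minimal sketch: publish one log event to a STOMP broker (e.g. ActiveMQ)
    # using raw STOMP 1.0-style frames over a TCP socket.
    import socket

    def send_log_event(host, port, destination, body):
        s = socket.create_connection((host, port))
        # CONNECT frame, then a SEND frame; each frame ends with a NUL byte.
        s.sendall(b"CONNECT\nlogin:guest\npasscode:guest\n\n\x00")
        s.recv(4096)  # CONNECTED frame (not parsed in this sketch)
        frame = ("SEND\ndestination:%s\n\n%s\x00" % (destination, body)).encode()
        s.sendall(frame)
        s.sendall(b"DISCONNECT\n\n\x00")
        s.close()

    send_log_event("broker.example.org", 61613, "/topic/tracing.demo",
                   "2011-10-10 12:00:00 WARN fesa.server.42 lost connection")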
slides icon Slides WEMAU001 [1.008 MB]  
poster icon Poster WEMAU001 [0.907 MB]  
 
WEMAU005 The ATLAS Transition Radiation Tracker (TRT) Detector Control System detector, controls, hardware, electronics 666
 
  • J. Olszowska, E. Banaś, Z. Hajduk
    IFJ-PAN, Kraków, Poland
  • M. Hance, D. Olivito, P. Wagner
    University of Pennsylvania, Philadelphia, Pennsylvania, USA
  • T. Kowalski, B. Mindur
    AGH University of Science and Technology, Krakow, Poland
  • R. Mashinistov, K. Zhukov
    LPI, Moscow, Russia
  • A. Romaniouk
    MEPhI, Moscow, Russia
 
  Funding: CERN; MNiSW, Poland; MES of Russia and ROSATOM, Russian Federation; DOE and NSF, United States of America
The TRT is one of the ATLAS experiment's Inner Detector components, providing precise tracking and electron identification. It consists of 370 000 proportional counters (straws) which have to be filled with a stable active gas mixture and biased with high voltage. The high voltage settings in distinct topological regions are periodically modified by a closed-loop regulation mechanism to ensure a constant gas gain, independent of drifts in atmospheric pressure, local detector temperatures and gas mixture composition. A low voltage system powers the front-end electronics. Special algorithms provide fine-tuning procedures for detector-wide discrimination threshold equalization to guarantee a uniform noise figure across the whole detector. Detector, cooling system and electronics temperatures are continuously monitored by ~3000 temperature sensors. Standard industrial and custom-developed server applications and protocols are used to integrate the devices into a single system. All parameters originating in TRT devices and external infrastructure systems (important for detector operation or safety) are monitored and used by alert and interlock mechanisms. The system runs on 11 computers as PVSS (industrial SCADA) projects and is fully integrated with the ATLAS Detector Control System.
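The closed-loop gain regulation can be pictured with a deliberately simplified sketch; the correction law, coefficients and starting values below are invented for illustration and do not represent the actual TRT algorithm:

    # Toy sketch of closed-loop high-voltage regulation: nudge the HV set
    # point so that the measured gas gain returns to its target value.
    TARGET_GAIN = 2.5e4              # hypothetical target gas gain
    VOLTS_PER_RELATIVE_GAIN = 40.0   # invented sensitivity (V per 100% gain change)

    def regulate(hv_setpoint, measured_gain):
        relative_error = (TARGET_GAIN - measured_gain) / TARGET_GAIN
        correction = VOLTS_PER_RELATIVE_GAIN * relative_error
        # clamp the step so a single iteration cannot move the HV too far
        correction = max(-2.0, min(2.0, correction))
        return hv_setpoint + correction

    hv = 1530.0                                  # V, hypothetical starting point
    for gain in (2.40e4, 2.45e4, 2.52e4):        # drifting gain readings
        hv = regulate(hv, gain)
        print("new HV set point: %.1f V" % hv)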
 
slides icon Slides WEMAU005 [1.384 MB]  
poster icon Poster WEMAU005 [1.978 MB]  
 
WEMMU006 Management Tools for Distributed Control System in KSTAR controls, software, monitoring, EPICS 694
 
  • S. Lee, J.S. Hong, J.S. Park, M.K. Park, S.W. Yun
    NFRI, Daejon, Republic of Korea
 
  The integrated control system of the Korea Superconducting Tokamak Advanced Research (KSTAR) device has been developed as a set of distributed control systems based on the Experimental Physics and Industrial Control System (EPICS). It has the essential role of enabling remote operation, supervising the tokamak device and conducting plasma experiments without any interruption. The availability of the control system therefore directly impacts the performance of the entire device. For non-interrupted operation of the KSTAR control system, we have developed a tool named Control System Monitoring (CSM) to monitor, in real time, the resources of the EPICS Input/Output Controller (IOC) servers (utilization of memory, CPU, disk and network, user-defined and system-defined processes), the soundness of the storage systems (storage utilization and status), the status of the network switches using the Simple Network Management Protocol (SNMP), the network connection status of every local control server using the Internet Control Message Protocol (ICMP), and the operating environment of the main control room and the computer room (temperature, humidity, water leaks). When abnormal conditions or faults are detected by the CSM, it raises the corresponding alarms to the operators. In particular, if a critical fault related to the data storage occurs, the CSM sends short messages to the operators' mobile phones. In addition to the CSM, other tools used to manage the integrated control system for KSTAR operation, such as Subversion for software version control and VMware for the virtualized IT infrastructure, will be introduced.  
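A minimal sketch of the kind of reachability check described above, using ICMP via the system ping utility (Linux option syntax); the host list and alarm handling are placeholders, and the actual CSM implementation is of course far richer than this:

    # Sketch: periodically check that local control servers answer ICMP echo
    # requests and report any that stop responding.
    import subprocess
    import time

    CONTROL_SERVERS = ["ioc-magnet-01", "ioc-vacuum-01", "ioc-heating-01"]  # placeholders

    def is_reachable(host):
        # one echo request, 2 s timeout; return code 0 means the host answered
        result = subprocess.run(["ping", "-c", "1", "-W", "2", host],
                                stdout=subprocess.DEVNULL,
                                stderr=subprocess.DEVNULL)
        return result.returncode == 0

    while True:
        for host in CONTROL_SERVERS:
            if not is_reachable(host):
                print("ALARM: %s is not responding" % host)  # stand-in for the real alarm path
        time.sleep(30)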
slides icon Slides WEMMU006 [0.247 MB]  
poster icon Poster WEMMU006 [5.611 MB]  
 
WEMMU009 Status of the RBAC Infrastructure and Lessons Learnt from its Deployment in LHC controls, database, software, software-architecture 702
 
  • W. Sliwinski, P. Charrue, I. Yastrebov
    CERN, Geneva, Switzerland
 
  The distributed control system for the LHC accelerator poses many challenges due to its inherent heterogeneity and highly dynamic nature. One of the important aspects is to protect the machine against unauthorised access and unsafe operation of the control system, from the low-level front-end machines up to the high-level control applications running in the control room. In order to prevent unauthorized access to the control system and accelerator equipment and to address the possible security issues, the Role Based Access Control (RBAC) project was designed and developed at CERN, with a major contribution from Fermilab. Furthermore, RBAC became an integral part of the CERN Controls Middleware (CMW) infrastructure and it was deployed and commissioned in LHC operation in the summer of 2008, well before the first beam in the LHC. This paper presents the current status of the RBAC infrastructure, together with the outcome and the experience gathered after its large-scale deployment in LHC operation. Moreover, we outline how the project has evolved over the last three years and give an overview of the major extensions introduced to improve its integration, stability and functionality. The paper also describes the plans for future project evolution and possible extensions, based on gathered user requirements and operational experience.  
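The essence of role-based access control can be illustrated with a small sketch; the roles, rules and device names below are invented and unrelated to the actual CMW/RBAC rule set:

    # Toy RBAC check: a map from roles to permitted (device, property, action)
    # patterns, consulted before a settings change is accepted.
    import fnmatch

    RULES = {
        "LHC-Operator":  [("RF.*", "*",       "set"),
                          ("*",    "*",       "get")],
        "RF-Expert":     [("RF.*", "voltage", "set")],
    }

    def is_allowed(roles, device, prop, action):
        for role in roles:
            for dev_pat, prop_pat, act in RULES.get(role, []):
                if (fnmatch.fnmatch(device, dev_pat)
                        and fnmatch.fnmatch(prop, prop_pat)
                        and action == act):
                    return True
        return False

    # an authentication token would normally carry the user's roles
    print(is_allowed(["LHC-Operator"], "RF.CAVITY1", "voltage", "set"))  # True
    print(is_allowed(["RF-Expert"],    "QF.QUAD12",  "current", "set"))  # False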
slides icon Slides WEMMU009 [0.604 MB]  
poster icon Poster WEMMU009 [1.262 MB]  
 
WEPKN006 Running a Reliable Messaging Infrastructure for CERN's Control System controls, monitoring, network, GUI 724
 
  • F. Ehm
    CERN, Geneva, Switzerland
 
  The current middleware for CERN's accelerator controls system is based on two implementations: the CORBA-based Controls MiddleWare (CMW) and the Java Messaging Service [JMS]. The JMS service is realized using the open source messaging product ActiveMQ and has become an increasingly vital part of beam operations, as data need to be transported reliably for various areas such as the beam protection system, post mortem analysis, beam commissioning or the alarm system. The current JMS service is made up of 17 brokers running either in clusters or as single nodes. The main service is deployed as a two-node cluster providing failover and load balancing capabilities for high availability. Non-critical applications running on virtual machines or desktop machines read data via a third broker to decouple their load from the operational main cluster. This scenario was introduced last year and the statistics showed an uptime of 99.998% and an average data serving rate of 1.6 GB/min, represented by around 150 messages/sec. Deploying, running, maintaining and protecting such a messaging infrastructure is not trivial and includes setting up careful monitoring and failure pre-recognition. Naturally, lessons have been learnt and their outcome is very important for the current and future operation of such a service.  
poster icon Poster WEPKN006 [0.877 MB]  
 
WEPKN007 A LEGO Paradigm for Virtual Accelerator Concept controls, experiment, simulation, software 728
 
  • S.N. Andrianov, A.N. Ivanov, E.A. Podzyvalov
    St. Petersburg State University, St. Petersburg, Russia
 
  The paper considers the basic features of a Virtual Accelerator concept based on a LEGO paradigm. This concept involves three types of components: different mathematical models for accelerator design problems, integrated beam simulation packages (e.g. COSY, MAD, OptiM and others), and a special class of virtual feedback instruments similar to real control systems (EPICS). All of these components should interoperate for a more complete analysis of control systems and increased fault tolerance. The Virtual Accelerator is an information and computing environment which provides a framework for analysis based on these components, which can be combined in different ways. Corresponding distributed computing services establish the interaction between the mathematical models and the low-level control system. The general idea of the software implementation is based on the Service-Oriented Architecture (SOA), which allows the use of cloud computing technology and enables remote access to the information and computing resources. The Virtual Accelerator allows a designer to combine powerful instruments for modeling beam dynamics in a user-friendly way, including both self-developed and well-known packages. Within the scope of this concept the following are also proposed: control system identification, analysis and result verification, visualization, as well as virtual feedback for beam line operation. The architecture of the Virtual Accelerator system itself and results of beam dynamics studies are presented.  
poster icon Poster WEPKN007 [0.969 MB]  
 
WEPKN019 A Programmable Logic Controller-Based System for the Recirculation of Liquid C6F14 in the ALICE High Momentum Particle Identification Detector at the Large Hadron Collider controls, detector, monitoring, framework 745
 
  • I. Sgura, G. De Cataldo, A. Franco, C. Pastore, G. Volpe
    INFN-Bari, Bari, Italy
 
  We present the design and the implementation of the Control System (CS) for the recirculation of liquid C6F14 (Perfluorohexane) in the High Momentum Particle Identification Detector (HMPID). The HMPID is a sub-detector of the ALICE experiment at the CERN Large Hadron Collider (LHC) and it uses liquid C6F14 as the Cherenkov radiator medium in 21 quartz trays for the measurement of the velocity of charged particles. The primary task of the Liquid Circulation System (LCS) is to ensure the highest transparency of the C6F14 to ultraviolet light by re-circulating the liquid through a set of special filters. In order to provide safe long-term operation, a PLC-based CS has been implemented. The CS supports both automatic and manual operating modes, remotely or locally. The adopted Finite State Machine approach minimizes possible operator errors and provides a hierarchical control structure allowing the operation and monitoring of a single radiator tray. The LCS is protected against anomalous working conditions by both active and passive systems. The active ones are ensured via the control software running in the PLC, whereas the human interface and data archiving are provided via PVSS, the SCADA framework which integrates the full detector control. The LCS under CS control has been fully commissioned and proved to meet all requirements, thus enabling the HMPID to successfully collect data from the first LHC operation.  
poster icon Poster WEPKN019 [1.270 MB]  
 
WEPKN024 UNICOS CPC New Domains of Application: Vacuum and Cooling & Ventilation controls, vacuum, framework, cryogenics 752
 
  • D. Willeman, E. Blanco Vinuela, B. Bradu, J.O. Ortola Vidal
    CERN, Geneva, Switzerland
 
  The UNICOS (UNified Industrial Control System) framework, and concretely its CPC package, has been extensively used in the domain of continuous processes (e.g. cryogenics, gas flows, …) and also in others specific to the LHC machine, such as the collimator environmental measurements interlock system. The application of UNICOS-CPC to other kinds of processes, vacuum and cooling and ventilation, is depicted here. One of the major challenges was to figure out whether the model and devices created so far were also suited to other types of processes (e.g. vacuum). To illustrate this challenge, two domain use cases will be shown: the ISOLDE vacuum control system and the STP18 (cooling and ventilation) control system. Both scenarios will be illustrated, emphasizing the adaptability of the UNICOS CPC package to create those applications and highlighting the features found to be needed in the future UNICOS CPC package. This paper will also introduce the mechanisms used to optimize the commissioning time, the so-called virtual commissioning. In most cases, either the process is not yet accessible or the process is critical and its availability is therefore reduced, so a model of the process is used to validate the designed control system offline.  
poster icon Poster WEPKN024 [0.230 MB]  
 
WEPKN025 Supervision Application for the New Power Supply of the CERN PS (POPS) controls, interface, framework, software 756
 
  • H. Milcent, X. Genillon, M. Gonzalez-Berges, A. Voitier
    CERN, Geneva, Switzerland
 
  The power supply system for the magnets of the CERN PS has been recently upgraded to a new system called POPS (POwer for PS). The old mechanical machine has been replaced by a system based on capacitors. The equipment as well as the low level controls have been provided by an external company (CONVERTEAM). The supervision application has been developed at CERN reusing the technologies and tools used for the LHC Accelerator and Experiments (UNICOS and JCOP frameworks, PVSS SCADA tool). The paper describes the full architecture of the control application, and the challenges faced for the integration with an outsourced system. The benefits of reusing the CERN industrial control frameworks and the required adaptations will be discussed. Finally, the initial operational experience will be presented.  
poster icon Poster WEPKN025 [13.149 MB]  
 
WEPKS005 State Machine Framework and its Use for Driving LHC Operational States* framework, controls, embedded, GUI 782
 
  • M. Misiowiec, V. Baggiolini, M. Solfaroli Camillocci
    CERN, Geneva, Switzerland
 
  The LHC follows a complex operational cycle with 12 major phases that include equipment tests, preparation, beam injection, ramping and squeezing, finally followed by the physics phase. This cycle is modeled and enforced with a state machine, whereby each operational phase is represented by a state. On each transition, before entering the next state, a series of conditions is verified to make sure the LHC is ready to move on. The State Machine framework was developed to cater for building independent or embedded state machines. They safely drive between the states executing tasks bound to transitions and broadcast related information to interested parties. The framework encourages users to program their own actions. Simple configuration management allows the operators to define and maintain complex models themselves. An emphasis was also put on easy interaction with the remote state machine instances through standard communication protocols. On top of its core functionality, the framework offers a transparent integration with other crucial tools used to operate LHC, such as the LHC Sequencer. LHC Operational States has been in production for half a year and was seamlessly adopted by the operators. Further extensions to the framework and its application in operations are under way.
* http://cern.ch/marekm/icalepcs.html
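The transition-guarding idea described above can be condensed into a small sketch; the states, conditions and tasks below are invented and far simpler than the real LHC operational cycle:

    # Toy guarded state machine: a transition is taken only if all of its
    # condition checks pass, and its tasks are executed on the way.
    class StateMachine:
        def __init__(self, initial):
            self.state = initial
            self.transitions = {}   # (from, to) -> (conditions, tasks)

        def add(self, src, dst, conditions=(), tasks=()):
            self.transitions[(src, dst)] = (conditions, tasks)

        def request(self, dst):
            conditions, tasks = self.transitions[(self.state, dst)]
            if not all(check() for check in conditions):
                print("transition %s -> %s refused" % (self.state, dst))
                return False
            for task in tasks:
                task()
            self.state = dst
            return True

    sm = StateMachine("INJECTION")
    sm.add("INJECTION", "RAMP",
           conditions=[lambda: True],               # e.g. "beam intensity OK"
           tasks=[lambda: print("arming ramp")])
    sm.request("RAMP")
    print("current state:", sm.state)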
 
poster icon Poster WEPKS005 [0.717 MB]  
 
WEPKS006 UNICOS Evolution: CPC Version 6 controls, framework, vacuum, cryogenics 786
 
  • E. Blanco Vinuela, J.M. Beckers, B. Bradu, Ph. Durand, B. Fernández Adiego, S. Izquierdo Rosas, A. Merezhin, J.O. Ortola Vidal, J. Rochez, D. Willeman
    CERN, Geneva, Switzerland
 
  The UNICOS (UNified Industrial Control System) framework was created back in 1998; since then a considerable number of applications in different domains have used this framework to develop process control applications. Furthermore, the UNICOS framework has been formalized and its supervision layer has been reused in other kinds of applications (e.g. monitoring or supervisory tasks) where the control layer is not necessarily UNICOS oriented. The process control package has been reformulated as the UNICOS CPC (Continuous Process Control) package and a re-engineering process has been followed. These changes were motivated by several factors: (1) being able to upgrade to newer, higher-performance IT technologies in the automatic code generation, (2) being flexible enough to create additional device types to cope with other needs (e.g. vacuum or cooling and ventilation applications) without major impact on the framework or the PLC code baselines, and (3) enhancing the framework with new functionalities (e.g. recipes). This publication addresses the motivation, the changes, the new functionalities and the results obtained. It gives an overall view of the technologies used and the changes made, emphasizing what has been gained for the developer and the final user. Finally, some of the new domains where UNICOS CPC has been used are illustrated.  
poster icon Poster WEPKS006 [0.449 MB]  
 
WEPKS012 Intuitionistic Fuzzy (IF) Evaluations of Multidimensional Model data-analysis, software, lattice, fuzzy set 805
 
  • I.D. Valova
    ICER, Sofia, Bulgaria
 
  There are different logical methods for data structuring, but none of them is perfect. The multidimensional model presents data in the form of a cube (referred to as an infocube or hypercube) or in the form of a "star"-type scheme (referred to as a multidimensional scheme), by use of F-structures (Facts) and a set of D-structures (Dimensions), based on the notion of a hierarchy of D-structures. The data being analysed in a specific multidimensional model is located in a Cartesian space bounded by the D-structures. In practice, the data is either dispersed or "concentrated", so the data cells are not distributed evenly within the respective space. The moment of occurrence of any event is difficult to predict and the data is concentrated by time period, location of the event, etc. To process such dispersed or concentrated data, various technical strategies are needed. The use of intuitionistic fuzzy evaluations (IFE) provides new possibilities for the alternative presentation and processing of data under analysis in any OLAP application. The use of IFE in the evaluation of multidimensional models results in the following advantages: analysts have more complete information for processing and analysing the respective data; the benefit for managers is that the final decisions are more effective; and the design of more functional multidimensional schemes is enabled. The purpose of this work is to apply intuitionistic fuzzy evaluations to a multidimensional model of data.  
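For reference, the standard notion of an intuitionistic fuzzy set (in Atanassov's sense) that underlies such evaluations assigns each element both a membership and a non-membership degree; a compact statement of the usual constraints is

    A = \{\, (x,\ \mu_A(x),\ \nu_A(x)) \mid x \in E \,\}, \qquad
    0 \le \mu_A(x) + \nu_A(x) \le 1, \qquad
    \pi_A(x) = 1 - \mu_A(x) - \nu_A(x)

where μ_A is the degree of membership, ν_A the degree of non-membership and π_A the remaining degree of uncertainty (hesitation).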
 
WEPKS018 MstApp, a Rich Client Control Applications Framework at DESY framework, controls, hardware, status 819
 
  • W. Schütte, K. Hinsch
    DESY, Hamburg, Germany
 
  Funding: Deutsches Elektronen-Synchrotron DESY
The control system for PETRA 3 [1] and its pre-accelerators makes extensive use of rich clients for the control room and the servers. Most of them are written with the help of a rich client Java framework: MstApp. They total 106 different console applications and 158 individual server applications. MstApp takes care of many common control system application aspects beyond communication. It provides a common look and feel: core menu items, a color scheme for the standard states of hardware components, and standardized screen sizes/locations. It interfaces with our console application manager (CAM) and displays our communication link diagnostics tools on demand. MstApp supplies an accelerator context for each application; it handles printing, logging, resizing and unexpected application crashes. Thanks to our standardized deployment process, MstApp applications know their individual developers and can even send them emails at the press of a button by the user. Furthermore, a concept of different operation modes is implemented: view only, operating and expert use. Administration of the corresponding rights is done via web access to a database server. Initialization files on a web server are instantiated as Java objects with the help of the Java SE XMLEncoder. Data tables are read with the same mechanism. New MstApp applications can easily be created with in-house wizards like the NewProjectWizard or the DeviceServerWizard. MstApp improves the operator experience, application developer productivity and delivered software quality.
[1] Reinhard Bacher, “Commissioning of the New Control System for the PETRA 3 Accelerator Complex at Desy”, Proceedings of ICALEPCS 2009, Kobe, Japan
 
poster icon Poster WEPKS018 [0.474 MB]  
 
WEPKS020 Adding Flexible Subscription Options to EPICS EPICS, framework, database, controls 827
 
  • R. Lange
    HZB, Berlin, Germany
  • L.R. Dalesio
    BNL, Upton, Long Island, New York, USA
  • A.N. Johnson
    ANL, Argonne, USA
 
  Funding: Work supported by U.S. Department of Energy (under contracts DE-AC02-06CH11357 resp. DE-AC02-98CH10886), German Bundesministerium für Bildung und Forschung and Land Berlin.
The need for a mechanism to control and filter subscriptions to control system variables by the client was described in a paper at the ICALEPCS2009 conference.[1] The implementation follows a plug-in design that allows the insertion of plug-in instances into the event stream on the server side. The client can instantiate and configure these plug-ins when opening a subscription, by adding field modifiers to the channel name using JSON notation.[2] This paper describes the design and implementation of a modular server-side plug-in framework for Channel Access, and shows examples for plug-ins as well as their use within an EPICS control system.
[1] R. Lange, A. Johnson, L. Dalesio: Advanced Monitor/Subscription Mechanisms for EPICS, THP090, ICALEPCS2009, Kobe, Japan.
[2] A. Johnson, R. Lange: Evolutionary Plans for EPICS Version 3, WEA003, ICALEPCS2009, Kobe, Japan.
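A conceptual sketch of the server-side idea, a per-subscription plug-in deciding which monitor updates reach the client, configured from a JSON option string; the plug-in name, option syntax and interface below are invented and are not the actual Channel Access plug-in API:

    # Conceptual sketch: a dead-band filter plug-in inserted into the event
    # stream of one subscription; only sufficiently large changes are forwarded.
    import json

    class DeadbandPlugin:
        def __init__(self, options):
            self.abs = float(options.get("abs", 0.0))
            self.last = None

        def process(self, value):
            if self.last is None or abs(value - self.last) >= self.abs:
                self.last = value
                return value        # forward the event to the client
            return None             # drop the event

    # e.g. a client asking for 'temperature{"dbnd":{"abs":0.5}}' (hypothetical syntax)
    plugin = DeadbandPlugin(json.loads('{"abs": 0.5}'))
    for update in [20.0, 20.2, 20.6, 20.7, 21.3]:
        out = plugin.process(update)
        if out is not None:
            print("forwarded", out)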
 
poster icon Poster WEPKS020 [0.996 MB]  
 
WEPKS033 UNICOS CPC6: Automated Code Generation for Process Control Applications controls, software, framework, vacuum 871
 
  • B. Fernández Adiego, E. Blanco Vinuela, I. Prieto Barreiro
    CERN, Geneva, Switzerland
 
  The Continuous Process Control package (CPC) is one of the components of the CERN Unified Industrial Control System framework (UNICOS). As a part of this framework, UNICOS-CPC provides a well defined library of device types, a methodology and a set of tools to design and implement industrial control applications. The new CPC version uses the software factory UNICOS Application Builder (UAB) to develop the CPC applications. The CPC component is composed of several platform-oriented plug-ins (PLCs and SCADA) describing the structure and the format of the generated code. It uses a resource package where both the library of device types and the generated file syntax are defined. The UAB core is the generic part of this software; it dynamically discovers and calls the different plug-ins and provides the required common services. In this paper the UNICOS CPC6 package is presented. It is composed of several plug-ins: the Instance generator and the Logic generator for both Siemens and Schneider PLCs, the SCADA generator (based on PVSS), and the CPC wizard, a dedicated plug-in created to provide the user with a friendly GUI. A management tool called UAB bootstrap will administer the different CPC component versions and all the dependencies between the CPC resource packages and the components. This tool guides the control system developer in installing and launching the different CPC component versions.  
poster icon Poster WEPKS033 [0.730 MB]  
 
WEPMN005 Spiral2 Control Command: a Standardized Interface between High Level Applications and EPICS IOCs interface, status, controls, EPICS 879
 
  • C.H. Haquin, P. Gillette, E. Lemaître, L. Philippe, D.T. Touchard
    GANIL, Caen, France
  • F. Gougnaud, Y. Lussignol
    CEA/DSM/IRFU, France
 
  The SPIRAL2 linear accelerator will produce entirely new particle beams, enabling exploration of the boundaries of matter. Coupled with the existing GANIL machine, this new facility will produce light and heavy exotic nuclei at extremely high intensities. The field deployment of the Control System relies on Linux PCs and servers, VME VxWorks crates and Siemens PLCs; equipment will be addressed either directly or using a Modbus/TCP field bus network. Several laboratories are involved in the software development of the control system. In order to improve the efficiency of the collaboration, special care is taken over the software organization. This really makes sense during the development phase, in a context of tough budget and time constraints, but it also helps us, for the exploitation of the new machine, to design a control system that will require as little effort as possible for maintenance and evolution. The major concepts of this organization are the choice of EPICS; the definition of an EPICS directory tree specific to SPIRAL2, called "topSP2", which is our reference work area for development, integration and exploitation; and the use of a version control system (SVN) to store and share our developments independently of the multi-site dimension of the project. The next concept is the definition of a "standardized interface" between high-level applications programmed in Java and EPICS databases running in IOCs. This paper relates the rationale and objectives of this interface and also its development cycle, from specification using UML diagrams to testing on the actual equipment.  
poster icon Poster WEPMN005 [0.945 MB]  
 
WEPMN012 PC/104 Asyn Drivers at Jefferson Lab controls, interface, EPICS, hardware 898
 
  • J. Yan, T.L. Allison, S.D. Witherspoon
    JLAB, Newport News, Virginia, USA
 
  Funding: Authored by Jefferson Science Associates, LLC under U.S. DOE Contract No. DE-AC05-06OR23177.
PC/104 embedded IOCs that run RTEMS and EPICS have been applied in many new projects at Jefferson Lab. Different commercial PC/104 I/O modules on the market, such as digital I/O, data acquisition, and communication modules, are integrated in our control system. AsynDriver, which is a general facility for interfacing device-specific code to low-level drivers, was applied for the PC/104 serial communication I/O cards. We chose the ines GPIB-PC/104-XL as the GPIB interface module and developed a low-level device driver that is compatible with asynDriver. The ines GPIB-PC/104-XL has an iGPIB 72110 chip, which is register compatible with the NEC uPD7210 in GPIB Talker/Listener applications. Instrument device support was created to provide access to the operating parameters of GPIB devices. A low-level device driver for the serial communication board Model 104-COM-8SM was also developed to run under asynDriver. This serial interface board contains eight independent ports and provides effective RS-485, RS-422 and RS-232 multipoint communication. StreamDevice protocols were applied for the serial communications. The asynDriver in the PC/104 IOC applications provides a standard interface between the high-level device support and the hardware-level device drivers. This makes it easy to develop GPIB and serial communication applications in PC/104 IOCs.
 
 
WEPMN017 PCI Hardware Support in LIA-2 Control System hardware, controls, Linux, interface 916
 
  • D. Bolkhovityanov, P.B. Cheblakov
    BINP SB RAS, Novosibirsk, Russia
 
  LIA-2 control system* is built on cPCI crates with x86-compatible processor boards running Linux. Slow electronics is connected via CAN bus, while fast electronics (4 MHz and 200 MHz fast ADCs and 200 MHz timers) are implemented as cPCI/PMC modules. Several ways to drive PCI control electronics in Linux were examined. Finally a userspace driver approach was chosen. These drivers communicate with the hardware via a small kernel module, which provides access to the PCI BARs and to interrupt handling. This module was named USPCI (User-Space PCI access). This approach dramatically simplifies the creation of drivers, as opposed to kernel drivers, and provides high reliability (because only a tiny and thoroughly-debugged piece of code runs in the kernel). The LIA-2 accelerator was successfully commissioned, and the solution chosen has proven adequate and very easy to use. Besides, USPCI turned out to be a handy tool for examining and debugging PCI devices directly from the command line. In this paper the available approaches to working with PCI control hardware in Linux are considered, and the USPCI architecture is described.
* "LIA-2 Linear Induction Accelerator Control System", this conference
 
poster icon Poster WEPMN017 [0.954 MB]  
 
WEPMN032 Development of Pattern Awareness Unit (PAU) for the LCLS Beam Based Fast Feedback System feedback, timing, controls, software 954
 
  • K.H. Kim, S. Allison, D. Fairley, T.M. Himel, P. Krejcik, D. Rogind, E. Williams
    SLAC, Menlo Park, California, USA
 
  LCLS is now successfully operating at its design beam repetition rate of 120 Hz, but in order to ensure stable beam operation at this high rate we have developed a new timing pattern aware EPICS controller for beam line actuators. Actuators that are capable of responding at 120 Hz are controlled by the new Pattern Aware Unit (PAU) as part of the beam-based feedback system. The beam at the LCLS is synchronized to the 60 Hz AC power line phase and is subject to electrical noise which differs according to which of the six possible AC phases is chosen from the 3-phase site power line. Beam operation at 120 Hz interleaves two of these 60 Hz phases and the feedback must be able to apply independent corrections to the beam pulse according to which of the 60 Hz timing patterns the pulse is synchronized to. The PAU works together with the LCLS Event Timing system which broadcasts a timing pattern that uniquely identifies each pulse when it is measured and allows the feedback correction to be applied to subsequent pulses belonging to the same timing pattern, or time slot, as it is referred to at SLAC. At 120 Hz operation this effectively provides us with two independent, but interleaved feedback loops. Other beam programs at the SLAC facility such as LCLS-II and FACET will be pulsed on other time slots and the PAUs in those systems will respond to their appropriate timing patterns. This paper describes the details of the PAU development: real-time requirements and achievement, scalability, and consistency. The operational results will also be described.  
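The per-time-slot bookkeeping described above can be pictured with a small sketch; the slot identifiers, gain and actuator interface are invented for illustration and do not correspond to the actual PAU implementation:

    # Toy model: keep an independent feedback correction per timing pattern
    # (time slot), so interleaved 60 Hz "loops" do not disturb each other.
    class PerSlotFeedback:
        def __init__(self, gain=0.1):
            self.gain = gain
            self.correction = {}          # time slot -> accumulated correction

        def update(self, time_slot, measured_error):
            c = self.correction.get(time_slot, 0.0)
            c -= self.gain * measured_error           # simple integral-like update
            self.correction[time_slot] = c
            return c                                   # value to apply to the actuator

    fb = PerSlotFeedback()
    for slot, error in [(1, 0.2), (4, -0.1), (1, 0.15), (4, -0.05)]:
        print("slot", slot, "apply", round(fb.update(slot, error), 3))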
poster icon Poster WEPMN032 [0.430 MB]  
 
WEPMS001 Interconnection Test Framework for the CMS Level-1 Trigger System framework, hardware, distributed, controls 973
 
  • J. Hammer
    CERN, Geneva, Switzerland
  • M. Magrans de Abril
    UW-Madison/PD, Madison, Wisconsin, USA
  • C.-E. Wulz
    HEPHY, Wien, Austria
 
  The Level-1 Trigger Control and Monitoring System is a software package designed to configure, monitor and test the Level-1 Trigger System of the Compact Muon Solenoid (CMS) experiment at CERN's Large Hadron Collider. It is a large and distributed system that runs on over 50 PCs and controls about 200 hardware units. The Interconnection Test Framework (ITF), a generic and highly flexible framework for creating and executing hardware tests within the Level-1 Trigger environment, is presented. The framework is designed to automate the testing of the 13 major subsystems interconnected with more than 1000 links. Features include a web interface to create and execute tests, modeling using finite state machines, dependency management, automatic configuration, and loops. Furthermore, the ITF will replace the existing heterogeneous testing procedures and help reduce the maintenance and the complexity of operation tasks. Finally, an example of operational use of the Interconnection Test Framework is presented. This case study proves the concept and describes the customization process and its performance characteristics.  
poster icon Poster WEPMS001 [0.576 MB]  
 
WEPMS003 A Testbed for Validating the LHC Controls System Core Before Deployment controls, software, hardware, timing 977
 
  • J. Nguyen Xuan, V. Baggiolini
    CERN, Geneva, Switzerland
 
  Since the start-up of the LHC, it is crucial to carefully test core controls components before deploying them operationally. The Testbed of the CERN accelerator controls group was developed for this purpose. It contains different hardware (PPC, i386) running different operating systems (Linux and LynxOS), and core software components running on front-ends, communication middleware and client libraries. The Testbed first executes integration tests to verify that the components delivered by individual teams interoperate, and then system tests, which verify high-level, end-user functionality. It also verifies that different versions of components are compatible, which is vital, because not all parts of the operational LHC control system can be upgraded simultaneously. In addition, the Testbed can be used for performance and stress tests. Internally, the Testbed is driven by Bamboo, a Continuous Integration server, which automatically builds and deploys new software versions into the Testbed environment and executes the tests continuously to guard against software regression. Whenever a test fails, an e-mail is sent to the appropriate persons. The Testbed is part of the official controls development process, wherein new releases of the controls system have to be validated before being deployed operationally. Integration and system tests are an important complement to the unit tests previously executed in the teams. The Testbed has already caught several bugs that were not discovered by the unit tests of the individual components.
* http://cern.ch/jnguyenx/ControlsTestBed.html
 
poster icon Poster WEPMS003 [0.111 MB]  
 
WEPMS005 Automated Coverage Tester for the Oracle Archiver of WinCC OA software, controls, status, database 981
 
  • A. Voitier, P. Golonka, M. Gonzalez-Berges
    CERN, Geneva, Switzerland
 
  A large number of control systems at CERN are built with the commercial SCADA tool WinCC OA. They cover projects in the experiments, accelerators and infrastructure. An important component is the Oracle archiver used for long-term storage of process data (events) and alarms. The archived data provide feedback to the operators and experts about how the system was behaving at a particular moment in the past. In addition, a subset of these data is used for offline physics analysis. The consistency of the archived data has to be ensured from writing to reading as well as throughout updates of the control systems. The complexity of the archiving subsystem comes from the multiplicity of data types, the required performance and other factors such as the operating system, environment variables or the versions of the different software components; therefore an automatic tester has been implemented to systematically execute test scenarios under different conditions. The tests are based on scripts which are automatically generated from templates, so they can cover a wide range of software contexts. The tester has been fully written in the same software environment as the targeted SCADA system. The current implementation is able to handle over 300 test cases, both for events and alarms. It has made it possible to report issues to the provider of WinCC OA. The template mechanism allows sufficient flexibility to adapt the suite of tests to future needs. The developed tools are generic enough to be used to test other parts of the control systems.  
poster icon Poster WEPMS005 [0.279 MB]  
 
WEPMS006 Automated testing of OPC Servers DSL, software, Windows, Domain-Specific-Languages 985
 
  • B. Farnham
    CERN, Geneva, Switzerland
 
  CERN relies on OPC Server implementations from 3rd party device vendors to provide a software interface to their respective hardware. Each time a vendor releases a new OPC Server version it is regression tested internally to verify that existing functionality has not been inadvertently broken during the process of adding new features. In addition bugs and problems must be communicated to the vendors in a reliable and portable way. This presentation covers the automated test approach used at CERN to cover both cases: Scripts are written in a domain specific language specifically created for describing OPC tests and executed by a custom software engine driving the OPC Server implementation.  
poster icon Poster WEPMS006 [1.384 MB]  
 
WEPMS007 Backward Compatibility as a Key Measure for Smooth Upgrades to the LHC Control System controls, software, feedback, Linux 989
 
  • V. Baggiolini, M. Arruat, D. Csikos, R. Gorbonosov, P. Tarasenko, Z. Zaharieva
    CERN, Geneva, Switzerland
 
  Now that the LHC is operational, a big challenge is to upgrade the control system smoothly, with minimal downtime and interruptions. Backward compatibility (BC) is a key measure to achieve this: a subsystem with a stable API can be upgraded smoothly. As part of a broader Quality Assurance effort, the CERN Accelerator Controls group explored methods and tools supporting BC. We investigated two aspects in particular: (1) "Incoming dependencies", to know which part of an API is really used by clients and (2) BC validation, to check that a modification is really backward compatible. We used this approach for Java APIs and for FESA devices (which expose an API in the form of device/property sets). For Java APIs, we gather dependency information by regularly running byte-code analysis on all the 1000 Jar files that belong to the control system and find incoming dependencies (methods calls and inheritance). An Eclipse plug-in we developed shows these incoming dependencies to the developer. If an API method is used by many clients, it has to remain backward compatible. On the other hand, if a method is not used, it can be freely modified. To validate BC, we are exploring the official Eclipse tools (PDE-API tools), and others that check BC without need for invasive technology such as OSGi. For FESA devices, we instrumented key components of our controls system to know which devices and properties are in use. This information is collected in the Controls Database and is used (amongst others) by the FESA design tools in order to prevent the FESA class developer from breaking BC.  
 
WEPMS008 Software Tools for Electrical Quality Assurance in the LHC database, software, hardware, LabView 993
 
  • M. Bednarek
    CERN, Geneva, Switzerland
  • J. Ludwin
    IFJ-PAN, Kraków, Poland
 
  There are over 1600 superconducting magnet circuits in the LHC machine. Many of them consist of a large number of components electrically connected in series. This enhances the sensitivity of the whole circuit to electrical faults of individual components. Furthermore, the circuits are equipped with a large number of instrumentation wires, which are exposed to accidental damage or swapping. In order to ensure safe operation, an Electrical Quality Assurance (ELQA) campaign is needed after each thermal cycle. Due to the complexity of the circuits, as well as their distant geographical distribution (a tunnel of 27 km circumference divided into 8 sectors), suitable software and hardware platforms had to be developed. The software combines an Oracle database, LabView data acquisition applications and PHP-based web follow-up tools. This paper describes the software used for the ELQA of the LHC.  
poster icon Poster WEPMS008 [8.781 MB]  
 
WEPMS020 NSLS-II Booster Power Supplies Control booster, controls, injection, extraction 1018
 
  • P.B. Cheblakov, S.E. Karnaev, S.S. Serednyakov
    BINP SB RAS, Novosibirsk, Russia
  • W. Louie, Y. Tian
    BNL, Upton, Long Island, New York, USA
 
  The NSLS-II booster Power Supplies (PSs) [1] are divided into two groups: ramping PSs, which carry the beam during the booster ramp from 200 MeV up to 3 GeV over a 300 ms time interval, and pulsed PSs, which provide beam injection from the linac and extraction to the Storage Ring. A special set of devices was developed at BNL for the control of the NSLS-II magnet system PSs: the Power Supply Controller (PSC) and the Power Supply Interface (PSI). The PSI has one or two precision 18-bit DACs, nine channels of ADC for each DAC and digital inputs/outputs. It is capable of detecting the status change sequence of the digital inputs with 10 ns resolution. The PSI is placed close to the current regulators and is connected to the PSC via a 50 Mbps fiber-optic data link. The PSC communicates with an EPICS IOC through a 100 Mbps Ethernet port. The main functions of the IOC include ramp curve upload, ADC waveform data download, and control of various process variables. The 256 Mb DDR2 memory on the PSC provides large storage for up to 16 ramping tables for both DACs, and a 20-second waveform recorder for all the ADC channels. The 100 Mbps Ethernet port enables real-time display of 4 ADC waveforms. This paper describes the NSLS-II booster PS control project. Characteristic features of the ramping magnet control and of the pulsed magnet control in a double-injection mode of operation are considered in the paper. First results of the control at the PS test stands are presented.
[1] Power Supply Control System of NSLS-II, Y. Tian, W. Louie, J. Ricciardelli, L.R. Dalesio, G. Ganetis, ICALEPCS2009, Japan
 
poster icon Poster WEPMS020 [1.818 MB]  
 
WEPMU001 Temperature Measurement System of Novosibirsk Free Electron Laser FEL, vacuum, controls, microtron 1044
 
  • S.S. Serednyakov, B.A. Gudkov, V.R. Kozak, E.A. Kuper, P.A. Selivanov, S.V. Tararyshkin
    BINP SB RAS, Novosibirsk, Russia
 
  This paper describes the temperature-monitoring system of Novosibirsk FEL. The main task of this system is to prevent the FEL from being overheated and its individual components from being damaged. The system accumulates information from a large number of temperature sensors installed on different parts of the FEL facility, which allows measuring the temperature of the vacuum chamber, cooling water, and magnetic elements windings. Since the architecture of this system allows processing information not only from temperature sensors, it is also used to measure, for instance, vacuum parameters and some parameters of the cooling water. The software part of this system is integrated into the FEL control system, so readings taken from all sensors are recorded to the database every 30 seconds.  
poster icon Poster WEPMU001 [0.484 MB]  
 
WEPMU006 Architecture for Interlock Systems: Reliability Analysis with Regard to Safety and Availability simulation, extraction, superconducting-magnet, detector 1058
 
  • S. Wagner, A. Apollonio, R. Schmidt, M. Zerlauth
    CERN, Geneva, Switzerland
  • A. Vergara-Fernandez
    ITER Organization, St. Paul lez Durance, France
 
  For accelerators (e.g. LHC) and other large experimental physics facilities (e.g. ITER), the machine protection relies on complex interlock systems. In the design of interlock loops, the choice of the hardware architecture impacts on machine safety and availability. While high machine safety is an inherent requirement, the constraints in terms of availability may differ from one facility to another. For the interlock loops protecting the LHC superconducting magnet circuits, reduced machine availability can be tolerated since shutdowns do not affect the longevity of the equipment. In ITER's case, on the other hand, high availability is required since fast shutdowns cause significant magnet aging. A reliability analysis of various interlock loop architectures has been performed. The analysis, based on an analytical model, compares a 1oo3 (one-out-of-three) and a 2oo3 architecture with a single loop. It yields the probabilities for four scenarios: (1) completed mission (e.g. a physics fill in the LHC or a pulse in ITER without a shutdown being triggered), (2) shutdown because of a failure in the interlock loop, (3) emergency shutdown (e.g. after a quench of a magnet), and (4) missed emergency shutdown (a shutdown is required but the interlock loop fails, possibly leading to severe damage of the facility). Scenario 4 relates to machine safety and together with scenarios 2 and 3 defines the machine availability reflected by scenario 1. This paper presents the results of the analysis on the properties of the different architectures with regard to machine safety and availability.  
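As a generic illustration of why the choice of voting architecture trades safety against availability (this is textbook redundancy arithmetic, not the paper's actual model): if each of three identical channels independently fails to demand a required shutdown with probability p, and spuriously demands one with probability q, then

    P_{\mathrm{missed}}^{1\mathrm{oo}3} = p^{3}, \qquad
    P_{\mathrm{spurious}}^{1\mathrm{oo}3} = 1-(1-q)^{3} \approx 3q,
    P_{\mathrm{missed}}^{2\mathrm{oo}3} = 3p^{2}(1-p)+p^{3}, \qquad
    P_{\mathrm{spurious}}^{2\mathrm{oo}3} = 3q^{2}(1-q)+q^{3},

so a 1oo3 loop is the safer choice (a missed shutdown needs all three channels to fail) at the cost of more spurious shutdowns, while 2oo3 voting suppresses spurious trips at the price of a somewhat higher missed-shutdown probability.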
 
WEPMU007 Securing a Control System: Experiences from ISO 27001 Implementation controls, software, EPICS, network 1062
 
  • V. Vuppala, K.D. Davidson, J. Kusler, J.J. Vincent
    NSCL, East Lansing, Michigan, USA
 
  Recent incidents have emphasized the importance of security and operational continuity for achieving the quality objectives of an organization, and the safety of its personnel and machines. However, security and disaster recovery are either completely ignored or given a low priority during the design and development of an accelerator control system, the underlying technologies, and the overlaid applications. This leads to an operational facility that is easy to breach, and difficult to recover. Retrofitting security into the control system becomes much more difficult during operations. In this paper we describe our experiences in achieving ISO 27001 compliance for NSCL's control system. We illustrate problems faced with securing low-level controls, infrastructure, and applications. We also provide guidelines to address the security and disaster recovery issues upfront during the development phase.  
poster icon Poster WEPMU007 [1.304 MB]  
 
WEPMU008 Access Safety Systems – New Concepts from the LHC Experience controls, injection, site, hardware 1066
 
  • T. Ladzinski, Ch. Delamare, S. Di Luca, T. Hakulinen, L. Hammouti, F. Havart, J.-F. Juget, P. Ninin, R. Nunes, T.R. Riesco, E. Sanchez-Corral Mena, F. Valentini
    CERN, Geneva, Switzerland
 
  The LHC Access Safety System has introduced a number of new concepts into the domain of personnel protection at CERN. These can be grouped into several categories: organisational, architectural and concerning the end-user experience. By anchoring the project on the solid foundations of the IEC 61508/61511 methodology, the CERN team and its contractors managed to design, develop, test and commission on time a SIL3 safety system. The system uses a successful combination of the latest Siemens redundant safety programmable logic controllers with a traditional relay logic hardwired loop. The external envelope barriers used in the LHC include personnel and material access devices, which are interlocked door-booths introducing increased automation of individual access control, thus removing the strain from the operators. These devices ensure the inviolability of the controlled zones by users not holding the required credentials. To this end they are equipped with personnel presence detectors and the access control includes a state of the art biometry check. Building on the LHC experience, new projects targeting the refurbishment of the existing access safety infrastructure in the injector chain have started. This paper summarises the new concepts introduced in the LHC access control and safety systems, discusses the return of experience and outlines the main guiding principles for the renewal stage of the personnel protection systems in the LHC injector chain in a homogeneous manner.  
poster icon Poster WEPMU008 [1.039 MB]  
 
WEPMU009 The Laser MégaJoule Facility: Personnel Security and Safety Interlocks laser, controls, interlocks, GUI 1070
 
  • J.-C. Chapuis, J.P.A. Arnoul, A. Hurst, M.G. Manson
    CEA, Le Barp, France
 
  The French CEA (Commissariat à l'Énergie Atomique) is currently building the LMJ (Laser MégaJoule) at the CEA laboratory CESTA near Bordeaux. The LMJ is designed to deliver about 1.4 MJ of 0.35 μm light to targets for high-energy-density physics experiments. Such an installation entails specific risks related to the presence of intense laser beams and high-voltage laser power amplifiers. Furthermore, the thermonuclear fusion reactions induced by the experiments produce radiation and neutron bursts and also activate some materials in the chamber environment. Both types of risk could be lethal. This paper discusses the SSP (the personnel safety system) that was designed to prevent accidents and protect personnel working in the LMJ. To achieve the safety level imposed by labor law and by the French Safety Authority, the system consists of two independent safety barriers based on different technologies, whose combined effect reduces the occurrence probability of all accident scenarios identified during the risk analysis to an insignificant level.  
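
  As a rough illustration of the two-barrier argument (a sketch with placeholder numbers, not figures from the LMJ risk analysis): if the two barriers rely on different technologies and therefore fail on demand independently with probabilities $p_1$ and $p_2$, the probability that an identified accident scenario is not stopped is $P_{\mathrm{unmitigated}} = p_1 \, p_2$; two barriers each with $p_1 = p_2 = 10^{-3}$ thus give a combined $10^{-6}$ per demand, provided common-cause failures are excluded.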
 
WEPMU010 Automatic Analysis at the Commissioning of the LHC Superconducting Electrical Circuits framework, hardware, GUI, status 1073
 
  • H. Reymond, O.O. Andreassen, C. Charrondière, A. Rijllart, M. Zerlauth
    CERN, Geneva, Switzerland
 
  Since the beginning of 2010 the LHC has been operating in a routine manner, starting with a commissioning phase followed by an operation-for-physics phase. The commissioning of the superconducting electrical circuits requires rigorous test procedures before entering operation. To maximize the beam operation time of the LHC, these tests should be done as fast as the procedures allow. A full commissioning needs 12000 tests and is required after circuits have been warmed above liquid nitrogen temperature. Below this temperature, after an end-of-year break of two months, commissioning needs about 6000 tests. Because the manual analysis of the tests takes a major part of the commissioning time, we have automated the existing analysis tools. We present the way in which these LabVIEW™ applications were automated. We evaluate the gain in commissioning time and the reduction in the number of experts needed on night shift observed during the LHC hardware commissioning campaign of 2011 compared to 2010. We end with an outlook on what can be further optimized.  
poster icon Poster WEPMU010 [3.124 MB]  
 
WEPMU012 First Experiences of Beam Presence Detection Based on Dedicated Beam Position Monitors injection, pick-up, extraction, instrumentation 1081
 
  • A. Jalal, S. Gabourin, M. Gasior, B. Todd
    CERN, Geneva, Switzerland
 
  High intensity particle beam injection into the LHC is only permitted when a low intensity pilot beam is already circulating in the LHC. This requirement addresses some of the risks associated with high intensity injection and is enforced by a so-called Beam Presence Flag (BPF) system, which is part of the interlock chain between the LHC and its injector complex. For the 2010 LHC run, the detection of the presence of this pilot beam was implemented using the LHC Fast Beam Current Transformer (FBCT) system. However, the primary function of the FBCTs, that is, the reliable measurement of beam currents, did not allow the BPF system to satisfy all quality requirements of the LHC Machine Protection System (MPS). Safety requirements associated with high intensity injections triggered the development of a dedicated system based on Beam Position Monitors (BPM). This system was meant to work first in parallel with the FBCT BPF system and eventually replace it. At the end of 2010 and in 2011, this new BPF implementation based on BPMs was designed, built, tested and deployed. This paper reviews both the FBCT and BPM implementations of the BPF system, outlining the changes during the transition period. The paper briefly describes the testing methods, focuses on the results obtained from the tests performed at the end of the 2010 LHC run and shows the changes made for the deployment of the BPM BPF system in the LHC in 2011.  
 
WEPMU013 Development of a Machine Protection System for the Superconducting Beam Test Facility at FERMILAB controls, laser, status, FPGA 1084
 
  • L.R. Carmichael, M.D. Church, R. Neswold, A. Warner
    Fermilab, Batavia, USA
 
  Funding: Operated by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the United States Department of Energy.
Fermilab’s Superconducting RF Beam Test Facility, currently under construction, will produce electron beams capable of damaging the acceleration structures and the beam line vacuum chambers in the event of an aberrant accelerator pulse. The accelerator is being designed with the capability to operate with up to 3000 bunches per macro-pulse, a 5 Hz repetition rate and 1.5 GeV beam energy. It will be able to sustain an average beam power of 72 kW at a bunch charge of 3.2 nC. Operation at full intensity will deposit enough energy in the niobium material to approach its melting point of 2500 °C. In the early phase, with only 3 cryomodules installed, the facility will be capable of generating electron beam energies of 810 MeV and an average beam power approaching 40 kW. In either case a robust Machine Protection System (MPS) is required to mitigate the effects of such large damage potentials. This paper describes the MPS being developed, the system requirements and the controls issues under consideration.
 
poster icon Poster WEPMU013 [0.755 MB]  
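
  The average-power figures quoted in the abstract follow directly from the macro-pulse parameters; a short arithmetic cross-check (numbers taken from the abstract, the beam energy treated as the effective accelerating voltage):

    # Cross-check of the quoted average beam powers (illustrative arithmetic only)
    bunches_per_pulse = 3000
    bunch_charge = 3.2e-9   # C
    rep_rate = 5.0          # Hz

    for beam_energy_eV in (1.5e9, 0.81e9):   # full facility vs. 3-cryomodule phase
        charge_per_pulse = bunches_per_pulse * bunch_charge        # C per macro-pulse
        energy_per_pulse = charge_per_pulse * beam_energy_eV       # J
        avg_power = energy_per_pulse * rep_rate                    # W
        print(f"{beam_energy_eV / 1e9:.2f} GeV: {energy_per_pulse / 1e3:.1f} kJ per macro-pulse, "
              f"{avg_power / 1e3:.1f} kW average")
    # -> 1.50 GeV: 14.4 kJ per macro-pulse, 72.0 kW average
    # -> 0.81 GeV:  7.8 kJ per macro-pulse, 38.9 kW average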
 
WEPMU016 Pre-Operation, During Operation and Post-Operational Verification of Protection Systems injection, controls, software, database 1090
 
  • I. Romera, M. Audrain
    CERN, Geneva, Switzerland
 
  This paper provides an overview of the software checks performed on the Beam Interlock System to ensure that the system is functioning to specification. Critical protection functions are implemented in hardware; at the same time, software tools play an important role in guaranteeing the correct configuration and operation of the system during all phases of operation. This paper describes the tests carried out before, during and after operation; if the integrity of the protection system is not assured, subsequent injections of beam into the LHC are inhibited.  
 
WEPMU019 First Operational Experience with the LHC Beam Dump Trigger Synchronisation Unit software, hardware, embedded, monitoring 1100
 
  • A. Antoine, C. Boucly, P. Juteau, N. Magnin, N. Voumard
    CERN, Geneva, Switzerland
 
  Two LHC Beam Dumping Systems (LBDS) remove the counter-rotating beams safely from the collider during setting up of the accelerator, at the end of a physics run and in case of emergencies. Dump requests can come from 3 different sources: the machine protection system in emergency cases, the machine timing system for scheduled dumps, or the LBDS itself in case of internal failures. These dump requests are synchronised with the 3 μs beam abort gap in a fail-safe, redundant Trigger Synchronisation Unit (TSU) based on Digital Phase Locked Loops (DPLL), locked onto the LHC beam revolution frequency with a maximum phase error of 40 ns. The synchronised trigger pulses coming out of the TSU are then distributed to the high voltage generators of the beam dump kickers through a redundant, fault-tolerant trigger distribution system. This paper describes the operational experience gained with the TSU since their commissioning with beam in 2009, and highlights the improvements which have been implemented for safer operation. These include enhanced diagnostics and monitoring functionality, a more automated validation of the hardware and embedded firmware before deployment, and the execution of a post-operational analysis of the TSU performance after each dump action. In the light of this first experience, the outcome of the external review performed in 2010 is presented. The lessons learnt on the project life-cycle for the design of mission-critical electronic modules are discussed.  
poster icon Poster WEPMU019 [1.220 MB]  
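
  For scale, a short calculation relating the numbers quoted above (the 26.66 km circumference used here is an approximation for illustration, not a figure from the paper): the DPLLs lock onto a revolution period of roughly 89 μs, and the 40 ns phase-error budget is of the order of 1% of the 3 μs abort gap.

    # Timing context for the figures quoted above; the circumference is approximate.
    C = 26_659.0               # m, LHC circumference (approximation)
    c = 299_792_458.0          # m/s

    t_rev = C / c              # revolution period of an ultra-relativistic beam
    abort_gap = 3e-6           # s, particle-free gap used for the dump-kicker rise
    max_phase_error = 40e-9    # s, maximum tolerated DPLL phase error

    print(f"revolution period  : {t_rev * 1e6:.1f} us ({1 / t_rev / 1e3:.2f} kHz)")
    print(f"phase error budget : {max_phase_error / abort_gap:.1%} of the abort gap")
    # -> about 88.9 us (11.25 kHz); 40 ns is roughly 1.3% of the 3 us gap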
 
WEPMU020 LHC Collimator Controls for a Safe LHC Operation controls, injection, FPGA, survey 1104
 
  • S. Redaelli, R.W. Assmann, M. Donzé, R. Losito, A. Masi
    CERN, Geneva, Switzerland
 
  The beam stored energy at the Large Hadron Collider (LHC) will be up to 360 MJ, to be compared with the quench limit of the superconducting magnets of a few mJ per cm3 and with the damage limit of metal of a few hundred kJ. The LHC collimation system is designed to protect the machine against beam losses and consists of 108 collimators, 100 of which are movable, located along the 27 km long ring and in the transfer lines. Each collimator has two jaws controlled by four stepping motors to precisely adjust the collimator position and angle with respect to the beam. Stepping motors have been used to ensure high position reproducibility. LVDTs and resolvers have been installed to monitor the jaw positions and the collimator gaps in real time at 100 Hz. The cleaning performance and machine protection role of the system depend critically on accurate jaw positioning. A fully redundant survey system has been developed to ensure that the collimators dynamically follow the optimum settings in all phases of the LHC operational cycle. Jaw positions and collimator gaps are interlocked against dump limits defined redundantly as functions of time, of the beam energy and of the beta* functions that describe the focusing properties of the beams. In this paper, the architectural choices that guarantee a safe LHC operation are presented. Hardware and software implementations that ensure the required reliability are described. The operational experience accumulated so far is reviewed and a detailed failure analysis that shows the fulfillment of the machine protection specifications is presented.  
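
  A hedged sketch of the kind of position interlock described above; the limit parametrization, thresholds and numbers below are hypothetical placeholders, not the actual LHC collimation settings:

    # Hypothetical limit parametrization and interlock check (not LHC settings).
    def gap_window_mm(energy_GeV: float, beta_star_m: float) -> tuple:
        """Allowed collimator-gap window as a function of beam energy and beta*."""
        # Placeholder: gaps shrink as the energy ramps up and beta* is squeezed.
        nominal = 20.0 * (450.0 / energy_GeV) ** 0.5 * (beta_star_m / 3.5) ** 0.5
        return 0.8 * nominal, 1.2 * nominal

    def gap_interlock_ok(measured_gap_mm: float, energy_GeV: float,
                         beta_star_m: float) -> bool:
        """True if the gap read back (e.g. from the LVDTs at 100 Hz) lies inside
        the allowed window; False would request a beam dump."""
        low, high = gap_window_mm(energy_GeV, beta_star_m)
        return low <= measured_gap_mm <= high

    print(gap_interlock_ok(measured_gap_mm=4.5, energy_GeV=3500.0, beta_star_m=1.5))
    # -> True: 4.5 mm lies inside the energy- and beta*-dependent window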
 
WEPMU022 Quality-Safety Management and Protective Systems for SPES controls, monitoring, proton, radiation 1108
 
  • S. Canella, D. Benini
    INFN/LNL, Legnaro (PD), Italy
 
  SPES (Selective Production of Exotic Species) is an INFN project to produce Radioactive Ion Beams (RIB) at the Laboratori Nazionali di Legnaro (LNL). The RIBs will be produced by proton-induced fission in a direct UCx target. In SPES the proton driver will be a cyclotron with variable energy (15-70 MeV) and a maximum current of 0.750 mA on two exit ports. The SPES Access Control System and Dose Monitoring will be integrated into the facility's Protective System to achieve the necessary high degree of safety and reliability and to prevent dangerous situations for people, the environment and the facility itself. A Quality and Safety Management System for SPES (QSMS) will be set up at LNL to manage all phases of the project (from design to decommissioning), including the commissioning and operation of the cyclotron. The Protective System, its documents, data and procedures will be among the first items considered for the implementation of the SPES QSMS. Here a general overview of the SPES Radiation Protection System, its planned architecture, data and procedures, together with their integration into the QSMS, is presented.  
poster icon Poster WEPMU022 [1.092 MB]  
 
WEPMU023 External Post-Operational Checks for the LHC Beam Dumping System controls, kicker, injection, extraction 1111
 
  • N. Magnin, V. Baggiolini, E. Carlier, B. Goddard, R. Gorbonosov, D. Khasbulatov, J.A. Uythoven, M. Zerlauth
    CERN, Geneva, Switzerland
 
  The LHC Beam Dumping System (LBDS) is a critical part of the LHC machine protection system. After every LHC beam dump action the various signals and transient data recordings of the beam dumping control systems and beam instrumentation measurements are automatically analysed by the eXternal Post-Operational Checks (XPOC) system to verify the correct execution of the dump action and the integrity of the related equipment. This software system complements the LHC machine protection hardware, and has to ascertain that the beam dumping system is ‘as good as new’ before the start of the next operational cycle. This is the only way by which the stringent reliability requirements can be met. The XPOC system has been developed within the framework of the LHC “Post-Mortem” system, allowing highly dependable data acquisition, data archiving, live analysis of acquired data and replay of previously recorded events. It is composed of various analysis modules, each one dedicated to the analysis of measurements coming from specific equipment. This paper describes the global architecture of the XPOC system and gives examples of the analyses performed by some of the most important analysis modules. It explains the integration of the XPOC into the LHC control infrastructure along with its integration into the decision chain to allow proceeding with beam operation. Finally, it discusses the operational experience with the XPOC system acquired during the first years of LHC operation, and illustrates examples of internal system faults or abnormal beam dump executions which it has detected.  
poster icon Poster WEPMU023 [1.768 MB]  
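
  A minimal sketch of the modular analysis structure described above; the class names, the data layout and the tolerance are illustrative placeholders, not the actual XPOC interfaces:

    from abc import ABC, abstractmethod

    class AnalysisModule(ABC):
        """One module per equipment system; each judges a single dump event."""
        @abstractmethod
        def analyse(self, dump_event: dict):
            """Return (ok, message) for the recorded transient data."""

    class KickerWaveformModule(AnalysisModule):
        def analyse(self, dump_event):
            peak = max(dump_event["kicker_current_A"])
            ok = abs(peak - dump_event["expected_current_A"]) < 50.0  # placeholder tolerance
            return ok, f"kicker peak current {peak:.0f} A"

    def run_xpoc(dump_event, modules):
        """Accept the dump only if every module reports success; a failure would
        block further beam operation until an expert signs the event off."""
        results = [module.analyse(dump_event) for module in modules]
        return all(ok for ok, _ in results), results

    event = {"kicker_current_A": [0.0, 18500.0, 18480.0], "expected_current_A": 18500.0}
    print(run_xpoc(event, [KickerWaveformModule()]))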
 
WEPMU028 Development Status of Personnel Protection System for IFMIF/EVEDA Accelerator Prototype radiation, controls, monitoring, status 1126
 
  • T. Kojima, T. Narita, K. Nishiyama, H. Sakaki, H. Takahashi, K. Tsutsumi
    Japan Atomic Energy Agency (JAEA), International Fusion Energy Research Center (IFERC), Rokkasho, Kamikita, Aomori, Japan
 
  The control system for the IFMIF/EVEDA* accelerator prototype consists of six subsystems: the Central Control System (CCS), Local Area Network (LAN), Personnel Protection System (PPS), Machine Protection System (MPS), Timing System (TS) and Local Control System (LCS). The IFMIF/EVEDA accelerator prototype provides a deuteron beam with a power greater than 1 MW, which is the same as that of J-PARC and SNS. The PPS is required to protect technical and engineering staff against unnecessary exposure, electrical shock hazards and other dangerous phenomena. The PPS has two functions: building management and accelerator management. For both, Programmable Logic Controllers (PLCs), monitoring cameras, limit switches, etc. are used for the interlock system, and a sequence is programmed for entering and leaving the controlled area. This article presents the PPS design and its interfaces to each accelerator subsystem in detail.
* International Fusion Material Irradiation Facility / Engineering Validation and Engineering Design Activity
 
poster icon Poster WEPMU028 [1.164 MB]  
 
WEPMU031 Virtualization in Control System Environment controls, EPICS, network, hardware 1138
 
  • L.R. Shen, D.K. Liu, T. Wan
    SINAP, Shanghai, People's Republic of China
 
  In a large-scale distributed control system there are many common services that make up the environment of the entire control system, such as servers for the common software base library, application servers, archive servers and so on. This paper describes a virtualization of a control system environment, covering the virtualization of servers, storage, the network and applications. Using a virtualized instance of the EPICS-based control system environment built with VMware vSphere v4, we tested the full functionality of this environment in the SSRF control system, including the common servers for NFS, NIS, NTP, booting and the EPICS base and extension library tools. We also virtualized application servers such as the archive server, alarm server, EPICS gateway and all of the network-based IOCs. In particular, we successfully tested high availability (HA) and vMotion for EPICS asynchronous IOCs under the different VLAN configurations of the current SSRF control system network.  
 
WEPMU033 Monitoring Control Applications at CERN controls, monitoring, framework, software 1141
 
  • F. Varela, F.B. Bernard, M. Gonzalez-Berges, H. Milcent, L.B. Petrova
    CERN, Geneva, Switzerland
 
  The Industrial Controls and Engineering (EN-ICE) group of the Engineering Department at CERN has produced, and is responsible for the operation of, around 60 applications which control critical processes in the domains of cryogenics, quench protection systems, power interlocks for the Large Hadron Collider and other sub-systems of the accelerator complex. These applications require 24/7 operation and a quick reaction to problems. For this reason EN-ICE is presently developing a monitoring tool to detect, anticipate and inform of possible anomalies in the integrity of the applications. The tool builds on top of the Simatic WinCC Open Architecture (formerly PVSS) SCADA system and makes use of the Joint COntrols Project (JCOP) and UNICOS frameworks developed at CERN. The tool provides centralized monitoring of the different elements composing the control systems, such as Windows and Linux servers, PLCs, applications, etc. Although the primary aim of the tool is to assist the members of the EN-ICE Standby Service, the tool can present different levels of detail of the systems depending on the user, which enables experts to diagnose and troubleshoot problems. In this paper, the scope, functionality and architecture of the tool are presented and some initial results on its performance are summarized.  
poster icon Poster WEPMU033 [1.719 MB]  
 
WEPMU038 Network Security System and Method for RIBF Control System controls, network, EPICS, status 1161
 
  • A. Uchiyama
    SHI Accelerator Service Ltd., Tokyo, Japan
  • M. Fujimaki, N. Fukunishi, M. Komiyama, R. Koyama
    RIKEN Nishina Center, Wako, Japan
 
  At the RIKEN RI Beam Factory (RIBF), the local area network for the accelerator control system (the control system network) consists of commercially produced Ethernet switches, optical fibres and metal cables. E-mail and Internet access for tasks unrelated to accelerator operation are provided by the RIKEN virtual LAN (VLAN), which serves as the office network. From the viewpoint of information security, we decided to separate the control system network from the Internet and to operate it independently of the VLAN. However, this was inconvenient for users, because the information and status of accelerator operation could no longer be monitored from their offices in real time. To improve this situation, we have constructed a secure system which allows users on the VLAN to obtain accelerator information from the control system network while preventing outsiders from accessing it. To allow access from the VLAN to the control system network, we set up a reverse proxy server and a firewall. In addition, we implemented a system that sends e-mail security alerts from the control system network to the VLAN. In this contribution, we report on this system and its present status in detail.  
poster icon Poster WEPMU038 [45.776 MB]  
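
  A minimal sketch of such a read-only gateway, assuming a hypothetical status server inside the control system network; the host name, paths and port are placeholders and this is not the RIBF implementation:

    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.request import urlopen

    UPSTREAM = "http://status-server.control.example"   # placeholder host inside the control network
    ALLOWED_PREFIXES = ("/status/", "/beam/")            # read-only pages exported to the office VLAN

    class ReadOnlyProxy(BaseHTTPRequestHandler):
        def do_GET(self):
            if not self.path.startswith(ALLOWED_PREFIXES):
                self.send_error(403, "path not exported to the office network")
                return
            with urlopen(UPSTREAM + self.path, timeout=5) as upstream:
                body = upstream.read()
                ctype = upstream.headers.get("Content-Type", "text/plain")
            self.send_response(200)
            self.send_header("Content-Type", ctype)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        # No do_POST/do_PUT handlers: write access from the office VLAN is simply not offered.

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), ReadOnlyProxy).serve_forever()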
 
WEPMU039 Virtual IO Controllers at J-PARC MR using Xen EPICS, controls, network, Linux 1165
 
  • N. Kamikubota, N. Yamamoto
    J-PARC, KEK & JAEA, Ibaraki-ken, Japan
  • T. Iitsuka, S. Motohashi, M. Takagi, S.Y. Yoshida
    Kanto Information Service (KIS), Accelerator Group, Ibaraki, Japan
  • H. Nemoto
    ACMOS INC., Tokai-mura, Ibaraki, Japan
  • S. Yamada
    KEK, Ibaraki, Japan
 
  The control system for the J-PARC accelerator complex has been developed based on the EPICS toolkit. About 100 traditional ("real") VME-bus computers are used as EPICS IOCs in the control system for the J-PARC MR (Main Ring). Recently, we have introduced "virtual" IOCs using Xen, an open-source virtual machine monitor. Scientific Linux with an EPICS iocCore runs on a Xen virtual machine. EPICS databases for network devices and EPICS soft records can be configured. Multiple virtual IOCs run on a high-performance blade-type server running Scientific Linux as the native OS. A small number of virtual IOCs have been demonstrated in MR operation since October 2010. Experience and future perspectives are discussed.  
 
THBHAUST02 The Wonderland of Operating the ALICE Experiment detector, experiment, controls, interface 1182
 
  • A. Augustinus, P.Ch. Chochula, G. De Cataldo, L.S. Jirdén, A.N. Kurepin, M. Lechman, O. Pinazza, P. Rosinský
    CERN, Geneva, Switzerland
  • A. Moreno
    Universidad Politécnica de Madrid, E.T.S.I Industriales, Madrid, Spain
 
  ALICE is one of the experiments at the Large Hadron Collider (LHC) at CERN (Geneva, Switzerland). It is composed of 18 sub-detectors, each with numerous subsystems that need to be controlled and operated in a safe and efficient way. The Detector Control System (DCS) is the key to this and was used successfully by detector experts during the commissioning of the individual detectors. With the transition from commissioning to operation, more and more tasks were transferred from detector experts to central operators. By the end of the 2010 data-taking campaign the ALICE experiment was run by a small crew of central operators, with only a single controls operator. The transition from expert to non-expert operation constituted a real challenge in terms of tools, documentation and training. In addition, the relatively high turnover and diversity of the operator crew that is specific to the HEP experiment environment (as opposed to the more stable operation crews of accelerators) made this challenge even bigger. This paper describes the original architectural choices that were made and the key components that led to a homogeneous control system allowing efficient centralized operation. Challenges and specific constraints that apply to the operation of a large, complex experiment are described. Emphasis is put on the tools and procedures that were implemented to allow the transition from local operation by detector experts during commissioning and early operation to efficient centralized operation by a small operator crew not necessarily consisting of experts.  
slides icon Slides THBHAUST02 [1.933 MB]  
 
THBHAUST04 jddd, a State-of-the-art Solution for Control Panel Development controls, software, feedback, status 1189
 
  • E. Sombrowski, A. Petrosyan, K. Rehlich, W. Schütte
    DESY, Hamburg, Germany
 
  Software for graphical user interfaces to control systems may be developed as a rich or thin client. The thin client approach has the advantage that anyone can create and modify control system panels without specific skills in software programming. The Java DOOCS Data Display, jddd, is based on the thin client interaction model. It provides "Include" components and address inheritance for the creation of generic displays. Wildcard operations and regular expression filters are used to customize the graphics content at runtime, e.g. in a "DynamicList" component the parameters have to be painted only once in edit mode and then are automatically displayed multiple times for all available instances in run mode. This paper will describe the benefits of using jddd for control panel design as an alternative to rich client development.  
slides icon Slides THBHAUST04 [0.687 MB]  
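
  jddd itself is a Java/DOOCS tool; purely as an illustration, the sketch below shows in Python how a regular-expression filter can select, at runtime, the device instances for which a generic list-style component is repeated (the addresses are made up for the example):

    import re

    # Hypothetical DOOCS-like addresses, invented for the example
    available_addresses = [
        "FLASH.RF/LLRF.CONTROLLER/VS.ACC1/AMPL",
        "FLASH.RF/LLRF.CONTROLLER/VS.ACC23/AMPL",
        "FLASH.RF/LLRF.CONTROLLER/VS.ACC45/PHASE",
    ]

    def expand_template(pattern: str, addresses):
        """Return the addresses matching the filter; a list-style component would
        then paint one instance of its template per match at runtime."""
        rx = re.compile(pattern)
        return [a for a in addresses if rx.search(a)]

    print(expand_template(r"/AMPL$", available_addresses))
    # -> both amplitude channels; the phase channel is filtered out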
 
THBHAUST05 First Operation of the Wide-area Remote Experiment System experiment, radiation, controls, synchrotron 1193
 
  • Y. Furukawa, K. Hasegawa
    JASRI/SPring-8, Hyogo-ken, Japan
  • G. Ueno
    RIKEN Spring-8 Harima, Hyogo, Japan
 
  The Wide-area Remote Experiment System (WRES) at SPring-8 has been successfully developed [1]. The system communicates with remote users over SSL/TLS with bi-directional authentication, to prevent interference from unauthorized access. It includes a message filtering system that allows remote users access only to the corresponding beamline equipment, and a safety interlock system to protect people near the experimental station from accidental motion of heavy equipment. The system also has a video streaming system to monitor samples and experimental equipment. We have tested the system from the points of view of safety, stability and reliability, and successfully performed the first experiment from a remote site, the RIKEN Wako campus 480 km away from SPring-8, at the end of October 2010.
[1] Y. Furukawa, K. Hasegawa, D. Maeda, G. Ueno, "Development of remote experiment system", Proc. ICALEPCS 2009(Kobe, Japan) P.615
 
slides icon Slides THBHAUST05 [5.455 MB]  
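
  A minimal sketch of bi-directional (mutual) SSL/TLS authentication using Python's standard ssl module; the certificate file names and port are placeholders, and this is not the actual WRES implementation:

    import socket
    import ssl

    # Placeholder certificate files; both sides must present a valid certificate.
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.verify_mode = ssl.CERT_REQUIRED                      # reject clients without a certificate
    context.load_cert_chain(certfile="server.crt", keyfile="server.key")
    context.load_verify_locations(cafile="trusted_remote_users.pem")

    with socket.create_server(("0.0.0.0", 8443)) as raw_server:
        with context.wrap_socket(raw_server, server_side=True) as server:
            conn, addr = server.accept()                         # handshake fails unless both ends authenticate
            print("authenticated remote user:", conn.getpeercert().get("subject"))
            conn.close()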
 
THBHAUIO06 Cognitive Ergonomics of Operational Tools controls, interface, power-supply, software 1196
 
  • A. Lüdeke
    Paul Scherrer Institut, 5232 Villigen PSI, Switzerland
 
  Control systems have become continuously more powerful over the past decades. The ability to achieve high data throughput and sophisticated graphical interaction has opened up a variety of new possibilities. But has it helped to provide intuitive, easy-to-use applications that simplify the operation of modern large-scale accelerator facilities? We will discuss what makes an application useful for operation and what is necessary to make a tool easy to use. We will show that even the implementation of a small number of simple design rules for applications can help to ease the operation of a facility.  
slides icon Slides THBHAUIO06 [23.914 MB]  
 
THBHMUST03 System Design towards Higher Availability for Large Distributed Control Systems controls, hardware, network, neutron 1209
 
  • S.M. Hartman
    ORNL, Oak Ridge, Tennessee, USA
 
  Funding: SNS is managed by UT-Battelle, LLC, under contract DE-AC05-00OR22725 for the U.S. Department of Energy
Large distributed control systems for particle accelerators present a complex system engineering challenge. The system, with its significant quantity of components and their complex interactions, must be able to support reliable accelerator operations while providing the flexibility to accommodate changing requirements. System design and architecture focused on required data flow are key to ensuring high control system availability. Using examples from the operational experience of the Spallation Neutron Source at Oak Ridge National Laboratory, recommendations will be presented for leveraging current technologies to design systems for high availability in future large scale projects.
 
slides icon Slides THBHMUST03 [7.833 MB]  
 
THBHMUST04 The Software Improvement Process – Tools and Rules to Encourage Quality software, controls, feedback, FEL 1212
 
  • K. Sigerud, V. Baggiolini
    CERN, Geneva, Switzerland
 
  The Applications section of the CERN accelerator controls group has decided to apply a systematic approach to quality assurance (QA), the "Software Improvement Process" (SIP). This process focuses on three areas: the development process itself, suitable QA tools, and how to practically encourage developers to do QA. For each stage of the development process we have agreed on the recommended activities and deliverables, and identified tools to automate and support the task. For example, we now do more code reviews. As peer reviews are resource-intensive, we only do them for complex parts of a product. As a complement, we are using static code checking tools, like FindBugs and Checkstyle. We also encourage unit testing and have agreed on a minimum level of test coverage recommended for all products, measured using Clover. Each of these tools is well integrated with our IDE (Eclipse) and gives instant feedback to the developer about the quality of their code. The major challenges of SIP have been to 1) agree on common standards and configurations, for example common code formatting and Javadoc documentation guidelines, and 2) encourage the developers to do QA. To address the second point, we have successfully implemented 'SIP days', i.e. one day dedicated to QA work in which the whole group of developers participates, and 'Top/Flop' lists, clearly indicating the best and worst products with regard to SIP guidelines and standards, for example test coverage. This paper presents the SIP initiative in more detail, summarizing our experience over the past two years and our future plans.  
slides icon Slides THBHMUST04 [5.638 MB]  
 
FRAAULT03 Development of the Diamond Light Source PSS in conformance with EN 61508 database, controls, interlocks, radiation 1289
 
  • M.C. Wilson, A.G. Price
    Diamond, Oxfordshire, United Kingdom
 
  Diamond Light Source is constructing a third phase (Phase III) of photon beamlines and experiment stations. Experience gained in the design, realization and operation of the Personnel Safety Systems (PSS) on the first two phases of beamlines is being used to improve the design process for this development. Information on the safety functionality of the Phase I and Phase II photon beamlines is maintained in a hazard database, from which reports are generated to assist in the design, verification and validation of the new PSSs. The data are used to make comparisons between beamlines, to validate safety functions and to record documentation for each beamline. This forms part of the documentation process demonstrating conformance to EN 61508.  
slides icon Slides FRAAULT03 [0.372 MB]  
 
FRAAULT04 Centralised Coordinated Control to Protect the JET ITER-like Wall controls, plasma, real-time, diagnostics 1293
 
  • A.V. Stephen, G. Arnoux, T. Budd, P. Card, R.C. Felton, A. Goodyear, J. Harling, D. Kinna, P.J. Lomas, P. McCullen, P.D. Thomas, I.D. Young, K-D. Zastrow
    CCFE, Abingdon, Oxon, United Kingdom
  • D. Alves, D.F. Valcárcel
    IST, Lisboa, Portugal
  • S. Devaux
    MPI/IPP, Garching, Germany
  • S. Jachmich
    RMA, Brussels, Belgium
  • A. Neto
    IPFN, Lisbon, Portugal
 
  Funding: This work was carried out within the framework of the European Fusion Development Agreement. This work was also part-funded by the RCUK Energy Programme under grant EP/I501045.
The JET ITER-like wall project replaces the first wall carbon fibre composite tiles with beryllium and tungsten tiles which should have improved fuel retention characteristics but are less thermally robust. An enhanced protection system using new control and diagnostic systems has been designed which can modify the pre-planned experimental control to protect the new wall. Key design challenges were to extend the Level-1 supervisory control system to allow configurable responses to thermal problems to be defined without introducing excessive complexity, and to integrate the new functionality with existing control and protection systems efficiently and reliably. Alarms are generated by the vessel thermal map (VTM) system if infra-red camera measurements of tile temperatures are too high and by the plasma wall load system (WALLS) if component power limits are exceeded. The design introduces two new concepts: local protection, which inhibits individual heating components but allows the discharge to proceed, and stop responses, which allow highly configurable early termination of the pulse in the safest way for the plasma conditions and type of alarm. These are implemented via the new real-time protection system (RTPS), a centralised controller which responds to the VTM and WALLS alarms by providing override commands to the plasma shape, current, density and heating controllers. This paper describes the design and implementation of the RTPS system which is built with the Multithreaded Application Real-Time executor (MARTe) and will present results from initial operations.
 
slides icon Slides FRAAULT04 [2.276 MB]  
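
  A hedged sketch of the two response classes introduced above, as a configurable mapping from alarms to actions; the alarm names, severities and responses are illustrative placeholders, not the actual VTM/WALLS signals or the JET response catalogue:

    from enum import Enum, auto

    class Response(Enum):
        LOCAL_PROTECTION = auto()   # inhibit one heating component, keep the discharge
        SOFT_STOP = auto()          # terminate the pulse gently, matched to plasma conditions
        FAST_STOP = auto()          # terminate as quickly as safely possible

    # Configurable mapping from (alarm source, severity) to the selected response
    RESPONSE_TABLE = {
        ("VTM_TILE_OVERTEMP", "warning"):  Response.LOCAL_PROTECTION,
        ("VTM_TILE_OVERTEMP", "critical"): Response.SOFT_STOP,
        ("WALLS_POWER_LIMIT", "critical"): Response.FAST_STOP,
    }

    def dispatch(alarm: str, severity: str) -> Response:
        """Pick the configured response; unknown alarms default to the safest stop."""
        return RESPONSE_TABLE.get((alarm, severity), Response.FAST_STOP)

    print(dispatch("VTM_TILE_OVERTEMP", "warning"))   # -> Response.LOCAL_PROTECTION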
 
FRBHMUST03 Thirty Meter Telescope Observatory Software Architecture software, controls, hardware, software-architecture 1326
 
  • K.K. Gillies, C. Boyer
    TMT, Pasadena, California, USA
 
  The Thirty Meter Telescope (TMT) will be a ground-based, 30-m optical-IR telescope with a highly segmented primary mirror located on the summit of Mauna Kea in Hawaii. The TMT Observatory Software (OSW) system will deliver the software applications and infrastructure necessary to integrate all TMT software into a single system and implement a minimal end-to-end science operations system. At the telescope, OSW is focused on the task of integrating and efficiently controlling and coordinating the telescope, adaptive optics, science instruments, and their subsystems during observation execution. From the software architecture viewpoint, the software system is viewed as a set of software components distributed across many machines that are integrated using a shared software base and a set of services that provide communications and other needed functionality. This paper describes the current state of the TMT Observatory Software focusing on its unique requirements, architecture, and the use of middleware technologies and solutions that enable the OSW design.  
slides icon Slides FRBHMUST03 [3.788 MB]  
 
FRBHMULT04 Towards a State Based Control Architecture for Large Telescopes: Laying a Foundation at the VLT controls, software, distributed, interface 1330
 
  • R. Karban, N. Kornweibel
    ESO, Garching bei Muenchen, Germany
  • D.L. Dvorak, M.D. Ingham, D.A. Wagner
    JPL, Pasadena, California, USA
 
  Large telescopes are characterized by a high level of distribution of control-related tasks and will feature diverse data flow patterns and large ranges of sampling frequencies; there will often be no single, fixed server-client relationship between the control tasks. The architecture is also challenged by the task of integrating heterogeneous subsystems which will be delivered by multiple different contractors. Due to the high number of distributed components, the control system needs to effectively detect errors and faults, impede their propagation, and accurately mitigate them in the shortest time possible, enabling the service to be restored. The presented data-driven architecture is based on a decentralized approach with an end-to-end integration of disparate, independently developed software components, using a high-performance, standards-based communication middleware infrastructure based on the Data Distribution Service. A set of rules and principles, based on JPL's State Analysis method and architecture, is established to avoid undisciplined component-to-component interactions, with the Control System and the System Under Control clearly separated. State Analysis provides a model-based process for capturing system and software requirements and design, helping to reduce the gap between the requirements on software specified by systems engineers and the implementation by software engineers. The method and architecture have been field-tested at the Very Large Telescope, where they have been integrated into an operational system with minimal downtime.  
slides icon Slides FRBHMULT04 [3.504 MB]