MOPMS —  Poster   (10-Oct-11   16:30—18:00)
Chair: J.M. Meyer, ESRF, Grenoble, France
Paper Title Page
MOPMS001 The New Control System for the Vacuum of ISOLDE 312
 
  • S. Blanchard, F. Bellorini, F.B. Bernard, E. Blanco Vinuela, P. Gomes, H. Vestergard, D. Willeman
    CERN, Geneva, Switzerland
 
  The On-Line Isotope Mass Separator (ISOLDE) is a facility dedicated to the production of radioactive ion beams for nuclear and atomic physics. From the ISOLDE vacuum sectors to the pressurized gas storage tanks there are up to five stages of pumping, for a total of more than one hundred pumps, including turbo-molecular, cryo, dry, membrane and oil pumps. The ISOLDE vacuum control system is critical: the volatile radioactive elements present in the exhaust gases and the high and ultra-high vacuum pressure specifications require a complex control and interlock system. This paper describes the re-engineering of the control system, developed using the CERN UNICOS-CPC framework. An additional challenge has been the use of UNICOS-CPC in the vacuum domain for the first time. The process automation provides multiple operating modes (rough pumping, bake-out, high vacuum pumping, regeneration for cryo-pumped sectors, venting, etc.). The control system is composed of local controllers driven by PLCs (logic, interlocks) and a SCADA application (operation, alarm monitoring and diagnostics).  
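  As a purely illustrative aside (not part of the paper), the mode sequencing described above can be pictured as a small state machine. The Python sketch below uses the mode names from the abstract, while the pressure thresholds and transition rules are hypothetical; the real logic is implemented with UNICOS-CPC objects running in the PLCs.

```python
# Illustrative sketch only: the actual logic runs in UNICOS-CPC on Siemens PLCs.
# Mode names follow the abstract; thresholds and transitions are hypothetical.
from enum import Enum, auto

class Mode(Enum):
    VENTED = auto()
    ROUGH_PUMPING = auto()
    HIGH_VACUUM_PUMPING = auto()
    BAKE_OUT = auto()        # operator-requested mode, not sequenced here
    REGENERATION = auto()    # cryo-pumped sectors only

ROUGHING_OK = 1e-2   # mbar, hypothetical cross-over to high-vacuum pumping
HV_OK = 1e-7         # mbar, hypothetical high-vacuum threshold

def next_mode(mode: Mode, pressure_mbar: float, cryo_sector: bool,
              cryo_saturated: bool) -> Mode:
    """Very simplified mode sequencing for one vacuum sector."""
    if mode is Mode.VENTED:
        return Mode.ROUGH_PUMPING
    if mode is Mode.ROUGH_PUMPING and pressure_mbar < ROUGHING_OK:
        return Mode.HIGH_VACUUM_PUMPING
    if mode is Mode.HIGH_VACUUM_PUMPING and cryo_sector and cryo_saturated:
        return Mode.REGENERATION
    return mode

print(next_mode(Mode.ROUGH_PUMPING, 5e-3, cryo_sector=False, cryo_saturated=False))
```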
poster icon Poster MOPMS001 [4.105 MB]  
 
MOPMS002 LHC Survey Laser Tracker Controls Renovation 316
 
  • C. Charrondière, M. Nybø
    CERN, Geneva, Switzerland
 
  The LHC survey laser tracker control system is based on an industrial software package (Axyz) from Leica Geosystems™ that has an interface to Visual Basic 6.0™, which we used to automate the geometric measurements for the LHC magnets. As the Axyz package is no longer supported and the Visual Basic 6.0™ interface would need to be changed to Visual Basic .NET™, we have decided to recode the automation application in LabVIEW™, interfacing to the PC-DMIS software proposed by Leica Geosystems. This presentation describes the existing equipment, interface and application, giving the reasons for our decision to move to PC-DMIS and LabVIEW. We present the experience with the first prototype and make a comparison with the legacy system.  
poster icon Poster MOPMS002 [1.812 MB]  
 
MOPMS003 The Evolution of the Control System for the Electromagnetic Calorimeter of the Compact Muon Solenoid Experiment at the Large Hadron Collider 319
 
  • O. Holme, D.R.S. Di Calafiori, G. Dissertori, W. Lustermann
    ETH, Zurich, Switzerland
  • S. Zelepoukine
    UW-Madison/PD, Madison, Wisconsin, USA
 
  Funding: Swiss National Science Foundation (SNF)
This paper discusses the evolution of the Detector Control System (DCS) designed and implemented for the Electromagnetic Calorimeter (ECAL) of the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC), as well as the operational experience acquired during the LHC physics data taking periods of 2010 and 2011. The current implementation is presented in terms of functionality, together with the planned hardware upgrades. Furthermore, a project for reducing the long-term software maintenance effort, including a year-long detailed analysis of the existing applications, is put forward, and the outcomes which have so far informed the design decisions for the next CMS ECAL DCS software generation are described. The main goals for the new version are to minimize external dependencies, enabling smooth migration to new hardware and software platforms, and to maintain the existing functionality whilst substantially reducing support and maintenance effort through homogenization, simplification and standardization of the control system software.
 
poster icon Poster MOPMS003 [3.508 MB]  
 
MOPMS004 First Experience with VMware Servers at HLS 323
 
  • G. Liu, X. Bao, C. Li, J.G. Wang, K. Xuan
    USTC/NSRL, Hefei, Anhui, People's Republic of China
 
  Hefei Light Source (HLS) is a dedicated second-generation VUV light source, designed and constructed two decades ago. In order to improve the performance of HLS, in particular to obtain higher brilliance and to increase the number of straight sections, an upgrade project is under way, and accordingly a new control system is under construction. VMware vSphere 4 Enterprise Plus is used to build the server system for the HLS control system. Four Dell PowerEdge R710 rack servers and one Dell EqualLogic PS6000E iSCSI SAN comprise the hardware platform. Various servers, such as the file, web, database and NIS servers, together with the softIOC applications, are all integrated into this virtualization platform. A softIOC prototype has been set up and its performance is also given in this paper. High availability and flexibility are achieved at low cost.  
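  For readers unfamiliar with the softIOC concept mentioned above, the sketch below shows a minimal Channel Access server written with the pcaspy Python library. It is not the HLS implementation; the PV prefix, record name and fields are hypothetical, purely for illustration.

```python
# Minimal soft IOC sketch using the pcaspy Channel Access server library.
# The PV prefix, name and fields are hypothetical.
from pcaspy import Driver, SimpleServer

prefix = 'HLS:DEMO:'
pvdb = {
    'TEMP': {'prec': 2, 'unit': 'degC'},   # a hypothetical read-back PV
}

class DemoDriver(Driver):
    def __init__(self):
        super().__init__()
        self.setParam('TEMP', 25.0)        # initial value served to clients

if __name__ == '__main__':
    server = SimpleServer()
    server.createPV(prefix, pvdb)          # register the PVs before the driver
    driver = DemoDriver()
    while True:                            # serve Channel Access requests
        server.process(0.1)
```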
poster icon Poster MOPMS004 [0.463 MB]  
 
MOPMS005 The Upgraded Corrector Control Subsystem for the Nuclotron Main Magnetic Field 326
 
  • V. Andreev, V. Isadov, A. Kirichenko, S. Romanov, G.V. Trubnikov, V. Volkov
    JINR, Dubna, Moscow Region, Russia
 
  This report discusses the control subsystem of the 40 main magnetic field correctors, which is part of the superconducting synchrotron Nuclotron control system. The subsystem is used in static and dynamic (corrector current dependent on the magnetic field value) modes. Development of the subsystem is carried out within the framework of the Nuclotron-NICA project. Principles of digital (PSMBus/RS-485 protocol) and analog control of the correctors' power supplies, current monitoring, and remote control of the subsystem via an IP network are also presented. The first results of the subsystem commissioning are given.  
poster icon Poster MOPMS005 [1.395 MB]  
 
MOPMS006 SARAF Beam Lines Control Systems Design 329
 
  • E. Reinfeld, I. Eliyahu, I.G. Gertz, A. Grin, A. Kreisler, A. Perry, L. Weissman
    Soreq NRC, Yavne, Israel
 
  The first beam line addition to the SARAF facility was completed in Phase I and introduced new hardware to be controlled. This article describes the beam line vacuum, magnet and diagnostics control systems and the design methodology used to achieve a reliable and reusable control system. The vacuum control systems of the accelerator and beam lines have been integrated into a single vacuum control system which controls all the vacuum hardware of both the accelerator and the beam lines. The new system fixes legacy issues and is designed for modularity and simple configuration. Several types of magnetic lenses have been introduced in the new beam line to control the beam direction and optimally focus it on the target. The control system was designed to be modular so that magnets can be quickly and simply inserted or removed. The diagnostics systems control the diagnostic devices used in the beam lines, including data acquisition and measurement. Some of the older control systems were improved and redesigned using modern control hardware and software. The above systems were successfully integrated in the accelerator and are used during beam activation.  
poster icon Poster MOPMS006 [2.537 MB]  
 
MOPMS007 Deep-Seated Cancer Treatment Spot-Scanning Control System 333
 
  • W. Zhang, S. An, G.H. Li, W.F. Liu, W.M. Qiao, Y.P. Wang, F. Yang
    IMP, Lanzhou, People's Republic of China
 
  The hardware of the system is mainly composed of a waveform scanning power supply controller driven by given waveform data, counting cards for dose control, and an event generator system. The software consists of the following components: a program that generates the tumor shape and the corresponding waveform data, the waveform controller (ARM and DSP) programs, the FPGA code of the counting cards, and a COM program for event and data synchronization and transmission.  
 
MOPMS008 Control of the SARAF High Intensity CW Proton Beam Target Systems 336
 
  • I. Eliyahu, D. Berkovits, M. Bisyakoev, I.G. Gertz, S. Halfon, N. Hazenshprung, D. Kijel, E. Reinfeld, I. Silverman, L. Weissman
    Soreq NRC, Yavne, Israel
 
  The first beam line addition to the SARAF facility was completed in Phase I. Two experiments are planned in this new beam line: the Liquid Lithium target and the Foils target. For these we are currently building the hardware and software of their control systems. The Liquid Lithium target is planned to be a powerful neutron source for the accelerator, based on the proton beam of SARAF Phase I. The concept of this target is based on liquid lithium that spins and produces neutrons via the Li7(p,n)Be7 reaction. This target was successfully tested in the laboratory and is intended to be integrated into the accelerator beam line and the control system this year. The Foils target is planned for a radiation experiment designed to examine the problem of radiation damage to metallic foils. To accomplish this we have built a radiation system that enables us to test the foils. The control systems include various diagnostic elements, vacuum, motor control, temperature monitoring, etc., for the two targets mentioned above. These systems were built to be modular, so that in the future new targets can be quickly and simply inserted. This article describes the different control systems for the two targets as well as the design methodology used to achieve reliable and reusable control of those targets.  
poster icon Poster MOPMS008 [1.391 MB]  
 
MOPMS009 IFMIF LLRF Control System Architecture Based on EPICS 339
 
  • J.C. Calvo, A. Ibarra, A. Salom
    CIEMAT, Madrid, Spain
  • M.A. Patricio
    UCM, Colmenarejo, Spain
  • M.L. Rivers
    ANL, Argonne, USA
 
  The IFMIF-EVEDA (International Fusion Materials Irradiation Facility - Engineering Validation and Engineering Design Activity) linear accelerator will be a 9 MeV, 125 mA CW (Continuous Wave) deuteron accelerator prototype to validate the technical options of the accelerator design for IFMIF. The RF (Radio Frequency) power system of IFMIF-EVEDA consists of 18 RF chains working at 175 MHz, each based on three amplification stages. The LLRF system provides the RF drive input of the RF plants. It controls the amplitude and phase of this signal so that it is synchronized with the beam, and it also controls the resonance frequency of the cavities. The system is based on a commercial cPCI FPGA board provided by Lyrtech and controlled by a Windows host PC. For this purpose, the cPCI FPGA board must be made accessible through EPICS Channel Access, by building an IOC (Input/Output Controller) between the Lyrtech board and EPICS. A new software architecture for designing the device support, using the asynPortDriver class and CSS as GUI (Graphical User Interface), is presented.  
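  Once such an IOC publishes the board's registers as process variables, any Channel Access client can operate it. The following pyepics sketch only illustrates that interaction; the PV names for the amplitude/phase set-points and the detuning read-back are hypothetical, not those of the IFMIF-EVEDA system.

```python
# Client-side sketch using pyepics; the PV names are hypothetical examples of
# the kind of set-points and read-backs such an LLRF IOC could expose.
from epics import caget, caput, camonitor
import time

caput('LLRF:CHAIN1:AMP_SP', 0.75)        # amplitude set-point (arbitrary units)
caput('LLRF:CHAIN1:PHASE_SP', 12.5)      # phase set-point (degrees)

amp_rb = caget('LLRF:CHAIN1:AMP_RB')     # read back the applied amplitude
print('amplitude readback:', amp_rb)

# Subscribe to a cavity detuning PV and print updates for a few seconds.
camonitor('LLRF:CHAIN1:DETUNE',
          callback=lambda pvname, value, **kw: print(pvname, value))
time.sleep(5)
```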
poster icon Poster MOPMS009 [2.763 MB]  
 
MOPMS010 LANSCE Control System Front-End and Infrastructure Hardware Upgrades 343
 
  • M. Pieck, D. Baros, C.D. Hatch, P.S. Marroquin, P.D. Olivas, F.E. Shelley, D.S. Warren, W. Winton
    LANL, Los Alamos, New Mexico, USA
 
  Funding: This work has benefited from the use of LANSCE at LANL. This facility is funded by the US DoE and operated by Los Alamos National Security for NNSA, Contract DE-AC52-06NA25396. LA-UR-11-10228
The Los Alamos Neutron Science Center (LANSCE) linear accelerator drives user facilities for isotope production, proton radiography, ultra-cold neutrons, weapons neutron research and various sciences using neutron scattering. The LANSCE Control System (LCS), which is in part 30 years old, provides control and data monitoring for most devices in the linac and for some of its associated experimental-area beam lines. In Fiscal Year 2011, the control system went through an upgrade process that affected different areas of the LCS. We improved our network infrastructure and converted part of our front-end control system hardware to Allen-Bradley ControlLogix 5000 and National Instruments CompactRIO programmable automation controllers (PACs). In this paper, we discuss what we have done, what we have learned about upgrading the existing control system, and how this will affect our future plans.
 
 
MOPMS011 The Evolution of the HADES Slow Control System
 
  • B.W. Kolb
    GSI, Darmstadt, Germany
 
  The EPICS-based slow control system of the HADES detector has evolved over the last 12 years, going from CAMAC and VME modules to custom-built boards distributed over the whole detector. Only a few commercial components, such as HV and LV power supplies, are still used. The development was mainly driven by the demands of the data acquisition for faster and more parallel readout. A new plastic optical fiber network for data acquisition, trigger and slow control will be described.  
slides icon Slides MOPMS011 [5.980 MB]  
 
MOPMS013 Progress in the Conversion of the In-house Developed Control System to EPICS and related technologies at iThemba LABS 347
 
  • I.H. Kohler, M.A. Crombie, C. Ellis, M.E. Hogan, H.W. Mostert, M. Mvungi, C. Oliva, J.V. Pilcher, N. Stodart
    iThemba LABS, Somerset West, South Africa
 
  This paper highlights challenges associated with the upgrading of the iThemba LABS control system. Issues include maintaining an ageing control system based on a LAN of PCs running OS/2 and in-house developed C code, with hardware interfacing consisting of elderly CAMAC and locally manufactured SABUS [1] modules. The developments around integrating the local hardware into EPICS, running both systems in parallel during the transition period, and the inclusion of other environments like LabVIEW are discussed. It is concluded that it was a good decision to base the underlying intercommunication on Channel Access and to move the majority of process variables over to EPICS, given that it is an international standard, is less dependent on a handful of local developers, and enjoys support from a very active worldwide community.
[1] SABUS - a collaboration between Iskor (PTY) Ltd. and CSIR (Council for Scientific and Industrial Research) (1980)
 
poster icon Poster MOPMS013 [24.327 MB]  
 
MOPMS014 GSI Operation Software: Migration from OpenVMS to Linux 351
 
  • R. Huhmann, G. Fröhlich, S. Jülicher, V.RW. Schaa
    GSI, Darmstadt, Germany
 
  The current operation software at GSI, controlling the linac, beam transfer lines, synchrotron and storage ring, has been developed over a period of more than two decades using OpenVMS, now on Alpha workstations. The GSI accelerator facilities will serve as an injector chain for the new FAIR accelerator complex, for which a control system is currently being developed. To enable reuse and integration of parts of the distributed GSI software system, in particular the linac operation software, within the FAIR control system, the corresponding software components must be migrated to Linux. Interoperability with FAIR controls applications is achieved by adding a generic middleware interface accessible from Java applications. For porting applications to Linux, a set of libraries and tools has been developed covering the necessary OpenVMS system functionality. Currently, core applications and services have already been ported or rewritten and functionally tested, but are not yet in operational use. This paper presents the current status of the project and concepts for putting the migrated software into operation.  
 
MOPMS016 The Control System of CERN Accelerators Vacuum (Current Status and Recent Improvements) 354
 
  • P. Gomes, F. Antoniotti, S. Blanchard, M. Boccioli, G. Girardot, H. Vestergard
    CERN, Geneva, Switzerland
  • L. Kopylov, M.S. Mikheev
    IHEP Protvino, Protvino, Moscow Region, Russia
 
  The vacuum control system of most of the CERN accelerators is based on Siemens PLCs and on the PVSS SCADA. The application software for both PLC and SCADA was initially developed specifically by the vacuum group; with time, it has included a growing number of building blocks from the UNICOS framework. After the transition from the LHC commissioning phase to its regular operation, there have been a number of additions and improvements to the vacuum control system, driven by new technical requirements and by feedback from the accelerator operators and vacuum specialists. New functions have been implemented in PLC and SCADA: for the automatic restart of pumping groups after power failure; for the control of the solenoids added to reduce e-cloud effects; and for PLC power supply diagnosis. The automatic recognition and integration of mobile slave PLCs has been extended to the quick installation of pumping groups with the electronics kept in radiation-free zones. The ergonomics and navigation of the SCADA application have been enhanced; new tools have been developed for interlock analysis and for device listing and selection; web pages have been created summarizing the values and status of the system. The graphical interface for Windows clients has been upgraded from ActiveX to Qt, and the PVSS servers will soon be moved from Windows to Linux.  
poster icon Poster MOPMS016 [113.929 MB]  
 
MOPMS018 New Timing System Development at SNS 358
 
  • D. Curry
    ORNL RAD, Oak Ridge, Tennessee, USA
  • X.H. Chen, R. Dickson, S.M. Hartman, D.H. Thompson
    ORNL, Oak Ridge, Tennessee, USA
  • J. Dedič
    Cosylab, Ljubljana, Slovenia
 
  The timing system at the Spallation Neutron Source (SNS) has recently been updated to support the long-range production and availability goals of the facility. A redesign of the hardware and software gave us an opportunity to significantly reduce the complexity of the system as a whole and to consolidate the functionality of multiple cards into single units, eliminating almost half of our operating components in the field. It also presented a prime opportunity to integrate new system-level diagnostics, previously unavailable, for experts and operations. These new tools provide us with a clear image of the health of our distribution links and enhance our ability to quickly identify and isolate errors.  
 
MOPMS020 High Intensity Proton Accelerator Controls Network Upgrade 361
 
  • R.A. Krempaska, A.G. Bertrand, F. Lendzian, H. Lutz
    Paul Scherrer Institut, 5232 Villigen PSI, Switzerland
 
  The High Intensity Proton Accelerator (HIPA) control system network is spread over about six buildings and has grown historically in an unorganized way. It consisted of about 25 network switches, 150 nodes and 20 operator consoles. The miscellaneous hardware infrastructure and the lack of documentation and of a component overview could no longer guarantee the reliability of the control system and facility operation. Therefore, a new network was needed, based on a modern network topology and PSI-standard hardware, with monitoring, detailed documentation and an overview. We present the process by which we successfully achieved this goal and the advantages of a clean and well-documented network infrastructure.  
poster icon Poster MOPMS020 [0.761 MB]  
 
MOPMS021 Detector Control System of the ATLAS Insertable B-Layer 364
 
  • S. Kersten, P. Kind, K. Lantzsch, P. Mättig, C. Zeitnitz
    Bergische Universität Wuppertal, Wuppertal, Germany
  • M. Citterio, C. Meroni
    Universita' degli Studi di Milano e INFN, Milano, Italy
  • F. Gensolen
    CPPM, Marseille, France
  • S. Kovalenko
    CERN, Geneva, Switzerland
  • B. Verlaat
    NIKHEF, Amsterdam, The Netherlands
 
  To improve the tracking robustness and precision of the ATLAS inner tracker, an additional fourth pixel layer is foreseen, called the Insertable B-Layer (IBL). It will be installed between the innermost present Pixel layer and a new, smaller beam pipe and is presently under construction. As no access is available once it is installed in the experiment, a highly reliable control system is required. It has to supply the detector with all entities required for operation and protect it at all times. Design constraints are the high power density inside the detector volume, the sensitivity of the sensors to heat-ups, and the protection of the front-end electronics against transients. We present the architecture of the control system, with an emphasis on the CO2 cooling system, the power supply system and the protection strategies. As we aim for common operation of the Pixel and IBL detectors, the integration of the IBL control system into the Pixel one will be discussed as well.  
 
MOPMS023 LHC Magnet Test Benches Controls Renovation 368
 
  • A. Raimondo, O.O. Andreassen, D. Kudryavtsev, S.T. Page, A. Rijllart, E. Zorin
    CERN, Geneva, Switzerland
 
  The LHC magnet test bench controls were designed in 1996. They were based on VME data acquisition systems and Siemens PLC control and interlock systems. During a review of the renovation of the superconducting laboratories at CERN in 2009, it was decided to replace the VME systems with PXI and the obsolete Sun/Solaris workstations with Linux PCs. This presentation covers the requirements for the new systems in terms of functionality, security, channel count, sampling frequency and precision. We report on the experience with the commissioning of the first series of fixed and mobile measurement systems upgraded to this new platform, compared to the old systems. We also include the experience with the renovated control room.  
poster icon Poster MOPMS023 [1.310 MB]  
 
MOPMS024 Evolution of the Argonne Tandem Linear Accelerator System (ATLAS) Control System 371
 
  • M.A. Power, F.H. Munson
    ANL, Argonne, USA
 
  Funding: This work was supported by the U.S. Department of Energy, Office of Nuclear Physics, under Contract No. DE-AC02-06CH11357.
Given that the Argonne Tandem Linac Accelerator System (ATLAS) recently celebrated its 25th anniversary, this paper explores the past, present and future of the ATLAS control system and how it has evolved along with the accelerator and control system technology. ATLAS as we know it today originated with a Tandem Van de Graaff in the 1960s. With the addition of the Booster section in the late 1970s came the first computerized control. ATLAS itself was placed into service on June 25, 1985 and was the world's first superconducting linear accelerator for ions. Since its dedication as a National User Facility, more than a thousand experiments by more than 2,000 users worldwide have taken advantage of the unique capabilities it provides. Today, ATLAS continues to be a user facility for physicists who study the particles that form the heart of atoms. Its most recent addition, CARIBU (Californium Rare Isotope Breeder Upgrade), creates special beams that feed into ATLAS. ATLAS is similar to a living organism, changing and responding to new technological challenges and research needs. As it continues to evolve, so does the control system: from the original days using a DEC PDP-11/34 computer and 2 CAMAC crates, to a DEC Alpha computer running Vsystem software and more than twenty CAMAC crates, to distributed computers and VME systems. Future upgrades that will continue to evolve the control system are also in the planning stages.
 
poster icon Poster MOPMS024 [2.845 MB]  
 
MOPMS025 Migration from OPC-DA to OPC-UA 374
 
  • B. Farnham, R. Barillère
    CERN, Geneva, Switzerland
 
  The OPC-DA specification has been a highly successful interoperability standard for process automation since 1996, allowing communication between any compliant components regardless of vendor. CERN relies on OPC-DA server implementations from various third-party vendors, which provide a standard interface to their hardware. The OPC Foundation has finalized the OPC-UA specification, and OPC-UA implementations are now starting to gather momentum. This presentation gives a brief overview of the headline features of OPC-UA, compares it with OPC-DA, and outlines the necessity of migrating away from OPC-DA and the motivation for migrating to OPC-UA. Feedback from research into the availability of tools and testing utilities is presented, together with a practical overview of what will be required from a computing perspective in order to run OPC-UA clients and servers on the CERN network.  
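  To give a flavour of what an OPC-UA client looks like from the computing side, the sketch below reads a single node with the open-source python-opcua library. The endpoint URL and node identifier are hypothetical placeholders, not CERN servers, and this is not the tooling evaluated in the paper.

```python
# Minimal OPC-UA read sketch using the python-opcua (freeopcua) client library.
# Endpoint URL and node identifier are hypothetical placeholders.
from opcua import Client

client = Client("opc.tcp://example-server:4840/freeopcua/server/")
try:
    client.connect()                        # open a secure channel and session
    node = client.get_node("ns=2;s=Demo.Temperature")
    print("value:", node.get_value())       # read the current Value attribute
    print("browse name:", node.get_browse_name())
finally:
    client.disconnect()
```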
poster icon Poster MOPMS025 [1.103 MB]  
 
MOPMS026 J-PARC Control toward Future Reliable Operation 378
 
  • N. Kamikubota, N. Yamamoto
    J-PARC, KEK & JAEA, Ibaraki-ken, Japan
  • S.F. Fukuta, D. Takahashi
    MELCO SC, Tsukuba, Japan
  • T. Iitsuka, S. Motohashi, M. Takagi, S.Y. Yoshida
    Kanto Information Service (KIS), Accelerator Group, Ibaraki, Japan
  • T. Ishiyama
    KEK/JAEA, Ibaraki-Ken, Japan
  • Y. Ito, H. Sakaki
    JAEA, Ibaraki-ken, Japan
  • Y. Kato, M. Kawase, N. Kikuzawa, H. Sako, K.C. Sato, H. Takahashi, H. Yoshikawa
    JAEA/J-PARC, Tokai-Mura, Naka-Gun, Ibaraki-Ken, Japan
  • T. Katoh, H. Nakagawa, J.-I. Odagiri, T. Suzuki, S. Yamada
    KEK, Ibaraki, Japan
  • H. Nemoto
    ACMOS INC., Tokai-mura, Ibaraki, Japan
 
  The J-PARC accelerator complex comprises a linac, a 3-GeV RCS (Rapid Cycling Synchrotron), and a 30-GeV MR (Main Ring). J-PARC is a joint project between JAEA and KEK. Two control systems, one for the linac and RCS and another for the MR, were developed by the two institutes. Both control systems use the EPICS toolkit, thus inter-operation between the two systems is possible. After the first beam in November 2006, beam commissioning and operation have been successful. However, operational experience shows that the two control systems often distress operators: for example, different GUI look-and-feels, separate alarm screens, independent archive systems, and so on. Considering the demands of further power upgrades and longer beam delivery, we need something new that is easy for operators to understand. It is essential to improve the reliability of operation. We, the two control groups, have started to discuss the future directions of our control systems. Ideas to develop common GUI screens for status and alarms, and to develop interfaces to connect the archive systems to each other, are discussed. Progress will be reported.  
 
MOPMS027 Fast Beam Current Transformer Software for the CERN Injector Complex 382
 
  • M. Andersen
    CERN, Geneva, Switzerland
 
  The fast transfer-line BCTs in the CERN injector complex are undergoing a complete consolidation to eradicate obsolete, maintenance-intensive hardware. The corresponding low-level software has been designed to minimise the effect of identified error sources while allowing remote diagnostics and calibration. This paper presents the front-end and expert application software together with the results obtained.  
poster icon Poster MOPMS027 [1.223 MB]  
 
MOPMS028 CSNS Timing System Prototype 386
 
  • G.L. Xu, G. Lei, L. Wang, Y.L. Zhang, P. Zhu
    IHEP Beijing, Beijing, People's Republic of China
 
  The timing system is an important part of CSNS. The timing system prototype development is based on the Event System 230 series. Two debug platforms are used: the first runs EPICS base 3.14.8, with the IOC on an MVME5100 under VxWorks 5.5; the second runs EPICS base 3.13 under VxWorks 5.4. The prototype work included driver debugging and tests of new EVG/EVR-230 features, such as CML output signals with fine delay steps of a fraction of the signal cycle at high frequency, the use of the interlock module, the interconnection of the CML and TTL outputs, and data transmission functions. Finally, the database was programmed with the new features in order to build the OPI.  
poster icon Poster MOPMS028 [0.434 MB]  
 
MOPMS029 The BPM DAQ System Upgrade for SuperKEKB Injector Linac 389
 
  • M. Satoh, K. Furukawa, F. Miyahara, T. Suwada
    KEK, Ibaraki, Japan
  • T. Kudou, S. Kusano
    MELCO SC, Tsukuba, Japan
 
  The KEK injector linac provides beams to four different rings: the KEKB high-energy ring (HER; 8 GeV electrons), the KEKB low-energy ring (LER; 3.5 GeV positrons), the Photon Factory ring (PF; 2.5 GeV electrons), and the Advanced Ring for Pulse X-rays (PF-AR; 3 GeV electrons). For the three rings other than PF-AR, simultaneous top-up injection has been in operation since April 2009. In the simultaneous top-up operation, common DC magnet settings are used for beams with different energies and amounts of charge, whereas different optimized settings of RF timing and phase are applied to each beam acceleration by using fast low-level RF (LLRF) phase and trigger delay control at up to 50 Hz. The non-destructive beam position monitor (BPM) is an indispensable diagnostic tool for stable beam operation. In the KEK linac, approximately nineteen BPMs with strip-line electrodes are used for beam orbit measurement and feedback. In addition, some of them are also used for the beam energy feedback loops. The current DAQ system consists of digital oscilloscopes (Tektronix DPO7104, 10 GSa/s). The signal from each electrode is analyzed with a predetermined response function at up to 50 Hz. The beam position resolution of the current system is limited to about 0.1 mm because of the ADC resolution. For the SuperKEKB project, we plan to upgrade the BPM DAQ system, since the linac should provide beams with smaller emittance. We will report the system description of the new DAQ system and the results of performance tests in detail.  
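  As background for readers, the position of a beam in a four-electrode strip-line BPM is commonly estimated with a difference-over-sum formula before a per-monitor response function is applied. The sketch below is a generic illustration of that estimate; the electrode layout, scale factor and signal values are hypothetical, and this is not the DAQ code described in the paper.

```python
# Difference-over-sum sketch for a four-electrode strip-line BPM.
# Electrode layout (A, B, C, D) and the linear scale factor k_mm are
# hypothetical; the real system applies a predetermined response function.
def bpm_position(a, b, c, d, k_mm=10.0):
    """Return (x, y) in mm from the four electrode signal amplitudes."""
    total = a + b + c + d
    x = k_mm * ((a + d) - (b + c)) / total   # A, D assumed on the +x side
    y = k_mm * ((a + b) - (c + d)) / total   # A, B assumed on the +y side
    return x, y

print(bpm_position(1.02, 0.98, 0.97, 1.03))  # beam slightly off-centre
```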
poster icon Poster MOPMS029 [3.981 MB]  
 
MOPMS030 Improvement of the Oracle Setup and Database Design at the Heidelberg Ion Therapy Center 393
 
  • K. Höppner, Th. Haberer, J.M. Mosthaf, A. Peters
    HIT, Heidelberg, Germany
  • G. Fröhlich, S. Jülicher, V.RW. Schaa, W. Schiebel, S. Steinmetz
    GSI, Darmstadt, Germany
  • M. Thomas, A. Welde
    Eckelmann AG, Wiesbaden, Germany
 
  The HIT (Heidelberg Ion Therapy) center is an accelerator facility for cancer therapy using both carbon ions and protons, located at the university hospital in Heidelberg. It provides three therapy treatment rooms: two with fixed beam exit (both in clinical use) and a unique gantry with a rotating beam head, currently under commissioning. The backbone of the proprietary accelerator control system consists of an Oracle database running on a Windows server, storing and delivering data on beam cycles, error logging, measured values, and the device parameters and beam settings for about 100,000 combinations of energy, beam size and particle number used in treatment plans. Since going operational, we have found some performance problems with the database setup. We therefore started an analysis in cooperation with the industrial supplier of the control system (Eckelmann AG) and the GSI Helmholtzzentrum für Schwerionenforschung. It focused on the following topics: hardware resources of the DB server, configuration of the Oracle instance, and a review of the database design, which had undergone several changes since its original design. The analysis revealed issues in all of these areas. The outdated server will soon be replaced by a state-of-the-art machine. We will present improvements to the Oracle configuration, the optimization of SQL statements, and the performance tuning of the database design by adding new indexes, whose effect was directly visible in accelerator operation, while data integrity was improved by additional foreign key constraints.  
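  Purely as an illustration of the kind of schema changes mentioned above (new indexes and foreign key constraints), the sketch below issues the corresponding DDL through the cx_Oracle Python driver. The connection string, table and column names are hypothetical and do not reflect the actual HIT schema.

```python
# Illustration of the kind of schema changes described above, executed through
# cx_Oracle. Connection string, table and column names are hypothetical.
import cx_Oracle

conn = cx_Oracle.connect("hit_user/secret@db-host/ACCDB")
cur = conn.cursor()

# Speed up look-ups of measured values by beam cycle (hypothetical tables).
cur.execute("""
    CREATE INDEX idx_meas_cycle ON measured_values (cycle_id, device_id)
""")

# Enforce referential integrity between measured values and beam cycles.
cur.execute("""
    ALTER TABLE measured_values
      ADD CONSTRAINT fk_meas_cycle FOREIGN KEY (cycle_id)
      REFERENCES beam_cycles (cycle_id)
""")

cur.close()
conn.close()
```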
poster icon Poster MOPMS030 [2.014 MB]  
 
MOPMS031 Did We Get What We Aimed for 10 Years Ago? 397
 
  • P.Ch. Chochula, A. Augustinus, L.S. Jirdén, A.N. Kurepin, M. Lechman, P. Rosinský
    CERN, Geneva, Switzerland
  • G. De Cataldo
    INFN-Bari, Bari, Italy
  • A. Moreno
    Universidad Politécnica de Madrid, E.T.S.I Industriales, Madrid, Spain
  • O. Pinazza
    INFN-Bologna, Bologna, Italy
 
  The ALICE Detector Control System (DCS) is in charge of the control and operation of one of the large high energy physics experiments at CERN in Geneva. The DCS design, which started in 2000, was partly inspired by the control systems of the previous generation of HEP experiments at the LEP accelerator at CERN. However, the scale of the LHC experiments, the use of modern, "intelligent" hardware and the harsh operational environment led to an innovative system design. The overall architecture has been largely based on commercial products, like the PVSS SCADA system and OPC servers, extended by frameworks. Windows was chosen as the operating system platform for the core systems and Linux for the front-end devices. The concept of finite state machines has been deeply integrated into the system design. Finally, the design principles were optimized and adapted to the expected operational needs. The ALICE DCS was designed, prototyped and developed at a time when no experience with systems of similar scale and complexity existed. At the time of its implementation the detector hardware was not yet available and tests were performed only with partial detector installations. In this paper we analyse how well the original requirements and expectations set ten years ago match the real needs of the experiment after two years of operation. We provide an overview of system performance, reliability and scalability. Based on this experience we assess the need for future system enhancements to take place during the LHC technical stop in 2013.  
poster icon Poster MOPMS031 [5.534 MB]  
 
MOPMS032 Re-engineering of the SPring-8 Radiation Monitor Data Acquisition System 401
 
  • T. Masuda, M. Ishii, K. Kawata, T. Matsushita, C. Saji
    JASRI/SPring-8, Hyogo-ken, Japan
 
  We have re-engineered the data acquisition system for the SPring-8 radiation monitors. Around the site, 81 radiation monitors are deployed; seventeen of them are used by the radiation safety interlock system of the accelerators. The old data-acquisition system consisted of dedicated NIM-like modules linked with the radiation monitors, eleven embedded computers for data acquisition from the modules, and three programmable logic controllers (PLCs) for integrated dose surveillance. The embedded computers periodically collected the radiation data from GPIB interfaces on the modules. The dose-surveillance PLCs read analog outputs proportional to the radiation rate from the modules. The modules and the dose-surveillance PLCs were also interfaced with the radiation safety interlock system. These components of the old system were dedicated, black-boxed and complicated to operate. In addition, the GPIB interface was a legacy technology and not reliable enough for such an important system. We therefore decided to replace the old system with a new one based on PLCs and FL-net, which are widely used technologies. We have deployed twelve new PLCs as substitutes for all the old components. Another PLC with two graphic panels is installed near the central control room for centralized operation and surveillance of all the monitors. All the new PLCs and a VME computer for data acquisition are connected through FL-net. In this paper, we describe the new system and the methodology of the replacement within the short interval between accelerator operation periods.  
poster icon Poster MOPMS032 [1.761 MB]  
 
MOPMS033 Status, Recent Developments and Perspective of TINE-powered Video System, Release 3 405
 
  • S. Weisse, D. Melkumyan
    DESY Zeuthen, Zeuthen, Germany
  • P. Duval
    DESY, Hamburg, Germany
 
  Experience has shown that imaging software and hardware installations at accelerator facilities need to be changed, adapted and updated on a semi-permanent basis. On this premise, the component-based core architecture of Video System 3 was founded. In design and implementation, emphasis was, is, and will be put on flexibility, performance, low latency, modularity, interoperability, use of open source, ease of use as well as reuse, good documentation and multi-platform capability. In the last year, a milestone was reached as Video System 3 entered production-level use at PITZ, HASYLAB and PETRA III. Since then, the development path has been more strongly influenced by production-level experience and customer feedback. In this contribution, we describe the current status, layout, recent developments and perspective of the Video System. Focus is put on the integration of recording and playback of video sequences into Archive/DAQ, a standalone installation of the Video System on a notebook, as well as experience running on 64-bit Windows 7. In addition, new client-side multi-platform GUI/application developments using Java are about to surface. Last but not least, it must be mentioned that although the implementation of Release 3 is integrated into the TINE control system, it is modular enough that integration into other control systems can be considered.  
slides icon Slides MOPMS033 [0.254 MB]  
poster icon Poster MOPMS033 [2.127 MB]  
 
MOPMS034 Software Renovation of CERN's Experimental Areas 409
 
  • J. Fullerton, L.K. Jensen, J. Spanggaard
    CERN, Geneva, Switzerland
 
  The experimental areas at CERN (AD, PS and SPS) have undergone a widespread electronics and software consolidation based on modern techniques, allowing them to be used for many years to come. This paper describes the scale of the software renovation and how the issues were overcome in order to ensure complete integration into the respective control systems.  
poster icon Poster MOPMS034 [1.582 MB]  
 
MOPMS035 A Beam Profiler and Emittance Meter for the SPES Project at INFN-LNL 412
 
  • G. Bassato, A. Andrighetto, N. Conforto, M.G. Giacchini, J.A. Montano, M. Poggi, J.A. Vásquez
    INFN/LNL, Legnaro (PD), Italy
 
  The beam diagnostics system currently in use in the superconducting linac at LNL has been upgraded for the SPES project. The control software has been rewritten using EPICS tools and a new emittance meter has been developed. The beam detector is based on wire grids, the IOC is implemented in a VME system running under VxWorks, and the graphical interface is based on CSS. The system is now in operation in the SPES Target Laboratory for the characterization of beams produced by the new ion source.  
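  For illustration only, the beam profile delivered by such a wire grid is typically summarised by its centroid and RMS width. The short numpy sketch below computes both from a per-wire charge array, with a hypothetical wire pitch and sample signals; it is not the EPICS/CSS application described here.

```python
# Sketch of profile statistics from a wire-grid reading; the wire pitch and the
# sample signal array are hypothetical.
import numpy as np

pitch_mm = 1.0                              # hypothetical wire spacing
signals = np.array([0.1, 0.4, 1.2, 2.8, 3.1, 2.2, 0.9, 0.3])  # per-wire charge
positions = (np.arange(signals.size) - signals.size / 2.0) * pitch_mm

weights = signals / signals.sum()
centroid = float(np.sum(positions * weights))               # first moment
rms_width = float(np.sqrt(np.sum(weights * (positions - centroid) ** 2)))

print(f"centroid = {centroid:.2f} mm, rms width = {rms_width:.2f} mm")
```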
poster icon Poster MOPMS035 [0.367 MB]  
 
MOPMS036 Upgrade of the Nuclotron Extracted Beam Diagnostic Subsystem 415
 
  • E.V. Gorbachev, N.I. Lebedev, N.V. Pilyar, S. Romanov, T.V. Rukoyatkina, V. Volkov
    JINR, Dubna, Moscow Region, Russia
 
  The subsystem is intended for the measurement of the Nuclotron extracted beam parameters. Multiwire proportional chambers are used for transverse beam profile measurements at four points of the beam transfer line. Gas amplification values are tuned by adjusting the high-voltage power supplies. The extracted beam intensity is measured by means of an ionization chamber, a DDPCA-300 variable-gain current amplifier and a voltage-to-frequency converter. The data are processed by an industrial PC with National Instruments DAQ modules. A client-server distributed application written in the LabVIEW environment allows operators to control the hardware and obtain measurement results over a TCP/IP network.  
poster icon Poster MOPMS036 [1.753 MB]  
 
MOPMS037 A Customizable Platform for High-availability Monitoring, Control and Data Distribution at CERN 418
 
  • M. Brightwell, M. Bräger, A. Lang, A. Suwalska
    CERN, Geneva, Switzerland
 
  In complex operational environments, monitoring and control systems are asked to satisfy ever more stringent requirements. In addition to reliability, the availability of the system has become crucial to accommodate tight planning schedules and increased dependencies on other systems. In this context, adapting a monitoring system to changes in its environment and meeting requests for new functionalities are increasingly challenging. Combining maintainability and high availability within a portable architecture is the focus of this work. To meet these increased requirements, we present a new modular system developed at CERN. Using the experience gained from previous implementations, the new platform uses a multi-server architecture to allow patching and updating the application without affecting its availability. The data acquisition can also be reconfigured without any downtime or potential data loss. The modular architecture builds on a core system that aims to be reusable for multiple monitoring scenarios, while keeping each instance as lightweight as possible. For both cost and future maintenance reasons, open and customizable technologies have been preferred.