Keyword: hardware
Paper Title Other Keywords Page
MODAULT01 Thirty Meter Telescope Adaptive Optics Computing Challenges real-time, FPGA, controls, operation 36
 
  • C. Boyer, B.L. Ellerbroek, L. Gilles, L. Wang
    TMT, Pasadena, California, USA
  • S. Browne
    The Optical Sciences Company, Anaheim, California, USA
  • G. Herriot, J.P. Veran
    HIA, Victoria, Canada
  • G.J. Hovey
    DRAO, Penticton, British Columbia, Canada
 
  The Thirty Meter Telescope (TMT) will be used with Adaptive Optics (AO) systems to allow near diffraction-limited performance in the near-infrared and to achieve the main TMT science goals. Adaptive optics systems reduce the effect of atmospheric distortions by dynamically measuring the distortions with wavefront sensors, performing wavefront reconstruction with a Real Time Controller (RTC), and then compensating for the distortions with wavefront correctors. The requirements for the RTC subsystem of the TMT first-light AO system represent a significant advance over the current generation of astronomical AO control systems. With conventional approaches, memory and processing requirements would be at least two orders of magnitude greater than those of the most powerful current AO systems, so innovative wavefront reconstruction algorithms and new hardware approaches will be required. In this paper, we first present the requirements and challenges for the RTC of the first-light AO system, together with the algorithms that have been developed to reduce the memory and processing requirements, and then two possible hardware architectures based on Field Programmable Gate Arrays (FPGAs).  
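As context for the scaling problem the abstract describes, conventional AO wavefront reconstruction is a matrix-vector multiply whose per-frame cost and memory grow with the product of slope and actuator counts. A minimal NumPy sketch with purely illustrative dimensions (not TMT's actual numbers):

```python
import numpy as np

# Conventional AO wavefront reconstruction is a matrix-vector multiply (MVM):
# actuator commands c = R @ s, with R the (precomputed) reconstructor matrix
# and s the vector of measured wavefront slopes. Dimensions are illustrative.
n_slopes = 7000      # hypothetical wavefront-sensor slope count
n_actuators = 3500   # hypothetical deformable-mirror actuator count

rng = np.random.default_rng(0)
R = rng.standard_normal((n_actuators, n_slopes))  # stand-in reconstructor
s = rng.standard_normal(n_slopes)                 # one frame of slopes

c = R @ s  # per-frame work and memory scale as n_actuators * n_slopes,
           # which is why a 30 m aperture forces new algorithms and hardware

print(c.shape)
```

Doubling the aperture roughly quadruples both slope and actuator counts, so the MVM cost grows by an order of magnitude; this is the motivation for the reduced-complexity reconstruction algorithms the paper presents.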
slides icon Slides MODAULT01 [2.666 MB]  
 
MOMAU002 Improving Data Retrieval Rates Using Remote Data Servers network, software, database, controls 40
 
  • T. D'Ottavio, B. Frak, J. Morris, S. Nemesure
    BNL, Upton, Long Island, New York, USA
 
  Funding: Work performed under the auspices of the U.S. Department of Energy
The power and scope of modern control systems have led to an increased amount of data being collected and stored, including data collected at high (kHz) frequencies. One consequence is that users now routinely make data requests that can cause gigabytes of data to be read and displayed. Given that a user's patience can be measured in seconds, this can be quite a technical challenge. This paper explores one possible solution to this problem: the creation of remote data servers whose performance is optimized to handle context-sensitive data requests. Methods for increasing data delivery performance include the use of high-speed network connections between the stored data and the data servers, smart caching of frequently used data, and the culling of delivered data as determined by the context of the data request. This paper describes the decisions made when constructing these servers and compares data retrieval performance for clients that use or do not use an intermediate data server.
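One form the context-determined culling can take is thinning a stored series down to roughly the number of points the requesting client can actually display. A hedged Python sketch (the function name and uniform-stride strategy are illustrative, not the actual server API):

```python
# Context-sensitive culling: thin a stored series to at most the number of
# points the requesting client can usefully display. The function name and
# uniform-stride strategy are illustrative, not the actual server API.
def cull(samples, max_points):
    """Return at most max_points samples chosen by uniform stride."""
    if len(samples) <= max_points:
        return list(samples)
    stride = len(samples) / max_points
    return [samples[int(i * stride)] for i in range(max_points)]

raw = list(range(1_000_000))   # pretend this came from a kHz data logger
view = cull(raw, 2000)         # a ~2000-pixel-wide plot needs ~2000 points
print(len(view))
```

A gigabyte-scale request thus shrinks to a few kilobytes on the wire when the context (a plot width, say) is known to the server.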
 
slides icon Slides MOMAU002 [0.085 MB]  
poster icon Poster MOMAU002 [1.077 MB]  
 
MOPKN006 Algorithms and Data Structures for the EPICS Channel Archiver EPICS, operation, software, database 94
 
  • J. Rowland, M.T. Heron, M.A. Leech, S.J. Singleton, K. Vijayan
    Diamond, Oxfordshire, United Kingdom
 
  Diamond Light Source records 3 GB of process data per day and has a 15 TB archive online with the EPICS Channel Archiver. This paper describes recent modifications to the software to improve performance and usability. The file-size limit on the R-Tree index has been removed, allowing all archived data to be searchable from one index. A decimation system works directly on compressed archives from a backup server and produces multi-rate reduced data with minimum and maximum values to support time-efficient summary reporting and range queries. The XML-RPC interface has been extended to provide binary data transfer to clients needing large amounts of raw data.  
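The multi-rate reduced data with minimum and maximum values can be pictured as binned (min, max) pairs. A minimal Python sketch of such a reduction (illustrative only, not the Channel Archiver's actual algorithm):

```python
# Multi-rate reduction with min/max: each output bin keeps the (min, max) of
# its input samples, so summary plots never miss spikes. Illustrative only.
def reduce_minmax(values, bin_size):
    out = []
    for i in range(0, len(values), bin_size):
        chunk = values[i:i + bin_size]
        out.append((min(chunk), max(chunk)))
    return out

print(reduce_minmax([3, 1, 4, 1, 5, 9, 2, 6], 4))
```

Running the same pass with several bin sizes yields the multiple rates, each cheap to scan for summary reporting and range queries.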
poster icon Poster MOPKN006 [0.133 MB]  
 
MOPKN029 Design and Implementation of the CEBAF Element Database database, interface, controls, software 157
 
  • T. L. Larrieu, M.E. Joyce, C.J. Slominski
    JLAB, Newport News, Virginia, USA
 
  Funding: Authored by Jefferson Science Associates, LLC under U.S. DOE Contract No. DE-AC05-06OR23177.
With the inauguration of the CEBAF Element Database (CED) in Fall 2010, Jefferson Lab computer scientists took a first step toward the eventual goal of a model-driven accelerator. Once fully populated, the database will be the primary repository of information used for everything from generating lattice decks to booting front-end computers to building controls screens. A particular requirement influencing the CED design is that it must provide consistent access to not only present, but also future, and eventually past, configurations of the CEBAF accelerator. To accomplish this, an introspective database schema was designed that allows new elements, element types, and element properties to be defined on the fly without changing the table structure. When used in conjunction with Oracle Workspace Manager, it allows users to seamlessly query data from any time in the database history with exactly the same tools as they use for querying the present configuration. Users can also check out workspaces and use them as staging areas for upcoming machine configurations. All access to the CED is through a well-documented API that is translated automatically from the original C++ into native libraries for scripting languages such as Perl, PHP, and Tcl, making access to the CED easy and ubiquitous.
The U.S. Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce this manuscript for U.S. Government purposes.
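The "introspective" schema described above is in the spirit of an entity-attribute-value design, in which new element types and properties are rows rather than new tables. A hedged sqlite3 sketch with invented table and element names (not the actual CED schema):

```python
# Minimal entity-attribute-value (EAV) sketch of an "introspective" schema:
# new element types and properties are data rows, not schema changes.
# All table, column, and element names here are invented for illustration.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE element_type (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
CREATE TABLE property_def (id INTEGER PRIMARY KEY,
                           type_id INTEGER REFERENCES element_type, name TEXT);
CREATE TABLE element      (id INTEGER PRIMARY KEY,
                           type_id INTEGER REFERENCES element_type, name TEXT);
CREATE TABLE prop_value   (element_id INTEGER REFERENCES element,
                           prop_id INTEGER REFERENCES property_def, value TEXT);
""")
# Defining a new element type and property requires no ALTER TABLE:
db.execute("INSERT INTO element_type (id, name) VALUES (1, 'Quadrupole')")
db.execute("INSERT INTO property_def (id, type_id, name) VALUES (1, 1, 'Length')")
db.execute("INSERT INTO element (id, type_id, name) VALUES (1, 1, 'MQA1L02')")
db.execute("INSERT INTO prop_value VALUES (1, 1, '0.3')")

row = db.execute("""SELECT e.name, p.name, v.value
                    FROM prop_value v
                    JOIN element e ON e.id = v.element_id
                    JOIN property_def p ON p.id = v.prop_id""").fetchone()
print(row)  # ('MQA1L02', 'Length', '0.3')
```

The trade-off of such a design is that queries join through the definition tables, which is where the versioning layer (Oracle Workspace Manager in the CED's case) can be attached uniformly.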
 
poster icon Poster MOPKN029 [5.239 MB]  
 
MOPMN003 A Bottom-up Approach to Automatically Configured Tango Control Systems. controls, database, TANGO, vacuum 239
 
  • S. Rubio-Manrique, D.B. Beltrán, I. Costa, D.F.C. Fernández-Carreiras, J.V. Gigante, J. Klora, O. Matilla, R. Ranz, J. Ribas, O. Sanchez
    CELLS-ALBA Synchrotron, Cerdanyola del Vallès, Spain
 
  Alba maintains a central repository, the so-called "Cabling and Controls Database" (CCDB), which keeps the inventory of equipment, cables, connections, and their configuration and technical specifications. The valuable information kept in this MySQL database enables tools to automatically create and configure Tango devices and other software components of the control systems of accelerators, beamlines, and laboratories. This paper describes the process involved in this automatic setup.  
poster icon Poster MOPMN003 [0.922 MB]  
 
MOPMN014 Detector Control System for the ATLAS Muon Spectrometer And Operational Experience After The First Year of LHC Data Taking detector, controls, monitoring, electronics 267
 
  • S. Zimmermann
    Albert-Ludwig Universität Freiburg, Freiburg, Germany
  • G. Aielli
    Università di Roma II Tor Vergata, Roma, Italy
  • M. Bindi, A. Polini
    INFN-Bologna, Bologna, Italy
  • S. Bressler, E. Kajomovitz, S. Tarem
    Technion, Haifa, Israel
  • R.G.K. Hart
    NIKHEF, Amsterdam, The Netherlands
  • G. Iakovidis, E. Ikarios, K. Karakostas, S. Leontsinis, E. Mountricha
    National Technical University of Athens, Athens, Greece
 
  Muon reconstruction is a key ingredient in any of the experiments at the Large Hadron Collider (LHC). The muon spectrometer of ATLAS comprises Monitored Drift Tubes (MDTs) and Cathode Strip Chambers (CSCs) for precision tracking, as well as Resistive Plate Chambers (RPCs) and Thin Gap Chambers (TGCs) for the muon trigger and for second-coordinate measurement. Together with a strong magnetic field provided by a superconducting toroid magnet and an optical alignment system, these provide a high-precision determination of muon momentum up to the highest particle energies accessible by LHC collisions. The Detector Control System (DCS) of each muon sub-detector technology must efficiently and safely manage several thousand LV and HV channels, front-end electronics initialization, and the monitoring of beam, background, magnetic field, and environmental conditions. This contribution describes the chosen hardware architecture, which uses common technologies as much as possible, and the implemented controls hierarchy. In addition, the muon DCS human-machine interface (HMI) layer and operator tools are covered. Emphasis is given to reviewing the experience from the first year of LHC and detector operations, and to lessons learned for future large-scale detector control systems. We also present the automatic procedures put in place during the last year and review the data-taking efficiency improvements they have brought. Finally, we describe the role the DCS plays in assessing the quality of data for physics analysis and in online optimization of detector conditions.
On Behalf of the ATLAS Muon Collaboration
 
poster icon Poster MOPMN014 [0.249 MB]  
 
MOPMN022 Database Driven Control System Configuration for the PSI Proton Accelerator Facilities database, controls, EPICS, proton 289
 
  • H. Lutz, D. Anicic
    Paul Scherrer Institut, 5232 Villigen PSI, Switzerland
 
  At PSI there are two facilities with proton cyclotron accelerators. The machine control system for PROSCAN, which is used for medical patient therapy, runs under EPICS. The High Intensity Proton Accelerator (HIPA) runs mostly under the in-house control system ACS, with dedicated parts under EPICS control. Both facilities are configured through an Oracle database application suite. This paper presents the concepts and tools used to configure the control system directly from the database-stored configurations. Such an approach has advantages which contribute to better control system reliability, overview, and consistency.  
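Configuring an EPICS-based system directly from database-stored rows typically means generating record definitions from those rows. A minimal Python sketch with invented record names and fields (not PSI's actual schema or tooling):

```python
# Generate an EPICS database file from stored configuration rows, in the
# spirit of configuring the control system directly from the database.
# The rows, record names, and fields below are invented for illustration.
rows = [
    {"name": "HIPA:MAG1:CUR", "desc": "Magnet 1 current", "egu": "A"},
    {"name": "HIPA:MAG2:CUR", "desc": "Magnet 2 current", "egu": "A"},
]

def to_record(row):
    """Render one analog-input (ai) record from a configuration row."""
    return (f'record(ai, "{row["name"]}") {{\n'
            f'    field(DESC, "{row["desc"]}")\n'
            f'    field(EGU,  "{row["egu"]}")\n'
            f'}}\n')

db_file = "".join(to_record(r) for r in rows)
print(db_file)
```

Regenerating the .db files from the database on every deployment is what keeps the running IOCs consistent with the stored configuration.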
poster icon Poster MOPMN022 [0.992 MB]  
 
MOPMS001 The New Control System for the Vacuum of ISOLDE vacuum, controls, interlocks, software 312
 
  • S. Blanchard, F. Bellorini, F.B. Bernard, E. Blanco Vinuela, P. Gomes, H. Vestergard, D. Willeman
    CERN, Geneva, Switzerland
 
  The On-Line Isotope Mass Separator (ISOLDE) is a facility dedicated to the production of radioactive ion beams for nuclear and atomic physics. From the ISOLDE vacuum sectors to the pressurized gas storage tanks there are up to five stages of pumping, with a total of more than one hundred pumps, including turbo-molecular, cryo, dry, membrane, and oil pumps. The ISOLDE vacuum control system is critical: the volatile radioactive elements present in the exhaust gases and the High and Ultra-High Vacuum pressure specifications require a complex control and interlock system. This paper describes the reengineering of the control system developed using the CERN UNICOS-CPC framework. An additional challenge has been the first use of UNICOS-CPC in the vacuum domain. The process automation provides multiple operating modes (rough pumping, bake-out, high-vacuum pumping, regeneration for cryo-pumped sectors, venting, etc.). The control system is composed of local controllers driven by PLCs (logic, interlocks) and a SCADA application (operation, alarm monitoring, and diagnostics).  
poster icon Poster MOPMS001 [4.105 MB]  
 
MOPMS002 LHC Survey Laser Tracker Controls Renovation software, laser, interface, controls 316
 
  • C. Charrondière, M. Nybø
    CERN, Geneva, Switzerland
 
  The LHC survey laser tracker control system is based on an industrial software package (Axyz) from Leica Geosystems™ with an interface to Visual Basic 6.0™, which we used to automate the geometric measurements for the LHC magnets. As the Axyz package is no longer supported and the Visual Basic 6.0™ interface would need to be ported to Visual Basic .NET™, we decided to recode the automation application in LabVIEW™, interfacing to the PC-DMIS software proposed by Leica Geosystems. This presentation describes the existing equipment, interface, and application, explaining the reasons for our decision to move to PC-DMIS and LabVIEW. We present the experience with the first prototype and make a comparison with the legacy system.  
poster icon Poster MOPMS002 [1.812 MB]  
 
MOPMS003 The Evolution of the Control System for the Electromagnetic Calorimeter of the Compact Muon Solenoid Experiment at the Large Hadron Collider software, controls, detector, interface 319
 
  • O. Holme, D.R.S. Di Calafiori, G. Dissertori, W. Lustermann
    ETH, Zurich, Switzerland
  • S. Zelepoukine
    UW-Madison/PD, Madison, Wisconsin, USA
 
  Funding: Swiss National Science Foundation (SNF)
This paper discusses the evolution of the Detector Control System (DCS) designed and implemented for the Electromagnetic Calorimeter (ECAL) of the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC), as well as the operational experience acquired during the LHC physics data-taking periods of 2010 and 2011. The current implementation in terms of functionality and the planned hardware upgrades are presented. Furthermore, a project for reducing the long-term software maintenance, including a year-long detailed analysis of the existing applications, is put forward, and the current outcomes, which have informed the design decisions for the next CMS ECAL DCS software generation, are described. The main goals for the new version are to minimize external dependencies, enabling smooth migration to new hardware and software platforms, and to maintain the existing functionality whilst substantially reducing support and maintenance effort through homogenization, simplification, and standardization of the control system software.
 
poster icon Poster MOPMS003 [3.508 MB]  
 
MOPMS004 First Experience with VMware Servers at HLS controls, database, brilliance, network 323
 
  • G. Liu, X. Bao, C. Li, J.G. Wang, K. Xuan
    USTC/NSRL, Hefei, Anhui, People's Republic of China
 
  The Hefei Light Source (HLS) is a dedicated second-generation VUV light source, designed and constructed two decades ago. In order to improve the performance of HLS, especially to obtain higher brilliance and increase the number of straight sections, an upgrade project is under way, and accordingly a new control system is under construction. VMware vSphere 4 Enterprise Plus is used to construct the server system for the HLS control system. Four Dell PowerEdge R710 rack servers and one Dell EqualLogic PS6000E iSCSI SAN comprise the hardware platform. Various servers, such as the file server, web server, database server, and NIS servers, together with the softIOC applications, are all integrated into this virtualization platform. A softIOC prototype has been set up, and its performance is also given in this paper. High availability and flexibility are achieved at low cost.  
poster icon Poster MOPMS004 [0.463 MB]  
 
MOPMS006 SARAF Beam Lines Control Systems Design controls, vacuum, operation, status 329
 
  • E. Reinfeld, I. Eliyahu, I.G. Gertz, A. Grin, A. Kreisler, A. Perry, L. Weissman
    Soreq NRC, Yavne, Israel
 
  The first beam lines added to the SARAF facility were completed in Phase I and introduced new hardware to be controlled. This article describes the beam-line vacuum, magnet, and diagnostics control systems and the design methodology used to achieve a reliable and reusable control system. The vacuum control systems of the accelerator and beam lines have been integrated into one vacuum control system which controls all the vacuum hardware for both the accelerator and the beam lines. The new system fixes legacy issues and is designed for modularity and simple configuration. Several types of magnetic lenses have been introduced to the new beam line to control the beam direction and optimally focus it on the target. The control system was designed to be modular so that magnets can be quickly and simply inserted or removed. The diagnostics systems control the diagnostics devices used in the beam lines, including data acquisition and measurement. Some of the older control systems were improved and redesigned using modern control hardware and software. The above systems were successfully integrated in the accelerator and are used during beam activation.  
poster icon Poster MOPMS006 [2.537 MB]  
 
MOPMS007 Deep-Seated Cancer Treatment Spot-Scanning Control System heavy-ion, database, ion, controls 333
 
  • W. Zhang, S. An, G.H. Li, W.F. Liu, W.M. Qiao, Y.P. Wang, F. Yang
    IMP, Lanzhou, People's Republic of China
 
  The hardware of the system is mainly composed of a waveform scanning power supply controller with the data for a given waveform, dose-counting cards, and an event generator system. The software consists of the following components: a system for generating the tumor shape and the corresponding waveform data, the waveform controller (ARM and DSP) program, the counting-card FPGA firmware, and a COM program for event and data synchronization during transmission.  
 
MOPMS010 LANSCE Control System Front-End and Infrastructure Hardware Upgrades controls, network, linac, EPICS 343
 
  • M. Pieck, D. Baros, C.D. Hatch, P.S. Marroquin, P.D. Olivas, F.E. Shelley, D.S. Warren, W. Winton
    LANL, Los Alamos, New Mexico, USA
 
  Funding: This work has benefited from the use of LANSCE at LANL. This facility is funded by the US DoE and operated by Los Alamos National Security for NSSA, Contract DE-AC52-06NA25396. LA-UR-11-10228
The Los Alamos Neutron Science Center (LANSCE) linear accelerator drives user facilities for isotope production, proton radiography, ultra-cold neutrons, weapons neutron research, and various sciences using neutron scattering. The LANSCE Control System (LCS), which is in part 30 years old, provides control and data monitoring for most devices in the linac and for some of its associated experimental-area beam lines. In Fiscal Year 2011, the control system went through an upgrade process that affected different areas of the LCS. We improved our network infrastructure and converted part of our front-end control system hardware to Allen-Bradley ControlLogix 5000 and National Instruments CompactRIO programmable automation controllers (PACs). In this paper, we discuss what we have done, what we have learned about upgrading the existing control system, and how this will affect our future plans.
 
 
MOPMS013 Progress in the Conversion of the In-house Developed Control System to EPICS and related technologies at iThemba LABS EPICS, controls, LabView, interface 347
 
  • I.H. Kohler, M.A. Crombie, C. Ellis, M.E. Hogan, H.W. Mostert, M. Mvungi, C. Oliva, J.V. Pilcher, N. Stodart
    iThemba LABS, Somerset West, South Africa
 
  This paper highlights challenges associated with the upgrading of the iThemba LABS control system. Issues include maintaining an ageing control system based on a LAN of PCs running OS/2, using in-house developed C code, with hardware interfacing consisting of elderly CAMAC and locally manufactured SABUS [1] modules. The developments around integrating the local hardware into EPICS, running both systems in parallel during the transition period, and the inclusion of other environments like LabVIEW are discussed. It is concluded that it was a good decision to base the underlying intercommunication on Channel Access and to move the majority of process variables over to EPICS, given that it is an international standard, less dependent on a handful of local developers, and enjoys the support of a very active world community.
[1] SABUS - a collaboration between Iskor (Pty) Ltd. and the CSIR (Council for Scientific and Industrial Research) (1980)
 
poster icon Poster MOPMS013 [24.327 MB]  
 
MOPMS018 New Timing System Development at SNS timing, diagnostics, operation, network 358
 
  • D. Curry
    ORNL RAD, Oak Ridge, Tennessee, USA
  • X.H. Chen, R. Dickson, S.M. Hartman, D.H. Thompson
    ORNL, Oak Ridge, Tennessee, USA
  • J. Dedič
    Cosylab, Ljubljana, Slovenia
 
  The timing system at the Spallation Neutron Source (SNS) has recently been updated to support the long-range production and availability goals of the facility. A redesign of the hardware and software provided an opportunity to significantly reduce the complexity of the system as a whole and to consolidate the functionality of multiple cards into single units, eliminating almost half of our operating components in the field. It also presented a prime opportunity to integrate new system-level diagnostics, previously unavailable, for experts and operations. These new tools provide a clear picture of the health of our distribution links and enhance our ability to quickly identify and isolate errors.  
 
MOPMS021 Detector Control System of the ATLAS Insertable B-Layer detector, controls, monitoring, software 364
 
  • S. Kersten, P. Kind, K. Lantzsch, P. Mättig, C. Zeitnitz
    Bergische Universität Wuppertal, Wuppertal, Germany
  • M. Citterio, C. Meroni
    Universita' degli Studi di Milano e INFN, Milano, Italy
  • F. Gensolen
    CPPM, Marseille, France
  • S. Kovalenko
    CERN, Geneva, Switzerland
  • B. Verlaat
    NIKHEF, Amsterdam, The Netherlands
 
  To improve the tracking robustness and precision of the ATLAS inner tracker, an additional fourth pixel layer, called the Insertable B-Layer (IBL), is foreseen. It will be installed between the innermost present Pixel layer and a new, smaller beam pipe, and is presently under construction. Since no access is available once it is installed in the experiment, a highly reliable control system is required. It has to supply the detector with all entities required for operation and protect it at all times. Design constraints are the high power density inside the detector volume, the sensitivity of the sensors to heat-ups, and the protection of the front-end electronics against transients. We present the architecture of the control system with an emphasis on the CO2 cooling system, the power supply system, and protection strategies. As we aim for common operation of the Pixel and IBL detectors, the integration of the IBL control system into the Pixel one is discussed as well.  
 
MOPMS023 LHC Magnet Test Benches Controls Renovation controls, network, Linux, interface 368
 
  • A. Raimondo, O.O. Andreassen, D. Kudryavtsev, S.T. Page, A. Rijllart, E. Zorin
    CERN, Geneva, Switzerland
 
  The LHC magnet test bench controls were designed in 1996. They were based on VME data acquisition systems and Siemens PLC control and interlock systems. During a review of the renovation of superconducting laboratories at CERN in 2009, it was decided to replace the VME systems with PXI and the obsolete Sun/Solaris workstations with Linux PCs. This presentation covers the requirements for the new systems in terms of functionality, security, channel count, sampling frequency, and precision. We report on the experience with the commissioning of the first series of fixed and mobile measurement systems upgraded to this new platform, compared to the old systems. We also include the experience with the renovated control room.  
poster icon Poster MOPMS023 [1.310 MB]  
 
MOPMS024 Evolution of the Argonne Tandem Linear Accelerator System (ATLAS) Control System controls, software, distributed, database 371
 
  • M.A. Power, F.H. Munson
    ANL, Argonne, USA
 
  Funding: This work was supported by the U.S. Department of Energy, Office of Nuclear Physics, under Contract No. DE-AC02-06CH11357.
Given that the Argonne Tandem Linac Accelerator System (ATLAS) recently celebrated its 25th anniversary, this paper explores the past, present, and future of the ATLAS control system and how it has evolved along with the accelerator and control system technology. ATLAS as we know it today originated with a Tandem Van de Graaff in the 1960s. With the addition of the Booster section in the late 1970s came the first computerized control. ATLAS itself was placed into service on June 25, 1985, and was the world's first superconducting linear accelerator for ions. Since its dedication as a National User Facility, more than a thousand experiments by more than 2,000 users world-wide have taken advantage of the unique capabilities it provides. Today, ATLAS continues to be a user facility for physicists who study the particles that form the heart of atoms. Its most recent addition, CARIBU (Californium Rare Isotope Breeder Upgrade), creates special beams that feed into ATLAS. ATLAS is similar to a living organism, changing and responding to new technological challenges and research needs. As it continues to evolve, so does the control system: from the original days using a DEC PDP-11/34 computer and 2 CAMAC crates, to a DEC Alpha computer running Vsystem software and more than twenty CAMAC crates, to distributed computers and VME systems. Future upgrades that will continue to evolve the control system are also in the planning stages.
 
poster icon Poster MOPMS024 [2.845 MB]  
 
MOPMS027 Fast Beam Current Transformer Software for the CERN Injector Complex software, GUI, real-time, timing 382
 
  • M. Andersen
    CERN, Geneva, Switzerland
 
  The fast transfer-line Beam Current Transformers (BCTs) in the CERN injector complex are undergoing a complete consolidation to eradicate obsolete, maintenance-intensive hardware. The corresponding low-level software has been designed to minimise the effect of identified error sources while providing remote diagnostics and calibration facilities. This paper presents the front-end and expert application software together with the results obtained.  
poster icon Poster MOPMS027 [1.223 MB]  
 
MOPMS030 Improvement of the Oracle Setup and Database Design at the Heidelberg Ion Therapy Center database, ion, controls, operation 393
 
  • K. Höppner, Th. Haberer, J.M. Mosthaf, A. Peters
    HIT, Heidelberg, Germany
  • G. Fröhlich, S. Jülicher, V.RW. Schaa, W. Schiebel, S. Steinmetz
    GSI, Darmstadt, Germany
  • M. Thomas, A. Welde
    Eckelmann AG, Wiesbaden, Germany
 
  The HIT (Heidelberg Ion Therapy) center is an accelerator facility for cancer therapy using both carbon ions and protons, located at the university hospital in Heidelberg. It provides three treatment rooms: two with fixed beam exit (both in clinical use) and a unique gantry with a rotating beam head, currently under commissioning. The backbone of the proprietary accelerator control system is an Oracle database running on a Windows server, storing and delivering data on beam cycles, error logging, and measured values, as well as the device parameters and beam settings for about 100,000 combinations of energy, beam size, and particle number used in treatment plans. Since going operational, we have found some performance problems with the current database setup. We therefore started an analysis in cooperation with the industrial supplier of the control system (Eckelmann AG) and the GSI Helmholtzzentrum für Schwerionenforschung. It focused on the following topics: hardware resources of the DB server, configuration of the Oracle instance, and a review of the database design, which has undergone several changes since its original design. The analysis revealed issues in all three areas. The outdated server will soon be replaced by a state-of-the-art machine. We present improvements to the Oracle configuration, the optimization of SQL statements, and the performance tuning of the database design by adding new indexes, whose effects were directly visible in accelerator operation, while data integrity was improved by additional foreign key constraints.  
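The directly visible effect of adding an index can be illustrated with a query plan before and after. The sketch below uses sqlite3 as a stand-in for Oracle, with table and column names invented for illustration:

```python
import sqlite3

# Compare the query plan for the same lookup before and after an index is
# added; sqlite3 stands in for Oracle here, and the table/column names are
# invented for illustration.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE beam_cycle (id INTEGER PRIMARY KEY, energy REAL, ts TEXT)")

plan_before = db.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM beam_cycle WHERE energy = 88.83").fetchall()

db.execute("CREATE INDEX idx_cycle_energy ON beam_cycle(energy)")
plan_after = db.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM beam_cycle WHERE energy = 88.83").fetchall()

print(plan_before[0][-1])  # a full-table SCAN before the index exists
print(plan_after[0][-1])   # an index SEARCH once idx_cycle_energy exists
```

The same before/after comparison (via Oracle's EXPLAIN PLAN) is how index additions like those described above are validated before deployment.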
poster icon Poster MOPMS030 [2.014 MB]  
 
MOPMS031 Did We Get What We Aimed for 10 Years Ago? detector, operation, controls, experiment 397
 
  • P.Ch. Chochula, A. Augustinus, L.S. Jirdén, A.N. Kurepin, M. Lechman, P. Rosinský
    CERN, Geneva, Switzerland
  • G. De Cataldo
    INFN-Bari, Bari, Italy
  • A. Moreno
    Universidad Politécnica de Madrid, E.T.S.I Industriales, Madrid, Spain
  • O. Pinazza
    INFN-Bologna, Bologna, Italy
 
  The ALICE Detector Control System (DCS) is in charge of the control and operation of one of the large high energy physics experiments at CERN in Geneva. The DCS design, which started in 2000, was partly inspired by the control systems of the previous generation of HEP experiments at the LEP accelerator at CERN. However, the scale of the LHC experiments, the use of modern, "intelligent" hardware, and the harsh operational environment led to an innovative system design. The overall architecture has been largely based on commercial products like the PVSS SCADA system and OPC servers, extended by frameworks. Windows was chosen as the operating system platform for the core systems and Linux for the front-end devices. The concept of finite state machines has been deeply integrated into the system design. Finally, the design principles were optimized and adapted to the expected operational needs. The ALICE DCS was designed, prototyped, and developed at a time when no experience with systems of similar scale and complexity existed. At the time of its implementation the detector hardware was not yet available, and tests were performed only with partial detector installations. In this paper we analyse how well the original requirements and expectations set ten years ago comply with the real experiment needs after two years of operation. We provide an overview of system performance, reliability, and scalability. Based on this experience, we assess the need for future system enhancements to take place during the LHC technical stop in 2013.  
poster icon Poster MOPMS031 [5.534 MB]  
 
MOPMS034 Software Renovation of CERN's Experimental Areas software, controls, GUI, detector 409
 
  • J. Fullerton, L.K. Jensen, J. Spanggaard
    CERN, Geneva, Switzerland
 
  The experimental areas at CERN (AD, PS and SPS) have undergone a widespread electronics and software consolidation based on modern techniques, allowing them to be used for many years to come. This paper describes the scale of the software renovation and how the issues were overcome in order to ensure complete integration into the respective control systems.  
poster icon Poster MOPMS034 [1.582 MB]  
 
MOPMS036 Upgrade of the Nuclotron Extracted Beam Diagnostic Subsystem. controls, operation, software, high-voltage 415
 
  • E.V. Gorbachev, N.I. Lebedev, N.V. Pilyar, S. Romanov, T.V. Rukoyatkina, V. Volkov
    JINR, Dubna, Moscow Region, Russia
 
  The subsystem is intended for measuring the parameters of the Nuclotron extracted beam. Multiwire proportional chambers are used for transverse beam profile measurements at four points of the beam transfer line. Gas amplification values are tuned by adjusting the high-voltage power supplies. The extracted beam intensity is measured by means of an ionization chamber, a variable-gain current amplifier (DDPCA-300), and a voltage-to-frequency converter. The data are processed by an industrial PC with National Instruments DAQ modules. A client-server distributed application written in the LabVIEW environment allows operators to control the hardware and obtain measurement results over a TCP/IP network.  
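The intensity chain described (ionization chamber, current amplifier, voltage-to-frequency converter) can be inverted numerically: pulse counts over a gate time give a voltage, which the amplifier gain maps back to chamber current. A sketch with purely illustrative parameter values:

```python
# Invert the intensity-measurement chain: VFC pulse counts over a gate time
# give a voltage, and the amplifier's transimpedance gain maps that voltage
# back to ionization-chamber current. All parameter values are illustrative.
def beam_current(counts, gate_s, vfc_hz_per_volt, amp_volts_per_amp):
    volts = counts / gate_s / vfc_hz_per_volt
    return volts / amp_volts_per_amp

# e.g. 50000 counts in 0.5 s with a 100 kHz/V converter and 1e9 V/A gain:
print(beam_current(50000, 0.5, 100000.0, 1e9))
```

Switching the amplifier gain range simply changes `amp_volts_per_amp` in this conversion, which is why a variable-gain amplifier covers a wide intensity range with one counter.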
poster icon Poster MOPMS036 [1.753 MB]  
 
MOPMS037 A Customizable Platform for High-availability Monitoring, Control and Data Distribution at CERN monitoring, controls, database, software 418
 
  • M. Brightwell, M. Bräger, A. Lang, A. Suwalska
    CERN, Geneva, Switzerland
 
  In complex operational environments, monitoring and control systems are asked to satisfy ever more stringent requirements. In addition to reliability, the availability of the system has become crucial to accommodate tight planning schedules and increased dependencies on other systems. In this context, adapting a monitoring system to changes in its environment and meeting requests for new functionality are increasingly challenging. Combining maintainability and high availability within a portable architecture is the focus of this work. To meet these increased requirements, we present a new modular system developed at CERN. Using the experience gained from previous implementations, the new platform uses a multi-server architecture that allows patches and updates to be applied to the application without affecting its availability. The data acquisition can also be reconfigured without any downtime or potential data loss. The modular architecture builds on a core system that aims to be reusable for multiple monitoring scenarios, while keeping each instance as lightweight as possible. Both for cost and future maintenance reasons, open and customizable technologies have been preferred.  
 
MOPMU018 Update On The Central Control System of TRIUMF's 500 MeV Cyclotron controls, cyclotron, software, operation 469
 
  • M. Mouat, E. Klassen, K.S. Lee, J.J. Pon, P.J. Yogendran
    TRIUMF, Canada's National Laboratory for Particle and Nuclear Physics, Vancouver, Canada
 
  The Central Control System of TRIUMF's 500 MeV cyclotron was initially commissioned in the early 1970s. In 1987 a four-year project to upgrade the control system was planned and commenced. By 1997 this upgrade was complete and the new system was operating with increased reliability, functionality and maintainability. Since 1997 the system has evolved through incremental change, with functionality, reliability and maintainability continuing to improve. This paper provides an update on the present control system situation (2011) and possible future directions.  
poster icon Poster MOPMU018 [4.613 MB]  
 
MOPMU026 A Readout and Control System for a CTA Prototype Telescope controls, software, interface, framework 494
 
  • I. Oya, U. Schwanke
    Humboldt University Berlin, Institut für Physik, Berlin, Germany
  • B. Behera, D. Melkumyan, T. Schmidt, P. Wegner, S. Wiesand, M. Winde
    DESY Zeuthen, Zeuthen, Germany
 
  CTA (Cherenkov Telescope Array) is an initiative to build the next generation ground-based gamma-ray instrument. The CTA array will allow studies in the very high-energy domain in the range from a few tens of GeV to more than a hundred TeV, extending the existing energy coverage and increasing the sensitivity by a factor of 10 compared to current installations, while enhancing other aspects like angular and energy resolution. These goals require the use of at least three different sizes of telescopes. CTA will comprise two arrays (one in the Northern hemisphere and one in the Southern hemisphere) for full sky coverage and will be operated as an open observatory. A prototype for the Medium Size Telescope (MST) type is under development and will be deployed in Berlin by the end of 2011. The MST prototype will consist of the mechanical structure, drive system, active mirror control, four CCD cameras for prototype instrumentation and a weather station. The ALMA Common Software (ACS) distributed control framework has been chosen for the implementation of the control system of the prototype. In the present approach, the interface to some of the hardware devices is achieved by using the OPC Unified Architecture (OPC UA). A code-generation framework (ACSCG) has been designed for ACS modeling. In this contribution the progress in the design and implementation of the control system for the CTA MST prototype is described.  
poster icon Poster MOPMU026 [1.953 MB]  
 
MOPMU030 Control System for Linear Induction Accelerator LIA-2: the Structure and Hardware controls, induction, high-voltage, operation 502
 
  • G.A. Fatkin, P.A. Bak, A.M. Batrakov, P.V. Logachev, A. Panov, A.V. Pavlenko, V.Ya. Sazansky
    BINP SB RAS, Novosibirsk, Russia
 
  A high-power linear induction accelerator (LIA) for flash radiography is being commissioned at the Budker Institute of Nuclear Physics (BINP) in Novosibirsk. It is a facility producing a pulsed electron beam with an energy of 2 MeV, a current of 1 kA and a spot size of less than 2 mm. High beam quality and facility reliability are required for radiography experiments. The features and structure of the distributed control system ensuring these demands are discussed. The control system hardware, based on the CompactPCI and PMC standards, is embedded directly into the pulsed power generators. CAN bus and Ethernet are used as interconnection protocols. Parameters and essential details of the measuring equipment and control electronics, both produced at BINP and available COTS, are presented. The first results of control system commissioning, reliability and hardware robustness are discussed.  
poster icon Poster MOPMU030 [43.133 MB]  
 
MOPMU032 An EPICS IOC Builder EPICS, database, controls, software 506
 
  • M.G. Abbott, T.M. Cobb
    Diamond, Oxfordshire, United Kingdom
 
  An EPICS I/O controller (IOC) is typically assembled from a number of standard components, each with potentially quite complex hardware or software initialisation procedures intermixed with a good deal of repetitive boilerplate code. Assembling and maintaining a complex IOC can be a difficult and error-prone process, particularly if the components are unfamiliar. The EPICS IOC Builder is a Python library designed to automate the assembly of a complete IOC from a concise component-level description. The dependencies and interactions between components, as well as their detailed initialisation procedures, are automatically managed by the IOC Builder through component description files maintained with the individual components. At Diamond Light Source we have a large library of components that can be assembled into EPICS IOCs. The IOC Builder is also finding increasing use in helping non-expert users to assemble an IOC without specialist knowledge.  
poster icon Poster MOPMU032 [3.887 MB]  
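The core mechanism the abstract describes — expanding a concise component-level description into ordered initialisation boilerplate — can be sketched in a few lines of Python. This is a minimal illustration of the concept, not the actual IOC Builder API; the class names, the `build_startup` helper and the example initialisation strings are all hypothetical.

```python
# Sketch (hypothetical API, not the Diamond IOC Builder) of expanding a
# component-level description into st.cmd-style boilerplate: each component
# declares its dependencies and the initialisation lines it contributes, and
# a depth-first traversal guarantees that e.g. a bus driver is initialised
# before the devices that sit on it.

class Component:
    def __init__(self, name, init_lines, requires=()):
        self.name = name
        self.init_lines = init_lines      # boilerplate this component contributes
        self.requires = tuple(requires)   # components that must initialise first

def build_startup(components):
    """Return initialisation lines in dependency order."""
    by_name = {c.name: c for c in components}
    ordered, seen = [], set()

    def visit(c):
        if c.name in seen:
            return
        seen.add(c.name)
        for dep in c.requires:            # dependencies first, depth-first
            visit(by_name[dep])
        ordered.extend(c.init_lines)

    for c in components:
        visit(c)
    return ordered

# Illustrative description: a motor component that depends on a serial port.
asyn = Component("asyn", ['drvAsynIPPortConfigure("LAN0", "gpib:4001")'])
motor = Component("motor", ['MotorConfig("LAN0", 8)'], requires=["asyn"])
startup = build_startup([motor, asyn])
```

Even though the user listed `motor` first, the generated startup initialises the serial port before the motor that depends on it — the kind of ordering detail the real builder manages automatically.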
 
MOPMU040 REVOLUTION at SOLEIL: Review and Prospect for Motion Control controls, software, TANGO, radiation 525
 
  • D. Corruble, P. Betinelli-Deck, F. Blache, J. Coquet, N. Leclercq, R. Millet, A. Tournieux
    SOLEIL, Gif-sur-Yvette, France
 
  At any synchrotron facility motors are numerous: they are significant actuators for accelerators and the main actuators for beamlines. Since 2003, the Electronic Control and Data Acquisition group of SOLEIL has defined a modular and reliable motion architecture integrating industrial products (Galil controller, Midi Ingénierie and Phytron power boards). Simultaneously, the software control group has developed a set of dedicated Tango devices. At present, more than 1000 motors and 200 motion-controller crates are in operation at SOLEIL. Aware that motion control is important in improving performance, as the positioning of optical systems and samples is a key element of any beamline, SOLEIL wants to upgrade its motion controllers in order to maintain the facility at a high performance level and to meet new requirements: better accuracy, complex trajectories and coupled multi-axis devices such as hexapods. This project is called REVOLUTION (REconsider Various contrOLler for yoUr moTION).  
poster icon Poster MOPMU040 [1.388 MB]  
 
TUAAUST01 GDA and EPICS Working in Unison for Science Driven Data Acquisition and Control at Diamond Light Source detector, EPICS, controls, data-acquisition 529
 
  • E.P. Gibbons, M.T. Heron, N.P. Rees
    Diamond, Oxfordshire, United Kingdom
 
  Diamond Light Source has recently received funding for an additional 10 photon beamlines, bringing the total to 32 beamlines and around 40 end-stations. These all use EPICS for the control of the underlying instrumentation associated with photon delivery, the experiment and most of the data acquisition hardware. For the scientific users Diamond has developed the Generic Data Acquisition (GDA) application framework to provide a consistent science interface across all beamlines. While each application is customised to the science of its beamline, all applications are built from the framework and predominantly interface to the underlying instrumentation through the EPICS abstraction. We will describe the complete system, illustrate how it can be configured for a specific beamline application, and show how other synchrotrons are adapting, and can adapt, these tools for their needs.  
slides icon Slides TUAAUST01 [9.781 MB]  
 
TUBAUST01 FPGA-based Hardware Instrumentation Development on MAST FPGA, controls, plasma, diagnostics 544
 
  • B.K. Huang, R.M. Myers, R.M. Sharples
    Durham University, Durham, United Kingdom
  • N. Ben Ayed, G. Cunningham, A. Field, S. Khilar, G.A. Naylor
    CCFE, Abingdon, Oxon, United Kingdom
  • R.G.L. Vann
    York University, Heslington, York, United Kingdom
 
  Funding: This work was part-funded by the RCUK Energy Programme under grant EP/I501045 and the European Communities under the Contract of Association between EURATOM and CCFE.
On MAST (the Mega Amp Spherical Tokamak) at Culham Centre for Fusion Energy some key control systems and diagnostics are being developed and upgraded with FPGA hardware. FPGAs provide many benefits including low latency and real-time digital signal processing. FPGAs blur the line between hardware and software: they are programmed (in the VHDL/Verilog language) using software, but once configured act deterministically as hardware. The challenges in developing a system are keeping up-front and maintenance costs low and prolonging the life of the device as much as possible. We lower costs by using industry standards such as the FMC (FPGA Mezzanine Card) VITA 57 standard and by using COTS (Commercial Off The Shelf) components, which are significantly less costly than developing them in-house. We extend the device operational lifetime by using a flexible FPGA architecture and industry standard interfaces. We discuss the implementation of FPGA control on two specific systems on MAST. The Vertical Stabilisation system comprises a 1U form factor box with one SP601 Spartan-6 FPGA board, 10/100 Ethernet access, a MicroBlaze processor, a 24-bit sigma-delta ADS1672 ADC and an ATX power supply for remote power cycling. The Electron Bernstein Wave system comprises a 4U form factor box with two ML605 Virtex-6 FPGA boards, Gigabit Ethernet, a MicroBlaze processor and two FMC108 ADCs providing 16 channels at 14 bits and 250 MHz. AXI4 is used as the on-chip bus between firmware components to allow very high data rates, tested at over 40 Gb/s streaming into a 2 GB DDR3 SODIMM.
 
slides icon Slides TUBAUST01 [8.172 MB]  
 
TUBAUST02 FPGA Communications Based on Gigabit Ethernet FPGA, Ethernet, interface, controls 547
 
  • L.R. Doolittle, C. Serrano
    LBNL, Berkeley, California, USA
 
  The use of Field Programmable Gate Arrays (FPGAs) in accelerators is widespread due to their flexibility, performance, and affordability. Whether they are used for fast feedback systems, data acquisition, fast communications using custom protocols, or any other application, there is a need for the end-user and the global control software to access FPGA features using a commodity computer. The choice of communication standards that can be used to interface to an FPGA board is wide; however, there is one that stands out for its maturity, basis in standards, performance, and hardware support: Gigabit Ethernet. In the context of accelerators it is desirable to have highly reliable, portable, and flexible solutions. We have therefore developed a chip- and board-independent FPGA design which implements the Gigabit Ethernet standard. Our design has been configured for use with multiple projects, supports full line-rate traffic, and communicates with any other device implementing the same well-established protocol, easily supported by any modern workstation or controls computer.  
slides icon Slides TUBAUST02 [0.909 MB]  
 
TUBAULT03 The Upgrade Path from Legacy VME to VXS Dual Star Connectivity for Large Scale Data Acquisition and Trigger Systems detector, fibre-optics, data-acquisition, FPGA 550
 
  • C. Cuevas, D. Abbott, F.J. Barbosa, H. Dong, W. Gu, E. Jastrzembski, S.R. Kaneta, B. Moffit, N. Nganga, B.J. Raydo, A. Somov, W.M. Taylor, J. Wilson
    JLAB, Newport News, Virginia, USA
 
  Funding: Authored by Jefferson Science Associates, LLC under U.S. DOE Contract No. DE-AC05-06OR23177
New instrumentation modules have been designed by Jefferson Lab to take advantage of the higher performance and elegant backplane connectivity of the VITA 41 VXS standard. These new modules are required to meet the 200 kHz trigger rates envisioned for the 12 GeV experimental program. Upgrading legacy VME designs to the high-speed gigabit serial extensions that VXS offers comes with significant challenges, including electronic engineering design plus firmware and software development issues. This paper will detail our system design approach, including the critical system requirement stages, and explain the pipeline design techniques and the selection criteria for FPGAs that require embedded gigabit serial transceivers. The entire trigger system is synchronous and operates with a 250 MHz clock, with synchronization signals and the global trigger signals distributed to each front-end readout crate via the second switch slot of the 21-slot, dual-star VXS backplane. The readout of the buffered detector signals relies on 2eSST over the standard VME64x path at >200 MB/s. We have achieved a 20 Gb/s transfer rate of trigger information within one VXS crate and will present results using production modules in a two-crate test configuration with both VXS crates fully populated. The VXS trigger modules that reside in the front-end crates will be ready for production orders by the end of the 2011 fiscal year. VXS global trigger modules are in the design stage now and will be completed to meet the installation schedule for the 12 GeV physics program.
 
slides icon Slides TUBAULT03 [7.189 MB]  
 
TUBAULT04 Open Hardware for CERN’s Accelerator Control Systems controls, FPGA, software, timing 554
 
  • E. Van der Bij, P. Alvarez, M. Ayass, A. Boccardi, M. Cattin, C. Gil Soriano, E. Gousiou, S. Iglesias Gonsálvez, G. Penacoba Fernandez, J. Serrano, N. Voumard, T. Włostowski
    CERN, Geneva, Switzerland
 
  The accelerator control systems at CERN will be renovated and many electronics modules will be redesigned, as the modules they will replace can no longer be bought or use obsolete components. The modules used in the control systems are diverse: analog and digital I/O, level converters and repeaters, serial links and timing modules. Overall around 120 modules are supported that are used in systems such as beam instrumentation, cryogenics and power converters. Only a small percentage of the currently used modules are commercially available, while most of them had been specifically designed at CERN. The new developments are based on VITA and PCI-SIG standards such as FMC (FPGA Mezzanine Card), PCI Express and VME64x using transition modules. As system-on-chip interconnect, the public domain Wishbone specification is used. For the renovation, it is considered imperative to have access, for each board, to the full hardware design and its firmware, so that problems can quickly be resolved by CERN engineers or its collaborators. To attract other partners that are not necessarily part of the existing networks of particle physics, the new projects are developed in a fully 'Open' fashion. This allows for strong collaborations that will result in better and reusable designs. Within this Open Hardware project new ways of working with industry are being tested, with the aim of proving that there is no contradiction between commercial off-the-shelf products and openness, and that industry can be involved at all stages, from design to production and support.  
slides icon Slides TUBAULT04 [7.225 MB]  
 
TUBAUIO05 Challenges for Emerging New Electronics Standards for Physics controls, software, interface, monitoring 558
 
  • R.S. Larsen
    SLAC, Menlo Park, California, USA
 
  Funding: Work supported by US Department of Energy Contract DE AC03 76SF00515
A unique effort is underway between industry and the international physics community to extend the Telecom industry’s Advanced Telecommunications Computing Architecture (ATCA and MicroTCA) to meet future needs of the physics machine and detector community. New standard extensions for physics have now been designed to deliver unprecedented performance and high subsystem availability for accelerator controls, instrumentation and data acquisition. Key technical features include a unique out-of-band embedded standard Intelligent Platform Management Interface (IPMI) system to manage hot-swap module replacement and hardware-software failover. However, the acceptance of any new standard depends critically on the creation of strong collaborations among users and between the user and industry communities. For the relatively small high-performance physics market to attract strong industry support, collaborations must converge on core infrastructure components, including hardware, timing, software and firmware architectures, and must strive for a much higher degree of interoperability between lab- and industry-designed hardware-software products than past generations of standards achieved. The xTCA platform presents a unique opportunity for future progress. This presentation will describe the status of the hardware-software extension plans; technology advantages for machine controls and data acquisition systems; and examples of current collaborative efforts to help develop an industry base of generic ATCA and MicroTCA products in an open-source environment.
1. PICMG, the PCI Industrial Computer Manufacturer’s Group
2. Lab representation on PICMG includes CERN, DESY, FNAL, IHEP, IPFN, ITER and SLAC
 
slides icon Slides TUBAUIO05 [1.935 MB]  
 
TUCAUST01 Upgrading the Fermilab Fire and Security Reporting System interface, network, software, database 563
 
  • CA. King, R. Neswold
    Fermilab, Batavia, USA
 
  Funding: Operated by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the United States Department of Energy.
Fermilab's homegrown fire and security system (known as FIRUS) is highly reliable and has been in use for nearly thirty years. The system has gone through some minor upgrades; however, none of them introduced significant, visible changes. In this paper, we present a major overhaul of the system that is halfway complete. We discuss the use of Apple's OS X for the new GUI, the upgrade of the servers to the Erlang programming language, and the provision of limited access for iOS and Android-based mobile devices.
 
slides icon Slides TUCAUST01 [2.818 MB]  
 
TUDAUST04 Status of the Control System for the European XFEL controls, distributed, feedback, device-server 597
 
  • K. Rehlich
    DESY, Hamburg, Germany
 
  DESY is currently building a new 3.4 km-long X-ray free electron laser facility. Commissioning is planned in 2014. The facility will deliver ultra-short light pulses with a peak power of up to 100 GW and a wavelength down to 0.1 nm. About 200 distributed electronic crates will be used to control the facility. A major fraction of the controls will be installed inside the accelerator tunnel. MicroTCA was chosen as an adequate standard with state-of-the-art connectivity and performance, including remote management. The FEL will produce up to 27000 bunches per second. Data acquisition and controls have to provide bunch-synchronous operation within the whole distributed system. Feedbacks implemented in FPGAs and in service-tier processes will provide the required stability and automation of the FEL. This paper describes the progress in the development of the new hardware as well as the software architecture. Parts of the control system are currently deployed in the much smaller FLASH FEL facility.  
slides icon Slides TUDAUST04 [6.640 MB]  
 
WEBHAUST02 Optimizing Infrastructure for Software Testing Using Virtualization network, software, Windows, distributed 622
 
  • O. Khalid, B. Copy, A A. Shaikh
    CERN, Geneva, Switzerland
 
  Virtualization technology and cloud computing have brought a paradigm shift in the way we utilize, deploy and manage computer resources. They allow fast deployment of multiple operating systems as containers on physical machines, which can be either discarded after use or snapshotted for later re-deployment. At CERN, we have been using virtualization/cloud computing to quickly set up virtual machines for our developers with pre-configured software, to enable them to test and deploy new versions of software patches for given applications. We have also been using the infrastructure for security analysis of control systems, as virtualization provides a degree of isolation in which control systems such as SCADA systems can be evaluated against simulated network attacks. This paper reports on the techniques used for this security analysis, involving network configuration and isolation to prevent interference with other systems on the network. It also provides an overview of the technologies used to deploy such an infrastructure, based on VMware and the OpenNebula cloud management platform.  
slides icon Slides WEBHAUST02 [2.899 MB]  
 
WEBHMULT03 EtherBone - A Network Layer for the Wishbone SoC Bus operation, Ethernet, software, timing 642
 
  • M. Kreider, W.W. Terpstra
    GSI, Darmstadt, Germany
  • J.H. Lewis, J. Serrano, T. Włostowski
    CERN, Geneva, Switzerland
 
  Today, there are several System-on-a-Chip (SoC) bus systems. Typically, these buses are confined on-chip and rely on higher-level components to communicate with the outside world. Taking these systems a step further, we see the possibility of extending the reach of the SoC bus to remote FPGAs or processors. This leads to the idea of the EtherBone (EB) core, which connects a Wishbone (WB) Ver. 4 bus via a Gigabit Ethernet based network link to remote peripheral devices. EB acts as a transparent interconnect module towards attached WB bus devices. Address information and data from one or more WB bus cycles are preceded by a descriptive header and encapsulated in a UDP/IP packet. Because of this standards compliance, EB is able to traverse Wide Area Networks and is therefore not bound to a geographic location. Due to the low-level nature of the WB bus, EB provides a sound basis for remote hardware tools like a JTAG debugger, In-System Programmer (ISP), boundary scan interface or logic analyser module. EB was developed in the scope of the White Rabbit Timing Project (WR) at CERN and GSI/FAIR, which employs Gigabit Ethernet technology to communicate with memory-mapped slave devices. WR will use EB as the means to issue commands to its timing nodes and control connected accelerator hardware.  
slides icon Slides WEBHMULT03 [1.547 MB]  
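The encapsulation idea in the abstract — serialising Wishbone bus cycles behind a descriptive header inside a UDP payload — can be sketched as follows. This is an illustrative serialisation only: the header layout, version field, record format and magic value below are simplified placeholders, not the real EtherBone record format defined by the specification.

```python
# Illustrative sketch of EtherBone's core idea: address/data pairs from one
# or more Wishbone write cycles are serialised behind a small header and
# carried in a single UDP/IP payload. All field layouts and values here are
# simplified stand-ins, not the actual EtherBone wire format.

import struct

EB_MAGIC = 0x4E6F  # placeholder 16-bit magic for this sketch

def encapsulate_wb_writes(base_addr, values):
    """Serialise consecutive 32-bit Wishbone writes as header + records."""
    # Header: magic, protocol version, record count (big-endian, 4 bytes).
    header = struct.pack(">HBB", EB_MAGIC, 1, len(values))
    # One (address, data) record per write; addresses advance by one word.
    records = b"".join(
        struct.pack(">II", base_addr + 4 * i, v) for i, v in enumerate(values)
    )
    return header + records

payload = encapsulate_wb_writes(0x20000000, [0xDEADBEEF, 0x12345678])
# A control host would then hand this payload to a plain UDP socket, e.g.
#   socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(payload, (ip, port))
# which is what lets EB traffic traverse ordinary routed networks.
```

Because the carrier is standard UDP/IP, nothing between the control host and the remote FPGA needs to understand Wishbone — which is exactly the property that frees EB from a fixed geographic location.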
 
WEMAU005 The ATLAS Transition Radiation Tracker (TRT) Detector Control System detector, controls, electronics, operation 666
 
  • J. Olszowska, E. Banaś, Z. Hajduk
    IFJ-PAN, Kraków, Poland
  • M. Hance, D. Olivito, P. Wagner
    University of Pennsylvania, Philadelphia, Pennsylvania, USA
  • T. Kowalski, B. Mindur
    AGH University of Science and Technology, Krakow, Poland
  • R. Mashinistov, K. Zhukov
    LPI, Moscow, Russia
  • A. Romaniouk
    MEPhI, Moscow, Russia
 
  Funding: CERN; MNiSW, Poland; MES of Russia and ROSATOM, Russian Federation; DOE and NSF, United States of America
The TRT is one of the components of the ATLAS Inner Detector, providing precise tracking and electron identification. It consists of 370 000 proportional counters (straws), which have to be filled with a stable active gas mixture and biased with high voltage. The high-voltage settings in distinct topological regions are periodically adjusted by a closed-loop regulation mechanism to ensure a constant gas gain, independent of drifts in atmospheric pressure, local detector temperatures and gas mixture composition. A low-voltage system powers the front-end electronics. Special algorithms provide fine-tuning procedures for detector-wide discrimination-threshold equalization, to guarantee a uniform noise figure across the whole detector. Detector, cooling system and electronics temperatures are continuously monitored by ~3000 temperature sensors. Standard industrial and custom-developed server applications and protocols are used to integrate the devices into a single system. All parameters originating in TRT devices and external infrastructure systems (important for detector operation or safety) are monitored and used by alert and interlock mechanisms. The system runs on 11 computers as PVSS (industrial SCADA) projects and is fully integrated with the ATLAS Detector Control System.
 
slides icon Slides WEMAU005 [1.384 MB]  
poster icon Poster WEMAU005 [1.978 MB]  
 
WEMAU011 LIMA: A Generic Library for High Throughput Image Acquisition detector, controls, software, interface 676
 
  • A. Homs, L. Claustre, A. Kirov, E. Papillon, S. Petitdemange
    ESRF, Grenoble, France
 
  A significant number of 2D detectors are used in large-scale facilities' control systems for quantitative data analysis. In these devices a common set of control parameters and features can be identified, but most manufacturers provide specific software control interfaces. A generic image acquisition library, called LIMA, has been developed at the ESRF for better compatibility and easier integration of 2D detectors into existing control systems. The LIMA design is driven by three main goals: i) independence from any control system, so that it can be shared by a wide scientific community; ii) a rich common set of functionalities (e.g., if a feature is not supported by the hardware, an alternative software implementation is provided); and iii) intensive use of events and multi-threaded algorithms for optimal exploitation of multi-core hardware resources, needed when controlling high-throughput detectors. LIMA currently supports the ESRF Frelon and Maxipix detectors as well as the Dectris Pilatus. Within a collaborative framework, the integration of the Basler GigE cameras is a contribution from SOLEIL. Although still under development, LIMA already features fast data saving in different file formats and basic data processing/reduction, such as software pixel binning/sub-image selection, background subtraction, beam centroid and sub-image statistics calculation, among others.  
slides icon Slides WEMAU011 [0.073 MB]  
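Two of the software-fallback reductions listed in the abstract can be sketched in pure Python; the function names below are hypothetical, not part of the LIMA API, and a real implementation would operate on native arrays rather than lists of lists.

```python
# Pure-Python sketch of two frame reductions named in the LIMA abstract:
# pixel binning and background subtraction. If a detector cannot bin in
# hardware, the same reduction can be applied to the raw frame in software.

def bin_pixels(frame, bx, by):
    """Sum bx-by pixel blocks; frame is a list of equal-length rows."""
    h, w = len(frame), len(frame[0])
    return [
        [
            sum(frame[y + dy][x + dx] for dy in range(by) for dx in range(bx))
            for x in range(0, w - w % bx, bx)   # drop any partial edge block
        ]
        for y in range(0, h - h % by, by)
    ]

def subtract_background(frame, background):
    """Pixel-wise background subtraction, clamped at zero."""
    return [
        [max(p - b, 0) for p, b in zip(row, brow)]
        for row, brow in zip(frame, background)
    ]

raw = [[1, 2, 3, 4],
       [5, 6, 7, 8]]
binned = bin_pixels(raw, 2, 2)   # 2x2 binning quarters the pixel count
```

Hardware binning would do the same summation on the detector itself; exposing both behind one interface is exactly the "software fallback" design goal the abstract describes.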
 
WEMMU001 Floating-point-based Hardware Accelerator of a Beam Phase-Magnitude Detector and Filter for a Beam Phase Control System in a Heavy-Ion Synchrotron Application detector, controls, synchrotron, FPGA 683
 
  • F.A. Samman
    Technische Universität Darmstadt, Darmstadt, Germany
  • M. Glesner, C. Spies, S. Surapong
    TUD, Darmstadt, Germany
 
  Funding: German Federal Ministry of Education and Research in the frame of Project FAIR (Facility for Antiproton and Ion Research), Grant Number 06DA9028I.
A hardware implementation of an adaptive phase and magnitude detector and filter for the beam-phase control system of a heavy-ion synchrotron application is presented in this paper [1]. The main components of the hardware are adaptive LMS filters and a phase and magnitude detector. The phase detectors are implemented using a CORDIC algorithm based on 32-bit binary floating-point arithmetic data formats. Therefore, a decimal to floating-point adapter is required to interface the data from an ADC to the phase and magnitude detector. The floating-point-based hardware is designed to improve on the precision of the previous hardware implementation, which was based on fixed-point arithmetic. The detector hardware and the adaptive LMS filter have been implemented on a reconfigurable FPGA device for hardware acceleration purposes. The ideal Matlab/Simulink model of the hardware and the VHDL models of the adaptive LMS filter and the phase and magnitude detector are compared. The comparison shows that the output signal of the floating-point-based adaptive FIR filter, as well as that of the phase and magnitude detector, is similar to the expected output signal of the ideal Matlab/Simulink model.
[1] H. Klingbeil, "A Fast DSP-Based Phase-Detector for Closed-Loop RF Control in Synchrotrons," IEEE Trans. Instrum. Meas., 54(3):1209–1213, 2005.
 
slides icon Slides WEMMU001 [0.383 MB]  
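The vectoring-mode CORDIC iteration that such a detector uses to turn an (I, Q) sample pair into phase and magnitude can be sketched in Python floating point. This mirrors only the textbook algorithm, not the paper's VHDL; iteration count and the pre-rotation convention are choices made for this sketch.

```python
# Sketch of vectoring-mode CORDIC: rotate the vector (i, q) onto the
# positive i-axis using only shifts and adds, accumulating the rotation
# angle. The residual i, divided by the known CORDIC gain, is the magnitude.

import math

def cordic_phase_magnitude(i, q, iterations=24):
    """Return (phase_rad, magnitude) of the vector (i, q)."""
    phase = 0.0
    if i < 0:                              # pre-rotate into the right half-plane
        phase = math.copysign(math.pi, q)
        i, q = -i, -q
    gain = 1.0
    for k in range(iterations):
        f = 2.0 ** -k                      # in hardware: an arithmetic shift by k
        gain *= math.sqrt(1.0 + f * f)     # each micro-rotation stretches by this
        if q > 0:                          # rotate clockwise toward the i-axis
            i, q = i + q * f, q - i * f
            phase += math.atan(f)
        else:                              # rotate counter-clockwise
            i, q = i - q * f, q + i * f
            phase -= math.atan(f)
    return phase, i / gain
```

In an FPGA the per-iteration `atan(2**-k)` values come from a small lookup table and the gain correction is a single constant multiply, which is what makes CORDIC attractive for a hardware phase detector.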
 
WEMMU007 Reliability in a White Rabbit Network network, timing, controls, Ethernet 698
 
  • M. Lipiński, J. Serrano, T. Włostowski
    CERN, Geneva, Switzerland
  • C. Prados
    GSI, Darmstadt, Germany
 
  White Rabbit (WR) is a time-deterministic, low-latency Ethernet-based network which enables transparent, sub-ns accuracy timing distribution. It is being developed to replace the General Machine Timing (GMT) system currently used at CERN and will become the foundation for the control system of the Facility for Antiproton and Ion Research (FAIR) at GSI. High reliability is an important issue in WR's design, since unavailability of the accelerator's control system translates directly into expensive downtime of the machine. A typical WR network is required to lose no more than a single message per year. Due to WR's complexity, translating this real-world requirement into a reliability requirement is an interesting issue in its own right: a WR network is considered functional only if it provides all its services to all its clients at all times. This paper defines reliability in WR and describes how it was addressed by dividing it into sub-domains: deterministic packet delivery, data redundancy, topology redundancy and clock resilience. The studies show that the Mean Time Between Failures (MTBF) of the WR network is the main factor affecting its reliability. Therefore, probability calculations for different topologies were performed using Fault Tree analysis and analytic estimations. The results of the study show that the requirements on WR are demanding. Design changes might be needed and further in-depth studies are required, e.g. Monte Carlo simulations; a direction for further investigations is therefore proposed.  
slides icon Slides WEMMU007 [0.689 MB]  
poster icon Poster WEMMU007 [1.080 MB]  
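The kind of reliability arithmetic behind such a fault-tree study can be illustrated with a back-of-envelope sketch: under the usual constant-failure-rate assumption, a component with a given MTBF survives a mission of length t with probability exp(-t/MTBF), a series chain fails if any element fails, and M-fold redundancy fails only if every replica fails. The MTBF figures below are illustrative placeholders, not results from the WR study.

```python
# Back-of-envelope reliability sketch (illustrative numbers, not WR results):
# exponential lifetimes, independent failures, one-year mission.

import math

HOURS_PER_YEAR = 8766.0

def p_fail(mtbf_hours, mission_hours=HOURS_PER_YEAR):
    """Failure probability over the mission for a constant failure rate."""
    return 1.0 - math.exp(-mission_hours / mtbf_hours)

def series(*probs):
    """Series system of independent elements: fails if any element fails."""
    ok = 1.0
    for p in probs:
        ok *= 1.0 - p
    return 1.0 - ok

def redundant(p, copies):
    """M-fold redundancy: fails only if every independent copy fails."""
    return p ** copies

# A single timing path through two switches and one fibre link
# (MTBF values are assumptions chosen only to make the point).
switch = p_fail(200_000)
link = p_fail(1_000_000)
single_path = series(switch, link, switch)
doubled = redundant(single_path, 2)   # topology redundancy: two disjoint paths
```

Because the series failure probability grows with every element while redundancy multiplies small probabilities together, topology redundancy attacks exactly the MTBF bottleneck the paper identifies; the independence assumption is what a fuller fault-tree or Monte Carlo study would have to scrutinise.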
 
WEMMU010 Dependable Design Flow for Protection Systems using Programmable Logic Devices simulation, FPGA, controls, software 706
 
  • M. Kwiatkowski, B. Todd
    CERN, Geneva, Switzerland
 
  Programmable Logic Devices (PLDs) such as Field Programmable Gate Arrays (FPGAs) are becoming more prevalent in protection and safety-related electronic systems. When employing such programmable logic devices, extra care and attention need to be taken. It is important to be confident that the final synthesis result, used to generate the bit-stream to program the device, meets the design requirements. This paper will describe how to maximize confidence using techniques such as Formal Methods, exhaustive Hardware Description Language (HDL) code simulation and hardware testing. An example will be given for one of the critical functions of the Safe Machine Parameters (SMP) system, one of the key systems for the protection of the Large Hadron Collider (LHC) at CERN. The design flow will be presented, in which the implementation phase is just one small element of the whole process. The techniques and tools presented can be applied to the implementation and verification of any PLD-based system.  
slides icon Slides WEMMU010 [1.093 MB]  
poster icon Poster WEMMU010 [0.829 MB]  
 
WEPKN010 European XFEL Phase Shifter: PC-based Control System controls, LabView, undulator, GUI 731
 
  • E. Molina Marinas, J.M. Cela-Ruiz, A. Guirao, L.M. Martinez Fresno, I. Moya, A.L. Pardillo, S. Sanz, C. Vazquez, J.G.S. de la Gama
    CIEMAT, Madrid, Spain
 
  Funding: Work partially supported by the Spanish Ministry of Science and Innovation under SEI Resolution on 17-September-2009
The Accelerator Technology Unit at CIEMAT is in charge of part of the Spanish contribution to the European X-Ray Free-Electron Laser (EXFEL). This paper presents the control system of the Phase Shifter (PS), a beam phase corrector magnet that will be installed in the intersections of the SASE undulator system. Beckhoff has been chosen by EXFEL as its main supplier for the industrial control systems. The Beckhoff TwinCAT PLC architecture is a PC-based control technology built on EtherCAT, a real-time Ethernet fieldbus. The PS is operated with a stepper motor; its position is monitored by an incremental encoder and controlled by a TwinCAT PLC program using the TcMC2 library, an implementation of the PLCopen Motion Control specification. A GUI has been developed in LabVIEW instead of using the Beckhoff visualization tool. The control systems for the first and second prototype devices have been developed in-house using COTS hardware and software. The specifications require a repeatability of ±50 μm in bidirectional movements and ±10 μm in unidirectional movements. The second prototype can reach speeds of up to 15 mm/s.
 
poster icon Poster WEPKN010 [3.077 MB]  
 
WEPKN026 The ELBE Control System – 10 Years of Experience with Commercial Control, SCADA and DAQ Environments controls, software, electron, interface 759
 
  • M. Justus, F. Herbrand, R. Jainsch, N. Kretzschmar, K.-W. Leege, P. Michel, A. Schamlott
    HZDR, Dresden, Germany
 
  The electron accelerator facility ELBE is the central experimental site of the Helmholtz-Zentrum Dresden-Rossendorf, Germany. Experiments with Bremsstrahlung started in 2001 and since then, through a series of expansions and modifications, ELBE has evolved into a 24/7 user facility running a total of seven secondary sources, including two IR FELs. As its control system, ELBE uses WinCC on top of a networked PLC architecture. For data acquisition with high temporal resolution, PXI and PC based systems are in use, applying National Instruments hardware and LabVIEW application software. Machine protection systems are based on in-house built digital and analogue hardware. An overview of the system is given, along with an experience report on maintenance, reliability and efforts to keep track with ongoing IT, OS and security developments. Limits of application and new demands imposed by the forthcoming facility upgrade as a centre for high intensity beams (in conjunction with TW/PW femtosecond lasers) are discussed.  
poster icon Poster WEPKN026 [0.102 MB]  
 
WEPKS002 Quick EXAFS Experiments Using a New GDA Eclipse RCP GUI with EPICS Hardware Control experiment, detector, interface, EPICS 771
 
  • R.J. Woolliscroft, C. Coles, M. Gerring, M.R. Pearson
    Diamond, Oxfordshire, United Kingdom
 
  Funding: Diamond Light Source Ltd.
The Generic Data Acquisition (GDA)* framework is an open source, Java and Eclipse RCP based data acquisition software for synchrotron and neutron facilities. A new implementation of the GDA on the B18 beamline at the Diamond synchrotron will be discussed. This beamline performs XAS energy-scanning experiments and includes a continuous-scan mode of the monochromator, synchronised with various detectors, for Quick EXAFS (QEXAFS) experiments. A new perspective for the GDA's Eclipse RCP GUI has been developed, in which graphical editors are used to write XML files that hold experimental parameters. The same XML files are marshalled by the GDA server to create Java beans used by the Jython scripts run within the GDA server. The underlying motion control is provided by EPICS. The new Eclipse RCP GUI and the integration and synchronisation between the two software systems and the detectors will be covered.
* GDA website: http://www.opengda.org/
 
poster icon Poster WEPKS002 [1.277 MB]  
 
WEPKS004 ISAC EPICS on Linux: The March of the Penguins Linux, controls, EPICS, ISAC 778
 
  • J.E. Richards, R.B. Nussbaumer, S. Rapaz, G. Waters
    TRIUMF, Canada's National Laboratory for Particle and Nuclear Physics, Vancouver, Canada
 
  The DC linear accelerators of the ISAC radioactive beam facility at TRIUMF do not impose rigorous timing constraints on the control system. Therefore a real-time operating system is not essential for device control. The ISAC Control System is completing a move to the use of the open source Linux operating system for hosting all EPICS IOCs. The IOC platforms include GE-Fanuc VME based CPUs for control of most optics and diagnostics, rack mounted servers for supervising PLCs, small desktop PCs for GPIB and serial "one-of-a-kind" instruments, as well as embedded ARM processors controlling CAN-bus devices that provide a suitcase-sized control system. This article focuses on the experience of creating a customized Linux distribution for front-end IOC deployment. The rationale, a roadmap of the process, and the efficiency advantages in personnel training and system management realized by using a single OS will be discussed.  
 
WEPKS014 NOMAD – More Than a Simple Sequencer controls, CORBA, experiment, interface 808
 
  • P. Mutti, F. Cecillon, A. Elaazzouzi, Y. Le Goc, J. Locatelli, H. Ortiz, J. Ratel
    ILL, Grenoble, France
 
  NOMAD is the new instrument control software of the Institut Laue-Langevin. Code that is highly sharable across the whole instrument suite, a user-oriented design for tailored functionality, and improved instrument-team autonomy thanks to a uniform and ergonomic user interface are the essential elements guiding the software development. NOMAD implements a client/server approach. The server is the core business layer containing all the instrument methods and the hardware drivers, while the GUI provides all the functionality necessary for the interaction between user and hardware. All instruments share the same executable, while a set of XML configuration files adapts hardware needs and instrument methods to the specific experimental setup. Thanks to a complete graphical representation of experimental sequences, NOMAD provides an overview of past, present and future operations. Users have the freedom to build their own specific workflows using an intuitive drag-and-drop technique. A complete database of drivers to connect and control all possible instrument components has been created, simplifying the inclusion of a new piece of equipment for an experiment. A web application makes all the relevant information on the status of the experiment available outside the ILL. A set of scientific methods facilitates the interaction between users and hardware, giving access to instrument control and to complex operations within just one click on the interface. NOMAD is not only for scientists: dedicated tools allow daily use for setting up and testing a variety of technical equipment.  
poster icon Poster WEPKS014 [6.856 MB]  
 
WEPKS015 Automatic Creation of LabVIEW Network Shared Variables LabView, controls, network, distributed 812
 
  • T. Kluge
    Siemens AG, Erlangen, Germany
  • H.-C. Schröder
    ASTRUM IT GmbH, Erlangen, Germany
 
  We are in the process of preparing the LabVIEW-controlled system components of our Solid State Direct Drive® experiments [1, 2, 3, 4] for integration into a Supervisory Control And Data Acquisition (SCADA) or distributed control system. The predetermined route to this is the generation of LabVIEW network shared variables that can easily be exported by LabVIEW to the SCADA system using OLE for Process Control (OPC) or other means. Many repetitive tasks are associated with the creation of the shared variables and the required code. We introduce an efficient and inexpensive procedure that automatically creates shared variable libraries and sets default values for the shared variables. Furthermore, LabVIEW controls are created that are used for managing the connection to the shared variables inside the LabVIEW code operating on them. The procedure takes as input an XML spreadsheet defining the required variables, and utilizes XSLT and LabVIEW scripting. In a later stage of the project the code generation can be expanded to also create the code and configuration files that will become necessary to access the shared variables from the SCADA system of choice.
[1] O. Heid, T. Hughes, THPD002, IPAC10, Kyoto, Japan
[2] R. Irsigler et al, 3B-9, PPC11, Chicago IL, USA
[3] O. Heid, T. Hughes, THP068, LINAC10, Tsukuba, Japan
[4] O. Heid, T. Hughes, MOPD42, HB2010, Morschach, Switzerland
 
poster icon Poster WEPKS015 [0.265 MB]  
 
WEPKS018 MstApp, a Rich Client Control Applications Framework at DESY framework, controls, operation, status 819
 
  • W. Schütte, K. Hinsch
    DESY, Hamburg, Germany
 
  Funding: Deutsches Elektronen-Synchrotron DESY
The control system for PETRA 3 [1] and its pre-accelerators makes extensive use of rich clients for the control room and the servers. Most of them are written with the help of a rich-client Java framework: MstApp. They total 106 different console applications and 158 individual server applications. MstApp takes care of many common control system application aspects beyond communication. MstApp provides a common look and feel: core menu items, a color scheme for the standard states of hardware components, and standardized screen sizes and locations. It interfaces with our console application manager (CAM) and displays our communication-link diagnostics tools on demand. MstApp supplies an accelerator context for each application; it handles printing, logging, resizing and unexpected application crashes. Thanks to our standardized deployment process, MstApp applications know their individual developers and can even send them emails at the press of a button. Furthermore, a concept of different operation modes is implemented: view-only, operating and expert use. Administration of the corresponding rights is done via web access to a database server. Initialization files on a web server are instantiated as Java objects with the help of the Java SE XMLEncoder; data tables are read with the same mechanism. New MstApp applications can easily be created with in-house wizards like the NewProjectWizard or the DeviceServerWizard. MstApp improves the operator experience, application-developer productivity and the quality of the delivered software.
[1] Reinhard Bacher, “Commissioning of the New Control System for the PETRA 3 Accelerator Complex at Desy”, Proceedings of ICALEPCS 2009, Kobe, Japan
 
poster icon Poster WEPKS018 [0.474 MB]  
 
WEPKS025 Evaluation of Software and Electronics Technologies for the Control of the E-ELT Instruments: a Case Study controls, software, framework, CORBA 844
 
  • P. Di Marcantonio, R. Cirami, I. Coretti
    INAF-OAT, Trieste, Italy
  • G. Chiozzi, M. Kiekebusch
    ESO, Garching bei Muenchen, Germany
 
  In the scope of the evaluation of architectures and technologies for the control system of the E-ELT (European Extremely Large Telescope) instruments, a collaboration has been set up between the Instrumentation and Control Group of INAF-OATs and the ESO Directorate of Engineering. The first result of this collaboration is the design and implementation of a prototype of a small but representative control system for an E-ELT instrument, which has been set up at the INAF-OATs premises. The electronics is based on PLCs (Programmable Logic Controllers) and Ethernet-based fieldbuses from different vendors, using international standards such as IEC 61131-3 and PLCopen Motion Control. The baseline design for the control software follows the architecture of the VLT (Very Large Telescope) Instrumentation application framework, but it has been implemented using ACS (ALMA Common Software), an open source software framework developed for the ALMA project and based on CORBA middleware. The communication among the software components is based on two models: CORBA calls for command/reply and the CORBA notification channel for distributing the device status. The communication with the PLCs is based on OPC-UA, an international standard for communication with industrial controllers. The results of this work will contribute to the definition of the architecture of the control system that will be provided to all consortia responsible for the actual implementation of the E-ELT instruments. This paper presents the prototype's motivation, architecture, design and implementation.  
poster icon Poster WEPKS025 [3.039 MB]  
 
WEPMN006 Commercial FPGA Based Multipurpose Controller: Implementation Perspective EPICS, FPGA, GUI, controls 882
 
  • I. Arredondo, D. Belver, P. Echevarria, M. Eguiraun, H. Hassanzadegan, M. del Campo
    ESS-Bilbao, Zamudio, Spain
  • V. Etxebarria, J. Jugo
    University of the Basque Country, Faculty of Science and Technology, Bilbao, Spain
  • N. Garmendia, L. Muguira
    ESS Bilbao, Bilbao, Spain
 
  Funding: The present work is supported by the Basque Government and Spanish Ministry of Science and Innovation.
This work presents a fast-acquisition multipurpose controller, focusing on its EPICS integration and on its XML-based configuration. The controller is based on a Lyrtech VHS-ADC board, which encloses an FPGA, connected to a host PC. The host acts as local controller and implements an IOC integrating the device in an EPICS network. These tasks have been performed using Java as the main tool to program the PC so that the device fits the desired application. The process involves several technologies: JNA to handle C functions (i.e. the FPGA API), JavaIOC to integrate EPICS, and the XML W3C DOM classes to easily configure the particular application. To manage these functions, Java-specific tools have been developed: methods to manage the FPGA (read/write registers, acquire data, …), methods to create and use the EPICS server (put, get, monitor, …), mathematical methods to process the data (numeric format conversions, …) and methods to create and initialize the application structure by means of an XML file (parse elements, build the DOM and the specific application structure). This XML file has some nodes and tags common to all applications, defining the FPGA register specifications and the EPICS variables; the user only has to add a node for the specific application and use the aforementioned tools. The main class developed is in charge of managing the FPGA and the EPICS server according to this XML file. This multipurpose controller has been successfully used to implement a BPM and an LLRF application for the ESS-Bilbao facility.
 
poster icon Poster WEPMN006 [0.559 MB]  
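The XML-driven configuration described in the abstract above can be illustrated with a small sketch. All tag names, register addresses and PV names below are invented for illustration (the paper does not give its actual schema); the sketch only shows the pattern of common nodes (FPGA registers, EPICS variables) plus one application-specific node, parsed into an application structure:

```python
import xml.etree.ElementTree as ET

# Hypothetical configuration file: common nodes (registers, EPICS PVs)
# shared by all applications, plus one application-specific node.
CONFIG = """
<controller>
  <registers>
    <register name="GAIN"   address="0x10" access="rw"/>
    <register name="STATUS" address="0x14" access="r"/>
  </registers>
  <epics>
    <pv name="BPM:X" type="double"/>
    <pv name="BPM:Y" type="double"/>
  </epics>
  <application name="BPM">
    <param name="decimation" value="64"/>
  </application>
</controller>
"""

def load_config(text):
    """Build the application structure from the XML, in the spirit of the
    main class that drives the FPGA and the EPICS server from one file."""
    root = ET.fromstring(text)
    registers = {r.get("name"): int(r.get("address"), 16)
                 for r in root.iter("register")}
    pvs = [pv.get("name") for pv in root.iter("pv")]
    app = root.find("application")
    params = {p.get("name"): p.get("value") for p in app.iter("param")}
    return registers, pvs, app.get("name"), params

registers, pvs, app_name, params = load_config(CONFIG)
print(app_name, registers, pvs, params)
```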
 
WEPMN009 Simplified Instrument/Application Development and System Integration Using Libera Base Software Framework software, framework, interface, controls 890
 
  • M. Kenda, T. Beltram, T. Juretič, B. Repič, D. Škvarč, C. Valentinčič
    I-Tech, Solkan, Slovenia
 
  The development of many appliances used in scientific environments forces us to face similar challenges, often repeatedly: hardware components have to be designed or integrated; support for network and other communication standards needs to be established; data and signals are processed and dispatched; interfaces are required to monitor and control the behaviour of the appliances. At Instrumentation Technologies we identified and addressed these issues by creating a generic framework composed of several reusable building blocks. They simplify some of the tedious tasks and leave more time to concentrate on the real issues of the application. Furthermore, the quality of the end product benefits from the larger common base of this middleware. We will present these benefits on the concrete example of an instrument implemented on the MTCA platform and accessible over a graphical user interface.  
poster icon Poster WEPMN009 [5.755 MB]  
 
WEPMN011 Controlling the EXCALIBUR Detector software, detector, simulation, controls 894
 
  • J.A. Thompson, I. Horswell, J. Marchal, U.K. Pedersen
    Diamond, Oxfordshire, United Kingdom
  • S.R. Burge, J.D. Lipp, T.C. Nicholls
    STFC/RAL, Chilton, Didcot, Oxon, United Kingdom
 
  EXCALIBUR is an advanced photon counting detector being designed and built by a collaboration of Diamond Light Source and the Science and Technology Facilities Council. It is based around 48 CERN Medipix III silicon detectors arranged as an 8x6 array. The main problem addressed by the design of the hardware and software is the uninterrupted collection and safe storage of image data at rates up to one hundred (2048x1536) frames per second. This is achieved by splitting the image into six 'stripes' and providing parallel data paths for them all the way from the detectors to the storage. This architecture requires the software to control the configuration of the stripes in a consistent manner and to keep track of the data so that the stripes can be subsequently stitched together into frames.  
poster icon Poster WEPMN011 [0.289 MB]  
 
WEPMN012 PC/104 Asyn Drivers at Jefferson Lab controls, interface, EPICS, operation 898
 
  • J. Yan, T.L. Allison, S.D. Witherspoon
    JLAB, Newport News, Virginia, USA
 
  Funding: Authored by Jefferson Science Associates, LLC under U.S. DOE Contract No. DE-AC05-06OR23177.
PC/104 embedded IOCs that run RTEMS and EPICS have been applied in many new projects at Jefferson Lab. Different commercial PC/104 I/O modules on the market, such as digital I/O, data acquisition and communication modules, are integrated in our control system. AsynDriver, a general facility for interfacing device-specific code to low-level drivers, was applied for the PC/104 serial communication I/O cards. We chose the ines GPIB-PC/104-XL as the GPIB interface module and developed a low-level device driver that is compatible with asynDriver. The ines GPIB-PC/104-XL has an iGPIB 72110 chip, which is register-compatible with the NEC uPD7210 in GPIB Talker/Listener applications. Instrument device support was created to provide access to the operating parameters of GPIB devices. A low-level device driver for the serial communication board Model 104-COM-8SM was also developed to run under asynDriver. This serial interface board contains eight independent ports and provides effective RS-485, RS-422 and RS-232 multipoint communication. StreamDevice protocols were applied for the serial communications. The asynDriver in a PC/104 IOC application provides a standard interface between the high-level device support and the hardware-level device drivers. This makes it easy to develop GPIB and serial communication applications in PC/104 IOCs.
 
 
WEPMN017 PCI Hardware Support in LIA-2 Control System controls, Linux, interface, operation 916
 
  • D. Bolkhovityanov, P.B. Cheblakov
    BINP SB RAS, Novosibirsk, Russia
 
  The LIA-2 control system* is built on cPCI crates with x86-compatible processor boards running Linux. Slow electronics is connected via CAN bus, while fast electronics (4 MHz and 200 MHz fast ADCs and 200 MHz timers) is implemented as cPCI/PMC modules. Several ways to drive PCI control electronics in Linux were examined; finally, a userspace-driver approach was chosen. These drivers communicate with the hardware via a small kernel module, which provides access to the PCI BARs and to interrupt handling. This module was named USPCI (User-Space PCI access). This approach dramatically simplifies the creation of drivers, as opposed to kernel drivers, and provides high reliability, because only a tiny and thoroughly debugged piece of code runs in the kernel. The LIA-2 accelerator was successfully commissioned, and the solution chosen has proven adequate and very easy to use. Besides, USPCI turned out to be a handy tool for examining and debugging PCI devices directly from the command line. In this paper the available approaches to working with PCI control hardware in Linux are considered, and the USPCI architecture is described.
* "LIA-2 Linear Induction Accelerator Control System", this conference
 
poster icon Poster WEPMN017 [0.954 MB]  
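As a rough illustration of the userspace-driver pattern described above, the sketch below maps a memory region and accesses 32-bit registers through it. A temporary file stands in for the PCI BAR that a module like USPCI would expose (on Linux the same mmap pattern works against /sys/bus/pci/devices/&lt;addr&gt;/resource0); the register offset and value are invented:

```python
import mmap
import os
import struct
import tempfile

BAR_SIZE = 4096
CSR_OFFSET = 0x20            # hypothetical control/status register offset

# A temporary file stands in for the BAR exposed by the kernel module.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"\x00" * BAR_SIZE)
    path = f.name

fd = os.open(path, os.O_RDWR)
bar = mmap.mmap(fd, BAR_SIZE)        # map the "BAR" into user space

def write_reg(offset, value):
    """32-bit little-endian register write through the mapping."""
    struct.pack_into("<I", bar, offset, value)

def read_reg(offset):
    """32-bit little-endian register read through the mapping."""
    return struct.unpack_from("<I", bar, offset)[0]

write_reg(CSR_OFFSET, 0xDEADBEEF)
val = read_reg(CSR_OFFSET)
print(hex(val))

bar.close()
os.close(fd)
os.unlink(path)
```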
 
WEPMN027 Fast Scalar Data Buffering Interface in Linux 2.6 Kernel Linux, interface, controls, instrumentation 943
 
  • A. Homs
    ESRF, Grenoble, France
 
  Key instrumentation devices like counter/timers, analog-to-digital converters and encoders provide scalar data input. Many of them allow fast acquisitions, but do not provide hardware triggering or buffering mechanisms. A Linux 2.4 kernel driver called Hook was developed at the ESRF as a generic software-triggered buffering interface. This work presents the port of the ESRF Hook interface to the Linux 2.6 kernel. The interface distinguishes two independent functional groups: trigger event generators and data channels. Devices in the first group create software events, like hardware interrupts generated by timers or external signals. On each event, one or more device channels in the second group are read and stored in kernel buffers. The event generators and the data channels to be read are fully configurable before each sequence. Designed for fast acquisitions, the Hook implementation is well adapted to multi-CPU systems, where the interrupt latency is notably reduced. On heavily loaded dual-core PCs running standard (non-real-time) Linux, data can be taken at 1 kHz without losing events. Additional features include full integration into the sysfs (/sys) virtual filesystem and hotplug device support.  
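The Hook model of trigger event generators and data channels can be sketched in a few lines. Channels here are plain Python callables and the loop stands in for timer interrupts or external signals; the real interface reads hardware scalers in kernel space:

```python
import itertools

class Sequence:
    """On each trigger event, read every configured data channel and
    append one row of scalars to the buffer (a kernel-buffer stand-in)."""

    def __init__(self, channels):
        self.channels = channels       # configured before the sequence starts
        self.buffer = []

    def on_trigger(self):
        self.buffer.append([read() for read in self.channels])

counter = itertools.count()            # stand-in for a hardware counter
seq = Sequence([lambda: next(counter), lambda: 42])

for _ in range(5):                     # stand-in for five trigger events
    seq.on_trigger()

print(seq.buffer)
```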
 
WEPMN037 DEBROS: Design and Use of a Linux-like RTOS on an Inexpensive 8-bit Single Board Computer Linux, network, interface, software 965
 
  • M.A. Davis
    NSCL, East Lansing, Michigan, USA
 
  As the power, complexity and capabilities of embedded processors continue to grow, it is easy to forget just how much can be done with inexpensive single-board computers based on 8-bit processors. When the proprietary, non-standard tools from the vendor of one such embedded computer became a major roadblock, I embarked on a project to expand my own knowledge and provide a more flexible, standards-based alternative. Inspired by operating systems such as Unix, Linux, and Minix, I wrote DEBROS (the Davis Embedded Baby Real-time Operating System) [1], a fully pre-emptive, priority-based OS with soft real-time capabilities that provides a subset of standard Linux/Unix-compatible system calls such as stdio, BSD sockets, pipes, semaphores, etc. The end result was a much more flexible, standards-based development environment which allowed me to simplify my programming model, expand diagnostic capabilities, and reduce the time spent monitoring and applying updates to the hundreds of devices in the lab currently using this hardware. [2]
[1] http://groups.nscl.msu.edu/controls/files/DEBROS_User_Developer_Manual.doc
[2] http://groups.nscl.msu.edu/controls/
 
poster icon Poster WEPMN037 [0.112 MB]  
 
WEPMS001 Interconnection Test Framework for the CMS Level-1 Trigger System framework, operation, distributed, controls 973
 
  • J. Hammer
    CERN, Geneva, Switzerland
  • M. Magrans de Abril
    UW-Madison/PD, Madison, Wisconsin, USA
  • C.-E. Wulz
    HEPHY, Wien, Austria
 
  The Level-1 Trigger Control and Monitoring System is a software package designed to configure, monitor and test the Level-1 Trigger System of the Compact Muon Solenoid (CMS) experiment at CERN's Large Hadron Collider. It is a large, distributed system that runs on over 50 PCs and controls about 200 hardware units. The Interconnection Test Framework (ITF), a generic and highly flexible framework for creating and executing hardware tests within the Level-1 Trigger environment, is presented. The framework is designed to automate the testing of the 13 major subsystems interconnected by more than 1000 links. Features include a web interface to create and execute tests, modeling using finite state machines, dependency management, automatic configuration, and loops. Furthermore, the ITF will replace the existing heterogeneous testing procedures and help reduce the maintenance and complexity of operation tasks. Finally, an example of operational use of the Interconnection Test Framework is presented. This case study proves the concept and describes the customization process and its performance characteristics.  
poster icon Poster WEPMS001 [0.576 MB]  
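The ITF's finite-state-machine modeling and dependency management might be sketched as follows. The states, events and test names are invented for illustration; the point is only that each test is an FSM and that a test runs only when the tests it depends on have passed:

```python
class Test:
    """One hardware test modeled as a finite state machine."""
    TRANSITIONS = {("created", "configure"): "configured",
                   ("configured", "run"): "running",
                   ("running", "pass"): "passed",
                   ("running", "fail"): "failed"}

    def __init__(self, name, depends_on=()):
        self.name, self.depends_on, self.state = name, depends_on, "created"

    def fire(self, event):
        self.state = self.TRANSITIONS[(self.state, event)]

def execute(tests):
    """Run tests whose dependencies passed; leave the others untouched.
    The list is assumed to be topologically ordered."""
    done = {}
    for t in tests:
        if all(done.get(d) == "passed" for d in t.depends_on):
            t.fire("configure")
            t.fire("run")
            t.fire("pass")             # a real runner would pass or fail here
        done[t.name] = t.state
    return done

link = Test("link-integrity")
pattern = Test("pattern-injection", depends_on=("link-integrity",))
result = execute([link, pattern])
print(result)
```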
 
WEPMS003 A Testbed for Validating the LHC Controls System Core Before Deployment controls, software, operation, timing 977
 
  • J. Nguyen Xuan, V. Baggiolini
    CERN, Geneva, Switzerland
 
Since the start-up of the LHC, it has been crucial to carefully test core controls components before deploying them operationally. The Testbed of the CERN accelerator controls group was developed for this purpose. It contains different hardware (PPC, i386) running different operating systems (Linux and LynxOS), and core software components running on front-ends, communication middleware and client libraries. The Testbed first executes integration tests to verify that the components delivered by individual teams interoperate, and then system tests, which verify high-level, end-user functionality. It also verifies that different versions of components are compatible, which is vital because not all parts of the operational LHC control system can be upgraded simultaneously. In addition, the Testbed can be used for performance and stress tests. Internally, the Testbed is driven by Bamboo, a continuous integration server, which automatically builds and deploys new software versions into the Testbed environment and executes the tests continuously to prevent software regressions. Whenever a test fails, an e-mail is sent to the appropriate persons. The Testbed is part of the official controls development process, wherein new releases of the controls system have to be validated before being deployed operationally. Integration and system tests are an important complement to the unit tests previously executed by the teams. The Testbed has already caught several bugs that were not discovered by the unit tests of the individual components.
* http://cern.ch/jnguyenx/ControlsTestBed.html
 
poster icon Poster WEPMS003 [0.111 MB]  
 
WEPMS008 Software Tools for Electrical Quality Assurance in the LHC database, software, LabView, operation 993
 
  • M. Bednarek
    CERN, Geneva, Switzerland
  • J. Ludwin
    IFJ-PAN, Kraków, Poland
 
  There are over 1600 superconducting magnet circuits in the LHC machine. Many of them consist of a large number of components electrically connected in series, which enhances the sensitivity of the whole circuit to electrical faults of individual components. Furthermore, the circuits are equipped with a large number of instrumentation wires, which are exposed to accidental damage or swapping. In order to ensure safe operation, an Electrical Quality Assurance (ELQA) campaign is needed after each thermal cycle. Due to the complexity of the circuits, as well as their wide geographical distribution (a tunnel of 27 km circumference divided into 8 sectors), suitable software and hardware platforms had to be developed. The software combines an Oracle database, LabVIEW data acquisition applications and PHP-based web follow-up tools. This paper describes the software used for the ELQA of the LHC.  
poster icon Poster WEPMS008 [8.781 MB]  
 
WEPMS017 The Global Trigger Processor: A VXS Switch Module for Triggering Large Scale Data Acquisition Systems FPGA, Ethernet, interface, embedded 1010
 
  • S.R. Kaneta, C. Cuevas, H. Dong, W. Gu, E. Jastrzembski, N. Nganga, B.J. Raydo, J. Wilson
    JLAB, Newport News, Virginia, USA
 
  Funding: Jefferson Science Associates, LLC under U.S. DOE Contract No. DE-AC05-06OR23177.
The 12 GeV upgrade for Jefferson Lab's Continuous Electron Beam Accelerator Facility requires the development of a new data acquisition system to accommodate the proposed 200 kHz Level 1 trigger rates expected for fixed-target experiments at 12 GeV. As part of a suite of trigger electronics comprising VXS switch and payload modules, the Global Trigger Processor (GTP) will handle up to 32,768 channels of preprocessed trigger information from the multiple detector systems that surround the beam target, at a system clock rate of 250 MHz. The GTP is configured with user-programmable physics trigger equations; when the trigger conditions are satisfied, the GTP activates the storage of data for subsequent analysis. The GTP features an Altera Stratix IV GX FPGA allowing interfacing to 16 Sub-System Processor modules via 32 5-Gbps links, DDR2 and flash memory devices, two gigabit Ethernet interfaces using Nios II embedded processors, fiber optic transceivers, and trigger output signals. The GTP's high-bandwidth interconnect with the payload modules in the VXS crate, its Ethernet interface for parameter control, status monitoring and remote update, and the inherent nature of its FPGA give it the flexibility to be used for a large variety of tasks and to adapt to future needs. This paper details the responsibilities of the GTP, the hardware's role in meeting those requirements, and the elements of the VXS architecture that facilitated the design of the trigger system. Also presented will be the current status of development, including significant milestones and challenges.
 
poster icon Poster WEPMS017 [0.851 MB]  
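A user-programmable trigger equation of the kind the GTP evaluates can be sketched in software. The bit assignments and the equation itself are invented for illustration; in the FPGA such a Boolean combination of trigger bits is evaluated every clock cycle:

```python
# Hypothetical trigger-bit positions in the preprocessed trigger word.
EC_HIT, FCAL_HIT, TOF_HIT = 0, 1, 2

def trigger(word):
    """Fire if (EC and TOF) or FCAL - the kind of Boolean equation an
    FPGA applies to the trigger bits in a single clock cycle."""
    ec = (word >> EC_HIT) & 1
    fcal = (word >> FCAL_HIT) & 1
    tof = (word >> TOF_HIT) & 1
    return bool((ec & tof) | fcal)

samples = [0b000, 0b001, 0b010, 0b101]   # four example trigger words
fired = [trigger(w) for w in samples]
print(fired)
```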
 
WEPMS028 Online Evaluation of New DBPM Processors at SINAP injection, betatron, electronics, feedback 1041
 
  • Y.B. Leng, G.Q. Huang, L.W. Lai, Y.B. Yan, X. Yi
    SSRF, Shanghai, People's Republic of China
 
  In this paper we report online evaluation results for the new digital BPM signal processors developed for the SSRF and the new Shanghai SXFEL facility. Two major prototypes have been evaluated. The first, an algorithm-evaluation prototype, is built from commercial development toolkit modules in order to test various digital processing blocks. The second prototype is designed and fabricated at chip level in order to evaluate the hardware performance of the different functional modules and of the assembled processor.  
poster icon Poster WEPMS028 [0.546 MB]  
 
WEPMU002 Testing Digital Electronic Protection Systems LabView, software, FPGA, controls 1047
 
  • A. Garcia Muñoz, S. Gabourin
    CERN, Geneva, Switzerland
 
  The Safe Machine Parameters Controller (SMPC) ensures the correct configuration of the LHC machine protection system, and that safe injection conditions are maintained throughout the filling of the LHC machine. The SMPC receives information in real-time from measurement electronics installed throughout the LHC and SPS accelerators, determines the state of the machine, and informs the SPS and LHC machine protection systems of these conditions. This paper outlines the core concepts and realization of the SMPC test-bench, based on a VME crate and a LabVIEW program. Its main goal is to ensure the correct functioning of the SMPC for the protection of the CERN accelerator complex. To achieve this, the tester has been built to replicate the machine environment and operation, in order to ensure that the chassis under test is completely exercised. The complexity of the task increases with the number of input combinations, which in the case of the SMPC exceeds 2^364. This paper also outlines the benefits and weaknesses of developing a test suite independently of the hardware being tested, using the "V" approach.  
poster icon Poster WEPMU002 [0.763 MB]  
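The scale of that input space can be made concrete with a short sketch, contrasting an exhaustive sweep of a small checker with sparse random sampling of a 364-bit one. The safety predicate below is a toy stand-in for the chassis under test:

```python
import itertools
import random

def safe_injection(flags):
    """Toy safety predicate: permit injection only if every interlock
    flag is clear. A stand-in for the chassis under test."""
    return not any(flags)

# A small checker can be swept exhaustively: all 2**8 combinations.
N_SMALL = 8
exhaustive = sum(1 for c in itertools.product([0, 1], repeat=N_SMALL)
                 if safe_injection(c))

# At 364 inputs the 2**364 combinations can only be sampled sparsely.
N_LARGE = 364
random.seed(0)
sampled = [safe_injection([random.randint(0, 1) for _ in range(N_LARGE)])
           for _ in range(1000)]

print(exhaustive, sum(sampled))
```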
 
WEPMU008 Access Safety Systems – New Concepts from the LHC Experience controls, operation, injection, site 1066
 
  • T. Ladzinski, Ch. Delamare, S. Di Luca, T. Hakulinen, L. Hammouti, F. Havart, J.-F. Juget, P. Ninin, R. Nunes, T.R. Riesco, E. Sanchez-Corral Mena, F. Valentini
    CERN, Geneva, Switzerland
 
  The LHC Access Safety System has introduced a number of new concepts into the domain of personnel protection at CERN. These can be grouped into several categories: organisational, architectural, and concerning the end-user experience. By anchoring the project on the solid foundations of the IEC 61508/61511 methodology, the CERN team and its contractors managed to design, develop, test and commission a SIL3 safety system on time. The system uses a successful combination of the latest Siemens redundant safety programmable logic controllers with a traditional hardwired relay-logic loop. The external envelope barriers used in the LHC include personnel and material access devices: interlocked door-booths introducing increased automation of individual access control, thus removing the strain from the operators. These devices ensure the inviolability of the controlled zones by users not holding the required credentials. To this end they are equipped with personnel presence detectors, and the access control includes a state-of-the-art biometry check. Building on the LHC experience, new projects targeting the refurbishment of the existing access safety infrastructure in the injector chain have started. This paper summarises the new concepts introduced in the LHC access control and safety systems, discusses the experience gained, and outlines the main guiding principles for renewing the personnel protection systems of the LHC injector chain in a homogeneous manner.  
poster icon Poster WEPMU008 [1.039 MB]  
 
WEPMU010 Automatic Analysis at the Commissioning of the LHC Superconducting Electrical Circuits operation, framework, GUI, status 1073
 
  • H. Reymond, O.O. Andreassen, C. Charrondière, A. Rijllart, M. Zerlauth
    CERN, Geneva, Switzerland
 
  Since the beginning of 2010 the LHC has been operating routinely, starting with a commissioning phase followed by an operation-for-physics phase. The commissioning of the superconducting electrical circuits requires rigorous test procedures before they enter operation. To maximize the beam operation time of the LHC, these tests should be done as fast as the procedures allow. A full commissioning needs 12000 tests and is required after circuits have been warmed above liquid nitrogen temperature; below this temperature, after an end-of-year break of two months, commissioning needs about 6000 tests. Because the manual analysis of the tests takes a major part of the commissioning time, we have automated the existing analysis tools. We present the way in which these LabVIEW™ applications were automated, evaluate the gain in commissioning time and the reduction of experts on night shift observed during the LHC hardware commissioning campaign of 2011 compared to 2010, and end with an outlook on what can be further optimized.  
poster icon Poster WEPMU010 [3.124 MB]  
 
WEPMU015 The Machine Protection System for the R&D Energy Recovery LINAC FPGA, LabView, software, interface 1087
 
  • Z. Altinbas, J.P. Jamilkowski, D. Kayran, R.C. Lee, B. Oerter
    BNL, Upton, Long Island, New York, USA
 
  Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy.
The Machine Protection System (MPS) is a device-safety system designed to prevent damage to hardware by generating interlocks based upon the state of input signals from selected sub-systems. It protects all key machinery of the R&D Energy Recovery LINAC (ERL) project against the high beam current. The MPS is capable of responding to a fault with an interlock signal within several microseconds. The ERL MPS is based on a National Instruments CompactRIO platform and is programmed using LabVIEW, National Instruments' graphical programming environment. The system also transfers data (interlock status, time of fault, etc.) to the main server, where it is integrated into the pre-existing software architecture accessible to the operators. This paper provides an overview of the hardware used, its configuration and operation, as well as the software written both on the device and on the server side.
 
poster icon Poster WEPMU015 [17.019 MB]  
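The interlock pattern the abstract describes — latch the first fault with its timestamp, drop the beam permit, and report the status to the server — can be sketched as follows. The input names and latching rule are illustrative only; the real MPS runs on CompactRIO hardware and reacts within microseconds:

```python
import time

class InterlockLatch:
    """Toy sketch of a machine-protection interlock latch.

    Input names and the fault rule are invented for illustration;
    they are not taken from the ERL MPS design.
    """

    def __init__(self, inputs):
        self.inputs = dict.fromkeys(inputs, True)  # True = healthy
        self.faulted = False
        self.fault_time = None
        self.fault_source = None

    def update(self, name, healthy):
        self.inputs[name] = healthy
        if not healthy and not self.faulted:
            # Latch the first fault with its timestamp for later reporting.
            self.faulted = True
            self.fault_time = time.time()
            self.fault_source = name

    def beam_permit(self):
        # Beam is permitted only while no fault has been latched.
        return not self.faulted

mps = InterlockLatch(["rf_ok", "vacuum_ok", "beam_loss_ok"])
mps.update("beam_loss_ok", False)
print(mps.beam_permit(), mps.fault_source)  # False beam_loss_ok
```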
 
WEPMU019 First Operational Experience with the LHC Beam Dump Trigger Synchronisation Unit software, embedded, monitoring, operation 1100
 
  • A. Antoine, C. Boucly, P. Juteau, N. Magnin, N. Voumard
    CERN, Geneva, Switzerland
 
  Two LHC Beam Dumping Systems (LBDS) remove the counter-rotating beams safely from the collider during setting up of the accelerator, at the end of a physics run, and in case of emergencies. Dump requests can come from three different sources: the machine protection system in emergencies, the machine timing system for scheduled dumps, or the LBDS itself in case of internal failures. These requests are synchronised with the 3 μs beam abort gap in a fail-safe, redundant Trigger Synchronisation Unit (TSU) based on Digital Phase-Locked Loops (DPLL), locked onto the LHC beam revolution frequency with a maximum phase error of 40 ns. The synchronised trigger pulses coming out of the TSU are then distributed to the high-voltage generators of the beam dump kickers through a redundant, fault-tolerant trigger distribution system. This paper describes the operational experience gained with the TSU since its commissioning with beam in 2009 and highlights the improvements implemented for safer operation: expanded diagnostics and monitoring functionality, a more automated validation of the hardware and embedded firmware before deployment, and a post-operational analysis of TSU performance after each dump action. In the light of this first experience, the outcome of the external review performed in 2010 is presented, and the lessons learnt on the project life-cycle for the design of mission-critical electronic modules are discussed.  
poster icon Poster WEPMU019 [1.220 MB]  
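The synchronisation scheme rests on a digital phase-locked loop tracking the beam revolution frequency. A minimal software sketch of such a loop is shown below; the loop gains, the 1 MHz update rate, and the use of the nominal ~11.245 kHz LHC revolution frequency are assumptions for illustration, since the real TSU is a hardware implementation holding the phase error below 40 ns:

```python
# Toy digital phase-locked loop: a proportional-integral loop steers a
# local oscillator (phase in turns) onto a reference frequency.

F_REV = 11_245.0          # Hz, nominal LHC revolution frequency
DT = 1.0 / 1_000_000      # s, loop update period (assumed)

def run_dpll(ref_freq, steps, kp=0.2, ki=0.01):
    phase, freq = 0.0, F_REV       # local oscillator state
    ref_phase = 0.25               # reference starts with a phase offset
    for _ in range(steps):
        ref_phase = (ref_phase + ref_freq * DT) % 1.0
        phase = (phase + freq * DT) % 1.0
        err = ((ref_phase - phase + 0.5) % 1.0) - 0.5  # wrapped phase error
        freq += ki * err / DT            # integral branch tracks frequency
        phase = (phase + kp * err) % 1.0  # proportional branch pulls phase
    return err

# Even with a small frequency offset the integrator drives the residual
# phase error towards zero.
residual = run_dpll(F_REV * 1.0001, 200_000)
print(abs(residual) < 1e-4)
```

The integral branch removes the steady-state error that a purely proportional loop would leave in the presence of a frequency offset.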
 
WEPMU031 Virtualization in Control System Environment controls, EPICS, network, operation 1138
 
  • L.R. Shen, D.K. Liu, T. Wan
    SINAP, Shanghai, People's Republic of China
 
  In a large-scale distributed control system, many common services compose the environment of the entire control system, such as servers for the common software base library, application servers, archive servers, and so on. This paper describes a virtualization of a control system environment, covering the server, storage, network, and application layers. With a virtualized instance of the EPICS-based control system environment built on VMware vSphere v4, we tested the full functionality of this environment in the SSRF control system, including the common NFS, NIS, NTP and boot servers and the EPICS base and extension library tools. We also virtualized application servers such as the archiver, the alarm server, the EPICS gateway, and all of the network-based IOCs. In particular, we successfully tested high availability (HA) and VMotion for EPICS asynchronous IOCs under the different VLAN configurations of the current SSRF control system network.  
 
WEPMU037 Virtualization for the LHCb Experiment network, controls, experiment, Linux 1157
 
  • E. Bonaccorsi, L. Brarda, M. Chebbi, N. Neufeld
    CERN, Geneva, Switzerland
  • F. Sborzacchi
    INFN/LNF, Frascati (Roma), Italy
 
  The LHCb Experiment, one of the four large particle physics detectors at CERN, counts more than 2000 servers and embedded systems in its Online System. As a result of ever-increasing CPU performance in modern servers, many of the applications in the controls system are excellent candidates for virtualization technologies. We see virtualization as an approach to cut down cost, optimize resource usage and manage the complexity of the IT infrastructure of LHCb. Recently we have added a Kernel-based Virtual Machine (KVM) cluster based on Red Hat Enterprise Virtualization for Servers (RHEV), complementary to the existing Hyper-V cluster devoted solely to the virtualization of Windows guests. This paper describes the architecture of our solution based on KVM and RHEV, its integration with the existing Hyper-V infrastructure and the Quattor cluster management tools, and in particular how we run controls applications on a virtualized infrastructure. We present performance results of both the KVM and Hyper-V solutions, the problems encountered, and a description of the management tools developed for the integration with the Online cluster and the PVSS-based LHCb SCADA control system.  
 
THAAUST01 Tailoring the Hardware to Your Control System controls, EPICS, FPGA, interface 1171
 
  • E. Björklund, S.A. Baily
    LANL, Los Alamos, New Mexico, USA
 
  Funding: Work supported by the US Department of Energy under contract DE-AC52-06NA25396
In the very early days of computerized accelerator control systems the entire control system, from the operator interface to the front-end data acquisition hardware, was custom designed and built for that one machine. This was expensive, but the resulting product was a control system seamlessly integrated (mostly) with the machine it was to control. Later, the advent of standardized bus systems such as CAMAC, VME, and CANBUS, made it practical and attractive to purchase commercially available data acquisition and control hardware. This greatly simplified the design but required that the control system be tailored to accommodate the features and eccentricities of the available hardware. Today we have standardized control systems (Tango, EPICS, DOOCS) using commercial hardware on standardized busses. With the advent of FPGA technology and programmable automation controllers (PACs & PLCs) it now becomes possible to tailor commercial hardware to the needs of a standardized control system and the target machine. In this paper, we will discuss our experiences with tailoring a commercial industrial I/O system to meet the needs of the EPICS control system and the LANSCE accelerator. We took the National Instruments Compact RIO platform, embedded an EPICS IOC in its processor, and used its FPGA backplane to create a "standardized" industrial I/O system (analog in/out, binary in/out, counters, and stepper motors) that meets the specific needs of the LANSCE accelerator.
 
slides icon Slides THAAUST01 [0.812 MB]  
 
THBHAUST03 Purpose and Benefit of Control System Training for Operators controls, EPICS, status, background 1186
 
  • E. Zimoch, A. Lüdeke
    Paul Scherrer Institut, 5232 Villigen PSI, Switzerland
 
  The complexity of accelerators is ever increasing and today it is typical that a large number of feedback loops are implemented, based on sophisticated models which describe the underlying physics. Despite this increased complexity, the machine operators must still effectively monitor and supervise the desired behaviour of the accelerator. This alone is not sufficient; additionally, the correct operation of the control system itself must be verified. This is not always easy, since the structure, design, and performance of the control system are usually not visualized and are often hidden from the operator. To better deal with this situation, operators need some knowledge of the control system in order to react properly when problems occur. In this paper we present the Paul Scherrer Institute's approach to operator control system training and discuss its benefits.  
slides icon Slides THBHAUST03 [4.407 MB]  
 
THBHMUST03 System Design towards Higher Availability for Large Distributed Control Systems controls, network, operation, neutron 1209
 
  • S.M. Hartman
    ORNL, Oak Ridge, Tennessee, USA
 
  Funding: SNS is managed by UT-Battelle, LLC, under contract DE-AC05-00OR22725 for the U.S. Department of Energy
Large distributed control systems for particle accelerators present a complex system engineering challenge. The system, with its significant quantity of components and their complex interactions, must be able to support reliable accelerator operations while providing the flexibility to accommodate changing requirements. System design and architecture focused on required data flow are key to ensuring high control system availability. Using examples from the operational experience of the Spallation Neutron Source at Oak Ridge National Laboratory, recommendations will be presented for leveraging current technologies to design systems for high availability in future large scale projects.
 
slides icon Slides THBHMUST03 [7.833 MB]  
 
THCHMUST01 Control System for Cryogenic THD Layering at the National Ignition Facility target, cryogenics, controls, laser 1236
 
  • M.A. Fedorov, O.D. Edwards, E.A. Mapoles, J. Mauvais, T.G. Parham, R.J. Sanchez, J.M. Sater, B.A. Wilson
    LLNL, Livermore, California, USA
 
  Funding: This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
The National Ignition Facility (NIF) is the world's largest and most energetic laser system for Inertial Confinement Fusion (ICF). In 2010, NIF began ignition experiments using cryogenically cooled targets containing layers of the tritium-hydrogen-deuterium (THD) fuel. The 75 μm thick layer is formed inside the 2 mm target capsule at temperatures of approximately 18 K. The ICF target designs require sub-micron smoothness of the THD ice layers. Formation of such layers is still an active research area, requiring a flexible control system capable of executing the evolving layering protocols. This task is performed by the Cryogenic Target Subsystem (CTS) of the NIF Integrated Computer Control System (ICCS). The CTS provides cryogenic temperature control with the 1 mK resolution required for beta layering and for the thermal gradient fill of the capsule. The CTS also includes a 3-axis x-ray radiography engine for phase contrast imaging of the ice layers inside the plastic and beryllium capsules. In addition to automatic control engines, the CTS is integrated with the Matlab interactive programming environment to allow flexibility in experimental layering protocols. The CTS Layering Matlab Toolbox provides the tools for layer image analysis, system characterization and cryogenic control. The CTS Layering Report tool generates qualification metrics of the layers, such as concentricity of the layer and roughness of the growth boundary grooves. The CTS activities are automatically coordinated with other NIF controls in the carefully orchestrated NIF Shot Sequence.
LLNL-CONF-477418
 
slides icon Slides THCHMUST01 [8.058 MB]  
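To illustrate the kind of closed-loop temperature regulation the abstract mentions, here is a toy discrete PI loop; the gains, first-order plant model and numbers are invented for the sketch and are not taken from the NIF CTS:

```python
# Toy proportional-integral temperature loop settling towards a
# cryogenic setpoint. All constants are illustrative assumptions.

def pi_loop(setpoint, t0, steps, kp=0.4, ki=0.05):
    temp, integral = t0, 0.0
    for _ in range(steps):
        error = setpoint - temp
        integral += error                 # accumulated error for the I term
        heater = kp * error + ki * integral
        temp += 0.1 * heater              # toy first-order plant response
    return temp

final = pi_loop(18.000, 18.050, 400)
print(abs(final - 18.000) < 0.001)        # settles to within 1 mK
```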
 
THCHMUST02 Control and Test Software for IRAM Widex Correlator real-time, software, Linux, simulation 1240
 
  • S. Blanchet, D. Broguiere, P. Chavatte, F. Morel, A. Perrigouard, M. Torres
    IRAM, Saint Martin d'Heres, France
 
  IRAM is an international research institute for radio astronomy. It has designed a new correlator, called WideX, for the Plateau de Bure interferometer (an array of six 15-meter telescopes) in the French Alps. The device entered official service in February 2010. The correlator must be driven in real time at 32 Hz for sending parameters and for data acquisition. With 3.67 million channels distributed over 1792 dedicated chips, producing a 1.87 Gbit/s output data rate, data acquisition and processing, as well as automatic hardware-failure detection, are big challenges for the software. This article presents the software that has been developed to drive and test the correlator. In particular it presents an innovative use of a high-speed optical link, initially developed for the CERN ALICE experiment, together with real-time Linux (RTAI), to achieve our goals.  
slides icon Slides THCHMUST02 [2.272 MB]  
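The quoted figures are mutually consistent: 3.67 million channels read out at 32 Hz imply roughly 16 bits per channel to reach the stated 1.87 Gbit/s. A quick sanity check (the 16-bit word size is an inference, not stated in the abstract):

```python
# Back-of-the-envelope check of the correlator output data rate.
channels = 3.67e6        # channels, from the abstract
rate_hz = 32             # readout rate, from the abstract
bits_per_channel = 16    # assumed word size consistent with the quoted rate

gbits_per_s = channels * rate_hz * bits_per_channel / 1e9
print(round(gbits_per_s, 2))  # 1.88, close to the quoted 1.87 Gbit/s
```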
 
THCHMUST05 The Case for Soft-CPUs in Accelerator Control Systems FPGA, software, controls, Linux 1252
 
  • W.W. Terpstra
    GSI, Darmstadt, Germany
 
  The steady improvements in Field Programmable Gate Array (FPGA) performance, size, and cost have driven their ever-increasing use in science and industry. As FPGA sizes continue to increase, more and more devices and logic are moved from external chips to FPGAs. For simple hardware devices, the savings in board area and ASIC manufacturing setup are compelling. For more dynamic logic, the trade-off is not always as clear. Traditionally, this has been the domain of CPUs and software programming languages. In hardware designs already including an FPGA, it is tempting to remove the CPU and implement all logic in the FPGA, saving component costs and increasing performance. However, that logic must then be implemented in the more constraining hardware description languages, cannot be as easily debugged or traced, and typically requires significant FPGA area. For performance-critical tasks this trade-off can make sense. However, for the myriad slower and dynamic tasks, software programming languages remain the better choice. One great benefit of a CPU is that it can perform many tasks. Thus, by including a small "Soft-CPU" inside the FPGA, all of the slower tasks can be aggregated into a single component. These tasks may then re-use existing software libraries, debugging techniques, and device drivers, while retaining ready access to the FPGA's internals. This paper discusses requirements for using Soft-CPUs in this niche, especially for the FAIR project. Several open-source alternatives will be compared and recommendations made for the best way to leverage a hybrid design.  
slides icon Slides THCHMUST05 [0.446 MB]  
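The aggregation argument — one soft CPU time-sharing the many slow tasks that would otherwise each cost dedicated FPGA area — can be illustrated with a toy cooperative round-robin scheduler; the task names are invented:

```python
# Toy round-robin scheduler: several slow housekeeping tasks share one
# CPU instead of each consuming dedicated logic.
from collections import deque

def make_task(name, log):
    def task(tick):
        log.append((tick, name))   # stand-in for polling an FPGA register
    return task

def run_scheduler(names, ticks):
    log = []
    queue = deque(make_task(n, log) for n in names)
    for tick in range(ticks):
        task = queue.popleft()     # round-robin: one slow task per tick
        task(tick)
        queue.append(task)
    return log

log = run_scheduler(["temperature", "fan", "leds", "diagnostics"], 8)
print(log[0], log[4])  # (0, 'temperature') (4, 'temperature')
```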
 
THDAULT01 Modern System Architectures in Embedded Systems embedded, controls, FPGA, software 1260
 
  • T. Korhonen
    PSI, Villigen, Switzerland
 
  Several new technologies are making their way into embedded systems. In addition to FPGA technology, which has become commonplace, multicore CPUs and I/O virtualization (among others) are being introduced to embedded systems. In this paper we present our ideas and studies on how to take advantage of these features in control systems. Some application examples involving CPU partitioning, virtualized I/O, and so on are discussed, along with some benchmarks.  
slides icon Slides THDAULT01 [1.426 MB]  
 
THDAUST02 An Erlang-Based Front End Framework for Accelerator Controls framework, controls, interface, data-acquisition 1264
 
  • D.J. Nicklaus, C.I. Briegel, J.D. Firebaugh, CA. King, R. Neswold, R. Rechenmacher, J. You
    Fermilab, Batavia, USA
 
  We have developed a new front-end framework for the ACNET control system in Erlang, a functional programming language developed for real-time telecommunications applications. The primary task of the front-end software is to connect the control system with drivers collecting data from individual field-bus devices. Erlang's concurrency and message-passing support have proven well suited to managing large numbers of independent ACNET client requests for front-end data. Other Erlang features which make it particularly well suited for a front-end framework include fault tolerance with process monitoring and restarting, real-time response, and the ability to change code in running systems. Erlang's interactive shell and dynamic typing make writing and running unit tests an easy part of the development process. Erlang includes mechanisms for distributing applications, which we will use for deploying our framework to multiple front-ends along with a configured set of device drivers. We have developed Erlang code to use Fermilab's TCLK event distribution clock, and Erlang's interface to C/C++ allows hardware-specific driver access.  
slides icon Slides THDAUST02 [1.439 MB]  
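The supervision idea credited to Erlang/OTP — monitor a worker, restart it when it crashes, and keep serving the remaining requests — can be sketched in Python for illustration (the actual framework uses Erlang processes and OTP supervisors; the device names here are invented):

```python
# Toy supervisor: run a worker over a queue of requests, restart it on
# a crash, and give up only after too many restarts.

def supervise(worker, requests, max_restarts=3):
    results, restarts = [], 0
    pending = list(requests)
    while pending:
        req = pending.pop(0)
        try:
            results.append(worker(req))
        except Exception:
            restarts += 1            # worker "crashed": restart it,
            if restarts > max_restarts:
                break                # unless the restart budget is spent
            # the failing request is dropped; remaining ones still run
    return results, restarts

def reading(device):
    if device == "bad-device":
        raise RuntimeError("bus timeout")
    return (device, 42)              # stand-in for a field-bus read

results, restarts = supervise(reading, ["m1", "bad-device", "m2"])
print(results, restarts)  # [('m1', 42), ('m2', 42)] 1
```

The point of the pattern is isolation: one misbehaving device does not take down the requests for the others.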
 
THDAULT06 MARTe Framework: a Middleware for Real-time Applications Development real-time, controls, framework, Linux 1277
 
  • A. Neto, D. Alves, B. Carvalho, P.J. Carvalho, H. Fernandes, D.F. Valcárcel
    IPFN, Lisbon, Portugal
  • A. Barbalace, G. Manduchi
    Consorzio RFX, Associazione Euratom-ENEA sulla Fusione, Padova, Italy
  • L. Boncagni
    ENEA C.R. Frascati, Frascati (Roma), Italy
  • G. De Tommasi
    CREATE, Napoli, Italy
  • P. McCullen, A.V. Stephen
    CCFE, Abingdon, Oxon, United Kingdom
  • F. Sartori
    F4E, Barcelona, Spain
  • R. Vitelli
    Università di Roma II Tor Vergata, Roma, Italy
  • L. Zabeo
    ITER Organization, St. Paul lez Durance, France
 
  Funding: This work was supported by the European Communities under the contract of Association between EURATOM/IST and was carried out within the framework of the European Fusion Development Agreement
The Multi-threaded Application Real-Time executor (MARTe) is a C++ framework that provides a development environment for the design and deployment of real-time applications, e.g. control systems. The kernel of MARTe comprises a set of data-driven independent blocks, connected using a shared bus. This modular design enforces a clear boundary between algorithms, hardware interaction and system configuration. The architecture, being multi-platform, facilitates the test and commissioning of new systems, enabling the execution of plant models in offline environments and with hardware in the loop, whilst also providing a set of non-intrusive introspection and logging facilities. Furthermore, applications can be developed in non-real-time environments and deployed in a real-time operating system, using exactly the same code and configuration data. The framework is already being used in several fusion experiments, with control cycles ranging from 50 microseconds to 10 milliseconds and jitters of less than 2%, using VxWorks, RTAI or Linux. Code can also be developed and executed on Microsoft Windows, Solaris and Mac OS X. This paper discusses the main design concepts of MARTe, in particular the architectural choices which enabled the combination of real-time accuracy, performance and robustness with complex and modular data-driven applications.
 
slides icon Slides THDAULT06 [1.535 MB]  
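The data-driven design described — independent blocks exchanging signals over a shared bus, keeping algorithms, hardware access and configuration apart — can be caricatured as follows; the block and signal names are invented for the example:

```python
# Toy shared-bus pipeline in the spirit of data-driven block designs:
# each block only reads and writes named signals on the bus.

class Bus(dict):
    """Shared signal bus; a dict keyed by signal name."""

def sensor_block(bus):
    bus["measurement"] = 1.5            # stand-in for a hardware read

def control_block(bus):
    error = bus["setpoint"] - bus["measurement"]
    bus["actuation"] = 0.5 * error      # trivial proportional law

def actuator_block(bus):
    bus["output"] = bus["actuation"]    # stand-in for a hardware write

bus = Bus(setpoint=2.0)
for block in (sensor_block, control_block, actuator_block):
    block(bus)                          # one control cycle

print(bus["output"])  # 0.25
```

Because blocks touch only the bus, any of them can be swapped for a simulation model without changing the others, which is what makes offline testing of the same configuration possible.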
 
FRAAULT02 STUXNET and the Impact on Accelerator Control Systems controls, software, network, Windows 1285
 
  • S. Lüders
    CERN, Geneva, Switzerland
 
  2010 saw wide news coverage of a new kind of computer attack, named "Stuxnet", targeting control systems. Due to its level of sophistication, it is widely acknowledged that this attack marks the very first case of cyber-war of one country against the industrial infrastructure of another, although there is still much speculation about the details. Worse yet, experts recognize that Stuxnet might be just the beginning, and that similar attacks, eventually with much less sophistication but with much more collateral damage, can be expected in the years to come. Stuxnet targeted a particular model of the Siemens 400 PLC series. Similar modules are also deployed for accelerator controls such as the LHC cryogenics or vacuum systems, and for the detector control systems of the LHC experiments. The aim of this presentation is therefore to give an insight into what this new attack does and why it is deemed special. In particular, the potential impact on accelerator and experiment control systems will be discussed, and means to properly protect against similar attacks will be presented.  
slides icon Slides FRAAULT02 [8.221 MB]  
 
FRBHMUST03 Thirty Meter Telescope Observatory Software Architecture software, controls, operation, software-architecture 1326
 
  • K.K. Gillies, C. Boyer
    TMT, Pasadena, California, USA
 
  The Thirty Meter Telescope (TMT) will be a ground-based, 30-m optical-IR telescope with a highly segmented primary mirror located on the summit of Mauna Kea in Hawaii. The TMT Observatory Software (OSW) system will deliver the software applications and infrastructure necessary to integrate all TMT software into a single system and implement a minimal end-to-end science operations system. At the telescope, OSW is focused on the task of integrating and efficiently controlling and coordinating the telescope, adaptive optics, science instruments, and their subsystems during observation execution. From the software architecture viewpoint, the software system is viewed as a set of software components distributed across many machines that are integrated using a shared software base and a set of services that provide communications and other needed functionality. This paper describes the current state of the TMT Observatory Software focusing on its unique requirements, architecture, and the use of middleware technologies and solutions that enable the OSW design.  
slides icon Slides FRBHMUST03 [3.788 MB]  
 
FRCAUST03 Status of the ESS Control System controls, database, EPICS, software 1345
 
  • G. Trahern
    ESS, Lund, Sweden
 
  The European Spallation Source (ESS) is a high-current proton LINAC to be built in Lund, Sweden. The LINAC delivers 5 MW of power to the target at 2500 MeV, with a nominal current of 50 mA, and is designed to allow an upgrade to a higher power of 7.5 MW at the fixed energy of 2500 MeV. The Accelerator Design Update (ADU) collaboration of mainly European institutions will deliver a Technical Design Report at the end of 2012. First protons are expected in 2018, and first neutrons in 2019. The ESS will be constructed by a number of geographically dispersed institutions, which means that a considerable part of the control system integration will potentially be performed off-site. To mitigate this organizational risk, significant effort will be put into the standardization of hardware, software, and development procedures early in the project; we have named the main result of this standardization the Control Box concept. The ESS will use EPICS and will build on the positive distributed-development experiences of SNS and ITER. The current state of the control system design and key decisions are presented in the paper, as well as immediate challenges and proposed solutions.
From PAC 2011 article
http://eval.esss.lu.se/cgi-bin/public/DocDB/ShowDocument?docid=45
From IPAC 2010 article
http://eval.esss.lu.se/cgi-bin/public/DocDB/ShowDocument?docid=26
 
slides icon Slides FRCAUST03 [1.944 MB]  
 
FRCAUST04 Status of the ASKAP Monitoring and Control System software, controls, EPICS, monitoring 1349
 
  • J.C. Guzman
    CSIRO ATNF, NSW, Australia
 
  The Australian Square Kilometre Array Pathfinder, or ASKAP, is CSIRO’s new radio telescope currently under construction at the Murchison Radio-astronomy Observatory (MRO) in the Mid West region of Western Australia. As well as being a world-leading telescope in its own right, ASKAP will be an important testbed for the Square Kilometre Array, a future international radio telescope that will be the world’s largest and most sensitive. This paper gives a status update of the ASKAP project and provides a detailed look at the initial deployment of the monitoring and control system, as well as the major issues to be addressed in future software releases before the start of system commissioning later this year.  
slides icon Slides FRCAUST04 [3.414 MB]