Keyword: interface
Paper Title Other Keywords Page
MOBAUST02 The ATLAS Detector Control System controls, detector, experiment, monitoring 5
 
  • S. Schlenker, S. Arfaoui, S. Franz, O. Gutzwiller, C.A. Tsarouchas
    CERN, Geneva, Switzerland
  • G. Aielli, F. Marchese
    Università di Roma II Tor Vergata, Roma, Italy
  • G. Arabidze
    MSU, East Lansing, Michigan, USA
  • E. Banaś, Z. Hajduk, J. Olszowska, E. Stanecka
    IFJ-PAN, Kraków, Poland
  • T. Barillari, J. Habring, J. Huber
    MPI, Muenchen, Germany
  • M. Bindi, A. Polini
    INFN-Bologna, Bologna, Italy
  • H. Boterenbrood, R.G.K. Hart
    NIKHEF, Amsterdam, The Netherlands
  • H. Braun, D. Hirschbuehl, S. Kersten, K. Lantzsch
    Bergische Universität Wuppertal, Wuppertal, Germany
  • R. Brenner
    Uppsala University, Uppsala, Sweden
  • D. Caforio, C. Sbarra
    Bologna University, Bologna, Italy
  • S. Chekulaev
    TRIUMF, Canada's National Laboratory for Particle and Nuclear Physics, Vancouver, Canada
  • S. D'Auria
    University of Glasgow, Glasgow, United Kingdom
  • M. Deliyergiyev, I. Mandić
    JSI, Ljubljana, Slovenia
  • E. Ertel
    Johannes Gutenberg University Mainz, Institut für Physik, Mainz, Germany
  • V. Filimonov, V. Khomutnikov, S. Kovalenko
    PNPI, Gatchina, Leningrad District, Russia
  • V. Grassi
    SBU, Stony Brook, New York, USA
  • J. Hartert, S. Zimmermann
    Albert-Ludwig Universität Freiburg, Freiburg, Germany
  • D. Hoffmann
    CPPM, Marseille, France
  • G. Iakovidis, K. Karakostas, S. Leontsinis, E. Mountricha
    National Technical University of Athens, Athens, Greece
  • P. Lafarguette
    Université Blaise Pascal, Clermont-Ferrand, France
  • F. Marques Vinagre, G. Ribeiro, H.F. Santos
    LIP, Lisboa, Portugal
  • T. Martin, P.D. Thompson
    Birmingham University, Birmingham, United Kingdom
  • B. Mindur
    AGH University of Science and Technology, Krakow, Poland
  • J. Mitrevski
    SCIPP, Santa Cruz, California, USA
  • K. Nagai
    University of Tsukuba, Graduate School of Pure and Applied Sciences, Tsukuba, Ibaraki, Japan
  • S. Nemecek
    Czech Republic Academy of Sciences, Institute of Physics, Prague, Czech Republic
  • D. Oliveira Damazio, A. Poblaguev
    BNL, Upton, Long Island, New York, USA
  • P.W. Phillips
    STFC/RAL, Chilton, Didcot, Oxon, United Kingdom
  • A. Robichaud-Veronneau
    DPNC, Genève, Switzerland
  • A. Talyshev
    BINP, Novosibirsk, Russia
  • G.F. Tartarelli
    Università degli Studi di Milano & INFN, Milano, Italy
  • B.M. Wynne
    Edinburgh University, Edinburgh, United Kingdom
 
  The ATLAS experiment is one of the multi-purpose experiments at the Large Hadron Collider (LHC), constructed to study elementary particle interactions in collisions of high-energy proton beams. Twelve different sub-detectors as well as the common experimental infrastructure are supervised by the Detector Control System (DCS). The DCS enables equipment supervision of all ATLAS sub-detectors by using a system of 140 server machines running the industrial SCADA product PVSS. This highly distributed system reads, processes and archives of the order of 10^6 operational parameters. Higher-level control system layers based on the CERN JCOP framework allow for automatic control procedures and efficient error recognition and handling, manage the communication with external control systems such as the LHC controls, and provide a synchronization mechanism with the ATLAS physics data acquisition system. A web-based monitoring system allows access to the DCS operator interface views and browsing of the conditions data archive worldwide with high availability. This contribution first describes the status of the ATLAS DCS and the experience gained during the LHC commissioning and the first physics data-taking period. Secondly, the future evolution and maintenance constraints for the coming years and the LHC high-luminosity upgrades are outlined.
slides icon Slides MOBAUST02 [6.379 MB]  
 
MOBAUST03 The MedAustron Accelerator Control System controls, operation, real-time, timing 9
 
  • J. Gutleber, M. Benedikt
    CERN, Geneva, Switzerland
  • A.B. Brett, A. Fabich, M. Marchhart, R. Moser, M. Thonke, C. Torcato de Matos
    EBG MedAustron, Wr. Neustadt, Austria
  • J. Dedič
    Cosylab, Ljubljana, Slovenia
 
  This paper presents the architecture and design of the MedAustron particle accelerator control system. The facility is currently under construction in Wr. Neustadt, Austria. The accelerator and its control system are designed at CERN. Accelerator control systems for ion therapy applications are characterized by rich sets of configuration data, real-time reconfiguration needs and high stability requirements. The machine is operated according to a pulse-to-pulse modulation scheme and beams are described in terms of ion type, energy, beam dimensions, intensity and spill length. An irradiation session for a patient consists of a few hundred accelerator cycles over a time period of about two minutes. No two cycles within a session are equal and the dead-time between two cycles must be kept low. The control system is based on a multi-tier architecture with the aim of achieving a clear separation between front-end devices and their controllers. Off-the-shelf technologies are deployed wherever possible. In-house developments cover a main timing system, a light-weight layer to standardize operation and communication of front-end controllers, the control of the power converters and a procedure programming framework for automating high-level control and data analysis tasks. In order to be able to roll out a system within a predictable schedule, an "off-shoring" project management process was adopted: a frame agreement with an integrator covers the provision of skilled personnel who specify and build components together with the core team.
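  As an illustration of the pulse-to-pulse modulation scheme described above, the following minimal Python sketch models a cycle and an irradiation session; all names, fields and numbers are assumptions for illustration only and are not taken from the MedAustron design.

    from dataclasses import dataclass
    from typing import List

    @dataclass(frozen=True)
    class CycleRequest:
        """One accelerator cycle of a pulse-to-pulse modulated machine (illustrative only)."""
        ion: str             # e.g. "proton" or "C6+"
        energy_mev_u: float  # extraction energy per nucleon
        beam_size_mm: float  # requested beam dimension
        intensity: float     # particles per spill
        spill_length_s: float

    @dataclass
    class IrradiationSession:
        """A patient session: a few hundred cycles, no two of which need be equal."""
        cycles: List[CycleRequest]

        def total_beam_time_s(self) -> float:
            # lower bound on session length, ignoring inter-cycle dead-time
            return sum(c.spill_length_s for c in self.cycles)

    # Example: two consecutive, deliberately different cycles
    session = IrradiationSession(cycles=[
        CycleRequest("proton", 120.0, 8.0, 1e9, 1.0),
        CycleRequest("proton", 124.5, 8.0, 8e8, 1.0),
    ])
    print(session.total_beam_time_s())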
slides icon Slides MOBAUST03 [7.483 MB]  
 
MOCAULT02 Managing the Development of Plant Subsystems for a Large International Project controls, software, EPICS, site 27
 
  • D.P. Gurd
    Private Address, Vancouver, Canada
 
  ITER is an international collaborative project under development by nations representing over one half of the world's population. Major components will be supplied by "Domestic Agencies" representing the various participating countries. While the supervisory control system, known as "CODAC", will be developed at the project site in the south of France, the EPICS and PLC-based plant control subsystems are to be developed and tested locally, where the subsystems themselves are being built. This is similar to the model used for the development of the Spallation Neutron Source (SNS), which was a US national collaboration. However the far more complex constraints of an international collaboration, as well as the mandated extensive use of externally contracted and commercially-built subsystems, preclude the use of many specifics of the SNS collaboration approach which may have contributed to its success. Moreover, procedures for final system integration and commissioning at ITER are not yet well defined. This paper will outline the particular issues either inherent in an international collaboration or specific to ITER, and will suggest approaches to mitigate those problems with the goal of assuring a successful and timely integration and commissioning phase.  
slides icon Slides MOCAULT02 [3.684 MB]  
 
MOMAU003 The Computing Model of the Experiments at PETRA III controls, TANGO, experiment, detector 44
 
  • T. Kracht, M. Alfaro, M. Flemming, J. Grabitz, T. Núñez, A. Rothkirch, F. Schlünzen, E. Wintersberger, P. van der Reest
    DESY, Hamburg, Germany
 
  The PETRA storage ring at DESY in Hamburg has been refurbished to become a highly brilliant synchrotron radiation source (now named PETRA III). Commissioning of the beamlines started in 2009, user operation in 2010. In comparison with our DORIS beamlines, the PETRA III experiments have larger complexity, higher data rates and require an integrated system for data storage and archiving, data processing and data distribution. Tango [1] and Sardana [2] are the main components of our online control system. Both systems are developed by international collaborations. Tango serves as the backbone to operate all beamline components, certain storage ring devices and equipment from our users. Sardana is an abstraction layer on top of Tango. It standardizes the hardware access, organizes experimental procedures, has a command line interface and provides us with widgets for graphical user interfaces. Other clients like Spectra, which was written for DORIS, interact with Tango or Sardana. Modern 2D detectors create large data volumes. At PETRA III all data are transferred to an online file server which is hosted by the DESY computer center. Near-real-time analysis and reconstruction steps are executed on a CPU farm. A portal for remote data access is in preparation. Data archiving is done by dCache [3]. An offline file server has been installed for further analysis and in-house data storage.
[1] http://www.tango-controls.org
[2] http://computing.cells.es/services/collaborations/sardana
[3] http://www-dcache.desy.de
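  To make the Tango-based device access described above concrete, the following minimal PyTango sketch reads and writes an attribute of a hypothetical beamline motor; the device and attribute names are assumptions, and in practice Sardana macros would wrap such access.

    # Minimal Tango client sketch (device and attribute names are assumptions).
    import PyTango

    motor = PyTango.DeviceProxy("p09/motor/exp.01")     # hypothetical beamline motor

    state = motor.state()                                # generic Tango state command
    position = motor.read_attribute("Position").value    # read current position
    print(f"motor state={state}, position={position}")

    motor.write_attribute("Position", position + 0.1)    # move by 0.1 (device units)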
 
slides icon Slides MOMAU003 [0.347 MB]  
poster icon Poster MOMAU003 [0.563 MB]  
 
MOMAU004 Database Foundation for the Configuration Management of the CERN Accelerator Controls Systems controls, database, software, timing 48
 
  • Z. Zaharieva, M. Martin Marquez, M. Peryt
    CERN, Geneva, Switzerland
 
  The Controls Configuration DB (CCDB) and its interfaces have been developed over the last 25 years and today form the basis for the Configuration Management of the Controls System for all accelerators at CERN. The CCDB contains data for all configuration items and their relationships, required for the correct functioning of the Controls System. The configuration items are quite heterogeneous, covering different areas of the Controls System – ranging from 3000 Front-End Computers and 75 000 software devices allowing remote control of the accelerators to the valid states of the Accelerator Timing System. The article describes the different areas of the CCDB, their interdependencies and the challenges of establishing the data model for such a diverse configuration management database serving a multitude of clients. The CCDB tracks the life of the configuration items by allowing their clear identification, triggering change management processes as well as providing status accounting and audits. This necessitated the development and implementation of a combination of tailored processes and tools. The Controls System is data-driven – the data stored in the CCDB is extracted and propagated to the controls hardware in order to configure it remotely. Therefore special attention is paid to data security and data integrity, as an incorrectly configured item can have a direct impact on the operation of the accelerators.
slides icon Slides MOMAU004 [0.404 MB]  
poster icon Poster MOMAU004 [6.064 MB]  
 
MOPKN002 LHC Supertable database, operation, collider, luminosity 86
 
  • M. Pereira, M. Lamont, G.J. Müller, D.D. Teixeira
    CERN, Geneva, Switzerland
  • T.E. Lahey
    SLAC, Menlo Park, California, USA
  • E.S.M. McCrory
    Fermilab, Batavia, USA
 
  LHC operations generate enormous amounts of data. These data are being stored in many different databases. Hence, it is difficult for operators, physicists, engineers and management to have a clear view on the overall accelerator performance. Until recently the logging database, through its desktop interface TIMBER, was the only way of retrieving information on a fill-by-fill basis. The LHC Supertable has been developed to provide a summary of key LHC performance parameters in a clear, consistent and comprehensive format. The columns in this table represent the main parameters that describe the collider's operation, such as luminosity, beam intensity, emittance, etc. The data is organized in a tabular fill-by-fill manner with different levels of detail. A particular emphasis was placed on data sharing by making data available in various open formats. Typically the contents are calculated for periods of time that map to the accelerator's states or beam modes, such as Injection, Stable Beams, etc. Data retrieval and calculation is triggered automatically after the end of each fill. The LHC Supertable project currently publishes 80 columns of data on around 100 fills.
 
MOPKN010 Database and Interface Modifications: Change Management Without Affecting the Clients database, controls, software, operation 106
 
  • M. Peryt, R. Billen, M. Martin Marquez, Z. Zaharieva
    CERN, Geneva, Switzerland
 
  The first Oracle-based Controls Configuration Database (CCDB) was developed in 1986, making the controls system of CERN's Proton Synchrotron data-driven. Since then, this mission-critical system has evolved tremendously, going through several generational changes driven by the increasing complexity of the control system, software technologies and data models. Today, the CCDB covers the whole CERN accelerator complex and satisfies a much wider range of functional requirements. Because it is used online, everyday operation of the machines must not be disrupted. This paper describes our approach to dealing with change while ensuring continuity. How do we manage database schema changes? How do we take advantage of the latest web-deployed application development frameworks without alienating the users? How do we minimize the impact on the dependent systems connected to the databases through various APIs? In this paper we provide our answers to these questions, and to many more.
 
MOPKN014 A Web Based Realtime Monitor on EPICS Data EPICS, monitoring, status, real-time 121
 
  • L.F. Li, C.H. Wang
    IHEP Beijing, Beijing, People's Republic of China
 
  Funding: IHEP China
Monitoring systems such as EDM and CSS are extremely important in EPICS-based systems. Most of them follow a client/server (C/S) architecture. This paper describes the design and implementation of a web-based real-time monitoring system for EPICS data. The system follows a browser/server (B/S) architecture using Flex [1]. Through the CAJ [2] interface, it fetches EPICS data such as beam energy, beam current, lifetime and luminosity. All data are then displayed in a real-time chart in the browser (IE or Firefox/Mozilla). The chart is refreshed at a regular interval and can be zoomed and adjusted. It also provides data tips and a full-screen mode.
[1]http://www.adobe.com/products/flex.html
[2] M. Sekoranja, "Native Java Implementation of Channel Access for EPICS", ICALEPCS 2005, Geneva, Oct 2005, PO2.089-5.
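  The data-fetching side of such a monitor can be sketched as follows. The paper uses Java/CAJ with a Flex client; this is a Python analogue based on pyepics, and the PV names and the JSON file used as the hand-off to the browser chart are purely illustrative assumptions.

    # Python analogue of the data-fetching side of the monitor
    # (PV names below are placeholders, not real IHEP PVs).
    import json
    import time
    from epics import caget  # pyepics channel-access client

    PVS = {
        "beam_energy":  "SR:BeamEnergy",
        "beam_current": "SR:BeamCurrent",
        "lifetime":     "SR:Lifetime",
        "luminosity":   "SR:Luminosity",
    }

    def snapshot() -> dict:
        """Fetch all monitored PVs once; a web layer could serve this as JSON to a browser chart."""
        return {name: caget(pv) for name, pv in PVS.items()}

    while True:
        with open("monitor.json", "w") as f:
            json.dump({"t": time.time(), "data": snapshot()}, f)
        time.sleep(1.0)  # refresh at a regular interval, as in the abstract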
 
poster icon Poster MOPKN014 [1.105 MB]  
 
MOPKN018 Computing Architecture of the ALICE Detector Control System controls, detector, monitoring, network 134
 
  • P. Rosinský, A. Augustinus, P.Ch. Chochula, L.S. Jirdén, M. Lechman
    CERN, Geneva, Switzerland
  • G. De Cataldo
    INFN-Bari, Bari, Italy
  • A.N. Kurepin
    RAS/INR, Moscow, Russia
  • A. Moreno
    Universidad Politécnica de Madrid, E.T.S.I Industriales, Madrid, Spain
  • O. Pinazza
    INFN-Bologna, Bologna, Italy
 
  The ALICE Detector Control System (DCS) is based on a commercial SCADA product, running on a large Windows computer cluster. It communicates with about 1200 network-attached devices to assure safe and stable operation of the experiment. In the presentation we focus on the design of the ALICE DCS computer systems. We describe the management of data flow, the mechanisms for handling the large data amounts and the information exchange with external systems. One of the key operational requirements is an intuitive, error-proof and robust user interface allowing for simple operation of the experiment. At the same time, typical operator tasks, like trending or routine checks of the devices, must be decoupled from the automated operation in order to prevent overload of critical parts of the system. All these requirements must be implemented in an environment with strict security requirements. In the presentation we explain how these demands affected the architecture of the ALICE DCS.
 
MOPKN019 ATLAS Detector Control System Data Viewer database, framework, controls, experiment 137
 
  • C.A. Tsarouchas, S.A. Roe, S. Schlenker
    CERN, Geneva, Switzerland
  • U.X. Bitenc, M.L. Fehling-Kaschek, S.X. Winkelmann
    Albert-Ludwig Universität Freiburg, Freiburg, Germany
  • S.X. D'Auria
    University of Glasgow, Glasgow, United Kingdom
  • D. Hoffmann, O.X. Pisano
    CPPM, Marseille, France
 
  The ATLAS experiment at CERN is one of the four Large Hadron Collider experiments. ATLAS uses a commercial SCADA system (PVSS) for its Detector Control System (DCS), which is responsible for the supervision of the detector equipment, the reading of operational parameters, the propagation of the alarms and the archiving of important operational data in a relational database. DCS Data Viewer (DDV) is an application that provides access to historical data of DCS parameters written to the database through a web interface. It has a modular and flexible design and is structured using a client-server architecture. The server can be operated standalone with a command-like interface to the data, while the client offers a user-friendly, browser-independent interface. The selection of the metadata of DCS parameters is done via a column-tree view or with a powerful search engine. The final visualisation of the data is done using various plugins such as "value over time" charts, data tables, raw ASCII or structured export to ROOT. Excessive access or malicious use of the database is prevented by dedicated protection mechanisms, allowing the exposure of the tool to hundreds of inexperienced users. The metadata selection and data output features can be used separately by means of XML configuration files. Security constraints have been taken into account in the implementation, allowing access to DDV by collaborators worldwide. Due to its flexible interface and its generic and modular approach, DDV could easily be used for other experiment control systems that archive data using PVSS.
poster icon Poster MOPKN019 [0.938 MB]  
 
MOPKN020 The PSI Web Interface to the EPICS Channel Archiver EPICS, controls, software, operation 141
 
  • G. Jud, A. Lüdeke, W. Portmann
    Paul Scherrer Institut, 5232 Villigen PSI, Switzerland
 
  The EPICS channel archiver is a powerful tool to collect control system data from thousands of EPICS process variables, at rates of many hertz each, into an archive for later retrieval [1]. The channel archiver version 2 package includes a Java application for graphical data retrieval and a command line tool for data extraction into different file formats. For the Paul Scherrer Institute we wanted the possibility to retrieve the archived data through a web interface. It was desired to have flexible retrieval functions and to allow data references to be exchanged by e-mail. This web interface has been implemented by the PSI controls group and has now been in operation for several years. This presentation will highlight the special features of this PSI web interface to the EPICS channel archiver.
[1] http://sourceforge.net/apps/trac/epicschanarch/wiki
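  Programmatic retrieval from a Channel Archiver network data server is typically done through its XML-RPC interface; the sketch below shows the general pattern. The server URL, archive key handling and PV name are assumptions, and the method and field names should be checked against the Channel Archiver documentation; the PSI web interface itself is a separate, browser-based front end.

    # Sketch of archived-data retrieval via the Channel Archiver XML-RPC data server
    # (URL, PV name and result field names are assumptions to verify).
    import time
    import xmlrpc.client

    server = xmlrpc.client.ServerProxy("http://archiver.example.psi.ch/cgi-bin/ArchiveDataServer.cgi")

    archives = server.archiver.archives()          # list of available archives (key, name, path)
    key = archives[0]["key"]

    end = int(time.time())
    start = end - 3600                             # last hour
    how = 0                                        # 0 = raw samples

    result = server.archiver.values(key, ["MY:PV:NAME"], start, 0, end, 0, 1000, how)
    for channel in result:
        for sample in channel["values"]:
            print(sample["secs"], sample["value"])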
 
poster icon Poster MOPKN020 [0.385 MB]  
 
MOPKN024 The Integration of the LHC Cryogenics Control System Data into the CERN Layout Database database, controls, cryogenics, instrumentation 147
 
  • E. Fortescue-Beck, R. Billen, P. Gomes
    CERN, Geneva, Switzerland
 
  The Large Hadron Collider's Cryogenic Control System makes extensive use of several databases to manage data appertaining to over 34,000 cryogenic instrumentation channels. These data are essential for populating the firmware of the PLCs which are responsible for maintaining the LHC at the appropriate temperature. In order to reduce the number of data sources and the overall complexity of the system, the databases have been rationalised and the automatic tool that extracts data for the control software has been simplified. This paper describes the main improvements that have been made and evaluates the success of the project.
 
MOPKN027 BDNLS - BESSY Device Name Location Service database, controls, EPICS, target 154
 
  • D.B. Engel, P. Laux, R. Müller
    HZB, Berlin, Germany
 
  Initially the relational database (RDB) for control system configuration at BESSY was built around the device concept [1]. Maintenance and consistency issues, as well as the complexity of the scripts generating the configuration data, triggered the development of a novel, generic RDB structure based on hierarchies of named nodes with attribute/value pairs [2]. Unfortunately it turned out that the usability of this generic RDB structure for comprehensive configuration management relies totally on sophisticated data maintenance tools. On this background BDNS, a new database management tool, has been developed within the framework of the Eclipse Rich Client Platform. It uses the Model-View-Controller (MVC) layer of JFace to cleanly separate retrieval processes, data path, data visualization and updating. It is based on extensible configurations described in XML, allowing SQL calls to be chained and profiles to be composed for various use cases. It solves the problem of forwarding data keys to the subsequent SQL statement. BDNS and its ability to map various levels of complexity into the XML configurations make it possible to provide easy-to-use, tailored database access to the configuration maintainers for the different underlying database structures. Being based on Eclipse, the integration of BDNS into Control System Studio is straightforward.
[1] T. Birke et al., "Relational Database for Controls Configuration Management", IADBG Workshop 2001, San Jose.
[2] T. Birke et al., "Beyond Devices – An Improved RDB Data-Model for Configuration Management", ICALEPCS 2005, Geneva.
 
poster icon Poster MOPKN027 [0.210 MB]  
 
MOPKN029 Design and Implementation of the CEBAF Element Database database, controls, software, hardware 157
 
  • T. L. Larrieu, M.E. Joyce, C.J. Slominski
    JLAB, Newport News, Virginia, USA
 
  Funding: Authored by Jefferson Science Associates, LLC under U.S. DOE Contract No. DE-AC05-06OR23177.
With the inauguration of the CEBAF Element Database (CED) in Fall 2010, Jefferson Lab computer scientists have taken a first step toward the eventual goal of a model-driven accelerator. Once fully populated, the database will be the primary repository of information used for everything from generating lattice decks to booting front-end computers to building controls screens. A particular requirement influencing the CED design is that it must provide consistent access to not only present, but also future, and eventually past, configurations of the CEBAF accelerator. To accomplish this, an introspective database schema was designed that allows new elements, element types, and element properties to be defined on the fly without changing the table structure. When used in conjunction with Oracle Workspace Manager, it allows users to seamlessly query data from any time in the database history with exactly the same tools they use for querying the present configuration. Users can also check out workspaces and use them as staging areas for upcoming machine configurations. All access to the CED is through a well-documented API that is translated automatically from the original C++ into native libraries for scripting languages such as Perl, PHP, and Tcl, making access to the CED easy and ubiquitous.
The U.S.Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce this manuscript for U.S. Government purposes.
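  The introspective schema described above - new element types and properties defined on the fly without changing table structure - is essentially an entity-attribute-value design. The following sqlite-based sketch illustrates that general idea only; the real CED runs on Oracle with Workspace Manager and a C++ API, and none of the table, column or element names below are taken from it.

    # Generic entity-attribute-value sketch of an "introspective" schema:
    # element types, elements and properties are rows, not tables.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
    CREATE TABLE element_type (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
    CREATE TABLE property     (id INTEGER PRIMARY KEY, type_id INTEGER, name TEXT,
                               FOREIGN KEY(type_id) REFERENCES element_type(id));
    CREATE TABLE element      (id INTEGER PRIMARY KEY, type_id INTEGER, name TEXT,
                               FOREIGN KEY(type_id) REFERENCES element_type(id));
    CREATE TABLE value        (element_id INTEGER, property_id INTEGER, value TEXT,
                               PRIMARY KEY(element_id, property_id));
    """)

    # Define a new element type and its properties on the fly -- no ALTER TABLE needed.
    db.execute("INSERT INTO element_type(id, name) VALUES (1, 'Quadrupole')")
    db.executemany("INSERT INTO property(id, type_id, name) VALUES (?, 1, ?)",
                   [(1, "length_m"), (2, "gradient_T_per_m")])
    db.execute("INSERT INTO element(id, type_id, name) VALUES (1, 1, 'QUAD001')")
    db.executemany("INSERT INTO value VALUES (1, ?, ?)", [(1, "0.3"), (2, "4.2")])

    for name, prop, val in db.execute("""
            SELECT e.name, p.name, v.value
            FROM value v JOIN element e ON e.id = v.element_id
                         JOIN property p ON p.id = v.property_id"""):
        print(name, prop, val)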
 
poster icon Poster MOPKN029 [5.239 MB]  
 
MOPKS004 NSLS-II Beam Diagnostics Control System diagnostics, controls, electronics, timing 168
 
  • Y. Hu, L.R. Dalesio, K. Ha, O. Singh, H. Xu
    BNL, Upton, Long Island, New York, USA
 
  A correct measurement of NSLS-II beam parameters (beam position, beam size, circulating current, beam emittance, etc.) depends on the effective combination of beam monitors, the control and data acquisition system and high-level physics applications. This paper presents the EPICS-based control system for NSLS-II diagnostics and gives detailed descriptions of the diagnostics controls interfaces, including the classification of diagnostics, the proposed electronics and EPICS IOC platforms, and the interfaces to other subsystems. Device counts in the diagnostics subsystems are also briefly described.
poster icon Poster MOPKS004 [0.167 MB]  
 
MOPKS012 Design and Test of a Girder Control System at NSRRC controls, laser, network, storage-ring 183
 
  • H.S. Wang, J.-R. Chen, M. L. Chen, K.H. Hsu, W.Y. Lai, S.Y. Perng, Y.L. Tsai, T.C. Tseng
    NSRRC, Hsinchu, Taiwan
 
  A girder control system is proposed to quickly and precisely adjust the displacement and rotation angle of all girders in the storage ring with little manpower at the Taiwan Photon Source (TPS) project at the National Synchrotron Radiation Research Center (NSRRC). In this girder control system, six motorized cam movers supporting a girder are driven on three pedestals to perform six-axis adjustments of the girder. A tiltmeter monitors the pitch and roll of each girder; several touch sensors measure the relative displacement between consecutive girders. Moreover, a laser position sensitive detector (PSD) system measuring the relative displacement between straight-section girders is included in this girder control system. Operators can use subroutines developed in MATLAB to control every local girder control system via the intranet. This paper presents details of the design and tests of the girder control system.
 
MOPKS028 Using TANGO for Controlling a Microfluidic System with Automatic Image Analysis and Droplet Detection TANGO, device-server, controls, software 223
 
  • O. Taché, F. Malloggi
    CEA/DSM/IRAMIS/SIS2M, Gif sur Yvette, France
 
  Microfluidics allows one to manipulate small quantities of fluids, using channel dimensions of several micrometers. At CEA / LIONS, microfluidic chips are used to produce calibrated complex microdrops. This technique requires only a small volume of chemicals, but it requires a number of accurate electronic devices such as motorized syringes, valve and pressure sensors, and fast-frame-rate video cameras coupled to microscopes. We use the TANGO control system for all the heterogeneous equipment in microfluidics experiments and for video acquisition. We have developed a set of tools that perform image acquisition and droplet shape detection, so that droplet size, number and speed can be determined almost in real time. Using TANGO, we are able to provide feedback to actuators in order to adjust the microfabrication parameters and the timing of droplet formation.
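  To make the feedback loop described above concrete, the sketch below reads one image from a Tango camera device, counts droplets by simple thresholding and connected-component labelling, and nudges a syringe flow rate. The device and attribute names, the segmentation and the control law are all illustrative assumptions, not the CEA/LIONS implementation.

    # Hedged sketch of a droplet-counting feedback step over Tango
    # (device/attribute names and the proportional adjustment are assumptions).
    import numpy as np
    import PyTango
    from scipy import ndimage

    camera  = PyTango.DeviceProxy("lab/camera/microscope1")   # hypothetical camera device
    syringe = PyTango.DeviceProxy("lab/syringe/oil1")         # hypothetical syringe pump

    TARGET_DROPLETS = 20

    frame = np.asarray(camera.read_attribute("Image").value)  # grab one frame
    mask = frame > frame.mean() + 2 * frame.std()             # crude droplet segmentation
    _, n_droplets = ndimage.label(mask)                       # count connected components

    # Very simple feedback: nudge the continuous-phase flow rate toward the target count.
    flow = syringe.read_attribute("FlowRate").value
    flow *= 1.0 + 0.05 * (n_droplets - TARGET_DROPLETS) / TARGET_DROPLETS
    syringe.write_attribute("FlowRate", flow)
    print(f"droplets={n_droplets}, new flow rate={flow:.3f}")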
poster icon Poster MOPKS028 [1.594 MB]  
 
MOPMN001 Beam Sharing between the Therapy and a Secondary User controls, cyclotron, proton, network 231
 
  • K.J. Gajewski
    TSL, Uppsala, Sweden
 
  The 180 MeV proton beam from the cyclotron at The Svedberg Laboratory is primarily used for patient treatment. Because the proton beam is needed during only a small fraction of the time scheduled for the treatment, it is possible to divert the beam to another location to be used by a secondary user. The therapy staff (primary user) control the beam switching process after an initial set-up which is done by the cyclotron operator. They have an interface that allows controlling the accelerator and the beam line in all aspects needed for performing the treatment. The cyclotron operator is involved only if a problem occurs. The secondary user has its own interface that allows limited access to the accelerator control system. Using this interface it is possible to start and stop the beam when it is not used for the therapy, grant access to the experimental hall and monitor the beam properties. The tools and procedures for beam sharing between the primary and the secondary user are presented in the paper.
poster icon Poster MOPMN001 [0.924 MB]  
 
MOPMN002 Integration of the Moment-Based Beam-Dynamics Simulation Tool V-Code into the S-DALINAC Control System simulation, recirculation, linac, quadrupole 235
 
  • S. Franke, W. Ackermann, T. Weiland
    TEMF, TU Darmstadt, Darmstadt, Germany
  • R. Eichhorn, F. Hug, C. Klose, N. Pietralla, M. Platz
    TU Darmstadt, Darmstadt, Germany
 
  Funding: This work is supported by DFG through SFB 634.
Within accelerator control systems, fast and accurate beam dynamics simulation programs can assist the operators in gaining a more detailed insight into the actual machine status. The V-Code simulation tool implemented at TEMF is a fast tracking code based on the Vlasov equation. Instead of directly solving this partial differential equation, the considered particle distribution function is represented by a discrete set of characteristic moments. The accuracy of this approach can be adjusted through the order of moments considered and by representing the particle distribution through multiple sets of moments in a multi-ensemble environment. In this contribution an overview of the numerical model is presented together with the features implemented for its dedicated integration into the control system of the Superconducting Linear Accelerator S-DALINAC.
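  For readers unfamiliar with the moment approach, a generic, textbook-style definition of the phase-space moments of a distribution function is given below; the notation is illustrative and is not taken from the V-Code paper.

    % Generic definition of the characteristic moments of a particle
    % distribution function f(r, p, t) (illustrative notation only):
    \begin{equation}
      \langle x^{a} p_x^{b}\, y^{c} p_y^{d}\, z^{e} p_z^{g} \rangle(t)
      = \frac{1}{N}\int x^{a} p_x^{b}\, y^{c} p_y^{d}\, z^{e} p_z^{g}\,
        f(\mathbf{r},\mathbf{p},t)\,\mathrm{d}^{3}r\,\mathrm{d}^{3}p .
    \end{equation}
    % The Vlasov equation then yields coupled ordinary differential equations
    % for these moments, truncated at the chosen maximum order a+b+c+d+e+g.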
 
poster icon Poster MOPMN002 [0.901 MB]  
 
MOPMN004 An Operational Event Announcer for the LHC Control Centre Using Speech Synthesis controls, timing, software, operation 242
 
  • S.T. Page, R. Alemany-Fernandez
    CERN, Geneva, Switzerland
 
  The LHC island of the CERN Control Centre is a busy working environment with many status displays and running software applications. An audible event announcer was developed in order to provide a simple and efficient method of notifying the operations team of events occurring within the many subsystems of the accelerator. The LHC Announcer uses speech synthesis to report messages based upon data received from multiple sources. General accelerator information such as injections, beam energies and beam dumps is derived from data received from the LHC Timing System. Additionally, a software interface is provided that allows other surveillance processes to send messages to the Announcer using the standard control system middleware. Events are divided into categories which the user can enable or disable depending upon their interest. Use of the LHC Announcer is not limited to the Control Centre and is intended to be available to a wide audience, both inside and outside CERN. To accommodate this, it was designed to require no special software beyond a standard web browser. This paper describes the design of the LHC Announcer and how it is integrated into the LHC operational environment.
poster icon Poster MOPMN004 [1.850 MB]  
 
MOPMN005 ProShell – The MedAustron Accelerator Control Procedure Framework controls, ion, framework, ion-source 246
 
  • R. Moser, A.B. Brett, M. Marchhart, C. Torcato de Matos
    EBG MedAustron, Wr. Neustadt, Austria
  • J. Dedič, S. Sah
    Cosylab, Ljubljana, Slovenia
  • J. Gutleber
    CERN, Geneva, Switzerland
 
  MedAustron is a centre for ion therapy and research currently under construction in Austria. It features a synchrotron particle accelerator for proton and carbon-ion beams. This paper presents the architecture and concepts for implementing a procedure framework called ProShell. Procedures to automate high-level control and analysis tasks for commissioning and during operation are modelled with Petri nets, and user code is implemented in C#. It must be possible to execute procedures and monitor their execution progress remotely. Procedures include starting up devices and subsystems in a controlled manner, configuring and operating O(1000) devices and tuning their operational settings using iterative optimization algorithms. Device interfaces must be extensible to accommodate as yet unanticipated functionalities. The framework implements a template for procedure-specific graphical interfaces to access device-specific information such as monitoring data. Procedures interact with physical devices through proxy software components that implement one of the following interfaces: (1) a state-less or (2) a state-driven device interface. Components can extend these device interfaces following an object-oriented single-inheritance scheme to provide augmented, device-specific interfaces. As only two basic device interfaces need to be defined at an early project stage, devices can be integrated gradually as commissioning progresses. We present the architecture and design of ProShell and explain the programming model by giving the simple example of the ion source spectrum analysis procedure.
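  The two basic device interfaces mentioned above, extended by single inheritance, can be sketched as follows. The real framework is written in C#; this is a Python analogue with invented names, intended only to illustrate how a generic interface is augmented with device-specific members.

    # Python analogue of the two ProShell device-interface notions (names invented).
    from abc import ABC, abstractmethod

    class StatelessDevice(ABC):
        """Interface (1): simple get/set access, no life-cycle."""
        @abstractmethod
        def get(self, prop: str): ...
        @abstractmethod
        def set(self, prop: str, value) -> None: ...

    class StateDrivenDevice(StatelessDevice):
        """Interface (2): adds an explicit state machine on top of (1)."""
        @abstractmethod
        def state(self) -> str: ...
        @abstractmethod
        def switch_on(self) -> None: ...
        @abstractmethod
        def switch_off(self) -> None: ...

    class IonSourceProxy(StateDrivenDevice):
        """Device-specific extension via single inheritance (hypothetical example)."""
        def __init__(self):
            self._props = {"arc_current": 0.0}
            self._state = "OFF"
        def get(self, prop): return self._props[prop]
        def set(self, prop, value): self._props[prop] = value
        def state(self): return self._state
        def switch_on(self): self._state = "ON"
        def switch_off(self): self._state = "OFF"
        # augmented, device-specific member
        def spectrum(self):
            return [self._props["arc_current"]] * 8   # placeholder data

    src = IonSourceProxy()
    src.switch_on()
    src.set("arc_current", 1.2)
    print(src.state(), src.spectrum())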
poster icon Poster MOPMN005 [0.948 MB]  
 
MOPMN009 First Experience with the MATLAB Middle Layer at ANKA controls, EPICS, software, alignment 253
 
  • S. Marsching
    Aquenos GmbH, Baden-Baden, Germany
  • E. Huttel, M. Klein, A.-S. Müller, N.J. Smale
    KIT, Karlsruhe, Germany
 
  The MATLAB Middle Layer has been adapted for use at ANKA and was finally commissioned in March 2011. It is used for accelerator physics studies and regular tasks like beam-based alignment and response matrix analysis using LOCO. Furthermore, we intend to study the MATLAB Middle Layer as the default orbit correction tool for user operation. We report on the experience gained during the commissioning process and present the latest results obtained while using the MATLAB Middle Layer for machine studies.
poster icon Poster MOPMN009 [0.646 MB]  
 
MOPMN016 The Spiral2 Radiofrequency Command Control controls, cavity, EPICS, LLRF 274
 
  • D.T. Touchard, C. Berthe, P. Gillette, M. Lechartier, E. Lécorché, G. Normand
    GANIL, Caen, France
  • Y. Lussignol, D. Uriot
    CEA/DSM/IRFU, France
 
  Mainly for carrying out nuclear physics experiments, the SPIRAL2 facility based at Caen in France will aim to provide new radioactive rare-ion and high-intensity stable-ion beams. The driver accelerator uses several radiofrequency systems: RFQ, buncher and superconducting cavities, driven by independent amplifiers and controlled by digital electronics. This low-level radiofrequency subsystem is integrated into a regulated loop driven by the control system. A test of the whole system is foreseen to define and check the computer control interface and applications. This paper describes the interfaces of the different RF equipment to the EPICS-based computer control system. CSS supervision and the foreseen high-level XAL/Java-based tuning applications are also considered.
poster icon Poster MOPMN016 [0.986 MB]  
 
MOPMN020 Integrating Controls Frameworks: Control Systems for NA62 LAV Detector Test Beams framework, controls, detector, experiment 285
 
  • O. Holme, J.A.R. Arroyo Garcia, P. Golonka, M. Gonzalez-Berges, H. Milcent
    CERN, Geneva, Switzerland
  • O. Holme
    ETH, Zurich, Switzerland
 
  The detector control system for the NA62 experiment at CERN, to be ready for physics data-taking in 2014, is being built based on control technologies recommended by the CERN Engineering group. A rich portfolio of these technologies is planned to be showcased and deployed in the final application, and synergy between them is needed. In particular, two approaches to building controls applications need to play in harmony: the use of the high-level application framework called UNICOS, and a bottom-up approach of development based on the components of the JCOP Framework. The aim of combining the features provided by the two frameworks is to avoid duplication of functionality and minimize the maintenance and development effort for future controls applications. In this paper the results of the integration efforts obtained so far are presented, namely the control applications developed for beam tests of NA62 detector prototypes. Even though the delivered applications are simple, significant conceptual and development work was required to bring about the smooth interplay between the two frameworks, while assuring the possibility of unleashing their full power. A discussion of current open issues is presented, including the viability of the approach for larger-scale applications of high complexity, such as the complete detector control system for the NA62 detector.
poster icon Poster MOPMN020 [1.464 MB]  
 
MOPMN023 Preliminary Design and Integration of EPICS Operation Interface for the Taiwan Photon Source controls, operation, EPICS, GUI 292
 
  • Y.-S. Cheng, J. Chen, P.C. Chiu, K.T. Hsu, C.H. Kuo, C.Y. Liao, C.Y. Wu
    NSRRC, Hsinchu, Taiwan
 
  The TPS (Taiwan Photon Source) is a latest-generation 3 GeV synchrotron light source which has been under construction since 2010. The EPICS framework is adopted as the control system infrastructure for the TPS. The EPICS IOCs (Input Output Controllers) and various database records have been gradually implemented to control and monitor each subsystem of the TPS. The subsystems include timing, power supply, motion controllers, miscellaneous Ethernet-compliant devices, etc. Through EPICS PV (Process Variable) channel access, remote I/O data accessed via the Ethernet interface can be observed with the available graphical toolkits, such as EDM (Extensible Display Manager) and MATLAB. The operation interface mainly includes setting, reading, save and restore functions. Integration of the operation interfaces will depend upon the properties of each subsystem. In addition, a centralized management method is utilized to serve every client from file servers in order to maintain consistent versions of the related EPICS files. These efforts are summarized in this report.
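  As an illustration of the save and restore functions mentioned above, the following minimal sketch saves and restores a small list of set-points over EPICS Channel Access using pyepics; the PV names are placeholders, and the actual TPS operation interfaces are built with EDM and MATLAB rather than with this script.

    # Minimal save/restore sketch over EPICS Channel Access (PV names are placeholders).
    import json
    from epics import caget, caput

    SETPOINT_PVS = ["TPS:PS-Q1:Current-SP", "TPS:PS-Q2:Current-SP", "TPS:RF:Freq-SP"]

    def save(filename: str) -> None:
        """Read every set-point PV and store the values to a file."""
        values = {pv: caget(pv) for pv in SETPOINT_PVS}
        with open(filename, "w") as f:
            json.dump(values, f, indent=2)

    def restore(filename: str) -> None:
        """Write the stored values back to the machine."""
        with open(filename) as f:
            for pv, value in json.load(f).items():
                caput(pv, value)

    save("tps_setpoints.json")
    # ... later ...
    restore("tps_setpoints.json")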
 
MOPMS002 LHC Survey Laser Tracker Controls Renovation software, laser, hardware, controls 316
 
  • C. Charrondière, M. Nybø
    CERN, Geneva, Switzerland
 
  The LHC survey laser tracker control system is based on an industrial software package (Axyz) from Leica Geosystems™ that has an interface to Visual Basic 6.0™, which we used to automate the geometric measurements for the LHC magnets. As the Axyz package is no longer supported and the Visual Basic 6.0™ interface would need to be changed to Visual Basic .NET™, we have decided to recode the automation application in LabVIEW™, interfacing to the PC-DMIS software proposed by Leica Geosystems. This presentation describes the existing equipment, interface and application, showing the reasons for our decision to move to PC-DMIS and LabVIEW. We present the experience with the first prototype and make a comparison with the legacy system.
poster icon Poster MOPMS002 [1.812 MB]  
 
MOPMS003 The Evolution of the Control System for the Electromagnetic Calorimeter of the Compact Muon Solenoid Experiment at the Large Hadron Collider software, controls, detector, hardware 319
 
  • O. Holme, D.R.S. Di Calafiori, G. Dissertori, W. Lustermann
    ETH, Zurich, Switzerland
  • S. Zelepoukine
    UW-Madison/PD, Madison, Wisconsin, USA
 
  Funding: Swiss National Science Foundation (SNF)
This paper discusses the evolution of the Detector Control System (DCS) designed and implemented for the Electromagnetic Calorimeter (ECAL) of the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) as well as the operational experience acquired during the LHC physics data taking periods of 2010 and 2011. The current implementation in terms of functionality and planned hardware upgrades are presented. Furthermore, a project for reducing the long-term software maintenance, including a year-long detailed analysis of the existing applications, is put forward and the current outcomes which have informed the design decisions for the next CMS ECAL DCS software generation are described. The main goals for the new version are to minimize external dependencies enabling smooth migration to new hardware and software platforms and to maintain the existing functionality whilst substantially reducing support and maintenance effort through homogenization, simplification and standardization of the control system software.
 
poster icon Poster MOPMS003 [3.508 MB]  
 
MOPMS009 IFMIF LLRF Control System Architecture Based on Epics EPICS, controls, LLRF, database 339
 
  • J.C. Calvo, A. Ibarra, A. Salom
    CIEMAT, Madrid, Spain
  • M.A. Patricio
    UCM, Colmenarejo, Spain
  • M.L. Rivers
    ANL, Argonne, USA
 
  The IFMIF-EVEDA (International Fusion Materials Irradiation Facility - Engineering Validation and Engineering Design Activity) linear accelerator will be a 9 MeV, 125 mA CW (Continuous Wave) deuteron accelerator prototype to validate the technical options of the accelerator design for IFMIF. The RF (Radio Frequency) power system of IFMIF-EVEDA consists of 18 RF chains working at 175 MHz, each with three amplification stages. The LLRF system provides the RF drive input of the RF plants. It controls the amplitude and phase of this signal to keep it synchronized with the beam, and it also controls the resonance frequency of the cavities. The system is based on a commercial cPCI FPGA board provided by Lyrtech and controlled by a Windows host PC. For this purpose, it is mandatory to connect the cPCI FPGA board to EPICS Channel Access, building an IOC (Input Output Controller) between the Lyrtech board and EPICS. A new software architecture for the device support, using the asynPortDriver class and CSS as a GUI (Graphical User Interface), is presented.
poster icon Poster MOPMS009 [2.763 MB]  
 
MOPMS013 Progress in the Conversion of the In-house Developed Control System to EPICS and related technologies at iThemba LABS EPICS, controls, LabView, hardware 347
 
  • I.H. Kohler, M.A. Crombie, C. Ellis, M.E. Hogan, H.W. Mostert, M. Mvungi, C. Oliva, J.V. Pilcher, N. Stodart
    iThemba LABS, Somerset West, South Africa
 
  This paper highlights challenges associated with the upgrading of the iThemba LABS control system. Issues include maintaining an ageing control system based on a LAN of PCs running OS/2, using in-house developed C code, with hardware interfacing consisting of elderly CAMAC and locally manufactured SABUS [1] modules. The developments around integrating the local hardware into EPICS, running both systems in parallel during the transition period, and the inclusion of other environments like LabVIEW are discussed. It is concluded that it was a good decision to base the underlying intercommunication on channel access and to move the majority of process variables over to EPICS, given that it is an international standard, is less dependent on a handful of local developers, and enjoys the support of a very active worldwide community.
[1] SABUS - a collaboration between Iskor (PTY) Ltd. and CSIR (Council for Scientific and Industrial Research) (1980)
 
poster icon Poster MOPMS013 [24.327 MB]  
 
MOPMS016 The Control System of CERN Accelerators Vacuum (Current Status and Recent Improvements) vacuum, controls, status, interlocks 354
 
  • P. Gomes, F. Antoniotti, S. Blanchard, M. Boccioli, G. Girardot, H. Vestergard
    CERN, Geneva, Switzerland
  • L. Kopylov, M.S. Mikheev
    IHEP Protvino, Protvino, Moscow Region, Russia
 
  The vacuum control system of most of the CERN accelerators is based on Siemens PLCs and on PVSS SCADA. The application software for both PLC and SCADA started to be developed specifically by the vacuum group; with time, it has included a growing number of building blocks from the UNICOS framework. After the transition from the LHC commissioning phase to its regular operation, there have been a number of additions and improvements to the vacuum control system, driven by new technical requirements and by feedback from the accelerator operators and vacuum specialists. New functions have been implemented in the PLCs and SCADA: for the automatic restart of pumping groups after a power failure; for the control of the solenoids added to reduce e-cloud effects; and for PLC power supply diagnosis. The automatic recognition and integration of mobile slave PLCs has been extended to the quick installation of pumping groups with the electronics kept in radiation-free zones. The ergonomics and navigation of the SCADA application have been enhanced; new tools have been developed for interlock analysis, and for device listing and selection; web pages have been created, summarizing the values and status of the system. The graphical interface for Windows clients has been upgraded from ActiveX to Qt, and the PVSS servers will soon be moved from Windows to Linux.
poster icon Poster MOPMS016 [113.929 MB]  
 
MOPMS023 LHC Magnet Test Benches Controls Renovation controls, network, Linux, hardware 368
 
  • A. Raimondo, O.O. Andreassen, D. Kudryavtsev, S.T. Page, A. Rijllart, E. Zorin
    CERN, Geneva, Switzerland
 
  The LHC magnet test bench controls were designed in 1996. They were based on VME data acquisition systems and Siemens PLC control and interlock systems. During a review of the renovation of superconducting laboratories at CERN in 2009 it was decided to replace the VME systems with PXI and the obsolete Sun/Solaris workstations with Linux PCs. This presentation covers the requirements for the new systems in terms of functionality, security, channel count, sampling frequency and precision. We report on the experience with the commissioning of the first series of fixed and mobile measurement systems upgraded to this new platform, compared to the old systems. We also include the experience with the renovated control room.
poster icon Poster MOPMS023 [1.310 MB]  
 
MOPMS028 CSNS Timing System Prototype timing, EPICS, controls, operation 386
 
  • G.L. Xu, G. Lei, L. Wang, Y.L. Zhang, P. Zhu
    IHEP Beijing, Beijing, People's Republic of China
 
  The timing system is an important part of CSNS. The timing system prototype developments are based on the Event System 230 series. Two debug platforms are used: one based on EPICS base 3.14.8, with the IOC running on an MVME5100 under VxWorks 5.5; the other based on EPICS base 3.13 using VxWorks 5.4. Prototype work included driver debugging and tests of new EVG/EVR-230 features, such as CML output signals using the high-frequency step size of the signal cycle delay, the use of the interlock modules, and the use of the CML and TTL outputs to achieve interconnection and data transmission functions. Finally, the database was programmed with the new features in order to realize the OPI.
poster icon Poster MOPMS028 [0.434 MB]  
 
MOPMS033 Status, Recent Developments and Perspective of TINE-powered Video System, Release 3 controls, electron, Windows, site 405
 
  • S. Weisse, D. Melkumyan
    DESY Zeuthen, Zeuthen, Germany
  • P. Duval
    DESY, Hamburg, Germany
 
  Experience has shown that imaging software and hardware installations at accelerator facilities need to be changed, adapted and updated on a semi-permanent basis. On this premise, the component-based core architecture of Video System 3 was founded. In design and implementation, emphasis was, is, and will be put on flexibility, performance, low latency, modularity, interoperability, use of open source, ease of use as well as reuse, good documentation and multi-platform capability. In the last year, a milestone was reached as Video System 3 entered production at PITZ, Hasylab and PETRA III. Since then, the development path has been more strongly influenced by production-level experience and customer feedback. In this contribution, we describe the current status, layout, recent developments and perspective of the Video System. Focus is put on the integration of recording and playback of video sequences into the Archive/DAQ, a standalone installation of the Video System on a notebook, as well as experience running on 64-bit Windows 7. In addition, new client-side multi-platform GUI/application developments using Java are about to be released. Last but not least, it must be mentioned that although the implementation of Release 3 is integrated into the TINE control system, it is modular enough that integration into other control systems can be considered.
slides icon Slides MOPMS033 [0.254 MB]  
poster icon Poster MOPMS033 [2.127 MB]  
 
MOPMU002 Progress of the TPS Control System Development controls, EPICS, power-supply, feedback 425
 
  • J. Chen, Y.-T. Chang, Y.K. Chen, Y.-S. Cheng, P.C. Chiu, K.T. Hsu, S.Y. Hsu, K.H. Hu, C.H. Kuo, D. Lee, C.Y. Liao, Y.R. Pan, C.-J. Wang, C.Y. Wu
    NSRRC, Hsinchu, Taiwan
 
  The Taiwan Photon Source (TPS) is a low-emittance 3-GeV synchrotron light source which is under construction on the National Synchrotron Radiation Research Center (NSRRC) campus. The control system for the TPS is based upon the EPICS framework. The standard hardware and software components have been defined, and prototyping of various subsystems is in progress. The event-based timing system has been adopted. The power supply control interface, together with orbit feedback support, has also been defined. The machine protection system is in the design phase. Integration with the linear accelerator system, which was installed and commissioned at a temporary site for acceptance tests, has already been done. The interfaces to various systems are still being worked on. The infrastructure for the high-level and low-level software is under development. Progress is summarized in this report.
 
MOPMU005 Overview of the Spiral2 Control System Progress controls, EPICS, ion, database 429
 
  • E. Lécorché, P. Gillette, C.H. Haquin, E. Lemaître, L. Philippe, D.T. Touchard
    GANIL, Caen, France
  • J.F. Denis, F. Gougnaud, J.-F. Gournay, Y. Lussignol, P. Mattei
    CEA/DSM/IRFU, France
  • P.G. Graehling, J.H. Hosselet, C. Maazouzi
    IPHC, Strasbourg Cedex 2, France
 
  Spiral2, whose construction physically started at the beginning of this year at Ganil (Caen, France), will be a new Radioactive Ion Beam facility to extend scientific knowledge in nuclear physics, astrophysics and interdisciplinary research. The project consists of a high-intensity multi-ion accelerator driver delivering beams to a high-power production system to generate the Radioactive Ion Beams, which are then post-accelerated and used within the existing Ganil complex. Resulting from the collaboration between several laboratories, EPICS has been adopted as the standard framework for the control system. At the lower level, pieces of equipment are handled through VME/VxWorks chassis or directly interfaced using the Modbus/TCP protocol; also, Siemens programmable logic controllers are tightly coupled to the control system, being in charge of specific devices or hardware safety systems. The graphical user interface layer integrates both standard EPICS client tools (EDM, CSS under evaluation, etc.) and specific high-level applications written in Java, also deriving developments from the XAL framework. Relational databases are involved in the control system for equipment configuration (foreseen), machine representation and configuration, CSS archivers (under evaluation) and Irmis (mainly for process variable description). The first components of the Spiral2 control system are now used in operation within the context of the ion and deuteron source test platforms. The paper also describes how software development and sharing is managed within the collaboration.
poster icon Poster MOPMU005 [2.093 MB]  
 
MOPMU009 The Diamond Control System: Five Years of Operations controls, EPICS, operation, photon 442
 
  • M.T. Heron
    Diamond, Oxfordshire, United Kingdom
 
  Commissioning of the Diamond Light Source accelerators began in 2005, with routine operation of the storage ring commencing in 2006 and photon beamline operation in January 2007. Since then the Diamond control system has provided a single interface and abstraction to (nearly) all the equipment required to operate the accelerators and beamlines. It now supports the three accelerators and a suite of twenty photon beamlines and experiment stations. This paper presents an analysis of the operation of the control system and further considers the developments that have taken place in the light of operational experience over this period.  
 
MOPMU013 Phase II and III: The Next Generation of CLS Beamline Control and Data Acquisition Systems controls, software, EPICS, experiment 454
 
  • E. D. Matias, D. Beauregard, R. Berg, G. Black, M.J. Boots, W. Dolton, D. Hunter, R. Igarashi, D. Liu, D.G. Maxwell, C.D. Miller, T. Wilson, G. Wright
    CLS, Saskatoon, Saskatchewan, Canada
 
  The Canadian Light Source is nearing the completion of its suite of Phase II beamlines and is in detailed design of its Phase III beamlines. This paper presents an overview of the overall approach adopted by the CLS in the development of beamline control and data acquisition systems. Building on the experience of our first phase of beamlines, the CLS has continued to make extensive use of EPICS with EDM and Qt-based user interfaces. Increasingly, interpreted languages such as Python are finding a place in the beamline control systems. Web-based environments such as ScienceStudio have also found a prominent place in the control system architecture as we move to tighter integration between data acquisition, visualization and data analysis.
 
MOPMU014 Development of Distributed Data Acquisition and Control System for Radioactive Ion Beam Facility at Variable Energy Cyclotron Centre, Kolkata. controls, embedded, linac, status 458
 
  • K. Datta, C. Datta, D.P. Dutta, T.K. Mandi, H.K. Pandey, D. Sarkar
    DAE/VECC, Calcutta, India
  • R. Anitha, A. Balasubramanian, K. Mourougayane
    SAMEER, Chennai, India
 
  To facilitate frontline nuclear physics research, an ISOL (Isotope Separator On Line) type Radioactive Ion Beam (RIB) facility is being constructed at the Variable Energy Cyclotron Centre (VECC), Kolkata. The RIB facility at VECC consists of various subsystems such as the ECR ion source, RFQ, rebunchers and LINACs that produce and accelerate the energetic beams of radioactive isotopes required for different experiments. The Distributed Data Acquisition and Control System (DDACS) is intended to monitor and control a large number of parameters associated with the different subsystems from a centralized location, so that the complete operation of beam generation and beam tuning can be carried out in a user-friendly manner. The DDACS has been designed on a three-layer architecture, namely an Equipment Interface layer, a Supervisory layer and an Operator Interface layer. The Equipment Interface layer consists of different Equipment Interface Modules (EIMs) which are designed around an ARM processor and connected to different equipment through various interfaces such as RS-232, RS-485, etc. The Supervisory layer consists of a VIA-processor-based Embedded Controller (EC) running the embedded XP operating system. This embedded controller, interfaced with the EIMs through fibre-optic cable, acquires and analyses the data from the different EIMs. The Operator Interface layer consists mainly of PCs/workstations working as operator consoles. The data acquired and analysed by the EC can be displayed at the operator console, and the operator can centrally supervise and control the whole facility.
poster icon Poster MOPMU014 [2.291 MB]  
 
MOPMU017 TRIUMF's ARIEL Project controls, ISAC, EPICS, linac 465
 
  • J.E. Richards, D. Dale, K. Ezawa, D.B. Morris, K. Negishi, R.B. Nussbaumer, S. Rapaz, E. Tikhomolov, G. Waters, M. Leross
    TRIUMF, Canada's National Laboratory for Particle and Nuclear Physics, Vancouver, Canada
 
  The Advanced Rare IsotopE Laboratory (ARIEL) will expand TRIUMF's capabilities in rare-isotope beam physics by doubling the size of the current ISAC facility. Two simultaneous radioactive beams will be available in addition to the present ISAC beam. ARIEL will consist of a 50 MeV, 10 mA CW superconducting electron linear accelerator (E-Linac), an additional proton beam-line from the 520 MeV cyclotron, two new target stations, a beam-line connecting to the existing ISAC superconducting linac, and a beam-line to the ISAC low-energy experimental facility. Construction will begin in 2012, with commissioning to start in 2014. The ARIEL control system will be implemented using EPICS, allowing seamless integration with the EPICS-based ISAC control system. The ARIEL control system conceptual design will be discussed.
poster icon Poster MOPMU017 [1.232 MB]  
 
MOPMU026 A Readout and Control System for a CTA Prototype Telescope controls, software, framework, hardware 494
 
  • I. Oya, U. Schwanke
    Humboldt University Berlin, Institut für Physik, Berlin, Germany
  • B. Behera, D. Melkumyan, T. Schmidt, P. Wegner, S. Wiesand, M. Winde
    DESY Zeuthen, Zeuthen, Germany
 
  CTA (Cherenkov Telescope Array) is an initiative to build the next-generation ground-based gamma-ray instrument. The CTA array will allow studies in the very-high-energy domain in the range from a few tens of GeV to more than a hundred TeV, extending the existing energy coverage and increasing the sensitivity by a factor of 10 compared to current installations, while enhancing other aspects like angular and energy resolution. These goals require the use of at least three different sizes of telescopes. CTA will comprise two arrays (one in the Northern hemisphere and one in the Southern hemisphere) for full sky coverage and will be operated as an open observatory. A prototype for the Medium Size Telescope (MST) type is under development and will be deployed in Berlin by the end of 2011. The MST prototype will consist of the mechanical structure, drive system, active mirror control, four CCD cameras for prototype instrumentation and a weather station. The ALMA Common Software (ACS) distributed control framework has been chosen for the implementation of the control system of the prototype. In the present approach, the interface to some of the hardware devices is achieved by using the OPC Unified Architecture (OPC UA). A code-generation framework (ACSCG) has been designed for ACS modeling. In this contribution the progress in the design and implementation of the control system for the CTA MST prototype is described.
poster icon Poster MOPMU026 [1.953 MB]  
 
MOPMU027 Controls System Developments for the ERL Facility controls, software, Linux, electron 498
 
  • J.P. Jamilkowski, Z. Altinbas, D.M. Gassner, L.T. Hoff, P. Kankiya, D. Kayran, T.A. Miller, R.H. Olsen, B. Sheehy, W. Xu
    BNL, Upton, Long Island, New York, USA
 
  Funding: This manuscript has been authored by employees of Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U. S. Department of Energy.
The BNL Energy Recovery LINAC (ERL) is a high beam current, superconducting RF electron accelerator that is being commissioned to serve as a research and development prototype for a RHIC facility upgrade for electron-ion collisions (eRHIC). Key components of the machine include a laser, a photocathode, and a 5-cell superconducting RF cavity operating at a frequency of 703 MHz. Starting from a foundation based on the existing ADO software running on Linux servers and on the VME/VxWorks platforms developed for RHIC, we are developing a controls system that incorporates the wide range of hardware I/O interfaces needed for machine R&D. Details of the system layout, specifications, and user interfaces are provided.
 
poster icon Poster MOPMU027 [0.709 MB]  
 
TUAAULT04 Web-based Execution of Graphical Workflows : a Modular Platform for Multifunctional Scientific Process Automation controls, synchrotron, framework, database 540
 
  • E. De Ley, D. Jacobs
    iSencia Belgium, Gent, Belgium
  • M. Ounsy
    SOLEIL, Gif-sur-Yvette, France
 
  The Passerelle process automation suite offers a fundamentally modular solution platform, based on a layered integration of several best-of-breed technologies. It has been successfully applied by Synchrotron SOLEIL as the sequencer for data acquisition and control processes on its beamlines, integrated with TANGO as the control bus and GlobalSCREEN as the SCADA package. Since last year it has also been used as the graphical workflow component for the development of an Eclipse-based Data Analysis Workbench at the ESRF. The top layer of Passerelle exposes an actor-based development paradigm, based on the Ptolemy framework (UC Berkeley). Actors provide explicit reusability and strong decoupling, combined with an inherently concurrent execution model. Actor libraries exist for TANGO integration, web services, database operations, flow control, rules-based analysis, mathematical calculations, launching external scripts, etc. Passerelle's internal architecture is based on OSGi, the major Java framework for modular service-based applications. A large set of modules exists that can be recombined as desired to obtain different features and deployment models. Besides desktop versions of the Passerelle workflow workbench, there is also the Passerelle Manager, a secured web application including a graphical editor for centralized design, execution, management and monitoring of process flows, integrating standard Java Enterprise services with OSGi. We will present the internal technical architecture, some interesting application cases and the lessons learnt.  
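  The actor idea underlying Passerelle can be sketched generically: each actor owns a message queue, runs concurrently, and interacts with its peers only through messages. This is a language-agnostic illustration, not Passerelle's Java/Ptolemy API.

    # Generic actor sketch (illustrative only; Passerelle itself is Java/Ptolemy based).
    import queue
    import threading
    import time

    class Actor:
        def __init__(self, name, handler):
            self.name, self.handler = name, handler
            self.inbox = queue.Queue()
            threading.Thread(target=self._run, daemon=True).start()

        def send(self, message):
            self.inbox.put(message)

        def _run(self):
            while True:
                message = self.inbox.get()
                self.handler(self, message)   # actors only interact through messages

    # Two decoupled actors wired into a tiny "workflow": scale a value, then print it.
    printer = Actor("printer", lambda a, m: print(f"[{a.name}] {m}"))
    scaler = Actor("scaler", lambda a, m: printer.send(m * 2))
    scaler.send(21)
    time.sleep(0.2)   # give the daemon threads time to drain their queues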
slides icon Slides TUAAULT04 [10.055 MB]  
 
TUBAUST02 FPGA Communications Based on Gigabit Ethernet FPGA, Ethernet, hardware, controls 547
 
  • L.R. Doolittle, C. Serrano
    LBNL, Berkeley, California, USA
 
  The use of Field Programmable Gate Arrays (FPGAs) in accelerators is widespread due to their flexibility, performance, and affordability. Whether they are used for fast feedback systems, data acquisition, fast communications using custom protocols, or any other application, there is a need for the end-user and the global control software to access FPGA features using a commodity computer. The choice of communication standards that can be used to interface to an FPGA board is wide; however, one stands out for its maturity, basis in standards, performance, and hardware support: Gigabit Ethernet. In the context of accelerators it is desirable to have highly reliable, portable, and flexible solutions. We have therefore developed a chip- and board-independent FPGA design which implements the Gigabit Ethernet standard. Our design has been configured for use with multiple projects, supports full line-rate traffic, and communicates with any other device implementing the same well-established protocol, easily supported by any modern workstation or controls computer.  
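  From the host side, talking to such a board typically reduces to exchanging small datagrams with the FPGA. The sketch below shows a register write/read over UDP; the packet layout (1-byte opcode, 32-bit address, 32-bit data), IP address and port are assumptions, since the paper does not publish its frame format.

    # Hypothetical register write/read to an FPGA board over Gigabit Ethernet (UDP).
    import socket
    import struct

    FPGA_ADDR = ("192.168.1.100", 50000)        # placeholder board address

    def write_register(sock, address, value):
        sock.sendto(struct.pack("!BII", 0x01, address, value), FPGA_ADDR)

    def read_register(sock, address):
        sock.sendto(struct.pack("!BII", 0x02, address, 0), FPGA_ADDR)
        reply, _ = sock.recvfrom(16)
        _, _, value = struct.unpack("!BII", reply[:9])
        return value

    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(1.0)
        write_register(sock, 0x0010, 0xDEADBEEF)
        print(hex(read_register(sock, 0x0010)))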
slides icon Slides TUBAUST02 [0.909 MB]  
 
TUBAUIO05 Challenges for Emerging New Electronics Standards for Physics controls, software, hardware, monitoring 558
 
  • R.S. Larsen
    SLAC, Menlo Park, California, USA
 
  Funding: Work supported by US Department of Energy Contract DE AC03 76SF00515
A unique effort is underway between industry and the international physics community to extend the Telecom industry's Advanced Telecommunications Computing Architecture (ATCA and MicroTCA) to meet future needs of the physics machine and detector community. New standard extensions for physics have now been designed to deliver unprecedented performance and high subsystem availability for accelerator controls, instrumentation and data acquisition. Key technical features include a unique out-of-band embedded standard Intelligent Platform Management Interface (IPMI) system to manage hot-swap module replacement and hardware-software failover. However, the acceptance of any new standard depends critically on the creation of strong collaborations among users and between the user and industry communities. For the relatively small high-performance physics market to attract strong industry support, collaborations must converge on core infrastructure components including hardware, timing, software and firmware architectures, and must strive for a much higher degree of interoperability between lab- and industry-designed hardware-software products than in past generations of standards. The xTCA platform presents a unique opportunity for future progress. This presentation will describe the status of the hardware-software extension plans; technology advantages for machine controls and data acquisition systems; and examples of current collaborative efforts to help develop an industry base of generic ATCA and MicroTCA products in an open-source environment.
1. PICMG, the PCI Industrial Computer Manufacturer’s Group
2. Lab representation on PICMG includes CERN, DESY, FNAL, IHEP, IPFN, ITER and SLAC
 
slides icon Slides TUBAUIO05 [1.935 MB]  
 
TUCAUST01 Upgrading the Fermilab Fire and Security Reporting System hardware, network, software, database 563
 
  • CA. King, R. Neswold
    Fermilab, Batavia, USA
 
  Funding: Operated by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the United States Department of Energy.
Fermilab's homegrown fire and security system (known as FIRUS) is highly reliable and has been in use for nearly thirty years. The system has gone through some minor upgrades; however, none of them introduced significant, visible changes. In this paper, we present a major overhaul of the system that is halfway complete. We discuss the use of Apple's OS X for the new GUI, upgrading the servers to use the Erlang programming language, and allowing limited access for iOS and Android-based mobile devices.
 
slides icon Slides TUCAUST01 [2.818 MB]  
 
TUCAUST04 Changing Horses Mid-stream: Upgrading the LCLS Control System During Production Operations controls, EPICS, linac, software 574
 
  • S. L. Hoobler, R.P. Chestnut, S. Chevtsov, T.M. Himel, K.D. Kotturi, K. Luchini, J.J. Olsen, S. Peng, J. Rock, R.C. Sass, T. Straumann, R. Traller, G.R. White, S. Zelazny, J. Zhou
    SLAC, Menlo Park, California, USA
 
  The control system for the Linac Coherent Light Source (LCLS) began as a combination of new and legacy systems. When the LCLS began operating, the bulk of the facility was newly constructed, including a new control system using the Experimental Physics and Industrial Control System (EPICS) framework. The Linear Accelerator (LINAC) portion of the LCLS was repurposed for use by the LCLS and was controlled by the legacy system, which was built nearly 30 years ago. This system uses CAMAC, distributed 80386 microprocessors, and a central Alpha 6600 computer running the VMS operating system. This legacy control system has been successfully upgraded to EPICS during LCLS production operations while maintaining the 95% uptime required by the LCLS users. The successful transition was made possible by thorough testing in sections of the LINAC which were not in use by the LCLS. Additionally, a system was implemented to switch control of a LINAC section between new and legacy control systems in a few minutes. Using this rapid switching, testing could be performed during maintenance periods and accelerator development days. If any problems were encountered after a section had been switched to the new control system, it could be quickly switched back.  
slides icon Slides TUCAUST04 [0.183 MB]  
 
WEAAUST01 Sardana: The Software for Building SCADAS in Scientific Environments controls, TANGO, synchrotron, GUI 607
 
  • T.M. Coutinho, G. Cuní, D.F.C. Fernández-Carreiras, J. Klora, C. Pascual-Izarra, Z. Reszela, R. Suñé
    CELLS-ALBA Synchrotron, Cerdanyola del Vallès, Spain
  • A. Homs, E.T. Taurel
    ESRF, Grenoble, France
 
  Sardana is a software suite for supervision, control and data acquisition in small and large scientific installations. It delivers important cost and time reductions associated with the design, development and support of control and data acquisition systems. It enhances Tango with capabilities for building graphical interfaces without writing code, a powerful Python-based macro environment for building sequences and complex macros, and comprehensive access to the hardware. It scales well from small laboratories to large scientific institutions. It has been commissioned for the control system of the accelerators and beamlines at the ALBA synchrotron.  
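  A minimal sketch of the kind of Python macro Sardana's macro environment supports is shown below, based on the documented decorator-style macro definition; the macro name, parameters and motor are placeholders, not part of the ALBA deployment.

    # Minimal Sardana-style macro (sketch): move a motor and report its position.
    from sardana.macroserver.macro import macro, Type

    @macro([["moveable", Type.Moveable, None, "motor to move"],
            ["position", Type.Float, None, "absolute target position"]])
    def move_and_report(self, moveable, position):
        """Move a moveable to the requested position and print where it ended up."""
        moveable.move(position)
        self.output("%s is now at %s", moveable.getName(), moveable.getPosition())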
slides icon Slides WEAAUST01 [6.978 MB]  
 
WEBHMUST01 The MicroTCA Acquisition and Processing Back-end for FERMI@Elettra Diagnostics controls, diagnostics, FEL, timing 634
 
  • A.O. Borga, R. De Monte, M. Ferianis, G. Gaio, L. Pavlovič, M. Predonzani, F. Rossi
    ELETTRA, Basovizza, Italy
 
  Funding: The work was supported in part by the Italian Ministry of University and Research under grants FIRB-RBAP045JF2 and FIRB-RBAP06AWK3
Several diagnostics instruments for the FERMI@Elettra FEL require accurate readout, processing, and control electronics, together with complete integration within the TANGO control system. A custom developed back-end system, compliant with the PICMG MicroTCA standard, provides a robust platform for accommodating such electronics, including reliable slow control and monitoring infrastructure features. Two types of digitizer AMCs have been developed, manufactured, tested and successfully commissioned in the FERMI facility. The first is a fast (160 Msps), high-resolution (16 bit) Analog to Digital and Digital to Analog (A|D|A) converter board, hosting 2 A-D and 2 D-A converters controlled by a large FPGA (Xilinx Virtex-5 SX50T) which is also responsible for handling the fast communication interface. The second is an Analog to Digital Only (A|D|O) board, derived from A|D|A, with an analog front-side stage made of 4 A-D converters. A simple MicroTCA Timing Central Hub (MiTiCH) completes the set of modules necessary for operating the system. Several TANGO servers and panels have been developed and put into operation with the support of the controls group. The overall system architecture, together with practical application examples and the specific AMC functionalities, is presented. Impressions from our experience in the field with the novel MicroTCA standard are also discussed.
 
slides icon Slides WEBHMUST01 [2.715 MB]  
 
WEMAU001 A Remote Tracing Facility for Distributed Systems GUI, controls, database, operation 650
 
  • F. Ehm, A. Dworak
    CERN, Geneva, Switzerland
 
  Today CERN's accelerator control system is built upon a large number of services, mainly based on C++ and Java, which produce log events. In such a largely distributed environment these log messages are essential for problem recognition and tracing. Tracing is therefore a vital part of operations, as understanding an issue in a subsystem means analyzing log events in an efficient and fast manner. At present 3150 device servers are deployed on 1600 diskless frontends and send their log messages via the network to an in-house developed central server which, in turn, saves them to files. However, this solution is not able to provide several highly desired features and has performance limitations, which led to the development of a new solution. The new distributed tracing facility fulfills these requirements by taking advantage of the Simple Text Oriented Messaging Protocol (STOMP) and ActiveMQ as the transport layer. The system not only stores critical log events centrally in files or in a database, but also allows other clients (e.g. graphical interfaces) to read the same events at the same time by using the provided Java API. This facility also ensures that each client receives only the log events of the desired level. Thanks to the ActiveMQ broker technology the system can easily be extended to clients implemented in other languages, and it is highly scalable in terms of performance. Long running tests have shown that the system can handle up to 10,000 messages/second.  
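  As a hedged illustration of the transport layer, the sketch below publishes a log event to an ActiveMQ broker over STOMP using the stomp.py client; the broker host, credentials, topic name and JSON event fields are assumptions, not the facility's actual schema or API.

    # Publish a log event to an ActiveMQ broker over STOMP (sketch).
    import json
    import stomp

    event = {"level": "ERROR", "source": "device.server.42", "text": "readout timeout"}

    conn = stomp.Connection([("activemq.example.org", 61613)])   # placeholder broker
    conn.connect("tracer", "secret", wait=True)
    conn.send(destination="/topic/tracing.log", body=json.dumps(event),
              headers={"level": event["level"]})   # lets consumers filter by level
    conn.disconnect()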
slides icon Slides WEMAU001 [1.008 MB]  
poster icon Poster WEMAU001 [0.907 MB]  
 
WEMAU002 Coordinating Simultaneous Instruments at the Advanced Technology Solar Telescope controls, experiment, software, target 654
 
  • S.B. Wampler, B.D. Goodrich, E.M. Johansson
    Advanced Technology Solar Telescope, National Solar Observatory, Tucson, USA
 
  A key component of the Advanced Technology Solar Telescope control system design is the efficient support of multiple instruments sharing the light path provided by the telescope. The set of active instruments varies with each experiment and possibly with each observation within an experiment. The flow of control for a typical experiment is traced through the control system to present the main aspects of the design that facilitate this behavior. Special attention is paid to the role of ATST's Common Services Framework in assisting the coordination of instruments with each other and with the telescope.  
slides icon Slides WEMAU002 [0.251 MB]  
poster icon Poster WEMAU002 [0.438 MB]  
 
WEMAU003 The LabVIEW RADE Framework Distributed Architecture LabView, framework, software, distributed 658
 
  • O.O. Andreassen, D. Kudryavtsev, A. Raimondo, A. Rijllart
    CERN, Geneva, Switzerland
  • S. Shaipov, R. Sorokoletov
    JINR, Dubna, Moscow Region, Russia
 
  For accelerator GUI applications there is a need for a rapid development environment to create expert tools or to prototype operator applications. Typically a variety of tools is used, such as Matlab™ or Excel™, but their scope is limited, either because of their low flexibility or their limited integration into the accelerator infrastructure. In addition, having several tools obliges users to deal with different programming techniques and data structures. We have addressed these limitations by using LabVIEW™, extending it with interfaces to C++ and Java. In this way it fulfills the requirements of ease of use, flexibility and connectivity. We present the RADE framework and four applications based on it. Recent application requirements could only be met by implementing a distributed architecture with multiple servers running multiple services. This brought the additional advantage of implementing redundant services, increasing availability and making updates transparent. We will present two applications requiring high availability. We also report on issues encountered with such a distributed architecture and how we have addressed them. The latest extension of the framework is to industrial equipment, with program templates and drivers for PLCs (Siemens and Schneider) and PXI with LabVIEW Real-Time.  
slides icon Slides WEMAU003 [0.157 MB]  
poster icon Poster WEMAU003 [2.978 MB]  
 
WEMAU011 LIMA: A Generic Library for High Throughput Image Acquisition detector, hardware, controls, software 676
 
  • A. Homs, L. Claustre, A. Kirov, E. Papillon, S. Petitdemange
    ESRF, Grenoble, France
 
  A significant number of 2D detectors are used in large scale facilities' control systems for quantitative data analysis. In these devices a common set of control parameters and features can be identified, but most manufacturers provide their own specific software control interfaces. A generic image acquisition library, called LIMA, has been developed at the ESRF for better compatibility and easier integration of 2D detectors into existing control systems. The LIMA design is driven by three main goals: i) independence from any particular control system, so that it can be shared by a wide scientific community; ii) a rich common set of functionalities (e.g., if a feature is not supported by the hardware, an alternative software implementation is provided); and iii) intensive use of events and multi-threaded algorithms for optimal exploitation of multi-core hardware resources, needed when controlling high throughput detectors. LIMA currently supports the ESRF Frelon and Maxipix detectors as well as the Dectris Pilatus. Within a collaborative framework, the integration of the Basler GigE cameras is a contribution from SOLEIL. Although it is still under development, LIMA so far features fast data saving in different file formats and basic data processing/reduction, such as software pixel binning/sub-image extraction, background subtraction, and beam centroid and sub-image statistics calculation, among others.  
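  The "software fallback for missing hardware features" design goal can be sketched as follows; the class and method names are invented for illustration and are not LIMA's actual API.

    # Illustration of the hardware-feature-with-software-fallback idea (invented
    # names; this is not the LIMA API). If a camera lacks hardware binning, the
    # same user-facing call is honoured by a software implementation instead.
    import numpy as np

    class Camera:
        supports_hw_binning = False
        def acquire(self):
            return np.random.randint(0, 65535, size=(512, 512), dtype=np.uint16)
        def acquire_binned_hw(self, factor):
            raise NotImplementedError

    def acquire_binned(camera, factor=2):
        if camera.supports_hw_binning:
            return camera.acquire_binned_hw(factor)
        frame = camera.acquire()                          # software fallback
        h, w = frame.shape
        return frame.reshape(h // factor, factor, w // factor, factor).sum(axis=(1, 3))

    print(acquire_binned(Camera()).shape)   # (256, 256)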
slides icon Slides WEMAU011 [0.073 MB]  
 
WEMMU004 SPI Boards Package, a New Set of Electronic Boards at Synchrotron SOLEIL controls, undulator, FPGA, detector 687
 
  • Y.-M. Abiven, P. Betinelli-Deck, J. Bisou, F. Blache, F. Briquez, A. Chattou, J. Coquet, P. Gourhant, N. Leclercq, P. Monteiro, G. Renaud, J.P. Ricaud, L. Roussier
    SOLEIL, Gif-sur-Yvette, France
 
  SOLEIL is a third generation synchrotron radiation source located in France near Paris. At the moment, the storage ring delivers photon beams to 23 beamlines. As the machine and the beamlines improve their performance, new requirements are identified. On the machine side, a new feedforward implementation for the electromagnetic undulators is required to improve beam stability. On the beamline side, a solution is required to synchronize data acquisition with motor position during continuous scans. In order to provide a simple and modular solution for these applications requiring synchronization, the electronics group developed a set of electronic boards called the "SPI board package". In this package, the boards can be connected together in a daisy chain and communicate with the controller through an SPI* bus. Communication with the control system is done via Ethernet. At the moment the following boards have been developed: a controller board based on a Cortex-M3 MCU, a 16-bit ADC board, a 16-bit DAC board and a board for processing motor encoder signals based on a Spartan-3 FPGA. This platform allows us to embed processing close to the hardware using open tools. Thanks to this solution we reach the best synchronization performance.
* SPI: Serial Peripheral Interface
 
slides icon Slides WEMMU004 [0.230 MB]  
poster icon Poster WEMMU004 [0.430 MB]  
 
WEPKN005 Experiences in Messaging Middleware for High-Level Control Applications controls, EPICS, framework, software 720
 
  • N. Wang, J.L. Matykiewicz, R. Pundaleeka, S.G. Shasharina
    Tech-X, Boulder, Colorado, USA
 
  Funding: This project is funded by the US Department of Energy, Office of High Energy Physics under the contract #DE-FG02-08ER85043.
Existing high-level applications in accelerator control and modeling systems leverage many different languages, tools and frameworks that do not interoperate with one another. As a result, the community has moved toward the proven Service-Oriented Architecture approach to address the interoperability challenges among heterogeneous high-level application modules. This paper presents our experiences in developing a demonstrative high-level application environment using emerging messaging middleware standards. In particular, we utilized new features such as pvData in EPICS v4, and other emerging standards such as the Data Distribution Service (DDS) and the Extensible Type Interface from the Object Management Group. Our work on developing the demonstrative environment focuses on documenting the procedures to develop high-level accelerator control applications using the aforementioned technologies. Examples of such applications include presentation panel clients based on Control System Studio (CSS), a Model-Independent plug-in for CSS, and data producing middle-layer applications such as model/data servers. Finally, we will show how these technologies enable developers to package various control subsystems and activities into "services" with well-defined "interfaces", making it possible to leverage heterogeneous high-level applications via flexible composition.
 
poster icon Poster WEPKN005 [2.723 MB]  
 
WEPKN015 A New Helmholtz Coil Permanent Magnet Measurement System* controls, FPGA, data-acquisition, permanent-magnet 738
 
  • J.Z. Xu, I. Vasserman
    ANL, Argonne, USA
 
  Funding: Work supported by U.S. Department of Energy Office of Basic Energy Sciences, under Contract No. DE-AC02-06CH11357.
A new Helmholtz coil magnet measurement system has been developed at the Advanced Photon Source (APS) to characterize and sort the insertion device permanent magnets. The system uses the latest state-of-the-art field programmable gate array (FPGA) technology to compensate for the speed variations of the magnet motion. Initial results demonstrate that the system achieves a measurement precision better than 0.001 ampere-meters squared (A·m2) in a permanent magnet moment measurement of 32 A·m2, probably the world's best precision of its kind.
 
poster icon Poster WEPKN015 [0.710 MB]  
 
WEPKN025 Supervision Application for the New Power Supply of the CERN PS (POPS) controls, framework, operation, software 756
 
  • H. Milcent, X. Genillon, M. Gonzalez-Berges, A. Voitier
    CERN, Geneva, Switzerland
 
  The power supply system for the magnets of the CERN PS has been recently upgraded to a new system called POPS (POwer for PS). The old mechanical machine has been replaced by a system based on capacitors. The equipment as well as the low level controls have been provided by an external company (CONVERTEAM). The supervision application has been developed at CERN reusing the technologies and tools used for the LHC Accelerator and Experiments (UNICOS and JCOP frameworks, PVSS SCADA tool). The paper describes the full architecture of the control application, and the challenges faced for the integration with an outsourced system. The benefits of reusing the CERN industrial control frameworks and the required adaptations will be discussed. Finally, the initial operational experience will be presented.  
poster icon Poster WEPKN025 [13.149 MB]  
 
WEPKN026 The ELBE Control System – 10 Years of Experience with Commercial Control, SCADA and DAQ Environments controls, software, hardware, electron 759
 
  • M. Justus, F. Herbrand, R. Jainsch, N. Kretzschmar, K.-W. Leege, P. Michel, A. Schamlott
    HZDR, Dresden, Germany
 
  The electron accelerator facility ELBE is the central experimental site of the Helmholtz-Zentrum Dresden-Rossendorf, Germany. Experiments with Bremsstrahlung started in 2001 and since then, through a series of expansions and modifications, ELBE has evolved into a 24/7 user facility running a total of seven secondary sources including two IR FELs. As its control system, ELBE uses WinCC on top of a networked PLC architecture. For data acquisition with high temporal resolution, PXI and PC based systems are in use, applying National Instruments hardware and LabVIEW application software. Machine protection systems are based on in-house built digital and analogue hardware. An overview of the system is given, along with an experience report on maintenance, reliability and efforts to keep pace with ongoing IT, OS and security developments. Limits of application and new demands imposed by the forthcoming facility upgrade as a centre for high intensity beams (in conjunction with TW/PW femtosecond lasers) are discussed.  
poster icon Poster WEPKN026 [0.102 MB]  
 
WEPKS002 Quick EXAFS Experiments Using a New GDA Eclipse RCP GUI with EPICS Hardware Control experiment, detector, EPICS, hardware 771
 
  • R.J. Woolliscroft, C. Coles, M. Gerring, M.R. Pearson
    Diamond, Oxfordshire, United Kingdom
 
  Funding: Diamond Light Source Ltd.
The Generic Data Acquisition (GDA)* framework is open-source, Java and Eclipse RCP based data acquisition software for synchrotron and neutron facilities. A new implementation of the GDA on the B18 beamline at the Diamond synchrotron will be discussed. This beamline performs XAS energy scanning experiments and includes a continuous-scan mode of the monochromator, synchronised with various detectors, for Quick EXAFS (QEXAFS) experiments. A new perspective for the GDA's Eclipse RCP GUI has been developed in which graphical editors are used to write XML files that hold the experimental parameters. The same XML files are unmarshalled by the GDA server to create Java beans used by the Jython scripts run within the GDA server. The underlying motion control is provided by EPICS. The new Eclipse RCP GUI and the integration and synchronisation between the two software systems and the detectors will be covered.
* GDA website: http://www.opengda.org/
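  The parameter-file idea can be sketched briefly: a small XML document written by the graphical editor is read back before a scan. The element names and values below are invented placeholders, not the actual B18/GDA schema.

    # Sketch of XML-held experiment parameters (invented element names).
    import xml.etree.ElementTree as ET

    QEXAFS_XML = """
    <qexafsParameters>
        <element>Fe</element>
        <edge>K</edge>
        <startEnergy>7000.0</startEnergy>
        <endEnergy>7300.0</endEnergy>
        <scanTime>60.0</scanTime>
    </qexafsParameters>
    """

    params = {child.tag: child.text for child in ET.fromstring(QEXAFS_XML)}
    print("Scanning %(element)s %(edge)s edge from %(startEnergy)s "
          "to %(endEnergy)s eV in %(scanTime)s s" % params)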
 
poster icon Poster WEPKS002 [1.277 MB]  
 
WEPKS003 An Object Oriented Framework of EPICS for MicroTCA Based Control System EPICS, controls, framework, software 775
 
  • Z. Geng
    SLAC, Menlo Park, California, USA
 
  EPICS (Experimental Physics and Industrial Control System) is a distributed control system platform which has been widely used to control large scientific facilities such as particle accelerators and fusion plants. EPICS has introduced object oriented (C++) interfaces to most of the core services. But the major part of EPICS, the run-time database, only provides C interfaces, which makes it hard to incorporate the EPICS record related data and routines into an object oriented software architecture. This paper presents an object oriented framework which contains abstract classes that encapsulate the EPICS record related data and routines in C++ classes, so that full OOA (Object Oriented Analysis) and OOD (Object Oriented Design) methodologies can be used for EPICS IOC design. We also present a dynamic device management scheme for the hot-swap capability of the MicroTCA based control system.  
poster icon Poster WEPKS003 [0.176 MB]  
 
WEPKS010 Architecture Design of the Application Software for the Low-Level RF Control System of the Free-Electron Laser at Hamburg LLRF, controls, software, cavity 798
 
  • Z. Geng
    SLAC, Menlo Park, California, USA
  • V. Ayvazyan
    DESY, Hamburg, Germany
  • S. Simrock
    ITER Organization, St. Paul lez Durance, France
 
  The superconducting linear accelerator of the Free-Electron Laser at Hamburg (FLASH) provides high performance electron beams to the lasing system to generate synchrotron radiation for various users. The Low-Level RF (LLRF) system is used to maintain beam stability by stabilizing the RF field in the superconducting cavities with feedback and feed forward algorithms. The LLRF applications are sets of software to perform RF system model identification, control parameter optimization, and exception detection and handling, so as to improve the precision, robustness and operability of the LLRF system. In order to implement the LLRF applications on hardware with multiple distributed processors, an optimized software architecture is required for good understandability, maintainability and extensibility. This paper presents the design of the LLRF application software architecture based on a software engineering approach and its implementation at FLASH.  
poster icon Poster WEPKS010 [0.307 MB]  
 
WEPKS014 NOMAD – More Than a Simple Sequencer controls, hardware, CORBA, experiment 808
 
  • P. Mutti, F. Cecillon, A. Elaazzouzi, Y. Le Goc, J. Locatelli, H. Ortiz, J. Ratel
    ILL, Grenoble, France
 
  NOMAD is the new instrument control software of the Institut Laue-Langevin. Code that is highly sharable across the whole instrument suite, a user-oriented design for tailored functionality, and improved instrument-team autonomy thanks to a uniform and ergonomic user interface are the essential elements guiding the software development. NOMAD implements a client/server approach. The server is the core business layer containing all the instrument methods and the hardware drivers, while the GUI provides all the necessary functionality for the interaction between user and hardware. All instruments share the same executable, while a set of XML configuration files adapts hardware needs and instrument methods to the specific experimental setup. Thanks to a complete graphical representation of experimental sequences, NOMAD provides an overview of past, present and future operations. Users have the freedom to build their own specific workflows using an intuitive drag-and-drop technique. A complete driver database to connect and control all possible instrument components has been created, simplifying the inclusion of a new piece of equipment for an experiment. A web application makes all the relevant information on the status of the experiment available outside the ILL. A set of scientific methods facilitates the interaction between users and hardware, giving access to instrument control and to complex operations with just one click on the interface. NOMAD is not only for scientists: dedicated tools allow daily use for setting up and testing a variety of technical equipment.  
poster icon Poster WEPKS014 [6.856 MB]  
 
WEPKS019 Data Analysis Workbench data-analysis, experiment, TANGO, synchrotron 823
 
  • A. Götz, M.W. Gerring, O. Svensson
    ESRF, Grenoble, France
  • S. Brockhauser
    EMBL, Heidelberg, Germany
 
  Funding: ESRF
Data Analysis Workbench [1] is a new software tool produced in collaboration by the ESRF, SOLEIL and Diamond. It provides data visualization and workflow algorithm design for data analysis in combination with data collection. The workbench uses Passerelle as the workflow engine and EDNA plugins for data analysis. Actors talking to Tango are used for sending limited commands to hardware and starting existing data collection algorithms. There are scripting interfaces to SPEC and Python. The current state at the ESRF is a prototype.
[1] http://www.dawb.org
 
poster icon Poster WEPKS019 [2.249 MB]  
 
WEPKS022 Mango: an Online GUI Development Tool for the Tango Control System TANGO, controls, GUI, device-server 833
 
  • G. Strangolino, C. Scafuri
    ELETTRA, Basovizza, Italy
 
  Mango is an online tool based on QTango that allows easy development of graphical panels ready to run without need to be compiled. Developing with Mango is easy and fast because widgets are dragged from a widget catalogue and dropped into the Mango container. Widgets are then connected to the control system variables by choosing them from a Tango device list or by dragging them from any other running application built with the QTango library. Mango has also been successfully used during the FERMI@Elettra commissioning both by machine physicists and technicians.  
poster icon Poster WEPKS022 [0.429 MB]  
 
WEPKS024 CAFE, A Modern C++ Interface to the EPICS Channel Access Library EPICS, controls, GUI, framework 840
 
  • J.T.M. Chrin, M.C. Sloan
    Paul Scherrer Institut, 5232 Villigen PSI, Switzerland
 
  CAFE (Channel Access interFacE) is a C++ library that provides a modern, multifaceted interface to the EPICS-based control system. CAFE makes extensive use of templates and multi-index containers to enhance efficiency, flexibility and performance. Stability and robustness are accomplished by ensuring that connectivity to EPICS channels remains in a well defined state in every eventuality, and that the results of all synchronous and asynchronous operations are captured and reported with integrity. CAFE presents the user with a number of options for writing data to and retrieving data from the control system. In addition to basic read and write operations, a further abstraction layer provides transparency to more intricate functionality involving logical sets of data; such object sequences are easily instantiated through an XML-based configuration mechanism. CAFE's suitability for use in a broad spectrum of applications is demonstrated. These range from high performance Qt GUI control widgets, to event processing agents that propagate data through OMG's Data Distribution Service (DDS), to script-like frameworks such as MATLAB. The methodology for the modular use of CAFE serves to improve maintainability by enforcing a logical boundary between the channel access components and the specifics of the application framework at hand.  
poster icon Poster WEPKS024 [0.637 MB]  
 
WEPKS029 Integrating a Workflow Engine within a Commercial SCADA to Build End User Applications in a Scientific Environment GUI, controls, alignment, software 860
 
  • M. Ounsy, G. Abeillé, S. Pierre-Joseph Zéphir, K.S. Saintin
    SOLEIL, Gif-sur-Yvette, France
  • E. De Ley
    iSencia Belgium, Gent, Belgium
 
  To build integrated high-level applications, SOLEIL is using an original component-oriented approach based on GlobalSCREEN, an industrial Java SCADA [1]. The aim of this integrated development environment is to give SOLEIL's scientific and technical staff a way to develop GUI applications for beamline external users. These GUI applications must address the two following needs: monitoring and supervision of a control system, and development and execution of automated processes (such as beamline alignment, data collection, and on-line data analysis). The first need is now completely answered through a rich set of Java graphical components based on the COMETE [2] library and providing a high level of service for data logging, scanning and so on. To reach the same quality of service for process automation, a big effort has been made to integrate the PASSERELLE [3] workflow engine more smoothly, with dedicated user-friendly interfaces for end users, packaged as JavaBeans in the GlobalSCREEN components library. Starting with brief descriptions of the software architecture of the PASSERELLE and GlobalSCREEN environments, we will then present the overall system integration design as well as the current status of deployment on SOLEIL beamlines.
[1] V. Hardion, M. Ounsy, K. Saintin, "How to Use a SCADA for High-Level Application Development on a Large-Scale Basis in a Scientific Environment", ICALEPCS 2007
[2] G. Viguier, K. Saintin, https://comete.svn.sourceforge.net/svnroot/comete, ICALEPCS'11, MOPKN016.
[3] A. Buteau, M. Ounsy, G. Abeille, "A Graphical Sequencer for SOLEIL Beamline Acquisitions", ICALEPCS'07, Knoxville, Tennessee, USA, Oct 2007.
 
 
WEPKS032 A UML Profile for Code Generation of Component Based Distributed Systems software, distributed, controls, framework 867
 
  • G. Chiozzi, L. Andolfato, R. Karban
    ESO, Garching bei Muenchen, Germany
  • A. Tejeda
    UCM, Antofagasta, Chile
 
  A consistent and unambiguous implementation of code generation (model to text transformation) from UML must rely on a well defined UML profile, customizing UML for a particular application domain. Such a profile must have a solid foundation in a formally correct ontology, formalizing the concepts and their relations in the specific domain, in order to avoid a maze or set of wildly created stereotypes. The paper describes a generic profile for the code generation of component based distributed systems for control applications, the process to distill the ontology and define the profile, and the strategy followed to implement the code generator. The main steps that take place iteratively include: defining the terms and relations with an ontology, mapping the ontology to the appropriate UML metaclasses, testing the profile by creating modelling examples, and generating the code.  
poster icon Poster WEPKS032 [1.925 MB]  
 
WEPMN005 Spiral2 Control Command: a Standardized Interface between High Level Applications and EPICS IOCs status, controls, operation, EPICS 879
 
  • C.H. Haquin, P. Gillette, E. Lemaître, L. Philippe, D.T. Touchard
    GANIL, Caen, France
  • F. Gougnaud, Y. Lussignol
    CEA/DSM/IRFU, France
 
  The SPIRAL2 linear accelerator will produce entirely new particle beams, enabling exploration of the boundaries of matter. Coupled with the existing GANIL machine, this new facility will produce light and heavy exotic nuclei at extremely high intensities. The field deployment of the control system relies on Linux PCs and servers, VME VxWorks crates and Siemens PLCs; equipment will be addressed either directly or using a Modbus/TCP field bus network. Several laboratories are involved in the software development of the control system. In order to improve the efficiency of the collaboration, special care is taken over the software organization. This really makes sense during the development phase, in a context of tight budget and time constraints, but it also helps, for the exploitation of the new machine, to design a control system that will require as little effort as possible for maintenance and evolution. The major concepts of this organization are the choice of EPICS; the definition of an EPICS directory tree specific to SPIRAL2, called "topSP2", which is our reference work area for development, integration and exploitation; and the use of a version control system (SVN) to store and share our developments independently of the multi-site dimension of the project. The next concept is the definition of a "standardized interface" between high level applications programmed in Java and the EPICS databases running in the IOCs. This paper relates the rationale and objectives of this interface, as well as its development cycle from specification using UML diagrams to testing on the actual equipment.  
poster icon Poster WEPMN005 [0.945 MB]  
 
WEPMN009 Simplified Instrument/Application Development and System Integration Using Libera Base Software Framework software, hardware, framework, controls 890
 
  • M. Kenda, T. Beltram, T. Juretič, B. Repič, D. Škvarč, C. Valentinčič
    I-Tech, Solkan, Slovenia
 
  Development of many appliances used in scientific environments confronts us with similar challenges, often repeated from project to project. One has to design or integrate hardware components. Support for network and other communications standards needs to be established. Data and signals are processed and dispatched. Interfaces are required to monitor and control the behaviour of the appliances. At Instrumentation Technologies we identified and addressed these issues by creating a generic framework composed of several reusable building blocks. They simplify some of the tedious tasks and leave more time to concentrate on the real issues of the application. Furthermore, the end-product quality benefits from the larger common base of this middleware. We will present the benefits using the concrete example of an instrument implemented on the MTCA platform and accessible through a graphical user interface.  
poster icon Poster WEPMN009 [5.755 MB]  
 
WEPMN012 PC/104 Asyn Drivers at Jefferson Lab controls, EPICS, hardware, operation 898
 
  • J. Yan, T.L. Allison, S.D. Witherspoon
    JLAB, Newport News, Virginia, USA
 
  Funding: Authored by Jefferson Science Associates, LLC under U.S. DOE Contract No. DE-AC05-06OR23177.
PC/104 embedded IOCs that run RTEMS and EPICS have been applied in many new projects at Jefferson Lab. Different commercial PC/104 I/O modules on the market, such as digital I/O, data acquisition, and communication modules, are integrated in our control system. AsynDriver, which is a general facility for interfacing device specific code to low level drivers, was applied to the PC/104 serial communication I/O cards. We chose the ines GPIB-PC/104-XL as the GPIB interface module and developed a low level device driver that is compatible with asynDriver. The ines GPIB-PC/104-XL has an iGPIB 72110 chip, which is register compatible with the NEC uPD7210 in GPIB Talker/Listener applications. Instrument device support was created to provide access to the operating parameters of GPIB devices. A low level device driver for the serial communication board Model 104-COM-8SM was also developed to run under asynDriver. This serial interface board contains eight independent ports and provides effective RS-485, RS-422 and RS-232 multipoint communication. StreamDevice protocols were applied for the serial communications. AsynDriver in the PC/104 IOC applications provides a standard interface between the high level device support and the hardware level device drivers. This makes it easy to develop GPIB and serial communication applications in PC/104 IOCs.
 
 
WEPMN013 Recent Developments in Synchronised Motion Control at Diamond Light Source EPICS, controls, software, framework 901
 
  • B.J. Nutter, T.M. Cobb, M.R. Pearson, N.P. Rees, F. Yuan
    Diamond, Oxfordshire, United Kingdom
 
  At Diamond Light Source the EPICS control system is used with a variety of motion controllers. The use of EPICS ensures a common interface over a range of motorised applications. We have developed a system to enable the use of the same interface for synchronised motion over multiple axes using the Delta Tau PMAC controller. Details of this work will be presented, along with examples and possible future developments.  
 
WEPMN016 Synchronously Driven Power Converter Controller Solution for MedAustron timing, controls, real-time, FPGA 912
 
  • L. Šepetavc, J. Dedič, R. Tavčar
    Cosylab, Ljubljana, Slovenia
  • J. Gutleber
    CERN, Geneva, Switzerland
  • R. Moser
    EBG MedAustron, Wr. Neustadt, Austria
 
  MedAustron is an ion beam cancer therapy and research centre currently under construction in Wiener Neustadt, Austria. This facility features a synchrotron particle accelerator for light ions. Cosylab is closely working together with MedAustron on the development of a power converter controller (PCC) for the 260 deployed converters. The majority are voltage sources that are regulated in real-time via digital signal processor (DSP) boards. The in-house developed PCC operates the DSP boards remotely, via real-time fiber optic links. A single PCC will control up to 30 power converters that deliver power to magnets used for focusing and steering particle beams. Outputs of all PCCs must be synchronized within a time frame of at most 1 microsecond, which is achieved by integration with the timing system. This pulse-to-pulse modulation machine requires different waveforms for each beam generation cycle. Dead times between cycles must be kept low, therefore the PCC is reconfigured during beam generation. The system is based on a PXI platform from National Instruments running LabVIEW Real-Time. An in-house developed generic real-time optical link connects the PCCs to custom developed front-end devices. These FPGA-based hardware components facilitate integration with different types of power converters. All PCCs are integrated within the SIMATIC WinCC OA SCADA system which coordinates and supervises their operation. This paper describes the overall system architecture, its main components, challenges we faced and the technical solutions.  
poster icon Poster WEPMN016 [0.695 MB]  
 
WEPMN017 PCI Hardware Support in LIA-2 Control System hardware, controls, Linux, operation 916
 
  • D. Bolkhovityanov, P.B. Cheblakov
    BINP SB RAS, Novosibirsk, Russia
 
  The LIA-2 control system* is built on cPCI crates with x86-compatible processor boards running Linux. Slow electronics are connected via CAN bus, while fast electronics (4 MHz and 200 MHz fast ADCs and 200 MHz timers) are implemented as cPCI/PMC modules. Several ways to drive PCI control electronics in Linux were examined. Finally, a userspace driver approach was chosen. These drivers communicate with the hardware via a small kernel module, which provides access to PCI BARs and to interrupt handling. This module was named USPCI (User-Space PCI access). This approach dramatically simplifies the creation of drivers, as opposed to kernel drivers, and provides high reliability (because only a tiny and thoroughly debugged piece of code runs in the kernel). The LIA-2 accelerator was successfully commissioned, and the solution chosen has proven adequate and very easy to use. Besides, USPCI turned out to be a handy tool for the examination and debugging of PCI devices directly from the command line. In this paper the available approaches for working with PCI control hardware in Linux are considered, and the USPCI architecture is described.
* "LIA-2 Linear Induction Accelerator Control System", this conference
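  The userspace-driver idea can be illustrated with the kernel's generic sysfs PCI resource files, which expose BARs to user programs; this analogue uses standard Linux facilities rather than the USPCI module's own API, and the PCI address and register offset below are placeholders.

    # Userspace access to a PCI BAR via the generic sysfs interface (an analogue
    # of the USPCI idea, not its actual API). Needs root and an existing device.
    import mmap
    import struct

    BAR0 = "/sys/bus/pci/devices/0000:03:00.0/resource0"   # placeholder device
    REG_OFFSET = 0x10                                        # placeholder register

    with open(BAR0, "r+b") as f:
        bar = mmap.mmap(f.fileno(), 4096)                    # map first page of BAR0
        value, = struct.unpack_from("<I", bar, REG_OFFSET)   # 32-bit register read
        print(f"register 0x{REG_OFFSET:02x} = 0x{value:08x}")
        bar.close()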
 
poster icon Poster WEPMN017 [0.954 MB]  
 
WEPMN027 Fast Scalar Data Buffering Interface in Linux 2.6 Kernel Linux, hardware, controls, instrumentation 943
 
  • A. Homs
    ESRF, Grenoble, France
 
  Key instrumentation devices like counter/timers, analog-to-digital converters and encoders provide scalar data input. Many of them allow fast acquisitions, but do not provide hardware triggering or buffering mechanisms. A Linux 2.4 kernel driver called Hook was developed at the ESRF as a generic software-triggered buffering interface. This work presents the port of the ESRF Hook interface to the Linux 2.6 kernel. The interface distinguishes two independent functional groups: trigger event generators and data channels. Devices in the first group create software events, like hardware interrupts generated by timers or external signals. On each event, one or more device channels in the second group are read and stored in kernel buffers. The event generators and data channels to be read are fully configurable before each sequence. Designed for fast acquisitions, the Hook implementation is well adapted to multi-CPU systems, where the interrupt latency is notably reduced. On heavily loaded dual-core PCs running standard (non real time) Linux, data can be taken at 1 kHz without losing events. Additional features include full integration into the sysfs (/sys) virtual filesystem and hotplug device support.  
 
WEPMN028 Development of Image Data Acquisition System for 2D Detector at SACLA (SPring-8 XFEL) detector, data-acquisition, laser, FPGA 947
 
  • A. Kiyomichi, A. Amselem, T. Hirono, T. Ohata, R. Tanaka, M. Yamaga
    JASRI/SPring-8, Hyogo-ken, Japan
  • T. Hatsui
    RIKEN/SPring-8, Hyogo, Japan
 
  The x-ray free electron laser facility SACLA (SPring-8 Angstrom Compact free electron LAser) was constructed and started beam commissioning in March 2011. To meet the requirements of the experiments proposed at SACLA, x-ray multi-port readout CCD detectors (MPCCD) have been developed to realize a system with a total area of 4 megapixels and a 16-bit wide dynamic range at the 60 Hz shot rate. We have developed the image data-handling scheme using the event-synchronized data-acquisition system. The front-end system uses the CameraLink interface, which excels in real-time triggering and high-speed data transfer. For a total data rate of up to 4 Gbps, the image data are collected by dividing the CCD detector into eight segments of 0.5 Mpixels each, and then sent to high-speed data storage in parallel. We prepared two types of CameraLink imaging systems, VME based and PC based. The Image Distribution board is a logic-reconfigurable VME board with a CameraLink mezzanine card. The front-end system of the MPCCD detector consists of eight sets of Image Distribution boards. We plan to introduce online lossless compression using an FPGA with an arithmetic coding algorithm. For wide adaptability to user requirements, we also prepared the PC based imaging system, which consists of a Linux server and a commercial CameraLink PCI interface. It does not include the compression function, but supports various types of CCD cameras, for example a high-definition (1920x1080) single CCD camera.  
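  The quoted figures are consistent with a simple back-of-the-envelope check, assuming exactly 4 Mpixel frames at 16 bit/pixel and the 60 Hz shot rate (overheads ignored):

    # Back-of-the-envelope check of the quoted data rate.
    pixels, bits_per_pixel, frame_rate, segments = 4e6, 16, 60, 8

    total_gbps = pixels * bits_per_pixel * frame_rate / 1e9
    per_segment_mbps = total_gbps / segments * 1e3
    print(f"total ~{total_gbps:.2f} Gbps, ~{per_segment_mbps:.0f} Mbps per segment")
    # total ~3.84 Gbps, ~480 Mbps per segment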
poster icon Poster WEPMN028 [5.574 MB]  
 
WEPMN030 Power Supply Control Interface for the Taiwan Photon Source power-supply, controls, Ethernet, quadrupole 950
 
  • C.Y. Wu, J. Chen, Y.-S. Cheng, P.C. Chiu, K.T. Hsu, K.H. Hu, C.H. Kuo, D. Lee, C.Y. Liao, K.-B. Liu
    NSRRC, Hsinchu, Taiwan
 
  The Taiwan Photon Source (TPS) is a latest generation synchrotron light source. Stringent power supply specifications must be met to achieve the design goals of the TPS. High precision power supplies with 20-, 18-, and 16-bit DACs for the storage ring dipole, quadrupole, and sextupole magnets are equipped with Ethernet interfaces. The control interface includes basic functionality and some advanced features which are useful for performance monitoring and post-mortem diagnostics. Power supplies of these categories can be accessed by EPICS IOCs. The corrector power supply control interface is a specially designed embedded interface module which will be mounted on the corrector power supply cages to achieve the required performance. The setting reference of the corrector power supply is generated by a 20-bit DAC and readback is done by a 24-bit ADC. The interface module has an embedded EPICS IOC for slow control. Fast setting ports are also supported by the internal FPGA for orbit feedback support.  
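  The relation between a current setpoint and 20-bit DAC counts can be sketched as below; the bipolar ±10 A full-scale range is an assumption chosen for illustration only, not the TPS corrector specification.

    # Convert a corrector current setpoint to 20-bit DAC counts and back (sketch).
    FULL_SCALE_A = 10.0                    # assumed +/-10 A bipolar range
    DAC_BITS = 20
    DAC_MAX = (1 << DAC_BITS) - 1          # 1048575 counts

    def amps_to_counts(current):
        fraction = (current + FULL_SCALE_A) / (2 * FULL_SCALE_A)   # map to 0..1
        return round(fraction * DAC_MAX)

    def counts_to_amps(counts):
        return counts / DAC_MAX * 2 * FULL_SCALE_A - FULL_SCALE_A

    print(amps_to_counts(1.234))                    # setting sent to the 20-bit DAC
    print(counts_to_amps(amps_to_counts(1.234)))    # ~1.234 A, within 1 LSB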
 
WEPMN034 YAMS: a Stepper Motor Controller for the FERMI@Elettra Free Electron Laser controls, power-supply, software, TANGO 958
 
  • A. Abrami, M. De Marco, M. Lonza, D. Vittor
    ELETTRA, Basovizza, Italy
 
  Funding: The work was supported in part by the Italian Ministry of University and Research under grants FIRB-RBAP045JF2 and FIRB-RBAP06AWK3
New projects like FERMI@Elettra demand standardization of the systems in order to cut development and maintenance costs. The various motion control applications foreseen in this project required a specific controller able to flexibly adapt to any need while maintaining a common interface to the control system to minimize software development efforts. These reasons led us to design and build "Yet Another Motor Subrack", YAMS, a 3U chassis containing a commercial stepper motor controller, up to eight motor drivers and all the necessary auxiliary systems. The motors can be controlled locally by means of an operator panel or remotely through an Ethernet interface and a dedicated Tango device server. The paper describes the details of the project and the deployment issues.
 
poster icon Poster WEPMN034 [4.274 MB]  
 
WEPMN037 DEBROS: Design and Use of a Linux-like RTOS on an Inexpensive 8-bit Single Board Computer Linux, network, hardware, software 965
 
  • M.A. Davis
    NSCL, East Lansing, Michigan, USA
 
  As the power, complexity, and capabilities of embedded processors continue to grow, it is easy to forget just how much can be done with inexpensive single board computers based on 8-bit processors. When the proprietary, non-standard tools from the vendor for one such embedded computer became a major roadblock, I embarked on a project to expand my own knowledge and provide a more flexible, standards based alternative. Inspired by operating systems such as Unix, Linux, and Minix, I wrote DEBROS (the Davis Embedded Baby Real-time Operating System) [1], which is a fully pre-emptive, priority-based OS with soft real-time capabilities that provides a subset of standard Linux/Unix compatible system calls such as stdio, BSD sockets, pipes, semaphores, etc. The end result was a much more flexible, standards-based development environment which allowed me to simplify my programming model, expand diagnostic capabilities, and reduce the time spent monitoring and applying updates to the hundreds of devices in the lab currently using this hardware.[2]
[1] http://groups.nscl.msu.edu/controls/files/DEBROS_User_Developer_Manual.doc
[2] http://groups.nscl.msu.edu/controls/
 
poster icon Poster WEPMN037 [0.112 MB]  
 
WEPMS016 Network on Chip Master Control Board for Neutron's Acquisition FPGA, neutron, controls, network 1006
 
  • E. Ruiz-Martinez, T. Mary, P. Mutti, J. Ratel, F. Rey
    ILL, Grenoble, France
 
  In the neutron scattering instruments at the Institut Laue-Langevin, one of the main challenges for the acquisition control is to generate the suitable signalling for the different modes of neutron acquisition. Inappropriate management could cause loss of information during the course of the experiments and in the subsequent data analysis. It is necessary to define a central element that provides synchronization to the rest of the units. The backbone of the proposed acquisition control system is the so-called master acquisition board. This main board is designed to gather together the modes of neutron acquisition used in the facility and make them common to all the instruments in a simple, modular and open way, leaving open the possibility of adding new features. The complete system also includes a display board and n histogramming modules connected to the neutron detectors. The master board consists of a VME64x configurable high density I/O connection carrier board based on the latest Xilinx Virtex-6T FPGA. The internal architecture of the FPGA is designed following a Network on Chip (NoC) approach: it implements a switch able to interconnect efficiently the several resources available on the board (PCI Express, VME64x master/slave, DDR3 controllers and the user area). The core of the global signal synchronization is fully implemented in the FPGA; the board has a completely user-configurable I/O front-end to collect external signals, process them and distribute the synchronization control via the VME bus to the other modules involved in the acquisition.  
poster icon Poster WEPMS016 [7.974 MB]  
 
WEPMS017 The Global Trigger Processor: A VXS Switch Module for Triggering Large Scale Data Acquisition Systems FPGA, Ethernet, hardware, embedded 1010
 
  • S.R. Kaneta, C. Cuevas, H. Dong, W. Gu, E. Jastrzembski, N. Nganga, B.J. Raydo, J. Wilson
    JLAB, Newport News, Virginia, USA
 
  Funding: Jefferson Science Associates, LLC under U.S. DOE Contract No. DE-AC05-06OR23177.
The 12 GeV upgrade for Jefferson Lab's Continuous Electron Beam Accelerator Facility requires the development of a new data acquisition system to accommodate the proposed 200 kHz Level 1 trigger rates expected for fixed target experiments at 12 GeV. As part of a suite of trigger electronics comprised of VXS switch and payload modules, the Global Trigger Processor (GTP) will handle up to 32,768 channels of preprocessed trigger information data from the multiple detector systems that surround the beam target at a system clock rate of 250 MHz. The GTP is configured with user programmable physics trigger equations, and when the trigger conditions are satisfied, the GTP activates the storage of data for subsequent analysis. The GTP features an Altera Stratix IV GX FPGA allowing interface to 16 Sub-System Processor modules via 32 5-Gbps links, DDR2 and flash memory devices, two gigabit Ethernet interfaces using Nios II embedded processors, fiber optic transceivers, and trigger output signals. The GTP's high-bandwidth interconnect with the payload modules in the VXS crate, the Ethernet interface for parameter control, status monitoring, and remote update, and the inherent nature of its FPGA give it the flexibility to be used for a large variety of tasks and to adapt to future needs. This paper details the responsibilities of the GTP, the hardware's role in meeting those requirements, and the elements of the VXS architecture that facilitated the design of the trigger system. Also presented will be the current status of development, including significant milestones and challenges.
 
poster icon Poster WEPMS017 [0.851 MB]  
 
WEPMS027 The RF Control System of the SSRF 150MeV Linac controls, linac, EPICS, Ethernet 1039
 
  • S.M. Hu, J.G. Ding, G.-Y. Jiang, L.R. Shen, M.H. Zhao, S.P. Zhong
    SINAP, Shanghai, People's Republic of China
 
  The Shanghai Synchrotron Radiation Facility (SSRF) uses a 150 MeV linear electron accelerator as its injector; the linac RF system consists of many discrete devices. The control system is mainly composed of a VME controller and a home-made signal conditioner with DC power supplies. The uniform signal conditioner serves as a hardware interface between the controller and the RF components. The DC power supplies are used for driving the mechanical phase shifters. The control software is based on the EPICS toolkit. Device drivers and the related runtime database for the VME modules were developed. The operator interface was implemented with EDM.  
Poster WEPMS027 [0.702 MB]  
 
WEPMU003 The Diamond Machine Protection System controls, interlocks, vacuum, photon 1051
 
  • M.T. Heron, Y.S. Chernousko, P. Hamadyk, S.C. Lay, N. Rotolo
    Diamond, Oxfordshire, United Kingdom
 
  Funding: Diamond Light Source LTD
The Diamond Light Source Machine Protection System manages the hazards from high-power photon beams, as well as other hazards, to ensure equipment protection on the booster synchrotron and storage ring. The system has a shutdown requirement, on a beam mis-steer, of under 1 ms and has to manage in excess of a thousand interlocks. This is realised using a combination of bespoke hardware and programmable logic controllers. The structure of the Machine Protection System will be described, together with operational experience and developments to provide post-mortem functionality.
 
Poster WEPMU003 [0.694 MB]  
 
WEPMU015 The Machine Protection System for the R&D Energy Recovery LINAC FPGA, LabView, hardware, software 1087
 
  • Z. Altinbas, J.P. Jamilkowski, D. Kayran, R.C. Lee, B. Oerter
    BNL, Upton, Long Island, New York, USA
 
  Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy.
The Machine Protection System (MPS) is a device-safety system designed to prevent damage to hardware by generating interlocks based upon the state of input signals from selected sub-systems. It protects all the key machinery of the R&D Energy Recovery LINAC (ERL) against the high beam current. The MPS is capable of responding to a fault with an interlock signal within several microseconds. The ERL MPS is based on a National Instruments CompactRIO platform and is programmed using National Instruments' graphical development environment, LabVIEW. The system also transfers data (interlock status, time of fault, etc.) to the main server, where it is integrated into the pre-existing software architecture accessible by the operators. This paper provides an overview of the hardware used, its configuration and operation, as well as the software written both on the device and on the server side.
 
Poster WEPMU015 [17.019 MB]  
 
WEPMU017 Safety Control System and its Interface to EPICS for the Off-Line Front-End of the SPES Project controls, EPICS, target, status 1093
 
  • J.A. Vásquez, A. Andrighetto, G. Bassato, L. Costa, M.G. Giacchini
    INFN/LNL, Legnaro (PD), Italy
  • M. Bertocco
    UNIPD, Padova (PD), Italy
 
  The SPES off-line front-end apparatus involves a number of subsystems and procedures that are potentially dangerous both for the human operators and for the equipment. The high-voltage power supply, the ion source complex power supplies, the target chamber handling systems and the laser source are some examples of these subsystems. For that reason, a safety control system has been developed. It is based on Schneider Electric Preventa-family safety modules that control the power supply of critical subsystems, in combination with safety detectors that monitor critical variables. A Programmable Logic Controller (PLC), model BMXP342020 from the Schneider Electric Modicon M340 family, is used for monitoring the status of the system as well as for controlling the sequence of some operations in an automatic way. A touch screen, model XBTGT5330 from the Schneider Electric Magelis family, is used as the Human Machine Interface (HMI) and communicates with the PLC using MODBUS-TCP. Additionally, an interface to the EPICS control network was developed using a home-made MODBUS-TCP EPICS driver, in order to integrate the safety system into the control system of the front-end and to present its status to the users on the main control panel.  
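
As an illustration of the MODBUS-TCP link, the sketch below polls a few PLC holding registers with the open-source pymodbus client (assuming its 2.x-style API); the host address and register map are hypothetical, and this is not the authors' home-made EPICS driver.

```python
# Minimal MODBUS-TCP polling sketch (hypothetical PLC address and register map).
from pymodbus.client.sync import ModbusTcpClient

PLC_HOST = "192.168.0.10"   # hypothetical PLC address
STATUS_REGISTER = 0          # hypothetical first holding register holding status bits

client = ModbusTcpClient(PLC_HOST, port=502)
if client.connect():
    result = client.read_holding_registers(STATUS_REGISTER, count=4, unit=1)
    if not result.isError():
        print("PLC status words:", result.registers)
    client.close()
```
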
Poster WEPMU017 [2.847 MB]  
 
WEPMU030 CERN Safety System Monitoring - SSM monitoring, network, database, controls 1134
 
  • T. Hakulinen, P. Ninin, F. Valentini
    CERN, Geneva, Switzerland
  • J. Gonzalez, C. Salatko-Petryszcze
    ASsystem, St Genis Pouilly, France
 
  CERN SSM (Safety System Monitoring) is a system for monitoring the state of health of the various access and safety systems of the CERN site and accelerator infrastructure. The emphasis of SSM is on the needs of maintenance and system operation, with the aim of providing an independent and reliable verification path for the basic operational parameters of each system. Included are all network-connected devices, such as PLCs, servers, panel displays, operator posts, etc. The basic monitoring engine of SSM is the freely available system monitoring framework Zabbix, on top of which a simplified traffic-light-type web interface has been built. The SSM web interface is designed to be ultra-light in order to facilitate access from handheld devices over slow connections. The underlying Zabbix system offers the history and notification mechanisms typical of advanced monitoring systems.  
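
The kind of query such a web interface issues against Zabbix can be sketched with the third-party pyzabbix client; the server URL and credentials below are hypothetical, and the exact field names depend on the Zabbix version in use.

```python
# Query monitored hosts and their availability from a Zabbix server (hypothetical URL/credentials).
from pyzabbix import ZabbixAPI

zapi = ZabbixAPI("https://zabbix.example.org")   # hypothetical server
zapi.login("monitoring", "secret")                # hypothetical credentials

for host in zapi.host.get(output=["host", "available"]):
    state = {"0": "unknown", "1": "up", "2": "down"}.get(str(host["available"]), "?")
    print(f"{host['host']}: {state}")
```
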
Poster WEPMU030 [1.231 MB]  
 
WEPMU036 Efficient Network Monitoring for Large Data Acquisition Systems network, monitoring, database, software 1153
 
  • D.O. Savu, B. Martin
    CERN, Geneva, Switzerland
  • A. Al-Shabibi
    Heidelberg University, Heidelberg, Germany
  • S.M. Batraneanu, S.N. Stancu
    UCI, Irvine, California, USA
  • R. Sjoen
    University of Oslo, Oslo, Norway
 
  Though constantly evolving and improving, the available network monitoring solutions have limitations when applied to the infrastructure of a high-speed real-time data acquisition (DAQ) system. DAQ networks are particular computer networks where experts have to pay attention both to individual subsections and to system-wide traffic flows while monitoring the network. The ATLAS network at the Large Hadron Collider (LHC) has more than 200 switches interconnecting 3500 hosts and totaling 8500 high-speed links. The use of heterogeneous tools for monitoring the various infrastructure parameters needed to assure optimal DAQ system performance proved to be a tedious and time-consuming task for the experts. To alleviate this problem we used our networking and DAQ expertise to build a flexible and scalable monitoring system providing an intuitive user interface with the same look and feel irrespective of the data provider that is used. Our system uses custom-developed components for critical performance monitoring and seamlessly integrates complementary data from auxiliary tools, such as NAGIOS, information services or custom databases. A number of techniques (e.g. normalization, aggregation and data caching) were used in order to improve the user interface response time. The end result is a unified monitoring interface, giving fast and uniform access to system statistics, which significantly reduced the time spent by experts on ad-hoc and post-mortem analysis.  
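
A minimal sketch of the aggregation-plus-caching idea mentioned above is shown below: raw per-link utilisation samples are averaged into fixed time buckets so that repeated dashboard queries do not have to touch the raw data store. This is illustrative only and is not the ATLAS implementation.

```python
# Conceptual time-bucketed aggregation cache for link utilisation samples (illustrative only).
from collections import defaultdict

BUCKET = 60  # seconds per aggregation bucket

class UtilisationCache:
    def __init__(self):
        self._sums = defaultdict(lambda: [0.0, 0])   # (link, bucket) -> [sum, count]

    def add_sample(self, link, timestamp, utilisation):
        key = (link, int(timestamp) // BUCKET)
        entry = self._sums[key]
        entry[0] += utilisation
        entry[1] += 1

    def average(self, link, timestamp):
        entry = self._sums.get((link, int(timestamp) // BUCKET))
        return entry[0] / entry[1] if entry else None

cache = UtilisationCache()
cache.add_sample("sw01:eth1", 1_000_000, 0.42)
cache.add_sample("sw01:eth1", 1_000_010, 0.58)
print(cache.average("sw01:eth1", 1_000_005))   # prints 0.5
```
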
Poster WEPMU036 [5.945 MB]  
 
THAAUST01 Tailoring the Hardware to Your Control System controls, EPICS, hardware, FPGA 1171
 
  • E. Björklund, S.A. Baily
    LANL, Los Alamos, New Mexico, USA
 
  Funding: Work supported by the US Department of Energy under contract DE-AC52-06NA25396
In the very early days of computerized accelerator control systems, the entire control system, from the operator interface to the front-end data acquisition hardware, was custom designed and built for that one machine. This was expensive, but the resulting product was a control system seamlessly integrated (mostly) with the machine it was to control. Later, the advent of standardized bus systems such as CAMAC, VME, and CANbus made it practical and attractive to purchase commercially available data acquisition and control hardware. This greatly simplified the design but required that the control system be tailored to accommodate the features and eccentricities of the available hardware. Today we have standardized control systems (Tango, EPICS, DOOCS) using commercial hardware on standardized busses. With the advent of FPGA technology and programmable automation controllers (PACs and PLCs), it now becomes possible to tailor commercial hardware to the needs of a standardized control system and the target machine. In this paper, we will discuss our experiences with tailoring a commercial industrial I/O system to meet the needs of the EPICS control system and the LANSCE accelerator. We took the National Instruments CompactRIO platform, embedded an EPICS IOC in its processor, and used its FPGA backplane to create a "standardized" industrial I/O system (analog in/out, binary in/out, counters, and stepper motors) that meets the specific needs of the LANSCE accelerator.
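
From the client side, the CompactRIO-hosted IOC looks like any other EPICS server; a minimal pyepics sketch with hypothetical PV names for one analog input and one binary output might read as follows.

```python
# Talk to the EPICS IOC embedded in the CompactRIO controller via Channel Access.
# The PV names are hypothetical examples of the analog/binary I/O channels described above.
from epics import caget, caput

temperature = caget("CRIO01:AI03")   # read an analog input channel
print("AI03 reads:", temperature)

caput("CRIO01:BO00", 1)               # set a binary output channel
```
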
 
Slides THAAUST01 [0.812 MB]  
 
THAAUST02 Suitability Assessment of OPC UA as the Backbone of Ground-based Observatory Control Systems controls, software, framework, CORBA 1174
 
  • W. Pessemier, G. Deconinck, G. Raskin, H. Van Winckel
    KU Leuven, Leuven, Belgium
  • P. Saey
    Katholieke Hogeschool Sint-Lieven, Gent, Belgium
 
  A common requirement of modern observatory control systems is to allow interaction between various heterogeneous subsystems in a transparent way. However, the integration of COTS industrial products - such as PLCs and SCADA software - has long been hampered by the lack of an adequate, standardized interfacing method. With the advent of the Unified Architecture version of OPC (Object Linking and Embedding for Process Control), the limitations of the original industry-accepted interface are now lifted, and in addition much more functionality has been defined. In this paper the most important features of OPC UA are matched against the requirements of ground-based observatory control systems in general, and of the 1.2 m Mercator Telescope in particular. We investigate the opportunities of the "information modelling" idea behind OPC UA, which could allow an extensive standardization in the field of astronomical instrumentation, similar to the standardization efforts emerging in several industrial domains. Because OPC UA is designed for both vertical and horizontal integration of heterogeneous subsystems and subnetworks, we explore its capabilities to serve as the backbone of a dependable and scalable observatory control system, treating "industrial components" such as PLCs no differently than custom software components. In order to quantitatively assess the performance and scalability of OPC UA, stress tests are described and their results are presented. Finally, we consider practical issues such as the availability of COTS OPC UA stacks, software development kits, servers and clients.  
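
As an example of how thin the client side of such an architecture can be, the sketch below reads a single node with the open-source python-opcua package (one of the available OPC UA stacks); the endpoint URL and node identifier are hypothetical.

```python
# Read a PLC-exposed value over OPC UA (hypothetical endpoint and node id),
# using the open-source python-opcua client as one example of an OPC UA stack.
from opcua import Client

client = Client("opc.tcp://plc.example.org:4840")   # hypothetical server endpoint
client.connect()
try:
    node = client.get_node("ns=2;s=Telescope.Dome.Temperature")  # hypothetical node
    print("Current value:", node.get_value())
finally:
    client.disconnect()
```
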
Slides THAAUST02 [2.879 MB]  
 
THBHAUST02 The Wonderland of Operating the ALICE Experiment detector, operation, experiment, controls 1182
 
  • A. Augustinus, P.Ch. Chochula, G. De Cataldo, L.S. Jirdén, A.N. Kurepin, M. Lechman, O. Pinazza, P. Rosinský
    CERN, Geneva, Switzerland
  • A. Moreno
    Universidad Politécnica de Madrid, E.T.S.I Industriales, Madrid, Spain
 
  ALICE is one of the experiments at the Large Hadron Collider (LHC) at CERN (Geneva, Switzerland). It is composed of 18 sub-detectors, each with numerous subsystems that need to be controlled and operated in a safe and efficient way. The Detector Control System (DCS) is the key to this and has been used with success by detector experts during the commissioning of the individual detectors. With the transition from commissioning to operation, more and more tasks were transferred from detector experts to central operators. By the end of the 2010 data-taking campaign the ALICE experiment was run by a small crew of central operators, with only a single controls operator. The transition from expert to non-expert operation constituted a real challenge in terms of tools, documentation and training. In addition, the relatively high turnover and diversity of the operator crew that is specific to the HEP experiment environment (as opposed to the more stable operation crews of accelerators) made this challenge even bigger. This paper describes the original architectural choices that were made and the key components that led to a homogeneous control system allowing efficient centralized operation. Challenges and specific constraints that apply to the operation of a large, complex experiment are described. Emphasis will be put on the tools and procedures that were implemented to allow the transition from local operation by detector experts during commissioning and early operation to efficient centralized operation by a small operator crew not necessarily consisting of experts.  
Slides THBHAUST02 [1.933 MB]  
 
THBHAUIO06 Cognitive Ergonomics of Operational Tools controls, operation, power-supply, software 1196
 
  • A. Lüdeke
    Paul Scherrer Institut, 5232 Villigen PSI, Switzerland
 
  Control systems have become continuously more powerful over the past decades. The capability for high data throughput and sophisticated graphical interaction has opened up a variety of new possibilities. But has it helped to provide intuitive, easy-to-use applications that simplify the operation of modern large-scale accelerator facilities? We will discuss what makes an application useful to operation and what is necessary to make a tool easy to use. We will show that even the implementation of a small number of simple design rules for applications can help to ease the operation of a facility.  
Slides THBHAUIO06 [23.914 MB]  
 
THDAUST02 An Erlang-Based Front End Framework for Accelerator Controls framework, controls, data-acquisition, hardware 1264
 
  • D.J. Nicklaus, C.I. Briegel, J.D. Firebaugh, CA. King, R. Neswold, R. Rechenmacher, J. You
    Fermilab, Batavia, USA
 
  We have developed a new front-end framework for the ACNET control system in Erlang. Erlang is a functional programming language developed for real-time telecommunications applications. The primary task of the front-end software is to connect the control system with drivers collecting data from individual field bus devices. Erlang's concurrency and message-passing support have proven well suited for managing large numbers of independent ACNET client requests for front-end data. Other Erlang features which make it particularly well suited for a front-end framework include fault tolerance with process monitoring and restarting, real-time response, and the ability to change code in running systems. Erlang's interactive shell and dynamic typing make writing and running unit tests an easy part of the development process. Erlang includes mechanisms for distributing applications, which we will use to deploy our framework to multiple front-ends along with a configured set of device drivers. We have developed Erlang code to use Fermilab's TCLK event distribution clock, and Erlang's interface to C/C++ allows hardware-specific driver access.  
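
A loose analogue of the request-handling pattern described above can be sketched in Python with asyncio, serving each client request in its own lightweight task; this is only a conceptual illustration, since the actual framework is written in Erlang and relies on its process model, and all names below are hypothetical.

```python
# Loose Python/asyncio analogue of "one lightweight handler per client request".
# Illustrative only; the framework described above is implemented in Erlang, not Python.
import asyncio

async def read_device(name: str) -> float:
    await asyncio.sleep(0.01)          # stand-in for a field-bus driver call
    return 42.0

async def handle_request(client_id: int, device: str) -> None:
    value = await read_device(device)
    print(f"client {client_id}: {device} = {value}")

async def main() -> None:
    # Serve many independent client requests concurrently, each in its own task.
    await asyncio.gather(*(handle_request(i, "bpm%02d" % i) for i in range(5)))

asyncio.run(main())
```
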
Slides THDAUST02 [1.439 MB]  
 
FRAAUST01 Development of the Machine Protection System for LCLS-I controls, Ethernet, FPGA, network 1281
 
  • J.E. Dusatko, M. Boyes, P. Krejcik, S.R. Norum, J.J. Olsen
    SLAC, Menlo Park, California, USA
 
  Funding: U.S. Department of Energy under Contract Nos. DE-AC02-06CH11357 and DE-AC02-76SF00515
Machine Protection System (MPS) requirements for the Linac Coherent Light Source I demand that fault detection and mitigation occur within one machine pulse (1/120th of a second at full beam rate). The MPS must handle inputs from a variety of sources, including loss monitors as well as standard state-type inputs. These sensors exist at various places across the full 2.2 km length of the machine. A new MPS has been developed based on a distributed star network in which custom-designed local hardware nodes handle sensor inputs and mitigation outputs for localized regions of the LCLS accelerator complex. These Link-Nodes report status information to, and receive action commands from, a centralized processor running the MPS algorithm over a private network. The individual Link-Node is a 3U chassis with configurable hardware components that can be set up with digital and analog inputs and outputs, depending upon the sensor and actuator requirements. Features include a custom MPS digital input/output subsystem, a private Ethernet interface, an embedded processor, a custom MPS engine implemented in an FPGA, and an Industry Pack (IP) bus interface allowing COTS and custom analog/digital I/O modules to be utilized for MPS functions. These features, while handling standard MPS state-type inputs and outputs, also allow other systems such as beam loss monitors to be completely integrated within the Link-Nodes. To date, four different types of Link-Nodes are in use in LCLS-I. This paper describes the design, construction and implementation of the LCLS MPS with a focus on the Link-Node.
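
Conceptually, the central MPS algorithm reduces to collecting the Link-Node status words and requesting mitigation before the next pulse whenever any fault bit is set; the Python sketch below is illustrative only, with hypothetical node names and status words, and is not the SLAC implementation.

```python
# Simplified, illustrative view of a centralized MPS decision step (not the LCLS implementation).

LINK_NODE_STATUS = {           # hypothetical status words reported by three Link-Nodes
    "node_injector": 0b0000,
    "node_undulator": 0b0010,   # one fault bit set
    "node_dump": 0b0000,
}

def faulted_nodes(status_words: dict) -> list:
    """Return the Link-Nodes reporting any fault bit."""
    return [name for name, word in status_words.items() if word != 0]

faults = faulted_nodes(LINK_NODE_STATUS)
if faults:
    print("Fault detected on", faults, "-> inhibit beam before the next pulse")
```
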
 
Slides FRAAUST01 [3.573 MB]  
 
FRBHMUST01 The Design of the Alba Control System: A Cost-Effective Distributed Hardware and Software Architecture. controls, TANGO, database, software 1318
 
  • D.F.C. Fernández-Carreiras, D.B. Beltrán, T.M. Coutinho, G. Cuní, J. Klora, O. Matilla, R. Montaño, C. Pascual-Izarra, S. Pusó, R. Ranz, A. Rubio, S. Rubio-Manrique
    CELLS-ALBA Synchrotron, Cerdanyola del Vallès, Spain
 
  The control system of Alba is highly distributed from both the hardware and the software points of view. The hardware infrastructure for the control system includes of the order of 350 racks, 20000 cables and 6200 pieces of equipment. More than 150 diskless industrial computers distributed in the service area, together with 30 multicore servers in the data center, manage several thousand process variables. The software is, of course, as distributed as the hardware. It is also a success story of the Tango Collaboration, where a complete software infrastructure is available "off the shelf". In addition, Tango has been productively complemented with the powerful Sardana framework, a great development effort from which several institutes nowadays benefit. The whole installation has been coordinated from the beginning with a complete cabling and equipment database, in which all the equipment, cables and connectors are described and inventoried. This so-called "cabling database" is the core of the installation: the equipment and cables are defined there, and the basic hardware configuration, such as MAC and IP addresses, DNS names, etc., is also gathered in this database, allowing the network configuration files and the declarations of variables in the PLCs to be generated automatically. This paper explains the design and the architecture of the control system, describes the tools and justifies the choices made. Furthermore, it presents and analyzes the figures regarding cost and performance.  
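
The automatic generation of configuration files from the cabling database can be pictured with a small sketch; the schema, table and column names below are hypothetical, with SQLite standing in for the actual database.

```python
# Sketch of generating a hosts file from a cabling/equipment database, in the spirit of the
# "cabling database" described above (database, table and column names are hypothetical).
import sqlite3

db = sqlite3.connect("cabling.db")   # hypothetical local copy of the cabling database
rows = db.execute(
    "SELECT dns_name, ip_address FROM equipment WHERE ip_address IS NOT NULL"
)

with open("hosts.generated", "w") as hosts:
    for dns_name, ip_address in rows:
        hosts.write(f"{ip_address}\t{dns_name}\n")
```
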
Slides FRBHMUST01 [4.616 MB]  
 
FRBHMULT04 Towards a State Based Control Architecture for Large Telescopes: Laying a Foundation at the VLT controls, software, distributed, operation 1330
 
  • R. Karban, N. Kornweibel
    ESO, Garching bei Muenchen, Germany
  • D.L. Dvorak, M.D. Ingham, D.A. Wagner
    JPL, Pasadena, California, USA
 
  Large telescopes are characterized by a high level of distribution of control-related tasks and will feature diverse data flow patterns and large ranges of sampling frequencies; there will often be no single, fixed server-client relationship between the control tasks. The architecture is also challenged by the task of integrating heterogeneous subsystems delivered by multiple different contractors. Due to the high number of distributed components, the control system needs to detect errors and faults effectively, impede their propagation, and accurately mitigate them in the shortest time possible, enabling the service to be restored. The presented Data-Driven Architecture is based on a decentralized approach with an end-to-end integration of disparate, independently developed software components, using a high-performance, standards-based communication middleware infrastructure based on the Data Distribution Service. A set of rules and principles, based on JPL's State Analysis method and architecture, is established to avoid undisciplined component-to-component interactions, with the Control System and the System Under Control clearly separated. State Analysis provides a model-based process for capturing system and software requirements and design, helping to reduce the gap between the requirements on software specified by systems engineers and the implementation by software engineers. The method and architecture have been field-tested at the Very Large Telescope, where they have been integrated into an operational system with minimal downtime.  
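
The State Analysis separation between the control system and the system under control can be pictured with a minimal sketch: a state variable holds the current estimate, an estimator updates it from measurements, and a controller compares it with a goal to decide on a command. The names and logic below are purely illustrative and are not the ESO/JPL software.

```python
# Minimal conceptual illustration of a state variable with an estimator and a controller.
class StateVariable:
    def __init__(self, name: str):
        self.name = name
        self.estimate = None

def estimator(state: StateVariable, measurement: float) -> None:
    # Trivial estimator: trust the latest measurement.
    state.estimate = measurement

def controller(state: StateVariable, goal: float) -> str:
    # Issue a command only when the estimated state deviates from the goal.
    if state.estimate is None or abs(state.estimate - goal) > 0.01:
        return f"move {state.name} towards {goal}"
    return "no action"

focus = StateVariable("focus_position")
estimator(focus, measurement=3.20)
print(controller(focus, goal=3.25))
```
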
Slides FRBHMULT04 [3.504 MB]  
 
FRBHMULT06 EPICS V4 Expands Support to Physics Application, Data Acquisition, and Data Analysis controls, EPICS, data-acquisition, database 1338
 
  • L.R. Dalesio, G. Carcassi, M.A. Davidsaver, M.R. Kraimer, R. Lange, N. Malitsky, G. Shen
    BNL, Upton, Long Island, New York, USA
  • T. Korhonen
    Paul Scherrer Institut, 5232 Villigen PSI, Switzerland
  • J. Rowland
    Diamond, Oxfordshire, United Kingdom
  • M. Sekoranja
    Cosylab, Ljubljana, Slovenia
  • G.R. White
    SLAC, Menlo Park, California, USA
 
  Funding: Work supported under auspices of the U.S. Department of Energy under Contract No. DE-AC02-98CH10886 with Brookhaven Science Associates, LLC, and in part by the DOE Contract DE-AC02-76SF00515
EPICS version 4 extends the functionality of version 3 by providing the ability to define, transport, and introspect composite data types. Version 3 provided a set of process variables and a data protocol that adequately defined scalar data along with an atomic set of attributes. While remaining backward compatible, Version 4 is able to easily expand this set with a data protocol capable of exchanging complex data types and parameterized data requests. Additionally, a group of engineers defined reference types for some applications in this environment. The goal of this work is to define a narrow interface with the minimal set of data types needed to support a distributed architecture for physics applications, data acquisition, and data analysis.
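
The kind of self-describing composite value that Version 4 transports can be pictured with a plain-Python sketch; this is conceptual only and does not use the actual pvData API, but it shows how field names, types and data travel together and can be introspected.

```python
# Conceptual sketch of a self-describing composite value (NTTable-like): field names,
# per-column types and the data are carried together and can be introspected by a client.
table = {
    "labels": ["time", "current_mA"],
    "types":  {"time": "double[]", "current_mA": "double[]"},
    "value":  {"time": [0.0, 1.0, 2.0], "current_mA": [250.1, 249.8, 250.0]},
}

def introspect(structure: dict) -> None:
    for label in structure["labels"]:
        print(f"{label}: {structure['types'][label]}, {len(structure['value'][label])} rows")

introspect(table)
```
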
 
Slides FRBHMULT06 [0.188 MB]  
 
FRCAUST02 Status of the CSNS Controls System controls, linac, power-supply, Ethernet 1341
 
  • C.H. Wang
    IHEP Beijing, Beijing, People's Republic of China
 
  The China Spallation Neutron Source (CSNS) is planned to start construction in 2011 in China. The CSNS control system will use EPICS as its development platform. The scope of the control system covers thousands of devices located in the linac, the RCS and the two transfer lines. The interface from the control system to the equipment will be through VME PowerPC processors, embedded PLCs and embedded IPCs. The high-level applications will be based on the XAL core and the Eclipse platform. An Oracle database is used to save historical data. This paper introduces the preliminary design of the control system and its progress. Some key technologies, prototypes, the schedule and the personnel plan are also discussed.  
Slides FRCAUST02 [3.676 MB]