Keyword: controls
Paper Title Other Keywords Page
MOBAUST01 News from ITER Controls - A Status Report EPICS, network, real-time, software 1
 
  • A. Wallander, L. Abadie, F. Di Maio, B. Evrard, J-M. Fourneron, H.K. Gulati, C. Hansalia, J.Y. Journeaux, C.S. Kim, W.-D. Klotz, K. Mahajan, P. Makijarvi, Y. Matsumoto, S. Pande, S. Simrock, D. Stepanov, N. Utzel, A. Vergara-Fernandez, A. Winter, I. Yonekawa
    ITER Organization, St. Paul lez Durance, France
 
  Construction of ITER has started at the Cadarache site in southern France. The first buildings are taking shape and more than 60 % of the in-kind procurement has been committed by the seven ITER member states (China, Europe, India, Japan, Korea, Russia and the United States). The design and manufacturing of the main components of the machine are now underway all over the world. Each of these components comes with a local control system, which must be integrated in the central control system. The control group at ITER has developed two products to facilitate this: the plant control design handbook (PCDH) and the control, data access and communication (CODAC) core system. PCDH is a document which prescribes the technologies and methods to be used in developing the local control systems and sets the rules applicable to the in-kind procurements. The CODAC core system is a software package, distributed to all in-kind procurement developers, which implements the PCDH and facilitates the compliance of the local control systems. In parallel, the ITER control group is proceeding with the design of the central control system to allow fully integrated and automated operation of ITER. In this paper we report on the progress of the design, present the technology choices and discuss the justifications for those choices. We also report on the results of some pilot projects aimed at validating the design and technologies.  
slides icon Slides MOBAUST01 [4.238 MB]  
 
MOBAUST02 The ATLAS Detector Control System detector, interface, experiment, monitoring 5
 
  • S. Schlenker, S. Arfaoui, S. Franz, O. Gutzwiller, C.A. Tsarouchas
    CERN, Geneva, Switzerland
  • G. Aielli, F. Marchese
    Università di Roma II Tor Vergata, Roma, Italy
  • G. Arabidze
    MSU, East Lansing, Michigan, USA
  • E. Banaś, Z. Hajduk, J. Olszowska, E. Stanecka
    IFJ-PAN, Kraków, Poland
  • T. Barillari, J. Habring, J. Huber
    MPI, Muenchen, Germany
  • M. Bindi, A. Polini
    INFN-Bologna, Bologna, Italy
  • H. Boterenbrood, R.G.K. Hart
    NIKHEF, Amsterdam, The Netherlands
  • H. Braun, D. Hirschbuehl, S. Kersten, K. Lantzsch
    Bergische Universität Wuppertal, Wuppertal, Germany
  • R. Brenner
    Uppsala University, Uppsala, Sweden
  • D. Caforio, C. Sbarra
    Bologna University, Bologna, Italy
  • S. Chekulaev
    TRIUMF, Canada's National Laboratory for Particle and Nuclear Physics, Vancouver, Canada
  • S. D'Auria
    University of Glasgow, Glasgow, United Kingdom
  • M. Deliyergiyev, I. Mandić
    JSI, Ljubljana, Slovenia
  • E. Ertel
    Johannes Gutenberg University Mainz, Institut für Physik, Mainz, Germany
  • V. Filimonov, V. Khomutnikov, S. Kovalenko
    PNPI, Gatchina, Leningrad District, Russia
  • V. Grassi
    SBU, Stony Brook, New York, USA
  • J. Hartert, S. Zimmermann
    Albert-Ludwig Universität Freiburg, Freiburg, Germany
  • D. Hoffmann
    CPPM, Marseille, France
  • G. Iakovidis, K. Karakostas, S. Leontsinis, E. Mountricha
    National Technical University of Athens, Athens, Greece
  • P. Lafarguette
    Université Blaise Pascal, Clermont-Ferrand, France
  • F. Marques Vinagre, G. Ribeiro, H.F. Santos
    LIP, Lisboa, Portugal
  • T. Martin, P.D. Thompson
    Birmingham University, Birmingham, United Kingdom
  • B. Mindur
    AGH University of Science and Technology, Krakow, Poland
  • J. Mitrevski
    SCIPP, Santa Cruz, California, USA
  • K. Nagai
    University of Tsukuba, Graduate School of Pure and Applied Sciences, Tsukuba, Ibaraki, Japan
  • S. Nemecek
    Czech Republic Academy of Sciences, Institute of Physics, Prague, Czech Republic
  • D. Oliveira Damazio, A. Poblaguev
    BNL, Upton, Long Island, New York, USA
  • P.W. Phillips
    STFC/RAL, Chilton, Didcot, Oxon, United Kingdom
  • A. Robichaud-Veronneau
    DPNC, Genève, Switzerland
  • A. Talyshev
    BINP, Novosibirsk, Russia
  • G.F. Tartarelli
    Università degli Studi di Milano & INFN, Milano, Italy
  • B.M. Wynne
    Edinburgh University, Edinburgh, United Kingdom
 
  The ATLAS experiment is one of the multi-purpose experiments at the Large Hadron Collider (LHC), constructed to study elementary particle interactions in collisions of high-energy proton beams. Twelve different sub-detectors as well as the common experimental infrastructure are supervised by the Detector Control System (DCS). The DCS enables equipment supervision of all ATLAS sub-detectors by using a system of 140 server machines running the industrial SCADA product PVSS. This highly distributed system reads, processes and archives on the order of 10⁶ operational parameters. Higher-level control system layers based on the CERN JCOP framework allow for automatic control procedures and efficient error recognition and handling, manage the communication with external control systems such as the LHC controls, and provide a synchronization mechanism with the ATLAS physics data acquisition system. A web-based monitoring system allows the DCS operator interface views to be accessed and the conditions data archive to be browsed worldwide with high availability. This contribution firstly describes the status of the ATLAS DCS and the experience gained during the LHC commissioning and the first physics data taking operation period. Secondly, the future evolution and maintenance constraints for the coming years and the LHC high luminosity upgrades are outlined.  
slides icon Slides MOBAUST02 [6.379 MB]  
 
MOBAUST03 The MedAustron Accelerator Control System operation, interface, real-time, timing 9
 
  • J. Gutleber, M. Benedikt
    CERN, Geneva, Switzerland
  • A.B. Brett, A. Fabich, M. Marchhart, R. Moser, M. Thonke, C. Torcato de Matos
    EBG MedAustron, Wr. Neustadt, Austria
  • J. Dedič
    Cosylab, Ljubljana, Slovenia
 
  This paper presents the architecture and design of the MedAustron particle accelerator control system. The facility is currently under construction in Wr. Neustadt, Austria. The accelerator and its control system are designed at CERN. Accelerator control systems for ion therapy applications are characterized by rich sets of configuration data, real-time reconfiguration needs and high stability requirements. The machine is operated according to a pulse-to-pulse modulation scheme and beams are described in terms of ion type, energy, beam dimensions, intensity and spill length. An irradiation session for a patient consists of a few hundred accelerator cycles over a time period of about two minutes. No two cycles within a session are equal and the dead-time between two cycles must be kept low. The control system is based on a multi-tier architecture with the aim of achieving a clear separation between front-end devices and their controllers. Off-the-shelf technologies are deployed wherever possible. In-house developments cover a main timing system, a light-weight layer to standardize operation and communication of front-end controllers, the control of the power converters and a procedure programming framework for automating high-level control and data analysis tasks. In order to be able to roll out a system within a predictable schedule, an "off-shoring" project management process was adopted: a frame agreement with an integrator covers the provision of skilled personnel who specify and build components together with the core team.  
slides icon Slides MOBAUST03 [7.483 MB]  
 
MOBAUST04 The RHIC and RHIC Pre-Injectors Controls Systems: Status and Plans ion, proton, electron, luminosity 13
 
  • K.A. Brown, Z. Altinbas, J. Aronson, S. Binello, I.G. Campbell, M.R. Costanzo, T. D'Ottavio, W. Eisele, A. Fernando, B. Frak, W. Fu, C. Ho, L.T. Hoff, J.P. Jamilkowski, P. Kankiya, R.A. Katz, S.A. Kennell, J.S. Laster, R.C. Lee, G.J. Marr, A. Marusic, R.J. Michnoff, J. Morris, S. Nemesure, B. Oerter, R.H. Olsen, J. Piacentino, G. Robert-Demolaize, V. Schoefer, R.F. Schoenfeld, S. Tepikian, C. Theisen, C.M. Zimmer
    BNL, Upton, Long Island, New York, USA
 
  Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy.
Brookhaven National Laboratory (BNL) is one of the premier high energy and nuclear physics laboratories in the world and has been a leader in accelerator-based physics research for well over half a century. For the past ten years experiments at the Relativistic Heavy Ion Collider (RHIC) have recorded data from collisions of heavy ions and polarized protons, leading to major discoveries in nuclear physics and the spin dynamics of quarks and gluons. BNL is also the site of one of the oldest alternating gradient synchrotrons, the AGS, which first operated in 1960. The accelerator controls systems for these instruments span multiple generations of technologies. In this report we will describe the current status of the Collider-Accelerator Department controls systems, which are used to control seven different accelerator facilities (from the LINAC and Tandem Van de Graaffs to RHIC) and multiple science programs (high energy nuclear physics, high energy polarized proton physics, NASA programs, isotope production, and multiple accelerator research and development projects). We will describe the status of current projects, such as the just-completed Electron Beam Ion Source (EBIS), our R&D programs in superconducting RF and an Energy Recovery LINAC (ERL), innovations in feedback systems and bunched beam stochastic cooling at RHIC, and plans for future controls system developments.
 
slides icon Slides MOBAUST04 [6.386 MB]  
 
MOBAUST05 Control System Achievement at KEKB and Upgrade Design for SuperKEKB EPICS, software, operation, linac 17
 
  • K. Furukawa, A. Akiyama, E. Kadokura, M. Kurashina, K. Mikawa, F. Miyahara, T.T. Nakamura, J.-I. Odagiri, M. Satoh, T. Suwada
    KEK, Ibaraki, Japan
  • T. Kudou, S. Kusano, T. Nakamura, K. Yoshii
    MELCO SC, Tsukuba, Japan
  • T. Okazaki
    EJIT, Hitachi, Ibaraki, Japan
 
  The SuperKEKB electron-positron asymmetric collider is being constructed following a decade of successful operation of KEKB for B-physics research. KEKB completed all of its technical milestones and offered important insights into the flavor structure of elementary particles, especially CP violation. The combination of scripting languages at the operation layer and EPICS at the equipment layer led the control system to successful performance. The new control system for SuperKEKB will retain those major features of KEKB, with additional technologies for reliability and flexibility. The major structure will be maintained, especially the online linkage to the simulation code and slow controls. However, as the design luminosity is 40 times higher than that of KEKB, several orders of magnitude higher performance will be required in certain areas. At the same time, more controllers based on embedded technology will be installed to cope with the limited resources.  
slides icon Slides MOBAUST05 [2.781 MB]  
 
MOBAUST06 The LHCb Experiment Control System: on the Path to Full Automation experiment, detector, framework, operation 20
 
  • C. Gaspar, F. Alessio, L.G. Cardoso, M. Frank, J.C. Garnier, R. Jacobsson, B. Jost, N. Neufeld, R. Schwemmer, E. van Herwijnen
    CERN, Geneva, Switzerland
  • O. Callot
    LAL, Orsay, France
  • B. Franek
    STFC/RAL, Chilton, Didcot, Oxon, United Kingdom
 
  LHCb is a large experiment at the LHC accelerator. The experiment control system is in charge of the configuration, control and monitoring of the different sub-detectors and of all areas of the online system: the Detector Control System (DCS) — sub-detectors' voltages, cooling, temperatures, etc.; the Data Acquisition System (DAQ) and the Run-Control; the High Level Trigger (HLT), a farm of around 1500 PCs running trigger algorithms; etc. The building blocks of the control system are based on the PVSS SCADA system complemented by a control framework developed in common for the four LHC experiments. This framework includes an "expert system"-like tool called SMI++, which we use for system automation. The full control system runs distributed over around 160 PCs and is logically organised in a hierarchical structure, each level being capable of supervising and synchronizing the objects below it. The experiment's operations are now almost completely automated, driven by a top-level object called Big-Brother which pilots all the experiment's standard procedures and the most common error-recovery procedures. Some examples of automated procedures are powering the detector, acting on the Run-Control (Start/Stop Run, etc.) and moving the vertex detector in/out of the beam, all driven by the state of the accelerator, or recovering from errors in the HLT farm. The architecture, tools and mechanisms used for the implementation as well as some operational examples will be shown.  
slides icon Slides MOBAUST06 [1.451 MB]  
 
MOCAULT01 Managing Mayhem factory, background 24
 
  • K.S. White
    ORNL, Oak Ridge, Tennessee, USA
 
  In research institutes, scientists and engineers are often promoted to managerial positions based on their excellence in the technical aspects of their work. Once in the managerial role, they may discover they lack the skills or interest necessary to perform the management functions that enable a healthy, productive organization. This is not really surprising when one considers that scientists and engineers are trained for quantitative analysis while the management arena is dominated by qualitative concepts. Management is generally considered to include planning, organizing, leading and controlling. This paper discusses the essential management functions and techniques that can be employed to maximize success in a research and development organization.  
slides icon Slides MOCAULT01 [2.311 MB]  
 
MOCAULT02 Managing the Development of Plant Subsystems for a Large International Project software, interface, EPICS, site 27
 
  • D.P. Gurd
    Private Address, Vancouver, Canada
 
  ITER is an international collaborative project under development by nations representing over one half of the world's population. Major components will be supplied by "Domestic Agencies" representing the various participating countries. While the supervisory control system, known as "CODAC", will be developed at the project site in the south of France, the EPICS and PLC-based plant control subsystems are to be developed and tested locally, where the subsystems themselves are being built. This is similar to the model used for the development of the Spallation Neutron Source (SNS), which was a US national collaboration. However the far more complex constraints of an international collaboration, as well as the mandated extensive use of externally contracted and commercially-built subsystems, preclude the use of many specifics of the SNS collaboration approach which may have contributed to its success. Moreover, procedures for final system integration and commissioning at ITER are not yet well defined. This paper will outline the particular issues either inherent in an international collaboration or specific to ITER, and will suggest approaches to mitigate those problems with the goal of assuring a successful and timely integration and commissioning phase.  
slides icon Slides MOCAULT02 [3.684 MB]  
 
MOCAUIO04 The SESAME Project booster, EPICS, synchrotron, electron 31
 
  • A. Nadji, S. Abu Ghannam, Z. Qazi, I. Saleh
    SESAME, Amman, Jordan
  • P. Betinelli-Deck, L.S. Nadolski
    SOLEIL, Gif-sur-Yvette, France
  • J.-F. Gournay
    CEA/IRFU, Gif-sur-Yvette, France
  • M.T. Heron
    Diamond, Oxfordshire, United Kingdom
  • H. Hoorani
    NCP, Islamabad, Pakistan
  • B. Kalantari
    PSI, Villigen, Switzerland
  • E. D. Matias, G. Wright
    CLS, Saskatoon, Saskatchewan, Canada
 
  SESAME (Synchrotron-light for Experimental Science and Applications in the Middle East) is a third generation synchrotron light source under construction near Amman (Jordan), which is expected to begin operation in 2015. SESAME will foster scientific and technological excellence in the Middle East and the Mediterranean region, build scientific bridges between neighbouring countries and foster mutual understanding through international cooperation. The members of SESAME are currently Bahrain, Cyprus, Egypt, Iran, Israel, Jordan, Pakistan, the Palestinian Authority and Turkey. An overview of the progress of the facility and the general plan will be given in this talk. I will then focus on the control system, explaining how this part is managed: the technical choices, the main deadlines, the local staff, the international virtual control team, and the first results.  
slides icon Slides MOCAUIO04 [8.526 MB]  
 
MODAULT01 Thirty Meter Telescope Adaptive Optics Computing Challenges real-time, FPGA, hardware, operation 36
 
  • C. Boyer, B.L. Ellerbroek, L. Gilles, L. Wang
    TMT, Pasadena, California, USA
  • S. Browne
    The Optical Sciences Company, Anaheim, California, USA
  • G. Herriot, J.P. Veran
    HIA, Victoria, Canada
  • G.J. Hovey
    DRAO, Penticton, British Columbia, Canada
 
  The Thirty Meter Telescope (TMT) will be used with Adaptive Optics (AO) systems to allow near diffraction-limited performance in the near-infrared and achieve the main TMT science goals. Adaptive optics systems reduce the effect of atmospheric distortions by dynamically measuring the distortions with wavefront sensors, performing wavefront reconstruction with a Real-Time Controller (RTC), and then compensating for the distortions with wavefront correctors. The requirements for the RTC subsystem of the TMT first-light AO system represent a significant advance over the current generation of astronomical AO control systems. With conventional approaches, memory and processing requirements would be at least two orders of magnitude greater than those of the most powerful current AO systems, so innovative wavefront reconstruction algorithms and new hardware approaches will be required. In this paper, we will first present the requirements and challenges for the RTC of the first-light AO system, together with the algorithms that have been developed to reduce the memory and processing requirements, and then two possible hardware architectures based on Field Programmable Gate Arrays (FPGAs).  
slides icon Slides MODAULT01 [2.666 MB]  
 
MOMAU002 Improving Data Retrieval Rates Using Remote Data Servers network, software, hardware, database 40
 
  • T. D'Ottavio, B. Frak, J. Morris, S. Nemesure
    BNL, Upton, Long Island, New York, USA
 
  Funding: Work performed under the auspices of the U.S. Department of Energy
The power and scope of modern control systems has led to an increased amount of data being collected and stored, including data collected at high (kHz) frequencies. One consequence is that users now routinely make data requests that can cause gigabytes of data to be read and displayed. Given that a user's patience can be measured in seconds, this can be quite a technical challenge. This paper explores one possible solution to this problem - the creation of remote data servers whose performance is optimized to handle context-sensitive data requests. Methods for increasing data delivery performance include the use of high speed network connections between the stored data and the data servers, smart caching of frequently used data, and the culling of delivered data as determined by the context of the data request. This paper describes decisions made when constructing these servers and compares data retrieval performance by clients that use or do not use an intermediate data server.
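
The caching and context-based culling described above can be sketched in a few lines. This is an illustrative toy, not the BNL implementation: the class name, the LRU policy and the decimation-by-stride culling are all assumptions standing in for whatever the real servers do.

```python
from collections import OrderedDict

class DataServerCache:
    """Toy remote-data-server cache: LRU storage plus context-aware culling."""

    def __init__(self, max_entries=128):
        self.max_entries = max_entries
        self._cache = OrderedDict()

    def fetch(self, key, loader, display_points=1000):
        # Serve from cache when possible, evicting the least-recently-used
        # entry once the cache is full.
        if key in self._cache:
            self._cache.move_to_end(key)
            data = self._cache[key]
        else:
            data = loader(key)
            self._cache[key] = data
            if len(self._cache) > self.max_entries:
                self._cache.popitem(last=False)
        # Cull: a plot can only usefully show ~display_points samples,
        # so thin the stored data before sending it to the client.
        step = max(1, len(data) // display_points)
        return data[::step]

cache = DataServerCache()
samples = cache.fetch("bpm:orbit", lambda k: list(range(100_000)),
                      display_points=1000)
print(len(samples))  # 1000 points delivered instead of 100 000
```

The point of the sketch is that the expensive read happens once; subsequent requests for the same (hypothetical) "bpm:orbit" key are served from memory and thinned to match the requesting display's context.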
 
slides icon Slides MOMAU002 [0.085 MB]  
poster icon Poster MOMAU002 [1.077 MB]  
 
MOMAU003 The Computing Model of the Experiments at PETRA III TANGO, experiment, interface, detector 44
 
  • T. Kracht, M. Alfaro, M. Flemming, J. Grabitz, T. Núñez, A. Rothkirch, F. Schlünzen, E. Wintersberger, P. van der Reest
    DESY, Hamburg, Germany
 
  The PETRA storage ring at DESY in Hamburg has been refurbished to become a highly brilliant synchrotron radiation source (now named PETRA III). Commissioning of the beamlines started in 2009, user operation in 2010. In comparison with our DORIS beamlines, the PETRA III experiments have larger complexity, higher data rates and require an integrated system for data storage and archiving, data processing and data distribution. Tango [1] and Sardana [2] are the main components of our online control system. Both systems are developed by international collaborations. Tango serves as the backbone to operate all beamline components, certain storage ring devices and equipment from our users. Sardana is an abstraction layer on top of Tango. It standardizes the hardware access, organizes experimental procedures, has a command line interface and provides us with widgets for graphical user interfaces. Other clients like Spectra, which was written for DORIS, interact with Tango or Sardana. Modern 2D detectors create large data volumes. At PETRA III all data are transferred to an online file server which is hosted by the DESY computer center. Near real time analysis and reconstruction steps are executed on a CPU farm. A portal for remote data access is in preparation. Data archiving is done by dCache [3]. An offline file server has been installed for further analysis and in-house data storage.
[1] http://www.tango-controls.org
[2] http://computing.cells.es/services/collaborations/sardana
[3] http://www-dcache.desy.de
 
slides icon Slides MOMAU003 [0.347 MB]  
poster icon Poster MOMAU003 [0.563 MB]  
 
MOMAU004 Database Foundation for the Configuration Management of the CERN Accelerator Controls Systems database, interface, software, timing 48
 
  • Z. Zaharieva, M. Martin Marquez, M. Peryt
    CERN, Geneva, Switzerland
 
  The Controls Configuration DB (CCDB) and its interfaces have been developed over the last 25 years to become today the basis for the configuration management of the controls system for all accelerators at CERN. The CCDB contains data for all configuration items and their relationships, required for the correct functioning of the controls system. The configuration items are quite heterogeneous, depicting different areas of the controls system – ranging from 3000 Front-End Computers and 75 000 software devices allowing remote control of the accelerators, to the valid states of the accelerator timing system. The article will describe the different areas of the CCDB, their interdependencies and the challenges of establishing the data model for such a diverse configuration management database serving a multitude of clients. The CCDB tracks the life of the configuration items by allowing their clear identification, triggering change management processes as well as providing status accounting and audits. This necessitated the development and implementation of a combination of tailored processes and tools. The controls system is data-driven: the data stored in the CCDB is extracted and propagated to the controls hardware in order to configure it remotely. Therefore special attention is paid to data security and data integrity, as an incorrectly configured item can have a direct impact on the operation of the accelerators.  
slides icon Slides MOMAU004 [0.404 MB]  
poster icon Poster MOMAU004 [6.064 MB]  
 
MOMAU005 Integrated Approach to the Development of the ITER Control System Configuration Data database, software, network, status 52
 
  • D. Stepanov, L. Abadie
    ITER Organization, St. Paul lez Durance, France
  • J. Bertin, G. Bourguignon, G. Darcourt
    Sopra Group, Aix-en-Provence, France
  • O. Liotard
    TCS France, Puteaux, France
 
  The ITER control system (CODAC) is steadily moving into the implementation phase. A design guidelines handbook and a software development toolkit, named CODAC Core System, were produced in February 2011. They are ready to be used off-site, in the ITER domestic agencies and associated industries, in order to develop the first control "islands" of various ITER plant systems. In addition to the work done off-site, there is a wealth of I&C-related data developed centrally at ITER, but scattered through various sources. These data include I&C design diagrams, 3-D data, volume allocation, inventory control, administrative data, planning and scheduling, tracking of deliveries and associated documentation, requirements control, etc. All these data have to be kept coherent and up-to-date, with various types of cross-checks and procedures imposed on them. A "plant system profile" database, currently under development at ITER, represents an effort to provide an integrated view into the I&C data. Supported by platform-independent data modeling, done with the help of XML Schema, it accumulates all the data in a single hierarchy and provides different views for different aspects of the I&C data. The database is implemented using MS SQL Server and a Java-based web interface. Import and data linking services are implemented using Talend software, and report generation is done with the help of MS SQL Server Reporting Services. This paper will report on the first implementation of the database, the kind of data stored so far, typical work flows and processes, and directions of further work.  
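
The single-hierarchy, multiple-views idea can be illustrated with a toy fragment. Everything here is invented for illustration — the element names and the flattened-path view are assumptions, not the actual ITER schema or database layout.

```python
import xml.etree.ElementTree as ET

# Hypothetical "plant system profile" fragment; the real schema is defined
# with XML Schema and is far richer than this sketch.
profile_xml = """
<plantSystem name="Vacuum">
  <controller name="PSH-01">
    <signal name="pressure" unit="Pa"/>
    <signal name="valveState"/>
  </controller>
</plantSystem>
"""

def flatten(element, path=""):
    # Walk the single hierarchy and emit one possible "view": a flat list of
    # path-style identifiers, the kind of derived view a report might use.
    here = f"{path}/{element.get('name')}" if element.get("name") else path
    rows = [here] if element.get("name") else []
    for child in element:
        rows.extend(flatten(child, here))
    return rows

rows = flatten(ET.fromstring(profile_xml))
print(rows)
# ['/Vacuum', '/Vacuum/PSH-01', '/Vacuum/PSH-01/pressure',
#  '/Vacuum/PSH-01/valveState']
```

Keeping the data in one tree and deriving flat views from it, rather than maintaining several copies, is what makes the cross-checks mentioned above tractable.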
slides icon Slides MOMAU005 [0.384 MB]  
poster icon Poster MOMAU005 [0.692 MB]  
 
MOMAU007 How to Maintain Hundreds of Computers Offering Different Functionalities with Only Two System Administrators software, Linux, database, EPICS 56
 
  • R.A. Krempaska, A.G. Bertrand, C.E. Higgs, R. Kapeller, H. Lutz, M. Provenzano
    Paul Scherrer Institut, 5232 Villigen PSI, Switzerland
 
  The Controls section at PSI is responsible for the control systems of four accelerators: two proton accelerators, HIPA and PROSCAN, the Swiss Light Source (SLS) and the Free Electron Laser (SwissFEL) Test Facility. On top of that, we have 18 additional SLS beamlines to control. The controls system is mainly composed of the so-called Input Output Controllers (IOCs), which require a complete and complex computing infrastructure in order to boot and to be developed, debugged and monitored. This infrastructure currently consists mainly of Linux computers such as boot servers, port servers and configuration servers (called save-and-restore servers). Overall, the constellation of computers and servers which compose the control system counts about five hundred Linux computers, which can be split into 38 different configurations based on the work each of these systems needs to provide. For the administration of all this we employ only two system administrators, who are responsible for the installation, configuration and maintenance of those computers. This paper shows which tools are used to tackle this difficult task: Puppet (an open source Linux tool we further adapted) and many in-house developed tools offering an overview of the computers, their installation status and the relations between the different servers and computers.  
slides icon Slides MOMAU007 [0.384 MB]  
poster icon Poster MOMAU007 [0.708 MB]  
 
MOMAU008 Integrated Management Tool for Controls Software Problems, Requests and Project Tasking at SLAC software, status, HOM, feedback 59
 
  • D. Rogind, W. Allen, W.S. Colocho, G. DeContreras, J.B. Gordon, P. Pandey, H. Shoaee
    SLAC, Menlo Park, California, USA
 
  The Controls Department at SLAC, with its service center model, continuously receives engineering requests to design, build and support controls for accelerator systems lab-wide. Each customer request can vary in complexity from installing a minor feature to enhancing a major subsystem. Departmental accelerator improvement projects, along with DOE-approved construction projects, also contribute heavily to the work load. These various customer requests and projects, paired with the ongoing operational maintenance and problem reports, place a demand on the department that usually exceeds the capacity of available resources. An integrated, centralized repository - comprised of all problems, requests, and project tasks, and available to all customers, operators, managers, and engineers alike - is essential to capture, communicate, prioritize, assign, schedule, track progress, and finally, commission all work components. The Controls software group has recently integrated its request/task management into its online problem tracking "Comprehensive Accelerator Tool for Enhancing Reliability" (CATER) tool. This paper discusses the new integrated software problem/request/task management tool - its work-flow, reporting capability, and its many benefits.  
slides icon Slides MOMAU008 [0.083 MB]  
poster icon Poster MOMAU008 [1.444 MB]  
 
MOMMU001 Extending Alarm Handling in Tango TANGO, database, synchrotron, status 63
 
  • S. Rubio-Manrique, F. Becheri, D.F.C. Fernández-Carreiras, J. Klora, L. Krause, A. Milán Otero, Z. Reszela, P. Skorek
    CELLS-ALBA Synchrotron, Cerdanyola del Vallès, Spain
 
  This paper describes the alarm system developed at the Alba Synchrotron, built on the Tango control system. It describes the tools used for configuration and visualization, their integration in user interfaces and the approach to alarm specification: either assigning discrete Alarm/Warning levels or allowing versatile logic rules in Python. This paper also covers the life cycle of the alarm (triggering, logging, notification, explanation and acknowledgement) and the automatic control actions that can be triggered by the alarms.  
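
The "versatile logic rules in Python" idea can be sketched as follows. This is a minimal illustration, not the actual ALBA/PyAlarm API: the class names, the expression format and the PV names are all invented.

```python
# Minimal sketch of rule-based alarm evaluation with discrete levels.
ALARM, WARNING, OK = "ALARM", "WARNING", "OK"

class AlarmRule:
    def __init__(self, name, expression, level=ALARM):
        self.name = name
        self.expression = expression  # a Python boolean expression over PV names
        self.level = level

    def evaluate(self, values):
        # A rule fires (returns its level) when its expression is true
        # for the current set of readings; otherwise the state is OK.
        return self.level if eval(self.expression, {}, dict(values)) else OK

rules = [
    AlarmRule("vacuum_high", "pressure > 1e-6"),
    AlarmRule("temp_warn", "temperature > 40", level=WARNING),
]
readings = {"pressure": 5e-6, "temperature": 35}
states = {rule.name: rule.evaluate(readings) for rule in rules}
print(states)  # {'vacuum_high': 'ALARM', 'temp_warn': 'OK'}
```

Expressing conditions as Python expressions, rather than fixed thresholds, is what allows arbitrary combinations of variables in a single rule; a production system would of course sandbox the evaluation rather than call bare `eval`.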
slides icon Slides MOMMU001 [1.119 MB]  
poster icon Poster MOMMU001 [2.036 MB]  
 
MOMMU002 NFC Like Wireless Technology for Monitoring Purposes in Scientific/Industrial Facilities EPICS, monitoring, network, vacuum 66
 
  • I. Badillo, M. Eguiraun
    ESS-Bilbao, Zamudio, Spain
  • J. Jugo
    University of the Basque Country, Faculty of Science and Technology, Bilbao, Spain
 
  Funding: The present work is supported by the Basque Government and Spanish Ministry of Science and Innovation.
Wireless technologies are becoming more and more widely used in large industrial and scientific facilities such as particle accelerators, facilitating monitoring and sensing in these kinds of large environments. Cabled equipment allows little flexibility in placement and is very expensive in both money and effort whenever a reorganization or new installation is needed. So, when cabling is not really needed for performance reasons, wireless monitoring and control is a good option due to the speed of implementation. There are several wireless flavors to choose from, such as Bluetooth, Zigbee, WiFi, etc., depending on the requirements of each specific application. In this work a wireless monitoring system for EPICS (Experimental Physics and Industrial Control System) is presented, where desired control system variables are acquired over the network and published on a mobile device, allowing the operator to check process variables wherever the signal reaches. In this approach, a Python-based server continuously reads EPICS Process Variables via the Channel Access protocol and sends them over a standard WiFi 802.11 network using ICE middleware. ICE is a toolkit oriented towards building distributed applications. Finally, the mobile device reads the data and shows it to the operator. The security of the communication can be assured by means of a weak wireless signal, following the same idea as in NFC, but over larger distances. With this approach, local monitoring and control applications, for example a vacuum control system for several pumps, are easily implemented.
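
The acquire-and-publish loop described above can be sketched in plain Python. This is a hedged stand-in, not the authors' code: the real system uses EPICS Channel Access (e.g. via a client library) and ICE middleware, so a stub getter and an in-memory subscriber list replace both here, and the PV name is invented.

```python
import json

def caget_stub(pv_name, _values={"VAC:PUMP1:PRES": 2.3e-7}):
    # Stand-in for a Channel Access read (e.g. epics.caget(pv_name)).
    return _values.get(pv_name)

class Publisher:
    """Stand-in for the ICE-based transport towards the mobile client."""

    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, message):
        for callback in self.subscribers:
            callback(message)

received = []
pub = Publisher()
pub.subscribe(received.append)  # a mobile client would render these messages

# Server loop body: read each PV of interest and push it out as JSON.
for pv in ["VAC:PUMP1:PRES"]:
    value = caget_stub(pv)
    pub.publish(json.dumps({"pv": pv, "value": value}))

print(received)
```

The separation matters: the polling side only knows PV names, and the transport side only knows serialized messages, so either end (Channel Access client, ICE, the mobile display) can be swapped independently.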
 
slides icon Slides MOMMU002 [0.309 MB]  
poster icon Poster MOMMU002 [7.243 MB]  
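A minimal sketch of the acquisition loop described in this abstract, with the Channel Access client and the ICE/WiFi transport replaced by injected stand-ins (`read_pv`, `publish`); the PV name and the JSON encoding are illustrative assumptions, not the paper's actual implementation:

```python
import json

def poll_and_publish(pv_names, read_pv, publish, cycles=1):
    """Poll each named PV via read_pv() and push a JSON message to publish().

    read_pv and publish stand in for the Channel Access client and the
    ICE-over-WiFi transport described in the abstract.
    """
    for _ in range(cycles):
        for name in pv_names:
            value = read_pv(name)          # a caget() in the real server
            publish(json.dumps({"pv": name, "value": value}))

# Stub transport for illustration: collect messages in a list.
sent = []
poll_and_publish(["VAC:P1:PRESSURE"], read_pv=lambda name: 1.3e-6,
                 publish=sent.append)
```

In the real system the `publish` callback would hand the message to ICE for delivery to the mobile client.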
 
MOMMU005 Stabilization and Positioning of CLIC Quadrupole Magnets with sub-Nanometre Resolution quadrupole, feedback, luminosity, simulation 74
 
  • S.M. Janssens, K. Artoos, C.G.R.L. Collette, M. Esposito, P. Fernandez Carmona, M. Guinchard, C. Hauviller, A.M. Kuzmin, R. Leuxe, R. Morón Ballester
    CERN, Geneva, Switzerland
 
  Funding: The research leading to these results has received funding from the European Commission under the FP7 Research Infrastructures project EuCARD, grant agreement no.227579
To reach the required luminosity at the CLIC interaction point, about 2000 quadrupoles along each linear collider are needed to obtain a vertical beam size of 1 nm at the interaction point. Active mechanical stabilization is required to limit the vibrations of the magnetic axis to the nanometre level in a frequency range from 1 to 100 Hz. The approach of a stiff actuator support was chosen to isolate from ground motion and technical vibrations acting directly on the quadrupoles. The actuators can also reposition the quadrupoles between beam pulses with nanometre resolution. A first conceptual design of the active stabilization and nano-positioning based on the stiff support and seismometers was validated in models and experimentally demonstrated on test benches. Lessons learnt from the test benches and information from integrated luminosity simulations using measured stabilization transfer functions led to improvements of the actuating support, the sensors used and the system controller. The controller electronics were customized to improve performance and to reduce cost, size and power consumption. The outcome of this R&D is implemented in the design of the first prototype of a stabilized CLIC quadrupole magnet.
 
slides icon Slides MOMMU005 [1.046 MB]  
poster icon Poster MOMMU005 [1.551 MB]  
 
MOMMU009 Upgrade of the Server Architecture for the Accelerator Control System at the Heidelberg Ion Therapy Center database, ion, network, proton 78
 
  • J.M. Mosthaf, Th. Haberer, S. Hanke, K. Höppner, A. Peters, S. Stumpf
    HIT, Heidelberg, Germany
 
The Heidelberg Ion Therapy Center (HIT) is a heavy ion accelerator facility located at the Heidelberg university hospital and intended for cancer treatment with heavy ions and protons. It provides three treatment rooms for therapy, of which the two using horizontal beam nozzles are in operation, while the unique gantry with a 360° rotating beam port is currently under commissioning. The proprietary accelerator control system runs on several classical server machines, including a main control server, a database server running Oracle, a device settings modeling (DSM) server and several gateway servers for auxiliary system control. As the load on some of the main systems, especially the database and DSM servers, has become very high in terms of CPU and I/O, it was decided to change to a more up-to-date blade server enclosure with four redundant blades and a 10 Gbit internal network architecture. For budgetary reasons, this enclosure will at first only replace the main control, database and DSM servers and consolidate some of the services now running on auxiliary servers. The internal configurable network will improve the communication between servers and database. As all blades in the enclosure are configured identically, one dedicated spare blade provides redundancy in case of hardware failure. Additionally, we plan to use virtualization software to further improve redundancy, to consolidate the services running on gateways, and to make dynamic load balancing available to account for different performance needs, e.g. in commissioning or therapy use of the accelerator.  
slides icon Slides MOMMU009 [0.233 MB]  
poster icon Poster MOMMU009 [1.132 MB]  
 
MOMMU012 A Digital Base-band RF Control System FPGA, diagnostics, operation, software 82
 
  • M. Konrad, U. Bonnes, C. Burandt, R. Eichhorn, J. Enders, N. Pietralla
    TU Darmstadt, Darmstadt, Germany
 
  Funding: Supported by DFG through CRC 634.
The analog RF control system of the S-DALINAC has been replaced by a new digital system. The new hardware consists of an RF module and an FPGA board that have been developed in-house. A self-developed CPU implemented in the FPGA executes the control algorithm, making it possible to change the algorithm without time-consuming synthesis. Another micro-controller connects the FPGA board to a standard PC server via a CAN bus. This connection is used to adjust control parameters as well as to send commands from the RF control system to the cavity tuner power supplies. The PC runs Linux and an EPICS IOC. The latter is connected to the CAN bus with a device support that uses the SocketCAN network stack included in recent Linux kernels, making the IOC independent of the CAN controller hardware. A diagnostic server streams signals from the FPGAs to clients on the network. Clients used for diagnosis include a software oscilloscope as well as a software spectrum analyzer. The parameters of the controllers can be changed with Control System Studio. We will present the architecture of the RF control system as well as the functionality of its components from a control system developer's point of view.
 
slides icon Slides MOMMU012 [0.087 MB]  
poster icon Poster MOMMU012 [33.544 MB]  
 
MOPKN005 Construction of New Data Archive System in RIKEN RI Beam Factory EPICS, database, data-acquisition, beam-diagnostic 90
 
  • M. Komiyama, N. Fukunishi
    RIKEN Nishina Center, Wako, Japan
  • A. Uchiyama
    SHI Accelerator Service Ltd., Tokyo, Japan
 
The control system of the RIKEN RI Beam Factory (RIBF) is based on EPICS, and three kinds of data archive systems have been in operation. Two of them are EPICS applications and the third is MyDAQ2, developed by the SPring-8 control group. MyDAQ2 collects data such as cooling-water and magnet temperatures and is not integrated into our EPICS control system. In order to unify the three applications into a single system, we started to develop a new system in October 2009. One of the requirements for this RIBF Control data Archive System (RIBFCAS) is that it routinely collects more than 3000 data points from 21 EPICS Input/Output Controllers (IOCs) every 1 to 60 seconds, depending on the type of equipment. The ability to unify the MyDAQ2 database is also required. To fulfill these requirements, a Java-based system was constructed, in which the Java Channel Access Light Library (JCAL), developed by the J-PARC control group, is adopted in order to acquire the large amounts of data mentioned above. The main advantage of JCAL is that it is based on a single-threaded architecture for thread safety, while user threads can be multi-threaded. The RIBFCAS hardware consists of an application server, a database server and a client PC. The client application is executed on the Adobe AIR runtime. At the moment, we have succeeded in acquiring about 3000 data points from 21 EPICS IOCs every 10 seconds for one day, and validation tests are proceeding. Unification of MyDAQ2 is now in progress and is scheduled to be completed in 2011.  
poster icon Poster MOPKN005 [27.545 MB]  
 
MOPKN009 The CERN Accelerator Measurement Database: On the Road to Federation database, extraction, data-management, status 102
 
  • C. Roderick, R. Billen, M. Gourber-Pace, N. Hoibian, M. Peryt
    CERN, Geneva, Switzerland
 
The Measurement database, acting as the short-term central persistence and front-end of the CERN accelerator Logging Service, receives billions of time-series data records per day for more than 200,000 signals. A variety of data acquisition systems on hundreds of front-end computers publish source data that eventually end up being logged in the Measurement database. As part of a federated approach to data management, information about source devices is defined in a Configuration database, whilst the signals to be logged are defined in the Measurement database. A mapping, which is often complex and subject to change and extension, is therefore required in order to subscribe to the source devices and write the published data to the corresponding named signals. Since 2005, this mapping had been done by means of dozens of XML files, manually maintained by multiple persons, resulting in an error-prone configuration. In 2010 this configuration was improved such that it is now fully centralized in the Measurement database, significantly reducing the complexity and the number of actors in the process. Furthermore, logging processes immediately pick up modified configurations via JMS-based notifications sent directly from the database, allowing targeted device subscription updates rather than the full process restart that was required previously. This paper will describe the architecture and benefits of the current implementation, as well as the next steps on the road to a fully federated solution.  
 
MOPKN010 Database and Interface Modifications: Change Management Without Affecting the Clients database, interface, software, operation 106
 
  • M. Peryt, R. Billen, M. Martin Marquez, Z. Zaharieva
    CERN, Geneva, Switzerland
 
The first Oracle-based Controls Configuration Database (CCDB) was developed in 1986, making the controls system of CERN's Proton Synchrotron data-driven. Since then, this mission-critical system has evolved tremendously, going through several generational changes in terms of the increasing complexity of the control system, software technologies and data models. Today, the CCDB covers the whole CERN accelerator complex and satisfies a much wider range of functional requirements. Despite its online usage, everyday operation of the machines must not be disrupted. This paper describes our approach to dealing with change while ensuring continuity. How do we manage database schema changes? How do we take advantage of the latest web-deployed application development frameworks without alienating the users? How do we minimize the impact on dependent systems connected to the databases through various APIs? In this paper we will provide our answers to these questions, and to many more.  
 
MOPKN011 CERN Alarms Data Management: State & Improvements laser, database, data-management, operation 110
 
  • Z. Zaharieva, M. Buttner
    CERN, Geneva, Switzerland
 
The CERN Alarms System - LASER is a centralized service ensuring the capturing, storing and notification of anomalies for the whole accelerator chain, including the technical infrastructure at CERN. The underlying database holds the pre-defined configuration data for the alarm definitions and for the operators' alarm consoles, as well as the time-stamped, run-time alarm events propagated through the Alarms System. The article will discuss the current state of the Alarms database and recent improvements that have been introduced. It will look into the data management challenges related to the alarms configuration data, which is taken from numerous sources. Specially developed ETL processes must be applied to this data in order to transform it into an appropriate format and load it into the Alarms database. The recorded alarm events, together with some additional data necessary for providing event statistics to users, are transferred to the long-term alarms archive. The article will also cover the data management challenges related to the recently developed suite of data management interfaces, with respect to keeping the alarms configuration data coming from external data sources consistent with the modifications introduced by the end-users.  
poster icon Poster MOPKN011 [4.790 MB]  
 
MOPKN012 HyperArchiver: An EPICS Archiver Prototype Based on Hypertable EPICS, embedded, Linux, target 114
 
  • M.G. Giacchini, A. Andrighetto, G. Bassato, L.G. Giovannini, M. Montis, G.P. Prete, J.A. Vásquez
    INFN/LNL, Legnaro (PD), Italy
  • J. Jugo
    University of the Basque Country, Faculty of Science and Technology, Bilbao, Spain
  • K.-U. Kasemir
    ORNL, Oak Ridge, Tennessee, USA
  • R. Lange
    HZB, Berlin, Germany
  • R. Petkus
    BNL, Upton, Long Island, New York, USA
  • M. del Campo
    ESS-Bilbao, Zamudio, Spain
 
This work started in the context of the NSLS2 project at Brookhaven National Laboratory. The NSLS2 control system foresees a very high number of PVs and has strict requirements in terms of archiving/retrieval rate: our goal was to store 10K PVs/s and retrieve 4K PVs/s for a group of 4 signals. The HyperArchiver is an EPICS Archiver implementation powered by Hypertable, an open source database whose internal architecture is derived from Google's Bigtable. We discuss the performance of HyperArchiver and present the results of some comparative tests.
HyperArchiver: http://www.lnl.infn.it/~epics/joomla/archiver.html
Epics: http://www.aps.anl.gov/epics/
 
poster icon Poster MOPKN012 [1.231 MB]  
 
MOPKN013 Image Acquisition and Analysis for Beam Diagnostics Applications of the Taiwan Photon Source EPICS, GUI, linac, software 117
 
  • C.Y. Liao, J. Chen, Y.-S. Cheng, K.T. Hsu, K.H. Hu, C.H. Kuo, C.Y. Wu
    NSRRC, Hsinchu, Taiwan
 
Design and implementation of image acquisition and analysis is in progress for Taiwan Photon Source (TPS) diagnostic applications. The optical system contains a screen, lens, and lighting system. A CCD camera with a Gigabit Ethernet interface (GigE Vision) will be the standard image acquisition device. Image acquisition is done on an EPICS IOC via PV channels, and the image properties are analysed using Matlab to evaluate the beam profile (σ), beam size, position and tilt angle. The EPICS IOC integrated with Matlab as a data processing system can be used not only for image analysis but also for many types of equipment data processing applications. Progress of the project will be summarized in this report.  
poster icon Poster MOPKN013 [0.816 MB]  
 
MOPKN015 Managing Information Flow in ALICE detector, distributed, monitoring, database 124
 
  • O. Pinazza
    INFN-Bologna, Bologna, Italy
  • A. Augustinus, P.Ch. Chochula, L.S. Jirdén, A.N. Kurepin, M. Lechman, P. Rosinský
    CERN, Geneva, Switzerland
  • G. De Cataldo
    INFN-Bari, Bari, Italy
  • A. Moreno
    Universidad Politécnica de Madrid, E.T.S.I Industriales, Madrid, Spain
 
  ALICE is one of the experiments at the Large Hadron Collider (LHC), CERN (Geneva, Switzerland). The ALICE detector control system is an integrated system collecting 18 different subdetectors' controls and general services and is implemented using the commercial SCADA package PVSS. Information of general interest, beam and ALICE condition data, together with data related to shared plants or systems, are made available to all the subsystems through the distribution capabilities of PVSS. Great care has been taken during the design and implementation to build the control system as a hierarchical system, limiting the interdependencies of the various subsystems. Accessing remote resources in a PVSS distributed environment is very simple, and can be initiated unilaterally. In order to improve the reliability of distributed data and to avoid unforeseen dependencies, the ALICE DCS group has enforced the centralization of the publication of global data and other specific variables requested by the subsystems. As an example, a specific monitoring tool will be presented that has been developed in PVSS to estimate the level of interdependency and to understand the optimal layout of the distributed connections, allowing for an interactive visualization of the distribution topology.  
poster icon Poster MOPKN015 [2.585 MB]  
 
MOPKN016 Tango Archiving Service Status TANGO, database, GUI, insertion 127
 
  • G. Abeillé, J. Guyot, M. Ounsy, S. Pierre-Joseph Zéphir
    SOLEIL, Gif-sur-Yvette, France
  • R. Passuello, G. Strangolino
    ELETTRA, Basovizza, Italy
  • S. Rubio-Manrique
    CELLS-ALBA Synchrotron, Cerdanyola del Vallès, Spain
 
In modern scientific instruments like ALBA, ELETTRA or Synchrotron Soleil, the monitoring and tuning of thousands of parameters is essential to drive high-performing accelerators and beamlines. To keep track of these parameters and to easily manage large volumes of technical data, an archiving service is a key component of a modern control system like Tango [1]. To this end, a high-availability archiving service is provided as a feature of the Tango control system. This archiving service stores data coming from the Tango control system into MySQL [2] or Oracle [3] databases. Three sub-services are provided: a historical service with an archiving period up to 10 seconds; a short-term service providing a few weeks' retention with a period up to 100 milliseconds; and a snapshot service which takes "pictures" of Tango parameters and can reapply them to the control system on user demand. This paper presents how to obtain a high-performance and scalable service, based on our feedback after years of operation. Then, the deployment architecture in the different Tango institutes will be detailed. The paper concludes with a description of the next steps and upcoming features which will be available in the near future.
[1] http://www.tango-controls.org/
[2] http://www.mysql.com/
[3] http://www.oracle.com/us/products/database/index.html
 
 
MOPKN018 Computing Architecture of the ALICE Detector Control System detector, monitoring, network, interface 134
 
  • P. Rosinský, A. Augustinus, P.Ch. Chochula, L.S. Jirdén, M. Lechman
    CERN, Geneva, Switzerland
  • G. De Cataldo
    INFN-Bari, Bari, Italy
  • A.N. Kurepin
    RAS/INR, Moscow, Russia
  • A. Moreno
    Universidad Politécnica de Madrid, E.T.S.I Industriales, Madrid, Spain
  • O. Pinazza
    INFN-Bologna, Bologna, Italy
 
The ALICE Detector Control System (DCS) is based on a commercial SCADA product running on a large Windows computer cluster. It communicates with about 1200 network-attached devices to assure safe and stable operation of the experiment. In the presentation we focus on the design of the ALICE DCS computer systems. We describe the management of data flow, the mechanisms for handling large data volumes, and the information exchange with external systems. One of the key operational requirements is an intuitive, error-proof and robust user interface allowing simple operation of the experiment. At the same time, typical operator tasks, like trending or routine checks of the devices, must be decoupled from the automated operation in order to prevent overload of critical parts of the system. All these requirements must be implemented in an environment with strict security requirements. In the presentation we explain how these demands affected the architecture of the ALICE DCS.  
 
MOPKN019 ATLAS Detector Control System Data Viewer database, interface, framework, experiment 137
 
  • C.A. Tsarouchas, S.A. Roe, S. Schlenker
    CERN, Geneva, Switzerland
  • U.X. Bitenc, M.L. Fehling-Kaschek, S.X. Winkelmann
    Albert-Ludwig Universität Freiburg, Freiburg, Germany
  • S.X. D'Auria
    University of Glasgow, Glasgow, United Kingdom
  • D. Hoffmann, O.X. Pisano
    CPPM, Marseille, France
 
The ATLAS experiment at CERN is one of the four Large Hadron Collider experiments. ATLAS uses a commercial SCADA system (PVSS) for its Detector Control System (DCS), which is responsible for the supervision of the detector equipment, the reading of operational parameters, the propagation of alarms and the archiving of important operational data in a relational database. The DCS Data Viewer (DDV) is an application that provides access to historical data of DCS parameters written to the database through a web interface. It has a modular and flexible design and is structured using a client-server architecture. The server can be operated stand-alone with a command-line interface to the data, while the client offers a user-friendly, browser-independent interface. The selection of the metadata of DCS parameters is done via a column-tree view or with a powerful search engine. The final visualisation of the data is done using various plugins such as "value over time" charts, data tables, raw ASCII or structured export to ROOT. Excessive access or malicious use of the database is prevented by dedicated protection mechanisms, allowing the exposure of the tool to hundreds of inexperienced users. The metadata selection and data output features can be configured separately via XML configuration files. Security constraints have been taken into account in the implementation, allowing access to DDV by collaborators worldwide. Due to its flexible interface and its generic and modular approach, DDV could easily be used for other experiment control systems that archive data using PVSS.  
poster icon Poster MOPKN019 [0.938 MB]  
 
MOPKN020 The PSI Web Interface to the EPICS Channel Archiver interface, EPICS, software, operation 141
 
  • G. Jud, A. Lüdeke, W. Portmann
    Paul Scherrer Institut, 5232 Villigen PSI, Switzerland
 
The EPICS channel archiver is a powerful tool for collecting control system data from thousands of EPICS process variables, at rates of many Hertz each, into an archive for later retrieval [1]. The channel archiver version 2 package includes a Java application for graphical data retrieval and a command line tool for data extraction into different file formats. For the Paul Scherrer Institute we wanted the possibility to retrieve the archived data from a web interface. It was desired to have flexible retrieval functions and to allow data references to be interchanged by e-mail. This web interface has been implemented by the PSI controls group and has now been in operation for several years. This presentation will highlight the special features of this PSI web interface to the EPICS channel archiver.
[1] http://sourceforge.net/apps/trac/epicschanarch/wiki
 
poster icon Poster MOPKN020 [0.385 MB]  
 
MOPKN021 Asynchronous Data Change Notification between Database Server and Accelerator Control Systems database, target, software, EPICS 144
 
  • W. Fu, J. Morris, S. Nemesure
    BNL, Upton, Long Island, New York, USA
 
  Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy.
Database data change notification (DCN) is a commonly used feature. Not all database management systems (DBMSs) provide an explicit DCN mechanism. Even for those DBMSs which do support DCN (such as Oracle and MS SQL Server), some server-side and/or client-side programming may be required to make the DCN system work. This makes the setup of DCN between a database server and interested clients tedious and time-consuming. In accelerator control systems, there are many well-established client/server software architectures (such as CDEV, EPICS, and ADO) that can be used to implement data reflection servers that transfer data asynchronously to any client using the standard SET/GET API. This paper describes a method for using such a data reflection server to set up asynchronous DCN (ADCN) between a DBMS and clients. This method works well for all DBMS systems which provide database trigger functionality.
 
poster icon Poster MOPKN021 [0.355 MB]  
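A minimal sketch of the trigger-based ADCN idea described above: a database trigger records each change in a notification table, which the data reflection server drains and republishes to its clients. SQLite stands in for Oracle/MS SQL Server, polling stands in for the CDEV/EPICS/ADO transport, and all table and setting names are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE settings(name TEXT PRIMARY KEY, value REAL);
    CREATE TABLE change_log(id INTEGER PRIMARY KEY, name TEXT, value REAL);
    -- The trigger captures each change; the reflection server drains
    -- change_log and republishes via its SET/GET API.
    CREATE TRIGGER notify AFTER UPDATE ON settings BEGIN
        INSERT INTO change_log(name, value) VALUES (NEW.name, NEW.value);
    END;
""")
conn.execute("INSERT INTO settings VALUES ('quad.current', 10.0)")
conn.execute("UPDATE settings SET value = 12.5 WHERE name = 'quad.current'")

def drain_notifications(conn):
    """Reflection-server side: fetch and clear pending change events."""
    rows = conn.execute(
        "SELECT name, value FROM change_log ORDER BY id").fetchall()
    conn.execute("DELETE FROM change_log")
    return rows

events = drain_notifications(conn)
```

The same trigger pattern works with any DBMS that supports triggers, which is the portability point the paper makes.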
 
MOPKN024 The Integration of the LHC Cryogenics Control System Data into the CERN Layout Database database, cryogenics, instrumentation, interface 147
 
  • E. Fortescue-Beck, R. Billen, P. Gomes
    CERN, Geneva, Switzerland
 
The Large Hadron Collider's Cryogenic Control System makes extensive use of several databases to manage data appertaining to over 34,000 cryogenic instrumentation channels. This data is essential for populating the firmware of the PLCs which are responsible for maintaining the LHC at the appropriate temperature. In order to reduce the number of data sources and the overall complexity of the system, the databases have been rationalised and the automatic tool that extracts data for the control software has been simplified. This paper describes the main improvements that have been made and evaluates the success of the project.  
 
MOPKN025 Integrating the EPICS IOC Log into the CSS Message Log EPICS, database, network, monitoring 151
 
  • K.-U. Kasemir, E. Danilova
    ORNL, Oak Ridge, Tennessee, USA
 
  Funding: SNS is managed by UT-Battelle, LLC, under contract DE-AC05-00OR22725 for the U.S. Department of Energy
The Experimental Physics and Industrial Control System (EPICS) includes the "IOCLogServer", a tool that logs error messages from front-end computers (Input/Output Controllers, IOCs) into a set of text files. Control System Studio (CSS) includes a distributed message logging system with relational database persistence and various log analysis tools. We implemented a log server that forwards IOC messages to the CSS log database, allowing several ways of monitoring and analyzing the IOC error messages.
 
poster icon Poster MOPKN025 [4.006 MB]  
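The forwarding idea can be sketched as parsing text log lines and inserting them into a relational message log. The `host: message` layout is a simplified assumption (the real IOCLogServer format carries more fields, such as timestamps), and SQLite stands in for the CSS log database:

```python
import sqlite3

def parse_ioc_line(line):
    """Split a simplified IOC log line of the form 'host: message'.

    A sketch only: the actual IOCLogServer record format is richer.
    """
    host, _, message = line.partition(": ")
    return host, message.strip()

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE message_log(host TEXT, body TEXT)")

# Hypothetical log line for illustration.
for raw in ["ioc-linac-01: CA client connection lost"]:
    db.execute("INSERT INTO message_log VALUES (?, ?)", parse_ioc_line(raw))

rows = db.execute("SELECT host, body FROM message_log").fetchall()
```

Once in the database, the messages become queryable with the same CSS analysis tools as any other log source.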
 
MOPKN027 BDNLS - BESSY Device Name Location Service database, EPICS, interface, target 154
 
  • D.B. Engel, P. Laux, R. Müller
    HZB, Berlin, Germany
 
Initially the relational database (RDB) for control system configuration at BESSY was built around the device concept [1]. Maintenance and consistency issues, as well as the complexity of the scripts generating the configuration data, triggered the development of a novel, generic RDB structure based on hierarchies of named nodes with attribute/value pairs [2]. Unfortunately it turned out that the usability of this generic RDB structure for comprehensive configuration management relies entirely on sophisticated data maintenance tools. Against this background BDNS, a new database management tool, has been developed within the framework of the Eclipse Rich Client Platform. It uses the Model View Controller (MVC) layer of JFace to cleanly separate retrieval processes, data paths, data visualization and updates. It is based on extensible configurations described in XML, allowing SQL calls to be chained and profiles to be composed for various use cases. It solves the problem of forwarding data keys to the subsequent SQL statement. BDNS and its ability to map various levels of complexity into the XML configurations make it possible to provide easy-to-use, tailored database access to the configuration maintainers for the different underlying database structures. Being based on Eclipse, the integration of BDNS into Control System Studio is straightforward.
[1] T. Birke et al.: Relational Database for Controls Configuration Management, IADBG Workshop 2001, San Jose.
[2] T. Birke et al.: Beyond Devices - An Improved RDB Data-Model for Configuration Management, ICALEPCS 2005, Geneva.
 
poster icon Poster MOPKN027 [0.210 MB]  
 
MOPKN029 Design and Implementation of the CEBAF Element Database database, interface, software, hardware 157
 
  • T. L. Larrieu, M.E. Joyce, C.J. Slominski
    JLAB, Newport News, Virginia, USA
 
  Funding: Authored by Jefferson Science Associates, LLC under U.S. DOE Contract No. DE-AC05-06OR23177.
With the inauguration of the CEBAF Element Database (CED) in Fall 2010, Jefferson Lab computer scientists have taken a first step toward the eventual goal of a model-driven accelerator. Once fully populated, the database will be the primary repository of information used for everything from generating lattice decks to booting front-end computers to building controls screens. A particular requirement influencing the CED design is that it must provide consistent access not only to present, but also to future, and eventually past, configurations of the CEBAF accelerator. To accomplish this, an introspective database schema was designed that allows new elements, element types, and element properties to be defined on the fly without changing the table structure. When used in conjunction with the Oracle Workspace Manager, it allows users to seamlessly query data from any time in the database history with exactly the same tools as they use for querying the present configuration. Users can also check out workspaces and use them as staging areas for upcoming machine configurations. All access to the CED is through a well-documented API that is translated automatically from the original C++ into native libraries for scripting languages such as Perl, PHP, and Tcl, making access to the CED easy and ubiquitous.
The U.S. Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce this manuscript for U.S. Government purposes.
 
poster icon Poster MOPKN029 [5.239 MB]  
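The introspective, extend-without-altering-tables schema described above can be sketched with a generic element/type/property/value layout. SQLite stands in for Oracle here, and the `Quadrupole` type, `length_m` property and `MQA1L02` element name are hypothetical illustrations, not actual CED content:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    -- Introspective layout: new element types and properties are rows,
    -- not new tables or columns, so the schema never changes.
    CREATE TABLE element_type(id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE property(id INTEGER PRIMARY KEY, type_id INTEGER, name TEXT);
    CREATE TABLE element(id INTEGER PRIMARY KEY, type_id INTEGER, name TEXT);
    CREATE TABLE value(element_id INTEGER, property_id INTEGER, val TEXT);
""")
db.execute("INSERT INTO element_type VALUES (1, 'Quadrupole')")
db.execute("INSERT INTO property VALUES (1, 1, 'length_m')")
db.execute("INSERT INTO element VALUES (1, 1, 'MQA1L02')")
db.execute("INSERT INTO value VALUES (1, 1, '0.3')")

row = db.execute("""
    SELECT e.name, p.name, v.val FROM value v
    JOIN element e ON e.id = v.element_id
    JOIN property p ON p.id = v.property_id
""").fetchone()
```

Adding a wholly new device class is then just more `INSERT` statements, which is what lets the CED evolve without schema migrations.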
 
MOPKS001 Diamond Light Source Booster Fast Orbit Feedback System booster, feedback, storage-ring, synchrotron 160
 
  • S. Gayadeen, S. Duncan
    University of Oxford, Oxford, United Kingdom
  • C. Christou, M.T. Heron, J. Rowland
    Diamond, Oxfordshire, United Kingdom
 
The Fast Orbit Feedback system that has been installed on the Diamond Light Source storage ring has been replicated on the Booster synchrotron in order to provide a test bed for the development of the Storage Ring controller design. To realise this, the Booster is operated in DC mode. The electron beam is regulated in two planes using the Fast Orbit Feedback system, which takes the beam position from 22 beam position monitors in each plane and calculates offsets for 44 corrector power supplies at a sample rate of 10 kHz. This paper describes the design and realization of the controller for the Booster Fast Orbit Feedback, presents results from the implementation and considers future development.  
poster icon Poster MOPKS001 [0.597 MB]  
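One step of such an orbit feedback loop can be sketched as mapping measured BPM positions through an inverted response matrix to corrector offsets. The 2x2 identity response and the gain below are illustrative assumptions, not the actual Booster response matrix:

```python
def correct_orbit(positions, inv_response, gain=1.0):
    """One feedback step: corrector offsets = -gain * R_inv * x.

    A 2x2 toy stands in for the 22-BPM / 44-corrector Booster system;
    the real controller runs this at a 10 kHz sample rate.
    """
    return [-gain * sum(inv_response[i][j] * positions[j]
                        for j in range(len(positions)))
            for i in range(len(inv_response))]

# Identity response for illustration: each corrector maps 1:1 to a BPM.
offsets = correct_orbit([0.2, -0.1], [[1.0, 0.0], [0.0, 1.0]])
```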
 
MOPKS004 NSLS-II Beam Diagnostics Control System diagnostics, interface, electronics, timing 168
 
  • Y. Hu, L.R. Dalesio, K. Ha, O. Singh, H. Xu
    BNL, Upton, Long Island, New York, USA
 
A correct measurement of the NSLS-II beam parameters (beam position, beam size, circulating current, beam emittance, etc.) depends on the effective combination of beam monitors, the control and data acquisition system, and high-level physics applications. This paper will present the EPICS-based control system for NSLS-II diagnostics and give detailed descriptions of the diagnostics controls interfaces, including the classification of diagnostics, the proposed electronics and EPICS IOC platforms, and the interfaces to other subsystems. Device counts in the diagnostics subsystems will also be briefly described.  
poster icon Poster MOPKS004 [0.167 MB]  
 
MOPKS006 Application of Integral-Separated PID Algorithm in Orbit Feedback feedback, closed-orbit, simulation, storage-ring 171
 
  • K. Xuan, X. Bao, C. Li, W. Li, G. Liu, J.G. Wang, L. Wang
    USTC/NSRL, Hefei, Anhui, People's Republic of China
 
The algorithm used in the feedback system has an important influence on the performance of the beam orbit. The PID algorithm is widely used in orbit feedback systems; however, its deficiency is large overshoot under strong perturbations. To overcome this deficiency, an integral-separated PID algorithm was developed: when the closed orbit distortion is too large, the integral action is suspended until the closed orbit distortions fall below a threshold value. The implementation of the integral-separated PID algorithm in MATLAB is described in this paper. The simulation results show that this algorithm can improve the control precision.  
poster icon Poster MOPKS006 [0.091 MB]  
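A minimal sketch of the integral-separation idea as a discrete-time PID step: the integral term is frozen while the error exceeds the threshold, which is what limits overshoot under strong perturbations. The gains and threshold are illustrative, not the values used at NSRL:

```python
def integral_separated_pid(error, state, kp, ki, kd, threshold):
    """One controller step of an integral-separated PID.

    The integral accumulates only while |error| <= threshold; for large
    orbit distortions only the P and D terms act.
    """
    if abs(error) <= threshold:
        state["integral"] += error
    derivative = error - state["prev_error"]
    state["prev_error"] = error
    return kp * error + ki * state["integral"] + kd * derivative

state = {"integral": 0.0, "prev_error": 0.0}
# Large distortion: integral action is suspended, only P (and D) act.
u1 = integral_separated_pid(5.0, state, kp=1.0, ki=0.5, kd=0.0, threshold=1.0)
# Small distortion: integral action resumes.
u2 = integral_separated_pid(0.5, state, kp=1.0, ki=0.5, kd=0.0, threshold=1.0)
```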
 
MOPKS007 Design of a Digital Controller for ALPI 80 MHz Resonators cavity, feedback, FPGA, resonance 174
 
  • S.V. Barabin
    ITEP, Moscow, Russia
  • G. Bassato
    INFN/LNL, Legnaro (PD), Italy
 
We discuss the design of a resonator controller completely based on digital technology. The controller is currently operating at 80 MHz but can be easily adapted to frequencies up to 350 MHz; it can work either in "Generator Driven" or in "Self Excited Loop" mode. The signal processing unit is a commercial board (Bittware T2-Pci) with 4 TigerSharc DSPs and a Xilinx Virtex II-Pro FPGA. The front-end board includes five A/D channels supporting a sampling rate in excess of 100 MS/s and a clock distribution system with a jitter of less than 10 ps, allowing direct sampling of RF signals with no need for analog downconversion. We present the results of some preliminary tests carried out on an 80 MHz quarter wave resonator installed in the ALPI linac at INFN-LNL and discuss possible developments of this project.  
poster icon Poster MOPKS007 [0.931 MB]  
 
MOPKS010 Fast Orbit Correction for the ESRF Storage Ring FPGA, feedback, operation, diagnostics 177
 
  • J.M. Koch, F. Epaud, E. Plouviez, K.B. Scheidt
    ESRF, Grenoble, France
 
  Up to now, at the ESRF, the correction of the orbit position has been performed with two independent systems: one dealing with the slow movements and one correcting the motion in a range of up to 200 Hz, but with a limited number of fast BPMs and steerers. The latter will be removed, and a single system will cover the frequency range from DC to 200 Hz using all 224 BPMs and all 96 steerers. Thanks to the procurement of Libera Brilliance units and the installation of new AC power supplies, it is now possible to access all the beam positions at a frequency of 10 kHz and to drive a small current in the steerers in a 200 Hz bandwidth. The first tests of the correction of the beam position have been performed and will be presented. The data processing will be presented as well, with particular emphasis on the development inside the FPGA.  
 
MOPKS011 Beam Synchronous Data Acquisition for SwissFEL Test Injector timing, data-acquisition, EPICS, real-time 180
 
  • B. Kalantari, T. Korhonen
    Paul Scherrer Institut, 5232 Villigen PSI, Switzerland
 
  Funding: Paul Scherrer Institute
A 250 MeV injector facility at PSI has been constructed to study the scientific and technological challenges of the SwissFEL project. Since in such pulsed machines every beam pulse can in principle have different characteristics, due to varying machine parameters and/or conditions, it is crucial to be able to acquire and distinguish control system data from one pulse to the next. In this paper we describe the technique we have developed to perform beam-synchronous data acquisition at a 100 Hz rate. This has been particularly challenging since it provides a reliable, real-time data acquisition method in a non-real-time control system. We describe how this can be achieved by employing a powerful and flexible timing system with well-defined interfaces to the control system.
 
poster icon Poster MOPKS011 [0.126 MB]  
 
MOPKS012 Design and Test of a Girder Control System at NSRRC laser, network, storage-ring, interface 183
 
  • H.S. Wang, J.-R. Chen, M. L. Chen, K.H. Hsu, W.Y. Lai, S.Y. Perng, Y.L. Tsai, T.C. Tseng
    NSRRC, Hsinchu, Taiwan
 
  A girder control system is proposed to quickly and precisely adjust the displacement and rotation angle of all girders in the storage ring with little manpower at the Taiwan Photon Source (TPS) project at the National Synchrotron Radiation Research Center (NSRRC). In this girder control system, six motorized cam movers supporting a girder are driven on three pedestals to perform six-axis adjustments of the girder. A tiltmeter monitors the pitch and roll of each girder; several touch sensors measure the relative displacement between consecutive girders. Moreover, a laser position sensitive detector (PSD) system measuring the relative displacement between straight-section girders is included in this girder control system. Operators can use subroutines developed in MATLAB to control every local girder control system via the intranet. This paper presents details of the design and tests of the girder control system.  
 
MOPKS013 Beam Spill Structure Feedback Test in HIRFL-CSR feedback, extraction, power-supply, FPGA 186
 
  • R.S. Mao, P. Li, L.Z. Ma, J.X. Wu, J.W. Xia, J.C. Yang, Y.J. Yuan, T.C. Zhao, Z.Z. Zhou
    IMP, Lanzhou, People's Republic of China
 
  The slow-extraction beam from HIRFL-CSR is used in nuclear physics experiments and heavy ion therapy. A 50 Hz ripple and its harmonics are observed in the beam spill. To improve the spill structure, a first control system consisting of a fast Q-magnet and an FPGA-based feedback device was developed and installed in 2010, and spill-structure feedback tests have been started. The commissioning results with the spill feedback system are presented in this paper.  
poster icon Poster MOPKS013 [0.268 MB]  
 
MOPKS014 Architecture and Control of the Fast Orbit Correction for the ESRF Storage Ring network, FPGA, storage-ring, device-server 189
 
  • F. Epaud, J.M. Koch, E. Plouviez
    ESRF, Grenoble, France
 
  Two years ago, the electronics of all 224 Beam Position Monitors (BPMs) of the ESRF storage ring were replaced by commercial Libera Brilliance units to drastically improve the speed and position resolution of the orbit measurement. At the start of this year, all 96 power supplies that drive the orbit steerers were also replaced by new units that now cover a full DC-AC range up to 200 Hz. We are now working on the replacement of the previous fast orbit correction system. The new architecture will also use the 224 Libera Brilliance units, and in particular the 10 kHz optical links handled by the Diamond Communication Controller (DCC), which has now been integrated within the Libera FPGA as a standard option. The 224 Liberas are connected together with the optical links to form a redundant network where the data are broadcast and received by all nodes within 40 μs. The four correction stations will be based on FPGA cards (two per station), also connected to the FOFB network as additional nodes: they use the same DCC firmware on one side and are connected to the steerer power supplies through the RS485 electrical standard on the other side. Finally, two extra nodes have been added to collect data for diagnostics and to provide BPM positions to the beamlines at high rate. This paper will present the network architecture and the control software to operate this new equipment.  
poster icon Poster MOPKS014 [3.242 MB]  
 
MOPKS015 Diagnostics Control Requirements and Applications at NSLS-II diagnostics, feedback, injection, emittance 192
 
  • Y. Hu, L.R. Dalesio, K. Ha, O. Singh
    BNL, Upton, Long Island, New York, USA
 
  To measure various beam parameters such as beam position, beam size, circulating current, and beam emittance, a variety of diagnostic monitors will be deployed at NSLS-II. The Diagnostics Group and the Controls Group are working together on control requirements for the beam monitors. The requirements originate from and are determined by accelerator physics. An attempt is made to analyze and translate physics needs into control requirements. The basic functionalities and applications of diagnostics controls are also presented.  
poster icon Poster MOPKS015 [0.142 MB]  
 
MOPKS019 Electro Optical Beam Diagnostics System and its Control at PSI laser, software, electron, electronics 195
 
  • P. Chevtsov, F. Müller, V. Schlott, D.M. Treyer
    Paul Scherrer Institut, 5232 Villigen PSI, Switzerland
  • P. Peier
    PSI, Villigen, Switzerland
  • B. Steffen
    DESY, Hamburg, Germany
 
  Electro-optical (EO) techniques are very promising non-invasive methods for measuring extremely short (sub-picosecond range) electron bunches. A prototype of an EO Bunch Length Monitoring System (BLMS) for the future SwissFEL facility has been created at PSI. The core of this system is an advanced fiber laser unit with pulse generating and mode locking electronics. The system is integrated into the EPICS-based PSI controls, which significantly simplifies its operation. The paper presents the main components of the BLMS and its performance.  
poster icon Poster MOPKS019 [0.718 MB]  
 
MOPKS020 Low Level RF Control System for Cyclotron 10 MeV feedback, cyclotron, low-level-rf, cavity 199
 
  • J. Huang, D. Li, K.F. Liu
    Huazhong University of Science and Technology (HUST), Wuhan, People's Republic of China
  • T. Hu
    HUST, Wuhan, People's Republic of China
 
  The low level RF control system consists of a 101 MHz signal generator, three feedback loops, an interlock and a protection system. The stability of the control system is one of the most important indicators in the cyclotron design, especially when the whole system runs at high beam current. Due to the size of the RF system and the complexity of the control objects, the low level RF control system must combine basic theory with electronic circuit design to optimize the whole system. The major obstacles in this research, which rarely exist in other control systems, lie in the coupling of beam and resonant cavity, which must be described by the transfer function between beam and cavity, the complex coupling between microwave devices, and the interference signals of all loops. By introducing the three feedback loops (tuning loop, amplitude loop and phase loop) and test results from some parts of the electric circuits, this paper presents the performance specifications and design of the low level RF control system, which may contribute to the design of cyclotrons with high and reliable performance.  
 
MOPKS021 High-speed Data Handling Using Reflective Memory Thread for Tokamak Plasma Control real-time, feedback, plasma, power-supply 203
 
  • S.Y. Park, S.H. Hahn, W.C. Kim
    NFRI, Daejon, Republic of Korea
  • R.D. Johnson, B.G. Penaflor, D.A. Piglowski, M.L. Walker
    GA, San Diego, California, USA
 
  The KSTAR plasma control system (PCS) is a system consisting of electronic devices and control software that identifies and diagnoses various plasma parameters and calculates appropriate control signals for each actuator to keep the plasma sustained in the KSTAR operation regime. Based on the DIII-D PCS, the KSTAR PCS consists of a single multiprocess Linux system which can run up to 8 processes, and both digital and analog data acquisition methods are adopted for fast real-time data acquisition up to 20 kHz. The digital interface uses a well-known shared memory technology, reflective memory (RFM), which supports data transmission up to 2 Gbit/s. RFM is used for interfacing the actuators, 11 PF power supplies and 1 IVC power supply, and the data acquisition system for plasma diagnostics. To handle the fast control of the RFM data transfer, the communication over the RFM with the actuators and the diagnostics system is implemented as a thread. The RFM thread sends commands, such as the target current or voltage calculated by the PCS, to the actuator area of the RFM for plasma control and receives the data measured by the magnet power supplies. The RFM thread also provides a method for monitoring signals in real time by sharing the data of the diagnostics system. The RFM thread completes all data transfers within 50 μs so that data processing can be completed within the fastest control cycle time of the PCS. This paper will describe the design, implementation and performance of the RFM thread and its application to tokamak plasma control.  
poster icon Poster MOPKS021 [1.745 MB]  
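The per-cycle exchange the MOPKS021 abstract attributes to the RFM thread (write set-points to the actuator area, read back measurements, finish inside the control cycle) can be illustrated with a plain Python thread. The dict standing in for the reflective memory, the field names and the timings are all assumptions for illustration, not the KSTAR implementation.

```python
import threading
import time


class RfmThread(threading.Thread):
    """Sketch of the pattern: a dedicated thread moves command and
    measurement data between the control process and a shared memory
    region once per control cycle, finishing inside the cycle period.
    The dict stands in for the reflective memory; names are illustrative."""

    def __init__(self, rfm, commands, cycle_s=0.001, cycles=10):
        super().__init__()
        self.rfm, self.commands = rfm, commands
        self.cycle_s, self.cycles = cycle_s, cycles
        self.measurements = []

    def run(self):
        for _ in range(self.cycles):
            t0 = time.perf_counter()
            # Write the calculated set-points to the actuator area.
            self.rfm["actuator_cmd"] = self.commands()
            # Read back the measured value from the power-supply area.
            self.measurements.append(self.rfm.get("supply_meas"))
            # Sleep out the remainder of the cycle, if any is left.
            remaining = self.cycle_s - (time.perf_counter() - t0)
            if remaining > 0:
                time.sleep(remaining)
```

In the real system the transfer would target RFM board memory and a real-time scheduler rather than a Python dict and `time.sleep`, but the structure of the loop is the same.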
 
MOPKS022 BPM System and Orbit Feedback System Design for the Taiwan Photon Source feedback, FPGA, EPICS, power-supply 207
 
  • C.H. Kuo, J. Chen, Y.-S. Cheng, P.C. Chiu, K.T. Hsu, K.H. Hu, C.Y. Wu
    NSRRC, Hsinchu, Taiwan
 
  The Taiwan Photon Source (TPS) is a 3 GeV synchrotron light source under construction at NSRRC. Latest-generation BPM electronics, with FPGA-enhanced functionality beyond current-generation products, have been adopted; the prototype is under testing. To achieve the design goals of the TPS and eliminate beam motion due to various perturbation sources, the orbit feedback is designed with integrated BPM and corrector control systems. The design and implementation of the BPM system are summarized in this report.  
 
MOPKS023 An Overview of the Active Optics Control Strategy for the Thirty Meter Telescope alignment, optics, real-time, operation 211
 
  • M.J. Sirota, G.Z. Angeli, D.G. MacMynowski
    TMT, Pasadena, California, USA
  • G.A. Chanan
    UCI, Irvine, California, USA
  • M.M. Colavita, C. Lindensmith, C. Shelton, M. Troy
    JPL, Pasadena, California, USA
  • T.S. Mast, J. Nelson
    UCSC, Santa Cruz, USA
  • P.M. Thompson
    STI, Hawthorne, USA
 
  Funding: This work was supported by the Gordon and Betty Moore Foundation
The primary (M1), secondary (M2) and tertiary (M3) mirrors of the Thirty Meter Telescope (TMT), taken together, have over 10,000 degrees of freedom. The vast majority of these are associated with the 492 individual primary mirror segments. The individual segments are converted into the equivalent of a monolithic thirty meter primary mirror via the Alignment and Phasing System (APS) and the M1 Control System (M1CS). In this paper we first provide an introduction to the TMT. We then describe the overall optical alignment and control strategy for the TMT and follow up with additional descriptions of the M1CS and the APS. We conclude with a short description of the TMT error budget process and provide an example of error allocation and predicted performance for wind induced segment jitter.
 
poster icon Poster MOPKS023 [2.318 MB]  
 
MOPKS024 A Digital System for Longitudinal Emittance Blow-Up in the LHC feedback, FPGA, software, synchrotron 215
 
  • M. Jaussi, M. E. Angoletta, P. Baudrenghien, A.C. Butterworth, J. Sanchez-Quesada, E.N. Shaposhnikova, J. Tückmantel
    CERN, Geneva, Switzerland
 
  In order to preserve beam stability above injection energy in the LHC, longitudinal emittance blow-up is performed during the energy ramp by injecting band-limited noise around the synchrotron frequency into the beam phase loop. The noise is generated continuously in software and streamed digitally into the DSP of the Beam Control system. In order to achieve reproducible results, a feedback system on the observed average bunch length controls the strength of the excitation, allowing the operator to simply set a target bunch length. The frequency spectrum of the excitation depends on the desired bunch length, and as it must follow the evolution of the synchrotron frequency spread through the ramp, it is automatically calculated by the LHC settings management software from the energy and RF voltage. The system has been routinely used in LHC operation since August 2010. We present here the details of the implementation in software, FPGA firmware and DSP code, as well as some results with beam.  
poster icon Poster MOPKS024 [0.467 MB]  
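The bunch-length feedback described in the MOPKS024 abstract, which adjusts the noise excitation strength until the observed average bunch length reaches the operator's target, reduces in essence to an integrator with limits. The sketch below is a hedged illustration with made-up gain and clamp values, not the CERN DSP code.

```python
def update_excitation(strength, target_len, measured_len, gain=0.1,
                      s_min=0.0, s_max=1.0):
    """One feedback iteration: increase the noise excitation while the
    measured average bunch length is below the target, decrease it once
    the target is exceeded. Gain and clamp limits are illustrative."""
    strength += gain * (target_len - measured_len)
    # Clamp so the excitation stays within the allowed range.
    return min(max(strength, s_min), s_max)
```

Iterating this update each control period drives the bunch length toward the target without the operator having to tune the excitation by hand.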
 
MOPKS027 Operational Status of the Transverse Multibunch Feedback System at Diamond feedback, FPGA, damping, operation 219
 
  • I. Uzun, M.G. Abbott, M.T. Heron, A.F.D. Morgan, G. Rehm
    Diamond, Oxfordshire, United Kingdom
 
  A transverse multibunch feedback (TMBF) system is in operation at Diamond Light Source to damp coupled-bunch instabilities up to 250 MHz in both the vertical and horizontal planes. It comprises an in-house designed and built analogue front-end combined with a Libera Bunch-by-Bunch feedback processor and output stripline kickers. FPGA-based feedback electronics is used to implement several diagnostic features in addition to the basic feedback functionality. This paper reports on the current operational status of the TMBF system along with its characteristics. Also discussed are operational diagnostic functionalities including continuous measurement of the betatron tune and chromaticity.  
poster icon Poster MOPKS027 [1.899 MB]  
 
MOPKS028 Using TANGO for Controlling a Microfluidic System with Automatic Image Analysis and Droplet Detection TANGO, device-server, software, interface 223
 
  • O. Taché, F. Malloggi
    CEA/DSM/IRAMIS/SIS2M, Gif sur Yvette, France
 
  Microfluidics allows one to manipulate small quantities of fluids, using channel dimensions of several micrometers. At CEA / LIONS, microfluidic chips are used to produce calibrated complex microdrops. This technique requires only a small volume of chemicals, but relies on a number of accurate electronic devices, such as motorized syringes, valves and pressure sensors, and video cameras with fast frame rates coupled to microscopes. We use the TANGO control system for all the heterogeneous equipment in microfluidics experiments and video acquisition. We have developed a set of tools that perform image acquisition and droplet shape detection, determining droplet size, number and speed almost in real time. Using TANGO, we are able to provide feedback to the actuators in order to adjust the microfabrication parameters and the droplet formation timing.  
poster icon Poster MOPKS028 [1.594 MB]  
 
MOPKS029 The CODAC Software Distribution for the ITER Plant Systems software, EPICS, database, operation 227
 
  • F. Di Maio, L. Abadie, C.S. Kim, K. Mahajan, P. Makijarvi, D. Stepanov, N. Utzel, A. Wallander
    ITER Organization, St. Paul lez Durance, France
 
  Most of the systems that constitute the ITER plant will be built and supplied by the seven ITER domestic agencies. These plant systems will require their own Instrumentation and Control (I&C), which will be procured by the various suppliers. To improve the homogeneity of these plant system I&C, the CODAC group, which is in charge of the ITER control system, is promoting standardized solutions at project level and makes available, in support of these standards, the software for the development and testing of the plant system I&C. The CODAC Core System is built by the ITER Organization and distributed to all ITER partners. It includes the ITER standard operating system, RHEL, and the ITER standard control framework, EPICS, as well as some ITER-specific tools, mostly for configuration management, and ITER-specific software modules, such as drivers for standard I/O boards. A process for distribution and support has been in place since the first release in February 2010 and has been continuously improved to support the development and distribution of subsequent versions.  
poster icon Poster MOPKS029 [1.209 MB]  
 
MOPMN001 Beam Sharing between the Therapy and a Secondary User cyclotron, interface, proton, network 231
 
  • K.J. Gajewski
    TSL, Uppsala, Sweden
 
  The 180 MeV proton beam from the cyclotron at The Svedberg Laboratory is primarily used for patient treatment. Because the proton beam is needed only during a small fraction of the time scheduled for treatment, it is possible to divert the beam to another location to be used by a secondary user. The therapy staff (the primary user) control the beam switching process after an initial set-up done by the cyclotron operator. They have an interface that allows them to control the accelerator and the beam line in all aspects needed for performing the treatment. The cyclotron operator is involved only if a problem occurs. The secondary user has its own interface that allows limited access to the accelerator's control system. Using this interface it is possible to start and stop the beam when it is not used for therapy, grant access to the experimental hall and monitor the beam properties. The tools and procedures for beam sharing between the primary and the secondary user are presented in this paper.  
poster icon Poster MOPMN001 [0.924 MB]  
 
MOPMN003 A Bottom-up Approach to Automatically Configured Tango Control Systems database, TANGO, vacuum, hardware 239
 
  • S. Rubio-Manrique, D.B. Beltrán, I. Costa, D.F.C. Fernández-Carreiras, J.V. Gigante, J. Klora, O. Matilla, R. Ranz, J. Ribas, O. Sanchez
    CELLS-ALBA Synchrotron, Cerdanyola del Vallès, Spain
 
  Alba maintains a central repository, so called "Cabling and Controls database" (CCDB), which keeps the inventory of equipment, cables, connections and their configuration and technical specifications. The valuable information kept in this MySQL database enables some tools to automatically create and configure Tango devices and other software components of the control systems of Accelerators, beamlines and laboratories. This paper describes the process involved in this automatic setup.  
poster icon Poster MOPMN003 [0.922 MB]  
 
MOPMN004 An Operational Event Announcer for the LHC Control Centre Using Speech Synthesis timing, software, interface, operation 242
 
  • S.T. Page, R. Alemany-Fernandez
    CERN, Geneva, Switzerland
 
  The LHC island of the CERN Control Centre is a busy working environment with many status displays and running software applications. An audible event announcer was developed in order to provide a simple and efficient method to notify the operations team of events occurring within the many subsystems of the accelerator. The LHC Announcer uses speech synthesis to report messages based upon data received from multiple sources. General accelerator information such as injections, beam energies and beam dumps are derived from data received from the LHC Timing System. Additionally, a software interface is provided that allows other surveillance processes to send messages to the Announcer using the standard control system middleware. Events are divided into categories which the user can enable or disable depending upon their interest. Use of the LHC Announcer is not limited to the Control Centre and is intended to be available to a wide audience, both inside and outside CERN. To accommodate this, it was designed to require no special software beyond a standard web browser. This paper describes the design of the LHC Announcer and how it is integrated into the LHC operational environment.  
poster icon Poster MOPMN004 [1.850 MB]  
 
MOPMN005 ProShell – The MedAustron Accelerator Control Procedure Framework interface, ion, framework, ion-source 246
 
  • R. Moser, A.B. Brett, M. Marchhart, C. Torcato de Matos
    EBG MedAustron, Wr. Neustadt, Austria
  • J. Dedič, S. Sah
    Cosylab, Ljubljana, Slovenia
  • J. Gutleber
    CERN, Geneva, Switzerland
 
  MedAustron is a centre for ion therapy and research currently under construction in Austria. It features a synchrotron particle accelerator for proton and carbon-ion beams. This paper presents the architecture and concepts for implementing a procedure framework called ProShell. Procedures to automate high-level control and analysis tasks for commissioning and during operation are modelled with Petri nets, and user code is implemented with C#. It must be possible to execute procedures and monitor their execution progress remotely. Procedures include starting up devices and subsystems in a controlled manner, configuring and operating O(1000) devices and tuning their operational settings using iterative optimization algorithms. Device interfaces must be extensible to accommodate as-yet unanticipated functionalities. The framework implements a template for procedure-specific graphical interfaces to access device-specific information such as monitoring data. Procedures interact with physical devices through proxy software components that implement one of the following interfaces: (1) a state-less or (2) a state-driven device interface. Components can extend these device interfaces following an object-oriented single inheritance scheme to provide augmented, device-specific interfaces. As only two basic device interfaces need to be defined at an early project stage, devices can be integrated gradually as commissioning progresses. We present the architecture and design of ProShell and explain the programming model by giving the simple example of the ion source spectrum analysis procedure.  
poster icon Poster MOPMN005 [0.948 MB]  
 
MOPMN008 LASSIE: The Large Analogue Signal and Scaling Information Environment for FAIR timing, detector, data-acquisition, diagnostics 250
 
  • T. Hoffmann, H. Bräuning, R. Haseitl
    GSI, Darmstadt, Germany
 
  At FAIR, the Facility for Antiproton and Ion Research, several new accelerators such as the SIS 100, HESR, CR, the interconnecting HEBT beam lines, the S-FRS and experiments will be built. All of these installations are equipped with beam diagnostic devices and other components which deliver time-resolved analogue signals to show the status, quality and performance of the accelerators. These signals can originate from particle detectors such as ionization chambers and plastic scintillators, but also from adapted output signals of transformers, collimators, magnet functions, RF cavities and others. To visualize and precisely correlate the time axis of all input signals, a dedicated FESA-based data acquisition and analysis system named LASSIE, the Large Analogue Signal and Scaling Information Environment, is under development. As the main operation mode of LASSIE, pulse counting with adequate scaler boards is used, without excluding enhancements for ADC, QDC or TDC digitization in the future. The concept, features and challenges of this large distributed DAQ system will be presented.  
poster icon Poster MOPMN008 [7.850 MB]  
 
MOPMN009 First Experience with the MATLAB Middle Layer at ANKA EPICS, software, interface, alignment 253
 
  • S. Marsching
    Aquenos GmbH, Baden-Baden, Germany
  • E. Huttel, M. Klein, A.-S. Müller, N.J. Smale
    KIT, Karlsruhe, Germany
 
  The MATLAB Middle Layer has been adapted for use at ANKA. It was finally commissioned in March 2011. It is used for accelerator physics studies and regular tasks like beam-based alignment and response matrix analysis using LOCO. Furthermore, we intend to study the MATLAB Middle Layer as default orbit correction tool for user operation. We will report on the experience made during the commissioning process and present the latest results obtained while using the MATLAB Middle Layer for machine studies.  
poster icon Poster MOPMN009 [0.646 MB]  
 
MOPMN010 Development of a Surveillance System with Motion Detection and Self-location Capability network, radiation, status, survey 257
 
  • M. Tanigaki, S. Fukutani, Y. Hirai, H. Kawabe, Y. Kobayashi, Y. Kuriyama, M. Miyabe, Y. Morimoto, T. Sano, N. Sato, K. Takamiya
    KURRI, Osaka, Japan
 
  A surveillance system with motion detection and location measurement capabilities has been under development to support effective security control of the facilities in our institute. The surveillance cameras and sensors placed around the facilities and the institute have the primary responsibility for preventing unwanted access to our institute, but in some cases additional temporary surveillance cameras are used for subsidiary purposes. The problems with these additional surveillance cameras are the detection of such unwanted access and the determination of the cameras' respective locations. To eliminate these problems, we are constructing a surveillance camera system with motion detection and self-locating features based on a server-client scheme. A client, consisting of a network camera and Wi-Fi and GPS modules, acquires its location, measured using GPS or the radio waves from surrounding Wi-Fi access points, and then sends its location to a remote server along with the video stream over the network. The server analyzes this information to detect unwanted access and serves the status or alerts on a web-based interactive map for easy access to the information. We report the current status of the development and expected applications of this self-locating system beyond the surveillance system.  
 
MOPMN013 Operational Status Display and Automation Tools for FERMI@Elettra TANGO, operation, status, electron 263
 
  • C. Scafuri
    ELETTRA, Basovizza, Italy
 
  Funding: The work was supported in part by the Italian Ministry of University and Research under grants FIRB-RBAP045JF2 and FIRB-RBAP06AWK3
Detecting and locating faults and malfunctions of an accelerator is a difficult and time-consuming task. The situation is even more difficult during the commissioning phase of a new accelerator, when physicists and operators are still acquiring confidence with the plant. On the other hand, a fault-free machine does not imply that it is ready to run: the definition of "readiness" depends on the expected behavior of the plant. In the case of FERMI@Elettra, in which the electron beam goes to different branches of the machine depending on the programmed activity, the configuration of the plant determines the rules for understanding whether the activity can be carried out or not. To help with the above task and display the global status of the plant, a tool known as the "matrix" has been developed. It is composed of a graphical front-end, which displays a synthetic view of the plant status grouped by subsystem and location along the accelerator, and a back-end made of Tango servers which read the status of the machine devices via the control system and evaluate the rules. The back-end also includes a set of objects known as "sequencers" that perform complex actions automatically for actively switching from one accelerator configuration to another.
 
poster icon Poster MOPMN013 [0.461 MB]  
 
MOPMN014 Detector Control System for the ATLAS Muon Spectrometer And Operational Experience After The First Year of LHC Data Taking detector, monitoring, electronics, hardware 267
 
  • S. Zimmermann
    Albert-Ludwig Universität Freiburg, Freiburg, Germany
  • G. Aielli
    Università di Roma II Tor Vergata, Roma, Italy
  • M. Bindi, A. Polini
    INFN-Bologna, Bologna, Italy
  • S. Bressler, E. Kajomovitz, S. Tarem
    Technion, Haifa, Israel
  • R.G.K. Hart
    NIKHEF, Amsterdam, The Netherlands
  • G. Iakovidis, E. Ikarios, K. Karakostas, S. Leontsinis, E. Mountricha
    National Technical University of Athens, Athens, Greece
 
  Muon reconstruction is a key ingredient in any of the experiments at the Large Hadron Collider (LHC). The muon spectrometer of ATLAS comprises Monitored Drift Tubes (MDTs) and Cathode Strip Chambers (CSCs) for precision tracking, as well as Resistive Plate Chambers (RPCs) and Thin Gap Chambers (TGCs) as muon trigger and for second-coordinate measurement. Together with a strong magnetic field provided by a superconducting toroid magnet and an optical alignment system, a high-precision determination of muon momentum up to the highest particle energies accessible in LHC collisions is provided. The Detector Control System (DCS) of each muon sub-detector technology must efficiently and safely manage several thousands of LV and HV channels, the front-end electronics initialization, as well as monitoring of beam, background, magnetic field and environmental conditions. This contribution will describe the chosen hardware architecture, which uses common technologies as far as possible, and the implemented controls hierarchy. In addition, the muon DCS human machine interface (HMI) layer and operator tools will be covered. Emphasis will be given to reviewing the experience from the first year of LHC and detector operations, and to lessons learned for future large-scale detector control systems. We will also present the automatic procedures put in place during the last year and review the improvements in data-taking efficiency gained from them. Finally, we will describe the role the DCS plays in assessing the quality of data for physics analysis and in online optimization of detector conditions.
On Behalf of the ATLAS Muon Collaboration
 
poster icon Poster MOPMN014 [0.249 MB]  
 
MOPMN015 Multi Channel Applications for Control System Studio (CSS) EPICS, operation, database, storage-ring 271
 
  • K. Shroff, G. Carcassi
    BNL, Upton, Long Island, New York, USA
  • R. Lange
    HZB, Berlin, Germany
 
  Funding: Work supported by U.S. Department of Energy
This talk will present a set of applications for CSS built on top of the services provided by ChannelFinder, a directory service for control systems, and PVManager, a client library for data manipulation and aggregation. The ChannelFinder Viewer allows querying of the ChannelFinder service and the sorting and tagging of the results. The Multi Channel Viewer allows the creation of plots from the live data of a group of channels.
 
poster icon Poster MOPMN015 [0.297 MB]  
 
MOPMN016 The Spiral2 Radiofrequency Command Control interface, cavity, EPICS, LLRF 274
 
  • D.T. Touchard, C. Berthe, P. Gillette, M. Lechartier, E. Lécorché, G. Normand
    GANIL, Caen, France
  • Y. Lussignol, D. Uriot
    CEA/DSM/IRFU, France
 
  Mainly for carrying out nuclear physics experiments, the SPIRAL2 facility based at Caen in France will aim to provide new radioactive rare-ion and high-intensity stable-ion beams. The driver accelerator uses several radiofrequency systems: an RFQ, bunchers and superconducting cavities, driven by independent amplifiers and controlled by digital electronics. This low level radiofrequency subsystem is integrated into a regulated loop driven by the control system. A test of the whole system is foreseen to define and check the computer control interface and applications. This paper describes the interfaces of the different RF equipment to the EPICS-based computer control system. CSS supervision and foreseen high-level XAL/Java-based tuning applications are also considered.  
poster icon Poster MOPMN016 [0.986 MB]  
 
MOPMN018 Toolchain for Online Modeling of the LHC optics, simulation, software, framework 277
 
  • G.J. Müller, X. Buffat, K. Fuchsberger, M. Giovannozzi, S. Redaelli, F. Schmidt
    CERN, Geneva, Switzerland
 
  The control of high intensity beams in a high energy, superconducting machine with complex optics like the CERN Large Hadron Collider (LHC) is challenging not only from the design aspect but also for operation. To support LHC beam commissioning, operation and luminosity production, efforts were recently devoted to the design and implementation of a software infrastructure aimed at using the computing power of the beam dynamics code MAD-X within the framework of the Java-based LHC control and measurement environment. Alongside interfacing to measurement data as well as to settings of the control system, the best knowledge of the machine aperture and optics models is provided. In this paper, we present the status of the toolchain and illustrate how it has been used during commissioning and operation of the LHC. Possible future implementations are also discussed.  
poster icon Poster MOPMN018 [0.562 MB]  
 
MOPMN019 Controlling and Monitoring the Data Flow of the LHCb Read-out and DAQ Network network, detector, monitoring, FPGA 281
 
  • R. Schwemmer, C. Gaspar, N. Neufeld, D. Svantesson
    CERN, Geneva, Switzerland
 
  The LHCb readout uses a set of 320 FPGA based boards as interface between the on-detector hardware and the GbE DAQ network. The boards are the logical Level 1 (L1) read-out electronics and aggregate the experiment's raw data into event fragments that are sent to the DAQ network. To control the many parameters of the read-out boards, an embedded PC is included on each board, connecting to the board's ICs and FPGAs. The data from the L1 boards is sent through an aggregation network into the High Level Trigger farm. The farm comprises approximately 1500 PCs which first assemble the fragments from the L1 boards and then perform a partial reconstruction and selection of the events. In total there are approximately 3500 network connections. Data is pushed through the network and there is no mechanism for resending packets. Loss of data on a small scale is acceptable, but care has to be taken to avoid data loss wherever possible. To monitor and debug losses, different probes are inserted throughout the entire read-out chain to count fragments, packets and their rates at different positions. To keep uniformity throughout the experiment, all control software was developed using the common SCADA software, PVSS, with the JCOP framework as base. The presentation will focus on the low level controls interface developed for the L1 boards and the networking probes, as well as the integration of the high level user interfaces into PVSS. We will show the way in which users and developers interact with the software, configure the hardware and follow the flow of data through the DAQ network.  
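The probe-counter idea lends itself to a small sketch: in a push-only network with no resends, comparing packet counters at successive probe points localizes where loss occurs. This is an illustrative Python sketch, not the LHCb code; probe names and counts are invented:

```python
# Illustrative sketch: probes along the read-out chain count packets at
# successive positions; the per-hop difference between consecutive
# counters tells where packets were lost.

def localize_loss(probe_counts):
    """probe_counts: ordered list of (probe_name, packets_seen) pairs,
    from upstream to downstream. Returns per-hop loss."""
    losses = []
    for (up_name, up), (down_name, down) in zip(probe_counts, probe_counts[1:]):
        losses.append((f"{up_name}->{down_name}", up - down))
    return losses

# Invented example: 2 packets lost before aggregation, 8 before the farm.
chain = [("L1_board", 1_000_000), ("aggregation", 999_998), ("HLT_farm", 999_990)]
per_hop = localize_loss(chain)
```

In practice each hop's counters must be sampled over the same time window for the differences to be meaningful; the sketch assumes a consistent snapshot.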
 
MOPMN020 Integrating Controls Frameworks: Control Systems for NA62 LAV Detector Test Beams framework, detector, experiment, interface 285
 
  • O. Holme, J.A.R. Arroyo Garcia, P. Golonka, M. Gonzalez-Berges, H. Milcent
    CERN, Geneva, Switzerland
  • O. Holme
    ETH, Zurich, Switzerland
 
  The detector control system for the NA62 experiment at CERN, to be ready for physics data-taking in 2014, will be built based on control technologies recommended by the CERN Engineering group. A rich portfolio of these technologies is planned to be showcased and deployed in the final application, and synergy between them is needed. In particular, two approaches to building controls applications need to play in harmony: the use of the high-level application framework called UNICOS, and a bottom-up approach of development based on the components of the JCOP Framework. The aim of combining the features provided by the two frameworks is to avoid duplication of functionality and to minimize the maintenance and development effort for future controls applications. In this paper the results of the integration efforts obtained so far are presented, namely the control applications developed for beam tests of NA62 detector prototypes. Even though the delivered applications are simple, significant conceptual and development work was required to bring about the smooth interplay between the two frameworks, while assuring the possibility of unleashing their full power. A discussion of current open issues is presented, including the viability of the approach for larger-scale applications of high complexity, such as the complete detector control system of the NA62 detector.  
poster icon Poster MOPMN020 [1.464 MB]  
 
MOPMN022 Database Driven Control System Configuration for the PSI Proton Accelerator Facilities database, EPICS, proton, hardware 289
 
  • H. Lutz, D. Anicic
    Paul Scherrer Institut, 5232 Villigen PSI, Switzerland
 
  At PSI there are two facilities with proton cyclotron accelerators. The machine control system for PROSCAN, which is used for medical patient therapy, runs with EPICS. The High Intensity Proton Accelerator (HIPA) mostly runs under the in-house control system ACS; dedicated parts of HIPA are under EPICS control. Both facilities are configured through an Oracle database application suite. This paper presents the concepts and tools used to configure the control system directly from the database-stored configurations. Such an approach contributes to better control system reliability, overview and consistency.  
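The core pattern of database-driven configuration can be sketched in a few lines: device rows in a relational table are rendered into EPICS-style record definitions, so the control system configuration is generated from, rather than maintained alongside, the database. This is a minimal assumption-laden sketch (table layout, device names and fields are invented, and it is not the PSI tooling):

```python
# Minimal sketch of database-driven control system configuration:
# device metadata lives in a relational table; EPICS-style record
# stanzas are generated from it.

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE devices (name TEXT, rectype TEXT, egu TEXT)")
db.executemany("INSERT INTO devices VALUES (?, ?, ?)",
               [("MMAC3:IST", "ai", "uA"),    # invented readback channel
                ("MMAC3:SOL", "ao", "uA")])   # invented setpoint channel

def render_records(conn):
    """Render one record definition per device row, ordered by name."""
    lines = []
    for name, rectype, egu in conn.execute(
            "SELECT name, rectype, egu FROM devices ORDER BY name"):
        lines.append(f'record({rectype}, "{name}") {{\n    field(EGU, "{egu}")\n}}')
    return "\n".join(lines)

db_file_text = render_records(db)
```

The payoff of this design is the one mentioned in the abstract: a single authoritative source means regenerating the configuration is always consistent with what the database says.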
poster icon Poster MOPMN022 [0.992 MB]  
 
MOPMN023 Preliminary Design and Integration of EPICS Operation Interface for the Taiwan Photon Source operation, EPICS, interface, GUI 292
 
  • Y.-S. Cheng, J. Chen, P.C. Chiu, K.T. Hsu, C.H. Kuo, C.Y. Liao, C.Y. Wu
    NSRRC, Hsinchu, Taiwan
 
  The TPS (Taiwan Photon Source) is a latest generation 3 GeV synchrotron light source, which has been under construction since 2010. The EPICS framework is adopted as the control system infrastructure for the TPS. The EPICS IOCs (Input Output Controllers) and various database records have been gradually implemented to control and monitor each subsystem of the TPS. The subsystems include timing, power supplies, motion controllers, miscellaneous Ethernet-compliant devices, etc. Through EPICS PV (Process Variable) channel access, remote I/O data can be accessed over the Ethernet interface with the available graphical toolkits, such as EDM (Extensible Display Manager) and MATLAB. The operation interfaces mainly include functions for setting, reading, save and restore, etc. The integration of operation interfaces will depend upon the properties of each subsystem. In addition, a centralized management method is utilized to serve every client from file servers, in order to maintain consistent versions of the related EPICS files. These efforts are summarized in this report.  
 
MOPMN025 New SPring-8 Control Room: Towards Unified Operation with SACLA and SPring-8 II Era. operation, status, network, laser 296
 
  • A. Yamashita, R. Fujihara, N. Hosoda, Y. Ishizawa, H. Kimura, T. Masuda, C. Saji, T. Sugimoto, S. Suzuki, M. Takao, R. Tanaka
    JASRI/SPring-8, Hyogo-ken, Japan
  • T. Fukui, Y. Otake
    RIKEN/SPring-8, Hyogo, Japan
 
  We have renovated the SPring-8 control room. This is the first major renovation since its inauguration in 1997. In 2011, the construction of SACLA (SPring-8 Angstrom Compact Laser Accelerator) was completed, and it is planned to be controlled from the new control room for close cooperative operation with the SPring-8 storage ring. It is expected that the future SPring-8 II project will require more workstations than the current control room provides. We have extended the control room area for these foreseen projects. In this renovation we employed new technology which did not exist 14 years ago, such as large LCDs and silent, liquid-cooled workstations, for a comfortable operation environment. We have incorporated into the design many ideas obtained during 14 years of experience. Operation in the new control room began in April 2011, after a short construction period.  
 
MOPMN027 The LHC Sequencer database, GUI, operation, injection 300
 
  • R. Alemany-Fernandez, V. Baggiolini, R. Gorbonosov, D. Khasbulatov, M. Lamont, P. Le Roux, C. Roderick
    CERN, Geneva, Switzerland
 
  The Large Hadron Collider (LHC) at CERN is a highly complex system made of many different sub-systems whose operation implies the execution of many tasks with stringent constraints on the order and duration of the execution. To be able to operate such a system in the most efficient and reliable way, the operators in the CERN control room use a high level control system: the LHC Sequencer. The LHC Sequencer system is composed of several components, including an Oracle database where operational sequences are configured, a core server that orchestrates the execution of the sequences, and two graphical user interfaces: one for sequence editing, and another for sequence execution. This paper describes the architecture of the LHC Sequencer system, and how the sequences are prepared and used for LHC operation.  
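The database/core-server split described above can be sketched as: a sequence is an ordered list of configured steps, and the core executes them in order while recording a trace that an execution GUI could display. A toy Python sketch, with all step names and actions invented (this is not the CERN implementation):

```python
# Toy sequencer in the spirit described above: ordered steps loaded
# from configuration are executed in order against a shared context,
# and an execution trace is recorded.

def run_sequence(sequence, context):
    """Execute each step's action in order; return the list of step names run."""
    trace = []
    for step in sequence:
        step["action"](context)      # strict ordering: one step at a time
        trace.append(step["name"])
    return trace

# Invented sequence, standing in for rows configured in the database.
sequence = [
    {"name": "ramp_magnets",   "action": lambda ctx: ctx.update(energy="3.5 TeV")},
    {"name": "squeeze_optics", "action": lambda ctx: ctx.update(beta_star="1.5 m")},
]

machine = {}
trace = run_sequence(sequence, machine)
```

A real sequencer adds what the toy omits: per-step error handling, skipping and breakpoints for the operator, and persistence of the execution state.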
poster icon Poster MOPMN027 [2.163 MB]  
 
MOPMN028 Automated Voltage Control in LHCb detector, experiment, status, high-voltage 304
 
  • L.G. Cardoso, C. Gaspar, R. Jacobsson
    CERN, Geneva, Switzerland
 
  LHCb is one of the 4 LHC experiments. In order to ensure the safety of the detector and to maximize efficiency, LHCb needs to coordinate its own operations, in particular the voltage configuration of the different sub-detectors, according to the accelerator status. Control software has been developed for this purpose, based on the Finite State Machine toolkit and the SCADA system used for control throughout LHCb (and the other LHC experiments). This software efficiently drives both the Low Voltage (LV) and High Voltage (HV) systems of the 10 different sub-detectors that constitute LHCb, setting each sub-system to the required voltage (easily configurable at run-time) based on the accelerator state. The control software is also responsible for monitoring the state of the sub-detector voltages and adding it to the event data in the form of status bits. Safe and yet flexible operation of the LHCb detector has been obtained, and automatic actions, triggered by the state changes of the accelerator, have been implemented. This paper details the implementation of the voltage control software, its flexible run-time configuration and its usage in the LHCb experiment.  
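The essence of such state-driven voltage control is a run-time configurable table mapping accelerator states to per-sub-detector voltage settings, with a transition triggered on each state change. A hedged Python sketch of the idea only (state and setting names are invented, and this is not the LHCb FSM code):

```python
# Sketch: a configurable table maps the accelerator state to the
# voltage configuration a sub-detector should take; unknown states
# fall back to the safe setting.

VOLTAGE_FOR_STATE = {           # run-time configurable mapping (illustrative)
    "INJECTION":    "HV_STANDBY",
    "STABLE_BEAMS": "HV_ON",
    "BEAM_DUMP":    "HV_STANDBY",
}

class SubDetector:
    def __init__(self, name):
        self.name = name
        self.hv_state = "HV_OFF"

    def on_accelerator_state(self, state):
        # Automatic action on accelerator state change; default to the
        # safe state for any unlisted accelerator mode.
        self.hv_state = VOLTAGE_FOR_STATE.get(state, "HV_OFF")

velo = SubDetector("VELO")
velo.on_accelerator_state("STABLE_BEAMS")
```

Keeping the mapping as data rather than code is what makes the behaviour "easily configurable at run-time", as the abstract emphasizes.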
poster icon Poster MOPMN028 [0.479 MB]  
 
MOPMN029 Spiral2 Control Command: First High-level Java Applications Based on the OPEN-XAL Library database, software, EPICS, ion 308
 
  • P. Gillette, E. Lemaître, G. Normand, L. Philippe
    GANIL, Caen, France
 
  The Radioactive Ion Beam SPIRAL2 facility will be based on a superconducting driver providing deuteron or heavy ion beams at different energies and intensities. Using the ISOL method, exotic nuclei beams will then be sent either to new physics facilities or to the existing GANIL experimental areas. To tune this large range of beams, high-level applications will be mainly developed in the Java language. The choice of the OPEN-XAL application framework, developed at the Spallation Neutron Source (SNS), has proven to be very efficient and greatly helps us design our first software pieces to tune the accelerator. The first part of this paper presents some new applications: "Minimisation", which aims at optimizing a section of the accelerator; a general purpose software named "Hook" for interacting with equipment of any kind; and an application called "Profils" to visualize and control the SPIRAL2 beam wire harps. As tuning operation has to deal with configuration and archiving issues, databases are an effective way to manage data. Therefore, two databases are being developed to address these problems for the SPIRAL2 control command: one is in charge of device configuration upstream of the EPICS databases, while the other is in charge of accelerator configuration (lattice, optics and sets of values). The last part of this paper describes these databases and how Java applications will interact with them.  
poster icon Poster MOPMN029 [1.654 MB]  
 
MOPMS001 The New Control System for the Vacuum of ISOLDE vacuum, interlocks, hardware, software 312
 
  • S. Blanchard, F. Bellorini, F.B. Bernard, E. Blanco Vinuela, P. Gomes, H. Vestergard, D. Willeman
    CERN, Geneva, Switzerland
 
  The On-Line Isotope Mass Separator (ISOLDE) is a facility dedicated to the production of radioactive ion beams for nuclear and atomic physics. From the ISOLDE vacuum sectors to the pressurized gas storage tanks there are up to five stages of pumping, for a total of more than one hundred pumps, including turbo-molecular, cryo, dry, membrane and oil pumps. The ISOLDE vacuum control system is critical; the volatile radioactive elements present in the exhaust gases and the High and Ultra High Vacuum pressure specifications require a complex control and interlock system. This paper describes the reengineering of the control system developed using the CERN UNICOS-CPC framework. An additional challenge has been the usage of UNICOS-CPC in the vacuum domain for the first time. The process automation provides multiple operating modes (rough pumping, bake-out, high vacuum pumping, regeneration for cryo-pumped sectors, venting, etc.). The control system is composed of local controllers driven by PLCs (logic, interlocks) and a SCADA application (operation, alarm monitoring and diagnostics).  
poster icon Poster MOPMS001 [4.105 MB]  
 
MOPMS002 LHC Survey Laser Tracker Controls Renovation software, laser, interface, hardware 316
 
  • C. Charrondière, M. Nybø
    CERN, Geneva, Switzerland
 
  The LHC survey laser tracker control system is based on an industrial software package (Axyz) from Leica Geosystems™ that has an interface to Visual Basic 6.0™, which we used to automate the geometric measurements for the LHC magnets. As the Axyz package is no longer supported and the Visual Basic 6.0™ interface would need to be changed to Visual Basic .NET™, we have taken the decision to recode the automation application in LabVIEW™, interfacing to the PC-DMIS software proposed by Leica Geosystems. This presentation describes the existing equipment, interface and application, showing the reasons for our decision to move to PC-DMIS and LabVIEW. We present the experience with the first prototype and make a comparison with the legacy system.  
poster icon Poster MOPMS002 [1.812 MB]  
 
MOPMS003 The Evolution of the Control System for the Electromagnetic Calorimeter of the Compact Muon Solenoid Experiment at the Large Hadron Collider software, detector, hardware, interface 319
 
  • O. Holme, D.R.S. Di Calafiori, G. Dissertori, W. Lustermann
    ETH, Zurich, Switzerland
  • S. Zelepoukine
    UW-Madison/PD, Madison, Wisconsin, USA
 
  Funding: Swiss National Science Foundation (SNF)
This paper discusses the evolution of the Detector Control System (DCS) designed and implemented for the Electromagnetic Calorimeter (ECAL) of the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) as well as the operational experience acquired during the LHC physics data taking periods of 2010 and 2011. The current implementation in terms of functionality and planned hardware upgrades are presented. Furthermore, a project for reducing the long-term software maintenance, including a year-long detailed analysis of the existing applications, is put forward and the current outcomes which have informed the design decisions for the next CMS ECAL DCS software generation are described. The main goals for the new version are to minimize external dependencies enabling smooth migration to new hardware and software platforms and to maintain the existing functionality whilst substantially reducing support and maintenance effort through homogenization, simplification and standardization of the control system software.
 
poster icon Poster MOPMS003 [3.508 MB]  
 
MOPMS004 First Experience with VMware Servers at HLS hardware, database, brilliance, network 323
 
  • G. Liu, X. Bao, C. Li, J.G. Wang, K. Xuan
    USTC/NSRL, Hefei, Anhui, People's Republic of China
 
  Hefei Light Source (HLS) is a dedicated second generation VUV light source, which was designed and constructed two decades ago. In order to improve the performance of HLS, especially achieving higher brilliance and increasing the number of straight sections, an upgrade project is underway, and accordingly the new control system is under construction. VMware vSphere 4 Enterprise Plus is used to build the server system for the HLS control system. Four DELL PowerEdge R710 rack servers and one DELL EqualLogic PS6000E iSCSI SAN comprise the hardware platform. Several kinds of servers, such as the file server, web server, database server and NIS servers, together with the softIOC applications, are all integrated into this virtualization platform. A prototype softIOC has been set up and its performance is also given in this paper. High availability and flexibility are achieved at low cost.  
poster icon Poster MOPMS004 [0.463 MB]  
 
MOPMS005 The Upgraded Corrector Control Subsystem for the Nuclotron Main Magnetic Field power-supply, status, software, operation 326
 
  • V. Andreev, V. Isadov, A. Kirichenko, S. Romanov, G.V. Trubnikov, V. Volkov
    JINR, Dubna, Moscow Region, Russia
 
  This report discusses a control subsystem of 40 main magnetic field correctors which is a part of the superconducting synchrotron Nuclotron Control System. The subsystem is used in static and dynamic (corrector's current depends on the magnetic field value) modes. Development of the subsystem is performed within the bounds of the Nuclotron-NICA project. Principles of digital (PSMBus/RS-485 protocol) and analog control of the correctors' power supplies, current monitoring, remote control of the subsystem via IP network, are also presented. The first results of the subsystem commissioning are given.  
poster icon Poster MOPMS005 [1.395 MB]  
 
MOPMS006 SARAF Beam Lines Control Systems Design vacuum, operation, status, hardware 329
 
  • E. Reinfeld, I. Eliyahu, I.G. Gertz, A. Grin, A. Kreisler, A. Perry, L. Weissman
    Soreq NRC, Yavne, Israel
 
  The first beam lines addition to the SARAF facility was completed in phase I and introduced new hardware to be controlled. This article will describe the beam lines vacuum, magnets and diagnostics control systems and the design methodology used to achieve a reliable and reusable control system. The vacuum control systems of the accelerator and beam lines have been integrated into one vacuum control system which controls all the vacuum control hardware for both the accelerator and beam lines. The new system fixes legacy issues and is designed for modularity and simple configuration. Several types of magnetic lenses have been introduced to the new beam line to control the beam direction and optimally focus it on the target. The control system was designed to be modular so that magnets can be quickly and simply inserted or removed. The diagnostics systems control the diagnostics devices used in the beam lines including data acquisition and measurement. Some of the older control systems were improved and redesigned using modern control hardware and software. The above systems were successfully integrated in the accelerator and are used during beam activation.  
poster icon Poster MOPMS006 [2.537 MB]  
 
MOPMS007 Deep-Seated Cancer Treatment Spot-Scanning Control System heavy-ion, database, hardware, ion 333
 
  • W. Zhang, S. An, G.H. Li, W.F. Liu, W.M. Qiao, Y.P. Wang, F. Yang
    IMP, Lanzhou, People's Republic of China
 
  The system is mainly composed of hardware: a power supply controller that scans the waveform data for a given tumor shape, dose counting cards, and an event generator system. The software consists of the following components: a system generating the tumor shape and the corresponding waveform data, the waveform controller (ARM and DSP) program, the counting card FPGA code, and a COM program for event and data synchronization and transmission.  
 
MOPMS008 Control of the SARAF High Intensity CW Proton Beam Target Systems target, experiment, proton, vacuum 336
 
  • I. Eliyahu, D. Berkovits, M. Bisyakoev, I.G. Gertz, S. Halfon, N. Hazenshprung, D. Kijel, E. Reinfeld, I. Silverman, L. Weissman
    Soreq NRC, Yavne, Israel
 
  The first beam line addition to the SARAF facility was completed in Phase I. Two experiments are planned for this new beam line: the Liquid Lithium target and the Foils target. We are currently building the hardware and software for their control systems. The Liquid Lithium target is planned to be a powerful neutron source for the accelerator, based on the proton beam of SARAF Phase I. The concept of this target is based on liquid lithium that spins and produces neutrons via the 7Li(p,n)7Be reaction. This target was successfully tested in the laboratory and is intended to be integrated into the accelerator beam line and the control system this year. The Foils target is planned for a radiation experiment designed to examine the problem of radiation damage to metallic foils. To accomplish this we have built a radiation system that enables us to test the foils. The control system includes various diagnostic elements (vacuum, motor control, temperature, etc.) for the two targets mentioned above. These systems were built to be modular, so that in the future new targets can be quickly and simply inserted. This article describes the different control systems for the two targets, as well as the design methodology used to achieve reliable and reusable control of those targets.  
poster icon Poster MOPMS008 [1.391 MB]  
 
MOPMS009 IFMIF LLRF Control System Architecture Based on Epics EPICS, LLRF, interface, database 339
 
  • J.C. Calvo, A. Ibarra, A. Salom
    CIEMAT, Madrid, Spain
  • M.A. Patricio
    UCM, Colmenarejo, Spain
  • M.L. Rivers
    ANL, Argonne, USA
 
  The IFMIF-EVEDA (International Fusion Materials Irradiation Facility - Engineering Validation and Engineering Design Activity) linear accelerator will be a 9 MeV, 125 mA CW (Continuous Wave) deuteron accelerator prototype to validate the technical options of the accelerator design for IFMIF. The RF (Radio Frequency) power system of IFMIF-EVEDA consists of 18 RF chains working at 175 MHz, each based on three amplification stages. The LLRF system provides the RF drive input of the RF plants. It controls the amplitude and phase of this signal, to be synchronized with the beam, and it also controls the resonance frequency of the cavities. The system is based on a commercial cPCI FPGA board provided by Lyrtech and controlled by a Windows host PC. For this purpose, it is mandatory to connect the cPCI FPGA board to an EPICS Channel Access, building an IOC (Input Output Controller) between the Lyrtech board and EPICS. A new software architecture for designing a device support, using the asynPortDriver class and CSS as a GUI (Graphical User Interface), is presented.  
poster icon Poster MOPMS009 [2.763 MB]  
 
MOPMS010 LANSCE Control System Front-End and Infrastructure Hardware Upgrades network, linac, EPICS, hardware 343
 
  • M. Pieck, D. Baros, C.D. Hatch, P.S. Marroquin, P.D. Olivas, F.E. Shelley, D.S. Warren, W. Winton
    LANL, Los Alamos, New Mexico, USA
 
  Funding: This work has benefited from the use of LANSCE at LANL. This facility is funded by the US DoE and operated by Los Alamos National Security for NSSA, Contract DE-AC52-06NA25396. LA-UR-11-10228
The Los Alamos Neutron Science Center (LANSCE) linear accelerator drives user facilities for isotope production, proton radiography, ultra-cold neutrons, weapons neutron research and various sciences using neutron scattering. The LANSCE Control System (LCS), which is in part 30 years old, provides control and data monitoring for most devices in the linac and for some of its associated experimental-area beam lines. In Fiscal Year 2011, the control system went through an upgrade process that affected different areas of the LCS. We improved our network infrastructure and we converted part of our front-end control system hardware to Allen-Bradley ControlLogix 5000 and National Instruments CompactRIO programmable automation controllers (PACs). In this paper, we discuss what we have done, what we have learned about upgrading the existing control system, and how this will affect our future plans.
 
 
MOPMS013 Progress in the Conversion of the In-house Developed Control System to EPICS and related technologies at iThemba LABS EPICS, LabView, interface, hardware 347
 
  • I.H. Kohler, M.A. Crombie, C. Ellis, M.E. Hogan, H.W. Mostert, M. Mvungi, C. Oliva, J.V. Pilcher, N. Stodart
    iThemba LABS, Somerset West, South Africa
 
  This paper highlights challenges associated with upgrading the iThemba LABS control system. Issues include maintaining an ageing control system based on a LAN of PCs running OS/2, using in-house developed C code, with hardware interfacing consisting of elderly CAMAC and locally manufactured SABUS [1] modules. The developments around integrating the local hardware into EPICS, running both systems in parallel during the transition period, and the inclusion of other environments like LabVIEW are discussed. It is concluded that it was a good decision to base the underlying intercommunication on Channel Access and to move the majority of process variables over to EPICS, given that it is an international standard, less dependent on a handful of local developers, and enjoys the support of a very active world community.
[1] SABUS - a collaboration between Iskor (PTY) Ltd. and the CSIR (Council for Scientific and Industrial Research) (1980)
 
poster icon Poster MOPMS013 [24.327 MB]  
 
MOPMS014 GSI Operation Software: Migration from OpenVMS to Linux software, Linux, operation, linac 351
 
  • R. Huhmann, G. Fröhlich, S. Jülicher, V.RW. Schaa
    GSI, Darmstadt, Germany
 
  The current operation software at GSI, controlling the linac, beam transfer lines, synchrotron and storage ring, has been developed over a period of more than two decades using OpenVMS, now on Alpha workstations. The GSI accelerator facilities will serve as an injector chain for the new FAIR accelerator complex, for which a control system is currently being developed. To enable reuse and integration of parts of the distributed GSI software system, in particular the linac operation software, within the FAIR control system, the corresponding software components must be migrated to Linux. Interoperability with FAIR controls applications is achieved by adding a generic middleware interface accessible from Java applications. For porting applications to Linux, a set of libraries and tools has been developed covering the necessary OpenVMS system functionality. Currently, core applications and services are already ported or rewritten and functionally tested, but not yet in operational use. This paper presents the current status of the project and concepts for putting the migrated software into operation.  
 
MOPMS016 The Control System of CERN Accelerators Vacuum (Current Status and Recent Improvements) vacuum, interface, status, interlocks 354
 
  • P. Gomes, F. Antoniotti, S. Blanchard, M. Boccioli, G. Girardot, H. Vestergard
    CERN, Geneva, Switzerland
  • L. Kopylov, M.S. Mikheev
    IHEP Protvino, Protvino, Moscow Region, Russia
 
  The vacuum control system of most of the CERN accelerators is based on Siemens PLCs and on the PVSS SCADA. The application software for both PLC and SCADA started to be developed specifically by the vacuum group; with time, it has included a growing number of building blocks from the UNICOS framework. After the transition from the LHC commissioning phase to its regular operation, there have been a number of additions and improvements to the vacuum control system, driven by new technical requirements and by feedback from the accelerator operators and vacuum specialists. New functions have been implemented in the PLC and SCADA layers: automatic restart of pumping groups after power failure; control of the solenoids added to reduce e-cloud effects; and PLC power supply diagnostics. The automatic recognition and integration of mobile slave PLCs has been extended to allow the quick installation of pumping groups with the electronics kept in radiation-free zones. The ergonomics and navigation of the SCADA application have been enhanced; new tools have been developed for interlock analysis, and for device listing and selection; web pages have been created summarizing the values and status of the system. The graphical interface for Windows clients has been upgraded from ActiveX to Qt, and the PVSS servers will soon be moved from Windows to Linux.  
poster icon Poster MOPMS016 [113.929 MB]  
 
MOPMS020 High Intensity Proton Accelerator Controls Network Upgrade network, monitoring, operation, proton 361
 
  • R.A. Krempaska, A.G. Bertrand, F. Lendzian, H. Lutz
    Paul Scherrer Institut, 5232 Villigen PSI, Switzerland
 
  The High Intensity Proton Accelerator (HIPA) control system network is spread through about six buildings and has grown historically in an unorganized way. It consisted of about 25 network switches, 150 nodes and 20 operator consoles. The heterogeneous hardware infrastructure and the lack of documentation and component overview could no longer guarantee the reliability of the control system and facility operation. Therefore, a new network was needed, based on a modern network topology and PSI standard hardware, with monitoring, detailed documentation and overview. We present how we successfully achieved this goal and the advantages of a clean and well documented network infrastructure.  
poster icon Poster MOPMS020 [0.761 MB]  
 
MOPMS021 Detector Control System of the ATLAS Insertable B-Layer detector, monitoring, software, hardware 364
 
  • S. Kersten, P. Kind, K. Lantzsch, P. Mättig, C. Zeitnitz
    Bergische Universität Wuppertal, Wuppertal, Germany
  • M. Citterio, C. Meroni
    Universita' degli Studi di Milano e INFN, Milano, Italy
  • F. Gensolen
    CPPM, Marseille, France
  • S. Kovalenko
    CERN, Geneva, Switzerland
  • B. Verlaat
    NIKHEF, Amsterdam, The Netherlands
 
  To improve the tracking robustness and precision of the ATLAS inner tracker, an additional fourth pixel layer is foreseen, called the Insertable B-Layer (IBL). It will be installed between the innermost present Pixel layer and a new, smaller beam pipe, and is presently under construction. As no access will be available once it is installed in the experiment, a highly reliable control system is required. It has to supply the detector with all entities required for operation and protect it at all times. Design constraints are the high power density inside the detector volume, the sensitivity of the sensors to heat-ups, and the protection of the front-end electronics against transients. We present the architecture of the control system with an emphasis on the CO2 cooling system, the power supply system and the protection strategies. As we aim for common operation of the Pixel and IBL detectors, the integration of the IBL control system into the Pixel one is discussed as well.  
 
MOPMS023 LHC Magnet Test Benches Controls Renovation network, Linux, hardware, interface 368
 
  • A. Raimondo, O.O. Andreassen, D. Kudryavtsev, S.T. Page, A. Rijllart, E. Zorin
    CERN, Geneva, Switzerland
 
  The LHC magnet test bench controls were designed in 1996. They were based on VME data acquisition systems, with control and interlock systems based on Siemens PLCs. During a review of the renovation of superconducting laboratories at CERN in 2009, it was decided to replace the VME systems with PXI and the obsolete Sun/Solaris workstations with Linux PCs. This presentation covers the requirements for the new systems in terms of functionality, security, channel count, sampling frequency and precision. We report on the experience with the commissioning of the first series of fixed and mobile measurement systems upgraded to this new platform, compared to the old systems. We also include the experience with the renovated control room.  
poster icon Poster MOPMS023 [1.310 MB]  
 
MOPMS024 Evolution of the Argonne Tandem Linear Accelerator System (ATLAS) Control System software, distributed, database, hardware 371
 
  • M.A. Power, F.H. Munson
    ANL, Argonne, USA
 
  Funding: This work was supported by the U.S. Department of Energy, Office of Nuclear Physics, under Contract No. DE-AC02-06CH11357.
Given that the Argonne Tandem Linac Accelerator System (ATLAS) recently celebrated its 25th anniversary, this paper explores the past, present and future of the ATLAS control system and how it has evolved along with the accelerator and control system technology. ATLAS as we know it today originated with a Tandem Van de Graaff in the 1960s. With the addition of the Booster section in the late 1970s came the first computerized control. ATLAS itself was placed into service on June 25, 1985 and was the world's first superconducting linear accelerator for ions. Since its dedication as a National User Facility, more than a thousand experiments by more than 2,000 users worldwide have taken advantage of the unique capabilities it provides. Today, ATLAS continues to be a user facility for physicists who study the particles that form the heart of atoms. Its most recent addition, CARIBU (Californium Rare Isotope Breeder Upgrade), creates special beams that feed into ATLAS. ATLAS is similar to a living organism, changing and responding to new technological challenges and research needs. As it continues to evolve, so does the control system: from the original days using a DEC PDP-11/34 computer and two CAMAC crates, to a DEC Alpha computer running Vsystem software and more than twenty CAMAC crates, to distributed computers and VME systems. Further upgrades that will continue to evolve the control system are also in the planning stages.
 
poster icon Poster MOPMS024 [2.845 MB]  
 
MOPMS025 Migration from OPC-DA to OPC-UA Windows, Linux, toolkit, embedded 374
 
  • B. Farnham, R. Barillère
    CERN, Geneva, Switzerland
 
  The OPC-DA specification has been a highly successful interoperability standard for process automation since 1996, allowing communication between any compliant components regardless of vendor. CERN relies on OPC-DA server implementations from various third-party vendors, which provide a standard interface to their hardware. The OPC Foundation has finalized the OPC-UA specification, and OPC-UA implementations are now starting to gather momentum. This presentation gives a brief overview of the headline features of OPC-UA, compares it with OPC-DA, and outlines the necessity of migrating away from OPC-DA and the motivation for adopting OPC-UA. Feedback from research into the availability of tools and testing utilities will be presented, together with a practical overview of what will be required, from a computing perspective, to run OPC-UA clients and servers in the CERN network.  
poster icon Poster MOPMS025 [1.103 MB]  
 
MOPMS026 J-PARC Control toward Future Reliable Operation EPICS, operation, linac, GUI 378
 
  • N. Kamikubota, N. Yamamoto
    J-PARC, KEK & JAEA, Ibaraki-ken, Japan
  • S.F. Fukuta, D. Takahashi
    MELCO SC, Tsukuba, Japan
  • T. Iitsuka, S. Motohashi, M. Takagi, S.Y. Yoshida
    Kanto Information Service (KIS), Accelerator Group, Ibaraki, Japan
  • T. Ishiyama
    KEK/JAEA, Ibaraki-Ken, Japan
  • Y. Ito, H. Sakaki
    JAEA, Ibaraki-ken, Japan
  • Y. Kato, M. Kawase, N. Kikuzawa, H. Sako, K.C. Sato, H. Takahashi, H. Yoshikawa
    JAEA/J-PARC, Tokai-Mura, Naka-Gun, Ibaraki-Ken, Japan
  • T. Katoh, H. Nakagawa, J.-I. Odagiri, T. Suzuki, S. Yamada
    KEK, Ibaraki, Japan
  • H. Nemoto
    ACMOS INC., Tokai-mura, Ibaraki, Japan
 
  J-PARC accelerator complex comprises Linac, 3-GeV RCS (Rapid Cycle Synchrotron), and 30-GeV MR (Main Ring). The J-PARC is a joint project between JAEA and KEK. Two control systems, one for Linac and RCS and another for MR, were developed by two institutes. Both control systems use the EPICS toolkit, thus, inter-operation between two systems is possible. After the first beam in November, 2006, beam commissioning and operation have been successful. However, operation experience shows that two control systems often make operators distressed: for example, different GUI look-and-feels, separated alarm screens, independent archive systems, and so on. Considering demands of further power upgrade and longer beam delivery, we need something new, which is easy to understand for operators. It is essential to improve reliability of operation. We, two control groups, started to discuss future directions of our control systems. Ideas to develop common GUI screens of status and alarms, and to develop interfaces to connect archive systems to each other, are discussed. Progress will be reported.  
 
MOPMS028 CSNS Timing System Prototype timing, EPICS, operation, interface 386
 
  • G.L. Xu, G. Lei, L. Wang, Y.L. Zhang, P. Zhu
    IHEP Beijing, Beijing, People's Republic of China
 
  The timing system is an important part of CSNS. Timing system prototype development is based on the Event System 230 series. Two debug platforms are used: one with EPICS base 3.14.8, where the IOC is an MVME5100 running VxWorks 5.5; the other with EPICS base 3.13, using VxWorks 5.4. The prototype work included driver debugging and tests of new EVG/EVR-230 features, such as CML output signals with signal-cycle delays in high-frequency steps, the use of the interlock modules, interconnection of the CML and TTL outputs, and data transmission functions. Finally, a database exploiting the new features was programmed in order to realize the OPI.  
poster icon Poster MOPMS028 [0.434 MB]  
 
MOPMS029 The BPM DAQ System Upgrade for SuperKEKB Injector Linac linac, emittance, electron, positron 389
 
  • M. Satoh, K. Furukawa, F. Miyahara, T. Suwada
    KEK, Ibaraki, Japan
  • T. Kudou, S. Kusano
    MELCO SC, Tsukuba, Japan
 
  The KEK injector linac provides beams to four different rings: the KEKB high-energy ring (HER; 8 GeV electrons), the KEKB low-energy ring (LER; 3.5 GeV positrons), the Photon Factory ring (PF; 2.5 GeV electrons), and the Advanced Ring for Pulse X-rays (PF-AR; 3 GeV electrons). For the three rings other than PF-AR, simultaneous top-up injection has been in operation since April 2009. In simultaneous top-up operation, common DC magnet settings are used for beams with different energies and amounts of charge, whereas different optimized RF timing and phase settings are applied to each beam acceleration by means of fast low-level RF (LLRF) phase and trigger delay control at up to 50 Hz. The non-destructive beam position monitor (BPM) is an indispensable diagnostic tool for stable beam operation. In the KEK linac, approximately nineteen BPMs with strip-line electrodes are used for beam orbit measurement and feedback. In addition, some of them are also used for beam energy feedback loops. The current DAQ system consists of digital oscilloscopes (Tektronix DPO7104, 10 GSa/s). The signal from each electrode is analyzed with a predetermined response function at up to 50 Hz. The beam position resolution of the current system is limited to about 0.1 mm by the ADC resolution. For the SuperKEKB project, we plan to upgrade the BPM DAQ system, since the linac will have to provide beams of smaller emittance. We report in detail on the system description of the new DAQ system and the results of performance tests.  
poster icon Poster MOPMS029 [3.981 MB]  
 
MOPMS030 Improvement of the Oracle Setup and Database Design at the Heidelberg Ion Therapy Center database, ion, operation, hardware 393
 
  • K. Höppner, Th. Haberer, J.M. Mosthaf, A. Peters
    HIT, Heidelberg, Germany
  • G. Fröhlich, S. Jülicher, V.RW. Schaa, W. Schiebel, S. Steinmetz
    GSI, Darmstadt, Germany
  • M. Thomas, A. Welde
    Eckelmann AG, Wiesbaden, Germany
 
  The HIT (Heidelberg Ion Therapy) center is an accelerator facility for cancer therapy using both carbon ions and protons, located at the university hospital in Heidelberg. It provides three therapy treatment rooms: two with fixed beam exit (both in clinical use), and a unique gantry with a rotating beam head, currently under commissioning. The backbone of the proprietary accelerator control system consists of an Oracle database running on a Windows server, storing and delivering data on beam cycles, error logging, measured values, and the device parameters and beam settings for about 100,000 combinations of energy, beam size and particle number used in treatment plans. Since going into operation, we have found some performance problems with the current database setup. We therefore started an analysis in cooperation with the industrial supplier of the control system (Eckelmann AG) and the GSI Helmholtzzentrum für Schwerionenforschung. It focused on the following topics: the hardware resources of the DB server, the configuration of the Oracle instance, and a review of the database design, which has undergone several changes since its original design. The analysis revealed issues in all three fields. The outdated server will soon be replaced by a state-of-the-art machine. We present improvements to the Oracle configuration, the optimization of SQL statements, and the performance tuning of the database design by adding new indexes, whose effect was directly visible in accelerator operation; data integrity was improved by additional foreign key constraints.  
poster icon Poster MOPMS030 [2.014 MB]  
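The two database measures named in the abstract, indexes for performance and foreign keys for integrity, can be sketched generically. The snippet below uses Python's built-in sqlite3 as a stand-in for Oracle, and the table and column names are illustrative, not taken from the HIT schema.

```python
import sqlite3

# In-memory sqlite3 database as a stand-in for the Oracle instance.
con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # sqlite enforces FKs only when enabled
con.execute("CREATE TABLE beam_setting (id INTEGER PRIMARY KEY, energy REAL)")
con.execute("""CREATE TABLE measured_value (
                   id INTEGER PRIMARY KEY,
                   setting_id INTEGER REFERENCES beam_setting(id),
                   value REAL)""")

# Integrity: the foreign key rejects measurements referencing no beam setting.
# Performance: an index turns lookups by setting into index searches
# instead of full-table scans.
con.execute("CREATE INDEX idx_mv_setting ON measured_value(setting_id)")

# Inspect the query plan to confirm the index is actually used.
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM measured_value WHERE setting_id = ?",
    (1,)).fetchone()
```

In Oracle the same effect would be verified with its own execution-plan tooling rather than `EXPLAIN QUERY PLAN`; the principle, checking that a hot query hits the new index, is the same.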
 
MOPMS031 Did We Get What We Aimed for 10 Years Ago? detector, operation, experiment, hardware 397
 
  • P.Ch. Chochula, A. Augustinus, L.S. Jirdén, A.N. Kurepin, M. Lechman, P. Rosinský
    CERN, Geneva, Switzerland
  • G. De Cataldo
    INFN-Bari, Bari, Italy
  • A. Moreno
    Universidad Politécnica de Madrid, E.T.S.I Industriales, Madrid, Spain
  • O. Pinazza
    INFN-Bologna, Bologna, Italy
 
  The ALICE Detector Control System (DCS) is in charge of the control and operation of one of the large high-energy physics experiments at CERN in Geneva. The DCS design, which started in 2000, was partly inspired by the control systems of the previous generation of HEP experiments at the LEP accelerator at CERN. However, the scale of the LHC experiments, the use of modern, "intelligent" hardware and the harsh operational environment led to an innovative system design. The overall architecture has been largely based on commercial products like the PVSS SCADA system and OPC servers, extended by frameworks. Windows was chosen as the operating system platform for the core systems and Linux for the front-end devices. The concept of finite state machines has been deeply integrated into the system design. Finally, the design principles have been optimized and adapted to the expected operational needs. The ALICE DCS was designed, prototyped and developed at a time when no experience with systems of similar scale and complexity existed. At the time of its implementation the detector hardware was not yet available, and tests were performed only with partial detector installations. In this paper we analyse how well the original requirements and expectations, set ten years ago, match the real needs of the experiment after two years of operation. We provide an overview of system performance, reliability and scalability. Based on this experience, we assess the need for future system enhancements to take place during the LHC technical stop in 2013.  
poster icon Poster MOPMS031 [5.534 MB]  
 
MOPMS032 Re-engineering of the SPring-8 Radiation Monitor Data Acquisition System radiation, data-acquisition, operation, monitoring 401
 
  • T. Masuda, M. Ishii, K. Kawata, T. Matsushita, C. Saji
    JASRI/SPring-8, Hyogo-ken, Japan
 
  We have re-engineered the data acquisition system for the SPring-8 radiation monitors. Around the site, 81 radiation monitors are deployed. Seventeen of them are used by the radiation safety interlock system for the accelerators. The old data acquisition system consisted of dedicated NIM-like modules linked to the radiation monitors, eleven embedded computers for data acquisition from the modules, and three programmable logic controllers (PLCs) for integrated dose surveillance. The embedded computers periodically collected the radiation data through GPIB interfaces to the modules. The dose-surveillance PLCs read analog outputs proportional to the radiation rate from the modules. The modules and the dose-surveillance PLCs were also interfaced to the radiation safety interlock system. These components of the old system were dedicated, black-boxed and complicated to operate. In addition, GPIB is a legacy interface and was not reliable enough for such an important system. We therefore decided to replace the old system with a new one based on PLCs and FL-net, which are widely used technologies. We deployed twelve new PLCs as substitutes for all the old components. Another PLC with two graphic panels is installed near the central control room for centralized operation and surveillance of all the monitors. All the new PLCs and a VME computer for data acquisition are connected through FL-net. In this paper, we describe the new system and the methodology of the replacement within the short interval between accelerator operation periods.  
poster icon Poster MOPMS032 [1.761 MB]  
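Integrated dose surveillance of the kind the PLCs perform amounts to accumulating periodic dose-rate samples and flagging a threshold crossing. The sketch below illustrates only that logic; the function name, units and threshold values are assumptions for illustration, not SPring-8 parameters.

```python
def integrate_dose(rates_uSv_per_h, dt_h, alarm_threshold_uSv):
    """Accumulate dose from periodic dose-rate samples (uSv/h, sampled
    every dt_h hours) and flag when the integrated dose reaches a
    surveillance threshold. Returns (total_dose_uSv, alarm_raised)."""
    total = 0.0
    for rate in rates_uSv_per_h:
        total += rate * dt_h          # rectangle-rule integration per sample
        if total >= alarm_threshold_uSv:
            return total, True        # a PLC would raise an interlock here
    return total, False

# Illustrative run: a steady 1 uSv/h rate sampled every 0.1 h trips a
# hypothetical 0.5 uSv threshold midway through the ten samples.
total, tripped = integrate_dose([1.0] * 10, dt_h=0.1, alarm_threshold_uSv=0.5)
```

In a real surveillance system the accumulation window, sample period and threshold would come from the safety case, and the alarm would latch until acknowledged rather than simply returning.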
 
MOPMS033 Status, Recent Developments and Perspective of TINE-powered Video System, Release 3 interface, electron, Windows, site 405
 
  • S. Weisse, D. Melkumyan
    DESY Zeuthen, Zeuthen, Germany
  • P. Duval
    DESY, Hamburg, Germany
 
  Experience has shown that imaging software and hardware installations at accelerator facilities need to be changed, adapted and updated on a semi-permanent basis. On this premise, the component-based core architecture of Video System 3 was founded. In design and implementation, emphasis was, is, and will be put on flexibility, performance, low latency, modularity, interoperability, use of open source, ease of use as well as reuse, good documentation and multi-platform capability. In the last year, a milestone was reached as Video System 3 entered production at PITZ, Hasylab and PETRA III. Since then, the development path has been more strongly influenced by production-level experience and customer feedback. In this contribution, we describe the current status, layout, recent developments and perspective of the Video System. Focus is put on the integration of recording and playback of video sequences into the Archive/DAQ, a standalone installation of the Video System on a notebook, and experience running on 64-bit Windows 7. In addition, new client-side multi-platform GUI/application developments using Java are about to surface. Last but not least, although the implementation of Release 3 is integrated into the TINE control system, it is modular enough that integration into other control systems can be considered.  
slides icon Slides MOPMS033 [0.254 MB]  
poster icon Poster MOPMS033 [2.127 MB]  
 
MOPMS034 Software Renovation of CERN's Experimental Areas software, hardware, GUI, detector 409
 
  • J. Fullerton, L.K. Jensen, J. Spanggaard
    CERN, Geneva, Switzerland
 
  The experimental areas at CERN (AD, PS and SPS) have undergone a widespread electronics and software consolidation based on modern techniques, allowing them to be used for many years to come. This paper describes the scale of the software renovation and how the issues were overcome in order to ensure complete integration into the respective control systems.  
poster icon Poster MOPMS034 [1.582 MB]  
 
MOPMS036 Upgrade of the Nuclotron Extracted Beam Diagnostic Subsystem hardware, operation, software, high-voltage 415
 
  • E.V. Gorbachev, N.I. Lebedev, N.V. Pilyar, S. Romanov, T.V. Rukoyatkina, V. Volkov
    JINR, Dubna, Moscow Region, Russia
 
  The subsystem is intended for measuring the parameters of the Nuclotron extracted beam. Multiwire proportional chambers are used for transverse beam profile measurements at four points of the beam transfer line. Gas amplification values are tuned by adjusting the high-voltage power supplies. The extracted beam intensity is measured by means of an ionization chamber, a variable-gain current amplifier (DDPCA-300) and a voltage-to-frequency converter. The data are processed by an industrial PC with National Instruments DAQ modules. A client-server distributed application written in the LabVIEW environment allows operators to control the hardware and obtain measurement results over a TCP/IP network.  
poster icon Poster MOPMS036 [1.753 MB]  
 
MOPMS037 A Customizable Platform for High-availability Monitoring, Control and Data Distribution at CERN monitoring, database, software, hardware 418
 
  • M. Brightwell, M. Bräger, A. Lang, A. Suwalska
    CERN, Geneva, Switzerland
 
  In complex operational environments, monitoring and control systems must satisfy ever more stringent requirements. In addition to reliability, the availability of the system has become crucial to accommodate tight planning schedules and increased dependencies on other systems. In this context, adapting a monitoring system to changes in its environment and meeting requests for new functionality are increasingly challenging. Combining maintainability and high availability within a portable architecture is the focus of this work. To meet these increased requirements, we present a new modular system developed at CERN. Using the experience gained from previous implementations, the new platform uses a multi-server architecture that allows patches and updates to be applied to the application without affecting its availability. The data acquisition can also be reconfigured without any downtime or potential data loss. The modular architecture builds on a core system that aims to be reusable for multiple monitoring scenarios, while keeping each instance as lightweight as possible. For both cost and future maintenance reasons, open and customizable technologies have been preferred.  
 
MOPMU001 Software and Capabilities of the Beam Position Measurement System for Novosibirsk Free Electron Laser electron, FEL, pick-up, software 422
 
  • S.S. Serednyakov, E.N. Dementyev, A.S. Medvedko, E. Shubin, V.G. Tcheskidov, N. Vinokurov
    BINP SB RAS, Novosibirsk, Russia
 
  The system that measures the electron beam position in the Novosibirsk free electron laser using electrostatic pick-up electrodes is described. The measurement hardware and the main principles of measurement are considered. The capabilities and the different operation modes of the system are described. In particular, the option of simultaneous detection of accelerated and decelerated electron beams at one pick-up station is considered. The operational features of the system in the different modes of FEL performance (the 1st, 2nd and 3rd stages) are also discussed.  
poster icon Poster MOPMU001 [0.339 MB]  
 
MOPMU002 Progress of the TPS Control System Development EPICS, interface, power-supply, feedback 425
 
  • J. Chen, Y.-T. Chang, Y.K. Chen, Y.-S. Cheng, P.C. Chiu, K.T. Hsu, S.Y. Hsu, K.H. Hu, C.H. Kuo, D. Lee, C.Y. Liao, Y.R. Pan, C.-J. Wang, C.Y. Wu
    NSRRC, Hsinchu, Taiwan
 
  The Taiwan Photon Source (TPS) is a low-emittance 3-GeV synchrotron light source under construction on the National Synchrotron Radiation Research Center (NSRRC) campus. The control system for the TPS is based on the EPICS framework. The standard hardware and software components have been defined, and prototyping of various subsystems is ongoing. An event-based timing system has been adopted. The power supply control interface, together with orbit feedback support, has also been defined. The machine protection system is in the design phase. Integration with the linear accelerator system, which is installed and commissioned at a temporary site for acceptance tests, has already been completed. Interfaces to various other systems are still in progress, and the high-level and low-level software infrastructures are under development. Progress is summarized in this report.  
 
MOPMU005 Overview of the Spiral2 Control System Progress EPICS, ion, database, interface 429
 
  • E. Lécorché, P. Gillette, C.H. Haquin, E. Lemaître, L. Philippe, D.T. Touchard
    GANIL, Caen, France
  • J.F. Denis, F. Gougnaud, J.-F. Gournay, Y. Lussignol, P. Mattei
    CEA/DSM/IRFU, France
  • P.G. Graehling, J.H. Hosselet, C. Maazouzi
    IPHC, Strasbourg Cedex 2, France
 
  Spiral2, whose construction physically started at the beginning of this year at Ganil (Caen, France), will be a new radioactive ion beam facility extending scientific knowledge in nuclear physics, astrophysics and interdisciplinary research. The project consists of a high-intensity multi-ion accelerator driver delivering beams to a high-power production system to generate the radioactive ion beams, which are then post-accelerated and used within the existing Ganil complex. Resulting from the collaboration between several laboratories, EPICS has been adopted as the standard framework for the control system. At the lower level, equipment is handled through VME/VxWorks chassis or directly interfaced using the Modbus/TCP protocol; Siemens programmable logic controllers, in charge of specific devices or hardware safety systems, are also tightly coupled to the control system. The graphical user interface layer integrates both standard EPICS client tools (EDM, CSS under evaluation, etc.) and specific high-level applications written in Java, partly deriving developments from the XAL framework. Relational databases are involved in the control system for equipment configuration (foreseen), machine representation and configuration, CSS archivers (under evaluation) and Irmis (mainly for process variable descriptions). The first components of the Spiral2 control system are now used in operation on the ion and deuteron source test platforms. The paper also describes how software development and sharing is managed within the collaboration.  
poster icon Poster MOPMU005 [2.093 MB]  
 
MOPMU006 The Commissioning of the Control System of the Accelerators and Beamlines at the Alba Synchrotron database, TANGO, booster, project-management 432
 
  • D.F.C. Fernández-Carreiras, F. Becheri, S. Blanch, A. Camps, T.M. Coutinho, G. Cuní, J.V. Gigante, J.J. Jamroz, J. Klora, J. Lidón-Simon, O. Matilla, J. Metge, A. Milán, J. Moldes, R. Montaño, M. Niegowski, C. Pascual-Izarra, S. Pusó, Z. Reszela, A. Rubio, S. Rubio-Manrique, A. Ruz
    CELLS-ALBA Synchrotron, Cerdanyola del Vallès, Spain
 
  Alba is a third-generation synchrotron located near Barcelona in Spain. The final commissioning of all accelerators and beamlines started on 8 March 2011. The Alba control system is based on the middle layer and tools provided by TANGO. It extensively uses the Sardana framework, including the Taurus graphical toolkit, based on Python and Qt. The control system of Alba is highly distributed. The design choices made five years ago have been validated during the commissioning. Alba makes extensive use of Ethernet as a fieldbus, and combines diskless machines running Tango on Linux and Windows with specific hardware based on FPGAs and fiber optics for fast real-time transmission and synchronization. B&R PLCs, being robust, reliable and cost-effective, are widely used in the different components of the machine protection system. In order to meet the speed requirements, these PLCs are sometimes combined with the MRF timing system for the fast interlocks. This paper describes the design, requirements, challenges and lessons learnt in the installation and commissioning of the control system.  
poster icon Poster MOPMU006 [24.241 MB]  
 
MOPMU007 ISHN Ion Source Control System Overview EPICS, ion, ion-source, operation 436
 
  • M. Eguiraun, I. Arredondo, J. Feuchtwanger, G. Harper, M. del Campo
    ESS-Bilbao, Zamudio, Spain
  • J. Jugo
    University of the Basque Country, Faculty of Science and Technology, Bilbao, Spain
  • S. Varnasseri
    ESS Bilbao, LEIOA, Spain
 
  Funding: The present work is supported by the Basque Government and Spanish Ministry of Science and Innovation.
The ISHN project consists of a Penning ion source which will deliver up to 65 mA of H beam pulsed at 50 Hz, with a diagnostics vessel for beam testing purposes. The present work analyzes the control system of this research facility. The main devices of ISHN are the power supplies for high-density plasma generation and beam extraction, the H2 supply and cesium heating system, plus refrigeration, vacuum and monitoring devices. The control system, implemented with LabVIEW, is based on PXI systems from National Instruments, using two PXI chassis connected through a dedicated fiber optic link between the HV platform and ground. Source operation is managed by a real-time processor at ground, while additional tasks are performed by an FPGA located at HV. The real-time system manages the control loops of the heaters, the pulsed H2 supply for a stable pressure in the plasma chamber, data acquisition from several diagnostics and sensors, and the communication with the control room. The FPGA generates the triggers for the different power supplies and the H2 flow, as well as performing some data acquisition at high voltage. A PLC is in charge of vacuum control (two double-stage pumps and two turbo pumps) and is completely independent of source operation to avoid risky failures. A dedicated safety PLC is installed to handle personnel safety issues. The currently running diagnostics are an ACCT, a DCCT, a Faraday cup and a pepperpot. In addition, a MySQL database stores all operation parameters while the source is running. The aim is to test and train in accelerator technologies for future developments.
 
poster icon Poster MOPMU007 [1.382 MB]  
 
MOPMU008 Solaris Project Status and Challenges network, TANGO, linac, operation 439
 
  • P.P. Goryl, C.J. Bocchetta, K. Królas, M. Młynarczyk, R. Nietubyć, M.J. Stankiewicz, P.S. Tracz, Ł. Walczak, A.I. Wawrzyniak
    Solaris, Krakow, Poland
  • K. Larsson, D.P. Spruce
    MAX-lab, Lund, Sweden
 
  Funding: Work supported by the European Regional Development Fund within the frame of the Innovative Economy Operational Program: POIG.02.01.00-12-213/09
The Polish synchrotron radiation facility, Solaris, is being built in Krakow. The project is strongly linked to the MAX-IV project and its 1.5 GeV storage ring. An overview will be given of activities and of the control system, outlining the similarities and differences between the two machines.
 
poster icon Poster MOPMU008 [11.197 MB]  
 
MOPMU009 The Diamond Control System: Five Years of Operations EPICS, operation, interface, photon 442
 
  • M.T. Heron
    Diamond, Oxfordshire, United Kingdom
 
  Commissioning of the Diamond Light Source accelerators began in 2005, with routine operation of the storage ring commencing in 2006 and photon beamline operation in January 2007. Since then the Diamond control system has provided a single interface and abstraction to (nearly) all the equipment required to operate the accelerators and beamlines. It now supports the three accelerators and a suite of twenty photon beamlines and experiment stations. This paper presents an analysis of the operation of the control system and further considers the developments that have taken place in the light of operational experience over this period.  
 
MOPMU011 The Design Status of CSNS Experimental Control System EPICS, software, neutron, database 446
 
  • J. Zhuang, Y.P. Chu, L.B. Ding, L. Hu, D.P. Jin, J.J. Li, Y.L. Liu, Y.Q. Liu, Y.H. Zhang, Z.Y. Zhang, K.J. Zhu
    IHEP Beijing, Beijing, People's Republic of China
 
  To meet the increasing demand from the user community, China has decided to build a world-class spallation neutron source, called CSNS (China Spallation Neutron Source). It will provide users a neutron scattering platform with high flux, wide wavelength range and high efficiency. CSNS construction is expected to start in 2011 and will last 6.5 years. The control system of CSNS is divided into an accelerator control system and an experimental control system. The CSNS experimental control system is based on the EPICS architecture, offering device operation and device debugging interfaces, communication between devices, environment monitoring, machine and personnel protection, an interface to the accelerator system, control system monitoring and a database service. The whole control system is divided into four parts: a front control layer, an EPICS global control layer, a database and a network service. The front control layer is based on YOKOGAWA PLCs and other controllers. The EPICS layer provides all system control and information exchange. The embedded PLC YOKOGAWA RP61 is being considered as the communication node between the front layer and the EPICS layer. The database service provides system configuration and historical data; based on the experience of BESIII, MySQL is an option. The system will be developed in Dongguan, Guangdong province, and in Beijing, so a VPN will be used to support distributed development. Nine people are currently working on this system. The system design is complete, and a prototype system is now being developed.  
poster icon Poster MOPMU011 [0.224 MB]  
 
MOPMU012 The Local Control System of an Undulator Cell for the European XFEL undulator, quadrupole, electron, photon 450
 
  • S. Karabekyan, R. Pannier, J. Pflüger
    European XFEL GmbH, Hamburg, Germany
  • N. Burandt, J. Kuhn
    Beckhoff Automation GmbH, Verl, Germany
  • A. Schöps
    DESY, Hamburg, Germany
 
  The European XFEL project is a 4th-generation light source. The first beam will be delivered at the beginning of 2015. At project startup, three light sources, SASE 1, SASE 2 and SASE 3, will produce spatially coherent photon pulses shorter than 80 fs with a peak brilliance of 10³²-10³⁴ photons/s/mm²/mrad²/0.1% BW in the energy range from 0.26 to 24 keV at an electron beam energy of 14 GeV. Undulator systems are used to produce the photon beams for SASE 1, SASE 2 and SASE 3. Each undulator system consists of an array of undulator cells installed in a row along the electron beam. An undulator cell consists of a planar undulator, a phase shifter, magnetic field correction coils and a quadrupole mover. The local control system of the undulator cell is based on industrial components produced by Beckhoff and on PLC software implemented in the TwinCAT system. Four servo motors are installed on each undulator and control the gap between the girders with micrometer accuracy. One stepper motor is used for phase shifter control, and two further stepper motors control the position of the quadrupole magnet. The current of the magnetic field correction coils as well as the gap of the phase shifter are adjusted as functions of the undulator gap. The high level of synchronization (<<1 μs) of the complete undulator system (for instance SASE 2, with 35 undulator cells in total) can be achieved thanks to the implementation of the EtherCAT fieldbus in the local control. The hardware components and the software functionality of the local control system will be described.  
poster icon Poster MOPMU012 [1.163 MB]  
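Adjusting a setting "as a function of the undulator gap" is typically done with a calibration table and interpolation between its points. The sketch below shows a generic piecewise-linear lookup; the table values are invented for illustration and are not European XFEL calibration data.

```python
from bisect import bisect_left

def lookup(table, gap_mm):
    """Piecewise-linear interpolation of a gap-dependent setting (e.g. a
    correction coil current or a phase-shifter gap) from a calibration
    table of (undulator_gap_mm, setting) points, sorted by gap.
    Values outside the table are clamped to the end points."""
    gaps, vals = zip(*table)
    if gap_mm <= gaps[0]:
        return vals[0]
    if gap_mm >= gaps[-1]:
        return vals[-1]
    i = bisect_left(gaps, gap_mm)              # first table gap >= gap_mm
    g0, g1 = gaps[i - 1], gaps[i]
    v0, v1 = vals[i - 1], vals[i]
    return v0 + (v1 - v0) * (gap_mm - g0) / (g1 - g0)

# Hypothetical calibration table: correction coil current (A) vs gap (mm).
table = [(10.0, 1.50), (20.0, 0.80), (30.0, 0.30)]
```

In a PLC, the same table would live in controller memory and be evaluated each time the gap servo reports a new position, so that the coil current tracks the moving gap.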
 
MOPMU013 Phase II and III The Next Generation of CLS Beamline Control and Data Acquisition Systems software, EPICS, experiment, interface 454
 
  • E. D. Matias, D. Beauregard, R. Berg, G. Black, M.J. Boots, W. Dolton, D. Hunter, R. Igarashi, D. Liu, D.G. Maxwell, C.D. Miller, T. Wilson, G. Wright
    CLS, Saskatoon, Saskatchewan, Canada
 
  The Canadian Light Source is nearing the completion of its suite of Phase II beamlines and is in the detailed design of its Phase III beamlines. The paper presents an overview of the overall approach adopted by CLS in the development of beamline control and data acquisition systems. Building on the experience of our first phase of beamlines, the CLS has continued to make extensive use of EPICS with EDM and Qt-based user interfaces. Increasingly, interpreted languages such as Python are finding a place in the beamline control systems. Web-based environments such as ScienceStudio have also found a prominent place in the control system architecture as we move to tighter integration between data acquisition, visualization and data analysis.  
 
MOPMU014 Development of Distributed Data Acquisition and Control System for Radioactive Ion Beam Facility at Variable Energy Cyclotron Centre, Kolkata interface, embedded, linac, status 458
 
  • K. Datta, C. Datta, D.P. Dutta, T.K. Mandi, H.K. Pandey, D. Sarkar
    DAE/VECC, Calcutta, India
  • R. Anitha, A. Balasubramanian, K. Mourougayane
    SAMEER, Chennai, India
 
  To facilitate frontline nuclear physics research, an ISOL (Isotope Separator On-Line) type Radioactive Ion Beam (RIB) facility is being constructed at the Variable Energy Cyclotron Centre (VECC), Kolkata. The RIB facility at VECC consists of various subsystems, such as an ECR ion source, RFQ, rebunchers and LINACs, that produce and accelerate the energetic beams of radioactive isotopes required for different experiments. The Distributed Data Acquisition and Control System (DDACS) is intended to monitor and control the large number of parameters associated with the different subsystems from a centralized location, allowing the complete operation of beam generation and beam tuning in a user-friendly manner. The DDACS has been designed on a three-layer architecture comprising an Equipment Interface layer, a Supervisory layer and an Operator Interface layer. The Equipment Interface layer consists of Equipment Interface Modules (EIMs), which are designed around an ARM processor and connected to different equipment through various interfaces such as RS-232 and RS-485. The Supervisory layer consists of a VIA-processor-based Embedded Controller (EC) running an embedded XP operating system. This embedded controller, interfaced with the EIMs through fiber optic cable, acquires and analyses the data from the different EIMs. The Operator Interface layer consists mainly of PCs/workstations serving as operator consoles. The data acquired and analysed by the EC can be displayed at the operator consoles, and the operator can centrally supervise and control the whole facility.  
poster icon Poster MOPMU014 [2.291 MB]  
 
MOPMU015 Control and Data Acquisition Systems for the FERMI@Elettra Experimental Stations data-acquisition, framework, TANGO, instrumentation 462
 
  • R. Borghes, V. Chenda, A. Curri, G. Gaio, G. Kourousias, M. Lonza, G. Passos, R. Passuello, L. Pivetta, M. Prica, M. Pugliese, G. Strangolino
    ELETTRA, Basovizza, Italy
 
  Funding: The work was supported in part by the Italian Ministry of University and Research under grants FIRB-RBAP045JF2 and FIRB-RBAP06AWK3
FERMI@Elettra is a single-pass Free Electron Laser (FEL) user facility covering the wavelength range from 100 nm to 4 nm. The facility is located in Trieste, Italy, next to the third-generation synchrotron light source Elettra. Three experimental stations, dedicated to different scientific areas, were installed in 2011: Low Density Matter (LDM), Elastic and Inelastic Scattering (EIS) and Diffraction and Projection Imaging (DiProI). The experiment control and data acquisition system is the natural extension of the machine control system. It integrates a shot-by-shot data acquisition framework with a centralized data storage and analysis system. Low-level applications for data acquisition and online processing have been developed using the Tango framework on Linux platforms. High-level experimental applications can be developed on both Linux and Windows platforms using C/C++, Python, LabView, IDL or Matlab. The Elettra scientific computing portal allows remote access to the experiment and to the data storage system.
 
poster icon Poster MOPMU015 [0.884 MB]  
 
MOPMU017 TRIUMF's ARIEL Project ISAC, EPICS, linac, interface 465
 
  • J.E. Richards, D. Dale, K. Ezawa, D.B. Morris, K. Negishi, R.B. Nussbaumer, S. Rapaz, E. Tikhomolov, G. Waters, M. Leross
    TRIUMF, Canada's National Laboratory for Particle and Nuclear Physics, Vancouver, Canada
 
  The Advanced Rare IsotopE Laboratory (ARIEL) will expand TRIUMF's capabilities in rare-isotope beam physics by doubling the size of the current ISAC facility. Two simultaneous radioactive beams will be available in addition to the present ISAC beam. ARIEL will consist of a 50 MeV, 10 mA CW superconducting electron linear accelerator (E-Linac), an additional proton beamline from the 520 MeV cyclotron, two new target stations, a beamline connecting to the existing ISAC superconducting linac, and a beamline to the ISAC low-energy experimental facility. Construction will begin in 2012, with commissioning to start in 2014. The ARIEL control system will be implemented using EPICS, allowing seamless integration with the EPICS-based ISAC control system. The ARIEL control system conceptual design is discussed.  
poster icon Poster MOPMU017 [1.232 MB]  
 
MOPMU018 Update On The Central Control System of TRIUMF's 500 MeV Cyclotron cyclotron, software, hardware, operation 469
 
  • M. Mouat, E. Klassen, K.S. Lee, J.J. Pon, P.J. Yogendran
    TRIUMF, Canada's National Laboratory for Particle and Nuclear Physics, Vancouver, Canada
 
  The Central Control System of TRIUMF's 500 MeV cyclotron was initially commissioned in the early 1970s. In 1987 a four-year project to upgrade the control system was planned and commenced. By 1997 this upgrade was complete and the new system was operating with increased reliability, functionality and maintainability. Since 1997 the system has evolved through incremental change, and functionality, reliability and maintainability have continued to improve. This paper provides an update on the present control system situation (2011) and possible future directions.  
poster icon Poster MOPMU018 [4.613 MB]  
 
MOPMU019 The Gateways of Facility Control for SPring-8 Accelerators data-acquisition, database, framework, network 473
 
  • M. Ishii, T. Masuda, R. Tanaka, A. Yamashita
    JASRI/SPring-8, Hyogo-ken, Japan
 
  We have integrated utilities data acquisition into the SPring-8 accelerator control system, which is based on the MADOCA framework. Utilities data such as air temperature, power line voltage and machine cooling water temperature are helpful for studying the correlation between beam stability and environmental conditions. However, the accelerator control system previously had no way to obtain the many utilities data managed by the facility control system, because the two were independent systems without an interconnection. In 2010, we had a chance to replace the old facility control system. At that time, we constructed gateways between the MADOCA-based accelerator control system and the new facility control system, installing BACnet, a data communication protocol for Building Automation and Control Networks, as a fieldbus. The system requirements were as follows: to monitor utilities data at the required sampling rate and resolution, to store all acquired data in the accelerator database, to maintain independence between the accelerator control system and the facility control system, and to allow future expansion towards controlling the facilities from the accelerator control system. During the work, we outsourced construction of the gateways, including the MADOCA data-taking software, to cope with limited manpower and a short work period. In this paper we describe the system design and our approach to outsourcing.  
 
MOPMU020 The Control and Data Acquisition System of the Neutron Instrument BIODIFF neutron, detector, TANGO, software 477
 
  • H. Kleines, M. Drochner, L. Fleischhauer-Fuss, T. E. Schrader, F. Suxdorf, M. Wagener, S. van Waasen
    FZJ, Jülich, Germany
  • A. Ostermann
    TUM/Physik, Garching bei München, Germany
 
  The neutron instrument BIODIFF is a single-crystal diffractometer for biological macromolecules that has been built in a cooperation between Forschungszentrum Jülich and the Technical University of Munich. It is located at the research reactor FRM-II in Garching, Germany, and is now in its commissioning phase. The control and data acquisition system of BIODIFF is based on the so-called "Jülich-Munich Standard", a set of standards and technologies commonly accepted at the FRM-II, which is based on the TACO control system developed by the ESRF. It is intended to introduce TANGO at the FRM-II in the future. The image plate detector system of BIODIFF is already equipped with a TANGO subsystem that has been integrated into the overall TACO instrument control system.  
 
MOPMU021 Control System for Magnet Power Supplies for Novosibirsk Free Electron Laser power-supply, FEL, operation, software 480
 
  • S.S. Serednyakov, B.A. Dovzhenko, A.A. Galt, V.R. Kozak, E.A. Kuper, L.E. Medvedev, A.S. Medvedko, Y.M. Velikanov, V.F. Veremeenko, N. Vinokurov
    BINP SB RAS, Novosibirsk, Russia
 
  The control system for the magnetic system of the free electron laser (FEL) is described. The characteristics and structure of the power supply system are presented. The power supply control system, based on embedded intelligent controllers with a CAN bus interface, is considered in detail. The structure and capabilities of the control software are described. In addition, software tools for power supply diagnostics are described.  
poster icon Poster MOPMU021 [0.291 MB]  
 
MOPMU023 The MRF Timing System. The Complete Control Software Integration in Tango. timing, TANGO, device-server, GUI 483
 
  • J. Moldes, D.B. Beltrán, D.F.C. Fernández-Carreiras, J.J. Jamroz, J. Klora, O. Matilla, R. Suñé
    CELLS-ALBA Synchrotron, Cerdanyola del Vallès, Spain
 
  The deployment of the timing system based on the MRF hardware has been an important part of the control system. Hundreds of elements are integrated in the scheme, which provides synchronization signals and interlocks, transmitted in the microsecond range and distributed all around the installation. It has influenced several hardware choices and has been substantially improved to support interlock events. The operation of the timing system requires a complex setup of all elements, so a complete solution has been developed, including libraries and standalone graphical user interfaces. This set of tools is of great added value, all the more when using Tango, since most high-level applications and GUIs are based on Tango servers. A complete software solution for managing the events and interlocks of a large installation is presented.  
poster icon Poster MOPMU023 [25.650 MB]  
 
MOPMU024 Status of ALMA Software software, operation, monitoring, framework 487
 
  • T.C. Shen, J.P.A. Ibsen, R.A. Olguin, R. Soto
    ALMA, Joint ALMA Observatory, Santiago, Chile
 
  The Atacama Large Millimeter/submillimeter Array (ALMA) will be a unique research instrument composed of at least 66 reconfigurable high-precision antennas, located on the Chajnantor plain in the Chilean Andes at an elevation of 5000 m. Each antenna contains instruments capable of receiving radio signals from 31.3 GHz up to 950 GHz. These signals are correlated inside a correlator, and the spectral data are finally saved into the archive system together with the observation metadata. This paper describes the progress in the deployment of the ALMA software, with emphasis on the control software, which is built on top of the ALMA Common Software (ACS), a CORBA-based middleware framework. In order to support and maintain the installed software, it is essential to have a mechanism to align and distribute the same version of software packages across all systems. This is achieved rigorously with weekly regression tests and strict configuration control. A build farm to provide continuous integration and testing in simulation has been established as well. Given the large number of antennas, it is imperative to also have a monitoring system that allows trend analysis of each component in order to trigger preventive maintenance activities. A challenge for which we are preparing this year consists of testing the whole ALMA software in complete end-to-end operation, from proposal submission to data distribution to the ALMA Regional Centers. The experience gained during deployment, testing and operation support is presented.  
poster icon Poster MOPMU024 [0.471 MB]  
 
MOPMU025 The Implementation of the Spiral2 Injector Control System EPICS, emittance, software, diagnostics 491
 
  • F. Gougnaud, J.F. Denis, J.-F. Gournay, Y. Lussignol, P. Mattei, R. Touzery
    CEA/DSM/IRFU, France
  • P. Gillette, C.H. Haquin
    GANIL, Caen, France
  • J.H. Hosselet, C. Maazouzi
    IPHC, Strasbourg Cedex 2, France
 
  The EPICS framework was chosen for the Spiral2 project control system [1] in 2007. Four institutes are involved in the control system: GANIL (Caen), IPHC (Strasbourg), IRFU (Saclay) and LPSC (Grenoble), with IRFU in charge of the injector controls. This injector includes two ECR sources (one for deuterons and one for A/q=3 ions) with their associated low-energy beam transport lines (LEBTs). The deuteron source is installed at Saclay and the A/q=3 ion source at Grenoble. Both lines will merge before injecting the beam into an RFQ cavity for pre-acceleration. This paper presents the control system for both injector beamlines with their diagnostics (Faraday cups, ACCT/DCCT, profilers, emittance meters) and slits. The control relies on COTS VME boards and an EPICS software platform. The Modbus/TCP protocol is also used with COTS devices such as power supplies and Siemens PLCs. The injector graphical user interface is based on EDM, while the port to CSS BOY is under evaluation; high-level applications are developed in Java. This paper also describes the EPICS development for the new industrial ADAS ICV108/178 VME boards, with sampling rates ranging from 100 kSamples/s to 1.2 MSamples/s. This new software is used for beam intensity measurement by the diagnostics and for the acquisition of the sources.
[1] E. Lécorché et al. (GANIL, Caen), "Overview of the Spiral2 Control System Progress", this conference.
 
poster icon Poster MOPMU025 [1.036 MB]  
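The abstract above mentions Modbus/TCP for talking to COTS devices such as power supplies and Siemens PLCs. As a sketch of what that protocol involves, the following stdlib-only Python builds a "read holding registers" request frame; the transaction id, unit id and register address are illustrative values, not Spiral2 configuration:

```python
# Sketch: constructing a Modbus/TCP request frame (function code 0x03,
# "read holding registers") with only the standard library.
import struct

def read_holding_registers_request(transaction_id, unit_id, start_addr, count):
    """MBAP header (7 bytes) followed by the PDU for function code 0x03."""
    pdu = struct.pack(">BHH", 0x03, start_addr, count)
    # MBAP: transaction id, protocol id (always 0), length = unit id + PDU.
    mbap = struct.pack(">HHHB", transaction_id, 0x0000, len(pdu) + 1, unit_id)
    return mbap + pdu

frame = read_holding_registers_request(1, 0x01, 0x0000, 2)
print(frame.hex())  # 000100000006010300000002
```

In practice the frame would be written to a TCP socket on port 502 and the device's response parsed the same way with `struct.unpack`.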
 
MOPMU026 A Readout and Control System for a CTA Prototype Telescope software, interface, framework, hardware 494
 
  • I. Oya, U. Schwanke
    Humboldt University Berlin, Institut für Physik, Berlin, Germany
  • B. Behera, D. Melkumyan, T. Schmidt, P. Wegner, S. Wiesand, M. Winde
    DESY Zeuthen, Zeuthen, Germany
 
  CTA (Cherenkov Telescope Array) is an initiative to build the next-generation ground-based gamma-ray instrument. The CTA array will allow studies in the very-high-energy domain from a few tens of GeV to more than a hundred TeV, extending the existing energy coverage and increasing the sensitivity by a factor of 10 compared to current installations, while enhancing other aspects such as angular and energy resolution. These goals require the use of at least three different sizes of telescopes. CTA will comprise two arrays (one in the Northern hemisphere and one in the Southern hemisphere) for full sky coverage and will be operated as an open observatory. A prototype for the Medium Size Telescope (MST) type is under development and will be deployed in Berlin by the end of 2011. The MST prototype will consist of the mechanical structure, the drive system, active mirror control, four CCD cameras for prototype instrumentation and a weather station. The ALMA Common Software (ACS) distributed control framework has been chosen for the implementation of the prototype's control system. In the present approach, the interface to some of the hardware devices is achieved using the OPC Unified Architecture (OPC UA). A code-generation framework (ACSCG) has been designed for ACS modeling. This contribution describes the progress in the design and implementation of the control system for the CTA MST prototype.  
poster icon Poster MOPMU026 [1.953 MB]  
 
MOPMU027 Controls System Developments for the ERL Facility software, interface, Linux, electron 498
 
  • J.P. Jamilkowski, Z. Altinbas, D.M. Gassner, L.T. Hoff, P. Kankiya, D. Kayran, T.A. Miller, R.H. Olsen, B. Sheehy, W. Xu
    BNL, Upton, Long Island, New York, USA
 
  Funding: This manuscript has been authored by employees of Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U. S. Department of Energy.
The BNL Energy Recovery LINAC (ERL) is a high-beam-current, superconducting RF electron accelerator that is being commissioned to serve as a research and development prototype for a RHIC facility upgrade for electron-ion collisions (eRHIC). Key components of the machine include a laser, a photocathode, and a 5-cell superconducting RF cavity operating at a frequency of 703 MHz. Starting from a foundation based on existing ADO software running on Linux servers and on the VME/VxWorks platforms developed for RHIC, we are developing a controls system that incorporates the wide range of hardware I/O interfaces needed for machine R&D. Details of the system layout, specifications, and user interfaces are provided.
 
poster icon Poster MOPMU027 [0.709 MB]  
 
MOPMU030 Control System for Linear Induction Accelerator LIA-2: the Structure and Hardware hardware, induction, high-voltage, operation 502
 
  • G.A. Fatkin, P.A. Bak, A.M. Batrakov, P.V. Logachev, A. Panov, A.V. Pavlenko, V.Ya. Sazansky
    BINP SB RAS, Novosibirsk, Russia
 
  A high-power linear induction accelerator (LIA) for flash radiography is being commissioned at the Budker Institute of Nuclear Physics (BINP) in Novosibirsk. The facility produces a pulsed electron beam with an energy of 2 MeV, a current of 1 kA and a spot size of less than 2 mm. High beam quality and facility reliability are required for radiography experiments. The features and structure of the distributed control system ensuring these demands are discussed. The control system hardware, based on the CompactPCI and PMC standards, is embedded directly into the pulsed power generators. CAN bus and Ethernet are used as interconnection protocols. Parameters and essential details of the measuring equipment and control electronics, both produced at BINP and available COTS, are presented. The first results of control system commissioning, reliability and hardware endurance are discussed.  
poster icon Poster MOPMU030 [43.133 MB]  
 
MOPMU032 An EPICS IOC Builder EPICS, hardware, database, software 506
 
  • M.G. Abbott, T.M. Cobb
    Diamond, Oxfordshire, United Kingdom
 
  An EPICS input/output controller (IOC) is typically assembled from a number of standard components, each with potentially quite complex hardware or software initialisation procedures intermixed with a good deal of repetitive boilerplate code. Assembling and maintaining a complex IOC can be quite a difficult and error-prone process, particularly if the components are unfamiliar. The EPICS IOC Builder is a Python library designed to automate the assembly of a complete IOC from a concise component-level description. The dependencies and interactions between components, as well as their detailed initialisation procedures, are automatically managed by the IOC Builder through component description files maintained with the individual components. At Diamond Light Source we have a large library of components that can be assembled into EPICS IOCs. The IOC Builder is also finding increasing use in helping non-expert users to assemble an IOC without specialist knowledge.  
poster icon Poster MOPMU032 [3.887 MB]  
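The idea behind such a builder (components carry their own initialisation boilerplate; the builder assembles the startup script) can be sketched as follows. The class and method names here are invented for illustration and are not the actual Diamond IOC Builder API:

```python
# Hypothetical sketch of an IOC-builder pattern: each component declares the
# database definition and startup lines it needs; the builder emits an
# st.cmd-style script ending in iocInit().
from dataclasses import dataclass, field

@dataclass
class Component:
    """One IOC building block with its boilerplate initialisation."""
    name: str
    dbd: str                                  # database definition it requires
    init_lines: list = field(default_factory=list)

class IocBuilder:
    def __init__(self):
        self.components = []

    def add(self, comp):
        self.components.append(comp)
        return comp

    def generate_startup(self):
        """Emit a startup script from the ordered component list."""
        lines = []
        for c in self.components:
            lines.append('dbLoadDatabase("%s")' % c.dbd)
            lines.extend(c.init_lines)
        lines.append("iocInit()")
        return "\n".join(lines)

builder = IocBuilder()
builder.add(Component("asyn", "asyn.dbd",
                      ['drvAsynIPPortConfigure("LAN0", "10.0.0.1:4001")']))
builder.add(Component("motor", "motorSupport.dbd"))
print(builder.generate_startup())
```

A real builder would additionally resolve inter-component dependencies and generate the database files themselves; this sketch only shows the concise-description-to-boilerplate expansion.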
 
MOPMU033 ControlView to EPICS Conversion of the TRIUMF TR13 Cyclotron Control System EPICS, TRIUMF, database, ISAC 510
 
  • D.B. Morris
    TRIUMF, Canada's National Laboratory for Particle and Nuclear Physics, Vancouver, Canada
 
  The TRIUMF TR13 cyclotron control system was developed in 1995 using Allen-Bradley PLCs and ControlView. A console replacement project using the EPICS toolkit was started in fall 2009 with the strict requirement that the PLC code not be modified. Access to the operating machine would be limited due to production schedules, so a complete mock-up of the PLC control system was built to allow parallel development and testing without interfering with the production system. The deployment allows both systems to operate simultaneously, easing verification of all functions. A major modification was required to the EPICS Allen-Bradley PLC5 device support software to support the original PLC programming schema. EDM screens were built manually to create displays similar to the original ControlView screens, reducing operator retraining. Some of the problems encountered and their solutions are discussed.  
poster icon Poster MOPMU033 [2.443 MB]  
 
MOPMU035 Shape Controller Upgrades for the JET ITER-like Wall plasma, real-time, operation, experiment 514
 
  • A. Neto, D. Alves, I.S. Carvalho
    IPFN, Lisbon, Portugal
  • G. De Tommasi, F. Maviglia
    CREATE, Napoli, Italy
  • R.C. Felton, P. McCullen
    EFDA-JET, Abingdon, Oxon, United Kingdom
  • P.J. Lomas, F. G. Rimini, A.V. Stephen, K-D. Zastrow
    CCFE, Culham, Abingdon, Oxon, United Kingdom
  • R. Vitelli
    Università di Roma II Tor Vergata, Roma, Italy
 
  Funding: This work was supported by the European Communities under the contract of Association between EURATOM/IST and was carried out within the framework of the European Fusion Development Agreement.
The upgrade of JET to a new all-metal wall will pose a set of new challenges regarding machine operation and protection. One of the key problems is that the present way of terminating a pulse, upon the detection of a problem, is limited to a predefined set of global responses, tailored to maximise the likelihood of a safe plasma landing. With the new wall, these might conflict with the requirement of avoiding localised heat fluxes in the wall components. As a consequence, the new system will be capable of dynamically adapting its response behaviour, according to the experimental conditions at the time of the stop request and during the termination itself. Also in the context of the new ITER-like wall, two further upgrades were designed to be implemented in the shape controller architecture. The first will allow safer operation of the machine and consists of a power-supply current limit avoidance scheme, which provides a trade-off between the desired plasma shape and the current distribution between the relevant actuators. The second is aimed at an optimised operation of the machine, enabling an earlier formation of a special magnetic configuration where the last plasma closed flux surface is not defined by a physical limiter. The upgraded shape controller system, besides providing the new functionality, is expected to continue to provide the first line of defence against erroneous plasma position and current requests. This paper presents the required architectural changes to the JET plasma shape controller system.
 
poster icon Poster MOPMU035 [2.518 MB]  
 
MOPMU036 Upgrade of the CLS Accelerator Control and Instrumentation Systems booster, feedback, linac, EPICS 518
 
  • E. D. Matias, L. Baribeau, S. Hu, C.G. Payne, H. Zhang
    CLS, Saskatoon, Saskatchewan, Canada
 
  The Canadian Light Source is undertaking a major upgrade to its accelerator system in preparation for the eventual migration to top-up operation and to meet the increasingly demanding needs of its synchrotron user community. The upgrades to the linac include the development of software for new modulators, RF sections, power supplies and current monitors. On the booster ring, the upgrades include the development of new, improved BPM instrumentation and improved diagnostics on the extracted beam. For the storage ring, the upgrades include fast orbit correction, instrumentation for use by the safety systems and a new transverse feedback system.  
 
MOPMU039 ACSys in a Box framework, database, site, Linux 522
 
  • C.I. Briegel, D. Finstrom, B. Hendricks, CA. King, R. Neswold, D.J. Nicklaus, J.F. Patrick, A.D. Petrov, C.L. Schumann, J.G. Smedinghoff
    Fermilab, Batavia, USA
 
  Funding: Operated by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the United States Department of Energy.
The Accelerator Control System at Fermilab has evolved to enable this relatively large control system to be encapsulated into a "box" such as a laptop. The goal was to provide a platform isolated from the "online" control system. This platform can be used internally for making major upgrades and modifications without impacting operations. It also provides a standalone environment for research and development, including a turnkey control system for collaborators. Over time, the code base running on Scientific Linux has enabled all the salient features of Fermilab's control system to be captured in an off-the-shelf laptop. The anticipated additional benefits of packaging the system include improved maintenance, reliability, documentation, and future enhancements.
 
 
MOPMU040 REVOLUTION at SOLEIL: Review and Prospect for Motion Control software, TANGO, hardware, radiation 525
 
  • D. Corruble, P. Betinelli-Deck, F. Blache, J. Coquet, N. Leclercq, R. Millet, A. Tournieux
    SOLEIL, Gif-sur-Yvette, France
 
  At any synchrotron facility, motors are numerous: they are significant actuators for accelerators and the main actuators for beamlines. Since 2003, the Electronic Control and Data Acquisition group of SOLEIL has defined a modular and reliable motion architecture integrating industrial products (Galil controllers, Midi Ingénierie and Phytron power boards). In parallel, the software control group has developed a set of dedicated Tango devices. At present, more than 1000 motors and 200 motion controller crates are in operation at SOLEIL. Aware that motion control is important for performance, since the positioning of optical systems and samples is a key element of any beamline, SOLEIL wants to upgrade its motion controllers in order to maintain the facility at a high performance level and to meet new requirements: better accuracy, complex trajectories and the coupling of multi-axis devices such as hexapods. This project is called REVOLUTION (REconsider Various contrOLler for yoUr moTION).  
poster icon Poster MOPMU040 [1.388 MB]  
 
TUAAUST01 GDA and EPICS Working in Unison for Science Driven Data Acquisition and Control at Diamond Light Source detector, EPICS, hardware, data-acquisition 529
 
  • E.P. Gibbons, M.T. Heron, N.P. Rees
    Diamond, Oxfordshire, United Kingdom
 
  Diamond Light Source has recently received funding for an additional 10 photon beamlines, bringing the total to 32 beamlines and around 40 end-stations. These all use EPICS for the control of the underlying instrumentation associated with photon delivery, the experiment and most of the data acquisition hardware. For the scientific users, Diamond has developed the Generic Data Acquisition (GDA) application framework to provide a consistent science interface across all beamlines. While each application is customised to the science of its beamline, all applications are built from the framework and predominantly interface to the underlying instrumentation through the EPICS abstraction. We describe the complete system, illustrate how it can be configured for a specific beamline application, and show how other synchrotrons are adapting, and can adapt, these tools for their needs.  
slides icon Slides TUAAUST01 [9.781 MB]  
 
TUAAULT02 Tango Collaboration and Kernel Status TANGO, CORBA, software, device-server 533
 
  • E.T. Taurel
    ESRF, Grenoble, France
 
  This paper is divided into two parts. The first part summarises the main changes within the Tango collaboration since the last ICALEPCS conference, covering technical evolutions as well as the new way our collaboration is managed. The second part focuses on the evolution of the Tango event system (asynchronous communication between client and server). Since its beginning, this type of communication within Tango has been implemented using a CORBA notification service implementation called omniNotify. The system is currently being rewritten using ZeroMQ as the transport layer. The reasons for the choice of ZeroMQ are detailed, and first feedback on the new implementation is given.  
slides icon Slides TUAAULT02 [1.458 MB]  
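ZeroMQ itself is an external messaging library; the following pure-Python, single-process sketch only illustrates the topic-based publish/subscribe pattern that an event system of this kind maps onto sockets. The class and topic names are illustrative, not Tango or ZeroMQ API:

```python
# Toy in-process pub/sub: subscribers register callbacks per topic, the
# publisher pushes values to every matching subscriber. A real event system
# would do this over sockets between server and clients.
from collections import defaultdict

class EventChannel:
    """Topic-based publish/subscribe with synchronous delivery."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, value):
        # Deliver to every callback subscribed to this topic.
        for cb in self._subscribers[topic]:
            cb(value)

channel = EventChannel()
received = []
channel.subscribe("sys/tg_test/1/double_scalar", received.append)
channel.publish("sys/tg_test/1/double_scalar", 3.14)
channel.publish("sys/tg_test/1/other_attr", 99)   # no subscriber, dropped
print(received)  # [3.14]
```

The decoupling shown here (publishers never know their subscribers) is what lets such a transport layer be swapped, e.g. from a CORBA notification service to a message library, without changing client code.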
 
TUAAULT03 BLED: A Top-down Approach to Accelerator Control System Design database, lattice, operation, EPICS 537
 
  • J. Bobnar, K. Žagar
    COBIK, Solkan, Slovenia
 
  In many existing controls projects the central database/inventory was introduced late in the project, usually to support installation or maintenance activities, so the construction of this database was done bottom-up by reverse engineering the installation. However, there are several benefits to introducing the central database early in machine design: the system can be simulated as a whole without having all the IOCs in place, the database can serve as an input to the installation/commissioning plan, and it can act as an enforcer of certain conventions and quality processes. Based on our experience with control systems, we have designed a central database, BLED (Best and Leanest Ever Database), which stores all machine configuration and parameters as well as the control system configuration, inventory and cabling. The first implementation of BLED supports EPICS: it is capable of storing and generating EPICS templates and substitution files as well as archive, alarm and other configurations. With the goal of providing the functionality of several existing central databases (IRMIS, SNS db, DBSF etc.), a lot of effort has been made to design the database to handle extremely large setups consisting of millions of control system points. Furthermore, BLED also stores the lattice data, thus providing additional information (e.g. survey data) required by different engineering groups. The lattice import/export tools support, among others, the MAD and TraceWin formats, which are widely used in the machine design community.  
slides icon Slides TUAAULT03 [4.660 MB]  
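One concrete instance of "generation of EPICS templates and substitution files" from a central database is rendering a `.substitutions` block from stored rows. The sketch below uses an invented row layout and macro names, not BLED's actual schema:

```python
# Hypothetical sketch: render an EPICS .substitutions block for one template
# from database rows (here plain dicts); macro names and values are invented.
def make_substitutions(template, rows):
    """Render a 'file ... { pattern ... }' block in EPICS substitution syntax."""
    macros = sorted(rows[0])                  # stable macro order
    pattern = ", ".join(macros)
    body = "\n".join(
        "  { " + ", ".join('%s="%s"' % (m, row[m]) for m in macros) + " }"
        for row in rows
    )
    return 'file "%s" {\n  pattern { %s }\n%s\n}' % (template, pattern, body)

rows = [
    {"P": "SR01:", "M": "PS1", "ADDR": "0"},
    {"P": "SR01:", "M": "PS2", "ADDR": "1"},
]
print(make_substitutions("powerSupply.template", rows))
```

Generating such files from the database, rather than editing them by hand, is what allows a central inventory to enforce naming conventions across millions of control system points.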
 
TUAAULT04 Web-based Execution of Graphical Workflows : a Modular Platform for Multifunctional Scientific Process Automation interface, synchrotron, framework, database 540
 
  • E. De Ley, D. Jacobs
    iSencia Belgium, Gent, Belgium
  • M. Ounsy
    SOLEIL, Gif-sur-Yvette, France
 
  The Passerelle process automation suite offers a fundamentally modular solution platform, based on a layered integration of several best-of-breed technologies. It has been successfully applied by Synchrotron SOLEIL as the sequencer for data acquisition and control processes on its beamlines, integrated with TANGO as the control bus and GlobalScreen as the SCADA package. Since last year it has been used as the graphical workflow component for the development of an Eclipse-based data analysis workbench at the ESRF. The top layer of Passerelle exposes an actor-based development paradigm, based on the Ptolemy framework (UC Berkeley). Actors provide explicit reusability and strong decoupling, combined with an inherently concurrent execution model. Actor libraries exist for TANGO integration, web services, database operations, flow control, rules-based analysis, mathematical calculations, launching external scripts, etc. Passerelle's internal architecture is based on OSGi, the major Java framework for modular service-based applications. A large set of modules exists that can be recombined as desired to obtain different features and deployment models. Besides desktop versions of the Passerelle workflow workbench, there is also the Passerelle Manager, a secured web application including a graphical editor for centralized design, execution, management and monitoring of process flows, integrating standard Java Enterprise services with OSGi. We present the internal technical architecture, some interesting application cases and the lessons learnt.  
slides icon Slides TUAAULT04 [10.055 MB]  
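The actor paradigm described above (decoupled components exchanging messages over channels, each executing concurrently) can be sketched in a few lines of plain Python. The pipeline below is a toy illustration of the pattern, not the Passerelle or Ptolemy API:

```python
# Toy actor pipeline: each actor runs in its own thread, consumes messages
# from an input queue, applies its function and forwards the result. A None
# message ("poison pill") propagates shutdown through the pipeline.
import queue
import threading

class Actor(threading.Thread):
    def __init__(self, fn, inbox, outbox):
        super().__init__(daemon=True)
        self.fn, self.inbox, self.outbox = fn, inbox, outbox

    def run(self):
        while True:
            msg = self.inbox.get()
            if msg is None:              # poison pill terminates the actor
                self.outbox.put(None)
                return
            self.outbox.put(self.fn(msg))

q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
Actor(lambda x: x * 2, q1, q2).start()   # stage 1: scale
Actor(lambda x: x + 1, q2, q3).start()   # stage 2: offset
for v in [1, 2, 3]:
    q1.put(v)
q1.put(None)

results = []
while (r := q3.get()) is not None:
    results.append(r)
print(results)  # [3, 5, 7]
```

Because the actors only touch their queues, either stage can be replaced or recombined without the other knowing, which is the reusability and decoupling property the abstract attributes to the actor model.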
 
TUBAUST01 FPGA-based Hardware Instrumentation Development on MAST FPGA, plasma, hardware, diagnostics 544
 
  • B.K. Huang, R.M. Myers, R.M. Sharples
    Durham University, Durham, United Kingdom
  • N. Ben Ayed, G. Cunningham, A. Field, S. Khilar, G.A. Naylor
    CCFE, Abingdon, Oxon, United Kingdom
  • R.G.L. Vann
    York University, Heslington, York, United Kingdom
 
  Funding: This work was part-funded by the RCUK Energy Programme under grant EP/I501045 and the European Communities under the Contract of Association between EURATOM and CCFE.
On MAST (the Mega Amp Spherical Tokamak) at Culham Centre for Fusion Energy some key control systems and diagnostics are being developed and upgraded with FPGA hardware. FPGAs provide many benefits including low latency and real-time digital signal processing. FPGAs blur the line between hardware and software: they are programmed in software (in VHDL or Verilog), but once configured act deterministically as hardware. The challenges in developing a system are keeping up-front and maintenance costs low and prolonging the life of the device as much as possible. We keep costs low by using industry standards such as the FMC (FPGA Mezzanine Card) VITA 57 standard and by using COTS (Commercial Off The Shelf) components, which are significantly less costly than developing them in-house. We extend the device's operational lifetime by using a flexible FPGA architecture and industry-standard interfaces. We discuss the implementation of FPGA control on two specific systems on MAST. The Vertical Stabilisation system comprises a 1U form-factor box with one SP601 Spartan-6 FPGA board, 10/100 Ethernet access, a MicroBlaze processor, a 24-bit sigma-delta ADS1672 ADC and an ATX power supply for remote power cycling. The Electron Bernstein Wave system comprises a 4U form-factor box with two ML605 Virtex-6 FPGA boards, Gigabit Ethernet, a MicroBlaze processor and two FMC108 ADC cards providing 16 channels of 14-bit sampling at 250 MHz. AXI4 is used as the on-chip bus between firmware components to allow very high data rates, which have been tested at over 40 Gbps streaming into a 2 GB DDR3 SODIMM.
 
slides icon Slides TUBAUST01 [8.172 MB]  
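The aggregate sample rates quoted in the abstract above can be sanity-checked with a short calculation (our sketch, not from the paper; the padding of 14-bit samples to 16-bit words is an assumption about how the data might be framed for streaming):

```python
# Back-of-envelope data rates for the MAST Electron Bernstein Wave digitizer,
# using the figures quoted in the abstract (16 channels, 14-bit, 250 MHz).

def raw_rate_gbps(channels, bits_per_sample, sample_rate_hz):
    """Aggregate raw ADC data rate in Gbit/s."""
    return channels * bits_per_sample * sample_rate_hz / 1e9

ebw_raw = raw_rate_gbps(16, 14, 250e6)     # raw bits from the converters
ebw_packed = raw_rate_gbps(16, 16, 250e6)  # assumed padding to 16-bit words

print(f"EBW raw: {ebw_raw:.0f} Gbit/s, packed: {ebw_packed:.0f} Gbit/s")
```

Against these figures, the >40 Gbps sustained AXI4 streaming into DDR3 reported in the abstract is plausible once decimation and protocol overheads are accounted for.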
 
TUBAUST02 FPGA Communications Based on Gigabit Ethernet FPGA, Ethernet, interface, hardware 547
 
  • L.R. Doolittle, C. Serrano
    LBNL, Berkeley, California, USA
 
  The use of Field Programmable Gate Arrays (FPGAs) in accelerators is widespread due to their flexibility, performance, and affordability. Whether they are used for fast feedback systems, data acquisition, fast communications using custom protocols, or any other application, there is a need for the end user and the global control software to access FPGA features from a commodity computer. The choice of communication standards that can be used to interface to an FPGA board is wide; however, one stands out for its maturity, basis in standards, performance, and hardware support: Gigabit Ethernet. In the context of accelerators it is desirable to have highly reliable, portable, and flexible solutions. We have therefore developed a chip- and board-independent FPGA design which implements the Gigabit Ethernet standard. Our design has been configured for use with multiple projects, supports full line-rate traffic, and communicates with any other device implementing the same well-established protocol, easily supported by any modern workstation or controls computer.  
slides icon Slides TUBAUST02 [0.909 MB]  
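Part of Gigabit Ethernet's appeal here is that the frames an FPGA MAC emits are ordinary Ethernet II frames that any workstation stack can parse. A minimal host-side sketch (ours, not the authors' design; the field layout and the CRC-32 frame check sequence follow the Ethernet standard, and the experimental EtherType 0x88B5 is just an example):

```python
import struct
import zlib

def ethernet_frame(dst_mac: bytes, src_mac: bytes, ethertype: int,
                   payload: bytes) -> bytes:
    """Build a minimal Ethernet II frame with a correct FCS (CRC-32)."""
    if len(payload) < 46:                 # pad to the 64-byte minimum frame size
        payload = payload + b"\x00" * (46 - len(payload))
    header = dst_mac + src_mac + struct.pack("!H", ethertype)
    fcs = zlib.crc32(header + payload) & 0xFFFFFFFF   # Ethernet CRC-32
    return header + payload + struct.pack("<I", fcs)  # FCS transmitted LSB first

frame = ethernet_frame(b"\xff" * 6, b"\x02\x00\x00\x00\x00\x01", 0x88B5, b"hello")
print(len(frame))  # 64 bytes: 14 header + 46 padded payload + 4 FCS
```

Python's `zlib.crc32` uses the same polynomial and reflection conventions as the Ethernet FCS, so a software endpoint can validate frames produced by the FPGA design.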
 
TUBAULT04 Open Hardware for CERN’s Accelerator Control Systems hardware, FPGA, software, timing 554
 
  • E. Van der Bij, P. Alvarez, M. Ayass, A. Boccardi, M. Cattin, C. Gil Soriano, E. Gousiou, S. Iglesias Gonsálvez, G. Penacoba Fernandez, J. Serrano, N. Voumard, T. Włostowski
    CERN, Geneva, Switzerland
 
  The accelerator control systems at CERN will be renovated and many electronics modules will be redesigned, as the modules they will replace can no longer be bought or use obsolete components. The modules used in the control systems are diverse: analog and digital I/O, level converters and repeaters, serial links and timing modules. Overall, around 120 modules are supported and used in systems such as beam instrumentation, cryogenics and power converters. Only a small percentage of the currently used modules are commercially available; most were specifically designed at CERN. The new developments are based on VITA and PCI-SIG standards such as FMC (FPGA Mezzanine Card), PCI Express and VME64x using transition modules. As system-on-chip interconnect, the public-domain Wishbone specification is used. For the renovation, it is considered imperative to have access to the full hardware design and firmware of each board, so that problems can quickly be resolved by CERN engineers or their collaborators. To attract other partners that are not necessarily part of the existing particle physics networks, the new projects are developed in a fully 'Open' fashion. This allows for strong collaborations that will result in better and reusable designs. Within this Open Hardware project, new ways of working with industry are being tested, with the aim of proving that there is no contradiction between commercial off-the-shelf products and openness, and that industry can be involved at all stages, from design to production and support.  
slides icon Slides TUBAULT04 [7.225 MB]  
 
TUBAUIO05 Challenges for Emerging New Electronics Standards for Physics software, hardware, interface, monitoring 558
 
  • R.S. Larsen
    SLAC, Menlo Park, California, USA
 
  Funding: Work supported by US Department of Energy Contract DE AC03 76SF00515
A unique effort is underway between industry and the international physics community to extend the telecom industry's Advanced Telecommunications Computing Architecture (ATCA and MicroTCA) to meet future needs of the physics machine and detector community. New standard extensions for physics have now been designed to deliver unprecedented performance and high subsystem availability for accelerator controls, instrumentation and data acquisition. Key technical features include a unique out-of-band embedded standard Intelligent Platform Management Interface (IPMI) system to manage hot-swap module replacement and hardware-software failover. However, the acceptance of any new standard depends critically on the creation of strong collaborations among users and between the user and industry communities. For the relatively small high-performance physics market to attract strong industry support, collaborations must converge on core infrastructure components including hardware, timing, software and firmware architectures, and must strive for a much higher degree of interoperability between lab- and industry-designed hardware-software products than in past generations of standards. The xTCA platform presents a unique opportunity for future progress. This presentation will describe the status of the hardware-software extension plans; technology advantages for machine controls and data acquisition systems; and examples of current collaborative efforts to help develop an industry base of generic ATCA and MicroTCA products in an open-source environment.
1. PICMG, the PCI Industrial Computer Manufacturer’s Group
2. Lab representation on PICMG includes CERN, DESY, FNAL, IHEP, IPFN, ITER and SLAC
 
slides icon Slides TUBAUIO05 [1.935 MB]  
 
TUCAUST02 SARAF Control System Rebuild network, software, operation, proton 567
 
  • E. Reinfeld, I. Eliyahu, I.G. Gertz, I. Mardor
    Soreq NRC, Yavne, Israel
 
  The Soreq Applied Research Accelerator Facility (SARAF) is a proton/deuteron RF superconducting linear accelerator, which was commissioned at Soreq NRC. SARAF will be a multi-user facility whose main activities will be neutron physics and applications, radiopharmaceutical development and production, and basic nuclear physics research. The SARAF Accelerator Control System (ACS) was delivered while still in its development phase. Various issues limit our ability to use it as a basis for future phases of accelerator operation and need to be addressed. Recently, two projects were launched to streamline the system and prepare it for the future development of the accelerator. This article describes the plans and goals of these projects, the preparations undertaken by the SARAF team, the design principles on which the control methodology will be based, and the architecture planned to be implemented. The rebuilding process will take place in two consecutive projects: the first will revamp the network architecture, and the second will involve the actual rebuilding of the control system applications, features and procedures.  
slides icon Slides TUCAUST02 [1.733 MB]  
 
TUCAUST03 The Upgrade Programme for the ESRF Accelerator Control System TANGO, software, storage-ring, insertion 570
 
  • J.M. Meyer, J.M. Chaize, F. Epaud, F. Poncet, J.L. Pons, B. Regad, E.T. Taurel, B. Vedder, P.V. Verdier
    ESRF, Grenoble, France
 
  To reach the goals specified in the ESRF upgrade programme [1], the storage ring needs to be modified for the new experiments to be built. The optics must be changed to allow up to seven-meter-long straight sections and canted undulator set-ups. Better beam stabilization and feedback systems are necessary for the planned nano-focus experiments. We are also undergoing a renovation and modernization phase to increase the lifetime of the accelerator and its control system. This paper summarizes the major upgrade projects, such as the new BPM system, the fast orbit feedback and the ultra-small vertical emittance, and their implications for the control system. Ongoing modernization projects such as the solid state radio frequency amplifier and the HOM-damped cavities are described. Software upgrades of several sub-systems, such as vacuum and insertion devices, which are planned for this year or for the long shutdown period at the beginning of 2012, are covered as well. The final goal is to move to a Tango-only control system.
[1] http://www.esrf.fr/AboutUs/Upgrade
 
slides icon Slides TUCAUST03 [1.750 MB]  
 
TUCAUST04 Changing Horses Mid-stream: Upgrading the LCLS Control System During Production Operations EPICS, linac, interface, software 574
 
  • S. L. Hoobler, R.P. Chestnut, S. Chevtsov, T.M. Himel, K.D. Kotturi, K. Luchini, J.J. Olsen, S. Peng, J. Rock, R.C. Sass, T. Straumann, R. Traller, G.R. White, S. Zelazny, J. Zhou
    SLAC, Menlo Park, California, USA
 
  The control system for the Linac Coherent Light Source (LCLS) began as a combination of new and legacy systems. When the LCLS began operating, the bulk of the facility was newly constructed, including a new control system using the Experimental Physics and Industrial Control System (EPICS) framework. The Linear Accelerator (LINAC) portion of the LCLS was repurposed for use by the LCLS and was controlled by the legacy system, which was built nearly 30 years ago. This system uses CAMAC, distributed 80386 microprocessors, and a central Alpha 6600 computer running the VMS operating system. This legacy control system has been successfully upgraded to EPICS during LCLS production operations while maintaining the 95% uptime required by the LCLS users. The successful transition was made possible by thorough testing in sections of the LINAC which were not in use by the LCLS. Additionally, a system was implemented to switch control of a LINAC section between new and legacy control systems in a few minutes. Using this rapid switching, testing could be performed during maintenance periods and accelerator development days. If any problems were encountered after a section had been switched to the new control system, it could be quickly switched back.  
slides icon Slides TUCAUST04 [0.183 MB]  
 
TUCAUST06 Event-Synchronized Data Acquisition System of 5 Giga-bps Data Rate for User Experiment at the XFEL Facility, SACLA experiment, operation, detector, network 581
 
  • M. Yamaga, A. Amselem, T. Hirono, Y. Joti, A. Kiyomichi, T. Ohata, T. Sugimoto, R. Tanaka
    JASRI/SPring-8, Hyogo-ken, Japan
  • T. Hatsui
    RIKEN/SPring-8, Hyogo, Japan
 
  A data acquisition (DAQ), control, and storage system has been developed for user experiments at the XFEL facility, SACLA, on the SPring-8 site. The anticipated experiments demand shot-by-shot DAQ in synchronization with the beam operation cycle in order to correlate the beam characteristics with recorded data such as X-ray diffraction patterns. The experiments produce waveform or image data whose size ranges from 8 up to 48 MB for each X-ray pulse at 60 Hz. To meet these requirements, we have constructed a DAQ system that operates in synchronization with the 60 Hz beam operation cycle. The system is designed to handle a data rate of up to 5 Gbps after compression, and consists of the trigger distributor/counters, the data-filling computers, the parallel-writing high-speed data storage, and the relational database. The data rate is reduced by on-the-fly data compression in front-end embedded systems. The self-describing data structure makes it possible to handle any type of data. The pipelined data buffer at each computer node ensures the integrity of data transfer with non-real-time operating systems, and reduces the development cost. All data are transmitted via the TCP/IP protocol over GbE and 10 GbE Ethernet. To monitor the experimental status, the system incorporates on-line visualization of waveforms/images as well as prompt data mining by a 10 PFlops-scale supercomputer to check the data health. A partial system for light source commissioning was released in March 2011. The full system will be released to public users in March 2012.  
slides icon Slides TUCAUST06 [3.248 MB]  
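The on-the-fly compression requirement implied by the figures above can be estimated with a short calculation (a sketch; decimal megabytes, MB = 10^6 bytes, are our assumption):

```python
# Shot-by-shot data budget for the SACLA DAQ, using the figures in the abstract.
MAX_SHOT_MB = 48          # largest per-pulse data size, in megabytes
REP_RATE_HZ = 60          # beam operation cycle
DESIGN_RATE_GBPS = 5      # post-compression design throughput

raw_gbps = MAX_SHOT_MB * 1e6 * 8 * REP_RATE_HZ / 1e9   # worst-case raw rate
compression = raw_gbps / DESIGN_RATE_GBPS              # implied minimum ratio

print(f"raw: {raw_gbps:.2f} Gbit/s, implied compression >= {compression:.1f}x")
```

At the worst-case 48 MB per pulse the raw rate is about 23 Gbit/s, so the front-end embedded systems must achieve roughly a factor of five compression to stay within the 5 Gbps design rate.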
 
TUDAUST01 Inauguration of the XFEL Facility, SACLA, in SPring-8 laser, electron, experiment, operation 585
 
  • R. Tanaka, Y. Furukawa, T. Hirono, M. Ishii, M. Kago, A. Kiyomichi, T. Masuda, T. Matsumoto, T. Matsushita, T. Ohata, C. Saji, T. Sugimoto, M. Yamaga, A. Yamashita
    JASRI/SPring-8, Hyogo-ken, Japan
  • T. Fukui, T. Hatsui, N. Hosoda, H. Maesaka, T. Ohshima, T. Otake, Y. Otake, H. Takebe
    RIKEN/SPring-8, Hyogo, Japan
 
  The construction of the X-ray free electron laser facility (SACLA) at SPring-8 started in 2006. After 5 years of construction, the facility accelerated its first electron beams in February 2011. The main component of the accelerator consists of 64 C-band RF units that accelerate beams up to 8 GeV. The bunch is compressed to a length of 30 fs, and the beams are introduced into the 18 insertion devices to generate 0.1 nm X-ray laser light. The first SASE X-rays were observed after beam commissioning. Beam tuning will continue to achieve X-ray laser saturation for frontier scientific experiments. The control system adopts the 3-tier standard model using the MADOCA framework developed at SPring-8. The upper control layer consists of Linux PCs for operator consoles, a Sybase RDBMS for data logging and FC-based NAS for NFS. The lower layer consists of 100 Solaris-operated VME systems with newly developed boards for RF waveform processing, while PLCs are used for slow control. DeviceNet is adopted for the front-end devices to reduce signal cabling. The VME systems have a beam-synchronized data-taking link to meet the 60 Hz beam operation for beam tuning diagnostics. The accelerator control has gateways to the facility utility system, not only to monitor devices but also to control the tuning points of the cooling water. The data acquisition system for the experiments is challenging: the data rate coming from the 2D multiport CCD is 3.4 Gbps, which produces 30 TB of image data in a day. Sampled data will be transferred to the 10 PFlops supercomputer via 10 Gbps Ethernet for data evaluation.  
slides icon Slides TUDAUST01 [5.427 MB]  
 
TUDAUST02 Status Report of the FERMI@Elettra Control System TANGO, real-time, electron, FEL 589
 
  • M. Lonza, A. Abrami, F. Asnicar, L. Battistello, A.I. Bogani, R. Borghes, V. Chenda, S. Cleva, A. Curri, M. De Marco, M.F. Dos Santos, G. Gaio, F. Giacuzzo, G. Kourousias, G. Passos, R. Passuello, L. Pivetta, M. Prica, M. Pugliese, C. Scafuri, G. Scalamera, G. Strangolino, D. Vittor, L. Zambon
    ELETTRA, Basovizza, Italy
 
  Funding: The work was supported in part by the Italian Ministry of University and Research under grants FIRB-RBAP045JF2 and FIRB-RBAP06AWK3
FERMI@Elettra is a new 4th-generation light source based on a seeded Free Electron Laser (FEL), presently under commissioning in Trieste, Italy. It is the first seeded FEL in the world designed to produce a fundamental output wavelength down to 4 nm with High Gain Harmonic Generation (HGHG). Unlike storage-ring-based synchrotron light sources, which are well understood machines, the commissioning of a new-concept FEL is a complex and time-consuming process consisting of thorough testing, understanding and optimization, in which a reliable and powerful control system is mandatory. In particular, integrated shot-by-shot beam manipulation capabilities and easy-to-use high-level applications are crucial to an effective and smooth machine commissioning. The paper reports the status of the control system and the experience gained in two years of alternating construction and commissioning phases.
 
slides icon Slides TUDAUST02 [8.064 MB]  
 
TUDAUST03 Control System in SwissFEL Injector Test Facility EPICS, laser, electron, network 593
 
  • M. Dach, D. Anicic, D.A. Armstrong, K. Bitterli, H. Brands, P. Chevtsov, F. Haemmerli, M. Heiniger, C.E. Higgs, W. Hugentobler, G. Janser, G. Jud, B. Kalantari, R. Kapeller, T. Korhonen, R.A. Krempaska, M.P. Laznovsky, T. Pal, W. Portmann, D. Vermeulen, E. Zimoch
    Paul Scherrer Institut, 5232 Villigen PSI, Switzerland
 
  The SwissFEL Injector Test Facility is an important milestone toward the realization of the new SwissFEL facility. The first beam in the Test Facility was produced on 24 August 2010, which inaugurated the operation of the injector. Since then, beam quality has been greatly improved in various respects. This paper presents the current status of the Test Facility and focuses on the control system issues that led to the successful commissioning. In addition, the technical challenges and opportunities in view of the future SwissFEL facility are discussed.  
slides icon Slides TUDAUST03 [3.247 MB]  
 
TUDAUST04 Status of the Control System for the European XFEL hardware, distributed, feedback, device-server 597
 
  • K. Rehlich
    DESY, Hamburg, Germany
 
  DESY is currently building a new 3.4 km-long X-ray free electron laser facility. Commissioning is planned in 2014. The facility will deliver ultra-short light pulses with a peak power of up to 100 GW and a wavelength down to 0.1 nm. About 200 distributed electronics crates will be used to control the facility. A major fraction of the controls will be installed inside the accelerator tunnel. MicroTCA was chosen as an adequate standard with state-of-the-art connectivity and performance, including remote management. The FEL will produce up to 27000 bunches per second. Data acquisition and controls have to provide bunch-synchronous operation within the whole distributed system. Feedback loops implemented in FPGAs and in service-tier processes will provide the required stability and automation of the FEL. This paper describes the progress in the development of the new hardware as well as the software architecture. Parts of the control system are currently deployed in the much smaller FLASH FEL facility.  
slides icon Slides TUDAUST04 [6.640 MB]  
 
TUDAUST05 The Laser MegaJoule Facility: Control System Status Report laser, target, software, experiment 600
 
  • J.I. Nicoloso
    CEA/DAM/DIF, Arpajon, France
  • J.P.A. Arnoul
    CEA, Le Barp, France
 
  The French Commissariat à l'Energie Atomique (CEA) is currently building the Laser MegaJoule (LMJ), a 176-beam laser facility, at the CEA Laboratory CESTA near Bordeaux. It is designed to deliver about 1.4 MJ of energy to targets for high-energy-density physics experiments, including fusion experiments. The LMJ technological choices were validated with the LIL, a scale-1 prototype of one LMJ bundle. The construction of the LMJ building itself is now complete and the assembly of laser components is ongoing. A petawatt laser line is also being installed in the building. The presentation gives an overview of the general control system architecture, and focuses on the hardware platform being installed on the LMJ with the aim of hosting the different software applications for system supervision and sub-system control. This platform is based on virtualization techniques, used to develop a high-availability, optimized hardware platform with high operating flexibility, including power consumption and cooling considerations. The platform is spread over two sites: the LMJ itself, of course, but also the software integration platform built outside the LMJ, intended to provide system integration of the various software control system components of the LMJ.  
slides icon Slides TUDAUST05 [9.215 MB]  
 
TURAULT01 Summary of the 3rd Control System Cyber-security (CS)2/HEP Workshop network, experiment, software, detector 603
 
  • S. Lüders
    CERN, Geneva, Switzerland
 
  Over the last decade modern accelerator and experiment control systems have increasingly been based on commercial-off-the-shelf products (VME crates, programmable logic controllers (PLCs), supervisory control and data acquisition (SCADA) systems, etc.), on Windows or Linux PCs, and on communication infrastructures using Ethernet and TCP/IP. Despite the benefits coming with this (r)evolution, new vulnerabilities are inherited, too: worms and viruses spread within seconds via the Ethernet cable, and attackers are becoming interested in control systems. The Stuxnet worm of 2010 against a particular Siemens PLC is a unique example of a sophisticated attack against control systems [1]. Unfortunately, control PCs cannot be patched as fast as office PCs. Even worse, vulnerability scans at CERN using standard IT tools have shown that commercial automation systems lack fundamental security precautions: some systems crashed during the scan, others could easily be stopped or their process data altered [2]. The 3rd (CS)2/HEP workshop [3], held the weekend before the ICALEPCS2011 conference, was intended to raise awareness; exchange good practices, ideas, and implementations; discuss what works and what does not, with the pros and cons of each; report on security events, lessons learned and successes; and give updates on the progress made at HEP laboratories around the world in securing control systems. This presentation will give a summary of the solutions planned and deployed and the experience gained.
[1] S. Lüders, "Stuxnet and the Impact on Accelerator Control Systems", FRAAULT02, ICALEPCS, Grenoble, October 2011;
[2] S. Lüders, "Control Systems Under Attack?", O5_008, ICALEPCS, Geneva, October 2005.
[3] 3rd Control System Cyber-Security CS2/HEP Workshop, http://indico.cern.ch/conferenceDisplay.py?confId=120418
 
 
WEAAUST01 Sardana: The Software for Building SCADAS in Scientific Environments interface, TANGO, synchrotron, GUI 607
 
  • T.M. Coutinho, G. Cuní, D.F.C. Fernández-Carreiras, J. Klora, C. Pascual-Izarra, Z. Reszela, R. Suñé
    CELLS-ALBA Synchrotron, Cerdanyola del Vallès, Spain
  • A. Homs, E.T. Taurel
    ESRF, Grenoble, France
 
  Sardana is software for supervision, control and data acquisition in large and small scientific installations. It delivers important cost and time reductions in the design, development and support of control and data acquisition systems. It enhances Tango with capabilities for building graphical interfaces without writing code, a powerful Python-based macro environment for building sequences and complex macros, and comprehensive access to the hardware. It scales from small laboratories to large scientific institutions. It has been commissioned for the control system of the accelerators and beamlines at the Alba Synchrotron.  
slides icon Slides WEAAUST01 [6.978 MB]  
 
WEAAULT02 Model Oriented Application Generation for Industrial Control Systems software, target, framework, factory 610
 
  • B. Copy, R. Barillère, E. Blanco Vinuela, R.N. Fernandes, B. Fernández Adiego, I. Prieto Barreiro
    CERN, Geneva, Switzerland
 
  The CERN Unified Industrial Control Systems framework (UNICOS) is a software generation methodology that standardizes the design of slow process control applications [1]. A Software Factory, named the UNICOS Application Builder (UAB) [2], was introduced to provide a stable metamodel, a set of platform-independent models and platform-specific configurations against which code and configuration generation plugins can be written. Such plugins currently target PLC programming environments (Schneider UNITY and SIEMENS Step7 PLCs) as well as SIEMENS WinCC Open Architecture SCADA (previously known as ETM PVSS) but are being expanded to cover more and more aspects of process control systems. We present what constitutes the UAB metamodel and the models in use, how these models can be used to capture knowledge about industrial control systems and how this knowledge can be leveraged to generate both code and configuration for a variety of target usages.
[1] H. Milcent et al, "UNICOS: AN OPEN FRAMEWORK", ICALEPCS2009, Kobe, Japan, (THD003)
[2] M. Dutour, "Software factory techniques applied to Process Control at CERN", ICALEPCS 2007, Knoxville Tennessee, USA
 
slides icon Slides WEAAULT02 [1.757 MB]  
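The software-factory idea described above, one platform-independent model rendered by several platform-specific plugins, can be illustrated with a toy model-to-text generator (all names here, `Device` and the two render functions, are hypothetical and not the actual UAB API):

```python
# Toy illustration of model-driven generation: a platform-independent model
# element is turned into PLC structured text and into a SCADA datapoint
# configuration by two independent "plugins". Purely illustrative.

from dataclasses import dataclass

@dataclass
class Device:                 # platform-independent model element
    name: str
    setpoint: float

def render_plc_st(dev: Device) -> str:
    """Plugin targeting a PLC structured-text environment."""
    return f"VAR {dev.name}_SP : REAL := {dev.setpoint}; END_VAR"

def render_scada_cfg(dev: Device) -> str:
    """Plugin targeting a SCADA datapoint configuration."""
    return f"datapoint {dev.name} {{ setpoint = {dev.setpoint} }}"

model = [Device("valve01", 2.5), Device("pump02", 0.0)]
for plugin in (render_plc_st, render_scada_cfg):
    for dev in model:
        print(plugin(dev))
```

The point of the design is that knowledge about the process lives once, in the model, while each target platform gets its own generator plugin.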
 
WEAAULT03 A Platform Independent Framework for Statecharts Code Generation software, framework, CORBA, target 614
 
  • L. Andolfato, G. Chiozzi
    ESO, Garching bei Muenchen, Germany
  • N. Migliorini
    ENDIF, Ferrara, Italy
  • C. Morales
    UTFSM, Valparaíso, Chile
 
  Control systems for telescopes and their instruments are reactive systems, very well suited to being modeled using the Statecharts formalism. The World Wide Web Consortium is working on a new standard called SCXML, which specifies an XML notation to describe Statecharts and provides a well-defined operational semantics for run-time interpretation of SCXML models. This paper presents a generic application framework for reactive non-real-time systems based on interpreted Statecharts. The framework consists of a model-to-text transformation tool and an SCXML interpreter. From UML state machine models, the tool generates the SCXML representation of the state machines and the application skeletons for the supported software platforms. An abstraction layer propagates events from the middleware to the SCXML interpreter, facilitating the support of different software platforms. This project benefits from the positive experience gained in several years of development of coordination and monitoring applications for the telescope control software domain using Model Driven Development technologies.  
slides icon Slides WEAAULT03 [2.179 MB]  
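The run-time interpretation approach can be sketched with a minimal, namespace-free SCXML-like interpreter (our sketch, far short of the full W3C SCXML semantics: no nested or parallel states, no executable content; the example states and events are invented):

```python
# Minimal interpreter for flat SCXML-like state machines: parse states and
# transitions once, then dispatch events against the resulting table.

import xml.etree.ElementTree as ET

SCXML = """
<scxml initial="idle">
  <state id="idle">
    <transition event="start" target="tracking"/>
  </state>
  <state id="tracking">
    <transition event="stop"  target="idle"/>
    <transition event="fault" target="error"/>
  </state>
  <state id="error">
    <transition event="reset" target="idle"/>
  </state>
</scxml>
"""

class Interpreter:
    def __init__(self, doc: str):
        root = ET.fromstring(doc)
        self.state = root.get("initial")
        self.table = {                       # state -> {event: target}
            s.get("id"): {t.get("event"): t.get("target")
                          for t in s.findall("transition")}
            for s in root.findall("state")
        }

    def fire(self, event: str) -> str:
        """Take the transition for `event`, if any; return the current state."""
        self.state = self.table[self.state].get(event, self.state)
        return self.state

sm = Interpreter(SCXML)
print(sm.fire("start"))   # tracking
print(sm.fire("fault"))   # error
print(sm.fire("reset"))   # idle
```

Because the model is data rather than generated code, changing the behaviour means editing the XML, which is the property the framework exploits.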
 
WEBHAUST01 LHCb Online Infrastructure Monitoring Tools monitoring, status, Windows, Linux 618
 
  • L.G. Cardoso, C. Gaspar, C. Haen, N. Neufeld, F. Varela
    CERN, Geneva, Switzerland
  • D. Galli
    INFN-Bologna, Bologna, Italy
 
  The Online System of the LHCb experiment at CERN is composed of a very large number of PCs: around 1500 in a CPU farm performing the High Level Trigger; around 170 for the control system, running the SCADA system PVSS; and several others performing data monitoring, reconstruction, storage, and infrastructure tasks such as databases. Some PCs run Linux and some run Windows, but all of them need to be remotely controlled and monitored to make sure they are running correctly and to be able, for example, to reboot them whenever necessary. A set of tools was developed to centrally monitor the status of all PCs and PVSS projects needed to run the experiment: a Farm Monitoring and Control (FMC) tool, which provides the lower-level access to the PCs, and a System Overview Tool (developed within the Joint Controls Project, JCOP), which provides a centralized interface to the FMC tool and adds PVSS project monitoring and control. The implementation of these tools has provided a reliable and efficient way to manage the system, both during normal operations and during shutdowns, upgrades or maintenance. This paper presents the particular implementation of these tools in the LHCb experiment and the benefits of their usage in a large-scale heterogeneous system.  
slides icon Slides WEBHAUST01 [3.211 MB]  
 
WEBHAUST03 Large-bandwidth Data Acquisition Network for XFEL Facility, SACLA network, site, experiment, laser 626
 
  • T. Sugimoto, Y. Joti, T. Ohata, R. Tanaka, M. Yamaga
    JASRI/SPring-8, Hyogo-ken, Japan
  • T. Hatsui
    RIKEN/SPring-8, Hyogo, Japan
 
  We have developed a large-bandwidth data acquisition (DAQ) network for user experiments at the SPring-8 Angstrom Compact Free Electron Laser (SACLA) facility. The network connects detectors, on-line visualization terminals and the high-speed storage of the control and DAQ system to transfer beam diagnostic data for each X-ray pulse as well as the experimental data. The development of the DAQ network system (DAQ-LAN) was one of the critical elements of the system development, because data at transfer rates reaching 5 Gbps must be stored and visualized with high availability. DAQ-LAN is also used for instrument control. In order to guarantee both high-speed data transfer and instrument control, we have implemented a physically and logically separated network system. The DAQ-LAN currently consists of six 10 GbE-capable network switches used exclusively for data transfer, and ten 1 GbE-capable network switches for instrument control and on-line visualization. High availability was achieved by link aggregation (LAG), with a typical convergence time of 500 ms, which is faster than RSTP (2 s). To prevent network trouble caused by broadcasts, DAQ-LAN is logically separated into twelve network segments. The logical network segmentation is based on DAQ applications such as data transfer, on-line visualization, and instrument control. The DAQ-LAN will connect the control and DAQ system to the on-site high-performance computing system and to the next-generation supercomputers in Japan, including the K computer, for instant data mining during beamtime and for post analysis.  
slides icon Slides WEBHAUST03 [5.795 MB]  
 
WEBHAUST06 Virtualized High Performance Computing Infrastructure of Novosibirsk Scientific Center network, experiment, site, detector 630
 
  • A. Zaytsev, S. Belov, V.I. Kaplin, A. Sukharev
    BINP SB RAS, Novosibirsk, Russia
  • A.S. Adakin, D. Chubarov, V. Nikultsev
    ICT SB RAS, Novosibirsk, Russia
  • V. Kalyuzhny
    NSU, Novosibirsk, Russia
  • N. Kuchin, S. Lomakin
    ICM&MG SB RAS, Novosibirsk, Russia
 
  Novosibirsk Scientific Center (NSC), also known worldwide as Akademgorodok, is one of the largest Russian scientific centers, hosting Novosibirsk State University (NSU) and more than 35 research organizations of the Siberian Branch of the Russian Academy of Sciences, including the Budker Institute of Nuclear Physics (BINP), the Institute of Computational Technologies, and the Institute of Computational Mathematics and Mathematical Geophysics (ICM&MG). Since each institute has specific requirements on the architecture of the computing farms involved in its research field, several computing facilities are currently hosted by NSC institutes, each optimized for a particular set of tasks; the largest are the NSU Supercomputer Center, the Siberian Supercomputer Center (ICM&MG), and the Grid Computing Facility of BINP. A dedicated optical network with an initial bandwidth of 10 Gbps connecting these three facilities was built to make it possible to share computing resources among the research communities, thus increasing the efficiency of operating the existing computing facilities and offering a common platform for building the computing infrastructure for future scientific projects. Unification of the computing infrastructure is achieved by extensive use of virtualization technology based on the XEN and KVM platforms. Our contribution gives a thorough review of the present status and future development prospects of the NSC virtualized computing infrastructure, focusing on its applications for handling everyday data processing tasks of the HEP experiments carried out at BINP.  
slides icon Slides WEBHAUST06 [14.369 MB]  
 
WEBHMUST01 The MicroTCA Acquisition and Processing Back-end for FERMI@Elettra Diagnostics diagnostics, interface, FEL, timing 634
 
  • A.O. Borga, R. De Monte, M. Ferianis, G. Gaio, L. Pavlovič, M. Predonzani, F. Rossi
    ELETTRA, Basovizza, Italy
 
  Funding: The work was supported in part by the Italian Ministry of University and Research under grants FIRB-RBAP045JF2 and FIRB-RBAP06AWK3
Several diagnostics instruments for the FERMI@Elettra FEL require accurate readout, processing, and control electronics, together with complete integration into the TANGO control system. A custom-developed back-end system, compliant with the PICMG MicroTCA standard, provides a robust platform for accommodating such electronics, including reliable slow-control and monitoring infrastructure features. Two types of digitizer AMCs have been developed, manufactured, tested and successfully commissioned in the FERMI facility. The first is a fast (160 Msps), high-resolution (16-bit) Analog to Digital and Digital to Analog (A|D|A) converter board, hosting 2 A-D and 2 D-A converters controlled by a large FPGA (Xilinx Virtex-5 SX50T), which is also responsible for handling the fast communication interface. The second is an Analog to Digital Only (A|D|O) board, derived from the A|D|A, with an analog front-side stage made of 4 A-D converters. A simple MicroTCA Timing Central Hub (MiTiCH) completes the set of modules necessary for operating the system. Several TANGO servers and panels have been developed and put in operation with the support of the controls group. The overall system architecture, with different practical application examples, together with the specific AMCs' functionalities, is presented. Impressions from our field experience with the novel MicroTCA standard are also discussed.
 
slides icon Slides WEBHMUST01 [2.715 MB]  
 
WEBHMUST02 Solid State Direct Drive RF Linac: Control System cavity, experiment, software, LLRF 638
 
  • T. Kluge, M. Back, U. Hagen, O. Heid, M. Hergt, T.J.S. Hughes, R. Irsigler, J. Sirtl
    Siemens AG, Erlangen, Germany
  • R. Fleck
    Siemens AG, Corporate Technology, CT T DE HW 4, Erlangen, Germany
  • H.-C. Schröder
    ASTRUM IT GmbH, Erlangen, Germany
 
  Recently a Solid State Direct Drive ® concept for RF linacs has been introduced [1]. This new approach integrates the RF source, composed of multiple Silicon Carbide (SiC) solid-state RF modules [2], directly onto the cavity. Such an approach introduces new challenges for the control of these machines, namely the non-linear behavior of the solid-state RF modules and the direct coupling of the RF modules to the cavity. In this paper we discuss further results of the experimental program [3,4] to integrate and control 64 RF modules on a λ/4 cavity. The next stage of experiments aims at gaining better feed-forward control of the system and at detailed system identification. For this purpose a digital control board comprising a Virtex 6 FPGA, high-speed DACs/ADCs and trigger I/O has been developed, integrated into the experiment and used to control the system. The design of the board is consistently digital, aiming at direct processing of the signals. Power control within the cavity is achieved by outphasing control of two groups of RF modules. This allows power control without degrading RF-module efficiency.
[1] Heid O., Hughes T., THPD002, IPAC10, Kyoto, Japan
[2] Irsigler R. et al, 3B-9, PPC11, Chicago IL, USA
[3] Heid O., Hughes T., THP068, LINAC10, Tsukuba, Japan
[4] Heid O., Hughes T., MOPD42, HB2010, Morschach, Switzerland
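The outphasing scheme mentioned in the abstract can be illustrated numerically: two equal-amplitude RF vectors driven at ±Δφ/2 sum to an amplitude of 2A·cos(Δφ/2), so the output power scales as cos²(Δφ/2) while each module keeps running at full, efficient drive. A minimal sketch of that relation (illustrative only, not the paper's controller; function names are invented):

```python
import cmath
import math

def combined_amplitude(a: float, delta_phi: float) -> float:
    """Amplitude of two equal-amplitude vectors driven at +delta_phi/2 and
    -delta_phi/2 (simple phasor model of outphasing power control)."""
    v = a * cmath.exp(1j * delta_phi / 2) + a * cmath.exp(-1j * delta_phi / 2)
    return abs(v)

def relative_power(delta_phi: float) -> float:
    """Output power relative to fully in-phase drive: cos^2(delta_phi / 2)."""
    return math.cos(delta_phi / 2) ** 2
```

In-phase drive (Δφ = 0) gives the full amplitude 2A; driving the two groups in antiphase (Δφ = π) cancels the output completely, with every module still at constant envelope.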
 
slides icon Slides WEBHMUST02 [1.201 MB]  
 
WEMAU001 A Remote Tracing Facility for Distributed Systems GUI, interface, database, operation 650
 
  • F. Ehm, A. Dworak
    CERN, Geneva, Switzerland
 
  Today CERN's accelerator control system is built upon a large number of services, mainly based on C++ and Java, which produce log events. In such a large distributed environment these log messages are essential for problem recognition and tracing. Tracing is therefore a vital part of operations, as understanding an issue in a subsystem means analyzing log events in an efficient and fast manner. At present 3150 device servers are deployed on 1600 diskless front-ends and send their log messages via the network to an in-house developed central server which, in turn, saves them to files. However, this solution is not able to provide several highly desired features and has performance limitations, which led to the development of a new solution. The new distributed tracing facility fulfills these requirements by taking advantage of the Simple Text Oriented Messaging Protocol (STOMP) and ActiveMQ as the transport layer. The system not only stores critical log events centrally in files or in a database, but also allows other clients (e.g. graphical interfaces) to read the same events at the same time using the provided Java API. The facility also ensures that each client receives only the log events of the desired level. Thanks to the ActiveMQ broker technology the system can easily be extended to clients implemented in other languages, and it is highly scalable in terms of performance. Long-running tests have shown that the system can handle up to 10,000 messages/second.  
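The per-client severity filtering described in the abstract can be modeled in a few lines: every subscriber sees the same event stream, filtered to its own minimum level. This is a hypothetical in-process sketch only (the real facility uses STOMP/ActiveMQ and a Java API; all names here are invented):

```python
import logging  # reuse the stdlib level numbering (DEBUG=10 ... ERROR=40)

class TraceBroker:
    """Toy model of level-filtered fan-out of log events to many clients."""
    def __init__(self):
        self._subs = []  # list of (min_level, callback) pairs

    def subscribe(self, min_level, callback):
        self._subs.append((min_level, callback))

    def publish(self, level, message):
        # Deliver to every subscriber whose threshold the event reaches.
        for min_level, cb in self._subs:
            if level >= min_level:
                cb(level, message)

broker = TraceBroker()
errors, everything = [], []
broker.subscribe(logging.ERROR, lambda lvl, msg: errors.append(msg))
broker.subscribe(logging.DEBUG, lambda lvl, msg: everything.append(msg))
broker.publish(logging.INFO, "device started")
broker.publish(logging.ERROR, "timeout on frontend")
```

In the real system the broker is a separate ActiveMQ process and subscribers are remote GUIs or file/database writers, but the filtering contract is the same.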
slides icon Slides WEMAU001 [1.008 MB]  
poster icon Poster WEMAU001 [0.907 MB]  
 
WEMAU002 Coordinating Simultaneous Instruments at the Advanced Technology Solar Telescope experiment, software, interface, target 654
 
  • S.B. Wampler, B.D. Goodrich, E.M. Johansson
    Advanced Technology Solar Telescope, National Solar Observatory, Tucson, USA
 
  A key component of the Advanced Technology Solar Telescope control system design is the efficient support of multiple instruments sharing the light path provided by the telescope. The set of active instruments varies with each experiment and possibly with each observation within an experiment. The flow of control for a typical experiment is traced through the control system to present the main aspects of the design that facilitate this behavior. Special attention is paid to the role of ATST's Common Services Framework in assisting the coordination of instruments with each other and with the telescope.  
slides icon Slides WEMAU002 [0.251 MB]  
poster icon Poster WEMAU002 [0.438 MB]  
 
WEMAU004 Integrating EtherCAT Based IO into EPICS at Diamond EPICS, real-time, Ethernet, Linux 662
 
  • R. Mercado, I.J. Gillingham, J. Rowland, K.G. Wilkinson
    Diamond, Oxfordshire, United Kingdom
 
  Diamond Light Source is actively investigating the use of EtherCAT-based remote I/O modules for the next phase of photon beamline construction. Ethernet-based I/O in general is attractive because of reduced equipment footprint, flexible configuration and reduced cabling. EtherCAT offers, in addition, the possibility of using inexpensive, off-the-shelf Ethernet hardware components with a throughput comparable to current VME-based solutions. This paper presents the work to integrate EtherCAT-based I/O into the EPICS control system, listing platform decisions, requirement considerations and software design, and discussing the use of real-time pre-emptive Linux extensions to support high-rate devices that require deterministic sampling.  
slides icon Slides WEMAU004 [0.057 MB]  
poster icon Poster WEMAU004 [0.925 MB]  
 
WEMAU005 The ATLAS Transition Radiation Tracker (TRT) Detector Control System detector, hardware, electronics, operation 666
 
  • J. Olszowska, E. Banaś, Z. Hajduk
    IFJ-PAN, Kraków, Poland
  • M. Hance, D. Olivito, P. Wagner
    University of Pennsylvania, Philadelphia, Pennsylvania, USA
  • T. Kowalski, B. Mindur
    AGH University of Science and Technology, Krakow, Poland
  • R. Mashinistov, K. Zhukov
    LPI, Moscow, Russia
  • A. Romaniouk
    MEPhI, Moscow, Russia
 
  Funding: CERN; MNiSW, Poland; MES of Russia and ROSATOM, Russian Federation; DOE and NSF, United States of America
The TRT is one of the ATLAS Inner Detector components, providing precise tracking and electron identification. It consists of 370 000 proportional counters (straws) which have to be filled with a stable active gas mixture and biased with high voltage. The high-voltage settings at distinct topological regions are periodically modified by a closed-loop regulation mechanism to ensure constant gas gain independent of drifts in atmospheric pressure, local detector temperatures and gas mixture composition. A low-voltage system powers the front-end electronics. Special algorithms provide fine-tuning procedures for detector-wide discrimination-threshold equalization to guarantee a uniform noise figure for the whole detector. Detector, cooling-system and electronics temperatures are continuously monitored by ~3000 temperature sensors. Standard industrial and custom-developed server applications and protocols are used to integrate the devices into a single system. All parameters originating in TRT devices and external infrastructure systems (important for detector operation or safety) are monitored and used by alert and interlock mechanisms. The system runs on 11 computers as PVSS (industrial SCADA) projects and is fully integrated with the ATLAS Detector Control System.
 
slides icon Slides WEMAU005 [1.384 MB]  
poster icon Poster WEMAU005 [1.978 MB]  
 
WEMAU007 Turn-key Applications for Accelerators with LabVIEW-RADE framework, LabView, software, alignment 670
 
  • O.O. Andreassen, P. Bestmann, C. Charrondière, T. Feniet, J. Kuczerowski, M. Nybø, A. Rijllart
    CERN, Geneva, Switzerland
 
  In the accelerator domain there is a need to integrate industrial devices and to create control and monitoring applications in an easy yet structured way. The LabVIEW-RADE framework provides the methods and tools to meet these requirements and also provides the essential integration of these applications into the CERN controls infrastructure. We present three examples of applications of different nature to show that the framework provides solutions at all three tiers of the control system: data access, processing and supervision. The first example is a remotely controlled alignment system for the LHC collimators. The collimator alignment needs to be checked periodically. Owing to limited access for personnel, the instruments are mounted on a small train. The system is composed of a PXI crate housing the instrument interfaces and a PLC for the motor control. We report on the design, development and commissioning of the system. The second application is the renovation of the PS beam spectrum analyser, where both hardware and software were renewed. The control application was ported from Windows to LabVIEW Real-Time. We describe the technique used for full integration into the PS console. The third example is a control and monitoring application for the CLIC two-beam test stand. The application accesses CERN front-end equipment through the CERN middleware, CMW, and provides many different ways to view the data. We conclude with an evaluation of the framework based on the three examples and indicate new areas for improvement and extension.  
poster icon Poster WEMAU007 [2.504 MB]  
 
WEMAU010 Web-based Control Application using WebSocket GUI, Linux, Windows, experiment 673
 
  • Y. Furukawa
    JASRI/SPring-8, Hyogo-ken, Japan
 
  WebSocket [1] brings asynchronous full-duplex communication between a web-based (i.e. JavaScript-based) application and a web server. WebSocket started as part of the HTML5 standardization but has since been separated from HTML5 and is developed independently. Using WebSocket, it becomes easy to develop platform-independent presentation-layer applications for accelerator and beamline control software. In addition, no application program has to be installed on client computers except for the web browser. WebSocket-based applications communicate with the WebSocket server using simple text-based messages, so WebSocket is applicable to message-based control systems such as MADOCA, which was developed for the SPring-8 control system. A simple WebSocket server for the MADOCA control system and a simple motor-control application were successfully made as a first trial of a WebSocket control application. Using Google Chrome (version 10.x) on Debian/Linux and Windows 7, Opera (version 11.0 beta) on Debian/Linux, and Safari (version 5.0.3) on Mac OS X as clients, the motors can be controlled through the WebSocket-based web application. More complex applications are now under development for synchrotron radiation experiments, combined with other HTML5 features.
[1] http://websocket.org/
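To give a flavor of what such a WebSocket server must implement, the opening handshake requires the server to answer the client's Sec-WebSocket-Key header with an accept token defined by RFC 6455: the SHA-1 hash of the key concatenated with a fixed protocol GUID, base64-encoded. A stdlib-only sketch (not the SPring-8 implementation):

```python
import base64
import hashlib

# Fixed GUID defined by RFC 6455 for the WebSocket opening handshake.
WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0E11CE80"

def websocket_accept(sec_websocket_key: str) -> str:
    """Compute the Sec-WebSocket-Accept value the server must return."""
    digest = hashlib.sha1((sec_websocket_key + WS_GUID).encode("ascii")).digest()
    return base64.b64encode(digest).decode("ascii")

# Example using the sample key from RFC 6455's handshake illustration:
accept = websocket_accept("dGhlIHNhbXBsZSBub25jZQ==")
```

After this handshake the connection switches from HTTP to framed, full-duplex message exchange, which is what makes the simple text-message protocols of systems like MADOCA a natural fit.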
 
poster icon Poster WEMAU010 [44.675 MB]  
 
WEMAU011 LIMA: A Generic Library for High Throughput Image Acquisition detector, hardware, software, interface 676
 
  • A. Homs, L. Claustre, A. Kirov, E. Papillon, S. Petitdemange
    ESRF, Grenoble, France
 
  A significant number of 2D detectors are used in large-scale facilities' control systems for quantitative data analysis. In these devices a common set of control parameters and features can be identified, but most manufacturers provide specific software control interfaces. A generic image acquisition library, called LIMA, has been developed at the ESRF for better compatibility and easier integration of 2D detectors into existing control systems. The LIMA design is driven by three main goals: i) independence from any control system, so it can be shared by a wide scientific community; ii) a rich common set of functionalities (e.g., if a feature is not supported by hardware, an alternative software implementation is provided); and iii) intensive use of events and multi-threaded algorithms for optimal exploitation of multi-core hardware resources, needed when controlling high-throughput detectors. LIMA currently supports the ESRF Frelon and Maxipix detectors as well as the Dectris Pilatus. Within a collaborative framework, the integration of the Basler GigE cameras is a contribution from SOLEIL. Although still under development, LIMA already features fast data saving in different file formats and basic data processing/reduction, such as software pixel binning/sub-image extraction, background subtraction, beam centroid and sub-image statistics calculation, among others.  
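Two of the data-reduction steps listed in the abstract, software pixel binning and background subtraction, are simple enough to sketch in pure Python. This is illustrative only: LIMA's real implementation is multi-threaded and event-driven, and these function names are invented:

```python
def bin2x2(image):
    """Software 2x2 pixel binning: sum each 2x2 block of the input image
    (a list of equal-length rows with even dimensions)."""
    h, w = len(image), len(image[0])
    return [[image[y][x] + image[y][x + 1] + image[y + 1][x] + image[y + 1][x + 1]
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

def subtract_background(image, background):
    """Pixel-wise background subtraction, clamped at zero."""
    return [[max(p - b, 0) for p, b in zip(prow, brow)]
            for prow, brow in zip(image, background)]
```

For example, binning the 4x4 ramp image `[[1,2,3,4],[5,6,7,8],[9,10,11,12],[13,14,15,16]]` yields the 2x2 image `[[14, 22], [46, 54]]`.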
slides icon Slides WEMAU011 [0.073 MB]  
 
WEMAU012 COMETE: A Multi Data Source Oriented Graphical Framework software, TANGO, toolkit, framework 680
 
  • G. Viguier, Y. Huriez, M. Ounsy, K.S. Saintin
    SOLEIL, Gif-sur-Yvette, France
  • R. Girardot
    EXTIA, Boulogne Billancourt, France
 
  Modern beamlines at SOLEIL need to browse a large amount of scientific data through multiple sources, which can be scientific measurement data files, databases or Tango [1] control systems. We created the COMETE [2] framework because we considered it necessary for end users to work with the same collection of widgets for all the different data sources to be accessed. On the other side, for GUI application developers, the complexity of data-source handling had to be hidden. With these two requirements now fulfilled, our development team is able to build high-quality, modular and reusable scientific GUI software, with a consistent look and feel for end users. COMETE offers some key features to our developers: a smart refreshing service, an easy-to-use and succinct API, and data-reduction functionality. This paper will present the work organization and the modern software architecture and design of the whole system. Then, the migration from our old GUI framework to COMETE will be detailed. The paper will conclude with an application example and a summary of the upcoming features of the framework.
[1] http://www.tango-controls.org
[2] http://comete.sourceforge.net
 
slides icon Slides WEMAU012 [0.083 MB]  
 
WEMMU001 Floating-point-based Hardware Accelerator of a Beam Phase-Magnitude Detector and Filter for a Beam Phase Control System in a Heavy-Ion Synchrotron Application detector, hardware, synchrotron, FPGA 683
 
  • F.A. Samman
    Technische Universität Darmstadt, Darmstadt, Germany
  • M. Glesner, C. Spies, S. Surapong
    TUD, Darmstadt, Germany
 
  Funding: German Federal Ministry of Education and Research in the frame of Project FAIR (Facility for Antiproton and Ion Research), Grant Number 06DA9028I.
A hardware implementation of an adaptive phase and magnitude detector and filter for a beam-phase control system in a heavy-ion synchrotron application is presented in this paper [1]. The main components of the hardware are adaptive LMS filters and a phase and magnitude detector. The phase detectors are implemented using a CORDIC algorithm based on 32-bit binary floating-point arithmetic data formats. Therefore, a decimal to floating-point adapter is required to interface the data from an ADC to the phase and magnitude detector. The floating-point-based hardware is designed to improve the precision of the previous hardware implementation, which was based on fixed-point arithmetic. The hardware of the detector and the adaptive LMS filter has been implemented on a reconfigurable FPGA device for hardware acceleration purposes. The ideal Matlab/Simulink model of the hardware and the VHDL model of the adaptive LMS filter and the phase and magnitude detector are compared. The comparison shows that the output signal of the floating-point-based adaptive FIR filter, as well as that of the phase and magnitude detector, is similar to the expected output signal of the ideal Matlab/Simulink model.
[1] H. Klingbeil, "A Fast DSP-Based Phase-Detector for Closed-Loop RF Control in Synchrotrons," IEEE Trans. Instrum. Meas., 54(3):1209–1213, 2005.
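The vectoring-mode CORDIC at the heart of such a phase and magnitude detector iteratively rotates the input vector onto the x-axis; in hardware each iteration is only a shift and an add. The accumulated rotation is the phase, and the gain-corrected residual x is the magnitude. A floating-point software sketch of the algorithm class (not the paper's VHDL):

```python
import math

def cordic_phase_magnitude(x: float, y: float, iterations: int = 40):
    """Vectoring-mode CORDIC: return (phase, magnitude) of the vector (x, y).
    In hardware each step is shift-and-add; here we model it in floats."""
    phase = 0.0
    if x < 0:  # pre-rotate into the right half-plane (by +/- pi)
        phase = math.copysign(math.pi, y)
        x, y = -x, -y
    gain = 1.0  # accumulated CORDIC gain: prod(sqrt(1 + 2^-2i))
    for i in range(iterations):
        f = 2.0 ** -i
        gain *= math.sqrt(1.0 + f * f)
        if y > 0:  # rotate clockwise to drive y toward zero
            x, y, phase = x + y * f, y - x * f, phase + math.atan(f)
        else:      # rotate counter-clockwise
            x, y, phase = x - y * f, y + x * f, phase - math.atan(f)
    return phase, x / gain
```

The result converges to `atan2(y, x)` and `hypot(x, y)`; the FPGA version trades the per-iteration `atan` values for a small lookup table.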
 
slides icon Slides WEMMU001 [0.383 MB]  
 
WEMMU004 SPI Boards Package, a New Set of Electronic Boards at Synchrotron SOLEIL undulator, FPGA, detector, interface 687
 
  • Y.-M. Abiven, P. Betinelli-Deck, J. Bisou, F. Blache, F. Briquez, A. Chattou, J. Coquet, P. Gourhant, N. Leclercq, P. Monteiro, G. Renaud, J.P. Ricaud, L. Roussier
    SOLEIL, Gif-sur-Yvette, France
 
  SOLEIL is a third-generation synchrotron radiation source located in France near Paris. At the moment, the storage ring delivers photon beam to 23 beamlines. As the machine and beamlines improve their performance, new requirements are identified. On the machine side, a new feedforward implementation for the electromagnetic undulators is required to improve beam stability. On the beamline side, a solution is required to synchronize data acquisition with motor position during continuous scans. In order to provide a simple and modular solution for these applications requiring synchronization, the electronics group developed a set of electronic boards called the "SPI board package". In this package, the boards can be connected together in a daisy chain and communicate with the controller through an SPI* bus. Communication with the control system is done via Ethernet. At the moment the following boards have been developed: a controller board based on a Cortex-M3 MCU, a 16-bit ADC board, a 16-bit DAC board and a board for processing motor encoder signals based on a Spartan-3 FPGA. This platform allows us to embed processing close to the hardware with open tools. Thanks to this solution we achieve the best synchronization performance.
* SPI: Serial Peripheral Interface
 
slides icon Slides WEMMU004 [0.230 MB]  
poster icon Poster WEMMU004 [0.430 MB]  
 
WEMMU005 Fabric Management with Diskless Servers and Quattor on LHCb Linux, experiment, embedded, collider 691
 
  • P. Schweitzer, E. Bonaccorsi, L. Brarda, N. Neufeld
    CERN, Geneva, Switzerland
 
  Large scientific experiments nowadays very often use large computer farms to process the events acquired from the detectors. In LHCb a small sysadmin team manages the 1400 servers of the LHCb Event Filter Farm, but also a wide variety of control servers for the detector electronics and infrastructure computers: file servers, gateways, DNS, DHCP and others. This variety of servers could not be handled without a solid fabric management system. We chose the Quattor toolkit for this task. We will present our use of this toolkit, with an emphasis on how we handle our diskless nodes (Event Filter Farm nodes and computers embedded in the acquisition electronics cards). We will show our current tests to replace the standard (RedHat/Scientific Linux) way of handling diskless nodes with fusion filesystems, and how this improves fabric management.  
slides icon Slides WEMMU005 [0.119 MB]  
poster icon Poster WEMMU005 [0.602 MB]  
 
WEMMU006 Management Tools for Distributed Control System in KSTAR software, monitoring, operation, EPICS 694
 
  • S. Lee, J.S. Hong, J.S. Park, M.K. Park, S.W. Yun
    NFRI, Daejon, Republic of Korea
 
  The integrated control system of the Korea Superconducting Tokamak Advanced Research (KSTAR) device has been developed as a set of distributed control systems based on the Experimental Physics and Industrial Control System (EPICS). It has the essential role of remote operation, supervision of the tokamak device and conduct of plasma experiments without any interruption. Therefore, the availability of the control system directly impacts the performance of the entire device. For non-interrupted operation of the KSTAR control system, we have developed a tool named Control System Monitoring (CSM) to monitor, in real time, the resources of EPICS Input/Output Controller (IOC) servers (utilization of memory, CPU, disk, network, user-defined processes and system-defined processes), the soundness of storage systems (storage utilization, storage status), the status of network switches using the Simple Network Management Protocol (SNMP), the network connection status of every local control server using the Internet Control Message Protocol (ICMP), and the operating environment of the main control room and the computer room (temperature, humidity, water leaks). When abnormal conditions or faults are detected, the CSM raises alarms to operators. In particular, if a critical fault related to data storage occurs, the CSM sends short messages to operators' mobile phones. In addition to the CSM, other tools for managing the integrated control system for KSTAR operation, such as Subversion for software version control and VMware for the virtualized IT infrastructure, will be introduced.  
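The core of a threshold-alarm check like the one a monitor such as CSM applies can be sketched generically. All names here are hypothetical; the real CSM covers CPU, memory, disk, network, storage, SNMP/ICMP status and room environment, and forwards critical alarms to mobile phones:

```python
def check_resources(samples, limits):
    """Compare resource samples against configured limits and return the
    list of alarm messages for every exceeded limit (hypothetical API)."""
    alarms = []
    for name, value in samples.items():
        limit = limits.get(name)
        if limit is not None and value > limit:
            alarms.append(f"{name}: {value} exceeds limit {limit}")
    return alarms

# One polling cycle over a (fictitious) IOC server:
alarms = check_resources(
    {"cpu_percent": 97, "disk_percent": 40},
    {"cpu_percent": 90, "disk_percent": 85},
)
```

A production monitor would run such a check periodically per host and route the resulting alarms to operator displays or messaging gateways.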
slides icon Slides WEMMU006 [0.247 MB]  
poster icon Poster WEMMU006 [5.611 MB]  
 
WEMMU007 Reliability in a White Rabbit Network network, timing, Ethernet, hardware 698
 
  • M. Lipiński, J. Serrano, T. Włostowski
    CERN, Geneva, Switzerland
  • C. Prados
    GSI, Darmstadt, Germany
 
  White Rabbit (WR) is a time-deterministic, low-latency Ethernet-based network which enables transparent, sub-ns accuracy timing distribution. It is being developed to replace the General Machine Timing (GMT) system currently used at CERN and will become the foundation for the control system of the Facility for Antiproton and Ion Research (FAIR) at GSI. High reliability is an important issue in WR's design, since unavailability of the accelerator's control system translates directly into expensive downtime of the machine. A typical WR network is required to lose no more than a single message per year. Due to WR's complexity, the translation of this real-world requirement into a reliability requirement constitutes an interesting issue on its own: a WR network is considered functional only if it provides all its services to all its clients at any time. This paper defines reliability in WR and describes how it was addressed by dividing it into sub-domains: deterministic packet delivery, data redundancy, topology redundancy and clock resilience. The studies show that the Mean Time Between Failures (MTBF) of the WR network is the main factor affecting its reliability. Therefore, probability calculations for different topologies were performed using Fault Tree Analysis and analytic estimations. The results of the study show that the requirements of WR are demanding. Design changes might be needed and further in-depth studies required, e.g. Monte Carlo simulations. Therefore, a direction for further investigations is proposed.  
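Fault-tree style topology calculations of the kind mentioned in the abstract combine element unavailabilities in two basic ways: a series (non-redundant) chain fails if any element fails, while a redundant group fails only if all members fail. A toy sketch of these building blocks (illustrative, not the actual WR study):

```python
def series_unavailability(unavails):
    """Unavailability of a chain that fails if ANY element fails
    (assuming independent failures)."""
    p_all_up = 1.0
    for u in unavails:
        p_all_up *= (1.0 - u)
    return 1.0 - p_all_up

def parallel_unavailability(unavails):
    """Unavailability of a redundant group that fails only if ALL
    members fail (assuming independent failures)."""
    p = 1.0
    for u in unavails:
        p *= u
    return p
```

For two elements each unavailable 10% of the time, the series chain is down about 19% of the time, while the redundant pair is down only 1% of the time, which is why topology redundancy is one of WR's reliability sub-domains.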
slides icon Slides WEMMU007 [0.689 MB]  
poster icon Poster WEMMU007 [1.080 MB]  
 
WEMMU009 Status of the RBAC Infrastructure and Lessons Learnt from its Deployment in LHC operation, database, software, software-architecture 702
 
  • W. Sliwinski, P. Charrue, I. Yastrebov
    CERN, Geneva, Switzerland
 
  The distributed control system for the LHC accelerator poses many challenges due to its inherent heterogeneity and highly dynamic nature. One of the important aspects is to protect the machine against unauthorized access and unsafe operation of the control system, from the low-level front-end machines up to the high-level control applications running in the control room. In order to prevent unauthorized access to the control system and accelerator equipment and to address possible security issues, the Role Based Access Control (RBAC) project was designed and developed at CERN, with a major contribution from the Fermilab laboratory. RBAC became an integral part of the CERN Controls Middleware (CMW) infrastructure and was deployed and commissioned in LHC operation in the summer of 2008, well before the first beam in the LHC. This paper presents the current status of the RBAC infrastructure, together with the outcome and experience gathered after massive deployment in LHC operation. Moreover, we outline how the project has evolved over the last three years and give an overview of the major extensions introduced to improve its integration, stability and functionality. The paper also describes plans for future project evolution and possible extensions, based on gathered user requirements and operational experience.  
slides icon Slides WEMMU009 [0.604 MB]  
poster icon Poster WEMMU009 [1.262 MB]  
 
WEMMU010 Dependable Design Flow for Protection Systems using Programmable Logic Devices hardware, simulation, FPGA, software 706
 
  • M. Kwiatkowski, B. Todd
    CERN, Geneva, Switzerland
 
  Programmable Logic Devices (PLDs) such as Field Programmable Gate Arrays (FPGAs) are becoming more prevalent in protection and safety-related electronic systems. When employing such programmable logic devices, extra care and attention need to be taken. It is important to be confident that the final synthesis result, used to generate the bit-stream to program the device, meets the design requirements. This paper describes how to maximize confidence using techniques such as formal methods, exhaustive Hardware Description Language (HDL) code simulation and hardware testing. An example is given for one of the critical functions of the Safe Machine Parameters (SMP) system, one of the key systems for the protection of the Large Hadron Collider (LHC) at CERN. The design flow is presented, where the implementation phase is just one small element of the whole process. The techniques and tools presented can be applied to any PLD-based system implementation and verification.  
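Exhaustive simulation, one of the techniques named in the abstract, is tractable for small combinational blocks: enumerate every input vector and compare the implementation against a reference model. A toy example for a 2-out-of-3 majority voter, a typical protection-system primitive (not the actual SMP logic):

```python
from itertools import product

def voter_impl(a, b, c):
    """Gate-level style implementation of a 2-out-of-3 majority vote."""
    return (a & b) | (a & c) | (b & c)

def voter_spec(a, b, c):
    """Reference model: output asserted when at least two inputs are."""
    return int(a + b + c >= 2)

# Exhaustive simulation of the full input space. This is feasible for
# combinational blocks with few inputs; larger state spaces are where
# formal methods take over, as the paper notes.
mismatches = [bits for bits in product([0, 1], repeat=3)
              if voter_impl(*bits) != voter_spec(*bits)]
```

An empty `mismatches` list is the exhaustive-simulation analogue of a passing equivalence check between the HDL and its specification.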
slides icon Slides WEMMU010 [1.093 MB]  
poster icon Poster WEMMU010 [0.829 MB]  
 
WEMMU011 Radiation Safety Interlock System for SACLA (XFEL/SPring-8) radiation, electron, gun, network 710
 
  • M. Kago, T. Matsushita, N. Nariyama, C. Saji, R. Tanaka, A. Yamashita
    JASRI/SPring-8, Hyogo-ken, Japan
  • Y. Asano, T. Hara, T. Itoga, Y. Otake, H. Takebe
    RIKEN/SPring-8, Hyogo, Japan
  • H. Tanaka
    RIKEN SPring-8 Center, Sayo-cho, Sayo-gun, Hyogo, Japan
 
  The radiation safety interlock system for SACLA (XFEL/SPring-8) protects personnel from radiation hazards. The system controls access to the accelerator tunnel, monitors the status of safety equipment such as emergency stop buttons, and gives permission for accelerator operation. The special feature of the system is fast beam termination when it detects an unsafe state. The total beam termination time is required to be less than 16.6 ms (the linac operation repetition cycle of 60 Hz). Especially important is fast beam termination when the electron beam deviates from the proper transport route. We therefore developed optical modules to transmit a signal at high speed over a long distance (an overall length of around 700 m). An exclusive system was installed for fast judgment of a proper beam route. It is independent of the main interlock system, which manages access control and related functions. The system achieved a response time of less than 7 ms, which is sufficient for our demand. Construction of the system was completed in February 2011 and the system commenced operation in March 2011. We report on the design of the system and its detailed performance.  
slides icon Slides WEMMU011 [0.555 MB]  
poster icon Poster WEMMU011 [0.571 MB]  
 
WEPKN002 Tango Control System Management Tool status, TANGO, device-server, database 713
 
  • P.V. Verdier, F. Poncet, J.L. Pons
    ESRF, Grenoble, France
  • N. Leclercq
    SOLEIL, Gif-sur-Yvette, France
 
  Tango is an object-oriented control system toolkit based on CORBA, initially developed at the ESRF. It is now also developed and used by Soleil, Elettra, Alba, Desy, MAX Lab, FRM II and other labs. Tango is conceived as a fully distributed control system: several processes (called servers) run on many different hosts, each server manages one or several Tango classes, and each class can have one or several instances. This poster shows existing tools to configure, monitor and manage a very large number of Tango components.  
poster icon Poster WEPKN002 [1.982 MB]  
 
WEPKN005 Experiences in Messaging Middleware for High-Level Control Applications EPICS, framework, interface, software 720
 
  • N. Wang, J.L. Matykiewicz, R. Pundaleeka, S.G. Shasharina
    Tech-X, Boulder, Colorado, USA
 
  Funding: This project is funded by the US Department of Energy, Office of High Energy Physics under the contract #DE-FG02-08ER85043.
Existing high-level applications in accelerator control and modeling systems leverage many different languages, tools and frameworks that do not interoperate with one another. As a result, the community has moved toward the proven Service-Oriented Architecture approach to address the interoperability challenges among heterogeneous high-level application modules. This paper presents our experiences in developing a demonstrative high-level application environment using emerging messaging middleware standards. In particular, we utilized new features such as pvData in EPICS v4, and other emerging standards such as the Data Distribution Service (DDS) and Extensible Type Interface by the Object Management Group. Our work on developing the demonstrative environment focuses on documenting the procedures to develop high-level accelerator control applications using the aforementioned technologies. Examples of such applications include presentation-panel clients based on Control System Studio (CSS), a model-independent plug-in for CSS, and data-producing middle-layer applications such as model/data servers. Finally, we show how these technologies enable developers to package various control subsystems and activities into "services" with well-defined "interfaces", making it possible to leverage heterogeneous high-level applications via flexible composition.
 
poster icon Poster WEPKN005 [2.723 MB]  
 
WEPKN006 Running a Reliable Messaging Infrastructure for CERN's Control System monitoring, network, operation, GUI 724
 
  • F. Ehm
    CERN, Geneva, Switzerland
 
  The current middleware for CERN's accelerator controls system is based on two implementations: the CORBA-based Controls MiddleWare (CMW) and the Java Message Service (JMS). The JMS service is realized using the open-source messaging product ActiveMQ and has become an increasingly vital part of beam operations, as data need to be transported reliably for various areas such as the beam protection system, post-mortem analysis, beam commissioning and the alarm system. The current JMS service is made of 17 brokers running either in clusters or as single nodes. The main service is deployed as a two-node cluster providing failover and load-balancing capabilities for high availability. Non-critical applications running on virtual machines or desktop machines read data via a third broker to decouple their load from the operational main cluster. This scenario was introduced last year, and the statistics showed an uptime of 99.998% and an average data serving rate of 1.6 GB/min, represented by around 150 messages/sec. Deploying, running, maintaining and protecting such a messaging infrastructure is not trivial and includes setting up careful monitoring and failure pre-recognition. Naturally, lessons have been learnt, and their outcome is very important for the current and future operation of such a service.  
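As a quick sanity check on the availability figure quoted above, an uptime of 99.998% corresponds to roughly ten minutes of downtime per year:

```python
def downtime_minutes_per_year(availability: float) -> float:
    """Expected downtime per (non-leap) year implied by an availability figure."""
    return (1.0 - availability) * 365 * 24 * 60

# The reported 99.998% uptime implies about 10.5 minutes of downtime per year:
dt = downtime_minutes_per_year(0.99998)
```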
poster icon Poster WEPKN006 [0.877 MB]  
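The quoted service statistics (99.998% uptime, 1.6 GB/min at around 150 messages/sec) imply an average message size and an annual downtime budget that a quick back-of-the-envelope calculation makes explicit; the script below is purely illustrative and assumes decimal gigabytes.

```python
# Back-of-the-envelope check of the quoted ActiveMQ service statistics.
GB = 1e9  # decimal gigabytes assumed

rate_bytes_per_min = 1.6 * GB   # quoted average data serving rate
msgs_per_sec = 150              # quoted average message rate

msgs_per_min = msgs_per_sec * 60
avg_msg_size_kb = rate_bytes_per_min / msgs_per_min / 1024  # KiB per message

# 99.998% uptime expressed as minutes of downtime per year
downtime_min_per_year = (1 - 0.99998) * 365 * 24 * 60

print(f"average message size ≈ {avg_msg_size_kb:.0f} KiB")      # ≈ 174 KiB
print(f"allowed downtime ≈ {downtime_min_per_year:.0f} min/yr")  # ≈ 11 min
```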
 
WEPKN007 A LEGO Paradigm for Virtual Accelerator Concept experiment, simulation, software, operation 728
 
  • S.N. Andrianov, A.N. Ivanov, E.A. Podzyvalov
    St. Petersburg State University, St. Petersburg, Russia
 
  The paper considers basic features of a Virtual Accelerator concept based on a LEGO paradigm. This concept involves three types of components: different mathematical models for accelerator design problems, integrated beam simulation packages (e.g. COSY, MAD, OptiM and others), and a special class of virtual feedback instruments similar to real control systems (EPICS). All of these components should interoperate for a more complete analysis of control systems and increased fault tolerance. The Virtual Accelerator is an information and computing environment which provides a framework for analysis based on these components, which can be combined in different ways. Corresponding distributed computing services establish the interaction between the mathematical models and the low-level control system. The general idea of the software implementation is based on the Service-Oriented Architecture (SOA), which allows the use of cloud computing technology and enables remote access to the information and computing resources. The Virtual Accelerator allows a designer to combine powerful instruments for modeling beam dynamics in a user-friendly way, including both self-developed and well-known packages. In the scope of this concept the following is also proposed: control system identification, analysis and result verification, visualization, as well as virtual feedback for beam line operation. The architecture of the Virtual Accelerator system itself and results of beam dynamics studies are presented.  
poster icon Poster WEPKN007 [0.969 MB]  
 
WEPKN010 European XFEL Phase Shifter: PC-based Control System LabView, undulator, hardware, GUI 731
 
  • E. Molina Marinas, J.M. Cela-Ruiz, A. Guirao, L.M. Martinez Fresno, I. Moya, A.L. Pardillo, S. Sanz, C. Vazquez, J.G.S. de la Gama
    CIEMAT, Madrid, Spain
 
  Funding: Work partially supported by the Spanish Ministry of Science and Innovation under SEI Resolution on 17-September-2009
The Accelerator Technology Unit at CIEMAT is in charge of part of the Spanish contribution to the European X-Ray Free-Electron Laser (EXFEL). This paper presents the control system of the Phase Shifter (PS), a beam phase corrector magnet that will be installed in the intersections of the SASE undulator system. Beckhoff has been chosen by EXFEL as its main supplier for the industrial control systems. The Beckhoff TwinCAT PLC architecture is a PC-based control technology built on EtherCAT, a real-time Ethernet fieldbus. The PS is operated with a stepper motor; its position is monitored by an incremental encoder and controlled by a TwinCAT PLC program using the TcMC2 library, an implementation of the PLCopen Motion Control specification. A GUI has been developed in LabVIEW instead of using the Beckhoff visualization tool. The control system for the first and second prototype devices has been developed in-house using COTS hardware and software. The specifications require a repeatability of ±50 μm in bidirectional movements and ±10 μm in unidirectional movements. The second prototype can reach speeds of up to 15 mm/s.
 
poster icon Poster WEPKN010 [3.077 MB]  
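The repeatability requirements quoted above (±50 µm bidirectional, ±10 µm unidirectional) can be checked against repeated positioning measurements with a simple half-spread calculation; the encoder readings below are invented for illustration, not measured EXFEL data.

```python
# Sketch: verifying the Phase Shifter repeatability specification from
# repeated moves to the same setpoint. Tolerances come from the text;
# the sample positions are hypothetical.

def repeatability_um(positions_um):
    """Half-spread of the measured positions around their mean, in micrometres."""
    mean = sum(positions_um) / len(positions_um)
    return max(abs(p - mean) for p in positions_um)

# hypothetical encoder readings (µm) after repeated moves to one setpoint
unidirectional = [1000.2, 1004.1, 998.7, 1002.3]   # approached from one side
bidirectional  = [1010.5, 985.0, 1020.2, 990.8]    # approached from both sides

assert repeatability_um(unidirectional) <= 10.0   # ±10 µm unidirectional spec
assert repeatability_um(bidirectional) <= 50.0    # ±50 µm bidirectional spec
```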
 
WEPKN014 NSLS-II Filling Pattern Measurement EPICS, storage-ring, diagnostics, linac 735
 
  • Y. Hu, L.R. Dalesio, K. Ha, I. Pinayev
    BNL, Upton, Long Island, New York, USA
 
  Multi-bunch injection will be deployed at NSLS-II. High-bandwidth diagnostic monitors with high-speed digitizers are used to measure bunch-by-bunch charge variation. The requirements of the filling pattern measurement and the layout of the beam monitors are described. Evaluation results for the commercial fast digitizer (Agilent Acqiris) and the high-bandwidth detector (Bergoz FCT) are presented.  
poster icon Poster WEPKN014 [0.313 MB]  
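A filling-pattern measurement of this kind boils down to integrating a fast-digitizer waveform over each RF bucket to obtain a relative bunch-by-bunch charge. The sketch below illustrates the idea only; the bucket spacing, sample counts and synthetic waveform are assumptions, not NSLS-II parameters.

```python
# Sketch of bunch-by-bunch charge extraction from a digitized FCT waveform.
import numpy as np

def bunch_charges(waveform, samples_per_bucket):
    """Integrate the waveform over each RF bucket to get relative charges."""
    n_buckets = len(waveform) // samples_per_bucket
    buckets = waveform[:n_buckets * samples_per_bucket]
    return buckets.reshape(n_buckets, samples_per_bucket).sum(axis=1)

# synthetic waveform: 4 buckets of 8 samples, bunches only in buckets 0 and 2
wf = np.zeros(32)
wf[2:5] = 1.0    # bunch in bucket 0
wf[18:21] = 2.0  # bunch with twice the charge in bucket 2

q = bunch_charges(wf, samples_per_bucket=8)
print(q)  # relative charge per bucket
```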
 
WEPKN015 A New Helmholtz Coil Permanent Magnet Measurement System* FPGA, data-acquisition, interface, permanent-magnet 738
 
  • J.Z. Xu, I. Vasserman
    ANL, Argonne, USA
 
  Funding: Work supported by U.S. Department of Energy Office of Basic Energy Sciences, under Contract No. DE-AC02-06CH11357.
A new Helmholtz Coil magnet measurement system has been developed at the Advanced Photon Source (APS) to characterize and sort the insertion device permanent magnets. The system uses the latest state-of-the-art field-programmable gate array (FPGA) technology to compensate for speed variations of the magnet motion. Initial results demonstrate that the system achieves a measurement precision better than 0.001 ampere-meters squared (A·m2) on a permanent magnet moment of 32 A·m2, probably the world's best precision of its kind.
 
poster icon Poster WEPKN015 [0.710 MB]  
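The quoted figures put the achievement in perspective: 0.001 A·m² resolved on a 32 A·m² moment is a relative precision of roughly 31 parts per million, as the trivial calculation below shows.

```python
# Relative precision implied by the quoted Helmholtz coil measurement figures.
precision_am2 = 1e-3   # quoted absolute precision, A·m²
moment_am2 = 32.0      # quoted magnet moment, A·m²

rel_ppm = precision_am2 / moment_am2 * 1e6
print(f"relative precision ≈ {rel_ppm:.0f} ppm")  # ≈ 31 ppm
```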
 
WEPKN018 NSLS-II Vacuum Control for Chamber Acceptance vacuum, ion, storage-ring, multipole 742
 
  • H. Xu, L.R. Dalesio, M.J. Ferreira, H.-C. Hseuh, D. Zigrosser
    BNL, Upton, Long Island, New York, USA
 
  Funding: Work supported by U.S. Department of Energy
The National Synchrotron Light Source II (NSLS-II) uses extruded aluminium chambers as an integral part of the vacuum system. Prior to installation in the Storage Ring, all dipole and multipole chamber assemblies must be tested to ensure vacuum integrity. A significant part of the chamber test requires a full bakeout of the assembly, as well as control and monitoring of the titanium sublimation pumps (TSP), non-evaporable getter pumps (NEG) and ion pumps (IP). Data acquired by the system during bakeouts include system temperature, vacuum pressure, residual gas analyzer scans, ion pump current, TSP operation and NEG activation. These data will be used as part of the acceptance process for the chambers prior to installation in the storage ring tunnel. This paper presents the design and implementation of the vacuum bakeout control, as well as related vacuum control issues.
 
poster icon Poster WEPKN018 [1.174 MB]  
 
WEPKN019 A Programmable Logic Controller-Based System for the Recirculation of Liquid C6F14 in the ALICE High Momentum Particle Identification Detector at the Large Hadron Collider detector, operation, monitoring, framework 745
 
  • I. Sgura, G. De Cataldo, A. Franco, C. Pastore, G. Volpe
    INFN-Bari, Bari, Italy
 
  We present the design and the implementation of the Control System (CS) for the recirculation of liquid C6F14 (Perfluorohexane) in the High Momentum Particle Identification Detector (HMPID). The HMPID is a sub-detector of the ALICE experiment at the CERN Large Hadron Collider (LHC) and uses liquid C6F14 as the Cherenkov radiator medium in 21 quartz trays for the measurement of the velocity of charged particles. The primary task of the Liquid Circulation System (LCS) is to ensure the highest transparency of the C6F14 to ultraviolet light by re-circulating the liquid through a set of special filters. In order to provide safe long-term operation, a PLC-based CS has been implemented. The CS supports both automatic and manual operating modes, remotely or locally. The adopted Finite State Machine approach minimizes possible operator errors and provides a hierarchical control structure allowing the operation and monitoring of a single radiator tray. The LCS is protected against anomalous working conditions by both active and passive systems. The active ones are ensured via the control software running in the PLC, whereas the human interface and data archiving are provided via PVSS, the SCADA framework which integrates the full detector control. The LCS under CS control has been fully commissioned and proved to meet all requirements, thus enabling the HMPID to successfully collect data from the first LHC operation.  
poster icon Poster WEPKN019 [1.270 MB]  
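The finite-state-machine approach described above (states, transitions, and protection against anomalous conditions) can be sketched minimally as a transition table with guard functions; the state names, events and guard below are invented for illustration, not the actual LCS model.

```python
# Minimal FSM sketch in the spirit of the LCS control system: a transition is
# only taken when its guard accepts the current process conditions.

class StateMachine:
    def __init__(self, initial, transitions):
        self.state = initial
        self.transitions = transitions  # {(state, event): (next_state, guard)}

    def fire(self, event, **ctx):
        next_state, guard = self.transitions[(self.state, event)]
        if guard(ctx):          # guard blocks the transition if conditions fail
            self.state = next_state
        return self.state

tray = StateMachine("STANDBY", {
    ("STANDBY", "start"): ("CIRCULATING", lambda c: c.get("pump_ok", False)),
    ("CIRCULATING", "stop"): ("STANDBY", lambda c: True),
})

tray.fire("start", pump_ok=False)      # guard fails: stays in STANDBY
assert tray.state == "STANDBY"
tray.fire("start", pump_ok=True)       # guard passes: circulation starts
assert tray.state == "CIRCULATING"
```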
 
WEPKN020 TANGO Integration of a SIMATIC WinCC Open Architecture SCADA System at ANKA TANGO, synchrotron, software, Linux 749
 
  • T. Spangenberg, K. Cerff, W. Mexner
    Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany
  • V. Kaiser
    Softwareschneiderei GmbH, Karlsruhe, Germany
 
  The WinCC OA supervisory control and data acquisition (SCADA) system provides the ANKA synchrotron facility with a powerful and very scalable tool to manage the enormous variety of technical equipment relevant for housekeeping and beamline operation. Crucial to the applicability of a SCADA system at the ANKA synchrotron are the options it provides for integration into other control concepts, even when these work on different time scales, management concepts or control standards. These aspects have led to different control approaches for technical services, the storage ring and the beamlines. Beamline control at ANKA is mainly based on TANGO and SPEC, which has been extended with TANGO server capabilities. This approach implies the essential need for a stable and fast link to the slower WinCC OA SCADA system that does not increase the dead time of a measurement. The open architecture of WinCC OA offers smooth integration in both directions and therefore gives options to combine potential advantages, e.g. native hardware drivers or convenient graphical capabilities. The implemented solution will be presented and discussed with selected examples.  
poster icon Poster WEPKN020 [0.378 MB]  
 
WEPKN024 UNICOS CPC New Domains of Application: Vacuum and Cooling & Ventilation vacuum, framework, operation, cryogenics 752
 
  • D. Willeman, E. Blanco Vinuela, B. Bradu, J.O. Ortola Vidal
    CERN, Geneva, Switzerland
 
  The UNICOS (UNified Industrial Control System) framework, and specifically its CPC package, has been extensively used in the domain of continuous processes (e.g. cryogenics, gas flows,…) and also in others specific to the LHC machine, such as the collimator environmental measurement interlock system. The application of UNICOS-CPC to other kinds of processes, vacuum and cooling & ventilation, is depicted here. One of the major challenges was to figure out whether the model and devices created so far were also suited to other types of processes (e.g. vacuum). To illustrate this challenge, two domain use cases will be shown: the ISOLDE vacuum control system and the STP18 (cooling & ventilation) control system. Both scenarios will be illustrated, emphasizing the adaptability of the UNICOS-CPC package to create those applications and highlighting the features found to be needed in the future UNICOS-CPC package. This paper will also introduce the mechanism used to optimize the commissioning time, so-called virtual commissioning. In most cases either the process is not yet accessible, or the process is critical and its availability therefore reduced, so a model of the process is used to validate the designed control system offline.  
poster icon Poster WEPKN024 [0.230 MB]  
 
WEPKN025 Supervision Application for the New Power Supply of the CERN PS (POPS) interface, framework, operation, software 756
 
  • H. Milcent, X. Genillon, M. Gonzalez-Berges, A. Voitier
    CERN, Geneva, Switzerland
 
  The power supply system for the magnets of the CERN PS has been recently upgraded to a new system called POPS (POwer for PS). The old mechanical machine has been replaced by a system based on capacitors. The equipment as well as the low level controls have been provided by an external company (CONVERTEAM). The supervision application has been developed at CERN reusing the technologies and tools used for the LHC Accelerator and Experiments (UNICOS and JCOP frameworks, PVSS SCADA tool). The paper describes the full architecture of the control application, and the challenges faced for the integration with an outsourced system. The benefits of reusing the CERN industrial control frameworks and the required adaptations will be discussed. Finally, the initial operational experience will be presented.  
poster icon Poster WEPKN025 [13.149 MB]  
 
WEPKN026 The ELBE Control System – 10 Years of Experience with Commercial Control, SCADA and DAQ Environments software, hardware, electron, interface 759
 
  • M. Justus, F. Herbrand, R. Jainsch, N. Kretzschmar, K.-W. Leege, P. Michel, A. Schamlott
    HZDR, Dresden, Germany
 
  The electron accelerator facility ELBE is the central experimental site of the Helmholtz-Zentrum Dresden-Rossendorf, Germany. Experiments with Bremsstrahlung started in 2001, and since then, through a series of expansions and modifications, ELBE has evolved into a 24/7 user facility running a total of seven secondary sources, including two IR FELs. As its control system, ELBE uses WinCC on top of a networked PLC architecture. For data acquisition with high temporal resolution, PXI- and PC-based systems are in use, employing National Instruments hardware and LabVIEW application software. Machine protection systems are based on in-house-built digital and analogue hardware. An overview of the system is given, along with an experience report on maintenance, reliability and the effort to keep pace with ongoing IT, OS and security developments. Limits of application and new demands imposed by the forthcoming facility upgrade as a centre for high-intensity beams (in conjunction with TW/PW femtosecond lasers) are discussed.  
poster icon Poster WEPKN026 [0.102 MB]  
 
WEPKN027 The Performance Test of F3RP61 and Its Applications in CSNS Experimental Control System EPICS, target, Linux, embedded 763
 
  • J. Zhuang, Y.P. Chu, D.P. Jin, J.J. Li
    IHEP Beijing, Beijing, People's Republic of China
 
  F3RP61 is an embedded PLC developed by Yokogawa, Japan. It is based on the PowerPC 8347 platform and can run Linux and EPICS. We performed a number of tests on this device, covering CPU performance, network performance, CA access time and the scan-time stability of EPICS, and compared F3RP61 with the MVME5100, the most widely used IOC in BEPCII. These tests and comparisons give a clear picture of the performance and capabilities of F3RP61. It can be used in the experimental control system of CSNS (China Spallation Neutron Source) as a communication node between the front-end control layer and the EPICS layer, and in some cases F3RP61 can also take on further functions such as control tasks.  
poster icon Poster WEPKN027 [0.200 MB]  
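Benchmarks such as CA access time or scan-period stability come down to timing an operation many times and reporting mean and jitter. The sketch below shows the generic pattern only; the timed workload is a stand-in, not the actual CA or scan test.

```python
# Generic repeated-timing measurement, as used for latency/jitter benchmarks.
import statistics
import time

def time_operation(op, repeats=1000):
    """Time `op` `repeats` times; return (mean, stdev) in seconds."""
    samples = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        op()
        samples.append(time.perf_counter() - t0)
    return statistics.mean(samples), statistics.stdev(samples)

# stand-in workload; a real test would time a CA get or a record scan
mean_s, jitter_s = time_operation(lambda: sum(range(1000)))
print(f"mean {mean_s * 1e6:.1f} µs, jitter {jitter_s * 1e6:.1f} µs")
```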
 
WEPKS001 Agile Development and Dependency Management for Industrial Control Systems software, framework, site, project-management 767
 
  • B. Copy, M. Mettälä
    CERN, Geneva, Switzerland
 
  The production and exploitation of industrial control systems differ substantially from traditional information systems; this is in part due to constraints on the availability and change life-cycle of production systems, as well as their reliance on proprietary protocols and software packages with little support for open development standards [1]. The application of agile software development methods therefore represents a challenge which requires the adoption of existing change and build management tools and approaches that can help bridge the gap and reap the benefits of managed development when dealing with industrial control systems. This paper will consider how agile development tools such as Apache Maven for build management, Hudson for continuous integration or Sonatype Nexus for the operation of "definitive media libraries" were leveraged to manage the development life-cycle of the CERN UAB framework [2], as well as other crucial building blocks of the CERN accelerator infrastructure, such as the CERN Common Middleware or the FESA project.
[1] H. Milcent et al, "UNICOS: AN OPEN FRAMEWORK", THD003, ICALEPCS2009, Kobe, Japan
[2] M. Dutour, "Software factory techniques applied to Process Control at CERN", ICALEPCS 2007, Knoxville Tennessee, USA
 
slides icon Slides WEPKS001 [10.592 MB]  
poster icon Poster WEPKS001 [1.032 MB]  
 
WEPKS003 An Object Oriented Framework of EPICS for MicroTCA Based Control System EPICS, framework, interface, software 775
 
  • Z. Geng
    SLAC, Menlo Park, California, USA
 
  EPICS (Experimental Physics and Industrial Control System) is a distributed control system platform which has been widely used for the control of large scientific devices such as particle accelerators and fusion plants. EPICS has introduced object-oriented (C++) interfaces to most of the core services, but the major part of EPICS, the run-time database, only provides C interfaces, which makes it hard to incorporate EPICS record-related data and routines into an object-oriented software architecture. This paper presents an object-oriented framework containing abstract classes that encapsulate the EPICS record-related data and routines in C++ classes, so that full OOA (Object Oriented Analysis) and OOD (Object Oriented Design) methodologies can be used for EPICS IOC design. We also present a dynamic device management scheme for the hot-swap capability of the MicroTCA-based control system.  
poster icon Poster WEPKS003 [0.176 MB]  
 
WEPKS004 ISAC EPICS on Linux: The March of the Penguins Linux, EPICS, ISAC, hardware 778
 
  • J.E. Richards, R.B. Nussbaumer, S. Rapaz, G. Waters
    TRIUMF, Canada's National Laboratory for Particle and Nuclear Physics, Vancouver, Canada
 
  The DC linear accelerators of the ISAC radioactive beam facility at TRIUMF do not impose rigorous timing constraints on the control system. Therefore a real-time operating system is not essential for device control. The ISAC Control System is completing a move to the use of the open-source Linux operating system for hosting all EPICS IOCs. The IOC platforms include GE Fanuc VME-based CPUs for control of most optics and diagnostics, rack-mounted servers for supervising PLCs, small desktop PCs for GPIB and serial "one-of-a-kind" instruments, as well as embedded ARM processors controlling CAN-bus devices that provide a suitcase-sized control system. This article focuses on the experience of creating a customized Linux distribution for front-end IOC deployment. The rationale, a roadmap of the process, and the efficiency advantages in personnel training and system management realized by using a single OS will be discussed.  
 
WEPKS005 State Machine Framework and its Use for Driving LHC Operational States* framework, operation, embedded, GUI 782
 
  • M. Misiowiec, V. Baggiolini, M. Solfaroli Camillocci
    CERN, Geneva, Switzerland
 
  The LHC follows a complex operational cycle with 12 major phases that include equipment tests, preparation, beam injection, ramping and squeezing, finally followed by the physics phase. This cycle is modeled and enforced with a state machine, whereby each operational phase is represented by a state. On each transition, before entering the next state, a series of conditions is verified to make sure the LHC is ready to move on. The State Machine framework was developed to cater for building independent or embedded state machines. These safely drive between the states, executing tasks bound to transitions, and broadcast related information to interested parties. The framework encourages users to program their own actions. Simple configuration management allows the operators to define and maintain complex models themselves. An emphasis was also put on easy interaction with remote state machine instances through standard communication protocols. On top of its core functionality, the framework offers transparent integration with other crucial tools used to operate the LHC, such as the LHC Sequencer. LHC Operational States has been in production for half a year and was seamlessly adopted by the operators. Further extensions to the framework and its application in operations are under way.
* http://cern.ch/marekm/icalepcs.html
 
poster icon Poster WEPKS005 [0.717 MB]  
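The pattern described above — verify all conditions before entering the next state, execute tasks bound to the transition, then broadcast the new state to interested parties — can be sketched compactly; the phase names and the single condition below are illustrative, not the actual LHC model.

```python
# Sketch of a guarded operational-cycle state machine with broadcast.

class OperationalCycle:
    def __init__(self, phases):
        self.phases = phases
        self.index = 0
        self.listeners = []          # interested parties to notify

    @property
    def phase(self):
        return self.phases[self.index]

    def advance(self, conditions, tasks):
        # every named condition must hold before the next phase is entered
        failed = [name for name, check in conditions if not check()]
        if failed:
            raise RuntimeError(f"cannot leave {self.phase}: {failed}")
        for task in tasks:           # tasks bound to this transition
            task()
        self.index += 1
        for listener in self.listeners:   # broadcast the new state
            listener(self.phase)

cycle = OperationalCycle(["INJECTION", "RAMP", "SQUEEZE", "PHYSICS"])
cycle.listeners.append(lambda p: print("now in", p))
cycle.advance([("beam present", lambda: True)], tasks=[])
assert cycle.phase == "RAMP"
```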
 
WEPKS006 UNICOS Evolution: CPC Version 6 framework, operation, vacuum, cryogenics 786
 
  • E. Blanco Vinuela, J.M. Beckers, B. Bradu, Ph. Durand, B. Fernández Adiego, S. Izquierdo Rosas, A. Merezhin, J.O. Ortola Vidal, J. Rochez, D. Willeman
    CERN, Geneva, Switzerland
 
  The UNICOS (UNified Industrial Control System) framework was created back in 1998, since then a noticeable number of applications in different domains have used this framework to develop process control applications. Furthermore the UNICOS framework has been formalized and their supervision layer has been reused in other kinds of applications (e.g. monitoring or supervisory tasks) where a control layer is not necessarily UNICOS oriented. The process control package has been reformulated as the UNICOS CPC package (Continuous Process Control) and a reengineering process has been followed. These noticeable changes were motivated by many factors as (1) being able to upgrade to the new more performance IT technologies in the automatic code generation, (2) being flexible enough to create new additional device types to cope with other needs (e.g. Vacuum or Cooling and Ventilation applications) without major impact on the framework or the PLC code baselines and (3) enhance the framework with new functionalities (e.g. recipes). This publication addresses the motivation, changes, new functionalities and results obtained. It introduces in an overall view the technologies used and changes followed, emphasizing what has been gained for the developer and the final user. Finally some of the new domains where UNICOS CPC has been used will be illustrated.  
poster icon Poster WEPKS006 [0.449 MB]  
 
WEPKS008 Rules-based Analysis with JBoss Drools : Adding Intelligence to Automation monitoring, synchrotron, software, DSL 790
 
  • E. De Ley, D. Jacobs
    iSencia Belgium, Gent, Belgium
 
  Rules engines are less well known as a software technology than traditional procedural, object-oriented, scripting or dynamic development languages. This is a pity, as they can be an important enrichment of a development toolbox. JBoss Drools is an open-source rules engine that can easily be embedded in any Java application. Through an integration into our Passerelle process automation suite, we have been able to provide advanced solutions for intelligent process automation, complex event processing, system monitoring and alarming, automated repair, etc. This platform has been proven over many years as an automated diagnosis and repair engine for Belgium's largest telecom provider, and it is being piloted at Synchrotron Soleil for device monitoring and alarming. After an introduction to rules engines in general and JBoss Drools in particular, we present some practical use cases and important caveats.  
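The core idea of a rules engine — declarative condition/action rules fired by forward chaining over a set of facts — can be illustrated with a toy engine; JBoss Drools itself uses its own DRL language and a Rete network, so this sketch only mirrors the programming model, with rule names and facts invented for illustration.

```python
# Toy forward-chaining rule engine: each rule is (name, condition, action).

rules = [
    ("overtemp-alarm",
     lambda f: f.get("temperature", 0) > 80,
     lambda f: f.setdefault("alarms", []).append("OVERTEMP")),
    ("fan-repair",   # repair rule triggered by the alarm the first rule raises
     lambda f: "OVERTEMP" in f.get("alarms", []),
     lambda f: f.__setitem__("fan_speed", "max")),
]

def run(facts, rules):
    # naive forward chaining: keep firing until no new rule matches
    fired, changed = set(), True
    while changed:
        changed = False
        for name, cond, action in rules:
            if name not in fired and cond(facts):
                action(facts)
                fired.add(name)
                changed = True
    return facts

facts = run({"temperature": 95}, rules)
assert facts["alarms"] == ["OVERTEMP"]
assert facts["fan_speed"] == "max"   # derived via chained rule firing
```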
 
WEPKS009 Integrating Gigabit Ethernet Cameras into EPICS at Diamond Light Source EPICS, Ethernet, software, photon 794
 
  • T.M. Cobb
    Diamond, Oxfordshire, United Kingdom
 
  At Diamond Light Source we have selected Gigabit Ethernet cameras supporting GigE Vision for our new photon beamlines. GigE Vision is an interface standard for high speed Ethernet cameras which encourages interoperability between manufacturers. This paper describes the challenges encountered while integrating GigE Vision cameras from a range of vendors into EPICS.  
poster icon Poster WEPKS009 [0.976 MB]  
 
WEPKS010 Architecture Design of the Application Software for the Low-Level RF Control System of the Free-Electron Laser at Hamburg LLRF, software, cavity, interface 798
 
  • Z. Geng
    SLAC, Menlo Park, California, USA
  • V. Ayvazyan
    DESY, Hamburg, Germany
  • S. Simrock
    ITER Organization, St. Paul lez Durance, France
 
  The superconducting linear accelerator of the Free-Electron Laser at Hamburg (FLASH) provides high-performance electron beams to the lasing system to generate synchrotron radiation for various users. The Low-Level RF (LLRF) system is used to maintain beam stability by stabilizing the RF field in the superconducting cavities with feedback and feed-forward algorithms. The LLRF applications are a set of software tools that perform RF system model identification, control parameter optimization, and exception detection and handling, so as to improve the precision, robustness and operability of the LLRF system. In order to implement the LLRF applications on hardware with multiple distributed processors, an optimized software architecture is required for good understandability, maintainability and extensibility. This paper presents the design of the LLRF application software architecture based on a software engineering approach and its implementation at FLASH.  
poster icon Poster WEPKS010 [0.307 MB]  
 
WEPKS011 Use of ITER CODAC Core System in SPIDER Ion Source EPICS, experiment, data-acquisition, framework 801
 
  • C. Taliercio, A. Barbalace, M. Breda, R. Capobianco, A. Luchetta, G. Manduchi, F. Molon, M. Moressa, P. Simionato
    Consorzio RFX, Associazione Euratom-ENEA sulla Fusione, Padova, Italy
 
  In February 2011 ITER released a new version (v2) of the CODAC Core System. In addition to the selected EPICS core, the new package also includes several tools from Control System Studio [1]. These tools are all integrated in Eclipse and offer an integrated environment for development and operation. The SPIDER Ion Source experiment is the first experiment planned in the ITER Neutral Beam Test Facility under construction at Consorzio RFX, Padova, Italy. As the final product of the Test Facility is the ITER Neutral Beam Injector, we decided to adhere to the ITER CODAC guidelines from the beginning. Therefore the EPICS system provided in the CODAC Core System will be used in SPIDER for plant control and supervision and, to some extent, for data acquisition. In this paper we report our experience with the use of CODAC Core System v2 in the implementation of the SPIDER control system and, in particular, we analyze the benefits and drawbacks of the Self-Description Data (SDD) tools which, based on an XML description of the signals involved in the system, provide automatic generation of the configuration files for the EPICS tools and for PLC data exchange.
[1] Control System Studio home page: http://css.desy.de/content/index_eng.html
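The generation step described above — an XML description of signals driving automatic production of EPICS configuration files — can be sketched in a few lines. The XML schema, record fields and signal names below are invented for illustration; the real SDD tools use ITER's own schema and templates.

```python
# Sketch of an SDD-style generator: XML signal list in, EPICS db records out.
import xml.etree.ElementTree as ET

SDD_XML = """
<signals>
  <signal name="SPIDER-PS:Voltage" type="ai" egu="V"/>
  <signal name="SPIDER-PS:Current" type="ai" egu="A"/>
</signals>
"""

def generate_db(xml_text):
    """Emit one EPICS record stanza per <signal> element."""
    records = []
    for sig in ET.fromstring(xml_text).iter("signal"):
        records.append(
            'record(%s, "%s") {\n    field(EGU, "%s")\n}'
            % (sig.get("type"), sig.get("name"), sig.get("egu"))
        )
    return "\n".join(records)

db_text = generate_db(SDD_XML)
print(db_text)
```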
 
 
WEPKS014 NOMAD – More Than a Simple Sequencer hardware, CORBA, experiment, interface 808
 
  • P. Mutti, F. Cecillon, A. Elaazzouzi, Y. Le Goc, J. Locatelli, H. Ortiz, J. Ratel
    ILL, Grenoble, France
 
  NOMAD is the new instrument control software of the Institut Laue-Langevin. Code that is highly shared across the whole instrument suite, a user-oriented design for tailored functionality, and improved instrument-team autonomy thanks to a uniform and ergonomic user interface are the essential elements guiding the software development. NOMAD implements a client/server approach. The server is the core business layer containing all the instrument methods and the hardware drivers, while the GUI provides all the necessary functionality for the interaction between user and hardware. All instruments share the same executable, while a set of XML configuration files adapts hardware needs and instrument methods to the specific experimental setup. Thanks to a complete graphical representation of experimental sequences, NOMAD provides an overview of past, present and future operations. Users have the freedom to build their own specific workflows using an intuitive drag-and-drop technique. A complete database of drivers to connect and control all possible instrument components has been created, simplifying the inclusion of a new piece of equipment in an experiment. A web application makes all the relevant information on the status of the experiment available outside the ILL. A set of scientific methods facilitates the interaction between users and hardware, giving access to instrument control and to complex operations within just one click on the interface. NOMAD is not only for scientists: dedicated tools allow daily use for setting up and testing a variety of technical equipment.  
poster icon Poster WEPKS014 [6.856 MB]  
 
WEPKS015 Automatic Creation of LabVIEW Network Shared Variables LabView, hardware, network, distributed 812
 
  • T. Kluge
    Siemens AG, Erlangen, Germany
  • H.-C. Schröder
    ASTRUM IT GmbH, Erlangen, Germany
 
  We are in the process of preparing the LabVIEW-controlled system components of our Solid State Direct Drive® experiments [1, 2, 3, 4] for integration into a Supervisory Control And Data Acquisition (SCADA) or distributed control system. The predetermined route to this is the generation of LabVIEW network shared variables that can easily be exported by LabVIEW to the SCADA system using OLE for Process Control (OPC) or other means. Many repetitive tasks are associated with the creation of the shared variables and the required code. We introduce an efficient and inexpensive procedure that automatically creates shared variable libraries and sets default values for the shared variables. Furthermore, LabVIEW controls are created that are used for managing the connection to the shared variables inside the LabVIEW code operating on them. The procedure takes an XML spreadsheet defining the required variables as input and utilizes XSLT and LabVIEW scripting. In a later stage of the project the code generation can be expanded to also create code and configuration files that will become necessary in order to access the shared variables from the SCADA system of choice.
[1] O. Heid, T. Hughes, THPD002, IPAC10, Kyoto, Japan
[2] R. Irsigler et al, 3B-9, PPC11, Chicago IL, USA
[3] O. Heid, T. Hughes, THP068, LINAC10, Tsukuba, Japan
[4] O. Heid, T. Hughes, MOPD42, HB2010, Morschach, Switzerland
 
poster icon Poster WEPKS015 [0.265 MB]  
 
WEPKS018 MstApp, a Rich Client Control Applications Framework at DESY framework, operation, hardware, status 819
 
  • W. Schütte, K. Hinsch
    DESY, Hamburg, Germany
 
  Funding: Deutsches Elektronen-Synchrotron DESY
The control system for PETRA 3 [1] and its pre-accelerators makes extensive use of rich clients for the control room and the servers. Most of them are written with the help of a rich-client Java framework: MstApp. They total 106 different console applications and 158 individual server applications. MstApp takes care of many common control system application aspects beyond communication. It provides a common look and feel: core menu items, a color scheme for standard states of hardware components, and standardized screen sizes/locations. It interfaces with our console application manager (CAM) and displays our communication link diagnostics tools on demand. MstApp supplies an accelerator context for each application; it handles printing, logging, resizing and unexpected application crashes. Due to our standardized deployment process, MstApp applications know their individual developers and can even send them emails at the press of a button. Furthermore, a concept of different operation modes is implemented: view-only, operating and expert use. Administration of the corresponding rights is done via web access to a database server. Initialization files on a web server are instantiated as Java objects with the help of the Java SE XMLEncoder; data tables are read with the same mechanism. New MstApp applications can easily be created with in-house wizards like the NewProjectWizard or the DeviceServerWizard. MstApp improves the operator experience, application developer productivity and delivered software quality.
[1] Reinhard Bacher, “Commissioning of the New Control System for the PETRA 3 Accelerator Complex at Desy”, Proceedings of ICALEPCS 2009, Kobe, Japan
 
poster icon Poster WEPKS018 [0.474 MB]  
 
WEPKS020 Adding Flexible Subscription Options to EPICS EPICS, framework, database, operation 827
 
  • R. Lange
    HZB, Berlin, Germany
  • L.R. Dalesio
    BNL, Upton, Long Island, New York, USA
  • A.N. Johnson
    ANL, Argonne, USA
 
  Funding: Work supported by U.S. Department of Energy (under contracts DE-AC02-06CH11357 resp. DE-AC02-98CH10886), German Bundesministerium für Bildung und Forschung and Land Berlin.
The need for a mechanism to control and filter subscriptions to control system variables by the client was described in a paper at the ICALEPCS2009 conference.[1] The implementation follows a plug-in design that allows the insertion of plug-in instances into the event stream on the server side. The client can instantiate and configure these plug-ins when opening a subscription, by adding field modifiers to the channel name using JSON notation.[2] This paper describes the design and implementation of a modular server-side plug-in framework for Channel Access, and shows examples for plug-ins as well as their use within an EPICS control system.
[1] R. Lange, A. Johnson, L. Dalesio: Advanced Monitor/Subscription Mechanisms for EPICS, THP090, ICALEPCS2009, Kobe, Japan.
[2] A. Johnson, R. Lange: Evolutionary Plans for EPICS Version 3, WEA003, ICALEPCS2009, Kobe, Japan.
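The client-side syntax described above, field modifiers appended to the channel name in JSON notation, can be illustrated with a short parsing sketch. The channel name and the "dbnd" plug-in configuration below are illustrative assumptions, not taken from the paper.

```python
import json

def parse_channel(spec):
    """Split a channel name of the form NAME{...JSON...} into the
    plain channel name and the plug-in configuration (a dict)."""
    brace = spec.find("{")
    if brace < 0:
        return spec, {}
    return spec[:brace], json.loads(spec[brace:])

# A subscription requesting a (hypothetical) absolute-deadband plug-in.
name, plugins = parse_channel('BPM01:X.VAL{"dbnd":{"abs":0.1}}')
```

On the server side, each top-level key would select a plug-in instance to insert into that subscription's event stream, configured with the nested object.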
 
poster icon Poster WEPKS020 [0.996 MB]  
 
WEPKS021 EPICS V4 in Python EPICS, software, status, data-analysis 830
 
  • G. Shen, M.A. Davidsaver, M.R. Kraimer
    BNL, Upton, Long Island, New York, USA
 
  Funding: Work supported under auspices of the U.S. Department of Energy under Contract No. DE-AC02-98CH10886 with Brookhaven Science Associates, LLC, and in part by the DOE Contract DE-AC02-76SF00515
A novel design and implementation of EPICS version 4 is under way in Python. EPICS V4 defines an efficient way to describe complex data structures and a data protocol. The current implementations in C++ and Java each have to reinvent a representation of these data structures; in Python this can be done more efficiently by mapping the data structure onto a numpy array. This presentation shows performance benchmarks, a comparison between the different language implementations, and the current status.
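The mapping idea can be sketched without numpy using the standard-library array module: the scalar fields of a structured value are packed into one contiguous buffer instead of a tree of per-field objects. The field names and packing scheme are invented for this illustration; EPICS V4 pvData structures are considerably richer.

```python
from array import array

# A toy "structure" definition: an ordered tuple of scalar field
# names (assumption; real pvData structures carry types and nesting).
fields = ("current", "voltage", "temperature")

def to_flat(values):
    """Pack the structure's scalar fields into one contiguous
    double buffer, the way an array-backed implementation avoids
    allocating a Python object per field."""
    return array("d", (values[f] for f in fields))

buf = to_flat({"current": 1.5, "voltage": 3.3, "temperature": 21.0})
```

With numpy, the same buffer could additionally be given named fields (a structured dtype), so field access and bulk transport share one memory block.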
 
 
WEPKS022 Mango: an Online GUI Development Tool for the Tango Control System TANGO, GUI, interface, device-server 833
 
  • G. Strangolino, C. Scafuri
    ELETTRA, Basovizza, Italy
 
  Mango is an online tool based on QTango that allows easy development of graphical panels ready to run without need to be compiled. Developing with Mango is easy and fast because widgets are dragged from a widget catalogue and dropped into the Mango container. Widgets are then connected to the control system variables by choosing them from a Tango device list or by dragging them from any other running application built with the QTango library. Mango has also been successfully used during the FERMI@Elettra commissioning both by machine physicists and technicians.  
poster icon Poster WEPKS022 [0.429 MB]  
 
WEPKS023 Further Developments in Generating Type-Safe Messaging software, status, target, network 836
 
  • R. Neswold, CA. King
    Fermilab, Batavia, USA
 
  Funding: Operated by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the United States Department of Energy.
At ICALEPCS '09, we introduced a source code generator that allows processes to communicate safely using native data types. In this paper, we discuss further development that has occurred since the conference in Kobe, Japan, including adding three more client languages, an optimization in network packet size and the addition of a new protocol data type.
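The paper does not spell out its wire format here, but the essence of type-safe messaging, generated marshalling code that fixes the field types and byte layout identically on both ends, can be sketched as follows. The Reading message and its layout are hypothetical stand-ins for the generator's output.

```python
import struct

# Hypothetical generator output for a message declared as
#   struct Reading { int32 device; float64 value; }
# A fixed big-endian layout keeps every client language in agreement.
_FMT = ">id"

def encode_reading(device, value):
    """Marshal a Reading; the types are enforced by the format string."""
    return struct.pack(_FMT, device, value)

def decode_reading(buf):
    """Unmarshal a Reading produced by encode_reading()."""
    device, value = struct.unpack(_FMT, buf)
    return device, value
```

Because both functions are generated from one message description, a change to the description regenerates all client languages at once, which is where the safety comes from.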
 
poster icon Poster WEPKS023 [3.219 MB]  
 
WEPKS024 CAFE, A Modern C++ Interface to the EPICS Channel Access Library interface, EPICS, GUI, framework 840
 
  • J.T.M. Chrin, M.C. Sloan
    Paul Scherrer Institut, 5232 Villigen PSI, Switzerland
 
CAFE (Channel Access interFacE) is a C++ library that provides a modern, multifaceted interface to the EPICS-based control system. CAFE makes extensive use of templates and multi-index containers to enhance efficiency, flexibility and performance. Stability and robustness are accomplished by ensuring that connectivity to EPICS channels remains in a well defined state in every eventuality, and results of all synchronous and asynchronous operations are captured and reported with integrity. CAFE presents the user with a number of options for writing and retrieving data to and from the control system. In addition to basic read and write operations, a further abstraction layer provides transparency to more intricate functionality involving logical sets of data; such object sequences are easily instantiated through an XML-based configuration mechanism. CAFE's suitability for use in a broad spectrum of applications is demonstrated. These range from high performance Qt GUI control widgets, to event processing agents that propagate data through OMG's Data Distribution Service (DDS), to script-like frameworks such as MATLAB. The methodology for the modular use of CAFE serves to improve maintainability by enforcing a logical boundary between the channel access components and the specifics of the application framework at hand.  
poster icon Poster WEPKS024 [0.637 MB]  
 
WEPKS025 Evaluation of Software and Electronics Technologies for the Control of the E-ELT Instruments: a Case Study software, hardware, framework, CORBA 844
 
  • P. Di Marcantonio, R. Cirami, I. Coretti
    INAF-OAT, Trieste, Italy
  • G. Chiozzi, M. Kiekebusch
    ESO, Garching bei Muenchen, Germany
 
In the scope of the evaluation of architectures and technologies for the control system of the E-ELT (European Extremely Large Telescope) instruments, a collaboration has been set up between the Instrumentation and Control Group of INAF-OATs and the ESO Directorate of Engineering. The first result of this collaboration is the design and implementation of a prototype of a small but representative control system for an E-ELT instrument, which has been set up at the INAF-OATs premises. The electronics is based on PLCs (Programmable Logic Controllers) and Ethernet-based fieldbuses from different vendors, but using international standards such as IEC 61131-3 and PLCopen Motion Control. The baseline design for the control software follows the architecture of the VLT (Very Large Telescope) instrumentation application framework, but it has been implemented using ACS (ALMA Common Software), an open-source software framework developed for the ALMA project and based on CORBA middleware. The communication among the software components is based on two models: CORBA calls for command/reply and the CORBA notification channel for distributing the device status. The communication with the PLCs is based on OPC UA, an international standard for communication with industrial controllers. The results of this work will contribute to the definition of the architecture of the control system that will be provided to all consortia responsible for the actual implementation of the E-ELT instruments. This paper presents the prototype motivation, its architecture, design and implementation.  
poster icon Poster WEPKS025 [3.039 MB]  
 
WEPKS026 A C/C++ Build System Based on Maven for the LHC Controls System target, Linux, pick-up, framework 848
 
  • J. Nguyen Xuan, B. Copy, M. Dönszelmann
    CERN, Geneva, Switzerland
 
The CERN accelerator controls system, mainly written in Java and C/C++, nowadays consists of 50 projects and 150 active developers. The controls group has decided to unify the development process and standards (e.g. project layout) using Apache Maven and Sonatype Nexus. Maven is the de-facto build tool for Java; it deals with versioning and dependency management, whereas Nexus is a repository manager. C/C++ developers were struggling to track their dependencies on other CERN projects: no versioning was applied, the libraries had to be compiled and made available for several platforms and architectures, and there was no dependency management mechanism. This resulted in very complex Makefiles which were difficult to maintain. Even though Maven is primarily designed for Java, a plugin (Maven NAR [1]) adapts the build process to native programming languages on different operating systems and platforms. However, C/C++ developers were not keen to abandon their current Makefiles. Hence our approach was to combine the best of the two worlds: NAR/Nexus and Makefiles. Maven NAR manages the dependencies and the versioning, and creates a file with the linker and compiler options needed to include the dependencies. The Makefiles carry out the build process to generate the binaries. Finally, the resulting artifacts (binaries, header files, metadata) are versioned and stored in a central Nexus repository. Early experiments were conducted in the scope of the controls group's testbed. Some existing projects have been successfully converted to this solution and some starting projects use this implementation.
[1] http://cern.ch/jnguyenx/MavenNAR.html
 
poster icon Poster WEPKS026 [0.518 MB]  
 
WEPKS027 Java Expert GUI Framework for CERN's Beam Instrumentation Systems GUI, framework, software, software-architecture 852
 
  • S. Bart Pedersen, S. Bozyigit, S. Jackson
    CERN, Geneva, Switzerland
 
The software section of the CERN Beam Instrumentation Group has recently performed a study of the tools used to produce Java expert applications. This paper presents the analysis that was made to understand the requirements for generic components, and the resulting tools, including a compilation of Java components that have been made available to a wider audience. The paper also discusses the possibility of using Maven as a deployment tool, with its implications for developers and users.  
poster icon Poster WEPKS027 [1.838 MB]  
 
WEPKS028 Exploring a New Paradigm for Accelerators and Large Experimental Apparatus Control Systems distributed, toolkit, software, database 856
 
  • L. Catani, R. Ammendola, F. Zani
    INFN-Roma II, Roma, Italy
  • C. Bisegni, S. Calabrò, P. Ciuffetti, G. Di Pirro, G. Mazzitelli, A. Stecchi
    INFN/LNF, Frascati (Roma), Italy
  • L.G. Foggetta
    LAL, Orsay, France
 
The integration of web technologies and web services has been, in recent years, one of the major trends in upgrading and developing control systems for accelerators and large experimental apparatuses. Usually, web technologies have been introduced to complement control systems with smart add-ons and user-friendly services or, for instance, to safely allow users at remote sites access to the control system. In spite of this still narrow spectrum of employment, some software technologies developed for high-performance web services, although originally intended and optimized for those particular applications, offer features that would allow their deeper integration in a control system and, eventually, their use in developing some of the control system's core components. In this paper we present the conclusions of a preliminary investigation of a new paradigm for an accelerator control system and the associated machine data acquisition system (DAQ), based on a synergic combination of a network-distributed cache memory and a non-relational key/value database. We investigated these technologies with particular interest in their performance, namely the speed of data storage and retrieval for the network memory and the data throughput and query execution time for the database, and, especially, in how much this performance can benefit from their inherent scalability. The work has been developed in a collaboration between INFN-LNF and INFN-Roma Tor Vergata.  
 
WEPKS029 Integrating a Workflow Engine within a Commercial SCADA to Build End User Applications in a Scientific Environment GUI, alignment, software, interface 860
 
  • M. Ounsy, G. Abeillé, S. Pierre-Joseph Zéphir, K.S. Saintin
    SOLEIL, Gif-sur-Yvette, France
  • E. De Ley
    iSencia Belgium, Gent, Belgium
 
To build integrated high-level applications, SOLEIL is using an original component-oriented approach based on GlobalSCREEN, an industrial Java SCADA [1]. The aim of this integrated development environment is to give SOLEIL's scientific and technical staff a way to develop GUI applications for external beamline users. These GUI applications must address the two following needs: monitoring and supervision of a control system, and development and execution of automated processes (such as beamline alignment, data collection and on-line data analysis). The first need is now completely answered through a rich set of Java graphical components based on the COMETE [2] library, providing a high level of service for data logging, scanning and so on. To reach the same quality of service for process automation, a big effort has been made to integrate PASSERELLE [3], a workflow engine, more smoothly, with dedicated user-friendly interfaces for end users, packaged as JavaBeans in the GlobalSCREEN component library. Starting with brief descriptions of the software architecture of the PASSERELLE and GlobalSCREEN environments, we will then present the overall system integration design as well as the current status of deployment on SOLEIL beamlines.
[1] V. Hardion, M. Ounsy, K. Saintin, "How to Use a SCADA for High-Level Application Development on a Large-Scale Basis in a Scientific Environment", ICALEPCS 2007
[2] G. Viguier, K. Saintin, https://comete.svn.sourceforge.net/svnroot/comete, ICALEPCS'11, MOPKN016.
[3] A. Buteau, M. Ounsy, G. Abeille, "A Graphical Sequencer for SOLEIL Beamline Acquisitions", ICALEPCS'07, Knoxville, Tennessee, USA, Oct 2007.
 
 
WEPKS030 A General Device Driver Simulator to Help Compare Real Time Control Systems EPICS, TANGO, device-server, software 863
 
  • M.S. Mohan
    EGO, Pisa, Italy
 
Supervisory Control And Data Acquisition (SCADA) systems such as EPICS, Tango and TINE usually provide small example device driver programs for testing or to help users get started; however, these differ between systems, making it hard to compare the SCADA systems. To address this, a small simulator driver was created which emulates signals and errors similar to those received from a hardware device. The simulator driver can return from one to four signals: a ramp signal, a large alarm ramp signal, an error signal and a timeout. The different signals or errors are selected using the associated software device number. The simulator driver performs similar functions to EPICS's clockApp [1], Tango's TangoTest and TINE's sine generator, but the signals are independent of the SCADA system. A command-line application, an EPICS server (IOC), a Tango device server and a TINE server (FEC) were created and linked with the simulator driver. In each case the software device numbers were equated to a dummy device. Using the servers it was possible to compare how each SCADA system behaved against the same repeatable signals. In addition to comparing and testing the SCADA systems, the finished servers proved useful as templates for real hardware device drivers.
[1] F. Furukawa, "Very Simple Example of EPICS Device Support", http://www-linac.kek.jp/epics/second
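The behaviour described above, one emulated signal or error per software device number, can be sketched as follows. The device numbering, ramp period and amplitudes are invented for this illustration; the real driver's interface is defined by the SCADA-neutral C API it exposes to the servers.

```python
# Hypothetical software device numbers, one per emulated behaviour.
RAMP, ALARM_RAMP, ERROR, TIMEOUT = 0, 1, 2, 3

class SimulatorError(Exception):
    """Emulated hardware fault."""

def read(device, tick, period=100):
    """Return the next sample for the given software device number."""
    if device == RAMP:
        return tick % period            # sawtooth 0..period-1
    if device == ALARM_RAMP:
        return 10 * (tick % period)     # ramp that exceeds alarm limits
    if device == ERROR:
        raise SimulatorError("emulated hardware fault")
    if device == TIMEOUT:
        return None                     # caller treats None as a timeout
    raise ValueError("unknown device number")
```

Each server (IOC, Tango device server, FEC) then maps its dummy device onto these numbers, so all SCADA systems see identical, repeatable stimuli.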
 
poster icon Poster WEPKS030 [1.504 MB]  
 
WEPKS032 A UML Profile for Code Generation of Component Based Distributed Systems interface, software, distributed, framework 867
 
  • G. Chiozzi, L. Andolfato, R. Karban
    ESO, Garching bei Muenchen, Germany
  • A. Tejeda
    UCM, Antofagasta, Chile
 
  A consistent and unambiguous implementation of code generation (model to text transformation) from UML must rely on a well defined UML profile, customizing UML for a particular application domain. Such a profile must have a solid foundation in a formally correct ontology, formalizing the concepts and their relations in the specific domain, in order to avoid a maze or set of wildly created stereotypes. The paper describes a generic profile for the code generation of component based distributed systems for control applications, the process to distill the ontology and define the profile, and the strategy followed to implement the code generator. The main steps that take place iteratively include: defining the terms and relations with an ontology, mapping the ontology to the appropriate UML metaclasses, testing the profile by creating modelling examples, and generating the code.  
poster icon Poster WEPKS032 [1.925 MB]  
 
WEPKS033 UNICOS CPC6: Automated Code Generation for Process Control Applications software, framework, operation, vacuum 871
 
  • B. Fernández Adiego, E. Blanco Vinuela, I. Prieto Barreiro
    CERN, Geneva, Switzerland
 
The Continuous Process Control package (CPC) is one of the components of the CERN Unified Industrial Control System framework (UNICOS). As part of this framework, UNICOS-CPC provides a well-defined library of device types, a methodology and a set of tools to design and implement industrial control applications. The new CPC version uses the software factory UNICOS Application Builder (UAB) to develop CPC applications. The CPC component is composed of several platform-oriented plug-ins (PLCs and SCADA) describing the structure and the format of the generated code. It uses a resource package where both the library of device types and the generated-file syntax are defined. The UAB core is the generic part of this software; it discovers and dynamically calls the different plug-ins and provides the required common services. In this paper the UNICOS CPC6 package is presented. It is composed of several plug-ins: the instance generator and the logic generator for both Siemens and Schneider PLCs, the SCADA generator (based on PVSS) and the CPC wizard, a dedicated plug-in created to provide the user with a friendly GUI. A management tool called UAB bootstrap will administer the different CPC component versions and all the dependencies between the CPC resource packages and the components. This tool guides the control system developer in installing and launching the different CPC component versions.  
poster icon Poster WEPKS033 [0.730 MB]  
 
WEPMN001 Experience in Using Linux Based Embedded Controllers with EPICS Environment for the Beam Transport in SPES Off–Line Target Prototype EPICS, software, database, target 875
 
  • M. Montis, M.G. Giacchini
    INFN/LNL, Legnaro (PD), Italy
 
EPICS [1] was chosen as the general framework for developing the control system of the SPES facility under construction at LNL [2]. We report our experience in using commercial devices based on Debian Linux to control the electrostatic deflectors installed on the beam line at the output of the target chamber. We discuss this solution and compare it to other IOC implementations in use in the target control system.
[1] http://www.aps.anl.gov/epics/
[2] http://www.lnl.infn.it/~epics
* M.Montis, MS thesis: http://www.lnl.infn.it/~epics/THESIS/TesiMaurizioMontis.pdf
 
poster icon Poster WEPMN001 [1.036 MB]  
 
WEPMN005 Spiral2 Control Command: a Standardized Interface between High Level Applications and EPICS IOCs interface, status, operation, EPICS 879
 
  • C.H. Haquin, P. Gillette, E. Lemaître, L. Philippe, D.T. Touchard
    GANIL, Caen, France
  • F. Gougnaud, Y. Lussignol
    CEA/DSM/IRFU, France
 
The SPIRAL2 linear accelerator will produce entirely new particle beams, enabling exploration of the boundaries of matter. Coupled with the existing GANIL machine, this new facility will produce light and heavy exotic nuclei at extremely high intensities. The field deployment of the control system relies on Linux PCs and servers, VME VxWorks crates and Siemens PLCs; equipment will be addressed either directly or through a Modbus/TCP fieldbus network. Several laboratories are involved in the software development of the control system, so in order to improve the efficiency of the collaboration, special care is taken over the software organization. This makes sense during the development phase, in a context of tight budget and time constraints, but it also helps us design a control system that will require as little effort as possible for maintenance and evolution during the exploitation of the new machine. The major concepts of this organization are the choice of EPICS; the definition of an EPICS directory tree specific to SPIRAL2, called "topSP2", which is our reference work area for development, integration and exploitation; and the use of a version control system (SVN) to store and share our developments independently of the multi-site dimension of the project. The last concept is the definition of a "standardized interface" between high-level applications programmed in Java and EPICS databases running in IOCs. This paper relates the rationale and objectives of this interface as well as its development cycle, from specification using UML diagrams to testing on the actual equipment.  
poster icon Poster WEPMN005 [0.945 MB]  
 
WEPMN006 Commercial FPGA Based Multipurpose Controller: Implementation Perspective EPICS, FPGA, hardware, GUI 882
 
  • I. Arredondo, D. Belver, P. Echevarria, M. Eguiraun, H. Hassanzadegan, M. del Campo
    ESS-Bilbao, Zamudio, Spain
  • V. Etxebarria, J. Jugo
    University of the Basque Country, Faculty of Science and Technology, Bilbao, Spain
  • N. Garmendia, L. Muguira
    ESS Bilbao, Bilbao, Spain
 
  Funding: The present work is supported by the Basque Government and Spanish Ministry of Science and Innovation.
This work presents a fast-acquisition multipurpose controller, focusing on its EPICS integration and on its XML-based configuration. The controller is based on a Lyrtech VHS-ADC board enclosing an FPGA, connected to a host PC. The host acts as the local controller and implements an IOC, integrating the device in an EPICS network. These tasks have been performed using Java as the main tool for programming the PC to make the device fit the desired application. The process involves several technologies: JNA to handle C functions (i.e. the FPGA API), JavaIOC to integrate EPICS, and the XML W3C DOM classes to easily configure the particular application. In order to manage these functions, Java-specific tools have been developed: methods to manage the FPGA (read/write registers, acquire data, …), methods to create and use the EPICS server (put, get, monitor, …), mathematical methods to process the data (numeric format conversions, …) and methods to create and initialize the application structure by means of an XML file (parse elements, build the DOM and the specific application structure). The XML file has some nodes and tags common to all applications, defining the FPGA register specifications and the EPICS variables, so the user only has to include a node for the specific application and use the tools mentioned above. The main class developed is in charge of managing the FPGA and the EPICS server according to this XML file. This multipurpose controller has been successfully used to implement a BPM and an LLRF application for the ESS-Bilbao facility.
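A minimal sketch of the XML-driven configuration step might look as follows. The node and attribute names are hypothetical, standing in for the common FPGA-register and EPICS-variable nodes plus one application-specific node; the real implementation parses an equivalent file with the Java W3C DOM classes.

```python
import xml.etree.ElementTree as ET

# Hypothetical configuration file: common <registers> and <epics>
# nodes shared by all applications, plus one application node.
CONFIG = """<controller>
  <registers>
    <register name="GAIN" address="0x10"/>
    <register name="PHASE" address="0x14"/>
  </registers>
  <epics>
    <pv name="BPM:X"/>
    <pv name="BPM:Y"/>
  </epics>
  <application type="BPM"/>
</controller>"""

def load(xml_text):
    """Parse the config into register map, PV list and app type."""
    root = ET.fromstring(xml_text)
    registers = {r.get("name"): int(r.get("address"), 16)
                 for r in root.find("registers")}
    pvs = [p.get("name") for p in root.find("epics")]
    app = root.find("application").get("type")
    return registers, pvs, app

registers, pvs, app = load(CONFIG)
```

The main class would then create the FPGA accessors from `registers`, publish `pvs` through the EPICS server, and dispatch on the application node, which is what keeps the controller multipurpose.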
 
poster icon Poster WEPMN006 [0.559 MB]  
 
WEPMN008 Function Generation and Regulation Libraries and their Application to the Control of the New Main Power Converter (POPS) at the CERN CPS software, simulation, real-time, Linux 886
 
  • Q. King, S.T. Page, H. Thiesen
    CERN, Geneva, Switzerland
  • M. Veenstra
    EBG MedAustron, Wr. Neustadt, Austria
 
  Power converter control for the LHC is based on an embedded control computer called a Function Generator/Controller (FGC). Every converter includes an FGC with responsibility for the generation of the reference current as a function of time and the regulation of the circuit current, as well as control of the converter state. With many new converter controls software classes in development it was decided to generalise several key components of the FGC software in the form of C libraries: function generation in libfg, regulation, limits and simulation in libreg and DCCT, ADC and DAC calibration in libcal. These libraries were first used in the software class dedicated to controlling the new 60MW main power converter (POPS) at the CERN CPS where regulation of both magnetic field and circuit current is supported. This paper reports on the functionality provided by each library and in particular libfg and libreg. The libraries are already being used by software classes in development for the next generation FGC for Linac4 converters, as well as the CERN SPS converter controls (MUGEF) and MedAustron converter regulation board (CRB).  
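As an illustration of the kind of service libfg provides, here is a much-simplified sketch of a single function type: a rate-limited linear ramp evaluated as a reference value at time t. The function name and parameters are invented for this example; the real library offers a whole family of reference functions with richer limit handling.

```python
def ramp_reference(t, start, end, rate):
    """Reference current at time t for a linear ramp from start to
    end, limited to `rate` units per second (a simplified analogue
    of one libfg function type)."""
    span = end - start
    ramp_time = abs(span) / rate     # time needed at the rate limit
    if t <= 0:
        return start
    if t >= ramp_time:
        return end
    return start + span * t / ramp_time
```

In an FGC-like controller, such a function would be evaluated every regulation period to produce the reference that the regulation loop (libreg's role) then tracks against the measured circuit current.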
poster icon Poster WEPMN008 [3.304 MB]  
 
WEPMN009 Simplified Instrument/Application Development and System Integration Using Libera Base Software Framework software, hardware, framework, interface 890
 
  • M. Kenda, T. Beltram, T. Juretič, B. Repič, D. Škvarč, C. Valentinčič
    I-Tech, Solkan, Slovenia
 
The development of many appliances used in a scientific environment presents similar challenges, often addressed repeatedly. One has to design or integrate hardware components; support for network and other communication standards needs to be established; data and signals are processed and dispatched; interfaces are required to monitor and control the behaviour of the appliances. At Instrumentation Technologies we identified and addressed these issues by creating a generic framework composed of several reusable building blocks. They simplify some of the tedious tasks and leave more time to concentrate on the real issues of the application. Furthermore, the end-product quality benefits from the larger common base of this middleware. We will present the benefits on the concrete example of an instrument implemented on the MTCA platform, accessible through a graphical user interface.  
poster icon Poster WEPMN009 [5.755 MB]  
 
WEPMN011 Controlling the EXCALIBUR Detector software, detector, simulation, hardware 894
 
  • J.A. Thompson, I. Horswell, J. Marchal, U.K. Pedersen
    Diamond, Oxfordshire, United Kingdom
  • S.R. Burge, J.D. Lipp, T.C. Nicholls
    STFC/RAL, Chilton, Didcot, Oxon, United Kingdom
 
  EXCALIBUR is an advanced photon counting detector being designed and built by a collaboration of Diamond Light Source and the Science and Technology Facilities Council. It is based around 48 CERN Medipix III silicon detectors arranged as an 8x6 array. The main problem addressed by the design of the hardware and software is the uninterrupted collection and safe storage of image data at rates up to one hundred (2048x1536) frames per second. This is achieved by splitting the image into six 'stripes' and providing parallel data paths for them all the way from the detectors to the storage. This architecture requires the software to control the configuration of the stripes in a consistent manner and to keep track of the data so that the stripes can be subsequently stitched together into frames.  
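The stripe-to-frame reassembly can be sketched as follows. The stripe indexing and toy dimensions are assumptions for illustration; in the real system six parallel data paths each deliver one horizontal stripe of a 2048x1536 frame.

```python
def stitch(stripes):
    """Reassemble a full frame from horizontal stripes delivered by
    parallel data paths. Each stripe is a list of rows; stripes are
    keyed by position, so out-of-order arrival is harmless."""
    frame = []
    for index in sorted(stripes):
        frame.extend(stripes[index])
    return frame

# Toy frame: 6 stripes of 2 rows x 4 pixels (real stripes are 2048x256).
stripes = {i: [[i] * 4, [i] * 4] for i in range(6)}
frame = stitch(stripes)
```

Keeping the stripes independent until this final step is what lets each data path run at full rate; only the bookkeeping (consistent stripe configuration and frame tagging) has to be coordinated by the control software.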
poster icon Poster WEPMN011 [0.289 MB]  
 
WEPMN012 PC/104 Asyn Drivers at Jefferson Lab interface, EPICS, hardware, operation 898
 
  • J. Yan, T.L. Allison, S.D. Witherspoon
    JLAB, Newport News, Virginia, USA
 
  Funding: Authored by Jefferson Science Associates, LLC under U.S. DOE Contract No. DE-AC05-06OR23177.
PC/104 embedded IOCs that run RTEMS and EPICS have been applied in many new projects at Jefferson Lab. Different commercial PC/104 I/O modules on the market, such as digital I/O, data acquisition and communication modules, are integrated in our control system. AsynDriver, a general facility for interfacing device-specific code to low-level drivers, was applied for the PC/104 serial communication I/O cards. We chose the ines GPIB-PC/104-XL as the GPIB interface module and developed a low-level device driver compatible with asynDriver. The ines GPIB-PC/104-XL has an iGPIB 72110 chip, which is register-compatible with the NEC uPD7210 in GPIB Talker/Listener applications. Instrument device support was created to provide access to the operating parameters of GPIB devices. A low-level device driver for the serial communication board Model 104-COM-8SM was also developed to run under asynDriver. This serial interface board contains eight independent ports and provides effective RS-485, RS-422 and RS-232 multipoint communication. StreamDevice protocols were applied for the serial communications. AsynDriver in a PC/104 IOC application provides a standard interface between the high-level device support and the hardware-level device drivers. This makes it easy to develop GPIB and serial communication applications in PC/104 IOCs.
 
 
WEPMN013 Recent Developments in Synchronised Motion Control at Diamond Light Source EPICS, software, interface, framework 901
 
  • B.J. Nutter, T.M. Cobb, M.R. Pearson, N.P. Rees, F. Yuan
    Diamond, Oxfordshire, United Kingdom
 
  At Diamond Light Source the EPICS control system is used with a variety of motion controllers. The use of EPICS ensures a common interface over a range of motorised applications. We have developed a system to enable the use of the same interface for synchronised motion over multiple axes using the Delta Tau PMAC controller. Details of this work will be presented, along with examples and possible future developments.  
 
WEPMN014 The Software and Hardware Architectural Design of the Vessel Thermal Map Real-Time System in JET real-time, plasma, Linux, network 905
 
  • D. Alves, A. Neto, D.F. Valcárcel
    IPFN, Lisbon, Portugal
  • G. Arnoux, P. Card, S. Devaux, R.C. Felton, A. Goodyear, D. Kinna, P.J. Lomas, P. McCullen, A.V. Stephen, K-D. Zastrow
    CCFE, Abingdon, Oxon, United Kingdom
  • S. Jachmich
    RMA, Brussels, Belgium
 
  The installation of ITER-relevant materials for the plasma facing components (PFCs) in the Joint European Torus (JET) is expected to have a strong impact on the operation and protection of the experiment. In particular, the use of all-beryllium tiles, which deteriorate at a substantially lower temperature than the formerly installed CFC tiles, imposes strict thermal restrictions on the PFCs during operation. Prompt and precise responses are therefore required whenever anomalous temperatures are detected. The new Vessel Thermal Map (VTM) real-time application collects the temperature measurements provided by dedicated pyrometers and Infra-Red (IR) cameras, groups them according to spatial location and probable offending heat source and raises alarms that will trigger appropriate protective responses. In the context of JET's global scheme for the protection of the new wall, the system is required to run on a 10 millisecond cycle communicating with other systems through the Real-Time Data Network (RTDN). In order to meet these requirements a Commercial Off-The-Shelf (COTS) solution has been adopted based on standard x86 multi-core technology, Linux and the Multi-threaded Application Real-Time executor (MARTe) software framework. This paper presents an overview of the system with particular technical focus on the configuration of its real-time capability and the benefits of the modular development approach and advanced tools provided by the MARTe framework.
See the Appendix of F. Romanelli et al., Proceedings of the 23rd IAEA Fusion Energy Conference 2010, Daejeon, Korea
 
poster icon Poster WEPMN014 [5.306 MB]  
 
WEPMN015 Timing-system Solution for MedAustron; Real-time Event and Data Distribution Network timing, real-time, software, ion 909
 
  • R. Štefanič, J. Dedič, R. Tavčar
    Cosylab, Ljubljana, Slovenia
  • J. Gutleber
    CERN, Geneva, Switzerland
  • R. Moser
    EBG MedAustron, Wr. Neustadt, Austria
 
MedAustron is an ion beam cancer therapy and research centre currently under construction in Wiener Neustadt, Austria. This facility features a synchrotron particle accelerator for light ions. A timing system for this class of accelerators, targeted at clinical use, is being developed as a product of close collaboration between MedAustron and Cosylab. We redesigned the μResearch Finland transport layer's FPGA firmware, extending its capabilities to address specific requirements of the machine, to arrive at a generic real-time broadcast network for coordinating the actions of a compact, pulse-to-pulse-modulation-based particle accelerator. One such requirement is the need to support configurable responses to timing events on the receiver side. The system comes with National Instruments LabVIEW-based software support, ready to be integrated into the PXI-based front-end controllers. This paper explains the design process from initial requirements refinement to technology choice, architectural design and implementation. It elaborates the main characteristics of the accelerator that the timing system has to address, such as support for concurrently operating partitions, real-time and non-real-time data transport needs, and flexible configuration schemes for real-time response to timing-event reception. Finally, an architectural overview is given, with the main components explained in due detail.  
poster icon Poster WEPMN015 [0.800 MB]  
 
WEPMN016 Synchronously Driven Power Converter Controller Solution for MedAustron timing, interface, real-time, FPGA 912
 
  • L. Šepetavc, J. Dedič, R. Tavčar
    Cosylab, Ljubljana, Slovenia
  • J. Gutleber
    CERN, Geneva, Switzerland
  • R. Moser
    EBG MedAustron, Wr. Neustadt, Austria
 
  MedAustron is an ion beam cancer therapy and research centre currently under construction in Wiener Neustadt, Austria. This facility features a synchrotron particle accelerator for light ions. Cosylab is working closely with MedAustron on the development of a power converter controller (PCC) for the 260 deployed converters. The majority are voltage sources that are regulated in real time via digital signal processor (DSP) boards. The in-house developed PCC operates the DSP boards remotely, via real-time fiber optic links. A single PCC will control up to 30 power converters that deliver power to magnets used for focusing and steering particle beams. The outputs of all PCCs must be synchronized within a time frame of at most 1 microsecond, which is achieved by integration with the timing system. This pulse-to-pulse modulation machine requires different waveforms for each beam generation cycle. Dead times between cycles must be kept low, so the PCC is reconfigured during beam generation. The system is based on a PXI platform from National Instruments running LabVIEW Real-Time. An in-house developed generic real-time optical link connects the PCCs to custom developed front-end devices. These FPGA-based hardware components facilitate integration with different types of power converters. All PCCs are integrated within the SIMATIC WinCC OA SCADA system, which coordinates and supervises their operation. This paper describes the overall system architecture, its main components, the challenges we faced and their technical solutions.  
poster icon Poster WEPMN016 [0.695 MB]  
 
WEPMN017 PCI Hardware Support in LIA-2 Control System hardware, Linux, interface, operation 916
 
  • D. Bolkhovityanov, P.B. Cheblakov
    BINP SB RAS, Novosibirsk, Russia
 
  LIA-2 control system* is built on cPCI crates with x86-compatible processor boards running Linux. Slow electronics is connected via CAN bus, while fast electronics (4 MHz and 200 MHz fast ADCs and 200 MHz timers) is implemented as cPCI/PMC modules. Several ways to drive PCI control electronics in Linux were examined; finally, a userspace driver approach was chosen. These drivers communicate with the hardware via a small kernel module, which provides access to the PCI BARs and to interrupt handling. This module was named USPCI (User-Space PCI access). This approach dramatically simplifies the creation of drivers, as opposed to kernel drivers, and provides high reliability, because only a tiny and thoroughly debugged piece of code runs in the kernel. The LIA-2 accelerator was successfully commissioned, and the chosen solution has proven adequate and very easy to use. Moreover, USPCI turned out to be a handy tool for examining and debugging PCI devices directly from the command line. In this paper the available approaches to working with PCI control hardware in Linux are considered, and the USPCI architecture is described.
* "LIA-2 Linear Induction Accelerator Control System", this conference
 
poster icon Poster WEPMN017 [0.954 MB]  
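The userspace-driver idea above (a minimal kernel component exposing PCI BARs, with all device logic in user space) can be illustrated with the stock Linux sysfs interface, which already lets a process map a BAR from user space. The sketch below is hypothetical illustration code, not the actual USPCI API; `decode_bar` follows the PCI specification's BAR flag-bit layout, and `read_reg32` assumes the register lies within one mapping granule.

```python
import mmap
import os
import struct

def decode_bar(raw):
    """Decode a raw 32-bit PCI Base Address Register value (per the PCI
    spec): bit 0 distinguishes I/O (1) from memory (0) BARs, and the low
    flag bits are masked off to recover the base address."""
    if raw & 0x1:
        return ("io", raw & ~0x3)   # I/O space BAR: bits 1:0 are flags
    return ("mem", raw & ~0xF)      # memory BAR: bits 3:0 are flags

def read_reg32(bdf, bar, offset):
    """Read one 32-bit device register from user space by mmapping the
    sysfs resource file of the given BAR (bdf is a PCI address such as
    '0000:03:00.0'). This mirrors the userspace-driver approach of the
    abstract; USPCI adds interrupt delivery on top of plain BAR access."""
    path = "/sys/bus/pci/devices/%s/resource%d" % (bdf, bar)
    fd = os.open(path, os.O_RDWR | os.O_SYNC)
    try:
        gran = mmap.ALLOCATIONGRANULARITY
        base = offset - (offset % gran)          # mmap offset must be aligned
        with mmap.mmap(fd, gran, offset=base) as m:
            return struct.unpack_from("<I", m, offset - base)[0]
    finally:
        os.close(fd)
```

Reading a status register then becomes a one-liner from any script, which is exactly what makes this style convenient for command-line debugging of PCI devices.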
 
WEPMN018 Performance Tests of the Standard FAIR Equipment Controller Prototype FPGA, timing, Ethernet, software 919
 
  • S. Rauch, R. Bär, W. Panschow, M. Thieme
    GSI, Darmstadt, Germany
 
  For the control system of the new FAIR accelerator facility a standard equipment controller, the Scalable Control Unit (SCU), is presently under development. First prototypes have already been tested in real applications. The controller combines an x86 ComExpress board and an Altera Arria II FPGA. Over a parallel bus interface called the SCU bus, up to 12 slave boards can be controlled. Communication between the CPU and the FPGA is done over a PCIe link. We discuss the real-time behaviour of the interaction between the Linux OS and the FPGA hardware. For the test, a Front-End Software Architecture (FESA) class, running under Linux, communicates with the PCIe bridge in the FPGA. Although we use PCIe only for single 32-bit wide accesses to the FPGA address space, the performance still appears sufficient. The tests showed an average response time to IRQs of 50 microseconds with a 1.6 GHz Intel Atom CPU. This includes the context change to the FESA userspace application and the reply back to the FPGA. Further topics are the bandwidth of the PCIe link for single/burst transfers and the performance of the SCU bus communication.  
 
WEPMN020 New Developments on Tore Supra Data Acquisition Units Linux, real-time, data-acquisition, target 922
 
  • F. Leroux, G. Caulier, L. Ducobu, M. Goniche
    Association EURATOM-CEA, St Paul Lez Durance, France
  • G. Antar
    American University of Beirut, Beirut, Lebanon
 
  The Tore Supra data acquisition system (DAS) was designed in the early 1980s and has evolved considerably since then. Three generations of data acquisition units still coexist: Multibus, VME and PCI bus systems. The second generation, the VME bus system running the LynxOS real-time operating system (OS), is diskless. The third generation, the PCI bus system, makes it possible to perform extensive data acquisition for infrared and visible video cameras that produce large amounts of data. Nevertheless, this third generation was until now equipped with a hard drive and a non-real-time operating system, Microsoft Windows. Diskless systems are a better solution for reliability and maintainability, as they share common resources like the kernel and the file system. Moreover, open source real-time OSes are now available, providing free and convenient solutions for DAS. As a result, it was decided to explore an alternative solution for the fourth generation, based on an open source OS and a diskless system. In 2010, Linux distributions for VME bus and PCI bus systems were evaluated and compared to LynxOS. Linux is now mature enough to be used for DAS, with pre-emptive and real-time features on Motorola PowerPC, x86 and x86 multi-core architectures. The results allowed a Linux version to be chosen for the VME and PC DAS platforms on Tore Supra. In 2011, the Tore Supra DAS dedicated software was ported to a Linux diskless PCI platform. The new generation was successfully tested during a real plasma experiment on one diagnostic. The new diagnostics for Tore Supra will be developed with this new setup.  
poster icon Poster WEPMN020 [0.399 MB]  
 
WEPMN022 LIA-2 Power Supply Control System interlocks, electron, experiment, network 926
 
  • A. Panov, P.A. Bak, D. Bolkhovityanov
    BINP SB RAS, Novosibirsk, Russia
 
  LIA-2 is an electron Linear Induction Accelerator designed and built by BINP for flash radiography. The inductors get power from 48 modulators, grouped by 6 in 8 racks. Each modulator includes 3 control devices, connected via an internal CAN bus to an embedded modulator controller, which runs the Keil RTX real-time OS. Each rack includes a cPCI crate equipped with an x86-compatible processor board running Linux*. The modulator controllers are connected to the cPCI crate via an external CAN bus. Additionally, a brief modulator status is displayed on a front indicator. The integration of control electronics into devices with high levels of electromagnetic interference is discussed, and the use of real-time OSes in such devices and the interaction between them is described.
*"LIA-2 Linear Induction Accelerator Control System", this conference
 
poster icon Poster WEPMN022 [5.035 MB]  
 
WEPMN023 The ATLAS Tile Calorimeter Detector Control System detector, monitoring, experiment, electronics 929
 
  • G. Ribeiro
    LIP, Lisboa, Portugal
  • G. Arabidze
    MSU, East Lansing, Michigan, USA
  • P. Lafarguette
    Université Blaise Pascal, Clermont-Ferrand, France
  • S. Nemecek
    Czech Republic Academy of Sciences, Institute of Physics, Prague, Czech Republic
 
  The main task of the ATLAS Tile calorimeter Detector Control System (DCS) is to enable the coherent and safe operation of the calorimeter. All actions initiated by the operator, as well as all errors, warnings and alarms concerning the hardware of the detector are handled by DCS. The Tile calorimeter DCS controls and monitors mainly the low voltage and high voltage power supply systems, but it is also interfaced with the infrastructure (cooling system and racks), the calibration systems, the data acquisition system, configuration and conditions databases and the detector safety system. The system has been operational since the beginning of LHC operation and has been extensively used in the operation of the detector. In the last months effort was directed to the implementation of automatic recovery of power supplies after trips. Current status, results and latest developments will be presented.  
poster icon Poster WEPMN023 [0.404 MB]  
 
WEPMN024 NSLS-II Beam Position Monitor Embedded Processor and Control System embedded, EPICS, Ethernet, FPGA 932
 
  • K. Ha, L.R. Dalesio, J.H. De Long, J. Mead, Y. Tian, K. Vetter
    BNL, Upton, New York, USA
 
  Funding: Work supported by DOE contract No: DE-AC02-98CH10886
NSLS-II is a 3 GeV third-generation light source that is currently under construction. A sub-micron Digital Beam Position Monitor (DBPM) system, comprising hardware electronics, an embedded software processor and an EPICS IOC, has been successfully developed and tested in the ALS storage ring and at BNL.
 
 
WEPMN025 A New Fast Triggerless Acquisition System For Large Detector Arrays detector, FPGA, real-time, experiment 935
 
  • P. Mutti, M. Jentschel, J. Ratel, F. Rey, E. Ruiz-Martinez, W. Urban
    ILL, Grenoble, France
 
  Presently a common trend in low and medium energy nuclear physics is to develop ever more complex detector systems forming multi-detector arrays. The main objective of such an elaborate set-up is to obtain comprehensive information about the products of all reactions. State-of-the-art γ-ray spectroscopy nowadays requires the use of large arrays of HPGe detectors, often coupled with anti-Compton active shielding to reduce the ambient background. In view of this complexity, the front-end electronics must provide precise information about energy, time and possibly pulse shape. The large multiplicity of the detection system requires the capability to process the multitude of signals from many detectors, fast processing and a very high throughput of more than 10⁶ data words/s. The possibility of handling such a complex system using traditional analogue electronics rapidly showed its limitations, due first of all to the non-negligible cost per channel and, moreover, to the signal degradation associated with a complex analogue path. Nowadays, digital pulse processing systems are available with performance, in terms of timing and energy resolution, equal to if not better than the corresponding analogue ones, for a fraction of the cost per channel. The presented system uses a combination of a 15-bit 100 MS/s digitizer with a PowerPC-based VME single board computer. Real-time processing algorithms have been developed to handle total event rates of more than 1 MHz, providing on-line display for single and coincidence events.  
poster icon Poster WEPMN025 [15.172 MB]  
 
WEPMN026 Evolution of the CERN Power Converter Function Generator/Controller for Operation in Fast Cycling Accelerators Ethernet, network, software, radiation 939
 
  • D.O. Calcoen, Q. King, P.F. Semanaz
    CERN, Geneva, Switzerland
 
  Power converters in the LHC are controlled by the second generation of an embedded computer known as a Function Generator/Controller (FGC2). Following the success of this control system, new power converter installations at CERN will be based around an evolution of the design - a third generation called FGC3. The FGC3 will initially be used in the PS Booster and Linac4. This paper compares the hardware of the two generations of FGC and details the decisions made during the design of the FGC3.  
poster icon Poster WEPMN026 [0.586 MB]  
 
WEPMN027 Fast Scalar Data Buffering Interface in Linux 2.6 Kernel Linux, interface, hardware, instrumentation 943
 
  • A. Homs
    ESRF, Grenoble, France
 
  Key instrumentation devices like counter/timers, analog-to-digital converters and encoders provide scalar data input. Many of them allow fast acquisitions, but do not provide hardware triggering or buffering mechanisms. A Linux 2.4 kernel driver called Hook was developed at the ESRF as a generic software-triggered buffering interface. This work presents the port of the ESRF Hook interface to the Linux 2.6 kernel. The interface distinguishes two independent functional groups: trigger event generators and data channels. Devices in the first group create software events, like hardware interrupts generated by timers or external signals. On each event, one or more device channels in the second group are read and stored in kernel buffers. The event generators and data channels to be read are fully configurable before each sequence. Designed for fast acquisitions, the Hook implementation is well adapted to multi-CPU systems, where the interrupt latency is notably reduced. On heavily loaded dual-core PCs running standard (non-real-time) Linux, data can be taken at 1 kHz without losing events. Additional features include full integration into the sysfs (/sys) virtual filesystem and hotplug device support.  
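The two-group model described above (event generators that fire, data channels that are sampled into buffers on each event) can be sketched in a few lines of user-space Python; the names below are hypothetical, and the real Hook driver implements this in the kernel for latency reasons:

```python
from collections import deque

class HookBuffer:
    """Minimal sketch of the event-generator / data-channel split:
    on each software trigger event, every configured channel is read
    and the readings are appended to a bounded FIFO, mimicking the
    kernel buffers of the real driver."""

    def __init__(self, channels, depth=1024):
        self.channels = channels         # mapping: name -> zero-arg read function
        self.fifo = deque(maxlen=depth)  # bounded buffer; oldest entries drop off

    def trigger(self):
        """Called by an event source (timer tick, external signal, ...)."""
        self.fifo.append({name: read() for name, read in self.channels.items()})
```

Sampling counters at 1 kHz, as in the abstract, would then amount to calling `trigger()` from a periodic timer event source, with readers draining `fifo` asynchronously.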
 
WEPMN030 Power Supply Control Interface for the Taiwan Photon Source power-supply, interface, Ethernet, quadrupole 950
 
  • C.Y. Wu, J. Chen, Y.-S. Cheng, P.C. Chiu, K.T. Hsu, K.H. Hu, C.H. Kuo, D. Lee, C.Y. Liao, K.-B. Liu
    NSRRC, Hsinchu, Taiwan
 
  The Taiwan Photon Source (TPS) is a latest-generation synchrotron light source. Stringent power supply specifications must be met to achieve the design goals of the TPS. The high-precision power supplies for the storage ring dipole, quadrupole and sextupole magnets, equipped with 20-, 18- and 16-bit DACs respectively, have Ethernet interfaces. The control interface includes basic functionality and some advanced features which are useful for performance monitoring and post-mortem diagnostics. Power supplies of these categories can be accessed by EPICS IOCs. The corrector power supplies' control interface is a specially designed embedded interface module which is mounted on the corrector power supply cages to achieve the required performance. The setting reference of the corrector power supply is generated by a 20-bit DAC and readback is done by a 24-bit ADC. The interface module has an embedded EPICS IOC for slow control. Fast setting ports are also supported by the internal FPGA for orbit feedback support.  
 
WEPMN032 Development of Pattern Awareness Unit (PAU) for the LCLS Beam Based Fast Feedback System feedback, timing, operation, software 954
 
  • K.H. Kim, S. Allison, D. Fairley, T.M. Himel, P. Krejcik, D. Rogind, E. Williams
    SLAC, Menlo Park, California, USA
 
  LCLS is now successfully operating at its design beam repetition rate of 120 Hz, but in order to ensure stable beam operation at this high rate we have developed a new timing pattern aware EPICS controller for beam line actuators. Actuators that are capable of responding at 120 Hz are controlled by the new Pattern Aware Unit (PAU) as part of the beam-based feedback system. The beam at the LCLS is synchronized to the 60 Hz AC power line phase and is subject to electrical noise which differs according to which of the six possible AC phases is chosen from the 3-phase site power line. Beam operation at 120 Hz interleaves two of these 60 Hz phases and the feedback must be able to apply independent corrections to the beam pulse according to which of the 60 Hz timing patterns the pulse is synchronized to. The PAU works together with the LCLS Event Timing system which broadcasts a timing pattern that uniquely identifies each pulse when it is measured and allows the feedback correction to be applied to subsequent pulses belonging to the same timing pattern, or time slot, as it is referred to at SLAC. At 120 Hz operation this effectively provides us with two independent, but interleaved feedback loops. Other beam programs at the SLAC facility such as LCLS-II and FACET will be pulsed on other time slots and the PAUs in those systems will respond to their appropriate timing patterns. This paper describes the details of the PAU development: real-time requirements and achievement, scalability, and consistency. The operational results will also be described.  
poster icon Poster WEPMN032 [0.430 MB]  
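The interleaving logic described in the WEPMN032 abstract, one feedback state per 60 Hz time slot with the broadcast timing pattern selecting the state for each pulse, can be sketched as below. This is a simplified integral-only loop with hypothetical names, not the actual PAU implementation:

```python
class InterleavedFeedback:
    """Sketch of time-slot-aware feedback: pulses synchronized to
    different 60 Hz AC phases ('time slots') keep fully independent
    correction states, even though they arrive interleaved at 120 Hz."""

    def __init__(self, gain=0.5):
        self.gain = gain
        self.correction = {}  # per-time-slot actuator setting

    def process_pulse(self, time_slot, error):
        """Update only the loop belonging to this pulse's time slot and
        return the correction to apply to the next pulse on that slot."""
        c = self.correction.get(time_slot, 0.0)
        c -= self.gain * error          # integral-style accumulation
        self.correction[time_slot] = c
        return c
```

With two active slots this behaves as two independent but interleaved feedback loops, which is the effect the abstract describes for 120 Hz operation.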
 
WEPMN034 YAMS: a Stepper Motor Controller for the FERMI@Elettra Free Electron Laser power-supply, software, interface, TANGO 958
 
  • A. Abrami, M. De Marco, M. Lonza, D. Vittor
    ELETTRA, Basovizza, Italy
 
  Funding: The work was supported in part by the Italian Ministry of University and Research under grants FIRB-RBAP045JF2 and FIRB-RBAP06AWK3
New projects like FERMI@Elettra demand standardization of systems in order to cut development and maintenance costs. The various motion control applications foreseen in this project required a specific controller able to flexibly adapt to any need while maintaining a common interface to the control system, so as to minimize software development efforts. These reasons led us to design and build "Yet Another Motor Subrack", YAMS, a 3U chassis containing a commercial stepper motor controller, up to eight motor drivers and all the necessary auxiliary systems. The motors can be controlled locally by means of an operator panel or remotely through an Ethernet interface and a dedicated Tango device server. The paper describes the details of the project and the deployment issues.
 
poster icon Poster WEPMN034 [4.274 MB]  
 
WEPMN036 Comparative Analysis of EPICS IOC and MARTe for the Development of a Hard Real-Time Control Application EPICS, real-time, framework, software 961
 
  • A. Barbalace, A. Luchetta, G. Manduchi, C. Taliercio
    Consorzio RFX, Associazione Euratom-ENEA sulla Fusione, Padova, Italy
  • B. Carvalho, D.F. Valcárcel
    IPFN, Lisbon, Portugal
 
  EPICS is used worldwide to build distributed control systems for scientific experiments. The EPICS software suite is based around the Channel Access (CA) network protocol, which allows the communication of different EPICS clients and servers in a distributed architecture. Servers are called Input/Output Controllers (IOCs) and perform real-world I/O or local control tasks. EPICS IOCs were originally designed for VxWorks to meet the demanding real-time requirements of control algorithms and have lately been ported to different operating systems. The MARTe framework has recently been adopted to develop an increasing number of hard real-time systems in different fusion experiments. MARTe is a software library that allows the rapid and modular development of stand-alone hard real-time control applications on different operating systems. MARTe was created to be portable and over recent years it has evolved to exploit multicore architectures. In this paper we review several implementation differences between EPICS IOC and MARTe. We dissect their internal data structures and synchronization mechanisms to understand what happens behind the scenes. Differences in the component-based approach and in the concurrent model of computation of EPICS IOC and MARTe are explained. Such differences lead to distinct time models in the computational blocks and distinct real-time capabilities of the two frameworks that a developer must be aware of.  
poster icon Poster WEPMN036 [2.406 MB]  
 
WEPMN038 A Combined On-line Acoustic Flowmeter and Fluorocarbon Coolant Mixture Analyzer for the ATLAS Silicon Tracker software, detector, database, real-time 969
 
  • A. Bitadze, R.L. Bates
    University of Glasgow, Glasgow, United Kingdom
  • M. Battistin, S. Berry, P. Bonneau, J. Botelho-Direito, B. Di Girolamo, J. Godlewski, E. Perez-Rodriguez, L. Zwalinski
    CERN, Geneva, Switzerland
  • N. Bousson, G.D. Hallewell, M. Mathieu, A. Rozanov
    CNRS/CPT, Marseille, France
  • R. Boyd
    University of Oklahoma, Norman, Oklahoma, USA
  • M. Doubek, V. Vacek, M. Vitek
    Czech Technical University in Prague, Faculty of Mechanical Engineering, Prague, Czech Republic
  • K. Egorov
    Indiana University, Bloomington, Indiana, USA
  • S. Katunin
    PNPI, Gatchina, Leningrad District, Russia
  • S. McMahon
    STFC/RAL/ASTeC, Chilton, Didcot, Oxon, United Kingdom
  • K. Nagai
    University of Tsukuba, Graduate School of Pure and Applied Sciences,, Tsukuba, Ibaraki, Japan
 
  An upgrade to the ATLAS silicon tracker cooling control system requires a change from C3F8 (molecular weight 188) coolant to a blend with 10-30% C2F6 (mw 138) to reduce the evaporation temperature and better protect the silicon from cumulative radiation damage at the LHC. Central to this upgrade, an acoustic instrument for the measurement of C3F8/C2F6 mixture and flow has been developed. The sound velocity in a binary gas mixture at known temperature and pressure depends on the component concentrations. 50 kHz sound bursts are simultaneously sent via ultrasonic transceivers parallel and anti-parallel to the gas flow. A 20 MHz transit clock is started synchronously with burst transmission and stopped by over-threshold received sound pulses. Transit times in both directions, together with temperature and pressure, enter a FIFO memory 100 times/second. The gas mixture is continuously analyzed using PVSS-II, by comparison of the average sound velocity in both directions with stored velocity-mixture look-up tables. Flow is calculated from the difference in sound velocity between the two directions. In future versions these calculations may be made in a micro-controller. The instrument has demonstrated a resolution of <0.3% for C3F8/C2F6 mixtures with ~20% C2F6, with a simultaneous flow resolution of ~0.1% of F.S. Higher precision is possible: a sensitivity of ~0.005% to leaks of C3F8 into the ATLAS pixel detector nitrogen envelope (mw difference 156) has been seen. The instrument has many applications, including the analysis of hydrocarbons, mixtures for semiconductor manufacture and anesthesia.  
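The measurement principle above, mixture composition from the average sound velocity and flow from the up/downstream difference, follows the standard transit-time relations sketched below. This is a simplified model; the instrument's actual geometry, calibration and velocity-mixture look-up tables are not reproduced here.

```python
def sound_velocity_and_flow(path_length, t_down, t_up):
    """Standard transit-time ultrasonic relations for a path of length L
    along the flow, with transit times measured parallel (t_down) and
    anti-parallel (t_up) to the gas flow:
        c = (L/2) * (1/t_down + 1/t_up)   # sound velocity -> mixture
        v = (L/2) * (1/t_down - 1/t_up)   # flow velocity along the path
    """
    c = 0.5 * path_length * (1.0 / t_down + 1.0 / t_up)
    v = 0.5 * path_length * (1.0 / t_down - 1.0 / t_up)
    return c, v
```

Because c and v come from the sum and difference of the same two measured times, the mixture analysis and the flow measurement are obtained simultaneously from one burst pair, which is what the instrument exploits.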
 
WEPMS001 Interconnection Test Framework for the CMS Level-1 Trigger System framework, operation, hardware, distributed 973
 
  • J. Hammer
    CERN, Geneva, Switzerland
  • M. Magrans de Abril
    UW-Madison/PD, Madison, Wisconsin, USA
  • C.-E. Wulz
    HEPHY, Wien, Austria
 
  The Level-1 Trigger Control and Monitoring System is a software package designed to configure, monitor and test the Level-1 Trigger System of the Compact Muon Solenoid (CMS) experiment at CERN's Large Hadron Collider. It is a large, distributed system that runs on over 50 PCs and controls about 200 hardware units. The Interconnection Test Framework (ITF), a generic and highly flexible framework for creating and executing hardware tests within the Level-1 Trigger environment, is presented. The framework is designed to automate the testing of the 13 major subsystems, interconnected with more than 1000 links. Features include a web interface to create and execute tests, modeling using finite state machines, dependency management, automatic configuration, and loops. Furthermore, the ITF will replace the existing heterogeneous testing procedures and help reduce the maintenance and complexity of operation tasks. Finally, an example of operational use of the Interconnection Test Framework is presented. This case study proves the concept and describes the customization process and its performance characteristics.  
poster icon Poster WEPMS001 [0.576 MB]  
 
WEPMS003 A Testbed for Validating the LHC Controls System Core Before Deployment software, hardware, operation, timing 977
 
  • J. Nguyen Xuan, V. Baggiolini
    CERN, Geneva, Switzerland
 
  Since the start-up of the LHC, it has been crucial to carefully test core controls components before deploying them operationally. The Testbed* of the CERN accelerator controls group was developed for this purpose. It contains different hardware (PPC, i386) running different operating systems (Linux and LynxOS) and core software components running on front-ends, communication middleware and client libraries. The Testbed first executes integration tests to verify that the components delivered by individual teams interoperate, and then system tests, which verify high-level, end-user functionality. It also verifies that different versions of components are compatible, which is vital, because not all parts of the operational LHC control system can be upgraded simultaneously. In addition, the Testbed can be used for performance and stress tests. Internally, the Testbed is driven by Bamboo, a Continuous Integration server, which automatically builds and deploys new software versions into the Testbed environment and executes the tests continuously to prevent software regression. Whenever a test fails, an e-mail is sent to the appropriate people. The Testbed is part of the official controls development process, wherein new releases of the controls system have to be validated before being deployed operationally. Integration and system tests are an important complement to the unit tests previously executed by the teams. The Testbed has already caught several bugs that were not discovered by the unit tests of the individual components.
* http://cern.ch/jnguyenx/ControlsTestBed.html
 
poster icon Poster WEPMS003 [0.111 MB]  
 
WEPMS005 Automated Coverage Tester for the Oracle Archiver of WinCC OA software, status, operation, database 981
 
  • A. Voitier, P. Golonka, M. Gonzalez-Berges
    CERN, Geneva, Switzerland
 
  A large number of control systems at CERN are built with the commercial SCADA tool WinCC OA. They cover projects in the experiments, accelerators and infrastructure. An important component is the Oracle archiver used for long-term storage of process data (events) and alarms. The archived data provide feedback to the operators and experts about how the system was behaving at a particular moment in the past. In addition, a subset of these data is used for offline physics analysis. The consistency of the archived data has to be ensured from writing to reading, as well as throughout updates of the control systems. The complexity of the archiving subsystem comes from the multiplicity of data types, the required performance and other factors such as the operating system, environment variables or the versions of the different software components. An automatic tester has therefore been implemented to systematically execute test scenarios under different conditions. The tests are based on scripts which are automatically generated from templates, so they can cover a wide range of software contexts. The tester has been fully written in the same software environment as the targeted SCADA system. The current implementation is able to handle over 300 test cases, both for events and alarms. It has enabled issues to be reported to the provider of WinCC OA. The template mechanism allows sufficient flexibility to adapt the suite of tests to future needs. The developed tools are generic enough to be used to test other parts of the control systems.  
poster icon Poster WEPMS005 [0.279 MB]  
 
WEPMS007 Backward Compatibility as a Key Measure for Smooth Upgrades to the LHC Control System software, operation, feedback, Linux 989
 
  • V. Baggiolini, M. Arruat, D. Csikos, R. Gorbonosov, P. Tarasenko, Z. Zaharieva
    CERN, Geneva, Switzerland
 
  Now that the LHC is operational, a big challenge is to upgrade the control system smoothly, with minimal downtime and interruptions. Backward compatibility (BC) is a key measure to achieve this: a subsystem with a stable API can be upgraded smoothly. As part of a broader Quality Assurance effort, the CERN Accelerator Controls group explored methods and tools supporting BC. We investigated two aspects in particular: (1) "incoming dependencies", to know which part of an API is really used by clients, and (2) BC validation, to check that a modification is really backward compatible. We used this approach for Java APIs and for FESA devices (which expose an API in the form of device/property sets). For Java APIs, we gather dependency information by regularly running byte-code analysis on all 1000 JAR files that belong to the control system and finding incoming dependencies (method calls and inheritance). An Eclipse plug-in we developed shows these incoming dependencies to the developer. If an API method is used by many clients, it has to remain backward compatible. On the other hand, if a method is not used, it can be freely modified. To validate BC, we are exploring the official Eclipse tools (PDE API tools), and others that check BC without the need for invasive technology such as OSGi. For FESA devices, we instrumented key components of our controls system to know which devices and properties are in use. This information is collected in the Controls Database and is used (amongst others) by the FESA design tools in order to prevent the FESA class developer from breaking BC.  
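Combining the two aspects above, knowing which API members clients actually use and checking what a new release removes, reduces to a simple set computation. The sketch below uses a hypothetical flat signature-string model rather than real byte-code analysis, but captures the decision rule: a removal is only a BC violation if some client depends on it.

```python
def bc_violations(old_api, new_api, used_by_clients):
    """Flag backward-compatibility violations between two API snapshots.

    old_api, new_api    : sets of method signature strings exposed by a release
    used_by_clients     : signatures observed as incoming dependencies
    A signature that disappears but is unused is a harmless cleanup;
    one that disappears while clients call it breaks BC."""
    removed = old_api - new_api
    return sorted(removed & used_by_clients)
```

The same rule explains the workflow in the abstract: the dependency scan populates `used_by_clients`, and the validation step compares successive API snapshots against it.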
 
WEPMS013 Timing System of the Taiwan Photon Source timing, injection, gun, EPICS 999
 
  • C.Y. Wu, Y.-T. Chang, J. Chen, Y.-S. Cheng, P.C. Chiu, K.T. Hsu, K.H. Hu, C.H. Kuo, D. Lee, C.Y. Liao
    NSRRC, Hsinchu, Taiwan
 
  The timing system of the Taiwan Photon Source provides synchronization for the electron gun, the linac modulators, the pulsed magnet power supplies, the booster power supply ramp, bucket addressing of the storage ring, diagnostic equipment, and the beamline gating signal for top-up injection. The system is based on an event distribution system that broadcasts timing events over an optical fiber network and decodes and processes them at the timing event receivers. The system supports uplink functionality, which will be used by the fast interlock system to distribute signals like beam dump and post-mortem triggers with a 10 μs response time. The hardware of the event system is a new design based on the 6U CompactPCI form factor. This paper describes the technical solution, the functionality of the system and some applications based on the timing system.  
 
WEPMS016 Network on Chip Master Control Board for Neutron's Acquisition FPGA, neutron, interface, network 1006
 
  • E. Ruiz-Martinez, T. Mary, P. Mutti, J. Ratel, F. Rey
    ILL, Grenoble, France
 
  In the neutron scattering instruments at the Institut Laue-Langevin, one of the main challenges for acquisition control is to generate suitable signalling for the different modes of neutron acquisition. Inappropriate management could cause loss of information during the course of the experiments and in the subsequent data analysis. It is necessary to define a central element that provides synchronization to the rest of the units. The backbone of the proposed acquisition control system is the master acquisition board. This main board is designed to gather together the modes of neutron acquisition used in the facility and make them common to all the instruments in a simple, modular and open way, giving the possibility of adding new features. The complete system also includes a display board and n histogramming modules connected to the neutron detectors. The master board consists of a VME64x configurable high-density I/O connection carrier board based on the latest Xilinx Virtex-6T FPGA. The internal architecture of the FPGA follows a Network on Chip (NoC) approach: it represents a switch able to interconnect efficiently the several resources available on the board (PCI Express, VME64x Master/Slave, DDR3 controllers and the user area). The core of the global signal synchronization is fully implemented in the FPGA; the board has a completely user-configurable I/O front-end to collect external signals, process them and distribute the synchronization control via the VME bus to the other modules involved in the acquisition.  
poster icon Poster WEPMS016 [7.974 MB]  
 
WEPMS019 Measuring Angle with Pico Meter Resolution electronics, FPGA, laser, ion 1014
 
  • P. Mutti, M. Jentschel, T. Mary, F. Rey
    ILL, Grenoble, France
  • G. Mana, E. Massa
    INRIM, Turin, Italy
 
  The kilogram is the only remaining fundamental unit in the SI system defined in terms of a material artefact (a PtIr cylinder kept in Paris). One of the major tasks of modern metrology is therefore the redefinition of the kilogram on the basis of a natural quantity or a fundamental constant. Any redefinition of the kilogram must, however, approach a 10⁻⁸ relative accuracy in its practical realization. A joint research project amongst the major metrology institutes in Europe has proposed redefining the kilogram based on the mass of the ¹²C atom. The goal can be achieved by first counting the number of atoms in a macroscopic weighable object and then weighing the atom by measuring its Compton frequency νC. It is in this second step that the ILL plays a fundamental role with GAMS, the high-resolution γ-ray spectrometer. The energies of the γ-rays emitted in the decay from the capture state to the ground state of the daughter nucleus after a neutron capture reaction can be measured with high precision. To meet the demanding angle measurement accuracy, a new optical interferometer has been developed with 10 picorad resolution, linearity over a total measurement range of 15° and a high stability of about 0.1 nrad/hour. To drive the interferometer, new FPGA-based electronics for heterodyne frequency generation and for real-time phase measurement and axis control have been realized. The basic concepts of the FPGA implementation are reviewed.  
poster icon Poster WEPMS019 [6.051 MB]  
 
WEPMS020 NSLS-II Booster Power Supplies Control booster, operation, injection, extraction 1018
 
  • P.B. Cheblakov, S.E. Karnaev, S.S. Serednyakov
    BINP SB RAS, Novosibirsk, Russia
  • W. Louie, Y. Tian
    BNL, Upton, Long Island, New York, USA
 
  The NSLS-II booster power supplies (PSs) [1] are divided into two groups: ramping PSs, which carry the beam during the booster ramp from 200 MeV up to 3 GeV over a 300 ms interval, and pulsed PSs, which provide beam injection from the linac and extraction to the storage ring. A special set of devices was developed at BNL for controlling the NSLS-II magnet system PSs: the Power Supply Controller (PSC) and the Power Supply Interface (PSI). The PSI has one or two precision 18-bit DACs, nine ADC channels for each DAC, and digital inputs/outputs. It is capable of detecting the status-change sequence of the digital inputs with 10 ns resolution. The PSI is placed close to the current regulators and is connected to the PSC via a 50 Mbps fiber-optic data link. The PSC communicates with an EPICS IOC through a 100 Mbps Ethernet port. The main functions of the IOC include ramp curve upload, ADC waveform data download, and control of various process variables. The 256 Mb DDR2 memory on the PSC provides storage for up to 16 ramping tables for both DACs and a 20-second waveform recorder for all ADC channels. The 100 Mbps Ethernet port enables real-time display of 4 ADC waveforms. This paper describes the NSLS-II booster PS control project. Characteristic features of the control of the ramping magnets and of the pulsed magnets in the double-injection mode of operation are considered. First results from the PS test stands are presented.
[1] Y. Tian, W. Louie, J. Ricciardelli, L.R. Dalesio, G. Ganetis, "Power Supply Control System of NSLS-II", ICALEPCS2009, Japan
 
poster icon Poster WEPMS020 [1.818 MB]  
 
WEPMS022 The Controller Design for Kicker Magnet Adjustment Mechanism in SSRF software, feedback, kicker, injection 1021
 
  • R. Wang, R. Chen, Z.H. Chen, M. Gu
    SINAP, Shanghai, People's Republic of China
 
  The kicker magnet adjustment mechanism controller at SSRF improves injection efficiency by adjusting the magnet position in real time, especially in top-up mode. The controller mainly consists of a Programmable Logic Controller (PLC), stepper motors, reducers, worm gears and the mechanism itself. The PLC controls the stepper motors to adjust the azimuth of the magnet, monitors and regulates the magnet with an inclinometer sensor, and also monitors the interlock. In addition, the controller provides local and remote working modes. This paper mainly introduces the related hardware and software designs for this device.  
poster icon Poster WEPMS022 [0.173 MB]  
 
WEPMS024 ALBA High Voltage Splitter - Power Distribution to Ion Pumps ion, high-voltage, vacuum, Ethernet 1028
 
  • J.J. Jamroz, E. Al-dmour, D.B. Beltrán, J. Klora, R. Martin, O. Matilla, S. Rubio-Manrique
    CELLS-ALBA Synchrotron, Cerdanyola del Vallès, Spain
 
  The High Voltage Splitter (HVS) is a device designed at ALBA that distributes high voltage (HV, up to +7 kV) from one ion pump controller to up to eight ion pumps. Thanks to it, the total number of high voltage power supplies needed in ALBA's vacuum installation has decreased significantly. The current drawn by each splitter channel is measured independently over a range from 10 nA up to 10 mA with 5% accuracy; these measurements are the basis for vacuum pressure calculations. The current-pressure relation depends mostly on the ion pump type, so different tools providing full calibration flexibility have been implemented. Splitter settings, status and recorded data are accessible over a 10/100Base-T Ethernet network; nonetheless, local (manual) control was implemented, mostly for service purposes. The device also supports additional functions such as an HV cable interlock, a pressure interlock output cooperating with the facility's Equipment Protection System (EPS), programmable pressure warnings/alarms and an automatic calibration process based on an external current source. This paper describes the project, functionality, implementation, installation and operation as part of the vacuum system at ALBA.  
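
To first order, the pressure calculation mentioned above divides the measured channel current by a pump-type-dependent sensitivity. A minimal sketch assuming that simple linear model; the sensitivity value is invented, and the real HVS calibration tools support fuller per-pump calibration curves:

```python
def pressure_from_current(i_amps, sensitivity_a_per_mbar):
    """First-order ion-pump pressure estimate: P = I / S.

    The sensitivity S (A/mbar) depends on the pump type and HV setting;
    the value used in the example call below is illustrative only.
    """
    if not (10e-9 <= i_amps <= 10e-3):
        raise ValueError("outside the HVS measurement range (10 nA - 10 mA)")
    return i_amps / sensitivity_a_per_mbar

p_mbar = pressure_from_current(1e-6, sensitivity_a_per_mbar=10.0)  # 1 uA of pump current
```

The range guard mirrors the 10 nA - 10 mA measurement window of the splitter channels.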
poster icon Poster WEPMS024 [3.734 MB]  
 
WEPMS025 Low Current Measurements at ALBA data-acquisition, diagnostics, TANGO, Ethernet 1032
 
  • J. Lidón-Simon, D.F.C. Fernández-Carreiras, J.V. Gigante, J.J. Jamroz, J. Klora, O. Matilla
    CELLS-ALBA Synchrotron, Cerdanyola del Vallès, Spain
 
  High-accuracy low-current readout is a widely demanded technique in 3rd generation synchrotrons. Whether reading from scintillation-excited large-area photodiodes for beam position measurement, or from gold meshes or metallic coated surfaces in drain-current based intensity monitors, low-current measurement devices are a ubiquitous need both for diagnostics and for data acquisition in today's photon labs. In order to tackle the problem of synchronously measuring sources of different nature and magnitude, while remaining flexible, ALBA has developed a four-channel electrometer with independent channels. It is based on transimpedance amplifiers and integrates high-resolution ADCs and a 10/100Base-T Ethernet communication port. Each channel has independently configurable range, offset and low-pass filter cut-off frequency settings, and the main unit has external I/O to synchronize the data acquisition with the rest of the control system.  
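
At its core, each transimpedance channel's readout converts an ADC code to a voltage and divides by the feedback resistance. A sketch under assumed parameters; the resolution, reference voltage and transimpedance gain below are illustrative, not the actual ALBA electrometer values:

```python
def adc_to_current(code, n_bits=18, vref=10.0, r_transimpedance=1e9, offset_a=0.0):
    """Convert a raw ADC code to an input current for a transimpedance channel.

    All parameter defaults are assumptions for this example; a real channel
    would apply its calibrated range, offset and gain settings.
    """
    volts = code / (2**n_bits - 1) * vref      # ADC code -> voltage at the amplifier output
    return volts / r_transimpedance - offset_a  # voltage -> input current (I = V / R)

i_full_scale = adc_to_current(2**18 - 1)  # full-scale code with these assumed values
```

With a 1 GΩ feedback resistor and a 10 V range, full scale corresponds to 10 nA, which is why per-channel configurable ranges matter for spanning nA to mA signals.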
poster icon Poster WEPMS025 [0.797 MB]  
 
WEPMS027 The RF Control System of the SSRF 150MeV Linac interface, linac, EPICS, Ethernet 1039
 
  • S.M. Hu, J.G. Ding, G.-Y. Jiang, L.R. Shen, M.H. Zhao, S.P. Zhong
    SINAP, Shanghai, People's Republic of China
 
  The Shanghai Synchrotron Radiation Facility (SSRF) uses a 150 MeV linear electron accelerator as its injector; the RF system of this linac consists of many discrete devices. The control system is mainly composed of a VME controller and a home-made signal conditioner with DC power supplies. The uniform signal conditioner serves as a hardware interface between the controller and the RF components, while the DC power supplies drive the mechanical phase shifters. The control software is based on the EPICS toolkit. Device drivers and the related runtime database for the VME modules were developed, and the operator interface was implemented with EDM.  
poster icon Poster WEPMS027 [0.702 MB]  
 
WEPMU001 Temperature Measurement System of Novosibirsk Free Electron Laser FEL, vacuum, operation, microtron 1044
 
  • S.S. Serednyakov, B.A. Gudkov, V.R. Kozak, E.A. Kuper, P.A. Selivanov, S.V. Tararyshkin
    BINP SB RAS, Novosibirsk, Russia
 
  This paper describes the temperature-monitoring system of the Novosibirsk FEL. The main task of this system is to prevent the FEL from overheating and its individual components from being damaged. The system accumulates information from a large number of temperature sensors installed on different parts of the FEL facility, which allows measuring the temperature of the vacuum chamber, the cooling water, and the windings of the magnetic elements. Since the architecture of this system allows processing information from other kinds of sensors as well, it is also used to measure, for instance, vacuum parameters and some parameters of the cooling water. The software part of this system is integrated into the FEL control system, so readings from all sensors are recorded to the database every 30 seconds.  
poster icon Poster WEPMU001 [0.484 MB]  
 
WEPMU002 Testing Digital Electronic Protection Systems hardware, LabView, software, FPGA 1047
 
  • A. Garcia Muñoz, S. Gabourin
    CERN, Geneva, Switzerland
 
  The Safe Machine Parameters Controller (SMPC) ensures the correct configuration of the LHC machine protection system and that safe injection conditions are maintained throughout the filling of the LHC machine. The SMPC receives information in real time from measurement electronics installed throughout the LHC and SPS accelerators, determines the state of the machine, and informs the SPS and LHC machine protection systems of these conditions. This paper outlines the core concepts and realization of the SMPC test-bench, based on a VME crate and a LabVIEW program. Its main goal is to ensure the correct functioning of the SMPC for the protection of the CERN accelerator complex. To achieve this, the tester has been built to replicate the machine environment and operation, so that the chassis under test is completely exercised. The complexity of the task increases with the number of input combinations, which in the case of the SMPC is in excess of 2³⁶⁴. This paper also outlines the benefits and weaknesses of developing a test suite independently of the hardware being tested, using the "V" approach.  
poster icon Poster WEPMU002 [0.763 MB]  
 
WEPMU003 The Diamond Machine Protection System interlocks, vacuum, interface, photon 1051
 
  • M.T. Heron, Y.S. Chernousko, P. Hamadyk, S.C. Lay, N. Rotolo
    Diamond, Oxfordshire, United Kingdom
 
  Funding: Diamond Light Source LTD
The Diamond Light Source Machine Protection System manages the hazards from high power photon beams and other hazards to ensure equipment protection on the booster synchrotron and storage ring. The system has a shutdown requirement of under 1 ms on a beam mis-steer and has to manage in excess of a thousand interlocks. This is realised using a combination of bespoke hardware and programmable logic controllers. The structure of the Machine Protection System is described, together with operational experience and developments to provide post-mortem functionality.
 
poster icon Poster WEPMU003 [0.694 MB]  
 
WEPMU005 Personnel Protection, Equipment Protection and Fast Interlock Systems: Three Different Technologies to Provide Protection at Three Different Levels radiation, linac, network, interlocks 1055
 
  • D.F.C. Fernández-Carreiras, D.B. Beltrán, J. Klora, O. Matilla, J. Moldes, R. Montaño, M. Niegowski, R. Ranz, A. Rubio, S. Rubio-Manrique
    CELLS-ALBA Synchrotron, Cerdanyola del Vallès, Spain
 
  The Personnel Safety System is based on PILZ PLCs, SIL3-compatible following the IEC 61508 standard. It is independent from the other subsystems and relies on a dedicated certification, first by PILZ and then by TÜV. The Equipment Protection System uses B&R hardware and comprises more than 50 PLCs and more than 100 distributed I/O modules installed inside the tunnel. The CPUs of the PLCs are interconnected by a deterministic network and supervise more than 7000 signals. Each beamline has an independent system. The fast interlocks use the bidirectional fibers of the MRF timing system to distribute interlocks in the microsecond range. Events are distributed by fiber optics to synchronize more than 280 elements.  
poster icon Poster WEPMU005 [32.473 MB]  
 
WEPMU007 Securing a Control System: Experiences from ISO 27001 Implementation software, EPICS, operation, network 1062
 
  • V. Vuppala, K.D. Davidson, J. Kusler, J.J. Vincent
    NSCL, East Lansing, Michigan, USA
 
  Recent incidents have emphasized the importance of security and operational continuity for achieving the quality objectives of an organization, and the safety of its personnel and machines. However, security and disaster recovery are either completely ignored or given a low priority during the design and development of an accelerator control system, the underlying technologies, and the overlaid applications. This leads to an operational facility that is easy to breach, and difficult to recover. Retrofitting security into the control system becomes much more difficult during operations. In this paper we describe our experiences in achieving ISO 27001 compliance for NSCL's control system. We illustrate problems faced with securing low-level controls, infrastructure, and applications. We also provide guidelines to address the security and disaster recovery issues upfront during the development phase.  
poster icon Poster WEPMU007 [1.304 MB]  
 
WEPMU008 Access Safety Systems – New Concepts from the LHC Experience operation, injection, site, hardware 1066
 
  • T. Ladzinski, Ch. Delamare, S. Di Luca, T. Hakulinen, L. Hammouti, F. Havart, J.-F. Juget, P. Ninin, R. Nunes, T.R. Riesco, E. Sanchez-Corral Mena, F. Valentini
    CERN, Geneva, Switzerland
 
  The LHC Access Safety System has introduced a number of new concepts into the domain of personnel protection at CERN. These can be grouped into several categories: organisational, architectural and concerning the end-user experience. By anchoring the project on the solid foundations of the IEC 61508/61511 methodology, the CERN team and its contractors managed to design, develop, test and commission a SIL3 safety system on time. The system uses a successful combination of the latest Siemens redundant safety programmable logic controllers with a traditional hardwired relay logic loop. The external envelope barriers used in the LHC include personnel and material access devices, which are interlocked door-booths introducing increased automation of individual access control, thus removing the strain from the operators. These devices ensure the inviolability of the controlled zones by users not holding the required credentials; to this end they are equipped with personnel presence detectors, and the access control includes a state-of-the-art biometric check. Building on the LHC experience, new projects targeting the refurbishment of the existing access safety infrastructure in the injector chain have started. This paper summarises the new concepts introduced in the LHC access control and safety systems, discusses the experience gained and outlines the main guiding principles for renewing the personnel protection systems of the LHC injector chain in a homogeneous manner.  
poster icon Poster WEPMU008 [1.039 MB]  
 
WEPMU009 The Laser MégaJoule Facility: Personnel Security and Safety Interlocks laser, interlocks, GUI, operation 1070
 
  • J.-C. Chapuis, J.P.A. Arnoul, A. Hurst, M.G. Manson
    CEA, Le Barp, France
 
  The French CEA (Commissariat à l'Énergie Atomique) is currently building the LMJ (Laser MégaJoule) at the CEA CESTA laboratory near Bordeaux. The LMJ is designed to deliver about 1.4 MJ of 0.35 μm light to targets for high energy density physics experiments. Such an installation entails specific risks related to the presence of intense laser beams and high-voltage laser power amplifiers. Furthermore, the thermonuclear fusion reactions induced by the experiments also produce various radiation and neutron bursts and activate some materials in the chamber environment. Both classes of risk could be lethal. This paper discusses the SSP (personnel safety system), designed to prevent accidents and protect personnel working in the LMJ. To achieve the safety level imposed by labor law and by the French Safety Authority, the system consists of two independent safety barriers based on different technologies, whose combined effect reduces the occurrence probability of all accidental scenarios identified during the risk analysis to an insignificant level.  
 
WEPMU013 Development of a Machine Protection System for the Superconducting Beam Test Facility at FERMILAB laser, operation, status, FPGA 1084
 
  • L.R. Carmichael, M.D. Church, R. Neswold, A. Warner
    Fermilab, Batavia, USA
 
  Funding: Operated by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the United States Department of Energy.
Fermilab’s Superconducting RF Beam Test Facility, currently under construction, will produce electron beams capable of damaging the acceleration structures and the beam line vacuum chambers in the event of an aberrant accelerator pulse. The accelerator is being designed with the capability to operate with up to 3000 bunches per macro-pulse, a 5 Hz repetition rate and 1.5 GeV beam energy. It will be able to sustain an average beam power of 72 kW at a bunch charge of 3.2 nC. Operation at full intensity will deposit enough energy in niobium to approach its melting point of 2500 °C. In the early phase, with only 3 cryomodules installed, the facility will be capable of generating electron beam energies of 810 MeV and an average beam power approaching 40 kW. In either case a robust Machine Protection System (MPS) is required to mitigate the effects of such large damage potential. This paper describes the MPS being developed, the system requirements and the controls issues under consideration.
 
poster icon Poster WEPMU013 [0.755 MB]  
 
WEPMU016 Pre-Operation, During Operation and Post-Operational Verification of Protection Systems operation, injection, software, database 1090
 
  • I. Romera, M. Audrain
    CERN, Geneva, Switzerland
 
  This paper provides an overview of the software checks performed on the Beam Interlock System to ensure that the system is functioning to specification. Critical protection functions are implemented in hardware, but software tools play an important role in guaranteeing the correct configuration and operation of the system during all phases of operation. This paper describes the tests carried out before, during and after operation; if the integrity of the protection system is not assured, subsequent injections of beam into the LHC are inhibited.  
 
WEPMU017 Safety Control System and its Interface to EPICS for the Off-Line Front-End of the SPES Project EPICS, target, status, interface 1093
 
  • J.A. Vásquez, A. Andrighetto, G. Bassato, L. Costa, M.G. Giacchini
    INFN/LNL, Legnaro (PD), Italy
  • M. Bertocco
    UNIPD, Padova (PD), Italy
 
  The SPES off-line front-end apparatus involves a number of subsystems and procedures that are potentially dangerous both for human operators and for the equipment. The high-voltage power supply, the ion source complex power supplies, the target chamber handling systems and the laser source are some examples of these subsystems. For that reason, a safety control system has been developed. It is based on safety modules of the Schneider Electric Preventa family, which control the power supply of critical subsystems in combination with safety detectors that monitor critical variables. A Programmable Logic Controller (PLC), model BMXP342020 from the Schneider Electric Modicon M340 family, is used to monitor the status of the system as well as to control the sequence of some operations automatically. A touch screen, model XBTGT5330 from the Schneider Electric Magelis family, is used as the Human Machine Interface (HMI) and communicates with the PLC using MODBUS-TCP. Additionally, an interface to the EPICS control network was developed using a home-made MODBUS-TCP EPICS driver, in order to integrate the safety system into the control system of the front-end and to present its status to the users on the main control panel.  
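
MODBUS-TCP is a simple framed protocol, which is what makes a home-made EPICS driver feasible. As an illustration, the sketch below builds a standard "Read Holding Registers" request ADU; the register address and count are arbitrary, and no claim is made about the actual register map of the SPES PLC:

```python
import struct

def modbus_read_holding_request(transaction_id, unit_id, start_addr, count):
    """Build a MODBUS-TCP ADU for function 0x03 (Read Holding Registers).

    MBAP header: transaction id, protocol id (always 0), length, unit id;
    PDU: function code, starting address, register count (all big-endian).
    """
    pdu = struct.pack(">BHH", 0x03, start_addr, count)
    # Length field counts the unit id byte plus the PDU.
    mbap = struct.pack(">HHHB", transaction_id, 0x0000, len(pdu) + 1, unit_id)
    return mbap + pdu

frame = modbus_read_holding_request(transaction_id=1, unit_id=1,
                                    start_addr=0x0000, count=4)
```

A driver would send such a frame over TCP to port 502 and decode the response registers into EPICS records.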
poster icon Poster WEPMU017 [2.847 MB]  
 
WEPMU018 Real-time Protection of the "ITER-like Wall at JET" real-time, FPGA, plasma, network 1096
 
  • M.B. Jouve, C. Balorin
    Association EURATOM-CEA, St Paul Lez Durance, France
  • G. Arnoux, S. Devaux, D. Kinna, P.D. Thomas, K-D. Zastrow
    CCFE, Abingdon, Oxon, United Kingdom
  • P.J. Carvalho
    IPFN, Lisbon, Portugal
  • J. Veyret
    Sundance France, Matignon, France
 
  During the last JET tokamak shutdown a new ITER-Like Wall was installed using tungsten and beryllium materials. To ensure plasma facing component (PFC) integrity, the real-time protection of the wall has been upgraded through the project "Protection for the ITER-like Wall" (PIW). The choice has been made to work with 13 robust analog CCD cameras viewing the main areas of plasma-wall interaction and to use regions of interest (ROIs) to monitor the surface temperature of the PFCs in real time. For each camera, ROIs are set up before the pulse; during plasma operation, surface temperatures from these ROIs are sent to the real-time processing system for monitoring and, if necessary, for preventing damage to the PFCs by modifying the plasma parameters. The video and the associated control system developed for this project are presented in this paper. The video is captured using a PLEORA frame grabber and sent over a GigE network to the real-time processing system (RTPS), which is divided into a "real-time processing unit" (RTPU) for surface temperature calculation and the "RTPU host" for connection between the RTPU and other systems. The RTPU design is based on commercial Xilinx Virtex-5 FPGA boards, with one board per camera and 2 boards per host. Programmed under Simulink using the System Generator blockset, the field programmable gate array (FPGA) can simultaneously manage up to 96 ROIs defined pixel by pixel.  
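
Conceptually, the per-ROI processing is a reduction over an arbitrary pixel set followed by a threshold comparison. The sketch below shows that logic in plain Python; the FPGA implementation is pipelined, and the ROI names, image and temperature limit here are invented:

```python
def roi_max_temperature(image, pixels):
    """Maximum surface temperature over a pixel-by-pixel ROI definition.

    image: 2D sequence of temperatures (deg C); pixels: iterable of (row, col).
    """
    return max(image[r][c] for r, c in pixels)

def rois_over_limit(image, rois, limit_c):
    """Names of ROIs whose maximum temperature exceeds the protection limit."""
    return [name for name, pixels in rois.items()
            if roi_max_temperature(image, pixels) > limit_c]

# Invented 2x3 "camera frame" and two ROIs defined pixel by pixel.
image = [[250, 260, 900],
         [255, 258, 270]]
rois = {"divertor_tile": [(0, 0), (0, 2)], "limiter": [(1, 0), (1, 1)]}
hot = rois_over_limit(image, rois, limit_c=800)
```

In the real system such a verdict would be forwarded to the plasma control to adapt the pulse rather than simply flagged.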
poster icon Poster WEPMU018 [2.450 MB]  
 
WEPMU020 LHC Collimator Controls for a Safe LHC Operation injection, FPGA, survey, operation 1104
 
  • S. Redaelli, R.W. Assmann, M. Donzé, R. Losito, A. Masi
    CERN, Geneva, Switzerland
 
  The energy stored in the beams at the Large Hadron Collider (LHC) will be up to 360 MJ, to be compared with the quench limit of the superconducting magnets of a few mJ per cm³ and with the damage limit of metal of a few hundred kJ. The LHC collimation system is designed to protect the machine against beam losses and consists of 108 collimators, 100 of which are movable, located along the 27 km ring and in the transfer lines. Each collimator has two jaws controlled by four stepping motors to precisely adjust the collimator position and angle with respect to the beam. Stepping motors have been used to ensure high position reproducibility, and LVDTs and resolvers have been installed to monitor the jaw positions and the collimator gaps in real time at 100 Hz. The cleaning performance and machine protection role of the system depend critically on accurate jaw positioning. A fully redundant survey system has been developed to ensure that the collimators dynamically follow the optimum settings in all phases of the LHC operational cycle. Jaw positions and collimator gaps are interlocked against dump limits defined redundantly as functions of time, of the beam energy and of the beta* functions that describe the focusing properties of the beams. In this paper, the architectural choices that guarantee safe LHC operation are presented. The hardware and software implementations that ensure the required reliability are described. The operational experience accumulated so far is reviewed, and a detailed failure analysis showing the fulfillment of the machine protection specifications is presented.  
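
The interlock logic reduces to checking each measured gap against a dump window computed from machine parameters. A sketch with a made-up limit function of beam energy only; the real limits also depend on time and beta*, with coefficients set by the collimation hierarchy:

```python
def gap_dump_window_mm(energy_gev, a_mm=1.2, b_mm_gev=600.0):
    """Illustrative allowed gap window: the nominal gap shrinks with energy.

    Coefficients are invented for the example; real dump limits are generated
    redundantly as functions of time, beam energy and beta*.
    """
    nominal = a_mm + b_mm_gev / max(energy_gev, 450.0)  # clamp below injection energy
    return 0.5 * nominal, 2.0 * nominal                  # (min, max) allowed gap in mm

def gap_within_limits(measured_gap_mm, energy_gev):
    """True if the measured collimator gap is inside the dump window."""
    lo, hi = gap_dump_window_mm(energy_gev)
    return lo <= measured_gap_mm <= hi
```

A gap outside this window at the 100 Hz survey rate would trigger a beam dump request.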
 
WEPMU022 Quality-Safety Management and Protective Systems for SPES monitoring, proton, radiation, operation 1108
 
  • S. Canella, D. Benini
    INFN/LNL, Legnaro (PD), Italy
 
  SPES (Selective Production of Exotic Species) is an INFN project to produce Radioactive Ion Beams (RIBs) at the Laboratori Nazionali di Legnaro (LNL). The RIBs will be produced by proton-induced fission on a direct UCx target. In SPES the proton driver will be a cyclotron with variable energy (15-70 MeV) and a maximum current of 0.750 mA on two exit ports. The SPES access control system and dose monitoring will be integrated into the facility's protective system to achieve the necessary high degree of safety and reliability and to prevent dangerous situations for people, the environment and the facility itself. A Quality and Safety Management System for SPES (QSMS) will be realized at LNL to manage all phases of the project (from design to decommissioning), including the commissioning and operation of the cyclotron. The protective system, with its documents, data and procedures, will be among the first items considered for the implementation of the QSMS. Here a general overview of the SPES radiation protection system, its planned architecture, data and procedures, together with their integration into the QSMS, is presented.  
poster icon Poster WEPMU022 [1.092 MB]  
 
WEPMU023 External Post-Operational Checks for the LHC Beam Dumping System kicker, operation, injection, extraction 1111
 
  • N. Magnin, V. Baggiolini, E. Carlier, B. Goddard, R. Gorbonosov, D. Khasbulatov, J.A. Uythoven, M. Zerlauth
    CERN, Geneva, Switzerland
 
  The LHC Beam Dumping System (LBDS) is a critical part of the LHC machine protection system. After every LHC beam dump action, the various signals and transient data recordings of the beam dumping control systems and the beam instrumentation measurements are automatically analysed by the eXternal Post-Operational Checks (XPOC) system to verify the correct execution of the dump action and the integrity of the related equipment. This software system complements the LHC machine protection hardware and has to ascertain that the beam dumping system is ‘as good as new’ before the start of the next operational cycle; this is the only way the stringent reliability requirements can be met. The XPOC system has been developed within the framework of the LHC “Post-Mortem” system, allowing highly dependable data acquisition, data archiving, live analysis of acquired data and replay of previously recorded events. It is composed of various analysis modules, each one dedicated to the analysis of measurements coming from specific equipment. This paper describes the global architecture of the XPOC system and gives examples of the analyses performed by some of the most important modules. It explains how the XPOC is integrated into the LHC control infrastructure and into the decision chain that allows beam operation to proceed. Finally, it discusses the operational experience with the XPOC acquired during the first years of LHC operation and illustrates examples of internal system faults or abnormal beam dump executions that it has detected.  
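
The modular structure described above can be sketched as a registry of independent analysis modules whose verdicts are combined into a single go/no-go decision. The module names, data fields and thresholds here are invented stand-ins for the real equipment-specific analyses:

```python
def xpoc_verdict(modules, dump_data):
    """Run every analysis module on the recorded dump data.

    modules: mapping of module name -> check function returning True on pass.
    Beam operation may proceed only if all modules pass.
    """
    results = {name: check(dump_data) for name, check in modules.items()}
    return all(results.values()), results

# Invented example modules: each inspects one aspect of the recorded dump.
modules = {
    "kicker_waveform": lambda d: d["kicker_peak_kv"] > 30.0,
    "beam_extracted":  lambda d: d["residual_intensity"] < 1e9,
}
ok, results = xpoc_verdict(modules, {"kicker_peak_kv": 33.5,
                                     "residual_intensity": 2e8})
```

A failed module would block the next injection until the fault is understood and acknowledged.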
poster icon Poster WEPMU023 [1.768 MB]  
 
WEPMU025 Equipment and Machine Protection Systems for the FERMI@Elettra FEL facility vacuum, TANGO, electron, linac 1119
 
  • F. Giacuzzo, L. Battistello, L. Fröhlich, G. Gaio, M. Lonza, G. Scalamera, G. Strangolino, D. Vittor
    ELETTRA, Basovizza, Italy
 
  Funding: The work was supported in part by the Italian Ministry of University and Research under grants FIRB-RBAP045JF2 and FIRB-RBAP06AWK3
FERMI@Elettra is a Free Electron Laser (FEL) based on a 1.5 GeV linac, presently under commissioning in Trieste, Italy. Three PLC-based systems, communicating with each other, ensure the protection of machine devices and equipment. The first is the interlock system for the linac radiofrequency plants; the second is dedicated to the protection of vacuum devices and magnets; the third is in charge of protecting various machine components from radiation damage. They all make use of a distributed architecture based on fieldbus technology and communicate with the control system via Ethernet interfaces and dedicated Tango device servers. A complete set of tools, including graphical panels and logging and archiving systems, is used to monitor the systems from the control room.
 
poster icon Poster WEPMU025 [0.506 MB]  
 
WEPMU026 Protecting Detectors in ALICE detector, injection, experiment, monitoring 1122
 
  • M. Lechman, A. Augustinus, P.Ch. Chochula, G. De Cataldo, A. Di Mauro, L.S. Jirdén, A.N. Kurepin, P. Rosinský, H. Schindler
    CERN, Geneva, Switzerland
  • A. Moreno
    Universidad Politécnica de Madrid, E.T.S.I Industriales, Madrid, Spain
  • O. Pinazza
    INFN-Bologna, Bologna, Italy
 
  ALICE is one of the large LHC experiments at CERN in Geneva. It is composed of many sophisticated and complex detectors mounted very compactly around the beam pipe. Each detector is a unique masterpiece of design, engineering and construction, and any damage to it could stop the experiment for months or even years. It is therefore essential that the detectors are protected from any danger, and this is one very important role of the Detector Control System (DCS). One of the main dangers for the detectors is the particle beam itself: since the detectors are designed to be extremely sensitive to particles, they are also vulnerable to any abnormal beam conditions delivered by the LHC accelerator. The beam protection consists of a combination of hardware interlocks and control software, and this paper describes how this is implemented and handled in ALICE. Tools have also been developed to support operators and shift leaders in decision making related to beam safety. The experience gained and the conclusions from the individual safety projects are also presented.  
poster icon Poster WEPMU026 [1.561 MB]  
 
WEPMU028 Development Status of Personnel Protection System for IFMIF/EVEDA Accelerator Prototype radiation, operation, monitoring, status 1126
 
  • T. Kojima, T. Narita, K. Nishiyama, H. Sakaki, H. Takahashi, K. Tsutsumi
    Japan Atomic Energy Agency (JAEA), International Fusion Energy Research Center (IFERC), Rokkasho, Kamikita, Aomori, Japan
 
  The control system for the IFMIF/EVEDA* accelerator prototype consists of six subsystems: the Central Control System (CCS), Local Area Network (LAN), Personnel Protection System (PPS), Machine Protection System (MPS), Timing System (TS) and Local Control Systems (LCS). The IFMIF/EVEDA accelerator prototype provides a deuteron beam with a power greater than 1 MW, comparable to that of J-PARC and SNS. The PPS is required to protect technical and engineering staff against unnecessary exposure, electrical shock and other hazards. The PPS has two functions: building management and accelerator management. For both, Programmable Logic Controllers (PLCs), monitoring cameras, limit switches, etc. are used for the interlock system, and a sequence is programmed for entering and leaving the controlled area. This article presents the PPS design and its interfaces to the other accelerator subsystems in detail.
* International Fusion Material Irradiation Facility / Engineering Validation and Engineering Design Activity
 
poster icon Poster WEPMU028 [1.164 MB]  
 
WEPMU029 Assessment And Testing of Industrial Devices Robustness Against Cyber Security Attacks network, framework, monitoring, target 1130
 
  • F.M. Tilaro, B. Copy
    CERN, Geneva, Switzerland
 
  CERN (the European Organization for Nuclear Research), like any organization, needs to achieve the conflicting objectives of connecting its operational network to the Internet while at the same time keeping its industrial control systems secure from external and internal cyber attacks. With this in mind, the ISA-99 [1] international cyber security standard has been adopted at CERN as a reference model to define a set of guidelines and security robustness criteria applicable to any network device. Device robustness represents a key link in the defense-in-depth concept, as some attacks will inevitably penetrate security boundaries and thus require further protection measures. When assessing the cyber security robustness of devices, we have singled out control-system-relevant attack patterns derived from the well-known CAPEC [2] classification. Once a vulnerability is identified, it needs to be documented, prioritized and reproduced at will in a dedicated test environment for debugging purposes. CERN, in collaboration with SIEMENS, has designed and implemented a dedicated working environment, the Test-bench for Robustness of Industrial Equipments [3] ("TRoIE"). The tests attempt to detect possible anomalies by exploiting corrupt communication channels and manipulating the normal behavior of the communication protocols, in the same way as a cyber attacker would proceed. This document provides an inventory of security guidelines [4] relevant to the CERN industrial environment and describes how we have automated the collection and classification of identified vulnerabilities into a test-bench.
[1] http://www.isa.org
[2] http://capec.mitre.org
[3] F. Tilaro, "Test-bench for Robustness…", CERN, 2009
[4] B. Copy, F. Tilaro, "Standards based measurable security for embedded devices", ICALEPCS 2009
 
poster icon Poster WEPMU029 [3.152 MB]  
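A flavour of the protocol-manipulation tests described above can be given in a few lines. The following is a generic bit-flip mutator, a minimal sketch only: the frame bytes, function name and flip count are illustrative assumptions, not part of the actual TRoIE test suite.

```python
import random

def mutate_packet(packet: bytes, flips: int, seed: int = 0) -> bytes:
    """Return a copy of `packet` with `flips` random single-bit corruptions,
    of the kind a robustness test might feed to an industrial protocol stack."""
    rng = random.Random(seed)             # deterministic, so cases can be replayed
    data = bytearray(packet)
    for _ in range(flips):
        i = rng.randrange(len(data))      # pick a byte...
        data[i] ^= 1 << rng.randrange(8)  # ...and flip one of its bits
    return bytes(data)

# A mutated frame keeps its length but differs in content (an odd number of
# single-bit flips can never cancel out completely).
frame = bytes.fromhex("030000161100e00000080000")
fuzzed = mutate_packet(frame, flips=3)
```

Because the seed is fixed, any anomaly found with a given mutation can be reproduced at will, which matches the "reproduced at will in a dedicated test environment" requirement of the abstract.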
 
WEPMU030 CERN Safety System Monitoring - SSM monitoring, network, interface, database 1134
 
  • T. Hakulinen, P. Ninin, F. Valentini
    CERN, Geneva, Switzerland
  • J. Gonzalez, C. Salatko-Petryszcze
    ASsystem, St Genis Pouilly, France
 
  CERN SSM (Safety System Monitoring) is a system for monitoring the state of health of the various access and safety systems of the CERN site and accelerator infrastructure. The emphasis of SSM is on the needs of maintenance and system operation, with the aim of providing an independent and reliable verification path for the basic operational parameters of each system. Included are all network-connected devices, such as PLCs, servers, panel displays, operator posts, etc. The basic monitoring engine of SSM is the freely available system monitoring framework Zabbix, on top of which a simplified traffic-light-type web interface has been built. The web interface of SSM is designed to be ultra-light to facilitate access from handheld devices over slow connections. The underlying Zabbix system offers the history and notification mechanisms typical of advanced monitoring systems.  
poster icon Poster WEPMU030 [1.231 MB]  
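The severity-to-colour collapse behind such a traffic-light view can be sketched in a few lines of Python. Zabbix defines trigger severities from 0 (not classified) to 5 (disaster); the cut-off values below are illustrative assumptions, not SSM's actual mapping.

```python
def traffic_light(severities):
    """Collapse the worst active Zabbix trigger severity (0-5) into a
    red/amber/green state for an ultra-light status page."""
    worst = max(severities, default=0)  # no active triggers -> all clear
    if worst >= 4:                      # high or disaster
        return "red"
    if worst >= 2:                      # warning or average
        return "amber"
    return "green"                      # not classified, information, ok

assert traffic_light([]) == "green"
assert traffic_light([1, 3]) == "amber"
assert traffic_light([2, 5]) == "red"
```

Keeping the reduction server-side means the handheld client only ever downloads one colour per system, which suits slow connections.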
 
WEPMU031 Virtualization in Control System Environment EPICS, network, hardware, operation 1138
 
  • L.R. Shen, D.K. Liu, T. Wan
    SINAP, Shanghai, People's Republic of China
 
  In a large-scale distributed control system there are many common services composing the environment of the entire control system, such as servers for the common software base library, application servers, archive servers and so on. This paper describes a virtualization of a control system environment, covering the server, storage, network and application layers. With a virtualization instance of the EPICS-based control system environment built with VMware vSphere v4, we tested the whole functionality of this virtualized environment in the SSRF control system, including the common servers for NFS, NIS, NTP, booting and the EPICS base and extension library tools. We also virtualized application servers such as the archiver, alarm server, EPICS gateway and all of the network-based IOCs. In particular, we successfully tested high availability (HA) and vMotion for EPICS asynchronous IOCs under the different VLAN configurations of the current SSRF control system network.  
 
WEPMU033 Monitoring Control Applications at CERN monitoring, operation, framework, software 1141
 
  • F. Varela, F.B. Bernard, M. Gonzalez-Berges, H. Milcent, L.B. Petrova
    CERN, Geneva, Switzerland
 
  The Industrial Controls and Engineering (EN-ICE) group of the Engineering Department at CERN has produced, and is responsible for the operation of, around 60 applications which control critical processes in the domains of cryogenics, quench protection systems, power interlocks for the Large Hadron Collider and other sub-systems of the accelerator complex. These applications require 24/7 operation and a quick reaction to problems. For this reason EN-ICE is presently developing a monitoring tool to detect, anticipate and report possible anomalies in the integrity of the applications. The tool builds on top of the Simatic WinCC Open Architecture (formerly PVSS) SCADA system and makes use of the Joint COntrols Project (JCOP) and UNICOS frameworks developed at CERN. It provides centralized monitoring of the different elements making up the control systems, such as Windows and Linux servers, PLCs, applications, etc. Although the primary aim of the tool is to assist the members of the EN-ICE standby service, it can present different levels of detail depending on the user, which enables experts to diagnose and troubleshoot problems. In this paper, the scope, functionality and architecture of the tool are presented and some initial results on its performance are summarized.  
poster icon Poster WEPMU033 [1.719 MB]  
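One elementary building block of such application monitoring is a heartbeat staleness check: anything that has not reported recently becomes a candidate for an alert. The sketch below is an illustration under assumed names and an assumed 60-second timeout, not the EN-ICE implementation.

```python
import time

def stale_applications(last_update, now=None, timeout=60.0):
    """Return the names of applications whose last heartbeat timestamp is
    older than `timeout` seconds -- candidates for a standby-service alert."""
    now = time.time() if now is None else now
    return sorted(name for name, ts in last_update.items() if now - ts > timeout)

# Hypothetical heartbeat timestamps (seconds); only "gateway" is 150 s old.
beats = {"cryo_plc": 1000.0, "qps_server": 1040.0, "gateway": 900.0}
assert stale_applications(beats, now=1050.0, timeout=60.0) == ["gateway"]
```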
 
WEPMU034 Infrastructure of Taiwan Photon Source Control Network network, EPICS, Ethernet, timing 1145
 
  • Y.-T. Chang, J. Chen, Y.-S. Cheng, K.T. Hsu, S.Y. Hsu, K.H. Hu, C.H. Kuo, C.Y. Wu
    NSRRC, Hsinchu, Taiwan
 
  A reliable, flexible and secure network is essential for the Taiwan Photon Source (TPS) control system, which is based upon the EPICS toolkit framework. Subsystem subnets will connect to the control system via EPICS-based CA gateways to forward data and reduce network traffic. By combining cyber security technologies such as firewalls, NAT and VLANs, the control network is isolated to protect the IOCs and accelerator components. Network management tools are used to improve network performance. A remote access mechanism will be constructed for maintenance and troubleshooting. Ethernet is also used as a fieldbus for instruments such as power supplies. This paper describes the system architecture of the TPS control network. Cabling topology, redundancy and maintainability are also discussed.  
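A helper script on a gateway or firewall host might need exactly this kind of subnet membership test, which the Python standard library covers directly. The address plan below is invented for illustration; the real TPS subnets are not stated in the abstract.

```python
import ipaddress

# Example address plan only -- not the actual TPS network layout.
CONTROL_NETS = [ipaddress.ip_network(n) for n in ("10.1.0.0/16", "10.2.8.0/24")]

def on_control_network(host: str) -> bool:
    """True if `host` falls inside one of the isolated control-system subnets."""
    addr = ipaddress.ip_address(host)
    return any(addr in net for net in CONTROL_NETS)

assert on_control_network("10.1.200.7") is True
assert on_control_network("10.2.8.15") is True
assert on_control_network("192.168.0.5") is False
```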
 
WEPMU037 Virtualization for the LHCb Experiment network, experiment, Linux, hardware 1157
 
  • E. Bonaccorsi, L. Brarda, M. Chebbi, N. Neufeld
    CERN, Geneva, Switzerland
  • F. Sborzacchi
    INFN/LNF, Frascati (Roma), Italy
 
  The LHCb experiment, one of the four large particle physics detectors at CERN, counts in its online system more than 2000 servers and embedded systems. As a result of ever-increasing CPU performance in modern servers, many of the applications in the control system are excellent candidates for virtualization technologies. We see virtualization as an approach to cut down cost, optimize resource usage and manage the complexity of the IT infrastructure of LHCb. Recently we have added a Kernel-based Virtual Machine (KVM) cluster based on Red Hat Enterprise Virtualization for Servers (RHEV), complementary to the existing Hyper-V cluster devoted solely to the virtualization of Windows guests. This paper describes the architecture of our KVM/RHEV solution, its integration with the existing Hyper-V infrastructure and the Quattor cluster management tools, and in particular how we run controls applications on a virtualized infrastructure. We present performance results for both the KVM and Hyper-V solutions, problems encountered, and a description of the management tools developed for the integration with the online cluster and the LHCb SCADA control system based on PVSS.  
 
WEPMU038 Network Security System and Method for RIBF Control System network, EPICS, operation, status 1161
 
  • A. Uchiyama
    SHI Accelerator Service Ltd., Tokyo, Japan
  • M. Fujimaki, N. Fukunishi, M. Komiyama, R. Koyama
    RIKEN Nishina Center, Wako, Japan
 
  In the RIKEN RI Beam Factory (RIBF), the local area network for the accelerator control system (control system network) consists of commercially produced Ethernet switches, optical fibers and metal cables. E-mail and Internet access for tasks unrelated to accelerator operation, on the other hand, are handled by the RIKEN virtual LAN (VLAN), the office network. From the viewpoint of information security, we decided to separate the control system network from the Internet and to operate it independently of the VLAN. However, this was inconvenient for users, because they could no longer monitor the information and status of accelerator operation from their offices in real time. To improve this situation, we have constructed a secure system which allows users on the VLAN to obtain accelerator information from the control system network, while preventing outsiders from accessing it. To allow access from the VLAN to the inside of the control system network, we set up a reverse proxy server and a firewall. In addition, we implemented a system that sends e-mail security alerts from the control system network to the VLAN. In our contribution, we report on this system and its present status in detail.  
poster icon Poster WEPMU038 [45.776 MB]  
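The access policy such a reverse proxy enforces can be sketched as a pure decision function: read-only HTTP methods to a small whitelist of status pages, everything else rejected. The method set and path prefixes below are assumptions for illustration, not RIBF's actual rules.

```python
# Illustrative policy: office-network (VLAN) clients may only read status
# pages; anything that could write to the control network is refused.
READ_ONLY = {"GET", "HEAD"}
ALLOWED_PREFIXES = ("/status/", "/archiver/")

def allow_request(method: str, path: str) -> bool:
    """Decide whether a proxied request from the office VLAN is acceptable."""
    return method.upper() in READ_ONLY and path.startswith(ALLOWED_PREFIXES)

assert allow_request("GET", "/status/rf/") is True
assert allow_request("POST", "/status/rf/") is False   # writes are rejected
assert allow_request("GET", "/ioc/reboot") is False    # path not whitelisted
```

Centralizing the decision in one function makes the policy easy to audit, which matters more than cleverness in a security boundary.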
 
WEPMU039 Virtual IO Controllers at J-PARC MR using Xen EPICS, operation, network, Linux 1165
 
  • N. Kamikubota, N. Yamamoto
    J-PARC, KEK & JAEA, Ibaraki-ken, Japan
  • T. Iitsuka, S. Motohashi, M. Takagi, S.Y. Yoshida
    Kanto Information Service (KIS), Accelerator Group, Ibaraki, Japan
  • H. Nemoto
    ACMOS INC., Tokai-mura, Ibaraki, Japan
  • S. Yamada
    KEK, Ibaraki, Japan
 
  The control system for the J-PARC accelerator complex has been developed based on the EPICS toolkit. About 100 traditional ("real") VME-bus computers are used as EPICS IOCs in the control system for the J-PARC MR (Main Ring). Recently, we have introduced "virtual" IOCs using Xen, an open-source virtual machine monitor. Scientific Linux with an EPICS iocCore runs on each Xen virtual machine, where EPICS databases for network devices and EPICS soft records can be configured. Multiple virtual IOCs run on a high-performance blade-type server running Scientific Linux as the native OS. A small number of virtual IOCs have been demonstrated in MR operation since October 2010. Experience and future perspectives will be discussed.  
 
WEPMU040 Packaging of Control System Software software, EPICS, Linux, database 1168
 
  • K. Žagar, M. Kobal, N. Saje, A. Žagar
    Cosylab, Ljubljana, Slovenia
  • F. Di Maio, D. Stepanov
    ITER Organization, St. Paul lez Durance, France
  • R. Šabjan
    COBIK, Solkan, Slovenia
 
  Funding: ITER European Union, European Regional Development Fund and Republic of Slovenia, Ministry of Higher Education, Science and Technology
Control system software consists of several parts: the core of the control system, drivers for the integration of devices, configuration for user interfaces, the alarm system, etc. Once the software is developed and configured, it must be installed on the computers where it runs. Usually, it is installed on an operating system whose services it needs, and in some cases it dynamically links with the libraries the operating system provides. The operating system can be quite complex itself; for example, a typical Linux distribution consists of several thousand packages. To manage this complexity, we have decided to rely on the Red Hat Package Manager (RPM) to package control system software and also to ensure it is properly installed (i.e., that dependencies are installed as well, and that scripts are run after installation if any additional actions need to be performed). As dozens of RPM packages need to be prepared, we reduce the amount of effort and improve consistency between packages through a Maven-based infrastructure that assists in packaging (e.g., automated generation of RPM SPEC files, including automated identification of dependencies). So far, we have used it to package EPICS, Control System Studio (CSS) and several device drivers. We perform extensive testing on Red Hat Enterprise Linux 5.5, but we have also verified that the packaging works on CentOS and Scientific Linux. In this article, we describe in greater detail the packaging system we are using and its particular application to the ITER CODAC Core System.
 
poster icon Poster WEPMU040 [0.740 MB]  
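The automated SPEC-file generation mentioned above can be illustrated with a minimal sketch. The field set is deliberately reduced and the package names are invented; the actual Maven tooling emits considerably richer SPEC files.

```python
def make_spec(name, version, requires, summary):
    """Render a minimal RPM SPEC file from package metadata, in the spirit
    of the automated generation described in the abstract (simplified)."""
    lines = [
        f"Name: {name}",
        f"Version: {version}",
        "Release: 1%{?dist}",
        f"Summary: {summary}",
        "License: Unknown",
    ]
    # Dependencies found by automated identification would be injected here.
    lines += [f"Requires: {dep}" for dep in requires]
    lines += ["%description", summary]
    return "\n".join(lines)

spec = make_spec("codac-core-epics", "1.0",
                 ["epics-base", "readline"], "EPICS for CODAC")
assert "Requires: epics-base" in spec
assert spec.startswith("Name: codac-core-epics")
```

Generating SPEC files from one metadata source, rather than hand-editing dozens of them, is what keeps the packages mutually consistent.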
 
THAAUST01 Tailoring the Hardware to Your Control System EPICS, hardware, FPGA, interface 1171
 
  • E. Björklund, S.A. Baily
    LANL, Los Alamos, New Mexico, USA
 
  Funding: Work supported by the US Department of Energy under contract DE-AC52-06NA25396
In the very early days of computerized accelerator control systems the entire control system, from the operator interface to the front-end data acquisition hardware, was custom designed and built for that one machine. This was expensive, but the resulting product was a control system seamlessly integrated (mostly) with the machine it was to control. Later, the advent of standardized bus systems such as CAMAC, VME, and CANBUS, made it practical and attractive to purchase commercially available data acquisition and control hardware. This greatly simplified the design but required that the control system be tailored to accommodate the features and eccentricities of the available hardware. Today we have standardized control systems (Tango, EPICS, DOOCS) using commercial hardware on standardized busses. With the advent of FPGA technology and programmable automation controllers (PACs & PLCs) it now becomes possible to tailor commercial hardware to the needs of a standardized control system and the target machine. In this paper, we will discuss our experiences with tailoring a commercial industrial I/O system to meet the needs of the EPICS control system and the LANSCE accelerator. We took the National Instruments Compact RIO platform, embedded an EPICS IOC in its processor, and used its FPGA backplane to create a "standardized" industrial I/O system (analog in/out, binary in/out, counters, and stepper motors) that meets the specific needs of the LANSCE accelerator.
 
slides icon Slides THAAUST01 [0.812 MB]  
 
THAAUST02 Suitability Assessment of OPC UA as the Backbone of Ground-based Observatory Control Systems software, framework, interface, CORBA 1174
 
  • W. Pessemier, G. Deconinck, G. Raskin, H. Van Winckel
    KU Leuven, Leuven, Belgium
  • P. Saey
    Katholieke Hogeschool Sint-Lieven, Gent, Belgium
 
  A common requirement of modern observatory control systems is to allow interaction between various heterogeneous subsystems in a transparent way. However, the integration of COTS industrial products - such as PLCs and SCADA software - has long been hampered by the lack of an adequate, standardized interfacing method. With the advent of the Unified Architecture version of OPC (Object Linking and Embedding for Process Control), the limitations of the original industry-accepted interface are now lifted, and in addition much more functionality has been defined. In this paper the most important features of OPC UA are matched against the requirements of ground-based observatory control systems in general and in particular of the 1.2m Mercator Telescope. We investigate the opportunities of the "information modelling" idea behind OPC UA, which could allow an extensive standardization in the field of astronomical instrumentation, similar to the standardization efforts emerging in several industry domains. Because OPC UA is designed for both vertical and horizontal integration of heterogeneous subsystems and subnetworks, we explore its capabilities to serve as the backbone of a dependable and scalable observatory control system, treating "industrial components" like PLCs no differently than custom software components. In order to quantitatively assess the performance and scalability of OPC UA, stress tests are described and their results are presented. Finally, we consider practical issues such as the availability of COTS OPC UA stacks, software development kits, servers and clients.  
slides icon Slides THAAUST02 [2.879 MB]  
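Reducing the raw round-trip times of such a stress test to quotable summary figures might look like the sketch below; the sample values and the nearest-rank percentile convention are illustrative choices, not those of the paper.

```python
def latency_summary(samples_ms):
    """Reduce raw round-trip times (ms) to mean, median and an
    approximate 95th percentile (nearest-rank style)."""
    s = sorted(samples_ms)
    n = len(s)
    mean = sum(s) / n
    median = s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2
    p95 = s[min(n - 1, int(0.95 * n))]
    return {"mean": mean, "median": median, "p95": p95}

# One slow outlier dominates the tail but barely moves the median.
stats = latency_summary([1.2, 1.3, 1.1, 9.8, 1.2, 1.4, 1.3, 1.2, 1.5, 1.3])
assert stats["median"] < stats["p95"]
```

Reporting a high percentile alongside the mean is what exposes tail behaviour under load, which a mean alone would hide.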
 
THBHAUST01 SNS Online Display Technologies for EPICS network, status, site, EPICS 1178
 
  • K.-U. Kasemir, X.H. Chen, E. Danilova, J.D. Purcell
    ORNL, Oak Ridge, Tennessee, USA
 
  Funding: SNS is managed by UT-Battelle, LLC, under contract DE-AC05-00OR22725 for the U.S. Department of Energy
The ubiquity of web clients, from personal computers to cell phones, results in a growing demand for web-based access to control system data. At the Oak Ridge National Laboratory Spallation Neutron Source (SNS) we have investigated different technical approaches to providing read access to data in the Experimental Physics and Industrial Control System (EPICS) for a wide variety of web client devices. We compare them in terms of requirements, performance and ease of maintenance.
 
slides icon Slides THBHAUST01 [3.040 MB]  
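One of the simplest read-only interchange formats for the range of clients mentioned above is a JSON snapshot of PV readings. The PV names and payload layout below are invented for illustration and are not the SNS interface.

```python
import json

def pv_snapshot_json(pvs):
    """Serialize PV readings ({name: (value, units)}) into the kind of JSON
    payload a read-only web gateway could hand to browser or phone clients."""
    payload = [{"pv": name, "value": value, "units": units}
               for name, (value, units) in sorted(pvs.items())]
    return json.dumps(payload)

doc = pv_snapshot_json({"SNS:Beam:Power": (1.05, "MW"),
                        "SNS:Ring:Freq": (1.059, "MHz")})
assert json.loads(doc)[0]["pv"] == "SNS:Beam:Power"
```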
 
THBHAUST02 The Wonderland of Operating the ALICE Experiment detector, operation, experiment, interface 1182
 
  • A. Augustinus, P.Ch. Chochula, G. De Cataldo, L.S. Jirdén, A.N. Kurepin, M. Lechman, O. Pinazza, P. Rosinský
    CERN, Geneva, Switzerland
  • A. Moreno
    Universidad Politécnica de Madrid, E.T.S.I Industriales, Madrid, Spain
 
  ALICE is one of the experiments at the Large Hadron Collider (LHC) at CERN (Geneva, Switzerland). It is composed of 18 sub-detectors, each with numerous subsystems that need to be controlled and operated in a safe and efficient way. The Detector Control System (DCS) is the key to this and has been used with success by detector experts during the commissioning of the individual detectors. With the transition from commissioning to operation, more and more tasks were transferred from detector experts to central operators. By the end of the 2010 data-taking campaign the ALICE experiment was run by a small crew of central operators, with only a single controls operator. The transition from expert to non-expert operation constituted a real challenge in terms of tools, documentation and training. In addition, the relatively high turnover and diversity of the operator crew that is specific to the HEP experiment environment (as opposed to the more stable operation crews of accelerators) made this challenge even bigger. This paper describes the original architectural choices that were made and the key components that led to a homogeneous control system allowing efficient centralized operation. Challenges and specific constraints that apply to the operation of a large, complex experiment are described. Emphasis is put on the tools and procedures that were implemented to allow the transition from local operation by detector experts during commissioning and early operation to efficient centralized operation by a small operator crew not necessarily consisting of experts.  
slides icon Slides THBHAUST02 [1.933 MB]  
 
THBHAUST03 Purpose and Benefit of Control System Training for Operators EPICS, status, hardware, background 1186
 
  • E. Zimoch, A. Lüdeke
    Paul Scherrer Institut, 5232 Villigen PSI, Switzerland
 
  The complexity of accelerators is ever increasing, and today it is typical that a large number of feedback loops are implemented, based on sophisticated models which describe the underlying physics. Despite this increased complexity, the machine operators must still effectively monitor and supervise the desired behaviour of the accelerator. This alone is not sufficient; additionally, the correct operation of the control system itself must be verified. This is not always easy, since the structure, design and performance of the control system are usually not visualized and are often hidden from the operator. To better deal with this situation, operators need some knowledge of the control system in order to react properly in case of problems. In this paper we will present the approach of the Paul Scherrer Institute to operator control system training and discuss its benefits.  
slides icon Slides THBHAUST03 [4.407 MB]  
 
THBHAUST04 jddd, a State-of-the-art Solution for Control Panel Development operation, software, feedback, status 1189
 
  • E. Sombrowski, A. Petrosyan, K. Rehlich, W. Schütte
    DESY, Hamburg, Germany
 
  Software for graphical user interfaces to control systems may be developed as a rich or thin client. The thin client approach has the advantage that anyone can create and modify control system panels without specific skills in software programming. The Java DOOCS Data Display, jddd, is based on the thin client interaction model. It provides "Include" components and address inheritance for the creation of generic displays. Wildcard operations and regular expression filters are used to customize the graphics content at runtime, e.g. in a "DynamicList" component the parameters have to be painted only once in edit mode and then are automatically displayed multiple times for all available instances in run mode. This paper will describe the benefits of using jddd for control panel design as an alternative to rich client development.  
slides icon Slides THBHAUST04 [0.687 MB]  
 
THBHAUST05 First Operation of the Wide-area Remote Experiment System experiment, operation, radiation, synchrotron 1193
 
  • Y. Furukawa, K. Hasegawa
    JASRI/SPring-8, Hyogo-ken, Japan
  • G. Ueno
    RIKEN Spring-8 Harima, Hyogo, Japan
 
  The Wide-area Remote Experiment System (WRES) at SPring-8 has been successfully developed [1]. The system communicates with remote users over SSL/TLS with bi-directional authentication, to avoid interference from unauthorized access. It has a message filtering system that allows remote users access only to the corresponding beamline equipment, and a safety interlock system that protects persons beside the experimental station from accidental motion of heavy equipment. The system also has a video streaming system to monitor samples and experimental equipment. We have tested the system from the points of view of safety, stability, reliability, etc., and successfully performed the first experiment from a remote site, the RIKEN Wako campus 480 km away from SPring-8, at the end of October 2010.
[1] Y. Furukawa, K. Hasegawa, D. Maeda, G. Ueno, "Development of remote experiment system", Proc. ICALEPCS 2009(Kobe, Japan) P.615
 
slides icon Slides THBHAUST05 [5.455 MB]  
 
THBHAUIO06 Cognitive Ergonomics of Operational Tools interface, operation, power-supply, software 1196
 
  • A. Lüdeke
    Paul Scherrer Institut, 5232 Villigen PSI, Switzerland
 
  Control systems have become continuously more powerful over the past decades. High data throughput and sophisticated graphical interactions have opened a variety of new possibilities. But has this helped to provide intuitive, easy-to-use applications that simplify the operation of modern large-scale accelerator facilities? We will discuss what makes an application useful to operation and what is necessary to make a tool easy to use. We will show that implementing even a small number of simple design rules for applications can help to ease the operation of a facility.  
slides icon Slides THBHAUIO06 [23.914 MB]  
 
THBHMUST02 Assessing Software Quality at Each Step of its Lifecycle to Enhance Reliability of Control Systems software, TANGO, monitoring, factory 1205
 
  • V.H. Hardion, G. Abeillé, A. Buteau, S. Lê, N. Leclercq, S. Pierre-Joseph Zéphir
    SOLEIL, Gif-sur-Yvette, France
 
  A distributed software control system aims to enhance evolvability and reliability by sharing responsibility between several components. The disadvantage is that detecting problems across a significant number of modules is harder. In the Kaizen spirit, we chose to invest continuously in automation to obtain a complete overview of software quality despite the growth of legacy code. The development process was already well managed, with each lifecycle step staged through a continuous integration server based on JENKINS and MAVEN. We enhanced this process focusing on three objectives: automatic testing, static code analysis and post-mortem supervision. The build process now automatically includes the test phase to detect regressions, wrong behavior and integration incompatibilities. The in-house TANGOUNIT project addresses the difficulty of testing distributed components such as Tango devices. Next, the code has to pass a complete quality check-up: the SONAR quality server was integrated into the process to collect the results of each static code analysis and display the hot topics on synthetic web pages. Finally, the integration of Google BREAKPAD into every Tango device gives us essential statistics from crash reports and allows us to replay crash scenarios at any time. These gains already give us more visibility on current developments. Some concrete results will be presented, such as reliability enhancement, better management of subcontracted software development, quicker adoption of coding standards by new developers, and understanding of the impacts of moving to a new technology.  
slides icon Slides THBHMUST02 [2.973 MB]  
 
THBHMUST03 System Design towards Higher Availability for Large Distributed Control Systems hardware, network, operation, neutron 1209
 
  • S.M. Hartman
    ORNL, Oak Ridge, Tennessee, USA
 
  Funding: SNS is managed by UT-Battelle, LLC, under contract DE-AC05-00OR22725 for the U.S. Department of Energy
Large distributed control systems for particle accelerators present a complex system engineering challenge. The system, with its significant quantity of components and their complex interactions, must be able to support reliable accelerator operations while providing the flexibility to accommodate changing requirements. System design and architecture focused on required data flow are key to ensuring high control system availability. Using examples from the operational experience of the Spallation Neutron Source at Oak Ridge National Laboratory, recommendations will be presented for leveraging current technologies to design systems for high availability in future large scale projects.
 
slides icon Slides THBHMUST03 [7.833 MB]  
 
THBHMUST04 The Software Improvement Process – Tools and Rules to Encourage Quality software, operation, feedback, FEL 1212
 
  • K. Sigerud, V. Baggiolini
    CERN, Geneva, Switzerland
 
  The Applications section of the CERN accelerator controls group has decided to apply a systematic approach to quality assurance (QA), the "Software Improvement Process" (SIP). This process focuses on three areas: the development process itself, suitable QA tools, and how to practically encourage developers to do QA. For each stage of the development process we have agreed on the recommended activities and deliverables, and identified tools to automate and support the task. For example, we do more code reviews. As peer reviews are resource-intensive, we only do them for complex parts of a product. As a complement, we are using static code checking tools, like FindBugs and Checkstyle. We also encourage unit testing and have agreed on a minimum level of test coverage recommended for all products, measured using Clover. Each of these tools is well integrated with our IDE (Eclipse) and gives instant feedback to the developers about the quality of their code. The major challenges of SIP have been to 1) agree on common standards and configurations, for example common code formatting and Javadoc documentation guidelines, and 2) encourage the developers to do QA. To address the second point, we have successfully implemented 'SIP days', i.e. one day dedicated to QA work in which the whole group of developers participates, and 'Top/Flop' lists, clearly indicating the best and worst products with regard to SIP guidelines and standards, for example test coverage. This paper presents the SIP initiative in more detail, summarizing our experience over the past two years and our future plans.  
slides icon Slides THBHMUST04 [5.638 MB]  
 
THCHAUST04 Management of Experiments and Data at the National Ignition Facility laser, target, experiment, diagnostics 1224
 
  • S.G. Azevedo, R.G. Beeler, R.C. Bettenhausen, E.J. Bond, A.D. Casey, H.C. Chandrasekaran, C.B. Foxworthy, M.S. Hutton, J.E. Krammen, J.A. Liebman, A.A. Marsh, T. M. Pannell, D.E. Speck, J.D. Tappero, A.L. Warrick
    LLNL, Livermore, California, USA
 
  Funding: This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Experiments, or "shots", conducted at the National Ignition Facility (NIF) are discrete events that occur over a very short time frame (tens of ns), separated by hours. Each shot is part of a larger campaign of shots to advance scientific understanding in high-energy-density physics. In one campaign, energy from the 192-beam, 1.8-Megajoule pulsed laser in NIF will be used to implode a hydrogen-filled target to demonstrate controlled fusion. Each shot generates gigabytes of data from over 30 diagnostics that measure optical, x-ray, and nuclear phenomena from the imploding target. Because of the low duty cycle of shots, and the thousands of adjustments for each shot (target type, composition, shape; laser beams used, their power profiles, pointing; diagnostic systems used, their configuration, calibration, settings), it is imperative that we accurately define all equipment prior to the shot. Following the shot, and the data acquisition by the automatic control system, it is equally imperative that we archive, analyze and visualize the results within the required 30 minutes post-shot. Results must be securely stored, approved, web-visible and downloadable in order to facilitate subsequent publication. To date, NIF has successfully fired over 2,500 system shots, and thousands of test firings and dry runs. We will present an overview of the highly flexible and scalable campaign setup and management systems that control all aspects of the experimental NIF shot cycle, from configuration of drive lasers all the way through presentation of analyzed results.
LLNL-CONF-476112
 
slides icon Slides THCHAUST04 [5.650 MB]  
 
THCHAUST05 LHCb Online Log Analysis and Maintenance System Linux, software, network, detector 1228
 
  • J.C. Garnier, L. Brarda, N. Neufeld, F. Nikolaidis
    CERN, Geneva, Switzerland
 
  History has shown, many times, that computer logs are the only information an administrator has about an incident, whether it was caused by a malfunction or by an attack. Due to the huge amount of logs produced by large-scale IT infrastructures such as LHCb Online, critical information may be overlooked or simply drowned in a sea of other messages. This clearly demonstrates the need for an automatic system for long-term maintenance and real-time analysis of the logs. We have constructed a low-cost, fault-tolerant centralized logging system which is able to do in-depth analysis and cross-correlation of every log. This system is capable of handling O(10000) different log sources and numerous formats, while keeping the overhead as low as possible. It provides log gathering and management, offline analysis and online analysis: we call offline analysis the procedure of analyzing old logs for critical information, while online analysis refers to early alerting and reaction. The system is extensible and cooperates well with other applications such as intrusion detection/prevention systems. This paper presents the LHCb Online topology, the problems we had to overcome and our solutions. Special emphasis is given to log analysis, how we use it for monitoring, and how we maintain uninterrupted access to the logs. We provide performance plots, code modifications to well-known log tools and our experience from trying various storage strategies.  
slides icon Slides THCHAUST05 [0.377 MB]  
 
THCHMUST01 Control System for Cryogenic THD Layering at the National Ignition Facility target, cryogenics, hardware, laser 1236
 
  • M.A. Fedorov, O.D. Edwards, E.A. Mapoles, J. Mauvais, T.G. Parham, R.J. Sanchez, J.M. Sater, B.A. Wilson
    LLNL, Livermore, California, USA
 
  Funding: This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
The National Ignition Facility (NIF) is the world's largest and most energetic laser system for Inertial Confinement Fusion (ICF). In 2010, NIF began ignition experiments using cryogenically cooled targets containing layers of the tritium-hydrogen-deuterium (THD) fuel. The 75 μm thick layer is formed inside the 2 mm target capsule at temperatures of approximately 18 K. The ICF target designs require sub-micron smoothness of the THD ice layers. Formation of such layers is still an active research area, requiring a flexible control system capable of executing the evolving layering protocols. This task is performed by the Cryogenic Target Subsystem (CTS) of the NIF Integrated Computer Control System (ICCS). The CTS provides cryogenic temperature control with the 1 mK resolution required for beta layering and for the thermal gradient fill of the capsule. The CTS also includes a 3-axis x-ray radiography engine for phase contrast imaging of the ice layers inside the plastic and beryllium capsules. In addition to automatic control engines, the CTS is integrated with the Matlab interactive programming environment to allow flexibility in experimental layering protocols. The CTS Layering Matlab Toolbox provides the tools for layer image analysis, system characterization and cryogenic control. The CTS Layering Report tool generates qualification metrics of the layers, such as concentricity of the layer and roughness of the growth boundary grooves. The CTS activities are automatically coordinated with other NIF controls in the carefully orchestrated NIF Shot Sequence.
LLNL-CONF-477418
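The closed-loop temperature regulation the CTS performs can be illustrated with a textbook PID loop; the gains, plant model and step count below are invented for illustration and bear no relation to the actual NIF cryostat control:

```python
def pid_step(error, state, kp=2.0, ki=0.5, kd=0.0, dt=1.0):
    """One update of a textbook PID controller; state carries (integral, last_error)."""
    integral, last_error = state
    integral += error * dt
    derivative = (error - last_error) / dt
    output = kp * error + ki * integral + kd * derivative
    return output, (integral, error)

def simulate(setpoint=18.0, t0=20.0, steps=200):
    """Drive a crude, invented thermal plant towards the setpoint temperature (in K)."""
    temp, state = t0, (0.0, 0.0)
    for _ in range(steps):
        power, state = pid_step(setpoint - temp, state)
        # Invented plant model: the control output moves the temperature directly.
        temp += 0.05 * power
    return temp
```

With these toy gains the loop settles at the 18 K setpoint; a real mK-resolution controller would add sensor filtering, actuator limits and anti-windup.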
 
slides icon Slides THCHMUST01 [8.058 MB]  
 
THCHMUST04 Free and Open Source Software at CERN: Integration of Drivers in the Linux Kernel Linux, FPGA, framework, data-acquisition 1248
 
  • J.D. González Cobas, S. Iglesias Gonsálvez, J.H. Lewis, J. Serrano, M. Vanga
    CERN, Geneva, Switzerland
  • E.G. Cota
    Columbia University, NY, USA
  • A. Rubini, F. Vaga
    University of Pavia, Pavia, Italy
 
  We describe the experience acquired during the integration of the tsi148 driver into the main Linux kernel tree. The benefits (and some of the drawbacks) for long-term software maintenance are analysed, the most immediate one being the support and quality review added by an enormous community of skilled developers. Indirect consequences are also analysed, and these are no less important: a serious impact on the style of the development process, the use of cutting-edge tools and technologies supporting development, the adoption of the very strict standards enforced by the Linux kernel community, etc. These elements were also exported to the hardware development process in our section, and we will explain how they were used with a particular example in mind: the development of the FMC family of boards following the Open Hardware philosophy, and how its architecture must fit the Linux model. This delicate interplay of hardware and software architectures is a perfect showcase of the benefits we get from the strategic decision to have our drivers integrated in the kernel. Finally, the case of a whole family of CERN-developed drivers for data acquisition modules, the prospects for their integration in the kernel, and the adoption of a model parallel to Comedi are taken as an example of how this model will perform in the future.
slides icon Slides THCHMUST04 [0.777 MB]  
 
THCHMUST05 The Case for Soft-CPUs in Accelerator Control Systems FPGA, software, hardware, Linux 1252
 
  • W.W. Terpstra
    GSI, Darmstadt, Germany
 
  The steady improvements in Field Programmable Gate Array (FPGA) performance, size, and cost have driven their ever increasing use in science and industry. As FPGA sizes continue to increase, more and more devices and logic are moved from external chips to FPGAs. For simple hardware devices, the savings in board area and ASIC manufacturing setup are compelling. For more dynamic logic, the trade-off is not always as clear. Traditionally, this has been the domain of CPUs and software programming languages. In hardware designs already including an FPGA, it is tempting to remove the CPU and implement all logic in the FPGA, saving component costs and increasing performance. However, that logic must then be implemented in the more constraining hardware description languages, cannot be as easily debugged or traced, and typically requires significant FPGA area. For performance-critical tasks this trade-off can make sense. However, for the myriad slower and dynamic tasks, software programming languages remain the better choice. One great benefit of a CPU is that it can perform many tasks. Thus, by including a small "Soft-CPU" inside the FPGA, all of the slower tasks can be aggregated into a single component. These tasks may then re-use existing software libraries, debugging techniques, and device drivers, while retaining ready access to the FPGA's internals. This paper discusses requirements for using Soft-CPUs in this niche, especially for the FAIR project. Several open-source alternatives will be compared and recommendations made for the best way to leverage a hybrid design.  
slides icon Slides THCHMUST05 [0.446 MB]  
 
THCHMUST06 The FAIR Timing Master: A Discussion of Performance Requirements and Architectures for a High-precision Timing System timing, FPGA, kicker, network 1256
 
  • M. Kreider
    GSI, Darmstadt, Germany
  • M. Kreider
    Hochschule Darmstadt, University of Applied Science, Darmstadt, Germany
 
  Production chains in a particle accelerator are complex structures with many interdependencies and multiple paths to consider. This ranges from system initialisation and synchronisation of numerous machines to interlock handling and appropriate contingency measures like beam-dump scenarios. The FAIR facility will employ White Rabbit, a time-based system which delivers an instruction and a corresponding execution time to a machine. In order to meet the deadlines in any given production chain, instructions need to be sent out ahead of time. For this purpose, code execution and message delivery times need to be known in advance. The FAIR Timing Master needs to be reliably capable of satisfying these timing requirements as well as being fault tolerant. Event sequences of recorded production chains indicate that low reaction times to internal and external events and fast, parallel execution are required. This suggests a slim architecture, especially devised for this purpose. Using the thread model of an OS or other high-level programs on a generic CPU would be counterproductive when trying to achieve deterministic processing times. This paper deals with the analysis of said requirements as well as a comparison of known processor and virtual machine architectures and the possibilities of parallelisation in programmable hardware. In addition, existing proposals at GSI will be checked against these findings. The final goal is to determine the best instruction set for modelling any given production chain and to devise a suitable architecture to execute these models.
slides icon Slides THCHMUST06 [2.757 MB]  
 
THDAULT01 Modern System Architectures in Embedded Systems embedded, FPGA, hardware, software 1260
 
  • T. Korhonen
    PSI, Villigen, Switzerland
 
  Several new technologies are also making their way into embedded systems. In addition to FPGA technology, which has become commonplace, multicore CPUs and I/O virtualization (among others) are being introduced to embedded systems. In our paper we present our ideas and studies on how to take advantage of these features in control systems. Some application examples involving CPU partitioning, virtualized I/O and so on are discussed, along with some benchmarks.
slides icon Slides THDAULT01 [1.426 MB]  
 
THDAUST02 An Erlang-Based Front End Framework for Accelerator Controls framework, interface, data-acquisition, hardware 1264
 
  • D.J. Nicklaus, C.I. Briegel, J.D. Firebaugh, CA. King, R. Neswold, R. Rechenmacher, J. You
    Fermilab, Batavia, USA
 
  We have developed a new front-end framework for the ACNET control system in Erlang. Erlang is a functional programming language developed for real-time telecommunications applications. The primary task of the front-end software is to connect the control system with drivers collecting data from individual field bus devices. Erlang's concurrency and message passing support have proven well-suited for managing large numbers of independent ACNET client requests for front-end data. Other Erlang features which make it particularly well-suited for a front-end framework include fault tolerance with process monitoring and restarting, real-time response, and the ability to change code in running systems. Erlang's interactive shell and dynamic typing make writing and running unit tests an easy part of the development process. Erlang includes mechanisms for distributing applications which we will use for deploying our framework to multiple front-ends, along with a configured set of device drivers. We have developed Erlang code to use Fermilab's TCLK event distribution clock, and Erlang's interface to C/C++ allows hardware-specific driver access.
slides icon Slides THDAUST02 [1.439 MB]  
 
THDAUST03 The FERMI@Elettra Distributed Real-time Framework real-time, Linux, Ethernet, network 1267
 
  • L. Pivetta, G. Gaio, R. Passuello, G. Scalamera
    ELETTRA, Basovizza, Italy
 
  Funding: The work was supported in part by the Italian Ministry of University and Research under grants FIRB-RBAP045JF2 and FIRB-RBAP06AWK3
FERMI@Elettra is a Free Electron Laser (FEL) based on a 1.5 GeV linac. The pulsed operation of the accelerator and the necessity to characterize and control each electron bunch requires synchronous acquisition of the beam diagnostics together with the ability to drive actuators in real time at the linac repetition rate. The Adeos/Xenomai real-time extensions have been adopted in order to add real-time capabilities to the Linux-based control system computers running the Tango software. A software communication protocol based on gigabit Ethernet and known as Network Reflective Memory (NRM) has been developed to implement a shared memory across the whole control system, allowing computers to communicate in real time. The NRM architecture, the real-time performance and the integration in the control system are described.
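The reflective-memory idea — every node applying the same broadcast writes to its local copy so all copies stay identical — can be shown with a toy update packet. The wire layout here (a 32-bit cell offset followed by a 64-bit float) is an assumption for illustration, not the actual NRM protocol:

```python
import struct

# Hypothetical wire format: network-order 32-bit cell offset plus one 64-bit float.
FMT = "!Id"

def encode_update(offset, value):
    """Serialize a single-cell write so it can be broadcast to all nodes."""
    return struct.pack(FMT, offset, value)

def apply_update(memory, packet):
    """Apply a received write to this node's local copy of the shared memory."""
    offset, value = struct.unpack(FMT, packet)
    memory[offset] = value
    return memory
```

Because every node decodes and applies the identical packet, the local copies converge without any request/reply traffic, which is what makes the scheme attractive for deterministic, real-time communication.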
 
slides icon Slides THDAUST03 [0.490 MB]  
 
THDAULT04 Embedded Linux on FPGA Instruments for Control Interface and Remote Management FPGA, Linux, embedded, TANGO 1271
 
  • B.K. Huang, R.M. Myers, R.M. Sharples
    Durham University, Durham, United Kingdom
  • G. Cunningham, G.A. Naylor
    CCFE, Abingdon, Oxon, United Kingdom
  • O. Goudard
    ESRF, Grenoble, France
  • J.J. Harrison
    Merton College, Oxford, United Kingdom
  • R.G.L. Vann
    York University, Heslington, York, United Kingdom
 
  Funding: This work was part-funded by the RCUK Energy Programme under grant EP/I501045 and the European Communities under the contract of Association between EURATOM and CCFE.
FPGAs are now large enough that they can easily accommodate an embedded 32-bit processor which can be used to great advantage. Running embedded Linux gives the user many more options for interfacing to their FPGA-based instrument, and in some cases this enables removal of the intermediary PC. It is now possible to manage the instrument directly from widely used control systems such as EPICS or TANGO. As an example, on MAST (the Mega Amp Spherical Tokamak) at the Culham Centre for Fusion Energy, a new vertical feedback system is under development in which waveform coefficients can be changed between plasma discharges to define the plasma position behaviour. Additionally, it is possible to use the embedded processor to facilitate remote updating of firmware which, in combination with a watchdog and network booting, ensures that full remote management over Ethernet is possible. We also discuss UDP data streaming using embedded Linux, and a web-based control interface running on the embedded processor to interface to the FPGA board.
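UDP data streaming of the kind mentioned above can be sketched with the standard sockets API; the 4-byte big-endian sample encoding is an invented example, not the format used on MAST:

```python
import socket

def stream_samples(samples, addr):
    """Send each integer sample as one UDP datagram, as a soft-core might stream data."""
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for s in samples:
        tx.sendto(s.to_bytes(4, "big"), addr)
    tx.close()

def receive_samples(rx, count):
    """Collect a fixed number of datagrams and decode them back to integers."""
    out = []
    for _ in range(count):
        data, _ = rx.recvfrom(64)
        out.append(int.from_bytes(data, "big"))
    return out
```

UDP trades reliability for low overhead: datagrams may be lost or reordered on a real network, so a production streamer would typically add sequence numbers to each packet.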
 
slides icon Slides THDAULT04 [2.267 MB]  
 
THDAULT05 Embedded LLRF Controller with Channel Access on MicroTCA Backplane Interconnect LLRF, EPICS, embedded, FPGA 1274
 
  • K. Furukawa, K. Akai, T. Kobayashi, S. Michizono, T. Miura, K. Nakanishi, J.-I. Odagiri
    KEK, Ibaraki, Japan
  • H. Deguchi, K. Hayashi, M. Ryoshi
    Mitsubishi Electric TOKKI Systems, Amagasaki, Hyogo, Japan
 
  A low-level RF controller has been developed for the accelerator controls of SuperKEKB, the Superconducting RF Test Facility (STF) and the Compact ERL (cERL) at KEK. The feedback mechanism will be performed on a Virtex-5 FPGA with 16-bit ADCs and DACs. The card was designed as an advanced mezzanine card (AMC) for a MicroTCA shelf. An embedded EPICS IOC on the PowerPC core in the FPGA will provide the global controls through the channel access (CA) protocol on the backplane interconnect of the shelf. No other mechanisms are required for the external linkages. CA is exclusively employed in order to communicate with the central controls and with an embedded IOC on a Linux-based PLC for slow controls.
slides icon Slides THDAULT05 [1.780 MB]  
 
THDAULT06 MARTe Framework: a Middleware for Real-time Applications Development real-time, framework, hardware, Linux 1277
 
  • A. Neto, D. Alves, B. Carvalho, P.J. Carvalho, H. Fernandes, D.F. Valcárcel
    IPFN, Lisbon, Portugal
  • A. Barbalace, G. Manduchi
    Consorzio RFX, Associazione Euratom-ENEA sulla Fusione, Padova, Italy
  • L. Boncagni
    ENEA C.R. Frascati, Frascati (Roma), Italy
  • G. De Tommasi
    CREATE, Napoli, Italy
  • P. McCullen, A.V. Stephen
    CCFE, Abingdon, Oxon, United Kingdom
  • F. Sartori
    F4E, Barcelona, Spain
  • R. Vitelli
    Università di Roma II Tor Vergata, Roma, Italy
  • L. Zabeo
    ITER Organization, St. Paul lez Durance, France
 
  Funding: This work was supported by the European Communities under the contract of Association between EURATOM/IST and was carried out within the framework of the European Fusion Development Agreement
The Multi-threaded Application Real-Time executor (MARTe) is a C++ framework that provides a development environment for the design and deployment of real-time applications, e.g. control systems. The kernel of MARTe comprises a set of data-driven independent blocks, connected using a shared bus. This modular design enforces a clear boundary between algorithms, hardware interaction and system configuration. The architecture, being multi-platform, facilitates the testing and commissioning of new systems, enabling the execution of plant models in offline environments and with the hardware in the loop, whilst also providing a set of non-intrusive introspection and logging facilities. Furthermore, applications can be developed in non-real-time environments and deployed in a real-time operating system, using exactly the same code and configuration data. The framework is already being used in several fusion experiments, with control cycles ranging from 50 microseconds to 10 milliseconds, exhibiting jitters of less than 2%, using VxWorks, RTAI or Linux. Code can also be developed and executed in Microsoft Windows, Solaris and Mac OS X. This paper discusses the main design concepts of MARTe, in particular the architectural choices which enabled the combination of real-time accuracy, performance and robustness with complex and modular data-driven applications.
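The kernel structure described above — independent, data-driven blocks connected through a shared bus — can be mimicked in a few lines; the block names and signal keys here are invented for illustration and are not part of MARTe:

```python
def run_cycle(blocks, bus):
    """Execute each block once per control cycle; all blocks share one data bus."""
    for block in blocks:
        block(bus)
    return bus

# Invented example blocks: acquire a sample, filter it, drive an actuator.
def acquire(bus):
    bus["raw"] = bus.get("raw", 0.0) + 1.0           # stand-in for an ADC read

def lowpass(bus):
    prev = bus.get("filtered", 0.0)
    bus["filtered"] = 0.5 * prev + 0.5 * bus["raw"]  # simple first-order IIR filter

def actuate(bus):
    bus["drive"] = -0.1 * bus["filtered"]            # stand-in for a DAC write
```

The point of the pattern is the boundary it enforces: each block touches only named bus signals, so hardware blocks can be swapped for plant-model blocks without changing the algorithm blocks in between.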
 
slides icon Slides THDAULT06 [1.535 MB]  
 
FRAAUST01 Development of the Machine Protection System for LCLS-I interface, Ethernet, FPGA, network 1281
 
  • J.E. Dusatko, M. Boyes, P. Krejcik, S.R. Norum, J.J. Olsen
    SLAC, Menlo Park, California, USA
 
  Funding: U.S. Department of Energy under Contract Nos. DE-AC02-06CH11357 and DE-AC02-76SF00515
Machine Protection System (MPS) requirements for the Linac Coherent Light Source I demand that fault detection and mitigation occur within one machine pulse (1/120th of a second at full beam rate). The MPS must handle inputs from a variety of sources including loss monitors as well as standard state-type inputs. These sensors exist at various places across the full 2.2 km length of the machine. A new MPS has been developed based on a distributed star network where custom-designed local hardware nodes handle sensor inputs and mitigation outputs for localized regions of the LCLS accelerator complex. These Link-Nodes report status information and receive action commands from a centralized processor running the MPS algorithm over a private network. The individual Link-Node is a 3U chassis with configurable hardware components that can be set up with digital and analog inputs and outputs, depending upon the sensor and actuator requirements. Features include a custom MPS digital input/output subsystem, a private Ethernet interface, an embedded processor, a custom MPS engine implemented in an FPGA and an Industry Pack (IP) bus interface, allowing COTS and custom analog/digital I/O modules to be utilized for MPS functions. These features, while capable of handling standard MPS state-type inputs and outputs, allow other systems like beam loss monitors to be completely integrated within them. To date, four different types of Link-Nodes are in use in LCLS-I. This paper describes the design, construction and implementation of the LCLS MPS with a focus on the Link-Node.
 
slides icon Slides FRAAUST01 [3.573 MB]  
 
FRAAULT02 STUXNET and the Impact on Accelerator Control Systems software, network, hardware, Windows 1285
 
  • S. Lüders
    CERN, Geneva, Switzerland
 
  2010 saw wide news coverage of a new kind of computer attack, named "Stuxnet", targeting control systems. Due to its level of sophistication, it is widely acknowledged that this attack marks the very first case of a cyber-war of one country against the industrial infrastructure of another, although there is still much speculation about the details. Worse yet, experts recognize that Stuxnet might just be the beginning and that similar attacks, possibly with much less sophistication but with much more collateral damage, can be expected in the years to come. Stuxnet was targeting a special model of the Siemens 400 PLC series. Similar modules are also deployed for accelerator controls like the LHC cryogenics or vacuum systems, and for the detector control systems in the LHC experiments. Therefore, the aim of this presentation is to give an insight into what this new attack does and why it is deemed to be special. In particular, the potential impact on accelerator and experiment control systems will be discussed, and means of properly protecting against similar attacks will be presented.
slides icon Slides FRAAULT02 [8.221 MB]  
 
FRAAULT03 Development of the Diamond Light Source PSS in conformance with EN 61508 database, interlocks, radiation, operation 1289
 
  • M.C. Wilson, A.G. Price
    Diamond, Oxfordshire, United Kingdom
 
  Diamond Light Source is constructing a third phase (Phase III) of photon beamlines and experiment stations. Experience gained in the design, realization and operation of the Personnel Safety Systems (PSS) on the first two phases of beamlines is being used to improve the design process for this development. Information on the safety functionality of Phase I and Phase II photon beamlines is maintained in a hazard database. From this, reports are used to assist in the design, verification and validation of the new PSSs. The data is used to make comparisons between beamlines, to validate safety functions and to record documentation for each beamline. This forms part of the documentation process demonstrating conformance to EN 61508.
slides icon Slides FRAAULT03 [0.372 MB]  
 
FRAAULT04 Centralised Coordinated Control to Protect the JET ITER-like Wall. plasma, real-time, diagnostics, operation 1293
 
  • A.V. Stephen, G. Arnoux, T. Budd, P. Card, R.C. Felton, A. Goodyear, J. Harling, D. Kinna, P.J. Lomas, P. McCullen, P.D. Thomas, I.D. Young, K-D. Zastrow
    CCFE, Abingdon, Oxon, United Kingdom
  • D. Alves, D.F. Valcárcel
    IST, Lisboa, Portugal
  • S. Devaux
    MPI/IPP, Garching, Germany
  • S. Jachmich
    RMA, Brussels, Belgium
  • A. Neto
    IPFN, Lisbon, Portugal
 
  Funding: This work was carried out within the framework of the European Fusion Development Agreement. This work was also part-funded by the RCUK Energy Programme under grant EP/I501045.
The JET ITER-like wall project replaces the first wall carbon fibre composite tiles with beryllium and tungsten tiles which should have improved fuel retention characteristics but are less thermally robust. An enhanced protection system using new control and diagnostic systems has been designed which can modify the pre-planned experimental control to protect the new wall. Key design challenges were to extend the Level-1 supervisory control system to allow configurable responses to thermal problems to be defined without introducing excessive complexity, and to integrate the new functionality with existing control and protection systems efficiently and reliably. Alarms are generated by the vessel thermal map (VTM) system if infra-red camera measurements of tile temperatures are too high and by the plasma wall load system (WALLS) if component power limits are exceeded. The design introduces two new concepts: local protection, which inhibits individual heating components but allows the discharge to proceed, and stop responses, which allow highly configurable early termination of the pulse in the safest way for the plasma conditions and type of alarm. These are implemented via the new real-time protection system (RTPS), a centralised controller which responds to the VTM and WALLS alarms by providing override commands to the plasma shape, current, density and heating controllers. This paper describes the design and implementation of the RTPS system which is built with the Multithreaded Application Real-Time executor (MARTe) and will present results from initial operations.
 
slides icon Slides FRAAULT04 [2.276 MB]  
 
FRAAUIO05 High-Integrity Software, Computation and the Scientific Method software, experiment, background, vacuum 1297
 
  • L. Hatton
    Kingston University, Kingston on Thames, United Kingdom
 
  Given the overwhelming use of computation in modern science and the continuing difficulties in quantifying the results of complex computations, it is of increasing importance to understand its role in the essentially Popperian scientific method. There is a growing debate, but this has some distance to run as yet, with journals still divided on what even constitutes repeatability. Computation rightly occupies a central role in modern science. Datasets are enormous and the processing implications of some algorithms are equally staggering. In this paper, some of the problems with computation, for example with respect to specification, implementation, the use of programming languages and the long-term unquantifiable presence of undiscovered defects, will be explored with numerous examples. One of the aims of the paper is to understand the implications of trying to produce high-integrity software and the limitations which still exist. Unfortunately, Computer Science itself suffers from an inability to be suitably critical of its practices and has operated in a largely measurement-free vacuum since its earliest days. Within CS itself, this has not been so damaging, in that it simply leads to unconstrained creativity and a rapid turnover of new technologies. In the applied sciences, however, which have to depend on computational results, such unquantifiability significantly undermines trust. It is time this particular demon was put to rest.
slides icon Slides FRAAUIO05 [0.710 MB]  
 
FRBHAULT01 Feed-forward in the LHC feedback, software, real-time, database 1302
 
  • M. Pereira, X. Buffat, K. Fuchsberger, M. Lamont, G.J. Müller, S. Redaelli, R.J. Steinhagen, J. Wenninger
    CERN, Geneva, Switzerland
 
  The LHC operational cycle comprises several phases such as the ramp, the squeeze and stable beams. During the ramp and squeeze in particular, it has been observed that the behaviour of key LHC beam parameters such as tune, orbit and chromaticity is highly reproducible from fill to fill. To reduce the reliance on the crucial feedback systems, it was decided to perform fill-to-fill feed-forward corrections. The LHC feed-forward application was developed to ease the introduction of corrections to the operational settings. It retrieves the feedback system's corrections from the logging database and applies appropriate corrections to the ramp and squeeze settings. The LHC Feed-Forward software has been used during LHC commissioning, and tune and orbit corrections during the ramp have been successfully applied. As a result, the required real-time corrections for the above parameters have been reduced to a minimum.
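The fill-to-fill feed-forward scheme amounts to folding the trims recorded by the feedback systems during one fill into the settings functions of the next, so that the real-time feedback starts from a smaller residual. A toy version, with invented settings and trim values:

```python
def feed_forward(settings, logged_trims):
    """Add the feedback trims logged during the previous fill to the pre-set
    ramp/squeeze functions, keyed here by time point for illustration."""
    return {t: settings[t] + logged_trims.get(t, 0.0) for t in settings}
```

Time points with no logged trim simply keep their original setting, so a partially logged fill still produces a usable corrected function.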
slides icon Slides FRBHAULT01 [0.961 MB]  
 
FRBHAULT03 Beam-based Feedback for the Linac Coherent Light Source feedback, network, timing, linac 1310
 
  • D. Fairley, K.H. Kim, K. Luchini, P. Natampalli, L. Piccoli, D. Rogind, T. Straumann
    SLAC, Menlo Park, California, USA
 
  Funding: Work supported by the U. S. Department of Energy Contract DE-AC02-76SF00515
Beam-based feedback control loops are required by the Linac Coherent Light Source (LCLS) program in order to provide fast, single-pulse stabilization of beam parameters. Eight transverse feedback loops, a 6x6 longitudinal feedback loop, and a loop to maintain the electron bunch charge were successfully commissioned for the LCLS, and have been maintaining stability of the LCLS electron beam at beam rates up to 120 Hz. In order to run the feedback loops at beam rate, the loops were implemented in EPICS IOCs with a dedicated Ethernet multicast network. This paper will discuss the design, configuration and commissioning of the beam-based Fast Feedback System for LCLS. Topics include algorithms for 120 Hz feedback, multicast network performance, actuator and sensor performance for single-pulse control and sensor readback, and feedback configuration and runtime control.
 
slides icon Slides FRBHAULT03 [1.918 MB]  
 
FRBHAULT04 Commissioning of the FERMI@Elettra Fast Trajectory Feedback feedback, real-time, linac, Ethernet 1314
 
  • G. Gaio, M. Lonza, R. Passuello, L. Pivetta, G. Strangolino
    ELETTRA, Basovizza, Italy
 
  Funding: The work was supported in part by the Italian Ministry of University and Research under grants FIRB-RBAP045JF2 and FIRB-RBAP06AWK3
FERMI@Elettra is a new 4th-generation light source based on a single-pass Free Electron Laser (FEL). In order to ensure the feasibility of the free electron lasing and the quality of the produced photon beam, a high degree of stability is required for the main parameters of the electron beam. For this reason a flexible real-time feedback framework integrated in the control system has been developed. The first implemented bunch-by-bunch feedback loop controls the beam trajectory. The measurements of the beam position and the corrector magnet settings are synchronized to the 50 Hz linac repetition rate by means of the real-time framework. The feedback system implementation, the control algorithms and preliminary closed-loop results are presented.
 
slides icon Slides FRBHAULT04 [2.864 MB]  
 
FRBHMUST01 The Design of the Alba Control System: A Cost-Effective Distributed Hardware and Software Architecture. TANGO, database, software, interface 1318
 
  • D.F.C. Fernández-Carreiras, D.B. Beltrán, T.M. Coutinho, G. Cuní, J. Klora, O. Matilla, R. Montaño, C. Pascual-Izarra, S. Pusó, R. Ranz, A. Rubio, S. Rubio-Manrique
    CELLS-ALBA Synchrotron, Cerdanyola del Vallès, Spain
 
  The control system of Alba is highly distributed from both hardware and software points of view. The hardware infrastructure for the control system includes on the order of 350 racks, 20,000 cables and 6,200 devices. More than 150 diskless industrial computers distributed in the service area, and 30 multicore servers in the data center, manage several thousand process variables. The software is, of course, as distributed as the hardware. It is also a success story of the Tango Collaboration, where a complete software infrastructure is available "off the shelf". In addition, Tango has been productively complemented with the powerful Sardana framework, a great effort in terms of development from which several institutes nowadays benefit. The whole installation has been coordinated from the beginning with a complete cabling and equipment database, where all the equipment, cables and connectors are described and inventoried. The so-called "cabling database" is the core of the installation: the equipment and cables are defined there. The basic configurations of the hardware, like MAC and IP addresses, DNS names, etc., are also gathered in this database, allowing the network communication files and the declarations of variables in the PLCs to be created automatically. This paper explains the design and the architecture of the control system, describes the tools and justifies the choices made. Furthermore, it presents and analyzes figures regarding cost and performance.
slides icon Slides FRBHMUST01 [4.616 MB]  
 
FRBHMUST02 Towards High Performance Processing in Modern Java Based Control Systems monitoring, software, real-time, distributed 1322
 
  • M. Misiowiec, W. Buczak, M. Buttner
    CERN, Geneva, Switzerland
 
  CERN controls software is often developed on a Java foundation. Some systems carry out a combination of data-, network- and processor-intensive tasks within strict time limits. Hence, there is a demand for high-performing, quasi-real-time solutions. Extensive prototyping of the new CERN monitoring and alarm software required us to address such expectations. The system must handle tens of thousands of data samples every second along its three tiers, applying complex computations throughout. To accomplish the goal, a deep understanding of multithreading, memory management and interprocess communication was required. Unexpected traps hide behind the excessive use of 64-bit memory, or behind the severe impact of modern garbage collectors on the processing flow, including the state-of-the-art Oracle Garbage-First collector. Tuning the JVM configuration significantly affects the execution of the code. Even more important are the number of threads and the data structures shared between them. Accurately dividing work into independent tasks can boost system performance. Thorough profiling with dedicated tools helped us understand the bottlenecks and choose algorithmically optimal solutions. Different virtual machines were tested, in a variety of setups and garbage-collection options. The overall work revealed the actual hard limits of the whole setup. We present this process of architecting a challenging system in view of the characteristics and limitations of the contemporary Java runtime environment.
http://cern.ch/marekm/icalepcs.html
 
slides icon Slides FRBHMUST02 [4.514 MB]  
 
FRBHMUST03 Thirty Meter Telescope Observatory Software Architecture software, hardware, operation, software-architecture 1326
 
  • K.K. Gillies, C. Boyer
    TMT, Pasadena, California, USA
 
  The Thirty Meter Telescope (TMT) will be a ground-based, 30-m optical-IR telescope with a highly segmented primary mirror located on the summit of Mauna Kea in Hawaii. The TMT Observatory Software (OSW) system will deliver the software applications and infrastructure necessary to integrate all TMT software into a single system and implement a minimal end-to-end science operations system. At the telescope, OSW is focused on the task of integrating and efficiently controlling and coordinating the telescope, adaptive optics, science instruments, and their subsystems during observation execution. From the software architecture viewpoint, the software system is viewed as a set of software components distributed across many machines that are integrated using a shared software base and a set of services that provide communications and other needed functionality. This paper describes the current state of the TMT Observatory Software focusing on its unique requirements, architecture, and the use of middleware technologies and solutions that enable the OSW design.  
slides icon Slides FRBHMUST03 [3.788 MB]  
 
FRBHMULT04 Towards a State Based Control Architecture for Large Telescopes: Laying a Foundation at the VLT software, distributed, operation, interface 1330
 
  • R. Karban, N. Kornweibel
    ESO, Garching bei Muenchen, Germany
  • D.L. Dvorak, M.D. Ingham, D.A. Wagner
    JPL, Pasadena, California, USA
 
  Large telescopes are characterized by a high degree of distribution of control-related tasks and will feature diverse data-flow patterns and large ranges of sampling frequencies; there will often be no single, fixed server-client relationship between the control tasks. The architecture is also challenged by the task of integrating heterogeneous subsystems delivered by multiple different contractors. Due to the high number of distributed components, the control system needs to detect errors and faults effectively, impede their propagation, and mitigate them accurately in the shortest time possible, enabling service to be restored. The presented Data-Driven Architecture is based on a decentralized approach with an end-to-end integration of disparate, independently developed software components, using a high-performance, standards-based communication middleware infrastructure based on the Data Distribution Service. A set of rules and principles, based on JPL's State Analysis method and architecture, is established to avoid undisciplined component-to-component interactions, with the Control System and the System Under Control clearly separated. State Analysis provides a model-based process for capturing system and software requirements and design, helping reduce the gap between the requirements on software specified by systems engineers and the implementation by software engineers. The method and architecture have been field-tested at the Very Large Telescope, where they have been integrated into an operational system with minimal downtime.  
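The State Analysis separation described above — the Control System holds state knowledge, estimators update it from measurements only, and controllers act on it only via commands — can be sketched schematically. This is an illustrative Python sketch under invented names (`StateVariable`, `Estimator`, `Controller` are hypothetical), not code from the VLT or JPL software:

```python
# Sketch of the State Analysis separation: the Control System holds state
# variables, estimators and controllers; the System Under Control is
# touched only through measurements in and commands out.

class StateVariable:
    # The Control System's knowledge of one physical quantity.
    def __init__(self):
        self.estimate = None  # current belief about the state
        self.goal = None      # desired value of the state

class Estimator:
    # Updates state knowledge from measurement evidence only.
    def update(self, sv, measurement):
        sv.estimate = measurement  # trivial estimator: trust the sensor

class Controller:
    # Issues commands based solely on state knowledge vs. the goal,
    # never by talking to another component directly.
    def command(self, sv):
        if sv.estimate is None or sv.goal is None:
            return None  # no disciplined action without state knowledge
        error = sv.goal - sv.estimate
        if error > 0:
            return "raise"
        return "lower" if error < 0 else "hold"
```

Routing all interactions through the state variable is what rules out the undisciplined component-to-component coupling the abstract warns against.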
slides icon Slides FRBHMULT04 [3.504 MB]  
 
FRBHMULT05 Middleware Trends and Market Leaders 2011 CORBA, Windows, network, Linux 1334
 
  • A. Dworak, P. Charrue, F. Ehm, W. Sliwinski, M. Sobczak
    CERN, Geneva, Switzerland
 
  The Controls Middleware (CMW) project was launched over ten years ago. Its main goal was to unify the middleware solutions used to operate CERN accelerators. An important part of the project, the equipment access library RDA, was based on CORBA, an unquestioned standard at the time. RDA became an operational and critical part of the infrastructure, yet the demanding run-time environment revealed some shortcomings of the system. Accumulation of fixes and workarounds led to unnecessary complexity, and RDA became difficult to maintain and to extend. CORBA proved to be more a cumbersome product than a panacea. Fortunately, many new transport frameworks have appeared since then. They boast a better design and support concepts that make them easy to use. To profit from the new libraries, the CMW team updated the user requirements and investigated possible CORBA substitutes against them. The process consisted of several phases: a review of middleware solutions belonging to different categories (e.g. data-centric, object-oriented, and message-oriented) and their applicability to the communication model of RDA; evaluation of several market-recognized products and promising start-ups; prototyping of typical communication scenarios; testing the libraries against exceptional situations and errors; and verifying that mandatory performance constraints were met. Based on this investigation, the team selected a few libraries that suit their needs better than CORBA. Further prototyping will select the best candidate.  
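One of the evaluation phases named above, testing libraries against exceptional situations, can be illustrated with a minimal scenario: a request that must fail gracefully rather than hang when the peer is unreachable or silent. This is a hedged Python sketch using plain TCP sockets as a stand-in for a candidate middleware library; `request_with_timeout` is a hypothetical helper, not part of RDA or CMW:

```python
import socket

def request_with_timeout(host, port, payload, timeout=0.5):
    # One evaluation scenario: the request must fail gracefully (no hang,
    # no crash) when the peer refuses the connection or never replies.
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.settimeout(timeout)
            s.sendall(payload)
            return s.recv(1024)
    except OSError:  # covers refused connections and timeouts alike
        return None
```

Bounding every blocking call with a timeout and mapping failures to a well-defined return value is the kind of run-time robustness the demanding CERN environment exposed as missing.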
slides icon Slides FRBHMULT05 [8.508 MB]  
 
FRBHMULT06 EPICS V4 Expands Support to Physics Applications, Data Acquisition, and Data Analysis EPICS, data-acquisition, database, interface 1338
 
  • L.R. Dalesio, G. Carcassi, M.A. Davidsaver, M.R. Kraimer, R. Lange, N. Malitsky, G. Shen
    BNL, Upton, Long Island, New York, USA
  • T. Korhonen
    Paul Scherrer Institut, 5232 Villigen PSI, Switzerland
  • J. Rowland
    Diamond, Oxfordshire, United Kingdom
  • M. Sekoranja
    Cosylab, Ljubljana, Slovenia
  • G.R. White
    SLAC, Menlo Park, California, USA
 
  Funding: Work supported under auspices of the U.S. Department of Energy under Contract No. DE-AC02-98CH10886 with Brookhaven Science Associates, LLC, and in part by the DOE Contract DE-AC02-76SF00515
EPICS version 4 extends the functionality of version 3 by providing the ability to define, transport, and introspect composite data types. Version 3 provided a set of process variables and a data protocol that adequately defined scalar data along with an atomic set of attributes. While remaining backward compatible, Version 4 is able to easily expand this set with a data protocol capable of exchanging complex data types and parameterized data requests. Additionally, a group of engineers defined reference types for some applications in this environment. The goal of this work is to define a narrow interface with the minimal set of data types needed to support a distributed architecture for physics applications, data acquisition, and data analysis.
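The ability to define and introspect composite data types, the key extension described above, can be illustrated schematically. The following Python sketch models a structured value and its introspection; `NTScalar` here is a toy dataclass loosely inspired by the V4 normative types, not the actual pvData API:

```python
from dataclasses import dataclass, fields

# Toy model of a structured process-variable value; illustrative only,
# NOT the real pvData/pvAccess interface.
@dataclass
class NTScalar:
    value: float
    units: str
    timestamp: float

def introspect(pv):
    # Recover field names and types from the composite value itself,
    # analogous to introspecting a V4 structure received over the wire.
    return {f.name: f.type.__name__ for f in fields(pv)}

reading = NTScalar(value=3.14, units="mm", timestamp=0.0)
# introspect(reading) → {'value': 'float', 'units': 'str', 'timestamp': 'float'}
```

The contrast with V3 is that a scalar plus its fixed attribute set was the whole vocabulary; a self-describing structure lets clients discover fields they were not compiled against.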
 
slides icon Slides FRBHMULT06 [0.188 MB]  
 
FRCAUST02 Status of the CSNS Controls System interface, linac, power-supply, Ethernet 1341
 
  • C.H. Wang
    IHEP Beijing, Beijing, People's Republic of China
 
  The China Spallation Neutron Source (CSNS) is planned to start construction in 2011 in China. The CSNS control system will use EPICS as its development platform. The scope of the control system covers thousands of devices located in the Linac, the RCS and two transfer lines. The interface from the control system to the equipment will be through VME PowerPC processors and embedded PLCs as well as embedded IPCs. The high-level applications will be built on the XAL core and the Eclipse platform. An Oracle database is used to save historical data. This paper introduces the preliminary design of the control system and its progress. Some key technologies, prototypes, the schedule and the personnel plan are also discussed.  
slides icon Slides FRCAUST02 [3.676 MB]  
 
FRCAUST03 Status of the ESS Control System database, hardware, EPICS, software 1345
 
  • G. Trahern
    ESS, Lund, Sweden
 
  The European Spallation Source (ESS) is a high-current proton LINAC to be built in Lund, Sweden. The LINAC delivers 5 MW of power to the target at 2500 MeV, with a nominal current of 50 mA. It is designed to allow an upgrade to a higher power of 7.5 MW at a fixed energy of 2500 MeV. The Accelerator Design Update (ADU) collaboration of mainly European institutions will deliver a Technical Design Report at the end of 2012. First protons are expected in 2018, and first neutrons in 2019. The ESS will be constructed by a number of geographically dispersed institutions, which means that a considerable part of control system integration will potentially be performed off-site. To mitigate this organizational risk, significant effort will be put into standardization of hardware, software, and development procedures early in the project. We have named the main result of this standardization the Control Box concept. The ESS will use EPICS and will build on the positive distributed-development experiences of SNS and ITER. The current state of the control system design and key decisions are presented in the paper, as well as immediate challenges and proposed solutions.
From PAC 2011 article
http://eval.esss.lu.se/cgi-bin/public/DocDB/ShowDocument?docid=45
From IPAC 2010 article
http://eval.esss.lu.se/cgi-bin/public/DocDB/ShowDocument?docid=26
 
slides icon Slides FRCAUST03 [1.944 MB]  
 
FRCAUST04 Status of the ASKAP Monitoring and Control System software, EPICS, hardware, monitoring 1349
 
  • J.C. Guzman
    CSIRO ATNF, NSW, Australia
 
  The Australian Square Kilometre Array Pathfinder, or ASKAP, is CSIRO’s new radio telescope currently under construction at the Murchison Radio-astronomy Observatory (MRO) in the Mid West region of Western Australia. As well as being a world-leading telescope in its own right, ASKAP will be an important testbed for the Square Kilometre Array, a future international radio telescope that will be the world’s largest and most sensitive. This paper gives a status update of the ASKAP project and provides a detailed look at the initial deployment of the monitoring and control system, as well as major issues to be addressed in future software releases before the start of system commissioning later this year.  
slides icon Slides FRCAUST04 [3.414 MB]