Keyword: network
Paper Title Other Keywords Page
MOBAUST01 News from ITER Controls - A Status Report controls, EPICS, real-time, software 1
 
  • A. Wallander, L. Abadie, F. Di Maio, B. Evrard, J-M. Fourneron, H.K. Gulati, C. Hansalia, J.Y. Journeaux, C.S. Kim, W.-D. Klotz, K. Mahajan, P. Makijarvi, Y. Matsumoto, S. Pande, S. Simrock, D. Stepanov, N. Utzel, A. Vergara-Fernandez, A. Winter, I. Yonekawa
    ITER Organization, St. Paul lez Durance, France
 
  Construction of ITER has started at the Cadarache site in southern France. The first buildings are taking shape and more than 60% of the in-kind procurement has been committed by the seven ITER member states (China, Europe, India, Japan, Korea, Russia and United States). The design and manufacturing of the main components of the machine are now underway all over the world. Each of these components comes with a local control system, which must be integrated into the central control system. The control group at ITER has developed two products to facilitate this: the plant control design handbook (PCDH) and the control, data access and communication (CODAC) core system. PCDH is a document which prescribes the technologies and methods to be used in developing the local control systems and sets the rules applicable to the in-kind procurements. CODAC core system is a software package, distributed to all in-kind procurement developers, which implements the PCDH and facilitates the compliance of the local control systems. In parallel, the ITER control group is proceeding with the design of the central control system to allow fully integrated and automated operation of ITER. In this paper we report on design progress and technology choices and discuss the justifications for those choices. We also report on the results of some pilot projects aimed at validating the design and technologies.
slides icon Slides MOBAUST01 [4.238 MB]  
 
MOMAU002 Improving Data Retrieval Rates Using Remote Data Servers software, hardware, database, controls 40
 
  • T. D'Ottavio, B. Frak, J. Morris, S. Nemesure
    BNL, Upton, Long Island, New York, USA
 
  Funding: Work performed under the auspices of the U.S. Department of Energy
The power and scope of modern control systems have led to an increased amount of data being collected and stored, including data collected at high (kHz) frequencies. One consequence is that users now routinely make data requests that can cause gigabytes of data to be read and displayed. Given that a user's patience can be measured in seconds, this can be quite a technical challenge. This paper explores one possible solution to this problem - the creation of remote data servers whose performance is optimized to handle context-sensitive data requests. Methods for increasing data delivery performance include the use of high-speed network connections between the stored data and the data servers, smart caching of frequently used data, and the culling of the data delivered as determined by the context of the data request. This paper describes the decisions made when constructing these servers and compares data retrieval performance for clients that use or do not use an intermediate data server.
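As a rough, hypothetical illustration of the caching and culling ideas mentioned above (the channel name, in-memory storage layout and one-sample-per-pixel rule are invented for the sketch, not the BNL design), a context-sensitive request could be reduced server-side before transmission:

```python
import functools

# Hypothetical in-memory archive: list of (timestamp, value) pairs, sorted by time.
ARCHIVE = {"demo:signal": [(t, float(t % 60)) for t in range(0, 86_400)]}

@functools.lru_cache(maxsize=128)          # smart caching of frequently repeated requests
def fetch(channel, t_start, t_end, pixels):
    """Return at most roughly `pixels` samples for the requested time window."""
    data = [s for s in ARCHIVE[channel] if t_start <= s[0] < t_end]
    stride = max(1, len(data) // pixels)   # cull: about one sample per display pixel
    return tuple(data[::stride])

# A plotting client asks for a full day of 1 Hz data but only has 1000 pixels to show it.
samples = fetch("demo:signal", 0, 86_400, 1000)
print(len(samples))                        # ~1000 points returned instead of 86400
```

In practice the cache key would also encode the reduction method (for example min/max envelope versus averaging) so that different client contexts do not collide.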
 
slides icon Slides MOMAU002 [0.085 MB]  
poster icon Poster MOMAU002 [1.077 MB]  
 
MOMAU005 Integrated Approach to the Development of the ITER Control System Configuration Data database, controls, software, status 52
 
  • D. Stepanov, L. Abadie
    ITER Organization, St. Paul lez Durance, France
  • J. Bertin, G. Bourguignon, G. Darcourt
    Sopra Group, Aix-en-Provence, France
  • O. Liotard
    TCS France, Puteaux, France
 
  The ITER control system (CODAC) is steadily moving into the implementation phase. A design guidelines handbook and a software development toolkit, named CODAC Core System, were produced in February 2011. They are ready to be used off-site, in the ITER domestic agencies and associated industries, in order to develop the first control "islands" of various ITER plant systems. In addition to the work done off-site, there is a wealth of I&C-related data developed centrally at ITER, but scattered through various sources. These data include I&C design diagrams, 3-D data, volume allocation, inventory control, administrative data, planning and scheduling, tracking of deliveries and associated documentation, requirements control, etc. All these data have to be kept coherent and up-to-date, with various types of cross-checks and procedures imposed on them. A "plant system profile" database, currently under development at ITER, represents an effort to provide an integrated view of the I&C data. Supported by platform-independent data modeling, done with the help of XML Schema, it accumulates all the data in a single hierarchy and provides different views for different aspects of the I&C data. The database is implemented using MS SQL Server and a Java-based web interface. Import and data linking services are implemented using Talend software, and report generation is done with the help of MS SQL Server Reporting Services. This paper will report on the first implementation of the database, the kind of data stored so far, typical workflows and processes, and directions of further work.
slides icon Slides MOMAU005 [0.384 MB]  
poster icon Poster MOMAU005 [0.692 MB]  
 
MOMMU002 NFC Like Wireless Technology for Monitoring Purposes in Scientific/Industrial Facilities controls, EPICS, monitoring, vacuum 66
 
  • I. Badillo, M. Eguiraun
    ESS-Bilbao, Zamudio, Spain
  • J. Jugo
    University of the Basque Country, Faculty of Science and Technology, Bilbao, Spain
 
  Funding: The present work is supported by the Basque Government and Spanish Ministry of Science and Innovation.
Wireless technologies are becoming more and more common in large industrial and scientific facilities such as particle accelerators, facilitating monitoring and sensing in these kinds of large environments. Cabled equipment means little flexibility in placement and is very expensive in both money and effort whenever reorganization or a new installation is needed. So, when cabling is not really needed for performance reasons, wireless monitoring and control is a good option due to its speed of implementation. There are several wireless flavors to choose from, such as Bluetooth, ZigBee, WiFi, etc., depending on the requirements of each specific application. In this work a wireless monitoring system for EPICS (Experimental Physics and Industrial Control System) is presented, where the desired control system variables are acquired over the network and published on a mobile device, allowing the operator to check process variables wherever the signal reaches. In this approach, a Python-based server continuously acquires EPICS Process Variables via the Channel Access protocol and sends them through a standard 802.11 WiFi network using ICE middleware. ICE is a toolkit for building distributed applications. Finally, the mobile device reads the data and shows it to the operator. The security of the communication can be assured by means of a weak wireless signal, following the same idea as in NFC but over larger distances. With this approach, local monitoring and control applications, for example a vacuum control system for several pumps, are easily implemented.
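A minimal sketch of the acquisition side of such a server is shown below; it assumes the third-party pyepics package for Channel Access and, for brevity, replaces the ICE-based transport described in the paper with a plain JSON-over-UDP publish (the PV names and the endpoint address are placeholders, not the ESS-Bilbao setup):

```python
import json
import socket
import time

from epics import PV   # pyepics provides Channel Access access from Python

# Placeholder PV names for a small vacuum system.
PV_NAMES = ["VAC:PUMP1:PRESSURE", "VAC:PUMP1:STATUS"]
pvs = {name: PV(name) for name in PV_NAMES}

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
MOBILE_ENDPOINT = ("192.168.0.50", 9999)   # WiFi-reachable mobile client (assumed)

while True:
    # Read the current values over Channel Access and publish them as one JSON datagram.
    snapshot = {name: pv.get() for name, pv in pvs.items()}
    sock.sendto(json.dumps(snapshot).encode(), MOBILE_ENDPOINT)
    time.sleep(1.0)   # publish once per second
```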
 
slides icon Slides MOMMU002 [0.309 MB]  
poster icon Poster MOMMU002 [7.243 MB]  
 
MOMMU009 Upgrade of the Server Architecture for the Accelerator Control System at the Heidelberg Ion Therapy Center database, controls, ion, proton 78
 
  • J.M. Mosthaf, Th. Haberer, S. Hanke, K. Höppner, A. Peters, S. Stumpf
    HIT, Heidelberg, Germany
 
  The Heidelberg Ion Therapy Center (HIT) is a heavy ion accelerator facility located at the Heidelberg university hospital and intended for cancer treatment with heavy ions and protons. It provides three treatment rooms for therapy, of which the two using horizontal beam nozzles are in use, while the unique gantry with a 360° rotating beam port is currently under commissioning. The proprietary accelerator control system runs on several classical server machines, including a main control server, a database server running Oracle, a device settings modeling (DSM) server and several gateway servers for auxiliary system control. As the load on some of the main systems, especially the database and DSM servers, has become very high in terms of CPU and I/O load, a change to a more up-to-date blade server enclosure with four redundant blades and a 10 Gbit internal network architecture has been decided. For budgetary reasons, this enclosure will at first only replace the main control, database and DSM servers and consolidate some of the services now running on auxiliary servers. The internal configurable network will improve the communication between the servers and the database. As all blades in the enclosure are configured identically, one dedicated spare blade is used to provide redundancy in case of hardware failure. Additionally, we plan to use virtualization software to further improve redundancy, consolidate the services running on gateways, and make dynamic load balancing available to account for different performance needs, e.g. in commissioning or therapy use of the accelerator.
slides icon Slides MOMMU009 [0.233 MB]  
poster icon Poster MOMMU009 [1.132 MB]  
 
MOPKN018 Computing Architecture of the ALICE Detector Control System controls, detector, monitoring, interface 134
 
  • P. Rosinský, A. Augustinus, P.Ch. Chochula, L.S. Jirdén, M. Lechman
    CERN, Geneva, Switzerland
  • G. De Cataldo
    INFN-Bari, Bari, Italy
  • A.N. Kurepin
    RAS/INR, Moscow, Russia
  • A. Moreno
    Universidad Politécnica de Madrid, E.T.S.I Industriales, Madrid, Spain
  • O. Pinazza
    INFN-Bologna, Bologna, Italy
 
  The ALICE Detector Control System (DCS) is based on a commercial SCADA product, running on a large Windows computer cluster. It communicates with about 1200 network-attached devices to ensure safe and stable operation of the experiment. In the presentation we focus on the design of the ALICE DCS computer systems. We describe the management of data flow, the mechanisms for handling the large data amounts, and the information exchange with external systems. One of the key operational requirements is an intuitive, error-proof and robust user interface allowing for simple operation of the experiment. At the same time, typical operator tasks, like trending or routine checks of the devices, must be decoupled from the automated operation in order to prevent overload of critical parts of the system. All these requirements must be implemented in an environment with strict security requirements. In the presentation we explain how these demands affected the architecture of the ALICE DCS.
 
MOPKN025 Integrating the EPICS IOC Log into the CSS Message Log EPICS, database, controls, monitoring 151
 
  • K.-U. Kasemir, E. Danilova
    ORNL, Oak Ridge, Tennessee, USA
 
  Funding: SNS is managed by UT-Battelle, LLC, under contract DE-AC05-00OR22725 for the U.S. Department of Energy
The Experimental Physics and Industrial Control System (EPICS) includes the "IOCLogServer", a tool that logs error messages from front-end computers (Input/Output Controllers, IOCs) into a set of text files. Control System Studio (CSS) includes a distributed message logging system with relational database persistence and various log analysis tools. We implemented a log server that forwards IOC messages to the CSS log database, allowing several ways of monitoring and analyzing the IOC error messages.
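The abstract does not spell out the forwarding mechanism, so the following is only a minimal sketch of the general idea, not the ORNL implementation: an assumed TCP listener on the conventional iocLogServer port, with sqlite3 standing in for the CSS relational message log.

```python
import socketserver
import sqlite3

# sqlite3 stands in for the CSS message-log RDB; port 7004 is a conventional
# iocLogServer port. Both are assumptions for this sketch.
db = sqlite3.connect("css_message_log.db", check_same_thread=False)
db.execute("CREATE TABLE IF NOT EXISTS message (host TEXT, text TEXT)")

class IOCLogHandler(socketserver.StreamRequestHandler):
    """Accept an IOC log client connection and store each text line it sends."""
    def handle(self):
        host = self.client_address[0]
        for raw in self.rfile:                      # one log message per line
            text = raw.decode(errors="replace").rstrip()
            if text:
                db.execute("INSERT INTO message (host, text) VALUES (?, ?)",
                           (host, text))
                db.commit()

if __name__ == "__main__":
    with socketserver.ThreadingTCPServer(("", 7004), IOCLogHandler) as server:
        server.serve_forever()
```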
 
poster icon Poster MOPKN025 [4.006 MB]  
 
MOPKS012 Design and Test of a Girder Control System at NSRRC controls, laser, storage-ring, interface 183
 
  • H.S. Wang, J.-R. Chen, M. L. Chen, K.H. Hsu, W.Y. Lai, S.Y. Perng, Y.L. Tsai, T.C. Tseng
    NSRRC, Hsinchu, Taiwan
 
  A girder control system is proposed to quickly and precisely adjust the displacement and rotation angle of all girders in the storage ring with little manpower at the Taiwan Photon Source (TPS) project at the National Synchrotron Radiation Research Center (NSRRC). In this girder control system, six motorized cam movers supporting a girder are driven on three pedestals to perform six-axis adjustments of the girder. A tiltmeter monitors the pitch and roll of each girder; several touch sensors measure the relative displacement between consecutive girders. Moreover, a laser position sensitive detector (PSD) system measuring the relative displacement between straight-section girders is included in this girder control system. Operators can use subroutines developed in MATLAB to control every local girder control system via the intranet. This paper presents details of the design and tests of the girder control system.
 
MOPKS014 Architecture and Control of the Fast Orbit Correction for the ESRF Storage Ring FPGA, storage-ring, controls, device-server 189
 
  • F. Epaud, J.M. Koch, E. Plouviez
    ESRF, Grenoble, France
 
  Two years ago, the electronics of all 224 Beam Position Monitors (BPMs) of the ESRF storage ring were replaced by commercial Libera Brilliance units to drastically improve the speed and position resolution of the orbit measurement. Also, at the start of this year, all 96 power supplies that drive the orbit steerers were replaced by new units that now cover a full DC to AC range up to 200 Hz. We are now working on the replacement of the previous Fast Orbit Correction system. This new architecture will also use the 224 Libera Brilliance units and in particular the 10 kHz optical links handled by the Diamond Communication Controller (DCC), which has now been integrated into the Libera FPGA as a standard option. The 224 Liberas are connected together with the optical links to form a redundant network where the data are broadcast and received by all nodes within 40 μs. The four correction stations will be based on FPGA cards (two per station), also connected to the FOFB network as additional nodes; they use the same DCC firmware on one side and are connected to the steerer power supplies using the RS485 electrical standard on the other side. Finally, two extra nodes have been added to collect data for diagnostics and to provide BPM positions to the beamlines at a high rate. This paper will present the network architecture and the control software to operate this new equipment.
poster icon Poster MOPKS014 [3.242 MB]  
 
MOPMN001 Beam Sharing between the Therapy and a Secondary User controls, cyclotron, interface, proton 231
 
  • K.J. Gajewski
    TSL, Uppsala, Sweden
 
  The 180 MeV proton beam from the cyclotron at The Svedberg Laboratory is primarily used for patient treatment. Because the proton beam is needed only during a small fraction of the time scheduled for treatment, it is possible to divert the beam to another location to be used by a secondary user. The therapy staff (the primary user) control the beam switching process after an initial set-up, which is done by the cyclotron operator. They have an interface that allows them to control the accelerator and the beam line in all aspects needed for performing the treatment. The cyclotron operator is involved only if a problem occurs. The secondary user has its own interface that allows limited access to the accelerator's control system. Using this interface it is possible to start and stop the beam when it is not used for therapy, grant access to the experimental hall and monitor the beam properties. The tools and procedures for beam sharing between the primary and the secondary user are presented in the paper.
poster icon Poster MOPMN001 [0.924 MB]  
 
MOPMN010 Development of a Surveillance System with Motion Detection and Self-location Capability radiation, status, survey, controls 257
 
  • M. Tanigaki, S. Fukutani, Y. Hirai, H. Kawabe, Y. Kobayashi, Y. Kuriyama, M. Miyabe, Y. Morimoto, T. Sano, N. Sato, K. Takamiya
    KURRI, Osaka, Japan
 
  A surveillance system with motion detection and location measurement capabilities has been under development to help with effective security control of facilities at our institute. The surveillance cameras and sensors placed around the facilities and the institute have the primary responsibility for preventing unwanted access to our institute, but there are cases where additional temporary surveillance cameras are used for subsidiary purposes. The problems with these additional surveillance cameras are the detection of such unwanted access and the determination of the cameras' respective locations. To eliminate these problems, we are constructing a surveillance camera system with motion detection and self-locating features based on a server-client scheme. A client, consisting of a network camera and Wi-Fi and GPS modules, acquires its location, measured by GPS or from the radio waves of surrounding Wi-Fi access points, then sends its location to a remote server along with the motion picture over the network. The server analyzes this information to detect unwanted access and serves the status or alerts on a web-based interactive map for easy access to the information. We report the current status of the development and expected applications of such a self-locating system beyond this surveillance system.
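As an illustration only (the server URL, field names and the use of HTTP are assumptions, not the KURRI protocol), a client report combining the measured position and the latest camera frame could look like this:

```python
import requests  # third-party HTTP client

# Placeholder endpoint for the remote surveillance server.
SERVER_URL = "http://surveillance.example/api/report"

def report(lat, lon, jpeg_path):
    """Send the client's position and the latest camera frame to the server."""
    with open(jpeg_path, "rb") as frame:
        response = requests.post(
            SERVER_URL,
            data={"lat": lat, "lon": lon},      # position from GPS or Wi-Fi survey
            files={"frame": ("frame.jpg", frame, "image/jpeg")},
            timeout=10,
        )
    response.raise_for_status()

# Example call with an arbitrary position and a locally captured frame.
report(34.96, 135.77, "/tmp/latest_frame.jpg")
```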
 
MOPMN019 Controlling and Monitoring the Data Flow of the LHCb Read-out and DAQ Network detector, controls, monitoring, FPGA 281
 
  • R. Schwemmer, C. Gaspar, N. Neufeld, D. Svantesson
    CERN, Geneva, Switzerland
 
  The LHCb readout uses a set of 320 FPGA-based boards as the interface between the on-detector hardware and the GbE DAQ network. The boards are the logical Level 1 (L1) read-out electronics and aggregate the experiment's raw data into event fragments that are sent to the DAQ network. To control the many parameters of the read-out boards, an embedded PC is included on each board, connecting to the board's ICs and FPGAs. The data from the L1 boards are sent through an aggregation network into the High Level Trigger farm. The farm comprises approximately 1500 PCs which first assemble the fragments from the L1 boards and then perform a partial reconstruction and selection of the events. In total there are approximately 3500 network connections. Data is pushed through the network and there is no mechanism for resending packets. Loss of data on a small scale is acceptable, but care has to be taken to avoid data loss where possible. To monitor and debug losses, different probes are inserted throughout the entire read-out chain to count fragments, packets and their rates at different positions. To keep uniformity throughout the experiment, all control software was developed using the common SCADA software, PVSS, with the JCOP framework as its base. The presentation will focus on the low-level controls interface developed for the L1 boards and the networking probes, as well as the integration of the high-level user interfaces into PVSS. We will show the way in which users and developers interact with the software, configure the hardware and follow the flow of data through the DAQ network.
 
MOPMN025 New SPring-8 Control Room: Towards Unified Operation with SACLA and SPring-8 II Era. controls, operation, status, laser 296
 
  • A. Yamashita, R. Fujihara, N. Hosoda, Y. Ishizawa, H. Kimura, T. Masuda, C. Saji, T. Sugimoto, S. Suzuki, M. Takao, R. Tanaka
    JASRI/SPring-8, Hyogo-ken, Japan
  • T. Fukui, Y. Otake
    RIKEN/SPring-8, Hyogo, Japan
 
  We have renovated the SPring-8 control room. This is the first major renovation since its inauguration in 1997. In 2011, the construction of SACLA (SPring-8 Angstrom Compact free electron LAser) was completed, and it is planned to be controlled from the new control room for close cooperative operation with the SPring-8 storage ring. It is also expected that the SPring-8 II project will require more workstations than the current control room provides. We have extended the control room area for these foreseen projects. In this renovation we have employed new technology which did not exist 14 years ago, such as large LCDs and silent liquid-cooled workstations, for a comfortable operating environment. We have incorporated into the design many ideas obtained during 14 years of operating experience. Operation in the new control room began in April 2011 after a short construction period.
 
MOPMS004 First Experience with VMware Servers at HLS controls, hardware, database, brilliance 323
 
  • G. Liu, X. Bao, C. Li, J.G. Wang, K. Xuan
    USTC/NSRL, Hefei, Anhui, People's Republic of China
 
  Hefei Light Source (HLS) is a dedicated second-generation VUV light source, which was designed and constructed two decades ago. In order to improve the performance of HLS, especially to obtain higher brilliance and increase the number of straight sections, an upgrade project is underway, and accordingly a new control system is under construction. VMware vSphere 4 Enterprise Plus is used to construct the server system for the HLS control system. Four DELL PowerEdge R710 rack servers and one DELL EqualLogic PS6000E iSCSI SAN comprise the hardware platform. Several kinds of servers, such as the file server, web server, database server and NIS servers, together with the softIOC applications, are all integrated into this virtualization platform. A prototype softIOC has been set up, and its performance is also given in this paper. High availability and flexibility are achieved at low cost.
poster icon Poster MOPMS004 [0.463 MB]  
 
MOPMS010 LANSCE Control System Front-End and Infrastructure Hardware Upgrades controls, linac, EPICS, hardware 343
 
  • M. Pieck, D. Baros, C.D. Hatch, P.S. Marroquin, P.D. Olivas, F.E. Shelley, D.S. Warren, W. Winton
    LANL, Los Alamos, New Mexico, USA
 
  Funding: This work has benefited from the use of LANSCE at LANL. This facility is funded by the US DoE and operated by Los Alamos National Security for NSSA, Contract DE-AC52-06NA25396. LA-UR-11-10228
The Los Alamos Neutron Science Center (LANSCE) linear accelerator drives user facilities for isotope production, proton radiography, ultra-cold neutrons, weapons neutron research and various sciences using neutron scattering. The LANSCE Control System (LCS), which is in part 30 years old, provides control and data monitoring for most devices in the linac and for some of its associated experimental-area beam lines. In Fiscal Year 2011, the control system went through an upgrade process that affected different areas of the LCS. We improved our network infrastructure and converted part of our front-end control system hardware to Allen-Bradley ControlLogix 5000 and National Instruments CompactRIO programmable automation controllers (PACs). In this paper, we will discuss what we have done, what we have learned about upgrading the existing control system, and how this will affect our future plans.
 
 
MOPMS018 New Timing System Development at SNS timing, hardware, diagnostics, operation 358
 
  • D. Curry
    ORNL RAD, Oak Ridge, Tennessee, USA
  • X.H. Chen, R. Dickson, S.M. Hartman, D.H. Thompson
    ORNL, Oak Ridge, Tennessee, USA
  • J. Dedič
    Cosylab, Ljubljana, Slovenia
 
  The timing system at the Spallation Neutron Source (SNS) has recently been updated to support the long-range production and availability goals of the facility. A redesign of the hardware and software provided us with an opportunity to significantly reduce the complexity of the system as a whole and to consolidate the functionality of multiple cards into single units, eliminating almost half of our operating components in the field. It also presented a prime opportunity to integrate new system-level diagnostics, previously unavailable, for experts and operations. These new tools provide us with a clear image of the health of our distribution links and enhance our ability to quickly identify and isolate errors.
 
MOPMS020 High Intensity Proton Accelerator Controls Network Upgrade controls, monitoring, operation, proton 361
 
  • R.A. Krempaska, A.G. Bertrand, F. Lendzian, H. Lutz
    Paul Scherrer Institut, 5232 Villigen PSI, Switzerland
 
  The High Intensity Proton Accelerator (HIPA) control system network is spread over about six buildings and has grown historically in an unorganized way. It consisted of about 25 network switches, 150 nodes and 20 operator consoles. The miscellaneous hardware infrastructure and the lack of documentation and of a component overview could no longer guarantee the reliability of the control system and facility operation. Therefore, a new network was needed, based on a modern network topology and PSI standard hardware, with monitoring, detailed documentation and an overview. We present the process by which we successfully achieved this goal and the advantages of a clean and well-documented network infrastructure.
poster icon Poster MOPMS020 [0.761 MB]  
 
MOPMS023 LHC Magnet Test Benches Controls Renovation controls, Linux, hardware, interface 368
 
  • A. Raimondo, O.O. Andreassen, D. Kudryavtsev, S.T. Page, A. Rijllart, E. Zorin
    CERN, Geneva, Switzerland
 
  The LHC magnet test bench controls were designed in 1996. They were based on VME data acquisition systems and Siemens PLC control and interlock systems. During a review of the renovation of the superconducting laboratories at CERN in 2009, it was decided to replace the VME systems with PXI and the obsolete Sun/Solaris workstations with Linux PCs. This presentation covers the requirements for the new systems in terms of functionality, security, channel count, sampling frequency and precision. We report on the experience with the commissioning of the first series of fixed and mobile measurement systems upgraded to this new platform, compared to the old systems. We also include the experience with the renovated control room.
poster icon Poster MOPMS023 [1.310 MB]  
 
MOPMU008 Solaris Project Status and Challenges controls, TANGO, linac, operation 439
 
  • P.P. Goryl, C.J. Bocchetta, K. Królas, M. Młynarczyk, R. Nietubyć, M.J. Stankiewicz, P.S. Tracz, Ł. Walczak, A.I. Wawrzyniak
    Solaris, Krakow, Poland
  • K. Larsson, D.P. Spruce
    MAX-lab, Lund, Sweden
 
  Funding: Work supported by the European Regional Development Fund within the frame of the Innovative Economy Operational Program: POIG.02.01.00-12-213/09
The Polish synchrotron radiation facility, Solaris, is being built in Krakow. The project is strongly linked to the MAX-IV project and the 1.5 GeV storage ring. An overview will be given of the activities and of the control system, outlining the similarities and differences between the two machines.
 
poster icon Poster MOPMU008 [11.197 MB]  
 
MOPMU019 The Gateways of Facility Control for SPring-8 Accelerators controls, data-acquisition, database, framework 473
 
  • M. Ishii, T. Masuda, R. Tanaka, A. Yamashita
    JASRI/SPring-8, Hyogo-ken, Japan
 
  We integrated utilities data acquisition into the SPring-8 accelerator control system based on the MADOCA framework. Utilities data such as air temperature, power-line voltage and the temperature of the machine cooling water are helpful for studying the correlation between beam stability and environmental conditions. However, the accelerator control system had no way to obtain much of the utilities data managed by the facility control system, because the accelerator control system and the facility control system were independent systems without an interconnection. In 2010, we had a chance to replace the old facility control system. At that time, we constructed gateways between the MADOCA-based accelerator control system and the new facility control system, which installed BACnet, a data communication protocol for Building Automation and Control Networks, as a fieldbus. The system requirements were as follows: to monitor utilities data with the required sampling rate and resolution, to store all acquired data in the accelerator database, to keep the accelerator control system and the facility control system independent, and to allow future expansion to control the facilities from the accelerator control system. During the work, we outsourced the construction of the gateways, including the MADOCA data-taking software, to cope with limited manpower and a short work period. In this paper we describe the system design and the outsourcing approach.
 
TUCAUST01 Upgrading the Fermilab Fire and Security Reporting System hardware, interface, software, database 563
 
  • CA. King, R. Neswold
    Fermilab, Batavia, USA
 
  Funding: Operated by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the United States Department of Energy.
Fermilab's homegrown fire and security system (known as FIRUS) is highly reliable and has been in use for nearly thirty years. The system has gone through some minor upgrades; however, none of those changes were significant or visible. In this paper, we present a major overhaul of the system that is halfway complete. We discuss the use of Apple's OS X for the new GUI, upgrading the servers to use the Erlang programming language and allowing limited access for iOS and Android-based mobile devices.
 
slides icon Slides TUCAUST01 [2.818 MB]  
 
TUCAUST02 SARAF Control System Rebuild controls, software, operation, proton 567
 
  • E. Reinfeld, I. Eliyahu, I.G. Gertz, I. Mardor
    Soreq NRC, Yavne, Israel
 
  The Soreq Applied Research Accelerator Facility (SARAF) is a proton/deuteron RF superconducting linear accelerator, which was commissioned at Soreq NRC. SARAF will be a multi-user facility, whose main activities will be neutron physics and applications, radio-pharmaceutical development and production, and basic nuclear physics research. The SARAF Accelerator Control System (ACS) was delivered while still in the development phase. Various issues limit our capability to use it as a basis for future phases of accelerator operation and need to be addressed. Recently, two projects have been launched in order to streamline the system and prepare it for the future development of the accelerator. This article describes the plans and goals of these projects, the preparations undertaken by the SARAF team, the design principles on which the control methodology will be based and the architecture which is planned to be implemented. The rebuilding process will take place in two consecutive projects. The first will revamp the network architecture and the second will involve the actual rebuilding of the control system applications, features and procedures.
slides icon Slides TUCAUST02 [1.733 MB]  
 
TUCAUST06 Event-Synchronized Data Acquisition System of 5 Giga-bps Data Rate for User Experiment at the XFEL Facility, SACLA experiment, operation, detector, controls 581
 
  • M. Yamaga, A. Amselem, T. Hirono, Y. Joti, A. Kiyomichi, T. Ohata, T. Sugimoto, R. Tanaka
    JASRI/SPring-8, Hyogo-ken, Japan
  • T. Hatsui
    RIKEN/SPring-8, Hyogo, Japan
 
  A data acquisition (DAQ), control, and storage system has been developed for user experiments at the XFEL facility, SACLA, on the SPring-8 site. The anticipated experiments demand shot-by-shot DAQ in synchronization with the beam operation cycle in order to correlate the beam characteristics with recorded data such as X-ray diffraction patterns. The experiments produce waveform or image data, whose size ranges from 8 up to 48 MB for each X-ray pulse at 60 Hz. To meet these requirements, we have constructed a DAQ system that is operated in synchronization with the 60 Hz beam operation cycle. The system is designed to handle a data rate of up to 5 Gbps after compression, and consists of the trigger distributor/counters, the data-filling computers, the parallel-writing high-speed data storage, and the relational database. The data rate is reduced by on-the-fly data compression in front-end embedded systems. The self-describing data structure makes it possible to handle any type of data. The pipelined data buffer at each computer node ensures the integrity of the data transfer with non-real-time operating systems and reduces the development cost. All the data are transmitted via the TCP/IP protocol over GbE and 10 GbE Ethernet. To monitor the experimental status, the system incorporates on-line visualization of waveforms/images as well as prompt data mining by a 10 PFlops-scale supercomputer to check data health. A partial system for light source commissioning was released in March 2011. The full system will be released to public users in March 2012.
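The combination of self-describing records, on-the-fly compression and TCP/IP transport described above can be illustrated with a generic sketch; the framing, host name and metadata fields below are invented for the example and do not reproduce the SACLA format:

```python
import json
import socket
import struct
import zlib

def send_record(sock, metadata, payload):
    """Send one self-describing record: [4-byte header length][JSON header][compressed payload]."""
    body = zlib.compress(payload)                               # on-the-fly compression
    header = json.dumps({**metadata, "nbytes": len(body)}).encode()
    sock.sendall(struct.pack("!I", len(header)) + header + body)

# Example: one 16 MB detector image tagged with its beam-pulse number.
sock = socket.create_connection(("daq-storage.example", 5000))  # placeholder host/port
send_record(sock,
            {"tag": 123456, "type": "image", "shape": [2048, 2048, 4]},
            payload=bytes(16 * 1024 * 1024))
```

Because each record carries its own JSON header, a reader can dispatch waveform and image payloads without prior knowledge of the sender's configuration, which is the essence of a self-describing data structure.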
slides icon Slides TUCAUST06 [3.248 MB]  
 
TUDAUST03 Control System in SwissFEL Injector Test Facility controls, EPICS, laser, electron 593
 
  • M. Dach, D. Anicic, D.A. Armstrong, K. Bitterli, H. Brands, P. Chevtsov, F. Haemmerli, M. Heiniger, C.E. Higgs, W. Hugentobler, G. Janser, G. Jud, B. Kalantari, R. Kapeller, T. Korhonen, R.A. Krempaska, M.P. Laznovsky, T. Pal, W. Portmann, D. Vermeulen, E. Zimoch
    Paul Scherrer Institut, 5232 Villigen PSI, Switzerland
 
  The SwissFEL Injector Test Facility is an important milestone for the realization of the new SwissFEL free electron laser facility. The first beam in the Test Facility was produced on the 24th of August 2010, which inaugurated the operation of the injector. Since then, beam quality in various aspects has been greatly improved. This paper presents the current status of the Test Facility and focuses on the control system related issues which led to the successful commissioning. In addition, the technical challenges and opportunities in view of the future SwissFEL facility are discussed.
slides icon Slides TUDAUST03 [3.247 MB]  
 
TURAULT01 Summary of the 3rd Control System Cyber-security (CS)2/HEP Workshop controls, experiment, software, detector 603
 
  • S. Lüders
    CERN, Geneva, Switzerland
 
  Over the last decade, modern accelerator and experiment control systems have increasingly been based on commercial off-the-shelf products (VME crates, programmable logic controllers (PLCs), supervisory control and data acquisition (SCADA) systems, etc.), on Windows or Linux PCs, and on communication infrastructures using Ethernet and TCP/IP. Despite the benefits coming with this (r)evolution, new vulnerabilities are inherited, too: worms and viruses spread within seconds via the Ethernet cable, and attackers are becoming interested in control systems. The Stuxnet worm of 2010 against a particular Siemens PLC is a unique example of a sophisticated attack against control systems [1]. Unfortunately, control PCs cannot be patched as fast as office PCs. Even worse, vulnerability scans at CERN using standard IT tools have shown that commercial automation systems lack fundamental security precautions: some systems crashed during the scan, others could easily be stopped or their process data altered [2]. The 3rd (CS)2/HEP workshop [3], held the weekend before the ICALEPCS2011 conference, was intended to raise awareness; exchange good practices, ideas, and implementations; discuss what works and what does not, as well as the pros and cons; report on security events, lessons learned and successes; and give updates on the progress made at HEP laboratories around the world in securing control systems. This presentation will give a summary of the solutions planned and deployed and of the experience gained.
[1] S. Lüders, "Stuxnet and the Impact on Accelerator Control Systems", FRAAULT02, ICALEPCS, Grenoble, October 2011;
[2] S. Lüders, "Control Systems Under Attack?", O5_008, ICALEPCS, Geneva, October 2005.
[3] 3rd Control System Cyber-Security CS2/HEP Workshop, http://indico.cern.ch/conferenceDisplay.py?confId=120418
 
 
WEBHAUST02 Optimizing Infrastructure for Software Testing Using Virtualization software, hardware, Windows, distributed 622
 
  • O. Khalid, B. Copy, A A. Shaikh
    CERN, Geneva, Switzerland
 
  Virtualization technology and cloud computing have brought a paradigm shift in the way we utilize, deploy and manage computer resources. They allow fast deployment of multiple operating systems as containers on physical machines, which can be either discarded after use or snapshotted for later re-deployment. At CERN, we have been using virtualization/cloud computing to quickly set up virtual machines for our developers with pre-configured software to enable them to test/deploy a new version of a software patch for a given application. We have also been using the infrastructure for security analysis of control systems, as virtualization provides a degree of isolation in which control systems such as SCADA systems can be evaluated against simulated network attacks. This paper reports on the techniques that have been used for security analysis, involving network configuration/isolation to prevent interference with other systems on the network. It also provides an overview of the technologies used to deploy such an infrastructure based on VMware and the OpenNebula cloud management platform.
slides icon Slides WEBHAUST02 [2.899 MB]  
 
WEBHAUST03 Large-bandwidth Data Acquisition Network for XFEL Facility, SACLA controls, site, experiment, laser 626
 
  • T. Sugimoto, Y. Joti, T. Ohata, R. Tanaka, M. Yamaga
    JASRI/SPring-8, Hyogo-ken, Japan
  • T. Hatsui
    RIKEN/SPring-8, Hyogo, Japan
 
  We have developed a large-bandwidth data acquisition (DAQ) network for user experiments at the SPring-8 Angstrom Compact Free Electron Laser (SACLA) facility. The network connects detectors, on-line visualization terminals and the high-speed storage of the control and DAQ system in order to transfer beam diagnostic data for each X-ray pulse as well as the experimental data. The development of the DAQ network system (DAQ-LAN) was one of the critical elements in the system development, because data with a transfer rate reaching 5 Gbps must be stored and visualized with high availability. DAQ-LAN is also used for instrument control. In order to guarantee both high-speed data transfer and instrument control, we have implemented a physically and logically separated network system. The DAQ-LAN currently consists of six 10 GbE capable network switches used exclusively for data transfer, and ten 1 GbE capable network switches for instrument control and on-line visualization. High availability was achieved by link aggregation (LAG), with a typical convergence time of 500 ms, which is faster than RSTP (2 s). To prevent network trouble caused by broadcasts, the DAQ-LAN is logically separated into twelve network segments. The logical network segmentation is based on DAQ applications such as data transfer, on-line visualization, and instrument control. The DAQ-LAN will connect the control and DAQ system to the on-site high-performance computing system and to the next-generation supercomputers in Japan, including the K computer, for instant data mining during beamtime and for post-analysis.
slides icon Slides WEBHAUST03 [5.795 MB]  
 
WEBHAUST06 Virtualized High Performance Computing Infrastructure of Novosibirsk Scientific Center experiment, site, detector, controls 630
 
  • A. Zaytsev, S. Belov, V.I. Kaplin, A. Sukharev
    BINP SB RAS, Novosibirsk, Russia
  • A.S. Adakin, D. Chubarov, V. Nikultsev
    ICT SB RAS, Novosibirsk, Russia
  • V. Kalyuzhny
    NSU, Novosibirsk, Russia
  • N. Kuchin, S. Lomakin
    ICM&MG SB RAS, Novosibirsk, Russia
 
  Novosibirsk Scientific Center (NSC), also known worldwide as Akademgorodok, is one of the largest Russian scientific centers, hosting Novosibirsk State University (NSU) and more than 35 research organizations of the Siberian Branch of the Russian Academy of Sciences, including the Budker Institute of Nuclear Physics (BINP), the Institute of Computational Technologies, and the Institute of Computational Mathematics and Mathematical Geophysics (ICM&MG). Since each institute has specific requirements on the architecture of the computing farms involved in its research field, we currently have several computing facilities hosted by NSC institutes, each optimized for a particular set of tasks, of which the largest are the NSU Supercomputer Center, the Siberian Supercomputer Center (ICM&MG), and the Grid Computing Facility of BINP. A dedicated optical network with an initial bandwidth of 10 Gbps connecting these three facilities was built in order to make it possible to share the computing resources among the research communities, thus increasing the efficiency of operating the existing computing facilities and offering a common platform for building the computing infrastructure for future scientific projects. Unification of the computing infrastructure is achieved by extensive use of virtualization technology based on the Xen and KVM platforms. Our contribution gives a thorough review of the present status and future development prospects of the NSC virtualized computing infrastructure, focusing on its applications for handling the everyday data processing tasks of HEP experiments carried out at BINP.
slides icon Slides WEBHAUST06 [14.369 MB]  
 
WEBHMULT04 Sub-nanosecond Timing System Design and Development for LHAASO Project detector, Ethernet, timing, FPGA 646
 
  • G.H. Gong, S. Chen, Q. Du, J.M. Li, Y. Liu
    Tsinghua University, Beijing, People's Republic of China
  • H. He
    IHEP Beijing, Beijing, People's Republic of China
 
  Funding: National Science Foundation of China (No.11005065)
The Large High Altitude Air Shower Observatory (LHAASO) [1] project is designed to trace galactic cosmic ray sources with approximately 10,000 ground air shower detectors of different types. Reconstruction of cosmic ray arrival directions requires sub-nanosecond time synchronization; a novel design of the LHAASO timing system by means of packet-based frequency distribution and time synchronization over Ethernet is proposed. The White Rabbit protocol (WR) [2] is applied as the infrastructure of the timing system; it implements a distributed adaptive phase-tracking technology based on Synchronous Ethernet to lock all local clocks, and a real-time delay calibration method based on the Precision Time Protocol to keep all local times synchronized to within a nanosecond. We also present the development and test status of prototype WR switches and nodes.
[1] Cao Zhen, "A future project at tibet: the large high altitude air shower observatory (LHAASO)", Chinese Phys. C 34 249,2010
[2] P. Moreira, et al, "White Rabbit: Sub-Nanosecond Timing Distribution over Ethernet", ISPCS 2009
 
slides icon Slides WEBHMULT04 [8.775 MB]  
 
WEMMU007 Reliability in a White Rabbit Network timing, controls, Ethernet, hardware 698
 
  • M. Lipiński, J. Serrano, T. Włostowski
    CERN, Geneva, Switzerland
  • C. Prados
    GSI, Darmstadt, Germany
 
  White Rabbit (WR) is a time-deterministic, low-latency Ethernet-based network which enables transparent, sub-ns accuracy timing distribution. It is being developed to replace the General Machine Timing (GMT) system currently used at CERN and will become the foundation for the control system of the Facility for Antiproton and Ion Research (FAIR) at GSI. High reliability is an important issue in WR's design, since unavailability of the accelerator's control system directly translates into expensive downtime of the machine. A typical WR network is required to lose no more than a single message per year. Due to WR's complexity, the translation of this real-world requirement into a reliability requirement constitutes an interesting issue of its own: a WR network is considered functional only if it provides all its services to all its clients at all times. This paper defines reliability in WR and describes how it was addressed by dividing it into sub-domains: deterministic packet delivery, data redundancy, topology redundancy and clock resilience. The studies show that the Mean Time Between Failures (MTBF) of the WR network is the main factor affecting its reliability. Therefore, probability calculations for different topologies were performed using Fault Tree analysis and analytic estimations. The results of the study show that the requirements of WR are demanding. Design changes might be needed and further in-depth studies required, e.g. Monte Carlo simulations. Therefore, a direction for further investigations is proposed.
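The weight of MTBF in such an analysis can be illustrated with the standard steady-state availability formulas for single and redundant paths; the component figures below are invented for the sketch and do not reproduce the paper's Fault Tree calculations:

```python
HOURS_PER_YEAR = 8766.0

def unavailability(mtbf_h, mttr_h):
    """Steady-state probability that a single component (link or switch) is down."""
    return mttr_h / (mtbf_h + mttr_h)

# Hypothetical component figures, not values from the WR study.
u = unavailability(mtbf_h=200_000.0, mttr_h=8.0)

single_uplink = u          # one uplink: the path is down whenever that component is down
redundant_uplink = u ** 2  # two independent uplinks: down only if both fail at once

print(f"single uplink unavailability:    {single_uplink:.2e}")
print(f"redundant uplink unavailability: {redundant_uplink:.2e}")
print(f"expected downtime per year:      {redundant_uplink * HOURS_PER_YEAR * 3600:.2f} s")
```

The quadratic improvement from the redundant path only holds while the two uplinks fail independently, which is why common-mode failures dominate the more detailed Fault Tree treatment.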
slides icon Slides WEMMU007 [0.689 MB]  
poster icon Poster WEMMU007 [1.080 MB]  
 
WEMMU011 Radiation Safety Interlock System for SACLA (XFEL/SPring-8) radiation, electron, gun, controls 710
 
  • M. Kago, T. Matsushita, N. Nariyama, C. Saji, R. Tanaka, A. Yamashita
    JASRI/SPring-8, Hyogo-ken, Japan
  • Y. Asano, T. Hara, T. Itoga, Y. Otake, H. Takebe
    RIKEN/SPring-8, Hyogo, Japan
  • H. Tanaka
    RIKEN SPring-8 Center, Sayo-cho, Sayo-gun, Hyogo, Japan
 
  The radiation safety interlock system for SACLA (XFEL/SPring-8) protects personnel from radiation hazards. The system controls access to the accelerator tunnel, monitors the status of safety equipment such as emergency stop buttons, and gives permission for accelerator operation. The special feature of the system is fast beam termination when it detects an unsafe state. A total beam termination time of less than 16.6 ms (the linac operation repetition cycle of 60 Hz) is required. Especially important is fast beam termination when the electron beam deviates from the proper transport route. Therefore, we developed optical modules in order to transmit signals at high speed over a long distance (an overall length of around 700 m). A dedicated system was installed for fast judgment of a proper beam route. It is independent of the main interlock system, which manages access control and related functions. The system achieved a response time of less than 7 ms, which is sufficient for our demand. The construction of the system was completed in February 2011 and the system commenced operation in March 2011. We will report on the design of the system and its detailed performance.
slides icon Slides WEMMU011 [0.555 MB]  
poster icon Poster WEMMU011 [0.571 MB]  
 
WEPKN006 Running a Reliable Messaging Infrastructure for CERN's Control System controls, monitoring, operation, GUI 724
 
  • F. Ehm
    CERN, Geneva, Switzerland
 
  The current middleware for CERN's accelerator controls system is based on two implementations: the CORBA-based Controls MiddleWare (CMW) and the Java Message Service (JMS). The JMS service is realized using the open-source messaging product ActiveMQ and has become an increasingly vital part of beam operations, as data need to be transported reliably for various areas such as the beam protection system, post-mortem analysis, beam commissioning and the alarm system. The current JMS service is made up of 17 brokers running either in clusters or as single nodes. The main service is deployed as a two-node cluster providing failover and load-balancing capabilities for high availability. Non-critical applications running on virtual machines or desktop machines read data via a third broker to decouple their load from the operational main cluster. This scenario was introduced last year, and the statistics showed an uptime of 99.998% and an average data serving rate of 1.6 GB/min, represented by around 150 messages/sec. Deploying, running, maintaining and protecting such a messaging infrastructure is not trivial and includes setting up careful monitoring and failure pre-recognition. Naturally, lessons have been learnt and their outcome is very important for the current and future operation of such a service.
poster icon Poster WEPKN006 [0.877 MB]  
 
WEPKS015 Automatic Creation of LabVIEW Network Shared Variables LabView, controls, hardware, distributed 812
 
  • T. Kluge
    Siemens AG, Erlangen, Germany
  • H.-C. Schröder
    ASTRUM IT GmbH, Erlangen, Germany
 
  We are in the process of preparing the LabVIEW-controlled system components of our Solid State Direct Drive® experiments [1, 2, 3, 4] for integration into a Supervisory Control And Data Acquisition (SCADA) or distributed control system. The predetermined route to this is the generation of LabVIEW network shared variables that can easily be exported by LabVIEW to the SCADA system using OLE for Process Control (OPC) or other means. Many repetitive tasks are associated with the creation of the shared variables and the required code. We are introducing an efficient and inexpensive procedure that automatically creates shared variable libraries and sets default values for the shared variables. Furthermore, LabVIEW controls are created that are used for managing the connection to the shared variables inside the LabVIEW code operating on them. The procedure takes as input an XML spreadsheet defining the required variables and utilizes XSLT and LabVIEW scripting; a minimal sketch of the XSLT step is given after the reference list below. In a later stage of the project the code generation can be expanded to also create code and configuration files that will become necessary in order to access the shared variables from the SCADA system of choice.
[1] O. Heid, T. Hughes, THPD002, IPAC10, Kyoto, Japan
[2] R. Irsigler et al, 3B-9, PPC11, Chicago IL, USA
[3] O. Heid, T. Hughes, THP068, LINAC10, Tsukuba, Japan
[4] O. Heid, T. Hughes, MOPD42, HB2010, Morschach, Switzerland
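As noted above, the XSLT step can be sketched as follows; Python and lxml are used purely for illustration (the tool described in the paper relies on LabVIEW scripting), and the file names are hypothetical:

```python
from lxml import etree   # lxml provides XSLT 1.0 support

# Placeholder inputs: an XML variable list and an XSLT stylesheet that emits
# the LabVIEW shared-variable library (.lvlib) XML.
variables = etree.parse("shared_variables.xml")
stylesheet = etree.XSLT(etree.parse("make_lvlib.xsl"))

library = stylesheet(variables)               # apply the transformation
with open("MachineVariables.lvlib", "wb") as out:
    out.write(etree.tostring(library, pretty_print=True,
                             xml_declaration=True, encoding="UTF-8"))
```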
 
poster icon Poster WEPKS015 [0.265 MB]  
 
WEPKS023 Further Developments in Generating Type-Safe Messaging software, status, target, controls 836
 
  • R. Neswold, CA. King
    Fermilab, Batavia, USA
 
  Funding: Operated by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the United States Department of Energy.
At ICALEPCS '09, we introduced a source code generator that allows processes to communicate safely using native data types. In this paper, we discuss further development that has occurred since the conference in Kobe, Japan, including adding three more client languages, an optimization in network packet size and the addition of a new protocol data type.
 
poster icon Poster WEPKS023 [3.219 MB]  
 
WEPMN014 The Software and Hardware Architectural Design of the Vessel Thermal Map Real-Time System in JET real-time, plasma, Linux, controls 905
 
  • D. Alves, A. Neto, D.F. Valcárcel
    IPFN, Lisbon, Portugal
  • G. Arnoux, P. Card, S. Devaux, R.C. Felton, A. Goodyear, D. Kinna, P.J. Lomas, P. McCullen, A.V. Stephen, K-D. Zastrow
    CCFE, Abingdon, Oxon, United Kingdom
  • S. Jachmich
    RMA, Brussels, Belgium
 
  The installation of ITER-relevant materials for the plasma-facing components (PFCs) in the Joint European Torus (JET) is expected to have a strong impact on the operation and protection of the experiment. In particular, the use of all-beryllium tiles, which deteriorate at a substantially lower temperature than the formerly installed CFC tiles, imposes strict thermal restrictions on the PFCs during operation. Prompt and precise responses are therefore required whenever anomalous temperatures are detected. The new Vessel Thermal Map (VTM) real-time application collects the temperature measurements provided by dedicated pyrometers and infra-red (IR) cameras, groups them according to spatial location and probable offending heat source, and raises alarms that will trigger appropriate protective responses. In the context of JET's global scheme for the protection of the new wall, the system is required to run on a 10 millisecond cycle, communicating with other systems through the Real-Time Data Network (RTDN). In order to meet these requirements, a Commercial Off-The-Shelf (COTS) solution has been adopted, based on standard x86 multi-core technology, Linux and the Multi-threaded Application Real-Time executor (MARTe) software framework. This paper presents an overview of the system with particular technical focus on the configuration of its real-time capability and on the benefits of the modular development approach and the advanced tools provided by the MARTe framework.
See the Appendix of F. Romanelli et al., Proceedings of the 23rd IAEA Fusion Energy Conference 2010, Daejeon, Korea
 
poster icon Poster WEPMN014 [5.306 MB]  
 
WEPMN022 LIA-2 Power Supply Control System controls, interlocks, electron, experiment 926
 
  • A. Panov, P.A. Bak, D. Bolkhovityanov
    BINP SB RAS, Novosibirsk, Russia
 
  LIA-2 is an electron linear induction accelerator designed and built by BINP for flash radiography. The inductors get power from 48 modulators, grouped by six in 8 racks. Each modulator includes three control devices, connected via an internal CAN bus to an embedded modulator controller, which runs the Keil RTX real-time OS. Each rack includes a cPCI crate equipped with an x86-compatible processor board running Linux*. The modulator controllers are connected to the cPCI crate via an external CAN bus. Additionally, a brief modulator status is displayed on a front indicator. The integration of control electronics into devices with a high level of electromagnetic interference is discussed, and the use of real-time OSes in such devices and the interaction between them are described.
*"LIA-2 Linear Induction Accelerator Control System", this conference
 
poster icon Poster WEPMN022 [5.035 MB]  
 
WEPMN026 Evolution of the CERN Power Converter Function Generator/Controller for Operation in Fast Cycling Accelerators Ethernet, controls, software, radiation 939
 
  • D.O. Calcoen, Q. King, P.F. Semanaz
    CERN, Geneva, Switzerland
 
  Power converters in the LHC are controlled by the second generation of an embedded computer known as a Function Generator/Controller (FGC2). Following the success of this control system, new power converter installations at CERN will be based around an evolution of the design - a third generation called FGC3. The FGC3 will initially be used in the PS Booster and Linac4. This paper compares the hardware of the two generations of FGC and details the decisions made during the design of the FGC3.  
poster icon Poster WEPMN026 [0.586 MB]  
 
WEPMN037 DEBROS: Design and Use of a Linux-like RTOS on an Inexpensive 8-bit Single Board Computer Linux, hardware, interface, software 965
 
  • M.A. Davis
    NSCL, East Lansing, Michigan, USA
 
  As the power, complexity, and capabilities of embedded processors continue to grow, it is easy to forget just how much can be done with inexpensive single board computers based on 8-bit processors. When the proprietary, non-standard tools from the vendor for one such embedded computer became a major roadblock, I embarked on a project to expand my own knowledge and provide a more flexible, standards-based alternative. Inspired by operating systems such as Unix, Linux, and Minix, I wrote DEBROS (the Davis Embedded Baby Real-time Operating System) [1], which is a fully pre-emptive, priority-based OS with soft real-time capabilities that provides a subset of standard Linux/Unix-compatible system calls such as stdio, BSD sockets, pipes, semaphores, etc. The end result was a much more flexible, standards-based development environment which allowed me to simplify my programming model, expand diagnostic capabilities, and reduce the time spent monitoring and applying updates to the hundreds of devices in the lab currently using this hardware [2].
[1] http://groups.nscl.msu.edu/controls/files/DEBROS_User_Developer_Manual.doc
[2] http://groups.nscl.msu.edu/controls/
 
poster icon Poster WEPMN037 [0.112 MB]  
 
WEPMS011 The Timing Master for the FAIR Accelerator Facility timing, FPGA, embedded, real-time 996
 
  • R. Bär, T. Fleck, M. Kreider, S. Mauro
    GSI, Darmstadt, Germany
 
  One central design feature of the FAIR accelerator complex is a high level of parallel beam operation, imposing ambitious demands on the timing and management of accelerator cycles. Several linear accelerators, synchrotrons, storage rings and beam lines have to be controlled and re-configured for each beam production chain on a pulse-to-pulse basis, with cycle lengths ranging from 20 ms to several hours. This implies initialization, synchronization of equipment on a time scale down to the ns level, interdependencies, multiple paths and contingency actions such as emergency beam dump scenarios. The FAIR timing system will be based on White Rabbit [1] network technology, implementing a central Timing Master (TM) unit to orchestrate all machines. The TM is subdivided into separate functional blocks: the Clock Master, which deals with time and clock sources and their distribution over WR; the Management Master, which administers all WR timing receivers; and the Data Master, which schedules and coordinates machine instructions and broadcasts them over the WR network. The TM triggers equipment actions based on the transmitted execution time. Since latencies in the low μs range are required, this paper investigates the possibilities of parallelisation in programmable hardware and discusses the benefits of a distributed versus a monolithic timing master architecture. The proposed FPGA-based TM will meet these timing requirements while providing fast reaction to interlocks and internal events, and offers parallel processing of multiple signals and state machines.
[1] J. Serrano, et al, "The White Rabbit Project", ICALEPCS 2009.
 
 
WEPMS016 Network on Chip Master Control Board for Neutron's Acquisition FPGA, neutron, interface, controls 1006
 
  • E. Ruiz-Martinez, T. Mary, P. Mutti, J. Ratel, F. Rey
    ILL, Grenoble, France
 
  In the neutron scattering instruments at the Institut Laue-Langevin, one of the main challenges for the acquisition control is to generate the suitable signalling for the different modes of neutron acquisition. Inappropriate management could cause loss of information during the course of the experiments and in the subsequent data analysis. It is therefore necessary to define a central element that provides synchronization to the rest of the units. The backbone of the proposed acquisition control system is the so-called master acquisition board. This main board is designed to bring together the modes of neutron acquisition used in the facility and make them common to all the instruments in a simple, modular and open way, leaving open the possibility of adding new features. The complete system also includes a display board and n histogramming modules connected to the neutron detectors. The master board is a VME64x configurable high-density I/O carrier board based on the latest Xilinx Virtex-6T FPGA. The internal architecture of the FPGA follows a Network on Chip (NoC) approach: it implements a switch that efficiently interconnects the several resources available on the board (PCI Express, VME64x Master/Slave, DDR3 controllers and the user area). The core of the global signal synchronization is fully implemented in the FPGA; the board has a completely user-configurable I/O front-end to collect external signals, process them and distribute the synchronization control via the VME bus to the other modules involved in the acquisition.  
poster icon Poster WEPMS016 [7.974 MB]  
 
WEPMS023 ALBA Timing System - A Known Architecture with Fast Interlock System Upgrade timing, diagnostics, interlocks, booster 1024
 
  • O. Matilla, D.B. Beltrán, D.F.C. Fernández-Carreiras, J.J. Jamroz, J. Klora, J. Moldes, R. Suñé
    CELLS-ALBA Synchrotron, Cerdanyola del Vallès, Spain
 
  Like most of the newest synchrotron facilities, the ALBA Timing System is based on an event-driven architecture. Its main particularity is that a Fast Interlock System has been integrated with the Timing System, allowing an automated and synchronous reaction, from any point of the machine to any other, in less than 5 μs. The list of benefits of combining both systems is long: very high flexibility, reuse of the timing actuators, direct synchronous outputs at different points of the machine reacting to an interlock, implementation of the Fast Interlock at very low additional cost because the timing optical-fiber network is reused, and the possibility of implementing combined diagnostic tools for triggers and interlocks. To strengthen this last point, a global timestamp with 8 ns accuracy, usable both for triggers and for interlocks, has been implemented. The system has been designed, installed and extensively used during the Storage Ring commissioning with very good results.  
poster icon Poster WEPMS023 [0.920 MB]  
 
WEPMU005 Personnel Protection, Equipment Protection and Fast Interlock Systems: Three Different Technologies to Provide Protection at Three Different Levels controls, radiation, linac, interlocks 1055
 
  • D.F.C. Fernández-Carreiras, D.B. Beltrán, J. Klora, O. Matilla, J. Moldes, R. Montaño, M. Niegowski, R. Ranz, A. Rubio, S. Rubio-Manrique
    CELLS-ALBA Synchrotron, Cerdanyola del Vallès, Spain
 
  The Personnel Safety System is based on PILZ PLCs, SIL3-compliant according to the IEC 61508 standard. It is independent from the other subsystems and relies on a dedicated certification, first by PILZ and then by TÜV. The Equipment Protection System uses B&R hardware and comprises more than 50 PLCs and more than 100 distributed I/O modules installed inside the tunnel. The CPUs of the PLCs are interconnected by a deterministic network, supervising more than 7000 signals. Each beamline has an independent system. The fast interlocks use the bidirectional fibers of the MRF timing system to distribute the interlocks in the microsecond range. Events are distributed by optical fiber to synchronize more than 280 elements.  
poster icon Poster WEPMU005 [32.473 MB]  
 
WEPMU007 Securing a Control System: Experiences from ISO 27001 Implementation controls, software, EPICS, operation 1062
 
  • V. Vuppala, K.D. Davidson, J. Kusler, J.J. Vincent
    NSCL, East Lansing, Michigan, USA
 
  Recent incidents have emphasized the importance of security and operational continuity for achieving the quality objectives of an organization and the safety of its personnel and machines. However, security and disaster recovery are either completely ignored or given a low priority during the design and development of an accelerator control system, the underlying technologies, and the overlaid applications. This leads to an operational facility that is easy to breach and difficult to recover. Retrofitting security into the control system becomes much more difficult once it is in operation. In this paper we describe our experiences in achieving ISO 27001 compliance for NSCL's control system. We illustrate the problems faced with securing low-level controls, infrastructure, and applications. We also provide guidelines to address security and disaster recovery issues upfront, during the development phase.  
poster icon Poster WEPMU007 [1.304 MB]  
 
WEPMU018 Real-time Protection of the "ITER-like Wall at JET" real-time, FPGA, controls, plasma 1096
 
  • M.B. Jouve, C. Balorin
    Association EURATOM-CEA, St Paul Lez Durance, France
  • G. Arnoux, S. Devaux, D. Kinna, P.D. Thomas, K-D. Zastrow
    CCFE, Abingdon, Oxon, United Kingdom
  • P.J. Carvalho
    IPFN, Lisbon, Portugal
  • J. Veyret
    Sundance France, Matignon, France
 
  During the last JET tokamak shutdown a new ITER-Like Wall was installed, using tungsten and beryllium materials. To ensure plasma-facing component (PFC) integrity, the real-time protection of the wall has been upgraded through the project "Protection for the ITER-like Wall" (PIW). The choice has been made to work with 13 robust analog CCD cameras viewing the main areas of plasma-wall interaction and to use regions of interest (ROI) for monitoring in real time the surface temperature of the PFCs. For each camera, ROIs are set up before the pulse and, during plasma operation, surface temperatures from these ROIs are sent to the real-time processing system for monitoring and, if necessary, for preventing damage to the PFCs by modifying the plasma parameters. The video and the associated control system developed for this project are presented in this paper. The video is captured using a PLEORA frame grabber and sent over a GigE network to the real-time processing system (RTPS), which is divided into a 'Real time processing unit' (RTPU), for surface temperature calculation, and the 'RTPU Host', for connection between the RTPU and other systems. The RTPU design is based on commercial Xilinx Virtex-5 FPGA boards, with one board per camera and two boards per host. Programmed under Simulink using the System Generator blockset, the field programmable gate array (FPGA) can manage simultaneously up to 96 ROIs defined pixel by pixel.  
poster icon Poster WEPMU018 [2.450 MB]  
 
WEPMU029 Assessment And Testing of Industrial Devices Robustness Against Cyber Security Attacks controls, framework, monitoring, target 1130
 
  • F.M. Tilaro, B. Copy
    CERN, Geneva, Switzerland
 
  CERN (European Organization for Nuclear Research), like any organization, needs to achieve the conflicting objectives of connecting its operational network to the Internet while at the same time keeping its industrial control systems secure from external and internal cyber attacks. With this in mind, the ISA-99 [1] international cyber security standard has been adopted at CERN as a reference model to define a set of guidelines and security robustness criteria applicable to any network device. Device robustness represents a key link in the defense-in-depth concept, as some attacks will inevitably penetrate security boundaries and thus require further protection measures. When assessing the cyber security robustness of devices we have singled out control-system-relevant attack patterns derived from the well-known CAPEC [2] classification. Once a vulnerability is identified, it needs to be documented, prioritized and reproduced at will in a dedicated test environment for debugging purposes. CERN, in collaboration with SIEMENS, has designed and implemented a dedicated working environment, the Test-bench for Robustness of Industrial Equipments [3] ("TRoIE"). Such tests attempt to detect possible anomalies by exploiting corrupt communication channels and manipulating the normal behavior of the communication protocols, in the same way as a cyber attacker would proceed. This document provides an inventory of security guidelines [4] relevant to the CERN industrial environment and describes how we have automated the collection and classification of identified vulnerabilities into a test-bench.
[1] http://www.isa.org
[2] http://capec.mitre.org
[3] F. Tilaro, "Test-bench for Robustness…", CERN, 2009
[4] B. Copy, F. Tilaro, "Standards based measurable security for embedded devices", ICALEPCS 2009
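  To make the robustness-testing idea concrete, the following is a minimal sketch, not the actual TRoIE tool: it sends a series of randomized, malformed payloads to a device under test on an isolated test bench and checks after each case whether the device still answers a well-formed health probe, so that a failing case can be replayed at will. The device address, port and probe message are hypothetical placeholders.

    # Minimal robustness-test sketch (not the actual TRoIE tool): send malformed
    # payloads to a device under test on an isolated test network and record
    # whether it keeps answering a simple health probe afterwards.
    # Host, port and probe bytes are hypothetical placeholders.
    import random
    import socket

    DEVICE = ("192.0.2.10", 502)                 # hypothetical device-under-test (test network only)
    PROBE = b"\x00\x01\x00\x00\x00\x02\x01\x07"  # hypothetical "are you alive" request

    def send_payload(payload, timeout=2.0):
        """Deliver one (possibly malformed) payload, ignoring protocol errors."""
        try:
            with socket.create_connection(DEVICE, timeout=timeout) as s:
                s.sendall(payload)
                s.recv(1024)
        except OSError:
            pass  # a refused or reset connection is itself an interesting observation

    def device_still_alive():
        """Check that the device still answers a well-formed probe."""
        try:
            with socket.create_connection(DEVICE, timeout=2.0) as s:
                s.sendall(PROBE)
                return len(s.recv(1024)) > 0
        except OSError:
            return False

    random.seed(1)  # reproducible run, so a failing case can be replayed for debugging
    for case in range(100):
        payload = bytes(random.getrandbits(8) for _ in range(random.randint(1, 512)))
        send_payload(payload)
        if not device_still_alive():
            print(f"device stopped responding after case {case}, payload={payload.hex()}")
            break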
 
poster icon Poster WEPMU029 [3.152 MB]  
 
WEPMU030 CERN Safety System Monitoring - SSM monitoring, interface, database, controls 1134
 
  • T. Hakulinen, P. Ninin, F. Valentini
    CERN, Geneva, Switzerland
  • J. Gonzalez, C. Salatko-Petryszcze
    ASsystem, St Genis Pouilly, France
 
  CERN SSM (Safety System Monitoring) is a system for monitoring the state of health of the various access and safety systems of the CERN site and accelerator infrastructure. The emphasis of SSM is on the needs of maintenance and system operation, with the aim of providing an independent and reliable verification path for the basic operational parameters of each system. Included are all network-connected devices, such as PLCs, servers, panel displays, operator posts, etc. The basic monitoring engine of SSM is the freely available system-monitoring framework Zabbix, on top of which a simplified traffic-light-type web interface has been built. The web interface of SSM is designed to be ultra-light to facilitate access from handheld devices over slow connections. The underlying Zabbix system offers the history and notification mechanisms typical of advanced monitoring systems.  
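  As an illustration of building a lightweight view on top of Zabbix, the sketch below queries the Zabbix JSON-RPC API for host availability and maps it to traffic-light colours. The server URL and credentials are placeholders, and field names can differ between Zabbix versions (in recent releases availability is reported per interface rather than per host); this is not the SSM implementation itself.

    # Sketch: pull host state out of a Zabbix server through its JSON-RPC API,
    # e.g. to feed a simplified traffic-light web page. URL and credentials are
    # placeholders; availability fields vary between Zabbix versions.
    import requests

    ZABBIX_URL = "https://zabbix.example.org/api_jsonrpc.php"  # hypothetical

    def zabbix_call(method, params, auth=None, req_id=1):
        """Perform one JSON-RPC call and return its 'result' member."""
        payload = {"jsonrpc": "2.0", "method": method, "params": params,
                   "auth": auth, "id": req_id}
        r = requests.post(ZABBIX_URL, json=payload, timeout=10)
        r.raise_for_status()
        reply = r.json()
        if "error" in reply:
            raise RuntimeError(reply["error"])
        return reply["result"]

    token = zabbix_call("user.login", {"user": "ssm_ro", "password": "secret"})
    hosts = zabbix_call("host.get", {"output": ["host", "available"]}, auth=token)

    for h in hosts:
        # Zabbix convention: 1 = reachable, 2 = unreachable, 0 = unknown
        colour = {"1": "green", "2": "red"}.get(str(h.get("available")), "grey")
        print(f"{h['host']:30s} {colour}")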
poster icon Poster WEPMU030 [1.231 MB]  
 
WEPMU031 Virtualization in Control System Environment controls, EPICS, hardware, operation 1138
 
  • L.R. Shen, D.K. Liu, T. Wan
    SINAP, Shanghai, People's Republic of China
 
  In a large-scale distributed control system there are many common services that make up the environment of the entire control system, such as the servers for the common software base library, application servers, archive servers and so on. This paper describes a virtualization implementation for a control system environment, covering virtualization of the servers, storage, network and applications of the control system. With a virtualized instance of the EPICS-based control system environment built with VMware vSphere 4, we tested the full functionality of this environment in the SSRF control system, including the common servers for NFS, NIS, NTP, booting and the EPICS base and extension library tools. We also virtualized the application servers, such as the Archive and Alarm servers, the EPICS gateway and all of the network-based IOCs. In particular, we successfully tested high availability (HA) and VMotion for EPICS asynchronous IOCs under the different VLAN configurations of the current SSRF control system network.  
 
WEPMU034 Infrastructure of Taiwan Photon Source Control Network controls, EPICS, Ethernet, timing 1145
 
  • Y.-T. Chang, J. Chen, Y.-S. Cheng, K.T. Hsu, S.Y. Hsu, K.H. Hu, C.H. Kuo, C.Y. Wu
    NSRRC, Hsinchu, Taiwan
 
  A reliable, flexible and secure network is essential for the Taiwan Photon Source (TPS) control system, which is based upon the EPICS toolkit framework. Subsystem subnets will connect to the control system via EPICS-based CA gateways, forwarding data and reducing network traffic. By combining cyber security technologies such as firewalls, NAT and VLANs, the control network is isolated to protect the IOCs and accelerator components. Network management tools are used to improve network performance. A remote access mechanism will be constructed for maintenance and troubleshooting. Ethernet is also used as a fieldbus for instruments such as power supplies. This paper describes the system architecture of the TPS control network. Cabling topology, redundancy and maintainability are also discussed.  
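  For readers unfamiliar with CA gateways, the client-side idea can be sketched as follows, assuming pyepics and a hypothetical gateway host and PV names: the client's Channel Access address list is pointed at the gateway only, so no broadcast search traffic enters the isolated control network. This is only an illustration of the access pattern, not the TPS configuration.

    # Client-side sketch: read PVs through an EPICS Channel Access gateway from
    # outside the machine subnet. Gateway host and PV names are hypothetical.
    import os

    os.environ["EPICS_CA_ADDR_LIST"] = "ca-gw.example.org"  # the gateway, not the IOCs
    os.environ["EPICS_CA_AUTO_ADDR_LIST"] = "NO"            # disable subnet broadcast search

    import epics  # pyepics; imported after the CA environment is set

    for pvname in ("TPS:PS-DIP-01:CURRENT", "TPS:VAC-SEC-03:PRESSURE"):
        value = epics.caget(pvname, timeout=2.0)
        print(pvname, "=", value)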
 
WEPMU035 Distributed Monitoring System Based on ICINGA monitoring, distributed, database, experiment 1149
 
  • C. Haen, E. Bonaccorsi, N. Neufeld
    CERN, Geneva, Switzerland
 
  The basic services of the large IT infrastructure of the LHCb experiment are monitored with ICINGA, a fork of the industry-standard monitoring software NAGIOS. The infrastructure includes thousands of servers and computers, storage devices, more than 200 network devices and many VLANs, databases, hundreds of diskless nodes and much more. The number of configuration files needed to control the whole installation is large, and there is a lot of duplication when the monitoring infrastructure is distributed over several servers. In order to ease the manipulation of the configuration files, we designed a monitoring schema particularly adapted to our network, taking advantage of its specificities, and developed a tool to centralize its configuration in a database. Thanks to this tool, we could also parse all our previous configuration files and thus fill our Oracle database, which replaces the previous Active Directory based solution. A web frontend allows non-expert users to easily add new entities to monitor. We present the schema of our monitoring infrastructure and the tool used to manage and automatically generate the configuration for ICINGA.  
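  The configuration-generation idea can be sketched in a few lines: host definitions in the ICINGA/NAGIOS object format are rendered from database rows instead of being edited by hand. Here sqlite3 stands in for the Oracle backend, and the table and column names are invented; the actual LHCb tool is more elaborate.

    # Sketch: generate ICINGA host definitions from a database, in the spirit of
    # centralising the configuration. sqlite3 stands in for the Oracle backend;
    # table and column names are made up.
    import sqlite3

    TEMPLATE = """define host {{
        use                 generic-host
        host_name           {name}
        address             {address}
        hostgroups          {group}
    }}
    """

    def generate_hosts_cfg(db_path, out_path):
        con = sqlite3.connect(db_path)
        try:
            rows = con.execute(
                "SELECT name, address, hostgroup FROM monitored_hosts ORDER BY name"
            )
            with open(out_path, "w") as out:
                for name, address, group in rows:
                    out.write(TEMPLATE.format(name=name, address=address, group=group))
        finally:
            con.close()

    generate_hosts_cfg("monitoring.db", "hosts.cfg")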
poster icon Poster WEPMU035 [0.375 MB]  
 
WEPMU036 Efficient Network Monitoring for Large Data Acquisition Systems monitoring, interface, database, software 1153
 
  • D.O. Savu, B. Martin
    CERN, Geneva, Switzerland
  • A. Al-Shabibi
    Heidelberg University, Heidelberg, Germany
  • S.M. Batraneanu, S.N. Stancu
    UCI, Irvine, California, USA
  • R. Sjoen
    University of Oslo, Oslo, Norway
 
  Though constantly evolving and improving, the available network monitoring solutions have limitations when applied to the infrastructure of a high-speed real-time data acquisition (DAQ) system. DAQ networks are particular computer networks where experts have to pay attention both to individual subsections and to system-wide traffic flows while monitoring the network. The ATLAS network at the Large Hadron Collider (LHC) has more than 200 switches interconnecting 3500 hosts and totaling 8500 high-speed links. The use of heterogeneous tools for monitoring the various infrastructure parameters, in order to assure optimal DAQ system performance, proved to be a tedious and time-consuming task for experts. To alleviate this problem we used our networking and DAQ expertise to build a flexible and scalable monitoring system providing an intuitive user interface with the same look and feel irrespective of the data provider that is used. Our system uses custom-developed components for critical performance monitoring and seamlessly integrates complementary data from auxiliary tools, such as NAGIOS, information services or custom databases. A number of techniques (e.g. normalization, aggregation and data caching) were used in order to improve the user interface response time. The end result is a unified monitoring interface, for fast and uniform access to system statistics, which significantly reduced the time spent by experts on ad-hoc and post-mortem analysis.  
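  Two of the response-time techniques mentioned above, normalization and caching, can be sketched as follows: raw, ever-growing interface byte counters are turned into rates, and recent answers are cached so repeated dashboard queries do not hit the data providers again. The data source is simulated so the sketch runs; it is not the ATLAS tool.

    # Sketch of counter normalisation plus caching for a monitoring front-end.
    # fetch_counter() is a simulated stand-in for an SNMP or NAGIOS query.
    import random
    import time

    _cache = {}          # link -> (timestamp, rate in Mbit/s)
    _last_counter = {}   # link -> (timestamp, raw byte counter)
    _sim_counters = {}   # simulated data source
    CACHE_TTL = 10.0     # seconds a cached answer stays valid

    def fetch_counter(link):
        """Placeholder for a real backend query; simulated so the sketch runs."""
        _sim_counters[link] = _sim_counters.get(link, 0) + random.randint(10_000, 1_000_000)
        return _sim_counters[link]

    def link_rate_mbps(link):
        """Throughput of one link in Mbit/s, normalised from raw counters and cached."""
        now = time.time()
        cached = _cache.get(link)
        if cached and now - cached[0] < CACHE_TTL:
            return cached[1]
        counter = fetch_counter(link)
        rate = 0.0
        if link in _last_counter:
            t0, c0 = _last_counter[link]
            if now > t0 and counter >= c0:
                rate = (counter - c0) * 8 / (now - t0) / 1e6
        _last_counter[link] = (now, counter)
        _cache[link] = (now, rate)
        return rate

    print(link_rate_mbps("sw-core-01:eth1"))   # first call: no history yet, returns 0.0
    time.sleep(1.0)
    _cache.clear()                             # force a refresh for the demonstration
    print(link_rate_mbps("sw-core-01:eth1"))   # now a normalised rate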
poster icon Poster WEPMU036 [5.945 MB]  
 
WEPMU037 Virtualization for the LHCb Experiment controls, experiment, Linux, hardware 1157
 
  • E. Bonaccorsi, L. Brarda, M. Chebbi, N. Neufeld
    CERN, Geneva, Switzerland
  • F. Sborzacchi
    INFN/LNF, Frascati (Roma), Italy
 
  The LHCb Experiment, one of the four large particle physics detectors at CERN, counts in its Online System more than 2000 servers and embedded systems. As a result of ever-increasing CPU performance in modern servers, many of the applications in the controls system are excellent candidates for virtualization technologies. We see virtualization as an approach to cut down cost, optimize resource usage and manage the complexity of the IT infrastructure of LHCb. Recently we have added a Kernel-based Virtual Machine (KVM) cluster based on Red Hat Enterprise Virtualization for Servers (RHEV), complementary to the existing Hyper-V cluster devoted only to the virtualization of the Windows guests. This paper describes the architecture of our solution based on KVM and RHEV, along with its integration with the existing Hyper-V infrastructure and the Quattor cluster management tools, and in particular how we run controls applications on a virtualized infrastructure. We present performance results of both the KVM and Hyper-V solutions, the problems encountered, and a description of the management tools developed for the integration with the Online cluster and the LHCb SCADA control system based on PVSS.  
 
WEPMU038 Network Security System and Method for RIBF Control System controls, EPICS, operation, status 1161
 
  • A. Uchiyama
    SHI Accelerator Service Ltd., Tokyo, Japan
  • M. Fujimaki, N. Fukunishi, M. Komiyama, R. Koyama
    RIKEN Nishina Center, Wako, Japan
 
  In the RIKEN RI Beam Factory (RIBF), the local area network for the accelerator control system (the control system network) consists of commercially produced Ethernet switches, optical fibers and metal cables. On the other hand, E-mail and Internet access for tasks unrelated to accelerator operation are provided through the RIKEN virtual LAN (VLAN), used as the office network. From the viewpoint of information security, we decided to separate the control system network from the Internet and operate it independently of the VLAN. However, this was inconvenient for users for the following reason: it was impossible to monitor the information and status of accelerator operation from the user's office in real time. To improve this situation, we have constructed a secure system which allows users to access the accelerator information in the control system network from the VLAN, while preventing outsiders from having access to the information. To allow access to the inside of the control system network from the VLAN, we set up a reverse proxy server and a firewall. In addition, we implemented a system to send E-mail security alerts from the control system network to the VLAN. In this contribution, we report on this system and its present status in detail.  
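  The reverse-proxy idea, stripped to its essentials, is sketched below: a host between the office VLAN and the control-system network forwards read-only GET requests for status pages to an internal web server, so office users never talk to the control network directly. The internal URL is a hypothetical placeholder, and a production setup would rather use a hardened reverse proxy (Apache, nginx or similar) behind a firewall, as in the paper.

    # Minimal read-only reverse-proxy sketch between an office VLAN and a
    # control-system network. The internal status URL is hypothetical.
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.request import urlopen
    from urllib.error import URLError

    INTERNAL_BASE = "http://status.ctrl.internal"   # hypothetical status server

    class ReadOnlyProxy(BaseHTTPRequestHandler):
        def do_GET(self):
            try:
                with urlopen(INTERNAL_BASE + self.path, timeout=5) as resp:
                    body = resp.read()
                    self.send_response(resp.status)
                    self.send_header("Content-Type",
                                     resp.headers.get("Content-Type", "text/html"))
                    self.send_header("Content-Length", str(len(body)))
                    self.end_headers()
                    self.wfile.write(body)
            except URLError:
                self.send_error(502, "internal status server unreachable")

        def do_POST(self):
            # Anything other than read access is refused outright.
            self.send_error(403, "read-only gateway")

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), ReadOnlyProxy).serve_forever()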
poster icon Poster WEPMU038 [45.776 MB]  
 
WEPMU039 Virtual IO Controllers at J-PARC MR using Xen EPICS, controls, operation, Linux 1165
 
  • N. Kamikubota, N. Yamamoto
    J-PARC, KEK & JAEA, Ibaraki-ken, Japan
  • T. Iitsuka, S. Motohashi, M. Takagi, S.Y. Yoshida
    Kanto Information Service (KIS), Accelerator Group, Ibaraki, Japan
  • H. Nemoto
    ACMOS INC., Tokai-mura, Ibaraki, Japan
  • S. Yamada
    KEK, Ibaraki, Japan
 
  The control system for the J-PARC accelerator complex has been developed based on the EPICS toolkit. About 100 traditional ("real") VME-bus computers are used as EPICS IOCs in the control system for J-PARC MR (Main Ring). Recently, we have introduced "virtual" IOCs using Xen, an open-source virtual machine monitor. Scientific Linux with an EPICS iocCore runs on a Xen virtual machine, and EPICS databases for network devices and EPICS soft records can be configured on it. Multiple virtual IOCs run on a high-performance blade-type server running Scientific Linux as the native OS. A small number of virtual IOCs have been demonstrated in MR operation since October 2010. Experience and future perspectives are discussed.  
 
THBHAUST01 SNS Online Display Technologies for EPICS controls, status, site, EPICS 1178
 
  • K.-U. Kasemir, X.H. Chen, E. Danilova, J.D. Purcell
    ORNL, Oak Ridge, Tennessee, USA
 
  Funding: SNS is managed by UT-Battelle, LLC, under contract DE-AC05-00OR22725 for the U.S. Department of Energy
The ubiquity of web clients, from personal computers to cell phones, results in a growing demand for web-based access to control system data. At the Oak Ridge National Laboratory Spallation Neutron Source (SNS) we have investigated different technical approaches to provide read access to data in the Experimental Physics and Industrial Control System (EPICS) for a wide variety of web client devices. We compare them in terms of requirements, performance and ease of maintenance.
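  One straightforward way to give web clients read access to EPICS data is a small HTTP endpoint that returns a PV value as JSON. The sketch below uses Flask and pyepics purely as an illustration of the idea; it is not necessarily one of the technologies compared in the paper, and it handles scalar PVs only (waveforms would need conversion to a list first).

    # Sketch: a tiny read-only HTTP endpoint returning an EPICS PV value as JSON.
    # Illustration only; requires Flask and pyepics.
    from flask import Flask, jsonify, abort
    import epics

    app = Flask(__name__)

    @app.route("/pv/<path:pvname>")
    def read_pv(pvname):
        value = epics.caget(pvname, timeout=2.0)
        if value is None:          # not found or not connected within the timeout
            abort(404)
        return jsonify({"pv": pvname, "value": value})

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8000)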
 
slides icon Slides THBHAUST01 [3.040 MB]  
 
THBHMUST03 System Design towards Higher Availability for Large Distributed Control Systems controls, hardware, operation, neutron 1209
 
  • S.M. Hartman
    ORNL, Oak Ridge, Tennessee, USA
 
  Funding: SNS is managed by UT-Battelle, LLC, under contract DE-AC05-00OR22725 for the U.S. Department of Energy
Large distributed control systems for particle accelerators present a complex system engineering challenge. The system, with its significant quantity of components and their complex interactions, must be able to support reliable accelerator operations while providing the flexibility to accommodate changing requirements. System design and architecture focused on required data flow are key to ensuring high control system availability. Using examples from the operational experience of the Spallation Neutron Source at Oak Ridge National Laboratory, recommendations will be presented for leveraging current technologies to design systems for high availability in future large scale projects.
 
slides icon Slides THBHMUST03 [7.833 MB]  
 
THCHAUST05 LHCb Online Log Analysis and Maintenance System Linux, software, detector, controls 1228
 
  • J.C. Garnier, L. Brarda, N. Neufeld, F. Nikolaidis
    CERN, Geneva, Switzerland
 
  History has shown many times that computer logs are the only information an administrator may have about an incident, which could be caused either by a malfunction or by an attack. Due to the huge amount of logs produced by large-scale IT infrastructures such as LHCb Online, critical information may be overlooked or simply drowned in a sea of other messages. This clearly demonstrates the need for an automatic system for long-term maintenance and real-time analysis of the logs. We have constructed a low-cost, fault-tolerant, centralized logging system which is able to do in-depth analysis and cross-correlation of every log. This system is capable of handling O(10000) different log sources and numerous formats, while trying to keep the overhead as low as possible. It provides log gathering and management, offline analysis and online analysis. We call offline analysis the procedure of analyzing old logs for critical information, while online analysis refers to the procedure of early alerting and reacting. The system is extensible and cooperates well with other applications such as Intrusion Detection / Prevention Systems. This paper presents the LHCb Online topology, the problems we had to overcome and our solutions. Special emphasis is given to log analysis, how we use it for monitoring, and how we maintain uninterrupted access to the logs. We provide performance plots, code modifications to well-known log tools and our experience from trying various storage strategies.  
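  The "online analysis" idea, early alerting on a live log stream, can be sketched as follows: follow a centralized log file and raise an alert when a worrying pattern repeats too often within a time window. The file path, patterns and thresholds are hypothetical placeholders, not the LHCb configuration.

    # Sketch of online log analysis: tail a central log and alert on a pattern
    # rate. Path, patterns and thresholds are hypothetical.
    import collections
    import re
    import time

    LOGFILE = "/var/log/central/messages"          # hypothetical aggregated log
    PATTERN = re.compile(r"authentication failure|segfault|link down", re.I)
    WINDOW, THRESHOLD = 60.0, 20                   # alert if more than 20 hits in 60 s

    def follow(path):
        """Yield new lines appended to a file, like 'tail -f'."""
        with open(path, "r", errors="replace") as f:
            f.seek(0, 2)                           # start at the end of the file
            while True:
                line = f.readline()
                if line:
                    yield line
                else:
                    time.sleep(0.5)

    hits = collections.deque()
    for line in follow(LOGFILE):
        if PATTERN.search(line):
            now = time.time()
            hits.append(now)
            while hits and now - hits[0] > WINDOW:
                hits.popleft()
            if len(hits) > THRESHOLD:
                print("ALERT: pattern rate exceeded, last line:", line.strip())
                hits.clear()                       # avoid repeating the alert on every line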
slides icon Slides THCHAUST05 [0.377 MB]  
 
THCHMUST03 A New Fast Data Logger and Viewer at Diamond: the FA Archiver FPGA, feedback, electron, target 1244
 
  • M.G. Abbott, G. Rehm, I. Uzun
    Diamond, Oxfordshire, United Kingdom
 
  At the Diamond Light Source position data from 168 Electron Beam Position Monitors (BPMs) and some X-Ray BPMs is distributed over the Fast Acquisition communications network at an update rate of 10kHz; the total aggregate data rate is around 15MB/s. The data logger described here (the FA Archiver) captures this entire data stream to disk in real time, re-broadcasts selected subsets of the live stream to interested clients, and allows rapid access to any part of the saved data. The archive is saved into a rolling buffer allowing retrieval of detailed beam position data from any time in the last four days. A simple socket-based interface to the FA Archiver allows easy access to both the stored and live data from a variety of clients. Clients include a graphical viewer for visualising the motion or spectrum of a single BPM in real time, a command line tool for retrieving any part of the stored data by time of day, and Matlab scripts for exploring the dataset, helped by the storage of decimated minimum, maximum, and mean data.  
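  The "simple socket-based interface" mentioned above can be pictured with the generic client sketch below: connect, send a subscription request, then unpack fixed-size binary position frames. The request string and the frame layout here are invented placeholders; the real FA Archiver protocol is documented with the Diamond software, not reproduced here.

    # Generic sketch of a socket client for a streaming position archiver.
    # Host, request string and frame layout are invented placeholders.
    import socket
    import struct

    HOST, PORT = "fa-archiver.example.org", 8888   # hypothetical
    REQUEST = b"SUBSCRIBE BPM 42\n"                # hypothetical command
    FRAME = struct.Struct("<ii")                   # hypothetical: two int32 positions

    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        sock.sendall(REQUEST)
        buffer = b""
        for _ in range(10_000):                    # roughly 1 s of data at 10 kHz
            buffer += sock.recv(4096)
            while len(buffer) >= FRAME.size:
                x, y = FRAME.unpack_from(buffer)
                buffer = buffer[FRAME.size:]
                print(f"x = {x}, y = {y}")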
slides icon Slides THCHMUST03 [0.482 MB]  
 
THCHMUST06 The FAIR Timing Master: A Discussion of Performance Requirements and Architectures for a High-precision Timing System timing, controls, FPGA, kicker 1256
 
  • M. Kreider
    GSI, Darmstadt, Germany
  • M. Kreider
    Hochschule Darmstadt, University of Applied Science, Darmstadt, Germany
 
  Production chains in a particle accelerator are complex structures with many interdependencies and multiple paths to consider. This ranges from system initialisation and synchronisation of numerous machines to interlock handling and appropriate contingency measures like beam dump scenarios. The FAIR facility will employ White Rabbit, a time-based system which delivers an instruction and a corresponding execution time to a machine. In order to meet the deadlines in any given production chain, instructions need to be sent out ahead of time. For this purpose, code execution and message delivery times need to be known in advance. The FAIR Timing Master needs to be reliably capable of satisfying these timing requirements as well as being fault tolerant. Event sequences of recorded production chains indicate that low reaction times to internal and external events and fast, parallel execution are required. This suggests a slim architecture, especially devised for this purpose. Using the thread model of an OS or other high-level programs on a generic CPU would be counterproductive when trying to achieve deterministic processing times. This paper deals with the analysis of these requirements as well as a comparison of known processor and virtual machine architectures and the possibilities of parallelisation in programmable hardware. In addition, existing proposals at GSI will be checked against these findings. The final goal will be to determine the best instruction set for modelling any given production chain and to devise a suitable architecture to execute these models.  
slides icon Slides THCHMUST06 [2.757 MB]  
 
THDAUST03 The FERMI@Elettra Distributed Real-time Framework real-time, Linux, controls, Ethernet 1267
 
  • L. Pivetta, G. Gaio, R. Passuello, G. Scalamera
    ELETTRA, Basovizza, Italy
 
  Funding: The work was supported in part by the Italian Ministry of University and Research under grants FIRB-RBAP045JF2 and FIRB-RBAP06AWK3
FERMI@Elettra is a Free Electron Laser (FEL) based on a 1.5 GeV linac. The pulsed operation of the accelerator and the necessity to characterize and control each electron bunch require synchronous acquisition of the beam diagnostics together with the ability to drive actuators in real time at the linac repetition rate. The Adeos/Xenomai real-time extensions have been adopted in order to add real-time capabilities to the Linux-based control system computers running the Tango software. A software communication protocol based on Gigabit Ethernet, known as Network Reflective Memory (NRM), has been developed to implement a shared memory across the whole control system, allowing computers to communicate in real time. The NRM architecture, its real-time performance and its integration in the control system are described.
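  The reflective-memory concept can be illustrated very roughly as follows: one node packs a small shared region and multicasts it over UDP, and peers overwrite their local copy on receipt. This is only the general idea, not the NRM protocol itself; the multicast group, port and record layout are invented, and the real system runs under the Xenomai real-time stack rather than plain Python.

    # Minimal illustration of "reflective memory over Ethernet": a writer
    # multicasts a packed snapshot, readers overwrite their local copy.
    # Group, port and record layout are invented.
    import socket
    import struct

    GROUP, PORT = "239.1.1.1", 5007
    RECORD = struct.Struct("<Id")          # (sequence number, value) - invented layout

    def publish(seq, value):
        """Writer side: push one update of the shared region to the group."""
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
            s.sendto(RECORD.pack(seq, value), (GROUP, PORT))

    def listen():
        """Reader side: keep the local copy equal to the last datagram received."""
        local_copy = (0, 0.0)
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            s.bind(("", PORT))
            membership = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
            s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)
            while True:
                data, _ = s.recvfrom(RECORD.size)
                local_copy = RECORD.unpack(data)
                print("shared region is now", local_copy)

    if __name__ == "__main__":
        publish(1, 3.14)   # in practice writer and readers run on different hosts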
 
slides icon Slides THDAUST03 [0.490 MB]  
 
FRAAUST01 Development of the Machine Protection System for LCLS-I interface, controls, Ethernet, FPGA 1281
 
  • J.E. Dusatko, M. Boyes, P. Krejcik, S.R. Norum, J.J. Olsen
    SLAC, Menlo Park, California, USA
 
  Funding: U.S. Department of Energy under Contract Nos. DE-AC02-06CH11357 and DE-AC02-76SF00515
Machine Protection System (MPS) requirements for the Linac Coherent Light Source I demand that fault detection and mitigation occur within one machine pulse (1/120th of a second at full beam rate). The MPS must handle inputs from a variety of sources, including loss monitors as well as standard state-type inputs. These sensors exist at various places across the full 2.2 km length of the machine. A new MPS has been developed, based on a distributed star network where custom-designed local hardware nodes handle sensor inputs and mitigation outputs for localized regions of the LCLS accelerator complex. These Link-Nodes report status information to, and receive action commands from, a centralized processor running the MPS algorithm over a private network. The individual Link-Node is a 3U chassis with configurable hardware components that can be set up with digital and analog inputs and outputs, depending upon the sensor and actuator requirements. Features include a custom MPS digital input/output subsystem, a private Ethernet interface, an embedded processor, a custom MPS engine implemented in an FPGA and an Industry Pack (IP) bus interface, allowing COTS and custom analog/digital I/O modules to be utilized for MPS functions. These features, while capable of handling standard MPS state-type inputs and outputs, allow other systems like beam loss monitors to be completely integrated within them. To date, four different types of Link-Nodes are in use in LCLS-I. This paper describes the design, construction and implementation of the LCLS MPS with a focus on the Link-Node.
 
slides icon Slides FRAAUST01 [3.573 MB]  
 
FRAAULT02 STUXNET and the Impact on Accelerator Control Systems controls, software, hardware, Windows 1285
 
  • S. Lüders
    CERN, Geneva, Switzerland
 
  2010 saw wide news coverage of a new kind of computer attack, named "Stuxnet", targeting control systems. Due to its level of sophistication, it is widely acknowledged that this attack marks the very first case of a cyber-war of one country against the industrial infrastructure of another, although there is still much speculation about the details. Worse yet, experts recognize that Stuxnet might just be the beginning and that similar attacks, possibly with much less sophistication but with much more collateral damage, can be expected in the years to come. Stuxnet was targeting a special model of the Siemens 400 PLC series. Similar modules are also deployed for accelerator controls, such as the LHC cryogenics or vacuum systems, and for the detector control systems of the LHC experiments. The aim of this presentation is therefore to give an insight into what this new attack does and why it is deemed to be special. In particular, the potential impact on accelerator and experiment control systems will be discussed, and means to properly protect against similar attacks will be presented.  
slides icon Slides FRAAULT02 [8.221 MB]  
 
FRBHAULT03 Beam-based Feedback for the Linac Coherent Light Source feedback, timing, linac, controls 1310
 
  • D. Fairley, K.H. Kim, K. Luchini, P. Natampalli, L. Piccoli, D. Rogind, T. Straumann
    SLAC, Menlo Park, California, USA
 
  Funding: Work supported by the U. S. Department of Energy Contract DE-AC02-76SF00515
Beam-based feedback control loops are required by the Linac Coherent Light Source (LCLS) program in order to provide fast, single-pulse stabilization of beam parameters. Eight transverse feedback loops, a 6x6 longitudinal feedback loop, and a loop to maintain the electron bunch charge were successfully commissioned for the LCLS, and have been maintaining stability of the LCLS electron beam at beam rates up to 120 Hz. In order to run the feedback loops at beam rate, they were implemented in EPICS IOCs with a dedicated Ethernet multicast network. This paper discusses the design, configuration and commissioning of the beam-based Fast Feedback System for LCLS. Topics include algorithms for 120 Hz feedback, multicast network performance, actuator and sensor performance for single-pulse control and sensor readback, and feedback configuration and runtime control.
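  The per-pulse structure of such a loop can be pictured with the sketch below: read the sensors, turn the error into actuator corrections through a gain matrix, and write the actuators before the next pulse. The PV names, gain matrix and the use of pyepics here are illustrative assumptions, not the LCLS implementation, which runs inside EPICS IOCs synchronously with the 120 Hz machine pulse.

    # Sketch of one step of a beam-based feedback loop. PV names and gains are
    # invented; this is not the LCLS code.
    import numpy as np
    import epics

    BPM_PVS = ["BPM:LI21:201:X", "BPM:LI21:278:X"]             # hypothetical sensor PVs
    COR_PVS = ["XCOR:LI21:202:BCTRL", "XCOR:LI21:279:BCTRL"]   # hypothetical actuator PVs
    SETPOINT = np.zeros(len(BPM_PVS))      # target orbit at the BPMs
    GAIN = np.array([[-0.02, 0.0],
                     [0.0, -0.02]])        # invented gain matrix (actuators x sensors)
    ALPHA = 0.5                            # fraction of the computed correction applied per pulse

    def one_iteration():
        """One feedback step: read sensors, compute corrections, write actuators."""
        readings = np.array([epics.caget(pv, timeout=0.5) for pv in BPM_PVS], dtype=float)
        error = readings - SETPOINT
        corrections = ALPHA * (GAIN @ error)
        for pv, delta in zip(COR_PVS, corrections):
            current = epics.caget(pv, timeout=0.5)
            if current is not None and np.isfinite(delta):
                epics.caput(pv, current + delta)

    # In a real system this would be triggered by the machine pulse;
    # here it is called once for illustration.
    one_iteration()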
 
slides icon Slides FRBHAULT03 [1.918 MB]  
 
FRBHMULT05 Middleware Trends and Market Leaders 2011 CORBA, controls, Windows, Linux 1334
 
  • A. Dworak, P. Charrue, F. Ehm, W. Sliwinski, M. Sobczak
    CERN, Geneva, Switzerland
 
  The Controls Middleware (CMW) project was launched over ten years ago. Its main goal was to unify the middleware solutions used to operate the CERN accelerators. An important part of the project, the equipment access library RDA, was based on CORBA, an unquestionable standard at the time. RDA became an operational and critical part of the infrastructure, yet the demanding run-time environment revealed some shortcomings of the system. The accumulation of fixes and workarounds led to unnecessary complexity, and RDA became difficult to maintain and to extend. CORBA proved to be a cumbersome product rather than a panacea. Fortunately, many new transport frameworks have appeared since then. They boast a better design and support concepts that make them easy to use. Willing to profit from the new libraries, the CMW team updated the user requirements and, in light of these, investigated possible CORBA substitutes. The process consisted of several phases: a review of middleware solutions belonging to different categories (e.g. data-centric, object- and message-oriented) and of their applicability to the communication model in RDA; evaluation of several market-recognized products and promising start-ups; prototyping of typical communication scenarios; testing the libraries against exceptional situations and errors; and verifying that the mandatory performance constraints were met. Thanks to this investigation the team has selected a few libraries that suit their needs better than CORBA. Further prototyping will select the best candidate.  
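  To give a flavour of the message-oriented category mentioned above, the sketch below shows a minimal publish/subscribe exchange with ZeroMQ (pyzmq). It is offered only as an example of the kind of library that could be evaluated in such a review; the paper does not state which candidates were selected, and the endpoint and "device/property" names are invented.

    # Minimal publish/subscribe sketch with ZeroMQ (pyzmq), as an example of a
    # message-oriented middleware. Endpoint and topic names are invented.
    import threading
    import time
    import zmq

    ENDPOINT = "tcp://127.0.0.1:5556"

    def publisher():
        pub = zmq.Context.instance().socket(zmq.PUB)
        pub.bind(ENDPOINT)
        time.sleep(0.2)                      # give the subscriber time to connect
        for i in range(5):
            pub.send_string(f"PowerConverter/Current {i * 0.1:.3f}")
            time.sleep(0.1)

    def subscriber():
        sub = zmq.Context.instance().socket(zmq.SUB)
        sub.connect(ENDPOINT)
        sub.setsockopt_string(zmq.SUBSCRIBE, "PowerConverter/")
        sub.setsockopt(zmq.RCVTIMEO, 2000)   # do not block forever if updates stop
        while True:
            try:
                print("update:", sub.recv_string())
            except zmq.Again:
                break

    t = threading.Thread(target=subscriber)
    t.start()
    publisher()
    t.join()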
slides icon Slides FRBHMULT05 [8.508 MB]