Keyword: database
Paper Title Other Keywords Page
MOMAU002 Improving Data Retrieval Rates Using Remote Data Servers network, software, hardware, controls 40
 
  • T. D'Ottavio, B. Frak, J. Morris, S. Nemesure
    BNL, Upton, Long Island, New York, USA
 
  Funding: Work performed under the auspices of the U.S. Department of Energy
The power and scope of modern control systems have led to an increased amount of data being collected and stored, including data collected at high (kHz) frequencies. One consequence is that users now routinely make data requests that can cause gigabytes of data to be read and displayed. Given that a user's patience can be measured in seconds, this can be quite a technical challenge. This paper explores one possible solution to this problem - the creation of remote data servers whose performance is optimized to handle context-sensitive data requests. Methods for increasing data delivery performance include the use of high-speed network connections between the stored data and the data servers, smart caching of frequently used data, and the culling of the data delivered as determined by the context of the data request. This paper describes decisions made when constructing these servers and compares data retrieval performance by clients that use or do not use an intermediate data server.
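The abstract mentions culling the delivered data according to the context of the request and caching frequently used data. The following is a minimal, hypothetical Python sketch of that idea; the function names, the "display budget" parameter and the cache key are assumptions for illustration, not taken from the paper.

```python
# Illustrative sketch only: one way a data server could cull a large
# time-series down to what a client can usefully display.
def cull(samples, max_points=2000):
    """Reduce (time, value) samples to at most ~max_points by keeping the
    min and max of each bucket, so spikes survive the reduction."""
    if len(samples) <= max_points:
        return list(samples)
    bucket = max(1, len(samples) * 2 // max_points)   # 2 points kept per bucket
    culled = []
    for i in range(0, len(samples), bucket):
        chunk = samples[i:i + bucket]
        lo = min(chunk, key=lambda s: s[1])
        hi = max(chunk, key=lambda s: s[1])
        culled.extend(sorted({lo, hi}))                # keep time order, drop duplicates
    return culled

# A smart cache keyed by (signal, start, end, budget) could sit in front:
cache = {}
def cached_request(signal, start, end, budget, fetch):
    key = (signal, start, end, budget)
    if key not in cache:
        cache[key] = cull(fetch(signal, start, end), budget)
    return cache[key]
```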
 
Slides MOMAU002 [0.085 MB]
Poster MOMAU002 [1.077 MB]
 
MOMAU004 Database Foundation for the Configuration Management of the CERN Accelerator Controls Systems controls, interface, software, timing 48
 
  • Z. Zaharieva, M. Martin Marquez, M. Peryt
    CERN, Geneva, Switzerland
 
The Controls Configuration DB (CCDB) and its interfaces have been developed over the last 25 years and today form the basis for the configuration management of the Controls System for all accelerators at CERN. The CCDB contains data for all configuration items and their relationships, required for the correct functioning of the Controls System. The configuration items are quite heterogeneous, covering different areas of the Controls System – ranging from 3000 front-end computers and 75,000 software devices allowing remote control of the accelerators, to valid states of the accelerator timing system. The article describes the different areas of the CCDB, their interdependencies and the challenges of establishing the data model for such a diverse configuration management database, serving a multitude of clients. The CCDB tracks the life of the configuration items by allowing their clear identification, triggering change management processes and providing status accounting and audits. This necessitated the development and implementation of a combination of tailored processes and tools. The Controls System is data-driven - the data stored in the CCDB is extracted and propagated to the controls hardware in order to configure it remotely. Special attention is therefore paid to data security and data integrity, as an incorrectly configured item can have a direct impact on the operation of the accelerators.
Slides MOMAU004 [0.404 MB]
Poster MOMAU004 [6.064 MB]
 
MOMAU005 Integrated Approach to the Development of the ITER Control System Configuration Data controls, software, network, status 52
 
  • D. Stepanov, L. Abadie
    ITER Organization, St. Paul lez Durance, France
  • J. Bertin, G. Bourguignon, G. Darcourt
    Sopra Group, Aix-en-Provence, France
  • O. Liotard
    TCS France, Puteaux, France
 
The ITER control system (CODAC) is steadily moving into the implementation phase. A design guidelines handbook and a software development toolkit, named CODAC Core System, were produced in February 2011. They are ready to be used off-site, in the ITER domestic agencies and associated industries, in order to develop the first control "islands" of the various ITER plant systems. In addition to the work done off-site, there is a wealth of I&C-related data developed centrally at ITER, but scattered through various sources. These data include I&C design diagrams, 3-D data, volume allocation, inventory control, administrative data, planning and scheduling, tracking of deliveries and associated documentation, requirements control, etc. All these data have to be kept coherent and up-to-date, with various types of cross-checks and procedures imposed on them. A "plant system profile" database, currently under development at ITER, represents an effort to provide an integrated view of the I&C data. Supported by platform-independent data modeling, done with the help of XML Schema, it accumulates all the data in a single hierarchy and provides different views for different aspects of the I&C data. The database is implemented using MS SQL Server with a Java-based web interface. Import and data-linking services are implemented using Talend software, and report generation is done with the help of MS SQL Server Reporting Services. This paper reports on the first implementation of the database, the kind of data stored so far, typical workflows and processes, and directions of further work.
Slides MOMAU005 [0.384 MB]
Poster MOMAU005 [0.692 MB]
 
MOMAU007 How to Maintain Hundreds of Computers Offering Different Functionalities with Only Two System Administrators controls, software, Linux, EPICS 56
 
  • R.A. Krempaska, A.G. Bertrand, C.E. Higgs, R. Kapeller, H. Lutz, M. Provenzano
    Paul Scherrer Institut, 5232 Villigen PSI, Switzerland
 
The Controls section at PSI is responsible for the control systems of four accelerators: two proton accelerators (HIPA and PROSCAN), the Swiss Light Source (SLS) and the Free Electron Laser (SwissFEL) Test Facility. On top of that, we have 18 additional SLS beamlines to control. The control system is mainly composed of the so-called Input/Output Controllers (IOCs), which require a complete and complex computing infrastructure in order to be booted, developed, debugged and monitored. This infrastructure currently consists mainly of Linux computers such as boot servers, port servers and configuration servers (called save-and-restore servers). Overall, the constellation of computers and servers which compose the control system amounts to about five hundred Linux computers, which can be split into 38 different configurations based on the work each of these systems needs to provide. For the administration of all this we employ only two system administrators, who are responsible for the installation, configuration and maintenance of those computers. This paper shows which tools are used to tackle this difficult task, such as Puppet (an open-source Linux tool we further adapted) and many in-house developed tools offering an overview of the computers, their installation status and the relations between the different servers and computers.
Slides MOMAU007 [0.384 MB]
Poster MOMAU007 [0.708 MB]
 
MOMMU001 Extending Alarm Handling in Tango TANGO, controls, synchrotron, status 63
 
  • S. Rubio-Manrique, F. Becheri, D.F.C. Fernández-Carreiras, J. Klora, L. Krause, A. Milán Otero, Z. Reszela, P. Skorek
    CELLS-ALBA Synchrotron, Cerdanyola del Vallès, Spain
 
This paper describes the alarm system developed at the Alba Synchrotron, built on top of the Tango control system. It describes the tools used for configuration and visualization, its integration in user interfaces and its approach to alarm specification: either assigning discrete Alarm/Warning levels or allowing versatile logic rules written in Python. The paper also covers the life cycle of an alarm (triggering, logging, notification, explanation and acknowledgement) and the automatic control actions that can be triggered by the alarms.
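As an illustration of the "logic rules in Python" mentioned above, here is a minimal, hypothetical sketch of how such a rule could be evaluated against live Tango attributes with PyTango. The rule text, device name and returned levels are invented, not taken from the ALBA system.

```python
# Minimal sketch (not the actual ALBA implementation): evaluating a Python
# logic rule against live attribute values read through PyTango.
import tango  # PyTango

RULES = {
    "CoolingFault": "plc.read_attribute('WaterTemp').value > 45 "
                    "or not plc.read_attribute('PumpOn').value",
}

def evaluate(rule_name):
    plc = tango.DeviceProxy("lab/plc/cooling01")   # hypothetical device
    # eval() keeps the rule as an ordinary Python expression; a real system
    # would restrict the evaluation namespace and log the result.
    active = bool(eval(RULES[rule_name], {"plc": plc}))
    return "ALARM" if active else "OK"
```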
Slides MOMMU001 [1.119 MB]
Poster MOMMU001 [2.036 MB]
 
MOMMU009 Upgrade of the Server Architecture for the Accelerator Control System at the Heidelberg Ion Therapy Center controls, ion, network, proton 78
 
  • J.M. Mosthaf, Th. Haberer, S. Hanke, K. Höppner, A. Peters, S. Stumpf
    HIT, Heidelberg, Germany
 
The Heidelberg Ion Therapy Center (HIT) is a heavy-ion accelerator facility located at the Heidelberg university hospital and intended for cancer treatment with heavy ions and protons. It provides three treatment rooms for therapy, of which two with horizontal beam nozzles are in use, while the unique gantry with a 360° rotating beam port is currently under commissioning. The proprietary accelerator control system runs on several classical server machines, including a main control server, a database server running Oracle, a device settings modeling server (DSM) and several gateway servers for auxiliary system control. As the load on some of the main systems, especially the database and DSM servers, has become very high in terms of CPU and I/O load, a change to a more up-to-date blade server enclosure with four redundant blades and a 10 Gbit internal network architecture has been decided. For budgetary reasons, this enclosure will at first only replace the main control, database and DSM servers and consolidate some of the services now running on auxiliary servers. The internal configurable network will improve the communication between the servers and the database. As all blades in the enclosure are configured identically, one dedicated spare blade is used to provide redundancy in case of hardware failure. Additionally, we plan to use virtualization software to further improve redundancy, to consolidate the services running on gateways, and to make dynamic load balancing available to account for different performance needs, e.g. in commissioning or therapy use of the accelerator.
Slides MOMMU009 [0.233 MB]
Poster MOMMU009 [1.132 MB]
 
MOPKN002 LHC Supertable operation, collider, interface, luminosity 86
 
  • M. Pereira, M. Lamont, G.J. Müller, D.D. Teixeira
    CERN, Geneva, Switzerland
  • T.E. Lahey
    SLAC, Menlo Park, California, USA
  • E.S.M. McCrory
    Fermilab, Batavia, USA
 
  LHC operations generate enormous amounts of data. These data are being stored in many different databases. Hence, it is difficult for operators, physicists, engineers and management to have a clear view on the overall accelerator performance. Until recently the logging database, through its desktop interface TIMBER, was the only way of retrieving information on a fill-by-fill basis. The LHC Supertable has been developed to provide a summary of key LHC performance parameters in a clear, consistent and comprehensive format. The columns in this table represent main parameters that describe the collider's operation such as luminosity, beam intensity, emittance, etc. The data is organized in a tabular fill-by-fill manner with different levels of detail. A particular emphasis was placed on data sharing by making data available in various open formats. Typically the contents are calculated for periods of time that map to the accelerator's states or beam modes such as Injection, Stable Beams, etc. Data retrieval and calculation is triggered automatically after the end of each fill. The LHC Supertable project currently publishes 80 columns of data on around 100 fills.  
 
MOPKN005 Construction of New Data Archive System in RIKEN RI Beam Factory EPICS, controls, data-acquisition, beam-diagnostic 90
 
  • M. Komiyama, N. Fukunishi
    RIKEN Nishina Center, Wako, Japan
  • A. Uchiyama
    SHI Accelerator Service Ltd., Tokyo, Japan
 
The control system of the RIKEN RI Beam Factory (RIBF) is based on EPICS, and three kinds of data archive system have been in operation. Two of them are EPICS applications and the other is MyDAQ2, developed by the SPring-8 control group. MyDAQ2 collects data such as cooling-water and magnet temperatures and is not integrated into our EPICS control system. In order to unify the three applications into a single system, we started to develop a new system in October 2009. One of the requirements for this RIBF Control data Archive System (RIBFCAS) is that it routinely collects more than 3000 data points from 21 EPICS Input/Output Controllers (IOCs) every 1 to 60 seconds, depending on the type of equipment. The ability to unify the MyDAQ2 database is also required. To fulfill these requirements, a Java-based system has been constructed, in which the Java Channel Access Light Library (JCAL), developed by the J-PARC control group, is adopted in order to acquire the large amounts of data mentioned above. The main advantage of JCAL is that it is based on a single-threaded architecture for thread safety, while user threads can be multi-threaded. The RIBFCAS hardware consists of an application server, a database server and a client PC. The client application is executed on the Adobe AIR runtime. At the moment, we have succeeded in acquiring about 3000 data points from 21 EPICS IOCs every 10 seconds for one day, and validation tests are proceeding. Unification of MyDAQ2 is now in progress and is scheduled to be completed in 2011.
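RIBFCAS itself is Java-based and uses JCAL; purely to illustrate the periodic-collection pattern described above, here is a hedged Python sketch using pyepics and SQLite. The PV names, the 10 s period and the table layout are invented for the example.

```python
# Sketch only: periodic channel-access polling into a relational archive.
import time, sqlite3
from epics import PV

pvs = [PV(f"RIBF:DEV{i:03d}:VAL") for i in range(10)]   # hypothetical PVs
db = sqlite3.connect("archive.db")
db.execute("CREATE TABLE IF NOT EXISTS archive (ts REAL, pv TEXT, value REAL)")

while True:
    now = time.time()
    rows = [(now, pv.pvname, pv.get()) for pv in pvs]   # channel-access reads
    db.executemany("INSERT INTO archive VALUES (?, ?, ?)", rows)
    db.commit()
    time.sleep(10)   # the real system polls every 1-60 s depending on equipment
```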
Poster MOPKN005 [27.545 MB]
 
MOPKN006 Algorithms and Data Structures for the EPICS Channel Archiver EPICS, hardware, operation, software 94
 
  • J. Rowland, M.T. Heron, M.A. Leech, S.J. Singleton, K. Vijayan
    Diamond, Oxfordshire, United Kingdom
 
Diamond Light Source records 3 GB of process data per day and has a 15 TB archive online with the EPICS Channel Archiver. This paper describes recent modifications to the software to improve performance and usability. The file-size limit on the R-Tree index has been removed, allowing all archived data to be searchable from one index. A decimation system works directly on compressed archives from a backup server and produces multi-rate reduced data with minimum and maximum values to support time-efficient summary reporting and range queries. The XMLRPC interface has been extended to provide binary data transfer to clients needing large amounts of raw data.
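A rough sketch of the min/max decimation idea described above: reduce a raw archive into fixed-width time bins that keep the extremes, so summary plots and range queries stay honest. The bin width and data layout are illustrative only and do not reflect the Channel Archiver's actual file format.

```python
def decimate(samples, bin_seconds=60):
    """samples: time-ordered iterable of (timestamp, value).
    Returns one (bin_start, minimum, maximum) triple per occupied bin."""
    reduced, current_bin, lo, hi = [], None, None, None
    for ts, value in samples:
        b = int(ts // bin_seconds) * bin_seconds
        if b != current_bin:
            if current_bin is not None:
                reduced.append((current_bin, lo, hi))
            current_bin, lo, hi = b, value, value
        else:
            lo, hi = min(lo, value), max(hi, value)
    if current_bin is not None:
        reduced.append((current_bin, lo, hi))
    return reduced
```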
Poster MOPKN006 [0.133 MB]
 
MOPKN007 LHC Dipole Magnet Splice Resistance From SM18 Data Mining dipole, electronics, operation, extraction 98
 
  • H. Reymond, O.O. Andreassen, C. Charrondière, G. Lehmann Miotto, A. Rijllart, D. Scannicchio
    CERN, Geneva, Switzerland
 
The splice incident which happened during commissioning of the LHC on the 19th of September 2008 caused damage to several magnets and adjacent equipment. This raised not only the question of how it happened, but also questions about the state of all other splices. The inter-magnet splices were studied very soon afterwards with new measurements, but the internal magnet splices were also a concern. At the Chamonix meeting in January 2009, the CERN management decided to create a working group to analyse the provoked-quench data of the magnet acceptance tests and to try to find indications of bad splices in the main dipoles. This resulted in a data mining project that took about one year to complete. This presentation describes how the data was stored, extracted and analysed, reusing existing LabVIEW-based tools. We also present the difficulties encountered and the importance of combining measured data with operator notes in the logbook.
Poster MOPKN007 [5.013 MB]
 
MOPKN009 The CERN Accelerator Measurement Database: On the Road to Federation controls, extraction, data-management, status 102
 
  • C. Roderick, R. Billen, M. Gourber-Pace, N. Hoibian, M. Peryt
    CERN, Geneva, Switzerland
 
The Measurement database, acting as short-term central persistence and front-end of the CERN accelerator Logging Service, receives billions of time-series data per day for 200,000+ signals. A variety of data acquisition systems on hundreds of front-end computers publish source data that eventually end up being logged in the Measurement database. As part of a federated approach to data management, information about source devices is defined in a Configuration database, whilst the signals to be logged are defined in the Measurement database. A mapping, which is often complex and subject to change and extension, is therefore required in order to subscribe to the source devices and write the published data to the corresponding named signals. Since 2005, this mapping was done by means of dozens of XML files, which were manually maintained by multiple persons, resulting in a configuration that was error-prone. In 2010 this configuration was improved, such that it is now fully centralized in the Measurement database, significantly reducing the complexity and the number of actors in the process. Furthermore, logging processes immediately pick up modified configurations via JMS-based notifications sent directly from the database, allowing targeted device subscription updates rather than a full process restart as was required previously. This paper describes the architecture and the benefits of the current implementation, as well as the next steps on the road to a fully federated solution.
 
MOPKN010 Database and Interface Modifications: Change Management Without Affecting the Clients controls, interface, software, operation 106
 
  • M. Peryt, R. Billen, M. Martin Marquez, Z. Zaharieva
    CERN, Geneva, Switzerland
 
  The first Oracle-based Controls Configuration Database (CCDB) was developed in 1986, by which the controls system of CERN's Proton Synchrotron became data-driven. Since then, this mission-critical system has evolved tremendously going through several generational changes in terms of the increasing complexity of the control system, software technologies and data models. Today, the CCDB covers the whole CERN accelerator complex and satisfies a much wider range of functional requirements. Despite its online usage, everyday operations of the machines must not be disrupted. This paper describes our approach with respect to dealing with change while ensuring continuity. How do we manage the database schema changes? How do we take advantage of the latest web deployed application development frameworks without alienating the users? How do we minimize impact on the dependent systems connected to databases through various API's? In this paper we will provide our answers to these questions, and to many more.  
 
MOPKN011 CERN Alarms Data Management: State & Improvements laser, data-management, controls, operation 110
 
  • Z. Zaharieva, M. Buttner
    CERN, Geneva, Switzerland
 
The CERN Alarms System - LASER - is a centralized service ensuring the capture, storage and notification of anomalies for the whole accelerator chain, including the technical infrastructure at CERN. The underlying database holds the pre-defined configuration data for the alarm definitions and for the operators' alarm consoles, as well as the time-stamped, run-time alarm events propagated through the Alarms System. The article discusses the current state of the Alarms database and recent improvements that have been introduced. It looks into the data management challenges related to the alarms configuration data, which is taken from numerous sources. Specially developed ETL processes must be applied to this data in order to transform it into an appropriate format and load it into the Alarms database. The recorded alarm events, together with some additional data necessary for providing event statistics to users, are transferred to the long-term alarms archive. The article also covers the data management challenges related to the recently developed suite of data management interfaces, with respect to keeping the alarms configuration data coming from external data sources consistent with the data modifications introduced by the end-users.
Poster MOPKN011 [4.790 MB]
 
MOPKN015 Managing Information Flow in ALICE detector, distributed, controls, monitoring 124
 
  • O. Pinazza
    INFN-Bologna, Bologna, Italy
  • A. Augustinus, P.Ch. Chochula, L.S. Jirdén, A.N. Kurepin, M. Lechman, P. Rosinský
    CERN, Geneva, Switzerland
  • G. De Cataldo
    INFN-Bari, Bari, Italy
  • A. Moreno
    Universidad Politécnica de Madrid, E.T.S.I Industriales, Madrid, Spain
 
  ALICE is one of the experiments at the Large Hadron Collider (LHC), CERN (Geneva, Switzerland). The ALICE detector control system is an integrated system collecting 18 different subdetectors' controls and general services and is implemented using the commercial SCADA package PVSS. Information of general interest, beam and ALICE condition data, together with data related to shared plants or systems, are made available to all the subsystems through the distribution capabilities of PVSS. Great care has been taken during the design and implementation to build the control system as a hierarchical system, limiting the interdependencies of the various subsystems. Accessing remote resources in a PVSS distributed environment is very simple, and can be initiated unilaterally. In order to improve the reliability of distributed data and to avoid unforeseen dependencies, the ALICE DCS group has enforced the centralization of the publication of global data and other specific variables requested by the subsystems. As an example, a specific monitoring tool will be presented that has been developed in PVSS to estimate the level of interdependency and to understand the optimal layout of the distributed connections, allowing for an interactive visualization of the distribution topology.  
Poster MOPKN015 [2.585 MB]
 
MOPKN016 Tango Archiving Service Status TANGO, controls, GUI, insertion 127
 
  • G. Abeillé, J. Guyot, M. Ounsy, S. Pierre-Joseph Zéphir
    SOLEIL, Gif-sur-Yvette, France
  • R. Passuello, G. Strangolino
    ELETTRA, Basovizza, Italy
  • S. Rubio-Manrique
    CELLS-ALBA Synchrotron, Cerdanyola del Vallès, Spain
 
In modern scientific installations like ALBA, ELETTRA or Synchrotron Soleil, the monitoring and tuning of thousands of parameters is essential to drive high-performing accelerators and beamlines. To keep track of these parameters and to easily manage large volumes of technical data, an archiving service is a key component of a modern control system like Tango [1]. To this end, a high-availability archiving service is provided as a feature of the Tango control system. This archiving service stores data coming from the Tango control system into MySQL [2] or Oracle [3] databases. Three sub-services are provided: a historical service with an archiving period down to 10 seconds; a short-term service providing a few weeks of retention with a period down to 100 milliseconds; and a snapshot service which takes "pictures" of Tango parameters and can reapply them to the control system on user demand. This paper presents how to obtain a high-performance and scalable service, based on our feedback after years of operation. The deployment architecture in the different Tango institutes is then detailed. The paper concludes with a description of the next steps and upcoming features which will become available in the near future. A minimal snapshot-style sketch in Python is given after the reference list below.
[1] http://www.tango-controls.org/
[2] http://www.mysql.com/
[3] http://www.oracle.com/us/products/database/index.html
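The following is a minimal, hypothetical illustration of the snapshot idea described in the abstract (take a "picture" of a set of Tango attributes and reapply it later), written with PyTango. The attribute names are invented, and the real service persists snapshots in MySQL/Oracle rather than in memory.

```python
import tango  # PyTango

ATTRS = ["sr/ps/dipole-01/current", "sr/ps/quad-01/current"]  # illustrative

def take_snapshot():
    """Read the current value of each attribute into a dict."""
    snap = {}
    for full in ATTRS:
        dev, attr = full.rsplit("/", 1)
        snap[full] = tango.DeviceProxy(dev).read_attribute(attr).value
    return snap

def apply_snapshot(snap):
    """Write the stored values back to the control system on demand."""
    for full, value in snap.items():
        dev, attr = full.rsplit("/", 1)
        tango.DeviceProxy(dev).write_attribute(attr, value)
```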
 
 
MOPKN017 From Data Storage towards Decision Making: LHC Technical Data Integration and Analysis monitoring, operation, beam-losses, Windows 131
 
  • A. Marsili, E.B. Holzer, A. Nordt, M. Sapinski
    CERN, Geneva, Switzerland
 
The monitoring of the beam conditions, equipment conditions and measurements from the beam instrumentation devices in CERN's Large Hadron Collider (LHC) produces more than 100 Gb/day of data. Such a large quantity of data is unprecedented in accelerator monitoring, and new developments are needed to access, process, combine and analyse data from different pieces of equipment. The Beam Loss Monitoring (BLM) system has been one of the most reliable pieces of equipment in the LHC during its 2010 run, issuing beam dumps when the detected losses were above the defined abort thresholds. Furthermore, the BLM system was able to detect and study unexpected losses, requiring intensive offline analysis. This article describes the techniques developed to: access the data produced (~50,000 values/s); access relevant system layout information; and access, combine and display different machine data.
Poster MOPKN017 [0.411 MB]
 
MOPKN019 ATLAS Detector Control System Data Viewer interface, framework, controls, experiment 137
 
  • C.A. Tsarouchas, S.A. Roe, S. Schlenker
    CERN, Geneva, Switzerland
  • U.X. Bitenc, M.L. Fehling-Kaschek, S.X. Winkelmann
    Albert-Ludwig Universität Freiburg, Freiburg, Germany
  • S.X. D'Auria
    University of Glasgow, Glasgow, United Kingdom
  • D. Hoffmann, O.X. Pisano
    CPPM, Marseille, France
 
The ATLAS experiment at CERN is one of the four Large Hadron Collider experiments. ATLAS uses a commercial SCADA system (PVSS) for its Detector Control System (DCS), which is responsible for the supervision of the detector equipment, the reading of operational parameters, the propagation of alarms and the archiving of important operational data in a relational database. DCS Data Viewer (DDV) is an application that provides access to historical data of DCS parameters written to the database through a web interface. It has a modular and flexible design and is structured using a client-server architecture. The server can be operated stand-alone with a command-line interface to the data, while the client offers a user-friendly, browser-independent interface. The selection of the metadata of DCS parameters is done via a column-tree view or with a powerful search engine. The final visualisation of the data is done using various plugins such as "value over time" charts, data tables, raw ASCII or structured export to ROOT. Excessive access or malicious use of the database is prevented by dedicated protection mechanisms, allowing the exposure of the tool to hundreds of inexperienced users. The metadata selection and data output features can be used separately by means of XML configuration files. Security constraints have been taken into account in the implementation, allowing access to DDV by collaborators worldwide. Due to its flexible interface and its generic and modular approach, DDV could easily be used for other experiment control systems that archive data using PVSS.
Poster MOPKN019 [0.938 MB]
 
MOPKN021 Asynchronous Data Change Notification between Database Server and Accelerator Control Systems controls, target, software, EPICS 144
 
  • W. Fu, J. Morris, S. Nemesure
    BNL, Upton, Long Island, New York, USA
 
  Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy.
Database data change notification (DCN) is a commonly used feature. Not all database management systems (DBMS) provide an explicit DCN mechanism. Even for those DBMS's which support DCN (such as Oracle and MS SQL server), some server side and/or client side programming may be required to make the DCN system work. This makes the setup of DCN between database server and interested clients tedious and time consuming. In accelerator control systems, there are many well established software client/server architectures (such as CDEV, EPICS, and ADO) that can be used to implement data reflection servers that transfer data asynchronously to any client using the standard SET/GET API. This paper describes a method for using such a data reflection server to set up asynchronous DCN (ADCN) between a DBMS and clients. This method works well for all DBMS systems which provide database trigger functionality.
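As a conceptual illustration of the trigger-plus-reflection-server pattern described above (not BNL's actual implementation), the sketch below uses SQLite as a stand-in DBMS and a plain callback list as a stand-in for a CDEV/EPICS/ADO reflection server; table and function names are invented.

```python
import sqlite3

subscribers = []   # clients registered through the "reflection server"

def publish(table, name, value):
    for callback in subscribers:
        callback(table, name, value)   # a real server would push asynchronously

conn = sqlite3.connect(":memory:")
# Expose the publisher as an SQL function so a trigger can call it.
conn.create_function("notify_reflector", 3, lambda t, n, v: publish(t, n, v))
conn.executescript("""
    CREATE TABLE settings (name TEXT PRIMARY KEY, value REAL);
    CREATE TRIGGER settings_dcn AFTER UPDATE ON settings
    BEGIN
        SELECT notify_reflector('settings', NEW.name, NEW.value);
    END;
""")

subscribers.append(lambda t, n, v: print(f"DCN: {t}.{n} -> {v}"))
conn.execute("INSERT INTO settings VALUES ('rf_voltage', 5.0)")
conn.execute("UPDATE settings SET value = 5.5 WHERE name = 'rf_voltage'")
```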
 
Poster MOPKN021 [0.355 MB]
 
MOPKN024 The Integration of the LHC Cryogenics Control System Data into the CERN Layout Database controls, cryogenics, instrumentation, interface 147
 
  • E. Fortescue-Beck, R. Billen, P. Gomes
    CERN, Geneva, Switzerland
 
  The Large Hadron Collider's Cryogenic Control System makes extensive use of several databases to manage data appertaining to over 34,000 cryogenic instrumentation channels. This data is essential for populating the firmware of the PLCs which are responsible for maintaining the LHC at the appropriate temperature. In order to reduce the number of data sources and the overall complexity of the system, the databases have been rationalised and the automatic tool, that extracts data for the control software, has been simplified. This paper describes the main improvements that have been made and evaluates the success of the project.  
 
MOPKN025 Integrating the EPICS IOC Log into the CSS Message Log EPICS, controls, network, monitoring 151
 
  • K.-U. Kasemir, E. Danilova
    ORNL, Oak Ridge, Tennessee, USA
 
  Funding: SNS is managed by UT-Battelle, LLC, under contract DE-AC05-00OR22725 for the U.S. Department of Energy
The Experimental Physics and Industrial Control System (EPICS) includes the "IOCLogServer", a tool that logs error messages from front-end computers (Input/Output Controllers, IOCs) into a set of text files. Control System Studio (CSS) includes a distributed message logging system with relational database persistence and various log analysis tools. We implemented a log server that forwards IOC messages to the CSS log database, allowing several ways of monitoring and analyzing the IOC error messages.
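A hedged sketch of the forwarding idea described above: the IOC log protocol is essentially one text line per message sent over TCP (commonly to port 7004), so a small server can receive those lines and write them to a relational table. The table layout and SQLite back end here are stand-ins; the real implementation writes to the CSS message log database.

```python
import socketserver, sqlite3, time

db = sqlite3.connect("msglog.db", check_same_thread=False)
db.execute("CREATE TABLE IF NOT EXISTS messages (ts REAL, host TEXT, text TEXT)")

class LogHandler(socketserver.StreamRequestHandler):
    def handle(self):
        for raw in self.rfile:                         # one IOC message per line
            text = raw.decode("ascii", "replace").rstrip()
            db.execute("INSERT INTO messages VALUES (?, ?, ?)",
                       (time.time(), self.client_address[0], text))
            db.commit()

if __name__ == "__main__":
    socketserver.ThreadingTCPServer(("", 7004), LogHandler).serve_forever()
```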
 
Poster MOPKN025 [4.006 MB]
 
MOPKN027 BDNLS - BESSY Device Name Location Service controls, EPICS, interface, target 154
 
  • D.B. Engel, P. Laux, R. Müller
    HZB, Berlin, Germany
 
Initially the relational database (RDB) for control system configuration at BESSY was built around the device concept [1]. Maintenance and consistency issues, as well as the complexity of the scripts generating the configuration data, triggered the development of a novel, generic RDB structure based on hierarchies of named nodes with attribute/value pairs [2]. Unfortunately, it turned out that the usability of this generic RDB structure for comprehensive configuration management relies totally on sophisticated data maintenance tools. Against this background BDNS, a new database management tool, has been developed within the framework of the Eclipse Rich Client Platform. It uses the Model View Controller (MVC) layer of JFace to cleanly dissect retrieval processes, data paths, data visualization and updating. It is based on extensible configurations described in XML, allowing SQL calls to be chained and profiles to be composed for various use cases. It solves the problem of forwarding data keys to subsequent SQL statements. BDNS and its potential to map various levels of complexity into the XML configurations make it possible to provide easily usable, tailored database access to the configuration maintainers for the different underlying database structures. Being based on Eclipse, the integration of BDNS into Control System Studio is straightforward.
[1] T. Birke et al.: Relational Database for Controls Configuration Management, IADBG Workshop 2001, San Jose.
[2] T. Birke et al.: Beyond Devices - An Improved RDB Data-Model for Configuration Management, ICALEPCS 2005, Geneva.
 
Poster MOPKN027 [0.210 MB]
 
MOPKN029 Design and Implementation of the CEBAF Element Database interface, controls, software, hardware 157
 
  • T. L. Larrieu, M.E. Joyce, C.J. Slominski
    JLAB, Newport News, Virginia, USA
 
  Funding: Authored by Jefferson Science Associates, LLC under U.S. DOE Contract No. DE-AC05-06OR23177.
With the inauguration of the CEBAF Element Database (CED) in Fall 2010, Jefferson Lab computer scientists have taken a first step toward the eventual goal of a model-driven accelerator. Once fully populated, the database will be the primary repository of information used for everything from generating lattice decks to booting front-end computers to building controls screens. A particular requirement influencing the CED design is that it must provide consistent access to not only present, but also future, and eventually past, configurations of the CEBAF accelerator. To accomplish this, an introspective database schema was designed that allows new elements, element types, and element properties to be defined on the fly without changing the table structure; a minimal sketch of this idea is given below. When used in conjunction with the Oracle Workspace Manager, it allows users to seamlessly query data from any time in the database history with exactly the same tools as they use for querying the present configuration. Users can also check out workspaces and use them as staging areas for upcoming machine configurations. All access to the CED is through a well-documented API that is translated automatically from the original C++ into native libraries for scripting languages such as Perl, PHP, and Tcl, making access to the CED easy and ubiquitous.
The U.S. Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce this manuscript for U.S. Government purposes.
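The sketch referred to above is a toy entity-attribute-value style illustration of an introspective schema, using SQLite; it is not the actual CED schema and omits the Oracle Workspace Manager versioning layer. Table and type names are invented.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE element_type (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
    CREATE TABLE property      (id INTEGER PRIMARY KEY, type_id INTEGER,
                                name TEXT, UNIQUE(type_id, name));
    CREATE TABLE element       (id INTEGER PRIMARY KEY, type_id INTEGER, name TEXT);
    CREATE TABLE element_value (element_id INTEGER, property_id INTEGER, value TEXT);
""")

def define_type(name, properties):
    """Define a new element type and its properties as rows - no ALTER TABLE."""
    tid = db.execute("INSERT INTO element_type(name) VALUES (?)", (name,)).lastrowid
    for p in properties:
        db.execute("INSERT INTO property(type_id, name) VALUES (?, ?)", (tid, p))
    return tid

# A new kind of element is added on the fly, with the table structure unchanged.
define_type("Quadrupole", ["length", "field_gradient", "s_position"])
```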
 
Poster MOPKN029 [5.239 MB]
 
MOPKS029 The CODAC Software Distribution for the ITER Plant Systems software, controls, EPICS, operation 227
 
  • F. Di Maio, L. Abadie, C.S. Kim, K. Mahajan, P. Makijarvi, D. Stepanov, N. Utzel, A. Wallander
    ITER Organization, St. Paul lez Durance, France
 
Most of the systems that constitute the ITER plant will be built and supplied by the seven ITER domestic agencies. These plant systems will require their own Instrumentation and Control (I&C), which will be procured by the various suppliers. To improve the homogeneity of these plant system I&C, the CODAC group, which is in charge of the ITER control system, promotes standardized solutions at project level and makes available, in support of these standards, the software for the development and testing of the plant system I&C. The CODAC Core System is built by the ITER Organization and distributed to all ITER partners. It includes the ITER standard operating system, RHEL, and the ITER standard control framework, EPICS, as well as some ITER-specific tools, mostly for configuration management, and ITER-specific software modules, such as drivers for standard I/O boards. A process for distribution and support has been in place since the first release in February 2010 and has been continuously improved to support the development and distribution of subsequent versions.
Poster MOPKS029 [1.209 MB]
 
MOPMN003 A Bottom-up Approach to Automatically Configured Tango Control Systems. controls, TANGO, vacuum, hardware 239
 
  • S. Rubio-Manrique, D.B. Beltrán, I. Costa, D.F.C. Fernández-Carreiras, J.V. Gigante, J. Klora, O. Matilla, R. Ranz, J. Ribas, O. Sanchez
    CELLS-ALBA Synchrotron, Cerdanyola del Vallès, Spain
 
Alba maintains a central repository, the so-called "Cabling and Controls Database" (CCDB), which keeps the inventory of equipment, cables, connections and their configuration and technical specifications. The valuable information kept in this MySQL database enables tools to automatically create and configure the Tango devices and other software components of the control systems of the accelerators, beamlines and laboratories. This paper describes the process involved in this automatic setup.
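A hedged sketch of the general pattern described above (not ALBA's actual scripts): read an equipment inventory from MySQL and declare the corresponding Tango devices in the Tango database with PyTango. The table, column and device names are assumptions for illustration.

```python
import tango                      # PyTango
import MySQLdb                    # mysqlclient / MySQL-python

ccdb = MySQLdb.connect(db="ccdb", user="reader")
cur = ccdb.cursor()
cur.execute("SELECT device_name, device_class, server_instance FROM equipment")

tango_db = tango.Database()
for name, dev_class, server in cur.fetchall():
    info = tango.DbDevInfo()
    info.name = name              # e.g. "sr/vc/ion-pump-01" (illustrative)
    info._class = dev_class       # e.g. "IonPumpController"
    info.server = server          # e.g. "PyPLC/sector01"
    tango_db.add_device(info)     # create or update the device declaration
```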
Poster MOPMN003 [0.922 MB]
 
MOPMN015 Multi Channel Applications for Control System Studio (CSS) controls, EPICS, operation, storage-ring 271
 
  • K. Shroff, G. Carcassi
    BNL, Upton, Long Island, New York, USA
  • R. Lange
    HZB, Berlin, Germany
 
  Funding: Work supported by U.S. Department of Energy
This talk presents a set of applications for CSS built on top of the services provided by ChannelFinder, a directory service for control systems, and PVManager, a client library for data manipulation and aggregation. ChannelFinder Viewer allows the ChannelFinder service to be queried and the results to be sorted and tagged. Multi Channel Viewer allows the creation of plots from the live data of a group of channels.
 
Poster MOPMN015 [0.297 MB]
 
MOPMN022 Database Driven Control System Configuration for the PSI Proton Accelerator Facilities controls, EPICS, proton, hardware 289
 
  • H. Lutz, D. Anicic
    Paul Scherrer Institut, 5232 Villigen PSI, Switzerland
 
At PSI there are two facilities with proton cyclotron accelerators. The machine control system for PROSCAN, which is used for medical patient therapy, runs on EPICS. The High Intensity Proton Accelerator (HIPA) mostly runs under the in-house control system ACS; dedicated parts of HIPA are under EPICS control. Both facilities are configured through an Oracle database application suite. This paper presents the concepts and tools which are used to configure the control system directly from the database-stored configurations. Such an approach has advantages which contribute to better control system reliability, overview and consistency.
Poster MOPMN022 [0.992 MB]
 
MOPMN027 The LHC Sequencer GUI, controls, operation, injection 300
 
  • R. Alemany-Fernandez, V. Baggiolini, R. Gorbonosov, D. Khasbulatov, M. Lamont, P. Le Roux, C. Roderick
    CERN, Geneva, Switzerland
 
The Large Hadron Collider (LHC) at CERN is a highly complex system made of many different sub-systems whose operation implies the execution of many tasks with stringent constraints on the order and duration of the execution. To be able to operate such a system in the most efficient and reliable way, the operators in the CERN control room use a high-level control system: the LHC Sequencer. The LHC Sequencer system is composed of several components, including an Oracle database where operational sequences are configured, a core server that orchestrates the execution of the sequences, and two graphical user interfaces: one for sequence editing, and another for sequence execution. This paper describes the architecture of the LHC Sequencer system, and how the sequences are prepared and used for LHC operation.
Poster MOPMN027 [2.163 MB]
 
MOPMN029 Spiral2 Control Command: First High-level Java Applications Based on the OPEN-XAL Library software, controls, EPICS, ion 308
 
  • P. Gillette, E. Lemaître, G. Normand, L. Philippe
    GANIL, Caen, France
 
The Radioactive Ion Beam facility SPIRAL2 will be based on a superconducting driver providing deuteron or heavy-ion beams at different energies and intensities. Using the ISOL method, exotic nuclei beams will then be sent either to new physics facilities or to the existing GANIL experimental areas. To tune this large range of beams, high-level applications will be mainly developed in Java. The choice of the OPEN-XAL application framework, developed at the Spallation Neutron Source (SNS), has proven to be very efficient and greatly helps us to design our first software pieces to tune the accelerator. The first part of this paper presents some new applications: "Minimisation", which aims at optimizing a section of the accelerator; a general-purpose piece of software named "Hook" for interacting with equipment of any kind; and an application called "Profils" to visualize and control the SPIRAL2 beam wire harps. As tuning operations have to deal with configuration and archiving issues, databases are an effective way to manage data. Therefore, two databases are being developed to address these problems for the SPIRAL2 command and control: one is in charge of device configuration upstream of the EPICS databases, while another is in charge of accelerator configuration (lattice, optics and sets of values). The last part of this paper describes these databases and how Java applications will interact with them.
Poster MOPMN029 [1.654 MB]
 
MOPMS004 First Experience with VMware Servers at HLS controls, hardware, brilliance, network 323
 
  • G. Liu, X. Bao, C. Li, J.G. Wang, K. Xuan
    USTC/NSRL, Hefei, Anhui, People's Republic of China
 
The Hefei Light Source (HLS) is a dedicated second-generation VUV light source, which was designed and constructed two decades ago. In order to improve the performance of HLS, especially to obtain higher brilliance and to increase the number of straight sections, an upgrade project is under way; accordingly, the new control system is under construction. VMware vSphere 4 Enterprise Plus is used to construct the server system for the HLS control system. Four DELL PowerEdge R710 rack servers and one DELL EqualLogic PS6000E iSCSI SAN comprise the hardware platform. Several kinds of servers, such as the file server, web server, database server and NIS servers, together with the softIOC applications, are all integrated into this virtualization platform. A prototype softIOC has been set up, and its performance is also given in this paper. High availability and flexibility are achieved at low cost.
Poster MOPMS004 [0.463 MB]
 
MOPMS007 Deep-Seated Cancer Treatment Spot-Scanning Control System heavy-ion, hardware, ion, controls 333
 
  • W. Zhang, S. An, G.H. Li, W.F. Liu, W.M. Qiao, Y.P. Wang, F. Yang
    IMP, Lanzhou, People's Republic of China
 
The system is mainly composed of the following hardware: a waveform-scanning power supply controller driven by the data for a given waveform, dose-controlled counting cards, and an event generator system. The software consists of the following components: a system generating the tumor shape and the corresponding waveform data, the waveform controller (ARM and DSP) programs, the FPGA code of the counting cards, and a COM program for event and data synchronization during transmission.
 
MOPMS009 IFMIF LLRF Control System Architecture Based on Epics EPICS, controls, LLRF, interface 339
 
  • J.C. Calvo, A. Ibarra, A. Salom
    CIEMAT, Madrid, Spain
  • M.A. Patricio
    UCM, Colmenarejo, Spain
  • M.L. Rivers
    ANL, Argonne, USA
 
The IFMIF-EVEDA (International Fusion Materials Irradiation Facility - Engineering Validation and Engineering Design Activity) linear accelerator will be a 9 MeV, 125 mA CW (Continuous Wave) deuteron accelerator prototype to validate the technical options of the accelerator design for IFMIF. The RF (Radio Frequency) power system of IFMIF-EVEDA consists of 18 RF chains working at 175 MHz with three amplification stages each; each of the chains required for the accelerator prototype is based on several 175 MHz amplification stages. The LLRF system provides the RF drive input of the RF plants. It controls the amplitude and phase of this signal to be synchronized with the beam, and it also controls the resonance frequency of the cavities. The system is based on a commercial cPCI FPGA board provided by Lyrtech and controlled by a Windows host PC. For this purpose, it is mandatory to interface the cPCI FPGA board with EPICS Channel Access, building an IOC (Input/Output Controller) between the Lyrtech board and EPICS. A new software architecture to design the device support, using the asynPortDriver class and CSS as a GUI (Graphical User Interface), is presented.
Poster MOPMS009 [2.763 MB]
 
MOPMS024 Evolution of the Argonne Tandem Linear Accelerator System (ATLAS) Control System controls, software, distributed, hardware 371
 
  • M.A. Power, F.H. Munson
    ANL, Argonne, USA
 
  Funding: This work was supported by the U.S. Department of Energy, Office of Nuclear Physics, under Contract No. DE-AC02-06CH11357.
Given that the Argonne Tandem Linac Accelerator System (ATLAS) recently celebrated its 25th anniversary, this paper will explore the past, present and future of the ATLAS Control System and how it has evolved along with the accelerator and control system technology. ATLAS as we know it today, originated with a Tandem Van de Graff in the 1960's. With the addition of the Booster section in the late 1970's, came the first computerized control. ATLAS itself was placed into service on June 25, 1985 and was the world's first superconducting linear accelerator for ions. Since its dedication as a National User Facility, more than a thousand experiments by more than 2,000 users world-wide, have taken advantage of the unique capabilities it provides. Today, ATLAS continues to be a user facility for physicists who study the particles that form the heart of atoms. Its most recent addition, CARIBU (Californium Rare Isotope Breeder Upgrade), creates special beams that feed into ATLAS. ATLAS is similar to a living organism, changing and responding to new technological challenges and research needs. As it continues to evolve, so does the control system: from the original days using a DEC PDP-11/34 computer and 2 CAMAC crates, to a DEC Alpha computer running Vsystem software and more than twenty CAMAC crates, to distributed computers and VME systems. Future upgrades are also in the planning stages that will continue to evolve the control system.
 
Poster MOPMS024 [2.845 MB]
 
MOPMS030 Improvement of the Oracle Setup and Database Design at the Heidelberg Ion Therapy Center ion, controls, operation, hardware 393
 
  • K. Höppner, Th. Haberer, J.M. Mosthaf, A. Peters
    HIT, Heidelberg, Germany
  • G. Fröhlich, S. Jülicher, V.RW. Schaa, W. Schiebel, S. Steinmetz
    GSI, Darmstadt, Germany
  • M. Thomas, A. Welde
    Eckelmann AG, Wiesbaden, Germany
 
  The HIT (Heidelberg Ion Therapy) center is an accelerator facility for cancer therapy using both carbon ions and protons, located at the university hospital in Heidelberg. It provides three therapy treatment rooms: two with fixed beam exit (both in clinical use), and a unique gantry with a rotating beam head, currently under commissioning. The backbone of the proprietary accelerator control system consists of an Oracle database running on a Windows server, storing and delivering data of beam cycles, error logging, measured values, and the device parameters and beam settings for about 100,000 combinations of energy, beam size and particle number used in treatment plans. Since going operational, we found some performance problems with the current database setup. Thus, we started an analysis in cooperation with the industrial supplier of the control system (Eckelmann AG) and the GSI Helmholtzzentrum für Schwerionenforschung. It focused on the following topics: hardware resources of the DB server, configuration of the Oracle instance, and a review of the database design that underwent several changes since its original design. The analysis revealed issues on all fields. The outdated server will be replaced by a state-of-the-art machine soon. We will present improvements of the Oracle configuration, the optimization of SQL statements, and the performance tuning of database design by adding new indexes which proved directly visible in accelerator operation, while data integrity was improved by additional foreign key constraints.  
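The abstract above mentions adding new indexes for performance and additional foreign-key constraints for data integrity. The snippet below is a generic illustration of that kind of schema tuning through cx_Oracle; the connection string, table and column names are invented and do not describe the HIT schema.

```python
import cx_Oracle

conn = cx_Oracle.connect("acc_control/secret@hitdb")   # hypothetical DSN
cur = conn.cursor()
# An index for a frequently filtered column (hot query path).
cur.execute("""CREATE INDEX idx_cycle_time
               ON beam_cycle (cycle_start)""")
# A foreign-key constraint to protect referential integrity.
cur.execute("""ALTER TABLE measured_value
               ADD CONSTRAINT fk_mv_cycle
               FOREIGN KEY (cycle_id) REFERENCES beam_cycle (cycle_id)""")
conn.commit()
```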
Poster MOPMS030 [2.014 MB]
 
MOPMS037 A Customizable Platform for High-availability Monitoring, Control and Data Distribution at CERN monitoring, controls, software, hardware 418
 
  • M. Brightwell, M. Bräger, A. Lang, A. Suwalska
    CERN, Geneva, Switzerland
 
  In complex operational environments, monitoring and control systems are asked to satisfy ever more stringent requirements. In addition to reliability, the availability of the system has become crucial to accommodate for tight planning schedules and increased dependencies to other systems. In this context, adapting a monitoring system to changes in its environment and meeting requests for new functionalities are increasingly challenging. Combining maintainability and high-availability within a portable architecture is the focus of this work. To meet these increased requirements, we present a new modular system developed at CERN. Using the experience gained from previous implementations, the new platform uses a multi-server architecture to allow running patches and updates to the application without affecting its availability. The data acquisition can also be reconfigured without any downtime or potential data loss. The modular architecture builds on a core system that aims to be reusable for multiple monitoring scenarios, while keeping each instance as lightweight as possible. Both for cost and future maintenance concerns, open and customizable technologies have been preferred.  
 
MOPMU005 Overview of the Spiral2 Control System Progress controls, EPICS, ion, interface 429
 
  • E. Lécorché, P. Gillette, C.H. Haquin, E. Lemaître, L. Philippe, D.T. Touchard
    GANIL, Caen, France
  • J.F. Denis, F. Gougnaud, J.-F. Gournay, Y. Lussignol, P. Mattei
    CEA/DSM/IRFU, France
  • P.G. Graehling, J.H. Hosselet, C. Maazouzi
    IPHC, Strasbourg Cedex 2, France
 
SPIRAL2, whose construction physically started at the beginning of this year at GANIL (Caen, France), will be a new Radioactive Ion Beam facility to extend scientific knowledge in nuclear physics, astrophysics and interdisciplinary research. The project consists of a high-intensity multi-ion accelerator driver delivering beams to a high-power production system to generate the Radioactive Ion Beams, which are then post-accelerated and used within the existing GANIL complex. Resulting from the collaboration between several laboratories, EPICS has been adopted as the standard framework for the control command system. At the lower level, pieces of equipment are handled through VME/VxWorks chassis or directly interfaced using the Modbus/TCP protocol; in addition, Siemens programmable logic controllers are tightly coupled to the control system, being in charge of specific devices or hardware safety systems. The graphical user interface layer integrates both standard EPICS client tools (EDM, CSS under evaluation, etc.) and specific high-level applications written in Java, also deriving developments from the XAL framework. Relational databases are involved in the control system for equipment configuration (foreseen), machine representation and configuration, CSS archivers (under evaluation) and IRMIS (mainly for process variable description). The first components of the SPIRAL2 control system are now used in operation within the context of the ion and deuteron source test platforms. The paper also describes how software development and sharing is managed within the collaboration.
Poster MOPMU005 [2.093 MB]
 
MOPMU006 The Commissioning of the Control System of the Accelerators and Beamlines at the Alba Synchrotron controls, TANGO, booster, project-management 432
 
  • D.F.C. Fernández-Carreiras, F. Becheri, S. Blanch, A. Camps, T.M. Coutinho, G. Cuní, J.V. Gigante, J.J. Jamroz, J. Klora, J. Lidón-Simon, O. Matilla, J. Metge, A. Milán, J. Moldes, R. Montaño, M. Niegowski, C. Pascual-Izarra, S. Pusó, Z. Reszela, A. Rubio, S. Rubio-Manrique, A. Ruz
    CELLS-ALBA Synchrotron, Cerdanyola del Vallès, Spain
 
Alba is a third-generation synchrotron located near Barcelona in Spain. The final commissioning of all accelerators and beamlines started on the 8th of March 2011. The Alba control system is based on the middle layer and tools provided by TANGO. It extensively uses the Sardana framework, including the Taurus graphical toolkit, based on Python and Qt. The control system of Alba is highly distributed. The design choices made five years ago have been validated during the commissioning. Alba makes extensive use of Ethernet as a fieldbus, and combines diskless machines running Tango on Linux and Windows with specific hardware based on FPGAs and fiber optics for fast real-time transmission and synchronization. B&R PLCs, which are robust, reliable and cost-effective, are widely used in the different components of the machine protection system. In order to meet the requirements in terms of speed, these PLCs are sometimes combined with the MRF timing system for the fast interlocks. This paper describes the design, requirements, challenges and lessons learnt in the installation and commissioning of the control system.
Poster MOPMU006 [24.241 MB]
 
MOPMU011 The Design Status of CSNS Experimental Control System controls, EPICS, software, neutron 446
 
  • J. Zhuang, Y.P. Chu, L.B. Ding, L. Hu, D.P. Jin, J.J. Li, Y.L. Liu, Y.Q. Liu, Y.H. Zhang, Z.Y. Zhang, K.J. Zhu
    IHEP Beijing, Beijing, People's Republic of China
 
To meet the increasing demand from the user community, China has decided to build a world-class spallation neutron source, called CSNS (China Spallation Neutron Source). It will provide users with a neutron scattering platform with high flux, wide wavelength range and high efficiency. CSNS construction is expected to start in 2011 and will last 6.5 years. The control system of CSNS is divided into an accelerator control system and an experimental control system. The CSNS experimental control system is based on the EPICS architecture, offering device operation and device debugging interfaces, communication between devices, environment monitoring, machine and personnel protection, an interface to the accelerator system, control system monitoring and a database service. The whole control system is divided into four parts: the front-end control layer, the EPICS global control layer, the database and the network services. The front-end control layer is based on YOKOGAWA PLCs and other controllers. The EPICS layer provides all system control and information exchange. The embedded YOKOGAWA RP61 PLC is being considered as the communication node between the front-end layer and the EPICS layer. The database service provides system configuration and historical data; from the experience of BESIII, MySQL is an option. The system will be developed in Dongguan, Guangdong province, and in Beijing, so a VPN will be used to support the development. Nine people are currently working on this system. The system design is complete, and a prototype system is now being developed.
Poster MOPMU011 [0.224 MB]
 
MOPMU019 The Gateways of Facility Control for SPring-8 Accelerators controls, data-acquisition, framework, network 473
 
  • M. Ishii, T. Masuda, R. Tanaka, A. Yamashita
    JASRI/SPring-8, Hyogo-ken, Japan
 
We integrated the utilities data acquisition into the SPring-8 accelerator control system based on the MADOCA framework. Utilities data such as air temperature, power line voltage and the temperature of the machine cooling water are helpful for studying the correlation between beam stability and the environmental conditions. However, the accelerator control system had no way to acquire much of the utilities data managed by the facility control system, because the accelerator control system and the facility control system were independent systems without an interconnection. In 2010, we had a chance to replace the old facility control system. At that time, we constructed gateways between the MADOCA-based accelerator control system and the new facility control system, installing BACnet, a data communication protocol for Building Automation and Control Networks, as a fieldbus. The system requirements were as follows: to monitor utilities data with the required sampling rate and resolution, to store all acquired data in the accelerator database, to keep the accelerator control system and the facility control system independent of each other, and to allow future expansion so that the facilities can be controlled from the accelerator control system. During the work, we outsourced the construction of the gateways, including the MADOCA data-taking software, to cope with limited manpower and a short work period. In this paper we describe the system design and the outsourcing approach.
 
MOPMU032 An EPICS IOC Builder EPICS, hardware, controls, software 506
 
  • M.G. Abbott, T.M. Cobb
    Diamond, Oxfordshire, United Kingdom
 
An EPICS input/output controller (IOC) is typically assembled from a number of standard components, each with potentially quite complex hardware or software initialisation procedures, intermixed with a good deal of repetitive boilerplate code. Assembling and maintaining a complex IOC can be quite a difficult and error-prone process, particularly if the components are unfamiliar. The EPICS IOC builder is a Python library designed to automate the assembly of a complete IOC from a concise component-level description. The dependencies and interactions between components, as well as their detailed initialisation procedures, are automatically managed by the IOC builder through component description files maintained with the individual components. At Diamond Light Source we have a large library of components that can be assembled into EPICS IOCs. The IOC builder is also finding increasing use in helping non-expert users to assemble an IOC without specialist knowledge.
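The snippet below is not the Diamond IOC builder API; it is only a compact sketch of the idea described above: each component knows its own dbLoadRecords/initialisation boilerplate, and the IOC startup script is generated from a concise component-level description. Template and macro names are invented.

```python
class Component:
    def __init__(self, template, **macros):
        self.template, self.macros = template, macros
    def startup_lines(self):
        macro_str = ",".join(f"{k}={v}" for k, v in self.macros.items())
        return [f'dbLoadRecords("{self.template}", "{macro_str}")']

def build_ioc(components, filename="st.cmd"):
    lines = ['dbLoadDatabase("dbd/ioc.dbd")',
             'ioc_registerRecordDeviceDriver(pdbbase)']
    for c in components:
        lines += c.startup_lines()
    lines.append("iocInit()")
    with open(filename, "w") as f:
        f.write("\n".join(lines) + "\n")

# Concise description of the IOC; device and template names are made up.
build_ioc([Component("db/motor.template", P="BL01I", M="MOTOR1"),
           Component("db/vacuum.template", P="BL01I", GAUGE="GAUGE1")])
```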
Poster MOPMU032 [3.887 MB]
 
MOPMU033 ControlView to EPICS Conversion of the TRIUMF TR13 Cyclotron Control System EPICS, controls, TRIUMF, ISAC 510
 
  • D.B. Morris
    TRIUMF, Canada's National Laboratory for Particle and Nuclear Physics, Vancouver, Canada
 
  The TRIUMF TR13 Cyclotron Control System was developed in 1995 using Allen Bradley PLCs and ControlView. A console replacement project using the EPICS toolkit was started in Fall 2009 with the strict requirement that the PLC code not be modified. Access to the operating machine would be limited due to production schedules. A complete mock-up of the PLC control system was built, to allow parallel development and testing without interfering with the production system. The deployment allows both systems to operate simultaneously easing verification of all functions. A major modification was required to the EPICS Allen Bradley PLC5 Device Support software to support the original PLC programming schema. EDM screens were manually built to create similar displays to the original ControlView screens, reducing operator re-training. A discussion is presented on some of the problems encountered and their solutions.  
poster icon Poster MOPMU033 [2.443 MB]  
 
MOPMU039 ACSys in a Box controls, framework, site, Linux 522
 
  • C.I. Briegel, D. Finstrom, B. Hendricks, CA. King, R. Neswold, D.J. Nicklaus, J.F. Patrick, A.D. Petrov, C.L. Schumann, J.G. Smedinghoff
    Fermilab, Batavia, USA
 
  Funding: Operated by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the United States Department of Energy.
The Accelerator Control System at Fermilab has evolved to enable this relatively large control system to be encapsulated into a "box" such as a laptop. The goal was to provide a platform isolated from the "online" control system. This platform can be used internally for making major upgrades and modifications without impacting operations. It also provides a standalone environment for research and development, including a turnkey control system for collaborators. Over time, the code base running on Scientific Linux has enabled all the salient features of Fermilab's control system to be captured in an off-the-shelf laptop. The anticipated additional benefits of packaging the system include improved maintenance, reliability, documentation, and future enhancements.
 
 
TUAAULT03 BLED: A Top-down Approach to Accelerator Control System Design controls, lattice, operation, EPICS 537
 
  • J. Bobnar, K. Žagar
    COBIK, Solkan, Slovenia
 
  In many existing controls projects the central database/inventory was introduced late in the project, usually to support installation or maintenance activities, and was therefore constructed bottom-up by reverse engineering the installation. However, there are several benefits if the central database is introduced early in machine design: the system can be simulated as a whole without having all the IOCs in place, the database can serve as an input to the installation and commissioning plan, and it can act as an enforcer of certain conventions and quality processes. Based on our experience with control systems, we have designed a central database, BLED (Best and Leanest Ever Database), which stores all machine configuration and parameters as well as the control system configuration, inventory and cabling. The first implementation of BLED supports EPICS, meaning that it can store and generate EPICS templates and substitution files as well as archive, alarm and other configurations. With the goal of providing the functionality of several existing central databases (IRMIS, the SNS database, DBSF, etc.), considerable effort has been put into designing the database to handle extremely large set-ups consisting of millions of control system points. Furthermore, BLED also stores the lattice data, thus providing additional information (e.g. survey data) required by different engineering groups. The lattice import/export tools support, among others, the MAD and TraceWin formats, which are widely used in the machine design community.  
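  The following minimal sketch, which does not reflect the actual BLED schema, illustrates how EPICS substitution files can be generated from rows held in a central configuration database. The table and column names are invented; sqlite3 stands in for the production RDBMS.

    # Sketch only: generating an EPICS substitution file from configuration rows.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE cs_record (template TEXT, device TEXT, prefix TEXT)")
    con.executemany("INSERT INTO cs_record VALUES (?, ?, ?)", [
        ("vacuum_gauge.template", "VG01", "LINAC:"),
        ("vacuum_gauge.template", "VG02", "LINAC:"),
        ("bpm.template",          "BPM01", "RING:"),
    ])

    def write_substitutions(connection, path="generated.substitutions"):
        """Group records by template and emit EPICS substitution blocks."""
        rows = connection.execute(
            "SELECT template, device, prefix FROM cs_record ORDER BY template")
        by_template = {}
        for template, device, prefix in rows:
            by_template.setdefault(template, []).append((device, prefix))
        with open(path, "w") as out:
            for template, entries in by_template.items():
                out.write(f'file "{template}" {{\n')
                for device, prefix in entries:
                    out.write(f"    {{ DEVICE={device}, P={prefix} }}\n")
                out.write("}\n")

    write_substitutions(con)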
slides icon Slides TUAAULT03 [4.660 MB]  
 
TUAAULT04 Web-based Execution of Graphical Workflows : a Modular Platform for Multifunctional Scientific Process Automation controls, interface, synchrotron, framework 540
 
  • E. De Ley, D. Jacobs
    iSencia Belgium, Gent, Belgium
  • M. Ounsy
    SOLEIL, Gif-sur-Yvette, France
 
  The Passerelle process automation suite offers a fundamentally modular solution platform, based on a layered integration of several best-of-breed technologies. It has been successfully applied by Synchrotron Soleil as the sequencer for data acquisition and control processes on its beamlines, integrated with TANGO as the control bus and GlobalScreen as the SCADA package. Since last year it has also been used as the graphical workflow component for the development of an Eclipse-based Data Analysis Work Bench at the ESRF. The top layer of Passerelle exposes an actor-based development paradigm based on the Ptolemy framework (UC Berkeley). Actors provide explicit reusability and strong decoupling, combined with an inherently concurrent execution model. Actor libraries exist for TANGO integration, web services, database operations, flow control, rules-based analysis, mathematical calculations, launching external scripts, etc. Passerelle's internal architecture is based on OSGi, the major Java framework for modular service-based applications. A large set of modules exists that can be recombined as desired to obtain different features and deployment models. Besides desktop versions of the Passerelle workflow workbench, there is also the Passerelle Manager: a secured web application, including a graphical editor, for centralized design, execution, management and monitoring of process flows, integrating standard Java Enterprise services with OSGi. We present the internal technical architecture, some interesting application cases and the lessons learnt.  
slides icon Slides TUAAULT04 [10.055 MB]  
 
TUCAUST01 Upgrading the Fermilab Fire and Security Reporting System hardware, interface, network, software 563
 
  • CA. King, R. Neswold
    Fermilab, Batavia, USA
 
  Funding: Operated by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the United States Department of Energy.
Fermilab's homegrown fire and security system (known as FIRUS) is highly reliable and has been in use for nearly thirty years. The system has gone through some minor upgrades; however, none of them introduced significant, visible changes. In this paper, we present a major overhaul of the system that is now halfway complete. We discuss the use of Apple's OS X for the new GUI, upgrading the servers to use the Erlang programming language, and allowing limited access for iOS and Android-based mobile devices.
 
slides icon Slides TUCAUST01 [2.818 MB]  
 
WEMAU001 A Remote Tracing Facility for Distributed Systems GUI, interface, controls, operation 650
 
  • F. Ehm, A. Dworak
    CERN, Geneva, Switzerland
 
  Today CERN's accelerator control system is built upon a large number of services, mainly based on C++ and Java, which produce log events. In such a largely distributed environment these log messages are essential for problem recognition and tracing. Tracing is therefore a vital part of operations, as understanding an issue in a subsystem means analyzing log events in an efficient and fast manner. At present 3150 device servers are deployed on 1600 diskless front-ends and send their log messages via the network to an in-house developed central server which, in turn, saves them to files. However, this solution cannot provide several highly desired features and has performance limitations, which led to the development of a new solution. The new distributed tracing facility fulfills these requirements by taking advantage of the Simple Text Oriented Messaging Protocol (STOMP) and ActiveMQ as the transport layer. The system not only stores critical log events centrally in files or in a database, but also allows other clients (e.g. graphical interfaces) to read the same events at the same time through the provided Java API. The facility also ensures that each client receives only the log events of the desired level. Thanks to the ActiveMQ broker technology the system can easily be extended to clients implemented in other languages, and it is highly scalable in terms of performance. Long-running tests have shown that the system can handle up to 10,000 messages per second.  
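  The sketch below illustrates the subscription pattern described above using the third-party stomp.py client (listener signature as in stomp.py 8.x); the production facility exposes a Java API, and the broker host and destination names here are invented.

    # Illustrative STOMP/ActiveMQ log-event consumer (not the CERN implementation).
    import stomp

    class LogListener(stomp.ConnectionListener):
        def on_message(self, frame):
            # Each frame carries one log event; a GUI could filter or display it here.
            print(frame.headers.get("destination"), frame.body)

    conn = stomp.Connection([("tracing-broker.example.org", 61613)])
    conn.set_listener("", LogListener())
    conn.connect("reader", "secret", wait=True)
    # Subscribing per level ensures the client only receives events it asked for.
    conn.subscribe(destination="/topic/tracing.ERROR", id="1", ack="auto")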
slides icon Slides WEMAU001 [1.008 MB]  
poster icon Poster WEMAU001 [0.907 MB]  
 
WEMMU009 Status of the RBAC Infrastructure and Lessons Learnt from its Deployment in LHC controls, operation, software, software-architecture 702
 
  • W. Sliwinski, P. Charrue, I. Yastrebov
    CERN, Geneva, Switzerland
 
  The distributed control system for the LHC accelerator poses many challenges due to its inherent heterogeneity and highly dynamic nature. One important aspect is protecting the machine against unauthorised access and unsafe operation of the control system, from the low-level front-end machines up to the high-level control applications running in the control room. In order to prevent unauthorised access to the control system and accelerator equipment and to address possible security issues, the Role Based Access Control (RBAC) project was designed and developed at CERN, with a major contribution from the Fermilab laboratory. RBAC became an integral part of the CERN Controls Middleware (CMW) infrastructure and was deployed and commissioned in LHC operation in the summer of 2008, well before the first beam in the LHC. This paper presents the current status of the RBAC infrastructure, together with the outcome and the experience gathered after its large-scale deployment in LHC operation. Moreover, we outline how the project has evolved over the last three years and give an overview of the major extensions introduced to improve its integration, stability and functionality. The paper also describes plans for future project evolution and possible extensions, based on gathered user requirements and operational experience.  
slides icon Slides WEMMU009 [0.604 MB]  
poster icon Poster WEMMU009 [1.262 MB]  
 
WEPKN002 Tango Control System Management Tool controls, status, TANGO, device-server 713
 
  • P.V. Verdier, F. Poncet, J.L. Pons
    ESRF, Grenoble, France
  • N. Leclercq
    SOLEIL, Gif-sur-Yvette, France
 
  Tango is an object-oriented control system toolkit based on CORBA, initially developed at the ESRF. It is now also developed and used by Soleil, Elettra, Alba, DESY, MAX Lab, FRM II and several other labs. Tango is conceived as a fully distributed control system: several processes (called servers) run on many different hosts, each server manages one or several Tango classes, and each class can have one or several instances. This poster presents the existing tools used to configure, survey and manage a very large number of Tango components.  
poster icon Poster WEPKN002 [1.982 MB]  
 
WEPKS020 Adding Flexible Subscription Options to EPICS EPICS, framework, operation, controls 827
 
  • R. Lange
    HZB, Berlin, Germany
  • L.R. Dalesio
    BNL, Upton, Long Island, New York, USA
  • A.N. Johnson
    ANL, Argonne, USA
 
  Funding: Work supported by U.S. Department of Energy (under contracts DE-AC02-06CH11357 resp. DE-AC02-98CH10886), German Bundesministerium für Bildung und Forschung and Land Berlin.
The need for a mechanism that allows the client to control and filter subscriptions to control system variables was described in a paper at the ICALEPCS2009 conference [1]. The implementation follows a plug-in design that allows the insertion of plug-in instances into the event stream on the server side. The client can instantiate and configure these plug-ins when opening a subscription by adding field modifiers, in JSON notation, to the channel name [2]. This paper describes the design and implementation of a modular server-side plug-in framework for Channel Access and shows examples of plug-ins as well as their use within an EPICS control system.
[1] R. Lange, A. Johnson, L. Dalesio: Advanced Monitor/Subscription Mechanisms for EPICS, THP090, ICALEPCS2009, Kobe, Japan.
[2] A. Johnson, R. Lange: Evolutionary Plans for EPICS Version 3, WEA003, ICALEPCS2009, Kobe, Japan.
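For illustration only, a Channel Access client could request such a server-side plug-in by appending a JSON field modifier to the channel name when opening the subscription; the plug-in name "decimate" and its parameter below are hypothetical, and pyepics is used merely as a convenient client.

    # Hypothetical example of a JSON field modifier appended to a channel name.
    import epics  # pyepics; any CA client that accepts a channel name would do

    pv = epics.PV('BL7:CURRENT{"decimate":{"n":10}}')  # ask the server to send every 10th update
    pv.add_callback(lambda pvname=None, value=None, **kw: print(pvname, value))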
 
poster icon Poster WEPKS020 [0.996 MB]  
 
WEPKS028 Exploring a New Paradigm for Accelerators and Large Experimental Apparatus Control Systems controls, distributed, toolkit, software 856
 
  • L. Catani, R. Ammendola, F. Zani
    INFN-Roma II, Roma, Italy
  • C. Bisegni, S. Calabrò, P. Ciuffetti, G. Di Pirro, G. Mazzitelli, A. Stecchi
    INFN/LNF, Frascati (Roma), Italy
  • L.G. Foggetta
    LAL, Orsay, France
 
  The integration of web technologies and web services has been, in recent years, one of the major trends in upgrading and developing control systems for accelerators and large experimental apparatuses. Usually, web technologies have been introduced to complement the control systems with smart add-ons and user-friendly services or, for instance, to safely allow users at remote sites access to the control system. In spite of this still narrow spectrum of use, some software technologies developed for high-performance web services, although originally intended and optimized for those particular applications, offer features that would allow their deeper integration into a control system and, eventually, their use to develop some of the control system's core components. In this paper we present the conclusions of a preliminary investigation of a new paradigm for an accelerator control system and the associated machine data acquisition system (DAQ), based on a synergic combination of a network-distributed cache memory and a non-relational key/value database. We investigated these technologies with particular interest in their performance, namely the speed of data storage and retrieval for the network memory and the data throughput and query execution time for the database, and, especially, in how much this performance can benefit from their inherent scalability. The work has been developed in a collaboration between INFN-LNF and INFN-Roma Tor Vergata.  
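  The paper does not name specific products; the following conceptual sketch, with in-memory stand-ins for the distributed cache and the key/value store, illustrates the data flow of the proposed paradigm.

    # Conceptual sketch only: live values go to a network-distributed cache keyed by
    # device/attribute, while the same samples are appended to a key/value store.
    import time, json
    from collections import defaultdict

    live_cache = {}                    # stand-in for a distributed cache service
    history_store = defaultdict(list)  # stand-in for a non-relational key/value database

    def push(device, attribute, value):
        key = f"{device}/{attribute}"
        sample = {"t": time.time(), "v": value}
        live_cache[key] = sample                        # last value, fast read for GUIs
        history_store[key].append(json.dumps(sample))   # append-only history for DAQ/analysis

    push("quad01", "current", 12.7)
    print(live_cache["quad01/current"])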
 
WEPMN001 Experience in Using Linux Based Embedded Controllers with EPICS Environment for the Beam Transport in SPES Off–Line Target Prototype EPICS, controls, software, target 875
 
  • M. Montis, M.G. Giacchini
    INFN/LNL, Legnaro (PD), Italy
 
  EPICS [1] was chosen as the general framework for developing the control system of the SPES facility under construction at LNL [2]. We report our experience in using commercial devices based on Debian Linux to control the electrostatic deflectors installed on the beam line at the output of the target chamber. We discuss this solution and compare it to other IOC implementations in use in the target control system.
[1] http://www.aps.anl.gov/epics/
[2] http://www.lnl.infn.it/~epics
* M.Montis, MS thesis: http://www.lnl.infn.it/~epics/THESIS/TesiMaurizioMontis.pdf
 
poster icon Poster WEPMN001 [1.036 MB]  
 
WEPMN038 A Combined On-line Acoustic Flowmeter and Fluorocarbon Coolant Mixture Analyzer for the ATLAS Silicon Tracker software, controls, detector, real-time 969
 
  • A. Bitadze, R.L. Bates
    University of Glasgow, Glasgow, United Kingdom
  • M. Battistin, S. Berry, P. Bonneau, J. Botelho-Direito, B. Di Girolamo, J. Godlewski, E. Perez-Rodriguez, L. Zwalinski
    CERN, Geneva, Switzerland
  • N. Bousson, G.D. Hallewell, M. Mathieu, A. Rozanov
    CNRS/CPT, Marseille, France
  • R. Boyd
    University of Oklahoma, Norman, Oklahoma, USA
  • M. Doubek, V. Vacek, M. Vitek
    Czech Technical University in Prague, Faculty of Mechanical Engineering, Prague, Czech Republic
  • K. Egorov
    Indiana University, Bloomington, Indiana, USA
  • S. Katunin
    PNPI, Gatchina, Leningrad District, Russia
  • S. McMahon
    STFC/RAL/ASTeC, Chilton, Didcot, Oxon, United Kingdom
  • K. Nagai
    University of Tsukuba, Graduate School of Pure and Applied Sciences,, Tsukuba, Ibaraki, Japan
 
  An upgrade to the ATLAS silicon tracker cooling control system requires a change from C3F8 (molecular weight 188) coolant to a blend with 10-30% C2F6 (molecular weight 138), to reduce the evaporation temperature and better protect the silicon from cumulative radiation damage at the LHC. Central to this upgrade, an acoustic instrument for the measurement of the C3F8/C2F6 mixture and its flow has been developed. The sound velocity in a binary gas mixture at known temperature and pressure depends on the component concentrations. 50 kHz sound bursts are sent simultaneously via ultrasonic transceivers parallel and anti-parallel to the gas flow. A 20 MHz transit clock is started synchronously with the burst transmission and stopped by over-threshold received sound pulses. Transit times in both directions, together with temperature and pressure, enter a FIFO memory 100 times per second. The gas mixture is continuously analyzed using PVSS-II by comparing the average sound velocity in both directions with stored velocity-mixture look-up tables. The flow is calculated from the difference in sound velocity between the two directions. In future versions these calculations may be made in a micro-controller. The instrument has demonstrated a resolution of <0.3% for C3F8/C2F6 mixtures with ~20% C2F6, with a simultaneous flow resolution of ~0.1% of full scale. Higher precision is possible: a sensitivity of ~0.005% to leaks of C3F8 into the ATLAS pixel detector nitrogen envelope (molecular weight difference 156) has been seen. The instrument has many applications, including the analysis of hydrocarbons, mixtures for semiconductor manufacture and anesthesia.  
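  The transit-time principle described above can be summarised in a few lines; the sketch below is a worked illustration with invented numbers, not the authors' code.

    # With path length L between the transceivers, t_with measured along the flow
    # and t_against measured against it:
    #   sound speed  c = L/2 * (1/t_with + 1/t_against)
    #   gas velocity v = L/2 * (1/t_with - 1/t_against)
    # The mixture fraction is then looked up from c at the measured T and p.
    def sound_speed_and_flow(path_m, t_with_s, t_against_s):
        c = 0.5 * path_m * (1.0 / t_with_s + 1.0 / t_against_s)
        v = 0.5 * path_m * (1.0 / t_with_s - 1.0 / t_against_s)
        return c, v

    # Example with invented numbers: 0.3 m path, ~2 ms transit times.
    print(sound_speed_and_flow(0.3, 2.035e-3, 2.045e-3))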
 
WEPMS005 Automated Coverage Tester for the Oracle Archiver of WinCC OA software, controls, status, operation 981
 
  • A. Voitier, P. Golonka, M. Gonzalez-Berges
    CERN, Geneva, Switzerland
 
  A large number of control systems at CERN are built with the commercial SCADA tool WinCC OA. They cover projects in the experiments, the accelerators and the infrastructure. An important component is the Oracle archiver used for long-term storage of process data (events) and alarms. The archived data provide feedback to operators and experts about how the system was behaving at a particular moment in the past, and a subset of these data is used for offline physics analysis. The consistency of the archived data has to be ensured from writing to reading as well as throughout updates of the control systems. The complexity of the archiving subsystem comes from the multiplicity of data types, the required performance and other factors such as the operating system, environment variables or the versions of the different software components. An automatic tester has therefore been implemented to systematically execute test scenarios under different conditions. The tests are based on scripts which are automatically generated from templates and can therefore cover a wide range of software contexts. The tester is written entirely in the same software environment as the targeted SCADA system. The current implementation handles over 300 test cases, for both events and alarms, and has made it possible to report issues to the provider of WinCC OA. The template mechanism provides sufficient flexibility to adapt the suite of tests to future needs, and the developed tools are generic enough to be used to test other parts of the control systems.  
poster icon Poster WEPMS005 [0.279 MB]  
 
WEPMS008 Software Tools for Electrical Quality Assurance in the LHC software, hardware, LabView, operation 993
 
  • M. Bednarek
    CERN, Geneva, Switzerland
  • J. Ludwin
    IFJ-PAN, Kraków, Poland
 
  There are over 1600 superconducting magnet circuits in the LHC machine. Many of them consist of a large number of components electrically connected in series, which enhances the sensitivity of the whole circuit to electrical faults in individual components. Furthermore, the circuits are equipped with a large number of instrumentation wires, which are exposed to accidental damage or swapping. In order to ensure safe operation, an Electrical Quality Assurance (ELQA) campaign is needed after each thermal cycle. Due to the complexity of the circuits, as well as their distant geographical distribution (a tunnel of 27 km circumference divided into 8 sectors), suitable software and hardware platforms had to be developed. The software combines an Oracle database, LabVIEW data acquisition applications and PHP-based web follow-up tools. This paper describes the software used for the ELQA of the LHC.  
poster icon Poster WEPMS008 [8.781 MB]  
 
WEPMU016 Pre-Operation, During Operation and Post-Operational Verification of Protection Systems operation, injection, controls, software 1090
 
  • I. Romera, M. Audrain
    CERN, Geneva, Switzerland
 
  This paper provides an overview of the software checks performed on the Beam Interlock System to ensure that the system is functioning to specification. Critical protection functions are implemented in hardware, but software tools nevertheless play an important role in guaranteeing the correct configuration and operation of the system during all phases of operation. The paper describes the tests carried out before, during and after operation; if the integrity of the protection system is not assured, subsequent injections of beam into the LHC are inhibited.  
 
WEPMU030 CERN Safety System Monitoring - SSM monitoring, network, interface, controls 1134
 
  • T. Hakulinen, P. Ninin, F. Valentini
    CERN, Geneva, Switzerland
  • J. Gonzalez, C. Salatko-Petryszcze
    ASsystem, St Genis Pouilly, France
 
  CERN SSM (Safety System Monitoring) is a system for monitoring the state of health of the various access and safety systems of the CERN site and accelerator infrastructure. The emphasis of SSM is on the needs of maintenance and system operation, with the aim of providing an independent and reliable verification path for the basic operational parameters of each system. Included are all network-connected devices, such as PLCs, servers, panel displays, operator posts, etc. The basic monitoring engine of SSM is the freely available system monitoring framework Zabbix, on top of which a simplified traffic-light-style web interface has been built. The web interface of SSM is designed to be extremely lightweight to facilitate access from handheld devices over slow connections. The underlying Zabbix system offers the history and notification mechanisms typical of advanced monitoring systems.  
poster icon Poster WEPMU030 [1.231 MB]  
 
WEPMU035 Distributed Monitoring System Based on ICINGA monitoring, network, distributed, experiment 1149
 
  • C. Haen, E. Bonaccorsi, N. Neufeld
    CERN, Geneva, Switzerland
 
  The basic services of the large IT infrastructure of the LHCb experiment are monitored with ICINGA, a fork of the industry-standard monitoring software NAGIOS. The infrastructure includes thousands of servers and computers, storage devices, more than 200 network devices and many VLANs, databases, hundreds of diskless nodes and much more. The amount of configuration needed to control the whole installation is large, and there is a lot of duplication when the monitoring infrastructure is distributed over several servers. In order to ease the manipulation of the configuration files, we designed a monitoring schema particularly adapted to our network and taking advantage of its specificities, and developed a tool to centralize the configuration in a database. Thanks to this tool, we could also parse all our previous configuration files and thus populate our Oracle database, which replaces the previous Active Directory based solution. A web front-end allows non-expert users to easily add new entities to monitor. We present the schema of our monitoring infrastructure and the tool used to manage and automatically generate the configuration for ICINGA.  
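  A minimal sketch of the generation step is given below; it renders ICINGA 1.x-style host definitions from rows that stand in for the result of a query against the configuration database. All names are invented and this is not the LHCb tool.

    # Sketch only: render ICINGA/NAGIOS host definitions from database rows.
    HOST_TEMPLATE = """define host {{
        use        generic-host
        host_name  {name}
        address    {address}
        hostgroups {group}
    }}
    """

    rows = [  # (name, address, hostgroup) as they might come back from the database
        ("daqnode001", "10.128.1.1", "diskless-nodes"),
        ("storage01",  "10.128.2.7", "storage"),
    ]

    with open("hosts.cfg", "w") as cfg:
        for name, address, group in rows:
            cfg.write(HOST_TEMPLATE.format(name=name, address=address, group=group))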
poster icon Poster WEPMU035 [0.375 MB]  
 
WEPMU036 Efficient Network Monitoring for Large Data Acquisition Systems network, monitoring, interface, software 1153
 
  • D.O. Savu, B. Martin
    CERN, Geneva, Switzerland
  • A. Al-Shabibi
    Heidelberg University, Heidelberg, Germany
  • S.M. Batraneanu, S.N. Stancu
    UCI, Irvine, California, USA
  • R. Sjoen
    University of Oslo, Oslo, Norway
 
  Though constantly evolving and improving, the available network monitoring solutions have limitations when applied to the infrastructure of a high-speed real-time data acquisition (DAQ) system. DAQ networks are particular computer networks in which experts have to pay attention to both individual subsections and system-wide traffic flows while monitoring the network. The ATLAS network at the Large Hadron Collider (LHC) has more than 200 switches interconnecting 3500 hosts and totaling 8500 high-speed links. The use of heterogeneous tools for monitoring the various infrastructure parameters, needed to assure optimal DAQ system performance, proved to be a tedious and time-consuming task for experts. To alleviate this problem we used our networking and DAQ expertise to build a flexible and scalable monitoring system providing an intuitive user interface with the same look and feel irrespective of the data provider used. Our system uses custom-developed components for critical performance monitoring and seamlessly integrates complementary data from auxiliary tools such as NAGIOS, information services or custom databases. A number of techniques (e.g. normalization, aggregation and data caching) were used to improve the user interface response time. The end result is a unified monitoring interface, for fast and uniform access to system statistics, which has significantly reduced the time spent by experts on ad-hoc and post-mortem analysis.  
poster icon Poster WEPMU036 [5.945 MB]  
 
WEPMU040 Packaging of Control System Software software, controls, EPICS, Linux 1168
 
  • K. Žagar, M. Kobal, N. Saje, A. Žagar
    Cosylab, Ljubljana, Slovenia
  • F. Di Maio, D. Stepanov
    ITER Organization, St. Paul lez Durance, France
  • R. Šabjan
    COBIK, Solkan, Slovenia
 
  Funding: ITER European Union, European Regional Development Fund and Republic of Slovenia, Ministry of Higher Education, Science and Technology
Control system software consists of several parts – the core of the control system, drivers for the integration of devices, configuration for user interfaces, the alarm system, etc. Once the software is developed and configured, it must be installed on the computers where it runs. Usually it is installed on an operating system whose services it needs, and in some cases it also links dynamically against libraries the operating system provides. The operating system can be quite complex itself – for example, a typical Linux distribution consists of several thousand packages. To manage this complexity, we have decided to rely on the Red Hat Package Manager (RPM) to package control system software and to ensure it is properly installed (i.e. that dependencies are also installed, and that scripts are run after installation if any additional actions need to be performed). As dozens of RPM packages need to be prepared, we reduce the amount of effort and improve consistency between packages through a Maven-based infrastructure that assists in packaging (e.g. automated generation of RPM SPEC files, including automated identification of dependencies). So far, we have used it to package EPICS, Control System Studio (CSS) and several device drivers. We perform extensive testing on Red Hat Enterprise Linux 5.5, but we have also verified that the packaging works on CentOS and Scientific Linux. In this article, we describe in greater detail the systematic packaging approach we are using and its particular application to the ITER CODAC Core System.
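The sketch below illustrates the kind of RPM SPEC skeleton such an infrastructure can generate from a short package description; the package metadata is invented, and the real tool chain described above is Maven-based rather than Python-based.

    # Sketch only: generate a minimal RPM SPEC skeleton from a package description.
    SPEC_TEMPLATE = """Name:           {name}
    Version:        {version}
    Release:        1%{{?dist}}
    Summary:        {summary}
    License:        {license}
    Requires:       {requires}

    %description
    {summary}

    %files
    {files}
    """

    def write_spec(meta, path):
        with open(path, "w") as spec:
            spec.write(SPEC_TEMPLATE.format(
                name=meta["name"],
                version=meta["version"],
                summary=meta["summary"],
                license=meta["license"],
                requires=", ".join(meta["requires"]),  # found automatically in the real tool
                files="\n".join(meta["files"]),
            ))

    write_spec({
        "name": "ioc-example-driver",
        "version": "1.0.0",
        "summary": "Example device driver packaged for CODAC-style deployment",
        "license": "Example",
        "requires": ["epics-base"],
        "files": ["/opt/codac/drivers/example"],
    }, "ioc-example-driver.spec")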
 
poster icon Poster WEPMU040 [0.740 MB]  
 
THCHAUST06 Instrumentation of the CERN Accelerator Logging Service: Ensuring Performance, Scalability, Maintenance and Diagnostics instrumentation, extraction, distributed, framework 1232
 
  • C. Roderick, R. Billen, D.D. Teixeira
    CERN, Geneva, Switzerland
 
  The CERN accelerator Logging Service currently holds more than 90 terabytes of data online and processes approximately 450 gigabytes per day, via hundreds of data loading processes and data extraction requests. This service is mission-critical for day-to-day operations, especially with respect to the tracking of live data from the LHC beam and equipment. In order to effectively manage any service, the service provider's goals should include knowing how the underlying systems are being used, in terms of "who is doing what, from where, using which applications and methods, and how long each action takes". Armed with such information, it is then possible to analyze and tune system performance over time, plan for scalability ahead of time, assess the impact of maintenance operations and infrastructure upgrades, and diagnose past, on-going or re-occurring problems. The Logging Service is based on Oracle DBMS and Application Servers and on Java technology, and comprises several layered and multi-tiered systems. These systems have all been heavily instrumented to capture data about system usage, using technologies such as JMX. The success of the Logging Service and its proven ability to cope with ever-growing demands can be directly linked to the instrumentation in place. This paper describes the instrumentation that has been developed and demonstrates how the instrumentation data are used to achieve the goals outlined above.  
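  A small sketch of the underlying idea is given below; the actual implementation is based on Java and JMX, so the Python decorator and names here are purely illustrative.

    # Sketch only: wrap data-extraction calls so each invocation records who did
    # what, from where, and how long it took, for later performance analysis.
    import functools, getpass, socket, time

    USAGE_LOG = []  # stand-in for the instrumentation store

    def instrumented(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.time()
            try:
                return func(*args, **kwargs)
            finally:
                USAGE_LOG.append({
                    "action":   func.__name__,
                    "user":     getpass.getuser(),
                    "host":     socket.gethostname(),
                    "duration": time.time() - start,
                })
        return wrapper

    @instrumented
    def extract_data(variable, t1, t2):
        time.sleep(0.01)  # placeholder for the real database query
        return []

    extract_data("EXAMPLE:BEAM_INTENSITY", "2011-06-01", "2011-06-02")
    print(USAGE_LOG[-1])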
slides icon Slides THCHAUST06 [5.459 MB]  
 
FRAAULT03 Development of the Diamond Light Source PSS in conformance with EN 61508 controls, interlocks, radiation, operation 1289
 
  • M.C. Wilson, A.G. Price
    Diamond, Oxfordshire, United Kingdom
 
  Diamond Light Source is constructing a third phase (Phase III) of photon beamlines and experiment stations. Experience gained in the design, realization and operation of the Personnel Safety Systems (PSS) on the first two phases of beamlines is being used to improve the design process for this development. Information on the safety functionality of the Phase I and Phase II photon beamlines is maintained in a hazard database, from which reports are generated to assist in the design, verification and validation of the new PSSs. The data are used to make comparisons between beamlines, to validate safety functions and to record documentation for each beamline. This forms part of the documentation process demonstrating conformance to EN 61508.  
slides icon Slides FRAAULT03 [0.372 MB]  
 
FRBHAULT01 Feed-forward in the LHC feedback, software, real-time, controls 1302
 
  • M. Pereira, X. Buffat, K. Fuchsberger, M. Lamont, G.J. Müller, S. Redaelli, R.J. Steinhagen, J. Wenninger
    CERN, Geneva, Switzerland
 
  The LHC operational cycle comprises several phases, such as the ramp, the squeeze and stable beams. During the ramp and squeeze in particular, it has been observed that the behaviour of key LHC beam parameters such as tune, orbit and chromaticity is highly reproducible from fill to fill. To reduce the reliance on the crucial feedback systems, it was decided to perform fill-to-fill feed-forward corrections. The LHC feed-forward application was developed to ease the introduction of corrections to the operational settings. It retrieves the feedback systems' corrections from the logging database and applies appropriate corrections to the ramp and squeeze settings. The LHC feed-forward software has been used during LHC commissioning, and tune and orbit corrections during the ramp have been successfully applied. As a result, the required real-time corrections for the above parameters have been reduced to a minimum.  
slides icon Slides FRBHAULT01 [0.961 MB]  
 
FRBHAULT02 ATLAS Online Determination and Feedback of LHC Beam Parameters feedback, detector, monitoring, experiment 1306
 
  • J.G. Cogan, R. Bartoldus, D.W. Miller, E. Strauss
    SLAC, Menlo Park, California, USA
 
  The High Level Trigger of the ATLAS experiment relies on the precise knowledge of the position, size and orientation of the luminous region produced by the LHC. Moreover, these parameters change significantly even during a single data taking run. We present the challenges, solutions and results for the online luminous region (beam spot) determination, and its monitoring and feedback system in ATLAS. The massively parallel calculation is performed on the trigger farm, where individual processors execute a dedicated algorithm that reconstructs event vertices from the proton-proton collision tracks seen in the silicon trackers. Monitoring histograms from all the cores are sampled and aggregated across the farm every 60 seconds. We describe the process by which a standalone application fetches and fits these distributions, extracting the parameters in real time. When the difference between the nominal and measured beam spot values satisfies threshold conditions, the parameters are published to close the feedback loop. To achieve sharp time boundaries across the event stream that is triggered at rates of several kHz, a special datagram is injected into the event path via the Central Trigger Processor that signals the pending update to the trigger nodes. Finally, we describe the efficient near-simultaneous database access through a proxy fan-out tree, which allows thousands of nodes to fetch the same set of values in a fraction of a second.  
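  The update policy can be illustrated with a short sketch (not ATLAS code): fitted beam-spot parameters are published only when they differ from the currently used values by more than configured thresholds; all numbers below are invented.

    # Illustrative threshold check for deciding whether to publish a new beam spot.
    def needs_update(fitted, nominal, thresholds):
        """Return True if any parameter moved by more than its threshold."""
        return any(abs(fitted[k] - nominal[k]) > thresholds[k] for k in thresholds)

    nominal    = {"x": 0.062, "y": 1.041, "z": -3.5, "sigma_z": 55.0}   # mm, invented
    fitted     = {"x": 0.071, "y": 1.043, "z": -4.1, "sigma_z": 54.2}
    thresholds = {"x": 0.005, "y": 0.005, "z": 2.0,  "sigma_z": 5.0}

    if needs_update(fitted, nominal, thresholds):
        print("publish new beam spot and signal the trigger nodes")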
slides icon Slides FRBHAULT02 [7.573 MB]  
 
FRBHMUST01 The Design of the Alba Control System: A Cost-Effective Distributed Hardware and Software Architecture. controls, TANGO, software, interface 1318
 
  • D.F.C. Fernández-Carreiras, D.B. Beltrán, T.M. Coutinho, G. Cuní, J. Klora, O. Matilla, R. Montaño, C. Pascual-Izarra, S. Pusó, R. Ranz, A. Rubio, S. Rubio-Manrique
    CELLS-ALBA Synchrotron, Cerdanyola del Vallès, Spain
 
  The control system of Alba is highly distributed from both the hardware and the software points of view. The hardware infrastructure for the control system includes on the order of 350 racks, 20000 cables and 6200 pieces of equipment. More than 150 diskless industrial computers distributed in the service area and 30 multicore servers in the data center manage several thousand process variables. The software is, of course, as distributed as the hardware. It is also a success story of the Tango collaboration, where a complete software infrastructure is available "off the shelf". In addition, Tango has been productively complemented with the powerful Sardana framework, a great effort in terms of development from which several institutes nowadays benefit. The whole installation has been coordinated from the beginning with a complete cabling and equipment database, in which all the equipment, cables and connectors are described and inventoried. This so-called "cabling database" is the core of the installation: the equipment and cables are defined there, and basic hardware configuration data such as MAC and IP addresses, DNS names, etc. are also gathered in it, allowing the network communication files and the declarations of variables in the PLCs to be created automatically. This paper explains the design and the architecture of the control system, describes the tools and justifies the choices made. Furthermore, it presents and analyzes the figures regarding cost and performance.  
slides icon Slides FRBHMUST01 [4.616 MB]  
 
FRBHMULT06 EPICS V4 Expands Support to Physics Application, Data Acquisition, and Data Analysis controls, EPICS, data-acquisition, interface 1338
 
  • L.R. Dalesio, G. Carcassi, M.A. Davidsaver, M.R. Kraimer, R. Lange, N. Malitsky, G. Shen
    BNL, Upton, Long Island, New York, USA
  • T. Korhonen
    Paul Scherrer Institut, 5232 Villigen PSI, Switzerland
  • J. Rowland
    Diamond, Oxfordshire, United Kingdom
  • M. Sekoranja
    Cosylab, Ljubljana, Slovenia
  • G.R. White
    SLAC, Menlo Park, California, USA
 
  Funding: Work supported under auspices of the U.S. Department of Energy under Contract No. DE-AC02-98CH10886 with Brookhaven Science Associates, LLC, and in part by the DOE Contract DE-AC02-76SF00515
EPICS version 4 extends the functionality of version 3 by providing the ability to define, transport, and introspect composite data types. Version 3 provided a set of process variables and a data protocol that adequately defined scalar data along with an atomic set of attributes. While remaining backward compatible, Version 4 is able to easily expand this set with a data protocol capable of exchanging complex data types and parameterized data requests. Additionally, a group of engineers defined reference types for some applications in this environment. The goal of this work is to define a narrow interface with the minimal set of data types needed to support a distributed architecture for physics applications, data acquisition, and data analysis.
 
slides icon Slides FRBHMULT06 [0.188 MB]  
 
FRCAUST03 Status of the ESS Control System controls, hardware, EPICS, software 1345
 
  • G. Trahern
    ESS, Lund, Sweden
 
  The European Spallation Source (ESS) is a high-current proton LINAC to be built in Lund, Sweden. The LINAC delivers 5 MW of power to the target at 2500 MeV, with a nominal current of 50 mA, and is designed to allow an upgrade to a higher power of 7.5 MW at a fixed energy of 2500 MeV. The Accelerator Design Update (ADU) collaboration of mainly European institutions will deliver a Technical Design Report at the end of 2012. First protons are expected in 2018, and first neutrons in 2019. The ESS will be constructed by a number of geographically dispersed institutions, which means that a considerable part of the control system integration will potentially be performed off-site. To mitigate this organizational risk, significant effort will be put into the standardization of hardware, software and development procedures early in the project. We have named the main result of this standardization the Control Box concept. The ESS will use EPICS and will build on the positive distributed-development experiences of SNS and ITER. The current state of the control system design and the key decisions are presented in the paper, as well as immediate challenges and proposed solutions.
From PAC 2011 article
http://eval.esss.lu.se/cgi-bin/public/DocDB/ShowDocument?docid=45
From IPAC 2010 article
http://eval.esss.lu.se/cgi-bin/public/DocDB/ShowDocument?docid=26
 
slides icon Slides FRCAUST03 [1.944 MB]