MOPKN — Poster (10-Oct-11 16:30–18:00)
Chair: R. Wilcke, ESRF, Grenoble, France
Paper Title Page
MOPKN002 LHC Supertable 86
 
  • M. Pereira, M. Lamont, G.J. Müller, D.D. Teixeira
    CERN, Geneva, Switzerland
  • T.E. Lahey
    SLAC, Menlo Park, California, USA
  • E.S.M. McCrory
    Fermilab, Batavia, USA
 
  LHC operations generate enormous amounts of data, which are stored in many different databases. It is therefore difficult for operators, physicists, engineers and management to get a clear view of the overall accelerator performance. Until recently the logging database, through its desktop interface TIMBER, was the only way of retrieving information on a fill-by-fill basis. The LHC Supertable has been developed to provide a summary of key LHC performance parameters in a clear, consistent and comprehensive format. The columns of this table represent the main parameters describing the collider's operation, such as luminosity, beam intensity and emittance. The data are organized in a tabular, fill-by-fill manner with different levels of detail. Particular emphasis was placed on data sharing by making the data available in various open formats. The contents are typically calculated for periods of time that map to the accelerator's states or beam modes, such as Injection or Stable Beams. Data retrieval and calculation are triggered automatically after the end of each fill. The LHC Supertable project currently publishes 80 columns of data on around 100 fills.
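
  As an illustration of the per-fill, per-beam-mode aggregation that MOPKN002 describes, here is a minimal Python sketch. The data layout (timestamped samples plus a dictionary of beam-mode time windows) and the example numbers are invented for illustration; the real Supertable pulls its inputs from the CERN logging service.

  from statistics import mean

  def summarize_fill(samples, mode_windows):
      """samples: list of (timestamp, value); mode_windows: {mode: (t_start, t_end)}."""
      summary = {}
      for mode, (t0, t1) in mode_windows.items():
          values = [v for t, v in samples if t0 <= t < t1]
          summary[mode] = {
              "mean": mean(values) if values else None,
              "min": min(values, default=None),
              "max": max(values, default=None),
          }
      return summary

  # Example: summarize beam intensity during Injection and Stable Beams for one fill.
  intensity = [(0, 1.0e14), (10, 1.1e14), (3600, 2.0e14), (7200, 1.9e14)]
  windows = {"INJECTION": (0, 3600), "STABLE BEAMS": (3600, 86400)}
  print(summarize_fill(intensity, windows))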
 
MOPKN005 Construction of New Data Archive System in RIKEN RI Beam Factory 90
 
  • M. Komiyama, N. Fukunishi
    RIKEN Nishina Center, Wako, Japan
  • A. Uchiyama
    SHI Accelerator Service Ltd., Tokyo, Japan
 
  The control system of the RIKEN RI Beam Factory (RIBF) is based on EPICS, and three kinds of data archive systems have been in operation. Two of them are EPICS applications; the third is MyDAQ2, developed by the SPring-8 control group. MyDAQ2 collects data such as cooling-water and magnet temperatures and is not integrated into our EPICS control system. In order to unify the three applications into a single system, we started to develop a new system in October 2009. One of the requirements for this RIBF Control data Archive System (RIBFCAS) is that it routinely collect more than 3000 data points from 21 EPICS Input/Output Controllers (IOCs) every 1 to 60 seconds, depending on the type of equipment. The ability to integrate the MyDAQ2 database is also required. To fulfill these requirements, a Java-based system was constructed, adopting the Java Channel Access Light Library (JCAL) developed by the J-PARC control group in order to acquire the large amounts of data mentioned above. The main advantage of JCAL is that it is based on a single-threaded architecture for thread safety, while user threads can be multi-threaded. The RIBFCAS hardware consists of an application server, a database server and client PCs; the client application runs on the Adobe AIR runtime. So far we have succeeded in acquiring about 3000 data points from 21 EPICS IOCs every 10 seconds for one day, and validation tests are proceeding. Integration of MyDAQ2 is now in progress and is scheduled to be completed in 2011.
poster icon Poster MOPKN005 [27.545 MB]  
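
  A minimal sketch of the kind of periodic PV collection loop MOPKN005 describes, written with pyepics purely as a stand-in for the Java/JCAL client actually used by RIBFCAS; the PV names and the fixed 10-second period are illustrative assumptions.

  import time
  from epics import PV

  pv_names = ["RIBF:MAG:PS01:CURRENT", "RIBF:CRYO:TEMP01"]   # hypothetical PV names
  pvs = [PV(name) for name in pv_names]

  def collect_once():
      """Read every PV once and return (name, value, timestamp) tuples."""
      stamp = time.time()
      return [(pv.pvname, pv.get(), stamp) for pv in pvs]

  while True:
      for record in collect_once():
          print(record)          # a real archiver would insert these rows into its database
      time.sleep(10)             # the actual system uses per-device periods of 1 to 60 s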
 
MOPKN006 Algorithms and Data Structures for the EPICS Channel Archiver 94
 
  • J. Rowland, M.T. Heron, M.A. Leech, S.J. Singleton, K. Vijayan
    Diamond, Oxfordshire, United Kingdom
 
  Diamond Light Source records 3 GB of process data per day and keeps a 15 TB archive online with the EPICS Channel Archiver. This paper describes recent modifications to the software to improve performance and usability. The file-size limit on the R-Tree index has been removed, allowing all archived data to be searchable from one index. A decimation system works directly on compressed archives from a backup server and produces multi-rate reduced data with minimum and maximum values to support time-efficient summary reporting and range queries. The XMLRPC interface has been extended to provide binary data transfer to clients needing large amounts of raw data.
poster icon Poster MOPKN006 [0.133 MB]  
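
  To make the min/max decimation idea in MOPKN006 concrete, here is a small Python sketch: each output bin keeps the first, minimum and maximum values so that range queries and summary plots stay faithful at reduced rates. The bin size and sample layout are assumptions for illustration, not the archiver's actual file format.

  def decimate(samples, bin_seconds):
      """samples: iterable of (timestamp, value), assumed time-ordered."""
      bins = {}
      for t, v in samples:
          key = int(t // bin_seconds)
          if key not in bins:
              bins[key] = {"t": key * bin_seconds, "first": v, "min": v, "max": v}
          else:
              b = bins[key]
              b["min"] = min(b["min"], v)
              b["max"] = max(b["max"], v)
      return [bins[k] for k in sorted(bins)]

  raw = [(0.0, 1.2), (0.5, 0.9), (1.1, 3.4), (59.9, -0.2), (60.2, 2.0)]
  print(decimate(raw, 60))   # two 60-second bins, each preserving its min and max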
 
MOPKN007 LHC Dipole Magnet Splice Resistance from SM18 Data Mining 98
 
  • H. Reymond, O.O. Andreassen, C. Charrondière, G. Lehmann Miotto, A. Rijllart, D. Scannicchio
    CERN, Geneva, Switzerland
 
  The splice incident that occurred during LHC commissioning on 19 September 2008 caused damage to several magnets and adjacent equipment. This raised not only the question of how it happened, but also concerns about the state of all other splices. The inter-magnet splices were studied very soon afterwards with new measurements, but the internal magnet splices were also a concern. At the Chamonix meeting in January 2009, the CERN management decided to create a working group to analyse the provoked-quench data from the magnet acceptance tests and to look for indications of bad splices in the main dipoles. This resulted in a data mining project that took about one year to complete. This presentation describes how the data were stored, extracted and analysed by reusing existing LabVIEW™-based tools. We also present the difficulties encountered and the importance of combining measured data with operator notes in the logbook.
poster icon Poster MOPKN007 [5.013 MB]  
 
MOPKN009 The CERN Accelerator Measurement Database: On the Road to Federation 102
 
  • C. Roderick, R. Billen, M. Gourber-Pace, N. Hoibian, M. Peryt
    CERN, Geneva, Switzerland
 
  The Measurement database, acting as short-term central persistence and front-end of the CERN accelerator Logging Service, receives billions of time-series data points per day for 200,000+ signals. A variety of data acquisition systems on hundreds of front-end computers publish source data that eventually end up being logged in the Measurement database. As part of a federated approach to data management, information about source devices is defined in a Configuration database, whilst the signals to be logged are defined in the Measurement database. A mapping, which is often complex and subject to change and extension, is therefore required in order to subscribe to the source devices and write the published data to the corresponding named signals. Since 2005 this mapping was maintained by means of dozens of XML files, manually edited by multiple people, resulting in an error-prone configuration. In 2010 the configuration was improved so that it is fully centralized in the Measurement database, significantly reducing the complexity and the number of actors in the process. Furthermore, logging processes immediately pick up modified configurations via JMS-based notifications sent directly from the database, allowing targeted device subscription updates rather than the full process restart that was required previously. This paper describes the architecture and the benefits of the current implementation, as well as the next steps on the road to a fully federated solution.
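
  A conceptual Python sketch of the targeted subscription update described above: when a configuration change is notified, only the affected device is re-subscribed instead of restarting the whole logging process. The real system sends JMS notifications from the Oracle database to Java processes; here a plain callback stands in for the JMS listener, and the device and signal names are hypothetical.

  class LoggingProcess:
      def __init__(self, mapping):
          self.mapping = mapping          # device -> logged signal name
          self.subscriptions = {}
          for device in mapping:
              self.subscribe(device)

      def subscribe(self, device):
          self.subscriptions[device] = f"subscription({device})"

      def on_config_change(self, device, new_signal):
          """Called by the notification listener: update only the affected device."""
          self.mapping[device] = new_signal
          self.subscribe(device)          # re-subscribe this device, no full restart

  proc = LoggingProcess({"RF.CAV1/Acquisition": "LHC.RF.CAV1:VOLTAGE"})
  proc.on_config_change("RF.CAV1/Acquisition", "LHC.RF.CAV1:VOLTAGE_V2")
  print(proc.mapping)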
 
MOPKN010 Database and Interface Modifications: Change Management Without Affecting the Clients 106
 
  • M. Peryt, R. Billen, M. Martin Marquez, Z. Zaharieva
    CERN, Geneva, Switzerland
 
  The first Oracle-based Controls Configuration Database (CCDB) was developed in 1986, making the control system of CERN's Proton Synchrotron data-driven. Since then, this mission-critical system has evolved tremendously, going through several generational changes in terms of the increasing complexity of the control system, software technologies and data models. Today the CCDB covers the whole CERN accelerator complex and satisfies a much wider range of functional requirements. Because it is used online, everyday operation of the machines must not be disrupted. This paper describes our approach to dealing with change while ensuring continuity. How do we manage database schema changes? How do we take advantage of the latest web-deployed application development frameworks without alienating the users? How do we minimize the impact on dependent systems connected to the databases through various APIs? In this paper we provide our answers to these questions, and to many more.
 
MOPKN011 CERN Alarms Data Management: State & Improvements 110
 
  • Z. Zaharieva, M. Buttner
    CERN, Geneva, Switzerland
 
  The CERN Alarms System, LASER, is a centralized service ensuring the capture, storage and notification of anomalies for the whole accelerator chain, including the technical infrastructure at CERN. The underlying database holds the pre-defined configuration data for the alarm definitions and for the operators' alarm consoles, as well as the time-stamped, run-time alarm events propagated through the Alarms System. The article will discuss the current state of the Alarms database and recent improvements that have been introduced. It will look into the data management challenges related to the alarms configuration data, which is taken from numerous sources. Specially developed ETL processes must be applied to this data in order to transform it into an appropriate format and load it into the Alarms database. The recorded alarm events, together with additional data necessary for providing event statistics to users, are transferred to the long-term alarms archive. The article will also cover the data management challenges related to the recently developed suite of data management interfaces, with respect to keeping the alarms configuration data coming from external sources consistent with the modifications introduced by end-users.
poster icon Poster MOPKN011 [4.790 MB]  
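
  A toy example of the ETL step MOPKN011 mentions: pull alarm definitions from one source schema, reshape them, and load them into an alarms configuration table. Table and column names are invented and sqlite3 is used only as a stand-in; the real service runs on Oracle.

  import sqlite3

  db = sqlite3.connect(":memory:")
  db.executescript("""
      CREATE TABLE source_equipment (code TEXT, description TEXT, severity INTEGER);
      CREATE TABLE alarm_definitions (fault_family TEXT, fault_member TEXT,
                                      fault_code INTEGER, problem TEXT, priority INTEGER);
      INSERT INTO source_equipment VALUES ('CRYO_PLANT_4', 'He compressor stopped', 3);
  """)

  # Extract rows from the external source, transform the field layout, load into the target.
  for code, description, severity in db.execute("SELECT * FROM source_equipment"):
      family, member = code.split("_", 1)            # transform: split the equipment code
      db.execute("INSERT INTO alarm_definitions VALUES (?, ?, ?, ?, ?)",
                 (family, member, 1, description, severity))

  print(db.execute("SELECT * FROM alarm_definitions").fetchall())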
 
MOPKN012 HyperArchiver: An EPICS Archiver Prototype Based on Hypertable 114
 
  • M.G. Giacchini, A. Andrighetto, G. Bassato, L.G. Giovannini, M. Montis, G.P. Prete, J.A. Vásquez
    INFN/LNL, Legnaro (PD), Italy
  • J. Jugo
    University of the Basque Country, Faculty of Science and Technology, Bilbao, Spain
  • K.-U. Kasemir
    ORNL, Oak Ridge, Tennessee, USA
  • R. Lange
    HZB, Berlin, Germany
  • R. Petkus
    BNL, Upton, Long Island, New York, USA
  • M. del Campo
    ESS-Bilbao, Zamudio, Spain
 
  This work started in the context of the NSLS2 project at Brookhaven National Laboratory. The NSLS2 control system foresees a very large number of process variables (PVs) and has strict requirements in terms of archiving/retrieval rates: our goal was to store 10K PVs/sec and retrieve 4K PVs/sec for a group of 4 signals. HyperArchiver is an EPICS Archiver implementation backed by Hypertable, an open-source database whose internal architecture is derived from Google's Bigtable. We discuss the performance of HyperArchiver and present the results of some comparative tests.
HyperArchiver: http://www.lnl.infn.it/~epics/joomla/archiver.html
EPICS: http://www.aps.anl.gov/epics/
 
poster icon Poster MOPKN012 [1.231 MB]  
 
MOPKN013 Image Acquisition and Analysis for Beam Diagnostics Applications of the Taiwan Photon Source 117
 
  • C.Y. Liao, J. Chen, Y.-S. Cheng, K.T. Hsu, K.H. Hu, C.H. Kuo, C.Y. Wu
    NSRRC, Hsinchu, Taiwan
 
  Design and implementation of image acquisition and analysis is in progress for Taiwan Photon Source (TPS) diagnostic applications. The optical system consists of a screen, a lens, and a lighting system. A CCD camera with a Gigabit Ethernet interface (GigE Vision) will be the standard image acquisition device. Images will be acquired on an EPICS IOC via PV channels and analysed using Matlab to evaluate beam properties such as the profile (σ), beam size, position and tilt angle. An EPICS IOC integrated with Matlab as a data processing system can be used not only for image analysis but also for many other kinds of equipment data processing. Progress of the project is summarized in this report.
poster icon Poster MOPKN013 [0.816 MB]  
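
  A rough numpy equivalent of the Matlab analysis step in MOPKN013: estimate the beam centroid, RMS sizes and tilt angle from image moments. A synthetic Gaussian spot replaces the real CCD frame fetched from the EPICS IOC, and background subtraction and calibration are omitted.

  import numpy as np

  y, x = np.mgrid[0:200, 0:300]
  image = np.exp(-((x - 150) ** 2) / (2 * 20 ** 2) - ((y - 90) ** 2) / (2 * 10 ** 2))

  total = image.sum()
  x0 = (x * image).sum() / total                 # centroid
  y0 = (y * image).sum() / total
  sxx = ((x - x0) ** 2 * image).sum() / total    # second central moments
  syy = ((y - y0) ** 2 * image).sum() / total
  sxy = ((x - x0) * (y - y0) * image).sum() / total

  sigma_x, sigma_y = np.sqrt(sxx), np.sqrt(syy)
  tilt_deg = 0.5 * np.degrees(np.arctan2(2 * sxy, sxx - syy))
  print(f"position=({x0:.1f}, {y0:.1f}) px, sigma=({sigma_x:.1f}, {sigma_y:.1f}) px, tilt={tilt_deg:.1f} deg")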
 
MOPKN014 A Web Based Realtime Monitor on EPICS Data 121
 
  • L.F. Li, C.H. Wang
    IHEP Beijing, Beijing, People's Republic of China
 
  Funding: IHEP China
Monitoring systems such as EDM and CSS are extremely important in EPICS-based control systems. Most of them are based on a client/server (C/S) architecture. This paper describes the design and implementation of a web-based real-time monitoring system for EPICS data. The system is based on a browser/server (B/S) architecture using Flex [1]. Through the CAJ [2] interface, it fetches EPICS data such as beam energy, beam current, lifetime and luminosity. All data are displayed in a real-time chart in the browser (IE or Firefox/Mozilla). The chart is refreshed at a regular interval and can be zoomed and adjusted; it also provides data tips and a full-screen mode.
[1]http://www.adobe.com/products/flex.html
[2] M. Sekoranja, "Native Java Implementation of Channel Access for EPICS", 10th ICALEPCS, Geneva, Oct 2005, PO2.089-5.
 
poster icon Poster MOPKN014 [1.105 MB]  
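
  A server-side sketch of the browser/server idea in MOPKN014: expose current EPICS values as JSON that a browser chart (Flex in the paper, or any JavaScript plotting library) can poll at a regular interval. pyepics stands in for the Java CAJ client used in the actual system; the PV names are placeholders and the values are assumed to be scalars.

  import json
  from http.server import BaseHTTPRequestHandler, HTTPServer
  from epics import caget

  PV_NAMES = ["BEPC:BEAM:ENERGY", "BEPC:BEAM:CURRENT"]   # hypothetical PV names

  class Handler(BaseHTTPRequestHandler):
      def do_GET(self):
          values = {}
          for name in PV_NAMES:
              value = caget(name)                          # None if the PV is unreachable
              values[name] = None if value is None else float(value)
          payload = json.dumps(values).encode()
          self.send_response(200)
          self.send_header("Content-Type", "application/json")
          self.end_headers()
          self.wfile.write(payload)

  HTTPServer(("", 8080), Handler).serve_forever()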
 
MOPKN015 Managing Information Flow in ALICE 124
 
  • O. Pinazza
    INFN-Bologna, Bologna, Italy
  • A. Augustinus, P.Ch. Chochula, L.S. Jirdén, A.N. Kurepin, M. Lechman, P. Rosinský
    CERN, Geneva, Switzerland
  • G. De Cataldo
    INFN-Bari, Bari, Italy
  • A. Moreno
    Universidad Politécnica de Madrid, E.T.S.I Industriales, Madrid, Spain
 
  ALICE is one of the experiments at the Large Hadron Collider (LHC), CERN (Geneva, Switzerland). The ALICE detector control system is an integrated system collecting the controls of 18 different subdetectors and general services, and is implemented using the commercial SCADA package PVSS. Information of general interest, such as beam and ALICE condition data, together with data related to shared plants or systems, is made available to all the subsystems through the distribution capabilities of PVSS. Great care has been taken during design and implementation to build the control system as a hierarchical system, limiting the interdependencies of the various subsystems. Accessing remote resources in a PVSS distributed environment is very simple and can be initiated unilaterally. In order to improve the reliability of distributed data and to avoid unforeseen dependencies, the ALICE DCS group has enforced the centralization of the publication of global data and other specific variables requested by the subsystems. As an example, a monitoring tool developed in PVSS will be presented that estimates the level of interdependency and helps to understand the optimal layout of the distributed connections, allowing an interactive visualization of the distribution topology.
poster icon Poster MOPKN015 [2.585 MB]  
 
MOPKN016 Tango Archiving Service Status 127
 
  • G. Abeillé, J. Guyot, M. Ounsy, S. Pierre-Joseph Zéphir
    SOLEIL, Gif-sur-Yvette, France
  • R. Passuello, G. Strangolino
    ELETTRA, Basovizza, Italy
  • S. Rubio-Manrique
    CELLS-ALBA Synchrotron, Cerdanyola del Vallès, Spain
 
  In modern scientific facilities like ALBA, ELETTRA or Synchrotron SOLEIL, the monitoring and tuning of thousands of parameters is essential to drive high-performing accelerators and beamlines. To keep track of these parameters and to easily manage large volumes of technical data, an archiving service is a key component of a modern control system like Tango [1]. To this end, a high-availability archiving service is provided as a feature of the Tango control system. This archiving service stores data coming from the Tango control system into MySQL [2] or Oracle [3] databases. Three sub-services are provided: a historical service with an archiving period down to 10 seconds; a short-term service providing a few weeks of retention with a period down to 100 milliseconds; and a snapshot service which takes "pictures" of Tango parameters and can reapply them to the control system on user demand. This paper presents how to obtain a high-performance and scalable service, based on our feedback after years of operation. The deployment architecture in the different Tango institutes is then detailed. The paper concludes with a description of the next steps and upcoming features that will be available in the near future.
[1] http://www.tango-controls.org/
[2] http://www.mysql.com/
[3] http://www.oracle.com/us/products/database/index.html
 
 
MOPKN017 From Data Storage towards Decision Making: LHC Technical Data Integration and Analysis 131
 
  • A. Marsili, E.B. Holzer, A. Nordt, M. Sapinski
    CERN, Geneva, Switzerland
 
  The monitoring of beam conditions, equipment conditions and measurements from the beam instrumentation devices in CERN's Large Hadron Collider (LHC) produces more than 100 Gb/day of data. Such a large quantity of data is unprecedented in accelerator monitoring, and new developments are needed to access, process, combine and analyse data from different equipment. The Beam Loss Monitoring (BLM) system has been one of the most reliable pieces of equipment in the LHC during its 2010 run, issuing beam dumps when the detected losses were above the defined abort thresholds. Furthermore, the BLM system was able to detect and study unexpected losses, requiring intensive offline analysis. This article describes the techniques developed to access the data produced (∼50,000 values/s), access the relevant system layout information, and access, combine and display different machine data.
poster icon Poster MOPKN017 [0.411 MB]  
 
MOPKN018 Computing Architecture of the ALICE Detector Control System 134
 
  • P. Rosinský, A. Augustinus, P.Ch. Chochula, L.S. Jirdén, M. Lechman
    CERN, Geneva, Switzerland
  • G. De Cataldo
    INFN-Bari, Bari, Italy
  • A.N. Kurepin
    RAS/INR, Moscow, Russia
  • A. Moreno
    Universidad Politécnica de Madrid, E.T.S.I Industriales, Madrid, Spain
  • O. Pinazza
    INFN-Bologna, Bologna, Italy
 
  The ALICE Detector Control System (DCS) is based on a commercial SCADA product running on a large Windows computer cluster. It communicates with about 1200 network-attached devices to assure safe and stable operation of the experiment. In the presentation we focus on the design of the ALICE DCS computer systems. We describe the management of data flow, the mechanisms for handling large amounts of data, and the information exchange with external systems. One of the key operational requirements is an intuitive, error-proof and robust user interface allowing simple operation of the experiment. At the same time, typical operator tasks, such as trending or routine checks of the devices, must be decoupled from the automated operation in order to prevent overloading critical parts of the system. All these requirements must be met in an environment with strict security constraints. In the presentation we explain how these demands affected the architecture of the ALICE DCS.
 
MOPKN019 ATLAS Detector Control System Data Viewer 137
 
  • C.A. Tsarouchas, S.A. Roe, S. Schlenker
    CERN, Geneva, Switzerland
  • U.X. Bitenc, M.L. Fehling-Kaschek, S.X. Winkelmann
    Albert-Ludwig Universität Freiburg, Freiburg, Germany
  • S.X. D'Auria
    University of Glasgow, Glasgow, United Kingdom
  • D. Hoffmann, O.X. Pisano
    CPPM, Marseille, France
 
  The ATLAS experiment at CERN is one of the four Large Hadron Collider experiments. ATLAS uses a commercial SCADA system (PVSS) for its Detector Control System (DCS), which is responsible for the supervision of the detector equipment, the reading of operational parameters, the propagation of alarms and the archiving of important operational data in a relational database. The DCS Data Viewer (DDV) is an application that provides access to the historical data of DCS parameters written to the database through a web interface. It has a modular and flexible design and is structured using a client-server architecture. The server can be operated standalone with a command-line interface to the data, while the client offers a user-friendly, browser-independent interface. The selection of the metadata of DCS parameters is done via a column-tree view or with a powerful search engine. The final visualisation of the data is done using various plugins such as "value over time" charts, data tables, raw ASCII or structured export to ROOT. Excessive access or malicious use of the database is prevented by dedicated protection mechanisms, allowing the tool to be exposed to hundreds of inexperienced users. The metadata selection and data output features can be used separately via XML configuration files. Security constraints have been taken into account in the implementation, allowing DDV to be accessed by collaborators worldwide. Due to its flexible interface and its generic and modular approach, DDV could easily be used for other experiment control systems that archive data using PVSS.
poster icon Poster MOPKN019 [0.938 MB]  
 
MOPKN020 The PSI Web Interface to the EPICS Channel Archiver 141
 
  • G. Jud, A. Lüdeke, W. Portmann
    Paul Scherrer Institut, 5232 Villigen PSI, Switzerland
 
  The EPICS channel archiver is a powerful tool that collects control system data from thousands of EPICS process variables, at rates of many hertz each, into an archive for later retrieval [1]. The channel archiver version 2 package provides a Java application for graphical data retrieval and a command-line tool for data extraction into different file formats. At the Paul Scherrer Institute we wanted to be able to retrieve the archived data from a web interface, with flexible retrieval functions and the possibility to exchange data references by e-mail. This web interface has been implemented by the PSI controls group and has now been in operation for several years. This presentation highlights the special features of this PSI web interface to the EPICS channel archiver.
[1] http://sourceforge.net/apps/trac/epicschanarch/wiki
 
poster icon Poster MOPKN020 [0.385 MB]  
 
MOPKN021 Asynchronous Data Change Notification between Database Server and Accelerator Control Systems 144
 
  • W. Fu, J. Morris, S. Nemesure
    BNL, Upton, Long Island, New York, USA
 
  Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy.
Database data change notification (DCN) is a commonly used feature, but not all database management systems (DBMSs) provide an explicit DCN mechanism. Even for those DBMSs which support DCN (such as Oracle and MS SQL Server), some server-side and/or client-side programming may be required to make the DCN system work. This makes the setup of DCN between a database server and interested clients tedious and time-consuming. In accelerator control systems, there are many well-established client/server software architectures (such as CDEV, EPICS, and ADO) that can be used to implement data reflection servers that transfer data asynchronously to any client using the standard SET/GET API. This paper describes a method for using such a data reflection server to set up asynchronous DCN (ADCN) between a DBMS and its clients. This method works well for all DBMSs that provide database trigger functionality.
 
poster icon Poster MOPKN021 [0.355 MB]  
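
  A self-contained sketch of the trigger-based pattern MOPKN021 describes, using sqlite3 in place of Oracle/MS SQL Server: a trigger records each change in a notification table, and a polling loop (standing in for the CDEV/EPICS/ADO reflection server) picks up the changes that it would then push to clients through its SET/GET API. All table and setting names are invented for illustration.

  import sqlite3

  db = sqlite3.connect(":memory:")
  db.executescript("""
      CREATE TABLE machine_settings (name TEXT PRIMARY KEY, value REAL);
      CREATE TABLE change_log (id INTEGER PRIMARY KEY AUTOINCREMENT,
                               name TEXT, new_value REAL);
      CREATE TRIGGER settings_changed AFTER UPDATE ON machine_settings
      BEGIN
          INSERT INTO change_log(name, new_value) VALUES (NEW.name, NEW.value);
      END;
      INSERT INTO machine_settings VALUES ('rf_voltage', 6.0);
  """)

  db.execute("UPDATE machine_settings SET value = 8.0 WHERE name = 'rf_voltage'")

  last_seen = 0
  rows = db.execute("SELECT id, name, new_value FROM change_log WHERE id > ?",
                    (last_seen,)).fetchall()
  for row_id, name, new_value in rows:
      last_seen = row_id
      print(f"notify clients: {name} -> {new_value}")   # reflection server would push this via SET/GET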
 
MOPKN024 The Integration of the LHC Cryogenics Control System Data into the CERN Layout Database 147
 
  • E. Fortescue-Beck, R. Billen, P. Gomes
    CERN, Geneva, Switzerland
 
  The Large Hadron Collider's cryogenic control system makes extensive use of several databases to manage data pertaining to over 34,000 cryogenic instrumentation channels. These data are essential for populating the firmware of the PLCs responsible for maintaining the LHC at the appropriate temperature. In order to reduce the number of data sources and the overall complexity of the system, the databases have been rationalised and the automatic tool that extracts data for the control software has been simplified. This paper describes the main improvements that have been made and evaluates the success of the project.
 
MOPKN025 Integrating the EPICS IOC Log into the CSS Message Log 151
 
  • K.-U. Kasemir, E. Danilova
    ORNL, Oak Ridge, Tennessee, USA
 
  Funding: SNS is managed by UT-Battelle, LLC, under contract DE-AC05-00OR22725 for the U.S. Department of Energy
The Experimental Physics and Industrial Control System (EPICS) includes the "IOCLogServer", a tool that logs error messages from front-end computers (Input/Output Controllers, IOCs) into a set of text files. Control System Studio (CSS) includes a distributed message logging system with relational database persistence and various log analysis tools. We implemented a log server that forwards IOC messages to the CSS log database, allowing several ways of monitoring and analyzing the IOC error messages.
 
poster icon Poster MOPKN025 [4.006 MB]  
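
  A minimal stand-in for the forwarding step MOPKN025 describes: parse IOC log lines and insert them into a relational message table. sqlite3 replaces the Oracle/MySQL database behind the CSS message log, and the line format and column names are simplified assumptions.

  import sqlite3

  db = sqlite3.connect(":memory:")
  db.execute("CREATE TABLE message (host TEXT, received TEXT, text TEXT)")

  ioc_log_lines = [
      "iocvac01 2011-10-10 16:42:01 CA client exception: virtual circuit disconnect",
      "iocrf02 2011-10-10 16:42:05 logClientSendMessage: lost contact with log server",
  ]

  for line in ioc_log_lines:
      host, date, clock, text = line.split(" ", 3)
      db.execute("INSERT INTO message VALUES (?, ?, ?)", (host, f"{date} {clock}", text))

  for row in db.execute("SELECT * FROM message"):
      print(row)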
 
MOPKN027 BDNLS - BESSY Device Name Location Service 154
 
  • D.B. Engel, P. Laux, R. Müller
    HZB, Berlin, Germany
 
Initially the relational database (RDB) for control system configuration at BESSY was built around the device concept [1]. Maintenance and consistency issues, as well as the complexity of the scripts generating the configuration data, triggered the development of a novel, generic RDB structure based on hierarchies of named nodes with attribute/value pairs [2]. Unfortunately, it turned out that the usability of this generic RDB structure for comprehensive configuration management relies entirely on sophisticated data maintenance tools. Against this background, BDNS, a new database management tool, has been developed within the framework of the Eclipse Rich Client Platform. It uses the Model-View-Controller (MVC) layer of JFace to cleanly separate retrieval processes, data paths, data visualization and data updates. It is based on extensible configurations described in XML, allowing SQL calls to be chained and profiles composed for various use cases, and it solves the problem of forwarding data keys to subsequent SQL statements. The ability of BDNS to map various levels of complexity into the XML configurations makes it possible to provide easy-to-use, tailored database access to configuration maintainers for the different underlying database structures. Being based on Eclipse, the integration of BDNS into Control System Studio is straightforward.
[1] T. Birke et al., "Relational Database for Controls Configuration Management", IADBG Workshop 2001, San Jose.
[2] T. Birke et al., "Beyond Devices - An Improved RDB Data-Model for Configuration Management", ICALEPCS 2005, Geneva.
 
poster icon Poster MOPKN027 [0.210 MB]  
 
MOPKN029 Design and Implementation of the CEBAF Element Database 157
 
  • T. L. Larrieu, M.E. Joyce, C.J. Slominski
    JLAB, Newport News, Virginia, USA
 
  Funding: Authored by Jefferson Science Associates, LLC under U.S. DOE Contract No. DE-AC05-06OR23177.
With the inauguration of the CEBAF Element Database (CED) in Fall 2010, Jefferson Lab computer scientists have taken a first step toward the eventual goal of a model-driven accelerator. Once fully populated, the database will be the primary repository of information used for everything from generating lattice decks to booting front-end computers to building controls screens. A particular requirement influencing the CED design is that it must provide consistent access not only to present, but also to future, and eventually past, configurations of the CEBAF accelerator. To accomplish this, an introspective database schema was designed that allows new elements, element types, and element properties to be defined on the fly without changing the table structure. When used in conjunction with Oracle Workspace Manager, it allows users to seamlessly query data from any time in the database history with exactly the same tools they use for querying the present configuration. Users can also check out workspaces and use them as staging areas for upcoming machine configurations. All access to the CED is through a well-documented API that is translated automatically from the original C++ into native libraries for scripting languages such as Perl, PHP, and Tcl, making access to the CED easy and ubiquitous.
The U.S. Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce this manuscript for U.S. Government purposes.
 
poster icon Poster MOPKN029 [5.239 MB]
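
  A sketch of the "introspective" idea behind MOPKN029: element types, elements and properties live in rows rather than in table definitions, so a new property needs no schema change. sqlite3 and all names here are stand-ins invented for illustration; the production CED uses Oracle, with Workspace Manager providing history and staging workspaces.

  import sqlite3

  db = sqlite3.connect(":memory:")
  db.executescript("""
      CREATE TABLE element_type (id INTEGER PRIMARY KEY, name TEXT);
      CREATE TABLE element      (id INTEGER PRIMARY KEY, type_id INTEGER, name TEXT);
      CREATE TABLE property     (id INTEGER PRIMARY KEY, type_id INTEGER, name TEXT);
      CREATE TABLE prop_value   (element_id INTEGER, property_id INTEGER, value TEXT);

      INSERT INTO element_type VALUES (1, 'Quadrupole');
      INSERT INTO element      VALUES (10, 1, 'MQB1L02');
      INSERT INTO property     VALUES (100, 1, 'Length');
      INSERT INTO prop_value   VALUES (10, 100, '0.3');
  """)

  -- comment in Python syntax below: define a brand-new property at run time, no ALTER TABLE required
  db.execute("INSERT INTO property VALUES (101, 1, 'FieldIntegral')")
  db.execute("INSERT INTO prop_value VALUES (10, 101, '1.25')")

  query = """SELECT e.name, p.name, v.value
             FROM element e
             JOIN prop_value v ON v.element_id = e.id
             JOIN property p   ON p.id = v.property_id"""
  for row in db.execute(query):
      print(row)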