Data and Information Management

Paper Title Page
TOPA01 Data Management at JET with a Look Forward to ITER 74
 
  • A. J. Capel, N. J. Cook, A. M. Edwards, E. M. Jones, R. A. Layne, D. C. McDonald, M. W. Wheatley, J. W. Farthing
    UKAEA Culham, Culham, Abingdon, Oxon
  • M. Greenwald
    MIT/PSFC, Cambridge, Massachusetts
  • J. B. Lister
    ITER, St Paul lez Durance
 
  Since the first JET pulse in 1983, the raw data collected per ~40 s of plasma discharge (pulse) has roughly followed a Moore's Law-like doubling every 2 years. Today we collect up to ~10 GB per pulse, and the total data collected over ~70,000 pulses amounts to ~35 TB. Enhancements to JET should result in ~60 GB per pulse being collected by 2010. An ongoing challenge is to maintain the pulse repetition rate, data access times, and data security. The mass data store provides storage, archiving, and also the data access methods. JET, like most fusion experiments, provides an MDSplus (http://www.mdsplus.org) access layer on top of its own client-server access. Although ITER will also be a pulsed experiment, the discharge will be ~300-5000 s in duration. Data storage and analysis must hence be performed exclusively in real time. The ITER conceptual design proposes a continuous timeline for access to all project data. The JET mass data store will be described together with the planned upgrades required to cater for the increases in data at the end of 2009. The functional requirements for the ITER mass storage system will be described based on the current status of the ITER conceptual design.
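
  As a rough illustration of the growth law quoted above, a minimal Python sketch (the function name and the three-year horizon are assumptions for illustration, not taken from the paper):

      # Doubling every two years: extrapolate the per-pulse data volume.
      def extrapolate_gb_per_pulse(gb_now: float, years_ahead: float,
                                   doubling_period_years: float = 2.0) -> float:
          return gb_now * 2 ** (years_ahead / doubling_period_years)

      # ~10 GB/pulse today grows to ~28 GB/pulse three years on from the doubling
      # trend alone; the quoted ~60 GB by 2010 also reflects planned enhancements.
      print(round(extrapolate_gb_per_pulse(10, 3), 1))   # 28.3
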
TOPA02 SDA Time Intervals 79
 
  • J. Cai, E. S. McCrory, D. J. Nicklaus, T. B. Bolshakov
    Fermilab, Batavia, Illinois
 
  SDA (Sequenced Data Acquisition) Time Intervals is a hierarchical logging system for describing complex large-scale repeated processes. SDA has been used extensively at Fermilab* for fine tuning during the Tevatron Collider Run II. SDA Time Intervals is a new system born during discussions between CERN and FNAL about routinely recording relevant data for the LHC. Its main advantages are extremely low maintenance and good integration with traditional "flat" dataloggers. The Time Intervals (TI) system records the time of key events during a process and relates these events to the data that the traditional datalogger archives. From the point of view of the application program, any number of datalogging systems can be refactored into human-understandable time intervals.

* SDA-based diagnostic and analysis tools for Collider Run II. T.B. Bolshakov, P. Lebrun, S. Panacek, V. Papadimitriou, J. Slaughter, A. Xiao. Proceedings of PAC 05, Knoxville, Tennessee, May 2005.
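
  A minimal sketch of the Time Intervals idea, assuming hypothetical names (this is not the FNAL implementation): key event times of a repeated process are stored as nested intervals, and an interval is then used to slice the samples recorded by a conventional flat datalogger.

      from dataclasses import dataclass, field
      from typing import List, Tuple

      @dataclass
      class TimeInterval:
          """One named step of a process; steps may nest hierarchically."""
          name: str
          start: float                      # event time, seconds since epoch
          stop: float
          children: List["TimeInterval"] = field(default_factory=list)

      def slice_flat_log(log: List[Tuple[float, float]],
                         interval: TimeInterval) -> List[Tuple[float, float]]:
          """Return the (time, value) samples a flat datalogger recorded
          while the given interval was active."""
          return [(t, v) for t, v in log if interval.start <= t <= interval.stop]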

 
TOPA03 The IRMIS Universal Component-Type Model 82
 
  • D. Dohan
    ANL, Argonne, Illinois
 
  The IRMIS toolkit provides a relational description of the accelerator/facility hardware and how it is assembled. To create this relational model, the APS site infrastructure was successively partitioned until a set of familiar, "unit-replaceable" components was reached. These items were grouped into a set of component types, each characterized by the type's function, form factor, etc. No accelerator "role" was assigned to the components, resulting in a universal set of component types applicable to any laboratory or facility. This paper discusses the development of the universal component-type model. Extension of the component types to include port definitions and signal-handling capabilities will be discussed. This signal-handling aspect provides the primary mechanism for relating control system software to accelerator hardware. The schema is being extended to include references to the device support for EPICS-supported component types. This suggests a new approach to EPICS database configuration in which the user, after selecting a particular hardware component, is provided with links to the support software to be used in building the EPICS application.  
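
  A minimal sketch of the component-type idea, with hypothetical names (the real model is relational, not Python): a role-free catalogue of component types, instances of which are wired into a housing hierarchy.

      from dataclasses import dataclass, field
      from typing import List, Optional

      @dataclass
      class ComponentType:
          """Catalogue entry: function and form factor, but no accelerator role."""
          name: str                                       # e.g. "VME crate", "power supply"
          form_factor: str
          ports: List[str] = field(default_factory=list)  # signal-handling ports

      @dataclass
      class Component:
          """An installed instance of a type, wired into a housing hierarchy."""
          component_type: ComponentType
          parent: Optional["Component"] = None
          children: List["Component"] = field(default_factory=list)

          def install(self, child: "Component") -> None:
              child.parent = self
              self.children.append(child)
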
ROPA01 Lessons Learned from the SNS Relational Database 514
 
  • E. Danilova, J. G. Patton, J. D. Purcell
    ORNL, Oak Ridge, Tennessee
 
  The Spallation Neutron Source Project relies heavily on many different applications that require and depend on the SNS integrated relational database. Although many of the projects undertaken have been successful, the majority of time and energy spent on producing products has resulted in opportunities lost. The percentage of time lost or wasted has been very similar to that of software development projects everywhere. At the SNS the variety of factors that have influenced these projects can be traced to some specific areas: management support, project deadlines, user expectations, graphical user interfaces, and the database itself. This paper presents a look at the factors that have helped make different projects a success and factors that have led to less favorable results.  
ROPA02 The High Performance Database Archiver for the LHC Experiments 517
 
  • M. Gonzalez-Berges
    CERN, Geneva
 
  Each of the Large Hadron Collider (LHC) experiments will be controlled by a large distributed system built with the SCADA tool PVSS. There will be about 150 computers and millions of input/output channels per experiment. The values read from the hardware, alarms generated, and user actions will be archived for the physics analysis and for the debugging of the control system itself. Although the original PVSS implementation of a database archiver was appropriate for standard industrial use, its performance was not sufficient. A collaboration was set up between CERN and ETM, the company that develops PVSS. Changes in the architecture and several optimizations were made and tested in a system of a size comparable to the final ones. As a result we have been able to improve the performance by more than one order of magnitude and, more importantly, we now have a scalable architecture based on the Oracle clustering technology (Real Application Cluster or RAC). This architecture can deal with the requirements for insertion rate, data querying, and manageability of the high volume of data (e.g., an insertion rate of > 150,000 changes/s was achieved with a 6-node RAC cluster).
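
  Back-of-envelope numbers implied by the quoted figures (an illustrative calculation; only the 150,000 changes/s on the 6-node cluster comes from the abstract):

      rac_nodes = 6
      total_rate = 150_000                  # archived value changes per second
      per_node = total_rate / rac_nodes     # 25,000 changes/s per RAC node
      per_day = total_rate * 86_400         # about 1.3e10 changes over 24 hours
      print(per_node, f"{per_day:.1e}")     # 25000.0 1.3e+10
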
ROPA03 ANTARES Slow Control Status 520
 
  • J. M. Gallone
    IPHC, Strasbourg Cedex 2
 
  ANTARES is a neutrino telescope project based on strings of Cerenkov detectors in the deep sea. These detectors are spread over a volume of 1 km³ at a depth of about 2 km in the Mediterranean Sea near Toulon. About 400 such detectors are now operational, as well as a large variety of instruments that need a reliable and accurate embedded slow control system. Based on Commodity Off-the-Shelf (COTS) low-power integrated processors and industry standards such as Ethernet and ModBus, the foreseen system is expected to run for 3 years without any direct access to the hardware. We present the system architecture and some performance figures. The slow control system stores the state of the system at any time in a database. This state may be analyzed by the technical staff in charge of maintenance, by physicists checking the setup of the experiment, or by the data acquisition system for saving experimental conditions. The main functions of the slow control system are to give a record of the state of the whole system, to set the value of a parameter of a physical device, to modify the value of a parameter of a physical device, and to set up the initial values of the physical devices.
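
  A minimal sketch of the four slow-control functions listed above, assuming hypothetical names and in-memory storage (not the ANTARES implementation):

      import time
      from typing import Dict, List, Tuple

      class SlowControlStore:
          """Keeps the latest value of every device parameter plus its history."""

          def __init__(self) -> None:
              self.state: Dict[Tuple[str, str], float] = {}
              self.history: List[Tuple[float, str, str, float]] = []

          def set_parameter(self, device: str, parameter: str, value: float) -> None:
              """Set or modify a parameter of a physical device and record it."""
              self.state[(device, parameter)] = value
              self.history.append((time.time(), device, parameter, value))

          def apply_initial_values(self, defaults: Dict[Tuple[str, str], float]) -> None:
              """Set up the initial values of the physical devices."""
              for (device, parameter), value in defaults.items():
                  self.set_parameter(device, parameter, value)
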
RPPA03 The LHC Functional Layout Database as Foundation of the Controls System 526
 
  • R. Billen, J. Mariethoz, P. Le Roux
    CERN, Geneva
 
  For the design, construction, integration, and installation of the LHC, the LHC Layout database manages the information on the functional positions of the components of the LHC. Since January 2005, the scope of this database has been extended to include all electronics racks in the tunnel, underground areas, and surface buildings. This description of the accelerator and the installed controls topology is now used as the foundation for the online operational databases, namely for controls configuration and operational settings. This paper will sketch the scope of the Layout database and explain the details of data propagation towards the respective controls data consumers. The question whether this approach is applicable to the rest of the accelerator complex at CERN will be addressed as well.  
RPPA04 Automating the Configuration of the Control Systems of the LHC Experiments 529
 
  • P. Golonka, F. Varela, F. Calheiros
    CERN, Geneva
 
  The supervisory layer of the Large Hadron Collider (LHC) experiments is based on the PVSS SCADA tool and the Joint Control Project (JCOP) framework. This controls framework includes a Finite State Machine (FSM) toolkit, which allows operation of the control systems according to a well-defined set of states and commands. During the FSM transitions of the detectors, it will be required to reconfigure parts of the control systems. All configuration parameters of the devices integrated into the control system are stored in the so-called configuration database. In this paper the JCOP FSM-Configuration database tool is presented. This tool represents a common solution for the four LHC experiments to ensure, in the PVSS sub-detector control applications, the availability of all configuration data required for a given type of run of the experiment. The implementation strategy chosen is discussed in the paper. This approach enables the standalone operation of different partitions of the detectors simultaneously while ensuring independent data handling. Preliminary performance results of the tool are also presented in this paper.
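
  A minimal sketch of the pattern described, with hypothetical names (the real tool is built on PVSS and the JCOP FSM toolkit): a transition fetches the device settings stored for a given run type and target state and applies them.

      from typing import Callable, Dict, Tuple

      class ConfigurationDB:
          """Maps (run type, FSM state) to a set of device settings (a "recipe")."""

          def __init__(self, recipes: Dict[Tuple[str, str], Dict[str, float]]) -> None:
              self.recipes = recipes

          def recipe_for(self, run_type: str, state: str) -> Dict[str, float]:
              return self.recipes.get((run_type, state), {})

      def fsm_transition(db: ConfigurationDB, run_type: str, new_state: str,
                         apply_setting: Callable[[str, float], None]) -> None:
          """On entering new_state, reconfigure the devices for this run type."""
          for parameter, value in db.recipe_for(run_type, new_state).items():
              apply_setting(parameter, value)   # e.g. write to a PVSS datapoint
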
RPPA05 Software Management of the LHC Detector Control Systems 532
 
  • F. Varela
    CERN, Geneva
 
  The control systems of each of the LHC experiments contain on the order of 150 computers running the back-end applications that are based on the PVSS SCADA package and the Joint Controls Project (JCOP) Framework. These inter-cooperating controls applications are being developed by different groups all around the world and have to be integrated by the experiments’ central controls teams. These applications will have to be maintained and eventually upgraded during the lifetime of the LHC experiments, ~20 years. This paper presents the centralized software management strategy based on the JCOP framework installation tool, a central repository shared by the different controls applications and an external database that holds the overall system configuration. The framework installation tool allows installation of software components in the sub-detector PVSS applications and eases integration of different parts of a control system. The information stored in the system configuration database can also be used by the installation tool to restore a computer in the event of failure. The central repository provides versioning of the various software components integrating the control system.  
RPPA12 Process Control: Object Oriented Model for Offline Data 541
 
  • T. Boeckmann, M. R. Clausen, J. Hatje, H. R. Rickens, C. H. Gerke
    DESY, Hamburg
 
  Process control systems are primarily designed to handle online real-time data. But once the system has to be maintained over years of continuous operation, the aspects of asset management (e.g., spare parts) and reengineering (e.g., loading process computers and field bus processors with consistent data after modification of instrumentation) become more and more important. One way to get the necessary information is data mining in the running system. The other possibility is to collect all relevant information in a database from the beginning and build up configuration files from there. For the cryogenic systems in the XFEL, the planned x-ray free electron laser facility at DESY in Hamburg, Germany, EPICS will be used as the process control software. This talk will present the status of the development of our device database, which is to hold the offline data. We have chosen an approach representing the instrumentation and field bus components as objects in Java. The objects are made persistent in an Oracle database using Hibernate. The user interface will be implemented as a plugin to the Control System Studio (CSS), which is based on Eclipse.
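
  The implementation described uses Java objects persisted with Hibernate; purely as an illustration of the object-relational approach, an analogous mapping sketched with SQLAlchemy in Python (class, table, and column names are hypothetical):

      from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
      from sqlalchemy.orm import declarative_base, relationship

      Base = declarative_base()

      class FieldBusProcessor(Base):
          __tablename__ = "fieldbus_processor"
          id = Column(Integer, primary_key=True)
          name = Column(String)
          devices = relationship("Device", back_populates="processor")

      class Device(Base):
          __tablename__ = "device"
          id = Column(Integer, primary_key=True)
          name = Column(String)
          processor_id = Column(Integer, ForeignKey("fieldbus_processor.id"))
          processor = relationship("FieldBusProcessor", back_populates="devices")

      # An in-memory SQLite engine stands in for the Oracle database used at DESY.
      engine = create_engine("sqlite:///:memory:")
      Base.metadata.create_all(engine)
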
RPPA13 The Electrical Power Project at SNS 544
 
  • M. P. Martinez, J. D. Purcell, E. Danilova
    ORNL, Oak Ridge, Tennessee
 
  The Electrical Power Project consists of recording data on all power-distribution devices necessary to SNS operations and how they are connected, assigning a valid name to each device and describing it, and loading this information and the relationships into the SNS Oracle database. Interactive web-based applications allow users to display and easily update power-related data. In the case of planned electrical outages, a complete list of affected devices (including beam-line devices) will be available to controls, diagnostics, and other groups in advance. The power-tree information can be used to help diagnose electrical problems of any specific device. Fast access to device characteristics and relations from any web browser will help technical personnel quickly identify hazards and prevent electrical accidents, thereby ensuring SNS electrical safety. The project was completed by a special task team containing individuals from different groups. The paper covers the project history, QA issues, technology used, and current status.
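
  A minimal sketch of the outage query described above, with invented device names: walking the tree of "fed by" relations lists every downstream device that loses power.

      from typing import Dict, List

      def affected_devices(feeds: Dict[str, List[str]], outage_device: str) -> List[str]:
          """`feeds` maps a power-distribution device to the devices it feeds."""
          affected, stack = [], [outage_device]
          while stack:
              node = stack.pop()
              for child in feeds.get(node, []):
                  affected.append(child)
                  stack.append(child)
          return affected

      # Example: a substation feeding two breakers, each feeding one device.
      tree = {"SUB-1": ["BRK-A", "BRK-B"], "BRK-A": ["MAG-PS-01"], "BRK-B": ["VAC-PUMP-07"]}
      print(affected_devices(tree, "SUB-1"))
      # ['BRK-A', 'BRK-B', 'VAC-PUMP-07', 'MAG-PS-01']
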
RPPA25 The Data Acquisition System (DAQ) of the FLASH Facility 564
 
  • K. Rehlich, R. Rybnikov, R. Kammering
    DESY, Hamburg
 
  Nowadays the photon science experiments and the machines providing these photon beams produce enormous amounts of data. To capture the data from the photon science experiments and from the machine itself, we developed a novel Data AcQuisition (DAQ) system for the FLASH (Free electron LASer in Hamburg) facility. The system is now not only fully integrated into the DOOCS control system, but is also the core for a number of essential machine-related feedback loops and monitoring tasks. A central DAQ server records and stores the data of more than 900 channels with 1-MHz up to 2-GHz sampling and several images from the photon science experiments with a typical frame rate of 5 Hz. On this server all data are synchronized on a bunch basis, which makes this the perfect location to attach, e.g., high-level feedbacks and calculations. An overview of the architecture of the DAQ system and its interconnections within the complex of the FLASH facility, together with the status of the DAQ system and possible future extensions/applications, will be given.
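
  A minimal sketch of bunch-based synchronisation, assuming hypothetical names (not the DOOCS DAQ API): readings from many channels are grouped by a common pulse identifier so that feedbacks and calculations see one consistent record per machine pulse.

      from collections import defaultdict
      from typing import Dict, List, Tuple

      def synchronise(samples: List[Tuple[int, str, float]]) -> Dict[int, Dict[str, float]]:
          """samples are (pulse_id, channel_name, value) triples from many front-ends."""
          by_pulse: Dict[int, Dict[str, float]] = defaultdict(dict)
          for pulse_id, channel, value in samples:
              by_pulse[pulse_id][channel] = value
          return dict(by_pulse)
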
RPPA26 Database for Control System of J-PARC 3 GeV RCS 567
 
  • S. F. Fukuta
    MELCO SC, Tsukuba
  • Y. Kato, M. Kawase, H. Sakaki, H. Sako, H. Yoshikawa, H. Takahashi
    JAEA/J-PARC, Tokai-Mura, Naka-Gun, Ibaraki-Ken
  • S. S. Sawa
    Total Support Systems Corporation, Tokai-mura, Naka-gun, Ibaraki
  • M. Sugimoto
    Mitsubishi Electric Control Software Corp, Kobe
 
  The control system of the J-PARC 3 GeV RCS is configured from a database, which comprises a Component Data Management DB (Component DB) and a Data Acquisition DB (Operation DB). The Component DB was developed mainly to manage the data on accelerator components and to generate EPICS records automatically from those data. Presently we are testing the reliability of the DB application software in Linac operation. Most Linac EPICS records are now generated from the DB, and we are able to operate the Linac with very few problems. The Operation DB collects two kinds of data: EPICS record data and synchronized data. We are testing the reliability of the application software for EPICS record data collection and have confirmed that EPICS record data are collected with very few problems. Linac EPICS record data have been inserted into the Operation DB since the start of Linac operation. The application software for synchronized data collection is now being developed, and its reliability will be tested with comprehensive information from RCS operation. We report on the status of development of the database for the control system of the J-PARC 3 GeV RCS.
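
  A minimal sketch of generating EPICS records from component data, with an invented record template, field choices, and PV name (not the J-PARC tool itself):

      # One EPICS "ai" record is emitted per signal row found in the component DB.
      TEMPLATE = ('record(ai, "{pv_name}") {{\n'
                  '    field(DESC, "{description}")\n'
                  '    field(EGU,  "{unit}")\n'
                  '    field(SCAN, "1 second")\n'
                  '}}\n')

      def generate_db(rows):
          """rows: dictionaries coming from a component-database query."""
          return "".join(TEMPLATE.format(**row) for row in rows)

      print(generate_db([{"pv_name": "LI:PS1:CURRENT",
                          "description": "magnet power supply current",
                          "unit": "A"}]))
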
RPPA27 Status of the TANGO Archiving System 570
 
  • J. Guyot, M. O. Ounsy, S. Pierre-Joseph Zephir
    SOLEIL, Gif-sur-Yvette
 
  This poster will give a detailed status of the major functionality delivered as a Tango service: the archiving service. The goal of this service is to maintain the archive history of thousands of accelerator or beamline control parameters in order to be able to correlate signals or to take snapshots of the system at different times and compare them. For this aim, three database services have been developed and fully integrated in Tango: a historical database with an archiving frequency up to 0.1 Hz, a short-term database providing a few hours of retention but with a higher archiving frequency (up to 10 Hz), and finally a snapshot database. These services are available to end users through two graphical user interfaces: Mambo (for data extraction/visualization from the historical and temporary databases) and Bensikin (for snapshot management). The software architecture and design of the whole system will be presented, as well as the current status of the deployment at SOLEIL.
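
  A minimal sketch of how a requested storage frequency maps onto the three services (the HDB/TDB/SNAP shorthand is the customary Tango naming; the thresholds follow the figures quoted above):

      def choose_archive(rate_hz: float, snapshot: bool = False) -> str:
          """Pick the archiving service for a requested storage frequency."""
          if snapshot:
              return "SNAP"      # on-demand snapshot of the system
          if rate_hz <= 0.1:
              return "HDB"       # long-term historical database
          if rate_hz <= 10.0:
              return "TDB"       # short-term database, a few hours of retention
          raise ValueError("rates above ~10 Hz are not archived by these services")
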
RPPA31 Construction and Application of Database for CSNS 579
 
  • P. Chu
    SLAC, Menlo Park, California
  • C. H. Wang, Q. Gan
    IHEP Beijing, Beijing
 
  The database of the China Spallation Neutron Source (CSNS) accelerator is designed to store machine parameters, magnet measurement data, survey and alignment data, control system configuration data, equipment historical data, the e-logbook, and so on. It will also provide project management quality assurance, error impact analysis, and assembly assistance, including sorting. This paper introduces the construction and application of the database for CSNS. Details such as naming convention rules, the database model and schema, interfaces for importing and exporting data, and database maintenance will be presented.
RPPA33 Search for a Reliable Storage Architecture for RHIC 585
 
  • R. A. Katz, J. Morris, S. Binello
    BNL, Upton, Long Island, New York
 
  Software used to operate the Relativistic Heavy Ion Collider (RHIC) resides on one operational RAID storage system. This storage system is also used to store data that reflects the status and recent history of accelerator operations. Failure of this system interrupts the operation of the accelerator as backup systems are brought online. In order to increase the reliability of this critical control system component, the storage system architecture has been upgraded to use Storage Area Network (SAN) technology and to introduce redundant components and redundant storage paths. This paper describes the evolution of the storage system, the contributions to reliability that each additional feature has provided, further improvements that are being considered, and real-life experience with the current system.  
RPPA35 The DIAMON Project – Monitoring and Diagnostics for the CERN Controls Infrastructure 588
 
  • M. Buttner, J. Lauener, K. Sigerud, M. Sobczak, N. Stapley, P. Charrue
    CERN, Geneva
 
  The CERN accelerators’ controls infrastructure spans large geographical distances and accesses a wide diversity of equipment. In order to ensure smooth beam operation, efficient monitoring and diagnostic tools are required by the operators, presenting the state of the infrastructure and offering guidance for first-line support. The DIAMON project intends to deploy software monitoring agents in the controls infrastructure, each agent running predefined local tests and sending its results to a central service. A highly configurable graphical interface will exploit these results and present the current state of the controls infrastructure. Diagnostic facilities to get further details on a problem and first aid to repair it will also be provided. This paper will describe the DIAMON project’s scope and objectives as well as the user requirements. Also presented will be the system architecture and the first operational version.
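
  A minimal sketch of the agent pattern described, with hypothetical test names and transport (not the DIAMON implementation): each host runs predefined local tests and reports the results to a central service.

      import socket
      import time
      from typing import Callable, Dict

      def run_agent(tests: Dict[str, Callable[[], bool]],
                    report: Callable[[str, float, Dict[str, bool]], None]) -> None:
          """Run every local test once and send the results to the central service."""
          results = {name: test() for name, test in tests.items()}
          report(socket.gethostname(), time.time(), results)

      # Stand-in tests and a print() transport instead of the real central service.
      run_agent({"disk_space_ok": lambda: True, "frontend_process_alive": lambda: True},
                report=lambda host, when, results: print(host, results))
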
RPPA36 Handling Large Data Amounts in ALICE DCS 591
 
  • A. Augustinus, L. S. Jirden, S. Kapusta, P. Rosinsky, P. Ch. Chochula
    CERN, Geneva
 
  The amount of control data to be handled by the ALICE experiment at CERN is an order of magnitude larger than in previous-generation experiments. Some 18 detectors, 130 subsystems, and 100,000 control channels need to be configured, controlled, and archived in normal operation. During the configuration phase several Gigabytes of data are written to devices, and during stable operations some 1,000 values per second are written to the archive. The peak load for the archival is estimated at 150,000 changes/s. Data is also continuously exchanged with several external systems, and the system should be able to operate unattended and fully independently of any external resources. Much care has been taken in the design to fulfill these requirements, and this report will describe the solutions implemented. The data flow and the various components will be described, as well as the data exchange mechanisms and the interfaces to the external systems. Some emphasis will also be given to the data reduction and filtering mechanisms that have been implemented in order to keep the archive within maintainable margins.
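
  A minimal sketch of one common reduction technique of the kind mentioned (a value deadband; not necessarily the exact ALICE mechanism): a channel reading is forwarded to the archive only when it differs from the last archived value by more than a configured threshold.

      from typing import Dict, Tuple

      def deadband_filter(last_archived: Dict[str, float], channel: str, value: float,
                          deadband: float) -> Tuple[bool, Dict[str, float]]:
          """Return (archive?, updated last-archived values) for one new reading."""
          previous = last_archived.get(channel)
          if previous is None or abs(value - previous) > deadband:
              return True, {**last_archived, channel: value}   # archive this change
          return False, last_archived                          # suppress it
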
RPPA37 Experiences: Configuration Management with a Generic RDB Data-Model 594
 
  • B. Franksen, B. Kuner, T. Birke
    BESSY GmbH, Berlin
 
  A new RDB data-model has been introduced at BESSY to enable a more generic approach to storing and handling configuration data. The stored data range from global hardware structure and information, through logical building hierarchies, to configuration information for monitoring applications as well as signal-level information. This information is used to configure the front-end computers as well as the generic and higher-level tools such as the alarm handler and archiver. New applications at BESSY are developed with this generic RDB data-model in mind. First experiences with real-life applications, as well as a set of tools for entering, maintaining, and retrieving configuration data, are described in this paper.
RPPA39 Accelerator Trouble Ticket 600
 
  • C. Bravo, D. Maselli, G. Mazzitelli, T. Tonus, A. Camiletti
    INFN/LNF, Frascati (Roma)
 
  The DAFNE Accelerator complex, a 1020-MeV center-of-mass lepton collider for Phi particle production, consists of a linear accelerator, a damping ring, nearly 180 m of transfer lines, two storage rings that intersect at two points, a test beam area (BTF) providing e+/e- and photons on demand, and three synchrotron light lines (DAFNE-L). The complexity of the machine and its subsystems pushed us to develop a system for logging, archiving, and producing statistics and a history of the DAFNE accelerator and experimental users’ faults, warnings, news, and general setup information. The Accelerator Trouble Ticket is a web tool (PHP, MySQL, and email based) that allows for complete handling and sharing of all the accelerator information with the scientific, technical, and service staff; it also allows experimental users easy access via the World Wide Web. The architecture and implementation of the system and its ease of exportation and configuration for any accelerator complex are presented, along with examples of products and results obtained from the first year of operation at the DAFNE accelerator.