MOMAU —  Mini Orals A   (10-Oct-11   17:00—17:30)
Chair: J.M. Meyer, ESRF, Grenoble, France
Paper | Title | Page
MOMAU002 Improving Data Retrieval Rates Using Remote Data Servers 40
 
  • T. D'Ottavio, B. Frak, J. Morris, S. Nemesure
    BNL, Upton, Long Island, New York, USA
 
  Funding: Work performed under the auspices of the U.S. Department of Energy
The power and scope of modern control systems have led to an increased amount of data being collected and stored, including data collected at high (kHz) frequencies. One consequence is that users now routinely make data requests that can cause gigabytes of data to be read and displayed. Given that a user's patience can be measured in seconds, this can be quite a technical challenge. This paper explores one possible solution to this problem: the creation of remote data servers whose performance is optimized to handle context-sensitive data requests. Methods for increasing data delivery performance include the use of high-speed network connections between the stored data and the data servers, smart caching of frequently used data, and culling of the delivered data as determined by the context of the data request. This paper describes the decisions made when constructing these servers and compares data retrieval performance for clients that use or do not use an intermediate data server.
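A minimal sketch of the context-driven culling described above, assuming a kHz-rate archive and a plotting client; the function names, sample rate and caching policy are illustrative assumptions, not taken from the paper.

# Illustrative sketch only: a server-side routine that caches raw reads and
# thins kHz-rate data to the resolution a plotting client can display.
# Function names, the sample rate and the caching policy are assumptions.
from functools import lru_cache

SAMPLE_RATE_HZ = 1000  # assumed logging rate

@lru_cache(maxsize=128)
def fetch_raw(signal, start_s, end_s):
    """Stand-in for a high-speed read from the data store."""
    n = int((end_s - start_s) * SAMPLE_RATE_HZ)
    return tuple(float(i) for i in range(n))  # placeholder samples

def serve_request(signal, start_s, end_s, display_points=2000):
    """Return at most display_points samples for a plotting client.

    A plot needs no more points than it has pixels, so the server thins
    the data before sending it over the network to the client.
    """
    raw = fetch_raw(signal, start_s, end_s)
    stride = max(1, len(raw) // display_points)
    return raw[::stride]

# A one-hour request at 1 kHz (3.6 million samples) returns ~2000 points.
print(len(serve_request("bpm:orbit_x", 0, 3600)))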
 
Slides MOMAU002 [0.085 MB]
Poster MOMAU002 [1.077 MB]
 
MOMAU003 The Computing Model of the Experiments at PETRA III 44
 
  • T. Kracht, M. Alfaro, M. Flemming, J. Grabitz, T. Núñez, A. Rothkirch, F. Schlünzen, E. Wintersberger, P. van der Reest
    DESY, Hamburg, Germany
 
  The PETRA storage ring at DESY in Hamburg has been refurbished to become a highly brilliant synchrotron radiation source (now named PETRA III). Commissioning of the beamlines started in 2009, user operation in 2010. Compared with our DORIS beamlines, the PETRA III experiments have greater complexity and higher data rates, and they require an integrated system for data storage and archiving, data processing and data distribution. Tango [1] and Sardana [2] are the main components of our online control system. Both systems are developed by international collaborations. Tango serves as the backbone to operate all beamline components, certain storage ring devices and equipment from our users. Sardana is an abstraction layer on top of Tango. It standardizes the hardware access, organizes experimental procedures, has a command line interface and provides us with widgets for graphical user interfaces. Other clients, such as Spectra, which was written for DORIS, interact with Tango or Sardana. Modern 2D detectors create large data volumes. At PETRA III all data are transferred to an online file server hosted by the DESY computer center. Near-real-time analysis and reconstruction steps are executed on a CPU farm. A portal for remote data access is in preparation. Data archiving is done by dCache [3]. An offline file server has been installed for further analysis and in-house data storage.
[1] http://www.tango-controls.org
[2] http://computing.cells.es/services/collaborations/sardana
[3] http://www-dcache.desy.de
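As a rough illustration of the Tango layer described in the abstract above, the fragment below uses the PyTango client library to talk to a beamline device; the device and attribute names are invented examples, not actual PETRA III devices.

# Rough illustration of direct Tango access; Sardana layers standardized
# macros and scan procedures on top of calls like these. The device and
# attribute names below are invented, not actual PETRA III devices.
import tango  # PyTango client library

motor = tango.DeviceProxy("p09/motor/exp.01")   # hypothetical device name
print(motor.read_attribute("Position").value)   # read a scalar attribute
motor.write_attribute("Position", 12.5)         # command a move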
 
Slides MOMAU003 [0.347 MB]
Poster MOMAU003 [0.563 MB]
 
MOMAU004 Database Foundation for the Configuration Management of the CERN Accelerator Controls Systems 48
 
  • Z. Zaharieva, M. Martin Marquez, M. Peryt
    CERN, Geneva, Switzerland
 
  The Controls Configuration DB (CCDB) and its interfaces have been developed over the last 25 years and today form the basis for the configuration management of the controls system for all accelerators at CERN. The CCDB contains the data for all configuration items and their relationships that are required for the correct functioning of the controls system. The configuration items are quite heterogeneous, covering different areas of the controls system – ranging from 3000 front-end computers and 75,000 software devices allowing remote control of the accelerators to the valid states of the accelerator timing system. The article describes the different areas of the CCDB, their interdependencies and the challenges of establishing the data model for such a diverse configuration management database serving a multitude of clients. The CCDB tracks the life of the configuration items by allowing their clear identification, triggering change management processes and providing status accounting and audits. This necessitated the development and implementation of a combination of tailored processes and tools. The controls system is data-driven: the data stored in the CCDB is extracted and propagated to the controls hardware in order to configure it remotely. Special attention is therefore paid to data security and data integrity, as an incorrectly configured item can have a direct impact on the operation of the accelerators.
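To make the notion of configuration items and typed relationships more concrete, here is a purely illustrative sketch in Python/SQLite; the table, column and item names are invented and are not the CCDB schema.

# Purely illustrative sketch of a configuration-item data model with typed
# relationships; table, column and item names are invented, not the CCDB's.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE config_item (
    id        INTEGER PRIMARY KEY,
    item_type TEXT NOT NULL,   -- e.g. 'front_end_computer', 'software_device'
    name      TEXT NOT NULL UNIQUE,
    status    TEXT NOT NULL    -- supports status accounting and audits
);
CREATE TABLE item_relationship (
    parent_id INTEGER REFERENCES config_item(id),
    child_id  INTEGER REFERENCES config_item(id),
    relation  TEXT NOT NULL,   -- e.g. 'hosts', 'controls'
    PRIMARY KEY (parent_id, child_id, relation)
);
""")
db.execute("INSERT INTO config_item VALUES (1, 'front_end_computer', 'fec-example-01', 'operational')")
db.execute("INSERT INTO config_item VALUES (2, 'software_device', 'PowerConverter.401', 'operational')")
db.execute("INSERT INTO item_relationship VALUES (1, 2, 'hosts')")
print(db.execute("SELECT relation, child_id FROM item_relationship").fetchall())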
Slides MOMAU004 [0.404 MB]
Poster MOMAU004 [6.064 MB]
 
MOMAU005 Integrated Approach to the Development of the ITER Control System Configuration Data 52
 
  • D. Stepanov, L. Abadie
    ITER Organization, St. Paul lez Durance, France
  • J. Bertin, G. Bourguignon, G. Darcourt
    Sopra Group, Aix-en-Provence, France
  • O. Liotard
    TCS France, Puteaux, France
 
  The ITER control system (CODAC) is steadily moving into the implementation phase. A design guidelines handbook and a software development toolkit, named CODAC Core System, were produced in February 2011. They are ready to be used off-site, in the ITER domestic agencies and associated industries, to develop the first control "islands" of the various ITER plant systems. In addition to the work done off-site, there is a wealth of I&C-related data developed centrally at ITER, but scattered across various sources. These data include I&C design diagrams, 3-D data, volume allocation, inventory control, administrative data, planning and scheduling, tracking of deliveries and associated documentation, requirements control, etc. All these data have to be kept coherent and up to date, with various types of cross-checks and procedures imposed on them. A "plant system profile" database, currently under development at ITER, represents an effort to provide an integrated view of the I&C data. Supported by platform-independent data modeling done with the help of XML Schema, it accumulates all the data in a single hierarchy and provides different views for different aspects of the I&C data. The database is implemented using MS SQL Server and a Java-based web interface. Import and data-linking services are implemented using Talend software, and report generation is done with MS SQL Server Reporting Services. This paper will report on the first implementation of the database, the kinds of data stored so far, typical workflows and processes, and directions for further work.
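The single-hierarchy idea can be sketched with the Python standard library as below; the element names and values are invented for illustration and do not reflect the actual ITER XML Schema.

# Invented sketch of collecting I&C data for one plant system into a single
# hierarchical profile document; element names are not the ITER XML Schema.
import xml.etree.ElementTree as ET

profile = ET.Element("plantSystemProfile", name="example-plant-system")
signals = ET.SubElement(profile, "signals")
ET.SubElement(signals, "signal", name="vacuum-pressure-01", type="analog")
cubicles = ET.SubElement(profile, "cubicles")
ET.SubElement(cubicles, "cubicle", id="CUB-001", location="B71")

# Different views of the same hierarchy can then be generated per audience,
# e.g. a flat signal list for the I&C designer.
print([s.get("name") for s in profile.iter("signal")])
print(ET.tostring(profile, encoding="unicode"))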
Slides MOMAU005 [0.384 MB]
Poster MOMAU005 [0.692 MB]
 
MOMAU007 How to Maintain Hundreds of Computers Offering Different Functionalities with Only Two System Administrators 56
 
  • R.A. Krempaska, A.G. Bertrand, C.E. Higgs, R. Kapeller, H. Lutz, M. Provenzano
    Paul Scherrer Institut, 5232 Villigen PSI, Switzerland
 
  The Controls section at PSI is responsible for the control systems of four accelerators: the two proton accelerators HIPA and PROSCAN, the Swiss Light Source (SLS) and the SwissFEL Free Electron Laser Test Facility. On top of that, we have 18 additional SLS beamlines to control. The control system is mainly composed of the so-called Input Output Controllers (IOCs), which require a complete and complex computing infrastructure in order to be booted, developed, debugged and monitored. This infrastructure currently consists mainly of Linux computers such as boot servers, port servers and configuration servers (called save-and-restore servers). Overall, the constellation of computers and servers that compose the control system counts about five hundred Linux machines, which can be split into 38 different configurations based on the work each of these systems needs to provide. For the administration of all this we employ only two system administrators, who are responsible for the installation, configuration and maintenance of those computers. This paper shows which tools are used to tackle this difficult task: Puppet (an open-source Linux tool that we further adapted) and many in-house developed tools offering an overview of the computers, their installation status and the relations between the different servers and computers.
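The kind of in-house overview tooling mentioned above might look roughly like the sketch below; the host names, configuration classes and data are invented for illustration.

# Invented sketch of an inventory-overview tool: grouping hosts by their
# configuration class and flagging machines that need attention.
from collections import defaultdict

inventory = [
    {"host": "sls-boot-01",  "config": "boot-server",  "up_to_date": True},
    {"host": "hipa-port-02", "config": "port-server",  "up_to_date": False},
    {"host": "fel-save-01",  "config": "save-restore", "up_to_date": True},
]

by_config = defaultdict(list)
for node in inventory:
    by_config[node["config"]].append(node)

for config, nodes in sorted(by_config.items()):
    stale = [n["host"] for n in nodes if not n["up_to_date"]]
    print(f"{config:12s} {len(nodes):3d} host(s), needs attention: {stale}")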
Slides MOMAU007 [0.384 MB]
Poster MOMAU007 [0.708 MB]
 
MOMAU008 Integrated Management Tool for Controls Software Problems, Requests and Project Tasking at SLAC 59
 
  • D. Rogind, W. Allen, W.S. Colocho, G. DeContreras, J.B. Gordon, P. Pandey, H. Shoaee
    SLAC, Menlo Park, California, USA
 
  The Controls Department at SLAC, with its service center model, continuously receives engineering requests to design, build and support controls for accelerator systems lab-wide. Each customer request can vary in complexity from installing a minor feature to enhancing a major subsystem. Departmental accelerator improvement projects, along with DOE-approved construction projects, also contribute heavily to the work load. These various customer requests and projects, paired with the ongoing operational maintenance and problem reports, place a demand on the department that usually exceeds the capacity of available resources. An integrated, centralized repository comprising all problems, requests and project tasks, available to all customers, operators, managers and engineers alike, is essential to capture, communicate, prioritize, assign, schedule, track progress on, and finally commission all work components. The Controls software group has recently integrated its request and task management into its online problem-tracking tool, the Comprehensive Accelerator Tool for Enhancing Reliability (CATER). This paper discusses the new integrated problem/request/task management tool: its workflow, its reporting capability, and its many benefits.
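As a loose illustration of the lifecycle described above (capture, prioritize, assign, schedule, track, commission), here is a small state-machine sketch; the states and class are invented and do not reflect CATER's actual data model.

# Invented state-machine sketch of a work item moving through the lifecycle
# described above; not CATER's actual data model.
from dataclasses import dataclass, field

TRANSITIONS = {
    "captured":    ["prioritized"],
    "prioritized": ["assigned"],
    "assigned":    ["scheduled"],
    "scheduled":   ["in_progress"],
    "in_progress": ["commissioned"],
}

@dataclass
class WorkItem:
    title: str
    kind: str                       # 'problem', 'request' or 'project task'
    state: str = "captured"
    history: list = field(default_factory=list)

    def advance(self, new_state):
        if new_state not in TRANSITIONS.get(self.state, []):
            raise ValueError(f"cannot move from {self.state} to {new_state}")
        self.history.append(self.state)
        self.state = new_state

item = WorkItem("Add readback to orbit display", "request")
for step in ("prioritized", "assigned", "scheduled", "in_progress", "commissioned"):
    item.advance(step)
print(item.state, item.history)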
Slides MOMAU008 [0.083 MB]
Poster MOMAU008 [1.444 MB]
 
MOMAU009 Laser Inertial Fusion Energy Control Systems
 
  • C.D. Marshall, R.W. Carey, R. Demaret, O.D. Edwards, L.J. Lagin, P.J. Van Arsdall
    LLNL, Livermore, California, USA
 
  Funding: This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory (LLNL) under Contract DE-AC52-07NA27344.
A Laser Inertial Fusion Energy (LIFE) facility point design is being developed at LLNL to support an Inertial Confinement Fusion (ICF) based energy concept. It will build upon the technical foundation of the National Ignition Facility (NIF), the world's largest and most energetic laser system. NIF is designed to compress fusion targets to the conditions required for thermonuclear burn. The LIFE control systems will have an architecture partitioned by subsystem and distributed among thousands of front-end processors, embedded controllers and supervisory servers. LIFE's automated control subsystems will require interoperation between different languages and target architectures. Much of the control system will be embedded within the subsystems, with well-defined interface and performance requirements to the supervisory control layer. An automation framework will be used to orchestrate and automate start-up and shut-down as well as steady-state operation. The LIFE control system will have a highly parallel, segmented architecture. For example, the laser system consists of 384 identical laser beamlines in a "box". The control system will mirror this architectural replication for each beamline, with a straightforward high-level interface for control and status monitoring. Key technical challenges, such as injected-target tracking and laser-pointing feedback, will be discussed. This talk presents the plan for the controls and information systems to support LIFE.
LLNL-CONF-476451
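The architectural replication mentioned in the abstract might be pictured as in the sketch below; the class and method names are invented for illustration and are not part of the LIFE design.

# Invented sketch of per-beamline architectural replication: one controller
# class instantiated 384 times behind a uniform high-level interface.
class BeamlineController:
    def __init__(self, index):
        self.index = index
        self.armed = False

    def arm(self):
        self.armed = True            # stand-in for a per-beamline start-up sequence

    def status(self):
        return {"beamline": self.index, "armed": self.armed}

# One supervisory layer drives all 384 beamlines through the same interface.
beamlines = [BeamlineController(i) for i in range(384)]
for bl in beamlines:
    bl.arm()
print(sum(bl.status()["armed"] for bl in beamlines), "beamlines armed")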
 
Slides MOMAU009 [0.784 MB]