Keyword: distributed
Paper Title Other Keywords Page
MOPKN015 Managing Information Flow in ALICE detector, controls, monitoring, database 124
 
  • O. Pinazza
    INFN-Bologna, Bologna, Italy
  • A. Augustinus, P.Ch. Chochula, L.S. Jirdén, A.N. Kurepin, M. Lechman, P. Rosinský
    CERN, Geneva, Switzerland
  • G. De Cataldo
    INFN-Bari, Bari, Italy
  • A. Moreno
    Universidad Politécnica de Madrid, E.T.S.I Industriales, Madrid, Spain
 
  ALICE is one of the experiments at the Large Hadron Collider (LHC), CERN (Geneva, Switzerland). The ALICE detector control system is an integrated system collecting the controls of 18 different subdetectors and general services, and is implemented using the commercial SCADA package PVSS. Information of general interest, beam and ALICE condition data, together with data related to shared plants or systems, is made available to all the subsystems through the distribution capabilities of PVSS. Great care has been taken during design and implementation to build the control system as a hierarchical system, limiting the interdependencies of the various subsystems. Accessing remote resources in a PVSS distributed environment is very simple and can be initiated unilaterally. In order to improve the reliability of distributed data and to avoid unforeseen dependencies, the ALICE DCS group has enforced centralized publication of global data and of other specific variables requested by the subsystems. As an example, a monitoring tool developed in PVSS will be presented that estimates the level of interdependency and helps understand the optimal layout of the distributed connections, allowing for an interactive visualization of the distribution topology.
Poster MOPKN015 [2.585 MB]
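
As an illustration of the kind of analysis such a tool performs, the following Python sketch counts fan-in and fan-out over a list of PVSS distributed connections and flags direct peer-to-peer links that bypass the central publisher. The subsystem names and connection data are invented; the real tool reads them from the PVSS distribution configuration.

```python
# Sketch of an interdependency estimate for a PVSS distributed system.
from collections import defaultdict

# (consumer, provider) pairs: "consumer reads data published by provider"
connections = [
    ("TPC", "DCS_CENTRAL"),
    ("TRD", "DCS_CENTRAL"),
    ("HMPID", "DCS_CENTRAL"),
    ("TPC", "COOLING"),   # direct peer link we would like to avoid
]

fan_in = defaultdict(int)   # how many subsystems depend on each provider
fan_out = defaultdict(int)  # how many remote systems each consumer attaches to

for consumer, provider in connections:
    fan_in[provider] += 1
    fan_out[consumer] += 1

# Peer-to-peer links (provider is not the central publisher) create the
# unforeseen dependencies the paper wants to eliminate.
for consumer, provider in connections:
    if provider != "DCS_CENTRAL":
        print(f"direct dependency: {consumer} -> {provider}")
```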
 
MOPMS024 Evolution of the Argonne Tandem Linear Accelerator System (ATLAS) Control System controls, software, database, hardware 371
 
  • M.A. Power, F.H. Munson
    ANL, Argonne, USA
 
  Funding: This work was supported by the U.S. Department of Energy, Office of Nuclear Physics, under Contract No. DE-AC02-06CH11357.
Given that the Argonne Tandem Linac Accelerator System (ATLAS) recently celebrated its 25th anniversary, this paper explores the past, present and future of the ATLAS control system and how it has evolved along with the accelerator and with control system technology. ATLAS as we know it today originated with a Tandem Van de Graaff in the 1960s. With the addition of the Booster section in the late 1970s came the first computerized control. ATLAS itself was placed into service on June 25, 1985, and was the world's first superconducting linear accelerator for ions. Since its dedication as a National User Facility, more than a thousand experiments by more than 2,000 users worldwide have taken advantage of the unique capabilities it provides. Today, ATLAS continues to be a user facility for physicists who study the particles that form the heart of atoms. Its most recent addition, CARIBU (Californium Rare Isotope Breeder Upgrade), creates special beams that feed into ATLAS. ATLAS is similar to a living organism, changing and responding to new technological challenges and research needs. As it continues to evolve, so does the control system: from the original days using a DEC PDP-11/34 computer and 2 CAMAC crates, to a DEC Alpha computer running Vsystem software and more than twenty CAMAC crates, to distributed computers and VME systems. Further upgrades that will continue to evolve the control system are in the planning stages.
 
Poster MOPMS024 [2.845 MB]
 
TUDAUST04 Status of the Control System for the European XFEL controls, hardware, feedback, device-server 597
 
  • K. Rehlich
    DESY, Hamburg, Germany
 
  DESY is currently building a new 3.4 km-long X-ray free-electron laser facility. Commissioning is planned for 2014. The facility will deliver ultra-short light pulses with a peak power up to 100 GW and a wavelength down to 0.1 nm. About 200 distributed electronic crates will be used to control the facility, a major fraction of them installed inside the accelerator tunnel. MicroTCA was chosen as an adequate standard with state-of-the-art connectivity and performance, including remote management. The FEL will produce up to 27,000 bunches per second. Data acquisition and controls have to provide bunch-synchronous operation within the whole distributed system. Feedbacks implemented in FPGAs and in service-tier processes will provide the required stability and automation of the FEL. This paper describes the progress in the development of the new hardware as well as the software architecture. Parts of the control system are currently deployed in the much smaller FLASH FEL facility.
Slides TUDAUST04 [6.640 MB]
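
A minimal sketch of what bunch-synchronous collection implies, assuming every front-end tags its samples with a shared train identifier; the names and data are hypothetical, not the XFEL DAQ API.

```python
# Group samples from several distributed front-ends by a shared train id;
# only complete sets are released, mimicking bunch-synchronous operation.
from collections import defaultdict

def merge_by_train_id(*streams):
    buffer = defaultdict(dict)
    n_sources = len(streams)
    for source_id, stream in enumerate(streams):
        for train_id, value in stream:
            buffer[train_id][source_id] = value
    return {t: v for t, v in buffer.items() if len(v) == n_sources}

bpm = [(1001, 0.12), (1002, 0.15)]      # hypothetical beam position samples
toroid = [(1001, 9.8), (1003, 9.7)]     # hypothetical charge samples
print(merge_by_train_id(bpm, toroid))   # only train 1001 is complete
```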
 
WEBHAUST02 Optimizing Infrastructure for Software Testing Using Virtualization network, software, hardware, Windows 622
 
  • O. Khalid, B. Copy, A.A. Shaikh
    CERN, Geneva, Switzerland
 
  Virtualization technology and cloud computing have brought a paradigm shift in the way we utilize, deploy and manage computer resources. They allow fast deployment of multiple operating systems as containers on physical machines, which can be either discarded after use or snapshotted for later re-deployment. At CERN, we have been using virtualization and cloud computing to quickly set up virtual machines with pre-configured software for our developers, enabling them to test and deploy new versions of a software patch for a given application. We have also been using the infrastructure for security analysis of control systems, as virtualization provides a degree of isolation in which control systems such as SCADA systems can be evaluated against simulated network attacks. This paper reports on the techniques used for security analysis, including network configuration and isolation to prevent interference with other systems on the network, and gives an overview of the technologies used to deploy such an infrastructure based on VMware and the OpenNebula cloud management platform.
Slides WEBHAUST02 [2.899 MB]
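
For illustration, one possible shape of the deployment step: generating an OpenNebula VM template that attaches a disposable test machine to an isolated virtual network. The image and network names are invented, and the setup actually used in the paper may differ.

```python
# Sketch: render an OpenNebula VM template for a disposable test machine on
# an isolated virtual network, so simulated attacks on a SCADA system cannot
# interfere with other hosts. All names and sizes are illustrative.
def vm_template(name, image, network, cpu=1, memory_mb=2048):
    return f"""NAME   = "{name}"
CPU    = {cpu}
MEMORY = {memory_mb}
DISK   = [ IMAGE = "{image}" ]
NIC    = [ NETWORK = "{network}" ]"""

print(vm_template("scada-test-01", "slc5-scada-snapshot", "isolated-test-net"))
```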
 
WEMAU003 The LabVIEW RADE Framework Distributed Architecture LabView, framework, software, interface 658
 
  • O.O. Andreassen, D. Kudryavtsev, A. Raimondo, A. Rijllart
    CERN, Geneva, Switzerland
  • S. Shaipov, R. Sorokoletov
    JINR, Dubna, Moscow Region, Russia
 
  For accelerator GUI applications there is a need for a rapid development environment to create expert tools or to prototype operator applications. Typically a variety of tools are used, such as Matlab™ or Excel™, but their scope is limited, either because of their low flexibility or their limited integration into the accelerator infrastructure. In addition, having several tools obliges users to deal with different programming techniques and data structures. We have addressed these limitations by using LabVIEW™, extending it with interfaces to C++ and Java; in this way it fulfils the requirements of ease of use, flexibility and connectivity. We present the RADE framework and four applications based on it. Recent application requirements could only be met by implementing a distributed architecture with multiple servers running multiple services. This brought the additional advantage of redundant services, increasing availability and making updates transparent. We present two applications requiring high availability, and also report on issues encountered with such a distributed architecture and how we have addressed them. The latest extension of the framework is to industrial equipment, with program templates and drivers for PLCs (Siemens and Schneider) and for PXI with LabVIEW Real-Time.
Slides WEMAU003 [0.157 MB]
Poster WEMAU003 [2.978 MB]
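
A sketch of the client-side behavior that redundant services make possible: try each server publishing the same service until one responds. The endpoints are hypothetical, and a bare TCP exchange stands in for the actual RADE protocol.

```python
# Failover across replicated service endpoints (illustrative names/ports).
import socket

ENDPOINTS = [("rade-srv1.example.cern.ch", 4840),
             ("rade-srv2.example.cern.ch", 4840)]

def call_first_available(payload: bytes, timeout=1.0) -> bytes:
    last_error = None
    for host, port in ENDPOINTS:
        try:
            with socket.create_connection((host, port), timeout=timeout) as s:
                s.sendall(payload)
                return s.recv(4096)
        except OSError as exc:          # replica down: try the next one
            last_error = exc
    raise ConnectionError(f"no replica reachable: {last_error}")
```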
 
WEPKN003 Distributed Fast Acquisitions System for Multi Detector Experiments detector, experiment, software, TANGO 717
 
  • F. Langlois, A. Buteau, X. Elattaoui, C.M. Kewish, S. Lê, P. Martinez, K. Medjoubi, S. Poirier, A. Somogyi
    SOLEIL, Gif-sur-Yvette, France
  • A. Noureddine
    MEDIANE SYSTEM, Le Pecq, France
  • C. Rodriguez
    ALTEN, Boulogne-Billancourt, France
 
  An increasing number of SOLEIL beamlines need to use several detection techniques in parallel, which may involve 2D area detectors, 1D fluorescence analyzers, etc. For such experiments, we have implemented distributed fast acquisition systems for multiple detectors. Data from each detector are collected by independent software applications (in our case Tango devices), with all acquisitions triggered by a single master clock. Each detector software device then streams its own data to a common disk space, known as the spool. Each detector's data are stored in an independent NeXus file, with the help of a dedicated high-performance NeXus streaming C++ library (called NeXus4Tango). A dedicated asynchronous process, known as the DataMerger, monitors the spool and gathers all these individual temporary NeXus files into the final experiment NeXus file stored in SOLEIL's common storage system. Metadata describing context and environment are also added to the final file by another process (the DataRecorder device). This software architecture has proved to be very modular in terms of the number and type of detectors, while making users' lives easier, as all data end up in a single file at the end of the acquisition. The status of deployment and operation of these distributed fast acquisition systems will be presented, with the examples of QuickExafs acquisitions on the SAMBA beamline and QuickSRCD acquisitions on DISCO. The complex case of the future NANOSCOPIUM beamline will be discussed in particular.
Poster WEPKN003 [0.671 MB]
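
A rough Python sketch of a DataMerger-like pass, using h5py as a stand-in for the NeXus layer; the paths, file layout and naming scheme are assumptions, and SOLEIL's actual streaming is done with NeXus4Tango.

```python
# Gather every temporary detector file in the spool into one experiment
# file, one root-level entry per detector (illustrative layout).
import glob, os
import h5py

def merge_spool(spool_dir: str, final_path: str) -> None:
    with h5py.File(final_path, "a") as final:
        for tmp_path in sorted(glob.glob(os.path.join(spool_dir, "*.nxs"))):
            detector = os.path.splitext(os.path.basename(tmp_path))[0]
            with h5py.File(tmp_path, "r") as tmp:
                for entry in tmp:              # copy each root-level entry
                    final.copy(tmp[entry], final, name=f"{detector}_{entry}")
            os.remove(tmp_path)                # merged: drop the spool file

# merge_spool("/spool/scan042", "/data/scan042.nxs")
```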
 
WEPKS015 Automatic Creation of LabVIEW Network Shared Variables LabView, controls, hardware, network 812
 
  • T. Kluge
    Siemens AG, Erlangen, Germany
  • H.-C. Schröder
    ASTRUM IT GmbH, Erlangen, Germany
 
  We are in the process of preparing the LabVIEW-controlled system components of our Solid State Direct Drive® experiments [1, 2, 3, 4] for integration into a Supervisory Control And Data Acquisition (SCADA) or distributed control system. The predetermined route to this is the generation of LabVIEW network shared variables that can easily be exported by LabVIEW to the SCADA system using OLE for Process Control (OPC) or other means. Many repetitive tasks are associated with the creation of the shared variables and the required code. We introduce an efficient and inexpensive procedure that automatically creates shared variable libraries and sets default values for the shared variables. Furthermore, LabVIEW controls are created that are used for managing the connection to the shared variables inside the LabVIEW code operating on them. The procedure takes as input an XML spreadsheet defining the required variables, and utilizes XSLT and LabVIEW scripting. In a later stage of the project the code generation can be expanded to also create the code and configuration files that will be necessary to access the shared variables from the SCADA system of choice.
[1] O. Heid, T. Hughes, THPD002, IPAC10, Kyoto, Japan
[2] R. Irsigler et al., 3B-9, PPC11, Chicago, IL, USA
[3] O. Heid, T. Hughes, THP068, LINAC10, Tsukuba, Japan
[4] O. Heid, T. Hughes, MOPD42, HB2010, Morschach, Switzerland
 
Poster WEPKS015 [0.265 MB]
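
The XML-plus-XSLT step can be sketched as below; the input format and the generated library structure are simplified placeholders, not the real LabVIEW .lvlib schema.

```python
# Apply an XSLT stylesheet to a variable-definition document to produce a
# (simplified) shared-variable library, mirroring the generation procedure.
from lxml import etree

variables = etree.XML("""
<variables>
  <var name="HeaterCurrent" type="Double" default="0.0"/>
  <var name="InterlockOn"   type="Boolean" default="false"/>
</variables>""")

xslt = etree.XML("""
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/variables">
    <Library>
      <xsl:for-each select="var">
        <Variable Name="{@name}" Type="{@type}" Default="{@default}"/>
      </xsl:for-each>
    </Library>
  </xsl:template>
</xsl:stylesheet>""")

print(etree.tostring(etree.XSLT(xslt)(variables), pretty_print=True).decode())
```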
 
WEPKS016 Software for Virtual Accelerator Designing simulation, framework, software, EPICS 816
 
  • N.V. Kulabukhova, A.N. Ivanov, V.V. Korkhov, A. Lazarev
    St. Petersburg State University, St. Petersburg, Russia
 
  The article discusses appropriate technologies for the software implementation of a Virtual Accelerator. The Virtual Accelerator is considered as a set of services and tools enabling transparent execution of computational software for modeling beam dynamics in accelerators on distributed computing resources. The distributed storage and information processing facilities utilized by the Virtual Accelerator follow a Service-Oriented Architecture (SOA) according to the cloud computing paradigm. Control system toolkits (such as EPICS and TANGO), computing modules (including high-performance computing), realization of the GUI with existing frameworks, and visualization of the data are discussed in the paper. The presented research consists of a software analysis for realizing the interaction between all levels of the Virtual Accelerator, together with samples of middleware implementation. A set of servers and clusters at St. Petersburg State University forms the infrastructure of the computing environment for Virtual Accelerator design. The use of component-oriented technology to realize the interaction between the Virtual Accelerator levels is proposed. The article concludes with an overview and justification of the technologies chosen for the design and implementation of the Virtual Accelerator.
Poster WEPKS016 [0.559 MB]
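
A minimal sketch of the control-system-facing layer, assuming EPICS with pyepics; the PV names and the solver stand-in are hypothetical, and the real beam-dynamics code runs on distributed resources.

```python
# The Virtual Accelerator reads machine settings from the control system,
# delegates the modeling, and publishes the result back (illustrative PVs).
from epics import caget, caput

def simulate(k: float) -> float:
    # stand-in for the real beam-dynamics job running on the cluster
    return 1.0 / (1.0 + abs(k))

def run_model_step():
    quad_k = caget("VA:QUAD1:K")          # read a setting from EPICS
    beam_sigma = simulate(quad_k)         # delegate to the distributed solver
    caput("VA:BEAM:SIGMA", beam_sigma)    # publish the computed beam size

# run_model_step()  # requires a running EPICS IOC serving these PVs
```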
 
WEPKS028 Exploring a New Paradigm for Accelerators and Large Experimental Apparatus Control Systems controls, toolkit, software, database 856
 
  • L. Catani, R. Ammendola, F. Zani
    INFN-Roma II, Roma, Italy
  • C. Bisegni, S. Calabrò, P. Ciuffetti, G. Di Pirro, G. Mazzitelli, A. Stecchi
    INFN/LNF, Frascati (Roma), Italy
  • L.G. Foggetta
    LAL, Orsay, France
 
  The integration of web technologies and web services has been, in recent years, one of the major trends in upgrading and developing control systems for accelerators and large experimental apparatuses. Usually, web technologies have been introduced to complement the control systems with smart add-ons and user-friendly services or, for instance, to safely allow users from remote sites to access the control system. Beyond this still narrow spectrum of use, however, some software technologies developed for high-performance web services, although originally intended and optimized for those applications, offer features that would allow their deeper integration into a control system and, eventually, their use in developing some of the control system's core components. In this paper we present the conclusions of a preliminary investigation of a new paradigm for an accelerator control system and its associated machine data acquisition system (DAQ), based on a synergic combination of a network-distributed cache memory and a non-relational key/value database. We investigated these technologies with particular interest in their performance, namely the speed of data storage and retrieval for the network memory, and the data throughput and query execution time for the database, and, especially, in how much these performances can benefit from the technologies' inherent scalability. The work has been developed in a collaboration between INFN-LNF and INFN-Roma Tor Vergata.
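
An illustrative micro-benchmark in the spirit of this investigation, timing round-trips to a distributed cache and to a key/value store. memcached and Redis are stand-ins chosen for the sketch (the abstract does not name products), and local test instances are assumed to be running.

```python
# Compare set/get round-trip throughput of a cache and a key/value store.
import time
from pymemcache.client.base import Client as MemcacheClient
import redis

def throughput(setter, getter, n=10_000):
    t0 = time.perf_counter()
    for i in range(n):
        setter(f"k{i}", b"payload")
        getter(f"k{i}")
    return n / (time.perf_counter() - t0)

mc = MemcacheClient(("localhost", 11211))   # assumes a local memcached
rd = redis.Redis()                          # assumes a local Redis
print("cache ops/s:", throughput(mc.set, mc.get))
print("kv db ops/s:", throughput(rd.set, rd.get))
```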
 
WEPKS032 A UML Profile for Code Generation of Component Based Distributed Systems interface, software, controls, framework 867
 
  • G. Chiozzi, L. Andolfato, R. Karban
    ESO, Garching bei Muenchen, Germany
  • A. Tejeda
    UCM, Antofagasta, Chile
 
  A consistent and unambiguous implementation of code generation (model-to-text transformation) from UML must rely on a well-defined UML profile, customizing UML for a particular application domain. Such a profile must have a solid foundation in a formally correct ontology, formalizing the concepts and their relations in the specific domain, in order to avoid a maze of arbitrarily created stereotypes. The paper describes a generic profile for the code generation of component-based distributed systems for control applications, the process used to distill the ontology and define the profile, and the strategy followed to implement the code generator. The main steps, which take place iteratively, include: defining the terms and relations with an ontology, mapping the ontology to the appropriate UML metaclasses, testing the profile by creating modelling examples, and generating the code.
Poster WEPKS032 [1.925 MB]
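
A toy model-to-text transformation showing the general mechanism; the stereotype, the model encoding and the output template are invented for illustration, as the real generator operates on an actual UML model.

```python
# Render each element stereotyped as <<Component>> into a code skeleton.
MODEL = [
    {"stereotype": "Component", "name": "ShutterController",
     "commands": ["open", "close"]},
]

TEMPLATE = """class {name}(ComponentBase):
{methods}"""

def generate(component):
    methods = "\n".join(
        f"    def {cmd}(self):\n        raise NotImplementedError"
        for cmd in component["commands"])
    return TEMPLATE.format(name=component["name"], methods=methods)

for comp in MODEL:
    if comp["stereotype"] == "Component":
        print(generate(comp))
```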
 
WEPMS001 Interconnection Test Framework for the CMS Level-1 Trigger System framework, operation, hardware, controls 973
 
  • J. Hammer
    CERN, Geneva, Switzerland
  • M. Magrans de Abril
    UW-Madison/PD, Madison, Wisconsin, USA
  • C.-E. Wulz
    HEPHY, Wien, Austria
 
  The Level-1 Trigger Control and Monitoring System is a software package designed to configure, monitor and test the Level-1 Trigger System of the Compact Muon Solenoid (CMS) experiment at CERN's Large Hadron Collider. It is a large, distributed system that runs on over 50 PCs and controls about 200 hardware units. The Interconnection Test Framework (ITF), a generic and highly flexible framework for creating and executing hardware tests within the Level-1 Trigger environment, is presented. The framework is designed to automate the testing of the 13 major subsystems, interconnected by more than 1,000 links. Features include a web interface to create and execute tests, modeling using finite state machines, dependency management, automatic configuration, and loops. Furthermore, the ITF will replace the existing heterogeneous testing procedures and help reduce the maintenance effort and the complexity of operational tasks. Finally, an example of the operational use of the Interconnection Test Framework is presented. This case study proves the concept and describes the customization process and its performance characteristics.
Poster WEPMS001 [0.576 MB]
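
A minimal finite-state-machine test skeleton in the spirit of the ITF; the states, transitions and link-checking callables are hypothetical placeholders for real hardware actions.

```python
# Walk a test through configure -> send -> verify; any failure ends the run.
def run_test(states, transitions, start="configure"):
    state, ok = start, True
    while state != "done":
        ok = states[state]()                 # execute the state's action
        state = transitions[(state, ok)]     # choose next state on result
    return ok

states = {
    "configure": lambda: True,               # set up both link endpoints
    "send":      lambda: True,               # inject a test pattern
    "verify":    lambda: True,               # compare captured data
}
transitions = {
    ("configure", True): "send",   ("configure", False): "done",
    ("send", True): "verify",      ("send", False): "done",
    ("verify", True): "done",      ("verify", False): "done",
}
print(run_test(states, transitions))
```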
 
WEPMU035 Distributed Monitoring System Based on ICINGA monitoring, network, database, experiment 1149
 
  • C. Haen, E. Bonaccorsi, N. Neufeld
    CERN, Geneva, Switzerland
 
  The basic services of the large IT infrastructure of the LHCb experiment are monitored with ICINGA, a fork of the industry-standard monitoring software NAGIOS. The infrastructure includes thousands of servers and computers, storage devices, more than 200 network devices, many VLANs, databases, hundreds of diskless nodes, and more. The number of configuration files needed to control the whole installation is large, and there is much duplication when the monitoring infrastructure is distributed over several servers. To ease the manipulation of the configuration files, we designed a monitoring schema particularly adapted to our network, taking advantage of its specific features, and developed a tool to centralize its configuration in a database. Thanks to this tool, we could also parse all of our previous configuration files and fill the Oracle database that replaces the previous Active Directory-based solution. A web front-end allows non-expert users to easily add new entities to monitor. We present the schema of our monitoring infrastructure and the tool used to manage and automatically generate the configuration for ICINGA.
Poster WEPMU035 [0.375 MB]
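
A sketch of the central generation step: rows from the configuration database rendered as ICINGA/NAGIOS-style object definitions. The table contents and template names are invented; the real schema lives in the Oracle database described above.

```python
# Generate ICINGA host definitions from database rows (illustrative data).
HOSTS = [
    {"name": "daqnode01", "address": "10.128.1.11", "template": "lhcb-diskless"},
    {"name": "storage02", "address": "10.128.2.20", "template": "lhcb-storage"},
]

def icinga_host(row):
    return (f"define host {{\n"
            f"    use        {row['template']}\n"
            f"    host_name  {row['name']}\n"
            f"    address    {row['address']}\n"
            f"}}\n")

with open("hosts.cfg", "w") as cfg:
    cfg.writelines(icinga_host(h) for h in HOSTS)
```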
 
THCHAUST06 Instrumentation of the CERN Accelerator Logging Service: Ensuring Performance, Scalability, Maintenance and Diagnostics instrumentation, database, extraction, framework 1232
 
  • C. Roderick, R. Billen, D.D. Teixeira
    CERN, Geneva, Switzerland
 
  The CERN accelerator Logging Service currently holds more than 90 terabytes of data online and processes approximately 450 gigabytes per day, via hundreds of data-loading processes and data-extraction requests. This service is mission-critical for day-to-day operations, especially with respect to the tracking of live data from the LHC beam and equipment. In order to effectively manage any service, the service provider's goals should include knowing how the underlying systems are being used, in terms of: "Who is doing what, from where, using which applications and methods, and how long each action takes". Armed with such information, it is possible to analyze and tune system performance over time, plan for scalability ahead of time, assess the impact of maintenance operations and infrastructure upgrades, and diagnose past, ongoing, or recurring problems. The Logging Service is based on Oracle DBMS and Application Servers and on Java technology, and comprises several layered, multi-tiered systems. These systems have all been heavily instrumented to capture data about system usage, using technologies such as JMX. The success of the Logging Service and its proven ability to cope with ever-growing demands can be directly linked to the instrumentation in place. This paper describes the instrumentation that has been developed, and demonstrates how the instrumentation data are used to achieve the goals outlined above.
Slides THCHAUST06 [5.459 MB]
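
The paper's instrumentation is Java/JMX; the decorator below merely transposes the idea of capturing "who does what, from where, for how long" into a generic Python sketch, with a hypothetical extraction function as the instrumented action.

```python
# Record user, host, action and duration around every instrumented call.
import functools, getpass, socket, time

def instrumented(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            print(f"user={getpass.getuser()} host={socket.gethostname()} "
                  f"action={func.__name__} "
                  f"duration_ms={(time.perf_counter() - start) * 1e3:.1f}")
    return wrapper

@instrumented
def extract_data(variable, t_from, t_to):
    ...   # placeholder for a data-extraction request

extract_data("LHC.BCT/Intensity", "2011-06-01", "2011-06-02")
```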
 
FRBHMUST02 Towards High Performance Processing in Modern Java Based Control Systems monitoring, controls, software, real-time 1322
 
  • M. Misiowiec, W. Buczak, M. Buttner
    CERN, Geneva, Switzerland
 
  CERN controls software is often developed on a Java foundation. Some systems carry out a combination of data-, network- and processor-intensive tasks within strict time limits; hence there is a demand for high-performing, quasi-real-time solutions. Extensive prototyping of the new CERN monitoring and alarm software required us to address such expectations. The system must handle tens of thousands of data samples every second across its three tiers, applying complex computations throughout. To accomplish this goal, a deep understanding of multithreading, memory management and interprocess communication was required. Unexpected traps hide behind excessive use of 64-bit memory, and modern garbage collectors, including the state-of-the-art Oracle Garbage-First collector, can severely impact the processing flow. Tuning the JVM configuration significantly affects the execution of the code; even more important are the number of threads and the data structures shared between them. Accurately dividing work into independent tasks can boost system performance. Thorough profiling with dedicated tools helped us understand the bottlenecks and choose algorithmically optimal solutions. Different virtual machines were tested, in a variety of setups and garbage-collection options. The overall work allowed us to discover the actual hard limits of the whole setup. We present this process of architecting a challenging system in view of the characteristics and limitations of the contemporary Java runtime environment.
http://cern.ch/marekm/icalepcs.html
 
Slides FRBHMUST02 [4.514 MB]
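
The stack discussed is Java, but the point about accurately dividing work into independent tasks is general; a Python sketch with a worker pool follows (threads for brevity; truly CPU-bound work would favor processes, and the batch size trades scheduling overhead against parallelism).

```python
# Split a sample stream into independent batches and process them in a pool.
from concurrent.futures import ThreadPoolExecutor

def process(batch):                      # stand-in for a complex computation
    return sum(x * x for x in batch)

samples = list(range(100_000))
batches = [samples[i:i + 10_000] for i in range(0, len(samples), 10_000)]

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(process, batches))
print(sum(results))
```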
 
FRBHMULT04 Towards a State Based Control Architecture for Large Telescopes: Laying a Foundation at the VLT controls, software, operation, interface 1330
 
  • R. Karban, N. Kornweibel
    ESO, Garching bei Muenchen, Germany
  • D.L. Dvorak, M.D. Ingham, D.A. Wagner
    JPL, Pasadena, California, USA
 
  Large telescopes are characterized by a high level of distribution of control-related tasks and will feature diverse data-flow patterns and large ranges of sampling frequencies; there will often be no single, fixed server-client relationship between the control tasks. The architecture is also challenged by the task of integrating heterogeneous subsystems delivered by multiple different contractors. Due to the high number of distributed components, the control system needs to effectively detect errors and faults, impede their propagation, and accurately mitigate them in the shortest time possible, enabling service to be restored. The presented data-driven architecture is based on a decentralized approach with an end-to-end integration of disparate, independently developed software components, using a high-performance, standards-based communication middleware infrastructure based on the Data Distribution Service. A set of rules and principles, based on JPL's State Analysis method and architecture, is established to avoid undisciplined component-to-component interactions, so that the Control System and the System Under Control are clearly separated. State Analysis provides a model-based process for capturing system and software requirements and design, helping to reduce the gap between the requirements on software specified by systems engineers and the implementation by software engineers. The method and architecture have been field-tested at the Very Large Telescope, where they have been integrated into an operational system with minimal downtime.
Slides FRBHMULT04 [3.504 MB]
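
A sketch of the State Analysis separation the architecture builds on: an estimator updates knowledge of a state variable from measurements, and a controller acts only on that knowledge, never on raw hardware state. All class and variable names are illustrative, not taken from the VLT software.

```python
# Estimator/controller split over an explicit state variable.
class StateVariable:
    def __init__(self):
        self.estimate = None        # current knowledge of the physical state

class AzimuthEstimator:
    def update(self, state: StateVariable, encoder_reading: float):
        state.estimate = encoder_reading     # trivial estimator for the sketch

class AzimuthController:
    def control(self, state: StateVariable, goal: float) -> float:
        if state.estimate is None:
            return 0.0                        # no knowledge yet: do nothing
        return 0.5 * (goal - state.estimate)  # proportional command

azimuth = StateVariable()
AzimuthEstimator().update(azimuth, encoder_reading=12.0)
print(AzimuthController().control(azimuth, goal=15.0))
```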