Keyword: framework
Paper | Title | Other Keywords | Page
MOBAUST06 The LHCb Experiment Control System: on the Path to Full Automation controls, experiment, detector, operation 20
 
  • C. Gaspar, F. Alessio, L.G. Cardoso, M. Frank, J.C. Garnier, R. Jacobsson, B. Jost, N. Neufeld, R. Schwemmer, E. van Herwijnen
    CERN, Geneva, Switzerland
  • O. Callot
    LAL, Orsay, France
  • B. Franek
    STFC/RAL, Chilton, Didcot, Oxon, United Kingdom
 
  LHCb is a large experiment at the LHC accelerator. The experiment control system is in charge of the configuration, control and monitoring of the different sub-detectors and of all areas of the online system: the Detector Control System (DCS), covering the sub-detectors' voltages, cooling, temperatures, etc.; the Data Acquisition System (DAQ) and the Run Control; the High Level Trigger (HLT), a farm of around 1500 PCs running trigger algorithms; etc. The building blocks of the control system are based on the PVSS SCADA system, complemented by a control framework developed in common for the 4 LHC experiments. This framework includes an "expert system"-like tool called SMI++, which we use for the system automation. The full control system runs distributed over around 160 PCs and is logically organised in a hierarchical structure, each level being capable of supervising and synchronizing the objects below. The experiment's operations are now almost completely automated, driven by a top-level object called Big-Brother which pilots all the experiment's standard procedures and the most common error-recovery procedures. Some examples of automated procedures are: powering the detector, acting on the Run Control (Start/Stop Run, etc.) and moving the vertex detector in/out of the beam, all driven by the state of the accelerator, or recovering from errors in the HLT farm. The architecture, tools and mechanisms used for the implementation, as well as some operational examples, will be shown.
Slides MOBAUST06 [1.451 MB]
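The hierarchical, state-driven automation described in the MOBAUST06 abstract can be pictured with a small conceptual sketch: a top-level object reacts to accelerator state changes and propagates commands down a tree of control nodes. This is an illustrative Python example only; the class names (BigBrother, ControlNode), states and commands are invented here and do not reproduce the actual SMI++/PVSS object model.

```python
# Conceptual sketch of a hierarchical control tree reacting to the accelerator
# state. Names (BigBrother, ControlNode) and states are illustrative only.

class ControlNode:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []
        self.state = "NOT_READY"

    def command(self, cmd):
        # Propagate the command to the whole sub-tree, then update own state.
        for child in self.children:
            child.command(cmd)
        self.state = {"CONFIGURE": "READY", "START_RUN": "RUNNING",
                      "STOP_RUN": "READY"}.get(cmd, self.state)

    def summary(self, indent=0):
        lines = ["  " * indent + f"{self.name}: {self.state}"]
        for child in self.children:
            lines.extend(child.summary(indent + 1))
        return lines


class BigBrother:
    """Top-level object mapping accelerator states to standard procedures."""
    ACTIONS = {"INJECTION": "CONFIGURE", "STABLE_BEAMS": "START_RUN",
               "BEAM_DUMP": "STOP_RUN"}

    def __init__(self, root):
        self.root = root

    def on_accelerator_state(self, state):
        cmd = self.ACTIONS.get(state)
        if cmd:
            self.root.command(cmd)


if __name__ == "__main__":
    dcs = ControlNode("DCS", [ControlNode("HV"), ControlNode("Cooling")])
    daq = ControlNode("DAQ", [ControlNode("RunControl"), ControlNode("HLT")])
    root = ControlNode("Experiment", [dcs, daq])
    bb = BigBrother(root)
    for accel_state in ("INJECTION", "STABLE_BEAMS"):
        bb.on_accelerator_state(accel_state)
    print("\n".join(root.summary()))
```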
 
MOPKN019 ATLAS Detector Control System Data Viewer database, interface, controls, experiment 137
 
  • C.A. Tsarouchas, S.A. Roe, S. Schlenker
    CERN, Geneva, Switzerland
  • U.X. Bitenc, M.L. Fehling-Kaschek, S.X. Winkelmann
    Albert-Ludwig Universität Freiburg, Freiburg, Germany
  • S.X. D'Auria
    University of Glasgow, Glasgow, United Kingdom
  • D. Hoffmann, O.X. Pisano
    CPPM, Marseille, France
 
  The ATLAS experiment at CERN is one of the four Large Hadron Collider experiments. ATLAS uses a commercial SCADA system (PVSS) for its Detector Control System (DCS), which is responsible for the supervision of the detector equipment, the reading of operational parameters, the propagation of alarms and the archiving of important operational data in a relational database. DCS Data Viewer (DDV) is an application that provides access to the historical data of DCS parameters written to the database through a web interface. It has a modular and flexible design and is structured using a client-server architecture. The server can be operated stand-alone, with a command-like interface to the data, while the client offers a user-friendly, browser-independent interface. The selection of the metadata of DCS parameters is done via a column-tree view or with a powerful search engine. The final visualisation of the data is done using various plugins such as "value over time" charts, data tables, raw ASCII or structured export to ROOT. Excessive access or malicious use of the database is prevented by dedicated protection mechanisms, allowing the exposure of the tool to hundreds of inexperienced users. The metadata selection and data output features can be used separately via XML configuration files. Security constraints have been taken into account in the implementation, allowing the access of DDV by collaborators worldwide. Due to its flexible interface and its generic and modular approach, DDV could easily be used for other experiment control systems that archive data using PVSS.
Poster MOPKN019 [0.938 MB]
 
MOPMN005 ProShell – The MedAustron Accelerator Control Procedure Framework interface, controls, ion, ion-source 246
 
  • R. Moser, A.B. Brett, M. Marchhart, C. Torcato de Matos
    EBG MedAustron, Wr. Neustadt, Austria
  • J. Dedič, S. Sah
    Cosylab, Ljubljana, Slovenia
  • J. Gutleber
    CERN, Geneva, Switzerland
 
  MedAustron is a centre for ion therapy and research currently under construction in Austria. It features a synchrotron particle accelerator for proton and carbon-ion beams. This paper presents the architecture and concepts for implementing a procedure framework called ProShell. Procedures to automate high-level control and analysis tasks for commissioning and during operation are modelled with Petri nets, and user code is implemented with C#. It must be possible to execute procedures and monitor their execution progress remotely. Procedures include starting up devices and subsystems in a controlled manner, configuring and operating O(1000) devices and tuning their operational settings using iterative optimization algorithms. Device interfaces must be extensible to accommodate yet unanticipated functionalities. The framework implements a template for procedure-specific graphical interfaces to access device-specific information such as monitoring data. Procedures interact with physical devices through proxy software components that implement one of the following interfaces: (1) a state-less or (2) a state-driven device interface. Components can extend these device interfaces following an object-oriented single inheritance scheme to provide augmented, device-specific interfaces. As only two basic device interfaces need to be defined at an early project stage, devices can be integrated gradually as commissioning progresses. We present the architecture and design of ProShell and explain the programming model by giving the simple example of the ion source spectrum analysis procedure.
Poster MOPMN005 [0.948 MB]
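As a rough illustration of the two basic device interfaces mentioned in the MOPMN005 abstract (state-less and state-driven) and of extending them by single inheritance, the following hedged sketch may help. The real ProShell proxies are C# components driven by Petri-net procedures; all class and method names below (StatelessDevice, StateDrivenDevice, IonSourceProxy, acquire_spectrum) are invented for the example.

```python
# Illustrative sketch only: the real ProShell proxies are C# components.

class StatelessDevice:
    """Basic request/response interface with no device state machine."""
    def read(self, channel):
        raise NotImplementedError

class StateDrivenDevice:
    """Interface for devices that expose an explicit state machine."""
    def __init__(self):
        self.state = "OFF"
    def switch_on(self):
        self.state = "ON"
    def switch_off(self):
        self.state = "OFF"

class IonSourceProxy(StateDrivenDevice):
    """Device-specific extension (single inheritance) adding an augmented
    interface, e.g. a spectrum read-out used by a spectrum-analysis procedure."""
    def acquire_spectrum(self, n_points=5):
        if self.state != "ON":
            raise RuntimeError("ion source must be ON")
        return [i * 0.1 for i in range(n_points)]  # placeholder data

if __name__ == "__main__":
    src = IonSourceProxy()
    src.switch_on()
    print(src.acquire_spectrum())
```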
 
MOPMN018 Toolchain for Online Modeling of the LHC optics, controls, simulation, software 277
 
  • G.J. Müller, X. Buffat, K. Fuchsberger, M. Giovannozzi, S. Redaelli, F. Schmidt
    CERN, Geneva, Switzerland
 
  The control of high intensity beams in a high energy, superconducting machine with complex optics like the CERN Large Hadron Collider (LHC) is challenging not only from the design aspect but also for operation. To support LHC beam commissioning, operation and luminosity production, efforts were recently devoted to the design and implementation of a software infrastructure aimed at using the computing power of the beam dynamics code MAD-X within the Java-based LHC control and measurement environment. Besides interfacing to measurement data as well as to the settings of the control system, it provides the best available knowledge of the machine aperture and optics models. In this paper, we present the status of the toolchain and illustrate how it has been used during commissioning and operation of the LHC. Possible future implementations are also discussed.
Poster MOPMN018 [0.562 MB]
 
MOPMN020 Integrating Controls Frameworks: Control Systems for NA62 LAV Detector Test Beams controls, detector, experiment, interface 285
 
  • O. Holme, J.A.R. Arroyo Garcia, P. Golonka, M. Gonzalez-Berges, H. Milcent
    CERN, Geneva, Switzerland
  • O. Holme
    ETH, Zurich, Switzerland
 
  The detector control system for the NA62 experiment at CERN, to be ready for physics data-taking in 2014, is going to be built based on control technologies recommended by the CERN Engineering group. A rich portfolio of these technologies is planned to be showcased and deployed in the final application, and synergy between them is needed. In particular, two approaches to building controls applications need to play in harmony: the use of the high-level application framework called UNICOS, and a bottom-up approach of development based on the components of the JCOP Framework. The aim of combining the features provided by the two frameworks is to avoid duplication of functionality and to minimize the maintenance and development effort for future controls applications. In this paper the results of the integration efforts obtained so far are presented, namely the control applications developed for beam-testing of NA62 detector prototypes. Even though the delivered applications are simple, significant conceptual and development work was required to bring about the smooth interplay between the two frameworks, while assuring the possibility of unleashing their full power. A discussion of current open issues is presented, including the viability of the approach for larger-scale applications of high complexity, such as the complete detector control system for the NA62 detector.
Poster MOPMN020 [1.464 MB]
 
MOPMU015 Control and Data Acquisition Systems for the FERMI@Elettra Experimental Stations controls, data-acquisition, TANGO, instrumentation 462
 
  • R. Borghes, V. Chenda, A. Curri, G. Gaio, G. Kourousias, M. Lonza, G. Passos, R. Passuello, L. Pivetta, M. Prica, M. Pugliese, G. Strangolino
    ELETTRA, Basovizza, Italy
 
  Funding: The work was supported in part by the Italian Ministry of University and Research under grants FIRB-RBAP045JF2 and FIRB-RBAP06AWK3
FERMI@Elettra is a single-pass Free Electron Laser (FEL) user facility covering the wavelength range from 100 nm to 4 nm. The facility is located in Trieste, Italy, near the third-generation synchrotron light source Elettra. Three experimental stations, dedicated to different scientific areas, were installed in 2011: Low Density Matter (LDM), Elastic and Inelastic Scattering (EIS) and Diffraction and Projection Imaging (DiProI). The experiment control and data acquisition system is the natural extension of the machine control system. It integrates a shot-by-shot data acquisition framework with a centralized data storage and analysis system. Low-level applications for data acquisition and online processing have been developed using the Tango framework on Linux platforms. High-level experimental applications can be developed on both Linux and Windows platforms using C/C++, Python, LabVIEW, IDL or MATLAB. The Elettra scientific computing portal allows remote access to the experiment and to the data storage system.
 
Poster MOPMU015 [0.884 MB]
 
MOPMU019 The Gateways of Facility Control for SPring-8 Accelerators controls, data-acquisition, database, network 473
 
  • M. Ishii, T. Masuda, R. Tanaka, A. Yamashita
    JASRI/SPring-8, Hyogo-ken, Japan
 
  We integrated the utilities data acquisition into the SPring-8 accelerator control system, which is based on the MADOCA framework. Utilities data such as the air temperature, the power line voltage and the temperature of the machine cooling water are helpful to study correlations between beam stability and environmental conditions. However, the accelerator control system previously had no way to access the many utilities data managed by the facility control system, because the accelerator control system and the facility control system were independent systems without an interconnection. In 2010, we had a chance to replace the old facility control system. At that time, we constructed gateways between the MADOCA-based accelerator control system and the new facility control system, installing BACnet, a data communication protocol for Building Automation and Control Networks, as a fieldbus. The system requirements were as follows: to monitor utilities data with the required sampling rate and resolution, to store all acquired data in the accelerator database, to keep the independence of the accelerator control system and the facility control system, and to allow future expansion towards controlling the facilities from the accelerator control system. During the work, we outsourced the building of the gateways, including the MADOCA data-taking software, to address the limited manpower and short work period. In this paper we describe the system design and our approach to outsourcing.
 
MOPMU024 Status of ALMA Software software, operation, controls, monitoring 487
 
  • T.C. Shen, J.P.A. Ibsen, R.A. Olguin, R. Soto
    ALMA, Joint ALMA Observatory, Santiago, Chile
 
  The Atacama Large Millimeter/submillimeter Array (ALMA) will be a unique research instrument composed of at least 66 reconfigurable high-precision antennas, located on the Chajnantor plain in the Chilean Andes at an elevation of 5000 m. Each antenna contains instruments capable of receiving radio signals from 31.3 GHz up to 950 GHz. These signals are correlated inside a Correlator and the spectral data are finally saved into the Archive system together with the observation metadata. This paper describes the progress in the deployment of the ALMA software, with emphasis on the control software, which is built on top of the ALMA Common Software (ACS), a CORBA-based middleware framework. In order to support and maintain the installed software, it is essential to have a mechanism to align and distribute the same version of software packages across all systems. This is achieved rigorously with weekly regression tests and strict configuration control. A build farm to provide continuous integration and testing in simulation has been established as well. Given the large number of antennas, it is imperative to also have a monitoring system to allow trend analysis of each component in order to trigger preventive maintenance activities. A challenge for which we are preparing this year consists of testing the whole ALMA software performing complete end-to-end operation, from proposal submission to data distribution to the ALMA Regional Centers. The experience gained during deployment, testing and operation support will be presented.
Poster MOPMU024 [0.471 MB]
 
MOPMU026 A Readout and Control System for a CTA Prototype Telescope controls, software, interface, hardware 494
 
  • I. Oya, U. Schwanke
    Humboldt University Berlin, Institut für Physik, Berlin, Germany
  • B. Behera, D. Melkumyan, T. Schmidt, P. Wegner, S. Wiesand, M. Winde
    DESY Zeuthen, Zeuthen, Germany
 
  CTA (Cherenkov Telescope Array) is an initiative to build the next-generation ground-based gamma-ray instrument. The CTA array will allow studies in the very high-energy domain in the range from a few tens of GeV to more than a hundred TeV, extending the existing energy coverage and increasing the sensitivity by a factor of 10 compared to current installations, while enhancing other aspects like angular and energy resolution. These goals require the use of at least three different sizes of telescopes. CTA will comprise two arrays (one in the Northern hemisphere and one in the Southern hemisphere) for full sky coverage and will be operated as an open observatory. A prototype for the Medium Size Telescope (MST) type is under development and will be deployed in Berlin by the end of 2011. The MST prototype will consist of the mechanical structure, drive system, active mirror control, four CCD cameras for prototype instrumentation and a weather station. The ALMA Common Software (ACS) distributed control framework has been chosen for the implementation of the control system of the prototype. In the present approach, the interface to some of the hardware devices is achieved by using the OPC Unified Architecture (OPC UA). A code-generation framework (ACSCG) has been designed for ACS modeling. In this contribution the progress in the design and implementation of the control system for the CTA MST prototype is described.
Poster MOPMU026 [1.953 MB]
 
MOPMU039 ACSys in a Box controls, database, site, Linux 522
 
  • C.I. Briegel, D. Finstrom, B. Hendricks, CA. King, R. Neswold, D.J. Nicklaus, J.F. Patrick, A.D. Petrov, C.L. Schumann, J.G. Smedinghoff
    Fermilab, Batavia, USA
 
  Funding: Operated by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the United States Department of Energy.
The Accelerator Control System at Fermilab has evolved to enable this relatively large control system to be encapsulated into a "box" such as a laptop. The goal was to provide a platform isolated from the "online" control system. This platform can be used internally for making major upgrades and modifications without impacting operations. It also provides a standalone environment for research and development, including a turnkey control system for collaborators. Over time, the code base running on Scientific Linux has enabled all the salient features of Fermilab's control system to be captured in an off-the-shelf laptop. The anticipated additional benefits of packaging the system include improved maintenance, reliability, documentation, and future enhancements.
 
 
TUAAULT04 Web-based Execution of Graphical Workflows : a Modular Platform for Multifunctional Scientific Process Automation controls, interface, synchrotron, database 540
 
  • E. De Ley, D. Jacobs
    iSencia Belgium, Gent, Belgium
  • M. Ounsy
    SOLEIL, Gif-sur-Yvette, France
 
  The Passerelle process automation suite offers a fundamentally modular solution platform, based on a layered integration of several best-of-breed technologies. It has been successfully applied by Synchrotron SOLEIL as the sequencer for data acquisition and control processes on its beamlines, integrated with TANGO as a control bus and GlobalScreen as the SCADA package. Since last year it has been used as the graphical workflow component for the development of an Eclipse-based Data Analysis Workbench at ESRF. The top layer of Passerelle exposes an actor-based development paradigm, based on the Ptolemy framework (UC Berkeley). Actors provide explicit reusability and strong decoupling, combined with an inherently concurrent execution model. Actor libraries exist for TANGO integration, web services, database operations, flow control, rules-based analysis, mathematical calculations, launching external scripts, etc. Passerelle's internal architecture is based on OSGi, the major Java framework for modular service-based applications. A large set of modules exists that can be recombined as desired to obtain different features and deployment models. Besides desktop versions of the Passerelle workflow workbench, there is also the Passerelle Manager. It is a secured web application, including a graphical editor, for centralized design, execution, management and monitoring of process flows, integrating standard Java Enterprise services with OSGi. We will present the internal technical architecture, some interesting application cases and the lessons learnt.
Slides TUAAULT04 [10.055 MB]
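A minimal sketch of the actor paradigm the TUAAULT04 abstract describes, reusable, decoupled actors connected by ports and executed concurrently, is given below. It is written in Python purely for illustration and does not use the Ptolemy/Passerelle API; the Actor class and the queue-based "ports" are assumptions of the example.

```python
# Conceptual actor pipeline; the real Passerelle actors are Java/Ptolemy based.
import queue
import threading

class Actor(threading.Thread):
    """An actor consumes tokens from an input queue, transforms them and
    forwards the result to an output queue. None is used as end-of-stream."""
    def __init__(self, inbox, outbox, func):
        super().__init__()
        self.inbox, self.outbox, self.func = inbox, outbox, func

    def run(self):
        while True:
            token = self.inbox.get()
            if token is None:          # propagate end-of-stream and stop
                self.outbox.put(None)
                break
            self.outbox.put(self.func(token))

if __name__ == "__main__":
    source, mid, sink = queue.Queue(), queue.Queue(), queue.Queue()
    # Two decoupled, concurrently running actors chained by their ports.
    actors = [Actor(source, mid, lambda x: x * 2),
              Actor(mid, sink, lambda x: f"value={x}")]
    for a in actors:
        a.start()
    for sample in [1, 2, 3]:
        source.put(sample)
    source.put(None)
    while (result := sink.get()) is not None:
        print(result)
```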
 
WEAAULT02 Model Oriented Application Generation for Industrial Control Systems controls, software, target, factory 610
 
  • B. Copy, R. Barillère, E. Blanco Vinuela, R.N. Fernandes, B. Fernández Adiego, I. Prieto Barreiro
    CERN, Geneva, Switzerland
 
  The CERN Unified Industrial Control Systems framework (UNICOS) is a software generation methodology that standardizes the design of slow process control applications [1]. A Software Factory, named the UNICOS Application Builder (UAB) [2], was introduced to provide a stable metamodel, a set of platform-independent models and platform-specific configurations against which code and configuration generation plugins can be written. Such plugins currently target PLC programming environments (Schneider UNITY and SIEMENS Step7 PLCs) as well as SIEMENS WinCC Open Architecture SCADA (previously known as ETM PVSS) but are being expanded to cover more and more aspects of process control systems. We present what constitutes the UAB metamodel and the models in use, how these models can be used to capture knowledge about industrial control systems and how this knowledge can be leveraged to generate both code and configuration for a variety of target usages.
[1] H. Milcent et al, "UNICOS: AN OPEN FRAMEWORK", ICALEPCS2009, Kobe, Japan, (THD003)
[2] M. Dutour, "Software factory techniques applied to Process Control at CERN", ICALEPCS 2007, Knoxville Tennessee, USA
 
Slides WEAAULT02 [1.757 MB]
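The plugin-based, model-to-text generation idea behind the UAB (WEAAULT02) can be sketched as follows. This is an illustrative Python example: the device model, the plugin names (plc_plugin, scada_plugin) and the emitted syntax are invented and bear no relation to the actual UAB metamodel or its generated code.

```python
# Illustrative sketch of model-driven generation: a platform-independent
# device model is rendered into platform-specific artifacts by plugins.
DEVICE_MODEL = [
    {"name": "CV101", "type": "AnalogOutput", "unit": "%"},
    {"name": "TT102", "type": "AnalogInput", "unit": "degC"},
]

def plc_plugin(model):
    """Pretend PLC code generator (output syntax is invented for the example)."""
    return "\n".join(f"DB_{d['name']} : {d['type']};" for d in model)

def scada_plugin(model):
    """Pretend SCADA configuration generator."""
    return "\n".join(f"datapoint {d['name']} unit={d['unit']}" for d in model)

PLUGINS = {"plc": plc_plugin, "scada": scada_plugin}

if __name__ == "__main__":
    for target, generate in PLUGINS.items():
        print(f"--- generated {target} artifact ---")
        print(generate(DEVICE_MODEL))
```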
 
WEAAULT03 A Platform Independent Framework for Statecharts Code Generation software, controls, CORBA, target 614
 
  • L. Andolfato, G. Chiozzi
    ESO, Garching bei Muenchen, Germany
  • N. Migliorini
    ENDIF, Ferrara, Italy
  • C. Morales
    UTFSM, Valparaíso, Chile
 
  Control systems for telescopes and their instruments are reactive systems very well suited to being modeled using the Statecharts formalism. The World Wide Web Consortium is working on a new standard called SCXML that specifies an XML notation to describe Statecharts and provides a well defined operational semantics for run-time interpretation of SCXML models. This paper presents a generic application framework for reactive non-real-time systems based on interpreted Statecharts. The framework consists of a model-to-text transformation tool and an SCXML interpreter. From UML state machine models, the tool generates the SCXML representation of the state machines and the application skeletons for the supported software platforms. An abstraction layer propagates the events from the middleware to the SCXML interpreter, facilitating the support of different software platforms. This project benefits from the positive experience gained in several years of development of coordination and monitoring applications for the telescope control software domain using Model Driven Development technologies.
Slides WEAAULT03 [2.179 MB]
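To make the run-time interpretation idea of WEAAULT03 concrete, here is a hedged sketch of a table-driven state machine interpreter. The real framework interprets SCXML documents generated from UML models; the hand-written transition table and state names below are simplifications for illustration.

```python
# Minimal interpreter for a declarative state machine description,
# loosely analogous to run-time interpretation of an SCXML model.
STATECHART = {
    "initial": "IDLE",
    "transitions": {
        ("IDLE", "start"): "TRACKING",
        ("TRACKING", "stop"): "IDLE",
        ("TRACKING", "error"): "FAULT",
        ("FAULT", "reset"): "IDLE",
    },
}

class Interpreter:
    def __init__(self, model):
        self.model = model
        self.state = model["initial"]

    def fire(self, event):
        # Look up the (state, event) pair; ignore events with no transition.
        target = self.model["transitions"].get((self.state, event))
        if target:
            print(f"{self.state} --{event}--> {target}")
            self.state = target

if __name__ == "__main__":
    sm = Interpreter(STATECHART)
    for ev in ["start", "error", "reset", "start", "stop"]:
        sm.fire(ev)
```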
 
WEMAU003 The LabVIEW RADE Framework Distributed Architecture LabView, software, interface, distributed 658
 
  • O.O. Andreassen, D. Kudryavtsev, A. Raimondo, A. Rijllart
    CERN, Geneva, Switzerland
  • S. Shaipov, R. Sorokoletov
    JINR, Dubna, Moscow Region, Russia
 
  For accelerator GUI applications there is a need for a rapid development environment to create expert tools or to prototype operator applications. Typically a variety of tools are used, such as Matlab™ or Excel™, but their scope is limited, either because of their low flexibility or their limited integration into the accelerator infrastructure. In addition, having several tools obliges users to deal with different programming techniques and data structures. We have addressed these limitations by using LabVIEW™, extending it with interfaces to C++ and Java. In this way it fulfils the requirements of ease of use, flexibility and connectivity. We present the RADE framework and four applications based on it. Recent application requirements could only be met by implementing a distributed architecture with multiple servers running multiple services. This brought the additional advantage of allowing us to implement redundant services, to increase availability and to make updates transparent. We will present two applications requiring high availability. We also report on issues encountered with such a distributed architecture and how we have addressed them. The latest extension of the framework is to industrial equipment, with program templates and drivers for PLCs (Siemens and Schneider) and PXI with LabVIEW Real-Time.
Slides WEMAU003 [0.157 MB]
Poster WEMAU003 [2.978 MB]
 
WEMAU007 Turn-key Applications for Accelerators with LabVIEW-RADE controls, LabView, software, alignment 670
 
  • O.O. Andreassen, P. Bestmann, C. Charrondière, T. Feniet, J. Kuczerowski, M. Nybø, A. Rijllart
    CERN, Geneva, Switzerland
 
  In the accelerator domain there is a need to integrate industrial devices and to create control and monitoring applications in an easy and yet structured way. The LabVIEW-RADE framework provides the method and tools to implement these requirements and also provides the essential integration of these applications into the CERN controls infrastructure. We present three examples of applications of different nature to show that the framework provides solutions at all three tiers of the control system: data access, process and supervision. The first example is a remotely controlled alignment system for the LHC collimators. The collimator alignment will need to be checked periodically. Due to limited access for personnel, the instruments are mounted on a small train. The system is composed of a PXI crate housing the instrument interfaces and a PLC for the motor control. We report on the design, development and commissioning of the system. The second application is the renovation of the PS beam spectrum analyser, where both hardware and software were renewed. The control application was ported from Windows to LabVIEW Real-Time. We describe the technique used for a full integration into the PS console. The third example is a control and monitoring application of the CLIC two-beam test stand. The application accesses CERN front-end equipment through the CERN middleware, CMW, and provides many different ways to view the data. We conclude with an evaluation of the framework based on the three examples and indicate new areas of improvement and extension.
Poster WEMAU007 [2.504 MB]
 
WEMAU012 COMETE: A Multi Data Source Oriented Graphical Framework software, TANGO, controls, toolkit 680
 
  • G. Viguier, Y. Huriez, M. Ounsy, K.S. Saintin
    SOLEIL, Gif-sur-Yvette, France
  • R. Girardot
    EXTIA, Boulogne Billancourt, France
 
  Modern beamlines at SOLEIL need to browse a large amount of scientific data through multiple sources that can be scientific measurement data files, databases or Tango [1] control systems. We created the COMETE [2] framework because we considered it necessary for end users to have the same collection of widgets for all the different data sources to be accessed. On the other hand, for GUI application developers, the complexity of data source handling had to be hidden. With these two requirements now fulfilled, our development team is able to build high-quality, modular and reusable scientifically oriented GUI software, with a consistent look and feel for end users. COMETE offers some key features to our developers: a smart refreshing service, an easy-to-use and succinct API, and data reduction functionality. This paper will present the work organization, the modern software architecture and the design of the whole system. Then, the migration from our old GUI framework to COMETE will be detailed. The paper will conclude with an application example and a summary of the upcoming features available in the framework.
[1] http://www.tango-controls.org
[2] http://comete.sourceforge.net
 
Slides WEMAU012 [0.083 MB]
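The key design idea in WEMAU012, hiding heterogeneous data sources behind a single interface so that the same widgets can display any of them, can be sketched as below. The Python class names (DataSource, FileSource, ControlSystemSource, LabelWidget) are invented for the example and do not correspond to the COMETE API.

```python
# Illustrative data-source abstraction: widgets depend only on DataSource,
# so files, databases or a control system can be swapped transparently.
from abc import ABC, abstractmethod

class DataSource(ABC):
    @abstractmethod
    def read(self, key):
        """Return the current value for a named item."""

class FileSource(DataSource):
    def __init__(self, table):
        self.table = table          # e.g. values parsed from a data file
    def read(self, key):
        return self.table[key]

class ControlSystemSource(DataSource):
    def read(self, key):
        return 42.0                 # placeholder for a live control-system read

class LabelWidget:
    """A 'widget' that only knows the DataSource interface."""
    def __init__(self, source, key):
        self.source, self.key = source, key
    def refresh(self):
        print(f"{self.key} = {self.source.read(self.key)}")

if __name__ == "__main__":
    for src in (FileSource({"beam_current": 1.3}), ControlSystemSource()):
        LabelWidget(src, "beam_current").refresh()
```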
 
WEPKN005 Experiences in Messaging Middleware for High-Level Control Applications controls, EPICS, interface, software 720
 
  • N. Wang, J.L. Matykiewicz, R. Pundaleeka, S.G. Shasharina
    Tech-X, Boulder, Colorado, USA
 
  Funding: This project is funded by the US Department of Energy, Office of High Energy Physics under the contract #DE-FG02-08ER85043.
Existing high-level applications in accelerator control and modeling systems leverage many different languages, tools and frameworks that do not interoperate with one another. As a result, the community has moved toward the proven Service-Oriented Architecture approach to address the interoperability challenges among heterogeneous high-level application modules. This paper presents our experiences in developing a demonstrative high-level application environment using emerging messaging middleware standards. In particular, we utilized new features such as pvData in EPICS v4, and other emerging standards such as the Data Distribution Service (DDS) and Extensible Type Interface by the Object Management Group. Our work on developing the demonstrative environment focuses on documenting the procedures to develop high-level accelerator control applications using the aforementioned technologies. Examples of such applications include presentation panel clients based on Control System Studio (CSS), a Model-Independent plug-in for CSS, and data-producing middle-layer applications such as model/data servers. Finally, we will show how these technologies enable developers to package various control subsystems and activities into "services" with well-defined "interfaces", making it possible to leverage heterogeneous high-level applications via flexible composition.
 
Poster WEPKN005 [2.723 MB]
 
WEPKN019 A Programmable Logic Controller-Based System for the Recirculation of Liquid C6F14 in the ALICE High Momentum Particle Identification Detector at the Large Hadron Collider controls, detector, operation, monitoring 745
 
  • I. Sgura, G. De Cataldo, A. Franco, C. Pastore, G. Volpe
    INFN-Bari, Bari, Italy
 
  We present the design and the implementation of the Control System (CS) for the recirculation of liquid C6F14 (Perfluorohexane) in the High Momentum Particle Identification Detector (HMPID). The HMPID is a sub-detector of the ALICE experiment at the CERN Large Hadron Collider (LHC) and it uses liquid C6F14 as Cherenkov radiator medium in 21 quartz trays for the measurement of the velocity of charged particles. The primary task of the Liquid Circulation System (LCS) is to ensure the highest transparency of C6F14 to ultraviolet light by re-circulating the liquid through a set of special filters. In order to provide safe long-term operation, a PLC-based CS has been implemented. The CS supports both automatic and manual operating modes, remotely or locally. The adopted Finite State Machine approach minimizes possible operator errors and provides a hierarchical control structure allowing the operation and monitoring of a single radiator tray. The LCS is protected against anomalous working conditions by both active and passive systems. The active ones are ensured via the control software running in the PLC, whereas the human interface and data archiving are provided via PVSS, the SCADA framework which integrates the full detector control. The LCS under CS control has been fully commissioned and proved to meet all requirements, thus enabling the HMPID to successfully collect data from the first LHC operation.
Poster WEPKN019 [1.270 MB]
 
WEPKN024 UNICOS CPC New Domains of Application: Vacuum and Cooling & Ventilation controls, vacuum, operation, cryogenics 752
 
  • D. Willeman, E. Blanco Vinuela, B. Bradu, J.O. Ortola Vidal
    CERN, Geneva, Switzerland
 
  The UNICOS (UNified Industrial Control System) framework, and concretely the CPC package, has been extensively used in the domain of continuous processes (e.g. cryogenics, gas flows, …) and also in others specific to the LHC machine, such as the collimator environmental measurements interlock system. The application of UNICOS-CPC to other kinds of processes, the vacuum and the cooling and ventilation cases, is depicted here. One of the major challenges was to figure out whether the model and devices created so far were also suited to other types of processes (e.g. vacuum). To illustrate this challenge two domain use cases will be shown: the ISOLDE vacuum control system and the STP18 (cooling and ventilation) control system. Both scenarios will be illustrated, emphasizing the adaptability of the UNICOS CPC package to create those applications and highlighting the features found to be needed in the future UNICOS CPC package. This paper will also introduce the mechanisms used to optimize the commissioning time, the so-called virtual commissioning. In most of the cases, either the process is not yet accessible or the process is critical and its availability is therefore reduced, so a model of the process is used to validate the designed control system offline.
Poster WEPKN024 [0.230 MB]
 
WEPKN025 Supervision Application for the New Power Supply of the CERN PS (POPS) controls, interface, operation, software 756
 
  • H. Milcent, X. Genillon, M. Gonzalez-Berges, A. Voitier
    CERN, Geneva, Switzerland
 
  The power supply system for the magnets of the CERN PS has been recently upgraded to a new system called POPS (POwer for PS). The old mechanical machine has been replaced by a system based on capacitors. The equipment as well as the low level controls have been provided by an external company (CONVERTEAM). The supervision application has been developed at CERN reusing the technologies and tools used for the LHC Accelerator and Experiments (UNICOS and JCOP frameworks, PVSS SCADA tool). The paper describes the full architecture of the control application, and the challenges faced for the integration with an outsourced system. The benefits of reusing the CERN industrial control frameworks and the required adaptations will be discussed. Finally, the initial operational experience will be presented.  
Poster WEPKN025 [13.149 MB]
 
WEPKS001 Agile Development and Dependency Management for Industrial Control Systems software, controls, site, project-management 767
 
  • B. Copy, M. Mettälä
    CERN, Geneva, Switzerland
 
  The production and exploitation of industrial control systems differ substantially from traditional information systems; this is in part due to constraints on the availability and change life-cycle of production systems, as well as their reliance on proprietary protocols and software packages with little support for open development standards [1]. The application of agile software development methods therefore represents a challenge which requires the adoption of existing change and build management tools and approaches that can help bridge the gap and reap the benefits of managed development when dealing with industrial control systems. This paper will consider how agile development tools such as Apache Maven for build management, Hudson for continuous integration or Sonatype Nexus for the operation of "definite media libraries" were leveraged to manage the development life-cycle of the CERN UAB framework [2], as well as other crucial building blocks of the CERN accelerator infrastructure, such as the CERN Common Middleware or the FESA project.
[1] H. Milcent et al, "UNICOS: AN OPEN FRAMEWORK", THD003, ICALEPCS2009, Kobe, Japan
[2] M. Dutour, "Software factory techniques applied to Process Control at CERN", ICALEPCS 2007, Knoxville Tennessee, USA
 
Slides WEPKS001 [10.592 MB]
Poster WEPKS001 [1.032 MB]
 
WEPKS003 An Object Oriented Framework of EPICS for MicroTCA Based Control System EPICS, controls, interface, software 775
 
  • Z. Geng
    SLAC, Menlo Park, California, USA
 
  EPICS (Experimental Physics and Industrial Control System) is a distributed control system platform which has been widely used to control large scientific devices like particle accelerators and fusion plants. EPICS has introduced object-oriented (C++) interfaces to most of the core services. But the major part of EPICS, the run-time database, only provides C interfaces, which makes it hard to incorporate the EPICS record-related data and routines into an object-oriented software architecture. This paper presents an object-oriented framework which contains abstract classes that encapsulate the EPICS record-related data and routines in C++ classes, so that full OOA (Object Oriented Analysis) and OOD (Object Oriented Design) methodologies can be used for EPICS IOC design. We also present a dynamic device management scheme for the hot-swap capability of the MicroTCA-based control system.
Poster WEPKS003 [0.176 MB]
 
WEPKS005 State Machine Framework and its Use for Driving LHC Operational States* controls, operation, embedded, GUI 782
 
  • M. Misiowiec, V. Baggiolini, M. Solfaroli Camillocci
    CERN, Geneva, Switzerland
 
  The LHC follows a complex operational cycle with 12 major phases that include equipment tests, preparation, beam injection, ramping and squeezing, finally followed by the physics phase. This cycle is modeled and enforced with a state machine, whereby each operational phase is represented by a state. On each transition, before entering the next state, a series of conditions is verified to make sure the LHC is ready to move on. The State Machine framework was developed to cater for building independent or embedded state machines. They safely drive between the states, executing tasks bound to transitions, and broadcast related information to interested parties. The framework encourages users to program their own actions. Simple configuration management allows the operators to define and maintain complex models themselves. An emphasis was also put on easy interaction with remote state machine instances through standard communication protocols. On top of its core functionality, the framework offers transparent integration with other crucial tools used to operate the LHC, such as the LHC Sequencer. LHC Operational States has been in production for half a year and was seamlessly adopted by the operators. Further extensions to the framework and its application in operations are under way.
* http://cern.ch/marekm/icalepcs.html
 
Poster WEPKS005 [0.717 MB]
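WEPKS005 describes transitions that verify a series of conditions before the next state is entered. A minimal Python sketch of that pattern is shown below; the state names, condition checks and the StateMachine/Transition classes are invented for illustration and are not the actual framework API.

```python
# Sketch of condition-guarded transitions between operational states.
class Transition:
    def __init__(self, source, target, conditions, tasks=()):
        self.source, self.target = source, target
        self.conditions = conditions    # callables that must all return True
        self.tasks = tasks              # callables executed on the transition

class StateMachine:
    def __init__(self, initial, transitions):
        self.state = initial
        self.transitions = transitions

    def request(self, target):
        for t in self.transitions:
            if t.source == self.state and t.target == target:
                if all(cond() for cond in t.conditions):
                    for task in t.tasks:
                        task()
                    self.state = target
                    print(f"entered {target}")
                else:
                    print(f"refused: conditions for {target} not met")
                return
        print(f"no transition {self.state} -> {target}")

if __name__ == "__main__":
    beam_ok = lambda: True
    collimators_ok = lambda: True
    sm = StateMachine("INJECTION", [
        Transition("INJECTION", "RAMP", [beam_ok, collimators_ok],
                   tasks=[lambda: print("arming power converters")]),
    ])
    sm.request("RAMP")
```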
 
WEPKS006 UNICOS Evolution: CPC Version 6 controls, operation, vacuum, cryogenics 786
 
  • E. Blanco Vinuela, J.M. Beckers, B. Bradu, Ph. Durand, B. Fernández Adiego, S. Izquierdo Rosas, A. Merezhin, J.O. Ortola Vidal, J. Rochez, D. Willeman
    CERN, Geneva, Switzerland
 
  The UNICOS (UNified Industrial Control System) framework was created back in 1998; since then a considerable number of applications in different domains have used this framework to develop process control applications. Furthermore, the UNICOS framework has been formalized and its supervision layer has been reused in other kinds of applications (e.g. monitoring or supervisory tasks) where the control layer is not necessarily UNICOS oriented. The process control package has been reformulated as the UNICOS CPC package (Continuous Process Control) and a re-engineering process has been followed. These changes were motivated by several factors: (1) being able to upgrade to newer, higher-performance IT technologies in the automatic code generation, (2) being flexible enough to create new additional device types to cope with other needs (e.g. vacuum or cooling and ventilation applications) without major impact on the framework or the PLC code baselines, and (3) enhancing the framework with new functionalities (e.g. recipes). This publication addresses the motivation, the changes, the new functionalities and the results obtained. It gives an overall view of the technologies used and the changes followed, emphasizing what has been gained for the developer and the final user. Finally, some of the new domains where UNICOS CPC has been used will be illustrated.
Poster WEPKS006 [0.449 MB]
 
WEPKS011 Use of ITER CODAC Core System in SPIDER Ion Source EPICS, controls, experiment, data-acquisition 801
 
  • C. Taliercio, A. Barbalace, M. Breda, R. Capobianco, A. Luchetta, G. Manduchi, F. Molon, M. Moressa, P. Simionato
    Consorzio RFX, Associazione Euratom-ENEA sulla Fusione, Padova, Italy
 
  In February 2011 ITER released a new version (v2) of the CODAC Core System. In addition to the selected EPICS core, the new package also includes several tools from Control System Studio [1]. These tools are all integrated in Eclipse and offer an integrated environment for development and operation. The SPIDER Ion Source experiment is the first experiment planned in the ITER Neutral Beam Test Facility under construction at Consorzio RFX, Padova, Italy. As the final product of the Test Facility is the ITER Neutral Beam Injector, we decided to adhere from the beginning to the ITER CODAC guidelines. Therefore the EPICS system provided in the CODAC Core System will be used in SPIDER for plant control and supervision and, to some extent, for data acquisition. In this paper we report our experience in the usage of CODAC Core System v2 in the implementation of the control system of SPIDER and, in particular, we analyze the benefits and drawbacks of the Self Description Data (SDD) tools which, based on an XML description of the signals involved in the system, provide the automatic generation of the configuration files for the EPICS tools and PLC data exchange.
[1] Control System Studio home page: http://css.desy.de/content/index_eng.html
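The Self Description Data idea in WEPKS011, generating EPICS and PLC configuration automatically from an XML description of the plant signals, can be illustrated with the hedged sketch below. The XML layout and the generated record syntax are simplified inventions, not the actual ITER SDD schema or output.

```python
# Illustrative generation of EPICS-database-like records from an XML
# signal description (schema and record syntax simplified for the example).
import xml.etree.ElementTree as ET

SDD_XML = """
<signals>
  <signal name="SPIDER:IS:PRESSURE" type="ai" unit="Pa"/>
  <signal name="SPIDER:IS:VALVE_CMD" type="bo" unit=""/>
</signals>
"""

def generate_records(xml_text):
    root = ET.fromstring(xml_text)
    records = []
    for sig in root.findall("signal"):
        records.append(
            'record({type}, "{name}") {{\n    field(EGU, "{unit}")\n}}'.format(
                type=sig.get("type"), name=sig.get("name"), unit=sig.get("unit")))
    return "\n".join(records)

if __name__ == "__main__":
    print(generate_records(SDD_XML))
```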
 
 
WEPKS016 Software for Virtual Accelerator Designing simulation, distributed, software, EPICS 816
 
  • N.V. Kulabukhova, A.N. Ivanov, V.V. Korkhov, A. Lazarev
    St. Petersburg State University, St. Petersburg, Russia
 
  The article discusses appropriate technologies for the software implementation of the Virtual Accelerator. The Virtual Accelerator is considered as a set of services and tools enabling transparent execution of computational software for modeling beam dynamics in accelerators on distributed computing resources. The distributed storage and information processing facilities utilized by the Virtual Accelerator make use of a Service-Oriented Architecture (SOA) according to a cloud computing paradigm. Control system toolkits (such as EPICS and TANGO), computing modules (including high-performance computing), realization of the GUI with existing frameworks and visualization of the data are discussed in the paper. The presented research consists of a software analysis for the realization of the interaction between all levels of the Virtual Accelerator and some samples of middleware implementation. A set of servers and clusters at St. Petersburg State University forms the infrastructure of the computing environment for Virtual Accelerator design. The use of component-oriented technology for the realization of the interaction between the Virtual Accelerator levels is proposed. The article concludes with an overview and substantiation of the choice of technologies that will be used for the design and implementation of the Virtual Accelerator.
Poster WEPKS016 [0.559 MB]
 
WEPKS018 MstApp, a Rich Client Control Applications Framework at DESY controls, operation, hardware, status 819
 
  • W. Schütte, K. Hinsch
    DESY, Hamburg, Germany
 
  Funding: Deutsches Elektronen-Synchrotron DESY
The control system for PETRA 3 [1] and its pre-accelerators makes extensive use of rich clients for the control room and the servers. Most of them are written with the help of a rich-client Java framework: MstApp. They total 106 different console applications and 158 individual server applications. MstApp takes care of many common control system application aspects beyond communication. MstApp provides a common look and feel: core menu items, a color scheme for standard states of hardware components and standardized screen sizes/locations. It interfaces with our console application manager (CAM) and displays our communication link diagnostics tools on demand. MstApp supplies an accelerator context for each application; it handles printing, logging, resizing and unexpected application crashes. Due to our standardized deployment process, MstApp applications know their individual developers and can even send them emails at the push of a button by the user. Furthermore, a concept of different operation modes is implemented: view only, operating and expert use. Administration of the corresponding rights is done via web access to a database server. Initialization files on a web server are instantiated as Java objects with the help of the Java SE XMLEncoder. Data tables are read with the same mechanism. New MstApp applications can easily be created with in-house wizards like the NewProjectWizard or the DeviceServerWizard. MstApp improves the operator experience, application developer productivity and delivered software quality.
[1] Reinhard Bacher, “Commissioning of the New Control System for the PETRA 3 Accelerator Complex at Desy”, Proceedings of ICALEPCS 2009, Kobe, Japan
 
Poster WEPKS018 [0.474 MB]
 
WEPKS020 Adding Flexible Subscription Options to EPICS EPICS, database, operation, controls 827
 
  • R. Lange
    HZB, Berlin, Germany
  • L.R. Dalesio
    BNL, Upton, Long Island, New York, USA
  • A.N. Johnson
    ANL, Argonne, USA
 
  Funding: Work supported by U.S. Department of Energy (under contracts DE-AC02-06CH11357 resp. DE-AC02-98CH10886), German Bundesministerium für Bildung und Forschung and Land Berlin.
The need for a mechanism to control and filter subscriptions to control system variables by the client was described in a paper at the ICALEPCS2009 conference.[1] The implementation follows a plug-in design that allows the insertion of plug-in instances into the event stream on the server side. The client can instantiate and configure these plug-ins when opening a subscription, by adding field modifiers to the channel name using JSON notation.[2] This paper describes the design and implementation of a modular server-side plug-in framework for Channel Access, and shows examples for plug-ins as well as their use within an EPICS control system.
[1] R. Lange, A. Johnson, L. Dalesio: Advanced Monitor/Subscription Mechanisms for EPICS, THP090, ICALEPCS2009, Kobe, Japan.
[2] A. Johnson, R. Lange: Evolutionary Plans for EPICS Version 3, WEA003, ICALEPCS2009, Kobe, Japan.
 
Poster WEPKS020 [0.996 MB]
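WEPKS020 describes server-side plug-ins configured by appending field modifiers in JSON notation to the channel name. The Python sketch below illustrates the idea only; the exact channel-name syntax, the plug-in name (deadband) and its configuration are assumptions of the example, not the actual Channel Access implementation.

```python
# Illustrative parsing of a channel name carrying a JSON filter spec and
# application of the corresponding plug-in to an event stream.
import json

def deadband_plugin(config):
    """Forward an update only if it moved more than 'delta' from the last one."""
    last = None
    def accept(value):
        nonlocal last
        if last is None or abs(value - last) >= config.get("delta", 0.0):
            last = value
            return True
        return False
    return accept

PLUGINS = {"deadband": deadband_plugin}

def open_subscription(channel):
    # Hypothetical syntax: 'NAME{"plugin": {...config...}}'
    name, brace, spec = channel.partition("{")
    filters = []
    if brace:
        for plugin, config in json.loads(brace + spec).items():
            filters.append(PLUGINS[plugin](config))
    return name, filters

if __name__ == "__main__":
    name, filters = open_subscription('TEMP:TANK1{"deadband": {"delta": 0.5}}')
    for update in [20.0, 20.1, 20.7, 20.75, 21.4]:
        if all(f(update) for f in filters):
            print(f"{name} -> {update}")
```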
 
WEPKS024 CAFE, A Modern C++ Interface to the EPICS Channel Access Library interface, EPICS, controls, GUI 840
 
  • J.T.M. Chrin, M.C. Sloan
    Paul Scherrer Institut, 5232 Villigen PSI, Switzerland
 
  CAFE (Channel Access interFacE) is a C++ library that provides a modern, multifaceted interface to the EPICS-based control system. CAFE makes extensive use of templates and multi-index containers to enhance efficiency, flexibility and performance. Stability and robustness are accomplished by ensuring that connectivity to EPICS channels remains in a well defined state in every eventuality, and the results of all synchronous and asynchronous operations are captured and reported with integrity. CAFE presents the user with a number of options for writing data to and retrieving data from the control system. In addition to basic read and write operations, a further abstraction layer provides transparency to more intricate functionality involving logical sets of data; such object sequences are easily instantiated through an XML-based configuration mechanism. CAFE's suitability for use in a broad spectrum of applications is demonstrated. These range from high performance Qt GUI control widgets, to event processing agents that propagate data through OMG's Data Distribution Service (DDS), to script-like frameworks such as MATLAB. The methodology for the modular use of CAFE serves to improve maintainability by enforcing a logical boundary between the channel access components and the specifics of the application framework at hand.
Poster WEPKS024 [0.637 MB]
 
WEPKS025 Evaluation of Software and Electronics Technologies for the Control of the E-ELT Instruments: a Case Study controls, software, hardware, CORBA 844
 
  • P. Di Marcantonio, R. Cirami, I. Coretti
    INAF-OAT, Trieste, Italy
  • G. Chiozzi, M. Kiekebusch
    ESO, Garching bei Muenchen, Germany
 
  In the scope of the evaluation of architecture and technologies for the control system of the E-ELT (European Extremely Large Telescope) instruments, a collaboration has been set up between the Instrumentation and Control Group of INAF-OATs and the ESO Directorate of Engineering. The first result of this collaboration is the design and implementation of a prototype of a small but representative control system for an E-ELT instrument that has been set up at the INAF-OATs premises. The electronics is based on PLCs (Programmable Logic Controllers) and Ethernet-based fieldbuses from different vendors, but using international standards like IEC 61131-3 and PLCopen Motion Control. The baseline design for the control software follows the architecture of the VLT (Very Large Telescope) Instrumentation application framework, but it has been implemented using ACS (ALMA Common Software), an open-source software framework developed for the ALMA project and based on CORBA middleware. The communication among the software components is based on two models: CORBA calls for command/reply and the CORBA notification channel for distributing the device status. The communication with the PLCs is based on OPC UA, an international standard for communication with industrial controllers. The results of this work will contribute to the definition of the architecture of the control system that will be provided to all consortia responsible for the actual implementation of the E-ELT instruments. This paper presents the prototype motivation, its architecture, design and implementation.
Poster WEPKS025 [3.039 MB]
 
WEPKS026 A C/C++ Build System Based on Maven for the LHC Controls System target, controls, Linux, pick-up 848
 
  • J. Nguyen Xuan, B. Copy, M. Dönszelmann
    CERN, Geneva, Switzerland
 
  The CERN accelerator controls system, mainly written in Java and C/C++, nowadays consists of 50 projects and 150 active developers. The controls group has decided to unify the development process and standards (e.g. project layout) using Apache Maven and Sonatype Nexus. Maven is the de-facto build tool for Java; it deals with versioning and dependency management, whereas Nexus is a repository manager. C/C++ developers were struggling to keep track of their dependencies on other CERN projects, as no versioning was applied, the libraries had to be compiled and made available for several platforms and architectures, and there was no dependency management mechanism. This resulted in very complex Makefiles which were difficult to maintain. Even though Maven is primarily designed for Java, a plugin (Maven NAR [1]) adapts the build process for native programming languages on different operating systems and platforms. However, C/C++ developers were not keen to abandon their current Makefiles. Hence our approach was to combine the best of the two worlds: NAR/Nexus and Makefiles. Maven NAR manages the dependencies and the versioning and creates a file with the linker and compiler options needed to include the dependencies. The Makefiles carry out the build process to generate the binaries. Finally, the resulting artifacts (binaries, header files, metadata) are versioned and stored in a central Nexus repository. Early experiments were conducted in the scope of the controls group's Testbed. Some existing projects have been successfully converted to this solution and some starting projects use this implementation.
[1] http://cern.ch/jnguyenx/MavenNAR.html
 
Poster WEPKS026 [0.518 MB]
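WEPKS026 keeps the existing Makefiles for compilation while letting the dependency manager produce a file with the compiler and linker options for the resolved dependencies. The Python fragment below is a rough illustration of that handover; the repository paths, library names and generated variable names are invented for the example.

```python
# Illustrative generation of a Makefile include fragment from a resolved
# dependency list, mimicking the "dependency manager writes options,
# Makefile does the build" split described in the abstract.
DEPENDENCIES = [          # pretend output of dependency resolution
    {"name": "cmw-rda", "version": "2.8.1", "root": "/opt/repo/cmw-rda/2.8.1"},
    {"name": "log4cplus", "version": "1.0.4", "root": "/opt/repo/log4cplus/1.0.4"},
]

def write_options(deps, path="dependencies.mk"):
    cppflags = " ".join(f"-I{d['root']}/include" for d in deps)
    ldflags = " ".join(f"-L{d['root']}/lib -l{d['name']}" for d in deps)
    with open(path, "w") as fragment:
        fragment.write(f"DEP_CPPFLAGS = {cppflags}\n")
        fragment.write(f"DEP_LDFLAGS  = {ldflags}\n")
    return path

if __name__ == "__main__":
    with open(write_options(DEPENDENCIES)) as result:
        print(result.read())
```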
 
WEPKS027 Java Expert GUI Framework for CERN's Beam Instrumentation Systems GUI, software, controls, software-architecture 852
 
  • S. Bart Pedersen, S. Bozyigit, S. Jackson
    CERN, Geneva, Switzerland
 
  The CERN Beam Instrumentation Group software section has recently performed a study of the tools used to produce Java expert applications. This paper will present the analysis that was made to understand the requirements for generic components and the resulting tools, including a compilation of Java components that have been made available to a wider audience. The paper will also discuss the possibility of using Maven as a deployment tool, with its implications for developers and users.
Poster WEPKS027 [1.838 MB]
 
WEPKS032 A UML Profile for Code Generation of Component Based Distributed Systems interface, software, distributed, controls 867
 
  • G. Chiozzi, L. Andolfato, R. Karban
    ESO, Garching bei Muenchen, Germany
  • A. Tejeda
    UCM, Antofagasta, Chile
 
  A consistent and unambiguous implementation of code generation (model to text transformation) from UML must rely on a well defined UML profile, customizing UML for a particular application domain. Such a profile must have a solid foundation in a formally correct ontology, formalizing the concepts and their relations in the specific domain, in order to avoid a maze or set of wildly created stereotypes. The paper describes a generic profile for the code generation of component based distributed systems for control applications, the process to distill the ontology and define the profile, and the strategy followed to implement the code generator. The main steps that take place iteratively include: defining the terms and relations with an ontology, mapping the ontology to the appropriate UML metaclasses, testing the profile by creating modelling examples, and generating the code.  
Poster WEPKS032 [1.925 MB]
 
WEPKS033 UNICOS CPC6: Automated Code Generation for Process Control Applications controls, software, operation, vacuum 871
 
  • B. Fernández Adiego, E. Blanco Vinuela, I. Prieto Barreiro
    CERN, Geneva, Switzerland
 
  The Continuous Process Control package (CPC) is one of the components of the CERN Unified Industrial Control System framework (UNICOS). As a part of this framework, UNICOS-CPC provides a well defined library of device types, a methodology and a set of tools to design and implement industrial control applications. The new CPC version uses the software factory UNICOS Application Builder (UAB) to develop the CPC applications. The CPC component is composed of several platform-oriented plug-ins (PLCs and SCADA) describing the structure and the format of the generated code. It uses a resource package where both the library of device types and the generated file syntax are defined. The UAB core is the generic part of this software: it dynamically discovers and calls the different plug-ins and provides the required common services. In this paper the UNICOS CPC6 package is presented. It is composed of several plug-ins: the Instance generator and the Logic generator for both Siemens and Schneider PLCs, the SCADA generator (based on PVSS) and the CPC wizard, a dedicated plug-in created to provide the user with a friendly GUI. A management tool called UAB bootstrap will administer the different CPC component versions and all the dependencies between the CPC resource packages and the components. This tool guides the control system developer in installing and launching the different CPC component versions.
Poster WEPKS033 [0.730 MB]
 
WEPMN009 Simplified Instrument/Application Development and System Integration Using Libera Base Software Framework software, hardware, interface, controls 890
 
  • M. Kenda, T. Beltram, T. Juretič, B. Repič, D. Škvarč, C. Valentinčič
    I-Tech, Solkan, Slovenia
 
  The development of many appliances used in scientific environments confronts us with similar challenges, often addressed repeatedly. One has to design or integrate hardware components. Support for network and other communications standards needs to be established. Data and signals are processed and dispatched. Interfaces are required to monitor and control the behaviour of the appliances. At Instrumentation Technologies we identified and addressed these issues by creating a generic framework which is composed of several reusable building blocks. They simplify some of the tedious tasks and leave more time to concentrate on the real issues of the application. Furthermore, the end-product quality benefits from the larger common base of this middleware. We will present the benefits using the concrete example of an instrument implemented on the MicroTCA platform and accessible through a graphical user interface.
Poster WEPMN009 [5.755 MB]
 
WEPMN013 Recent Developments in Synchronised Motion Control at Diamond Light Source EPICS, controls, software, interface 901
 
  • B.J. Nutter, T.M. Cobb, M.R. Pearson, N.P. Rees, F. Yuan
    Diamond, Oxfordshire, United Kingdom
 
  At Diamond Light Source the EPICS control system is used with a variety of motion controllers. The use of EPICS ensures a common interface over a range of motorised applications. We have developed a system to enable the use of the same interface for synchronised motion over multiple axes using the Delta Tau PMAC controller. Details of this work will be presented, along with examples and possible future developments.  
 
WEPMN036 Comparative Analysis of EPICS IOC and MARTe for the Development of a Hard Real-Time Control Application EPICS, controls, real-time, software 961
 
  • A. Barbalace, A. Luchetta, G. Manduchi, C. Taliercio
    Consorzio RFX, Associazione Euratom-ENEA sulla Fusione, Padova, Italy
  • B. Carvalho, D.F. Valcárcel
    IPFN, Lisbon, Portugal
 
  EPICS is used worldwide to build distributed control systems for scientific experiments. The EPICS software suite is based around the Channel Access (CA) network protocol that allows the communication of different EPICS clients and servers in a distributed architecture. Servers are called Input/Output Controllers (IOCs) and perform real-world I/O or local control tasks. EPICS IOCs were originally designed for VxWorks to meet the demanding real-time requirements of control algorithms and have lately been ported to different operating systems. The MARTe framework has recently been adopted to develop an increasing number of hard real-time systems in different fusion experiments. MARTe is a software library that allows the rapid and modular development of stand-alone hard real-time control applications on different operating systems. MARTe has been created to be portable and during the last years it has evolved to follow the multicore evolution. In this paper we review several implementation differences between EPICS IOC and MARTe. We dissect their internal data structures and synchronization mechanisms to understand what happens behind the scenes. Differences in the component based approach and in the concurrent model of computation in EPICS IOC and MARTe are explained. Such differences lead to distinct time models in the computational blocks and distinct real-time capabilities of the two frameworks that a developer must be aware of.  
poster icon Poster WEPMN036 [2.406 MB]  
 
WEPMS001 Interconnection Test Framework for the CMS Level-1 Trigger System operation, hardware, distributed, controls 973
 
  • J. Hammer
    CERN, Geneva, Switzerland
  • M. Magrans de Abril
    UW-Madison/PD, Madison, Wisconsin, USA
  • C.-E. Wulz
    HEPHY, Wien, Austria
 
  The Level-1 Trigger Control and Monitoring System is a software package designed to configure, monitor and test the Level-1 Trigger System of the Compact Muon Solenoid (CMS) experiment at CERN's Large Hadron Collider. It is a large and distributed system that runs on over 50 PCs and controls about 200 hardware units. The Interconnection Test Framework (ITF), a generic and highly flexible framework for creating and executing hardware tests within the Level-1 Trigger environment, is presented. The framework is designed to automate testing of the 13 major subsystems interconnected with more than 1000 links. Features include a web interface to create and execute tests, modeling using finite state machines, dependency management, automatic configuration, and loops. Furthermore, the ITF will replace the existing heterogeneous testing procedures and help reduce the maintenance effort and the complexity of operational tasks. Finally, an example of operational use of the Interconnection Test Framework is presented. This case study proves the concept and describes the customization process and its performance characteristics.  
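  The finite-state-machine modelling and dependency handling mentioned above can be sketched as follows; this is illustrative Python, not the ITF implementation, and the test names and states are invented.

    # Illustrative sketch: tests modelled as tiny finite state machines with
    # dependencies, in the spirit of the framework described above.

    class InterconnectTest:
        STATES = ("idle", "configured", "running", "passed", "failed")

        def __init__(self, name, depends_on=()):
            self.name, self.depends_on, self.state = name, depends_on, "idle"

        def execute(self, results):
            if any(results.get(dep) != "passed" for dep in self.depends_on):
                self.state = "failed"             # unmet dependency
                return self.state
            self.state = "configured"             # automatic configuration step
            self.state = "running"
            self.state = "passed" if self.check_links() else "failed"
            return self.state

        def check_links(self):
            return True                           # placeholder for real link checks

    def run_campaign(tests):
        results = {}
        for test in tests:                        # assumed already ordered by dependency
            results[test.name] = test.execute(results)
        return results

    print(run_campaign([InterconnectTest("TEST-A"),
                        InterconnectTest("TEST-B", depends_on=("TEST-A",))]))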
poster icon Poster WEPMS001 [0.576 MB]  
 
WEPMU010 Automatic Analysis at the Commissioning of the LHC Superconducting Electrical Circuits operation, hardware, GUI, status 1073
 
  • H. Reymond, O.O. Andreassen, C. Charrondière, A. Rijllart, M. Zerlauth
    CERN, Geneva, Switzerland
 
  Since the beginning of 2010 the LHC has been operating routinely, starting with a commissioning phase followed by an operation-for-physics phase. The commissioning of the superconducting electrical circuits requires rigorous test procedures before they enter operation. To maximize the beam operation time of the LHC, these tests should be done as quickly as the procedures allow. A full commissioning needs 12000 tests and is required after circuits have been warmed above liquid nitrogen temperature; below this temperature, after an end-of-year break of two months, about 6000 tests are needed. Because the manual analysis of the tests takes a major part of the commissioning time, we automated the existing analysis tools. We present the way in which these LabVIEW™ applications were automated. We evaluate the gain in commissioning time and the reduction in the number of experts on night shift observed during the LHC hardware commissioning campaign of 2011 compared to 2010. We end with an outlook on what can be further optimized.  
poster icon Poster WEPMU010 [3.124 MB]  
 
WEPMU029 Assessment And Testing of Industrial Devices Robustness Against Cyber Security Attacks network, controls, monitoring, target 1130
 
  • F.M. Tilaro, B. Copy
    CERN, Geneva, Switzerland
 
  CERN (the European Organization for Nuclear Research), like any organization, needs to achieve the conflicting objectives of connecting its operational network to the Internet while at the same time keeping its industrial control systems secure from external and internal cyber attacks. With this in mind, the ISA-99 [1] international cyber security standard has been adopted at CERN as a reference model to define a set of guidelines and security robustness criteria applicable to any network device. Device robustness represents a key link in the defense-in-depth concept, as some attacks will inevitably penetrate security boundaries and thus require further protection measures. When assessing the cyber security robustness of devices, we have singled out control-system-relevant attack patterns derived from the well-known CAPEC [2] classification. Once a vulnerability is identified, it needs to be documented, prioritized and reproduced at will in a dedicated test environment for debugging purposes. CERN, in collaboration with SIEMENS, has designed and implemented a dedicated working environment, the Test-bench for Robustness of Industrial Equipments [3] ("TRoIE"). The tests attempt to detect possible anomalies by exploiting corrupt communication channels and manipulating the normal behavior of the communication protocols, in the same way as a cyber attacker would proceed. This document provides an inventory of security guidelines [4] relevant to the CERN industrial environment and describes how we have automated the collection and classification of identified vulnerabilities into a test-bench.
[1] http://www.isa.org
[2] http://capec.mitre.org
[3] F. Tilaro, "Test-bench for Robustness…", CERN, 2009
[4] B. Copy, F. Tilaro, "Standards based measurable security for embedded devices", ICALEPCS 2009
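  The robustness tests described above exercise devices with malformed traffic. A minimal, generic sketch of such a probe is shown below (Python); the target address, port and payload mutations are hypothetical and unrelated to the actual TRoIE implementation, and such a probe should only ever run against an isolated test bench.

    # Minimal, generic robustness probe: send increasingly malformed payloads to
    # a device under test and record whether it keeps answering. Target, port
    # and mutations are invented; use only on an isolated test bench.
    import random
    import socket

    TARGET, PORT, TIMEOUT = "192.0.2.10", 502, 1.0   # 192.0.2.0/24 is a documentation range

    def mutate(payload: bytes) -> bytes:
        data = bytearray(payload)
        for _ in range(random.randint(1, 4)):
            data[random.randrange(len(data))] = random.randrange(256)
        return bytes(data)

    def probe(baseline: bytes, iterations: int = 100):
        for i in range(iterations):
            try:
                with socket.create_connection((TARGET, PORT), timeout=TIMEOUT) as sock:
                    sock.sendall(mutate(baseline))
                    sock.recv(1024)
            except OSError as exc:
                print(f"iteration {i}: device stopped responding ({exc})")
                break

    if __name__ == "__main__":
        probe(baseline=bytes(12))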
 
poster icon Poster WEPMU029 [3.152 MB]  
 
WEPMU033 Monitoring Control Applications at CERN controls, monitoring, operation, software 1141
 
  • F. Varela, F.B. Bernard, M. Gonzalez-Berges, H. Milcent, L.B. Petrova
    CERN, Geneva, Switzerland
 
  The Industrial Controls and Engineering (EN-ICE) group of the Engineering Department at CERN has produced, and is responsible for the operation of, around 60 applications that control critical processes in the domains of cryogenics, quench protection systems, power interlocks for the Large Hadron Collider and other sub-systems of the accelerator complex. These applications require 24/7 operation and a quick reaction to problems. For this reason EN-ICE is presently developing a monitoring tool to detect, anticipate and report possible anomalies in the integrity of the applications. The tool builds on top of the Simatic WinCC Open Architecture (formerly PVSS) SCADA system and makes use of the Joint COntrols Project (JCOP) and UNICOS frameworks developed at CERN. It provides centralized monitoring of the different elements that make up the control systems, such as Windows and Linux servers, PLCs and applications. Although the primary aim of the tool is to assist the members of the EN-ICE Standby Service, it can present different levels of detail of the systems depending on the user, which enables experts to diagnose and troubleshoot problems. In this paper, the scope, functionality and architecture of the tool are presented and some initial results on its performance are summarized.  
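  A schematic example of the kind of centralized health check such a tool performs is given below; this is plain Python for illustration, not the WinCC OA/JCOP implementation, and the element names, hosts and ports are invented.

    # Illustrative central health check: poll heterogeneous elements (servers,
    # PLCs, applications) and flag anomalies. Names and checks are invented.
    import socket

    ELEMENTS = [
        {"name": "linux-server-1", "host": "srv1.example.org", "port": 22},
        {"name": "plc-cryo-3",     "host": "plc3.example.org", "port": 102},
    ]

    def is_reachable(host, port, timeout=2.0):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    def scan(elements):
        for e in elements:
            status = "OK" if is_reachable(e["host"], e["port"]) else "ALARM"
            print(f'{e["name"]:<16} {status}')   # would be raised to the standby service

    if __name__ == "__main__":
        scan(ELEMENTS)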
poster icon Poster WEPMU033 [1.719 MB]  
 
THAAUST02 Suitability Assessment of OPC UA as the Backbone of Ground-based Observatory Control Systems controls, software, interface, CORBA 1174
 
  • W. Pessemier, G. Deconinck, G. Raskin, H. Van Winckel
    KU Leuven, Leuven, Belgium
  • P. Saey
    Katholieke Hogeschool Sint-Lieven, Gent, Belgium
 
  A common requirement of modern observatory control systems is to allow interaction between various heterogeneous subsystems in a transparent way. However, the integration of COTS industrial products - such as PLCs and SCADA software - has long been hampered by the lack of an adequate, standardized interfacing method. With the advent of the Unified Architecture version of OPC (Object Linking and Embedding for Process Control), the limitations of the original industry-accepted interface are now lifted, and in addition much more functionality has been defined. In this paper the most important features of OPC UA are matched against the requirements of ground-based observatory control systems in general and in particular of the 1.2m Mercator Telescope. We investigate the opportunities of the "information modelling" idea behind OPC UA, which could allow an extensive standardization in the field of astronomical instrumentation, similar to the standardization efforts emerging in several industry domains. Because OPC UA is designed for both vertical and horizontal integration of heterogeneous subsystems and subnetworks, we explore its capabilities to serve as the backbone of a dependable and scalable observatory control system, treating "industrial components" like PLCs no differently than custom software components. In order to quantitatively assess the performance and scalability of OPC UA, stress tests are described and their results are presented. Finally, we consider practical issues such as the availability of COTS OPC UA stacks, software development kits, servers and clients.  
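  For readers unfamiliar with the client side of OPC UA, a minimal read/write interaction might look as follows. The sketch assumes the open-source python-opcua package and uses an invented endpoint and node identifier; it is not the Mercator Telescope configuration.

    # Minimal OPC UA client interaction (sketch). Assumes the python-opcua
    # package; endpoint URL and node id are invented for illustration.
    from opcua import Client

    client = Client("opc.tcp://plc.example.org:4840")   # hypothetical endpoint
    client.connect()
    try:
        node = client.get_node("ns=2;s=Telescope.Dome.Azimuth")  # hypothetical node id
        print("current value:", node.get_value())
        node.set_value(123.4)                                    # write a new setpoint
    finally:
        client.disconnect()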
slides icon Slides THAAUST02 [2.879 MB]  
 
THBHMUST01 Multi-platform SCADA GUI Regression Testing at CERN GUI, software, Windows, Linux 1201
 
  • P.C. Burkimsher, M. Gonzalez-Berges, S. Klikovits
    CERN, Geneva, Switzerland
 
  Funding: CERN
The JCOP Framework is a toolkit used widely at CERN for the development of industrial control systems in several domains (experiments, accelerators and technical infrastructure). The software development started 10 years ago and there is now a large base of production systems running it. For the success of the project, it was essential to formalize and automate the quality assurance process. The paper will present the overall testing strategy and will describe in detail the mechanisms used for GUI testing. The choice of a commercial tool (Squish) and the architectural features making it appropriate for our multi-platform environment will be described. Practical difficulties encountered when using the tool in the CERN context are discussed, as well as how these were addressed. In the light of initial experience, the test code itself has recently been reworked in an object-oriented style to facilitate future maintenance and extension. The paper concludes with a description of our initial steps towards the incorporation of full-blown Continuous Integration (CI) support.
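  The object-oriented restructuring of the test code mentioned above typically follows a page-object style. The sketch below is a generic Python illustration of that pattern with an invented GUI driver and widget names; it is not the Squish API used at CERN.

    # Generic page-object style GUI regression test (illustrative sketch only).
    import unittest

    class FakeGuiDriver:
        """Stand-in for a GUI automation backend."""
        def __init__(self):
            self.widgets = {"ackButton": "enabled", "alarmTable": "3 rows"}
        def read(self, name):
            return self.widgets[name]
        def click(self, name):
            self.widgets["alarmTable"] = "0 rows"   # pretend the click acknowledges alarms

    class MainPanel:
        """Page object: hides widget names behind intention-revealing methods."""
        def __init__(self, gui):
            self.gui = gui
        def acknowledge_alarms(self):
            self.gui.click("ackButton")
        def alarm_count(self):
            return int(self.gui.read("alarmTable").split()[0])

    class PanelRegressionTest(unittest.TestCase):
        def test_acknowledge_clears_alarms(self):
            panel = MainPanel(FakeGuiDriver())
            panel.acknowledge_alarms()
            self.assertEqual(panel.alarm_count(), 0)

    if __name__ == "__main__":
        unittest.main()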
 
slides icon Slides THBHMUST01 [1.878 MB]  
 
THCHAUST03 Common Data Model ; A Unified Layer to Access Data from Data Analysis Point of View detector, synchrotron, data-analysis, neutron 1220
 
  • N. Hauser, T.K. Lam, N. Xiong
    ANSTO, Menai, Australia
  • A. Buteau, M. Ounsy, S. Poirier
    SOLEIL, Gif-sur-Yvette, France
  • C. Rodriguez
    ALTEN, Boulogne-Billancourt, France
 
  For almost 20 years, the scientific community of neutron and synchrotron facilities has been dreaming of a common data format to exchange experimental results and the applications that analyse them. While the use of HDF5 as a physical container for data quickly reached a broad consensus, the big issue remains the standardisation of data organisation. By introducing a new level of indirection for data access, the CommonDataModel (CDM) framework offers a solution and allows development efforts and responsibilities to be split between institutes. The CDM is made of a core API that accesses data through a data-format plugin mechanism, and of application definitions (i.e. sets of logically organised keywords defined by scientists for each experimental technique). Using an innovative "mapping" system between application definitions and physical data organisations, the CDM allows data reduction applications to be developed regardless of data file formats and organisations. Each institute then has to develop data access plugins for its own file formats, along with the mapping between the application definitions and its own data organisation. Thus, data reduction applications can be developed from a strictly scientific point of view and are natively able to process data coming from several institutes. A concrete example of a SAXS data reduction application accessing NeXus and EDF (ESRF Data Format) files will be discussed.  
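  The plugin-plus-mapping indirection described above can be sketched as follows; this is illustrative Python rather than the actual CDM API, and the keywords, paths and plugin behaviour are invented.

    # Illustrative sketch of the indirection idea: applications ask for
    # scientific keywords, a per-institute mapping resolves them to paths inside
    # the file, and a format plugin does the actual reading.

    class NexusPlugin:
        """Format plugin: knows how to read one physical file format."""
        def read(self, path):
            return {"/entry/data/counts": [1, 2, 3],
                    "/entry/instrument/wavelength": 1.54}[path]

    # Per-institute mapping from application-definition keywords to physical paths.
    SAXS_MAPPING = {
        "intensity":  "/entry/data/counts",
        "wavelength": "/entry/instrument/wavelength",
    }

    class Dataset:
        def __init__(self, plugin, mapping):
            self.plugin, self.mapping = plugin, mapping
        def get(self, keyword):
            return self.plugin.read(self.mapping[keyword])

    # A data reduction application written purely in terms of scientific keywords.
    ds = Dataset(NexusPlugin(), SAXS_MAPPING)
    print(ds.get("intensity"), ds.get("wavelength"))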
slides icon Slides THCHAUST03 [36.889 MB]  
 
THCHAUST06 Instrumentation of the CERN Accelerator Logging Service: Ensuring Performance, Scalability, Maintenance and Diagnostics instrumentation, database, extraction, distributed 1232
 
  • C. Roderick, R. Billen, D.D. Teixeira
    CERN, Geneva, Switzerland
 
  The CERN accelerator Logging Service currently holds more than 90 terabytes of data online and processes approximately 450 gigabytes per day, via hundreds of data loading processes and data extraction requests. This service is mission-critical for day-to-day operations, especially with respect to the tracking of live data from the LHC beam and equipment. In order to effectively manage any service, the service provider's goals should include knowing how the underlying systems are being used, in terms of: "Who is doing what, from where, using which applications and methods, and how long each action takes". Armed with such information, it is then possible to analyze and tune system performance over time, plan for scalability ahead of time, assess the impact of maintenance operations and infrastructure upgrades, and diagnose past, on-going or recurring problems. The Logging Service is based on Oracle DBMS and Application Servers, and Java technology, and comprises several layered and multi-tiered systems. These systems have all been heavily instrumented to capture data about system usage, using technologies such as JMX. The success of the Logging Service and its proven ability to cope with ever-growing demands can be directly linked to the instrumentation in place. This paper describes the instrumentation that has been developed and demonstrates how the instrumentation data is used to achieve the goals outlined above.  
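  The kind of usage instrumentation described above ("who is doing what, from where, and how long each action takes") can be illustrated with a simple timing wrapper. The sketch below is generic Python with invented field and variable names, not the Java/JMX instrumentation used in the service itself.

    # Generic illustration of call instrumentation: record who called what, from
    # where, and how long it took.
    import functools
    import time

    USAGE_LOG = []

    def instrumented(func):
        @functools.wraps(func)
        def wrapper(*args, user="unknown", client_host="unknown", **kwargs):
            start = time.perf_counter()
            try:
                return func(*args, **kwargs)
            finally:
                USAGE_LOG.append({
                    "action": func.__name__,
                    "user": user,
                    "client_host": client_host,
                    "duration_s": round(time.perf_counter() - start, 6),
                })
        return wrapper

    @instrumented
    def extract_data(variable, t_start, t_end):
        time.sleep(0.01)                  # stand-in for a database query
        return []

    # Variable and host names below are invented examples.
    extract_data("LHC.BEAM.INTENSITY", 0, 3600, user="operator1", client_host="console-1")
    print(USAGE_LOG)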
slides icon Slides THCHAUST06 [5.459 MB]  
 
THCHMUST04 Free and Open Source Software at CERN: Integration of Drivers in the Linux Kernel Linux, controls, FPGA, data-acquisition 1248
 
  • J.D. González Cobas, S. Iglesias Gonsálvez, J.H. Lewis, J. Serrano, M. Vanga
    CERN, Geneva, Switzerland
  • E.G. Cota
    Columbia University, NY, USA
  • A. Rubini, F. Vaga
    University of Pavia, Pavia, Italy
 
  We describe the experience acquired during the integration of the tsi148 driver into the main Linux kernel tree. The benefits (and some of the drawbacks) for long-term software maintenance are analysed, the most immediate one being the support and quality review added by an enormous community of skilled developers. Indirect consequences are also analysed, and these are no less important: a serious impact on the style of the development process, the use of cutting-edge tools and technologies supporting development, the adoption of the very strict standards enforced by the Linux kernel community, etc. These elements were also exported to the hardware development process in our section, and we explain how they were used with a particular example in mind: the development of the FMC family of boards following the Open Hardware philosophy, and how its architecture must fit the Linux model. This delicate interplay of hardware and software architectures is a perfect showcase of the benefits we get from the strategic decision of having our drivers integrated in the kernel. Finally, the case for a whole family of CERN-developed drivers for data acquisition modules, the prospects for its integration in the kernel, and the adoption of a model parallel to Comedi, is also taken as an example of how this approach will perform in the future.  
slides icon Slides THCHMUST04 [0.777 MB]  
 
THDAUST02 An Erlang-Based Front End Framework for Accelerator Controls controls, interface, data-acquisition, hardware 1264
 
  • D.J. Nicklaus, C.I. Briegel, J.D. Firebaugh, CA. King, R. Neswold, R. Rechenmacher, J. You
    Fermilab, Batavia, USA
 
  We have developed a new front-end framework for the ACNET control system in Erlang. Erlang is a functional programming language developed for real-time telecommunications applications. The primary task of the front-end software is to connect the control system with drivers collecting data from individual field bus devices. Erlang's concurrency and message-passing support have proven well suited to managing large numbers of independent ACNET client requests for front-end data. Other Erlang features which make it particularly well suited for a front-end framework include fault tolerance with process monitoring and restarting, real-time response, and the ability to change code in running systems. Erlang's interactive shell and dynamic typing make writing and running unit tests an easy part of the development process. Erlang includes mechanisms for distributing applications, which we will use for deploying our framework to multiple front-ends, along with a configured set of device drivers. We have developed Erlang code to use Fermilab's TCLK event distribution clock, and Erlang's interface to C/C++ allows hardware-specific driver access.  
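  The per-request concurrency and supervision pattern described above is native to Erlang. As a language-neutral illustration of the same idea, and not the ACNET front-end code, the Python asyncio sketch below spawns one lightweight task per client request and retries a failing device reader in the spirit of a supervisor.

    # Illustration of the pattern only: one lightweight handler per client
    # request, plus a supervisor-like retry loop for a failing reader.
    import asyncio
    import random

    async def read_device(name):
        if random.random() < 0.3:
            raise RuntimeError(f"{name} read failed")
        await asyncio.sleep(0.01)
        return f"{name}=42"

    async def handle_request(client_id, device):
        try:
            reply = await read_device(device)
        except RuntimeError as exc:
            reply = f"error: {exc}"
        print(f"client {client_id}: {reply}")

    async def supervised_reader(device, restarts=3):
        """Restart the reader if it crashes, in the spirit of an Erlang supervisor."""
        for _ in range(restarts):
            try:
                return await read_device(device)
            except RuntimeError:
                await asyncio.sleep(0.01)        # back off, then restart
        return None

    async def main():
        await asyncio.gather(*(handle_request(i, "DEV01") for i in range(5)),
                             supervised_reader("DEV02"))

    asyncio.run(main())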
slides icon Slides THDAUST02 [1.439 MB]  
 
THDAULT06 MARTe Framework: a Middleware for Real-time Applications Development real-time, controls, hardware, Linux 1277
 
  • A. Neto, D. Alves, B. Carvalho, P.J. Carvalho, H. Fernandes, D.F. Valcárcel
    IPFN, Lisbon, Portugal
  • A. Barbalace, G. Manduchi
    Consorzio RFX, Associazione Euratom-ENEA sulla Fusione, Padova, Italy
  • L. Boncagni
    ENEA C.R. Frascati, Frascati (Roma), Italy
  • G. De Tommasi
    CREATE, Napoli, Italy
  • P. McCullen, A.V. Stephen
    CCFE, Abingdon, Oxon, United Kingdom
  • F. Sartori
    F4E, Barcelona, Spain
  • R. Vitelli
    Università di Roma II Tor Vergata, Roma, Italy
  • L. Zabeo
    ITER Organization, St. Paul lez Durance, France
 
  Funding: This work was supported by the European Communities under the contract of Association between EURATOM/IST and was carried out within the framework of the European Fusion Development Agreement
The Multi-threaded Application Real-Time executor (MARTe) is a C++ framework that provides a development environment for the design and deployment of real-time applications, e.g. control systems. The kernel of MARTe comprises a set of data-driven independent blocks, connected using a shared bus. This modular design enforces a clear boundary between algorithms, hardware interaction and system configuration. The architecture, being multi-platform, facilitates the testing and commissioning of new systems, enabling the execution of plant models in offline environments and with hardware in the loop, whilst also providing a set of non-intrusive introspection and logging facilities. Furthermore, applications can be developed in non-real-time environments and deployed in a real-time operating system, using exactly the same code and configuration data. The framework is already being used in several fusion experiments, with control cycles ranging from 50 microseconds to 10 milliseconds exhibiting jitter of less than 2%, using VxWorks, RTAI or Linux. Code can also be developed and executed on Microsoft Windows, Solaris and Mac OS X. This paper discusses the main design concepts of MARTe, in particular the architectural choices which enabled the combination of real-time accuracy, performance and robustness with complex and modular data-driven applications.
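  The kernel structure described above, with independent data-driven blocks exchanging signals over a shared bus, can be sketched schematically. The Python below is only an analogy to the C++ design; block names, signal names and the toy controller are invented.

    # Schematic analogy (not the MARTe API): independent blocks read and write
    # named signals on a shared data bus, and the executor runs them in order
    # each cycle.

    class SharedBus(dict):
        """Named signals exchanged between blocks."""

    class InputBlock:
        def execute(self, bus):
            bus["plasma_current"] = 1.2e6          # stand-in for an ADC read

    class ControlBlock:
        def execute(self, bus):
            error = 1.0e6 - bus["plasma_current"]  # toy proportional controller
            bus["coil_demand"] = 0.001 * error

    class OutputBlock:
        def execute(self, bus):
            print("DAC <-", bus["coil_demand"])    # stand-in for a DAC write

    def run_cycle(blocks, bus):
        for block in blocks:                        # configuration fixes the order
            block.execute(bus)

    bus = SharedBus()
    run_cycle([InputBlock(), ControlBlock(), OutputBlock()], bus)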
 
slides icon Slides THDAULT06 [1.535 MB]