Paper | Title | Other Keywords | Page |
---|---|---|---|
MOBAUST01 | News from ITER Controls - A Status Report | controls, EPICS, network, real-time | 1 |
Construction of ITER has started at the Cadarache site in southern France. The first buildings are taking shape and more than 60% of the in-kind procurement has been committed by the seven ITER member states (China, Europe, India, Japan, Korea, Russia and the United States). The design and manufacturing of the main components of the machine are now underway all over the world. Each of these components comes with a local control system, which must be integrated into the central control system. The control group at ITER has developed two products to facilitate this: the Plant Control Design Handbook (PCDH) and the Control, Data Access and Communication (CODAC) Core System. PCDH is a document which prescribes the technologies and methods to be used in developing the local control systems and sets the rules applicable to the in-kind procurements. The CODAC Core System is a software package, distributed to all in-kind procurement developers, which implements the PCDH and facilitates the compliance of the local control systems. In parallel, the ITER control group is proceeding with the design of the central control system to allow fully integrated and automated operation of ITER. In this paper we report on the progress of the design and the technology choices, and discuss the justifications of those choices. We also report on the results of some pilot projects aimed at validating the design and technologies.
Slides MOBAUST01 [4.238 MB]
MOBAUST05 | Control System Achievement at KEKB and Upgrade Design for SuperKEKB | controls, EPICS, operation, linac | 17 |
The SuperKEKB electron-positron asymmetric collider is being constructed after a decade of successful operation of KEKB for B physics research. KEKB completed all of its technical milestones and offered important insights into the flavor structure of elementary particles, especially CP violation. The combination of scripting languages at the operation layer and EPICS at the equipment layer led the control system to successful performance. The new control system in SuperKEKB will retain those major features of KEKB, with additional technologies for reliability and flexibility. The major structure will be maintained, especially the online linkage to the simulation code and slow controls. However, as the design luminosity is 40 times higher than that of KEKB, several orders of magnitude higher performance will be required in certain areas. At the same time, more controllers with embedded technology will be installed to cope with limited resources.
Slides MOBAUST05 [2.781 MB]
MOCAULT02 | Managing the Development of Plant Subsystems for a Large International Project | controls, interface, EPICS, site | 27 |
ITER is an international collaborative project under development by nations representing over one half of the world's population. Major components will be supplied by "Domestic Agencies" representing the various participating countries. While the supervisory control system, known as "CODAC", will be developed at the project site in the south of France, the EPICS- and PLC-based plant control subsystems are to be developed and tested locally, where the subsystems themselves are being built. This is similar to the model used for the development of the Spallation Neutron Source (SNS), which was a US national collaboration. However, the far more complex constraints of an international collaboration, as well as the mandated extensive use of externally contracted and commercially built subsystems, preclude the use of many specifics of the SNS collaboration approach which may have contributed to its success. Moreover, procedures for final system integration and commissioning at ITER are not yet well defined. This paper will outline the particular issues either inherent in an international collaboration or specific to ITER, and will suggest approaches to mitigate those problems with the goal of assuring a successful and timely integration and commissioning phase.
Slides MOCAULT02 [3.684 MB]
MOMAU002 | Improving Data Retrieval Rates Using Remote Data Servers | network, hardware, database, controls | 40 |
Funding: Work performed under the auspices of the U.S. Department of Energy.
The power and scope of modern control systems has led to an increased amount of data being collected and stored, including data collected at high (kHz) frequencies. One consequence is that users now routinely make data requests that can cause gigabytes of data to be read and displayed. Given that a user's patience can be measured in seconds, this can be quite a technical challenge. This paper explores one possible solution to this problem: the creation of remote data servers whose performance is optimized to handle context-sensitive data requests. Methods for increasing data delivery performance include the use of high-speed network connections between the stored data and the data servers, smart caching of frequently used data, and the culling of the data delivered as determined by the context of the data request. This paper describes decisions made when constructing these servers and compares data retrieval performance by clients that use or do not use an intermediate data server.
Slides MOMAU002 [0.085 MB]
Poster MOMAU002 [1.077 MB]
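The "culling" step mentioned in the abstract lends itself to a compact illustration. The sketch below is a guess at one plausible strategy rather than the BNL implementation: it bounds the number of points returned to a plotting client by keeping only each bin's minimum and maximum, so short spikes survive the reduction. The function and parameter names are invented.

```python
import numpy as np

def cull_for_plot(t, v, max_points=2000):
    """Reduce (t, v) to at most ~2*max_points samples by keeping the
    per-bin min and max, so spikes remain visible after culling."""
    n = len(v)
    if n <= max_points:
        return t, v
    bins = np.array_split(np.arange(n), max_points)
    keep = []
    for b in bins:
        keep.append(b[np.argmin(v[b])])   # global index of the bin minimum
        keep.append(b[np.argmax(v[b])])   # global index of the bin maximum
    keep = np.unique(keep)
    return t[keep], v[keep]

# A million kHz-rate samples culled to a screen-sized series:
t = np.linspace(0, 1, 1_000_000)
v = np.sin(2 * np.pi * 5 * t) + 0.01 * np.random.randn(t.size)
tc, vc = cull_for_plot(t, v)
print(len(tc))   # roughly 4000 points instead of 1,000,000
```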
MOMAU004 | Database Foundation for the Configuration Management of the CERN Accelerator Controls Systems | controls, database, interface, timing | 48 |
The Controls Configuration Database (CCDB) and its interfaces have been developed over the last 25 years to become what is nowadays the basis for the configuration management of the controls system for all accelerators at CERN. The CCDB contains the data for all configuration items and their relationships required for the correct functioning of the controls system. The configuration items are quite heterogeneous and cover different areas of the controls system, ranging from 3000 front-end computers and 75 000 software devices allowing remote control of the accelerators, to the valid states of the accelerator timing system. The article describes the different areas of the CCDB, their interdependencies, and the challenges of establishing the data model for such a diverse configuration management database serving a multitude of clients. The CCDB tracks the life of the configuration items by allowing their clear identification, triggering change management processes, and providing status accounting and audits. This necessitated the development and implementation of a combination of tailored processes and tools. The controls system is data-driven: the data stored in the CCDB is extracted and propagated to the controls hardware in order to configure it remotely. Therefore, special attention is paid to data security and data integrity, as an incorrectly configured item can have a direct impact on the operation of the accelerators.
Slides MOMAU004 [0.404 MB]
Poster MOMAU004 [6.064 MB]
MOMAU005 | Integrated Approach to the Development of the ITER Control System Configuration Data | database, controls, network, status | 52 |
The ITER control system (CODAC) is steadily moving into the implementation phase. A design guidelines handbook and a software development toolkit, named CODAC Core System, were produced in February 2011. They are ready to be used off-site, in the ITER domestic agencies and associated industries, in order to develop the first control "islands" of the various ITER plant systems. In addition to the work done off-site, there is a wealth of I&C-related data developed centrally at ITER but scattered across various sources. These data include I&C design diagrams, 3-D data, volume allocation, inventory control, administrative data, planning and scheduling, tracking of deliveries and associated documentation, requirements control, etc. All these data have to be kept coherent and up to date, with various types of cross-checks and procedures imposed on them. A "plant system profile" database, currently under development at ITER, represents an effort to provide an integrated view of the I&C data. Supported by platform-independent data modeling, done with the help of XML Schema, it accumulates all the data in a single hierarchy and provides different views for different aspects of the I&C data. The database is implemented using MS SQL Server and a Java-based web interface. Import and data-linking services are implemented using Talend software, and report generation is done with the help of MS SQL Server Reporting Services. This paper reports on the first implementation of the database, the kinds of data stored so far, typical workflows and processes, and directions of further work.
Slides MOMAU005 [0.384 MB]
Poster MOMAU005 [0.692 MB]
MOMAU007 | How to Maintain Hundreds of Computers Offering Different Functionalities with Only Two System Administrators | controls, Linux, database, EPICS | 56 |
The Controls section at PSI is responsible for the control systems of four accelerators: the two proton accelerators HIPA and PROSCAN, the Swiss Light Source (SLS) and the Free Electron Laser (SwissFEL) Test Facility. On top of that, we have 18 additional SLS beamlines to control. The control system is mainly composed of the so-called Input Output Controllers (IOCs), which require a complete and complex computing infrastructure in order to be booted, developed, debugged and monitored. This infrastructure currently consists mainly of Linux computers such as boot servers, port servers and configuration servers (called save-and-restore servers). Overall, the constellation of computers and servers which compose the control system amounts to about five hundred Linux computers, which can be split into 38 different configurations based on the work each of these systems needs to provide. For the administration of all this we employ only two system administrators, who are responsible for the installation, configuration and maintenance of those computers. This paper shows which tools are used to tackle this difficult task, such as Puppet (an open-source Linux tool we further adapted) and many in-house developed tools offering an overview of the computers, their installation status, and the relations between the different servers and computers.
Slides MOMAU007 [0.384 MB]
Poster MOMAU007 [0.708 MB]
MOMAU008 | Integrated Management Tool for Controls Software Problems, Requests and Project Tasking at SLAC | controls, status, HOM, feedback | 59 |
The Controls Department at SLAC, with its service center model, continuously receives engineering requests to design, build and support controls for accelerator systems lab-wide. Each customer request can vary in complexity from installing a minor feature to enhancing a major subsystem. Departmental accelerator improvement projects, along with DOE-approved construction projects, also contribute heavily to the workload. These various customer requests and projects, paired with ongoing operational maintenance and problem reports, place a demand on the department that usually exceeds the capacity of available resources. An integrated, centralized repository of all problems, requests, and project tasks, available to customers, operators, managers, and engineers alike, is essential to capture, communicate, prioritize, assign, schedule, track the progress of, and finally commission all work components. The Controls software group has recently integrated its request/task management into its online problem-tracking tool CATER ("Comprehensive Accelerator Tool for Enhancing Reliability"). This paper discusses the new integrated problem/request/task management tool: its workflow, its reporting capability, and its many benefits.
Slides MOMAU008 [0.083 MB]
Poster MOMAU008 [1.444 MB]
MOMMU012 | A Digital Base-band RF Control System | controls, FPGA, diagnostics, operation | 82 |
Funding: Supported by DFG through CRC 634.
The analog RF control system of the S-DALINAC has been replaced by a new digital system. The new hardware consists of an RF module and an FPGA board that have been developed in-house. A self-developed CPU implemented in the FPGA executes the control algorithm and allows the algorithm to be changed without time-consuming synthesis. Another microcontroller connects the FPGA board to a standard PC server via CAN bus. This connection is used to adjust control parameters as well as to send commands from the RF control system to the cavity tuner power supplies. The PC runs Linux and an EPICS IOC. The latter is connected to the CAN bus with a device support that uses the SocketCAN network stack included in recent Linux kernels, making the IOC independent of the CAN controller hardware. A diagnostic server streams signals from the FPGAs to clients on the network. Clients used for diagnosis include a software oscilloscope as well as a software spectrum analyzer. The parameters of the controllers can be changed with Control System Studio. We will present the architecture of the RF control system as well as the functionality of its components from a control system developer's point of view.
Slides MOMMU012 [0.087 MB]
Poster MOMMU012 [33.544 MB]
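SocketCAN's appeal, hinted at in the abstract, is that a CAN interface looks like an ordinary network socket, so the same code runs regardless of the CAN controller. A minimal Python sketch of reading raw frames follows; the real device support is C inside an EPICS IOC, and the interface name "can0" plus the Linux-only socket constants are assumptions.

```python
import socket
import struct

# Classic SocketCAN frame layout: can_id (u32), dlc (u8), 3 pad bytes, 8 data bytes
CAN_FRAME_FMT = "<IB3x8s"

def read_frames(interface="can0"):
    """Yield (can_id, payload) pairs from a raw SocketCAN socket (Linux only)."""
    s = socket.socket(socket.AF_CAN, socket.SOCK_RAW, socket.CAN_RAW)
    s.bind((interface,))
    while True:
        frame = s.recv(16)                      # raw CAN frames are 16 bytes
        can_id, dlc, data = struct.unpack(CAN_FRAME_FMT, frame)
        yield can_id, data[:dlc]

# Usage (requires a configured can0 interface):
# for can_id, payload in read_frames():
#     print(hex(can_id), payload.hex())
```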
MOPKN006 | Algorithms and Data Structures for the EPICS Channel Archiver | EPICS, hardware, operation, database | 94 |
Diamond Light Source records 3 GB of process data per day and has a 15 TB archive online with the EPICS Channel Archiver. This paper describes recent modifications to the software to improve performance and usability. The file-size limit on the R-Tree index has been removed, allowing all archived data to be searchable from one index. A decimation system works directly on compressed archives from a backup server and produces multi-rate reduced data with minimum and maximum values to support time-efficient summary reporting and range queries. The XML-RPC interface has been extended to provide binary data transfer to clients needing large amounts of raw data.
Poster MOPKN006 [0.133 MB]
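The multi-rate reduced data with minimum and maximum values can be pictured as a cascade of tiers, each bucket carrying the (min, max, mean) of the tier below, so a summary query reads the coarsest tier that still resolves the requested interval. The numpy sketch below assumes a reduction factor of 10 and invented names; it does not reproduce the Channel Archiver's actual storage format.

```python
import numpy as np

def reduce_tier(mins, maxs, means, factor=10):
    """Collapse per-sample (or per-bucket) min/max/mean arrays into
    len(...)//factor coarser buckets."""
    n = (len(means) // factor) * factor          # drop the ragged tail
    shape = lambda a: np.asarray(a[:n]).reshape(-1, factor)
    return (shape(mins).min(axis=1),             # bucket min from child mins
            shape(maxs).max(axis=1),             # bucket max from child maxs
            shape(means).mean(axis=1))           # bucket mean from child means

raw = np.random.randn(100_000)
tier1 = reduce_tier(raw, raw, raw)   # first tier built from raw samples
tier2 = reduce_tier(*tier1)          # second tier built from the first
print(len(tier1[0]), len(tier2[0]))  # 10000 1000
```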
MOPKN010 | Database and Interface Modifications: Change Management Without Affecting the Clients | database, controls, interface, operation | 106 |
The first Oracle-based Controls Configuration Database (CCDB) was developed in 1986, by which the controls system of CERN's Proton Synchrotron became data-driven. Since then, this mission-critical system has evolved tremendously, going through several generational changes in terms of the increasing complexity of the control system, software technologies and data models. Today, the CCDB covers the whole CERN accelerator complex and satisfies a much wider range of functional requirements. Despite its online usage, everyday operations of the machines must not be disrupted. This paper describes our approach to dealing with change while ensuring continuity. How do we manage the database schema changes? How do we take advantage of the latest web-deployed application development frameworks without alienating the users? How do we minimize the impact on the dependent systems connected to the databases through various APIs? In this paper we will provide our answers to these questions, and to many more.
MOPKN013 | Image Acquisition and Analysis for Beam Diagnostics Applications of the Taiwan Photon Source | EPICS, GUI, controls, linac | 117 |
The design and implementation of image acquisition and analysis is in progress for the Taiwan Photon Source (TPS) diagnostic applications. The optical system contains a screen, a lens, and a lighting system. A CCD camera with a Gigabit Ethernet interface (GigE Vision) will be the standard image acquisition device. Image acquisition will be done on an EPICS IOC via PV channels, and the image properties will be analyzed using MATLAB tools to evaluate the beam profile (σ), beam size, position, and tilt angle. An EPICS IOC integrated with MATLAB as a data processing system can be used not only for image analysis but also for many other types of equipment data processing. Progress of the project will be summarized in this report.
Poster MOPKN013 [0.816 MB]
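As a rough illustration of the analysis step the abstract assigns to MATLAB, the beam centroid, size (σ) and tilt can be estimated from weighted image moments; here is a numpy sketch on a synthetic Gaussian spot. The crude background subtraction and the moment formulas are generic textbook choices, not necessarily what TPS uses.

```python
import numpy as np

def profile_stats(image: np.ndarray):
    """Return (cx, cy, sigma_x, sigma_y, tilt_deg) from image moments."""
    img = image.astype(float)
    img -= img.min()                              # crude background subtraction
    y, x = np.indices(img.shape)
    w = img.sum()
    cx, cy = (img * x).sum() / w, (img * y).sum() / w
    sx = np.sqrt((img * (x - cx) ** 2).sum() / w)
    sy = np.sqrt((img * (y - cy) ** 2).sum() / w)
    sxy = (img * (x - cx) * (y - cy)).sum() / w
    tilt = 0.5 * np.arctan2(2 * sxy, sx**2 - sy**2)  # tilt of the beam ellipse
    return cx, cy, sx, sy, np.degrees(tilt)

# Synthetic Gaussian beam spot for a quick check
yy, xx = np.mgrid[0:200, 0:200]
spot = np.exp(-((xx - 90) ** 2 / (2 * 15**2) + (yy - 110) ** 2 / (2 * 8**2)))
print(profile_stats(spot))   # centroid ~ (90, 110), sigma_x ~ 15, sigma_y ~ 8
```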
MOPKN020 | The PSI Web Interface to the EPICS Channel Archiver | interface, EPICS, controls, operation | 141 |
The EPICS Channel Archiver is a powerful tool that collects control system data from thousands of EPICS process variables, at rates of many Hertz each, into an archive for later retrieval. [1] The Channel Archiver version 2 package provides a Java application for graphical data retrieval and a command-line tool for data extraction into different file formats. At the Paul Scherrer Institute we wanted to retrieve the archived data through a web interface, with flexible retrieval functions and the ability to exchange data references by e-mail. This web interface has been implemented by the PSI controls group and has now been in operation for several years. This presentation will highlight the special features of this PSI web interface to the EPICS Channel Archiver.
[1] http://sourceforge.net/apps/trac/epicschanarch/wiki
Poster MOPKN020 [0.385 MB]
MOPKN021 | Asynchronous Data Change Notification between Database Server and Accelerator Control Systems | database, controls, target, EPICS | 144 |
Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy.
Database data change notification (DCN) is a commonly used feature, but not all database management systems (DBMSs) provide an explicit DCN mechanism. Even for those DBMSs which support DCN (such as Oracle and MS SQL Server), some server-side and/or client-side programming may be required to make the DCN system work. This makes the setup of DCN between a database server and interested clients tedious and time-consuming. In accelerator control systems there are many well-established client/server software architectures (such as CDEV, EPICS, and ADO) that can be used to implement data reflection servers that transfer data asynchronously to any client using the standard SET/GET API. This paper describes a method for using such a data reflection server to set up asynchronous DCN (ADCN) between a DBMS and its clients. This method works well for all DBMSs that provide database trigger functionality.
Poster MOPKN021 [0.355 MB]
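A minimal sketch of the trigger-based idea, with SQLite standing in for the production DBMS: a trigger copies every change into a queue table, and the data reflection server drains the queue and pushes updates to its clients. All table and function names below are invented.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE settings(name TEXT PRIMARY KEY, value REAL);
CREATE TABLE change_queue(id INTEGER PRIMARY KEY, name TEXT, value REAL);
CREATE TRIGGER notify_update AFTER UPDATE ON settings
BEGIN
    INSERT INTO change_queue(name, value) VALUES (NEW.name, NEW.value);
END;
""")
db.execute("INSERT INTO settings VALUES ('magnet_current', 1.0)")

def drain_queue(conn):
    """What the reflection server would do: pop queued changes and
    publish them to subscribed clients (here, just print)."""
    rows = conn.execute("SELECT id, name, value FROM change_queue").fetchall()
    for rid, name, value in rows:
        print(f"push to clients: {name} = {value}")
        conn.execute("DELETE FROM change_queue WHERE id = ?", (rid,))

db.execute("UPDATE settings SET value = 2.5 WHERE name = 'magnet_current'")
drain_queue(db)   # -> push to clients: magnet_current = 2.5
```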
MOPKN029 | Design and Implementation of the CEBAF Element Database | database, interface, controls, hardware | 157 |
Funding: Authored by Jefferson Science Associates, LLC under U.S. DOE Contract No. DE-AC05-06OR23177. The U.S. Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce this manuscript for U.S. Government purposes.
With the inauguration of the CEBAF Element Database (CED) in Fall 2010, Jefferson Lab computer scientists have taken a first step toward the eventual goal of a model-driven accelerator. Once fully populated, the database will be the primary repository of information used for everything from generating lattice decks to booting front-end computers to building controls screens. A particular requirement influencing the CED design is that it must provide consistent access to not only present, but also future, and eventually past, configurations of the CEBAF accelerator. To accomplish this, an introspective database schema was designed that allows new elements, element types, and element properties to be defined on the fly without changing the table structure. When used in conjunction with the Oracle Workspace Manager, it allows users to seamlessly query data from any time in the database history with exactly the same tools as they use for querying the present configuration. Users can also check out workspaces and use them as staging areas for upcoming machine configurations. All access to the CED is through a well-documented API that is translated automatically from the original C++ into native libraries for scripting languages such as Perl, PHP, and Tcl, making access to the CED easy and ubiquitous.
Poster MOPKN029 [5.239 MB]
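The introspective schema described above resembles an entity-attribute-value layout, in which element types and properties are rows rather than columns, so defining new ones needs no ALTER TABLE. A toy SQLite sketch of that idea follows; the real CED runs on Oracle, the Workspace Manager history features are not modelled, and the element name is hypothetical.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE element_type(id INTEGER PRIMARY KEY, name TEXT UNIQUE);
CREATE TABLE element(id INTEGER PRIMARY KEY, type_id INTEGER, name TEXT UNIQUE);
CREATE TABLE property(id INTEGER PRIMARY KEY, type_id INTEGER, name TEXT);
CREATE TABLE value(element_id INTEGER, property_id INTEGER, value TEXT);
""")
# A new element type and property are defined with plain INSERTs,
# never with a schema change:
db.execute("INSERT INTO element_type(name) VALUES ('Quadrupole')")
db.execute("INSERT INTO element(type_id, name) VALUES (1, 'MQA1L02')")  # hypothetical name
db.execute("INSERT INTO property(type_id, name) VALUES (1, 'length_m')")
db.execute("INSERT INTO value VALUES (1, 1, '0.5')")

row = db.execute("""
    SELECT e.name, p.name, v.value
    FROM value v JOIN element  e ON e.id = v.element_id
                 JOIN property p ON p.id = v.property_id
""").fetchone()
print(row)  # ('MQA1L02', 'length_m', '0.5')
```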
MOPKS019 | Electro Optical Beam Diagnostics System and its Control at PSI | laser, controls, electron, electronics | 195 |
Electro-optical (EO) techniques are very promising non-invasive methods for measuring extremely short (sub-picosecond) electron bunches. A prototype of an EO Bunch Length Monitoring System (BLMS) for the future SwissFEL facility has been created at PSI. The core of this system is an advanced fiber laser unit with pulse-generating and mode-locking electronics. The system is integrated into the EPICS-based PSI controls, which significantly simplifies its operation. The paper presents the main components of the BLMS and its performance.
Poster MOPKS019 [0.718 MB]
MOPKS024 | A Digital System for Longitudinal Emittance Blow-Up in the LHC | controls, feedback, FPGA, synchrotron | 215 |
In order to preserve beam stability above injection energy in the LHC, longitudinal emittance blow-up is performed during the energy ramp by injecting band-limited noise around the synchrotron frequency into the beam phase loop. The noise is generated continuously in software and streamed digitally into the DSP of the beam control system. In order to achieve reproducible results, a feedback system acting on the observed average bunch length controls the strength of the excitation, allowing the operator to simply set a target bunch length. The frequency spectrum of the excitation depends on the desired bunch length and, as it must follow the evolution of the synchrotron frequency spread through the ramp, it is automatically calculated by the LHC settings management software from the energy and RF voltage. The system has been routinely used in LHC operation since August 2010. We present here the details of the implementation in software, FPGA firmware and DSP code, as well as some results with beam.
Poster MOPKS024 [0.467 MB]
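A toy sketch of the two ingredients named in the abstract: band-limited noise around the synchrotron frequency, and a slow feedback scaling the excitation from the measured average bunch length. The gains, frequencies and the bunch-length response model are invented for illustration and bear no relation to the actual LHC parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def band_limited_noise(n, fs, f_lo, f_hi):
    """White noise filtered to the band [f_lo, f_hi] Hz with an FFT brick wall."""
    spec = np.fft.rfft(rng.standard_normal(n))
    f = np.fft.rfftfreq(n, d=1.0 / fs)
    spec[(f < f_lo) | (f > f_hi)] = 0.0
    return np.fft.irfft(spec, n)

gain, target, measured = 1.0, 1.25e-9, 1.00e-9   # bunch lengths in seconds
for step in range(5):
    noise = gain * band_limited_noise(4096, fs=10_000, f_lo=20, f_hi=26)
    # Stand-in plant model: blow-up rate grows with excitation power
    measured += 5e-12 * np.mean(noise**2) * 1e6
    # Simple multiplicative feedback toward the target bunch length
    gain *= np.clip(target / measured, 0.5, 2.0)
    print(f"step {step}: bunch length {measured * 1e9:.3f} ns, gain {gain:.2f}")
```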
MOPKS028 | Using TANGO for Controlling a Microfluidic System with Automatic Image Analysis and Droplet Detection | TANGO, device-server, controls, interface | 223 |
Microfluidics allows one to manipulate small quantities of fluids, using channel dimensions of several micrometers. At CEA/LIONS, microfluidic chips are used to produce calibrated complex microdrops. This technique requires only a small volume of chemicals, but it requires a number of accurate electronic devices such as motorized syringes, valve and pressure sensors, and fast-frame-rate video cameras coupled to microscopes. We use the TANGO control system for all the heterogeneous equipment in the microfluidics experiments and for video acquisition. We have developed a set of tools that perform the image acquisition and the shape detection of droplets, whose size, number, and speed can be determined almost in real time. Using TANGO, we are able to provide feedback to the actuators in order to adjust the microfabrication parameters and the timing of droplet formation.
Poster MOPKS028 [1.594 MB]
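One plausible way to implement the droplet shape detection described above is OpenCV's Hough circle transform; the sketch below runs on a synthetic frame. The parameters are illustrative guesses, and the surrounding TANGO device server is not shown.

```python
import cv2
import numpy as np

def detect_droplets(gray_frame: np.ndarray):
    """Return (x, y, radius) for each droplet-like circle in a grayscale frame."""
    blurred = cv2.medianBlur(gray_frame, 5)       # suppress pixel noise
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.2,
                               minDist=20, param1=100, param2=30,
                               minRadius=5, maxRadius=60)
    return [] if circles is None else circles[0].tolist()

# Synthetic test frame: one bright droplet on a dark background
frame = np.zeros((200, 200), np.uint8)
cv2.circle(frame, (100, 100), 30, 255, -1)
print(detect_droplets(frame))   # roughly [(100.0, 100.0, 30.0)]
```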
MOPKS029 | The CODAC Software Distribution for the ITER Plant Systems | controls, EPICS, database, operation | 227 |
Most of the systems that constitute the ITER plant will be built and supplied by the seven ITER domestic agencies. These plant systems will require their own instrumentation and control (I&C), which will be procured by the various suppliers. To improve the homogeneity of these plant system I&Cs, the CODAC group, which is in charge of the ITER control system, is promoting standardized solutions at the project level and makes available, in support of these standards, the software for the development and testing of the plant system I&C. The CODAC Core System is built by the ITER Organization and distributed to all ITER partners. It includes the ITER standard operating system, RHEL, and the ITER standard control framework, EPICS, as well as some ITER-specific tools, mostly for configuration management, and ITER-specific software modules, such as drivers for standard I/O boards. A process for distribution and support has been in place since the first release in February 2010 and has been continuously improved to support the development and distribution of the subsequent versions.
Poster MOPKS029 [1.209 MB]
MOPMN004 | An Operational Event Announcer for the LHC Control Centre Using Speech Synthesis | controls, timing, interface, operation | 242 |
The LHC island of the CERN Control Centre is a busy working environment with many status displays and running software applications. An audible event announcer was developed in order to provide a simple and efficient method of notifying the operations team of events occurring within the many subsystems of the accelerator. The LHC Announcer uses speech synthesis to report messages based upon data received from multiple sources. General accelerator information such as injections, beam energies and beam dumps is derived from data received from the LHC Timing System. Additionally, a software interface is provided that allows other surveillance processes to send messages to the Announcer using the standard control system middleware. Events are divided into categories which the user can enable or disable depending upon their interest. Use of the LHC Announcer is not limited to the Control Centre and it is intended to be available to a wide audience, both inside and outside CERN. To accommodate this, it was designed to require no special software beyond a standard web browser. This paper describes the design of the LHC Announcer and how it is integrated into the LHC operational environment.
Poster MOPMN004 [1.850 MB]
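The category filtering can be pictured as a small dispatch loop between the event sources and a speech backend. This sketch assumes events arrive as (category, message) pairs; speak() is a stand-in for whatever speech synthesis engine the Announcer actually uses, and the category names are invented.

```python
from queue import Queue

# Per-user category switches (names are illustrative only)
enabled = {"beam_dump": True, "injection": True, "cryo": False}
events: Queue = Queue()

def speak(text: str) -> None:
    print(f"[announcer] {text}")   # replace with a real TTS backend

def announcer_loop() -> None:
    """Drain pending events, voicing only those in enabled categories."""
    while not events.empty():
        category, message = events.get()
        if enabled.get(category, False):
            speak(message)

events.put(("injection", "Injection of beam 1 complete"))
events.put(("cryo", "Cryo maintain in sector 34"))   # filtered out
announcer_loop()
```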
MOPMN009 | First Experience with the MATLAB Middle Layer at ANKA | controls, EPICS, interface, alignment | 253 |
The MATLAB Middle Layer has been adapted for use at ANKA and was finally commissioned in March 2011. It is used for accelerator physics studies and regular tasks like beam-based alignment and response matrix analysis using LOCO. Furthermore, we intend to study the MATLAB Middle Layer as the default orbit correction tool for user operation. We will report on the experience gained during the commissioning process and present the latest results obtained while using the MATLAB Middle Layer for machine studies.
Poster MOPMN009 [0.646 MB]
MOPMN012 | The Electronic Logbook for LNL Accelerators | experiment, ion, Linux, booster | 260 |
In spring 2009, all run-time data concerning the particle accelerators at LNL (Laboratori Nazionali di Legnaro) were still recorded mainly on paper. TANDEM and its negative-ion source data were logged in a large-format paper logbook, while for the ALPI booster and the PIAVE injector with its positive ECR source a number of independent paper notebooks were used, together with plain data files containing raw instant snapshots of each superconducting RF accelerator. At that time the decision was taken to build a new tool for the general electronic registration of accelerator run-time data. The result of this effort, the LNL electronic logbook, is presented here.
Poster MOPMN012 [8.543 MB]
MOPMN018 | Toolchain for Online Modeling of the LHC | optics, controls, simulation, framework | 277 |
The control of high-intensity beams in a high-energy, superconducting machine with complex optics like the CERN Large Hadron Collider (LHC) is challenging not only from the design aspect but also for operation. To support LHC beam commissioning, operation and luminosity production, efforts were recently devoted to the design and implementation of a software infrastructure aimed at using the computing power of the beam dynamics code MAD-X within the framework of the Java-based LHC control and measurement environment. Alongside interfacing to measurement data as well as to settings of the control system, the best knowledge of the machine aperture and optics models is provided. In this paper, we present the status of the toolchain and illustrate how it has been used during commissioning and operation of the LHC. Possible future implementations are also discussed.
Poster MOPMN018 [0.562 MB]
MOPMN029 | Spiral2 Control Command: First High-level Java Applications Based on the OPEN-XAL Library | database, controls, EPICS, ion | 308 |
The SPIRAL2 radioactive ion beam facility will be based on a superconducting driver providing deuteron or heavy-ion beams at different energies and intensities. Using the ISOL method, exotic nuclei beams will be sent either to new physics facilities or to the existing GANIL experimental areas. To tune this large range of beams, high-level applications will be developed mainly in Java. The choice of the OPEN-XAL application framework, developed at the Spallation Neutron Source (SNS), has proven to be very efficient and has greatly helped us to design our first software pieces for tuning the accelerator. The first part of this paper presents some new applications: "Minimisation", which aims at optimizing a section of the accelerator; a general-purpose application named "Hook" for interacting with equipment of any kind; and an application called "Profils" to visualize and control the SPIRAL2 beam wire harps. As tuning operation has to deal with configuration and archiving issues, databases are an effective way to manage data. Therefore, two databases are being developed to address these problems for the SPIRAL2 control system: one is in charge of device configuration upstream of the EPICS databases, while the other is in charge of accelerator configuration (lattice, optics and sets of values). The last part of this paper describes these databases and how Java applications will interact with them.
Poster MOPMN029 [1.654 MB]
MOPMS001 | The New Control System for the Vacuum of ISOLDE | vacuum, controls, interlocks, hardware | 312 |
The On-Line Isotope Mass Separator (ISOLDE) is a facility dedicated to the production of radioactive ion beams for nuclear and atomic physics. From the ISOLDE vacuum sectors to the pressurized-gas storage tanks there are up to five stages of pumping, with a total of more than one hundred pumps, including turbo-molecular, cryo, dry, membrane and oil pumps. The ISOLDE vacuum control system is critical: the volatile radioactive elements present in the exhaust gases and the high and ultra-high vacuum pressure specifications require a complex control and interlock system. This paper describes the re-engineering of the control system, developed using the CERN UNICOS-CPC framework. An additional challenge has been the first use of UNICOS-CPC in the vacuum domain. The process automation provides multiple operating modes (rough pumping, bake-out, high-vacuum pumping, regeneration for cryo-pumped sectors, venting, etc.). The control system is composed of local controllers driven by PLCs (logic, interlocks) and a SCADA application (operation, alarm monitoring and diagnostics).
Poster MOPMS001 [4.105 MB]
MOPMS002 | LHC Survey Laser Tracker Controls Renovation | laser, interface, hardware, controls | 316 |
The LHC survey laser tracker control system is based on an industrial software package (Axyz) from Leica Geosystems™ with an interface to Visual Basic 6.0™, which we used to automate the geometric measurements of the LHC magnets. As the Axyz package is no longer supported and the Visual Basic 6.0™ interface would need to be ported to Visual Basic .NET™, we have taken the decision to recode the automation application in LabVIEW™, interfacing to the PC-DMIS software proposed by Leica Geosystems. This presentation describes the existing equipment, interface and application, showing the reasons for our decision to move to PC-DMIS and LabVIEW. We present the experience with the first prototype and make a comparison with the legacy system.
Poster MOPMS002 [1.812 MB]
MOPMS003 | The Evolution of the Control System for the Electromagnetic Calorimeter of the Compact Muon Solenoid Experiment at the Large Hadron Collider | controls, detector, hardware, interface | 319 |
Funding: Swiss National Science Foundation (SNF).
This paper discusses the evolution of the Detector Control System (DCS) designed and implemented for the Electromagnetic Calorimeter (ECAL) of the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC), as well as the operational experience acquired during the LHC physics data-taking periods of 2010 and 2011. The current implementation in terms of functionality, together with the planned hardware upgrades, is presented. Furthermore, a project for reducing the long-term software maintenance, including a year-long detailed analysis of the existing applications, is put forward, and the outcomes which have informed the design decisions for the next generation of the CMS ECAL DCS software are described. The main goals for the new version are to minimize external dependencies, enabling smooth migration to new hardware and software platforms, and to maintain the existing functionality whilst substantially reducing the support and maintenance effort through homogenization, simplification and standardization of the control system software.
Poster MOPMS003 [3.508 MB]
MOPMS005 | The Upgraded Corrector Control Subsystem for the Nuclotron Main Magnetic Field | controls, power-supply, status, operation | 326 |
This report discusses the control subsystem for the 40 main magnetic field correctors, which is part of the superconducting synchrotron Nuclotron control system. The subsystem is used in static and dynamic modes (in the latter, the corrector current depends on the magnetic field value). Development of the subsystem is performed within the scope of the Nuclotron-NICA project. The principles of digital (PSMBus/RS-485 protocol) and analog control of the correctors' power supplies, current monitoring, and remote control of the subsystem via an IP network are also presented. The first results of the subsystem commissioning are given.
Poster MOPMS005 [1.395 MB]
MOPMS014 | GSI Operation Software: Migration from OpenVMS to Linux | Linux, operation, controls, linac | 351 |
The current operation software at GSI, controlling the linac, beam transfer lines, synchrotron and storage ring, has been developed over a period of more than two decades using OpenVMS, now running on Alpha workstations. The GSI accelerator facilities will serve as an injector chain for the new FAIR accelerator complex, for which a control system is currently being developed. To enable the reuse and integration of parts of the distributed GSI software system, in particular the linac operation software, within the FAIR control system, the corresponding software components must be migrated to Linux. Interoperability with FAIR controls applications is achieved by adding a generic middleware interface accessible from Java applications. For porting applications to Linux, a set of libraries and tools has been developed covering the necessary OpenVMS system functionality. Currently, the core applications and services are already ported or rewritten and functionally tested, but they are not yet in operational use. This paper presents the current status of the project and concepts for putting the migrated software into operation.
MOPMS021 | Detector Control System of the ATLAS Insertable B-Layer | detector, controls, monitoring, hardware | 364 |
To improve the tracking robustness and precision of the ATLAS inner tracker, an additional fourth pixel layer, called the Insertable B-Layer (IBL), is foreseen. It will be installed between the innermost present pixel layer and a new, smaller beam pipe and is presently under construction. As no access is available once it is installed in the experiment, a highly reliable control system is required. It has to supply the detector with all the entities required for operation and protect it at all times. Design constraints are the high power density inside the detector volume, the sensitivity of the sensors to heat-up, and the protection of the front-end electronics against transients. We present the architecture of the control system with an emphasis on the CO2 cooling system, the power supply system and the protection strategies. As we aim for common operation of the Pixel and IBL detectors, the integration of the IBL control system into the Pixel one is discussed as well.
MOPMS024 | Evolution of the Argonne Tandem Linear Accelerator System (ATLAS) Control System | controls, distributed, database, hardware | 371 |
Funding: This work was supported by the U.S. Department of Energy, Office of Nuclear Physics, under Contract No. DE-AC02-06CH11357.
Given that the Argonne Tandem Linac Accelerator System (ATLAS) recently celebrated its 25th anniversary, this paper will explore the past, present and future of the ATLAS control system and how it has evolved along with the accelerator and control system technology. ATLAS as we know it today originated with a tandem Van de Graaff in the 1960s. With the addition of the Booster section in the late 1970s came the first computerized control. ATLAS itself was placed into service on June 25, 1985, and was the world's first superconducting linear accelerator for ions. Since its dedication as a National User Facility, more than a thousand experiments by more than 2,000 users world-wide have taken advantage of the unique capabilities it provides. Today, ATLAS continues to be a user facility for physicists who study the particles that form the heart of atoms. Its most recent addition, CARIBU (Californium Rare Isotope Breeder Upgrade), creates special beams that feed into ATLAS. ATLAS is similar to a living organism, changing and responding to new technological challenges and research needs. As it continues to evolve, so does the control system: from the original days using a DEC PDP-11/34 computer and 2 CAMAC crates, to a DEC Alpha computer running Vsystem software and more than twenty CAMAC crates, to distributed computers and VME systems. Future upgrades that will continue to evolve the control system are also in the planning stages.
Poster MOPMS024 [2.845 MB]
MOPMS027 | Fast Beam Current Transformer Software for the CERN Injector Complex | hardware, GUI, real-time, timing | 382 |
The fast transfer-line BCTs in the CERN injector complex are undergoing a complete consolidation to eradicate obsolete, maintenance-intensive hardware. The corresponding low-level software has been designed to minimise the effect of identified error sources while providing remote diagnostics and calibration facilities. This paper will present the front-end and expert application software along with the results obtained.
Poster MOPMS027 [1.223 MB]
MOPMS034 | Software Renovation of CERN's Experimental Areas | controls, hardware, GUI, detector | 409 |
The experimental areas at CERN (AD, PS and SPS) have undergone a widespread electronics and software consolidation based on modern techniques, allowing them to be used for many years to come. This paper will describe the scale of the software renovation and how the issues were overcome in order to ensure complete integration into the respective control systems.
Poster MOPMS034 [1.582 MB]
MOPMS035 | A Beam Profiler and Emittance Meter for the SPES Project at INFN-LNL | diagnostics, EPICS, emittance, ion | 412 |
The beam diagnostics system currently in use in the superconducting linac at LNL has been upgraded for the SPES project. The control software has been rewritten using EPICS tools and a new emittance meter has been developed. The beam detector is based on wire grids, the IOC is implemented in a VME system running under VxWorks, and the graphical interface is based on CSS. The system is now in operation in the SPES Target Laboratory for the characterization of beams produced by the new ion source.
Poster MOPMS035 [0.367 MB]
MOPMS036 | Upgrade of the Nuclotron Extracted Beam Diagnostic Subsystem | controls, hardware, operation, high-voltage | 415 |
The subsystem is intended for measurement of the Nuclotron extracted beam parameters. Multiwire proportional chambers are used for transverse beam profile measurements at four points of the beam transfer line. Gas amplification values are tuned by adjusting the high-voltage power supplies. The extracted beam intensity is measured by means of an ionization chamber, a variable-gain current amplifier (DDPCA-300) and a voltage-to-frequency converter. The data are processed by an industrial PC with National Instruments DAQ modules. A client-server distributed application written in the LabVIEW environment allows operators to control the hardware and obtain measurement results over a TCP/IP network.
Poster MOPMS036 [1.753 MB]
MOPMS037 | A Customizable Platform for High-availability Monitoring, Control and Data Distribution at CERN | monitoring, controls, database, hardware | 418 |
In complex operational environments, monitoring and control systems are asked to satisfy ever more stringent requirements. In addition to reliability, the availability of the system has become crucial to accommodate tight planning schedules and increased dependencies on other systems. In this context, adapting a monitoring system to changes in its environment and meeting requests for new functionality are increasingly challenging. Combining maintainability and high availability within a portable architecture is the focus of this work. To meet these increased requirements, we present a new modular system developed at CERN. Using the experience gained from previous implementations, the new platform uses a multi-server architecture to allow patches and updates to be applied to the application without affecting its availability. The data acquisition can also be reconfigured without any downtime or potential data loss. The modular architecture builds on a core system that aims to be reusable for multiple monitoring scenarios, while keeping each instance as lightweight as possible. For both cost and future maintenance reasons, open and customizable technologies have been preferred.
MOPMU001 | Software and Capabilities of the Beam Position Measurement System for Novosibirsk Free Electron Laser | electron, FEL, pick-up, controls | 422 |
The system that measures the electron beam position in the Novosibirsk free electron laser using electrostatic pick-up electrodes is described. The measuring hardware and the main principles of measurement are considered. The capabilities and different operation modes of this system are described. In particular, the option of simultaneous detection of accelerated and decelerated electron beams at one pick-up station is considered. In addition, the operational features of this system in the different modes of FEL performance (the 1st, 2nd, and 3rd stages) are discussed.
Poster MOPMU001 [0.339 MB]
MOPMU011 | The Design Status of CSNS Experimental Control System | controls, EPICS, neutron, database | 446 |
To meet the increasing demand from the user community, China has decided to build a world-class spallation neutron source, called CSNS (China Spallation Neutron Source). It will provide users with a neutron scattering platform with high flux, wide wavelength range and high efficiency. CSNS construction is expected to start in 2011 and will last 6.5 years. The control system of CSNS is divided into an accelerator control system and an experimental control system. The CSNS experimental control system is based on the EPICS architecture, offering device operation and device debugging interfaces, communication between devices, environment monitoring, machine and personnel protection, an interface to the accelerator system, control system monitoring and database services. The whole control system is divided into four parts: the front control layer, the EPICS global control layer, the database, and the network services. The front control layer is based on YOKOGAWA PLCs and other controllers. The EPICS layer provides overall system control and information exchange. The embedded PLC YOKOGAWA RP61 is being considered as the communication node between the front layer and the EPICS layer. The database service provides the system configuration and historical data; based on the experience of BESIII, MySQL is an option. The system will be developed in Dongguan, Guangdong province, and in Beijing, so a VPN will be used to support the development. Currently, nine people are working on this system. The system design is complete and we are now working on a prototype system.
Poster MOPMU011 [0.224 MB]
MOPMU013 | Phase II and III: The Next Generation of CLS Beamline Control and Data Acquisition Systems | controls, EPICS, experiment, interface | 454 |
The Canadian Light Source is nearing the completion of its suite of Phase II beamlines and is in the detailed design of its Phase III beamlines. This paper presents an overview of the overall approach adopted by the CLS in the development of beamline control and data acquisition systems. Building on the experience of our first phase of beamlines, the CLS has continued to make extensive use of EPICS with EDM- and Qt-based user interfaces. Increasingly, interpreted languages such as Python are finding a place in the beamline control systems. Web-based environments such as ScienceStudio have also found a prominent place in the control system architecture as we move to tighter integration between data acquisition, visualization and data analysis.
MOPMU018 | Update On The Central Control System of TRIUMF's 500 MeV Cyclotron | controls, cyclotron, hardware, operation | 469 |
The central control system of TRIUMF's 500 MeV cyclotron was initially commissioned in the early 1970s. In 1987, a four-year project to upgrade the control system was planned and commenced. By 1997 this upgrade was complete and the new system was operating with increased reliability, functionality and maintainability. Since 1997, an evolution of incremental change has taken place, and functionality, reliability and maintainability have continued to improve. This paper provides an update on the present control system situation (2011) and possible future directions.
Poster MOPMU018 [4.613 MB]
MOPMU020 | The Control and Data Acquisition System of the Neutron Instrument BIODIFF | controls, neutron, detector, TANGO | 477 |
The neutron instrument BIODIFF is a single-crystal diffractometer for biological macromolecules that has been built in a cooperation between Forschungszentrum Jülich and the Technical University of Munich. It is located at the research reactor FRM-II in Garching, Germany, and is now in its commissioning phase. The control and data acquisition system of BIODIFF is based on the so-called "Jülich-Munich Standard", a set of standards and technologies commonly accepted at the FRM-II, which is based on the TACO control system developed by the ESRF. In the future, it is intended to introduce TANGO at the FRM-II. The image plate detector system of BIODIFF is already equipped with a TANGO subsystem that has been integrated into the overall TACO instrument control system.
MOPMU021 | Control System for Magnet Power Supplies for Novosibirsk Free Electron Laser | controls, power-supply, FEL, operation | 480 |
The control system for the magnetic system of the Novosibirsk free electron laser (FEL) is described. The characteristics and structure of the power supply system are presented. The power supply control system, based on embedded intelligent controllers with a CAN bus interface, is considered in detail. The control software structure and capabilities are described, as are software tools for power supply diagnostics.
Poster MOPMU021 [0.291 MB]
MOPMU024 | Status of ALMA Software | operation, controls, monitoring, framework | 487 |
The Atacama Large Millimeter/submillimeter Array (ALMA) will be a unique research instrument composed of at least 66 reconfigurable high-precision antennas, located on the Chajnantor plain in the Chilean Andes at an elevation of 5000 m. Each antenna contains instruments capable of receiving radio signals from 31.3 GHz up to 950 GHz. These signals are correlated inside a correlator and the spectral data are finally saved into the archive system together with the observation metadata. This paper describes the progress in the deployment of the ALMA software, with emphasis on the control software, which is built on top of the ALMA Common Software (ACS), a CORBA-based middleware framework. In order to support and maintain the installed software, it is essential to have a mechanism to align and distribute the same version of software packages across all systems. This is achieved rigorously with weekly regression tests and strict configuration control. A build farm to provide continuous integration and testing in simulation has been established as well. Given the large number of antennas, it is imperative to also have a monitoring system to allow trend analysis of each component in order to trigger preventive maintenance activities. A challenge for which we are preparing this year is testing the whole ALMA software in complete end-to-end operation, from proposal submission to data distribution to the ALMA Regional Centers. The experience gained during deployment, testing and operation support will be presented.
Poster MOPMU024 [0.471 MB]
MOPMU025 | The Implementation of the Spiral2 Injector Control System | EPICS, controls, emittance, diagnostics | 491 |
The EPICS framework was chosen for the SPIRAL2 project control system [1] in 2007. Four institutes are involved in the control system: GANIL (Caen), IPHC (Strasbourg), IRFU (Saclay) and LPSC (Grenoble), with the IRFU institute being in charge of the injector controls. This injector includes two ECR sources (one for deuterons and one for A/q = 3 ions) with their associated low-energy beam transport lines (LEBTs). The deuteron source is installed at Saclay and the A/q = 3 ion source at Grenoble. Both lines will merge before injecting the beam into an RFQ cavity for pre-acceleration. This paper presents the control system for both injector beamlines with their diagnostics (Faraday cups, ACCT/DCCT, profilers, emittance meters) and slits. This control relies on COTS VME boards and an EPICS software platform. The Modbus/TCP protocol is also used with COTS devices like power supplies and Siemens PLCs. The injector graphical user interface is based on EDM, while a port to CSS BOY is under evaluation; high-level applications are developed in Java. This paper also covers the EPICS development for the new industrial VME boards ADAS ICV108/178, with a sampling rate ranging from 100 kSamples/s to 1.2 MSamples/s. This new software is used for the beam intensity measurement by the diagnostics and for the acquisition of the sources.
[1] "Overview of the Spiral2 control system progress", E. Lécorché et al. (GANIL, Caen), this conference.
Poster MOPMU025 [1.036 MB]
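For a feel of the Modbus/TCP protocol mentioned in the abstract, here is a minimal raw "read holding registers" request built by hand; in practice a library or the EPICS device support would do this, and the host, port, unit and register addresses below are placeholders.

```python
import socket
import struct

def read_holding_registers(host, start, count, unit=1, port=502):
    """Send a Modbus/TCP function-3 request and decode the register values."""
    pdu = struct.pack(">BHH", 3, start, count)            # function 0x03 + addr + count
    # MBAP header: transaction id, protocol id (0), remaining length, unit id
    mbap = struct.pack(">HHHB", 1, 0, len(pdu) + 1, unit)
    with socket.create_connection((host, port), timeout=2.0) as s:
        s.sendall(mbap + pdu)
        resp = s.recv(260)
    byte_count = resp[8]                                   # data follows at offset 9
    return struct.unpack(f">{byte_count // 2}H", resp[9:9 + byte_count])

# Usage against a reachable PLC or power supply (placeholder address):
# values = read_holding_registers("192.168.0.10", start=0, count=4)
```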
MOPMU026 | A Readout and Control System for a CTA Prototype Telescope | controls, interface, framework, hardware | 494 |
CTA (Cherenkov Telescope Array) is an initiative to build the next-generation ground-based gamma-ray instrument. The CTA array will allow studies in the very-high-energy domain, in the range from a few tens of GeV to more than a hundred TeV, extending the existing energy coverage and increasing the sensitivity by a factor of 10 compared to current installations, while enhancing other aspects like angular and energy resolution. These goals require the use of at least three different sizes of telescopes. CTA will comprise two arrays (one in the Northern hemisphere and one in the Southern hemisphere) for full sky coverage and will be operated as an open observatory. A prototype for the Medium Size Telescope (MST) type is under development and will be deployed in Berlin by the end of 2011. The MST prototype will consist of the mechanical structure, drive system, active mirror control, four CCD cameras for prototype instrumentation, and a weather station. The ALMA Common Software (ACS) distributed control framework has been chosen for the implementation of the control system of the prototype. In the present approach, the interface to some of the hardware devices is achieved using the OPC Unified Architecture (OPC UA). A code-generation framework (ACSCG) has been designed for ACS modeling. This contribution describes the progress in the design and implementation of the control system for the CTA MST prototype.
Poster MOPMU026 [1.953 MB]
MOPMU027 | Controls System Developments for the ERL Facility | controls, interface, Linux, electron | 498 |
Funding: This manuscript has been authored by employees of Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy.
The BNL Energy Recovery Linac (ERL) is a high-beam-current, superconducting RF electron accelerator that is being commissioned to serve as a research and development prototype for a RHIC facility upgrade for electron-ion collisions (eRHIC). Key components of the machine include a laser, a photocathode, and a 5-cell superconducting RF cavity operating at a frequency of 703 MHz. Starting with a foundation based on the existing ADO software running on Linux servers and on the VME/VxWorks platforms developed for RHIC, we are developing a controls system that incorporates the wide range of hardware I/O interfaces needed for machine R&D. Details of the system layout, specifications, and user interfaces are provided.
Poster MOPMU027 [0.709 MB]
MOPMU032 | An EPICS IOC Builder | EPICS, hardware, database, controls | 506 |
An EPICS input/output controller (IOC) is typically assembled from a number of standard components, each with potentially quite complex hardware or software initialisation procedures, intermixed with a good deal of repetitive boilerplate code. Assembling and maintaining a complex IOC can be a difficult and error-prone process, particularly if the components are unfamiliar. The EPICS IOC builder is a Python library designed to automate the assembly of a complete IOC from a concise component-level description. The dependencies and interactions between components, as well as their detailed initialisation procedures, are automatically managed by the IOC builder through component description files maintained with the individual components. At Diamond Light Source we have a large library of components that can be assembled into EPICS IOCs. The IOC builder is also finding increasing use in helping non-expert users to assemble an IOC without specialist knowledge.
Poster MOPMU032 [3.887 MB]
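To make the idea concrete, here is a toy of what such a builder automates: each component declares its dependencies and its boilerplate startup lines, and the builder emits everything in dependency order. This sketches the concept only; the class names and example startup lines are invented and do not reproduce the real Diamond library's API.

```python
class Component:
    """A component contributes startup lines and names its dependencies."""
    registry = []

    def __init__(self, name, depends=(), lines=()):
        self.name, self.depends, self.lines = name, list(depends), list(lines)
        Component.registry.append(self)

def build_startup():
    """Emit st.cmd-style lines with every dependency initialised first."""
    done, out = set(), []

    def visit(c):
        if c.name in done:
            return
        for dep in c.depends:      # recurse so dependencies come first
            visit(dep)
        out.extend(c.lines)
        done.add(c.name)

    for c in Component.registry:
        visit(c)
    return "\n".join(out)

# Illustrative components; the startup lines are placeholders, not a real IOC
asyn = Component("asyn", lines=['# initialise asyn port (illustrative)',
                                'drvAsynIPPortConfigure("IP1", "camera:5064", 0, 0, 0)'])
stream = Component("stream", depends=[asyn],
                   lines=['epicsEnvSet("STREAM_PROTOCOL_PATH", ".")'])
print(build_startup())   # asyn lines appear before the stream lines
```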
MOPMU040 | REVOLUTION at SOLEIL: Review and Prospect for Motion Control | controls, TANGO, hardware, radiation | 525 |
At any synchrotron facility motors are numerous: they are significant actuators for the accelerators and the main actuators for the beamlines. Since 2003, the Electronic Control and Data Acquisition group at SOLEIL has defined a modular and reliable motion architecture integrating industrial products (Galil controllers, Midi Ingénierie and Phytron power boards). In parallel, the software control group has developed a set of dedicated Tango devices. At present, more than 1000 motors and 200 motion controller crates are in operation at SOLEIL. Aware that motion control is important in improving performance, as the positioning of optical systems and samples is a key element of any beamline, SOLEIL wants to upgrade its motion controllers in order to keep the facility at a high performance level and to be able to meet new requirements: better accuracy, complex trajectories and the coupling of multi-axis devices like hexapods. This project is called REVOLUTION (REconsider Various contrOLler for yoUr moTION).
Poster MOPMU040 [1.388 MB]
TUAAULT02 | Tango Collaboration and Kernel Status | TANGO, controls, CORBA, device-server | 533 |
This paper is divided into two parts. The first part summarises the main changes made within the Tango collaboration since the last ICALEPCS conference. This covers technical evolutions but also the new way our collaboration is managed. The second part focuses on the evolution of the so-called Tango event system (asynchronous communication between client and server). Since its beginning, this type of communication has been implemented within Tango using a CORBA Notification Service implementation called omniNotify. This system is currently being rewritten using ZeroMQ as the transport layer. The reasons for choosing ZeroMQ will be detailed, and initial feedback on the new implementation will be given.
Slides TUAAULT02 [1.458 MB] | ||
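The move from a CORBA notification service to ZeroMQ can be illustrated with the underlying publish/subscribe pattern, sketched here with pyzmq. The event name, port and payload are invented, and the real Tango event wire format carries much more than a bare value.

```python
# Minimal publish/subscribe sketch with pyzmq; illustrative only.
import threading
import time

import zmq

def publisher(ctx):
    pub = ctx.socket(zmq.PUB)
    pub.bind("tcp://*:5556")
    for i in range(100):            # keep publishing so a late-joining
        pub.send_multipart([b"sys/motor/1/position", str(i).encode()])
        time.sleep(0.05)            # subscriber still sees events

ctx = zmq.Context()
threading.Thread(target=publisher, args=(ctx,), daemon=True).start()

sub = ctx.socket(zmq.SUB)
sub.connect("tcp://localhost:5556")
sub.setsockopt(zmq.SUBSCRIBE, b"sys/motor/1/position")   # filter by event name
for _ in range(3):
    topic, value = sub.recv_multipart()
    print(topic.decode(), "=", value.decode())
```

The broker-less fan-out and per-topic filtering shown here are the features that replace the omniNotify push model.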
TUBAULT04 | Open Hardware for CERN’s Accelerator Control Systems | hardware, controls, FPGA, timing | 554 |
|
|||
The accelerator control systems at CERN will be renovated, and many electronics modules will be redesigned because the modules they replace can no longer be bought or use obsolete components. The modules used in the control systems are diverse: analog and digital I/O, level converters and repeaters, serial links and timing modules. Overall, around 120 modules are supported, used in systems such as beam instrumentation, cryogenics and power converters. Only a small percentage of the currently used modules are commercially available; most were specifically designed at CERN. The new developments are based on VITA and PCI-SIG standards such as FMC (FPGA Mezzanine Card), PCI Express and VME64x using transition modules. As the system-on-chip interconnect, the public-domain Wishbone specification is used. For the renovation it is considered imperative to have access to the full hardware design and firmware of each board, so that problems can be resolved quickly by CERN engineers or their collaborators. To attract other partners that are not necessarily part of the existing networks of particle physics, the new projects are developed in a fully 'Open' fashion. This allows for strong collaborations that will result in better and reusable designs. Within this Open Hardware project, new ways of working with industry are being tested, with the aim of proving that there is no contradiction between commercial off-the-shelf products and openness, and that industry can be involved at all stages, from design to production and support. | |||
Slides TUBAULT04 [7.225 MB] | ||
TUBAUIO05 | Challenges for Emerging New Electronics Standards for Physics | controls, hardware, interface, monitoring | 558 |
|
|||
Funding: Work supported by US Department of Energy Contract DE-AC03-76SF00515. A unique effort is underway between industry and the international physics community to extend the telecom industry's Advanced Telecommunications Computing Architecture (ATCA and MicroTCA) to meet future needs of the physics machine and detector community. New standard extensions for physics have now been designed to deliver unprecedented performance and high subsystem availability for accelerator controls, instrumentation and data acquisition. Key technical features include a unique out-of-band embedded standard Intelligent Platform Management Interface (IPMI) system to manage hot-swap module replacement and hardware-software failover. However, the acceptance of any new standard depends critically on the creation of strong collaborations among users and between the user and industry communities. For the relatively small, high-performance physics market to attract strong industry support, collaborations must converge on core infrastructure components, including hardware, timing, software and firmware architectures, and must strive for a much higher degree of interoperability between lab- and industry-designed hardware-software products than in past generations of standards. The xTCA platform presents a unique opportunity for future progress. This presentation will describe the status of the hardware-software extension plans; the technology advantages for machine controls and data acquisition systems; and examples of current collaborative efforts to help develop an industry base of generic ATCA and MicroTCA products in an open-source environment. [1] PICMG, the PCI Industrial Computer Manufacturers Group. [2] Lab representation on PICMG includes CERN, DESY, FNAL, IHEP, IPFN, ITER and SLAC. |
|||
Slides TUBAUIO05 [1.935 MB] | ||
TUCAUST01 | Upgrading the Fermilab Fire and Security Reporting System | hardware, interface, network, database | 563 |
|
|||
Funding: Operated by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the United States Department of Energy. Fermilab's homegrown fire and security system (known as FIRUS) is highly reliable and has been in use for nearly thirty years. The system has gone through some minor upgrades; however, none of them introduced significant, visible changes. In this paper, we present a major overhaul of the system that is now halfway complete. We discuss the use of Apple's OS X for the new GUI, upgrading the servers to use the Erlang programming language, and allowing limited access from iOS and Android-based mobile devices. |
|||
Slides TUCAUST01 [2.818 MB] | ||
TUCAUST02 | SARAF Control System Rebuild | controls, network, operation, proton | 567 |
|
|||
The Soreq Applied Research Accelerator Facility (SARAF) is a proton/deuteron RF superconducting linear accelerator, which was commissioned at Soreq NRC. SARAF will be a multi-user facility whose main activities will be neutron physics and applications, radio-pharmaceutical development and production, and basic nuclear physics research. The SARAF Accelerator Control System (ACS) was delivered while still in its development phase. Various issues limit our ability to use it as a basis for future phases of accelerator operation and need to be addressed. Recently, two projects have been launched in order to streamline the system and prepare it for the future development of the accelerator. This article describes the plans and goals of these projects, the preparations undertaken by the SARAF team, the design principles on which the control methodology will be based, and the architecture that is planned to be implemented. The rebuilding process will take place in two consecutive projects: the first will revamp the network architecture, and the second will involve the actual rebuilding of the control system applications, features and procedures. | |||
Slides TUCAUST02 [1.733 MB] | ||
TUCAUST03 | The Upgrade Programme for the ESRF Accelerator Control System | controls, TANGO, storage-ring, insertion | 570 |
|
|||
To reach the goals specified in the ESRF upgrade programme [1] for the new experiments to be built, the storage ring needs to be modified. The optics must be changed to allow up to seven-meter-long straight sections and canted undulator set-ups. Better beam stabilization and feedback systems are necessary for the planned nano-focus experiments. We are also undergoing a renovation and modernization phase to increase the lifetime of the accelerator and its control system. This paper summarises the major upgrade projects, such as the new BPM system, the fast orbit feedback and the ultra-small vertical emittance, and their implications for the control system. Ongoing modernization projects such as the solid-state radio-frequency amplifier and the HOM-damped cavities are described. Software upgrades of several sub-systems, such as vacuum and insertion devices, which are planned for this year or for the long shutdown period at the beginning of 2012, are covered as well. The final goal is to move to a Tango-only control system.
[1] http://www.esrf.fr/AboutUs/Upgrade |
|||
Slides TUCAUST03 [1.750 MB] | ||
TUCAUST04 | Changing Horses Mid-stream: Upgrading the LCLS Control System During Production Operations | controls, EPICS, linac, interface | 574 |
|
|||
The control system for the Linac Coherent Light Source (LCLS) began as a combination of new and legacy systems. When the LCLS began operating, the bulk of the facility was newly constructed, including a new control system using the Experimental Physics and Industrial Control System (EPICS) framework. The Linear Accelerator (LINAC) portion of the LCLS was repurposed for use by the LCLS and was controlled by the legacy system, which was built nearly 30 years ago. This system uses CAMAC, distributed 80386 microprocessors, and a central Alpha 6600 computer running the VMS operating system. This legacy control system has been successfully upgraded to EPICS during LCLS production operations while maintaining the 95% uptime required by the LCLS users. The successful transition was made possible by thorough testing in sections of the LINAC which were not in use by the LCLS. Additionally, a system was implemented to switch control of a LINAC section between new and legacy control systems in a few minutes. Using this rapid switching, testing could be performed during maintenance periods and accelerator development days. If any problems were encountered after a section had been switched to the new control system, it could be quickly switched back. | |||
Slides TUCAUST04 [0.183 MB] | ||
TUDAUST05 | The Laser MegaJoule Facility: Control System Status Report | controls, laser, target, experiment | 600 |
|
|||
The French Commissariat à l'Energie Atomique (CEA) is currently building the Laser MegaJoule (LMJ), a 176-beam laser facility, at the CEA CESTA laboratory near Bordeaux. It is designed to deliver about 1.4 MJ of energy to targets for high-energy-density physics experiments, including fusion experiments. The LMJ technological choices were validated with the LIL, a scale-1 prototype of one LMJ bundle. The construction of the LMJ building itself is now complete and the assembly of laser components is ongoing. A petawatt laser line is also being installed in the building. The presentation gives an overview of the general control system architecture and focuses on the hardware platform being installed on the LMJ, with the aim of hosting the different software applications for system supervision and sub-system control. This platform is based on virtualization techniques that were used to develop a high-availability, optimized hardware platform with high operating flexibility, including power consumption and cooling considerations. The platform is spread over two sites: the LMJ itself, of course, but also the software integration platform built outside the LMJ, intended to provide system integration of the various software control system components of the LMJ. | |||
Slides TUDAUST05 [9.215 MB] | ||
TURAULT01 | Summary of the 3rd Control System Cyber-security (CS)2/HEP Workshop | controls, network, experiment, detector | 603 |
|
|||
Over the last decade, modern accelerator and experiment control systems have increasingly been based on commercial off-the-shelf products (VME crates, programmable logic controllers (PLCs), supervisory control and data acquisition (SCADA) systems, etc.), on Windows or Linux PCs, and on communication infrastructures using Ethernet and TCP/IP. Despite the benefits coming with this (r)evolution, new vulnerabilities are inherited too: worms and viruses spread within seconds via the Ethernet cable, and attackers are becoming interested in control systems. The Stuxnet worm of 2010, directed against a particular Siemens PLC, is a unique example of a sophisticated attack against control systems [1]. Unfortunately, control PCs cannot be patched as fast as office PCs. Even worse, vulnerability scans at CERN using standard IT tools have shown that commercial automation systems lack fundamental security precautions: some systems crashed during the scan, others could easily be stopped or their process data altered [2]. The 3rd (CS)2/HEP workshop [3], held the weekend before the ICALEPCS2011 conference, was intended to raise awareness; exchange good practices, ideas and implementations; discuss what works and what does not, as well as the pros and cons; report on security events, lessons learned and successes; and give an update on the progress made at HEP laboratories around the world in securing control systems. This presentation will give a summary of the solutions planned and deployed, and of the experience gained.
[1] S. Lüders, "Stuxnet and the Impact on Accelerator Control Systems", FRAAULT02, ICALEPCS, Grenoble, October 2011; [2] S. Lüders, "Control Systems Under Attack?", O5_008, ICALEPCS, Geneva, October 2005. [3] 3rd Control System Cyber-Security CS2/HEP Workshop, http://indico.cern.ch/conferenceDisplay.py?confId=120418 |
|||
WEAAULT02 | Model Oriented Application Generation for Industrial Control Systems | controls, target, framework, factory | 610 |
|
|||
The CERN Unified Industrial Control System framework (UNICOS) is a software generation methodology that standardizes the design of slow process control applications [1]. A software factory, named the UNICOS Application Builder (UAB) [2], was introduced to provide a stable metamodel, a set of platform-independent models and platform-specific configurations against which code and configuration generation plug-ins can be written. Such plug-ins currently target PLC programming environments (Schneider UNITY and Siemens Step7 PLCs) as well as the Siemens WinCC Open Architecture SCADA system (previously known as ETM PVSS), and are being expanded to cover more and more aspects of process control systems. We present what constitutes the UAB metamodel and the models in use, how these models can be used to capture knowledge about industrial control systems, and how this knowledge can be leveraged to generate both code and configuration for a variety of target usages.
[1] H. Milcent et al, "UNICOS: AN OPEN FRAMEWORK", ICALEPCS2009, Kobe, Japan, (THD003) [2] M. Dutour, "Software factory techniques applied to Process Control at CERN", ICALEPCS 2007, Knoxville Tennessee, USA |
|||
Slides WEAAULT02 [1.757 MB] | ||
WEAAULT03 | A Platform Independent Framework for Statecharts Code Generation | controls, framework, CORBA, target | 614 |
|
|||
Control systems for telescopes and their instruments are reactive systems, very well suited to be modeled using the Statecharts formalism. The World Wide Web Consortium is working on a new standard called SCXML, which specifies an XML notation to describe Statecharts and provides a well-defined operational semantics for run-time interpretation of SCXML models. This paper presents a generic application framework for reactive non-real-time systems based on interpreted Statecharts. The framework consists of a model-to-text transformation tool and an SCXML interpreter. From UML state machine models, the tool generates the SCXML representation of the state machines and the application skeletons for the supported software platforms. An abstraction layer propagates events from the middleware to the SCXML interpreter, facilitating the support of different software platforms. This project benefits from the positive experience gained in several years of development of coordination and monitoring applications for the telescope control software domain using Model Driven Development technologies. | |||
Slides WEAAULT03 [2.179 MB] | ||
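The notion of run-time interpretation, as opposed to generated transition code, can be shown with a deliberately tiny sketch: the state machine is pure data and one generic loop executes it. Real SCXML adds hierarchy, parallel regions, guards and executable content; the states and events below are invented.

```python
# Data-driven state machine: a toy stand-in for an SCXML interpreter.
TRANSITIONS = {
    ("IDLE",     "start"): "TRACKING",
    ("TRACKING", "fault"): "ERROR",
    ("TRACKING", "stop"):  "IDLE",
    ("ERROR",    "reset"): "IDLE",
}

class Interpreter:
    def __init__(self, initial="IDLE"):
        self.state = initial

    def post(self, event):
        """Deliver an event; events with no matching transition are discarded."""
        nxt = TRANSITIONS.get((self.state, event))
        if nxt is not None:
            print(f"{self.state} --{event}--> {nxt}")
            self.state = nxt

sm = Interpreter()
for ev in ("start", "fault", "reset", "start", "stop"):
    sm.post(ev)
```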
WEBHAUST02 | Optimizing Infrastructure for Software Testing Using Virtualization | network, hardware, Windows, distributed | 622 |
|
|||
Virtualization technology and cloud computing have brought a paradigm shift in the way we utilize, deploy and manage computer resources. They allow fast deployment of multiple operating systems as containers on physical machines, which can be either discarded after use or snapshotted for later re-deployment. At CERN, we have been using virtualization/cloud computing to quickly set up virtual machines for our developers with pre-configured software, enabling them to test and deploy new versions of software patches for given applications. We have also been using the infrastructure for security analysis of control systems, as virtualization provides a degree of isolation in which control systems such as SCADA systems can be evaluated against simulated network attacks. This paper reports on the techniques used for security analysis, involving network configuration and isolation to prevent interference from other systems on the network. It also provides an overview of the technologies used to deploy such an infrastructure, based on VMware and the OpenNebula cloud management platform. | |||
Slides WEBHAUST02 [2.899 MB] | ||
WEBHMUST02 | Solid State Direct Drive RF Linac: Control System | controls, cavity, experiment, LLRF | 638 |
|
|||
Recently, a Solid State Direct Drive® concept for RF linacs has been introduced [1]. This new approach integrates the RF source, comprising multiple silicon carbide (SiC) solid-state RF modules [2], directly onto the cavity. Such an approach introduces new challenges for the control of these machines, namely the non-linear behavior of the solid-state RF modules and their direct coupling onto the cavity. In this paper we discuss further results of the experimental program [3,4] to integrate and control 64 RF modules on a λ/4 cavity. The next stage of experiments aims at gaining better feed-forward control of the system and at detailed system identification. For this purpose, a digital control board comprising a Virtex-6 FPGA, high-speed DACs/ADCs and trigger I/O has been developed, integrated into the experiment and used to control the system. The board design is consistently digital, aiming at direct processing of the signals. Power control within the cavity is achieved by outphasing control of two groups of the RF modules, which allows power control without degradation of the RF-module efficiency.
[1] Heid O., Hughes T., THPD002, IPAC10, Kyoto, Japan [2] Irsigler R. et al, 3B-9, PPC11, Chicago IL, USA [3] Heid O., Hughes T., THP068, LINAC10, Tsukuba, Japan [4] Heid O., Hughes T., MOPD42, HB2010, Morschach, Switzerland |
|||
Slides WEBHMUST02 [1.201 MB] | ||
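The outphasing scheme mentioned at the end of the abstract is easy to verify numerically: two module groups of constant amplitude A, driven at phase offsets of ±φ/2, combine to an amplitude of 2A·cos(φ/2), so the delivered power scales as cos²(φ/2) while every module keeps its fixed, efficient drive level. A minimal check (amplitudes and angles are arbitrary):

```python
# Outphasing: power control via the phase offset between two constant-amplitude groups.
import numpy as np

A = 1.0
for phi_deg in (0, 60, 90, 120, 180):
    phi = np.radians(phi_deg)
    combined = A * np.exp(1j * phi / 2) + A * np.exp(-1j * phi / 2)
    relative_power = abs(combined) ** 2 / (2 * A) ** 2   # 1.0 at full drive
    print(f"phi = {phi_deg:3d} deg -> relative power = {relative_power:.3f}")
```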
WEBHMULT03 | EtherBone - A Network Layer for the Wishbone SoC Bus | operation, hardware, Ethernet, timing | 642 |
|
|||
Today there are several System-on-a-Chip (SoC) bus systems. Typically, these buses are confined on-chip and rely on higher-level components to communicate with the outside world. Taking these systems a step further, we see the possibility of extending the reach of the SoC bus to remote FPGAs or processors. This leads to the idea of the EtherBone (EB) core, which connects a Wishbone (WB) Ver. 4 bus via a Gigabit Ethernet based network link to remote peripheral devices. EB acts as a transparent interconnect module towards attached WB bus devices. Address information and data from one or more WB bus cycles are preceded by a descriptive header and encapsulated in a UDP/IP packet. Because of this standard compliance, EB is able to traverse wide area networks and is therefore not bound to a geographic location. Due to the low-level nature of the WB bus, EB provides a sound basis for remote hardware tools like a JTAG debugger, an In-System Programmer (ISP), a boundary scan interface or a logic analyser module. EB was developed in the scope of the White Rabbit Timing Project (WR) at CERN and GSI/FAIR, which employs Gigabit Ethernet technology to communicate with memory-mapped slave devices. WR will use EB as a means to issue commands to its timing nodes and to control connected accelerator hardware. | |||
Slides WEBHMULT03 [1.547 MB] | ||
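The encapsulation described above is easy to picture: the address/data content of Wishbone cycles is packed behind a small header into a UDP datagram, which any IP network will carry. The sketch below is schematic only; the header layout, field widths and magic value are simplified placeholders, not the real EB record format.

```python
# Schematic EtherBone-style encapsulation of Wishbone writes in a UDP payload.
import struct

def eb_write_packet(writes, magic=0x4E6F):
    """Pack (address, data) 32-bit write cycles behind a toy 4-byte header."""
    header = struct.pack("!HBB", magic, 0x10, len(writes))  # magic, flags, count
    body = b"".join(struct.pack("!II", addr, data) for addr, data in writes)
    return header + body

pkt = eb_write_packet([(0x1000, 0xDEADBEEF), (0x1004, 0x12345678)])
print(pkt.hex())
# Shipping it is then an ordinary UDP send:
#   socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(pkt, (host, port))
```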
WEMAU002 | Coordinating Simultaneous Instruments at the Advanced Technology Solar Telescope | controls, experiment, interface, target | 654 |
|
|||
A key component of the Advanced Technology Solar Telescope control system design is the efficient support of multiple instruments sharing the light path provided by the telescope. The set of active instruments varies with each experiment, and possibly with each observation within an experiment. The flow of control for a typical experiment is traced through the control system to present the main aspects of the design that facilitate this behavior. Special attention is paid to the role of ATST's Common Services Framework in assisting the coordination of instruments with each other and with the telescope. | |||
Slides WEMAU002 [0.251 MB] | ||
Poster WEMAU002 [0.438 MB] | ||
WEMAU003 | The LabVIEW RADE Framework Distributed Architecture | LabView, framework, interface, distributed | 658 |
|
|||
For accelerator GUI applications there is a need for a rapid development environment to create expert tools or to prototype operator applications. Typically, a variety of tools are used, such as Matlab™ or Excel™, but their scope is limited, either because of their low flexibility or their limited integration into the accelerator infrastructure. In addition, having several tools obliges users to deal with different programming techniques and data structures. We have addressed these limitations by using LabVIEW™ and extending it with interfaces to C++ and Java, so that it fulfils the requirements of ease of use, flexibility and connectivity. We present the RADE framework and four applications based on it. Recent application requirements could only be met by implementing a distributed architecture with multiple servers running multiple services, which brought the additional advantages of redundant services, increased availability and transparent updates. We will present two applications requiring high availability. We also report on the issues encountered with such a distributed architecture and how we have addressed them. The latest extension of the framework is towards industrial equipment, with program templates and drivers for PLCs (Siemens and Schneider) and PXI with LabVIEW Real-Time. | |||
Slides WEMAU003 [0.157 MB] | ||
Poster WEMAU003 [2.978 MB] | ||
WEMAU007 | Turn-key Applications for Accelerators with LabVIEW-RADE | controls, framework, LabView, alignment | 670 |
|
|||
In the accelerator domain there is a need to integrate industrial devices and to create control and monitoring applications in an easy yet structured way. The LabVIEW-RADE framework provides the method and tools to implement these requirements, and also provides the essential integration of such applications into the CERN controls infrastructure. We present three examples of applications of different natures to show that the framework provides solutions at all three tiers of the control system: data access, process and supervision. The first example is a remotely controlled alignment system for the LHC collimators. The collimator alignment needs to be checked periodically; due to limited access for personnel, the instruments are mounted on a small train. The system is composed of a PXI crate housing the instrument interfaces and a PLC for the motor control. We report on the design, development and commissioning of the system. The second application is the renovation of the PS beam spectrum analyser, in which both hardware and software were renewed. The control application was ported from Windows to LabVIEW Real-Time, and we describe the technique used for full integration into the PS console. The third example is a control and monitoring application for the CLIC two-beam test stand. The application accesses CERN front-end equipment through the CERN middleware, CMW, and provides many different ways to view the data. We conclude with an evaluation of the framework based on the three examples and indicate new areas for improvement and extension. | |||
Poster WEMAU007 [2.504 MB] | ||
WEMAU011 | LIMA: A Generic Library for High Throughput Image Acquisition | detector, hardware, controls, interface | 676 |
|
|||
A significant number of 2D detectors are used in large-scale facilities' control systems for quantitative data analysis. In these devices a common set of control parameters and features can be identified, yet most manufacturers provide their own specific software control interfaces. A generic image acquisition library, called LIMA, has been developed at the ESRF for better compatibility and easier integration of 2D detectors into existing control systems. The LIMA design is driven by three main goals: i) independence from any particular control system, so that it can be shared by a wide scientific community; ii) a rich common set of functionalities (e.g., if a feature is not supported by the hardware, an alternative software implementation is provided); and iii) intensive use of events and multi-threaded algorithms for optimal exploitation of multi-core hardware resources, needed when controlling high-throughput detectors. LIMA currently supports the ESRF Frelon and Maxipix detectors as well as the Dectris Pilatus. Within a collaborative framework, the integration of the Basler GigE cameras is a contribution from SOLEIL. Although still under development, LIMA already features fast data saving in different file formats and basic data processing/reduction, such as software pixel binning/sub-imaging, background subtraction, and beam centroid and sub-image statistics calculation, among others. | |||
Slides WEMAU011 [0.073 MB] | ||
WEMAU012 | COMETE: A Multi Data Source Oriented Graphical Framework | TANGO, controls, toolkit, framework | 680 |
|
|||
Modern beamlines at SOLEIL need to browse a large amount of scientific data through multiple sources, which can be scientific measurement data files, databases or Tango [1] control systems. We created the COMETE [2] framework because we considered it necessary for end users to work with the same collection of widgets for all the different data sources being accessed, while for GUI application developers the complexity of data source handling had to be hidden. With these two requirements fulfilled, our development team is able to build high-quality, modular and reusable scientific GUI software with a consistent look and feel for end users. COMETE offers some key features to our developers: a smart refreshing service, an easy-to-use and succinct API, and data reduction functionality. This paper will present the work organization and the modern software architecture and design of the whole system. Then, the migration from our old GUI framework to COMETE will be detailed. The paper will conclude with an application example and a summary of the upcoming features of the framework.
[1] http://www.tango-controls.org [2] http://comete.sourceforge.net |
|||
Slides WEMAU012 [0.083 MB] | ||
WEMMU006 | Management Tools for Distributed Control System in KSTAR | controls, monitoring, operation, EPICS | 694 |
|
|||
The integrated control system of the Korea Superconducting Tokamak Advanced Research (KSTAR) device has been developed as a set of distributed control systems based on the Experimental Physics and Industrial Control System (EPICS). It has the essential roles of remote operation, supervision of the tokamak device and conduct of plasma experiments without any interruption; the availability of the control system therefore directly impacts the performance of the entire device. For non-interrupted operation of the KSTAR control system, we have developed a tool named Control System Monitoring (CSM) to monitor, in real time, the resources of the EPICS Input/Output Controller (IOC) servers (utilization of memory, CPU, disk and network, plus user-defined and system-defined processes), the soundness of the storage systems (storage utilization and status), the status of the network switches using the Simple Network Management Protocol (SNMP), the network connection status of every local control server using the Internet Control Message Protocol (ICMP), and the operating environment of the main control room and the computer room (temperature, humidity, water leaks). When abnormal conditions or faults are detected, the CSM raises alarms to the operators; in particular, if a critical fault related to the data storage occurs, the CSM sends short messages to the operator's mobile phone. In addition to the CSM, other tools for managing the integrated control system for KSTAR operation, such as Subversion for software version control and VMware for the virtualized IT infrastructure, will be introduced. | |||
Slides WEMMU006 [0.247 MB] | ||
Poster WEMMU006 [5.611 MB] | ||
WEMMU009 | Status of the RBAC Infrastructure and Lessons Learnt from its Deployment in LHC | controls, operation, database, software-architecture | 702 |
|
|||
The distributed control system for the LHC accelerator poses many challenges due to its inherent heterogeneity and highly dynamic nature. One of the important aspects is to protect the machine against unauthorised access and unsafe operation of the control system, from the low-level front-end machines up to the high-level control applications running in the control room. In order to prevent unauthorized access to the control system and accelerator equipment and to address possible security issues, the Role Based Access Control (RBAC) project was designed and developed at CERN, with a major contribution from the Fermilab laboratory. RBAC became an integral part of the CERN Controls Middleware (CMW) infrastructure, and it was deployed and commissioned in LHC operation in the summer of 2008, well before the first beam in the LHC. This paper presents the current status of the RBAC infrastructure, together with the outcome and experience gathered after its large-scale deployment in LHC operation. Moreover, we outline how the project has evolved over the last three years and give an overview of the major extensions introduced to improve its integration, stability and functionality. The paper also describes plans for future evolution and possible extensions, based on gathered user requirements and operational experience. | |||
Slides WEMMU009 [0.604 MB] | ||
Poster WEMMU009 [1.262 MB] | ||
WEMMU010 | Dependable Design Flow for Protection Systems using Programmable Logic Devices | hardware, simulation, FPGA, controls | 706 |
|
|||
Programmable Logic Devices (PLDs) such as Field Programmable Gate Arrays (FPGAs) are becoming more prevalent in protection and safety-related electronic systems. When employing such programmable logic devices, extra care and attention need to be taken: it is important to be confident that the final synthesis result, used to generate the bit-stream that programs the device, meets the design requirements. This paper describes how to maximize confidence using techniques such as formal methods, exhaustive Hardware Description Language (HDL) code simulation and hardware testing. An example is given for one of the critical functions of the Safe Machine Parameters (SMP) system, one of the key systems for the protection of the Large Hadron Collider (LHC) at CERN. The design flow is presented, in which the implementation phase is just one small element of the whole process. The techniques and tools presented can be applied to the implementation and verification of any PLD-based system. | |||
Slides WEMMU010 [1.093 MB] | ||
Poster WEMMU010 [0.829 MB] | ||
WEPKN003 | Distributed Fast Acquisitions System for Multi Detector Experiments | detector, experiment, TANGO, distributed | 717 |
|
|||
An increasing number of SOLEIL beamlines need to use several detection techniques in parallel, which may involve 2D area detectors, 1D fluorescence analyzers, etc. For such experiments, we have implemented distributed fast acquisition systems for multiple detectors. Data from each detector are collected by independent software applications (in our case Tango devices), all acquisitions being triggered by a single master clock. Each detector application then streams its own data to a common disk space, known as the spool, into an independent NeXus file, with the help of a dedicated high-performance NeXus streaming C++ library (called NeXus4Tango). A dedicated asynchronous process, known as the DataMerger, monitors the spool and gathers all these individual temporary NeXus files into the final experiment NeXus file stored in SOLEIL's common storage system. Metadata describing context and environment are also added to the final file by another process (the DataRecorder device). This software architecture has proved to be very modular in terms of the number and type of detectors, while making users' lives easier, as all data end up in a unique file at the end of the acquisition. The status of deployment and operation of these distributed fast acquisition systems will be presented, with the examples of QuickExafs acquisitions on the SAMBA beamline and QuickSRCD acquisitions on DISCO. In particular, the complex case of the future NANOSCOPIUM beamline will be discussed. | |||
Poster WEPKN003 [0.671 MB] | ||
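Since NeXus files are HDF5 underneath, the merging step can be sketched with h5py: copy each detector's temporary entry into the final experiment file and clear the spool. The paths and the synchronous loop are illustrative; the real DataMerger runs asynchronously alongside the acquisition.

```python
# Toy spool merge: absorb per-detector NeXus (HDF5) files into one final file.
import glob
import os

import h5py

def merge_spool(spool_dir, final_path):
    with h5py.File(final_path, "a") as final:
        for tmp in sorted(glob.glob(os.path.join(spool_dir, "*.nxs"))):
            with h5py.File(tmp, "r") as src:
                for entry in src:              # one top-level entry per detector
                    src.copy(entry, final)     # deep-copies the whole subtree
            os.remove(tmp)                     # the spool file has been absorbed

# Example (hypothetical paths):
# merge_spool("/data/spool", "/data/scan_0042.nxs")
```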
WEPKN005 | Experiences in Messaging Middleware for High-Level Control Applications | controls, EPICS, framework, interface | 720 |
|
|||
Funding: This project is funded by the US Department of Energy, Office of High Energy Physics, under contract #DE-FG02-08ER85043. Existing high-level applications in accelerator control and modeling systems leverage many different languages, tools and frameworks that do not interoperate with one another. As a result, the community has moved toward the proven Service-Oriented Architecture approach to address the interoperability challenges among heterogeneous high-level application modules. This paper presents our experience in developing a demonstrative high-level application environment using emerging messaging middleware standards. In particular, we utilized new features such as pvData in EPICS V4 and other emerging standards such as the Data Distribution Service (DDS) and Extensible Type Interface from the Object Management Group. Our work on the demonstrative environment focuses on documenting procedures for developing high-level accelerator control applications using the aforementioned technologies. Examples of such applications include presentation panel clients based on Control System Studio (CSS), a model-independent plug-in for CSS, and data-producing middle-layer applications such as model/data servers. Finally, we show how these technologies enable developers to package various control subsystems and activities into "services" with well-defined "interfaces", making it possible to leverage heterogeneous high-level applications through flexible composition. |
|||
Poster WEPKN005 [2.723 MB] | ||
WEPKN007 | A LEGO Paradigm for Virtual Accelerator Concept | controls, experiment, simulation, operation | 728 |
|
|||
The paper considers the basic features of a Virtual Accelerator concept based on a LEGO paradigm. This concept involves three types of components: different mathematical models for accelerator design problems, integrated beam simulation packages (i.e. COSY, MAD, OptiM and others), and a special class of virtual feedback instruments similar to real control systems (EPICS). All of these components should interoperate for a more complete analysis of control systems and increased fault tolerance. The Virtual Accelerator is an information and computing environment which provides a framework for analysis based on these components, which can be combined in different ways. Corresponding distributed computing services establish the interaction between the mathematical models and the low-level control system. The general idea of the software implementation is based on a Service-Oriented Architecture (SOA), which allows the use of cloud computing technology and enables remote access to the information and computing resources. The Virtual Accelerator allows a designer to combine powerful instruments for modelling beam dynamics in a user-friendly way, including both self-developed and well-known packages. Within the scope of this concept the following are also proposed: control system identification, analysis and result verification, visualization, and virtual feedback for beam-line operation. The architecture of the Virtual Accelerator system itself and the results of beam dynamics studies are presented. | |||
Poster WEPKN007 [0.969 MB] | ||
WEPKN020 | TANGO Integration of a SIMATIC WinCC Open Architecture SCADA System at ANKA | TANGO, controls, synchrotron, Linux | 749 |
|
|||
The WinCC OA supervisory control and data acquisition (SCADA) system provides the ANKA synchrotron facility with a powerful and very scalable tool to manage the enormous variety of technical equipment relevant to housekeeping and beamline operation. Crucial to the applicability of a SCADA system at the ANKA synchrotron are the options it provides for integration into other control concepts, even when these work on different time scales or follow different management concepts and control standards; these aspects in particular have led to different control approaches for the technical services, the storage ring and the beamlines. Beamline control at ANKA is mainly based on TANGO and SPEC, the latter having been extended with TANGO server capabilities. This approach implies the essential need for a stable and fast link to the slower WinCC OA SCADA system that does not increase the dead time of a measurement. The open architecture of WinCC OA allows a smooth integration in both directions and therefore offers the chance to combine their respective advantages, e.g. native hardware drivers or convenient graphical capabilities. The implemented solution will be presented and discussed using selected examples. | |||
Poster WEPKN020 [0.378 MB] | ||
WEPKN025 | Supervision Application for the New Power Supply of the CERN PS (POPS) | controls, interface, framework, operation | 756 |
|
|||
The power supply system for the magnets of the CERN PS has recently been upgraded to a new system called POPS (POwer for PS). The old mechanical machine has been replaced by a system based on capacitors. The equipment as well as the low-level controls were provided by an external company (CONVERTEAM). The supervision application has been developed at CERN, reusing the technologies and tools used for the LHC accelerator and experiments (the UNICOS and JCOP frameworks and the PVSS SCADA tool). The paper describes the full architecture of the control application and the challenges faced in integrating with an outsourced system. The benefits of reusing the CERN industrial control frameworks and the required adaptations will be discussed. Finally, initial operational experience will be presented. | |||
Poster WEPKN025 [13.149 MB] | ||
WEPKN026 | The ELBE Control System – 10 Years of Experience with Commercial Control, SCADA and DAQ Environments | controls, hardware, electron, interface | 759 |
|
|||
The electron accelerator facility ELBE is the central experimental site of the Helmholtz-Zentrum Dresden-Rossendorf, Germany. Experiments with Bremsstrahlung started in 2001, and since then, through a series of expansions and modifications, ELBE has evolved into a 24/7 user facility running a total of seven secondary sources, including two IR FELs. As its control system, ELBE uses WinCC on top of a networked PLC architecture. For data acquisition with high temporal resolution, PXI and PC-based systems are in use, applying National Instruments hardware and LabVIEW application software. Machine protection systems are based on in-house built digital and analogue hardware. An overview of the system is given, along with an experience report on maintenance, reliability and the efforts needed to keep pace with ongoing IT, OS and security developments. Limits of application and new demands imposed by the forthcoming facility upgrade to a centre for high-intensity beams (in conjunction with TW/PW femtosecond lasers) are discussed. | |||
Poster WEPKN026 [0.102 MB] | ||
WEPKS001 | Agile Development and Dependency Management for Industrial Control Systems | framework, controls, site, project-management | 767 |
|
|||
The production and exploitation of industrial control systems differ substantially from those of traditional information systems; this is in part due to constraints on the availability and change life-cycle of production systems, as well as their reliance on proprietary protocols and software packages with little support for open development standards [1]. The application of agile software development methods therefore represents a challenge which requires the adoption of existing change and build management tools and approaches that can help bridge the gap and reap the benefits of managed development when dealing with industrial control systems. This paper considers how agile development tools, such as Apache Maven for build management, Hudson for continuous integration and Sonatype Nexus for the operation of "definitive media libraries", were leveraged to manage the development life-cycle of the CERN UAB framework [2], as well as other crucial building blocks of the CERN accelerator infrastructure, such as the CERN Common Middleware and the FESA project.
[1] H. Milcent et al, "UNICOS: AN OPEN FRAMEWORK", THD003, ICALEPCS2009, Kobe, Japan [2] M. Dutour, "Software factory techniques applied to Process Control at CERN", ICALEPCS 2007, Knoxville Tennessee, USA |
|||
Slides WEPKS001 [10.592 MB] | ||
Poster WEPKS001 [1.032 MB] | ||
WEPKS003 | An Object Oriented Framework of EPICS for MicroTCA Based Control System | EPICS, controls, framework, interface | 775 |
|
|||
EPICS (Experimental Physics and Industrial Control System) is a distributed control system platform which has been widely used to control large scientific devices such as particle accelerators and fusion plants. EPICS has introduced object-oriented (C++) interfaces to most of the core services, but its major part, the run-time database, only provides C interfaces, which makes it hard to incorporate the record-related data and routines into an object-oriented software architecture. This paper presents an object-oriented framework containing abstract classes that encapsulate the EPICS record-related data and routines in C++ classes, so that full OOA (Object Oriented Analysis) and OOD (Object Oriented Design) methodologies can be used for EPICS IOC design. We also present a dynamic device management scheme supporting the hot-swap capability of the MicroTCA-based control system. | |||
Poster WEPKS003 [0.176 MB] | ||
WEPKS008 | Rules-based Analysis with JBoss Drools : Adding Intelligence to Automation | monitoring, controls, synchrotron, DSL | 790 |
|
|||
Rules engines are less well known as a software technology than the traditional procedural, object-oriented, scripting or dynamic development languages. This is a pity, as they can significantly enrich a development toolbox. JBoss Drools is an open-source rules engine that can easily be embedded in any Java application. Through an integration into our Passerelle process automation suite, we have been able to provide advanced solutions for intelligent process automation, complex event processing, system monitoring and alarming, automated repair, etc. This platform has been proven over many years as an automated diagnosis and repair engine for Belgium's largest telecom provider, and it is being piloted at Synchrotron SOLEIL for device monitoring and alarming. After an introduction to rules engines in general and JBoss Drools in particular, we will present some practical use cases and important caveats. | |||
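For readers used to procedural code, the rules-engine idea can be miniaturised as follows: rules are declarative (condition, action) pairs that are matched repeatedly against a fact base until nothing more fires. The facts and rules below are invented, and real Drools replaces this naive loop with a Rete network and its own DRL rule language.

```python
# Toy forward-chaining loop illustrating what a rules engine does.
facts = {"pressure": 9.5, "pump_on": True}

RULES = [
    ("high pressure",
     lambda f: f["pressure"] > 9.0 and f["pump_on"],
     lambda f: f.update(pump_on=False, alarm="pump stopped on high pressure")),
    ("notify",
     lambda f: "alarm" in f and not f.get("notified"),
     lambda f: (print("ALARM:", f["alarm"]), f.update(notified=True))),
]

fired = True
while fired:                 # evaluate until quiescence (no Rete network here)
    fired = False
    for name, condition, action in RULES:
        if condition(facts):
            action(facts)
            fired = True
```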
WEPKS009 | Integrating Gigabit Ethernet Cameras into EPICS at Diamond Light Source | EPICS, Ethernet, controls, photon | 794 |
|
|||
At Diamond Light Source we have selected Gigabit Ethernet cameras supporting GigE Vision for our new photon beamlines. GigE Vision is an interface standard for high speed Ethernet cameras which encourages interoperability between manufacturers. This paper describes the challenges encountered while integrating GigE Vision cameras from a range of vendors into EPICS. | |||
Poster WEPKS009 [0.976 MB] | ||
WEPKS010 | Architecture Design of the Application Software for the Low-Level RF Control System of the Free-Electron Laser at Hamburg | LLRF, controls, cavity, interface | 798 |
|
|||
The superconducting linear accelerator of the Free-Electron Laser in Hamburg (FLASH) provides high-performance electron beams to the lasing system to generate synchrotron radiation for various users. The low-level RF (LLRF) system is used to maintain beam stability by stabilizing the RF field in the superconducting cavities with feedback and feed-forward algorithms. The LLRF applications are sets of software for RF system model identification, control parameter optimization, and exception detection and handling, intended to improve the precision, robustness and operability of the LLRF system. In order to implement the LLRF applications on hardware with multiple distributed processors, an optimized software architecture is required for good understandability, maintainability and extensibility. This paper presents the design of the LLRF application software architecture based on a software engineering approach, and its implementation at FLASH. | |||
Poster WEPKS010 [0.307 MB] | ||
WEPKS012 | Intuitionistic Fuzzy (IF) Evaluations of Multidimensional Model | data-analysis, operation, lattice, fuzzy set | 805 |
|
|||
There are different logical methods for structuring data, but none of them is perfect. A multidimensional data model presents data either in the form of a cube (referred to as an infocube or hypercube) or in the form of a "star"-type scheme (referred to as a multidimensional scheme), using F-structures (Facts) and sets of D-structures (Dimensions), based on the notion of a hierarchy of D-structures. The data analysed in a specific multidimensional model are located in a Cartesian space bounded by the D-structures. In practice the data are either dispersed or "concentrated", so the data cells are not distributed evenly within this space: the moment of occurrence of an event is difficult to predict, and the data cluster by time period, location of the event, and so on. Processing such dispersed or concentrated data requires various technical strategies. The use of intuitionistic fuzzy evaluations (IFE) provides new possibilities for the alternative presentation and processing of the data analysed in any OLAP application. Using IFE in the evaluation of multidimensional models has the following advantages: analysts have more complete information for processing and analysing the respective data; managers benefit from more effective final decisions; and more functional multidimensional schemes can be designed. The purpose of this work is to apply intuitionistic fuzzy evaluations to a multidimensional data model. | |||
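For readers unfamiliar with the notion, the textbook definition behind intuitionistic fuzzy evaluations (after Atanassov) is worth recalling; this is standard material, not specific to the paper:

```latex
% An intuitionistic fuzzy set A over a universe E assigns each element both a
% membership degree \mu and a non-membership degree \nu that need not sum to 1:
A = \{\, \langle x,\ \mu_A(x),\ \nu_A(x) \rangle \mid x \in E \,\}, \qquad
\mu_A, \nu_A : E \to [0,1], \qquad
0 \le \mu_A(x) + \nu_A(x) \le 1 .
% The remaining slack is the hesitation (uncertainty) degree:
\pi_A(x) = 1 - \mu_A(x) - \nu_A(x) .
```

It is this third degree, π, that gives the evaluations room to express the incompleteness of dispersed OLAP data.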
WEPKS016 | Software for Virtual Accelerator Designing | simulation, distributed, framework, EPICS | 816 |
|
|||
The article discusses appropriate technologies for the software implementation of the Virtual Accelerator, considered as a set of services and tools enabling the transparent execution of computational software for modelling beam dynamics in accelerators on distributed computing resources. The distributed storage and information processing facilities utilized by the Virtual Accelerator follow a Service-Oriented Architecture (SOA) according to the cloud computing paradigm. Control system toolkits (such as EPICS and TANGO), computing modules (including high-performance computing), realization of the GUI with existing frameworks, and visualization of the data are discussed in the paper. The presented research consists of a software analysis for realizing the interaction between all levels of the Virtual Accelerator, together with some samples of middleware implementation. A set of servers and clusters at St. Petersburg State University forms the infrastructure of the computing environment for Virtual Accelerator design. The use of component-oriented technology for realizing the interaction between the Virtual Accelerator levels is proposed. The article concludes with an overview and justification of the choice of technologies for the design and implementation of the Virtual Accelerator. | |||
Poster WEPKS016 [0.559 MB] | ||
WEPKS021 | EPICS V4 in Python | EPICS, controls, status, data-analysis | 830 |
|
|||
Funding: Work supported under auspices of the U.S. Department of Energy under Contract No. DE-AC02-98CH10886 with Brookhaven Science Associates, LLC, and in part by DOE Contract DE-AC02-76SF00515. A novel design and implementation of EPICS version 4 is under way in Python. EPICS V4 defines an efficient way to describe complex data structures and a data protocol. The current implementations in C++ and Java each have to reinvent the wheel to represent the data structures, whereas in Python they can be mapped efficiently onto NumPy arrays. This presentation shows performance benchmarks, a comparison between the language implementations, and the current status. |
|||
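The mapping argued for in the abstract can be sketched with a NumPy structured array: one dtype describes the structure once and every sample shares it, with vectorised access per field instead of per-field container objects. The field layout below is hypothetical, not an actual EPICS V4 normative type.

```python
# Structured EPICS V4-style values mapped onto a NumPy structured array.
import numpy as np

pvstruct = np.dtype([("timeStamp", "u8"),    # hypothetical field layout
                     ("value",     "f8"),
                     ("severity",  "u2")])

samples = np.zeros(1000, dtype=pvstruct)
samples["value"] = np.linspace(0.0, 1.0, 1000)    # vectorised field access
samples["severity"][samples["value"] > 0.9] = 2   # flag a (made-up) alarm level

print(samples[995])
```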
WEPKS023 | Further Developments in Generating Type-Safe Messaging | status, target, controls, network | 836 |
|
|||
Funding: Operated by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the United States Department of Energy. At ICALEPCS '09 we introduced a source code generator that allows processes to communicate safely using native data types. In this paper, we discuss further development that has occurred since the conference in Kobe, Japan, including the addition of three more client languages, an optimization of network packet size, and the addition of a new protocol data type. |
|||
Poster WEPKS023 [3.219 MB] | ||
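A miniature, invented for illustration, of what such a generator produces: from a declarative field specification, a message class with native-typed fields and fixed-size packing, so neither peer ever touches raw bytes by hand.

```python
# Toy "generated" message class: native fields in, packed bytes on the wire.
import struct

SPEC = {"name": "MagnetSetting",
        "fields": [("channel", "H"), ("current", "d")]}   # invented message spec

def make_message_class(spec):
    fmt = "!" + "".join(code for _, code in spec["fields"])
    names = [name for name, _ in spec["fields"]]

    class Message:
        def __init__(self, **kw):
            for n in names:
                setattr(self, n, kw[n])
        def pack(self):
            return struct.pack(fmt, *(getattr(self, n) for n in names))
        @staticmethod
        def unpack(buf):
            return Message(**dict(zip(names, struct.unpack(fmt, buf))))

    Message.__name__ = spec["name"]
    return Message

MagnetSetting = make_message_class(SPEC)
wire = MagnetSetting(channel=7, current=123.45).pack()    # 10 bytes on the wire
print(MagnetSetting.unpack(wire).current)
```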
WEPKS025 | Evaluation of Software and Electronics Technologies for the Control of the E-ELT Instruments: a Case Study | controls, hardware, framework, CORBA | 844 |
|
|||
In the scope of evaluating architectures and technologies for the control system of the E-ELT (European Extremely Large Telescope) instruments, a collaboration has been set up between the Instrumentation and Control Group of INAF-OATs and the ESO Directorate of Engineering. The first result of this collaboration is the design and implementation of a prototype of a small but representative control system for an E-ELT instrument, which has been set up at the INAF-OATs premises. The electronics is based on PLCs (Programmable Logic Controllers) and Ethernet-based fieldbuses from different vendors, using international standards like IEC 61131-3 and PLCopen Motion Control. The baseline design for the control software follows the architecture of the VLT (Very Large Telescope) instrumentation application framework, but it has been implemented using ACS (ALMA Common Software), an open-source software framework developed for the ALMA project and based on CORBA middleware. The communication among the software components is based on two models: CORBA calls for command/reply and the CORBA notification channel for distributing the device status. Communication with the PLCs is based on OPC UA, an international standard for communication with industrial controllers. The results of this work will contribute to the definition of the architecture of the control system that will be provided to all consortia responsible for the actual implementation of the E-ELT instruments. This paper presents the prototype's motivation, architecture, design and implementation. | |||
Poster WEPKS025 [3.039 MB] | ||
WEPKS027 | Java Expert GUI Framework for CERN's Beam Instrumentation Systems | GUI, framework, controls, software-architecture | 852 |
|
|||
The software section of the CERN Beam Instrumentation Group has recently performed a study of the tools used to produce Java expert applications. This paper presents the analysis made to understand the requirements for generic components and the resulting tools, including a compilation of Java components that have been made available to a wider audience. The paper also discusses the possibility of using Maven as a deployment tool, with its implications for developers and users. | |||
Poster WEPKS027 [1.838 MB] | ||
WEPKS028 | Exploring a New Paradigm for Accelerators and Large Experimental Apparatus Control Systems | controls, distributed, toolkit, database | 856 |
|
|||
The integration of web technologies and web services has been, in recent years, one of the major trends in upgrading and developing control systems for accelerators and large experimental apparatuses. Usually, web technologies have been introduced to complement control systems with smart add-ons and user-friendly services or, for instance, to safely allow users at remote sites to access the control system. Despite this still narrow spectrum of employment, some software technologies developed for high-performance web services, although originally intended and optimized for those applications, possess features that would allow their deeper integration in a control system and, eventually, their use in developing some of the control system's core components. In this paper we present the conclusions of a preliminary investigation of a new paradigm for an accelerator control system and its associated machine data acquisition system (DAQ), based on a synergic combination of a network-distributed cache memory and a non-relational key/value database. We investigated these technologies with particular interest in performance: the speed of data storage and retrieval for the network memory, the data throughput and query execution time for the database, and, especially, how much these figures can benefit from their inherent scalability. The work has been developed in a collaboration between INFN-LNF and INFN-Roma Tor Vergata. | |||
WEPKS029 | Integrating a Workflow Engine within a Commercial SCADA to Build End User Applications in a Scientific Environment | GUI, controls, alignment, interface | 860 |
|
|||
To build integrated high-level applications, SOLEIL is using an original component-oriented approach based on GlobalSCREEN, an industrial Java SCADA [1]. The aim of this integrated development environment is to give SOLEIL's scientific and technical staff a way to develop GUI applications for external beamline users. These GUI applications must address the two following needs: monitoring and supervision of a control system, and development and execution of automated processes (such as beamline alignment, data collection and on-line data analysis). The first need is now completely covered by a rich set of Java graphical components based on the COMETE [2] library, providing a high level of service for data logging, scanning and so on. To reach the same quality of service for process automation, a large effort has been made to integrate PASSERELLE [3], a workflow engine, more smoothly, with dedicated user-friendly interfaces for end users, packaged as JavaBeans in the GlobalSCREEN component library. Starting with brief descriptions of the software architecture of the PASSERELLE and GlobalSCREEN environments, we will then present the overall system integration design as well as the current status of deployment on SOLEIL beamlines.
[1] V. Hardion, M. Ounsy, K. Saintin, "How to Use a SCADA for High-Level Application Development on a Large-Scale Basis in a Scientific Environment", ICALEPS 2007 [2] G. Viguier, K. Saintin, https://comete.svn.sourceforge.net/svnroot/comete, ICALEPS'11, MOPKN016. [3] A. Buteau, M. Ounsy, G. Abeille, "A Graphical Sequencer for SOLEIL Beamline Acquisitions", ICALEPS'07, Knoxville, Tennessee - USA, Oct 2007. |
|||
WEPKS030 | A General Device Driver Simulator to Help Compare Real Time Control Systems | EPICS, TANGO, device-server, controls | 863 |
|
|||
Supervisory Control And Data Acquisition (SCADA) systems such as EPICS, Tango and TINE usually provide small example device driver programs for testing or to help users get started; however, these differ between systems, making it hard to compare the SCADAs. To address this, a small simulator driver was created which emulates signals and errors similar to those received from a hardware device. The simulator driver can return one of four signals: a ramp signal, a large-alarm ramp signal, an error signal or a timeout. The different signals or errors are selected using the associated software device number. The simulator driver performs similar functions to EPICS's clockApp [1], Tango's TangoTest and TINE's sinegenerator, but its signals are independent of the SCADA. A command-line application, an EPICS server (IOC), a Tango device server and a TINE server (FEC) were created and linked with the simulator driver, the software device numbers in each case being equated to a dummy device. Using these servers it was possible to compare how each SCADA behaved against the same repeatable signals. In addition to comparing and testing the SCADAs, the finished servers proved useful as templates for real hardware device drivers.
[1] F. Furukawa, "Very Simple Example of EPICS Device Support", http://www-linac.kek.jp/epics/second |
|||
Poster WEPKS030 [1.504 MB] | ||
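The core of such a simulator fits in a screenful: one read function, keyed by the software device number, producing a ramp, an alarm-range ramp, an error or a timeout. The numbering and scaling below are invented; the point is that every server links against the same repeatable behaviour.

```python
# Toy simulator driver: four software device numbers, four behaviours.
import time

RAMP, ALARM_RAMP, ERROR, TIMEOUT = range(4)

def read(device_number):
    t = time.time()
    if device_number == RAMP:
        return t % 10.0                  # sawtooth between 0 and 10
    if device_number == ALARM_RAMP:
        return 1000.0 * (t % 10.0)       # same shape, alarm-range amplitude
    if device_number == ERROR:
        raise IOError("simulated device fault")
    if device_number == TIMEOUT:
        time.sleep(5.0)                  # the caller's timeout should expire first
        return t % 10.0
    raise ValueError(f"unknown device number {device_number}")

print(read(RAMP))
```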
WEPKS032 | A UML Profile for Code Generation of Component Based Distributed Systems | interface, distributed, controls, framework | 867 |
|
|||
A consistent and unambiguous implementation of code generation (model-to-text transformation) from UML must rely on a well-defined UML profile, customizing UML for a particular application domain. Such a profile must have a solid foundation in a formally correct ontology, formalizing the concepts and their relations in the specific domain, in order to avoid a maze of wildly created stereotypes. This paper describes a generic profile for the code generation of component-based distributed systems for control applications, the process used to distill the ontology and define the profile, and the strategy followed to implement the code generator. The main steps, which take place iteratively, include: defining the terms and relations with an ontology, mapping the ontology to the appropriate UML metaclasses, testing the profile by creating modelling examples, and generating the code. | |||
![]() |
Poster WEPKS032 [1.925 MB] | ||
WEPKS033 | UNICOS CPC6: Automated Code Generation for Process Control Applications | controls, framework, operation, vacuum | 871 |
|
|||
The Continuous Process Control package (CPC) is one of the components of the CERN Unified Industrial Control System framework (UNICOS). As a part of this framework, UNICOS-CPC provides a well defined library of device types, a methodology and a set of tools to design and implement industrial control applications. The new CPC version uses the software factory UNICOS Application Builder (UAB) to develop the CPC applications. The CPC component is composed of several platform-oriented plug-ins (PLCs and SCADA) describing the structure and the format of the generated code. It uses a resource package where both the library of device types and the generated file syntax are defined. The UAB core is the generic part of this software; it discovers and dynamically calls the different plug-ins and provides the required common services. In this paper the UNICOS CPC6 package is presented. It is composed of several plug-ins: the Instance generator and the Logic generator for both Siemens and Schneider PLCs, the SCADA generator (based on PVSS) and the CPC wizard, a dedicated plug-in created to provide the user with a friendly GUI. A management tool called UAB Bootstrap administers the different CPC component versions and all the dependencies between the CPC resource packages and the components. This tool guides the control system developer in installing and launching the different CPC component versions. | |||
![]() |
Poster WEPKS033 [0.730 MB] | ||
WEPMN001 | Experience in Using Linux Based Embedded Controllers with EPICS Environment for the Beam Transport in SPES Off–Line Target Prototype | EPICS, controls, database, target | 875 |
|
|||
EPICS [1] was chosen as the general framework for developing the control system of the SPES facility under construction at LNL [2]. We report our experience in using commercial devices based on Debian Linux to control the electrostatic deflectors installed on the beam line at the output of the target chamber. We discuss this solution and compare it to other IOC implementations in use in the target control system.
[1] http://www.aps.anl.gov/epics/ [2] http://www.lnl.infn.it/~epics * M.Montis, MS thesis: http://www.lnl.infn.it/~epics/THESIS/TesiMaurizioMontis.pdf |
|||
![]() |
Poster WEPMN001 [1.036 MB] | ||
WEPMN008 | Function Generation and Regulation Libraries and their Application to the Control of the New Main Power Converter (POPS) at the CERN CPS | controls, simulation, real-time, Linux | 886 |
|
|||
Power converter control for the LHC is based on an embedded control computer called a Function Generator/Controller (FGC). Every converter includes an FGC with responsibility for the generation of the reference current as a function of time and the regulation of the circuit current, as well as control of the converter state. With many new converter controls software classes in development, it was decided to generalise several key components of the FGC software in the form of C libraries: function generation in libfg; regulation, limits and simulation in libreg; and DCCT, ADC and DAC calibration in libcal. These libraries were first used in the software class dedicated to controlling the new 60 MW main power converter (POPS) at the CERN CPS, where regulation of both magnetic field and circuit current is supported. This paper reports on the functionality provided by each library, in particular libfg and libreg. The libraries are already being used by software classes in development for the next generation FGC for Linac4 converters, as well as the CERN SPS converter controls (MUGEF) and the MedAustron converter regulation board (CRB). | |||
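As a flavour of what a function-generation library of this kind provides, the sketch below evaluates a linear ramp reference with a plateau as a function of time. It is a minimal illustration under assumed names; the real libfg supports many more function types and a different API:

```c
/* Minimal sketch of the function-generation idea behind libfg: the
 * reference is a pure function of time. Names and units are assumptions. */
typedef struct {
    double start;      /* reference at t = 0        [A] */
    double end;        /* plateau reference         [A] */
    double ramp_time;  /* time to reach the plateau [s] */
} fg_ramp_t;

double fg_ramp_value(const fg_ramp_t *p, double t)
{
    if (t <= 0.0)          return p->start;
    if (t >= p->ramp_time) return p->end;
    return p->start + (p->end - p->start) * (t / p->ramp_time);
}

/* A regulation library such as libreg would then compare this reference
 * with the measured circuit current each cycle and compute the converter
 * voltage, e.g. with a PI correction (sketch, not the libreg algorithm). */
```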
![]() |
Poster WEPMN008 [3.304 MB] | ||
WEPMN009 | Simplified Instrument/Application Development and System Integration Using Libera Base Software Framework | hardware, framework, interface, controls | 890 |
|
|||
Development of the many appliances used in scientific environments forces us to face similar challenges, often repeatedly: one has to design or integrate hardware components; support for network and other communication standards needs to be established; data and signals are processed and dispatched; and interfaces are required to monitor and control the behaviour of the appliances. At Instrumentation Technologies we identified and addressed these issues by creating a generic framework composed of several reusable building blocks. They simplify some of the tedious tasks and leave more time to concentrate on the real issues of the application. Furthermore, the end-product quality benefits from the larger common base of this middleware. We will present the benefits using a concrete example of an instrument implemented on the MTCA platform and accessible through a graphical user interface. | |||
![]() |
Poster WEPMN009 [5.755 MB] | ||
WEPMN011 | Controlling the EXCALIBUR Detector | detector, simulation, controls, hardware | 894 |
|
|||
EXCALIBUR is an advanced photon counting detector being designed and built by a collaboration of Diamond Light Source and the Science and Technology Facilities Council. It is based around 48 CERN Medipix III silicon detectors arranged as an 8x6 array. The main problem addressed by the design of the hardware and software is the uninterrupted collection and safe storage of image data at rates up to one hundred (2048x1536) frames per second. This is achieved by splitting the image into six 'stripes' and providing parallel data paths for them all the way from the detectors to the storage. This architecture requires the software to control the configuration of the stripes in a consistent manner and to keep track of the data so that the stripes can be subsequently stitched together into frames. | |||
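The stitching step described above amounts to copying each stripe to its row offset in the full frame. A minimal sketch in C, assuming six equal-height horizontal stripes of 16-bit pixels (the actual buffer layout and pixel depth are not given in the abstract):

```c
/* Sketch: reassemble six detector 'stripes' into one 2048x1536 frame.
 * Stripe geometry and pixel type are assumptions for illustration. */
#include <stdint.h>
#include <string.h>

enum { FRAME_W = 2048, FRAME_H = 1536, N_STRIPES = 6,
       STRIPE_H = FRAME_H / N_STRIPES };   /* 256 rows per stripe */

/* Place stripe 's' (STRIPE_H rows of FRAME_W pixels) at its offset,
 * using the stripe index carried along the parallel data path. */
void stitch_stripe(uint16_t *frame, const uint16_t *stripe, int s)
{
    memcpy(frame + (size_t)s * STRIPE_H * FRAME_W,
           stripe, (size_t)STRIPE_H * FRAME_W * sizeof *stripe);
}
```

Keeping the stripe index with the data all the way to storage is what allows this reassembly to run offline, outside the time-critical acquisition path.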
![]() |
Poster WEPMN011 [0.289 MB] | ||
WEPMN013 | Recent Developments in Synchronised Motion Control at Diamond Light Source | EPICS, controls, interface, framework | 901 |
|
|||
At Diamond Light Source the EPICS control system is used with a variety of motion controllers. The use of EPICS ensures a common interface over a range of motorised applications. We have developed a system to enable the use of the same interface for synchronised motion over multiple axes using the Delta Tau PMAC controller. Details of this work will be presented, along with examples and possible future developments. | |||
WEPMN015 | Timing-system Solution for MedAustron; Real-time Event and Data Distribution Network | timing, real-time, controls, ion | 909 |
|
|||
MedAustron is an ion beam cancer therapy and research centre currently under construction in Wiener Neustadt, Austria. This facility features a synchrotron particle accelerator for light ions. A timing system is being developed for this class of accelerators targeted at clinical use, as a product of close collaboration between MedAustron and Cosylab. We redesigned the μResearch Finland transport layer's FPGA firmware, extending its capabilities to address specific requirements of the machine, arriving at a generic real-time broadcast network for coordinating the actions of a compact, pulse-to-pulse modulation based particle accelerator. One such requirement is the need to support configurable responses to timing events on the receiver side. The system comes with National Instruments LabVIEW-based software support, ready to be integrated into the PXI-based front-end controllers. This paper explains the design process from initial requirements refinement to technology choice, architectural design and implementation. It elaborates the main characteristics of the accelerator that the timing system has to address, such as support for concurrently operating partitions, real-time and non-real-time data transport needs, and flexible configuration schemes for real-time response to timing event reception. Finally, an architectural overview is given, with the main components explained in due detail. | |||
![]() |
Poster WEPMN015 [0.800 MB] | ||
WEPMN018 | Performance Tests of the Standard FAIR Equipment Controller Prototype | FPGA, controls, timing, Ethernet | 919 |
|
|||
For the control system of the new FAIR accelerator facility a standard equipment controller, the Scalable Control Unit (SCU), is presently under development. First prototypes have already been tested in real applications. The controller combines an x86 ComExpress board and an Altera Arria II FPGA. Over a parallel bus interface called the SCU bus, up to 12 slave boards can be controlled. Communication between CPU and FPGA is done over a PCIe link. We discuss the real-time behaviour between the Linux OS and the FPGA hardware. For the test, a Front-End Software Architecture (FESA) class, running under Linux, communicates with the PCIe bridge in the FPGA. Although we are using PCIe only for single 32-bit-wide accesses to the FPGA address space, the performance still seems sufficient. The tests showed an average response time to IRQs of 50 microseconds with a 1.6 GHz Intel Atom CPU. This includes the context change to the FESA user-space application and the reply back to the FPGA. Further topics are the bandwidth of the PCIe link for single/burst transfers and the performance of the SCU bus communication. | |||
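A latency figure of this kind can be measured with a user-space loop that blocks until the FPGA raises an interrupt and replies immediately, letting the FPGA timestamp the round trip. The sketch below follows the common UIO-style pattern of reading a device node to wait for an IRQ; the device path and reply convention are assumptions, not the actual FESA/SCU code:

```c
/* Sketch of a user-space IRQ round-trip measurement loop. The FPGA can
 * timestamp the IRQ and the reply to measure the ~50 us response time. */
#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/scu_fpga", O_RDWR);   /* hypothetical device node */
    if (fd < 0)
        return 1;
    for (;;) {
        uint32_t irq;
        if (read(fd, &irq, sizeof irq) != sizeof irq)  /* block until IRQ */
            break;
        write(fd, &irq, sizeof irq);   /* immediate reply back to FPGA */
    }
    close(fd);
    return 0;
}
```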
WEPMN026 | Evolution of the CERN Power Converter Function Generator/Controller for Operation in Fast Cycling Accelerators | Ethernet, controls, network, radiation | 939 |
|
|||
Power converters in the LHC are controlled by the second generation of an embedded computer known as a Function Generator/Controller (FGC2). Following the success of this control system, new power converter installations at CERN will be based around an evolution of the design - a third generation called FGC3. The FGC3 will initially be used in the PS Booster and Linac4. This paper compares the hardware of the two generations of FGC and details the decisions made during the design of the FGC3. | |||
![]() |
Poster WEPMN026 [0.586 MB] | ||
WEPMN032 | Development of Pattern Awareness Unit (PAU) for the LCLS Beam Based Fast Feedback System | feedback, timing, operation, controls | 954 |
|
|||
LCLS is now successfully operating at its design beam repetition rate of 120 Hz, but in order to ensure stable beam operation at this high rate we have developed a new timing pattern aware EPICS controller for beam line actuators. Actuators that are capable of responding at 120 Hz are controlled by the new Pattern Aware Unit (PAU) as part of the beam-based feedback system. The beam at the LCLS is synchronized to the 60 Hz AC power line phase and is subject to electrical noise which differs according to which of the six possible AC phases is chosen from the 3-phase site power line. Beam operation at 120 Hz interleaves two of these 60 Hz phases and the feedback must be able to apply independent corrections to the beam pulse according to which of the 60 Hz timing patterns the pulse is synchronized to. The PAU works together with the LCLS Event Timing system which broadcasts a timing pattern that uniquely identifies each pulse when it is measured and allows the feedback correction to be applied to subsequent pulses belonging to the same timing pattern, or time slot, as it is referred to at SLAC. At 120 Hz operation this effectively provides us with two independent, but interleaved feedback loops. Other beam programs at the SLAC facility such as LCLS-II and FACET will be pulsed on other time slots and the PAUs in those systems will respond to their appropriate timing patterns. This paper describes the details of the PAU development: real-time requirements and achievement, scalability, and consistency. The operational results will also be described. | |||
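Conceptually, the PAU keeps one feedback state per timing pattern so that the interleaved loops never mix. A minimal sketch of this idea with hypothetical names and a simple PI correction (an illustration of the concept, not SLAC's implementation):

```c
/* Sketch: independent interleaved feedback loops, selected by time slot. */
#define N_TIMESLOTS 6   /* six possible 60 Hz AC phases (slots 0..5) */

typedef struct { double integral; double last_corr; } fb_state_t;
static fb_state_t fb[N_TIMESLOTS];   /* one controller state per slot */

/* Apply a PI correction using only the state of the pulse's own slot,
 * so the two 60 Hz loops interleaved at 120 Hz never share state. */
double pau_correct(int timeslot, double error, double kp, double ki)
{
    fb_state_t *s = &fb[timeslot];
    s->integral += error;
    s->last_corr = kp * error + ki * s->integral;
    return s->last_corr;
}
```

The timing pattern broadcast by the event system supplies the `timeslot` index for each measured pulse, and the correction is applied to the next pulse carrying the same pattern.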
![]() |
Poster WEPMN032 [0.430 MB] | ||
WEPMN034 | YAMS: a Stepper Motor Controller for the FERMI@Elettra Free Electron Laser | controls, power-supply, interface, TANGO | 958 |
|
|||
Funding: This work was supported in part by the Italian Ministry of University and Research under grants FIRB-RBAP045JF2 and FIRB-RBAP06AWK3. New projects like FERMI@Elettra demand standardization of their systems in order to cut development and maintenance costs. The various motion control applications foreseen in this project required a specific controller able to flexibly adapt to any need while maintaining a common interface to the control system, to minimize software development efforts. These reasons led us to design and build "Yet Another Motor Subrack", YAMS, a 3U chassis containing a commercial stepper motor controller, up to eight motor drivers and all the necessary auxiliary systems. The motors can be controlled locally by means of an operator panel or remotely through an Ethernet interface and a dedicated Tango device server. The paper describes the details of the project and the deployment issues. |
|||
![]() |
Poster WEPMN034 [4.274 MB] | ||
WEPMN036 | Comparative Analysis of EPICS IOC and MARTe for the Development of a Hard Real-Time Control Application | EPICS, controls, real-time, framework | 961 |
|
|||
EPICS is used worldwide to build distributed control systems for scientific experiments. The EPICS software suite is based around the Channel Access (CA) network protocol, which allows the communication of different EPICS clients and servers in a distributed architecture. Servers are called Input/Output Controllers (IOCs) and perform real-world I/O or local control tasks. EPICS IOCs were originally designed for VxWorks to meet the demanding real-time requirements of control algorithms and have lately been ported to different operating systems. The MARTe framework has recently been adopted to develop an increasing number of hard real-time systems in different fusion experiments. MARTe is a software library that allows the rapid and modular development of stand-alone hard real-time control applications on different operating systems. MARTe was created to be portable and in recent years it has evolved to exploit multicore architectures. In this paper we review several implementation differences between the EPICS IOC and MARTe. We dissect their internal data structures and synchronization mechanisms to understand what happens behind the scenes. Differences in the component-based approach and in the concurrent model of computation of the EPICS IOC and MARTe are explained. Such differences lead to distinct time models in the computational blocks and distinct real-time capabilities of the two frameworks, of which a developer must be aware. | |||
![]() |
Poster WEPMN036 [2.406 MB] | ||
WEPMN037 | DEBROS: Design and Use of a Linux-like RTOS on an Inexpensive 8-bit Single Board Computer | Linux, network, hardware, interface | 965 |
|
|||
As the power, complexity, and capabilities of embedded processors continue to grow, it is easy to forget just how much can be done with inexpensive single board computers based on 8-bit processors. When the proprietary, non-standard tools from the vendor for one such embedded computer became a major roadblock, I embarked on a project to expand my own knowledge and provide a more flexible, standards-based alternative. Inspired by operating systems such as Unix, Linux, and Minix, I wrote DEBROS (the Davis Embedded Baby Real-time Operating System) [1], which is a fully pre-emptive, priority-based OS with soft real-time capabilities that provides a subset of standard Linux/Unix-compatible system calls such as stdio, BSD sockets, pipes, semaphores, etc. The end result was a much more flexible, standards-based development environment which allowed me to simplify my programming model, expand diagnostic capabilities, and reduce the time spent monitoring and applying updates to the hundreds of devices in the lab currently using this hardware [2].
[1] http://groups.nscl.msu.edu/controls/files/DEBROS_User_Developer_Manual.doc [2] http://groups.nscl.msu.edu/controls/ |
|||
![]() |
Poster WEPMN037 [0.112 MB] | ||
WEPMN038 | A Combined On-line Acoustic Flowmeter and Fluorocarbon Coolant Mixture Analyzer for the ATLAS Silicon Tracker | controls, detector, database, real-time | 969 |
|
|||
An upgrade to the ATLAS silicon tracker cooling control system requires a change from C3F8 (molecular weight 188) coolant to a blend with 10-30% C2F6 (mw 138) to reduce the evaporation temperature and better protect the silicon from cumulative radiation damage at the LHC. Central to this upgrade, an acoustic instrument for the measurement of C3F8/C2F6 mixture and flow has been developed. The sound velocity in a binary gas mixture at known temperature and pressure depends on the component concentrations. 50 kHz sound bursts are simultaneously sent via ultrasonic transceivers parallel and anti-parallel to the gas flow. A 20 MHz transit clock is started synchronously with burst transmission and stopped by over-threshold received sound pulses. Transit times in both directions, together with temperature and pressure, enter a FIFO memory 100 times per second. The gas mixture is continuously analyzed using PVSS-II, by comparison of the average sound velocity in both directions with stored velocity-mixture look-up tables. Flow is calculated from the difference in sound velocity in the two directions. In future versions these calculations may be made in a micro-controller. The instrument has demonstrated a resolution of <0.3% for C3F8/C2F6 mixtures with ~20% C2F6, with a simultaneous flow resolution of ~0.1% of full scale. Higher precision is possible: a sensitivity of ~0.005% to leaks of C3F8 into the ATLAS pixel detector nitrogen envelope (mw difference 156) has been seen. The instrument has many applications, including the analysis of hydrocarbons, mixtures for semiconductor manufacture and anesthesia. | |||
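For transducers aligned with the flow, the standard time-of-flight relations connect the two transit times to the sound velocity c (used for the mixture lookup) and the gas velocity v (used for the flow measurement). With assumed symbols L for the acoustic path length and t_d, t_u for the downstream and upstream transit times:

```latex
c = \frac{L}{2}\left(\frac{1}{t_d} + \frac{1}{t_u}\right), \qquad
v = \frac{L}{2}\left(\frac{1}{t_d} - \frac{1}{t_u}\right)
```

Comparing the measured c against the stored velocity-mixture tables at the measured temperature and pressure yields the C3F8/C2F6 concentration, while the difference term yields the flow, consistent with the scheme described in the abstract.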
WEPMS003 | A Testbed for Validating the LHC Controls System Core Before Deployment | controls, hardware, operation, timing | 977 |
|
|||
Since the start-up of the LHC, it has been crucial to carefully test core controls components before deploying them operationally. The Testbed of the CERN accelerator controls group was developed for this purpose. It contains different hardware (PPC, i386) running different operating systems (Linux and LynxOS) and core software components running on front-ends, communication middleware and client libraries. The Testbed first executes integration tests to verify that the components delivered by individual teams interoperate, and then system tests, which verify high-level, end-user functionality. It also verifies that different versions of components are compatible, which is vital, because not all parts of the operational LHC control system can be upgraded simultaneously. In addition, the Testbed can be used for performance and stress tests. Internally, the Testbed is driven by Bamboo, a Continuous Integration server, which automatically builds and deploys new software versions into the Testbed environment and executes the tests continuously to guard against software regressions. Whenever a test fails, an e-mail is sent to the appropriate people. The Testbed is part of the official controls development process, wherein new releases of the controls system have to be validated before being deployed operationally. Integration and system tests are an important complement to the unit tests previously executed by the teams. The Testbed has already caught several bugs that were not discovered by the unit tests of the individual components.
* http://cern.ch/jnguyenx/ControlsTestBed.html |
|||
![]() |
Poster WEPMS003 [0.111 MB] | ||
WEPMS005 | Automated Coverage Tester for the Oracle Archiver of WinCC OA | controls, status, operation, database | 981 |
|
|||
A large number of control systems at CERN are built with the commercial SCADA tool WinCC OA. They cover projects in the experiments, accelerators and infrastructure. An important component is the Oracle archiver, used for long-term storage of process data (events) and alarms. The archived data provide feedback to the operators and experts about how the system was behaving at a particular moment in the past. In addition, a subset of these data is used for offline physics analysis. The consistency of the archived data has to be ensured from writing to reading, as well as throughout updates of the control systems. The complexity of the archiving subsystem comes from the multiplicity of data types, the required performance and other factors such as the operating system, environment variables or versions of the different software components; therefore an automatic tester has been implemented to systematically execute test scenarios under different conditions. The tests are based on scripts which are automatically generated from templates, so they can cover a wide range of software contexts. The tester has been fully written in the same software environment as the targeted SCADA system. The current implementation is able to handle over 300 test cases, both for events and alarms. It has enabled issues to be reported to the provider of WinCC OA. The template mechanism allows sufficient flexibility to adapt the suite of tests to future needs. The developed tools are generic enough to be used to test other parts of the control systems. | |||
![]() |
Poster WEPMS005 [0.279 MB] | ||
WEPMS006 | Automated testing of OPC Servers | DSL, operation, Windows, Domain-Specific-Languages | 985 |
|
|||
CERN relies on OPC Server implementations from third-party device vendors to provide a software interface to their respective hardware. Each time a vendor releases a new OPC Server version, it is regression tested internally to verify that existing functionality has not been inadvertently broken during the process of adding new features. In addition, bugs and problems must be communicated to the vendors in a reliable and portable way. This presentation covers the automated test approach used at CERN to cover both cases: scripts are written in a domain-specific language specifically created for describing OPC tests, and executed by a custom software engine driving the OPC Server implementation. | |||
![]() |
Poster WEPMS006 [1.384 MB] | ||
WEPMS007 | Backward Compatibility as a Key Measure for Smooth Upgrades to the LHC Control System | controls, operation, feedback, Linux | 989 |
|
|||
Now that the LHC is operational, a big challenge is to upgrade the control system smoothly, with minimal downtime and interruptions. Backward compatibility (BC) is a key measure to achieve this: a subsystem with a stable API can be upgraded smoothly. As part of a broader Quality Assurance effort, the CERN Accelerator Controls group explored methods and tools supporting BC. We investigated two aspects in particular: (1) "incoming dependencies", to know which part of an API is really used by clients, and (2) BC validation, to check that a modification is really backward compatible. We used this approach for Java APIs and for FESA devices (which expose an API in the form of device/property sets). For Java APIs, we gather dependency information by regularly running byte-code analysis on all 1000 JAR files that belong to the control system to find incoming dependencies (method calls and inheritance). An Eclipse plug-in we developed shows these incoming dependencies to the developer. If an API method is used by many clients, it has to remain backward compatible. On the other hand, if a method is not used, it can be freely modified. To validate BC, we are exploring the official Eclipse tools (PDE API tools), and others that check BC without the need for invasive technology such as OSGi. For FESA devices, we instrumented key components of our controls system to know which devices and properties are in use. This information is collected in the Controls Database and is used (amongst others) by the FESA design tools in order to prevent the FESA class developer from breaking BC. | |||
WEPMS008 | Software Tools for Electrical Quality Assurance in the LHC | database, hardware, LabView, operation | 993 |
|
|||
There are over 1600 superconducting magnet circuits in the LHC machine. Many of them consist of a large number of components electrically connected in series. This enhances the sensitivity of the whole circuit to electrical faults in individual components. Furthermore, circuits are equipped with a large number of instrumentation wires, which are exposed to accidental damage or swapping. In order to ensure safe operation, an Electrical Quality Assurance (ELQA) campaign is needed after each thermal cycle. Due to the complexity of the circuits, as well as their distant geographical distribution (a tunnel of 27 km circumference divided into 8 sectors), suitable software and hardware platforms had to be developed. The software combines an Oracle database, LabVIEW data acquisition applications and PHP-based web follow-up tools. This paper describes the software used for the ELQA of the LHC. | |||
![]() |
Poster WEPMS008 [8.781 MB] | ||
WEPMS022 | The Controller Design for Kicker Magnet Adjustment Mechanism in SSRF | controls, feedback, kicker, injection | 1021 |
|
|||
The kicker magnet adjustment mechanism controller in SSRF improves the efficiency of injection by adjusting the magnet in real time, especially in top-up mode. The controller mainly consists of a Programmable Logic Controller (PLC), stepper motors, a reducer, a worm gear and the mechanism itself. The PLC controls the stepper motors that adjust the azimuth of the magnet, and monitors and regulates the magnet with an inclinometer sensor. It also monitors the interlock. In addition, the controller provides both local and remote working modes. This paper mainly introduces the related hardware and software designs for this device. | |||
![]() |
Poster WEPMS022 [0.173 MB] | ||
WEPMU002 | Testing Digital Electronic Protection Systems | hardware, LabView, FPGA, controls | 1047 |
|
|||
The Safe Machine Parameters Controller (SMPC) ensures the correct configuration of the LHC machine protection system, and that safe injection conditions are maintained throughout the filling of the LHC machine. The SMPC receives information in real-time from measurement electronics installed throughout the LHC and SPS accelerators, determines the state of the machine, and informs the SPS and LHC machine protection systems of these conditions. This paper outlines the core concepts and realization of the SMPC test-bench, based on a VME crate and a LabVIEW program. Its main goal is to ensure the correct function of the SMPC for the protection of the CERN accelerator complex. To achieve this, the tester has been built to replicate the machine environment and operation, in order to ensure that the chassis under test is completely exercised. The complexity of the task increases with the number of input combinations, which are, in the case of the SMPC, in excess of 2^364. This paper also outlines the benefits and weaknesses of developing a test suite independently of the hardware being tested, using the "V" approach. | |||
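Since exhaustive coverage of more than 2^364 input combinations is impossible, a test bench of this kind has to rely on repeatable pseudo-random stimulus. The sketch below shows one cheap, reproducible generator (a Marsaglia xorshift64) as an illustration of the idea, not the SMPC tester's actual code:

```c
/* Sketch: reproducible pseudo-random test vectors via xorshift64.
 * The seed and the mapping of bits to SMPC inputs are assumptions. */
#include <stdint.h>

static uint64_t state = 0x9e3779b97f4a7c15ull;  /* fixed seed: repeatable */

uint64_t next_test_vector(void)
{
    state ^= state << 13;
    state ^= state >> 7;
    state ^= state << 17;   /* standard xorshift64 triple (13, 7, 17) */
    return state;           /* 64 input bits per call; call repeatedly
                               to fill wider input vectors */
}
```

Because the sequence is fully determined by the seed, any failing stimulus can be replayed exactly against the chassis under test.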
![]() |
Poster WEPMU002 [0.763 MB] | ||
WEPMU007 | Securing a Control System: Experiences from ISO 27001 Implementation | controls, EPICS, operation, network | 1062 |
|
|||
Recent incidents have emphasized the importance of security and operational continuity in achieving the quality objectives of an organization and the safety of its personnel and machines. However, security and disaster recovery are either completely ignored or given a low priority during the design and development of an accelerator control system, the underlying technologies, and the overlaid applications. This leads to an operational facility that is easy to breach and difficult to recover. Retrofitting security into the control system becomes much more difficult during operations. In this paper we describe our experiences in achieving ISO 27001 compliance for NSCL's control system. We illustrate the problems faced with securing low-level controls, infrastructure, and applications. We also provide guidelines for addressing security and disaster recovery issues upfront, during the development phase. | |||
![]() |
Poster WEPMU007 [1.304 MB] | ||
WEPMU011 | Automatic Injection Quality Checks for the LHC | injection, kicker, GUI, timing | 1077 |
|
|||
Twelve injections per beam are required to fill the LHC with the nominal filling scheme. The injected beam needs to fulfill a number of requirements to provide useful physics for the experiments when they take data at collisions later on in the LHC cycle. These requirements are checked by a dedicated software system, called the LHC injection quality check. At each injection, this system receives data about beam characteristics from key equipment in the LHC and analyzes it online to determine the quality of the injected beam after each injection. If the quality is insufficient, the automatic injection process is stopped, and the operator has to take corrective measures. This paper will describe the software architecture of the LHC injection quality check and the interplay with other systems. A set of tools for self-monitoring of the injection quality checks to achieve optimum performance will be discussed as well. Results obtained during the LHC commissioning year 2010 and the LHC run 2011 will finally be presented. | |||
![]() |
Poster WEPMU011 [0.358 MB] | ||
WEPMU015 | The Machine Protection System for the R&D Energy Recovery LINAC | FPGA, LabView, hardware, interface | 1087 |
|
|||
Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The Machine Protection System (MPS) is a device-safety system that is designed to prevent damage to hardware by generating interlocks, based upon the state of input signals generated by selected sub-systems. It protects all the key machinery in the R&D Project called the Energy Recovery LINAC (ERL) against the high beam current. The MPS is capable of responding to a fault with an interlock signal within several microseconds. The ERL MPS is based on a National Instruments CompactRIO platform, and is programmed by utilizing National Instruments' development environment for a visual programming language. The system also transfers data (interlock status, time of fault, etc.) to the main server. Transferred data is integrated into the pre-existing software architecture which is accessible by the operators. This paper will provide an overview of the hardware used, its configuration and operation, as well as the software written both on the device and the server side. |
|||
![]() |
Poster WEPMU015 [17.019 MB] | ||
WEPMU016 | Pre-Operation, During Operation and Post-Operational Verification of Protection Systems | operation, injection, controls, database | 1090 |
|
|||
This paper will provide an overview of the software checks performed on the Beam Interlock System to ensure that the system is functioning to specification. Critical protection functions are implemented in hardware; at the same time, software tools play an important role in guaranteeing the correct configuration and operation of the system during all phases of operation. This paper will describe the tests carried out pre-, during and post-operation; if protection system integrity is not assured, subsequent injections of beam into the LHC are inhibited. | |||
WEPMU019 | First Operational Experience with the LHC Beam Dump Trigger Synchronisation Unit | hardware, embedded, monitoring, operation | 1100 |
|
|||
Two LHC Beam Dumping Systems (LBDS) remove the counter-rotating beams safely from the collider during setting up of the accelerator, at the end of a physics run and in case of emergencies. Dump requests can come from 3 different sources: the machine protection system in emergency cases, the machine timing system for scheduled dumps, or the LBDS itself in case of internal failures. These dump requests are synchronised with the 3 μs beam abort gap in a fail-safe redundant Trigger Synchronisation Unit (TSU) based on Digital Phase Lock Loops (DPLL), locked onto the LHC beam revolution frequency with a maximum phase error of 40 ns. The synchronised trigger pulses coming out of the TSU are then distributed to the high voltage generators of the beam dump kickers through a redundant fault-tolerant trigger distribution system. This paper describes the operational experience gained with the TSU since its commissioning with beam in 2009, and highlights the improvements which have been implemented for safer operation. These include an increase in the diagnosis and monitoring functionalities, a more automated validation of the hardware and embedded firmware before deployment, and the execution of a post-operational analysis of the TSU performance after each dump action. In the light of this first experience, the outcome of the external review performed in 2010 is presented. The lessons learnt on the project life-cycle for the design of mission-critical electronic modules are discussed. | |||
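A digital phase-locked loop of the kind used in the TSU can be sketched as a discrete-time PI tracking loop: the phase error between the received revolution tick and the local prediction corrects both the estimated frequency and phase. The gains, units and names below are illustrative assumptions, not the TSU firmware:

```c
/* Sketch: discrete-time DPLL tracking a revolution frequency.
 * Phase wrapping and fault handling are omitted for brevity. */
typedef struct {
    double phase;  /* predicted phase of next revolution tick */
    double freq;   /* estimated phase increment per update    */
    double kp, ki; /* proportional and integral loop gains    */
} dpll_t;

/* Advance the loop by one revolution marker; 'measured' is the phase
 * of the received tick in the same units as d->phase. */
void dpll_update(dpll_t *d, double measured)
{
    double err = measured - d->phase;
    d->freq  += d->ki * err;            /* integral path: track frequency */
    d->phase += d->freq + d->kp * err;  /* proportional path: close phase */
}
```

Once locked, such a loop keeps predicting the abort gap position even across brief input disturbances, which is what allows the trigger to be synchronised with a bounded phase error.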
![]() |
Poster WEPMU019 [1.220 MB] | ||
WEPMU033 | Monitoring Control Applications at CERN | controls, monitoring, operation, framework | 1141 |
|
|||
The Industrial Controls and Engineering (EN-ICE) group of the Engineering Department at CERN has produced, and is responsible for the operation of, around 60 applications, which control critical processes in the domains of cryogenics, quench protection systems, power interlocks for the Large Hadron Collider and other sub-systems of the accelerator complex. These applications require 24/7 operation and quick reaction to problems. For this reason EN-ICE is presently developing a monitoring tool to detect, anticipate and inform of possible anomalies in the integrity of the applications. The tool builds on top of the Simatic WinCC Open Architecture (formerly PVSS) SCADA and makes use of the Joint COntrols Project (JCOP) and UNICOS frameworks developed at CERN. The tool provides centralized monitoring of the different elements integrating the controls systems, like Windows and Linux servers, PLCs, applications, etc. Although the primary aim of the tool is to assist the members of the EN-ICE Standby Service, the tool may present different levels of detail of the systems depending on the user, which enables experts to diagnose and troubleshoot problems. In this paper, the scope, functionality and architecture of the tool are presented and some initial results on its performance are summarized. | |||
![]() |
Poster WEPMU033 [1.719 MB] | ||
WEPMU036 | Efficient Network Monitoring for Large Data Acquisition Systems | network, monitoring, interface, database | 1153 |
|
|||
Though constantly evolving and improving, the available network monitoring solutions have limitations when applied to the infrastructure of a high-speed real-time data acquisition (DAQ) system. DAQ networks are particular computer networks where experts have to pay attention to both individual subsections and system-wide traffic flows while monitoring the network. The ATLAS network at the Large Hadron Collider (LHC) has more than 200 switches interconnecting 3500 hosts and totaling 8500 high-speed links. The use of heterogeneous tools for monitoring various infrastructure parameters, in order to assure optimal DAQ system performance, proved to be a tedious and time-consuming task for experts. To alleviate this problem we used our networking and DAQ expertise to build a flexible and scalable monitoring system providing an intuitive user interface with the same look and feel irrespective of the data provider that is used. Our system uses custom-developed components for critical performance monitoring and seamlessly integrates complementary data from auxiliary tools, such as NAGIOS, information services or custom databases. A number of techniques (e.g. normalization, aggregation and data caching) were used in order to improve the user interface response time. The end result is a unified monitoring interface, for fast and uniform access to system statistics, which significantly reduced the time spent by experts on ad-hoc and post-mortem analysis. | |||
![]() |
Poster WEPMU036 [5.945 MB] | ||
WEPMU040 | Packaging of Control System Software | controls, EPICS, Linux, database | 1168 |
|
|||
Funding: ITER European Union, European Regional Development Fund and Republic of Slovenia, Ministry of Higher Education, Science and Technology. Control system software consists of several parts – the core of the control system, drivers for the integration of devices, configuration for user interfaces, the alarm system, etc. Once the software is developed and configured, it must be installed on the computers where it runs. Usually, it is installed on an operating system whose services it needs, and in some cases it dynamically links with the libraries the operating system provides. An operating system can be quite complex itself – for example, a typical Linux distribution consists of several thousand packages. To manage this complexity, we have decided to rely on the Red Hat Package Manager (RPM) to package control system software, and also to ensure it is properly installed (i.e., that dependencies are also installed, and that scripts are run after installation if any additional actions need to be performed). As dozens of RPM packages need to be prepared, we are reducing the amount of effort and improving consistency between packages through a Maven-based infrastructure that assists in packaging (e.g., automated generation of RPM SPEC files, including automated identification of dependencies). So far, we have used it to package EPICS, Control System Studio (CSS) and several device drivers. We perform extensive testing on Red Hat Enterprise Linux 5.5, but we have also verified that the packaging works on CentOS and Scientific Linux. In this article, we describe in greater detail the systematic packaging approach we are using, and its particular application to the ITER CODAC Core System. |
|||
![]() |
Poster WEPMU040 [0.740 MB] | ||
THAAUST02 | Suitability Assessment of OPC UA as the Backbone of Ground-based Observatory Control Systems | controls, framework, interface, CORBA | 1174 |
|
|||
A common requirement of modern observatory control systems is to allow interaction between various heterogeneous subsystems in a transparent way. However, the integration of COTS industrial products - such as PLCs and SCADA software - has long been hampered by the lack of an adequate, standardized interfacing method. With the advent of the Unified Architecture version of OPC (Object Linking and Embedding for Process Control), the limitations of the original industry-accepted interface are now lifted, and in addition much more functionality has been defined. In this paper the most important features of OPC UA are matched against the requirements of ground-based observatory control systems in general and in particular of the 1.2m Mercator Telescope. We investigate the opportunities of the "information modelling" idea behind OPC UA, which could allow an extensive standardization in the field of astronomical instrumentation, similar to the standardization efforts emerging in several industry domains. Because OPC UA is designed for both vertical and horizontal integration of heterogeneous subsystems and subnetworks, we explore its capabilities to serve as the backbone of a dependable and scalable observatory control system, treating "industrial components" like PLCs no differently than custom software components. In order to quantitatively assess the performance and scalability of OPC UA, stress tests are described and their results are presented. Finally, we consider practical issues such as the availability of COTS OPC UA stacks, software development kits, servers and clients. | |||
![]() |
Slides THAAUST02 [2.879 MB] | ||
THBHAUST04 | jddd, a State-of-the-art Solution for Control Panel Development | controls, operation, feedback, status | 1189 |
|
|||
Software for graphical user interfaces to control systems may be developed as a rich or thin client. The thin client approach has the advantage that anyone can create and modify control system panels without specific skills in software programming. The Java DOOCS Data Display, jddd, is based on the thin client interaction model. It provides "Include" components and address inheritance for the creation of generic displays. Wildcard operations and regular expression filters are used to customize the graphics content at runtime, e.g. in a "DynamicList" component the parameters have to be painted only once in edit mode and then are automatically displayed multiple times for all available instances in run mode. This paper will describe the benefits of using jddd for control panel design as an alternative to rich client development. | |||
![]() |
Slides THBHAUST04 [0.687 MB] | ||
THBHAUIO06 | Cognitive Ergonomics of Operational Tools | controls, interface, operation, power-supply | 1196 |
|
|||
Control systems have become continuously more powerful over the past decades. The ability for high data throughput and sophisticated graphical interactions have opened a variety of new possibilities. But has it helped to provide intuitive, easy to use applications to simplify the operation of modern large scale accelerator facilities? We will discuss what makes an application useful to operation and what is necessary to make a tool easy to use. We will show that even the implementation of a small number of simple design rules for applications can help to ease the operation of a facility. | |||
![]() |
Slides THBHAUIO06 [23.914 MB] | ||
THBHMUST01 | Multi-platform SCADA GUI Regression Testing at CERN. | GUI, framework, Windows, Linux | 1201 |
|
|||
Funding: CERN The JCOP Framework is a toolkit used widely at CERN for the development of industrial control systems in several domains (i.e. experiments, accelerators and technical infrastructure). The software development started 10 years ago and there is now a large base of production systems running it. For the success of the project, it was essential to formalize and automate the quality assurance process. The paper will present the overall testing strategy and will describe in detail mechanisms used for GUI testing. The choice of a commercial tool (Squish) and the architectural features making it appropriate for our multi-platform environment will be described. Practical difficulties encountered when using the tool in the CERN context are discussed as well as how these were addressed. In the light of initial experience, the test code itself has been recently reworked in OO style to facilitate future maintenance and extension. The paper concludes with a description of our initial steps towards incorporation of full-blown Continuous Integration (CI) support. |
|||
![]() |
Slides THBHMUST01 [1.878 MB] | ||
THBHMUST02 | Assessing Software Quality at Each Step of its Lifecycle to Enhance Reliability of Control Systems | TANGO, controls, monitoring, factory | 1205 |
|
|||
A distributed software control system aims to enhance evolvability and reliability by sharing responsibility between several components. The disadvantage is that detecting problems is harder across a significant number of modules. In the Kaizen spirit, we chose to continuously invest in automation to obtain a complete overview of software quality despite the growth of legacy code. The development process was already mastered by staging each lifecycle step thanks to a continuous integration server based on JENKINS and MAVEN. We enhanced this process focusing on three objectives: automatic testing, static code analysis and post-mortem supervision. Now the build process automatically includes the test part to detect regressions, wrong behavior and integration incompatibilities. The in-house TANGOUNIT project addresses the difficulties of testing distributed components such as Tango devices. Next, the programming code has to pass a complete code-quality check-up: the SONAR quality server was integrated into the process to collect each static code analysis and display the hot topics on synthetic web pages. Finally, the integration of Google BREAKPAD into every Tango device gives us essential statistics from crash reports and allows us to replay crash scenarios at any time. The gains already give us more visibility on current developments. Some concrete results will be presented, such as reliability enhancement, better management of subcontracted software development, quicker adoption of coding standards by new developers, and understanding of the impacts of moving to a new technology. | |||
![]() |
Slides THBHMUST02 [2.973 MB] | ||
THBHMUST04 | The Software Improvement Process – Tools and Rules to Encourage Quality | operation, controls, feedback, FEL | 1212 |
|
|||
The Applications section of the CERN accelerator controls group has decided to apply a systematic approach to quality assurance (QA), the "Software Improvement Process" (SIP). This process focuses on three areas: the development process itself, suitable QA tools, and how to practically encourage developers to do QA. For each stage of the development process we have agreed on the recommended activities and deliverables, and identified tools to automate and support the task. For example, we do more code reviews. As peer reviews are resource-intensive, we only do them for complex parts of a product. As a complement, we are using static code checking tools, like FindBugs and Checkstyle. We also encourage unit testing and have agreed on a minimum level of test coverage recommended for all products, measured using Clover. Each of these tools is well integrated with our IDE (Eclipse) and gives instant feedback to the developer about the quality of their code. The major challenges of SIP have been (1) agreeing on common standards and configurations, for example common code formatting and Javadoc documentation guidelines, and (2) encouraging the developers to do QA. To address the second point, we have successfully implemented 'SIP days', i.e. one day dedicated to QA work in which the whole group of developers participates, and 'Top/Flop' lists, clearly indicating the best and worst products with regard to SIP guidelines and standards, for example test coverage. This paper presents the SIP initiative in more detail, summarizing our experience over the past two years and our future plans. | |||
![]() |
Slides THBHMUST04 [5.638 MB] | ||
THCHAUST02 | Large Scale Data Facility for Data Intensive Synchrotron Beamlines | data-management, experiment, synchrotron, detector | 1216 |
|
|||
ANKA is a large-scale facility of the Helmholtz Association of National Research Centers in Germany, located at the Karlsruhe Institute of Technology. As a synchrotron light source it provides light from hard X-rays to the far infrared for research and technology. It serves as a user facility for the national and international scientific community, currently producing 100 TB of data per year. Within the next two years a couple of additional data-intensive beamlines will become operational, producing up to 1.6 PB per year. These amounts of data have to be stored and provided on demand to the users. The Large Scale Data Facility (LSDF) is located on the same campus as ANKA. It is a data service facility dedicated to data-intensive scientific experiments. Currently, 4 PB of storage for unstructured and structured data and a Hadoop cluster as a computing resource for data-intensive applications are available. Within the campus, experiments and the main large data-producing facilities are connected via 10 GE network links. An additional 10 GE link exists to the internet. Tools for easy and transparent access allow scientists to use the LSDF without bothering with its internal structures and technologies. Open interfaces and APIs support a variety of access methods to the highly available services for high-throughput data applications. In close cooperation with ANKA, the LSDF provides assistance to efficiently organize data and metadata structures, and develops and deploys community-specific software running on the directly connected computing infrastructure. | |||
![]() |
Slides THCHAUST02 [1.294 MB] | ||
THCHAUST05 | LHCb Online Log Analysis and Maintenance System | Linux, network, detector, controls | 1228 |
|
|||
History has shown many times that computer logs are the only information an administrator has about an incident, which could be caused either by a malfunction or by an attack. Due to the huge amount of logs produced by large-scale IT infrastructures such as LHCb Online, critical information may be overlooked or simply drowned in a sea of other messages. This clearly demonstrates the need for an automatic system for long-term maintenance and real-time analysis of the logs. We have constructed a low-cost, fault-tolerant centralized logging system which is able to do in-depth analysis and cross-correlation of every log. This system is capable of handling O(10000) different log sources and numerous formats, while trying to keep the overhead as low as possible. It provides log gathering and management, offline analysis and online analysis. We call offline analysis the procedure of analyzing old logs for critical information, while online analysis refers to the procedure of early alerting and reacting. The system is extensible and cooperates well with other applications such as Intrusion Detection/Prevention Systems. This paper presents the LHCb Online topology, the problems we had to overcome and our solutions. Special emphasis is given to log analysis, how we use it for monitoring, and how we maintain uninterrupted access to the logs. We provide performance plots, code modifications to well-known log tools and our experience from trying various storage strategies. | |||
![]() |
Slides THCHAUST05 [0.377 MB] | ||
THCHMUST02 | Control and Test Software for IRAM Widex Correlator | real-time, Linux, simulation, hardware | 1240 |
|
|||
IRAM is an international research institute for radio astronomy. It has designed a new correlator called WideX for the Plateau de Bure interferometer (an array of six 15-meter telescopes) in the French Alps. The device started its official service in February 2010. This correlator must be driven in real time at 32 Hz for sending parameters and for data acquisition. With 3.67 million channels, distributed over 1792 dedicated chips, producing a 1.87 Gbit/s output data rate, data acquisition and processing, as well as automatic hardware-failure detection, are big challenges for the software. This article presents the software that has been developed to drive and test the correlator. In particular, it presents an innovative use of a high-speed optical link, initially developed for the CERN ALICE experiment, associated with real-time Linux (RTAI) to achieve our goals. | |||
![]() |
Slides THCHMUST02 [2.272 MB] | ||
THCHMUST05 | The Case for Soft-CPUs in Accelerator Control Systems | FPGA, hardware, controls, Linux | 1252 |
|
|||
The steady improvements in Field Programmable Gate Array (FPGA) performance, size, and cost have driven their ever increasing use in science and industry. As FPGA sizes continue to increase, more and more devices and logic are moved from external chips to FPGAs. For simple hardware devices, the savings in board area and ASIC manufacturing setup are compelling. For more dynamic logic, the trade-off is not always as clear. Traditionally, this has been the domain of CPUs and software programming languages. In hardware designs already including an FPGA, it is tempting to remove the CPU and implement all logic in the FPGA, saving component costs and increasing performance. However, that logic must then be implemented in the more constraining hardware description languages, cannot be as easily debugged or traced, and typically requires significant FPGA area. For performance-critical tasks this trade-off can make sense. However, for the myriad slower and dynamic tasks, software programming languages remain the better choice. One great benefit of a CPU is that it can perform many tasks. Thus, by including a small "Soft-CPU" inside the FPGA, all of the slower tasks can be aggregated into a single component. These tasks may then re-use existing software libraries, debugging techniques, and device drivers, while retaining ready access to the FPGA's internals. This paper discusses requirements for using Soft-CPUs in this niche, especially for the FAIR project. Several open-source alternatives will be compared and recommendations made for the best way to leverage a hybrid design. | |||
![]() |
Slides THCHMUST05 [0.446 MB] | ||
THDAULT01 | Modern System Architectures in Embedded Systems | embedded, controls, FPGA, hardware | 1260 |
|
|||
Several new technologies are also making their way into embedded systems. In addition to FPGA technology, which has become commonplace, multicore CPUs and I/O virtualization (among others) are being introduced to embedded systems. In our paper we present our ideas and studies on how to take advantage of these features in control systems. Some application examples involving CPU partitioning, virtualized I/O and so on are discussed, along with some benchmarks. | |||
![]() |
Slides THDAULT01 [1.426 MB] | ||
FRAAULT02 | STUXNET and the Impact on Accelerator Control Systems | controls, network, hardware, Windows | 1285 |
|
|||
2010 saw wide news coverage of a new kind of computer attack, named "Stuxnet", targeting control systems. Due to its level of sophistication, it is widely acknowledged that this attack marks the very first case of cyber-war by one country against the industrial infrastructure of another, although there is still much speculation about the details. Worse yet, experts recognize that Stuxnet might just be the beginning, and that similar attacks, possibly with much less sophistication but much more collateral damage, can be expected in the years to come. Stuxnet targeted a specific model of the Siemens 400 PLC series. Similar modules are also deployed for accelerator controls, like the LHC cryogenics or vacuum systems, and for the detector control systems in the LHC experiments. Therefore, the aim of this presentation is to give an insight into what this new attack does and why it is deemed to be special. In particular, the potential impact on accelerator and experiment control systems will be discussed, and means of properly protecting against similar attacks will be presented. | |||
![]() |
Slides FRAAULT02 [8.221 MB] | ||
FRAAUIO05 | High-Integrity Software, Computation and the Scientific Method | experiment, controls, background, vacuum | 1297 |
|
|||
Given the overwhelming use of computation in modern science and the continuing difficulties in quantifying the results of complex computations, it is of increasing importance to understand its role in the essentially Popperian scientific method. There is a growing debate, but this has some distance to run yet, with journals still divided on what even constitutes repeatability. Computation rightly occupies a central role in modern science. Datasets are enormous and the processing implications of some algorithms are equally staggering. In this paper, some of the problems with computation, for example with respect to specification, implementation, the use of programming languages and the long-term, unquantifiable presence of undiscovered defects, will be explored with numerous examples. One of the aims of the paper is to understand the implications of trying to produce high-integrity software and the limitations which still exist. Unfortunately, Computer Science itself suffers from an inability to be suitably critical of its practices and has operated in a largely measurement-free vacuum since its earliest days. Within CS itself, this has not been so damaging, in that it simply leads to unconstrained creativity and a rapid turnover of new technologies. In the applied sciences however, which have to depend on computational results, such unquantifiability significantly undermines trust. It is time this particular demon was put to rest. | |||
![]() |
Slides FRAAUIO05 [0.710 MB] | ||
FRBHAULT01 | Feed-forward in the LHC | feedback, real-time, controls, database | 1302 |
|
|||
The LHC operational cycle comprises several phases such as the ramp, the squeeze and stable beams. During the ramp and squeeze in particular, it has been observed that the behaviour of key LHC beam parameters such as tune, orbit and chromaticity is highly reproducible from fill to fill. To reduce the reliance on the crucial feedback systems, it was decided to perform fill-to-fill feed-forward corrections. The LHC feed-forward application was developed to ease the introduction of corrections to the operational settings. It retrieves the feedback systems' corrections from the logging database and applies appropriate corrections to the ramp and squeeze settings. The LHC feed-forward software has been used during LHC commissioning, and tune and orbit corrections during the ramp have been successfully applied. As a result, the required real-time corrections for the above parameters have been reduced to a minimum. | |||
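The feed-forward step itself reduces to folding the corrections logged by the feedback systems during one fill into the settings used for the next. A minimal sketch, with hypothetical names and a gain factor to apply the trims cautiously (not the actual LHC application code):

```c
/* Sketch: fill-to-fill feed-forward on a sampled settings function. */
enum { N_POINTS = 1024 };   /* samples along the ramp (assumed) */

/* settings[i]:    setting value at ramp point i for the next fill
 * logged_trim[i]: feedback correction logged at the same point
 * gain:           fraction of the trim fed forward, e.g. 0.8      */
void feed_forward(double *settings, const double *logged_trim, double gain)
{
    for (int i = 0; i < N_POINTS; i++)
        settings[i] += gain * logged_trim[i];
    /* The residual left to the real-time feedback loops shrinks from
     * fill to fill as the reproducible part is absorbed here. */
}
```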
Slides FRBHAULT01 [0.961 MB]
FRBHMUST01 | The Design of the Alba Control System: A Cost-Effective Distributed Hardware and Software Architecture. | controls, TANGO, database, interface | 1318 |
The control system of Alba is highly distributed from both the hardware and the software points of view. The hardware infrastructure for the control system includes of the order of 350 racks, 20,000 cables and 6,200 pieces of equipment. More than 150 diskless industrial computers, distributed in the service area, and 30 multicore servers in the data center manage several thousand process variables. The software is, of course, as distributed as the hardware. It is also a success story of the Tango collaboration, where a complete software infrastructure is available "off the shelf". In addition, Tango has been productively complemented with the powerful Sardana framework, a great effort in terms of development from which several institutes now benefit. The whole installation has been coordinated from the beginning through a complete cabling and equipment database, in which all the equipment, cables and connectors are described and inventoried. This so-called "cabling database" is the core of the installation: the equipment and cables are defined there, and basic hardware configuration such as MAC and IP addresses and DNS names is also gathered in it, allowing the network communication files and the declarations of variables in the PLCs to be created automatically. This paper explains the design and the architecture of the control system, describes the tools and justifies the choices made. Furthermore, it presents and analyzes the figures regarding cost and performance.
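A minimal sketch of generating network configuration files from a single equipment inventory, in the spirit of the cabling database described above; the schema and output formats are invented for illustration and do not reflect the actual Alba database:

```python
# Invented schema: one table mapping device name to MAC and IP address.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE equipment (name TEXT, mac TEXT, ip TEXT)")
db.executemany("INSERT INTO equipment VALUES (?, ?, ?)", [
    ("ioc-sr01", "00:11:22:33:44:01", "10.0.1.1"),
    ("plc-fe02", "00:11:22:33:44:02", "10.0.1.2"),
])

# Emit hosts-file lines and DHCP host declarations from the one source,
# so the network configuration can never drift from the inventory.
for name, mac, ip in db.execute("SELECT name, mac, ip FROM equipment"):
    print(f"{ip}\t{name}")
    print(f"host {name} {{ hardware ethernet {mac}; fixed-address {ip}; }}")
```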
Slides FRBHMUST01 [4.616 MB]
FRBHMUST02 | Towards High Performance Processing in Modern Java Based Control Systems | monitoring, controls, real-time, distributed | 1322 |
CERN controls software is often developed on a Java foundation. Some systems carry out a combination of data-, network- and processor-intensive tasks within strict time limits; hence there is a demand for high-performing, quasi-real-time solutions. Extensive prototyping of the new CERN monitoring and alarm software required us to address such expectations. The system must handle tens of thousands of data samples every second, along its three tiers, applying complex computations throughout. To accomplish this goal, a deep understanding of multithreading, memory management and interprocess communication was required. Unexpected traps hide behind the excessive use of 64-bit memory, and modern garbage collectors, including the state-of-the-art Oracle Garbage-First (G1) collector, can severely impact the processing flow. Tuning the JVM configuration significantly affects the execution of the code; even more important are the number of threads and the data structures shared between them. Accurately dividing work into independent tasks can boost system performance. Thorough profiling with dedicated tools helped us understand the bottlenecks and choose algorithmically optimal solutions. Different virtual machines were tested, in a variety of setups and with a variety of garbage-collection options. The overall work allowed the actual hard limits of the whole setup to be discovered. We present this process of architecting a challenging system in view of the characteristics and limitations of the contemporary Java runtime environment.
http://cern.ch/marekm/icalepcs.html
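The paper's findings concern the JVM specifically, but the principle of dividing work into independent tasks sized for a worker pool is language-neutral; a minimal sketch under that assumption, with an invented per-batch computation standing in for the real processing:

```python
# Language-neutral sketch of splitting a sample stream into independent
# batches for a worker pool (the paper itself concerns the JVM).
from concurrent.futures import ProcessPoolExecutor

def process_batch(samples):
    # Stand-in for the "complex computations" applied to each batch.
    return sum(s * s for s in samples)

if __name__ == "__main__":
    samples = list(range(100_000))
    size = 10_000
    batches = [samples[i:i + size] for i in range(0, len(samples), size)]

    # Independent batches share no mutable state, so workers never
    # serialize on locks and the pool can scale with available cores.
    with ProcessPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(process_batch, batches))
    print(sum(results))
```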
Slides FRBHMUST02 [4.514 MB]
FRBHMUST03 | Thirty Meter Telescope Observatory Software Architecture | controls, hardware, operation, software-architecture | 1326 |
The Thirty Meter Telescope (TMT) will be a ground-based, 30 m optical-IR telescope with a highly segmented primary mirror, located on the summit of Mauna Kea in Hawaii. The TMT Observatory Software (OSW) system will deliver the software applications and infrastructure necessary to integrate all TMT software into a single system and to implement a minimal end-to-end science operations system. At the telescope, OSW is focused on integrating and efficiently controlling and coordinating the telescope, adaptive optics, science instruments and their subsystems during observation execution. From the software architecture viewpoint, the system is a set of software components distributed across many machines, integrated using a shared software base and a set of services that provide communications and other needed functionality. This paper describes the current state of the TMT Observatory Software, focusing on its unique requirements, its architecture, and the use of middleware technologies and solutions that enable the OSW design.
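A toy sketch of components integrating through a shared communications service rather than referencing each other directly; this is purely schematic and not the TMT OSW design:

```python
# Invented example: components depend only on a shared event service.
from collections import defaultdict

class EventService:
    """Shared service: components publish and subscribe by topic name."""
    def __init__(self):
        self.subscribers = defaultdict(list)
    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)
    def publish(self, topic, payload):
        for handler in self.subscribers[topic]:
            handler(payload)

bus = EventService()
# The instrument side and the telescope side never import each other;
# both are coupled only to the service and an agreed topic name.
bus.subscribe("telescope.inPosition", lambda ev: print("start exposure:", ev))
bus.publish("telescope.inPosition", {"az": 182.4, "el": 65.0})
```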
Slides FRBHMUST03 [3.788 MB]
FRBHMULT04 | Towards a State Based Control Architecture for Large Telescopes: Laying a Foundation at the VLT | controls, distributed, operation, interface | 1330 |
Large telescopes are characterized by a high level of distribution of control-related tasks and will feature diverse data flow patterns and large ranges of sampling frequencies; there will often be no single, fixed server-client relationship between the control tasks. The architecture is also challenged by the task of integrating heterogeneous subsystems delivered by multiple different contractors. Owing to the high number of distributed components, the control system needs to detect errors and faults effectively, impede their propagation, and mitigate them accurately in the shortest time possible, so that service can be restored. The presented data-driven architecture is based on a decentralized approach with an end-to-end integration of disparate, independently developed software components, using a high-performance, standards-based communication middleware infrastructure built on the Data Distribution Service (DDS). A set of rules and principles, based on JPL's State Analysis method and architecture, is established to avoid undisciplined component-to-component interactions, and the Control System and the System Under Control are clearly separated. State Analysis provides a model-based process for capturing system and software requirements and design, helping to reduce the gap between the requirements on software specified by systems engineers and the implementation by software engineers. The method and architecture have been field-tested at the Very Large Telescope, where they have been integrated into an operational system with minimal downtime.
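A minimal sketch of the State Analysis separation described above, in which the control system estimates state from measurements before acting on goals; all names are illustrative, not the VLT or JPL implementation:

```python
# Invented example: the controller never commands hardware directly from
# goals; it estimates state from evidence, compares with the goal, acts.

class SystemUnderControl:
    """Stands in for the hardware: accepts commands, reports measurements."""
    def __init__(self):
        self.position = 0.0
    def command(self, delta):
        self.position += delta
    def measure(self):
        return self.position

class StateVariable:
    """Control-system-side estimate of the hardware state."""
    def __init__(self):
        self.estimate = None
    def update(self, measurement):
        self.estimate = measurement

def control_step(goal, state, hardware):
    state.update(hardware.measure())   # estimation, from evidence only
    error = goal - state.estimate
    if abs(error) > 1e-3:              # goal not yet achieved
        hardware.command(0.5 * error)  # bounded corrective action

hw, sv = SystemUnderControl(), StateVariable()
for _ in range(20):
    control_step(goal=10.0, state=sv, hardware=hw)
print(round(sv.estimate, 3))           # converges towards the goal
```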
Slides FRBHMULT04 [3.504 MB]
FRCAUST03 | Status of the ESS Control System | controls, database, hardware, EPICS | 1345 |
The European Spallation Source (ESS) is a high-current proton LINAC to be built in Lund, Sweden. The LINAC delivers 5 MW of power to the target at 2500 MeV, with a nominal current of 50 mA, and is designed so that it can be upgraded to a higher power of 7.5 MW at a fixed energy of 2500 MeV. The Accelerator Design Update (ADU) collaboration, of mainly European institutions, will deliver a Technical Design Report at the end of 2012. First protons are expected in 2018, and first neutrons in 2019. The ESS will be constructed by a number of geographically dispersed institutions, which means that a considerable part of the control system integration will potentially be performed off-site. To mitigate this organizational risk, significant effort will be put into the standardization of hardware, software and development procedures early in the project. We have named the main result of this standardization the Control Box concept. The ESS will use EPICS, and will build on the positive distributed-development experiences of SNS and ITER. The current state of the control system design and the key decisions are presented in the paper, as well as immediate challenges and proposed solutions.
From PAC 2011 article: http://eval.esss.lu.se/cgi-bin/public/DocDB/ShowDocument?docid=45
From IPAC 2010 article: http://eval.esss.lu.se/cgi-bin/public/DocDB/ShowDocument?docid=26
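In the spirit of the standardized EPICS-based Control Box, a minimal client that monitors a process variable might look as follows; this assumes the pyepics package and a reachable IOC, and the PV name is invented for illustration:

```python
# Assumes pyepics is installed and an IOC serving the PV is reachable.
import time
from epics import PV

def on_change(pvname=None, value=None, **kw):
    # Called by the Channel Access monitor whenever the PV updates.
    print(f"{pvname} -> {value}")

current = PV("LINAC:BEAM:Current")   # hypothetical PV name
current.add_callback(on_change)

time.sleep(10)                       # receive monitor updates for 10 s
```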
Slides FRCAUST03 [1.944 MB]
FRCAUST04 | Status of the ASKAP Monitoring and Control System | controls, EPICS, hardware, monitoring | 1349 |
The Australian Square Kilometre Array Pathfinder, or ASKAP, is CSIRO's new radio telescope currently under construction at the Murchison Radio-astronomy Observatory (MRO) in the Mid West region of Western Australia. As well as being a world-leading telescope in its own right, ASKAP will be an important testbed for the Square Kilometre Array, a future international radio telescope that will be the world's largest and most sensitive. This paper gives a status update of the ASKAP project and provides a detailed look at the initial deployment of the monitoring and control system, as well as the major issues to be addressed in future software releases before the start of system commissioning later this year.
Slides FRCAUST04 [3.414 MB]