Paper | Title | Other Keywords | Page |
---|---|---|---|
MOCOAAB01 | The First Running Period of the CMS Detector Controls System - A Success Story | detector, experiment, status, hardware | 1 |
After only three months of commissioning, the CMS detector controls system (DCS) was running at close to 100% efficiency. Despite millions of parameters to control and the distributed development structure typical of HEP, only minor problems were encountered. The system can be operated by a single person and the required maintenance effort is low. A well-factorized system structure and development process are key to this success, as is a centralized, service-like deployment approach. The underlying controls software, PVSS, has proven to work in a DCS environment. Converting the DCS to full redundancy will further reduce the need for interventions to a minimum.
Slides MOCOAAB01 [1.468 MB]
MOCOAAB02 | Design and Status of the SuperKEKB Accelerator Control System | network, timing, EPICS, interface | 4 |
SuperKEKB is the upgrade of the KEKB asymmetric-energy e+e− collider for the B-factory experiment in Japan, designed to achieve a luminosity 40 times higher than the world record set by KEKB. The KEKB control system was based on EPICS at the equipment layer and scripting languages at the operation layer. The SuperKEKB control system retains these features, while adding technologies needed for successful operation at such high luminosity. In the accelerator control network we introduce 10GbE for wider-bandwidth data transfer, and redundant configurations for reliability; network security is also enhanced. For the SuperKEKB construction, a wireless network has been installed in the beamline tunnel. In the timing system, a new configuration for positron beams is required. We have developed a faster-response beam abort system, interface modules to control thousands of magnet power supplies, and a monitoring system for the final-focusing superconducting magnets to assure stable operation. We also introduce the EPICS embedded PLC, in which EPICS runs on a CPU module. The design and status of the SuperKEKB accelerator control system will be presented.
Slides MOCOAAB02 [5.930 MB]
MOCOAAB03 | The Spiral2 Control System Progress Towards the Commission Phase | interface, PLC, EPICS, database | 8 |
The commissioning of the Spiral2 Radioactive Ion Beam facility at Ganil will soon start, requiring the control system components to be delivered in time. Parts of the system have already been validated during preliminary tests performed with ion and deuteron beams at low energy. The control system development results from a collaboration between the Ganil, CEA/IRFU and CNRS/IPHC laboratories, using appropriate tools and a common approach. Based on EPICS, the control system follows a classical architecture. At the lowest level, the Modbus/TCP protocol serves as a field bus. Equipment is handled by IOCs (soft or VME/VxWorks), with a standardized software interface between the IOCs and the client applications on top. This upper layer consists of standard EPICS tools, CSS/BOY user interfaces within the so-called CSSop Spiral2 context suited for operation, and, for machine tuning, high-level applications implemented as Java programs developed within a Spiral2 framework derived from the Open XAL one. Databases are used for equipment data and alarm archiving, to configure equipment, and to manage the machine lattice and beam settings. A global overview of the system is therefore proposed here.
Slides MOCOAAB03 [3.205 MB]
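The Spiral2 abstract above treats Modbus/TCP as a field bus at the lowest control level. As a minimal sketch of what talking to such equipment involves, the snippet below builds the wire format of a Modbus/TCP "Read Holding Registers" request (function code 0x03). It only constructs bytes and contacts no device; the transaction, unit and register values are illustrative, not Spiral2's.

```python
import struct

def modbus_read_request(transaction_id, unit_id, start_addr, count):
    """Build a Modbus/TCP ADU for 'Read Holding Registers' (function 0x03).

    MBAP header: transaction id, protocol id (always 0), length of the
    remaining bytes, unit id; followed by the PDU (function code,
    start address, register count), all big-endian.
    """
    pdu = struct.pack(">BHH", 0x03, start_addr, count)
    mbap = struct.pack(">HHHB", transaction_id, 0x0000, len(pdu) + 1, unit_id)
    return mbap + pdu

# Example: read 2 registers starting at address 0x0010 from unit 1
frame = modbus_read_request(transaction_id=1, unit_id=1, start_addr=0x0010, count=2)
```

A real client would send this frame over a TCP socket to port 502 and parse the mirrored MBAP header in the reply.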
MOCOAAB04 | The Integrated Control System at ESS | software, hardware, timing, linac | 12 |
The European Spallation Source (ESS) is a high-current proton linac to be built in Lund, Sweden. The linac delivers 5 MW of power to the target at 2500 MeV, with a nominal current of 50 mA. The project entered its construction phase on January 1st, 2013. In order to design, develop and deliver a reliable, well-performing and standardized control system for the ESS facility, the Integrated Control System (ICS) project has been established; it entered its construction phase on the same date. ICS consists of four distinct core components (Physics, Software Services, Hardware and Protection) that make up the essence of the control system. Integration Support activities support the stakeholders and users, and the Control System Infrastructure provides the underlying infrastructure required to operate the control system and the facility. The current state of the control system project and key decisions are presented, as well as immediate challenges and proposed solutions.
Slides MOCOAAB04 [11.760 MB]
MOCOAAB05 | Keck Telescope Control System Upgrade Project Status | EPICS, software, PLC, hardware | 15 |
The Keck telescopes, located at one of the world’s premier sites for astronomy, were the first of a new generation of very large ground-based optical/infrared telescopes, with the first Keck telescope beginning science operations in May 1993 and the second in October 1996. The components of the telescopes and their control systems are more than 15 years old. The upgrade to the telescope control systems covers mechanical, electrical, software and network components, with the overall goals of improving performance, increasing reliability, addressing serious obsolescence issues and providing a knowledge refresh. The telescope encoder systems will be replaced to fully meet demanding science requirements, and the electronics will be upgraded to meet the needs of modern instrumentation. The upgrade will remain backwards compatible with the remaining observatory subsystems to allow a phased migration to the new system. This paper describes where Keck is in the development process, presents key decisions that have been made, covers successes and challenges to date, and gives an overview of future plans.
Slides MOCOAAB05 [2.172 MB]
MOCOAAB06 | MeerKAT Control and Monitoring - Design Concepts and Status | monitoring, interface, hardware, status | 19 |
Funding: National Research Foundation of South Africa
This presentation gives a status update of the MeerKAT Control & Monitoring subsystem, focusing on the development philosophy, design concepts, technologies and key design decisions. It is supplemented by a poster with a live demonstration of the current KAT-7 Control & Monitoring system. The vision for MeerKAT is to a) use offset Gregorian antennas in a radio telescope array, combined with optimized receiver technology, in order to achieve superior imaging and maximum sensitivity; b) be the most sensitive instrument in the world in L-band; c) be an instrument that will be considered the benchmark for performance and reliability by the scientific community at large; and d) be a true precursor for the SKA that will be integrated into the SKA-mid dish array. The 7-dish engineering prototype (KAT-7) for MeerKAT is already producing exciting science and is operated 24x7. The first MeerKAT antenna will be on site by the end of this year, and the first two receptors will be fully integrated and ready for testing by April 2014. By December 2016 the hardware for all 64 receptors will be installed and accepted, and 32 antennas will be fully commissioned.
Slides MOCOAAB06 [1.680 MB]
MOCOAAB07 | Real Time Control for KAGRA, 3km Cryogenic Gravitational Wave Detector in Japan | network, EPICS, real-time, feedback | 23 |
KAGRA is a 3 km cryogenic interferometer for gravitational wave detection, located underground in the Kamioka mine in Japan. Next-generation large-scale interferometric gravitational wave detectors require very complicated control topologies for the optical path lengths between mirrors, and very low-noise feedback controls, in order to detect the extremely tiny relative motion of the mirrors excited by gravitational waves. The interferometer consists of a Michelson interferometer with Fabry-Perot cavities in its arms, plus two further mirrors implementing the so-called power recycling and resonant sideband extraction techniques. In total, five length degrees of freedom between the seven mirrors must be controlled simultaneously, and this control must be maintained continuously during the observation of gravitational waves. We are currently developing a computer-based real-time control system for KAGRA. In this talk, we report how the control system works.
Slides MOCOAAB07 [8.536 MB]
MOCOBAB01 | New Electrical Network Supervision for CERN: Simpler, Safer, Faster, and Including New Modern Features | network, status, operation, framework | 27 |
In 2012, an effort was started to replace the ageing electrical supervision system (managing more than 200,000 tags) currently in operation with a WinCC OA-based supervision system, in order to unify the monitoring systems used by CERN operators and to leverage internal knowledge and development of the products (JCOP, UNICOS, etc.). Along with the classical functionality of a typical SCADA system (alarms, events, trending, archiving, access control, etc.), the supervision of the CERN electrical network requires a set of domain-specific applications gathered under the name EMS (Energy Management System). Such applications include network coloring, state estimation, power flow calculations, contingency analysis, optimal power flow, etc. Additionally, as electrical power is a critical service for CERN, high availability of its infrastructure, including its supervision system, is required. The supervision system is therefore redundant, along with a disaster recovery system that is itself redundant. In this paper, we present the overall architecture of the future supervision system, with an emphasis on the parts specific to the supervision of the electrical network.
Slides MOCOBAB01 [1.414 MB]
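Among the EMS applications listed in the abstract above are power flow calculations. As a hedged illustration of the simplest variant, the sketch below solves a DC power flow (lossless lines, small-angle approximation) on a toy network; the network data are invented and unrelated to CERN's actual grid.

```python
def dc_power_flow(lines, injections, slack=0):
    """Solve a DC power flow: P = B * theta, with the slack bus fixed at angle 0.

    lines: list of (from_bus, to_bus, reactance) in per unit
    injections: net power injection per bus (p.u.); should sum to ~0
    Returns bus angles (rad) and per-line flows (from -> to, p.u.).
    The reduced susceptance system is solved by Gaussian elimination.
    """
    n = len(injections)
    # Build the bus susceptance matrix B (1/x per line, lossless)
    B = [[0.0] * n for _ in range(n)]
    for i, j, x in lines:
        b = 1.0 / x
        B[i][i] += b; B[j][j] += b
        B[i][j] -= b; B[j][i] -= b
    # Drop the slack row/column and solve the reduced linear system
    keep = [k for k in range(n) if k != slack]
    A = [[B[i][j] for j in keep] for i in keep]
    rhs = [injections[i] for i in keep]
    m = len(keep)
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m):
                A[r][c] -= f * A[col][c]
            rhs[r] -= f * rhs[col]
    theta_red = [0.0] * m
    for r in range(m - 1, -1, -1):
        s = rhs[r] - sum(A[r][c] * theta_red[c] for c in range(r + 1, m))
        theta_red[r] = s / A[r][r]
    theta = [0.0] * n
    for k, i in enumerate(keep):
        theta[i] = theta_red[k]
    # Line flow under the DC approximation: (theta_i - theta_j) / x
    flows = [(i, j, (theta[i] - theta[j]) / x) for i, j, x in lines]
    return theta, flows
```

A production EMS adds losses, contingency screening and state estimation on top of this core; the sketch only shows the linearized balance that makes those computations tractable.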
MOCOBAB02 | Integration of PLC with EPICS IOC for SuperKEKB Control System | PLC, LLRF, EPICS, interface | 31 |
Recently, more and more PLCs have been adopted for various front-end controls of accelerators. It is common to connect the PLCs to the higher-level control layers over the network. As a result, the control logic becomes dispersed over separate layers: one part implemented as ladder programs on the PLCs, the other implemented in higher-level languages on front-end computers. The EPICS-based SuperKEKB accelerator control system, however, takes a different approach by using FA-M3 PLCs with a special CPU module (F3RP61), which runs Linux and functions as an IOC. This consolidation of PLC and IOC enables higher-level applications to reach every PLC placed at the front end directly via Channel Access. In addition, most of the control logic can be implemented in the IOC core program and/or the EPICS sequencer, making the system more homogeneous and the applications easier to develop and maintain. PLC-based IOCs of this type will be used to monitor and control many subsystems of SuperKEKB, such as the personnel protection system, vacuum system, RF system, magnet power supplies, and so on. This paper describes the application of these PLC-based IOCs to the SuperKEKB accelerator control system.
Slides MOCOBAB02 [1.850 MB]
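The abstract above notes that most control logic can live in the IOC core or the EPICS sequencer rather than in ladder code. The toy state machine below illustrates the kind of interlock logic meant, written in Python for readability; the states, thresholds and the vacuum scenario are invented, not taken from SuperKEKB.

```python
def interlock_step(state, pressure, setpoint=1e-4, hysteresis=0.5e-4):
    """One evaluation of a simple vacuum-interlock state machine.

    Sketch of sequencer-style logic (illustrative values, not SuperKEKB's).
    States: 'OK' (RF permitted) and 'TRIP' (RF inhibited). A trip latches
    until the pressure falls below setpoint - hysteresis, so the output
    does not chatter when the reading hovers around the setpoint.
    """
    if state == "OK" and pressure > setpoint:
        return "TRIP"
    if state == "TRIP" and pressure < setpoint - hysteresis:
        return "OK"
    return state
```

In an actual F3RP61 IOC this transition table would be an EPICS sequencer state set (SNL) reacting to PV monitors rather than a Python function; the hysteresis pattern carries over unchanged.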
MOCOBAB03 | The Laser MegaJoule ICCS Integration Platform | software, interface, hardware, site | 35 |
The French Atomic Energy Commission (CEA) has just built an integration platform outside the LMJ facility in order to assemble the various components of the Integrated Control Command System (ICCS). The talk gives an overview of this integration platform and the qualification strategy, based on the use of equipment simulators, and focuses on several tools that have been developed to integrate each sub-system and qualify the overall behavior of the ICCS. Each delivery kit of a sub-system component (virtual machine, WIM, PLC, etc.) is scanned by antivirus software and stored in the delivery database. A specific tool allows the deployment of the delivery kits on the hardware platform (a copy of the LMJ hardware platform). The TMW (Testing Management Workstation) then performs automatic tests by coordinating the behavior of the equipment simulators and of the operator. The test configurations, test scenarios and test results are stored in another database. Test results are analyzed, and every dysfunction is stored in an event database used to perform reliability calculations for each component. The qualified software is delivered to the LMJ to perform the commissioning of each bundle.
Slides MOCOBAB03 [2.025 MB]
MOCOBAB04 | The Advanced Radiographic Capability, a Major Upgrade of the Computer Controls for the National Ignition Facility | software, laser, target, operation | 39 |
Funding: This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. #LLNL-ABS-633793
The Advanced Radiographic Capability (ARC), currently under development for the National Ignition Facility (NIF), will provide short (1-50 picosecond), ultra-high-power (>1 petawatt) laser pulses used for a variety of diagnostic purposes on NIF, ranging from a high-energy x-ray pulse source for backlighter imaging to an experimental platform for fast ignition. A single NIF quad (4 beams) is being upgraded to support experimentally driven, autonomous operations using either ARC or existing NIF pulses. Using its own seed oscillator, ARC generates short, wide-bandwidth pulses that propagate down the existing NIF beamlines for amplification before being redirected through large-aperture gratings that perform chirped-pulse compression, generating a series of high-intensity pulses within the target chamber. This significant integration effort adds 40% more control points to the existing NIF quad and will be deployed in several phases over the coming year. This talk discusses some of the unique ARC software controls used for short-pulse operation on NIF and the integration techniques being used to expedite deployment of this new diagnostic.
Slides MOCOBAB04 [3.279 MB]
MOCOBAB05 | How to Successfully Renovate a Controls System? - Lessons Learned from the Renovation of the CERN Injectors’ Controls Software | operation, software, GUI, software-architecture | 43 |
Renovation of the control system of the CERN LHC injectors was initiated in 2007 within the scope of the Injector Controls Architecture (InCA) project. One of its main objectives was to homogenize the controls software across the CERN accelerators and to reuse existing modern sub-systems, such as the settings management used for the LHC, as much as possible. The project team created a platform that permits coexistence and intercommunication between old and new components via a dedicated gateway, allowing a progressive replacement of the former. Dealing with a heterogeneous environment of many diverse and interconnected modules, implemented using different technologies and programming languages, the team had to introduce all modifications in the smoothest possible way, without causing machine downtime. After a brief description of the system architecture, the paper discusses the technical and non-technical sides of the renovation process, such as the validation and deployment methodology, the characteristics of the operational applications and diagnostic tools, and finally users’ involvement and human aspects, outlining good decisions, pitfalls and lessons learned over the last five years.
Slides MOCOBAB05 [1.746 MB]
MOCOBAB06 | Integrated Monitoring and Control Specification Environment | target, framework, EPICS, interface | 47 |
Monitoring and control (M&C) solutions for large one-off systems are typically built in silos using multiple tools and technologies. Functionality such as data-processing logic, alarm handling, UIs and device drivers is implemented by manually writing configuration code in isolation, with cross-dependencies maintained by hand. The correctness of the created specification is checked using manually written test cases. Non-functional requirements, such as reliability, performance, availability and reusability, are addressed in an ad hoc manner. This hinders the evolution of systems with long lifetimes. For ITER, we developed an integrated specification environment and a set of tools to generate configurations for target execution platforms, along with the glue required to realize the entire M&C solution. The SKA is an opportunity to enhance this framework further to include checking of the functional and engineering properties of the solution based on domain best practices. The framework comprises three levels: domain-specific, problem-specific and target-technology-specific. We discuss how this approach can address three major facets of complexity: scale, diversity and evolution.
MOMIB01 | Sirius Control System: Conceptual Design | network, EPICS, interface, operation | 51 |
Sirius is a new 3 GeV synchrotron light source currently being designed at the Brazilian Synchrotron Light Laboratory (LNLS) in Campinas, Brazil. The control system will be heavily distributed and digitally connected to all equipment in order to avoid analog signal cables. A three-layer control system is planned. The equipment layer uses RS485 serial networks running at 10 Mbps with a very light proprietary protocol, in order to achieve good performance. The middle layer, interconnecting these serial networks, is based on single-board computers, PCs and commercial switches. The operation layer will be composed of PCs running the control system's client programs. A special topology will be used for the fast orbit feedback, with a 10 Gbps switch between the beam position monitor electronics, a workstation for correction calculations, and the orbit correctors. At the moment, EPICS is the best candidate to manage the control system.
Slides MOMIB01 [0.268 MB]
Poster MOMIB01 [0.580 MB]
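The Sirius equipment layer relies on a "very light proprietary protocol" over RS485 whose layout is not given in the abstract. Purely to illustrate what light framing with integrity checking looks like on such a serial network, here is a hypothetical frame builder with an additive checksum; every field of this format is an assumption, not the real Sirius protocol.

```python
def build_frame(address, payload):
    """Frame a command for an RS485 field network.

    Hypothetical layout: [address][payload length][payload...][checksum].
    The checksum is the two's complement of the byte sum, so a receiver
    can validate a frame by checking that sum(frame) % 256 == 0.
    """
    body = bytes([address, len(payload)]) + bytes(payload)
    checksum = (-sum(body)) % 256
    return body + bytes([checksum])

def verify_frame(frame):
    """True if the frame's additive checksum is consistent."""
    return sum(frame) % 256 == 0
```

Keeping the frame this small is what makes a 10 Mbps multi-drop bus responsive: per-transaction overhead is three bytes plus the payload.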
MOMIB02 | Development Status of the TPS Control System | EPICS, power-supply, interface, Ethernet | 54 |
EPICS was chosen as the control system framework for the new 3 GeV synchrotron light source project (Taiwan Photon Source, TPS). The standard hardware and software components have been defined, and the various IOCs (input/output controllers) are gradually being implemented as the control platforms of the various subsystems. The subsystem control interfaces include an event-based timing system, Ethernet-based power supply control, corrector power supply control, PLC-based pulsed-magnet power supply control and machine protection, insertion device motion control, various diagnostics, etc. Development of the high-level and low-level software infrastructure is ongoing, and installation and integration tests are proceeding. Progress is summarized in this paper.
Slides MOMIB02 [0.235 MB]
Poster MOMIB02 [5.072 MB]
MOMIB03 | Control Systems Issues and Planning for eRHIC | interface, electron, hardware, feedback | 58 |
Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy.
The next generation of high-energy nuclear physics experiments involves colliding high-energy electrons with ions, as well as colliding polarized electrons with polarized protons and polarized helions (helium-3 nuclei). The eRHIC project proposes to add an electron accelerator to the RHIC complex, allowing all of these types of experiments by combining existing capabilities with high-energy, high-intensity electrons. In this paper we describe the controls system requirements for eRHIC, the technical challenges, and our vision of a control system ten years into the future. What we build over the next ten years will be what is used for the ten years following the start of operations. This presents opportunities to take advantage of changes in technology, but also many challenges in building reliable and stable controls and in integrating those controls with the existing RHIC systems. It is also an opportunity to leverage state-of-the-art innovations and to build collaborations with both industry and other institutions, allowing us to build the best and most cost-effective set of systems so that eRHIC can achieve its goals.
Slides MOMIB03 [0.633 MB]
Poster MOMIB03 [2.682 MB]
MOMIB05 | BeagleBone for Embedded Control System Applications | embedded, power-supply, laser, interface | 62 |
Funding: Work supported in part by the Italian Ministry of University and Research under grants FIRB-RBAP045JF2 and FIRB-RBAP06AWK3
The control system architecture of modern experimental physics facilities needs to meet the requirements of the ever-increasing complexity of the controlled devices. Whenever feasible, moving from a distributed architecture based on powerful but complex and expensive computers to a more pervasive approach based on simple and cheap embedded systems allows shifting the knowledge close to the devices. The BeagleBone computer, capable of running a full-featured operating system such as GNU/Linux, integrates effectively into the existing control systems and allows executing complex control functions with the required flexibility. The paper discusses the choice of the BeagleBone as the embedded platform and reports some examples of control applications recently developed for the ELETTRA and FERMI@Elettra light sources.
Slides MOMIB05 [0.436 MB]
Poster MOMIB05 [1.259 MB]
MOMIB07 | An OPC-UA Based Architecture for the Control of the ESPRESSO Spectrograph @ VLT | software, PLC, hardware, interface | 70 |
ESPRESSO is a fiber-fed, cross-dispersed, high-resolution echelle spectrograph for the ESO Very Large Telescope (VLT). The instrument is designed to combine incoherently the light coming from up to four VLT Unit Telescopes. To ensure maximum stability, the spectrograph is placed in a thermal enclosure and a vacuum vessel. Abandoning the VME-based technologies previously adopted for ESO VLT instruments, the ESPRESSO control electronics has been developed around a new concept based on industrial COTS PLCs. This choice brings a number of benefits, such as lower cost and reduced space and power requirements. Moreover, it makes it possible to structure the whole control electronics in a distributed way, using building blocks available commercially off the shelf and thereby minimizing the need for custom solutions. The main PLC brand adopted is Beckhoff, whose product lineup satisfies the requirements set by the instrument control functions. OPC-UA is the chosen communication protocol between the PLCs and the instrument control software, which is based on the VLT Control Software package.
Slides MOMIB07 [0.419 MB]
Poster MOMIB07 [32.149 MB]
MOMIB09 | ZIO: The Ultimate Linux I/O Framework | framework, Linux, interface, software | 77 |
ZIO (with Z standing for "The Ultimate I/O" Framework) was developed for CERN with the specific needs of physics labs in mind, which are poorly addressed in the mainstream Linux kernel. ZIO provides a framework for industrial, high-throughput, high-channel-count I/O device drivers (digitizers, function generators, timing devices like TDCs) with performance, generality and scalability as design goals. Among its many features, it offers: abstractions for input and output channels, and channel sets; configurable trigger types; configurable buffer types; an interface via sysfs attributes, control and data device nodes; and a socket interface (PFZIO) which provides enormous flexibility and power for remote control. In this paper, we discuss the design and implementation of ZIO, and describe representative cases of driver development for typical and exotic applications (FMC ADC 100 Msps digitizer, FMC TDC timestamp counter, FMC DEL fine delay).
Slides MOMIB09 [0.818 MB]
MOPPC013 | Revolution in Motion Control at SOLEIL: How to Balance Performance and Cost | software, hardware, embedded, TANGO | 81 |
SOLEIL* is a third-generation synchrotron radiation source located near Paris, France. REVOLUTION (REconsider Various contrOLlers for yoUr moTION) is the motion controller upgrade project at SOLEIL. It was initiated by the first « Motion control workshop in radiation facilities » in May 2011, which allowed an international motion control community in large research facilities to form. The next meeting will take place as a pre-ICALEPCS workshop: Motion Control Applications in Large Facilities**. As motion control is an essential key element in assuring optimal results at a competitive price, the REVOLUTION team selected alternatives by following a theoretical and practical methodology: advanced market analysis, tests, measurements and impact evaluation. Products from two major motion control manufacturers are on the short list. They must provide the best performance for a small selection of demanding applications, and the lowest global cost of maintaining operational conditions for the majority of applications at SOLEIL. The search for the best technical, economical and organizational compromise to face our challenges is detailed in this paper.
*: www.synchrotron-soleil.fr **: http://www.synchrotron-soleil.fr/Workshops/2013/motioncontrol
MOPPC014 | Diagnostic Use Case Examples for ITER Plant Instrumentation and Control | diagnostics, interface, hardware, operation | 85 |
ITER requires extensive diagnostics to meet the requirements of machine operation, protection, plasma control and physics studies. The realization of these systems is a major challenge, not only because of the harsh environment and the nuclear requirements, but also with respect to the plant system Instrumentation and Control (I&C) of all 45 diagnostic systems: the procurement arrangements of the ITER diagnostics with the domestic agencies require a large number of high-performance fast controllers, whose choice is based on guidelines and catalogues published by the ITER Organization (IO). The goal is to simplify acceptance testing and commissioning for both the domestic agencies and the IO. For this purpose, several diagnostic use-case examples for plant system I&C documentation and implementation are provided by the IO to the domestic agencies. Their implementations cover major parts of the diagnostic plant system I&C, such as multi-channel high-performance data and image acquisition and data processing, as well as real-time and data-archiving aspects. In this paper, the current status and achievements in the implementation and documentation of the use-case examples are presented.
Poster MOPPC014 [2.068 MB]
MOPPC016 | IFMIF EVEDA RFQ Local Control System to Power Tests | EPICS, rfq, network, software | 89 |
In the IFMIF EVEDA project, a normal-conducting Radio Frequency Quadrupole (RFQ) is used to bunch and accelerate a 130 mA steady beam to 5 MeV. The RFQ cavity is divided into three structures, named super-modules, each of which is divided into 6 modules, for a total of 18 modules in the overall structure. The final three modules have to be tested at high power, both to validate the most critical RF components of the RFQ cavity and to test the performance of the main ancillaries that will be used in the IFMIF EVEDA project (vacuum manifold system, tuning system and control system). The last three modules were chosen because they will operate in the most demanding conditions in terms of power density (100 kW/m) and surface electric field (1.8*Ekp). The Experimental Physics and Industrial Control System (EPICS) environment [1] provides the framework for monitoring any equipment connected to it. This paper reports the use of this framework in the RFQ power tests at the Legnaro National Laboratories [2][3].
[1] http://www.aps.anl.gov/epics/ [2] http://www.lnl.infn.it/ [3] http://www.lnl.infn.it/~epics/joomla/
MOPPC017 | Upgrade of J-PARC/MLF General Control System with EPICS/CSS | EPICS, software, operation, LabView | 93 |
The general control system of the Materials and Life science experimental Facility (MLF-GCS) consists of programmable logic controllers (PLCs), iFIX operator interfaces (OPIs), data servers, and so on. It controls various devices, such as a mercury target and a personnel protection system. The present system has been working well, but maintenance and updates are problematic because of the poor flexibility of the OS and version compatibility. To overcome these weaknesses, we decided to replace it with an advanced system based on EPICS as the framework and CSS as the OPI software, which offers high scalability and usability. We built a prototype system, connected it to the current MLF-GCS, and examined its performance. As a result, communication between the EPICS/CSS system and the PLCs was successfully implemented via a Takebishi OPC server, data from 7000 points were stored with suitable speed and capacity in a new data storage server based on PostgreSQL, and the OPI functions of the CSS were verified. We concluded from these examinations that the EPICS/CSS system achieves the functionality and performance specified for the advanced MLF-GCS.
Poster MOPPC017 [0.376 MB]
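The MLF-GCS prototype above archives PV data into a PostgreSQL-backed storage server. The sketch below shows only that storage step, reduced to its essentials; the table schema and PV name are invented, and sqlite stands in for PostgreSQL so the example is self-contained.

```python
import sqlite3

def archive(conn, pv, value, stamp):
    """Insert one archived PV sample.

    Schema is illustrative; the MLF-GCS uses PostgreSQL, and sqlite is
    used here only to keep the demo dependency-free. Parameterized SQL
    keeps PV names and values safely quoted.
    """
    conn.execute(
        "INSERT INTO samples (pv, value, stamp) VALUES (?, ?, ?)",
        (pv, value, stamp),
    )

# In-memory stand-in for the archive database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE samples (pv TEXT, value REAL, stamp TEXT)")
archive(conn, "MLF:TGT:PRESSURE", 1.3e-5, "2013-10-07T12:00:00")
count = conn.execute("SELECT COUNT(*) FROM samples").fetchone()[0]
```

An archiver for 7000 points would batch such inserts per scan period and index the table by PV name and timestamp for retrieval.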
MOPPC020 | New Automated Control System at Kurchatov Synchrotron Radiation Source Based on SCADA System Citect | synchrotron, electron, synchrotron-radiation, radiation | 97 |
This paper describes the new automated control system of the Kurchatov synchrotron radiation source, which is being realized at present. The modernization of the control system is motivated by equipment replacement, in which state-of-the-art hardware is adopted for facility control, and by the need to increase the performance of the facility control system. In particular, the number of control channels increases, the data processing and transmission speeds increase considerably, and the measurement accuracy requirements become stricter. The paper presents a detailed description of all control levels (lower, server and upper) of the new automated control system, and of the integration of the SCADA system Citect into the facility control system, which provides facility control, alarm notification, preparation of detailed reports, and acquisition and storage of historical data, among other functions.
MOPPC021 | Configuration System of the NSLS-II Booster Control System Electronics | booster, software, kicker, database | 100 |
The National Synchrotron Light Source II is under construction at Brookhaven National Laboratory, Upton, USA. NSLS-II consists of a linac, transport lines, a booster synchrotron and the storage ring. The main features of the booster are a 1 or 2 Hz cycle and a beam energy ramp from 200 MeV up to 3 GeV in 300 msec. EPICS was chosen as the basis for the NSLS-II control system. The booster control system covers all parts of the facility, such as power supplies, the timing system, diagnostics, the vacuum system and many others. Each part includes a set of various electronic devices and many parameters that must be fully defined for the control system software. This paper considers an approach proposed for defining some of the equipment of the NSLS-II booster. It describes the different entities of the facility in a uniform way, and this information is used to generate configuration files for the EPICS IOCs. The main goal of this approach is to keep the information in one place and eliminate data duplication. The approach also simplifies configuration and modification of the description and makes it clearer and more easily usable by engineers and operators.
Poster MOPPC021 [0.240 MB]
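The approach described above keeps every device definition in one place and generates the EPICS IOC configuration from it. A minimal sketch of that idea follows, with invented device names and record fields (the real NSLS-II templates differ): one description list is expanded into .db record instances, so nothing is ever hand-copied between tools.

```python
# Single source of truth for device definitions, as in the approach described
DEVICES = [
    {"name": "BR-MG:PS1", "type": "power_supply", "max_current": 150.0},
    {"name": "BR-MG:PS2", "type": "power_supply", "max_current": 300.0},
]

# Illustrative EPICS ao record template; doubled braces are literal braces
RECORD_TEMPLATE = (
    'record(ao, "{name}:CurrentSP") {{\n'
    '    field(EGU,  "A")\n'
    '    field(DRVH, "{max_current}")\n'
    '    field(DRVL, "0.0")\n'
    '}}\n'
)

def generate_db(devices):
    """Expand the device descriptions into EPICS .db record instances.

    Record and field names here are hypothetical; the point is that the
    IOC configuration is derived from one description and regenerated
    whenever the description changes.
    """
    return "\n".join(
        RECORD_TEMPLATE.format(**d) for d in devices if d["type"] == "power_supply"
    )
```

Changing a device's current limit in `DEVICES` and regenerating is then the only edit needed; no per-IOC file has to be touched by hand.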
MOPPC022 | Remote Control of Heterogeneous Sensors for 3D LHC Collimator Alignment | PLC, alignment, LabView, target | 103 |
|
|||
The alignment of the LHC collimators needs to be verified periodically. Access for personnel is limited due to the level of radiation close to the collimators. The required measurement precision must be comparable to that of the other equipment in the LHC tunnel, i.e. 0.15 mm in a sliding window of 200 m. Conventional measurements would hence take 4 days for a team of 3 people. This presentation covers the design, development and commissioning of a remotely controlled system able to perform the same measurements in 1 h with a single operator. The system integrates a variety of industrial devices, ranging from position and inclination sensors to video cameras, all linked to a PXI system running LabVIEW. The motors are controlled through a PLC-based system. The overall performance and user experience are reported. | |||
Poster MOPPC022 [19.665 MB] | ||
MOPPC023 | Centralized Data Engineering for the Monitoring of the CERN Electrical Network | interface, database, network, framework | 107 |
|
|||
The monitoring and control of the CERN electrical network involves a large variety of devices and software, ranging from acquisition devices to data concentrators, supervision systems and power network simulation tools. The main issue faced today in the engineering of such a large and heterogeneous system, including more than 20,000 devices and 200,000 tags, is that each device and software package has its own data engineering tool, while much of the configuration data has to be shared between two or more devices: the same data must be entered manually into the different tools, leading to duplication of effort and many inconsistencies. This paper presents a tool called ENSDM, which aims at centralizing all the data needed to engineer the monitoring and control infrastructure into a single database, from which the configuration of the various devices is extracted automatically. This approach allows the user to enter the information only once and guarantees the consistency of the data across the entire system. The paper focuses more specifically on the configuration of the remote terminal unit (RTU) devices, the global supervision system (SCADA) and the power network simulation tools. | |||
Poster MOPPC023 [1.253 MB] | ||
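The "enter once, derive everywhere" principle can be sketched as follows: each signal exists as a single central record, and the per-tool views (an RTU polling table, a SCADA tag list) are mere projections of it. The record fields and names below are invented for illustration, not the ENSDM schema.

```python
# One central record per signal (fields and names invented).
SIGNALS = [
    {"tag": "EMD101.U_L1", "rtu": "RTU-04", "address": 30001, "unit": "kV"},
    {"tag": "EMD101.I_L1", "rtu": "RTU-04", "address": 30003, "unit": "A"},
]

def rtu_table(signals, rtu):
    """Projection for one RTU: which register address carries which tag."""
    return [(s["address"], s["tag"]) for s in signals if s["rtu"] == rtu]

def scada_tags(signals):
    """Projection for the SCADA layer: tag names with engineering units."""
    return [(s["tag"], s["unit"]) for s in signals]
```

Because both views are generated from the same records, they cannot drift apart the way manually maintained per-tool files do.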
MOPPC024 | An Event Driven Communication Protocol for Process Control: Performance Evaluation and Redundant Capabilities | PLC, status, framework, Windows | 111 |
|
|||
The CERN Unified Industrial Control System framework (UNICOS) with its Continuous Control Package (UNICOS CPC) is the CERN standard solution for the design and implementation of continuous industrial process control applications. The in-house designed communication mechanism, based on the Time Stamp Push Protocol (TSPP), provides event-driven, high-performance data communication between the control and supervision layers of a UNICOS CPC application. With its recent implementation of fully redundant capabilities for both the control and supervision layers, the TSPP protocol has reached maturity. This paper presents the design of the redundancy, the architecture, the current implementation, as well as a comprehensive evaluation of its performance for SIEMENS PLCs in different test scenarios. | |||
Poster MOPPC024 [7.161 MB] | ||
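A toy version of the event-driven push idea (the real TSPP is a CERN-specific PLC protocol; this Python sketch only mirrors the principle): a value is pushed to the supervision layer only when it changes, stamped with the time of the event at the source rather than the time of reception.

```python
import time

class TimeStampPush:
    """Push a (name, value, timestamp) event only when the value changes."""

    def __init__(self, send):
        self._send = send      # callback towards the supervision layer
        self._last = {}        # last value pushed per variable

    def update(self, name, value, t=None):
        if name not in self._last or self._last[name] != value:
            self._last[name] = value
            self._send((name, value, time.time() if t is None else t))

events = []
pusher = TimeStampPush(events.append)
pusher.update("valve.open", True, t=1.0)
pusher.update("valve.open", True, t=2.0)    # unchanged, not pushed
pusher.update("valve.open", False, t=3.0)   # change, pushed
```

Pushing only changes keeps the load proportional to process activity instead of to the number of tags polled.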
MOPPC025 | A Movement Control System for Roman Pots at the LHC | collimation, interface, FPGA, experiment | 115 |
|
|||
This paper describes the movement control system for detector positioning based on the Roman Pot design used by the ATLAS-ALFA and TOTEM experiments at the LHC. A key system requirement is that LHC machine protection rules are obeyed: the position is surveyed every 20 ms with an accuracy of 15 μm. If the detectors move too close to the beam (outside limits set by LHC Operators), the LHC interlock system is triggered to dump the beam. LHC Operators in the CERN Control Centre (CCC) drive the system via an HMI provided by a custom-built Java application which uses Common Middleware (CMW) to interact with lower-level components. Low-level motorization control is executed using National Instruments PXI devices. The DIM protocol provides the software interface to the PXI layer, and a FESA gateway server provides a communication bridge between CMW and DIM. A cut-down laboratory version of the system was built to provide a platform for verifying the integrity of the full chain, with respect to user and machine protection requirements, and for validating new functionality before deploying to the LHC. The paper contains a detailed system description, test bench results and foreseen system improvements. | |||
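As a toy illustration of the machine-protection rule above (not the actual ALFA/TOTEM implementation), each survey cycle compares the measured pot position against operator-set limits and requests a beam dump on violation. The period and accuracy constants restate figures from the abstract; the limit values are invented.

```python
SURVEY_PERIOD_S = 0.020      # position surveyed every 20 ms
ACCURACY_MM = 0.015          # required accuracy: 15 um

def within_limits(position_mm, retracted_mm, closest_mm):
    """True if the pot sits between its retracted and closest allowed positions."""
    return closest_mm <= position_mm <= retracted_mm

def survey_cycle(position_mm, retracted_mm, closest_mm, dump_beam):
    """One survey step: request a beam dump if the position is out of bounds."""
    if not within_limits(position_mm, retracted_mm, closest_mm):
        dump_beam()
        return False
    return True
```

In the real system this logic runs against the hardware interlock chain; the sketch only shows the decision rule.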
MOPPC026 | Bake-out Mobile Controls for Large Vacuum Systems | vacuum, PLC, status, software | 119 |
|
|||
Large vacuum systems at CERN (the Large Hadron Collider, the Low Energy Ion Ring…) require bake-out to achieve ultra-high-vacuum specifications. The bake-out cycle is used to decrease the outgassing rate of the vacuum vessel and to activate the Non-Evaporable Getter (NEG) thin film. Bake-out control is a Proportional-Integral-Derivative (PID) regulation with complex recipes, interlock management, troubleshooting and remote control. It is based on mobile Programmable Logic Controller (PLC) cabinets, a fieldbus network and a Supervisory Control and Data Acquisition (SCADA) application. CERN vacuum installations include more than 7 km of baked vessels; using mobile cabinets considerably reduces the cost of the control system. The cabinets are installed close to the vacuum vessels for the duration of the bake-out cycle and can be used in all the CERN vacuum facilities. Remote control is provided through the fieldbus network and the SCADA application. | |||
Poster MOPPC026 [3.088 MB] | ||
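The regulation at the heart of such a bake-out controller is a PID loop; the sketch below is a generic, textbook discrete PID implementation, not the CERN code, and in any real use the gains would come from tuning against the vessel's thermal response.

```python
class PID:
    """Discrete PID regulator: output = kp*e + ki*integral(e) + kd*de/dt."""

    def __init__(self, kp, ki, kd, setpoint, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint, self.dt = setpoint, dt
        self._integral = 0.0
        self._prev_error = None

    def step(self, measured):
        """One control period: return the actuator demand (e.g. heater power)."""
        error = self.setpoint - measured
        self._integral += error * self.dt
        if self._prev_error is None:
            derivative = 0.0
        else:
            derivative = (error - self._prev_error) / self.dt
        self._prev_error = error
        return self.kp * error + self.ki * self._integral + self.kd * derivative
```

A bake-out recipe would then be a sequence of setpoint ramps and plateaus fed to such a regulator, one per heating channel.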
MOPPC027 | The Control System of CERN Accelerators Vacuum [LS1 Activities and New Developments] | vacuum, PLC, linac, software | 123 |
|
|||
After 3 years of operation, the LHC entered its first Long Shutdown period (LS1) in February 2013. Major consolidation and maintenance works will be performed across the whole CERN accelerator chain, in order to prepare the LHC to restart at higher energy in 2015. The rest of the accelerator complex will resume operation in mid-2014. We report on the recent and on-going vacuum-controls projects. Some of them are associated with the consolidation of the vacuum systems of the LHC and of its injectors; others concern the complete renovation of the controls of some machines; and there are also some completely new installations. Due to the wide age span of the existing vacuum installations, there is a mix of design philosophies and of control-equipment generations. The renovation and the novel projects offer an opportunity to improve the quality assurance of vacuum controls by: identifying, documenting, naming and labelling all pieces of equipment; minimising the number of equipment versions with similar functionality; and homogenising the control architectures, while converging to a single software framework. | |||
Poster MOPPC027 [67.309 MB] | ||
MOPPC028 | High-Density Power Converter Real-Time Control for the MedAustron Synchrotron | timing, operation, FPGA, real-time | 127 |
|
|||
The MedAustron accelerator is a synchrotron for light-ion therapy, developed under the guidance of CERN within the MedAustron-CERN collaboration. Procurement of 7 different power converter families and development of the control system were carried out concurrently. Control is optimized for unattended routine clinical operation; therefore, finding a uniform control solution was paramount to fulfil the ambitious project plan. Another challenge was the need to operate with about 5'000 cycles initially, achieving pipelined operation with pulse-to-pulse re-configuration times below 250 ms. This contribution shows the architecture and design and gives an overview of the system as built and operated. It is based on commercial off-the-shelf processing hardware at the front-end level and on the CERN function generator design at the equipment level. The system is self-contained, permitting reuse of its parts or of the whole in other accelerators. In particular, the separation of the power converter from the real-time regulation using CERN's Converter Regulation Board makes this approach an attractive choice for integrating existing power converters in new configurations. | |||
Poster MOPPC028 [0.892 MB] | ||
MOPPC029 | Internal Post Operation Check System for Kicker Magnet Current Waveforms Surveillance | kicker, interface, operation, timing | 131 |
|
|||
A software framework, called Internal Post Operation Check (IPOC), has been developed to acquire and analyse kicker magnet current waveforms. It was initially aimed at performing the surveillance of LHC beam dumping system (LBDS) extraction and dilution kicker current waveforms and was subsequently also deployed on various other kicker systems at CERN. It has been implemented using the Front-End Software Architecture (FESA) framework, and uses many CERN control services. It provides a common interface to various off-the-shelf digitiser cards, allowing a transparent integration of new digitiser types into the system. The waveform analysis algorithms are provided as external plug-in libraries, leaving their specific implementation to the kicker system experts. The general architecture of the IPOC system is presented in this paper, along with its integration within the control environment at CERN. Some application examples are provided, including the surveillance of the LBDS kicker currents and trigger synchronisation, and a closed-loop configuration to guarantee constant switching characteristics of high voltage thyratron switches. | |||
Poster MOPPC029 [0.435 MB] | ||
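The plug-in mechanism described above can be sketched as a registry of analysis callables sharing one interface; the registry, the check and its parameters below are invented for illustration (the real IPOC plug-ins are compiled libraries loaded by the FESA server).

```python
ANALYSES = {}

def analysis(name):
    """Decorator registering a waveform-analysis plug-in under a name."""
    def register(func):
        ANALYSES[name] = func
        return func
    return register

@analysis("flat_top")
def flat_top_ok(waveform, nominal, tolerance):
    """Pass if every sample lies within tolerance of the nominal current."""
    return all(abs(s - nominal) <= tolerance for s in waveform)

def run_all(waveform, **params):
    """Run every registered check on one acquired waveform."""
    return {name: check(waveform, **params) for name, check in ANALYSES.items()}
```

The framework only needs the common interface; each kicker expert supplies the system-specific checks.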
MOPPC030 | Developments on the SCADA of CERN Accelerators Vacuum | vacuum, PLC, software, database | 135 |
|
|||
During the first 3 years of LHC operation, the priorities for the vacuum controls SCADA were to attend to user requests and to improve its ergonomics and efficiency. We have now achieved: simpler and more uniform information access; automatic scripts instead of fastidious manual actions; functionalities and menus standardized across all accelerators; and enhanced tools for data analysis and maintenance interventions. Several decades of cumulative developments, based on heterogeneous technologies and architectures, have called for a homogenization effort. The Long Shutdown (LS1) provides the opportunity to further standardize our vacuum controls systems around Siemens S7 PLCs and PVSS SCADA. Meanwhile, we have been promoting exchanges with other groups at CERN and outside institutes: to follow the global update policy for software libraries; to discuss philosophies and development details; and to produce common products. Furthermore, while preserving the current functionalities, we are working on a convergence towards the CERN UNICOS framework. | |||
Poster MOPPC030 [31.143 MB] | ||
MOPPC031 | IEPLC Framework, Automated Communication in a Heterogeneous Control System Environment | PLC, framework, software, hardware | 139 |
|
|||
Programmable Logic Controllers (PLCs), PXI systems and other micro-controller families are essential components of CERN's control system. They typically present custom communication interfaces, which makes their federation a difficult task. Dependency on specific protocols makes code non-reusable and the replacement of old technology a tedious problem. IEPLC proposes a uniform and hardware-independent communication scheme. It automatically generates all the resources needed on the master and slave sides to implement a common, generic Ethernet communication. The framework consists of a set of tools, scripts and a C++ library. The Java configuration tool allows the description and instantiation of the data to be exchanged with the controllers. The Python scripts generate the resources necessary for the final communication, while the C++ library allows sending and receiving data at run time from the master process. This paper describes the product, focusing on its main objectives: the definition of a clear and standard communication interface, and the reduction of users' development and configuration time. | |||
Poster MOPPC031 [2.509 MB] | ||
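A hardware-independent exchange in the spirit of IEPLC can be sketched by describing the exchanged data once and deriving the wire packing and unpacking from that single description; the signal layout below is an invented example, not the IEPLC format.

```python
import struct

# Single description of the exchanged data (invented example layout).
LAYOUT = [("temperature", "f"), ("pressure", "f"), ("valve_open", "?")]
FMT = ">" + "".join(code for _, code in LAYOUT)   # big-endian wire format

def pack(values):
    """Master -> controller: serialize a dict following the layout."""
    return struct.pack(FMT, *(values[name] for name, _ in LAYOUT))

def unpack(payload):
    """Controller -> master: rebuild the dict from the wire bytes."""
    return dict(zip((name for name, _ in LAYOUT), struct.unpack(FMT, payload)))
```

Both endpoints are generated from `LAYOUT`, so a change to the exchanged data is made in one place and cannot leave master and slave inconsistent.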
MOPPC032 | OPC Unified Architecture within the Control System of the ATLAS Experiment | hardware, interface, toolkit, software | 143 |
|
|||
The Detector Control System (DCS) of the ATLAS experiment at the LHC has been using the OPC DA standard as interface for controlling various standard and custom hardware components and their integration into the SCADA layer. Due to its platform restrictions and expiring long-term support, OPC DA will be replaced by the succeeding OPC Unified Architecture (UA) standard. OPC UA offers powerful object-oriented information modeling capabilities, platform independence, secure communication and allows server embedding into custom electronics. We present an OPC UA server implementation for CANopen devices which is used in the ATLAS DCS to control dedicated IO boards distributed within and outside the detector. Architecture and server configuration aspects are detailed and the server performance is evaluated and compared with the previous OPC DA server. Furthermore, based on the experience with the first server implementation, OPC UA is evaluated as standard middleware solution for future use in the ATLAS DCS and beyond. | |||
Poster MOPPC032 [2.923 MB] | ||
MOPPC033 | Opening the Floor to PLCs and IPCs: CODESYS in UNICOS | PLC, software, framework, hardware | 147 |
|
|||
This paper presents the integration of a third industrial platform for process control applications with the UNICOS (Unified Industrial Control System) framework at CERN. The UNICOS framework is widely used in many process control domains (e.g. cryogenics, cooling, ventilation, vacuum…) to produce highly structured, standardised control applications for the two CERN-approved industrial PLC product families, Siemens and Schneider. The CoDeSys platform, developed by 3S (Smart Software Solutions), provides an independent IEC 61131-3 programming environment for industrial controllers. The complete CoDeSys-based development includes: (1) a dedicated Java module plugged into the UAB (UNICOS Application Builder), an automatic code generation tool; (2) the associated UNICOS baseline library, CoDeSys v3 compliant, for industrial PLCs and IPCs (Industrial PCs); and (3) the Python-based templates to deploy device instances and control logic. This development opens the UNICOS framework to a wider community of industrial PLC manufacturers (e.g. ABB, WAGO…) and, as the CoDeSys Runtime works on standard operating systems (Linux, Windows 7…), UNICOS could be deployed to any IPC. | |||
Poster MOPPC033 [4.915 MB] | ||
MOPPC034 | Control System Hardware Upgrade | hardware, interface, power-supply, software | 151 |
|
|||
The Paul Scherrer Institute builds, runs and maintains several particle accelerators. The proton accelerator HIPA, the oldest facility, was mostly equipped with CAMAC components until a few years ago. In several phases, CAMAC was replaced by VME hardware, involving about 60 VME crates with 500 cards controlling a few hundred power supplies, motors, and digital as well as analog input/output channels. To control old analog and new digital power supplies with the same new VME components, an interface, the so-called Multi-IO, had to be developed. In addition, several other interfaces, e.g. for accommodating different connectors, had to be built. The hardware upgrade is explained through a few examples. | |||
Poster MOPPC034 [0.151 MB] | ||
MOPPC035 | Re-integration and Consolidation of the Detector Control System for the Compact Muon Solenoid Electromagnetic Calorimeter | software, hardware, database, interface | 154 |
|
|||
Funding: Swiss National Science Foundation (SNSF) The current shutdown of the Large Hadron Collider (LHC), following three successful years of physics data-taking, provides an opportunity for major upgrades to be performed on the Detector Control System (DCS) of the Electromagnetic Calorimeter (ECAL) of the Compact Muon Solenoid (CMS) experiment. The upgrades involve changes to both hardware and software, with particular emphasis on taking advantage of more powerful servers and updating third-party software to the latest supported versions. The considerable increase in available processing power enables a reduction from fifteen to three or four servers. To host the control system on fewer machines and to ensure that previously independent software components could run side-by-side without incompatibilities, significant changes in the software and databases were required. Additional work was undertaken to modernise and concentrate I/O interfaces. The challenges to prepare and validate the hardware and software upgrades are described along with details of the experience of migrating to this newly consolidated DCS. |
|||
Poster MOPPC035 [2.811 MB] | ||
MOPPC037 | Control Programs for the MANTRA Project at the ATLAS Superconducting Accelerator | laser, data-acquisition, ion, experiment | 162 |
|
|||
Funding: This work was supported by the U.S. Department of Energy, Office of Nuclear Physics, under Contract No. DE-AC02-06CH11357. The AMS (Accelerator Mass Spectrometry) project at ATLAS (Argonne Tandem Linac Accelerator System) complements the MANTRA (Measurement of Actinides Neutron TRAnsmutation) experimental campaign. To improve the precision and accuracy of AMS measurements at ATLAS, a new overall control system for AMS measurements needs to be implemented to reduce systematic errors arising from changes in transmission and ion source operation. The system will automatically and rapidly switch between different m/q settings, acquire the appropriate data and move on to the next setting. In addition to controlling the new multi-sample changer and laser ablation system, a master control program will communicate via the network to integrate the ATLAS accelerator control system, FMA control computer, and the data acquisition system. |
|||
Poster MOPPC037 [2.211 MB] | ||
MOPPC038 | Rapid Software Prototyping into Large Scale Controls Systems | software, hardware, interface, laser | 166 |
|
|||
Funding: This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. #LLNL-ABS-632892 The programmable spatial shaper (PSS) within the National Ignition Facility (NIF) reduces energy on isolated optic flaws in order to lower the optics maintenance costs. This is accomplished using a closed-loop system that determines the optimal liquid-crystal-based spatial light pattern for beam shaping and the placement of variable transmission blockers. A stand-alone prototype was developed and successfully run in a lab environment, as well as on a single quad of NIF lasers following a temporary hardware reconfiguration required to support the test. Several challenges exist in directly integrating the C-based PSS engine, written by an independent team, into the Integrated Computer Control System (ICCS) for a proof of concept on all 48 NIF laser quads. ICCS is a large-scale, data-driven distributed control system written primarily in Java, using CORBA to interact with more than 60K control points. The project plan and software design needed to specifically address the engine interface specification, configuration management, a reversion plan for the existing 0% transmission blocker capability, and a multi-phase integration and demonstration schedule. |
|||
Poster MOPPC038 [2.410 MB] | ||
MOPPC039 | Hardware Interface Independent Serial Communication (IISC) | interface, software, hardware, factory | 169 |
|
|||
Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The communication framework for the in-house controls system in the Collider-Accelerator Department at BNL depends on a variety of hardware interfaces and protocols including RS232, GPIB, USB and Ethernet to name a few. IISC is a client software library, which can be used to initiate, communicate and terminate data exchange sessions with devices over the network. It acts as a layer of abstraction allowing developers to establish communication with these devices without having to be concerned about the particulars of the interfaces and protocols involved. Details of implementation and a performance analysis will be presented. |
|||
Poster MOPPC039 [1.247 MB] | ||
MOPPC040 | A Hazard Driven Approach to Accelerator Safety System Design - How CLS Successfully Applied ALARP in the Design of Safety Systems | factory, PLC, operation, radiation | 172 |
|
|||
All large-scale particle accelerator facilities end up utilising computerised safety systems for accelerator access control and interlocking, including search and lock-up sequences and other safety functions. Increasingly there has been a strong move toward IEC 61508-based standards in the design of these systems. CLS designed and deployed its first IEC 61508-based system nearly 10 years ago. The challenge has increasingly been to manage the complexity of requirements and to ensure that features being added to such systems were truly requirements to achieve safety. Over the past few years CLS has moved to a more structured hazard-analysis technique that is tightly coupled and traceable through the design and verification of its engineered safety systems. This paper presents the CLS approach and lessons learned. | |||
MOPPC041 | Machine Protection System for TRIUMF's ARIEL Facility | TRIUMF, electron, target, operation | 175 |
|
|||
Phase 1 of the Advanced Rare Isotope & Electron Linac (ARIEL) facility at TRIUMF is scheduled for completion in 2014. It will utilize an electron linear accelerator (eLinac) capable of currents up to 10 mA and energies up to 75 MeV. The eLinac will provide CW as well as pulsed beams with durations as short as 10 μs. A Machine Protection System (MPS) will protect the accelerator and the associated beamline equipment from the nominal 500 kW beam. Hazardous situations require the beam to be extinguished at the electron gun within 10 μs of detection. Beam-loss accounting is an additional requirement of the MPS. The MPS consists of an FPGA-based controller module, Beam Loss Monitor VME modules developed by JLab, and EPICS-based controls to establish and enforce beam operating modes. This paper describes the design, architecture, and implementation of the MPS. | |||
Poster MOPPC041 [1.345 MB] | ||
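The beam-loss accounting requirement can be pictured as a rolling charge budget (a sketch with invented numbers, not the ARIEL design): lost charge is summed over a sliding window of samples and the beam is inhibited as soon as the budget is exceeded.

```python
from collections import deque

class LossAccount:
    """Rolling-window beam-loss budget (invented illustration)."""

    def __init__(self, budget_uC, window_samples):
        self.budget_uC = budget_uC
        self.samples = deque(maxlen=window_samples)  # oldest samples drop off

    def record(self, loss_uC):
        """Add one loss sample; return False when the beam must be inhibited."""
        self.samples.append(loss_uC)
        return sum(self.samples) <= self.budget_uC
```

The fast (10 μs) extinguish path would be in FPGA hardware; an accounting layer like this one tracks the slower, integrated-loss condition.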
MOPPC042 | Machine Protection System for the SPIRAL2 Facility | target, beam-losses, PLC, diagnostics | 178 |
|
|||
Phase 1 of the SPIRAL2 facility, the extension project of the GANIL laboratory, is under construction in Caen, France. The accelerator is based on a linear solution, mainly composed of a normal-conducting RFQ and a superconducting linac. One of its specificities is that it is designed to accelerate high-power deuteron and heavy-ion beams from 40 to 200 kW, as well as medium-intensity heavy-ion beams up to a few kW. A Machine Protection System has been designed to control and protect the accelerator from thermal damage over a very large range of beam intensities and powers. This paper presents the technical solutions chosen for this system, which is based on two subsystems: one dedicated to thermal protection, requiring a PLC associated with a fast electronic system, and a second dedicated to enlarged protection, based on safety products. | |||
Poster MOPPC042 [2.220 MB] | ||
MOPPC043 | Development of the Thermal Beam Loss Monitors of the Spiral2 Control System | detector, EPICS, monitoring, FPGA | 181 |
|
|||
The SPIRAL2 linear accelerator will drive high-intensity beams, up to 5 mA and up to 200 kW at the linac exit. Such beams can seriously damage and activate the machine! To prevent such situations, the Machine Protection System (MPS) has been designed. This system is connected to diagnostics indicating whether the beam remains within specific limits. As soon as a diagnostic detects that its limit is crossed, it informs the MPS, which in turn takes actions that can lead to a beam cut-off within the required time. In this process, the Beam Loss Monitors (BLM) monitor the prompt radiation generated by beam-particle interactions with beam-line components, which is responsible for activation on one side and for thermal effects on the other. The BLM system relies mainly on scintillator detectors, NIM electronics and a VME subsystem monitoring the heating of the machine. This subsystem, also called the "Thermal BLM", will be integrated into the SPIRAL2 EPICS environment. For its development, a specific project organization has been set up, since the development is subcontracted to Cosylab. This paper focuses on the Thermal BLM controls aspects and describes this development process. | |||
Poster MOPPC043 [0.957 MB] | ||
MOPPC044 | Cilex-Apollon Personnel Safety System | laser, radiation, operation, interlocks | 184 |
|
|||
Funding: CNRS, MESR, CG91, CRiDF, ANR Cilex-Apollon is a high-intensity laser facility delivering at least 5 PW pulses on target at one shot per minute, to study physics such as laser-plasma electron or ion acceleration and laser-plasma X-ray sources. Under construction, Apollon is a four-beam laser installation with two target areas. Such a facility presents many hazards, in particular laser light and ionizing radiation. The Personnel Safety System (PSS) serves both to reduce the impact of these hazards and to limit exposure to them. Based on a risk analysis, the required Safety Integrity Level (SIL) has been assessed following the international standards IEC 62061 and IEC 61511-3; to obtain a highly reliable system, SIL 2 is required. The PSS is built around four laser risk levels corresponding to the different uses of Apollon, studied according to standard EN 60825. Independent of the main command-control network, the distributed system is made of a safety PLC and safety equipment communicating through a safety network. The article presents the concepts, the client-server architecture from control screens down to sensors and actuators, and the interfaces to the access control system and to the synchronization and sequencing system. |
|||
Poster MOPPC044 [3.864 MB] | ||
MOPPC047 | A New PSS for the ELBE Accelerator Facility | laser, radiation, electron, hardware | 191 |
|
|||
The ELBE facility (Electron Linear accelerator with high Brightness and low Emittance) is being upgraded towards a Center for High Power Radiation Sources, in conjunction with terawatt and petawatt femtosecond lasers. The topological expansion of the facility and an increased number of radiation sources made a replacement of the former personnel safety system (PSS) necessary. The new system, based on fail-safe PLCs, was designed to fulfil the requirements of radiation protection according to current law, combining laser and radiation safety for the new laser-based particle sources. Conceptual design and general specification were done in-house, while detailed design and installation were carried out in close cooperation with an outside firm. The article describes the architecture, functions and some technical features of the new ELBE PSS. Special focus is on the implementation of IEC 61508 and the project track. The system was integrated into an existing (and mostly running) facility and is subject to third-party approval. Operational experience after one year of run time is also given. | |||
Poster MOPPC047 [0.120 MB] | ||
MOPPC048 | Evaluation of the Beamline Personnel Safety System at ANKA under the Aegis of the 'Designated Architectures' Approach | radiation, software, operation, experiment | 195 |
|
|||
The Beamline Personnel Safety System (BPSS) at the Angstroemquelle Karlsruhe (ANKA) started operation in 2003. The paper describes the safety-related design and evaluation of serial, parallel and nested radiation safety areas, which allow the flexible plug-in of experimental setups at ANKA beamlines. It evaluates the resulting requirements for safety-system hardware and software and the necessary validation procedure defined by current national and international standards, based on probabilistic reliability parameters supplied by manufacturers' component libraries and on an approach known as 'Designated Architectures', which defines safety functions in terms of sensor-logic-actuator chains. An ANKA beamline example is presented, with special regard to features like the (self-)Diagnostic Coverage (DC) of the control system, which is not part of classical Markov-process modelling of system safety. | |||
Poster MOPPC048 [0.699 MB] | ||
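Under the 'Designated Architectures' view, a safety function is a sensor-logic-actuator chain. As a first-order sketch, the dangerous-failure frequencies (PFH) of the elements in a series chain add, and the sum maps onto the IEC 61508 high-demand SIL bands; the element values below are invented, not ANKA data.

```python
def chain_pfh(*element_pfh_per_hour):
    """Series sensor-logic-actuator chain: PFHs add (first-order approximation)."""
    return sum(element_pfh_per_hour)

def sil_band(pfh):
    """Map a PFH (dangerous failures per hour) onto IEC 61508 high-demand SIL bands."""
    if pfh < 1e-8:
        return 4
    if pfh < 1e-7:
        return 3
    if pfh < 1e-6:
        return 2
    if pfh < 1e-5:
        return 1
    return 0   # does not reach SIL 1

# Invented example chain: sensor + safety logic + actuator
total = chain_pfh(3e-8, 1e-8, 4e-8)
```

A real assessment also needs architectural constraints and diagnostic coverage, which is exactly the point the abstract makes about (self-)DC falling outside this simple additive picture.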
MOPPC049 | Radiation and Laser Safety Systems for the FERMI Free Electron Laser | laser, electron, FEL, operation | 198 |
|
|||
Funding: Work supported in part by the Italian Ministry of University and Research under grants FIRB-RBAP045JF2 and FIRB-RBAP06AWK3 FERMI@Elettra is a Free Electron Laser (FEL) user facility based on a 1.5 GeV electron linac. The personnel safety systems allow entering the restricted areas of the facility only when safety conditions are fulfilled, and set the machine to a safe condition in case any dangerous situation is detected. Hazards are associated with accelerated electron beams and with an infrared laser used for pump-probe experiments. The safety systems are based on PLCs providing redundant logic in a fail-safe configuration. They make use of a distributed architecture based on fieldbus technology and communicate with the control system via Ethernet interfaces. The paper describes the architecture, the operational modes and the procedures that have been implemented. The experience gained in the recent operation is also reported. |
|||
Poster MOPPC049 [0.447 MB] | ||
MOPPC051 | NSLS-II Booster Interlock System | vacuum, operation, status, interlocks | 202 |
|
|||
Being responsible for the design and manufacture of the 3 GeV booster synchrotron for the National Synchrotron Light Source II (NSLS-II, BNL, USA), the Budker Institute of Nuclear Physics also designs the booster control and diagnostic system. Among others, the system includes an interlock system consisting of an equipment protection system, a vacuum-level and vacuum-chamber temperature control system, and a beam diagnostic service system. These subsystems protect facility elements in case of vacuum leakage or chamber overheating and provide subsidiary functions for beam diagnostics. Providing beam interlocks, the system processes more than 150 signals from thermocouples, cold- and hot-cathode vacuum gauges and ion pump controllers. The subsystems comprise nine 5U 19-inch chassis, each based on an Allen-Bradley CompactLogix Programmable Logic Controller. All interlock-related connections are made with dry contacts, whereas system status and control are available through EPICS Channel Access. All operator screens are developed with Control System Studio tooling. This paper describes the configuration and operation of the booster interlock system. | |||
MOPPC052 | ESS Bilbao Interlock System Approach | interlocks, PLC, EPICS, ion | 206 |
|
|||
Funding: ESS Bilbao This paper describes the approach used at the ESS Bilbao initiative for the implementation of the interlock system. The system is divided into two parts depending on the required speed of the system response: Slow Interlocks (>100 ms) and Fast Interlocks (<100 ms). Both parts are arranged in two layers: a Local Layer and a Master Layer. The Slow Interlocks subsystem is based on PLCs. This solution is being tested in the ESS Bilbao ECR ion source with positive results, and the first version of the design is now complete for the LEBT system. For the local layer of the Fast Interlocks, a solution based on NI cRIO has been designed and tested. In these tests a maximum response time of 3.5 μs was measured for analog acquisition, threshold comparison and signal generation; for digital signals the maximum response time of a similar process was 500 ns. These responses are considered valid for the standard needs of the project. Finally, to extract information from the interlock system and monitor it, a Modbus/EPICS interface is used for the Slow Interlocks, while EPICS output is produced directly by the NI cRIO. A lightweight PyQt application is planned to perform this task. |
|||
MOPPC053 | A Safety System for Experimental Magnets Based on CompactRIO | status, interface, hardware, experiment | 210 |
|
|||
This paper describes the development of a new safety system for experimental magnets using National Instruments CompactRIO devices. The design of the custom Magnet Safety System (MSS) for the large LHC experimental magnets began in 1998, and it was first installed and commissioned in 2002. Some of its components, such as the isolation amplifier or the ALTERA reconfigurable Field-Programmable Gate Array (FPGA), are no longer available on the market. A review of the system shows that it can be modernized and simplified by replacing the Hard-wired Logic Module (HLM) with a CompactRIO device. This industrial unit is a reconfigurable embedded system containing a processor running a real-time operating system (RTOS), an FPGA, and interchangeable industrial I/O modules. A prototype system, called MSS2, has been built and successfully tested using a test bench based on a PXI crate. Two systems are currently being assembled for two experimental magnets at CERN: the COMPASS solenoid and the M1 magnet at the SPS beam line. This paper contains a detailed description of MSS2, the test bench, and results from a first implementation and operation with real magnets. | |||
Poster MOPPC053 [0.543 MB] | ||
MOPPC054 | Application of Virtualization to CERN Access and Safety Systems | hardware, software, network, interface | 214 |
|
|||
Access and safety systems are by nature heterogeneous: different kinds of hardware and software, commercial and home-grown, are integrated to form a working system. This implies many different application services, for which separate physical servers are allocated to keep the various subsystems isolated. Each such application server requires special expertise to install and manage. Furthermore, physical hardware is relatively expensive and presents a single point of failure to any of the subsystems, unless designed to include often complex redundancy protocols. We present the Virtual Safety System Infrastructure project (VSSI), whose aim is to utilize modern virtualization techniques to abstract application servers from the actual hardware. The virtual servers run on robust and redundant standard hardware, where snapshotting and backing up of virtual machines can be carried out to maximize availability. Uniform maintenance procedures are applicable to all virtual machines on the hypervisor level, which helps to standardize maintenance tasks. This approach has been applied to the servers of CERN PS and LHC access systems as well as to CERN Safety Alarm Monitoring System (CSAM). | |||
Poster MOPPC054 [1.222 MB] | ||
MOPPC056 | The Detector Safety System of NA62 Experiment | experiment, detector, status, interface | 222 |
|
|||
The aim of the NA62 experiment is the study of the rare decay K+ → π+νν̄ at the CERN SPS. The Detector Safety System (DSS) developed at CERN is responsible for assuring the protection of the experiment’s equipment. DSS requires a high degree of availability and reliability. It is composed of a Front-End and a Back-End part, the Front-End being based on a National Instruments cRIO system, to which the safety-critical part is delegated. The cRIO Front-End is capable of running autonomously and of automatically taking predefined protective actions whenever required. It is supervised and configured by the standard CERN PVSS SCADA system. This DSS can easily adapt to the evolving requirements of the experiment during the construction, commissioning and exploitation phases. The NA62 DSS is being installed and was partially commissioned during the NA62 Technical Run in autumn 2012, where components from almost all the detectors, as well as the trigger and the data acquisition systems, were successfully tested. The paper contains a detailed description of this innovative and high-performing solution, and demonstrates a good alternative to the LHC systems based on redundant PLCs. | |||
Poster MOPPC056 [0.613 MB] | ||
MOPPC057 | Data Management and Tools for the Access to the Radiological Areas at CERN | database, radiation, interface, operation | 226 |
|
|||
As part of the refurbishment of the PS Personnel Protection System, the radioprotection (RP) buffer zones & equipment have been incorporated into the design of the new access points, providing an integrated access concept to the radiation controlled areas of the PS complex. The integration of the RP and access control equipment has been very challenging due to the lack of space in many of the zones. Although successfully carried out, our experience from the commissioning of the first installed access points shows that the integration should also include the software tools and procedures. This paper presents an inventory of all the tools and databases currently used (*) in order to ensure access to the CERN radiological areas according to CERN’s safety and radioprotection procedures. We summarize the problems and limitations of each tool as well as of the whole process, and propose a number of improvements for the different kinds of users, including changes required in each of the tools. The aim is to optimize the access process and the operation & maintenance of the related tools by rationalizing and better integrating them.
(*) Access Distribution and Management, Safety Information Registration, Works Coordination, Access Control, Operational Dosimeter, Traceability of Radioactive Equipment, Safety Information Panel. |
|||
Poster MOPPC057 [1.955 MB] | ||
MOPPC058 | Design, Development and Implementation of a Dependable Interlocking Prototype for the ITER Superconducting Magnet Powering System | interface, software, plasma, PLC | 230 |
|
|||
Based on the experience with an operational interlock system for the superconducting magnets of the LHC, CERN has developed a prototype for the ITER magnet central interlock system in collaboration with ITER. A total energy of more than 50 gigajoules is stored in the magnet coils of the ITER Tokamak. Upon detection of a quench or other critical powering failures, the central interlock system must initiate the extraction of the energy to protect the superconducting magnets and, depending on the situation, request plasma disruption mitigations to protect against mechanical forces induced between the magnet coils and the plasma. To fulfil these tasks with the required high level of dependability, the implemented interlock system is based on redundant PLC technology making use of hardwired interlock loops in 2-out-of-3 redundancy, providing the best balance between safety and availability. In order to allow for simple and uniform connectivity of all client systems involved in the safety-critical protection functions, as well as for common remote diagnostics, a dedicated user interface box has been developed. | |||
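The 2-out-of-3 (2oo3) voting mentioned above can be reduced to a simple Boolean majority. A minimal sketch of just that voting logic (the real system implements it in redundant PLCs with hardwired loops, not software):

```python
# Illustrative 2oo3 voter: the loop is considered healthy only if at
# least two of the three redundant channels report it healthy.

def vote_2oo3(a: bool, b: bool, c: bool) -> bool:
    """Majority vote over three redundant channels."""
    return (a and b) or (a and c) or (b and c)

# One failed channel does not spuriously trip the loop (availability),
# while two channels signalling a fault do trip it (safety).
assert vote_2oo3(True, True, False) is True
assert vote_2oo3(True, False, False) is False
```

This is why 2oo3 is said to balance safety and availability: a single faulty sensor can neither mask a real fault nor cause a false trip on its own.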
MOPPC059 | Refurbishing of the CERN PS Complex Personnel Protection System | PLC, network, interface, radiation | 234 |
|
|||
In 2010, the refurbishment of the Personnel Protection System of the CERN Proton Synchrotron complex primary beam areas started. This large-scale project was motivated by the obsolescence of the existing system and by the objective of rationalizing the personnel protection systems across the CERN accelerators to meet the latest recommendations of the regulatory bodies of the host states. A new generation of access points providing biometric identification, authorization and co-activity clearance, reinforced passage check, and radiation protection related functionalities will allow access to the radiologically classified areas. Using a distributed fail-safe PLC architecture and a diversely redundant logic chain, the cascaded safety system guarantees personnel safety in the 17 machines of the PS complex by acting on the important safety elements of each zone and on the adjacent upstream ones. It covers radiological and activated-air hazards from circulating beams as well as laser and electrical hazards. This paper summarizes the functionalities provided, the new concepts introduced, and the functional safety methodology followed to deal with the renovation of this 50-year-old facility. | |||
Poster MOPPC059 [2.874 MB] | ||
MOPPC061 | Achieving a Highly Configurable Personnel Protection System for Experimental Areas | PLC, radiation, status, interface | 238 |
|
|||
The personnel protection system of the secondary beam experimental areas at CERN manages the beam and access interlocking mechanism. Its aim is to guarantee the safety of the experimental area users against the hazards of beam radiation and laser light. The highly configurable, interconnected, and modular nature of those areas requires a very versatile system. In order to follow closely the operational changes and new experimental setups and to still keep the required level of safety, the system was designed with a set of matrices which can be quickly reconfigured. Through a common paradigm, based on industrial hardware components, this challenging implementation has been made for both the PS and SPS experimental halls, according to the IEC 61508 standard. The current system is based on a set of hypotheses formed during 25 years of operation. Conscious of the constant increase in complexity and the broadening risk spectrum of the present and future experiments, we propose a framework intended as a practical guide to structure the design of the experimental layouts based on risk evaluation, safety function prescriptions and field equipment capabilities. | |||
Poster MOPPC061 [2.241 MB] | ||
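The reconfigurable matrices described above map each zone and hazard to the safety elements that must be proven safe before access or beam is permitted. A toy sketch of the idea, with all zone, hazard and element names invented for illustration:

```python
# Hedged sketch of an access-interlock matrix: for every hazard present
# in a zone, all required safety elements must report a safe state.
# Names and entries are hypothetical; the real system is PLC-based.

MATRIX = {
    ("zone_A", "beam"):  ["beam_stopper_1", "bending_magnet_off"],
    ("zone_A", "laser"): ["laser_shutter"],
}

def access_permitted(zone, hazards, element_state):
    """Allow access only if every required element is in a safe state."""
    return all(element_state[e]
               for h in hazards
               for e in MATRIX[(zone, h)])

state = {"beam_stopper_1": True, "bending_magnet_off": True,
         "laser_shutter": False}
assert access_permitted("zone_A", ["beam"], state)
assert not access_permitted("zone_A", ["beam", "laser"], state)
```

Reconfiguring the system for a new experimental setup then amounts to editing the matrix entries rather than rewiring the logic.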
MOPPC064 | A New Spark Detection System for the Electrostatic Septa of the SPS North (Experimental) Area | ion, high-voltage, cathode, septum | 246 |
|
|||
Electrostatic septa (ZS) are used in the extraction of the particle beams from the CERN SPS to the North Area experimental zone. These septa employ high electric fields, generated from a 300 kV power supply, and are particularly prone to internal sparking around the cathode structure. This sparking degrades the electric field quality, consequently affecting the extracted beam, vacuum and equipment performance. To mitigate these effects, a Spark Detection System (SDS) has been realised, based on an industrial SIEMENS S7-400 programmable logic controller and remote Boolean processor modules interfaced through a PROFINET fieldbus. The SDS interlock logic uses a moving-average spark rate count to determine whether the ZS performance is acceptable. Below a certain spark rate it is probable that the ZS septa tank vacuum can recover, thus avoiding a transition into a state where rapid degradation would occur. Above this level an interlock is raised and the high voltage is switched off. Additionally, all spark signals acquired by the SDS are sent to a front-end computer to allow further analysis, such as calculation of spark rates and production of statistical data. | |||
Poster MOPPC064 [0.366 MB] | ||
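The moving-average spark-rate interlock described above can be sketched in a few lines. This is only an illustration of the logic (the real SDS runs on the S7-400 PLC, and the window length and threshold here are invented):

```python
# Sketch of a moving-average spark-rate interlock: average the spark
# count over a sliding window and trip when the average exceeds a limit.
from collections import deque

class SparkRateInterlock:
    def __init__(self, window=5, max_rate=1.0):
        self.counts = deque(maxlen=window)  # sparks per acquisition interval
        self.max_rate = max_rate            # allowed average sparks/interval

    def update(self, sparks_this_interval: int) -> bool:
        """Record one interval; return True while operation is acceptable."""
        self.counts.append(sparks_this_interval)
        avg = sum(self.counts) / len(self.counts)
        return avg <= self.max_rate

ilk = SparkRateInterlock(window=5, max_rate=1.0)
assert all(ilk.update(n) for n in [0, 1, 0, 1, 0])  # low rate: vacuum can recover
assert ilk.update(10) is False                      # rate too high: switch off HV
```

Averaging over a window rather than tripping on single sparks is what lets the tank vacuum recover from isolated events without unnecessary high-voltage cuts.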
MOPPC066 | Reliability Analysis of the LHC Beam Dumping System Taking into Account the Operational Experience during LHC Run 1 | dumping, operation, power-supply, diagnostics | 250 |
|
|||
The LHC beam dumping system operated reliably during Run 1 of the LHC (2009 – 2013). As expected, there were a number of internal failures of the beam dumping system which, because of in-built safety features, resulted in safe removal of the particle beams from the machine. These failures (i.e. "false" beam dumps) have been attributed to the different failure modes and are compared to the predictions made by a reliability model established before the start of LHC operation. A statistically significant difference between model and failure data identifies those beam dumping system components that may have unduly impacted the LHC availability and safety, or that might have been out of the scope of the initial model. An updated model of the beam dumping system reliability is presented, taking into account the experimental data presented and the system changes foreseen for the 2013 – 2014 LHC shutdown. | |||
Poster MOPPC066 [1.554 MB] | ||
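The abstract does not detail the statistical test used to compare observed "false" dump counts with the model. One simple way to flag a significant discrepancy, assuming Poisson-distributed failure counts and entirely invented numbers, is a tail probability under the predicted rate:

```python
# Hedged sketch: is an observed failure count plausible under the
# reliability model's predicted rate? Assumes Poisson statistics;
# the rate and count below are invented for illustration.
import math

def poisson_sf(k, mu):
    """P(N >= k) for N ~ Poisson(mu)."""
    return 1.0 - sum(math.exp(-mu) * mu**i / math.factorial(i)
                     for i in range(k))

predicted_mu = 4.0   # model: expected false dumps for this failure mode
observed = 12        # hypothetical count from operation
p = poisson_sf(observed, predicted_mu)
assert p < 0.01      # observation very unlikely under the model: revisit it
```

A mode whose observed count lands far in the tail of the predicted distribution is exactly the kind of component the abstract says may have been outside the scope of the initial model.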
MOPPC068 | Operational Experience with a PLC Based Positioning System for a LHC Extraction Protection Element | PLC, operation, software, dumping | 254 |
|
|||
The LHC Beam Dumping System (LBDS) nominally dumps the beam synchronously with the passage of the particle-free beam abort gap at the beam dump extraction kickers. In the case of an asynchronous beam dump, an absorber element protects the machine aperture. This is a single-sided collimator (TCDQ), positioned close to the beam, which has to follow the beam position and beam size during the energy ramp. The TCDQ positioning control is implemented within a SIEMENS S7-300 Programmable Logic Controller (PLC). A positioning accuracy better than 30 μm is achieved through a PID-based servo algorithm. Errors due to a wrong position of the absorber w.r.t. the beam energy and size generate interlock conditions to the LHC machine protection system. Additionally, the correct position of the TCDQ w.r.t. the beam position in the extraction region is cross-checked after each dump by the LBDS eXternal Post Operational Check (XPOC). This paper presents the experience gained during LHC Run 1 and describes improvements that will be applied during the 2013 – 2014 LHC shutdown. | |||
Poster MOPPC068 [3.381 MB] | ||
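A PID-based servo like the one described above can be illustrated with a minimal discrete-time loop. The real implementation runs in the S7-300 PLC; the gains and the toy first-order actuator model below are invented for illustration:

```python
# Minimal discrete PID sketch: drive a toy actuator to a setpoint.
# Gains and plant model are hypothetical, not the TCDQ tuning.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measured):
        """One control period: return the actuator command."""
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Toy plant: position changes proportionally to the command each period.
pid, pos = PID(kp=2.0, ki=0.5, kd=0.01, dt=0.01), 0.0
for _ in range(2000):
    pos += 0.01 * pid.step(setpoint=1.0, measured=pos)
assert abs(pos - 1.0) < 1e-3   # servo settles on the setpoint
```

The integral term is what removes the steady-state offset, which matters when the setpoint itself follows the beam during the energy ramp.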
MOPPC071 | Development of the Machine Protection System for FERMILAB'S ASTA Facility | cryomodule, laser, FPGA, interface | 262 |
|
|||
The Fermilab Advanced Superconducting Test Accelerator (ASTA) under development will be capable of delivering an electron beam with up to 3000 bunches per macro-pulse, a 5 Hz repetition rate and 1.5 GeV beam energy in the final phase. The completed machine will be capable of sustaining an average beam power of 72 kW at a bunch charge of 3.2 nC. A robust Machine Protection System (MPS), capable of interrupting the beam within a macro-pulse and interfacing well with new and existing controls system infrastructure, is being developed to mitigate and analyze faults related to this relatively high damage potential. This paper describes the component layers of the MPS, including an FPGA-based Laser Pulse Controller, the Beam Loss Monitoring system design, and the controls and related work done to date. | |||
Poster MOPPC071 [1.479 MB] | ||
MOPPC077 | Open Hardware Collaboration: A Way to Improve Efficiency for a Team | detector, hardware, electronics, FPGA | 273 |
|
|||
SOLEIL* is a third-generation synchrotron radiation source located near Paris, France. Today, the storage ring delivers photon beam to 26 beamlines. In order to improve the machine and beamline performance, new electronics requirements have been identified. For these improvements, up-to-date commercial products are preferred, but sometimes custom hardware designs become essential. At SOLEIL, the electronics team (8 people) is in charge of the design, implementation and maintenance of 2000 pieces of electronics installed for control and data acquisition. This large installed base and small team mean there is only little time left to focus on the development of new hardware designs. As an alternative, we focus our development on the Open Hardware (OHWR) initiative from CERN, dedicated to enabling electronics designers at experimental physics facilities to collaborate on hardware designs. We collaborate as both an evaluator and a contributor. We share some boards in the SPI BOARDS PACKAGE** project, developed to face our current challenges. We have evaluated the TDC core project, and we plan to evaluate the FMC carrier. We will present our approach on how to be more efficient with developments, the issues faced and the benefits we get.
*: www.synchrotron-soleil.fr **: www.ohwr.org/projects/spi-board-package |
|||
MOPPC078 | TANGO Steps Toward Industry | TANGO, software, site, synchrotron | 277 |
|
|||
Funding: Gravit innovation, Grenoble, France. TANGO has proven its excellent reliability by controlling several huge scientific installations in 24/7 mode. Even if it was originally built for particle accelerators and scientific experiments, it can be used to control any equipment, from small domestic applications to big industrial installations. In recent years the interest around TANGO has been growing, and several industrial partners in Europe propose services for TANGO. The TANGO industrialization project aims to increase the visibility of the system, fostering the economic activity around it. It promotes TANGO as an open-source, flexible solution for controlling equipment, as an alternative to proprietary SCADA systems. To achieve this goal several actions have been started, such as the development of an industrial demonstrator, better packaging, integrating OPC-UA and improving the communication around TANGO. The next step will be the creation of a TANGO Software Foundation able to engage itself as a legal and economic partner for industry. This foundation will be funded by industrial partners, scientific institutes and grants. The goal is to foster and nurture the growing economic ecosystem around TANGO. |
|||
Poster MOPPC078 [4.179 MB] | ||
MOPPC079 | CODAC Core System, the ITER Software Distribution for I&C | software, EPICS, interface, network | 281 |
|
|||
In order to support the adoption of the ITER standards for the Instrumentation & Control (I&C) and to prepare for the integration of the plant systems I&C developed by many distributed suppliers, the ITER Organization is providing the I&C developers with a software distribution named CODAC Core System. This software has been released as incremental versions since 2010, starting from preliminary releases and with stable versions since 2012. It includes the operating system, the EPICS control framework and the tools required to develop and test the software for the controllers, central servers and operator terminals. Some components have been adopted from the EPICS community and adapted to the ITER needs, in collaboration with the other users. This is the case for the CODAC services for operation, such as operator HMI, alarms or archives. Other components have been developed specifically for the ITER project. This applies to the Self-Description Data configuration tools. This paper describes the current version (4.0) of the software as released in February 2013 with details on the components and on the process for its development, distribution and support. | |||
Poster MOPPC079 [1.744 MB] | ||
MOPPC081 | The Case of MTCA.4: Managing the Introduction of a New Crate Standard at Large Scale Facilities and Beyond | data-acquisition, electronics, operation, klystron | 285 |
|
|||
The demands on hardware for control and data acquisition at large-scale research organizations have increased considerably in recent years. In response, modular systems based on the new MTCA.4 standard, jointly developed by large public research organizations and industrial electronics manufacturers, have pushed the boundary of system performance in terms of analog/digital data processing, remote management capabilities, timing stability, signal integrity, redundancy and maintainability. Whereas such public-private collaborations are not entirely new, novel instruments are needed to test the acceptance of the MTCA.4 standard beyond the physics community, identify gaps in the technology portfolio and align collaborative R&D programs accordingly. We describe the ongoing implementation of a time-limited validation project as a means towards this end, highlight the challenges encountered so far and present solutions for a sustainable division of labor along the industry value chain. | |||
MOPPC084 | ESS Integrated Control System and the Agile Methodology | software, feedback, target, neutron | 296 |
|
|||
The stakeholders of the ESS Integrated Control System (ICS) reside in four parts of the ESS machine: accelerator, target, neutron instruments and conventional facilities. ICS plans to meet the stakeholders’ needs early in the construction phase, and to accelerate and facilitate the commissioning process by providing and delivering the required tools earlier. This introduces the risk that stakeholders will not have the full set of required information available early enough for the development of the interfacing systems (e.g. missing requirements, undecided designs, etc.). In order for ICS to accomplish its objectives, it needs to establish a development process that allows quick adaptation to any change in the requirements with a minimum impact on the execution of the projects. The Agile methodology is well known for its ability to adapt quickly to change, as well as for involving users in the development process and producing working, reliable software from a very early stage in the project. The paper will present the plans, the tools, the organization of the team and the preliminary results of the setup work. | |||
MOPPC086 | Manage the MAX IV Laboratory Control System as an Open Source Project | software, TANGO, GUI, framework | 299 |
|
|||
Free Open Source Software (FOSS) is now deployed and used in most big facilities. It brings many qualities that can compete with proprietary software, such as robustness, reliability and functionality. Arguably the most important quality, which marks the DNA of FOSS, is transparency. This is the fundamental difference compared to its closed competitors and has a direct impact on how projects are managed. As users, reporters and contributors are more than welcome, project management has to have a clear strategy to promote exchange and to sustain a community. The control system teams have the chance to work in the same arena as their users and, even better, some of the users have programming skills. Rather than a fortress strategy, an open strategy may take advantage of this situation to enhance the user experience. In this paper we explain the position of the MAX IV KITS team: how “Tango install parties” and coding dojos have been used to promote contribution to the control system software, and how our projects are structured in terms of process and tools (Sardana, Git, …) to make them more accessible for in-house collaboration as well as for other facilities or even subcontractors. | |||
Poster MOPPC086 [7.230 MB] | ||
MOPPC087 | Tools and Rules to Encourage Quality for C/C++ Software | software, monitoring, framework, diagnostics | 303 |
|
|||
Inspired by the success of the software improvement process for Java projects, in place for several years in the CERN accelerator controls group, it was agreed in 2011 to apply the same principles to the C/C++ software developed in the group, an initiative we call the Software Improvement Process for C/C++ software (SIP4C/C++). The objectives of the SIP4C/C++ initiative are to: 1) agree on and establish best software quality practices, 2) choose tools for quality, and 3) integrate these tools into the build process. After a year we have reached a number of concrete results, thanks to the collaboration between the several projects involved, including: a common build tool (based on GNU Make), which standardizes the way C/C++ binaries are built, tested and released; unit testing with Google Test & Google Mock; continuous integration of C/C++ products with the existing CI server (Atlassian Bamboo); static code analysis (Coverity); generation of a manifest file with dependency information; and runtime in-process metrics. This work presents the SIP4C/C++ initiative in more detail, summarizing our experience and future plans. | |||
Poster MOPPC087 [3.062 MB] | ||
MOPPC088 | Improving Code Quality of the Compact Muon Solenoid Electromagnetic Calorimeter Control Software to Increase System Maintainability | software, monitoring, GUI, detector | 306 |
|
|||
Funding: Swiss National Science Foundation (SNSF) The Detector Control System (DCS) software of the Electromagnetic Calorimeter (ECAL) of the Compact Muon Solenoid (CMS) experiment at CERN is designed primarily to enable safe and efficient operation of the detector during Large Hadron Collider (LHC) data-taking periods. Through a manual analysis of the code and the adoption of ConQAT*, a software quality assessment toolkit, the CMS ECAL DCS team has made significant progress in reducing complexity and improving code quality, with observable results in terms of a reduction in the effort dedicated to software maintenance. This paper explains the methodology followed, including the motivation to adopt ConQAT, the specific details of how this toolkit was used and the outcomes that have been achieved. * ConQAT, https://www.conqat.org/ |
|||
Poster MOPPC088 [2.510 MB] | ||
MOPPC090 | Managing a Product Called NIF - PLM Current State and Processes | software, data-management, operation, laser | 310 |
|
|||
Funding: * This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. #LLNL-ABS-632452 Product lifecycle management (PLM) is the process of managing the entire lifecycle of a product from its conception, through design and manufacture, to service and disposal. The National Ignition Facility (NIF) can be considered one enormous product that is made up of hundreds of millions of individual parts and components (or products). The ability to manage and control the physical definition, status and configuration of the sum of all of these products is a monumental undertaking yet critical to the validity of the shot experiment data and the safe operation of the facility. NIF is meeting this challenge by utilizing an integrated and graded approach to implement a suite of commercial and custom enterprise software solutions to address PLM and other facility management and configuration requirements. It has enabled the passing of needed elements of product data into downstream enterprise solutions while at the same time minimizing data replication. Strategic benefits have been realized using this approach while validating the decision for an integrated approach where more than one solution may be required to address the entire product lifecycle management process. |
|||
Poster MOPPC090 [14.237 MB] | ||
MOPPC092 | Commissioning the MedAustron Accelerator with ProShell | interface, ion, framework, timing | 314 |
|
|||
MedAustron is a synchrotron-based centre for light ion therapy under construction in Austria. The accelerator and its control system entered the on-site commissioning phase in January 2013. This contribution presents the current status of the accelerator operation and commissioning procedure framework called ProShell. It is used to model measurement procedures for commissioning and operation with Petri nets. Beam diagnostics device adapters are implemented in C#. To illustrate its use for beam commissioning, the procedures currently in use are presented, including their integration with existing devices such as the ion source, power converters, slits, wire scanners and profile grid monitors. The beam spectrum procedure measures the distribution of particles generated by the ion source. The phase space distribution procedure performs emittance measurements in the beam transfer lines. The trajectory steering procedure measures the beam position in each part of the machine and aids in correcting the beam positions by integrating MAD-X optics calculations. Additional procedures and (beam diagnostic) devices are defined, implemented and integrated with ProShell on demand as commissioning progresses. | |||
Poster MOPPC092 [2.896 MB] | ||
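The Petri-net modelling behind ProShell rests on one rule: a transition fires only when all of its input places hold a token. A toy sketch of that rule, with all place and transition names invented for illustration:

```python
# Minimal Petri-net firing rule: consume a token from every input place,
# produce one in every output place. Names are hypothetical examples.

marking = {"source_ready": 1, "beam_requested": 1, "measuring": 0}

TRANSITIONS = {
    "start_measurement": (["source_ready", "beam_requested"], ["measuring"]),
}

def fire(name, m):
    """Fire a transition if enabled; return whether it fired."""
    inputs, outputs = TRANSITIONS[name]
    if all(m[p] > 0 for p in inputs):      # enabled only with all tokens present
        for p in inputs:
            m[p] -= 1                      # consume input tokens
        for p in outputs:
            m[p] += 1                      # produce output tokens
        return True
    return False

assert fire("start_measurement", marking)
assert marking == {"source_ready": 0, "beam_requested": 0, "measuring": 1}
assert not fire("start_measurement", marking)   # no longer enabled
```

Modelling a measurement procedure this way makes its preconditions explicit: a step simply cannot start until every device it depends on has reached the required state.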
MOPPC094 | ARIEL Control System at TRIUMF – Project Update | EPICS, PLC, ISAC, Linux | 318 |
|
|||
The Advanced Rare Isotope & Electron Linac (ARIEL) facility at TRIUMF, scheduled for Phase 1 completion in 2014, will use a control system based on EPICS. Discrete subsystems within the accelerator, beamlines and conventional facilities have been clearly identified. Control system strategies for each identified subsystem have been developed, and components have been chosen to satisfy the unique requirements of each system. The ARIEL control system will encompass methodology already established in the TRIUMF ISAC & ISAC-II facilities in addition to adoption of a number of technologies previously unused at TRIUMF. The scope includes interface with other discrete subsystems such as cryogenics and power distribution, as well as complete subsystem controls packages. | |||
MOPPC095 | PETAL Control System Status Report | laser, software, framework, hardware | 321 |
|
|||
Funding: CEA / Région Aquitaine / ILP / Europe / HYPER The PETAL laser facility is a high-energy multi-petawatt laser beam being installed in the Laser MegaJoule (LMJ) building. PETAL is designed to produce a laser beam of 3 kilojoules of energy with a duration of 0.5 picoseconds. Autonomous commissioning began in 2013. In the long term, PETAL’s control system is to be integrated into the LMJ’s control system for coupling with its 192 nanosecond laser beams. The presentation gives an overview of the general control system architecture, and focuses on the use of the TANGO framework in some of the subsystem software. The presentation then explains the steps planned to develop the control system from the first laser shots in autonomous exploitation to the merger into the LMJ facility. |
|||
Poster MOPPC095 [1.891 MB] | ||
MOPPC096 | Design and Implementation Aspects of the Control System at FHI FEL | FEL, interface, cavity, EPICS | 324 |
|
|||
A new mid-infrared FEL has been commissioned at the Fritz-Haber-Institut in Berlin. It will be used for spectroscopic investigations of molecules, clusters, nanoparticles and surfaces. The oscillator FEL is operated with 15 – 50 MeV electrons from a normal-conducting S-band linac equipped with a gridded thermionic gun and a chicane for controlled bunch compression. Construction of the facility building with the accelerator vault began in April 2010. First lasing was observed on February 15th, 2012. * The EPICS software framework was chosen to build the control system for this facility. The industrial utility control system is integrated using BACnet/IP. Graphical operator and user interfaces are based on the Control System Studio package. The EPICS Channel Archiver, an electronic logbook, a web-based monitoring tool, and a gateway complete the installation. This paper presents design and implementation aspects of the control system, its capabilities, and lessons learned during local and remote commissioning.
* W. Schöllkopf et al., FIRST LASING OF THE IR FEL AT THE FRITZ-HABER-INSTITUT, BERLIN, Conference FEL12 |
|||
Poster MOPPC096 [10.433 MB] | ||
MOPPC097 | The FAIR Control System - System Architecture and First Implementations | timing, software, operation, network | 328 |
|
|||
The paper presents the architecture of the control system for the Facility for Antiproton and Ion Research (FAIR), currently under development. The FAIR control system comprises the full electronics, hardware, and software to control, commission, and operate the FAIR accelerator complex for multiplexed beams. It takes advantage of collaborations with CERN by using proven framework solutions like FESA, LSA and White Rabbit. The equipment layer consists of equipment interfaces, embedded system controllers, and software representations of the equipment (FESA). A dedicated real-time network based on White Rabbit is used to synchronize and trigger actions at the equipment level. The middle layer provides service functionality both to the equipment layer and the application layer through the IP control system network. LSA is used for settings management. The application layer combines the applications for operators, as GUI applications or command line tools, typically written in Java. For validation of the concepts, FAIR's proton injector at CEA in France and CRYRING at GSI will already be commissioned in 2014 with reduced functionality of the proposed FAIR control system stack. | |||
![]() |
Poster MOPPC097 [2.717 MB] | ||
MOPPC098 | The EPICS-based Accelerator Control System of the S-DALINAC | EPICS, interface, network, hardware | 332 |
|
|||
Funding: Supported by DFG through CRC 634. The S-DALINAC (Superconducting Darmstadt Linear Accelerator) is an electron accelerator for energies from 3 MeV up to 130 MeV. It supplies beams of either spin-polarized or unpolarized electrons for experiments in the field of nuclear structure physics and related areas of fundamental research. The migration of the Accelerator Control System to an EPICS-based system started three years ago and has essentially been done in parallel to regular operation. While it has not been finished yet it already pervades all the different aspects of the control system. The hardware is interfaced by EPICS Input/Output Controllers. User interfaces are designed with Control System Studio (CSS) and BOY (Best Operator Interface Yet). Latest activities are aimed at the completion of the migration of the beamline devices to EPICS. Furthermore, higher-level aspects can now be approached more intensely. This includes the introduction of efficient alarm-handling capabilities as well as making use of interconnections between formerly separated parts of the system. This contribution will outline the architecture of the S-DALINAC's Accelerator Control System and report about latest achievements in detail. |
|||
![]() |
Poster MOPPC098 [26.010 MB] | ||
MOPPC099 | The ANKA Control System: On a Path to the Future | hardware, EPICS, Ethernet, interface | 336 |
|
|||
The machine control system of the synchrotron radiation source ANKA at KIT (Karlsruhe Institute of Technology) is migrating from dedicated I/O microcontroller boards, which utilise the LonWorks field bus and are visualised with the ACS CORBA-based control system, to Ethernet TCP/IP devices with an EPICS server layer and visualisation by Control System Studio (CSS). This migration is driven by the need to replace ageing hardware and to move away from the outdated microcontrollers' embedded LonWorks bus. Approximately 500 physical devices, such as power supplies, vacuum pumps, etc., will need to be replaced (or have their I/O hardware changed) and be integrated into the new EPICS/CSS control system. In this paper we report on the technology choices and the justifications for those choices, the progress of the migration, and how such a task can be achieved transparently with a fully operational user machine. We also report on the benefits reaped from using EPICS, CSS and BEAST alarming. | |||
![]() |
Poster MOPPC099 [0.152 MB] | ||
MOPPC100 | SKA Monitoring and Control Progress Status | operation, monitoring, site, interface | 340 |
|
|||
The Monitoring and Control system for the SKA radio telescope is now moving from the conceptual design to the system requirements and design phase, with the formation of a consortium geared towards delivering the Telescope Manager (TM) work package. Recent program decisions regarding hosting of the telescope across two sites, Australia and South Africa, have brought in new challenges from the TM design perspective. These include a strategy to leverage the individual capabilities of autonomous telescopes, and also integrating the existing precursor telescopes (ASKAP and MeerKAT), with their heterogeneous technologies and approaches, into the SKA. A key design goal, from the viewpoint of minimizing development and lifecycle costs, is to have a uniform architectural approach across the telescopes and to maximize standardization of software and instrumentation across the systems, despite potential variations in system hardware and procurement arrangements among the participating countries. This paper discusses some of these challenges and the mitigation approaches the consortium intends to pursue, along with an update on the current status and progress of the overall TM work. | |||
MOPPC101 | The Control Architecture of Large Scientific Facilities: ITER and LHC lessons for IFMIF | interface, neutron, network, EPICS | 344 |
|
|||
The development of an intense source of neutrons with the spectrum of DT fusion reactions is indispensable for qualifying suitable materials for the First Wall (FW) of the nuclear vessel in fusion power plants. The FW, an overlap of different layers, is essential in future reactors; it will convert the 14 MeV neutrons to thermal energy and generate T to feed the DT reactions. IFMIF will reproduce those irradiation conditions with two parallel 40 MeV CW deuteron linacs, at 2×125 mA beam current, colliding on a 25 mm thick Li screen flowing at 15 m/s and producing a neutron flux of 10¹⁸ m⁻²s⁻¹ in a 500 cm³ volume with a broad energy peak at 14 MeV. The design of the control architecture of a large scientific facility depends on the particularities of the processes in place and the volume of data generated, but it is also tied to project-management issues. LHC and ITER are two complex facilities, with ~10⁶ process variables, with different control system strategies, from the modular approach of CODAC to the more integrated implementation of the CERN Technical Network. This paper analyzes both solutions and extracts conclusions that shall be applied to the future control architecture of IFMIF. | |||
![]() |
Poster MOPPC101 [0.297 MB] | ||
MOPPC103 | Status of the RIKEN RI Beam Factory Control System | EPICS, ion, network, cyclotron | 348 |
|
|||
RIKEN Radioactive Isotope Beam Factory (RIBF) is a heavy-ion accelerator facility producing unstable nuclei and studying their properties. Since the first beam extraction in 2006 from the Superconducting Ring Cyclotron (SRC), the final-stage accelerator of RIBF, several updates have been performed. We present here two large-scale experimental instrumentation projects to be introduced in RIBF that offer new types of experiments. One is an isochronous storage ring aiming at precise mass measurements of short-lived nuclei (Rare-RI Ring); the other is the construction of a new beam transport line dedicated to more effective generation of mutations in seaweed induced by energetic heavy ions. To control them, the EPICS-based RIBF control system is now being upgraded. Each device used in the new experimental instrumentation is controlled by the same kind of controllers as the existing ones, such as Programmable Logic Controllers (PLCs). On the other hand, we have introduced Control System Studio (CSS) for the operator interface for the first time. We plan to roll out CSS step by step, not only for the new projects but also for the existing RIBF control system. | |||
![]() |
Poster MOPPC103 [2.446 MB] | ||
MOPPC104 | Design and Implementation of Sesame's Booster Ring Control System | booster, EPICS, PLC, network | 352 |
|
|||
SESAME is a synchrotron light source under installation located in Allan, Jordan. It consists of a 2.5 GeV storage ring, an 800 MeV booster synchrotron and a 22 MeV microtron as pre-injector. SESAME has obtained first beam from the microtron; the booster is expected to be commissioned by the end of 2013, the storage ring by the end of 2015 and the first beamlines in 2016. This paper presents the building of the control systems of the SESAME booster. EPICS is the main control software tool, with EDM for building GUIs, which is being replaced by CSS. PLCs are used mainly for the interlocks in the vacuum system and the magnet power supplies, and in diagnostics for fluorescent screens and camera switches. Soft IOCs serve various serial devices (e.g. vacuum gauge controllers) through Moxa terminal servers, and the booster power supplies through Ethernet connections. Libera Electron modules with EPICS tools (IOCs and GUIs) from Diamond Light Source are used for beam position monitoring. The timing system consists of one EVG and three EVR cards from Micro-Research Finland (MRF). A distributed version control repository using Git is used at SESAME to track development of the control subsystems. | |||
![]() |
Poster MOPPC104 [1.776 MB] | ||
MOPPC106 | Status Report of RAON Control System | EPICS, timing, vacuum, PLC | 356 |
|
|||
The RAON is a new heavy-ion accelerator under construction in South Korea, which is to produce a variety of stable-ion and rare-isotope beams to support basic science and applied research. To produce the isotopes fulfilling these requirements, we have planned several operation modes which require fine-tuned synchronous controls, asynchronous controls, or both among the accelerator complexes. The basic idea and development progress of the control system, as well as the future plan, are presented. | |||
![]() |
Poster MOPPC106 [1.403 MB] | ||
MOPPC107 | RF-Generators Control Tools for Kurchatov Synchrotron Radiation Source | synchrotron, electron, synchrotron-radiation, radiation | 359 |
|
|||
The technological equipment of the Kurchatov Synchrotron Radiation Source (KSRS) is currently being upgraded, and new equipment and software solutions for the control system are being implemented along with it. The KSRS main ring is an electron synchrotron with two 181 MHz RF generators; their control system provides measurement of generation parameters, regulation of tuning elements in waveguides and resonators, and output of alarm messages. At the execution level, VME-standard equipment is used. The server level is supported by Citect SCADA and an SQL historian server. The operator level of the control system is implemented as a local PC network. This allowed the number of measuring channels to be expanded, the speed of processing and data transfer to be increased, historical data to be available on demand at high query rates, and the accuracy of measurements to be improved. The article describes the structure of the KSRS RF-generator control system, covering all levels of control, and gives examples of the operator interface implementation. | |||
![]() |
Poster MOPPC107 [1.671 MB] | ||
MOPPC108 | Status of the NSLS-II Booster Control System | booster, vacuum, timing, operation | 362 |
|
|||
The booster control system is an integral part of the NSLS-II control system and is developed under EPICS. It includes six IBM System x3250 M3 servers and four VME3100 controllers connected via Gigabit Ethernet. These computers run IOCs for power supply control, timing, beam diagnostics and interlocks. cPCI ADCs located in a cPCI crate are also used for beam diagnostics. The front-end electronics for vacuum control and interlocks are Allen-Bradley programmable logic controllers and I/O devices. The timing system is based on Micro-Research Finland Oy products: the EVR-230RF and PMC EVR. Power supply control uses a BNL-developed pair consisting of a Power Supply Interface (PSI), located close to the power supplies, and a Power Supply Controller (PSC), connected to a front-end computer via 100 Mbit Ethernet. Each PSI is connected to its PSC via a fiber-optic link. High-level applications developed in Control System Studio and Python run on operator consoles located in the Control Room. This paper describes the final design and status of the booster control system; functional block diagrams are presented. | |||
![]() |
Poster MOPPC108 [0.458 MB] | ||
MOPPC109 | Status of the MAX IV Laboratory Control System | linac, storage-ring, interface, TANGO | 366 |
|
|||
The MAX IV Laboratory is a new synchrotron light source being built in Lund, in southern Sweden. The accelerator complex consists of a 3 GeV, 300 m long full-energy linac, two storage rings of 1.5 GeV and 3 GeV, and a Short Pulse Facility (SPF) for pump-probe experiments with bunches around 100 fs long. First X-rays are expected to be delivered to users in 2015 for the SPF and in 2016 for the storage rings. This paper describes the progress in the design of the control system for the accelerator and the different solutions adopted for data acquisition, synchronisation, networking, safety and other aspects related to the control system. | |||
![]() |
Poster MOPPC109 [0.522 MB] | ||
MOPPC110 | The Control System for the CO2 Cooling Plants for Physics Experiments | detector, operation, software, interface | 370 |
|
|||
CO2 cooling has become an interesting technology for current and future tracking particle detectors. A key advantage of using CO2 as a refrigerant is its high heat-transfer capability, which allows a significant material-budget saving, a critical element in state-of-the-art detector technologies. Several CO2 cooling stations, with cooling power ranging from 100 W to several kW, have been developed at CERN to support detector testing for future LHC detector upgrades. Currently, two CO2 cooling plants, for the ATLAS Pixel Insertable B-Layer and the Phase I Upgrade CMS Pixel detector, are under construction. This paper describes the control system design and implementation using the UNICOS framework for the PLCs and SCADA. The control philosophy, safety and interlocking standards, user interfaces and additional features are presented. CO2 cooling is characterized by high operational stability and accurate evaporation-temperature control over large distances. Split-range PID controllers with dynamically calculated limiters, multi-level interlocking and new software tools, such as an online CO2 p-H diagram, jointly enable the cooling to fulfill the key requirements of a reliable system. | |||
![]() |
Poster MOPPC110 [2.385 MB] | ||
MOPPC111 | Overview of LINAC4 Beam Instrumentation Software | linac, software, emittance, electronics | 374 |
|
|||
This paper presents an overview of results from the recent LINAC4 commissioning with H− beam at CERN. It covers the beam instrumentation systems acquiring beam position, intensity, size and emittance, from the project proposal through to the commissioning results. | |||
MOPPC112 | Current Status and Perspectives of the SwissFEL Injector Test Facility Control System | EPICS, operation, software, network | 378 |
|
|||
The Free Electron Laser (SwissFEL) Injector Test Facility at the Paul Scherrer Institute has been in operation for more than three years. The Injector Test Facility machine is a valuable development and validation platform for all major SwissFEL subsystems, including controls. Based on the experience gained from supporting Test Facility operations, the paper presents current and prospective controls solutions focusing on the future SwissFEL project. | |||
![]() |
Poster MOPPC112 [1.224 MB] | ||
MOPPC116 | Evolution of Control System Standards on the Diamond Synchrotron Light Source | EPICS, interface, Linux, hardware | 381 |
|
|||
Control system standards for the Diamond synchrotron light source were initially developed in 2003. They were largely based on Linux, EPICS and VME and were applied fairly consistently across the three accelerators and the first twenty photon beamlines. With funding for further photon beamlines in 2011, the opportunity was taken to redefine the standards to be largely based on Linux, EPICS, PCs and Ethernet. The developments associated with this will be presented, together with solutions being developed for requirements that fall outside the standards. | |||
![]() |
Poster MOPPC116 [0.360 MB] | ||
MOPPC118 | Development of EPICS Accelerator Control System for the IAC 44 MeV Linac | linac, EPICS, power-supply, database | 385 |
|
|||
The Idaho Accelerator Center (IAC) of Idaho State University (ISU) operates nine low-energy accelerators. Since the beginning of the fall semester of 2012, the ISU Advanced Accelerator and Ultrafast Beam Lab (AAUL) group has been working to develop a new EPICS system to control 47 magnet power supplies for the IAC 44 MeV L-band linear accelerator. Its original control system was fully analog, which imposed several limitations on reproducibility and stability during accelerator operation. This paper describes our group's effort and accomplishment in developing a new EPICS system to control 15 Lambda EMS and 32 TDK-Lambda ZUP power supplies for the IAC L-band linear accelerator. In addition, we describe several other useful tools, such as a save-and-restore function. | |||
![]() |
Poster MOPPC118 [1.175 MB] | ||
MOPPC120 | Commissioning Status of NSLS-II Vacuum Control System | vacuum, PLC, EPICS, linac | 389 |
|
|||
The National Synchrotron Light Source II (NSLS-II) is a state-of-the-art 3 GeV third-generation light source currently under integrated testing and commissioning at Brookhaven National Laboratory. The vacuum systems are monitored via vacuum gauges and ion-pump currents. The gate valves are controlled by programmable logic controllers (PLCs) using a voting scheme. EPICS application code provides the high-level monitoring and control through the input-output controllers. This paper discusses the commissioning status of the various aspects of the vacuum control system. | |||
![]() |
Poster MOPPC120 [0.648 MB] | ||
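The abstract above mentions that the gate valves are interlocked by PLCs "using a voting scheme". A common form of such a scheme is a 2-out-of-3 majority vote over redundant pressure readings; the sketch below illustrates that idea only (the function name, threshold and gauge values are hypothetical, not the actual NSLS-II PLC logic):

```python
def vote_2oo3(readings, threshold):
    """Return True (trip) when at least 2 of 3 gauge readings exceed threshold.

    A 2-out-of-3 vote tolerates one failed sensor: a single spurious
    reading cannot close the valve, and a single dead sensor cannot
    mask a real pressure rise.
    """
    votes = sum(1 for r in readings if r > threshold)
    return votes >= 2

# Three hypothetical cold-cathode gauge readings in mbar:
assert vote_2oo3([2e-9, 3e-6, 4e-6], threshold=1e-6) is True   # two high -> trip
assert vote_2oo3([2e-9, 3e-9, 4e-6], threshold=1e-6) is False  # one high -> no trip
```

In a real PLC this would be ladder or function-block logic scanned every cycle; the Python form just makes the voting rule explicit.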
MOPPC122 | EPICS Interface and Control of NSLS-II Residual Gas Analyzer System | EPICS, vacuum, interface, operation | 392 |
|
|||
Residual Gas Analyzers (RGAs) have been widely used in accelerator vacuum systems for monitoring and vacuum diagnostics. The National Synchrotron Light Source II (NSLS-II) vacuum system adopts the Hiden RC-100 RGA, which supports remote electronics, thus allowing real-time diagnostics during beam operation as well as data archiving and off-line analysis. This paper describes the interface and operation of these RGAs with the EPICS-based control system. | |||
![]() |
Poster MOPPC122 [1.004 MB] | ||
MOPPC123 | Extending WinCC OA for Use as Accelerator Control System Core | ion, interface, status, real-time | 395 |
|
|||
The accelerator control system for the MedAustron light-ion medical particle accelerator has been designed under the guidance of CERN in the scope of an EBG MedAustron/CERN collaboration agreement. The core is based on the SIMATIC WinCC OA SCADA tool. Its open API and modular architecture permitted CERN to extend the product with features that go beyond traditional supervisory control and that are vital for directly operating a particle accelerator. Several extensions have been introduced to make WinCC OA fit for accelerator control: (1) near real-time data visualization, (2) external application launch and monitoring, (3) accelerator settings snapshot and consistent restore, (4) generic panel navigation supporting role-based permission handling, (5) native integration with interactive 3D engineering visualization, (6) integration with National Instruments based front-end controllers. The major drawback identified is the lack of support for callbacks from C++ extensions, which prevents asynchronous functions, multithreaded implementations and soft real-time behaviour. We are therefore seeking support in the user community to trigger the implementation of this function. | |||
![]() |
Poster MOPPC123 [0.656 MB] | ||
MOPPC124 | Optimizing EPICS for Multi-Core Architectures | EPICS, real-time, Linux, software | 399 |
|
|||
Funding: Work supported by German Bundesministerium für Bildung und Forschung and Land Berlin. EPICS is a widely used software framework for real-time controls in large facilities, accelerators and telescopes. Its multithreaded IOC (Input Output Controller) Core software has been developed on traditional single-core CPUs. The ITER project will use modern multi-core CPUs, running the RHEL Linux operating system in its MRG-R real-time variant. An analysis of the thread handling in IOC Core shows different options for improving the performance and real-time behavior, which are discussed and evaluated. The implementation is split between improvements inside EPICS Base, which have been merged back into the main distribution, and a support module that makes full use of these new features. This paper describes design and implementation aspects, and presents results as well as lessons learned. |
|||
![]() |
Poster MOPPC124 [0.448 MB] | ||
MOPPC126 | !CHAOS: the "Control Server" Framework for Controls | framework, distributed, software, database | 403 |
|
|||
We report on the progress of !CHAOS*, a framework for the development of control and data acquisition services for particle accelerators and large experimental apparatuses. !CHAOS introduces to the world of controls a new approach for designing and implementing communications and data distribution among components, and for providing the middle-layer services for a control system. Based on software technologies borrowed from high-performance Internet services, !CHAOS offers, using a centralized yet highly scalable, cloud-like approach, all the services needed for controlling and managing a large infrastructure. It includes a number of innovative features such as high abstraction of services, devices and data; easy and modular customization; extensive data caching for enhanced performance; and integration of all services in a common framework. Since the !CHAOS conceptual design was presented two years ago, the INFN group has been working on the implementation of services and components of the software framework. Most of them have been completed and tested for performance and reliability. Some services are already installed and operational in experimental facilities at LNF.
* L. Catani et al., "Introducing a new paradigm for accelerators and large experimental apparatus control systems", Phys. Rev. ST Accel. Beams, http://prst-ab.aps.org/abstract/PRSTAB/v15/i11/e112804 |
|||
![]() |
Poster MOPPC126 [0.874 MB] | ||
MOPPC128 | Real-Time Process Control on Multi-Core Processors | real-time, framework, operation, software | 407 |
|
|||
Real-time control is essential for low-level RF and timing systems to ensure beam stability in accelerator operation. It is difficult to optimize the priority control of multiple real-time-class and time-sharing-class processes on a single-core processor; for example, we cannot even log into the operating system if a real-time-class process occupies the single core. Recently, multi-core processors have been utilized for equipment controls. We studied the control of multiple processes running on multi-core processors. After several tunings, we confirmed that the operating system runs stably under heavy load on multi-core processors. This makes it possible to achieve the millisecond-order response required by fast control systems such as an event-synchronized data acquisition system. Additionally, we measured the response performance between client and server processes using the MADOCA II framework, the next generation of MADOCA. In this paper we present the tunings for real-time process control on multi-core processors and the performance results of MADOCA II. | |||
![]() |
Poster MOPPC128 [0.450 MB] | ||
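One tuning of the kind described above is to dedicate specific cores to real-time tasks while leaving the rest for time-sharing processes, so a busy real-time loop can never lock operators out of the machine. A minimal, hypothetical sketch of core reservation using Linux CPU affinity (not the actual SPring-8 tuning, which also involves scheduler classes and priorities):

```python
import os

def pin_to_cpus(cpus):
    """Pin the calling process to the given CPU set (Linux only).

    Reserving, say, CPU 0 for shells and logging while real-time
    processes run on the remaining cores avoids the single-core
    lock-out problem described in the abstract.
    """
    if hasattr(os, "sched_setaffinity"):      # available on Linux
        os.sched_setaffinity(0, set(cpus))    # 0 = current process
        return os.sched_getaffinity(0)
    return None  # platform without affinity control

# Confine this (time-sharing) process to CPU 0:
print(pin_to_cpus({0}))
```

Real-time class assignment itself (e.g. SCHED_FIFO via `os.sched_setscheduler`) additionally requires root privileges, which is why it is sketched here only as affinity.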
MOPPC129 | MADOCA II Interface for LabVIEW | LabView, interface, framework, Windows | 410 |
|
|||
LabVIEW is widely used for experimental station control at SPring-8 and is also partially used for accelerator control, while most software of the SPring-8 accelerator and beamline controls is built on the MADOCA control framework. As synchrotron radiation experiments advance, complex data exchange is required between the MADOCA and LabVIEW control systems, which had not previously been realized. We have developed the next-generation MADOCA, called MADOCA II, as reported at this ICALEPCS (T. Matsumoto et al.). We ported the MADOCA II framework to Windows and developed a MADOCA II interface for LabVIEW. Using the interface, variable-length data can be exchanged between MADOCA- and LabVIEW-based software. As a first application, we developed a readout system for an electron beam position monitor with NI PCI-5922 digitizers. A client sends a message to a remote LabVIEW-based digitizer readout program via the MADOCA II middleware, and the readout system sends waveform data back to the client. We plan to apply the interface to various accelerator and synchrotron radiation experiment controls. | |||
MOPPC130 | A New Message-Based Data Acquisition System for Accelerator Control | data-acquisition, database, embedded, network | 413 |
|
|||
The data logging system for the SPring-8 accelerator complex has been operating for 16 years as part of the MADOCA system. Collector processes periodically request distributed computers to collect sets of data via the synchronous ONC-RPC protocol at fixed cycles. We also developed the separate MyDAQ system for casual or temporary data acquisition, in which a data acquisition process running on a local computer pushes one BSD socket stream to a server at arbitrary times. Its "one stream per signal" strategy made data management simple, but the system does not scale. We have developed a new data acquisition system that combines MADOCA-class scale with MyDAQ's simplicity for a new-generation accelerator project. The new system, based on the ZeroMQ messaging library and the MessagePack serialization library, provides high availability, asynchronous messaging, flexibility in data expression, and scalability. Input/output plug-ins accept multiple protocols and send data to various data systems. This paper describes the design, implementation, performance, reliability and deployment of the system. | |||
![]() |
Poster MOPPC130 [0.197 MB] | ||
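The core idea above is that each data point travels as a self-describing serialized message pushed asynchronously, instead of being pulled synchronously at fixed cycles. The sketch below illustrates that message shape with stdlib stand-ins (JSON for MessagePack, a queue for a ZeroMQ PUSH socket); the signal names are invented for illustration:

```python
import json
import queue
import time

def make_message(signal, value, ts=None):
    """Build a self-describing DAQ message: signal name, timestamp, value.
    The real system serializes with MessagePack; JSON is used here only
    to keep the sketch stdlib-only."""
    return json.dumps({"signal": signal, "ts": ts or time.time(), "value": value})

outbox = queue.Queue()  # stand-in for an asynchronous ZeroMQ PUSH socket

# Producer side: push at arbitrary times, never blocking on a collector cycle.
outbox.put(make_message("ring:bpm01:x", 0.12, ts=1.0))
outbox.put(make_message("ring:vac:gauge03", 3.2e-7, ts=1.5))

# Consumer side: decode any message without per-signal configuration.
msg = json.loads(outbox.get())
assert msg["signal"] == "ring:bpm01:x" and msg["value"] == 0.12
```

Because each message names its own signal, one transport can carry many signals, which is what removes MyDAQ's "one stream per signal" scalability limit.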
MOPPC131 | Experience of Virtual Machines in J-PARC MR Control | operation, EPICS, Linux, embedded | 417 |
|
|||
At the J-PARC Main Ring (MR), we have used virtual-machine environments extensively in our accelerator control. In 2011 we developed the virtual IOC, an EPICS Input/Output Controller running on a virtual machine [1]. Now, in 2013, about 20 virtual IOCs are used in daily MR operation. In the summer of 2012 we updated our operating system from Scientific Linux 4 (SL4) to Scientific Linux 6 (SL6), in which the KVM virtual-machine environment is supported as a default service. This encouraged us to port the basic control services (LDAP, DHCP, TFTP, RDB, archiver, etc.) to multiple virtual machines, one service per virtual machine, running on a few physical machines. This scheme enables easier maintenance of the control services than before. In this paper, our experiences using virtual machines during J-PARC MR operation are reported.
[1] N. Kamikubota et al., "Virtual IO Controllers at J-PARC MR Using Xen", ICALEPCS 2011 |
|||
![]() |
Poster MOPPC131 [0.213 MB] | ||
MOPPC132 | Evaluating Live Migration Performance of a KVM-Based EPICS | EPICS, network, software, Linux | 420 |
|
|||
In this paper we present results of a live-migration performance evaluation of a KVM-based EPICS system on PCs, considering the performance of storage, network and CPU. For the evaluation we built a lightweight demo EPICS control system. For time measurement, we set up a monitor PV that automatically changes its value at regular time intervals. Data Browser can display the values of 'live' PVs and measure the time; from this we obtain the live-migration time. | |||
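The measurement trick above, a PV that ticks at a known period, lets the migration downtime be read off as the largest gap between successive updates. A small illustrative sketch of that analysis (function name and sample data are hypothetical, the real measurement uses Data Browser on archived PV timestamps):

```python
def migration_downtime(timestamps, interval):
    """Estimate live-migration downtime from the update times of a
    monitor PV that ticks every `interval` seconds: the largest gap
    beyond the nominal period is the time the IOC was unreachable."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    worst = max(gaps)
    return max(0.0, worst - interval)

# PV updates every 1 s; one 1.8 s stall during migration (invented data):
samples = [0.0, 1.0, 2.0, 3.8, 4.8, 5.8]
assert abs(migration_downtime(samples, 1.0) - 0.8) < 1e-9
```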
MOPPC137 | IEC 61850 Industrial Communication Standards under Test | network, framework, Ethernet, software | 427 |
|
|||
IEC 61850, as part of the International Electrotechnical Commission's Technical Committee 57, defines an international and standardized methodology for designing electric power automation substations. It specifies a common way of communicating and integrating heterogeneous systems based on multivendor intelligent electronic devices (IEDs). They are connected to an Ethernet network and, according to IEC 61850, their abstract data models have been mapped to specific communication protocols: MMS, GOOSE, SV and, possibly in the future, Web Services. All of these can run over TCP/IP networks, so they can easily be integrated with Enterprise Resource Planning networks; while this integration provides economic and functional benefits for the companies, it also exposes the industrial infrastructure to existing external cyber-attacks. Within the Openlab collaboration between CERN and Siemens, a test bench has been developed specifically to evaluate the robustness of industrial equipment (TRoIE). This paper describes the design and implementation of the testing framework, focusing on the implementations of the IEC 61850 protocols mentioned above. | |||
![]() |
Poster MOPPC137 [1.673 MB] | ||
MOPPC138 | Continuous Integration for Automated Code Generation Tools | software, framework, PLC, target | 431 |
|
|||
The UNICOS* (UNified Industrial COntrol System) framework was created back in 1998 as a solution for building object-based, industry-like control systems. The Continuous Process Control package (CPC**) is a UNICOS component that provides a methodology and a set of tools to design and implement industrial control applications. UAB** (UNICOS Application Builder) is the software factory used to develop UNICOS-CPC applications. The constant evolution of the CPC component made it necessary to create a new tool to validate the generated applications and to verify that modifications introduced in the software tools do not have any undesirable effect on existing control applications. The uab-maven-plugin is a plug-in for the Apache Maven build manager that can be used to trigger the generation of the CPC applications and verify the consistency of the generated code. This plug-in can be integrated into continuous integration tools, such as Hudson or Jenkins, to create jobs that constantly monitor changes in the software and trigger a regeneration of all the applications held in source code management.
* "UNICOS a framework to build industry like control systems: Principles & Methodology". ** "UNICOS CPC6: Automated code generation for process control applications". |
|||
![]() |
Poster MOPPC138 [4.420 MB] | ||
MOPPC140 | High-Availability Monitoring and Big Data: Using Java Clustering and Caching Technologies to Meet Complex Monitoring Scenarios | monitoring, distributed, software, network | 439 |
|
|||
Monitoring and control applications face ever more demanding requirements: as both data sets and data rates continue to increase, non-functional requirements such as performance, availability and maintainability become more important. C2MON (CERN Control and Monitoring Platform) is a monitoring platform developed at CERN over the past few years. Making use of modern Java caching and clustering technologies, the platform supports multiple deployment architectures, from a simple 3-tier system to highly complex clustered solutions. In this paper we consider various monitoring scenarios and how the C2MON deployment strategy can be adapted to meet them. | |||
![]() |
Poster MOPPC140 [1.382 MB] | ||
MOPPC142 | Groovy as Domain-specific Language (DSL) in Software Interlock System | DSL, software, Domain-Specific-Languages, framework | 443 |
|
|||
The SIS, in operation for over 7 years, is a mission-critical component of the CERN accelerator control system, covering areas from general machine protection to diagnostics. The growing number of instances and the size of the existing installations have increased both the complexity and the maintenance cost of running the SIS infrastructure. Moreover, the domain experts have found the XML and Java mixture used for configuration difficult and suitable only for software engineers. To address these issues, new ways of configuring the system have been investigated, aiming at simplifying the process by making it faster, more user-friendly and accessible to a wider audience. Of the existing DSL choices (fluent Java APIs, external/internal DSLs), the Groovy scripting language was considered particularly well suited for writing a custom DSL due to its built-in language features: Java compatibility, native syntax constructs, command chain expressions, hierarchical structures with builders, closures and AST transformations. This paper explains best practices and lessons learned while building the accelerator-domain-oriented DSL for the configuration of the interlock system. | |||
Poster MOPPC142 [0.510 MB] | ||
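The appeal of an internal DSL is that domain experts write declarative interlock rules in ordinary language syntax rather than XML. The paper's DSL is built in Groovy; purely as an illustration of the idea, a minimal internal DSL with chained rule definitions can be sketched in Python (all names here are hypothetical, not taken from the SIS):

```python
# Minimal internal-DSL sketch for interlock configuration (illustrative only;
# the SIS DSL described in the paper is written in Groovy, and every name
# below is invented for this example).

class Interlock:
    def __init__(self, name):
        self.name = name
        self.conditions = []   # list of (description, predicate) pairs

    def when(self, description, predicate):
        """Register a named trip condition; returns self to allow chaining."""
        self.conditions.append((description, predicate))
        return self

    def evaluate(self, readings):
        """Return the descriptions of all conditions that currently trip."""
        return [desc for desc, pred in self.conditions if pred(readings)]

# A domain expert would write rules in a declarative, chained style:
beam_loss = (
    Interlock("beam-loss-dump")
    .when("BLM reading above threshold", lambda r: r["blm"] > 5.0)
    .when("vacuum pressure too high", lambda r: r["pressure"] > 1e-7)
)

print(beam_loss.evaluate({"blm": 6.2, "pressure": 5e-8}))
# ['BLM reading above threshold']
```

Groovy's command chain expressions and builders let the same pattern read even closer to plain English, which is the feature the abstract highlights.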
MOPPC143 | Plug-in Based Analysis Framework for LHC Post-Mortem Analysis | framework, operation, injection, software | 446 |
|
|||
Plug-in based software architectures are extensible, enforce modularity and allow several teams to work in parallel. But they come with certain technical and organizational challenges, which we discuss in this paper. We gained our experience when developing the Post-Mortem Analysis (PMA) system, a mission-critical system for the Large Hadron Collider (LHC). We used a plug-in-based architecture with a general-purpose analysis engine, for which physicists and equipment experts code plug-ins containing the analysis algorithms. We have over 45 analysis plug-ins developed by a dozen domain experts. This paper focuses on the design challenges we faced in mitigating the risks of executing third-party code: ensuring that even a badly written plug-in does not perturb the work of the overall application; controlling plug-in execution so that misbehaviour can be detected and handled; providing a robust communication mechanism between plug-ins; facilitating diagnostics in case of plug-in failure; and testing plug-ins before integration into the application.
https://espace.cern.ch/be-dep/CO/DA/Services/Post-Mortem%20Analysis.aspx |
|||
Poster MOPPC143 [3.128 MB] | ||
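The central defensive pattern here is that the engine never trusts a plug-in: every call is wrapped so a failure is recorded rather than propagated. A stdlib sketch of that idea (the real PMA system is Java; the plug-in names below are invented):

```python
# Sketch of plug-in isolation: the analysis engine calls third-party plug-ins
# defensively, so a badly written plug-in cannot crash the application.
# (Illustrative only; the actual PMA engine is a Java system.)

def run_plugins(plugins, event):
    """Run each plug-in on the event; collect results and failures separately."""
    results, failures = {}, {}
    for name, analyze in plugins.items():
        try:
            results[name] = analyze(event)
        except Exception as exc:     # misbehaviour is recorded, never fatal
            failures[name] = repr(exc)
    return results, failures

plugins = {
    "quench": lambda ev: max(ev["currents"]) > 100,
    "broken": lambda ev: ev["missing_key"],   # deliberately buggy plug-in
}
results, failures = run_plugins(plugins, {"currents": [80, 120, 95]})
print(results)            # {'quench': True}
print(sorted(failures))   # ['broken']
```

A production engine would add what the abstract lists on top of this: execution-time limits, inter-plug-in messaging, and pre-integration testing of each plug-in.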
MOPPC145 | Mass-Accessible Controls Data for Web Consumers | network, framework, operation, status | 449 |
|
|||
The past few years in computing have seen the emergence of smart mobile devices, sporting multi-core embedded processors, powerful graphical processing units, and pervasive high-speed network connections (supported by WIFI or EDGE/UMTS). The relatively limited capacity of these devices requires relying on dedicated embedded operating systems (such as Android or iOS), while their diverse form factors (from mobile phone screens to large tablet screens) require the adoption of programming techniques and technologies that are both resource-efficient and standards-based for better platform independence. We consider the options available for hybrid desktop/mobile web development today, from native software development kits (Android, iOS) to platform-independent solutions (mobile Google Web Toolkit [3], jQuery Mobile, Apache Cordova [4], OpenSocial). Through the authors' successive attempts at implementing a range of solutions for LHC-related data broadcasting, from data acquisition systems and LHC middleware such as DIP and CMW on to the World Wide Web, we investigate which choices are valid and which pitfalls to avoid in today's web development landscape. | |||
Poster MOPPC145 [1.318 MB] | ||
MOPPC146 | MATLAB Objects for EPICS Channel Access | EPICS, interface, status, operation | 453 |
|
|||
With the substantial dependence on MATLAB for application development at the SwissFEL Injector Test Facility, the requirement for a robust and extensive EPICS Channel Access (CA) interface became increasingly imperative. To this effect, a new MATLAB Executable (Mex) file has been developed around an in-house C++ CA interface library (CAFE), which serves to expose comprehensive CA functionality within the MATLAB framework. Immediate benefits include support for all MATLAB data types, a rich set of synchronous and asynchronous methods, a further physics-oriented abstraction layer that uses CA synchronous groups, and compilation on 64-bit architectures. An account of the mocha (Matlab Objects for CHannel Access) interface is presented. | |||
MOPPC148 | Not Dead Yet: Recent Enhancements and Future Plans for EPICS Version 3 | EPICS, software, target, Linux | 457 |
|
|||
Funding: Work supported by U.S. Department of Energy, Office of Science, under Contract No. DE-AC02-06CH11357. The EPICS Version 4 development effort* is not planning to replace the current Version 3 IOC Database or its use of the Channel Access network protocol in the near future. Interoperability is a key aim of the V4 development, which is building upon the older IOC implementation. EPICS V3 continues to gain new features and functionality on its Version 3.15 development branch, while the Version 3.14 stable branch has been accumulating minor tweaks, bug fixes, and support for new and updated operating systems. This paper describes the main enhancements provided by recent and upcoming releases of EPICS Version 3 for control system applications. * Korhonen et al, "EPICS Version 4 Progress Report", this conference. |
|||
Poster MOPPC148 [5.067 MB] | ||
MOPPC149 | A Messaging-Based Data Access Layer for Client Applications | data-acquisition, network, operation, interface | 460 |
|
|||
Funding: US Department of Energy The Fermilab Accelerator Control system has recently integrated use of a publish/subscribe infrastructure as a means of communication between Java client applications and data acquisition middleware. This supersedes a previous implementation based on Java Remote Method Invocation (RMI). The RMI implementation had issues with network firewalls, misbehaving client applications affecting the middleware, lack of portability to other platforms, and cumbersome authentication. The new system uses the AMQP messaging protocol and RabbitMQ data brokers. This decouples the clients from the middleware, is more portable to other languages, and has proven to be much more reliable. A Java client library provides for single synchronous operations as well as periodic data subscriptions. The new system is now used by the general synoptic display manager application as well as a number of new custom applications. A web service has also been written that provides easy access to control system data from many languages.
|||
Poster MOPPC149 [4.654 MB] | ||
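The decoupling the abstract attributes to AMQP is that publishers and subscribers only know a broker and a topic name, never each other, which is what eliminates the RMI-era coupling problems. A minimal in-process sketch of that pattern (illustrative only; the real system uses RabbitMQ brokers over AMQP, and the device name below is hypothetical):

```python
# In-process publish/subscribe sketch illustrating broker-based decoupling:
# the publisher and the subscriber share only the broker and a topic string.
from collections import defaultdict

class Broker:
    def __init__(self):
        self._subscribers = defaultdict(list)   # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        for callback in self._subscribers[topic]:
            callback(message)

broker = Broker()
received = []
broker.subscribe("device/readings", received.append)  # client-side subscription
broker.publish("device/readings", {"name": "M:OUTTMP", "value": 72.3})
print(received)   # [{'name': 'M:OUTTMP', 'value': 72.3}]
```

With a real broker the two sides also gain what an in-process sketch cannot show: independent process lifetimes, buffering when a client is slow, and language-agnostic wire format.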
MOPPC150 | Channel Access in Erlang | EPICS, framework, network, detector | 462 |
|
|||
We have developed an Erlang language implementation of the Channel Access protocol. Included are low-level functions for encoding and decoding Channel Access protocol network packets as well as higher level functions for monitoring or setting EPICS Process Variables. This provides access to EPICS process variables for the Fermilab Acnet control system via our Erlang-based front-end architecture without having to interface to C/C++ programs and libraries. Erlang is a functional programming language originally developed for real-time telecommunications applications. Its network programming features and list management functions make it particularly well-suited for the task of managing multiple Channel Access circuits and PV monitors. | |||
Poster MOPPC150 [0.268 MB] | ||
MOPPC155 | NSLS II Middlelayer Services | lattice, database, interface, EPICS | 467 |
|
|||
Funding: Work supported under auspices of the U.S. Department of Energy under Contract No. DE-AC02-98CH10886 with Brookhaven Science Associates, LLC, and in part by the DOE Contract DE-AC02-76SF00515 A service-oriented architecture has been designed for the NSLS II project for its beam commissioning and daily operation. Middle-layer services have been under active development, and some of them have been deployed into the NSLS II control network to support beam commissioning. The services are based on two main technologies: RESTful web services and EPICS V4. They provide functions to take machine status snapshots, convert magnet settings between different unit systems, and serve lattice information and simulation results. This paper presents the latest status of service development for the NSLS II project and our future development plans.
|||
Poster MOPPC155 [2.079 MB] | ||
MOPPC157 | Application of Transparent Proxy Servers in Control Systems | operation, framework, collider, target | 475 |
|
|||
Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. Proxy servers (proxies) have been a staple of the World Wide Web infrastructure since its humble beginning. They provide a number of valuable functional services such as access control, caching or logging. Historically, control systems have had little need for full-fledged proxied systems, as direct, unimpeded resource access is almost always preferable. This still holds true today; however, unbounded direct asset access can lead to performance issues, especially on older, underpowered systems. This paper describes an implementation of a fully transparent proxy server used to moderate asynchronous data flow between selected front-end computers (FECs) and their clients, as well as the infrastructure changes required to accommodate this new platform. Finally, it ventures into the future by examining additional untapped benefits of proxied control systems, such as write-through caching and runtime read-write modifications.
|||
Poster MOPPC157 [1.873 MB] | ||
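The point of a *transparent* proxy is that clients keep the same call they would make to the FEC, while the proxy absorbs repeated reads that would otherwise load an underpowered front end. A small sketch of that caching behaviour (hypothetical names and a use-count freshness rule chosen only to keep the example deterministic; the paper's proxy moderates asynchronous flows):

```python
# Caching-proxy sketch: clients call read() exactly as they would query the
# front-end computer (FEC); the proxy serves repeated reads from cache.
# (Illustrative only; names and the freshness policy are invented.)
class CachingProxy:
    def __init__(self, fetch, ttl_reads=3):
        self._fetch = fetch      # function that actually queries the FEC
        self._cache = {}         # parameter -> (value, remaining cached uses)
        self._ttl = ttl_reads
        self.backend_calls = 0   # how often the FEC was really contacted

    def read(self, parameter):
        value, remaining = self._cache.get(parameter, (None, 0))
        if remaining <= 0:                       # stale or unknown: go to FEC
            self.backend_calls += 1
            value, remaining = self._fetch(parameter), self._ttl
        self._cache[parameter] = (value, remaining - 1)
        return value

proxy = CachingProxy(fetch=lambda p: 42.0)
for _ in range(6):
    proxy.read("magnet.current")
print(proxy.backend_calls)   # 2: one initial load, one refresh after 3 cached reads
```

Write-through caching, mentioned at the end of the abstract, would extend the same structure with a write() that updates the cache and forwards to the FEC in one step.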
MOPPC158 | Application of Modern Programming Techniques in Existing Control System Software | framework, injection, software, operation | 479 |
|
|||
Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The Accelerator Device Object (ADO) specification and its original implementation are almost 20 years old. In those two decades ADO development methodology has changed very little, which is a testament to its robust design. During the same time frame, however, many new technologies and ideas have been introduced, many of them with applicable and tangible benefits for control system software. This paper describes how concepts such as convention over configuration and the aspect-oriented programming (AOP) paradigm, coupled with powerful techniques like bytecode generation and manipulation, can greatly simplify both server- and client-side development by allowing developers to concentrate on the core implementation details without polluting their code with: 1) synchronization blocks, 2) supplementary validation, 3) asynchronous communication calls or 4) redundant bootstrapping. In addition to streamlining existing fundamental development methods, we introduce further concepts, many of which are found outside the majority of control systems. These include 1) ACID transactions, 2) client- and server-side dependency injection and 3) declarative event handling.
|||
Poster MOPPC158 [2.483 MB] | ||
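The AOP idea the abstract describes is that cross-cutting concerns (synchronization, validation) are woven around the core method instead of being written inline. The ADO framework does this with Java bytecode generation; the same separation can be sketched with Python decorators (all class and decorator names here are invented for illustration):

```python
# AOP-style sketch: synchronization and validation aspects are declared once
# and woven around the core logic, which stays free of boilerplate.
# (Illustrative analogy only; the ADO framework itself is Java-based.)
import threading
from functools import wraps

def synchronized(func):
    """Aspect: wrap the call in a lock, so the body needs no locking code."""
    lock = threading.Lock()
    @wraps(func)
    def wrapper(*args, **kwargs):
        with lock:
            return func(*args, **kwargs)
    return wrapper

def validated(check):
    """Aspect: supplementary argument validation, kept out of the core logic."""
    def decorate(func):
        @wraps(func)
        def wrapper(self, value):
            if not check(value):
                raise ValueError(f"rejected setting: {value!r}")
            return func(self, value)
        return wrapper
    return decorate

class MagnetDevice:
    def __init__(self):
        self.current = 0.0

    @synchronized
    @validated(lambda v: 0.0 <= v <= 100.0)
    def set_current(self, value):
        self.current = value   # core logic only; the aspects handle the rest

magnet = MagnetDevice()
magnet.set_current(55.0)
print(magnet.current)   # 55.0
```

Bytecode manipulation achieves the same effect without even requiring the annotations to be functions, which is what lets a framework apply such aspects by convention.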
TUCOAAB01 | Status of the National Ignition Facility (NIF) Integrated Computer Control and Information Systems | diagnostics, laser, software, target | 483 |
|
|||
Funding: This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. #LLNL-ABS-631632 The National Ignition Facility (NIF) is operated by the Integrated Computer Control System in an object-oriented, CORBA-based system distributed among over 1800 front-end processors, embedded controllers and supervisory servers. At present, NIF operates 24x7 and conducts a variety of fusion, high energy density and basic science experiments. During the past year, the control system was expanded to include a variety of new diagnostic systems, and programmable laser beam shaping and parallel shot automation for more efficient shot operations. The system is also currently being expanded with an Advanced Radiographic Capability, which will provide short (<10 picoseconds) ultra-high power (>1 Petawatt) laser pulses that will be used for a variety of diagnostic and experimental capabilities. Additional tools have been developed to support experimental planning, experimental setup, facility configuration and post shot analysis, using open-source software, commercial workflow tools, database and messaging technologies. This talk discusses the current status of the control and information systems to support a wide variety of experiments being conducted on NIF including ignition experiments. |
|||
Slides TUCOAAB01 [4.087 MB] | ||
TUCOAAB02 | The Laser Megajoule Facility: Control System Status Report | laser, target, software, alignment | 487 |
|
|||
The French Commissariat à l’Énergie Atomique (CEA) is currently building the Laser Megajoule (LMJ), a 176-beam laser facility, at the CEA Laboratory CESTA near Bordeaux. It is designed to deliver about 1.4 MJ of energy to targets for high energy density physics experiments, including fusion experiments. The assembly of the first lines of amplification is almost complete and functional tests are planned for next year. The first part of the presentation is a photo album of the progress of the assembly of the bundles in the four laser bays, and of the equipment in the target bay. The second part of the presentation illustrates a particularity of the LMJ commissioning: a secondary control room is dedicated to the commissioning of successive bundles, while the main control room allows shots and fusion experiments with the bundles already commissioned. | |||
Slides TUCOAAB02 [3.928 MB] | ||
TUCOAAB03 | Approaching the Final Design of ITER Control System | plasma, network, interface, operation | 490
|
|||
The control system of ITER (CODAC) is subject to a final design review early 2014, with a second final design review covering high-level applications scheduled for 2015. The system architecture has been established and all plant systems required for first plasma have been identified. Interfaces are being detailed, which is a key activity to prepare for integration. A built-to-print design of the network infrastructure covering the full site is in place and installation is expected to start next year. The common software deployed in the local plant systems as well as the central system, called CODAC Core System and based on EPICS, has reached maturity, providing most of the required functions. It is currently used by 55 organizations throughout the world involved in the development of plant systems and ITER controls. The first plant systems are expected to arrive on site in 2015, starting a five-year integration phase to prepare for first plasma operation. In this paper, we report on the progress made on the ITER control system over the last two years and outline the plans and strategies allowing us to integrate hundreds of plant systems procured in-kind by the seven ITER members. | |||
Slides TUCOAAB03 [5.294 MB] | ||
TUCOAAB04 | The MedAustron Accelerator Control System: Design, Installation and Commissioning | software, network, ion, operation | 494 |
|
|||
MedAustron is a light-ion accelerator cancer treatment facility built on a green field in Austria. The accelerator, its control system and its protection systems have been designed under the guidance of CERN within the MedAustron–CERN collaboration. Building construction was completed in October 2012 and accelerator installation started in December 2012. Readiness for accelerator control deployment was reached in January 2013. This contribution gives an overview of the accelerator control system project. It reports on the current status of commissioning, including the ion sources, low-energy beam transfer and injector. The major challenge so far has been the readiness of the industry-supplied IT infrastructure, on which accelerator controls relies heavily due to its distributed and virtualized architecture. In the end, the control system was successfully released for accelerator commissioning within time and budget. The need to deliver a highly performant control system coping with thousands of cycles in real time, and to cover both interactive commissioning and unattended medical operation, were mere technical aspects to be solved during the development phase. | |||
Slides TUCOAAB04 [2.712 MB] | ||
TUCOBAB01 | A Small but Efficient Collaboration for the Spiral2 Control System Development | EPICS, PLC, software, GUI | 498 |
|
|||
The Spiral2 radioactive ion beam facility, to be commissioned in 2014 at Ganil (Caen), is being built within international collaborations. This also concerns the control system development, which is shared by three laboratories: Ganil coordinates the control and automated systems work packages, CEA/IRFU is in charge of the “injector” (sources and low-energy beam lines) and the LLRF, and CNRS/IPHC provides the emittancemeters and a beam diagnostics platform. Beyond the shared EPICS-based technology, this collaboration, although handled by only a few people, nevertheless requires an appropriate and tight organization to reach the objectives set by the project. This contribution describes how the collaboration for controls, started in 2006, has been managed from both the technological and the organizational points of view, taking into account not only the previous experience, technical background and skills of each partner, but also their existing working practices and “cultural” approaches. A first feedback comes from successful beam tests carried out at Saclay and Grenoble; the next challenge is the migration to operation, with Ganil having to run Spiral2 while the other members move on to new projects. | |||
Slides TUCOBAB01 [2.747 MB] | ||
TUCOBAB03 | Utilizing Atlassian JIRA for Large-Scale Software Development Management | software, status, database, operation | 505 |
|
|||
Funding: This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. #LLNL-ABS-632634 Used actively by the National Ignition Facility since 2004, the JIRA issue tracking system from Atlassian is now used for 63 different projects. NIF software developers and customers have created over 80,000 requests (issues) for new features and bug fixes. The largest NIF software project in JIRA is the Integrated Computer Control System (ICCS), with nearly 40,000 issues. In this paper, we discuss how JIRA has been customized to meet our software development process. ICCS developed a custom workflow in JIRA for tracking code reviews, recording test results by both developers and a dedicated Quality Control team, and managing the product release process. JIRA's advanced customization capabilities have proven to be a great help in tracking key metrics about the ICCS development efforts (e.g. developer workload). ICCS developers store software in a configuration management tool called AccuRev, and document all software changes in each JIRA issue. Specialized tools developed by the NIF Configuration Management team analyze each software product release, ensuring that it contains only the exact expected changes.
|||
Slides TUCOBAB03 [2.010 MB] | ||
TUCOBAB04 | Evaluation of Issue Tracking and Project Management Tools for Use Across All CSIRO Radio Telescope Facilities | software, project-management, interface, operation | 509 |
|
|||
CSIRO's radio astronomy observatories are collectively known as the Australia Telescope National Facility (ATNF). The observatories include the 64-metre dish at Parkes, the Australia Telescope Compact Array (ATCA) in Narrabri, the Mopra 22-metre dish near Coonabarabran, and the ASKAP telescope located in Western Australia and in the early stages of commissioning. In January 2013 a new group named Software and Computing was formed. This group, part of the ATNF Operations Program, brings all the software development expertise under one umbrella and is responsible for the development and maintenance of the software for all ATNF facilities, from monitoring and control to science data processing and archiving. One of the first tasks of the new group is to start homogenising the way software development is done across all observatories. This paper presents the results of the evaluation of several issue tracking and project management tools, including Redmine and JIRA, to be used as a software development management tool across all ATNF facilities. It also describes how these tools can potentially be used for non-software applications such as a fault reporting and tracking system. | |||
Slides TUCOBAB04 [2.158 MB] | ||
TUCOBAB05 | A Rational Approach to Control System Development Projects That Incorporates Risk Management | project-management, software, interface, synchrotron | 513 |
|
|||
Over the past year CLS has migrated towards a project management approach based on the Project Management Institute (PMI) guidelines, as well as adopting an Enterprise Risk Management (ERM) program. Though these are broader organisational initiatives, they do impact how control system and data acquisition software activities are planned, executed and integrated into larger-scale projects. Synchrotron beamline development and accelerator upgrade projects have their own special considerations that require adaptation of the more standard techniques. Our ERM processes contribute in two ways: (1) helping to identify and prioritise the projects that we should be undertaking, and (2) helping to identify risks that are internal to a project. These broader programs are leading us to revise and improve the processes we have in place for control and data acquisition system development and maintenance. This paper examines the approach we have adopted, our preliminary experience and our plans going forward. | |||
Slides TUCOBAB05 [0.791 MB] | ||
TUMIB01 | Using Prince2 and ITIL Practices for Computing Projects and Service Management in a Scientific Installation | project-management, operation, status, synchrotron | 517 |
|
|||
Conscientious project management during installation is a key factor in keeping the schedule and costs within specifications. Methodologies like Prince2 for project management, or ITIL best practices for service management, supported by tools like Request Tracker, Redmine or Trac, improve the communication between scientists and support groups, speed up response times, and increase the satisfaction and quality perceived by the user. In the same way, during operation, some practices complemented with software tools may substantially increase the quality of the service achievable with the resources available. This paper describes the use of these processes and methodologies in a scientific installation such as the Alba synchrotron. It also evaluates the strengths and risks associated with the implementation, as well as the achievements and failures, proposing some improvements. | |||
Slides TUMIB01 [1.043 MB] | ||
Poster TUMIB01 [7.037 MB] | ||
TUMIB02 | A Control System for the ESRF Synchrotron Radiation Therapy Clinical Trials | synchrotron, software, radiation, synchrotron-radiation | 521 |
|
|||
The bio-medical beamline of the European Synchrotron Radiation Facility (ESRF), located in Grenoble, France, has recently started the Phase I-II Stereotactic Synchrotron Radiation Therapy (SSRT) clinical trials targeting brain tumours. This very first SSRT protocol consists of a combined therapy in which monochromatic X-rays are delivered to a tumour pre-loaded with a high-Z element. The challenges of this technique are the accurate positioning of the target tumour with respect to the beam and the precision of the dose delivery, whilst fully assuring patient safety. The positioning system used for previous angiography clinical trials has been adapted to this new modality. 3-D imaging is performed for positioning purposes to match the treatment planning. The control system of this experiment is described from the hardware and software points of view, with emphasis on the constraints imposed by the Patient Safety System. | |||
Slides TUMIB02 [0.839 MB] | ||
TUMIB04 | Migrating to an EPICS Based Instrument Control System at the ISIS Spallation Neutron Source | EPICS, software, LabView, neutron | 525 |
|
|||
The beamline instruments at the ISIS spallation neutron source have been running successfully for many years using an in-house developed control system. The advent of new instruments and the desire for more complex experiments have led to a project being created to determine how best to meet these challenges. Though it would be possible to enhance the existing system, migrating to an EPICS-based system offers many advantages in terms of flexibility, software reuse and the potential for collaboration. While EPICS is well established for accelerator and synchrotron beamline control, it is not currently widely used for neutron instruments, but this is changing. The new control system is being developed to initially run in parallel with the existing system, with a first version scheduled for testing on two newly constructed instruments starting in summer 2013. In this paper, we discuss the design and implementation of the new control system, including how our existing National Instruments LabVIEW-controlled equipment was integrated, and issues that we encountered during the migration process. | |||
Slides TUMIB04 [0.098 MB] | ||
Poster TUMIB04 [0.315 MB] | ||
TUMIB06 | Development of a Scalable and Flexible Data Logging System Using NoSQL Databases | database, data-acquisition, operation, network | 532 |
|
|||
We have developed a scalable and flexible data logging system for SPring-8 accelerator control. The current SPring-8 data logging system, powered by a relational database management system (RDBMS), has been storing log data for 16 years. Through this experience, we recognized the lack of RDBMS flexibility for data logging: poor adaptability of data format and data acquisition cycle, complexity in data management, and no horizontal scalability. To solve these problems, we chose a combination of two NoSQL databases for the new system: Redis as a real-time data cache and Apache Cassandra as a perpetual archive. Logged data are stored into both databases, serialized by MessagePack, with a flexible data format that is not limited to single integer or real values. Apache Cassandra is a scalable and highly available column-oriented database, well suited for time-series logging data. Redis is a very fast in-memory key-value store that complements Cassandra's eventually consistent model. We built the data logging system on ZeroMQ messaging and have proved its high performance and reliability in a long-term evaluation. It will be released for part of the control system this summer. | |||
Slides TUMIB06 [0.182 MB] | ||
Poster TUMIB06 [0.525 MB] | ||
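The dual-store pattern described above writes each sample twice: once to a fast "latest value" cache (the Redis role) and once to an append-only time-series archive (the Cassandra role). A stdlib sketch of that split, with JSON standing in for MessagePack and invented signal names (the real system uses Redis, Cassandra and ZeroMQ):

```python
# Dual-store logging sketch: every sample is written to an overwriting
# real-time cache and to an append-only perpetual archive.
# (Illustrative only; JSON stands in here for MessagePack serialization.)
import json
from collections import defaultdict

class DataLogger:
    def __init__(self):
        self.cache = {}                    # signal -> latest serialized sample
        self.archive = defaultdict(list)   # signal -> append-only history

    def log(self, signal, timestamp, value):
        sample = json.dumps({"t": timestamp, "v": value})
        self.cache[signal] = sample            # real-time cache: overwrite
        self.archive[signal].append(sample)    # perpetual archive: append

    def latest(self, signal):
        return json.loads(self.cache[signal])

logger = DataLogger()
logger.log("ring/current", 1.0, 99.5)
logger.log("ring/current", 2.0, 99.1)
print(logger.latest("ring/current"))         # {'t': 2.0, 'v': 99.1}
print(len(logger.archive["ring/current"]))   # 2
```

The division of labour matches the databases' strengths: Redis answers "what is the value now?" with minimal latency, while Cassandra absorbs the unbounded, horizontally scalable history.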
TUMIB08 | ITER Contribution to Control System Studio (CSS) Development Effort | EPICS, framework, interface, distributed | 540
|
|||
In 2010, Control System Studio (CSS) was chosen for CODAC - the central control system of ITER - as the development and runtime integrated environment for local control systems. It quickly became necessary to contribute to the CSS development effort - after all, the CODAC team wants to be sure that the tools being used by the seven ITER members all over the world continue to be available and improved. In order to integrate the main CSS components in its framework, the CODAC team first needed to adapt them to its standard platform, based on 64-bit Linux and a PostgreSQL database. Then user feedback started to emerge, as well as the need for an industrial symbol library to represent pump, valve or electrical breaker states on the operator interface, and the requirement to automatically send an email when a new alarm is raised. It also soon became important for the CODAC team to be able to publish its contributions quickly and to adapt its own infrastructure for that. This paper describes ITER's increasing contribution to the CSS development effort and the future plans to address factory and site acceptance tests of the local control systems. | |||
Slides TUMIB08 [2.970 MB] | ||
Poster TUMIB08 [0.959 MB] | ||
TUMIB09 | jddd: A Tool for Operators and Experts to Design Control System Panels | GUI, EPICS, interface, TANGO | 544 |
|
|||
jddd, a graphical tool for control system panel design, has been developed at DESY to allow machine operators and experts to design complex panels. Neither knowledge of a programming language nor compilation steps are required to generate highly dynamic panels with the jddd editor. After 5 years of development and of implementing requirements for DESY-specific accelerator operations, jddd has become mature and is increasingly used at DESY. The focus has meanwhile shifted from pure feature development to new tasks such as archiving and managing a huge number of control panels, finding panel dependencies, automatic refactoring of panel names, bookkeeping and evaluation of panel usage, and collecting Java exception messages in an automatic manner. For this, technologies of the existing control system infrastructure like Servlets, JMS, Lucene, SQL and SVN are used. The concepts and technologies to further improve the quality and robustness of the tool are presented in this paper. | |||
Slides TUMIB09 [0.811 MB] | ||
Poster TUMIB09 [1.331 MB] | ||
TUPPC003 | SDD toolkit : ITER CODAC Platform for Configuration and Development | EPICS, toolkit, database, framework | 550
|
|||
ITER will consist of roughly 200 plant system I&Cs (in total millions of variables) delivered in kind, which need to be integrated into the ITER control infrastructure. To integrate them in a smooth way, the CODAC team releases the Core Software environment, which consists of many applications, every year. This paper focuses on the implementation of the self-description data (SDD) toolkit, a fully home-made ITER product. The SDD model has been designed with Hibernate/Spring to provide the information required to generate configuration files for CODAC services such as archiving, EPICS, alarms, SDN and basic HMIs. Users enter their configuration data via GUIs based on a web application and Eclipse. Snapshots of I&C projects can be dumped to XML. Different levels of validation, corresponding to the various stages of development, have been implemented: during integration, this enables verification that I&C projects are compliant with our standards. The development of I&C projects continues with Maven utilities. In 2012, a new Eclipse perspective was developed to allow users to develop code, start their projects, develop new HMIs, retrofit their data into the SDD database, and check out/commit from/to SVN. | |||
Poster TUPPC003 [1.293 MB] | ||
TUPPC004 | Scalable Archiving with the Cassandra Archiver for CSS | database, EPICS, software, distributed | 554 |
|
|||
An archive for process-variable values is an important part of most supervisory control and data acquisition (SCADA) systems, because it allows operators to investigate past events, thus helping in identifying and resolving problems in the operation of the supervised facility. For large facilities like particle accelerators there can be more than one hundred thousand process variables that have to be archived. When these process variables change at a rate of one Hertz or more, a single computer system typically cannot handle the data processing and storage. The Cassandra Archiver has been developed in order to provide a simple-to-use, scalable data-archiving solution. It seamlessly plugs into Control System Studio (CSS), providing quick and simple access to all archived process variables. An Apache Cassandra database is used for storing the data, automatically distributing it over many nodes and providing high-availability features. This contribution depicts the architecture of the Cassandra Archiver and presents performance benchmarks outlining its scalability and comparing it to traditional archiving solutions based on relational databases. | |||
Poster TUPPC004 [3.304 MB] | ||
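Cassandra's automatic distribution over many nodes comes from how rows are partitioned. A common row-key scheme for time-series archiving partitions samples by process variable and a coarse time bucket, so writes spread across the cluster and a time-range query maps to a small set of partitions. A sketch of that scheme (illustrative only; the actual Cassandra Archiver schema and bucket size may differ):

```python
# Row-key sketch for scalable time-series archiving: samples are partitioned
# by (process variable, coarse time bucket). Assumed one-hour buckets;
# the real Cassandra Archiver schema may use a different layout.
BUCKET_SECONDS = 3600   # assumed bucket size: one hour

def partition_key(pv, timestamp):
    """Return the (pv, bucket) partition a sample belongs to."""
    return (pv, int(timestamp // BUCKET_SECONDS))

def buckets_for_range(pv, start, end):
    """All partitions that must be read to cover the interval [start, end]."""
    first, last = int(start // BUCKET_SECONDS), int(end // BUCKET_SECONDS)
    return [(pv, b) for b in range(first, last + 1)]

print(partition_key("SR:current", 7200.5))            # ('SR:current', 2)
print(len(buckets_for_range("SR:current", 0, 7200)))  # 3 partitions: buckets 0, 1, 2
```

Bundling the PV name into the key keeps each archived variable's writes independent, which is what lets one hundred thousand one-Hertz channels scale horizontally instead of saturating a single server.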
TUPPC005 | Implementation of an Overall Data Management at the Tomography Station at ANKA | experiment, TANGO, data-management, synchrotron | 558 |
|
|||
New technologies and research methods increase the complexity of data management at the beamlines of a synchrotron radiation facility. The diverse experimental data, such as user and sample information, beamline status and parameters, and experimental datasets, have to be interrelated, stored and provided to the user in a convenient way. The implementation of these requirements leads to challenges in the fields of data life cycle, storage, format and flow. At the tomography station at ANKA a novel data management system has been introduced, providing a clearly structured and well-organized data flow. The first step was to introduce the Experimental Coordination Service (ECS), which reorganizes the measurement process and provides automatic linking of meta-, logging- and experimental data. The huge amount of data, several TByte/week, is stored in NeXus files. These files are subsequently handled, regarding storage location and life cycle, by the WorkSpaceCreator development tool. In a further step ANKA will introduce the European single-sign-on system Umbrella and the experimental data catalogue ICAT, planned as the European standard solutions in the PaNdata project. | |||
Poster TUPPC005 [1.422 MB] | ||
TUPPC006 | Identifying Control Equipment | database, EPICS, cryogenics, interface | 562 |
|
|||
The cryogenic installations at DESY are widely spread over the DESY campus. Many new components have been and will be installed for the new European XFEL, and commissioning and testing take a lot of time. Local tag labels help identify the components, but typing in their names is error prone. Local bar-codes and/or datamatrix codes can be used in conjunction with intelligent devices like smartphones to retrieve data directly from the control system. The developed application also shows information from the asset database, providing the asset properties of the individual hardware device, including the remaining warranty. Last but not least, cables are equipped with a bar-code which helps to identify the start and end point of the cable and the related physical signal. This paper describes our experience with the mobile applications and the related background databases, which have already been operational for several years. | |||
Poster TUPPC006 [0.398 MB] | ||
TUPPC011 | Development of an Innovative Storage Manager for a Distributed Control System | distributed, framework, software, operation | 570 |
|
|||
The !CHAOS(*) framework will provide all the services needed for controlling and managing a large scientific infrastructure, including a number of innovative features such as abstraction of services, devices and data; easy and modular customization; extensive data caching for performance; and integration of all functionalities in a common framework. One of the most relevant innovations in !CHAOS is the History Data Service (HDS) for continuous acquisition of operating data pushed by device controllers. The core component of the HDS is the History Engine (HST). It implements the abstraction layer for the underlying storage technology and the logic for indexing and querying data. The HST drivers are designed to provide specific HDS tasks, such as indexing, caching and storing, and to wrap the chosen third-party database API with standard !CHAOS calls. The HST also allows the data flows of the different !CHAOS services to be routed to independent channels, improving the global efficiency of the whole data acquisition system.
* http://chaos.infn.it
* https://chaosframework.atlassian.net/wiki/display/DOC/General+View
* http://prst-ab.aps.org/abstract/PRSTAB/v15/i11/e112804 |
|||
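The driver abstraction the HDS abstract describes — a storage-agnostic engine routing each data flow to its own backend — can be sketched as follows. Class and method names are invented for illustration and do not reflect the real !CHAOS API.

```python
from abc import ABC, abstractmethod
from collections import defaultdict

class HstDriver(ABC):
    """Abstraction layer over the underlying storage technology, in the
    spirit of the HST drivers described above (illustrative interface)."""
    @abstractmethod
    def store(self, channel, sample): ...
    @abstractmethod
    def query(self, channel): ...

class InMemoryDriver(HstDriver):
    """Trivial backend standing in for a wrapped third-party database."""
    def __init__(self):
        self._data = defaultdict(list)
    def store(self, channel, sample):
        self._data[channel].append(sample)
    def query(self, channel):
        return list(self._data[channel])

class HistoryEngine:
    """Routes each service's data flow to its own independent driver,
    so backends can be swapped or tuned per channel."""
    def __init__(self):
        self._routes = {}
    def bind(self, channel, driver):
        self._routes[channel] = driver
    def push(self, channel, sample):
        self._routes[channel].store(channel, sample)
    def history(self, channel):
        return self._routes[channel].query(channel)
```

Binding different channels to different drivers is what decouples the acquisition flow from any one storage technology.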
Poster TUPPC011 [6.729 MB] | ||
TUPPC013 | Scaling Out of the MADOCA Database System for SACLA | database, GUI, operation, monitoring | 574 |
|
|||
MADOCA was adopted for the control system of SACLA, and the MADOCA database system was designed as a copy of the database system at SPring-8. The system provided high redundancy, as it had already been tested at SPring-8. However, the number of signals the MADOCA system handles at SACLA is increasing drastically, and GUIs requiring frequent database accesses were developed. The load on the database system increased, and its response was delayed on some occasions. We investigated the bottleneck of the system and, from the results, decided to distribute the access across two servers. The primary server handles present data and signal properties; the other handles archived data, which is mounted on the primary server as a proxy table. In this way we could divide the load between two servers, and clients such as GUIs do not need any changes. We tested the load and response of the system by adding 40,000 signals to the present 45,000 signals, whose data acquisition intervals are typically 2 s. The system was installed successfully and is operating without any interruption caused by high load on the database. | |||
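The proxy-table arrangement above can be pictured as a small router that sends each read either to the primary server (recent data) or to the archive server, so clients keep talking to a single endpoint. All class and method names here are invented for illustration; the real system does this routing inside the database layer.

```python
class FakeServer:
    """Stand-in for a database server; records the queries it receives."""
    def __init__(self, name):
        self.name = name
        self.calls = []
    def get(self, signal, t):
        self.calls.append((signal, t))
        return (self.name, signal, t)

class QueryRouter:
    """Route reads by timestamp: at or after `cutoff` -> primary
    (present data), before -> archive, mirroring the two-server split."""
    def __init__(self, primary, archive, cutoff):
        self.primary, self.archive, self.cutoff = primary, archive, cutoff
    def fetch(self, signal, t):
        server = self.primary if t >= self.cutoff else self.archive
        return server.get(signal, t)
```

Because the split is hidden behind `fetch`, existing clients need no changes — the property the abstract emphasizes.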
TUPPC014 | Development of SPring-8 Experimental Data Repository System for Management and Delivery of Experimental Data | experiment, data-management, interface, database | 577 |
|
|||
The SPring-8 experimental Data Repository system (SP8DR) is an online storage service built as one of the infrastructure services of SPring-8. SP8DR enables experimental users to obtain their experimental data, produced at the SPring-8 beamlines, on demand via the Internet. To make later searching for required data sets easy, the system stores experimental data together with metadata such as experimental conditions; this is also useful for the post-experiment analysis process. As the framework for data management we adopted DSpace, which is widely used in academic library information systems. We developed two kinds of application software for registering experimental data simply and quickly. These applications record metadata sets in the SP8DR database, which holds relations to the experimental data on the storage system. This data management design allows the applications to cope with high-bandwidth data acquisition systems. In this presentation we report on the SPring-8 experimental Data Repository system, which has begun operation at the SPring-8 beamlines. | |||
TUPPC021 | Monitoring and Archiving of NSLS-II Booster Synchrotron Parameters | booster, monitoring, operation, EPICS | 587 |
|
|||
When operating a multicomponent system it is always necessary to observe the state of the whole installation as well as of its components. Tracking data is essential for tuning and troubleshooting, so records of the working process generally have to be kept. Like any other machine, the NSLS-II booster needs monitoring and archiving schemes as part of its control system. Because the booster is a facility with a cyclic operation mode, there were additional challenges in designing and developing the monitoring and archiving tools. A thorough analysis of the available infrastructure and of current approaches to monitoring and archiving was conducted to take into account the additional needs arising from the booster's special characteristics. A software extension for values present in the control system allows the state of booster subsystems to be tracked and advanced archiving with multiple warning levels to be performed. Time-stamping and data-collecting strategies were developed as part of the monitoring scheme in order to preserve and recover read-backs and settings as consistent data sets. This paper describes the relevant solutions incorporated in the booster control system. | |||
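The "multiple warning levels" idea can be sketched as a function that maps a monitored value to the most severe level whose threshold it exceeds. The level names and thresholds below are invented for illustration, not the booster's actual configuration.

```python
def warning_level(value, levels):
    """Return the name of the highest-threshold warning level that
    `value` reaches, or None if no threshold is exceeded.
    `levels` maps level name -> threshold, e.g. minor/major alarms."""
    triggered = [name for name, thr in levels.items() if value >= thr]
    return max(triggered, key=lambda n: levels[n], default=None)

# Hypothetical thresholds for one archived read-back:
LEVELS = {"minor": 3.0, "major": 7.0}
assert warning_level(5.0, LEVELS) == "minor"
```

An archiver can then store the level alongside each sample, so operators can query "when did this signal last go major?" without re-scanning raw values.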
Poster TUPPC021 [0.589 MB] | ||
TUPPC022 | Centralized Software and Hardware Configuration Tool for Large and Small Experimental Physics Facilities | software, database, network, EPICS | 591 |
|
|||
All control system software, from hardware drivers up to user-space PC applications, needs configuration information to work properly. This information includes parameters such as channel calibrations, network addresses and server responsibilities. Each software subsystem requires only part of the configuration parameters, but storing them separately from the whole configuration causes usability and reliability issues. On the other hand, storing the entire configuration in one centralized database slows software development by adding extra central-database querying. This paper proposes a configuration tool that combines the advantages of both approaches. Firstly, it uses a centralized configurable graph database that can be manipulated through a web interface. Secondly, it can automatically export configuration information from the centralized database to any local configuration storage. The tool has been developed at BINP (Novosibirsk, Russia) and is used to configure the VEPP-2000 electron-positron collider (BINP, Russia), the Electron Linear Induction Accelerator (Snezhinsk, Russia) and the NSLS-II booster synchrotron (BNL, USA). | |||
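The export step — each subsystem receiving just its slice of the central configuration — can be sketched as below. The central schema and device names are invented for illustration; the real tool works against a graph database rather than a flat dictionary.

```python
# Hypothetical central configuration: device name -> parameters.
CENTRAL = {
    "power_supply.ps1": {"subsystem": "magnets", "addr": "10.0.0.5", "calib": 0.98},
    "bpm.b3":           {"subsystem": "diagnostics", "addr": "10.0.0.9", "calib": 1.02},
}

def export_local(central, subsystem):
    """Extract the slice of the central configuration that one subsystem
    needs, ready to be written to its local configuration storage."""
    return {name: params for name, params in central.items()
            if params["subsystem"] == subsystem}
```

Subsystems then read only their local copy at runtime, avoiding the central-database queries the abstract identifies as a development-speed cost, while the central store remains the single source of truth.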
Poster TUPPC022 [1.441 MB] | ||
TUPPC023 | MeerKAT Poster and Demo Control and Monitoring Highlights | monitoring, interface, hardware, software | 594 |
|
|||
The 64-dish MeerKAT Karoo Array Telescope, currently under development, will become the largest and most sensitive radio telescope in the Southern Hemisphere until the Square Kilometre Array (SKA) is completed around 2024. MeerKAT will ultimately become an integral part of the SKA. The MeerKAT project will build on the techniques and experience acquired during the development of KAT-7, a 7-dish engineering prototype that has already proved its worth in practical use, operating 24/7 to deliver useful science data in the Karoo. Much of the MeerKAT development will centre on further refinement and scaling of the technology, using lessons learned from KAT-7. The poster session will present the proposed MeerKAT CAM (Control & Monitoring) architecture and highlight the solutions we are exploring for system monitoring, control and scheduling, data archiving and retrieval, and human interaction with the system. We will supplement the poster session with a live demonstration of the present KAT-7 CAM system. This will include a live video feed from the site as well as the use of the current GUI to generate and display the flow of events and data in a typical observation. | |||
Poster TUPPC023 [0.471 MB] | ||
TUPPC024 | Challenges to Providing a Successful Central Configuration Service to Support CERN’s New Controls Diagnostics and Monitoring System | database, monitoring, diagnostics, framework | 596 |
|
|||
The Controls Diagnostic and Monitoring service (DIAMON) provides monitoring and diagnostics tools to the operators in the CERN Control Centre. A recent re-engineering presented the opportunity to restructure its data management and to integrate it with the central Controls Configuration Service (CCS). The CCS provides the configuration management for the controls system of all accelerators at CERN. The new facility had to cater for the configuration management of all agents monitored by DIAMON (more than 3000 computers of different types) and provide deployment information, relations between metrics, and historical information. In addition, it had to be integrated into the operational CCS while ensuring stability and data coherency. An important design decision was to largely reuse the existing infrastructure in the CCS and adapt the DIAMON data management to it, e.g. by using the device/property model through a Virtual Devices framework to model the DIAMON agents. This article shows how these challenging requirements were successfully met, the problems encountered and their resolution. The new service architecture is presented: the database model and the new, tailored processes and tools. | |||
Poster TUPPC024 [2.741 MB] | ||
TUPPC025 | Advantages and Challenges to the Use of On-line Feedback in CERN’s Accelerators Controls Configuration Management | feedback, hardware, database, status | 600 |
|
|||
The Controls Configuration Service (CCS) provides the configuration management facilities for the controls system of all CERN accelerators. It complies with configuration management standards, tracking the life of configuration items and their relationships, allowing their identification and triggering change-management processes. Data stored in the CCS are extracted and propagated to the controls hardware for remote configuration. The article presents the ability of the CCS to audit items and verify conformance to specification through the implementation of on-line feedback, focusing on Front-End Computer (FEC) configurations. Long-standing problems existed in this area, such as discrepancies between the actual state of a FEC and the configuration sent to it at reboot, resulting in difficult-to-diagnose behaviour and disturbance for the Operations team. The article discusses the solution architecture (tailored processes and tools) and the development and implementation challenges, as well as the advantages of this approach and the benefits to the user groups – from equipment specialists and controls systems experts to the operators in the Accelerators Controls Centre. | |||
Poster TUPPC025 [3.937 MB] | ||
TUPPC027 | Quality Management of CERN Vacuum Controls | vacuum, database, interface, framework | 608 |
|
|||
The vacuum controls team is in charge of the monitoring, maintenance and consolidation of the control systems of all accelerators and detectors at CERN; this represents 6,000 instruments distributed along 128 km of vacuum chambers, often with heterogeneous architectures. In order to improve the efficiency of the services we provide to vacuum experts and accelerator operators, a Quality Management Plan is being put in place. The first step was the gathering of old documents and the centralisation of information concerning architectures, procedures, equipment and settings. It was followed by the standardisation of the naming convention across the different accelerators. Traceability of problems, requests, repairs and other actions has also been put in place, together with an effort to identify each individual device by a coded label registered in a central database. We are also working on ways to record, retrieve, process and display information across several linked repositories; with these in place, the quality and efficiency of our services can only improve, and the corresponding performance indicators will be available. | |||
Poster TUPPC027 [98.542 MB] | ||
TUPPC028 | The CERN Accelerator Logging Service - 10 Years in Operation: A Look at the Past, Present, and Future | database, operation, extraction, instrumentation | 612 |
|
|||
During the 10 years since its first operational use, the scope and scale of the CERN Accelerator Logging Service (LS) have evolved significantly: from an LHC-specific service expected to store 1 TB/year, to a CERN-wide service spanning the complete accelerator complex (including related sub-systems and experiments), currently storing more than 50 TB/year on-line for some 1 million signals. Despite the massive increase over initial expectations, the LS remains reliable and highly usable, as attested by an average of 5 million data-extraction requests per day from close to 1000 users. Although it is a highly successful service, demands on the LS are expected to increase significantly as CERN prepares the LHC for running at top energy, which is likely to at least double current data volumes. Furthermore, focus is now shifting firmly towards the need to perform complex analysis on logged data, which in turn presents new challenges. This paper reflects on 10 years as an operational service: how it has managed to scale to meet growing demands, what has worked well, and lessons learned. On-going developments and future evolution are also discussed. | |||
Poster TUPPC028 [3.130 MB] | ||
TUPPC029 | Integration, Processing, Analysis Methodologies and Tools for Ensuring High Data Quality and Rapid Data Access in the TIM* Monitoring System | monitoring, database, real-time, data-acquisition | 615 |
|
|||
Processing, storing and analysing large amounts of real-time data is a challenge for every monitoring system. The performance of the system strongly depends on high-quality configuration data and on the ability of the system to cope with data anomalies. The Technical Infrastructure Monitoring system (TIM) addresses data quality issues by enforcing a workflow of strict procedures to integrate or modify data-tag configurations. TIM's data acquisition layer architecture allows real-time analysis and rejection of irrelevant data. The discarded raw data (about 90,000,000 transactions/day) are stored in a database, then purged after gathering statistics. The remaining operational data (2,000,000 transactions/day) are transferred to a server running an in-memory database, ensuring rapid processing. These data are currently stored for 30 days, allowing ad hoc historical data analysis. In this paper we describe the methods and tools used to guarantee the quality of configuration data and highlight the advanced architecture that ensures optimal access to operational data, as well as the tools used to perform off-line data analysis.
* Technical Infrastructure Monitoring system |
|||
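One common way an acquisition layer rejects irrelevant updates, consistent with the filtering described above, is a per-tag deadband: a new value is forwarded only if it differs from the last forwarded value by more than a configured amount. This is a generic sketch, not TIM's actual filtering rules.

```python
class DeadbandFilter:
    """Drop updates that stay within an absolute deadband of the last
    forwarded value for each tag, so only significant changes reach
    the operational (in-memory) store."""
    def __init__(self, deadband):
        self.deadband = deadband
        self._last = {}          # tag -> last forwarded value
    def accept(self, tag, value):
        last = self._last.get(tag)
        if last is None or abs(value - last) >= self.deadband:
            self._last[tag] = value
            return True          # forward as operational data
        return False             # discard (kept only for statistics)
```

With a well-chosen deadband per tag, the bulk of near-constant sensor chatter is discarded early, which is how a raw stream tens of times larger than the operational one stays manageable.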
Poster TUPPC029 [0.742 MB] | ||
TUPPC031 | Proteus: FRIB Configuration Database | database, cavity, interface, operation | 623 |
|
|||
Distributed Information Services for Control Systems (DISCS) is a framework for developing high-level information systems for an experimental physics facility. It comprises a set of cooperating components, each with a database, an API, and several applications. One of DISCS' core components is the Configuration Module, responsible for the management of devices, their layout, measurements, alignment, calibration, signals, and inventory. In this paper we describe FRIB's implementation of the Configuration Module, Proteus. We describe its architecture, database schema, web-based GUI, EPICS V4 and REST services, and Java/Python APIs. It has been developed as a product that other labs can download and use, and it can be integrated with other independent systems. We describe the challenges of implementing such a system, our technology choices, and the lessons learnt. | |||
Poster TUPPC031 [1.248 MB] | ||
TUPPC032 | Database-backed Configuration Service | database, operation, interface, network | 627 |
|
|||
Keck Observatory is in the midst of a major telescope control system upgrade. This upgrade will include a new database-backed configuration service that will manage the many aspects of the telescope that need to be configured for its control software (e.g. site parameters, control tuning, limit values) and will keep the configuration data persistent between IOC restarts. This paper discusses the new configuration service, including its database schema, iocsh API, rich user interface and many other features. The solution provides automatic time-stamping, a history of all database changes, the ability to snapshot and load different configurations, and triggers to manage the integrity of the data collections. Configuration is based on a simple concept of controllers, components and their associated mapping. The solution also provides a failsafe mode that allows client IOCs to function if there is a problem with the database server. The paper also discusses why this new service is preferred over the file-based configuration tools used at Keck up to now. | |||
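The snapshot/load and change-history features listed above can be sketched as a toy in-memory store. The real service is database-backed with triggers; everything here (class and method names included) is an invented illustration of the same concepts.

```python
import copy
import time

class ConfigStore:
    """Toy configuration store with a timestamped change history and
    named snapshots that can be loaded back later."""
    def __init__(self):
        self._current = {}
        self._history = []      # (timestamp, key, value) per change
        self._snapshots = {}
    def set(self, key, value):
        self._current[key] = value
        self._history.append((time.time(), key, value))  # automatic time-stamping
    def get(self, key):
        return self._current[key]
    def snapshot(self, name):
        self._snapshots[name] = copy.deepcopy(self._current)
    def load(self, name):
        self._current = copy.deepcopy(self._snapshots[name])
```

Keeping the history as append-only records is what makes "what changed, and when?" answerable after the fact — one of the main advantages over plain configuration files.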
Poster TUPPC032 [0.849 MB] | ||
TUPPC035 | A New EPICS Archiver | EPICS, database, data-management, distributed | 632 |
|
|||
This report presents a large-scale, high-performance distributed data storage system for acquiring and processing time-series data of modern accelerator facilities. Derived from the original EPICS Channel Archiver, this version consistently extends it through the integration of deliberately selected technologies, such as the HDF5 file format, the SciDB chunk-oriented interface, and an RDB-based representation of the DDS X-Types specification. These changes allowed the performance of the new version to scale towards data rates of 500 K scalar samples per second. Moreover, the new EPICS Archiver provides a common platform for managing both EPICS 3 records and the composite data types, such as images, of EPICS 4 applications. | |||
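The chunk-oriented approach mentioned above (as in HDF5 or SciDB) amortizes per-write overhead by buffering samples and writing fixed-size chunks. The sketch below illustrates that buffering pattern only; it is not the archiver's actual storage code.

```python
class ChunkedWriter:
    """Buffer scalar samples and emit fixed-size chunks to a sink,
    mimicking a chunk-oriented storage interface: one bulk write per
    chunk instead of one write per sample."""
    def __init__(self, chunk_size, sink):
        self.chunk_size = chunk_size
        self.sink = sink            # anything with .append(), e.g. a list
        self._buf = []
    def append(self, sample):
        self._buf.append(sample)
        if len(self._buf) >= self.chunk_size:
            self.flush()
    def flush(self):
        """Write out any partially filled chunk (e.g. on shutdown)."""
        if self._buf:
            self.sink.append(list(self._buf))
            self._buf.clear()
```

At hundreds of thousands of samples per second, turning N tiny writes into N/chunk_size bulk writes is one of the main levers for sustaining the quoted data rates.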
Poster TUPPC035 [0.247 MB] | ||
TUPPC036 | A Status Update on Hyppie – a Hyppervisored PXI for Physics Instrumentation under EPICS | Linux, EPICS, LabView, hardware | 635 |
|
|||
Beamlines at LNLS are moving to the concept of distributed control under EPICS. This has been done by reusing available code from the community and/or by programming hardware access in LabVIEW integrated into EPICS through Hyppie. Hyppie is a project that bridges EPICS records and the corresponding devices in a PXI chassis. Both EPICS/Linux and LabVIEW Real-Time run simultaneously on the same PXI controller, in a virtualization system with a common shared memory block as their communication interface. A number of new devices have been introduced in the Hyppie suite, and LNLS beamlines are experiencing a smooth transition to the new concept. | |||
Poster TUPPC036 [1.658 MB] | ||
TUPPC037 | LabWeb - LNLS Beamlines Remote Operation System | experiment, operation, software, interface | 638 |
|
|||
Funding: Project funded by CENPES/PETROBRAS under contract number: 0050.0067267.11.9 LabWeb is a software system developed to allow remote operation of beamlines at LNLS, in partnership with the Petrobras Nanotechnology Network. Being the only light source in Latin America, LNLS receives many researchers and students interested in conducting experiments and analyses on these lines. The implementation of LabWeb allows researchers to use the laboratory structure without leaving their research centers, reducing time and travel costs in a continental country like Brazil. In 2010 the project was in its first phase, in which tests were conducted using a beta version. Two years later, a new phase of the project began with the main goal of scaling up remote-access operation for LNLS users. In this new version, a partnership was established to use the open-source platform Science Studio, developed and applied at the Canadian Light Source (CLS). Currently the project includes remote operation of three beamlines at LNLS: SAXS1 (Small Angle X-Ray Scattering), XAFS1 (X-Ray Absorption and Fluorescence Spectroscopy) and XRD1 (X-Ray Diffraction). The expectation is now to extend this new way of performing experiments to all the other beamlines at LNLS. |
|||
Poster TUPPC037 [1.613 MB] | ||
TUPPC038 | Simultaneous On-line Ultrasonic Flowmetry and Binary Gas Mixture Analysis for the ATLAS Silicon Tracker Cooling Control System | electronics, operation, detector, Ethernet | 642 |
|
|||
We describe a combined ultrasonic instrument for continuous gas flow measurement and simultaneous real-time binary gas mixture analysis. The analysis algorithm compares real-time measurements with a stored database of sound velocity vs. gas composition. The instrument was developed for the ATLAS silicon tracker evaporative cooling system, where the C3F8 refrigerant may be replaced by a blend with 25% C2F6, allowing a lower evaporation temperature as the LHC luminosity increases. The instrument has been developed in two geometries. A version with an axial sound path has demonstrated 1% full-scale precision for flows up to 230 l/min. A resolution of 0.3% is seen in C3F8/C2F6 molar mixtures, and a sensitivity of better than 0.005% to traces of C3F8 in nitrogen, during a 1-year continuous study in a system with sequenced multi-stream sampling. A high-flow version has demonstrated a resolution of 1.9% full scale for flows up to 7500 l/min. The instrument can provide rapid feedback in control systems operating with refrigerants or binary gas mixtures in detector applications. Other uses include anesthesia and the analysis of hydrocarbons and vapor mixtures for semiconductor manufacture.
* Comm. author: martin.doubek@cern.ch Refs R. Bates et al. Combined ultrasonic flow meter & binary vapour analyzer for ATLAS 2013 JINST 8 C01002 |
|||
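Comparing a measured sound velocity against a stored velocity-vs-composition table, as the analysis algorithm above does, amounts to inverting that curve. A minimal sketch using linear interpolation follows; the calibration numbers are invented for illustration and are not measured data from the instrument.

```python
# Hypothetical calibration table: (molar fraction of C2F6 in C3F8,
# sound velocity in m/s). Values are illustrative only.
CALIBRATION = [(0.00, 115.0), (0.10, 118.0), (0.25, 122.5), (0.50, 130.0)]

def mixture_fraction(velocity):
    """Invert the stored velocity-vs-composition curve by linear
    interpolation between the two bracketing calibration points."""
    pts = sorted(CALIBRATION, key=lambda p: p[1])
    for (x0, v0), (x1, v1) in zip(pts, pts[1:]):
        if v0 <= velocity <= v1:
            return x0 + (x1 - x0) * (velocity - v0) / (v1 - v0)
    raise ValueError("velocity outside calibrated range")
```

The achievable mixture resolution then depends on the slope of the curve and the precision of the transit-time measurement, which is why a dense, well-measured calibration table matters.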
Poster TUPPC038 [1.834 MB] | ||
TUPPC039 | Development of a High-speed Diagnostics Package for the 0.2 J, 20 fs, 1 kHz Repetition Rate Laser at ELI Beamlines | laser, diagnostics, FPGA, interface | 646 |
|
|||
The ELI Beamlines facility aims to provide a selection of high-repetition-rate terawatt and petawatt femtosecond pulsed lasers, with applications in plasma research, particle acceleration, high-field physics and high-intensity extended-UV/X-ray generation. The highest-rate laser in the facility will be a 1 kHz femtosecond laser with a pulse energy of 200 mJ. This high repetition rate presents unique challenges for the control system, particularly for the diagnostics package, which is tasked with measuring key laser parameters such as pulse energy, pointing accuracy and beam profile. Not only must this system be capable of relaying individual pulse measurements in real time to the six experimental target chambers, it must also respond with microsecond latency to any aberrations indicating component damage or failure. We discuss the development and testing of a prototype near-field camera profiling system, part of this diagnostics package, consisting of a 1000 fps high-resolution camera and an FPGA-based beam-profile and aberration-detection system. | |||
Poster TUPPC039 [2.244 MB] | ||
TUPPC040 | Saclay GBAR Command Control | PLC, software, linac, positron | 650 |
|
|||
The GBAR experiment will be installed in 2016 at CERN's Antiproton Decelerator ELENA extension and will measure the free-fall acceleration of neutral antihydrogen atoms. Before construction of GBAR, the CEA/Irfu institute built a beam line to guide positrons produced by a linac (linear particle accelerator) through either a materials-science line or a Penning trap. The experiment command control is mainly based on Programmable Logic Controllers (PLCs). The CEA/Irfu-developed Muscade SCADA (Supervisory Control and Data Acquisition) is installed on a Windows 7 embedded shoebox PC; it manages local and remote display and is responsible for archiving and alarms. Muscade was chosen because it is rapidly and easily configurable. The project required Muscade to communicate with three different types of PLCs: Schneider, National Instruments (NI) and Siemens. Communication is based on Modbus/TCP and on an in-house protocol optimized for the Siemens PLC. To share information between fast and slow controls, a LabVIEW PC dedicated to the trap fast control communicates with a PLC dedicated to security via a Profinet fieldbus. | |||
Poster TUPPC040 [1.791 MB] | ||
TUPPC042 | Prototype of a Simple ZeroMQ-Based RPC in Replacement of CORBA in NOMAD | CORBA, operation, interface, GUI | 654 |
|
|||
The NOMAD instrument control software of the Institut Laue-Langevin is a client-server application. The communication between the server and its clients is performed with CORBA, which now has major drawbacks such as lack of support and slow or non-existent evolution. The present paper describes the implementation of the recent and promising ZeroMQ technology as a replacement for CORBA. We present a prototype of a simple RPC built on top of ZeroMQ and the performant Google Protocol Buffers serialization tool, to which we add a remote-method-dispatch layer. The final project will also provide an IDL compiler restricted to a subset of the language, so that only minor modifications to our existing IDL interfaces and class implementations will be needed to replace the communication layer in NOMAD. | |||
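The remote-method-dispatch layer described above can be sketched independently of the transport: a request names a method and carries its arguments, the server looks the method up and returns a serialized reply. Here JSON stands in for Protocol Buffers and a direct function call stands in for the ZeroMQ socket; all names are illustrative, not NOMAD's actual API.

```python
import json

class Dispatcher:
    """Minimal remote-method-dispatch layer: deserialize a request,
    look up the registered method, call it, serialize the reply."""
    def __init__(self):
        self._methods = {}
    def register(self, name, fn):
        self._methods[name] = fn
    def handle(self, raw_request):
        req = json.loads(raw_request)                  # wire -> objects
        result = self._methods[req["method"]](*req["args"])
        return json.dumps({"result": result})          # objects -> wire

# In the real prototype, handle() would sit behind a ZeroMQ REP socket
# and the payloads would be Protocol Buffers messages.
d = Dispatcher()
d.register("add", lambda a, b: a + b)
reply = d.handle(json.dumps({"method": "add", "args": [2, 3]}))
assert json.loads(reply)["result"] == 5
```

Keeping dispatch separate from transport and serialization is what lets an IDL compiler regenerate the stubs without touching the messaging layer.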
Poster TUPPC042 [1.637 MB] | ||
TUPPC043 | Controlling Cilex-Apollon Laser Beams Alignment and Diagnostics Systems with Tango | alignment, laser, GUI, TANGO | 658 |
|
|||
Funding: CNRS, MESR, CG91, CRiDF, ANR Cilex-Apollon is a high-intensity laser facility delivering at least 5 PW pulses on targets at one shot per minute, to study physics such as laser-plasma electron or ion acceleration and laser-plasma X-ray sources. Under construction, Apollon is a four-beam laser installation with two target areas. To control the laser beam characteristics and alignment, more than 75 CCD cameras and 100 motors are distributed throughout the facility and controlled through a Tango bus. Image acquisition and display are performed at 10 Hz. Different operations are made on-line at the same rate on the acquired images, such as binarisation, centroid calculation, and size and energy of the laser beam; other operations are made off-line on stored images. The beam alignment can be operated manually or automatically. The automatic mode is based on a closed loop using a transfer matrix and can correct the laser beam centering and pointing 5 times per second. The article presents the architecture, functionality and performance, and gives feedback from a first deployment on a demonstrator. |
|||
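The per-frame binarisation and centroid computation mentioned above can be sketched as follows: threshold the image, then take the intensity-weighted mean position of the surviving pixels. This is a generic simplification (pure Python over nested lists), not the Tango device's actual implementation.

```python
def beam_centroid(image, threshold):
    """Binarise `image` (rows of pixel intensities) at `threshold`,
    then return the (x, y) centroid of the surviving pixels weighted
    by their intensity."""
    total = cx = cy = 0.0
    for y, row in enumerate(image):
        for x, val in enumerate(row):
            if val >= threshold:          # binarisation step
                total += val
                cx += x * val
                cy += y * val
    if total == 0:
        raise ValueError("no pixel above threshold")
    return cx / total, cy / total
```

A closed-loop aligner compares this centroid against a reference position and feeds the error, through the transfer matrix, to the steering motors.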
Poster TUPPC043 [0.766 MB] | ||
TUPPC044 | When Hardware and Software Work in Concert | experiment, interface, detector, operation | 661 |
|
|||
Funding: Partially funded by BMBF under the grants 05K10CKB and 05K10VKE. Integrating control and high-speed data processing is a fundamental requirement for operating a beam line efficiently and improving users' beam-time experience. Implementing such control environments for data-intensive applications at synchrotrons has been difficult because of vendor-specific device access protocols and distributed components. Although TANGO addresses the distributed nature of experiment instrumentation, standardized APIs that provide uniform device access, process control and data analysis are still missing. Concert is a Python-based framework for device control and messaging. It implements these programming interfaces and provides a simple but powerful user interface. Our system exploits the asynchronous nature of device accesses and performs low-latency on-line data analysis using GPU-based data processing. We will use Concert to conduct experiments that adjust experimental conditions using on-line data analysis, e.g. during radiographic and tomographic experiments. Concert's process-control mechanisms and the UFO processing framework* will allow us to control the process under study and the measuring procedure depending on image dynamics. * Vogelgesang, Chilingaryan, Rolo, Kopmann: “UFO: A Scalable GPU-based Image Processing Framework for On-line Monitoring” |
|||
Poster TUPPC044 [4.318 MB] | ||
TUPPC045 | Software Development for High Speed Data Recording and Processing | detector, software, monitoring, network | 665 |
|
|||
Funding: The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 283745. The European XFEL beam delivery defines a unique time structure that requires acquiring and processing data in short bursts of up to 2700 images every 100 ms. The 2D pixel detectors being developed produce up to 10 GB/s of 1-Mpixel image data. Efficient handling of this huge data volume requires large network bandwidth and computing capabilities. The architecture of the DAQ system is hierarchical and modular. The DAQ network uses 10 GbE switched links to provide large-bandwidth data transport between the front-end interfaces (FEIs), the data-handling PC layer servers, and the storage and analysis clusters. The front-end interfaces are required to build images acquired during a burst into pulse-ordered image trains and forward them to the PC layer farm. The PC layer consists of dedicated high-performance computers for raw data monitoring, processing and filtering, and for aggregating data files that are then distributed to on-line storage and data analysis clusters. In this contribution we give an overview of the DAQ system architecture and communication protocols, as well as the software stack for data acquisition, pre-processing, monitoring, storage and analysis. |
|||
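The front-end task of building burst images into pulse-ordered trains can be reduced to its essence: images may arrive out of order within a burst, and the train must be emitted sorted by pulse ID. The sketch below is a deliberately simplified stand-in for the real FEI data path (tuple-based "images", no timeouts or missing-pulse handling).

```python
def build_train(burst):
    """Assemble images received during one burst into a pulse-ordered
    image train. `burst` is a list of (pulse_id, image) tuples, in
    arbitrary arrival order."""
    return [image for _, image in sorted(burst, key=lambda p: p[0])]

# Images for pulses 2, 0 and 1 arriving out of order:
assert build_train([(2, "i2"), (0, "i0"), (1, "i1")]) == ["i0", "i1", "i2"]
```

In the real system this reordering must complete within the 100 ms burst period so the train can be forwarded to the PC layer before the next burst begins.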
Poster TUPPC045 [1.323 MB] | ||
TUPPC046 | Control Using Beckhoff Distributed Rail Systems at the European XFEL | PLC, hardware, photon, software | 669 |
|
|||
The European XFEL project is a 4th-generation light source producing spatially coherent, 80 fs short x-ray photon pulses with a peak brilliance of 10^32-10^34 photons/s/mm^2/mrad^2/0.1% BW in the energy range from 0.26 to 24 keV at an electron beam energy of 14 GeV. Six experiment stations will start data taking in fall 2015. In order to provide a simple, homogeneous solution, the DAQ and control systems group at the European XFEL is standardizing on COTS control hardware for use in the experiment and photon beam line tunnels. A common factor within this standardization is the integration with the Karabo software framework of Beckhoff TwinCAT 2.11 or TwinCAT 3 PLCs and EtherCAT. The latter provides the high degree of reliability required and the desirable characteristics of real-time capability, fast I/O channels, distributed flexible terminal topologies, and low cost per channel. In this contribution we describe how Beckhoff PLCs and EtherCAT terminals will be used to control experiment and beam line systems, allowing a high degree of standardization for control and monitoring.
Hardware Technology - POSTER |
|||
![]() |
Poster TUPPC046 [1.658 MB] | ||
TUPPC047 | The New TANGO-based Control and Data Acquisition System of the GISAXS Instrument GALAXI at Forschungszentrum Jülich | TANGO, software, neutron, detector | 673 |
|
|||
Forschungszentrum Jülich operated the SAXS instrument JUSIFA at DESY in Hamburg for more than twenty years. With the shutdown of the DORIS ring, JUSIFA was relocated to Jülich. Based on most JUSIFA components (with major mechanical modifications) and a MetalJet high-performance X-ray source from Bruker AXS, the new GISAXS instrument GALAXI was built by JCNS (Jülich Centre for Neutron Science). GALAXI was equipped with new electronics and a completely new control and data acquisition system by ZEA-2 (Zentralinstitut für Engineering, Elektronik und Analytik 2 – Systeme der Elektronik, formerly ZEL). On the basis of good experience with the TACO control system, ZEA-2 decided that GALAXI should be the first instrument of Forschungszentrum Jülich to use the successor system TANGO. The application software on top of TANGO is based on PyFRID. PyFRID was originally developed for the neutron scattering instruments of JCNS and provides a scripting interface as well as a Web GUI. The design of the new control and data acquisition system is presented and the lessons learned from the introduction of TANGO are reported. | |||
TUPPC048 | Adoption of the "PyFRID" Python Framework for Neutron Scattering Instruments | framework, interface, scattering, software | 677 |
|
|||
M. Drochner, L. Fleischhauer-Fuss, H. Kleines, D. Korolkov, M. Wagener, S. v. Waasen. To unify the user interfaces of the JCNS (Jülich Centre for Neutron Science) scattering instruments, we are adapting and extending the PyFRID framework. PyFRID is a high-level Python framework for instrument control. It provides a high level of abstraction, in particular through aspect-oriented programming (AOP) techniques. Users can use a built-in command language or a web interface to control and monitor motors, sensors, detectors and other instrument components. The framework has been fully adopted at two instruments, and work is in progress to use it on more. | |||
TUPPC050 | Control, Safety and Diagnostics for Future ATLAS Pixel Detectors | detector, monitoring, diagnostics, operation | 679 |
|
|||
To ensure the excellent performance of the ATLAS Pixel detector during the next run periods of the LHC, with their increasing demands, two upgrades of the pixel detector are foreseen. One takes place in the first long shutdown, which is currently ongoing. During this period an additional layer, the Insertable B-Layer, will be installed. The second upgrade will replace the entire pixel detector and is planned for 2020, when the LHC will be upgraded to the HL-LHC. Since no access will be possible for years once the detector is installed, a highly reliable control system is required. It has to supply the detector with all entities required for operation, protect it at all times, and provide detailed information to diagnose the detector’s behaviour. Design constraints are the sensitivity of the sensors and the reduction of material inside the tracker volume. We report on the construction of the control system for the Insertable B-Layer and present a concept for the control of the pixel detector at the HL-LHC. While the latter requires completely new strategies, the control system of the IBL includes individual new components, which can be developed further for the long-term upgrade. | |||
![]() |
Poster TUPPC050 [0.566 MB] | ||
TUPPC053 | New Control System for the SPES Off-line Laboratory at LNL-INFN using EPICS IOCs based on the Raspberry Pi | EPICS, interface, Ethernet, detector | 687 |
|
|||
SPES (Selective Production of Exotic Species) is an ISOL-type RIB facility of LNL-INFN in Italy dedicated to the production of neutron-rich radioactive nuclei by uranium fission. At the LNL, an off-line laboratory has been developed over the last four years in order to study the target front-end test bench. The instrumentation devices are controlled using EPICS. A new flexible, easy-to-adapt, low-cost and open solution for this control system is being tested. It consists of EPICS IOCs developed at the LNL, based on the low-cost Raspberry Pi computer board with custom-made expansion boards. The operating system is a modified version of Debian Linux running EPICS soft IOCs that communicate with the expansion boards using home-made drivers. The expansion boards provide multi-channel 16-bit ADCs and DACs, digital inputs and outputs, and stepper motor drivers. The idea is to have a distributed control system using customized IOCs to control the instrumentation devices as well as to read the information from the detectors, using EPICS Channel Access as the communication protocol. This solution is very cost effective and easy to customize. | |||
![]() |
Poster TUPPC053 [2.629 MB] | ||
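The distributed-IOC idea can be illustrated with a minimal stand-in for a soft IOC's process-variable table (illustration only; a real IOC would expose these PVs over EPICS Channel Access, e.g. via pcaspy, and the PV names below are invented):

```python
# Toy process-variable table mimicking the caget/caput view a client has
# of a soft IOC. Not real EPICS: no network protocol is involved here.
class SoftIOC:
    def __init__(self):
        self.pvs = {}

    def add_pv(self, name, value=0.0):
        self.pvs[name] = value

    def caput(self, name, value):          # write a PV
        self.pvs[name] = value

    def caget(self, name):                 # read a PV back
        return self.pvs[name]

# Hypothetical PV name for one ADC channel of an expansion board.
ioc = SoftIOC()
ioc.add_pv("SPES:ADC:CH0")
ioc.caput("SPES:ADC:CH0", 1.234)
```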
TUPPC054 | A PLC-Based System for the Control of an Educational Observatory | PLC, instrumentation, interface, operation | 691 |
|
|||
An educational project that aims to involve young students in astronomical observations has been developed over the last decade at the Basovizza branch station of the INAF-Astronomical Observatory of Trieste. The telescope used is a 14” reflector equipped with a robotic Paramount ME equatorial mount and placed in a non-automatic dome. The control system under development is based on a Beckhoff PLC. It will mainly allow remote control of the three-phase synchronous motor of the dome, the switching of the whole instrumentation, and the parking of the telescope. Thanks to the data coming from the weather sensor, the PLC will be able to ensure the safety of the instruments. A web interface is used for the communication between the user and the instrumentation. In this paper a detailed description of the whole PLC-based control system architecture is presented. | |||
![]() |
Poster TUPPC054 [3.671 MB] | ||
TUPPC055 | Developing of the Pulse Motor Controller Electronics for Running under Weak Radiation Environment | radiation, operation, interface, optics | 695 |
|
|||
Hitz Hitachi Zosen has developed a new pulse motor controller. The controller, which handles two axes per unit, integrates a high-performance processor, a pulse control device and peripheral interfaces. It offers simple extensibility and a variety of interfaces at low cost. The controller can be operated through Ethernet TCP/IP (or FL-net) and can control up to 16 axes. In addition, we want to operate the controller in an optics hutch exposed to weak radiation. If the controller can be placed in the optics hutch, the wiring becomes simple because it is confined to the hutch. We have therefore evaluated the controller electronics under weak radiation. | |||
![]() |
Poster TUPPC055 [0.700 MB] | ||
TUPPC057 | New Development of EPICS-based Data Acquisition System for Electron Cyclotron Emission Diagnostics in KSTAR Tokamak | electron, EPICS, real-time, diagnostics | 699 |
|
|||
Korea Superconducting Tokamak Advanced Research (KSTAR) will be operated in its 6th campaign in 2013, after the achievement of first plasma in 2008. Many diagnostic devices have been installed for measuring the various plasma properties in the KSTAR tokamak during the campaigns. Since the first campaign, a data acquisition system for the Electron Cyclotron Emission (ECE) Heterodyne Radiometer (HR) has been operated to measure the radial profile of the electron temperature. The DAQ system was initially developed with a VME form factor digitizer on a Linux OS platform. However, this configuration had limitations: it could not acquire more than 100,000 samples per second and operated unstably during the campaigns. In order to overcome these weak points, a new ECE HR DAQ system is under development with a cPCI form factor on a Linux OS platform, and its main control application will be developed in the EPICS framework, like the other control systems installed in KSTAR. Besides solving the described problems, the main advantages of the new ECE HR DAQ system are the capabilities of calculating the plasma electron temperature during a plasma shot and displaying it at run time. | |||
![]() |
Poster TUPPC057 [1.286 MB] | ||
TUPPC058 | Automation of Microbeam Focusing for X-Ray Micro-Experiments at the 4B Beamline of Pohang Light Source-II | focusing, experiment, LabView, hardware | 703 |
|
|||
The 4B beamline of the Pohang Light Source-II performs X-ray microdiffraction and microfluorescence experiments using X-ray microbeams. The microbeam has been focused down to FWHM sizes of less than 3 μm by manually adjusting the vertical and horizontal focusing mirrors of a K-B (Kirkpatrick-Baez) mirror system. In this work, microbeam-focusing automation software was developed to automate the old, complex and cumbersome beam-focusing process, which could take about a day. The existing controllers of the K-B mirror system were replaced by products with communication capabilities, and a motor-driving routine based on proportional feedback control was constructed. Based on this routine and the outputs of two ionization chambers placed in front of and behind the K-B mirror system, automation software performing every step of the beam-focusing process was completed as LabVIEW applications. The automation software was deployed at the 4B beamline and demonstrated focusing of an X-ray beam to a minimal size within an hour. This presentation introduces the details of the algorithms of the automation software and examines its performance. | |||
![]() |
Poster TUPPC058 [1.257 MB] | ||
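The proportional motor-driving routine described above can be sketched as follows (a toy model: the linear chamber-ratio response, the target value and the gain are all invented for illustration):

```python
# Toy proportional-feedback loop driving a mirror actuator until the ratio
# of downstream to upstream ionization-chamber readings reaches a target.
def chamber_ratio(pos):
    # Assumed linear response of the intensity ratio vs. actuator position
    # (invented model; the optimum sits at pos = 5.0 where ratio = 0.9).
    return 0.4 + 0.1 * pos

def focus(target=0.9, gain=2.0, pos=0.0, tol=1e-6, max_steps=1000):
    for _ in range(max_steps):
        error = target - chamber_ratio(pos)
        if abs(error) < tol:
            break
        pos += gain * error        # proportional correction step
    return pos

final_pos = focus()                # converges close to pos = 5.0
```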
TUPPC060 | Implementation of Continuous Scans Used in Beamline Experiments at Alba Synchrotron | experiment, hardware, software, detector | 710 |
|
|||
The Alba control system * is based on Sardana **, a software package implemented in Python, built on top of Tango *** and oriented towards beamline and accelerator control and data acquisition. Sardana provides an advanced scan framework, which is commonly used in all the beamlines of Alba as well as at other institutes. This framework provides standard macros and comprises various scanning modes: step, hybrid and software-continuous, but not hardware-continuous. Continuous scans speed up data acquisition, making them a great asset for most experiments and, due to time constraints, mandatory for a few of them. A continuous scan has been developed and installed in three beamlines, where it reduced the time overheads of the step scans. Furthermore, it can easily be adapted to other experiments and will be used as a base for extending the Sardana scan framework with generic continuous scan capabilities. This article describes the requirements, plan and implementation of the project as well as its results and possible improvements.
*"The design of the Alba Control System. […]" D. Fernández et al, ICALEPCS2011 **"Sardana, The Software for Building SCADAS […]" T.M. Coutinho et al, ICALEPCS2011 ***www.tango-controls.org |
|||
![]() |
Poster TUPPC060 [13.352 MB] | ||
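The time saving that motivates continuous scanning can be illustrated with back-of-the-envelope numbers (illustrative values, not Alba measurements):

```python
# Step scans pay a motion and readout overhead at every point; in a
# continuous scan the acquisition overlaps the motion, leaving only the
# exposure time per point plus a one-off ramp-up of the motors.
def step_scan_time(n_points, t_exposure, t_move, t_readout):
    return n_points * (t_exposure + t_move + t_readout)

def continuous_scan_time(n_points, t_exposure, t_ramp=1.0):
    return t_ramp + n_points * t_exposure

step = step_scan_time(100, t_exposure=0.1, t_move=0.5, t_readout=0.2)
cont = continuous_scan_time(100, t_exposure=0.1)
```

With these (invented) per-point overheads the step scan takes 80 s against 11 s for the continuous one, which is the kind of reduction the abstract refers to.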
TUPPC061 | BL13-XALOC, MX experiments at Alba: Current Status and Ongoing Improvements | interface, TANGO, experiment, hardware | 714 |
|
|||
BL13-XALOC is the only Macromolecular Crystallography (MX) beamline at the 3-GeV ALBA synchrotron. The control system is based on Tango * and Sardana **, which provide a powerful Python-based environment for building and executing user-defined macros, comprehensive access to the hardware, a standard command line interface based on IPython, and a generic and customizable graphical user interface based on Taurus ***. Currently, the MX experiments are performed through panels that control the different beamline instrumentation. Users are able to collect diffraction data and solve crystal structures, and the control system is now being improved by combining user feedback with the development of the second-stage features: grouping all the interfaces (i.e. sample viewing system, automatic sample changer, fluorescence scans, and data collections) into a high-level application and implementing new functionalities in order to provide a higher-throughput experiment, with data collection strategies, automated data collections, and workflows. This article describes the current architecture of the XALOC control system and the plan for the future improvements.
* http://www.tango-controls.org/ ** http://www.sardana-controls.org/ *** http://www.tango-controls.org/static/taurus/ |
|||
![]() |
Poster TUPPC061 [2.936 MB] | ||
TUPPC063 | Control and Monitoring of the Online Computer Farm for Offline Processing in LHCb | monitoring, network, experiment, interface | 721 |
|
|||
LHCb, one of the 4 experiments at the LHC accelerator at CERN, uses approximately 1500 PCs (averaging 12 cores each) for processing the High Level Trigger (HLT) during physics data taking. During periods when data acquisition is not required, most of these PCs are idle. In these periods it is possible to profit from the unused processing capacity to run offline jobs, such as Monte Carlo simulation. The LHCb offline computing environment is based on LHCbDIRAC (Distributed Infrastructure with Remote Agent Control). In LHCbDIRAC, job agents are started on Worker Nodes, pull waiting tasks from the central WMS (Workload Management System) and process them on the available resources. A Control System was developed which is able to launch, control and monitor the job agents for the offline data processing on the HLT Farm. This control system is based on the existing Online System Control infrastructure, the PVSS SCADA and the FSM toolkit. It has been extensively used, launching and monitoring 22,000+ agents simultaneously, and more than 850,000 jobs have already been processed on the HLT Farm. This paper describes the deployment and experience with the Control System in the LHCb experiment. | |||
![]() |
Poster TUPPC063 [2.430 MB] | ||
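The pull model used by the job agents can be sketched as follows (a single-process toy; real LHCbDIRAC agents talk to a remote WMS over the network, not to an in-process queue):

```python
# Minimal pull-model sketch: agents on worker nodes fetch waiting tasks
# from a central queue and record a result for each processed job.
from queue import Queue, Empty

def agent(wms_queue, results):
    """Pull tasks until the central queue is drained, then exit."""
    while True:
        try:
            job = wms_queue.get_nowait()
        except Empty:
            return
        # A real agent would run the job payload here (e.g. a MC simulation).
        results.append((job, "Done"))

wms = Queue()
for job_id in range(5):          # five waiting tasks in the central WMS
    wms.put(job_id)
done = []
agent(wms, done)                 # one agent drains all five tasks
```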
TUPPC064 | Reusing the Knowledge from the LHC Experiments to Implement the NA62 Run Control | experiment, detector, hardware, framework | 725 |
|
|||
NA62 is an experiment designed to measure very rare kaon decays at the CERN SPS, planned to start operation in 2014. Until that date, several intermediate run periods have been scheduled to exercise and commission the different parts and subsystems of the detector. The Run Control system monitors and controls all processes and equipment involved in data-taking. This system is developed as a collaboration between the NA62 Experiment and the Industrial Controls and Engineering (EN-ICE) Group of the Engineering Department at CERN. In this paper, the contribution of EN-ICE to the NA62 Run Control project is summarized. EN-ICE has promoted the utilization of standardized control technologies and frameworks at CERN, which were originally developed for the controls of the LHC experiments. This approach made it possible to deliver a working system for the 2013 Technical Run that exceeded the initial requirements, in a very short time and with limited manpower. | |||
TUPPC066 | 10 Years of Experiment Control at SLS Beam Lines: an Outlook to SwissFEL | EPICS, detector, FEL, operation | 729 |
|
|||
Today, after nearly 10 years of consolidated user operation at the Swiss Light Source (SLS) with up to 18 beam lines, we are looking back to briefly describe the success story based on EPICS controls toolkit and give an outlook towards the X-ray free-electron laser SwissFEL, the next challenging PSI project. We focus on SLS spectroscopy beam lines with experimental setups rigorously based on the SynApps "Positioner-Trigger-Detector" (PTD) anatomy [2]. We briefly describe the main beam line “Positioners” used inside the PTD concept. On the “Detector” side an increased effort is made to standardize the control within the areaDetector (AD) software package [3]. For the SwissFEL two detectors are envisaged: the Gotthard 1D and Jungfrau 2D pixel detectors, both built at PSI. Consistently with the PTD-anatomy, their control system framework based on the AD package is in preparation. In order to guarantee data acquisition with the SwissFEL nominal 100 Hz rate, the “Trigger” is interconnected with the SwissFEL timing system to guarantee shot-to-shot operation [4]. The AD plug-in concept allows significant data reduction; we believe this opens the doors towards on-line FEL experiments.
[1] Krempaský et al, ICALEPCS 2001 [2] www.aps.anl.gov/bcda/synApps/index.php [3] M. Rivers, SRI 2009, Melbourne [4] B. Kalantari et al, ICALEPCS 2011 |
|||
TUPPC067 | A Distributed Remote Monitoring System for ISIS Sample Environment | monitoring, EPICS, neutron, instrumentation | 733 |
|
|||
The benefits of remote monitoring in industrial and manufacturing plants are well documented and equally applicable to scientific research facilities. This paper highlights the benefits of implementing a distributed monitoring system for sample environment equipment and instrumentation at the ISIS Neutron & Muon source facility. The upcoming implementation of an EPICS replacement for the existing beamline control system provides a timely opportunity to integrate operational monitoring and diagnostic capabilities with minimal overheads. The ISIS facility located at the Rutherford Appleton Laboratory UK is the most productive research centre of its type in the world supporting a national and international community of more than 2000 scientists using neutrons and muons for research into materials and life sciences. | |||
![]() |
Poster TUPPC067 [0.821 MB] | ||
TUPPC069 | ZEBRA: a Flexible Solution for Controlling Scanning Experiments | FPGA, detector, EPICS, interface | 736 |
|
|||
This paper presents the ZEBRA product developed at Diamond Light Source. ZEBRA is a stand-alone event handling system with interfaces to multi-standard digital I/O signals (TTL, LVDS, PECL, NIM and Open Collector) and RS422 quadrature incremental encoder signals. Input events can be triggered by input signals, encoder position signals or repetitive time signals, and can be combined using logic gates in an FPGA to generate and output other events. The positions of all 4 encoders can be captured at the time of a given event and made available to the controlling system. All control and status information is available through a serial protocol, so there is no dependency on a specific higher-level control system. We have found it has applications on virtually all Diamond beamlines, ranging from simple signal level shifting to, for example, running all continuous scanning experiments. The internal functionality is reconfigurable on the fly through the user interface and can be saved to static memory. It provides a flexible solution to interface different third-party hardware (detectors and motion controllers) and to configure the required functionality as part of the experiment. | |||
![]() |
Poster TUPPC069 [2.909 MB] | ||
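The event-handling principle, combining inputs through a configurable logic gate and latching encoder positions on a rising edge, can be modelled in software (illustration only; ZEBRA implements this in FPGA fabric at signal rates):

```python
# Software model of gate-combined event generation with position capture.
# `samples` is a list of (in1, in2) levels per clock tick; the gate is
# configurable, mirroring the reconfigurable FPGA logic described above.
from operator import and_, or_

def run(samples, encoders, gate=and_):
    captured, prev = [], False
    for tick, (a, b) in enumerate(samples):
        out = gate(a, b)
        if out and not prev:               # rising edge -> capture event
            captured.append(encoders[tick])
        prev = out
    return captured

positions = [10, 20, 30, 40]               # encoder value at each tick
events = run([(0, 1), (1, 1), (1, 1), (0, 1)], positions)   # AND gate
```

Swapping `gate=or_` reconfigures the same machinery to fire on either input, analogous to rewiring the logic through the user interface.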
TUPPC070 | Detector Controls for the NOvA Experiment Using Acnet-in-a-Box | detector, PLC, monitoring, interface | 740 |
|
|||
In recent years, we have packaged the Fermilab accelerator control system, Acnet, so that other instances of it can be deployed independently of the Fermilab infrastructure. This encapsulated "Acnet-in-a-Box" is installed as the detector control system at the NOvA Far Detector. NOvA is a neutrino experiment using a beam of particles produced by the Fermilab accelerators. There are two NOvA detectors: a 330-ton "Near Detector" on the Fermilab campus and a 14,000-ton "Far Detector" 735 km away. All key tiers and aspects of Acnet are available in the NOvA instantiation, including the central device database, Java Open Access Clients, Erlang front-ends, application consoles, synoptic displays, data logging, and state notifications. Acnet at NOvA is used for power-supply control, monitoring position and strain gauges, environmental control, PLC supervision, relay rack monitoring, and interacting with EPICS PVs instrumenting the detector's avalanche photo-diodes. We discuss the challenges of maintaining a control system in a remote location, synchronizing updates between the instances, and improvements made to Acnet as a result of our NOvA experience. | |||
![]() |
Poster TUPPC070 [0.876 MB] | ||
TUPPC071 | Muon Ionization Cooling Experiment: Controls and Monitoring | monitoring, EPICS, emittance, hardware | 743 |
|
|||
The Muon Ionization Cooling Experiment is a demonstration experiment to prove the feasibility of cooling a beam of muons for use in a Neutrino Factory and/or Muon Collider. The MICE cooling channel will produce a 10% reduction in beam emittance which will be measured with a 1% resolution, and this level of precision requires strict controls and monitoring of all experimental parameters to minimize systematic errors. The MICE Controls and Monitoring system is based on EPICS and integrates with the DAQ, data monitoring systems, a configuration database, and state machines for device operations. Run Control has been developed to ensure proper sequencing of equipment and use of system resources to protect data quality. State machines are used in test operations of cooling channel superconducting solenoids to set parameters for monitoring, alarms, and data archiving. A description of this system, its implementation and performance during both muon beam data collection and magnet training will be discussed. | |||
![]() |
Poster TUPPC071 [1.820 MB] | ||
TUPPC073 | National Ignition Facility (NIF) Dilation X-ray Imager (DIXI) Diagnostic Instrumentation and Control System | diagnostics, target, timing, instrumentation | 751 |
|
|||
Funding: * This work was performed under the auspices of the Lawrence Livermore National Security, LLC, (LLNS) under Contract No. DE-AC52-07NA27344. #LLNL-ABS-633832 X-ray cameras on inertial confinement fusion facilities can determine the implosion velocity and symmetry of NIF targets by recording the emission of X-rays from the target, gated as a function of time. To capture targets that undergo ignition and thermonuclear burn, however, cameras with shutter times of less than 10 picoseconds are needed. A collaboration between LLNL, General Atomics and Kentech Instruments has resulted in the design and construction of an X-ray camera which converts an X-ray image to an electron image, which is stretched, and then coupled to a conventional shuttered electron camera to meet this criterion. This talk discusses the target diagnostic instrumentation and software used to control the DIXI diagnostic and seamlessly integrate it into the National Ignition Facility (NIF) Integrated Computer Control System (ICCS). |
|||
![]() |
Poster TUPPC073 [3.443 MB] | ||
TUPPC076 | SNS Instrument Data Acquisition and Controls | EPICS, neutron, interface, data-acquisition | 755 |
|
|||
Funding: SNS is managed by UT-Battelle, LLC, under contract DE-AC05-00OR22725 for the U. S. Department of Energy. The data acquisition (DAQ) and control systems for the neutron beam line instruments at the Spallation Neutron Source (SNS) are undergoing upgrades addressing three critical areas: data throughput and data handling from DAQ to data analysis, instrument controls including user interface and experiment automation, and the low-level electronics for DAQ and timing. This paper will outline the status of the upgrades and will address some of the challenges in implementing fundamental upgrades to an operating facility concurrent with commissioning of existing beam lines and construction of new beam lines. |
|||
TUPPC077 | Experiment Automation with a Robot Arm Using the Liquids Reflectometer Instrument at the Spallation Neutron Source | neutron, alignment, experiment, target | 759 |
|
|||
Funding: U.S. Government under contract DE-AC05-00OR22725 with UT-Battelle, LLC, which manages the Oak Ridge National Laboratory. The Liquids Reflectometer instrument installed at the Spallation Neutron Source (SNS) enables observations of chemical kinetics, solid-state reactions and phase-transitions of thin film materials at both solid and liquid surfaces. Effective measurement of these behaviors requires each sample to be calibrated dynamically using the neutron beam and the data acquisition system in a feedback loop. Since the SNS is an intense neutron source, the time needed to perform the measurement can be the same as the alignment process, leading to a labor-intensive operation that is exhausting to users. An update to the instrument control system, completed in March 2013, implemented the key features of automated sample alignment and robot-driven sample management, allowing for unattended operation over extended periods, lasting as long as 20 hours. We present a case study of the effort, detailing the mechanical, electrical and software modifications that were made as well as the lessons learned during the integration, verification and testing process. |
|||
![]() |
Poster TUPPC077 [17.799 MB] | ||
TUPPC078 | First EPICS/CSS Based Instrument Control and Acquisition System at ORNL | EPICS, experiment, interface, neutron | 763 |
|
|||
Funding: SNS is managed by UT-Battelle, LLC, under contract DE-AC05-00OR22725 for the U.S. Department of Energy The neutron imaging prototype beamline (CG-1D) at the Oak Ridge National Laboratory High Flux Isotope Reactor (HFIR) is used for many different applications necessitating a flexible and stable instrument control system. Beamline scientists expect a robust data acquisition system. They need a clear and concise user interface that allows them to both configure an experiment and to monitor an ongoing experiment run. Idle time between acquiring consecutive images must be minimized. To achieve these goals, we implement a system based upon EPICS, a newly developed CSS scan system, and CSS BOY. This paper presents the system architecture and possible future plans. |
|||
![]() |
Poster TUPPC078 [6.846 MB] | ||
TUPPC081 | IcePAP: An Advanced Motor Controller for Scientific Applications in Large User Facilities | hardware, software, target, interface | 766 |
|
|||
Synchrotron radiation facilities and in particular large hard X-ray sources such as the ESRF are equipped with thousands of motorized position actuators. Combining all the functional needs found in those facilities with the implications related to personnel resources, expertise and cost makes the choice of motor controllers a strategic matter. Most of the large facilities adopt strategies based on the use of off-the-shelf devices packaged using standard interfaces. As this approach implies severe compromises, the ESRF decided to address the development of IcePAP, a motor controller designed for applications in a scientific environment. It optimizes functionality, performance, ease of deployment, level of standardization and cost. This device is adopted as standard and is widely used at the beamlines and accelerators of ESRF and ALBA. This paper provides details on the architecture and technical characteristics of IcePAP as well as examples on how it implements advanced features. It also presents ongoing and foreseen improvements as well as introduces the outline of an emerging collaboration aimed at further development of the system making it available to other research labs. | |||
![]() |
Poster TUPPC081 [0.615 MB] | ||
TUPPC086 | Electronics Developments for High Speed Data Throughput and Processing | detector, FPGA, interface, timing | 778 |
|
|||
Funding: The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement No. 283745 The European XFEL DAQ system has to acquire and process data in short bursts every 100 ms. Bursts last for 600 μs and contain a maximum of 2700 x-ray pulses at a repetition rate of 4.5 MHz, which have to be captured and processed before the next burst starts. This time structure defines the boundary conditions for almost all diagnostic and detector-related DAQ electronics required and currently being developed for the start of operation in fall 2015. Standards used in the electronics developments are: MicroTCA.4 and AdvancedTCA crates, use of FPGAs for data processing, transfer to backend systems via 10 Gbps (SFP+) links, and feedback information transfer using 3.125 Gbps (SFP) links. Electronics being developed in-house or in collaboration with external institutes and companies include: a Train Builder ATCA blade for assembling and processing data of large-area image detectors, a VETO MTCA.4 development for evaluating pulse information and distributing a trigger decision to detector front-end ASICs and FPGAs with low latency, an MTCA.4 digitizer module, interface boards for timing and similar synchronization information, etc. |
|||
![]() |
Poster TUPPC086 [0.983 MB] | ||
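The burst parameters quoted above are self-consistent, as a quick check shows:

```python
# Values taken from the text: 2700 pulses at 4.5 MHz, one burst every 100 ms.
pulses_per_burst = 2700
rep_rate_hz = 4.5e6
burst_period_s = 100e-3

burst_length_s = pulses_per_burst / rep_rate_hz   # 600 us of pulses
duty_cycle = burst_length_s / burst_period_s      # 0.6% of the time
```

The 0.6% duty cycle is what forces the front-end electronics to buffer a full burst and use the remaining ~99.4 ms for processing and transfer.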
TUPPC088 | Development of MicroTCA-based Image Processing System at SPring-8 | FPGA, interface, framework, Linux | 786 |
|
|||
In SPring-8, various CCD cameras are used for electron beam diagnostics of the accelerators and for x-ray imaging experiments. PC-based image processing systems are mainly used for CCD cameras with a Cameralink interface. We have developed a new image processing system based on the MicroTCA platform, which has an advantage over PCs in robustness and scalability due to its hot-swappable modular architecture. In order to reduce development cost and time, the new system is built with COTS products, including a user-configurable Spartan-6 AMC with an FMC slot and a Cameralink FMC. The Cameralink FPGA core was newly developed in compliance with the AXI4 open bus to enhance reusability. The MicroTCA system will first be applied to the upgrade of the two-dimensional synchrotron radiation interferometer [1] operating at the SPring-8 storage ring. The sizes and tilt angle of a transverse electron beam profile with an elliptical Gaussian distribution are extracted from an observed 2D interferogram. A dedicated processor AMC (PrAMC) communicating with the primary PrAMC via the backplane is added for fast 2D-fitting calculations, achieving real-time beam profile monitoring during storage ring operation.
[1] "Two-dimensional visible synchrotron light interferometry for transverse beam-profile measurement at the SPring-8 storage ring", M.Masaki and S.Takano, J. Synchrotron Rad. 10, 295 (2003). |
|||
![]() |
Poster TUPPC088 [4.372 MB] | ||
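A standard moment-based estimate of beam sizes and tilt angle (shown here as an illustration of the quantities being extracted; not necessarily the fitting method running on the PrAMC) looks like this:

```python
# Estimate sigma_x, sigma_y and the tilt of an elliptical Gaussian image
# from its second moments; the tilt is the major-axis rotation angle.
import numpy as np

def beam_moments(img):
    y, x = np.indices(img.shape)
    total = img.sum()
    cx, cy = (img * x).sum() / total, (img * y).sum() / total
    sxx = (img * (x - cx) ** 2).sum() / total
    syy = (img * (y - cy) ** 2).sum() / total
    sxy = (img * (x - cx) * (y - cy)).sum() / total
    tilt = 0.5 * np.arctan2(2 * sxy, sxx - syy)
    return np.sqrt(sxx), np.sqrt(syy), tilt

# Synthetic untilted Gaussian test image: sigma_x = 6, sigma_y = 3 pixels.
yy, xx = np.indices((64, 64))
img = np.exp(-((xx - 32) ** 2) / (2 * 6**2) - ((yy - 32) ** 2) / (2 * 3**2))
sx, sy, tilt = beam_moments(img)
```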
TUPPC089 | Upgrade of the Power Supply Interface Controller Module for SuperKEKB | power-supply, interface, operation, hardware | 790 |
|
|||
There were more than 2500 magnet power supplies for the KEKB storage rings and injection beam transport lines. For the remote control of such a large number of power supplies, we developed the Power Supply Interface Controller Module (PSICM), which is plugged into each power supply. It has a microprocessor, an ARCNET interface, a trigger signal input interface, and a parallel interface to the power supply. The PSICM is not only an interface card but also controls synchronous operation of multiple power supplies with an arbitrary tracking curve. For SuperKEKB, the upgrade of KEKB, most of the existing power supplies continue in use, while hundreds of new power supplies are also being installed. Although the PSICMs have worked without serious problems for 12 years, maintaining them for the next decade appears impractical because some of their parts have been discontinued. We have therefore developed an upgraded version of the PSICM. The new PSICM has a fully backward-compatible interface to the power supply. The enhanced features are high-speed ARCNET communication and redundant trigger signals. The design and status of the upgraded PSICM are presented. | |||
![]() |
Poster TUPPC089 [1.516 MB] | ||
TUPPC090 | Digital Control System of High Extensibility for KAGRA | laser, power-supply, detector, cryogenics | 794 |
|
|||
KAGRA is the large-scale cryogenic gravitational wave telescope project in Japan, developed and constructed by the ICRR of the University of Tokyo. Hitz Hitachi Zosen produced the PCI Express I/O chassis and the anti-aliasing/anti-imaging filter board for the KAGRA digital control system. These products are very important for low-noise operation of the KAGRA interferometer. This paper reports on their performance. | |||
![]() |
Poster TUPPC090 [0.487 MB] | ||
TUPPC094 | Em# Project. Improvement of Low Current Measurements at Alba Synchrotron | FPGA, hardware, target, feedback | 798 |
|
|||
After two years with 50 four-channel electrometer measurement units working successfully at the ALBA beamlines, the implementation of new features has forced a complete change of the instrument architecture. The new equipment builds on the targets already achieved, such as the remarkably low noise of the current amplifier stage, and implements features currently not available on the market. First, an embedded 18-bit SAR ADC able to work under biasing of up to 500 V has been implemented in pursuit of the highest possible accuracy. The data stream is analysed by flexible FPGA-based processing, able to execute sample-by-sample real-time calculations for use in experiments, such as current normalization between two channel acquisitions, optimizing the SNR of an absorption spectrum. The equipment has been designed from the outset to be integrated into continuous-scan setups, implementing low-level timestamping compatible with multiple clock-source standards through an SFP port. This port could also be used in the future to feed XBPM measurements into the FOFB network for accelerator beam position correction. | |||
![]() |
Poster TUPPC094 [0.545 MB] | ||
TUPPC095 | Low Cost FFT Scope using LabVIEW cRIO and FPGA | LabView, software, hardware, FPGA | 801 |
|
|||
At CERN, many digitizers and scopes are starting to age and should be replaced. Much of the equipment is custom made or not available on the market anymore. Replacing this equipment with the equivalent of today would either be time consuming or expensive. This paper looks at the pros and cons of using COTS systems like NI-cRIO and NI-PXIe and their FPGA capabilities as flexible instruments, replacing costly spectrum analyzers and older scopes. It adds some insight on what had to be done to integrate and deploy the equipment in the unique CERN infrastructure, and the added value of having a fully customizable platform, that makes it possible to stream, store and align the data without any additional equipment. | |||
![]() |
Poster TUPPC095 [5.250 MB] | ||
TUPPC096 | Migration from WorldFIP to a Low-Cost Ethernet Fieldbus for Power Converter Control at CERN | Ethernet, interface, software, network | 805 |
|
|||
Power converter control in the LHC uses embedded computers called Function Generator/Controllers (FGCs) which are connected to WorldFIP fieldbuses around the accelerator ring. The FGCs are integrated into the accelerator control system by x86 gateway front-end systems running Linux. With the LHC now operational, attention has turned to the renovation of older control systems as well as a new installation for Linac 4. A new generation of FGC is being deployed to meet the needs of these cycling accelerators. As WorldFIP is very limited in data rate and is unlikely to undergo further development, it was decided to base future installations upon an Ethernet fieldbus with standard switches and interface chipsets in both the FGCs and gateways. The FGC communications protocol that runs over WorldFIP in the LHC was adapted to work over raw Ethernet, with the aim of having a simple solution that easily allows the same devices to operate with either type of interface. This paper describes the evolution of FGC communications from WorldFIP to dedicated Ethernet networks and presents the results of initial tests, the diagnostic tools, and how real-time power converter control is achieved. | |||
![]() |
Poster TUPPC096 [1.250 MB] | ||
TUPPC098 | Advanced Light Source Control System Upgrade – Intelligent Local Controller Replacement | FPGA, hardware, software, EPICS | 809 |
|
|||
Funding: Work supported by the U.S. Department of Energy under Contract No. DE-AC02-05CH11231 As part of the control system upgrade at the Advanced Light Source (ALS) the existing intelligent local controller (ILC) modules have been replaced. These remote input/output modules provide real-time updates of control setpoints and monitored values. This paper describes the 'ILC Replacement Modules' which have been developed to take on the duties of the existing modules. The new modules use a 100BaseT network connection to communicate with the ALS Experimental Physics and Industrial Control System (EPICS) and are based on a commercial FPGA evaluation board running a microcontroller-like application. In addition to providing remote analog and digital input/output points the replacement modules also provide some rudimentary logic operations, analog slew rate limiting and accurate time stamping of acquired data. Results of extensive performance testing and experience gained now that the modules have been in service for several months are presented. |
|||
TUPPC100 | Recent Changes to Beamline Software at the Canadian Light Source | software, experiment, EPICS, Windows | 813 |
|
|||
The Canadian Light Source has ongoing work to improve the user interfaces at the beamlines. Much of this work has made use of Qt and EPICS, with applications provided in both C++ and Python. Continuing work on the underlying data acquisition and visualization tools provides commonality for both development and operation, and provisions for extending the tools allow flexibility in the types of experiments being run. | |||
![]() |
Poster TUPPC100 [1.864 MB] | ||
TUPPC101 | Scaling of EPICS edm Display Pages at ISAC | ISAC, EPICS, factory, TRIUMF | 816 |
|
|||
The EPICS-based control system of the ISAC facility at TRIUMF uses the edm display editor / display manager to create and render the Operator interface displays. edm displays are expressed in pixel coordinates and edm does not scale the display page when a window is re-sized. A simple scheme was implemented to allow operators to switch page magnifications using a set of pre-selected scaling factors. Possible extensions of the scheme and its limitations will be discussed. | |||
![]() |
Poster TUPPC101 [1.067 MB] | ||
TUPPC102 | User Interfaces for the Spiral2 Machine Protection System | beam-losses, PLC, rfq, software | 818 |
|
|||
The Spiral2 accelerator is designed to accelerate protons, deuterons and ions with a power from hundreds of watts to 200 kW. It is therefore important to monitor and anticipate beam losses, maintaining equipment integrity by triggering beam cuts when beam losses or equipment malfunctions are detected; the MPS (Machine Protection System) is in charge of this function. The MPS also has to monitor and limit activation, but that aspect is not addressed here. Linked to the MPS, five human-machine interfaces will be provided. The first, “MPS”, lets operators and accelerator engineers monitor MPS states and alarms and tune some beam-loss thresholds. The second, “beam power rise”, defines the successive steps to reach the desired beam power. “Interlock” is a synoptic display to monitor the state of the beam stops and their faults; the “beam losses” interface displays beam losses, currents and efficiencies along the accelerator. Finally, “beam structure” lets users interact with the timing system by controlling the temporal structure to obtain a specific duty cycle according to the beam power constraints. In this paper, we introduce these human-machine interfaces, their interactions and the method used for software development. | |||
![]() |
Poster TUPPC102 [1.142 MB] | ||
TUPPC106 | Development of a Web-based Shift Reporting Tool for Accelerator Operation at the Heidelberg Ion Beam Therapy Center | ion, database, operation, framework | 822 |
|
|||
The HIT (Heidelberg Ion Therapy) center is the first dedicated European accelerator facility for cancer therapy using both carbon ions and protons, located at the university hospital in Heidelberg. It provides three fully operational therapy treatment rooms: two with fixed beam exit and one with a gantry. We are currently developing a web-based reporting tool for accelerator operations. Since medical treatment requires a high level of quality assurance, detailed reporting on beam quality, device failures and technical problems is even more important than in accelerator operations for science. The reporting tool will allow the operators to create their shift reports with support from automatically derived data, i.e. by providing pre-filled forms based on data from the Oracle database that is part of the proprietary accelerator control system. The reporting tool is based on the Python-powered CherryPy web framework, using SQLAlchemy for object-relational mapping. The HTML pages are generated from templates, enriched with jQuery to provide desktop-like usability. We will report on the system architecture of the tool and its current status, and show screenshots of the user interface.
[1] Th. Haberer et al., “The Heidelberg Ion Therapy Center”, Rad. & Onc., |
|||
TUPPC108 | Using Web Syndication for Flexible Remote Monitoring | site, detector, operation, experiment | 825 |
|
|||
With the experience gained in the first years of running the ALICE apparatus, we have identified the need to collect and aggregate different data to be displayed to the user in a simplified, personalized and clear way. The data come from different sources in several formats, and can contain data, text or pictures, or can simply be a link to extended content. This paper describes the design of a light and flexible infrastructure to aggregate information produced in different systems and offer it to readers. In this model, a reader is presented with the information relevant to him, without being obliged to browse through different systems. The project consists of data production, collection and syndication, and is being developed in parallel with more traditional monitoring interfaces, with the aim of offering ALICE users an alternative and convenient way to stay updated about their preferred systems even when they are far from the experiment. | |||
![]() |
Poster TUPPC108 [1.301 MB] | ||
TUPPC109 | MacspeechX.py Module and Its Use in an Accelerator Control System | hardware, software, interface, target | 829 |
|
|||
macspeechX.py is a Python module for accessing the speech synthesis library on Mac OS X. This module has been used in the vocal alert systems of the KEKB and J-PARC accelerator control systems. A recent upgrade of the module allows it to handle non-English languages, such as Japanese. Implementation details will be presented as an example of a Python program accessing a system library. | |||
TUPPC110 | Operator Intervention System for Remote Accelerator Diagnostics and Support | network, operation, EPICS, site | 832 |
|
|||
In a large experimental physics project such as ITER or the LHC, the project is managed by an international collaboration. Similarly, the ILC (International Linear Collider), a next-generation project, will be started by a collaboration of many institutes from three regions. After the collaborative construction, collaborators outside the host country will need methods for remote maintenance through the control and monitoring of devices. For example, such a method can be provided by connecting to the control system network via WAN from their own countries. On the other hand, remote operation of an accelerator via WAN raises some practical issues. One is that an accelerator is both an experimental device and a radiation generator. Additionally, a mistaken remote operation could cause an immediate breakdown. For these reasons, we plan to implement an operator intervention system for remote accelerator diagnostics and support, which will resolve the differences between working in the local control room and at other locations. In this paper, we report the system concept, the development status and the future plan. | |||
![]() |
Poster TUPPC110 [7.215 MB] | ||
TUPPC115 | Hierarchies of Alarms for Large Distributed Systems | detector, experiment, interface, diagnostics | 844 |
|
|||
The control systems of most of the infrastructure at CERN make use of the SCADA package WinCC OA by ETM, including successful projects to control large-scale systems (i.e. the LHC accelerator and associated experiments). Each of these systems features up to 150 supervisory computers and several million parameters. To handle such large systems, the control topologies are designed hierarchically (i.e. sensor, module, detector, experiment) with the main goal of supervising a complete installation with a single person from a central user interface. One of the key features to achieve this is alarm management (generation, handling, storage, reporting). Although most critical systems include automatic reactions to faults, alarms are fundamental for intervention and diagnostics. Since one installation can have up to 250k alarms defined, a major failure may create an avalanche of alarms that is difficult for an operator to interpret. Missing important alarms may lead to downtime or to danger for the equipment. The paper presents the developments made in recent years in WinCC OA to work with large hierarchies of alarms and to present summarized information to the operators. | |||
TUPPC116 | Cheburashka: A Tool for Consistent Memory Map Configuration Across Hardware and Software | software, hardware, interface, database | 848 |
|
|||
The memory map of a hardware module is defined by the designer when the firmware is specified. It is then used by software developers to define device drivers and front-end software classes. Maintaining consistency between hardware and its software is critical. In addition, the manual process of writing VHDL firmware on one side and the C++ software on the other is labour-intensive and error-prone. Cheburashka* is a software tool which eases this process. From a single declaration of the memory map, created using the tool’s graphical editor, it generates the memory-map VHDL package, the Linux device-driver configuration for the front-end computer, and a FESA** class for debugging. An additional tool, GENA, is used to automatically create all the VHDL code required to build the associated register control block. These tools are now used by the hardware and software teams for the design of all new interfaces from FPGAs to VME or on-board DSPs, in the context of the extensive program of development and renovation being undertaken in the CERN injector chain during LS1***. Several VME modules and their software have already been deployed and used in the SPS.
(*) Cheburashka is developed in the RF group at CERN (**) FESA is an acronym for Front End Software Architecture, developed at CERN (***) LS1: LHC Long Shutdown 1, from 2013 to 2014 |
|||
TUPPC119 | Exchange of Crucial Information between Accelerator Operation, Equipment Groups and Technical Infrastructure at CERN | operation, database, interface, laser | 856 |
|
|||
During CERN accelerator operation, a large number of events, related to accelerator operation and management of technical infrastructure, occur with different criticality. All these events are detected, diagnosed and managed by the Technical Infrastructure service (TI) in the CERN Control Centre (CCC); equipment groups concerned have to solve the problem with a minimal impact on accelerator operation. A new database structure and new interfaces have to be implemented to share information received by TI, to improve communication between the control room and equipment groups, to help post-mortem studies and to correlate events with accelerator operation incidents. Different tools like alarm screens, logbooks, maintenance plans and work orders exist and are in use today. A project was initiated with the goal to integrate and standardize information in a common repository to be used by the different stakeholders through dedicated user interfaces. | |||
![]() |
Poster TUPPC119 [10.469 MB] | ||
TUPPC120 | LHC Collimator Alignment Operational Tool | alignment, collimation, interface, monitoring | 860 |
|
|||
Beam-based LHC collimator alignment is necessary to determine the beam centers and beam sizes at the collimator locations for various machine configurations. Fast and automatic alignment is provided through an operational tool that has been developed for use in the CERN Control Centre, which is described in this paper. The tool is implemented as a Java application and acquires beam-loss and collimator-position data from the hardware through a middleware layer. The user interface is designed to allow a quick transition from application start-up to selecting the required collimators for alignment and configuring the alignment parameters. The measured beam centers and sizes are then logged and displayed in different forms to help the user set up the system. | |||
![]() |
Poster TUPPC120 [2.464 MB] | ||
TUPPC121 | caQtDM, an EPICS Display Manager Based on Qt | EPICS, interface, Windows, data-acquisition | 864 |
|
|||
At the Paul Scherrer Institut (PSI) the display manager MEDM was used until recently for the synoptic displays at all our facilities, not only for EPICS but also for another, in-house built control system, ACS. However, MEDM is based on MOTIF and Xt/X11, systems/libraries that are starting to age, and MEDM is difficult to extend with new entities. Therefore a new tool has been developed based on Qt. This reproduces the functionality of MEDM and is now in use at several facilities; as Qt is supported on several platforms, the tool can also be used beyond X11-based systems. Existing MEDM displays were converted into the new format using the parser tool adl2ui; these were then edited further with Qt Designer and displayed with the new Qt manager caQtDM. The integration of new entities into Qt Designer, and therefore into Qt-based applications, is very easy, so the system can readily be enhanced with new widgets. New features needed for our facilities were implemented. The caQtDM application uses a C++ class to perform the data acquisition and display; this class can also be integrated into other applications. | |||
![]() |
Slides TUPPC121 [1.024 MB] | ||
TUPPC122 | Progress of the TPS Control Applications Development | EPICS, GUI, operation, interface | 867 |
|
|||
The TPS (Taiwan Photon Source) is a latest-generation 3 GeV synchrotron light source which is in its installation phase; commissioning is expected in 2014. EPICS is adopted as the control system framework for the TPS. Various EPICS IOCs have been implemented for each subsystem, and the development and integration of specific control operation interfaces are in progress. The operation interfaces mainly provide functions for setting, reading, saving and restoring parameters. Development of high-level applications, which depend on the properties of each subsystem, is ongoing. The archive database system and its browser toolkits have gradually been established and tested. Web-based operation interfaces and broadcasting have also been created for observing machine status. These efforts are summarized in this report. | |||
![]() |
Poster TUPPC122 [2.054 MB] | ||
TUPPC123 | User Interfaces Development of Imaging Diagnostic Devices for the Taiwan Photon Source | EPICS, LabView, GUI, synchrotron | 871 |
|
|||
Taiwan Photon Source (TPS) is a 3 GeV synchrotron light source under construction on the campus of the National Synchrotron Radiation Research Center (NSRRC) in Taiwan. Many diagnostic devices will be deployed to assist in commissioning and operating the TPS. Among them are the imaging diagnostic devices, which include the screen monitor (SM), streak camera (SC) and intensified CCD (ICCD); user interfaces for these are being developed. Control of these applications is centered around EPICS IOCs. The Windows-based systems, such as the SC and ICCD, are controlled through MATLAB (combined with the LabCA module) and LabVIEW (combined with the DSC module) respectively, and share their data as EPICS PVs. The main user interfaces and data analysis are constructed with the MATLAB GUIDE toolbox. The progress of this work is summarized in this report. | |||
![]() |
Poster TUPPC123 [1.518 MB] | ||
TUPPC124 | Distributed Network Monitoring Made Easy - An Application for Accelerator Control System Process Monitoring | monitoring, network, software, Linux | 875 |
|
|||
Funding: This work was supported by the U.S. Department of Energy, Office of Nuclear Physics, under Contract No. DE-AC02-06CH11357. As the complexity and scope of distributed control systems increase, so does the need for an ever-increasing level of automated process monitoring. The goal of this paper is to demonstrate one method whereby the SNMP protocol, combined with open-source management tools, can be quickly leveraged to gain critical insight into any complex computing system. Specifically, we introduce an automated, fully customizable, web-based remote monitoring solution which has been implemented at the Argonne Tandem Linac Accelerator System (ATLAS). This collection of tools is not limited to monitoring network infrastructure devices; it also monitors critical processes running on any remote system. The tools and techniques used are typically available pre-installed, or can be downloaded for several standard operating systems, and in most cases require only a small amount of configuration out of the box. High-level logging, level-checking, alarming, notification and reporting are accomplished with the open-source network management package OpenNMS, and normally require a bare minimum of implementation effort by a non-IT user. |
|||
![]() |
Poster TUPPC124 [0.875 MB] | ||
TUPPC128 | Machine History Viewer for the Integrated Computer Control System of the National Ignition Facility | GUI, software, database, target | 883 |
|
|||
Funding: This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. #LLNL-ABS-633812 The Machine History Viewer is a recently developed capability of the Integrated Computer Control System (ICCS) software for the National Ignition Facility (NIF) that introduces the ability to analyze machine history data to troubleshoot equipment problems and to predict future failures. Flexible time correlation, text annotations and multiple y-axis scales will help users determine cause and effect in the complex machine interactions at work in the NIF. Report criteria can be saved for easy modification and reuse. Integration into the already-familiar ICCS GUIs makes reporting easy for operators to access. Reports can be created that help analyze trends over long periods of time, leading to improved calibration and better detection of equipment failures. Faster identification of current failures and anticipation of potential failures will improve NIF availability and shot efficiency. A standalone version of this application is under development that will provide users remote access to real-time data and analysis, allowing troubleshooting by experts without requiring them to come on-site. |
|||
![]() |
Poster TUPPC128 [4.826 MB] | ||
TUPPC129 | NIF Device Health Monitoring | GUI, monitoring, framework, status | 887 |
|
|||
Funding: * This work was performed under the auspices of the Lawrence Livermore National Security, LLC, (LLNS) under Contract No. DE-AC52-07NA27344. #LLNL-ABS-633794 The Integrated Computer Control System (ICCS) at the National Ignition Facility (NIF) uses Front-End Processors (FEPs) controlling over 60,000 devices. Often device faults are not discovered until a device is needed during a shot, creating run-time errors that delay the laser shot. This paper discusses a new ICCS framework feature for FEPs to monitor their devices and report their overall health, allowing problem devices to be identified before they are needed. Each FEP has different devices and a unique definition of healthy. The ICCS software uses an object-oriented approach with polymorphism, so each FEP can determine its health status and report it in a consistent way. This generic approach provides consistent GUI indication and the display of detailed information on device problems. It allows operators to be informed quickly of faults and provides them with the information necessary to pinpoint and resolve issues. Operators now know before starting a shot whether the control system is ready, thereby reducing time and material lost due to a failure and improving overall control system reliability and availability. |
|||
![]() |
Poster TUPPC129 [2.318 MB] | ||
TUPPC130 | The Design of NSLS-II High Level Physics Applications | linac, GUI, booster, closed-orbit | 890 |
|
|||
The NSLS-II high-level physics applications are a joint effort of the controls and accelerator physics groups. They are developed with a client-server approach, where the services are mainly provided by the controls group in the form of web services or libraries. | |||
TUPPC131 | Synoptic Displays and Rapid Visual Application Development | embedded, collider, power-supply, interface | 893 |
|
|||
Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. For a number of years there has been an increasing desire to adopt a synoptic display suite within the BNL accelerator community. Initial interest in the precursors to the modern display suites, like MEDM, quickly fizzled out as our users found them aesthetically unappealing and cumbersome to use. Subsequent attempts to adopt Control System Studio (CSS) also fell short when work on the abstraction bridge between CSS and our control system stalled and was eventually abandoned. Most recently, we tested the open-source version of a synoptic display developed at Fermilab. It, like its previously evaluated predecessors, also seemed rough around the edges; however, a few implementation details made it more appealing than every previously mentioned solution, and after a brief evaluation we settled on Synoptic as our display suite of choice. This paper describes this adoption process and goes into detail on several key changes and improvements made to the original implementation, a few of which made us rethink how we want to use this tool in the future. |
|||
![]() |
Poster TUPPC131 [3.793 MB] | ||
TUPPC132 | Accelerator Control Data Visualization with Google Map | target, status, GUI, survey | 897 |
|
|||
Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. Using geographical map data as a visualization for components of a controls system provides Main Control Room operators with an easy way to both identify and locate conditions within specific parts of an accelerator complex that may require attention. Google's Map API provides a simple and convenient way to display some of C-AD's controls system data and provide location and status feedback using dynamic symbols and animations. This paper describes the details of how chipmunk and beam-loss data visualization can be implemented for the AGS/RHIC controls system. Most of the server-side and client-side software can be easily adapted to many other similar types of data visualization. Wenge Fu, Seth Nemesure, Brookhaven National Laboratory, Upton, NY 11973, USA |
|||
![]() |
Poster TUPPC132 [2.086 MB] | ||
TUPPC133 | Graphene: A Java Library for Real-Time Scientific Graphs | real-time, operation, interface, background | 901 |
|
|||
While there are a number of open-source charting libraries available in Java, none of them seems suitable for real-time scientific data, such as that coming from control systems. Common shortcomings include inadequate performance, entanglement with other scientific packages, concrete data objects (which require copy operations), designs aimed at small datasets, and a required running UI to produce any graph. Graphene is our effort to produce graphs that are suitable for scientific publishing, can be created without a UI (e.g. on a web server), work on data defined through interfaces that allow no-copy processing in a real-time pipeline, and are produced with adequate performance. The graphs are integrated using pvmanager within Control System Studio. | |||
![]() |
Poster TUPPC133 [0.502 MB] | ||
TUPPC134 | Pvmanager: A Java Library for Real-Time Data Processing | framework, real-time, EPICS, background | 903 |
|
|||
Increasingly becoming the standard connection layer in Control System Studio, pvmanager is a Java library for creating well-behaved applications that process real-time data, such as that coming from a control system. It takes care of caching, queuing, rate decoupling and throttling, connection sharing, data aggregation and all the other details needed to make an application robust. Its fluent API allows the details of each pipeline to be specified declaratively in a compact way. | |||
![]() |
Poster TUPPC134 [0.518 MB] | ||
TUCOCA02 | The ITER Interlock System | plasma, interlocks, operation, neutral-beams | 910 |
|
|||
ITER is formed by systems which shall be pushed to their performance limits in order to successfully achieve the scientific goals. The scientists in charge of exploiting the tokamak will require enough operational flexibility to explore as many plasma scenarios as possible, while being sure that the integrity of the machine and the safety of the environment and personnel are not compromised. The I&C systems of ITER have been divided into three separate tiers: the conventional I&C, the safety system and the interlock system; this paper focuses on the latter. The design of the ITER interlocks has to take into account the intrinsic diversity of ITER systems, which implies a diversity of risks to be mitigated and hence the impossibility of implementing a single solution for the whole machine. This paper presents the chosen interlock solutions based on PLC, FPGA and hardwired technologies. It also describes how experience from existing tokamaks has been applied to the design of the ITER interlocks, as well as the ITER particularities that have forced the designers to evaluate technical choices which historically have been considered unsuitable for implementing interlock functions. | |||
![]() |
Slides TUCOCA02 [3.303 MB] | ||
TUCOCA05 | EPICS-based Control System for a Radiation Therapy Machine | EPICS, database, neutron, cyclotron | 922 |
|
|||
The clinical neutron therapy system (CNTS) at the University of Washington Medical Center (UWMC) has been treating patients since 1984. Its new control system retains the original safety philosophy and delegation of functions among nonprogrammable hardware, PLCs, microcomputers with programs in ROM, and finally general-purpose computers. The latter are used only for data-intensive, prescription-specific functions. For these, a new EPICS-based control program replaces a locally-developed C program used since 1999. The therapy control portion uses a single soft IOC for control and a single EDM session for the operator's console. Prescriptions are retrieved from a PostgreSQL database and loaded into the IOC by a Python program; another Python program stores treatment records from the IOC back into the database. The system remains safe if the general-purpose computers or their programs crash or stop producing results. Different programs at different stages of the computation check for invalid data. Development activities including formal specifications and automated testing avoid, then check for, design and programming errors. | |||
![]() |
Slides TUCOCA05 [0.175 MB] | ||
TUCOCA07 | A Streamlined Architecture of LCLS-II Beam Containment System | PLC, radiation, distributed, diagnostics | 930 |
|
|||
With the construction of LCLS-II, SLAC is developing a new Beam Containment System (BCS) to replace the aging hardwired system. This system will ensure that the beam is confined to the design channel at an approved beam power to prevent unacceptable radiation levels in occupiable areas. Unlike other safety systems deployed at SLAC, the new BCS is distributed and has explicit response-time requirements, which impose design constraints on the system architecture. The design process complies with IEC 61508 and the system will have systematic capability SC3. This paper discusses the BCS built on the Siemens S7-300F PLC. For events requiring faster action, a hardwired shutoff path is provided in addition to peer safety functions within the PLC; safety performance is enhanced, and the additional diagnostic capabilities significantly reduce operational cost and burden. The new system is also more scalable and flexible, featuring improved configuration control, a simplified EPICS interface and reduced safety-assurance testing effort. The new architecture fully leverages the safety PLC capabilities and streamlines design and commissioning through a single-processor, single-programmer approach. | |||
![]() |
Slides TUCOCA07 [1.802 MB] | ||
TUCOCA08 | Personnel and Machine Protection Systems in The National Ignition Facility (NIF) | target, laser, operation, monitoring | 933 |
|
|||
Funding: * This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory (LLNL) under Contract DE-AC52-07NA27344. #LLNL-ABS-633232 The National Ignition Facility (NIF) is the world’s largest and most energetic laser system and has the potential to generate significant levels of ionizing radiation. The NIF employs real-time safety systems to monitor and mitigate the potential hazards presented by the facility. The Machine Safety System (MSS) monitors key components in the facility to allow operations while also protecting against configurations that could damage equipment. The NIF Safety Interlock System (SIS) monitors for oxygen deficiency and radiological alarms, and controls access to the facility, preventing exposure to laser light and radiation. Together, the SIS and MSS control permissives to the hazard-generating equipment and annunciate hazard levels in the facility. To do this reliably and safely, the SIS and MSS have been designed as fail-safe systems with a proven performance record now spanning over 12 years. This presentation discusses the SIS and MSS design, implementation, operator interfaces, validation/verification, and the hazard mitigation approaches employed in the NIF. Common failures encountered in the design of safety systems, and how to avoid them, will also be discussed. |
|||
![]() |
Slides TUCOCA08 [2.808 MB] | ||
TUCOCA10 | Improvements in the T2K Primary Beamline Control System | PLC, power-supply, EPICS, status | 940 |
|
|||
T2K is a long-baseline neutrino oscillation experiment in Japan. We report recent improvements in the T2K primary beamline control system. The first is a new interlock system for current fluctuations of the normal-conducting (NC) magnet power supplies: to prevent the intense beam from hitting beamline equipment because of a current fluctuation in a magnet power supply, we continuously monitor the power supply output current using digital panel meters. The second is a new PLC-based control system for the NC magnet power supplies. We also discuss the actual implementation of these improvements. | |||
![]() |
Slides TUCOCA10 [2.595 MB] | ||
TUCOCB01 | Next-Generation MADOCA for The SPring-8 Control Framework | Windows, framework, interface, software | 944 |
|
|||
The MADOCA control framework* was developed for SPring-8 accelerator control and has been used in several facilities since 1997. As demands on the controls have grown, we now need to handle various data, including image data for beam profile monitoring, and to control devices that can only be managed by Windows drivers. To fulfill these requirements, we have developed the next-generation MADOCA (MADOCA II). MADOCA II is still based on a message-oriented control architecture, but the core messaging part has been completely rewritten with the ZeroMQ socket library. Its main features are: 1) variable-length data, such as image data, can be transferred within a message; 2) the control system can run on Windows as well as other platforms such as Linux and Solaris; 3) multiple messages can be processed concurrently for fast control. In this paper, we report on the new control framework, especially its messaging aspects, and on the status of the migration to MADOCA II. Part of the SPring-8 control system was already replaced with MADOCA II last summer and has been operating stably.
*R.Tanaka et al., “Control System of the SPring-8 Storage Ring”, Proc. of ICALEPCS’95, Chicago, USA, (1995) |
|||
![]() |
Slides TUCOCB01 [2.157 MB] | ||
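The variable-length messaging described in the MADOCA II abstract can be illustrated with a minimal sketch. This is not the actual MADOCA II (or ZeroMQ) wire format; the length-prefix layout and function names are purely illustrative of how an arbitrary-size payload, such as a beam profile image, can travel inside one message:

```python
import struct

# Illustrative frame layout (NOT the real MADOCA II wire format):
# a 4-byte big-endian payload length, followed by the raw payload bytes.
HEADER = struct.Struct(">I")

def pack_message(payload: bytes) -> bytes:
    """Prefix a variable-length payload with its size."""
    return HEADER.pack(len(payload)) + payload

def unpack_message(frame: bytes) -> bytes:
    """Recover the payload, checking the declared length."""
    (size,) = HEADER.unpack_from(frame, 0)
    payload = frame[HEADER.size:HEADER.size + size]
    assert len(payload) == size, "truncated frame"
    return payload

# A fake 'image' of arbitrary size survives the round trip.
image = bytes(range(256)) * 40
assert unpack_message(pack_message(image)) == image
```

Real ZeroMQ sidesteps manual framing entirely (each message already carries its length), which is one reason it suits transfers of variable-length data.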
TUCOCB02 | Middleware Proxy: A Request-Driven Messaging Broker for High Volume Data Distribution | device-server, operation, database, diagnostics | 948 |
|
|||
Nowadays, all major infrastructures and data centers (commercial and scientific) make extensive use of the publish-subscribe messaging paradigm, which decouples the message sender (publisher) from the message receiver (consumer). This paradigm is also heavily used in the CERN accelerator control system, in the Proxy broker, a critical part of the Controls Middleware (CMW) project. The Proxy provides the aforementioned publish-subscribe facility and also supports execution of synchronous read and write operations. Moreover, it enables service scalability and dramatically reduces the network resources and overhead (CPU and memory) on the publisher machine required to serve all subscriptions. The Proxy was developed in modern C++, using state-of-the-art programming techniques (e.g. Boost) and following recommended software patterns for achieving low latency and high concurrency. The outstanding performance of the Proxy infrastructure has been confirmed over the last three years by delivering high volumes of LHC equipment data to many critical systems. This paper describes the Proxy architecture in detail, together with the lessons learnt from operation and the plans for future evolution. | |||
![]() |
Slides TUCOCB02 [4.726 MB] | ||
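The fan-out idea behind such a broker, where the publisher pays for one send and the broker serves all subscribers, can be sketched in a few lines. This toy in-process broker is an illustration only, not the CMW Proxy: the class and topic names are invented, and the real Proxy is a networked C++ service:

```python
from collections import defaultdict
from queue import Queue

class ToyProxy:
    """Toy pub-sub broker: the publisher sends one copy per update and
    the broker fans it out to every subscriber of that topic."""
    def __init__(self):
        self._subs = defaultdict(list)   # topic -> list of subscriber queues

    def subscribe(self, topic):
        q = Queue()
        self._subs[topic].append(q)
        return q

    def publish(self, topic, value):
        # One publish call; the fan-out cost is borne here, not by the
        # publisher, which is the point of interposing a broker.
        for q in self._subs[topic]:
            q.put(value)

proxy = ToyProxy()
a = proxy.subscribe("LHC.BeamCurrent")          # hypothetical topic name
b = proxy.subscribe("LHC.BeamCurrent")
proxy.publish("LHC.BeamCurrent", 0.35)
assert a.get() == 0.35 and b.get() == 0.35
```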
TUCOCB03 | A Practical Approach to Ontology-Enabled Control Systems for Astronomical Instrumentation | detector, software, DSL, database | 952 |
|
|||
Even though modern service-oriented and data-oriented architectures promise to deliver loosely coupled control systems, they are inherently brittle as they commonly depend on a priori agreed interfaces and data models. At the same time, the Semantic Web and a whole set of accompanying standards and tools are emerging, advocating ontologies as the basis for knowledge exchange. In this paper we aim to identify a number of key ideas from the myriad of knowledge-based practices that can readily be implemented by control systems today. We demonstrate with a practical example (a three-channel imager for the Mercator Telescope) how ontologies developed in the Web Ontology Language (OWL) can serve as a meta-model for our instrument, covering as many engineering aspects of the project as needed. We show how a concrete system model can be built on top of this meta-model via a set of Domain Specific Languages (DSLs), supporting both formal verification and the generation of software and documentation artifacts. Finally we reason how the available semantics can be exposed at run-time by adding a “semantic layer” that can be browsed, queried, monitored etc. by any OPC UA-enabled client. | |||
![]() |
Slides TUCOCB03 [2.130 MB] | ||
TUCOCB04 | EPICS Version 4 Progress Report | EPICS, database, operation, network | 956 |
|
|||
EPICS Version 4 is the next major revision of the Experimental Physics and Industrial Control System, a widely used software framework for controls in large facilities, accelerators and telescopes. The primary goal of Version 4 is to improve support for scientific applications by augmenting the control-centered EPICS Version 3 with an architecture that allows scientific services to be built on top of it. Version 4 provides a new standardized wire protocol, support for structured types, and parametrized queries. The long-term plans also include a revision of the IOC core layer. The first set of services (directory, archive retrieval, and save-set) aims to improve the current EPICS architecture and enable interoperability. The first services and applications are now being deployed in running facilities. We present the current status of EPICS V4, the interoperation of EPICS V3 and V4, and how to create services for accelerator modelling, large database access, etc. These enable operators and physicists to write thin yet powerful clients to support commissioning, beam studies and operations, and open up the possibility of sharing applications between facilities. | |||
![]() |
Slides TUCOCB04 [1.937 MB] | ||
TUCOCB06 | Designing and Implementing LabVIEW Solutions for Re-Use* | framework, interface, LabView, hardware | 960 |
|
|||
Funding: * This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. #LLNL-ABS-632632 Many of our machines have a lot in common: they drive motors, take pictures, generate signals, toggle switches, and observe and measure effects. In a research environment that creates new machines and expects them to perform for a production assembly line, it is important to meet both schedule and quality targets. NIF has developed a LabVIEW layered architecture of Support, general Frameworks, Controllers, Devices, and User Interface Frameworks. This architecture provides a tested and qualified software framework that allows us to focus on developing and testing the external interfaces (hardware and user) of each machine. |
|||
![]() |
Slides TUCOCB06 [4.232 MB] | ||
TUCOCB07 | TANGO - Can ZMQ Replace CORBA ? | TANGO, CORBA, network, database | 964 |
|
|||
TANGO (http://www.tango-controls.org) is a modern distributed device oriented control system toolkit used by a number of facilities to control synchrotrons, lasers and a wide variety of equipment for doing physics experiments. The performance of the network protocol used by TANGO is a key component of the toolkit. For this reason TANGO is based on the omniORB implementation of CORBA. CORBA offers an interface definition language with mappings to multiple programming languages, an efficient binary protocol, a data representation layer, and various services. In recent years a new series of binary protocols based on AMQP have emerged from the high frequency stock market trading business. A simplified version of AMQP called ZMQ (http://www.zeromq.org/) was open sourced in 2010. In 2011 the TANGO community decided to take advantage of ZMQ. In 2012 the kernel developers successfully replaced the CORBA Notification Service with ZMQ in TANGO V8. The first part of this paper will present the software design, the issues encountered and the resulting improvements in performance. The second part of this paper will present a study of how ZMQ could replace CORBA completely in TANGO. | |||
![]() |
Slides TUCOCB07 [1.328 MB] | ||
TUCOCB08 | Reimplementing the Bulk Data System with DDS in ALMA ACS | network, operation, site, CORBA | 969 |
|
|||
Bulk Data (BD) is a service in the ALMA Common Software for transferring large amounts of astronomical data between computers in many-to-one and one-to-many configurations. Its main application is the Correlator software, which processes raw lags from the Correlator hardware into science visibilities. The Correlator retrieves data from antennas on up to 32 computers; the data are forwarded to a master computer and combined before being sent to consumers. The throughput requirement, both to and from the master, is 64 MB/s, distributed differently depending on observing conditions. The robustness requirements make the application very challenging. The first implementation, based on the CORBA A/V Streaming service, showed weaknesses. We therefore decided to replace it, even though we were approaching the start of operations, making provision for careful testing. We chose DDS (Data Distribution Service) as the core technology, a well-supported standard that is widespread in similar applications. We evaluated the mainstream implementations, with emphasis on performance, robustness and error handling. We have successfully deployed the new BD, making it easy to switch between the old and new implementations for testing purposes. We discuss the challenges and lessons learned. | |||
![]() |
Slides TUCOCB08 [1.582 MB] | ||
TUCOCB09 | The Internet of Things and Control System | network, TANGO, feedback, embedded | 974 |
|
|||
Machine-to-machine communication, in which autonomous devices use the Internet to exchange data, has recently attracted huge interest under the name Internet of Things (IoT). The Internet and the World Wide Web caused a revolution in communication between people; they were born from the need to exchange scientific information between institutes. Several universities have predicted that the IoT will have a similar impact, and industry is now gearing up for it. The issues under discussion for the IoT, such as protocols, representations and resources, are similar to those of human communication and are currently being tested by various institutes and companies, including start-ups. Already, the term smart city describes applications of the IoT such as smart parking, traffic-congestion monitoring and waste management. In the domain of control systems for large research facilities, much knowledge has already been acquired in building connections between thousands of devices, more and more of which are provided with a TCP/IP connection. This paper investigates the possible convergence between control systems and the IoT. | |||
![]() |
Slides TUCOCB09 [11.919 MB] | ||
TUCOCB10 | TANGO V8 - Another Turbo Charged Major Release | TANGO, interface, device-server, CORBA | 978 |
|
|||
The TANGO (http://www.tango-controls.org) collaboration continues to evolve and improve the TANGO kernel. The latest release makes major improvements to the protocol and to the Java language support. Replacing the CORBA Notification Service with ZMQ for sending events has achieved much higher performance, a simpler architecture, and support for multicasting. A rewrite of the Java device server binding using the latest features of the Java language has made the code much more compact and modern. Guidelines for writing device servers have been produced so that they can be shared more easily. The test suite for the TANGO kernel has been rewritten and the code coverage drastically improved. TANGO has been ported to new embedded platforms running Linux and to mobile platforms running Android and iOS. Packaging for Debian and bindings to commercial tools have been updated, and a new binding (Panorama) has been added. The graphical layers have been extended. The latest figures on TANGO performance will be presented. Finally, the paper will present the roadmap for the next major release. | |||
![]() |
Slides TUCOCB10 [1.469 MB] | ||
WECOAAB01 | An Overview of the LHC Experiments' Control Systems | experiment, framework, interface, monitoring | 982 |
|
|||
Although all four are LHC experiments, the experiments use, either by need or by choice, different equipment, have defined different requirements and are operated differently. This led to the development of four quite different Control Systems. A joint effort was made in the area of Detector Control Systems (DCS), allowing a common choice of components and tools and the development of a common DCS Framework for the four experiments, but nothing was done in common in the areas of Data Acquisition or Trigger Control (normally called Run Control). This talk will present an overview of the design principles, architectures and technologies chosen by the four experiments in order to perform the Control System's tasks: Configuration, Control, Monitoring, Error Recovery, User Interfacing, Automation, etc.
Invited |
|||
![]() |
Slides WECOAAB01 [2.616 MB] | ||
WECOAAB02 | Status of the ACS-based Control System of the Mid-sized Telescope Prototype for the Cherenkov Telescope Array (CTA) | software, interface, monitoring, framework | 987 |
|
|||
As the next-generation ground-based very-high-energy gamma-ray observatory, CTA is breaking new ground beyond physics: it also creates new demands on the control and data acquisition system. With on the order of 100 telescopes spread over a large area, together with numerous central facilities, CTA will comprise a significantly larger number of devices than any current imaging atmospheric Cherenkov telescope experiment. A prototype for the Medium Size Telescope (MST), with a diameter of 12 m, has been installed in Berlin and is currently being commissioned. The design of this telescope's control software incorporates the main tools and concepts under evaluation within the CTA consortium, in order to provide an array control prototype for the CTA project. The readout and control system for the MST prototype is implemented within the ALMA Common Software (ACS) framework. Interfacing to the hardware is performed via the OPC Unified Architecture (OPC UA). The archive system is based on MySQL and MongoDB. In this contribution, the architecture of the MST control and data acquisition system, implementation details and first conclusions are presented. | |||
![]() |
Slides WECOAAB02 [3.148 MB] | ||
WECOAAB03 | Synchronization of Motion and Detectors and Continuous Scans as the Standard Data Acquisition Technique | detector, hardware, software, data-acquisition | 992 |
|
|||
This paper describes the model, objectives and implementation of a generic data acquisition structure for an experimental station, which integrates the hardware and software synchronization of motors, detectors, shutters and, in general, any experimental channel or event related to the experiment. The implementation involves the management of hardware triggers, which can be derived from time, encoder positions or even events from the particle accelerator, combined with timestamps to guarantee the correct integration of software-triggered or slow channels. The infrastructure requires complex management of buffers from different sources, centralized and distributed, including interpolation procedures. ALBA uses Sardana, built on TANGO, as the generic control system, which provides the abstraction of and communication with the hardware, together with a complete macro edition and execution environment. | |||
![]() |
Slides WECOAAB03 [2.432 MB] | ||
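The timestamp-based integration of slow channels mentioned in the abstract amounts to interpolating software-timestamped samples onto the hardware-trigger timebase. A minimal numpy sketch of that step (with made-up sample values; this is not the actual Sardana/ALBA code):

```python
import numpy as np

# Hardware-trigger timestamps (s), e.g. derived from encoder positions.
trigger_t = np.linspace(0.0, 1.0, 11)

# A slow, software-timestamped channel sampled at irregular times.
slow_t = np.array([0.0, 0.3, 0.7, 1.0])
slow_v = np.array([10.0, 13.0, 17.0, 20.0])

# Align the slow channel with the hardware-triggered data by
# linear interpolation onto the trigger timestamps.
aligned = np.interp(trigger_t, slow_t, slow_v)

assert aligned[0] == 10.0 and aligned[-1] == 20.0
assert abs(aligned[5] - 15.0) < 1e-9   # t=0.5 lies between 13 @0.3 and 17 @0.7
```

The real infrastructure must additionally merge buffers from multiple centralized and distributed sources, but the per-channel alignment reduces to this kind of interpolation.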
WECOBA02 | Distributed Information Services for Control Systems | database, EPICS, interface, software | 1000 |
|
|||
During the design and construction of an experimental physics facility (EPF), a heterogeneous set of engineering disciplines, methods, and tools is used, making subsequent exploitation of the data difficult. In this paper, we describe a framework (DISCS) for building high-level applications for the commissioning, operation, and maintenance of an EPF that provides programmatic as well as graphical interfaces to its data and services. DISCS is a collaborative effort of BNL, FRIB, Cosylab, IHEP, and ESS. It comprises a set of cooperating services and applications, and manages data such as machine configuration, lattice, measurements, alignment, cables, machine state, inventory, operations, calibration, and design parameters. The services/applications include Channel Finder, Logbook, Traveler, Unit Conversion, Online Model, and Save-Restore. Each component of the system has a database, an API, and a set of applications. The services are accessed through REST and EPICS V4. We also discuss the challenges of developing database services in an environment where requirements continue to evolve and developers are distributed among different laboratories with different technology platforms. | |||
WECOCB01 | CERN's FMC Kit | hardware, FPGA, interface, feedback | 1020 |
|
|||
In the frame of the renovation of controls and data acquisition electronics for accelerators, the BE-CO-HT section at CERN has designed a kit based on carriers and mezzanines following the FPGA Mezzanine Card (FMC, VITA 57) standard. Carriers exist in VME64x and PCIe form factors, with a PXIe carrier underway. Mezzanines include an Analog to Digital Converter (ADC), a Time to Digital Converter (TDC) and a fine delay generator. All of the designs are licensed under the CERN Open Hardware Licence (OHL) and commercialized by companies. The paper discusses the benefits of this carrier-mezzanine strategy and of the Open Hardware based commercial paradigm, along with performance figures and plans for the future. | |||
![]() |
Slides WECOCB01 [3.300 MB] | ||
WECOCB05 | Modern Technology in Disguise | FPGA, software, interface, hardware | 1032 |
|
|||
A modern embedded system for fast applications has to incorporate technologies like multicore CPUs, fast serial links and FPGAs for interfaces and local processing. These technologies are still relatively new, and integrating them into a control system infrastructure that either already exists or has to be planned for long-term maintainability is a challenge that needs to be addressed. At PSI we have, in collaboration with an industrial company (IOxOS SA)[*], built a board and the infrastructure around it, addressing issues such as the scalability and modularization of systems based on FPGAs and the FMC standard, simplicity in commissioning such a board, and re-use of parts of the FPGA source code base. In addition, the board has several state-of-the-art features typically found in newer bus systems like MicroTCA, yet it can still easily be incorporated into our VME64x-based infrastructure. In the presentation we describe the system architecture and its technical features, and how it enables us to develop our different user applications and fast front-end systems effectively.
* IOxOS SA, Gland, Switzerland, http://www.ioxos.ch |
|||
![]() |
Slides WECOCB05 [0.675 MB] | ||
WECOCB07 | Development of an Open-Source Hardware Platform for Sirius BPM and Orbit Feedback | hardware, FPGA, interface, software | 1036 |
|
|||
The Brazilian Synchrotron Light Laboratory (LNLS) is developing a BPM and orbit feedback system for Sirius, the new low-emittance synchrotron light source under construction in Brazil. In that context, three open-source boards and the accompanying low-level firmware/software were developed in cooperation with the Warsaw University of Technology (WUT) to serve as the hardware platform for BPM data acquisition and digital signal processing, as well as for orbit feedback data distribution: (i) an FPGA board with two high-pin-count FMC slots in the PICMG AMC form factor; (ii) a 4-channel 16-bit 130 MS/s ADC board in the ANSI/VITA FMC form factor; (iii) a 4-channel 16-bit 250 MS/s ADC board in the ANSI/VITA FMC form factor. The experience of integrating the system prototype in a COTS MicroTCA.4 crate will be reported, as well as the planned developments. | |||
![]() |
Slides WECOCB07 [4.137 MB] | ||
THCOAAB01 | A Scalable and Homogeneous Web-Based Solution for Presenting CMS Control System Data | interface, software, detector, status | 1040 |
|
|||
The Control System of the CMS experiment ensures the monitoring and safe operation of over 1M parameters. The high demand for access to online and historical control system data calls for a scalable solution combining multiple data sources. The advantage of a web solution is that data can be accessed from everywhere with no additional software. Moreover, existing visualization libraries can be reused to achieve user-friendly and effective data presentation. Access to the online information is provided with minimal impact on the running control system by using a common cache, so as to be independent of the number of users. Historical data archived by the SCADA software is accessed via an Oracle database. The web interfaces provide mostly read-only access to the data, but some commands are also allowed. Moreover, developers and experts use web interfaces to deploy the control software and administer the SCADA projects in production. By using an enterprise portal, we profit from single sign-on and role-based access control. Portlets maintained by different developers are centrally integrated into dynamic pages, resulting in a consistent user experience. | |||
![]() |
Slides THCOAAB01 [1.814 MB] | ||
THCOAAB02 | Enhancing the Man-Machine-Interface of Accelerator Control Applications with Modern Consumer Market Technologies | HOM, framework, embedded, software | 1044 |
|
|||
The paradigms of human interaction with modern consumer market devices such as tablets, smartphones or video game consoles are currently undergoing rapid and serious change. Device control by multi-finger touch gestures or voice recognition has become standard, and even more advanced technologies such as 3D gesture recognition are becoming routine. Smart enhancements of head-mounted display technologies are beginning to appear on the consumer market. In addition, the look-and-feel of mobile apps and of classical desktop applications is becoming remarkably similar. We have used Web2cToGo to investigate the consequences of these technologies and paradigms for accelerator control applications. Web2cToGo is a framework being developed at DESY that provides a common, platform-independent web application capable of running on widely used mobile as well as common desktop platforms. This paper reports the basic concept of the project, presents the results achieved so far and discusses the next development steps. | |||
![]() |
Slides THCOAAB02 [0.667 MB] | ||
THCOAAB03 | Bringing Control System User Interfaces to the Web | interface, EPICS, network, status | 1048 |
|
|||
Funding: SNS is managed by UT-Battelle, LLC, under contract DE-AC05-00OR22725 for the U.S. Department of Energy With the evolution of web-based technologies, especially HTML5[1], it becomes possible to create web-based control system user interfaces (UI) that are cross-browser and cross-device compatible. This article describes two technologies that facilitate this goal. The first is the WebOPI[2], which can seamlessly display CSS BOY[3] Operator Interfaces (OPI) in web browsers without modification to the original OPI file. The WebOPI leverages the powerful graphical editing capabilities of BOY and provides the convenience of re-using existing OPI files. On the other hand, it uses auto-generated JavaScript and a generic communication mechanism between the web browser and web server; it is not optimized for a control system, which results in unnecessary network traffic and resource usage. Our second technology is the WebSocket-based Process Data Access (WebPDA). It is a protocol that provides efficient control system data communication using WebSockets[4], so that users can create web-based control system UIs using standard web page technologies such as HTML, CSS and JavaScript. The protocol is control-system independent, so it can potentially support any type of control system. [1]http://en.wikipedia.org/wiki/HTML5 [2]https://sourceforge.net/apps/trac/cs-studio/wiki/webopi [3]https://sourceforge.net/apps/trac/cs-studio/wiki/BOY [4]http://en.wikipedia.org/wiki/WebSocket |
|||
![]() |
Slides THCOAAB03 [1.768 MB] | ||
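A subscription protocol of the kind WebPDA describes can be sketched with plain JSON messages: the client sends one subscribe request, and the server pushes values over the open WebSocket. The field names below are hypothetical, not the actual WebPDA schema; the sketch only shows the request/push idea:

```python
import json

# Hypothetical message shapes, loosely modeled on a WebSocket-based
# PV protocol; 'type', 'pv' and 'value' are illustrative field names.
def subscribe(pv_name):
    return json.dumps({"type": "subscribe", "pv": pv_name})

def handle(server_state, text):
    """Server side: answer a subscribe request with the current value.
    Subsequent value changes would be pushed over the same WebSocket
    without further client requests."""
    msg = json.loads(text)
    if msg["type"] == "subscribe":
        pv = msg["pv"]
        return json.dumps({"type": "value", "pv": pv,
                           "value": server_state.get(pv)})

state = {"SR:Current": 302.1}   # made-up PV name and value
reply = json.loads(handle(state, subscribe("SR:Current")))
assert reply == {"type": "value", "pv": "SR:Current", "value": 302.1}
```

Using a structured text protocol like this is what lets any HTML/CSS/JavaScript page act as a control system client, independent of the control system behind the server.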
THCOAAB04 | Synchrobots: Experiments with Telepresence and Teleoperated Mobile Robots in a Synchrotron Radiation Facility | radiation, synchrotron, synchrotron-radiation, experiment | 1052 |
|
|||
Synchrobot is an autonomous mobile robot that supports the machine operators of Elettra(*), a synchrotron radiation facility, in tasks such as diagnostic and measurement campaigns, being capable of moving in the restricted area while the machine is running. In general, telepresence robots are mobile robot platforms capable of providing two-way audio and video communication, and many companies have recently entered this business. This paper describes our experience with tools like Synchrobot as well as with commercially available telepresence robots. Based on this experience, we present a set of guidelines for using and integrating telepresence robots into the daily life of a research infrastructure and explore potential future development scenarios.
http://www.elettra.eu |
|||
![]() |
Slides THCOAAB04 [9.348 MB] | ||
THCOAAB06 | Achieving a Successful Alarm Management Deployment – The CLS Experience | operation, factory, software, monitoring | 1062 |
|
|||
Alarm management systems promise to improve situational awareness, aid operational staff in responding correctly to accelerator problems, and reduce downtime. Many facilities, including the Canadian Light Source (CLS), have found this goal challenging to achieve. Past attempts at CLS focused on software features and capabilities. Our third attempt switched gears and instead focused on human factors engineering techniques and the associated alarm response processes. Aspects of the ISA 18.2, EEMUA 191 and NUREG-0700 standards were used. CLS adopted the CSS BEAST alarm handler software. Work was also undertaken to identify bad actors, analyze alarm system performance and avoid alarm flooding. The BEAST deployment was augmented with a locally developed voice annunciation system for a small number of critical high-impact alarms, and with auto-diallers for shutdown periods when the control room is not staffed. This paper summarizes our approach and lessons learned. | |||
![]() |
Slides THCOAAB06 [0.397 MB] | ||
THCOAAB08 | NOMAD Goes Mobile | GUI, CORBA, interface, network | 1070 |
|
|||
The commissioning of new instruments at the Institut Laue-Langevin (ILL) has shown the need to extend instrument control beyond the classical desktop computer location. This, together with the availability of reliable and powerful mobile devices such as smartphones and tablets, has triggered a new branch of development for NOMAD, the instrument control software in use at the ILL. These devices, often considered mere recreational toys, can play an important role in simplifying the life of instrument scientists and technicians. An experiment is performed not only from the instrument cabin but also from the office, from another instrument, from the lab and from home. The present paper describes the development of a remote interface, based on Java and the Android Eclipse SDK, communicating with the NOMAD server using CORBA over a wireless network. Moreover, the application is distributed on “Google Play” to simplify the installation and update procedures. | |||
![]() |
Slides THCOAAB08 [2.320 MB] | ||
THCOAAB09 | Olog and Control System Studio: A Rich Logging Environment | interface, operation, experiment, framework | 1074 |
|
|||
Leveraging the features provided by Olog and Control System Studio, we have developed a logging environment that allows the creation of rich log entries. In addition to text and snapshot images, these entries store context, which can comprise information either from the control system (process variables) or from other services (directory, ticketing, archiver). Using this context, the client tools give the user the ability to launch various applications with their state initialized to match the state at the time the entry was created. | |||
![]() |
Slides THCOAAB09 [1.673 MB] | ||
THMIB03 | From Real to Virtual - How to Provide a High-Availability Computer Server Infrastructure | Linux, hardware, operation, network | 1076
|
|||
During the commissioning phase of the Swiss Light Source (SLS) at the Paul Scherrer Institut (PSI), we decided in 2000 on a strategy of separating the individual services of the control system. The reason was to prevent interruptions due to network congestion, misdirected control, and other interference between different service contexts. This concept has proved reliable over the years. Today, each accelerator facility and beamline of PSI resides on a separate subnet and uses its own dedicated set of service computers. As the number of beamlines and accelerators grew, the variety of services and their quantity rapidly increased. Fortunately, around the time the SLS announced its first beam, VMware introduced its VMware Virtual Platform for the Intel IA32 architecture. This was a great opportunity for us to start virtualizing the controls services. Currently, we have about 200 such systems. In this presentation we discuss how we achieved this highly virtualized controls infrastructure, as well as how we will proceed in the future. | |||
![]() |
Slides THMIB03 [2.124 MB] | ||
![]() |
Poster THMIB03 [1.257 MB] | ||
THMIB07 | Fast Orbit Feedback Control in Mode Space | booster, feedback, synchrotron, electron | 1082 |
|
|||
This paper describes the design and implementation of fast orbit feedback control in mode space. Using a Singular Value Decomposition (SVD) of the response matrix, each singular value can be associated with a spatial mode, and enhanced feedback performance can be achieved by applying different controller dynamics to each spatial mode. By considering the disturbance spectrum across both dynamic and spatial frequencies, controller dynamics can be selected for each mode. Most orbit feedback systems apply only different gains to each mode; mode space control, however, gives greater flexibility in control design and can lead to enhanced disturbance suppression. Mode space control was implemented on the Booster synchrotron at Diamond Light Source, operated in stored beam mode. The implementation and performance of the mode space controller are presented. | |||
![]() |
Slides THMIB07 [0.582 MB] | ||
![]() |
Poster THMIB07 [0.593 MB] | ||
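The mode-space scheme described above can be sketched in a few lines: project the measured orbit error onto the spatial modes, apply a per-mode gain, and map the result back to corrector space through V and the singular values. The 2×2 response matrix, gains, and orbit values below are invented for illustration; a real system would obtain U, S, V from an SVD of the measured response matrix and replace the static gains with per-mode controller dynamics.

```python
import math

def rot(t):
    # 2x2 rotation matrix (orthogonal), as nested lists
    c, s = math.cos(t), math.sin(t)
    return [[c, -s], [s, c]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(a):
    return [list(r) for r in zip(*a)]

# Hypothetical 2-BPM / 2-corrector response matrix R = U diag(S) V^T,
# built from known rotations so the SVD factors are available by
# construction (a real system would compute them from measurements).
U, V, S = rot(0.3), rot(-0.5), [12.0, 0.8]
R = matmul(U, matmul([[S[0], 0.0], [0.0, S[1]]], transpose(V)))

def mode_space_correction(err, gains):
    """One iteration: project the orbit error onto the spatial modes
    (columns of U), apply a per-mode gain, map back through V / S."""
    e_mode = [sum(U[i][m] * err[i] for i in range(2)) for m in range(2)]
    u_mode = [-gains[m] * e_mode[m] / S[m] for m in range(2)]
    return [sum(V[i][m] * u_mode[m] for m in range(2)) for i in range(2)]

orbit = [0.42, -0.17]                                   # measured error (mm)
kick = mode_space_correction(orbit, gains=[1.0, 0.5])   # damp weak mode less
residual = [orbit[i] + sum(R[i][j] * kick[j] for j in range(2))
            for i in range(2)]
# full gain on mode 1 removes it entirely; half gain leaves 50% of mode 2
```

With unit gain on every mode this reduces to the usual SVD orbit correction; the per-mode gains (or, in the paper, full per-mode controller dynamics) are what distinguish mode-space control.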
THMIB09 | Management of the FERMI Control System Infrastructure | network, interface, TANGO, Ethernet | 1086 |
|
|||
Funding: Work supported in part by the Italian Ministry of University and Research under grants FIRB-RBAP045JF2 and FIRB-RBAP06AWK3 Efficiency, flexibility and simplicity of management have been among the design guidelines of the control system for the FERMI@Elettra Free Electron Laser. Out-of-band system monitoring devices, remotely operated power distribution units and remote management interfaces have been integrated into the Tango control system, leading to effective control of the infrastructure. The Open Source tool Nagios has been deployed to monitor the functionality of the control system computers and the status of the application software, for easy and automatic identification and reporting of problems. |
|||
![]() |
Slides THMIB09 [0.236 MB] | ||
![]() |
Poster THMIB09 [1.567 MB] | ||
THPPC001 | Overview of "The Scans" in the Central Control System of TRIUMF's 500 MeV Cyclotron | software, cyclotron, TRIUMF, hardware | 1090 |
|
|||
The Controls Group for TRIUMF's 500 MeV cyclotron developed, runs and maintains a software application known as The Scans, whose purpose is to: a) log events, b) annunciate alarms and warnings, c) perform simple actions on the hardware, and d) provide software interlocks for machine protection. Since its inception more than 35 years ago, The Scans has increasingly become an essential part of the proper operation of the cyclotron. This paper gives an overview of The Scans, its advantages and limitations, and desired improvements. | |||
![]() |
Poster THPPC001 [4.637 MB] | ||
THPPC002 | Configuration Management for Beam Delivery at TRIUMF/ISAC | ion-source, ion, database, ISAC | 1094 |
|
|||
The ISAC facility at TRIUMF delivers simultaneous beams from different ion sources to multiple destinations. More beams will be added by the ARIEL facility, which is presently under construction. To ensure coordination of beam delivery, beam path configuration management has been implemented. The process involves beam path selection, configuration setup and configuration monitoring. In addition, save and restore of beam line device settings, scaling of beam optics devices for beam energy and mass, beam-path-specific operator displays, the ability to compare present and previous beam tunes, and alarm annunciation of device readings outside prescribed ranges are supported. Design factors, re-usability strategies, and results are described. | |||
![]() |
Poster THPPC002 [0.508 MB] | ||
THPPC004 | CODAC Standardisation of PLC Communication | PLC, EPICS, software, Ethernet | 1097
|
|||
As defined by the CODAC architecture of ITER, a Plant System Host (PSH) and one or more Slow Controllers (SIEMENS PLCs) are connected over a switched Industrial Ethernet (IE) network. An important part of the software engineering of Slow Controllers is the standardization of communication between the PSH and the PLCs. Based on prototyping and performance evaluation, Open IE Communication over TCP was selected. It is implemented on the PLCs to support the CODAC data model of ‘State’, ‘Configuration’ and ‘Simple Commands’. The implementation is packaged in the Standard PLC Software Structure (SPSS) as part of the CODAC Core System release. SPSS can be easily configured by the SDD tools of CODAC. However, Open IE Communication is restricted to the PLC CPUs. This presents a challenge for implementing redundant PLC architectures and using remote I/O modules. Another version of SPSS has been developed to support communication over Communication Processors (CPs). The EPICS driver has also been extended to support redundancy transparently to the CODAC applications. Issues of PLC communication standardization in the context of the CODAC environment and future development of SPSS and the EPICS driver are presented here. | |||
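The ‘State’ / ‘Configuration’ / ‘Simple Command’ exchange over a TCP link can be illustrated with a minimal framing sketch. The message-type codes and word layout below are invented for the sketch and are not the actual SPSS / Open IE Communication wire format; a loopback socket pair stands in for the PSH–PLC network.

```python
import socket
import struct

# Illustrative message framing for a PSH<->PLC TCP link: a 4-byte header
# (message type, payload length) followed by big-endian 16-bit words, in
# the spirit of the 'State' / 'Configuration' / 'Simple Command' model.
MSG_STATE, MSG_CONFIG, MSG_COMMAND = 1, 2, 3

def pack(msg_type, words):
    payload = struct.pack(">%dH" % len(words), *words)
    return struct.pack(">HH", msg_type, len(payload)) + payload

def unpack(data):
    msg_type, length = struct.unpack(">HH", data[:4])
    words = struct.unpack(">%dH" % (length // 2), data[4:4 + length])
    return msg_type, list(words)

# loopback "PSH" and "PLC" endpoints
psh, plc = socket.socketpair()
psh.sendall(pack(MSG_COMMAND, [7]))          # e.g. command code 7
mtype, words = unpack(plc.recv(64))
assert (mtype, words) == (MSG_COMMAND, [7])
plc.sendall(pack(MSG_STATE, [1, 250]))       # e.g. state=running, value=250
print(unpack(psh.recv(64)))                  # -> (1, [1, 250])
```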
THPPC005 | Virtualization Infrastructure within the Controls Environment of the Light Sources at HZB | network, hardware, software, EPICS | 1100 |
|
|||
The advantages of virtualization techniques and infrastructures with respect to configuration management, high availability and resource management have become obvious for controls applications too. Today a choice of powerful products is available that is easy to use and supports the desired functionality, performance, usability and maintainability at a very mature level. This paper presents the architecture of the virtual infrastructure and its relation to its hardware-based counterpart as it has emerged for BESSY II and MLS controls within the past decade. Successful experiences as well as abandoned attempts and caveats about some intricate troubles are summarized. | |||
![]() |
Poster THPPC005 [0.286 MB] | ||
THPPC006 | REMBRANDT - REMote Beam instRumentation And Network Diagnosis Tool | database, monitoring, network, status | 1103 |
|
|||
As with any other large accelerator complex in operation today, the beam instrumentation devices and associated data acquisition components for the coming FAIR accelerators will be distributed over a large area and partially installed in inaccessible, radiation-exposed areas. Besides operation of the devices themselves, e.g. acquisition of data, it is mandatory to also control the supporting LAN-based components such as VME/μTCA crates, front-end computers (FECs), middleware servers and more. Fortunately, many COTS systems provide means for remote control and monitoring using a variety of standardized protocols such as SNMP, IPMI or iAMT. REMBRANDT is a Java framework which allows an authorized user to monitor and control remote systems while hiding the underlying protocols and connection information such as IP addresses, user IDs and passwords. Besides voltage and current control, the main features are remote power switching of the systems and observation of the FEC boot process via reverse telnet. REMBRANDT is designed to be easily extensible with new protocols and features. The software concept, including the client-server part and the database integration, will be presented. | |||
![]() |
Poster THPPC006 [3.139 MB] | ||
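The protocol-hiding idea behind REMBRANDT (which itself is a Java framework) can be sketched as a uniform power-control interface over interchangeable protocol backends, with the connection details kept in a registry. The class names, hosts, and returned command strings below are all invented for illustration; real backends would speak SNMP or IPMI instead of returning strings.

```python
from abc import ABC, abstractmethod

# Clients see one interface; protocol, address, and credentials stay
# hidden behind a registry of concrete backends (simulated here).
class RemoteSystem(ABC):
    def __init__(self, host):
        self.host = host

    @abstractmethod
    def power(self, on: bool) -> str: ...

class SnmpCrate(RemoteSystem):       # e.g. a VME crate with an SNMP agent
    def power(self, on):
        return "SNMP SET %s power=%d" % (self.host, int(on))

class IpmiFrontEnd(RemoteSystem):    # e.g. an FEC with an IPMI BMC
    def power(self, on):
        return "IPMI chassis power %s @ %s" % ("on" if on else "off", self.host)

# registry maps logical names to backends; users never see the hosts
REGISTRY = {
    "crate-07": SnmpCrate("192.0.2.10"),
    "fec-03": IpmiFrontEnd("192.0.2.21"),
}

def switch(name, on):
    return REGISTRY[name].power(on)

print(switch("crate-07", True))   # -> SNMP SET 192.0.2.10 power=1
```

Adding a new protocol then means adding one subclass, which matches the framework's stated goal of easy extensibility.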
THPPC009 | Design and Status of the SuperKEKB Accelerator Control Network System | network, linac, EPICS, Ethernet | 1107 |
|
|||
SuperKEKB is the upgrade of the KEKB asymmetric-energy electron-positron collider for the next-generation B-factory experiment in Japan. It is designed to achieve a luminosity of 8 × 10³⁵ cm⁻²s⁻¹, 40 times higher than the world luminosity record set at KEKB. For SuperKEKB, we upgrade the accelerator control network system, which connects all devices in the accelerator. To construct a higher-performance network system, we install network switches based on 10 Gigabit Ethernet (10GbE) for wider-bandwidth data transfer. Additional optical fibers, for a reliable and redundant network and for the robust accelerator control timing system, are also installed. For the SuperKEKB beamline construction and accelerator component maintenance, we install a new wireless network system based on Leaky Coaxial (LCX) cable antennas in the 3 km circumference beamline tunnel. We reconfigure the network design to enhance the reliability and security of the network. In this paper, the design and current status of the SuperKEKB accelerator control network system are presented. | |||
![]() |
Poster THPPC009 [1.143 MB] | ||
THPPC012 | The Equipment Database for the Control System of the NICA Accelerator Complex | database, TANGO, software, collider | 1111 |
|
|||
The report describes the equipment database for the control system of the Nuclotron-based Ion Collider fAcility (NICA, JINR, Russia). The database will contain information about the hardware, software, computers and network components of the control system, their main settings and parameters, and the responsible persons. The equipment database should help implement Tango as the control system of the NICA accelerator complex. The report also describes a web service to display, search, and manage the database. | |||
![]() |
Poster THPPC012 [1.070 MB] | ||
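A minimal sketch of such an equipment database: each device row carries its type, network address, and responsible person, and the web service's search boils down to queries like the one below. The schema, table name, and sample data are invented for illustration (the real system uses its own schema behind a web front end).

```python
import sqlite3

# In-memory stand-in for the equipment database described above.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE equipment (
    name TEXT PRIMARY KEY, kind TEXT, host TEXT, responsible TEXT)""")
db.executemany("INSERT INTO equipment VALUES (?, ?, ?, ?)", [
    ("ps-dipole-01", "power-supply", "10.1.0.5", "I. Ivanov"),
    ("bpm-ring-12", "diagnostics", "10.1.0.40", "P. Petrov"),
])
db.commit()

def find_by_kind(kind):
    """Return (name, responsible) pairs for all equipment of one kind."""
    cur = db.execute(
        "SELECT name, responsible FROM equipment WHERE kind = ? ORDER BY name",
        (kind,))
    return cur.fetchall()

print(find_by_kind("power-supply"))  # -> [('ps-dipole-01', 'I. Ivanov')]
```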
THPPC013 | Configuration Management of the Control System | TANGO, software, database, PLC | 1114 |
|
|||
The control system of large research facilities like synchrotrons involves a lot of work to keep hardware and software synchronized with each other for good coherence. Modern control system middleware infrastructures like Tango use a database to store all values necessary to communicate with the devices. Nevertheless, it is necessary to configure the driver of a power supply or a motor controller before any software of the control system can communicate with it. This is part of configuration management, which involves keeping track of thousands of pieces of equipment and their properties. In recent years, several DevOps tools like Chef, Puppet, Ansible or SpaceMaster have been developed by the OSS community. They are now mandatory for configuring the thousands of servers that make up clusters or cloud services. Defining a set of coherent components, enabling Continuous Deployment in synergy with Continuous Integration, reproducing a control system for simulation, and rebuilding and tracking changes even in the hardware configuration are among the use cases. We explain MaxIV's strategy regarding configuration management. | |||
![]() |
Poster THPPC013 [4.620 MB] | ||
THPPC014 | CMX - A Generic Solution to Expose Monitoring Metrics in C and C++ Applications | monitoring, real-time, diagnostics, operation | 1118 |
|
|||
CERN’s Accelerator Control System is built upon a large number of C, C++ and Java services that are required for daily operation of the accelerator complex. Knowledge of the internal state of these processes is essential for problem diagnostics as well as for constant monitoring for pre-failure recognition. The CMX library follows principles similar to those of JMX (Java Management Extensions) and provides similar monitoring capabilities for C and C++ applications. It allows registering and exposing runtime information as simple counters, floating point numbers or character data. These can subsequently be used by external diagnostics tools for checking thresholds, sending alerts or trending. CMX uses shared memory to ensure non-blocking read/update actions, which is an important requirement for real-time processes. This paper introduces the topic of monitoring C/C++ applications and presents CMX as a building block to achieve this goal. | |||
![]() |
Poster THPPC014 [0.795 MB] | ||
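CMX itself is a C/C++ library; the Python sketch below only illustrates the underlying shared-memory idea: fixed-size slots holding (name, value) pairs in a memory-mapped region, so that an external reader can scan the region without taking a lock in the publishing process. The slot layout and sizes are invented for the sketch, not CMX's actual memory format.

```python
import mmap
import struct

# 32-byte name field + 8-byte little-endian signed value per slot.
SLOT, NAME_LEN = 40, 32
shm = mmap.mmap(-1, 16 * SLOT)     # anonymous shared region, 16 slots

def publish(slot, name, value):
    """Writer side: update a metric in place (no lock taken)."""
    off = slot * SLOT
    shm[off:off + NAME_LEN] = name.encode().ljust(NAME_LEN, b"\0")
    shm[off + NAME_LEN:off + SLOT] = struct.pack("<q", value)

def read(slot):
    """Reader side: an external tool scanning the region."""
    off = slot * SLOT
    name = shm[off:off + NAME_LEN].rstrip(b"\0").decode()
    (value,) = struct.unpack("<q", shm[off + NAME_LEN:off + SLOT])
    return name, value

publish(0, "events.processed", 12345)
print(read(0))   # -> ('events.processed', 12345)
```

In the real library the region is a named shared-memory segment visible to separate monitoring processes, and atomicity of the value update matters; the sketch sidesteps both points.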
THPPC015 | Managing Infrastructure in the ALICE Detector Control System | detector, experiment, hardware, software | 1122 |
|
|||
The main role of the ALICE Detector Control System (DCS) is to ensure safe and efficient operation of one of the large high energy physics experiments at CERN. The DCS design is based on the commercial SCADA software package WinCC Open Architecture. The system includes over 270 VME and power supply crates, 1200 network devices, over 1,000,000 monitored parameters as well as numerous pieces of front-end and readout electronics. This paper summarizes the computer infrastructure of the DCS as well as the hardware and software components that are used by WinCC OA for communication with electronics devices. The evolution of these components and experience gained from the first years of their production use are also described. We also present tools for the monitoring of the DCS infrastructure and supporting its administration together with plans for their improvement during the first long technical stop in LHC operation. | |||
![]() |
Poster THPPC015 [1.627 MB] | ||
THPPC017 | Control System Configuration Management at PSI Large Research Facilities | EPICS, hardware, database, software | 1125 |
|
|||
The control system of the PSI accelerator facilities and their beamlines consists mainly of so-called Input Output Controllers (IOCs) running EPICS. There are several flavors of EPICS IOCs at PSI, running on different CPUs, different underlying operating systems and different EPICS versions. We have hundreds of IOCs controlling the facilities at PSI. The goal of control system configuration management is to provide a set of tools that allow a consistent and uniform configuration of all IOCs. In this context, an Oracle database contains all hardware-specific information, including the CPU type, operating system and EPICS version. The installation tool connects to the Oracle database. Depending on the IOC type, a set of files (or symbolic links) is created in the boot directory which points to the required operating system, libraries and EPICS configuration files. In this way a transparent and user-friendly IOC installation is achieved. A control system expert can check the IOC installation, boot information, and the status of loaded EPICS process variables using web applications. | |||
![]() |
Poster THPPC017 [0.405 MB] | ||
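The installation step can be sketched as: look up the IOC's record, then create symbolic links in its boot directory pointing at the matching installation tree, idempotently. The dictionary standing in for the Oracle database, the IOC name, the field names, and the paths below are all invented for illustration.

```python
import os
import tempfile

# Stand-in for the hardware-specific records kept in the Oracle database.
ioc_db = {"IOC-XX-RF01": {"epics": "3.14.12", "os": "vxWorks"}}

def install(ioc, boot_root, epics_root):
    """Create the per-IOC boot directory and an 'epics-base' symlink
    selecting the EPICS version recorded for this IOC."""
    rec = ioc_db[ioc]
    boot = os.path.join(boot_root, ioc)
    os.makedirs(boot, exist_ok=True)
    target = os.path.join(epics_root, "base-" + rec["epics"])
    link = os.path.join(boot, "epics-base")
    if os.path.islink(link):
        os.remove(link)              # re-installation is idempotent
    os.symlink(target, link)
    return os.readlink(link)

with tempfile.TemporaryDirectory() as tmp:
    print(install("IOC-XX-RF01", tmp, "/opt/epics"))
    # -> /opt/epics/base-3.14.12
```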
THPPC018 | Construction of the TPS Network System | network, EPICS, Ethernet, timing | 1127 |
|
|||
The 3 GeV Taiwan Photon Source (TPS) project needs a reliable, secure and high-throughput network to ensure routine facility operation and to provide better service for various purposes. The network system includes the office network, the beamline network and the accelerator control network for the TPS and TLS (Taiwan Light Source) sites at NSRRC. Cyber security technologies such as firewalls, NAT and VLANs are combined to define a tree network topology isolating the accelerator control network, the beamline network and subsystem components. Various network management tools are used for maintenance and troubleshooting. The TPS network system architecture, cabling topology, redundancy and maintainability are described in this report. | |||
![]() |
Poster THPPC018 [2.650 MB] | ||
THPPC022 | Securing Mobile Control System Devices: Development and Testing | network, Linux, EPICS, interface | 1131 |
|
|||
Recent advances in portable devices give end users convenient ways to access data over the network. Networked control systems have traditionally been kept on local or internal networks to prevent external threats and isolate traffic. The UWMC Clinical Neutron Therapy System has its control system on such an isolated network. Engineers have been updating the control system with EPICS and have developed EDM-based interfaces for control and monitoring. This project describes a tablet-based monitoring device being developed to allow the engineers to monitor the system while, for example, moving from rack to rack or room to room. EDM is being made available via the tablet. Methods are being created to maintain the security of the control system and tablet while providing ease of access and meaningful data for management. In parallel with the tablet development, security and penetration tests are also being developed. | |||
THPPC023 | Integration of Windows Binaries in the UNIX-based RHIC Control System Environment | Windows, software, interface, Linux | 1135 |
|
|||
Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. Since its inception, the RHIC control system has been built on UNIX or Linux and implemented primarily in C++. Sometimes equipment vendors supply software packages developed for the Microsoft Windows operating system. This leads to a need to integrate these packaged executables into the existing data logging, display, and alarm systems. This paper describes an approach to incorporating such non-UNIX binaries seamlessly into the RHIC control system with minimal changes to the existing code base, allowing for compilation on standard Linux workstations through the use of a virtual machine. The implementation resulted in the successful use of a Windows dynamic-link library (DLL) to control equipment remotely while running a synoptic display interface on a Linux machine. |
|||
![]() |
Poster THPPC023 [1.391 MB] | ||
THPPC024 | Operating System Upgrades at RHIC | software, Linux, network, collider | 1138 |
|
|||
Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. Upgrading hundreds of machines to the next major release of an Operating system (OS), while keeping the accelerator complex running, presents a considerable challenge. Even before addressing the challenges that an upgrade represents, there are critical questions that must be answered. Why should an upgrade be considered? (An upgrade is labor intensive and includes potential risks due to defective software.) When is it appropriate to make incremental upgrades to the OS? (Incremental upgrades can also be labor intensive and include similar risks.) When is the best time to perform an upgrade? (An upgrade can be disruptive.) Should all machines be upgraded to the same version at the same time? (At times this may not be possible, and there may not be a need to upgrade certain machines.) Should the compiler be upgraded at the same time? (A compiler upgrade can also introduce risks at the software application level.) This paper examines our answers to these questions, describes how upgrades to the Red Hat Linux OS are implemented by the Controls group at RHIC, and describes our experiences. |
|||
![]() |
Poster THPPC024 [0.517 MB] | ||
THPPC025 | The Interaction between Safety Interlock and Motion Control Systems on the Dingo Radiography Instrument at the OPAL Research Reactor | collimation, neutron, shielding, radiation | 1141 |
|
|||
A neutron radiography/tomography instrument (Dingo) has recently been commissioned at the Bragg Institute, ANSTO. It utilizes thermal beam HB2 of the OPAL research reactor, with a flux of up to 4.75 × 10⁷ neutrons cm⁻² s⁻¹ at the sample. One component of the instrument is a 2.5 tonne selector wheel filled with a wax/steel shielding mixture, which requires complex interaction between the safety interlock and motion control systems. It provides six apertures equipped with various neutron beam optics plus a solid ‘shutter’ section to block the beam. A standardized Galil-based motion system precisely controls the movement of the wheel, while a Pilz safety PLC specifies the desired position and handles other safety aspects of the instrument. A shielded absolute SSI encoder is employed to give high-accuracy position feedback in conjunction with a number of limit switches. This paper details the challenges of creating a motion system with inherent safety, verifying that the wheel meets specifications, and the considerations in selecting components to withstand high-radiation environments. | |||
![]() |
Poster THPPC025 [1.929 MB] | ||
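Absolute SSI encoders commonly transmit their position as a Gray code, so that successive positions differ by a single bit on the wire; decoding it to binary is a standard shift-and-XOR loop. Whether Dingo's particular encoder uses Gray or plain binary coding is not stated in the abstract, so treat this as a generic sketch of the decoding step that position feedback like this typically needs.

```python
def binary_to_gray(b: int) -> int:
    # adjacent positions differ by exactly one bit in Gray code
    return b ^ (b >> 1)

def gray_to_binary(g: int) -> int:
    # iterative inverse: fold the higher bits back down with XOR
    b = g
    s = 1
    while b >> s:
        b ^= b >> s
        s <<= 1
    return b

# round-trip check over a 13-bit encoder range
assert all(gray_to_binary(binary_to_gray(n)) == n for n in range(1 << 13))
```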
THPPC026 | Diagnostic Controls of IFMIF-EVEDA Prototype Accelerator | diagnostics, EPICS, software, emittance | 1144 |
|
|||
The Linear IFMIF Prototype Accelerator (LIPAc) will accelerate a 9 MeV, 125 mA CW deuteron beam in order to validate the technology that will be used for the future IFMIF accelerator (International Fusion Materials Irradiation Facility). This facility will be installed in Rokkasho (Japan), and Irfu-Saclay has developed the control system for several work packages, such as the injector and a set of diagnostics. At Irfu-Saclay, beam tests were carried out on the injector with its diagnostics. Diagnostic devices have been developed to characterize the high-power beam (more than 1 MW) along the accelerator: an Emittance Meter Unit (EMU), Ionization Profile Monitors (IPM), Secondary Electron Emission Grids (SEM-grids), Beam Loss Monitors (BLoM and μLoss), and Current Transformers (CT). The control system relies on COTS hardware and an EPICS software platform. A specific isolated fast acquisition subsystem running at a high sampling rate (about 1 MS/s), triggered by the Machine Protection System (MPS), is dedicated to the analysis of post-mortem data produced by the BLoM and current transformer signals. | |||
![]() |
Poster THPPC026 [0.581 MB] | ||
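The post-mortem capture described above can be sketched as a fixed-depth ring buffer that is frozen when the MPS trigger fires, preserving the samples just before the event for later analysis. The buffer depth, sample values, and class interface below are invented for the sketch; the real subsystem does this in hardware-backed acquisition at about 1 MS/s.

```python
from collections import deque

class PostMortemBuffer:
    """Ring buffer of the most recent samples; freezes on trigger."""
    def __init__(self, depth):
        self.buf = deque(maxlen=depth)   # old samples fall off the front
        self.frozen = None

    def sample(self, value):
        if self.frozen is None:          # ignore data after the trigger
            self.buf.append(value)

    def trigger(self):
        # simulated MPS trigger: snapshot the pre-event history
        self.frozen = list(self.buf)
        return self.frozen

pm = PostMortemBuffer(depth=4)
for v in [1, 2, 9, 3, 4]:
    pm.sample(v)
snapshot = pm.trigger()
pm.sample(99)               # arrives after the trigger, ignored
print(snapshot)             # -> [2, 9, 3, 4]
```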
THPPC027 | A New EPICS Device Support for S7 PLCs | PLC, EPICS, interface, software | 1147 |
|
|||
S7 series programmable logic controllers (PLCs) are commonly used in accelerator environments. A new EPICS device support for S7 PLCs that is based on libnodave has been developed. This device support allows for a simple integration of S7 PLCs into EPICS environments. Developers can simply create an EPICS record referring to a memory address in the PLC and the device support takes care of automatically connecting to the PLC and transferring the value. This contribution presents the concept behind the s7nodave device support and shows how simple it is to create an EPICS IOC that communicates with an S7 PLC. | |||
![]() |
Poster THPPC027 [3.037 MB] | ||
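The key convenience named above is that a record simply refers to a PLC memory address. A sketch of what parsing such an address involves is below; the accepted grammar (data-block byte/word/double-word addresses like "DB5.DBW20") is a simplified assumption for illustration, not s7nodave's actual address syntax.

```python
import re

# width in bytes for S7 data-block byte / word / double-word accesses
WIDTHS = {"DBB": 1, "DBW": 2, "DBD": 4}
ADDR = re.compile(r"^DB(\d+)\.(DBB|DBW|DBD)(\d+)$")

def parse_s7_address(addr):
    """Split an S7-style address into (block number, width, byte offset)."""
    m = ADDR.match(addr)
    if not m:
        raise ValueError("bad S7 address: %r" % addr)
    block, width, offset = m.groups()
    return int(block), WIDTHS[width], int(offset)

print(parse_s7_address("DB5.DBW20"))  # -> (5, 2, 20)
```

A device support can then use the parsed triple to schedule the read/write against the PLC connection it maintains behind the scenes.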
THPPC032 | Embedded EPICS Controller for KEK Linac Screen Monitor System | linac, PLC, EPICS, Linux | 1150 |
|
|||
The screen monitor (SC) of the KEK linac is a beam diagnostics device that measures transverse beam profiles with a fluorescent screen. The screen material is 99.5% Al₂O₃ and 0.5% CrO₃, with which a sufficient amount of fluorescent light is obtained when electron and positron beams impinge on the screen. By detecting the fluorescent light with a charge-coupled device (CCD) camera, the transverse spatial profiles of the beam can be easily measured. Compact SCs were developed in 1995 for the KEKB project, and about 110 of them were installed into the beam line at that time. A VME-based computer control system was also developed in order to perform fast and stable control of the SC system. However, the previous system has become obsolete and hard to maintain. Recently, a new screen monitor control system for the KEK electron/positron injector linac has been developed and fully installed. The new system is an embedded EPICS IOC based on a PLC running Linux. In this paper, we present the new screen monitor control system in detail. | |||
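The measurement behind such a screen monitor reduces to simple moments of the projected CCD intensity: the beam centroid is the first moment of the profile and the RMS width the square root of the second central moment. The sample profile, pixel scale, and function name below are invented for the sketch; the real system of course works on full camera images.

```python
import math

def centroid_and_rms(profile, pixel_mm=0.05):
    """Centroid and RMS width (both in mm) of a projected intensity
    profile, computed from its first and second moments."""
    total = sum(profile)
    mean = sum(i * v for i, v in enumerate(profile)) / total
    var = sum((i - mean) ** 2 * v for i, v in enumerate(profile)) / total
    return mean * pixel_mm, math.sqrt(var) * pixel_mm

# symmetric triangular test profile centred on pixel 3
profile = [0, 1, 2, 3, 2, 1, 0]
c, w = centroid_and_rms(profile)
print(round(c, 3))    # -> 0.15  (pixel 3 at 0.05 mm/pixel)
```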
THPPC035 | RF Signal Switching System for Electron Beam Position Monitor Utilizing ARM Microcontroller | operation, injection, LabView, Ethernet | 1160 |
|
|||
ARM microcontrollers have high processing speed and low power consumption because their instruction set allows them to work efficiently with less memory. Therefore, ARM microcontrollers are used not only in portable devices but also in other commercial electronic devices. In recent years, free development environments and low-cost development kits have been provided by many companies. The “mbed” provided by NXP is one of them. The “mbed” provides an environment in which we can develop a product easily even if we are not familiar with electronics or microcontrollers. We can supply electric power and transfer the programs that we have developed by connecting to a PC via USB, and we can use USB and LAN, which in general require a high level of expertise. The “mbed” also functions as an HTTP server; by combining it with a JavaScript library, we can control multiple I/O ports at the same time through the LAN. In this presentation, we report the results of applying the “mbed” to develop an RF signal switching system for a turn-by-turn beam position monitor (BPM) at the synchrotron light source UVSOR-III. | |||
![]() |
Poster THPPC035 [2.228 MB] | ||
THPPC036 | EPICS Control System for the FFAG Complex at KURRI | EPICS, interface, network, LabView | 1164 |
|
|||
At the Kyoto University Research Reactor Institute (KURRI), a fixed-field alternating gradient (FFAG) proton accelerator complex, which consists of three FFAG rings, has been constructed for experimental studies of an accelerator-driven sub-critical reactor (ADSR) system with spallation neutrons produced by the accelerator. The world's first ADSR experiment was carried out in March 2009. In order to increase the beam intensity of the proton FFAG accelerator, a new injection system with an H− linac was constructed in 2011. To keep up with these developments, the control system of these accelerators should be easy to develop and maintain. The first control system was based on LabVIEW, and its development started seven years ago; it has thus become necessary to update its components, for example the operating system of the computers. The first control system also had some minor stability problems, and it was difficult for non-experts in LabVIEW to modify the control programs. Therefore the EPICS toolkit has been used for the accelerator control system since 2009. The present control system of the KURRI FFAG complex is explained. | |||
![]() |
Poster THPPC036 [3.868 MB] | ||
THPPC037 | EPICS-based Control System for New Skew Quadrupole Magnets in J-PARC MR | PLC, EPICS, status, quadrupole | 1168 |
|
|||
In the J-PARC Main Ring (MR), a control system for new skew quadrupole magnets has been constructed. This system is based on EPICS (Experimental Physics and Industrial Control System). The system comprises a YOKOGAWA F3RP61-2L (a PLC controller running Linux), a function generator (Tektronix AFG3000), and a commercial bipolar DC amplifier. The function generator is controlled using the VXI-11 protocol over Ethernet, and the amplifier is hardwired to PLC I/O modules. Both devices are controlled by the F3RP61-2L. The function generator produces a ramp waveform in each machine cycle of 2.48 seconds, and the DC amplifier drives the magnet. The control system for the skew quadrupole magnets was developed in 2012 and has been in operation since January 2013. | |||
![]() |
Poster THPPC037 [1.027 MB] | ||
THPPC043 | Implement an Interface for Control System to Interact with Oracle Database at SSC-LINAC | database, EPICS, interface, linac | 1171 |
|
|||
The SSC-LINAC control system is based on the EPICS architecture. It includes ion sources, vacuum, digital power supplies, etc. Some of these subsystems need to interact with an Oracle database; for example, the power supply control subsystem needs to retrieve parameters while the power supplies are running and also needs to store data in Oracle. We therefore designed and implemented an interface for EPICS IOCs to interact with the Oracle database. The interface is a soft IOC, itself based on the EPICS architecture, so other IOCs and OPIs can use it to interact with Oracle via the Channel Access protocol. | |||
THPPC045 | The SSC-Linac Control System | software, linac, hardware, operation | 1173 |
|
|||
This article gives a brief description of the SSC-Linac control system for the Heavy Ion Research Facility in Lanzhou (HIRFL). It mainly describes the overall system architecture, hardware and software in detail. The overall architecture is that of a distributed control system. We have adopted EPICS as the system integration tool to develop the SSC-Linac control system. We use an NI PXIe chassis and PXIe bus master as the front-end control system hardware. Device controllers for each subsystem are composed of commercial products or components designed by the subsystems. The operating system for the OPIs and IOCs of the SSC-Linac control system will be Linux. | |||
THPPC048 | Upgrade of the Nuclotron Injection Control and Diagnostics System | injection, TANGO, diagnostics, device-server | 1176 |
|
|||
The Nuclotron is a 6 GeV/n superconducting synchrotron operating at JINR, Dubna since 1993. It will be the core of the future accelerator complex NICA, which is now under development. The report presents details of the upgrade of the Nuclotron injection hardware and software to operate under the future Tango-based NICA control system. The designed system provides control and synchronization of the electrostatic and magnetic inflector devices and diagnostics of the ion beam injected from the 20 MeV linear accelerator into the Nuclotron. The hardware consists of a few controllable power supplies, various National Instruments acquisition devices, and a custom-designed controller module. The software consists of a few C++ Tango device servers and NI LabVIEW client applications. | |||
![]() |
Poster THPPC048 [1.472 MB] | ||
THPPC049 | The Power Supply System for Electron Beam Orbit Correctors and Focusing Lenses of Kurchatov Synchrotron Radiation Source | power-supply, operation, synchrotron, synchrotron-radiation | 1180 |
|
|||
The modernization project for the low-current power supply system of the Kurchatov Synchrotron Radiation Source has been designed and is now under implementation. It includes a transition to new power supplies feeding the electron beam orbit correctors and focusing lenses. A multi-level control system, based on the CAN/CANopen fieldbus, has been developed for specific accelerator applications, allowing startup and continuous operation of hundreds of power supplies together with the other subsystems of the accelerator. The power source data and status are collected into an archive with the Citect SCADA 7.2 Server and SCADA Historian Server. The following operational parameters of the system are expected: current control resolution, 0.05% of Imax; current stability, 5 × 10⁻⁴; 10-hour current variance, 100 ppm of Imax; temperature drift, 40 ppm/K of Imax. | |||
THPPC050 | Upgrade System of Vacuum Monitoring of Synchrotron Radiation Sources of National Research Centre Kurchatov Institute | vacuum, synchrotron, operation, database | 1183 |
|
|||
The modernization project for the vacuum system of the synchrotron radiation source at the National Research Centre Kurchatov Institute (NRC KI) has been designed and implemented. It includes a transition to new high-voltage power sources for the NMD and PVIG-0.25/630 pumps. The system is controlled via CAN bus, and the vacuum is monitored by measuring pump currents in a range of 0.0001–10 mA. Status visualization, data collection and data storage are implemented on the Citect SCADA 7.2 Server and SCADA Historian Server. The system ensures a vacuum of 10⁻⁷ Pa. This work has increased the efficiency and reliability of the vacuum system, making it possible to improve the main parameters of the SR source. | |||
THPPC051 | First Operation of New Electron Beam Orbit Measurement System at SIBERIA-2 | electron, radiation, synchrotron, brilliance | 1186 |
|
|||
The paper focuses on the results of commissioning and use of the electron beam orbit measurement system at the synchrotron radiation source SIBERIA-2, currently in operation at the Kurchatov Institute. The main purpose of creating the new orbit measurement system is to improve the electron beam diagnostics at the storage ring. This system provides continuous measurements of the electron beam closed orbit during storing, ramping and operation for users. Besides, with the help of the system it is possible to carry out turn-by-turn measurements of the electron beam trajectory during the injection process. After installation of the new orbit measurement system we obtained a very good instrument for studying electron beam dynamics in the main storage ring in detail. The paper describes the new orbit measurement system, its technical performance, the results of commissioning, and our experience. | |||
THPPC053 | NSLS-II Booster Ramp Handling | booster, operation, injection, dipole | 1189 |
|
|||
The NSLS-II booster is a full-energy synchrotron with a range from 200 MeV up to 3 GeV. The ramping cycle is 1 second. A set of electronics developed at BNL for the NSLS-II project was modified for the booster power supply (PS) control. The set includes a Power Supply Interface, which is located close to a power supply, and a Power Supply Controller (PSC), which is connected via 100 Mbit Ethernet to an EPICS IOC running on a front-end computer. A table of 10k setpoints uploaded to the memory of the PSC defines the behavior of a PS in the machine cycle. Special software in the IOC provides a smooth shape of the ramping waveform when the waveform is changed. A Ramp Manager (RM) high-level application, developed in Python, makes it easy to change, compare and copy the ramping waveforms and upload them to process variables. The RM checks the waveform derivative, allows manual adjustment of the waveform in graph and text form, and covers all specific features of the booster PS control. This paper describes the software for the booster ramp handling. | |||
Poster THPPC053 [0.423 MB] | ||
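The waveform-derivative check mentioned above can be sketched in a few lines of Python (a minimal illustration only; the function name, ampere units and the 0.05 A step limit are assumptions, not the actual NSLS-II Ramp Manager code):

```python
import math

def check_ramp_waveform(setpoints, max_step):
    """Return the indices where the point-to-point step of a ramp
    table exceeds the allowed limit (e.g. a hardware slew limit)."""
    bad = []
    for i in range(1, len(setpoints)):
        if abs(setpoints[i] - setpoints[i - 1]) > max_step:
            bad.append(i)
    return bad

# A smooth 1-second ramp of 10k setpoints from 0 to 100 A passes:
table = [50.0 * (1 - math.cos(math.pi * i / 9999)) for i in range(10000)]
assert check_ramp_waveform(table, max_step=0.05) == []

# A manually edited table with a discontinuity is flagged:
table[5000] += 1.0
violations = check_ramp_waveform(table, max_step=0.05)  # [5000, 5001]
```

Such a check lets the operator catch a discontinuous waveform before it is uploaded to the PSC memory.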
THPPC056 | Design and Implementation of Linux Drivers for National Instruments IEEE 1588 Timing and General I/O Cards | hardware, timing, software, Linux | 1193 |
|
|||
Cosylab is developing GPL Linux device drivers to support several National Instruments (NI) devices. In particular, drivers have already been developed for the NI PCI-1588 and PXI-6682 (IEEE 1588/PTP) timing devices and the NI PXI-6259 I/O device. These drivers are used in the development of ITER, the latest plasma fusion research reactor, under construction at the Cadarache facility in France. In this paper we discuss design and implementation issues, such as driver API design (device file per device versus device file per functional unit), PCI device enumeration and reset handling. We also present various use cases demonstrating the capabilities and real-world applications of these drivers. | |||
Poster THPPC056 [0.482 MB] | ||
THPPC057 | Validation of the Data Consolidation in Layout Database for the LHC Tunnel Cryogenics Controls Package | database, cryogenics, PLC, operation | 1197 |
|
|||
The control system of the Large Hadron Collider cryogenics manages over 34,000 instrumentation channels which are essential for populating the software of the PLCs (Programmable Logic Controllers) and SCADA (Supervisory Control and Data Acquisition) systems responsible for maintaining the LHC at the appropriate operating conditions. The control system specifications are generated by the CERN UNICOS (Unified Industrial Control System) framework using information from database views extracted from the LHC layout database. The LHC layout database is part of the CERN database managing centralized and integrated data, documenting the whole CERN infrastructure (accelerator complex) by modeling its topographical organization (“layouts”) and defining its components (functional positions) and the relationships between them. This paper describes the methodology of the data validation process, including the development of the software tools used to update the database from original values to manually adjusted values after three years of machine operation, as well as the update of the data to accommodate the upgrade of the UNICOS Continuous Process Control (CPC) package. | |||
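The consolidation step, updating original database values to the manually adjusted values accumulated during operation, can be illustrated with a small Python sketch (the channel names and the three-way diff structure are hypothetical, not the actual layout database schema):

```python
def diff_channel_configs(original, adjusted):
    """Report what a consolidation pass would need to write back:
    values changed by operators, channels gone, channels added."""
    changed = {k: (original[k], adjusted[k])
               for k in original
               if k in adjusted and original[k] != adjusted[k]}
    missing = sorted(k for k in original if k not in adjusted)
    added = sorted(k for k in adjusted if k not in original)
    return changed, missing, added

# Invented channel names and values:
orig = {"QURCA_TT821": 4.5, "QURCA_PT961": 1.3, "QURCA_LT893": 20.0}
adj = {"QURCA_TT821": 4.7, "QURCA_PT961": 1.3, "QURCA_CV910": 0.0}
changed, missing, added = diff_channel_configs(orig, adj)
```

A report like this is the raw material for deciding which manual adjustments should become the new reference values in the database.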
THPPC058 | LSA - the High Level Application Software of the LHC - and Its Performance During the First Three Years of Operation | software, injection, optics, hardware | 1201 |
|
|||
The LSA (LHC Software Architecture) project was started in 2001 with the aim of developing the high-level core software for the control of the LHC accelerator. It has now been deployed widely across the CERN accelerator complex and has been largely successful in meeting its initial aims. The main functionality and architecture of the system are recalled and its use in the commissioning and exploitation of the LHC is elucidated. | |||
Poster THPPC058 [1.291 MB] | ||
THPPC060 | A PXI-Based Low Level Control for the Fast Pulsed Magnets in the CERN PS Complex | kicker, FPGA, timing, monitoring | 1205 |
|
|||
Fast pulsed magnet (kicker) systems are used for beam injection and extraction in the CERN PS complex. A novel approach, based on off-the-shelf PXI components, has been used for the consolidation of the low-level part of their control system. Typical required functionalities, such as interlocking, equipment state control, thyratron drift stabilisation and protection, short-circuit detection in magnets and transmission lines, pulsed signal acquisition and fine timing, have been successfully integrated within a PXI controller. The controller comprises a National Instruments NI PXI-810x RT real-time processor, a multifunctional RIO module including a Virtex-5 LX30 FPGA, a 1 GS/s digitiser and a digital delay module with 1 ns resolution. National Instruments LabVIEW development tools have been used to develop the embedded real-time software as well as FPGA configuration and expert application programs. The integration within the CERN controls environment is performed using the Rapid Application Development Environment (RADE) software tools, developed at CERN. | |||
Poster THPPC060 [0.887 MB] | ||
THPPC061 | SwissFEL Magnet Test Setup and Its Controls at PSI | EPICS, software, operation, detector | 1209 |
|
|||
High brightness electron bunches will be guided in the future Free Electron Laser (SwissFEL) at the Paul Scherrer Institute (PSI) with the use of several hundred magnets. The SwissFEL machine imposes very strict requirements not only on the field quality but also on the mechanical and magnetic alignment of these magnets. To ensure that the magnet specifications are met, and to develop reliable procedures for aligning the magnets in the SwissFEL and correcting their field errors during machine operation, the PSI magnet test system was upgraded. The upgraded system is a high-precision measurement setup based on Hall probe, rotating coil, vibrating wire and moving wire techniques. It is fully automated and integrated into the PSI controls. The paper describes the main controls components of the new magnet test setup and their performance. | |||
Poster THPPC061 [0.855 MB] | ||
THPPC062 | Control Environment of Power Supply for TPS Booster Synchrotron | power-supply, booster, EPICS, interface | 1213 |
|
|||
The TPS is a latest-generation, high-brightness synchrotron light source scheduled for commissioning in 2014. Its booster is designed to ramp electron beams from 150 MeV to 3 GeV at 3 Hz. The control environments, based on the EPICS framework, are gradually being developed and built. This report summarizes the efforts on the control environment of the BPM and power supply systems for the TPS booster synchrotron. | |||
THPPC063 | Status of the TPS Insertion Devices Controls | insertion, insertion-device, EPICS, hardware | 1216 |
|
|||
The insertion devices (IDs) for the Taiwan Photon Source are under construction. Eight insertion devices are being built, including in-vacuum undulators, with or without taper, and elliptically polarized undulators. A common control framework for all IDs was developed, with hardware and software components shared as much as possible. The motion control functionality for gap and phase adjustment supports servo motors, stepper motors and absolute encoders. The control system for all IDs is based on the EPICS architecture. Trimming power supplies for corrector magnets and phase shifter control functionality are also addressed. Miscellaneous controls include ion pumps and BA gauges for the vacuum system, temperature sensors for ID environmental monitoring and bake-out, limit switches and emergency buttons. User interfaces for ID beamline users, such as ID gap control and on-the-fly scans, are included to help them perform their experiments. The progress of the ID control system is summarized in this report. | |||
Poster THPPC063 [2.878 MB] | ||
THPPC064 | The HiSPARC Control System | detector, software, Windows, database | 1220 |
|
|||
Funding: Nikhef The purpose of the HiSPARC project is twofold. First, the physics goal: detection of high-energy cosmic rays. Second, to offer an educational program in which high school students participate by building their own detection station and analysing their data. Around 70 high schools, spread over the Netherlands, are participating. Data are centrally stored at Nikhef in Amsterdam. The detectors, located on the roofs of the high schools, are connected by means of a USB interface to a Windows PC, which itself is connected to the high school's network and further on to the public internet. Each station is equipped with GPS providing the exact location and accurate timing. This paper covers the setup, building and usage of the station software. It contains a LabVIEW run-time engine, services for remote control and monitoring, a series of Python scripts and a local buffer. An important task of the station software is to control the dataflow, event building and submission to the central database. Furthermore, several global aspects are described, like the source repository, the station software installer and organization. Windows, USB, FTDI, LabVIEW, VPN, VNC, Python, Nagios, NSIS, Django |
|||
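The station-side dataflow, buffering events locally and submitting them in batches to the central database, might be sketched as follows (the class, field names and batch size are illustrative assumptions; the real station software combines LabVIEW services with Python scripts):

```python
import json
import queue

class EventBuffer:
    """Toy station buffer: detector events are queued locally and
    flushed to the central server in batches, so data survive short
    network outages."""
    def __init__(self, submit, batch_size=3):
        self.q = queue.Queue()
        self.submit = submit          # callable that ships a JSON batch
        self.batch_size = batch_size

    def add(self, event):
        self.q.put(event)
        if self.q.qsize() >= self.batch_size:
            self.flush()

    def flush(self):
        batch = []
        while not self.q.empty():
            batch.append(self.q.get())
        if batch:
            self.submit(json.dumps(batch))

sent = []                             # stands in for the upload channel
buf = EventBuffer(submit=sent.append, batch_size=3)
for ts in (1000, 1001, 1002):         # GPS timestamps (illustrative)
    buf.add({"timestamp": ts, "pulseheights": [120, 85]})
```

In a real deployment the `submit` callable would post the batch to the central server, and a periodic timer would also call `flush` so small trickles of events are not held back indefinitely.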
THPPC065 | Software System for Monitoring and Control at the Solenoid Test Facility | monitoring, operation, solenoid, database | 1224 |
|
|||
Funding: This work was supported by the U.S. Department of Energy. The architecture and implementation aspects of the control and monitoring system developed for Fermilab's new Solenoid Test Facility will be presented. At the heart of the system lies a highly configurable scan subsystem targeted at precise measurements of low temperatures with uniformly incorporated control elements. A multi-format archival system allows for the use of flat files, XML, and a relational database for storing data, and a Web-based application provides access to historical trends. The DAQ and computing platform includes COTS elements. The layered architecture separates the system into Windows operator stations, the real-time operating system-based DAQ and controls, and the FPGA-based time-critical and safety elements. The use of the EPICS CA protocol with LabVIEW opens the system to many available EPICS utilities. |
|||
Poster THPPC065 [2.059 MB] | ||
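The multi-format archival idea, writing each sample simultaneously to a flat file, an XML tree and a relational database, can be sketched in Python (the channel names and one-table schema are illustrative; the actual system is LabVIEW-based):

```python
import io
import sqlite3
import xml.etree.ElementTree as ET

class MultiFormatArchiver:
    """Toy fan-out archiver: every sample is written to a flat file,
    appended to an XML tree and inserted into a relational database."""
    def __init__(self):
        self.flat = io.StringIO()                    # stands in for a file
        self.xml_root = ET.Element("archive")
        self.db = sqlite3.connect(":memory:")
        self.db.execute("CREATE TABLE samples (channel TEXT, value REAL)")

    def archive(self, channel, value):
        self.flat.write(f"{channel}\t{value}\n")
        ET.SubElement(self.xml_root, "sample", channel=channel, value=str(value))
        self.db.execute("INSERT INTO samples VALUES (?, ?)", (channel, value))

arch = MultiFormatArchiver()
arch.archive("T_sensor_01", 4.21)   # a low-temperature reading (illustrative)
arch.archive("T_sensor_02", 4.35)
rows = arch.db.execute("SELECT COUNT(*) FROM samples").fetchone()[0]
```

Fanning out through one `archive` call keeps the three stores consistent, while the relational copy remains the natural backend for the Web-based trend display.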
THPPC066 | ACSys Camera Implementation Utilizing an Erlang Framework to C++ Interface | framework, software, interface, hardware | 1228 |
|
|||
Multiple cameras are integrated into the Accelerator Control System utilizing an Erlang framework. Message passing is implemented to provide access to C++ methods. The framework runs on a multi-core processor under Scientific Linux. The system provides full access to any 3 of approximately 20 cameras collecting frames at 5 Hz. JPEG images are provided in memory or as files for visual information; PNG files are provided in memory or as files for analysis. Histograms over the X and Y coordinates are filtered and analyzed. This implementation is described and the framework is evaluated. | |||
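The histograms over the X and Y coordinates are simply the column and row sums of a camera frame; a plain-Python sketch (the real system processes 5 Hz camera frames through the Erlang/C++ framework):

```python
def projection_histograms(frame):
    """Sum a 2-D intensity frame along rows and columns to obtain
    the X and Y beam-profile histograms."""
    ys = [sum(row) for row in frame]        # projection onto Y
    xs = [sum(col) for col in zip(*frame)]  # projection onto X
    return xs, ys

frame = [[0, 1, 0],
         [1, 4, 1],
         [0, 1, 0]]                          # a tiny synthetic beam spot
xs, ys = projection_histograms(frame)        # xs == ys == [1, 6, 1]
```

Filtering and fitting these projections then yields beam position and width without analyzing the full image.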
THPPC067 | New EPICS Drivers for Keck TCS Upgrade | EPICS, FPGA, interface, timing | 1231 |
|
|||
Keck Observatory is in the midst of a major telescope control system upgrade. This involves migrating from a VME-based EPICS control system, originally deployed on Motorola FRC40s with VxWorks 5.1 and EPICS R3.13.0Beta12, to distributed 64-bit x86 Linux servers running RHEL 2.6.33.x and EPICS R3.14.12.x. This upgrade brings a lot of new hardware to the project, which includes Ethernet/IP-connected PLCs, Ethernet-connected DeltaTau Brick controllers, National Instruments MXI RIO, Heidenhain encoders (and the Heidenhain Ethernet-connected Encoder Interface Box in particular), Symmetricom PCI-based BC635 timing and synchronization cards, and serial line extenders and protocols. Keck has chosen to implement all new drivers using the ASYN framework. This paper will describe the various drivers used in the upgrade, including those from the community and those developed by Keck, which include the BC635, MXI and Heidenhain EIB drivers. It will also discuss the use of the BC635 as a local NTP reference clock and a service for EPICS general time. | |||
THPPC076 | Re-Engineering Control Systems using Automatic Generation Tools and Process Simulation: the LHC Water Cooling Case | PLC, simulation, operation, interlocks | 1242 |
|
|||
This paper presents the approach used at CERN (European Organization for Nuclear Research) for the re-engineering of the control systems for the water cooling systems of the LHC (Large Hadron Collider). Due to a very short, and therefore restrictive, intervention time for these control systems, each PLC had to be completely commissioned in only two weeks. To achieve this challenge, automatic generation tools were used with the CERN control framework UNICOS (Unified Industrial Control System) to produce the PLC code. Moreover, process dynamic models using the simulation software EcosimPro were developed to carry out the ‘virtual’ commissioning of the new control systems for the most critical processes, thus minimizing the real commissioning time on site. The re-engineering concerns around 20 PLCs managing 11,000 inputs/outputs all around the LHC. These cooling systems are composed of cooling towers, chilled water production units and water distribution systems. | |||
Poster THPPC076 [4.046 MB] | ||
THPPC077 | A Fuzzy-Oriented Solution for Automatic Distribution of Limited Resources According to Priority Lists | simulation, cryogenics, superconducting-magnet, operation | 1246 |
|
|||
This paper proposes a solution for resource allocation when limited resources supply several clients in parallel. The lack of a suitable limitation mechanism in the supply system can lead to the depletion of the resources if the total demand exceeds the availability. To avoid this situation, an algorithm for priority handling which relies on the Fuzzy Systems Theory is used. The Fuzzy approach, as a problem-solving technique, is robust with respect to model and parameter uncertainties and is well adapted to systems whose mathematical formulation is difficult or impossible to obtain. The aim of the algorithm is to grant a fair allocation if the resource availability is sufficient for all the clients, or, in case of excess demand, to assure, on the basis of priority lists, enough resources to the high-priority clients so that the high-priority tasks can be completed. Besides the general algorithm, this paper describes the Fuzzy approach applied to a cryogenic test facility at CERN. Simulation tools are employed to validate the proposed algorithm and to characterize its performance. | |||
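The target behavior, full demand when resources suffice and priority-ordered allocation otherwise, can be illustrated with a deliberately crisp Python sketch; the paper's actual algorithm replaces this hard rule with fuzzy inference, so treat this only as a statement of the desired outcomes:

```python
def allocate(available, demands, priorities):
    """If the total demand fits the available resource, everyone gets
    what they asked for; otherwise clients are served in priority
    order (1 = highest) until the resource runs out."""
    if sum(demands.values()) <= available:
        return dict(demands)
    grant, left = {}, available
    for client in sorted(demands, key=lambda c: priorities[c]):
        grant[client] = min(demands[client], left)
        left -= grant[client]
    return grant

# Sufficient supply: fair allocation to everybody.
assert allocate(10.0, {"A": 4, "B": 5}, {"A": 1, "B": 2}) == {"A": 4, "B": 5}
# Shortage: the high-priority client A is fully served first.
g = allocate(6.0, {"A": 4, "B": 5}, {"A": 1, "B": 2})  # {"A": 4, "B": 2}
```

A fuzzy controller would smooth the hard cut-off above, trading allocations gradually between clients as demand approaches the limit.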
THPPC080 | Testing and Verification of PLC Code for Process Control | PLC, framework, software, factory | 1258 |
|
|||
Functional testing of PLC programs has historically been a challenging task for control systems engineers. This paper presents the analysis of different mechanisms for testing PLC programs developed within the UNICOS (Unified Industrial COntrol System) framework. The framework holds a library of objects, which are represented as Function Blocks in the PLC application. When a new object is added to the library, or a correction of an existing one is needed, exhaustive validation of the PLC code is required. Testing and formal verification are two distinct approaches selected for eliminating failures of UNICOS objects. Testing is usually done manually, or automatically by developing scripts at the supervision layer using the real control infrastructure. Formal verification proves the correctness of the system by checking whether a formal model of the system satisfies certain properties or requirements. The NuSMV model checker has been chosen to perform this task. The advantages and limitations of both approaches are presented and illustrated with a case study validating a specific UNICOS object. | |||
Poster THPPC080 [3.659 MB] | ||
THPPC081 | High-level Functions for Modern Control Systems: A Practical Example | experiment, framework, status, monitoring | 1262 |
|
|||
Modern control systems make wide use of different IT technologies and complex computational techniques to render the gathered data accessible from different locations and devices, as well as to understand and even predict the behavior of the systems under supervision. The Industrial Controls Engineering (ICE) Group of the EN Department develops and maintains more than 150 vital controls applications for a number of strategic sectors at CERN like the accelerator, the experiments and the central infrastructure systems. All these applications are supervised by MOON, a very successful central monitoring and configuration tool developed by the group that has been in operation 24/7 since 2011. The basic functionality of MOON was presented in previous editions of this conference series. In this contribution we focus on the high-level functionality recently added to the tool to grant multiple users access to the gathered data through the web and mobile devices, as well as a first attempt at data analytics, with the goal of identifying useful information to support developers during the optimization of their systems and to help in the daily operation of the systems. | |||
THPPC082 | Monitoring of the National Ignition Facility Integrated Computer Control System | database, experiment, framework, interface | 1266 |
|
|||
Funding: This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. #LLNL-ABS-632812 The Integrated Computer Control System (ICCS) used by the National Ignition Facility (NIF) provides comprehensive status and control capabilities for operating approximately 100,000 devices through 2,600 processes located on 1,800 servers, front-end processors and embedded controllers. Understanding the behavior of complex, large-scale operational control software, and improving system reliability and availability, is a critical maintenance activity. In this paper we describe the ICCS diagnostic framework, with tunable detail levels and automatic rollovers, and its use in analyzing system behavior. ICCS recently added Splunk as a tool for improved archiving and analysis of these log files (about 20 GB, or 35 million logs, per day). Splunk now continuously captures all ICCS log files for both real-time examination and exploration of trends. Its powerful search query language and user interface allow interactive exploration of log data to visualize specific indicators of system performance, assist in problem analysis, and provide instantaneous notification of specific system behaviors. |
|||
Poster THPPC082 [4.693 MB] | ||
THPPC085 | Image Analysis for the Automated Alignment of the Advanced Radiography Capability (ARC) Diagnostic Path* | diagnostics, alignment, hardware, software | 1274 |
|
|||
Funding: *This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. #LLNL-ABS-631616 The Advanced Radiographic Capability (ARC) at the National Ignition Facility was developed to produce a sequence of short laser pulses that are used to backlight an imploding fuel capsule. This backlighting capability will enable the creation of a sequence of radiographs during capsule implosion and provide an unprecedented view into the dynamics of the implosion. A critical element of the ARC is the diagnostic instrumentation used to assess the quality of the pulses. Pulses are steered to the diagnostic package through a complex optical path that requires precision alignment. A central component of the alignment system is the image analysis algorithms, which are used to extract information from alignment imagery and provide feedback for the optical alignment control loops. Alignment imagery consists of complex patterns of light resulting from the diffraction of pilot beams around cross-hairs and other fiducials placed in the optical path. This paper describes the alignment imagery, and the image analysis algorithms used to extract the information needed for proper operation of the ARC automated alignment loops. |
|||
Poster THPPC085 [3.236 MB] | ||
THPPC086 | Analyzing Off-normals in Large Distributed Control Systems using Deep Packet Inspection and Data Mining Techniques | network, toolkit, operation, distributed | 1278 |
|
|||
Funding: This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. #LLNL-ABS-632814 Network packet inspection using port mirroring provides the ultimate tool for understanding complex behaviors in large distributed control systems. The timestamped captures of network packets embody the full spectrum of protocol layers and uncover intricate and surprising interactions. No other tool is capable of penetrating through the layers of software and hardware abstractions to allow the researcher to analyze an integrated system composed of various operating systems, closed-source embedded controllers, software libraries and middleware. Being completely passive, the packet inspection does not modify the timings or behaviors. The completeness and fine resolution of the network captures present an analysis challenge, due to huge data volumes and difficulty of determining what constitutes the signal and noise in each situation. We discuss the development of a deep packet inspection toolchain and application of the R language for data mining and visualization. We present case studies demonstrating off-normal analysis in a distributed real-time control system. In each case, the toolkit pinpointed the problem root cause which had escaped traditional software debugging techniques. |
|||
Poster THPPC086 [2.353 MB] | ||
THPPC089 | High Repetition Rate Laser Beamline Control System | laser, timing, EPICS, network | 1281 |
|
|||
Funding: The authors acknowledge the support of the following grants of the Czech Ministry of Education, Youth and Sports "CZ.1.05/1.1.00/02.0061" and "CZ.1.07/2.3.00/20.0091". ELI-Beamlines will be a high-energy, high repetition-rate laser pillar of the ELI (Extreme Light Infrastructure) project. It will be an international user facility for both academic and applied research, scheduled to provide user capability from the beginning of 2017. As part of the development of L1 laser beamline we are developing a prototype control system. The beamline repetition rate of 1kHz with its femtosecond pulse accuracy puts demanding requirements on both control and synchronization systems. A low-jitter high-precision commercial timing system will be deployed to accompany both EPICS- and LabVIEW-based control system nodes, many of which will be enhanced for real-time responsiveness. Data acquisition will be supported by an in-house time-stamping mechanism relying on sub-millisecond system responses. The synergy of LabVIEW Real-Time and EPICS within particular nodes should be secured by advanced techniques to achieve both fast responsiveness and high data-throughput. *tomas.mazanec@eli-beams.eu |
|||
Poster THPPC089 [1.286 MB] | ||
THPPC090 | Picoseconds Timing System | timing, laser, experiment, diagnostics | 1285 |
|
|||
The instrumentation of large physics experiments needs to be synchronized down to a few picoseconds. These experiments require different sampling rates for multi-shot or single-shot operation on each instrument distributed over a large area. Greenfield Technology presents a commercial solution with a picosecond timing system built around a central master oscillator which delivers a serial data stream over an optical network to synchronize local multi-channel delay generators. This system is able to provide several hundred trigger pulses with 1 ps resolution and a jitter of less than 15 ps, distributed over an area of up to 10,000 m². The various qualities of this picosecond timing system are presented with measurements and functions; it has already been implemented in French facilities (the Laser MegaJoule prototype, Ligne d’Intégration Laser, petawatt laser applications and the synchrotron Soleil). This system, with different local delay generator form factors (box, 19” rack, cPCI or PXI board) and many possibilities of trigger pulse shape, is an ideal solution to synchronize synchrotrons, high-energy lasers or other big physics experiments. | |||
Poster THPPC090 [1.824 MB] | ||
THPPC092 | FAIR Timing System Developments Based on White Rabbit | timing, network, FPGA, interface | 1288 |
|
|||
A new timing system based on White Rabbit (WR) is being developed for the upcoming FAIR facility at GSI, in collaboration with CERN, other institutes and industry partners. The timing system is responsible for the synchronization of nodes with nanosecond accuracy and the distribution of timing messages, which allows for real-time control of the accelerator equipment. WR is a fully deterministic Ethernet-based network for general data transfer and synchronization, which is based on Synchronous Ethernet and PTP. The ongoing development at GSI aims for a miniature timing system which is part of the control system of a proton source that will be used at one of the accelerators at FAIR. Such a timing system consists of a Data Master generating timing messages, which are forwarded by a WR switch to a handful of timing receivers. The next step is an enhancement of the robustness, reliability and scalability of the system. These features will be integrated in the forthcoming CRYRING control system at GSI. CRYRING serves as a prototype and testing ground for the final control system for FAIR. The contribution presents the overall design and status of the timing system development. | |||
Poster THPPC092 [0.549 MB] | ||
THPPC095 | A Proof-of-Principle Study of a Synchronous Movement of an Undulator Array Using an EtherCAT Fieldbus at European XFEL | undulator, electron, photon, software | 1292 |
|
|||
The European XFEL project is a 4th generation X-ray light source. The undulator systems SASE 1, SASE 2 and SASE 3 are used to produce photon beams. Each undulator system consists of an array of undulator cells installed in a row along the electron beam. The motion control of an undulator system is carried out by means of industrial components using an EtherCAT fieldbus. One of its features is motion synchronization for undulator cells which belong to the same system. This paper describes the technical design and software implementation of the undulator system control providing that feature. It presents the results of an ongoing proof-of-principle study of the synchronous movement of four undulator cells, as well as a study of movement synchronization between an undulator and a phase shifter. | |||
Poster THPPC095 [3.131 MB] | ||
THPPC105 | The LHC Injection Sequencer | injection, kicker, operation, database | 1307 |
|
|||
The LHC is the largest accelerator at CERN. The two beams of the LHC collide in four experiments; each beam can be composed of up to 2808 high-intensity bunches. The beam is produced at the linac, then shaped and accelerated in the LHC injectors to 450 GeV. The injected beam contains up to 288 high-intensity bunches, corresponding to a stored energy of 2 MJ. To build for each LHC ring the complete bunch scheme that ensures the desired number of collisions in each experiment, several injections from the SPS into the LHC are needed. The type of beam needed and the longitudinal placement of each injection have to be defined with care. This process is controlled by the injection sequencer, which orchestrates the beam requests. Predefined filling schemes stored in a database indicate the number of injections, the type of beam and the longitudinal place of each one. The injection sequencer sends the corresponding beam requests to the CBCM, the central timing manager, which in turn synchronizes the beam production in the injectors. This paper describes how the injection sequencer is implemented and its interaction with the other systems involved in the injection process. | |||
Poster THPPC105 [0.606 MB] | ||
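One of the sequencer's concerns, that successive injections must not overlap longitudinally, can be illustrated with a toy check (the ring names, bucket numbers and batch span are invented for the example; real filling schemes are read from the LHC database):

```python
def check_no_overlap(injections, batch_span):
    """Return True if, within each ring, successive injected batches
    are at least `batch_span` buckets apart."""
    by_ring = {}
    for ring, bucket in injections:
        by_ring.setdefault(ring, []).append(bucket)
    for buckets in by_ring.values():
        buckets.sort()
        for a, b in zip(buckets, buckets[1:]):
            if b - a < batch_span:
                return False
    return True

# Two batches placed far enough apart are accepted:
ok = check_no_overlap([("ring1", 1), ("ring1", 895)], batch_span=300)
# A second batch too close to the first is rejected:
clash = check_no_overlap([("ring1", 1), ("ring1", 200)], batch_span=300)
```

Validating a filling scheme this way before requesting beam keeps a bad longitudinal placement from ever reaching the injectors.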
THPPC107 | Timing and Synchronization at Beam Line Experiments | hardware, experiment, timing, EPICS | 1311 |
|
|||
Some experiment concepts require a control system in which the individual components work synchronously. At PSI the control system for X-ray experiments is distributed over several VME crates, several EPICS soft IOC servers and Linux nodes, which need to be synchronized. A timing network using fibre optics, separate from the standard TCP/IP network, is used for distributing time stamps and timing events. The synchronization of all control components and data acquisition systems has to be done automatically with sufficient accuracy, and is achieved by event distribution and/or by synchronization through I/O trigger devices. Data acquisition is synchronized by hardware triggers produced either by sequences in the event generator or by motors in the case of on-the-fly scans. Detectors like EIGER, with an acquisition rate close to 20 kHz, fast BPMs connected to current-measuring devices like picoammeters with sampling frequencies up to 26 kHz, and photodiodes are integrated to measure beam properties and radiation exposures. The measured data are stored on various file servers situated within one beamline subnetwork. In this paper we describe a concept for implementing such a system. | |||
THPPC109 | Status of the TPS Timing System | timing, injection, booster, EPICS | 1314 |
|
|||
Implementation of the timing system of the Taiwan Photon Source (TPS) is underway. The timing system provides synchronization for the electron gun, the linac modulators, the pulsed magnet power supplies, the booster power supply ramp trigger, bucket addressing of the storage ring, diagnostic equipment, the beamline gating signal for top-up injection, and the time-resolved experiments. The system is based on an event distribution system that broadcasts timing events over an optical fiber network and decodes and processes them in the timing event receivers. The system supports uplink functionality, which will be used by the fast interlock system to distribute signals like beam dump and post-mortem triggers with less than 5 μs response time. Software support is in progress. A time sequencer to support various injection modes is in development. Timing solutions for the TPS project are summarized in this paper. | |||
Poster THPPC109 [1.612 MB] | ||
THPPC112 | The LANSCE Timing Reference Generator | timing, neutron, EPICS, interface | 1321 |
|
|||
The Los Alamos Neutron Science Center is an 800 MeV linear proton accelerator at Los Alamos National Laboratory. For optimum performance, power modulators must be tightly coupled to the phase of the power grid. Downstream at the neutron scattering center there is a competing requirement that rotating choppers follow the changing phase of neutron production in order to remove unwanted energy components from the beam. While their powerful motors are actively accelerated and decelerated to track accelerator timing, they cannot track instantaneous grid phase changes. A new timing reference generator has been designed to couple the accelerator to the power grid through a phase locked loop. This allows some slip between the phase of the grid and the accelerator so that the modulators stay within their timing margins, but the demands on the choppers are relaxed. This new timing reference generator is implemented in 64 bit floating point math in an FPGA. Operators in the control room have real-time network control over the AC zero crossing offset, maximum allowed drift, and slew rate - the parameter that determines how tightly the phase of the accelerator is coupled to the power grid.
LA-UR-13-21289 |
|||
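The slew-rate-limited phase tracking at the heart of the timing reference generator can be modelled in a few lines (a toy discrete-time model of the described behavior, not the 64-bit floating-point FPGA implementation; units and the slew limit are arbitrary):

```python
def track_grid_phase(grid_phases, max_slew):
    """Slew-rate-limited tracking: the timing reference follows the
    grid zero-crossing phase, but each update is clamped to
    `max_slew`, so the choppers never see an instantaneous phase jump
    while the modulators stay within their timing margins."""
    ref = grid_phases[0]
    out = [ref]
    for target in grid_phases[1:]:
        err = target - ref
        ref += max(-max_slew, min(max_slew, err))
        out.append(ref)
    return out

# A sudden 1.0 ms grid phase step is followed gradually:
trace = track_grid_phase([0.0, 1.0, 1.0, 1.0, 1.0, 1.0], max_slew=0.25)
# trace == [0.0, 0.25, 0.5, 0.75, 1.0, 1.0]
```

The slew-rate parameter plays the same role as the operator-controlled setting described above: it determines how tightly the accelerator phase is coupled to the power grid.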
THPPC113 | Integrated Timing System for the EBIS Pre-Injector | timing, booster, ion, operation | 1325 |
|
|||
Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The Electron Beam Ion Source (EBIS) began operating as a pre-injector in the C-AD RHIC accelerator complex in 2010. Historically, C-AD RHIC pre-injectors, like the 200MeV Linac, have had largely independent timing systems that receive a minimal number of triggers from the central C-AD timing system to synchronize the injection process. The EBIS timing system is much more closely integrated into central C-AD timing, with all EBIS machine cycles included in the master supercycle that coordinates the interoperation of C-AD accelerators. The integrated timing approach allows better coordination of pre-injector activities with other activities in the C-AD complex. Independent pre-injector operation, however, must also be supported by the EBIS timing system. This paper describes the design of the EBIS timing system and evaluates experience in operational management of EBIS timing. |
|||
Poster THPPC113 [21.388 MB] | ||
THPPC116 | Temperature Precise Control in a Large Scale Helium Refrigerator | cryogenics, operation, experiment, simulation | 1331 |
|
|||
Precise control of the operating load temperature is a key requirement in the application of a large-scale helium refrigerator. Strict control logic and timing sequences are necessary for the main components of the process, including the load, turbine expanders and compressors. The control sequence may, however, become disordered due to improper PID parameter settings and logic equations, causing temperature oscillations, load augmentation, compressor protection trips or cryogenic valve failures. Combining experimental studies with simulation models, the effect of PID parameter adjustment on the control process is presented in detail. The methods and rules for general parameter settings are revealed and suitable control logic equations are derived for temperature stabilization. | |||
Poster THPPC116 [0.584 MB] | ||
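The failure modes described above, oscillation from poor PID settings and actuator trips, are the classic motivation for output clamping and integrator anti-windup in temperature loops. A generic discrete-time sketch, not the authors' controller; all names and values are illustrative:

```python
class PID:
    """Discrete PID controller with output clamping and integrator
    anti-windup, the usual starting point for temperature loops.
    Illustrative only; not the refrigerator controller of the paper."""

    def __init__(self, kp, ki, kd, dt, out_min=0.0, out_max=100.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        d = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        out = self.kp * error + self.ki * self.integral + self.kd * d
        # Anti-windup: only accumulate the integral while the actuator
        # (e.g. a cryogenic valve) is not saturated.
        if self.out_min < out < self.out_max:
            self.integral += error * self.dt
        # Clamp the command to the physical actuator range.
        return min(self.out_max, max(self.out_min, out))
```

With conservative gains such a loop settles without the oscillations that aggressive, unclamped settings produce.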
THPPC117 | A Control Strategy for Highly Regulated Magnet Power Supplies Using a LQR Approach | power-supply, damping, simulation, proton | 1334 |
|
|||
A linear quadratic regulator (LQR) based proportional-integral-derivative (PID) controller is proposed for the SMPS-based magnet power supply of the high-current proton injector operational at VECC. The state weighting matrix ‘Q’ of the LQR-based controller is derived analytically using a guaranteed dominant pole placement approach with the desired ‘ζ’ (maximum overshoot) and ‘ω’ (rise time). The uniqueness of this scheme is that the controller gives the desired closed-loop response with minimum control effort, avoiding actuator saturation by combining the optimal behavior of the LQR technique with the simplicity of the conventional PID controller. Controller and power supply parameter perturbations are studied along with load disturbances to verify the robustness of the proposed control mechanism. | |||
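The dominant-pole-placement idea, choosing gains so the closed loop has damping ratio ζ (which fixes the maximum overshoot) and natural frequency ω (which fixes the rise time), can be illustrated for a first-order plant with a PI controller. The paper itself derives full PID gains from the LQR weighting matrix Q, so this is only a simplified sketch:

```python
import math

def pi_gains_pole_placement(K, tau, zeta, omega):
    """PI gains placing the closed-loop poles of a first-order plant
    G(s) = K / (tau*s + 1) at s = -zeta*omega +/- j*omega*sqrt(1-zeta^2).

    Matching tau*s^2 + (1 + K*kp)*s + K*ki against
    tau*(s^2 + 2*zeta*omega*s + omega^2) gives the gains below.
    Illustrative simplification, not the paper's LQR-derived PID.
    """
    kp = (2.0 * zeta * omega * tau - 1.0) / K
    ki = (omega ** 2) * tau / K
    return kp, ki

def overshoot_percent(zeta):
    """Classical second-order step overshoot implied by damping zeta."""
    return 100.0 * math.exp(-math.pi * zeta / math.sqrt(1.0 - zeta ** 2))
```

For example, ζ = 0.7 corresponds to roughly 5% overshoot, so specifying ζ and ω fixes the transient shape while the gain formulas guarantee those closed-loop poles.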
THPPC119 | Software Architecture for the LHC Beam-based Feedback System at CERN | feedback, optics, network, timing | 1337 |
|
|||
This paper presents an overview of the beam-based feedback systems at the LHC at CERN. It covers the system architecture, which is split into two main parts: a controller (OFC) and a service unit (OFSU). The paper presents issues encountered during beam commissioning and lessons learned, including follow-up from a recent review which took place at CERN. | |||
Poster THPPC119 [1.474 MB] | ||
THPPC120 | A Simplified Model of the International Linear Collider Final Focus System | detector, quadrupole, feedback, resonance | 1341 |
|
|||
Mechanical vibrations are the main source of luminosity loss at the final focus system of the future linear colliders, where the nanometric beams are required to be extremely stable. Precise models are needed to validate the adopted supporting scheme. Where the beam structure allows it, as for the International Linear Collider (ILC), intra-train luminosity feedback schemes are possible. Where this is not possible, as for the Compact Linear Collider (CLIC), active stabilization of the doublets is required. Further complications arise from the optics requirements, which place the final doublet very close to the IP (~4 m). We present a model of the SiD detector, where the QD0 doublet is captured inside the detector and the QF1 magnet sits in the tunnel. Ground motion measurements taken at the SLD detector at SLAC have been used together with a model of the technical noise. The model predicts that the rms vibration of QD0 is below the capture range of the IP feedback system available in the ILC. With the addition of an active stabilization system on QD0, it is also possible to achieve the stability requirements of CLIC. These results can have important implications for CLIC. | |||
THPPC121 | Feedbacks and Automation at the Free Electron Laser in Hamburg (FLASH) | feedback, operation, electron, laser | 1345 |
|
|||
For many years a set of historically grown Matlab scripts and tools has been used to stabilize transverse and longitudinal properties of the electron bunches at FLASH. Though this Matlab-based approach comes in handy when commissioning or developing tools for certain operational procedures, it turns out to be quite tedious to maintain in the long run, as it often lacks stability and performance, e.g. in feedback procedures. To overcome these shortcomings, a server-based C++ solution in the DOOCS* framework has been realized at FLASH. Using the graphical UI designer jddd**, a generic version of the longitudinal feedback has been implemented and quickly put into standard operation. The design uses sets of monitors and actuators plus their couplings, which can easily be adapted to operational requirements. The daily routine operation of this server-based feedback implementation has proven to offer a robust, well maintainable and flexible solution to the common problem of automation and control for machines as complex as FLASH, and it will be well suited for the European XFEL.
* see e.g. http://doocs.desy.de ** see e.g. http://jddd.desy.de |
|||
Poster THPPC121 [9.473 MB] | ||
THPPC122 | High Performance and Low Latency Single Cavity RF Control Based on MTCA.4 | LLRF, feedback, cavity, hardware | 1348 |
|
|||
The European XFEL project at DESY requires very precise RF control to fulfill the objectives of high-performance FEL generation. Within the MTCA.4-based hardware framework, an LLRF system has been designed to control multi-cavity applications, which require large processing capabilities. A generic software structure allows the same design to be applied to single-cavity applications as well, reducing maintenance effort. It has been demonstrated that the MTCA.4-based LLRF controller achieves the XFEL requirements in terms of amplitude and phase control. Since the complexity of the signal processing chain is not essential for single-cavity regulation, an alternative framework has been developed to minimize processing latency, which is especially important for high-bandwidth applications. This setup is based on a fast processing advanced mezzanine card (AMC) combined with a down-converter and vector-modulator rear transition module (RTM). Within this paper the system layout and first measurement results are presented, demonstrating capabilities beyond LLRF-specific applications. | |||
THPPC123 | Online Luminosity Optimization at the LHC | luminosity, experiment, target, proton | 1351 |
|
|||
The online luminosity control of the LHC experiments consists of an automatic slow real-time feedback system controlled by experiment-specific software that communicates directly with an LHC application. The LHC application drives a set of corrector magnets to adjust the transverse beam overlap at the interaction point in order to keep the instantaneous luminosity aligned to the target luminosity provided by the experiment. This solution was proposed by the LHCb experiment and first tested in July 2010. It has been in routine operation in LHCb during the first two years of physics luminosity data taking, 2011 and 2012. It was also adopted for the ALICE experiment during 2011. The experience provides an important basis for the potential future need of levelling the luminosity in all the LHC experiments. This paper describes the implementation of the LHC application controlling the luminosity at the experiments and the information exchange that allows this automatic control. | |||
Poster THPPC123 [1.344 MB] | ||
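The physics behind such luminosity levelling is a simple overlap argument: for round Gaussian beams separated transversely by d at the IP, the luminosity falls as exp(-d²/4σ²), so a slow feedback on the separation can hold L at the target. A schematic model, not the actual LHC corrector-magnet application; names and gains are our own:

```python
import math

def level_luminosity(L_peak, sigma, L_target, steps=200, gain=0.5):
    """Iteratively adjust a transverse beam separation d so that the
    instantaneous luminosity L = L_peak * exp(-d**2 / (4*sigma**2))
    settles on L_target (round Gaussian beams; schematic feedback,
    not the real LHC application)."""
    d = 0.0
    for _ in range(steps):
        L = L_peak * math.exp(-d ** 2 / (4.0 * sigma ** 2))
        # Above target: separate the beams a little more.
        # Below target: bring them back together.
        d += gain * sigma * (L - L_target) / L_target
        d = max(0.0, d)  # separation magnitude cannot be negative
    return L_peak * math.exp(-d ** 2 / (4.0 * sigma ** 2))
```

Starting from head-on collisions, the separation grows step by step until the delivered luminosity sits on the requested target.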
THPPC125 | Evaluation and Implementation of Advanced Process Control with the compactRIO Material of National Instrument | FPGA, LabView, feedback, real-time | 1355 |
|
|||
Programmable Logic Controllers (PLCs) are commonly used in many industries and research applications for process control. However, a very complex process control may require algorithms and performance beyond the capability of PLCs, and very high-speed or high-precision controls may also require other solutions. This paper describes recent research conducted to implement advanced process controls with cRIO hardware from National Instruments (decoupling of MIMO process control, steady-state feedback, observers, Kalman filters, etc.). The cRIO systems consist of an embedded real-time controller for communication and processing, a reconfigurable Field Programmable Gate Array (FPGA) and hot-swappable I/O modules. The paper presents experimental results and the ability of the cRIO to handle complex process control. | |||
Poster THPPC125 [1.004 MB] | ||
THPPC129 | Evolution of the FERMI Beam Based Feedbacks | feedback, laser, FEL, electron | 1362 |
|
|||
Funding: This work was supported in part by the Italian Ministry of University and Research under grants FIRB-RBAP045JF2 and FIRB-RBAP06AWK3 FERMI@Elettra is the first seeded Free Electron Laser (FEL) user facility. A number of shot-to-shot feedback loops running synchronously at the machine repetition rate stabilize the electron beam trajectory, energy and bunch length, as well as the trajectory of the laser beams used for the seeding and pump-probe experiments. They are based on a flexible real-time distributed framework integrated into the control system. The interdependence between feedback loops and the need to react in a coordinated way to different operating conditions led to the development of a real-time supervisor capable of controlling each loop depending on critical machine parameters not directly involved in the feedbacks. The overall system architecture, performance and user interfaces are presented. |
|||
Poster THPPC129 [1.381 MB] | ||
THPPC135 | From Pulse to Continuous Wave Operation of TESLA Cryomodules – LLRF System Software Modification and Development | operation, feedback, LLRF, cavity | 1366 |
|
|||
Funding: We acknowledge the support from National Science Center (Poland) grant no 5593/B/T02/2010/39 Higher efficiency of TESLA-based free electron lasers (FLASH, XFEL), by means of an increased number of photon bursts, can be achieved using continuous wave (CW) operation. In order to maintain constant beam acceleration in the superconducting cavities and keep the cost of the transition from short-pulse to CW operation reasonably low, some substantial modifications of accelerator subsystems are necessary. Changes to the RF power source, cryogenic systems, electron beam source, etc. also have to be accompanied by adjustments in the LLRF system. In this paper the challenges for the well-established pulsed-mode LLRF system are discussed (for the CW and LP scenarios). Firmware and software modifications needed to maintain high-performance regulation of the cavity field parameters (for 1 Hz CW and LP cryomodule operation) are described. Results from studies of vector-sum amplitude and phase control for high loaded-Q resonator settings (Ql~1.5e7) are shown. The proposed modifications, implemented in the VME and MicroTCA (MTCA.4) based LLRF systems, have been tested during studies at the CryoModule Test Bench (CMTB) at DESY. Results from these tests, together with the achieved regulation performance data, are also presented and discussed. |
|||
Poster THPPC135 [1.310 MB] | ||
THPPC136 | Stabilizing the Beam Current Split Ratio in TRIUMF's 500 MeV Cyclotron with High Level, Closed-Loop Feedback Software | feedback, software, cyclotron, TRIUMF | 1370 |
|
|||
In the pursuit of progressively more stable beam currents at TRIUMF's 500 MeV cyclotron, it was proposed to regulate the beam current split ratio for the two primary beamlines with closed-loop feedback. Initial runs have shown promising results and have justified further efforts in that direction. This paper describes the software providing the closed-loop feedback and outlines future developments. | |||
Poster THPPC136 [4.309 MB] | ||
THPPC138 | A System for Automatic Locking of Resonators of Linac at IUAC | linac, interface, operation, feedback | 1376 |
|
|||
The superconducting LINAC booster of IUAC consists of five cryostats housing a total of 27 Nb quarter wave resonators (QWRs). The QWRs are phase locked against the master oscillator at a frequency of 97 MHz. Cavity frequency tuning is done by a helium-gas based slow tuner. Presently, the frequency tuning and cavity phase locking are done from the control room consoles. To automate LINAC operation, an automatic phase locking system has been implemented. The slow tuner gas pressure is automatically controlled in response to the frequency error of the cavity. The fast tuner is automatically triggered into phase lock when the frequency is within the lock window. This system has been implemented successfully on a few cavities and is now being installed for the remaining cavities of the LINAC booster.
[1] S. Ghosh et al., Phys. Rev. ST Accel. Beams 12, 040101 (2009). |
|||
Poster THPPC138 [4.654 MB] | ||
THCOBB04 | Overview of the ELSA Accelerator Control System | database, interface, hardware, Linux | 1396 |
|
|||
The Electron Stretcher Facility ELSA provides a beam of polarized electrons with a maximum energy of 3.2 GeV for hadron physics experiments. The in-house developed control system has been improved continuously during the last 15 years of operation. Its top layer consists of a distributed shared memory database and several core applications which run on a Linux host. Connectivity to hardware devices is provided by a second layer of the control system operating on PCs and VME systems. High-level applications are integrated into the control system using C and C++ libraries. An event-based messaging system notifies attached applications about parameter updates in near real-time. The overall system structure and specific implementation details of the control system will be presented. | |||
Slides THCOBB04 [0.527 MB] | ||
THCOBB05 | Switching Solution – Upgrading a Running System | software, EPICS, hardware, interface | 1400 |
|
|||
At Keck Observatory, we are upgrading our existing operational telescope control system and must do it with as little operational impact as possible. This paper describes our current integrated system and how we plan to create a more distributed system and deploy it subsystem by subsystem. This will be done by systematically extracting the existing subsystem then replacing it with the new upgraded distributed subsystem maintaining backwards compatibility as much as possible to ensure a seamless transition. We will also describe a combination of cabling solutions, design choices and a hardware switching solution we’ve designed to allow us to seamlessly switch signals back and forth between the current and new systems. | |||
Slides THCOBB05 [1.482 MB] | ||
THCOBB06 | CLIC-ACM: Acquisition and Control System | radiation, timing, network, survey | 1404 |
|
|||
CLIC (Compact Linear Collider) is a world-wide collaboration to study the next “terascale” lepton collider, relying upon a very innovative concept of two-beam acceleration. In this scheme, the power is transported to the main accelerating structures by a primary electron beam. The Two Beam Module (TBM) is a compact integration, with a high filling factor, of all components: RF, magnets, instrumentation, vacuum, alignment and stabilization. This paper describes the very challenging aspects of designing a compact system to serve as a dedicated Acquisition & Control Module (ACM) for all signals of the TBM. Very delicate conditions must be considered, in particular radiation doses that could reach several kGy in the tunnel. Under such severe conditions, shielding and hardened electronics have to be taken into consideration. In addition, with more than 300 channels per ACM and about 21000 ACMs in total, it is clear that power consumption will be an important issue. It is also evident that digitization of the acquired signals will take place at the lowest possible hardware level and that neither a local processor nor an operating system shall be used inside the ACM. | |||
Slides THCOBB06 [0.846 MB] | ||
Poster THCOBB06 [0.747 MB] | ||
THCOBA02 | Unidirectional Security Gateways: Stronger than Firewalls | network, hardware, experiment, software | 1412 |
|
|||
In the last half decade, application integration via Unidirectional Security Gateways has emerged as a secure alternative to firewalls. The gateways are deployed extensively to protect the safety and reliability of industrial control systems in nuclear generators, conventional generators and a wide variety of other critical infrastructures. Unidirectional Gateways are a combination of hardware and software. The hardware allows information to leave a protected industrial network, and physically prevents any signal whatsoever from returning to the protected network. The result is that the hardware blocks all online attacks originating on external networks. The software replicates industrial servers to external networks, where the information in those servers is available to end users and to external applications. The software does not proxy bi-directional protocols. Join us to learn how this secure alternative to firewalls works, where and how the technology is deployed routinely, and how all of the usual remote support, data integrity and other apparently bi-directional deployment issues are routinely resolved. | |||
Slides THCOBA02 [0.721 MB] | ||
THCOBA03 | DIAMON2 – Improved Monitoring of CERN’s Accelerator Controls Infrastructure | monitoring, GUI, data-acquisition, framework | 1415 |
|
|||
Monitoring of heterogeneous systems in large organizations like CERN is always challenging. CERN's accelerator infrastructure includes a large number of devices (servers, consoles, FECs, PLCs), some still running legacy software like LynxOS 4 or Red Hat Enterprise Linux 4 on older hardware with very limited resources. DIAMON2 is based on the CERN Common Monitoring platform. Using Java industry standards, notably Spring, Ehcache and the Java Message Service, together with a small-footprint C++ based monitoring agent for real-time systems and a wide variety of additional data acquisition components (SNMP, JMS, JMX etc.), DIAMON2 targets CERN's environment, providing an easily extensible, dynamically reconfigurable, reliable and scalable monitoring solution. This article explains the evolution of the CERN diagnostics and monitoring environment leading up to DIAMON2, describes the overall system architecture, the main components and their functionality, as well as the first operational experiences with the new system, observed under the very demanding infrastructure of CERN's accelerator complex. | |||
Slides THCOBA03 [1.209 MB] | ||
THCOBA05 | Control System Virtualization for the LHCb Online System | network, experiment, operation, hardware | 1419 |
|
|||
Virtualization provides many benefits, such as more efficient resource utilization, lower power consumption, better management through centralized control and higher availability. It can also save time for IT projects by eliminating dedicated hardware procurement and providing standard software configurations. In view of this, virtualization is very attractive for mission-critical projects like the experiment control system (ECS) of the large LHCb experiment at CERN. This paper describes our implementation of the control system infrastructure on general purpose server hardware based on Linux and the RHEV enterprise clustering platform. The paper describes the methods used, our experiences and the knowledge acquired in evaluating the performance of the setup using test systems, as well as the constraints and limitations we encountered. We compare these with parameters measured under typical load conditions in a real production system. We also present the specific measures taken to guarantee optimal performance for the SCADA system (WinCC OA), which is the backbone of our control system. | |||
Slides THCOBA05 [1.065 MB] | ||
THCOBA06 | Virtualization and Deployment Management for the KAT-7 / MeerKAT Control and Monitoring System | software, hardware, database, network | 1422 |
|
|||
Funding: National Research Foundation (NRF) of South Africa To facilitate efficient deployment and management of the control and monitoring software of the South African 7-dish Karoo Array Telescope (KAT-7) and the forthcoming Square Kilometre Array (SKA) precursor, the 64-dish MeerKAT telescope, server virtualization and automated deployment from a host configuration database are used. The advantages of virtualization are well known; by adding automated deployment from a configuration database, additional advantages accrue: server configuration becomes deterministic, development and deployment environments match more closely, system configuration can easily be version controlled, and systems can easily be rebuilt when hardware fails. We chose the Debian GNU/Linux based Proxmox VE hypervisor using the OpenVZ single-kernel container virtualization method, along with Fabric (a Python ssh automation library) based deployment automation and a custom configuration database. This paper presents the rationale behind these choices, our current implementation and our experience with it, and a performance evaluation of OpenVZ and KVM. Tests include a comparison of application-specific networking performance over 10GbE using several network configurations. |
|||
Slides THCOBA06 [5.044 MB] | ||
THCOCB04 | Using an Expert System for Accelerators Tuning and Automation of Operating Failure Checks | TANGO, database, monitoring, operation | 1434 |
|
|||
Today at SOLEIL, abnormal operating conditions consume considerable human effort in numerous manual checks across different tools interacting with different service layers of the control system (archiving system, device drivers, etc.) before normal accelerator operation can be recovered. These manual checks are also systematically redone before each beam shutdown and restart. All these repetitive tasks are very error prone and significantly impair the assessment of beam delivery to users. Due to the increased process complexity and the multiple unpredictable factors of instability in the accelerator operating conditions, the existing diagnostic tools and manual check procedures have reached their limits in providing practical, reliable assistance to both operators and accelerator physicists. The aim of this paper is to show how the advanced expert system layer of the PASSERELLE* framework, using the CDMA API** to access all the underlying data sources provided by the control system in a uniform way, can be used to assist the operators in detecting and diagnosing abnormal conditions, thus providing safeguards against unexpected accelerator operating conditions.
*http://www.isencia.be/services/passerelle **https://code.google.com/p/cdma/ |
|||
Slides THCOCB04 [1.636 MB] | ||
THCOCB05 | The LHCb Online Luminosity Monitoring and Control | luminosity, detector, experiment, target | 1438 |
|
|||
The LHCb experiment searches for New Physics through precision measurements in heavy flavour physics. The optimization of the data taking conditions relies on accurate monitoring of the instantaneous luminosity, and many physics measurements rely on accurate knowledge of the integrated luminosity. Most of the measurements have potential systematic effects associated with pileup and changing running conditions. To cope with these while aiming at maximising the collected luminosity, control of the LHCb luminosity was put into operation. It consists of an automatic real-time feedback system controlled from the LHCb online system, which communicates directly with an LHC application that in turn adjusts the beam overlap at the interaction point. It was proposed and tested in July 2010 and has been in routine operation during 2011-2012. As a result, LHCb has been operating at well over four times the design pileup, and 95% of the integrated luminosity has been recorded within 3% of the desired luminosity. This paper motivates and describes the implementation of and the experience with the online luminosity monitoring and control, including the mechanisms to perform the luminosity calibrations. | |||
Slides THCOCB05 [1.368 MB] | ||
THCOCA01 | A Design of Sub-Nanosecond Timing and Data Acquisition Endpoint for LHAASO Project | timing, network, interface, electronics | 1442 |
|
|||
Funding: National Science Foundation of China (No. 11005065 and 11275111) The particle detector array (KM2A) of the Large High Altitude Air Shower Observatory (LHAASO) project consists of 5631 electron and 1221 muon detection units spread over a 1.2 km² area. To reconstruct the incident angle of cosmic rays, sub-nanosecond time synchronization must be achieved. The White Rabbit (WR) protocol is applied for its high synchronization precision, automatic delay compensation and intrinsic high-bandwidth data transmission capability. This paper describes the design of a sub-nanosecond timing and data acquisition endpoint for KM2A. It works as an FMC mezzanine mounted on detector-specific front-end electronics boards and provides the WR-synchronized clock and timestamp. The endpoint supports the EtherBone protocol for remote monitoring and firmware updates. Moreover, a hardware UDP engine is integrated in the FPGA to pack and transmit raw data from the detector electronics to the readout network. Preliminary tests demonstrate a timing precision of 29 ps (RMS) and a timing accuracy better than 100 ps (RMS). * The authors are with the Key Laboratory of Particle and Radiation Imaging, Department of Engineering Physics, Tsinghua University, Beijing, China, 100084 * pwb.thu@gmail.com |
|||
Slides THCOCA01 [1.182 MB] | ||
THCOCA02 | White Rabbit Status and Prospects | network, distributed, Ethernet, FPGA | 1445 |
|
|||
The White Rabbit (WR) project started off to provide a sequencing and synchronization solution for the needs of CERN and GSI. Since then, many other users have adopted it to solve problems in the domain of distributed hard real-time systems. The paper discusses the current performance of WR hardware, along with present and foreseen applications. It also describes current efforts to standardize WR under IEEE 1588 and recent developments on reliability of timely data distribution. Then it analyzes the role of companies and the commercial Open Hardware paradigm, finishing with an outline of future plans. | |||
Slides THCOCA02 [7.955 MB] | ||
FRCOAAB01 | CSS Scan System | interface, experiment, EPICS, software | 1461 |
|
|||
Funding: SNS is managed by UT-Battelle, LLC, under contract DE-AC05-00OR22725 for the U.S. Department of Energy Automation of beam line experiments requires more flexibility than the control of an accelerator. The sample environment devices to control as well as requirements for their operation can change daily. Tools that allow stable automation of an accelerator are not practical in such a dynamic environment. On the other hand, falling back to generic scripts opens too much room for error. The Scan System offers an intermediate approach. Scans can be submitted in numerous ways, from pre-configured operator interface panels, graphical scan editors, scripts, the command line, or a web interface. At the same time, each scan is assembled from a well-defined set of scan commands, each one with robust features like error checking, time-out handling and read-back verification. Integrated into Control System Studio (CSS), scans can be monitored, paused, modified or aborted as needed. We present details of the implementation and first usage experience. |
|||
Slides FRCOAAB01 [1.853 MB] | ||
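The principle of assembling a scan from a small set of well-defined commands, each with its own error checking and read-back verification, can be sketched as follows. This is an illustrative toy, not the actual CSS Scan System API; all class and method names are our own:

```python
# Toy sketch of a scan built from well-defined commands with
# read-back verification. Not the CSS Scan System API.

class Device:
    """Toy device: writes are followed by a read-back check."""
    def __init__(self, name):
        self.name = name
        self.value = 0.0
    def write(self, value):
        self.value = value
    def read(self):
        return self.value

class SetCommand:
    """One scan step: write a value, then verify the read-back agrees
    within a tolerance (standing in for the real command's timeout
    handling and error checking)."""
    def __init__(self, device, value, tol=1e-6):
        self.device, self.value, self.tol = device, value, tol
    def execute(self, log):
        self.device.write(self.value)
        if abs(self.device.read() - self.value) > self.tol:
            raise RuntimeError(f"read-back failed on {self.device.name}")
        log.append((self.device.name, self.value))

def run_scan(commands):
    """Execute commands in order, aborting on the first failure."""
    log = []
    for cmd in commands:
        cmd.execute(log)
    return log
```

Because every step is an object rather than free-form script code, a scan server can monitor, pause or abort the sequence between commands, which is the robustness the abstract contrasts with generic scripts.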
FRCOAAB02 | Karabo: An Integrated Software Framework Combining Control, Data Management, and Scientific Computing Tasks | device-server, GUI, interface, software | 1465 |
|
|||
The expected very high data rates and volumes at the European XFEL demand an efficient concurrent approach to performing experiments. Data analysis must start whilst data is still being acquired, and initial analysis results must immediately be usable to re-adjust the current experiment setup. We have developed a software framework, called Karabo, which allows such a tight integration of these tasks. Karabo is in essence a pluggable, distributed application management system. All Karabo applications (called “Devices”) have a standardized API for self-description/configuration, program-flow organization (state machine), logging and communication. Central services exist for user management, access control, data logging, configuration management etc. The design provides a very scalable yet maintainable system that at the same time can act as a fully-fledged control system or a highly parallel distributed scientific workflow system. It allows simple integration and adaptation to changing control requirements and the addition of new scientific analysis algorithms, making them automatically and immediately available to experimentalists. | |||
Slides FRCOAAB02 [2.523 MB] | ||
FRCOAAB03 | Experiment Control and Analysis for High-Resolution Tomography | software, experiment, detector, EPICS | 1469 |
|
|||
Funding: Work supported by U.S. Department of Energy, Office of Science, under Contract No. DE-AC02-06CH11357. X-ray Computed Tomography (XCT) is a powerful technique for imaging 3D structures at the micro- and nano-levels. Recent upgrades to tomography beamlines at the APS have enabled imaging at resolutions up to 20 nm at increased pixel counts and speeds. As detector resolution and speed increase, the amount of data that must be transferred and analyzed also increases. This, coupled with growing experiment complexity, drives the need for software to automate data acquisition and processing. We present an experiment control and data processing system for tomography beamlines that helps address this concern. The software, written in C++ using Qt, interfaces with EPICS for beamline control and provides live and offline data viewing, basic image manipulation features, and scan sequencing that coordinates EPICS-enabled apparatus. Post acquisition, the software triggers a workflow pipeline, written using ActiveMQ, that transfers data from the detector computer to an analysis computer and launches a reconstruction process. Experiment metadata and provenance information are stored along with raw and analyzed data in a single HDF5 file. |
|||
Slides FRCOAAB03 [1.707 MB] | ||
FRCOAAB05 | JOGL Live Rendering Techniques in Data Acquisition Systems | GPU, detector, real-time, experiment | 1477 |
|
|||
One of the major challenges in instrument control is to provide a fast and scientifically correct representation of the data collected by the detector through the data acquisition system. Despite the availability nowadays of a large number of excellent libraries for off-line data plotting, real-time 2D and 3D data rendering still suffers from performance issues related mainly to the amount of information to be displayed. The current paper describes new methods of image generation (rendering) based on the JOGL library used for data acquisition at the Institut Laue-Langevin (ILL) on instruments that require either high image resolution or a large number of images rendered at the same time. These new methods involve the definition of data buffers and the usage of GPU memory, a technique known as Vertex Buffer Objects (VBO). The implementation of different rendering modes, on-screen and off-screen, will also be detailed. | |||
Slides FRCOAAB05 [1.422 MB] | ||
FRCOAAB07 | Operational Experience with the ALICE Detector Control System | detector, operation, experiment, status | 1485 |
|
|||
The first LHC run period, lasting 4 years, brought exciting physics results and new insight into the mysteries of matter. Among the key components in these achievements were the detectors, which provided unprecedented amounts of data of the highest quality. The control systems, responsible for their smooth and safe operation, played a key role in this success. The design of the ALICE Detector Control System (DCS) started more than 12 years ago. A high level of standardization and a pragmatic design led to a reliable and stable system, which allowed for efficient experiment operation. In this presentation we summarize the overall architectural principles of the system, the standardized components and procedures. The original expectations and plans are compared with the final design. Focus is given to the operational procedures, which evolved with time. We explain how a single operator can control and protect a complex device like ALICE, with millions of readout channels and several thousand control devices and boards. We explain what we learned during the first years of LHC operation and which improvements will be implemented to provide excellent DCS service during the next years. | |||
Slides FRCOAAB07 [7.856 MB] | ||
FRCOAAB08 | The LIMA Project Update | detector, hardware, interface, software | 1489 |
|
|||
LIMA, a Library for Image Acquisition, was developed at the ESRF to control high-performance 2D detectors used in scientific applications. It provides generic access to common image acquisition concepts, from detector synchronization to online data reduction, including image transformations and storage management. An abstraction of the low-level 2D control defines the interface for camera plugins, allowing different degrees of hardware optimization. Scientific 2D data throughput up to 250 MB/s is ensured by multi-threaded algorithms exploiting multi-CPU/core technologies. Eighteen detectors are currently supported by LIMA, covering CCD, CMOS and pixel detectors, as well as GigE video cameras. Control-system agnostic by design, LIMA has become the de facto 2D standard in the TANGO community. An active collaboration among large facilities, research laboratories and detector manufacturers joins efforts towards the integration of new core features, detectors and data processing algorithms. The second-generation LIMA 2 will provide major improvements in several key core elements, such as buffer management, data format support (including HDF5) and user-defined software operations, among others. | |||
Slides FRCOAAB08 [1.338 MB] | ||
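The multi-threaded throughput the LIMA abstract credits for its 250 MB/s rests on a classic producer-consumer split between an acquisition thread and processing threads doing online reduction. A minimal sketch of that pattern in Python, with stand-in frames and a stand-in reduction (LIMA itself is C++; nothing here is its actual API):

```python
import threading
import queue

def acquire(n_frames, out_q):
    """Hypothetical camera thread: pushes synthetic frames downstream."""
    for i in range(n_frames):
        out_q.put([i] * 4)          # stand-in for a 2D pixel frame
    out_q.put(None)                 # end-of-acquisition sentinel

def reduce_online(in_q, results):
    """Processing thread: online data reduction (here, a per-frame sum)."""
    while (frame := in_q.get()) is not None:
        results.append(sum(frame))

q, sums = queue.Queue(maxsize=8), []
t1 = threading.Thread(target=acquire, args=(5, q))
t2 = threading.Thread(target=reduce_online, args=(q, sums))
t1.start(); t2.start(); t1.join(); t2.join()
print(sums)  # [0, 4, 8, 12, 16]
```

The bounded queue is the key design choice: it decouples detector readout from slower downstream work while back-pressuring the producer instead of exhausting buffer memory.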
FRCOBAB03 | The New Multicore Real-time Control System of the RFX-mod Experiment | real-time, plasma, Linux, framework | 1493 |
|
|||
The real-time control system of the RFX-mod nuclear fusion experiment has been in operation since 2004 and has been used to control the plasma position and the magnetohydrodynamic (MHD) modes. Over time, new and more computationally demanding control algorithms have been developed and the system has been pushed to its limits. Therefore a complete re-design was carried out in 2012. The new system adopts radically different solutions in hardware, operating system and software management. The VME PowerPC CPUs communicating over Ethernet used in the former system have been replaced by a single multicore server. The VxWorks operating system, previously used in the VME CPUs, has been replaced by Linux MRG, which proved to behave very well in real-time applications. The previous framework for control and communication has been replaced by MARTe, a modern framework for real-time control gaining interest in the fusion community. Thanks to the MARTe organization, a rapid development of the control system has been possible. In particular, the framework's intrinsic simulation capability gave us the possibility of carrying out most debugging in simulation, without affecting machine operation. | |||
Slides FRCOBAB03 [1.301 MB] | ||
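The simulation-first debugging that the RFX-mod abstract attributes to MARTe works because the control algorithm is written against a plant interface that either a simulator or the real hardware can implement. A hedged Python sketch of that idea (MARTe is a C++ framework; the classes, gain and dynamics below are invented purely for illustration):

```python
class SimulatedPlant:
    """Stand-in for the machine: trivial integrator responding to kicks."""
    def __init__(self, state=1.0):
        self.state = state
    def measure(self):
        return self.state
    def actuate(self, u):
        self.state += u

def control_step(plant, setpoint, gain=0.5):
    """One proportional control cycle. The same code would run against a
    real-hardware object exposing the same measure()/actuate() interface."""
    error = setpoint - plant.measure()
    plant.actuate(gain * error)
    return error

plant = SimulatedPlant(state=1.0)
for _ in range(20):                 # drive the state toward the setpoint
    control_step(plant, setpoint=0.0)
print(abs(plant.measure()) < 1e-3)  # True: loop converged in simulation
```

Swapping `SimulatedPlant` for a hardware driver with the same two methods is what lets most debugging happen off-line, without affecting machine operation.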
FRCOBAB04 | Beam Feedback System Challenges at SuperKEKB Injector Linac | linac, feedback, emittance, EPICS | 1497 |
|
|||
The SuperKEKB electron/positron asymmetric collider is under construction in order to elucidate new physics beyond the standard model of elementary particle physics. This will only be possible through precise measurements with a 40-times higher luminosity compared with that of KEKB. The injector linac is being upgraded to enable a 20-times smaller beam size of 50 nm at the collision point and a twice-larger stored beam current with a short lifetime of 10 minutes. At the same time, two light source rings, PF and PF-AR, should be filled in top-up injection mode. To this end the linac must be operated with precise beam controls. Dual-layer controls with EPICS and MRF event systems are being enhanced to support precise pulse-to-pulse beam modulation (PPM) at 50 Hz. A virtual accelerator (VA) concept is introduced to enable a single linac to behave as four VAs switched by PPM, where each VA corresponds to one of the four top-up injections into the storage rings. Each VA is accompanied by independent beam orbit and energy feedback loops to maintain the required beam qualities. The requirements from the SuperKEKB HER and LER for beam emittance, energy spread, and charge are especially challenging. | |||
Slides FRCOBAB04 [1.596 MB] | ||
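The virtual-accelerator concept above, one physical linac time-shared pulse-to-pulse among four injection destinations, can be caricatured as a round-robin schedule where each VA keeps its own state (in reality, its own orbit and energy feedback loops). A toy Python sketch; the ring names follow the abstract, everything else is illustrative:

```python
from itertools import cycle

class VirtualAccelerator:
    """Hypothetical per-destination VA: would hold its own feedback state."""
    def __init__(self, name):
        self.name, self.pulses = name, 0
    def fire(self):
        self.pulses += 1          # one linac pulse delivered as this VA

# One physical linac switched between four VAs, pulse to pulse (PPM).
vas = [VirtualAccelerator(n) for n in ("HER", "LER", "PF", "PF-AR")]
schedule = cycle(vas)
for _ in range(50):               # one second of 50 Hz operation
    next(schedule).fire()
print([va.pulses for va in vas])  # [13, 13, 12, 12]
```

The essential point the sketch captures is isolation: because each VA object owns its settings and feedback state, switching at 50 Hz never mixes the beam parameters of one destination into another.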
FRCOBAB05 | Distributed Feedback Loop Implementation in the RHIC Low Level RF Platform | cavity, LLRF, damping, FPGA | 1501 |
|
|||
Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. We present a brief overview of distributed feedback systems based on the RHIC LLRF Platform. The general architecture and sub-system components of a complex feedback system are described, emphasizing the techniques and features employed to achieve deterministic, low-latency data and timing delivery between local and remote sub-systems: processors, FPGA fabric components and the high-level control system. In particular, we describe how we use the platform to implement a widely distributed multi-processor and FPGA-based longitudinal damping system, which relies on task sharing, tight synchronization and integration to achieve the desired functionality and performance. |
|||
Slides FRCOBAB05 [3.147 MB] | ||
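The longitudinal damping loop named in the last abstract, detect the beam's synchrotron oscillation each turn and feed back a corrective kick, can be caricatured in a toy phase-space model. This is not the RHIC FPGA implementation; the gain, tune and the simple momentum kick below are arbitrary illustrative choices:

```python
import math

def damp(phase, momentum, turns, gain=0.1, tune=0.05):
    """Toy longitudinal damping: rotate (phase, momentum) by the synchrotron
    tune each turn, then apply a feedback kick opposing the momentum error.
    Returns the remaining oscillation amplitude."""
    c = math.cos(2 * math.pi * tune)
    s = math.sin(2 * math.pi * tune)
    for _ in range(turns):
        # unperturbed synchrotron rotation in phase space
        phase, momentum = c * phase + s * momentum, -s * phase + c * momentum
        momentum -= gain * momentum   # the damping feedback kick
    return math.hypot(phase, momentum)

# With feedback on, an initial unit oscillation shrinks by orders of magnitude.
print(damp(1.0, 0.0, turns=200) < 0.01)  # True
```

The distributed challenge the paper addresses is doing exactly this per bunch, per turn, with the detection, processing and kicker stages living on separate processors and FPGAs that must stay deterministically synchronized.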