Paper | Title | Page |
---|---|---|
WEM303 | Virtualisation within the Control System Environment at the Australian Synchrotron | 664 |
Virtualisation technologies significantly improve efficiency and availability of computing services while reducing the total cost of ownership. Real-time computing environments used in distributed control systems require special consideration when it comes to server and application virtualisation. The EPICS environment at the Australian Synchrotron comprises more than 500 interconnected physical devices; their virtualisation holds great potential for reducing risk and maintenance. An overview of the approach taken by the Australian Synchrotron, the involved hardware and software technologies as well as the configuration of the virtualisation eco-system is presented, including the challenges, experiences and lessons learnt.
Slides WEM303 [1.236 MB]
Poster WEM303 [0.963 MB]
WEM304 | Status Monitoring of the EPICS Control System at the Canadian Light Source | 667 |
The CLS uses the EPICS Distributed Control System (DCS) for control and feedback of a linear accelerator, booster ring, electron storage ring, and numerous x-ray beamlines. The number of host computers running EPICS IOC applications has grown to 200, and the number of IOC applications exceeds 700. The first part of this paper presents the challenges and current efforts to monitor and report the status of the control system itself by monitoring the EPICS network traffic. This approach does not require any configuration or application modification to report the currently active applications and to provide notification of any changes. The second part covers the plans to use the dynamically collected information to improve upon the information gathered by process variable crawlers for an IRMIS database, with the goal of eventually replacing the process variable crawlers.
Slides WEM304 [0.550 MB]
Poster WEM304 [1.519 MB]
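A minimal illustration of the beacon-listening idea behind such traffic-based monitoring (not the CLS implementation): EPICS Channel Access servers periodically broadcast beacon datagrams, so recording which hosts send them reveals the currently active IOC hosts without touching any IOC configuration. The sketch assumes the default beacon/repeater port 5065, does not decode the CA header, and must run on a host that is not already running a caRepeater bound to that port.

```python
# Sketch: detect active EPICS CA servers by listening for their periodic
# beacon broadcasts.  Port 5065 is the default repeater/beacon port; the
# "new/lost host" bookkeeping below is an illustrative assumption, not the
# CLS monitoring tool.
import socket
import time

BEACON_PORT = 5065        # default EPICS_CA_REPEATER_PORT
STALE_AFTER = 60.0        # seconds without a beacon before a host is reported lost

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", BEACON_PORT))
sock.settimeout(1.0)

last_seen = {}            # host address -> time of last beacon

while True:
    try:
        _data, (host, _port) = sock.recvfrom(1024)
        if host not in last_seen:
            print(f"new CA server detected: {host}")
        last_seen[host] = time.time()
    except socket.timeout:
        pass
    now = time.time()
    for host, seen in list(last_seen.items()):
        if now - seen > STALE_AFTER:
            print(f"CA server disappeared: {host}")
            del last_seen[host]
```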
WEPGF001 | The Instrument Control Electronics of the ESPRESSO Spectrograph @VLT | 689 |
ESPRESSO, the Echelle SPectrograph for Rocky Exoplanet and Stable Spectroscopic Observations, is a super-stable Optical High Resolution Spectrograph for the Combined Coudé focus of the VLT. It can be operated either as a single-telescope instrument or as a multi-telescope facility, by collecting the light of up to four UTs. From the Nasmyth focus of each UT the light is fed, through a set of optical elements (Coudé Train), to the Front End Unit, which performs several functions such as image and pupil stabilization, inclusion of calibration light and refocusing. The light is then conveyed into the spectrograph fibers. The whole process is handled by several electronically controlled devices. About 40 motorized stages, more than 90 sensors and several calibration lamps are controlled by the Instrument Control Electronics (ICE) and Software (ICS). The technology employed for the control of the ESPRESSO subsystems is PLC-based, with a distributed layout close to the functions to control. This paper illustrates the current status of the ESPRESSO ICE, showing the control architecture, the organization of the electrical cabinets and the experience gained during the development and assembly phase.
Poster WEPGF001 [5.729 MB]
WEPGF002 | A Protocol for Streaming Large Messages with UDP | 693 |
Funding: Operated by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the United States Department of Energy.
We have developed a protocol that concatenates UDP datagrams to stream large messages. The datagrams can be sized to the optimal size for the receiver. The protocol provides acknowledged reception based on a sliding-window concept. The implementation provides for messages of up to 10 Mbytes and guarantees complete delivery or a corresponding error. The protocol is implemented both as standalone messaging between two sockets and within the context of Fermilab's ACNet protocol. Results of this implementation in VxWorks are analyzed.
Poster WEPGF002 [0.796 MB]
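As a rough illustration of the concatenation idea (not the Fermilab/ACNet protocol itself), the sketch below splits a large message into sequence-numbered datagram payloads and reassembles them on the receiving side; the real protocol adds sliding-window acknowledgements on top of such framing. The header layout and chunk size are assumptions made for the example.

```python
# Sketch of concatenating UDP datagrams into one large message.
# The header layout (message id, sequence number, total chunks) is an
# illustrative assumption; the real protocol defines its own framing and
# adds sliding-window acknowledgements.
import struct

HEADER = struct.Struct("!IHH")   # message id, sequence number, total chunks
CHUNK_SIZE = 1400                # payload per datagram, below a typical MTU

def split_message(msg_id: int, payload: bytes):
    """Yield the datagrams that together carry one large message."""
    chunks = [payload[i:i + CHUNK_SIZE] for i in range(0, len(payload), CHUNK_SIZE)] or [b""]
    total = len(chunks)
    for seq, chunk in enumerate(chunks):
        yield HEADER.pack(msg_id, seq, total) + chunk

def reassemble(datagrams):
    """Rebuild the message once every chunk has arrived, in any order."""
    parts, total = {}, None
    for dgram in datagrams:
        _msg_id, seq, total = HEADER.unpack_from(dgram)
        parts[seq] = dgram[HEADER.size:]
    assert total is not None and len(parts) == total, "missing chunks"
    return b"".join(parts[i] for i in range(total))

message = b"x" * (5 * 1024 * 1024)             # a 5 Mbyte message
datagrams = list(split_message(1, message))
assert reassemble(reversed(datagrams)) == message
```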
WEPGF005 | The New Modular Control System for Power Converters at CERN | 697 |
The CERN Accelerator Complex consists of several generations of particle accelerators, with around 5000 power converters supplying regulated current and voltage to normal-conducting and superconducting magnets. Today around 12 generations of legacy control system types are in operation in the accelerator complex, which has a significant impact on operability, support and flexibility of the converter controls electronics. Over the past years a new generation of modular controls called RegFGC3 has been developed by CERN's power conversion group. The goal is to provide a new standardised and cost-effective control solution, supporting the largest number of converter topologies in a single platform. This will reduce the maintenance cost by decreasing the variety and diversity of control systems whilst simultaneously improving the operability of power converters. This paper describes thyristor-based power converter controls as well as the ongoing design and realization, focusing on functional requirements and a first implementation.
Poster WEPGF005 [1.132 MB]
WEPGF006 | Magnet Server and Control System Database Infrastructure for the European XFEL | 701 |
The linear accelerator of the European XFEL will use more than 1400 individually powered electromagnets for beam guidance and focusing. Front-end servers establish the low-level interface to several types of power supplies, and a middle layer server provides control over physical parameters like field or deflection angle in consideration of the hysteresis curve of the magnet. A relational database system with stringent consistency checks is used to store configuration data. The paper focuses on the functionality and architecture of the middle layer server and gives an overview of the database infrastructure.
WEPGF010 | Securing Access to Controls Applications with Apache httpd Proxy | 705 |
Many commercial systems used for controls nowadays contain embedded web servers. Secure access to these often essential facilities is of utmost importance, yet it remains complicated to manage for different reasons (e.g. obtaining and applying patches from vendors; ad-hoc, oversimplified web-server implementations prone to remote exploits). In this paper we describe a security-mediating proxy system, which is based on the well-known Apache httpd software. We describe how the use of the proxy made it possible to simplify the infrastructure necessary to start WinCC OA-based supervision applications on operator consoles, providing, at the same time, an improved level of security and traceability. Proper integration with the CERN central user account repository allows the operators to use their personal credentials to access applications, and also allows one to use standard user management tools. In addition, easy-to-memorize URL addresses for access to the applications are provided, and the use of a secure https transport protocol is possible for services that do not support it on their own.
Poster WEPGF010 [1.824 MB]
WEPGF011 | Progress of the Control Systems for the ADS Injector II | 709 |
This paper reports the progress of the control system for accelerator Injector II of the China initiative accelerator driven sub-critical (ADS) facility. As a linear proton accelerator, Injector II includes an ECR ion source, a low-energy beam transport line, a radio frequency quadrupole accelerator, a medium-energy beam transport line, several cryomodules, and a diagnostics plate. Several subsystems of the control system are discussed, such as the machine protection system, the timing system, and the data storage system. A three-layer control system has been developed for Injector II. In the equipment layer, the low-level controls based on various industrial control cards, such as programmable logic controllers and peripheral component interconnect (PCI) boards, are reported. In the middle layer, a redundant Gigabit Ethernet based on the Ethernet ring protection protocol is used for the Injector II control network. In the operation layer, high-level application software has been developed for beam commissioning and operation of the accelerator. Finally, the proton beam commissioning of Injector II carried out from the control room using this control system is described.
Poster WEPGF011 [0.701 MB]
WEPGF012 | Information Security Assessment of CERN Access and Safety Systems | 713 |
Access and safety systems are traditionally considered critical in organizations and they are therefore usually well isolated from the rest of the network. However, recent years have seen a number of cases where such systems have been compromised even when well protected in principle. The tendency has also been to increase information exchange between these systems and the rest of the world to facilitate operation and maintenance, which further serves to make these systems vulnerable. In order to gain insight into the overall level of information security of CERN access and safety systems, a security assessment was carried out. This process consisted not only of a logical evaluation of the architecture and implementation, but also of active probing for various types of vulnerabilities on test bench installations.
Poster WEPGF012 [1.052 MB]
WEPGF013 | Increasing Availability by Implementing Software Redundancy in the CMS Detector Control System | 717 |
Funding: Swiss National Science Foundation (SNSF).
The Detector Control System (DCS) of the Compact Muon Solenoid (CMS) experiment ran with high availability throughout the first physics data-taking period of the Large Hadron Collider (LHC). This was achieved through the consistent improvement of the control software and the provision of a 24-hour expert on-call service. One remaining potential cause of significant downtime was the failure of the computers hosting the DCS software. To minimize the impact of these failures after the restart of the LHC in 2015, it was decided to implement a redundant software layer for the control system where two computers host each DCS application. By customizing and extending the redundancy concept offered by WinCC Open Architecture (WinCC OA), the CMS DCS can now run in a fully redundant software configuration. The implementation involves one host being active, handling all monitoring and control tasks, with the second host running in a minimally functional, passive configuration. Data from the active host is constantly copied to the passive host to enable a rapid switchover as needed. This paper describes details of the implementation and practical experience of redundancy in the CMS DCS.
Poster WEPGF013 [1.730 MB]
WEPGF014 | A Data Acquisition System for Abnormal RF Waveform at SACLA | 721 |
At the X-ray Free Electron Laser (XFEL) facility SACLA, an event-synchronized data acquisition system has been utilized for XFEL operation. This system collects shot-by-shot data, such as point data of the phase and amplitude of the RF cavity pickup signals, in synchronization with the beam operation cycle. The system also acquires RF waveform data every 10 minutes. In addition to the periodic waveform acquisition, an abnormal RF waveform that suddenly occurs should be collected for failure diagnostics. Therefore, we developed an abnormal RF waveform data acquisition (DAQ) system, which consists of VME systems, a cache server, and a NoSQL database system, Apache Cassandra. When a VME system detects an abnormal RF waveform, it collects all related waveforms of the same shot. The waveforms are stored in Cassandra through the cache server. Before installation at SACLA, we verified the performance with a prototype system. In 2014, we installed the DAQ system in the injection part with five VME systems. In 2015, we will acquire waveforms from the low-level RF control system, which comprises 74 VME systems, at the SACLA accelerator.
Poster WEPGF014 [0.978 MB]
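For readers unfamiliar with the storage side, the sketch below shows how a waveform keyed by shot and channel could be written to Cassandra with the DataStax Python driver. The keyspace, table layout, column names and contact point are assumptions for illustration only and do not describe the actual SACLA schema.

```python
# Sketch of storing an abnormal RF waveform in Cassandra.  Keyspace, table
# and column names are illustrative assumptions, not the SACLA schema.
from datetime import datetime, timezone
from cassandra.cluster import Cluster

cluster = Cluster(["cassandra-node1"])        # contact point is a placeholder
session = cluster.connect()

session.execute("""
    CREATE KEYSPACE IF NOT EXISTS rf_daq
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 2}
""")
session.execute("""
    CREATE TABLE IF NOT EXISTS rf_daq.abnormal_waveforms (
        shot_id   bigint,
        channel   text,
        acquired  timestamp,
        samples   blob,
        PRIMARY KEY (shot_id, channel)
    )
""")

insert = session.prepare(
    "INSERT INTO rf_daq.abnormal_waveforms (shot_id, channel, acquired, samples) "
    "VALUES (?, ?, ?, ?)"
)

def store_waveform(shot_id: int, channel: str, samples: bytes) -> None:
    """Store one waveform, keyed by beam shot and RF channel."""
    session.execute(insert, (shot_id, channel, datetime.now(timezone.utc), samples))

store_waveform(123456789, "cavity01:pickup", b"\x00\x01" * 4096)
```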
WEPGF015 | Drivers and Software for MicroTCA.4 | 725 |
Funding: This work is supported by the Helmholtz Validation Fund HVF-0016 'MTCA.4 for Industry'.
The MicroTCA.4 crate standard provides a powerful electronic platform for digital and analogue signal processing. Besides excellent hardware modularity, it is the software reliability and flexibility as well as the easy integration into existing software infrastructures that will drive the widespread adoption of the new standard. The DESY MicroTCA.4 User Tool Kit (MTCA4U) comprises three main components: a Linux device driver, a C++ API for accessing the MicroTCA.4 devices and a control system interface layer. The main focus of the tool kit is flexibility to enable fast development. The universal, expandable PCI Express driver and a register mapping library allow out-of-the-box operation of all MicroTCA.4 devices which run firmware developed with the DESY board support package. The tool kit has recently been extended with features like command line tools and language bindings to Python and Matlab.
Poster WEPGF015 [0.540 MB]
WEPGF018 | Service Asset and Configuration Management in ALICE Detector Control System | 729 |
ALICE (A Large Ion Collider Experiment) is one of the big LHC (Large Hadron Collider) detectors at CERN. It is composed of 19 sub-detectors constructed by different institutes participating in the project. Each of these subsystems has a dedicated control system based on the commercial SCADA package "WinCC Open Architecture" and numerous other software and hardware components delivered by external vendors. The task of the central controls coordination team is to supervise integration, to provide shared services (e.g. database, gas monitoring, safety systems) and to manage the complex infrastructure (including over 1200 network devices and 270 VME and power supply crates) that is used by over 100 developers around the world. Due to the scale of the control system, it is essential to ensure that reliable and accurate information about all the components required to deliver these services, along with the relationships between the assets, is properly stored and controlled. In this paper we present the techniques and tools that were implemented to achieve this goal, together with the experience gained from their use and plans for their improvement.
Poster WEPGF018 [11.378 MB]
WEPGF019 | Database Applications Development of the TPS Control System | 732 |
A control system has been established for the new 3 GeV synchrotron light source (Taiwan Photon Source, TPS), which was successfully commissioned in December 2014. Various control system platforms based on the EPICS framework have been implemented and commissioned. A relational database (RDB) has been set up for several of the TPS control system applications. EPICS data archive systems have been built to record various machine parameters and status information in the RDB for long-term logging. Specific applications have been developed to analyze the archived data retrieved from the RDB. An EPICS alarm system has been set up to monitor sub-system status and to record detailed information in the RDB when problems occur. Web-based applications backed by the RDB have been gradually created to show TPS machine status and related information. These efforts are described in this paper.
Poster WEPGF019 [4.008 MB]
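The kind of analysis application mentioned above typically boils down to a time-window query against the archive tables. The sketch below shows such a retrieval with psycopg2; the table and column names, PV name and connection parameters are hypothetical and do not describe the actual TPS schema.

```python
# Sketch of retrieving archived PV data from a relational database for
# offline analysis.  Schema, host and credentials are illustrative only.
import psycopg2

conn = psycopg2.connect(host="tps-rdb", dbname="archive",
                        user="reader", password="secret")

def fetch_samples(pv_name, start, end):
    """Return (timestamp, value) pairs for one PV within a time window."""
    with conn.cursor() as cur:
        cur.execute(
            """
            SELECT sample_time, value
            FROM pv_samples
            WHERE pv_name = %s AND sample_time BETWEEN %s AND %s
            ORDER BY sample_time
            """,
            (pv_name, start, end),
        )
        return cur.fetchall()

samples = fetch_samples("TPS:SR:DCCT:CURRENT", "2015-01-01", "2015-01-02")
print(len(samples), "samples retrieved")
```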
WEPGF020 | A Redundant EPICS Control System Based on PROFINET | 736 |
This paper demonstrates a redundant EPICS control system based on PROFINET. The control system consists of four levels: the EPICS IOC, the PROFINET IO controller, the PROFINET media and the PROFINET IO device. Redundancy at each level is independent of redundancy at the other levels in order to achieve the highest flexibility. The implementation and performance of each level are described in this paper.
Poster WEPGF020 [0.631 MB]
WEPGF021 | Design of Control Networks for China Initiative Accelerator Driven System | 739 |
In this paper, we report the conceptual design of the control networks used in the control system for the China initiative accelerator driven sub-critical (ADS) facility, which consists of two accelerator injectors, a main accelerator, a spallation target and a reactor. Because different applications have varied expectations on reliability, latency, jitter and bandwidth, the following networks have been designed for the control systems: a central operation network for the operation of accelerators, target and reactor; a reactor protection network for preventing the release of radioactivity to the environment; a personnel protection network for protecting personnel against unnecessary exposure to hazards; a machine protection network for protecting the machines in the ADS system; a time communication network for providing timing and synchronization for the three accelerators; and a data archiving network for recording important measurement results from accelerators, target and reactor. Finally, we discuss the application of high-performance Ethernet technologies, such as the Ethernet ring protection protocol, in these control networks for CIADS.
Poster WEPGF021 [0.197 MB]
WEPGF023 | Controlling Camera and PDU | 743 |
Funding: SKA South Africa, National Research Foundation of South Africa, Department of Science and Technology.
The 64-dish MeerKAT radio telescope, currently under construction in South Africa, will become the largest and most sensitive radio telescope in the Southern Hemisphere until integrated with the Square Kilometre Array (SKA). This poster presents the software solutions that the MeerKAT Control and Monitoring (CAM) team implemented to achieve control (pan, tilt, zoom and focus) of the on-site video cameras using the Pelco D protocol. Furthermore, it presents how the outlets of the PDU (Power Distribution Unit) are switched on and off using SNMP to facilitate emergency shutdown of equipment. This includes a live demonstration from site (South Africa).
||
![]() |
Poster WEPGF023 [0.896 MB] | |
Export • | reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml) | |
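To make the camera-control part concrete, the sketch below builds Pelco D command frames (sync byte, address, two command bytes, pan and tilt speed, checksum) and writes them to a serial port with pyserial. The serial device path and speed values are assumptions; this illustrates the protocol framing only and is not the MeerKAT CAM code.

```python
# Sketch of building Pelco D command frames for camera pan/tilt control.
# Frame: 0xFF sync, address, cmd1, cmd2, data1 (pan speed), data2 (tilt
# speed), checksum = sum of bytes 2..6 modulo 256.  Device path is assumed.
import serial   # pyserial

def pelco_d_frame(address: int, cmd1: int, cmd2: int, data1: int, data2: int) -> bytes:
    body = bytes([address, cmd1, cmd2, data1, data2])
    checksum = sum(body) % 256
    return bytes([0xFF]) + body + bytes([checksum])

def pan_right(address: int, speed: int = 0x20) -> bytes:
    return pelco_d_frame(address, 0x00, 0x02, speed, 0x00)   # cmd2 bit 1 = pan right

def stop(address: int) -> bytes:
    return pelco_d_frame(address, 0x00, 0x00, 0x00, 0x00)

with serial.Serial("/dev/ttyUSB0", 9600, timeout=1) as port:  # port settings assumed
    port.write(pan_right(1))
    # ... wait for the camera to move, then stop it
    port.write(stop(1))
```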
WEPGF024 | Interfacing EPICS to the Widespread Platform Management Interface IPMI | 746 |
Funding: This work has been supported by the German Federal Ministry of Education and Research (BMBF) under Grant Identifier 05H12VHH.
The Intelligent Platform Management Interface (IPMI) is a standardized interface to management functionalities of computer systems. The data provided typically includes the readings of monitoring sensors, such as fan speeds, temperatures, power consumption, etc. It is provided not only by servers, but also by uTCA crates that are often used to host an experiment's control and readout system. Therefore, it is well suited to monitor the health of the hardware deployed in HEP experiments. In addition, the crates can be controlled via IPMI with functions such as triggering a reset or configuring IP parameters. We present the design and functionality of an EPICS module to interface to IPMI that is based on ipmitool. It supports automatic scanning for IPMI sensors and filling the PV metadata (units, meaning of status words in mbbi records) from the IPMI sensor information. Most importantly, the IPMI-provided alarm thresholds are automatically placed in the PV for easy implementation of an alarm system to monitor IPMI hardware.
For the DEPFET Collaboration.
Poster WEPGF024 [2.366 MB]
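In the same spirit as the module described above, sensor readings and alarm thresholds can be obtained by wrapping ipmitool. The sketch below parses the pipe-separated output of `ipmitool sensor`; the host, credentials and exact column positions are assumptions based on common ipmitool behaviour, not the internals of the EPICS module itself.

```python
# Sketch of reading IPMI sensor values and thresholds by wrapping ipmitool.
# The pipe-separated "ipmitool sensor" output layout (name, value, unit,
# status, then six threshold columns) is assumed from common behaviour.
import subprocess

def read_sensors(host, user, password):
    """Return {sensor name: (value, unit, status, upper critical threshold)}."""
    out = subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", host, "-U", user, "-P", password, "sensor"],
        check=True, capture_output=True, text=True,
    ).stdout
    sensors = {}
    for line in out.splitlines():
        fields = [f.strip() for f in line.split("|")]
        if len(fields) < 9 or fields[1] in ("na", ""):
            continue
        name, value, unit, status = fields[0], fields[1], fields[2], fields[3]
        upper_critical = fields[8]   # threshold that could fill a PV alarm limit
        sensors[name] = (value, unit, status, upper_critical)
    return sensors

for name, info in read_sensors("crate-mch.example", "admin", "admin").items():
    print(name, info)
```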
WEPGF025 | Data Driven Simulation Framework | 749 |
Funding: Tata Research Development and Design Centre, TCSL.
Control systems for radio astronomy projects such as MeerKAT* require testing the functionality of different parts of the telescope even when the system is not fully developed. Usage of software simulators in such scenarios is customary. Projects build simulators for subsystems such as dishes, beamformers and so on to ensure the correctness of a) their interface to the control system and b) the logic written to coordinate and configure them. However, such simulators are developed as one-offs, even when they implement similar functionality. This leads to duplicated effort, impacting large projects such as the Square Kilometre Array**. To mitigate this, we leverage the idea of data-driven software development and conceptualize a simulation framework that reduces the simulator development effort by: 1) capturing all the necessary information through instantiation of a well-defined simulation specification model, 2) configuring a reusable engine that performs the required simulation functions based on the instantiated and populated model provided to it as input. The results of a proof of concept (PoC) for such a simulation framework implemented in the context of the Giant Metrewave Radio Telescope*** are presented.
*MeerKAT CAM Design Description, DNo M1500-0000-006, Rev 2, July 2014
**A.R. Taylor, "The Square Kilometre Array", Proceedings IAU Symposium, 2012
***www.gmrt.ncra.tifr.res.in
Poster WEPGF025 [0.676 MB]
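The data-driven idea can be illustrated with a very small sketch: the simulated device is described entirely by a specification, and a single generic engine animates whatever specification it receives. The specification format below is an assumption made for illustration and is not the model defined in the paper.

```python
# Sketch of a data-driven simulator: one generic engine, behaviour captured
# entirely in a specification.  The spec format is an illustrative assumption.
import random
import time

dish_spec = {
    "name": "dish_042",
    "sensors": {
        "azimuth":     {"min": 0.0,  "max": 360.0, "period": 1.0},
        "elevation":   {"min": 15.0, "max": 90.0,  "period": 1.0},
        "temperature": {"min": -5.0, "max": 40.0,  "period": 5.0},
    },
}

class SimulationEngine:
    """Generic engine: produces samples for any device described by a spec."""

    def __init__(self, spec):
        self.spec = spec
        self.next_due = {name: 0.0 for name in spec["sensors"]}

    def step(self, now):
        samples = {}
        for name, cfg in self.spec["sensors"].items():
            if now >= self.next_due[name]:
                samples[name] = random.uniform(cfg["min"], cfg["max"])
                self.next_due[name] = now + cfg["period"]
        return samples

engine = SimulationEngine(dish_spec)
for _ in range(3):
    print(dish_spec["name"], engine.step(time.time()))
    time.sleep(1.0)
```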
WEPGF028 | A Self-Configurable Server for Controlling Devices Over the Simple Network Management Protocol | 753 |
The Simple Network Management Protocol (SNMP) is an open standard protocol that many manufacturers use for controlling and monitoring their hardware. More and more SNMP-manageable devices that can be used in accelerator control systems appear on the market. Some SNMP devices are already used at the free-electron laser FLASH at DESY and are planned for use at the European X-ray Free Electron Laser (XFEL) in Hamburg, Germany. To provide an easy and uniform way of controlling SNMP devices, a server has been developed. The server configuration, with respect to the device parameters to control, is done during its start-up and driven by the manufacturer Management Information Base (MIB) files provided with the SNMP devices. This paper gives some details of the server design, its implementation and examples of use.
Poster WEPGF028 [3.323 MB]
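A minimal example of the underlying mechanism is reading one value from an SNMP agent with pysnmp, resolving the object through its MIB, which is the same kind of information the self-configuring server extracts from vendor MIB files at start-up. The host name and community string below are placeholders.

```python
# Sketch of a single MIB-resolved SNMP GET with pysnmp (host and community
# string are placeholders).
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

error_indication, error_status, error_index, var_binds = next(
    getCmd(
        SnmpEngine(),
        CommunityData("public", mpModel=1),                  # SNMP v2c
        UdpTransportTarget(("snmp-device.example", 161)),
        ContextData(),
        ObjectType(ObjectIdentity("SNMPv2-MIB", "sysDescr", 0)),
    )
)

if error_indication or error_status:
    print("SNMP error:", error_indication or error_status.prettyPrint())
else:
    for oid, value in var_binds:
        print(f"{oid.prettyPrint()} = {value.prettyPrint()}")
```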
WEPGF029 | High Level Software Structure for the European XFEL LLRF System | 757 |
The low-level RF (LLRF) system for the European XFEL controls the accelerating RF fields in order to meet the specifications of the electron bunch parameters. A hardware platform based on the MicroTCA.4 standard has been chosen to realize a reliable, remotely maintainable and high-performing integrated system. Fast data transfer and processing is done by field programmable gate arrays (FPGAs) within the crate, controlled by a CPU via PCIe communication. In addition to the MTCA system, the LLRF comprises external supporting modules that also require control and monitoring software. In this paper the LLRF high-level software used in the E-XFEL is presented. It is implemented as a semi-distributed architecture of front-end server instances in combination with direct FPGA communication using fast optical links. Miscellaneous server tasks have to be executed, e.g. fast data acquisition and distribution, adaptation algorithms and updating controller parameters. Furthermore, the inter-server data communication and the integration within the control system environment, as well as the interface to other subsystems, are described.
WEPGF030 | The EPICS Archiver Appliance | 761 |
The EPICS Archiver Appliance was developed by a collaboration of SLAC, BNL and FRIB to allow for the archival of millions of PVs, mainly focusing on data retrieval performance. It offers the ability to cluster appliances and to scale by adding appliances to the cluster. Multiple stages and an inbuilt process to move data between stages facilitate the usage of faster storage and the ability to decimate data as it is moved. An HTML management interface and scriptable business logic significantly simplify administration. Well-defined customization hooks allow facilities to tailor the product to suit their requirements. Mechanisms to facilitate installation and migration have been developed. The system has been in production at SLAC for about two years now, at FRIB for about a year, and is heading towards a production deployment at BNL. At SLAC, the system has significantly reduced maintenance costs while enabling new functionality that was not possible before. This paper presents an overview of the system and shares some of our experience with deploying and managing it at our facilities.
Poster WEPGF030 [1.254 MB]
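Client-side retrieval from the Archiver Appliance is a plain HTTP request. The sketch below uses the JSON retrieval endpoint of a default all-in-one installation (port 17668, getData.json); the host name, PV name and port are placeholders and may differ at a given site.

```python
# Sketch of fetching archived samples from an EPICS Archiver Appliance via
# its JSON retrieval interface.  Port 17668 and the getData.json endpoint
# are the defaults of a standard install and may differ at other sites.
import requests

def fetch_pv(pv, start_iso, end_iso, host="archiver.example"):
    url = f"http://{host}:17668/retrieval/data/getData.json"
    resp = requests.get(url, params={"pv": pv, "from": start_iso, "to": end_iso})
    resp.raise_for_status()
    # one entry per PV; each sample carries seconds, nanoseconds, value and alarm info
    return [
        (event["secs"] + event["nanos"] * 1e-9, event["val"])
        for event in resp.json()[0]["data"]
    ]

samples = fetch_pv("SR:CURRENT", "2015-10-01T00:00:00.000Z", "2015-10-02T00:00:00.000Z")
print(len(samples), "samples")
```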
WEPGF031 | The Evolution of the Simulation Environment in ALMA | 765 |
The Atacama Large Millimeter/submillimeter Array (ALMA) entered its operations phase in 2014. This transition changed the priorities within the observatory: most of the available time is now dedicated to science observations at the expense of the technical time that software testing used to have in abundance. The scarcity of technical time exposes one of the weakest points in the existing infrastructure available for software testing: the simulation environment of the ALMA software. The existing simulation focuses on the functional aspects but not on real operation scenarios with all the antennas. Therefore, scalability and performance problems introduced by new features, or hidden in the currently accepted software, cannot be verified until the actual problem manifests itself during operations. It was therefore planned to design and implement a new simulation environment, which must be comparable to, or at least representative of, the production environment. In this paper we review the experiences gained and lessons learnt during the design and implementation of the new simulation environment.
Poster WEPGF031 [1.358 MB]
WEPGF032 | EPICS PV Management and Method for RIBF Control System | 769 |
For the RIBF project (RIKEN RI Beam Factory), an EPICS-based distributed control system is utilized on Linux and vxWorks as an embedded EPICS technology. Using NAS with a high-availability configuration as shared storage, common EPICS components (Base, Db files, and so on) are shared among the EPICS IOCs. As of March 2015, the control system continues to grow and consists of about 50 EPICS IOCs and more than 100,000 EPICS records. With such a large number of controlled hardware devices, the dependencies between EPICS records and EPICS IOCs are complicated; for example, it is not easy to obtain accurate device information from the EPICS record name alone. Therefore, a new management system was constructed for the RIBF control system to call up detailed information easily. In this system, by parsing the startup script files (st.cmd) of the running EPICS IOCs, all EPICS records and EPICS fields are stored in a PostgreSQL-based database. Using this stored data, Web-based management and search tools have been developed. In this paper the system concept and the features of the Web-based management tools are reported in detail.
Poster WEPGF032 [6.833 MB]
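The parsing step can be sketched as follows: scan an IOC start-up script for dbLoadRecords calls and record which database files and macros each IOC loads. The PostgreSQL table and column names, host and IOC name below are assumptions for illustration, not the actual RIBF schema.

```python
# Sketch of the idea: extract dbLoadRecords calls from an st.cmd file and
# store them in PostgreSQL.  Table/column names and connection parameters
# are illustrative assumptions.
import re
import psycopg2

DB_LOAD = re.compile(r'dbLoadRecords\s*\(\s*"([^"]+)"\s*(?:,\s*"([^"]*)")?\s*\)')

def parse_st_cmd(path):
    """Yield (db file, macro string) pairs for every dbLoadRecords call."""
    with open(path) as f:
        for line in f:
            m = DB_LOAD.search(line)
            if m:
                yield m.group(1), m.group(2) or ""

conn = psycopg2.connect(host="ioc-db.example", dbname="epics",
                        user="ioc_admin", password="secret")
with conn, conn.cursor() as cur:
    for db_file, macros in parse_st_cmd("iocBoot/iocExample/st.cmd"):
        cur.execute(
            "INSERT INTO ioc_db_load (ioc_name, db_file, macros) VALUES (%s, %s, %s)",
            ("iocExample", db_file, macros),
        )
```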
WEPGF034 | The Power Supply Control System of CSR | 772 |
This paper gives a brief description of the power supply control system for the Cooler Storage Ring (CSR). It introduces in detail the control system architecture, hardware and software. We use a standard distributed control system (DCS) architecture. The software follows the standard three-layer structure: the OPI layer handles data generation and monitoring, the intermediate layer handles data processing and transmission, and the device control layer performs the data output to the power supplies. A self-designed ARM + DSP controller is used for controlling the power supply output. In addition, an FPGA-based timing controller has been adopted for power supply control in order to meet the requirement that the power supply output be synchronized with the accelerator.
Poster WEPGF034 [0.322 MB]
WEPGF155 | Improving Software Services Through Diagnostic and Monitoring Capabilities | 1070 |
CERN's Accelerator Controls System is built upon a large set of software services which are vital for daily operations. It is important to instrument these services with sufficient diagnostic and monitoring capabilities to reduce the time to locate a problem and to enable pre-failure detection by surveillance of process-internal information. The main challenges here are the diversity of programs (C/C++ and Java), real-time constraints, the distributed environment and diskless systems. This paper describes which building blocks have been developed to collect process metrics and logs, software deployment and release information, and how equipment/software experts today have simple and time-saving access to them using the DIAMON console. This includes the possibility to remotely inspect the process (build time, version, start time, counters, etc.) and change its log levels for more detailed information.
THHD3O01 | Control Systems for Spallation Target in China Initiative Accelerator Driven System | 1147 |
In this paper, we report the design of the control system for the spallation target in the China initiative accelerator driven sub-critical (ADS) system, where a heavy-metal target located vertically at the centre of a sub-critical reactor core is bombarded vertically by the high-energy protons from an accelerator. The main functions of the control system for the target are to monitor and control the thermal hydraulics, the neutron flux, and the accelerator-target interface. The first function is to control the components in the primary and secondary loops, such as pumps, heat exchangers, valves, sensors, etc. For the commissioning measurements of the accelerator, the second function is to monitor the neutrons from the spallation target. A three-layer architecture has been used in the control system. In the middle network layer, in order to increase the network reliability, a redundant Ethernet based on the Ethernet ring protection protocol has been considered. In the bottom equipment layer, the equipment controls for the above-mentioned functions have been designed. Finally, because the main objective of the target is to integrate the accelerator and the reactor into one system, the integration of the accelerator's control system and the reactor's instrumentation and controls into the target's control system is discussed.
Slides THHD3O01 [0.628 MB]
THHD3O03 | A Dual-mode Measurement and Control System for High Intensity D-T Fusion Neutron Generator | |
Funding: 1. Strategic Priority Science & Technology Program of the Chinese Academy of Sciences (No. XDA03040000); 2. ITER 973 Program (No. 2014GB112001).
The High Intensity D-T Fusion Neutron Generator (HINEG) is an accelerator-based D-T fusion neutron facility whose designed neutron yield is the highest among neutron generators in China, and it can be operated in continuous and pulsed modes. A dual-mode control system has been designed to achieve remote monitoring and control of HINEG. Distributed control technology is adopted because the devices are dispersed along the beam line in different locations. A dual-mode, user-friendly operator interface has been developed so that operators can manage the devices conveniently through the intranet. Additionally, a multi-objective, multi-parameter safety interlock system built around a safety PLC protects the devices, controllers and operators. The safety interlock system can be triggered by the intelligent fault diagnosis system to terminate HINEG operation and raise an alarm within tens of milliseconds if the interlocking constraints are violated. Because the controllers and devices operate under high voltages of about 400 kV, strong electromagnetic fields and nuclear radiation, many anti-interference measures have been taken in the control system.
||
![]() |
Slides THHD3O03 [36.679 MB] | |
Export • | reference for this paper using ※ BibTeX, ※ LaTeX, ※ Text/Word, ※ RIS, ※ EndNote (xml) | |
THHD3O05 | Standards-Based Open-Source PLC Diagnostics Monitoring | 1151 |
PLCs are widely used to control and monitor industrial processes at CERN. Since these PLCs fulfill critical functions, they must be placed under permanent monitoring. However, due to their proprietary architecture, it is difficult to both monitor the status of these PLCs using vendor-provided software packages and integrate the resulting data with the CERN accelerator infrastructure, which itself relies on CERN-specific protocols. This paper describes the architecture of a stand-alone "PLC diagnostics monitoring" Linux daemon which provides live diagnostics information through standard means and protocols (file logging, CERN protocols, Java Monitoring Extensions). This information is currently consumed by the supervision software which is used by the standby service to monitor the status of critical industrial applications in the LHC and by the monitoring console used by the LHC operators. Both applications are intensively used to monitor and diagnose critical PLC hardware running all over CERN.
Slides THHD3O05 [1.057 MB]
THHD3O06 | Overview of the Monitoring Data Archive used on MeerKAT | 1155 |
Funding: SKA South Africa, National Research Foundation of South Africa, Department of Science and Technology.
MeerKAT, the 64-receptor radio telescope being built in the Karoo, South Africa, by Square Kilometre Array South Africa (SKA SA), comprises a large number of components. All components are interfaced to the Control and Monitoring (CAM) system via the Karoo Array Telescope Communication Protocol (KATCP). KATCP is used extensively for internal communications between CAM components and other subsystems. A KATCP interface exposes requests and sensors. Sampling strategies are set on sensors, ranging from several updates per second to infrequent updates. The sensor samples are of multiple types, from small integers to text fields. As the various components react to user input and sensor samples, the samples with timestamps need to be permanently stored and made available for scientists, engineers and operators to query and analyse. This paper presents how the storage infrastructure (dubbed Katstore) manages the volume, velocity and variety of this data. Katstore comprises several stages of data collection and transportation. The stages move the data from the monitoring nodes to the storage node, to permanent storage, and on to off-site storage. Additional information (e.g. type, description, units) about each sensor is stored with the samples.
Slides THHD3O06 [29.051 MB]
THHD3O08 | Upgrades to the Infrastructure and Management of the Operator Workstations and Servers for Run 2 of the CERN Accelerator Complex | 1158 |
The Controls Group of the CERN Beams Department provides more than 400 operator workstations in the CERN Control Centre (CCC) and technical buildings of the accelerators, plus 300 servers in the server room (CCR) of the CCC. During the long shutdown of the accelerators that started in February 2013, many upgrades were done to improve this infrastructure in view of the higher-energy LHC run. The Engineering Department improved the electrical supply with fully redundant UPS, on-site diesel generators and for the CCR, water and air cooling systems. The Information Technology Department increased network bandwidth for the servers by a factor of 10 and introduced a pilot multicast service for the video streaming of the accelerator status displays and beam cameras. The Controls Group removed dependencies on network file systems for the operator accounts they manage for the Linacs, Booster, PS, ISOLDE, AD, CTF3, SPS, LHC and cryogenics. It also moved away from system administration based on shell scripts to using modern tools like version-controlled Ansible playbooks, which are now used for installation, day-to-day re-configuration and staged updates during technical stops.
Slides THHD3O08 [21.308 MB]