Paper | Title | Other Keywords | Page |
---|---|---|---|
MOCOAAB02 | Design and Status of the SuperKEKB Accelerator Control System | controls, timing, EPICS, interface | 4 |
SuperKEKB is the upgrade of the KEKB asymmetric-energy e+e− collider for the B-factory experiment in Japan, designed to achieve a luminosity 40 times higher than the world record set by KEKB. The KEKB control system was based on EPICS at the equipment layer and scripting languages at the operation layer. The SuperKEKB control system retains those features, while additional technologies are introduced for successful operation at such a high luminosity. In the accelerator control network system, we introduce 10GbE for wider-bandwidth data transfer and redundant configurations for reliability; network security is also enhanced. For the SuperKEKB construction, a wireless network has been installed in the beamline tunnel. The timing system requires a new configuration for positron beams. We have developed a faster-response beam abort system, interface modules to control thousands of magnet power supplies, and a monitoring system for the final-focus superconducting magnets to assure stable operation. We also introduce an EPICS embedded PLC, in which EPICS runs directly on a CPU module. The design and status of the SuperKEKB accelerator control system will be presented.
Slides MOCOAAB02 [5.930 MB]
MOCOAAB07 | Real-Time Control for KAGRA, a 3 km Cryogenic Gravitational Wave Detector in Japan | controls, EPICS, real-time, feedback | 23 |
KAGRA is a 3 km cryogenic interferometer for gravitational wave detection located underground in the Kamioka mine in Japan. Next-generation large-scale interferometric gravitational wave detectors require very complicated control topologies for the optical path lengths between mirrors, and very low-noise feedback controls, in order to detect the extremely tiny relative motion of the mirrors excited by gravitational waves. The interferometer consists of a Michelson interferometer with Fabry-Perot cavities in its arms, plus two additional mirrors implementing the so-called power recycling and resonant sideband extraction techniques. In total, five length degrees of freedom among the seven mirrors must be controlled simultaneously, and this control must be maintained continuously during the observation of gravitational waves. We are currently developing a computer-based real-time control system for KAGRA. In this talk, we report how the control system works.
Slides MOCOAAB07 [8.536 MB]
MOCOBAB01 | New Electrical Network Supervision for CERN: Simpler, Safer, Faster, and Including New Modern Features | status, controls, operation, framework | 27 |
In 2012, an effort started to replace the ageing electrical supervision system (managing more than 200,000 tags) currently in operation with a WinCC OA-based supervision system, in order to unify the monitoring systems used by CERN operators and to leverage the internal knowledge and development of the products (JCOP, UNICOS, etc.). Along with the classical functionality of a typical SCADA system (alarms, events, trending, archiving, access control, etc.), the supervision of the CERN electrical network requires a set of domain-specific applications gathered under the name EMS (Energy Management System). Such applications include network coloring, state estimation, power flow calculations, contingency analysis, optimal power flow, etc. Additionally, as electrical power is a critical service for CERN, high availability of its infrastructure, including its supervision system, is required. The supervision system is therefore redundant, along with a disaster recovery system which is itself redundant. In this paper, we present the overall architecture of the future supervision system, with an emphasis on the parts specific to the supervision of the electrical network.
Slides MOCOBAB01 [1.414 MB]
MOMIB01 | Sirius Control System: Conceptual Design | controls, EPICS, interface, operation | 51 |
Sirius is a new 3 GeV synchrotron light source currently being designed at the Brazilian Synchrotron Light Laboratory (LNLS) in Campinas, Brazil. The control system will be heavily distributed and digitally connected to all equipment in order to avoid analog signal cables. A three-layer control system is being planned. The equipment layer uses RS485 serial networks running at 10 Mbps with a very light proprietary protocol in order to achieve good performance. The middle layer, interconnecting these serial networks, is based on single-board computers, PCs and commercial switches. The operation layer will be composed of PCs running the control system's client programs. A special topology will be used for the fast orbit feedback, with one 10 Gbps switch between the beam position monitor electronics and a workstation that calculates the corrections for the orbit correctors. At the moment, EPICS is the best candidate to manage the control system.
Slides MOMIB01 [0.268 MB]
Poster MOMIB01 [0.580 MB]
MOPPC016 | IFMIF EVEDA RFQ Local Control System to Power Tests | controls, EPICS, rfq, software | 89 |
In the IFMIF EVEDA project, a normal-conducting Radio Frequency Quadrupole (RFQ) is used to bunch and accelerate a 130 mA steady beam to 5 MeV. The RFQ cavity is divided into three structures, named super-modules, each of which is divided into 6 modules, for a total of 18 modules in the overall structure. The final three modules have to be tested at high power to validate the most critical RF components of the RFQ cavity and, on the other hand, to test the performance of the main ancillaries that will be used in the IFMIF EVEDA project (vacuum manifold system, tuning system and control system). The last three modules were chosen because they will operate in the most demanding conditions in terms of power density (100 kW/m) and surface electric field (1.8 Ekp). The Experimental Physics and Industrial Control System (EPICS) environment [1] provides the framework for monitoring any equipment connected to it. This paper reports on the use of this framework for the RFQ power tests at the Legnaro National Laboratories [2][3].
[1] http://www.aps.anl.gov/epics/ [2] http://www.lnl.infn.it/ [3] http://www.lnl.infn.it/~epics/joomla/
MOPPC023 | Centralized Data Engineering for the Monitoring of the CERN Electrical Network | interface, database, framework, controls | 107 |
The monitoring and control of the CERN electrical network involves a large variety of devices and software, ranging from acquisition devices to data concentrators, supervision systems and power network simulation tools. The main issue faced today in the engineering of such a large and heterogeneous system, comprising more than 20,000 devices and 200,000 tags, is that every device and software package has its own data engineering tool while much of the configuration data has to be shared between two or more devices: the same data needs to be entered manually into the different tools, leading to duplication of effort and many inconsistencies. This paper presents a tool called ENSDM, aimed at centralizing all the data needed to engineer the monitoring and control infrastructure into a single database, from which the configuration of the various devices is extracted automatically. This approach allows the user to enter the information only once and guarantees the consistency of the data across the entire system. The paper focuses more specifically on the configuration of the remote terminal unit (RTU) devices, the global supervision system (SCADA) and the power network simulation tools.
Poster MOPPC023 [1.253 MB]
MOPPC054 | Application of Virtualization to CERN Access and Safety Systems | hardware, software, controls, interface | 214 |
Access and safety systems are by nature heterogeneous: different kinds of hardware and software, commercial and home-grown, are integrated to form a working system. This implies many different application services, for which separate physical servers are allocated to keep the various subsystems isolated. Each such application server requires special expertise to install and manage. Furthermore, physical hardware is relatively expensive and presents a single point of failure to any of the subsystems, unless designed to include often complex redundancy protocols. We present the Virtual Safety System Infrastructure project (VSSI), whose aim is to utilize modern virtualization techniques to abstract application servers from the actual hardware. The virtual servers run on robust and redundant standard hardware, where snapshotting and backing up of virtual machines can be carried out to maximize availability. Uniform maintenance procedures are applicable to all virtual machines on the hypervisor level, which helps to standardize maintenance tasks. This approach has been applied to the servers of the CERN PS and LHC access systems as well as to the CERN Safety Alarm Monitoring System (CSAM).
Poster MOPPC054 [1.222 MB]
MOPPC055 | Revisiting CERN Safety System Monitoring (SSM) | monitoring, PLC, database, status | 218 |
CERN Safety System Monitoring (SSM) has been monitoring the state of health of the various access and personnel safety systems at CERN for more than three years. SSM implements monitoring of different operating systems, network equipment, storage, and special devices like PLCs, front ends, etc. It is based on the monitoring framework Zabbix, which supports alert notifications, issue escalation, reporting, distributed management, and automatic scalability. The emphasis of SSM is on the needs of maintenance and system operation, where timely and reliable feedback directly from the systems themselves is important to quickly pinpoint immediate or creeping problems. A new application of SSM is to anticipate availability problems through predictive trending, which makes it possible to visualize and manage upcoming operational issues and infrastructure requirements. Work is underway to extend the scope of SSM to all access and safety systems managed by the access and safety team, with upgrades to the monitoring methodology as well as to the visualization of results.
Poster MOPPC055 [1.537 MB]
MOPPC059 | Refurbishing of the CERN PS Complex Personnel Protection System | controls, PLC, interface, radiation | 234 |
In 2010, the refurbishment of the Personnel Protection System of the CERN Proton Synchrotron complex primary beam areas started. This large-scale project was motivated by the obsolescence of the existing system and the objective of rationalizing the personnel protection systems across the CERN accelerators to meet the latest recommendations of the regulatory bodies of the host states. A new generation of access points providing biometric identification, authorization and co-activity clearance, reinforced passage check, and radiation protection related functionality will control access to the radiologically classified areas. Using a distributed fail-safe PLC architecture and a diversely redundant logic chain, the cascaded safety system guarantees personnel safety in the 17 machines of the PS complex by acting on the important safety elements of each zone and on the adjacent upstream ones. It covers radiological and activated-air hazards from circulating beams as well as laser and electrical hazards. This paper summarizes the functionality provided, the new concepts introduced and the functional safety methodology followed to deal with the renovation of this 50-year-old facility.
Poster MOPPC059 [2.874 MB]
MOPPC079 | CODAC Core System, the ITER Software Distribution for I&C | software, controls, EPICS, interface | 281 |
In order to support the adoption of the ITER standards for Instrumentation & Control (I&C) and to prepare for the integration of the plant system I&C developed by many distributed suppliers, the ITER Organization is providing I&C developers with a software distribution named CODAC Core System. This software has been released in incremental versions since 2010, starting with preliminary releases and with stable versions since 2012. It includes the operating system, the EPICS control framework and the tools required to develop and test the software for controllers, central servers and operator terminals. Some components have been adopted from the EPICS community and adapted to ITER needs in collaboration with the other users; this is the case for the CODAC services for operation, such as the operator HMI, alarms and archives. Other components have been developed specifically for the ITER project; this applies to the Self-Description Data configuration tools. This paper describes the current version (4.0) of the software, released in February 2013, with details on the components and on the process for its development, distribution and support.
Poster MOPPC079 [1.744 MB]
MOPPC097 | The FAIR Control System - System Architecture and First Implementations | timing, controls, software, operation | 328 |
The paper presents the architecture of the control system for the Facility for Antiproton and Ion Research (FAIR), currently under development. The FAIR control system comprises the full electronics, hardware and software needed to control, commission and operate the FAIR accelerator complex for multiplexed beams. It takes advantage of collaborations with CERN in using proven framework solutions like FESA, LSA and White Rabbit. The equipment layer consists of equipment interfaces, embedded system controllers and software representations of the equipment (FESA). A dedicated real-time network based on White Rabbit is used to synchronize and trigger actions at the equipment level. The middle layer provides service functionality both to the equipment layer and to the application layer through the IP control system network. LSA is used for settings management. The application layer comprises the applications for operators, typically written in Java as GUI applications or command-line tools. To validate the concepts, FAIR's proton injector at CEA (France) and CRYRING at GSI will be commissioned in 2014 with reduced functionality of the proposed FAIR control system stack.
Poster MOPPC097 [2.717 MB]
MOPPC098 | The EPICS-based Accelerator Control System of the S-DALINAC | EPICS, controls, interface, hardware | 332 |
Funding: Supported by DFG through CRC 634. The S-DALINAC (Superconducting Darmstadt Linear Accelerator) is an electron accelerator for energies from 3 MeV up to 130 MeV. It supplies beams of either spin-polarized or unpolarized electrons for experiments in the field of nuclear structure physics and related areas of fundamental research. The migration of the accelerator control system to an EPICS-based system started three years ago and has essentially been done in parallel with regular operation. While not yet finished, it already pervades all the different aspects of the control system. The hardware is interfaced by EPICS Input/Output Controllers. User interfaces are designed with Control System Studio (CSS) and BOY (Best Operator Interface Yet). The latest activities aim at completing the migration of the beamline devices to EPICS. Furthermore, higher-level aspects can now be approached more intensively; this includes the introduction of efficient alarm-handling capabilities as well as making use of interconnections between formerly separated parts of the system. This contribution outlines the architecture of the S-DALINAC's accelerator control system and reports on the latest achievements in detail.
Poster MOPPC098 [26.010 MB]
MOPPC101 | The Control Architecture of Large Scientific Facilities: ITER and LHC lessons for IFMIF | controls, interface, neutron, EPICS | 344 |
The development of an intense source of neutrons with the spectrum of DT fusion reactions is indispensable for qualifying suitable materials for the first wall (FW) of the nuclear vessel in fusion power plants. The FW, an overlap of different layers, is essential in future reactors: it will convert the 14 MeV neutrons to thermal energy and generate tritium to feed the DT reactions. IFMIF will reproduce those irradiation conditions with two parallel 40 MeV CW deuteron linacs at 2×125 mA beam current, colliding on a 25 mm thick lithium screen flowing at 15 m/s and producing a neutron flux of 10¹⁸ m⁻²s⁻¹ in a 500 cm³ volume with a broad energy peak at 14 MeV. The design of the control architecture of a large scientific facility depends on the particularities of the processes in place and the volume of data generated, but it is also tied to project management issues. LHC and ITER are two complex facilities, each with ~10⁶ process variables, with different control system strategies, from the modular approach of CODAC to the more integrated implementation of the CERN Technical Network. This paper analyzes both solutions and extracts conclusions to be applied to the future control architecture of IFMIF.
Poster MOPPC101 [0.297 MB]
MOPPC103 | Status of the RIKEN RI Beam Factory Control System | controls, EPICS, ion, cyclotron | 348 |
The RIKEN Radioactive Isotope Beam Factory (RIBF) is a heavy-ion accelerator facility for producing unstable nuclei and studying their properties. Since the first beam extraction from the Superconducting Ring Cyclotron (SRC), the final-stage accelerator of RIBF, in 2006, several kinds of upgrades have been performed. We present here two large-scale experimental instrumentation projects to be introduced in RIBF that offer new types of experiments. One is an isochronous storage ring aimed at precise mass measurements of short-lived nuclei (Rare-RI Ring), and the other is the construction of a new beam transport line dedicated to more effective generation of seaweed mutations induced by energetic heavy ions. To control them, the EPICS-based RIBF control system is now being upgraded. Each device used in the new experimental instrumentation is controlled by the same kinds of controllers as the existing ones, such as Programmable Logic Controllers (PLCs). On the other hand, we have introduced Control System Studio (CSS) for the operator interface for the first time. We plan to roll out CSS step by step, not only for the new projects but also for the existing RIBF control system.
Poster MOPPC103 [2.446 MB]
MOPPC104 | Design and Implementation of Sesame's Booster Ring Control System | controls, booster, EPICS, PLC | 352 |
SESAME is a synchrotron light source under installation in Allan, Jordan. It consists of a 2.5 GeV storage ring, an 800 MeV booster synchrotron and a 22 MeV microtron as pre-injector. SESAME has obtained first beam from the microtron; the booster is expected to be commissioned by the end of 2013, the storage ring by the end of 2015 and the first beamlines in 2016. This paper presents the construction of the control system of the SESAME booster. EPICS is the main control software tool; GUIs are built with EDM, which is being replaced by CSS. PLCs are used mainly for the interlocks of the vacuum system and the magnet power supplies, and in diagnostics for fluorescent screens and camera switches. Soft IOCs are used for various serial devices (e.g. vacuum gauge controllers) through Moxa terminal servers, and for booster power supplies through Ethernet connections. Libera Electron modules with EPICS tools (IOCs and GUIs) from Diamond Light Source are used for beam position monitoring. The timing system consists of one EVG and three EVR cards from Micro-Research Finland (MRF). A distributed version control repository using Git is used at SESAME to track development of the control subsystems.
Poster MOPPC104 [1.776 MB]
MOPPC112 | Current Status and Perspectives of the SwissFEL Injector Test Facility Control System | controls, EPICS, operation, software | 378 |
The Free Electron Laser (SwissFEL) Injector Test Facility at the Paul Scherrer Institute has been in operation for more than three years. The Injector Test Facility machine is a valuable development and validation platform for all major SwissFEL subsystems, including controls. Based on the experience gained from supporting Test Facility operations, the paper presents current, and some prospective, controls solutions focusing on the future SwissFEL project.
Poster MOPPC112 [1.224 MB]
MOPPC130 | A New Message-Based Data Acquisition System for Accelerator Control | data-acquisition, database, controls, embedded | 413 |
The data logging system for the SPring-8 accelerator complex has been operating for 16 years as a part of the MADOCA system. Collector processes periodically request distributed computers to collect sets of data via the synchronous ONC-RPC protocol at fixed cycles. We also developed the separate MyDAQ system for casual or temporary data acquisition, in which a data acquisition process running on a local computer pushes one BSD socket stream to a server at arbitrary times. Its "one stream per signal" strategy made data management simple, but the system has no scalability. We have now developed a new data acquisition system for a new generation of accelerator projects which combines a scale beyond that of MADOCA with MyDAQ's simplicity. The new system, based on the ZeroMQ messaging library and the MessagePack serialization library, offers high availability, asynchronous messaging, flexibility in data representation, and scalability. Its input/output plug-ins accept multiple protocols and send data to various data systems. This paper describes the design, implementation, performance, reliability and deployment of the system.
Poster MOPPC130 [0.197 MB]
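As a rough illustration of the messaging pattern described in MOPPC130 (a sketch under assumed names only: the endpoint, signal name and record layout are invented, not the actual implementation), a front-end process can serialize each reading with MessagePack and push it to a collector over a ZeroMQ socket:

```python
# Hedged sketch of a ZeroMQ + MessagePack data producer; endpoint and
# signal names are hypothetical.
import time
import msgpack
import zmq

ctx = zmq.Context()
sock = ctx.socket(zmq.PUSH)           # asynchronous, buffered sending
sock.connect("tcp://collector:5555")  # assumed collector endpoint

for _ in range(10):
    record = {
        "signal": "ring/rf/forward_power",  # hypothetical signal name
        "ts": time.time(),
        "value": [1.2, 3.4],  # flexible payload, not just one scalar
    }
    sock.send(msgpack.packb(record))  # one self-describing binary message
    time.sleep(1.0)
```

A collector would bind a matching PULL socket and decode with msgpack.unpackb, which is what gives the "one message per sample, any data shape" flexibility the abstract mentions.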
MOPPC132 | Evaluating Live Migration Performance of a KVM-Based EPICS | EPICS, software, Linux, controls | 420 |
In this paper we present results from evaluating the live migration performance of a KVM-based EPICS installation on PCs, where the relevant PC characteristics are the performance of storage, network and CPU. A lightweight demonstration EPICS control system was built for the evaluation. For time measurement, we set up a monitor PV that automatically changes its value at regular intervals. Data Browser displays the values of "live" PVs and is used to measure the time; from this we obtain an estimate of the live migration time.
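The timing idea generalizes beyond Data Browser. Below is a hedged sketch of the same measurement principle using pyepics (the PV name and durations are assumptions, not the authors' setup): record the arrival time of each update from a PV that changes at a fixed rate, then take the largest gap as an upper bound on the interruption caused by the migration.

```python
# Hedged sketch: estimate live-migration dead time from monitor update gaps.
import time
import epics  # pyepics

arrivals = []

def on_update(pvname=None, value=None, **kw):
    arrivals.append(time.time())  # timestamp each monitor callback

pv = epics.PV("DEMO:HEARTBEAT", callback=on_update)  # hypothetical PV
time.sleep(120)  # trigger the KVM live migration while this runs

if len(arrivals) > 1:
    gaps = [b - a for a, b in zip(arrivals, arrivals[1:])]
    print("largest update gap: %.3f s" % max(gaps))
```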
MOPPC133 | Performance Improvement of KSTAR Networks for Long Distance Collaborations | experiment, site, interface | 423 |
KSTAR (Korea Superconducting Tokamak Advanced Research) has completed its 5th campaign. Every year it produces an enormous amount of data that needs to be forwarded to international collaborators shot by shot for run-time analysis: the analysis of one shot helps in deciding the parameters for the next. Many shots are conducted in a day, so this communication needs to be very efficient. Moreover, the amount of KSTAR data and the number of international collaborators are increasing every year. With such big data and collaborators spread all over the world, run-time communication becomes a challenge, and efficient ways of transferring data are needed. In this paper we therefore optimize the paths among the internal and external networks of KSTAR for efficient communication. We also discuss transmission solutions for the construction of this environment and evaluate the performance for long-distance collaborations.
Poster MOPPC133 [1.582 MB]
MOPPC137 | IEC 61850 Industrial Communication Standards under Test | controls, framework, Ethernet, software | 427 |
IEC 61850, produced by the International Electrotechnical Commission's Technical Committee 57, defines an international and standardized methodology to design electric power automation substations. It specifies a common way of communicating and integrating heterogeneous systems based on multivendor intelligent electronic devices (IEDs). These are connected to an Ethernet network, and according to IEC 61850 their abstract data models are mapped to specific communication protocols: MMS, GOOSE, SV and, possibly in the future, Web Services. All of them can run over TCP/IP networks, so they can easily be integrated with enterprise resource planning networks; while this integration provides economic and functional benefits for the companies, it also exposes the industrial infrastructure to existing cyber-attacks. Within the Openlab collaboration between CERN and Siemens, a test bench has been developed specifically to evaluate the robustness of industrial equipment (TRoIE). This paper describes the design and implementation of this testing framework, focusing on the implementations of the IEC 61850 protocols mentioned above.
Poster MOPPC137 [1.673 MB]
MOPPC140 | High-Availability Monitoring and Big Data: Using Java Clustering and Caching Technologies to Meet Complex Monitoring Scenarios | monitoring, distributed, software, controls | 439 |
Monitoring and control applications face ever more demanding requirements: as both data sets and data rates continue to increase, non-functional requirements such as performance, availability and maintainability become more important. C2MON (CERN Control and Monitoring Platform) is a monitoring platform developed at CERN over the past few years. Making use of modern Java caching and clustering technologies, the platform supports multiple deployment architectures, from a simple 3-tier system to highly complex clustered solutions. In this paper we consider various monitoring scenarios and how the C2MON deployment strategy can be adapted to meet them.
Poster MOPPC140 [1.382 MB]
MOPPC145 | Mass-Accessible Controls Data for Web Consumers | controls, framework, operation, status | 449 |
The past few years in computing have seen the emergence of smart mobile devices, sporting multi-core embedded processors, powerful graphical processing units, and pervasive high-speed network connections (via WiFi or EDGE/UMTS). The relatively limited capacity of these devices requires relying on dedicated embedded operating systems (such as Android or iOS), while their diverse form factors (from mobile phone screens to large tablet screens) require the adoption of programming techniques and technologies that are both resource-efficient and standards-based for better platform independence. We consider the options available for hybrid desktop/mobile web development today, from native software development kits (Android, iOS) to platform-independent solutions (mobile Google Web Toolkit [3], jQuery Mobile, Apache Cordova [4], OpenSocial). Through the authors' successive attempts at implementing a range of solutions for broadcasting LHC-related data, from data acquisition systems and LHC middleware such as DIP and CMW on to the World Wide Web, we investigate which choices are valid and which pitfalls to avoid in today's web development landscape.
Poster MOPPC145 [1.318 MB]
MOPPC149 | A Messaging-Based Data Access Layer for Client Applications | controls, data-acquisition, operation, interface | 460 |
Funding: US Department of Energy. The Fermilab accelerator control system has recently integrated use of a publish/subscribe infrastructure as a means of communication between Java client applications and data acquisition middleware. This supersedes a previous implementation based on Java Remote Method Invocation (RMI). The RMI implementation had issues with network firewalls, misbehaving client applications affecting the middleware, lack of portability to other platforms, and cumbersome authentication. The new system uses the AMQP messaging protocol and RabbitMQ data brokers. This decouples the client and the middleware, is more portable to other languages, and has proven to be much more reliable. A Java client library provides for single synchronous operations as well as periodic data subscriptions. The new system is now used by the general synoptic display manager application as well as a number of new custom applications. A web service has also been written that provides easy access to control system data from many languages.
Poster MOPPC149 [4.654 MB]
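For illustration only (the abstract names no APIs, so the exchange, routing key and JSON payload below are assumptions, not Fermilab's client library), a subscriber to such a RabbitMQ-based data layer could look like this in Python with pika:

```python
# Hedged sketch of an AMQP subscriber for periodic device readings.
import json
import pika

conn = pika.BlockingConnection(pika.ConnectionParameters("broker.example.org"))
ch = conn.channel()
ch.exchange_declare(exchange="readings", exchange_type="topic")
q = ch.queue_declare(queue="", exclusive=True).method.queue  # private queue
ch.queue_bind(exchange="readings", queue=q, routing_key="linac.#")

def on_message(channel, method, properties, body):
    print(json.loads(body))  # one periodic reading per message

ch.basic_consume(queue=q, on_message_callback=on_message, auto_ack=True)
ch.start_consuming()
```

Because the broker sits between client and middleware, a misbehaving client only affects its own queue, which is one way the decoupling claimed above can be realized.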
MOPPC150 | Channel Access in Erlang | EPICS, controls, framework, detector | 462 |
We have developed an Erlang language implementation of the Channel Access protocol. Included are low-level functions for encoding and decoding Channel Access protocol network packets, as well as higher-level functions for monitoring or setting EPICS Process Variables. This provides access to EPICS process variables for the Fermilab Acnet control system via our Erlang-based front-end architecture without having to interface to C/C++ programs and libraries. Erlang is a functional programming language originally developed for real-time telecommunications applications. Its network programming features and list management functions make it particularly well suited to the task of managing multiple Channel Access circuits and PV monitors.
Poster MOPPC150 [0.268 MB]
TUCOAAB03 | Approaching the Final Design of ITER Control System | controls, plasma, interface, operation | 490 |
The control system of ITER (CODAC) is subject to a final design review in early 2014, with a second final design review covering high-level applications scheduled for 2015. The system architecture has been established and all plant systems required for first plasma have been identified. Interfaces are being detailed, a key activity in preparing for integration. A built-to-print design of the network infrastructure covering the full site is in place, and installation is expected to start next year. The common software deployed in the local plant systems as well as in the central system, called CODAC Core System and based on EPICS, has reached maturity, providing most of the required functions. It is currently used by 55 organizations throughout the world involved in the development of plant systems and ITER controls. The first plant systems are expected to arrive on site in 2015, starting a five-year integration phase to prepare for first plasma operation. In this paper we report on the progress made on the ITER control system over the last two years and outline the plans and strategies that will allow us to integrate hundreds of plant systems procured in kind by the seven ITER members.
Slides TUCOAAB03 [5.294 MB]
TUCOAAB04 | The MedAustron Accelerator Control System: Design, Installation and Commissioning | controls, software, ion, operation | 494 |
MedAustron is a light-ion accelerator cancer treatment facility built on a greenfield site in Austria. The accelerator, its control system and its protection systems have been designed under the guidance of CERN within the MedAustron-CERN collaboration. Building construction was completed in October 2012 and accelerator installation started in December 2012; readiness for accelerator control deployment was reached in January 2013. This contribution gives an overview of the accelerator control system project. It reports on the current status of commissioning, including the ion sources, low-energy beam transfer and injector. The major challenge so far has been the readiness of the industry-supplied IT infrastructure, on which accelerator controls relies heavily due to its distributed and virtualized architecture. In the end, the control system was successfully released for accelerator commissioning within time and budget. Delivering a highly performant control system that copes with thousands of cycles in real time and covers both interactive commissioning and unattended medical operation were, by comparison, mere technical aspects to be solved during the development phase.
Slides TUCOAAB04 [2.712 MB]
TUMIB06 | Development of a Scalable and Flexible Data Logging System Using NoSQL Databases | database, controls, data-acquisition, operation | 532 |
We have developed a scalable and flexible data logging system for SPring-8 accelerator control. The current SPring-8 data logging system, powered by a relational database management system (RDBMS), has been storing log data for 16 years. With that experience we recognized the lack of flexibility of an RDBMS for data logging, such as limited adaptability of data formats and data acquisition cycles, complexity in data management, and no horizontal scalability. To solve these problems we chose a combination of two NoSQL databases for the new system: Redis for the real-time data cache and Apache Cassandra for the perpetual archive. Logged data are stored in both databases, serialized with MessagePack, with a flexible data format that is not limited to single integer or real values. Apache Cassandra is a scalable and highly available column-oriented database, well suited to time-series logging data. Redis is a very fast in-memory key-value store that complements Cassandra's eventually consistent model. We developed the data logging system with ZeroMQ messaging and have proved its high performance and reliability in a long-term evaluation. It will be released for part of the control system this summer.
Slides TUMIB06 [0.182 MB]
Poster TUMIB06 [0.525 MB]
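A minimal sketch of the dual-store write path described above (hosts, keyspace and table schema are assumptions, not the SPring-8 implementation): each MessagePack-serialized sample goes to Redis as the latest value and to Cassandra as a time-series row.

```python
# Hedged sketch: Redis for the real-time cache, Cassandra for the archive.
import time
import msgpack
import redis
from cassandra.cluster import Cluster

cache = redis.Redis(host="cache.example.org")
session = Cluster(["archive.example.org"]).connect("logging")  # keyspace

def log_sample(signal, value):
    blob = msgpack.packb({"ts": time.time(), "value": value})
    cache.set(signal, blob)  # overwrite: latest sample per signal
    session.execute(
        "INSERT INTO samples (signal, ts, data) VALUES (%s, %s, %s)",
        (signal, int(time.time() * 1000), blob),  # append: full history
    )

log_sample("storage_ring/vacuum/cc01", 3.2e-8)
```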
TUPPC022 | Centralized Software and Hardware Configuration Tool for Large and Small Experimental Physics Facilities | software, database, controls, EPICS | 591 |
All control system software, from hardware drivers up to user-space PC applications, needs configuration information to work properly. This information includes such parameters as channel calibrations, network addresses and server responsibilities. Each software subsystem requires only part of the configuration parameters, but storing them separately from the whole configuration would cause usability and reliability issues. On the other hand, storing all configuration in one centralized database would slow software development by adding extra central database querying. This paper proposes a configuration tool that combines the advantages of both approaches. First, it uses a centralized configurable graph database that can be manipulated through a web interface. Second, it can automatically export configuration information from the centralized database to any local configuration storage. The tool has been developed at BINP (Novosibirsk, Russia) and is used to configure the VEPP-2000 electron-positron collider (BINP, Russia), the Electron Linear Induction Accelerator (Snezhinsk, Russia) and the NSLS-II booster synchrotron (BNL, USA).
Poster TUPPC022 [1.441 MB]
TUPPC032 | Database-backed Configuration Service | database, controls, operation, interface | 627 |
Keck Observatory is in the midst of a major telescope control system upgrade. This upgrade includes a new database-backed configuration service which will be used to manage the many aspects of the telescope that need to be configured for its control software (e.g. site parameters, control tuning, limit values), and which keeps the configuration data persistent between IOC restarts. This paper discusses the new configuration service, including its database schema, iocsh API, rich user interface and the many other features it provides. The solution provides automatic time-stamping, a history of all database changes, the ability to snapshot and load different configurations, and triggers to manage the integrity of the data collections. Configuration is based on a simple concept of controllers, components and their associated mapping. The solution also provides a failsafe mode that allows client IOCs to function if there is a problem with the database server. We also discuss why this new service is preferred over the file-based configuration tools that have been used at Keck up to now.
Poster TUPPC032 [0.849 MB]
TUPPC034 | Experience Improving the Performance of Reading and Displaying Very Large Datasets | collider, distributed, instrumentation, software | 630 |
Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. There has been an increasing need over the last 5 years within the BNL accelerator community (primarily within the RF and instrumentation groups) to collect, store and display data at high frequencies (1-10 kHz). Data throughput considerations when storing this data are manageable, but requests to display gigabytes of the collected data can quickly tax the speed at which data can be read from storage, transported over a network, and displayed on a user's computer monitor. This paper reports on efforts to improve the performance of both reading and displaying data collected by our data logging system. Our primary means of improving performance was to build a Data Server: a hardware/software server solution built to respond to client requests for data. Its job is to improve performance by 1) improving the speed at which data is read from disk, and 2) culling the data so that the returned datasets are visually indistinguishable from the requested datasets. This paper reports on statistics that we have accumulated over the last two years showing improved data processing speeds and associated increases in the number and average size of client requests.
Poster TUPPC034 [1.812 MB]
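The culling step can be made concrete. This is a generic min/max decimation sketch (not BNL's server code): each bin of a long waveform is reduced to its minimum and maximum, which renders identically to the full dataset on a pixel-limited display.

```python
# Sketch of display-oriented data culling by per-bin min/max decimation.
import numpy as np

def minmax_decimate(samples, bins=2000):
    """Return a visually equivalent reduction of a long 1-D array."""
    samples = np.asarray(samples)
    usable = len(samples) - len(samples) % bins  # trim to a whole number of bins
    chunks = samples[:usable].reshape(bins, -1)
    out = np.empty(2 * bins, dtype=samples.dtype)
    out[0::2] = chunks.min(axis=1)  # preserves downward spikes
    out[1::2] = chunks.max(axis=1)  # preserves upward spikes
    return out

trace = np.random.normal(size=10_000_000)  # e.g. a long high-rate logger dump
print(len(minmax_decimate(trace)))  # 4000 points instead of 10,000,000
```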
TUPPC045 | Software Development for High Speed Data Recording and Processing | detector, software, monitoring, controls | 665 |
Funding: The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 283745. The European XFEL beam delivery defines a unique time structure that requires acquiring and processing data in short bursts of up to 2700 images every 100 ms. The 2D pixel detectors being developed produce up to 10 GB/s of 1-Mpixel image data. Efficient handling of this huge data volume requires large network bandwidth and computing capabilities. The architecture of the DAQ system is hierarchical and modular. The DAQ network uses 10 GbE switched links to provide high-bandwidth data transport between the front-end interfaces (FEIs), the data-handling PC-layer servers, and the storage and analysis clusters. The front-end interfaces are required to build images acquired during a burst into pulse-ordered image trains and forward them to the PC-layer farm. The PC layer consists of dedicated high-performance computers for raw data monitoring, processing and filtering, and for aggregating data files that are then distributed to online storage and data analysis clusters. In this contribution we give an overview of the DAQ system architecture and communication protocols, as well as the software stack for data acquisition pre-processing, monitoring, storage and analysis.
Poster TUPPC045 [1.323 MB]
TUPPC063 | Control and Monitoring of the Online Computer Farm for Offline Processing in LHCb | controls, monitoring, experiment, interface | 721 |
LHCb, one of the four experiments at the LHC accelerator at CERN, uses approximately 1500 PCs (averaging 12 cores each) for processing the High Level Trigger (HLT) during physics data taking. During periods when data acquisition is not required, most of these PCs are idle, and it is possible to profit from the unused processing capacity to run offline jobs such as Monte Carlo simulation. The LHCb offline computing environment is based on LHCbDIRAC (Distributed Infrastructure with Remote Agent Control). In LHCbDIRAC, job agents are started on worker nodes, pull waiting tasks from the central WMS (Workload Management System) and process them on the available resources. A control system was developed which is able to launch, control and monitor the job agents for the offline data processing on the HLT farm. This control system is based on the existing online system control infrastructure, the PVSS SCADA and the FSM toolkit. It has been used extensively, launching and monitoring 22,000+ agents simultaneously, and more than 850,000 jobs have already been processed on the HLT farm. This paper describes the deployment of and experience with the control system in the LHCb experiment.
Poster TUPPC063 [2.430 MB]
TUPPC096 | Migration from WorldFIP to a Low-Cost Ethernet Fieldbus for Power Converter Control at CERN | Ethernet, controls, interface, software | 805 |
Power converter control in the LHC uses embedded computers called Function Generator/Controllers (FGCs), which are connected to WorldFIP fieldbuses around the accelerator ring. The FGCs are integrated into the accelerator control system by x86 gateway front-end systems running Linux. With the LHC now operational, attention has turned to the renovation of older control systems as well as a new installation for Linac 4, and a new generation of FGC is being deployed to meet the needs of these cycling accelerators. As WorldFIP is very limited in data rate and is unlikely to undergo further development, it was decided to base future installations on an Ethernet fieldbus with standard switches and interface chipsets in both the FGCs and the gateways. The FGC communications protocol that runs over WorldFIP in the LHC was adapted to work over raw Ethernet, with the aim of having a simple solution that easily allows the same devices to operate with either type of interface. This paper describes the evolution of FGC communications from WorldFIP to dedicated Ethernet networks, and presents the results of initial tests, the diagnostic tools, and how real-time power converter control is achieved.
Poster TUPPC096 [1.250 MB]
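To make the "raw Ethernet instead of TCP/IP" idea concrete, here is a conceptual Linux sketch; the EtherType, MAC addresses and command payload are invented, since the real FGC protocol is not specified in the abstract.

```python
# Conceptual sketch of a raw-Ethernet command exchange (Linux, needs root).
import socket

ETHERTYPE = 0x88B5                    # "local experimental" EtherType
DST = bytes.fromhex("0200000000aa")   # hypothetical device MAC address
SRC = bytes.fromhex("020000000001")   # hypothetical gateway MAC address

s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETHERTYPE))
s.bind(("eth0", 0))                   # gateway network interface

payload = b"S POWER.REF 123.4;"       # invented command payload
frame = DST + SRC + ETHERTYPE.to_bytes(2, "big") + payload
s.send(frame)                         # no IP/TCP stack involved

reply = s.recv(1518)                  # one Ethernet frame from the device
print(reply[14:])                     # strip the 14-byte Ethernet header
```

Skipping the IP stack in this way keeps per-frame overhead and jitter low, which is the property that matters for real-time converter control over a dedicated switched network.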
TUPPC110 | Operator Intervention System for Remote Accelerator Diagnostics and Support | controls, operation, EPICS, site | 832 |
In large experimental physics projects such as ITER and LHC, the project is managed by an international collaboration. Similarly, the ILC (International Linear Collider), as a next-generation project, will be started by a collaboration of many institutes from three regions. After such a collaborative construction, collaborators other than the host country will need methods for remote maintenance through the control and monitoring of devices, for example by connecting to the control system network via WAN from their own countries. On the other hand, remote operation of an accelerator via WAN raises practical issues. One of them is that an accelerator is both an experimental device and a radiation generator; moreover, a mistaken remote operation can immediately cause a breakdown. For these reasons, we plan to implement an operator intervention system for remote accelerator diagnostics and support, which resolves the differences between working in the local control room and working from other locations. In this paper we report the system concept, the development status and the future plan.
Poster TUPPC110 [7.215 MB]
TUPPC124 | Distributed Network Monitoring Made Easy - An Application for Accelerator Control System Process Monitoring | monitoring, controls, software, Linux | 875 |
Funding: This work was supported by the U.S. Department of Energy, Office of Nuclear Physics, under Contract No. DE-AC02-06CH11357. As the complexity and scope of distributed control systems increase, so does the need for an ever-increasing level of automated process monitoring. The goal of this paper is to demonstrate one method whereby the SNMP protocol, combined with open-source management tools, can quickly be leveraged to gain critical insight into any complex computing system. Specifically, we introduce an automated, fully customizable, web-based remote monitoring solution which has been implemented at the Argonne Tandem Linac Accelerator System (ATLAS). This collection of tools is not limited to monitoring network infrastructure devices; it also monitors critical processes running on any remote system. The tools and techniques used are typically available pre-installed, or can be downloaded, on several standard operating systems, and in most cases require only a small amount of configuration out of the box. High-level logging, level-checking, alarming, notification and reporting are accomplished with the open-source network management package OpenNMS, and normally require a bare minimum of implementation effort by a non-IT user.
Poster TUPPC124 [0.875 MB]
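As a small example of the kind of SNMP check such a setup automates (the host, community string and OID choice are assumptions, not the ATLAS configuration), querying a device's uptime with pysnmp looks like this:

```python
# Hedged sketch of a single SNMP v2c GET, e.g. for a crate or front-end host.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

error_ind, error_status, error_idx, var_binds = next(getCmd(
    SnmpEngine(),
    CommunityData("public", mpModel=1),             # v2c community string
    UdpTransportTarget(("ioc1.example.org", 161)),  # hypothetical host
    ContextData(),
    ObjectType(ObjectIdentity("SNMPv2-MIB", "sysUpTime", 0)),
))

if error_ind or error_status:
    print("check failed:", error_ind or error_status)  # would raise an alarm
else:
    for name, value in var_binds:
        print(name, "=", value)  # uptime in hundredths of a second
```

A manager such as OpenNMS performs essentially this poll on a schedule, then layers thresholding, escalation and notification on top of the raw values.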
TUCOCB04 | EPICS Version 4 Progress Report | EPICS, controls, database, operation | 956 |
EPICS Version 4 is the next major revision of the Experimental Physics and Industrial Control System, a widely used software framework for controls in large facilities, accelerators and telescopes. The primary goal of Version 4 is to improve support for scientific applications by augmenting the control-centered EPICS Version 3 with an architecture that allows building scientific services on top of it. Version 4 provides a new standardized wire protocol, support for structured types, and parametrized queries. The long-term plans also include a revision of the IOC core layer. The first set of services, such as directory, archive retrieval and save-set services, aims to improve the current EPICS architecture and enable interoperability. The first services and applications are now being deployed in running facilities. We present the current status of EPICS V4, the interoperation of EPICS V3 and V4, and how to create services such as accelerator modelling and large database access. These enable operators and physicists to write thin and powerful clients to support commissioning, beam studies and operations, and open up the possibility of sharing applications between different facilities.
Slides TUCOCB04 [1.937 MB]
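For a feel of the client side, here is a hedged sketch using the p4p Python client for pvAccess (the V4 wire protocol); the PV name is hypothetical and this is an illustration, not code from the paper.

```python
# Hedged sketch of a pvAccess client using p4p; PV names are invented.
from p4p.client.thread import Context

ctx = Context("pva")  # pvAccess transport

# Plain get: returns the (possibly structured) value of a V4 PV.
value = ctx.get("DEMO:TEMPERATURE")
print(value)

# V4 requests can carry parameters, e.g. selecting only certain fields.
timestamped = ctx.get("DEMO:TEMPERATURE", request="field(value,timeStamp)")
print(timestamped)

ctx.close()
```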
TUCOCB07 | TANGO - Can ZMQ Replace CORBA ? | TANGO, CORBA, controls, database | 964 |
TANGO (http://www.tango-controls.org) is a modern distributed device-oriented control system toolkit used by a number of facilities to control synchrotrons, lasers and a wide variety of equipment for physics experiments. The performance of the network protocol used by TANGO is a key component of the toolkit. For this reason TANGO is based on the omniORB implementation of CORBA. CORBA offers an interface definition language with mappings to multiple programming languages, an efficient binary protocol, a data representation layer, and various services. In recent years a new series of binary protocols based on AMQP has emerged from the high-frequency stock market trading business. A simplified version of AMQP called ZMQ (http://www.zeromq.org/) was open-sourced in 2010. In 2011 the TANGO community decided to take advantage of ZMQ, and in 2012 the kernel developers successfully replaced the CORBA Notification Service with ZMQ in TANGO V8. The first part of this paper presents the software design, the issues encountered and the resulting improvements in performance. The second part presents a study of how ZMQ could replace CORBA completely in TANGO.
Slides TUCOCB07 [1.328 MB]
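The publish/subscribe pattern that replaced the CORBA Notification Service can be sketched in a few lines of pyzmq; the topic names and port are illustrative, not TANGO's actual event naming scheme.

```python
# Sketch of ZMQ PUB/SUB event distribution, self-contained in one process.
import time
import zmq

ctx = zmq.Context()

# Client side: subscribe by event-name prefix.
sub = ctx.socket(zmq.SUB)
sub.connect("tcp://127.0.0.1:9999")
sub.setsockopt(zmq.SUBSCRIBE, b"sr/mag/ps1/current")  # prefix filtering

# Server side: a device server publishes change events.
pub = ctx.socket(zmq.PUB)
pub.bind("tcp://127.0.0.1:9999")
time.sleep(0.2)  # let the subscription propagate (ZMQ "slow joiner")
pub.send_multipart([b"sr/mag/ps1/current.change", b"12.05"])

topic, payload = sub.recv_multipart()
print(topic, payload)
```

Filtering happens on the publisher side by topic prefix, so an idle event stream costs nothing on the wire, which is one reason a broker-less library can outperform a CORBA notification daemon.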
TUCOCB08 | Reimplementing the Bulk Data System with DDS in ALMA ACS | operation, site, CORBA, controls | 969 |
Bulk Data (BD) is a service in the ALMA Common Software used to transfer large volumes of astronomical data many-to-one and one-to-many between computers. Its main application is the correlator software, which processes raw lags from the correlator hardware into science visibilities. The correlator retrieves data from the antennas on up to 32 computers; the data are forwarded to a master computer and combined to be sent to consumers. The throughput requirement both to and from the master is 64 MBytes/s, distributed differently depending on observing conditions. Requirements for robustness make the application very challenging. The first implementation, based on the CORBA A/V Streaming service, showed weaknesses, so we decided to replace it even though we were approaching the start of operations, making provision for careful testing. We chose DDS (Data Distribution Service) as the core technology, a well-supported standard that is widespread in similar applications. We evaluated mainstream implementations, with emphasis on performance, robustness and error handling. We have successfully deployed the new BD, making it easy to switch between the old and new implementations for testing purposes. We discuss the challenges and lessons learned.
Slides TUCOCB08 [1.582 MB]
TUCOCB09 | The Internet of Things and Control System | controls, TANGO, feedback, embedded | 974 |
The recent surge of interest in machine-to-machine communication is known as the Internet of Things (IoT): it allows autonomous devices to use the Internet for exchanging data. The Internet and the World Wide Web, themselves born from the need to exchange scientific information between institutes, caused a revolution in communication between people. Several universities have predicted that the IoT will have a similar impact, and industry is now gearing up for it. The issues under discussion for the IoT, such as protocols, representations and resources, are similar to those of human communication and are currently being tested by different institutes and companies, including start-ups. Already, the term "smart city" is used to describe applications of the IoT such as smart parking, traffic congestion management and waste management. In the domain of control systems for big research facilities, a lot of knowledge has already been acquired in building connections between thousands of devices, more and more of which come with a TCP/IP connection. This paper investigates the possible convergence between control systems and the IoT.
Slides TUCOCB09 [11.919 MB]
WECOBA07 | High Speed Detectors: Problems and Solutions | detector, operation, software, data-analysis | 1016 |
Diamond has an increasing number of high-speed detectors, primarily used on the macromolecular crystallography, small-angle X-ray scattering and tomography beamlines. Recently, the performance requirements have exceeded what a single-threaded writing process on our Lustre parallel file system can deliver, so we have had to investigate other file systems and ways of parallelising the data flow to mitigate this. We report on some comparative tests between Lustre and GPFS, and on work we have been leading to enhance the HDF5 library with features that simplify the parallel writing problem.
Slides WECOBA07 [0.617 MB]
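A minimal sketch of parallel HDF5 writing in the spirit of the work above, using h5py's MPI-IO driver (the dataset shape and file name are assumptions, and an MPI-enabled h5py build is required):

```python
# Hedged sketch: several MPI ranks write one HDF5 file without a single
# funnelling writer process.
from mpi4py import MPI
import h5py
import numpy as np

comm = MPI.COMM_WORLD

# Every rank opens the same file collectively.
f = h5py.File("frames.h5", "w", driver="mpio", comm=comm)
dset = f.create_dataset("data", (comm.size, 512, 512), dtype="u2")

# Each rank writes its own frame into a disjoint slice.
dset[comm.rank] = np.full((512, 512), comm.rank, dtype="u2")

f.close()
```

Run with, e.g., `mpirun -n 4 python write_frames.py`; because each rank owns a disjoint slice, the file system sees parallel streams instead of one saturated writer.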
THCOAAB03 | Bringing Control System User Interfaces to the Web | controls, interface, EPICS, status | 1048 |
Funding: SNS is managed by UT-Battelle, LLC, under contract DE-AC05-00OR22725 for the U.S. Department of Energy. With the evolution of web-based technologies, especially HTML5 [1], it has become possible to create web-based control system user interfaces (UIs) that are cross-browser and cross-device compatible. This article describes two technologies that facilitate this goal. The first is WebOPI [2], which can seamlessly display CSS BOY [3] Operator Interfaces (OPIs) in web browsers without modification of the original OPI file. WebOPI leverages the powerful graphical editing capabilities of BOY and provides the convenience of re-using existing OPI files. On the other hand, it uses auto-generated JavaScript and a generic communication mechanism between the web browser and the web server, and is not optimized for a control system, which results in unnecessary network traffic and resource usage. Our second technology is the WebSocket-based Process Data Access (WebPDA). It is a protocol that provides efficient control system data communication using WebSockets [4], so that users can create web-based control system UIs using standard web page technologies such as HTML, CSS and JavaScript. The protocol is control-system independent, so it can potentially support any type of control system. [1] http://en.wikipedia.org/wiki/HTML5 [2] https://sourceforge.net/apps/trac/cs-studio/wiki/webopi [3] https://sourceforge.net/apps/trac/cs-studio/wiki/BOY [4] http://en.wikipedia.org/wiki/WebSocket
Slides THCOAAB03 [1.768 MB]
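As an illustration of what a WebSocket-based data client involves (the URL and message schema below are invented for the sketch; they are not the actual WebPDA specification), a subscription loop in Python might look like:

```python
# Hedged sketch of a WebSocket subscription client; protocol details invented.
import asyncio
import json
import websockets

async def watch(pv_name):
    async with websockets.connect("ws://server.example.org/webpda") as ws:
        # Ask the server to start streaming updates for one PV.
        await ws.send(json.dumps({"type": "subscribe", "pv": pv_name}))
        async for message in ws:  # server pushes updates as they occur
            update = json.loads(message)
            print(update.get("pv"), update.get("value"))

asyncio.run(watch("DEMO:BPM:X"))
```

The point of the WebSocket approach is visible in the loop: the server pushes only actual value changes over one persistent connection, instead of the browser polling over HTTP.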
THCOAAB07 | NIF Electronic Operations: Improving Productivity with iPad Application Development | operation, framework, database, diagnostics | 1066 |
Funding: This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. #LLNL-ABS-632815. In an experimental facility like the National Ignition Facility (NIF), thousands of devices must be maintained during day-to-day operations. Teams within NIF have documented hundreds of procedures, or checklists, detailing how to perform this maintenance. These checklists have been paper-based, until now. NIF Electronic Operations (NEO) is a new web and iPad application for managing and executing checklists. NEO increases the efficiency of operations by reducing the overhead associated with paper-based checklists, and provides analysis and integration opportunities that were previously not possible. NEO's data-driven architecture allows users to manage their own checklists, and provides checklist versioning, real-time input validation, detailed step timing analysis, and integration with external task tracking and content management systems. Built with mobility in mind, NEO runs on an iPad and works without the need for a network connection. When executing a checklist, users capture various readings, photos, measurements and notes, which are then reviewed and assessed after its completion. NEO's design, architecture, iPad application and uses throughout the NIF are discussed.
Slides THCOAAB07 [1.237 MB]
THCOAAB08 | NOMAD Goes Mobile | GUI, CORBA, controls, interface | 1070 |
The commissioning of new instruments at the Institut Laue-Langevin (ILL) has shown the need to extend instrument control beyond the classical desktop computer location. This, together with the availability of reliable and powerful mobile devices such as smartphones and tablets, has triggered a new branch of development for NOMAD, the instrument control software in use at the ILL. These devices, often considered mere recreational toys, can play an important role in simplifying the life of instrument scientists and technicians: performing an experiment happens not only in the instrument cabin but also from the office, from another instrument, from the lab and from home. The present paper describes the development of a remote interface, based on Java and the Android Eclipse SDK, communicating with the NOMAD server using CORBA over a wireless network. Moreover, the application is distributed on Google Play to minimise the installation and update procedures.
Slides THCOAAB08 [2.320 MB]
THMIB03 | From Real to Virtual - How to Provide a High-Availability Computer Server Infrastructure | controls, Linux, hardware, operation | 1076 |
During the commissioning phase of the Swiss Light Source (SLS) at the Paul Scherrer Institut (PSI), we decided in 2000 on a strategy of separating individual services for the control system, in order to prevent interruptions due to network congestion, misdirected control and other interference between different service contexts. This concept has proved reliable over the years. Today, each accelerator facility and beamline at PSI resides on a separate subnet and uses its own dedicated set of service computers. As the number of beamlines and accelerators grew, the variety of services and their quantity increased rapidly. Fortunately, at about the time the SLS announced its first beam, VMware introduced its VMware Virtual Platform for the Intel IA-32 architecture, a great opportunity for us to start virtualizing the controls services. We currently run about 200 such systems. In this presentation we discuss how we achieved this highly virtualized controls infrastructure, as well as how we will proceed in the future.
Slides THMIB03 [2.124 MB]
Poster THMIB03 [1.257 MB]
THMIB09 | Management of the FERMI Control System Infrastructure | controls, interface, TANGO, Ethernet | 1086 |
|
|||
Funding: Work supported in part by the Italian Ministry of University and Research under grants FIRB-RBAP045JF2 and FIRB-RBAP06AWK3 Efficiency, flexibility and simplicity of management have been some of the design guidelines of the control system for the FERMI@Elettra Free Electron Laser. Out-of-band system monitoring devices, remotely operated power distribution units and remote management interfaces have been integrated into the Tango control system, leading to an effective control of the infrastructure. The Open Source tool Nagios has been deployed to monitor the functionality of the control system computers and the status of the application software for an easy and automatic identification and report of troubles. |
|||
Slides THMIB09 [0.236 MB] | ||
Poster THMIB09 [1.567 MB] | ||
THPPC005 | Virtualization Infrastructure within the Controls Environment of the Light Sources at HZB | hardware, controls, software, EPICS | 1100 |
|
|||
The advantages of virtualization techniques and infrastructures with respect to configuration management, high availability and resource management have become obvious for controls applications as well. Today, a choice of powerful products is available that is easy to use and supports the desired functionality, performance, usability and maintainability at a very mature level. This paper presents the architecture of the virtual infrastructure and its relation to its hardware-based counterpart as it has emerged for BESSY II and MLS controls within the past decade. Successful experiences, as well as abandoned attempts and caveats about some intricate troubles, are summarized. | |||
Poster THPPC005 [0.286 MB] | ||
THPPC006 | REMBRANDT - REMote Beam instRumentation And Network Diagnosis Tool | controls, database, monitoring, status | 1103 |
|
|||
As with any other large accelerator complex in operation today, the beam instrumentation devices and associated data acquisition components for the coming FAIR accelerators will be distributed over a large area and partially installed in inaccessible, radiation-exposed areas. Besides operating the devices themselves, e.g. acquiring data, it is mandatory to also control the supporting LAN-based components such as VME/μTCA crates, front-end computers (FECs), middleware servers and more. Fortunately, many COTS systems provide means for remote control and monitoring using a variety of standardized protocols like SNMP, IPMI or iAMT. REMBRANDT is a Java framework which allows the authorized user to monitor and control remote systems while hiding the underlying protocols and connection information such as IP addresses, user IDs and passwords. Besides voltage and current control, the main features are the remote power switching of the systems and the observation of the FEC boot process via reverse telnet. REMBRANDT is designed to be easily extensible with new protocols and features. The software concept, including the client-server part and the database integration, will be presented. | |||
Poster THPPC006 [3.139 MB] | ||
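REMBRANDT itself is a Java framework, but the core idea of hiding protocols and credentials behind a uniform interface can be sketched briefly. The Python below is a hypothetical illustration only; the class and method names are assumptions, not REMBRANDT's API.

```python
from abc import ABC, abstractmethod

class RemotePower(ABC):
    """Protocol-agnostic view of a remotely switchable crate or FEC
    (illustrative; credentials and addresses live in a config database)."""

    @abstractmethod
    def power(self, on: bool) -> None: ...

    @abstractmethod
    def status(self) -> str: ...

class SnmpPower(RemotePower):
    """One concrete backend; a real implementation would issue an SNMP SET
    against the crate's power OID using a library such as pysnmp."""
    def __init__(self, host: str, oid: str):
        self.host, self.oid = host, oid

    def power(self, on: bool) -> None:
        print(f"SNMP SET {self.oid}={int(on)} on {self.host}")  # placeholder

    def status(self) -> str:
        return "on"  # placeholder: would be an SNMP GET

# The client code sees only RemotePower; protocol details, IP addresses and
# passwords stay hidden, as in REMBRANDT.
def reboot(dev: RemotePower) -> None:
    dev.power(False)
    dev.power(True)

reboot(SnmpPower("crate-07.example", "1.3.6.1.4.1.99999.1"))
```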
THPPC009 | Design and Status of the SuperKEKB Accelerator Control Network System | controls, linac, EPICS, Ethernet | 1107 |
|
|||
SuperKEKB is the upgrade of the KEKB asymmetric-energy electron-positron collider for the next-generation B-factory experiment in Japan. It is designed to achieve a luminosity of 8×10³⁵ cm⁻²s⁻¹, 40 times higher than the world's highest luminosity record, achieved at KEKB. For SuperKEKB, we upgrade the accelerator control network system, which connects all devices in the accelerator. To construct a higher-performance network system, we install network switches based on 10 Gigabit Ethernet (10GbE) for wider-bandwidth data transfer. Additional optical fibers are also installed, both for a reliable and redundant network and for a robust accelerator control timing system. For the beamline construction and accelerator component maintenance, we install a new wireless network based on Leaky Coaxial (LCX) cable antennas in the 3 km circumference beamline tunnel. We reconfigure the network design to enhance the reliability and security of the network. In this paper, the design and current status of the SuperKEKB accelerator control network system will be presented. | |||
Poster THPPC009 [1.143 MB] | ||
THPPC018 | Construction of the TPS Network System | controls, EPICS, Ethernet, timing | 1127 |
|
|||
The 3 GeV Taiwan Photon Source (TPS) project needs a reliable, secure and high-throughput network to ensure routine facility operation and to provide better service for various purposes. The network system includes the office network, the beamline network and the accelerator control network for the TPS and TLS (Taiwan Light Source) sites at NSRRC. Cyber security technologies such as firewalls, NAT and VLANs are combined to define a tree network topology that isolates the accelerator control network, the beamline network and the subsystem components. Various network management tools are used for maintenance and troubleshooting. The TPS network system architecture, cabling topology, redundancy and maintainability are described in this report. | |||
Poster THPPC018 [2.650 MB] | ||
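The tree topology with isolated subnets can be made concrete with a toy address plan. The sketch below uses only Python's standard ipaddress module; the supernet and VLAN names are illustrative assumptions, not the actual NSRRC addressing.

```python
import ipaddress

# Hypothetical plan: carve one site supernet into per-role VLAN subnets so
# that control, beamline and office traffic are isolated at layer 3.
site = ipaddress.ip_network("10.0.0.0/16")
vlans = dict(zip(
    ["office", "beamline", "accelerator-control", "subsystems"],
    site.subnets(new_prefix=18),
))
for name, net in vlans.items():
    print(f"VLAN {name:22s} {net}  ({net.num_addresses - 2} usable hosts)")
```

Routing between these subnets would then pass through the firewall/NAT layer named in the abstract, rather than being switched directly.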
THPPC022 | Securing Mobile Control System Devices: Development and Testing | controls, Linux, EPICS, interface | 1131 |
|
|||
Recent advances in portable devices give end users convenient ways to access data over the network. Networked control systems have traditionally been kept on local or internal networks to prevent external threats and isolate traffic. The UWMC Clinical Neutron Therapy System has its control system on such an isolated network. Engineers have been updating the control system with EPICS and have developed EDM-based interfaces for control and monitoring. This project describes a tablet-based monitoring device being developed to allow the engineers to monitor the system while moving, e.g., from rack to rack or room to room. EDM is being made available via the tablet. Methods to maintain the security of the control system and tablet, while providing ease of access and meaningful data for management, are being created. In parallel with the tablet development, security and penetration tests are also being produced. |
THPPC024 | Operating System Upgrades at RHIC | software, controls, Linux, collider | 1138 |
|
|||
Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. Upgrading hundreds of machines to the next major release of an operating system (OS), while keeping the accelerator complex running, presents a considerable challenge. Even before addressing the challenges that an upgrade represents, there are critical questions that must be answered. Why should an upgrade be considered? (An upgrade is labor-intensive and includes potential risks due to defective software.) When is it appropriate to make incremental upgrades to the OS? (Incremental upgrades can also be labor-intensive and include similar risks.) When is the best time to perform an upgrade? (An upgrade can be disruptive.) Should all machines be upgraded to the same version at the same time? (At times this may not be possible, and there may not be a need to upgrade certain machines.) Should the compiler be upgraded at the same time? (A compiler upgrade can also introduce risks at the software application level.) This paper examines our answers to these questions, describes how upgrades to the Red Hat Linux OS are implemented by the Controls group at RHIC, and describes our experiences. |
|||
Poster THPPC024 [0.517 MB] | ||
THPPC036 | EPICS Control System for the FFAG Complex at KURRI | controls, EPICS, interface, LabView | 1164 |
|
|||
At the Kyoto University Research Reactor Institute (KURRI), a fixed-field alternating gradient (FFAG) proton accelerator complex, which consists of three FFAG rings, was constructed for an experimental study of the accelerator-driven sub-critical reactor (ADSR) system with spallation neutrons produced by the accelerator. The world's first ADSR experiment was carried out in March 2009. In order to increase the beam intensity of the proton FFAG accelerator, a new injection system with an H− linac was constructed in 2011. To keep up with these developments, the control system of these accelerators should be easy to develop and maintain. The first control system was based on LabVIEW, and its development started seven years ago. It has thus become necessary to update the components of the control system, for example the operating system of the computers. The first control system also had some minor stability problems, and it was difficult for non-experts in LabVIEW to modify the control programs. Therefore, the EPICS toolkit has been used for the accelerator control system since 2009. The present control system of the KURRI FFAG complex is explained. | |||
Poster THPPC036 [3.868 MB] | ||
THPPC086 | Analyzing Off-normals in Large Distributed Control Systems using Deep Packet Inspection and Data Mining Techniques | controls, toolkit, operation, distributed | 1278 |
|
|||
Funding: This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. #LLNL-ABS-632814 Network packet inspection using port mirroring provides the ultimate tool for understanding complex behaviors in large distributed control systems. The timestamped captures of network packets embody the full spectrum of protocol layers and uncover intricate and surprising interactions. No other tool is capable of penetrating through the layers of software and hardware abstractions to allow the researcher to analyze an integrated system composed of various operating systems, closed-source embedded controllers, software libraries and middleware. Being completely passive, the packet inspection does not modify timings or behaviors. The completeness and fine resolution of the network captures present an analysis challenge, due to huge data volumes and the difficulty of determining what constitutes signal and noise in each situation. We discuss the development of a deep packet inspection toolchain and the application of the R language for data mining and visualization. We present case studies demonstrating off-normal analysis in a distributed real-time control system. In each case, the toolkit pinpointed the root cause of the problem, which had escaped traditional software debugging techniques. |
|||
Poster THPPC086 [2.353 MB] | ||
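To make the analysis challenge concrete, here is a minimal sketch of one off-normal hunt on a port-mirror capture: flagging anomalously long gaps between consecutive packets. It assumes the third-party scapy package; the file name and outlier threshold are placeholders, and the paper's actual toolchain uses deep packet inspection plus R, not this code.

```python
from statistics import mean, stdev
from scapy.all import rdpcap  # third-party: pip install scapy

# Load a timestamped port-mirror capture and compute inter-arrival gaps.
packets = rdpcap("mirror_capture.pcap")  # placeholder file name
times = sorted(float(p.time) for p in packets)
gaps = [b - a for a, b in zip(times, times[1:])]

# Crude outlier rule: flag gaps far above the mean. A real analysis would
# segment by conversation and use richer statistical models (e.g. in R).
mu, sigma = mean(gaps), stdev(gaps)
for t, gap in zip(times[1:], gaps):
    if gap > mu + 5 * sigma:
        print(f"t={t:.6f}s  suspicious gap of {gap * 1e3:.2f} ms")
```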
THPPC089 | High Repetition Rate Laser Beamline Control System | laser, controls, timing, EPICS | 1281 |
|
|||
Funding: The authors acknowledge the support of the following grants of the Czech Ministry of Education, Youth and Sports "CZ.1.05/1.1.00/02.0061" and "CZ.1.07/2.3.00/20.0091". ELI-Beamlines will be a high-energy, high repetition-rate laser pillar of the ELI (Extreme Light Infrastructure) project. It will be an international user facility for both academic and applied research, scheduled to provide user capability from the beginning of 2017. As part of the development of the L1 laser beamline we are developing a prototype control system. The beamline repetition rate of 1 kHz, with its femtosecond pulse accuracy, places demanding requirements on both the control and synchronization systems. A low-jitter, high-precision commercial timing system will be deployed to accompany both EPICS- and LabVIEW-based control system nodes, many of which will be enhanced for real-time responsiveness. Data acquisition will be supported by an in-house time-stamping mechanism relying on sub-millisecond system responses. The synergy of LabVIEW Real-Time and EPICS within particular nodes should be secured by advanced techniques to achieve both fast responsiveness and high data throughput. *tomas.mazanec@eli-beams.eu |
|||
Poster THPPC089 [1.286 MB] | ||
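As a rough illustration of monitoring sub-millisecond responses on the EPICS side, the sketch below subscribes to a PV with pyepics and compares the IOC-side timestamp against host receive time. The PV name is a placeholder assumption; this is not the ELI-Beamlines implementation.

```python
import time
import epics  # pyepics: pip install pyepics

# Each monitor update carries the IOC timestamp; comparing it with the host
# clock gives a crude measure of end-to-end response latency (assuming the
# two clocks are synchronized, e.g. via NTP/PTP).
def on_update(pvname=None, value=None, timestamp=None, **kw):
    latency_ms = (time.time() - timestamp) * 1e3
    print(f"{pvname} = {value}  (IOC-to-host latency ~ {latency_ms:.3f} ms)")

pv = epics.PV("L1:LASER:PULSE_ENERGY", callback=on_update)  # placeholder PV
time.sleep(10)  # collect monitor updates for a while
```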
THPPC092 | FAIR Timing System Developments Based on White Rabbit | timing, controls, FPGA, interface | 1288 |
|
|||
A new timing system based on White Rabbit (WR) is being developed for the upcoming FAIR facility at GSI, in collaboration with CERN, other institutes and industry partners. The timing system is responsible for the synchronization of nodes with nanosecond accuracy and for the distribution of timing messages, which allows for real-time control of the accelerator equipment. WR is a fully deterministic Ethernet-based network for general data transfer and synchronization, based on Synchronous Ethernet and PTP. The ongoing development at GSI aims for a miniature timing system as part of the control system of a proton source that will be used at one of the FAIR accelerators. Such a timing system consists of a Data Master generating timing messages, which are forwarded by a WR switch to a handful of timing receivers. The next step is an enhancement of the robustness, reliability and scalability of the system. These features will be integrated into the forthcoming CRYRING control system at GSI. CRYRING serves as a prototype and testing ground for the final control system for FAIR. The contribution presents the overall design and status of the timing system development. | |||
Poster THPPC092 [0.549 MB] | ||
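The Data Master/receiver split can be illustrated with a toy message format: the message carries an absolute, WR-synchronized execution time instead of acting as a trigger on arrival. The wire layout below is an assumption for illustration, not the actual FAIR timing message format.

```python
import struct

# Hypothetical encoding of a timing message: an event ID plus the absolute
# execution time in nanoseconds on the shared WR timescale.
MSG = struct.Struct(">IQ")  # event_id: u32, execute_at_ns: u64

def encode(event_id: int, execute_at_ns: int) -> bytes:
    return MSG.pack(event_id, execute_at_ns)

def decode(frame: bytes) -> tuple[int, int]:
    return MSG.unpack(frame)

# A receiver, sharing the WR notion of time, schedules the action for the
# embedded deadline rather than firing on packet arrival:
frame = encode(0x2A, 1_700_000_000_000_000_000)
event_id, deadline = decode(frame)
print(f"event {event_id:#x} due at {deadline} ns on the WR timescale")
```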
THPPC102 | Comparison of Synchronization Layers for Design of Timing Systems | timing, interface, Ethernet, real-time | 1296 |
|
|||
Two synchronization layers for timing systems in large experimental physics control systems are compared: White Rabbit (WR), an emerging standard, and the well-established event-based approach. Several typical timing system services have been implemented on an FPGA using WR to explore its concepts and architecture, which is fundamentally different from the event-based one. Both synchronization layers were evaluated against typical requirements of current accelerator projects and with regard to other parameters such as scalability. The proposed design methodology demonstrates how WR can be deployed in future accelerator projects. | |||
Poster THPPC102 [1.796 MB] | ||
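The essential difference between the two layers can be stated in a few lines: an event-based node fires when the trigger arrives, so distribution latency and jitter reach the equipment, whereas a WR-style node fires on its own synchronized clock at a pre-distributed time. The sketch below is purely conceptual, with illustrative numbers.

```python
# Conceptual contrast between the two synchronization layers (toy model).

def event_based(trigger_arrival_ns: int, node_latency_ns: int) -> int:
    # Fire on reception: any jitter of the distribution path shifts the
    # firing instant itself.
    return trigger_arrival_ns + node_latency_ns

def time_based(scheduled_ns: int) -> int:
    # Fire on the synchronized clock: the network only has to deliver the
    # message before the deadline; the firing instant is jitter-free up to
    # the clock synchronization error.
    return scheduled_ns

print(event_based(trigger_arrival_ns=1_000_000, node_latency_ns=120))
print(time_based(scheduled_ns=1_000_500))
```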
THPPC119 | Software Architecture for the LHC Beam-based Feedback System at CERN | feedback, controls, optics, timing | 1337 |
|
|||
This paper presents an overview of the beam-based feedback systems at the LHC at CERN. It covers the system architecture, which is split into two main parts: a controller (OFC) and a service unit (OFSU). The paper presents issues encountered during beam commissioning and lessons learned, including the follow-up from a recent review that took place at CERN. | |||
Poster THPPC119 [1.474 MB] | ||
THCOBB06 | CLIC-ACM: Acquisition and Control System | radiation, timing, controls, survey | 1404 |
|
|||
CLIC (Compact Linear Collider) is a world-wide collaboration studying the next “terascale” lepton collider, which relies upon the very innovative concept of two-beam acceleration. In this scheme, the power is transported to the main accelerating structures by a primary electron beam. The Two-Beam Module (TBM) is a compact integration, with a high filling factor, of all components: RF, magnets, instrumentation, vacuum, alignment and stabilization. This paper describes the very challenging aspects of designing a compact system to serve as a dedicated Acquisition & Control Module (ACM) for all signals of the TBM. Very demanding conditions must be considered, in particular radiation doses that could reach several kGy in the tunnel, which call for shielding and radiation-hardened electronics. In addition, with more than 300 channels per ACM and about 21000 ACMs in total, power consumption will clearly be an important issue. It is also evident that digitization of the acquired signals will take place at the lowest possible hardware level and that neither a local processor nor an operating system shall be used inside the ACM. | |||
Slides THCOBB06 [0.846 MB] | ||
Poster THCOBB06 [0.747 MB] | ||
THCOBA02 | Unidirectional Security Gateways: Stronger than Firewalls | controls, hardware, experiment, software | 1412 |
|
|||
In the last half decade, application integration via Unidirectional Security Gateways has emerged as a secure alternative to firewalls. The gateways are deployed extensively to protect the safety and reliability of industrial control systems in nuclear generators, conventional generators and a wide variety of other critical infrastructures. Unidirectional Gateways are a combination of hardware and software. The hardware allows information to leave a protected industrial network, and physically prevents any signal whatsoever from returning to the protected network. As a result, the hardware blocks all online attacks originating on external networks. The software replicates industrial servers to external networks, where the information in those servers is available to end users and to external applications. The software does not proxy bi-directional protocols. Join us to learn how this secure alternative to firewalls works, where and how the technology is routinely deployed, and how all of the usual remote support, data integrity and other apparently bi-directional deployment issues are routinely resolved. | |||
Slides THCOBA02 [0.721 MB] | ||
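Conceptually, the software half of such a gateway reduces to a replication sender that can only ever transmit. A minimal sketch follows, with host, port and payload as illustrative assumptions; note that a real gateway enforces the one-way flow in hardware, not in software.

```python
import json
import socket
import time

# Fire-and-forget sender inside the protected network: snapshots of an
# industrial server are pushed out over a one-way link, and nothing is ever
# read back on this socket. Address and payload are placeholders.
TX = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def replicate(snapshot: dict) -> None:
    # No ACK is expected or possible: with a true unidirectional gateway the
    # return path is physically absent, so there is nothing to exploit inbound.
    TX.sendto(json.dumps(snapshot).encode(), ("192.0.2.10", 9999))

for _ in range(60):
    replicate({"ts": time.time(), "turbine_rpm": 3000})
    time.sleep(1.0)
```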
THCOBA05 | Control System Virtualization for the LHCb Online System | controls, experiment, operation, hardware | 1419 |
|
|||
Virtualization provides many benefits, such as more efficient resource utilization, lower power consumption, better management through centralized control, and higher availability. It can also save time for IT projects by eliminating dedicated hardware procurement and providing standard software configurations. In view of this, virtualization is very attractive for mission-critical projects like the experiment control system (ECS) of the large LHCb experiment at CERN. This paper describes our implementation of the control system infrastructure on general-purpose server hardware based on Linux and the RHEV enterprise clustering platform. The paper describes the methods used, our experiences and the knowledge acquired in evaluating the performance of the setup using test systems, as well as the constraints and limitations we encountered. We compare these with parameters measured under typical load conditions in a real production system. We also present the specific measures taken to guarantee optimal performance for the SCADA system (WinCC OA), which is the backbone of our control system. | |||
Slides THCOBA05 [1.065 MB] | ||
THCOBA06 | Virtualization and Deployment Management for the KAT-7 / MeerKAT Control and Monitoring System | software, hardware, database, controls | 1422 |
|
|||
Funding: National Research Foundation (NRF) of South Africa To facilitate efficient deployment and management of the Control and Monitoring software of the South African 7-dish Karoo Array Telescope (KAT-7) and the forthcoming Square Kilometre Array (SKA) precursor, the 64-dish MeerKAT Telescope, server virtualization and automated deployment from a host configuration database are used. The advantages of virtualization are well known; by adding automated deployment from a configuration database, additional advantages accrue: server configuration becomes deterministic, development and deployment environments match more closely, system configuration can easily be version controlled, and systems can easily be rebuilt when hardware fails. We chose the Debian GNU/Linux based Proxmox VE hypervisor using the OpenVZ single-kernel container virtualization method, along with Fabric (a Python ssh automation library) based deployment automation and a custom configuration database. This paper presents the rationale behind these choices, our current implementation and our experience with it, and a performance evaluation of OpenVZ and KVM. Tests include a comparison of application-specific networking performance over 10GbE using several network configurations. |
|||
Slides THCOBA06 [5.044 MB] | ||
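A minimal sketch of configuration-database-driven deployment in the spirit described, using the Fabric library named in the abstract; the host names, package names and file paths are placeholders, not the actual KAT-7 configuration database.

```python
from fabric import Connection  # the Python ssh automation library named above

# Placeholder stand-in for the site's configuration database: a mapping from
# host to role-specific deployment parameters.
CONFIG_DB = {
    "monitor-01": {"role": "monitoring", "pkg": "cam-monitor"},
    "control-01": {"role": "control",    "pkg": "cam-control"},
}

def deploy(host: str, spec: dict) -> None:
    c = Connection(host)
    # Deterministic setup: every host of a role gets the same packages...
    c.run(f"sudo apt-get install -y {spec['pkg']}")
    # ...and the same version-controlled configuration file.
    c.put(f"configs/{spec['role']}.conf", "/etc/cam.conf")

for host, spec in CONFIG_DB.items():
    deploy(host, spec)
```

Because the host list and roles come from the database, a failed server can be rebuilt simply by re-running the deployment against a fresh container.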
THCOCA01 | A Design of Sub-Nanosecond Timing and Data Acquisition Endpoint for LHAASO Project | timing, interface, electronics, controls | 1442 |
|
|||
Funding: National Science Foundation of China (No.11005065 and 11275111) The particle detector array (KM2A) of the Large High Altitude Air Shower Observatory (LHAASO) project consists of 5631 electron and 1221 muon detection units spread over a 1.2 km² area. To reconstruct the incident angle of cosmic rays, sub-nanosecond time synchronization must be achieved. The White Rabbit (WR) protocol is applied for its high synchronization precision, automatic delay compensation and intrinsic high-bandwidth data transmission capability. This paper describes the design of a sub-nanosecond timing and data acquisition endpoint for KM2A. It works as an FMC mezzanine mounted on detector-specific front-end electronics boards and provides the WR-synchronized clock and timestamp. The endpoint supports the EtherBone protocol for remote monitoring and firmware updates. Moreover, a hardware UDP engine is integrated in the FPGA to pack and transmit raw data from the detector electronics to the readout network. Preliminary tests demonstrate a timing precision of 29 ps (RMS) and a timing accuracy better than 100 ps (RMS). * The authors are with the Key Laboratory of Particle and Radiation Imaging, Department of Engineering Physics, Tsinghua University, Beijing, China, 100084 * pwb.thu@gmail.com |
|||
Slides THCOCA01 [1.182 MB] | ||
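The job of the hardware UDP engine, packing timestamped hits into raw frames for the readout network, can be modeled in software in a few lines. The field layout and endpoint address below are assumptions for illustration, not the actual KM2A data format.

```python
import socket
import struct

# Hypothetical hit frame: detector channel, WR timestamp in nanoseconds, and
# a charge ADC value, shipped as a single UDP datagram to the readout network.
HIT = struct.Struct(">HQI")  # channel: u16, wr_timestamp_ns: u64, adc: u32

def ship(sock: socket.socket, channel: int, t_ns: int, adc: int) -> None:
    sock.sendto(HIT.pack(channel, t_ns, adc), ("192.0.2.50", 5000))

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
ship(sock, channel=17, t_ns=1_690_000_123_456_789, adc=842)
```

In the real endpoint this packing happens in FPGA logic, which is what lets the front end sustain the event rate without a local processor in the data path.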
THCOCA02 | White Rabbit Status and Prospects | distributed, controls, Ethernet, FPGA | 1445 |
|
|||
The White Rabbit (WR) project started off to provide a sequencing and synchronization solution for the needs of CERN and GSI. Since then, many other users have adopted it to solve problems in the domain of distributed hard real-time systems. The paper discusses the current performance of WR hardware, along with present and foreseen applications. It also describes current efforts to standardize WR under IEEE 1588 and recent developments on reliability of timely data distribution. Then it analyzes the role of companies and the commercial Open Hardware paradigm, finishing with an outline of future plans. | |||
Slides THCOCA02 [7.955 MB] | ||