Paper | Title | Page |
---|---|---|
TUPHA046 | PLC Factory: Automating Routine Tasks in Large-Scale PLC Software Development | 495 |
The European Spallation Source ERIC (ESS) in Lund, Sweden, is building large-scale infrastructure that is projected to include hundreds of programmable logic controllers (PLCs). Given this future large-scale deployment, we explored ways of automating some of the tasks associated with PLC programming. We designed and implemented PLC Factory, a Python application that facilitates large-scale PLC development. With PLC Factory, we managed to automate repetitive tasks associated with PLC programming and with interfacing PLCs with an EPICS database. A key part of PLC Factory is its embedded domain-specific programming language, PLCF#, which makes it possible to define dynamic substitutions. Using a database for configuration management, PLC Factory is able to generate both EPICS database records and code blocks in Structured Control Language (SCL) for the Siemens product TIA Portal. Device hierarchies of arbitrary depth are taken into account, so dependencies between devices are correctly resolved. PLC Factory is in active use at ESS.
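As a rough illustration of the dynamic-substitution idea, the sketch below resolves template placeholders against a small device hierarchy, walking up the parent chain because hierarchies of arbitrary depth must be honoured. The `[PLCF# ...]` placeholder pattern, the device names and the record template are assumptions made for illustration, not the actual PLCF# grammar or ESS configuration.

```python
# Hypothetical sketch of dynamic substitution over a device hierarchy;
# the placeholder syntax and all names are illustrative, not real PLCF#.
import re

# Toy hierarchy: each device knows its parent, so properties can be
# resolved upwards through an arbitrary number of levels.
devices = {
    "VacuumPLC":  {"parent": None,        "props": {"PLCName": "VacPLC01"}},
    "GateValve1": {"parent": "VacuumPLC", "props": {"Channel": "DI_04"}},
}

def resolve(device, prop):
    """Walk up the hierarchy until some ancestor defines the property."""
    while device is not None:
        entry = devices[device]
        if prop in entry["props"]:
            return entry["props"][prop]
        device = entry["parent"]
    raise KeyError(prop)

def substitute(template, device):
    """Replace every [PLCF# Property] placeholder with a resolved value."""
    return re.sub(r"\[PLCF#\s*(\w+)\]",
                  lambda m: resolve(device, m.group(1)),
                  template)

# A template fragment for one EPICS binary-input record.
template = 'record(bi, "[PLCF# PLCName]:GateValve1") { field(INP, "[PLCF# Channel]") }'
print(substitute(template, "GateValve1"))
```

The same substitution pass could equally emit SCL code blocks, since the template content is opaque to the substitution engine.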
Poster TUPHA046 [0.185 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-TUPHA046
TUPHA048 | VDI (Virtual Desktop Infrastructure) Implementation for Control System - Overview and Analysis | 501 |
At Solaris (National Synchrotron Radiation Center, Kraków) we have deployed test VDI software to virtualize the physical desktops in the control room, aiming at stability, more efficient support, and easier system updates and restores. The test was also intended to accelerate the installation of new workplaces for single users. The Horizon software gives us the opportunity to create roles and access permissions. VDI software has contributed to efficient management and lower maintenance costs for virtual machines compared with physical hosts. We are still testing VMware Horizon 7 at Solaris.
Poster TUPHA048 [2.441 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-TUPHA048
TUPHA049 | ARES: Automatic Release Service | 503 |
This paper presents the Automatic RElease Service (ARES) developed by the Industrial Controls and Safety Systems group at CERN. ARES provides tools and techniques to fully automate the software release procedure. The service replaces release mechanisms that were in some cases cumbersome and error-prone with an automated procedure in which a software release and its publication are completed with a few mouse clicks. ARES reduces the time and work developers must invest to carry out a new release, which enables more frequent releases and therefore a quicker reaction to user requests. The service uses standard technologies (Jenkins, Nexus, Maven, Drupal, MongoDB) to check out, build, package and deploy software components to different repositories (Nexus, EDMS), as well as to publish them to Drupal web sites.
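The sketch below shows the shape of such an automated checkout-build-package-deploy chain under stated assumptions: the repository URL, tag convention and Maven goals are placeholders for illustration, not ARES's actual configuration.

```python
# Minimal sketch of a checkout-build-package-deploy release chain of the
# kind ARES automates; URLs, tag names and goals are hypothetical.
import subprocess

def run(cmd, cwd=None):
    """Run one release step; check=True aborts the release on failure."""
    print("->", " ".join(cmd))
    subprocess.run(cmd, cwd=cwd, check=True)

def release(repo_url, version):
    workdir = "checkout"
    run(["git", "clone", repo_url, workdir])
    run(["git", "checkout", f"v{version}"], cwd=workdir)  # assumes tagged releases
    # 'mvn deploy' builds, packages and uploads the artifacts to the
    # repository configured in the project's pom.xml (e.g. Nexus).
    run(["mvn", "-B", "clean", "deploy"], cwd=workdir)

if __name__ == "__main__":
    release("https://gitlab.example.org/ics/some-component.git", "1.2.0")
```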
Poster TUPHA049 [0.387 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-TUPHA049
THBPA01 | Cyber Threats, the World Is No Longer What We Knew… | 1137 |
Security policies are becoming hard to apply as instruments are smarter than ever: every oscilloscope now comes with its own embedded Windows system, everybody would like to control their installation over the air, and IoT is on everyone's lips. Stuxnet and the recent Edward Snowden revelations have shown that cyber threats against SCADA systems are not confined to James Bond movies. This paper aims to give simple advice for protecting our installations and making them more and more secure. How should security files be written? What are the main precautions we have to take? Where are the vulnerabilities of my installation? Cyber security is everyone's business, not only the cyber staff's!
Slides THBPA01 [9.135 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-THBPA01
THBPA02 | Securing Light Source SCADA Systems | 1142 |
Funding: European X-Ray Free-Electron Laser Facility GmbH
Cyber security aspects are often not thoroughly addressed in the design of light source SCADA systems. In general the focus remains on building a reliable and fully functional ecosystem, under the assumption that a SCADA infrastructure is a closed ecosystem of sufficiently complex technologies to provide some security through trust and obscurity. However, considering the number of internal users, engineers, visiting scientists and students going in and out of light source facilities, cyber security threats can no longer be minimized. At the European XFEL, we envision a comprehensive security layer for the entire SCADA infrastructure. There, Karabo [1], the control, data acquisition and analysis software, shall implement security paradigms that are well known in IT but not applicable off-the-shelf in the FEL context. The challenges are considerable: (i) securing access to photon science hardware that has not been designed with security in mind; (ii) granting limited, fine-grained permissions to external users; (iii) truly securing control and data acquisition APIs while preserving performance. Only tailored solution strategies, as presented in this paper, can fulfil these requirements.
[1] Heisen et al. (2013), "Karabo: An Integrated Software Framework Combining Control, Data Management, and Scientific Computing Tasks", Proc. of 14th ICALEPCS 2013, Melbourne, Australia, paper FRCOAAB02.
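As a hedged illustration of challenge (ii), the sketch below attaches role-based permission checks to a control API call; the permission names, user records and decorator API are hypothetical and do not represent Karabo's actual authorization mechanism.

```python
# Toy fine-grained permission check on a control-system call; all names
# are hypothetical, not Karabo's real API.
import functools

def requires(permission):
    """Reject the call unless the calling user holds the permission."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(user, *args, **kwargs):
            if permission not in user["permissions"]:
                raise PermissionError(f"{user['name']} lacks {permission!r}")
            return func(user, *args, **kwargs)
        return wrapper
    return decorator

@requires("motor.write")
def move_motor(user, axis, position):
    print(f"{user['name']} moves {axis} to {position}")

staff = {"name": "eng01",   "permissions": {"motor.read", "motor.write"}}
guest = {"name": "guest42", "permissions": {"motor.read"}}

move_motor(staff, "sample_x", 1.5)   # allowed
# move_motor(guest, "sample_x", 1.5) # would raise PermissionError
```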
Slides THBPA02 [1.679 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-THBPA02
THBPA03 | The Back-End Computer System for the Medipix Based PI-MEGA X-Ray Camera | 1149 |
The Brazilian Synchrotron, in partnership with BrPhotonics, is designing and developing pi-mega, a new X-ray camera using Medipix chips, with the goal of building very large and fast cameras to meet Sirius' new demands. This work describes the design and testing of the back-end computer system that will receive, process and store images. The back-end system will use RDMA over Ethernet technology and must be able to process data at rates ranging from 50 Gbps to 100 Gbps per pi-mega element. Multiple pi-mega elements may be combined to produce a large camera. Initial applications include tomographic reconstruction and coherent diffraction imaging techniques.
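To make the quoted rates concrete, the back-of-the-envelope sketch below converts the 50-100 Gbps range into byte and frame rates; the frame geometry is an assumed example value, not a pi-mega specification.

```python
# Back-of-the-envelope throughput for one pi-mega element; the frame
# size below is an illustrative assumption, not a published figure.
GBIT = 1e9

def rates(link_gbps, frame_bytes):
    bytes_per_s = link_gbps * GBIT / 8
    return bytes_per_s / 1e9, bytes_per_s / frame_bytes

frame_bytes = 2 * 1536 * 1536  # e.g. a 1536x1536 frame at 16 bits/pixel
for gbps in (50, 100):
    gb_s, fps = rates(gbps, frame_bytes)
    print(f"{gbps} Gbps -> {gb_s:.2f} GB/s, ~{fps:,.0f} frames/s")
```

At the upper end the back end must therefore sustain on the order of 12.5 GB/s per element, which motivates RDMA over Ethernet rather than a conventional socket stack.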
Slides THBPA03 [1.918 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-THBPA03
THBPA04 | Orchestrating MeerKAT's Distributed Science Data Processing Pipelines | 1152 |
The 64-antenna MeerKAT radio telescope is a precursor to the Square Kilometre Array. The telescope's correlator-beamformer streams data at 600 Gb/s to the science data processing pipeline, which must consume it in real time. This requires significant compute resources, which are provided by a cluster of heterogeneous hardware nodes. Effective utilisation of the available resources is a critical design goal, made more challenging by the need for multiple, highly configurable pipelines. We initially used a static allocation of processes to hardware nodes, but this approach is insufficient as the project scales up. We describe recent improvements to our distributed container deployment, using Apache Mesos for orchestration. We also discuss how issues like non-uniform memory access (NUMA), network partitions, and fractional allocation of graphics processing units (GPUs) are addressed using a custom scheduler for Mesos.
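The toy sketch below shows the flavour of placement decision such a scheduler must make, combining fractional GPU shares with NUMA locality; the node inventory and the first-fit policy are illustrative assumptions, not the MeerKAT scheduler's actual algorithm.

```python
# Toy NUMA-aware, fractional-GPU placement; data and policy are
# illustrative, not the real MeerKAT/Mesos scheduler.
nodes = [
    {"name": "node1", "numa": {0: {"cpus": 8, "gpu": 1.0},
                               1: {"cpus": 8, "gpu": 0.25}}},
    {"name": "node2", "numa": {0: {"cpus": 4, "gpu": 0.5}}},
]

def place(task_cpus, task_gpu):
    """First fit over NUMA domains, so a task's CPUs stay local to the
    GPU fraction it is granted."""
    for node in nodes:
        for numa_id, res in node["numa"].items():
            if res["cpus"] >= task_cpus and res["gpu"] >= task_gpu:
                res["cpus"] -= task_cpus
                res["gpu"] -= task_gpu
                return node["name"], numa_id
    return None  # no single NUMA domain can satisfy the request

print(place(4, 0.5))   # ('node1', 0)
print(place(8, 0.25))  # ('node1', 1): only that domain still has 8 CPUs
print(place(4, 1.0))   # None: no domain still holds a whole GPU
```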
Slides THBPA04 [8.485 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-THBPA04
THBPA05 | IT Infrastructure Tips and Tricks for Control System and PLC | 1158 |
The network infrastructure at Solaris (National Synchrotron Radiation Center, Kraków) carries traffic between around 900 physical devices and the dedicated virtual machines running the Tango control system. The Machine Protection System, based on PLCs, is also interconnected by the network infrastructure. We have performed extensive measurements of traffic flows, and an analysis of traffic patterns revealed congestion of the aggregated traffic from high-speed acquisition devices. We have also applied flow-based anomaly detection systems, which give an interesting low-level view of Tango control system traffic flows. All issues were successfully addressed thanks to a proper analysis of the nature of the traffic. This paper presents essential techniques and tools for analysing network traffic patterns, tips and tricks for improvements, and real-time data examples.
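As a hedged sketch of the flow-based view, the example below flags flows whose byte counts deviate strongly from the rest; the flow records and the z-score threshold are toy values, and the paper does not prescribe this particular detection method.

```python
# Toy flow-based anomaly detection over per-flow byte counts; records
# and threshold are illustrative only.
from statistics import mean, stdev

# (source, destination, bytes seen in one sampling interval)
flows = [
    ("cam01", "archiver", 1.2e9),
    ("cam02", "archiver", 1.1e9),
    ("ioc17", "tango-db", 2.0e6),
    ("cam03", "archiver", 9.8e9),  # aggregated high-rate outlier
]

rates = [nbytes for _, _, nbytes in flows]
mu, sigma = mean(rates), stdev(rates)

for src, dst, nbytes in flows:
    z = (nbytes - mu) / sigma
    if abs(z) > 1.2:  # threshold tuned to this toy data set
        print(f"anomalous flow {src} -> {dst}: {nbytes:.2e} B (z = {z:.2f})")
```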
Slides THBPA05 [3.026 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-THBPA05
THBPA06 | Configuration Management for the Integrated Control System Software of ELI-ALPS | 1162 |
ELI-ALPS (Extreme Light Infrastructure - Attosecond Light Pulse Source) is a new research infrastructure under implementation in Hungary. The infrastructure will consist of various systems (laser sources, beam transport, secondary sources, end stations) built on top of common subsystems (HVAC, cooling water, vibration monitoring, vacuum system, etc.), yielding a heterogeneous environment. To support the full control software development lifecycle for this complex infrastructure, a flexible hierarchical configuration model has been defined, and a supporting toolset has been developed for its management. The configuration model is comprehensive: it covers all relevant aspects of the entire controlled system, the control software components, and all the necessary connections between them. Furthermore, it supports the generation of virtual environments that approximate the hardware environment for software testing purposes. The toolset covers configuration functions such as storage, version control, GUI editing and queries. The model and tools presented in our paper are not specific to ELI-ALPS or to the ELI community; they may be useful for other research institutions as well.
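A minimal sketch of such a hierarchical model, under the assumption of a simple tree of systems, subsystems and parameters, is shown below; the classes, fields and example values are hypothetical rather than the ELI-ALPS schema.

```python
# Toy hierarchical configuration model with tree-wide queries; the
# schema and values are hypothetical, not the ELI-ALPS model.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    kind: str                                # "facility", "system", "subsystem", ...
    params: dict = field(default_factory=dict)
    children: list = field(default_factory=list)

    def find(self, kind):
        """Yield every node of the given kind in this subtree."""
        if self.kind == kind:
            yield self
        for child in self.children:
            yield from child.find(kind)

facility = Node("ELI-ALPS", "facility", children=[
    Node("laser-source-1", "system", children=[
        Node("cooling-water", "subsystem", {"flow_l_min": 40}),
        Node("vacuum", "subsystem", {"pressure_mbar": 1e-6}),
    ]),
    Node("end-station-A", "system", children=[
        Node("vacuum", "subsystem", {"pressure_mbar": 1e-7}),
    ]),
])

for sub in facility.find("subsystem"):   # a query over the whole model
    print(sub.name, sub.params)
```

A model held in one queryable structure like this can also be serialized to generate the virtual test environments mentioned above.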
Slides THBPA06 [2.775 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-THBPA06
THPHA045 | Packaging and High Availability for Distributed Control Systems | 1465 |
Funding: Centro Científico Tecnológico de Valparaíso (CONICYT FB-0821); Advanced Center for Electrical and Electronic Engineering (CONICYT FB-0008)
The ALMA Common Software (ACS) is a distributed framework used for the control of astronomical observatories, which is built and deployed using roughly the same tools that were available at its design stage. Because of shallow and rigid dependency management, the strong modularity principle of the framework cannot be exploited for packaging, installation and deployment. Moreover, life-cycle control of its components does not comply with standardized system-based mechanisms. These problems are shared by other instrument-based distributed systems. Because of them, the high-availability requirements of modern projects, such as the Cherenkov Telescope Array, tend to be implemented as new software features rather than with off-the-shelf, well-tested platform-based technologies. We present a general solution for high availability strongly based on system services and proper packaging. We use RPM packaging, oVirt and Docker as the infrastructure managers, Pacemaker as the software resource orchestrator, and systemd for life-cycle process control. A prototype for ACS was developed to handle its services and containers.
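As a hedged sketch of system-based life-cycle control, the snippet below delegates status queries (and, analogously, start/stop) of a container to systemd instead of ad-hoc scripts; the templated unit name is hypothetical, since an actual ACS deployment would install its own unit files.

```python
# Toy wrapper delegating life-cycle control to systemd; the unit name is
# a hypothetical example, not shipped by ACS.
import subprocess

UNIT = "acs-container@frodoContainer.service"  # hypothetical templated unit

def systemctl(action, unit):
    """Run 'systemctl <action> <unit>' and capture its output."""
    return subprocess.run(["systemctl", action, unit],
                          capture_output=True, text=True)

result = systemctl("status", UNIT)
print(result.stdout or result.stderr)
```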
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-THPHA045
THPHA047 | Network System Operation for J-PARC Accelerators | 1470 |
The network systems for the J-PARC accelerators have been operated for over ten years. This report gives: a) an overview of the control network system; b) a discussion of the relationship between the control network and the office network; and c) recent security issues (antivirus policy) for terminals and servers. Operational experience, including troubles, is also presented.
Poster THPHA047 [1.056 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-THPHA047
THPHA048 | New IT-Infrastructure of Accelerators at BINP | 1474 |
In 2017 the Injection Complex at the Budker Institute, Novosibirsk, Russia, began to operate for its consumers, the colliders VEPP-4 and VEPP-2000. For the successful functioning of these installations it is very important to ensure stable operation of their control systems and IT infrastructure. This article describes the new IT infrastructures of three accelerators: the Injection Complex, VEPP-2000 and VEPP-4. The IT infrastructure for the accelerators consists of servers, network equipment and system software with a 10-20 year life cycle and timely support. The reasons for building all three infrastructures on the same principles are cost minimization and simplified support. The design rests on high availability, flexibility and low cost. High availability is achieved through hardware redundancy: doubled servers, disks and network interconnections. Flexibility comes from extensive use of virtualization, which allows easy migration from one piece of hardware to another in case of a fault and gives users the ability to use custom system environments. Low cost follows from equipment unification and minimizing proprietary solutions.
Poster THPHA048 [2.132 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2017-THPHA048