Paper | Title | Other Keywords | Page |
---|---|---|---|
MOCOAAB04 | The Integrated Control System at ESS | controls, hardware, timing, linac | 12 |
The European Spallation Source (ESS) is a high-current proton LINAC to be built in Lund, Sweden. The LINAC delivers 5 MW of power to the target at 2500 MeV, with a nominal current of 50 mA. The project entered its Construction phase on January 1st 2013, as did the Integrated Control System (ICS) project, which was established to design, develop and deliver a reliable, well-performing and standardized control system for the ESS facility. ICS consists of four distinct Core components (Physics, Software Services, Hardware and Protection) that make up the essence of the control system. Integration Support activities support the stakeholders and users, and the Control System Infrastructure provides the underlying infrastructure required to operate the control system and the facility. The current state of the control system project and key decisions are presented, as well as immediate challenges and proposed solutions.
Slides MOCOAAB04 [11.760 MB]
MOCOAAB05 | Keck Telescope Control System Upgrade Project Status | controls, EPICS, PLC, hardware | 15 |
The Keck telescopes, located at one of the world’s premier sites for astronomy, were the first of a new generation of very large ground-based optical/infrared telescopes; the first Keck telescope began science operations in May 1993, and the second in October 1996. The components of the telescopes and control systems are more than 15 years old. The upgrade to the control systems of the telescopes consists of mechanical, electrical, software and network components, with the overall goals of improving performance, increasing reliability, addressing serious obsolescence issues and providing a knowledge refresh. The telescope encoder systems will be replaced to fully meet demanding science requirements, and electronics will be upgraded to meet the needs of modern instrumentation. The upgrade will remain backwards compatible with the remaining Observatory subsystems to allow for a phased migration to the new system. This paper describes where Keck stands in the development process, reviews key decisions that have been made, covers successes and challenges to date, and presents an overview of future plans.
Slides MOCOAAB05 [2.172 MB]
MOCOBAB03 | The Laser MegaJoule ICCS Integration Platform | controls, interface, hardware, site | 35 |
The French Atomic Energy Commission (CEA) has just built an integration platform outside the LMJ facility in order to assemble the various components of the Integrated Control Command System (ICCS). The talk gives an overview of this integration platform and of the qualification strategy based on the use of equipment simulators, and focuses on several tools that have been developed to integrate each sub-system and to qualify the overall behavior of the ICCS. Each delivery kit of a sub-system component (Virtual Machine, WIM, PLC, etc.) is scanned by antivirus software and stored in the delivery database. A specific tool allows the deployment of the delivery kits on the hardware platform (a copy of the LMJ hardware platform). Then the TMW (Testing Management Workstation) performs automatic tests by coordinating the behavior of the equipment simulators and of the operators. The test configurations, test scenarios and test results are stored in another database. Test results are analyzed, and every malfunction is recorded in an event database which is used to perform reliability calculations for each component. The qualified software is delivered to the LMJ to perform the commissioning of each bundle.
Slides MOCOBAB03 [2.025 MB]
MOCOBAB04 | The Advanced Radiographic Capability, a Major Upgrade of the Computer Controls for the National Ignition Facility | controls, laser, target, operation | 39 |
Funding: This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. #LLNL-ABS-633793
The Advanced Radiographic Capability (ARC) currently under development for the National Ignition Facility (NIF) will provide short (1-50 picosecond), ultra-high-power (>1 petawatt) laser pulses used for a variety of diagnostic purposes on NIF, ranging from a high-energy x-ray pulse source for backlighter imaging to an experimental platform for fast ignition. A single NIF quad (4 beams) is being upgraded to support experimentally driven, autonomous operations using either ARC or existing NIF pulses. Using its own seed oscillator, ARC generates short, wide-bandwidth pulses that propagate down the existing NIF beamlines for amplification before being redirected through large-aperture gratings that perform chirped-pulse compression, generating a series of high-intensity pulses within the target chamber. This significant integration effort adds 40% more control points to the existing NIF quad and will be deployed in several phases over the coming year. This talk discusses some of the new ARC-specific software controls used for short-pulse operation on NIF and the integration techniques being used to expedite deployment of this new diagnostic.
Slides MOCOBAB04 [3.279 MB]
MOCOBAB05 | How to Successfully Renovate a Controls System? - Lessons Learned from the Renovation of the CERN Injectors’ Controls Software | controls, operation, GUI, software-architecture | 43 |
Renovation of the control system of the CERN LHC injectors was initiated in 2007 within the scope of the Injector Controls Architecture (InCA) project. One of its main objectives was to homogenize the controls software across CERN accelerators and to reuse existing modern sub-systems as much as possible, such as the settings management used for the LHC. The project team created a platform that permits coexistence and intercommunication between old and new components via a dedicated gateway, allowing a progressive replacement of the former. Dealing with a heterogeneous environment of many diverse and interconnected modules, implemented using different technologies and programming languages, the team had to introduce all the modifications in the smoothest possible way, without causing machine downtime. After a brief description of the system architecture, the paper discusses the technical and non-technical sides of the renovation process, such as the validation and deployment methodology, the characteristics of operational applications and diagnostic tools, and finally users’ involvement and human aspects, outlining good decisions, pitfalls and lessons learned over the last five years.
Slides MOCOBAB05 [1.746 MB]
MOMIB07 | An OPC-UA Based Architecture for the Control of the ESPRESSO Spectrograph @ VLT | controls, PLC, hardware, interface | 70 |
ESPRESSO is a fiber-fed, cross-dispersed, high-resolution echelle spectrograph for the ESO Very Large Telescope (VLT). The instrument is designed to combine incoherently the light coming from up to 4 VLT Unit Telescopes. To ensure maximum stability, the spectrograph is placed in a thermal enclosure and a vacuum vessel. Departing from the VME-based technologies previously adopted for the ESO VLT instruments, the ESPRESSO control electronics has been developed around a new concept based on industrial COTS PLCs. This choice brings a number of benefits, such as lower cost and reduced space and power requirements. Moreover, it makes it possible to structure the whole control electronics in a distributed way, using building blocks available commercially off-the-shelf and thus minimizing the need for custom solutions. The PLC brand mainly adopted is Beckhoff, whose product lineup satisfies the requirements set by the instrument control functions. OPC-UA is the chosen communication protocol between the PLCs and the instrument control software, which is based on the VLT Control Software package.
Slides MOMIB07 [0.419 MB]
Poster MOMIB07 [32.149 MB]
MOMIB08 | Continuous Integration Using LabVIEW, SVN and Hudson | LabView, framework, Linux, interface | 74 |
In the accelerator domain there is a need to integrate industrial devices and to create control and monitoring applications in an easy yet structured way. The LabVIEW-RADE framework provides the method and tools to implement these requirements, and also provides the essential integration of these applications into the CERN controls infrastructure. Building and distributing these core libraries for multiple platforms (e.g. Windows, Linux and Mac) and for different versions of LabVIEW is a time-consuming task that consists of repetitive and cumbersome work. All libraries have to be tested, commissioned and validated. Preparing one package for each variation used to take almost a week to complete. With the introduction of Subversion version control (SVN) and the Hudson continuous integration server (HCI), the process is now fully automated and a new distribution for all platforms is available within the hour. In this paper we evaluate the pros and cons of using continuous integration, the time it took to get up and running, and the added benefits. We conclude with an evaluation of the framework and indicate new areas of improvement and extension.
Slides MOMIB08 [2.990 MB]
Poster MOMIB08 [6.363 MB]
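The automated multi-platform packaging flow described in this abstract can be sketched as a build-matrix driver of the kind a CI server runs on every commit. The platform list, LabVIEW versions and the packaging step below are illustrative assumptions, not the actual CERN setup.

```python
# Sketch of a build-matrix driver: one build/package job per combination of
# target platform and tool version, as a CI job might run after an SVN commit.
# Platform names, version strings and the build step are hypothetical.
from itertools import product

PLATFORMS = ["windows", "linux", "mac"]    # assumed targets
LV_VERSIONS = ["2011", "2012"]             # assumed LabVIEW versions

def plan_builds(platforms, versions):
    """Return one pending job per (platform, version) combination."""
    return [{"platform": p, "version": v, "status": "pending"}
            for p, v in product(platforms, versions)]

def run_build(job):
    """Placeholder for the real build/test/package step."""
    job["status"] = "ok"   # a real driver would invoke the build tool here
    return job

if __name__ == "__main__":
    jobs = [run_build(j) for j in plan_builds(PLATFORMS, LV_VERSIONS)]
    print(f"{len(jobs)} packages built")
```

The point of the matrix is that adding a platform or a tool version is a one-line change, while the per-job steps stay identical, which is what makes the weekly manual process collapse to under an hour.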
MOMIB09 | ZIO: The Ultimate Linux I/O Framework | framework, controls, Linux, interface | 77 |
ZIO (with Z standing for "The Ultimate I/O" Framework) was developed for CERN with the specific needs of physics labs in mind, needs that are poorly addressed in the mainstream Linux kernel. ZIO provides a framework for industrial, high-throughput, high-channel-count I/O device drivers (digitizers, function generators, and timing devices such as TDCs), with performance, generality and scalability as design goals. Among its many features, it offers abstractions for input and output channels and channel sets; configurable trigger types; configurable buffer types; an interface via sysfs attributes and control and data device nodes; and a socket interface (PFZIO), which provides enormous flexibility and power for remote control. In this paper, we discuss the design and implementation of ZIO, and describe representative cases of driver development for typical and exotic applications (FMC ADC 100 Msps digitizer, FMC TDC timestamp counter, FMC DEL fine delay).
Slides MOMIB09 [0.818 MB]
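The sysfs-attribute interface mentioned in the abstract follows the usual Linux convention of one text value per file, which makes user-space control trivial. A minimal sketch of reading and writing such attributes is below; the device directory and attribute names are hypothetical, not real ZIO nodes.

```python
# Minimal user-space helpers for the sysfs-attribute style of control that
# ZIO exposes. The example device path is hypothetical; sysfs attributes
# are plain newline-terminated text files, one value per file.
from pathlib import Path

def read_attr(dev_dir, name):
    """Read one sysfs attribute and return it as a stripped string."""
    return (Path(dev_dir) / name).read_text().strip()

def write_attr(dev_dir, name, value):
    """Write one sysfs attribute; the kernel side parses the text value."""
    (Path(dev_dir) / name).write_text(f"{value}\n")

# Hypothetical usage against a ZIO-style device directory:
# write_attr("/sys/bus/zio/devices/zio-dev-0000", "trigger/ms-period", 100)
# print(read_attr("/sys/bus/zio/devices/zio-dev-0000", "trigger/ms-period"))
```

Nothing here is specific to ZIO; it simply illustrates why exposing triggers and buffers as sysfs attributes makes the framework scriptable from any language without a dedicated client library.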
MOPPC013 | Revolution in Motion Control at SOLEIL: How to Balance Performance and Cost | controls, hardware, embedded, TANGO | 81 |
SOLEIL * is a third-generation synchrotron radiation source located near Paris, France. REVOLUTION (REconsider Various contrOLlers for yoUr moTION) is the motion-controller upgrade project at SOLEIL. It was initiated by the first "Motion control workshop in radiation facilities" in May 2011, which allowed an international motion-control community in large research facilities to develop. The next meeting will take place during the pre-ICALEPCS workshop "Motion Control Applications in Large Facilities" **. As motion control is an essential key element in assuring optimal results, but also at a competitive price, the REVOLUTION team selected alternatives following a theoretical and practical methodology: advanced market analysis, tests, measurements and impact evaluation. Products from two major motion-control manufacturers are on the short list. They must provide the best performance for a small selection of demanding applications, and the lowest global cost of maintaining operational conditions for the majority of applications at SOLEIL. The search for the best technical, economic and organizational compromise to face our challenges is detailed in this paper.
* www.synchrotron-soleil.fr
** http://www.synchrotron-soleil.fr/Workshops/2013/motioncontrol
MOPPC016 | IFMIF EVEDA RFQ Local Control System to Power Tests | controls, EPICS, rfq, network | 89 |
In the IFMIF EVEDA project, a normal-conducting Radio Frequency Quadrupole (RFQ) is used to bunch and accelerate a 130 mA steady beam to 5 MeV. The RFQ cavity is divided into three structures, named super-modules. Each super-module is divided into 6 modules, for a total of 18 modules for the overall structure. The final three modules have to be tested at high power to validate the most critical RF components of the RFQ cavity and, on the other hand, to test the performance of the main ancillaries that will be used in the IFMIF EVEDA project (vacuum manifold system, tuning system and control system). The last three modules were chosen because they will operate in the most demanding conditions in terms of power density (100 kW/m) and surface electric field (1.8·Ekp). The Experimental Physics and Industrial Control System (EPICS) environment [1] provides the framework for monitoring any equipment connected to it. This paper reports the application of this framework to the RFQ power tests at the Legnaro National Laboratories [2][3].
[1] http://www.aps.anl.gov/epics/
[2] http://www.lnl.infn.it/
[3] http://www.lnl.infn.it/~epics/joomla/
MOPPC017 | Upgrade of J-PARC/MLF General Control System with EPICS/CSS | EPICS, controls, operation, LabView | 93 |
A general control system of the Materials and Life science experimental Facility (MLF-GCS) consists of programmable logic controllers (PLCs), operator interfaces (OPIs) based on iFIX, data servers, and so on. It controls various devices such as a mercury target and a personnel protection system. The present system has been working well, but it poses maintenance and upgrade problems because of the poor flexibility of the OS and version compatibility. To overcome this weakness, we decided to replace it with an advanced system based on EPICS and CSS as the framework and OPI software, which offer high scalability and usability. We then built a prototype system, connected it to the current MLF-GCS, and examined its performance. As a result, communication between the EPICS/CSS system and the PLCs was successfully implemented through a Takebishi OPC server, data from 7000 points were stored with suitable speed and capacity in a new data-storage server based on PostgreSQL, and the OPI functions of the CSS were verified. We concluded from these examinations that the EPICS/CSS system provides the functions and performance required for the advanced MLF-GCS.
Poster MOPPC017 [0.376 MB]
MOPPC021 | Configuration System of the NSLS-II Booster Control System Electronics | controls, booster, kicker, database | 100 |
The National Synchrotron Light Source II is under construction at Brookhaven National Laboratory, Upton, USA. NSLS-II consists of a linac, transport lines, a booster synchrotron and the storage ring. The main features of the booster are a 1 or 2 Hz cycle and a beam-energy ramp from 200 MeV up to 3 GeV in 300 msec. EPICS was chosen as the base for the NSLS-II control system. The booster control system covers all parts of the facility, such as power supplies, the timing system, diagnostics, the vacuum system and many others. Each part includes a set of various electronic devices and many parameters that must be fully defined for the control-system software. This paper considers an approach proposed for defining some equipment of the NSLS-II booster. It provides a description of the different entities of the facility in a uniform way, and this information is used to generate configuration files for EPICS IOCs. The main goal of this approach is to keep information in one place and to eliminate data duplication. It also simplifies configuring and modifying the description and makes it clearer and more easily usable by engineers and operators.
Poster MOPPC021 [0.240 MB]
MOPPC026 | Bake-out Mobile Controls for Large Vacuum Systems | controls, vacuum, PLC, status | 119 |
Large vacuum systems at CERN (the Large Hadron Collider, the Low Energy Ion Ring, …) require bake-out to achieve ultra-high-vacuum specifications. The bake-out cycle is used to decrease the outgassing rate of the vacuum vessel and to activate the Non-Evaporable Getter (NEG) thin film. Bake-out control is a Proportional-Integral-Derivative (PID) regulation with complex recipes, interlocks, troubleshooting management and remote control. It is based on mobile Programmable Logic Controller (PLC) cabinets, a fieldbus network and a Supervisory Control and Data Acquisition (SCADA) application. CERN vacuum installations include more than 7 km of baked vessels; using mobile cabinets considerably reduces the cost of the control system. The cabinets are installed close to the vacuum vessels for the duration of the bake-out cycle and can be used in all the CERN vacuum facilities. Remote control is provided by the fieldbus network and the SCADA application.
Poster MOPPC026 [3.088 MB]
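The PID regulation at the heart of the bake-out control can be sketched as a discrete controller driving a heater. The gains, sample time, setpoint and 0-100% heater range below are illustrative assumptions, not the CERN recipe values.

```python
# Minimal discrete PID regulator of the kind used for bake-out temperature
# control; gains, sample time and setpoints are illustrative values only.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measured):
        """Return the new actuator demand (e.g. heater power, 0..100 %)."""
        error = setpoint - measured
        self.integral += error * self.dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        out = self.kp * error + self.ki * self.integral + self.kd * deriv
        return max(0.0, min(100.0, out))   # clamp to the heater range

# One regulation step for a vessel at 180 degC aiming at a 250 degC plateau
# (hypothetical gains): PID(kp=2.0, ki=0.1, kd=0.5, dt=1.0).update(250.0, 180.0)
```

A real bake-out recipe sequences many such setpoints (ramp, plateau, NEG activation, cool-down) and wraps the loop in interlocks; the PLC runs essentially this arithmetic every cycle.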
MOPPC027 | The Control System of CERN Accelerators Vacuum [LS1 Activities and New Developments] | controls, vacuum, PLC, linac | 123 |
After three years of operation, the LHC entered its first Long Shutdown period (LS1) in February 2013. Major consolidation and maintenance works will be performed across the whole CERN accelerator chain in order to prepare the LHC to restart at higher energy in 2015; the rest of the accelerator complex shall resume in mid-2014. We report on the recent and ongoing vacuum-controls projects. Some of them are associated with the consolidation of the vacuum systems of the LHC and of its injectors; others concern the complete renovation of the controls of some machines; and there are also some completely new installations. Due to the wide age span of the existing vacuum installations, there is a mix of design philosophies and of control-equipment generations. The renovation and the novel projects offer an opportunity to improve the quality assurance of vacuum controls by identifying, documenting, naming and labelling all pieces of equipment; minimising the number of equipment versions with similar functionality; and homogenising the control architectures while converging to a single software framework.
Poster MOPPC027 [67.309 MB]
MOPPC030 | Developments on the SCADA of CERN Accelerators Vacuum | controls, vacuum, PLC, database | 135 |
During the first three years of LHC operation, the priorities for the vacuum-controls SCADA were to attend to user requests and to improve its ergonomics and efficiency. We have now achieved simpler and more uniform information access; automatic scripts instead of tedious manual actions; functionalities and menus standardized across all accelerators; and enhanced tools for data analysis and maintenance interventions. Several decades of cumulative developments, based on heterogeneous technologies and architectures, have called for a homogenization effort. The Long Shutdown (LS1) provides the opportunity to further standardize our vacuum control systems around Siemens S7 PLCs and the PVSS SCADA. Meanwhile, we have been promoting exchanges with other groups at CERN and outside institutes: to follow the global update policy for software libraries; to discuss philosophies and development details; and to produce common products. Furthermore, while preserving the current functionalities, we are working on convergence towards the CERN UNICOS framework.
Poster MOPPC030 [31.143 MB]
MOPPC031 | IEPLC Framework, Automated Communication in a Heterogeneous Control System Environment | controls, PLC, framework, hardware | 139 |
Programmable Logic Controllers (PLCs), PXI systems and other micro-controller families are essential components of CERN’s control system. They typically present custom communication interfaces, which makes federating them a difficult task. Dependency on specific protocols makes code non-reusable and the replacement of old technology a tedious problem. IEPLC proposes a uniform and hardware-independent communication scheme. It automatically generates all the resources needed on the master and slave sides to implement a common, generic Ethernet communication. The framework consists of a set of tools, scripts and a C++ library. The Java configuration tool allows describing and instantiating the data to be exchanged with the controllers. The Python scripts generate the resources necessary for the final communication, while the C++ library allows the master process to send and receive data at run time. This paper describes the product, focusing on its main objectives: the definition of a clear, standard communication interface, and the reduction of users’ development and configuration time.
Poster MOPPC031 [2.509 MB]
MOPPC032 | OPC Unified Architecture within the Control System of the ATLAS Experiment | hardware, interface, toolkit, controls | 143 |
The Detector Control System (DCS) of the ATLAS experiment at the LHC has been using the OPC DA standard as the interface for controlling various standard and custom hardware components and for their integration into the SCADA layer. Due to its platform restrictions and expiring long-term support, OPC DA will be replaced by its successor, the OPC Unified Architecture (UA) standard. OPC UA offers powerful object-oriented information-modeling capabilities, platform independence and secure communication, and allows server embedding into custom electronics. We present an OPC UA server implementation for CANopen devices which is used in the ATLAS DCS to control dedicated I/O boards distributed within and outside the detector. Architecture and server-configuration aspects are detailed, and the server performance is evaluated and compared with the previous OPC DA server. Furthermore, based on the experience with this first server implementation, OPC UA is evaluated as a standard middleware solution for future use in the ATLAS DCS and beyond.
Poster MOPPC032 [2.923 MB]
MOPPC033 | Opening the Floor to PLCs and IPCs: CODESYS in UNICOS | controls, PLC, framework, hardware | 147 |
This paper presents the integration of a third industrial platform for process-control applications with the UNICOS (Unified Industrial Control System) framework at CERN. The UNICOS framework is widely used in many process-control domains (e.g. cryogenics, cooling, ventilation, vacuum) to produce highly structured, standardised control applications for the two CERN-approved industrial PLC product families, Siemens and Schneider. The CoDeSys platform, developed by 3S (Smart Software Solutions), provides an independent IEC 61131-3 programming environment for industrial controllers. The complete CoDeSys-based development includes: (1) a dedicated Java™ module plugged into the UAB (UNICOS Application Builder), an automatic code-generation tool; (2) the associated UNICOS baseline library for industrial PLCs and IPCs (Industrial PCs), CoDeSys v3 compliant; and (3) the Python-based templates to deploy device instances and control logic. The availability of this development opens the UNICOS framework to a wider community of industrial PLC manufacturers (e.g. ABB, WAGO) and, as the CoDeSys control runtime works on standard operating systems (Linux, Windows 7), UNICOS could be deployed to any IPC.
Poster MOPPC033 [4.915 MB]
MOPPC034 | Control System Hardware Upgrade | controls, hardware, interface, power-supply | 151 |
The Paul Scherrer Institute builds, runs and maintains several particle accelerators. The proton accelerator HIPA, the oldest facility, was mostly equipped with CAMAC components until a few years ago. In several phases, CAMAC was replaced by VME hardware, involving about 60 VME crates with 500 cards controlling a few hundred power supplies, motors, and digital as well as analog input/output channels. To control old analog and new digital power supplies with the same new VME components, an interface, the so-called Multi-IO, had to be developed. In addition, several other interfaces, for example to accommodate different connectors, had to be built. The hardware upgrade is explained through a few examples.
Poster MOPPC034 [0.151 MB]
MOPPC035 | Re-integration and Consolidation of the Detector Control System for the Compact Muon Solenoid Electromagnetic Calorimeter | hardware, controls, database, interface | 154 |
Funding: Swiss National Science Foundation (SNSF)
The current shutdown of the Large Hadron Collider (LHC), following three successful years of physics data-taking, provides an opportunity for major upgrades to be performed on the Detector Control System (DCS) of the Electromagnetic Calorimeter (ECAL) of the Compact Muon Solenoid (CMS) experiment. The upgrades involve changes to both hardware and software, with particular emphasis on taking advantage of more powerful servers and updating third-party software to the latest supported versions. The considerable increase in available processing power enables a reduction from fifteen servers to three or four. To host the control system on fewer machines and to ensure that previously independent software components could run side by side without incompatibilities, significant changes in the software and databases were required. Additional work was undertaken to modernise and consolidate I/O interfaces. The challenges of preparing and validating the hardware and software upgrades are described, along with the experience of migrating to this newly consolidated DCS.
Poster MOPPC035 [2.811 MB]
MOPPC038 | Rapid Software Prototyping into Large Scale Controls Systems | controls, hardware, interface, laser | 166 |
Funding: This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. #LLNL-ABS-632892
The programmable spatial shaper (PSS) within the National Ignition Facility (NIF) reduces energy on isolated optic flaws in order to lower optics maintenance costs. This will be accomplished by using a closed-loop system to determine the optimal liquid-crystal-based spatial light pattern for beam shaping and placement of variable-transmission blockers. A stand-alone prototype was developed and successfully run in a lab environment, as well as on a single quad of NIF lasers following a temporary hardware reconfiguration required to support the test. Several challenges exist in directly integrating the C-based PSS engine, written by an independent team, into the Integrated Computer Control System (ICCS) for a proof of concept on all 48 NIF laser quads. ICCS is a large-scale, data-driven distributed control system written primarily in Java, using CORBA to interact with more than 60,000 control points. The project plan and software design needed to specifically address the engine interface specification, configuration management, a reversion plan for the existing 0%-transmission blocker capability, and a multi-phase integration and demonstration schedule.
Poster MOPPC038 [2.410 MB]
MOPPC039 | Hardware Interface Independent Serial Communication (IISC) | interface, hardware, controls, factory | 169 |
Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy.
The communication framework for the in-house controls system in the Collider-Accelerator Department at BNL depends on a variety of hardware interfaces and protocols, including RS232, GPIB, USB and Ethernet, to name a few. IISC is a client software library that can be used to initiate, conduct and terminate data-exchange sessions with devices over the network. It acts as a layer of abstraction, allowing developers to establish communication with these devices without being concerned about the particulars of the interfaces and protocols involved. Details of the implementation and a performance analysis are presented.
Poster MOPPC039 [1.247 MB]
MOPPC045 | Cilex-Apollon Synchronization and Security System | laser, TANGO, target, distributed | 188 |
Funding: CNRS, MESR, CG91, CRiDF, ANR
Cilex-Apollon is a high-intensity laser facility delivering at least 5 PW pulses on targets at one shot per minute, to study physics such as laser-plasma electron and ion acceleration and laser-plasma X-ray sources. Under construction, Apollon is a four-beam laser installation with two target areas. The Apollon control system is based on Tango. The Synchronization and Security System (SSS) is the heart of this control system and has two main functions: the first is to deliver triggering signals to laser sources and diagnostics, and the second is to ensure machine protection, guaranteeing the integrity of optical components by avoiding damage caused by abnormal operational modes. The SSS is composed of two distributed systems: the machine protection system is based on a distributed I/O system running a LabVIEW real-time application, and the synchronization part is based on the distributed Greenfield Technology system. The SSS also delivers shots to the experimental areas through programmed sequences, and is interfaced to the Tango bus. The article presents the architecture, functionality, interfaces to other processes, performance, and feedback from a first deployment on a demonstrator.
Poster MOPPC045 [1.207 MB]
MOPPC048 | Evaluation of the Beamline Personnel Safety System at ANKA under the Aegis of the 'Designated Architectures' Approach | radiation, controls, operation, experiment | 195 |
The Beamline Personnel Safety System (BPSS) at the Angstroemquelle Karlsruhe (ANKA) started operation in 2003. The paper describes the safety-related design and evaluation of serial, parallel and nested radiation safety areas, which allows the flexible plug-in of experimental setups at ANKA beamlines. It evaluates the resulting requirements for safety-system hardware and software and the necessary validation procedure defined by current national and international standards, based on probabilistic reliability parameters supplied by manufacturers' component libraries and on an approach known as 'Designated Architectures', which defines safety functions in terms of sensor-logic-actuator chains. An ANKA beamline example is presented, with special regard to features such as the (self-)Diagnostic Coverage (DC) of the control system, which is not part of classical Markov-process modelling of system safety.
Poster MOPPC048 [0.699 MB]
MOPPC054 | Application of Virtualization to CERN Access and Safety Systems | hardware, controls, network, interface | 214 |
Access and safety systems are by nature heterogeneous: different kinds of hardware and software, commercial and home-grown, are integrated to form a working system. This implies many different application services, for which separate physical servers are allocated to keep the various subsystems isolated. Each such application server requires special expertise to install and manage. Furthermore, physical hardware is relatively expensive and presents a single point of failure to any of the subsystems, unless designed to include often complex redundancy protocols. We present the Virtual Safety System Infrastructure (VSSI) project, whose aim is to use modern virtualization techniques to abstract application servers from the actual hardware. The virtual servers run on robust and redundant standard hardware, where snapshotting and backing up of virtual machines can be carried out to maximize availability. Uniform maintenance procedures are applicable to all virtual machines at the hypervisor level, which helps to standardize maintenance tasks. This approach has been applied to the servers of the CERN PS and LHC access systems, as well as to the CERN Safety Alarm Monitoring System (CSAM).
![]() |
Poster MOPPC054 [1.222 MB] | ||
MOPPC058 | Design, Development and Implementation of a Dependable Interlocking Prototype for the ITER Superconducting Magnet Powering System | interface, plasma, PLC, controls | 230 |
|
|||
Based on the experience with an operational interlock system for the superconducting magnets of the LHC, CERN has developed a prototype for the ITER magnet central interlock system in collaboration with ITER. A total energy of more than 50 gigajoules is stored in the magnet coils of the ITER Tokamak. Upon detection of a quench or other critical powering failures, the central interlock system must initiate the extraction of the energy to protect the superconducting magnets and, depending on the situation, request plasma disruption mitigations to protect against mechanical forces induced between the magnet coils and the plasma. To fulfil these tasks with the required high level of dependability, the implemented interlock system is based on redundant PLC technology making use of hardwired interlock loops in 2-out-of-3 redundancy, providing the best balance between safety and availability. In order to allow for simple and uniform connectivity of all client systems involved in the safety-critical protection functions, as well as for common remote diagnostics, a dedicated user interface box has been developed. | |||
MOPPC068 | Operational Experience with a PLC Based Positioning System for a LHC Extraction Protection Element | controls, PLC, operation, dumping | 254 |
|
|||
The LHC Beam Dumping System (LBDS) nominally dumps the beam synchronously with the passage of the particle-free beam abort gap at the beam dump extraction kickers. In the case of an asynchronous beam dump, an absorber element protects the machine aperture. This is a single-sided collimator (TCDQ), positioned close to the beam, which has to follow the beam position and beam size during the energy ramp. The TCDQ positioning control is implemented within a SIEMENS S7-300 Programmable Logic Controller (PLC). A positioning accuracy better than 30 μm is achieved through a PID-based servo algorithm. Errors due to an incorrect position of the absorber with respect to the beam energy and size generate interlock conditions for the LHC machine protection system. Additionally, the correct position of the TCDQ with respect to the beam position in the extraction region is cross-checked after each dump by the LBDS eXternal Post Operational Check (XPOC). This paper presents the experience gained during LHC Run 1 and describes improvements that will be applied during the LHC shutdown of 2013-2014. | |||
![]() |
Poster MOPPC068 [3.381 MB] | ||
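The abstract mentions a PID-based servo algorithm on the PLC but does not show its control law. A minimal discrete PID step can be sketched as follows; the class name, gains and time step are purely illustrative and are not taken from the actual S7-300 implementation:

```python
class PID:
    """Discrete PID controller: one correction output per sampling period."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measured):
        # Proportional term on the current error, integral term accumulated
        # over time, derivative term on the error change since the last step.
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

In the TCDQ context the setpoint would follow the beam position and size during the ramp, and the output would drive the jaw motor; out-of-tolerance errors would additionally raise an interlock.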
MOPPC069 | Operational Experience with the LHC Software Interlock System | interlocks, injection, operation, hardware | 258 |
|
|||
The Software Interlock System (SIS) is a Java software project developed for the CERN accelerator complex. The core functionality of SIS is to provide a framework for programming high-level interlocks based on the surveillance of a large number of accelerator device parameters. The interlock results are exported to trigger beam dumps, inhibit beam transfers or abort the main magnet powering. Since its deployment in 2008, the LHC SIS has demonstrated that it is a reliable solution for complex interlocks involving multiple or distributed systems, and for cases where quick solutions to unexpected situations are needed. This paper presents the operational experience with software interlocking in the LHC machine, reporting on the overall performance and flexibility of the SIS and discussing the risks when software interlocks are used to patch missing functionality for personnel safety or machine protection. | |||
![]() |
Poster MOPPC069 [0.323 MB] | ||
MOPPC078 | TANGO Steps Toward Industry | TANGO, controls, site, synchrotron | 277 |
|
|||
Funding: Gravit innovation, Grenoble, France. TANGO has proven its excellent reliability by controlling several large scientific installations in 24/7 mode. Although it was originally built for particle accelerators and scientific experiments, it can be used to control any equipment, from small domestic applications to large industrial installations. In recent years the interest in TANGO has been growing, and several industrial partners in Europe propose services for TANGO. The TANGO industrialization project aims to increase the visibility of the system and foster the economic activity around it. It promotes TANGO as an open-source, flexible solution for controlling equipment, as an alternative to proprietary SCADA systems. To achieve this goal several actions have been started, such as the development of an industrial demonstrator, better packaging, the integration of OPC-UA and improved communication around TANGO. The next step will be the creation of a TANGO software foundation able to engage itself as a legal and economic partner for industry. This foundation will be funded by industrial partners, scientific institutes and grants. The goal is to foster and nurture the growing economic ecosystem around TANGO. |
|||
![]() |
Poster MOPPC078 [4.179 MB] | ||
MOPPC079 |
CODAC Core System, the ITER Software Distribution for I&C | controls, EPICS, interface, network | 281 |
|
|||
In order to support the adoption of the ITER standards for the Instrumentation & Control (I&C) and to prepare for the integration of the plant systems I&C developed by many distributed suppliers, the ITER Organization is providing the I&C developers with a software distribution named CODAC Core System. This software has been released as incremental versions since 2010, starting from preliminary releases and with stable versions since 2012. It includes the operating system, the EPICS control framework and the tools required to develop and test the software for the controllers, central servers and operator terminals. Some components have been adopted from the EPICS community and adapted to the ITER needs, in collaboration with the other users. This is the case for the CODAC services for operation, such as operator HMI, alarms or archives. Other components have been developed specifically for the ITER project. This applies to the Self-Description Data configuration tools. This paper describes the current version (4.0) of the software as released in February 2013 with details on the components and on the process for its development, distribution and support. | |||
![]() |
Poster MOPPC079 [1.744 MB] | ||
MOPPC084 | ESS Integrated Control System and the Agile Methodology | controls, feedback, target, neutron | 296 |
|
|||
The stakeholders of the ESS Integrated Control System (ICS) reside in four parts of the ESS machine: accelerator, target, neutron instruments and conventional facilities. ICS plans to meet the stakeholders’ needs early in the Construction phase, to accelerate and facilitate the Commissioning process by delivering the required tools earlier. This introduces the risk that stakeholders will not have the full set of information required for the development of the interfacing systems available early enough (e.g. missing requirements, undecided design, etc.). For ICS to accomplish its objectives, it needs to establish a development process that allows quick adaptation to any change in the requirements with minimal impact on the execution of the projects. The Agile methodology is well known for its ability to adapt quickly to change, as well as for involving users in the development process and producing working and reliable software from a very early stage of a project. The paper presents the plans, the tools, the organization of the team and the preliminary results of the setup work. | |||
MOPPC086 | Manage the MAX IV Laboratory Control System as an Open Source Project | controls, TANGO, GUI, framework | 299 |
|
|||
Free Open Source Software (FOSS) is now deployed and used in most large facilities. It brings many qualities that can compete with proprietary software, such as robustness, reliability and functionality. Arguably the most important quality, the one that marks the DNA of FOSS, is transparency. This is the fundamental difference compared to its closed competitors and has a direct impact on how projects are managed. Since users, reporters and contributors are more than welcome, the project management has to have a clear strategy to promote exchange and to sustain a community. Control system teams have the chance to work in the same arena as their users and, even better, some of the users have programming skills. Rather than a fortress strategy, an open strategy can benefit from this situation to enhance the user experience. In this paper we explain the position of the MAX IV KITS team: how a “Tango install party” and “coding dojo” have been used to promote contributions to the control system software, and how our projects are structured in terms of process and tools (SARDANA, GIT, etc.) to make them more accessible for in-house collaboration as well as for other facilities or even subcontractors. | |||
![]() |
Poster MOPPC086 [7.230 MB] | ||
MOPPC087 | Tools and Rules to Encourage Quality for C/C++ Software | monitoring, framework, controls, diagnostics | 303 |
|
|||
Inspired by the success of the software improvement process for Java projects, in place for several years in the CERN accelerator controls group, it was agreed in 2011 to apply the same principles to the C/C++ software developed in the group, an initiative we call the Software Improvement Process for C/C++ software (SIP4C/C++). The objectives of the SIP4C/C++ initiative are to: 1) agree on and establish best software quality practices, 2) choose tools for quality, and 3) integrate these tools into the build process. After a year we have reached a number of concrete results, thanks to the collaboration between the several projects involved, including: a common build tool (based on GNU Make), which standardizes the way C/C++ binaries are built, tested and released; unit testing with Google Test and Google Mock; continuous integration of C/C++ products with the existing CI server (Atlassian Bamboo); static code analysis (Coverity); generation of a manifest file with dependency information; and runtime in-process metrics. This paper presents the SIP4C/C++ initiative in more detail, summarizing our experience and future plans. | |||
![]() |
Poster MOPPC087 [3.062 MB] | ||
MOPPC088 | Improving Code Quality of the Compact Muon Solenoid Electromagnetic Calorimeter Control Software to Increase System Maintainability | controls, monitoring, GUI, detector | 306 |
|
|||
Funding: Swiss National Science Foundation (SNSF) The Detector Control System (DCS) software of the Electromagnetic Calorimeter (ECAL) of the Compact Muon Solenoid (CMS) experiment at CERN is designed primarily to enable safe and efficient operation of the detector during Large Hadron Collider (LHC) data-taking periods. Through a manual analysis of the code and the adoption of ConQAT*, a software quality assessment toolkit, the CMS ECAL DCS team has made significant progress in reducing complexity and improving code quality, with observable results in terms of a reduction in the effort dedicated to software maintenance. This paper explains the methodology followed, including the motivation to adopt ConQAT, the specific details of how this toolkit was used and the outcomes that have been achieved. * ConQAT, https://www.conqat.org/ |
|||
![]() |
Poster MOPPC088 [2.510 MB] | ||
MOPPC090 | Managing a Product Called NIF - PLM Current State and Processes | controls, data-management, operation, laser | 310 |
|
|||
Funding: * This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. #LLNL-ABS-632452 Product lifecycle management (PLM) is the process of managing the entire lifecycle of a product from its conception, through design and manufacture, to service and disposal. The National Ignition Facility (NIF) can be considered one enormous product that is made up of hundreds of millions of individual parts and components (or products). The ability to manage and control the physical definition, status and configuration of the sum of all of these products is a monumental undertaking yet critical to the validity of the shot experiment data and the safe operation of the facility. NIF is meeting this challenge by utilizing an integrated and graded approach to implement a suite of commercial and custom enterprise software solutions to address PLM and other facility management and configuration requirements. It has enabled the passing of needed elements of product data into downstream enterprise solutions while at the same time minimizing data replication. Strategic benefits have been realized using this approach while validating the decision for an integrated approach where more than one solution may be required to address the entire product lifecycle management process. |
|||
![]() |
Poster MOPPC090 [14.237 MB] | ||
MOPPC095 | PETAL Control System Status Report | controls, laser, framework, hardware | 321 |
|
|||
Funding: CEA / Région Aquitaine / ILP / Europe / HYPER The PETAL laser facility is a high-energy multi-petawatt laser beam being installed in the Laser MegaJoule building. PETAL is designed to produce a laser beam delivering 3 kilojoules of energy in a duration of 0.5 picoseconds. Autonomous commissioning began in 2013. In the long term, PETAL’s control system is to be integrated into the LMJ’s control system for coupling with its 192 nanosecond laser beams. The presentation gives an overview of the general control system architecture and focuses on the use of the TANGO framework in some of the subsystem software. It then explains the steps planned to develop the control system from the first laser shots in autonomous operation to the merger into the LMJ facility. |
|||
![]() |
Poster MOPPC095 [1.891 MB] | ||
MOPPC097 | The FAIR Control System - System Architecture and First Implementations | timing, controls, operation, network | 328 |
|
|||
The paper presents the architecture of the control system for the Facility for Antiproton and Ion Research (FAIR), currently under development. The FAIR control system comprises the full electronics, hardware, and software to control, commission, and operate the FAIR accelerator complex for multiplexed beams. It takes advantage of collaborations with CERN by using proven framework solutions like FESA, LSA, and White Rabbit. The equipment layer consists of equipment interfaces, embedded system controllers, and software representations of the equipment (FESA). A dedicated real-time network based on White Rabbit is used to synchronize and trigger actions at the equipment level. The middle layer provides service functionality to both the equipment layer and the application layer through the IP control system network. LSA is used for settings management. The application layer comprises the applications for operators, as GUI applications or command-line tools, typically written in Java. To validate these concepts, FAIR's proton injector at CEA (France) and CRYRING at GSI will be commissioned as early as 2014 with reduced functionality of the proposed FAIR control system stack. | |||
![]() |
Poster MOPPC097 [2.717 MB] | ||
MOPPC110 | The Control System for the CO2 Cooling Plants for Physics Experiments | controls, detector, operation, interface | 370 |
|
|||
CO2 cooling has become an attractive technology for current and future tracking particle detectors. A key advantage of using CO2 as a refrigerant is its high heat-transfer capability, allowing significant material budget savings, a critical element in state-of-the-art detector technologies. Several CO2 cooling stations, with cooling power ranging from 100 W to several kW, have been developed at CERN to support detector testing for future LHC detector upgrades. Currently, two CO2 cooling plants, for the ATLAS Pixel Insertable B-Layer and the Phase I Upgrade CMS Pixel detector, are under construction. This paper describes the control system design and implementation using the UNICOS framework for the PLCs and SCADA. The control philosophy, safety and interlocking standard, user interfaces and additional features are presented. CO2 cooling is characterized by high operational stability and accurate evaporation temperature control over large distances. Split-range PID controllers with dynamically calculated limiters, multi-level interlocking and new software tools such as an online CO2 p-H diagram jointly enable the cooling to fulfill the key requirements of a reliable system. | |||
![]() |
Poster MOPPC110 [2.385 MB] | ||
MOPPC111 | Overview of LINAC4 Beam Instrumentation Software | linac, emittance, controls, electronics | 374 |
|
|||
This paper presents an overview of results from the recent LINAC4 commissioning with an H− beam at CERN. It covers the beam instrumentation systems that acquire beam position, intensity, size and emittance, from the project proposal through to commissioning results. | |||
MOPPC112 | Current Status and Perspectives of the SwissFEL Injector Test Facility Control System | controls, EPICS, operation, network | 378 |
|
|||
The Free Electron Laser (SwissFEL) Injector Test Facility at the Paul Scherrer Institute has been in operation for more than three years. The Injector Test Facility is a valuable development and validation platform for all major SwissFEL subsystems, including controls. Based on the experience gained from supporting Test Facility operations, the paper presents current and prospective controls solutions, focusing on the future SwissFEL project. | |||
![]() |
Poster MOPPC112 [1.224 MB] | ||
MOPPC124 | Optimizing EPICS for Multi-Core Architectures | EPICS, real-time, controls, Linux | 399 |
|
|||
Funding: Work supported by German Bundesministerium für Bildung und Forschung and Land Berlin. EPICS is a widely used software framework for real-time controls in large facilities, accelerators and telescopes. Its multithreaded IOC (Input Output Controller) Core software has been developed on traditional single-core CPUs. The ITER project will use modern multi-core CPUs, running the RHEL Linux operating system in its MRG-R real-time variant. An analysis of the thread handling in IOC Core shows different options for improving the performance and real-time behavior, which are discussed and evaluated. The implementation is split between improvements inside EPICS Base, which have been merged back into the main distribution, and a support module that makes full use of these new features. This paper describes design and implementation aspects, and presents results as well as lessons learned. |
|||
![]() |
Poster MOPPC124 [0.448 MB] | ||
MOPPC126 | !CHAOS: the "Control Server" Framework for Controls | controls, framework, distributed, database | 403 |
|
|||
We report on the progress of !CHAOS*, a framework for the development of control and data-acquisition services for particle accelerators and large experimental apparatuses. !CHAOS introduces to the world of controls a new approach for designing and implementing communications and data distribution among components, and for providing the middle-layer services of a control system. Based on software technologies borrowed from high-performance Internet services, !CHAOS uses a centralized yet highly scalable, cloud-like approach to offer all the services needed for controlling and managing a large infrastructure. It includes a number of innovative features, such as a high abstraction of services, devices and data, easy and modular customization, extensive data caching for enhanced performance, and the integration of all services in a common framework. Since the !CHAOS conceptual design was presented two years ago, the INFN group has been working on the implementation of the services and components of the software framework. Most of them have been completed and tested to evaluate performance and reliability. Some services are already installed and operational in experimental facilities at LNF.
* "Introducing a new paradigm for accelerators and large experimental apparatus control systems", L. Catani et al., Phys. Rev. ST Accel. Beams, http://prst-ab.aps.org/abstract/PRSTAB/v15/i11/e112804 |
|||
![]() |
Poster MOPPC126 [0.874 MB] | ||
MOPPC128 | Real-Time Process Control on Multi-Core Processors | controls, real-time, framework, operation | 407 |
|
|||
Real-time control is essential for low-level RF and timing systems to maintain beam stability in accelerator operation. It is difficult to optimize the priority control of multiple processes of real-time class and time-sharing class on a single-core processor. For example, we cannot log into the operating system if a real-time-class process occupies the resources of a single-core processor. Recently, multi-core processors have come into use for equipment controls. We studied the control of multiple processes running on multi-core processors. After several tunings, we confirmed that the operating system runs stably under heavy load on multi-core processors. This makes it possible to achieve the millisecond-order response required by fast control systems such as an event-synchronized data acquisition system. Additionally, we measured the response performance between client and server processes using the MADOCA II framework, the next generation of MADOCA. In this paper we present the tunings for real-time process control on multi-core processors and the performance results of MADOCA II. | |||
![]() |
Poster MOPPC128 [0.450 MB] | ||
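One common tuning of the kind discussed above is to reserve dedicated cores for real-time processes so that they cannot starve the rest of the system. The abstract does not specify the mechanism used; the following Linux-only sketch shows the general idea with CPU affinity (the function name is ours, and a real setup would additionally assign a real-time scheduling class, e.g. SCHED_FIFO, which requires privileges):

```python
import os


def pin_to_cores(pid, cores):
    """Restrict the given process (0 = the calling process) to a set of
    CPU cores and return the affinity mask actually in effect.
    Linux-specific: os.sched_setaffinity is not available everywhere."""
    os.sched_setaffinity(pid, cores)
    return os.sched_getaffinity(pid)
```

A busy real-time daemon pinned to, say, core 1 leaves the remaining cores free for time-sharing tasks such as interactive logins, avoiding the single-core lock-out described in the abstract.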
MOPPC132 | Evaluating Live Migration Performance of a KVM-Based EPICS | EPICS, network, Linux, controls | 420 |
|
|||
In this paper we present results from a live-migration performance evaluation of KVM-based EPICS on PC hardware, considering the performance of storage, network and CPU. For the evaluation we built a lightweight demonstration EPICS control system. For time measurement we set up a monitor PV whose value changes automatically at regular intervals; Data Browser can display the values of 'live' PVs and measure the time. In this way we obtain the live-migration time from Data Browser. | |||
MOPPC137 | IEC 61850 Industrial Communication Standards under Test | controls, network, framework, Ethernet | 427 |
|
|||
IEC 61850, as part of the International Electrotechnical Commission's Technical Committee 57, defines an international and standardized methodology for designing electric power automation substations. It specifies a common way of communicating and integrating heterogeneous systems based on multivendor intelligent electronic devices (IEDs). These are connected to an Ethernet network and, according to IEC 61850, their abstract data models have been mapped to specific communication protocols: MMS, GOOSE, SV and, possibly in the future, Web Services. All of them can run over TCP/IP networks, so they can easily be integrated with Enterprise Resource Planning networks; while this integration provides economic and functional benefits for the companies, it also exposes the industrial infrastructure to existing external cyber-attacks. Within the Openlab collaboration between CERN and Siemens, a test bench has been developed specifically to evaluate the robustness of industrial equipment (TRoIE). This paper describes the design and implementation of the testing framework, focusing on the implementations of the IEC 61850 protocols mentioned above. | |||
![]() |
Poster MOPPC137 [1.673 MB] | ||
MOPPC138 | Continuous Integration for Automated Code Generation Tools | controls, framework, PLC, target | 431 |
|
|||
The UNICOS* (UNified Industrial COntrol System) framework was created in 1998 as a solution for building object-based, industry-like control systems. The Continuous Process Control package (CPC**) is a UNICOS component that provides a methodology and a set of tools to design and implement industrial control applications. UAB** (UNICOS Application Builder) is the software factory used to develop UNICOS-CPC applications. The constant evolution of the CPC component made it necessary to create a new tool to validate the generated applications and to verify that modifications introduced in the software tools do not have any undesirable effect on existing control applications. The uab-maven-plugin is a plug-in for the Apache Maven build manager that can be used to trigger the generation of the CPC applications and verify the consistency of the generated code. This plug-in can be integrated into continuous integration tools, such as Hudson or Jenkins, to create jobs that constantly monitor changes in the software and trigger a new generation of all the applications stored in the source code management system.
* "UNICOS a framework to build industry like control systems: Principles & Methodology". ** "UNICOS CPC6: Automated code generation for process control applications". |
|||
![]() |
Poster MOPPC138 [4.420 MB] | ||
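The consistency check described above boils down to regenerating each application and comparing the result against a committed reference, failing the CI job on any difference. The plug-in itself is Maven/Java; the following Python sketch only illustrates the comparison step (function name and file roles are our own, not the uab-maven-plugin API):

```python
import difflib


def generation_is_consistent(generated: str, reference: str):
    """Compare freshly generated code against the committed reference.
    Returns (ok, diff_lines); a non-empty diff means the framework change
    altered the generated output and the CI job should fail."""
    diff = list(difflib.unified_diff(
        reference.splitlines(), generated.splitlines(),
        fromfile="reference", tofile="generated", lineterm=""))
    return (len(diff) == 0, diff)
```

A CI job would run this for every application in the source code repository after each change to the generation tools, surfacing the diff as the failure message.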
MOPPC139 | A Framework for Off-line Verification of Beam Instrumentation Systems at CERN | framework, database, instrumentation, interface | 435 |
|
|||
Many beam instrumentation systems require checks to confirm their beam readiness, to detect any deterioration in performance and to identify physical problems or anomalies. Such tests have already been developed for several LHC instruments using the LHC sequencer, but the scope of this framework does not extend to all systems; it is notably absent from the pre-LHC injector chain. Furthermore, the operator-centric nature of the LHC sequencer means that sequencer tasks are not accessible to the hardware and software experts who are required to execute similar tests on a regular basis. As a consequence, ad-hoc solutions involving code sharing and, in extreme cases, code duplication have evolved to satisfy the various use cases. In terms of long-term maintenance this is undesirable, given the often short-term nature of developers at CERN and the importance of the uninterrupted stability of CERN's accelerators. This paper outlines the first results of an investigation into the existing analysis software and provides proposals for the future of such software. | |||
MOPPC140 | High-Availability Monitoring and Big Data: Using Java Clustering and Caching Technologies to Meet Complex Monitoring Scenarios | monitoring, distributed, controls, network | 439 |
|
|||
Monitoring and control applications face ever more demanding requirements: as both data sets and data rates continue to increase, non-functional requirements such as performance, availability and maintainability become more important. C2MON (CERN Control and Monitoring Platform) is a monitoring platform developed at CERN over the past few years. Making use of modern Java caching and clustering technologies, the platform supports multiple deployment architectures, from a simple 3-tier system to highly complex clustered solutions. In this paper we consider various monitoring scenarios and how the C2MON deployment strategy can be adapted to meet them. | |||
![]() |
Poster MOPPC140 [1.382 MB] | ||
MOPPC142 | Groovy as Domain-specific Language (DSL) in Software Interlock System | DSL, Domain-Specific-Languages, framework, controls | 443 |
|
|||
The SIS, in operation for over seven years, is a mission-critical component of the CERN accelerator control system, covering areas from general machine protection to diagnostics. The growing number of instances and the size of the existing installations have increased both the complexity and the maintenance cost of running the SIS infrastructure. Domain experts have also found the mixture of XML and Java used for configuration difficult and suitable only for software engineers. To address these issues, new ways of configuring the system have been investigated, aiming to simplify the process by making it faster, more user-friendly and accessible to a wider audience. Of all the existing DSL choices (fluent Java APIs, external/internal DSLs), the Groovy scripting language was considered particularly well suited for writing a custom DSL due to its built-in language features: Java compatibility, native syntax constructs, command chain expressions, hierarchical structures with builders, closures and AST transformations. This paper explains best practices and lessons learned while building the accelerator domain-oriented DSL for the configuration of the interlock system. | |||
![]() |
Poster MOPPC142 [0.510 MB] | ||
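The attraction of such a configuration DSL is that interlock logic reads as a declaration rather than as XML plus Java glue. The actual Groovy syntax is not shown in the abstract; this Python sketch only illustrates the fluent-builder idea behind it (class and method names are invented for illustration):

```python
class Interlock:
    """Toy fluent builder: declare surveillance conditions, then evaluate
    them against a snapshot of device parameter readings."""

    def __init__(self, name):
        self.name = name
        self.conditions = []

    def when(self, parameter, limit):
        # Returning self enables chaining, giving a DSL-like declaration.
        self.conditions.append((parameter, limit))
        return self

    def evaluate(self, readings):
        # Interlock fires if any monitored parameter exceeds its limit.
        return any(readings.get(p, 0.0) > lim for p, lim in self.conditions)
```

A declaration then reads almost like a sentence, e.g. `Interlock("orbit").when("bpm1", 5.0).when("bpm2", 5.0)`; Groovy's command chain expressions and builders push this readability further by dropping most punctuation.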
MOPPC143 | Plug-in Based Analysis Framework for LHC Post-Mortem Analysis | framework, controls, operation, injection | 446 |
|
|||
Plug-in based software architectures are extensible, enforce modularity and allow several teams to work in parallel. But they come with certain technical and organizational challenges, which we discuss in this paper. We gained our experience when developing the Post-Mortem Analysis (PMA) system, a mission-critical system for the Large Hadron Collider (LHC). We used a plug-in based architecture with a general-purpose analysis engine, for which physicists and equipment experts code plug-ins containing the analysis algorithms. We have over 45 analysis plug-ins developed by a dozen domain experts. This paper focuses on the design challenges we faced in order to mitigate the risks of executing third-party code: assurance that even a badly written plug-in does not perturb the work of the overall application; plug-in execution control that allows plug-in misbehavior to be detected and reacted to; a robust communication mechanism between plug-ins; facilitation of diagnostics in case of plug-in failure; testing of the plug-ins before integration into the application; etc.
https://espace.cern.ch/be-dep/CO/DA/Services/Post-Mortem%20Analysis.aspx |
|||
![]() |
Poster MOPPC143 [3.128 MB] | ||
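The containment requirement above, that a badly written plug-in must not perturb the overall application, can be sketched minimally: the engine isolates each plug-in call and records failures instead of propagating them. This is only an illustration of the principle (the PMA system is Java-based, and its real execution control also covers timeouts and resource limits, which a try/except alone does not):

```python
def run_plugins(plugins, event):
    """Run each analysis plug-in on the event; a crash in one plug-in is
    recorded as a failure and never aborts the remaining analyses."""
    results, failures = {}, {}
    for name, analyse in plugins.items():
        try:
            results[name] = analyse(event)
        except Exception as exc:  # third-party code: contain, don't crash
            failures[name] = repr(exc)
    return results, failures
```

The failure map doubles as a diagnostics aid: the engine can report exactly which expert's plug-in misbehaved on which event.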
MOPPC148 | Not Dead Yet: Recent Enhancements and Future Plans for EPICS Version 3 | EPICS, controls, target, Linux | 457 |
|
|||
Funding: Work supported by U.S. Department of Energy, Office of Science, under Contract No. DE-AC02-06CH11357. The EPICS Version 4 development effort* is not planning to replace the current Version 3 IOC Database or its use of the Channel Access network protocol in the near future. Interoperability is a key aim of the V4 development, which is building upon the older IOC implementation. EPICS V3 continues to gain new features and functionality on its Version 3.15 development branch, while the Version 3.14 stable branch has been accumulating minor tweaks, bug fixes, and support for new and updated operating systems. This paper describes the main enhancements provided by recent and upcoming releases of EPICS Version 3 for control system applications. * Korhonen et al, "EPICS Version 4 Progress Report", this conference. |
|||
![]() |
Poster MOPPC148 [5.067 MB] | ||
MOPPC158 | Application of Modern Programming Techniques in Existing Control System Software | framework, controls, injection, operation | 479 |
|
|||
Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The Accelerator Device Object (ADO) specification and its original implementation are almost 20 years old. In the last two decades ADO development methodology has changed very little, which is a testament to its robust design; however, during this time frame we have seen the introduction of many new technologies and ideas, many of which have applicable and tangible benefits for control system software. This paper describes how some of these concepts, like convention over configuration and the aspect-oriented programming (AOP) paradigm, coupled with powerful techniques like bytecode generation and manipulation tools, can greatly simplify both server- and client-side development by allowing developers to concentrate on the core implementation details without polluting their code with: 1) synchronization blocks, 2) supplementary validation, 3) asynchronous communication calls, or 4) redundant bootstrapping. In addition to streamlining existing fundamental development methods, we introduce additional concepts, many of which are rarely found in control systems. These include 1) ACID transactions, 2) client- and server-side dependency injection, and 3) declarative event handling.
|||
Poster MOPPC158 [2.483 MB] | ||
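The boilerplate-elimination idea in the abstract above can be illustrated outside of Java bytecode generation. The sketch below is a Python analogue, not the actual ADO implementation; `PowerSupply` and `set_current` are invented names. It shows the same aspect-oriented pattern: a `synchronized` aspect weaves locking around a method, so the core device logic contains no explicit synchronization blocks.

```python
import threading
from functools import wraps

def synchronized(method):
    """Aspect: acquire the instance's lock around the call, so device
    code itself never contains explicit synchronization blocks."""
    @wraps(method)
    def wrapper(self, *args, **kwargs):
        with self._lock:
            return method(self, *args, **kwargs)
    return wrapper

class PowerSupply:
    """Hypothetical device server class: only core logic remains;
    the aspect injects the thread safety."""
    def __init__(self):
        self._lock = threading.RLock()
        self.setpoint = 0.0

    @synchronized
    def set_current(self, amps):
        self.setpoint = amps
        return self.setpoint
```

In the paper's Java setting the same weaving would happen at bytecode level rather than via decorators, but the developer-facing effect is the one shown: the business method stays clean.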
TUCOAAB01 | Status of the National Ignition Facility (NIF) Integrated Computer Control and Information Systems | controls, diagnostics, laser, target | 483 |
|
|||
Funding: This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. #LLNL-ABS-631632 The National Ignition Facility (NIF) is operated by the Integrated Computer Control System in an object-oriented, CORBA-based system distributed among over 1800 front-end processors, embedded controllers and supervisory servers. At present, NIF operates 24x7 and conducts a variety of fusion, high energy density and basic science experiments. During the past year, the control system was expanded to include a variety of new diagnostic systems, and programmable laser beam shaping and parallel shot automation for more efficient shot operations. The system is also currently being expanded with an Advanced Radiographic Capability, which will provide short (<10 picoseconds) ultra-high power (>1 Petawatt) laser pulses that will be used for a variety of diagnostic and experimental capabilities. Additional tools have been developed to support experimental planning, experimental setup, facility configuration and post shot analysis, using open-source software, commercial workflow tools, database and messaging technologies. This talk discusses the current status of the control and information systems to support a wide variety of experiments being conducted on NIF including ignition experiments. |
|||
Slides TUCOAAB01 [4.087 MB] | ||
TUCOAAB02 | The Laser Megajoule Facility: Control System Status Report | controls, laser, target, alignment | 487 |
|
|||
The French Commissariat à l’Énergie Atomique (CEA) is currently building the Laser Megajoule (LMJ), a 176-beam laser facility, at the CEA Laboratory CESTA near Bordeaux. It is designed to deliver about 1.4 MJ of energy to targets for high energy density physics experiments, including fusion experiments. The assembly of the first lines of amplification is almost complete, and functional tests are planned for next year. The first part of the presentation is a photo album of the progress of the assembly of the bundles in the four laser bays and of the equipment in the target bay. The second part illustrates a particularity of the LMJ commissioning: a secondary control room is dedicated to the commissioning of successive bundles, while the main control room allows shots and fusion experiments with already commissioned bundles. | |||
Slides TUCOAAB02 [3.928 MB] | ||
TUCOAAB04 | The MedAustron Accelerator Control System: Design, Installation and Commissioning | controls, network, ion, operation | 494 |
|
|||
MedAustron is a light-ion accelerator cancer treatment facility built on a greenfield site in Austria. The accelerator, its control system and its protection systems have been designed under the guidance of CERN within the MedAustron–CERN collaboration. Building construction was completed in October 2012 and accelerator installation started in December 2012. Readiness for accelerator control deployment was reached in January 2013. This contribution gives an overview of the accelerator control system project. It reports on the current status of commissioning, including the ion sources, the low-energy beam transfer and the injector. The major challenge so far has been the readiness of the industry-supplied IT infrastructure, on which accelerator controls relies heavily due to its distributed and virtualized architecture. Nevertheless, the control system has been successfully released for accelerator commissioning within time and budget. The need to deliver a highly performant control system that copes with thousands of cycles in real time, and that covers both interactive commissioning and unattended medical operation, were mere technical aspects to be solved during the development phase. | |||
Slides TUCOAAB04 [2.712 MB] | ||
TUCOBAB01 | A Small but Efficient Collaboration for the Spiral2 Control System Development | controls, EPICS, PLC, GUI | 498 |
|
|||
The Spiral2 radioactive ion beam facility, to be commissioned in 2014 at Ganil (Caen), is being built within international collaborations. This also concerns the control system development, which is shared by three laboratories: Ganil coordinates the control and automated systems work packages, CEA/IRFU is in charge of the “injector” (sources and low-energy beam lines) and the LLRF, and CNRS/IPHC provides the emittancemeters and a beam diagnostics platform. Beyond the choice of an EPICS-based technology, this collaboration, although run by only a few people, nevertheless requires an appropriate and tight organization to reach the objectives set by the project. This contribution describes how the collaboration for controls, started in 2006, has been managed both from the technological and the organizational point of view, taking into account not only the previous experience, technical background and skills of each partner, but also their existing working practices and “cultural” approaches. A first feedback comes from successful beam tests carried out at Saclay and Grenoble; the next challenge is the migration to operation, with Ganil having to run Spiral2 while the other members move on to new projects. | |||
Slides TUCOBAB01 [2.747 MB] | ||
TUCOBAB02 | The Mantid Project: Notes from an International Software Collaboration | framework, interface, neutron, distributed | 502 |
|
|||
Funding: This project is a collaboration between SNS, ORNL and ISIS, RAL with expertise supplied by Tessella. These facilities are in turn funded by the US DoE and the UK STFC. The Mantid project was started by ISIS in 2007 to provide a framework for data reduction and analysis of neutron and muon data. The SNS and HFIR joined the Mantid project in 2009, adding event processing and other capabilities to the Mantid framework. The Mantid software now supports the data reduction needs of most of the instruments at ISIS and the SNS and some at HFIR, and is being evaluated by other facilities. The scope of the data reduction and analysis challenges, together with the need to create a cross-platform solution, fuels the need for Mantid to be developed in collaboration between facilities. Mantid has from inception been an open-source project, built to be flexible enough to be instrument- and technique-independent, and planned from the start to support collaboration with other development teams. Through the collaboration with the SNS, development practices and tools have been further developed to support the distributed development team in this challenge. This talk will describe the building and structure of the collaboration, the stumbling blocks we have overcome, and the great steps we have made in building a solid collaboration between these facilities. Mantid project website: www.mantidproject.org ISIS: http://www.isis.stfc.ac.uk/ SNS & HFIR: http://neutrons.ornl.gov/ |
|||
Slides TUCOBAB02 [1.280 MB] | ||
TUCOBAB03 | Utilizing Atlassian JIRA for Large-Scale Software Development Management | controls, status, database, operation | 505 |
|
|||
Funding: This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. #LLNL-ABS-632634 Used actively by the National Ignition Facility since 2004, the JIRA issue tracking system from Atlassian is now used for 63 different projects. NIF software developers and customers have created over 80,000 requests (issues) for new features and bug fixes. The largest NIF software project in JIRA is the Integrated Computer Control System (ICCS), with nearly 40,000 issues. In this paper, we discuss how JIRA has been customized to fit our software development process. ICCS developed a custom workflow in JIRA for tracking code reviews, recording test results by both developers and a dedicated Quality Control team, and managing the product release process. JIRA’s advanced customization capabilities have proven to be a great help in tracking key metrics about the ICCS development efforts (e.g. developer workload). ICCS developers store software in a configuration management tool called AccuRev, and document all software changes in each JIRA issue. Specialized tools developed by the NIF Configuration Management team analyze each software product release, ensuring that each release contains only the exact expected changes. |
|||
Slides TUCOBAB03 [2.010 MB] | ||
TUCOBAB04 | Evaluation of Issue Tracking and Project Management Tools for Use Across All CSIRO Radio Telescope Facilities | project-management, interface, controls, operation | 509 |
|
|||
CSIRO's radio astronomy observatories are collectively known as the Australia Telescope National Facility (ATNF). The observatories include the 64-metre dish at Parkes, the Australia Telescope Compact Array (ATCA) in Narrabri, the Mopra 22-metre dish near Coonabarabran, and the ASKAP telescope, located in Western Australia and in the early stages of commissioning. In January 2013 a new group named Software and Computing was formed. This group, part of the ATNF Operations Program, brings all the software development expertise under one umbrella and is responsible for the development and maintenance of the software for all ATNF facilities, from monitoring and control to science data processing and archiving. One of the first tasks of the new group is to start homogenising the way software development is done across all observatories. This paper presents the results of the evaluation of several issue tracking and project management tools, including Redmine and JIRA, to be used as software development management tools across all ATNF facilities. It also describes how these tools could potentially be used for non-software applications such as a fault reporting and tracking system. | |||
Slides TUCOBAB04 [2.158 MB] | ||
TUCOBAB05 | A Rational Approach to Control System Development Projects That Incorporates Risk Management | controls, project-management, interface, synchrotron | 513 |
|
|||
Over the past year CLS has migrated towards a project management approach based on the Project Management Institute (PMI) guidelines, as well as adopting an Enterprise Risk Management (ERM) program. Though these are broader organisational initiatives, they do affect how control system and data acquisition software activities are planned, executed and integrated into larger-scale projects. Synchrotron beamline development and accelerator upgrade projects have their own special considerations that require adaptation of the more standard techniques that are used. Our ERM processes contribute in two ways: (1) in helping to identify and prioritise the projects that we should be undertaking, and (2) in helping to identify risks that are internal to a project. These broader programs are leading us to revise and improve the processes we have in place for control and data acquisition system development and maintenance. This paper examines the approach we have adopted, our preliminary experience and our plans going forward. | |||
Slides TUCOBAB05 [0.791 MB] | ||
TUMIB02 | A Control System for the ESRF Synchrotron Radiation Therapy Clinical Trials | controls, synchrotron, radiation, synchrotron-radiation | 521 |
|
|||
The bio-medical beamline of the European Synchrotron Radiation Facility (ESRF), located in Grenoble, France, has recently started the Phase I-II Stereotactic Synchrotron Radiation Therapy (SSRT) clinical trials targeting brain tumours. This very first SSRT protocol consists of a combined therapy in which monochromatic X-rays are delivered to a tumour pre-loaded with a high-Z element. The challenges of this technique are the accurate positioning of the target tumour with respect to the beam and the precision of the dose delivery, whilst fully assuring patient safety. The positioning system used for previous angiography clinical trials has been adapted to this new modality. 3-D imaging is performed for positioning purposes, to match the treatment planning. The control system of this experiment will be described from the hardware and software points of view, with emphasis on the constraints imposed by the Patient Safety System. | |||
Slides TUMIB02 [0.839 MB] | ||
TUMIB04 | Migrating to an EPICS Based Instrument Control System at the ISIS Spallation Neutron Source | controls, EPICS, LabView, neutron | 525 |
|
|||
The beamline instruments at the ISIS spallation neutron source have been running successfully for many years using an in-house developed control system. The advent of new instruments and the desire for more complex experiments has led to a project being created to determine how best to meet these challenges. Though it would be possible to enhance the existing system, migrating to an EPICS-based system offers many advantages in terms of flexibility, software reuse and the potential for collaboration. While EPICS is well established for accelerator and synchrotron beamline control, it is not currently widely used for neutron instruments, but this is changing. The new control system is being developed to initially run in parallel with the existing system, a first version being scheduled for testing on two newly constructed instruments starting summer 2013. In this paper, we will discuss the design and implementation of the new control system, including how our existing National Instruments LabVIEW controlled equipment was integrated, and issues that we encountered during the migration process. | |||
Slides TUMIB04 [0.098 MB] | ||
Poster TUMIB04 [0.315 MB] | ||
TUMIB07 | RASHPA: A Data Acquisition Framework for 2D XRays Detectors | detector, hardware, framework, FPGA | 536 |
|
|||
Funding: Cluster of Research Infrastructures for Synergies in Physics (CRISP) co-funded by the partners and the European Commission under the 7th Framework Programme Grant Agreement 283745. ESRF research programs, along with the foreseen accelerator source upgrade, require state-of-the-art instrumentation devices with high-data-flow acquisition systems. This paper presents RASHPA, a data acquisition framework targeting 2D X-ray detectors. By combining a highly configurable, multi-link, PCI Express over cable based data transmission engine with a carefully designed Linux software stack, RASHPA aims at reaching the performance required by current and future detectors. |
|||
Slides TUMIB07 [0.168 MB] | ||
TUMIB10 | Performance Testing of EPICS User Interfaces - an Attempt to Compare the Performance of MEDM, EDM, CSS-BOY, and EPICS | interface, hardware, Linux, EPICS | 547 |
|
|||
Funding: Work at the APS is supported by the U.S. Department of Energy, Office of Science, under Contract No. DE-AC02-06CH11357 Upgrading the display manager or graphical user interface at EPICS sites reliant on older display technologies, typically MEDM or EDM, requires attention not only to functionality but also to performance. For many sites, performance is not an issue - all display managers will update small numbers of process variables at rates exceeding the human ability to discern changes; but for certain applications typically found at larger sites, the ability to respond to update rates at sub-Hertz frequencies for thousands of process variables is a requirement. This paper describes a series of tests performed on both older display managers – MEDM and EDM – and also the newer display managers CSS-BOY, epicsQT, and CaQtDM. Modestly performing modern hardware is used. |
|||
Slides TUMIB10 [0.486 MB] | ||
Poster TUMIB10 [0.714 MB] | ||
TUPPC004 | Scalable Archiving with the Cassandra Archiver for CSS | database, EPICS, controls, distributed | 554 |
|
|||
An archive for process-variable values is an important part of most supervisory control and data acquisition (SCADA) systems, because it allows operators to investigate past events, thus helping to identify and resolve problems in the operation of the supervised facility. For large facilities like particle accelerators there can be more than one hundred thousand process variables that have to be archived. When these process variables change at a rate of one Hertz or more, a single computer system typically cannot handle the data processing and storage. The Cassandra Archiver has been developed in order to provide a simple-to-use, scalable data-archiving solution. It seamlessly plugs into Control System Studio (CSS), providing quick and simple access to all archived process variables. An Apache Cassandra database is used for storing the data, automatically distributing it over many nodes and providing high-availability features. This contribution describes the architecture of the Cassandra Archiver and presents performance benchmarks outlining its scalability and comparing it to traditional archiving solutions based on relational databases. | |||
Poster TUPPC004 [3.304 MB] | ||
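The abstract does not give the storage schema; a common Cassandra time-series pattern — assumed here purely for illustration — is to bucket the partition key by process-variable name and time window, so that samples spread across the cluster instead of forming one unbounded row. The sketch below computes such keys in plain Python (no cluster required; the one-hour bucket size is an arbitrary choice).

```python
from datetime import datetime, timezone

BUCKET_SECONDS = 3600  # one partition row per PV per hour (illustrative choice)

def partition_key(pv_name, timestamp):
    """Map a (PV, timestamp) pair to a partition key: samples within the
    same time bucket land in the same row; later buckets go elsewhere."""
    epoch = int(timestamp.timestamp())
    bucket = epoch - (epoch % BUCKET_SECONDS)
    return f"{pv_name}:{bucket}"

# Two samples within the same hour share a partition; the third does not.
t1 = datetime(2013, 10, 7, 10, 15, tzinfo=timezone.utc)
t2 = datetime(2013, 10, 7, 10, 45, tzinfo=timezone.utc)
t3 = datetime(2013, 10, 7, 11, 5, tzinfo=timezone.utc)
```

Bounding each row by a time window is what keeps individual partitions small enough for Cassandra to distribute and replicate efficiently.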
TUPPC008 | A New Flexible Integration of NeXus Datasets to ANKA by Fuse File Systems | synchrotron, detector, Linux, neutron | 566 |
|
|||
Within the High Data Rate Initiative (HDRI), the German accelerator and neutron facilities of the Helmholtz Association agreed to use NeXus as a common data format. The synchrotron radiation source ANKA decided in 2012 to introduce NeXus as the common data format for all beamlines. Nevertheless, it is challenging work to integrate a new data format into existing data processing workflows. Scientists rely on existing data evaluation kits which require specific data formats. To overcome this obstacle, a Linux filesystem in userspace (FUSE) was developed that allows NeXus files to be mounted as a filesystem. Filter rules, easily configurable in XML, allow a very flexible view of the data. Tomography data frames can be accessed directly as TIFF files by any standard picture viewer, and scan data can be presented as a virtual ASCII file compatible with spec. | |||
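The actual driver talks to the kernel FUSE interface and reads HDF5-backed NeXus files; the core mapping idea can be sketched without either. Below, a nested dict stands in for the NeXus hierarchy, `virtual_listing` flattens it into the virtual file paths a FUSE layer would expose, and a glob pattern stands in for the XML filter rules. All names (`nexus`, `filtered_view`) are invented for this sketch.

```python
import fnmatch

def virtual_listing(tree, prefix=""):
    """Flatten a nested dict (stand-in for a NeXus/HDF5 hierarchy) into
    the virtual file paths a FUSE layer would expose."""
    paths = []
    for name, node in sorted(tree.items()):
        path = f"{prefix}/{name}"
        if isinstance(node, dict):
            paths.extend(virtual_listing(node, path))   # group -> directory
        else:
            paths.append(path)                          # dataset -> file
    return paths

def filtered_view(tree, pattern):
    """Apply a filter rule (a glob here, standing in for the XML rules)
    to present only a subset of the data as virtual files."""
    return [p for p in virtual_listing(tree) if fnmatch.fnmatch(p, pattern)]

nexus = {"entry": {"instrument": {"detector": {"frame_0": b"raw"}},
                   "title": b"scan 42"}}
```

With this view, `/entry/instrument/detector/frame_0` could be served as a TIFF to any picture viewer while the same underlying file also exposes `/entry/title` to a text-oriented tool.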
TUPPC011 | Development of an Innovative Storage Manager for a Distributed Control System | controls, distributed, framework, operation | 570 |
|
|||
The !CHAOS(*) framework will provide all the services needed for controlling and managing a large scientific infrastructure, including a number of innovative features such as abstraction of services, devices and data, easy and modular customization, extensive data caching for performance, and integration of all functionalities in a common framework. One of the most relevant innovations in !CHAOS is the History Data Service (HDS), for continuous acquisition of the operating data pushed by device controllers. The core component of the HDS is the History Engine (HST). It implements the abstraction layer for the underlying storage technology and the logic for indexing and querying data. The HST drivers are designed to provide the specific HDS tasks of indexing, caching and storing, and to wrap the chosen third-party database API with !CHAOS standard calls. Indeed, the HST allows the data flows of the different !CHAOS services to be routed to independent channels in order to improve the global efficiency of the whole data acquisition system.
* - http://chaos.infn.it * - https://chaosframework.atlassian.net/wiki/display/DOC/General+View * - http://prst-ab.aps.org/abstract/PRSTAB/v15/i11/e112804 |
|||
Poster TUPPC011 [6.729 MB] | ||
TUPPC022 | Centralized Software and Hardware Configuration Tool for Large and Small Experimental Physics Facilities | database, network, controls, EPICS | 591 |
|
|||
All control system software, from hardware drivers up to user-space PC applications, needs configuration information to work properly. This information includes parameters such as channel calibrations, network addresses and server responsibilities. Each software subsystem requires a part of the configuration parameters, but storing them separately from the whole configuration causes usability and reliability issues. On the other hand, storing the entire configuration in one centralized database slows software development by adding extra central-database querying. This paper proposes a configuration tool that combines the advantages of both approaches. First, it uses a centralized, configurable graph database that can be manipulated through a web interface. Second, it can automatically export configuration information from the centralized database to any local configuration storage. The tool has been developed at BINP (Novosibirsk, Russia) and is used to configure the VEPP-2000 electron-positron collider (BINP, Russia), the Electron Linear Induction Accelerator (Snezhinsk, Russia) and the NSLS-II booster synchrotron (BNL, USA). | |||
Poster TUPPC022 [1.441 MB] | ||
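The export step described above — a central store pushed out to local configuration files so running software never queries the central database — can be sketched in a few lines. The data layout, subsystem names and JSON snapshot format here are all assumptions for illustration, not the BINP tool's actual schema.

```python
import json

# Hypothetical slice of a centralized configuration store: parameters
# keyed by the subsystem that needs them locally.
central_config = {
    "timing": {"server": "10.0.0.5", "offset_ns": 120},
    "bpm":    {"server": "10.0.0.7", "calibration": [1.01, 0.99]},
}

def export_subsystem(config, subsystem, path):
    """Export one subsystem's parameters to a local JSON snapshot, so the
    runtime software reads a local file instead of the central database."""
    with open(path, "w") as f:
        json.dump(config[subsystem], f, indent=2)

def load_local(path):
    """What a subsystem does at startup: read only its own snapshot."""
    with open(path) as f:
        return json.load(f)
```

Re-running the export after a central change regenerates the snapshots, keeping the single source of truth without the per-query cost.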
TUPPC023 | MeerKAT Poster and Demo Control and Monitoring Highlights | controls, monitoring, interface, hardware | 594 |
|
|||
The 64-dish MeerKAT Karoo Array Telescope, currently under development, will become the largest and most sensitive radio telescope in the Southern Hemisphere until the Square Kilometre Array (SKA) is completed around 2024. MeerKAT will ultimately become an integral part of the SKA. The MeerKAT project will build on the techniques and experience acquired during the development of KAT-7, a 7-dish engineering prototype that has already proved its worth in practical use, operating 24/7 to deliver useful science data in the Karoo. Much of the MeerKAT development will centre on further refinement and scaling of the technology, using lessons learned from KAT-7. The poster session will present the proposed MeerKAT CAM (Control & Monitoring) architecture and highlight the solutions we are exploring for system monitoring, control and scheduling, data archiving and retrieval, and human interaction with the system. We will supplement the poster session with a live demonstration of the present KAT-7 CAM system. This will include a live video feed from the site as well as the use of the current GUI to generate and display the flow of events and data in a typical observation. | |||
Poster TUPPC023 [0.471 MB] | ||
TUPPC030 | System Relation Management and Status Tracking for CERN Accelerator Systems | framework, interface, database, hardware | 619 |
|
|||
The Large Hadron Collider (LHC) at CERN requires many systems to work in close interplay to allow reliable operation and, at the same time, to ensure the correct functioning of the protection systems required when operating with large energies stored in the magnet system and particle beams. Examples of such systems are magnets, power converters and quench protection systems, as well as higher-level systems like Java applications or server processes. All these systems have numerous links (dependencies) of different kinds between each other. The knowledge about these dependencies is available from different sources, such as layout databases, Java imports and proprietary files. Retrieving consistent information is difficult due to the lack of a unified way of retrieving the relevant data. This paper describes a new approach: a central server instance that collects this information and provides it to the different clients used during commissioning and operation of the accelerator. Furthermore, it explains future visions for such a system, which include additional layers for distributing system information like operational status, issues or faults. | |||
Poster TUPPC030 [4.175 MB] | ||
TUPPC034 | Experience Improving the Performance of Reading and Displaying Very Large Datasets | collider, network, distributed, instrumentation | 630 |
|
|||
Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. There has been an increasing need over the last five years within the BNL accelerator community (primarily within the RF and Instrumentation groups) to collect, store and display data at high frequencies (1-10 kHz). Data throughput considerations when storing this data are manageable, but requests to display gigabytes of the collected data can quickly tax the speed at which data can be read from storage, transported over a network, and displayed on a user's computer monitor. This paper reports on efforts to improve the performance of both reading and displaying data collected by our data logging system. Our primary means of improving performance was to build a Data Server – a hardware/software server solution built to respond to client requests for data. Its job is to improve performance by 1) improving the speed at which data is read from disk, and 2) culling the data so that the returned datasets are visually indistinguishable from the requested datasets. This paper reports on statistics that we have accumulated over the last two years that show improved data processing speeds and associated increases in the number and average size of client requests. |
|||
Poster TUPPC034 [1.812 MB] | ||
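The abstract does not state the culling algorithm; one standard way (assumed here, not necessarily BNL's) to return a visually indistinguishable dataset is min-max decimation: divide the series into one bucket per pair of output points and keep each bucket's minimum and maximum, so every spike survives on screen.

```python
def minmax_cull(samples, max_points):
    """Reduce a long series to about max_points while preserving every
    local extreme, so the plotted trace looks unchanged at screen
    resolution."""
    if len(samples) <= max_points:
        return list(samples)
    n_buckets = max_points // 2
    bucket = max(1, len(samples) // n_buckets)
    out = []
    for i in range(0, len(samples), bucket):
        chunk = samples[i:i + bucket]
        lo, hi = min(chunk), max(chunk)
        # emit min/max in the order they occur, to keep the drawn shape
        out.extend([lo, hi] if chunk.index(lo) < chunk.index(hi) else [hi, lo])
    return out
```

A ten-million-sample logger trace culled to a couple of thousand points transfers and renders in a fraction of the time, yet plots identically on a monitor only a few thousand pixels wide.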
TUPPC037 | LabWeb - LNLS Beamlines Remote Operation System | experiment, operation, interface, controls | 638 |
|
|||
Funding: Project funded by CENPES/PETROBRAS under contract number: 0050.0067267.11.9 LabWeb is a software system developed to allow remote operation of beamlines at LNLS, in a partnership with the Petrobras Nanotechnology Network. Being the only light source in Latin America, LNLS receives many researchers and students interested in conducting experiments and analyses on these lines. The implementation of LabWeb allows researchers to use the laboratory infrastructure without leaving their research centers, reducing time and travel costs in a continental country like Brazil. In 2010, the project was in its first phase, in which tests were conducted using a beta version. Two years later, a new phase of the project began with the main goal of scaling up remote-access operation for LNLS users. In this new version, a partnership was established to use the open-source platform Science Studio, developed and applied at the Canadian Light Source (CLS). Currently, the project includes remote operation of three beamlines at LNLS: SAXS1 (Small Angle X-Ray Scattering), XAFS1 (X-Ray Absorption and Fluorescence Spectroscopy) and XRD1 (X-Ray Diffraction). Now, the expectation is to offer this new way of performing experiments on all the other beamlines at LNLS. |
|||
Poster TUPPC037 [1.613 MB] | ||
TUPPC040 | Saclay GBAR Command Control | PLC, linac, controls, positron | 650 |
|
|||
The GBAR experiment will be installed in 2016 at CERN’s Antiproton Decelerator, ELENA extension, and will measure the free fall acceleration of neutral antihydrogen atoms. Before construction of GBAR, the CEA/Irfu institute has built a beam line to guide positrons produced by a Linac (linear particle accelerator) through either a materials science line or a Penning trap. The experiment command control is mainly based on Programmable Logic Controllers (PLCs). A CEA/Irfu-developed Muscade SCADA (Supervisory Control and Data Acquisition) is installed on a Windows 7 embedded shoebox PC. It manages local and remote display, and is responsible for archiving and alarms. Muscade was chosen because it is quickly and easily configurable. The project required Muscade to communicate with three different types of PLCs: Schneider, National Instruments (NI) and Siemens. Communication is based on Modbus/TCP and on an in-house protocol optimized for the Siemens PLC. To share information between fast and slow controls, a LabVIEW PC dedicated to the trap fast control communicates with a PLC dedicated to security via a Profinet fieldbus. | |||
Poster TUPPC040 [1.791 MB] | ||
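Since the SCADA-to-PLC link above is Modbus/TCP, it may help to recall what such a request looks like on the wire. The sketch below builds a standard "Read Holding Registers" (function 0x03) frame — an MBAP header followed by the PDU; the transaction id, unit id and register addresses in the usage are invented values, not GBAR's actual register map.

```python
import struct

def read_holding_registers_request(transaction_id, unit_id, start_addr, count):
    """Build a Modbus/TCP 'Read Holding Registers' request:
    MBAP header (transaction id, protocol id 0, remaining length,
    unit id) followed by the PDU (function 0x03, start address, count)."""
    pdu = struct.pack(">BHH", 0x03, start_addr, count)
    # length field counts the unit id byte plus the PDU
    mbap = struct.pack(">HHHB", transaction_id, 0x0000, len(pdu) + 1, unit_id)
    return mbap + pdu
```

Sending this frame over a TCP socket to port 502 of a Schneider or NI PLC and reading the reply is essentially all a minimal Modbus/TCP poll amounts to.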
TUPPC045 | Software Development for High Speed Data Recording and Processing | detector, monitoring, network, controls | 665 |
|
|||
Funding: The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 283745. The European XFEL beam delivery defines a unique time structure that requires acquiring and processing data in short bursts of up to 2700 images every 100 ms. The 2D pixel detectors being developed produce up to 10 GB/s of 1-Mpixel image data. Efficient handling of this huge data volume requires large network bandwidth and computing capabilities. The architecture of the DAQ system is hierarchical and modular. The DAQ network uses 10 GbE switched links to provide large-bandwidth data transport between the front-end interfaces (FEI), the data-handling PC-layer servers, and the storage and analysis clusters. The front-end interfaces are required to build images acquired during a burst into pulse-ordered image trains and forward them to the PC-layer farm. The PC layer consists of dedicated high-performance computers for raw data monitoring, processing and filtering, and for aggregating data files that are then distributed to on-line storage and data analysis clusters. In this contribution we give an overview of the DAQ system architecture and communication protocols, as well as the software stack for data acquisition pre-processing, monitoring, storage and analysis. |
|||
Poster TUPPC045 [1.323 MB] | ||
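The train-building step the front-end interfaces perform — images of one burst arriving out of order and leaving pulse-ordered — reduces, at its simplest, to a sort on pulse id. The toy records below are invented; real FEIs do this in hardware/firmware on 1-Mpixel frames, but the logical operation is the same.

```python
def build_train(burst):
    """Arrange (pulse_id, image) records received out of order during a
    burst into a pulse-ordered image train, as a front-end interface
    must before forwarding data to the PC layer."""
    return [image for _, image in sorted(burst, key=lambda rec: rec[0])]

# a toy burst: records arriving out of pulse order
burst = [(2, "img2"), (0, "img0"), (1, "img1")]
```

Downstream consumers can then index the train directly by pulse number, which is what makes per-pulse monitoring and filtering in the PC layer straightforward.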
TUPPC046 | Control Using Beckhoff Distributed Rail Systems at the European XFEL | PLC, controls, hardware, photon | 669 |
|
|||
The European XFEL project is a 4th generation light source producing spatially coherent, 80 fs short x-ray photon pulses with a peak brilliance of 10³²-10³⁴ photons/s/mm²/mrad²/0.1% BW in the energy range from 0.26 to 24 keV, at an electron beam energy of 14 GeV. Six experiment stations will start data taking in fall 2015. In order to provide a simple, homogeneous solution, the DAQ and control systems group at the European XFEL is standardizing on COTS control hardware for use in the experiment and photon beam line tunnels. A common factor within this standardization requirement is the integration with the Karabo software framework of Beckhoff TwinCAT 2.11 or TwinCAT 3 PLCs and EtherCAT. The latter provide the high degree of reliability required and the desirable characteristics of real-time capability, fast I/O channels, flexible distributed terminal topologies, and low cost per channel. In this contribution we describe how Beckhoff PLCs and EtherCAT terminals will be used to control experiment and beam line systems. This allows a high degree of standardization for the control and monitoring of systems.
Hardware Technology - POSTER |
|||
Poster TUPPC046 [1.658 MB] | ||
TUPPC047 | The New TANGO-based Control and Data Acquisition System of the GISAXS Instrument GALAXI at Forschungszentrum Jülich | TANGO, controls, neutron, detector | 673 |
|
|||
Forschungszentrum Jülich operated the SAXS instrument JUSIFA at DESY in Hamburg for more than twenty years. With the shutdown of the DORIS ring, JUSIFA was relocated to Jülich. Based on most JUSIFA components (with major mechanical modifications) and a MetalJet high-performance X-ray source from Bruker AXS, the new GISAXS instrument GALAXI was built by JCNS (Jülich Centre for Neutron Science). GALAXI was equipped with new electronics and a completely new control and data acquisition system by ZEA-2 (Zentralinstitut für Engineering, Elektronik und Analytik 2 – Systeme der Elektronik, formerly ZEL). Based on good experience with the TACO control system, ZEA-2 decided that GALAXI should be the first instrument of Forschungszentrum Jülich with the successor system TANGO. The application software on top of TANGO is based on PyFRID. PyFRID was originally developed for the neutron scattering instruments of JCNS and provides a scripting interface as well as a web GUI. The design of the new control and data acquisition system is presented and the lessons learned from the introduction of TANGO are reported. | |||
TUPPC048 | Adoption of the "PyFRID" Python Framework for Neutron Scattering Instruments | controls, framework, interface, scattering | 677 |
|
|||
M. Drochner, L. Fleischhauer-Fuss, H. Kleines, D. Korolkov, M. Wagener, S. v. Waasen. To unify the user interfaces of the JCNS (Jülich Centre for Neutron Science) scattering instruments, we are adapting and extending the "PyFRID" framework. "PyFRID" is a high-level Python framework for instrument control. It provides a high level of abstraction, particularly through the use of aspect-oriented programming (AOP) techniques. Users can use a built-in command language or a web interface to control and monitor motors, sensors, detectors and other instrument components. The framework has been fully adopted at two instruments, and work is in progress to use it on more. | |||
TUPPC059 | EPICS Data Acquisition Device Support | EPICS, interface, timing, detector | 707 |
|
|||
A large number of devices offer similar kinds of capabilities. For example, data acquisition devices all offer sampling at some rate. If each such device had a different interface, engineers using them would need to be familiar with each device specifically, inhibiting the transfer of know-how from one device to another and increasing the chance of engineering errors due to miscomprehension or incorrect assumptions. With the Nominal Device Model (NDM), we propose to standardize the EPICS interface of analog and digital input and output devices, and of image acquisition devices. The model describes an input/output device which can have digital or analog channels, where channels can be configured for output or input. Channels can be organized in groups that have common parameters. NDM is implemented as the EPICS Nominal Device Support library (NDS). It provides a C++ interface to developers of device-specific drivers. NDS itself inherits from the well-known asynPortDriver. NDS hides from the developer all the complexity of the communication with asynDriver and allows the developer to focus on the business logic of the device itself. | |||
Poster TUPPC059 [0.371 MB] | ||
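The core idea of the abstract above, one generic device model of channels organized into groups with shared parameters, can be sketched in a few lines. This is a hypothetical illustration of the concept, not the NDS C++ API; all class and attribute names here are invented.

```python
# Hypothetical sketch of the Nominal Device Model idea: a device exposes
# analog/digital channels; channels share common parameters via groups.
# Names (Channel, ChannelGroup, Device) are illustrative, not the NDS API.

class Channel:
    def __init__(self, name, kind, direction):
        self.name = name
        self.kind = kind            # "analog" or "digital"
        self.direction = direction  # "input" or "output"
        self.value = 0

class ChannelGroup:
    """Channels sharing common parameters, e.g. a sampling rate."""
    def __init__(self, name, sample_rate_hz):
        self.name = name
        self.sample_rate_hz = sample_rate_hz
        self.channels = []

    def add(self, channel):
        self.channels.append(channel)
        return channel

class Device:
    def __init__(self, name):
        self.name = name
        self.groups = {}

    def group(self, name, sample_rate_hz):
        # Reuse an existing group so its common parameters stay authoritative.
        return self.groups.setdefault(name, ChannelGroup(name, sample_rate_hz))

# A driver author only fills in device-specific behaviour; the generic
# structure (channels, groups, directions) is the standardized part.
dev = Device("digitizer0")
adc = dev.group("adc", sample_rate_hz=1_000_000)
ch0 = adc.add(Channel("ai0", "analog", "input"))
```

The standardization benefit is that any tool written against this generic structure works with every device-specific driver built on it.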
TUPPC060 | Implementation of Continuous Scans Used in Beamline Experiments at Alba Synchrotron | experiment, hardware, controls, detector | 710 |
|
|||
The Alba control system * is based on Sardana **, a software package implemented in Python, built on top of Tango *** and oriented to beamline and accelerator control and data acquisition. Sardana provides an advanced scan framework, which is commonly used in all the beamlines of Alba as well as at other institutes. This framework provides standard macros and comprises various scanning modes: step, hybrid and software-continuous, but not hardware-continuous. Continuous scans speed up the data acquisition, making them a great asset for most experiments and, due to time constraints, mandatory for a few of them. A continuous scan has been developed and installed in three beamlines, where it reduced the time overheads of the step scans. Furthermore, it could be easily adapted to any other experiment and will be used as a base for extending the Sardana scan framework with generic continuous scan capabilities. This article describes the requirements, plan and implementation of the project as well as its results and possible improvements.
*"The design of the Alba Control System. […]" D. Fernández et al, ICALEPCS2011 **"Sardana, The Software for Building SCADAS […]" T.M. Coutinho et al, ICALEPCS2011 ***www.tango-controls.org |
|||
Poster TUPPC060 [13.352 MB] | ||
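The time saving that motivates continuous scanning can be shown with toy arithmetic: a step scan pays a motor move-and-settle overhead at every point, while a continuous scan moves once and acquires on the fly. The numbers below are illustrative only, not measurements from Alba.

```python
# Toy comparison of step vs continuous scan duration, with made-up numbers.

def step_scan_time(n_points, exposure_s, move_settle_s):
    # Each point pays exposure plus a motor move + settle overhead.
    return n_points * (exposure_s + move_settle_s)

def continuous_scan_time(n_points, exposure_s, ramp_s):
    # One acceleration/deceleration ramp at each end, no per-point settling.
    return n_points * exposure_s + 2 * ramp_s

step = step_scan_time(1000, exposure_s=0.1, move_settle_s=0.5)   # 600 s
cont = continuous_scan_time(1000, exposure_s=0.1, ramp_s=1.0)    # 102 s
```

For a 1000-point scan the per-point overhead dominates the step scan, which is why continuous scans become mandatory for time-constrained experiments.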
TUPPC062 | High-Speed Data Acquisition of Sensor Signals for Physical Model Verification at CERN HiRadMat (SHC-DAQ) | data-acquisition, hardware, real-time, LabView | 718 |
|
|||
A high-speed data acquisition system was successfully developed and put into production, in a harsh radiation environment, in a couple of months to test new materials impacted by proton beams for future use in beam intercepting devices. A 4 MHz ADC with high impedance and low capacitance was used to digitize the data over a 2 MHz bandwidth. The system requirements were to stream data at full speed for up to 30 ms after a trigger, and then reconfigure the hardware in less than 500 ms to perform a 100 Hz acquisition for 30 seconds. Experimental data were acquired using LabVIEW Real-Time, relying on extensive embedded instrumentation (strain gauges and temperature sensors) and on acquisition boards hosted in a PXI crate. The data acquisition system has a dynamic range and sampling rate sufficient to acquire the very fast and intense shock waves generated by the impact. This presentation covers the requirements, design, development and commissioning of the system. The overall performance, user experience and preliminary results will be reported. | |||
Poster TUPPC062 [9.444 MB] | ||
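The two-phase acquisition sequence in the abstract implies concrete buffer sizes, which is worth making explicit: 30 ms of full-rate streaming at 4 MS/s, followed by 30 s at 100 Hz. The arithmetic below simply restates those figures.

```python
# Buffer sizes implied by the SHC-DAQ acquisition sequence.
fast_samples = 4_000_000 * 30 // 1000   # 4 MS/s for 30 ms -> 120000 samples
slow_samples = 100 * 30                 # 100 Hz for 30 s  -> 3000 samples
```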
TUPPC072 | Flexible Data Driven Experimental Data Analysis at the National Ignition Facility | data-analysis, diagnostics, target, framework | 747 |
|
|||
Funding: This work was performed under the auspices of the Lawrence Livermore National Security, LLC, (LLNS) under Contract No. DE-AC52-07NA27344. #LLNL-ABS-632532 After each target shot at the National Ignition Facility (NIF), scientists require analysis of data from ~50 diagnostic instrument systems within 30 minutes. To meet this goal, NIF engineers created the Shot Data Analysis (SDA) Engine based on the Oracle Business Process Execution Language (BPEL) platform. While this provided a very powerful and flexible analysis product, it still required engineers conversant in software development practices to create the configurations executed by the SDA engine. As more and more diagnostics were developed and the demand for analysis increased, the development staff was not able to keep pace. To solve this problem, the Data Systems team took the approach of creating a database-table-based scripting language that allows users to define an analysis configuration that selects inputs, feeds the data into standard processing algorithms and then stores the outputs in a database. The creation of the Data Driven Engine (DDE) has substantially decreased the development time for new analysis and simplified maintenance of existing configurations. The architecture and functionality of the Data Driven Engine will be presented along with examples. |
|||
Poster TUPPC072 [1.150 MB] | ||
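The data-driven pattern described above, configuration rows that name an input, a standard algorithm, and an output slot, can be modeled in a few lines. Here plain dicts stand in for database table rows, and the algorithm names and field names are invented for illustration; this is not the actual DDE schema.

```python
# Sketch of a "data driven" analysis engine: the engine only interprets a
# configuration table, so adding a new analysis needs no new code.

ALGORITHMS = {
    "mean": lambda xs: sum(xs) / len(xs),
    "peak": max,
}

def run_config(config, data_store):
    results = {}
    for row in config:
        values = data_store[row["input"]]
        results[row["output"]] = ALGORITHMS[row["algorithm"]](values)
    return results

config = [
    {"input": "detector_a", "algorithm": "mean", "output": "a_mean"},
    {"input": "detector_a", "algorithm": "peak", "output": "a_peak"},
]
out = run_config(config, {"detector_a": [1.0, 3.0, 2.0]})
# out == {"a_mean": 2.0, "a_peak": 3.0}
```

The key design choice is that only the table changes per diagnostic, which is what lets non-developers define new analyses.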
TUPPC081 | IcePAP: An Advanced Motor Controller for Scientific Applications in Large User Facilities | controls, hardware, target, interface | 766 |
|
|||
Synchrotron radiation facilities, and in particular large hard X-ray sources such as the ESRF, are equipped with thousands of motorized position actuators. Combining all the functional needs found in those facilities with the implications for personnel resources, expertise and cost makes the choice of motor controllers a strategic matter. Most of the large facilities adopt strategies based on the use of off-the-shelf devices packaged behind standard interfaces. As this approach implies severe compromises, the ESRF decided to develop IcePAP, a motor controller designed for applications in a scientific environment. It optimizes functionality, performance, ease of deployment, level of standardization and cost. This device has been adopted as a standard and is widely used at the beamlines and accelerators of ESRF and ALBA. This paper provides details on the architecture and technical characteristics of IcePAP as well as examples of how it implements advanced features. It also presents ongoing and foreseen improvements and introduces the outline of an emerging collaboration aimed at further developing the system and making it available to other research labs. | |||
Poster TUPPC081 [0.615 MB] | ||
TUPPC087 | High Level FPGA Programming Framework Based on Simulink | FPGA, framework, interface, hardware | 782 |
|
|||
Funding: The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement No 283745. Modern diagnostic and detector-related data acquisition and processing hardware is increasingly being implemented with Field Programmable Gate Array (FPGA) technology. The level of flexibility allows for simpler hardware solutions together with the ability to implement functions during the firmware programming phase. The technology is also becoming more relevant in data processing, allowing reduction and filtering to be done at the hardware level together with the implementation of low-latency feedback systems. However, this flexibility and these possibilities require a significant amount of design, programming, simulation and testing work, usually done by FPGA experts. A high-level FPGA programming framework is currently under development at the European XFEL in collaboration with Oxford University within the EU CRISP project. This framework allows people unfamiliar with FPGA programming to develop and simulate complete algorithms and programs within the MathWorks Simulink graphical tool with real FPGA precision. Modules within the framework allow for simple code reuse by compiling them into libraries, which can be deployed to other boards or FPGAs. |
|||
Poster TUPPC087 [0.813 MB] | ||
TUPPC095 | Low Cost FFT Scope using LabVIEW cRIO and FPGA | LabView, hardware, FPGA, controls | 801 |
|
|||
At CERN, many digitizers and scopes are starting to age and should be replaced. Much of the equipment is custom made or no longer available on the market. Replacing this equipment with today's equivalent would be either time-consuming or expensive. This paper looks at the pros and cons of using COTS systems like NI cRIO and NI PXIe and their FPGA capabilities as flexible instruments, replacing costly spectrum analyzers and older scopes. It adds some insight into what had to be done to integrate and deploy the equipment in the unique CERN infrastructure, and the added value of having a fully customizable platform that makes it possible to stream, store and align the data without any additional equipment. | |||
Poster TUPPC095 [5.250 MB] | ||
TUPPC096 | Migration from WorldFIP to a Low-Cost Ethernet Fieldbus for Power Converter Control at CERN | Ethernet, controls, interface, network | 805 |
|
|||
Power converter control in the LHC uses embedded computers called Function Generator/Controllers (FGCs) which are connected to WorldFIP fieldbuses around the accelerator ring. The FGCs are integrated into the accelerator control system by x86 gateway front-end systems running Linux. With the LHC now operational, attention has turned to the renovation of older control systems as well as a new installation for Linac 4. A new generation of FGC is being deployed to meet the needs of these cycling accelerators. As WorldFIP is very limited in data rate and is unlikely to undergo further development, it was decided to base future installations upon an Ethernet fieldbus with standard switches and interface chipsets in both the FGCs and gateways. The FGC communications protocol that runs over WorldFIP in the LHC was adapted to work over raw Ethernet, with the aim of a simple solution that easily allows the same devices to operate with either type of interface. This paper describes the evolution of FGC communications from WorldFIP to dedicated Ethernet networks and presents the results of initial tests, diagnostic tools and how real-time power converter control is achieved. | |||
Poster TUPPC096 [1.250 MB] | ||
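Running a protocol over "raw Ethernet" as described above means framing payloads directly in Ethernet frames rather than in UDP/IP. A minimal sketch of such framing follows; the payload content is invented, and the real FGC wire format is not public here. The EtherType 0x88B5 is the IEEE 802 local experimental value, a plausible choice for a private protocol.

```python
# Illustrative framing for a protocol carried over raw Ethernet: a 14-byte
# Ethernet header (dst MAC, src MAC, EtherType) followed by the payload.
import struct

ETHERTYPE_LOCAL = 0x88B5  # IEEE 802 "local experimental" EtherType

def pack_frame(dst_mac, src_mac, payload):
    # "!" = network byte order; 6s = 6-byte MAC; H = 16-bit EtherType.
    return struct.pack("!6s6sH", dst_mac, src_mac, ETHERTYPE_LOCAL) + payload

def unpack_frame(frame):
    dst, src, etype = struct.unpack("!6s6sH", frame[:14])
    return dst, src, etype, frame[14:]

frame = pack_frame(b"\xff" * 6, b"\x02" + b"\x00" * 5, b"STATUS?")
dst, src, etype, payload = unpack_frame(frame)
```

Skipping the IP stack keeps latency and jitter low and the gateway logic simple, which matters for real-time converter control.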
TUPPC098 | Advanced Light Source Control System Upgrade – Intelligent Local Controller Replacement | FPGA, controls, hardware, EPICS | 809 |
|
|||
Funding: Work supported by the U.S. Department of Energy under Contract No. DE-AC02-05CH11231 As part of the control system upgrade at the Advanced Light Source (ALS), the existing intelligent local controller (ILC) modules have been replaced. These remote input/output modules provide real-time updates of control setpoints and monitored values. This paper describes the 'ILC Replacement Modules' which have been developed to take on the duties of the existing modules. The new modules use a 100BaseT network connection to communicate with the ALS Experimental Physics and Industrial Control System (EPICS) and are based on a commercial FPGA evaluation board running a microcontroller-like application. In addition to providing remote analog and digital input/output points, the replacement modules also provide some rudimentary logic operations, analog slew-rate limiting and accurate time stamping of acquired data. Results of extensive performance testing and experience gained now that the modules have been in service for several months are presented. |
|||
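The analog slew-rate limiting mentioned in the TUPPC098 abstract is a small, self-contained algorithm: each update moves the output toward the setpoint by at most a fixed step. The sketch below is a generic illustration, not the FPGA implementation used at the ALS.

```python
# Minimal slew-rate limiter: bound how fast the output may approach the
# setpoint, protecting downstream hardware from abrupt changes.

def slew_limit(current, setpoint, max_step):
    delta = setpoint - current
    if delta > max_step:
        delta = max_step
    elif delta < -max_step:
        delta = -max_step
    return current + delta

out = 0.0
trace = []
for _ in range(5):                      # setpoint jumps from 0.0 to 1.0
    out = slew_limit(out, 1.0, 0.3)     # at most 0.3 units per update
    trace.append(round(out, 2))
# trace == [0.3, 0.6, 0.9, 1.0, 1.0]
```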
TUPPC100 | Recent Changes to Beamline Software at the Canadian Light Source | experiment, controls, EPICS, Windows | 813 |
|
|||
The Canadian Light Source has ongoing work to improve the user interfaces at the beamlines. Much of the direction has made use of Qt and EPICS, using both C++ and Python in providing applications. Continuing work on the underlying data acquisition and visualization tools provides a commonality for both development and operation, and provisions for extending tools allow flexibility in types of experiments being run. | |||
Poster TUPPC100 [1.864 MB] | ||
TUPPC102 | User Interfaces for the Spiral2 Machine Protection System | beam-losses, controls, PLC, rfq | 818 |
|
|||
The Spiral2 accelerator is designed to accelerate protons, deuterons and ions with a beam power from hundreds of watts to 200 kW. It is therefore important to monitor and anticipate beam losses in order to maintain equipment integrity by triggering beam cuts when beam losses or equipment malfunctions are detected; the MPS (Machine Protection System) is in charge of this function. The MPS also has to monitor and limit activation, but this part is not addressed here. Linked to the MPS, five human-machine interfaces will be provided. The first, “MPS”, lets operators and accelerator engineers monitor MPS states and alarms and tune some beam-loss thresholds. The second, “beam power rise”, defines the successive steps to reach the desired beam power. “Interlock” is a synoptic to control beam-stop states and faults; “beam losses” displays beam losses, currents and efficiencies along the accelerator. Finally, “beam structure” lets users interact with the timing system by controlling the temporal structure to obtain a specific duty cycle according to the beam power constraints. In this paper, we introduce these human-machine interfaces, their interactions and the method used for software development. | |||
Poster TUPPC102 [1.142 MB] | ||
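The "beam structure" interface limits the duty cycle so that the average beam power stays within a constraint; the underlying relation is just average_power = peak_power × duty_cycle. The numbers below are illustrative, not Spiral2 operating parameters.

```python
# Maximum allowed duty cycle for a given average-power limit.

def max_duty_cycle(peak_power_w, power_limit_w):
    return min(1.0, power_limit_w / peak_power_w)

duty = max_duty_cycle(peak_power_w=200e3, power_limit_w=10e3)
# duty == 0.05 -> e.g. 5 ms of beam per 100 ms period
```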
TUPPC109 | MacspeechX.py Module and Its Use in an Accelerator Control System | controls, hardware, interface, target | 829 |
|
|||
macspeechX.py is a Python module to access the speech synthesis library on Mac OS X. This module has been used in the vocal alert systems of the KEKB and J-PARC accelerator control systems. A recent upgrade of this module allows us to handle non-English languages, such as Japanese. Implementation details will be presented as an example of a Python program accessing a system library. | |||
TUPPC116 | Cheburashka: A Tool for Consistent Memory Map Configuration Across Hardware and Software | hardware, controls, interface, database | 848 |
|
|||
The memory map of a hardware module is defined by the designer at the moment when the firmware is specified. It is then used by software developers to define device drivers and front-end software classes. Maintaining consistency between hardware and its software is critical. In addition, the manual process of writing the VHDL firmware on one side and the C++ software on the other is labour-intensive and error-prone. Cheburashka* is a software tool which eases this process. From a unique declaration of the memory map, created using the tool’s graphical editor, it generates the memory-map VHDL package, the Linux device driver configuration for the front-end computer, and a FESA** class for debugging. An additional tool, GENA, is used to automatically create all the VHDL code required to build the associated register control block. These tools are now used by the hardware and software teams for the design of all new interfaces from FPGAs to VME or on-board DSPs, in the context of the extensive program of development and renovation being undertaken in the CERN injector chain during LS1***. Several VME modules and their software have already been deployed and used in the SPS.
(*) Cheburashka is developed in the RF group at CERN (**) FESA is an acronym for Front End Software Architecture, developed at CERN (***) LS1: LHC Long Shutdown 1, from 2013 to 2014 |
|||
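The single-source-of-truth approach described above can be sketched in miniature: one memory-map declaration drives every generated artifact, so hardware and software views cannot drift apart. The register names, offsets, and output formats below are invented for illustration; they are not Cheburashka's actual formats.

```python
# One memory-map declaration drives both a C-style header (for firmware or
# driver code) and a software-side offset table.

REGISTERS = [
    ("CONTROL", 0x00),
    ("STATUS",  0x04),
    ("DATA",    0x08),
]

def c_header(regs, prefix="MOD"):
    # Emit one #define per register from the shared declaration.
    return "\n".join(f"#define {prefix}_{name} 0x{off:02X}" for name, off in regs)

# Software-side view, generated from the very same declaration.
OFFSETS = {name: off for name, off in REGISTERS}

header = c_header(REGISTERS)
```

Because both outputs derive from `REGISTERS`, an offset change in the declaration propagates everywhere in one regeneration step, which is the consistency guarantee the tool provides.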
TUPPC117 | Unifying Data Diversity and Conversion to Common Engineering Analysis Tools | superconducting-magnet, status, factory, data-analysis | 852 |
|
|||
The large variety of systems for the measurement of insulation, conductivity, RRR, quench performance, etc. installed at CERN’s superconducting magnet test facility generates a diversity of data formats. This mixture causes problems when the measurements need to be correlated. Each measurement application has a dedicated data analysis tool used to validate its results, but there is no generic bridge between the applications to facilitate cross-analysis of mixed data and data types. Since the LHC start-up, the superconducting magnet test facility has hosted new R&D measurements on a multitude of superconducting components. These results are analysed by international collaborators, which has triggered a greater need to access the raw data from many typical engineering and analysis tools, such as MATLAB®, Mathcad®, DIAdem™, Excel™… This paper describes the technical solutions developed for the unification of data formats and reviews the present status. | |||
Poster TUPPC117 [11.140 MB] | ||
TUPPC124 | Distributed Network Monitoring Made Easy - An Application for Accelerator Control System Process Monitoring | monitoring, network, controls, Linux | 875 |
|
|||
Funding: This work was supported by the U.S. Department of Energy, Office of Nuclear Physics, under Contract No. DE-AC02-06CH11357. As the complexity and scope of distributed control systems increase, so does the need for an ever-increasing level of automated process monitoring. The goal of this paper is to demonstrate one method whereby the SNMP protocol combined with open-source management tools can be quickly leveraged to gain critical insight into any complex computing system. Specifically, we introduce an automated, fully customizable, web-based remote monitoring solution which has been implemented at the Argonne Tandem Linac Accelerator System (ATLAS). This collection of tools is not limited to monitoring network infrastructure devices; it can also monitor critical processes running on any remote system. The tools and techniques used are typically available pre-installed or via download on several standard operating systems, and in most cases require only a small amount of configuration out of the box. High-level logging, level-checking, alarming, notification and reporting are accomplished with the open-source network management package OpenNMS, and normally require a bare minimum of implementation effort by a non-IT user. |
|||
Poster TUPPC124 [0.875 MB] | ||
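The "level-checking" and "alarming" the abstract mentions reduce to comparing sampled values against thresholds and emitting a status. The sketch below illustrates that pattern generically; the host names, thresholds, and status strings are invented and are not OpenNMS configuration.

```python
# Generic level-checking of the kind network monitors perform: compare a
# sampled value against warning/critical thresholds and emit a status.

def check_level(value, warn, crit):
    if value >= crit:
        return "CRITICAL"
    if value >= warn:
        return "WARNING"
    return "OK"

samples = {"ioc1": 0.4, "ioc2": 2.7, "ioc3": 6.1}   # e.g. CPU load averages
statuses = {host: check_level(load, warn=2.0, crit=5.0)
            for host, load in samples.items()}
# {"ioc1": "OK", "ioc2": "WARNING", "ioc3": "CRITICAL"}
```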
TUPPC126 | Visualization of Experimental Data at the National Ignition Facility | diagnostics, target, framework, database | 879 |
|
|||
Funding: * This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. #LLNL-ABS-633252 An experiment on the National Ignition Facility (NIF) may produce hundreds of gigabytes of target diagnostic data. Raw and analyzed data are accumulated into the NIF Archive database. The Shot Data Systems team provides alternatives for accessing data including a web-based data visualization tool, a virtual file system for programmatic data access, a macro language for data integration, and a Wiki to support collaboration. The data visualization application in particular adapts dashboard user-interface design patterns popularized by the business intelligence software community. The dashboard canvas provides the ability to rapidly assemble tailored views of data directly from the NIF archive. This design has proven capable of satisfying most new visualization requirements in near real-time. The separate file system and macro feature-set support direct data access from a scientist’s computer using scientific languages such as IDL, Matlab and Mathematica. Underlying all these capabilities is a shared set of web services that provide APIs and transformation routines to the NIF Archive. The overall software architecture will be presented with an emphasis on data visualization. |
|||
Poster TUPPC126 [4.900 MB] | ||
TUPPC128 | Machine History Viewer for the Integrated Computer Control System of the National Ignition Facility | controls, GUI, database, target | 883 |
|
|||
Funding: This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. #LLNL-ABS-633812 The Machine History Viewer is a recently developed addition to the Integrated Computer Control System (ICCS) software for the National Ignition Facility (NIF) that introduces the capability to analyze machine history data to troubleshoot equipment problems and to predict future failures. Flexible time correlation, text annotations, and multiple y-axis scales will help users determine cause and effect in the complex machine interactions at work in the NIF. Report criteria can be saved for easy modification and reuse. Integration into the already-familiar ICCS GUIs makes reporting easy to access for the operators. Reports can be created that will help analyze trends over long periods of time that lead to improved calibration and better detection of equipment failures. Faster identification of current failures and anticipation of potential failures will improve NIF availability and shot efficiency. A standalone version of this application is under development that will provide users remote access to real-time data and analysis, allowing troubleshooting by experts without requiring them to come on-site. |
|||
Poster TUPPC128 [4.826 MB] | ||
TUCOCA04 | Formal Methodology for Safety-Critical Systems Engineering at CERN | PLC, operation, site, interface | 918 |
|
|||
A Safety-Critical system is a system whose failure or malfunction may lead to injury or loss of human life, or may have serious environmental consequences. The Safety System Engineering section of CERN is responsible for the conception of systems capable of performing, in an extremely safe way, a predefined set of Instrumented Functions preventing any human presence inside areas where a potentially hazardous event may occur. This paper describes the formal approach followed for the engineering of the new Personnel Safety System of the PS accelerator complex at CERN. Starting from the generic guidelines of the safety standard IEC 61511, we have defined a novel formal approach, particularly useful for expressing the complete set of Safety Functions in a rigorous and unambiguous way. We present the main advantages offered by this formalism and, in particular, show how it has been effective in solving the problem of Safety Function testing, leading to a major reduction in the time needed for test pattern generation. | |||
Slides TUCOCA04 [2.227 MB] | ||
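One reason a formal expression of Safety Functions shortens test pattern generation is that, once a function is written down as a Boolean expression, exhaustively enumerating its input combinations is mechanical. The interlock below (operation allowed only with the door closed and, if the beam is on, a valid permit) is a made-up example, not one of the actual CERN Safety Functions.

```python
# From a formally expressed safety function to an exhaustive test pattern
# set: enumerate every input combination and record the expected output.
import itertools

def safety_function(door_closed, permit, beam_on):
    # True = operation allowed; any unsafe combination returns False.
    return door_closed and (permit or not beam_on)

patterns = [
    (inputs, safety_function(*inputs))
    for inputs in itertools.product([False, True], repeat=3)
]
# 2**3 = 8 patterns cover the function exhaustively
```

For functions with few inputs, as is typical of interlock logic, this exhaustive set doubles as a complete acceptance test for the PLC implementation.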
TUCOCB01 | Next-Generation MADOCA for The SPring-8 Control Framework | controls, Windows, framework, interface | 944 |
|
|||
The MADOCA control framework* was developed for SPring-8 accelerator control and has been utilized in several facilities since 1997. As demands on the controls have increased, we now need to handle various data, including image data from beam profile monitoring, and to control devices which can only be managed by Windows drivers. To fulfill these requirements, the next-generation MADOCA (MADOCA II) has been developed. MADOCA II is still based on a message-oriented control architecture, but the core part of the messaging has been completely rewritten with the ZeroMQ socket library. The main features of MADOCA II are as follows: 1) Variable-length data such as image data can be transferred within a message. 2) The control system can run on Windows as well as on other platforms such as Linux and Solaris. 3) Multiple messages can be processed concurrently for fast control. In this paper, we report on the new control framework, especially its messaging aspects. We also report on the status of the replacement of the control system with MADOCA II. Part of the SPring-8 control system was already replaced with MADOCA II last summer and has been operating stably.
*R.Tanaka et al., “Control System of the SPring-8 Storage Ring”, Proc. of ICALEPCS’95, Chicago, USA, (1995) |
|||
Slides TUCOCB01 [2.157 MB] | ||
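Feature 1 above, variable-length payloads such as images inside a message, is commonly achieved with length-prefixed framing: a fixed header carries the payload size, so any byte string can follow. The sketch below models that generically with the standard library; it is not the actual MADOCA II wire format, and the message-type value is invented.

```python
# Length-prefixed message framing: a 6-byte header (16-bit type, 32-bit
# payload length) followed by an arbitrary payload.
import struct

def encode(msg_type, payload):
    return struct.pack("!HI", msg_type, len(payload)) + payload

def decode(buf):
    msg_type, length = struct.unpack("!HI", buf[:6])
    return msg_type, buf[6:6 + length]

image = bytes(range(256)) * 4      # stand-in for beam-profile image data
msg_type, payload = decode(encode(7, image))
```

With framing like this, a 4-byte status word and a megabyte image travel through the same messaging path, which is the point of moving beyond fixed-size messages.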
TUCOCB03 | A Practical Approach to Ontology-Enabled Control Systems for Astronomical Instrumentation | controls, detector, DSL, database | 952 |
|
|||
Even though modern service-oriented and data-oriented architectures promise to deliver loosely coupled control systems, they are inherently brittle as they commonly depend on a priori agreed interfaces and data models. At the same time, the Semantic Web and a whole set of accompanying standards and tools are emerging, advocating ontologies as the basis for knowledge exchange. In this paper we aim to identify a number of key ideas from the myriad of knowledge-based practices that can readily be implemented by control systems today. We demonstrate with a practical example (a three-channel imager for the Mercator Telescope) how ontologies developed in the Web Ontology Language (OWL) can serve as a meta-model for our instrument, covering as many engineering aspects of the project as needed. We show how a concrete system model can be built on top of this meta-model via a set of Domain Specific Languages (DSLs), supporting both formal verification and the generation of software and documentation artifacts. Finally we reason how the available semantics can be exposed at run-time by adding a “semantic layer” that can be browsed, queried, monitored etc. by any OPC UA-enabled client. | |||
Slides TUCOCB03 [2.130 MB] | ||
WECOAAB02 | Status of the ACS-based Control System of the Mid-sized Telescope Prototype for the Cherenkov Telescope Array (CTA) | controls, interface, monitoring, framework | 987 |
|
|||
CTA, the next-generation ground-based very-high-energy gamma-ray observatory, is defining new areas beyond those related to physics; it is also creating new demands on the control and data acquisition system. With on the order of 100 telescopes spread over a large area and numerous central facilities, CTA will comprise a significantly larger number of devices than any other current imaging atmospheric Cherenkov telescope experiment. A 12 m diameter prototype for the Medium Size Telescope (MST) has been installed in Berlin and is currently being commissioned. The design of the control software of this telescope incorporates the main tools and concepts under evaluation within the CTA consortium in order to provide an array control prototype for the CTA project. The readout and control system for the MST prototype is implemented within the ALMA Common Software (ACS) framework. The interfacing to the hardware is performed via the OPC Unified Architecture (OPC UA). The archive system is based on MySQL and MongoDB. In this contribution the architecture of the MST control and data acquisition system, implementation details and first conclusions are presented. | |||
Slides WECOAAB02 [3.148 MB] | ||
WECOAAB03 | Synchronization of Motion and Detectors and Continuous Scans as the Standard Data Acquisition Technique | detector, hardware, controls, data-acquisition | 992 |
|
|||
This paper describes the model, objectives and implementation of a generic data acquisition structure for an experimental station, which integrates the hardware and software synchronization of motors, detectors, shutters and, in general, any experimental channel or event related to the experiment. The implementation involves the management of hardware triggers, which can be derived from time, encoder positions or even events from the particle accelerator, combined with timestamps to guarantee the correct integration of software-triggered or slow channels. The infrastructure requires a complex management of buffers from different sources, centralized and distributed, including interpolation procedures. ALBA uses Sardana, built on TANGO, as the generic control system, which provides the abstraction and communication with the hardware, and a complete macro edition and execution environment. | |||
Slides WECOAAB03 [2.432 MB] | ||
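The interpolation step mentioned above, merging hardware-triggered buffers with timestamped slow channels, can be shown minimally: given a slow channel's (timestamp, value) samples, estimate its value at each hardware trigger time by linear interpolation. The channel data below is invented for illustration.

```python
# Linearly interpolate a timestamped slow channel at hardware trigger times,
# so slow and fast data can be aligned into one dataset.
from bisect import bisect_left

def interpolate(ts, vs, t):
    """Value of the slow channel at time t (clamped at the ends)."""
    i = bisect_left(ts, t)
    if i == 0:
        return vs[0]
    if i == len(ts):
        return vs[-1]
    t0, t1 = ts[i - 1], ts[i]
    v0, v1 = vs[i - 1], vs[i]
    return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

slow_t = [0.0, 1.0, 2.0]           # software-timestamped samples
slow_v = [10.0, 20.0, 40.0]
triggers = [0.5, 1.5]              # hardware trigger timestamps
merged = [interpolate(slow_t, slow_v, t) for t in triggers]
# merged == [15.0, 30.0]
```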
WECOBA02 | Distributed Information Services for Control Systems | database, controls, EPICS, interface | 1000 |
|
|||
During the design and construction of an experimental physics facility (EPF), a heterogeneous set of engineering disciplines, methods, and tools is used, making subsequent exploitation of the data difficult. In this paper, we describe a framework (DISCS) for building high-level applications for the commissioning, operation, and maintenance of an EPF that provides programmatic as well as graphical interfaces to its data and services. DISCS is a collaborative effort of BNL, FRIB, Cosylab, IHEP, and ESS. It comprises a set of cooperating services and applications, and manages data such as machine configuration, lattice, measurements, alignment, cables, machine state, inventory, operations, calibration, and design parameters. The services/applications include Channel Finder, Logbook, Traveler, Unit Conversion, Online Model, and Save-Restore. Each component of the system has a database, an API, and a set of applications. The services are accessed through REST and EPICS V4. We also discuss the challenges of developing database services in an environment where requirements continue to evolve and developers are distributed among different laboratories with different technology platforms. | |||
WECOBA05 | Understanding NIF Experimental Results: NIF Target Diagnostic Automated Analysis Recent Accomplishments | diagnostics, target, database, laser | 1008 |
|
|||
Funding: This work was performed under the auspices of the Lawrence Livermore National Security, LLC, (LLNS) under Contract No. DE-AC52-07NA27344. #LLNL-ABS-632818 The National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory is the most energetic laser system in the world. During a NIF laser shot, a 20-ns ultraviolet laser pulse is split into 192 separate beams, amplified, and directed to a millimeter-sized target at the center of a 10-m target chamber. To achieve the goals of studying energy science, basic science, and national security, NIF laser shot performance is being optimized around key metrics such as implosion shape and fuel mix. These metrics are accurately quantified after each laser shot using automated signal and image processing routines to analyze raw data from over 50 specialized diagnostics that measure x-ray, optical and nuclear phenomena. Each diagnostic’s analysis is comprised of a series of inverse problems, timing analysis, and specialized processing. This talk will review the framework for general diagnostic analysis, give examples of specific algorithms used, and review the diagnostic analysis team’s recent accomplishments. The automated diagnostic analysis for x-ray, optical, and nuclear diagnostics provides accurate key performance metrics and enables NIF to achieve its goals. |
|||
Slides WECOBA05 [3.991 MB] | ||
WECOBA06 | Exploring No-SQL Alternatives for ALMA Monitoring System | monitoring, database, hardware, insertion | 1012 |
|
|||
The Atacama Large Millimeter/submillimeter Array (ALMA) will be a unique research instrument composed of at least 66 reconfigurable high-precision antennas, located on the Chajnantor plain in the Chilean Andes at an elevation of 5000 m. This paper describes the experience gained after several years of working with the monitoring system, which has the fundamental requirement to collect and store up to 100K variables. The original design is built on top of a cluster of relational database servers and network-attached storage with a Fibre Channel interface. As the number of monitoring points increases with the number of antennas included in the array, the current monitoring system has proven able to handle the increased data rate in the collection and storage area, but the data query interface has started to suffer serious performance degradation. A no-SQL platform was explored as an alternative to the current long-term storage system; specifically, mongoDB has been chosen. Intermediate cache servers based on Redis are also introduced to allow faster online streaming of the most recent data to data analysis applications and web-based charting applications. | |||
Slides WECOBA06 [0.916 MB] | ||
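The ALMA abstract describes a Redis cache for recent samples in front of a mongoDB long-term store. The sketch below illustrates that two-tier read path with plain in-memory stand-ins; the class and its bounded-deque cache are assumptions for illustration, not ALMA code:

```python
from collections import deque

class MonitorStore:
    """Two-tier monitor-point store: a bounded in-memory cache of the
    most recent samples (standing in for Redis) backed by a long-term
    archive (standing in for mongoDB). Illustrative sketch only."""

    def __init__(self, cache_size=1000):
        self.cache = {}    # point name -> deque of (timestamp, value)
        self.archive = {}  # point name -> list of (timestamp, value)
        self.cache_size = cache_size

    def insert(self, point, t, value):
        # Every sample goes to the archive; only the newest stay cached.
        self.cache.setdefault(point, deque(maxlen=self.cache_size)).append((t, value))
        self.archive.setdefault(point, []).append((t, value))

    def query(self, point, t_start):
        # Serve from the cache when it covers the requested window,
        # otherwise fall back to the (slower) long-term archive.
        recent = self.cache.get(point)
        if recent and recent[0][0] <= t_start:
            return [sample for sample in recent if sample[0] >= t_start]
        return [sample for sample in self.archive.get(point, []) if sample[0] >= t_start]
```

Queries for fresh data, the common case for online charts, never touch the archive, which is the behaviour the paper is after.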
WECOBA07 | High Speed Detectors: Problems and Solutions | detector, network, operation, data-analysis | 1016 |
|
|||
Diamond has an increasing number of high speed detectors primarily used on Macromolecular Crystallography, Small Angle X-Ray Scattering and Tomography beamlines. Recently, the performance requirements have exceeded the performance available from a single-threaded writing process on our Lustre parallel file system, so we have had to investigate other file systems and ways of parallelising the data flow to mitigate this. We report on some comparative tests between Lustre and GPFS, and on work we have been leading to enhance the HDF5 library with features that simplify the parallel writing problem. | |||
Slides WECOBA07 [0.617 MB] | ||
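The abstract mentions parallelising the data flow once a single-threaded writer could no longer keep up. One simple way to picture that is a round-robin split of the frame stream across several writer processes, each writing its own file for later stitching; this is an illustrative sketch, not Diamond's actual HDF5 scheme:

```python
def writer_plan(n_frames, n_writers):
    """Assign detector frames round-robin to parallel writer processes.
    Writer w receives frames w, w + n_writers, w + 2*n_writers, ...;
    the per-writer lists double as the offsets needed to stitch the
    separate files back into one logical dataset afterwards."""
    return {w: list(range(w, n_frames, n_writers)) for w in range(n_writers)}

plan = writer_plan(10, 3)
# plan[0] == [0, 3, 6, 9], plan[1] == [1, 4, 7], plan[2] == [2, 5, 8]
```

The interleaving keeps all writers busy at a steady rate, at the cost of a stitching step, which is exactly the kind of bookkeeping the HDF5 enhancements the paper mentions aim to simplify.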
WECOCB05 | Modern Technology in Disguise | FPGA, controls, interface, hardware | 1032 |
|
|||
A modern embedded system for fast applications has to incorporate technologies like multicore CPUs, fast serial links and FPGAs for interfaces and local processing. These technologies are still relatively new, and integrating them into a control system infrastructure that either already exists or has to be planned for long-term maintainability is a challenge that needs to be addressed. At PSI we have, in collaboration with an industrial company (IOxOS SA)[*], built a board and an infrastructure around it that solve issues like the scalability and modularization of systems based on FPGAs and the FMC standard, simplicity in taking such a board into operation, and re-use of parts of the FPGA source code base. In addition, the board has several state-of-the-art features that are typically found in newer bus systems like MicroTCA, but can still easily be incorporated into our VME64x-based infrastructure. In the presentation we will describe the system architecture and its technical features, and how it enables us to effectively develop our different user applications and fast front-end systems.
* IOxOS SA, Gland, Switzerland, http://www.ioxos.ch |
|||
Slides WECOCB05 [0.675 MB] | ||
WECOCB07 | Development of an Open-Source Hardware Platform for Sirius BPM and Orbit Feedback | hardware, FPGA, interface, controls | 1036 |
|
|||
The Brazilian Synchrotron Light Laboratory (LNLS) is developing a BPM and orbit feedback system for Sirius, the new low-emittance synchrotron light source under construction in Brazil. In that context, 3 open-source boards and accompanying low-level firmware/software were developed in cooperation with the Warsaw University of Technology (WUT) to serve as the hardware platform for BPM data acquisition and digital signal processing, as well as the orbit feedback data distributor: (i) an FPGA board with 2 high-pin-count FMC slots in the PICMG AMC form factor; (ii) a 4-channel 16-bit 130 MS/s ADC board in the ANSI/VITA FMC form factor; (iii) a 4-channel 16-bit 250 MS/s ADC board in the ANSI/VITA FMC form factor. The experience of integrating the system prototype in a COTS MicroTCA.4 crate will be reported, as well as the planned developments. | |||
Slides WECOCB07 [4.137 MB] | ||
THCOAAB01 | A Scalable and Homogeneous Web-Based Solution for Presenting CMS Control System Data | controls, interface, detector, status | 1040 |
|
|||
The Control System of the CMS experiment ensures the monitoring and safe operation of over 1M parameters. The high demand for access to online and historical Control System Data calls for a scalable solution combining multiple data sources. The advantage of a Web solution is that data can be accessed from everywhere with no additional software. Moreover, existing visualization libraries can be reused to achieve a user-friendly and effective data presentation. Access to the online information is provided with minimal impact on the running control system by using a common cache in order to be independent of the number of users. Historical data archived by the SCADA software is accessed via an Oracle Database. The web interfaces provide mostly a read-only access to data but some commands are also allowed. Moreover, developers and experts use web interfaces to deploy the control software and administer the SCADA projects in production. By using an enterprise portal, we profit from single sign-on and role-based access control. Portlets maintained by different developers are centrally integrated into dynamic pages, resulting in a consistent user experience. | |||
Slides THCOAAB01 [1.814 MB] | ||
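The CMS abstract attributes the low impact on the running control system to a common cache shared by all web users. A minimal read-through cache along those lines might look as follows; the class, TTL value, and injectable clock are illustrative assumptions, not the CMS implementation:

```python
import time

class SharedCache:
    """Read-through cache: all web clients are served from one cached
    copy, so the control system sees at most one read per key per TTL
    period, regardless of the number of users. Illustrative sketch."""

    def __init__(self, fetch, ttl=2.0, clock=time.monotonic):
        self.fetch = fetch   # function that reads from the control system
        self.ttl = ttl       # seconds a cached value stays valid
        self.clock = clock   # injectable for testing
        self.entries = {}    # key -> (expiry time, value)

    def get(self, key):
        now = self.clock()
        hit = self.entries.get(key)
        if hit and hit[0] > now:
            return hit[1]                     # served from the cache
        value = self.fetch(key)               # one read reaches the source
        self.entries[key] = (now + self.ttl, value)
        return value
```

However many browser sessions call `get()`, the backing system sees at most one read per key per TTL, which is what decouples load on the control system from the number of users.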
THCOAAB02 | Enhancing the Man-Machine-Interface of Accelerator Control Applications with Modern Consumer Market Technologies | controls, HOM, framework, embedded | 1044 |
|
|||
The paradigms of human interaction with modern consumer market devices such as tablets, smartphones or video game consoles are currently undergoing rapid and serious changes. Device control by multi-finger touch gestures or voice recognition has now become standard. Even more advanced technologies such as 3D-gesture recognition are becoming routine. Smart enhancements of head-mounted display technologies are beginning to appear on the consumer market. In addition, the look-and-feel of mobile apps and classical desktop applications are becoming remarkably similar to one another. We have used Web2cToGo to investigate the consequences of the above-mentioned technologies and paradigms for accelerator control applications. Web2cToGo is a framework being developed at DESY. It provides a common, platform-independent Web application capable of running on widely-used mobile as well as common desktop platforms. This paper reports the basic concept of the project, presents the results achieved so far, and discusses the next development steps. | |||
Slides THCOAAB02 [0.667 MB] | ||
THCOAAB05 | Rapid Application Development Using Web 2.0 Technologies | framework, target, interface, experiment | 1058 |
|
|||
Funding: * This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. #LLNL-ABS-632813 The National Ignition Facility (NIF) strives to deliver reliable, cost-effective applications that can easily adapt to the changing business needs of the organization. We use HTML5, RESTful web services, AJAX, jQuery, and JSF 2.0 to meet these goals. WebGL and HTML5 Canvas technologies are being used to provide 3D and 2D data visualization applications. jQuery’s rich set of widgets, along with technologies such as HighCharts and Datatables, allows for creating interactive charts, graphs, and tables. PrimeFaces enables us to utilize much of this Ajax and jQuery functionality while leveraging our existing knowledge base in the JSF framework. RESTful web services have replaced the traditional SOAP model, allowing us to easily create and test web services. Additionally, new software based on NodeJS and WebSocket technology is currently being developed which will augment the capabilities of our existing applications to provide a level of interaction with our users that was previously unfeasible. These Web 2.0-era technologies have allowed NIF to build more robust and responsive applications. Their benefits and details on their use will be discussed. |
|||
Slides THCOAAB05 [0.832 MB] | ||
THCOAAB06 | Achieving a Successful Alarm Management Deployment – The CLS Experience | controls, operation, factory, monitoring | 1062 |
|
|||
Alarm management systems promise to improve situational awareness, aid operational staff in responding correctly to accelerator problems, and reduce downtime. Many facilities, including the Canadian Light Source (CLS), have been challenged in achieving this goal. Past attempts at CLS focused on software features and capabilities. Our third attempt switched gears and instead focused on human factors engineering techniques and the associated alarm response processes. Aspects of the ISA 18.2, EEMUA 191 and NUREG-0700 standards were used. CLS adopted the CSS BEAST alarm handler software. Work was also undertaken to identify bad actors, analyze alarm system performance and avoid alarm flooding. The BEAST deployment was augmented with a locally developed voice annunciation system for a small number of critical high-impact alarms, and with auto-diallers for shutdown periods when the control room is not staffed. This paper summarizes our approach and lessons learned. | |||
Slides THCOAAB06 [0.397 MB] | ||
THPPC001 | Overview of "The Scans" in the Central Control System of TRIUMF's 500 MeV Cyclotron | cyclotron, controls, TRIUMF, hardware | 1090 |
|
|||
The Controls Group of TRIUMF's 500 MeV cyclotron developed, and now runs and maintains, a software application known as The Scans, whose purpose is to: a) log events, b) annunciate alarms and warnings, c) perform simple actions on the hardware, and d) provide software interlocks for machine protection. Since its inception more than 35 years ago, The Scans has increasingly become an essential part of the proper operation of the Cyclotron. This paper gives an overview of The Scans, its advantages and limitations, and desired improvements. | |||
Poster THPPC001 [4.637 MB] | ||
THPPC004 |
CODAC Standardisation of PLC Communication | PLC, EPICS, controls, Ethernet | 1097 |
|
|||
As defined by the CODAC Architecture of ITER, a Plant System Host (PSH) and one or more Slow Controllers (SIEMENS PLCs) are connected over a switched Industrial Ethernet (IE) network. An important part of the software engineering of Slow Controllers is the standardization of communication between the PSH and the PLCs. Based on prototyping and performance evaluation, Open IE Communication over TCP was selected. It is implemented on the PLCs to support the CODAC data model of ‘State’, ‘Configuration’ and ‘Simple Commands’. The implementation is packaged in the Standard PLC Software Structure (SPSS) as a part of the CODAC Core System release. SPSS can be easily configured by the SDD Tools of CODAC. However, Open IE Communication is restricted to the PLC CPUs. This presents a challenge for implementing redundant PLC architectures and using remote I/O modules. Another version of SPSS has been developed to support communication over Communication Processors (CP). The EPICS driver has also been extended to support redundancy transparently to the CODAC applications. Issues of PLC communication standardization in the context of the CODAC environment and future development of SPSS and the EPICS driver are presented here. | |||
THPPC005 | Virtualization Infrastructure within the Controls Environment of the Light Sources at HZB | network, hardware, controls, EPICS | 1100 |
|
|||
The advantages of virtualization techniques and infrastructures with respect to configuration management, high availability and resource management have become obvious also for controls applications. Today a choice of powerful products is available that is easy to use and supports the desired functionality, performance, usability and maintainability at a very mature level. This paper presents the architecture of the virtual infrastructure and its relation to the hardware-based counterpart as it has emerged for BESSY II and MLS controls within the past decade. Successful experiences as well as abandoned attempts and caveats concerning some intricate troubles are summarized. | |||
Poster THPPC005 [0.286 MB] | ||
THPPC012 | The Equipment Database for the Control System of the NICA Accelerator Complex | controls, database, TANGO, collider | 1111 |
|
|||
The report describes the equipment database for the control system of the Nuclotron-based Ion Collider fAcility (NICA, JINR, Russia). The database will contain information about the hardware, software, computer and network components of the control system, their main settings and parameters, and the responsible persons. The equipment database should help to implement Tango as the control system of the NICA accelerator complex. The report also describes a web service to display, search, and manage the database. | |||
Poster THPPC012 [1.070 MB] | ||
THPPC013 | Configuration Management of the Control System | controls, TANGO, database, PLC | 1114 |
|
|||
The control system of a big research facility like a synchrotron involves a lot of work to keep hardware and software synchronised with each other to maintain good coherence. Modern control system middleware infrastructures like Tango use a database to store all values necessary to communicate with the devices. Nevertheless, it is necessary to configure the driver of a power supply or a motor controller before any software of the control system can communicate with it. This is part of configuration management, which involves keeping track of thousands of pieces of equipment and their properties. In recent years, several DevOps tools like Chef, Puppet, Ansible or SpaceMaster have been developed by the OSS community. They are now mandatory for configuring the thousands of servers that make up clusters or cloud platforms. Defining a set of coherent components, enabling Continuous Deployment in synergy with Continuous Integration, reproducing a control system for simulation, and rebuilding and tracking changes even in the hardware configuration are among the use cases. We will explain the MaxIV strategy regarding configuration management. | |||
Poster THPPC013 [4.620 MB] | ||
THPPC015 | Managing Infrastructure in the ALICE Detector Control System | controls, detector, experiment, hardware | 1122 |
|
|||
The main role of the ALICE Detector Control System (DCS) is to ensure safe and efficient operation of one of the large high energy physics experiments at CERN. The DCS design is based on the commercial SCADA software package WinCC Open Architecture. The system includes over 270 VME and power supply crates, 1200 network devices, over 1,000,000 monitored parameters as well as numerous pieces of front-end and readout electronics. This paper summarizes the computer infrastructure of the DCS as well as the hardware and software components that are used by WinCC OA for communication with electronics devices. The evolution of these components and experience gained from the first years of their production use are also described. We also present tools for the monitoring of the DCS infrastructure and supporting its administration together with plans for their improvement during the first long technical stop in LHC operation. | |||
Poster THPPC015 [1.627 MB] | ||
THPPC017 | Control System Configuration Management at PSI Large Research Facilities | EPICS, controls, hardware, database | 1125 |
|
|||
The control system of the PSI accelerator facilities and their beamlines consists mainly of so-called Input Output Controllers (IOCs) running EPICS. There are several flavors of EPICS IOCs at PSI running on different CPUs, different underlying operating systems and different EPICS versions. We have hundreds of IOCs which control the facilities at PSI. The goal of control system configuration management is to provide a set of tools that allow a consistent and uniform configuration of all IOCs. In this context an Oracle database contains all hardware-specific information, including the CPU type, operating system and EPICS version. The installation tool connects to the Oracle database. Depending on the IOC type, a set of files (or symbolic links) is created which points to the required operating system, libraries or EPICS configuration files in the boot directory. In this way a transparent and user-friendly IOC installation is achieved. The control system expert can check the IOC installation, boot information, as well as the status of loaded EPICS process variables using Web applications. | |||
Poster THPPC017 [0.405 MB] | ||
THPPC023 | Integration of Windows Binaries in the UNIX-based RHIC Control System Environment | controls, Windows, interface, Linux | 1135 |
|
|||
Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. Since its inception, the RHIC control system has been built up on UNIX or LINUX and implemented primarily in C++. Sometimes equipment vendors supply software packages developed for the Microsoft Windows operating system. This leads to a need to integrate these packaged executables into the existing data logging, display, and alarm systems. This paper will describe an approach to incorporate such non-UNIX binaries seamlessly into the RHIC control system with minimal changes to the existing code base, allowing for compilation on standard LINUX workstations through the use of a virtual machine. The implementation resulted in the successful use of a Windows dynamic-link library (DLL) to control equipment remotely while running a synoptic display interface on a LINUX machine. |
|||
Poster THPPC023 [1.391 MB] | ||
THPPC024 | Operating System Upgrades at RHIC | controls, Linux, network, collider | 1138 |
|
|||
Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. Upgrading hundreds of machines to the next major release of an Operating system (OS), while keeping the accelerator complex running, presents a considerable challenge. Even before addressing the challenges that an upgrade represents, there are critical questions that must be answered. Why should an upgrade be considered? (An upgrade is labor intensive and includes potential risks due to defective software.) When is it appropriate to make incremental upgrades to the OS? (Incremental upgrades can also be labor intensive and include similar risks.) When is the best time to perform an upgrade? (An upgrade can be disruptive.) Should all machines be upgraded to the same version at the same time? (At times this may not be possible, and there may not be a need to upgrade certain machines.) Should the compiler be upgraded at the same time? (A compiler upgrade can also introduce risks at the software application level.) This paper examines our answers to these questions, describes how upgrades to the Red Hat Linux OS are implemented by the Controls group at RHIC, and describes our experiences. |
|||
Poster THPPC024 [0.517 MB] | ||
THPPC026 | Diagnostic Controls of IFMIF-EVEDA Prototype Accelerator | controls, diagnostics, EPICS, emittance | 1144 |
|
|||
The Linear IFMIF Prototype Accelerator (LIPAc) will accelerate a 9 MeV, 125 mA, CW deuteron beam in order to validate the technology that will be used for the future IFMIF accelerator (International Fusion Materials Irradiation Facility). This facility will be installed in Rokkasho (Japan), and Irfu-Saclay has developed the control system for several work packages, such as the injector and a set of diagnostic subsystems. At Irfu-Saclay, beam tests were carried out on the injector with its diagnostics. Diagnostic devices have been developed to characterize the high beam power (more than 1 MW) along the accelerator: an Emittance Meter Unit (EMU), Ionization Profile Monitors (IPM), Secondary Electron Emission Grids (SEM-grids), Beam Loss Monitors (BLoM and μLoss), and Current Transformers (CT). The control system relies on COTS components and an EPICS software platform. A specific isolated fast acquisition subsystem running at a high sampling rate (about 1 MS/s), triggered by the Machine Protection System (MPS), is dedicated to the analysis of post-mortem data produced by the BLoM and current transformer signals. | |||
Poster THPPC026 [0.581 MB] | ||
THPPC027 | A New EPICS Device Support for S7 PLCs | PLC, EPICS, controls, interface | 1147 |
|
|||
S7 series programmable logic controllers (PLCs) are commonly used in accelerator environments. A new EPICS device support for S7 PLCs that is based on libnodave has been developed. This device support allows for a simple integration of S7 PLCs into EPICS environments. Developers can simply create an EPICS record referring to a memory address in the PLC and the device support takes care of automatically connecting to the PLC and transferring the value. This contribution presents the concept behind the s7nodave device support and shows how simple it is to create an EPICS IOC that communicates with an S7 PLC. | |||
Poster THPPC027 [3.037 MB] | ||
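The abstract's claim that integration reduces to creating a record pointing at a PLC memory address can be pictured with an EPICS database fragment like the one below; the connection name, address notation, and DTYP string are illustrative guesses and should be checked against the s7nodave documentation rather than taken as its documented syntax:

```
# Hypothetical record: reads a 16-bit value from a PLC data block once
# per second. The address and DTYP syntax shown are illustrative only.
record(ai, "PLC:Tank1:Temperature") {
    field(DTYP, "s7nodave")
    field(INP,  "@plc1 DB100.DBW4")
    field(SCAN, "1 second")
}
```

The device support then handles connecting to the PLC and transferring the value, as the abstract describes, so no PLC-side protocol code has to be written per signal.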
THPPC045 | The SSC-Linac Control System | controls, linac, hardware, operation | 1173 |
|
|||
This article gives a brief description of the SSC-Linac control system for the Heavy Ion Research Facility in Lanzhou (HIRFL). It mainly describes the overall system architecture, hardware and software. The overall architecture is that of a distributed control system, and we have adopted EPICS as the integration tool to develop the SSC-Linac control system. We use an NI PXIe chassis and PXIe bus master as the front-end control hardware. Device controllers for each subsystem are composed of commercial products or of components designed by the subsystems. The operating system for the OPIs and IOCs of the SSC-Linac control system will be Linux. | |||
THPPC056 | Design and Implementation of Linux Drivers for National Instruments IEEE 1588 Timing and General I/O Cards | hardware, timing, Linux, controls | 1193 |
|
|||
Cosylab is developing GPL Linux device drivers to support several National Instruments (NI) devices. In particular, drivers have already been developed for the NI PCI-1588, PXI-6682 (IEEE1588/PTP) devices and the NI PXI-6259 I/O device. These drivers are being used in the development of the latest plasma fusion research reactor, ITER, being built at the Cadarache facility in France. In this paper we discuss design and implementation issues, such as driver API design (device file per device versus device file per functional unit), PCI device enumeration, handling reset, etc. We also present various use-cases demonstrating the capabilities and real-world applications of these drivers. | |||
Poster THPPC056 [0.482 MB] | ||
THPPC058 | LSA - the High Level Application Software of the LHC - and Its Performance During the First Three Years of Operation | controls, injection, optics, hardware | 1201 |
|
|||
The LSA (LHC software architecture) project was started in 2001 with the aim of developing the high level core software for the control of the LHC accelerator. It has now been deployed widely across the CERN accelerator complex and has been largely successful in meeting its initial aims. The main functionality and architecture of the system is recalled and its use in the commissioning and exploitation of the LHC is elucidated. | |||
Poster THPPC058 [1.291 MB] | ||
THPPC061 | SwissFEL Magnet Test Setup and Its Controls at PSI | controls, EPICS, operation, detector | 1209 |
|
|||
High-brightness electron bunches will be guided in the future free-electron laser (SwissFEL) at the Paul Scherrer Institute (PSI) with the use of several hundred magnets. The SwissFEL machine imposes very strict requirements not only on the field quality but also on the mechanical and magnetic alignment of these magnets. To ensure that the magnet specifications are met and to develop reliable procedures for aligning the magnets in the SwissFEL and correcting their field errors during machine operation, the PSI magnet test system was upgraded. The upgraded system is a high-precision measurement setup based on Hall probe, rotating coil, vibrating wire and moving wire techniques. It is fully automated and integrated into the PSI controls. The paper describes the main controls components of the new magnet test setup and their performance. | |||
Poster THPPC061 [0.855 MB] | ||
THPPC064 | The HiSPARC Control System | detector, controls, Windows, database | 1220 |
|
|||
Funding: Nikhef The purpose of the HiSPARC project is twofold. First, the physics goal: detection of high-energy cosmic rays. Second, to offer an educational program in which high school students participate by building their own detection station and analysing its data. Around 70 high schools, spread over the Netherlands, are participating. Data are centrally stored at Nikhef in Amsterdam. The detectors, located on the roofs of the high schools, are connected by means of a USB interface to a Windows PC, which itself is connected to the high school's network and further on to the public internet. Each station is equipped with GPS, providing the exact location and accurate timing. This paper covers the setup, building and usage of the station software. It contains a LabVIEW run-time engine, services for remote control and monitoring, a series of Python scripts and a local buffer. An important task of the station software is to control the dataflow, event building and submission to the central database. Furthermore, several global aspects are described, like the source repository, the station software installer and organization. Windows, USB, FTDI, LabVIEW, VPN, VNC, Python, Nagios, NSIS, Django |
|||
THPPC066 | ACSys Camera Implementation Utilizing an Erlang Framework to C++ Interface | framework, controls, interface, hardware | 1228 |
|
|||
Multiple cameras are integrated into the Accelerator Control System using an Erlang framework. Message passing is implemented to provide access to C++ methods. The framework runs on a multi-core processor running Scientific Linux. The system provides full access to any 3 of approximately 20 cameras collecting frames at 5 Hz. JPEG images are provided in memory or as files for visual information; PNG files are provided in memory or as files for analysis. Histograms over the X and Y coordinates are filtered and analyzed. This implementation is described and the framework is evaluated. | |||
THPPC071 | Machine Protection Diagnostics on a Rule Based System | diagnostics, DSL, hardware, vacuum | 1235 |
|
|||
Since the commissioning of the high-brilliance, 3rd-generation light source PETRA III in 2009, accelerator operation has become routine. To guard the machine against damage, a Machine Protection System (MPS) was built [*]. Alarms and beam information are collected by the MPS and can be used to analyse beam losses and dumps. The MPS triggers a visual diagnostic software tool, which is used to analyse the cause of a hardware dump. The diagnostic software is based on a Domain Specific Language (DSL) architecture. The MPS diagnostic application is designed with a server-client architecture and is written in Java. The communication protocol is based on TINE. We characterise the data flow of the alarms and the DSL specification, and describe the composition from the delivered structure to a single, human-understandable message.
* T. Lensch, M. Werner, "Commissioning Results and Improvements of the Machine Protection System for PETRA III", BIW10, New Mexico, US, 2010 |
|||
Poster THPPC071 [0.838 MB] | ||
THPPC080 | Testing and Verification of PLC Code for Process Control | PLC, controls, framework, factory | 1258 |
|
|||
Functional testing of PLC programs has historically been a challenging task for control system engineers. This paper presents an analysis of different mechanisms for testing PLC programs developed within the UNICOS (Unified Industrial COntrol System) framework. The framework holds a library of objects, which are represented as function blocks in the PLC application. When a new object is added to the library or an existing one is corrected, exhaustive validation of the PLC code is required. Testing and formal verification are two distinct approaches selected for eliminating failures of UNICOS objects. Testing is usually done manually, or automatically by developing scripts at the supervision layer using the real control infrastructure. Formal verification proves the correctness of the system by checking whether a formal model of the system satisfies certain properties or requirements. The NuSMV model checker has been chosen to perform this task. The advantages and limitations of both approaches are presented and illustrated with a case study validating a specific UNICOS object. | |||
Poster THPPC080 [3.659 MB] | ||
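To make the testing half of the comparison concrete, here is a sketch of the kind of automated behavioural check one might run against a control object, using a toy Python model of a hypothetical on/off object with an interlock. The object, its fields, and the expected behaviour are invented for illustration; the UNICOS tests described above actually exercise PLC function blocks from the supervision layer:

```python
class OnOffObject:
    """Toy model of a hypothetical UNICOS-style on/off object:
    an active interlock forces the output off and blocks requests."""

    def __init__(self):
        self.output = False
        self.interlocked = False

    def set_interlock(self, active):
        self.interlocked = active
        if active:
            self.output = False  # interlock always wins

    def request(self, on):
        if not self.interlocked:
            self.output = on

def test_interlock_has_priority():
    obj = OnOffObject()
    obj.request(True)
    assert obj.output
    obj.set_interlock(True)
    assert not obj.output
    obj.request(True)  # must be ignored while interlocked
    assert not obj.output

test_interlock_has_priority()
```

A model checker such as NuSMV would instead prove the same property ("the output is never on while the interlock is active") over all reachable states, rather than over the particular scenario the test script happens to exercise.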
THPPC083 | Software Tool Leverages Existing Image Analysis Results to Provide In-Situ Transmission of the NIF Disposable Debris Shields | laser, optics, alignment, target | 1270 |
|
|||
Funding: * This work was performed under the auspices of the Lawrence Livermore National Security, LLC, (LLNS) under Contract No. DE-AC52-07NA27344. #LLNL-ABS-632472 The Disposable Debris-Shield (DDS) Attenuation Tool is software that leverages Automatic Alignment image analysis results and takes advantage of the motorized DDS insertion and removal to compute the in-situ transmission of the 192 NIF DDS. The NIF employs glass DDS to protect the final optics from debris and shrapnel generated by the laser-target interaction. Each DDS transmission must be closely monitored, and the DDS replaced when its physical characteristics impact laser performance. The tool calculates the transmission from the total pixel intensities of images acquired with the debris shield inserted and removed. These total intensities already existed in the Automatic Alignment image processing algorithms. The tool uses this data, adds the capability to specify the DDS to test, moves the DDS, performs the calculations, and saves the data to an output file. It operates on all 192 beams of the NIF in parallel, and has revealed a discrepancy between laser predictive models and actual performance. As qualification, the transmission of new DDS with known transmissions supplied by the vendor was measured. This demonstrated that the tool is capable of measuring in-situ DDS transmission to better than 0.5% rms. |
|||
Poster THPPC083 [2.362 MB] | ||
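The transmission computation the abstract describes, a ratio of total image intensities with the debris shield inserted and removed, reduces to a one-liner; the function name and the numbers in the example are made up for illustration:

```python
def dds_transmission(total_intensity_inserted, total_intensity_removed):
    """In-situ transmission estimate: the ratio of the summed pixel
    intensity with the debris shield inserted to that with it removed."""
    if total_intensity_removed <= 0:
        raise ValueError("reference image (shield removed) has no signal")
    return total_intensity_inserted / total_intensity_removed

# Hypothetical total pixel intensities from two alignment images:
print(round(dds_transmission(9.62e6, 9.80e6), 4))  # → 0.9816
```

Because both totals come from the same camera and alignment geometry, systematic factors largely cancel in the ratio, which is what makes the sub-0.5% rms accuracy quoted above plausible from existing alignment imagery.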
THPPC085 | Image Analysis for the Automated Alignment of the Advanced Radiography Capability (ARC) Diagnostic Path* | diagnostics, alignment, controls, hardware | 1274 |
|
|||
Funding: *This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. #LLNL-ABS-631616 The Advanced Radiographic Capability (ARC) at the National Ignition Facility was developed to produce a sequence of short laser pulses that are used to backlight an imploding fuel capsule. This backlighting capability will enable the creation of a sequence of radiographs during capsule implosion and provide an unprecedented view into the dynamics of the implosion. A critical element of the ARC is the diagnostic instrumentation used to assess the quality of the pulses. Pulses are steered to the diagnostic package through a complex optical path that requires precision alignment. A central component of the alignment system is the image analysis algorithms, which are used to extract information from alignment imagery and provide feedback for the optical alignment control loops. Alignment imagery consists of complex patterns of light resulting from the diffraction of pilot beams around cross-hairs and other fiducials placed in the optical path. This paper describes the alignment imagery, and the image analysis algorithms used to extract the information needed for proper operation of the ARC automated alignment loops. |
|||
![]() |
Poster THPPC085 [3.236 MB] | ||
THPPC095 | A Proof-of-Principle Study of a Synchronous Movement of an Undulator Array Using an EtherCAT Fieldbus at European XFEL | undulator, controls, electron, photon | 1292 |
|
|||
The European XFEL project is a 4th generation X-ray light source. The undulator systems SASE 1, SASE 2 and SASE 3 are used to produce photon beams. Each undulator system consists of an array of undulator cells installed in a row along the electron beam. The motion control of an undulator system is carried out by means of industrial components using an EtherCAT fieldbus. One of its features is motion synchronization for undulator cells which belong to the same system. This paper describes the technical design and software implementation of the undulator system control that provides this feature. It presents the results of an on-going proof-of-principle study of the synchronous movement of four undulator cells, as well as a study of movement synchronization between an undulator and a phase shifter. | |||
![]() |
Poster THPPC095 [3.131 MB] | ||
THPPC104 | A Timing System for Cycle Based Accelerators | timing, real-time, LabView, hardware | 1303 |
|
|||
Synchrotron accelerators with multiple ion sources and beam lines require a high degree of flexibility to define beam cycle timing sequences. We have therefore decided to design a ready-to-use accelerator timing system based on off-the-shelf hardware and software that fits mid-size accelerators and is easy to adapt to specific user needs. This Real Time Event Distribution Network (REDNet) has been developed under the guidance of CERN within the MedAustron-CERN collaboration. The system, based on the MRF transport layer, has been implemented by Cosylab. While we have used hardware on the NI PXIe platform, it is straightforward to port the system to other platforms such as VME. The following characteristics are key to its readiness for use: (1) turn-key system comprising hardware, transport layer, application software and open integration interfaces, (2) performance suitable for a wide range of accelerators, (3) multiple virtual timing systems in one physical box, (4) documentation developed according to the V-model. Given the maturity of the development, we have decided to make REDNet available as a product through our industrial partner. | |||
![]() |
Poster THPPC104 [0.429 MB] | ||
THPPC115 | Fast Orbit Feedback Implementation at Alba Synchrotron | FPGA, real-time, hardware, target | 1328 |
|
|||
After the successful accelerator commissioning, and with the facility already in operation, one of the top short-term objectives identified by the accelerator division was the Fast Orbit Feedback (FOFB) implementation. The target of the FOFB system is to hold the electron beam position in the submicron range in both the vertical and horizontal planes, correcting instabilities up to 120 Hz. This increased beam stability is considered a major asset for beamline user operation. To achieve this target, the orbit position is acquired from the 88 Libera BPMs at a 10 kHz sampling rate and distributed through an independent network, and the corrections are calculated and sent to the 176 power supplies that drive the corrector coils. The whole correction loop executes at 10 kHz, and the total latency of the system is characterized and minimized to optimize the bandwidth response. | |||
![]() |
Poster THPPC115 [0.732 MB] | ||
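As an illustration of the computation such a loop performs each tick, the sketch below applies a pseudo-inverse of an orbit response matrix to the orbit error. The random response matrix, the gain and the pseudo-inverse approach are assumptions for the sketch, not the actual ALBA implementation.

```python
import numpy as np

# Illustrative one-plane orbit-correction tick (assumed algorithm, not
# the ALBA FOFB firmware): map BPM orbit error to corrector kicks via a
# pseudo-inverse of the orbit response matrix.
rng = np.random.default_rng(1)
n_bpm, n_corr = 88, 176                  # BPM readings and corrector supplies
R = rng.normal(size=(n_bpm, n_corr))     # response matrix (measured on a real machine)
R_pinv = np.linalg.pinv(R)               # computed once, offline

def fofb_step(orbit, golden, gain):
    """One 100-us correction tick: corrector kicks from the orbit error."""
    return -gain * (R_pinv @ (orbit - golden))

golden = np.zeros(n_bpm)
orbit = golden + R @ rng.normal(scale=1e-6, size=n_corr)  # perturbed orbit
kicks = fofb_step(orbit, golden, gain=1.0)
residual = orbit + R @ kicks             # orbit after applying the kicks
print(np.abs(residual).max() < np.abs(orbit).max())  # True: error reduced
```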
THPPC136 | Stabilizing the Beam Current Split Ratio in TRIUMF's 500 MeV Cyclotron with High Level, Closed-Loop Feedback Software | feedback, cyclotron, TRIUMF, controls | 1370 |
|
|||
In the pursuit of progressively more stable beam currents at TRIUMF's 500 MeV cyclotron, it was proposed to regulate the beam current split ratio for two primary beamlines with closed-loop feedback. Initial runs have shown promising results and have justified further efforts in that direction. This paper describes the software that provides the closed-loop feedback, as well as future developments. | |||
![]() |
Poster THPPC136 [4.309 MB] | ||
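A minimal sketch of such closed-loop regulation, assuming a simple proportional correction on a hypothetical splitter-actuator setpoint (the function name, gain and toy plant are assumptions, not the actual TRIUMF software):

```python
def split_ratio_step(measured_ratio, target_ratio, setpoint, gain=0.5,
                     lo=0.0, hi=1.0):
    """One feedback step: nudge the (hypothetical) splitter-actuator
    setpoint toward the target beam-current split ratio, clamped to
    actuator limits."""
    new_setpoint = setpoint + gain * (target_ratio - measured_ratio)
    return min(hi, max(lo, new_setpoint))

# Toy plant: the measured split ratio simply follows the setpoint, so the
# error shrinks by the gain factor on every iteration.
setpoint = ratio = 0.30
for _ in range(20):
    setpoint = split_ratio_step(ratio, 0.55, setpoint, gain=0.5)
    ratio = setpoint
print(round(ratio, 3))  # 0.55
```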
THPPC137 | Time Domain Simulation Software of the APS Storage Ring Orbit Real-time Feedback System | feedback, simulation, FPGA, storage-ring | 1373 |
|
|||
The APS storage ring real-time feedback (RTFB) system will be upgraded as part of the APS Upgrade project. Time-domain simulation software has been implemented to find the best corrector parameters and to evaluate the performance of different system configurations. The software comprises two parts: the corrector noise model generator and the RTFB simulation. The corrector noise model generator produces the corrector noise data that serve as input for the RTFB simulation. The corrector noise data are generated from measured APS BPM turn-by-turn noise data, so that the simulation reproduces the real machine. This paper introduces the algorithm and the high-level software development of the corrector noise model generator and the RTFB simulation.
Work supported by U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, under Contract No. DE-AC02-06CH11357. |
|||
![]() |
Poster THPPC137 [0.445 MB] | ||
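One common way to build a noise model from a measured record is to keep its amplitude spectrum and randomize the phases; the sketch below illustrates that approach. This is an assumption for illustration, and the actual APS generator may differ.

```python
import numpy as np

def noise_from_spectrum(measured, rng):
    """Generate synthetic noise whose amplitude spectrum matches the
    measured turn-by-turn record, with randomized phases (one common
    noise-modelling technique; the DC component is removed)."""
    n = len(measured)
    amplitude = np.abs(np.fft.rfft(measured))
    phase = rng.uniform(0.0, 2.0 * np.pi, size=amplitude.size)
    spectrum = amplitude * np.exp(1j * phase)
    spectrum[0] = 0.0                    # no DC offset in the noise model
    return np.fft.irfft(spectrum, n=n)

rng = np.random.default_rng(42)
turns = np.arange(256)
measured = np.sin(2 * np.pi * 10 * turns / 256) + 0.1 * rng.normal(size=256)
synthetic = noise_from_spectrum(measured, rng)
print(synthetic.shape)  # (256,)
```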
THCOBB05 | Switching Solution – Upgrading a Running System | controls, EPICS, hardware, interface | 1400 |
|
|||
At Keck Observatory, we are upgrading our existing operational telescope control system and must do so with as little operational impact as possible. This paper describes our current integrated system and how we plan to create a more distributed system and deploy it subsystem by subsystem. This will be done by systematically extracting each existing subsystem and then replacing it with the new, upgraded distributed subsystem, maintaining backwards compatibility as much as possible to ensure a seamless transition. We also describe the combination of cabling solutions, design choices and a hardware switching solution we have designed to allow us to seamlessly switch signals back and forth between the current and new systems. | |||
![]() |
Slides THCOBB05 [1.482 MB] | ||
THCOBA02 | Unidirectional Security Gateways: Stronger than Firewalls | network, controls, hardware, experiment | 1412 |
|
|||
In the last half decade, application integration via Unidirectional Security Gateways has emerged as a secure alternative to firewalls. The gateways are deployed extensively to protect the safety and reliability of industrial control systems in nuclear generators, conventional generators and a wide variety of other critical infrastructures. Unidirectional Gateways are a combination of hardware and software. The hardware allows information to leave a protected industrial network, and physically prevents any signal whatsoever from returning to the protected network. As a result, the hardware blocks all online attacks originating on external networks. The software replicates industrial servers to external networks, where the information in those servers is available to end users and to external applications. The software does not proxy bi-directional protocols. Join us to learn how this secure alternative to firewalls works, where and how the technology is routinely deployed, and how all of the usual remote support, data integrity and other apparently bi-directional deployment issues are routinely resolved. | |||
![]() |
Slides THCOBA02 [0.721 MB] | ||
THCOBA06 | Virtualization and Deployment Management for the KAT-7 / MeerKAT Control and Monitoring System | hardware, database, network, controls | 1422 |
|
|||
Funding: National Research Foundation (NRF) of South Africa To facilitate efficient deployment and management of the Control and Monitoring software of the South African 7-dish Karoo Array Telescope (KAT-7) and the forthcoming Square Kilometer Array (SKA) precursor, the 64-dish MeerKAT Telescope, server virtualization and automated deployment using a host configuration database are used. The advantages of virtualization are well known; by adding automated deployment from a configuration database, additional advantages accrue: server configuration becomes deterministic, development and deployment environments match more closely, system configuration can easily be version controlled, and systems can easily be rebuilt when hardware fails. We chose the Debian GNU/Linux based Proxmox VE hypervisor using the OpenVZ single-kernel container virtualization method, along with Fabric (a Python ssh automation library) based deployment automation and a custom configuration database. This paper presents the rationale behind these choices, our current implementation and our experience with it, and a performance evaluation of OpenVZ and KVM. Tests include a comparison of application-specific networking performance over 10GbE using several network configurations. |
|||
![]() |
Slides THCOBA06 [5.044 MB] | ||
FRCOAAB01 | CSS Scan System | interface, controls, experiment, EPICS | 1461 |
|
|||
Funding: SNS is managed by UT-Battelle, LLC, under contract DE-AC05-00OR22725 for the U.S. Department of Energy Automation of beam line experiments requires more flexibility than the control of an accelerator. The sample environment devices to control as well as requirements for their operation can change daily. Tools that allow stable automation of an accelerator are not practical in such a dynamic environment. On the other hand, falling back to generic scripts opens too much room for error. The Scan System offers an intermediate approach. Scans can be submitted in numerous ways, from pre-configured operator interface panels, graphical scan editors, scripts, the command line, or a web interface. At the same time, each scan is assembled from a well-defined set of scan commands, each one with robust features like error checking, time-out handling and read-back verification. Integrated into Control System Studio (CSS), scans can be monitored, paused, modified or aborted as needed. We present details of the implementation and first usage experience. |
|||
![]() |
Slides FRCOAAB01 [1.853 MB] | ||
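The idea of a scan assembled from well-defined commands with error checking, time-out handling and read-back verification can be sketched as below. The class names and API here are hypothetical illustrations, not the actual CSS Scan System interface, and the control system is replaced by a trivial stand-in.

```python
import time

class ScanError(Exception):
    """Raised when a scan command fails its read-back verification."""

class SetCommand:
    """Set a device value, then verify its read-back within a tolerance
    before a timeout expires (hypothetical API for illustration)."""
    def __init__(self, device, value, tolerance=0.01, timeout=1.0):
        self.device, self.value = device, value
        self.tolerance, self.timeout = tolerance, timeout

    def execute(self, ioc):
        ioc.write(self.device, self.value)
        deadline = time.monotonic() + self.timeout
        while time.monotonic() < deadline:
            if abs(ioc.read(self.device) - self.value) <= self.tolerance:
                return
            time.sleep(0.01)
        raise ScanError(f"{self.device} did not reach {self.value}")

class FakeIOC:
    """Stand-in for the control system: read-backs follow writes."""
    def __init__(self):
        self.pvs = {}
    def write(self, pv, value):
        self.pvs[pv] = value
    def read(self, pv):
        return self.pvs[pv]

def run_scan(commands, ioc):
    """Execute each command in order; any failure aborts the scan."""
    for command in commands:
        command.execute(ioc)

ioc = FakeIOC()
run_scan([SetCommand("motor:x", 1.5), SetCommand("temp:sp", 300.0)], ioc)
print(ioc.read("temp:sp"))  # 300.0
```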
FRCOAAB02 | Karabo: An Integrated Software Framework Combining Control, Data Management, and Scientific Computing Tasks | controls, device-server, GUI, interface | 1465 |
|
|||
The expected very high data rates and volumes at the European XFEL demand an efficient, concurrent approach to performing experiments. Data analysis must already start whilst data is still being acquired, and initial analysis results must immediately be usable to re-adjust the current experiment setup. We have developed a software framework, called Karabo, which allows such a tight integration of these tasks. Karabo is in essence a pluggable, distributed application management system. All Karabo applications (called “Devices”) have a standardized API for self-description/configuration, program-flow organization (state machine), logging and communication. Central services exist for user management, access control, data logging, configuration management etc. The design provides a very scalable but still maintainable system that at the same time can act as a fully-fledged control system or a highly parallel distributed scientific workflow system. It allows simple integration and adaptation to changing control requirements and the addition of new scientific analysis algorithms, making them automatically and immediately available to experimentalists. | |||
![]() |
Slides FRCOAAB02 [2.523 MB] | ||
FRCOAAB03 | Experiment Control and Analysis for High-Resolution Tomography | controls, experiment, detector, EPICS | 1469 |
|
|||
Funding: Work supported by U.S. Department of Energy, Office of Science, under Contract No. DE-AC02-06CH11357. X-ray Computed Tomography (XCT) is a powerful technique for imaging 3D structures at the micro- and nano-levels. Recent upgrades to tomography beamlines at the APS have enabled imaging at resolutions up to 20 nm at increased pixel counts and speeds. As detector resolution and speed increase, the amount of data that must be transferred and analyzed also increases. This, coupled with growing experiment complexity, drives the need for software to automate data acquisition and processing. We present an experiment control and data processing system for tomography beamlines that helps address this concern. The software, written in C++ using Qt, interfaces with EPICS for beamline control and provides live and offline data viewing, basic image manipulation features, and scan sequencing that coordinates EPICS-enabled apparatus. Post acquisition, the software triggers a workflow pipeline, written using ActiveMQ, that transfers data from the detector computer to an analysis computer and launches a reconstruction process. Experiment metadata and provenance information are stored along with raw and analyzed data in a single HDF5 file. |
|||
![]() |
Slides FRCOAAB03 [1.707 MB] | ||
FRCOAAB08 | The LIMA Project Update | detector, controls, hardware, interface | 1489 |
|
|||
LIMA, a Library for Image Acquisition, was developed at the ESRF to control high-performance 2D detectors used in scientific applications. It provides generic access to common image acquisition concepts, from detector synchronization to online data reduction, including image transformations and storage management. An abstraction of the low-level 2D control defines the interface for camera plugins, allowing different degrees of hardware optimization. Scientific 2D data throughput of up to 250 MB/s is ensured by multi-threaded algorithms exploiting multi-CPU/core technologies. Eighteen detectors are currently supported by LIMA, covering CCD, CMOS and pixel detectors, and video GigE cameras. Control-system agnostic by design, LIMA has become the de facto 2D standard in the TANGO community. An active collaboration among large facilities, research laboratories and detector manufacturers joins efforts towards the integration of new core features, detectors and data processing algorithms. The second-generation LIMA will provide major improvements in several key core elements, such as buffer management, data format support (including HDF5) and user-defined software operations. | |||
![]() |
Slides FRCOAAB08 [1.338 MB] | ||