Paper | Title | Other Keywords | Page |
---|---|---|---|
MOCOBAB01 | New Electrical Network Supervision for CERN: Simpler, Safer, Faster, and Including New Modern Features | network, status, controls, operation | 27 |
Since 2012, an effort has been under way to replace the ageing electrical supervision system (managing more than 200,000 tags) currently in operation with a WinCC OA-based supervision system, in order to unify the monitoring systems used by CERN operators and to leverage in-house knowledge and development of the products (JCOP, UNICOS, etc.). Along with the classical functionalities of a typical SCADA system (alarms, events, trending, archiving, access control, etc.), the supervision of the CERN electrical network requires a set of domain-specific applications gathered under the name of EMS (Energy Management System). Such applications include network coloring, state estimation, power flow calculations, contingency analysis, optimal power flow, etc. Additionally, as electrical power is a critical service for CERN, high availability of its infrastructure, including its supervision system, is required. The supervision system is therefore redundant, along with a disaster recovery system which is itself redundant. In this paper, we present the overall architecture of the future supervision system, with an emphasis on the parts specific to the supervision of the electrical network. | |||
Slides MOCOBAB01 [1.414 MB] | ||
MOCOBAB06 | Integrated Monitoring and Control Specification Environment | controls, target, EPICS, interface | 47 |
Monitoring and control solutions for large one-off systems are typically built in silos using multiple tools and technologies. Functionality such as data processing logic, alarm handling, UIs and device drivers is implemented by manually writing configuration code in isolation, and the cross-dependencies are maintained by hand. The correctness of the created specification is checked using manually written test cases. Non-functional requirements – such as reliability, performance, availability, reusability and so on – are addressed in an ad hoc manner. This hinders the evolution of systems with long lifetimes. For ITER, we developed an integrated specification environment and a set of tools to generate configurations for target execution platforms, along with the glue required to realize the entire M&C solution. The SKA is an opportunity to enhance this framework further to include checking of functional and engineering properties of the solution based on domain best practices. The framework includes three levels: domain-specific, problem-specific and target-technology-specific. We discuss how this approach can address three major facets of complexity: scale, diversity and evolution. | |||
MOMIB08 | Continuous Integration Using LabVIEW, SVN and Hudson | LabView, software, Linux, interface | 74 |
In the accelerator domain there is a need to integrate industrial devices and to create control and monitoring applications in an easy yet structured way. The LabVIEW-RADE framework provides the method and tools to implement these requirements, and also provides the essential integration of these applications into the CERN controls infrastructure. Building and distributing these core libraries for multiple platforms, e.g. Windows, Linux and Mac, and for different versions of LabVIEW, is a time-consuming task that consists of repetitive and cumbersome work. All libraries have to be tested, commissioned and validated. Preparing one package for each variation used to take almost a week to complete. With the introduction of Subversion version control (SVN) and the Hudson continuous integration server (HCI), the process is now fully automated and a new distribution for all platforms is available within the hour. In this paper we evaluate the pros and cons of using continuous integration, the time it took to get up and running, and the added benefits. We conclude with an evaluation of the framework and indicate new areas of improvement and extension. | |||
Slides MOMIB08 [2.990 MB] | ||
Poster MOMIB08 [6.363 MB] | ||
MOMIB09 | ZIO: The Ultimate Linux I/O Framework | controls, Linux, interface, software | 77 |
ZIO (with Z standing for "The Ultimate I/O" Framework) was developed for CERN with the specific needs of physics labs in mind, which are poorly addressed in the mainstream Linux kernel. ZIO provides a framework for industrial, high-throughput, high-channel-count I/O device drivers (digitizers, function generators, timing devices like TDCs) with performance, generality and scalability as design goals. Among its many features, it offers abstractions for input and output channels and channel sets; configurable trigger types; configurable buffer types; an interface via sysfs attributes and control and data device nodes; and a socket interface (PFZIO), which provides enormous flexibility and power for remote control. In this paper, we discuss the design and implementation of ZIO, and describe representative cases of driver development for typical and exotic applications (FMC ADC 100Msps digitizer, FMC TDC timestamp counter, FMC DEL fine delay). | |||
Slides MOMIB09 [0.818 MB] | ||
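ZIO exposes configuration through sysfs attribute files, one value per file, as is conventional for Linux kernel frameworks. As a minimal illustration of that interaction pattern (the directory layout and attribute names below are invented, not ZIO's actual ones), a user-space helper might look like this:

```python
from pathlib import Path

# Sketch only: sysfs-style attributes are small text files, one value each,
# newline-terminated. The paths and attribute names here are hypothetical.

def read_attr(base, name):
    """Read a sysfs-style attribute file and strip the trailing newline."""
    return Path(base, name).read_text().strip()

def write_attr(base, name, value):
    """Write a sysfs-style attribute; the kernel side parses the string."""
    Path(base, name).write_text(f"{value}\n")
```

In a real ZIO setup `base` would be a directory under `/sys` belonging to the device, channel set or channel being configured.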
MOPPC023 | Centralized Data Engineering for the Monitoring of the CERN Electrical Network | interface, database, network, controls | 107 |
The monitoring and control of the CERN electrical network involves a large variety of devices and software: it ranges from acquisition devices to data concentrators, supervision systems and power network simulation tools. The main issue faced nowadays in the engineering of such a large and heterogeneous system, comprising more than 20,000 devices and 200,000 tags, is that every device and software package has its own data engineering tool, while much of the configuration data has to be shared between two or more devices: the same data needs to be entered manually into the different tools, leading to duplication of effort and many inconsistencies. This paper presents a tool called ENSDM, which aims at centralizing all the data needed to engineer the monitoring and control infrastructure into a single database, from which the configuration of the various devices is extracted automatically. Such an approach allows the user to enter the information only once and guarantees the consistency of the data across the entire system. The paper will focus more specifically on the configuration of the remote terminal unit (RTU) devices, the global supervision system (SCADA) and the power network simulation tools. | |||
Poster MOPPC023 [1.253 MB] | ||
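The "enter once, extract for every tool" idea behind ENSDM can be sketched as a single central record per tag from which per-tool configurations are derived. The field names and target formats below are invented for the example; the real ENSDM schema is not published in the abstract.

```python
# One central definition per tag (hypothetical fields), from which each
# target system extracts only the view it needs.

TAGS = [
    {"name": "EMD1.voltage", "address": 40001, "unit": "kV", "archived": True},
    {"name": "EMD1.breaker", "address": 40002, "unit": "",   "archived": False},
]

def rtu_config(tags):
    """The RTU layer only needs tag name -> register address."""
    return {t["name"]: t["address"] for t in tags}

def scada_config(tags):
    """The SCADA layer needs units and archiving flags, not addresses."""
    return [{"tag": t["name"], "unit": t["unit"], "archive": t["archived"]}
            for t in tags]
```

Because both views are generated from `TAGS`, a change made once in the central store propagates consistently to every target, which is the consistency guarantee the paper describes.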
MOPPC024 | An Event Driven Communication Protocol for Process Control: Performance Evaluation and Redundant Capabilities | PLC, controls, status, Windows | 111 |
The CERN Unified Industrial Control System framework (UNICOS) with its Continuous Control Package (UNICOS CPC) is the CERN standard solution for the design and implementation of continuous industrial process control applications. The in-house designed communication mechanism, based on the Time Stamp Push Protocol (TSPP), provides event-driven, high-performance data communication between the control and supervision layers of a UNICOS CPC application. With its recent implementation of full redundant capabilities for both the control and supervision layers, the TSPP has reached maturity. This paper presents the design of the redundancy, the architecture and the current implementation, as well as a comprehensive evaluation of its performance for SIEMENS PLCs in different test scenarios. | |||
Poster MOPPC024 [7.161 MB] | ||
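The core idea of a time-stamp push protocol, as opposed to cyclic polling, is that the control layer pushes only values that changed, each stamped at the source. A minimal sketch of that behaviour (class, method and callback names are our own, not CERN's TSPP implementation):

```python
import time

class TimeStampPush:
    """Push (tag, value, timestamp) events only on change - a sketch of the
    event-driven idea, not the real TSPP wire protocol."""

    def __init__(self, send):
        self._last = {}    # last value pushed per tag
        self._send = send  # transport callback: (tag, value, timestamp)

    def update(self, tag, value, ts=None):
        """Called on every acquisition cycle; returns True if an event was sent."""
        if tag in self._last and self._last[tag] == value:
            return False                       # unchanged: nothing pushed
        self._last[tag] = value
        self._send(tag, value, ts if ts is not None else time.time())
        return True
```

With thousands of mostly static tags, suppressing unchanged values is what gives the event-driven approach its performance advantage over cyclic transfer.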
MOPPC031 | IEPLC Framework, Automated Communication in a Heterogeneous Control System Environment | controls, PLC, software, hardware | 139 |
Programmable Logic Controllers (PLCs), PXI systems and other micro-controller families are essential components of the CERN control system. They typically present custom communication interfaces which make their federation a difficult task. Dependency on specific protocols makes code non-reusable and the replacement of old technology a tedious problem. IEPLC proposes a uniform and hardware-independent communication schema. It automatically generates all the resources needed on the master and slave sides to implement a common and generic Ethernet communication. The framework consists of a set of tools, scripts and a C++ library. The Java configuration tool allows the description and instantiation of the data to be exchanged with the controllers. The Python scripts generate the resources necessary for the final communication, while the C++ library allows sending and receiving data at run-time from the master process. This paper describes the product, focusing on its main objectives: the definition of a clear and standard communication interface, and the reduction of users' development and configuration time. | |||
Poster MOPPC031 [2.509 MB] | ||
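IEPLC's central idea is to describe the exchanged data once and derive the byte-level encoding for both master and controller from that single description. A sketch of that pattern (the field table, wire format and names are illustrative assumptions, not IEPLC's actual generated code):

```python
import struct

# Single description of the exchanged data: both encode() and decode() are
# derived from it, so master and slave can never disagree on the layout.
LAYOUT = [("status", "H"), ("setpoint", "f"), ("counter", "I")]
FMT = ">" + "".join(code for _, code in LAYOUT)   # big-endian, no padding

def encode(values):
    """Serialize a dict of named values into the shared wire format."""
    return struct.pack(FMT, *(values[name] for name, _ in LAYOUT))

def decode(payload):
    """Deserialize a payload back into named values."""
    return dict(zip((name for name, _ in LAYOUT), struct.unpack(FMT, payload)))
```

In IEPLC the equivalent of `LAYOUT` comes from the Java configuration tool, and the Python scripts generate the per-controller resources; this sketch only shows why a single shared description removes the protocol-specific duplication.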
MOPPC033 | Opening the Floor to PLCs and IPCs: CODESYS in UNICOS | controls, PLC, software, hardware | 147 |
This paper presents the integration of a third industrial platform for process control applications with the UNICOS (Unified Industrial Control System) framework at CERN. The UNICOS framework is widely used in many process control domains (e.g. cryogenics, cooling, ventilation, vacuum) to produce highly structured, standardised control applications for the two CERN-approved industrial PLC product families, Siemens and Schneider. The CoDeSys platform, developed by 3S (Smart Software Solutions), provides a vendor-independent IEC 61131-3 programming environment for industrial controllers. The complete CoDeSys-based development includes: (1) a dedicated Java™ module plugged into the automatic code generation tool UAB (UNICOS Application Builder), (2) the associated UNICOS baseline library for industrial PLCs and IPCs (Industrial PCs), compliant with CoDeSys v3, and (3) the Python-based templates to deploy device instances and control logic. The availability of this development opens the UNICOS framework to a wider community of industrial PLC manufacturers (e.g. ABB, WAGO) and, as the CoDeSys control runtime works on standard operating systems (Linux, Windows 7), UNICOS could be deployed to any IPC. | |||
Poster MOPPC033 [4.915 MB] | ||
MOPPC086 | Manage the MAX IV Laboratory Control System as an Open Source Project | controls, software, TANGO, GUI | 299 |
Free Open Source Software (FOSS) is now deployed and used in most large facilities. It brings qualities that can compete with proprietary software, like robustness, reliability and functionality. Arguably the most important quality, the one that marks the DNA of FOSS, is transparency. This is the fundamental difference compared to its closed competitors, and it has a direct impact on how projects are managed. As users, reporters and contributors are more than welcome, the project management has to have a clear strategy to promote exchange and to sustain a community. Control system teams have the chance to work in the same arena as their users and, even better, some of the users have programming skills. Rather than a fortress strategy, an open strategy may take advantage of this situation to enhance the user experience. In this paper we explain the position of the MAX IV KITS team: how a "Tango install party" and "coding dojo" have been used to promote contribution to the control system software, and how our projects are structured in terms of process and tools (SARDANA, GIT… ) to make them more accessible for in-house collaboration as well as for other facilities or even subcontractors. | |||
Poster MOPPC086 [7.230 MB] | ||
MOPPC087 | Tools and Rules to Encourage Quality for C/C++ Software | software, monitoring, controls, diagnostics | 303 |
Inspired by the success of the software improvement process for Java projects, in place for several years in the CERN accelerator controls group, it was agreed in 2011 to apply the same principles to the C/C++ software developed in the group, an initiative we call the Software Improvement Process for C/C++ software (SIP4C/C++). The objectives of the SIP4C/C++ initiative are to: 1) agree on and establish best software quality practices, 2) choose tools for quality and 3) integrate these tools into the build process. After a year we have reached a number of concrete results, thanks to the collaboration between the projects involved, including: a common build tool (based on GNU Make), which standardizes the way to build, test and release C/C++ binaries; unit testing with Google Test & Google Mock; continuous integration of C/C++ products with the existing CI server (Atlassian Bamboo); static code analysis (Coverity); generation of a manifest file with dependency information; and runtime in-process metrics. This work presents the SIP4C/C++ initiative in more detail, summarizing our experience and future plans. | |||
Poster MOPPC087 [3.062 MB] | ||
MOPPC092 | Commissioning the MedAustron Accelerator with ProShell | controls, interface, ion, timing | 314 |
MedAustron is a synchrotron-based centre for light ion therapy under construction in Austria. The accelerator and its control system entered the on-site commissioning phase in January 2013. This contribution presents the current status of accelerator operation and of the commissioning procedure framework called ProShell. It is used to model measurement procedures for commissioning and operation with Petri nets. Beam diagnostics device adapters are implemented in C#. To illustrate its use for beam commissioning, procedures currently in use are presented, including their integration with existing devices such as the ion source, power converters, slits, wire scanners and profile grid monitors. The beam spectrum procedure measures the distribution of particles generated by the ion source. The phase space distribution procedure performs emittance measurements in the beam transfer lines. The trajectory steering procedure measures the beam position in each part of the machine and aids in correcting the beam positions by integrating MAD-X optics calculations. Additional procedures and (beam diagnostic) devices are defined, implemented and integrated with ProShell on demand as commissioning progresses. | |||
Poster MOPPC092 [2.896 MB] | ||
MOPPC095 | PETAL Control System Status Report | controls, laser, software, hardware | 321 |
Funding: CEA / Région Aquitaine / ILP / Europe / HYPER The PETAL laser facility is a high-energy multi-petawatt laser being installed in the Laser MegaJoule (LMJ) building. PETAL is designed to produce a laser beam of 3 kilojoules of energy with a duration of 0.5 picoseconds. Autonomous commissioning began in 2013. In the long term, PETAL's control system is to be integrated into the LMJ's control system for coupling with its 192 nanosecond laser beams. The presentation gives an overview of the general control system architecture, and focuses on the use of the TANGO framework in some of the subsystem software. It then explains the steps planned to develop the control system from the first laser shots in autonomous exploitation to the merger into the LMJ facility. | |||
Poster MOPPC095 [1.891 MB] | ||
MOPPC126 | !CHAOS: the "Control Server" Framework for Controls | controls, distributed, software, database | 403 |
We report on the progress of !CHAOS*, a framework for the development of control and data acquisition services for particle accelerators and large experimental apparatuses. !CHAOS introduces to the world of controls a new approach for designing and implementing communications and data distribution among components, and for providing the middle-layer services of a control system. Based on software technologies borrowed from high-performance Internet services, !CHAOS offers, through a centralized yet highly scalable, cloud-like approach, all the services needed for controlling and managing a large infrastructure. It includes a number of innovative features such as high abstraction of services, devices and data; easy and modular customization; extensive data caching for enhancing performance; and integration of all services in a common framework. Since the !CHAOS conceptual design was presented two years ago, the INFN group has been working on the implementation of the services and components of the software framework. Most of them have been completed and tested for performance and reliability. Some services are already installed and operational in experimental facilities at LNF.
* "Introducing a new paradigm for accelerators and large experimental apparatus control systems", L. Catani et al., Phys. Rev. ST Accel. Beams, http://prst-ab.aps.org/abstract/PRSTAB/v15/i11/e112804 | |||
Poster MOPPC126 [0.874 MB] | ||
MOPPC128 | Real-Time Process Control on Multi-Core Processors | controls, real-time, operation, software | 407 |
Real-time control is essential for low-level RF and timing systems to ensure beam stability in accelerator operation. It is difficult to optimize the priority control of multiple processes of real-time class and time-sharing class on a single-core processor. For example, we cannot log into the operating system if a real-time-class process occupies the resources of a single-core processor. Recently, multi-core processors have been utilized for equipment control. We studied the control of multiple processes running on multi-core processors. After several tunings, we confirmed that the operating system could run stably under heavy load on multi-core processors. This makes it possible to achieve the real-time control with millisecond-order response required by fast control systems such as an event-synchronized data acquisition system. Additionally, we measured the response performance between client and server processes using the MADOCA II framework, the next generation of MADOCA. In this paper we present the tunings for real-time process control on multi-core processors and the performance results of MADOCA II. | |||
Poster MOPPC128 [0.450 MB] | ||
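The tunings discussed above revolve around keeping a time-critical loop on a dedicated core so the rest of the operating system stays responsive. A small, self-contained sketch of one such measurement (pinning the process to one core on Linux and measuring the worst overshoot of a periodic deadline; raising the scheduling class to SCHED_FIFO additionally requires privileges and is left out):

```python
import os
import time

def measure_jitter(period_s=0.001, cycles=50, cpu=0):
    """Pin this process to one CPU (where supported), run a periodic busy-wait
    loop, and return the worst deadline overshoot in seconds."""
    try:
        os.sched_setaffinity(0, {cpu})       # Linux-only; no-op elsewhere
    except (AttributeError, OSError):
        pass
    deadline = time.monotonic()
    worst = 0.0
    for _ in range(cycles):
        deadline += period_s
        while time.monotonic() < deadline:   # busy-wait up to the deadline
            pass
        worst = max(worst, time.monotonic() - deadline)
    return worst
```

On a loaded single core this overshoot grows sharply; the paper's point is that with per-core placement (and a real-time scheduling class) millisecond-order response remains achievable.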
MOPPC129 | MADOCA II Interface for LabVIEW | LabView, controls, interface, Windows | 410 |
LabVIEW is widely used for experimental station control at SPring-8. LabVIEW is also partially used for accelerator control, while most software for SPring-8 accelerator and beamline control is built on the MADOCA control framework. As synchrotron radiation experiments advance, there is a requirement for complex data exchange between the MADOCA and LabVIEW control systems which had not been realized. We have developed the next generation of MADOCA, called MADOCA II, as reported at this ICALEPCS (T. Matsumoto et al.). We ported the MADOCA II framework to Windows and developed a MADOCA II interface for LabVIEW. Using the interface, variable-length data can be exchanged between MADOCA and LabVIEW-based software. As a first application, we developed a readout system for an electron beam position monitor with NI PCI-5922 digitizers. A client sends a message to a remote LabVIEW-based digitizer readout program via the MADOCA II middleware, and the readout system sends waveform data back to the client. We plan to apply the interface to various accelerator and synchrotron radiation experiment controls. | |||
MOPPC137 | IEC 61850 Industrial Communication Standards under Test | controls, network, Ethernet, software | 427 |
IEC 61850, developed by Technical Committee 57 of the International Electrotechnical Commission, defines an international and standardized methodology to design electric power automation substations. It specifies a common way of communicating and integrating heterogeneous systems based on multivendor intelligent electronic devices (IEDs). These are connected to an Ethernet network and, according to IEC 61850, their abstract data models have been mapped to specific communication protocols: MMS, GOOSE, SV and, possibly in the future, Web Services. All of them can run over TCP/IP networks, so they can easily be integrated with Enterprise Resource Planning networks; while this integration provides economic and functional benefits for the companies, it also exposes the industrial infrastructure to existing cyber-attacks. Within the Openlab collaboration between CERN and Siemens, a test-bench has been developed specifically to evaluate the robustness of industrial equipment (TRoIE). This paper describes the design and implementation of the testing framework, focusing on the implementations of the IEC 61850 protocols mentioned above. | |||
Poster MOPPC137 [1.673 MB] | ||
MOPPC138 | Continuous Integration for Automated Code Generation Tools | controls, software, PLC, target | 431 |
The UNICOS* (UNified Industrial COntrol System) framework was created back in 1998 as a solution for building object-based, industry-like control systems. The Continuous Process Control package (CPC**) is a UNICOS component that provides a methodology and a set of tools to design and implement industrial control applications. UAB** (UNICOS Application Builder) is the software factory used to develop UNICOS-CPC applications. The constant evolution of the CPC component brought the necessity of creating a new tool to validate the generated applications and to verify that modifications introduced in the software tools do not create any undesirable effect on the existing control applications. The uab-maven-plugin is a plug-in for the Apache Maven build manager that can be used to trigger the generation of the CPC applications and verify the consistency of the generated code. This plug-in can be integrated in continuous integration tools - like Hudson or Jenkins - to create jobs for constant monitoring of changes in the software, which will trigger a new generation of all the applications kept in the source code management system.
* "UNICOS a framework to build industry like control systems: Principles & Methodology". ** "UNICOS CPC6: Automated code generation for process control applications". | |||
Poster MOPPC138 [4.420 MB] | ||
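One way to express the consistency check the plug-in performs is to regenerate the applications and compare the result, file by file, against a recorded baseline. This standalone sketch shows that idea with content hashes; the function names and the `.scl` extension are our own, not the uab-maven-plugin's actual interface.

```python
import hashlib
from pathlib import Path

def tree_hashes(root):
    """Map each file under root (relative path) to the SHA-256 of its content."""
    return {str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
            for p in sorted(Path(root).rglob("*")) if p.is_file()}

def changed_files(baseline, regenerated):
    """Files whose content differs, plus files added or removed."""
    keys = set(baseline) | set(regenerated)
    return sorted(k for k in keys if baseline.get(k) != regenerated.get(k))
```

A CI job built on such a check fails (or reports) whenever a change in the generation tools alters the output of any existing application, which is exactly the regression the paper wants to catch.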
MOPPC139 | A Framework for Off-line Verification of Beam Instrumentation Systems at CERN | database, software, instrumentation, interface | 435 |
Many beam instrumentation systems require checks to confirm their beam readiness, to detect any deterioration in performance and to identify physical problems or anomalies. Such tests have already been developed for several LHC instruments using the LHC sequencer, but the scope of this framework doesn't extend to all systems; it is notably absent from the pre-LHC injector chain. Furthermore, the operator-centric nature of the LHC sequencer means that sequencer tasks aren't accessible to the hardware and software experts who are required to execute similar tests on a regular basis. As a consequence, ad-hoc solutions involving code sharing and, in extreme cases, code duplication have evolved to satisfy the various use-cases. In terms of long-term maintenance this is undesirable, due to the often short-term nature of developers at CERN alongside the importance of the uninterrupted stability of CERN's accelerators. This paper will outline the first results of an investigation into the existing analysis software, and provide proposals for the future of such software. | |||
MOPPC142 | Groovy as Domain-specific Language (DSL) in Software Interlock System | DSL, software, Domain-Specific-Languages, controls | 443 |
The SIS, in operation for over seven years, is a mission-critical component of the CERN accelerator control system, covering areas from general machine protection to diagnostics. The growing number of instances and the size of the existing installations have increased both the complexity and the maintenance cost of running the SIS infrastructure. The domain experts have also considered the XML and Java mixture used for configuration difficult, and suitable only for software engineers. To address these issues, new ways of configuring the system have been investigated, aiming at simplifying the process by making it faster, more user-friendly and adapted to a wider audience. Among the existing DSL choices (fluent Java APIs, external/internal DSLs), the Groovy scripting language has been considered particularly well suited for writing a custom DSL, due to its built-in language features: Java compatibility, native syntax constructs, command chain expressions, hierarchical structures with builders, closures and AST transformations. This paper explains best practices and lessons learned while building the accelerator domain-oriented DSL for the configuration of the interlock system. | |||
Poster MOPPC142 [0.510 MB] | ||
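The paper's DSL is written in Groovy; to keep a single language across these notes, the same flavour of internal DSL (chainable configuration calls reading close to domain language) is sketched here in Python. The rule, signal and action names are invented for illustration.

```python
class Rule:
    """Tiny internal-DSL sketch for an interlock rule: configure with
    chainable calls, then evaluate against incoming values."""

    def __init__(self, name):
        self.name = name
        self.source = None
        self.condition = None
        self.action = None

    def when(self, source, condition):
        self.source, self.condition = source, condition
        return self                      # return self to allow chaining

    def then(self, action):
        self.action = action
        return self

    def evaluate(self, value):
        """Return the configured action if the condition fires, else None."""
        return self.action if self.condition(value) else None

# Reads close to the domain language, which is the point of a DSL:
rule = (Rule("beam-loss-high")
        .when("BLM.sector45", lambda v: v > 0.8)
        .then("DUMP_BEAM"))
```

Groovy's command chain expressions and builders allow an even more prose-like syntax (dropping parentheses and dots), which is why the authors chose it over a fluent Java API.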
MOPPC143 | Plug-in Based Analysis Framework for LHC Post-Mortem Analysis | controls, operation, injection, software | 446 |
Plug-in based software architectures are extensible, enforce modularity and allow several teams to work in parallel. But they bring certain technical and organizational challenges, which we discuss in this paper. We gained our experience when developing the Post-Mortem Analysis (PMA) system, which is a mission-critical system for the Large Hadron Collider (LHC). We used a plug-in based architecture with a general-purpose analysis engine, for which physicists and equipment experts code plug-ins containing the analysis algorithms. We have over 45 analysis plug-ins developed by a dozen domain experts. This paper focuses on the design challenges we faced in order to mitigate the risks of executing third-party code: assurance that even a badly written plug-in doesn't perturb the work of the overall application; plug-in execution control, which allows plug-in misbehavior to be detected and reacted to; a robust communication mechanism between plug-ins; facilitation of diagnostics in case of plug-in failure; testing of the plug-ins before integration into the application; etc.
https://espace.cern.ch/be-dep/CO/DA/Services/Post-Mortem%20Analysis.aspx | |||
Poster MOPPC143 [3.128 MB] | ||
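The first risk listed above - a badly written plug-in must not perturb the overall application - reduces, at its simplest, to the engine catching and recording each plug-in's failures instead of letting them propagate. A minimal sketch (the callable-per-plug-in protocol is our simplification; the real PMA engine also controls execution time and inter-plug-in communication):

```python
def run_analysis(plugins, event):
    """Run every plug-in on the event; a failing plug-in is reported in
    `failures` rather than aborting the whole analysis run."""
    results, failures = {}, {}
    for name, plugin in plugins.items():
        try:
            results[name] = plugin(event)
        except Exception as exc:   # third-party code: isolate, don't crash
            failures[name] = repr(exc)
    return results, failures
```

Full isolation also needs timeouts and memory limits (a plug-in can hang rather than raise), which is where the paper's "plug-in execution control" goes beyond this sketch.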
MOPPC145 | Mass-Accessible Controls Data for Web Consumers | controls, network, operation, status | 449 |
The past few years in computing have seen the emergence of smart mobile devices, sporting multi-core embedded processors, powerful graphical processing units and pervasive high-speed network connections (over WiFi or EDGE/UMTS). The relatively limited capacity of these devices requires relying on dedicated embedded operating systems (such as Android or iOS), while their diverse form factors (from mobile phone screens to large tablet screens) require the adoption of programming techniques and technologies that are both resource-efficient and standards-based, for better platform independence. We consider the options available for hybrid desktop/mobile web development today, from native software development kits (Android, iOS) to platform-independent solutions (mobile Google Web Toolkit, jQuery Mobile, Apache Cordova, OpenSocial). Drawing on the authors' successive attempts at implementing a range of solutions for LHC-related data broadcasting - from data acquisition systems and LHC middleware such as DIP and CMW, on to the World Wide Web - we investigate which choices are valid and what pitfalls to avoid in today's web development landscape. | |||
Poster MOPPC145 [1.318 MB] | ||
MOPPC150 | Channel Access in Erlang | EPICS, controls, network, detector | 462 |
We have developed an Erlang language implementation of the Channel Access protocol. Included are low-level functions for encoding and decoding Channel Access protocol network packets as well as higher level functions for monitoring or setting EPICS Process Variables. This provides access to EPICS process variables for the Fermilab Acnet control system via our Erlang-based front-end architecture without having to interface to C/C++ programs and libraries. Erlang is a functional programming language originally developed for real-time telecommunications applications. Its network programming features and list management functions make it particularly well-suited for the task of managing multiple Channel Access circuits and PV monitors. | |||
Poster MOPPC150 [0.268 MB] | ||
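The low-level encoding and decoding functions mentioned above operate on Channel Access's fixed 16-byte message header: command, payload size, data type, data count and two 32-bit parameters, all big-endian (large payloads use an extended header, omitted here). A Python equivalent of such a header codec, shown here only to illustrate what the Erlang functions decode:

```python
import struct

# Standard CA message header: four 16-bit fields followed by two 32-bit
# parameters, network (big-endian) byte order, 16 bytes in total.
CA_HEADER = struct.Struct(">HHHHII")

def decode_header(buf):
    """Decode the leading 16-byte Channel Access header of a message."""
    command, payload_size, data_type, data_count, p1, p2 = \
        CA_HEADER.unpack(buf[:16])
    return {"command": command, "payload_size": payload_size,
            "data_type": data_type, "data_count": data_count,
            "param1": p1, "param2": p2}
```

Erlang's binary pattern matching expresses the same decode in a single clause, which is part of why the language suits this protocol work so well.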
MOPPC157 | Application of Transparent Proxy Servers in Control Systems | controls, operation, collider, target | 475 |
Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. Proxy servers (proxies) have been a staple of the World Wide Web infrastructure since its humble beginning. They provide a number of valuable functional services, like access control, caching or logging. Historically, control systems have had little need for fully proxied systems, as direct, unimpeded resource access is almost always preferable. This still holds true today; however, unbounded direct asset access can lead to performance issues, especially on older, underpowered systems. This paper describes an implementation of a fully transparent proxy server used to moderate asynchronous data flow between selected front-end computers (FECs) and their clients, as well as the infrastructure changes required to accommodate this new platform. Finally, it ventures into the future by examining additional untapped benefits of proxied control systems, like write-through caching and runtime read-write modifications. | |||
Poster MOPPC157 [1.873 MB] | ||
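The moderation idea above - many clients, one underpowered FEC - can be sketched as a proxy that answers repeated reads from a short-lived cache so the backend sees only one request per parameter per time window. The class, TTL and callback interface below are illustrative assumptions, not the actual BNL implementation.

```python
import time

class CachingProxy:
    """Serve repeated reads from a short-lived cache, shielding the backend
    (the FEC) from bursts of identical client requests."""

    def __init__(self, backend, ttl=0.5):
        self._backend = backend   # callable: parameter -> value
        self._ttl = ttl           # cache lifetime in seconds
        self._cache = {}          # parameter -> (value, fetch_time)

    def get(self, parameter):
        hit = self._cache.get(parameter)
        if hit is not None and time.monotonic() - hit[1] < self._ttl:
            return hit[0]                       # cache hit: FEC not touched
        value = self._backend(parameter)        # cache miss: one FEC access
        self._cache[parameter] = (value, time.monotonic())
        return value
```

Transparency means clients keep their existing addressing and see no API change; only the network path is altered, which is what makes such a proxy deployable without touching client code.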
MOPPC158 | Application of Modern Programming Techniques in Existing Control System Software | controls, injection, software, operation | 479 |
Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The Accelerator Device Object (ADO) specification and its original implementation are almost 20 years old. In those two decades, ADO development methodology has changed very little, which is a testament to its robust design; however, during this time frame we've seen the introduction of many new technologies and ideas, many of them with applicable and tangible benefits for control system software. This paper describes how some of these concepts, such as convention over configuration and the aspect-oriented programming (AOP) paradigm, coupled with powerful techniques like bytecode generation and manipulation tools, can greatly simplify both server- and client-side development by allowing developers to concentrate on the core implementation details without polluting their code with: 1) synchronization blocks, 2) supplementary validation, 3) asynchronous communication calls or 4) redundant bootstrapping. In addition to streamlining existing fundamental development methods, we introduce additional concepts, many of which are found outside the majority of control systems. These include 1) ACID transactions, 2) client- and server-side dependency injection and 3) declarative event handling. | |||
Poster MOPPC158 [2.483 MB] | ||
TUCOBAB02 | The Mantid Project: Notes from an International Software Collaboration | software, interface, neutron, distributed | 502 |
Funding: This project is a collaboration between SNS, ORNL and ISIS, RAL, with expertise supplied by Tessella. These facilities are in turn funded by the US DoE and the UK STFC. The Mantid project was started by ISIS in 2007 to provide a framework to perform data reduction and analysis for neutron and muon data. The SNS and HFIR joined the Mantid project in 2009, adding event processing and other capabilities to the Mantid framework. The Mantid software now supports the data reduction needs of most of the instruments at ISIS and the SNS and some at HFIR, and is being evaluated by other facilities. The scope of the data reduction and analysis challenges, together with the need to create a cross-platform solution, fuels the need for Mantid to be developed in collaboration between facilities. Mantid has from its inception been an open source project, built to be flexible enough to be instrument- and technique-independent, and planned from the start to support collaboration with other development teams. Through the collaboration with the SNS, development practices and tools have been further developed to support the distributed development team in this challenge. This talk will describe the building and structure of the collaboration, the stumbling blocks we have overcome, and the great strides we have made in building a solid collaboration between these facilities. Mantid project website: www.mantidproject.org ISIS: http://www.isis.stfc.ac.uk/ SNS & HFIR: http://neutrons.ornl.gov/ | |||
Slides TUCOBAB02 [1.280 MB] | ||
TUMIB07 | RASHPA: A Data Acquisition Framework for 2D X-Ray Detectors | detector, hardware, FPGA, software | 536 |
|
|||
Funding: Cluster of Research Infrastructures for Synergies in Physics (CRISP) co-funded by the partners and the European Commission under the 7th Framework Programme Grant Agreement 283745. ESRF research programs, along with the foreseen upgrade of the accelerator sources, require state-of-the-art instrumentation devices with high-data-flow acquisition systems. This paper presents RASHPA, a data acquisition framework targeting 2D X-ray detectors. By combining a highly configurable, multi-link PCI Express over cable based data transmission engine with a carefully designed Linux software stack, RASHPA aims at reaching the performance required by current and future detectors. |
|||
Slides TUMIB07 [0.168 MB] | ||
TUMIB08 | ITER Contribution to Control System Studio (CSS) Development Effort | controls, EPICS, interface, distributed | 540 |
|
|||
In 2010, Control System Studio (CSS) was chosen for CODAC - the central control system of ITER - as the development and runtime integrated environment for local control systems. It quickly became necessary to contribute to the CSS development effort - after all, the CODAC team wants to be sure that the tools being used by the seven ITER members all over the world continue to be available and improved. In order to integrate the main CSS components into its framework, the CODAC team first needed to adapt them to its standard platform, based on 64-bit Linux and the PostgreSQL database. Then user feedback started to emerge, as well as the need for an industrial symbol library to represent pump, valve or electrical breaker states on the operator interface, and the requirement to automatically send an email when a new alarm is raised. It also soon became important for the CODAC team to be able to publish its contributions quickly and to adapt its own infrastructure for that. This paper describes ITER's increasing contribution to the CSS development effort and the future plans to address factory and site acceptance tests of the local control systems. | |||
Slides TUMIB08 [2.970 MB] | ||
Poster TUMIB08 [0.959 MB] | ||
TUPPC003 | SDD Toolkit: ITER CODAC Platform for Configuration and Development | EPICS, toolkit, database, controls | 550 |
|
|||
ITER will consist of roughly 200 plant system I&Cs (millions of variables in total), delivered in kind, which need to be integrated into the ITER control infrastructure. To integrate them in a smooth way, the CODAC team releases the Core Software environment, which consists of many applications, every year. This paper focuses on the implementation of the self-description data (SDD) toolkit, a fully home-made ITER product. The SDD model has been designed with Hibernate/Spring to provide the information required to generate configuration files for CODAC services such as archiving, EPICS, alarms, SDN, basic HMIs, etc. Users enter their configuration data via GUIs based on web applications and Eclipse. Snapshots of I&C projects can be dumped to XML. Different levels of validation, corresponding to the various stages of development, have been implemented: during integration, this enables verification that I&C projects are compliant with our standards. The development of I&C projects continues with Maven utilities. In 2012, a new Eclipse perspective was developed to allow users to develop code, start their projects, develop new HMIs, retrofit their data into the SDD database, and check out/commit from/to SVN. | |||
Poster TUPPC003 [1.293 MB] | ||
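The core mechanism described above — generating service configuration files from a self-description model — can be illustrated with a minimal sketch. The schema and record syntax below are invented for illustration (an EPICS-flavoured output); they are not the actual SDD model:

```python
# Hypothetical self-description model: a list of process variables.
MODEL = {
    "variables": [
        {"name": "PS1:Current", "type": "ai", "unit": "A"},
        {"name": "PS1:Voltage", "type": "ai", "unit": "V"},
    ]
}

def generate_db(model):
    """Emit an EPICS-style record file from the model (sketch only)."""
    lines = []
    for var in model["variables"]:
        lines.append('record(%s, "%s") {' % (var["type"], var["name"]))
        lines.append('    field(EGU, "%s")' % var["unit"])
        lines.append('}')
    return "\n".join(lines)

print(generate_db(MODEL))
```

The real toolkit generates configurations for several services (archiving, alarms, SDN, HMIs) from one model; the point of the sketch is only that each target format is a projection of the same self-description data.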
TUPPC011 | Development of an Innovative Storage Manager for a Distributed Control System | controls, distributed, software, operation | 570 |
|
|||
The !CHAOS(*) framework will provide all the services needed for controlling and managing a large scientific infrastructure, including a number of innovative features such as abstraction of services, devices and data, easy and modular customization, extensive data caching for performance boost, and integration of all functionalities in a common framework. One of the most relevant innovations in !CHAOS resides in the History Data Service (HDS), for the continuous acquisition of operating data pushed by device controllers. The core component of the HDS is the History Engine (HST). It implements the abstraction layer for the underlying storage technology and the logic for indexing and querying data. The HST drivers are designed to provide specific HDS tasks such as indexing, caching and storing, and to wrap the chosen third-party database API with !CHAOS standard calls. Indeed, the HST allows the data flows of the different !CHAOS services to be routed to independent channels in order to improve the global efficiency of the whole data acquisition system.
* - http://chaos.infn.it * - https://chaosframework.atlassian.net/wiki/display/DOC/General+View * - http://prst-ab.aps.org/abstract/PRSTAB/v15/i11/e112804 |
|||
Poster TUPPC011 [6.729 MB] | ||
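The driver abstraction described above — a common storage interface behind which concrete back ends are wrapped, with each data flow routed to its own channel — can be sketched as follows. All class names are hypothetical and do not reflect the actual !CHAOS API:

```python
class StorageDriver:
    """Abstraction layer over the underlying storage technology."""
    def store(self, key, record):
        raise NotImplementedError
    def query(self, key):
        raise NotImplementedError

class InMemoryDriver(StorageDriver):
    """Stand-in for a third-party database wrapped behind the driver API."""
    def __init__(self):
        self._data = {}
    def store(self, key, record):
        self._data.setdefault(key, []).append(record)
    def query(self, key):
        return self._data.get(key, [])

class HistoryEngine:
    """Routes each data flow (channel) to an independent driver instance."""
    def __init__(self, driver_factory):
        self._channels = {}
        self._factory = driver_factory
    def push(self, channel, record):
        driver = self._channels.setdefault(channel, self._factory())
        driver.store(channel, record)
    def history(self, channel):
        return self._channels[channel].query(channel)

hst = HistoryEngine(InMemoryDriver)
hst.push("bpm01", {"t": 0, "x": 0.12})
hst.push("bpm01", {"t": 1, "x": 0.15})
print(hst.history("bpm01"))
```

Swapping `InMemoryDriver` for a driver wrapping a real database would change the storage technology without touching the engine — the design point the abstract makes.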
TUPPC024 | Challenges to Providing a Successful Central Configuration Service to Support CERN’s New Controls Diagnostics and Monitoring System | controls, database, monitoring, diagnostics | 596 |
|
|||
The Controls Diagnostics and Monitoring service (DIAMON) provides monitoring and diagnostics tools to the operators in the CERN Control Centre. A recent reengineering presented the opportunity to restructure its data management and to integrate it with the central Controls Configuration Service (CCS). The CCS provides the configuration management of the controls system for all accelerators at CERN. The new facility had to cater for the configuration management of all agents monitored by DIAMON (>3000 computers of different types), provide deployment information, relations between metrics, and historical information. In addition, it had to be integrated into the operational CCS, while ensuring stability and data coherency. An important design decision was to largely reuse the existing infrastructure in the CCS and adapt the DIAMON data management to it, e.g. by using the device/property model through a Virtual Devices framework to model the DIAMON agents. This article will show how these challenging requirements were successfully met, the problems encountered and their resolution. The new service architecture will be presented: database model, new and tailored processes and tools. | |||
Poster TUPPC024 [2.741 MB] | ||
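The device/property modelling approach mentioned above can be sketched briefly; the class, device and property names here are invented and do not reflect CERN's actual Virtual Devices framework:

```python
class VirtualDevice:
    """A monitored agent modelled as a device with named properties."""
    def __init__(self, name):
        self.name = name
        self._properties = {}

    def set_property(self, prop, value):
        self._properties[prop] = value

    def get_property(self, prop):
        return self._properties[prop]

# A DIAMON-style agent (hypothetical host name) exposed uniformly,
# regardless of what kind of computer it actually is.
agent = VirtualDevice("cs-ccr-diamon1")
agent.set_property("Status", "RUNNING")
agent.set_property("CpuLoad", 0.42)
print(agent.get_property("Status"))
```

The appeal of the model is that >3000 heterogeneous computers can be configured and queried through one uniform device/property interface.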
TUPPC026 | Concept and Prototype for a Distributed Analysis Framework for the LHC Machine Data | operation, extraction, embedded, database | 604 |
|
|||
The Large Hadron Collider (LHC) at CERN produces more than 50 TB of diagnostic data every year, spread across normal running periods as well as commissioning periods. The data is collected in different systems, such as the LHC Post Mortem System (PM), the LHC Logging Database and different file catalogues. To analyse and correlate data from these systems, it is necessary to extract the data to a local workspace and to use scripts to obtain and correlate the required information. Since the amount of data can be huge (depending on the task to be achieved), this approach can be very inefficient. To cope with this problem, a new project was launched to bring the analysis closer to the data itself. This paper describes the concepts and the implementation of the first prototype of an extensible framework, which will allow integrating all the existing data sources as well as future extensions, like Hadoop* clusters or other parallelization frameworks.
*http://hadoop.apache.org/ |
|||
Poster TUPPC026 [1.378 MB] | ||
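The idea of bringing the analysis closer to the data — computing small partial results where each partition lives and merging only the summaries — can be sketched in a map/reduce style with plain Python. A real framework (e.g. a Hadoop cluster, as the abstract mentions) would distribute the map step across the data nodes; the function names here are invented:

```python
def map_partition(samples):
    """Local pass over one data partition (e.g. one logging-DB chunk);
    returns only a small summary tuple, not the raw data."""
    return (len(samples), sum(samples), max(samples))

def reduce_results(partials):
    """Merge the partial summaries into the final answer."""
    n = sum(p[0] for p in partials)
    total = sum(p[1] for p in partials)
    peak = max(p[2] for p in partials)
    return {"count": n, "mean": total / n, "max": peak}

# Three partitions standing in for chunks of diagnostic data.
partitions = [[1.0, 2.0, 3.0], [4.0, 5.0], [0.5]]
partials = [map_partition(p) for p in partitions]
print(reduce_results(partials))
```

Only the three small tuples cross the network, not the 50 TB of raw data — which is exactly why moving the analysis to the data beats extracting it to a local workspace.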
TUPPC027 | Quality Management of CERN Vacuum Controls | vacuum, controls, database, interface | 608 |
|
|||
The vacuum controls team is in charge of the monitoring, maintenance & consolidation of the control systems of all accelerators and detectors at CERN; this represents 6 000 instruments distributed along 128 km of vacuum chambers, often of heterogeneous architectures. In order to improve the efficiency of the services we provide to vacuum experts and to accelerator operators, a Quality Management Plan is being put into place. The first step was the gathering of old documents and the centralisation of information concerning architectures, procedures, equipment and settings. It was followed by the standardisation of the naming convention across the different accelerators. The traceability of problems, requests, repairs, and other actions has also been put into place. It goes together with the effort on identification of each individual device by a coded label, and its registration in a central database. We are also working on ways to record, retrieve, process, and display the information across several linked repositories; then, the quality and efficiency of our services can only improve, and the corresponding performance indicators will be available. | |||
Poster TUPPC027 [98.542 MB] | ||
TUPPC030 | System Relation Management and Status Tracking for CERN Accelerator Systems | software, interface, database, hardware | 619 |
|
|||
The Large Hadron Collider (LHC) at CERN requires many systems to work in close interplay to allow reliable operation and at the same time ensure the correct functioning of the protection systems required when operating with large energies stored in the magnet system and particle beams. Examples of such systems are magnets, power converters and quench protection systems, as well as higher-level systems like Java applications or server processes. All these systems have numerous links (dependencies) of different kinds between each other. The knowledge about the different dependencies is available from different sources, like layout databases, Java imports, proprietary files, etc. Retrieving consistent information is difficult due to the lack of a unified way of retrieving the relevant data. This paper describes a new approach to establish a central server instance, which allows collecting this information and providing it to different clients used during commissioning and operation of the accelerator. Furthermore, it explains future visions for such a system, which include additional layers for distributing system information like operational status, issues or faults. | |||
Poster TUPPC030 [4.175 MB] | ||
TUPPC048 | Adoption of the "PyFRID" Python Framework for Neutron Scattering Instruments | controls, interface, scattering, software | 677 |
|
|||
M. Drochner, L. Fleischhauer-Fuss, H. Kleines, D. Korolkov, M. Wagener, S. v. Waasen To unify the user interfaces of the JCNS (Jülich Centre for Neutron Science) scattering instruments, we are adapting and extending the "PyFRID" framework. "PyFRID" is a high-level Python framework for instrument control. It provides a high level of abstraction, particularly by use of aspect-oriented (AOP) techniques. Users can use a built-in command language or a web interface to control and monitor motors, sensors, detectors and other instrument components. The framework has been fully adopted at two instruments, and work is in progress to use it on more. | |||
TUPPC064 | Reusing the Knowledge from the LHC Experiments to Implement the NA62 Run Control | controls, experiment, detector, hardware | 725 |
|
|||
NA62 is an experiment designed to measure very rare kaon decays at the CERN SPS, planned to start operation in 2014. Until that date, several intermediate run periods have been scheduled to exercise and commission the different parts and subsystems of the detector. The Run Control system monitors and controls all processes and equipment involved in data-taking. This system is developed as a collaboration between the NA62 experiment and the Industrial Controls and Engineering (EN-ICE) Group of the Engineering Department at CERN. In this paper, the contribution of EN-ICE to the NA62 Run Control project is summarized. EN-ICE has promoted the utilization of standardized control technologies and frameworks at CERN, which were originally developed for the controls of the LHC experiments. This approach made it possible to deliver a working system for the 2013 Technical Run that exceeded the initial requirements, in a very short time and with limited manpower. | |||
TUPPC072 | Flexible Data Driven Experimental Data Analysis at the National Ignition Facility | data-analysis, diagnostics, software, target | 747 |
|
|||
Funding: This work was performed under the auspices of the Lawrence Livermore National Security, LLC, (LLNS) under Contract No. DE-AC52-07NA27344. #LLNL-ABS-632532 After each target shot at the National Ignition Facility (NIF), scientists require data analysis within 30 minutes from ~50 diagnostic instrument systems. To meet this goal, NIF engineers created the Shot Data Analysis (SDA) Engine based on the Oracle Business Process Execution Language (BPEL) platform. While this provided a very powerful and flexible analysis product, it still required engineers conversant in software development practices to create the configurations executed by the SDA engine. As more and more diagnostics were developed and the demand for analysis increased, the development staff was not able to keep pace. To solve this problem, the Data Systems team took the approach of creating a database-table-based scripting language that allows users to define an analysis configuration of inputs, feed the data into standard processing algorithms, and store the outputs in a database. The creation of the Data Driven Engine (DDE) has substantially decreased the development time for new analyses and simplified the maintenance of existing configurations. The architecture and functionality of the Data Driven Engine will be presented along with examples. |
|||
Poster TUPPC072 [1.150 MB] | ||
TUPPC087 | High Level FPGA Programming Framework Based on Simulink | FPGA, interface, hardware, software | 782 |
|
|||
Funding: The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement No 283745. Modern diagnostic and detector-related data acquisition and processing hardware are increasingly being implemented with Field Programmable Gate Array (FPGA) technology. The level of flexibility allows for simpler hardware solutions, together with the ability to implement functions during the firmware programming phase. The technology is also becoming more relevant in data processing, allowing for reduction and filtering to be done at the hardware level, together with the implementation of low-latency feedback systems. However, this flexibility and these possibilities require a significant amount of design, programming, simulation and testing work, usually done by FPGA experts. A high-level FPGA programming framework is currently under development at the European XFEL in collaboration with Oxford University within the EU CRISP project. This framework allows people unfamiliar with FPGA programming to develop and simulate complete algorithms and programs within the MathWorks Simulink graphical tool with real FPGA precision. Modules within the framework allow for simple code reuse by compiling them into libraries, which can be deployed to other boards or FPGAs. |
|||
Poster TUPPC087 [0.813 MB] | ||
TUPPC088 | Development of MicroTCA-based Image Processing System at SPring-8 | FPGA, controls, interface, Linux | 786 |
|
|||
At SPring-8, various CCD cameras have been utilized for electron beam diagnostics of the accelerators and for x-ray imaging experiments. PC-based image processing systems are mainly used for the CCD cameras with a Camera Link interface. We have developed a new image processing system based on the MicroTCA platform, which has an advantage over PCs in robustness and scalability due to its hot-swappable modular architecture. In order to reduce development cost and time, the new system is built with COTS products, including a user-configurable Spartan-6 AMC with an FMC slot and a Camera Link FMC. The Camera Link FPGA core is newly developed in compliance with the AXI4 open bus to enhance reusability. The MicroTCA system will first be applied to the upgrade of the two-dimensional synchrotron radiation interferometer[1] operating at the SPring-8 storage ring. The sizes and tilt angle of a transverse electron beam profile with an elliptical Gaussian distribution are extracted from an observed 2D interferogram. A dedicated processor AMC (PrAMC) that communicates with the primary PrAMC via the backplane is added for fast 2D-fitting calculation to achieve real-time beam profile monitoring during storage ring operation.
[1] "Two-dimensional visible synchrotron light interferometry for transverse beam-profile measurement at the SPring-8 storage ring", M.Masaki and S.Takano, J. Synchrotron Rad. 10, 295 (2003). |
|||
Poster TUPPC088 [4.372 MB] | ||
TUPPC106 | Development of a Web-based Shift Reporting Tool for Accelerator Operation at the Heidelberg Ion Beam Therapy Center | ion, database, controls, operation | 822 |
|
|||
The HIT (Heidelberg Ion Therapy) center is the first dedicated European accelerator facility for cancer therapy using both carbon ions and protons, located at the university hospital in Heidelberg. It provides three fully operational therapy treatment rooms: two with a fixed beam exit and one with a gantry. We are currently developing a web-based reporting tool for accelerator operations. Since medical treatment requires a high level of quality assurance, detailed reporting on beam quality, device failures and technical problems is even more necessary than in accelerator operations for science. The reporting tool will allow the operators to create their shift reports with support from automatically derived data, i.e. by providing pre-filled forms based on data from the Oracle database that is part of the proprietary accelerator control system. The reporting tool is based on the Python-powered CherryPy web framework, using SQLAlchemy for object-relational mapping. The HTML pages are generated from templates, enriched with jQuery to provide desktop-like usability. We will report on the system architecture of the tool and its current status, and show screenshots of the user interface.
[1] Th. Haberer et al., “The Heidelberg Ion Therapy Center”, Rad. & Onc., |
|||
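Pre-filling a shift report from an operations database, as described above, might look like the following stdlib-only sketch. The real tool uses CherryPy and SQLAlchemy against an Oracle database; the schema, data and report template here are invented for illustration:

```python
import sqlite3
from string import Template

# In-memory stand-in for the control-system database (invented schema).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE faults (device TEXT, message TEXT)")
db.execute("INSERT INTO faults VALUES ('Source2', 'RF interlock')")

# Pull the automatically derived data and pre-fill the report form.
faults = db.execute("SELECT device, message FROM faults").fetchall()
report = Template("Shift report\nDevice failures:\n$faults").substitute(
    faults="\n".join("- %s: %s" % row for row in faults)
)
print(report)
```

An ORM like SQLAlchemy would replace the raw SQL with mapped classes, and a template engine would replace `string.Template`, but the pre-filling flow is the same: query, format, present for the operator to complete.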
TUPPC126 | Visualization of Experimental Data at the National Ignition Facility | diagnostics, software, target, database | 879 |
|
|||
Funding: * This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. #LLNL-ABS-633252 An experiment at the National Ignition Facility (NIF) may produce hundreds of gigabytes of target diagnostic data. Raw and analyzed data are accumulated into the NIF Archive database. The Shot Data Systems team provides alternatives for accessing the data, including a web-based data visualization tool, a virtual file system for programmatic data access, a macro language for data integration, and a Wiki to support collaboration. The data visualization application in particular adapts dashboard user-interface design patterns popularized by the business intelligence software community. The dashboard canvas provides the ability to rapidly assemble tailored views of data directly from the NIF Archive. This design has proven capable of satisfying most new visualization requirements in near real time. The separate file system and macro feature set support direct data access from a scientist’s computer using scientific languages such as IDL, Matlab and Mathematica. Underlying all these capabilities is a shared set of web services that provide APIs and transformation routines to the NIF Archive. The overall software architecture will be presented, with an emphasis on data visualization. |
|||
Poster TUPPC126 [4.900 MB] | ||
TUPPC129 | NIF Device Health Monitoring | controls, GUI, monitoring, status | 887 |
|
|||
Funding: * This work was performed under the auspices of the Lawrence Livermore National Security, LLC, (LLNS) under Contract No. DE-AC52-07NA27344. #LLNL-ABS-633794 The Integrated Computer Control System (ICCS) at the National Ignition Facility (NIF) uses Front-End Processors (FEPs) controlling over 60,000 devices. Often device faults are not discovered until a device is needed during a shot, creating run-time errors that delay the laser shot. This paper discusses a new ICCS framework feature for FEPs to monitor their devices and report their overall health, allowing problem devices to be identified before they are needed. Each FEP has different devices and a unique definition of healthy. The ICCS software uses an object-oriented approach based on polymorphism, so FEPs can determine their health status and report it in a consistent way. This generic approach provides consistent GUI indication and the display of detailed information about device problems. It allows operators to be informed quickly of faults and provides them with the information necessary to pinpoint and resolve issues. Operators now know before starting a shot whether the control system is ready, thereby reducing the time and material lost due to a failure and improving overall control system reliability and availability. |
|||
Poster TUPPC129 [2.318 MB] | ||
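The polymorphic health-reporting approach above can be sketched as follows: each FEP subclass supplies its own definition of healthy, while the base class reports status in a consistent way. The class names and fault wording are hypothetical, not the ICCS framework's:

```python
class FrontEndProcessor:
    """Base class: uniform health reporting for all FEPs."""
    def device_faults(self):
        raise NotImplementedError  # each FEP knows its own devices

    def health_status(self):
        faults = self.device_faults()
        return ("HEALTHY", []) if not faults else ("DEGRADED", faults)

class MotorFEP(FrontEndProcessor):
    """One concrete FEP; 'healthy' here means no stalled motors."""
    def __init__(self, stalled):
        self.stalled = stalled

    def device_faults(self):
        return ["motor %d stalled" % m for m in self.stalled]

status, faults = MotorFEP(stalled=[7]).health_status()
print(status, faults)
```

A GUI polling `health_status()` on a heterogeneous list of FEPs gets the same `(status, faults)` shape from every one, which is what makes the consistent indication and drill-down possible.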
TUPPC134 | Pvmanager: A Java Library for Real-Time Data Processing | controls, real-time, EPICS, background | 903 |
|
|||
Increasingly becoming the standard connection layer in Control System Studio, pvmanager is a Java library for creating well-behaved applications that process real-time data, such as that coming from a control system. It takes care of caching, queuing, rate decoupling and throttling, connection sharing, data aggregation and all the other details needed to make an application robust. Its fluent API allows the details of each pipeline to be specified declaratively in a compact way. | |||
Poster TUPPC134 [0.518 MB] | ||
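pvmanager itself is a Java library; the following Python sketch only illustrates one of the concerns it handles, rate decoupling: updates may arrive at any rate, while the client polls a cache of the latest value at its own rate (class name invented, not pvmanager's API):

```python
class LatestValueCache:
    """Decouples the source's update rate from the client's poll rate."""
    def __init__(self):
        self._value = None
        self._dirty = False

    def update(self, value):
        # Called at the source's rate; only the latest value is kept.
        self._value, self._dirty = value, True

    def poll(self):
        # Called at the client's rate; None means nothing new.
        if not self._dirty:
            return None
        self._dirty = False
        return self._value

cache = LatestValueCache()
for v in range(1000):      # burst of fast updates...
    cache.update(v)
print(cache.poll())        # ...client sees only the latest: 999
print(cache.poll())        # nothing new since last poll: None
```

pvmanager's fluent API wraps exactly this kind of machinery (plus queuing, aggregation and connection sharing) so application code never handles raw update bursts directly.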
TUCOCB01 | Next-Generation MADOCA for The SPring-8 Control Framework | controls, Windows, interface, software | 944 |
|
|||
The MADOCA control framework* was developed for SPring-8 accelerator control and has been utilized in several facilities since 1997. As a result of increasing demands on the controls, we now need to handle various kinds of data, including image data for beam profile monitoring, and we also need to control specific devices which can only be managed by Windows drivers. To fulfil such requirements, the next-generation MADOCA (MADOCA II) has been developed. MADOCA II is also based on a message-oriented control architecture, but the core part of the messaging has been completely rewritten with the ZeroMQ socket library. The main features of MADOCA II are as follows: 1) variable-length data, such as image data, can be transferred with a message; 2) the control system can run on Windows as well as on other platforms such as Linux and Solaris; 3) concurrent processing of multiple messages can be performed for fast control. In this paper, we report on the new control framework, especially from the messaging aspect. We also report the status of the replacement of the control system with MADOCA II. Part of the SPring-8 control system was already replaced with MADOCA II last summer and has been operating stably.
*R.Tanaka et al., “Control System of the SPring-8 Storage Ring”, Proc. of ICALEPCS’95, Chicago, USA, (1995) |
|||
Slides TUCOCB01 [2.157 MB] | ||
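MADOCA II's transport is ZeroMQ; to stay dependency-free, this sketch shows only the underlying idea behind feature 1) above — carrying a variable-length payload such as image data in a message by prefixing it with a fixed-size header. The wire format (a 16-byte command field plus a length field) is invented for illustration, not MADOCA's actual format:

```python
import struct

HEADER_FMT = "!16sI"                 # command name + payload length
HEADER_SIZE = struct.calcsize(HEADER_FMT)   # 20 bytes

def pack_message(command, payload):
    """Frame a variable-length payload behind a fixed-size header."""
    header = struct.pack(HEADER_FMT, command.encode().ljust(16), len(payload))
    return header + payload

def unpack_message(message):
    """Recover the command and exactly the payload bytes that were sent."""
    command, length = struct.unpack(HEADER_FMT, message[:HEADER_SIZE])
    return command.rstrip(b"\x00 ").decode(), message[HEADER_SIZE:HEADER_SIZE + length]

msg = pack_message("put/image", bytes(range(5)))
cmd, data = unpack_message(msg)
print(cmd, list(data))
```

With ZeroMQ, the framing itself comes for free (messages are length-delimited, and multipart messages can carry the command and the image as separate frames), which is one reason such a library suits variable-length data.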
TUCOCB06 | Designing and Implementing LabVIEW Solutions for Re-Use* | interface, LabView, hardware, controls | 960 |
|
|||
Funding: * This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. #LLNL-ABS-632632 Many of our machines have a lot in common – they drive motors, take pictures, generate signals, toggle switches, and observe and measure effects. In a research environment that creates new machines and expects them to perform for a production assembly line, it is important to meet both schedule and quality targets. NIF has developed a layered LabVIEW architecture of Support, general Frameworks, Controllers, Devices, and User Interface Frameworks. This architecture provides a tested and qualified framework of software that allows us to focus on developing and testing the external interfaces (hardware and user) of each machine. |
|||
Slides TUCOCB06 [4.232 MB] | ||
WECOAAB01 | An Overview of the LHC Experiments' Control Systems | controls, experiment, interface, monitoring | 982 |
|
|||
Although they are all LHC experiments, the four experiments use different equipment, either by need or by choice, have defined different requirements and are operated differently. This led to the development of four quite different control systems. Although a joint effort was made in the area of Detector Control Systems (DCS), allowing a common choice of components and tools and achieving the development of a common DCS framework for the four experiments, nothing was done in common in the areas of Data Acquisition or Trigger Control (normally called Run Control). This talk will present an overview of the design principles, architectures and technologies chosen by the four experiments in order to perform the Control System's tasks: Configuration, Control, Monitoring, Error Recovery, User Interfacing, Automation, etc.
Invited |
|||
Slides WECOAAB01 [2.616 MB] | ||
WECOAAB02 | Status of the ACS-based Control System of the Mid-sized Telescope Prototype for the Cherenkov Telescope Array (CTA) | controls, software, interface, monitoring | 987 |
|
|||
CTA, as the next-generation ground-based very-high-energy gamma-ray observatory, is defining new areas beyond those related to physics; it is also creating new demands on the control and data acquisition system. With on the order of 100 telescopes spread over a large area, together with numerous central facilities, CTA will comprise a significantly larger number of devices than any other current imaging atmospheric Cherenkov telescope experiment. A prototype for the Medium Size Telescope (MST), with a diameter of 12 m, has been installed in Berlin and is currently being commissioned. The design of the control software of this telescope incorporates the main tools and concepts under evaluation within the CTA consortium, in order to provide an array control prototype for the CTA project. The readout and control system for the MST prototype is implemented within the ALMA Common Software (ACS) framework. The interfacing to the hardware is performed via the OPC Unified Architecture (OPC UA). The archive system is based on MySQL and MongoDB. In this contribution, the architecture of the MST control and data acquisition system, implementation details and first conclusions are presented. | |||
Slides WECOAAB02 [3.148 MB] | ||
WECOBA01 | Algebraic Reconstruction of Ultrafast Tomography Images at the Large Scale Data Facility | data-analysis, distributed, synchrotron, radiation | 996 |
|
|||
Funding: Karlsruhe Institute of Technology, Institute for Data Processing and Electronics; China Scholarship Council The ultrafast tomography system built up at the ANKA Synchrotron Light Source at KIT makes possible the study of moving biological objects with high temporal and spatial resolution. The resulting amounts of data are challenging in terms of the reconstruction algorithm, automatic processing software and computing. The standard reconstruction method in operation yields limited quality of the reconstructed images, due to the much smaller number of projections obtained from the ultrafast tomography. Thus, an algebraic reconstruction technique based on a more precise forward transform model and compressive sampling theory is investigated. It results in high-quality images, but is computationally very intensive. For near real-time reconstruction, an automatic workflow is started after data ingest, processing a full data volume in parallel using the Hadoop cluster at the Large Scale Data Facility (LSDF) to greatly reduce the computing time. It will provide not only better reconstruction results but also higher data analysis efficiency to users. This study contributes to the construction of the fast tomography system at ANKA and will enhance its application in the fields of chemistry, biology and new materials. |
|||
Slides WECOBA01 [1.595 MB] | ||
THCOAAB02 | Enhancing the Man-Machine-Interface of Accelerator Control Applications with Modern Consumer Market Technologies | controls, HOM, embedded, software | 1044 |
|
|||
The paradigms of human interaction with modern consumer market devices such as tablets, smartphones or video game consoles are currently undergoing rapid and serious changes. Device control by multi-finger touch gestures or voice recognition has now become standard. Even more advanced technologies such as 3D gesture recognition are becoming routine. Smart enhancements of head-mounted display technologies are beginning to appear on the consumer market. In addition, the look-and-feel of mobile apps and classical desktop applications are becoming remarkably similar to one another. We have used Web2cToGo to investigate the consequences of the above-mentioned technologies and paradigms with respect to accelerator control applications. Web2cToGo is a framework being developed at DESY. It provides a common, platform-independent web application capable of running on widely used mobile as well as common desktop platforms. This paper reports the basic concept of the project, presents the results achieved so far, and discusses the next development steps. | |||
Slides THCOAAB02 [0.667 MB] | ||
THCOAAB05 | Rapid Application Development Using Web 2.0 Technologies | software, target, interface, experiment | 1058 |
|
|||
Funding: * This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. #LLNL-ABS-632813 The National Ignition Facility (NIF) strives to deliver reliable, cost-effective applications that can easily adapt to the changing business needs of the organization. We use HTML5, RESTful web services, AJAX, jQuery, and JSF 2.0 to meet these goals. WebGL and HTML5 Canvas technologies are being used to provide 3D and 2D data visualization applications. jQuery’s rich set of widgets, along with technologies such as HighCharts and DataTables, allows for creating interactive charts, graphs, and tables. PrimeFaces enables us to utilize much of this Ajax and jQuery functionality while leveraging our existing knowledge base in the JSF framework. RESTful web services have replaced the traditional SOAP model, allowing us to easily create and test web services. Additionally, new software based on NodeJS and WebSocket technology is currently being developed, which will augment the capabilities of our existing applications to provide a level of interaction with our users that was previously unfeasible. These Web 2.0-era technologies have allowed NIF to build more robust and responsive applications. Their benefits and details of their use will be discussed. |
|||
Slides THCOAAB05 [0.832 MB] | ||
THCOAAB07 | NIF Electronic Operations: Improving Productivity with iPad Application Development | operation, network, database, diagnostics | 1066 |
|
|||
Funding: This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. #LLNL-ABS-632815 In an experimental facility like the National Ignition Facility (NIF), thousands of devices must be maintained during day-to-day operations. Teams within NIF have documented hundreds of procedures, or checklists, detailing how to perform this maintenance. These checklists have been paper-based, until now. NIF Electronic Operations (NEO) is a new web and iPad application for managing and executing checklists. NEO increases the efficiency of operations by reducing the overhead associated with paper-based checklists, and provides analysis and integration opportunities that were previously not possible. NEO’s data-driven architecture allows users to manage their own checklists and provides checklist versioning, real-time input validation, detailed step timing analysis, and integration with external task tracking and content management systems. Built with mobility in mind, NEO runs on an iPad and works without the need for a network connection. When executing a checklist, users capture various readings, photos, measurements and notes, which are then reviewed and assessed after its completion. NEO’s design, architecture, iPad application and uses throughout the NIF will be discussed. |
|||
![]() |
Slides THCOAAB07 [1.237 MB] | ||
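The data-driven checklist described above, with versioning, real-time input validation, and per-step timing, can be sketched as a small data model. Field names here are illustrative, not NEO's actual schema:

```python
import time

# A checklist step validates its reading against configured limits and
# records start/finish timestamps for later timing analysis.
class Step:
    def __init__(self, name, low, high):
        self.name, self.low, self.high = name, low, high
        self.reading = None
        self.started = self.finished = None

    def execute(self, reading):
        self.started = time.time()
        if not (self.low <= reading <= self.high):
            raise ValueError(f"{self.name}: {reading} outside [{self.low}, {self.high}]")
        self.reading = reading
        self.finished = time.time()

# A checklist is versioned data, not code: changing a procedure means
# publishing a new version of the step list.
class Checklist:
    def __init__(self, name, version, steps):
        self.name, self.version, self.steps = name, version, steps

    def completed(self):
        return all(s.finished is not None for s in self.steps)
```

Keeping the procedure as data is what enables the versioning and post-hoc step-timing analysis the abstract mentions.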
THCOAAB09 | Olog and Control System Studio: A Rich Logging Environment | controls, interface, operation, experiment | 1074 |
|
|||
Leveraging the features provided by Olog and Control System Studio, we have developed a logging environment which allows for the creation of rich log entries. These entries, in addition to text and snapshot images, store context which can comprise information either from the control system (process variables) or from other services (directory, ticketing, archiver). Client tools using this context provide the user the ability to launch various applications with their state initialized to match that at the time the entry was created. | |||
![]() |
Slides THCOAAB09 [1.673 MB] | ||
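The key idea above is that a log entry carries context, such as process-variable snapshots and service references, alongside its text and images. A minimal sketch with invented field names (not Olog's actual data model):

```python
# A "rich" log entry: besides free text and attachments, it stores the
# control-system context captured at creation time, so a client tool can
# later relaunch applications initialized to that state.
def make_entry(text, pvs, attachments=(), tickets=()):
    return {
        "text": text,
        "context": {
            "pvs": dict(pvs),          # PV name -> value at creation time
            "tickets": list(tickets),  # e.g. ticketing-system references
        },
        "attachments": list(attachments),
    }

def restore_state(entry):
    """Return the PV snapshot a client tool would use to initialize itself."""
    return entry["context"]["pvs"]
```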
THPPC066 | ACSys Camera Implementation Utilizing an Erlang Framework to C++ Interface | controls, software, interface, hardware | 1228 |
|
|||
Multiple cameras are integrated into the Accelerator Control System utilizing an Erlang framework. Message passing is implemented to provide access into C++ methods. The framework runs on a multi-core processor running Scientific Linux. The system provides full access to any 3 of approximately 20 cameras collecting frames at 5 Hz. JPEG images are provided in memory or as files for visual information; PNG files are provided in memory or as files for analysis. Histograms over the X & Y coordinates are filtered and analyzed. This implementation is described and the framework is evaluated. | |||
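The X & Y histograms mentioned above are projections of the 2-D intensity frame onto each axis. A pure-Python sketch of that step (the production system works on camera frames in C++; this is only illustrative):

```python
# Project a 2-D intensity frame (list of rows) onto its X and Y axes:
# hist_x[c] sums column c over all rows, hist_y[r] sums row r.
def projections(frame):
    rows = len(frame)
    cols = len(frame[0])
    hist_x = [sum(frame[r][c] for r in range(rows)) for c in range(cols)]
    hist_y = [sum(frame[r]) for r in range(rows)]
    return hist_x, hist_y
```

Peaks in these projections give beam-spot position and width per axis, which is what makes them useful for automated analysis.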
THPPC078 | The AccTesting Framework: An Extensible Framework for Accelerator Commissioning and Systematic Testing | GUI, LabView, database, hardware | 1250 |
|
|||
The Large Hadron Collider (LHC) at CERN requires many systems to work in close interplay to allow reliable operation and at the same time ensure the correct functioning of the protection systems required when operating with large energies stored in the magnet system and particle beams. The systems for magnet powering and beam operation are qualified during dedicated commissioning periods and retested after corrective or regular maintenance. Based on the experience acquired with the initial commissioning campaigns of the LHC magnet powering system, a framework was developed to orchestrate the thousands of tests for electrical circuits and other systems of the LHC. The framework was carefully designed to be extendable. Currently, work is on-going to prepare and extend the framework for the re-commissioning of the machine protection systems at the end of 2014 after the LHC Long Shutdown. This paper describes the concept, current functionality and vision of this framework to cope with the required dependability of test execution and analysis. | |||
![]() |
Poster THPPC078 [5.908 MB] | ||
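Orchestrating thousands of interdependent qualification tests, as described above, is essentially dependency-ordered scheduling: a test may only run once its prerequisites have passed. A sketch with invented test names (AccTesting's real scheduler is far richer):

```python
from graphlib import TopologicalSorter

# deps maps each test to the set of tests it depends on; run_test executes
# one test and returns True on success. Tests whose prerequisites failed
# are marked failed without being executed.
def run_campaign(deps, run_test):
    results = {}
    for test in TopologicalSorter(deps).static_order():
        if all(results.get(d, False) for d in deps.get(test, ())):
            results[test] = run_test(test)
        else:
            results[test] = False  # skipped: a prerequisite did not pass
    return results
```

Encoding the dependencies explicitly is also what lets independent branches of the test graph be executed in parallel, a key requirement with thousands of circuits to qualify.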
THPPC079 | Using a Java Embedded DSL for LHC Test Analysis | hardware, interface, embedded, DSL | 1254 |
|
|||
The Large Hadron Collider (LHC) at CERN requires many systems to work in close cooperation. All systems for magnet powering and beam operation are qualified during dedicated commissioning periods and retested after corrective or regular maintenance. Already for the first commissioning of the magnet powering system in 2006, the execution of such tests was automated to a high degree to facilitate the execution and tracking of the more than 10,000 required test steps. Most of the time during today’s commissioning campaigns is spent in analysing test results, to a large extent still done manually. A project was launched to automate the analysis of such tests as much as possible. A dedicated Java embedded Domain Specific Language (eDSL) was created, which allows system experts to describe desired analysis steps in a simple way. The execution of these checks results in simple decisions on the success of the tests and provides plots for experts to quickly identify the source of problems exposed by the tests. This paper explains the concepts and vision of the first version of the eDSL. | |||
![]() |
Poster THPPC079 [1.480 MB] | ||
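The point of an embedded DSL like the one above is that an analysis step reads close to prose yet reduces to a pass/fail decision. The paper's eDSL is Java; this Python sketch with invented names only illustrates the fluent style:

```python
# A signal is a named series of samples from a test recording.
class Signal:
    def __init__(self, name, samples):
        self.name, self.samples = name, samples

# Entry point of the fluent check: assertThat(signal).staysBelow(limit)
# reads like the expert's requirement and evaluates to a boolean verdict.
def assertThat(signal):
    return _Check(signal)

class _Check:
    def __init__(self, signal):
        self.signal = signal

    def staysBelow(self, limit):
        return all(v < limit for v in self.signal.samples)

    def reaches(self, target):
        return any(v >= target for v in self.signal.samples)
```

A check such as `assertThat(current).staysBelow(6000)` can be written by a powering expert with no knowledge of the analysis engine underneath, which is the division of labour the abstract describes.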
THPPC080 | Testing and Verification of PLC Code for Process Control | PLC, controls, software, factory | 1258 |
|
|||
Functional testing of PLC programs has historically been a challenging task for control systems engineers. This paper presents the analysis of different mechanisms for testing PLC programs developed within the UNICOS (Unified Industrial COntrol System) framework. The framework holds a library of objects, which are represented as Function Blocks in the PLC application. When a new object is added to the library or a correction of an existing one is needed, exhaustive validation of the PLC code is required. Testing and formal verification are two distinct approaches selected for eliminating failures of UNICOS objects. Testing is usually done manually or automatically by developing scripts at the supervision layer using the real control infrastructure. Formal verification proves the correctness of the system by checking whether a formal model of the system satisfies some properties or requirements. The NuSMV model checker has been chosen to perform this task. The advantages and limitations of both approaches are presented and illustrated with a case study, validating a specific UNICOS object. | |||
![]() |
Poster THPPC080 [3.659 MB] | ||
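The difference sketched above is that testing checks selected scenarios while model checking explores all of them. A toy illustration of exhaustive checking (this is not NuSMV and not a real UNICOS object, just a latch-style function block invented for the example):

```python
from itertools import product

# Toy latch function block: once tripped it stays tripped until reset,
# and reset always dominates.
def latch(state, trip, reset):
    return False if reset else (state or trip)

# Exhaustively check the safety properties over the entire (tiny) state
# space -- the brute-force analogue of what a model checker automates for
# much larger models.
def verify_latch():
    for state, trip, reset in product([False, True], repeat=3):
        new = latch(state, trip, reset)
        if reset:
            assert new is False, "reset must clear the latch"
        elif state or trip:
            assert new is True, "latch must hold or set without reset"
    return True
```

For realistic function blocks the state space is far too large for such enumeration in Python, which is why a symbolic model checker like NuSMV is used instead.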
THPPC081 | High-level Functions for Modern Control Systems: A Practical Example | controls, experiment, status, monitoring | 1262 |
|
|||
Modern control systems make wide usage of different IT technologies and complex computational techniques to render the data gathered accessible from different locations and devices, as well as to understand and even predict the behavior of the systems under supervision. The Industrial Controls Engineering (ICE) Group of the EN Department develops and maintains more than 150 vital controls applications for a number of strategic sectors at CERN like the accelerator, the experiments and the central infrastructure systems. All these applications are supervised by MOON, a very successful central monitoring and configuration tool developed by the group that has been in operation 24/7 since 2011. The basic functionality of MOON was presented in previous editions of this conference series. In this contribution we focus on the high-level functionality recently added to the tool to grant multiple users access to the gathered data through the web and mobile devices, as well as a first attempt at data analytics with the goal of identifying useful information to support developers during the optimization of their systems and to help in daily operations. | |||
THPPC082 | Monitoring of the National Ignition Facility Integrated Computer Control System | controls, database, experiment, interface | 1266 |
|
|||
Funding: This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. #LLNL-ABS-632812 The Integrated Computer Control System (ICCS), used by the National Ignition Facility (NIF), provides comprehensive status and control capabilities for operating approximately 100,000 devices through 2,600 processes located on 1,800 servers, front end processors and embedded controllers. Understanding the behaviors of complex, large-scale, operational control software, and improving system reliability and availability, is a critical maintenance activity. In this paper we describe the ICCS diagnostic framework, with tunable detail levels and automatic rollovers, and its use in analyzing system behavior. ICCS recently added Splunk as a tool for improved archiving and analysis of these log files (about 20 GB, or 35 million logs, per day). Splunk now continuously captures all ICCS log files for both real-time examination and exploration of trends. Its powerful search query language and user interface allow interactive exploration of log data to visualize specific indicators of system performance, assist in problem analysis, and provide instantaneous notification of specific system behaviors. |
|||
![]() |
Poster THPPC082 [4.693 MB] | ||
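At 35 million log records per day, the value of a tool like Splunk is turning raw lines into aggregates. The kind of query it runs can be sketched in plain Python over an invented log-line format (ICCS's real format differs):

```python
import re
from collections import Counter

# Assumed line shape: "<date> <time> <LEVEL> <message>". Counting records
# per severity is the simplest indicator of abnormal system behaviour.
LINE = re.compile(r"^\S+ \S+ (?P<level>\w+) (?P<msg>.*)$")

def severity_counts(lines):
    counts = Counter()
    for line in lines:
        m = LINE.match(line)
        if m:
            counts[m.group("level")] += 1
    return counts
```

A dedicated log platform adds indexing, retention, and dashboards on top, but the core operation, parse then aggregate, is the same.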
THCOBA03 | DIAMON2 – Improved Monitoring of CERN’s Accelerator Controls Infrastructure | monitoring, controls, GUI, data-acquisition | 1415 |
|
|||
Monitoring of heterogeneous systems in large organizations like CERN is always challenging. CERN's accelerator infrastructure includes a large number of devices (servers, consoles, FECs, PLCs), some still running legacy software like LynxOS 4 or Red Hat Enterprise Linux 4 on older hardware with very limited resources. DIAMON2 is based on the CERN Common Monitoring platform. Using Java industry standards, notably Spring, Ehcache and the Java Message Service, together with a small-footprint C++-based monitoring agent for real-time systems and a wide variety of additional data acquisition components (SNMP, JMS, JMX etc.), DIAMON2 targets CERN’s environment, providing an easily extensible, dynamically reconfigurable, reliable and scalable monitoring solution. This article explains the evolution of the CERN diagnostics and monitoring environment up to DIAMON2, and describes the overall system architecture, the main components and their functionality, as well as the first operational experiences with the new system, observed under the very demanding infrastructure of CERN’s accelerator complex. | |||
![]() |
Slides THCOBA03 [1.209 MB] | ||
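A central piece of such a monitoring server is a cache of the latest metric values with expiry, in the spirit of the Ehcache layer mentioned above. A simplified sketch with an invented API (not DIAMON2's actual interface):

```python
import time

# Agents push metrics; a value older than the TTL is treated as stale.
# Staleness is itself a useful signal: it means the agent stopped reporting.
class MetricCache:
    def __init__(self, ttl_seconds, clock=time.time):
        self.ttl, self.clock = ttl_seconds, clock
        self._data = {}

    def put(self, host, metric, value):
        self._data[(host, metric)] = (value, self.clock())

    def get(self, host, metric):
        value, stamp = self._data[(host, metric)]
        if self.clock() - stamp > self.ttl:
            return None  # stale: agent silent for longer than the TTL
        return value
```

Injecting the clock keeps the expiry logic testable without real waiting, a design choice worth copying in any monitoring component.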
FRCOAAB06 | A Common Software Framework for FEL Data Acquisition and Experiment Management at FERMI | experiment, FEL, TANGO, data-acquisition | 1481 |
|
|||
Funding: Work supported in part by the Italian Ministry of University and Research under grants FIRB-RBAP045JF2 and FIRB-RBAP06AWK3 After installation and commissioning, the Free Electron Laser facility FERMI is now open to users. As of December 2012, three experimental stations, dedicated to different scientific areas, are available for user research proposals: Low Density Matter (LDM), Elastic & Inelastic Scattering (EIS), and Diffraction & Projection Imaging (DiProI). A flexible and highly configurable common framework has been developed and successfully deployed for experiment management and shot-by-shot data acquisition. This paper describes the software architecture behind all the experiments performed so far: the combination of the EXECUTER script engine with a specialized data acquisition device (FERMIDAQ) based on TANGO. Finally, experimental applications, performance results and future developments are presented and discussed. |
|||
![]() |
Slides FRCOAAB06 [5.896 MB] | ||
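The essence of shot-by-shot acquisition as described above is correlating per-shot readings from several sources by a common shot identifier, keeping only shots for which every configured source reported. A minimal sketch with an invented data layout (not the FERMIDAQ structure):

```python
# shots_by_source maps each data source to {shot_id: value}. The result
# maps each shot id seen by *all* sources to its complete record, so
# downstream analysis never mixes readings from different FEL shots.
def correlate(shots_by_source):
    sources = list(shots_by_source)
    common = set.intersection(*(set(shots_by_source[s]) for s in sources))
    return {sid: {s: shots_by_source[s][sid] for s in sources}
            for sid in sorted(common)}
```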
FRCOBAB03 | The New Multicore Real-time Control System of the RFX-mod Experiment | controls, real-time, plasma, Linux | 1493 |
|
|||
The real-time control system of the RFX-mod nuclear fusion experiment has been in operation since 2004 and has been used to control the plasma position and the magnetohydrodynamic (MHD) modes. Over time, new and more computationally demanding control algorithms have been developed and the system has been pushed to its limits. Therefore a complete re-design was carried out in 2012. The new system adopts radically different solutions in hardware, operating system and software management. The VME PowerPC CPUs communicating over Ethernet used in the former system have been replaced by a single multi-core server. The VxWorks operating system, previously used in the VME CPUs, has been replaced by Linux MRG, which has proved to behave very well in real-time applications. The previous framework for control and communication has been replaced by MARTe, a modern framework for real-time control gaining interest in the fusion community. Thanks to the MARTe organization, a rapid development of the control system has been possible. In particular, the framework's intrinsic simulation capability gave us the possibility of carrying out most debugging in simulation, without affecting machine operation. | |||
![]() |
Slides FRCOBAB03 [1.301 MB] | ||
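The organization credited above, in frameworks such as MARTe, is a pipeline of processing blocks executed every control cycle; the same chain can be fed by real inputs or by a simulated plant, which is what moves debugging off the machine. A sketch with an invented API (MARTe itself is C++ and configuration-driven):

```python
# Two illustrative processing blocks sharing one execute() interface.
class Gain:
    def __init__(self, k):
        self.k = k
    def execute(self, x):
        return self.k * x

class Saturate:
    def __init__(self, limit):
        self.limit = limit
    def execute(self, x):
        return max(-self.limit, min(self.limit, x))

# One real-time cycle: pass the sample through the configured block chain.
# Swapping the input source (real sensors vs. simulated plant) leaves the
# chain untouched, which is what enables debugging in simulation.
def run_cycle(blocks, sample):
    for block in blocks:
        sample = block.execute(sample)
    return sample
```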