Paper | Title | Other Keywords | Page |
---|---|---|---|
MOPPC045 | Cilex-Apollon Synchronization and Security System | laser, TANGO, target, software | 188 |
Funding: CNRS, MESR, CG91, CRiDF, ANR
Cilex-Apollon is a high-intensity laser facility delivering at least 5 PW pulses on target at one shot per minute, to study physics such as laser-plasma electron or ion acceleration and laser-plasma X-ray sources. Under construction, Apollon is a four-beam laser installation with two target areas, and its control system is based on Tango. The Synchronization and Security System (SSS) is the heart of this control system and has two main functions: to deliver triggering signals to laser sources and diagnostics, and to ensure machine protection, guaranteeing the integrity of optical components by preventing damage caused by abnormal operational modes. The SSS is composed of two distributed systems: machine protection is based on a distributed I/O system running a LabVIEW real-time application, while synchronization is based on the distributed Greenfield Technology system. The SSS also delivers shots to the experimental areas through programmed sequences and is interfaced to the Tango bus. The article presents the architecture, functionality, interfaces to other processes, performance, and feedback from a first deployment on a demonstrator.
Poster MOPPC045 [1.207 MB]
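As an illustration of how a subsystem like the SSS is typically exposed on a Tango bus, here is a minimal PyTango device sketch with a hypothetical FireSequence command and shot counter; the class, command and attribute names are assumptions made for illustration, not the actual Apollon SSS interface.

```python
# Minimal PyTango sketch of a trigger-sequencer device (illustrative only).
from tango import DevState
from tango.server import Device, attribute, command, run

class TriggerSequencer(Device):
    def init_device(self):
        super().init_device()
        self._shots_fired = 0
        self.set_state(DevState.STANDBY)

    @attribute(dtype=int)
    def shots_fired(self):
        return self._shots_fired

    @command(dtype_in=int, doc_in="number of shots in the programmed sequence")
    def FireSequence(self, n_shots):
        # A real SSS would arm the delay generators and wait for the
        # machine-protection permit before each shot; here we only count.
        self.set_state(DevState.RUNNING)
        self._shots_fired += n_shots
        self.set_state(DevState.STANDBY)

if __name__ == "__main__":
    run((TriggerSequencer,))
```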
MOPPC075 | A Monte Carlo Simulation Approach to the Reliability Modeling of the Beam Permit System of Relativistic Heavy Ion Collider (RHIC) at BNL | simulation, collider, kicker, electron | 265 |
Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy.
The RHIC Beam Permit System (BPS) monitors the health of RHIC subsystems and takes active decisions regarding beam abort and magnet power dump upon a subsystem fault. The reliability of the BPS directly impacts RHIC downtime, and hence its availability. This work assesses the probability of BPS failures that could lead to substantial downtime. A fail-safe condition imparts downtime to restart the machine, while a failure to respond to an actual fault can cause potential machine damage and impose significant downtime. This paper illustrates a modular multistate reliability model of the BPS, with modules having exponential lifetime distributions. The model is based on the Competing Risks Theory with Crude Lifetimes, where multiple failure modes compete against each other to cause a final failure and simultaneously influence each other. It is also dynamic in nature, as the number of modules varies based on the fault trigger location. The model is implemented as a Monte Carlo simulation in Java and validated analytically. The eRHIC BPS will be an extension of the RHIC BPS, and this analysis will facilitate building a knowledge base providing intelligent decision support for the eRHIC BPS design.
Poster MOPPC075 [0.985 MB]
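The competing-risks core of such a model is easy to sketch. Below is a minimal Python version (the paper's implementation is in Java) with purely hypothetical module names and failure rates: every failure mode of every module draws an exponential lifetime, and the earliest draw determines the outcome. The paper's model is richer, with crude lifetimes that let modes influence each other and a module count that depends on the fault location; none of that is modeled here.

```python
import random

# Hypothetical modules and per-hour failure rates, for illustration only.
MODULES = {
    "permit_link":  {"fail_safe": 1e-4, "fail_dangerous": 1e-6},
    "abort_kicker": {"fail_safe": 5e-5, "fail_dangerous": 2e-6},
}
MISSION_HOURS = 5000.0

def simulate_once():
    # Competing risks: each mode of each module draws an exponential
    # lifetime; the earliest failure decides the outcome of the run.
    t_min, mode_min = float("inf"), "survived"
    for modes in MODULES.values():
        for mode, rate in modes.items():
            t = random.expovariate(rate)
            if t < t_min:
                t_min, mode_min = t, mode
    return mode_min if t_min <= MISSION_HOURS else "survived"

counts = {"fail_safe": 0, "fail_dangerous": 0, "survived": 0}
N = 100_000
for _ in range(N):
    counts[simulate_once()] += 1
print({k: v / N for k, v in counts.items()})
```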
MOPPC126 | !CHAOS: the "Control Server" Framework for Controls | controls, framework, software, database | 403 |
We report on the progress of !CHAOS*, a framework for the development of control and data-acquisition services for particle accelerators and large experimental apparatuses. !CHAOS introduces to the world of controls a new approach for designing and implementing communications and data distribution among components, and for providing the middle-layer services of a control system. Based on software technologies borrowed from high-performance Internet services, !CHAOS offers, through a centralized yet highly scalable, cloud-like approach, all the services needed for controlling and managing a large infrastructure. It includes a number of innovative features, such as high abstraction of services, devices and data; easy and modular customization; extensive data caching for enhanced performance; and integration of all services in a common framework. Since the !CHAOS conceptual design was presented two years ago, the INFN group has been working on the implementation of the services and components of the software framework. Most of them have been completed and tested to evaluate performance and reliability. Some services are already installed and operational in experimental facilities at LNF.
* "Introducing a new paradigm for accelerators and large experimental apparatus control systems", L. Catani et.al., Phys. Rev. ST Accel. Beams, http://prst-ab.aps.org/abstract/PRSTAB/v15/i11/e112804 |
Poster MOPPC126 [0.874 MB]
MOPPC140 | High-Availability Monitoring and Big Data: Using Java Clustering and Caching Technologies to Meet Complex Monitoring Scenarios | monitoring, software, controls, network | 439 |
Monitoring and control applications face ever more demanding requirements: as both data sets and data rates continue to increase, non-functional requirements such as performance, availability and maintainability become more important. C2MON (CERN Control and Monitoring Platform) is a monitoring platform developed at CERN over the past few years. Making use of modern Java caching and clustering technologies, the platform supports multiple deployment architectures, from a simple 3-tier system to highly complex clustered solutions. In this paper we consider various monitoring scenarios and how the C2MON deployment strategy can be adapted to meet them.
Poster MOPPC140 [1.382 MB]
TUCOBAB02 | The Mantid Project: Notes from an International Software Collaboration | framework, software, interface, neutron | 502 |
Funding: This project is a collaboration between SNS, ORNL and ISIS, RAL, with expertise supplied by Tessella. These facilities are in turn funded by the US DoE and the UK STFC.
The Mantid project was started by ISIS in 2007 to provide a framework for data reduction and analysis of neutron and muon data. The SNS and HFIR joined the Mantid project in 2009, adding event processing and other capabilities to the framework. The Mantid software now supports the data reduction needs of most of the instruments at ISIS and the SNS and some at HFIR, and it is being evaluated by other facilities. The scope of the data reduction and analysis challenges, together with the need for a cross-platform solution, fuels the need for Mantid to be developed in collaboration between facilities. Mantid has from its inception been an open-source project, built to be flexible enough to be instrument- and technique-independent, and planned from the start to support collaboration with other development teams. Through the collaboration with the SNS, development practices and tools have been further developed to support the distributed development team in this challenge. This talk describes the building and structure of the collaboration, the stumbling blocks we have overcome, and the great strides we have made in building a solid collaboration between these facilities.
Mantid project website: www.mantidproject.org ISIS: http://www.isis.stfc.ac.uk/ SNS & HFIR: http://neutrons.ornl.gov/
Slides TUCOBAB02 [1.280 MB]
TUMIB08 | ITER Contribution to Control System Studio (CSS) Development Effort | controls, EPICS, framework, interface | 540 |
In 2010, Control System Studio (CSS) was chosen for CODAC, the central control system of ITER, as the development and runtime integrated environment for local control systems. It quickly became necessary to contribute to the CSS development effort: the CODAC team wants to be sure that the tools used by the seven ITER members all over the world remain available and continue to improve. In order to integrate the main CSS components into its framework, the CODAC team first needed to adapt them to its standard platform, based on 64-bit Linux and a PostgreSQL database. Then user feedback started to emerge, along with the need for an industrial symbol library to represent pump, valve or electrical-breaker states on the operator interface, and the requirement to automatically send an email when a new alarm is raised. It also soon became important for the CODAC team to be able to publish its contributions quickly and to adapt its own infrastructure accordingly. This paper describes ITER's increasing contribution to the CSS development effort and future plans to address factory and site acceptance tests of the local control systems.
Slides TUMIB08 [2.970 MB]
Poster TUMIB08 [0.959 MB]
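Among the requirements mentioned above, the alarm email notification is simple to illustrate. A minimal Python sketch of the idea follows (CSS implements this in Java within its alarm tooling; the addresses and the calling hook are hypothetical):

```python
import smtplib
from email.message import EmailMessage

def notify_new_alarm(pv_name, severity, smtp_host="localhost"):
    # Compose and send a plain-text notification for a newly raised alarm.
    msg = EmailMessage()
    msg["Subject"] = f"New alarm: {pv_name} ({severity})"
    msg["From"] = "codac-alarms@example.org"   # hypothetical addresses
    msg["To"] = "control-room@example.org"
    msg.set_content(f"Alarm raised on {pv_name} with severity {severity}.")
    with smtplib.SMTP(smtp_host) as smtp:
        smtp.send_message(msg)

# Would be called from the alarm server's new-alarm hook, e.g.:
# notify_new_alarm("SYS-PUMP-01:PRESSURE", "MAJOR")
```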
TUPPC004 | Scalable Archiving with the Cassandra Archiver for CSS | database, EPICS, controls, software | 554 |
An archive for process-variable values is an important part of most supervisory control and data acquisition (SCADA) systems, because it allows operators to investigate past events, thus helping to identify and resolve problems in the operation of the supervised facility. For large facilities like particle accelerators there can be more than one hundred thousand process variables to archive. When these process variables change at a rate of one hertz or more, a single computer system typically cannot handle the data processing and storage. The Cassandra Archiver has been developed to provide a simple-to-use, scalable data-archiving solution. It plugs seamlessly into Control System Studio (CSS), providing quick and simple access to all archived process variables. An Apache Cassandra database is used for storing the data, automatically distributing it over many nodes and providing high-availability features. This contribution depicts the architecture of the Cassandra Archiver and presents performance benchmarks outlining its scalability and comparing it to traditional archiving solutions based on relational databases.
Poster TUPPC004 [3.304 MB]
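To make the approach concrete, here is a sketch of writing process-variable samples to Cassandra with the Python driver. The schema, partitioning each channel by day and clustering by sample time, is a common time-series pattern assumed for illustration, not the Cassandra Archiver's actual layout:

```python
from datetime import datetime, timezone
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])
session = cluster.connect("archive")  # keyspace assumed to exist

# One partition per (channel, day) keeps partitions bounded even at >1 Hz.
session.execute("""
    CREATE TABLE IF NOT EXISTS samples (
        channel text, day text, ts timestamp, value double,
        PRIMARY KEY ((channel, day), ts))
""")
insert = session.prepare(
    "INSERT INTO samples (channel, day, ts, value) VALUES (?, ?, ?, ?)")

def archive(channel, value, ts=None):
    ts = ts or datetime.now(timezone.utc)
    session.execute(insert, (channel, ts.strftime("%Y-%m-%d"), ts, value))

archive("SR:BPM01:X", 0.042)
```

Bucketing by day spreads writes across the cluster while a time-range read of one channel remains a single contiguous slice of one or a few partitions.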
TUPPC011 | Development of an Innovative Storage Manager for a Distributed Control System | controls, framework, software, operation | 570 |
The !CHAOS(*) framework will provide all the services needed for controlling and managing a large scientific infrastructure, including a number of innovative features such as abstraction of services, devices and data; easy and modular customization; extensive data caching for a performance boost; and integration of all functionalities in a common framework. One of the most relevant innovations in !CHAOS is the History Data Service (HDS), which continuously acquires operating data pushed by device controllers. The core component of the HDS is the History Engine (HST). It implements the abstraction layer for the underlying storage technology and the logic for indexing and querying data. The HST drivers are designed to provide specific HDS tasks, such as indexing, caching and storing, and to wrap the chosen third-party database API with standard !CHAOS calls. The HST thus allows the data flows of the different !CHAOS services to be routed to independent channels, improving the global efficiency of the whole data-acquisition system.
* http://chaos.infn.it
* https://chaosframework.atlassian.net/wiki/display/DOC/General+View
* http://prst-ab.aps.org/abstract/PRSTAB/v15/i11/e112804
Poster TUPPC011 [6.729 MB]
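The driver abstraction described above can be sketched as a small interface. This Python reduction is illustrative only, and the method names are guesses at the indexing, caching and storing roles rather than the real !CHAOS API:

```python
from abc import ABC, abstractmethod

class HSTDriver(ABC):
    """Abstraction layer over one concrete storage technology (hypothetical)."""

    @abstractmethod
    def index(self, channel, meta):
        """Register or refresh the index metadata used to answer queries."""

    @abstractmethod
    def cache(self, channel, sample):
        """Keep the latest samples hot for live readers."""

    @abstractmethod
    def store(self, channel, sample):
        """Persist the sample via the wrapped third-party database API."""

class HistoryEngine:
    def __init__(self, routes):
        # routes maps a data-flow name (e.g. "live", "history") to a driver,
        # so different service data flows travel on independent channels.
        self.routes = routes

    def push(self, flow, channel, sample):
        driver = self.routes[flow]
        driver.cache(channel, sample)
        driver.store(channel, sample)
```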
TUPPC017 | Development of J-PARC Time-Series Data Archiver using Distributed Database System | database, EPICS, operation, linac | 584 |
J-PARC (Japan Proton Accelerator Research Complex) consists of a great deal of equipment. In the linac and the 3 GeV synchrotron, data from over 64,000 EPICS records used to control this equipment are collected. The data have so far been stored in an RDB system using PostgreSQL, but this system falls short in availability, performance, and extensibility. A new system architecture is therefore required, one flexible enough to cope with data that will keep growing for years to come. To address this problem, we considered adopting a distributed database architecture and constructed a demonstration system using Hadoop/HBase. We present the results of this demonstration.
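As a sketch of how a time-series archiver typically maps onto HBase, the following uses the happybase Python client with a row key of PV name plus fixed-width timestamp, so that a time-range query becomes a contiguous scan; the host, table and column names are illustrative assumptions, not J-PARC's actual schema:

```python
import time
import happybase

conn = happybase.Connection("hbase-master.example.org")  # hypothetical host
table = conn.table("pv_archive")  # column family 'v' assumed to exist

def put_sample(pv, value, ts=None):
    ts = ts if ts is not None else time.time()
    # Row key = PV name + fixed-width epoch millis: rows sort by time,
    # so a time-range query is a contiguous scan.
    row = f"{pv}:{int(ts * 1000):013d}".encode()
    table.put(row, {b"v:val": repr(value).encode()})

def get_range(pv, t0, t1):
    start = f"{pv}:{int(t0 * 1000):013d}".encode()
    stop = f"{pv}:{int(t1 * 1000):013d}".encode()
    return [(key, data[b"v:val"])
            for key, data in table.scan(row_start=start, row_stop=stop)]
```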
TUPPC034 | Experience Improving the Performance of Reading and Displaying Very Large Datasets | collider, network, instrumentation, software | 630 |
Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy.
There has been an increasing need over the last five years within the BNL accelerator community (primarily within the RF and Instrumentation groups) to collect, store and display data at high frequencies (1-10 kHz). Data throughput considerations when storing this data are manageable, but requests to display gigabytes of the collected data can quickly tax the speed at which data can be read from storage, transported over a network, and displayed on a user's computer monitor. This paper reports on efforts to improve the performance of both reading and displaying data collected by our data logging system. Our primary means of improving performance was to build a Data Server, a hardware/software solution built to respond to client requests for data. Its job is to improve performance by 1) increasing the speed at which data is read from disk, and 2) culling the data so that the returned datasets are visually indistinguishable from the requested datasets. This paper reports on statistics accumulated over the last two years that show improved data processing speeds and associated increases in the number and average size of client requests.
Poster TUPPC034 [1.812 MB]
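The abstract does not spell out the culling algorithm; one common technique that yields visually indistinguishable plots is min/max decimation, which keeps the minimum and maximum sample of each screen-pixel-sized bucket. A sketch, assuming samples are (time, value) pairs:

```python
def cull(samples, n_buckets):
    """Reduce (time, value) samples to ~2*n_buckets points by keeping the
    min and max of each bucket; the plotted trace looks unchanged."""
    if len(samples) <= 2 * n_buckets:
        return list(samples)
    out, width = [], len(samples) / n_buckets
    for i in range(n_buckets):
        bucket = samples[int(i * width):int((i + 1) * width)]
        lo = min(bucket, key=lambda s: s[1])
        hi = max(bucket, key=lambda s: s[1])
        out.extend(sorted({lo, hi}, key=lambda s: s[0]))
    return out

# e.g. one bucket per horizontal pixel of the client's plot:
# culled = cull(raw_samples, n_buckets=1600)
```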
TUPPC035 | A New EPICS Archiver | EPICS, controls, database, data-management | 632 |
This report presents a large-scale, high-performance distributed data storage system for acquiring and processing time-series data of modern accelerator facilities. Derived from the original EPICS Channel Archiver, this version consistently extends it through the integration of deliberately selected technologies, such as the HDF5 file format, the SciDB chunk-oriented interface, and an RDB-based representation of the DDS X-Types specification. These changes allowed the performance of the new version to scale towards data rates of 500 K scalar samples per second. Moreover, the new EPICS Archiver provides a common platform for managing both EPICS 3 records and the composite data types, such as images, of EPICS 4 applications.
Poster TUPPC035 [0.247 MB]
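As a sketch of the HDF5 side alone (the archiver's actual file layout, and its SciDB and X-Types integration, are not described at this level), here is the chunked-append pattern for per-PV time series, assuming a two-column (time, value) layout:

```python
import numpy as np
import h5py

def append_block(filename, pv, times, values):
    # One chunked, unlimited-length dataset per PV; columns are (time, value).
    with h5py.File(filename, "a") as f:
        if pv not in f:
            f.create_dataset(pv, shape=(0, 2), maxshape=(None, 2),
                             dtype="f8", chunks=(4096, 2))
        ds = f[pv]
        n = ds.shape[0]
        ds.resize(n + len(times), axis=0)
        ds[n:] = np.column_stack([times, values])

append_block("archive.h5", "SR:RF:FREQ", [0.0, 1.0, 2.0], [499.68e6] * 3)
```

Appending whole blocks rather than single samples keeps the write rate compatible with chunked storage, which is one reason archivers buffer samples before flushing.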
TUCOCA07 | A Streamlined Architecture of LCLS-II Beam Containment System | PLC, radiation, controls, diagnostics | 930 |
With the construction of LCLS-II, SLAC is developing a new Beam Containment System (BCS) to replace the aging hardwired system. This system will ensure that the beam is confined to the design channel at an approved beam power, to prevent unacceptable radiation levels in occupiable areas. Unlike other safety systems deployed at SLAC, the new BCS is distributed and has explicit response-time requirements, which impose design constraints on the system architecture. The design process complies with IEC 61508 and the system will have systematic capability SC3. This paper discusses the BCS built on Siemens S7-300F PLCs. For events requiring faster action, a hardwired shutoff path is provided in addition to peer safety functions within the PLC; safety performance is enhanced, and the additional diagnostic capabilities significantly relieve operational cost and burden. The new system is also more scalable and flexible, featuring improved configuration control, a simplified EPICS interface and reduced safety assurance testing effort. The new architecture fully leverages the safety PLC capabilities and streamlines design and commissioning through a single-processor, single-programmer approach.
Slides TUCOCA07 [1.802 MB]
WECOBA01 | Algebraic Reconstruction of Ultrafast Tomography Images at the Large Scale Data Facility | data-analysis, framework, synchrotron, radiation | 996 |
Funding: Karlsruhe Institute of Technology, Institute for Data Processing and Electronics; China Scholarship Council
The ultrafast tomography system built at the ANKA Synchrotron Light Source at KIT makes it possible to study moving biological objects with high temporal and spatial resolution. The resulting amounts of data are challenging in terms of the reconstruction algorithm, automatic processing software and computing. The standard reconstruction method in operation yields limited image quality because of the much smaller number of projections obtained from the ultrafast tomography. An algebraic reconstruction technique based on a more precise forward-transform model and compressive sampling theory is therefore investigated. It yields high-quality images but is computationally very intensive. For near real-time reconstruction, an automatic workflow is started after data ingest, processing a full data volume in parallel on the Hadoop cluster at the Large Scale Data Facility (LSDF) to greatly reduce the computing time. It will provide users not only better reconstruction results but also higher data-analysis efficiency. This study contributes to the construction of the fast tomography system at ANKA and will enhance its application in the fields of chemistry, biology and new materials.
Slides WECOBA01 [1.595 MB]
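The core of an algebraic reconstruction technique is the Kaczmarz-style row-by-row update, sketched below in NumPy. The paper's method additionally relies on a more precise forward model and compressive-sampling regularization, which this toy omits:

```python
import numpy as np

def art(A, b, n_sweeps=10, lam=0.5):
    """Kaczmarz/ART: for each measurement row a_i, move the current image
    estimate x toward the hyperplane a_i . x = b_i with relaxation lam."""
    x = np.zeros(A.shape[1])
    row_norms = np.einsum("ij,ij->i", A, A)
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            if row_norms[i] > 0:
                x += lam * (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

# Toy use: A is the (projections x voxels) system matrix, b the sinogram.
A = np.array([[1.0, 1.0], [1.0, -1.0]])
b = np.array([3.0, 1.0])
print(art(A, b, n_sweeps=50))   # approaches the solution [2, 1]
```

Because each update touches one projection ray at a time, ART tolerates having far fewer projections than filtered back-projection, at the price of many sweeps, which is what motivates the parallel Hadoop workflow described above.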
THPPC086 | Analyzing Off-normals in Large Distributed Control Systems using Deep Packet Inspection and Data Mining Techniques | network, controls, toolkit, operation | 1278 |
Funding: This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. #LLNL-ABS-632814
Network packet inspection using port mirroring provides the ultimate tool for understanding complex behaviors in large distributed control systems. The timestamped captures of network packets embody the full spectrum of protocol layers and uncover intricate and surprising interactions. No other tool is capable of penetrating through the layers of software and hardware abstractions to allow the researcher to analyze an integrated system composed of various operating systems, closed-source embedded controllers, software libraries and middleware. Being completely passive, the packet inspection does not modify timings or behaviors. The completeness and fine resolution of the network captures present an analysis challenge, due to huge data volumes and the difficulty of determining what constitutes signal and noise in each situation. We discuss the development of a deep packet inspection toolchain and the application of the R language for data mining and visualization. We present case studies demonstrating off-normal analysis in a distributed real-time control system. In each case, the toolkit pinpointed the problem's root cause, which had escaped traditional software debugging techniques.
Poster THPPC086 [2.353 MB]
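The paper's toolchain and R-based mining are its own; as a toy illustration of the kind of question timestamped captures answer, here is a Python/Scapy sketch that flags unusually long gaps between consecutive packets, a typical first step when hunting an off-normal stall (the capture file name is hypothetical):

```python
from scapy.all import rdpcap

def find_gaps(pcap_path, threshold_s=0.1):
    # Scan consecutive packet timestamps and report suspicious pauses.
    packets = rdpcap(pcap_path)
    gaps = []
    for prev, cur in zip(packets, packets[1:]):
        dt = float(cur.time) - float(prev.time)
        if dt > threshold_s:
            gaps.append((float(prev.time), dt))
    return gaps

for t, dt in find_gaps("mirror_port_capture.pcap"):
    print(f"{dt * 1000:.1f} ms gap starting at t={t:.6f}")
```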
THCOBA01 | Evolution of the Monitoring in the LHCb Online System | monitoring, database, status, interface | 1408 |
The LHCb online system relies on a large and heterogeneous IT infrastructure: it comprises more than 2000 servers and embedded systems and more than 200 network devices. The low-level monitoring of the equipment was originally done with Nagios. In 2011, we replaced the single Nagios instance with a distributed Icinga setup, presented at ICALEPCS 2011. This paper presents, with more hindsight, the improvements we observed as well as the problems encountered. Finally, we describe some of our prospects for the future after the Long Shutdown period, namely Shinken and Ganglia.
Slides THCOBA01 [1.426 MB]
THCOCA02 | White Rabbit Status and Prospects | network, controls, Ethernet, FPGA | 1445 |
The White Rabbit (WR) project started off to provide a sequencing and synchronization solution for the needs of CERN and GSI. Since then, many other users have adopted it to solve problems in the domain of distributed hard real-time systems. The paper discusses the current performance of WR hardware, along with present and foreseen applications. It also describes current efforts to standardize WR under IEEE 1588 and recent developments on the reliability of timely data distribution. It then analyzes the role of companies and the commercial Open Hardware paradigm, finishing with an outline of future plans.
Slides THCOCA02 [7.955 MB]