Paper | Title | Other Keywords | Page |
---|---|---|---|
MOPPC037 | Control Programs for the MANTRA Project at the ATLAS Superconducting Accelerator | controls, laser, ion, experiment | 162 |
Funding: This work was supported by the U.S. Department of Energy, Office of Nuclear Physics, under Contract No. DE-AC02-06CH11357.
The AMS (Accelerator Mass Spectrometry) project at ATLAS (Argonne Tandem Linac Accelerator System) complements the MANTRA (Measurement of Actinides Neutron TRAnsmutation) experimental campaign. To improve the precision and accuracy of AMS measurements at ATLAS, a new overall control system for AMS measurements needs to be implemented to reduce systematic errors arising from changes in transmission and ion source operation. The system will automatically and rapidly switch between different m/q settings, acquire the appropriate data and move on to the next setting. In addition to controlling the new multi-sample changer and laser ablation system, a master control program will communicate via the network to integrate the ATLAS accelerator control system, the FMA control computer and the data acquisition system.
Poster MOPPC037 [2.211 MB]
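The MOPPC037 abstract describes a master control program that steps through m/q settings, acquires data at each one and coordinates the accelerator, FMA and DAQ subsystems over the network. The following is a minimal sketch of that measurement loop only; the callables, setting values and dwell time are hypothetical placeholders, not the actual ATLAS interfaces.

```python
# Hypothetical sketch of an AMS measurement cycle: step through m/q settings,
# acquire data at each one, then move on to the next setting.
import time

def run_ams_sequence(set_mq, start_daq, stop_daq, mq_settings, dwell_s=2.0):
    """Switch between m/q settings, acquiring data at each one."""
    results = []
    for mq in mq_settings:
        set_mq(mq)                      # retune accelerator + FMA for this setting
        start_daq(tag=f"mq_{mq:.3f}")   # begin acquisition for this setting
        time.sleep(dwell_s)             # dwell while data are collected
        results.append((mq, stop_daq()))
        # ...then move on to the next setting
    return results

if __name__ == "__main__":
    # Stand-in callables so the sketch runs; real ones would talk to the
    # accelerator control system, FMA computer and DAQ over the network.
    run_ams_sequence(
        set_mq=lambda mq: print(f"retune to m/q = {mq}"),
        start_daq=lambda tag: print(f"start run {tag}"),
        stop_daq=lambda: 0,
        mq_settings=[5.923, 5.931, 5.940],
        dwell_s=0.1,
    )
```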
MOPPC081 | The Case of MTCA.4: Managing the Introduction of a New Crate Standard at Large Scale Facilities and Beyond | controls, electronics, operation, klystron | 285 |
The demands on hardware for control and data acquisition at large-scale research organizations have increased considerably in recent years. In response, modular systems based on the new MTCA.4 standard, jointly developed by large public research organizations and industrial electronics manufacturers, have pushed the boundaries of system performance in terms of analog/digital data processing, remote management capabilities, timing stability, signal integrity, redundancy and maintainability. While such public-private collaborations are not entirely new, new instruments are needed to test the acceptance of the MTCA.4 standard beyond the physics community, identify gaps in the technology portfolio and align collaborative R&D programs accordingly. We describe the ongoing implementation of a time-limited validation project as a means towards this end, highlight the challenges encountered so far and present solutions for a sustainable division of labor along the industry value chain.
MOPPC130 | A New Message-Based Data Acquisition System for Accelerator Control | database, controls, embedded, network | 413 |
The data logging system for the SPring-8 accelerator complex has been operating for 16 years as part of the MADOCA system. Collector processes periodically request distributed computers to collect sets of data over the synchronous ONC-RPC protocol at fixed cycles. We also developed a separate system, MyDAQ, for casual or temporary data acquisition, in which a data acquisition process running on a local computer pushes a BSD socket stream to a server at arbitrary times. Its "one stream per one signal" strategy keeps data management simple, but the system does not scale. For a new-generation accelerator project we have developed a new data acquisition system that goes beyond MADOCA in scale while retaining MyDAQ's simplicity. The new system, based on the ZeroMQ messaging library and the MessagePack serialization library, provides high availability, asynchronous messaging, flexibility in data expression and scalability. Its input/output plug-ins accept multiple protocols and send data to various data systems. This paper describes the design, implementation, performance, reliability and deployment of the system.
Poster MOPPC130 [0.197 MB]
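MOPPC130 above builds its DAQ on ZeroMQ messaging with MessagePack serialization. The sketch below illustrates that general pattern with the standard pyzmq and msgpack Python bindings, pushing serialized readings into a collector; the endpoint, signal name and message fields are invented for illustration and do not reflect the SPring-8 implementation.

```python
# Illustrative producer/collector pair using ZeroMQ + MessagePack
# (pip install pyzmq msgpack).  Endpoint and message fields are made up.
import threading
import time
import msgpack
import zmq

ENDPOINT = "tcp://127.0.0.1:5555"

def producer(n=3):
    sock = zmq.Context.instance().socket(zmq.PUSH)
    sock.connect(ENDPOINT)
    for i in range(n):
        record = {
            "signal": "ring/vacuum/ccg01",   # example signal name
            "time": time.time(),
            "value": 1.2e-7 + i * 1e-9,      # value need not be a single scalar
        }
        sock.send(msgpack.packb(record, use_bin_type=True))

def collector(n=3):
    sock = zmq.Context.instance().socket(zmq.PULL)
    sock.bind(ENDPOINT)
    for _ in range(n):
        record = msgpack.unpackb(sock.recv(), raw=False)
        print(record["signal"], record["value"])

if __name__ == "__main__":
    t = threading.Thread(target=collector)
    t.start()
    time.sleep(0.2)   # give the collector time to bind
    producer()
    t.join()
```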
MOPPC149 | A Messaging-Based Data Access Layer for Client Applications | controls, network, operation, interface | 460 |
Funding: US Department of Energy
The Fermilab accelerator control system has recently integrated a publish/subscribe infrastructure as a means of communication between Java client applications and data acquisition middleware. This supersedes a previous implementation based on Java Remote Method Invocation (RMI). The RMI implementation had issues with network firewalls, misbehaving client applications affecting the middleware, lack of portability to other platforms, and cumbersome authentication. The new system uses the AMQP messaging protocol and RabbitMQ data brokers. This decouples the clients from the middleware, is more portable to other languages, and has proven to be much more reliable. A Java client library provides for single synchronous operations as well as periodic data subscriptions. The new system is now used by the general synoptic display manager application as well as a number of new custom applications. A web service has also been written that provides easy access to control system data from many languages.
Poster MOPPC149 [4.654 MB]
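MOPPC149 replaces RMI with AMQP publish/subscribe through RabbitMQ brokers. As an illustration of that pattern (in Python with the pika client rather than the Fermilab Java library, whose API is not shown in the abstract), the sketch below subscribes to readings published on a topic exchange; the exchange name, routing key and host are assumptions.

```python
# Hedged sketch of an AMQP subscriber using pika (pip install pika).
# Exchange and routing-key names are invented; the real client library is Java.
import pika

def on_reading(channel, method, properties, body):
    # Each message carries one device reading published by the middleware.
    print(method.routing_key, body)

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.exchange_declare(exchange="device.readings", exchange_type="topic")

# Private, exclusive queue bound to the devices this client cares about.
queue = channel.queue_declare(queue="", exclusive=True).method.queue
channel.queue_bind(queue=queue, exchange="device.readings", routing_key="linac.#")

channel.basic_consume(queue=queue, on_message_callback=on_reading, auto_ack=True)
try:
    channel.start_consuming()   # periodic subscription: readings arrive as published
except KeyboardInterrupt:
    connection.close()
```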
TUMIB06 | Development of a Scalable and Flexible Data Logging System Using NoSQL Databases | database, controls, operation, network | 532 |
We have developed a scalable and flexible data logging system for SPring-8 accelerator control. The current SPring-8 data logging system, powered by a relational database management system (RDBMS), has been storing log data for 16 years. Through this experience we recognized the limited flexibility of an RDBMS for data logging: poor adaptability to different data formats and data acquisition cycles, complexity in data management and no horizontal scalability. To solve these problems, we chose a combination of two NoSQL databases for the new system: Redis as a real-time data cache and Apache Cassandra as the perpetual archive. Logged data are stored in both databases, serialized with MessagePack in a flexible format that is not limited to a single integer or real value. Apache Cassandra is a scalable and highly available column-oriented database, well suited to time-series logging data. Redis is a very fast in-memory key-value store that complements Cassandra's eventually consistent model. We built the data logging system on ZeroMQ messaging and have demonstrated its high performance and reliability in a long-term evaluation. It will be released for part of the control system this summer.
Slides TUMIB06 [0.182 MB]
Poster TUMIB06 [0.525 MB]
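TUMIB06 pairs Redis as a latest-value cache with Cassandra as the permanent time-series archive, storing MessagePack-serialized values in both. The sketch below shows one plausible write path in Python using the redis, cassandra-driver and msgpack packages; the keyspace, table schema, key naming and signal name are assumptions for illustration only.

```python
# Illustrative dual write: latest value into Redis, full history into Cassandra.
# pip install redis cassandra-driver msgpack.  Hypothetical schema, e.g.:
#   CREATE TABLE logdb.signal_log (signal text, ts timestamp, value blob,
#                                  PRIMARY KEY (signal, ts));
from datetime import datetime, timezone

import msgpack
import redis
from cassandra.cluster import Cluster

cache = redis.Redis(host="localhost", port=6379)
session = Cluster(["127.0.0.1"]).connect("logdb")
insert = session.prepare(
    "INSERT INTO signal_log (signal, ts, value) VALUES (?, ?, ?)"
)

def log_point(signal, value):
    """Store one logging point: Redis keeps the latest value, Cassandra keeps all."""
    packed = msgpack.packb(value, use_bin_type=True)  # scalar, list or dict alike
    ts = datetime.now(timezone.utc)
    cache.set(f"latest:{signal}", packed)             # fast read-back of newest value
    session.execute(insert, (signal, ts, packed))     # append to the archive

log_point("sr/mag/ps-q1/current", {"set": 120.4, "read": 120.39})
```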
TUPPC015 | On-line and Off-line Data Analysis System for SACLA Experiments | experiment, detector, data-analysis, laser | 580 |
The X-ray Free-Electron Laser facility SACLA has delivered X-ray laser beams to users since March 2012 [1]. Typical user experiments utilize two-dimensional imaging sensors, which generate 10 MBytes per accelerator beam shot. At the 60 Hz beam repetition rate, experimental data are accumulated at 600 MBytes/second using a dedicated data-acquisition (DAQ) system [2]. To analyze such a large amount of data, we developed a data-analysis system for SACLA experiments. The system consists of on-line and off-line sections. The on-line section performs on-the-fly filtering using data handling servers, which examine data quality and record the results in a database on an event-by-event basis. By referring to the database, we can select good events before performing off-line analysis. The off-line section performs precise analysis, such as physical image reconstruction and rough three-dimensional structure analysis of the data samples, on a high-performance computing system. For large-scale image reconstructions, we also plan to use an external supercomputer. In this paper, we present an overview and the future plan of the SACLA analysis system.
[1] T. Ishikawa et al., Nature Photonics 6, 540-544 (2012). [2] M. Yamaga et al., ICALEPCS 2011, TUCAUST06, 2011.
Poster TUPPC015 [10.437 MB]
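The on-line section of TUPPC015 tags every beam shot with quality results in a database so that only good events reach the expensive off-line analysis. Below is a minimal, self-contained sketch of that event-by-event bookkeeping, using SQLite and a made-up quality metric; the actual SACLA servers, database and selection criteria are not described in the abstract.

```python
# Sketch of event-by-event quality tagging and later "good event" selection.
# SQLite and the threshold stand in for the real SACLA database and filters.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE event_quality (run INT, tag INT, mean_adu REAL, good INT)")

def examine(run, tag, frame):
    """Record one shot's quality; 'frame' is a 2-D detector image (list of rows)."""
    mean_adu = sum(map(sum, frame)) / (len(frame) * len(frame[0]))
    good = int(mean_adu > 10.0)          # hypothetical hit-finding threshold
    db.execute("INSERT INTO event_quality VALUES (?, ?, ?, ?)",
               (run, tag, mean_adu, good))

# On-line: every shot is examined as it arrives.
examine(run=1, tag=100, frame=[[0, 1], [2, 3]])
examine(run=1, tag=101, frame=[[40, 55], [60, 38]])

# Off-line: select only the good events before reconstruction.
good_tags = [t for (t,) in db.execute(
    "SELECT tag FROM event_quality WHERE run = 1 AND good = 1")]
print(good_tags)    # -> [101]
```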
TUPPC029 | Integration, Processing, Analysis Methodologies and Tools for Ensuring High Data Quality and Rapid Data Access in the TIM* Monitoring System | monitoring, database, real-time, controls | 615 |
Processing, storing and analysing large amounts of real-time data is a challenge for every monitoring system. The performance of the system strongly depends on high-quality configuration data and on the ability of the system to cope with data anomalies. The Technical Infrastructure Monitoring system (TIM) addresses data quality issues by enforcing a workflow of strict procedures to integrate or modify data tag configurations. TIM's data acquisition layer architecture allows real-time analysis and rejection of irrelevant data. The discarded raw data (90,000,000 transactions/day) are stored in a database, then purged after gathering statistics. The remaining operational data (2,000,000 transactions/day) are transferred to a server running an in-memory database, ensuring their rapid processing. These data are currently stored for 30 days, allowing ad hoc historical data analysis. In this paper we describe the methods and tools used to guarantee the quality of configuration data, highlight the advanced architecture that ensures optimal access to operational data, and present the tools used to perform off-line data analysis.
* Technical Infrastructure Monitoring system
Poster TUPPC029 [0.742 MB]
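TUPPC029's acquisition layer rejects irrelevant raw updates (roughly 90 of every 92 million transactions per day) before the remainder reaches the in-memory operational store. The filter below illustrates one common way such rejection is done, a per-tag value/time deadband; the thresholds and the Update structure are assumptions, not TIM's actual rules.

```python
# Hypothetical deadband filter: forward a tag update only if its value changed
# significantly or the previously forwarded update is getting old.
import time
from dataclasses import dataclass

@dataclass
class Update:
    tag_id: int
    value: float
    timestamp: float

class DeadbandFilter:
    def __init__(self, value_deadband=0.5, max_age_s=300.0):
        self.value_deadband = value_deadband
        self.max_age_s = max_age_s
        self._last = {}    # tag_id -> last forwarded Update

    def is_relevant(self, update: Update) -> bool:
        last = self._last.get(update.tag_id)
        relevant = (
            last is None
            or abs(update.value - last.value) >= self.value_deadband
            or update.timestamp - last.timestamp >= self.max_age_s
        )
        if relevant:
            self._last[update.tag_id] = update
        return relevant

f = DeadbandFilter()
now = time.time()
print(f.is_relevant(Update(1, 20.0, now)))        # True  -> operational data
print(f.is_relevant(Update(1, 20.1, now + 1)))    # False -> discarded (statistics only)
print(f.is_relevant(Update(1, 21.0, now + 2)))    # True
```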
TUPPC062 | High-Speed Data Acquisition of Sensor Signals for Physical Model Verification at CERN HiRadMat (SHC-DAQ) | hardware, real-time, LabView, software | 718 |
A high-speed data acquisition system was successfully developed and put into production, in a harsh radiation environment, within a couple of months to test new materials impacted by proton beams for future use in beam intercepting devices. A 4 MHz ADC with high impedance and low capacitance was used to digitize the data with a 2 MHz bandwidth. The system requirements were to stream data at full speed on a trigger for up to 30 ms and then reconfigure the hardware in less than 500 ms to perform a 100 Hz acquisition for 30 seconds. Experimental data were acquired using LabVIEW Real-Time, relying on extensive embedded instrumentation (strain gauges and temperature sensors) and on acquisition boards hosted in a PXI crate. The data acquisition system has a dynamic range and sampling rate sufficient to acquire the very fast and intense shock waves generated by the impact. This presentation covers the requirements, design, development and commissioning of the system. The overall performance, user experience and preliminary results will also be reported.
Poster TUPPC062 [9.444 MB]
TUPPC076 | SNS Instrument Data Acquisition and Controls | controls, EPICS, neutron, interface | 755 |
Funding: SNS is managed by UT-Battelle, LLC, under contract DE-AC05-00OR22725 for the U.S. Department of Energy.
The data acquisition (DAQ) and control systems for the neutron beam line instruments at the Spallation Neutron Source (SNS) are undergoing upgrades addressing three critical areas: data throughput and data handling from DAQ to data analysis, instrument controls including the user interface and experiment automation, and the low-level electronics for DAQ and timing. This paper will outline the status of the upgrades and will address some of the challenges in implementing fundamental upgrades to an operating facility concurrently with the commissioning of existing beam lines and the construction of new beam lines.
TUPPC121 | caQtDM, an EPICS Display Manager Based on Qt | controls, EPICS, interface, Windows | 864 |
At the Paul Scherrer Institut (PSI) the display manager MEDM was used until recently for the synoptic displays at all our facilities, not only for EPICS but also for ACS, another control system built in-house. However, MEDM is based on MOTIF and Xt/X11, systems/libraries that are starting to age. Moreover, MEDM is difficult to extend with new entities. Therefore a new tool has been developed based on Qt; it reproduces the functionality of MEDM and is now in use at several facilities. As Qt is supported on several platforms, the tool also runs on systems such as Microsoft Windows. The existing MEDM displays were converted to the new format using the parser tool adl2ui. These were then edited further with Qt Designer and displayed with the new Qt-based display manager caQtDM. The integration of new entities into Qt Designer, and therefore into the Qt-based applications, is very easy, so that the system can readily be enhanced with new widgets. New features needed for our facility were implemented. The caQtDM application uses a C++ class to perform the data acquisition and display; this class can also be integrated into other applications.
Slides TUPPC121 [1.024 MB]
WECOAAB03 | Synchronization of Motion and Detectors and Continuous Scans as the Standard Data Acquisition Technique | detector, hardware, software, controls | 992 |
This paper describes the model, objectives and implementation of a generic data acquisition structure for an experimental station, which integrates the hardware and software synchronization of motors, detectors, shutters and, in general, any experimental channel or event related to the experiment. The implementation involves the management of hardware triggers, which can be derived from time, encoder positions or even events from the particle accelerator, combined with timestamps to guarantee the correct integration of software-triggered or slow channels. The infrastructure requires complex management of buffers from different sources, centralized and distributed, including interpolation procedures. ALBA uses Sardana, built on TANGO, as the generic control system, which provides the abstraction of and communication with the hardware, together with a complete macro editing and execution environment.
Slides WECOAAB03 [2.432 MB]
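A central point in WECOAAB03 is combining hardware-triggered buffers with software-timestamped "slow" channels via interpolation onto the trigger timestamps. The few lines below show that step in isolation with NumPy; the channel values and timing are invented, and the real Sardana/TANGO buffer management is far more involved.

```python
# Interpolating a slow, software-timestamped channel onto the timestamps of
# hardware-triggered acquisitions during a continuous scan (values invented).
import numpy as np

# Timestamps (s) at which the hardware trigger fired the fast detectors.
trigger_ts = np.array([0.00, 0.10, 0.20, 0.30, 0.40])

# A slow channel polled by software at an unrelated, coarser cadence.
slow_ts     = np.array([0.00, 0.25, 0.50])
slow_values = np.array([1.00, 1.50, 2.00])      # e.g. a temperature readback

# Linear interpolation aligns the slow channel with every trigger point,
# so all columns of the scan record share the same time base.
slow_on_triggers = np.interp(trigger_ts, slow_ts, slow_values)
print(slow_on_triggers)   # 1.0, 1.2, 1.4, 1.6, 1.8
```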
THCOBA03 | DIAMON2 – Improved Monitoring of CERN’s Accelerator Controls Infrastructure | monitoring, controls, GUI, framework | 1415 |
Monitoring heterogeneous systems in large organizations like CERN is always challenging. CERN's accelerator infrastructure includes a large number of devices (servers, consoles, FECs, PLCs), some still running legacy software such as LynxOS 4 or Red Hat Enterprise Linux 4 on older hardware with very limited resources. DIAMON2 is based on the CERN Common Monitoring platform. Using Java industry standards, notably Spring, Ehcache and the Java Message Service, together with a small-footprint C++ monitoring agent for real-time systems and a wide variety of additional data acquisition components (SNMP, JMS, JMX, etc.), DIAMON2 targets CERN's environment, providing an easily extensible, dynamically reconfigurable, reliable and scalable monitoring solution. This article traces the evolution of the CERN diagnostics and monitoring environment up to DIAMON2, describes the overall system architecture, its main components and their functionality, and reports the first operational experiences with the new system, observed under the very demanding infrastructure of CERN's accelerator complex.
Slides THCOBA03 [1.209 MB]
FRCOAAB06 | A Common Software Framework for FEL Data Acquisition and Experiment Management at FERMI | experiment, FEL, TANGO, framework | 1481 |
Funding: Work supported in part by the Italian Ministry of University and Research under grants FIRB-RBAP045JF2 and FIRB-RBAP06AWK3.
After installation and commissioning, the Free Electron Laser facility FERMI is now open to users. As of December 2012, three experimental stations dedicated to different scientific areas are available for user research proposals: Low Density Matter (LDM), Elastic & Inelastic Scattering (EIS), and Diffraction & Projection Imaging (DiProI). A flexible and highly configurable common framework has been developed and successfully deployed for experiment management and shot-by-shot data acquisition. This paper describes the software architecture behind all the experiments performed so far: the combination of the EXECUTER script engine with a specialized data acquisition device (FERMIDAQ) based on TANGO. Finally, experimental applications, performance results and future developments are presented and discussed.
Slides FRCOAAB06 [5.896 MB]
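FRCOAAB06 couples the EXECUTER script engine with a TANGO data acquisition device (FERMIDAQ) for shot-by-shot acquisition. As a rough illustration of how a TANGO client can collect per-shot values through change events, here is a PyTango sketch; the device name and attribute are hypothetical, and the real FERMIDAQ/EXECUTER interfaces are not reproduced here.

```python
# Hedged PyTango sketch: subscribe to a (hypothetical) per-shot attribute of a
# TANGO DAQ device and collect one value per FEL shot (pip install pytango).
import time
import tango

shots = []

def on_shot(event):
    # Called by the TANGO event layer each time the device pushes a new shot value.
    if not event.err:
        shots.append(event.attr_value.value)

daq = tango.DeviceProxy("fermi/daq/diproi01")         # made-up device name
event_id = daq.subscribe_event("i0_monitor",          # made-up per-shot attribute
                               tango.EventType.CHANGE_EVENT,
                               on_shot)
try:
    time.sleep(5.0)   # a script engine would run the experiment steps here
finally:
    daq.unsubscribe_event(event_id)

print(f"collected {len(shots)} shots")
```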