Keyword: database
Paper Title Other Keywords Page
MOM305 Control System for a Dedicated Accelerator for SACLA Wide-Band Beam Line controls, electron, operation, experiment 74
 
  • N. Hosoda, T. Fukui
    RIKEN SPring-8 Center, Innovative Light Sources Division, Hyogo, Japan
  • M. Ishii
    JASRI/SPring-8, Hyogo-ken, Japan
  • T. Ohshima, T. Sakurai, H. Takebe
    RIKEN SPring-8 Center, Sayo-cho, Sayo-gun, Hyogo, Japan
 
  This paper reports on the control system for a dedicated accelerator for the SACLA wide-band beam line (BL1): its requirements, construction strategies, and present status. As part of the SACLA BL1 upgrade plan, it was decided to move the SCSS test accelerator, which operated from 2005 to 2013, to the upstream end of BL1 in the undulator hall. The control system of the accelerator had to operate seamlessly with SACLA, reuse old components as much as possible, and avoid interrupting SACLA user experiments during start-up. The system was constructed with MADOCA, which is already used at SACLA. Among the control components, VME optical DIO cards and chassis for magnet power supplies were reused after cleaning and confirming that their quality had not degraded. RF conditioning of the accelerator started in October 2014, while SACLA user experiments were in progress. A data collection system, myCC, was prepared with a MADOCA-compatible interface and a database independent of SACLA. It enabled an efficient start-up, and after sufficient debugging the data collection was successfully merged into SACLA in January 2015. Beam commissioning of the accelerator is planned for autumn 2015.
Slides MOM305 [0.969 MB]
Poster MOM305 [0.368 MB]
 
MOPGF006 The Renovation of the CERN Controls Configuration Service controls, software, factory, GUI 103
 
  • L. Burdzanowski, C. Roderick
    CERN, Geneva, Switzerland
 
  The Controls Configuration Service (CCS) is a key component in CERN's data-driven accelerator Control System. Based around a central database, the service also provides a range of client APIs and user interfaces, enabling configuration of controls for CERN's accelerator complex. The service has existed for 35 years (29 of them based on an Oracle DBMS). There has been substantial evolution of the CCS over time to cater for changing requirements and technology advances. Inevitably this has led to increases in CCS complexity and an accumulation of technical debt. These two aspects combined have a negative impact on the flexibility and maintainability of the CCS, making it a potential bottleneck for Control System evolution. This paper describes ongoing renovation efforts (started mid-2014) to tackle the aforementioned issues whilst ensuring overall system stability. In particular, it covers architectural changes, the agile development process in place (bringing users close to the development cycle), and the deterministic approach used to treat technical debt. Collectively these efforts are leading towards a successful renovation of a core element of the Control System.
Poster MOPGF006 [4.512 MB]
 
MOPGF021 Database Archiving System for Supervision Systems at CERN: a Successful Upgrade Story controls, operation, experiment, software 129
 
  • P. Golonka, M. Gonzalez-Berges, J. Hofer, A. Voitier
    CERN, Geneva, Switzerland
 
  Almost 200 controls applications, in domains such as LHC magnet protection, cryogenics and vacuum systems, cooling and ventilation, and electrical network supervision, have been developed and are currently maintained by the CERN Industrial Controls Group in close collaboration with several equipment groups. The supervision layer of these systems is based on the same technologies as 400 other systems running in the LHC experiments (e.g. WinCC Open Architecture, Oracle). During the two-year LHC Long Shutdown 1, the 200 systems were successfully migrated from a file-based archiver to a centralized infrastructure based on Oracle databases. This migration has homogenized the archiving chain for all CERN systems, while at the same time presenting a number of additional challenges. The paper presents the design, the necessary optimizations and the migration process that allowed us to meet unprecedented data-archiving rates (unachievable with the previously used system) and to interface with the existing long-term storage system (LHC LoggingDB) to assure data continuity.
Poster MOPGF021 [3.510 MB]
 
MOPGF050 Tango-Kepler Integration at ELI-ALPS TANGO, controls, device-server, framework 212
 
  • P. Ács, S. Brockhauser, L.J. Fülöp, V. Hanyecz, M. Kiss, Cs. Koncz, L. Schrettner
    ELI-ALPS, Szeged, Hungary
 
  Funding: The ELI-ALPS project (GOP-1.1.1-12/B-2012-000, GINOP-2.3.6-15-2015-00001) is supported by the European Union and co-financed by the European Regional Development Fund.
ELI-ALPS will provide a wide range of attosecond pulses which will be used for performing experiments by international research groups. ELI-ALPS will use the TANGO Controls framework to build up the central control system and to integrate the autonomous subsystems with respect to software monitoring and control. Besides a robust, integrated central control system, a flexible and dynamic high-level environment could be beneficial. The envisioned users will come from diverse fields including chemistry, biology, physics and medicine, and most will not have a programming or scripting background. A workflow system, however, provides visual programming facilities in which the logic can be drawn in a form understandable to these users. We have integrated TANGO into the Kepler workflow system because it offers a large set of actors covering the natural sciences, and because it has the potential to run workflows on HPC or grid resources. We demonstrated the usability of the development with a beamline simulation. The TANGO-Kepler integration provides an easy-to-use environment for users and can thus also facilitate, for example, the standardization of measurement protocols.
 
Poster MOPGF050 [0.668 MB]
 
MOPGF072 Hot Checkout for 12 GeV at Jefferson Lab status, operation, software, software-tool 258
 
  • R.J. Slominski, T. L. Larrieu
    JLab, Newport News, Virginia, USA
 
  Funding: Authored by Jefferson Science Associates, LLC under U.S. DOE Contract No. DE-AC05-06OR23177. The U.S. Government retains a non-exclusive, paid-up, irrevocable, world-wide license to this manuscript.
A new hot checkout process was implemented at Jefferson Lab for the upgraded 12 GeV accelerator. The previous process proved insufficient in the fall of 2011 when a fire broke out in a septa magnet along the beam line due to a lack of communication about the status of systems. The improved process provides rigorous verification of system readiness thus protecting property while minimizing program delays. To achieve these goals, a database and web application were created to maintain an accurate list of machine components and coordinate and record verification checks by each responsible group. The process requires groups to publish checklists detailing each system check to encourage good work practice. Within groups, the process encourages two independent checks of each component: the first by a technician, and a second by the group leader. Finally, the application provides a dashboard display of checkout progress for each system and beam destination of the machine allowing for informed management decisions. Successful deployment of the new process has led to safe and efficient machine commissioning.
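The checkout workflow described above (a first check by a technician, an independent second check by the group leader, and a per-system dashboard rollup) can be sketched as a small data model. This is a hypothetical illustration, not JLab's actual application; all class and field names are invented.

```python
from dataclasses import dataclass, field
from enum import Enum

class CheckStatus(Enum):
    NOT_READY = "not ready"
    CHECKED = "checked"   # first check, typically by a technician
    READY = "ready"       # second, independent check by the group leader

@dataclass
class Component:
    name: str
    system: str
    status: CheckStatus = CheckStatus.NOT_READY
    checked_by: list = field(default_factory=list)

    def record_check(self, person: str, is_group_leader: bool) -> None:
        """Advance the checkout state; the two checks must be independent."""
        if person in self.checked_by:
            raise ValueError("second check must come from a different person")
        self.checked_by.append(person)
        if is_group_leader and self.status is CheckStatus.CHECKED:
            self.status = CheckStatus.READY
        else:
            self.status = CheckStatus.CHECKED

def system_progress(components):
    """Fraction of fully verified components per system (dashboard rollup)."""
    totals, ready = {}, {}
    for c in components:
        totals[c.system] = totals.get(c.system, 0) + 1
        if c.status is CheckStatus.READY:
            ready[c.system] = ready.get(c.system, 0) + 1
    return {s: ready.get(s, 0) / n for s, n in totals.items()}
```

In this sketch a component only becomes READY when the second checker is the group leader, mirroring the two-stage verification the paper describes.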
 
Poster MOPGF072 [3.851 MB]
 
MOPGF092 Integration of the TRACK Beam Dynamics Model to Decrease LINAC Tuning Times simulation, controls, emittance, real-time 291
 
  • C.E. Peters, C. Dickerson, F. Garcia, M.A. Power
    ANL, Argonne, Illinois, USA
 
  Funding: This work is supported by the U.S. DOE, Office of Nuclear Physics, contract No. DE-AC02-06CH11357. This research used resources of ANL's ATLAS facility, which is a DOE Office of Science User Facility.
The Accelerator R&D Group within the Argonne National Laboratory (ANL) Physics Division maintains a beam dynamics model named TRACK. This simulation code has the potential to assist operators in visualizing key performance parameters of the Argonne Tandem Linear Accelerating System (ATLAS). By having real-time access to visual and animated models of the particle beam transverse and longitudinal phase spaces, operators can more quickly iterate to a final machine tune. However, this effort requires a seamless integration into the control system, both to extract initial run-time information from the accelerator, and to present the simulation results back to the users. This paper presents efforts to pre-process, batch execute, and visualize TRACK particle beam physics simulations in real-time via the ATLAS Control System.
 
Poster MOPGF092 [2.203 MB]
 
MOPGF102 The New Control Software for the CERN NA62 Beam Vacuum controls, vacuum, PLC, software 314
 
  • S. Blanchard, F. Antoniotti, R. Ferreira, P. Gomes, A. Gutierrez, B. Jenninger, F. Mateo, H.F. Pereira
    CERN, Geneva, Switzerland
  • L. Kopylov, S. Merker
    IHEP, Moscow Region, Russia
 
  NA62 is a fixed-target experiment to measure very rare kaon decays at the CERN Super Proton Synchrotron. The NA62 experimental line comprises several large detectors installed inside a vacuum vessel with a length of 250 m and an internal diameter of up to 2.8 m. The vacuum installation consists of 170 remotely controlled pumps, valves and gauges. The operational specifications of NA62 require a complex vacuum control system: tight interaction between vacuum controllers and detector controllers, including pumping or venting vetoes and detector start-stop interlocks; most of the valves are interlocked, including the large vacuum sector gate valves; and the vacuum devices are driven by 20 logic processes. The vacuum control system is based on commercial Programmable Logic Controllers (Siemens S7-300 series PLCs) and a Supervisory Control And Data Acquisition application (Siemens WinCC OA). The control software is built upon the standard framework used for CERN accelerator vacuum systems, with some specific developments. We describe the controls architecture and report on the particular requirements and the solutions implemented.
Poster MOPGF102 [2.675 MB]
 
MOPGF105 Device Control Database Tool (DCDB) EPICS, PLC, controls, Linux 326
 
  • P.A. Maslov, M. Komel, M. Pavleski, K. Žagar
    Cosylab, Ljubljana, Slovenia
 
  Funding: This project has received funding from the European Union's Seventh Framework Programme for research, technological development and demonstration under grant agreement no 289485.
We have developed a control system configuration tool, which provides an easy-to-use interface for quick configuration of the entire facility. It uses Microsoft Excel as the front-end application and allows the user to quickly generate and deploy IOC configuration (EPICS start-up scripts, alarms and archive configuration) onto IOCs; start, stop and restart IOCs, alarm servers and archive engines, and more. The DCDB tool utilizes a relational database, which stores information about all the elements of the accelerator. The communication between the client, database and IOCs is realized by a REST server written in Python. The key feature of the DCDB tool is that the user does not need to recompile the source code. It is achieved by using a dynamic library loader, which automatically loads and links device support libraries. The DCDB tool is compliant with CODAC (used at ITER and ELI-NP), but can also be used in any other EPICS environment (e.g. it has been customized to work at ESS).
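A tool of this kind typically renders IOC start-up scripts from records held in its database. The following is a minimal, hypothetical sketch of that generation step; the input schema, field names and script template are invented for illustration and are not DCDB's actual ones.

```python
# Hypothetical sketch: render an EPICS IOC start-up script from configuration
# data that a tool like DCDB would keep in its relational database.

def render_startup_script(ioc: dict) -> str:
    lines = [f'epicsEnvSet("IOC", "{ioc["name"]}")']
    for lib in ioc.get("support_libs", []):
        # The dynamic library loader avoids recompiling the IOC binary
        lines.append(f'dlload("{lib}")')
    for db in ioc.get("databases", []):
        macros = ",".join(f"{k}={v}" for k, v in db.get("macros", {}).items())
        lines.append(f'dbLoadRecords("{db["file"]}", "{macros}")')
    lines.append("iocInit()")
    return "\n".join(lines)
```

A caller would feed it one record per IOC, e.g. `render_startup_script({"name": "vac-ioc-01", "support_libs": ["gauge.so"], "databases": [{"file": "gauge.db", "macros": {"P": "VAC:"}}]})`, and deploy the resulting script to the target host.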
 
Poster MOPGF105 [2.749 MB]
 
MOPGF115 LabVIEW as a New Supervision Solution for Industrial Control Systems controls, LabView, PLC, framework 349
 
  • O.Ø. Andreassen, F. Augrandjean, E. Blanco Vinuela, M.F. Gomez De La Cruz, A. Rijllart
    CERN, Geneva, Switzerland
  • D. Abalo Miron
    University of Oviedo, Oviedo, Spain
 
  To shorten the development time of supervision applications, CERN has developed the UNICOS framework, which simplifies the configuration of the front-end devices and the supervision (SCADA) layer. At CERN the SCADA system of choice is WinCC OA, but for specific projects (small size, not connected to accelerator operation, or not located at CERN) a more customisable SCADA using LabVIEW is an attractive alternative. Therefore a similar system, called UNICOS in LabVIEW (UiL), has been implemented. It provides a set of highly customisable re-usable components, devices and utilities. Because LabVIEW uses different programming methods than WinCC OA, the tools for automatic instantiation of devices on both the front-end and supervision layers had to be re-developed, but the configuration files of the devices and the SCADA can be reused. This paper reports how the implementation was done, describes the first project implemented in UiL, and gives an outlook on other possible applications.
Poster MOPGF115 [4.417 MB]
 
MOPGF140 Integration of PLC's in Tango Control Systems Using PyPLC TANGO, controls, PLC, GUI 413
 
  • S. Rubio-Manrique, M. Broseta, G. Cuní, D. Fernández-Carreiras, A. Rubio, J. Villanueva
    ALBA-CELLS Synchrotron, Cerdanyola del Vallès, Spain
 
  The Equipment Protection Systems (EPS) and Personnel Safety Systems (PSS) of the ALBA Synchrotron are complex, highly distributed control systems based on PLCs from different vendors. EPS and PSS not only manage the interlocks of the whole ALBA facility but also provide an extensive network of analog and digital sensors that collects information from all subsystems, as well as their logical states. TANGO is the control system framework used at ALBA, providing several tools and services (GUIs, archiving, alarms) with which the EPS and PSS must be integrated. PyPLC, a dynamic Tango device, has been developed in Python to provide a flexible interface and enable PLC developers to update it automatically. This paper describes how the protection systems and the PLC code generation cycle have been fully integrated within the TANGO control system at ALBA.
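The core idea of such a dynamic device (attributes created at runtime from a mapping maintained by the PLC developers, so the device code never needs editing) can be illustrated without the Tango dependency. The class, the register map and the read callable below are all hypothetical.

```python
# Sketch of the dynamic-mapping idea behind a device like PyPLC: read methods
# are generated at runtime from a register map, so extending the map extends
# the device. Tango itself is not required for the illustration.

class DynamicPLCDevice:
    def __init__(self, register_map, read_register):
        # register_map: attribute name -> PLC address
        # read_register: callable(address) -> value (e.g. a Modbus read)
        self._read = read_register
        for name, address in register_map.items():
            # Bind each generated method to its own address (capture by value)
            setattr(self, f"read_{name}",
                    (lambda addr: lambda: self._read(addr))(address))
```

A fake PLC (any mapping from address to value) is enough to exercise it; in a real device the callable would talk to the fieldbus.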
Poster MOPGF140 [2.246 MB]
 
MOPGF143 Integration of Heterogeneous Access Control Functionalities Using the New Generation of NI cRIO 903x Controllers controls, software, LabView, real-time 424
 
  • F. Valentini, T. Hakulinen, L. Hammouti, P. Ninin
    CERN, Geneva, Switzerland
 
  Engineering of Personnel Protection Systems (PPS) in large research facilities such as CERN nowadays represents a major challenge in terms of requirements for safety and access control functionalities. PPS are usually conceived as two separate, independent entities: a Safety System dealing with machine interlocks and subject to rigid safety standards (e.g. IEC 61508), and a conventional Access Control System built by integrating different COTS technologies. The latter provides a large palette of functionalities and tools intended either to assist users accessing the controlled areas or to automate a certain number of control room operators' tasks. In this paper we analyse the benefits in terms of performance, cost and system maintainability of adopting the new generation of NI multipurpose cRIO 903x controllers. These new devices allow an optimal integration of a large set of access control functionalities, namely: automatic control of motorized devices, identification/counting of users in a zone, implementation of dedicated anti-intrusion algorithms, graphical display of relevant information for local users, and remote control/monitoring for control room operators.
Poster MOPGF143 [3.045 MB]
 
MOPGF149 Nuclotron and NICA Control System Development Status TANGO, controls, network, monitoring 437
 
  • E.V. Gorbachev, V. Andreev, A. Kirichenko, D.V. Monakhov, S. Romanov, T.V. Rukoyatkina
    JINR, Dubna, Moscow Region, Russia
  • G.S. Sedykh, V. Volkov
    JINR/VBLHEP, Dubna, Moscow region, Russia
 
  The Nuclotron is a 6 GeV/n superconducting proton synchrotron operating at JINR, Dubna since 1993. It will be the core of the future accelerating complex NICA which is under construction now. NICA will provide collider experiments with heavy ions at nucleon-nucleon centre-of-mass energies of 4-11 GeV. The TANGO based control system of the accelerating complex is under development now. This paper describes its structure, main features and present status.  
Poster MOPGF149 [2.424 MB]
 
MOPGF164 Status of the EPICS-Based Control and Interlock System of the Belle II PXD controls, EPICS, detector, power-supply 476
 
  • M. Ritzert
    Heidelberg University, Heidelberg, Germany
 
  Funding: This work has been supported by the German Federal Ministry of Education and Research (BMBF) under Grant Identifier 05H12VHH.
The Belle II e+/e- collider experiment at KEK will include a new pixelated detector (PXD) based on DEPFET technology as the innermost layer. This detector requires a complex control and readout infrastructure consisting of several ASICs and FPGA boards. This paper presents the architecture and EPICS-based implementation of the control, alarm, and interlock systems, their interfaces to the various subsystems, and to the NSM2-based Belle II run control. The complex startup sequence is orchestrated by a state machine. CSS is used to implement the user interface. The alarm system uses CSS/BEAST and is designed to minimize spurious alarms. The interlock system consists of two main parts: a hardware-based system that triggers on adverse environmental conditions (temperature, humidity, radiation), and a software-based system. Strict monitoring, including the use of heartbeats, ensures permanent protection and fast reaction times. The power supply system in particular is monitored for malfunctions, and all user inputs are verified before they are sent to the hardware. The control system also incorporates archiving, logging, and reporting in a uniform workflow for ease of daily operation.
For the DEPFET Collaboration.
 
Poster MOPGF164 [6.746 MB]
 
TUB3O04 The LMJ System Sequences Adaptability (French MegaJoule Laser) laser, controls, target, GUI 533
 
  • Y. Tranquille-Marques, J. Fleury, H. Graillot, J. Nicoloso
    CEA, LE BARP cedex, France
  • S. Bailleux, N. Chapron
    AREVA TA Site LMJ CEA /CESTA, LE BARP cedex, France
  • J. Gende
    ALTEN CEA /CESTA, Merignac, France
 
  The French Atomic and Alternative Energies Commission (CEA: Commissariat à l'Energie Atomique et aux Energies Alternatives) is currently building the Laser MegaJoule facility. In 2014, the first 8 beams and the target area were commissioned and the first physics campaign (a set of several shots) was achieved. On the LMJ, each shot requires more or less the same operations, except for the settings that change from shot to shot. The supervisory controls provide five semi-automated sequence programs to repeat and schedule actions on devices. Three of them are now regularly used to drive the LMJ. Sequence programs need qualities such as flexibility, contextual adaptability, reliability and repeatability. Currently, the calibration shots sequence drives 328 actions towards local control systems. However, this sequence is already dimensioned to drive 22 bundles, which will require managing almost 5,300 actions. This paper introduces the organization of the control system used by the sequence programs, the sequence adjustment files, the GRAFCETs of the sequences, the GUIs, the software and the different tools used to control the facility.
Slides TUB3O04 [11.273 MB]
 
TUD3O03 REMUS: The new CERN Radiation and Environment Monitoring Unified Supervision monitoring, radiation, interface, operation 574
 
  • A. Ledeul, G. Segura, R.P.I. Silvola, B. Styczen, D. Vasques Ribeira
    CERN, Geneva, Switzerland
 
  The CERN Health, Safety and Environment Unit is mandated to provide a Radiation and Environment Monitoring SCADA system for all CERN accelerators and experiments, as well as the environment. In order to face the increasing demands of radiation protection and to continuously assess both the conventional and the radiological impact on the environment, CERN is developing and progressively deploying its new supervisory system, called REMUS (Radiation and Environment Monitoring Unified Supervision). This new WinCC OA based system aims for optimum flexibility and scalability, based on the experience acquired during the development and operation of the previous CERN radiation and environment supervisory systems (RAMSES and ARCON). REMUS will interface with more than 70 device types, providing about 3,000 measurement channels (approximately 500,000 tags) by the end of 2016. This paper describes the architecture of the system, as well as the innovative design adopted in order to face the challenges of heterogeneous equipment interfacing, diversity of end users and non-stop operation.
Slides TUD3O03 [2.217 MB]
 
WEA3O02 Recent Advancements and Deployments of EPICS Version 4 EPICS, controls, detector, experiment 589
 
  • G.R. White, M.V. Shankar
    SLAC, Menlo Park, California, USA
  • A. Arkilic, L.R. Dalesio, M.A. Davidsaver, M.R. Kraimer, N. Malitsky, B.S. Martins
    BNL, Upton, Long Island, New York, USA
  • S.M. Hartman, K.-U. Kasemir
    ORNL, Oak Ridge, Tennessee, USA
  • D.G. Hickin
    DLS, Oxfordshire, United Kingdom
  • A.N. Johnson, S. Veseli
    ANL, Argonne, Ilinois, USA
  • T. Korhonen
    ESS, Lund, Sweden
  • R. Lange
    ITER Organization, St. Paul lez Durance, France
  • M. Sekoranja
    Cosylab, Ljubljana, Slovenia
  • G. Shen
    FRIB, East Lansing, Michigan, USA
 
  EPICS version 4 is a set of software modules that add to the base of the EPICS toolkit for advanced control systems. Version 4 adds the possibility of process variable values of structured data, an introspection interface for dynamic typing plus some standard types, high-performance streaming, and a new front-end processing database for managing complex data I/O. A synchronous RPC-style facility has also been added so that the EPICS environment supports service-oriented architecture. We introduce EPICS and the new features of version 4. Then we describe selected deployments, particularly for high-throughput experiment data transport, experiment data management, beam dynamics and infrastructure data.  
Slides WEA3O02 [2.413 MB]
 
WEB3O04 Accelerator Modelling and Message Logging with ZeroMQ controls, CORBA, framework, GUI 610
 
  • J.T.M. Chrin, M. Aiba, A. Rawat, Z. Wang
    PSI, Villigen PSI, Switzerland
 
  ZeroMQ is an emerging message-oriented middleware architecture that is being increasingly adopted in the software engineering of distributed control and data acquisition systems within the accelerator community. The rich array of built-in core messaging patterns may, however, be equally applied within the domain of high-level applications, where the seamless integration of accelerator models and message logging capabilities serves, respectively, to extend the effectiveness of beam dynamics applications and to allow for their monitoring. Various advanced patterns, including intermediaries and proxies, further provide for reliable service-oriented brokers, as may be required in real-world operations. A report on an investigation into ZeroMQ's suitability for integrating key distributed components into high-level applications, and the experience gained, is presented.
Slides WEB3O04 [3.542 MB]
 
WED3O03 MADOCA II Data Logging System Using NoSQL Database for SPring-8 data-acquisition, controls, embedded, operation 648
 
  • A. Yamashita, M. Kago
    JASRI/SPring-8, Hyogo-ken, Japan
 
  The data logging system for SPring-8 was upgraded to a new system using NoSQL databases, as part of the MADOCA II framework. It has been collecting all the log data required for accelerator control without any trouble since the upgrade. The previous system, powered by a relational database management system (RDBMS), had been operating since 1997 and had grown with the development of the accelerators. However, the RDBMS-based system became difficult to adapt to new requirements such as variable-length data storage, data mining from large data volumes, and fast data acquisition. New software technologies provided solutions to these problems. In the new system, we adopted two NoSQL databases, Apache Cassandra and Redis, for data storage. Apache Cassandra is utilized for the perpetual archive; it is a scalable and highly available column-oriented database suitable for time-series data. Redis is used for the real-time data cache because it is a very fast in-memory key-value store. The data acquisition part of the new system was also built on ZeroMQ messaging, with payloads serialized by MessagePack. Operation of the new system started in January 2015 after a long-term evaluation of over one year.
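The division of labour between the two stores (Redis overwriting a latest-value cache while Cassandra accumulates a perpetual time-series archive) can be illustrated with a toy in-memory model. This is only a sketch of the pattern; a real deployment would use the actual databases, and all names here are invented.

```python
import time
from collections import defaultdict

class TwoTierLogger:
    """Toy illustration of the MADOCA II storage split: a fast latest-value
    cache (the Redis role) and an append-only time-series archive (the
    Cassandra role). Both tiers are plain in-memory structures here."""

    def __init__(self):
        self._cache = {}                       # signal -> latest (ts, value)
        self._archive = defaultdict(list)      # signal -> [(ts, value), ...]

    def log(self, signal, value, ts=None):
        ts = time.time() if ts is None else ts
        self._cache[signal] = (ts, value)            # overwrite: real-time view
        self._archive[signal].append((ts, value))    # append: perpetual archive

    def latest(self, signal):
        return self._cache[signal]

    def history(self, signal, t0, t1):
        return [(t, v) for t, v in self._archive[signal] if t0 <= t <= t1]
```

The cache answers "what is the value now?" in constant time, while range queries over the archive serve data mining, which is exactly why two specialized stores beat one general-purpose RDBMS here.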
Slides WED3O03 [0.513 MB]
 
WED3O04 HDB++: A New Archiving System for TANGO TANGO, device-server, GUI, interface 652
 
  • L. Pivetta, C. Scafuri, G. Scalamera, G. Strangolino, L. Zambon
    Elettra-Sincrotrone Trieste S.C.p.A., Basovizza, Italy
  • R. Bourtembourg, J.L. Pons, P.V. Verdier
    ESRF, Grenoble, France
 
  The TANGO release 8 led to several enhancements, including the adoption of the ZeroMQ library for faster and lightweight event-driven communication. Exploiting these improved capabilities, a high-performance, event-driven archiving system written in C++ has been developed. It inherits the database structure from the existing TANGO Historical Data Base (HDB) and introduces new storage architecture possibilities, better internal diagnostic capabilities and an optimized API. Its design allows storing data in traditional database management systems such as MySQL or in a NoSQL database such as Apache Cassandra. This paper describes the software design of the new HDB++ archiving system and the current state of the implementation, and gives some performance figures and use cases.
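For the Cassandra back-end, time-series storage of this kind is typically modelled with a compound primary key so that each attribute's history clusters by timestamp within bounded partitions. The table below is an illustrative sketch in CQL, not the actual HDB++ schema; all column names are assumptions.

```sql
-- Illustrative only (not the real HDB++ schema): scalar attribute history,
-- partitioned by attribute and day so time-range queries stay inside a
-- bounded partition and rows arrive pre-sorted by timestamp.
CREATE TABLE att_scalar_double_ro (
    att_conf_id timeuuid,   -- which TANGO attribute
    period      text,       -- e.g. '2015-10-17', bounds partition size
    data_time   timestamp,  -- event time, clustering column
    value_r     double,
    quality     int,
    PRIMARY KEY ((att_conf_id, period), data_time)
) WITH CLUSTERING ORDER BY (data_time ASC);
```

A query for one attribute over one day then touches a single partition, which is what makes this layout attractive for event-driven archiving.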
Slides WED3O04 [1.392 MB]
 
WED3O05 Big Data Analysis and Analytics with MATLAB software, framework, data-acquisition, controls 656
 
  • D.S. Willingham
    ASCo, Clayton, Victoria, Australia
 
  Using data analytics to turn large volumes of complex data into actionable information can help improve design and decision-making processes. In today's world there is an abundance of data being generated from many different sources. However, developing effective analytics and integrating them into existing systems can be challenging. Big data represents an opportunity for analysts and data scientists to gain greater insight and to make more informed decisions, but it also presents a number of challenges. Big data sets may not fit into available memory, may take too long to process, or may stream too quickly to store. Standard algorithms are usually not designed to process big data sets in reasonable amounts of time or memory. There is no single approach to big data, so MATLAB provides a number of tools to tackle these challenges. In this paper two case studies will be presented: (1) manipulating and performing computations on big datasets on lightweight machines; and (2) visualizing big, multi-dimensional datasets. Also covered are developing predictive models, high-performance computing with clusters and the cloud, and integration with databases, Hadoop, and big data environments.
Slides WED3O05 [10.989 MB]
 
WEM304 Status Monitoring of the EPICS Control System at the Canadian Light Source controls, EPICS, network, status 667
 
  • G. Wright, M. Bree
    CLS, Saskatoon, Saskatchewan, Canada
 
  The CLS uses the EPICS Distributed Control System (DCS) for control and feedback of a linear accelerator, booster ring, electron storage ring, and numerous x-ray beamlines. The number of host computers running EPICS IOC applications has grown to 200, and the number of IOC applications exceeds 700. The first part of this paper will present the challenges and current efforts to monitor and report the status of the control system itself by monitoring the EPICS network traffic. This approach does not require any configuration or application modification to report the currently active applications, and then provide notification of any changes. The second part will cover the plans to use the information collected dynamically to improve upon the information gathered by process variable crawlers for an IRMIS database, with the goal to eventually replace the process variable crawlers.  
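The passive approach described (inferring which IOC applications are active purely from observed network traffic, then flagging changes) can be sketched as simple bookkeeping over periodic beacons. The class below is hypothetical and omits any parsing of actual Channel Access packets; only the activity tracking is shown.

```python
# Hypothetical sketch of passive status monitoring: track which hosts are
# emitting periodic beacons and flag any that fall silent, without any
# configuration or modification of the monitored applications.

class BeaconMonitor:
    def __init__(self, timeout=30.0):
        self.timeout = timeout
        self._last_seen = {}          # host -> timestamp of last beacon

    def observe(self, host, now):
        """Record a beacon; return True if this host is newly active."""
        is_new = host not in self._last_seen
        self._last_seen[host] = now
        return is_new

    def silent_hosts(self, now):
        """Hosts whose beacons stopped arriving (possible IOC outage)."""
        return sorted(h for h, t in self._last_seen.items()
                      if now - t > self.timeout)
```

Newly seen hosts and newly silent hosts together give the "notification of any changes" the abstract mentions, and the accumulated host list can seed a database such as IRMIS.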
Slides WEM304 [0.550 MB]
Poster WEM304 [1.519 MB]
 
WEM310 How Cassandra Improves Performances and Availability of HDB++ Tango Archiving System TANGO, device-server, GUI, controls 685
 
  • R. Bourtembourg, J.L. Pons, P.V. Verdier
    ESRF, Grenoble, France
 
  The TANGO release 8 led to several enhancements, including the adoption of the ZeroMQ library for faster and lightweight event-driven communication. Exploiting these improved capabilities, a high-performance, event-driven archiving system named Tango HDB++* has been developed. Its design makes it possible to store archiving data in Apache Cassandra: a high-performance, scalable NoSQL distributed database providing a high-availability service and replication, with no single point of failure. HDB++ with Cassandra will open up new perspectives for TANGO in the era of big data and will be the starting point of new big data analytics/data mining applications, breaking the limits of archiving systems based on traditional relational databases. This paper describes the current state of the implementation and our experience with Apache Cassandra in the scope of the Tango HDB++ project. It also gives some performance figures and use cases where using Cassandra with Tango HDB++ is a good fit.
* The HDB++ project is the result of a collaboration between the Elettra synchrotron (Trieste) and the European Synchrotron Radiation Facility (Grenoble)
 
slides icon Slides WEM310 [1.897 MB]  
poster icon Poster WEM310 [2.446 MB]  
 
WEPGF006 Magnet Server and Control System Database Infrastructure for the European XFEL power-supply, controls, electron, quadrupole 701
 
  • L. Fröhlich, P.K. Bartkiewicz, M. Walla
    DESY, Hamburg, Germany
 
  The linear accelerator of the European XFEL will use more than 1400 individually powered electromagnets for beam guidance and focusing. Front-end servers establish the low-level interface to several types of power supplies, and a middle layer server provides control over physical parameters like field or deflection angle in consideration of the hysteresis curve of the magnet. A relational database system with stringent consistency checks is used to store configuration data. The paper focuses on the functionality and architecture of the middle layer server and gives an overview of the database infrastructure.  
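The middle layer's core task, mapping a requested physical parameter to a power supply setpoint along the magnet's hysteresis curve, can be sketched as interpolation over a calibration branch. This is a minimal sketch under assumed calibration data, not DESY's actual server logic.

```python
from bisect import bisect_left

def field_to_current(field, branch):
    """Convert a requested field to a supply current by linear interpolation.

    'branch' is a list of (current, field) calibration points measured along
    one hysteresis branch (up- or down-ramp), monotonically increasing in
    both coordinates. Values are hypothetical, for illustration only.
    """
    currents, fields = zip(*branch)
    if not (fields[0] <= field <= fields[-1]):
        raise ValueError("field outside calibration range")
    i = bisect_left(fields, field)
    if fields[i] == field:
        return currents[i]
    f0, f1 = fields[i - 1], fields[i]
    c0, c1 = currents[i - 1], currents[i]
    return c0 + (c1 - c0) * (field - f0) / (f1 - f0)
```

A real server would additionally select the branch according to the magnet's ramp history, which is exactly why the hysteresis state must be tracked centrally.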
 
WEPGF018 Service Asset and Configuration Management in ALICE Detector Control System controls, detector, software, hardware 729
 
  • M. Lechman, A. Augustinus, P.M. Bond, P.Ch. Chochula, A.N. Kurepin, O. Pinazza
    CERN, Geneva, Switzerland
  • A.N. Kurepin
    RAS/INR, Moscow, Russia
  • M. Lechman
    IP SAS, Bratislava, Slovak Republic
  • O. Pinazza
    INFN-Bologna, Bologna, Italy
 
  ALICE (A Large Ion Collider Experiment) is one of the big LHC (Large Hadron Collider) detectors at CERN. It is composed of 19 sub-detectors constructed by different institutes participating in the project. Each of these subsystems has a dedicated control system based on the commercial SCADA package "WinCC Open Architecture" and numerous other software and hardware components delivered by external vendors. The task of the central controls coordination team is to supervise integration, to provide shared services (e.g. database, gas monitoring, safety systems) and to manage the complex infrastructure (including over 1200 network devices and 270 VME and power supply crates) that is used by over 100 developers around the world. Due to the scale of the control system, it is essential to ensure that reliable and accurate information about all the components - required to deliver these services along with relationship between the assets - is properly stored and controlled. In this paper we will present the techniques and tools that were implemented to achieve this goal, together with experience gained from their use and plans for their improvement.  
poster icon Poster WEPGF018 [11.378 MB]  
 
WEPGF019 Database Applications Development of the TPS Control System EPICS, controls, interface, status 732
 
  • Y.-S. Cheng, Y.-T. Chang, J. Chen, P.C. Chiu, K.T. Hsu, C. H. Huang, C.Y. Liao
    NSRRC, Hsinchu, Taiwan
 
  The control system has been established for the new 3 GeV synchrotron light source (Taiwan Photon Source, TPS), which was successfully commissioned in December 2014. Various control system platforms based on the EPICS framework have been implemented and commissioned. A relational database (RDB) has been set up for several of the TPS control system applications. EPICS data archive systems have been built to record various machine parameters and status information into the RDB for long-term logging, and specific applications have been developed to analyze the archived data retrieved from the RDB. An EPICS alarm system has been set up to monitor sub-system status and to record detailed information into the RDB when a problem occurs. Web-based applications backed by the RDB have been gradually created to show the TPS machine status and related information. These efforts are described in this paper.  
poster icon Poster WEPGF019 [4.008 MB]  
 
WEPGF030 The EPICS Archiver Appliance EPICS, controls, interface, operation 761
 
  • M.V. Shankar, L.F. Li
    SLAC, Menlo Park, California, USA
  • M.A. Davidsaver
    BNL, Upton, New York, USA
  • M.G. Konrad
    FRIB, East Lansing, Michigan, USA
 
  The EPICS Archiver Appliance was developed by a collaboration of SLAC, BNL and FRIB to allow for the archival of millions of PVs, mainly focusing on data retrieval performance. It offers the ability to cluster appliances and to scale by adding appliances to the cluster. Multiple stages and an inbuilt process to move data between stages facilitates the usage of faster storage and the ability to decimate data as it is moved. An HTML management interface and scriptable business logic significantly simplifies administration. Well-defined customization hooks allow facilities to tailor the product to suit their requirements. Mechanisms to facilitate installation and migration have been developed. The system has been in production at SLAC for about 2 years now, at FRIB for about a year and is heading towards a production deployment at BNL. At SLAC, the system has significantly reduced maintenance costs while enabling new functionality that was not possible before. This paper presents an overview of the system and shares some of our experience with deploying and managing it at our facilities.  
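Clients retrieve archived data from the appliance over a documented HTTP API; the sketch below only builds the retrieval URL for the `getData.json` endpoint. The host name is hypothetical, and production code would then issue the request and parse the JSON response.

```python
from urllib.parse import urlencode

def retrieval_url(appliance, pv, start_iso, end_iso):
    """Build a getData.json retrieval URL for the EPICS Archiver Appliance.

    'appliance' is a hypothetical host:port running the retrieval web app;
    'start_iso'/'end_iso' are ISO 8601 timestamps bounding the window.
    """
    query = urlencode({"pv": pv, "from": start_iso, "to": end_iso})
    return "http://%s/retrieval/data/getData.json?%s" % (appliance, query)
```

The same endpoint family also serves other MIME types (e.g. raw or CSV) by changing the extension, which is what makes the appliance easy to bridge into analysis tools.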
poster icon Poster WEPGF030 [1.254 MB]  
 
WEPGF032 EPICS PV Management and Method for RIBF Control System EPICS, controls, network, monitoring 769
 
  • A. Uchiyama, N. Fukunishi, M. Komiyama
    RIKEN Nishina Center, Wako, Japan
 
  For the RIBF project (RIKEN RI Beam Factory), an EPICS-based distributed control system is utilized on Linux and vxWorks as an embedded EPICS technology. Utilizing a high-availability NAS as shared storage, common EPICS programs (Base, Db, and so on) are shared among the EPICS IOCs. As of March 2015 the control system continues to grow and consists of about 50 EPICS IOCs and more than 100,000 EPICS records. With such a large number of control hardware devices, the dependencies between EPICS records and EPICS IOCs are complicated; for example, it is not easy to obtain accurate device information from an EPICS record name alone. Therefore, a new management system was constructed for the RIBF control system to call up detailed information easily. In this system, the startup script files (st.cmd) of the running EPICS IOCs are parsed, and all EPICS records and EPICS fields are stored into a PostgreSQL-based database. Utilizing this stored data, Web-based management and search tools have been successfully developed. In this paper the system concept and the features of the Web-based management tools are reported in detail.  
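The first step of such a system, extracting record sources from an IOC startup script, can be sketched by scanning `st.cmd` for `dbLoadRecords` calls and capturing the database file and macro substitutions. This is a simplified sketch, not the RIBF parser.

```python
import re

# Matches dbLoadRecords("file.db") and dbLoadRecords("file.db", "M=V,...")
DBLOAD = re.compile(r'dbLoadRecords\s*\(\s*"([^"]+)"\s*(?:,\s*"([^"]*)")?\s*\)')

def parse_st_cmd(text):
    """Extract (db_file, {macro: value}) pairs from dbLoadRecords calls."""
    found = []
    for db, macros in DBLOAD.findall(text):
        subst = {}
        if macros:
            for pair in macros.split(","):
                name, _, value = pair.partition("=")
                subst[name.strip()] = value.strip()
        found.append((db, subst))
    return found
```

Expanding the macros against the referenced `.db` files then yields the full record and field inventory to load into the relational database.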
poster icon Poster WEPGF032 [6.833 MB]  
 
WEPGF034 The Power Supply Control System of CSR controls, power-supply, operation, ion 772
 
  • W. Zhang, S. An, S.Z. Gou, K. Gu, P. Li, Y.P. Wang, M. Yue
    IMP/CAS, Lanzhou, People's Republic of China
 
  This paper gives a brief description of the power supply control system for the Cooler Storage Ring (CSR), covering in detail the control system architecture, hardware, and software. We use a standard distributed control system (DCS) architecture, and the software follows a standard three-layer structure: the OPI layer provides data generation and monitoring, the intermediate layer handles data processing and transmission, and the device control layer performs data output to the power supplies. An ARM + DSP controller of our own design is used to control the power supply output. In addition, we have adopted an FPGA-based timing controller so that the power supply output meets the accelerator's synchronization requirements.  
poster icon Poster WEPGF034 [0.322 MB]  
 
WEPGF042 Scalable Web Broadcasting for Historical Industrial Control Data software, controls, framework, interface 790
 
  • B. Copy, O.O. Andreassen, Ph. Gayet, M. Labrenz, H. Milcent, F. Piccinelli
    CERN, Geneva, Switzerland
 
  With the wide-spread use of asynchronous web communication mechanisms like WebSockets and WebRTC, it has now become possible to distribute industrial controls data originated in field devices or SCADA software in a scalable and event-based manner to a large number of web clients in the form of rich interactive visualizations. There is however no simple, secure and performant way yet to query large amounts of aggregated historical data. This paper presents an implementation of a tool, able to make massive quantities of pre-indexed historical data stored in ElasticSearch available to a large amount of web-based consumers through asynchronous web protocols. It also presents a simple, Opensocial-based dashboard architecture, that allows users to configure and organize rich data visualizations (based on Highcharts Javascript libraries) and create navigation flows in a responsive mobile-friendly user interface. Such techniques are used at CERN to display interactive reports about the status of the LHC infrastructure (e.g. vacuum or cryogenics installations) and give access to fine-grained historical data stored in the LHC Logging database in a matter of seconds.
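Serving aggregated history quickly relies on pushing the aggregation into the index itself. The sketch below builds an Elasticsearch `date_histogram` query body that buckets one logged signal over a time window; the field names (`signal`, `timestamp`, `value`) are illustrative, not CERN's schema.

```python
def histogram_query(signal, t_from, t_to, interval="1h"):
    """Build an Elasticsearch body that aggregates one signal into fixed
    time buckets with an average per bucket (illustrative field names)."""
    return {
        "size": 0,  # no raw hits, aggregation results only
        "query": {"bool": {"filter": [
            {"term": {"signal": signal}},
            {"range": {"timestamp": {"gte": t_from, "lt": t_to}}},
        ]}},
        "aggs": {"trend": {
            "date_histogram": {"field": "timestamp", "interval": interval},
            "aggs": {"avg_value": {"avg": {"field": "value"}}},
        }},
    }
```

Because only bucket summaries leave the cluster, a web client can chart months of fine-grained data in one round trip, which is the effect the paper's "matter of seconds" claim depends on.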

 
poster icon Poster WEPGF042 [1.056 MB]  
 
WEPGF043 Metadatastore: A Primary Data Store for NSLS-2 Beamlines experiment, data-analysis, EPICS, GUI 794
 
  • A. Arkilic, D.B. Allan, T.A. Caswell, L.R. Dalesio, W.K. Lewis
    BNL, Upton, Long Island, New York, USA
 
  Funding: Department of Energy, Brookhaven National Lab
The beamlines at NSLS-II are among the most highly instrumented and controlled of any worldwide. Each beamline can produce unstructured data sets in various formats. This data should be made available for data analysis and processing by beamline scientists and users. Various data flow systems are in place at numerous synchrotrons; however, these are very domain specific and cannot handle such unstructured data. We have developed a data flow service, metadatastore, that manages experimental data at NSLS-II beamlines. It enables data analysis and visualization clients to access the service either directly or via the databroker api, in a consistent and partition-tolerant fashion, providing a reliable and easy-to-use interface to our state-of-the-art beamlines.
 
 
WEPGF044 Filestore: A File Management Tool for NSLS-II Beamlines experiment, data-analysis, EPICS, data-acquisition 796
 
  • A. Arkilic, T.A. Caswell, D. Chabot, L.R. Dalesio, W.K. Lewis
    BNL, Upton, Long Island, New York, USA
 
  Funding: Brookhaven National Lab, Department of Energy
NSLS-II beamlines can generate 72,000 data sets per day, resulting in over 2 million data sets in one year. The large number of data files generated by our beamlines poses a massive file management challenge. In response to this challenge, we have developed filestore as a means to provide users with an interface to stored data. By leveraging features of Python and MongoDB, filestore can store information regarding the location of a file, access and open the file, retrieve a given piece of data in that file, and provide users with a token: a unique identifier allowing them to retrieve each piece of data. Filestore does not interfere with the file source or the storage method and supports any file format, making the data within files available to the NSLS-II data analysis environment.
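The token mechanism described above can be sketched as a two-level registry: a resource records where a file lives and which handler can read it, and a datum token points at one piece of data inside that resource. This is a minimal in-memory sketch of the idea, not the filestore API.

```python
import uuid

class FileStore:
    """Sketch: register file locations, hand out opaque tokens, and resolve
    tokens back to data later via format-specific handlers."""

    def __init__(self):
        self._resources = {}  # resource_id -> (spec, path)
        self._datums = {}     # token -> (resource_id, datum_kwargs)

    def insert_resource(self, spec, path):
        rid = str(uuid.uuid4())
        self._resources[rid] = (spec, path)
        return rid

    def insert_datum(self, resource_id, **kwargs):
        token = str(uuid.uuid4())
        self._datums[token] = (resource_id, kwargs)
        return token

    def retrieve(self, token, handlers):
        """Resolve a token: look up its resource, then delegate the actual
        read to the handler registered for that resource's format spec."""
        rid, kwargs = self._datums[token]
        spec, path = self._resources[rid]
        return handlers[spec](path, **kwargs)
```

Because the registry never touches the files themselves, any format can be supported by registering a new handler, which mirrors the format-agnostic design the abstract describes.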
 
poster icon Poster WEPGF044 [0.854 MB]  
 
WEPGF045 Large Graph Visualization of Millions of Connections in the CERN Control System Network Traffic: Analysis and Design of Routing and Firewall Rules with a New Approach network, controls, operation, Windows 799
 
  • L. Gallerani
    CERN, Geneva, Switzerland
 
  The CERN Technical Network (TN) was intended to be a network for accelerator and infrastructure operations. Today, however, more than 60 million IP packets are routed every hour between the General Purpose Network (GPN) and the TN, involving more than 6000 different hosts. In order to improve the security of the accelerator control system, it is fundamental to understand the network traffic between the two networks so that appropriate routing and firewall rules can be defined without impacting operations. The complexity and huge size of the infrastructure and the number of protocols and services involved have discouraged for years any attempt to understand and control the network traffic between the GPN and the TN. In this talk, we show a new way to solve the problem graphically. Combining network traffic analysis with large graph visualization algorithms, we produce comprehensible and usable 2D large colour topology graphs mapping the complex network relations of the control system machines and services with a detail and clarity never seen before. The talk includes pictures and video of the graphical analysis.  
poster icon Poster WEPGF045 [6.809 MB]  
 
WEPGF049 The Unified Anka Archiving System - a Powerful Wrapper to Scada Systems like Tango and WinCC OA TANGO, controls, interface, synchrotron 810
 
  • D. Haas, S.A. Chilingaryan, A. Kopmann, W. Mexner, D. Ressmann
    KIT, Eggenstein-Leopoldshafen, Germany
 
  ANKA realized a new unified archiving system for its synchrotron control systems by integrating their logging databases into the "Advanced Data Extraction Infrastructure" (ADEI). ANKA's control system environment is heterogeneous: some devices are integrated into the Tango archiving system, while other sensors are logged by the Supervisory Control and Data Acquisition (SCADA) system WinCC OA. For both systems, modules exist to configure the pool of sensors to be archived in the individual control system databases. ADEI has been developed to provide a unified data access layer for large time-series data sets. It supports internal data processing, caching, data aggregation and fast visualization in the web. Intelligent caching strategies ensure fast access even to huge data sets stored in the attached data sources, such as SQL databases. With its data abstraction layer, the new ANKA archiving system is the foundation for automated monitoring while keeping the freedom to integrate nearly any control system flavor. The ANKA archiving system has been introduced successfully at three beamlines. It has been operating stably for about one year, and it is intended to extend it to the whole facility.  
poster icon Poster WEPGF049 [0.973 MB]  
 
WEPGF052 Development of the J-PARC Time-Series Data Archiver using a Distributed Database System, II distributed, EPICS, hardware, status 818
 
  • N. Kikuzawa, A. Yoshii
    JAEA/J-PARC, Tokai-Mura, Naka-Gun, Ibaraki-Ken, Japan
  • H. Ikeda, Y. Kato
    JAEA, Ibaraki-ken, Japan
 
  The linac and the RCS in J-PARC (Japan Proton Accelerator Research Complex) have over 64,000 EPICS records, producing enormous amounts of data for the control of a large quantity of equipment. The data has been collected into PostgreSQL, which we are planning to replace with HBase and Hadoop, a well-known distributed database and the distributed file system that HBase depends on. At the previous conference we reported that we had constructed an archive system with a new version of HBase and Hadoop that covers single points of failure, although we realized there were some issues to resolve before moving into a practical phase. In order to revise the system and resolve these issues, we have been reconstructing it: replacing master nodes with reinforced hardware machines, creating a kickstart file and scripts to automatically set up a node, introducing a monitoring tool to reliably detect flaws early, etc. In this paper these methods are reported, and performance tests of the new system, with the corresponding tuning of some HBase and Hadoop parameters, are also examined and reported.  
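Archiving EPICS records in HBase hinges on row-key design: a common pattern (not necessarily J-PARC's) salts the key to spread writes across region servers and buckets timestamps so one row covers one PV for one hour. A minimal sketch of that pattern:

```python
import hashlib

def row_key(pv, ts_ms, bucket_ms=3600 * 1000):
    """Compose an HBase-style row key for a time-series sample.

    A stable salt prefix spreads hot sequential writes over region servers;
    the (pv, hour-bucket) suffix keeps one PV's hour of samples in one row
    so a time-range scan touches contiguous keys. Illustrative only.
    """
    salt = int(hashlib.md5(pv.encode()).hexdigest(), 16) % 16
    bucket = ts_ms // bucket_ms
    return "%02d:%s:%013d" % (salt, pv, bucket)
```

Reads for a time window then scan the few bucket rows of that PV, and the salt keeps a burst of 25 Hz updates from landing on a single region.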
 
WEPGF060 A Data Management Infrastructure for Neutron Scattering Experiments in J-PARC/MLF data-management, neutron, experiment, operation 834
 
  • K. Moriyama, T. Nakatani
    JAEA/J-PARC, Tokai-Mura, Naka-Gun, Ibaraki-Ken, Japan
 
  The role of data management is one of the greatest contributions to the research workflow of scientific experiments such as neutron scattering. The facility is required to safely and efficiently manage a huge amount of data over a long duration and to provide effective data access for facility users, promoting the creation of scientific results. In order to meet these requirements, we are operating and updating a data management infrastructure in J-PARC/MLF, which consists of the web-based integrated data management system called the MLF Experimental Database (MLF EXP-DB), a hierarchical raw data repository composed of distributed storages, and an integrated authentication system. The MLF EXP-DB creates experimental data catalogues in which raw data, measurement logs, and other contextual information on the sample, experimental proposal, investigators, etc. are interrelated. This system conducts the deposition, archiving and on-demand retrieval of raw data in the repository. Facility users are able to access the experimental data via a web portal. This contribution presents an overview of our data management infrastructure and the recently updated features for high availability, scaling-out, and flexible data retrieval in the MLF EXP-DB.  
poster icon Poster WEPGF060 [1.075 MB]  
 
WEPGF061 Beam Trail Tracking at Fermilab interface, linac, software, booster 838
 
  • D.J. Nicklaus, L.R. Carmichael, R. Neswold, Z.Y. Yuan
    Fermilab, Batavia, Illinois, USA
 
  This paper presents a system for acquiring and sorting data from select devices depending on the destination of each particular beam pulse in the Fermilab accelerator chain. The 15 Hz beam that begins in the Fermilab Linac can be directed to a variety of additional accelerators, beam lines, beam dumps, and experiments. We have implemented a data acquisition system that senses the destination of each pulse and reads the appropriate beam intensity devices so that profiles of the beam can be stored and analyzed for each type of beam trail. It is envisioned that this data will be utilized long term to identify trends in the performance of the accelerators.  
poster icon Poster WEPGF061 [2.198 MB]  
 
WEPGF065 Illustrate the Flow of Monitoring Data through the MeerKAT Telescope Control Software interface, monitoring, network, controls 849
 
  • M.J. Slabber, M.T. Ockards
    SKA South Africa, National Research Foundation of South Africa, Cape Town, South Africa
 
  Funding: SKA-SA National Research Foundation (South Africa)
The MeerKAT telescope, under construction in South Africa, comprises a large set of elements. The elements expose various sensors to the Control and Monitoring (CAM) system, and the sampling strategy set by CAM per sensor varies from several samples a second to infrequent updates. This creates a substantial volume of sensor data that needs to be stored and made available for analysis. We depict the flow of sensor data through the CAM system, showing the various memory buffers, temporary disk storage and mechanisms to permanently store the data in HDF5 format on the network attached storage (NAS).
 
poster icon Poster WEPGF065 [1.380 MB]  
 
WEPGF066 A Systematic Measurement Analyzer for LHC Operational Data collimation, operation, luminosity, beam-losses 853
 
  • G. Valentino, X. Buffat, D. Kirchner, S. Redaelli
    CERN, Geneva, Switzerland
 
  The CERN Accelerator Logging Service stores data from hundreds of thousands of parameters and measurements, mostly from the Large Hadron Collider (LHC). The systematic measurement analyzer is a Java-based tool that is used to visualize and analyze various beam measurement data over multiple fills and time intervals during the operational cycle, such as ramp or squeeze. Statistical analysis and various manipulations of data are possible, including correlation with several machine parameters such as β* and energy. Examples of analyses performed include checks of collimator positions, beam losses throughout the cycle and tune stability during the squeeze which is then used for feed-forward purposes.  
poster icon Poster WEPGF066 [2.274 MB]  
 
WEPGF070 A New Data Acquiring and Query System with Oracle and EPICS in the BEPCII EPICS, data-acquisition, interface, controls 865
 
  • C.H. Wang, L.F. Li
    IHEP, Beijing, People's Republic of China
 
  Funding: supported by NSFC(1137522)
The old historical Oracle database at BEPCII was put into operation in 2006. Because of problems such as unstable program operation and loss of EPICS PVs, a new data acquisition and query system based on Oracle and EPICS has been developed with Eclipse and JCA. On the one hand, we adopt table-space and table-partition techniques to build a dedicated database schema in Oracle. On the other hand, based on RCP and Java, an EPICS data acquisition system has been developed with a very friendly user interface, making it easy for users to check the status of each PV's connection and to manage or maintain the system. We have also developed a data query system which provides many functions, including data query, data plotting, data exporting, and data zooming. This new system has been running for three years and can be applied to any EPICS control system.

 
poster icon Poster WEPGF070 [0.946 MB]  
 
WEPGF072 Parameters Tracking and Fault Diagnosis base on NoSQL Database at SSRF distributed, hardware, storage-ring, injection 873
 
  • Y.B. Yan, Z.C. Chen, L.W. Lai, Y.B. Leng
    SINAP, Shanghai, People's Republic of China
 
  For a user facility, reliability and stability are very important. Besides using high-reliability hardware, rapid fault diagnosis, data mining and predictive analytics are also effective ways to improve the efficiency of the accelerator. A beam data logging system based on a NoSQL database was built at SSRF. The logging system stores beam parameters under predefined conditions. The details of the system are reported in this paper.  
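The "predefined conditions" trigger can be sketched as a check of each beam-parameter sample against acceptance windows: any parameter outside its window (or missing) marks the sample as worth logging for later fault diagnosis. The parameter names and windows below are hypothetical, not SSRF's.

```python
def faults(sample, windows):
    """Return the parameters of one sample that violate their predefined
    windows; a non-empty result would trigger logging of the full sample.

    sample  : dict of parameter name -> measured value
    windows : dict of parameter name -> (low, high) acceptance range
    """
    out = {}
    for name, (low, high) in windows.items():
        value = sample.get(name)
        # A missing reading is itself a fault worth recording.
        if value is None or not (low <= value <= high):
            out[name] = value
    return out
```

Logging only condition-violating samples keeps the NoSQL store small enough for fast ad-hoc queries while still capturing the data needed to reconstruct a fault.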
 
WEPGF095 Application of PyCDB for K-500 Beam Transfer Line controls, software, network, EPICS 923
 
  • P.B. Cheblakov, S.E. Karnaev, O.A. Khudayberdieva
    BINP SB RAS, Novosibirsk, Russia
 
  Funding: This work has been supported by Russian Science Foundation (project N 14-50-00080).
The new injection complex for the VEPP-4 and VEPP-2000 e-p colliders is under construction at the Budker Institute, Novosibirsk, Russia. The double-direction bipolar transfer line K-500, of 130 and 220 meters length respectively, will provide beam transportation from the injection complex to the colliders with a frequency of 1 Hz. The designed number of particles in the transferred beam is 2×10^10 electrons or positrons at an energy of 500 MeV. K-500 has dozens of types of magnets, power supplies and electronic devices. It is a rather complicated task to store and manage information about such a number of types and instances of entities, and especially to handle the relations between them, yet this knowledge is critical for configuring all aspects of the control system. Therefore we have chosen PyCDB to handle this information and to automate configuration data extraction for different purposes, starting with reports and diagrams and ending with high-level applications and EPICS IOC configuration. This paper considers the concepts of this approach and shows the PyCDB database structure designed for the K-500 transfer line. The automatic configuration of IOCs is described as an example of integration with EPICS.
 
poster icon Poster WEPGF095 [0.792 MB]  
 
WEPGF097 Local Monitoring and Control System for the SKA Telescope Manager: A Knowledge-Based System Approach for Issues Identification Within a Logging Service TANGO, software, controls, interface 930
 
  • M. Di Carlo, M. Dolci
    INAF - OA Teramo, Teramo, Italy
  • R. Smareglia
    INAF-OAT, Trieste, Italy
  • P.S. Swart, G.M. le Roux
    SKA South Africa, National Research Foundation of South Africa, Cape Town, South Africa
 
  The SKA Telescope Manager (SKA.TM) is a distributed software application aimed at controlling the operation of thousands of radio telescopes, antennas and auxiliary systems (e.g. infrastructures, signal processors, …) which will compose the Square Kilometre Array, the world's largest radio astronomy facility, currently under development. SKA.TM, as an "element" of the SKA, is composed in turn of a set of sub-elements whose tight coordination is ensured by a specific sub-element called "Local Monitoring and Control" (TM.LMC). TM.LMC is mainly focussed on the life-cycle management of TM, the acquisition of every network-related piece of information useful for understanding how TM is performing, and the logging library for both online and offline sub-elements. Given the high complexity of the system, identifying the origin of an issue as soon as a problem occurs appears to be a hard task. To allow prompt diagnostic analysis by engineers, operators and software developers, a Knowledge-Based System (KBS) approach is proposed and described for the logging service.  
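A knowledge-based approach to log diagnosis, at its simplest, maps recognized log patterns to likely causes via a rule base. The rules below are entirely hypothetical placeholders; a real KBS would maintain a much richer, curated knowledge base.

```python
import re

# Hypothetical knowledge base: each rule maps a log pattern to a likely cause.
RULES = [
    (re.compile(r"connection refused", re.I), "target service down or unreachable"),
    (re.compile(r"timeout", re.I), "network congestion or overloaded sub-element"),
    (re.compile(r"out of memory", re.I), "resource exhaustion on the host"),
]

def diagnose(log_lines):
    """Return (line, likely_cause) pairs for every line matching a rule.
    The first matching rule wins, mimicking an ordered rule base."""
    findings = []
    for line in log_lines:
        for pattern, cause in RULES:
            if pattern.search(line):
                findings.append((line, cause))
                break
    return findings
```

Attaching a likely cause to each matching log line is what lets operators jump from "a problem occurred" to "where to look" without reading the raw stream.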
poster icon Poster WEPGF097 [7.145 MB]  
 
WEPGF101 A Modular Software Architecture for Applications that Support Accelerator Commissioning at MedAustron interface, framework, software, controls 938
 
  • M. Hager, M. Regodic
    EBG MedAustron, Wr. Neustadt, Austria
 
  The commissioning and operation of an accelerator requires a large set of supportive applications. Especially in the early stages, these tools have to work with unfinished and changing systems. To allow the implementation of applications that are dynamic enough for this environment, a dedicated software architecture, the Operational Application (OpApp) architecture, has been developed at MedAustron. The main ideas of the architecture are a separation of functionality into reusable execution modules and a flexible and intuitive composition of the modules into bigger modules and applications. Execution modules are implemented for the acquisition of beam measurements, the generation of cycle dependent data, the access to a database and other tasks. On this basis, Operational Applications for a wide variety of use cases can be created, from small helper tools to interactive beam commissioning applications with graphical user interfaces. This contribution outlines the OpApp architecture and the implementation of the most frequently used applications.  
poster icon Poster WEPGF101 [2.169 MB]  
 
WEPGF133 TINE Studio, Making Life Easy for Administrators, Operators and Developers controls, operation, interface, GUI 1017
 
  • P. Duval, M. Lomperski
    DESY, Hamburg, Germany
  • J. Bobnar
    Cosylab, Ljubljana, Slovenia
 
  A mature control system will provide central services such as alarm handling, archiving, location and naming, debugging, etc., along with development tools and administrative utilities. It has become common to refer to the collection of these services as a 'studio'. Indeed, Control System Studio (CSS)* strives to provide such services independent of the control system protocol. Such a 'one-size-fits-all' approach is likely, however, to focus on the features and behavior of the most prominent control system protocol in use, providing a good fit there but perhaps offering only a rudimentary fit for 'other' control systems. TINE**, for instance, is supported by CSS but is much better served by making use of TINE Studio. This paper reports on the rich set of services and utilities comprising TINE Studio.
* http://www.controlsystemstudio.org
** http://tine.desy.de
 
poster icon Poster WEPGF133 [2.528 MB]  
 
WEPGF134 Applying Sophisticated Analytics to Accelerator Data at BNL's Collider-Accelerator Complex: Bridging to Repositories, Tools of Choice, and Applications interface, controls, collider, network 1021
 
  • K.A. Brown, P. Chitnis, T. D'Ottavio, J. Morris, S. Nemesure, S. Perez, D.J. Thomas
    BNL, Upton, Long Island, New York, USA
 
  Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-SC0012704 with the U.S. Department of Energy.
Analysis of accelerator data has traditionally been done using custom tools, either developed locally or at other laboratories. The actual data repositories are openly available to all users, but it can take significant effort to mine the desired data, especially as the volume of these repositories increases to hundreds of terabytes or more. Much of the data analysis is done in real time when the data is being logged. However, sometimes users wish to apply improved algorithms, look for data correlations, or perform more sophisticated analysis, and there is a wide spectrum of desired analytics for this small percentage of the problem domains. In order to address this, tools have been built that allow users to efficiently pull data out of the repositories, but it is then left up to them to post-process that data. In recent years, the use of tools to bridge standard analysis systems, such as Matlab, R, or SciPy, to the controls data repositories has been investigated. In this paper, the tools used to extract data from the repositories, the tools used to bridge the repositories to standard analysis systems, and the directions being considered for the future will be discussed.
 
poster icon Poster WEPGF134 [2.714 MB]  
 
WEPGF141 Tools and Procedures for High Quality Technical Infrastructure Monitoring Reference Data at CERN monitoring, controls, interface, framework 1036
 
  • R. Martini, M. Bräger, J.L. Salmon, A. Suwalska
    CERN, Geneva, Switzerland
 
  The monitoring of the technical infrastructure at CERN relies on the quality of the definition of numerous and heterogeneous data sources. In 2006, we introduced the MoDESTI* procedure for the Technical Infrastructure Monitoring** (TIM) system to promote data quality. The first step in the data integration process is the standardisation of the declaration of the various data points whether these are alarms, equipment statuses or analogue measurement values. Users declare their data points and can follow their requests, monitoring personnel ensure the infrastructure is adapted to the new data, and control room operators check that the data points are defined in a consistent and intelligible way. Furthermore, rigorous validations are carried out on input data to ensure correctness as well as optimal integration with other computer systems at CERN (maintenance management, geographical viewing tools etc.). We are now redesigning the MoDESTI procedure in order to provide an intuitive and streamlined Web based tool for managing data definition, as well as reducing the time between data point integration requests and implementation. Additionally, we are introducing a Class-Device-Property data definition model, a standard in the CERN accelerator sector, for a more flexible use of the TIM data points.
*MoDESTI: Monitoring Data Entry System for the Technical Infrastructure
**TIM: Technical Infrastructure Monitoring
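The kind of input validation MoDESTI applies to data-point declarations can be illustrated with a short sketch; the field names and rules below are hypothetical stand-ins, not the actual MoDESTI schema:

```python
# Illustrative validation of a data-point declaration before integration.
from dataclasses import dataclass

VALID_TYPES = {"alarm", "status", "analogue"}

@dataclass
class DataPointRequest:
    point_id: str
    point_type: str
    description: str
    units: str = ""

def validate(req):
    """Return a list of problems; an empty list means the request is accepted."""
    errors = []
    if not req.point_id:
        errors.append("missing point id")
    if req.point_type not in VALID_TYPES:
        errors.append(f"unknown type {req.point_type!r}")
    if not req.description:
        errors.append("description required for intelligible display")
    if req.point_type == "analogue" and not req.units:
        errors.append("analogue values must declare engineering units")
    return errors
```

Checks of this shape catch inconsistent declarations early, before operators ever see the data point in the control room.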
 
poster icon Poster WEPGF141 [0.509 MB]  
 
WEPGF152 Time Travel Made Possible at FERMI by the Time-Machine Application TANGO, interface, controls, framework 1059
 
  • G. Strangolino, M. Lonza, L. Pivetta
    Elettra-Sincrotrone Trieste S.C.p.A., Basovizza, Italy
 
  The TANGO archiving system HDB++ continuously stores data over time into the historical database. The new time-machine application, a specialization of the extensively used save/restore framework, allows sets of control system variables to be brought back to their values at a precise date and time in the past. Given the desired time stamp t0 and a set of TANGO attributes, the values recorded at the most recent date and time preceding or equaling t0 are fetched from the historical database. The user can examine the list of variables with their values before performing a full or partial restoration of the set. The time-machine seamlessly integrates with the well-known save/restore application, sharing many of its characteristics and functionalities, such as the matrix-based subset selection, the live difference view and the simple and effective user interface.
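The core lookup — for each attribute, the most recent archived value at or before t0 — can be sketched as a simple query. The table layout below is illustrative only, not the actual HDB++ schema:

```python
# Minimal sketch of the time-machine lookup against a history table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE history (attr TEXT, t REAL, value REAL)")
conn.executemany("INSERT INTO history VALUES (?, ?, ?)", [
    ("sr/mag/q1/current", 10.0, 120.5),
    ("sr/mag/q1/current", 20.0, 121.0),
    ("sr/mag/q2/current", 15.0, 98.3),
])

def values_at(conn, attrs, t0):
    """Return {attribute: last value recorded at or before t0}."""
    snapshot = {}
    for attr in attrs:
        row = conn.execute(
            "SELECT value FROM history WHERE attr = ? AND t <= ? "
            "ORDER BY t DESC LIMIT 1", (attr, t0)).fetchone()
        if row is not None:
            snapshot[attr] = row[0]
    return snapshot
```

The returned snapshot is what the user would inspect before deciding on a full or partial restoration.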
poster icon Poster WEPGF152 [0.445 MB]  
 
WEPGF154 Visualization of Interlocks with EPICS Database and EDM Embedded Windows EPICS, controls, interlocks, PLC 1066
 
  • E. Tikhomolov
    TRIUMF, Canada's National Laboratory for Particle and Nuclear Physics, Vancouver, Canada
 
  The control system for TRIUMF's upgraded secondary beam line M20 was implemented using a PLC and one of many EPICS IOCs running on a multi-core Dell server. Running the IOC on a powerful machine rather than on a small dedicated computer has a number of advantages, such as fast code execution and the availability of a large amount of memory. A large EPICS database can be loaded into the IOC and used for visualization of the interlocks implemented in the PLC. The information about interlock status registers, text messages, and the names of control and interlock panels is entered into a relational database using a web browser. Top-level EPICS schematics are generated from the relational database. For visualization, the embedded windows available in the Extensible Display Manager (EDM) act as EPICS clients, retrieving interlock status information from the EPICS database. A set of interlock panels forms a library that can be used to show any chain of interlocks. If necessary, a new interlock panel can be created using the visualization tools provided with EDM. This solution, in use for more than three years, has proven to be reliable and very flexible.
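The mapping at the heart of this approach — a PLC status register decoded into the human-readable messages held in the relational database — can be sketched as follows; the bit assignments and messages are invented for illustration:

```python
# Decode a PLC interlock status register into readable condition messages.
INTERLOCK_BITS = {
    0: "vacuum OK",
    1: "magnet power supply on",
    2: "cooling water flow OK",
    3: "beam stop open",
}

def decode_interlocks(register):
    """Return {message: satisfied?} for each defined bit of the register."""
    return {msg: bool(register & (1 << bit))
            for bit, msg in INTERLOCK_BITS.items()}

def chain_ok(register):
    """The interlock chain is satisfied only when every condition is met."""
    return all(decode_interlocks(register).values())
```

An EDM embedded window would render each entry of such a decoded mapping, colouring the panel by whether the whole chain is satisfied.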
poster icon Poster WEPGF154 [1.158 MB]  
 
THHA3O03 Managing Neutron Beam Scans at the Canadian Neutron Beam Centre experiment, controls, neutron, software 1096
 
  • M.R. Vigder, M.L. Cusick, D. Dean
    CNL, Ontario, Canada
 
  The Canadian Neutron Beam Centre (CNBC) of the Canadian Nuclear Laboratories (CNL) operates six beam lines for material research. A single beam line experiment requires scientists to acquire data as a sequence of scans that involves data acquisition at many points, varying sample positions, samples, wavelengths, sample environments, etc. The points at which measurements must be taken can number in the thousands, with scans or their variations having to be run multiple times. At the CNBC, an approach has been developed to allow scientists to specify and manage their scans using a set of processes and tools. Scans are specified using a set of constructors and a scan algebra that allows scans to be combined using a set of scan operators. Using the operators of the algebra, complex scan sequences can be constructed from simpler scans and run unattended for up to a few days. Based on the constructors and the algebra, tools are provided to scientists to build, organize and execute their scans. These tools can take the form of scripting languages, spreadsheets, or databases. This scanning technique is currently in use at CNL, and has been implemented in Python on an EPICS-based control system.
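The constructor-plus-operator idea can be sketched in Python, the language the abstract says the technique is implemented in; the constructor and operator names below are hypothetical, not CNBC's actual API:

```python
# Sketch of a scan algebra: a scan is a list of measurement points,
# constructors build simple scans, operators combine them.
def linear(axis, start, stop, steps):
    """Constructor: scan one axis over evenly spaced positions."""
    step = (stop - start) / (steps - 1)
    return [{axis: start + i * step} for i in range(steps)]

def then(a, b):
    """Sequence operator: run scan a, then scan b."""
    return a + b

def product(a, b):
    """Nesting operator: for every point of a, run all points of b."""
    return [{**pa, **pb} for pa in a for pb in b]

# Complex scan sequences compose from simpler scans:
scan = product(linear("temperature", 100, 300, 3),
               linear("angle", 0.0, 1.0, 2))
```

Each point dictionary would be handed to the EPICS layer to position the sample before a measurement; composing operators in this way is what lets thousand-point sequences run unattended.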
slides icon Slides THHA3O03 [0.745 MB]  
 
THHC2O01 Beam Property Management at KEK Electron/Positron 7-GeV Injector Linac controls, linac, emittance, electron 1123
 
  • K. Furukawa, N. Iida, T. Kamitani, S. Kazama, T. Miura, F. Miyahara, Y. Ohnishi, M. Satoh, T. Suwada, K. Yokoyama
    KEK, Ibaraki, Japan
 
  The electron/positron injector linac at KEK has injected a variety of beams into the electron accelerator complex of the SuperKEKB collider and light sources for particle physics and photon science experiments for more than 30 years. The beam properties of electrons and positrons vary in energy from 2.5 GeV to 7 GeV and in bunch charge from 0.2 nC to 10 nC, and their stability requirements differ depending on the injected storage ring. The beams have to be switched by pulse-to-pulse modulation at 50 Hz. Emittance control is especially crucial to achieve the goal at SuperKEKB and is under development. Beam energy management becomes more important as it affects all of the beam properties. The beam acceleration provided by 60 RF power stations should be properly distributed considering redundancy and stability. Thus, the equipment controls are also being restructured to enable precise control of the beam properties, based on the synchronized event control system and the EPICS control system. The strategy and status of the upgrade are described in this paper from the practical aspects of device controls, online simulation and operation.
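As a rough illustration of the energy-management bookkeeping described — the abstract does not give the actual distribution algorithm, so everything below, including the greedy selection and the notion of hot spares, is an invented sketch — one might distribute the required gain like this:

```python
# Hypothetical allocation of a target energy gain over RF stations,
# keeping some stations in reserve for redundancy.
import math

def assign_gains(target_gev, stations, gain_gev, spares=2):
    """Return names of stations to run; raise if redundancy cannot be kept."""
    usable = [s for s in stations if s["available"]]
    needed = math.ceil(target_gev / gain_gev)
    if len(usable) - needed < spares:
        raise RuntimeError("not enough stations to keep spare capacity")
    # Prefer the most stable stations for the working set.
    usable.sort(key=lambda s: s["stability"], reverse=True)
    return [s["name"] for s in usable[:needed]]
```

A real scheme would also weigh per-station gain differences and pulse-to-pulse mode switching, which this sketch ignores.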
slides icon Slides THHC2O01 [2.187 MB]  
 
THHC2O02 Component Database for APS Upgrade software, interface, storage-ring, lattice 1127
 
  • S. Veseli, N.D. Arnold, J. Carwardine, G. Decker, D.P. Jarosz, N. Schwarz
    ANL, Argonne, Illinois, USA
 
  The Advanced Photon Source Upgrade (APS-U) project will replace the existing APS storage ring with a multi-bend achromat (MBA) lattice to provide extreme transverse coherence and extreme brightness x-rays to its users. As the time to replace the existing storage ring accelerator is of critical concern, an aggressive one-year removal/installation/testing period is being planned. To aid in the management of the thousands of components to be installed in such a short time, the Component Database (CDB) application is being developed with the purpose of identifying, documenting, tracking, locating, and organizing components in a central database. Three major domains are being addressed: Component definitions (which together make up an exhaustive "Component Catalog"), Designs (groupings of components to create subsystems), and Component Instances ('Inventory'). Relationships between the major domains allow additional "system knowledge" to be captured that will be leveraged with future tools and applications. It is imperative to provide sub-system engineers with a functional application early in the machine design cycle. Topics discussed in this paper include the initial design and deployment of CDB, as well as future development plans.
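The three domains and their relationships map naturally onto a relational schema. The tables and column names below are guesses for illustration, not the actual APS-U design:

```python
# Rough relational sketch of the CDB's three domains and their links.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE component_type (      -- the "Component Catalog"
    id INTEGER PRIMARY KEY, name TEXT, description TEXT);
CREATE TABLE design (              -- groupings of components into subsystems
    id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE design_element (      -- which catalog items a design uses
    design_id INTEGER REFERENCES design(id),
    type_id INTEGER REFERENCES component_type(id));
CREATE TABLE component_instance (  -- the "Inventory"
    id INTEGER PRIMARY KEY,
    type_id INTEGER REFERENCES component_type(id),
    serial TEXT, location TEXT);
""")
db.execute("INSERT INTO component_type VALUES (1, 'Quadrupole Q1', 'MBA quad')")
db.execute("INSERT INTO component_instance VALUES (1, 1, 'SN-0042', 'Sector 10')")
located = db.execute("""
    SELECT t.name, i.serial, i.location FROM component_instance i
    JOIN component_type t ON t.id = i.type_id""").fetchone()
```

Joining instances back to their catalog entries is what lets the same "system knowledge" serve identification, tracking, and location queries.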
slides icon Slides THHC2O02 [1.957 MB]  
 
THHD3O06 Overview of the Monitoring Data Archive used on MeerKAT interface, monitoring, GUI, status 1155
 
  • M.J. Slabber
    SKA South Africa, National Research Foundation of South Africa, Cape Town, South Africa
 
  Funding: SKA South Africa National Research Foundation of South Africa Department of Science and Technology.
MeerKAT, the 64-receptor radio telescope being built in the Karoo, South Africa, by Square Kilometre Array South Africa (SKA SA), comprises a large number of components. All components are interfaced to the Control and Monitoring (CAM) system via the Karoo Array Telescope Communication Protocol (KATCP). KATCP is used extensively for internal communications between CAM components and other subsystems. A KATCP interface exposes requests and sensors. Sampling strategies are set on sensors, ranging from several updates per second to infrequent updates. The sensor samples are of multiple types, from small integers to text fields. As the various components react to user input and sensor samples, the samples with timestamps need to be permanently stored and made available for scientists, engineers and operators to query and analyse. This paper presents how the storage infrastructure (dubbed Katstore) manages the volume, velocity and variety of this data. Katstore comprises several stages of data collection and transportation. The stages move the data from monitoring nodes to a storage node, then on to permanent and offsite storage. Additional information (e.g. type, description, units) about each sensor is stored with the samples.
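The staged collect-and-forward pattern described can be sketched as follows; the class name, batch sizes and the two-stage topology are illustrative guesses rather than Katstore's actual design:

```python
# Sketch of staged sample transport: timestamped sensor samples accumulate
# at a monitoring node and are flushed in batches toward permanent storage.
class Stage:
    def __init__(self, name, downstream=None, batch_size=3):
        self.name, self.downstream, self.batch_size = name, downstream, batch_size
        self.buffer, self.stored = [], []

    def ingest(self, sensor, timestamp, value):
        self.buffer.append((sensor, timestamp, value))
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        batch, self.buffer = self.buffer, []
        if self.downstream:            # forward toward the next stage
            for sample in batch:
                self.downstream.ingest(*sample)
        else:                          # final stage keeps the data
            self.stored.extend(batch)

permanent = Stage("permanent-storage")
node = Stage("monitoring-node", downstream=permanent)
```

Chaining further stages (storage node, offsite copy) follows the same pattern; per-sensor metadata such as type and units would travel alongside the samples.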
 
slides icon Slides THHD3O06 [29.051 MB]  