Paper | Title | Other Keywords | Page |
---|---|---|---|
MOCOAAB03 | The Spiral2 Control System Progress Towards the Commission Phase | controls, interface, PLC, EPICS | 8 |
The commissioning of the SPIRAL2 Radioactive Ion Beam facility at GANIL will start soon, requiring the control system components to be delivered in time. Parts of the system have already been validated during preliminary tests performed with ion and deuteron beams at low energy. The control system development results from the collaboration between the GANIL, CEA/IRFU and CNRS/IPHC laboratories, using appropriate tools and approaches. Based on EPICS, the control system follows a classical architecture. At the lowest level, the Modbus/TCP protocol is used as a fieldbus. Equipment is handled by IOCs (soft or VME/VxWorks), with a standardized software interface between the IOCs and the client applications on top. This upper layer consists of EPICS standard tools, CSS/BOY user interfaces within the so-called CSSop SPIRAL2 context suited for operation and, for machine tuning, high-level applications implemented as Java programs developed within a SPIRAL2 framework derived from the Open XAL one. Databases are used for equipment data and alarm archiving, to configure equipment and to manage the machine lattice and beam settings. A global overview of the system is presented.
Slides MOCOAAB03 [3.205 MB]
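As an illustration of the standardized IOC-to-client interface described in MOCOAAB03, the following minimal sketch reads and writes EPICS process variables from a Python client using pyepics. The PV names are hypothetical placeholders, not the actual SPIRAL2 naming convention.

```python
# Minimal EPICS client sketch (hypothetical PV names, not the real SPIRAL2 ones).
from epics import caget, caput

# Read a power-supply current readback published by a soft IOC.
current = caget("SP2:PS:DIPOLE01:CURRENT_RB")
print(f"Dipole current readback: {current} A")

# Write a new current setpoint through the standardized interface.
caput("SP2:PS:DIPOLE01:CURRENT_SP", 12.5, wait=True)
```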
MOPPC021 | Configuration System of the NSLS-II Booster Control System Electronics | controls, booster, software, kicker | 100 |
The National Synchrotron Light Source II (NSLS-II) is under construction at Brookhaven National Laboratory, Upton, USA. NSLS-II consists of a linac, transport lines, a booster synchrotron and the storage ring. The main features of the booster are a 1 or 2 Hz cycle and a beam energy ramp from 200 MeV up to 3 GeV in 300 ms. EPICS was chosen as the basis for the NSLS-II control system. The booster control system covers all parts of the facility, such as power supplies, the timing system, diagnostics, the vacuum system and many others. Each part includes a set of various electronic devices and many parameters which must be fully defined for the control system software. This paper describes an approach proposed for defining some of the equipment of the NSLS-II booster. It provides a description of the different entities of the facility in a uniform way. This information is used to generate configuration files for EPICS IOCs. The main goal of this approach is to keep information in one place and to eliminate data duplication. The approach also simplifies configuration and modification of the description and makes it clearer and easier to use for engineers and operators.
Poster MOPPC021 [0.240 MB]
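A minimal sketch of the kind of single-source configuration generation described in MOPPC021: a uniform device description (here a hypothetical Python list of dictionaries) is expanded into an EPICS substitutions file for IOC loading. The template name and macro fields are illustrative assumptions, not the actual NSLS-II schema.

```python
# Hypothetical device description; the real NSLS-II data lives in a central source.
devices = [
    {"name": "BR-PS:QF1", "type": "quadrupole_ps", "max_current": 150.0},
    {"name": "BR-PS:QD1", "type": "quadrupole_ps", "max_current": 150.0},
]

def write_substitutions(devs, path="power_supplies.substitutions"):
    """Expand the uniform description into an EPICS substitutions file."""
    lines = ['file "ps.template" {', "  pattern {P, MAX_CURR}"]
    for dev in devs:
        lines.append(f'  {{"{dev["name"]}", "{dev["max_current"]}"}}')
    lines.append("}")
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")

write_substitutions(devices)
```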
MOPPC023 | Centralized Data Engineering for the Monitoring of the CERN Electrical Network | interface, network, framework, controls | 107 |
The monitoring and control of the CERN electrical network involves a large variety of devices and software, ranging from acquisition devices to data concentrators, supervision systems and power network simulation tools. The main issue faced today in the engineering of such a large and heterogeneous system, which includes more than 20,000 devices and 200,000 tags, is that each device and software package has its own data engineering tool while much of the configuration data has to be shared between two or more devices: the same data needs to be entered manually into the different tools, leading to duplication of effort and many inconsistencies. This paper presents a tool called ENSDM which centralizes all the data needed to engineer the monitoring and control infrastructure in a single database, from which the configuration of the various devices is extracted automatically. This approach allows the user to enter the information only once and guarantees the consistency of the data across the entire system. The paper focuses more specifically on the configuration of the remote terminal unit (RTU) devices, the global supervision system (SCADA) and the power network simulation tools.
Poster MOPPC023 [1.253 MB]
MOPPC030 | Developments on the SCADA of CERN Accelerators Vacuum | controls, vacuum, PLC, software | 135 |
During the first three years of LHC operation, the priorities for the vacuum controls SCADA were to attend to user requests and to improve its ergonomics and efficiency. We have now reached: simplified and more uniform information access; automatic scripts instead of fastidious manual actions; functionalities and menus standardized across all accelerators; and enhanced tools for data analysis and maintenance interventions. Several decades of cumulative developments, based on heterogeneous technologies and architectures, have called for a homogenization effort. The Long Shutdown (LS1) provides the opportunity to further standardize our vacuum control systems around Siemens S7 PLCs and the PVSS SCADA. Meanwhile, we have been promoting exchanges with other groups at CERN and outside institutes: to follow the global update policy for software libraries, to discuss philosophies and development details, and to develop common products. Furthermore, while preserving the current functionalities, we are working on a convergence towards the CERN UNICOS framework.
Poster MOPPC030 [31.143 MB]
MOPPC035 | Re-integration and Consolidation of the Detector Control System for the Compact Muon Solenoid Electromagnetic Calorimeter | software, hardware, controls, interface | 154 |
Funding: Swiss National Science Foundation (SNSF)
The current shutdown of the Large Hadron Collider (LHC), following three successful years of physics data-taking, provides an opportunity for major upgrades to be performed on the Detector Control System (DCS) of the Electromagnetic Calorimeter (ECAL) of the Compact Muon Solenoid (CMS) experiment. The upgrades involve changes to both hardware and software, with particular emphasis on taking advantage of more powerful servers and updating third-party software to the latest supported versions. The considerable increase in available processing power enables a reduction from fifteen servers to three or four. To host the control system on fewer machines and to ensure that previously independent software components can run side by side without incompatibilities, significant changes in the software and databases were required. Additional work was undertaken to modernise and concentrate the I/O interfaces. The challenges of preparing and validating the hardware and software upgrades are described, along with the experience of migrating to this newly consolidated DCS.
Poster MOPPC035 [2.811 MB]
MOPPC055 | Revisiting CERN Safety System Monitoring (SSM) | monitoring, network, PLC, status | 218 |
CERN Safety System Monitoring (SSM) has been monitoring the state of health of the various access and personnel safety systems at CERN for more than three years. SSM implements monitoring of different operating systems, network equipment, storage, and special devices like PLCs, front ends, etc. It is based on the monitoring framework Zabbix, which supports alert notifications, issue escalation, reporting, distributed management, and automatic scalability. The emphasis of SSM is on the needs of maintenance and system operation, where timely and reliable feedback directly from the systems themselves is important to quickly pinpoint immediate or creeping problems. A new application of SSM is to anticipate availability problems through predictive trending, which allows upcoming operational issues and infrastructure requirements to be visualized and managed. Work is underway to extend the scope of SSM to all access and safety systems managed by the access and safety team, with upgrades to the monitoring methodology as well as to the visualization of results.
Poster MOPPC055 [1.537 MB]
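A minimal sketch of the kind of predictive trending mentioned in MOPPC055: fit a linear trend to a monitored metric (e.g. disk usage sampled by the monitoring system) and estimate when it will cross a threshold. The data and threshold values are made up for illustration; the actual SSM implementation inside Zabbix is not shown here.

```python
import numpy as np

# Hypothetical daily disk-usage samples in percent (one value per day).
days = np.arange(10)
usage = np.array([61.0, 62.1, 63.4, 64.0, 65.2, 66.1, 67.3, 68.0, 69.4, 70.1])

# Fit a linear trend and extrapolate to the alarm threshold.
slope, intercept = np.polyfit(days, usage, 1)
threshold = 90.0
days_to_threshold = (threshold - (slope * days[-1] + intercept)) / slope

print(f"Growth rate: {slope:.2f} %/day")
print(f"Estimated days until {threshold:.0f}% full: {days_to_threshold:.1f}")
```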
MOPPC057 | Data Management and Tools for the Access to the Radiological Areas at CERN | controls, radiation, interface, operation | 226 |
As part of the refurbishment of the PS personnel protection system, the radioprotection (RP) buffer zones and equipment have been incorporated into the design of the new access points, providing an integrated access concept for the radiation-controlled areas of the PS complex. The integration of the RP and access control equipment has been very challenging due to the lack of space in many of the zones. Although it was successfully carried out, our experience from the commissioning of the first installed access points shows that the integration should also include the software tools and procedures. This paper presents an inventory of all the tools and databases currently used (*) to manage access to the CERN radiological areas according to CERN's safety and radioprotection procedures. We summarize the problems and limitations of each tool as well as of the whole process, and propose a number of improvements for the different kinds of users, including the changes required in each of the tools. The aim is to optimize the access process and the operation and maintenance of the related tools by rationalizing and better integrating them.
(*) Access Distribution and Management, Safety Information Registration, Works Coordination, Access Control, Operational Dosimeter, Traceability of Radioactive Equipment, Safety Information Panel.
Poster MOPPC057 [1.955 MB]
MOPPC062 | Real-Time System Supervision for the LHC Beam Loss Monitoring System at CERN | monitoring, FPGA, detector, operation | 242 |
The strategy for machine protection and quench prevention of the Large Hadron Collider (LHC) at the European Organisation for Nuclear Research (CERN) is mainly based on the Beam Loss Monitoring (BLM) system. The LHC BLM system is one of the largest and most complex instrumentation systems deployed in the LHC. In addition to protecting the collider, the system also needs to provide a means of diagnosing machine faults and deliver feedback on the losses to the control room, as well as to several systems for their setup and analysis. In order to increase the dependability of the system, several layers of supervision have been implemented internally and externally to the system. This paper describes the different methods employed to achieve the expected availability and system fault detection.
MOPPC118 | Development of EPICS Accelerator Control System for the IAC 44 MeV Linac | controls, linac, EPICS, power-supply | 385 |
The Idaho Accelerator Center (IAC) of Idaho State University (ISU) operates nine low-energy accelerators. Since the beginning of the fall semester of 2012, the ISU Advanced Accelerator and Ultrafast Beam Lab (AAUL) group has been working to develop a new EPICS system to control 47 magnet power supplies for the IAC 44 MeV L-band linear accelerator. Its original control system was fully analog, which limited the reproducibility and stability achievable during accelerator operation. This paper describes our group's effort and accomplishment in developing a new EPICS system to control 15 Lambda EMS and 32 TDK-Lambda ZUP power supplies for the IAC L-band linear accelerator. In addition, we also describe several other useful tools, such as the save-and-restore function.
Poster MOPPC118 [1.175 MB]
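A minimal sketch of a save-and-restore function of the kind mentioned in MOPPC118, using pyepics and a JSON snapshot file. The PV list and file name are hypothetical; the actual IAC tool is not shown.

```python
import json
from epics import caget, caput

# Hypothetical setpoint PVs for a handful of magnet power supplies.
PVS = ["IAC:PS:Q1:CUR_SP", "IAC:PS:Q2:CUR_SP", "IAC:PS:BEND1:CUR_SP"]

def save_settings(path="settings.json"):
    """Read every setpoint and store it in a JSON snapshot."""
    snapshot = {pv: caget(pv) for pv in PVS}
    with open(path, "w") as f:
        json.dump(snapshot, f, indent=2)

def restore_settings(path="settings.json"):
    """Write the stored values back to the machine."""
    with open(path) as f:
        snapshot = json.load(f)
    for pv, value in snapshot.items():
        caput(pv, value, wait=True)
```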
MOPPC126 | !CHAOS: the "Control Server" Framework for Controls | controls, framework, distributed, software | 403 |
We report on the progress of !CHAOS*, a framework for the development of control and data acquisition services for particle accelerators and large experimental apparatuses. !CHAOS introduces to the world of controls a new approach for designing and implementing communications and data distribution among components and for providing the middle-layer services of a control system. Based on software technologies borrowed from high-performance Internet services, !CHAOS uses a centralized yet highly scalable, cloud-like approach to offer all the services needed for controlling and managing a large infrastructure. It includes a number of innovative features such as a high level of abstraction of services, devices and data, easy and modular customization, extensive data caching for enhanced performance, and the integration of all services in a common framework. Since the !CHAOS conceptual design was presented two years ago, the INFN group has been working on the implementation of the services and components of the software framework. Most of them have been completed and tested to evaluate performance and reliability. Some services are already installed and operational in experimental facilities at LNF.
* "Introducing a new paradigm for accelerators and large experimental apparatus control systems", L. Catani et al., Phys. Rev. ST Accel. Beams, http://prst-ab.aps.org/abstract/PRSTAB/v15/i11/e112804
Poster MOPPC126 [0.874 MB]
MOPPC130 | A New Message-Based Data Acquisition System for Accelerator Control | data-acquisition, controls, embedded, network | 413 |
The data logging system for the SPring-8 accelerator complex has been operating for 16 years as part of the MADOCA system. Collector processes periodically request distributed computers to collect sets of data over the synchronous ONC-RPC protocol at fixed cycles. We also developed a separate system, MyDAQ, for casual or temporary data acquisition, in which a data acquisition process running on a local computer pushes a BSD socket stream to a server at arbitrary times. Its "one stream per signal" strategy made data management simple, but the system has no scalability. We have developed a new data acquisition system which combines MADOCA-scale capacity with MyDAQ's simplicity for a new-generation accelerator project. The new system, based on the ZeroMQ messaging library and the MessagePack serialization library, provides high availability, asynchronous messaging, flexibility in data representation and scalability. Its input/output plug-ins accept multiple protocols and send data to various data systems. This paper describes the design, implementation, performance, reliability and deployment of the system.
Poster MOPPC130 [0.197 MB]
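A minimal sketch of the messaging pattern described in MOPPC130: a data acquisition client serializes a reading with MessagePack and pushes it over ZeroMQ to a collector. The socket type, endpoint and payload fields are illustrative assumptions, not the actual SPring-8 design.

```python
import time
import zmq
import msgpack

context = zmq.Context()

# Acquisition side: push serialized readings to a collector endpoint.
sender = context.socket(zmq.PUSH)
sender.connect("tcp://collector.example.org:5555")  # hypothetical endpoint

reading = {
    "signal": "sr_mag_ps_q1/current",   # hypothetical signal name
    "timestamp": time.time(),
    "value": [12.34, 12.35, 12.33],     # not limited to a single scalar
}
sender.send(msgpack.packb(reading, use_bin_type=True))

# Collector side (elsewhere): bind a PULL socket and unpack each message, e.g.
#   receiver = context.socket(zmq.PULL); receiver.bind("tcp://*:5555")
#   data = msgpack.unpackb(receiver.recv(), raw=False)
```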
MOPPC139 | A Framework for Off-line Verification of Beam Instrumentation Systems at CERN | framework, software, instrumentation, interface | 435 |
Many beam instrumentation systems require checks to confirm their readiness for beam, to detect any deterioration in performance and to identify physical problems or anomalies. Such tests have already been developed for several LHC instruments using the LHC sequencer, but the scope of this framework does not extend to all systems; it is notably absent in the pre-LHC injector chain. Furthermore, the operator-centric nature of the LHC sequencer means that sequencer tasks are not accessible to the hardware and software experts who are required to execute similar tests on a regular basis. As a consequence, ad hoc solutions involving code sharing and, in extreme cases, code duplication have evolved to satisfy the various use cases. In terms of long-term maintenance this is undesirable, given the often short-term tenure of developers at CERN and the importance of the uninterrupted stability of CERN's accelerators. This paper outlines the first results of an investigation into the existing analysis software and provides proposals for the future of such software.
MOPPC152 | Accelerator Lattice and Model Services | lattice, simulation, GUI, EPICS | 464 |
Funding: This work is supported by the U.S. Department of Energy Office of Science under Cooperative Agreement DE-SC0000661, and the Chinese Spallation Neutron Source Project.
Physics-model-based beam tuning applications are essential for complex accelerators. Traditionally, such applications acquire lattice data directly from a persistent data source and then carry out model computation within the applications themselves. However, this approach often suffers from poor performance and modeling tool limitations. A better architecture is to offload heavy database queries and model computation from the application instances. A database has been designed to host lattice and physics modeling data, while a set of web-based services provides lattice and model data for the beam tuning applications to consume. The preliminary lattice and model services are based on the standard J2EE GlassFish platform with a MySQL database as backend data storage. Such lattice and model services can greatly improve the performance and reliability of physics applications.
Poster MOPPC152 [0.312 MB]
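A minimal sketch of how a beam tuning application might consume such a web-based lattice service over HTTP, using the Python requests library. The endpoint URL and JSON fields are hypothetical placeholders; the actual service API is defined by the paper's implementation, not shown here.

```python
import requests

# Hypothetical REST endpoint serving lattice elements for a named lattice.
BASE_URL = "http://lattice.example.org/lattice/v1"

resp = requests.get(f"{BASE_URL}/lattices/booster/elements",
                    params={"type": "QUAD"}, timeout=5)
resp.raise_for_status()

for element in resp.json():
    # Field names are assumptions for illustration only.
    print(element["name"], element["s_position"], element["length"])
```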
MOPPC155 | NSLS II Middlelayer Services | lattice, interface, controls, EPICS | 467 |
Funding: Work supported under the auspices of the U.S. Department of Energy under Contract No. DE-AC02-98CH10886 with Brookhaven Science Associates, LLC, and in part by DOE Contract DE-AC02-76SF00515.
A service-oriented architecture has been designed for the NSLS-II project for its beam commissioning and daily operation. Middle-layer services are being actively developed, and some of them have been deployed on the NSLS-II control network to support beam commissioning. The services are based primarily on two technologies: RESTful web services and EPICS V4. They provide functions to take machine status snapshots, convert magnet settings between different unit systems, and serve lattice information and simulation results. This paper presents the latest status of service development for the NSLS-II project and our future development plans.
Poster MOPPC155 [2.079 MB]
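A minimal sketch of the kind of unit conversion such a middle-layer service performs, converting a magnet power supply current to an integrated field strength via a calibration polynomial and back. The coefficients and the conversion model are invented for illustration and are not NSLS-II calibration data.

```python
import numpy as np

# Hypothetical calibration polynomial: integrated field strength as a function
# of power supply current [A], highest-order coefficient first.
CALIBRATION = np.array([1.2e-6, 8.5e-3, 0.01])

def current_to_field(current_amps):
    """Convert a magnet current to an integrated field using the polynomial."""
    return float(np.polyval(CALIBRATION, current_amps))

def field_to_current(field):
    """Invert the calibration numerically to recover the required current."""
    coeffs = CALIBRATION.copy()
    coeffs[-1] -= field
    roots = np.roots(coeffs)
    physical = [r.real for r in roots if abs(r.imag) < 1e-9 and r.real >= 0]
    return min(physical)

print(current_to_field(100.0))   # ~0.872 for these made-up coefficients
print(field_to_current(0.872))   # ~100 A
```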
MOPPC156 | Virtual Accelerator at NSLS II Project | insertion, insertion-device, EPICS, lattice | 471 |
Funding: Work supported under the auspices of the U.S. Department of Energy under Contract No. DE-AC02-98CH10886 with Brookhaven Science Associates, LLC, and in part by DOE Contract DE-AC02-76SF00515.
A virtual accelerator has been developed at NSLS-II to support tools development, from physics studies and beam commissioning to beam operation. The physics results are provided by the Tracy simulation code through EPICS process variables, an approach originally implemented by Diamond Light Source. The latest virtual accelerator supports all major accelerator components, including all magnets (dipoles, quadrupoles, sextupoles), the RF cavity, insertion devices and other diagnostics devices (BPMs for example), and works properly for both linear machines and synchrotron rings. Two error mechanisms are implemented: random errors on each magnet setting, and systematic errors to simulate misalignment. It also provides online-model functions such as serving beta functions and closed-orbit data. At NSLS-II, five virtual accelerators are deployed, and three of them run simultaneously. These virtual accelerators have effectively supported tools development such as physics applications and other services such as Channel Finder. This paper presents the latest status of the virtual accelerator and our plans for its future development and deployment.
Poster MOPPC156 [1.393 MB]
TUCOBAB03 | Utilizing Atlassian JIRA for Large-Scale Software Development Management | software, controls, status, operation | 505 |
Funding: This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. #LLNL-ABS-632634
Used actively by the National Ignition Facility since 2004, the JIRA issue tracking system from Atlassian is now used for 63 different projects. NIF software developers and customers have created over 80,000 requests (issues) for new features and bug fixes. The largest NIF software project in JIRA is the Integrated Computer Control System (ICCS), with nearly 40,000 issues. In this paper we discuss how JIRA has been customized to fit our software development process. ICCS developed a custom workflow in JIRA for tracking code reviews, recording test results from both developers and a dedicated Quality Control team, and managing the product release process. JIRA's advanced customization capabilities have proven to be a great help in tracking key metrics about the ICCS development effort (e.g. developer workload). ICCS developers store software in a configuration management tool called AccuRev and document all software changes in each JIRA issue. Specialized tools developed by the NIF Configuration Management team analyze each software product release, ensuring that it contains only the exact expected changes.
Slides TUCOBAB03 [2.010 MB]
TUMIB05 | ANSTO, Australian Synchrotron, Metadata Catalogues and the Australian National Data Service | synchrotron, data-management, experiment, neutron | 529 |
Data citation, management and discovery are important to ANSTO, the Australian Synchrotron and the scientists that use them. Gone are the days when raw data was written to removable media and subsequently lost. The metadata catalogue MyTardis is used by both ANSTO and the Australian Synchrotron. Metadata is harvested from the raw experimental files of the neutron beam and X-ray instruments and catalogued in databases that are local to the facilities. The data is accessible via a web portal. Data policies are applied to embargo data prior to placing it in the public domain. Public-domain data is published to the Australian Research Data Commons using the OAI-PMH standard. The Commons is run by the Australian National Data Service (ANDS), which was the project sponsor. The Commons is a web-robot-friendly site. ANDS also sponsors digital object identifiers (DOIs) for deposited datasets, which allows raw data to become a first-class research output, allowing scientists who collect data to gain recognition in the same way as those who publish journal articles. Data is being discovered, cited and reused, and collaborations are being initiated through the Commons.
Slides TUMIB05 [1.623 MB]
Poster TUMIB05 [1.135 MB]
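A minimal sketch of harvesting public-domain records over OAI-PMH, the standard mentioned in TUMIB05, using Python's requests and the standard library XML parser. The repository URL is a placeholder; only the protocol verbs and the oai_dc metadata prefix come from the OAI-PMH standard itself.

```python
import requests
import xml.etree.ElementTree as ET

# Hypothetical OAI-PMH endpoint of a metadata repository.
ENDPOINT = "https://repository.example.org/oai"

resp = requests.get(ENDPOINT, params={"verb": "ListRecords",
                                      "metadataPrefix": "oai_dc"}, timeout=10)
resp.raise_for_status()

ns = {"oai": "http://www.openarchives.org/OAI/2.0/",
      "dc": "http://purl.org/dc/elements/1.1/"}

root = ET.fromstring(resp.content)
for record in root.iter("{http://www.openarchives.org/OAI/2.0/}record"):
    title = record.find(".//dc:title", ns)
    identifier = record.find(".//dc:identifier", ns)
    if title is not None:
        print(title.text, "->", identifier.text if identifier is not None else "n/a")
```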
TUMIB06 | Development of a Scalable and Flexible Data Logging System Using NoSQL Databases | controls, data-acquisition, operation, network | 532 |
We have developed a scalable and flexible data logging system for SPring-8 accelerator control. The current SPring-8 data logging system, powered by a relational database management system (RDBMS), has been storing log data for 16 years. With this experience, we recognized the lack of flexibility of an RDBMS for data logging, such as limited adaptability of data formats and acquisition cycles, complexity of data management and the absence of horizontal scalability. To solve these problems, we chose a combination of two NoSQL databases for the new system: Redis as a real-time data cache and Apache Cassandra for the perpetual archive. Logging data is stored in both databases, serialized by MessagePack with a flexible data format that is not limited to a single integer or real value. Apache Cassandra is a scalable and highly available column-oriented database, well suited for time-series logging data. Redis is a very fast in-memory key-value store that complements Cassandra's eventually consistent model. We developed a data logging system with ZeroMQ messaging and have proved its high performance and reliability in a long-term evaluation. It will be released for part of the control system this summer.
Slides TUMIB06 [0.182 MB]
Poster TUMIB06 [0.525 MB]
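A minimal sketch of the dual-write pattern described in TUMIB06: the latest value of a signal goes to Redis as a cache, while the full time series is appended to a Cassandra table, with the value serialized by MessagePack. Host names, keyspace and table schema are invented for illustration; the actual SPring-8 schema is not shown.

```python
import time
import msgpack
import redis
from cassandra.cluster import Cluster

# Hypothetical signal reading; the value can be a list, not just a scalar.
signal, value, ts = "sr_vac_gauge_01/pressure", [1.2e-7, 1.3e-7], time.time()
payload = msgpack.packb(value, use_bin_type=True)

# Real-time cache: keep only the latest sample per signal in Redis.
cache = redis.Redis(host="cache.example.org")
cache.set(signal, payload)

# Perpetual archive: append the sample to a Cassandra time-series table.
cluster = Cluster(["archive.example.org"])
session = cluster.connect("logging")  # hypothetical keyspace
session.execute(
    "INSERT INTO samples (signal, ts, data) VALUES (%s, %s, %s)",
    (signal, int(ts * 1000), payload),
)
```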
TUPPC003 | SDD toolkit: ITER CODAC Platform for Configuration and Development | EPICS, toolkit, framework, controls | 550 |
ITER will consist of roughly 200 plant system I&C systems (millions of variables in total), delivered in kind, which need to be integrated into the ITER control infrastructure. To integrate them smoothly, the CODAC team releases the Core Software environment every year, which consists of many applications. This paper focuses on the self-description data (SDD) toolkit implementation, a fully home-made ITER product. The SDD model has been designed with Hibernate/Spring to provide the information required to generate configuration files for CODAC services such as archiving, EPICS, alarms, SDN and basic HMIs. Users enter their configuration data via GUIs based on a web application and Eclipse. Snapshots of I&C projects can be dumped to XML. Different levels of validation, corresponding to the various stages of development, have been implemented: during integration this enables verification that I&C projects comply with our standards. The development of I&C projects continues with Maven utilities. In 2012, a new Eclipse perspective was developed to allow users to develop code, start their projects, develop new HMIs, retrofit their data into the SDD database and check out/commit from/to SVN.
Poster TUPPC003 [1.293 MB]
TUPPC004 | Scalable Archiving with the Cassandra Archiver for CSS | EPICS, controls, software, distributed | 554 |
An archive of process-variable values is an important part of most supervisory control and data acquisition (SCADA) systems, because it allows operators to investigate past events, thus helping to identify and resolve problems in the operation of the supervised facility. For large facilities like particle accelerators there can be more than one hundred thousand process variables that have to be archived. When these process variables change at a rate of one hertz or more, a single computer system typically cannot handle the data processing and storage. The Cassandra Archiver has been developed to provide a simple-to-use, scalable data-archiving solution. It plugs seamlessly into Control System Studio (CSS), providing quick and simple access to all archived process variables. An Apache Cassandra database is used for storing the data, automatically distributing it over many nodes and providing high-availability features. This contribution depicts the architecture of the Cassandra Archiver and presents performance benchmarks outlining its scalability and comparing it to traditional archiving solutions based on relational databases.
Poster TUPPC004 [3.304 MB]
TUPPC006 | Identifying Control Equipment | EPICS, controls, cryogenics, interface | 562 |
The cryogenic installations at DESY are spread widely over the DESY campus. Many new components have been and will be installed for the new European XFEL. Commissioning and testing take a lot of time. Local tag labels help identify the components, but typing in the names is error-prone. Local bar codes and/or DataMatrix codes can be used in conjunction with intelligent devices like smartphones to retrieve data directly from the control system. The application developed also shows information from the asset database, providing the asset properties of the individual hardware device, including the remaining warranty. Last but not least, cables are equipped with a bar code which helps to identify the start and end points of the cable and the related physical signal. This paper describes our experience with the mobile applications and the related background databases, which have already been operational for several years.
Poster TUPPC006 [0.398 MB]
TUPPC013 | Scaling Out of the MADOCA Database System for SACLA | controls, GUI, operation, monitoring | 574 |
MADOCA was adopted for the control system of SACLA, and the MADOCA database system was designed as a copy of the database system in SPring-8. The system achieved high redundancy because it had already been tested in SPring-8. However, the number of signals which the MADOCA system handles in SACLA is increasing drastically, and GUIs that require frequent database access have been developed. The load on the database system increased, and the response of the system was delayed on some occasions. We investigated the bottleneck of the system and, based on the results, decided to distribute the access over two servers. The primary server handles present data and signal properties; the other handles archived data, which is mounted on the primary server as a proxy table. In this way, we could divide the load between two servers while clients such as GUIs do not need any changes. We have tested the load and response of the system by adding 40,000 signals to the present 45,000 signals, whose data acquisition intervals are typically 2 s. The system was installed successfully and is operating without any interruption caused by high load on the database.
TUPPC014 | Development of SPring-8 Experimental Data Repository System for Management and Delivery of Experimental Data | experiment, data-management, interface, controls | 577 |
The SPring-8 experimental Data Repository system (SP8DR) is an online storage service built as one of the infrastructure services of SPring-8. SP8DR enables an experimental user to obtain, on demand via the Internet, the experimental data produced at a SPring-8 beamline. To make searching for the required data sets easier later on, the system stores experimental data together with metadata such as the experimental conditions, which is also useful in the post-experiment analysis process. As the framework for data management, we adopted DSpace, which is widely used in academic library information systems. We created two kinds of application software for registering experimental data simply and quickly. These applications record metadata sets in the SP8DR database, which holds relations to the experimental data on the storage system. This data management design allows applications with high-bandwidth data acquisition systems. In this contribution, we report on the SPring-8 experimental Data Repository system that has begun operation at SPring-8 beamlines.
TUPPC017 | Development of J-PARC Time-Series Data Archiver using Distributed Database System | distributed, EPICS, operation, linac | 584 |
J-PARC (Japan Proton Accelerator Research Complex) consists of a large amount of equipment. In the linac and the 3 GeV synchrotron, data from over 64,000 EPICS records used for apparatus control is collected. The data has so far been stored in an RDB system using PostgreSQL, but this is not sufficient in terms of availability, performance and extensibility. A new system architecture is therefore required, one which is flexible and can cope with data that will keep increasing for years to come. To address this problem, we considered the adoption of a distributed database architecture and constructed a demonstration system using Hadoop/HBase. We present the results of this demonstration.
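A minimal sketch of writing EPICS time-series samples into HBase from Python via the happybase library (Thrift-based), as one way to exercise the kind of Hadoop/HBase archiver evaluated in TUPPC017. The host, table name and row-key layout are assumptions for illustration, not the J-PARC design.

```python
import time
import struct
import happybase

# Connect to a hypothetical HBase Thrift gateway.
connection = happybase.Connection("hbase.example.org")
table = connection.table("pv_archive")  # hypothetical table with column family 'd'

def store_sample(pv_name, value, ts=None):
    """Append one sample; the row key combines PV name and timestamp in ms."""
    ts_ms = int((ts if ts is not None else time.time()) * 1000)
    row_key = f"{pv_name}:{ts_ms:013d}".encode()
    table.put(row_key, {b"d:value": struct.pack(">d", value)})

store_sample("LI_MEB:BPM01:X", 0.42)
connection.close()
```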
TUPPC022 | Centralized Software and Hardware Configuration Tool for Large and Small Experimental Physics Facilities | software, network, controls, EPICS | 591 |
All control system software, from hardware drivers up to user-facing PC applications, needs configuration information to work properly. This information includes such parameters as channel calibrations, network addresses, server responsibilities and more. Each software subsystem requires only part of the configuration parameters, but storing them separately from the whole configuration causes usability and reliability issues. On the other hand, storing all configuration in one centralized database would slow down software development by adding extra queries to the central database. This paper proposes a configuration tool that has the advantages of both approaches. Firstly, it uses a centralized, configurable graph database that can be manipulated through a web interface. Secondly, it can automatically export configuration information from the centralized database to any local configuration storage. The tool has been developed at BINP (Novosibirsk, Russia) and is used to configure the VEPP-2000 electron-positron collider (BINP, Russia), the Electron Linear Induction Accelerator (Snezhinsk, Russia) and the NSLS-II booster synchrotron (BNL, USA).
Poster TUPPC022 [1.441 MB]
TUPPC024 | Challenges to Providing a Successful Central Configuration Service to Support CERN’s New Controls Diagnostics and Monitoring System | controls, monitoring, diagnostics, framework | 596 |
The Controls Diagnostics and Monitoring service (DIAMON) provides monitoring and diagnostics tools to the operators in the CERN Control Centre. A recent re-engineering presented the opportunity to restructure its data management and to integrate it with the central Controls Configuration Service (CCS). The CCS provides configuration management for the controls system of all accelerators at CERN. The new facility had to cater for the configuration management of all agents monitored by DIAMON (more than 3,000 computers of different types) and provide deployment information, relations between metrics, and historical information. In addition, it had to be integrated into the operational CCS while ensuring stability and data coherency. An important design decision was to largely reuse the existing infrastructure in the CCS and adapt the DIAMON data management to it, e.g. by using the device/property model through a Virtual Devices framework to model the DIAMON agents. This article shows how these challenging requirements were successfully met, the problems encountered and their resolution. The new service architecture is presented: the database model, and the new and tailored processes and tools.
Poster TUPPC024 [2.741 MB]
TUPPC025 | Advantages and Challenges to the Use of On-line Feedback in CERN’s Accelerators Controls Configuration Management | controls, feedback, hardware, status | 600 |
The Controls Configuration Service (CCS) provides the configuration management facilities for the controls system of all CERN accelerators. It complies with configuration management standards, tracking the life of configuration items and their relationships, allowing identification, and triggering change management processes. Data stored in the CCS is extracted and propagated to the controls hardware for remote configuration. This article presents the ability of the CCS to audit items and verify conformance to specification through the implementation of on-line feedback, focusing on Front-End Computer (FEC) configurations. Long-standing problems existed in this area, such as discrepancies between the actual state of the FEC and the configuration sent to it at reboot, resulting in difficult-to-diagnose behaviour and disturbance for the operations team. The article discusses the solution architecture (tailored processes and tools) and the development and implementation challenges, as well as the advantages of this approach and the benefits to the user groups, from equipment specialists and controls system experts to the operators in the Accelerator Controls Centre.
Poster TUPPC025 [3.937 MB]
TUPPC026 | Concept and Prototype for a Distributed Analysis Framework for the LHC Machine Data | framework, operation, extraction, embedded | 604 |
The Large Hadron Collider (LHC) at CERN produces more than 50 TB of diagnostic data every year, shared between normal running periods and commissioning periods. The data is collected in different systems, like the LHC Post Mortem System (PM), the LHC Logging Database and different file catalogues. To analyse and correlate data from these systems it is necessary to extract it to a local workspace and to use scripts to obtain and correlate the required information. Since the amount of data can be huge (depending on the task to be achieved), this approach can be very inefficient. To cope with this problem, a new project was launched to bring the analysis closer to the data itself. This paper describes the concepts and the implementation of the first prototype of an extensible framework which will allow the integration of all the existing data sources as well as future extensions, like Hadoop* clusters or other parallelization frameworks.
* http://hadoop.apache.org/
Poster TUPPC026 [1.378 MB]
TUPPC027 | Quality Management of CERN Vacuum Controls | vacuum, controls, interface, framework | 608 |
The vacuum controls team is in charge of the monitoring, maintenance and consolidation of the control systems of all accelerators and detectors at CERN; this represents 6,000 instruments distributed along 128 km of vacuum chambers, often with heterogeneous architectures. In order to improve the efficiency of the services we provide to vacuum experts and accelerator operators, a Quality Management Plan is being put into place. The first step was the gathering of old documents and the centralisation of information concerning architectures, procedures, equipment and settings. It was followed by the standardisation of the naming convention across the different accelerators. Traceability of problems, requests, repairs and other actions has also been put into place. It goes together with the effort to identify each individual device by a coded label and to register it in a central database. We are also working on ways to record, retrieve, process and display information across several linked repositories; with these in place, the quality and efficiency of our services can only improve, and the corresponding performance indicators will become available.
Poster TUPPC027 [98.542 MB]
TUPPC028 | The CERN Accelerator Logging Service - 10 Years in Operation: A Look at the Past, Present, and Future | operation, extraction, controls, instrumentation | 612 |
During the 10 years since its first operational use, the scope and scale of the CERN Accelerator Logging Service (LS) has evolved significantly: from an LHC-specific service expected to store 1 TB/year to a CERN-wide service spanning the complete accelerator complex (including related sub-systems and experiments), currently storing more than 50 TB/year on-line for some 1 million signals. Despite the massive increase over initial expectations, the LS remains reliable and highly usable, as attested by the 5 million data extraction requests per day on average, from close to 1,000 users. Although a highly successful service, demands on the LS are expected to increase significantly as CERN prepares the LHC for running at top energy, which is likely to at least double current data volumes. Furthermore, the focus is now shifting firmly towards the need to perform complex analysis on logged data, which in turn presents new challenges. This paper reflects on 10 years as an operational service: how it has managed to scale to meet growing demands, what has worked well, and the lessons learned. On-going developments and future evolution are also discussed.
Poster TUPPC028 [3.130 MB]
TUPPC029 | Integration, Processing, Analysis Methodologies and Tools for Ensuring High Data Quality and Rapid Data Access in the TIM* Monitoring System | monitoring, real-time, controls, data-acquisition | 615 |
Processing, storing and analysing large amounts of real-time data is a challenge for every monitoring system. The performance of the system depends strongly on high-quality configuration data and on the ability of the system to cope with data anomalies. The Technical Infrastructure Monitoring system (TIM) addresses data quality issues by enforcing a workflow of strict procedures to integrate or modify data tag configurations. TIM's data acquisition layer architecture allows real-time analysis and rejection of irrelevant data. The discarded raw data (90,000,000 transactions/day) are stored in a database and then purged after gathering statistics. The remaining operational data (2,000,000 transactions/day) are transferred to a server running an in-memory database, ensuring rapid processing. These data are currently stored for 30 days, allowing ad hoc historical data analysis. In this paper we describe the methods and tools used to guarantee the quality of configuration data and highlight the advanced architecture that ensures optimal access to operational data, as well as the tools used to perform off-line data analysis.
* Technical Infrastructure Monitoring system
Poster TUPPC029 [0.742 MB]
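A minimal sketch of the kind of acquisition-layer filtering described in TUPPC029: raw samples are rejected unless they change by more than a configured deadband or a maximum repetition interval has elapsed, so only relevant data reaches the operational store. The tag name and configuration fields are invented for illustration.

```python
import time

# Hypothetical per-tag configuration: value deadband and a maximum silence time.
TAG_CONFIG = {"TI.COOLING.PUMP1.FLOW": {"deadband": 0.5, "max_silence_s": 60.0}}

_last_sent = {}  # tag -> (value, timestamp)

def accept(tag, value, now=None):
    """Return True if the sample is relevant and should be forwarded."""
    now = time.time() if now is None else now
    cfg = TAG_CONFIG[tag]
    prev = _last_sent.get(tag)
    if prev is None or abs(value - prev[0]) >= cfg["deadband"] \
            or now - prev[1] >= cfg["max_silence_s"]:
        _last_sent[tag] = (value, now)
        return True
    return False  # discarded: kept only for raw-data statistics

# Only the samples that move by >= 0.5 (or arrive after a long gap) pass.
for v in (10.0, 10.1, 10.2, 10.8, 10.85, 12.0):
    print(v, accept("TI.COOLING.PUMP1.FLOW", v))
```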
TUPPC030 | System Relation Management and Status Tracking for CERN Accelerator Systems | framework, software, interface, hardware | 619 |
The Large Hadron Collider (LHC) at CERN requires many systems to work in close interplay to allow reliable operation and, at the same time, ensure the correct functioning of the protection systems required when operating with large energies stored in the magnet system and particle beams. Examples of such systems are the magnets, power converters and quench protection systems, as well as higher-level systems like Java applications or server processes. All these systems have numerous links (dependencies) of different kinds between each other. The knowledge about these dependencies is available from different sources, like layout databases, Java imports, proprietary files, etc. Retrieving consistent information is difficult due to the lack of a unified way of retrieving the relevant data. This paper describes a new approach to establishing a central server instance which collects this information and provides it to the different clients used during commissioning and operation of the accelerator. Furthermore, it explains future visions for such a system, which include additional layers for distributing system information like operational status, issues or faults.
Poster TUPPC030 [4.175 MB]
TUPPC031 | Proteus: FRIB Configuration Database | controls, cavity, interface, operation | 623 |
Distributed Information Services for Control Systems (DISCS) is a framework for developing high-level information systems for an experimental physics facility. It comprises a set of cooperating components, each with a database, an API, and several applications. One of the core DISCS components is the Configuration Module, which is responsible for the management of devices, their layout, measurements, alignment, calibration, signals, and inventory. In this paper we describe FRIB's implementation of the Configuration Module, called Proteus. We describe its architecture, database schema, web-based GUI, EPICS V4 and REST services, and Java/Python APIs. It has been developed as a product that other labs can download and use, and it can be integrated with other independent systems. We describe the challenges of implementing such a system, our technology choices, and the lessons learned.
Poster TUPPC031 [1.248 MB]
TUPPC032 | Database-backed Configuration Service | controls, operation, interface, network | 627 |
Keck Observatory is in the midst of a major telescope control system upgrade. This upgrade includes a new database-backed configuration service which will be used to manage the many aspects of the telescope that need to be configured (e.g. site parameters, control tuning, limit values) for its control software, and which keeps the configuration data persistent between IOC restarts. This paper discusses the new configuration service, including its database schema, iocsh API, rich user interface and the many other features it provides. The solution provides automatic time-stamping, a history of all database changes, the ability to snapshot and load different configurations, and triggers to manage the integrity of the data collections. Configuration is based on a simple concept of controllers, components and their associated mapping. The solution also provides a failsafe mode that allows client IOCs to function if there is a problem with the database server. The paper also discusses why this new service is preferred over the file-based configuration tools that have been used at Keck up to now.
Poster TUPPC032 [0.849 MB]
TUPPC035 | A New EPICS Archiver | EPICS, controls, data-management, distributed | 632 |
This report presents a large-scale, high-performance distributed data storage system for acquiring and processing time-series data from modern accelerator facilities. Derived from the original EPICS Channel Archiver, this version consistently extends it through the integration of deliberately selected technologies, such as the HDF5 file format, the SciDB chunk-oriented interface, and an RDB-based representation of the DDS X-Types specification. These changes allowed the performance of the new version to scale towards data rates of 500 k scalar samples per second. Moreover, the new EPICS Archiver provides a common platform for managing both EPICS 3 records and composite data types, like images, from EPICS 4 applications.
Poster TUPPC035 [0.247 MB]
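A minimal sketch of appending time-series samples to an HDF5 file with h5py, one of the technologies the archiver in TUPPC035 builds on. The file layout (one extendable compound dataset per PV) is an assumption for illustration, not the archiver's actual schema.

```python
import time
import numpy as np
import h5py

# One extendable dataset per PV, holding (timestamp, value) pairs.
sample_dtype = np.dtype([("timestamp", "f8"), ("value", "f8")])

with h5py.File("archive.h5", "a") as f:
    name = "pv/SR:C01:BPM1:X"  # hypothetical PV name used as the dataset path
    if name in f:
        dset = f[name]
    else:
        dset = f.create_dataset(name, shape=(0,), maxshape=(None,),
                                dtype=sample_dtype, chunks=True)

    # Append a block of new samples by resizing the dataset.
    new = np.array([(time.time(), 0.123), (time.time() + 1.0, 0.125)],
                   dtype=sample_dtype)
    dset.resize(dset.shape[0] + len(new), axis=0)
    dset[-len(new):] = new
```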
TUPPC106 | Development of a Web-based Shift Reporting Tool for Accelerator Operation at the Heidelberg Ion Beam Therapy Center | ion, controls, operation, framework | 822 |
The HIT (Heidelberg Ion Therapy) centre is the first dedicated European accelerator facility for cancer therapy using both carbon ions and protons, located at the university hospital in Heidelberg. It provides three fully operational treatment rooms, two with fixed beam exits and one with a gantry. We are currently developing a web-based reporting tool for accelerator operations. Since medical treatment requires a high level of quality assurance, detailed reporting on beam quality, device failures and technical problems is even more important than in accelerator operation for science. The reporting tool allows the operators to create their shift reports with support from automatically derived data, i.e. by providing pre-filled forms based on data from the Oracle database that is part of the proprietary accelerator control system. The reporting tool is based on the Python-powered CherryPy web framework, using SQLAlchemy for object-relational mapping. The HTML pages are generated from templates, enriched with jQuery to provide desktop-like usability. We report on the system architecture of the tool and its current status, and show screenshots of the user interface.
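A generic sketch of the technology combination named in TUPPC106 (CherryPy for the web layer, SQLAlchemy for object-relational mapping), assuming SQLAlchemy 1.4+ and using SQLite as a stand-in for the control system's Oracle database. The model, routes and page content are hypothetical, not the HIT tool itself.

```python
import cherrypy
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()
engine = create_engine("sqlite:///shift_reports.db")  # stand-in for the real DB
Session = sessionmaker(bind=engine)

class ShiftReport(Base):
    __tablename__ = "shift_reports"
    id = Column(Integer, primary_key=True)
    shift = Column(String)
    summary = Column(String)

Base.metadata.create_all(engine)

class ReportApp:
    @cherrypy.expose
    def index(self):
        # List all stored shift reports as a simple HTML page.
        session = Session()
        rows = session.query(ShiftReport).all()
        items = "".join(f"<li>{r.shift}: {r.summary}</li>" for r in rows)
        return f"<html><body><h1>Shift reports</h1><ul>{items}</ul></body></html>"

    @cherrypy.expose
    def add(self, shift="day", summary=""):
        # Store a new report (in the real tool, pre-filled from control system data).
        session = Session()
        session.add(ShiftReport(shift=shift, summary=summary))
        session.commit()
        raise cherrypy.HTTPRedirect("/")

if __name__ == "__main__":
    cherrypy.quickstart(ReportApp())
```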
TUPPC112 | GeoSynoptic Panel | synchrotron, radiation, synchrotron-radiation, storage-ring | 840 |
Funding: Synchrotron Radiation Centre SOLARIS at Jagiellonian University, ul. Gronostajowa 7/P-1.6, 30-387 Kraków, Poland
Solaris is a third-generation Polish synchrotron under construction at the Jagiellonian University in Kraków, and the National Synchrotron Radiation Centre is a member of the Tango collaboration. The project is based on the 1.5 GeV storage ring being built simultaneously for the MAX IV project in Lund, Sweden. The Solaris project is a prime example of the benefits of using EU regional development funds and of sharing knowledge and resources for the rapid establishment of a national research infrastructure. Solaris is developing a highly customizable and adaptable application called the GeoSynoptic Panel. The main goal of the GeoSynoptic Panel is to provide a graphical map of devices based on information stored in the Tango database. This is achieved by providing additional device/class properties which describe the location and graphical components (such as icons and the particular GUI window) related to a particular device or class. The application is expected to reduce the time needed to prepare synoptic applications for each individual machine (or part of it) or subsystem, and to reduce the effort related to debugging and change management.
Poster TUPPC112 [19.249 MB]
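A minimal sketch of the mechanism the GeoSynoptic Panel relies on: storing and reading extra device properties (location, icon) in the Tango database, here with a recent PyTango binding. The device and property names are invented for illustration; the panel's actual property schema is defined by the paper and not shown here.

```python
import tango

db = tango.Database()

# Attach hypothetical layout properties to a device (done once, e.g. by a tool).
db.put_device_property("r1/mag/dipole-01", {
    "Location": ["building-A/ring-section-1"],
    "Icon": ["dipole.svg"],
})

# The synoptic panel later reads these properties to place the device on the map.
props = db.get_device_property("r1/mag/dipole-01", ["Location", "Icon"])
print(props["Location"], props["Icon"])
```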
TUPPC116 | Cheburashka: A Tool for Consistent Memory Map Configuration Across Hardware and Software | software, hardware, controls, interface | 848 |
The memory map of a hardware module is defined by the designer at the moment the firmware is specified. It is then used by software developers to define device drivers and front-end software classes. Maintaining consistency between hardware and its software is critical. In addition, the manual process of writing VHDL firmware on one side and the C++ software on the other is labour-intensive and error-prone. Cheburashka* is a software tool which eases this process. From a unique declaration of the memory map, created using the tool's graphical editor, it can generate the memory map VHDL package, the Linux device driver configuration for the front-end computer, and a FESA** class for debugging. An additional tool, GENA, is used to automatically create all the VHDL code required to build the associated register control block. These tools are now used by the hardware and software teams for the design of all new interfaces from FPGAs to VME or on-board DSPs, in the context of the extensive programme of development and renovation being undertaken in the CERN injector chain during LS1***. Several VME modules and their software have already been deployed and used in the SPS.
(*) Cheburashka is developed in the RF group at CERN. (**) FESA is an acronym for Front End Software Architecture, developed at CERN. (***) LS1: LHC Long Shutdown 1, from 2013 to 2014.
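A minimal sketch of the single-source idea behind Cheburashka: one declaration of the memory map drives the generation of artifacts for all consumers, here a C header with register offsets. The register list and output format are invented for illustration; the real tool generates VHDL packages, driver configuration and FESA classes from its own editor format.

```python
# Hypothetical memory-map declaration: register name, byte offset, access mode.
MEMORY_MAP = {
    "module": "rf_ctrl",
    "registers": [
        {"name": "CONTROL",  "offset": 0x00, "access": "rw"},
        {"name": "STATUS",   "offset": 0x04, "access": "ro"},
        {"name": "SETPOINT", "offset": 0x08, "access": "rw"},
    ],
}

def generate_c_header(memory_map, path=None):
    """Emit a C header with one #define per register offset."""
    module = memory_map["module"].upper()
    lines = [f"#ifndef {module}_REGS_H", f"#define {module}_REGS_H", ""]
    for reg in memory_map["registers"]:
        lines.append(f"#define {module}_{reg['name']}_OFFSET 0x{reg['offset']:02X}"
                     f"  /* {reg['access']} */")
    lines += ["", f"#endif /* {module}_REGS_H */", ""]
    text = "\n".join(lines)
    if path:
        with open(path, "w") as f:
            f.write(text)
    return text

print(generate_c_header(MEMORY_MAP))
```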
TUPPC119 | Exchange of Crucial Information between Accelerator Operation, Equipment Groups and Technical Infrastructure at CERN | operation, interface, laser, controls | 856 |
During CERN accelerator operation, a large number of events of varying criticality occur, related both to accelerator operation and to the management of the technical infrastructure. All these events are detected, diagnosed and managed by the Technical Infrastructure (TI) service in the CERN Control Centre (CCC); the equipment groups concerned have to solve the problem with minimal impact on accelerator operation. A new database structure and new interfaces have to be implemented to share the information received by TI, to improve communication between the control room and the equipment groups, to help post-mortem studies and to correlate events with accelerator operation incidents. Different tools such as alarm screens, logbooks, maintenance plans and work orders exist and are in use today. A project was initiated with the goal of integrating and standardizing this information in a common repository to be used by the different stakeholders through dedicated user interfaces.
Poster TUPPC119 [10.469 MB]
TUPPC126 | Visualization of Experimental Data at the National Ignition Facility | diagnostics, software, target, framework | 879 |
Funding: This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. #LLNL-ABS-633252
An experiment on the National Ignition Facility (NIF) may produce hundreds of gigabytes of target diagnostic data. Raw and analyzed data are accumulated in the NIF Archive database. The Shot Data Systems team provides several alternatives for accessing the data, including a web-based data visualization tool, a virtual file system for programmatic data access, a macro language for data integration, and a wiki to support collaboration. The data visualization application in particular adapts dashboard user-interface design patterns popularized by the business intelligence software community. The dashboard canvas provides the ability to rapidly assemble tailored views of data directly from the NIF Archive. This design has proven capable of satisfying most new visualization requirements in near real time. The separate file system and macro feature set support direct data access from a scientist's computer using scientific languages such as IDL, MATLAB and Mathematica. Underlying all these capabilities is a shared set of web services that provide APIs and transformation routines for the NIF Archive. The overall software architecture is presented with an emphasis on data visualization.
Poster TUPPC126 [4.900 MB]
TUPPC128 | Machine History Viewer for the Integrated Computer Control System of the National Ignition Facility | controls, GUI, software, target | 883 |
Funding: This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. #LLNL-ABS-633812
The Machine History Viewer is a recently developed capability of the Integrated Computer Control System (ICCS) software for the National Ignition Facility (NIF) that introduces the ability to analyze machine history data in order to troubleshoot equipment problems and to predict future failures. Flexible time correlation, text annotations, and multiple y-axis scales help users determine cause and effect in the complex machine interactions at work in the NIF. Report criteria can be saved for easy modification and reuse. Integration into the already familiar ICCS GUIs makes reporting easy for operators to access. Reports can be created to analyze trends over long periods of time, leading to improved calibration and better detection of equipment failures. Faster identification of current failures and anticipation of potential failures will improve NIF availability and shot efficiency. A standalone version of this application is under development that will provide users remote access to real-time data and analysis, allowing troubleshooting by experts without requiring them to come on site.
Poster TUPPC128 [4.826 MB]
TUCOCA05 | EPICS-based Control System for a Radiation Therapy Machine | EPICS, controls, neutron, cyclotron | 922 |
The Clinical Neutron Therapy System (CNTS) at the University of Washington Medical Center (UWMC) has been treating patients since 1984. Its new control system retains the original safety philosophy and delegation of functions among non-programmable hardware, PLCs, microcomputers with programs in ROM, and finally general-purpose computers. The latter are used only for data-intensive, prescription-specific functions. For these, a new EPICS-based control program replaces a locally developed C program in use since 1999. The therapy control portion uses a single soft IOC for control and a single EDM session for the operator's console. Prescriptions are retrieved from a PostgreSQL database and loaded into the IOC by a Python program; another Python program stores treatment records from the IOC back into the database. The system remains safe if the general-purpose computers or their programs crash or stop producing results. Different programs at different stages of the computation check for invalid data. Development activities, including formal specifications and automated testing, avoid and then check for design and programming errors.
Slides TUCOCA05 [0.175 MB]
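A minimal sketch of the data flow described in TUCOCA05: a Python program fetches a prescription from PostgreSQL (here via psycopg2) and loads its fields into the soft IOC as EPICS PVs with pyepics. The table, column and PV names are hypothetical, and the real system performs extensive validity checking that is omitted here.

```python
import psycopg2
from epics import caput

# Fetch one prescription from a hypothetical PostgreSQL schema.
conn = psycopg2.connect(dbname="therapy", host="dbserver", user="cnts")
cur = conn.cursor()
cur.execute(
    "SELECT field_name, dose_mu, gantry_angle FROM prescriptions WHERE id = %s",
    (1234,),
)
field_name, dose_mu, gantry_angle = cur.fetchone()
conn.close()

# Load the prescription into the soft IOC through hypothetical PV names.
caput("CNTS:RX:FIELD_NAME", field_name, wait=True)
caput("CNTS:RX:DOSE_MU", dose_mu, wait=True)
caput("CNTS:RX:GANTRY_ANGLE", gantry_angle, wait=True)
```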
TUCOCB02 | Middleware Proxy: A Request-Driven Messaging Broker for High Volume Data Distribution | controls, device-server, operation, diagnostics | 948 |
Nowadays, all major infrastructures and data centres (commercial and scientific) make extensive use of the publish-subscribe messaging paradigm, which helps to decouple the message sender (publisher) from the message receiver (consumer). This paradigm is also heavily used in the CERN accelerator control system, in the Proxy broker, a critical part of the Controls Middleware (CMW) project. The Proxy provides the aforementioned publish-subscribe facility and also supports the execution of synchronous read and write operations. Moreover, it enables service scalability and dramatically reduces the network resources and the overhead (CPU and memory) on the publisher machine required to serve all subscriptions. The Proxy was developed in modern C++, using state-of-the-art programming techniques (e.g. Boost) and following recommended software patterns for achieving low latency and high concurrency. The outstanding performance of the Proxy infrastructure has been confirmed during the last three years by delivering a high volume of LHC equipment data to many critical systems. This paper describes the Proxy architecture in detail, together with the lessons learned from operation and the plans for future evolution.
Slides TUCOCB02 [4.726 MB]
TUCOCB03 | A Practical Approach to Ontology-Enabled Control Systems for Astronomical Instrumentation | controls, detector, software, DSL | 952 |
Even though modern service-oriented and data-oriented architectures promise to deliver loosely coupled control systems, they are inherently brittle as they commonly depend on a-priori agreed interfaces and data models. At the same time, the Semantic Web and a whole set of accompanying standards and tools are emerging, advocating ontologies as the basis for knowledge exchange. In this paper we aim to identify a number of key ideas from the myriad of knowledge-based practices that can readily be implemented by control systems today. We demonstrate with a practical example (a three-channel imager for the Mercator Telescope) how ontologies developed in the Web Ontology Language (OWL) can serve as a meta-model for our instrument, covering as many engineering aspects of the project as needed. We show how a concrete system model can be built on top of this meta-model via a set of Domain Specific Languages (DSLs), supporting both formal verification and the generation of software and documentation artifacts. Finally, we explain how the available semantics can be exposed at run time by adding a "semantic layer" that can be browsed, queried, monitored, etc. by any OPC UA-enabled client.
![]() |
Slides TUCOCB03 [2.130 MB] | ||
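A flavour of what "ontology as meta-model" means in practice can be given with rdflib in Python. The namespace, classes, properties and instance below are invented stand-ins for the Mercator imager model described above, and dedicated OWL tooling (Protégé, owlready2, the authors' DSLs) would normally be used instead of raw triples.

```python
# Minimal sketch: declare a tiny instrument meta-model in OWL and query it.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import OWL, RDF, RDFS

INST = Namespace("http://example.org/instrument#")   # hypothetical namespace
g = Graph()
g.bind("inst", INST)

# Meta-model: a Detector is a kind of Component with a pixel count.
g.add((INST.Component, RDF.type, OWL.Class))
g.add((INST.Detector, RDF.type, OWL.Class))
g.add((INST.Detector, RDFS.subClassOf, INST.Component))
g.add((INST.hasPixelCount, RDF.type, OWL.DatatypeProperty))

# Concrete model: one detector instance of the (hypothetical) imager.
g.add((INST.redChannelCCD, RDF.type, INST.Detector))
g.add((INST.redChannelCCD, INST.hasPixelCount, Literal(2048 * 2048)))

# The same graph can drive code/documentation generation or run-time queries.
query = "SELECT ?d ?n WHERE { ?d a inst:Detector . ?d inst:hasPixelCount ?n }"
for row in g.query(query, initNs={"inst": INST}):
    print(row.d, row.n)
```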
TUCOCB04 | EPICS Version 4 Progress Report | EPICS, controls, operation, network | 956 |
|
|||
EPICS Version 4 is the next major revision of the Experimental Physics and Industrial Control System, a widely used software framework for controls in large facilities, accelerators and telescopes. The primary goal of Version 4 is to improve support for scientific applications by augmenting the control-centered EPICS Version 3 with an architecture that allows building scientific services on top of it. Version 4 provides a new standardized wire protocol, support for structured types, and parametrized queries. The long-term plans also include a revision of the IOC core layer. The first set of services, such as directory, archive retrieval, and save-set services, aims to improve the current EPICS architecture and enable interoperability. The first services and applications are now being deployed in running facilities. We present the current status of EPICS V4, the interoperation of EPICS V3 and V4, and how to create services such as accelerator modelling and large database access. These enable operators and physicists to write thin and powerful clients to support commissioning, beam studies and operations, and open up the possibility of sharing applications between different facilities. | |||
![]() |
Slides TUCOCB04 [1.937 MB] | ||
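For readers who want to try the new pvAccess wire protocol from the client side, the snippet below uses the p4p Python bindings as one possible entry point; the PV names are placeholders, and p4p is only one of several client libraries (pvaPy, pvAccessCPP) that speak EPICS V4.

```python
# Minimal pvAccess client sketch using p4p (PV names are placeholders).
from p4p.client.thread import Context

ctxt = Context('pva')                      # pvAccess protocol context

value = ctxt.get('DEMO:TEMPERATURE')       # returns a structured value, not just a scalar
print(value)

ctxt.put('DEMO:SETPOINT', 42.0)            # write a new value

# Monitor: the callback receives each update of the structured PV.
sub = ctxt.monitor('DEMO:TEMPERATURE', lambda v: print('update', v))
```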
TUCOCB07 | TANGO - Can ZMQ Replace CORBA ? | TANGO, CORBA, network, controls | 964 |
|
|||
TANGO (http://www.tango-controls.org) is a modern distributed device oriented control system toolkit used by a number of facilities to control synchrotrons, lasers and a wide variety of equipment for doing physics experiments. The performance of the network protocol used by TANGO is a key component of the toolkit. For this reason TANGO is based on the omniORB implementation of CORBA. CORBA offers an interface definition language with mappings to multiple programming languages, an efficient binary protocol, a data representation layer, and various services. In recent years a new series of binary protocols based on AMQP have emerged from the high frequency stock market trading business. A simplified version of AMQP called ZMQ (http://www.zeromq.org/) was open sourced in 2010. In 2011 the TANGO community decided to take advantage of ZMQ. In 2012 the kernel developers successfully replaced the CORBA Notification Service with ZMQ in TANGO V8. The first part of this paper will present the software design, the issues encountered and the resulting improvements in performance. The second part of this paper will present a study of how ZMQ could replace CORBA completely in TANGO. | |||
![]() |
Slides TUCOCB07 [1.328 MB] | ||
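From the client's point of view the move to ZMQ is transparent: events are still subscribed to through the usual TANGO API, as in the PyTango sketch below. The device and attribute names are placeholders chosen for illustration.

```python
# Subscribing to change events with PyTango; since TANGO 8 these events
# are carried over ZMQ rather than the CORBA Notification Service.
import tango

dev = tango.DeviceProxy("sr/diag/beam-current")   # placeholder device name

def on_change(event):
    if not event.err:
        print(event.attr_value.name, event.attr_value.value)

event_id = dev.subscribe_event("Current", tango.EventType.CHANGE_EVENT, on_change)
# ... later, when no longer interested ...
dev.unsubscribe_event(event_id)
```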
WECOBA02 | Distributed Information Services for Control Systems | controls, EPICS, interface, software | 1000 |
|
|||
During the design and construction of an experimental physics facility (EPF), a heterogeneous set of engineering disciplines, methods, and tools is used, making subsequent exploitation of the data difficult. In this paper, we describe a framework (DISCS) for building high-level applications for the commissioning, operation, and maintenance of an EPF that provides programmatic as well as graphical interfaces to its data and services. DISCS is a collaborative effort of BNL, FRIB, Cosylab, IHEP, and ESS. It comprises a set of cooperating services and applications and manages data such as machine configuration, lattice, measurements, alignment, cables, machine state, inventory, operations, calibration, and design parameters. The services and applications include Channel Finder, Logbook, Traveler, Unit Conversion, Online Model, and Save-Restore. Each component of the system has a database, an API, and a set of applications. The services are accessed through REST and EPICS V4. We also discuss the challenges of developing database services in an environment where requirements continue to evolve and developers are distributed among different laboratories with different technology platforms. | |||
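Because the DISCS services are exposed through REST (and EPICS V4), a thin client can be as simple as an HTTP query. The endpoint and query parameters below are hypothetical, loosely modelled on a ChannelFinder-style directory service, and are not the actual DISCS URLs.

```python
# Hypothetical REST query against a DISCS-style directory service.
import requests

BASE = "https://discs.example.org/ChannelFinder/resources/channels"

resp = requests.get(BASE, params={"~name": "SR:C01-*"},   # wildcard channel search
                    headers={"Accept": "application/json"})
resp.raise_for_status()

for channel in resp.json():
    props = {p["name"]: p["value"] for p in channel.get("properties", [])}
    print(channel["name"], props.get("elemName"))
```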
WECOBA05 | Understanding NIF Experimental Results: NIF Target Diagnostic Automated Analysis Recent Accomplishments | diagnostics, target, laser, software | 1008 |
|
|||
Funding: This work was performed under the auspices of the Lawrence Livermore National Security, LLC, (LLNS) under Contract No. DE-AC52-07NA27344. #LLNL-ABS-632818 The National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory is the most energetic laser system in the world. During a NIF laser shot, a 20-ns ultraviolet laser pulse is split into 192 separate beams, amplified, and directed to a millimeter-sized target at the center of a 10-m target chamber. To achieve the goals of studying energy science, basic science, and national security, NIF laser shot performance is being optimized around key metrics such as implosion shape and fuel mix. These metrics are accurately quantified after each laser shot using automated signal and image processing routines to analyze raw data from over 50 specialized diagnostics that measure x-ray, optical and nuclear phenomena. Each diagnostic's analysis comprises a series of inverse problems, timing analysis, and specialized processing. This talk will review the framework for general diagnostic analysis, give examples of specific algorithms used, and review the diagnostic analysis team's recent accomplishments. The automated diagnostic analysis for x-ray, optical, and nuclear diagnostics provides accurate key performance metrics and enables NIF to achieve its goals. |
|||
![]() |
Slides WECOBA05 [3.991 MB] | ||
WECOBA06 | Exploring No-SQL Alternatives for ALMA Monitoring System | monitoring, hardware, insertion, software | 1012 |
|
|||
The Atacama Large Millimeter/submillimeter Array (ALMA) will be a unique research instrument composed of at least 66 reconfigurable high-precision antennas, located on the Chajnantor plain in the Chilean Andes at an elevation of 5000 m. This paper describes the experience gained after several years of working with the monitoring system, which has the fundamental requirement to collect and store up to 100K variables. The original design is built on top of a cluster of relational database servers and network-attached storage with a Fibre Channel interface. As the number of monitoring points increases with the number of antennas included in the array, the current monitoring system has proven able to handle the increased data rate in the collection and storage area, but the data query interface has started to suffer serious performance degradation. A solution based on a NoSQL platform was explored as an alternative to the current long-term storage system; specifically, MongoDB has been chosen. Intermediate cache servers based on Redis are also introduced to allow faster online streaming of the most recent data to data analysis applications and web-based charting applications. | |||
![]() |
Slides WECOBA06 [0.916 MB] | ||
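The storage split described above, MongoDB for the long-term archive and Redis as a cache for the most recent samples, can be illustrated with a few lines of Python. The collection names, cache keys and sample layout are assumptions for illustration, not the actual ALMA schema.

```python
# Illustrative write path: archive a monitor sample in MongoDB and keep
# the latest value in Redis for fast online charts. Names are invented.
import time
import redis
from pymongo import MongoClient

archive = MongoClient("mongodb://archive-host:27017").monitoring.samples
cache = redis.Redis(host="cache-host", port=6379)

def store_sample(antenna, point, value):
    sample = {"antenna": antenna, "point": point,
              "value": value, "ts": time.time()}
    archive.insert_one(sample)                           # long-term store
    cache.set("latest:%s:%s" % (antenna, point), value)  # most recent value only

store_sample("DV01", "CRYOSTAT_TEMP4", 3.95)
print(cache.get("latest:DV01:CRYOSTAT_TEMP4"))
```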
THCOAAB07 | NIF Electronic Operations: Improving Productivity with iPad Application Development | operation, framework, network, diagnostics | 1066 |
|
|||
Funding: This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. #LLNL-ABS-632815 In an experimental facility like the National Ignition Facility (NIF), thousands of devices must be maintained during day-to-day operations. Teams within NIF have documented hundreds of procedures, or checklists, detailing how to perform this maintenance. These checklists have been paper-based, until now. NIF Electronic Operations (NEO) is a new web and iPad application for managing and executing checklists. NEO increases the efficiency of operations by reducing the overhead associated with paper-based checklists, and provides analysis and integration opportunities that were previously not possible. NEO's data-driven architecture allows users to manage their own checklists and provides checklist versioning, real-time input validation, detailed step-timing analysis, and integration with external task tracking and content management systems. Built with mobility in mind, NEO runs on an iPad and works without the need for a network connection. When executing a checklist, users capture various readings, photos, measurements and notes, which are then reviewed and assessed after the checklist's completion. NEO's design, architecture, iPad application and uses throughout the NIF will be discussed.
|||
![]() |
Slides THCOAAB07 [1.237 MB] | ||
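The "data-driven architecture" and "real-time input validation" mentioned in the NEO abstract can be pictured as checklist steps defined purely as data, with a small engine validating operator input against each step definition. The step schema below is invented for illustration; NEO's actual data model is not described here.

```python
# Sketch of a data-driven checklist step with input validation.
# The step definitions (normally stored server-side) are invented examples.
CHECKLIST = [
    {"id": 1, "prompt": "Record amplifier head temperature", "type": "float",
     "units": "degC", "min": 15.0, "max": 35.0},
    {"id": 2, "prompt": "Photograph diagnostic insertion", "type": "photo"},
]

def validate(step, value):
    """Return (ok, message) for an operator-entered value."""
    if step["type"] == "float":
        try:
            v = float(value)
        except ValueError:
            return False, "numeric value required"
        if not step["min"] <= v <= step["max"]:
            return False, "out of range %(min)s to %(max)s %(units)s" % step
    return True, "ok"

print(validate(CHECKLIST[0], "22.4"))   # (True, 'ok')
print(validate(CHECKLIST[0], "80"))     # (False, 'out of range ...')
```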
THPPC002 | Configuration Management for Beam Delivery at TRIUMF/ISAC | ion-source, ion, ISAC, controls | 1094 |
|
|||
The ISAC facility at TRIUMF delivers simultaneous beams from different ion sources to multiple destinations. More beams will be added by the ARIEL facility, which is presently under construction. To ensure co-ordination of beam delivery, beam path configuration management has been implemented. The process involves beam path selection, configuration setup and configuration monitoring. In addition, the system supports save and restore of beamline device settings, scaling of beam optics devices for beam energy and mass, beam-path-specific operator displays, comparison of present and previous beam tunes, and alarm annunciation of device readings outside prescribed ranges. Design factors, re-usability strategies, and results are described. | |||
![]() |
Poster THPPC002 [0.508 MB] | ||
THPPC006 | REMBRANDT - REMote Beam instRumentation And Network Diagnosis Tool | controls, monitoring, network, status | 1103 |
|
|||
As with any other large accelerator complex in operation today, the beam instrumentation devices and associated data acquisition components for the coming FAIR accelerators will be distributed over a large area and partially installed in inaccessible, radiation-exposed areas. Besides operating the devices themselves, e.g. acquiring data, it is mandatory to also control the supporting LAN-based components such as VME/μTCA crates, front-end computers (FECs), middleware servers and more. Fortunately, many COTS systems provide means for remote control and monitoring using a variety of standardized protocols like SNMP, IPMI or iAMT. REMBRANDT is a Java framework which allows the authorized user to monitor and control remote systems while hiding the underlying protocols and connection information such as IP addresses, user IDs and passwords. Beyond voltage and current control, the main features are remote power switching of the systems and observation of the reverse-telnet boot process of FECs. REMBRANDT is designed to be easily extensible with new protocols and features. The software concept, including the client-server part and the database integration, will be presented. | |||
![]() |
Poster THPPC006 [3.139 MB] | ||
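REMBRANDT itself is a Java framework, but the kind of remote health query it hides behind its user interface can be shown with a short SNMP read in Python. The host, community string and OID below are placeholders, not part of the FAIR installation.

```python
# Placeholder SNMP read of a crate/server health OID using pysnmp.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

error_indication, error_status, error_index, var_binds = next(getCmd(
    SnmpEngine(),
    CommunityData('public', mpModel=1),                   # SNMP v2c, placeholder community
    UdpTransportTarget(('vme-crate-01.example.org', 161)),
    ContextData(),
    ObjectType(ObjectIdentity('SNMPv2-MIB', 'sysDescr', 0))))

if error_indication:
    print("query failed:", error_indication)
else:
    for name, value in var_binds:
        print(name.prettyPrint(), "=", value.prettyPrint())
```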
THPPC012 | The Equipment Database for the Control System of the NICA Accelerator Complex | controls, TANGO, software, collider | 1111 |
|
|||
The report describes the equipment database for the control system of the Nuclotron-based Ion Collider fAcility (NICA, JINR, Russia). The database will contain information about the hardware, software, computers and network components of the control system, their main settings and parameters, and the responsible persons. The equipment database should help to implement Tango as the control system of the NICA accelerator complex. The report also describes a web service to display, search, and manage the database. | |||
![]() |
Poster THPPC012 [1.070 MB] | ||
THPPC013 | Configuration Management of the Control System | controls, TANGO, software, PLC | 1114 |
|
|||
The control system of a big research facility like a synchrotron involves a lot of work to keep hardware and software synchronised with each other for good coherence. Modern control system middleware infrastructures like Tango use a database to store all values necessary to communicate with the devices. Nevertheless, it is necessary to configure the driver of a power supply or a motor controller before being able to communicate with any software of the control system. This is part of configuration management, which involves keeping track of thousands of pieces of equipment and their properties. In recent years, several DevOps tools like Chef, Puppet, Ansible or SpaceMaster have been developed by the OSS community. They are now mandatory for configuring the thousands of servers that make up clusters or cloud services. Defining a set of coherent components, enabling continuous deployment in synergy with continuous integration, reproducing a control system for simulation, and rebuilding and tracking changes even in the hardware configuration are among the use cases. We will explain the MAX IV strategy regarding configuration management. | |||
![]() |
Poster THPPC013 [4.620 MB] | ||
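One concrete piece of the puzzle, pushing declarative configuration data into the Tango database so that hardware and software stay coherent, might look like the PyTango sketch below. The device names, properties and configuration dictionary are invented, and in practice a DevOps tool such as Ansible would drive this step from version-controlled data.

```python
# Sketch: apply declarative configuration data to the Tango database.
# Device names and properties are invented examples.
import tango

DESIRED_CONFIG = {
    "r3/mag/ps-01":  {"SerialLine": ["/dev/ttyS0"], "MaxCurrent": ["250.0"]},
    "r3/mot/slit-01": {"Address": ["17"], "StepsPerUnit": ["2000"]},
}

db = tango.Database()
for device, properties in DESIRED_CONFIG.items():
    db.put_device_property(device, properties)   # idempotent: re-running converges
    print("configured", device)
```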
THPPC017 | Control System Configuration Management at PSI Large Research Facilities | EPICS, controls, hardware, software | 1125 |
|
|||
The control system of the PSI accelerator facilities and their beamlines consists mainly of so-called Input Output Controllers (IOCs) running EPICS. There are several flavors of EPICS IOCs at PSI, running on different CPUs, different underlying operating systems and different EPICS versions. We have hundreds of IOCs which control the facilities at PSI. The goal of control system configuration management is to provide a set of tools that allow a consistent and uniform configuration of all IOCs. In this context an Oracle database contains all hardware-specific information, including the CPU type, operating system and EPICS version. The installation tool connects to the Oracle database; depending on the IOC type, a set of files (or symbolic links) is created which points to the required operating system, libraries or EPICS configuration files in the boot directory. In this way a transparent and user-friendly IOC installation is achieved. The control system expert can check the IOC installation, the boot information, as well as the status of loaded EPICS process variables by using web applications. | |||
![]() |
Poster THPPC017 [0.405 MB] | ||
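The installation step described above, turning database records into a boot directory of files and symbolic links, reduces to something like the following sketch. The record fields, paths and link names are assumptions for illustration rather than the actual PSI tool.

```python
# Sketch: build an IOC boot directory from database-derived records.
# The record fields and filesystem layout are hypothetical.
import os

ioc_records = [
    {"name": "IOC-VAC-01", "os": "vxWorks-6.9", "epics": "3.14.12", "cpu": "mv5100"},
    {"name": "IOC-RF-02",  "os": "SL6-x86_64",  "epics": "3.14.12", "cpu": "pc"},
]

for ioc in ioc_records:
    bootdir = os.path.join("/ioc/boot", ioc["name"])
    os.makedirs(bootdir, exist_ok=True)
    links = {
        "os":          "/opt/os/%s"          % ioc["os"],
        "epics":       "/opt/epics/base-%s"  % ioc["epics"],
        "startup.cmd": "/opt/templates/%s/startup.cmd" % ioc["cpu"],
    }
    for link_name, target in links.items():
        link_path = os.path.join(bootdir, link_name)
        if os.path.islink(link_path):
            os.remove(link_path)
        os.symlink(target, link_path)   # IOC always boots against consistent versions
```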
THPPC043 | Implement an Interface for Control System to Interact with Oracle Database at SSC-LINAC | EPICS, interface, controls, linac | 1171 |
|
|||
The SSC-LINAC control system is based on the EPICS architecture. The control system includes ion sources, vacuum, digital power supplies, etc. Some of these subsystems need to interact with an Oracle database; for example, the power supply control subsystem needs to retrieve parameters while the power supplies are running and also needs to store data in Oracle. We therefore designed and implemented an interface for EPICS IOCs to interact with the Oracle database. The interface is a soft IOC, itself based on the EPICS architecture, so other IOCs and OPIs can use it to interact with Oracle via the Channel Access protocol. | |||
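A soft IOC bridging Channel Access and Oracle, as described above, could in principle be sketched in Python with pcaspy and cx_Oracle. The PV names, credentials, table and SQL are invented placeholders, and the actual SSC-LINAC interface may well be implemented with different tools.

```python
# Illustrative Channel Access <-> Oracle bridge as a soft IOC (pcaspy + cx_Oracle).
# PV names, credentials and SQL are invented placeholders.
import cx_Oracle
from pcaspy import SimpleServer, Driver

prefix = "SSC:DB:"
pvdb = {"PS01:SetCurrent": {"prec": 3}}

class OracleDriver(Driver):
    def __init__(self):
        super(OracleDriver, self).__init__()
        self.conn = cx_Oracle.connect("ctrl", "secret", "oracle-host/SSCDB")

    def write(self, reason, value):
        cur = self.conn.cursor()
        cur.execute("INSERT INTO ps_settings (pv, value) VALUES (:1, :2)",
                    (reason, float(value)))           # archive the written value
        self.conn.commit()
        self.setParam(reason, value)                  # update the PV itself
        return True

server = SimpleServer()
server.createPV(prefix, pvdb)
driver = OracleDriver()
while True:
    server.process(0.1)                               # serve Channel Access clients
```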
THPPC050 | Upgrade System of Vacuum Monitoring of Synchrotron Radiation Sources of National Research Centre Kurchatov Institute | vacuum, controls, synchrotron, operation | 1183 |
|
|||
A modernization project for the vacuum system of the synchrotron radiation source at the National Research Centre Kurchatov Institute (NRC KI) has been designed and implemented. It includes a transition to new high-voltage power sources for the NMD and PVIG-0.25/630 pumps. The system is controlled via CAN bus, and the vacuum is monitored by measuring pump currents in the range 0.0001–10 mA. Status visualization, data collection and data storage are implemented with Citect SCADA 7.2 Server and SCADA Historian Server. The system ensures a vacuum of 10⁻⁷ Pa. This work increases the efficiency and reliability of the vacuum system, making it possible to improve the main parameters of the SR source. | |||
THPPC057 | Validation of the Data Consolidation in Layout Database for the LHC Tunnel Cryogenics Controls Package | controls, cryogenics, PLC, operation | 1197 |
|
|||
The control system of the Large Hadron Collider cryogenics manages over 34,000 instrumentation channels which are essential for populating the software of the PLCs (Programmable Logic Controllers) and SCADA (Supervisory Control and Data Acquisition) systems responsible for maintaining the LHC at the appropriate operating conditions. The control system specifications are generated by the CERN UNICOS (Unified Industrial Control System) framework using a set of database views extracted from the LHC layout database. The LHC layout database is part of the CERN database managing centralized and integrated data, documenting the whole CERN infrastructure (the accelerator complex) by modeling its topographical organization (“layouts”) and defining its components (functional positions) and the relationships between them. This paper describes the methodology of the data validation process, including the development of the software tools used to update the database from the original values to manually adjusted values after three years of machine operation, as well as the update of the data to accommodate the upgrade of the UNICOS Continuous Process Control package (CPC). | |||
THPPC064 | The HiSPARC Control System | detector, controls, software, Windows | 1220 |
|
|||
Funding: Nikhef The purpose of the HiSPARC project is twofold. First, the physics goal: detection of high-energy cosmic rays. Second, to offer an educational program in which high-school students participate by building their own detection station and analysing their data. Around 70 high schools, spread over the Netherlands, are participating. Data are centrally stored at Nikhef in Amsterdam. The detectors, located on the roofs of the high schools, are connected by means of a USB interface to a Windows PC, which itself is connected to the high school's network and further on to the public internet. Each station is equipped with GPS providing the exact location and accurate timing. This paper covers the setup, building and usage of the station software. It contains a LabVIEW run-time engine, services for remote control and monitoring, a series of Python scripts and a local buffer. An important task of the station software is to control the dataflow, event building and submission to the central database. Furthermore, several global aspects are described, such as the source repository, the station software installer and organization. Windows, USB, FTDI, LabVIEW, VPN, VNC, Python, Nagios, NSIS, Django
|||
THPPC065 | Software System for Monitoring and Control at the Solenoid Test Facility | controls, monitoring, operation, solenoid | 1224 |
|
|||
Funding: This work was supported by the U.S. Department of Energy. The architecture and implementation aspects of the control and monitoring system developed for Fermilab's new Solenoid Test Facility will be presented. At the heart of the system lies a highly configurable scan subsystem targeted at precise measurements of low temperatures with uniformly incorporated control elements. A multi-format archival system allows for the use of flat files, XML, and a relational database for storing data, and a Web-based application provides access to historical trends. The DAQ and computing platform includes COTS elements. The layered architecture separates the system into Windows operator stations, the real-time operating-system-based DAQ and controls, and the FPGA-based time-critical and safety elements. The use of the EPICS CA protocol with LabVIEW opens the system to the many available EPICS utilities.
|||
![]() |
Poster THPPC065 [2.059 MB] | ||
THPPC078 | The AccTesting Framework: An Extensible Framework for Accelerator Commissioning and Systematic Testing | framework, GUI, LabView, hardware | 1250 |
|
|||
The Large Hadron Collider (LHC) at CERN requires many systems to work in close interplay to allow reliable operation and, at the same time, ensure the correct functioning of the protection systems required when operating with large energies stored in the magnet system and particle beams. The systems for magnet powering and beam operation are qualified during dedicated commissioning periods and are retested after corrective or regular maintenance. Based on the experience acquired during the initial commissioning campaigns of the LHC magnet powering system, a framework was developed to orchestrate the thousands of tests for electrical circuits and other systems of the LHC. The framework was carefully designed to be extensible. Currently, work is ongoing to prepare and extend the framework for the re-commissioning of the machine protection systems at the end of 2014, after the LHC Long Shutdown. This paper describes the concept, current functionality and vision of this framework to cope with the required dependability of test execution and analysis. | |||
![]() |
Poster THPPC078 [5.908 MB] | ||
THPPC082 | Monitoring of the National Ignition Facility Integrated Computer Control System | controls, experiment, framework, interface | 1266 |
|
|||
Funding: This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. #LLNL-ABS-632812 The Integrated Computer Control System (ICCS) used by the National Ignition Facility (NIF) provides comprehensive status and control capabilities for operating approximately 100,000 devices through 2,600 processes located on 1,800 servers, front-end processors and embedded controllers. Understanding the behavior of complex, large-scale, operational control software and improving system reliability and availability is a critical maintenance activity. In this paper we describe the ICCS diagnostic framework, with tunable detail levels and automatic rollovers, and its use in analyzing system behavior. ICCS recently added Splunk as a tool for improved archiving and analysis of these log files (about 20 GB, or 35 million logs, per day). Splunk now continuously captures all ICCS log files for both real-time examination and exploration of trends. Its powerful search query language and user interface allow interactive exploration of log data to visualize specific indicators of system performance, assist in problem analysis, and provide instantaneous notification of specific system behaviors.
|||
![]() |
Poster THPPC082 [4.693 MB] | ||
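The ICCS diagnostic framework is Java-based and its Splunk integration is site-specific, but the two ideas called out above, tunable detail levels and automatic rollovers, can be shown generically with Python's standard logging module. The file path and logger names are placeholders.

```python
# Generic illustration of tunable log levels with automatic rollover
# (the real ICCS framework is Java and far more elaborate).
import logging
from logging.handlers import TimedRotatingFileHandler

handler = TimedRotatingFileHandler("/var/log/iccs/frontend.log",
                                   when="midnight", backupCount=14)
handler.setFormatter(logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s"))

log = logging.getLogger("iccs.frontend")
log.addHandler(handler)
log.setLevel(logging.INFO)          # tunable detail level, e.g. DEBUG during analysis

log.info("shot setup started")
log.debug("verbose detail, suppressed at INFO level")
```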
THPPC105 | The LHC Injection Sequencer | injection, kicker, operation, controls | 1307 |
|
|||
The LHC is the largest accelerator at CERN. The two beams of the LHC collide in four experiments, and each beam can be composed of up to 2808 high-intensity bunches. The beams are produced at the linac, then shaped and accelerated in the LHC injector chain to 450 GeV. The injected beam contains up to 288 high-intensity bunches, corresponding to a stored energy of 2 MJ. To build, for each LHC ring, the complete bunch scheme that ensures the desired number of collisions for each experiment, several injections from the SPS into the LHC are needed. The type of beam that is needed and the longitudinal placement of each injection have to be defined with care. This process is controlled by the injection sequencer, which orchestrates the beam requests. Predefined filling schemes stored in a database indicate the number of injections, the type of beam and the longitudinal position of each. The injection sequencer sends the corresponding beam requests to the CBCM, the central timing manager, which in turn synchronizes the beam production in the injectors. This paper describes how the injection sequencer is implemented and its interaction with the other systems involved in the injection process. | |||
![]() |
Poster THPPC105 [0.606 MB] | ||
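The relationship between a filling scheme and the beam requests sent to the central timing can be pictured with the toy sequencer loop below. The scheme fields and the request call are hypothetical stand-ins, not the real LHC injection software.

```python
# Toy illustration of a filling-scheme-driven injection sequence.
# Field names and the request interface are invented.
FILLING_SCHEME = [                      # normally loaded from the database
    {"ring": 1, "beam_type": "25ns_288b", "sps_cycle": "LHC25", "bucket": 101},
    {"ring": 1, "beam_type": "25ns_288b", "sps_cycle": "LHC25", "bucket": 1001},
    {"ring": 2, "beam_type": "25ns_288b", "sps_cycle": "LHC25", "bucket": 101},
]

def request_beam(injection):
    """Placeholder for the beam request sent to the central timing (CBCM)."""
    print("requesting %(beam_type)s for ring %(ring)d at bucket %(bucket)d"
          % injection)

def run_sequence(scheme):
    for injection in scheme:
        request_beam(injection)
        # In reality the sequencer now waits for extraction, injection quality
        # checks and operator confirmation before moving to the next injection.

run_sequence(FILLING_SCHEME)
```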
THCOBB04 | Overview of the ELSA Accelerator Control System | controls, interface, hardware, Linux | 1396 |
|
|||
The Electron Stretcher Facility ELSA provides a beam of polarized electrons with a maximum energy of 3.2 GeV for hadron physics experiments. The in-house developed control system has been continuously improved during the last 15 years of operation. Its top layer consists of a distributed shared-memory database and several core applications which run on a Linux host. The interconnectivity to hardware devices is built up with a second layer of the control system operating on PCs and VME systems. High-level applications are integrated into the control system using C and C++ libraries. An event-based messaging system notifies attached applications about parameter updates in near real-time. The overall system structure and specific implementation details of the control system will be presented. | |||
![]() |
Slides THCOBB04 [0.527 MB] | ||
THCOBA01 | Evolution of the Monitoring in the LHCb Online System | monitoring, status, interface, distributed | 1408 |
|
|||
The LHCb online system relies on a large and heterogeneous IT infrastructure: it comprises more than 2000 servers and embedded systems and more than 200 network devices. The low-level monitoring of the equipment was originally done with Nagios. In 2011, we replaced the single Nagios instance with a distributed Icinga setup, presented at ICALEPCS 2011. This paper presents, with more hindsight, the improvements we observed as well as the problems encountered. Finally, we describe some of our prospects for the future after the Long Shutdown period, namely Shinken and Ganglia. | |||
![]() |
Slides THCOBA01 [1.426 MB] | ||
THCOBA06 | Virtualization and Deployment Management for the KAT-7 / MeerKAT Control and Monitoring System | software, hardware, network, controls | 1422 |
|
|||
Funding: National Research Foundation (NRF) of South Africa To facilitate efficient deployment and management of the control and monitoring software of the South African 7-dish Karoo Array Telescope (KAT-7) and the forthcoming Square Kilometre Array (SKA) precursor, the 64-dish MeerKAT telescope, server virtualization and automated deployment using a host configuration database are used. The advantages of virtualization are well known; by adding automated deployment from a configuration database, additional advantages accrue: server configuration becomes deterministic, development and deployment environments match more closely, system configuration can easily be version controlled, and systems can easily be rebuilt when hardware fails. We chose the Debian GNU/Linux based Proxmox VE hypervisor with the OpenVZ single-kernel container virtualization method, along with Fabric (a Python SSH automation library) based deployment automation and a custom configuration database. This paper presents the rationale behind these choices, our current implementation and our experience with it, and a performance evaluation of OpenVZ and KVM. Tests include a comparison of application-specific networking performance over 10GbE using several network configurations.
|||
![]() |
Slides THCOBA06 [5.044 MB] | ||
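As a flavour of the Fabric-based deployment automation mentioned above, the snippet below pushes a build selected from a (hypothetical) host configuration database to each host and restarts a role-specific service. Hostnames, paths and commands are invented, and the actual KAT-7/MeerKAT tooling is more extensive.

```python
# Sketch of configuration-database-driven deployment with Fabric.
# Hosts, paths and commands are invented examples.
from fabric import Connection

HOST_DB = {
    "kat-mon-01.example.org": {"role": "monitor", "version": "1.4.2"},
    "kat-ctl-01.example.org": {"role": "control", "version": "1.4.2"},
}

def deploy(host, spec):
    c = Connection(host)
    c.put("build/cam-%(version)s.tar.gz" % spec, "/tmp/cam.tar.gz")
    c.run("tar -C /opt/cam -xzf /tmp/cam.tar.gz")
    c.sudo("service cam-%(role)s restart" % spec)   # role-specific service

for host, spec in HOST_DB.items():
    deploy(host, spec)
```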
THCOCB04 | Using an Expert System for Accelerators Tuning and Automation of Operating Failure Checks | TANGO, controls, monitoring, operation | 1434 |
|
|||
Today at SOLEIL, abnormal operating conditions consume considerable human resources: many manual checks on various tools interacting with different service layers of the control system (archiving system, device drivers, etc.) are needed before normal accelerator operation can be recovered. These manual checks are also systematically redone before each beam shutdown and restart. All these repetitive tasks are very error-prone and significantly detract from the assessment of beam delivery to users. Due to the increased process complexity and the multiple unpredictable factors of instability in the accelerator operating conditions, the existing diagnosis tools and manual check procedures have reached their limits in providing practical, reliable assistance to both operators and accelerator physicists. The aim of this paper is to show how the advanced expert system layer of the PASERELLE* framework, using the CDMA API** to access all the underlying data sources provided by the control system in a uniform way, can be used to assist operators in detecting and diagnosing abnormal conditions, thus providing safeguards against these unexpected accelerator operating conditions.
*http://www.isencia.be/services/passerelle **https://code.google.com/p/cdma/ |
|||
![]() |
Slides THCOCB04 [1.636 MB] | ||
FRCOAAB04 | Data Driven Campaign Management at the National Ignition Facility | experiment, diagnostics, target, interface | 1473 |
|
|||
Funding: * This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. #LLNL-ABS-633255 The Campaign Management Tool Suite (CMT) provides tools for establishing the experimental goals, achieving reviews and approvals, and ensuring readiness for a NIF experiment. Over the last two years, CMT has significantly increased the number of diagnostics it supports, to around 50. Meeting this ever-increasing demand for new functionality has resulted in a design whereby more and more of the functionality can be specified in data rather than coded directly in Java. To do this, support tools have been written that manage various aspects of the data and also handle potential inconsistencies that can arise from a data-driven paradigm. For example: drop-down menus are specified in the Part and Lists Manager, the Shot Setup reports that list the configurations for diagnostics are specified in the database, the review tool Approval Manager has a rules engine that can be changed without a software deployment, various template managers provide predefined entry of hundreds of parameters, and finally a stale-data tool validates that experiments contain valid data items. The trade-offs, benefits and issues of adopting and implementing this data-driven philosophy will be presented.
|||
![]() |
Slides FRCOAAB04 [0.929 MB] | ||