Paper | Title | Other Keywords | Page |
---|---|---|---|
MOCOBAB01 | New Electrical Network Supervision for CERN: Simpler, Safer, Faster, and Including New Modern Features | network, status, controls, framework | 27 |
Since 2012, an effort has been under way to replace the ageing electrical supervision system (managing more than 200,000 tags) currently in operation with a WinCC OA-based supervision system, in order to unify the monitoring systems used by CERN operators and to leverage the internal knowledge and development of the products (JCOP, UNICOS, etc.). Along with the classical functionalities of a typical SCADA system (alarms, events, trending, archiving, access control, etc.), the supervision of the CERN electrical network requires a set of domain-specific applications gathered under the name of EMS (Energy Management System). Such applications include network coloring, state estimation, power flow calculations, contingency analysis, optimal power flow, etc. Additionally, as electrical power is a critical service for CERN, high availability of its infrastructure, including its supervision system, is required. The supervision system is therefore redundant, along with a disaster recovery system which is itself redundant. In this paper, we will present the overall architecture of the future supervision system with an emphasis on the parts specific to the supervision of the electrical network. | |||
Slides MOCOBAB01 [1.414 MB] | ||
MOCOBAB04 | The Advanced Radiographic Capability, a Major Upgrade of the Computer Controls for the National Ignition Facility | controls, software, laser, target | 39 |
Funding: This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. #LLNL-ABS-633793 The Advanced Radiographic Capability (ARC) currently under development for the National Ignition Facility (NIF) will provide short (1-50 picoseconds) ultra high power (>1 Petawatt) laser pulses used for a variety of diagnostic purposes on NIF ranging from a high energy x-ray pulse source for backlighter imaging to an experimental platform for fast-ignition. A single NIF Quad (4 beams) is being upgraded to support experimentally driven, autonomous operations using either ARC or existing NIF pulses. Using its own seed oscillator, ARC generates short, wide bandwidth pulses that propagate down the existing NIF beamlines for amplification before being redirected through large aperture gratings that perform chirped pulse compression, generating a series of high-intensity pulses within the target chamber. This significant effort to integrate the ARC adds 40% additional control points to the existing NIF Quad and will be deployed in several phases over the coming year. This talk discusses some new unique ARC software controls used for short pulse operation on NIF and integration techniques being used to expedite deployment of this new diagnostic. |
Slides MOCOBAB04 [3.279 MB] | ||
MOCOBAB05 | How to Successfully Renovate a Controls System? - Lessons Learned from the Renovation of the CERN Injectors’ Controls Software | controls, software, GUI, software-architecture | 43 |
Renovation of the control system of the CERN LHC injectors was initiated in 2007 in the scope of the Injector Controls Architecture (InCA) project. One of its main objectives was to homogenize the controls software across CERN accelerators and reuse as much as possible the existing modern sub-systems, such as the settings management used for the LHC. The project team created a platform that would permit coexistence and intercommunication between old and new components via a dedicated gateway, allowing a progressive replacement of the former. Dealing with a heterogeneous environment, with many diverse and interconnected modules, implemented using different technologies and programming languages, the team had to introduce all the modifications in the smoothest possible way, without causing machine downtime. After a brief description of the system architecture, the paper discusses the technical and non-technical sides of the renovation process such as validation and deployment methodology, operational applications and diagnostic tools characteristics and finally users’ involvement and human aspects, outlining good decisions, pitfalls and lessons learned over the last five years. | |||
Slides MOCOBAB05 [1.746 MB] | ||
MOMIB01 | Sirius Control System: Conceptual Design | controls, network, EPICS, interface | 51 |
Sirius is a new 3 GeV synchrotron light source currently being designed at the Brazilian Synchrotron Light Laboratory (LNLS) in Campinas, Brazil. The Control System will be heavily distributed and digitally connected to all equipment in order to avoid analog signal cables. A three-layer control system is being planned. The equipment layer uses RS485 serial networks, running at 10 Mbps, with a very light proprietary protocol in order to achieve good performance. The middle layer, interconnecting these serial networks, is based on single-board computers, PCs and commercial switches. The operation layer will be composed of PCs running the Control System's client programs. A special topology will be used for the Fast Orbit Feedback, with a 10 Gbps switch linking the beam position monitor electronics, a workstation for correction calculations, and the orbit correctors. At the moment, EPICS is the best candidate to manage the Control System. | |||
Slides MOMIB01 [0.268 MB] | ||
Poster MOMIB01 [0.580 MB] | ||
MOPPC014 | Diagnostic Use Case Examples for ITER Plant Instrumentation and Control | diagnostics, interface, controls, hardware | 85 |
ITER requires extensive diagnostics to meet the requirements for machine operation, protection, plasma control and physics studies. The realization of these systems is a major challenge, not only because of the harsh environment and the nuclear requirements, but also with respect to the plant system Instrumentation and Control (I&C) of all 45 diagnostic systems: the procurement arrangements of the ITER diagnostics with the domestic agencies require a large number of high-performance fast controllers, whose choice is based on guidelines and catalogues published by the ITER Organization (IO). The goal is to simplify acceptance testing and commissioning for both the domestic agencies and the IO. For this purpose, several diagnostic use case examples for plant system I&C documentation and implementation are provided by the IO to the domestic agencies. Their implementations cover major parts of the diagnostic plant system I&C, such as multi-channel high-performance data and image acquisition and data processing, as well as real-time and data archiving aspects. In this paper, the current status and achievements in the implementation and documentation of the use case examples are presented. | |||
Poster MOPPC014 [2.068 MB] | ||
MOPPC017 | Upgrade of J-PARC/MLF General Control System with EPICS/CSS | EPICS, controls, software, LabView | 93 |
The general control system of the Materials and Life science experimental Facility (MLF-GCS) consists of programmable logic controllers (PLCs), operator interfaces (OPIs) based on iFIX, data servers, and so on. It controls various devices such as a mercury target and a personnel protection system. The present system has been working well, but it poses maintenance and upgrade problems because of the poor flexibility of the OS and version compatibility. To overcome this weakness, we decided to replace it with an advanced system based on EPICS and CSS as the framework and OPI software, which offer high scalability and usability. We then built a prototype system, connected it to the current MLF-GCS, and examined its performance. As a result, communication between the EPICS/CSS system and the PLCs was successfully implemented through a Takebishi OPC server, the data of 7,000 points were stored with suitable speed and capacity in a new data storage server based on PostgreSQL, and the OPI functions of the CSS were verified. We concluded from these examinations that the EPICS/CSS system has the functionality and performance required for the advanced MLF-GCS. | |||
Poster MOPPC017 [0.376 MB] | ||
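The MOPPC017 prototype above couples EPICS/CSS to the existing PLCs through an OPC server and archives data in PostgreSQL. Purely as an illustration of such an EPICS-to-PostgreSQL archiving path (not the authors' implementation), a minimal Python sketch using pyepics and psycopg2 might look as follows; the PV name, table and connection settings are invented.

```python
# Illustrative sketch only (not the MLF-GCS implementation): archive EPICS PV
# updates into PostgreSQL. Assumes pyepics and psycopg2; the PV name, table and
# connection settings are invented.
import time
import epics
import psycopg2

conn = psycopg2.connect("dbname=mlf_gcs user=archiver")   # invented DSN
conn.autocommit = True
cur = conn.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS pv_log "
            "(pvname TEXT, value DOUBLE PRECISION, ts TIMESTAMPTZ)")

def on_update(pvname=None, value=None, timestamp=None, **kw):
    # Called by pyepics on every monitor update; store one row per update.
    cur.execute("INSERT INTO pv_log VALUES (%s, %s, to_timestamp(%s))",
                (pvname, float(value), timestamp))

pv = epics.PV("MLF:TGT:HG_TEMP", auto_monitor=True, callback=on_update)  # invented PV
time.sleep(60)   # let the monitor collect data for a while
```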
MOPPC028 | High-Density Power Converter Real-Time Control for the MedAustron Synchrotron | controls, timing, FPGA, real-time | 127 |
The MedAustron accelerator is a synchrotron for light-ion therapy, developed under the guidance of CERN within the MedAustron-CERN collaboration. Procurement of 7 different power converter families and development of the control system were carried out concurrently. Control is optimized for unattended routine clinical operation; finding a uniform control solution was therefore paramount to fulfill the ambitious project plan. Another challenge was the need to operate with about 5,000 cycles initially, achieving pipelined operation with pulse-to-pulse re-configuration times smaller than 250 ms. This contribution shows the architecture and design and gives an overview of the system as built and operated. It is based on commercial off-the-shelf processing hardware at the front-end level and on the CERN function generator design at the equipment level. The system is self-contained, permitting reuse of parts or of the whole in other accelerators. In particular, the separation of the power converter from the real-time regulation using CERN's Converter Regulation Board makes this approach an attractive choice for integrating existing power converters in new configurations. | |||
Poster MOPPC028 [0.892 MB] | ||
MOPPC029 | Internal Post Operation Check System for Kicker Magnet Current Waveforms Surveillance | controls, kicker, interface, timing | 131 |
A software framework, called Internal Post Operation Check (IPOC), has been developed to acquire and analyse kicker magnet current waveforms. It was initially aimed at performing the surveillance of LHC beam dumping system (LBDS) extraction and dilution kicker current waveforms and was subsequently also deployed on various other kicker systems at CERN. It has been implemented using the Front-End Software Architecture (FESA) framework, and uses many CERN control services. It provides a common interface to various off-the-shelf digitiser cards, allowing a transparent integration of new digitiser types into the system. The waveform analysis algorithms are provided as external plug-in libraries, leaving their specific implementation to the kicker system experts. The general architecture of the IPOC system is presented in this paper, along with its integration within the control environment at CERN. Some application examples are provided, including the surveillance of the LBDS kicker currents and trigger synchronisation, and a closed-loop configuration to guarantee constant switching characteristics of high voltage thyratron switches. | |||
Poster MOPPC029 [0.435 MB] | ||
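The IPOC framework above leaves the waveform-analysis algorithms to plug-in libraries written by the kicker experts. As a rough illustration of the kind of check such a plug-in could perform (not the actual FESA/C++ plug-in interface), the following Python sketch compares an acquired kicker current waveform with a reference inside a tolerance band; the waveform shape, thresholds and units are invented.

```python
# Illustration of a tolerance-band waveform check of the kind an IPOC analysis
# plug-in might perform; this is not the actual IPOC/FESA plug-in API, and the
# waveform shape, tolerances and units are invented.
import numpy as np

def check_waveform(acquired, reference, rel_tol=0.02, abs_tol=5.0):
    """Return (ok, max_deviation) for an acquired vs. reference current waveform.

    acquired, reference: 1-D sample arrays [A] of equal length.
    rel_tol: allowed relative deviation; abs_tol: allowed absolute deviation [A].
    """
    acquired = np.asarray(acquired, dtype=float)
    reference = np.asarray(reference, dtype=float)
    tolerance = np.maximum(abs_tol, rel_tol * np.abs(reference))
    deviation = np.abs(acquired - reference)
    return bool(np.all(deviation <= tolerance)), float(deviation.max())

# Example: an invented kicker-like pulse that is 1% high still passes the 2% band.
reference = np.concatenate([np.zeros(100), np.linspace(0.0, 18000.0, 50), np.full(350, 18000.0)])
acquired = 1.01 * reference
ok, max_dev = check_waveform(acquired, reference)
print("waveform OK:", ok, "- max deviation [A]:", round(max_dev, 1))
```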
MOPPC040 | A Hazard Driven Approach to Accelerator Safety System Design - How CLS Successfully Applied ALARP in the Design of Safety Systems | controls, factory, PLC, radiation | 172 |
All large scale particle accelerator facilities end up utilising computerised safety systems for the accelerator access control and interlock system including search lockup sequences and other safety functions. Increasingly there has been a strong move toward IEC 61508 based standards in the design of these systems. CLS designed and deployed its first IEC 61508 based system nearly 10 years ago. The challenge has increasingly been to manage the complexity of requirements and ensure that features being added into such systems were truly requirements to achieve safety. Over the past few years CLS has moved to a more structured Hazard Analysis technique that is tightly coupled and traceable through the design and verification of its engineered safety systems. This paper presents the CLS approach and lessons learned. | |||
MOPPC041 | Machine Protection System for TRIUMF's ARIEL Facility | controls, TRIUMF, electron, target | 175 |
Phase 1 of the Advanced Rare Isotope & Electron Linac (ARIEL) facility at TRIUMF is scheduled for completion in 2014. It will utilize an electron linear accelerator (eLinac) capable of currents up to 10 mA and energy up to 75 MeV. The eLinac will provide CW as well as pulsed beams with durations as short as 10 µs. A Machine Protection System (MPS) will protect the accelerator and the associated beamline equipment from the nominal 500 kW beam. Hazardous situations require the beam to be extinguished at the electron gun within 10 µs of detection. Beam loss accounting is an additional requirement of the MPS. The MPS consists of an FPGA-based controller module, Beam Loss Monitor VME modules developed by JLAB, and EPICS-based controls to establish and enforce beam operating modes. This paper describes the design, architecture, and implementation of the MPS. | |||
Poster MOPPC041 [1.345 MB] | ||
MOPPC044 | Cilex-Apollon Personnel Safety System | laser, controls, radiation, interlocks | 184 |
Funding: CNRS, MESR, CG91, CRiDF, ANR. Cilex-Apollon is a high-intensity laser facility delivering at least 5 PW pulses on targets at one shot per minute, to study physics such as laser-plasma electron or ion acceleration and laser-plasma X-ray sources. Under construction, Apollon is a four-beam laser installation with two target areas. Such a facility presents many hazards, in particular laser and ionizing radiation. The Personnel Safety System (PSS) serves both to reduce the impact of these hazards and to limit exposure to them. Based on a risk analysis, the Safety Integrity Level (SIL) has been assessed according to the international standards IEC 62061 and IEC 61511-3; to conceive a high-reliability system, SIL 2 is required. The PSS is based on four laser risk levels corresponding to the different uses of Apollon. The study has been conducted according to standard EN 60825. Independent of the main command-control network, the distributed system is made of a safety PLC and equipment communicating through a safety network. The article presents the concepts and the client-server architecture, from control screens down to sensors and actuators, as well as the interfaces to the access control system and to the synchronization and sequence system. |
Poster MOPPC044 [3.864 MB] | ||
MOPPC048 | Evaluation of the Beamline Personnel Safety System at ANKA under the Aegis of the 'Designated Architectures' Approach | radiation, software, controls, experiment | 195 |
The Beamline Personnel Safety System (BPSS) at Angstroemquelle Karlsruhe (ANKA) started operation in 2003. The paper describes the safety related design and evaluation of serial, parallel and nested radiation safety areas, which allows the flexible plug-in of experimental setups at ANKA-beamlines. It evaluates the resulting requirements for safety system hard- and software and the necessary validation procedure defined by current national and international standards, based on probabilistic reliability parameters supplied by component libraries of manufacturers and an approach known as 'Designated Architectures', defining safety functions in terms of sensor-logic-actor chains. An ANKA-beamline example is presented with special regards to features like (self-) Diagnostic Coverage (DC) of the control system, which is not part of classical Markov process modelling of systems safety. | |||
Poster MOPPC048 [0.699 MB] | ||
MOPPC049 | Radiation and Laser Safety Systems for the FERMI Free Electron Laser | laser, electron, controls, FEL | 198 |
Funding: Work supported in part by the Italian Ministry of University and Research under grants FIRB-RBAP045JF2 and FIRB-RBAP06AWK3 FERMI@Elettra is a Free Electron Laser (FEL) users facility based on a 1.5 GeV electron linac. The personnel safety systems allow entering the restricted areas of the facility only when safety conditions are fulfilled, and set the machine to a safe condition in case any dangerous situation is detected. Hazards are associated with accelerated electron beams and with an infrared laser used for pump-probe experiments. The safety systems are based on PLCs providing redundant logic in a fail-safe configuration. They make use of a distributed architecture based on fieldbus technology and communicate with the control system via Ethernet interfaces. The paper describes the architecture, the operational modes and the procedures that have been implemented. The experience gained in the recent operation is also reported. |
Poster MOPPC049 [0.447 MB] | ||
MOPPC051 | NSLS-II Booster Interlock System | vacuum, controls, status, interlocks | 202 |
Being responsible for the design and manufacture of the 3 GeV booster synchrotron for the National Synchrotron Light Source II (NSLS-II, BNL, USA), the Budker Institute of Nuclear Physics also designs the booster control and diagnostic system. Among others, this includes an interlock system consisting of an equipment protection system, a vacuum level and vacuum chamber temperature control system, and a beam diagnostic service system. These subsystems protect facility elements in case of vacuum leakage or chamber overheating and provide subsidiary functions for beam diagnostics. Providing beam interlocks, the system processes more than 150 signals from thermocouples, cold- and hot-cathode vacuum gauges and ion pump controllers. The subsystems comprise nine 5U 19" chassis, with the hardware of each based on an Allen-Bradley CompactLogix Programmable Logic Controller. All interlock-related connections are made with dry contacts, whereas system status and control are available through EPICS Channel Access. All operator screens are developed with Control System Studio tooling. This paper describes the configuration and operation of the booster interlock system. | |||
MOPPC057 | Data Management and Tools for the Access to the Radiological Areas at CERN | controls, database, radiation, interface | 226 |
As part of the refurbishment of the PS Personnel Protection system, the radioprotection (RP) buffer zones & equipment have been incorporated into the design of the new access points providing an integrated access concept to the radiation controlled areas of the PS complex. The integration of the RP and access control equipment has been very challenging due to the lack of space in many of the zones. Although successfully carried out, our experience from the commissioning of the first installed access points shows that the integration should also include the software tools and procedures. This paper presents an inventory of all the tools and data bases currently used (*) in order to ensure the access to the CERN radiological areas according to CERN’s safety and radioprotection procedures. We summarize the problems and limitations of each tool as well as the whole process, and propose a number of improvements for the different kinds of users including changes required in each of the tools. The aim is to optimize the access process and the operation & maintenance of the related tools by rationalizing and better integrating them.
(*) Access Distribution and Management, Safety Information Registration, Works Coordination, Access Control, Operational Dosimeter, Traceability of Radioactive Equipment, Safety Information Panel. |
Poster MOPPC057 [1.955 MB] | ||
MOPPC062 | Real-Time System Supervision for the LHC Beam Loss Monitoring System at CERN | monitoring, FPGA, detector, database | 242 |
The strategy for machine protection and quench prevention of the Large Hadron Collider (LHC) at the European Organisation for Nuclear Research (CERN) is mainly based on the Beam Loss Monitoring (BLM) system. The LHC BLM system is one of the largest and most complex instrumentation systems deployed in the LHC. In addition to protecting the collider, the system also needs to provide a means of diagnosing machine faults and to deliver feedback on the losses to the control room as well as to several systems for their setup and analysis. In order to augment the dependability of the system, several layers of supervision have been implemented internally and externally to the system. This paper describes the different methods employed to achieve the expected availability and system fault detection. | |||
MOPPC066 | Reliability Analysis of the LHC Beam Dumping System Taking into Account the Operational Experience during LHC Run 1 | dumping, controls, power-supply, diagnostics | 250 |
The LHC beam dumping system operated reliably during Run 1 of the LHC (2009 – 2013). As expected, there were a number of internal failures of the beam dumping system which, because of in-built safety features, resulted in the safe removal of the particle beams from the machine. These failures (i.e. "false" beam dumps) have been assigned to the different failure modes and are compared to the predictions made by a reliability model established before the start of LHC operation. A statistically significant difference between the model and the failure data identifies those beam dumping system components that may have unduly impacted the LHC availability and safety, or that might have been out of the scope of the initial model. An updated model of the beam dumping system reliability is presented, taking into account the experimental data presented and the system changes foreseen for the 2013 – 2014 LHC shutdown. | |||
Poster MOPPC066 [1.554 MB] | ||
MOPPC068 | Operational Experience with a PLC Based Positioning System for a LHC Extraction Protection Element | controls, PLC, software, dumping | 254 |
The LHC Beam Dumping System (LBDS) nominally dumps the beam synchronously with the passage of the particle-free beam abort gap at the beam dump extraction kickers. In the case of an asynchronous beam dump, an absorber element protects the machine aperture. This is a single-sided collimator (TCDQ), positioned close to the beam, which has to follow the beam position and beam size during the energy ramp. The TCDQ positioning control is implemented within a SIEMENS S7-300 Programmable Logic Controller (PLC). A positioning accuracy better than 30 μm is achieved through a PID-based servo algorithm. Errors due to a wrong position of the absorber w.r.t. the beam energy and size generate interlock conditions for the LHC machine protection system. Additionally, the correct position of the TCDQ w.r.t. the beam position in the extraction region is cross-checked after each dump by the LBDS eXternal Post Operational Check (XPOC). This paper presents the experience gained during LHC Run 1 and describes improvements that will be applied during the LHC shutdown 2013 – 2014. | |||
Poster MOPPC068 [3.381 MB] | ||
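MOPPC068 above achieves better than 30 μm positioning accuracy with a PID-based servo running on a Siemens S7-300 PLC. The sketch below shows a generic discrete PID position loop of that kind, written in Python purely for illustration; the gains, limits and the toy plant model are invented and are not the TCDQ parameters.

```python
# Generic discrete PID position loop, for illustration only: the real TCDQ
# servo runs on a Siemens S7-300 PLC. Gains, limits and the toy "jaw" plant
# below are invented.
class PID:
    def __init__(self, kp, ki, kd, dt, out_min, out_max):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        out = self.kp * error + self.ki * self.integral + self.kd * derivative
        if self.out_min < out < self.out_max:
            # Integrate only while the drive is not saturated (simple anti-windup).
            self.integral += error * self.dt
            out = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(self.out_min, min(self.out_max, out))

# Toy usage: move a first-order "jaw" model towards a 10.000 mm setpoint.
pid = PID(kp=2.0, ki=0.5, kd=0.05, dt=0.01, out_min=-1.0, out_max=1.0)
position_mm = 0.0
for _ in range(1000):
    drive = pid.update(setpoint=10.0, measurement=position_mm)
    position_mm += 0.05 * drive        # crude jaw response per 10 ms cycle
print("final jaw position [mm]:", round(position_mm, 3))
```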
MOPPC069 | Operational Experience with the LHC Software Interlock System | interlocks, injection, software, hardware | 258 |
The Software Interlock System (SIS) is a Java software project developed for the CERN accelerator complex. The core functionality of SIS is to provide a framework to program high-level interlocks based on the surveillance of a large number of accelerator device parameters. The interlock results are exported to trigger beam dumps, inhibit beam transfers or abort the main magnet powering. Since its deployment in 2008, the LHC SIS has demonstrated that it is a reliable solution for complex interlocks involving multiple or distributed systems and when quick solutions for unexpected situations are needed. This paper presents the operational experience with software interlocking in the LHC machine, reporting on the overall performance and flexibility of the SIS and mentioning the risks when software interlocks are used to patch missing functionality for personnel safety or machine protection. | |||
Poster MOPPC069 [0.323 MB] | ||
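The SIS described above combines the surveillance of many device parameters into high-level interlock results that can dump the beam or inhibit actions. The real system is a Java framework; the Python fragment below is only a toy illustration of combining per-parameter checks into a single permit, with invented device names and limits.

```python
# Toy illustration only of the software-interlock idea: several parameter
# checks are combined into one beam permit. Device names and limits are
# invented; the real SIS is a Java framework.
checks = {
    "RB.A12:I_MEAS":   lambda v: abs(v) < 11850.0,   # main dipole current [A]
    "BLM.7R3:LOSS":    lambda v: v < 1.0e-4,         # beam loss [Gy/s]
    "VAC.ARC45:PRESS": lambda v: v < 1.0e-8,         # vacuum pressure [mbar]
}

def beam_permit(readings):
    """Return (permit, failed_checks) for a dict of parameter readings."""
    failed = [name for name, ok in checks.items()
              if name not in readings or not ok(readings[name])]
    return (not failed), failed

permit, failed = beam_permit({"RB.A12:I_MEAS": 10200.0,
                              "BLM.7R3:LOSS": 3.0e-5,
                              "VAC.ARC45:PRESS": 2.0e-9})
print("beam permit:", permit, "failed checks:", failed)
```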
MOPPC076 | Quantitative Fault Tree Analysis of the Beam Permit System Elements of Relativistic Heavy Ion Collider (RHIC) at BNL | kicker, simulation, collider, interface | 269 |
Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The RHIC Beam Permit System (BPS) plays a key role in safeguarding against anomalies developing in the collider during a run. The BPS collects RHIC subsystem statuses to allow the entry and continued presence of beam in the machine. The building blocks of the BPS are the Permit Module (PM) and the Abort Kicker Module (AKM), which incorporate various electronic boards based on the VME specification. This paper presents a quantitative Fault Tree Analysis (FTA) of the PM and AKM, yielding the hazard rates of three top failures that could cause significant downtime of the machine. The FTA helps trace the top failure of a module down to a component-level failure (such as an IC or resistor). The fault trees are constructed for all module variants and are probabilistically evaluated using an analytical solution approach. The component failure rates are calculated using manufacturer datasheets and MIL-HDBK-217F. The apportionment of failure modes for components is calculated using FMD-97. The aim of this work is to understand the importance of individual components of the RHIC BPS for its reliable operation and to evaluate their impact on the operation of the BPS. |
Poster MOPPC076 [0.626 MB] | ||
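For readers unfamiliar with the quantitative fault tree evaluation used in MOPPC076, the generic relations below recall how independent basic-event probabilities combine through OR and AND gates, and how a constant failure rate (e.g. taken from MIL-HDBK-217F) translates into a failure probability over a mission time. These are standard reliability relations, not RHIC-specific results.

```latex
% Standard fault-tree relations for independent basic events (generic, not RHIC-specific).
% OR gate: the top event occurs if any of the n inputs fails.
P_{\mathrm{OR}} = 1 - \prod_{i=1}^{n} (1 - P_i)
% AND gate: the top event occurs only if all n inputs fail.
P_{\mathrm{AND}} = \prod_{i=1}^{n} P_i
% Basic-event probability from a constant failure rate \lambda_i over mission time t:
P_i(t) = 1 - e^{-\lambda_i t} \approx \lambda_i t \qquad (\lambda_i t \ll 1)
```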
MOPPC081 | The Case of MTCA.4: Managing the Introduction of a New Crate Standard at Large Scale Facilities and Beyond | controls, data-acquisition, electronics, klystron | 285 |
The demands on hardware for control and data acquisition at large-scale research organizations have increased considerably in recent years. In response, modular systems based on the new MTCA.4 standard, jointly developed by large Public Research Organizations and industrial electronics manufacturers, have pushed the boundary of system performance in terms of analog/digital data processing performance, remote management capabilities, timing stability, signal integrity, redundancy and maintainability. Whereas such public-private collaborations are not entirely new, novel instruments are in order to test the acceptance of the MTCA.4 standard beyond the physics community, identify gaps in the technology portfolio and align collaborative R&D programs accordingly. We describe the ongoing implementation of a time-limited validation project as means towards this end, highlight the challenges encountered so far and present solutions for a sustainable division of labor along the industry value chain. | |||
MOPPC090 | Managing a Product Called NIF - PLM Current State and Processes | controls, software, data-management, laser | 310 |
Funding: * This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. #LLNL-ABS-632452 Product lifecycle management (PLM) is the process of managing the entire lifecycle of a product from its conception, through design and manufacture, to service and disposal. The National Ignition Facility (NIF) can be considered one enormous product that is made up of hundreds of millions of individual parts and components (or products). The ability to manage and control the physical definition, status and configuration of the sum of all of these products is a monumental undertaking yet critical to the validity of the shot experiment data and the safe operation of the facility. NIF is meeting this challenge by utilizing an integrated and graded approach to implement a suite of commercial and custom enterprise software solutions to address PLM and other facility management and configuration requirements. It has enabled the passing of needed elements of product data into downstream enterprise solutions while at the same time minimizing data replication. Strategic benefits have been realized using this approach while validating the decision for an integrated approach where more than one solution may be required to address the entire product lifecycle management process. |
Poster MOPPC090 [14.237 MB] | ||
MOPPC097 | The FAIR Control System - System Architecture and First Implementations | timing, controls, software, network | 328 |
The paper presents the architecture of the control system for the Facility for Antiproton and Ion Research (FAIR) currently under development. The FAIR control system comprises the full electronics, hardware, and software to control, commission, and operate the FAIR accelerator complex for multiplexed beams. It takes advantage of collaborations with CERN in using proven framework solutions like FESA, LSA, White Rabbit, etc. The equipment layer consists of equipment interfaces, embedded system controllers, and software representations of the equipment (FESA). A dedicated real time network based on White Rabbit is used to synchronize and trigger actions on equipment level. The middle layer provides service functionality both to the equipment layer and the application layer through the IP control system network. LSA is used for settings management. The application layer combines the applications for operators as GUI applications or command line tools typically written in Java. For validation of concepts already in 2014 FAIR's proton injector at CEA/France and CRYRING at GSI will be commissioned with reduced functionality of the proposed FAIR control system stack. | |||
Poster MOPPC097 [2.717 MB] | ||
MOPPC100 | SKA Monitoring and Control Progress Status | controls, monitoring, site, interface | 340 |
The Monitoring and Control system for the SKA radio telescope is now moving from the conceptual design to the system requirements and design phase, with the formation of a consortium geared towards delivering the Telescope Manager (TM) work package. Recent program decisions regarding hosting of the telescope across two sites, Australia and South Africa, have brought in new challenges from the TM design perspective. These include strategy to leverage the individual capabilities of autonomous telescopes, and also integrating the existing precursor telescopes (ASKAP and MeerKat) with heterogenous technologies and approaches into the SKA. A key design goal from the viewpoint of minimizing development and lifecycle costs is to have a uniform architectural approach across the telescopes, and to maximize standardization of software and instrumentation across the systems, despite potential variations in system hardware and procurement arrangements among the participating countries. This paper discusses some of these challenges, and their mitigation approaches that the consortium intends to work upon, along with an update on the current status and progress on the overall TM work. | |||
MOPPC108 | Status of the NSLS-II Booster Control System | controls, booster, vacuum, timing | 362 |
The booster control system is an integral part of the NSLS-II control system and is developed under EPICS. It includes six IBM System x3250 M3 servers and four VME3100 controllers connected via Gigabit Ethernet. These computers run IOCs for power supply control, timing, beam diagnostics and interlocks. cPCI ADCs located in a cPCI crate are also used for beam diagnostics. Front-end electronics for vacuum control and interlocks are Allen-Bradley programmable logic controllers and I/O devices. The timing system is based on Micro-Research Finland Oy products: the EVR 230RF and PMC EVR. Power supply control uses the BNL-developed combination of a Power Supply Interface (PSI), located close to the power supplies, and a Power Supply Controller (PSC), connected to a front-end computer via 100 Mbit Ethernet. Each PSI is connected to its PSC via a fiber-optic link. High-level applications developed in Control System Studio and Python run on operator consoles in the Control Room. This paper describes the final design and status of the booster control system. The functional block diagrams are presented. | |||
Poster MOPPC108 [0.458 MB] | ||
MOPPC110 | The Control System for the CO2 Cooling Plants for Physics Experiments | controls, detector, software, interface | 370 |
CO2 cooling has become an interesting technology for current and future tracking particle detectors. A key advantage of using CO2 as refrigerant is its high heat transfer capability, allowing a significant material budget saving, which is a critical element in state-of-the-art detector technologies. Several CO2 cooling stations, with cooling power ranging from 100 W to several kW, have been developed at CERN to support detector testing for future LHC detector upgrades. Currently, two CO2 cooling plants for the ATLAS Pixel Insertable B-Layer and the Phase I Upgrade CMS Pixel detector are under construction. This paper describes the control system design and implementation using the UNICOS framework for the PLCs and SCADA. The control philosophy, safety and interlocking standards, user interfaces and additional features are presented. CO2 cooling is characterized by high operational stability and accurate evaporation temperature control over large distances. Split-range PID controllers with dynamically calculated limiters, multi-level interlocking and new software tools like an online CO2 p-H diagram jointly enable the cooling to fulfill the key requirements of a reliable system. | |||
Poster MOPPC110 [2.385 MB] | ||
MOPPC112 | Current Status and Perspectives of the SwissFEL Injector Test Facility Control System | controls, EPICS, software, network | 378 |
The Free Electron Laser (SwissFEL) Injector Test Facility at the Paul Scherrer Institute has been in operation for more than three years. The Injector Test Facility is a valuable development and validation platform for all major SwissFEL subsystems, including controls. Based on the experience gained from supporting Test Facility operations, the paper presents current and prospective controls solutions focusing on the future SwissFEL project. | |||
Poster MOPPC112 [1.224 MB] | ||
MOPPC122 | EPICS Interface and Control of NSLS-II Residual Gas Analyzer System | controls, EPICS, vacuum, interface | 392 |
Residual Gas Analyzers (RGAs) have been widely used in accelerator vacuum systems for monitoring and vacuum diagnostics. The National Synchrotron Light Source II (NSLS-II) vacuum system adopts Hiden RC-100 RGA which supports remote electronics, thus allowing real-time diagnostics with beam operation as well as data archiving and off-line analysis. This paper describes the interface and operation of these RGAs with the EPICS based control system. | |||
Poster MOPPC122 [1.004 MB] | ||
MOPPC128 | Real-Time Process Control on Multi-Core Processors | controls, real-time, framework, software | 407 |
Real-time control is essential for low-level RF and timing systems to ensure beam stability in accelerator operation. It is difficult to optimize the priority control of multiple processes of the real-time class and the time-sharing class on a single-core processor; for example, we cannot log into the operating system if a real-time-class process monopolizes the single core. Recently, multi-core processors have been utilized for equipment control. We studied the control of multiple processes running on multi-core processors. After several tunings, we confirmed that the operating system could run stably under heavy load on multi-core processors. This makes it possible to achieve the millisecond-order response required for real-time control in fast control systems such as an event-synchronized data acquisition system. Additionally, we measured the response performance between client and server processes using the MADOCA II framework, the next generation of MADOCA. In this paper we present the tunings for real-time process control on multi-core processors and the performance results of MADOCA II. | |||
Poster MOPPC128 [0.450 MB] | ||
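MOPPC128 above is about keeping real-time-class and time-sharing-class processes from starving each other on multi-core CPUs. On Linux this kind of separation can be expressed with CPU-affinity and scheduling-class system calls; the Python sketch below (illustrative only, not the MADOCA II code, Linux-specific, requiring root privileges and at least two cores) pins the current process to one core and gives it a SCHED_FIFO real-time priority.

```python
# Illustrative only (Linux, needs root and at least two CPU cores); this is not
# the MADOCA II code. Pin the current process to core 1 and switch it to the
# SCHED_FIFO real-time class, leaving core 0 for time-sharing (SCHED_OTHER)
# processes such as interactive logins.
import os

os.sched_setaffinity(0, {1})                                  # run only on core 1
os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(80))   # RT priority 1..99

print("affinity:", os.sched_getaffinity(0))
print("real-time FIFO:", os.sched_getscheduler(0) == os.SCHED_FIFO)
```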
MOPPC131 | Experience of Virtual Machines in J-PARC MR Control | controls, EPICS, Linux, embedded | 417 |
At the J-PARC Main Ring (MR), we have used virtual-machine environments extensively in our accelerator control. In 2011, we developed a virtual-IOC, an EPICS In/Out Controller running on a virtual machine [1]. Now, in 2013, about 20 virtual-IOCs are used in daily MR operation. In the summer of 2012, we updated our operating system from Scientific Linux 4 (SL4) to Scientific Linux 6 (SL6). In SL6, the KVM virtual-machine environment is supported as a default service. This encouraged us to port basic control services (LDAP, DHCP, TFTP, RDB, archiver, etc.) to multiple virtual machines. Each virtual machine hosts one service, and the virtual machines run on a few physical machines. This scheme enables easier maintenance of control services than before. In this paper, our experiences using virtual machines during J-PARC MR operation will be reported.
[1] N. Kamikubota et al., "Virtual IO Controllers at J-PARC MR Using Xen", ICALEPCS 2011. |
Poster MOPPC131 [0.213 MB] | ||
MOPPC143 | Plug-in Based Analysis Framework for LHC Post-Mortem Analysis | framework, controls, injection, software | 446 |
Plug-in based software architectures are extensible, enforce modularity and allow several teams to work in parallel. But they also present certain technical and organizational challenges, which we discuss in this paper. We gained our experience when developing the Post-Mortem Analysis (PMA) system, which is a mission-critical system for the Large Hadron Collider (LHC). We used a plug-in based architecture with a general-purpose analysis engine, for which physicists and equipment experts code plug-ins containing the analysis algorithms. We have over 45 analysis plug-ins developed by a dozen domain experts. This paper focuses on the design challenges we faced in order to mitigate the risks of executing third-party code: assurance that even a badly written plug-in does not perturb the work of the overall application; plug-in execution control which allows plug-in misbehavior to be detected and reacted upon; a robust communication mechanism between plug-ins; facilitation of diagnostics in case of plug-in failure; testing of the plug-ins before integration into the application; etc.
https://espace.cern.ch/be-dep/CO/DA/Services/Post-Mortem%20Analysis.aspx |
Poster MOPPC143 [3.128 MB] | ||
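A central PMA requirement above is that a badly written plug-in must neither crash nor block the host application, and that misbehaviour must be detected. The actual framework is Java; purely to illustrate the isolation idea, the Python sketch below runs each (hypothetical) plug-in in its own process with a time budget, so crashes and hangs are reported instead of propagating.

```python
# Illustration of plug-in isolation only (the real PMA framework is Java):
# each analysis plug-in runs in its own process with a time budget, so a crash
# or an endless loop is detected and terminated instead of taking down the
# host application. Plug-in names and the event payload are invented.
import multiprocessing as mp
import time

def good_plugin(event, out):      # hypothetical, well-behaved analysis plug-in
    out.put(("good_plugin", max(event["samples"])))

def hanging_plugin(event, out):   # hypothetical, misbehaving plug-in
    time.sleep(3600)

def run_plugin(plugin, event, timeout_s=2.0):
    out = mp.Queue()
    proc = mp.Process(target=plugin, args=(event, out), daemon=True)
    proc.start()
    proc.join(timeout_s)
    if proc.is_alive():           # misbehaviour detected: terminate and report
        proc.terminate()
        proc.join()
        return (plugin.__name__, "FAILED: timeout")
    try:
        return out.get(timeout=1.0)
    except Exception:
        return (plugin.__name__, "FAILED: no result")

if __name__ == "__main__":
    event = {"samples": [0.1, 0.4, 0.9, 0.3]}
    for plugin in (good_plugin, hanging_plugin):
        print(run_plugin(plugin, event))
```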
MOPPC145 | Mass-Accessible Controls Data for Web Consumers | controls, network, framework, status | 449 |
The past few years in computing have seen the emergence of smart mobile devices, sporting multi-core embedded processors, powerful graphical processing units, and pervasive high-speed network connections (supported by WIFI or EDGE/UMTS). The relatively limited capacity of these devices requires relying on dedicated embedded operating systems (such as Android, or iOS), while their diverse form factors (from mobile phone screens to large tablet screens) require the adoption of programming techniques and technologies that are both resource-efficient and standards-based for better platform independence. We will consider what are the available options for hybrid desktop / mobile web development today, from native software development kits (Android, iOS) to platform-independent solutions (mobile Google Web toolkit [3], JQuery mobile, Apache Cordova[4], Opensocial). Through the authors' successive attempts at implementing a range of solutions for LHC-related data broadcasting, from data acquisition systems, LHC middleware such as DIP and CMW, on to the World Wide Web, we will investigate what are the valid choices to make and what pitfalls to avoid in today’s web development landscape. | |||
Poster MOPPC145 [1.318 MB] | ||
MOPPC146 | MATLAB Objects for EPICS Channel Access | EPICS, interface, controls, status | 453 |
With the substantial dependence on MATLAB for application development at the SwissFEL Injector Test Facility, the requirement for a robust and extensive EPICS Channel Access (CA) interface became increasingly imperative. To this effect, a new MATLAB Executable (Mex) file has been developed around an in-house C++ CA interface library (CAFE), which serves to expose comprehensive CA functionality to within the MATLAB framework. Immediate benefits include support for all MATLAB data types, a rich set of synchronous and asynchronous methods, a further physics oriented abstraction layer that uses CA synchronous groups, and compilation on 64-bit architectures. An account of the mocha (Matlab Objects for CHannel Access) interface is presented. | |||
MOPPC149 | A Messaging-Based Data Access Layer for Client Applications | controls, data-acquisition, network, interface | 460 |
Funding: US Department of Energy. The Fermilab accelerator control system has recently integrated use of a publish/subscribe infrastructure as a means of communication between Java client applications and data acquisition middleware. This supersedes a previous implementation based on Java Remote Method Invocation (RMI). The RMI implementation had issues with network firewalls, misbehaving client applications affecting the middleware, lack of portability to other platforms, and cumbersome authentication. The new system uses the AMQP messaging protocol and RabbitMQ data brokers. This decouples the client and middleware, is more portable to other languages, and has proven to be much more reliable. A Java client library provides for single synchronous operations as well as periodic data subscriptions. This new system is now used by the general synoptic display manager application as well as a number of new custom applications. A web service has also been written that provides easy access to control system data from many languages. |
Poster MOPPC149 [4.654 MB] | ||
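MOPPC149 above replaces RMI with AMQP messaging through RabbitMQ brokers. The sketch below shows the same publish/subscribe pattern from Python with the pika client, purely as an illustration; the broker host, exchange, routing key and device name are invented, and the production client library described in the paper is Java.

```python
# Illustration of publish/subscribe over AMQP with RabbitMQ using the Python
# pika client; the production client library described above is Java, and the
# broker host, exchange, routing key and device name are invented.
import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="broker.example.org"))
channel = connection.channel()

# Topic exchange carrying device readings; bind a private queue to it first.
channel.exchange_declare(exchange="acnet.readings", exchange_type="topic")
queue = channel.queue_declare(queue="", exclusive=True).method.queue
channel.queue_bind(queue=queue, exchange="acnet.readings", routing_key="reading.#")

# Publisher side: send one device reading as a JSON message.
reading = {"device": "M:OUTTMP", "value": 21.7, "units": "degC"}
channel.basic_publish(exchange="acnet.readings",
                      routing_key="reading.M.OUTTMP",
                      body=json.dumps(reading))

# Subscriber side: consume readings from the bound queue.
def on_message(ch, method, properties, body):
    print("received:", json.loads(body))

channel.basic_consume(queue=queue, on_message_callback=on_message, auto_ack=True)
channel.start_consuming()   # blocks; stop with Ctrl-C in this toy example
```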
MOPPC157 | Application of Transparent Proxy Servers in Control Systems | controls, framework, collider, target | 475 |
Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. Proxy servers (proxies) have been a staple of the World Wide Web infrastructure since its humble beginning. They provide a number of valuable functional services such as access control, caching and logging. Historically, control systems have had little need for full-fledged proxied systems, as direct, unimpeded resource access is almost always preferable. This still holds true today; however, unbounded direct asset access can lead to performance issues, especially on older, underpowered systems. This paper describes an implementation of a fully transparent proxy server used to moderate asynchronous data flow between selected front-end computers (FECs) and their clients, as well as the infrastructure changes required to accommodate this new platform. Finally, it ventures into the future by examining additional untapped benefits of proxied control systems, such as write-through caching and runtime read-write modifications. |
Poster MOPPC157 [1.873 MB] | ||
MOPPC158 | Application of Modern Programming Techniques in Existing Control System Software | framework, controls, injection, software | 479 |
Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The Accelerator Device Object (ADO) specification and its original implementation are almost 20 years old. In those two decades the ADO development methodology has changed very little, which is a testament to its robust design; however, during this time frame we have seen the introduction of many new technologies and ideas, many of which have applicable and tangible benefits for control system software. This paper describes how some of these concepts, like convention over configuration and the aspect-oriented programming (AOP) paradigm, coupled with powerful techniques like bytecode generation and manipulation tools, can greatly simplify both server- and client-side development by allowing developers to concentrate on the core implementation details without polluting their code with: 1) synchronization blocks, 2) supplementary validation, 3) asynchronous communication calls, or 4) redundant bootstrapping. In addition to streamlining existing fundamental development methods, we introduce additional concepts, many of which are not found in the majority of control systems. These include 1) ACID transactions, 2) client- and server-side dependency injection, and 3) declarative event handling. |
Poster MOPPC158 [2.483 MB] | ||
TUCOAAB03 | Approaching the Final Design of ITER Control System | controls, plasma, network, interface | 490 |
The control system of ITER (CODAC) is subject to a final design review early in 2014, with a second final design review covering high-level applications scheduled for 2015. The system architecture has been established and all plant systems required for first plasma have been identified. Interfaces are being detailed, which is a key activity in preparing for integration. A built-to-print design of the network infrastructure covering the full site is in place, and installation is expected to start next year. The common software deployed in the local plant systems as well as in the central system, called CODAC Core System and based on EPICS, has reached maturity, providing most of the required functions. It is currently used by 55 organizations throughout the world involved in the development of plant systems and ITER controls. The first plant systems are expected to arrive on site in 2015, starting a five-year integration phase to prepare for first plasma operation. In this paper, we report on the progress made on the ITER control system over the last two years and outline the plans and strategies allowing us to integrate hundreds of plant systems procured in-kind by the seven ITER members. | |||
Slides TUCOAAB03 [5.294 MB] | ||
TUCOAAB04 | The MedAustron Accelerator Control System: Design, Installation and Commissioning | controls, software, network, ion | 494 |
MedAustron is a light-ion accelerator cancer treatment facility built on a green field in Austria. The accelerator, its control system and its protection systems have been designed under the guidance of CERN within the MedAustron – CERN collaboration. Building construction was completed in October 2012 and accelerator installation started in December 2012. Readiness for accelerator control deployment was reached in January 2013. This contribution gives an overview of the accelerator control system project. It reports on the current status of commissioning, including the ion sources, low-energy beam transfer and injector. The major challenge so far has been the readiness of the industry-supplied IT infrastructure, on which accelerator controls relies heavily due to its distributed and virtualized architecture. In the end, the control system was successfully released for accelerator commissioning within time and budget. The need to deliver a highly performant control system coping with thousands of cycles in real time, and to cover both interactive commissioning and unattended medical operation, were mere technical aspects to be solved during the development phase. | |||
Slides TUCOAAB04 [2.712 MB] | ||
TUCOBAB03 | Utilizing Atlassian JIRA for Large-Scale Software Development Management | software, controls, status, database | 505 |
Funding: This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. #LLNL-ABS-632634 Used actively by the National Ignition Facility since 2004, the JIRA issue tracking system from Atlassian is now used for 63 different projects. NIF software developers and customers have created over 80,000 requests (issues) for new features and bug fixes. The largest NIF software project in JIRA is the Integrated Computer Control System (ICCS), with nearly 40,000 issues. In this paper, we discuss how JIRA has been customized to meet our software development process. ICCS developed a custom workflow in JIRA for tracking code reviews, recording test results by both developers and a dedicated Quality Control team, and managing the product release process. JIRA's advanced customization capabilities have proven to be a great help in tracking key metrics about the ICCS development efforts (e.g. developer workload). ICCS developers store software in a configuration management tool called AccuRev, and document all software changes in each JIRA issue. Specialized tools developed by the NIF Configuration Management team analyze each software product release, ensuring that it contains only the exact expected changes. |
Slides TUCOBAB03 [2.010 MB] | ||
TUCOBAB04 | Evaluation of Issue Tracking and Project Management Tools for Use Across All CSIRO Radio Telescope Facilities | software, project-management, interface, controls | 509 |
CSIRO's radio astronomy observatories are collectively known as the Australia Telescope National Facility (ATNF). The observatories include the 64-metre dish at Parkes, the Australia Telescope Compact Array (ATCA) in Narrabri, the Mopra 22-metre dish near Coonabarabran and the ASKAP telescope, located in Western Australia and in the early stages of commissioning. In January 2013 a new group named Software and Computing was formed. This group, part of the ATNF Operations Program, brings all the software development expertise under one umbrella and is responsible for the development and maintenance of the software for all ATNF facilities, from monitoring and control to science data processing and archiving. One of the first tasks of the new group is to start homogenising the way software development is done across all observatories. This paper presents the results of an evaluation of several issue tracking and project management tools, including Redmine and JIRA, to be used as a software development management tool across all ATNF facilities. It also describes how these tools can potentially be used for non-software applications such as a fault reporting and tracking system. | |||
Slides TUCOBAB04 [2.158 MB] | ||
TUMIB01 | Using Prince2 and ITIL Practices for Computing Projects and Service Management in a Scientific Installation | project-management, controls, status, synchrotron | 517 |
Conscientious project management during installation is a key factor in keeping the schedule and costs within specifications. Methodologies like Prince2 for project management or ITIL best practices for service management, supported by tools like Request Tracker, Redmine or Trac, improve the communication between scientists and support groups, speed up the response time, and increase the satisfaction and quality perceived by the user. In the same way, during operation, some practices, complemented with software tools, may substantially increase the quality of the service with the available resources. This paper describes the use of these processes and methodologies in a scientific installation such as the Alba synchrotron. It also evaluates the strengths and the risks associated with the implementation, as well as the achievements and the failures, and proposes some improvements. | |||
Slides TUMIB01 [1.043 MB] | ||
Poster TUMIB01 [7.037 MB] | ||
TUMIB06 | Development of a Scalable and Flexible Data Logging System Using NoSQL Databases | database, controls, data-acquisition, network | 532 |
We have developed a scalable and flexible data logging system for SPring-8 accelerator control. The current SPring-8 data logging system, powered by a relational database management system (RDBMS), has been storing log data for 16 years. From this experience, we recognized the lack of RDBMS flexibility for data logging, such as poor adaptability to data formats and data acquisition cycles, complexity of data management and lack of horizontal scalability. To solve these problems, we chose a combination of two NoSQL databases for the new system: Redis for a real-time data cache and Apache Cassandra for the perpetual archive. Logging data are stored in both databases, serialized with MessagePack, with a flexible data format that is not limited to single integer or real values. Apache Cassandra is a scalable and highly available column-oriented database, which is suitable for time-series logging data. Redis is a very fast in-memory key-value store that complements Cassandra's eventually consistent model. We developed a data logging system with ZeroMQ messaging and have proven its high performance and reliability in a long-term evaluation. It will be released for part of the control system this summer. | |||
Slides TUMIB06 [0.182 MB] | ||
Poster TUMIB06 [0.525 MB] | ||
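TUMIB06 above pairs Redis as a fast cache with Apache Cassandra as the permanent archive and serializes records with MessagePack. The Python sketch below shows one possible write path of that kind, for illustration only; the host names, keyspace, table and signal name are invented, and the ZeroMQ transport used in the real SPring-8 system is omitted.

```python
# Illustration of a Redis-cache plus Cassandra-archive write path with
# MessagePack serialization. Host names, keyspace, table and signal name are
# invented, the schema is assumed to exist already, and the ZeroMQ transport
# of the real SPring-8 system is omitted.
import time
import msgpack
import redis
from cassandra.cluster import Cluster

record = {"signal": "sr_mag_ps_q1/current", "ts": time.time(), "value": [120.4, 120.5]}
blob = msgpack.packb(record, use_bin_type=True)

# 1) Latest value into Redis for fast "current value" queries.
cache = redis.Redis(host="cache.example.org")
cache.set(record["signal"], blob)

# 2) The same record appended to a Cassandra time-series table for the archive.
session = Cluster(["archive.example.org"]).connect("logging")
session.execute(
    "INSERT INTO signal_log (signal, ts, payload) VALUES (%s, %s, %s)",
    (record["signal"], int(record["ts"] * 1000), blob),
)
```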
TUPPC011 | Development of an Innovative Storage Manager for a Distributed Control System | controls, distributed, framework, software | 570 |
The !CHAOS(*) framework will provide all the services needed for controlling and managing a large scientific infrastructure, including a number of innovative features such as abstraction of services, devices and data, easy and modular customization, extensive data caching for performance, and integration of all functionalities in a common framework. One of the most relevant innovations in !CHAOS is the History Data Service (HDS) for the continuous acquisition of operating data pushed by device controllers. The core component of the HDS is the History Engine (HST). It implements the abstraction layer for the underlying storage technology and the logic for indexing and querying data. The HST drivers are designed to provide specific HDS tasks such as indexing, caching and storing, and to wrap the chosen third-party database API with !CHAOS standard calls. Indeed, the HST allows the data flows of the different !CHAOS services to be routed to independent channels in order to improve the global efficiency of the whole data acquisition system.
* - http://chaos.infn.it * - https://chaosframework.atlassian.net/wiki/display/DOC/General+View * - http://prst-ab.aps.org/abstract/PRSTAB/v15/i11/e112804 |
Poster TUPPC011 [6.729 MB] | ||
TUPPC013 | Scaling Out of the MADOCA Database System for SACLA | database, controls, GUI, monitoring | 574 |
MADOCA was adopted for the control system of SACLA, and the MADOCA database system was designed as a copy of the database system in SPring-8. The system achieved high redundancy because it had already been tested in SPring-8. However, the number of signals which the MADOCA system handles in SACLA is increasing drastically, and GUIs that require frequent database accesses have been developed. The load on the database system increased, and its response was delayed on some occasions. We investigated the bottleneck of the system and, from the results, decided to distribute the access over two servers. The primary server handles present data and signal properties; the other handles archived data, which are mounted on the primary server as a proxy table. In this way, we could divide the load between the two servers, and clients such as GUIs do not need any changes. We tested the load and response of the system by adding 40,000 signals to the present 45,000 signals, whose data acquisition intervals are typically 2 s. The system was installed successfully and is operating without any interruption caused by the high load of the database. | |||
TUPPC017 | Development of J-PARC Time-Series Data Archiver using Distributed Database System | database, distributed, EPICS, linac | 584 |
J-PARC (Japan Proton Accelerator Research Complex) consists of a large amount of equipment. In the linac and the 3 GeV synchrotron, data from over 64,000 EPICS records used to control this apparatus are being collected. The data are currently stored in an RDB system using PostgreSQL, but it is not sufficient in availability, performance, and extensibility. Therefore, a new system architecture is required which is flexible and can cope with the data volume increasing continuously for years to come. To address this problem, we considered adopting a distributed database architecture and constructed a demonstration system using Hadoop/HBase. We present the results of this demonstration. | |||
TUPPC021 | Monitoring and Archiving of NSLS-II Booster Synchrotron Parameters | booster, monitoring, controls, EPICS | 587 |
When operating a multicomponent system, it is always necessary to observe the state of the whole installation as well as of its components. Tracking data is essential for tuning and troubleshooting, so records of the work process generally have to be kept. As for any other machine, the NSLS-II booster needs monitoring and archiving schemes as part of its control system. Because the booster is a facility with a cyclical operation mode, there were additional challenges in designing and developing the monitoring and archiving tools. A thorough analysis of the available infrastructure and of current approaches to monitoring and archiving was conducted to take into account the additional needs that come from the booster's special characteristics. A software extension for values present in the control system allowed the state of booster subsystems to be tracked and advanced archiving with multiple warning levels to be performed. Time-stamping and data collection strategies were developed as part of the monitoring scheme in order to preserve and recover read-backs and settings as consistent data sets. This paper describes the relevant solutions incorporated in the booster control system. | |||
![]() |
Poster TUPPC021 [0.589 MB] | ||
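A minimal sketch of the kind of monitoring extension described above, assuming pyepics for Channel Access; the PV names and thresholds are invented, and the real NSLS-II implementation is not reproduced here.

```python
# Rough illustration only: monitor PVs, attach IOC timestamps and classify
# values into multiple warning levels. PV names and thresholds are made up.
from epics import PV

WARN_LEVELS = [(10.0, 'MINOR'), (20.0, 'MAJOR')]   # hypothetical thresholds

def classify(value):
    level = 'OK'
    for threshold, name in WARN_LEVELS:
        if value > threshold:
            level = name
    return level

archive = []   # stand-in for a real archiver back end

def on_change(pvname=None, value=None, timestamp=None, **kw):
    # pyepics passes the IOC timestamp, so read-backs and settings recorded
    # here can later be re-assembled into consistent data sets.
    archive.append((timestamp, pvname, value, classify(value)))

pvs = [PV(name, callback=on_change)
       for name in ('BR:PS:QF:I', 'BR:PS:QD:I')]   # hypothetical PVs
```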
TUPPC026 | Concept and Prototype for a Distributed Analysis Framework for the LHC Machine Data | framework, extraction, embedded, database | 604 |
|
|||
The Large Hadron Collider (LHC) at CERN produces more than 50 TB of diagnostic data every year, shared between normal running periods as well as commissioning periods. The data is collected in different systems, like the LHC Post Mortem System (PM), the LHC Logging Database and different file catalogues. To analyse and correlate data from these systems it is necessary to extract data to a local workspace and to use scripts to obtain and correlate the required information. Since the amount of data can be huge (depending on the task to be achieved) this approach can be very inefficient. To cope with this problem, a new project was launched to bring the analysis closer to the data itself. This paper describes the concepts and the implementation of the first prototype of an extensible framework, which will allow integrating all the existing data sources as well as future extensions, like hadoop* clusters or other parallelization frameworks.
*http://hadoop.apache.org/ |
|||
![]() |
Poster TUPPC026 [1.378 MB] | ||
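One conceivable way to express such an extensible set of data sources behind a common interface is sketched below; the class and method names are invented for illustration and do not reflect the project's actual design.

```python
# Conceptual sketch only: pluggable data sources behind one interface, so
# analysis code can correlate signals without knowing each back end.
from abc import ABC, abstractmethod

class DataSource(ABC):
    @abstractmethod
    def query(self, signal, t_start, t_end):
        """Return an iterable of (timestamp, value) pairs."""

class LoggingDbSource(DataSource):
    def query(self, signal, t_start, t_end):
        # Placeholder: would delegate to the logging database API.
        return []

class PostMortemSource(DataSource):
    def query(self, signal, t_start, t_end):
        # Placeholder: would read post-mortem dump files.
        return []

REGISTRY = {'logging': LoggingDbSource(), 'pm': PostMortemSource()}

def correlate(signal, t_start, t_end):
    """Merge data from every registered source, sorted by time."""
    merged = []
    for source in REGISTRY.values():
        merged.extend(source.query(signal, t_start, t_end))
    return sorted(merged)
```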
TUPPC028 | The CERN Accelerator Logging Service - 10 Years in Operation: A Look at the Past, Present, and Future | database, extraction, controls, instrumentation | 612 |
|
|||
During the 10 years since its first operational use, the scope and scale of the CERN Accelerator Logging Service (LS) have evolved significantly: from an LHC-specific service expected to store 1 TB / year, to a CERN-wide service spanning the complete accelerator complex (including related sub-systems and experiments) currently storing more than 50 TB / year on-line for some 1 million signals. Despite the massive increase over initial expectations, the LS remains reliable and highly usable - attested to by an average of 5 million data extraction requests per day from close to 1000 users. Although a highly successful service, demands on the LS are expected to increase significantly as CERN prepares the LHC for running at top energy, which is likely to at least double current data volumes. Furthermore, focus is now shifting firmly towards a need to perform complex analysis on logged data, which in turn presents new challenges. This paper reflects on 10 years as an operational service, in terms of how it has managed to scale to meet growing demands, what has worked well, and lessons learned. On-going developments and future evolution will also be discussed. | ||
![]() |
Poster TUPPC028 [3.130 MB] | ||
TUPPC031 | Proteus: FRIB Configuration Database | database, controls, cavity, interface | 623 |
|
|||
Distributed Information Services for Control Systems (DISCS) is a framework for developing high-level information systems for an experimental physics facility. It comprises a set of cooperating components. Each component of the system has a database, an API, and several applications. One of DISCS' core components is the Configuration Module. It is responsible for the management of devices, their layout, measurements, alignment, calibration, signals, and inventory. In this paper we describe FRIB's implementation of the Configuration Module - Proteus. We describe its architecture, database schema, web-based GUI, EPICS V4 and REST services, and Java/Python APIs. It has been developed as a product that other labs can download and use, and it can be integrated with other independent systems. We describe the challenges in implementing such a system, our technology choices, and the lessons learnt. | ||
![]() |
Poster TUPPC031 [1.248 MB] | ||
TUPPC032 | Database-backed Configuration Service | database, controls, interface, network | 627 |
|
|||
Keck Observatory is in the midst of a major telescope control system upgrade. This upgrade will include a new database-backed configuration service which will be used to manage the many aspects of the telescope that need to be configured (e.g. site parameters, control tuning, limit values) for its control software and it will keep the configuration data persistent between IOC restarts. This paper will discuss this new configuration service, including its database schema, iocsh API, rich user interface and the many other provided features. The solution provides automatic time-stamping, a history of all database changes, the ability to snapshot and load different configurations and triggers to manage the integrity of the data collections. Configuration is based on a simple concept of controllers, components and their associated mapping. The solution also provides a failsafe mode that allows client IOCs to function if there is a problem with the database server. It will also discuss why this new service is preferred over the file based configuration tools that have been used at Keck up to now. | |||
![]() |
Poster TUPPC032 [0.849 MB] | ||
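The failsafe idea described above can be sketched generically as follows (this is not Keck's actual service): configuration is fetched from a database, a local snapshot is kept, and the snapshot is used if the server is unreachable. The database layout and file name are assumptions.

```python
# Generic sketch of a database-backed configuration load with a failsafe
# fallback to a cached snapshot; table layout and names are invented.
import json
import sqlite3          # stand-in for the real database back end

CACHE_FILE = 'last_good_config.json'

def load_from_db(controller):
    conn = sqlite3.connect('config.db')                 # hypothetical DB
    rows = conn.execute(
        'SELECT name, value FROM config WHERE controller = ?',
        (controller,)).fetchall()
    conn.close()
    return dict(rows)

def load_config(controller):
    try:
        config = load_from_db(controller)
        with open(CACHE_FILE, 'w') as f:                # refresh the snapshot
            json.dump(config, f)
        return config
    except Exception:
        # Failsafe mode: keep running on the last known-good values.
        with open(CACHE_FILE) as f:
            return json.load(f)
```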
TUPPC037 | LabWeb - LNLS Beamlines Remote Operation System | experiment, software, interface, controls | 638 |
|
|||
Funding: Project funded by CENPES/PETROBRAS under contract number: 0050.0067267.11.9 LabWeb is a software system developed to allow remote operation of beamlines at LNLS, in a partnership with the Petrobras Nanotechnology Network. Being the only light source in Latin America, LNLS receives many researchers and students interested in conducting experiments and analyses on these lines. The implementation of LabWeb allows researchers to use the laboratory infrastructure without leaving their research centres, reducing time and travel costs in a continental country like Brazil. In 2010, the project was in its first phase, in which tests were conducted using a beta version. Two years later, a new phase of the project began with the main goal of bringing the remote access project to operational scale for LNLS users. In this new version, a partnership was established to use the open-source platform Science Studio developed and applied at the Canadian Light Source (CLS). Currently, the project includes remote operation of three beamlines at LNLS: SAXS1 (Small Angle X-Ray Scattering), XAFS1 (X-Ray Absorption and Fluorescence Spectroscopy) and XRD1 (X-Ray Diffraction). The expectation is now to extend this new way of performing experiments to all the other beamlines at LNLS. |
|||
![]() |
Poster TUPPC037 [1.613 MB] | ||
TUPPC038 | Simultaneous On-line Ultrasonic Flowmetery and Binary Gas Mixture Analysis for the ATLAS Silicon Tracker Cooling Control System | controls, electronics, detector, Ethernet | 642 |
|
|||
We describe a combined ultrasonic instrument for continuous gas flow measurement and simultaneous real-time binary gas mixture analysis. The analysis algorithm compares real time measurements with a stored data base of sound velocity vs. gas composition. The instrument was developed for the ATLAS silicon tracker evaporative cooling system where C3F8 refrigerant may be replaced by a blend with 25% C2F6, allowing a lower evaporation temperature as the LHC luminosity increases. The instrument has been developed in two geometries. A version with an axial sound path has demonstrated a 1 % Full Scale precision for flows up to 230 l/min. A resolution of 0.3% is seen in C3F8/C2F6 molar mixtures, and a sensitivity of better than 0.005% to traces of C3F8 in nitrogen, during a 1 year continuous study in a system with sequenced multi-stream sampling. A high flow version has demonstrated a resolution of 1.9 % Full Scale for flows up to 7500 l/min. The instrument can provide rapid feedback in control systems operating with refrigerants or binary gas mixtures in detector applications. Other uses include anesthesia, analysis of hydrocarbons and vapor mixtures for semiconductor manufacture.
* Comm. author: martin.doubek@cern.ch Refs R. Bates et al. Combined ultrasonic flow meter & binary vapour analyzer for ATLAS 2013 JINST 8 C01002 |
|||
![]() |
Poster TUPPC038 [1.834 MB] | ||
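The comparison of a measured sound velocity with a stored velocity-versus-composition table can be illustrated with a simple interpolation, as sketched below; the calibration numbers are placeholders, not real C3F8/C2F6 data.

```python
# Simplified sketch of the analysis step: look up the molar concentration
# that matches a measured sound velocity in a stored calibration table.
import numpy as np

# Hypothetical calibration table: sound velocity (m/s) for 0..100 % C2F6.
concentration = np.linspace(0.0, 100.0, 11)
sound_velocity = np.linspace(115.0, 125.0, 11)     # must be monotonic

def mixture_from_velocity(v_measured):
    """Interpolate the C2F6 molar fraction from a measured sound velocity."""
    return float(np.interp(v_measured, sound_velocity, concentration))

def velocity_from_transit_times(path_length, t_up, t_down):
    """Average the up- and downstream transit times to remove the flow term."""
    return 0.5 * path_length * (1.0 / t_up + 1.0 / t_down)

print(mixture_from_velocity(118.3))
```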
TUPPC042 | Prototype of a Simple ZeroMQ-Based RPC in Replacement of CORBA in NOMAD | CORBA, interface, GUI, controls | 654 |
|
|||
The NOMAD instrument control software of the Institut Laue-Langevin is a client-server application. The communication between the server and its clients is performed with CORBA, which now has major drawbacks such as the lack of support and its slow or non-existent evolution. The present paper describes the implementation of the recent and promising ZeroMQ technology as a replacement for CORBA. We present the prototype of a simple RPC built on top of ZeroMQ and the high-performance Google Protocol Buffers serialization tool, to which we add a remote method dispatch layer. The final project will also provide an IDL compiler restricted to a subset of the language, so that only minor modifications to our existing IDL interfaces and class implementations will be needed to replace the communication layer in NOMAD. | ||
![]() |
Poster TUPPC042 [1.637 MB] | ||
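A minimal request/reply RPC over ZeroMQ can be sketched as follows; for brevity the payload is JSON rather than Google Protocol Buffers, and the method table is invented, so this only illustrates the dispatch idea rather than NOMAD's actual interfaces.

```python
# Minimal sketch of a ZeroMQ request/reply RPC with a remote method table.
import json
import threading
import zmq

def server(endpoint='tcp://127.0.0.1:5555'):
    handlers = {'add': lambda a, b: a + b}        # remote method table
    sock = zmq.Context.instance().socket(zmq.REP)
    sock.bind(endpoint)
    while True:
        request = json.loads(sock.recv().decode())
        result = handlers[request['method']](*request['args'])
        sock.send(json.dumps({'result': result}).encode())

def call(method, *args, endpoint='tcp://127.0.0.1:5555'):
    sock = zmq.Context.instance().socket(zmq.REQ)
    sock.connect(endpoint)
    sock.send(json.dumps({'method': method, 'args': args}).encode())
    return json.loads(sock.recv().decode())['result']

threading.Thread(target=server, daemon=True).start()
print(call('add', 1, 2))          # -> 3
```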
TUPPC044 | When Hardware and Software Work in Concert | controls, experiment, interface, detector | 661 |
|
|||
Funding: Partially funded by BMBF under the grants 05K10CKB and 05K10VKE. Integrating control and high-speed data processing is a fundamental requirement to operate a beam line efficiently and improve user's beam time experience. Implementing such control environments for data intensive applications at synchrotrons has been difficult because of vendor-specific device access protocols and distributed components. Although TANGO addresses the distributed nature of experiment instrumentation, standardized APIs that provide uniform device access, process control and data analysis are still missing. Concert is a Python-based framework for device control and messaging. It implements these programming interfaces and provides a simple but powerful user interface. Our system exploits the asynchronous nature of device accesses and performs low-latency on-line data analysis using GPU-based data processing. We will use Concert to conduct experiments to adjust experimental conditions using on-line data analysis, e.g. during radiographic and tomographic experiments. Concert's process control mechanisms and the UFO processing framework* will allow us to control the process under study and the measuring procedure depending on image dynamics. * Vogelgesang, Chilingaryan, Rolo, Kopmann: “UFO: A Scalable GPU-based Image Processing Framework for On-line Monitoring” |
|||
![]() |
Poster TUPPC044 [4.318 MB] | ||
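The benefit of asynchronous device access can be illustrated with a generic asyncio sketch (this is not Concert's API; the device classes and stop criterion are invented): while a motor move or detector readout is pending, other coroutines such as on-line analysis can run.

```python
# Generic asyncio sketch of asynchronous device access in a scan loop.
import asyncio

class Motor:
    async def move(self, position):
        await asyncio.sleep(0.1)   # placeholder for real hardware I/O
        self.position = position

class Detector:
    async def trigger(self):
        await asyncio.sleep(0.05)  # placeholder exposure/readout time
        return [1.0] * 1024        # placeholder frame

def analyse(frame):
    # Hypothetical on-line criterion deciding whether to stop the scan early.
    return sum(frame) > 5e5

async def scan(motor, detector, positions):
    frames = []
    for p in positions:
        # Awaiting lets other coroutines (GUIs, monitors, analysis tasks) run
        # while the hardware I/O is pending.
        await motor.move(p)
        frame = await detector.trigger()
        frames.append(frame)
        if analyse(frame):
            break
    return frames

print(len(asyncio.run(scan(Motor(), Detector(), [0.0, 0.5, 1.0]))))
```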
TUPPC050 | Control, Safety and Diagnostics for Future ATLAS Pixel Detectors | detector, controls, monitoring, diagnostics | 679 |
|
|||
To ensure the excellent performance of the ATLAS Pixel detector during the next run periods of the LHC, with their increasing demands, two upgrades of the pixel detector are foreseen. One takes place in the first long shutdown, which is currently ongoing; during this period an additional layer, the Insertable B-Layer (IBL), will be installed. The second upgrade will replace the entire pixel detector and is planned for 2020, when the LHC will be upgraded to the HL-LHC. As no access is possible for years once installed, a highly reliable control system is required. It has to supply the detector with all entities required for operation, protect it at all times, and provide detailed information to diagnose the detector's behaviour. Design constraints are the sensitivity of the sensors and the reduction of material inside the tracker volume. We report on the construction of the control system for the Insertable B-Layer and present a concept for the control of the pixel detector at the HL-LHC. While the latter requires completely new strategies, the control system of the IBL includes individual new components which can be developed further for the long-term upgrade. | ||
![]() |
Poster TUPPC050 [0.566 MB] | ||
TUPPC054 | A PLC-Based System for the Control of an Educational Observatory | controls, PLC, instrumentation, interface | 691 |
|
|||
An educational project that aims to involve young students in astronomical observations has been developed over the last decade at the Basovizza branch station of the INAF-Astronomical Observatory of Trieste. The telescope used is a 14” reflector equipped with a robotic Paramount ME equatorial mount and placed in a non-automated dome. The newly developed control system is based on a Beckhoff PLC. The control system will mainly allow remote control of the three-phase synchronous motor of the dome, the switching of the whole instrumentation and the parking of the telescope. Thanks to the data coming from the weather sensor, the PLC will be able to ensure the safety of the instruments. A web interface is used for the communication between the user and the instrumentation. In this paper a detailed description of the whole PLC-based control system architecture will be presented. | ||
![]() |
Poster TUPPC054 [3.671 MB] | ||
TUPPC055 | Developing of the Pulse Motor Controller Electronics for Running under Weak Radiation Environment | radiation, controls, interface, optics | 695 |
|
|||
Hitz Hitachi Zosen has developed a new pulse motor controller. The controller drives two axes per unit and integrates a high-performance processor, a pulse-control device and peripheral interfaces. It offers simple extensibility and a variety of interfaces at a low price. The controller can be operated through Ethernet TCP/IP (or FL-net) and can control a maximum of 16 axes. In addition, we want to operate the motor controller inside an optics hutch exposed to weak radiation. If the controller can be placed in the optics hutch, the wiring becomes simpler because it remains confined within the hutch. We have therefore evaluated the controller electronics running under weak radiation. | ||
![]() |
Poster TUPPC055 [0.700 MB] | ||
TUPPC066 | 10 Years of Experiment Control at SLS Beam Lines: an Outlook to SwissFEL | controls, EPICS, detector, FEL | 729 |
|
|||
Today, after nearly 10 years of consolidated user operation at the Swiss Light Source (SLS) with up to 18 beam lines, we are looking back to briefly describe the success story based on EPICS controls toolkit and give an outlook towards the X-ray free-electron laser SwissFEL, the next challenging PSI project. We focus on SLS spectroscopy beam lines with experimental setups rigorously based on the SynApps "Positioner-Trigger-Detector" (PTD) anatomy [2]. We briefly describe the main beam line “Positioners” used inside the PTD concept. On the “Detector” side an increased effort is made to standardize the control within the areaDetector (AD) software package [3]. For the SwissFEL two detectors are envisaged: the Gotthard 1D and Jungfrau 2D pixel detectors, both built at PSI. Consistently with the PTD-anatomy, their control system framework based on the AD package is in preparation. In order to guarantee data acquisition with the SwissFEL nominal 100 Hz rate, the “Trigger” is interconnected with the SwissFEL timing system to guarantee shot-to-shot operation [4]. The AD plug-in concept allows significant data reduction; we believe this opens the doors towards on-line FEL experiments.
[1] Krempaský et al, ICALEPCS 2001 [2] www.aps.anl.gov/bcda/synApps/index.php [3] M. Rivers, SRI 2009, Melbourne [4] B. Kalantari et al, ICALEPCS 2011 |
|||
TUPPC089 | Upgrade of the Power Supply Interface Controller Module for SuperKEKB | power-supply, interface, controls, hardware | 790 |
|
|||
There were more than 2500 magnet power supplies for the KEKB storage rings and injection beam transport lines. For the remote control of such a large number of power supplies, we developed the Power Supply Interface Controller Module (PSICM), which is plugged into each power supply. It has a microprocessor, an ARCNET interface, a trigger signal input interface, and a parallel interface to the power supply. The PSICM is not only an interface card but also controls synchronous operation of multiple power supplies with an arbitrary tracking curve. For SuperKEKB, the upgrade of KEKB, most of the existing power supplies continue to be used while hundreds of new power supplies are also installed. Although the PSICMs have worked without serious problems for 12 years, it appears too hard to maintain them for the next decade because of discontinued parts. Thus we have developed an upgraded version of the PSICM. The new PSICM has a fully backward-compatible interface to the power supply. The enhanced features are high-speed ARCNET communication and redundant trigger signals. The design and status of the upgraded PSICM are presented. | ||
![]() |
Poster TUPPC089 [1.516 MB] | ||
TUPPC106 | Development of a Web-based Shift Reporting Tool for Accelerator Operation at the Heidelberg Ion Beam Therapy Center | ion, database, controls, framework | 822 |
|
|||
The HIT (Heidelberg Ion Therapy) center is the first dedicated European accelerator facility for cancer therapy using both carbon ions and protons, located at the university hospital in Heidelberg. It provides three fully operational treatment rooms, two with a fixed beam exit and one with a gantry. We are currently developing a web-based reporting tool for accelerator operations. Since medical treatment requires a high level of quality assurance, detailed reporting on beam quality, device failures and technical problems is even more important than in accelerator operation for science. The reporting tool will allow the operators to create their shift reports with support from automatically derived data, i.e. by providing pre-filled forms based on data from the Oracle database that is part of the proprietary accelerator control system. The reporting tool is based on the Python-powered CherryPy web framework, using SQLAlchemy for object-relational mapping. The HTML pages are generated from templates, enriched with jQuery to provide desktop-like usability. We will report on the system architecture of the tool and its current status, and show screenshots of the user interface.
[1] Th. Haberer et al., “The Heidelberg Ion Therapy Center”, Rad. & Onc., |
|||
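A toy example of the CherryPy plus SQLAlchemy combination mentioned above might look as follows; the table, columns and URL layout are invented and far simpler than the real tool, and SQLite stands in for the Oracle back end.

```python
# Toy sketch of a CherryPy web application with SQLAlchemy persistence.
import cherrypy
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class ShiftReport(Base):
    __tablename__ = 'shift_report'
    id = Column(Integer, primary_key=True)
    author = Column(String)
    summary = Column(String)

engine = create_engine('sqlite:///reports.db')      # placeholder for Oracle
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)

class ReportApp:
    @cherrypy.expose
    def submit(self, author, summary):
        session = Session()
        session.add(ShiftReport(author=author, summary=summary))
        session.commit()
        return 'stored'

    @cherrypy.expose
    def index(self):
        session = Session()
        rows = session.query(ShiftReport).all()
        return '<br>'.join('%s: %s' % (r.author, r.summary) for r in rows)

if __name__ == '__main__':
    cherrypy.quickstart(ReportApp(), '/')
```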
TUPPC108 | Using Web Syndication for Flexible Remote Monitoring | site, controls, detector, experiment | 825 |
|
|||
With the experience gained in the first years of running the ALICE apparatus, we have identified the need to collect and aggregate different data to be displayed to the user in a simplified, personalized and clear way. The data come from different sources in several formats and can contain values, text, pictures, or simply a link to extended content. This paper describes the idea of designing a light and flexible infrastructure to aggregate information produced in different systems and offer it to readers. In this model, a reader is presented with the information relevant to them, without being obliged to browse through different systems. The project consists of data production, collection and syndication, and is being developed in parallel with more traditional monitoring interfaces, with the aim of offering the ALICE users an alternative and convenient way to stay updated about their preferred systems even when they are far from the experiment. | ||
![]() |
Poster TUPPC108 [1.301 MB] | ||
TUPPC110 | Operator Intervention System for Remote Accelerator Diagnostics and Support | controls, network, EPICS, site | 832 |
|
|||
Large experimental physics projects such as ITER and the LHC are managed by international collaborations. Similarly, the ILC (International Linear Collider), as a next-generation project, will be started by a collaboration of many institutes from three regions. After the collaborative construction, collaborators outside the host country will need methods for remote maintenance through the control and monitoring of devices, for example by connecting to the control system network via WAN from their own countries. On the other hand, the remote operation of an accelerator via WAN raises some issues from a practical standpoint. One of the issues is that the accelerator is both an experimental device and a radiation generator. Additionally, a mis-operation during remote control could immediately cause a breakdown. For this reason, we plan to implement an operator intervention system for remote accelerator diagnostics and support, which will resolve the differences between working in the local control room and at other locations. In this paper, we report the system concept, the development status, and the future plan. | ||
![]() |
Poster TUPPC110 [7.215 MB] | ||
TUPPC111 | Online Status and Settings Monitoring for the LHC Collimators | status, injection, collimation, monitoring | 836 |
|
|||
The Large Hadron Collider is equipped with 100 movable collimators. The LHC collimator control system is responsible for the accurate synchronization of around 400 axes of motion at the microsecond level, and with the precision of a few micrometres. The status and settings of the collimators can be monitored by three displays in the CERN Control Center, each providing a different viewpoint onto the system and a different level of abstraction, such as the positions in mm or beam size units. Any errors and warnings are also displayed. In this paper, the display operation is described, as well as the interaction that occurs when an operator is required to identify and understand an error in the collimator settings. | |||
![]() |
Poster TUPPC111 [2.260 MB] | ||
TUPPC119 | Exchange of Crucial Information between Accelerator Operation, Equipment Groups and Technical Infrastructure at CERN | database, interface, laser, controls | 856 |
|
|||
During CERN accelerator operation, a large number of events, related to accelerator operation and management of technical infrastructure, occur with different criticality. All these events are detected, diagnosed and managed by the Technical Infrastructure service (TI) in the CERN Control Centre (CCC); equipment groups concerned have to solve the problem with a minimal impact on accelerator operation. A new database structure and new interfaces have to be implemented to share information received by TI, to improve communication between the control room and equipment groups, to help post-mortem studies and to correlate events with accelerator operation incidents. Different tools like alarm screens, logbooks, maintenance plans and work orders exist and are in use today. A project was initiated with the goal to integrate and standardize information in a common repository to be used by the different stakeholders through dedicated user interfaces. | |||
![]() |
Poster TUPPC119 [10.469 MB] | ||
TUPPC122 | Progress of the TPS Control Applications Development | controls, EPICS, GUI, interface | 867 |
|
|||
The TPS (Taiwan Photon Source) is a latest-generation 3 GeV synchrotron light source which is in its installation phase. Commissioning is expected in 2014. EPICS has been adopted as the control system framework for the TPS. Various EPICS IOCs have been implemented for each subsystem at this moment. Development and integration of the specific control operation interfaces are in progress. The operation interfaces mainly include setting, reading, save and restore functions. Development of high-level applications, which depend on the properties of each subsystem, is ongoing. The archive database system and its browser toolkits have gradually been established and tested. Web-based operation interfaces and broadcasting have also been created for observing the machine status. These efforts are summarized in this report. | ||
![]() |
Poster TUPPC122 [2.054 MB] | ||
TUPPC133 | Graphene: A Java Library for Real-Time Scientific Graphs | real-time, controls, interface, background | 901 |
|
|||
While a number of open-source charting libraries are available in Java, none of them seems suitable for real-time scientific data, such as the data coming from control systems. Common shortcomings include inadequate performance, entanglement with other scientific packages, concrete data objects (which require copy operations), designs aimed at small datasets, and the requirement of a running UI to produce any graph. Graphene is our effort to produce graphs that are suitable for scientific publishing, can be created without a UI (e.g. in a web server), work on data defined through interfaces that allow no-copy processing in a real-time pipeline, and are produced with adequate performance. The graphs are then integrated using pvmanager within Control System Studio. | ||
![]() |
Poster TUPPC133 [0.502 MB] | ||
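Graphene itself is a Java library; purely to illustrate the "no running UI" requirement, the Python sketch below renders a plot head-lessly with matplotlib's Agg backend into an in-memory PNG that a web server could return directly.

```python
# Illustration of head-less graph rendering (analogy only, not Graphene).
import io
import matplotlib
matplotlib.use('Agg')                     # no display or UI event loop needed
import matplotlib.pyplot as plt
import numpy as np

def render_waveform_png(samples):
    fig, ax = plt.subplots(figsize=(4, 3), dpi=100)
    ax.plot(samples)
    ax.set_xlabel('sample')
    ax.set_ylabel('value')
    buf = io.BytesIO()
    fig.savefig(buf, format='png')        # rendered entirely off-screen
    plt.close(fig)
    return buf.getvalue()

png_bytes = render_waveform_png(np.sin(np.linspace(0, 10, 1000)))
```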
TUCOCA01 | XFEL Machine Protection System (MPS) Based on uTCA | linac, kicker, FPGA, undulator | 906 |
|
|||
The European X-Ray Free Electron Laser (XFEL) linear accelerator will provide an electron beam with energies of up to 17.5 GeV and will use it to generate extremely brilliant pulses of spatially coherent x-rays. With a design average beam power of up to 600 kW and beam spot sizes down to a few micrometers, the machine will hold a serious damage potential. To ensure safe operation of the accelerator it is necessary to detect dangerous situations by closely monitoring beam losses and the status of critical components. This is the task of the uTCA*-based machine protection system (MPS). Many design features of the system have been influenced by experience from existing facilities, particularly the Free Electron Laser in Hamburg (FLASH), which serves as a kind of 1:10 prototype for the XFEL. A high flexibility of the MPS is essential to guarantee minimum downtime of the accelerator. The MPS is embedded in the DOOCS** control system.
* uTCA: Micro Telecommunications Computing Architecture ** DOOCS: Distributed Object Oriented Control System |
|||
![]() |
Slides TUCOCA01 [2.255 MB] | ||
TUCOCA02 | The ITER Interlock System | plasma, controls, interlocks, neutral-beams | 910 |
|
|||
ITER is formed by systems which shall be pushed to their performance limits in order to successfully achieve the scientific goals. The scientists in charge of exploiting the tokamak will require enough operational flexibility to explore as many plasma scenarios as possible while being sure that the integrity of the machine and the safety of the environment and personnel are not compromised. The I&C systems of ITER have been divided into three separate tiers: the conventional I&C, the safety system and the interlock system. This paper focuses on the latter. The design of the ITER interlocks has to take into account the intrinsic diversity of ITER systems, which implies a diversity of risks to be mitigated and hence the impossibility of implementing a single solution for the whole machine. This paper presents the chosen interlock solutions based on PLC, FPGA, and hardwired technologies. It also describes how experience from existing tokamaks has been applied to the design of the ITER interlocks, as well as the ITER particularities that have forced the designers to evaluate some technical choices which historically have been considered unsuitable for implementing interlock functions. | ||
![]() |
Slides TUCOCA02 [3.303 MB] | ||
TUCOCA04 | Formal Methodology for Safety-Critical Systems Engineering at CERN | PLC, software, site, interface | 918 |
|
|||
A Safety-Critical system is a system whose failure or malfunctioning may lead to an injury or loss of human life or may have serious environmental consequences. The Safety System Engineering section of CERN is responsible for the conception of systems capable of performing, in an extremely safe way, a predefined set of Instrumented Functions preventing any human presence inside areas where a potential hazardous event may occur. This paper describes the formal approach followed for the engineering of the new Personnel Safety System of the PS accelerator complex at CERN. Starting from applying the generic guidelines of the safety standard IEC-61511, we have defined a novel formal approach particularly useful to express the complete set of Safety Functions in a rigorous and unambiguous way. We present the main advantages offered by this formalism and, in particular, we will show how this has been effective in solving the problem of the Safety Functions testing, leading to a major reduction of time for the test pattern generation. | |||
![]() |
Slides TUCOCA04 [2.227 MB] | ||
TUCOCA06 | Current Status of a Carborne Survey System, KURAMA | survey, monitoring, radiation, detector | 926 |
|
|||
A carborne survey system named KURAMA (Kyoto University RAdiation MApping system) has been developed in response to the nuclear accident at the TEPCO Fukushima Daiichi Nuclear Power Plant in 2011. The system has now evolved into the CompactRIO-based KURAMA-II and serves various types of applications. More than a hundred KURAMA-II units are deployed by the Japanese government for periodically drawing the radiation map of eastern Japan. Continuous radiation monitoring by KURAMA-II on local buses has been started in Fukushima prefecture as a collaboration project among Kyoto University, the Fukushima prefectural government, and JAEA. Extended applications such as precise radiation mapping in farmlands and parks are also on the way. The present status and future prospects of KURAMA and KURAMA-II are introduced. | ||
TUCOCA08 | Personnel and Machine Protection Systems in The National Ignition Facility (NIF) | target, controls, laser, monitoring | 933 |
|
|||
Funding: * This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory (LLNL) under Contract DE-AC52-07NA27344. #LLNL-ABS-633232 The National Ignition Facility (NIF) is the world’s largest and most energetic laser system and has the potential to generate significant levels of ionizing radiation. The NIF employs real time safety systems to monitor and mitigate the potential hazards presented by the facility. The Machine Safety System (MSS) monitors key components in the facility to allow operations while also protecting against configurations that could damage equipment. The NIF Safety Interlock System (SIS) monitors for oxygen deficiency, radiological alarms, and controls access to the facility preventing exposure to laser light and radiation. Together the SIS and MSS control permissives to the hazard generating equipment and annunciate hazard levels in the facility. To do this reliably and safely, the SIS and MSS have been designed as fail safe systems with a proven performance record now spanning over 12 years. This presentation discusses the SIS and MSS, design, implementation, operator interfaces, validation/verification, and the hazard mitigation approaches employed in the NIF. A brief discussion of common failures encountered in the design of safety systems and how to avoid them will be presented. |
|||
![]() |
Slides TUCOCA08 [2.808 MB] | ||
TUCOCB02 | Middleware Proxy: A Request-Driven Messaging Broker for High Volume Data Distribution | controls, device-server, database, diagnostics | 948 |
|
|||
Nowadays, all major infrastructures and data centers (commercial and scientific) make extensive use of the publish-subscribe messaging paradigm, which helps to decouple the message sender (publisher) from the message receiver (consumer). This paradigm is also heavily used in the CERN Accelerator Control system, in the Proxy broker, a critical part of the Controls Middleware (CMW) project. The Proxy provides the aforementioned publish-subscribe facility and also supports the execution of synchronous read and write operations. Moreover, it enables service scalability and dramatically reduces the network resources and the overhead (CPU and memory) on the publisher machine required to serve all subscriptions. The Proxy was developed in modern C++, using state-of-the-art programming techniques (e.g. Boost) and following recommended software patterns for achieving low latency and high concurrency. The outstanding performance of the Proxy infrastructure has been confirmed over the last 3 years by delivering high volumes of LHC equipment data to many critical systems. This work describes the Proxy architecture in detail, together with the lessons learnt from operation and the plans for future evolution. | ||
![]() |
Slides TUCOCB02 [4.726 MB] | ||
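The fan-out role of such a broker can be sketched generically with plain ZeroMQ (this is not the CMW Proxy implementation): a single subscription on the publisher side is forwarded to any number of client subscribers. The endpoints are arbitrary examples.

```python
# Generic publish-subscribe fan-out broker using a ZeroMQ XSUB/XPUB proxy.
import zmq

def run_proxy(frontend_ep='tcp://*:6000', backend_ep='tcp://*:6001'):
    ctx = zmq.Context.instance()
    frontend = ctx.socket(zmq.XSUB)   # faces the data publishers
    backend = ctx.socket(zmq.XPUB)    # faces the subscribing clients
    frontend.bind(frontend_ep)
    backend.bind(backend_ep)
    # zmq.proxy forwards data downstream and subscription messages upstream,
    # so the publisher serves one connection regardless of the client count.
    zmq.proxy(frontend, backend)

if __name__ == '__main__':
    run_proxy()
```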
TUCOCB04 | EPICS Version 4 Progress Report | EPICS, controls, database, network | 956 |
|
|||
EPICS Version 4 is the next major revision of the Experimental Physics and Industrial Control System, a widely used software framework for controls in large facilities, accelerators and telescopes. The primary goal of Version 4 is to improve support for scientific applications by augmenting the control-centered EPICS Version 3 with an architecture that allows building scientific services on top of it. Version 4 provides a new standardized wire protocol, support of structured types, and parametrized queries. The long-term plans also include a revision of the IOC core layer. The first set of services like directory, archive retrieval, and save set services aim to improve the current EPICS architecture and enable interoperability. The first services and applications are now being deployed in running facilities. We present the current status of EPICS V4, the interoperation of EPICS V3 and V4, and how to create services such as accelerator modelling, large database access, etc. These enable operators and physicists to write thin and powerful clients to support commissioning, beam studies and operations, and opens up the possibility of sharing applications between different facilities. | |||
![]() |
Slides TUCOCB04 [1.937 MB] | ||
TUCOCB08 | Reimplementing the Bulk Data System with DDS in ALMA ACS | network, site, CORBA, controls | 969 |
|
|||
Bulk Data (BD) is a service in the ALMA Common Software for transferring large amounts of astronomical data from many-to-one and one-to-many computers. Its main application is the Correlator software, which processes raw lags from the Correlator hardware into science visibilities. The Correlator retrieves data from the antennas on up to 32 computers. Data are forwarded to a master computer and combined before being sent to consumers. The throughput requirement both to and from the master is 64 MBytes/s, distributed differently depending on observing conditions. The robustness requirements make the application very challenging. The first implementation, based on the CORBA A/V Streaming service, showed weaknesses. We therefore decided to replace it, even though we were approaching the start of operations, making provision for careful testing. We have chosen DDS (Data Distribution Service) as the core technology, being a well-supported standard that is widespread in similar applications. We have evaluated mainstream implementations, with emphasis on performance, robustness and error handling. We have successfully deployed the new BD, making it easy to switch between old and new for testing purposes. We discuss challenges and lessons learned. | ||
![]() |
Slides TUCOCB08 [1.582 MB] | ||
WECOBA07 | High Speed Detectors: Problems and Solutions | detector, network, software, data-analysis | 1016 |
|
|||
Diamond has an increasing number of high-speed detectors, primarily used on the Macromolecular Crystallography, Small Angle X-Ray Scattering and Tomography beamlines. Recently, the performance requirements have exceeded the performance available from a single-threaded writing process on our Lustre parallel file system, so we have had to investigate other file systems and ways of parallelising the data flow to mitigate this. We report on some comparative tests between Lustre and GPFS, and on work we have been leading to enhance the HDF5 library with features that simplify the parallel writing problem. | ||
![]() |
Slides WECOBA07 [0.617 MB] | ||
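The single-writer/multiple-reader style of streaming detector frames into a growing HDF5 dataset can be sketched with h5py as below; whether this corresponds exactly to the library enhancements discussed in the paper is not claimed here.

```python
# Illustrative h5py sketch: stream frames into a growing dataset while
# allowing concurrent readers (SWMR). Frame size and file name are examples.
import numpy as np
import h5py

FRAME_SHAPE = (512, 512)        # hypothetical detector frame size

with h5py.File('frames.h5', 'w', libver='latest') as f:
    dset = f.create_dataset('data',
                            shape=(0,) + FRAME_SHAPE,
                            maxshape=(None,) + FRAME_SHAPE,
                            chunks=(1,) + FRAME_SHAPE,
                            dtype='uint16')
    f.swmr_mode = True          # readers may now open the file concurrently
    for i in range(10):
        frame = np.random.randint(0, 1000, FRAME_SHAPE, dtype=np.uint16)
        dset.resize(i + 1, axis=0)
        dset[i] = frame
        dset.flush()            # make the new frame visible to readers
```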
THCOAAB06 | Achieving a Successful Alarm Management Deployment – The CLS Experience | controls, factory, software, monitoring | 1062 |
|
|||
Alarm management systems promise to improve situational awareness, aid operational staff in responding correctly to accelerator problems, and reduce downtime. Many facilities, including the Canadian Light Source (CLS), have been challenged in achieving this goal. At the CLS, past attempts focused on software features and capabilities. Our third attempt switched gears and instead focused on human factors engineering techniques and the associated processes for responding to alarms. Aspects of the ISA 18.2, EEMUA 191 and NUREG-700 standards were used. The CLS adopted the CSS BEAST alarm handler software. Work was also undertaken to identify bad actors, analyse alarm system performance and avoid alarm flooding. The BEAST deployment was augmented with a locally developed voice annunciation system for a small number of critical high-impact alarms, and with auto-diallers for shutdown periods when the control room is not staffed. This paper summarizes our approach and lessons learned. | ||
![]() |
Slides THCOAAB06 [0.397 MB] | ||
THCOAAB07 | NIF Electronic Operations: Improving Productivity with iPad Application Development | framework, network, database, diagnostics | 1066 |
|
|||
Funding: This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. #LLNL-ABS-632815 In an experimental facility like the National Ignition Facility (NIF), thousands of devices must be maintained during day to day operations. Teams within NIF have documented hundreds of procedures, or checklists, detailing how to perform this maintenance. These checklists have been paper based, until now. NIF Electronic Operations (NEO) is a new web and iPad application for managing and executing checklists. NEO increases efficiency of operations by reducing the overhead associated with paper based checklists, and provides analysis and integration opportunities that were previously not possible. NEO’s data driven architecture allows users to manage their own checklists and provides checklist versioning, real-time input validation, detailed step timing analysis, and integration with external task tracking and content management systems. Built with mobility in mind, NEO runs on an iPad and works without the need for a network connection. When executing a checklist, users capture various readings, photos, measurements and notes which are then reviewed and assessed after its completion. NEO’s design, architecture, iPad application and uses throughout the NIF will be discussed. |
|||
![]() |
Slides THCOAAB07 [1.237 MB] | ||
THCOAAB09 | Olog and Control System Studio: A Rich Logging Environment | controls, interface, experiment, framework | 1074 |
|
|||
Leveraging the features provided by Olog and Control System Studio, we have developed a logging environment which allows for the creation of rich log entries. In addition to text and snapshot images, these entries store context, which can comprise information either from the control system (process variables) or from other services (directory, ticketing, archiver). Using this context, the client tools give the user the ability to launch various applications with their state initialized to match the state at the time the entry was created. | ||
![]() |
Slides THCOAAB09 [1.673 MB] | ||
THMIB03 | From Real to Virtual - How to Provide a High-availability Computer Server Infrastructure | controls, Linux, hardware, network | 1076 |
|
|||
During the commissioning phase of the Swiss Light Source (SLS) at the Paul Scherrer Institut (PSI), we decided in 2000 on a strategy of separating individual services for the control system. The reason was to prevent interruptions between different service contexts due to network congestion, misdirected control, and other causes. This concept has proved reliable over the years. Today, each accelerator facility and beamline of PSI resides on a separate subnet and uses its dedicated set of service computers. As the number of beamlines and accelerators grew, the variety of services and their quantity rapidly increased. Fortunately, about the time the SLS announced its first beam, VMware introduced its VMware Virtual Platform for the Intel IA32 architecture. This was a great opportunity for us to start virtualizing the controls services. Currently, we have about 200 such systems. In this presentation we discuss how we achieved this highly virtualized controls infrastructure, as well as how we will proceed in the future. | ||
![]() |
Slides THMIB03 [2.124 MB] | ||
![]() |
Poster THMIB03 [1.257 MB] | ||
THPPC014 | CMX - A Generic Solution to Expose Monitoring Metrics in C and C++ Applications | monitoring, controls, real-time, diagnostics | 1118 |
|
|||
CERN’s Accelerator Control System is built upon a large number of C, C++ and Java services that are required for the daily operation of the accelerator complex. Knowledge of the internal state of these processes is essential for problem diagnostics as well as for constant monitoring and pre-failure recognition. The CMX library follows similar principles to JMX (Java Management Extensions) and provides similar monitoring capabilities for C and C++ applications. It allows runtime information to be registered and exposed as simple counters, floating-point numbers or character data. This can subsequently be used by external diagnostic tools for checking thresholds, sending alerts or trending. CMX uses shared memory to ensure non-blocking read/update actions, which is an important requirement for real-time processes. This paper introduces the topic of monitoring C/C++ applications and presents CMX as a building block to achieve this goal. | ||
![]() |
Poster THPPC014 [0.795 MB] | ||
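CMX itself is a C/C++ library; the Python fragment below only illustrates the underlying idea of exposing a metric through shared memory so that an external diagnostic tool can read it without blocking the publisher. The segment name and layout are arbitrary.

```python
# Conceptual illustration of a shared-memory counter (not the CMX library).
import struct
from multiprocessing import shared_memory

SEG_NAME = 'demo_metrics'

# --- inside the monitored process --------------------------------------------
shm = shared_memory.SharedMemory(name=SEG_NAME, create=True, size=8)

def update_counter(value):
    # A plain 8-byte write; the publisher never waits for any reader.
    shm.buf[:8] = struct.pack('<q', value)

update_counter(42)

# --- inside an external diagnostics tool --------------------------------------
reader = shared_memory.SharedMemory(name=SEG_NAME)
(counter,) = struct.unpack('<q', bytes(reader.buf[:8]))
print(counter)                     # -> 42

reader.close()
shm.close()
shm.unlink()
```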
THPPC034 | A Novel Analysis of Time Evolving Betatron Tune | betatron, injection, experiment, extraction | 1157 |
|
|||
The J-PARC Main Ring (MR) is a high-intensity proton synchrotron which has been delivering beam to the T2K neutrino experiment and hadron experiments since 2009. It is essential to measure the time variation of the betatron tune accurately throughout the cycle, from beam injection at 3 GeV to extraction at 30 GeV. The tune measurement system of the J-PARC MR consists of a stripline kicker, beam position monitors, and a waveform digitizer. The betatron tune appears as sidebands of the harmonics of the revolution frequency in the turn-by-turn beam position spectrum. Excellent measurement accuracy and high immunity against noise were achieved by exploiting a wide-band spectrum covering multiple harmonics. | ||
![]() |
Poster THPPC034 [0.707 MB] | ||
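The core of such a measurement can be illustrated schematically: take the FFT of turn-by-turn positions and locate the tune peak. The real system fits sidebands of several revolution harmonics in the digitized waveform, which this simplified sketch does not reproduce.

```python
# Schematic extraction of a fractional betatron tune from turn-by-turn data.
import numpy as np

def fractional_tune(turn_by_turn):
    x = np.asarray(turn_by_turn, dtype=float)
    x -= x.mean()                              # remove the closed-orbit offset
    spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    peak = np.argmax(spectrum[1:]) + 1         # skip the DC bin
    return peak / len(x)                       # tune in units of f_rev

# Synthetic example: 1024 turns with a fractional tune of 0.31 plus noise.
turns = np.arange(1024)
data = np.cos(2 * np.pi * 0.31 * turns) + 0.1 * np.random.randn(1024)
print(round(fractional_tune(data), 3))         # close to 0.31
```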
THPPC035 | RF Signal Switching System for Electron Beam Position Monitor Utilizing ARM Microcontroller | controls, injection, LabView, Ethernet | 1160 |
|
|||
ARM microcontrollers have high processing speed and low power consumption because they work efficiently with less memory thanks to their own instruction set. ARM microcontrollers are therefore used not only in portable devices but also in other commercial electronic devices. In recent years, free development environments and low-cost development kits have been provided by many companies. The “mbed” provided by NXP is one of them. The “mbed” provides an environment in which we can develop a product easily even if we are not familiar with electronics or microcontrollers. We can supply electric power and transfer the program we have developed by connecting to a PC via USB. We can use USB and LAN, which in general require a high level of expertise. The “mbed” also functions as an HTTP server. By combining it with a JavaScript library, we can control multiple I/O ports at the same time over the LAN. In this presentation, we report on applying the “mbed” to develop an RF signal switching system for a turn-by-turn beam position monitor (BPM) at the synchrotron light source UVSOR-III. | ||
![]() |
Poster THPPC035 [2.228 MB] | ||
THPPC045 | The SSC-Linac Control System | controls, software, linac, hardware | 1173 |
|
|||
This article gives a brief description of the SSC-Linac control system for the Heavy Ion Research Facility in Lanzhou (HIRFL). It mainly describes the overall system architecture, hardware and software. The overall architecture is that of a distributed control system. We have adopted EPICS as the system integration tool to develop the SSC-Linac control system. We use NI PXIe chassis and PXIe bus masters as the front-end control hardware. Device controllers for each subsystem are composed of commercial products or of components designed by the subsystems. The operating system of the OPIs and IOCs of the SSC-Linac control system will be Linux. | ||
THPPC049 | The Power Supply System for Electron Beam Orbit Correctors and Focusing Lenses of Kurchatov Synchrotron Radiation Source | controls, power-supply, synchrotron, synchrotron-radiation | 1180 |
|
|||
The modernization project for the low-current power supply system of the Kurchatov Synchrotron Radiation Source has been designed and is now under implementation. It includes a transition to new power supplies feeding the electron beam orbit correctors and focusing lenses. A multi-level control system, based on the CAN/CANopen fieldbus, has been developed for the specific accelerator applications, which allows the start-up and continuous running of hundreds of power supplies together with the other subsystems of the accelerator. The power supply data and status are collected into the archive with the Citect SCADA 7.2 Server and SCADA Historian Server. The following operational parameters of the system are expected: current control resolution of 0.05% of Imax; current stability of 5×10⁻⁴; 10-hour current variance of 100 ppm of Imax; temperature drift of 40 ppm/K of Imax. | ||
THPPC050 | Upgrade System of Vacuum Monitoring of Synchrotron Radiation Sources of National Research Centre Kurchatov Institute | vacuum, controls, synchrotron, database | 1183 |
|
|||
A modernization project for the vacuum system of the synchrotron radiation source at the National Research Centre Kurchatov Institute (NRC KI) has been designed and implemented. It includes a transition to new high-voltage power sources for the NMD and PVIG-0.25/630 pumps. The system is controlled via CAN bus, and the vacuum is monitored by measuring pump currents in a range of 0.0001–10 mA. Status visualization, data collection and data storage are implemented on the Citect SCADA 7.2 Server and SCADA Historian Server. The system ensures a vacuum of 10⁻⁷ Pa. The efficiency and reliability of the vacuum system are increased by this work, making it possible to improve the main parameters of the SR source. | ||
THPPC053 | NSLS-II Booster Ramp Handling | controls, booster, injection, dipole | 1189 |
|
|||
The NSLS-II booster is a full-energy synchrotron covering the range from 200 MeV up to 3 GeV. The ramping cycle is 1 second. A set of electronics developed at BNL for the NSLS-II project was adapted for the control of the booster power supplies (PSs). The set includes a Power Supply Interface, which is located close to a power supply, and a Power Supply Controller (PSC), which is connected via 100 Mbit Ethernet to an EPICS IOC running on a front-end computer. A table of 10k setpoints uploaded to the memory of the PSC defines the behaviour of a PS during the machine cycle. Special software is implemented in the IOC to provide a smooth shape of the ramping waveform when the waveform is changed. A Ramp Manager (RM) high-level application has been developed in Python to easily change, compare and copy the ramping waveforms and upload them to process variables. The RM provides a check of the waveform derivative and manual adjustment of the waveform in graph and text format, and includes all the features specific to the control of the booster PSs. This paper describes the software for the booster ramp handling. | ||
![]() |
Poster THPPC053 [0.423 MB] | ||
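The derivative check mentioned above can be sketched as follows for a 10k-point, 1 s ramp table; the ramp-rate limit and the waveform itself are invented for the example.

```python
# Simplified sketch of a slew-rate check on a ramping table of setpoints.
import numpy as np

N_POINTS = 10_000
CYCLE_S = 1.0
DT = CYCLE_S / N_POINTS                       # spacing between setpoints
MAX_RAMP_RATE = 100.0                         # hypothetical limit, A/s

def check_ramp(waveform, max_rate=MAX_RAMP_RATE):
    """Return the indices where the setpoint slew rate exceeds the limit."""
    rate = np.abs(np.diff(waveform)) / DT
    return np.nonzero(rate > max_rate)[0]

ramp = np.concatenate([np.linspace(0, 30, 6000),      # current up
                       np.linspace(30, 0, 4000)])     # back down
bad = check_ramp(ramp)
print(len(bad), 'points exceed the slew-rate limit')
```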
THPPC057 | Validation of the Data Consolidation in Layout Database for the LHC Tunnel Cryogenics Controls Package | controls, database, cryogenics, PLC | 1197 |
|
|||
The control system of the Large Hadron Collider cryogenics manages over 34,000 instrumentation channels which are essential for populating the software of the PLCs (Programmable Logic Controllers) and SCADA (Supervisory Control and Data Acquisition) systems responsible for maintaining the LHC at the appropriate operating conditions. The control system specifications are generated by the CERN UNICOS (Unified Industrial Control System) framework using a set of database views extracted from the LHC layout database. The LHC layout database is part of the CERN database managing centralized and integrated data, documenting the whole CERN infrastructure (accelerator complex) by modeling its topographical organization (“layouts”) and defining its components (functional positions) and the relationships between them. This paper describes the methodology of the data validation process, including the development of different software tools used to update the database from original values to manually adjusted values after three years of machine operation, as well as the update of the data to accommodate the upgrade of the UNICOS Continuous Process Control (CPC) package. | ||
THPPC061 | SwissFEL Magnet Test Setup and Its Controls at PSI | controls, EPICS, software, detector | 1209 |
|
|||
High brightness electron bunches will be guided in the future Free Electron Laser (SwissFEL) at Paul Scherrer Institute (PSI) with the use of several hundred magnets. The SwissFEL machine imposes very strict requirements not only to the field quality but also to mechanical and magnetic alignments of these magnets. To ensure that the magnet specifications are met and to develop reliable procedures for aligning magnets in the SwissFEL and correcting their field errors during machine operations, the PSI magnet test system was upgraded. The upgraded system is a high precision measurement setup based on Hall probe, rotating coil, vibrating wire and moving wire techniques. It is fully automated and integrated in the PSI controls. The paper describes the main controls components of the new magnet test setup and their performance. | |||
![]() |
Poster THPPC061 [0.855 MB] | ||
THPPC065 | Software System for Monitoring and Control at the Solenoid Test Facility | controls, monitoring, solenoid, database | 1224 |
|
|||
Funding: This work was supported by the U.S. Department of Energy. The architecture and implementation aspects of the control and monitoring system developed for Fermilab's new Solenoid Test Facility will be presented. At the heart of the system lies a highly configurable scan subsystem targeted at precise measurements of low temperatures with uniformly incorporated control elements. A multi-format archival system allows for the use of flat files, XML, and a relational database for storing data, and a Web-based application provides access to historical trends. The DAQ and computing platform includes COTS elements. The layered architecture separates the system into Windows operator stations, the real-time operating system-based DAQ and controls, and the FPGA-based time-critical and safety elements. The use of the EPICS CA protocol with LabVIEW opens the system to many available EPICS utilities . |
|||
![]() |
Poster THPPC065 [2.059 MB] | ||
THPPC072 | Superconducting Cavity Quench Detection and Prevention for the European XFEL | cavity, LLRF, cryogenics, coupling | 1239 |
|
|||
Due to its large scale, the European X-ray Free Electron Laser accelerator (XFEL) requires a high level of automation for commissioning and operation. Each of the 800 superconducting RF cavities simultaneously running during normal operation can occasionally quench, potentially tripping the cryogenic system and resulting in machine downtime. A fast and reliable quench detection system is therefore a necessity to rapidly detect individual cavity quenches and take immediate action, thus avoiding interruption of machine operation. In this paper, the mechanisms implemented in the low-level RF system (LLRF) to prevent quenches and the algorithms developed to detect whether a cavity quenches anyway are explained. In particular, the different types of cavity quenches and the techniques developed to identify them are shown. Experimental results acquired during the testing of XFEL cryomodule prototypes at DESY are presented, demonstrating the performance and efficiency of this machine operation and cavity protection tool. | ||
THPPC076 | Re-Engineering Control Systems using Automatic Generation Tools and Process Simulation: the LHC Water Cooling Case | controls, PLC, simulation, interlocks | 1242 |
|
|||
This paper presents the approach used at CERN (European Organization for Nuclear Research) for the re-engineering of the control systems for the water cooling systems of the LHC (Large Hadron Collider). Due to a very short, and therefore restrictive, intervention time for these control systems, each PLC had to be completely commissioned in only two weeks. To achieve this challenge, automatic generation tools were used with the CERN control framework UNICOS (Unified Industrial Control System) to produce the PLC code. Moreover, process dynamic models using the simulation software EcosimPro were developed to carry out the ‘virtual’ commissioning of the new control systems for the most critical processes thus minimizing the real commissioning time on site. The re-engineering concerns around 20 PLCs managing 11000 Inputs/Outputs all around the LHC. These cooling systems are composed of cooling towers, chilled water production units and water distribution systems. | |||
![]() |
Poster THPPC076 [4.046 MB] | ||
THPPC077 | A Fuzzy-Oriented Solution for Automatic Distribution of Limited Resources According to Priority Lists | simulation, controls, cryogenics, superconducting-magnet | 1246 |
|
|||
This paper proposes a solution for resource allocation when limited resources supply several clients in parallel. The lack of a suitable limitation mechanism in the supply system can lead to the depletion of the resources if the total demand exceeds the availability. To avoid this situation, an algorithm for priority handling which relies on Fuzzy Systems Theory is used. The fuzzy approach, as a problem-solving technique, is robust with respect to model and parameter uncertainties and is well adapted to systems whose mathematical formulation is difficult or impossible to obtain. The aim of the algorithm is to grant a fair allocation if the resource availability is sufficient for all the clients or, in case of excess demand, to assure, on the basis of priority lists, enough resources only to the high-priority clients in order to allow the completion of the high-priority tasks. Besides the general algorithm, this paper describes the fuzzy approach applied to a cryogenic test facility at CERN. Simulation tools are employed to validate the proposed algorithm and to characterize its performance. | ||
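A bare-bones sketch of a fuzzy, priority-weighted sharing rule in the spirit of the algorithm described above is given below; the membership function and all numbers are illustrative, not the published controller.

```python
# Illustrative fuzzy sharing rule: fair when resources suffice, priority-
# weighted when demand exceeds availability. All values are made up.
def scarcity(total_demand, available):
    """Fuzzy membership in 'resources are scarce', between 0 and 1."""
    if total_demand <= available:
        return 0.0
    return min(1.0, (total_demand - available) / available)

def allocate(requests, available):
    """requests: {client: (demand, priority in [0, 1])} -> {client: grant}."""
    total = sum(d for d, _ in requests.values())
    mu = scarcity(total, available)
    # Blend a fair (demand-proportional) share with a priority-weighted share.
    weight = {c: (1 - mu) * d + mu * d * p for c, (d, p) in requests.items()}
    norm = sum(weight.values()) or 1.0
    return {c: min(requests[c][0], available * weight[c] / norm)
            for c in requests}

print(allocate({'testA': (60, 1.0), 'testB': (60, 0.2)}, available=80))
```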
THPPC086 | Analyzing Off-normals in Large Distributed Control Systems using Deep Packet Inspection and Data Mining Techniques | network, controls, toolkit, distributed | 1278 |
|
|||
Funding: This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. #LLNL-ABS-632814 Network packet inspection using port mirroring provides the ultimate tool for understanding complex behaviors in large distributed control systems. The timestamped captures of network packets embody the full spectrum of protocol layers and uncover intricate and surprising interactions. No other tool is capable of penetrating through the layers of software and hardware abstractions to allow the researcher to analyze an integrated system composed of various operating systems, closed-source embedded controllers, software libraries and middleware. Being completely passive, the packet inspection does not modify the timings or behaviors. The completeness and fine resolution of the network captures present an analysis challenge, due to huge data volumes and difficulty of determining what constitutes the signal and noise in each situation. We discuss the development of a deep packet inspection toolchain and application of the R language for data mining and visualization. We present case studies demonstrating off-normal analysis in a distributed real-time control system. In each case, the toolkit pinpointed the problem root cause which had escaped traditional software debugging techniques. |
|||
![]() |
Poster THPPC086 [2.353 MB] | ||
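The paper's toolchain is built on port mirroring and R; as a rough Python analogue, the sketch below reads a capture with scapy and summarizes inter-arrival times per talker pair, the kind of feature one might mine for off-normal behaviour. The file name is a placeholder.

```python
# Rough analogue of packet-capture mining (not the authors' R toolchain).
from collections import defaultdict
from scapy.all import rdpcap, IP

def interarrival_stats(pcap_file):
    packets = rdpcap(pcap_file)
    last_seen, gaps = {}, defaultdict(list)
    for pkt in packets:
        if IP not in pkt:
            continue
        pair = (pkt[IP].src, pkt[IP].dst)
        t = float(pkt.time)
        if pair in last_seen:
            gaps[pair].append(t - last_seen[pair])
        last_seen[pair] = t
    return {pair: (len(g), sum(g) / len(g)) for pair, g in gaps.items() if g}

for pair, (count, mean_gap) in interarrival_stats('mirror_port.pcap').items():
    print(pair, count, round(mean_gap, 6))
```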
THPPC105 | The LHC Injection Sequencer | injection, kicker, database, controls | 1307 |
|
|||
The LHC is the largest accelerator at CERN. The two beams of the LHC collide in four experiments, and each beam can be composed of up to 2808 high-intensity bunches. The beams are produced at the linac, then shaped and accelerated in the LHC injectors to 450 GeV. Each injected beam contains up to 288 high-intensity bunches, corresponding to a stored energy of 2 MJ. To build, for each LHC ring, the complete bunch scheme that ensures the desired number of collisions for each experiment, several injections from the SPS into the LHC are needed. The type of beam that is needed and the longitudinal placement of each injection have to be defined with care. This process is controlled by the injection sequencer, which orchestrates the beam requests. Predefined filling schemes stored in a database are used to indicate the number of injections, the type of beam and the longitudinal place of each one. The injection sequencer sends the corresponding beam requests to the CBCM, the central timing manager, which in turn synchronizes the beam production in the injectors. This paper describes how the injection sequencer is implemented and its interaction with the other systems involved in the injection process. | ||
![]() |
Poster THPPC105 [0.606 MB] | ||
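To make the data flow described above concrete, the sketch below steps through a predefined filling scheme and issues one beam request per injection. The `Injection` record, the `request_beam` stand-in and the scheme values are illustrative assumptions, not the LHC implementation or the CBCM interface.

```python
# Minimal sketch of an injection sequencer stepping through a filling scheme.
from dataclasses import dataclass

@dataclass
class Injection:
    beam_type: str      # e.g. "25ns_288b"
    n_bunches: int      # bunches per SPS extraction (up to 288)
    rf_bucket: int      # longitudinal position (RF bucket) in the LHC ring

def request_beam(inj: Injection) -> None:
    # Stand-in for the beam request sent to the central timing system.
    print(f"requesting {inj.beam_type}: {inj.n_bunches} bunches into bucket {inj.rf_bucket}")

def run_filling_scheme(scheme: list[Injection]) -> None:
    for inj in scheme:
        request_beam(inj)   # in reality: wait for injection permit, kickers ready, etc.

if __name__ == "__main__":
    scheme = [Injection("25ns_288b", 288, 1), Injection("25ns_288b", 288, 8911)]
    run_filling_scheme(scheme)
```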
THPPC113 | Integrated Timing System for the EBIS Pre-Injector | timing, booster, ion, controls | 1325 |
|
|||
Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The Electron Beam Ion Source (EBIS) began operating as a pre-injector in the C-AD RHIC accelerator complex in 2010. Historically, C-AD RHIC pre-injectors, like the 200MeV Linac, have had largely independent timing systems that receive a minimal number of triggers from the central C-AD timing system to synchronize the injection process. The EBIS timing system is much more closely integrated into central C-AD timing, with all EBIS machine cycles included in the master supercycle that coordinates the interoperation of C-AD accelerators. The integrated timing approach allows better coordination of pre-injector activities with other activities in the C-AD complex. Independent pre-injector operation, however, must also be supported by the EBIS timing system. This paper describes the design of the EBIS timing system and evaluates experience in operational management of EBIS timing. |
|||
![]() |
Poster THPPC113 [21.388 MB] | ||
THPPC116 | Temperature Precise Control in a Large Scale Helium Refrigerator | controls, cryogenics, experiment, simulation | 1331 |
|
|||
Precise control of the operating load temperature is a key requirement for the operation of a large scale helium refrigerator. Strict control logic and timing sequences are necessary in the process related to the main components, including the load, turbine expanders and compressors. However, the control sequence may become disordered due to improper PID parameter settings and logic equations, causing temperature oscillation, load increase, protective shutdown of the compressors, cryogenic valve malfunction, etc. Combining experimental studies and simulation models, the effect of PID parameter adjustment on the control process is presented in detail. The methods and rules for general parameter settings are established, and suitable control logic equations are derived for temperature stabilization. | |||
![]() |
Poster THPPC116 [0.584 MB] | ||
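Since the entry above centres on PID parameter settings, here is a minimal discrete PID loop regulating a toy load temperature. The plant model, gains and temperatures are illustrative assumptions, not the refrigerator studied in the paper.

```python
# Minimal discrete PID sketch for load-temperature regulation (toy model).

class PID:
    def __init__(self, kp: float, ki: float, kd: float, dt: float):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint: float, measurement: float) -> float:
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

if __name__ == "__main__":
    pid = PID(kp=2.0, ki=0.5, kd=0.0, dt=1.0)
    temperature = 6.0      # K, toy initial load temperature
    setpoint = 4.5         # K, desired operating temperature
    for _ in range(30):
        u = pid.step(setpoint, temperature)
        # Toy first-order plant: negative u means extra cooling,
        # while a constant heat load pushes the temperature up.
        temperature += (0.1 * u + 0.05) * pid.dt
    print(f"final temperature ≈ {temperature:.2f} K")
```

Raising the integral gain too far in this toy loop reproduces exactly the kind of temperature oscillation the abstract warns about.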
THPPC121 | Feedbacks and Automation at the Free Electron Laser in Hamburg (FLASH) | feedback, electron, controls, laser | 1345 |
|
|||
For many years a set of historically grown Matlab scripts and tools has been used to stabilize the transverse and longitudinal properties of the electron bunches at FLASH. Though this Matlab-based approach comes in handy when commissioning or developing tools for certain operational procedures, it has turned out to be quite tedious to maintain in the long run, as it often lacks stability and performance, e.g. in feedback procedures. To overcome these shortcomings, a server-based C++ solution in the DOOCS* framework has been realized at FLASH. Using the graphical UI designer jddd**, a generic version of the longitudinal feedback has been implemented and quickly put into standard operation. The design uses sets of monitors and actuators plus their coupling, which can easily be adapted to operation requirements. The daily routine operation of this server-based feedback implementation has proven to offer a robust, well maintainable and flexible solution to the common problem of automation and control for complex machines such as FLASH, and it will be well suited to the needs of the European XFEL.
* see e.g. http://doocs.desy.de ** see e.g. http://jddd.desy.de |
|||
![]() |
Poster THPPC121 [9.473 MB] | ||
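The "monitors and actuators plus their coupling" idea above can be illustrated with a small response-matrix feedback step: the actuator correction is the (pseudo-)inverse of the coupling matrix applied to the monitor errors, scaled by a gain. The matrix values, gain and the toy plant are assumptions, not the DOOCS server implementation.

```python
# Minimal sketch of a generic monitor/actuator feedback with a coupling matrix.
import numpy as np

def feedback_step(readings, setpoints, response, gain=0.5):
    """One correction: actuators += -gain * pinv(R) @ (readings - setpoints)."""
    error = np.asarray(readings) - np.asarray(setpoints)
    return -gain * np.linalg.pinv(response) @ error

if __name__ == "__main__":
    # Toy system: 2 monitors (e.g. arrival time, compression) coupled to
    # 2 actuators (e.g. RF amplitude, RF phase).
    R = np.array([[1.0, 0.2],
                  [0.1, 0.8]])          # d(monitor)/d(actuator)
    actuators = np.zeros(2)
    target = np.array([0.0, 0.0])
    disturbance = np.array([0.4, -0.3])
    monitors = disturbance.copy()       # measured deviation from target
    for _ in range(5):
        actuators += feedback_step(monitors, target, R)
        monitors = R @ actuators + disturbance   # toy plant response
    print("residual error:", monitors)
```

Swapping in a different response matrix is all it takes to retarget such a loop, which is the flexibility the abstract attributes to the generic design.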
THPPC135 | From Pulse to Continuous Wave Operation of TESLA Cryomodules – LLRF System Software Modification and Development | feedback, LLRF, cavity, controls | 1366 |
|
|||
Funding: We acknowledge the support from National Science Center (Poland) grant no 5593/B/T02/2010/39 Higher efficiency of TESLA-based free electron lasers (FLASH, XFEL), by means of an increased number of photon bursts, can be achieved using continuous wave (CW) operation. In order to maintain constant beam acceleration in the superconducting cavities and keep the cost of the transition from short-pulse to CW operation reasonably low, substantial modifications of the accelerator subsystems are necessary. Changes to the RF power source, cryogenic systems, electron beam source, etc. also have to be accompanied by adjustments to the LLRF system. In this paper the challenges for the well-established pulsed-mode LLRF system are discussed for the CW and long-pulse (LP) scenarios. The firmware and software modifications needed to maintain high-performance regulation of the cavity field parameters (for 1 Hz CW and LP cryomodule operation) are described. Results from studies of vector-sum amplitude and phase control for high loaded-Q settings of the resonators (Ql~1.5e7) are shown. The proposed modifications, implemented in the VME- and MicroTCA (MTCA.4)-based LLRF systems, have been tested at the CryoModule Test Bench (CMTB) at DESY. Results from these tests, together with the achieved regulation performance data, are also presented and discussed. |
|||
![]() |
Poster THPPC135 [1.310 MB] | ||
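The regulated quantity mentioned above, the vector sum of the cavity fields, is simply the complex sum of the individual probe signals; the sketch below shows a proportional corrector driving that sum to a setpoint. The gains, probe values and the additive drive model are illustrative assumptions, not the LLRF firmware.

```python
# Minimal sketch of vector-sum amplitude/phase regulation (toy model).
import cmath

def vector_sum(probes: list[complex]) -> complex:
    """Complex sum of the individual cavity field probes."""
    return sum(probes)

def correction(vs: complex, setpoint: complex, gain: float = 0.3) -> complex:
    """Proportional corrector acting on the common drive signal."""
    return gain * (setpoint - vs)

if __name__ == "__main__":
    setpoint = cmath.rect(8.0, 0.0)                 # desired vector sum (a.u.)
    probes = [cmath.rect(1.9, 0.02), cmath.rect(2.1, -0.01),
              cmath.rect(2.0, 0.03), cmath.rect(1.95, 0.0)]
    drive = 0j
    for _ in range(20):
        vs = vector_sum(probes) + drive             # toy model: drive adds linearly
        drive += correction(vs, setpoint)
    print("amplitude error:", abs(vector_sum(probes) + drive) - abs(setpoint))
```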
THPPC138 | A System for Automatic Locking of Resonators of Linac at IUAC | controls, linac, interface, feedback | 1376 |
|
|||
The superconducting LINAC booster of IUAC consists of five cryostats housing a total of 27 Nb quarter wave resonators (QWRs). The QWRs are phase locked against the master oscillator at a frequency of 97 MHz. Cavity frequency tuning is done by a helium gas based slow tuner. Presently, the frequency tuning and cavity phase locking are done from the control room consoles. To automate the LINAC operation, an automatic phase locking system has been implemented. The slow tuner gas pressure is automatically controlled in response to the frequency error of the cavity. The fast tuner is automatically triggered into phase lock when the frequency is within the lock window. This system has been implemented successfully on a few cavities and is now being installed on the remaining cavities of the LINAC booster.
[1] S. Ghosh et al., Phys. Rev. ST Accel. Beams 12, 040101 (2009). |
|||
![]() |
Poster THPPC138 [4.654 MB] | ||
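A minimal sketch of the two-stage logic described above: the slow tuner removes the bulk frequency error, and the fast tuner is engaged once the error falls inside the lock window. The lock window, the slow-tuner gain and the pure-exponential convergence are assumptions chosen only to make the control flow visible.

```python
# Minimal sketch of slow-tuner pull-in followed by fast-tuner phase lock.

LOCK_WINDOW_HZ = 20.0        # assumed fast-tuner capture range
SLOW_GAIN = 0.1              # assumed fraction of the error removed per step

def auto_lock(freq_error_hz: float, max_steps: int = 200) -> bool:
    """Drive the slow tuner until the fast tuner can phase-lock the cavity."""
    for step in range(max_steps):
        if abs(freq_error_hz) <= LOCK_WINDOW_HZ:
            print(f"step {step}: error {freq_error_hz:+.1f} Hz -> fast tuner locked")
            return True
        # Slow tuner: adjust the helium gas pressure, pulling the cavity
        # frequency toward the 97 MHz reference.
        freq_error_hz -= SLOW_GAIN * freq_error_hz
    return False

if __name__ == "__main__":
    auto_lock(freq_error_hz=-450.0)   # start 450 Hz below the reference
```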
THPPC141 | Automatic Alignment Upgrade of Advanced Radiographic Capability for the National Ignition Facility | alignment, target, laser, vacuum | 1384 |
|
|||
Funding: This work was performed under the auspices of the Lawrence Livermore National Security, LLC, (LLNS) under Contract No. DE-AC52-07NA27344. #LLNL-ABS-632633 For many experiments planned on the National Ignition Facility (NIF), high-energy x-ray backlighters are an important diagnostic. NIF will deploy a new Advanced Radiographic Capability (ARC) this year for generating these high-energy short pulses. The precision of the Automatic Alignment (AA) for ARC is an important element in the success of the enhancement. A key aspect of the ARC AA is the integration of the new alignment capabilities without disturbing the existing AA operations of NIF. Tight pointing tolerances are required: 5 micron precision to a 10 micron target. After main amplification the beams are shortened by up to 1,000x in time in the ARC compressor vessel and aimed at backlighter targets in the NIF target chamber. Alignment stability and verification of the compressor gratings are critical to ensuring the ARC pulses meet their experimental specifications. |
|||
![]() |
Poster THPPC141 [4.485 MB] | ||
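As a small illustration of the pointing requirement quoted above, the check below compares a measured beam centroid against the 5 micron tolerance. The centroid source and the flat Euclidean-distance criterion are assumptions; only the tolerance figure comes from the abstract.

```python
# Minimal sketch of a pointing-tolerance check for an alignment loop.
import math

POINTING_TOLERANCE_UM = 5.0   # required pointing precision (from the abstract)

def within_tolerance(beam_xy_um: tuple[float, float],
                     target_xy_um: tuple[float, float]) -> bool:
    """True if the beam centroid is within the pointing tolerance of the target."""
    dx = beam_xy_um[0] - target_xy_um[0]
    dy = beam_xy_um[1] - target_xy_um[1]
    return math.hypot(dx, dy) <= POINTING_TOLERANCE_UM

if __name__ == "__main__":
    print(within_tolerance((3.0, 2.0), (0.0, 0.0)))   # True: ~3.6 um offset
    print(within_tolerance((6.0, 4.0), (0.0, 0.0)))   # False: ~7.2 um offset
```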
THCOBA05 | Control System Virtualization for the LHCb Online System | controls, network, experiment, hardware | 1419 |
|
|||
Virtualization provides many benefits, such as more efficient resource utilization, lower power consumption, better management through centralized control and higher availability. It can also save time for IT projects by eliminating dedicated hardware procurement and providing standard software configurations. In view of this, virtualization is very attractive for mission-critical projects like the experiment control system (ECS) of the large LHCb experiment at CERN. This paper describes our implementation of the control system infrastructure on general-purpose server hardware based on Linux and the RHEV enterprise clustering platform. The paper describes the methods used, our experiences and the knowledge acquired in evaluating the performance of the setup using test systems, as well as the constraints and limitations we encountered. We compare these with parameters measured under typical load conditions in a real production system. We also present the specific measures taken to guarantee optimal performance for the SCADA system (WinCC OA), which is the backbone of our control system. | |||
![]() |
Slides THCOBA05 [1.065 MB] | ||
THCOCB02 | The Role of Data Driven Models in Optimizing the Operation of the National Ignition Facility | laser, target, experiment, simulation | 1426 |
|
|||
Funding: * This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. #LLNL-ABS-633233 The Virtual Beam Line (VBL) code is essential to operate, maintain and validate the design of laser components to meet the performance goals at Lawrence Livermore National Laboratory’s National Ignition Facility (NIF). The NIF relies upon the Laser Performance Operations Model (LPOM), whose physics engine is VBL, to automate the setup of the laser by simulating the laser energetics of the as-built system. VBL simulates paraxial beam propagation, amplification, aberration and modulation, nonlinear self-focusing and focal behavior. Each of the NIF’s 192 beam lines is modeled in parallel on the LPOM Linux compute cluster during shot setup and validation. NIF achieved a record 1.8 MJ shot in July 2012, and LPOM (with VBL) was key to achieving the requested pulse shape. We will discuss examples of how the VBL physics code is used to model the laser phenomena and operate the NIF laser system. |
|||
![]() |
Slides THCOCB02 [4.589 MB] | ||
THCOCB03 | Fast Automatic Beam-based Alignment of the LHC Collimation System | alignment, collimation, monitoring, feedback | 1430 |
|
|||
Maximum beam cleaning efficiency and LHC machine protection are ensured when the collimator jaws are properly adjusted at well-defined distances from the circulating beams. The required settings for different locations around the 27 km long LHC rings are determined through beam-based collimator alignment, which uses feedback from the Beam Loss Monitoring (BLM) system. After the first experience with beam, a systematic automation of the alignment procedure was performed. This paper gives an overview of the algorithms developed to speed up the alignment and reduce human errors. The experience accumulated in four years of operation, from 2010 to 2013, is reviewed. | |||
![]() |
Slides THCOCB03 [13.293 MB] | ||
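The basic beam-based step described above can be sketched as "move the jaw inward until the BLM signal spikes, then record the position as the beam edge". The BLM response model, step size and threshold below are illustrative assumptions, not the LHC algorithm or its safety interlocks.

```python
# Minimal sketch of one beam-based collimator alignment step (toy BLM model).

def blm_signal(jaw_position_mm: float, beam_edge_mm: float) -> float:
    """Toy BLM response: sharp rise once the jaw scrapes the beam halo."""
    return 0.01 if jaw_position_mm > beam_edge_mm else 5.0

def align_jaw(start_mm: float, beam_edge_mm: float,
              step_mm: float = 0.02, threshold: float = 1.0) -> float:
    """Step the jaw inward until the loss threshold is crossed; return that position."""
    position = start_mm
    while blm_signal(position, beam_edge_mm) < threshold:
        position -= step_mm            # move the jaw closer to the beam
    return position

if __name__ == "__main__":
    edge = align_jaw(start_mm=8.0, beam_edge_mm=6.53)
    print(f"beam edge found at {edge:.2f} mm")
```

Automating this loop, with sensible step sizes and loss thresholds, is what turns a slow manual scan into the fast procedure the paper reports.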
THCOCB04 | Using an Expert System for Accelerators Tuning and Automation of Operating Failure Checks | TANGO, controls, database, monitoring | 1434 |
|
|||
Today at SOLEIL, abnormal operating conditions cost significant human resources: many manual checks have to be performed on several different tools interacting with different service layers of the control system (archiving system, device drivers, etc.) before normal accelerator operation can be recovered. These manual checks are also systematically redone before each beam shutdown and restart. All these repetitive tasks are very error prone and compromise the reliability of beam delivery to users. Owing to the increased process complexity and the multiple unpredictable factors of instability in the accelerator operating conditions, the existing diagnostic tools and manual check procedures have reached their limits in providing practical, reliable assistance to both operators and accelerator physicists. The aim of this paper is to show how the advanced expert system layer of the PASERELLE* framework, using the CDMA API** to access in a uniform way all the underlying data sources provided by the control system, can be used to assist the operators in detecting and diagnosing abnormal conditions, thus providing safeguards against unexpected accelerator operating conditions.
*http://www.isencia.be/services/passerelle **https://code.google.com/p/cdma/ |
|||
![]() |
Slides THCOCB04 [1.636 MB] | ||
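The kind of automated check described above can be pictured as a set of rules evaluated against a snapshot of control-system readings. The signal names, limits and the plain-dictionary data source below are illustrative assumptions, not the PASERELLE or CDMA interfaces.

```python
# Minimal rule-based sketch of an automated abnormal-condition check.

RULES = {
    "vacuum_pressure_mbar": lambda v: v < 1e-8,
    "rf_cavity_voltage_MV": lambda v: 2.0 < v < 3.5,
    "beam_current_mA": lambda v: v > 0.0,
}

def check(readings: dict[str, float]) -> list[str]:
    """Return a diagnostic message for every rule that is violated."""
    return [f"abnormal {name}: {readings[name]}"
            for name, ok in RULES.items()
            if name in readings and not ok(readings[name])]

if __name__ == "__main__":
    snapshot = {"vacuum_pressure_mbar": 3e-7,
                "rf_cavity_voltage_MV": 2.8,
                "beam_current_mA": 0.0}
    for message in check(snapshot):
        print(message)   # in practice operators would be notified, not a console
```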
THCOCA04 | Upgrade of Event Timing System at SuperKEKB | timing, injection, linac, positron | 1453 |
|
|||
The timing system of the KEKB accelerator will be upgraded for the SuperKEKB project. One of the difficulties at SuperKEKB is the positron injection: it takes more than 40 ms, since the positron pulse must be stored in the newly constructed damping ring for at least 40 ms. The timing of the whole accelerator chain must be precisely synchronized over such a long period, while frequent injections still have to be managed; typically a beam pulse is delivered to one of the rings every 20 ms. In addition, the new system must be capable of real-time selection of the injection RF bucket, called "Bucket Selection" at KEKB, for equalizing the bunch current in the main rings. Bucket Selection will also be upgraded to synchronize the buckets of the damping ring with those of the main rings. This includes extending the maximum delay time to 2 ms and a pulse-by-pulse shift of the RF phase in the second half of the injection linac. We plan to upgrade the Event Timing System from a "2-layer type", which simply connects one generator and one receiver, to a "cascade type" in order to satisfy the new injection requirements. We report the basic design of the new timing system and recent studies of key elements of the Event Timing System instruments. | |||
![]() |
Slides THCOCA04 [1.559 MB] | ||
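The bucket selection mentioned above ultimately reduces to modular arithmetic: delay the injection trigger by the number of RF periods needed for the requested bucket to come around. The RF frequency, harmonic number and function below are illustrative assumptions, not the SuperKEKB timing hardware.

```python
# Minimal sketch of the modular arithmetic behind RF-bucket selection.

RF_FREQUENCY_HZ = 508.9e6      # assumed ring RF frequency
HARMONIC_NUMBER = 5120         # assumed number of RF buckets in the ring

def bucket_delay_seconds(current_bucket: int, target_bucket: int) -> float:
    """Trigger delay needed for the passing buckets to line up with the target."""
    buckets_to_wait = (target_bucket - current_bucket) % HARMONIC_NUMBER
    return buckets_to_wait / RF_FREQUENCY_HZ

if __name__ == "__main__":
    delay = bucket_delay_seconds(current_bucket=12, target_bucket=4000)
    print(f"trigger delay: {delay * 1e6:.3f} us")
```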
FRCOAAB07 | Operational Experience with the ALICE Detector Control System | detector, controls, experiment, status | 1485 |
|
|||
The first LHC run period, lasting four years, brought exciting physics results and new insight into the mysteries of matter. Among the key components in these achievements were the detectors, which provided unprecedented amounts of data of the highest quality. The control systems, responsible for their smooth and safe operation, played a key role in this success. The design of the ALICE Detector Control System (DCS) started more than 12 years ago. A high level of standardization and a pragmatic design led to a reliable and stable system, which allowed for efficient experiment operation. In this presentation we summarize the overall architectural principles of the system, the standardized components and the procedures. The original expectations and plans are compared with the final design. Focus is given to the operational procedures, which evolved with time. We explain how a single operator can control and protect a complex device like ALICE, with millions of readout channels and several thousand control devices and boards. We also explain what we learned during the first years of LHC operation and which improvements will be implemented to provide excellent DCS service in the coming years. | |||
![]() |
Slides FRCOAAB07 [7.856 MB] | ||