Paper | Title | Other Keywords | Page |
---|---|---|---|
MOCOAAB02 | Design and Status of the SuperKEKB Accelerator Control System | controls, network, timing, EPICS | 4 |
SuperKEKB is the upgrade of the KEKB asymmetric-energy e+e− collider for the B-factory experiment in Japan, designed to achieve a luminosity 40 times higher than the world record set by KEKB. The KEKB control system was based on EPICS at the equipment layer and scripting languages at the operation layer. The SuperKEKB control system retains these features, while additional technologies are introduced for successful operation at such a high luminosity. In the accelerator control network, 10GbE is introduced for wider-bandwidth data transfer, together with redundant configurations for reliability, and network security is enhanced. For the SuperKEKB construction, a wireless network has been installed in the beamline tunnel. The timing system requires a new configuration for the positron beams. To ensure stable operation, we have developed a faster-response beam abort system, interface modules to control thousands of magnet power supplies, and a monitoring system for the final-focusing superconducting magnets. We also introduce the EPICS embedded PLC, in which EPICS runs directly on a PLC CPU module. The design and status of the SuperKEKB accelerator control system are presented.
Slides MOCOAAB02 [5.930 MB]
MOCOAAB03 | The Spiral2 Control System Progress Towards the Commission Phase | controls, PLC, EPICS, database | 8 |
The commissioning of the Spiral2 Radioactive Ion Beam facility at Ganil will start soon, requiring the control system components to be delivered on time. Parts of the system have already been validated during preliminary tests performed with ion and deuteron beams at low energy. The control system development is a collaboration between the Ganil, CEA/IRFU and CNRS/IPHC laboratories, using appropriate tools and a common approach. Based on EPICS, the control system follows a classical architecture. At the lowest level, the Modbus/TCP protocol is used as a field bus. Equipment is handled by IOCs (soft or VME/VxWorks), with a standardized software interface between the IOCs and the client applications on top. This upper layer consists of standard EPICS tools and CSS/BOY user interfaces within the so-called CSSop Spiral2 context suited for operation, and, for machine tuning, high-level applications implemented as Java programs developed within a Spiral2 framework derived from Open XAL. Databases are used for equipment data and alarm archiving, to configure equipment, and to manage the machine lattice and beam settings. A global overview of the system is presented.
Slides MOCOAAB03 [3.205 MB]
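Modbus/TCP frames at this lowest layer are simple enough to build by hand. As an illustration (not Spiral2 code), the following Python sketch constructs a "read holding registers" (function 0x03) request frame of the kind an IOC driver sends to equipment over the field bus; the unit id, register address and count are arbitrary examples.

```python
import struct

def modbus_read_holding_registers(transaction_id: int, unit_id: int,
                                  start_addr: int, count: int) -> bytes:
    """Build a Modbus/TCP 'read holding registers' (function 0x03) request."""
    # PDU: function code, starting address, number of registers (big-endian)
    pdu = struct.pack(">BHH", 0x03, start_addr, count)
    # MBAP header: transaction id, protocol id (0 = Modbus),
    # length of the remaining bytes (unit id + PDU), unit id
    mbap = struct.pack(">HHHB", transaction_id, 0x0000, len(pdu) + 1, unit_id)
    return mbap + pdu

frame = modbus_read_holding_registers(1, 0x11, 0x006B, 3)
# 12-byte frame: 7-byte MBAP header followed by a 5-byte PDU
```

In practice a library such as pymodbus would handle framing; the sketch only shows what travels on the wire.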
MOCOAAB06 | MeerKAT Control and Monitoring - Design Concepts and Status | monitoring, hardware, controls, status | 19 |
Funding: National Research Foundation of South Africa
This presentation gives a status update of the MeerKAT Control & Monitoring subsystem, focusing on the development philosophy, design concepts, technologies and key design decisions. The presentation will be supplemented by a poster (if accepted) with a **live demonstration** of the current KAT-7 Control & Monitoring system. The vision for MeerKAT is to a) use Offset Gregorian antennas in a radio telescope array combined with optimized receiver technology in order to achieve superior imaging and maximum sensitivity, b) be the most sensitive instrument in the world in L-band, c) be an instrument that will be considered the benchmark for performance and reliability by the scientific community at large, and d) be a true precursor for the SKA that will be integrated into the SKA-mid dish array. The 7-dish engineering prototype (KAT-7) for MeerKAT is already producing exciting science and is operated 24×7. The first MeerKAT antenna will be on site by the end of this year, and the first two receptors will be fully integrated and ready for testing by April 2014. By December 2016 the hardware for all 64 receptors will be installed and accepted, and 32 antennas will be fully commissioned.
Slides MOCOAAB06 [1.680 MB]
MOCOBAB02 | Integration of PLC with EPICS IOC for SuperKEKB Control System | controls, PLC, LLRF, EPICS | 31 |
Recently, more and more PLCs have been adopted for various front-end controls of accelerators. It is common to connect the PLCs to the higher-level control layers over the network. As a result, the control logic becomes dispersed over separate layers: one part implemented as ladder programs on the PLCs, the other implemented in higher-level languages on front-end computers. The EPICS-based SuperKEKB accelerator control system, however, takes a different approach by using FA-M3 PLCs with a special CPU module (F3RP61), which runs Linux and functions as an IOC. This consolidation of PLC and IOC enables higher-level applications to reach every front-end PLC directly via Channel Access. In addition, most of the control logic can be implemented by the IOC core program and/or the EPICS sequencer, making the system more homogeneous and resulting in easier development and maintenance of applications. PLC-based IOCs of this type will be used to monitor and control many subsystems of SuperKEKB, such as the personnel protection system, vacuum system, RF system and magnet power supplies. This paper describes the applications of the PLC-based IOCs to the SuperKEKB accelerator control system.
Slides MOCOBAB02 [1.850 MB]
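A minimal sketch of the kind of interlock logic that, with this approach, moves out of PLC ladder programs and into the IOC or EPICS sequencer layer. The process-variable names and the threshold are hypothetical, and the logic is written as a plain Python function rather than real SNL sequencer code:

```python
def vacuum_valve_command(pressure_pa: float, pump_on: bool,
                         threshold_pa: float = 1e-3) -> str:
    """Decide the gate-valve setpoint from process-variable readings.

    In a PLC-based IOC this decision could live in the EPICS sequencer,
    reading e.g. 'VAC:CC1:PRESSURE' and writing 'VAC:GV1:CMD'
    (record names hypothetical).
    """
    if not pump_on:          # no pump running: always keep the valve closed
        return "CLOSE"
    return "OPEN" if pressure_pa < threshold_pa else "CLOSE"

print(vacuum_valve_command(5e-4, True))   # pressure good, pump running -> OPEN
```

The point of the consolidation is that such logic lives in one place and is reachable from any Channel Access client, instead of being split between ladder code and a front-end computer.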
MOCOBAB03 | The Laser MegaJoule ICCS Integration Platform | software, controls, hardware, site | 35 |
The French Atomic Energy Commission (CEA) has just built an integration platform outside the LMJ facility in order to assemble the various components of the Integrated Control Command System (ICCS). The talk gives an overview of this integration platform and of the qualification strategy, which is based on the use of equipment simulators, and focuses on several tools that have been developed to integrate each subsystem and qualify the overall behavior of the ICCS. Each delivery kit of a subsystem component (virtual machine, WIM, PLC, etc.) is scanned by antivirus software and stored in the delivery database. A specific tool allows the deployment of the delivery kits on the hardware platform (a copy of the LMJ hardware platform). Then the TMW (Testing Management Workstation) performs automatic tests by coordinating the behavior of the equipment simulators and of the operator. The test configurations, test scenarios and test results are stored in another database. Test results are analyzed, and every malfunction is recorded in an event database which is used to perform reliability calculations for each component. The qualified software is delivered to the LMJ to perform the commissioning of each bundle.
Slides MOCOBAB03 [2.025 MB]
MOCOBAB06 | Integrated Monitoring and Control Specification Environment | controls, target, framework, EPICS | 47 |
Monitoring and control solutions for large one-off systems are typically built in silos using multiple tools and technologies. Functionality such as data-processing logic, alarm handling, UIs and device drivers is implemented by manually writing configuration code in isolation, and the cross-dependencies are maintained by hand. The correctness of the created specification is checked using manually written test cases. Non-functional requirements, such as reliability, performance, availability and reusability, are addressed in an ad hoc manner. This hinders the evolution of systems with long lifetimes. For ITER, we developed an integrated specification environment and a set of tools to generate configurations for the target execution platforms, along with the required glue to realize the entire M&C solution. The SKA is an opportunity to enhance this framework further to include checking of functional and engineering properties of the solution based on domain best practices. The framework comprises three levels: domain-specific, problem-specific and target-technology-specific. We discuss how this approach can address three major facets of complexity: scale, diversity and evolution.
MOMIB01 | Sirius Control System: Conceptual Design | controls, network, EPICS, operation | 51 |
Sirius is a new 3 GeV synchrotron light source currently being designed at the Brazilian Synchrotron Light Laboratory (LNLS) in Campinas, Brazil. The Control System will be heavily distributed and digitally connected to all equipment in order to avoid analog signal cables. A three-layer control system is planned. The equipment layer uses RS485 serial networks, running at 10 Mbps with a very light proprietary protocol, in order to achieve good performance. The middle layer, interconnecting these serial networks, is based on single-board computers, PCs and commercial switches. The operation layer will be composed of PCs running the Control System's client programs. A special topology will be used for the fast orbit feedback, with a 10 Gbps switch between the beam position monitor electronics and a workstation that calculates the corrections for the orbit correctors. At the moment, EPICS is the best candidate to manage the Control System.
Slides MOMIB01 [0.268 MB]
Poster MOMIB01 [0.580 MB]
MOMIB02 | Development Status of the TPS Control System | controls, EPICS, power-supply, Ethernet | 54 |
EPICS was chosen as the control system framework for the new 3 GeV synchrotron light source project (Taiwan Photon Source, TPS). The standard hardware and software components have been defined, and the various IOCs (Input Output Controllers) are gradually being implemented as the control platforms for the various subsystems. The subsystem control interfaces include an event-based timing system, Ethernet-based power supply control, corrector power supply control, PLC-based pulsed-magnet power supply control and machine protection system, insertion device motion control, and various diagnostics. Development of the high-level and low-level software infrastructure is ongoing. Installation and integration tests are proceeding. Progress will be summarized in the paper.
Slides MOMIB02 [0.235 MB]
Poster MOMIB02 [5.072 MB]
MOMIB03 | Control Systems Issues and Planning for eRHIC | controls, electron, hardware, feedback | 58 |
Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy.
The next generation of high-energy nuclear physics experiments involves colliding high-energy electrons with ions, as well as colliding polarized electrons with polarized protons and polarized helions (helium-3 nuclei). The eRHIC project proposes to add an electron accelerator to the RHIC complex, allowing all of these types of experiments by combining existing capabilities with high-energy, high-intensity electrons. In this paper we describe the control system requirements for eRHIC, the technical challenges, and our vision of a control system ten years into the future. What we build over the next ten years will be what is used for the ten years following the start of operations. This presents opportunities to take advantage of changes in technology, but also many challenges in building reliable and stable controls and integrating them with the existing RHIC systems. It is also an opportunity to leverage state-of-the-art innovations and to build collaborations with both industry and other institutions, allowing us to build the best and most cost-effective set of systems for eRHIC to achieve its goals.
Slides MOMIB03 [0.633 MB]
Poster MOMIB03 [2.682 MB]
MOMIB05 | BeagleBone for Embedded Control System Applications | controls, embedded, power-supply, laser | 62 |
Funding: Work supported in part by the Italian Ministry of University and Research under grants FIRB-RBAP045JF2 and FIRB-RBAP06AWK3.
The control system architecture of modern experimental physics facilities needs to meet the requirements of the ever-increasing complexity of the controlled devices. Whenever feasible, moving from a distributed architecture based on powerful but complex and expensive computers to a more pervasive approach based on simple and cheap embedded systems allows the knowledge to be shifted close to the devices. The BeagleBone computer, capable of running a full-featured operating system such as GNU/Linux, integrates effectively into existing control systems and allows complex control functions to be executed with the required flexibility. The paper discusses the choice of the BeagleBone as the embedded platform and reports some examples of control applications recently developed for the ELETTRA and FERMI@Elettra light sources.
Slides MOMIB05 [0.436 MB]
Poster MOMIB05 [1.259 MB]
MOMIB07 | An OPC-UA Based Architecture for the Control of the ESPRESSO Spectrograph @ VLT | controls, software, PLC, hardware | 70 |
ESPRESSO is a fiber-fed, cross-dispersed, high-resolution echelle spectrograph for the ESO Very Large Telescope (VLT). The instrument is designed to combine incoherently the light coming from up to four VLT Unit Telescopes. To ensure maximum stability the spectrograph is placed in a thermal enclosure and a vacuum vessel. Abandoning the VME-based technologies previously adopted for ESO VLT instruments, the ESPRESSO control electronics has been developed around a new concept based on industrial COTS PLCs. This choice brings a number of benefits, such as lower cost and reduced space and power requirements. Moreover, it makes it possible to structure the whole control electronics in a distributed way, using building blocks available commercially off the shelf and thereby minimizing the need for custom solutions. The main PLC brand adopted is Beckhoff, whose product lineup satisfies the requirements set by the instrument control functions. OPC UA is the chosen communication protocol between the PLCs and the instrument control software, which is based on the VLT Control Software package.
Slides MOMIB07 [0.419 MB]
Poster MOMIB07 [32.149 MB]
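OPC UA clients address PLC variables through node identifiers such as "ns=4;s=MAIN.fTemperature" (a TwinCAT-style symbol path; the namespace index and symbol name here are assumptions, not ESPRESSO's actual address space). A minimal Python sketch of parsing such a string NodeId, the addressing step a client library (e.g. python-opcua's `client.get_node()`) performs internally:

```python
def parse_nodeid(nodeid: str) -> tuple:
    """Split an OPC UA string NodeId like 'ns=4;s=MAIN.fTemperature'
    into (namespace index, string identifier)."""
    fields = dict(part.split("=", 1) for part in nodeid.split(";"))
    return int(fields["ns"]), fields["s"]

ns, ident = parse_nodeid("ns=4;s=MAIN.fTemperature")
```

A real client would then connect to the PLC's endpoint (typically `opc.tcp://<host>:4840`) and read or subscribe to the resolved node.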
MOMIB08 | Continuous Integration Using LabVIEW, SVN and Hudson | LabView, software, framework, Linux | 74 |
In the accelerator domain there is a need to integrate industrial devices and to create control and monitoring applications in an easy yet structured way. The LabVIEW-RADE framework provides the method and tools to implement these requirements, and also provides the essential integration of these applications into the CERN controls infrastructure. Building and distributing these core libraries for multiple platforms (e.g. Windows, Linux and Mac) and for different versions of LabVIEW is a time-consuming task that consists of repetitive and cumbersome work. All libraries have to be tested, commissioned and validated, and preparing one package for each variation used to take almost a week. With the introduction of Subversion version control (SVN) and the Hudson continuous integration server (HCI), the process is now fully automated and a new distribution for all platforms is available within the hour. In this paper we evaluate the pros and cons of using continuous integration, the time it took to get up and running, and the added benefits. We conclude with an evaluation of the framework and indicate new areas of improvement and extension.
Slides MOMIB08 [2.990 MB]
Poster MOMIB08 [6.363 MB]
MOMIB09 | ZIO: The Ultimate Linux I/O Framework | framework, controls, Linux, software | 77 |
ZIO (with Z standing for "The Ultimate I/O" Framework) was developed for CERN with the specific needs of physics labs in mind, which are poorly addressed in the mainstream Linux kernel. ZIO provides a framework for industrial, high-throughput, high-channel-count I/O device drivers (digitizers, function generators, timing devices such as TDCs) with performance, generality and scalability as design goals. Among its many features, it offers abstractions for: input and output channels, and channel sets; configurable trigger types; configurable buffer types; an interface via sysfs attributes and control and data device nodes; and a socket interface (PFZIO), which provides enormous flexibility and power for remote control. In this paper we discuss the design and implementation of ZIO, and describe representative cases of driver development for typical and exotic applications (the FMC ADC 100 Msps digitizer, the FMC TDC timestamp counter, and the FMC DEL fine delay).
Slides MOMIB09 [0.818 MB]
MOPPC014 | Diagnostic Use Case Examples for ITER Plant Instrumentation and Control | diagnostics, controls, hardware, operation | 85 |
ITER requires extensive diagnostics to meet the requirements for machine operation, protection, plasma control and physics studies. The realization of these systems is a major challenge, not only because of the harsh environment and the nuclear requirements, but also with respect to the plant system Instrumentation and Control (I&C) of all 45 diagnostic systems, since the procurement arrangements of the ITER diagnostics with the domestic agencies require a large number of high-performance fast controllers whose choice is based on guidelines and catalogues published by the ITER Organization (IO). The goal is to simplify acceptance testing and commissioning for both the domestic agencies and the IO. For this purpose, several diagnostic use case examples for plant system I&C documentation and implementation are provided by the IO to the domestic agencies. Their implementations cover major parts of the diagnostic plant system I&C, such as multi-channel high-performance data and image acquisition and data processing, as well as real-time and data archiving aspects. In this paper, the current status and achievements in the implementation and documentation of the use case examples are presented.
Poster MOPPC014 [2.068 MB]
MOPPC023 | Centralized Data Engineering for the Monitoring of the CERN Electrical Network | database, network, framework, controls | 107 |
The monitoring and control of the CERN electrical network involves a large variety of devices and software, ranging from acquisition devices to data concentrators, supervision systems and power network simulation tools. The main issue faced today in the engineering of such a large and heterogeneous system, which includes more than 20,000 devices and 200,000 tags, is that each device and software package has its own data engineering tool, while much of the configuration data has to be shared between two or more devices: the same data needs to be entered manually into the different tools, leading to duplication of effort and many inconsistencies. This paper presents a tool called ENSDM, aimed at centralizing all the data needed to engineer the monitoring and control infrastructure in a single database, from which the configuration of the various devices is extracted automatically. This approach allows the user to enter the information only once and guarantees the consistency of the data across the entire system. The paper focuses more specifically on the configuration of the remote terminal unit (RTU) devices, the global supervision system (SCADA) and the power network simulation tools.
Poster MOPPC023 [1.253 MB]
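The single-source idea can be sketched in a few lines: one central device table from which per-tool configurations (SCADA tags, RTU register maps) are generated, instead of being typed separately into each tool. The device names, fields and output formats below are invented for illustration and are not the actual ENSDM schema:

```python
# Central device table (a stand-in for the ENSDM database).
DEVICES = [
    {"name": "BRKR_866_01",  "address": 101, "kind": "breaker"},
    {"name": "TRAFO_866_02", "address": 102, "kind": "transformer"},
]

def scada_tags(devices):
    """Generate SCADA tag definitions from the central table."""
    return [f"ELEC/{d['name']}:status@{d['address']}" for d in devices]

def rtu_register_map(devices):
    """Generate the RTU polling map from the same central table."""
    return {d["address"]: d["kind"] for d in devices}
```

Because both outputs derive from the same table, adding or renaming a device in one place keeps the SCADA and RTU configurations consistent by construction.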
MOPPC025 | A Movement Control System for Roman Pots at the LHC | controls, collimation, FPGA, experiment | 115 |
This paper describes the movement control system for detector positioning based on the Roman Pot design used by the ATLAS-ALFA and TOTEM experiments at the LHC. A key system requirement is that LHC machine protection rules are obeyed: the position is surveyed every 20 ms with an accuracy of 15 μm. If the detectors move too close to the beam (outside limits set by the LHC operators), the LHC interlock system is triggered to dump the beam. LHC operators in the CERN Control Centre (CCC) drive the system via an HMI provided by a custom-built Java application, which uses the Common Middleware (CMW) to interact with the lower-level components. Low-level motorization control is executed using National Instruments PXI devices. The DIM protocol provides the software interface to the PXI layer, and a FESA gateway server provides a communication bridge between CMW and DIM. A cut-down laboratory version of the system was built to provide a platform for verifying the integrity of the full chain, with respect to user and machine protection requirements, and for validating new functionality before deployment to the LHC. The paper contains a detailed system description, test bench results and foreseen system improvements.
MOPPC029 | Internal Post Operation Check System for Kicker Magnet Current Waveforms Surveillance | controls, kicker, operation, timing | 131 |
A software framework called Internal Post Operation Check (IPOC) has been developed to acquire and analyse kicker magnet current waveforms. It was initially aimed at the surveillance of the LHC beam dumping system (LBDS) extraction and dilution kicker current waveforms, and was subsequently deployed on various other kicker systems at CERN. It has been implemented using the Front-End Software Architecture (FESA) framework and uses many CERN control services. It provides a common interface to various off-the-shelf digitiser cards, allowing a transparent integration of new digitiser types into the system. The waveform analysis algorithms are provided as external plug-in libraries, leaving their specific implementation to the kicker system experts. The general architecture of the IPOC system is presented in this paper, along with its integration into the control environment at CERN. Some application examples are provided, including the surveillance of the LBDS kicker currents and trigger synchronisation, and a closed-loop configuration to guarantee constant switching characteristics of high-voltage thyratron switches.
Poster MOPPC029 [0.435 MB]
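A typical plug-in analysis reduces to comparing each acquired waveform against a reference within a tolerance band. A toy Python version of such a check (the real algorithms are expert-written plug-in libraries; the tolerance and sample values below are invented):

```python
def waveform_within_tolerance(acquired, reference, tolerance):
    """Flag a kicker current waveform as good if every sample stays
    within +/- tolerance of the corresponding reference sample."""
    if len(acquired) != len(reference):
        return False
    return all(abs(a - r) <= tolerance for a, r in zip(acquired, reference))

reference = [0.0, 0.50, 1.00, 1.00, 1.00]   # idealized normalized kick pulse
good      = [0.0, 0.52, 0.99, 1.01, 1.00]   # small deviations only
bad       = [0.0, 0.70, 1.00, 1.00, 1.00]   # rise-time sample out of band
```

A failed check would be reported back through the control system so that operation can be inhibited before the next kick.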
MOPPC032 | OPC Unified Architecture within the Control System of the ATLAS Experiment | hardware, toolkit, software, controls | 143 |
The Detector Control System (DCS) of the ATLAS experiment at the LHC has been using the OPC DA standard as the interface for controlling various standard and custom hardware components and for their integration into the SCADA layer. Due to its platform restrictions and expiring long-term support, OPC DA will be replaced by the succeeding OPC Unified Architecture (UA) standard. OPC UA offers powerful object-oriented information modeling capabilities, platform independence and secure communication, and allows server embedding into custom electronics. We present an OPC UA server implementation for CANopen devices, which is used in the ATLAS DCS to control dedicated I/O boards distributed within and outside the detector. Architecture and server configuration aspects are detailed, and the server performance is evaluated and compared with that of the previous OPC DA server. Furthermore, based on the experience with this first server implementation, OPC UA is evaluated as a standard middleware solution for future use in the ATLAS DCS and beyond.
Poster MOPPC032 [2.923 MB]
MOPPC034 | Control System Hardware Upgrade | controls, hardware, power-supply, software | 151 |
The Paul Scherrer Institute builds, runs and maintains several particle accelerators. The proton accelerator HIPA, the oldest facility, was until a few years ago mostly equipped with CAMAC components. In several phases CAMAC was replaced by VME hardware, involving about 60 VME crates with 500 cards controlling a few hundred power supplies, motors, and digital as well as analog input/output channels. To control old analog and new digital power supplies with the same new VME components, an interface, the so-called Multi-IO, had to be developed. In addition, several other interfaces, for example to accommodate different connectors, had to be built. The hardware upgrade is explained through a few examples.
Poster MOPPC034 [0.151 MB]
MOPPC035 | Re-integration and Consolidation of the Detector Control System for the Compact Muon Solenoid Electromagnetic Calorimeter | software, hardware, controls, database | 154 |
Funding: Swiss National Science Foundation (SNSF)
The current shutdown of the Large Hadron Collider (LHC), following three successful years of physics data-taking, provides an opportunity for major upgrades to be performed on the Detector Control System (DCS) of the Electromagnetic Calorimeter (ECAL) of the Compact Muon Solenoid (CMS) experiment. The upgrades involve changes to both hardware and software, with particular emphasis on taking advantage of more powerful servers and updating third-party software to the latest supported versions. The considerable increase in available processing power enables a reduction from fifteen servers to three or four. To host the control system on fewer machines and to ensure that previously independent software components can run side by side without incompatibilities, significant changes in the software and databases were required. Additional work was undertaken to modernise and concentrate the I/O interfaces. The challenges of preparing and validating the hardware and software upgrades are described, along with the experience of migrating to this newly consolidated DCS.
Poster MOPPC035 [2.811 MB]
MOPPC038 | Rapid Software Prototyping into Large Scale Controls Systems | software, controls, hardware, laser | 166 |
Funding: This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. #LLNL-ABS-632892
The programmable spatial shaper (PSS) within the National Ignition Facility (NIF) reduces the energy on isolated optic flaws in order to lower optics maintenance costs. This is accomplished using a closed-loop system that determines the optimal liquid-crystal-based spatial light pattern for beam shaping and the placement of variable transmission blockers. A stand-alone prototype was developed and successfully run in a lab environment, as well as on a single quad of NIF lasers following a temporary hardware reconfiguration required to support the test. Several challenges exist in directly integrating the C-based PSS engine, written by an independent team, into the Integrated Computer Control System (ICCS) for a proof of concept on all 48 NIF laser quads. ICCS is a large-scale, data-driven distributed control system written primarily in Java, using CORBA to interact with more than 60,000 control points. The project plan and software design needed to specifically address the engine interface specification, configuration management, a reversion plan for the existing 0% transmission blocker capability, and a multi-phase integration and demonstration schedule.
Poster MOPPC038 [2.410 MB]
MOPPC039 | Hardware Interface Independent Serial Communication (IISC) | software, hardware, controls, factory | 169 |
Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy.
The communication framework for the in-house controls system in the Collider-Accelerator Department at BNL depends on a variety of hardware interfaces and protocols, including RS232, GPIB, USB and Ethernet, to name a few. IISC is a client software library which can be used to initiate, conduct and terminate data exchange sessions with devices over the network. It acts as a layer of abstraction, allowing developers to establish communication with these devices without having to be concerned about the particulars of the interfaces and protocols involved. Details of the implementation and a performance analysis are presented.
Poster MOPPC039 [1.247 MB]
MOPPC053 | A Safety System for Experimental Magnets Based on CompactRIO | status, controls, hardware, experiment | 210 |
This paper describes the development of a new safety system for experimental magnets using National Instruments CompactRIO devices. The design of the custom Magnet Safety System (MSS) for the large LHC experimental magnets began in 1998, and it was first installed and commissioned in 2002. Some of its components, like the isolation amplifier or the ALTERA reconfigurable Field-Programmable Gate Array (FPGA), are no longer available on the market. A review of the system showed that it can be modernized and simplified by replacing the Hard-wired Logic Module (HLM) with a CompactRIO device. This industrial unit is a reconfigurable embedded system containing a processor running a real-time operating system (RTOS), an FPGA, and interchangeable industrial I/O modules. A prototype system, called MSS2, has been built and successfully tested using a test bench based on a PXI crate. Two systems are currently being assembled for two experimental magnets at CERN: the COMPASS solenoid and the M1 magnet at the SPS beam line. This paper contains a detailed description of MSS2, the test bench, and results from a first implementation and operation with real magnets.
Poster MOPPC053 [0.543 MB]
MOPPC054 | Application of Virtualization to CERN Access and Safety Systems | hardware, software, controls, network | 214 |
Access and safety systems are by nature heterogeneous: different kinds of hardware and software, commercial and home-grown, are integrated to form a working system. This implies many different application services, for which separate physical servers are allocated to keep the various subsystems isolated. Each such application server requires special expertise to install and manage. Furthermore, physical hardware is relatively expensive and presents a single point of failure to any of the subsystems, unless designed to include often complex redundancy protocols. We present the Virtual Safety System Infrastructure (VSSI) project, whose aim is to use modern virtualization techniques to abstract application servers from the actual hardware. The virtual servers run on robust and redundant standard hardware, where snapshotting and backing up of virtual machines can be carried out to maximize availability. Uniform maintenance procedures are applicable to all virtual machines at the hypervisor level, which helps to standardize maintenance tasks. This approach has been applied to the servers of the CERN PS and LHC access systems as well as to the CERN Safety Alarm Monitoring System (CSAM).
Poster MOPPC054 [1.222 MB]
MOPPC056 | The Detector Safety System of NA62 Experiment | experiment, detector, status, controls | 222 |
The aim of the NA62 experiment is the study of the rare decay K+ → π+νν̄ at the CERN SPS. The Detector Safety System (DSS), developed at CERN, is responsible for assuring the protection of the experiment's equipment. The DSS requires a high degree of availability and reliability. It is composed of a front-end and a back-end part, the front-end being based on a National Instruments cRIO system, to which the safety-critical part is delegated. The cRIO front-end is capable of running autonomously and of automatically taking predefined protective actions whenever required. It is supervised and configured by the standard CERN PVSS SCADA system. This DSS can easily adapt to the evolving requirements of the experiment during the construction, commissioning and exploitation phases. The NA62 DSS is being installed and was partially commissioned during the NA62 Technical Run in autumn 2012, when components from almost all the detectors, as well as the trigger and data acquisition systems, were successfully tested. The paper contains a detailed description of this innovative and high-performing solution, which is demonstrated to be a good alternative to the LHC systems based on redundant PLCs.
Poster MOPPC056 [0.613 MB]
MOPPC057 | Data Management and Tools for the Access to the Radiological Areas at CERN | controls, database, radiation, operation | 226 |
|
|||
As part of the refurbishment of the PS Personnel Protection System, the radioprotection (RP) buffer zones & equipment have been incorporated into the design of the new access points, providing an integrated access concept to the radiation controlled areas of the PS complex. The integration of the RP and access control equipment has been very challenging due to the lack of space in many of the zones. Although it was successfully carried out, our experience from the commissioning of the first installed access points shows that the integration should also include the software tools and procedures. This paper presents an inventory of all the tools and databases currently used (*) to ensure access to the CERN radiological areas according to CERN's safety and radioprotection procedures. We summarize the problems and limitations of each tool as well as of the whole process, and propose a number of improvements for the different kinds of users, including the changes required in each of the tools. The aim is to optimize the access process and the operation & maintenance of the related tools by rationalizing and better integrating them.
(*) Access Distribution and Management, Safety Information Registration, Works Coordination, Access Control, Operational Dosimeter, Traceability of Radioactive Equipment, Safety Information Panel. |
|||
![]() |
Poster MOPPC057 [1.955 MB] | ||
MOPPC058 | Design, Development and Implementation of a Dependable Interlocking Prototype for the ITER Superconducting Magnet Powering System | software, plasma, PLC, controls | 230 |
|
|||
Based on the experience with an operational interlock system for the superconducting magnets of the LHC, CERN has developed a prototype for the ITER magnet central interlock system in collaboration with ITER. A total energy of more than 50 gigajoules is stored in the magnet coils of the ITER Tokamak. Upon detection of a quench or other critical powering failures, the central interlock system must initiate the extraction of the energy to protect the superconducting magnets and, depending on the situation, request plasma disruption mitigations to protect against mechanical forces induced between the magnet coils and the plasma. To fulfil these tasks with the required high level of dependability, the implemented interlock system is based on redundant PLC technology making use of hardwired interlock loops in 2-out-of-3 redundancy, providing the best balance between safety and availability. In order to allow for simple and unique connectivity of all client systems involved in the safety-critical protection functions, as well as for common remote diagnostics, a dedicated user interface box has been developed. | |||
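The 2-out-of-3 redundancy named in this abstract trades off safety against availability: a single failed channel can neither trip the system nor mask a genuine fault seen by the other two. A minimal sketch of the voting principle (illustrative only; the actual system implements it in hardwired interlock loops and redundant PLCs, and the function name is ours):

```python
def vote_2oo3(a: bool, b: bool, c: bool) -> bool:
    """Trip (return True) when at least two of three redundant
    channels demand a protective action."""
    return (a and b) or (a and c) or (b and c)

# One spurious channel (c) does not cause a false trip,
# while two genuine detections (a, b) still do.
trip = vote_2oo3(True, True, False)
```

The same majority expression works for any boolean demand signal, which is why it maps naturally onto hardwired loops.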
MOPPC059 | Refurbishing of the CERN PS Complex Personnel Protection System | controls, PLC, network, radiation | 234 |
|
|||
In 2010, the refurbishment of the Personnel Protection System of the CERN Proton Synchrotron complex primary beam areas started. This large-scale project was motivated by the obsolescence of the existing system and the objective of rationalizing the personnel protection systems across the CERN accelerators to meet the latest recommendations of the regulatory bodies of the host states. A new generation of access points providing biometric identification, authorization and co-activity clearance, reinforced passage check, and radiation protection related functionalities will allow access to the radiologically classified areas. Using a distributed fail-safe PLC architecture and a diversely redundant logic chain, the cascaded safety system guarantees personnel safety in the 17 machines of the PS complex by acting on the important safety elements of each zone and on the adjacent upstream ones. It covers radiological and activated-air hazards from circulating beams as well as laser and electrical hazards. This paper summarizes the functionalities provided, the new concepts introduced, and the functional safety methodology followed to deal with the renovation of this 50-year-old facility. | |||
![]() |
Poster MOPPC059 [2.874 MB] | ||
MOPPC061 | Achieving a Highly Configurable Personnel Protection System for Experimental Areas | PLC, radiation, status, controls | 238 |
|
|||
The personnel protection system of the secondary beam experimental areas at CERN manages the beam and access interlocking mechanism. Its aim is to guarantee the safety of the experimental area users against the hazards of beam radiation and laser light. The highly configurable, interconnected, and modular nature of those areas requires a very versatile system. In order to follow closely the operational changes and new experimental setups and to still keep the required level of safety, the system was designed with a set of matrices which can be quickly reconfigured. Through a common paradigm, based on industrial hardware components, this challenging implementation has been made for both the PS and SPS experimental halls, according to the IEC 61508 standard. The current system is based on a set of hypotheses formed during 25 years of operation. Conscious of the constant increase in complexity and the broadening risk spectrum of the present and future experiments, we propose a framework intended as a practical guide to structure the design of the experimental layouts based on risk evaluation, safety function prescriptions and field equipment capabilities. | |||
![]() |
Poster MOPPC061 [2.241 MB] | ||
MOPPC071 | Development of the Machine Protection System for FERMILAB'S ASTA Facility | controls, cryomodule, laser, FPGA | 262 |
|
|||
The Fermilab Advanced Superconducting Test Accelerator (ASTA) under development will be capable of delivering an electron beam with up to 3000 bunches per macro-pulse, a 5 Hz repetition rate and 1.5 GeV beam energy in the final phase. The completed machine will be capable of sustaining an average beam power of 72 kW at a bunch charge of 3.2 nC. A robust Machine Protection System (MPS), capable of interrupting the beam within a macro-pulse and interfacing well with new and existing control system infrastructure, is being developed to mitigate and analyze faults related to this relatively high damage potential. This paper will describe the component layers of the MPS system, including an FPGA-based Laser Pulse Controller, the Beam Loss Monitoring system design, and the controls and related work done to date. | |||
![]() |
Poster MOPPC071 [1.479 MB] | ||
MOPPC076 | Quantitative Fault Tree Analysis of the Beam Permit System Elements of Relativistic Heavy Ion Collider (RHIC) at BNL | operation, kicker, simulation, collider | 269 |
|
|||
Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The RHIC Beam Permit System (BPS) plays a key role in safeguarding against anomalies developing in the collider during a run. The BPS collects RHIC subsystem statuses to allow beam entry and its existence in the machine. The building blocks of the BPS are the Permit Module (PM) and the Abort Kicker Module (AKM), which incorporate various electronic boards based on the VME specification. This paper presents a quantitative Fault Tree Analysis (FTA) of the PM and AKM, yielding the hazard rates of the three top failures capable of causing significant downtime of the machine. The FTA helps trace a top failure of a module down to a component-level failure (such as an IC or resistor). The fault trees are constructed for all module variants and are probabilistically evaluated using an analytical solution approach. The component failure rates are calculated using manufacturer datasheets and MIL-HDBK-217F. The apportionment of failure modes for components is calculated using FMD-97. The aim of this work is to understand the importance of individual components of the RHIC BPS for its reliable operation, and to evaluate their impact on the operation of the BPS. |
|||
![]() |
Poster MOPPC076 [0.626 MB] | ||
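The quantitative evaluation described in MOPPC076 reduces each fault tree to combinations of component failure probabilities through AND/OR gates. A minimal sketch, assuming independent failures; the probabilities below are made up for illustration (the paper derives real rates from manufacturer datasheets and MIL-HDBK-217F):

```python
def p_and(probs):
    """AND gate: the gate event occurs only if every input fails
    (independent inputs)."""
    out = 1.0
    for p in probs:
        out *= p
    return out

def p_or(probs):
    """OR gate: the gate event occurs if any input fails
    (independent inputs): 1 - P(all survive)."""
    survive = 1.0
    for p in probs:
        survive *= (1.0 - p)
    return 1.0 - survive

# Illustrative (invented) per-mission failure probabilities:
# a board fails if any of its components fails ...
board = p_or([1e-4, 2e-4, 5e-5])
# ... and a redundant top event requires both boards to fail.
top_event = p_and([board, board])
```

Chaining these two gate functions bottom-up over the tree yields the top-event hazard rate analytically, without simulation.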
MOPPC079 | CODAC Core System, the ITER Software Distribution for I&C | software, controls, EPICS, network | 281 |
|
|||
In order to support the adoption of the ITER standards for the Instrumentation & Control (I&C) and to prepare for the integration of the plant systems I&C developed by many distributed suppliers, the ITER Organization is providing the I&C developers with a software distribution named CODAC Core System. This software has been released as incremental versions since 2010, starting from preliminary releases and with stable versions since 2012. It includes the operating system, the EPICS control framework and the tools required to develop and test the software for the controllers, central servers and operator terminals. Some components have been adopted from the EPICS community and adapted to the ITER needs, in collaboration with the other users. This is the case for the CODAC services for operation, such as operator HMI, alarms or archives. Other components have been developed specifically for the ITER project. This applies to the Self-Description Data configuration tools. This paper describes the current version (4.0) of the software as released in February 2013 with details on the components and on the process for its development, distribution and support. | |||
![]() |
Poster MOPPC079 [1.744 MB] | ||
MOPPC082 | Automated Verification Environment for TwinCAT PLC Programs | PLC, hardware, simulation, undulator | 288 |
|
|||
The European XFEL will have three undulator systems, SASE1, SASE2, and SASE3, to produce extremely brilliant, ultra-short pulses of x-rays with wavelengths down to 0.1 nm. The undulator gap is adjustable in order to vary the photon beam energy. The corresponding motion control is implemented with industrial PCs running Beckhoff TwinCAT Programmable Logic Controllers (PLCs). So far, the functionality of the PLC programs has been verified at system level with the final hardware. This is a time-consuming manual task, and it may also damage the hardware in case of severe program failures. To improve the verification process of PLC programs, a test environment with simulated hardware has been set up. It uses a virtual machine to run the PLC program together with a verification program that simulates the behavior of the hardware. Test execution and result checking are automated with the help of scripts, which communicate with the verification program to stimulate the PLC program. Thus, functional verification of PLC programs is reduced to running a set of scripts, without the need to connect to real hardware and without manual effort. | |||
![]() |
Poster MOPPC082 [0.226 MB] | ||
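The simulated-hardware pattern in MOPPC082 (scripted stimulus against a behavioural model instead of real hardware) can be illustrated with a toy harness. The class and function names are ours, and the real system drives a TwinCAT PLC in a virtual machine rather than Python objects:

```python
class SimulatedGapDrive:
    """Stand-in for the undulator gap hardware: an idealised
    behavioural model that tracks a setpoint and reports
    'in position' (no dynamics, no faults)."""
    def __init__(self):
        self.position = 0.0

    def move_to(self, setpoint):
        # Ideal actuator: reaches the setpoint immediately.
        self.position = setpoint

    def in_position(self, setpoint, tol=0.01):
        return abs(self.position - setpoint) <= tol

def run_gap_test(drive, setpoints):
    """Scripted stimulus/response check replacing a manual
    test session: move to each setpoint, verify arrival."""
    results = []
    for sp in setpoints:
        drive.move_to(sp)
        results.append(drive.in_position(sp))
    return all(results)
```

Because the "hardware" is a model, a failing PLC program can be exercised aggressively with no risk of damage, which is the point made in the abstract.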
MOPPC092 | Commissioning the MedAustron Accelerator with ProShell | controls, ion, framework, timing | 314 |
|
|||
MedAustron is a synchrotron-based centre for light ion therapy under construction in Austria. The accelerator and its control system entered the on-site commissioning phase in January 2013. This contribution presents the current status of the accelerator operation and commissioning procedure framework, ProShell. It is used to model measurement procedures for commissioning and operation with Petri nets. Beam diagnostics device adapters are implemented in C#. To illustrate its use for beam commissioning, the procedures currently in use are presented, including their integration with existing devices such as the ion source, power converters, slits, wire scanners and profile grid monitors. The beam spectrum procedure measures the distribution of particles generated by the ion source. The phase space distribution procedure performs emittance measurements in the beam transfer lines. The trajectory steering procedure measures the beam position in each part of the machine and aids in correcting the beam positions by integrating MAD-X optics calculations. Additional procedures and (beam diagnostic) devices are defined, implemented and integrated with ProShell on demand as commissioning progresses. | |||
![]() |
Poster MOPPC092 [2.896 MB] | ||
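ProShell (MOPPC092) models procedures as Petri nets: places hold tokens, and a transition fires when all its input places are marked. A minimal token-game sketch with an invented two-step measurement procedure (acquire, then analyse); the dictionary encoding is ours, not ProShell's:

```python
def enabled(marking, transition):
    """A transition may fire when every input place holds
    at least the required number of tokens."""
    return all(marking.get(p, 0) >= n for p, n in transition["in"].items())

def fire(marking, transition):
    """Consume input tokens, produce output tokens;
    returns the new marking without mutating the old one."""
    m = dict(marking)
    for p, n in transition["in"].items():
        m[p] -= n
    for p, n in transition["out"].items():
        m[p] = m.get(p, 0) + n
    return m

# Toy procedure: ready --acquire--> data --analyse--> done
t_acquire = {"in": {"ready": 1}, "out": {"data": 1}}
t_analyse = {"in": {"data": 1}, "out": {"done": 1}}

m = {"ready": 1}
for t in (t_acquire, t_analyse):
    if enabled(m, t):
        m = fire(m, t)
```

The enable/fire rule is what lets such a framework sequence device actions while forbidding steps whose prerequisites (tokens) are missing.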
MOPPC096 | Design and Implementation Aspects of the Control System at FHI FEL | controls, FEL, cavity, EPICS | 324 |
|
|||
A new mid-infrared FEL has been commissioned at the Fritz-Haber-Institut in Berlin. It will be used for spectroscopic investigations of molecules, clusters, nanoparticles and surfaces. The oscillator FEL is operated with 15 - 50 MeV electrons from a normal-conducting S-band linac equipped with a gridded thermionic gun and a chicane for controlled bunch compression. Construction of the facility building with the accelerator vault began in April 2010. First lasing was observed on February 15th, 2012.* The EPICS software framework was chosen to build the control system for this facility. The industrial utility control system is integrated using BACnet/IP. Graphical operator and user interfaces are based on the Control System Studio package. The EPICS channel archiver, an electronic logbook, a web-based monitoring tool, and a gateway complete the installation. This paper presents design and implementation aspects of the control system, its capabilities, and lessons learned during local and remote commissioning.
* W. Schöllkopf et al., FIRST LASING OF THE IR FEL AT THE FRITZ-HABER-INSTITUT, BERLIN, Conference FEL12 |
|||
![]() |
Poster MOPPC096 [10.433 MB] | ||
MOPPC098 | The EPICS-based Accelerator Control System of the S-DALINAC | EPICS, controls, network, hardware | 332 |
|
|||
Funding: Supported by DFG through CRC 634. The S-DALINAC (Superconducting Darmstadt Linear Accelerator) is an electron accelerator for energies from 3 MeV up to 130 MeV. It supplies beams of either spin-polarized or unpolarized electrons for experiments in the field of nuclear structure physics and related areas of fundamental research. The migration of the Accelerator Control System to an EPICS-based system started three years ago and has essentially been done in parallel with regular operation. While it has not been finished yet, it already pervades all the different aspects of the control system. The hardware is interfaced by EPICS Input/Output Controllers. User interfaces are designed with Control System Studio (CSS) and BOY (Best Operator Interface Yet). Latest activities are aimed at the completion of the migration of the beamline devices to EPICS. Furthermore, higher-level aspects can now be approached more intensely. This includes the introduction of efficient alarm-handling capabilities as well as making use of interconnections between formerly separated parts of the system. This contribution will outline the architecture of the S-DALINAC's Accelerator Control System and report on the latest achievements in detail. |
|||
![]() |
Poster MOPPC098 [26.010 MB] | ||
MOPPC099 | The ANKA Control System: On a Path to the Future | controls, hardware, EPICS, Ethernet | 336 |
|
|||
The machine control system of the synchrotron radiation source ANKA at KIT (Karlsruhe Institute of Technology) is migrating from dedicated I/O microcontroller boards, which utilise the LonWorks field bus and are visualised with the ACS CORBA-based control system, to Ethernet TCP/IP devices with an EPICS server layer and visualisation by Control System Studio (CSS). This migration is driven by the need to replace ageing hardware and to move away from the outdated microcontrollers' embedded LonWorks bus. Approximately 500 physical devices, such as power supplies, vacuum pumps, etc., will need to be replaced (or have their I/O hardware changed) and be integrated into the new EPICS/CSS control system. In this paper we report on the technology choices and the justifications for those choices, the progress of the migration, and how such a task can be achieved in a transparent way with the machine remaining fully operational for users. We also report on the benefits reaped from using EPICS, CSS and BEAST alarming. | |||
![]() |
Poster MOPPC099 [0.152 MB] | ||
MOPPC100 | SKA Monitoring and Control Progress Status | controls, operation, monitoring, site | 340 |
|
|||
The Monitoring and Control system for the SKA radio telescope is now moving from the conceptual design to the system requirements and design phase, with the formation of a consortium geared towards delivering the Telescope Manager (TM) work package. Recent program decisions regarding hosting of the telescope across two sites, Australia and South Africa, have brought new challenges from the TM design perspective. These include a strategy to leverage the individual capabilities of the autonomous telescopes, and also integrating the existing precursor telescopes (ASKAP and MeerKAT), with their heterogeneous technologies and approaches, into the SKA. A key design goal, from the viewpoint of minimizing development and lifecycle costs, is to have a uniform architectural approach across the telescopes and to maximize standardization of software and instrumentation across the systems, despite potential variations in system hardware and procurement arrangements among the participating countries. This paper discusses some of these challenges and the mitigation approaches that the consortium intends to work on, along with an update on the current status and progress of the overall TM work. | |||
MOPPC101 | The Control Architecture of Large Scientific Facilities: ITER and LHC lessons for IFMIF | controls, neutron, network, EPICS | 344 |
|
|||
The development of an intense source of neutrons with the spectrum of DT fusion reactions is indispensable to qualify suitable materials for the First Wall of the nuclear vessel in fusion power plants. The FW, an overlap of different layers, is essential in future reactors; it will convert the 14 MeV neutrons to thermal energy and generate T to feed the DT reactions. IFMIF will reproduce those irradiation conditions with two parallel 40 MeV CW deuteron linacs at 2x125 mA beam current, colliding on a 25 mm thick Li screen flowing at 15 m/s and producing a neutron flux of 10^18 n/m^2/s in a 500 cm^3 volume with a broad peak energy at 14 MeV. The design of the control architecture of a large scientific facility depends on the particularities of the processes in place and the volume of data generated, but it is also tied to project management issues. LHC and ITER are two complex facilities with ~10^6 process variables and different control system strategies, from the modular approach of CODAC to the more integrated implementation of the CERN Technical Network. This paper analyzes both solutions and extracts conclusions that shall be applied to the future control architecture of IFMIF. | |||
![]() |
Poster MOPPC101 [0.297 MB] | ||
MOPPC109 | Status of the MAX IV Laboratory Control System | controls, linac, storage-ring, TANGO | 366 |
|
|||
The MAX IV Laboratory is a new synchrotron light source being built in Lund, in southern Sweden. The whole accelerator complex consists of a 3 GeV, 300 m long full-energy linac, two storage rings of 1.5 GeV and 3 GeV, and a Short Pulse Facility (SPF) for pump-probe experiments with bunches around 100 fs long. First x-rays are expected to be delivered to users in 2015 for the SPF and 2016 for the storage rings. This paper describes the progress in the design of the control system for the accelerator and the different solutions adopted for data acquisition, synchronisation, networking, safety and other aspects related to the control system. | |||
![]() |
Poster MOPPC109 [0.522 MB] | ||
MOPPC110 | The Control System for the CO2 Cooling Plants for Physics Experiments | controls, detector, operation, software | 370 |
|
|||
CO2 cooling has become an interesting technology for current and future tracking particle detectors. A key advantage of using CO2 as a refrigerant is its high heat-transfer capability, allowing a significant material budget saving, which is a critical element in state-of-the-art detector technologies. Several CO2 cooling stations, with cooling power ranging from 100 W to several kW, have been developed at CERN to support detector testing for future LHC detector upgrades. Currently, two CO2 cooling plants for the ATLAS Pixel Insertable B-Layer and the Phase I Upgrade CMS Pixel detector are under construction. This paper describes the control system design and implementation using the UNICOS framework for the PLCs and SCADA. The control philosophy, safety and interlocking standards, user interfaces and additional features are presented. CO2 cooling is characterized by high operational stability and accurate evaporation temperature control over large distances. Implemented split-range PID controllers with dynamically calculated limiters, multi-level interlocking and new software tools like the CO2 online p-H diagram jointly enable the cooling to fulfill the key requirements of a reliable system. | |||
![]() |
Poster MOPPC110 [2.385 MB] | ||
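Split-range PID control, mentioned in MOPPC110, maps a single controller output onto two actuators acting over different halves of the range. An illustrative sketch (a clamped PI controller; the gains, the 0-100 % range and the 50 % split point are arbitrary choices for the example, not the plant's actual tuning):

```python
class PI:
    """Textbook PI controller with output clamping
    (derivative term omitted for brevity)."""
    def __init__(self, kp, ki, out_min=0.0, out_max=100.0):
        self.kp, self.ki = kp, ki
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        u = self.kp * error + self.ki * self.integral
        return max(self.out_min, min(self.out_max, u))

def split_range(u):
    """Map one 0-100 % controller output onto two actuators:
    below 50 % the heater acts, above 50 % the cooling valve
    acts, and at exactly 50 % both are off."""
    heater = (50.0 - u) / 50.0 * 100.0 if u < 50.0 else 0.0
    cooler = (u - 50.0) / 50.0 * 100.0 if u > 50.0 else 0.0
    return heater, cooler
```

A single control loop thus covers both heating and cooling demand without two competing controllers fighting each other around the setpoint.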
MOPPC116 | Evolution of Control System Standards on the Diamond Synchrotron Light Source | controls, EPICS, Linux, hardware | 381 |
|
|||
Control system standards for the Diamond synchrotron light source were initially developed in 2003. They were largely based on Linux, EPICS and VME and were applied fairly consistently across the three accelerators and the first twenty photon beamlines. With funding for further photon beamlines in 2011, the opportunity was taken to redefine the standards to be largely based on Linux, EPICS, PCs and Ethernet. The developments associated with this will be presented, together with solutions being developed for requirements that fall outside the standards. | |||
![]() |
Poster MOPPC116 [0.360 MB] | ||
MOPPC122 | EPICS Interface and Control of NSLS-II Residual Gas Analyzer System | controls, EPICS, vacuum, operation | 392 |
|
|||
Residual Gas Analyzers (RGAs) have been widely used in accelerator vacuum systems for monitoring and vacuum diagnostics. The National Synchrotron Light Source II (NSLS-II) vacuum system adopts the Hiden RC-100 RGA, which supports remote electronics, thus allowing real-time diagnostics during beam operation as well as data archiving and off-line analysis. This paper describes the interface and operation of these RGAs with the EPICS-based control system. | |||
![]() |
Poster MOPPC122 [1.004 MB] | ||
MOPPC123 | Extending WinCC OA for Use as Accelerator Control System Core | controls, ion, status, real-time | 395 |
|
|||
The accelerator control system for the MedAustron light-ion medical particle accelerator has been designed under the guidance of CERN in the scope of an EBG MedAustron/CERN collaboration agreement. The core is based on the SIMATIC WinCC OA SCADA tool. Its open API and modular architecture permitted CERN to extend the product with features that go beyond traditional supervisory control and that are vital for directly operating a particle accelerator. Several extensions have been introduced to make WinCC OA fit for accelerator control: (1) near real-time data visualization, (2) external application launch and monitoring, (3) accelerator settings snapshot and consistent restore, (4) generic panel navigation supporting role-based permission handling, (5) native integration with interactive 3D engineering visualization, (6) integration with National Instruments based front-end controllers. The major drawback identified is the lack of support for callbacks from C++ extensions. This prevents asynchronous functions, multithreaded implementations and soft real-time behaviour. We are therefore seeking support in the user community to trigger the implementation of this function. | |||
![]() |
Poster MOPPC123 [0.656 MB] | ||
MOPPC129 | MADOCA II Interface for LabVIEW | LabView, controls, framework, Windows | 410 |
|
|||
LabVIEW is widely used for experimental station control at SPring-8. LabVIEW is also partially used for accelerator control, while most software for SPring-8 accelerator and beamline control is built on the MADOCA control framework. As synchrotron radiation experiments advance, complex data exchange between the MADOCA and LabVIEW control systems is required, which had not been realized. We have developed the next generation of MADOCA, called MADOCA II, as reported at this ICALEPCS (T. Matsumoto et al.). We ported the MADOCA II framework to Windows and developed a MADOCA II interface for LabVIEW. Using the interface, variable-length data can be exchanged between MADOCA- and LabVIEW-based software. As a first application, we developed a readout system for an electron beam position monitor with NI's PCI-5922 digitizers. A client sends a message to a remote LabVIEW-based digitizer readout program via the MADOCA II middleware, and the readout system sends back waveform data to the client. We plan to apply the interface to various accelerator and synchrotron radiation experiment controls. | |||
MOPPC133 | Performance Improvement of KSTAR Networks for Long Distance Collaborations | network, experiment, site | 423 |
|
|||
KSTAR (Korea Superconducting Tokamak Advanced Research) has completed its 5th campaign. Every year it produces an enormous amount of data that needs to be forwarded to international collaborators shot by shot for run-time analysis. Analysis of one shot helps in deciding parameters for the next shot. Many shots are conducted in a day, so this communication needs to be very efficient. Moreover, the amount of KSTAR data and the number of international collaborators are increasing every year. With such large data volumes and collaborators spread all over the world, communicating at run-time is a challenge. To meet this challenge, we need efficient ways to transfer data. Therefore, in this paper, we will optimize the paths among the internal and external networks of KSTAR for efficient communication. We will also discuss transmission solutions for environment construction and evaluate performance for long-distance collaborations. | |||
![]() |
Poster MOPPC133 [1.582 MB] | ||
MOPPC139 | A Framework for Off-line Verification of Beam Instrumentation Systems at CERN | framework, database, software, instrumentation | 435 |
|
|||
Many beam instrumentation systems require checks to confirm their beam readiness, detect any deterioration in performance and identify physical problems or anomalies. Such tests have already been developed for several LHC instruments using the LHC sequencer, but the scope of this framework does not extend to all systems; notably, it is absent from the pre-LHC injector chain. Furthermore, the operator-centric nature of the LHC sequencer means that sequencer tasks are not accessible to the hardware and software experts who are required to execute similar tests on a regular basis. As a consequence, ad-hoc solutions involving code sharing and, in extreme cases, code duplication have evolved to satisfy the various use cases. In terms of long-term maintenance this is undesirable, due to the often short-term tenure of developers at CERN alongside the importance of the uninterrupted stability of CERN's accelerators. This paper will outline the first results of an investigation into the existing analysis software, and provide proposals for the future of such software. | |||
MOPPC146 | MATLAB Objects for EPICS Channel Access | EPICS, controls, status, operation | 453 |
|
|||
With the substantial dependence on MATLAB for application development at the SwissFEL Injector Test Facility, the requirement for a robust and extensive EPICS Channel Access (CA) interface became increasingly imperative. To this effect, a new MATLAB Executable (Mex) file has been developed around an in-house C++ CA interface library (CAFE), which serves to expose comprehensive CA functionality to within the MATLAB framework. Immediate benefits include support for all MATLAB data types, a rich set of synchronous and asynchronous methods, a further physics oriented abstraction layer that uses CA synchronous groups, and compilation on 64-bit architectures. An account of the mocha (Matlab Objects for CHannel Access) interface is presented. | |||
MOPPC149 | A Messaging-Based Data Access Layer for Client Applications | controls, data-acquisition, network, operation | 460 |
|
|||
Funding: US Department of Energy The Fermilab Accelerator Control system has recently integrated use of a publish/subscribe infrastructure as a means of communication between Java client applications and data acquisition middleware. This supersedes a previous implementation based on Java Remote Method Invocation (RMI). The RMI implementation had issues with network firewalls, misbehaving client applications affecting the middleware, lack of portability to other platforms, and cumbersome authentication. The new system uses the AMQP messaging protocol and RabbitMQ data brokers. This decouples the client and middleware, is more portable to other languages, and has proven to be much more reliable. A Java client library provides for single synchronous operations as well as periodic data subscriptions. This new system is now used by the general synoptic display manager application as well as a number of new custom applications. Also, a web service has been written that provides easy access to control system data from many languages. |
|||
![]() |
Poster MOPPC149 [4.654 MB] | ||
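The decoupling benefit described in MOPPC149 (publishers and subscribers know only topics, never each other) can be shown with a toy in-process broker. This is a conceptual sketch only: the real system speaks AMQP to RabbitMQ brokers from Java, and the topic name below is invented:

```python
from collections import defaultdict

class Broker:
    """Toy stand-in for an AMQP broker: routes messages by topic
    so publishers and subscribers never reference each other
    directly (no delivery guarantees, no network)."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        # A topic with no subscribers silently drops the message,
        # so a misbehaving client cannot affect the middleware.
        for cb in self.subscribers[topic]:
            cb(message)

broker = Broker()
readings = []
broker.subscribe("device.readback", readings.append)
broker.publish("device.readback", 3.14)
```

Because the broker owns the routing, swapping RMI for this pattern removes the direct client-to-middleware coupling that caused the firewall and stability problems mentioned in the abstract.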
MOPPC155 | NSLS II Middlelayer Services | lattice, database, controls, EPICS | 467 |
|
|||
Funding: Work supported under auspices of the U.S. Department of Energy under Contract No. DE-AC02-98CH10886 with Brookhaven Science Associates, LLC, and in part by the DOE Contract DE-AC02-76SF00515 A service-oriented architecture has been designed for the NSLS II project for its beam commissioning and daily operation. Middle-layer services are being actively developed, and some of them have been deployed into the NSLS II control network to support beam commissioning. The services are based mainly on two technologies: web services (RESTful) and EPICS V4. The services provide functions to take machine status snapshots, convert magnet settings between different unit systems, and serve lattice information and simulation results. This paper presents the latest status of service development at the NSLS II project and our future development plan. |
|||
![]() |
Poster MOPPC155 [2.079 MB] | ||
TUCOAAB03 | Approaching the Final Design of ITER Control System | controls, plasma, network, operation | 490 |
|
|||
The control system of ITER (CODAC) is subject to a final design review in early 2014, with a second final design review covering high-level applications scheduled for 2015. The system architecture has been established and all plant systems required for first plasma have been identified. Interfaces are being detailed, which is a key activity in preparing for integration. A built-to-print design of the network infrastructure covering the full site is in place and installation is expected to start next year. The common software deployed in the local plant systems as well as the central system, called CODAC Core System and based on EPICS, has reached maturity, providing most of the required functions. It is currently used by 55 organizations throughout the world involved in the development of plant systems and ITER controls. The first plant systems are expected to arrive on site in 2015, starting a five-year integration phase to prepare for first plasma operation. In this paper, we report on the progress made on the ITER control system over the last two years and outline the plans and strategies allowing us to integrate hundreds of plant systems procured in-kind by the seven ITER members. | |||
![]() |
Slides TUCOAAB03 [5.294 MB] | ||
TUCOBAB02 | The Mantid Project: Notes from an International Software Collaboration | framework, software, neutron, distributed | 502 |
|
|||
Funding: This project is a collaboration between SNS, ORNL and ISIS, RAL with expertise supplied by Tessella. These facilities are in turn funded by the US DoE and the UK STFC. The Mantid project was started by ISIS in 2007 to provide a framework to perform data reduction and analysis for neutron and muon data. The SNS and HFIR joined the Mantid project in 2009, adding event processing and other capabilities to the Mantid framework. The Mantid software is now supporting the data reduction needs of most of the instruments at ISIS, the SNS and some at HFIR, and is being evaluated by other facilities. The scope of data reduction and analysis challenges, together with the need to create a cross-platform solution, fuels the need for Mantid to be developed in collaboration between facilities. Mantid has from inception been an open source project, built to be flexible enough to be instrument and technique independent, and initially planned to support collaboration with other development teams. Through the collaboration with the SNS, development practices and tools have been further developed to support the distributed development team in this challenge. This talk will describe the building and structure of the collaboration, the stumbling blocks we have overcome, and the great steps we have made in building a solid collaboration between these facilities. Mantid project website: www.mantidproject.org ISIS: http://www.isis.stfc.ac.uk/ SNS & HFIR: http://neutrons.ornl.gov/ |
|||
![]() |
Slides TUCOBAB02 [1.280 MB] | ||
TUCOBAB04 | Evaluation of Issue Tracking and Project Management Tools for Use Across All CSIRO Radio Telescope Facilities | software, project-management, controls, operation | 509 |
|
|||
CSIRO's radio astronomy observatories are collectively known as the Australia Telescope National Facility (ATNF). The observatories include the 64-metre dish at Parkes, the Australia Telescope Compact Array (ATCA) in Narrabri, the Mopra 22-metre dish near Coonabarabran and the ASKAP telescope, located in Western Australia and in the early stages of commissioning. In January 2013 a new group named Software and Computing was formed. This group, part of the ATNF Operations Program, brings all the software development expertise under one umbrella and is responsible for the development and maintenance of the software for all ATNF facilities, from monitoring and control to science data processing and archiving. One of the first tasks of the new group is to start homogenising the way software development is done across all observatories. This paper presents the results of the evaluation of several issue tracking and project management tools, including Redmine and JIRA, to be used as software development management tools across all ATNF facilities. It also describes how these tools could potentially be used for non-software applications such as a fault reporting and tracking system. | |||
![]() |
Slides TUCOBAB04 [2.158 MB] | ||
TUCOBAB05 | A Rational Approach to Control System Development Projects That Incorporates Risk Management | controls, project-management, software, synchrotron | 513 |
|
|||
Over the past year CLS has migrated towards a project management approach based on the Project Management Institute (PMI) guidelines, as well as adopting an Enterprise Risk Management (ERM) program. Though these are broader organisational initiatives, they do impact how control system and data acquisition software activities are planned, executed and integrated into larger scale projects. Synchrotron beamline development and accelerator upgrade projects have their own special considerations that require adaptation of the more standard techniques. Our ERM processes contribute in two ways: (1) helping to identify and prioritise the projects that we should be undertaking, and (2) helping to identify risks that are internal to a project. These broader programs are leading us to revise and improve the processes we have in place for control and data acquisition system development and maintenance. This paper examines the approach we have adopted, our preliminary experience and our plans going forward. | |||
![]() |
Slides TUCOBAB05 [0.791 MB] | ||
TUMIB08 |
ITER Contribution to Control System Studio (CSS) Development Effort | controls, EPICS, framework, distributed | 540 |
|
|||
In 2010, Control System Studio (CSS) was chosen for CODAC, the central control system of ITER, as the development and runtime integrated environment for local control systems. It quickly became necessary to contribute to the CSS development effort: the CODAC team wants to be sure that the tools used by the seven ITER members all over the world continue to be available and to be improved. In order to integrate the main CSS components into its framework, the CODAC team first needed to adapt them to its standard platform based on 64-bit Linux and the PostgreSQL database. Then user feedback started to emerge, along with the need for an industrial symbol library to represent pump, valve or electrical breaker states on the operator interface, and the requirement to automatically send an email when a new alarm is raised. It also soon became important for the CODAC team to be able to publish its contributions quickly and to adapt its own infrastructure for that. This paper describes ITER's increasing contribution to the CSS development effort and the future plans to address factory and site acceptance tests of the local control systems. | |||
![]() |
Slides TUMIB08 [2.970 MB] | ||
![]() |
Poster TUMIB08 [0.959 MB] | ||
TUMIB09 | jddd: A Tool for Operators and Experts to Design Control System Panels | controls, GUI, EPICS, TANGO | 544 |
|
|||
jddd, a graphical tool for control system panel design, has been developed at DESY to allow machine operators and experts to design complex panels. Neither knowledge of a programming language nor compilation steps are required to generate highly dynamic panels with the jddd editor. After 5 years of development and implementing requirements for DESY-specific accelerator operations, jddd has become mature and is increasingly used at DESY. The focus meanwhile has shifted from pure feature development to new tasks such as archiving and managing a huge number of control panels, finding panel dependencies, automatic refactoring of panel names, bookkeeping and evaluation of panel usage, and collecting Java exception messages in an automatic manner. For this, technologies of the existing control system infrastructure such as Servlets, JMS, Lucene, SQL and SVN are used. The concepts and technologies to further improve the quality and robustness of the tool are presented in this paper. | |||
![]() |
Slides TUMIB09 [0.811 MB] | ||
![]() |
Poster TUMIB09 [1.331 MB] | ||
TUMIB10 | Performance Testing of EPICS User Interfaces - an Attempt to Compare the Performance of MEDM, EDM, CSS-BOY, and EPICS | hardware, Linux, software, EPICS | 547 |
|
|||
Funding: Work at the APS is supported by the U.S. Department of Energy, Office of Science, under Contract No. DE-AC02-06CH1135 Upgrading the display manager or graphical user interface at EPICS sites reliant on older display technologies, typically MEDM or EDM, requires attention not only to functionality but also to performance. For many sites, performance is not an issue: all display managers will update small numbers of process variables at rates exceeding the human ability to discern changes; but for certain applications typically found at larger sites, the ability to respond to update rates at sub-Hertz frequencies for thousands of process variables is a requirement. This paper describes a series of tests performed on both the older display managers, MEDM and EDM, and the newer display managers CSS-BOY, epicsQT, and CaQtDM, using modestly performing modern hardware. |
|||
![]() |
Slides TUMIB10 [0.486 MB] | ||
![]() |
Poster TUMIB10 [0.714 MB] | ||
TUPPC006 | Identifying Control Equipment | database, EPICS, controls, cryogenics | 562 |
|
|||
The cryogenic installations at DESY are widely spread over the DESY campus. Many new components have been and will be installed for the new European XFEL. Commissioning and testing takes a lot of time. Local tag labels help identify the components, but typing in the names is error-prone. Local bar-codes and/or DataMatrix codes can be used in conjunction with intelligent devices like smartphones to retrieve data directly from the control system. The developed application also shows information from the asset database, providing the asset properties of the individual hardware device including the remaining warranty. Last but not least, cables are equipped with bar-codes which help to identify the start and end points of the cable and the related physical signal. This paper will describe our experience with the mobile applications and the related background databases, which have already been operational for several years. | |||
![]() |
Poster TUPPC006 [0.398 MB] | ||
TUPPC014 | Development of SPring-8 Experimental Data Repository System for Management and Delivery of Experimental Data | experiment, data-management, database, controls | 577 |
|
|||
The SPring-8 experimental Data Repository system (SP8DR) is an online storage service built as one of the infrastructure services of SPring-8. SP8DR enables experimental users to obtain, on demand via the Internet, the experimental data produced at the SPring-8 beamlines. To ease later searching for required data-sets, the system stores experimental data together with metadata such as experimental conditions, which is also useful in the post-experiment analysis process. As a framework for data management, we adopted DSpace, which is widely used in academic library information systems. We made two kinds of application software for registering experimental data simply and quickly. These applications record metadata to the SP8DR database, which holds the relations to the experimental data on the storage system. This data management design also allows the applications to serve high-bandwidth data acquisition systems. In this presentation, we report on the SPring-8 experimental Data Repository system that has begun operation at the SPring-8 beamlines. | |||
TUPPC023 | MeerKAT Poster and Demo Control and Monitoring Highlights | controls, monitoring, hardware, software | 594 |
|
|||
The 64-dish MeerKAT Karoo Array Telescope, currently under development, will become the largest and most sensitive radio telescope in the Southern Hemisphere until the Square Kilometre Array (SKA) is completed around 2024. MeerKAT will ultimately become an integral part of the SKA. The MeerKAT project will build on the techniques and experience acquired during the development of KAT-7, a 7-dish engineering prototype that has already proved its worth in practical use, operating 24/7 to deliver useful science data in the Karoo. Much of the MeerKAT development will centre on further refinement and scaling of the technology, using lessons learned from KAT-7. The poster session will present the proposed MeerKAT CAM (Control & Monitoring) architecture and highlight the solutions we are exploring for system monitoring, control and scheduling, data archiving and retrieval, and human interaction with the system. We will supplement the poster session with a live demonstration of the present KAT-7 CAM system. This will include a live video feed from the site as well as the use of the current GUI to generate and display the flow of events and data in a typical observation. | |||
![]() |
Poster TUPPC023 [0.471 MB] | ||
TUPPC027 | Quality Management of CERN Vacuum Controls | vacuum, controls, database, framework | 608 |
|
|||
The vacuum controls team is in charge of the monitoring, maintenance & consolidation of the control systems of all accelerators and detectors at CERN; this represents 6 000 instruments distributed along 128 km of vacuum chambers, often of heterogeneous architectures. In order to improve the efficiency of the services we provide to vacuum experts and to accelerator operators, a Quality Management Plan is being put into place. The first step was the gathering of old documents and the centralisation of information concerning architectures, procedures, equipment and settings. It was followed by the standardisation of the naming convention across different accelerators. The traceability of problems, requests, repairs, and other actions has also been put into place. It goes together with the effort to identify each individual device by a coded label and to register it in a central database. We are also working on ways to record, retrieve, process, and display the information across several linked repositories; with this in place, the quality and efficiency of our services can only improve, and the corresponding performance indicators will be available. | |||
![]() |
Poster TUPPC027 [98.542 MB] | ||
TUPPC030 | System Relation Management and Status Tracking for CERN Accelerator Systems | framework, software, database, hardware | 619 |
|
|||
The Large Hadron Collider (LHC) at CERN requires many systems to work in close interplay to allow reliable operation and at the same time ensure the correct functioning of the protection systems required when operating with large energies stored in the magnet system and particle beams. Examples of such systems are magnets, power converters and quench protection systems, as well as higher-level systems like Java applications or server processes. All these systems have numerous links (dependencies) of different kinds between each other. The knowledge about the different dependencies is available from different sources, like layout databases, Java imports, proprietary files, etc. Retrieving consistent information is difficult due to the lack of a unified way of retrieving the relevant data. This paper describes a new approach: establishing a central server instance which collects this information and provides it to the different clients used during commissioning and operation of the accelerator. Furthermore, it explains future visions for such a system, which include additional layers for distributing system information like operational status, issues or faults. | |||
![]() |
Poster TUPPC030 [4.175 MB] | ||
TUPPC031 | Proteus: FRIB Configuration Database | database, controls, cavity, operation | 623 |
|
|||
Distributed Information Services for Control Systems (DISCS) is a framework for developing high-level information systems for an experimental physics facility. It comprises a set of cooperating components, each with a database, an API, and several applications. One of DISCS' core components is the Configuration Module, responsible for the management of devices, their layout, measurements, alignment, calibration, signals, and inventory. In this paper we describe FRIB's implementation of the Configuration Module, called Proteus. We describe its architecture, database schema, web-based GUI, EPICS V4 and REST services, and Java/Python APIs. It has been developed as a product that other labs can download and use, and it can be integrated with other independent systems. We describe the challenges of implementing such a system, our technology choices, and the lessons learnt. | |||
![]() |
Poster TUPPC031 [1.248 MB] | ||
TUPPC032 | Database-backed Configuration Service | database, controls, operation, network | 627 |
|
|||
Keck Observatory is in the midst of a major telescope control system upgrade. This upgrade will include a new database-backed configuration service which will be used to manage the many aspects of the telescope that need to be configured (e.g. site parameters, control tuning, limit values) for its control software, and which will keep the configuration data persistent between IOC restarts. This paper will discuss this new configuration service, including its database schema, iocsh API, rich user interface and the many other provided features. The solution provides automatic time-stamping, a history of all database changes, the ability to snapshot and load different configurations, and triggers to manage the integrity of the data collections. Configuration is based on a simple concept of controllers, components and their associated mapping. The solution also provides a failsafe mode that allows client IOCs to function if there is a problem with the database server. The paper will also discuss why this new service is preferred over the file-based configuration tools that have been used at Keck up to now. | |||
![]() |
Poster TUPPC032 [0.849 MB] | ||
TUPPC037 | LabWeb - LNLS Beamlines Remote Operation System | experiment, operation, software, controls | 638 |
|
|||
Funding: Project funded by CENPES/PETROBRAS under contract number: 0050.0067267.11.9 LabWeb is software developed to allow remote operation of beamlines at LNLS, in a partnership with the Petrobras Nanotechnology Network. Being the only light source in Latin America, LNLS receives many researchers and students interested in conducting experiments and analyses at these beamlines. The implementation of LabWeb allows researchers to use the laboratory structure without leaving their research centers, reducing time and travel costs in a continental country like Brazil. In 2010, the project was in its first phase, in which tests were conducted using a beta version. Two years later, a new phase of the project began with the main goal of scaling up remote operation for LNLS users. In this new version, a partnership was established to use the open source platform Science Studio, developed and applied at the Canadian Light Source (CLS). Currently, the project includes remote operation of three beamlines at LNLS: SAXS1 (Small Angle X-Ray Scattering), XAFS1 (X-Ray Absorption and Fluorescence Spectroscopy) and XRD1 (X-Ray Diffraction). The expectation now is to extend this new way of performing experiments to all the other beamlines at LNLS. |
|||
![]() |
Poster TUPPC037 [1.613 MB] | ||
TUPPC039 | Development of a High-speed Diagnostics Package for the 0.2 J, 20 fs, 1 kHz Repetition Rate Laser at ELI Beamlines | laser, diagnostics, FPGA, controls | 646 |
|
|||
The ELI Beamlines facility aims to provide a selection of high repetition rate terawatt and petawatt femtosecond pulsed lasers, with applications in plasma research, particle acceleration, high-field physics and high intensity extended-UV/X-ray generation. The highest rate laser in the facility will be a 1 kHz femtosecond laser with pulse energy of 200 mJ. This high repetition rate presents unique challenges for the control system, particularly the diagnostics package. This is tasked with measuring key laser parameters such as pulse energy, pointing accuracy, and beam profile. Not only must this system be capable of relaying individual pulse measurements in real-time to the six experimental target chambers, it must also respond with microsecond latency to any aberrations indicating component damage or failure. We discuss the development and testing of a prototype near-field camera profiling system forming part of this diagnostics package consisting of a 1000 fps high resolution camera and FPGA-based beam profile and aberration detection system. | |||
![]() |
Poster TUPPC039 [2.244 MB] | ||
TUPPC042 | Prototype of a Simple ZeroMQ-Based RPC in Replacement of CORBA in NOMAD | CORBA, operation, GUI, controls | 654 |
|
|||
The NOMAD instrument control software of the Institut Laue-Langevin is a client-server application. The communication between the server and its clients is performed with CORBA, which now has major drawbacks such as the lack of support and slow or non-existent evolution. The present paper describes the implementation of the recent and promising ZeroMQ technology as a replacement for CORBA. We present the prototype of a simple RPC built on top of ZeroMQ and the high-performance Google Protocol Buffers serialization tool, to which we add a remote method dispatch layer. The final project will also provide an IDL compiler restricted to a subset of the language, so that only minor modifications to our existing IDL interfaces and class implementations will have to be made to replace the communication layer in NOMAD. | |||
![]() |
Poster TUPPC042 [1.637 MB] | ||
TUPPC044 | When Hardware and Software Work in Concert | controls, experiment, detector, operation | 661 |
|
|||
Funding: Partially funded by BMBF under the grants 05K10CKB and 05K10VKE. Integrating control and high-speed data processing is a fundamental requirement for operating a beamline efficiently and improving users' beam time experience. Implementing such control environments for data intensive applications at synchrotrons has been difficult because of vendor-specific device access protocols and distributed components. Although TANGO addresses the distributed nature of experiment instrumentation, standardized APIs that provide uniform device access, process control and data analysis are still missing. Concert is a Python-based framework for device control and messaging. It implements these programming interfaces and provides a simple but powerful user interface. Our system exploits the asynchronous nature of device accesses and performs low-latency on-line data analysis using GPU-based data processing. We will use Concert to adjust experimental conditions using on-line data analysis, e.g. during radiographic and tomographic experiments. Concert's process control mechanisms and the UFO processing framework* will allow us to control the process under study and the measuring procedure depending on image dynamics. * Vogelgesang, Chilingaryan, Rolo, Kopmann: “UFO: A Scalable GPU-based Image Processing Framework for On-line Monitoring” |
|||
![]() |
Poster TUPPC044 [4.318 MB] | ||
TUPPC048 | Adoption of the "PyFRID" Python Framework for Neutron Scattering Instruments | controls, framework, scattering, software | 677 |
|
|||
M. Drochner, L. Fleischhauer-Fuss, H. Kleines, D. Korolkov, M. Wagener, S. v. Waasen. To unify the user interfaces of the JCNS (Jülich Centre for Neutron Science) scattering instruments, we are adapting and extending the "PyFRID" framework. "PyFRID" is a high-level Python framework for instrument control. It provides a high level of abstraction, particularly through the use of aspect-oriented programming (AOP) techniques. Users can use a built-in command language or a web interface to control and monitor motors, sensors, detectors and other instrument components. The framework has been fully adopted at two instruments, and work is in progress to use it on more. | |||
TUPPC053 | New Control System for the SPES Off-line Laboratory at LNL-INFN using EPICS IOCs based on the Raspberry Pi | EPICS, controls, Ethernet, detector | 687 |
|
|||
SPES (Selective Production of Exotic Species) is an ISOL-type RIB facility of the LNL-INFN in Italy dedicated to the production of neutron-rich radioactive nuclei by uranium fission. At the LNL, over the last four years, an off-line laboratory has been developed in order to study the target front-end test bench. The instrumentation devices are controlled using EPICS. A new flexible, easy to adapt, low cost and open solution for this control system is being tested. It consists of EPICS IOCs developed at the LNL, based on the low cost Raspberry Pi computer board with custom-made expansion boards. The operating system is a modified version of Debian Linux running EPICS soft IOCs that communicate with the expansion board using home-made drivers. The expansion boards provide multi-channel 16-bit ADCs and DACs, digital inputs and outputs, and stepper motor drivers. The idea is to have a distributed control system using customized IOCs for controlling the instrumentation devices as well as reading the information from the detectors, using EPICS Channel Access as the communication protocol. This solution is very cost effective and easy to customize. | |||
![]() |
Poster TUPPC053 [2.629 MB] | ||
TUPPC054 | A PLC-Based System for the Control of an Educational Observatory | controls, PLC, instrumentation, operation | 691 |
|
|||
An educational project that aims to involve young students in astronomical observations has been developed over the last decade at the Basovizza branch station of the INAF-Astronomical Observatory of Trieste. The telescope used is a 14” reflector equipped with a robotic Paramount ME equatorial mount and placed in a non-automatic dome. The new control system under development is based on Beckhoff PLCs. It will mainly allow remote control of the three-phase synchronous motor of the dome, the switching of the whole instrumentation, and the parking of the telescope. Thanks to the data coming from the weather sensor, the PLC will be able to ensure the safety of the instruments. A web interface is used for the communication between the user and the instrumentation. In this paper a detailed description of the whole PLC-based control system architecture is presented. | |||
![]() |
Poster TUPPC054 [3.671 MB] | ||
TUPPC055 | Developing of the Pulse Motor Controller Electronics for Running under Weak Radiation Environment | radiation, controls, operation, optics | 695 |
|
|||
Hitz Hitachi Zosen has developed a new pulse motor controller. The controller, which handles two axes per unit, implements a high-performance processor, a pulse control device and peripheral interfaces. It offers simple extensibility and various interfaces at a low price. The controller is operated through Ethernet TCP/IP (or FL-net) and can control up to 16 axes. In addition, we want to run the motor controller inside an optics hutch exposed to weak radiation: if the controller can be placed in the optics hutch, the wiring becomes simple because it is closed within the hutch. We have therefore evaluated the controller electronics running under weak radiation. | |||
![]() |
Poster TUPPC055 [0.700 MB] | ||
TUPPC059 | EPICS Data Acquisition Device Support | EPICS, timing, software, detector | 707 |
|
|||
A large number of devices offer similar kinds of capabilities. For example, data acquisition devices all offer sampling at some rate. If each such device were to have a different interface, engineers using them would need to be familiar with each device specifically, inhibiting the transfer of know-how from one device to another and increasing the chance of engineering errors due to miscomprehension or incorrect assumptions. With the Nominal Device Model (NDM), we propose to standardize the EPICS interface of analog and digital input and output devices, and of image acquisition devices. The model describes an input/output device which can have digital or analog channels, where channels can be configured for output or input. Channels can be organized in groups that have common parameters. NDM is implemented as the EPICS Nominal Device Support library (NDS), which provides a C++ interface to developers of device-specific drivers. NDS itself inherits from the well-known asynPortDriver. NDS hides from the developer all the complexity of the communication with asynDriver and allows the developer to focus on the business logic of the device itself. | |||
![]() |
Poster TUPPC059 [0.371 MB] | ||
TUPPC061 | BL13-XALOC, MX experiments at Alba: Current Status and Ongoing Improvements | controls, TANGO, experiment, hardware | 714 |
|
|||
BL13-XALOC is the only Macromolecular Crystallography (MX) beamline at the 3-GeV ALBA synchrotron. The control system is based on Tango * and Sardana **, which provide a powerful python-based environment for building and executing user-defined macros, comprehensive access to the hardware, a standard Command Line Interface based on ipython, and a generic and customizable Graphical User Interface based on Taurus ***. Currently, the MX experiments are performed through panels that provide control of different beamline instrumentation. Users are able to collect diffraction data and solve crystal structures, and now it is time to improve the control system by combining the feedback from the users with the development of the second-stage features: grouping all the interfaces (i.e. sample viewing system, automatic sample changer, fluorescence scans, and data collections) in a high-level application and implementing new functionalities in order to provide higher-throughput experiments, with data collection strategies, automated data collections, and workflows. This article describes the current architecture of the XALOC control system, and the plan to implement the future improvements.
* http://www.tango-controls.org/ ** http://www.sardana-controls.org/ *** http://www.tango-controls.org/static/taurus/ |
|||
![]() |
Poster TUPPC061 [2.936 MB] | ||
TUPPC063 | Control and Monitoring of the Online Computer Farm for Offline Processing in LHCb | controls, monitoring, network, experiment | 721 |
|
|||
LHCb, one of the 4 experiments at the LHC accelerator at CERN, uses approximately 1500 PCs (averaging 12 cores each) for processing the High Level Trigger (HLT) during physics data taking. During periods when data acquisition is not required, most of these PCs are idle. In these periods it is possible to profit from the unused processing capacity to run offline jobs, such as Monte Carlo simulation. The LHCb offline computing environment is based on LHCbDIRAC (Distributed Infrastructure with Remote Agent Control). In LHCbDIRAC, job agents are started on Worker Nodes, pull waiting tasks from the central WMS (Workload Management System) and process them on the available resources. A Control System was developed which is able to launch, control and monitor the job agents for the offline data processing on the HLT Farm. This control system is based on the existing Online System Control infrastructure, the PVSS SCADA and the FSM toolkit. It has been extensively used, launching and monitoring more than 22,000 agents simultaneously, and more than 850,000 jobs have already been processed in the HLT Farm. This paper describes the deployment of and experience with the Control System in the LHCb experiment. | |||
![]() |
Poster TUPPC063 [2.430 MB] | ||
TUPPC069 | ZEBRA: a Flexible Solution for Controlling Scanning Experiments | FPGA, detector, EPICS, controls | 736 |
|
|||
This paper presents the ZEBRA product developed at Diamond Light Source. ZEBRA is a stand-alone event handling system with interfaces to multi-standard digital I/O signals (TTL, LVDS, PECL, NIM and Open Collector) and RS422 quadrature incremental encoder signals. Input events can be triggered by input signals, encoder position signals or repetitive time signals, and can be combined using logic gates in an FPGA to generate and output other events. The positions of all 4 encoders can be captured at the time of a given event and made available to the controlling system. All control and status is available through a serial protocol, so there is no dependency on a specific higher level control system. We have found it has applications on virtually all Diamond beamlines, from applications as simple as signal level shifting to, for example, using it for all continuous scanning experiments. The internal functionality is reconfigurable on the fly through the user interface and can be saved to static memory. It provides a flexible solution to interface different third party hardware (detectors and motion controllers) and to configure the required functionality as part of the experiment. | |||
![]() |
Poster TUPPC069 [2.909 MB] | ||
TUPPC070 | Detector Controls for the NOvA Experiment Using Acnet-in-a-Box | controls, detector, PLC, monitoring | 740 |
|
|||
In recent years, we have packaged the Fermilab accelerator control system, Acnet, so that other instances of it can be deployed independently of the Fermilab infrastructure. This encapsulated "Acnet-in-a-Box" is installed as the detector control system at the NOvA Far Detector. NOvA is a neutrino experiment using a beam of particles produced by the Fermilab accelerators. There are two NOvA detectors: a 330-ton "Near Detector" on the Fermilab campus and a 14000-ton "Far Detector" 735 km away. All key tiers and aspects of Acnet are available in the NOvA instantiation, including the central device database, Java Open Access Clients, Erlang front-ends, application consoles, synoptic displays, data logging, and state notifications. Acnet at NOvA is used for power-supply control, monitoring position and strain gauges, environmental control, PLC supervision, relay rack monitoring, and interacting with EPICS PVs instrumenting the detector's avalanche photo-diodes. We discuss the challenges of maintaining a control system in a remote location, synchronizing updates between the instances, and improvements made to Acnet as a result of our NOvA experience. | |||
![]() |
Poster TUPPC070 [0.876 MB] | ||
TUPPC076 | SNS Instrument Data Acquisition and Controls | controls, EPICS, neutron, data-acquisition | 755 |
|
|||
Funding: SNS is managed by UT-Battelle, LLC, under contract DE-AC05-00OR22725 for the U. S. Department of Energy. The data acquisition (DAQ) and control systems for the neutron beam line instruments at the Spallation Neutron Source (SNS) are undergoing upgrades addressing three critical areas: data throughput and data handling from DAQ to data analysis, instrument controls including user interface and experiment automation, and the low-level electronics for DAQ and timing. This paper will outline the status of the upgrades and will address some of the challenges in implementing fundamental upgrades to an operating facility concurrent with commissioning of existing beam lines and construction of new beam lines. |
|||
TUPPC078 | First EPICS/CSS Based Instrument Control and Acquisition System at ORNL | controls, EPICS, experiment, neutron | 763 |
|
|||
Funding: SNS is managed by UT-Battelle, LLC, under contract DE-AC05-00OR22725 for the U.S. Department of Energy The neutron imaging prototype beamline (CG-1D) at the Oak Ridge National Laboratory High Flux Isotope Reactor (HFIR) is used for many different applications necessitating a flexible and stable instrument control system. Beamline scientists expect a robust data acquisition system. They need a clear and concise user interface that allows them to both configure an experiment and to monitor an ongoing experiment run. Idle time between acquiring consecutive images must be minimized. To achieve these goals, we implement a system based upon EPICS, a newly developed CSS scan system, and CSS BOY. This paper presents the system architecture and possible future plans. |
|||
![]() |
Poster TUPPC078 [6.846 MB] | ||
TUPPC081 | IcePAP: An Advanced Motor Controller for Scientific Applications in Large User Facilities | controls, hardware, software, target | 766 |
|
|||
Synchrotron radiation facilities and in particular large hard X-ray sources such as the ESRF are equipped with thousands of motorized position actuators. Combining all the functional needs found in those facilities with the implications related to personnel resources, expertise and cost makes the choice of motor controllers a strategic matter. Most of the large facilities adopt strategies based on the use of off-the-shelf devices packaged using standard interfaces. As this approach implies severe compromises, the ESRF decided to address the development of IcePAP, a motor controller designed for applications in a scientific environment. It optimizes functionality, performance, ease of deployment, level of standardization and cost. This device is adopted as standard and is widely used at the beamlines and accelerators of ESRF and ALBA. This paper provides details on the architecture and technical characteristics of IcePAP as well as examples of how it implements advanced features. It also presents ongoing and foreseen improvements, and introduces the outline of an emerging collaboration aimed at further developing the system and making it available to other research labs. | |||
![]() |
Poster TUPPC081 [0.615 MB] | ||
TUPPC082 | DSP Design Using System Generator | FPGA, hardware, simulation, booster | 770 |
|
|||
When designing a real-time control system, fast data transfer between the different pieces of hardware must be guaranteed since synchronization and determinism have to be respected. One efficient solution to cope with these constraints is to embed the data collection, the signal processing and the driving of the acting devices in FPGAs. Although this solution imposes that the whole design be developed for an FPGA, in pure hardware, it is possible to open the part dedicated to the signal processing to non-HDL (Hardware Description Language) specialists; the choice has been made here to develop this part under System Generator, in Simulink. Another challenge in such system design is the integration of real-time models on already pre-configured hardware platforms. This paper describes with a few examples how to interface such hardware with HDL System Generator control system blocks. The advantages of Simulink for the simulation phase of the design as well as the possibility to introduce models dedicated to the tests are also presented. | |||
![]() |
Poster TUPPC082 [0.924 MB] | ||
TUPPC086 | Electronics Developments for High Speed Data Throughput and Processing | detector, FPGA, controls, timing | 778 |
|
|||
Funding: The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement No. 283745. The European XFEL DAQ system has to acquire and process data in short bursts every 100ms. Bursts last for 600us and contain a maximum of 2700 x-ray pulses with a repetition rate of 4.5MHz, which have to be captured and processed before the next burst starts. This time structure defines the boundary conditions for almost all diagnostic and detector related DAQ electronics required and currently being developed for the start of operation in fall 2015. Standards used in the electronics developments are: MicroTCA.4 and AdvancedTCA crates, use of FPGAs for data processing, transfer to backend systems via 10Gbps (SFP+) links, and feedback information transfer using 3.125Gbps (SFP) links. Electronics being developed in-house or in collaboration with external institutes and companies include: a Train Builder ATCA blade for assembling and processing data of large-area image detectors, a VETO MTCA.4 development for evaluating pulse information and distributing a trigger decision to detector front-end ASICs and FPGAs with low-latency, a MTCA.4 digitizer module, interface boards for timing and similar synchronization information, etc. |
|||
![]() |
Poster TUPPC086 [0.983 MB] | ||
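The burst parameters quoted in the TUPPC086 abstract are self-consistent, which a quick back-of-the-envelope calculation confirms (the variable names below are ours, not part of any XFEL software):

```python
# Back-of-the-envelope check of the European XFEL burst structure:
# 2700 pulses at a 4.5 MHz repetition rate must fit into the 600 us burst.

BURST_PERIOD_S = 100e-3      # a burst arrives every 100 ms
PULSE_RATE_HZ = 4.5e6        # intra-burst repetition rate
MAX_PULSES = 2700            # maximum pulses per burst

burst_length_s = MAX_PULSES / PULSE_RATE_HZ          # 2700 / 4.5e6 = 600e-6 s
processing_gap_s = BURST_PERIOD_S - burst_length_s   # idle time left for processing

print(f"burst length: {burst_length_s * 1e6:.0f} us")        # 600 us
print(f"processing window: {processing_gap_s * 1e3:.1f} ms") # 99.4 ms
```

The 99.4 ms gap between bursts is what allows the captured data to be processed before the next burst starts.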
TUPPC087 | High Level FPGA Programming Framework Based on Simulink | FPGA, framework, hardware, software | 782 |
|
|||
Funding: The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement No 283745. Modern diagnostic and detector related data acquisition and processing hardware are increasingly being implemented with Field Programmable Gate Array (FPGA) technology. The level of flexibility allows for simpler hardware solutions together with the ability to implement functions during the firmware programming phase. The technology is also becoming more relevant in data processing, allowing for reduction and filtering to be done at the hardware level together with implementation of low-latency feedback systems. However, this flexibility and these possibilities require a significant amount of design, programming, simulation and testing work, usually done by FPGA experts. A high-level FPGA programming framework is currently under development at the European XFEL in collaboration with Oxford University within the EU CRISP project. This framework allows people unfamiliar with FPGA programming to develop and simulate complete algorithms and programs within the MathWorks Simulink graphical tool with real FPGA precision. Modules within the framework allow for simple code reuse by compiling them into libraries, which can be deployed to other boards or FPGAs. |
|||
![]() |
Poster TUPPC087 [0.813 MB] | ||
TUPPC088 | Development of MicroTCA-based Image Processing System at SPring-8 | FPGA, controls, framework, Linux | 786 |
|
|||
In SPring-8, various CCD cameras have been utilized for electron beam diagnostics of accelerators and x-ray imaging experiments. PC-based image processing systems are mainly used for the CCD cameras with a Camera Link I/F. We have developed a new image processing system based on the MicroTCA platform, which has an advantage over PCs in robustness and scalability due to its hot-swappable modular architecture. In order to reduce development cost and time, the new system is built with COTS products including a user-configurable Spartan-6 AMC with an FMC slot and a Camera Link FMC. The Camera Link FPGA core is newly developed in compliance with the AXI4 open bus to enhance reusability. The MicroTCA system will first be applied to the upgrade of the two-dimensional synchrotron radiation interferometer[1] operating at the SPring-8 storage ring. The sizes and tilt angle of a transverse electron beam profile with an elliptical Gaussian distribution are extracted from an observed 2D interferogram. A dedicated processor AMC (PrAMC) that communicates with the primary PrAMC via the backplane is added for fast 2D-fitting calculation to achieve real-time beam profile monitoring during storage ring operation.
[1] "Two-dimensional visible synchrotron light interferometry for transverse beam-profile measurement at the SPring-8 storage ring", M.Masaki and S.Takano, J. Synchrotron Rad. 10, 295 (2003). |
|||
![]() |
Poster TUPPC088 [4.372 MB] | ||
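The beam sizes and tilt angle mentioned in the TUPPC088 abstract come from fitting an elliptical Gaussian to the interferogram. As a rough illustration only (not the actual SPring-8 fitting code, and using second moments of a synthetic intensity profile rather than a full 2D fit), the three parameters can be recovered like this:

```python
import math

# Synthetic elliptical-Gaussian beam profile: the quantities the monitor
# extracts (principal beam sizes and tilt angle) can be read off the
# second moments of the intensity distribution.

SIG_MAJOR, SIG_MINOR, TILT = 0.8, 0.3, math.radians(20.0)

def intensity(x, y):
    # rotate into the principal axes of the ellipse
    u = x * math.cos(TILT) + y * math.sin(TILT)
    v = -x * math.sin(TILT) + y * math.cos(TILT)
    return math.exp(-0.5 * ((u / SIG_MAJOR) ** 2 + (v / SIG_MINOR) ** 2))

# sample the profile on a grid and accumulate raw second moments
n, span = 201, 4.0
s = sxx = syy = sxy = 0.0
for i in range(n):
    for j in range(n):
        x = -span + 2 * span * i / (n - 1)
        y = -span + 2 * span * j / (n - 1)
        w = intensity(x, y)
        s += w
        sxx += w * x * x
        syy += w * y * y
        sxy += w * x * y

cxx, cyy, cxy = sxx / s, syy / s, sxy / s
tilt = 0.5 * math.atan2(2 * cxy, cxx - cyy)
# eigenvalues of the 2x2 covariance matrix give the principal beam sizes
tr, det = cxx + cyy, cxx * cyy - cxy * cxy
lam1 = tr / 2 + math.sqrt(tr * tr / 4 - det)
lam2 = tr / 2 - math.sqrt(tr * tr / 4 - det)
print(f"tilt {math.degrees(tilt):.1f} deg, "
      f"sizes {math.sqrt(lam1):.2f} / {math.sqrt(lam2):.2f}")
```

The recovered tilt and sizes match the values used to synthesize the profile (about 20 degrees and 0.8/0.3), which is the sanity check one would want before trusting a real fit.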
TUPPC089 | Upgrade of the Power Supply Interface Controller Module for SuperKEKB | power-supply, controls, operation, hardware | 790 |
|
|||
There were more than 2500 magnet power supplies for the KEKB storage rings and injection beam transport lines. For the remote control of such a large number of power supplies, we developed the Power Supply Interface Controller Module (PSICM), which is plugged into each power supply. It has a microprocessor, ARCNET interface, trigger signal input interface, and parallel interface to the power supply. The PSICM is not only an interface card but also controls synchronous operation of multiple power supplies with an arbitrary tracking curve. For SuperKEKB, the upgrade of KEKB, most of the existing power supplies continue in use while hundreds of new power supplies are also installed. Although the PSICMs have worked without serious problems for 12 years, maintaining them for the next decade appears too difficult because of discontinued parts. Thus we have developed an upgraded version of the PSICM. The new PSICM has a fully backward compatible interface to the power supply. The enhanced features are high-speed ARCNET communication and redundant trigger signals. The design and the status of the upgraded PSICM are presented. | |||
![]() |
Poster TUPPC089 [1.516 MB] | ||
TUPPC096 | Migration from WorldFIP to a Low-Cost Ethernet Fieldbus for Power Converter Control at CERN | Ethernet, controls, software, network | 805 |
|
|||
Power converter control in the LHC uses embedded computers called Function Generator/Controllers (FGCs) which are connected to WorldFIP fieldbuses around the accelerator ring. The FGCs are integrated into the accelerator control system by x86 gateway front-end systems running Linux. With the LHC now operational, attention has turned to the renovation of older control systems as well as a new installation for Linac 4. A new generation of FGC is being deployed to meet the needs of these cycling accelerators. As WorldFIP is very limited in data rate and is unlikely to undergo further development, it was decided to base future installations upon an Ethernet fieldbus with standard switches and interface chipsets in both the FGCs and gateways. The FGC communications protocol that runs over WorldFIP in the LHC was adapted to work over raw Ethernet, with the aim to have a simple solution that will easily allow the same devices to operate with either type of interface. This paper describes the evolution of FGC communications from WorldFIP to dedicated Ethernet networks and presents the results of initial tests, diagnostic tools and how real-time power converter control is achieved. | |||
![]() |
Poster TUPPC096 [1.250 MB] | ||
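The TUPPC096 abstract describes running the FGC protocol over raw Ethernet rather than IP. The essential mechanics of that approach, building a frame with source/destination MACs and an EtherType and padding it to the Ethernet minimum, can be sketched as below. The EtherType 0x88B5 (an IEEE "local experimental" value) and the payload are illustrative assumptions, not the actual FGC wire format:

```python
import struct

# Sketch of framing a command in a raw Ethernet frame, as a fieldbus
# protocol does when it bypasses the IP stack. EtherType 0x88B5 is the
# IEEE 802 "local experimental" value; the payload layout is made up.

ETHERTYPE_EXPERIMENTAL = 0x88B5

def build_frame(dst_mac: bytes, src_mac: bytes, payload: bytes) -> bytes:
    if len(dst_mac) != 6 or len(src_mac) != 6:
        raise ValueError("MAC addresses must be 6 bytes")
    header = dst_mac + src_mac + struct.pack("!H", ETHERTYPE_EXPERIMENTAL)
    frame = header + payload
    # pad to the 60-byte minimum frame size (the FCS is appended by the NIC)
    return frame + b"\x00" * max(0, 60 - len(frame))

frame = build_frame(b"\xff" * 6, b"\x02\x00\x00\x00\x00\x01", b"STATUS?")
print(len(frame), frame[12:14].hex())  # 60 88b5
```

Sending such a frame would require a raw socket (`socket.AF_PACKET` on Linux, root privileges), which is exactly why gateways rather than arbitrary hosts talk to the devices.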
TUPPC109 | MacspeechX.py Module and Its Use in an Accelerator Control System | controls, hardware, software, target | 829 |
|
|||
macspeechX.py is a Python module to access the speech synthesis library on Mac OS X. This module has been used in the vocal alert systems of the KEKB and J-PARC accelerator control systems. A recent upgrade of this module allows us to handle non-English languages, such as Japanese. Implementation details will be presented as an example of a Python program accessing a system library. | |||
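The macspeechX.py API itself is not documented in this abstract, so the following sketch reproduces the vocal-alert idea with the stock macOS `say` command instead; the Japanese voice name "Kyoko" and the alert text are illustrative assumptions:

```python
import shutil
import subprocess

# Minimal vocal-alert sketch (not macspeechX.py itself): build a macOS
# `say` command for an alert message and run it if the command exists.

def vocal_alert(message: str, voice: str = "Kyoko") -> list:
    """Build (and, on a Mac, run) a speech command for an alert message."""
    cmd = ["say", "-v", voice, message]
    if shutil.which("say"):          # only actually speak on macOS
        subprocess.run(cmd, check=False)
    return cmd

cmd = vocal_alert("Beam abort", voice="Kyoko")
print(cmd)  # ['say', '-v', 'Kyoko', 'Beam abort']
```

A module like macspeechX.py calls the speech synthesis framework directly instead of spawning a subprocess, which matters when alerts must not block the control loop.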
TUPPC115 | Hierarchies of Alarms for Large Distributed Systems | controls, detector, experiment, diagnostics | 844 |
|
|||
The control systems of most of the infrastructure at CERN make use of the SCADA package WinCC OA by ETM, including successful projects to control large-scale systems (i.e. the LHC accelerator and associated experiments). Each of these systems features up to 150 supervisory computers and several million parameters. To handle such large systems, the control topologies are designed in a hierarchical way (i.e. sensor, module, detector, experiment) with the main goal of supervising a complete installation with a single person from a central user interface. One of the key features to achieve this is alarm management (generation, handling, storage, reporting). Although most critical systems include automatic reactions to faults, alarms are fundamental for intervention and diagnostics. Since one installation can have up to 250k alarms defined, a major failure may create an avalanche of alarms that is difficult for an operator to interpret. Missing important alarms may lead to downtime or to danger for the equipment. The paper presents the developments made in recent years on WinCC OA to work with large hierarchies of alarms and to present summarized information to the operators. | |||
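The alarm-summarization idea behind TUPPC115 can be sketched in a few lines (in Python, not WinCC OA): each node in the sensor → module → detector → experiment hierarchy reports the worst severity among its children plus an active-alarm count, so a central panel shows one summarized line instead of an avalanche. The severity names and the example tree are our own illustration:

```python
# Hierarchical alarm summarization sketch: a node's state is the worst
# severity among its children, together with the active alarm count.

SEVERITIES = {"OK": 0, "WARNING": 1, "ERROR": 2, "FATAL": 3}

def summarize(node):
    """Return (worst_severity_name, active_alarm_count) for a tree node."""
    if isinstance(node, str):                # leaf: its own severity
        return node, 0 if node == "OK" else 1
    worst, count = "OK", 0
    for child in node.values():
        sev, n = summarize(child)
        if SEVERITIES[sev] > SEVERITIES[worst]:
            worst = sev
        count += n
    return worst, count

experiment = {
    "detectorA": {"module1": {"s1": "OK", "s2": "ERROR"},
                  "module2": {"s3": "OK"}},
    "detectorB": {"module3": {"s4": "WARNING", "s5": "WARNING"}},
}
print(summarize(experiment))  # ('ERROR', 3)
```

An operator then drills down from the summarized ('ERROR', 3) line only when needed, rather than scanning thousands of leaf alarms.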
TUPPC116 | Cheburashka: A Tool for Consistent Memory Map Configuration Across Hardware and Software | software, hardware, controls, database | 848 |
|
|||
The memory map of a hardware module is defined by the designer at the moment when the firmware is specified. It is then used by software developers to define device drivers and front-end software classes. Maintaining consistency between hardware and its software is critical. In addition, the manual process of writing VHDL firmware on one side and the C++ software on the other is labour-intensive and error-prone. Cheburashka* is a software tool which eases this process. From a unique declaration of the memory map, created using the tool’s graphical editor, it allows the generation of the memory map VHDL package, the Linux device driver configuration for the front-end computer, and a FESA** class for debugging. An additional tool, GENA, is being used to automatically create all required VHDL code to build the associated register control block. These tools are now used by the hardware and software teams for the design of all new interfaces from FPGAs to VME or on-board DSPs in the context of the extensive program of development and renovation being undertaken in the CERN injector chain during LS1***. Several VME modules and their software have already been deployed and used in the SPS.
(*) Cheburashka is developed in the RF group at CERN (**) FESA is an acronym for Front End Software Architecture, developed at CERN (***) LS1: LHC Long Shutdown 1, from 2013 to 2014 |
|||
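The single-source-of-truth idea behind Cheburashka, one memory-map declaration driving every generated artifact, can be sketched as below. The register names, offsets and the C-header target are illustrative assumptions; the real tool emits VHDL packages, Linux driver configuration and FESA classes from its own editor format:

```python
# Sketch of code generation from one memory-map declaration, so firmware
# and software can never disagree about register offsets. The map below
# is a hypothetical module, not a real CERN memory map.

MEMORY_MAP = [           # (name, byte offset, description)
    ("CONTROL", 0x00, "control/command register"),
    ("STATUS",  0x04, "status flags"),
    ("DAC",     0x08, "DAC setpoint"),
]

def c_header(map_entries, prefix="MOD"):
    """Generate a C header with one #define per register offset."""
    lines = ["/* generated - do not edit */"]
    for name, offset, desc in map_entries:
        lines.append(f"#define {prefix}_{name}_OFFSET 0x{offset:02X} /* {desc} */")
    return "\n".join(lines)

print(c_header(MEMORY_MAP))
```

A second emitter walking the same `MEMORY_MAP` would produce the VHDL constants, which is exactly how consistency between the two sides is enforced.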
TUPPC119 | Exchange of Crucial Information between Accelerator Operation, Equipment Groups and Technical Infrastructure at CERN | operation, database, laser, controls | 856 |
|
|||
During CERN accelerator operation, a large number of events, related to accelerator operation and management of technical infrastructure, occur with different criticality. All these events are detected, diagnosed and managed by the Technical Infrastructure service (TI) in the CERN Control Centre (CCC); equipment groups concerned have to solve the problem with a minimal impact on accelerator operation. A new database structure and new interfaces have to be implemented to share information received by TI, to improve communication between the control room and equipment groups, to help post-mortem studies and to correlate events with accelerator operation incidents. Different tools like alarm screens, logbooks, maintenance plans and work orders exist and are in use today. A project was initiated with the goal to integrate and standardize information in a common repository to be used by the different stakeholders through dedicated user interfaces. | |||
![]() |
Poster TUPPC119 [10.469 MB] | ||
TUPPC120 | LHC Collimator Alignment Operational Tool | alignment, collimation, controls, monitoring | 860 |
|
|||
Beam-based LHC collimator alignment is necessary to determine the beam centers and beam sizes at the collimator locations for various machine configurations. Fast and automatic alignment is provided through an operational tool that has been developed for use in the CERN Control Centre and is described in this paper. The tool is implemented as a Java application, and acquires beam loss and collimator position data from the hardware through a middleware layer. The user interface is designed to allow for a quick transition from application start-up to selecting the required collimators for alignment and configuring the alignment parameters. The measured beam centers and sizes are then logged and displayed in different forms to help the user set up the system. | |||
![]() |
Poster TUPPC120 [2.464 MB] | ||
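The beam-based alignment loop the TUPPC120 tool automates can be caricatured as follows. This is a toy sketch under our own assumptions (a single jaw, a fake beam-loss monitor, made-up units), not the CERN application: a jaw steps toward the beam until the loss signal spikes, which marks the beam edge at that collimator.

```python
# Toy beam-based alignment loop: step a collimator jaw inward until the
# beam-loss signal exceeds a threshold, marking the beam edge position.

def align_jaw(read_loss, start_mm, step_mm=0.05, threshold=1.0, max_steps=200):
    """Step a jaw inward; return the position where losses first spike."""
    for i in range(1, max_steps + 1):
        position = start_mm - i * step_mm
        if read_loss(position) > threshold:
            return position
    raise RuntimeError("no loss spike found - check BLM or step size")

# fake BLM: losses spike once the jaw passes the beam edge at 3.0 mm
edge = 3.0
loss = lambda pos: 0.1 if pos > edge else 5.0
print(f"beam edge near {align_jaw(loss, start_mm=8.0):.2f} mm")
```

Doing this for both jaws of a collimator gives the beam center and, combined with the known jaw gap, the beam size at that location.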
TUPPC121 | caQtDM, an EPICS Display Manager Based on Qt | controls, EPICS, Windows, data-acquisition | 864 |
|
|||
At the Paul Scherrer Institut (PSI) the display manager MEDM was used until recently for the synoptic displays at all our facilities, not only for EPICS but also for another, in-house built control system, ACS. However MEDM is based on MOTIF and Xt/X11, systems/libraries that are starting to age. Moreover MEDM is difficult to extend with new entities. Therefore a new tool has been developed based on Qt. This reproduces the functionality of MEDM and is now in use at several facilities. As Qt is supported on several platforms, this tool can run on them as well. The existing MEDM displays were converted into the new format using the parser tool adl2ui. These were then edited further with the Qt-Designer and displayed with the new Qt-Manager caQtDM. The integration of new entities into the Qt designer and therefore into the Qt based applications is very easy, so that the system can easily be enhanced with new widgets. New features needed for our facility were implemented. The caQtDM application uses a C++ class to perform the data acquisition and display; this class can also be integrated into other applications. | |||
![]() |
Slides TUPPC121 [1.024 MB] | ||
TUPPC122 | Progress of the TPS Control Applications Development | controls, EPICS, GUI, operation | 867 |
|
|||
The TPS (Taiwan Photon Source) is the latest-generation 3 GeV synchrotron light source, which is in its installation phase. Commissioning is expected in 2014. EPICS is adopted as the control system framework for the TPS. Various EPICS IOCs have been implemented for each subsystem at this moment. Development and integration of specific control operation interfaces are in progress. The operation interfaces mainly include the functions of setting, reading, save and restore. Development of high-level applications which depend upon the properties of each subsystem is on-going. The archive database system and its browser toolkits have gradually been established and tested. Web-based operation interfaces and broadcasting are also created for observing the machine status. These efforts will be summarized in this report. | |||
![]() |
Poster TUPPC122 [2.054 MB] | ||
TUPPC131 | Synoptic Displays and Rapid Visual Application Development | controls, embedded, collider, power-supply | 893 |
|
|||
Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. For a number of years there has been an increasing desire to adopt a synoptic display suite within the BNL accelerator community. Initial interest in the precursors to the modern display suites like MEDM quickly fizzled out as our users found them aesthetically unappealing and cumbersome to use. Subsequent attempts to adopt Control System Studio (CSS) also fell short when work on the abstraction bridge between CSS and our control system stalled and was eventually abandoned. Most recently, we tested the open source version of a synoptic display developed at Fermilab. It, like its previously evaluated predecessors, also seemed rough around the edges; however, a few implementation details made it more appealing than every previously mentioned solution, and after a brief evaluation we settled on Synoptic as our display suite of choice. This paper describes this adoption process and goes into details on several key changes and improvements made to the original implementation – a few of which made us rethink how we want to use this tool in the future. |
|||
![]() |
Poster TUPPC131 [3.793 MB] | ||
TUPPC133 | Graphene: A Java Library for Real-Time Scientific Graphs | real-time, controls, operation, background | 901 |
|
|||
While there are a number of open source charting libraries available in Java, none of them seems suitable for real-time scientific data, such as the data coming from control systems. Common shortcomings include: inadequate performance, entanglement with other scientific packages, concrete data objects (which require copy operations), designs aimed at small datasets, and a required running UI to produce any graph. Graphene is our effort to produce graphs that are suitable for scientific publishing, can be created without a UI (e.g. in a web server), work on data defined through interfaces that allow no-copy processing in a real-time pipeline, and are produced with adequate performance. The graphs are then integrated using pvmanager within Control System Studio. | |||
![]() |
Poster TUPPC133 [0.502 MB] | ||
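The "data defined through interfaces that allow no-copy processing" point from TUPPC133 is worth unpacking. The sketch below (in Python, not Graphene's Java) shows the idea: the renderer consumes data only through a read-only length/index interface, so a live circular buffer can be rendered without first copying it into a concrete data object. The buffer and "renderer" here are our own illustration:

```python
# No-copy dataset sketch: the renderer sees only an interface
# (length + indexed access), so a live circular buffer needs no copying.

class CircularBuffer:
    """Fixed-size buffer a control system appends to in real time."""
    def __init__(self, size):
        self._data = [0.0] * size
        self._count = 0
    def append(self, v):
        self._data[self._count % len(self._data)] = v
        self._count += 1
    # the 'dataset interface' the renderer sees:
    def __len__(self):
        return min(self._count, len(self._data))
    def value(self, i):
        """i-th retained sample in chronological order."""
        if self._count <= len(self._data):
            return self._data[i]
        return self._data[(self._count + i) % len(self._data)]

def render_range(dataset):
    """A 'renderer' that only uses the interface - no copies made."""
    values = [dataset.value(i) for i in range(len(dataset))]
    return min(values), max(values)

buf = CircularBuffer(4)
for v in [1.0, 5.0, 2.0, 9.0, 3.0]:   # the oldest sample is overwritten
    buf.append(v)
print(render_range(buf))  # (2.0, 9.0)
```

In a real pipeline the renderer would also compute the axis ranges this way, on whatever storage the control system already uses.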
TUCOCA04 | Formal Methodology for Safety-Critical Systems Engineering at CERN | PLC, software, operation, site | 918 |
|
|||
A Safety-Critical system is a system whose failure or malfunctioning may lead to an injury or loss of human life or may have serious environmental consequences. The Safety System Engineering section of CERN is responsible for the conception of systems capable of performing, in an extremely safe way, a predefined set of Instrumented Functions preventing any human presence inside areas where a potential hazardous event may occur. This paper describes the formal approach followed for the engineering of the new Personnel Safety System of the PS accelerator complex at CERN. Starting from applying the generic guidelines of the safety standard IEC-61511, we have defined a novel formal approach particularly useful to express the complete set of Safety Functions in a rigorous and unambiguous way. We present the main advantages offered by this formalism and, in particular, we will show how this has been effective in solving the problem of the Safety Functions testing, leading to a major reduction of time for the test pattern generation. | |||
![]() |
Slides TUCOCA04 [2.227 MB] | ||
TUCOCB01 | Next-Generation MADOCA for The SPring-8 Control Framework | controls, Windows, framework, software | 944 |
|
|||
The MADOCA control framework* was developed for SPring-8 accelerator control and has been utilized in several facilities since 1997. As a result of increasing demands in controls, we now need to treat various data including image data in beam profile monitoring, and also need to control specific devices which can only be managed by Windows drivers. To fulfill such requirements, a next-generation MADOCA (MADOCA II) has been developed. MADOCA II is also based on a message-oriented control architecture, but the core part of the messaging is completely rewritten with the ZeroMQ socket library. The main features of MADOCA II are as follows: 1) Variable-length data such as image data can be transferred with a message. 2) The control system can run on Windows as well as other platforms such as Linux and Solaris. 3) Concurrent processing of multiple messages can be performed for fast control. In this paper, we report on the new control framework, especially its messaging aspects. We also report the status of the replacement of the control system with MADOCA II. Part of the SPring-8 control system was already replaced with MADOCA II last summer and has been operated stably.
*R.Tanaka et al., “Control System of the SPring-8 Storage Ring”, Proc. of ICALEPCS’95, Chicago, USA, (1995) |
|||
![]() |
Slides TUCOCB01 [2.157 MB] | ||
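The key MADOCA II feature above, a message that can carry a variable-length payload such as an image, boils down to length-prefixed framing of the kind ZeroMQ provides natively. The sketch below is illustrative only, not the real MADOCA II wire format; the command string and payload are made up:

```python
import struct

# Length-prefixed message framing sketch: a fixed header carries the
# lengths of a command string and a variable-length binary payload.

def pack_message(command: str, payload: bytes) -> bytes:
    cmd = command.encode()
    return struct.pack("!II", len(cmd), len(payload)) + cmd + payload

def unpack_message(msg: bytes):
    cmd_len, payload_len = struct.unpack_from("!II", msg)
    cmd = msg[8:8 + cmd_len].decode()
    payload = msg[8 + cmd_len:8 + cmd_len + payload_len]
    return cmd, payload

image = bytes(range(256)) * 4                 # stand-in for profile-monitor data
msg = pack_message("put/bm_profile", image)
cmd, data = unpack_message(msg)
print(cmd, len(data))  # put/bm_profile 1024
```

With ZeroMQ the framing is handled by the library (and multipart messages let the header and image travel as separate frames), which is one reason rewriting the messaging core on top of it pays off.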
TUCOCB06 | Designing and Implementing LabVIEW Solutions for Re-Use* | framework, LabView, hardware, controls | 960 |
|
|||
Funding: * This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. #LLNL-ABS-632632 Many of our machines have a lot in common – they drive motors, take pictures, generate signals, toggle switches, and observe and measure effects. In a research environment that creates new machines and expects them to perform for a production assembly line, it is important to meet both schedule and quality. NIF has developed a LabVIEW layered architecture of Support, general Frameworks, Controllers, Devices, and User Interface Frameworks. This architecture provides a tested and qualified framework of software that allows us to focus on developing and testing the external interfaces (hardware and user) of each machine. |
|||
![]() |
Slides TUCOCB06 [4.232 MB] | ||
TUCOCB10 | TANGO V8 - Another Turbo Charged Major Release | TANGO, controls, device-server, CORBA | 978 |
|
|||
The TANGO (http://tango-controls.org) collaboration continues to evolve and improve the TANGO kernel. The latest release has made major improvements to the protocol and the language support in Java. The replacement of the CORBA Notification service with ZMQ for sending events has allowed a much higher performance, a simplification of the architecture and support for multicasting to be achieved. A rewrite of the Java device server binding using the latest features of the Java language has made the code much more compact and modern. Guidelines for writing device servers have been produced so they can be more easily shared. The test suite for testing the TANGO kernel has been re-written and the code coverage drastically improved. TANGO has been ported to new embedded platforms running Linux and mobile platforms running Android and iOS. Packaging for Debian and bindings to commercial tools have been updated and a new one (Panorama) added. The graphical layers have been extended. The latest figures on TANGO performance will be presented. Finally the paper will present the roadmap for the next major release. | |||
![]() |
Slides TUCOCB10 [1.469 MB] | ||
WECOAAB01 | An Overview of the LHC Experiments' Control Systems | controls, experiment, framework, monitoring | 982 |
|
|||
Although they are all LHC experiments, the four experiments, either by need or by choice, use different equipment, have defined different requirements and are operated differently. This led to the development of four quite different control systems. Although a joint effort was made in the area of Detector Control Systems (DCS), allowing a common choice of components and tools and achieving the development of a common DCS framework for the four experiments, nothing was done in common in the areas of Data Acquisition or Trigger Control (normally called Run Control). This talk will present an overview of the design principles, architectures and technologies chosen by the four experiments in order to perform the Control System's tasks: Configuration, Control, Monitoring, Error Recovery, User Interfacing, Automation, etc.
Invited |
|||
![]() |
Slides WECOAAB01 [2.616 MB] | ||
WECOAAB02 | Status of the ACS-based Control System of the Mid-sized Telescope Prototype for the Cherenkov Telescope Array (CTA) | controls, software, monitoring, framework | 987 |
|
|||
CTA, as the next-generation ground-based very-high-energy gamma-ray observatory, is defining new areas beyond those related to physics; it is also creating new demands on the control and data acquisition system. With on the order of 100 telescopes spread over a large area, together with numerous central facilities, CTA will comprise a significantly larger number of devices than any other current imaging atmospheric Cherenkov telescope experiment. A prototype for the Medium Size Telescope (MST) with a diameter of 12 m has been installed in Berlin and is currently being commissioned. The design of the control software of this telescope incorporates the main tools and concepts under evaluation within the CTA consortium in order to provide an array control prototype for the CTA project. The readout and control system for the MST prototype is implemented within the ALMA Common Software (ACS) framework. The interfacing to the hardware is performed via the OPen Connectivity-Unified Architecture (OPC UA). The archive system is based on MySQL and MongoDB. In this contribution the architecture of the MST control and data acquisition system, implementation details and first conclusions are presented. | |||
![]() |
Slides WECOAAB02 [3.148 MB] | ||
WECOBA02 | Distributed Information Services for Control Systems | database, controls, EPICS, software | 1000 |
|
|||
During the design and construction of an experimental physics facility (EPF), a heterogeneous set of engineering disciplines, methods, and tools is used, making subsequent exploitation of data difficult. In this paper, we describe a framework (DISCS) for building high-level applications for commissioning, operation, and maintenance of an EPF that provides programmatic as well as graphical interfaces to its data and services. DISCS is a collaborative effort of BNL, FRIB, Cosylab, IHEP, and ESS. It is comprised of a set of cooperating services and applications, and manages data such as machine configuration, lattice, measurements, alignment, cables, machine state, inventory, operations, calibration, and design parameters. The services/applications include Channel Finder, Logbook, Traveler, Unit Conversion, Online Model, and Save-Restore. Each component of the system has a database, an API, and a set of applications. The services are accessed through REST and EPICS V4. We also discuss the challenges to developing database services in an environment where requirements continue to evolve and developers are distributed among different laboratories with different technology platforms. | |||
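The WECOBA02 abstract notes that DISCS services such as Channel Finder are accessed through REST. The mechanics of such access can be sketched as below; the host name, endpoint path, property names and the sample JSON reply are made-up illustrations, not the actual DISCS or Channel Finder API:

```python
import json
from urllib.parse import urlencode

# Sketch of a REST query against a DISCS-style directory service.
# Everything below the stdlib calls is a hypothetical example.

BASE = "https://discs.example.org/ChannelFinder/resources/channels"

def channel_query(**properties):
    """Build a query URL filtering channels by property values."""
    return f"{BASE}?{urlencode(properties)}"

url = channel_query(hostName="ioc-ps-01", property="magnet")
print(url)

# a service of this kind would answer with JSON such as:
response = json.loads('[{"name": "PS:QF1:CURRENT", "owner": "ops"}]')
print(response[0]["name"])  # PS:QF1:CURRENT
```

The same data being reachable both this way and via EPICS V4, as the abstract describes, is what lets both web applications and control-room tools share one source of truth.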
WECOCB01 | CERN's FMC Kit | hardware, controls, FPGA, feedback | 1020 |
|
|||
In the frame of the renovation of controls and data acquisition electronics for accelerators, the BE-CO-HT section at CERN has designed a kit based on carriers and mezzanines following the FPGA Mezzanine Card (FMC, VITA 57) standard. Carriers exist in VME64x and PCIe form factors, with a PXIe carrier underway. Mezzanines include an Analog to Digital Converter (ADC), a Time to Digital Converter (TDC) and a fine delay generator. All of the designs are licensed under the CERN Open Hardware Licence (OHL) and commercialized by companies. The paper discusses the benefits of this carrier-mezzanine strategy and of the Open Hardware based commercial paradigm, along with performance figures and plans for the future. | |||
![]() |
Slides WECOCB01 [3.300 MB] | ||
WECOCB03 | Development of a Front-end Data-Acquisition System with a Camera Link FMC for High-Bandwidth X-Ray Imaging Detectors | detector, FPGA, experiment, synchrotron | 1028 |
|
|||
X-ray imaging detectors are indispensable for synchrotron radiation experiments and are evolving toward larger numbers of pixels and higher frame rates to acquire more information on the samples. The novel SOPHIAS detector, with a data rate of up to 8 Gbps per sensor, is under development at the SACLA facility. Therefore, we have developed a new front-end DAQ system with a data rate beyond the present level. The system consists of an FPGA-based evaluation board and an FPGA mezzanine card (FMC). FMC was adopted as the FPGA interface to support a variety of interfaces and to allow the use of COTS systems. Since the data transmission performance of the FPGA board in combination with the FMCs was already evaluated at about 20 Gbps between boards, our choice of devices has the potential to meet the requirements of the SOPHIAS detector*. We made an FMC with a Camera Link (CL) interface to support the 1st phase of the SOPHIAS detector. Since almost all CL configurations are supported, the system handles various types of commercial cameras as well as the new detector. Moreover, the FMC has general-purpose inputs/outputs to satisfy various experimental requirements. We report the design of the new front-end DAQ and the results of its evaluation.
* A Study of a Prototype DAQ System with over 10 Gbps Bandwidth for the SACLA X-Ray Experiments, C. Saji, T. Ohata, T. Sugimoto, R. Tanaka, and M. Yamaga, 2012 IEEE NSS and MIC, p.1619-p.1622 |
|||
![]() |
Slides WECOCB03 [0.980 MB] | ||
WECOCB05 | Modern Technology in Disguise | FPGA, controls, software, hardware | 1032 |
|
|||
A modern embedded system for fast applications has to incorporate technologies like multicore CPUs, fast serial links and FPGAs for interfaces and local processing. These technologies are still relatively new, and integrating them into a control system infrastructure that either already exists or has to be planned for long-term maintainability is a challenge that needs to be addressed. At PSI we have, in collaboration with an industrial company (IOxOS SA)[*], built a board and the infrastructure around it, solving issues like scalability and modularization of systems based on FPGAs and the FMC standard, simplicity in taking such a board into operation, and re-use of parts of the FPGA source code base. In addition, the board has several state-of-the-art features typically found in newer bus systems like MicroTCA, but it can still easily be incorporated into our VME64x-based infrastructure. In the presentation we will describe the system architecture, its technical features and how it enables us to effectively develop our different user applications and fast front-end systems.
* IOxOS SA, Gland, Switzerland, http://www.ioxos.ch |
|||
Slides WECOCB05 [0.675 MB] | ||
WECOCB07 | Development of an Open-Source Hardware Platform for Sirius BPM and Orbit Feedback | hardware, FPGA, software, controls | 1036 |
|
|||
The Brazilian Synchrotron Light Laboratory (LNLS) is developing a BPM and orbit feedback system for Sirius, the new low-emittance synchrotron light source under construction in Brazil. In that context, three open-source boards and the accompanying low-level firmware/software were developed in cooperation with the Warsaw University of Technology (WUT) to serve as the hardware platform for BPM data acquisition and digital signal processing as well as the orbit feedback data distributor: (i) an FPGA board with 2 high-pin-count FMC slots in PICMG AMC form factor; (ii) a 4-channel 16-bit 130 MS/s ADC board in ANSI/VITA FMC form factor; (iii) a 4-channel 16-bit 250 MS/s ADC board in ANSI/VITA FMC form factor. The experience of integrating the system prototype in a COTS MicroTCA.4 crate will be reported, as well as the planned developments. | |||
Slides WECOCB07 [4.137 MB] | ||
THCOAAB01 | A Scalable and Homogeneous Web-Based Solution for Presenting CMS Control System Data | controls, software, detector, status | 1040 |
|
|||
The Control System of the CMS experiment ensures the monitoring and safe operation of over 1M parameters. The high demand for access to online and historical control system data calls for a scalable solution combining multiple data sources. The advantage of a web solution is that data can be accessed from everywhere with no additional software. Moreover, existing visualization libraries can be reused to achieve a user-friendly and effective data presentation. Access to the online information is provided with minimal impact on the running control system by using a common cache, in order to be independent of the number of users. Historical data archived by the SCADA software is accessed via an Oracle Database. The web interfaces provide mostly read-only access to data, but some commands are also allowed. Moreover, developers and experts use web interfaces to deploy the control software and administer the SCADA projects in production. By using an enterprise portal, we profit from single sign-on and role-based access control. Portlets maintained by different developers are centrally integrated into dynamic pages, resulting in a consistent user experience. | |||
Slides THCOAAB01 [1.814 MB] | ||
THCOAAB03 | Bringing Control System User Interfaces to the Web | controls, EPICS, network, status | 1048 |
|
|||
Funding: SNS is managed by UT-Battelle, LLC, under contract DE-AC05-00OR22725 for the U.S. Department of Energy With the evolution of web-based technologies, especially HTML5[1], it becomes possible to create web-based control system user interfaces (UI) that are cross-browser and cross-device compatible. This article describes two technologies that facilitate this goal. The first is the WebOPI[2], which can seamlessly display CSS BOY[3] Operator Interfaces (OPI) in web browsers without modification to the original OPI file. The WebOPI leverages the powerful graphical editing capabilities of BOY and provides the convenience of re-using existing OPI files. On the other hand, it uses auto-generated JavaScript and a generic communication mechanism between the web browser and the web server, so it is not optimized for a control system, which results in unnecessary network traffic and resource usage. Our second technology is the WebSocket-based Process Data Access (WebPDA). It is a protocol that provides efficient control system data communication using WebSockets[4], so that users can create web-based control system UIs using standard web page technologies such as HTML, CSS and JavaScript. The protocol is control-system independent, so it can potentially support any type of control system. [1]http://en.wikipedia.org/wiki/HTML5 [2]https://sourceforge.net/apps/trac/cs-studio/wiki/webopi [3]https://sourceforge.net/apps/trac/cs-studio/wiki/BOY [4]http://en.wikipedia.org/wiki/WebSocket
|||
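The abstract does not specify the WebPDA wire format, so the sketch below only illustrates the general pattern of a WebSocket process-data protocol as described: a JSON subscribe request from the client and a streamed monitor update from the server. All message fields and PV names are invented for illustration and are not the actual WebPDA protocol.

```python
import json

# Hypothetical message shapes for a WebSocket-based process data
# protocol (NOT the actual WebPDA wire format): the client sends a
# subscribe request and the server streams monitor updates as JSON.

def make_subscribe(pv_name, msg_id):
    """Build a subscription request for one process variable."""
    return json.dumps({"type": "subscribe", "id": msg_id, "pv": pv_name})

def handle_update(raw):
    """Decode a monitor update; return (pv, value, severity) or None."""
    msg = json.loads(raw)
    if msg.get("type") != "monitor":
        return None
    return msg["pv"], msg["value"], msg.get("severity", "NONE")

# Simulated server update (no network involved in this sketch):
update = json.dumps({"type": "monitor", "pv": "ring:current",
                     "value": 102.4, "severity": "NONE"})
print(handle_update(update))  # ('ring:current', 102.4, 'NONE')
```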
Slides THCOAAB03 [1.768 MB] | ||
THCOAAB05 | Rapid Application Development Using Web 2.0 Technologies | framework, software, target, experiment | 1058 |
|
|||
Funding: * This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. #LLNL-ABS-632813 The National Ignition Facility (NIF) strives to deliver reliable, cost effective applications that can easily adapt to the changing business needs of the organization. We use HTML5, RESTful web services, AJAX, jQuery, and JSF 2.0 to meet these goals. WebGL and HTML5 Canvas technologies are being used to provide 3D and 2D data visualization applications. JQuery’s rich set of widgets along with technologies such as HighCharts and Datatables allow for creating interactive charts, graphs, and tables. PrimeFaces enables us to utilize much of this Ajax and JQuery functionality while leveraging our existing knowledge base in the JSF framework. RESTful Web Services have replaced the traditional SOAP model allowing us to easily create and test web services. Additionally, new software based on NodeJS and WebSocket technology is currently being developed which will augment the capabilities of our existing applications to provide a level of interaction with our users that was previously unfeasible. These Web 2.0-era technologies have allowed NIF to build more robust and responsive applications. Their benefits and details on their use will be discussed. |
|||
Slides THCOAAB05 [0.832 MB] | ||
THCOAAB08 | NOMAD Goes Mobile | GUI, CORBA, controls, network | 1070 |
|
|||
The commissioning of the new instruments at the Institut Laue-Langevin (ILL) has shown the need to extend instrument control outside the classical desktop computer location. This, together with the availability of reliable and powerful mobile devices such as smartphones and tablets has triggered a new branch of development for NOMAD, the instrument control software in use at the ILL. Those devices, often considered only as recreational toys, can play an important role in simplifying the life of instrument scientists and technicians. Performing an experiment not only happens in the instrument cabin but also from the office, from another instrument, from the lab and from home. The present paper describes the development of a remote interface, based on Java and Android Eclipse SDK, communicating with the NOMAD server using CORBA via wireless network. Moreover, the application is distributed on “Google Play” to minimise the installation and the update procedures. | |||
Slides THCOAAB08 [2.320 MB] | ||
THCOAAB09 | Olog and Control System Studio: A Rich Logging Environment | controls, operation, experiment, framework | 1074 |
|
|||
Leveraging the features provided by Olog and Control System Studio, we have developed a logging environment which allows for the creation of rich log entries. These entries, in addition to text and snapshot images, store context which can comprise information either from the control system (process variables) or from other services (directory, ticketing, archiver). Using this context, the client tools give the user the ability to launch various applications with their state initialized to match the state at the time the entry was created. | |||
Slides THCOAAB09 [1.673 MB] | ||
THMIB09 | Management of the FERMI Control System Infrastructure | controls, network, TANGO, Ethernet | 1086 |
|
|||
Funding: Work supported in part by the Italian Ministry of University and Research under grants FIRB-RBAP045JF2 and FIRB-RBAP06AWK3 Efficiency, flexibility and simplicity of management have been among the design guidelines of the control system for the FERMI@Elettra Free Electron Laser. Out-of-band system monitoring devices, remotely operated power distribution units and remote management interfaces have been integrated into the Tango control system, leading to effective control of the infrastructure. The open-source tool Nagios has been deployed to monitor the functionality of the control system computers and the status of the application software, for easy and automatic identification and reporting of problems.
|||
Slides THMIB09 [0.236 MB] | ||
Poster THMIB09 [1.567 MB] | ||
THPPC022 | Securing Mobile Control System Devices: Development and Testing | controls, network, Linux, EPICS | 1131 |
|
|||
Recent advances in portable devices give end users convenient ways to access data over the network. Networked control systems have traditionally been kept on local or internal networks to prevent external threats and isolate traffic. The UWMC Clinical Neutron Therapy System has its control system on such an isolated network. Engineers have been updating the control system with EPICS and have developed EDM-based interfaces for control and monitoring. This paper describes a tablet-based monitoring device being developed to allow the engineers to monitor the system while, e.g., moving from rack to rack or room to room. EDM is being made available via the tablet. Methods are being created to maintain the security of the control system and tablet while providing ease of access and meaningful data for management. In parallel with the tablet development, security and penetration tests are also being produced. | |||
THPPC023 | Integration of Windows Binaries in the UNIX-based RHIC Control System Environment | controls, Windows, software, Linux | 1135 |
|
|||
Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. Since its inception, the RHIC control system has been built up on UNIX or Linux and implemented primarily in C++. Equipment vendors sometimes supply software packages developed for the Microsoft Windows operating system, which leads to a need to integrate these packaged executables into existing data logging, display, and alarm systems. This paper will describe an approach to incorporate such non-UNIX binaries seamlessly into the RHIC control system with minimal changes to the existing code base, allowing for compilation on standard Linux workstations through the use of a virtual machine. The implementation resulted in the successful use of a Windows dynamic-link library (DLL) to control equipment remotely while running a synoptic display interface on a Linux machine.
|||
Poster THPPC023 [1.391 MB] | ||
THPPC027 | A New EPICS Device Support for S7 PLCs | PLC, EPICS, controls, software | 1147 |
|
|||
S7 series programmable logic controllers (PLCs) are commonly used in accelerator environments. A new EPICS device support for S7 PLCs that is based on libnodave has been developed. This device support allows for a simple integration of S7 PLCs into EPICS environments. Developers can simply create an EPICS record referring to a memory address in the PLC and the device support takes care of automatically connecting to the PLC and transferring the value. This contribution presents the concept behind the s7nodave device support and shows how simple it is to create an EPICS IOC that communicates with an S7 PLC. | |||
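As a rough illustration of the record-level integration described above, an IOC database entry might look like the following sketch. The PLC name and address string are hypothetical; the exact s7nodave address syntax and device-support type string should be taken from its documentation.

```
# Illustrative only: field values below are assumptions, not the
# verified s7nodave syntax.
record(ai, "PLC:Temperature") {
    field(DTYP, "s7nodave")
    field(INP,  "@myPlc DB1.W0")   # hypothetical PLC memory address
    field(SCAN, "1 second")
}
```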
Poster THPPC027 [3.037 MB] | ||
THPPC036 | EPICS Control System for the FFAG Complex at KURRI | controls, EPICS, network, LabView | 1164 |
|
|||
At the Kyoto University Research Reactor Institute (KURRI), a fixed-field alternating gradient (FFAG) proton accelerator complex, which consists of three FFAG rings, was constructed for an experimental study of an accelerator-driven sub-critical reactor (ADSR) system with spallation neutrons produced by the accelerator. The world's first ADSR experiment was carried out in March 2009. In order to increase the beam intensity of the proton FFAG accelerator, a new injection system with an H− linac was constructed in 2011. To keep pace with these developments, the control system of these accelerators should be easy to develop and maintain. The first control system was based on LabVIEW and its development started seven years ago, so it has become necessary to update its components, for example the operating system of the computers. The first control system also had some minor stability problems, and it was difficult for non-experts in LabVIEW to modify the control programs. Therefore, the EPICS toolkit has been used for the accelerator control system since 2009. The present control system of the KURRI FFAG complex is explained. | |||
Poster THPPC036 [3.868 MB] | ||
THPPC043 | Implement an Interface for Control System to Interact with Oracle Database at SSC-LINAC | database, EPICS, controls, linac | 1171 |
|
|||
The SSC-LINAC control system is based on the EPICS architecture. The control system includes ion sources, vacuum, digital power supplies, etc. Some of these subsystems need to interact with an Oracle database; for example, the power-supply control subsystem needs to retrieve parameters while the power supplies are running and also to store data in Oracle. We have therefore designed and implemented an interface for EPICS IOCs to interact with the Oracle database. The interface is a soft IOC, itself based on the EPICS architecture, so other IOCs and OPIs can use it to interact with Oracle via the Channel Access protocol. | |||
THPPC062 | Control Environment of Power Supply for TPS Booster Synchrotron | power-supply, booster, controls, EPICS | 1213 |
|
|||
The TPS is a latest-generation, high-brightness synchrotron light source scheduled for commissioning in 2014. Its booster is designed to ramp electron beams from 150 MeV to 3 GeV at 3 Hz. The control environments based on the EPICS framework are gradually being developed and built. This report summarizes the efforts on the control environment of the BPM and power supply systems for the TPS booster synchrotron. | |||
THPPC066 | ACSys Camera Implementation Utilizing an Erlang Framework to C++ Interface | framework, controls, software, hardware | 1228 |
|
|||
Multiple cameras are integrated into the Accelerator Control System using an Erlang framework. Message passing is implemented to provide access to C++ methods. The framework runs on a multi-core processor running Scientific Linux. The system provides full access to any 3 of approximately 20 cameras collecting frames at 5 Hz. JPEG images are provided in memory or as files for visual information, and PNG files are provided in memory or as files for analysis. Histograms over the X and Y coordinates are filtered and analyzed. This implementation is described and the framework is evaluated. | |||
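The X/Y histogram analysis mentioned above amounts to row and column projections of a frame. The following pure-Python stand-in (the actual implementation is C++ behind an Erlang framework) uses an invented 3×3 frame and a trivial threshold filter:

```python
# Sketch of X/Y-projection analysis: collapse a 2D frame into
# per-column and per-row intensity histograms, then apply a simple
# threshold filter. The frame is a nested list of pixel values.

def projections(frame):
    """Return (x_hist, y_hist): sums over columns and over rows."""
    y_hist = [sum(row) for row in frame]        # one bin per row
    x_hist = [sum(col) for col in zip(*frame)]  # one bin per column
    return x_hist, y_hist

def threshold(hist, level):
    """Suppress bins below `level` (a crude noise filter)."""
    return [v if v >= level else 0 for v in hist]

frame = [[0, 1, 0],
         [2, 9, 1],
         [0, 3, 0]]
x_hist, y_hist = projections(frame)
print(x_hist, y_hist)        # [2, 13, 1] [1, 12, 3]
print(threshold(x_hist, 2))  # [2, 13, 0]
```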
THPPC067 | New EPICS Drivers for Keck TCS Upgrade | EPICS, FPGA, controls, timing | 1231 |
|
|||
Keck Observatory is in the midst of a major telescope control system upgrade. This involves migrating from a VME-based EPICS control system originally deployed on Motorola FRC40s running VxWorks 5.1 and EPICS R3.13.0Beta12 to distributed 64-bit x86 Linux servers running RHEL with a 2.6.33.x kernel and EPICS R3.14.12.x. This upgrade brings a lot of new hardware to the project, including EtherNet/IP-connected PLCs, Ethernet-connected DeltaTau Brick controllers, National Instruments MXI RIO, Heidenhain encoders (in particular the Heidenhain Ethernet-connected Encoder Interface Box), Symmetricom PCI-based BC635 timing and synchronization cards, and serial line extenders and protocols. Keck has chosen to implement all new drivers using the ASYN framework. This paper will describe the various drivers used in the upgrade, including those from the community and those developed by Keck (BC635, MXI and Heidenhain EIB). It will also discuss the use of the BC635 as a local NTP reference clock and as a service for EPICS general time. | |||
THPPC079 | Using a Java Embedded DSL for LHC Test Analysis | framework, hardware, embedded, DSL | 1254 |
|
|||
The Large Hadron Collider (LHC) at CERN requires many systems to work in close cooperation. All systems for magnet powering and beam operation are qualified during dedicated commissioning periods and retested after corrective or regular maintenance. Already for the first commissioning of the magnet powering system in 2006, the execution of such tests was automated to a high degree to facilitate the execution and tracking of the more than 10,000 required test steps. Most of the time during today's commissioning campaigns is spent analysing test results, to a large extent still done manually. A project was launched to automate the analysis of such tests as much as possible. A dedicated Java embedded Domain Specific Language (eDSL) was created, which allows system experts to describe the desired analysis steps in a simple way. The execution of these checks results in simple decisions on the success of the tests and provides plots for experts to quickly identify the source of problems exposed by the tests. This paper explains the concepts and vision of the first version of the eDSL. | |||
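The paper's eDSL is embedded in Java; purely to illustrate the fluent-check idea (domain-specific assertions yielding a simple pass/fail decision per test step), here is a minimal analogue in Python with invented class and method names:

```python
# Conceptual sketch of a fluent, chainable analysis check, in the
# spirit of an embedded DSL for test-result analysis. Names and
# thresholds are invented for illustration.

class Check:
    def __init__(self, name, value):
        self.name, self.value, self.failures = name, value, []

    def is_positive(self):
        if self.value <= 0:
            self.failures.append(f"{self.name}={self.value} not > 0")
        return self                     # chainable, like a fluent Java API

    def is_between(self, lo, hi):
        if not (lo <= self.value <= hi):
            self.failures.append(f"{self.name}={self.value} not in [{lo}, {hi}]")
        return self

    @property
    def passed(self):
        return not self.failures

# A system expert would express one analysis step roughly as:
result = Check("plateau_current_A", 765.0).is_positive().is_between(700, 800)
print(result.passed)  # True
```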
Poster THPPC079 [1.480 MB] | ||
THPPC082 | Monitoring of the National Ignition Facility Integrated Computer Control System | controls, database, experiment, framework | 1266 |
|
|||
Funding: This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. #LLNL-ABS-632812 The Integrated Computer Control System (ICCS) used by the National Ignition Facility (NIF) provides comprehensive status and control capabilities for operating approximately 100,000 devices through 2,600 processes located on 1,800 servers, front-end processors and embedded controllers. Understanding the behavior of complex, large-scale, operational control software and improving system reliability and availability are critical maintenance activities. In this paper we describe the ICCS diagnostic framework, with tunable detail levels and automatic rollovers, and its use in analyzing system behavior. ICCS recently added Splunk as a tool for improved archiving and analysis of these log files (about 20 GB, or 35 million logs, per day). Splunk now continuously captures all ICCS log files for both real-time examination and exploration of trends. Its powerful search query language and user interface allow interactive exploration of log data to visualize specific indicators of system performance, assist in problem analysis, and provide instantaneous notification of specific system behaviors.
|||
Poster THPPC082 [4.693 MB] | ||
THPPC092 | FAIR Timing System Developments Based on White Rabbit | timing, controls, network, FPGA | 1288 |
|
|||
A new timing system based on White Rabbit (WR) is being developed for the upcoming FAIR facility at GSI, in collaboration with CERN, other institutes and industry partners. The timing system is responsible for the synchronization of nodes with nanosecond accuracy and for the distribution of timing messages, which allows real-time control of the accelerator equipment. WR is a fully deterministic Ethernet-based network for general data transfer and synchronization, based on Synchronous Ethernet and PTP. The ongoing development at GSI aims for a miniature timing system, which is part of the control system of a proton source that will be used at one of the accelerators at FAIR. Such a timing system consists of a Data Master generating timing messages, which are forwarded by a WR switch to a handful of timing receivers. The next step is an enhancement of the robustness, reliability and scalability of the system. These features will be integrated in the forthcoming CRYRING control system at GSI. CRYRING serves as a prototype and testing ground for the final control system for FAIR. The contribution presents the overall design and status of the timing system development. | |||
Poster THPPC092 [0.549 MB] | ||
THPPC102 | Comparison of Synchronization Layers for Design of Timing Systems | timing, network, Ethernet, real-time | 1296 |
|
|||
Two synchronization layers for timing systems in large experimental physics control systems are compared. White Rabbit (WR), which is an emerging standard, is compared against the well-established event-based approach. Several typical timing system services have been implemented on an FPGA using WR to explore its concepts and architecture, which is fundamentally different from an event-based one. Both timing system synchronization layers were evaluated based on typical requirements of current accelerator projects and with regard to other parameters such as scalability. The proposed design methodology demonstrates how WR can be deployed in future accelerator projects. | |||
Poster THPPC102 [1.796 MB] | ||
THPPC112 | The LANSCE Timing Reference Generator | timing, controls, neutron, EPICS | 1321 |
|
|||
The Los Alamos Neutron Science Center is an 800 MeV linear proton accelerator at Los Alamos National Laboratory. For optimum performance, power modulators must be tightly coupled to the phase of the power grid. Downstream at the neutron scattering center there is a competing requirement that rotating choppers follow the changing phase of neutron production in order to remove unwanted energy components from the beam. While their powerful motors are actively accelerated and decelerated to track accelerator timing, they cannot track instantaneous grid phase changes. A new timing reference generator has been designed to couple the accelerator to the power grid through a phase-locked loop. This allows some slip between the phase of the grid and the accelerator so that the modulators stay within their timing margins, but the demands on the choppers are relaxed. This new timing reference generator is implemented in 64-bit floating point math in an FPGA. Operators in the control room have real-time network control over the AC zero-crossing offset, the maximum allowed drift, and the slew rate, the parameter that determines how tightly the phase of the accelerator is coupled to the power grid.
LA-UR-13-21289 |
|||
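The interplay of zero-crossing offset, maximum allowed drift and slew rate described above can be modelled in a few lines. The control law and all numbers below are illustrative only, not the LANSCE implementation:

```python
# Conceptual model of a slew-rate-limited phase tracker: the timing
# reference follows the grid's zero-crossing phase, each per-cycle
# correction is clamped by a slew-rate limit, and the accumulated slip
# is capped by a maximum allowed drift.

def track_phase(grid_phases, slew_limit, max_drift, offset=0.0):
    """Follow a sequence of grid phases (degrees), one per AC cycle."""
    ref = grid_phases[0] + offset          # start in phase with the grid
    history = []
    for grid in grid_phases:
        target = grid + offset
        step = max(-slew_limit, min(slew_limit, target - ref))  # slew clamp
        drift = target - (ref + step)      # slip remaining after the step
        if abs(drift) > max_drift:         # never exceed the allowed drift
            step += drift - (max_drift if drift > 0 else -max_drift)
        ref += step
        history.append(ref)
    return history

# A sudden 5-degree grid phase jump is tracked gradually (2 deg/cycle):
print(track_phase([0, 5, 5, 5], slew_limit=2, max_drift=10))  # [0, 2, 4, 5]
```

A large jump that would exceed the drift cap forces a faster correction, so the reference never slips further than `max_drift` from the grid.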
THPPC138 | A System for Automatic Locking of Resonators of Linac at IUAC | controls, linac, operation, feedback | 1376 |
|
|||
The superconducting LINAC booster of IUAC consists of five cryostats housing a total of 27 Nb quarter wave resonators (QWRs). The QWRs are phase-locked against the master oscillator at a frequency of 97 MHz. Cavity frequency tuning is done by a helium-gas-based slow tuner. Presently, the frequency tuning and cavity phase locking are done from the control room consoles. To automate LINAC operation, an automatic phase-locking system has been implemented: the slow-tuner gas pressure is automatically controlled in response to the frequency error of the cavity, and the fast tuner is automatically triggered into phase lock when the frequency is within the lock window. This system has been implemented successfully on a few cavities and is now being installed for the remaining cavities of the LINAC booster.
[1] S. Ghosh et al., Phys. Rev. ST Accel. Beams 12, 040101 (2009). |
|||
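The two-stage locking logic described above (slow tuner driven by the frequency error, fast tuner engaged once inside the lock window) can be sketched as a simple loop. The gain and window values are invented for the sketch, not IUAC parameters:

```python
# Toy model of automatic resonator locking: each iteration, the slow
# tuner (helium gas pressure) removes a fraction of the frequency
# error; once the error is inside the lock window, the fast tuner is
# triggered into phase lock.

def auto_lock(freq_error_hz, gain=0.5, lock_window_hz=20.0, max_steps=100):
    """Return (steps, locked) after iterating the slow-tuner loop."""
    error = freq_error_hz
    for step in range(1, max_steps + 1):
        if abs(error) <= lock_window_hz:
            return step, True          # fast tuner takes over: phase locked
        error -= gain * error          # slow tuner removes part of the error
    return max_steps, False            # never reached the lock window

steps, locked = auto_lock(200.0)
print(steps, locked)  # 5 True
```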
Poster THPPC138 [4.654 MB] | ||
THCOBB01 | An Upgraded ATLAS Central Trigger for 2015 LHC Luminosities | detector, timing, electronics, luminosity | 1388 |
|
|||
The LHC collides protons at a rate of ~40 MHz and each collision produces ~1.5 MB of data from the ATLAS detector (~60 TB of data per second). The ATLAS trigger system reduces the input rate to a more reasonable storage rate of about 400 Hz. The Level-1 trigger reduces the input rate to ~100 kHz with a decision latency of ~2.5 us and is responsible for initiating the readout of data from all the ATLAS subdetectors. It is primarily composed of the Calorimeter Trigger, the Muon Trigger, and the Central Trigger Processor (CTP). The CTP collects trigger information from all Level-1 systems and produces the Level-1 trigger decision. The LHC has now shut down for upgrades and will return in 2015 with an increased luminosity and a center-of-mass energy of 14 TeV. With higher luminosities, the number and complexity of Level-1 triggers will increase in order to satisfy the physics goals of ATLAS while keeping the total Level-1 rates at or below 100 kHz. In this talk we will discuss the current Central Trigger Processor and the justification for its upgrade, including the plans to satisfy the requirements of the 2015 physics run at the LHC.
The abstract is submitted on behalf of the ATLAS Collaboration. The name of the presenter will be chosen by the collaboration and communicated upon acceptance of the abstract. |
|||
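The rates quoted in the abstract imply the following back-of-the-envelope figures, checked here in a few lines of arithmetic:

```python
# Sanity check of the quoted ATLAS trigger numbers: a ~40 MHz
# collision rate at ~1.5 MB per event gives ~60 TB/s into the trigger,
# which Level-1 cuts to ~100 kHz and the full chain to ~400 Hz.

collision_rate_hz = 40e6
event_size_mb = 1.5

input_tb_per_s = collision_rate_hz * event_size_mb / 1e6   # MB/s -> TB/s
level1_reduction = collision_rate_hz / 100e3               # 40 MHz -> 100 kHz
total_reduction = collision_rate_hz / 400                  # 40 MHz -> 400 Hz

print(input_tb_per_s)    # 60.0  (TB per second)
print(level1_reduction)  # 400.0 (Level-1 rejects 399 of every 400 events)
print(total_reduction)   # 100000.0 (overall 1-in-100,000 selection)
```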
Slides THCOBB01 [10.206 MB] | ||
THCOBB04 | Overview of the ELSA Accelerator Control System | database, controls, hardware, Linux | 1396 |
|
|||
The Electron Stretcher Facility ELSA provides a beam of polarized electrons with a maximum energy of 3.2 GeV for hadron physics experiments. The in-house developed control system has been continuously improved during the last 15 years of operation. Its top layer consists of a distributed shared-memory database and several core applications running on a Linux host. The interconnectivity to hardware devices is built up with a second layer of the control system operating on PCs and VMEs. High-level applications are integrated into the control system using C and C++ libraries. An event-based messaging system notifies attached applications about parameter updates in near real-time. The overall system structure and specific implementation details of the control system will be presented. | |||
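The event-based parameter-update messaging described above follows a publish/subscribe pattern, sketched here in-process (the real ELSA system is distributed across hosts; all names are illustrative):

```python
# Minimal publish/subscribe sketch: applications register a callback
# for a parameter and are notified when the shared database updates it.

class ParameterDB:
    def __init__(self):
        self._values = {}
        self._subscribers = {}     # parameter name -> list of callbacks

    def subscribe(self, name, callback):
        self._subscribers.setdefault(name, []).append(callback)

    def set(self, name, value):
        self._values[name] = value
        for cb in self._subscribers.get(name, []):
            cb(name, value)        # near-real-time update notification

    def get(self, name):
        return self._values[name]

seen = []
db = ParameterDB()
db.subscribe("beam_energy_gev", lambda n, v: seen.append((n, v)))
db.set("beam_energy_gev", 3.2)
print(seen)  # [('beam_energy_gev', 3.2)]
```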
Slides THCOBB04 [0.527 MB] | ||
THCOBB05 | Switching Solution – Upgrading a Running System | controls, software, EPICS, hardware | 1400 |
|
|||
At Keck Observatory, we are upgrading our existing operational telescope control system and must do it with as little operational impact as possible. This paper describes our current integrated system and how we plan to create a more distributed system and deploy it subsystem by subsystem. This will be done by systematically extracting the existing subsystem then replacing it with the new upgraded distributed subsystem maintaining backwards compatibility as much as possible to ensure a seamless transition. We will also describe a combination of cabling solutions, design choices and a hardware switching solution we’ve designed to allow us to seamlessly switch signals back and forth between the current and new systems. | |||
Slides THCOBB05 [1.482 MB] | ||
THCOBA01 | Evolution of the Monitoring in the LHCb Online System | monitoring, database, status, distributed | 1408 |
|
|||
The LHCb online system relies on a large and heterogeneous IT infrastructure: it comprises more than 2000 servers and embedded systems and more than 200 network devices. The low-level monitoring of the equipment was originally done with Nagios. In 2011, we replaced the single Nagios instance with a distributed Icinga setup presented at ICALEPCS 2011. This paper will present, with more hindsight, the improvements we observed as well as the problems encountered. Finally, we will describe some of our prospects for the future after the Long Shutdown period, namely Shinken and Ganglia. | |||
Slides THCOBA01 [1.426 MB] | ||
THCOCA01 | A Design of Sub-Nanosecond Timing and Data Acquisition Endpoint for LHAASO Project | timing, network, electronics, controls | 1442 |
|
|||
Funding: National Science Foundation of China (No. 11005065 and 11275111) The particle detector array (KM2A) of the Large High Altitude Air Shower Observatory (LHAASO) project consists of 5631 electron and 1221 muon detection units over a 1.2 square km area. To reconstruct the incident angle of cosmic rays, sub-nanosecond time synchronization must be achieved. The White Rabbit (WR) protocol is applied for its high synchronization precision, automatic delay compensation and intrinsically high-bandwidth data transmission capability. This paper describes the design of a sub-nanosecond timing and data acquisition endpoint for KM2A. It works as an FMC mezzanine mounted on detector-specific front-end electronics boards and provides the WR-synchronized clock and timestamp. The endpoint supports the Etherbone protocol for remote monitoring and firmware updates. Moreover, a hardware UDP engine is integrated in the FPGA to pack and transmit raw data from the detector electronics to the readout network. Preliminary tests demonstrate a timing precision of 29 ps (RMS) and a timing accuracy better than 100 ps (RMS). * The authors are with Key Laboratory of Particle and Radiation Imaging, Department of Engineering Physics, Tsinghua University, Beijing, China, 100084 * pwb.thu@gmail.com
|||
Slides THCOCA01 [1.182 MB] | ||
FRCOAAB01 | CSS Scan System | controls, experiment, EPICS, software | 1461 |
|
|||
Funding: SNS is managed by UT-Battelle, LLC, under contract DE-AC05-00OR22725 for the U.S. Department of Energy Automation of beam line experiments requires more flexibility than the control of an accelerator. The sample environment devices to control as well as requirements for their operation can change daily. Tools that allow stable automation of an accelerator are not practical in such a dynamic environment. On the other hand, falling back to generic scripts opens too much room for error. The Scan System offers an intermediate approach. Scans can be submitted in numerous ways, from pre-configured operator interface panels, graphical scan editors, scripts, the command line, or a web interface. At the same time, each scan is assembled from a well-defined set of scan commands, each one with robust features like error checking, time-out handling and read-back verification. Integrated into Control System Studio (CSS), scans can be monitored, paused, modified or aborted as needed. We present details of the implementation and first usage experience. |
|||
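A scan command with the robustness features named above (error checking, time-out handling, read-back verification) might look like the following sketch. The device layer is simulated by a dictionary, and the class name is invented rather than taken from the actual CSS Scan System API:

```python
import time

# Sketch of a robust scan command: validate the target, write the
# value, then verify the readback within a timeout. A real scan system
# would talk to live devices; here a dict stands in for the device layer.

class SetCommand:
    def __init__(self, device, value, tolerance=0.01, timeout_s=1.0):
        self.device, self.value = device, value
        self.tolerance, self.timeout_s = tolerance, timeout_s

    def run(self, devices):
        if self.device not in devices:                # error checking
            raise KeyError(f"unknown device {self.device!r}")
        devices[self.device] = self.value             # write
        deadline = time.monotonic() + self.timeout_s  # time-out handling
        while time.monotonic() < deadline:
            if abs(devices[self.device] - self.value) <= self.tolerance:
                return True                           # read-back verified
            time.sleep(0.01)
        raise TimeoutError(f"{self.device} did not reach {self.value}")

devices = {"motor_x": 0.0}
scan = [SetCommand("motor_x", 1.0), SetCommand("motor_x", 2.0)]
for cmd in scan:
    cmd.run(devices)
print(devices)  # {'motor_x': 2.0}
```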
Slides FRCOAAB01 [1.853 MB] | ||
FRCOAAB02 | Karabo: An Integrated Software Framework Combining Control, Data Management, and Scientific Computing Tasks | controls, device-server, GUI, software | 1465 |
|
|||
The expected very high data rates and volumes at the European XFEL demand an efficient concurrent approach of performing experiments. Data analysis must already start whilst data is still being acquired and initial analysis results must immediately be usable to re-adjust the current experiment setup. We have developed a software framework, called Karabo, which allows such a tight integration of these tasks. Karabo is in essence a pluggable, distributed application management system. All Karabo applications (called “Devices”) have a standardized API for self-description/configuration, program-flow organization (state machine), logging and communication. Central services exist for user management, access control, data logging, configuration management etc. The design provides a very scalable but still maintainable system that at the same time can act as a fully-fledged control or a highly parallel distributed scientific workflow system. It allows simple integration and adaption to changing control requirements and the addition of new scientific analysis algorithms, making them automatically and immediately available to experimentalists. | |||
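The standardized self-description/configuration API described for Karabo devices can be illustrated conceptually. The classes, schema layout and state names below are an invented sketch, not the real Karabo framework:

```python
# Conceptual sketch of a self-describing "Device": each device class
# declares its configurable parameters in a schema, validates incoming
# configuration against it, and exposes a self-description.

class Device:
    schema = {}                          # expected parameters and defaults

    def __init__(self, config):
        self.state = "INIT"              # toy state machine
        self.config = dict(self.schema)  # start from schema defaults
        for key, value in config.items():
            if key not in self.schema:
                raise KeyError(f"unknown parameter {key!r}")
            self.config[key] = value
        self.state = "READY"

    def describe(self):
        """Self-description: what this device can be configured with."""
        return {"type": type(self).__name__, "schema": self.schema}

class CameraDevice(Device):
    schema = {"exposure_ms": 10, "gain": 1.0}

cam = CameraDevice({"exposure_ms": 50})
print(cam.state, cam.config)  # READY {'exposure_ms': 50, 'gain': 1.0}
```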
Slides FRCOAAB02 [2.523 MB] | ||
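The "Device" concept described above, a standardized API combining self-description and a state machine, can be sketched as follows. This is a minimal illustration of the idea, assuming a hypothetical `Device` base class and `expected_parameters` hook; it is not the real Karabo API.

```python
# Hypothetical sketch of a self-describing device with a state machine;
# not the actual Karabo API.
from enum import Enum

class State(Enum):
    INIT = "INIT"
    STOPPED = "STOPPED"
    ACQUIRING = "ACQUIRING"
    ERROR = "ERROR"

class Device:
    # Allowed transitions form the device's state machine.
    TRANSITIONS = {
        (State.INIT, State.STOPPED),
        (State.STOPPED, State.ACQUIRING),
        (State.ACQUIRING, State.STOPPED),
        (State.STOPPED, State.ERROR),
        (State.ACQUIRING, State.ERROR),
    }

    def __init__(self, device_id: str, **config):
        self.device_id = device_id
        # Self-described defaults merged with user-supplied configuration.
        self.config = {**self.expected_parameters(), **config}
        self.state = State.INIT

    @staticmethod
    def expected_parameters() -> dict:
        """Self-description: parameters this device exposes to GUIs and brokers."""
        return {"pollInterval": 1.0}

    def set_state(self, new: State):
        if (self.state, new) not in self.TRANSITIONS:
            raise ValueError(f"illegal transition {self.state} -> {new}")
        self.state = new

class CameraDevice(Device):
    """A plugin device extends the schema rather than re-implementing plumbing."""
    @staticmethod
    def expected_parameters() -> dict:
        return {**Device.expected_parameters(), "exposureTime": 0.1}
```

Because every device publishes its own schema and obeys a common state machine, central services (logging, access control, GUIs) can treat control and workflow devices uniformly, which is what makes the framework usable for both roles.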
FRCOAAB04 | Data Driven Campaign Management at the National Ignition Facility | experiment, diagnostics, target, database | 1473 |
|
|||
Funding: * This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. #LLNL-ABS-633255 The Campaign Management Tool Suite (CMT) provides tools for establishing the experimental goals, obtaining reviews and approvals, and ensuring readiness for a NIF experiment. Over the last two years, CMT has significantly increased the number of diagnostics that it supports, to around 50. Meeting this ever-increasing demand for new functionality has resulted in a design whereby more and more of the functionality can be specified in data rather than coded directly in Java. To do this, support tools have been written that manage various aspects of the data and also handle potential inconsistencies that can arise from a data-driven paradigm. For example: drop-down menus are specified in the Parts and Lists Manager, the Shot Setup reports that list the configurations for diagnostics are specified in the database, the review tool Approval Manager has a rules engine that can be changed without a software deployment, various template managers provide predefined entry of hundreds of parameters, and finally a stale-data tool validates that experiments contain valid data items. The trade-offs, benefits and issues of adapting and implementing this data-driven philosophy will be presented. |
|||
Slides FRCOAAB04 [0.929 MB] | ||
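The data-driven approach described above, where menu contents and review rules live in data rather than in compiled Java, can be illustrated with a small sketch. The rule structure, field names and limits below are invented for illustration; they are not the actual CMT schema.

```python
# Illustrative sketch: rules and menus as editable data, so they can change
# without a software deployment. All names and limits are hypothetical.

# Menu choices as data, as a parts-and-lists manager might store them.
MENUS = {
    "diagnostic_type": ["x-ray imager", "neutron TOF", "optical streak"],
}

# Each rule is a record: a field, an operator, a limit and a message.
RULES = [
    {"field": "laser_energy_kJ", "op": "<=", "limit": 1800.0,
     "message": "laser energy exceeds facility limit"},
    {"field": "diagnostic_count", "op": "<=", "limit": 50,
     "message": "too many diagnostics configured"},
]

OPS = {"<=": lambda a, b: a <= b, ">=": lambda a, b: a >= b}

def check_setup(setup: dict, rules=RULES):
    """Return messages of all violated rules; an empty list means approved."""
    failures = []
    for rule in rules:
        value = setup.get(rule["field"])
        if value is None or not OPS[rule["op"]](value, rule["limit"]):
            failures.append(rule["message"])
    return failures
```

The trade-off the abstract mentions is visible even here: changing a limit is a data edit, but the support tooling must now guard against inconsistent or stale data instead of the compiler doing so.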
FRCOAAB08 | The LIMA Project Update | detector, controls, hardware, software | 1489 |
|
|||
LIMA, a Library for Image Acquisition, was developed at the ESRF to control high-performance 2D detectors used in scientific applications. It provides generic access to common image acquisition concepts, from detector synchronization to online data reduction, including image transformations and storage management. An abstraction of the low-level 2D control defines the interface for camera plugins, allowing different degrees of hardware optimization. Scientific 2D data throughput of up to 250 MB/s is ensured by multi-threaded algorithms exploiting multi-CPU/core technologies. Eighteen detectors are currently supported by LIMA, covering CCD, CMOS and pixel detectors, and video GigE cameras. Control-system agnostic by design, LIMA has become the de facto 2D standard in the TANGO community. An active collaboration among large facilities, research laboratories and detector manufacturers joins efforts towards the integration of new core features, detectors and data processing algorithms. The second-generation LIMA 2 will provide major improvements in several key core elements, such as buffer management, data format support (including HDF5) and user-defined software operations, among others. |
Slides FRCOAAB08 [1.338 MB] | ||
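The camera-plugin abstraction described above, a generic acquisition core driving any detector through a small hardware interface, can be sketched as follows. The `HwInterface`, `SimulatedCamera` and `Control` names below are illustrative stand-ins, not the real LIMA C++/Python API.

```python
# Minimal sketch of a plugin abstraction; not the actual LIMA API.
from abc import ABC, abstractmethod
from typing import List

class HwInterface(ABC):
    """Contract every camera plugin must fulfil."""
    @abstractmethod
    def prepare(self, nb_frames: int, exposure_s: float) -> None: ...
    @abstractmethod
    def start(self) -> None: ...
    @abstractmethod
    def read_frame(self, idx: int) -> bytes: ...

class SimulatedCamera(HwInterface):
    """Stand-in plugin producing synthetic frames."""
    def prepare(self, nb_frames: int, exposure_s: float) -> None:
        self.nb_frames, self.exposure_s = nb_frames, exposure_s
    def start(self) -> None:
        # Each synthetic 4-byte frame is filled with its own index.
        self.frames = [bytes([i % 256] * 4) for i in range(self.nb_frames)]
    def read_frame(self, idx: int) -> bytes:
        return self.frames[idx]

class Control:
    """Generic acquisition loop, independent of the underlying detector."""
    def __init__(self, hw: HwInterface):
        self.hw = hw
    def acquire(self, nb_frames: int, exposure_s: float) -> List[bytes]:
        self.hw.prepare(nb_frames, exposure_s)
        self.hw.start()
        return [self.hw.read_frame(i) for i in range(nb_frames)]
```

Supporting a new detector then means implementing only the low-level interface; synchronization, data reduction and storage stay in the shared core, which is how one library can cover CCD, CMOS, pixel and GigE cameras alike.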