Paper | Title | Other Keywords | Page |
---|---|---|---|
MOC3O04 | System Identification and Robust Control for the LNLS UVX Fast Orbit Feedback | controls, feedback, vacuum, power-supply | 30 |
This paper describes the optimization work carried out to improve the performance of the LNLS UVX fast orbit feedback system. Black-box system identification techniques were applied to model the dynamic behavior of BPM electronics, orbit correctors, communication networks and vacuum chamber eddy currents. Due to the heterogeneity of the dynamic responses among the several units of those subsystems, as well as variations in the static response matrix caused by accelerator optics changes during operation, robust control techniques were employed to achieve appropriate closed-loop performance and robustness. | |||
Slides MOC3O04 [3.796 MB] | ||
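The paper does not give its identification procedure in the abstract; as a rough illustration of black-box identification of a single corrector-to-BPM channel, a first-order-plus-dead-time model can be fitted to a measured step response with SciPy. The signal values, sample grid and model below are assumptions for illustration only, not taken from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

# Placeholder step response of one corrector -> BPM channel (not real data):
# t in seconds, y_meas in micrometres.
t = np.linspace(0.0, 0.1, 1000)
y_meas = 5.0 * (1 - np.exp(-np.clip(t - 0.002, 0, None) / 0.008))
y_meas += np.random.normal(scale=0.05, size=t.size)

def first_order_plus_delay(t, gain, tau, delay):
    """Step response of gain * exp(-s*delay) / (tau*s + 1)."""
    return np.where(t < delay, 0.0, gain * (1 - np.exp(-(t - delay) / tau)))

popt, _ = curve_fit(first_order_plus_delay, t, y_meas, p0=[1.0, 0.01, 0.001])
gain, tau, delay = popt
print(f"gain={gain:.2f} um, tau={tau*1e3:.2f} ms, delay={delay*1e3:.2f} ms")
```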
MOM306 | Status of the PAL-XFEL Control System | controls, timing, undulator, electron | 79 |
Pohang Accelerator Laboratory (PAL) started an X-ray free electron laser project (PAL-XFEL) in 2011. In the PAL-XFEL, an electron beam of 200 pC will be generated from a photocathode RF gun and accelerated to 10 GeV by a linear accelerator. The electron beam will then pass through the undulator section to produce hard X-ray radiation. In 2015, we will finish the installation and start commissioning of the PAL-XFEL. In this paper, we introduce the PAL-XFEL and describe its present status. Details of the control system will be given, including the network system, the timing system, the hardware control systems and the machine interlock system. | |||
Slides MOM306 [1.842 MB] | ||
MOPGF019 | Experiences and Lessons Learned in Transitioning Beamline Front-Ends from VMEbus to Modular Distributed I/O | controls, PLC, interface, Linux | 121 |
Historically Diamond's photon front-ends have adopted control systems based on the VMEbus platform. With increasing pressure towards improved system versatility, space constraints and the issues of long term support for the VME platform, a programme of migration to distributed remote I/O control systems was undertaken. This paper reports on the design strategies, benefits and issues addressed since the new design has been operational. | |||
Poster MOPGF019 [0.477 MB] | ||
MOPGF035 | Control System Status of SuperKEKB Injector Linac | EPICS, linac, operation, controls | 170 |
Toward the SuperKEKB project, the injector linac upgrade is ongoing, aiming at stable electron/positron beam operation with low emittance and high-intensity bunch charge. To obtain such a high-quality beam, we have been commissioning many newly developed subsystems, including a low-emittance photocathode RF gun, since October 2013. Eventually, we will perform simultaneous top-up injection for the four independent storage rings, including two light sources. Stable beam operation for as long as possible is desired, since the prospective physics results strongly depend on the reliability and availability of accelerator operation. Since the middle stage of the KEKB project, the injector linac control system has been gradually transferred from the in-house RPC-based system to an EPICS-based one. We are expanding the existing control system for the newly installed devices, such as network-attached power supplies and a timing jitter monitoring system. In addition, many commissioning tools are now under development to accelerate the high-quality beam development. In this paper, we describe the present status of the injector linac control system and future plans in detail. | |||
Poster MOPGF035 [1.144 MB] | ||
MOPGF036 | Control System Developments at the Electron Storage Ring DELTA | controls, software, EPICS, hardware | 173 |
Increasing demands, mandatory replacement of obsolete controls equipment as well as the introduction of new soft- and hardware technologies with short innovation cycles are some of the reasons why control systems need to be revised continuously. Thus, also at the EPICS-based DELTA control system, several projects have been tackled in recent years: (1) Embedding the new CHG-based short-pulse facility for VUV and THz radiation required, for example, the integration of IP-cameras, Raspberry-Pi PCs and EtherCat/TwinCat wired I/O-devices. (2) The request for a staff-free control room led to the programming of new web applications using Python and the Django framework. This development resulted in a web-based interlock system that can be run, amongst others, on Android-based mobile devices. (3) The virtualization infrastructure for server consolidation has been extended and migrated from XEN to the kernel based KVM approach. (4) I/O-units which were connected via conventional fieldbus systems (CAN, GPIB, RS-232/485), are now gradually replaced by TCP/IP-controlled devices. This paper describes details of these upgrades and further new developments. | |||
Poster MOPGF036 [1.163 MB] | ||
MOPGF040 | Keck Telescope Control System Upgrade | controls, software, hardware, operation | 188 |
The Keck telescopes, located at one of the world's premier sites for astronomy, were the first of a new generation of very large ground-based optical/infrared telescopes with the first Keck telescope beginning science operations in May of 1993, and the second in October of 1996. The components of the telescopes and control systems are more than 15 years old. The upgrade to the control systems of the telescopes consists of mechanical, electrical, software and network components with the overall goals of improving performance, increasing reliability, addressing serious obsolescence issues and providing a knowledge refresh. This paper is a continuation of one published at the 2013 conference and will describe the current status of the control systems upgrade. It will detail the implementation and testing for the Keck II telescope, including successes and challenges met to date. Transitioning to nighttime operations will be discussed, as will implementation on the Keck I telescope. | |||
Poster MOPGF040 [3.465 MB] | ||
MOPGF067 | MeerKAT Control and Monitoring System Architecture | controls, interface, monitoring, FPGA | 247 |
Funding: SKA South Africa, National Research Foundation of South Africa, Department of Science and Technology. The 64-dish MeerKAT radio telescope, currently under construction, comprises several loosely coupled independent subsystems, requiring a higher level Control and Monitoring (CAM) system to operate as a coherent instrument. Many control-system architectures are bus-like, clients directly receiving monitoring points from Input/Output Controllers; instead a multi-layer architecture based on point-to-point Karoo Array Telescope Control Protocol (KATCP) connections is used for MeerKAT. Clients (e.g. operators or scientists) only communicate directly with the outer layer of the telescope; only telescope interactions required for the given role are exposed to the user. The layers, interconnections, and how this architecture is used to meet telescope system requirements are described. Requirements include: independently controllable telescope subsets; dynamically allocating telescope resources to individual users or observations, preventing the control of resources not allocated to them; commensal observations sharing resources; automatic detection of, and responses to, system-level alarm events; high level operator controls and health displays; automatic execution of scheduled observations.
Poster MOPGF067 [60.303 MB] | ||
MOPGF088 | Integrating the Measuring System of Vibration and Beam Position Monitor to Study the Beam Stability | controls, monitoring, data-acquisition, vacuum | 277 |
For a low-emittance light source, beam orbit motion needs to be controlled to the submicron level to obtain high-quality light. Magnet vibration, especially of the quadrupoles, is one of the main sources of beam orbit instability. In order to study the relationship between vibration and beam motion, it is highly desirable to use a synchronous data acquisition system that integrates the vibration measurement and beam position monitor systems, especially for coherence analysis. Larger vibrations, such as earthquakes, are also deleterious to beam stability and can even cause a beam trip due to a quench of the superconducting RF cavity. A data acquisition system integrated with an earthquake detector is therefore also necessary to display and archive the data in the control system. The data acquisition systems for the vibration and earthquake measurements are summarized in this report, and the relationship between beam motion and magnet vibration is also studied here. | |||
Poster MOPGF088 [0.504 MB] | ||
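The coherence analysis mentioned above can be sketched with SciPy once vibration and BPM samples are available on a common clock; the sampling rate and placeholder signals below are assumptions for illustration only.

```python
import numpy as np
from scipy import signal

fs = 10_000.0  # assumed common sampling rate [Hz] of the synchronous DAQ
t = np.arange(0, 10, 1 / fs)
# Placeholder arrays standing in for synchronously acquired data:
vibration = np.random.randn(t.size)                     # quadrupole girder accelerometer
beam_pos = 0.3 * vibration + np.random.randn(t.size)    # BPM reading

# Magnitude-squared coherence between magnet vibration and beam motion
f, Cxy = signal.coherence(vibration, beam_pos, fs=fs, nperseg=4096)
for freq, c in zip(f, Cxy):
    if c > 0.5:
        print(f"{freq:7.1f} Hz  coherence {c:.2f}")
```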
MOPGF090 | Control of Fast-Pulsed Power Converters at CERN Using a Function Generator/Controller | timing, controls, software, Ethernet | 281 |
The electrical power converter group at CERN is responsible for the design of fast-pulsed power converters. These generate a flat-top pulse of the order of a few milliseconds. Control of these power converters is orchestrated by an embedded computer, known as the Function Generator/Controller (FGC). The FGC is the main component in the so-called RegFGC3 chassis, which also houses a variety of purpose-built cards. Ensuring the generation of the pulse at a precise moment, typically when the beam passes, is paramount to the correct behaviour of the accelerator. To that end, the timing distribution and posterior handling by the FGC must be well defined. Also important is the ability to provide operational feedback, and to configure the FGC, the converter, and the pulse characteristics. This paper presents an overview of the system architecture as well as the results obtained during the commissioning of this control solution in CERN's new Linac4. | |||
Poster MOPGF090 [8.198 MB] | ||
MOPGF125 | The General Interlock System (GIS) for FAIR | hardware, software, PLC, pick-up | 374 |
The interlock system for FAIR, named the General Interlock System (GIS), is part of the Machine Protection System, which protects the accelerator from damage by misled beams. The GIS collects hardware signals from various interlock sources through up to 60 distributed remote I/O stations, connected via PROFINET to a central PLC CPU. From these a bit-field is built and sent to the Interlock Processor via a simple Ethernet point-to-point connection. Additional software interlock sources can be picked up by the Interlock Processor via the UDP/IP protocol. The project was divided into two development phases. Phase A contains the interlock signal gathering (HW and SW) and a status viewer. Phase B entails the fully functional interlock logic (with support for dynamic configuration), the interface with the timing system, interlock signal acknowledgement, interlock signal masking, archiving and logging. The realization of phase A is presented in this paper. | |||
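The abstract states only that software interlock sources reach the Interlock Processor over UDP/IP; a minimal sketch of such a source is shown below. The host, port and one-byte status payload are assumptions, not the GIS message format.

```python
import socket

INTERLOCK_PROCESSOR = ("gis-proc.example.org", 5555)  # assumed endpoint
SOURCE_ID = 7                                         # assumed source number

def send_software_interlock(ok: bool) -> None:
    """Send one status datagram: [source id, 0x01 = permit / 0x00 = interlock]."""
    payload = bytes([SOURCE_ID, 0x01 if ok else 0x00])
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, INTERLOCK_PROCESSOR)

send_software_interlock(ok=False)  # raise an interlock
```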
MOPGF134 | Design of Fast Machine Protection System for the C-ADS Injection I | controls, FPGA, interface, timing | 393 |
In this paper a new fast machine protection system is proposed. The system is designed for Injection I of C-ADS, which requires a fault reaction time of less than 20 μs and fewer than seven one-minute downtimes over a whole year. The system consists of a highly reliable control network, based on a control board and several front-end I/O sub-boards, and a nanosecond-precision timing system using the White Rabbit protocol. The control board and the front-end I/O sub-boards are each redundant. The communication network combines star and tree topologies, with 2.5 GHz optical fibre linking all the nodes. This paper pioneers the use of a nanosecond timing system based on the White Rabbit protocol to determine the time and sequence of each system failure. Another advantage of the design is that it uses standard FMC modules and an easily extensible structure, which makes the design easy to reuse in a large accelerator. | |||
Poster MOPGF134 [0.886 MB] | ||
MOPGF142 | Development of a Network-based Personal Dosimetry System, KURAMA-micro | radiation, monitoring, operation, detector | 420 |
As the recovery from the nuclear accident in Fukushima progresses, strong demands arise for the continuous monitoring of individual radiation exposure based on the action histories of a large group, such as the residents returning to their hometowns after decontamination, or the workers involved in the decommissioning of the Fukushima Daiichi nuclear power plant. KURAMA-micro, a personal dosimetry system with network and positioning capability, has been developed for this purpose. KURAMA-micro consists of a semiconductor dosimeter and a DAQ board based on OpenATOMS. Each unit records radiation data tagged with measurement time and location, and uploads the data to the server over a ZigBee-based network once the unit comes near one of the access points deployed within the expected activity range of the users. Location data are basically obtained by a GPS unit, and an additional radio beacon scheme using the ZigBee broadcast protocol is also used for indoor positioning. The development of a prototype KURAMA-micro is finished, and a field test with the workers of a nuclear reactor under normal operation is planned for the spring of 2015. | |||
MOPGF149 | Nuclotron and NICA Control System Development Status | TANGO, controls, database, monitoring | 437 |
The Nuclotron is a 6 GeV/n superconducting proton synchrotron operating at JINR, Dubna since 1993. It will be the core of the future accelerating complex NICA which is under construction now. NICA will provide collider experiments with heavy ions at nucleon-nucleon centre-of-mass energies of 4-11 GeV. The TANGO based control system of the accelerating complex is under development now. This paper describes its structure, main features and present status. | |||
Poster MOPGF149 [2.424 MB] | ||
MOPGF158 | Sirius Control System: Design, Implementation Strategy and Measured Performance | controls, hardware, interface, operation | 456 |
Sirius is a new 3 GeV synchrotron light source currently being designed at the Brazilian Synchrotron Light Laboratory (LNLS) in Campinas, Brazil. The control system will be distributed and digitally connected to all equipment in order to avoid analog signal cables. A three-layer control system will be used. The equipment layer uses RS485 serial networks, running at 10 Mbps with a lightweight proprietary protocol over proprietary hardware, in order to achieve good performance. The middle layer, interconnecting these serial networks, is based on BeagleBone Black single-board computers and commercial switches. The operation layer will be composed of PCs running EPICS client programs. A special topology with a dedicated commercial 10 Gbps switch will be used for orbit feedback. The lower layers' software implementation may use either (a) conventional distributed EPICS servers, the traditional approach, or (b) a centralized EPICS server, using data servers and the lightweight proprietary protocol over Ethernet. Both cases use the same hardware and can run concurrently, sharing the control network. Measured performance with these two approaches will be presented. | |||
Poster MOPGF158 [1.511 MB] | ||
MOPGF160 | ARIEL Control System at TRIUMF - Status Update | controls, EPICS, PLC, interface | 460 |
The Advanced Rare Isotope & Electron Linac (ARIEL) facility at TRIUMF has now reached completion of the first phase of construction: the electron linac. A commissioning control system has been built and used to commission the electron gun and two stages of SRF acceleration. Numerous controls subsystems have been deployed, including beamlines, vacuum systems, beamline diagnostics, machine protection system interfaces, LLRF, HPRF, and cryogenics. This paper describes some of the challenges and solutions that were encountered, and the scope of the project to date. An evaluation of some techniques that had been proposed and described at ICALEPCS 2013 is included. | |||
Poster MOPGF160 [1.394 MB] | ||
WEC3O01 | Trigger and RF Distribution Using White Rabbit | timing, FPGA, Ethernet, software | 619 |
White Rabbit is an extension of Ethernet which allows remote synchronization of nodes with jitters of around 10ps. The technology can be used for a variety of purposes. This paper presents a fixed-latency trigger distribution system for the study of instabilities in the LHC. Fixed latency is achieved by precisely time-stamping incoming triggers, notifying other nodes via an Ethernet broadcast containing these time stamps and having these nodes produce pulses at well-defined time offsets. The same system is used to distribute the 89us LHC revolution tick. This paper also describes current efforts for distributing multiple RF signals over a WR network, using a Distributed DDS paradigm. | |||
Slides WEC3O01 [1.465 MB] | ||
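The fixed-latency scheme described above can be illustrated with simple arithmetic: each node adds a constant offset to the broadcast timestamp and fires only if that deadline is still in the future. The constant and example values below are assumptions, not the LHC instability-trigger settings.

```python
FIXED_LATENCY_NS = 500_000  # assumed fixed trigger latency (500 us)

def schedule_pulse(trigger_ts_ns: int, local_wr_time_ns: int):
    """Return the absolute WR time at which to fire, or None if the deadline passed.

    trigger_ts_ns: timestamp of the incoming trigger, as carried in the
                   Ethernet broadcast from the time-stamping node.
    local_wr_time_ns: current White Rabbit time at the receiving node.
    """
    fire_at = trigger_ts_ns + FIXED_LATENCY_NS
    if fire_at <= local_wr_time_ns:
        return None  # too late; report a late trigger instead of firing
    return fire_at

print(schedule_pulse(trigger_ts_ns=1_000_000_000, local_wr_time_ns=1_000_200_000))
```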
WEC3O02 | The Phase-Locked Loop Algorithm of the Function Generation/Controller | controls, timing, Ethernet, real-time | 624 |
This paper describes the phase-locked loop algorithms that are used by the real-time power converter controllers at CERN. The algorithms allow the recovery of the machine time and events received by an embedded controller through WorldFIP or Ethernet-based fieldbuses. During normal operation, the algorithm provides less than 10 μs of time precision and 0.5 μs of clock jitter for the WorldFIP case, and less than 2.5 μs of time precision and 40 ns of clock jitter for the Ethernet case. | |||
Slides WEC3O02 [1.459 MB] | ||
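A PI-based software phase-locked loop of the general kind described above can be sketched as follows; the gains, units and update call are illustrative assumptions, not the FGC implementation.

```python
class SoftwarePll:
    """Discipline a local clock to periodic time messages from a fieldbus.

    Illustrative PI loop only; gains and units are assumptions.
    """
    def __init__(self, kp=0.1, ki=0.01):
        self.kp, self.ki = kp, ki
        self.integral = 0.0
        self.rate_correction = 0.0  # fractional speed-up/slow-down of the local clock

    def update(self, ref_time_us, local_time_us):
        phase_error = ref_time_us - local_time_us
        self.integral += phase_error
        self.rate_correction = self.kp * phase_error + self.ki * self.integral
        return self.rate_correction

pll = SoftwarePll()
print(pll.update(ref_time_us=1000.0, local_time_us=998.7))  # positive: run faster
```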
WEM303 | Virtualisation within the Control System Environment at the Australian Synchrotron | EPICS, controls, synchrotron, hardware | 664 |
Virtualisation technologies significantly improve efficiency and availability of computing services while reducing the total cost of ownership. Real-time computing environments used in distributed control systems require special consideration when it comes to server and application virtualisation. The EPICS environment at the Australian Synchrotron comprises more than 500 interconnected physical devices; their virtualisation holds great potential for reducing risk and maintenance. An overview of the approach taken by the Australian Synchrotron, the involved hardware and software technologies as well as the configuration of the virtualisation eco-system is presented, including the challenges, experiences and lessons learnt. | |||
Slides WEM303 [1.236 MB] | ||
Poster WEM303 [0.963 MB] | ||
WEM304 | Status Monitoring of the EPICS Control System at the Canadian Light Source | controls, EPICS, database, status | 667 |
The CLS uses the EPICS Distributed Control System (DCS) for control and feedback of a linear accelerator, booster ring, electron storage ring, and numerous x-ray beamlines. The number of host computers running EPICS IOC applications has grown to 200, and the number of IOC applications exceeds 700. The first part of this paper will present the challenges and current efforts to monitor and report the status of the control system itself by monitoring the EPICS network traffic. This approach does not require any configuration or application modification to report the currently active applications, and then provide notification of any changes. The second part will cover the plans to use the information collected dynamically to improve upon the information gathered by process variable crawlers for an IRMIS database, with the goal to eventually replace the process variable crawlers. | |||
Slides WEM304 [0.550 MB] | ||
Poster WEM304 [1.519 MB] | ||
WEPGF002 | A Protocol for Streaming Large Messages with UDP | controls, Ethernet, software, Linux | 693 |
Funding: Operated by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the United States Department of Energy. We have developed a protocol that concatenates UDP datagrams to stream large messages. The datagrams can be sized to the optimal size for the receiver. The protocol provides acknowledged reception based on a sliding-window concept. The implementation supports messages of up to 10 Mbytes and guarantees complete delivery or a corresponding error. The protocol is implemented both as standalone messaging between two sockets and within the context of Fermilab's ACNet protocol. Results of this implementation on VxWorks are analyzed. |
Poster WEPGF002 [0.796 MB] | ||
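The abstract describes a sliding-window acknowledgement scheme over concatenated datagrams; the stop-and-wait sketch below is a deliberately simplified stand-in (window size 1) with an assumed 4-byte sequence-number header, not the Fermilab or ACNet message format.

```python
import socket
import struct

CHUNK = 1400                        # assumed datagram payload size
DEST = ("recv.example.org", 6802)   # assumed receiver address

def send_message(data: bytes) -> None:
    """Send a large message as numbered UDP chunks, waiting for each ACK."""
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(0.5)
        for seq, chunk in enumerate(chunks):
            packet = struct.pack("!II", seq, len(chunks)) + chunk  # header: seq, total
            while True:
                sock.sendto(packet, DEST)
                try:
                    ack, _ = sock.recvfrom(8)
                    if struct.unpack("!I", ack[:4])[0] == seq:
                        break          # acknowledged, move to the next chunk
                except socket.timeout:
                    continue           # retransmit on timeout
```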
WEPGF010 | Securing Access to Controls Applications with Apache httpd Proxy | controls, embedded, software, interface | 705 |
Many commercial systems used for controls nowadays contain embedded web servers. Secure access to these, often essential, facilities is of utmost importance, yet it remains complicated to manage for different reasons (e.g. obtaining and applying patches from vendors, ad-hoc oversimplified implementations of web-servers are prone to remote exploit). In this paper we describe a security-mediating proxy system, which is based on the well-known Apache httpd software. We describe how the use of the proxy made it possible to simplify the infrastructure necessary to start WinCC OA-based supervision applications on operator consoles, providing, at the same time, an improved level of security and traceability. Proper integration with the CERN central user account repository allows the operators to use their personal credentials to access applications, and also allows one to use standard user management tools. In addition, easy-to-memorize URL addresses for access to the applications are provided, and the use of a secure https transport protocol is possible for services that do not support it on their own. | |||
Poster WEPGF010 [1.824 MB] | ||
WEPGF011 | Progress of the Control Systems for the ADS injector II | controls, Ethernet, interface, timing | 709 |
This paper reports the progress of the control system for accelerator Injector II used in the China initiative accelerator-driven sub-critical (ADS) facility. As a linear proton accelerator, Injector II includes an ECR ion source, a low-energy beam transport line, a radio frequency quadrupole accelerator, a medium-energy beam transport line, several cryomodules, and a diagnostics plate. Several subsystems of the control system are discussed, such as the machine protection system, the timing system, and the data storage system. A three-layer control system has been developed for Injector II. In the equipment layer, the low-level control with various industrial control cards, such as programmable logic controllers and peripheral component interconnect (PCI) cards, is reported. In the middle layer, a redundant Gigabit Ethernet based on the Ethernet ring protection protocol is used in the control network for Injector II. In the operation layer, high-level application software has been developed for the beam commissioning and the operation of the accelerator. Finally, the proton beam commissioning of Injector II carried out from the control room using this control system is described. | |||
Poster WEPGF011 [0.701 MB] | ||
WEPGF012 | Information Security Assessment of CERN Access and Safety Systems | controls, PLC, software, Windows | 713 |
Access and safety systems are traditionally considered critical in organizations and they are therefore usually well isolated from the rest of the network. However, recent years have seen a number of cases, where such systems have been compromised even when in principle well protected. The tendency has also been to increase information exchange between these systems and the rest of the world to facilitate operation and maintenance, which further serves to make these systems vulnerable. In order to gain insight on the overall level of information security of CERN access and safety systems, a security assessment was carried out. This process consisted not only of a logical evaluation of the architecture and implementation, but also of active probing for various types of vulnerabilities on test bench installations. | |||
Poster WEPGF012 [1.052 MB] | ||
WEPGF021 | Design of Control Networks for China Initiative Accelerator Driven System | Ethernet, controls, operation, target | 739 |
In this paper, we report the conceptual design of control networks used in the control system for China initiative accelerator driven sub-critical (ADS) facility which consists of two accelerator injectors, a main accelerator, a spallation target and a reactor. Because different applications have varied expectations on reliability, latency, jitter and bandwidth, the following networks have been designed for the control systems, i.e. a central operation network for the operation of accelerators, target, and reactor; a reactor protection network for preventing the release of radioactivity to the environment; a personnel protection network for protecting personnel against unnecessary exposure to hazards; a machine protection network for protecting the machines in the ADS system; a time communication network for providing timing and synchronization for three accelerators; and a data archiving network for recording important measurement results from accelerators, target and reactor. Finally, we discuss the application of high-performance Ethernet technologies, such as Ethernet ring protection protocol, in these control networks for CIADS. | |||
Poster WEPGF021 [0.197 MB] | ||
WEPGF023 | Controlling Camera and PDU | software, controls, monitoring, hardware | 743 |
Funding: SKA South Africa, National Research Foundation of South Africa, Department of Science and Technology. The 64-dish MeerKAT radio telescope, currently under construction in South Africa, will become the largest and most sensitive radio telescope in the Southern Hemisphere until integrated with the Square Kilometre Array (SKA). This poster presents the software solutions that the MeerKAT Control and Monitoring (CAM) team implemented to achieve control (pan, tilt, zoom and focus) of the on-site video cameras using the Pelco D protocol. Furthermore, it presents how the outlets of the PDU (Power Distribution Unit) are switched on and off using SNMP to facilitate emergency shutdown of equipment. This includes a live demonstration from the site in South Africa. |
Poster WEPGF023 [0.896 MB] | ||
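As background to the camera control mentioned above, a Pelco D command frame is seven bytes (sync, address, two command bytes, two data bytes, modulo-256 checksum); the sketch below builds a pan-right frame, with the serial/IP transport left out and the address and speed values assumed.

```python
def pelco_d_frame(address: int, cmd1: int, cmd2: int, data1: int, data2: int) -> bytes:
    """Build a 7-byte Pelco D frame: 0xFF, addr, cmd1, cmd2, data1, data2, checksum."""
    body = [address & 0xFF, cmd1 & 0xFF, cmd2 & 0xFF, data1 & 0xFF, data2 & 0xFF]
    checksum = sum(body) % 256
    return bytes([0xFF] + body + [checksum])

# Pan right at an assumed medium speed on camera address 1, then stop
pan_right = pelco_d_frame(address=1, cmd1=0x00, cmd2=0x02, data1=0x20, data2=0x00)
stop = pelco_d_frame(address=1, cmd1=0x00, cmd2=0x00, data1=0x00, data2=0x00)
print(pan_right.hex(" "), "/", stop.hex(" "))
```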
WEPGF028 | A Self-Configurable Server for Controlling Devices Over the Simple Network Management Protocol | controls, operation, monitoring, status | 753 |
The Simple Network Management Protocol (SNMP) is an open standard that many manufacturers use for controlling and monitoring their hardware. More and more SNMP-manageable devices that can be used by accelerator control systems are appearing on the market. Some SNMP devices are already in use at the free-electron laser FLASH at DESY and are planned to be used at the European X-ray Free Electron Laser (XFEL) in Hamburg, Germany. To provide an easy and uniform way of controlling SNMP devices, a server has been developed. The server configuration, with respect to the device parameters to control, is done during start-up and is driven by the manufacturers' Management Information Base (MIB) files provided with the SNMP devices. This paper gives some details of the server design, its implementation and examples of use. | |||
Poster WEPGF028 [3.323 MB] | ||
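For orientation, a single SNMP read with the pysnmp high-level API looks like the sketch below; the target address, community string and use of the standard SNMPv2-MIB are assumptions, and the DESY server itself derives this kind of access from the vendor MIB files rather than hard-coded object names.

```python
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

error_indication, error_status, error_index, var_binds = next(
    getCmd(SnmpEngine(),
           CommunityData('public', mpModel=1),         # assumed SNMPv2c community
           UdpTransportTarget(('192.0.2.10', 161)),    # assumed device address
           ContextData(),
           ObjectType(ObjectIdentity('SNMPv2-MIB', 'sysDescr', 0))))

if error_indication or error_status:
    print("SNMP error:", error_indication or error_status.prettyPrint())
else:
    for name, value in var_binds:
        print(f"{name.prettyPrint()} = {value.prettyPrint()}")
```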
WEPGF031 | The Evolution of the Simulation Environment in ALMA | simulation, software, hardware, operation | 765 |
The Atacama Large Millimeter/submillimeter Array (ALMA) entered its operations phase in 2014. This transition changed the priorities within the observatory: most of the available time is now dedicated to science observations at the expense of the technical time that software testing used to have in abundance. The scarcity of technical time exposes one of the weakest points in the existing infrastructure available for software testing: the simulation environment of the ALMA software. The existing simulation focuses on functionality but not on realistic operation scenarios with all the antennas. Therefore, scalability and performance problems introduced by new features, or hidden in the currently accepted software, cannot be verified until the actual problem manifests during operation. It was therefore planned to design and implement a new simulation environment, which must be comparable to, or at least representative of, the production environment. In this paper we review the experiences gained and lessons learnt during the design and implementation of the new simulation environment. | |||
Poster WEPGF031 [1.358 MB] | ||
WEPGF032 | EPICS PV Management and Method for RIBF Control System | EPICS, controls, database, monitoring | 769 |
For the RIBF project (RIKEN RI Beam Factory), an EPICS-based distributed control system is utilized on Linux and on VxWorks as an embedded EPICS technology. Using a high-availability NAS as shared storage, common EPICS components (Base, Db files, and so on) are shared among the EPICS IOCs. As of March 2015, the control system continues to grow and consists of about 50 EPICS IOCs and more than 100,000 EPICS records. With such a large number of controlled hardware devices, the dependencies between EPICS records and EPICS IOCs are complicated; for example, it is not easy to obtain accurate device information from the EPICS record name alone. Therefore, a new management system was constructed for the RIBF control system to call up detailed information easily. In this system, by parsing the startup script files (st.cmd) of the running EPICS IOCs, all EPICS records and EPICS fields are stored in a PostgreSQL-based database. Utilizing this stored data, Web-based management and search tools have been successfully developed. In this paper the system concept and the features of the Web-based management tools are reported in detail. | |||
Poster WEPGF032 [6.833 MB] | ||
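A much-reduced sketch of the parsing step described above, pulling dbLoadRecords() calls out of an IOC's st.cmd and listing record names from the referenced .db files, is shown below; the regular expressions and file locations are assumptions, and the real system stores the result in PostgreSQL rather than printing it.

```python
import re
from pathlib import Path

DBLOAD_RE = re.compile(r'dbLoadRecords\(\s*"([^"]+)"(?:\s*,\s*"([^"]*)")?\s*\)')
RECORD_RE = re.compile(r'record\s*\(\s*(\w+)\s*,\s*"([^"]+)"\s*\)')

def records_in_ioc(st_cmd: Path):
    """Yield (db_file, macros, record_type, record_name) for one IOC."""
    text = st_cmd.read_text()
    for db_file, macros in DBLOAD_RE.findall(text):
        db_path = st_cmd.parent / db_file
        for rec_type, rec_name in RECORD_RE.findall(db_path.read_text()):
            yield db_file, macros, rec_type, rec_name

for row in records_in_ioc(Path("/epics/iocs/example/st.cmd")):  # assumed path
    print(row)
```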
WEPGF036 | Data Categorization and Storage Strategies at RHIC | Linux, real-time, software, collider | 775 |
Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-SC0012704 with the U.S. Department of Energy. This past year the Controls group within the Collider Accelerator Department at Brookhaven National Laboratory replaced the Network Attached Storage (NAS) system that is used to store software and data critical to the operation of the accelerators. The NAS also serves as the initial repository for all logged data. This purchase was used as an opportunity to categorize the data we store, and review and evaluate our storage strategies. This was done in the context of an existing policy that places no explicit limits on the amount of data that users can log, no limits on the amount of time that the data is retained at its original resolution, and that requires all logged data be available in real-time. This paper will describe how the data was categorized, and the various storage strategies used for each category. |
Poster WEPGF036 [0.295 MB] | ||
WEPGF041 | Monitoring Mixed-Language Applications with Elastic Search, Logstash and Kibana (ELK) | LabView, distributed, interface, framework | 786 |
Application logging and system diagnostics are nothing new. Ever since the first computers, scientists and engineers have been storing information about their systems, making it easier to understand what is going on and, in case of failures, what went wrong. Unfortunately there are as many different standards as there are file formats, storage types, locations, operating systems, etc. Recent developments in web technology and storage have made it much simpler to gather all the different information in one place and dynamically adapt the display. With the introduction of Logstash with Elasticsearch as a backend, we store, index and query data, making it possible to display and manipulate data in whatever form one wishes. With Kibana as a generic and modern web interface on top, the information can be adapted at will. In this paper we show how we can process almost any type of structured or unstructured data source. We also show how data can be visualised and customised on a per-user basis and how the system scales when the data volume grows. | |||
Poster WEPGF041 [3.848 MB] | ||
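One common way to feed such a stack is to emit one JSON document per log event to a Logstash TCP input with a json_lines codec; the sketch below assumes such an input exists, and the host, port and field names are placeholders rather than the CERN configuration.

```python
import json
import socket
from datetime import datetime, timezone

LOGSTASH = ("logstash.example.org", 5000)  # assumed tcp input with json_lines codec

def ship_event(application: str, level: str, message: str) -> None:
    """Send one structured log event as a single JSON line."""
    event = {
        "@timestamp": datetime.now(timezone.utc).isoformat(),
        "application": application,
        "level": level,
        "message": message,
    }
    with socket.create_connection(LOGSTASH, timeout=2.0) as conn:
        conn.sendall((json.dumps(event) + "\n").encode("utf-8"))

ship_event("labview-daq", "ERROR", "fieldbus watchdog expired")
```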
WEPGF045 | Large Graph Visualization of Millions of Connections in the CERN Control System Network Traffic: Analysis and Design of Routing and Firewall Rules with a New Approach | controls, operation, Windows, database | 799 |
The CERN Technical Network (TN) was intended to be a network for accelerator and infrastructure operations. However, today, more than 60 million IP packets are routed every hour between the General Purpose Network (GPN) and the TN, involving more than 6000 different hosts. To improve the security of the accelerator control system, it is fundamental to understand the traffic between the two networks in order to define appropriate routing and firewall rules without impacting operations. The complexity and huge size of the infrastructure and the number of protocols and services involved have for years discouraged any attempt to understand and control the network traffic between the GPN and the TN. In this talk, we show a new way to address the problem graphically. Combining network traffic analysis with large-graph visualization algorithms, we produce comprehensible and usable large 2D colour topology graphs mapping the complex network relations of the control system machines and services with a detail and clarity not seen before. The talk includes pictures and video of the graphical analysis. | |||
Poster WEPGF045 [6.809 MB] | ||
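A toy version of the graph construction underlying this kind of analysis, one node per host and one weighted edge per observed flow, can be put together with NetworkX; the flow records below are invented placeholders, not CERN traffic, and the real work uses dedicated large-graph layout tools.

```python
import networkx as nx

# (source host, destination host, packets per hour) -- placeholder flow records
flows = [
    ("gpn-host-01", "tn-scada-03", 120_000),
    ("gpn-host-02", "tn-plc-gw", 3_500),
    ("tn-scada-03", "tn-plc-gw", 88_000),
]

G = nx.DiGraph()
for src, dst, packets in flows:
    G.add_edge(src, dst, weight=packets)

# Hosts talking across the GPN/TN boundary are candidates for explicit firewall rules
pos = nx.spring_layout(G, weight="weight", seed=42)
for node, (x, y) in pos.items():
    print(f"{node:12s} degree={G.degree(node)}  layout=({x:+.2f}, {y:+.2f})")
```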
WEPGF056 | Flyscan: a Fast and Multi-technique Data Acquisition Platform for the SOLEIL Beamlines | TANGO, synchrotron, hardware, timing | 826 |
SOLEIL is continuously optimizing its 29 beamlines in order to provide its users with state-of-the-art synchrotron-radiation-based experimental techniques. Among the topics addressed by the related transversal projects, the enhancement of the computing tools is identified as a high-priority task. In this area, the aim is to optimize beam time usage by providing the users with a fast, simultaneous and multi-technique scanning platform. The concrete implementation of this general concept allows the users to acquire more data in the same amount of beam time. The present paper provides the reader with an overview of the so-called 'Flyscan' project currently under deployment at SOLEIL. It notably details a solution in which an unbounded number of distributed actuators and sensors share a common trigger clock and deliver their data into temporary files. The latter are immediately merged into common file(s) in order to make the whole experiment's data available for on-line processing and visualization. Some application examples are also discussed in order to illustrate the advantages of the Flyscan approach. | |||
Poster WEPGF056 [2.339 MB] | ||
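The merge step described above, gathering the temporary per-device files into a common file for on-line processing, can be sketched with h5py; the file naming scheme and dataset layout are assumptions, not the SOLEIL Flyscan format.

```python
import glob
import h5py

# Assumed layout: each actuator/sensor writes tmp_<name>.h5 containing a 'data' dataset
with h5py.File("flyscan_merged.h5", "w") as merged:
    for tmp_name in sorted(glob.glob("tmp_*.h5")):
        device = tmp_name[len("tmp_"):-len(".h5")]
        with h5py.File(tmp_name, "r") as tmp:
            # Copy the device's dataset under a per-device group in the common file
            tmp.copy("data", merged, name=f"scan_0001/{device}")
```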
WEPGF065 | Illustrate the Flow of Monitoring Data through the MeerKAT Telescope Control Software | database, interface, monitoring, controls | 849 |
Funding: SKA-SA National Research Foundation (South Africa) The MeerKAT telescope, under construction in South Africa, is comprised of a large set of elements. The elements expose various sensors to the Control and Monitoring (CAM) system, and the sampling strategy set by CAM per sensor varies from several samples a second to infrequent updates. This creates a substantial volume of sensor data that needs to be stored and made available for analysis. We depict the flow of sensor data through the CAM system, showing the various memory buffers, temporary disk storage and mechanisms to permanently store the data in HDF5 format on the network attached storage (NAS). |
Poster WEPGF065 [1.380 MB] | ||
WEPGF069 | Integrating Web-Based User Interface Within Cern's Industrial Control System Infrastructure | controls, interface, software, diagnostics | 861 |
For decades the user interfaces of industrial control systems have been primarily based on native clients. However, the current IT trend is to have everything on the web. This can indeed bring some advantages such as easy deployment of applications, extending HMIs with turnkey web technologies, and apply to supervision interfaces the interaction model used on the web. However, this also brings its share of challenges: security management, ability to spread the load and scale out to many web clients, etc… In this paper, the architecture of the system that was devised at CERN to decouple the production WINCC-OA based supervision systems from the web frontend and the associated security implications are presented together with the transition strategy from legacy panels to full web pages using a stepwise replacement of widgets (e.g. visualization widgets) by their JavaScript counterpart. This evolution results in the on-going deployment of web-based supervision interfaces proposed to the operators as an alternative for comparison purposes. | |||
Poster WEPGF069 [0.980 MB] | ||
WEPGF089 | CERN Open Hardware Experience: Upgrading the Diamond Fast Archiver | hardware, FPGA, interface, feedback | 901 |
Diamond Light Source developed and integrated the Fast Archiver into its Fast Orbit Feedback communication network in 2009. It enabled synchronous capture and archiving of the entire position data stream in real time from all electron Beam Position Monitors (BPMs) and X-ray BPMs. The FA Archiver solution has also been adopted by SOLEIL and the ESRF. However, the obsolescence of the existing PCI Express based FPGA board from Xilinx and continuing interest from the community forced us to look for a new hardware platform while keeping backward compatibility with the existing Linux kernel driver and application software. This paper reports our experience using the PCIe SPEC board from the CERN Open Hardware initiative as the new FA Archiver platform. Implementation of the SPEC-based FA Archiver has been successfully completed and recently deployed at ALBA in Spain. | |||
Poster WEPGF089 [0.581 MB] | ||
WEPGF093 | CXv4, a Modular Control System | controls, software, hardware, GUI | 915 |
The CX control system is used at VEPP-5 and several other BINP facilities. CX version 4 is designed to provide more flexibility and enable interoperability with other control systems. In addition to device drivers, most of its components are implemented in a modular fashion, including data access on both the client and server sides. The server itself is a library. This approach allows clients to access several different control systems simultaneously and natively (without any gateways). CXv4 servers are able to provide data access to clients from diverse control system architectures/protocols, provided the appropriate network module is loaded. The server library, coupled with a "null link" client-server access module, makes it possible to create standalone monolithic programs for specific small applications (such as test benches and device test screens/utilities) using the same ready-made code from the large-scale control system but without its complexity. CXv4 design principles and solutions are discussed and first deployment results are presented. | |||
Poster WEPGF093 [0.752 MB] | ||
WEPGF095 | Application of PyCDB for K-500 Beam Transfer Line | database, controls, software, EPICS | 923 |
Funding: This work has been supported by the Russian Science Foundation (project N 14-50-00080). The new injection complex for the VEPP-4 and VEPP-2000 e-p colliders is under construction at the Budker Institute, Novosibirsk, Russia. The bidirectional bipolar transfer line K-500, with branches of 130 and 220 metres length respectively, will provide beam transportation from the injection complex to the colliders with a frequency of 1 Hz. The design number of particles in the transferred beam is 2×10^10 electrons or positrons, at an energy of 500 MeV. K-500 has dozens of types of magnets, power supplies and electronic devices. It is a rather complicated task to store and manage information about such a number of types and instances of entities, and especially to handle the relations between them. This knowledge is critical for the configuration of all aspects of the control system. Therefore we have chosen PyCDB to handle this information and automate configuration data extraction for different purposes, from reports and diagrams to high-level applications and EPICS IOC configuration. This paper considers the concepts of this approach and shows the PyCDB database structure designed for the K-500 transfer line. The automatic configuration of the IOCs is described as the integration with EPICS. |
Poster WEPGF095 [0.792 MB] | ||
WEPGF096 | Managing a Real-time Embedded Linux Platform with Buildroot | Linux, target, software, controls | 926 |
Funding: This work was supported by the U.S. Department of Energy under contract No. DE-AC02-07CH11359. Developers of real-time embedded software often need to build the operating system kernel, tools and supporting applications from source to work with the differences in their hardware configuration. The first attempt to introduce Linux-based real-time embedded systems into the Fermilab accelerator controls system used this approach, but it was found to be time-consuming, difficult to maintain and difficult to adapt to different hardware configurations. Buildroot is an open source build system with a menu-driven configuration tool (similar to the Linux kernel build system) that automates this process. A customized Buildroot system has been developed for use in the Fermilab accelerator controls system that includes several hardware configuration profiles (including Intel, ARM and PowerPC) and packages for Fermilab support software. A bootable image file is produced containing the Linux kernel, shell and supporting software suite that varies from 3 to 20 megabytes in size, ideal for network booting. The result is a platform that is easier to maintain and deploy in diverse hardware configurations. |
Poster WEPGF096 [1.058 MB] | ||
WEPGF105 | EPICS V4 Evaluation for SNS Neutron Data | neutron, EPICS, detector, data-acquisition | 947 |
Funding: This manuscript has been authored by UT-Battelle, LLC under Contract No. DE-AC05-00OR22725 with the U.S. Department of Energy. Version 4 of the Experimental Physics and Industrial Control System (EPICS) toolkit allows defining application-specific structured data types (pvData) and offers a network protocol for their efficient exchange (pvAccess). We evaluated V4 for the transport of neutron events from the detectors of the Spallation Neutron Source (SNS) to data acquisition and experiment monitoring systems. This includes the comparison of possible data structures, performance tests, and experience using V4 in production on a beam line. |
Poster WEPGF105 [1.281 MB] | ||
WEPGF107 | Multi-Host Message Routing in MADOCA II | controls, GUI, operation, free-electron-laser | 954 |
MADOCA II is the next generation of the Message And Database Oriented Control Architecture (MADOCA) and has been implemented in the SPring-8 control system and the SACLA data acquisition (DAQ) system since 2013. In 2014, SACLA introduced a third beamline to increase the capacity for experiments, and a more sophisticated control architecture needed to be developed to prevent mis-operations among the beamlines. In this paper, multi-host message routing in MADOCA II and its application to the SACLA DAQ system to solve this problem are presented. In the SACLA DAQ system, a master server was added which intermediates control messages between clients and equipment management servers. Since access control can be centralized in the master server, reliable operation is obtained by avoiding the influence of accidental modification of DAQ settings by end-users. The multi-host message routing was implemented as an extension to MADOCA II by forwarding specific message objects to other hosts. Some technical issues, related to messaging loops and time delays, are also addressed. It is also planned to apply this technique to other cases in the SPring-8 beamlines where access control behind a firewall is required. | |||
Poster WEPGF107 [0.762 MB] | ||
WEPGF112 | Flop: Customizing Yocto Project for MVMExxxx PowerPC and BeagleBone ARM | software, Linux, controls, embedded | 958 |
During the last fifteen years several PowerPC-based VME single board computers, belonging to the MVMExxxx family, have been used for the control system front-end computers at Elettra Sincrotrone Trieste. Moreover, a low cost embedded board has been recently adopted to fulfill the control requirements of distributed instrumentation. These facts lead to the necessity of managing several releases of the operating system, kernel and libraries, and finally to the decision of adopting a comprehensive unified approach based on a common codebase: the Yocto Project. Based on Yocto Project, a control system oriented GNU/Linux distribution called 'Flop' has been created. The complete management of the software chain, the ease of upgrading or downgrading complete systems, the centralized management and the platform-independent deployment of the user software are the main features of Flop. | |||
Poster WEPGF112 [1.254 MB] | ||
WEPGF121 | Operation Status of J-PARC Timing System and Future Plan | timing, operation, controls, injection | 988 |
The beam commissioning of J-PARC started in November 2006. Since then, the timing system of the J-PARC accelerator complex has contributed to the stable beam operation of three accelerators: a 400-MeV linac (LI), a 3-GeV rapid cycling synchrotron (RCS), and a 50-GeV synchrotron (MR). The timing system handles two different repetition cycles: 25 Hz for LI and RCS, and 2.48-6.00 s for the MR (MR cycle). In addition, the timing system is capable of providing beams to two different experimental facilities in a single MR cycle: the Materials and Life Science Experimental Facility (MLF) and the Neutrino Experimental Facility (NU), or MLF and the Hadron Experimental Facility (HD). Recently, a plan to introduce a new facility, the Accelerator-Driven Transmutation Experimental Facility (ADS), around 2018 has been discussed. Studies for the timing system upgrade have started: a change of the master repetition rate from 25 Hz to 50 Hz, and a scheme to provide beams to three different experimental facilities in a single MR cycle (MLF, NU and ADS, or MLF, HD and ADS). This paper reviews the 8-year operation experience of the J-PARC timing system, followed by a present perspective on the upgrade studies. | |||
Poster WEPGF121 [1.138 MB] | ||
WEPGF126 | Prototype of White Rabbit Network in LHAASO | detector, timing, controls, experiment | 999 |
Funding: Key Laboratory of Particle & Radiation Imaging, Open Research Foundation of State Key Lab of Digital Manufacturing Equipment & Technology in Huazhong Univ. of Science & Technology Synchronization is a crucial concern in distributed measurement and control systems. White Rabbit provides sub-nanosecond accuracy and picoseconds precision for large distributed systems. In the Large High Altitude Air Shower Observatory project, to guarantee the angular resolution of reconstructed air shower event, a 500 ps overall synchronization precision must be achieved among thousands of detectors. A small prototype built at Yangbajin, Tibet, China has been working well for a whole year. A portable calibration node directly synced with the grandmaster switch and a simple detectors stack named Telescope are used to verify the overall synchronization precision of the whole prototype. The preliminary experiment results show that the long term synchronization of the White-Rabbit network is promising and 500 ps overall synchronization precision is achievable with node by node calibration and temperature correction. |
Poster WEPGF126 [1.233 MB] | ||
WEPGF132 | An Update on CAFE, a C++ Channel Access Client Library, and its Scripting Language Extensions | interface, EPICS, controls, operation | 1013 |
CAFE (Channel Access interFacE) is a C++ client library that offers a comprehensive and easy-to-use Channel Access (CA) interface to the Experimental Physics and Industrial Control System (EPICS). The code base has undergone significant refactoring to make the internal structure more comprehensible and easier to interpret, and further methods have been implemented to increase its flexibility in readiness to serve as the CA host in fourth-generation and scripting languages for use at the SwissFEL, Switzerland's X-ray Free-Electron Laser facility. A number of specific design features are presented, including policies that provide control over configurable components that govern the behaviour of interactions, and the methodology that guarantees that the outcome of all remote method invocations are captured with integrity in every eventuality, thereby ensuring reliability and stability. An account is also given on newly created bindings for the Cython programming language, which offers a major performance improvement to Python developers, and on an update to CAFE's MATLAB Executable (MEX) file. | |||
Poster WEPGF132 [0.302 MB] | ||
WEPGF134 | Applying Sophisticated Analytics to Accelerator Data at BNLs Collider-Accelerator Complex: Bridging to Repositories, Tools of Choice, and Applications | interface, controls, database, collider | 1021 |
Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-SC0012704 with the U.S. Department of Energy. Analysis of accelerator data has traditionally been done using custom tools, either developed locally or at other laboratories. The actual data repositories are openly available to all users, but it can take significant effort to mine the desired data, especially as the volume of these repositories increases to hundreds of terabytes or more. Much of the data analysis is done in real time when the data is being logged. However, sometimes users wish to apply improved algorithms, look for data correlations, or perform more sophisticated analysis. There is a wide spectrum of desired analytics for this small percentage of the problem domains. In order to address this tools have been built that allow users to efficiently pull data out of the repositories but it is then left up to them to post process that data. In recent years, the use of tools to bridge standard analysis systems, such as Matlab, R, or SciPy, to the controls data repositories, has been investigated. In this paper, the tools used to extract data from the repositories, tools used to bridge the repositories to standard analysis systems, and directions being considered for the future, will be discussed. |
Poster WEPGF134 [2.714 MB] | ||
WEPGF150 | A HTML5 Web Interface for JAVA DOOCS Data Display | interface, controls, operation, vacuum | 1056 |
JAVA DOOCS Data Display (JDDD) is the standard tool for developing control system panels for the FLASH facility and the European XFEL. The panels are mainly run on the DESY campus. For remote monitoring and expert assistance a secure, fast and lightweight access method is required. One possible solution is to use HTML5 as the transport layer, because it is available on many common platforms, including mobile ones. For this reason an HTML5 version of JDDD, running in a Tomcat application server, was developed. WebSocket technology is used to transfer the panel image to the browser; in the other direction, mouse events are sent back from the browser to the Tomcat server. Now thousands of existing JDDD panels can be accessed remotely using standard web technology, with no special browser plugins required. This article discusses the general issues of web-based interaction with the control system, such as security, usability, network traffic and scalability, and presents the WebSocket approach. | |||
Poster WEPGF150 [1.024 MB] | ||
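WEPGF150's implementation is Java running in Tomcat and no code appears in the abstract. The sketch below is a minimal Python rendering of the same pattern, image frames pushed to the browser over a WebSocket and mouse events returned on the same connection, using the third-party websockets package; the frame renderer and event format are placeholders.

```python
# Conceptual sketch of the WebSocket push/return pattern described above,
# written with the third-party 'websockets' package, not the paper's Java stack.
import asyncio
import json
import websockets

def render_panel_png() -> bytes:
    """Placeholder for the server-side panel renderer (hypothetical)."""
    return b"\x89PNG..."  # rendered image bytes would go here

async def handler(websocket, path=None):
    async def push_frames():
        while True:
            await websocket.send(render_panel_png())   # binary frame: panel image
            await asyncio.sleep(0.1)                   # ~10 frames per second
    pusher = asyncio.create_task(push_frames())
    try:
        async for message in websocket:                # text frames: mouse events
            event = json.loads(message)                # e.g. {"type": "click", "x": 10, "y": 20}
            print("mouse event from browser:", event)
    finally:
        pusher.cancel()

async def main():
    async with websockets.serve(handler, "0.0.0.0", 8765):
        await asyncio.Future()                         # serve until cancelled

asyncio.run(main())
```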
THHA2O03 | Message Signalled Interrupts in Mixed-Master Control | controls, operation, FPGA, target | 1083 |
|
|||
Timing receivers in the FAIR control system are a complex composition of multiple bus-connected components. The bus is composed of Wishbone crossbars which connect master devices to their controlled slaves. These crossbars are in turn connected in master-slave relationships, forming a directed acyclic graph (DAG) in which source nodes are masters, interior nodes are crossbars, and terminal nodes are slaves. In current designs, masters may be found at multiple levels in the composed bus, ranging from embedded microcontrollers, to DMA controllers, to bridges from PCIe, VME, USB or the network. In such a system, delivery of interrupts from controlled slaves to masters is non-trivial: the master may reside multiple levels up the hierarchy, and in the case of network control may be kilometres of fibre away. Our approach is to use message-signalled interrupts (MSI). This is especially important as a particular slave may be controlled by different masters depending on the use case. MSI allows interrupts to be routed via the same topology used for master-slave control; a conceptual sketch of this idea follows this entry. This paper explores the benefits, disadvantages and challenges uncovered by our current implementation. | |||
Slides THHA2O03 [0.762 MB] | ||
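The FAIR timing receivers implement MSI in Wishbone gateware, not software. Purely as a conceptual model of the idea described above, the Python sketch below shows a slave delivering an interrupt by writing to a master-owned mailbox address that is routed through the same crossbar used for normal control traffic; every class, address and value is invented.

```python
# Conceptual model of message-signalled interrupts (MSI): a slave signals its
# master by *writing* through the crossbar fabric instead of asserting a wire.
# Purely illustrative; the real system is FPGA gateware, not Python.

class Crossbar:
    """Routes a write to whichever attached target claims the address."""
    def __init__(self, targets):
        self.targets = targets                        # list of (base, size, target)

    def write(self, addr, data):
        for base, size, target in self.targets:
            if base <= addr < base + size:
                target.write(addr - base, data)
                return
        raise ValueError("unmapped address 0x%x" % addr)

class Master:
    """A bus master exposing a small MSI mailbox as a slave region."""
    def write(self, offset, data):
        print("MSI received: offset=0x%x data=0x%x" % (offset, data))

class Slave:
    """A controlled slave configured with its master's MSI target address."""
    def __init__(self, bus, msi_addr, msi_data):
        self.bus, self.msi_addr, self.msi_data = bus, msi_addr, msi_data

    def raise_interrupt(self):
        self.bus.write(self.msi_addr, self.msi_data)  # interrupt == bus write

host = Master()
fabric = Crossbar([(0x4000, 0x100, host)])            # host mailbox at 0x4000
dev = Slave(fabric, msi_addr=0x4000, msi_data=0x1)
dev.raise_interrupt()                                  # prints the delivered MSI
```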
THHB3O02 | Real-Time Data Reduction Integrated into Instrument Control Software | controls, CORBA, software, real-time | 1115 |
|
|||
The increasing complexity of experimental activity and the growing raw datasets collected during measurements have pushed for integrating the data-reduction software into the instrument control. On-line raw-data reduction allows users to take instant decisions based on the physical quantities they are looking for; in this way beam time is optimised and oversampling avoided. Moreover, the datasets are more consistent, and the reduction procedure, now part of the sequencer workflow, is well documented and can be saved for future use. A server and a client API were designed that allow starting and monitoring reduction procedures on remote machines and finally retrieving their results; a hypothetical client-side sketch of this call pattern follows this entry. The implementation of on-line data reduction on several instruments at the ILL, as well as the performance obtained, is reported in this paper. | |||
Slides THHB3O02 [4.458 MB] | ||
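THHB3O02 describes a server and client API for starting and monitoring remote reduction jobs but does not publish it, so the sketch below is entirely hypothetical: every class, method and parameter name is invented, and the remote calls are stubbed in-process so the start/monitor/fetch pattern can be run as is.

```python
# Hypothetical sketch only: the paper does not publish its API. A real client
# would talk to the remote reduction server; here the calls are stubbed.
import time
import uuid

class ReductionClient:
    def __init__(self, endpoint: str):
        self.endpoint = endpoint
        self._jobs = {}                                   # stand-in for server state

    def start(self, procedure: str, **params) -> str:
        """Submit a reduction procedure; return a job identifier."""
        job_id = str(uuid.uuid4())
        self._jobs[job_id] = {"procedure": procedure, "params": params}
        return job_id

    def status(self, job_id: str) -> str:
        """Return 'done', 'failed' or 'unknown' (stubbed)."""
        return "done" if job_id in self._jobs else "unknown"

    def result(self, job_id: str) -> dict:
        """Fetch the reduced dataset once the job is finished (dummy payload)."""
        return {"job": job_id, "reduced_counts": [1, 2, 3]}

client = ReductionClient("reduction-server.example:8080")
job = client.start("powder_integration", wavelength=4.8, detector="D20")
while client.status(job) not in ("done", "failed"):
    time.sleep(1)
print(client.result(job))
```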
THHB3O03 | On-the-Fly Scans for Fast Tomography at LNLS Imaging Beamline | EPICS, controls, experiment, LabView | 1119 |
|
|||
Funding: Brazilian Synchrotron Light Laboratory.
As we move to brighter light sources and time-resolved experiments, different approaches for executing faster scans in synchrotrons are an ever-present need. In many light sources, performing scans through a sequence of hardware triggers is the most commonly used method for synchronizing instruments and motors. Thus, in order to provide a sufficiently flexible and robust solution, the X-Ray Imaging Beamline (IMX) at the Brazilian Synchrotron Light Source [1] upgraded its scanning system to an NI PXI chassis interfacing with Galil motion controllers and the EPICS environment. It currently executes point-to-point and on-the-fly scans controlled by hardware signals, fully integrated with the beamline control system through the EPICS Channel Access protocol; an illustrative client-side sketch follows this entry. Some approaches use CS-Studio screens and automated Python scripts to create a user-friendly interface. All programming languages used in the project are easy to use and to learn, which allows high maintainability of the delivered system. The use of the LNLS Hyppie platform [2, 3] also enables software modularity for better compatibility and scalability over different experimental setups and even different beamlines.
[1] F. P. O'Dowd et al., "X-Ray micro-tomography at the IMX beamline (LNLS)", XRM2014. [2] J. R. Piton et al., "Hyppie: A hypervisored PXI for physics instrumentation under EPICS", BIW2012.
|||
Slides THHB3O03 [3.591 MB] | ||
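As a hedged illustration of driving such a hardware-triggered scan from EPICS Channel Access, the sketch below uses the pyepics client library with invented PV names; the actual IMX PV interface and the PXI/Galil integration details are not given in the abstract.

```python
# Illustration only: PV names are invented; the real IMX scan records and the
# PXI/Galil trigger wiring are not described in the abstract.
from epics import caput, caget, camonitor
import time

# Configure a hypothetical fly-scan record on the scan controller.
caput("IMX:SCAN:StartPos", 0.0)
caput("IMX:SCAN:EndPos", 180.0)          # degrees of tomographic rotation
caput("IMX:SCAN:NumProjections", 1000)
caput("IMX:SCAN:ExposureTime", 0.05)     # seconds per frame, hardware triggered

# Watch progress while the hardware issues the camera/motor triggers.
camonitor("IMX:SCAN:CurrentPoint",
          callback=lambda pvname=None, value=None, **kw: print("projection", value))

caput("IMX:SCAN:Start", 1)               # arm and start the on-the-fly scan
while caget("IMX:SCAN:Busy") == 1:
    time.sleep(0.5)
print("scan finished")
```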
THHC2O03 | Replacing the Engine in Your Car While You Are Still Driving It | timing, operation, interface, low-level-rf | 1131 |
|
|||
Funding: US Department of Energy under contract DE-AC52-06NA25396.
Replacing your accelerator's timing system with a completely different architecture is not something that happens very often. Perhaps even rarer is the requirement that the replacement not interfere with the accelerator's normal operational cycle. In 2014, the Los Alamos Neutron Science Center (LANSCE) began the first phase of a multi-year rolling upgrade project which will eventually result in the complete replacement of the low-level RF system, the timing system, the industrial I/O system, the beam-synchronized data acquisition system, the fast-protect reporting system, and much of the diagnostic equipment. These projects are mostly independent of each other, with their own installation schedules, priorities and timelines. All of them, however, must interface with the timing system. This paper focuses on the timing system replacement project, its conversion from a centralized discrete-signal distribution system to a more distributed, event-driven system, and the challenges of having to interface with both the old and the new equipment until the upgrade is completed.
|||
Slides THHC2O03 [2.345 MB] | ||
THHD3O01 | Control Systems for Spallation Target in China Initiative Accelerator Driven System | controls, target, neutron, Ethernet | 1147 |
|
|||
In this paper we report the design of the control system for the spallation target in the China initiative accelerator-driven sub-critical (ADS) system, in which a heavy-metal target located vertically at the centre of a sub-critical reactor core is bombarded by high-energy protons from an accelerator. The main functions of the control system for the target are to monitor and control the thermal hydraulics, the neutron flux, and the accelerator-target interface. The first function covers the control of components in the primary and secondary loops, such as pumps, heat exchangers, valves and sensors. The second function is to monitor the neutrons from the spallation target for the commissioning measurements of the accelerator. A three-layer architecture has been adopted for the control system. In the middle (network) layer, redundant Ethernet based on the Ethernet ring protection protocol has been considered in order to increase network reliability. In the bottom (equipment) layer, the equipment controls for the above-mentioned functions have been designed. Finally, because the main objective of the target is to integrate the accelerator and the reactor into one system, the integration of the accelerator's control system and the reactor's instrumentation and controls into the target's control system is also discussed. | |||
Slides THHD3O01 [0.628 MB] | ||
THHD3O08 | Upgrades to the Infrastructure and Management of the Operator Workstations and Servers for Run 2 of the CERN Accelerator Complex | controls, operation, site, cryogenics | 1158 |
|
|||
The Controls Group of the CERN Beams Department provides more than 400 operator workstations in the CERN Control Centre (CCC) and in the technical buildings of the accelerators, plus 300 servers in the server room (CCR) of the CCC. During the long shutdown of the accelerators that started in February 2013, many upgrades were made to improve this infrastructure in view of the higher-energy LHC run. The Engineering Department improved the electrical supply with fully redundant UPS, on-site diesel generators and, for the CCR, water and air cooling systems. The Information Technology Department increased the network bandwidth for the servers by a factor of 10 and introduced a pilot multicast service for video streaming of the accelerator status displays and beam cameras. The Controls Group removed dependencies on network file systems for the operator accounts it manages for the Linacs, Booster, PS, ISOLDE, AD, CTF3, SPS, LHC and cryogenics. It also moved away from system administration based on shell scripts to modern tools such as version-controlled Ansible playbooks, which are now used for installation, day-to-day re-configuration and staged updates during technical stops; a sketch of such a staged update follows this entry. | |||
Slides THHD3O08 [21.308 MB] | ||
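The playbooks themselves are not published in the abstract. Purely as a hedged illustration of the staged, version-controlled update workflow it describes, the sketch below drives the standard ansible-playbook command from Python with invented playbook and inventory-group names, previewing each group of consoles in check mode before applying.

```python
# Hypothetical staged-update sketch: 'site.yml' and the inventory group names
# are invented, not taken from the paper. Each group is dry-run first.
import subprocess

def run_playbook(playbook: str, limit: str, check: bool = False) -> None:
    cmd = ["ansible-playbook", playbook, "--limit", limit]
    if check:
        cmd.append("--check")                # dry run: report changes, apply nothing
    subprocess.run(cmd, check=True)          # raise if the playbook run fails

for group in ("consoles_test", "consoles_ccc"):      # hypothetical inventory groups
    run_playbook("site.yml", limit=group, check=True)    # preview the changes
    run_playbook("site.yml", limit=group)                 # then apply them
```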