Keyword: timing
Paper Title Other Keywords Page
MOBAUST03 The MedAustron Accelerator Control System controls, operation, interface, real-time 9
  • J. Gutleber, M. Benedikt
    CERN, Geneva, Switzerland
  • A.B. Brett, A. Fabich, M. Marchhart, R. Moser, M. Thonke, C. Torcato de Matos
    EBG MedAustron, Wr. Neustadt, Austria
  • J. Dedič
    Cosylab, Ljubljana, Slovenia
  This paper presents the architecture and design of the MedAustron particle accelerator control system. The facility is currently under construction in Wr. Neustadt, Austria. The accelerator and its control system are designed at CERN. Accelerator control systems for ion therapy applications are characterized by rich sets of configuration data, real-time reconfiguration needs and high stability requirements. The machine is operated according to a pulse-to-pulse modulation scheme and beams are described in terms of ion type, energy, beam dimensions, intensity and spill length. An irradiation session for a patient consists of a few hundred accelerator cycles over a time period of about two minutes. No two cycles within a session are equal and the dead-time between two cycles must be kept low. The control system is based on a multi-tier architecture with the aim of achieving a clear separation between front-end devices and their controllers. Off-the-shelf technologies are deployed wherever possible. In-house developments cover a main timing system, a light-weight layer to standardize operation and communication of front-end controllers, the control of the power converters and a procedure programming framework for automating high-level control and data analysis tasks. In order to roll out the system on a predictable schedule, an "off-shoring" project management process was adopted: a frame agreement with an integrator covers the provision of skilled personnel who specify and build components together with the core team.  
slides icon Slides MOBAUST03 [7.483 MB]  
MOMAU004 Database Foundation for the Configuration Management of the CERN Accelerator Controls Systems controls, database, interface, software 48
  • Z. Zaharieva, M. Martin Marquez, M. Peryt
    CERN, Geneva, Switzerland
  The Controls Configuration DB (CCDB) and its interfaces have been developed over the last 25 years and today form the basis for the Configuration Management of the Controls System for all accelerators at CERN. The CCDB contains data for all configuration items and their relationships, required for the correct functioning of the Controls System. The configuration items are quite heterogeneous, covering different areas of the Controls System – ranging from 3000 Front-End Computers and 75 000 software devices allowing remote control of the accelerators to the valid states of the Accelerators Timing System. The article will describe the different areas of the CCDB, their interdependencies and the challenges of establishing the data model for such a diverse configuration management database serving a multitude of clients. The CCDB tracks the life of the configuration items by allowing their clear identification, triggering change management processes and providing status accounting and audits. This necessitated the development and implementation of a combination of tailored processes and tools. The Controls System is data-driven: the data stored in the CCDB is extracted and propagated to the controls hardware in order to configure it remotely. Therefore special attention is paid to data security and data integrity, as an incorrectly configured item can have a direct impact on the operation of the accelerators.  
slides icon Slides MOMAU004 [0.404 MB]  
poster icon Poster MOMAU004 [6.064 MB]  
MOPKS004 NSLS-II Beam Diagnostics Control System diagnostics, controls, interface, electronics 168
  • Y. Hu, L.R. Dalesio, K. Ha, O. Singh, H. Xu
    BNL, Upton, Long Island, New York, USA
  A correct measurement of NSLS-II beam parameters (beam position, beam size, circulating current, beam emittance, etc.) depends on the effective combination of beam monitors, the control and data acquisition system and high-level physics applications. This paper will present the EPICS-based control system for NSLS-II diagnostics and give detailed descriptions of the diagnostics controls interfaces, including classifications of diagnostics, proposed electronics and EPICS IOC platforms, and interfaces to other subsystems. Device counts in the diagnostics subsystems will also be briefly described.  
poster icon Poster MOPKS004 [0.167 MB]  
MOPKS011 Beam Synchronous Data Acquisition for SwissFEL Test Injector controls, data-acquisition, EPICS, real-time 180
  • B. Kalantari, T. Korhonen
    Paul Scherrer Institut, 5232 Villigen PSI, Switzerland
  Funding: Paul Scherrer Institute
A 250 MeV injector facility at PSI has been constructed to study the scientific and technological challenges of the SwissFEL project. Since in such pulsed machines every beam pulse can in principle have different characteristics, due to varying machine parameters and/or conditions, it is crucial to be able to acquire and distinguish control system data from one pulse to the next. In this paper we describe the technique we have developed to perform beam-synchronous data acquisition at a 100 Hz rate. This has been particularly challenging, since it requires a reliable, real-time data acquisition method in a non-real-time control system. We describe how this can be achieved by employing a powerful and flexible timing system with well-defined interfaces to the control system.
poster icon Poster MOPKS011 [0.126 MB]  
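The pulse-tagging approach described in the abstract can be sketched as follows. The class and field names are hypothetical, illustrating only the idea of correlating control system data by a timing-system-supplied pulse ID; this is not PSI's actual implementation:

```python
from collections import defaultdict

class PulseCorrelator:
    """Groups readings from several front-end sources by pulse ID so that
    data belonging to the same beam pulse can be matched reliably, even
    when readings arrive out of order in a non-real-time control system."""

    def __init__(self, expected_sources):
        self.expected = set(expected_sources)
        self.buffer = defaultdict(dict)  # pulse_id -> {source: value}

    def add(self, pulse_id, source, value):
        """Store one tagged reading; return the complete snapshot for a
        pulse once every expected source has delivered its data."""
        self.buffer[pulse_id][source] = value
        if self.expected <= set(self.buffer[pulse_id]):
            return pulse_id, self.buffer.pop(pulse_id)
        return None
```

A front-end would call `add()` with the pulse ID decoded from the timing system broadcast, so snapshots are assembled per pulse rather than per arrival time.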
MOPMN004 An Operational Event Announcer for the LHC Control Centre Using Speech Synthesis controls, software, interface, operation 242
  • S.T. Page, R. Alemany-Fernandez
    CERN, Geneva, Switzerland
  The LHC island of the CERN Control Centre is a busy working environment with many status displays and running software applications. An audible event announcer was developed in order to provide a simple and efficient method to notify the operations team of events occurring within the many subsystems of the accelerator. The LHC Announcer uses speech synthesis to report messages based upon data received from multiple sources. General accelerator information such as injections, beam energies and beam dumps are derived from data received from the LHC Timing System. Additionally, a software interface is provided that allows other surveillance processes to send messages to the Announcer using the standard control system middleware. Events are divided into categories which the user can enable or disable depending upon their interest. Use of the LHC Announcer is not limited to the Control Centre and is intended to be available to a wide audience, both inside and outside CERN. To accommodate this, it was designed to require no special software beyond a standard web browser. This paper describes the design of the LHC Announcer and how it is integrated into the LHC operational environment.  
poster icon Poster MOPMN004 [1.850 MB]  
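The category-based filtering the abstract describes can be illustrated with a minimal sketch; the `Announcer` interface and the `speak` callback (standing in for a speech synthesis backend) are assumptions, not the actual LHC Announcer API:

```python
class Announcer:
    """Routes event messages to a text-to-speech backend, but only for
    categories the user has enabled (hypothetical interface)."""

    def __init__(self, speak):
        self.speak = speak      # callback representing the TTS backend
        self.enabled = set()    # categories the user wants to hear

    def enable(self, category):
        self.enabled.add(category)

    def disable(self, category):
        self.enabled.discard(category)

    def publish(self, category, message):
        """Announce the message if its category is enabled."""
        if category in self.enabled:
            self.speak(message)
            return True
        return False
```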
MOPMN008 LASSIE: The Large Analogue Signal and Scaling Information Environment for FAIR controls, detector, data-acquisition, diagnostics 250
  • T. Hoffmann, H. Bräuning, R. Haseitl
    GSI, Darmstadt, Germany
  At FAIR, the Facility for Antiproton and Ion Research, several new accelerators such as the SIS 100, HESR, CR, the inter-connecting HEBT beam lines, the S-FRS and experiments will be built. All of these installations are equipped with beam diagnostic devices and other components which deliver time-resolved analogue signals to show the status, quality, and performance of the accelerators. These signals can originate from particle detectors such as ionization chambers and plastic scintillators, but also from adapted output signals of transformers, collimators, magnet functions, RF cavities, and others. To visualize and precisely correlate all input signals on a common time axis, a dedicated FESA-based data acquisition and analysis system named LASSIE, the Large Analogue Signal and Scaling Information Environment, is under development. As the main operation mode of LASSIE, pulse counting with adequate scaler boards is used, without excluding enhancements for ADC, QDC, or TDC digitization in the future. The concept, features, and challenges of this large distributed DAQ system will be presented.  
poster icon Poster MOPMN008 [7.850 MB]  
MOPMS018 New Timing System Development at SNS hardware, diagnostics, operation, network 358
  • D. Curry
    ORNL RAD, Oak Ridge, Tennessee, USA
  • X.H. Chen, R. Dickson, S.M. Hartman, D.H. Thompson
    ORNL, Oak Ridge, Tennessee, USA
  • J. Dedič
    Cosylab, Ljubljana, Slovenia
  The timing system at the Spallation Neutron Source (SNS) has recently been updated to support the long-range production and availability goals of the facility. A redesign of the hardware and software provided us with an opportunity to significantly reduce the complexity of the system as a whole and consolidate the functionality of multiple cards into single units, eliminating almost half of our operating components in the field. It also presented a prime opportunity to integrate new system-level diagnostics, previously unavailable, for experts and operations. These new tools provide us with a clear image of the health of our distribution links and enhance our ability to quickly identify and isolate errors.  
MOPMS027 Fast Beam Current Transformer Software for the CERN Injector Complex software, hardware, GUI, real-time 382
  • M. Andersen
    CERN, Geneva, Switzerland
  The fast transfer-line BCTs in the CERN injector complex are undergoing a complete consolidation to eradicate obsolete, maintenance-intensive hardware. The corresponding low-level software has been designed to minimise the effect of identified error sources while providing remote diagnostics and calibration facilities. This paper will present the front-end and expert application software along with the results obtained.  
poster icon Poster MOPMS027 [1.223 MB]  
MOPMS028 CSNS Timing System Prototype EPICS, controls, operation, interface 386
  • G.L. Xu, G. Lei, L. Wang, Y.L. Zhang, P. Zhu
    IHEP Beijing, Beijing, People's Republic of China
  The timing system is an important part of CSNS. The timing system prototype developments are based on the Event System 230 series. Two debug platforms were used: the first runs EPICS base 3.14.8 on an MVME5100 IOC under VxWorks 5.5; the other runs EPICS base 3.13 under VxWorks 5.4. The prototype work included driver debugging and experiments with new EVG/EVR-230 features, such as CML output signals with fine delay steps at a fraction of the signal cycle, the use of the interlock modules, interconnection between the CML and TTL outputs, and data transmission functions. Finally, the database was programmed with the new features in order to build the OPI.  
poster icon Poster MOPMS028 [0.434 MB]  
MOPMU023 The MRF Timing System. The Complete Control Software Integration in Tango. TANGO, device-server, GUI, controls 483
  • J. Moldes, D.B. Beltrán, D.F.C. Fernández-Carreiras, J.J. Jamroz, J. Klora, O. Matilla, R. Suñé
    CELLS-ALBA Synchrotron, Cerdanyola del Vallès, Spain
  The deployment of the timing system based on the MRF hardware has been an important part of the control system. Hundreds of elements are integrated in the scheme, which provides synchronization signals and interlocks, transmitted in the microsecond range and distributed all around the installation. It has influenced several hardware choices and has been largely improved to support interlock events. The operation of the timing system requires a complex setup of all elements. A complete solution has been developed, including libraries and stand-alone Graphical User Interfaces. This set of tools is therefore of great added value, further increased by the use of Tango, since most high-level applications and GUIs are based on Tango servers. A complete software solution for managing the events and interlocks of a large installation is presented.  
poster icon Poster MOPMU023 [25.650 MB]  
TUBAULT04 Open Hardware for CERN’s Accelerator Control Systems hardware, controls, FPGA, software 554
  • E. Van der Bij, P. Alvarez, M. Ayass, A. Boccardi, M. Cattin, C. Gil Soriano, E. Gousiou, S. Iglesias Gonsálvez, G. Penacoba Fernandez, J. Serrano, N. Voumard, T. Włostowski
    CERN, Geneva, Switzerland
  The accelerator control systems at CERN will be renovated and many electronics modules will be redesigned, as the modules they will replace can no longer be bought or use obsolete components. The modules used in the control systems are diverse: analog and digital I/O, level converters and repeaters, serial links and timing modules. Overall, around 120 modules are supported, used in systems such as beam instrumentation, cryogenics and power converters. Only a small percentage of the currently used modules are commercially available; most of them were specifically designed at CERN. The new developments are based on VITA and PCI-SIG standards such as FMC (FPGA Mezzanine Card), PCI Express and VME64x using transition modules. As the system-on-chip interconnect, the public domain Wishbone specification is used. For the renovation, it is considered imperative to have access to the full hardware design and firmware of each board, so that problems can quickly be resolved by CERN engineers or their collaborators. To attract other partners that are not necessarily part of the existing particle physics networks, the new projects are developed in a fully 'Open' fashion. This allows for strong collaborations that will result in better and reusable designs. Within this Open Hardware project, new ways of working with industry are being tested, with the aim of proving that there is no contradiction between commercial off-the-shelf products and openness, and that industry can be involved at all stages, from design to production and support.  
slides icon Slides TUBAULT04 [7.225 MB]  
WEBHMUST01 The MicroTCA Acquisition and Processing Back-end for FERMI@Elettra Diagnostics controls, diagnostics, interface, FEL 634
  • A.O. Borga, R. De Monte, M. Ferianis, G. Gaio, L. Pavlovič, M. Predonzani, F. Rossi
    ELETTRA, Basovizza, Italy
  Funding: The work was supported in part by the Italian Ministry of University and Research under grants FIRB-RBAP045JF2 and FIRB-RBAP06AWK3
Several diagnostics instruments for the FERMI@Elettra FEL require accurate readout, processing, and control electronics, together with complete integration within the TANGO control system. A custom-developed back-end system, compliant with the PICMG MicroTCA standard, provides a robust platform for accommodating such electronics, including reliable slow control and monitoring infrastructural features. Two types of digitizer AMCs have been developed, manufactured, tested and successfully commissioned in the FERMI facility. The first is a fast (160 Msps), high-resolution (16-bit) Analog-to-Digital and Digital-to-Analog (A|D|A) converter board, hosting 2 A-D and 2 D-A converters controlled by a large FPGA (Xilinx Virtex-5 SX50T) that is also responsible for handling the fast communication interface. The second is an Analog-to-Digital Only (A|D|O) board, derived from the A|D|A, with an analog front-end stage made of 4 A-D converters. A simple MicroTCA Timing Central Hub (MiTiCH) completes the set of modules necessary for operating the system. Several TANGO servers and panels have been developed and put in operation with the support of the controls group. The overall system architecture, with different practical application examples, together with the specific AMCs' functionalities, is presented. Impressions from our experience in the field with the novel MicroTCA standard are also discussed.
slides icon Slides WEBHMUST01 [2.715 MB]  
WEBHMULT03 EtherBone - A Network Layer for the Wishbone SoC Bus operation, hardware, Ethernet, software 642
  • M. Kreider, W.W. Terpstra
    GSI, Darmstadt, Germany
  • J.H. Lewis, J. Serrano, T. Włostowski
    CERN, Geneva, Switzerland
  Today, there are several System-on-a-Chip (SoC) bus systems. Typically, these busses are confined on-chip and rely on higher-level components to communicate with the outside world. Taking these systems a step further, we see the possibility of extending the reach of the SoC bus to remote FPGAs or processors. This leads to the idea of the EtherBone (EB) core, which connects a Wishbone (WB) Ver. 4 bus via a Gigabit Ethernet based network link to remote peripheral devices. EB acts as a transparent interconnect module towards attached WB bus devices. Address information and data from one or more WB bus cycles are preceded by a descriptive header and encapsulated in a UDP/IP packet. Because of this standards compliance, EB is able to traverse Wide Area Networks and is therefore not bound to a geographic location. Due to the low-level nature of the WB bus, EB provides a sound basis for remote hardware tools like a JTAG debugger, In-System Programmer (ISP), boundary scan interface or logic analyser module. EB was developed in the scope of the White Rabbit Timing Project (WR) at CERN and GSI/FAIR, which employs Gigabit Ethernet technology to communicate with memory-mapped slave devices. WR will make use of EB as a means to issue commands to its timing nodes and control connected accelerator hardware.  
slides icon Slides WEBHMULT03 [1.547 MB]  
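The encapsulation idea, a descriptive header followed by Wishbone address/data records carried in a UDP payload, can be sketched as below. The byte layout and magic number here are simplified assumptions for illustration only; the real EtherBone record format is defined by its specification:

```python
import struct

EB_MAGIC = 0x6542  # placeholder magic, NOT the real EtherBone value

def encode_write_cycle(base_addr, values):
    """Pack one Wishbone write cycle into an EB-style payload: a small
    header (magic, flags, bus widths) followed by one record (write
    count, base address, data words). The bytes would be sent verbatim
    as a UDP datagram."""
    header = struct.pack(">HBB", EB_MAGIC, 0, 0x44)  # 0x44: 32-bit addr/data
    record = struct.pack(">BBHI", len(values), 0, 0, base_addr)
    record += b"".join(struct.pack(">I", v) for v in values)
    return header + record

def decode_write_cycle(payload):
    """Recover the base address and data words on the receiving side."""
    magic, _flags, _widths = struct.unpack(">HBB", payload[:4])
    assert magic == EB_MAGIC, "not an EB-style payload"
    wcount, _rcount, _pad, base = struct.unpack(">BBHI", payload[4:12])
    values = [struct.unpack(">I", payload[12 + 4 * i: 16 + 4 * i])[0]
              for i in range(wcount)]
    return base, values
```

Because the records travel inside standard UDP/IP, any routable network segment can sit between the master and the remote Wishbone slave, which is the property the abstract highlights.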
WEBHMULT04 Sub-nanosecond Timing System Design and Development for LHAASO Project detector, Ethernet, network, FPGA 646
  • G.H. Gong, S. Chen, Q. Du, J.M. Li, Y. Liu
    Tsinghua University, Beijing, People's Republic of China
  • H. He
    IHEP Beijing, Beijing, People's Republic of China
  Funding: National Science Foundation of China (No.11005065)
The Large High Altitude Air Shower Observatory (LHAASO) [1] project is designed to trace galactic cosmic ray sources with approximately 10,000 ground air shower detectors of different types. Reconstruction of cosmic ray arrival directions requires sub-nanosecond time synchronization, so a novel design of the LHAASO timing system based on packet-based frequency distribution and time synchronization over Ethernet is proposed. The White Rabbit Protocol (WR) [2] is applied as the infrastructure of the timing system: it implements a distributed adaptive phase-tracking technology based on Synchronous Ethernet to lock all local clocks, and a real-time delay calibration method based on the Precision Time Protocol to keep all local time synchronized to within a nanosecond. We also present the development and test status of prototype WR switches and nodes.
[1] Cao Zhen, "A future project at Tibet: the Large High Altitude Air Shower Observatory (LHAASO)", Chinese Phys. C 34 249, 2010.
[2] P. Moreira et al., "White Rabbit: Sub-Nanosecond Timing Distribution over Ethernet", ISPCS 2009.
slides icon Slides WEBHMULT04 [8.775 MB]  
WEMMU007 Reliability in a White Rabbit Network network, controls, Ethernet, hardware 698
  • M. Lipiński, J. Serrano, T. Włostowski
    CERN, Geneva, Switzerland
  • C. Prados
    GSI, Darmstadt, Germany
  White Rabbit (WR) is a time-deterministic, low-latency Ethernet-based network which enables transparent, sub-ns accuracy timing distribution. It is being developed to replace the General Machine Timing (GMT) system currently used at CERN and will become the foundation for the control system of the Facility for Antiproton and Ion Research (FAIR) at GSI. High reliability is an important issue in WR's design, since unavailability of the accelerator's control system directly translates into expensive downtime of the machine. A typical WR network is required to lose no more than a single message per year. Due to WR's complexity, the translation of this real-world requirement into a reliability requirement constitutes an interesting issue on its own: a WR network is considered functional only if it provides all its services to all its clients at all times. This paper defines reliability in WR and describes how it was addressed by dividing it into sub-domains: deterministic packet delivery, data redundancy, topology redundancy and clock resilience. The studies show that the Mean Time Between Failures (MTBF) of the WR network is the main factor affecting its reliability. Therefore, probability calculations for different topologies were performed using Fault Tree Analysis and analytic estimations. The results of the study show that the requirements of WR are demanding. Design changes might be needed and further in-depth studies required, e.g. Monte Carlo simulations. Therefore, a direction for further investigations is proposed.  
slides icon Slides WEMMU007 [0.689 MB]  
poster icon Poster WEMMU007 [1.080 MB]  
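The kind of reliability arithmetic the abstract refers to, combining per-component failure probabilities through series (non-redundant) and parallel (redundant) branches of a fault tree, can be sketched as follows. The MTBF figures are made-up placeholders, not WR measurements:

```python
import math

HOURS_PER_YEAR = 8766.0

def p_fail(mtbf_hours, mission_hours=HOURS_PER_YEAR):
    """Failure probability over the mission time, exponential model."""
    return 1.0 - math.exp(-mission_hours / mtbf_hours)

def series(*probs):
    """Non-redundant chain: the branch fails if ANY element fails."""
    survive = 1.0
    for p in probs:
        survive *= (1.0 - p)
    return 1.0 - survive

def parallel(*probs):
    """Redundant paths: the branch fails only if ALL paths fail."""
    fail = 1.0
    for p in probs:
        fail *= p
    return fail

# Assumed example figures: switch MTBF 200 kh, fibre link MTBF 1 Mh.
p_switch = p_fail(200_000)
p_link = p_fail(1_000_000)
single_path = series(p_switch, p_link)       # one switch + one link
redundant = parallel(single_path, single_path)  # two independent paths
```

The same series/parallel composition, applied recursively, is what a fault tree evaluates for a full network topology; duplicating a path drives the failure probability down roughly quadratically, which is why topology redundancy dominates the analysis.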
WEPMN015 Timing-system Solution for MedAustron; Real-time Event and Data Distribution Network real-time, controls, software, ion 909
  • R. Štefanič, J. Dedič, R. Tavčar
    Cosylab, Ljubljana, Slovenia
  • J. Gutleber
    CERN, Geneva, Switzerland
  • R. Moser
    EBG MedAustron, Wr. Neustadt, Austria
  MedAustron is an ion beam cancer therapy and research centre currently under construction in Wiener Neustadt, Austria. This facility features a synchrotron particle accelerator for light ions. A timing system for this class of accelerators, targeted at clinical use, is being developed as a product of close collaboration between MedAustron and Cosylab. We redesigned the μResearch Finland transport layer's FPGA firmware, extending its capabilities to address specific requirements of the machine, arriving at a generic real-time broadcast network for coordinating the actions of a compact particle accelerator based on pulse-to-pulse modulation. One such requirement is the need to support configurable responses to timing events on the receiver side. The system comes with National Instruments LabVIEW-based software support, ready to be integrated into the PXI-based front-end controllers. This paper explains the design process from initial requirements refinement through technology choice to architectural design and implementation. It elaborates on the main characteristics of the accelerator that the timing system has to address, such as support for concurrently operating partitions, real-time and non-real-time data transport needs and flexible configuration schemes for real-time response to timing event reception. Finally, an architectural overview is given, with the main components explained in due detail.  
poster icon Poster WEPMN015 [0.800 MB]  
WEPMN016 Synchronously Driven Power Converter Controller Solution for MedAustron interface, controls, real-time, FPGA 912
  • L. Šepetavc, J. Dedič, R. Tavčar
    Cosylab, Ljubljana, Slovenia
  • J. Gutleber
    CERN, Geneva, Switzerland
  • R. Moser
    EBG MedAustron, Wr. Neustadt, Austria
  MedAustron is an ion beam cancer therapy and research centre currently under construction in Wiener Neustadt, Austria. This facility features a synchrotron particle accelerator for light ions. Cosylab is closely working together with MedAustron on the development of a power converter controller (PCC) for the 260 deployed converters. The majority are voltage sources that are regulated in real-time via digital signal processor (DSP) boards. The in-house developed PCC operates the DSP boards remotely, via real-time fiber optic links. A single PCC will control up to 30 power converters that deliver power to magnets used for focusing and steering particle beams. Outputs of all PCCs must be synchronized within a time frame of at most 1 microsecond, which is achieved by integration with the timing system. This pulse-to-pulse modulation machine requires different waveforms for each beam generation cycle. Dead times between cycles must be kept low, therefore the PCC is reconfigured during beam generation. The system is based on a PXI platform from National Instruments running LabVIEW Real-Time. An in-house developed generic real-time optical link connects the PCCs to custom developed front-end devices. These FPGA-based hardware components facilitate integration with different types of power converters. All PCCs are integrated within the SIMATIC WinCC OA SCADA system which coordinates and supervises their operation. This paper describes the overall system architecture, its main components, challenges we faced and the technical solutions.  
poster icon Poster WEPMN016 [0.695 MB]  
WEPMN018 Performance Tests of the Standard FAIR Equipment Controller Prototype FPGA, controls, Ethernet, software 919
  • S. Rauch, R. Bär, W. Panschow, M. Thieme
    GSI, Darmstadt, Germany
  For the control system of the new FAIR accelerator facility, a standard equipment controller, the Scalable Control Unit (SCU), is presently under development. First prototypes have already been tested in real applications. The controller combines an x86 ComExpress board and an Altera Arria II FPGA. Over a parallel bus interface called the SCU bus, up to 12 slave boards can be controlled. Communication between CPU and FPGA is done via a PCIe link. We discuss the real-time behaviour of the communication between the Linux OS and the FPGA hardware. For the test, a Front-End Software Architecture (FESA) class, running under Linux, communicates with the PCIe bridge in the FPGA. Although we are using PCIe only for single 32-bit-wide accesses to the FPGA address space, the performance still seems sufficient. The tests showed an average response time to IRQs of 50 microseconds with a 1.6 GHz Intel Atom CPU. This includes the context change to the FESA userspace application and the reply back to the FPGA. Further topics are the bandwidth of the PCIe link for single/burst transfers and the performance of the SCU bus communication.  
WEPMN032 Development of Pattern Awareness Unit (PAU) for the LCLS Beam Based Fast Feedback System feedback, operation, controls, software 954
  • K.H. Kim, S. Allison, D. Fairley, T.M. Himel, P. Krejcik, D. Rogind, E. Williams
    SLAC, Menlo Park, California, USA
  LCLS is now successfully operating at its design beam repetition rate of 120 Hz, but in order to ensure stable beam operation at this high rate we have developed a new timing pattern aware EPICS controller for beam line actuators. Actuators that are capable of responding at 120 Hz are controlled by the new Pattern Aware Unit (PAU) as part of the beam-based feedback system. The beam at the LCLS is synchronized to the 60 Hz AC power line phase and is subject to electrical noise which differs according to which of the six possible AC phases is chosen from the 3-phase site power line. Beam operation at 120 Hz interleaves two of these 60 Hz phases and the feedback must be able to apply independent corrections to the beam pulse according to which of the 60 Hz timing patterns the pulse is synchronized to. The PAU works together with the LCLS Event Timing system which broadcasts a timing pattern that uniquely identifies each pulse when it is measured and allows the feedback correction to be applied to subsequent pulses belonging to the same timing pattern, or time slot, as it is referred to at SLAC. At 120 Hz operation this effectively provides us with two independent, but interleaved feedback loops. Other beam programs at the SLAC facility such as LCLS-II and FACET will be pulsed on other time slots and the PAUs in those systems will respond to their appropriate timing patterns. This paper describes the details of the PAU development: real-time requirements and achievement, scalability, and consistency. The operational results will also be described.  
poster icon Poster WEPMN032 [0.430 MB]  
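The time-slot-aware correction scheme can be illustrated with a minimal sketch: one integral corrector keeps independent state per 60 Hz time slot, so the two interleaved 120 Hz loops never mix their corrections. Names and gain values are hypothetical, not the PAU interface:

```python
class SlotAwareFeedback:
    """Integral corrector with independent state per 60 Hz time slot.
    At 120 Hz, successive pulses alternate between two time slots, and
    each slot sees different AC-phase-dependent electrical noise, so
    each keeps its own accumulated correction."""

    def __init__(self, gain=0.5, setpoint=0.0):
        self.gain = gain
        self.setpoint = setpoint
        self.correction = {}  # time_slot -> accumulated correction

    def update(self, time_slot, measurement):
        """Apply one integral step for the loop owning this time slot."""
        error = self.setpoint - measurement
        c = self.correction.get(time_slot, 0.0) + self.gain * error
        self.correction[time_slot] = c
        return c
```

Feeding each pulse's measurement to the corrector keyed by its broadcast timing pattern yields, in effect, two independent but interleaved feedback loops, as the abstract describes.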
WEPMS003 A Testbed for Validating the LHC Controls System Core Before Deployment controls, software, hardware, operation 977
  • J. Nguyen Xuan, V. Baggiolini
    CERN, Geneva, Switzerland
  Since the start-up of the LHC, it has been crucial to carefully test core controls components before deploying them operationally. The Testbed of the CERN accelerator controls group was developed for this purpose. It contains different hardware (PPC, i386) running different operating systems (Linux and LynxOS) and core software components running on front-ends, communication middleware and client libraries. The Testbed first executes integration tests to verify that the components delivered by individual teams interoperate, and then system tests, which verify high-level, end-user functionality. It also verifies that different versions of components are compatible, which is vital, because not all parts of the operational LHC control system can be upgraded simultaneously. In addition, the Testbed can be used for performance and stress tests. Internally, the Testbed is driven by Bamboo, a Continuous Integration server, which automatically builds and deploys new software versions into the Testbed environment and executes the tests continuously to prevent software regressions. Whenever a test fails, an e-mail is sent to the appropriate persons. The Testbed is part of the official controls development process, wherein new releases of the controls system have to be validated before being deployed operationally. Integration and system tests are an important complement to the unit tests previously executed by the teams. The Testbed has already caught several bugs that were not discovered by the unit tests of the individual components.
poster icon Poster WEPMS003 [0.111 MB]  
WEPMS011 The Timing Master for the FAIR Accelerator Facility FPGA, network, embedded, real-time 996
  • R. Bär, T. Fleck, M. Kreider, S. Mauro
    GSI, Darmstadt, Germany
  One central design feature of the FAIR accelerator complex is a high level of parallel beam operation, imposing ambitious demands on the timing and management of accelerator cycles. Several linear accelerators, synchrotrons, storage rings and beam lines have to be controlled and re-configured for each beam production chain on a pulse-to-pulse basis, with cycle lengths ranging from 20 ms to several hours. This implies initialization, synchronization of equipment down to the ns time scale, interdependencies, multiple paths and contingency actions like emergency beam dump scenarios. The FAIR timing system will be based on White Rabbit [1] network technology, implementing a central Timing Master (TM) unit to orchestrate all machines. The TM is subdivided into separate functional blocks: the Clock Master, which deals with time and clock sources and their distribution over WR; the Management Master, which administrates all WR timing receivers; and the Data Master, which schedules and coordinates machine instructions and broadcasts them over the WR network. The TM triggers equipment actions based on the transmitted execution time. Since latencies in the low μs range are required, this paper investigates the possibilities of parallelisation in programmable hardware and discusses the benefits of a distributed versus a monolithic timing master architecture. The proposed FPGA-based TM will meet said timing requirements while providing fast reaction to interlocks and internal events and offering parallel processing of multiple signals and state machines.
[1] J. Serrano, et al, "The White Rabbit Project", ICALEPCS 2009.
WEPMS013 Timing System of the Taiwan Photon Source controls, injection, gun, EPICS 999
  • C.Y. Wu, Y.-T. Chang, J. Chen, Y.-S. Cheng, P.C. Chiu, K.T. Hsu, K.H. Hu, C.H. Kuo, D. Lee, C.Y. Liao
    NSRRC, Hsinchu, Taiwan
  The timing system of the Taiwan Photon Source provides synchronization for the electron gun, linac modulators, pulsed magnet power supplies, the booster power supply ramp, bucket addressing of the storage ring, diagnostic equipment, and the beamline gating signal for top-up injection. The system is based on an event distribution scheme that broadcasts timing events over an optical fiber network and decodes and processes them at the timing event receivers. The system supports uplink functionality, which will be used by the fast interlock system to distribute signals like beam dump and post-mortem triggers with a 10 μs response time. The hardware of the event system is a new design based on the 6U CompactPCI form factor. This paper describes the technical solution, the functionality of the system and some applications that are based on the timing system.  
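The event-distribution principle common to this and the other event-based timing systems in this listing can be sketched as a receiver-side mapping from broadcast event codes to locally configured, delayed outputs; the class and method names here are illustrative assumptions, not any vendor's API:

```python
class EventReceiver:
    """Maps broadcast timing event codes to locally configured outputs
    that fire after a programmable delay (sketch of an event-based
    timing receiver; hypothetical interface)."""

    def __init__(self):
        self.mapping = {}  # event code -> [(output, delay_ticks), ...]

    def map_event(self, code, output, delay_ticks):
        """Configure one output to be triggered by an event code."""
        self.mapping.setdefault(code, []).append((output, delay_ticks))

    def decode(self, code, rx_tick):
        """Return (output, fire_tick) pairs scheduled by this event,
        relative to the tick at which the event was received."""
        return [(out, rx_tick + d) for out, d in self.mapping.get(code, [])]
```

Because every receiver decodes the same broadcast, outputs across the machine fire at deterministic offsets from a common event, which is what gives such systems their facility-wide synchronization.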
WEPMS015 NSLS-II Booster Timing System injection, booster, extraction, storage-ring 1003
  • P.B. Cheblakov, S.E. Karnaev
    BINP SB RAS, Novosibirsk, Russia
  • J.H. De Long
    BNL, Upton, Long Island, New York, USA
  The NSLS-II light source includes the main storage ring with beam lines and an injection part consisting of a 200 MeV linac, a 3 GeV booster synchrotron and two transport lines. The booster timing system is part of the NSLS-II timing system, which is based on an Event Generator (EVG) and Event Receivers (EVRs) from Micro-Research Finland. The booster timing is based on external events coming from the NSLS-II EVG: "Pre-Injection", "Injection", "Pre-Extraction" and "Extraction". These events are referenced to the specified bunch of the storage ring and correspond to the first bunch of the booster. The EVRs provide two time scales for triggering both the injection and the extraction pulsed devices. The first scale triggers the pulsed septa and the bump magnets in the millisecond range using the TTL outputs of the EVR; the second scale triggers the kickers in the microsecond range using the CML outputs. The EVRs also provide the timing of booster cycle operation, events for cycle-to-cycle updates of pulsed and ramping parameters, and synchronization of the booster beam instrumentation. This paper describes the final design of the booster timing system. Functional and block diagrams of the timing system are presented.  
poster icon Poster WEPMS015 [0.799 MB]  
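The two-scale split described above (millisecond-range TTL triggers for septa and bump magnets, microsecond-range CML triggers for kickers) amounts to a simple classification of each trigger by its required delay range. The threshold and device delays below are assumptions for illustration.

```python
US = 1.0
MS = 1000.0 * US  # microseconds per millisecond

def output_class(delay_us):
    """Pick the EVR output family for a requested trigger delay:
    millisecond-range devices on TTL, microsecond-range on CML."""
    return "TTL" if delay_us >= 1 * MS else "CML"

# Illustrative pulsed-device delays (not actual NSLS-II settings).
triggers = {
    "septum": 2.5 * MS,
    "bump_magnet": 4.0 * MS,
    "injection_kicker": 12.0 * US,
}
assignment = {name: output_class(d) for name, d in triggers.items()}
```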
WEPMS023 ALBA Timing System - A Known Architecture with Fast Interlock System Upgrade diagnostics, interlocks, booster, network 1024
  • O. Matilla, D.B. Beltrán, D.F.C. Fernández-Carreiras, J.J. Jamroz, J. Klora, J. Moldes, R. Suñé
    CELLS-ALBA Synchrotron, Cerdanyola del Vallès, Spain
  Like most of the newest synchrotron facilities, the ALBA timing system is built on an event-based architecture. Its main particularity is that a Fast Interlock System has been integrated with the timing system, allowing an automated and synchronous any-to-any reaction across the machine in less than 5 μs. The benefits of combining both systems are numerous: very high flexibility, reuse of the timing actuators, direct synchronous output at different points of the machine in reaction to an interlock, implementation of the Fast Interlock at very low additional cost since the timing optical fiber network is reused, and the possibility of implementing combined diagnostic tools for triggers and interlocks. To support this last point, a global timestamp with 8 ns accuracy has been implemented that can be used both for triggers and interlocks. The system has been designed, installed and extensively used during the Storage Ring commissioning with very good results.  
poster icon Poster WEPMS023 [0.920 MB]  
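A global timestamp with 8 ns accuracy is naturally realized as a counter on a 125 MHz event clock, one tick per 8 ns; correlating a trigger with an interlock then reduces to subtracting two counter values. The 125 MHz clock rate is an assumption consistent with the stated 8 ns granularity, not a figure from the paper.

```python
TICK_NS = 8  # assumed 125 MHz event clock -> 8 ns per counter tick

def ticks_to_ns(ticks):
    """Convert a global-timestamp tick count to nanoseconds."""
    return ticks * TICK_NS

def interval_ns(stamp_a, stamp_b):
    """Signed interval between two global timestamps, in ns."""
    return ticks_to_ns(stamp_b - stamp_a)

# e.g. an interlock stamped 625 ticks after a trigger occurred 5 us later
```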
WEPMU011 Automatic Injection Quality Checks for the LHC injection, kicker, GUI, software 1077
  • L.N. Drosdal, B. Goddard, R. Gorbonosov, S. Jackson, D. Jacquet, V. Kain, D. Khasbulatov, M. Misiowiec, J. Wenninger, C. Zamantzas
    CERN, Geneva, Switzerland
  Twelve injections per beam are required to fill the LHC with the nominal filling scheme. The injected beam needs to fulfill a number of requirements to provide useful physics for the experiments when they take collision data later in the LHC cycle. These requirements are checked by a dedicated software system, the LHC injection quality check. At each injection, this system receives data about beam characteristics from key equipment in the LHC and analyzes it online to determine the quality of the injected beam. If the quality is insufficient, the automatic injection process is stopped and the operator has to take corrective measures. This paper describes the software architecture of the LHC injection quality check and its interplay with other systems. A set of tools for self-monitoring of the injection quality checks to achieve optimum performance is discussed as well. Finally, results obtained during the LHC commissioning year 2010 and the LHC run 2011 are presented.  
poster icon Poster WEPMU011 [0.358 MB]  
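The check-and-gate logic described above can be sketched as a list of named predicates applied to each injection's measured data, where any failing check blocks the next automatic injection. The check names and limits below are hypothetical, not the actual IQC rules.

```python
# Hypothetical per-injection quality checks: (name, predicate on data).
CHECKS = [
    ("intensity", lambda d: d["intensity"] > 0.9e10),
    ("losses", lambda d: d["losses"] < 1.0),
    ("trajectory", lambda d: abs(d["offset_mm"]) < 2.0),
]

def assess_injection(data):
    """Return (quality_ok, failed_check_names) for one injection."""
    failed = [name for name, check in CHECKS if not check(data)]
    return (not failed, failed)

# An injection with excessive losses: automatic injection would stop.
ok, failed = assess_injection(
    {"intensity": 1.1e10, "losses": 2.5, "offset_mm": 0.4})
```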
WEPMU034 Infrastructure of Taiwan Photon Source Control Network controls, network, EPICS, Ethernet 1145
  • Y.-T. Chang, J. Chen, Y.-S. Cheng, K.T. Hsu, S.Y. Hsu, K.H. Hu, C.H. Kuo, C.Y. Wu
    NSRRC, Hsinchu, Taiwan
  A reliable, flexible and secure network is essential for the Taiwan Photon Source (TPS) control system, which is based upon the EPICS toolkit framework. Subsystem subnets will connect to the control system via EPICS-based CA gateways for forwarding data and reducing network traffic. By combining cyber security technologies such as firewalls, NAT and VLANs, the control network is isolated to protect IOCs and accelerator components. Network management tools are used to improve network performance. A remote access mechanism will be constructed for maintenance and troubleshooting. Ethernet is also used as a fieldbus for instruments such as power supplies. This paper describes the system architecture of the TPS control network. Cabling topology, redundancy and maintainability are also discussed.  
THCHMUST06 The FAIR Timing Master: A Discussion of Performance Requirements and Architectures for a High-precision Timing System controls, FPGA, kicker, network 1256
  • M. Kreider
    GSI, Darmstadt, Germany
  • M. Kreider
    Hochschule Darmstadt, University of Applied Science, Darmstadt, Germany
  Production chains in a particle accelerator are complex structures with many interdependencies and multiple paths to consider. This ranges from system initialisation and synchronisation of numerous machines to interlock handling and appropriate contingency measures such as beam dump scenarios. The FAIR facility will employ White Rabbit, a time-based system which delivers an instruction and a corresponding execution time to a machine. In order to meet the deadlines in any given production chain, instructions need to be sent out ahead of time. For this purpose, code execution and message delivery times need to be known in advance. The FAIR Timing Master needs to be reliably capable of satisfying these timing requirements as well as being fault tolerant. Event sequences of recorded production chains indicate that low reaction times to internal and external events and fast, parallel execution are required. This suggests a slim architecture, devised especially for this purpose. Using the thread model of an OS or other high-level programs on a generic CPU would be counterproductive when trying to achieve deterministic processing times. This paper deals with the analysis of these requirements as well as a comparison of known processor and virtual machine architectures and the possibilities of parallelisation in programmable hardware. In addition, existing proposals at GSI will be checked against these findings. The final goal is to determine the best instruction set for modelling any given production chain and to devise a suitable architecture to execute these models.  
slides icon Slides THCHMUST06 [2.757 MB]  
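The motivation for a slim, deterministic instruction set can be illustrated with a toy model: if every opcode in a production-chain program costs a fixed, known number of cycles, the execution time of any chain can be bounded in advance, which a general-purpose OS thread model cannot guarantee. The opcodes and cycle counts below are invented for illustration.

```python
# Toy instruction set with fixed per-opcode cycle costs (assumed values).
CYCLES = {"WAIT": 1, "SEND": 2, "BRANCH": 1, "DUMP": 2}

def worst_case_cycles(program):
    """Upper bound on execution cycles for a linear instruction list,
    computable before the chain ever runs."""
    return sum(CYCLES[op] for op, *_ in program)

# A short production-chain fragment (operands are illustrative).
chain = [("WAIT", 100), ("SEND", "kicker"), ("BRANCH", 0), ("SEND", "rf")]
bound = worst_case_cycles(chain)
```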
FRBHAULT03 Beam-based Feedback for the Linac Coherent Light Source feedback, network, linac, controls 1310
  • D. Fairley, K.H. Kim, K. Luchini, P. Natampalli, L. Piccoli, D. Rogind, T. Straumann
    SLAC, Menlo Park, California, USA
  Funding: Work supported by the U. S. Department of Energy Contract DE-AC02-76SF00515
Beam-based feedback control loops are required by the Linac Coherent Light Source (LCLS) program in order to provide fast, single-pulse stabilization of beam parameters. Eight transverse feedback loops, a 6x6 longitudinal feedback loop, and a loop to maintain the electron bunch charge were successfully commissioned for the LCLS, and have been maintaining stability of the LCLS electron beam at beam rates up to 120 Hz. In order to run the feedback loops at beam rate, they were implemented in EPICS IOCs with a dedicated Ethernet multicast network. This paper discusses the design, configuration and commissioning of the beam-based Fast Feedback System for LCLS. Topics include algorithms for 120 Hz feedback, multicast network performance, actuator and sensor performance for single-pulse control and sensor readback, and feedback configuration and runtime control.
slides icon Slides FRBHAULT03 [1.918 MB]
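A pulse-by-pulse matrix feedback loop of the kind the LCLS loops perform can be sketched as follows: sensor readings are compared to setpoints and a gain matrix maps the error vector onto actuator increments, once per beam pulse. The 2x2 coupling matrix and scalar gain are invented for illustration and do not reflect the actual LCLS loop configuration.

```python
def feedback_step(actuators, sensors, setpoints, gain_matrix, gain=0.5):
    """One single-pulse correction: actuators += gain * (G @ error),
    where error = setpoints - sensors."""
    error = [s - r for s, r in zip(setpoints, sensors)]
    delta = [gain * sum(g * e for g, e in zip(row, error))
             for row in gain_matrix]
    return [a + d for a, d in zip(actuators, delta)]

# Two decoupled loops (identity coupling matrix), one pulse of correction.
new = feedback_step([0.0, 0.0], [1.0, -2.0], [0.0, 0.0],
                    [[1.0, 0.0], [0.0, 1.0]])
```

Iterating this step at the beam rate drives the sensor readings toward their setpoints as long as the gain matrix approximates the inverse of the machine response.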