THCHMU – Embedded + Real Time 1 (13-Oct-11, 14:00–15:30)
Chair: L.T. Hoff, BNL, Upton, Long Island, New York, USA
THCHMUST01 Control System for Cryogenic THD Layering at the National Ignition Facility 1236
 
  • M.A. Fedorov, O.D. Edwards, E.A. Mapoles, J. Mauvais, T.G. Parham, R.J. Sanchez, J.M. Sater, B.A. Wilson
    LLNL, Livermore, California, USA
 
  Funding: This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
The National Ignition Facility (NIF) is the world's largest and most energetic laser system for Inertial Confinement Fusion (ICF). In 2010, NIF began ignition experiments using cryogenically cooled targets containing layers of tritium-hydrogen-deuterium (THD) fuel. The 75 μm thick layer is formed inside the 2 mm target capsule at temperatures of approximately 18 K. ICF target designs require sub-micron smoothness of the THD ice layers. Formation of such layers is still an active research area, requiring a flexible control system capable of executing evolving layering protocols. This task is performed by the Cryogenic Target Subsystem (CTS) of the NIF Integrated Computer Control System (ICCS). The CTS provides cryogenic temperature control with the 1 mK resolution required for beta layering and for the thermal-gradient fill of the capsule. The CTS also includes a 3-axis x-ray radiography engine for phase-contrast imaging of the ice layers inside plastic and beryllium capsules. In addition to automatic control engines, the CTS is integrated with the Matlab interactive programming environment to allow flexibility in experimental layering protocols. The CTS Layering Matlab Toolbox provides tools for layer image analysis, system characterization, and cryogenic control. The CTS Layering Report tool generates qualification metrics for the layers, such as the concentricity of the layer and the roughness of the growth-boundary grooves. CTS activities are automatically coordinated with other NIF controls in the carefully orchestrated NIF Shot Sequence.
LLNL-CONF-477418
 
Slides THCHMUST01 [8.058 MB]
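
The abstract above centres on millikelvin-resolution cryogenic temperature control. As a rough illustration of that kind of control loop, the following is a minimal fixed-rate proportional-integral (PI) sketch in C; the PI algorithm, the gains, and the read_sensor_kelvin()/set_heater_power() accessors are illustrative assumptions, not the actual CTS implementation.

    /* Minimal sketch of a fixed-rate PI temperature-control loop, as one
       might use to hold a cryogenic target near 18 K. Illustrative only. */
    #include <stdio.h>

    #define SETPOINT_K 18.000   /* target temperature [K] */
    #define KP         50.0     /* proportional gain [W/K], assumed */
    #define KI          5.0     /* integral gain [W/(K*s)], assumed */
    #define DT          1.0     /* control period [s] */

    /* Hypothetical hardware accessors standing in for the real I/O layer. */
    static double read_sensor_kelvin(void) { return 18.003; }
    static void   set_heater_power(double watts) { printf("heater: %.4f W\n", watts); }

    int main(void)
    {
        double integral = 0.0;
        for (int step = 0; step < 10; step++) {
            double error = SETPOINT_K - read_sensor_kelvin(); /* + means too cold */
            integral += error * DT;
            double power = KP * error + KI * integral;
            if (power < 0.0)
                power = 0.0;             /* a heater cannot cool */
            set_heater_power(power);
        }
        return 0;
    }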
 
THCHMUST02 Control and Test Software for IRAM Widex Correlator 1240
 
  • S. Blanchet, D. Broguiere, P. Chavatte, F. Morel, A. Perrigouard, M. Torres
    IRAM, Saint Martin d'Heres, France
 
  IRAM is an international research institute for radio astronomy. It has designed a new correlator called WideX for the Plateau de Bure interferometer (an array of six 15-meter telescopes) in the French Alps. The device entered official service in February 2010. The correlator must be driven in real time at 32 Hz, both for sending parameters and for data acquisition. With 3.67 million channels distributed over 1792 dedicated chips, producing an output data rate of 1.87 Gbit/s, data acquisition and processing, as well as automatic hardware-failure detection, pose major challenges for the software. This article presents the software that has been developed to drive and test the correlator. In particular, it presents an innovative use of a high-speed optical link, initially developed for the CERN ALICE experiment, combined with real-time Linux (RTAI) to achieve these goals.
Slides THCHMUST02 [2.272 MB]
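
The 32 Hz real-time drive cycle described above can be pictured as a periodic loop with an absolute deadline. The sketch below uses plain POSIX clock_nanosleep() as a stand-in; the production system relies on RTAI for hard real-time guarantees, and send_parameters()/acquire_frame() are placeholders.

    /* Sketch of a fixed 32 Hz send/acquire cycle. Sleeping to an absolute
       deadline (TIMER_ABSTIME) keeps the period from drifting. */
    #include <time.h>

    #define PERIOD_NS (1000000000L / 32)   /* 31.25 ms per cycle */

    static void send_parameters(void) { /* write correlator settings */ }
    static void acquire_frame(void)   { /* read one data frame */ }

    int main(void)
    {
        struct timespec next;
        clock_gettime(CLOCK_MONOTONIC, &next);
        for (;;) {
            send_parameters();
            acquire_frame();
            next.tv_nsec += PERIOD_NS;     /* advance the absolute deadline */
            if (next.tv_nsec >= 1000000000L) {
                next.tv_nsec -= 1000000000L;
                next.tv_sec  += 1;
            }
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        }
    }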
 
THCHMUST03 A New Fast Data Logger and Viewer at Diamond: the FA Archiver 1244
 
  • M.G. Abbott, G. Rehm, I. Uzun
    Diamond, Oxfordshire, United Kingdom
 
  At the Diamond Light Source, position data from 168 Electron Beam Position Monitors (BPMs) and some X-Ray BPMs is distributed over the Fast Acquisition communications network at an update rate of 10 kHz; the total aggregate data rate is around 15 MB/s. The data logger described here (the FA Archiver) captures this entire data stream to disk in real time, re-broadcasts selected subsets of the live stream to interested clients, and allows rapid access to any part of the saved data. The archive is saved into a rolling buffer, allowing retrieval of detailed beam position data from any time in the last four days. A simple socket-based interface to the FA Archiver allows easy access to both the stored and live data from a variety of clients. Clients include a graphical viewer for visualising the motion or spectrum of a single BPM in real time, a command-line tool for retrieving any part of the stored data by time of day, and Matlab scripts for exploring the dataset, helped by the storage of decimated minimum, maximum, and mean data.
Slides THCHMUST03 [0.482 MB]
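
The abstract mentions a simple socket-based interface for both stored and live data. The client sketch below shows the general shape of such access in C; the host name, port, and request string are invented placeholders, not the FA Archiver's actual protocol.

    /* Minimal sketch of a TCP client for a stream server such as the FA
       Archiver. All protocol details here are assumed for illustration. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <netdb.h>
    #include <sys/socket.h>

    int main(void)
    {
        struct addrinfo hints = { .ai_socktype = SOCK_STREAM }, *res;
        if (getaddrinfo("fa-archiver.example", "8888", &hints, &res) != 0)
            return EXIT_FAILURE;

        int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) < 0)
            return EXIT_FAILURE;
        freeaddrinfo(res);

        /* Hypothetical request: subscribe to the live stream of one BPM. */
        const char *req = "SUBSCRIBE BPM 42\n";
        write(fd, req, strlen(req));

        char buf[4096];
        ssize_t n;
        while ((n = read(fd, buf, sizeof buf)) > 0)
            fwrite(buf, 1, (size_t)n, stdout);   /* raw position samples */

        close(fd);
        return EXIT_SUCCESS;
    }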
 
THCHMUST04 Free and Open Source Software at CERN: Integration of Drivers in the Linux Kernel 1248
 
  • J.D. González Cobas, S. Iglesias Gonsálvez, J.H. Lewis, J. Serrano, M. Vanga
    CERN, Geneva, Switzerland
  • E.G. Cota
    Columbia University, NY, USA
  • A. Rubini, F. Vaga
    University of Pavia, Pavia, Italy
 
  We describe the experience acquired during the integration of the tsi148 driver into the main Linux kernel tree. The benefits (and some of the drawbacks) for long-term software maintenance are analysed, the most immediate benefit being the support and quality review added by an enormous community of skilled developers. Indirect consequences, no less important, are also analysed: a serious impact on the style of the development process, the use of cutting-edge tools and technologies supporting development, the adoption of the very strict standards enforced by the Linux kernel community, and so on. These elements were also exported to the hardware development process in our section, and we explain how they were used with a particular example in mind: the development of the FMC family of boards following the Open Hardware philosophy, and how its architecture must fit the Linux model. This delicate interplay of hardware and software architectures is a perfect showcase of the benefits we gain from the strategic decision to have our drivers integrated in the kernel. Finally, the case of a whole family of CERN-developed drivers for data acquisition modules, the prospects for their integration into the kernel, and the adoption of a model parallel to Comedi are taken as an example of how this approach will perform in the future.
Slides THCHMUST04 [0.777 MB]
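
For readers unfamiliar with the integration work discussed above, this is the minimal skeleton from which any upstreamable Linux driver starts; it is generic kernel boilerplate, not the tsi148 driver itself.

    /* Minimal Linux kernel module skeleton. */
    #include <linux/module.h>
    #include <linux/init.h>

    static int __init demo_init(void)
    {
        pr_info("demo: loaded\n");
        return 0;                 /* non-zero would abort loading */
    }

    static void __exit demo_exit(void)
    {
        pr_info("demo: unloaded\n");
    }

    module_init(demo_init);
    module_exit(demo_exit);

    MODULE_LICENSE("GPL");        /* GPL compatibility matters upstream */
    MODULE_DESCRIPTION("Minimal module skeleton");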
 
THCHMUST05 The Case for Soft-CPUs in Accelerator Control Systems 1252
 
  • W.W. Terpstra
    GSI, Darmstadt, Germany
 
  The steady improvements in Field Programmable Gate Array (FPGA) performance, size, and cost have driven their ever increasing use in science and industry. As FPGA sizes continue to increase, more and more devices and logic are moved from external chips to FPGAs. For simple hardware devices, the savings in board area and ASIC manufacturing setup are compelling. For more dynamic logic, the trade-off is not always as clear. Traditionally, this has been the domain of CPUs and software programming languages. In hardware designs already including an FPGA, it is tempting to remove the CPU and implement all logic in the FPGA, saving component costs and increasing performance. However, that logic must then be implemented in the more constraining hardware description languages, cannot be as easily debugged or traced, and typically requires significant FPGA area. For performance-critical tasks this trade-off can make sense. However, for the myriad slower and dynamic tasks, software programming languages remain the better choice. One great benefit of a CPU is that it can perform many tasks. Thus, by including a small "Soft-CPU" inside the FPGA, all of the slower tasks can be aggregated into a single component. These tasks may then re-use existing software libraries, debugging techniques, and device drivers, while retaining ready access to the FPGA's internals. This paper discusses requirements for using Soft-CPUs in this niche, especially for the FAIR project. Several open-source alternatives will be compared and recommendations made for the best way to leverage a hybrid design.  
Slides THCHMUST05 [0.446 MB]
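
The division of labour the paper argues for, fast logic in the fabric and slow, dynamic tasks on a soft-CPU, usually meets at memory-mapped registers. The firmware sketch below shows that pattern in C; the base address and register layout are invented for illustration.

    /* Sketch of soft-CPU firmware polling FPGA logic through
       memory-mapped registers. Addresses and layout are assumed. */
    #include <stdint.h>

    #define CTRL_BASE  0x80000000u               /* assumed bus address */
    #define REG(off)   (*(volatile uint32_t *)(CTRL_BASE + (off)))

    #define REG_STATUS 0x00u                     /* bit 0: event pending */
    #define REG_EVENT  0x04u                     /* event identifier */
    #define REG_ACK    0x08u                     /* write 1 to acknowledge */

    static void handle_event(uint32_t ev)
    {
        (void)ev;   /* e.g. update counters or answer a host request */
    }

    void poll_events(void)
    {
        for (;;) {
            while ((REG(REG_STATUS) & 1u) == 0)
                ;                                /* fast paths stay in the fabric */
            handle_event(REG(REG_EVENT));        /* slow handling happens in C */
            REG(REG_ACK) = 1u;
        }
    }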
 
THCHMUST06 The FAIR Timing Master: A Discussion of Performance Requirements and Architectures for a High-precision Timing System 1256
 
  • M. Kreider
    GSI, Darmstadt, Germany
  • M. Kreider
    Hochschule Darmstadt, University of Applied Science, Darmstadt, Germany
 
  Production chains in a particle accelerator are complex structures with many interdependencies and multiple paths to consider, ranging from system initialisation and synchronisation of numerous machines to interlock handling and appropriate contingency measures such as beam-dump scenarios. The FAIR facility will employ White Rabbit, a time-based system which delivers an instruction and a corresponding execution time to a machine. In order to meet the deadlines in any given production chain, instructions need to be sent out ahead of time; for this purpose, code execution and message delivery times need to be known in advance. The FAIR Timing Master must be reliably capable of satisfying these timing requirements as well as being fault tolerant. Event sequences of recorded production chains indicate that low reaction times to internal and external events and fast, parallel execution are required. This suggests a slim architecture devised especially for this purpose: using the thread model of an OS or other high-level programs on a generic CPU would be counterproductive when trying to achieve deterministic processing times. This paper analyses these requirements and compares known processor and virtual-machine architectures and the possibilities of parallelisation in programmable hardware. In addition, existing proposals at GSI are checked against these findings. The final goal is to determine the best instruction set for modelling any given production chain and to devise a suitable architecture to execute these models.
Slides THCHMUST06 [2.757 MB]
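
The central mechanism above, delivering an instruction together with its execution time far enough ahead of the deadline, reduces to a small data structure and a dispatch check. The sketch below is an illustration only, not the FAIR message format; the field names and the lead-time margin are assumptions.

    /* Sketch of an "instruction plus execution time" timing message and
       the lead-time check a timing master must make before sending it. */
    #include <stdint.h>
    #include <stdbool.h>

    struct timing_msg {
        uint64_t exec_time_ns;    /* absolute execution time, e.g. TAI ns */
        uint32_t event_id;        /* what the receiving machine should do */
        uint32_t param;           /* event-specific parameter */
    };

    /* Worst-case delivery plus decode latency the master must budget for. */
    #define LEAD_TIME_NS 500000ULL   /* 500 us, illustrative */

    /* A message may only be dispatched if it will still arrive ahead of its
       deadline; otherwise the chain must take a contingency path. */
    bool can_dispatch(const struct timing_msg *m, uint64_t now_ns)
    {
        return m->exec_time_ns >= now_ns + LEAD_TIME_NS;
    }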