Author: Serrano, J.
Paper | Title | Page
TUBAULT04 Open Hardware for CERN’s Accelerator Control Systems 554
 
  • E. Van der Bij, P. Alvarez, M. Ayass, A. Boccardi, M. Cattin, C. Gil Soriano, E. Gousiou, S. Iglesias Gonsálvez, G. Penacoba Fernandez, J. Serrano, N. Voumard, T. Włostowski
    CERN, Geneva, Switzerland
 
The accelerator control systems at CERN will be renovated, and many electronic modules will be redesigned because the modules they replace can no longer be bought or use obsolete components. The modules used in the control systems are diverse: analog and digital I/O, level converters and repeaters, serial links and timing modules. Around 120 modules are supported in total, used in systems such as beam instrumentation, cryogenics and power converters. Only a small percentage of the modules currently in use are commercially available; most were designed specifically at CERN. The new developments are based on VITA and PCI-SIG standards such as FMC (FPGA Mezzanine Card), PCI Express and VME64x with transition modules. The public-domain Wishbone specification is used as the system-on-chip interconnect. For the renovation, it is considered imperative to have access to the full hardware design and firmware of each board, so that problems can be resolved quickly by CERN engineers or their collaborators. To attract partners that are not necessarily part of the existing particle physics networks, the new projects are developed in a fully 'Open' fashion. This allows strong collaborations that result in better, reusable designs. Within this Open Hardware project, new ways of working with industry are being tested, with the aim of proving that there is no contradiction between commercial off-the-shelf products and openness, and that industry can be involved at all stages, from design to production and support.
Slides TUBAULT04 [7.225 MB]
 
WEBHMULT03 EtherBone - A Network Layer for the Wishbone SoC Bus 642
 
  • M. Kreider, W.W. Terpstra
    GSI, Darmstadt, Germany
  • J.H. Lewis, J. Serrano, T. Włostowski
    CERN, Geneva, Switzerland
 
Today there are several System-on-a-Chip (SoC) bus systems. Typically, these buses are confined on-chip and rely on higher-level components to communicate with the outside world. Taking these systems a step further, we see the possibility of extending the reach of the SoC bus to remote FPGAs or processors. This leads to the idea of the EtherBone (EB) core, which connects a Wishbone (WB) Ver. 4 bus to remote peripheral devices over a Gigabit Ethernet network link. EB acts as a transparent interconnect module towards the attached WB bus devices. Address information and data from one or more WB bus cycles are preceded by a descriptive header and encapsulated in a UDP/IP packet. Because of this standards compliance, EB is able to traverse Wide Area Networks and is therefore not bound to a geographic location. Due to the low-level nature of the WB bus, EB provides a sound basis for remote hardware tools such as a JTAG debugger, In-System Programmer (ISP), boundary-scan interface or logic analyser module. EB was developed in the scope of the White Rabbit Timing Project (WR) at CERN and GSI/FAIR, which employs Gigabit Ethernet technology to communicate with memory-mapped slave devices. WR will use EB as a means to issue commands to its timing nodes and control connected accelerator hardware.
Slides WEBHMULT03 [1.547 MB]
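
The abstract describes WB bus cycles being wrapped behind a descriptive header inside a UDP/IP packet. The C sketch below illustrates that encapsulation idea only; the header layout, magic value, field names, port number and target address are all assumptions made for illustration, not the published Etherbone wire format.

    /* Illustrative sketch of EtherBone-style encapsulation: one Wishbone
     * write record preceded by a descriptive header inside a UDP payload.
     * The layout here is a made-up example, NOT the real EB wire format. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    struct eb_header {          /* hypothetical descriptive header */
        uint16_t magic;         /* marks the payload as EtherBone */
        uint8_t  version;       /* protocol version */
        uint8_t  flags;         /* e.g. address/data width hints */
        uint8_t  n_writes;      /* WB write entries that follow */
        uint8_t  n_reads;       /* WB read entries that follow */
    } __attribute__((packed));

    struct wb_write {           /* one Wishbone bus write: address + data */
        uint32_t addr;
        uint32_t data;
    } __attribute__((packed));

    int main(void)
    {
        uint8_t pkt[sizeof(struct eb_header) + sizeof(struct wb_write)];
        struct eb_header hdr = { htons(0x4542), 1, 0, 1, 0 };  /* "EB" */
        struct wb_write  wr  = { htonl(0x20000400), htonl(0xCAFEBABE) };

        memcpy(pkt, &hdr, sizeof hdr);
        memcpy(pkt + sizeof hdr, &wr, sizeof wr);

        int s = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in dst = { 0 };
        dst.sin_family = AF_INET;
        dst.sin_port   = htons(60368);            /* placeholder port */
        inet_pton(AF_INET, "192.168.1.10", &dst.sin_addr);

        /* UDP/IP encapsulation is what lets the cycle cross routed WANs. */
        sendto(s, pkt, sizeof pkt, 0, (struct sockaddr *)&dst, sizeof dst);
        close(s);
        return 0;
    }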
 
WEMMU007 Reliability in a White Rabbit Network 698
 
  • M. Lipiński, J. Serrano, T. Włostowski
    CERN, Geneva, Switzerland
  • C. Prados
    GSI, Darmstadt, Germany
 
White Rabbit (WR) is a time-deterministic, low-latency Ethernet-based network which enables transparent, sub-nanosecond-accuracy timing distribution. It is being developed to replace the General Machine Timing (GMT) system currently used at CERN and will become the foundation for the control system of the Facility for Antiproton and Ion Research (FAIR) at GSI. High reliability is an important issue in WR's design, since unavailability of the accelerator's control system translates directly into expensive downtime of the machine. A typical WR network is required to lose no more than a single message per year. Due to WR's complexity, the translation of this real-world requirement into a reliability requirement is an interesting problem in its own right: a WR network is considered functional only if it provides all of its services to all of its clients at all times. This paper defines reliability in WR and describes how it was addressed by dividing it into sub-domains: deterministic packet delivery, data redundancy, topology redundancy and clock resilience. The studies show that the Mean Time Between Failures (MTBF) of the WR network is the main factor affecting its reliability. Therefore, probability calculations for different topologies were performed using Fault Tree Analysis and analytic estimations. The results show that the requirements on WR are demanding: design changes might be needed, and further in-depth studies, e.g. Monte Carlo simulations, are required. A direction for further investigation is therefore proposed.
Slides WEMMU007 [0.689 MB]
Poster WEMMU007 [1.080 MB]
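
The abstract ties network reliability to component MTBF and to topology redundancy. As a hedged illustration of the kind of calculation involved, assuming exponentially distributed failures (a constant failure rate, which is a modelling assumption here, not necessarily the one used in the paper), the reliability of a single component over a mission time $t$ and of a duplicated (redundant) pair can be written as:

    R(t) = e^{-\lambda t} = e^{-t/\mathrm{MTBF}}

    R_{\text{series}}(t) = \prod_{i=1}^{n} R_i(t), \qquad
    R_{\text{pair}}(t) = 1 - \bigl(1 - R(t)\bigr)^{2}

For example, with an assumed MTBF of $10^{5}$ hours and $t \approx 8766$ hours (one year), a single component gives $R \approx e^{-0.088} \approx 0.916$, while a redundant pair gives $1 - (0.084)^{2} \approx 0.993$; these numbers are illustrative only, not figures from the paper. This is why a fault tree over the chosen topology, rather than a single component MTBF, determines whether the one-lost-message-per-year requirement can be met.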
 
THCHMUST04 Free and Open Source Software at CERN: Integration of Drivers in the Linux Kernel 1248
 
  • J.D. González Cobas, S. Iglesias Gonsálvez, J.H. Lewis, J. Serrano, M. Vanga
    CERN, Geneva, Switzerland
  • E.G. Cota
    Columbia University, NY, USA
  • A. Rubini, F. Vaga
    University of Pavia, Pavia, Italy
 
We describe the experience acquired during the integration of the tsi148 driver into the mainline Linux kernel tree. The benefits (and some of the drawbacks) for long-term software maintenance are analysed, the most immediate benefit being the support and quality review provided by an enormous community of skilled developers. Indirect consequences, no less important, are also analysed: a serious impact on the style of the development process, the use of cutting-edge tools and technologies supporting development, the adoption of the very strict standards enforced by the Linux kernel community, etc. These elements were also carried over to the hardware development process in our section, and we explain how they were applied with a particular example in mind: the development of the FMC family of boards following the Open Hardware philosophy, and how its architecture must fit the Linux driver model. This delicate interplay of hardware and software architectures is a perfect showcase of the benefits gained from the strategic decision to have our drivers integrated in the kernel. Finally, the case of a whole family of CERN-developed drivers for data acquisition modules, the prospects for their integration in the kernel, and the adoption of a model parallel to Comedi are taken as an example of how this approach will perform in the future.
Slides THCHMUST04 [0.777 MB]
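
The abstract centres on upstreaming drivers into the mainline kernel and on the coding standards that process enforces. The skeleton below is a minimal sketch of those conventions (the module_init/module_exit pattern, pr_* logging, and the GPL licence tag expected for mainline inclusion); it is illustrative only and is not derived from the actual tsi148 driver source.

    /* Minimal Linux kernel module skeleton, showing the structure and
     * metadata conventions of code submitted for mainline inclusion.
     * Illustrative example only; unrelated to the tsi148 driver. */
    #include <linux/init.h>
    #include <linux/module.h>

    static int __init demo_init(void)
    {
            pr_info("demo: loaded\n");
            return 0;               /* non-zero return aborts loading */
    }

    static void __exit demo_exit(void)
    {
            pr_info("demo: unloaded\n");
    }

    module_init(demo_init);
    module_exit(demo_exit);

    MODULE_LICENSE("GPL");          /* required to use many kernel symbols */
    MODULE_AUTHOR("Example");
    MODULE_DESCRIPTION("Skeleton module in mainline kernel style");

Built against the kernel headers with an out-of-tree Makefile, such a module loads with insmod and logs through the kernel ring buffer (dmesg); the strict review culture described in the paper applies to everything from this structure down to whitespace.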