Keyword: Linux
Paper Title Other Keywords Page
MOMAU007 How to Maintain Hundreds of Computers Offering Different Functionalities with Only Two System Administrators controls, software, database, EPICS 56
 
  • R.A. Krempaska, A.G. Bertrand, C.E. Higgs, R. Kapeller, H. Lutz, M. Provenzano
    Paul Scherrer Institut, 5232 Villigen PSI, Switzerland
 
The Controls section at PSI is responsible for the control systems of four accelerators: two proton accelerators, HIPA and PROSCAN, the Swiss Light Source (SLS) and the Free Electron Laser (SwissFEL) Test Facility. On top of that, we have 18 additional SLS beamlines to control. The control system is mainly composed of so-called Input Output Controllers (IOCs), which require a complete and complex computing infrastructure in order to be booted, developed, debugged and monitored. This infrastructure currently consists mainly of Linux computers, such as boot servers, port servers and configuration servers (called save-and-restore servers). Overall, the constellation of computers and servers composing the control system amounts to about five hundred Linux machines, which can be split into 38 different configurations based on the work each of these systems needs to provide. For the administration of all this we employ only two system administrators, who are responsible for the installation, configuration and maintenance of those computers. This paper shows which tools are used to tackle this difficult task: Puppet (an open-source Linux tool we further adapted) and many in-house developed tools offering an overview of the computers, their installation status and the relations between the different servers and computers.
Slides MOMAU007 [0.384 MB]
Poster MOMAU007 [0.708 MB]
 
MOPKN012 HyperArchiver: An EPICS Archiver Prototype Based on Hypertable EPICS, controls, embedded, target 114
 
  • M.G. Giacchini, A. Andrighetto, G. Bassato, L.G. Giovannini, M. Montis, G.P. Prete, J.A. Vásquez
    INFN/LNL, Legnaro (PD), Italy
  • J. Jugo
    University of the Basque Country, Faculty of Science and Technology, Bilbao, Spain
  • K.-U. Kasemir
    ORNL, Oak Ridge, Tennessee, USA
  • R. Lange
    HZB, Berlin, Germany
  • R. Petkus
    BNL, Upton, Long Island, New York, USA
  • M. del Campo
    ESS-Bilbao, Zamudio, Spain
 
This work started in the context of the NSLS2 project at Brookhaven National Laboratory. The NSLS2 control system foresees a very high number of PVs and has strict requirements on archiving/retrieval rates: our goal was to store 10K PVs/sec and retrieve 4K PVs/sec for a group of four signals. HyperArchiver is an EPICS Archiver implementation powered by Hypertable, an open-source database whose internal architecture is derived from Google's BigTable. We discuss the performance of HyperArchiver and present the results of some comparative tests.
HyperArchiver: http://www.lnl.infn.it/~epics/joomla/archiver.html
EPICS: http://www.aps.anl.gov/epics/
 
Poster MOPKN012 [1.231 MB]
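
Rate targets like the ones quoted above can be sanity-checked against any candidate backend with a small timing harness. The sketch below is a minimal example of such a measurement; the dict-backed archive_sample is a hypothetical stand-in for a real HyperArchiver/Hypertable insert, whose API the abstract does not show.

    import time

    def archive_sample(store, pv_name, value):
        # Hypothetical stand-in for a real archiver insert call.
        store.setdefault(pv_name, []).append((time.time(), value))

    def store_rate(n_samples=100_000):
        store = {}
        t0 = time.perf_counter()
        for i in range(n_samples):
            archive_sample(store, "SIM:PV%03d" % (i % 100), float(i))
        return n_samples / (time.perf_counter() - t0)

    if __name__ == "__main__":
        print("store rate: %.0f samples/sec (target: 10000)" % store_rate())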
 
MOPMN012 The Electronic Logbook for LNL Accelerators experiment, ion, software, booster 260
 
  • S. Canella, O. Carletto
    INFN/LNL, Legnaro (PD), Italy
 
In spring 2009 all run-time data concerning the particle accelerators at LNL (Laboratori Nazionali di Legnaro) were still registered mainly on paper. TANDEM and its negative source data were logged in a large-format paper logbook; for the ALPI booster and the PIAVE injector with its positive ECR source, a number of independent paper notebooks were used, together with plain data files containing raw instant snapshots of each superconducting RF accelerator. At that time the decision was taken to build a new tool for the general electronic registration of accelerator run-time data. The result of this effort, the LNL electronic logbook, is presented here.
Poster MOPMN012 [8.543 MB]
 
MOPMS014 GSI Operation Software: Migration from OpenVMS to Linux software, operation, controls, linac 351
 
  • R. Huhmann, G. Fröhlich, S. Jülicher, V.RW. Schaa
    GSI, Darmstadt, Germany
 
The current operation software at GSI, controlling the linac, beam transfer lines, synchrotron and storage ring, has been developed over a period of more than two decades using OpenVMS, now on Alpha workstations. The GSI accelerator facilities will serve as an injector chain for the new FAIR accelerator complex, for which a control system is currently being developed. To enable reuse and integration of parts of the distributed GSI software system, in particular the linac operation software, within the FAIR control system, the corresponding software components must be migrated to Linux. Interoperability with FAIR controls applications is achieved by adding a generic middleware interface accessible from Java applications. For porting applications to Linux, a set of libraries and tools has been developed covering the necessary OpenVMS system functionality. Currently, core applications and services have already been ported or rewritten and functionally tested, but are not yet in operational use. This paper presents the current status of the project and concepts for putting the migrated software into operation.
 
MOPMS023 LHC Magnet Test Benches Controls Renovation controls, network, hardware, interface 368
 
  • A. Raimondo, O.O. Andreassen, D. Kudryavtsev, S.T. Page, A. Rijllart, E. Zorin
    CERN, Geneva, Switzerland
 
The LHC magnet test bench controls were designed in 1996. They were based on VME data acquisition systems and Siemens PLC control and interlock systems. During a review of the renovation of the superconducting laboratories at CERN in 2009, it was decided to replace the VME systems with PXI and the obsolete Sun/Solaris workstations with Linux PCs. This presentation covers the requirements for the new systems in terms of functionality, security, channel count, sampling frequency and precision. We report on the experience gained in commissioning the first series of fixed and mobile measurement systems upgraded to this new platform, compared with the old systems, and also include the experience with the renovated control room.
Poster MOPMS023 [1.310 MB]
 
MOPMS025 Migration from OPC-DA to OPC-UA Windows, toolkit, controls, embedded 374
 
  • B. Farnham, R. Barillère
    CERN, Geneva, Switzerland
 
The OPC-DA specification of OPC has been a highly successful interoperability standard for process automation since 1996, allowing communication between compliant components regardless of vendor. CERN relies on OPC-DA server implementations from various third-party vendors, which provide a standard interface to their hardware. The OPC Foundation has finalized the OPC-UA specification, and OPC-UA implementations are now starting to gather momentum. This presentation gives a brief overview of the headline features of OPC-UA, compares it with OPC-DA, and outlines both the necessity of migrating from OPC-DA and the motivation for migrating to OPC-UA. Feedback from research into the availability of tools and testing utilities is presented, along with a practical overview of what is required from a computing perspective in order to run OPC-UA clients and servers on the CERN network.
Poster MOPMS025 [1.103 MB]
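
From the computing perspective mentioned above, exercising an OPC-UA client takes very little code. A minimal sketch using the third-party Python opcua (FreeOpcUa) package follows; the endpoint URL and node id are placeholders, not CERN values, and the real migration of course targets the vendors' native stacks.

    # pip install opcua  (FreeOpcUa library)
    from opcua import Client

    client = Client("opc.tcp://localhost:4840")   # placeholder endpoint
    client.connect()
    try:
        node = client.get_node("ns=2;s=Device1.Temperature")  # placeholder id
        print("value:", node.get_value())   # one synchronous attribute read
    finally:
        client.disconnect()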
 
MOPMU027 Controls System Developments for the ERL Facility controls, software, interface, electron 498
 
  • J.P. Jamilkowski, Z. Altinbas, D.M. Gassner, L.T. Hoff, P. Kankiya, D. Kayran, T.A. Miller, R.H. Olsen, B. Sheehy, W. Xu
    BNL, Upton, Long Island, New York, USA
 
Funding: This manuscript has been authored by employees of Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U. S. Department of Energy.
The BNL Energy Recovery LINAC (ERL) is a high-beam-current, superconducting RF electron accelerator that is being commissioned to serve as a research and development prototype for a RHIC facility upgrade for electron-ion collisions (eRHIC). Key components of the machine include a laser, a photocathode, and a 5-cell superconducting RF cavity operating at a frequency of 703 MHz. Starting from a foundation based on existing ADO software running on Linux servers and on the VME/VxWorks platforms developed for RHIC, we are developing a controls system that incorporates the wide range of hardware I/O interfaces needed for machine R&D. Details of the system layout, specifications and user interfaces are provided.
 
Poster MOPMU027 [0.709 MB]
 
MOPMU039 ACSys in a Box controls, framework, database, site 522
 
  • C.I. Briegel, D. Finstrom, B. Hendricks, CA. King, R. Neswold, D.J. Nicklaus, J.F. Patrick, A.D. Petrov, C.L. Schumann, J.G. Smedinghoff
    Fermilab, Batavia, USA
 
  Funding: Operated by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the United States Department of Energy.
The Accelerator Control System at Fermilab has evolved to enable this relatively large control system to be encapsulated into a "box", such as a laptop. The goal was to provide a platform isolated from the "online" control system. This platform can be used internally for making major upgrades and modifications without impacting operations. It also provides a standalone environment for research and development, including a turnkey control system for collaborators. Over time, the code base, running on Scientific Linux, has enabled all the salient features of Fermilab's control system to be captured in an off-the-shelf laptop. The anticipated additional benefits of packaging the system include improved maintenance, reliability, documentation and future enhancements.
 
 
WEBHAUST01 LHCb Online Infrastructure Monitoring Tools controls, monitoring, status, Windows 618
 
  • L.G. Cardoso, C. Gaspar, C. Haen, N. Neufeld, F. Varela
    CERN, Geneva, Switzerland
  • D. Galli
    INFN-Bologna, Bologna, Italy
 
The Online System of the LHCb experiment at CERN is composed of a very large number of PCs: around 1500 in a CPU farm for performing the High Level Trigger; around 170 for the control system, running the PVSS SCADA system; and several others for performing data monitoring, reconstruction, storage, and infrastructure tasks like databases. Some PCs run Linux, some run Windows, but all of them need to be remotely controlled and monitored to make sure they are running correctly and to be able, for example, to reboot them whenever necessary. A set of tools was developed to centrally monitor the status of all PCs and PVSS projects needed to run the experiment: a Farm Monitoring and Control (FMC) tool, which provides the lower-level access to the PCs, and a System Overview Tool (developed within the Joint Controls Project, JCOP), which provides a centralized interface to the FMC tool and adds PVSS project monitoring and control. The implementation of these tools has provided a reliable and efficient way to manage the system, both during normal operations and during shutdowns, upgrades or maintenance. This paper presents the particular implementation of these tools in the LHCb experiment and the benefits of their usage in a large-scale heterogeneous system.
Slides WEBHAUST01 [3.211 MB]
 
WEMAU004 Integrating EtherCAT Based IO into EPICS at Diamond EPICS, controls, real-time, Ethernet 662
 
  • R. Mercado, I.J. Gillingham, J. Rowland, K.G. Wilkinson
    Diamond, Oxfordshire, United Kingdom
 
Diamond Light Source is actively investigating the use of EtherCAT-based remote I/O modules for the next phase of photon beamline construction. Ethernet-based I/O in general is attractive because of its reduced equipment footprint, flexible configuration and reduced cabling. EtherCAT offers, in addition, the possibility of using inexpensive, off-the-shelf Ethernet hardware with a throughput comparable to current VME-based solutions. This paper presents the work to integrate EtherCAT-based I/O into the EPICS control system, listing platform decisions, requirement considerations and software design, and discussing the use of real-time pre-emptive Linux extensions to support high-rate devices that require deterministic sampling.
Slides WEMAU004 [0.057 MB]
Poster WEMAU004 [0.925 MB]
 
WEMAU010 Web-based Control Application using WebSocket controls, GUI, Windows, experiment 673
 
  • Y. Furukawa
    JASRI/SPring-8, Hyogo-ken, Japan
 
The WebSocket [1] brings asynchronous full-duplex communication between a web-based (i.e. JavaScript-based) application and a web server. WebSocket started as part of the HTML5 standardization effort but has since been separated from HTML5 and is developed independently. Using WebSocket, it becomes easy to develop platform-independent presentation-layer applications for accelerator and beamline control software. In addition, no application program has to be installed on client computers except for the web browser. WebSocket-based applications communicate with the WebSocket server using simple text-based messages, so WebSocket is applicable to message-based control systems like MADOCA, which was developed for the SPring-8 control system. A simple WebSocket server for the MADOCA control system and a simple motor-control application were successfully built as a first trial of WebSocket control applications. Using Google Chrome (version 10.x) on Debian/Linux and Windows 7, Opera (version 11.0 beta) on Debian/Linux, and Safari (version 5.0.3) on Mac OS X as clients, the motors can be controlled using the WebSocket-based web application. More complex applications are now under development for synchrotron radiation experiments, combining WebSocket with other HTML5 features.
[1] http://websocket.org/
 
Poster WEMAU010 [44.675 MB]
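
Because the protocol carries plain text, a client is only a few lines. This sketch uses the third-party websocket-client package; the URL and the command syntax are invented for illustration, since the real MADOCA server defines its own message format.

    # pip install websocket-client
    from websocket import create_connection

    ws = create_connection("ws://control-host:8080/madoca")  # placeholder URL
    ws.send("put motor_theta 12.5")   # plain-text command (format hypothetical)
    print("reply:", ws.recv())        # synchronous text reply
    ws.close()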
 
WEMMU005 Fabric Management with Diskless Servers and Quattor on LHCb controls, experiment, embedded, collider 691
 
  • P. Schweitzer, E. Bonaccorsi, L. Brarda, N. Neufeld
    CERN, Geneva, Switzerland
 
Large scientific experiments nowadays very often use large computer farms to process the events acquired from the detectors. In LHCb a small sysadmin team manages the 1400 servers of the LHCb Event Filter Farm, as well as a wide variety of control servers for the detector electronics and infrastructure computers: file servers, gateways, DNS, DHCP and others. This variety of servers could not be handled without a solid fabric management system. We chose the Quattor toolkit for this task. We present our use of this toolkit, with an emphasis on how we handle our diskless nodes (Event Filter Farm nodes and computers embedded in the acquisition electronics cards). We show our current tests to replace the standard (Red Hat/Scientific Linux) way of handling diskless nodes with fusion filesystems, and how this improves fabric management.
Slides WEMMU005 [0.119 MB]
Poster WEMMU005 [0.602 MB]
 
WEPKN020 TANGO Integration of a SIMATIC WinCC Open Architecture SCADA System at ANKA TANGO, controls, synchrotron, software 749
 
  • T. Spangenberg, K. Cerff, W. Mexner
    Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany
  • V. Kaiser
    Softwareschneiderei GmbH, Karlsruhe, Germany
 
At the ANKA synchrotron facility, the WinCC OA supervisory control and data acquisition (SCADA) system provides a powerful and very scalable tool to manage the enormous variety of technical equipment relevant for housekeeping and beamline operation. Crucial to the applicability of a SCADA system at ANKA are the options it provides for integration into other control concepts, even if these work on different time scales or follow different management concepts and control standards. These aspects in particular have resulted in different control approaches for the technical services, the storage ring and the beamlines. Beamline control at ANKA is mainly based on TANGO and SPEC, the latter expanded with TANGO server capabilities. This approach implies the essential need for a stable and fast link to the slower WinCC OA SCADA system, one that does not increase the dead time of a measurement. The open architecture of WinCC OA allows a smooth integration in both directions and therefore offers options to combine potential advantages, e.g. native hardware drivers or convenient graphical capabilities. The implemented solution is presented and discussed using selected examples.
Poster WEPKN020 [0.378 MB]
 
WEPKN027 The Performance Test of F3RP61 and Its Applications in CSNS Experimental Control System controls, EPICS, target, embedded 763
 
  • J. Zhuang, Y.P. Chu, D.P. Jin, J.J. Li
    IHEP Beijing, Beijing, People's Republic of China
 
F3RP61 is an embedded PLC developed by Yokogawa, Japan, based on the PowerPC 8347 platform; Linux and EPICS can run on it. We performed several tests on this device, covering CPU performance, network performance, CA access time and the scan-time stability of EPICS. We also compared the F3RP61 with the MVME5100, the most widely used IOC in BEPCII. These tests and comparisons clarify the performance and capabilities of the F3RP61. It can be used in the experiment control system of CSNS (China Spallation Neutron Source) as a communication node between the front-end control layer and the EPICS layer, and in some cases the F3RP61 can also take on additional functions such as control tasks.
Poster WEPKN027 [0.200 MB]
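
A CA access-time test of the kind described above can be reproduced against any IOC with pyepics; the record name below is a placeholder, and the test of course requires a reachable IOC.

    # pip install pyepics
    import time
    from epics import PV

    pv = PV("TEST:AI1")                  # placeholder record name
    pv.wait_for_connection(timeout=5.0)

    n = 1000
    t0 = time.perf_counter()
    for _ in range(n):
        pv.get(use_monitor=False)        # force a network round trip per read
    dt = time.perf_counter() - t0
    print("mean CA get latency: %.3f ms" % (1000.0 * dt / n))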
 
WEPKS004 ISAC EPICS on Linux: The March of the Penguins controls, EPICS, ISAC, hardware 778
 
  • J.E. Richards, R.B. Nussbaumer, S. Rapaz, G. Waters
    TRIUMF, Canada's National Laboratory for Particle and Nuclear Physics, Vancouver, Canada
 
The DC linear accelerators of the ISAC radioactive beam facility at TRIUMF do not impose rigorous timing constraints on the control system, so a real-time operating system is not essential for device control. The ISAC control system is completing a move to the open-source Linux operating system for hosting all EPICS IOCs. The IOC platforms include GE Fanuc VME-based CPUs for the control of most optics and diagnostics, rack-mounted servers for supervising PLCs, small desktop PCs for GPIB and serial "one-of-a-kind" instruments, as well as embedded ARM processors controlling CAN-bus devices that provide a suitcase-sized control system. This article focuses on the experience of creating a customized Linux distribution for front-end IOC deployment. The rationale, a roadmap of the process, and the efficiency advantages in personnel training and system management realized by using a single OS are discussed.
 
WEPKS026 A C/C++ Build System Based on Maven for the LHC Controls System target, controls, pick-up, framework 848
 
  • J. Nguyen Xuan, B. Copy, M. Dönszelmann
    CERN, Geneva, Switzerland
 
The CERN accelerator controls system, mainly written in Java and C/C++, nowadays consists of 50 projects and 150 active developers. The controls group has decided to unify the development process and standards (e.g. project layout) using Apache Maven and Sonatype Nexus. Maven is the de facto build tool for Java; it deals with versioning and dependency management, whereas Nexus is a repository manager. C/C++ developers were struggling to manage their dependencies on other CERN projects: no versioning was applied, the libraries had to be compiled and made available for several platforms and architectures, and there was no dependency management mechanism. This resulted in very complex Makefiles which were difficult to maintain. Even though Maven is primarily designed for Java, a plugin (Maven NAR [1]) adapts the build process to native programming languages on different operating systems and platforms. However, C/C++ developers were not keen to abandon their current Makefiles. Hence our approach was to combine the best of the two worlds: NAR/Nexus and Makefiles. Maven NAR manages the dependencies and the versioning, and creates a file with the linker and compiler options needed to include the dependencies. The Makefiles carry out the build process to generate the binaries. Finally, the resulting artifacts (binaries, header files, metadata) are versioned and stored in a central Nexus repository. Early experiments were conducted in the scope of the controls group's testbed. Some existing projects have been successfully converted to this solution, and some new projects use this implementation from the start.
[1] http://cern.ch/jnguyenx/MavenNAR.html
 
Poster WEPKS026 [0.518 MB]
 
WEPMN008 Function Generation and Regulation Libraries and their Application to the Control of the New Main Power Converter (POPS) at the CERN CPS controls, software, simulation, real-time 886
 
  • Q. King, S.T. Page, H. Thiesen
    CERN, Geneva, Switzerland
  • M. Veenstra
    EBG MedAustron, Wr. Neustadt, Austria
 
Power converter control for the LHC is based on an embedded control computer called a Function Generator/Controller (FGC). Every converter includes an FGC, with responsibility for the generation of the reference current as a function of time and the regulation of the circuit current, as well as control of the converter state. With many new converter controls software classes in development, it was decided to generalise several key components of the FGC software in the form of C libraries: function generation in libfg; regulation, limits and simulation in libreg; and DCCT, ADC and DAC calibration in libcal. These libraries were first used in the software class dedicated to controlling the new 60 MW main power converter (POPS) at the CERN CPS, where regulation of both the magnetic field and the circuit current is supported. This paper reports on the functionality provided by each library, in particular libfg and libreg. The libraries are already being used by software classes in development for the next-generation FGC for the Linac4 converters, as well as the CERN SPS converter controls (MUGEF) and the MedAustron converter regulation board (CRB).
Poster WEPMN008 [3.304 MB]
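
The core idea of function generation, a reference value computed as a pure function of time and sampled by the regulation loop, can be sketched in a few lines. This is an illustration of the concept only, not the libfg API; the ramp parameters are invented.

    def ramp_reference(t, i_start, i_end, rate):
        # Piecewise-linear current reference: ramp at `rate` A/s, then hold.
        t_ramp = abs(i_end - i_start) / rate
        if t <= 0.0:
            return i_start
        if t >= t_ramp:
            return i_end
        return i_start + (i_end - i_start) * (t / t_ramp)

    # Sample the reference every 0.5 s, as a regulation loop would each cycle.
    for k in range(6):
        t = 0.5 * k
        print("t=%4.1f s  I_ref=%6.1f A" % (t, ramp_reference(t, 0.0, 100.0, 50.0)))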
 
WEPMN014 The Software and Hardware Architectural Design of the Vessel Thermal Map Real-Time System in JET real-time, plasma, controls, network 905
 
  • D. Alves, A. Neto, D.F. Valcárcel
    IPFN, Lisbon, Portugal
  • G. Arnoux, P. Card, S. Devaux, R.C. Felton, A. Goodyear, D. Kinna, P.J. Lomas, P. McCullen, A.V. Stephen, K-D. Zastrow
    CCFE, Abingdon, Oxon, United Kingdom
  • S. Jachmich
    RMA, Brussels, Belgium
 
The installation of ITER-relevant materials for the plasma-facing components (PFCs) in the Joint European Torus (JET) is expected to have a strong impact on the operation and protection of the experiment. In particular, the use of all-beryllium tiles, which deteriorate at a substantially lower temperature than the formerly installed CFC tiles, imposes strict thermal restrictions on the PFCs during operation. Prompt and precise responses are therefore required whenever anomalous temperatures are detected. The new Vessel Thermal Map (VTM) real-time application collects the temperature measurements provided by dedicated pyrometers and infra-red (IR) cameras, groups them according to spatial location and probable offending heat source, and raises alarms that trigger appropriate protective responses. In the context of JET's global scheme for the protection of the new wall, the system is required to run on a 10-millisecond cycle, communicating with other systems through the Real-Time Data Network (RTDN). In order to meet these requirements a commercial off-the-shelf (COTS) solution has been adopted, based on standard x86 multi-core technology, Linux and the Multi-threaded Application Real-Time executor (MARTe) software framework. This paper presents an overview of the system, with particular technical focus on the configuration of its real-time capability and on the benefits of the modular development approach and the advanced tools provided by the MARTe framework.
See the Appendix of F. Romanelli et al., Proceedings of the 23rd IAEA Fusion Energy Conference 2010, Daejeon, Korea
 
Poster WEPMN014 [5.306 MB]
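
The 10 ms cyclic requirement above is the classic fixed-rate loop with absolute deadlines. The sketch below shows only the scheduling pattern; plain Python gives no real-time guarantees, which is why the actual system uses MARTe on a tuned Linux kernel.

    import time

    CYCLE = 0.010   # 10 ms cycle time, as required of the VTM application

    def run(n_cycles, body):
        deadline = time.monotonic()
        overruns = 0
        for _ in range(n_cycles):
            body()                        # acquire, group, check alarms ...
            deadline += CYCLE             # absolute deadline: no drift
            slack = deadline - time.monotonic()
            if slack > 0:
                time.sleep(slack)
            else:
                overruns += 1             # body took longer than the cycle
        return overruns

    print("overruns:", run(300, lambda: None))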
 
WEPMN017 PCI Hardware Support in LIA-2 Control System hardware, controls, interface, operation 916
 
  • D. Bolkhovityanov, P.B. Cheblakov
    BINP SB RAS, Novosibirsk, Russia
 
The LIA-2 control system* is built on cPCI crates with x86-compatible processor boards running Linux. Slow electronics are connected via CAN bus, while fast electronics (4 MHz and 200 MHz fast ADCs and 200 MHz timers) are implemented as cPCI/PMC modules. Several ways to drive PCI control electronics under Linux were examined. Finally, a userspace driver approach was chosen: these drivers communicate with the hardware via a small kernel module which provides access to the PCI BARs and to interrupt handling. This module was named USPCI (User-Space PCI access). This approach dramatically simplifies the creation of drivers, as opposed to kernel drivers, and provides high reliability (because only a tiny and thoroughly debugged piece of code runs in the kernel). The LIA-2 accelerator was successfully commissioned, and the chosen solution has proven adequate and very easy to use. Moreover, USPCI turned out to be a handy tool for the examination and debugging of PCI devices directly from the command line. In this paper the available approaches to working with PCI control hardware under Linux are considered, and the USPCI architecture is described.
* "LIA-2 Linear Induction Accelerator Control System", this conference
 
Poster WEPMN017 [0.954 MB]
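
USPCI itself is in-house code, but the stock-kernel equivalent of its BAR access, mapping a PCI BAR exposed through sysfs into user space, looks like the sketch below (run as root; the device address is a placeholder to be taken from `lspci -D`).

    import mmap, os, struct

    bar0 = "/sys/bus/pci/devices/0000:03:00.0/resource0"  # placeholder device

    fd = os.open(bar0, os.O_RDWR | os.O_SYNC)
    try:
        regs = mmap.mmap(fd, 4096)                    # map first page of BAR0
        (reg0,) = struct.unpack_from("<I", regs, 0)   # 32-bit register at 0x0
        print("register 0x0 = 0x%08x" % reg0)
        regs.close()
    finally:
        os.close(fd)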
 
WEPMN020 New Developments on Tore Supra Data Acquisition Units real-time, data-acquisition, target, controls 922
 
  • F. Leroux, G. Caulier, L. Ducobu, M. Goniche
    Association EURATOM-CEA, St Paul Lez Durance, France
  • G. Antar
    American University of Beirut, Beirut, Lebanon
 
The Tore Supra data acquisition system (DAS) was designed in the early 1980s and has evolved considerably since then. Three generations of data acquisition units still coexist: Multibus, VME and PCI bus systems. The second generation, the VME bus system running the LynxOS real-time operating system (OS), is diskless. The third generation, the PCI bus system, performs extensive data acquisition for infrared and visible video cameras, which produce large amounts of data to handle. Nevertheless, this third generation was until now equipped with a hard drive and a non-real-time operating system, Microsoft Windows. Diskless systems are a better solution for reliability and maintainability, as they share common resources like the kernel and the file system. Moreover, open-source real-time OSes are now available, providing free and convenient solutions for DAS. As a result, it was decided to explore an alternative solution based on an open-source OS and a diskless system for the fourth generation. In 2010, Linux distributions for VME bus and PCI bus systems were evaluated and compared to LynxOS. Linux is now mature enough to be used for DAS, with pre-emptive and real-time features on Motorola PowerPC, x86 and x86 multi-core architectures. The results allowed us to choose a Linux version for the VME and PC platforms for DAS on Tore Supra. In 2011, the Tore Supra DAS dedicated software was ported to a Linux diskless PCI platform. The new generation was successfully tested during a real plasma experiment on one diagnostic. New diagnostics for Tore Supra will be developed with this new setup.
Poster WEPMN020 [0.399 MB]
 
WEPMN027 Fast Scalar Data Buffering Interface in Linux 2.6 Kernel interface, hardware, controls, instrumentation 943
 
  • A. Homs
    ESRF, Grenoble, France
 
Key instrumentation devices like counter/timers, analog-to-digital converters and encoders provide scalar data input. Many of them allow fast acquisitions but do not provide hardware triggering or buffering mechanisms. A Linux 2.4 kernel driver called Hook was developed at the ESRF as a generic software-triggered buffering interface. This work presents the port of the ESRF Hook interface to the Linux 2.6 kernel. The interface distinguishes two independent functional groups: trigger event generators and data channels. Devices in the first group create software events, like hardware interrupts generated by timers or external signals. On each event, one or more device channels in the second group are read and stored in kernel buffers. The event generators and the data channels to be read are fully configurable before each sequence. Designed for fast acquisitions, the Hook implementation is well adapted to multi-CPU systems, where the interrupt latency is notably reduced. On heavily loaded dual-core PCs running standard (non-real-time) Linux, data can be taken at 1 kHz without losing events. Additional features include full integration into the sysfs (/sys) virtual filesystem and hotplug device support.
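
The trigger/channel split described above suggests a very small user-space API. The sketch below imitates that usage pattern with invented /sys and /dev paths; the real attribute names belong to the ESRF driver and are not given in the abstract.

    # Paths are hypothetical illustrations of a Hook-style sysfs layout.
    CHANNELS = "/sys/class/hook/trigger0/channels"
    ENABLE   = "/sys/class/hook/trigger0/enable"
    BUFFER   = "/dev/hook0"

    def configure(channel_names):
        with open(CHANNELS, "w") as f:
            f.write(",".join(channel_names))   # channels read on every event
        with open(ENABLE, "w") as f:
            f.write("1")                       # arm the trigger generator

    def drain(nbytes=4096):
        with open(BUFFER, "rb") as f:
            return f.read(nbytes)              # samples buffered per event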
 
WEPMN037 DEBROS: Design and Use of a Linux-like RTOS on an Inexpensive 8-bit Single Board Computer network, hardware, interface, software 965
 
  • M.A. Davis
    NSCL, East Lansing, Michigan, USA
 
As the power, complexity and capabilities of embedded processors continue to grow, it is easy to forget just how much can be done with inexpensive single-board computers based on 8-bit processors. When the proprietary, non-standard tools from the vendor of one such embedded computer became a major roadblock, I embarked on a project to expand my own knowledge and provide a more flexible, standards-based alternative. Inspired by operating systems such as Unix, Linux and Minix, I wrote DEBROS (the Davis Embedded Baby Real-time Operating System) [1], a fully pre-emptive, priority-based OS with soft real-time capabilities that provides a subset of standard Linux/Unix-compatible system calls such as stdio, BSD sockets, pipes, semaphores, etc. The end result was a much more flexible, standards-based development environment which allowed me to simplify my programming model, expand diagnostic capabilities, and reduce the time spent monitoring and applying updates to the hundreds of devices in the lab currently using this hardware. [2]
[1] http://groups.nscl.msu.edu/controls/files/DEBROS_User_Developer_Manual.doc
[2] http://groups.nscl.msu.edu/controls/
 
Poster WEPMN037 [0.112 MB]
 
WEPMS007 Backward Compatibility as a Key Measure for Smooth Upgrades to the LHC Control System controls, software, operation, feedback 989
 
  • V. Baggiolini, M. Arruat, D. Csikos, R. Gorbonosov, P. Tarasenko, Z. Zaharieva
    CERN, Geneva, Switzerland
 
Now that the LHC is operational, a big challenge is to upgrade the control system smoothly, with minimal downtime and interruptions. Backward compatibility (BC) is a key measure to achieve this: a subsystem with a stable API can be upgraded smoothly. As part of a broader quality assurance effort, the CERN Accelerator Controls group explored methods and tools supporting BC. We investigated two aspects in particular: (1) "incoming dependencies", to know which part of an API is really used by clients, and (2) BC validation, to check that a modification really is backward compatible. We used this approach for Java APIs and for FESA devices (which expose an API in the form of device/property sets). For Java APIs, we gather dependency information by regularly running byte-code analysis on all 1000 jar files that belong to the control system to find incoming dependencies (method calls and inheritance). An Eclipse plug-in we developed shows these incoming dependencies to the developer. If an API method is used by many clients, it has to remain backward compatible; if a method is not used, it can be freely modified. To validate BC, we are exploring the official Eclipse tools (PDE API tools) and others that check BC without the need for invasive technology such as OSGi. For FESA devices, we instrumented key components of our controls system to know which devices and properties are in use. This information is collected in the Controls Database and is used, among other things, by the FESA design tools to prevent the FESA class developer from breaking BC.
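
The incoming-dependency scan can be approximated without any byte-code framework: a compiled class that calls or extends another carries that class's internal name in its constant pool, so a byte search over the class files in each jar finds candidate users. A crude sketch of that idea follows; it is not the CERN tool, which performs full byte-code analysis.

    import sys, zipfile

    def jars_using(class_name, jar_paths):
        needle = class_name.replace(".", "/").encode()   # internal JVM name
        hits = []
        for path in jar_paths:
            with zipfile.ZipFile(path) as jar:
                for entry in jar.namelist():
                    if entry.endswith(".class") and needle in jar.read(entry):
                        hits.append(path)    # record the jar; first hit wins
                        break
        return hits

    if __name__ == "__main__":
        # usage: python scan.py cern.foo.Bar a.jar b.jar ...
        for jar in jars_using(sys.argv[1], sys.argv[2:]):
            print(jar)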
 
WEPMU037 Virtualization for the LHCb Experiment network, controls, experiment, hardware 1157
 
  • E. Bonaccorsi, L. Brarda, M. Chebbi, N. Neufeld
    CERN, Geneva, Switzerland
  • F. Sborzacchi
    INFN/LNF, Frascati (Roma), Italy
 
The LHCb experiment, one of the four large particle-physics detectors at CERN, counts more than 2000 servers and embedded systems in its Online System. As a result of the ever-increasing CPU performance of modern servers, many of the applications in the controls system are excellent candidates for virtualization technologies. We see virtualization as an approach to cut down cost, optimize resource usage and manage the complexity of the IT infrastructure of LHCb. Recently we have added a Kernel-based Virtual Machine (KVM) cluster based on Red Hat Enterprise Virtualization for Servers (RHEV), complementary to the existing Hyper-V cluster devoted exclusively to the virtualization of Windows guests. This paper describes the architecture of our KVM/RHEV-based solution, its integration with the existing Hyper-V infrastructure and the Quattor cluster management tools, and in particular how we run controls applications on a virtualized infrastructure. We present performance results for both the KVM and Hyper-V solutions, the problems encountered, and a description of the management tools developed for the integration with the Online cluster and the LHCb SCADA control system based on PVSS.
 
WEPMU039 Virtual IO Controllers at J-PARC MR using Xen EPICS, controls, operation, network 1165
 
  • N. Kamikubota, N. Yamamoto
    J-PARC, KEK & JAEA, Ibaraki-ken, Japan
  • T. Iitsuka, S. Motohashi, M. Takagi, S.Y. Yoshida
    Kanto Information Service (KIS), Accelerator Group, Ibaraki, Japan
  • H. Nemoto
    ACMOS INC., Tokai-mura, Ibaraki, Japan
  • S. Yamada
    KEK, Ibaraki, Japan
 
The control system for the J-PARC accelerator complex has been developed based on the EPICS toolkit. About 100 traditional ("real") VME-bus computers are used as EPICS IOCs in the control system for J-PARC MR (Main Ring). Recently, we have introduced "virtual" IOCs using Xen, an open-source virtual machine monitor. Scientific Linux with an EPICS iocCore runs on each Xen virtual machine, and EPICS databases for network devices and EPICS soft records can be configured. Multiple virtual IOCs run on a high-performance blade-type server running Scientific Linux as the native OS. A small number of virtual IOCs have been in use in MR operation since October 2010. Experience and future perspectives are discussed.
 
WEPMU040 Packaging of Control System Software software, controls, EPICS, database 1168
 
  • K. Žagar, M. Kobal, N. Saje, A. Žagar
    Cosylab, Ljubljana, Slovenia
  • F. Di Maio, D. Stepanov
    ITER Organization, St. Paul lez Durance, France
  • R. Šabjan
    COBIK, Solkan, Slovenia
 
  Funding: ITER European Union, European Regional Development Fund and Republic of Slovenia, Ministry of Higher Education, Science and Technology
Control system software consists of several parts – the core of the control system, drivers for the integration of devices, configuration for user interfaces, the alarm system, etc. Once the software is developed and configured, it must be installed on the computers where it runs. Usually it is installed on an operating system whose services it needs, and in some cases it dynamically links with the libraries the operating system provides. The operating system can be quite complex itself – for example, a typical Linux distribution consists of several thousand packages. To manage this complexity, we have decided to rely on the RPM Package Manager (RPM) to package control system software and to ensure it is properly installed (i.e., that dependencies are also installed and that scripts are run after installation if any additional actions need to be performed). As dozens of RPM packages need to be prepared, we reduce the amount of effort and improve consistency between packages through a Maven-based infrastructure that assists in packaging (e.g., automated generation of RPM SPEC files, including automated identification of dependencies). So far, we have used it to package EPICS, Control System Studio (CSS) and several device drivers. We perform extensive testing on Red Hat Enterprise Linux 5.5, but we have also verified that the packaging works on CentOS and Scientific Linux. In this article, we describe in greater detail the systematic packaging approach we are using, and its particular application to the ITER CODAC Core System.
 
Poster WEPMU040 [0.740 MB]
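
Automated SPEC-file generation of the kind described reduces, at its core, to templating build metadata. A minimal sketch follows; all field values are invented examples, and the real infrastructure derives them from Maven project descriptors.

    def make_spec(name, version, summary, requires, files):
        # Emit a minimal RPM SPEC from build metadata (values are examples).
        lines = [
            "Name:     %s" % name,
            "Version:  %s" % version,
            "Release:  1%{?dist}",
            "Summary:  %s" % summary,
            "Requires: %s" % ", ".join(requires),
            "",
            "%description",
            summary,
            "",
            "%files",
        ] + list(files)
        return "\n".join(lines)

    print(make_spec("codac-epics", "3.14.12", "EPICS Base packaged for CODAC",
                    ["readline"], ["/opt/codac/epics"]))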
 
THBHMUST01 Multi-platform SCADA GUI Regression Testing at CERN GUI, framework, software, Windows 1201
 
  • P.C. Burkimsher, M. Gonzalez-Berges, S. Klikovits
    CERN, Geneva, Switzerland
 
  Funding: CERN
The JCOP Framework is a toolkit used widely at CERN for the development of industrial control systems in several domains (i.e. experiments, accelerators and technical infrastructure). The software development started 10 years ago and there is now a large base of production systems running it. For the success of the project, it was essential to formalize and automate the quality assurance process. The paper presents the overall testing strategy and describes in detail the mechanisms used for GUI testing. The choice of a commercial tool (Squish) and the architectural features making it appropriate for our multi-platform environment are described. Practical difficulties encountered when using the tool in the CERN context are discussed, as well as how these were addressed. In the light of initial experience, the test code itself has recently been reworked in OO style to facilitate future maintenance and extension. The paper concludes with a description of our initial steps towards the incorporation of full-blown Continuous Integration (CI) support.
 
Slides THBHMUST01 [1.878 MB]
 
THCHAUST05 LHCb Online Log Analysis and Maintenance System software, network, detector, controls 1228
 
  • J.C. Garnier, L. Brarda, N. Neufeld, F. Nikolaidis
    CERN, Geneva, Switzerland
 
History has shown many times that computer logs are the only information an administrator has about an incident, which could be caused either by a malfunction or by an attack. Due to the huge amount of logs produced by large-scale IT infrastructures such as LHCb Online, critical information may be overlooked or simply drowned in a sea of other messages. This clearly demonstrates the need for an automatic system for the long-term maintenance and real-time analysis of logs. We have constructed a low-cost, fault-tolerant, centralized logging system which is able to do in-depth analysis and cross-correlation of every log. The system is capable of handling O(10000) different log sources and numerous formats while keeping the overhead as low as possible. It provides log gathering and management, offline analysis and online analysis: we call offline analysis the procedure of analyzing old logs for critical information, while online analysis refers to early alerting and reacting. The system is extensible and cooperates well with other applications such as intrusion detection/prevention systems. This paper presents the LHCb Online topology, the problems we had to overcome, and our solutions. Special emphasis is given to log analysis, how we use it for monitoring, and how we maintain uninterrupted access to the logs. We provide performance plots, code modifications to well-known logging tools, and our experience from trying various storage strategies.
Slides THCHAUST05 [0.377 MB]
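
Offline analysis of the kind described boils down to classifying lines against a rule set and counting what matters. A toy sketch with invented patterns follows; a production system like the one above would load site-specific rules for its many log formats.

    import re
    from collections import Counter

    RULES = [
        ("auth_failure", re.compile(r"Failed password|authentication failure")),
        ("disk_error",   re.compile(r"I/O error|EXT4-fs error")),
        ("oom",          re.compile(r"Out of memory|oom-killer")),
    ]

    def classify(lines):
        counts = Counter()
        for line in lines:
            for tag, rx in RULES:
                if rx.search(line):
                    counts[tag] += 1
                    break                     # first matching rule wins
        return counts

    with open("/var/log/syslog") as f:        # path varies per distribution
        print(classify(f))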
 
THCHMUST02 Control and Test Software for IRAM Widex Correlator real-time, software, simulation, hardware 1240
 
  • S. Blanchet, D. Broguiere, P. Chavatte, F. Morel, A. Perrigouard, M. Torres
    IRAM, Saint Martin d'Heres, France
 
IRAM is an international research institute for radio astronomy. It has designed a new correlator, called WideX, for the Plateau de Bure interferometer (an array of six 15-metre telescopes) in the French Alps. The device entered official service in February 2010. The correlator must be driven in real time at 32 Hz for parameter sending and data acquisition. With 3.67 million channels distributed over 1792 dedicated chips, producing a 1.87 Gbit/s output data rate, the data acquisition and processing, as well as the automatic hardware-failure detection, are big challenges for the software. This article presents the software that has been developed to drive and test the correlator. In particular, it presents an innovative use of a high-speed optical link, initially developed for the CERN ALICE experiment, combined with real-time Linux (RTAI) to achieve our goals.
Slides THCHMUST02 [2.272 MB]
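
The quoted figures are mutually consistent if one assumes 16 bits per channel value (an assumption, not stated in the abstract):

    channels = 3.67e6    # correlator channels
    rate_hz  = 32        # real-time read-out rate
    bits     = 16        # assumed width of one channel value
    print("%.2f Gbit/s" % (channels * rate_hz * bits / 1e9))   # ~1.88 Gbit/s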
 
THCHMUST04 Free and Open Source Software at CERN: Integration of Drivers in the Linux Kernel controls, FPGA, framework, data-acquisition 1248
 
  • J.D. González Cobas, S. Iglesias Gonsálvez, J.H. Lewis, J. Serrano, M. Vanga
    CERN, Geneva, Switzerland
  • E.G. Cota
    Columbia University, NY, USA
  • A. Rubini, F. Vaga
    University of Pavia, Pavia, Italy
 
We describe the experience acquired during the integration of the tsi148 driver into the mainline Linux kernel tree. The benefits (and some of the drawbacks) for long-term software maintenance are analysed, the most immediate benefit being the support and quality review added by an enormous community of skilled developers. Indirect consequences are also analysed, and these are no less important: a serious impact on the style of the development process, the use of cutting-edge tools and technologies supporting development, the adoption of the very strict standards enforced by the Linux kernel community, etc. These elements were also exported to the hardware development process in our section, and we explain how they were used with a particular example in mind: the development of the FMC family of boards following the Open Hardware philosophy, and how its architecture must fit the Linux driver model. This delicate interplay of hardware and software architectures is a perfect showcase of the benefits we get from the strategic decision to have our drivers integrated in the kernel. Finally, the case of a whole family of CERN-developed drivers for data acquisition modules, the prospects for their integration in the kernel, and the adoption of a model parallel to Comedi are also taken as an example of how this model will perform in the future.
Slides THCHMUST04 [0.777 MB]
 
THCHMUST05 The Case for Soft-CPUs in Accelerator Control Systems FPGA, software, hardware, controls 1252
 
  • W.W. Terpstra
    GSI, Darmstadt, Germany
 
The steady improvements in Field Programmable Gate Array (FPGA) performance, size and cost have driven their ever-increasing use in science and industry. As FPGA sizes continue to increase, more and more devices and logic are moved from external chips to FPGAs. For simple hardware devices, the savings in board area and ASIC manufacturing setup are compelling. For more dynamic logic, the trade-off is not always as clear. Traditionally, this has been the domain of CPUs and software programming languages. In hardware designs already including an FPGA, it is tempting to remove the CPU and implement all logic in the FPGA, saving component costs and increasing performance. However, that logic must then be implemented in the more constraining hardware description languages, cannot be as easily debugged or traced, and typically requires significant FPGA area. For performance-critical tasks this trade-off can make sense. However, for the myriad slower and more dynamic tasks, software programming languages remain the better choice. One great benefit of a CPU is that it can perform many tasks. Thus, by including a small "Soft-CPU" inside the FPGA, all of the slower tasks can be aggregated into a single component. These tasks may then reuse existing software libraries, debugging techniques and device drivers, while retaining ready access to the FPGA's internals. This paper discusses the requirements for using Soft-CPUs in this niche, especially for the FAIR project. Several open-source alternatives are compared and recommendations are made for the best way to leverage a hybrid design.
Slides THCHMUST05 [0.446 MB]
 
THDAUST03 The FERMI@Elettra Distributed Real-time Framework real-time, controls, Ethernet, network 1267
 
  • L. Pivetta, G. Gaio, R. Passuello, G. Scalamera
    ELETTRA, Basovizza, Italy
 
  Funding: The work was supported in part by the Italian Ministry of University and Research under grants FIRB-RBAP045JF2 and FIRB-RBAP06AWK3
FERMI@Elettra is a Free Electron Laser (FEL) based on a 1.5 GeV linac. The pulsed operation of the accelerator and the necessity to characterize and control each electron bunch require synchronous acquisition of the beam diagnostics, together with the ability to drive actuators in real time at the linac repetition rate. The Adeos/Xenomai real-time extensions have been adopted in order to add real-time capabilities to the Linux-based control system computers running the Tango software. A software communication protocol based on gigabit Ethernet, known as Network Reflective Memory (NRM), has been developed to implement a shared memory across the whole control system, allowing computers to communicate in real time. The NRM architecture, its real-time performance and its integration into the control system are described.
 
Slides THDAUST03 [0.490 MB]
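
A reflective memory can be imitated over UDP in a few lines: each update datagram carries an (offset, payload) pair, and every receiver applies it to a local buffer, so all hosts converge on the same memory image. The wire format below is invented for illustration; the real NRM protocol is not described in the abstract.

    import socket, struct

    MEM_SIZE = 4096
    local_mem = bytearray(MEM_SIZE)          # this host's copy of the memory

    def publish(sock, peer, offset, payload):
        # Send one update: 4-byte big-endian offset followed by raw bytes.
        sock.sendto(struct.pack("!I", offset) + payload, peer)

    def apply_update(datagram):
        (offset,) = struct.unpack_from("!I", datagram)
        payload = datagram[4:]
        local_mem[offset:offset + len(payload)] = payload

    # Loopback demonstration of one update round trip.
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("127.0.0.1", 0))
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    publish(tx, rx.getsockname(), 128, b"\x01\x02\x03\x04")
    apply_update(rx.recv(2048))
    print(local_mem[128:132])                # -> bytearray(b'\x01\x02\x03\x04')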
 
THDAULT04 Embedded Linux on FPGA Instruments for Control Interface and Remote Management FPGA, embedded, controls, TANGO 1271
 
  • B.K. Huang, R.M. Myers, R.M. Sharples
    Durham University, Durham, United Kingdom
  • G. Cunningham, G.A. Naylor
    CCFE, Abingdon, Oxon, United Kingdom
  • O. Goudard
    ESRF, Grenoble, France
  • J.J. Harrison
    Merton College, Oxford, United Kingdom
  • R.G.L. Vann
    York University, Heslington, York, United Kingdom
 
  Funding: This work was part-funded by the RCUK Energy Programme under grant EP/I501045 and the European Communities under the contract of Association between EURATOM and CCFE.
FPGAs are now large enough that they can easily accommodate an embedded 32-bit processor, which can be used to great advantage. Running embedded Linux gives the user many more options for interfacing to their FPGA-based instrument, and in some cases enables removal of the intermediary PC. It is now possible to manage the instrument directly from widely used control systems such as EPICS or TANGO. As an example, on MAST (the Mega Amp Spherical Tokamak) at Culham Centre for Fusion Energy, a new vertical feedback system is under development in which waveform coefficients can be changed between plasma discharges to define the plasma position behaviour. Additionally, it is possible to use the embedded processor to facilitate remote updating of the firmware which, in combination with a watchdog and network booting, ensures that full remote management over Ethernet is possible. We also discuss UDP data streaming using embedded Linux, and a web-based control interface running on the embedded processor to interface to the FPGA board.
 
Slides THDAULT04 [2.267 MB]
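
The watchdog half of the remote-management scheme above uses the standard Linux watchdog device: the board reboots unless the device is written to ("fed") periodically. A minimal feeding loop, assuming a loaded watchdog driver and root privileges:

    import os, time

    fd = os.open("/dev/watchdog", os.O_WRONLY)
    try:
        for _ in range(10):
            os.write(fd, b"\0")    # any write resets the hardware timeout
            time.sleep(1.0)
    finally:
        os.write(fd, b"V")         # "magic close": disarm cleanly on exit
        os.close(fd)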
 
THDAULT06 MARTe Framework: a Middleware for Real-time Applications Development real-time, controls, framework, hardware 1277
 
  • A. Neto, D. Alves, B. Carvalho, P.J. Carvalho, H. Fernandes, D.F. Valcárcel
    IPFN, Lisbon, Portugal
  • A. Barbalace, G. Manduchi
    Consorzio RFX, Associazione Euratom-ENEA sulla Fusione, Padova, Italy
  • L. Boncagni
    ENEA C.R. Frascati, Frascati (Roma), Italy
  • G. De Tommasi
    CREATE, Napoli, Italy
  • P. McCullen, A.V. Stephen
    CCFE, Abingdon, Oxon, United Kingdom
  • F. Sartori
    F4E, Barcelona, Spain
  • R. Vitelli
    Università di Roma II Tor Vergata, Roma, Italy
  • L. Zabeo
    ITER Organization, St. Paul lez Durance, France
 
  Funding: This work was supported by the European Communities under the contract of Association between EURATOM/IST and was carried out within the framework of the European Fusion Development Agreement
The Multi-threaded Application Real-Time executor (MARTe) is a C++ framework that provides a development environment for the design and deployment of real-time applications, e.g. control systems. The kernel of MARTe comprises a set of data-driven independent blocks connected by a shared bus. This modular design enforces a clear boundary between algorithms, hardware interaction and system configuration. The architecture, being multi-platform, facilitates the testing and commissioning of new systems, enabling the execution of plant models in offline environments and with hardware in the loop, whilst also providing a set of non-intrusive introspection and logging facilities. Furthermore, applications can be developed in non-real-time environments and deployed on a real-time operating system using exactly the same code and configuration data. The framework is already being used in several fusion experiments, with control cycles ranging from 50 microseconds to 10 milliseconds and jitters of less than 2%, using VxWorks, RTAI or Linux. Code can also be developed and executed under Microsoft Windows, Solaris and Mac OS X. This paper discusses the main design concepts of MARTe, in particular the architectural choices which enabled the combination of real-time accuracy, performance and robustness with complex and modular data-driven applications.
 
Slides THDAULT06 [1.535 MB]
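
The data-driven block model described above is easy to picture: independent blocks that only read and write named signals on a shared bus. A toy rendering follows, with a dict as the bus; it mirrors the architecture described, not the real MARTe C++ API, and all block and signal names are invented.

    class Gain:
        def __init__(self, src, dst, k):
            self.src, self.dst, self.k = src, dst, k
        def execute(self, bus):
            bus[self.dst] = self.k * bus[self.src]

    class Limiter:
        def __init__(self, src, dst, lo, hi):
            self.src, self.dst, self.lo, self.hi = src, dst, lo, hi
        def execute(self, bus):
            bus[self.dst] = min(max(bus[self.src], self.lo), self.hi)

    bus = {"adc1": 0.42}                     # signals live on a shared bus
    blocks = [Gain("adc1", "scaled", 10.0),  # wiring is configuration, not code
              Limiter("scaled", "dac1", -5.0, 5.0)]
    for block in blocks:                     # one control cycle
        block.execute(bus)
    print(bus["dac1"])                       # -> 4.2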
 
FRBHMULT05 Middleware Trends and Market Leaders 2011 CORBA, controls, Windows, network 1334
 
  • A. Dworak, P. Charrue, F. Ehm, W. Sliwinski, M. Sobczak
    CERN, Geneva, Switzerland
 
The Controls Middleware (CMW) project was launched over ten years ago. Its main goal was to unify the middleware solutions used to operate the CERN accelerators. An important part of the project, the equipment-access library RDA, was based on CORBA, an unquestioned standard at the time. RDA became an operational and critical part of the infrastructure, yet the demanding run-time environment revealed some shortcomings of the system. The accumulation of fixes and workarounds led to unnecessary complexity, and RDA became difficult to maintain and to extend. CORBA proved to be more a cumbersome product than a panacea. Fortunately, many new transport frameworks have appeared since then. They boast a better design and support concepts that make them easy to use. To profit from the new libraries, the CMW team updated the user requirements and, in their terms, investigated potential CORBA substitutes. The process consisted of several phases: a review of middleware solutions belonging to different categories (e.g. data-centric, object- and message-oriented) and their applicability to the communication model in RDA; evaluation of several market-recognized products and promising start-ups; prototyping of typical communication scenarios; testing the libraries against exceptional situations and errors; and verifying that mandatory performance constraints were met. Thanks to this investigation the team have selected a few libraries that suit their needs better than CORBA. Further prototyping will select the best candidate.
Slides FRBHMULT05 [8.508 MB]
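
Prototyping a "typical communication scenario" in one of the newer message-oriented libraries takes only a screenful. The sketch below uses ZeroMQ (pyzmq) purely as an example of the category surveyed; the abstract does not say which products made the shortlist.

    # pip install pyzmq
    import time
    import zmq

    ctx = zmq.Context()
    pub = ctx.socket(zmq.PUB)                 # publisher = device server
    pub.bind("tcp://127.0.0.1:5556")
    sub = ctx.socket(zmq.SUB)                 # subscriber = client application
    sub.connect("tcp://127.0.0.1:5556")
    sub.setsockopt_string(zmq.SUBSCRIBE, "BPM")   # topic filter

    time.sleep(0.2)                           # let the subscription propagate
    pub.send_string("BPM orbit=1.25")         # publish one device update
    print(sub.recv_string())                  # -> "BPM orbit=1.25"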