Keyword: Linux
Paper Title Other Keywords Page
MOD3O02 Continuous Delivery at SOLEIL software, operation, controls, monitoring 51
 
  • G. Abeillé, A. Buteau, X. Elattaoui, S. Lê
    SOLEIL, Gif-sur-Yvette, France
  • G. Boissinot
    ZENIKA, Paris, France
 
  The IT Department of Synchrotron SOLEIL* includes a team of software developers responsible for the development and maintenance of all software, from hardware controls up to supervision applications. With a very heterogeneous environment (several programming languages, strongly coupled components and an increasing number of releases), it has become mandatory to standardize the entire development process through a 'Continuous Delivery' approach, making it easy to release and deploy on time, at any time. We achieved our objectives by building up a Continuous Delivery system around two aspects: the Deployment Pipeline** and DevOps***. A deployment pipeline is achieved by extensively automating all stages of the delivery process (continuous integration of the software, binary builds and integration tests). Another key point of Continuous Delivery is close collaboration between software developers and system administrators, often known as the DevOps movement. This paper details feedback on how this Continuous Delivery approach has been adopted, how it has changed the development team's daily work, and gives an overview of the next steps.
*http://www.synchrotron-soleil.fr/
**http://martinfowler.com/bliki/DeploymentPipeline.html
***https://sdarchitect.wordpress.com/2012/07/24/understanding-devops-part-1-defining-devops/
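As an illustration of the deployment-pipeline idea described above (automated stages that gate each release), here is a minimal sketch in Python; this is not SOLEIL's actual tooling, and the stage names are invented:

```python
# Sketch of a deployment pipeline: an ordered chain of automated stages,
# where a failing stage stops the release. Stage names are illustrative only.

def run_pipeline(stages):
    """Run (name, callable) stages in order; return names of completed stages."""
    done = []
    for name, stage in stages:
        if not stage():
            break          # stop the pipeline at the first failing stage
        done.append(name)
    return done

# Hypothetical stages: each callable returns True on success.
stages = [
    ("integrate", lambda: True),
    ("build", lambda: True),
    ("integration-tests", lambda: True),
    ("deploy", lambda: True),
]
```

In a real pipeline each stage would invoke the CI server, build system or test harness; the point is only that deployment becomes a repeatable, automated sequence rather than a manual procedure.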
 
Slides: MOD3O02 [1.886 MB]
 
MOPGF019 Experiences and Lessons Learned in Transitioning Beamline Front-Ends from VMEbus to Modular Distributed I/O controls, network, PLC, interface 121
 
  • I.J. Gillingham, T. Friedrich, S.C. Lay, R. Mercado
    DLS, Oxfordshire, United Kingdom
 
  Historically, Diamond's photon front-ends have adopted control systems based on the VMEbus platform. With increasing pressure towards improved system versatility, space constraints and the issues of long-term support for the VME platform, a programme of migration to distributed remote I/O control systems was undertaken. This paper reports on the design strategies, benefits and issues addressed since the new design became operational.
Poster: MOPGF019 [0.477 MB]
 
MOPGF027 Real-Time EtherCAT Driver for EPICS and Embedded Linux at Paul Scherrer Institute (PSI) EPICS, controls, real-time, interface 153
 
  • D. Maier-Manojlovic
    PSI, Villigen PSI, Switzerland
 
  The EtherCAT bus and interface are widely used for external module and device control in accelerator environments at PSI, ranging from undulator communication, over basic I/O control, to the Machine Protection System for the new SwissFEL accelerator. A new combined EPICS/Linux driver has been developed at PSI to allow simple and mostly automatic setup of various EtherCAT configurations. The new driver is capable of automatic scanning of the existing device and module layout, followed by self-configuration and finally autonomous operation of the EtherCAT bus real-time loop. If additional configuration is needed, the driver offers both user- and kernel-space APIs, as well as a command-line interface for fast configuration or reading/writing of the module entries. The EtherCAT modules and their data objects (entries) are completely exposed by the driver, with each entry corresponding to a virtual file in the Linux procfs file system. This way, any user application can read or write the EtherCAT entries in a simple manner, even without using any of the supplied APIs. Finally, the driver offers an EPICS interface with automatic template generation from the scanned EtherCAT configuration.
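Because each EtherCAT entry is exposed as a procfs virtual file, plain file I/O is enough to read or write it. A minimal Python sketch, with a hypothetical entry path (the real layout is defined by the PSI driver and the scanned bus configuration):

```python
# Sketch: EtherCAT entries appear as procfs virtual files, so ordinary
# file I/O suffices. The example path below is hypothetical.

def read_entry(path):
    """Read an EtherCAT entry value from its procfs virtual file."""
    with open(path) as f:
        return f.read().strip()

def write_entry(path, value):
    """Write a value to an EtherCAT entry via its procfs virtual file."""
    with open(path, "w") as f:
        f.write(str(value))

# Hypothetical usage; the actual path depends on the scanned bus layout:
# velocity = read_entry("/proc/ethercat/0/module3/velocity")
```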
Poster: MOPGF027 [30.577 MB]
 
MOPGF033 New Developments on EPICS Drivers, Clients and Tools at SESAME EPICS, controls, timing, Ethernet 167
 
  • I. Saleh, Y.S. Dabain, A. Ismail
    SESAME, Allan, Jordan
 
  SESAME is a 2.5 GeV synchrotron light source under construction in Allan, Jordan. The control system of SESAME is based on EPICS and CSS. Various developments in EPICS drivers, clients, software tools and hardware have been carried out. This paper presents some of the main achievements: new Linux x86 EPICS drivers and soft IOCs developed for the Micro-Research Finland event timing system, replacing the VME/VxWorks-based drivers; new EPICS drivers and clients developed for the Basler GigE cameras; an IOC deployment and management driver developed to monitor the numerous virtual machines running the soft IOCs, and to ease deployment of updates to these IOCs; an automated EPICS checking tool developed to aid in the review, validation and application of the in-house rules for all record databases; a new EPICS record type (mbbi2) developed to provide alarm features missing from the multibit binary records found in the base distribution of EPICS; and a feasibility test for replacing serial terminal servers with low-cost computers.
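An automated checking tool of the kind described applies in-house rules to EPICS record databases. A toy sketch of the idea in Python, with an invented naming rule; the actual SESAME rules are not given in this abstract:

```python
# Sketch of an EPICS .db checker. The naming rule below is hypothetical,
# for illustration only: record names must look like "SYS-SUBSYS:DEVICE:SIGNAL".
import re

RECORD_RE = re.compile(r'^\s*record\s*\(\s*(\w+)\s*,\s*"([^"]+)"\s*\)')
NAME_RE = re.compile(r"^[A-Z0-9]+-[A-Z0-9]+:[A-Z0-9]+:[A-Z0-9]+$")

def check_db(text):
    """Return the list of record names violating the naming rule."""
    violations = []
    for line in text.splitlines():
        m = RECORD_RE.match(line)
        if m and not NAME_RE.match(m.group(2)):
            violations.append(m.group(2))
    return violations
```

A real tool would also parse fields and apply per-record-type rules; the value of automating the check is that every database is validated the same way before deployment.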
Poster: MOPGF033 [0.958 MB]
 
MOPGF057 Quick Experiment Automation Made Possible Using FPGA in LNLS FPGA, software, experiment, EPICS 229
 
  • M.P. Donadio, J.R. Piton, H.D. de Almeida
    LNLS, Campinas, Brazil
 
  Beamlines at LNLS are being modernized to use the synchrotron light as efficiently as possible. As the photon flux increases, experiment speed constraints become more visible to the user. Experiment control has been done by ordinary computers, under a conventional operating system, running high-level software written in the most common programming languages. This architecture presents some timing issues, as the computer is subject to interruptions from input devices like the mouse, keyboard or network. The programs quickly became the bottleneck of the experiment. To improve experiment control and automation speed, we transferred software algorithms to an FPGA device. FPGAs are semiconductor devices based around a matrix of logic blocks reconfigurable by software. The results of adopting this technology, using an NI CompactRIO device with its FPGA programmed through LabVIEW, and future improvements are briefly shown in this paper.
Poster: MOPGF057 [5.365 MB]
 
MOPGF070 Report on Control/DAQ Software Design and Current State of Implementation for the Percival Detector detector, software, controls, EPICS 251
 
  • A.S. Palaha, C. Angelsen, Q. Gu, J. Marchal, U.K. Pedersen, N.P. Rees, N. Tartoni, H. Yousef
    DLS, Oxfordshire, United Kingdom
  • M. Bayer, J. Correa, P. Gnadt, H. Graafsma, P. Göttlicher, S. Lange, A. Marras, S. Řeža, I. Shevyakov, S. Smoljanin, L. Stebel, C. Wunderer, Q. Xia, M. Zimmer
    DESY, Hamburg, Germany
  • G. Cautero, D. Giuressi, A. Khromova, R.H. Menk, G. Pinaroli
    Elettra-Sincrotrone Trieste S.C.p.A., Basovizza, Italy
  • D. Das, N. Guerrini, B. Marsh, T.C. Nicholls, I. Sedgwick, R. Turchetta
    STFC/RAL, Chilton, Didcot, Oxon, United Kingdom
  • H.J. Hyun, K.S. Kim, S.Y. Rah
    PAL, Pohang, Republic of Korea
 
  The increased brilliance of state-of-the-art synchrotron radiation sources and free-electron lasers requires imaging detectors capable of taking advantage of these light source facilities. The PERCIVAL ("Pixelated Energy Resolving CMOS Imager, Versatile and Large") detector is being developed in collaboration between DESY, Elettra Sincrotrone Trieste, Diamond Light Source and Pohang Accelerator Laboratory. It is a CMOS detector targeting soft X-rays below 1 keV, with a high resolution of up to 13 Mpixels reading out at 120 Hz, producing a challenging data rate of 6 GB/s. The controls and data acquisition system will include an SDK to allow integration with third-party control systems like Tango and DOOCS; an EPICS areaDetector driver will be included by default. It will make use of parallel readout to keep pace with the data rate, distributing the data over multiple nodes to create a single virtual dataset using the HDF5 file format, for its speed advantages with high volumes of regular data. This paper presents the design of the control system software for the Percival detector and an update on the current state of the implementation carried out by Diamond Light Source.
Poster: MOPGF070 [0.363 MB]
 
MOPGF105 Device Control Database Tool (DCDB) EPICS, PLC, database, controls 326
 
  • P.A. Maslov, M. Komel, M. Pavleski, K. Žagar
    Cosylab, Ljubljana, Slovenia
 
  Funding: This project has received funding from the European Union's Seventh Framework Programme for research, technological development and demonstration under grant agreement no 289485.
We have developed a control system configuration tool, which provides an easy-to-use interface for quick configuration of the entire facility. It uses Microsoft Excel as the front-end application and allows the user to quickly generate and deploy IOC configuration (EPICS start-up scripts, alarm and archive configuration) onto IOCs; start, stop and restart IOCs, alarm servers and archive engines; and more. The DCDB tool utilizes a relational database, which stores information about all the elements of the accelerator. The communication between the client, the database and the IOCs is realized by a REST server written in Python. The key feature of the DCDB tool is that the user does not need to recompile the source code. This is achieved by using a dynamic library loader, which automatically loads and links device support libraries. The DCDB tool is compliant with CODAC (used at ITER and ELI-NP), but can also be used in any other EPICS environment (e.g. it has been customized to work at ESS).
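A REST server mediating between clients, the database and the IOCs can be sketched with the Python standard library alone. The `/iocs` resource and the in-memory registry below are invented for illustration and are not the DCDB API:

```python
# Sketch of a REST endpoint serving IOC status. The resource path and the
# in-memory registry are hypothetical stand-ins for the relational database.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

IOCS = {"vac-ioc-01": "running", "rf-ioc-02": "stopped"}  # invented example data

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/iocs":
            body = json.dumps(IOCS).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep the sketch quiet; a real server would log requests

def serve(port=0):
    """Start the server on an ephemeral port; return the HTTPServer object."""
    srv = HTTPServer(("127.0.0.1", port), Handler)
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    return srv
```

A production server would add POST/PUT actions (start, stop, restart an IOC) and query the database instead of a dictionary, but the client-facing pattern is the same.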
 
Poster: MOPGF105 [2.749 MB]
 
MOPGF172 Bringing Quality in the Controls Software Delivery Process software, controls, TANGO, Windows 485
 
  • Z. Reszela, G. Cuní, C.M. Falcón Torres, D. Fernández-Carreiras, G. Jover-Mañas, C. Pascual-Izarra, R. Pastor Ortiz, M. Rosanes Siscart, S. Rubio-Manrique
    ALBA-CELLS Synchrotron, Cerdanyola del Vallès, Spain
 
  The Alba Controls Group develops and operates a diverse variety of controls software which is shared within international communities of users and developers. This includes: generic frameworks like Sardana* and Taurus**, numerous Tango*** device servers and applications, among others PyAlarm and Panic****, and specific experiment procedures and hardware controllers. A study has commenced on how to improve the delivery process of our software from the hands of the developers to the laboratories, making this process more reliable, predictable and risk-controlled. Automated unit and acceptance tests, combined with continuous integration, have been introduced, providing valuable and fast feedback to the developers. In order to renew and automate our legacy packaging and deployment system, we have evaluated modern alternatives. The above practices were brought together into a design of continuous delivery pipelines, which was validated on a set of diverse software. This paper presents this study, its results and a proposal for a cost-effective implementation.
*http://sardana-controls.org
**http://taurus-scada.org
***http://tango-controls.org
****S. Rubio-Manrique, 'PANIC a Suite for Visualization, Logging and Notification of Incidents', Proc. of PCaPAC2014.
 
Poster: MOPGF172 [1.247 MB]
 
WEPGF002 A Protocol for Streaming Large Messages with UDP network, controls, Ethernet, software 693
 
  • C.I. Briegel, R. Neswold, M.Z. Sliczniak
    Fermilab, Batavia, Illinois, USA
 
  Funding: Operated by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the United States Department of Energy.
We have developed a protocol that concatenates UDP datagrams to stream large messages. The datagrams can be sized optimally for the receiver. The protocol provides acknowledged reception based on a sliding-window concept. The implementation provides for messages of up to 10 Mbytes and guarantees complete delivery or a corresponding error. The protocol is implemented both as standalone messaging between two sockets and within the context of Fermilab's ACNet protocol. Results of this implementation in VxWorks are analyzed.
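The core idea, splitting one large message into sequence-numbered UDP-sized datagrams and concatenating them back on arrival, can be sketched in Python. Acknowledgement and the sliding window are omitted, and the 8-byte header layout is an assumption for illustration, not Fermilab's wire format:

```python
# Sketch: fragment a large message into datagram payloads with a small
# (sequence, total) header, and reassemble regardless of arrival order.
# The header layout is invented; the real protocol also carries ACK state.
import struct

HEADER = struct.Struct("!II")  # network byte order: (sequence, total count)

def split_message(msg: bytes, payload_size: int):
    """Split msg into headered datagram payloads of at most payload_size bytes."""
    total = max(1, -(-len(msg) // payload_size))  # ceiling division
    return [HEADER.pack(i, total) + msg[i * payload_size:(i + 1) * payload_size]
            for i in range(total)]

def reassemble(datagrams):
    """Rebuild the message from datagrams received in any order."""
    parts = {}
    total = 0
    for d in datagrams:
        seq, total = HEADER.unpack_from(d)
        parts[seq] = d[HEADER.size:]
    if len(parts) != total:
        raise ValueError("incomplete message")  # real protocol would NAK/retry
    return b"".join(parts[i] for i in range(total))
```

In the actual protocol the receiver acknowledges windows of datagrams so the sender can retransmit losses; this sketch only shows the fragmentation and in-order reassembly.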
 
Poster: WEPGF002 [0.796 MB]
 
WEPGF015 Drivers and Software for MicroTCA.4 controls, hardware, interface, software 725
 
  • M. Killenberg, M. Heuer, M. Hierholzer, L.P. Petrosyan, Ch. Schmidt, N. Shehzad, G. Varghese, M. Viti
    DESY, Hamburg, Germany
  • T. Kozak, P. Prędki, J. Wychowaniak
    TUL-DMCS, Łódź, Poland
  • S. Marsching
    Aquenos GmbH, Baden-Baden, Germany
  • M. Mehle, T. Sušnik, K. Žagar
    Cosylab, Ljubljana, Slovenia
  • A. Piotrowski
    FastLogic Sp. z o.o., Łódź, Poland
 
  Funding: This work is supported by the Helmholtz Validation Fund HVF-0016 'MTCA.4 for Industry'.
The MicroTCA.4 crate standard provides a powerful electronic platform for digital and analogue signal processing. Besides excellent hardware modularity, it is the software reliability and flexibility, as well as the easy integration into existing software infrastructures, that will drive the widespread adoption of the new standard. The DESY MicroTCA.4 User Tool Kit (MTCA4U) comprises three main components: a Linux device driver, a C++ API for accessing the MicroTCA.4 devices, and a control system interface layer. The main focus of the tool kit is flexibility to enable fast development. The universal, expandable PCI Express driver and a register mapping library allow out-of-the-box operation of all MicroTCA.4 devices running firmware developed with the DESY board support package. The tool kit has recently been extended with features like command line tools and language bindings to Python and Matlab.
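The register-mapping idea, looking up device registers by name from a map file so that firmware registers can be accessed symbolically, can be sketched as follows. The file format here is invented for illustration and is not the actual MTCA4U map format:

```python
# Sketch of a register map: a text file associates register names with
# addresses and sizes, so device access can be done by name rather than by
# raw offset. The 'NAME ADDRESS NBYTES' format is invented for this example.

def parse_map(text):
    """Parse lines of 'NAME ADDRESS NBYTES' into {name: (address, nbytes)}."""
    regs = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()   # strip comments and whitespace
        if not line:
            continue
        name, addr, nbytes = line.split()
        regs[name] = (int(addr, 0), int(nbytes, 0))  # base 0: accept 0x... too
    return regs
```

With such a table, a driver can translate `read("WORD_FIRMWARE")` into a PCI Express access at the mapped address, which is what makes out-of-the-box operation of mapped devices possible.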
 
Poster: WEPGF015 [0.540 MB]
 
WEPGF036 Data Categorization and Storage Strategies at RHIC network, real-time, software, collider 775
 
  • S. Binello, K.A. Brown, T. D'Ottavio, R.A. Katz, J.S. Laster, J. Morris, J. Piacentino
    BNL, Upton, Long Island, New York, USA
 
  Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-SC0012704 with the U.S. Department of Energy.
This past year the Controls group within the Collider Accelerator Department at Brookhaven National Laboratory replaced the Network Attached Storage (NAS) system that is used to store software and data critical to the operation of the accelerators. The NAS also serves as the initial repository for all logged data. This purchase was used as an opportunity to categorize the data we store, and review and evaluate our storage strategies. This was done in the context of an existing policy that places no explicit limits on the amount of data that users can log, no limits on the amount of time that the data is retained at its original resolution, and that requires all logged data be available in real-time. This paper will describe how the data was categorized, and the various storage strategies used for each category.
 
Poster: WEPGF036 [0.295 MB]
 
WEPGF062 Processing High-Bandwidth Bunch-by-Bunch Observation Data from the RF and Transverse Damper Systems of the LHC framework, diagnostics, software, controls 841
 
  • M. Ojeda Sandonís, P. Baudrenghien, A.C. Butterworth, J. Galindo, W. Höfle, T.E. Levens, J.C. Molendijk, D. Valuch
    CERN, Geneva, Switzerland
  • F. Vaga
    University of Pavia, Pavia, Italy
 
  The radiofrequency and transverse damper feedback systems of the Large Hadron Collider digitize beam phase and position measurements at the bunch repetition rate of 40 MHz. Embedded memory buffers allow a few milliseconds of full rate bunch-by-bunch data to be retrieved over the VME bus for diagnostic purposes, but experience during LHC Run I has shown that for beam studies much longer data records are desirable. A new "observation box" diagnostic system is being developed which parasitically captures data streamed directly out of the feedback hardware into a Linux server through an optical fiber link, and permits processing and buffering of full rate data for around one minute. The system will be connected to an LHC-wide trigger network for detection of beam instabilities, which allows efficient capture of signals from the onset of beam instability events. The data will be made available for analysis by client applications through interfaces which are exposed as standard equipment devices within CERN's controls framework. It is also foreseen to perform online Fourier analysis of transverse position data inside the observation box using GPUs with the aim of extracting betatron tune signals.  
Poster: WEPGF062 [4.412 MB]
 
WEPGF090 Design of EPICS IOC Based on RAIN1000Z1 ZYNQ Module EPICS, embedded, controls, experiment 905
 
  • T. Xue, G.H. Gong, H. Li, J.M. Li
    Tsinghua University, Beijing, People's Republic of China
 
  ZYNQ is a new Xilinx FPGA architecture with dual high-performance ARM Cortex-A9 processors. A new module with a Gigabit Ethernet interface, based on the ZYNQ XC7Z010 and named RAIN1000Z1, has been developed for the data acquisition of the High Purity Germanium detectors in the CJPL (China JinPing underground Lab) experiment. Based on the RAIN1000Z1 hardware platform, EPICS has been ported to the ARM Cortex-A9 processor under embedded Linux, and an Input/Output Controller has been implemented on the RAIN1000Z1 module. Thanks to ZYNQ's combination of processor and logic and its new silicon technology, embedded Linux with TCP/IP sockets and real-time, high-throughput logic written in VHDL run in a single chip, with small module size, lower power and higher performance. This paper introduces how to port an EPICS IOC application to the ZYNQ under embedded Linux and gives a demo of I/O control and RS-232 communication.
Poster: WEPGF090 [1.811 MB]
 
WEPGF096 Managing a Real-time Embedded Linux Platform with Buildroot target, software, network, controls 926
 
  • J.S. Diamond, K.S. Martin
    Fermilab, Batavia, Illinois, USA
 
  Funding: This work was supported by the U.S. Department of Energy under contract No. DE-AC02-07CH11359
Developers of real-time embedded software often need to build the operating system kernel, tools and supporting applications from source to work with the differences in their hardware configuration. The first attempt to introduce Linux-based real-time embedded systems into the Fermilab accelerator controls system used this approach, but it was found to be time-consuming, difficult to maintain and difficult to adapt to different hardware configurations. Buildroot is an open source build system with a menu-driven configuration tool (similar to the Linux kernel build system) that automates this process. A customized Buildroot system has been developed for use in the Fermilab accelerator controls system that includes several hardware configuration profiles (including Intel, ARM and PowerPC) and packages for Fermilab support software. A bootable image file is produced containing the Linux kernel, shell and supporting software suite, varying from 3 to 20 megabytes, ideal for network booting. The result is a platform that is easier to maintain and deploy in diverse hardware configurations.
 
Poster: WEPGF096 [1.058 MB]
 
WEPGF112 Flop: Customizing Yocto Project for MVMExxxx PowerPC and BeagleBone ARM network, software, controls, embedded 958
 
  • L. Pivetta, A.I. Bogani, R. Passuello
    Elettra-Sincrotrone Trieste S.C.p.A., Basovizza, Italy
 
  During the last fifteen years, several PowerPC-based VME single-board computers belonging to the MVMExxxx family have been used as control system front-end computers at Elettra Sincrotrone Trieste. Moreover, a low-cost embedded board has recently been adopted to fulfil the control requirements of distributed instrumentation. These facts led to the necessity of managing several releases of the operating system, kernel and libraries, and finally to the decision to adopt a comprehensive unified approach based on a common codebase: the Yocto Project. Based on the Yocto Project, a control-system-oriented GNU/Linux distribution called 'Flop' has been created. The complete management of the software chain, the ease of upgrading or downgrading complete systems, the centralized management and the platform-independent deployment of the user software are the main features of Flop.
Poster: WEPGF112 [1.254 MB]
 
WEPGF129 CERN timing on PXI and cRIO platforms timing, hardware, software, FPGA 1011
 
  • A. Rijllart, O.O. Andreassen, J. Blanco Alonso
    CERN, Geneva, Switzerland
 
  Given their time-critical applications, the use of PXI and cRIO platforms in the accelerator complex at CERN requires integration into the CERN timing system. This paper describes the present state of integration of both PXI and cRIO platforms into the present General Machine Timing system and its successor, the White Rabbit timing system. PXI is used for LHC collimator control and for the new generation of control systems for the kicker magnets on all CERN accelerators. The cRIO platform is being introduced for transient recording on the CERN electricity distribution system and has potential for applications in other domains because of its real-time OS, FPGA backbone and hot-swap modules. The further development intended, and which types of application are most suitable for each platform, will be discussed.