Paper | Title | Other Keywords | Page |
---|---|---|---|
MOMIB08 | Continuous Integration Using LabVIEW, SVN and Hudson | LabView, software, framework, interface | 74 |
In the accelerator domain there is a need to integrate industrial devices and to create control and monitoring applications in an easy yet structured way. The LabVIEW-RADE framework provides the method and tools to implement these requirements, and also provides the essential integration of these applications into the CERN controls infrastructure. Building and distributing these core libraries for multiple platforms, e.g. Windows, Linux and Mac, and for different versions of LabVIEW, is a time-consuming task that consists of repetitive and cumbersome work. All libraries have to be tested, commissioned and validated. Preparing one package for each variation takes almost a week to complete. With the introduction of Subversion version control (SVN) and the Hudson continuous integration server (HCI), the process is now fully automated and a new distribution for all platforms is available within the hour. In this paper we evaluate the pros and cons of using continuous integration, the time it took to get up and running, and the added benefits. We conclude with an evaluation of the framework and indicate new areas of improvement and extension.
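As a rough sketch of what such an automated build matrix looks like, the snippet below fans one SVN commit out into one build job per platform/version combination. The platform list comes from the abstract; the LabVIEW versions and the job-name scheme are purely hypothetical.

```python
from itertools import product

PLATFORMS = ["Windows", "Linux", "Mac"]       # platforms named in the abstract
LABVIEW_VERSIONS = ["2010", "2011", "2012"]   # illustrative versions only

def build_jobs(platforms, versions):
    """Return one CI job name per (platform, LabVIEW version) pair,
    mirroring how a CI server fans a single commit out into every
    distribution that previously had to be packaged by hand."""
    return [f"rade-{p.lower()}-lv{v}" for p, v in product(platforms, versions)]

jobs = build_jobs(PLATFORMS, LABVIEW_VERSIONS)
print(len(jobs))  # 9 packages built in one automated pass
```

With three platforms and three versions, one commit triggers nine package builds, which is the repetitive work the paper reports eliminating.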
Slides MOMIB08 [2.990 MB]
Poster MOMIB08 [6.363 MB]
MOMIB09 | ZIO: The Ultimate Linux I/O Framework | framework, controls, interface, software | 77 |
ZIO (with Z standing for "The Ultimate I/O" Framework) was developed for CERN with the specific needs of physics labs in mind, which are poorly addressed in the mainstream Linux kernel. ZIO provides a framework for industrial, high-throughput, high-channel-count I/O device drivers (digitizers, function generators, timing devices like TDCs) with performance, generality and scalability as design goals. Among its many features, it offers abstractions for: input and output channels, and channel sets; configurable trigger types; configurable buffer types; an interface via sysfs attributes and control and data device nodes; and a socket interface (PFZIO), which provides enormous flexibility and power for remote control. In this paper, we discuss the design and implementation of ZIO, and describe representative cases of driver development for typical and exotic applications (FMC ADC 100Msps digitizer, FMC TDC timestamp counter, FMC DEL fine delay).
Slides MOMIB09 [0.818 MB]
MOPPC094 | ARIEL Control System at TRIUMF – Project Update | controls, EPICS, PLC, ISAC | 318 |
The Advanced Rare Isotope & Electron Linac (ARIEL) facility at TRIUMF, scheduled for Phase 1 completion in 2014, will use a control system based on EPICS. Discrete subsystems within the accelerator, beamlines and conventional facilities have been clearly identified. Control system strategies for each identified subsystem have been developed, and components have been chosen to satisfy the unique requirements of each system. The ARIEL control system will encompass methodology already established in the TRIUMF ISAC & ISAC-II facilities in addition to adoption of a number of technologies previously unused at TRIUMF. The scope includes interface with other discrete subsystems such as cryogenics and power distribution, as well as complete subsystem controls packages.
MOPPC116 | Evolution of Control System Standards on the Diamond Synchrotron Light Source | controls, EPICS, interface, hardware | 381 |
Control system standards for the Diamond synchrotron light source were initially developed in 2003. They were largely based on Linux, EPICS and VME, and were applied fairly consistently across the three accelerators and the first twenty photon beamlines. With funding for further photon beamlines in 2011, the opportunity was taken to redefine the standards to be largely based on Linux, EPICS, PCs and Ethernet. The developments associated with this will be presented, together with solutions being developed for requirements that fall outside the standards.
Poster MOPPC116 [0.360 MB]
MOPPC124 | Optimizing EPICS for Multi-Core Architectures | EPICS, real-time, controls, software | 399 |
Funding: Work supported by German Bundesministerium für Bildung und Forschung and Land Berlin. EPICS is a widely used software framework for real-time controls in large facilities, accelerators and telescopes. Its multithreaded IOC (Input Output Controller) Core software has been developed on traditional single-core CPUs. The ITER project will use modern multi-core CPUs, running the RHEL Linux operating system in its MRG-R real-time variant. An analysis of the thread handling in IOC Core shows different options for improving the performance and real-time behavior, which are discussed and evaluated. The implementation is split between improvements inside EPICS Base, which have been merged back into the main distribution, and a support module that makes full use of these new features. This paper describes design and implementation aspects, and presents results as well as lessons learned.
Poster MOPPC124 [0.448 MB]
MOPPC131 | Experience of Virtual Machines in J-PARC MR Control | controls, operation, EPICS, embedded | 417 |
At the J-PARC Main Ring (MR), we have used virtual-machine environments extensively in our accelerator control. In 2011, we developed a virtual-IOC, an EPICS In/Out Controller running on a virtual machine [1]. Now in 2013, about 20 virtual-IOCs are used in daily MR operation. In the summer of 2012, we updated our operating system from Scientific Linux 4 (SL4) to Scientific Linux 6 (SL6). In SL6, the KVM virtual-machine environment is supported as a default service. This encouraged us to port basic control services (ldap, dhcp, tftp, rdb, archiver, etc.) to multiple virtual machines. Each virtual machine hosts one service, and the virtual machines run on a small number of physical machines. This scheme enables easier maintenance of control services than before. In this paper, our experiences using virtual machines during J-PARC MR operation will be reported.
[1] N. Kamikubota et al., "Virtual IO Controllers at J-PARC MR Using Xen", ICALEPCS 2011.
Poster MOPPC131 [0.213 MB]
MOPPC132 | Evaluating Live Migration Performance of a KVM-Based EPICS | EPICS, network, software, controls | 420 |
In this paper we present results from a live-migration performance evaluation of a KVM-based EPICS system running on PC hardware, considering the performance of storage, network and CPU. EPICS is a control system; for the evaluation we built a lightweight demo control system. For time measurement, we set up a monitor PV that automatically changes its value at regular time intervals. Data Browser can display the values of 'live' PVs and measure the time, and from this we obtain the measured live-migration time.
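The measurement idea described above — a monitor PV updating at a fixed interval, with the longest stall taken as the migration downtime — can be sketched as follows. The timestamps and period here are invented for illustration, not the paper's data.

```python
def estimate_downtime(timestamps, nominal_period):
    """Estimate service interruption from a monitor PV's update timestamps:
    the largest inter-update gap, minus the nominal update period,
    approximates how long the PV was frozen during live migration."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return max(0.0, max(gaps) - nominal_period)

# PV updates every 0.5 s; a 1.7 s stall occurs while the VM migrates
ticks = [0.0, 0.5, 1.0, 2.7, 3.2, 3.7]
print(round(estimate_downtime(ticks, 0.5), 3))  # 1.2
```

In practice Data Browser plots the PV history and the gap is read off the trace; the arithmetic above is the same.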
MOPPC148 | Not Dead Yet: Recent Enhancements and Future Plans for EPICS Version 3 | EPICS, software, controls, target | 457 |
Funding: Work supported by U.S. Department of Energy, Office of Science, under Contract No. DE-AC02-06CH11357. The EPICS Version 4 development effort* is not planning to replace the current Version 3 IOC Database or its use of the Channel Access network protocol in the near future. Interoperability is a key aim of the V4 development, which is building upon the older IOC implementation. EPICS V3 continues to gain new features and functionality on its Version 3.15 development branch, while the Version 3.14 stable branch has been accumulating minor tweaks, bug fixes, and support for new and updated operating systems. This paper describes the main enhancements provided by recent and upcoming releases of EPICS Version 3 for control system applications. * Korhonen et al, "EPICS Version 4 Progress Report", this conference.
Poster MOPPC148 [5.067 MB]
TUMIB10 | Performance Testing of EPICS User Interfaces - an Attempt to Compare the Performance of MEDM, EDM, CSS-BOY, and EPICS | interface, hardware, software, EPICS | 547 |
Funding: Work at the APS is supported by the U.S. Department of Energy, Office of Science, under Contract No. DE-AC02-06CH1135. Upgrading the display manager or graphical user interface at EPICS sites reliant on older display technologies, typically MEDM or EDM, requires attention not only to functionality but also to performance. For many sites, performance is not an issue: all display managers will update small numbers of process variables at rates exceeding the human ability to discern changes; but for certain applications typically found at larger sites, the ability to respond to update rates at sub-Hertz frequencies for thousands of process variables is a requirement. This paper describes a series of tests performed on both the older display managers, MEDM and EDM, and the newer display managers CSS-BOY, epicsQT, and CaQtDM. Modestly performing modern hardware is used.
Slides TUMIB10 [0.486 MB]
Poster TUMIB10 [0.714 MB]
TUPPC008 | A New Flexible Integration of NeXus Datasets to ANKA by Fuse File Systems | software, synchrotron, detector, neutron | 566 |
In the high data rate initiative (HDRI), the German accelerator and neutron facilities of the Helmholtz Association agreed to use NeXus as a common data format. The synchrotron radiation source ANKA decided in 2012 to introduce NeXus as the common data format for all beam lines. Nevertheless, integrating a new data format into existing data-processing workflows is challenging: scientists rely on existing data evaluation kits which require specific data formats. To overcome this obstacle, a Linux filesystem in userspace (FUSE) was developed that allows NeXus files to be mounted as a filesystem. Filter rules, easily configured in XML, allow a very flexible view of the data. Tomography data frames can be accessed directly as TIFF files by any standard picture viewer, and scan data can be presented as a virtual ASCII file compatible with spec.
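A minimal sketch of the kind of path mapping such a FUSE layer might expose — each dataset in the NeXus (HDF5) hierarchy appearing as a virtual file. The tree, dataset names and ".txt" extension below are assumptions for illustration, not the authors' actual layout.

```python
# Hypothetical NeXus tree (group path -> dataset names), standing in
# for the contents of an HDF5-based NeXus file.
nexus_tree = {
    "/entry/instrument/detector": ["data"],
    "/entry/sample": ["temperature", "rotation_angle"],
}

def virtual_paths(tree, extension=".txt"):
    """Flatten the NeXus hierarchy into the file paths a FUSE mount
    would expose, e.g. each dataset as a virtual ASCII file."""
    paths = []
    for group, datasets in tree.items():
        for name in datasets:
            paths.append(f"{group}/{name}{extension}")
    return sorted(paths)

for p in virtual_paths(nexus_tree):
    print(p)
```

The XML filter rules described in the abstract would then decide which of these virtual paths are visible and in what representation (TIFF frame, spec-compatible ASCII, etc.).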
TUPPC036 | A Status Update on Hyppie – a Hyppervisored PXI for Physics Instrumentation under EPICS | EPICS, LabView, controls, hardware | 635 |
Beamlines at LNLS are moving to the concept of distributed control under EPICS. This has been done by reusing available code from the community and/or by programming hardware access in LabVIEW, integrated into EPICS through Hyppie. Hyppie is a project to build a bridge between EPICS records and the corresponding devices in a PXI chassis. Both EPICS/Linux and LabVIEW Real-Time run simultaneously on the same PXI controller, in a virtualization system with a common memory block shared as their communication interface. A number of new devices were introduced in the Hyppie suite, and LNLS beamlines are experiencing a smooth transition to the new concept.
Poster TUPPC036 [1.658 MB]
TUPPC088 | Development of MicroTCA-based Image Processing System at SPring-8 | FPGA, controls, interface, framework | 786 |
At SPring-8, various CCD cameras have been utilized for electron beam diagnostics of the accelerators and for x-ray imaging experiments. PC-based image processing systems are mainly used for the CCD cameras with a Cameralink interface. We have developed a new image processing system based on the MicroTCA platform, which has an advantage over PCs in robustness and scalability due to its hot-swappable modular architecture. In order to reduce development cost and time, the new system is built with COTS products, including a user-configurable Spartan6 AMC with an FMC slot and a Cameralink FMC. The Cameralink FPGA core is newly developed in compliance with the AXI4 open bus to enhance reusability. The MicroTCA system will first be applied to an upgrade of the two-dimensional synchrotron radiation interferometer [1] operating at the SPring-8 storage ring. The sizes and tilt angle of a transverse electron beam profile with an elliptical Gaussian distribution are extracted from an observed 2D interferogram. A dedicated processor AMC (PrAMC) that communicates with the primary PrAMC via the backplane is added for fast 2D-fitting calculation, to achieve real-time beam profile monitoring during storage ring operation.
[1] M. Masaki and S. Takano, "Two-dimensional visible synchrotron light interferometry for transverse beam-profile measurement at the SPring-8 storage ring", J. Synchrotron Rad. 10, 295 (2003).
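The quantities being extracted — sizes and tilt angle of an elliptical Gaussian — can be illustrated with a plain second-moment calculation over an intensity distribution. This is a generic sketch of the underlying math, not the authors' 2D-fitting code.

```python
import math

def beam_moments(samples):
    """samples: iterable of (x, y, intensity). Returns (sigma_x, sigma_y, tilt)
    from intensity-weighted second moments; for an elliptical Gaussian the
    tilt is the rotation angle of its principal axis."""
    w = sum(i for _, _, i in samples)
    mx = sum(x * i for x, _, i in samples) / w
    my = sum(y * i for _, y, i in samples) / w
    sxx = sum((x - mx) ** 2 * i for x, _, i in samples) / w
    syy = sum((y - my) ** 2 * i for _, y, i in samples) / w
    sxy = sum((x - mx) * (y - my) * i for x, y, i in samples) / w
    tilt = 0.5 * math.atan2(2 * sxy, sxx - syy)
    return math.sqrt(sxx), math.sqrt(syy), tilt

# toy data: intensity concentrated along the diagonal -> 45-degree tilt
sx, sy, tilt = beam_moments([(-1, -1, 1), (0, 0, 2), (1, 1, 1)])
print(round(math.degrees(tilt), 1))  # 45.0
```

A full 2D fit (as the dedicated PrAMC performs) refines these moment-based estimates, but the moments already give the sizes and orientation directly.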
|||
Poster TUPPC088 [4.372 MB]
TUPPC124 | Distributed Network Monitoring Made Easy - An Application for Accelerator Control System Process Monitoring | monitoring, network, controls, software | 875 |
Funding: This work was supported by the U.S. Department of Energy, Office of Nuclear Physics, under Contract No. DE-AC02-06CH11357. As the complexity and scope of distributed control systems increase, so does the need for an ever increasing level of automated process monitoring. The goal of this paper is to demonstrate one method whereby the SNMP protocol combined with open-source management tools can be quickly leveraged to gain critical insight into any complex computing system. Specifically, we introduce an automated, fully customizable, web-based remote monitoring solution which has been implemented at the Argonne Tandem Linac Accelerator System (ATLAS). This collection of tools is not limited to monitoring network infrastructure devices, but also monitors critical processes running on any remote system. The tools and techniques used are typically available pre-installed or can be downloaded on several standard operating systems, and in most cases require only a small amount of configuration out of the box. High-level logging, level-checking, alarming, notification and reporting are accomplished with the open source network management package OpenNMS, and normally require a bare minimum of implementation effort by a non-IT user.
Poster TUPPC124 [0.875 MB]
THMIB03 | From Real to Virtual - How to Provide a High-availability Computer Server Infrastructure | controls, hardware, operation, network | 1076 |
During the commissioning phase of the Swiss Light Source (SLS) at the Paul Scherrer Institut (PSI), we decided in 2000 on a strategy of separating the individual services of the control system. The reason was to prevent interruptions due to network congestion, misdirected control, and other causes between different service contexts. This concept has proved reliable over the years. Today, each accelerator facility and beamline of PSI resides on a separate subnet and uses its own dedicated set of service computers. As the number of beamlines and accelerators grew, the variety of services and their quantity rapidly increased. Fortunately, at about the time the SLS announced its first beam, VMware introduced its VMware Virtual Platform for the Intel IA32 architecture. This was a great opportunity for us to start virtualizing the controls services. Currently, we have about 200 such systems. In this presentation we discuss how we achieved this highly virtualized controls infrastructure, as well as how we will proceed in the future.
Slides THMIB03 [2.124 MB]
Poster THMIB03 [1.257 MB]
THPPC022 | Securing Mobile Control System Devices: Development and Testing | controls, network, EPICS, interface | 1131 |
Recent advances in portable devices give end users convenient ways to access data over the network. Networked control systems have traditionally been kept on local or internal networks to prevent external threats and isolate traffic. The UMWC Clinical Neutron Therapy System has its control system on such an isolated network. Engineers have been updating the control system with EPICS, and have developed EDM-based interfaces for control and monitoring. This project describes a tablet-based monitoring device being developed to allow the engineers to monitor the system while moving, e.g., from rack to rack or room to room. EDM is being made available via the tablet. Methods to maintain the security of the control system and tablet, while providing ease of access and meaningful data for management, are being created. In parallel with the tablet development, security and penetration tests are also being produced.
THPPC023 | Integration of Windows Binaries in the UNIX-based RHIC Control System Environment | controls, Windows, software, interface | 1135 |
Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. Since its inception, the RHIC control system has been built on UNIX or Linux and implemented primarily in C++. Sometimes equipment vendors deliver software packages developed for the Microsoft Windows operating system. This leads to a need to integrate these packaged executables into the existing data logging, display, and alarm systems. This paper describes an approach to incorporate such non-UNIX binaries seamlessly into the RHIC control system with minimal changes to the existing code base, allowing for compilation on standard Linux workstations through the use of a virtual machine. The implementation resulted in the successful use of a Windows dynamic-link library (DLL) to control equipment remotely while running a synoptic display interface on a Linux machine.
Poster THPPC023 [1.391 MB]
THPPC024 | Operating System Upgrades at RHIC | software, controls, network, collider | 1138 |
Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. Upgrading hundreds of machines to the next major release of an Operating system (OS), while keeping the accelerator complex running, presents a considerable challenge. Even before addressing the challenges that an upgrade represents, there are critical questions that must be answered. Why should an upgrade be considered? (An upgrade is labor intensive and includes potential risks due to defective software.) When is it appropriate to make incremental upgrades to the OS? (Incremental upgrades can also be labor intensive and include similar risks.) When is the best time to perform an upgrade? (An upgrade can be disruptive.) Should all machines be upgraded to the same version at the same time? (At times this may not be possible, and there may not be a need to upgrade certain machines.) Should the compiler be upgraded at the same time? (A compiler upgrade can also introduce risks at the software application level.) This paper examines our answers to these questions, describes how upgrades to the Red Hat Linux OS are implemented by the Controls group at RHIC, and describes our experiences.
Poster THPPC024 [0.517 MB]
THPPC032 | Embedded EPICS Controller for KEK Linac Screen Monitor System | controls, linac, PLC, EPICS | 1150 |
The screen monitor (SC) of the KEK linac is a beam diagnostics device used to measure transverse beam profiles with a fluorescent screen. The screen material is 99.5% Al2O3 and 0.5% CrO3, with which a sufficient amount of fluorescent light is obtained when electron and positron beams impinge on the screen. By capturing the fluorescent light with a camera embedded with a charge-coupled device (CCD), the transverse spatial profiles of the beam can be easily measured. Compact SCs were developed in 1995 for the KEKB project, and about 110 of them were installed into the beam line at that time. A VME-based computer control system was also developed in order to perform fast and stable control of the SC system. However, the previous system has become obsolete and hard to maintain. Recently, a new screen monitor control system for the KEK electron/positron injector linac has been developed and fully installed. The new system is an embedded EPICS IOC based on Linux/PLC. In this paper, we present the new screen monitor control system in detail.
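A simple way to picture the profile measurement: projecting the 2D screen image onto each axis yields the horizontal and vertical beam profiles. This is a generic illustration of the principle, not the KEK implementation.

```python
def transverse_profiles(image):
    """Project a 2D fluorescent-screen image (list of rows of pixel
    intensities) onto both axes, giving the horizontal and vertical
    beam profiles a screen monitor measures."""
    horiz = [sum(col) for col in zip(*image)]   # sum over rows -> x profile
    vert = [sum(row) for row in image]          # sum over columns -> y profile
    return horiz, vert

# toy 3x3 image with the beam spot in the centre pixel
image = [
    [0, 1, 0],
    [1, 4, 1],
    [0, 1, 0],
]
h, v = transverse_profiles(image)
print(h, v)  # [1, 6, 1] [1, 6, 1]
```

Beam widths are then obtained from these 1D profiles, e.g. by Gaussian fitting.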
THPPC056 | Design and Implementation of Linux Drivers for National Instruments IEEE 1588 Timing and General I/O Cards | hardware, timing, software, controls | 1193 |
Cosylab is developing GPL Linux device drivers to support several National Instruments (NI) devices. In particular, drivers have already been developed for the NI PCI-1588, PXI-6682 (IEEE1588/PTP) devices and the NI PXI-6259 I/O device. These drivers are being used in the development of the latest plasma fusion research reactor, ITER, being built at the Cadarache facility in France. In this paper we discuss design and implementation issues, such as driver API design (device file per device versus device file per functional unit), PCI device enumeration, handling reset, etc. We also present various use-cases demonstrating the capabilities and real-world applications of these drivers.
Poster THPPC056 [0.482 MB]
THCOBB04 | Overview of the ELSA Accelerator Control System | database, controls, interface, hardware | 1396 |
The Electron Stretcher Facility ELSA provides a beam of polarized electrons with a maximum energy of 3.2 GeV for hadron physics experiments. The in-house developed control system has been continuously improved during the last 15 years of operation. Its top layer consists of a distributed shared-memory database and several core applications which run on a Linux host. The interconnectivity to hardware devices is built up by a second layer of the control system operating on PCs and VMEs. High-level applications are integrated into the control system using C and C++ libraries. An event-based messaging system notifies attached applications about parameter updates in near real time. The overall system structure and specific implementation details of the control system will be presented.
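The event-based notification scheme described above can be modeled in a few lines: applications subscribe to parameters and are called back on each update. This is a toy sketch; the class and parameter names are invented, not ELSA's API.

```python
from collections import defaultdict

class ParameterBus:
    """Toy model of an event-based messaging layer: applications register
    callbacks per parameter name and are notified on every update."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, name, callback):
        self._subs[name].append(callback)

    def update(self, name, value):
        # notify every application attached to this parameter
        for cb in self._subs[name]:
            cb(name, value)

bus = ParameterBus()
seen = []
bus.subscribe("beam_energy", lambda n, v: seen.append((n, v)))
bus.update("beam_energy", 3.2)   # attached application sees the update
print(seen)  # [('beam_energy', 3.2)]
```

The real system layers this pattern over a distributed shared-memory database, so updates propagate across hosts rather than within one process.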
Slides THCOBB04 [0.527 MB]
FRCOBAB03 | The New Multicore Real-time Control System of the RFX-mod Experiment | controls, real-time, plasma, framework | 1493 |
The real-time control system of the RFX-mod nuclear fusion experiment has been in operation since 2004 and has been used to control the plasma position and the magnetohydrodynamic (MHD) modes. Over time, new and more computationally demanding control algorithms have been developed and the system has been pushed to its limits. Therefore, a complete re-design was carried out in 2012. The new system adopts radically different solutions in hardware, operating system and software management. The VME PowerPC CPUs communicating over Ethernet used in the former system have been replaced by a single multicore server. The VxWorks operating system previously used in the VME CPUs has been replaced by Linux MRG, which proved to behave very well in real-time applications. The previous framework for control and communication has been replaced by MARTe, a modern framework for real-time control gaining interest in the fusion community. Thanks to the MARTe organization, rapid development of the control system has been possible. In particular, the framework's intrinsic simulation capability gave us the possibility of carrying out most debugging in simulation, without affecting machine operation.
Slides FRCOBAB03 [1.301 MB]