Paper | Title | Other Keywords | Page |
---|---|---|---|
MOPKN012 | HyperArchiver: An EPICS Archiver Prototype Based on Hypertable | EPICS, controls, Linux, target | 114 |
This work started in the context of the NSLS-II project at Brookhaven National Laboratory. The NSLS-II control system foresees a very high number of PVs and has strict requirements on archiving and retrieval rates: our goal was to store 10K PVs/s and retrieve 4K PVs/s for a group of 4 signals. The HyperArchiver is an EPICS Archiver implementation powered by Hypertable, an open-source database whose internal architecture is derived from Google's BigTable. We discuss the performance of the HyperArchiver and present the results of some comparative tests.
HyperArchiver: http://www.lnl.infn.it/~epics/joomla/archiver.html EPICS: http://www.aps.anl.gov/epics/
Poster MOPKN012 [1.231 MB]
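
The write path described above lends itself to a minimal sketch, assuming pyepics on the client side: CA monitor callbacks are buffered and flushed in batches toward the archive store. `flush_to_store` is a hypothetical stand-in for the Hypertable mutator call, and the PV name is illustrative.

```python
# A minimal sketch, assuming pyepics: buffer CA monitor callbacks and flush
# them in batches toward the archive store. flush_to_store() is a hypothetical
# stand-in for the Hypertable mutator; the PV name is illustrative.
import threading
from epics import PV  # pyepics

BATCH = 1000
buf = []
lock = threading.Lock()

def flush_to_store(rows):
    # Placeholder: write (pvname, timestamp) -> value cells to the store.
    pass

def on_change(pvname=None, value=None, timestamp=None, **kw):
    with lock:
        buf.append((pvname, timestamp, value))
        if len(buf) >= BATCH:
            flush_to_store(list(buf))
            buf.clear()

pvs = [PV("SR:C01-BI:BPM1:X", callback=on_change)]  # illustrative PV
```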
MOPMS025 | Migration from OPC-DA to OPC-UA | Windows, Linux, toolkit, controls | 374 |
The OPC-DA specification has been a highly successful interoperability standard for process automation since 1996, allowing communication between compliant components regardless of vendor. CERN relies on OPC-DA server implementations from various third-party vendors, which provide a standard interface to their hardware. The OPC Foundation has finalized the OPC-UA specification, and OPC-UA implementations are now starting to gather momentum. This presentation gives a brief overview of the headline features of OPC-UA, compares it with OPC-DA, and outlines both the necessity of migrating away from OPC-DA and the motivation for migrating to OPC-UA. Feedback from research into the availability of tools and testing utilities is presented, together with a practical overview of what will be required from a computing perspective to run OPC-UA clients and servers on the CERN network.
Poster MOPMS025 [1.103 MB]
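
As a minimal illustration of the migration target, the sketch below reads a variable from an OPC-UA server using the open-source python-opcua package (one possible client stack, not necessarily among the tools evaluated at CERN); the endpoint URL and node id are illustrative.

```python
# A minimal OPC-UA read with the open-source python-opcua package;
# endpoint URL and node id are illustrative.
from opcua import Client

client = Client("opc.tcp://localhost:4840/freeopcua/server/")
client.connect()
try:
    node = client.get_node("ns=2;i=2")  # a server-defined variable node
    print(node.get_value())             # read its current value
finally:
    client.disconnect()
```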
MOPMU014 | Development of Distributed Data Acquisition and Control System for Radioactive Ion Beam Facility at Variable Energy Cyclotron Centre, Kolkata. | controls, interface, linac, status | 458 |
To facilitate frontline nuclear physics research, an ISOL (Isotope Separator On-Line) type Radioactive Ion Beam (RIB) facility is being constructed at the Variable Energy Cyclotron Centre (VECC), Kolkata. The RIB facility at VECC consists of various subsystems, such as an ECR ion source, RFQ, rebunchers and LINACs, that produce and accelerate the energetic beams of radioactive isotopes required for different experiments. The Distributed Data Acquisition and Control System (DDACS) is intended to monitor and control the large number of parameters associated with the different subsystems from a centralized location, allowing the complete operation of beam generation and beam tuning in a user-friendly manner. The DDACS is designed around a three-layer architecture comprising an equipment interface layer, a supervisory layer and an operator interface layer. The equipment interface layer consists of different Equipment Interface Modules (EIMs), designed around an ARM processor and connected to the equipment through various interfaces such as RS-232 and RS-485. The supervisory layer consists of a VIA-processor-based Embedded Controller (EC) running an embedded XP operating system. This embedded controller, interfaced with the EIMs through fiber-optic cable, acquires and analyses the data from the different EIMs. The operator interface layer consists mainly of PCs/workstations acting as operator consoles. The data acquired and analysed by the EC can be displayed at the operator consoles, from which the operator can centrally supervise and control the whole facility.
Poster MOPMU014 [2.291 MB]
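
A hedged sketch of the equipment interface layer's polling idea: the supervisory layer requests a parameter from an EIM over a serial link. The port, baud rate and ASCII request/reply protocol below are assumptions for illustration; the paper does not specify them.

```python
# A sketch under assumed conventions: poll one EIM parameter over a serial
# link using pyserial. Port, baud rate and the ASCII protocol are invented.
import serial  # pyserial

def read_parameter(port, param_id):
    with serial.Serial(port, baudrate=9600, timeout=1.0) as link:
        link.write(f"READ {param_id}\r\n".encode("ascii"))  # hypothetical command
        reply = link.readline().decode("ascii").strip()     # hypothetical "value" reply
        return float(reply)

print(read_parameter("/dev/ttyS0", "ECR:PRESSURE"))  # illustrative names
```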
WEMMU005 | Fabric Management with Diskless Servers and Quattor on LHCb | Linux, controls, experiment, collider | 691 |
Large scientific experiments nowadays often use large computer farms to process the events acquired from the detectors. In LHCb, a small sysadmin team manages not only the 1400 servers of the LHCb Event Filter Farm, but also a wide variety of control servers for the detector electronics and infrastructure computers: file servers, gateways, DNS, DHCP and others. This variety of servers could not be handled without a solid fabric management system, and we chose the Quattor toolkit for this task. We present our use of this toolkit, with an emphasis on how we handle our diskless nodes (event filter farm nodes and computers embedded in the acquisition electronics). We also show our current tests to replace the standard (Red Hat/Scientific Linux) way of handling diskless nodes with fusion filesystems, and how this improves fabric management.
Slides WEMMU005 [0.119 MB]
Poster WEMMU005 [0.602 MB]
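
Assuming the "fusion filesystems" mentioned above refer to union-style mounts (a read-only network root overlaid with a writable layer), the idea can be sketched with the modern Linux overlayfs as an analogue; the paper predates overlayfs, paths are illustrative, and the mounts require root.

```python
# A sketch with modern overlayfs as an analogue: a read-only network root
# plus a writable tmpfs layer merged into the node's root. Paths are
# illustrative; the mounts require root privileges.
import os
import subprocess

def mount_union_root(ro_root, rw_dir, merged):
    subprocess.run(["mount", "-t", "tmpfs", "tmpfs", rw_dir], check=True)
    os.makedirs(f"{rw_dir}/upper", exist_ok=True)
    os.makedirs(f"{rw_dir}/work", exist_ok=True)
    subprocess.run(
        ["mount", "-t", "overlay", "overlay",
         "-o", f"lowerdir={ro_root},upperdir={rw_dir}/upper,workdir={rw_dir}/work",
         merged],
        check=True)

mount_union_root("/nfs/ro-root", "/rw", "/mnt/newroot")
```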
WEPKN027 | The Performance Test of F3RP61 and Its Applications in CSNS Experimental Control System | controls, EPICS, target, Linux | 763 |
The F3RP61 is an embedded PLC developed by Yokogawa, Japan. It is based on the PowerPC 8347 platform and can run Linux and EPICS. We performed a number of tests on this device, covering CPU performance, network performance, CA access time and the scan-time stability of EPICS. We also compared the F3RP61 with the MVME5100, the most widely used IOC platform in BEPCII. These tests and comparisons give a clear picture of the performance and capabilities of the F3RP61. It can be used in the experimental control system of CSNS (China Spallation Neutron Source) as a communication node between the front-end control layer and the EPICS layer, and in some cases it can also take on additional functions such as control tasks.
Poster WEPKN027 [0.200 MB]
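
The CA access-time test mentioned above can be sketched as a simple client-side round-trip measurement with pyepics; the PV name and sample count are illustrative, and this is not the authors' actual benchmark code.

```python
# A sketch of a client-side CA round-trip measurement with pyepics;
# PV name and sample count are illustrative.
import time
from epics import caget

N = 1000
PVNAME = "TEST:AI"  # illustrative PV served by the F3RP61 IOC
t0 = time.perf_counter()
for _ in range(N):
    caget(PVNAME, use_monitor=False)  # force a network round trip each call
dt = time.perf_counter() - t0
print(f"mean CA get time: {dt / N * 1e6:.1f} us")
```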
WEPKS005 | State Machine Framework and its Use for Driving LHC Operational States* | framework, controls, operation, GUI | 782 |
The LHC follows a complex operational cycle with 12 major phases that include equipment tests, preparation, beam injection, ramping and squeezing, finally followed by the physics phase. This cycle is modeled and enforced with a state machine, whereby each operational phase is represented by a state. On each transition, before entering the next state, a series of conditions is verified to make sure the LHC is ready to move on. The State Machine framework was developed to cater for building independent or embedded state machines. These machines safely drive between states, executing tasks bound to transitions, and broadcast related information to interested parties. The framework encourages users to program their own actions. Simple configuration management allows the operators to define and maintain complex models themselves. An emphasis was also put on easy interaction with remote state machine instances through standard communication protocols. On top of its core functionality, the framework offers transparent integration with other crucial tools used to operate the LHC, such as the LHC Sequencer. LHC Operational States has been in production for half a year and was seamlessly adopted by the operators. Further extensions to the framework and its application in operations are under way.
* http://cern.ch/marekm/icalepcs.html
Poster WEPKS005 [0.717 MB]
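
A minimal sketch of the pattern the framework implements, not its actual API: states, transitions guarded by verification conditions, and tasks bound to transitions. All names are illustrative.

```python
# A minimal sketch of the pattern (not the CERN framework API): guarded
# transitions with tasks bound to them. All names are illustrative.
class StateMachine:
    def __init__(self, initial):
        self.state = initial
        self.transitions = {}  # (src, dst) -> (guards, tasks)

    def add_transition(self, src, dst, guards=(), tasks=()):
        self.transitions[(src, dst)] = (guards, tasks)

    def request(self, dst):
        guards, tasks = self.transitions[(self.state, dst)]
        if all(g() for g in guards):  # verify the machine is ready to move on
            for task in tasks:        # execute tasks bound to this transition
                task()
            self.state = dst
        return self.state

lhc = StateMachine("INJECTION")
lhc.add_transition("INJECTION", "RAMP",
                   guards=[lambda: True],               # e.g. "beam intensity OK"
                   tasks=[lambda: print("start ramp")])
lhc.request("RAMP")
```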
WEPMN024 | NSLS-II Beam Position Monitor Embedded Processor and Control System | controls, EPICS, Ethernet, FPGA | 932 |
Funding: Work supported by DOE contract No. DE-AC02-98CH10886.
NSLS-II is a 3 GeV third-generation light source currently under construction. A sub-micron Digital Beam Position Monitor (DBPM) system, comprising the hardware electronics, an embedded software processor and an EPICS IOC, has been successfully developed and tested in the ALS storage ring and at BNL.
WEPMS011 | The Timing Master for the FAIR Accelerator Facility | timing, FPGA, network, real-time | 996 |
One central design feature of the FAIR accelerator complex is a high level of parallel beam operation, imposing ambitious demands on the timing and management of accelerator cycles. Several linear accelerators, synchrotrons, storage rings and beam lines have to be controlled and reconfigured for each beam production chain on a pulse-to-pulse basis, with cycle lengths ranging from 20 ms to several hours. This implies initialization, synchronization of equipment down to the ns time scale, interdependencies, multiple paths and contingency actions such as emergency beam dump scenarios. The FAIR timing system will be based on White Rabbit [1] network technology, implementing a central Timing Master (TM) unit to orchestrate all machines. The TM is subdivided into separate functional blocks: the Clock Master, which deals with time and clock sources and their distribution over WR; the Management Master, which administrates all WR timing receivers; and the Data Master, which schedules and coordinates machine instructions and broadcasts them over the WR network. The TM triggers equipment actions based on the transmitted execution time. Since latencies in the low μs range are required, this paper investigates the possibilities of parallelisation in programmable hardware and discusses the benefits of a distributed versus a monolithic timing master architecture. The proposed FPGA-based TM will meet the stated timing requirements while providing fast reaction to interlocks and internal events, and offers parallel processing of multiple signals and state machines.
[1] J. Serrano et al., "The White Rabbit Project", ICALEPCS 2009.
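
The execution-time principle can be sketched as follows: the Data Master broadcasts a command stamped with an absolute execution time, and each White Rabbit-synchronized receiver acts when its local clock reaches that time. The message layout and scheduling margin below are assumptions for illustration.

```python
# A sketch of the execution-time principle: commands carry an absolute
# execution timestamp; synchronized receivers act when their clock reaches
# it. Message layout and lead time are assumptions.
import struct
import time

LEAD_TIME_NS = 500_000  # assumed scheduling margin ahead of execution

def make_timing_message(event_id, now_ns):
    exec_time_ns = now_ns + LEAD_TIME_NS
    return struct.pack(">IQ", event_id, exec_time_ns)  # id + execution time

def receiver_dispatch(msg, clock_ns=time.time_ns):
    event_id, exec_time_ns = struct.unpack(">IQ", msg)
    while clock_ns() < exec_time_ns:  # stands in for a hardware time comparator
        pass
    return event_id                   # the equipment action fires here

receiver_dispatch(make_timing_message(42, time.time_ns()))
```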
WEPMS017 | The Global Trigger Processor: A VXS Switch Module for Triggering Large Scale Data Acquisition Systems | FPGA, Ethernet, interface, hardware | 1010 |
Funding: Jefferson Science Associates, LLC under U.S. DOE Contract No. DE-AC05-06OR23177.
The 12 GeV upgrade of Jefferson Lab's Continuous Electron Beam Accelerator Facility requires the development of a new data acquisition system to accommodate the 200 kHz Level 1 trigger rates expected for fixed-target experiments at 12 GeV. As part of a suite of trigger electronics comprising VXS switch and payload modules, the Global Trigger Processor (GTP) will handle up to 32,768 channels of preprocessed trigger information from the multiple detector systems that surround the beam target, at a system clock rate of 250 MHz. The GTP is configured with user-programmable physics trigger equations, and when the trigger conditions are satisfied it activates the storage of data for subsequent analysis. The GTP features an Altera Stratix IV GX FPGA interfacing to 16 Sub-System Processor modules via 32 5-Gbps links, DDR2 and flash memory devices, two gigabit Ethernet interfaces using Nios II embedded processors, fiber-optic transceivers, and trigger output signals. The GTP's high-bandwidth interconnect with the payload modules in the VXS crate, its Ethernet interface for parameter control, status monitoring and remote update, and the inherent flexibility of its FPGA allow it to be used for a large variety of tasks and to adapt to future needs. This paper details the responsibilities of the GTP, the hardware's role in meeting those requirements, and the elements of the VXS architecture that facilitated the design of the trigger system. The current status of development, including significant milestones and challenges, is also presented.
Poster WEPMS017 [0.851 MB]
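
The "user-programmable trigger equation" concept can be sketched in software: each subsystem contributes preprocessed trigger bits, and an equation over those bits decides whether to accept the event. The bit assignments below are invented for illustration; in the GTP this logic is implemented in FPGA fabric, not software.

```python
# A software sketch of a programmable trigger equation over preprocessed
# trigger bits; bit assignments are invented for illustration.
ECAL_CLUSTER = 1 << 0  # calorimeter cluster found
HODO_HIT     = 1 << 1  # hodoscope hit
CERENKOV     = 1 << 2  # Cherenkov veto signal

def trigger_equation(bits):
    # Accept: calorimeter cluster in coincidence with hodoscope, no veto.
    return bool(bits & ECAL_CLUSTER) and bool(bits & HODO_HIT) \
        and not (bits & CERENKOV)

assert trigger_equation(ECAL_CLUSTER | HODO_HIT)
assert not trigger_equation(ECAL_CLUSTER | HODO_HIT | CERENKOV)
```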
WEPMU019 | First Operational Experience with the LHC Beam Dump Trigger Synchronisation Unit | software, hardware, monitoring, operation | 1100 |
Two LHC Beam Dumping Systems (LBDS) remove the counter-rotating beams safely from the collider during setting up of the accelerator, at the end of a physics run and in case of emergencies. Dump requests can come from three different sources: the machine protection system in emergency cases, the machine timing system for scheduled dumps, or the LBDS itself in case of internal failures. These dump requests are synchronised with the 3 μs beam abort gap in a fail-safe, redundant Trigger Synchronisation Unit (TSU) based on Digital Phase-Locked Loops (DPLL), locked onto the LHC beam revolution frequency with a maximum phase error of 40 ns. The synchronised trigger pulses coming out of the TSU are then distributed to the high-voltage generators of the beam dump kickers through a redundant, fault-tolerant trigger distribution system. This paper describes the operational experience gained with the TSU since its commissioning with beam in 2009, and highlights the improvements which have been implemented for safer operation. These include extended diagnostics and monitoring functionality, more automated validation of the hardware and embedded firmware before deployment, and post-operational analysis of the TSU performance after each dump action. In the light of this first experience, the outcome of the external review performed in 2010 is presented. The lessons learnt on the project life-cycle for the design of mission-critical electronic modules are discussed.
Poster WEPMU019 [1.220 MB]
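
The synchronisation arithmetic behind the TSU can be sketched as follows: an asynchronous dump request is delayed until the next abort gap, given the revolution period. The gap phase within the turn is an assumption for illustration, and the DPLL phase tracking itself is not modelled.

```python
# A sketch of the synchronisation arithmetic: delay an asynchronous dump
# request to the next abort gap. The gap phase within the turn is assumed;
# the DPLL tracking of the revolution frequency is not modelled.
REV_PERIOD_S = 88.924e-6  # one LHC turn (revolution frequency ~11.245 kHz)
GAP_START_S = 85.9e-6     # assumed start of the 3 us abort gap within a turn

def delay_to_abort_gap(request_time_s, turn_start_s):
    phase = (request_time_s - turn_start_s) % REV_PERIOD_S
    if phase <= GAP_START_S:
        return GAP_START_S - phase             # gap still ahead this turn
    return REV_PERIOD_S - phase + GAP_START_S  # wait for next turn's gap

print(delay_to_abort_gap(1.000020, 1.0))  # ~65.9 us to the next gap
```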
THDAULT01 | Modern System Architectures in Embedded Systems | controls, FPGA, hardware, software | 1260 |
Several new technologies are also making their way into embedded systems. In addition to FPGA technology, which has become commonplace, multicore CPUs and I/O virtualization (among others) are being introduced. In this paper we present our ideas and studies on how to take advantage of these features in control systems. Application examples involving CPU partitioning, virtualized I/O and the like are discussed, along with some benchmarks.
Slides THDAULT01 [1.426 MB]
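
CPU partitioning of the kind discussed above can be sketched on Linux by pinning a time-critical task to a reserved core; the core number is illustrative.

```python
# A Linux sketch of CPU partitioning: reserve one core for a time-critical
# task. The core number is illustrative; requires Python on Linux.
import os

RT_CORE = 3  # core set aside for the control task

os.sched_setaffinity(0, {RT_CORE})  # pin this process to the reserved core
print("running on cores:", os.sched_getaffinity(0))
```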
THDAULT04 | Embedded Linux on FPGA Instruments for Control Interface and Remote Management | FPGA, Linux, controls, TANGO | 1271 |
Funding: This work was part-funded by the RCUK Energy Programme under grant EP/I501045 and the European Communities under the contract of Association between EURATOM and CCFE.
FPGAs are now large enough that they can easily accommodate an embedded 32-bit processor, which can be used to great advantage. Running embedded Linux gives the user many more options for interfacing to an FPGA-based instrument, and in some cases it eliminates the intermediate PC altogether: the instrument can be managed directly by widely used control systems such as EPICS or TANGO. As an example, on MAST (the Mega Amp Spherical Tokamak) at Culham Centre for Fusion Energy, a new vertical feedback system is under development in which waveform coefficients can be changed between plasma discharges to define the plasma position behaviour. The embedded processor can also facilitate remote updating of firmware which, in combination with a watchdog and network booting, ensures that full remote management over Ethernet is possible. We also discuss UDP data streaming using embedded Linux, and a web-based control interface running on the embedded processor to interface to the FPGA board.
Slides THDAULT04 [2.267 MB]
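
The UDP data streaming mentioned above can be sketched as a host-side receiver for fixed-size datagrams of FPGA samples; the port number and packet layout are assumptions for illustration.

```python
# A host-side sketch of UDP data streaming: receive fixed-size datagrams of
# FPGA samples. Port and packet layout are assumptions.
import socket
import struct

PORT = 5005     # assumed stream port
PAYLOAD = 1024  # assumed sample payload per datagram

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", PORT))

while True:
    data, addr = sock.recvfrom(PAYLOAD + 8)
    (seq,) = struct.unpack_from(">Q", data)  # assumed 64-bit sequence header
    samples = data[8:]                       # raw samples follow the header
    # ...process samples; gaps in seq reveal lost datagrams
```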
THDAULT05 | Embedded LLRF Controller with Channel Access on MicroTCA Backplane Interconnect | controls, LLRF, EPICS, FPGA | 1274 |
A low-level RF controller has been developed for the accelerator controls of SuperKEKB, the Superconducting RF Test Facility (STF) and the Compact ERL (cERL) at KEK. The feedback mechanism will be performed on a Virtex-5 FPGA with 16-bit ADCs and DACs. The card was designed as an Advanced Mezzanine Card (AMC) for a MicroTCA shelf. An embedded EPICS IOC on the PowerPC core in the FPGA will provide global control through the Channel Access (CA) protocol over the backplane interconnect of the shelf; no other mechanisms are required for external linkage. CA is employed exclusively, both to communicate with the central controls and with an embedded IOC on a Linux-based PLC for slow controls.
Slides THDAULT05 [1.780 MB]
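
The "CA as the sole external interface" idea can be sketched with pcaspy, a Python Channel Access server library: a soft IOC serves a writable setpoint PV, loosely analogous to the embedded IOC on the FPGA's PowerPC core described above. The PV names are illustrative.

```python
# A sketch with pcaspy (a Python CA server library): a soft IOC serving a
# writable setpoint PV, loosely analogous to the embedded IOC described
# above. PV names are illustrative.
from pcaspy import SimpleServer, Driver

prefix = "LLRF:"
pvdb = {"PHASE:SET": {"prec": 3, "unit": "deg"}}

class LLRFDriver(Driver):
    def write(self, reason, value):
        self.setParam(reason, value)  # accept the CA write and store it
        return True

server = SimpleServer()
server.createPV(prefix, pvdb)
driver = LLRFDriver()

while True:
    server.process(0.1)  # serve CA requests, 0.1 s at a time
```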