
monitoring

Paper Title Other Keywords Page
MOAB01 The Status of the LHC Controls System Shortly Before Injection of Beam controls, laser, diagnostics, cryogenics 5
 
  • P. Charrue, H. Schmickler
    CERN, Geneva
  At the time of the ICALEPCS 2007 conference, the LHC main accelerator will be close to its final state of installation, and major components will have passed the so-called “hardware commissioning.” In this paper the requirements and the main components of the LHC control system will be described very briefly. Of its classical 3-tier architecture, those solutions that correspond to major development work done at CERN will be presented. Focus will be given to the present status of these developments and to lessons learned in the past months.
 
MOAB03 Trends in Software for Large Astronomy Projects controls, optics, feedback, laser 13
 
  • K. K. Gillies
    Gemini Observatory, Southern Operations Center, Tucson, AZ
  • B. D. Goodrich, S. B. Wampler
    Advanced Technology Solar Telescope, National Solar Observatory, Tucson
  • J. M. Johnson, K. McCann
    W. M. Keck Observatory, Kamuela
  • S. Schumacher
    National Optical Astronomy Observatories, La Serena, Chile
  • D. R. Silva
    AURA/Thirty Meter Telescope, Pasadena/CA
  • A. Wallander, G. Chiozzi
    ESO, Garching bei Muenchen
  The current 8-10 m class ground-based telescopes require complex real-time control systems that are large, distributed, fault-tolerant, integrated, and heterogeneous. New challenges are on the horizon with new instruments, AO, laser guide stars, and the next generation of even larger telescopes. These projects are characterized by increasing complexity, where requirements cannot be met in isolation due to the high coupling between the components in the control and acquisition chain. Additionally, the high cost of observing time imposes very challenging requirements in terms of system reliability and observing efficiency. The challenges presented by the next generation of telescopes go beyond a matter of scale and may even require a change in paradigm. Although our focus is on control systems, it is essential to keep in mind that this is just one of the several subsystems integrated in the whole observatory end-to-end operation. In this paper we show how the astronomical community is responding to these challenges in the software arena. We analyze the evolution in control system architecture and software infrastructure, looking into the future for these two generations of projects.
 
MOPA01 Summary of the Control System Cyber-Security (CS)2/HEP Workshop controls, synchrotron, factory, photon 18
 
  • S. Lueders
    CERN, Geneva
  Over the last few years modern accelerator and experiment control systems have increasingly been based on commercial-off-the-shelf products (VME crates, PLCs, SCADA systems, etc.), on Windows or Linux PCs, and on communication infrastructures using Ethernet and TCP/IP. Despite the benefits coming with this (r)evolution, new vulnerabilities are inherited, too: Worms and viruses spread within seconds via the Ethernet cable, and attackers are becoming interested in control systems. Unfortunately, control PCs cannot be patched as fast as office PCs. Even worse, vulnerability scans at CERN using standard IT tools have shown that commercial automation systems lack fundamental security precautions: Some systems crashed during the scan, others could easily be stopped or their process data altered. The (CS)2/HEP workshop held the weekend before ICALEPCS2007 was intended to present, share, and discuss countermeasures deployed in HEP laboratories in order to secure control systems. This presentation will give a summary overview of the solutions planned and deployed, and of the experience gained.
 
MOPA02 LHC@FNAL – A New Remote Operations Center at Fermilab controls, site, quadrupole, instrumentation 23
 
  • W. F. Badgett, K. B. Biery, E. G. Gottschalk, S. R. Gysin, M. O. Kaletka, M. J. Lamm, K. M. Maeshima, P. M. McBride, E. S. McCrory, J. F. Patrick, A. J. Slaughter, A. L. Stone, A. V. Tollestrup, E. R. Harms
    Fermilab, Batavia, Illinois
  • N. J. Hadley, S. K. Kunori
    UMD, College Park, Maryland
  • M. Lamont
    CERN, Geneva
  Commissioning the LHC accelerator and experiments will be a vital part of the worldwide high-energy physics program beginning in 2007. A remote operations center, LHC@FNAL, has been built at Fermilab to make it easier for accelerator scientists and experimentalists working in North America to help commission and participate in operations of the LHC and experiments. We report on the evolution of this center from concept through construction and early use. We also present details of its controls system, management, and expected future use.  
 
MOPA03 Redundancy for EPICS IOCs controls, cryogenics 26
 
  • L. R. Dalesio
    SLAC, Menlo Park, California
  • G. Liu, B. Schoeneburg, M. R. Clausen
    DESY, Hamburg
  High availability is driving the reliability demands for today’s control systems. Commercial control systems are tackling these requirements by redundant implementations of major components. Design and implementation of redundant Input Output Controllers (IOCs) for EPICS will open new control regimes also for the EPICS collaboration. The origin of this development is the new XFEL project at DESY. The demands on the availability for the machine uptime are extremely high (99.8%) and can only be achieved if all the utility supplies are permanently available 24/7. This paper will describe the implementation of redundant EPICS IOCs at DESY that shall replace the existing redundant commercial systems for cryogenic controls. Special technical solutions are necessary to synchronize continuous control process databases (e.g., PID). Synchronization of sequence programs demands similar technical solutions. All of these update mechanisms must be supervised by a redundancy monitor task (RMT) that implements a hard-coded expert system that has to fulfill the essential failover criteria: A failover may only occur if the new state is providing more reliable operations than the current state.  
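The essential failover criterion above can be illustrated with a toy decision rule. This is a hypothetical sketch, not the DESY RMT implementation: the node-state fields and the health score are invented stand-ins for whatever checks a redundancy monitor actually performs.

```python
# Hypothetical sketch of the failover rule: switch to the standby node
# only when it would provide a strictly more reliable state than the
# currently active node. Field names are illustrative, not the RMT's.
from dataclasses import dataclass


@dataclass
class NodeState:
    heartbeat_ok: bool   # peer heartbeat received within its deadline
    io_ok: bool          # field I/O still reachable from this node
    db_synced: bool      # process-database updates are current


def health_score(state: NodeState) -> int:
    """Crude reliability score: the count of healthy indicators."""
    return sum([state.heartbeat_ok, state.io_ok, state.db_synced])


def should_fail_over(active: NodeState, standby: NodeState) -> bool:
    """Fail over only if the standby is strictly healthier than the active node."""
    return health_score(standby) > health_score(active)
```

A real monitor would additionally need hysteresis against flapping and protection against split-brain situations, which this sketch omits.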
 
MOPB04 JavaIOC factory, controls, power-supply 40
 
  • M. R. Kraimer
    Private Address, Osseo
  EPICS is a set of Open Source software tools, libraries, and applications developed collaboratively and used worldwide to create distributed soft real-time control systems for scientific instruments such as particle accelerators, telescopes, and other large scientific experiments. An IOC (Input/Output Controller) is a network node that controls and/or monitors a collection of devices. An IOC contains a memory-resident real-time database. The real-time database has a set of "smart" records. Each record is an instance of a particular record type. JavaIOC is a Java implementation of an EPICS IOC. It has many similarities to a Version 3 EPICS IOC, but extends the data types to support structures and arrays.
 
TOAB03 ALICE Control System – Ready for LHC Operation controls, site, heavy-ion, collider 65
 
  • A. Augustinus, M. Boccioli, P. Ch. Chochula, S. Kapusta, P. Rosinsky, C. Torcato de Matos, L. W. Wallet, L. S. Jirden
    CERN, Geneva
  • G. De Cataldo, M. Nitti
    INFN-Bari, Bari
  ALICE is one of the four LHC experiments presently being built at CERN and due to start operations by the end of 2007. The experiment is being built by a very large worldwide collaboration; about 1000 collaborators and 85 institutes are participating. The construction and operation of the experiment pose many technical and managerial problems, and this also applies to the design, implementation, and operation of the control system. The control system is technically challenging, representing a major increase in terms of size and complexity with respect to previous-generation systems, and the managerial issues are of prime importance due to the widely scattered contributions. This paper is intended to give an overview of the status of the control system. It will describe the overall structure and give some examples of chosen controls solutions, and it will highlight how technical and managerial challenges have been met. The paper will also describe how the various subsystems are integrated to form a coherent control system, and it will finally give some hints on the first experiences and an outlook of the forthcoming operation.  
 
TPPA03 Software Factory Techniques Applied to Process Control at CERN controls, factory, target, collider 87
 
  • M. D. Dutour
    CERN, Geneva
  The LHC requires constant monitoring and control of large quantities of parameters to guarantee operational conditions. For this purpose a methodology called UNICOS was implemented to standardize the design of process control applications. To further accelerate the development of these applications, we migrated our existing UNICOS tooling suite toward a software factory in charge of assembling project, domain, and technical information seamlessly into deployable PLC–SCADA systems. This software factory delivers consistently high quality by reducing human error and repetitive tasks and adapts to user specifications in a cost-efficient way. Hence, this production tool is designed to hide the PLC and SCADA platforms, enabling the experts to focus on the business model rather than specific syntax. Based on industry standards, this production tool along with the UNICOS methodology provides a modular environment meant to support process control experts to develop their solutions quickly. This article presents the user requirements and chosen approach. Then the focus moves to the benefits of the selected architecture and finishes with the results and a vision for the future.

LHC: Large Hadron Collider; UNICOS: UNified Industrial COntrol Systems; PLC: Programmable Logic Controller; SCADA: Supervisory Control And Data Acquisition. Terms: process control, software engineering

 
 
TPPA05 Control of Acquisition and Cluster-Based Online Processing of GRETINA Data controls 93
 
  • M. L. Cromaz, C. A. Lionberger
    LBNL, Berkeley, California
  The GRETINA gamma ray tracking detector will acquire data from 112 digitizer modules in 28 VME crates. The data will be distributed to a cluster of on the order of 100 computer servers for the computation-intensive initial processing steps which will be run concurrently with data acquisition. A slow-controls system based on EPICS controls all aspects of data acquisition and this online processing. On the cluster, EPICS controls not only when processing is occurring but which processing programs are running on which nodes and where their inputs and outputs are directed. The EPICS State Notation Language is used extensively both in the VME and cluster environments.  
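The coordination described above, starting and stopping processing programs on cluster nodes to follow the acquisition state, is at heart a state machine. The sketch below is illustrative Python only, not actual State Notation Language (SNL is a C-like language compiled into EPICS sequence programs), and the class and node names are invented.

```python
# Toy state machine mirroring the kind of SNL sequence described: when
# acquisition starts, processing programs are launched on every cluster
# node; when it stops, they are stopped again.
class RunSupervisor:
    """Two-state supervisor: IDLE <-> RUNNING (names are illustrative)."""

    def __init__(self, nodes):
        self.state = "IDLE"
        self.nodes = nodes       # hypothetical cluster node names
        self.running = set()     # nodes with a processing program active

    def update(self, acquiring: bool) -> str:
        # Transition logic: follow the acquisition flag.
        if self.state == "IDLE" and acquiring:
            self.running = set(self.nodes)   # start processors everywhere
            self.state = "RUNNING"
        elif self.state == "RUNNING" and not acquiring:
            self.running.clear()             # stop processors
            self.state = "IDLE"
        return self.state
```

In the real system the per-node actions would be EPICS writes rather than set operations, but the transition structure is the same.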
 
TPPA20 Canone – A Highly-Interactive Web-Based Control System Interface controls, background 129
 
  • A. J. Green
    University of Cambridge, Cambridge
  • K. Zagar, M. Pelko
    Cosylab, Ljubljana
  • L. Zambon
    ELETTRA, Basovizza, Trieste
  In recent years, the usability of web applications has significantly improved, approaching that of rich desktop applications. Example applications are numerous, e.g., the many different web applications from Google. The enabling driver for these developments is the AJAX (Asynchronous JavaScript and XML) architecture. Canone, originally a PHP web interface for the Tango control system developed at Elettra, is one of the first attempts at long-distance interaction with a control system via the Web. Users with suitable privileges can create panels consisting of various graphical widgets for monitoring and control of the process variables of the control system online. Recently, Canone was extended to interact with a control system through an abstract DAL (Data Access Layer) interface, making it applicable to EPICS and TINE as well. Also, the latest release of Canone comes with drag'n'drop functionality for creating the panels, making the framework even easier to use. This article discusses the general issues of web-based interaction with a control system, such as security, usability, network traffic, and scalability, and presents the approach taken by Canone.
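The benefit of an abstract DAL is that panels are written once against a neutral interface, with per-control-system adapters (Tango, EPICS, TINE) plugged in behind it. A minimal sketch of that pattern, with invented class and method names (this is not Canone's actual API):

```python
# Hedged sketch of the abstract data-access-layer pattern: widgets depend
# only on the interface, so any control-system backend can serve them.
from abc import ABC, abstractmethod


class DataAccessLayer(ABC):
    """Neutral interface a panel widget talks to (names are illustrative)."""

    @abstractmethod
    def read(self, channel: str):
        ...


class DictBackedDAL(DataAccessLayer):
    """Stand-in backend; a real adapter would speak Channel Access, TINE, etc."""

    def __init__(self, values):
        self.values = values

    def read(self, channel: str):
        return self.values[channel]


def render_widget(dal: DataAccessLayer, channel: str) -> str:
    """A widget only sees the DAL, never the concrete control system."""
    return f"{channel} = {dal.read(channel)}"
```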
 
TPPA22 Standard Device Control via PVSS Object Libraries in ALICE controls, radiation, power-supply, heavy-ion 135
 
  • A. Augustinus, P. Ch. Chochula, L. S. Jirden, L. W. Wallet
    CERN, Geneva
  The device control in the LHC experiments is based on OPC servers and PVSS SCADA systems. A software framework enables users to set up their PVSS projects for the different devices used. To achieve a homogeneous operational environment for the ALICE experiment, these devices need to be controlled through standard interfaces. PVSS panels act as the upper control layer and should allow for full control of the devices. The object-oriented features of PVSS have allowed the development of device Object Libraries. The Object Libraries have two main advantages. On one hand, they ease the operator's task thanks to the standardization of the various device control panels. On the other hand, they reduce the developer's job, as only basic software knowledge is required to set up a control application for a standard device. This paper will describe the device control architecture, including PVSS, the software framework, and the OPC server. It will describe the Object Libraries developed for some devices, and it will explain how the Object Libraries integrate tools in the ALICE controls environment, such as Finite State Machines, access control, and trending.

ALICE (A Large Ion Collider Experiment); LHC (Large Hadron Collider); OPC (OLE for Process Control); SCADA (Supervisory Control And Data Acquisition)

 
 
TPPA29 Interfacing of Peripheral Systems to EPICS Using Shared Memory controls, diagnostics, SNS, laser 152
 
  • E. Tikhomolov
    TRIUMF, Vancouver
  Interfacing of peripheral control and data acquisition systems to an EPICS-based control system is a common problem. At the ISAC radioactive beam facility, both Linux-based and Windows-based systems were integrated using the “soft” IOC, which became available in EPICS release 3.14. For Linux systems, shared memory device support was implemented using standard Linux functions. For Windows-based RF control systems, the “soft” IOC runs as a separate application, which uses shared memory for data exchange with the RF control applications. A set of DLLs exposes an API for use by the application programmer. Additional features include alarm conditions for read-back updates, watchdogs for each running application, and test channels.  
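The exchange pattern described, an application writes into a shared segment that the "soft" IOC reads, can be sketched with Python's standard shared-memory support. This is only an analogue of the mechanism, assuming a single scalar value; the real implementation uses Linux shared-memory functions on one side and a DLL-exposed API on Windows.

```python
# Toy analogue of the shared-memory exchange: an application writes a
# read-back value into a segment and the "soft IOC" side reads it out.
import struct
from multiprocessing import shared_memory

FMT = "<d"  # one little-endian double, e.g. an RF amplitude read-back


def demo_exchange(value: float) -> float:
    """Write, then read back, one value through a shared-memory segment."""
    shm = shared_memory.SharedMemory(create=True, size=struct.calcsize(FMT))
    try:
        struct.pack_into(FMT, shm.buf, 0, value)        # application side
        (readback,) = struct.unpack_from(FMT, shm.buf)  # soft-IOC side
        return readback
    finally:
        shm.close()
        shm.unlink()
```

In the real system the two sides are separate processes attached to the same named segment, with watchdog and alarm fields alongside the data.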
 
TPPA31 Redundant EPICS IOC in PC-based Unix-like Environment controls, cryogenics, linac, radiation 158
 
  • M. R. Clausen, G. Liu, B. Schoeneburg
    DESY, Hamburg
  • K. Furukawa
    KEK, Ibaraki
  • A. Kazakov
    GUAS/AS, Ibaraki
  The redundant EPICS IOC is being actively developed at DESY in order to achieve high availability. Current development focuses on the VME/vxWorks environment for cryogenics controls. However, many facilities use PC architecture and Unix-like systems such as Linux and FreeBSD. These facilities require high availability and redundancy as well, so this paper will describe the implementation of the redundant EPICS IOC in a PC-based environment with Linux and FreeBSD. This work will be done by porting the Redundancy Monitor Task (RMT) and the Continuous Control Executive (CCE). RMT is responsible for deciding when to fail over; it is rather independent and may be used in a wide range of applications. In the future it can be employed in caGateway to add redundancy. CCE aims to synchronize two RSRV-based IOC servers.
 
TPPA32 LivEPICS: An EPICS Linux Live CD NAGIOS Equipped controls, feedback, site 161
 
  • R. Lange
    BESSY GmbH, Berlin
  • N. J. Richter
    CQU, Rockhampton
  • M. G. Giacchini
    INFN/LNL, Legnaro, Padova
  EPICS* distributions, analogous to Linux distributions, are collections of EPICS software that have been proven to work together. It is much quicker to download and install a distribution than to obtain all of the individual pieces and install them separately. The LivEPICS** distribution contains binaries from EPICS Base, various extensions, and source code.

* EPICS official web site: http://www.aps.anl.gov/epics/distributions/index.php
** M. Giacchini, PCaPAC Workshop 2006 poster: http://conferences.jlab.org/pcapac/talks/poster/Giacchini.pdf

 
 
TPPB05 The Cryogenic Control System of BEPCII controls, cryogenics, superconducting-magnet, superconductivity 169
 
  • M. H. Dai, Y. L. Huang, B. Jiang, K. X. Wang, K. J. Yue, J. Zhao, G. Li
    IHEP Beijing, Beijing
  A cryogenic system for the superconducting RF cavity (SRFC), superconducting solenoid magnet (SSM), and superconducting quadrupole magnet (SCQ) has been designed and installed in the Beijing Electron-Positron Collider (BEPCII). The cryogenic control system is a fully automatic system using PLCs and EPICS IOCs and consists of three components: a Siemens PLC system for compressor control, an AB-PLC system for cryogenic equipment control, and the high-level EPICS system into which both are integrated. The functions of cryogenic control include process control, PID control loops, real-time data access and data restoration, an alarm handler, and a human–machine interface. The control system can also recover automatically from emergencies. This paper will describe the BEPCII cryogenic control system, data communication between the S7-PLC and EPICS IOCs, and how to integrate the flow control and the low-level interlock with the AB-PLC system and EPICS.
 
TPPB09 The ALICE Transition Radiation Detector Control System controls, radiation, power-supply, collider 181
 
  • J. M. Mercado
    Heidelberg University, Physics Institute, Heidelberg
  The ALICE experiment at the LHC incorporates a transition radiation detector (TRD) designed to provide electron identification in the central barrel at momenta in excess of 2 GeV/c as well as fast (6 µs) triggering capability for high transverse momentum (pt > 3 GeV/c) processes. It consists of 540 gas detectors and about 1.2 million electronics readout channels that are digitized during the 2 µs drift time by the front-end electronics (FEEs) designed in full custom for on-detector operation. The TRD detector control system (DCS) back end is fully implemented as a detector-oriented hierarchy of objects behaving as finite state machines (FSMs). PVSS II is used as the SCADA system. The front-end part is composed of a 3-layer software architecture with a distributed information management (DIM) server running on an embedded Linux on-detector system pool (about 550 servers) and the so-called InterComLayer interfacing the DIM client in PVSS as well as the configuration database. The DCS also monitors and controls several hundred low- and high-voltage channels, among many other parameters. The layout of the system and the status of installation and commissioning are presented.
 
TPPB13 The Detector Control System for the Electromagnetic Calorimeter of the CMS Experiment at LHC controls, radiation, laser, power-supply 190
 
  • P. Adzic, P. Milenovic
    VINCA, Belgrade
  • A. B. Brett, G. Dissertori, G. Leshev, T. Punz
    ETH, Zürich
  • D. Di Calafiori
    UERJ, Rio de Janeiro
  • R. Gomez-Reino, R. Ofierzynski
    CERN, Geneva
  • A. Inyakin, S. Zelepoukine
    IHEP Protvino, Protvino, Moscow Region
  • D. Jovanovic, J. Puzovic
    Faculty of Physics, Belgrade
  The successful achievement of many physics goals of the CMS experiment required the design of an electromagnetic calorimeter (ECAL) with an excellent energy and angular resolution. The choice of the scintillating crystals, photodetectors, and front-end readout electronics of the ECAL has been made according to these criteria. However, certain characteristics of the chosen components imposed challenging constraints on the design of the ECAL, such as the need for rigorous temperature and high voltage stability. For this reason an ECAL Detector Control System (DCS) had to be carefully designed. In this presentation we describe the main DCS design objectives, the detailed specifications, and the final layout of the system. Emphasis is put on the system implementation and its specific hardware and software solutions. The latest results from final system prototype tests in the 2006 ECAL test-beam program, as well as the system installation and commissioning at the CMS experimental construction site, are also discussed.
 
TPPB15 The CSNS Controls Plan controls, power-supply, SNS, target 196
 
  • X. C. Kong, Q. Le, G. Lei, G. Li, J. Liu, J. C. Wang, X. L. Wang, G. X. Xu, Z. Zhao, C. H. Wang
    IHEP Beijing, Beijing
  The China Spallation Neutron Source (CSNS) is an accelerator-based high-power project currently under planning in China. Because of the similarities between the CSNS and the U.S. Spallation Neutron Source (SNS), the SNS control framework will be used as a model for the machine controls, and the software framework used at SNS, XAL, is a natural choice for the CSNS. This paper provides a controls overview and progress report. The technical plan, schedule, and personnel plan are also discussed.
 
TPPB23 LHC Powering Circuit Overview: A Mixed Industrial and Classic Accelerator Control Application controls, cryogenics, superconducting-magnet, diagnostics 211
 
  • H. M. Milcent, F. B. Bernard
    CERN, Geneva
  Three control systems are involved in the powering of the LHC magnets: QPSs (Quench Protection Systems), PICs (Powering Interlock Controllers), and PCs (Power Converters). They have been developed and are managed by different teams. The requirements were different; in particular, each system has its own expert software. The start of LHC hardware commissioning has shown that a single access point would make the tests easier. Therefore, a new application has been designed to gather the powering circuit information from the three expert software systems. It shows synthetic information, through homogeneous graphical interfaces, from various sources: PLCs (Programmable Logic Controllers) and WorldFIP agents, via FESA (Front-End Software Architecture) and via gateways. Furthermore, this application has been developed with later use in mind: during LHC operation, it will provide a powering circuit overview. This document describes the powering circuit overview application, based on an industrial SCADA (Supervisory Control and Data Acquisition) system named PVSS with the UNICOS (Unified Industrial Control System) framework. It also explains its integration into the LHC accelerator control infrastructure.
 
TPPB29 The OPC-Based System at SNS: An EPICS Supplement SNS, controls, site, power-supply 223
 
  • R. J. Wood, M. P. Martinez
    ORNL, Oak Ridge, Tennessee
  The Power Monitoring System at the Spallation Neutron Source (SNS) is a Windows-based system using OLE for Process Control (OPC) technology. It is employed as the primary vehicle to monitor the entire SNS Electrical Distribution System. This OPC-based system gathers real-time data, via the system's OPC server, directly from the electrical devices: substations, generators, and Uninterruptible Power Supply (UPS) units. The OPC-EPICS softIOC interface then reads the data from the OPC server and sends it to EPICS, the primary control system of SNS. This interface provides a scheme for real-time power data to be shared by both systems. Unfortunately, it engenders obscure anomalies that include data inaccuracy and update inconsistency in EPICS. Nevertheless, the OPC system supplements the EPICS system with user-friendly applications (besides the ability to compare real-time and archived data between the two systems) that enable performance monitoring and analysis with ease. The OPC-based system at SNS is a complementary system to EPICS.
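One way a bridge like this can surface the update-inconsistency anomaly mentioned above is a timestamp check when forwarding OPC samples to the EPICS side. The sketch below is purely illustrative: the function, channel name, and 5-second threshold are invented, not part of the SNS system.

```python
# Hypothetical forwarding step for an OPC-to-EPICS bridge: each sample
# carries its source timestamp, and samples older than max_age seconds
# are flagged as stale instead of being passed off as fresh data.
def forward_sample(sink: dict, channel: str, value: float,
                   timestamp: float, now: float, max_age: float = 5.0) -> bool:
    """Store a sample in the EPICS-side sink; return True if it was stale."""
    stale = (now - timestamp) > max_age
    sink[channel] = {"value": value, "stale": stale}
    return stale
```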
 
TPPB34 ISAC Control System Update controls, diagnostics, ion, optics 235
 
  • D. Bishop, D. Dale, T. Howland, H. Hui, K. Langton, M. LeRoss, R. B. Nussbaumer, C. G. Payne, K. Pelzer, J. E. Richards, W. Roberts, E. Tikhomolov, G. Waters, R. Keitel
    TRIUMF, Vancouver
  At the ISAC radioactive beam facility, the superconducting Linac was commissioned, and several experimental beam lines were added. The paper will describe the additions to the EPICS-based control system, issues with integration of third-party systems, as well as integration of accelerator controls with experiment controls.  
 
WOAA01 The ILC Control System controls, feedback, linear-collider, collider 271
 
  • R. S. Larsen
    SLAC, Menlo Park, California
  • F. Lenkszus, C. W. Saunders, J. Carwardine
    ANL, Argonne, Illinois
  • P. M. McBride, M. Votava
    Fermilab, Batavia, Illinois
  • S. Michizono
    KEK, Ibaraki
  • S. Simrock
    DESY, Hamburg
  Since the last ICALEPCS, a small multi-region team has developed a reference design model for the ILC Control System as part of the ILC Global Design Effort. The scale and performance parameters of the ILC accelerator require new thinking with regard to control system design. Technical challenges include the large number of accelerator systems to be controlled, the large scale of the accelerator facility, the high degree of automation needed during accelerator operations, and control system equipment requiring “Five Nines” availability. The R&D path for high availability touches the control system hardware, software, and overall architecture, and extends beyond traditional interfaces into the accelerator technical systems. Software considerations for HA include fault detection through exhaustive out-of-band monitoring and automatic state migration to redundant systems, while the telecom industry’s emerging ATCA standard (conceived, specified, and designed for high availability) is being evaluated for suitability for ILC front-end electronics. Parallels will be drawn with control system challenges facing the ITER CODAC team.
 
WOPA04 Front-End Software Architecture controls, diagnostics, pick-up, beam-losses 310
 
  • L. Fernandez, S. Jackson, F. Locci, J. L. Nougaret, M. P. Peryt, A. Radeva, M. Sobczak, M. Vanden Eynden, M. Arruat
    CERN, Geneva
  CERN’s Accelerator Controls group launched a project in 2003 to develop the new CERN accelerator Real-Time Front-End Software Architecture (FESA) for the LHC and its injectors. In this paper, we would like to report the status of this project on the eve of the LHC start-up. After describing the main concepts of this real-time object-oriented software framework, we will present how we have capitalized on this technical choice by showing its flexibility through the new functionalities recently introduced, such as transactions, diagnostics, monitoring, management of LHC Critical Settings, and communication with PLC devices. We will depict the methodology we have put in place to manage the growing community of developers and the start of a collaboration with GSI. To conclude, we will present the extensions foreseen in the short term.
 
WPPA01 A Novel PXI-Based Data Acquisition and Control System for Stretched Wire Magnetic Measurements for the LHC Magnets: An Operation Team Proposal controls, power-supply, diagnostics, quadrupole 316
 
  • K. Priestnall, V. Chohan
    CERN, Geneva
  • S. Shimjith, A. Tikaria
    BARC, Mumbai
  The SSW system developed by Fermilab, USA, has been the main device, heavily used since 2004 at CERN, for certain required measurements of all the LHC quadrupole assemblies as well as certain measurements of the LHC dipoles. All these structures also include various small and large corrector magnets. A novel system is proposed, based on three years of operational experience in testing the LHC magnets on a round-the-clock basis. A single stretched wire system is based on the wire cutting the magnetic flux, producing an electrical potential signal. Presently this signal is integrated by a VME-based data acquisition system and used to analyse the magnetic field. The acquisition and control are currently done via a SUN workstation communicating between different devices with different buses and using different protocols. The new system would use a PXI-based data acquisition system with an embedded controller; the different devices are replaced by PXI-based data acquisition and control cards using a single bus protocol and one chassis. The use of Windows-based application software would enhance user-friendliness, with overall costs on the order of 10 kCHF.
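The measurement principle above, a moving wire cutting flux induces a voltage whose time integral gives the flux change (V = -dΦ/dt), can be sketched numerically. A trapezoidal sum stands in for the integrating DAQ hardware; this is an illustration of the physics, not the CERN acquisition code.

```python
# Illustrative numeric version of the stretched-wire principle: the flux
# change seen by the wire loop is minus the time integral of the induced
# voltage, here approximated with the trapezoidal rule.
def flux_change(voltages, dt):
    """Return delta-Phi = -integral(V dt) for evenly sampled voltages (step dt)."""
    total = 0.0
    for v0, v1 in zip(voltages, voltages[1:]):
        total += 0.5 * (v0 + v1) * dt   # trapezoid for one sample interval
    return -total
```

For example, a constant 1 V induced over 1 s corresponds to a flux change of -1 Wb through the wire loop.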
 
WPPA12 The STAR Slow Control System - Upgrade Status controls, heavy-ion, ion, SNS 340
 
  • M. G. Cherney, J. Fujita, W. T. Waggoner, Y. N. Gorbunov
    Creighton University, Omaha, NE
  The STAR (Solenoidal Tracker At RHIC) experiment located at Brookhaven National Laboratory has been studying relativistic heavy-ion collisions since it began operation in the summer of 2000. An EPICS-based hardware controls system monitors the detector's 40,000 operating parameters. The system I/O control uses VME processors and PCs to communicate with subsystem-based sensors over a variety of field buses. The system also includes interfaces to the accelerator and magnet control systems, an archiver with a CGI web-based interface, and C++-based communication between the STAR online system, run control, and hardware controls and their associated databases. An upgrade project is underway. This involves the migration of 60% of the I/O control from the aging VME processors to PCs. The host system has been transferred from SunOS to Scientific Linux, and some of the VME boards were replaced with "softIOC" applications. The experience gained with the current setup will be discussed, and upgrade plans and progress will be outlined.
 
WPPA25 Remote Monitoring System for Current Transformers and Beam Position Monitors of PEFP proton, diagnostics, rfq, controls 368
 
  • Y.-S. Cho, H. S. Kim, H.-J. Kwon, Y.-G. Song, I.-S. Hong
    KAERI, Daejon
  • J. W. Lee
    PAL, Pohang, Kyungbuk
  The PEFP (Proton Engineering Frontier Project), part of the Korean proton linear accelerator program, has a diagnostic system with current transformers and beam position monitors. Prototypes of the current transformer (CT) and beam position monitor (BPM) were made and tested successfully as part of the beam diagnostic systems. We are preparing to remotely monitor signals from the diagnostic system. The remote monitoring system is based on a VME system with the EPICS environment. For fast digitizing of the analog signals, VME ADC input/output boards (VTR812/10) are used to meet the various needs of the beam diagnostics devices. EPICS Channel Access and drivers have been programmed in the VME CPU to operate the input/output controller (IOC) and interface with operators. An operator console and data storage have been implemented with EDM and the Channel Archiver as well.
 
WPPA30 Detector Control System of BESIII controls, radiation, power-supply, luminosity 377
 
  • X. H. Chen
    Graduate School of the Chinese Academy of Sciences, Beijing
  • C. S. Gao, X. N. Li, J. Min, Z. D. Nie, X. X. Xie, Y. G. Xie, Y. H. Zhang
    IHEP Beijing, Beijing
  In the upgrade project of the Beijing Electron Positron Collider (BEPC)II, a novel DCS (Detector Control System) for the Beijing Spectrometer (BES)III has been developed. In the system, nearly 7000 data points covering dozens of physical parameters need monitoring or control. The upper level is developed mainly with LabVIEW and OPC. The lower level mainly uses embedded systems, MCUs, PLCs, etc. These technologies reduced the cost greatly without any loss in system functions or performance. This paper will give a detailed introduction to the system architecture and the advanced technologies we used or invented.
 
WPPA31 Status of a Versatile Video System at PITZ, DESY-2 and EMBL Hamburg controls, diagnostics, electron, laser 380
 
  • M. Lomperski, P. Duval
    DESY, Hamburg
  • G. Trowitzsch, S. Weisse
    DESY Zeuthen, Zeuthen
  The market for industrial vision components is evolving towards GigE Vision (the Gigabit Ethernet vision standard). In recent years, the use of TV systems and optical readout at accelerator facilities has been increasing. The video system at PITZ, which originated in 2001, has undergone a substantial evolution over the last years. Being real-time capable, lossless capable, versatile, well documented, interoperable, and designed with the user's perspective in mind, it has also been deployed with great success at PETRA III and at EMBL at DESY Hamburg. Its uses range from robotics and live monitoring to precise measurements. The submission will show the hardware and software structure, the components used, and the current status, as well as a perspective for future work.  
 
WPPA33 Console System Using Thin Client for the J-PARC Accelerator linac, controls, klystron, target 383
 
  • T. Iitsuka, S. Motohashi, M. Takagi, S. Y. Yoshida
    Kanto Information Service (KIS), Accelerator Group, Ibaraki
  • N. Kamikubota, T. Katoh, H. Nakagawa, J.-I. Odagiri, N. Yamamoto
    KEK, Ibaraki
  An accelerator console system based on commercial thin clients has been developed for J-PARC accelerator operation and software development. Using thin client terminals, we expect higher reliability and a longer life-cycle due to more robust hardware (i.e., diskless and fanless configurations) than standard PCs. All of the console terminals share a common development/operation environment. We introduced LDAP (Lightweight Directory Access Protocol) for user authentication and NFS (Network File System) to provide users with standard tools and environments (EPICS tools, the Java SDK, and so on) within standard directory structures. We have used the console system for beam commissioning and software development at J-PARC. This paper describes early experiences with it.  
 
WPPA37 Developing of SMS Mobile System for the PLS Control System controls, vacuum, focusing, diagnostics 392
 
  • J. Choi, H.-S. Kang, J. W. Lee, B. R. Park, J. C. Yoon
    PAL, Pohang, Kyungbuk
  The PLS SMS mobile system is based on a Linux PC platform. It is equipped with a wireless SMS (Short Message Service) interface, giving an opportunity to use it with the fault alarm interlock system. It was developed as a network-based distributed real-time control system composed of several subsystems (EPICS IOCs and PLC systems). The mobile system sends a short message for a fault trip signal, containing the fault tag address, to users’ mobile devices, delivering warning or alert messages immediately; remote users can also monitor device fault states in real time from their mobile devices. Control systems can be set remotely by mobile devices in emergency situations. In order to support suitable actions against system faults, the SMS mobile system enables the system administrator to promptly access, monitor, and control the system whenever and wherever needed, by utilizing the wireless Internet and mobile devices. This paper presents the mobile SMS system for the PLS control system.  
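The fault-trip notification such a system sends can be pictured as simple message composition. The sketch below is only an illustration: the message format, the tag name, and the defensive 160-character truncation are assumptions, not the PLS implementation.

```python
from datetime import datetime

def format_fault_sms(tag: str, state: str, timestamp: datetime) -> str:
    """Compose a short fault-trip message for an operator's phone.

    Classic SMS payloads are limited to 160 characters, so the text is
    truncated defensively before being handed to the SMS gateway.
    """
    text = f"PLS ALARM {timestamp:%H:%M:%S} {tag} -> {state}"
    return text[:160]

# hypothetical fault tag and trip state
msg = format_fault_sms("VAC:SR01:GAUGE", "FAULT", datetime(2007, 10, 15, 3, 2, 1))
print(msg)  # PLS ALARM 03:02:01 VAC:SR01:GAUGE -> FAULT
```

A real deployment would hand `msg` to an SMS gateway rather than printing it; the point is only that the alarm payload carries the tag address and fault state.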
 
WPPB04 Convergence Computer–Communication Methods for Advanced High-Performance Control System controls, impedance, target, instrumentation 406
 
  • V. I. Vinogradov
    RAS/INR, Moscow
  Based on an analysis of advanced computer and communication system architectures, a future control system approach is proposed and discussed in this paper. Converging computer and communication technologies are moving towards high-performance modular system architectures based on high-speed switched interconnections. Multicore processors are becoming a more promising path to high-performance systems, and traditional parallel-bus system architectures are being extended by higher-speed serial switched interconnections. Compact modular systems based on a passive 3-4 slot PCI bus with fast switched network interconnections are described as examples of a modern, scalable control system solution, which can be compatibly extended to advanced system architectures based on new technologies (ATCA, μTCA). Combined wired and wireless subnets can also be used as effective platforms for large experimental physics control systems and complex computer automation in an experimental area, with human interaction inside the systems via IP phones.  
 
WPPB11 Secure Remote Operations of NSLS Beamlines with (Free)NX controls, site, feedback, synchrotron 421
 
  • D. P. Siddons, Z. Yin
    BNL, Upton, Long Island, New York
  At light source beamlines, there are times when remote operation by users is desired. This becomes challenging, considering that cybersecurity has been dramatically tightened throughout many facilities. Remote X-windows display from Unix/Linux workstations at the facilities, either with straight X traffic or tunneled through ssh (ssh -XC), is quite slow over long distances and thus not really suitable for remote control/operations. We implemented a solution that employs the open-source FreeNX technology. With its efficient compression, the bandwidth usage is quite small and the response time over long distances is very impressive. Our setup involves a FreeNX server configured on a Linux workstation at the facility and freely downloadable clients (Windows, Mac, Linux) at the remote site that connect to the FreeNX server. All traffic is tunneled through ssh, and special keys can be used to further enhance security. The response time is so good that remote operations are routinely performed. We believe this technology can have great implications for other facilities, including those of the high-energy physics community.  
 
WPPB15 Beyond PCs: Accelerator Controls on Programmable Logic controls, survey, diagnostics, power-supply 433
 
  • J. Dedic, K. Zagar, M. Plesko
    Cosylab, Ljubljana
  The large number of gates in modern FPGAs, including processor cores, allows the implementation of complex designs, including a core implementing Java byte-code as its instruction set. Instruments based on FPGA technology are composed only of digital parts and are totally configurable. Based on experience gained with our products (delay generators producing sub-nanosecond signals and function generators producing arbitrary functions with lengths on the order of minutes) and with our research projects (a prototype hardware platform for real-time Java, where the Java runtime is the operating system and there is no need for Linux), I will speculate about possible future scenarios: a combination of an FPGA processor core and custom logic will provide all control tasks, slow and hard real-time, while keeping a convenient software development environment such as Eclipse. I will illustrate my claims with designs for tasks such as low-latency PID controllers running at several dozen MHz, sub-nanosecond-resolution timing, motion control, and a versatile I/O controller, all implemented in real-time Java and on exactly the same hardware, just with different connectors.  
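The control law such a low-latency PID core would execute each cycle is compact enough to sketch. The fragment below uses plain Python standing in for real-time Java or FPGA logic, with made-up gains and a toy first-order plant; it shows the per-cycle arithmetic, not the timing guarantees that make the FPGA version interesting.

```python
def pid_step(setpoint, measured, state, kp, ki, kd, dt):
    # One iteration of a discrete PID controller; `state` carries the
    # integral accumulator and the previous error between calls.
    integral, prev_err = state
    err = setpoint - measured
    integral += err * dt
    deriv = (err - prev_err) / dt
    out = kp * err + ki * integral + kd * deriv
    return out, (integral, err)

# Drive a trivial first-order plant (dy/dt = u - y) toward setpoint 1.0.
y, state = 0.0, (0.0, 0.0)
for _ in range(200):
    u, state = pid_step(1.0, y, state, kp=2.0, ki=0.5, kd=0.0, dt=0.01)
    y += (u - y) * 0.01   # Euler step of the plant
print(round(y, 3))
```

On hardware the same loop body would run at a fixed cycle rate; here the plant and gains are chosen only so the closed loop visibly converges toward the setpoint.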
 
WPPB20 Extended MicroIOC Family (LOCO) ion, vacuum, storage-ring, controls 439
 
  • D. Golob, R. Kovacic, M. Pelko, M. Plesko, A. Podborsek, M. Kobal
    Cosylab, Ljubljana
  MicroIOC is an affordable, compact, embedded computer designed for controlling and monitoring devices via a control system (EPICS, ACS, and TANGO are supported). Devices can be connected to the microIOC via Ethernet, serial, GPIB, or other ports, or directly through digital or analog inputs and outputs, which makes the microIOC a perfect candidate for a platform that integrates devices into your control system. Over 90 microIOCs are already installed in 18 labs around the world. The LOgarithmic COnverter (LOCO) is a specialized microIOC used as a high-voltage power-supply distribution system for vacuum ion pumps. A single high-voltage power-supply controller can deliver power to multiple ion pumps. A highly accurate logarithmic-scale current measurement is provided on each pump, enabling an affordable and reliable pressure measurement ranging from 10^-12 to 10^-4 mbar.  
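The logarithmic-scale measurement works because ion pump current tracks pressure over many decades, so a log-compressed reading covers the whole range. The sketch below inverts such a compression; the 0-8 V span and the linear volts-to-decades calibration are hypothetical, chosen only to map the eight decades quoted in the abstract.

```python
import math

# Hypothetical calibration: the log front end compresses the pump
# reading so that 8 decades of pressure (1e-12..1e-4 mbar) map
# linearly onto a 0..8 V measurement range.
P_MIN, P_MAX = 1e-12, 1e-4
V_SPAN = 8.0

def pressure_from_volts(v: float) -> float:
    """Invert the logarithmic compression: 0 V -> 1e-12 mbar, 8 V -> 1e-4 mbar."""
    decades = math.log10(P_MAX / P_MIN)          # 8 decades of range
    return P_MIN * 10 ** (decades * v / V_SPAN)

print(pressure_from_volts(0.0))   # 1e-12
print(pressure_from_volts(8.0))   # ~1e-4
```

The real LOCO calibration curve is certainly more involved; the sketch only shows why a log-scale measurement makes one ADC range serve eight decades of pressure.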
 
WPPB28 Remote Operation of Large-Scale Fusion Experiments controls, plasma, site, diagnostics 454
 
  • G. Abla, D. P. Schissel
    GA, San Diego, California
  • T. W. Fredian
    MIT, Cambridge, Massachusetts
  • M. Greenwald, J. A. Stillerman
    MIT/PSFC, Cambridge, Massachusetts
  This paper examines the past, present, and future remote operation of large-scale fusion experiments by large, geographically dispersed teams. The fusion community has considerable experience placing remote collaboration tools in the hands of real users. Tools to remotely view operations and to control selected instrumentation and analysis tasks were in use as early as 1992, and full remote operation of an entire tokamak experiment was demonstrated in 1996. Today’s experiments invariably involve a mix of local and remote researchers, with sessions routinely led from remote institutions. Currently, the National Fusion Collaboratory Project has created FusionGrid for secure remote computations and has placed collaborative tools into operating control rooms. Looking toward the future, ITER will be the next major step in the international program. Fusion experiments put a premium on near real-time interactions with data and among members of the team, and though ITER will generate more data than current experiments, the greatest challenge will be provisioning systems for analyzing, visualizing, and assimilating data to support distributed decision making during ITER operation.  
 
WPPB30 Cybersecurity and User Accountability in the C-AD Control System controls, site, survey, heavy-ion 457
 
  • S. Binello, T. D'Ottavio, R. A. Katz, J. Morris
    BNL, Upton, Long Island, New York
  A heightened awareness of cybersecurity has led to a review of the procedures that ensure user accountability for actions performed on the computers of the Collider-Accelerator Department (C-AD) Control System. Control system consoles are shared by multiple users in control rooms throughout the C-AD complex. A significant challenge has been the establishment of procedures that securely control and monitor access to these shared consoles without impeding accelerator operations. This paper provides an overview of C-AD cybersecurity strategies with an emphasis on recent enhancements in user authentication and tracking methods.  
 
WPPB32 Cybersecurity in ALICE DCS controls, site 460
 
  • A. Augustinus, L. S. Jirden, P. Rosinsky, P. Ch. Chochula
    CERN, Geneva
  In the design of the control system for the ALICE experiment, much emphasis has been put on cybersecurity. The control system operates on a dedicated network isolated from the campus network, and remote access is only granted via a set of Windows Server 2003 machines configured as application gateways. The operator consoles are also separated from the control system by means of a cluster of terminal servers. Computer virtualization techniques are deployed to grant time-restricted access for sensitive tasks such as control system modifications. This paper will describe the global access control architecture and the policy and operational rules defined. The role-based authorization schema will be described, as well as the tools implemented to achieve this task. Authentication based on smartcard certificates will also be discussed.  
 
WPPB34 Information Technology Security at the Advanced Photon Source controls, photon, target 463
 
  • W. P. McDowell, K. V. Sidorowicz
    ANL, Argonne, Illinois
  The proliferation of “bot” nets, phishing schemes, denial-of-service attacks, root kits, and other cyber attack schemes designed to capture a system or network creates a climate of worry for system administrators, especially for those managing accelerator and large experimental-physics facilities as they are very public targets. This paper will describe the steps being taken at the Advanced Photon Source (APS) to protect the infrastructure of the overall network with emphasis on security for the APS control system.  
 
WPPB40 LCLS Beam-Position Monitor Data Acquisition System controls, pick-up, coupling, feedback 478
 
  • R. Akre, R. G. Johnson, K. D. Kotturi, P. Krejcik, E. Medvedko, J. Olsen, S. Smith, T. Straumann
    SLAC, Menlo Park, California
  In order to determine the transverse LCLS beam position from the signals induced by the beam in four stripline pickup electrodes, the BPM electronics have to process four concurrent short RF bursts with a dynamic range > 60 dB. An analog front end conditions the signals for subsequent acquisition by a waveform digitizer and also provides a calibration tone that can be injected into the system in order to compensate for gain variations and drift. Timing of the calibration pulser and switches, as well as control of various programmable attenuators, is provided by an FPGA. Because no COTS waveform digitizer with the desired performance (>14 bit, ≥119 MSPS) was available, the PAD digitizer (see separate contribution WPPB39) was selected. It turned out that the combination of a waveform digitizer with a low-end embedded CPU running a real-time OS (RTEMS) and control system (EPICS) is extremely flexible and could very easily be customized for our application. However, in order to meet the BPM real-time needs (readings in < 1 ms), a second Ethernet interface was added to the PAD so that waveforms can be shipped over a dedicated link, circumventing the ordinary TCP/IP stack.  
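At first order, extracting a position from four stripline amplitudes is a difference-over-sum computation. The sketch below shows that arithmetic only; the electrode pairing and the scale factor `k` are illustrative assumptions, not the LCLS calibration.

```python
def bpm_position(a, b, c, d, k=1.0):
    """Estimate transverse beam position from four stripline amplitudes.

    Here a/c are taken as the horizontal pair and b/d as the vertical
    pair; k is a geometry-dependent scale factor (position per unit
    asymmetry). The normalization by the sum removes the dependence
    on bunch charge.
    """
    s = a + b + c + d
    x = k * (a - c) / s
    y = k * (b - d) / s
    return x, y

# beam slightly displaced toward electrode a, centered vertically
x, y = bpm_position(1.1, 1.0, 0.9, 1.0, k=10.0)
print(x, y)  # x positive, y zero
```

Real BPM processing also applies the calibration-tone gain corrections the abstract describes; the difference-over-sum step is just the final position estimate.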
 
ROAA01 Status of the ITER CODAC Conceptual Design controls, plasma, site, factory 481
 
  • J. W. Farthing
    UKAEA Culham, Culham, Abingdon, Oxon
  • M. Greenwald
    MIT/PSFC, Cambridge, Massachusetts
  • I. Yonekawa
    JAEA/NAKA, Ibaraki-ken
  • J. B. Lister
    ITER, St Paul lez Durance
  Since the last ICALEPCS conference, a number of issues have been studied in the conceptual design of the ITER Control, Data Access, and Communication (CODAC) systems. Workable approaches have been selected for almost all of the technical challenges. The conceptual design will be reviewed in 2007, before the preliminary engineering design starts. One software component that does not have a clear solution is the execution of data-driven schedules to operate the installation at multiple levels, from daily program management to plasma feedback control. Recent developments in workflow products might be useful here. The present conceptual weakness is not having found a satisfactory "universal" description of the I&C design process for the "self-description" of the 100 procured Plant Systems. A vital CODAC design feature is to operate the full plant on the basis of imported “self-description” data, which necessarily include the process description in each Plant System. The targeted formal link between 3-D design, process design, and process control has not yet been created. Some of the strawman designs meeting the technical requirements will be described in detail.  
slides icon Slides  
 
ROAB03 Software Integration and Test Techniques in a Large Distributed Project: Evolution, Process Improvement, Results site, controls 508
 
  • M. Pasquato, P. Sivera
    ESO, Garching bei Muenchen
  The Atacama Large Millimeter Array (ALMA) is a radio telescope being built in Chile. The software development for the project is committed to the Computing Integrated Product Team (IPT), which has the responsibility of realizing an end-to-end software system consisting of different subsystems, each with specified development areas. Within the Computing IPT, the Integration and Test subsystem has the role of collecting the software produced, building and testing it, and preparing releases. In this paper, the complexity of the software integration and test tasks is analyzed, and the problems due to the high geographical distribution of the developers and the variety of software features to be integrated are highlighted. Different implemented techniques are discussed, among them the use of a common development framework (the ALMA Common Software, or ACS), the use of standard development hardware, and the organization of the developers' work in Function-Based Teams (FBTs). Frequent automatic builds and regression tests, repeated regularly on so-called Standard Test Environments (STEs), are also routinely used. Advantages, benefits, and shortcomings of the adopted solutions are presented.  
slides icon Slides  
 
ROPA03 ANTARES Slow Control Status controls, insertion 520
 
  • J. M. Gallone
    IPHC, Strasbourg Cedex 2
  ANTARES is a neutrino telescope project based on strings of Cerenkov detectors in the deep sea. These detectors are spread over a volume of 1 km3 at a depth of about 2 km in the Mediterranean Sea near Toulon. About 400 such detectors are now operational, as well as a large variety of instruments that need a reliable and accurate embedded slow control system. Based on commodity off-the-shelf (COTS) low-power integrated processors and industry standards such as Ethernet and ModBus, the system is expected to run for 3 years without any direct access to the hardware. We present the system architecture and some performance figures. The slow control system stores the state of the system at any time in a database. This state may be analyzed by the technical staff in charge of maintenance, by physicists checking the setup of the experiment, or by the data acquisition system for saving experimental conditions. The main functions of the slow control system are to record the state of the whole system, to set or modify the value of a parameter of a physical device, and to set up the initial values of the physical devices.  
slides icon Slides  
 
RPPA25 The Data Acquisition System (DAQ) of the FLASH Facility photon, controls, feedback, laser 564
 
  • K. Rehlich, R. Rybnikov, R. Kammering
    DESY, Hamburg
  Nowadays, photon science experiments and the machines providing their photon beams produce enormous amounts of data. To capture the data from the photon science experiments and from the machine itself, we developed a novel Data AcQuisition (DAQ) system for the FLASH (Free-electron LASer in Hamburg) facility. The system is not only fully integrated into the DOOCS control system but is also the core of a number of essential machine-related feedback loops and monitoring tasks. A central DAQ server records and stores the data of more than 900 channels with 1-MHz up to 2-GHz sampling, as well as several images from the photon science experiments with a typical frame rate of 5 Hz. On this server all data are synchronized on a bunch basis, which makes it the perfect location to attach, e.g., high-level feedbacks and calculations. An overview of the architecture of the DAQ system and its interconnections within the FLASH facility complex will be given, together with the status of the DAQ system and possible future extensions/applications.  
 
RPPA26 Database for Control System of J-PARC 3 GeV RCS controls, linac, power-supply, pick-up 567
 
  • S. F. Fukuta
    MELCO SC, Tsukuba
  • Y. Kato, M. Kawase, H. Sakaki, H. Sako, H. Yoshikawa, H. Takahashi
    JAEA/J-PARC, Tokai-Mura, Naka-Gun, Ibaraki-Ken
  • S. S. Sawa
    Total Support Systems Corporation, Tokai-mura, Naka-gun, Ibaraki
  • M. Sugimoto
    Mitsubishi Electric Control Software Corp, Kobe
  The control system of the J-PARC 3-GeV RCS is built around a database, which comprises a component data management DB (Component DB) and a data acquisition DB (Operation DB). The Component DB was developed mainly to manage data on accelerator components and to generate EPICS records automatically from those data. We are currently testing the reliability of the DB application software during Linac operation; most Linac EPICS records are now generated from the DB, and we are able to operate the Linac with very few problems. The Operation DB collects two kinds of data: EPICS record data and synchronized data. We are testing the reliability of the application software for EPICS record data collection and have confirmed that EPICS record data are collected with very few problems. Linac EPICS record data will be inserted into the Operation DB from the start of Linac operation. The application software for synchronized data collection is now being developed, and we will test its reliability using comprehensive information on RCS operation. We report on the status of development of the database for the control system of the J-PARC 3-GeV RCS.  
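Generating EPICS records from component data amounts to template rendering over database rows. The sketch below illustrates the idea only: the field set, the SCAN choice, and the device names are invented for illustration, not the RCS production template.

```python
def make_ai_record(device: str, signal: str, egu: str) -> str:
    """Render one EPICS analog-input record from component-DB fields.

    The body follows standard EPICS .db syntax; the minimal field set
    here (DESC/EGU/SCAN) is an illustrative guess.
    """
    return (
        f'record(ai, "{device}:{signal}") {{\n'
        f'    field(DESC, "{signal} of {device}")\n'
        f'    field(EGU, "{egu}")\n'
        f'    field(SCAN, "1 second")\n'
        f'}}\n'
    )

# hypothetical component rows as they might come out of the Component DB
components = [("RCS:QM01", "CURRENT", "A"), ("RCS:QM01", "TEMP", "degC")]
db_text = "".join(make_ai_record(*row) for row in components)
print(db_text)
```

The attraction of this approach, as the abstract suggests, is that the .db files stay consistent with the component inventory because they are regenerated from it rather than edited by hand.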
 
RPPA27 Status of the TANGO Archiving System controls, extraction, synchrotron, vacuum 570
 
  • J. Guyot, M. O. Ounsy, S. Pierre-Joseph Zephir
    SOLEIL, Gif-sur-Yvette
  This poster will give a detailed status of a major piece of functionality delivered as a Tango service: the archiving service. The goal of this service is to maintain the archive history of thousands of accelerator or beamline control parameters, in order to be able to correlate signals or to take snapshots of the system at different times and compare them. To this end, three database services have been developed and fully integrated into Tango: a historical database with an archiving frequency up to 0.1 Hz, a short-term database providing a few hours of retention but with a higher archiving frequency (up to 10 Hz), and finally a snapshot database. These services are available to end users through two graphical user interfaces: Mambo (for data extraction/visualization from the historical and temporary databases) and Bensikin (for snapshot management). The software architecture and design of the whole system will be presented, as well as the current status of the deployment at SOLEIL.  
 
RPPA31 Construction and Application of Database for CSNS controls, SNS, survey, alignment 579
 
  • P. Chu
    SLAC, Menlo Park, California
  • C. H. Wang, Q. Gan
    IHEP Beijing, Beijing
  The database of the China Spallation Neutron Source (CSNS) accelerator is designed to store machine parameters, magnet measurement data, survey and alignment data, control system configuration data, equipment history data, the e-logbook, and so on. It will also support project management quality assurance, error impact analysis, and assembly assistance, including sorting. This paper introduces the construction and application of the database for CSNS. Details such as naming convention rules, the database model and schema, interfaces for importing and exporting data, and database maintenance will be presented.  
 
RPPA35 The DIAMON Project – Monitoring and Diagnostics for the CERN Controls Infrastructure diagnostics, controls, laser, power-supply 588
 
  • M. Buttner, J. Lauener, K. Sigerud, M. Sobczak, N. Stapley, P. Charrue
    CERN, Geneva
  The CERN accelerators’ controls infrastructure spans large geographical distances and accesses a wide diversity of equipment. In order to ensure smooth beam operation, efficient monitoring and diagnostic tools are required by the operators, presenting the state of the infrastructure and offering guidance for first-line support. The DIAMON project intends to deploy software monitoring agents in the controls infrastructure, each agent running predefined local tests and sending its results to a central service. A highly configurable graphical interface will exploit these results and present the current state of the controls infrastructure. Diagnostic facilities to get further details on a problem, and first aid to repair it, will also be provided. This paper will describe the DIAMON project’s scope and objectives as well as the user requirements. Also presented will be the system architecture and the first operational version.  
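The agent pattern described, running predefined local tests and reporting results to a central service, can be sketched in a few lines. The check names and the report structure below are invented for illustration; a real DIAMON agent would forward such a report over the network rather than return it.

```python
def run_agent(checks):
    """Run each predefined local test and collect a status report that a
    central service could aggregate. A failing probe must never crash the
    agent, so exceptions are captured into the report as well."""
    report = {}
    for name, check in checks.items():
        try:
            report[name] = "OK" if check() else "FAIL"
        except Exception as exc:
            report[name] = f"ERROR: {exc}"
    return report

# stand-ins for real local probes (disk space, process liveness, ...)
checks = {
    "disk_space": lambda: True,
    "process_alive": lambda: False,
}
print(run_agent(checks))  # {'disk_space': 'OK', 'process_alive': 'FAIL'}
```

The central service then only has to merge many such per-host reports into the overall state the graphical interface displays.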
 
RPPA36 Handling Large Data Amounts in ALICE DCS controls, power-supply, alignment, proton 591
 
  • A. Augustinus, L. S. Jirden, S. Kapusta, P. Rosinsky, P. Ch. Chochula
    CERN, Geneva
  The amount of control data to be handled by the ALICE experiment at CERN is an order of magnitude larger than in previous-generation experiments. Some 18 detectors, 130 subsystems, and 100,000 control channels need to be configured, controlled, and archived in normal operation. During the configuration phase several gigabytes of data are written to devices, and during stable operation some 1,000 values per second are written to the archive. The peak load for archival is estimated at 150,000 changes/s. Data are also continuously exchanged with several external systems, and the system should be able to operate unattended and fully independently of any external resources. Much care has been taken in the design to fulfill these requirements, and this report will describe the solutions implemented. The data flow and the various components will be described, as well as the data exchange mechanisms and the interfaces to the external systems. Some emphasis will also be given to the data reduction and filtering mechanisms that have been implemented in order to keep the archive within maintainable margins.  
 
RPPB03 Alarms Configuration Management laser, controls, vacuum, site 606
 
  • R. Martini, K. Sigerud, N. Stapley, A. S. Suwalska, P. Sollander
    CERN, Geneva
  The LHC alarm service, LASER, is the alarm tool used by the operators of the accelerators and the technical services at CERN. To ensure that the alarms displayed are known and understood by the operators, each alarm should go through a well-defined procedure from its definition to its acceptance in operation. In this paper we describe the workflow used to define alarms for the technical services at CERN. We describe the different stages of the workflow, such as equipment definition, alarm information specification, control system configuration, testing, and final acceptance in operation. We also describe the tools available to support each stage and the actors involved. Although the use of a strict workflow limits the number of alarms that arrive in LASER and ensures that they are useful for operations, for a large complex like CERN there are still potentially many alarms displayed at one time. The LASER tool therefore provides facilities for the operators to manage and reduce the list of alarms displayed. The most important of these facilities are described, together with other important services such as automatic GSM and/or e-mail notification and alarm system monitoring.  
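Alarm-list reduction of the kind mentioned can be pictured as filtering and ordering the displayed set. The sketch below is a generic illustration: the field names, priority scale, and masking-by-system rule are hypothetical, not LASER's actual reduction facilities.

```python
def reduce_alarms(alarms, min_priority=2, hide_systems=()):
    """Filter the displayed alarm list the way an operator console might:
    drop low-priority entries and alarms from masked systems, and show
    the most recent alarms first."""
    visible = [a for a in alarms
               if a["priority"] >= min_priority
               and a["system"] not in hide_systems]
    return sorted(visible, key=lambda a: a["time"], reverse=True)

alarms = [
    {"system": "vacuum",  "priority": 3, "time": 10, "text": "pump trip"},
    {"system": "cooling", "priority": 1, "time": 12, "text": "info"},
    {"system": "cryo",    "priority": 2, "time": 11, "text": "level low"},
]
shown = reduce_alarms(alarms, min_priority=2, hide_systems=("cryo",))
print([a["text"] for a in shown])  # ['pump trip']
```

The design point is that reduction is a view-level operation: the full alarm set is preserved, and only what the operator sees is filtered.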
 
RPPB06 Device Control Tool for CEBAF Beam Diagnostics Software controls, diagnostics, instrumentation, target 615
 
  • P. Chevtsov
    Jefferson Lab, Newport News, Virginia
  By continuously monitoring the beam quality in the CEBAF accelerator, a variety of beam diagnostics software created at Jefferson Lab makes a significant contribution to the very high availability of the machine for nuclear physics experiments. The interface between this software and the beam instrumentation hardware components is provided by a device control tool optimized for beam diagnostics tasks. As part of the device/driver development framework at Jefferson Lab, this tool is very easy to support and to extend to integrate new beam instrumentation devices. All device control functions are based on configuration (ASCII text) files that completely define the hardware interface standards used (CAMAC, VME, RS-232, GPIB) and the communication protocols. The paper presents the main elements of the device control tool for beam diagnostics software at Jefferson Lab.  
 
RPPB07 The System Overview Tool of the Joint Controls Project (JCOP) Framework controls, diagnostics, power-supply, feedback 618
 
  • M. Gonzalez-Berges, F. Varela
    CERN, Geneva
  • K. D. Joshi
    BARC, Mumbai
  For each control system of the Large Hadron Collider (LHC) experiments, there will be many processes spread over many computers. Together they will form a PVSS distributed system with around 150 computers organized in a hierarchical fashion. A centralized tool has been developed for supervision, error identification, and troubleshooting in such a large system. A quick response to abnormal situations will be crucial to maximizing physics usage. The tool gathers data from all the systems via several paths (e.g., process monitors, internal database) and, after some processing, presents it in different views: a hierarchy of systems, a host view, and a process view. Relations between the views are included to help operators understand complex problems that involve more than one system. It is also possible to filter the information presented to the shift operator according to several criteria (e.g., node, process type, process state). Alarms are raised when undesired situations are found. The data gathered are stored in the historical archive for further analysis. Extensions of the tool are under development to integrate information coming from other sources (e.g., operating system, hardware).  
 
RPPB08 The Development of Detector Alignment Monitoring System for the ALICE ITS laser, alignment, controls, collider 621
 
  • M. G. Cherney, Y. N. Gorbunov, R. P. Thomen, J. Fujita
    Creighton University, Omaha, NE
  • T. J. Humanic, B. S. Nilsen, J. Schley, D. Trusdale
    Ohio State University
  A real-time detector alignment monitoring system has been developed using commodity USB cameras, spherical mirrors, and laser beams introduced via a single-mode fiber. Innovative control and online-analysis software has been developed using the OpenCV (Open Computer Vision) library and PVSS (Prozessvisualisierungs- und Steuerungssystem). This system is being installed in the ALICE detector to monitor the position of ALICE's Inner Tracking System subdetector. The operational principle and software implementation will be described.  
 
RPPB11 EPICS CA Gateway Employment in the BEPCII Network controls, linac, photon 627
 
  • J. Liu, C. H. Wang, Y. H. Wang, Z. Zhao, X. H. Huang
    IHEP Beijing, Beijing
  The control network of the BEPCII is divided into two separate subnets. In order to access IOC PVs between the separate subnets, as well as IOC PVs from the campus network, we deploy the EPICS CA gateway in the BEPCII network. This paper describes the EPICS CA gateway deployment and network management in the BEPCII network.  
 
RPPB15 Management System Tailored to Research Institutes controls 635
 
  • J. K. Kamenik, P. Kolaric, I. Verstovsek
    Cosylab, Ljubljana
  As with all disciplines, project management has a set of rules that must be followed and a set of recommendations that make work easier. But as in all engineering, there is no single magical formula or equation, no matter how much managers and physicists alike would love to have one. We present a working solution tailored to academic projects that requires only a minimum of effort and discipline and yields substantial benefits, which are presented in this article. Commercially available project management tools are not suited to managing the diversity of work in research institutes. We have therefore adopted a set of open-source tools, implemented some custom additions, and integrated the tools into a coherent product to suit our purpose. It enables developers to track their work and communicate effectively, project managers to monitor the progress of individual projects, and management to supervise critical parameters of the company at any time. The experiences gained by using the system are presented. As practice has shown, the product is also ideal for research institutes, as demonstrated by its use in the control groups of DESY and ANKA.  
 
RPPB21 Finite State Machines for Integration and Control in ALICE controls, injection, beam-losses, heavy-ion 650
 
  • A. Augustinus, M. Boccioli, P. Ch. Chochula, L. S. Jirden, G. De Cataldo
    CERN, Geneva
  From the controls point of view, a physics experiment can be seen as a vast hierarchy of systems and subsystems, with an experiment control node at the top and single atomic control channels at the bottom. In the case of the ALICE experiment at CERN, the many systems and subsystems are being built by many engineers and physicists in different institutes around the world. The integration of the various parts to form a homogeneous system enabling coherent automatic control can therefore be seen as a major challenge. A distributed PVSS SCADA system, complemented with a device and system modeling schema based on finite state machines, has been used to achieve this. This paper will describe the schema and the tools and components that have been developed at CERN, and it will show how this has been implemented and used in ALICE. The efforts to standardize the state diagrams for different types of devices and systems at different levels will be described, and some detailed examples will be shown. The ALICE graphical user interface, integrating both the FSM control hierarchy and the PVSS monitoring, will also be described.  
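A device-level finite state machine of the kind used in such hierarchies reduces to a transition table: a command is accepted only if it is defined for the current state. The states and commands below are generic placeholders, not the standardized ALICE state diagrams.

```python
class Device:
    """Minimal FSM in the spirit of a DCS device node: commands trigger
    transitions only when allowed from the current state."""

    TRANSITIONS = {
        ("OFF", "go_standby"): "STANDBY",
        ("STANDBY", "configure"): "CONFIGURED",
        ("CONFIGURED", "go_ready"): "READY",
        ("READY", "go_off"): "OFF",
    }

    def __init__(self):
        self.state = "OFF"

    def command(self, cmd: str) -> str:
        key = (self.state, cmd)
        if key not in self.TRANSITIONS:
            # illegal commands are rejected rather than silently ignored
            raise ValueError(f"command {cmd!r} not allowed in state {self.state}")
        self.state = self.TRANSITIONS[key]
        return self.state

dev = Device()
for cmd in ("go_standby", "configure", "go_ready"):
    dev.command(cmd)
print(dev.state)  # READY
```

In a full hierarchy, a parent node issues such commands to its children and derives its own state from theirs; the table-driven core stays the same at every level.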
 
RPPB27 A Proposed Alarm Handling System Management Plan for SNS with Application to Target Control System target, controls, SNS 668
 
  • R. E. Battle, E. Danilova, R. L. Sangrey, E. Williams, J. Munro
    ORNL, Oak Ridge, Tennessee
  We have developed a set of requirements for an SNS alarm handling system and have applied them to the control system for the SNS liquid mercury target, to gain experience with an implementation on a limited scale before applying them to the whole accelerator. This implementation is based on the EPICS alarm handler ALH. The requirements address such topics as alarm classification, priorities, warning types, hierarchies, and management under different modes of target operation. Alarms are currently organized by system and subsystem. Target control systems considered in the examples here include the Hg loop, three light-water cooling loops, and one heavy-water cooling loop. Modifications to ALH include the addition of "drag and drop" capabilities for individual PVs and drop-down lists of selectable actions. One such action provides access to the alarm response procedures required for a process variable that shows an alarm. Alarm and operator action log files are maintained separately from the instances of ALH launched for operator displays. Database reporting tools have been developed to aid analysis of the data in these files. Examples of the use of our tools and features will be presented.
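Organizing alarms by system and subsystem with explicit priorities, as described above, can be sketched in a few lines. The field names and priority scale are assumptions for illustration, not the actual SNS/ALH schema.

```python
from dataclasses import dataclass

@dataclass
class Alarm:
    system: str        # e.g. "Target"
    subsystem: str     # e.g. "HgLoop"
    pv: str            # process variable that raised the alarm
    priority: int      # 1 = highest urgency
    active: bool = True

def highest_priority(alarms, system=None):
    """Return the most urgent active alarm, optionally within one system."""
    candidates = [a for a in alarms
                  if a.active and (system is None or a.system == system)]
    return min(candidates, key=lambda a: a.priority) if candidates else None

# Hypothetical PV names modeled on the loops mentioned in the abstract.
alarms = [
    Alarm("Target", "HgLoop", "TGT:Hg:FlowLow", priority=1),
    Alarm("Target", "LightWaterLoop1", "TGT:LW1:TempHigh", priority=2),
    Alarm("Target", "HeavyWaterLoop", "TGT:HW:PressLow", priority=3, active=False),
]
print(highest_priority(alarms).pv)  # TGT:Hg:FlowLow
```

Filtering on the system/subsystem fields is what lets one hierarchy serve both a target-only display and, later, a whole-accelerator view.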
 
RPPB32 A MySQL-based Data Archiver: Preliminary Results controls, insertion, site 680
 
  • C. J. Slominski, M. Bickley
    Jefferson Lab, Newport News, Virginia
  Following an evaluation of the archival requirements of the Jefferson Laboratory accelerator's user community, a prototyping effort was executed to determine whether an archiver based on MySQL had sufficient functionality to meet those requirements. This approach was chosen because an archiver based on a relational database enables the development effort to focus on data acquisition and management, letting the database take care of storage, indexing, and data consistency. It was clear from the prototype effort that there were no performance impediments to the successful implementation of a final system. With our performance concerns addressed, the lab undertook the design and development of an operational system. The system is now in its operational testing phase. This paper discusses the archiver system requirements and some of the design choices and their rationale, and presents the acquisition, storage, and retrieval performance levels achieved with the system.
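The division of labor described above can be sketched as a toy relational archiver: the application only inserts and queries samples, while the database provides storage, indexing, and consistency. The stdlib sqlite3 module stands in for MySQL here so the sketch is self-contained; the table and column names are assumptions, not the Jefferson Lab schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE samples (
        channel TEXT NOT NULL,
        stamp   REAL NOT NULL,   -- seconds since epoch
        value   REAL NOT NULL,
        PRIMARY KEY (channel, stamp)
    )
""")

def archive(channel, stamp, value):
    # The primary key enforces one sample per channel per timestamp.
    conn.execute("INSERT INTO samples VALUES (?, ?, ?)",
                 (channel, stamp, value))

def retrieve(channel, t0, t1):
    """Time-range retrieval, served by the primary-key index."""
    cur = conn.execute(
        "SELECT stamp, value FROM samples"
        " WHERE channel = ? AND stamp BETWEEN ? AND ? ORDER BY stamp",
        (channel, t0, t1))
    return cur.fetchall()

archive("BPM1:X", 100.0, 0.12)
archive("BPM1:X", 101.0, 0.15)
archive("BPM1:X", 102.0, 0.11)
print(retrieve("BPM1:X", 100.5, 102.5))  # [(101.0, 0.15), (102.0, 0.11)]
```

Time-range queries over a (channel, timestamp) key are the core retrieval pattern; everything else (buffering, deadbanding, partitioning old data) layers on top without changing this interface.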
 
FOAA01 Automated Diagnosis of Physical Systems diagnostics, controls 701
 
  • S. Narasimhan
    UARC, Moffett Field
  Automated diagnosis deals with techniques to determine the cause of any abnormal or unexpected behavior of physical systems. The key issue is that inferences have to be made from the limited sensor information available from the system. Major categories of diagnostic technology include rule-based systems, case-based reasoning systems, data-driven learning systems, and model-based reasoning systems, among others. In this paper we briefly introduce these categories and then focus on model-based reasoning. We present the Hybrid Diagnosis Engine (HyDE) developed at the NASA Ames Research Center and its application to real problems.
slides icon Slides  
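The model-based reasoning category named above can be illustrated with a minimal consistency-based diagnosis: a behavioral model predicts the output under different fault assumptions, and the candidates are those assumptions that make the model consistent with the observation. This is a sketch of the general technique only, not the HyDE engine; the component chain and fault model are assumptions.

```python
def diagnose(components, inputs, observed_output):
    """Find single-fault candidates: components whose assumed failure
    makes the model's prediction match the observation."""
    def predict(broken):
        # Model: a chain of gain stages; a broken stage passes its input through.
        x = inputs
        for name, gain in components:
            x = x if name == broken else x * gain
        return x

    if predict(broken=None) == observed_output:
        return []  # healthy model already consistent: no fault inferred
    return [name for name, _ in components
            if predict(broken=name) == observed_output]

# Chain of two amplifier stages; the observed output implicates A2.
components = [("A1", 2.0), ("A2", 3.0)]
print(diagnose(components, inputs=1.0, observed_output=2.0))  # ['A2']
```

Even this tiny example shows the defining property of model-based diagnosis: the fault is never sensed directly but inferred from the mismatch between prediction and observation.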
 
FOPA01 Future of Tango controls, synchrotron, feedback, instrumentation 723
 
  • A. Buteau, N. L. Leclercq, M. O. Ounsy
    SOLEIL, Gif-sur-Yvette
  • J. M. Chaize, J. M. Meyer, F. Poncet, E. T. Taurel, P. V. Verdier, A. Gotz
    ESRF, Grenoble
  • D. Fernandez-Carreiras, J. Klora
    ALBA, Bellaterra (Cerdanyola del Vallès)
  • T. Kracht
    DESY, Hamburg
  • M. Lonza, C. Scafuri
    ELETTRA, Basovizza, Trieste
  Tango is a control system based on the device server concept. It is currently under active development by four (soon five) institutes, three of which have joined recently. In October 2006 the Tango community met in the French Alps to discuss the future evolution of Tango. This paper summarizes the fruits of that meeting. It presents the different areas Tango will concentrate on over the next five years. The main topics include services, beamline control, embedded systems on FPGA, 64-bit support, scalability for large systems, faster boot performance, enhanced Python and Java support for servers, more model-driven development, and integrated workbench-like applications. The aim is to keep adding "batteries" to Tango so that it remains a modern, powerful control system that satisfies not only the needs of light-source facilities but those of other communities too.
slides icon Slides
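The "device server concept" on which Tango is based can be sketched conceptually: a named device exposes readable attributes and invokable commands behind a uniform interface, so clients need no device-specific code. This is plain Python for illustration only, not the actual Tango/PyTango API; the device and attribute names are invented.

```python
class Device:
    """A named device exposing attributes and commands uniformly."""
    def __init__(self, name):
        self.name = name
        self._attributes = {}   # attribute name -> reader callable
        self._commands = {}     # command name -> handler callable

    def add_attribute(self, attr, reader):
        self._attributes[attr] = reader

    def add_command(self, cmd, handler):
        self._commands[cmd] = handler

    def read_attribute(self, attr):
        return self._attributes[attr]()

    def command_inout(self, cmd, arg=None):
        return self._commands[cmd](arg)

# A hypothetical power supply device with one attribute and one command.
ps = Device("sr/ps/bend-1")
setpoint = {"current": 0.0}
ps.add_attribute("Current", lambda: setpoint["current"])
ps.add_command("SetCurrent", lambda a: setpoint.update(current=a))

ps.command_inout("SetCurrent", 120.5)
print(ps.read_attribute("Current"))  # 120.5
```

The uniform read/command interface is what makes generic tools (archivers, GUIs, workbench applications) possible across heterogeneous hardware.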