Paper | Title | Other Keywords | Page |
---|---|---|---|
MOMAU005 | Integrated Approach to the Development of the ITER Control System Configuration Data | database, controls, software, network | 52 |
The ITER control system (CODAC) is steadily moving into its implementation phase. A design guidelines handbook and a software development toolkit, named CODAC Core System, were produced in February 2011. They are ready to be used off-site, in the ITER domestic agencies and associated industries, to develop the first control "islands" of the various ITER plant systems. In addition to the work done off-site, there is a wealth of I&C-related data developed centrally at ITER, but scattered across various sources. These data include I&C design diagrams, 3-D data, volume allocation, inventory control, administrative data, planning and scheduling, tracking of deliveries and associated documentation, requirements control, etc. All these data have to be kept coherent and up to date, with various types of cross-checks and procedures imposed on them. A "plant system profile" database, currently under development at ITER, represents an effort to provide an integrated view of the I&C data. Supported by platform-independent data modeling based on XML Schema, it accumulates all the data in a single hierarchy and provides different views for different aspects of the I&C data. The database is implemented using MS SQL Server with a Java-based web interface. Import and data-linking services are implemented using Talend software, and report generation is done with MS SQL Server Reporting Services. This paper reports on the first implementation of the database, the kind of data stored so far, typical workflows and processes, and directions of further work.
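As a minimal, self-contained sketch of the schema-driven cross-checking described above, the snippet below validates a toy plant-system profile against an XML Schema with Python and lxml; the schema and document here merely stand in for the much richer ITER data model and are not the actual ITER schema.

```python
# Sketch: validate a (toy) plant-system profile against an XML Schema.
# The schema below is illustrative only, not the ITER plant-system profile model.
from lxml import etree

SCHEMA = etree.XMLSchema(etree.XML("""
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="plantSystem">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="signal" maxOccurs="unbounded">
          <xs:complexType>
            <xs:attribute name="name" type="xs:string" use="required"/>
            <xs:attribute name="unit" type="xs:string" use="required"/>
          </xs:complexType>
        </xs:element>
      </xs:sequence>
      <xs:attribute name="id" type="xs:string" use="required"/>
    </xs:complexType>
  </xs:element>
</xs:schema>"""))

profile = etree.XML('<plantSystem id="PS-55"><signal name="pressure" unit="Pa"/></plantSystem>')
if SCHEMA.validate(profile):
    print("profile is consistent with the data model")
else:
    for error in SCHEMA.error_log:          # cross-check failures, for follow-up
        print(error.message)
```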
Slides MOMAU005 [0.384 MB]
Poster MOMAU005 [0.692 MB]
MOMAU008 | Integrated Management Tool for Controls Software Problems, Requests and Project Tasking at SLAC | software, controls, HOM, feedback | 59 |
The Controls Department at SLAC, with its service-center model, continuously receives engineering requests to design, build and support controls for accelerator systems lab-wide. Each customer request can vary in complexity from installing a minor feature to enhancing a major subsystem. Departmental accelerator improvement projects, along with DOE-approved construction projects, also contribute heavily to the workload. These various customer requests and projects, paired with ongoing operational maintenance and problem reports, place a demand on the department that usually exceeds the capacity of available resources. An integrated, centralized repository of all problems, requests and project tasks, available to customers, operators, managers and engineers alike, is essential to capture, communicate, prioritize, assign, schedule, track progress and, finally, commission all work components. The Controls software group has recently integrated its request/task management into its online problem-tracking tool, the "Comprehensive Accelerator Tool for Enhancing Reliability" (CATER). This paper discusses the new integrated software problem/request/task management tool: its workflow, its reporting capability, and its many benefits.
Slides MOMAU008 [0.083 MB]
Poster MOMAU008 [1.444 MB]
MOMMU001 | Extending Alarm Handling in Tango | TANGO, database, controls, synchrotron | 63 |
This paper describes the alarm system developed at the Alba Synchrotron, built on the Tango control system. It describes the tools used for configuration and visualization, the system's integration in user interfaces, and its approach to alarm specification: either assigning discrete Alarm/Warning levels or allowing versatile logic rules written in Python. The paper also covers the life cycle of an alarm (triggering, logging, notification, explanation and acknowledgement) and the automatic control actions that can be triggered by alarms.
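A minimal sketch of what such a Python logic rule can look like, assuming hypothetical Tango device and attribute names; this illustrates the idea, not the Alba implementation.

```python
# Illustration of Python logic rules for alarms (not the Alba code);
# device and attribute names are hypothetical. Requires PyTango.
import tango

ALARMS = {
    # alarm name: (Python rule evaluated on attribute values, severity)
    "VACUUM_HIGH": ("pressure > 1e-5", "ALARM"),
    "TEMP_WARN":   ("40 < temperature <= 60", "WARNING"),
}

def read_values():
    gauge = tango.DeviceProxy("sr/vacuum/gauge-01")
    rack = tango.DeviceProxy("sr/rack/temp-01")
    return {
        "pressure": gauge.read_attribute("Pressure").value,
        "temperature": rack.read_attribute("Temperature").value,
    }

def evaluate(values):
    for name, (rule, severity) in ALARMS.items():
        # restricted eval: the rule only sees the attribute values
        if eval(rule, {"__builtins__": {}}, values):
            print(f"{severity}: {name} triggered by rule '{rule}'")

evaluate(read_values())
```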
Slides MOMMU001 [1.119 MB]
Poster MOMMU001 [2.036 MB]
MOPKN009 | The CERN Accelerator Measurement Database: On the Road to Federation | database, controls, extraction, data-management | 102 |
The Measurement database, acting as short-term central persistence and front-end of the CERN accelerator Logging Service, receives billions of time-series data records per day for more than 200,000 signals. A variety of data acquisition systems on hundreds of front-end computers publish source data that eventually end up being logged in the Measurement database. As part of a federated approach to data management, information about source devices is defined in a Configuration database, whilst the signals to be logged are defined in the Measurement database. A mapping, which is often complex and subject to change and extension, is therefore required in order to subscribe to the source devices and to write the published data to the corresponding named signals. Since 2005 this mapping was maintained by hand in dozens of XML files by multiple people, resulting in an error-prone configuration. In 2010 the configuration was improved so that it is fully centralized in the Measurement database, significantly reducing the complexity and the number of actors in the process. Furthermore, logging processes immediately pick up modified configurations via JMS-based notifications sent directly from the database, allowing targeted device-subscription updates rather than the full process restart that was required previously. This paper describes the architecture and the benefits of the current implementation, as well as the next steps on the road to a fully federated solution.
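The notification-driven refresh can be illustrated with a short, purely illustrative Python sketch; the production system is Java with JMS, and the device and signal names below are invented.

```python
# Purely illustrative sketch of the pattern described above (the production
# system is Java with JMS); device and signal names are invented.
CONFIG_DB = {"DEVICE.SECTOR5.QF": ["CURRENT", "VOLTAGE"]}   # stand-in for the DB tables
current_mapping = {}                                        # device -> subscribed signals

def subscribe(device, signals):
    print(f"subscribing to {device}: {signals}")            # stand-in for the real DAQ call

def on_configuration_change(device):
    """Called when the database pushes a change notification for one device."""
    signals = CONFIG_DB.get(device, [])
    if signals != current_mapping.get(device):              # targeted update, no restart
        current_mapping[device] = list(signals)
        subscribe(device, signals)

on_configuration_change("DEVICE.SECTOR5.QF")                # initial subscription
CONFIG_DB["DEVICE.SECTOR5.QF"].append("CURRENT_REF")        # configuration extended centrally
on_configuration_change("DEVICE.SECTOR5.QF")                # only this device is re-subscribed
```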
MOPKN014 | A Web Based Realtime Monitor on EPICS Data | EPICS, monitoring, interface, real-time | 121 |
Funding: IHEP China
Monitoring tools such as EDM and CSS are extremely important in EPICS-based control systems, and most of them follow a client/server (C/S) architecture. This paper describes the design and implementation of a web-based real-time monitoring system for EPICS data, following a browser/server (B/S) model implemented with Flex [1]. Through the CAJ [2] interface it fetches EPICS data such as beam energy, beam current, lifetime and luminosity, and displays them in a real-time chart in the browser (IE or Firefox/Mozilla). The chart is refreshed at a regular interval, can be zoomed and adjusted, and provides data tips and a full-screen mode.
[1] http://www.adobe.com/products/flex.html
[2] M. Sekoranja, "Native Java Implementation of Channel Access for EPICS", 10th ICALEPCS, Geneva, October 2005, PO2.089-5.
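As a language-neutral illustration of the same browser/server pattern (the paper's implementation uses Flex on the client and the Java CAJ library on the server), the sketch below polls a few hypothetical PVs with pyepics and serves them as JSON for a browser-side chart.

```python
# Python analogue of the browser/server pattern described above (the paper's
# implementation uses Flex and CAJ). PV names are hypothetical; requires pyepics.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from epics import PV

PVS = {name: PV(name) for name in
       ("BEAM:ENERGY", "BEAM:CURRENT", "BEAM:LIFETIME", "BEAM:LUMINOSITY")}

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        data = {name: pv.get() for name, pv in PVS.items()}   # latest values
        body = json.dumps(data).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("", 8080), Handler).serve_forever()   # browser chart polls http://host:8080/
```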
Poster MOPKN014 [1.105 MB]
MOPMN010 | Development of a Surveillance System with Motion Detection and Self-location Capability | network, radiation, survey, controls | 257 |
A surveillance system with motion detection and location measurement capability is being developed to support effective security control of the facilities at our institute. The surveillance cameras and sensors placed around the facilities and the institute have the primary responsibility for preventing unwanted access, but in some cases additional temporary surveillance cameras are used for subsidiary purposes. The problems with these additional cameras are detecting such unwanted access and determining the cameras' respective locations. To solve these problems, we are constructing a surveillance camera system with motion detection and self-location features based on a server-client scheme. A client, consisting of a network camera and Wi-Fi and GPS modules, acquires its location, measured by GPS or from the radio signals of surrounding Wi-Fi access points, and then sends its location to a remote server along with the motion picture over the network. The server analyzes this information to detect unwanted access and serves the status and alerts on a web-based interactive map for easy access. We report the current status of the development and the expected applications of this self-locating system beyond surveillance.
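A minimal sketch of the client-to-server report described above, using only the Python standard library; the endpoint URL, camera identifier and JSON fields are hypothetical.

```python
# Sketch of the client-side report: a camera node sends its measured position
# and a motion-detection event to the server. URL and fields are hypothetical.
import json
import time
import urllib.request

def report(server_url, latitude, longitude, motion_detected):
    event = {
        "camera_id": "cam-07",                                # hypothetical identifier
        "time": time.time(),
        "position": {"lat": latitude, "lon": longitude},      # from GPS or Wi-Fi survey
        "motion": motion_detected,
    }
    req = urllib.request.Request(
        server_url,
        data=json.dumps(event).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.status

# report("http://surveillance.example.org/events", 36.44, 140.60, True)
```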
MOPMN013 | Operational Status Display and Automation Tools for FERMI@Elettra | TANGO, controls, operation, electron | 263 |
Funding: This work was supported in part by the Italian Ministry of University and Research under grants FIRB-RBAP045JF2 and FIRB-RBAP06AWK3.
Detecting and locating faults and malfunctions of an accelerator is a difficult and time-consuming task. The situation is even more difficult during the commissioning phase of a new accelerator, when physicists and operators are still gaining confidence with the plant. On the other hand, a fault-free machine does not imply that it is ready to run: the definition of "readiness" depends on the expected behavior of the plant. In the case of FERMI@Elettra, where the electron beam goes to different branches of the machine depending on the programmed activity, the configuration of the plant determines the rules for deciding whether the activity can be carried out or not. To help with this task and to display the global status of the plant, a tool known as the "matrix" has been developed. It is composed of a graphical front-end, which displays a synthetic view of the plant status grouped by subsystem and location along the accelerator, and of a back-end made of Tango servers which read the status of the machine devices via the control system and evaluate the rules. The back-end also includes a set of objects known as "sequencers" that perform complex actions automatically in order to actively switch from one accelerator configuration to another.
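A minimal sketch of the kind of rule evaluation such a back-end performs, combining Tango device states per subsystem and location; device names are hypothetical and this is not the FERMI@Elettra code.

```python
# Sketch: each matrix cell combines the states of the Tango devices belonging
# to one subsystem/location. Device names are hypothetical; requires PyTango.
import tango

MATRIX = {
    # (subsystem, location): devices that must be ON for the cell to be "ready"
    ("vacuum", "linac"): ["li/vacuum/pump-01", "li/vacuum/gauge-01"],
    ("magnets", "bc1"):  ["bc1/ps/dipole-01", "bc1/ps/quad-01"],
}

def cell_ready(devices):
    states = [tango.DeviceProxy(d).state() for d in devices]
    return all(s == tango.DevState.ON for s in states)

for (subsystem, location), devices in MATRIX.items():
    status = "READY" if cell_ready(devices) else "NOT READY"
    print(f"{subsystem:8s} @ {location:6s}: {status}")
```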
Poster MOPMN013 [0.461 MB]
MOPMN025 | New SPring-8 Control Room: Towards Unified Operation with SACLA and SPring-8 II Era. | controls, operation, network, laser | 296 |
We have renovated the SPring-8 control room, the first major renovation since its inauguration in 1997. In 2011 the construction of SACLA (the SPring-8 Angstrom Compact free-electron LAser) was completed, and it is planned to be controlled from the new control room for close cooperative operation with the SPring-8 storage ring. The planned SPring-8 II project is also expected to require more workstations than the previous control room could host, so we have extended the control room area for these foreseen projects. In this renovation we have employed new technology that did not exist 14 years ago, such as large LCDs and silent liquid-cooled workstations, to provide a comfortable operating environment, and we have incorporated many ideas gained from 14 years of operational experience into the design. Operation in the new control room began in April 2011 after a short construction period.
MOPMN028 | Automated Voltage Control in LHCb | controls, detector, experiment, high-voltage | 304 |
LHCb is one of the four LHC experiments. In order to ensure the safety of the detector and to maximize efficiency, LHCb needs to coordinate its own operations, in particular the voltage configuration of the different sub-detectors, according to the accelerator status. Control software has been developed for this purpose, based on the Finite State Machine toolkit and the SCADA system used for control throughout LHCb (and the other LHC experiments). This software efficiently drives both the Low Voltage (LV) and High Voltage (HV) systems of the 10 different sub-detectors that constitute LHCb, setting each sub-system to the required voltage (easily configurable at run time) based on the accelerator state. The control software is also responsible for monitoring the state of the sub-detector voltages and adding it to the event data in the form of status bits. Safe and yet flexible operation of the LHCb detector has been obtained, and automatic actions, triggered by the state changes of the accelerator, have been implemented. This paper details the implementation of the voltage control software, its flexible run-time configuration and its usage in the LHCb experiment.
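The core idea, mapping the accelerator state to a target voltage state for every sub-detector, can be sketched as follows; the states, sub-detector list and driving callback are illustrative and are not the LHCb FSM/SCADA implementation.

```python
# Sketch of the state mapping: the accelerator state drives the target voltage
# state of every sub-detector. States and names below are illustrative only.
VOLTAGE_FOR_ACCELERATOR_STATE = {      # easily re-configurable at run time
    "INJECTION":    "STANDBY",         # HV reduced, LV on
    "ADJUST":       "STANDBY",
    "STABLE_BEAMS": "READY",           # full HV, ready for physics
    "BEAM_DUMP":    "STANDBY",
    "NO_BEAM":      "OFF",
}

SUBDETECTORS = ["VELO", "RICH1", "RICH2", "OT", "MUON"]   # subset, for illustration

def on_accelerator_state_change(new_state, drive):
    """Propagate the accelerator state to every sub-detector voltage FSM."""
    target = VOLTAGE_FOR_ACCELERATOR_STATE.get(new_state, "OFF")   # fail safe
    for sub in SUBDETECTORS:
        drive(sub, target)

on_accelerator_state_change("STABLE_BEAMS",
                            drive=lambda sub, v: print(f"{sub} -> HV {v}"))
```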
Poster MOPMN028 [0.479 MB]
MOPMS005 | The Upgraded Corrector Control Subsystem for the Nuclotron Main Magnetic Field | controls, power-supply, software, operation | 326 |
This report discusses the control subsystem for the 40 main magnetic field correctors, which is part of the control system of the Nuclotron superconducting synchrotron. The subsystem is used in static and dynamic modes (in dynamic mode the corrector current depends on the magnetic field value). Development of the subsystem is carried out within the framework of the Nuclotron-NICA project. The principles of digital (PSMBus/RS-485 protocol) and analog control of the corrector power supplies, current monitoring, and remote control of the subsystem via an IP network are also presented, together with the first results of the subsystem commissioning.
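A minimal sketch of the dynamic mode mentioned above, where the corrector setpoint follows the measured main field; the calibration table and the write routine are hypothetical stand-ins for the real PSMBus/RS-485 interface.

```python
# Sketch of "dynamic" mode: the corrector current follows the main field.
# Calibration values and the write call are hypothetical.
import bisect

FIELD_POINTS   = [0.1, 0.5, 1.0, 1.5, 2.0]   # field (T) -> current (A), per corrector
CURRENT_POINTS = [0.2, 1.1, 2.3, 3.6, 4.9]

def corrector_current(field):
    """Linear interpolation of the calibration table."""
    i = bisect.bisect_left(FIELD_POINTS, field)
    i = min(max(i, 1), len(FIELD_POINTS) - 1)
    f0, f1 = FIELD_POINTS[i - 1], FIELD_POINTS[i]
    c0, c1 = CURRENT_POINTS[i - 1], CURRENT_POINTS[i]
    return c0 + (c1 - c0) * (field - f0) / (f1 - f0)

def set_current(channel, amps):
    print(f"corrector {channel}: set {amps:.3f} A")   # stand-in for the RS-485 write

# dynamic mode: re-evaluate the setpoint whenever the measured field changes
for measured_field in (0.3, 0.8, 1.2):
    set_current(channel=7, amps=corrector_current(measured_field))
```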
Poster MOPMS005 [1.395 MB]
MOPMS006 | SARAF Beam Lines Control Systems Design | controls, vacuum, operation, hardware | 329 |
The first beam-line addition to the SARAF facility was completed in Phase I and introduced new hardware to be controlled. This article describes the beam-line vacuum, magnet and diagnostics control systems and the design methodology used to achieve a reliable and reusable control system. The vacuum control systems of the accelerator and the beam lines have been integrated into a single system that controls all the vacuum hardware for both; the new system fixes legacy issues and is designed for modularity and simple configuration. Several types of magnetic lenses have been introduced in the new beam line to control the beam direction and optimally focus it on the target; the control system was designed to be modular so that magnets can be quickly and simply inserted or removed. The diagnostics systems control the diagnostic devices used in the beam lines, including data acquisition and measurement. Some of the older control systems were improved and redesigned using modern control hardware and software. These systems were successfully integrated in the accelerator and are used during beam activation.
Poster MOPMS006 [2.537 MB]
MOPMS016 | The Control System of CERN Accelerators Vacuum (Current Status and Recent Improvements) | vacuum, controls, interface, interlocks | 354 |
The vacuum control system of most of the CERN accelerators is based on Siemens PLCs and on the PVSS SCADA tool. The application software for both PLC and SCADA was initially developed specifically by the vacuum group; over time it has included a growing number of building blocks from the UNICOS framework. After the transition from the LHC commissioning phase to regular operation, there have been a number of additions and improvements to the vacuum control system, driven by new technical requirements and by feedback from the accelerator operators and vacuum specialists. New functions have been implemented in the PLC and SCADA layers: automatic restart of pumping groups after a power failure; control of the solenoids added to reduce e-cloud effects; and PLC power-supply diagnostics. The automatic recognition and integration of mobile slave PLCs has been extended to allow the quick installation of pumping groups with the electronics kept in radiation-free zones. The ergonomics and navigation of the SCADA application have been enhanced; new tools have been developed for interlock analysis and for device listing and selection; and web pages have been created summarizing the values and status of the system. The graphical interface for Windows clients has been upgraded from ActiveX to Qt, and the PVSS servers will soon be moved from Windows to Linux.
Poster MOPMS016 [113.929 MB]
MOPMU014 | Development of Distributed Data Acquisition and Control System for Radioactive Ion Beam Facility at Variable Energy Cyclotron Centre, Kolkata. | controls, interface, embedded, linac | 458 |
To facilitate frontline nuclear physics research, an ISOL (Isotope Separator On-Line) type Radioactive Ion Beam (RIB) facility is being constructed at the Variable Energy Cyclotron Centre (VECC), Kolkata. The RIB facility at VECC consists of various subsystems, such as the ECR ion source, RFQ, rebunchers and LINACs, that produce and accelerate the energetic beams of radioactive isotopes required for different experiments. The Distributed Data Acquisition and Control System (DDACS) is intended to monitor and control the large number of parameters associated with the different subsystems from a centralized location, so that beam generation and beam tuning can be carried out in a user-friendly manner. The DDACS has been designed around a three-layer architecture comprising an equipment interface layer, a supervisory layer and an operator interface layer. The equipment interface layer consists of different Equipment Interface Modules (EIMs), which are designed around an ARM processor and connected to different equipment through interfaces such as RS-232 and RS-485. The supervisory layer consists of a VIA-processor-based embedded controller (EC) running the Embedded XP operating system. This embedded controller, interfaced with the EIMs through fiber-optic cable, acquires and analyses the data from the different EIMs. The operator interface layer consists mainly of PCs/workstations working as operator consoles. The data acquired and analysed by the EC can be displayed at the operator console, and the operator can centrally supervise and control the whole facility.
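A minimal sketch of how a supervisory layer might poll one EIM over a serial link, assuming a hypothetical ASCII query/reply protocol and pyserial; the actual EIM protocol is not specified in the abstract.

```python
# Sketch: poll one Equipment Interface Module over a serial link.
# Port settings and the ASCII command/reply format are hypothetical; requires pyserial.
import serial

def poll_eim(port="/dev/ttyUSB0", channel=1):
    with serial.Serial(port, baudrate=9600, timeout=1.0) as link:
        link.write(f"READ {channel}\r".encode())      # hypothetical query
        reply = link.readline().decode().strip()      # e.g. "CH1 VAL 3.72"
        return reply

print(poll_eim())
```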
Poster MOPMU014 [2.291 MB]
WEBHAUST01 | LHCb Online Infrastructure Monitoring Tools | controls, monitoring, Windows, Linux | 618 |
The Online System of the LHCb experiment at CERN is composed of a very large number of PCs: around 1500 in a CPU farm performing the High Level Trigger; around 170 for the control system, running the PVSS SCADA system; and several others performing data monitoring, reconstruction, storage and infrastructure tasks such as databases. Some PCs run Linux and some run Windows, but all of them need to be remotely controlled and monitored to make sure they are running correctly and to be able, for example, to reboot them whenever necessary. A set of tools was developed in order to centrally monitor the status of all PCs and PVSS projects needed to run the experiment: a Farm Monitoring and Control (FMC) tool, which provides the lower-level access to the PCs, and a System Overview Tool (developed within the Joint Controls Project, JCOP), which provides a centralized interface to the FMC tool and adds PVSS project monitoring and control. The implementation of these tools has provided a reliable and efficient way to manage the system, both during normal operations and during shutdowns, upgrades or maintenance. This paper presents the particular implementation of these tools in the LHCb experiment and the benefits of their usage in a large-scale heterogeneous system.
Slides WEBHAUST01 [3.211 MB]
WEPKN002 | Tango Control System Management Tool | controls, TANGO, device-server, database | 713 |
Tango is an object-oriented control system toolkit based on CORBA, initially developed at the ESRF. It is now also developed and used by Soleil, Elettra, Alba, DESY, MAX Lab, FRM II and other laboratories. Tango is conceived as a fully distributed control system: several processes (called servers) run on many different hosts, each server manages one or several Tango classes, and each class can have one or several instances. This poster shows the existing tools to configure, survey and manage a very large number of Tango components.
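The kind of survey such a tool performs can be sketched with PyTango's Database API, which lists every registered server together with the devices and classes it manages; this assumes a running Tango database reachable via TANGO_HOST and is not the tool described in the poster.

```python
# Sketch: enumerate Tango device servers and the devices/classes they manage.
# Assumes a running Tango database (TANGO_HOST) and PyTango.
import tango

db = tango.Database()
for server in db.get_server_list("*").value_string:        # e.g. "TangoTest/test"
    data = db.get_device_class_list(server).value_string   # device, class, device, class, ...
    print(server)
    for device, cls in zip(data[::2], data[1::2]):
        if cls != "DServer":                                # skip the administrative device
            print(f"  {device}  ({cls})")
```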
Poster WEPKN002 [1.982 MB]
WEPKS018 | MstApp, a Rich Client Control Applications Framework at DESY | framework, controls, operation, hardware | 819 |
Funding: Deutsches Elektronen-Synchrotron DESY
The control system for PETRA 3 [1] and its pre-accelerators makes extensive use of rich clients in the control room and on the servers. Most of them are written with the help of a rich-client Java framework, MstApp; they total 106 different console applications and 158 individual server applications. MstApp takes care of many common control system application aspects beyond communication. It provides a common look and feel: core menu items, a color scheme for the standard states of hardware components, and standardized screen sizes and locations. It interfaces with our console application manager (CAM) and displays our communication-link diagnostics tools on demand. MstApp supplies an accelerator context for each application and handles printing, logging, resizing and unexpected application crashes. Thanks to our standardized deployment process, MstApp applications know their individual developers and can even send them e-mails at the press of a button. Furthermore, a concept of different operation modes is implemented: view-only, operating and expert use. Administration of the corresponding rights is done via web access to a database server. Initialization files on a web server are instantiated as Java objects with the help of the Java SE XMLEncoder; data tables are read with the same mechanism. New MstApp applications can easily be created with in-house wizards such as the NewProjectWizard or the DeviceServerWizard. MstApp improves the operator experience, application developer productivity and delivered software quality.
[1] Reinhard Bacher, "Commissioning of the New Control System for the PETRA 3 Accelerator Complex at DESY", Proceedings of ICALEPCS 2009, Kobe, Japan.
Poster WEPKS018 [0.474 MB]
WEPKS021 | EPICS V4 in Python | EPICS, software, controls, data-analysis | 830 |
Funding: Work supported under the auspices of the U.S. Department of Energy under Contract No. DE-AC02-98CH10886 with Brookhaven Science Associates, LLC, and in part by DOE Contract DE-AC02-76SF00515.
A novel design and implementation of EPICS version 4 is under way in Python. EPICS V4 defines an efficient way to describe complex data structures and a data protocol. The current implementations in C++ and Java each have to reinvent the wheel to represent these data structures, whereas in Python the structure can be mapped efficiently onto a NumPy array. This presentation shows performance benchmarks, a comparison between the languages, and the current status.
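The mapping can be sketched with a NumPy structured array; the field layout below is illustrative only and is not the normative pvData type definition.

```python
# Sketch: map an EPICS V4-style structured value onto a NumPy structured array.
# The fields below are illustrative, not the normative pvData definition.
import numpy as np

channel_dtype = np.dtype([
    ("value",            "f8"),
    ("severity",         "i4"),   # alarm severity
    ("status",           "i4"),   # alarm status
    ("secondsPastEpoch", "i8"),   # time stamp
    ("nanoseconds",      "i4"),
])

# One contiguous buffer holds many samples of the structure; field access is a
# cheap view, so no per-sample Python objects are created.
samples = np.zeros(1000, dtype=channel_dtype)
samples["value"] = np.random.normal(size=1000)
print(samples["value"].mean(), samples[0])
```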
WEPKS023 | Further Developments in Generating Type-Safe Messaging | software, target, controls, network | 836 |
Funding: Operated by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the United States Department of Energy.
At ICALEPCS '09 we introduced a source code generator that allows processes to communicate safely using native data types. In this paper we discuss further development that has occurred since the conference in Kobe, Japan, including the addition of three more client languages, an optimization of the network packet size, and the addition of a new protocol data type.
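The idea behind such a generator can be shown with a deliberately tiny Python sketch that turns a declarative message specification into typed pack/unpack code; this only illustrates type-safe message generation and is not the Fermilab generator itself.

```python
# Sketch of type-safe message generation: from a declarative spec, emit source
# whose pack/unpack helpers use native data types. Illustration only.
SPEC = {"name": "ReadingReply", "fields": [("channel", "i"), ("value", "d")]}

def generate(spec):
    """Emit Python source for a typed message with pack/unpack helpers."""
    fmt = "!" + "".join(t for _, t in spec["fields"])          # network byte order
    names = " ".join(n for n, _ in spec["fields"])
    return (
        "import struct, collections\n"
        f'{spec["name"]} = collections.namedtuple("{spec["name"]}", "{names}")\n'
        f'def pack(msg):\n    return struct.pack("{fmt}", *msg)\n'
        f'def unpack(buf):\n    return {spec["name"]}._make(struct.unpack("{fmt}", buf))\n'
    )

generated = {}
exec(generate(SPEC), generated)     # in practice the source is written out per client language
reply = generated["ReadingReply"](channel=7, value=3.14)
assert generated["unpack"](generated["pack"](reply)) == reply
```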
Poster WEPKS023 [3.219 MB]
WEPMN005 | Spiral2 Control Command: a Standardized Interface between High Level Applications and EPICS IOCs | interface, controls, operation, EPICS | 879 |
The SPIRAL2 linear accelerator will produce entirely new particle beams, enabling exploration of the boundaries of matter. Coupled with the existing GANIL machine, this new facility will produce light and heavy exotic nuclei at extremely high intensities. The field deployment of the control system relies on Linux PCs and servers, VME VxWorks crates and Siemens PLCs; equipment will be addressed either directly or through a Modbus/TCP fieldbus network. Several laboratories are involved in the software development of the control system, so, in order to improve the efficiency of the collaboration, special care is taken over the software organization. This makes sense during the development phase, with its tight budget and time constraints, but it also helps us to design a control system that will require as little effort as possible for maintenance and evolution during the exploitation of the new machine. The major concepts of this organization are the choice of EPICS; the definition of an EPICS directory tree specific to SPIRAL2, called "topSP2", which is our reference work area for development, integration and exploitation; and the use of a version control system (SVN) to store and share our developments across the multiple sites of the project. The next concept is the definition of a "standardized interface" between high-level applications programmed in Java and the EPICS databases running in the IOCs. This paper relates the rationale and objectives of this interface, as well as its development cycle from specification with UML diagrams to testing on the actual equipment.
Poster WEPMN005 [0.945 MB]
WEPMS005 | Automated Coverage Tester for the Oracle Archiver of WinCC OA | software, controls, operation, database | 981 |
A large number of control systems at CERN are built with the commercial SCADA tool WinCC OA, covering projects in the experiments, the accelerators and the infrastructure. An important component is the Oracle archiver used for long-term storage of process data (events) and alarms. The archived data provide feedback to operators and experts about how the system was behaving at a particular moment in the past, and a subset of these data is used for offline physics analysis. The consistency of the archived data has to be ensured from writing to reading, as well as throughout updates of the control systems. The complexity of the archiving subsystem comes from the multiplicity of data types, the required performance, and other factors such as the operating system, environment variables or the versions of the different software components; an automatic tester has therefore been implemented to systematically execute test scenarios under different conditions. The tests are based on scripts that are automatically generated from templates and can therefore cover a wide range of software contexts. The tester has been written entirely in the same software environment as the targeted SCADA system. The current implementation handles over 300 test cases, both for events and for alarms, and has made it possible to report issues to the provider of WinCC OA. The template mechanism is flexible enough to adapt the suite of tests to future needs, and the developed tools are generic enough to be used to test other parts of the control systems.
Poster WEPMS005 [0.279 MB]
WEPMU010 | Automatic Analysis at the Commissioning of the LHC Superconducting Electrical Circuits | operation, framework, hardware, GUI | 1073 |
Since the beginning of 2010 the LHC has been operating routinely, starting with a commissioning phase followed by operation for physics. The commissioning of the superconducting electrical circuits requires rigorous test procedures before entering operation, and to maximize the beam operation time of the LHC these tests should be done as fast as the procedures allow. A full commissioning needs 12,000 tests and is required after circuits have been warmed above liquid-nitrogen temperature; below this temperature, after a two-month end-of-year break, commissioning needs about 6,000 tests. Because the manual analysis of the tests takes a major part of the commissioning time, we have automated the existing analysis tools. We present the way in which these LabVIEW™ applications were automated, evaluate the gain in commissioning time and the reduction in the number of experts needed on night shift during the 2011 LHC hardware commissioning campaign compared to 2010, and end with an outlook on what can be optimized further.
Poster WEPMU010 [3.124 MB]
WEPMU013 | Development of a Machine Protection System for the Superconducting Beam Test Facility at FERMILAB | controls, laser, operation, FPGA | 1084 |
Funding: Operated by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the United States Department of Energy.
Fermilab's Superconducting RF Beam Test Facility, currently under construction, will produce electron beams capable of damaging the acceleration structures and the beam-line vacuum chambers in the event of an aberrant accelerator pulse. The accelerator is being designed to operate with up to 3000 bunches per macro-pulse, a 5 Hz repetition rate and 1.5 GeV beam energy. It will be able to sustain an average beam power of 72 kW at a bunch charge of 3.2 nC. Operation at full intensity would deposit enough energy in the niobium material to approach its melting point of about 2500 °C. In the early phase, with only three cryomodules installed, the facility will be capable of generating electron beam energies of 810 MeV and an average beam power approaching 40 kW. In either case a robust Machine Protection System (MPS) is required to mitigate the effects of such a large damage potential. This paper describes the MPS being developed, the system requirements and the controls issues under consideration.
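The quoted beam-power figures follow directly from the listed parameters; a quick check, using only the numbers given in the abstract:

```python
# Beam-power check from the abstract's parameters: 3.2 nC/bunch,
# 3000 bunches per macro-pulse, 5 Hz repetition rate.
bunch_charge = 3.2e-9          # C
bunches_per_pulse = 3000
rep_rate = 5.0                 # Hz

for energy_eV, label in [(1.5e9, "full energy"), (810e6, "3 cryomodules")]:
    energy_per_bunch = bunch_charge * energy_eV       # J (charge x accelerating voltage)
    avg_power = energy_per_bunch * bunches_per_pulse * rep_rate
    print(f"{label}: {avg_power / 1e3:.1f} kW")
# -> full energy: 72.0 kW, 3 cryomodules: 38.9 kW (the ~40 kW quoted above)
```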
Poster WEPMU013 [0.755 MB]
WEPMU017 | Safety Control System and its Interface to EPICS for the Off-Line Front-End of the SPES Project | controls, EPICS, target, interface | 1093 |
The SPES off-line front-end apparatus involves a number of subsystems and procedures that are potentially dangerous both for human operators and for the equipment; the high-voltage power supply, the ion source complex power supplies, the target chamber handling systems and the laser source are some examples. For this reason a safety control system has been developed. It is based on safety modules of the Schneider Electric Preventa family, which control the power supply of critical subsystems in combination with safety detectors that monitor critical variables. A Programmable Logic Controller (PLC), model BMXP342020 from the Schneider Electric Modicon M340 family, is used to monitor the status of the system and to control the sequence of some operations automatically. A touch screen, model XBTGT5330 from the Schneider Electric Magelis family, is used as the Human-Machine Interface (HMI) and communicates with the PLC using Modbus/TCP. Additionally, an interface to the EPICS control network was developed using a home-made Modbus/TCP EPICS driver, in order to integrate the safety system into the control system of the front-end and to present its status to the users on the main control panel.
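A minimal sketch of a Modbus/TCP holding-register read such as an EPICS driver performs, written with only the Python standard library; the PLC address, unit id and register offsets are hypothetical, and a production driver would add retries, exception handling and data conversion.

```python
# Sketch: read holding registers from a Modbus/TCP PLC (standard library only).
# Host, unit id and register offsets are hypothetical.
import socket
import struct

def read_holding_registers(host, start, count, unit=1, port=502):
    # MBAP header: transaction id, protocol id (0), length, unit id,
    # then PDU: function 0x03 (read holding registers), start address, quantity.
    request = struct.pack(">HHHBBHH", 1, 0, 6, unit, 0x03, start, count)
    with socket.create_connection((host, port), timeout=2.0) as sock:
        sock.sendall(request)
        header = sock.recv(9)                      # MBAP + function code + byte count
        payload = sock.recv(header[8])             # register values, 2 bytes each
    return struct.unpack(">" + "H" * count, payload)

# e.g. poll the PLC status words exposed to the EPICS driver
print(read_holding_registers("192.168.0.10", start=0, count=4))
```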
Poster WEPMU017 [2.847 MB]
WEPMU028 | Development Status of Personnel Protection System for IFMIF/EVEDA Accelerator Prototype | radiation, operation, controls, monitoring | 1126 |
The control system for the IFMIF/EVEDA* accelerator prototype consists of six subsystems: the Central Control System (CCS), the Local Area Network (LAN), the Personnel Protection System (PPS), the Machine Protection System (MPS), the Timing System (TS) and the Local Control System (LCS). The IFMIF/EVEDA accelerator prototype provides a deuteron beam with a power greater than 1 MW, the same as that of J-PARC and SNS. The PPS is required to protect technical and engineering staff against unnecessary radiation exposure, electrical shock hazards and other dangers. The PPS has two functions: building management and accelerator management. For both, Programmable Logic Controllers (PLCs), monitoring cameras, limit switches, etc. are used for the interlock system, and a sequence is programmed for entering and leaving the controlled area. This article presents the PPS design and its interface to each accelerator subsystem in detail.
* International Fusion Material Irradiation Facility / Engineering Validation and Engineering Design Activity
Poster WEPMU028 [1.164 MB]
WEPMU038 | Network Security System and Method for RIBF Control System | controls, network, EPICS, operation | 1161 |
In the RIKEN RI Beam Factory (RIBF), the local area network for the accelerator control system (the control system network) consists of commercially produced Ethernet switches, optical fibers and metal cables, while e-mail and Internet access for tasks unrelated to accelerator operation are provided by the RIKEN virtual LAN (VLAN), which serves as the office network. From the viewpoint of information security, we decided to separate the control system network from the Internet and operate it independently of the VLAN. However, this was inconvenient for the users, because the information and status of accelerator operation could not be monitored from their offices in real time. To improve the situation, we have constructed a secure system which allows users on the VLAN to obtain accelerator information from the control system network, while preventing outsiders from accessing it. To allow access to the inside of the control system network from the VLAN, we set up a reverse proxy server and a firewall. In addition, we implemented a system that sends e-mail security alerts from the control system network to the VLAN. In this contribution we report on this system and its present status in detail.
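A minimal sketch of the e-mail alert path, using only the Python standard library; the relay host name and addresses are hypothetical.

```python
# Sketch: send a security alert from the control system network to a mail
# relay on the office VLAN. Host names and addresses are hypothetical.
import smtplib
from email.message import EmailMessage

def send_alert(subject, body):
    msg = EmailMessage()
    msg["From"] = "ribf-control-alert@example.jp"
    msg["To"] = "operators@example.jp"
    msg["Subject"] = subject
    msg.set_content(body)
    # Relay in the office VLAN that is reachable through the firewall
    with smtplib.SMTP("mail-relay.example.jp", 25, timeout=10) as smtp:
        smtp.send_message(msg)

send_alert("RIBF control network alert",
           "Unexpected access attempt detected on the reverse proxy.")
```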
Poster WEPMU038 [45.776 MB]
THBHAUST01 | SNS Online Display Technologies for EPICS | controls, network, site, EPICS | 1178 |
Funding: SNS is managed by UT-Battelle, LLC, under contract DE-AC05-00OR22725 for the U.S. Department of Energy.
The ubiquity of web clients, from personal computers to cell phones, results in a growing demand for web-based access to control system data. At the Oak Ridge National Laboratory Spallation Neutron Source (SNS) we have investigated different technical approaches for providing read access to data in the Experimental Physics and Industrial Control System (EPICS) from a wide variety of web client devices. We compare them in terms of requirements, performance and ease of maintenance.
Slides THBHAUST01 [3.040 MB]
THBHAUST03 | Purpose and Benefit of Control System Training for Operators | controls, EPICS, hardware, background | 1186 |
The complexity of accelerators is ever increasing, and today it is typical that a large number of feedback loops are implemented, based on sophisticated models that describe the underlying physics. Despite this increased complexity, the machine operators must still effectively monitor and supervise the desired behaviour of the accelerator. This alone is not sufficient: the correct operation of the control system must also be verified, which is not always easy since the structure, design and performance of the control system are usually not visualized and are often hidden from the operator. To deal better with this situation, operators need some knowledge of the control system in order to react properly when problems occur. In this paper we present the Paul Scherrer Institute's approach to operator control system training and discuss its benefits.
Slides THBHAUST03 [4.407 MB]
THBHAUST04 | jddd, a State-of-the-art Solution for Control Panel Development | controls, operation, software, feedback | 1189 |
Software for graphical user interfaces to control systems may be developed as a rich or a thin client. The thin-client approach has the advantage that anyone can create and modify control system panels without specific software programming skills. The Java DOOCS Data Display, jddd, is based on the thin-client interaction model. It provides "Include" components and address inheritance for the creation of generic displays. Wildcard operations and regular-expression filters are used to customize the graphical content at runtime; for example, in a "DynamicList" component the parameters have to be painted only once in edit mode and are then automatically displayed multiple times for all available instances in run mode. This paper describes the benefits of using jddd for control panel design as an alternative to rich-client development.
Slides THBHAUST04 [0.687 MB]