Paper | Title | Other Keywords | Page |
---|---|---|---|
MOPKN017 | From Data Storage towards Decision Making: LHC Technical Data Integration and Analysis | database, monitoring, operation, beam-losses | 131 |
The monitoring of beam conditions, equipment conditions and measurements from the beam instrumentation devices in CERN's Large Hadron Collider (LHC) produces more than 100 GB/day of data. Such a quantity of data is unprecedented in accelerator monitoring, and new developments are needed to access, process, combine and analyse data from different equipment. The Beam Loss Monitoring (BLM) system was one of the most reliable pieces of equipment in the LHC during its 2010 run, issuing beam dumps when the detected losses were above the defined abort thresholds. Furthermore, the BLM system was able to detect and study unexpected losses, requiring intensive offline analysis. This article describes the techniques developed to: access the data produced (~50,000 values/s); access relevant system layout information; and access, combine and display different machine data.
Poster MOPKN017 [0.411 MB]
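As a hedged illustration of the kind of offline analysis described above, the sketch below combines logged loss readings with per-monitor abort thresholds and downsamples the high-rate stream for display; the file names and column schema are purely hypothetical stand-ins for the actual LHC logging service.

```python
# Hypothetical sketch: combining high-rate beam-loss readings with per-monitor
# abort thresholds for offline display. File and column names are illustrative,
# not the actual LHC logging-service schema.
import pandas as pd

# Logged readings: timestamp, monitor name, measured loss
losses = pd.read_csv("blm_losses.csv", parse_dates=["timestamp"])

# Layout/threshold data: monitor name, abort threshold
thresholds = pd.read_csv("blm_thresholds.csv")

# Downsample ~50,000 values/s to one value per second per monitor,
# keeping the worst loss seen in each interval.
peak = (losses.set_index("timestamp")
              .groupby("monitor")["loss"]
              .resample("1s").max()
              .reset_index())

# Combine measurements with layout information and flag candidate dump events.
merged = peak.merge(thresholds, on="monitor")
events = merged[merged["loss"] > merged["threshold"]]
print(events.head())
```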
MOPMS025 | Migration from OPC-DA to OPC-UA | Linux, toolkit, controls, embedded | 374 |
The OPC-DA specification has been a highly successful interoperability standard for process automation since 1996, allowing communication between any compliant components regardless of vendor. CERN relies on OPC-DA server implementations from various third-party vendors, which provide a standard interface to their hardware. The OPC Foundation has finalized the OPC-UA specification, and OPC-UA implementations are now starting to gather momentum. This presentation gives a brief overview of the headline features of OPC-UA, compares it with OPC-DA, and outlines the necessity of migrating away from OPC-DA and the motivation for migrating to OPC-UA. Feedback from research into the availability of tools and testing utilities will be presented, together with a practical overview of what will be required, from a computing perspective, to run OPC-UA clients and servers on the CERN network.
Poster MOPMS025 [1.103 MB]
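For a concrete feel of the client side of such a migration, here is a minimal OPC-UA read using the open-source Python asyncua library; the endpoint URL and node id are placeholders, not a CERN server.

```python
import asyncio
from asyncua import Client  # open-source OPC-UA stack (pip install asyncua)

async def main():
    # Endpoint URL and node id are placeholders for illustration only.
    async with Client(url="opc.tcp://localhost:4840/freeopcua/server/") as client:
        node = client.get_node("ns=2;i=2")   # address-space node to read
        value = await node.read_value()       # OPC-UA read service
        print("current value:", value)

asyncio.run(main())
```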
MOPMS033 | Status, Recent Developments and Perspective of TINE-powered Video System, Release 3 | interface, controls, electron, site | 405 |
Experience has shown that imaging software and hardware installations at accelerator facilities need to be changed, adapted and updated on a semi-permanent basis. On this premise, the component-based core architecture of Video System 3 was founded. In design and implementation, emphasis was, is, and will be put on flexibility, performance, low latency, modularity, interoperability, use of open source, ease of use as well as reuse, good documentation and multi-platform capability. In the last year, a milestone was reached as Video System 3 entered production at PITZ, Hasylab and PETRA III. Since then, the development path has been more strongly influenced by production-level experience and customer feedback. In this contribution, we describe the current status, layout, recent developments and perspective of the Video System. Focus will be put on the integration of recording and playback of video sequences into Archive/DAQ, a standalone installation of the Video System on a notebook, and experience running on 64-bit Windows 7. In addition, new client-side multi-platform GUI/application developments using Java are about to be released. Last but not least, although the implementation of Release 3 is integrated into the TINE control system, it is modular enough that integration into other control systems can be considered.
Slides MOPMS033 [0.254 MB]
Poster MOPMS033 [2.127 MB]
WEBHAUST01 | LHCb Online Infrastructure Monitoring Tools | controls, monitoring, status, Linux | 618 |
The Online System of the LHCb experiment at CERN is composed of a very large number of PCs: around 1500 in a CPU farm performing the High Level Trigger; around 170 for the control system, running the PVSS SCADA system; and several others performing data monitoring, reconstruction, storage and infrastructure tasks such as databases. Some PCs run Linux and some run Windows, but all of them need to be remotely controlled and monitored to make sure they are running correctly and to be able, for example, to reboot them whenever necessary. A set of tools was developed to centrally monitor the status of all PCs and PVSS projects needed to run the experiment: a Farm Monitoring and Control (FMC) tool, which provides the lower-level access to the PCs, and a System Overview Tool (developed within the Joint Controls Project, JCOP), which provides a centralized interface to the FMC tool and adds PVSS project monitoring and control. The implementation of these tools has provided a reliable and efficient way to manage the system, both during normal operations and during shutdowns, upgrades or maintenance. This paper presents the particular implementation of these tools in the LHCb experiment and the benefits of their usage in a large-scale heterogeneous system.
Slides WEBHAUST01 [3.211 MB]
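As a loose illustration of centralized PC monitoring (not the actual FMC or System Overview tools, which are far richer), a minimal liveness poll over a farm might look like the sketch below; the host names are hypothetical.

```python
# Illustrative only: a tiny centralized health poll over a farm of PCs.
# Host names are hypothetical; assumes a Linux "ping" in PATH.
import subprocess

HOSTS = [f"hlt{n:04d}" for n in range(1, 4)]  # e.g. hlt0001..hlt0003

def is_alive(host: str) -> bool:
    """Send one ICMP ping; return True if the host answers within 1 s."""
    result = subprocess.run(["ping", "-c", "1", "-W", "1", host],
                            capture_output=True)
    return result.returncode == 0

for host in HOSTS:
    status = "UP" if is_alive(host) else "DOWN -> investigate / reboot"
    print(f"{host}: {status}")
```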
WEBHAUST02 | Optimizing Infrastructure for Software Testing Using Virtualization | network, software, hardware, distributed | 622 |
Virtualization technology and cloud computing have brought a paradigm shift in the way we utilize, deploy and manage computer resources. They allow fast deployment of multiple operating systems as containers on physical machines, which can be either discarded after use or snapshotted for later re-deployment. At CERN, we have been using virtualization/cloud computing to quickly set up virtual machines with pre-configured software for our developers, enabling them to test/deploy a new version of a software patch for a given application. We have also been using the infrastructure for security analysis of control systems, as virtualization provides a degree of isolation in which control systems such as SCADA systems can be evaluated against simulated network attacks. This paper reports on the techniques used for security analysis, including network configuration/isolation to prevent interference with other systems on the network, and gives an overview of the technologies used to deploy such an infrastructure based on VMware and the OpenNebula cloud management platform.
Slides WEBHAUST02 [2.899 MB]
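The snapshot-then-revert workflow described above can be sketched as follows. The paper's infrastructure is based on VMware and OpenNebula, but for a self-contained illustration this sketch uses the libvirt Python bindings instead, with a placeholder VM name.

```python
# Illustrative snapshot/revert cycle with libvirt (a stand-in for the paper's
# VMware/OpenNebula setup). The VM name and connection URI are placeholders.
import libvirt

SNAPSHOT_XML = """
<domainsnapshot>
  <name>pre-patch</name>
  <description>Clean state before installing the software patch</description>
</domainsnapshot>
"""

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("scada-test-vm")

# Snapshot the clean machine, let the developer test the patch (or run a
# simulated network attack against the isolated VM), then discard the changes.
snap = dom.snapshotCreateXML(SNAPSHOT_XML)
# ... run tests against the VM here ...
dom.revertToSnapshot(snap, 0)
conn.close()
```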
WEMAU010 | Web-based Control Application using WebSocket | controls, GUI, Linux, experiment | 673 |
WebSocket [1] brings asynchronous full-duplex communication between a web-based (i.e. JavaScript-based) application and a web server. WebSocket started as part of the HTML5 standardization but has since been separated from HTML5 and is developed independently. Using WebSocket, it becomes easy to develop platform-independent presentation-layer applications for accelerator and beamline control software. In addition, no application program has to be installed on client computers other than the web browser. WebSocket-based applications communicate with the WebSocket server using simple text-based messages, so WebSocket is applicable to message-based control systems like MADOCA, which was developed for the SPring-8 control system. As a first trial of a WebSocket control application, a simple WebSocket server for the MADOCA control system and a simple motor-control application were successfully built. Using Google Chrome (version 10.x) on Debian/Linux and Windows 7, Opera (version 11.0 beta) on Debian/Linux, and Safari (version 5.0.3) on Mac OS X as clients, the motors could be controlled through the WebSocket-based web application. More complex applications, combining other HTML5 features, are now under development for synchrotron radiation experiments.
[1] http://websocket.org/
Poster WEMAU010 [44.675 MB]
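To make the text-message approach concrete, below is a minimal sketch of a WebSocket server in Python (using the websockets library, version 10.1 or later); the "put ..." command format is invented for illustration and is not the actual MADOCA protocol.

```python
# Minimal text-message WebSocket server in the spirit of the MADOCA trial.
# The command grammar ("put <device> <property> <value>") is hypothetical.
import asyncio
import websockets  # pip install websockets (>= 10.1 for 1-arg handlers)

async def handle(websocket):
    async for message in websocket:
        # e.g. the browser client sends "put motor1 position 120"
        verb, device, *args = message.split()
        # ... forward the command to the underlying control system here ...
        await websocket.send(f"ok {device} {' '.join(args)}")

async def main():
    async with websockets.serve(handle, "localhost", 8765):
        await asyncio.Future()  # run forever

asyncio.run(main())
```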
WEPMS006 | Automated testing of OPC Servers | DSL, software, operation, Domain-Specific-Languages | 985 |
CERN relies on OPC server implementations from third-party device vendors to provide a software interface to their respective hardware. Each time a vendor releases a new OPC server version, it is regression tested internally to verify that existing functionality has not been inadvertently broken while adding new features. In addition, bugs and problems must be communicated to the vendors in a reliable and portable way. This presentation covers the automated test approach used at CERN for both cases: scripts are written in a domain-specific language created specifically for describing OPC tests and are executed by a custom software engine that drives the OPC server implementation.
Poster WEPMS006 [1.384 MB]
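The abstract does not publish the DSL itself, so the following is a toy interpreter for an invented OPC test language, meant only to illustrate the idea of scripted, replayable regression tests executed by an engine.

```python
# Toy interpreter for a hypothetical OPC test DSL. The commands and the
# in-memory "server" are invented; CERN's actual language and engine differ.
SCRIPT = """
connect opc.tcp://localhost:4840
write Pump1.Speed 1500
expect Pump1.Speed 1500
"""

def run(script: str, server: dict) -> None:
    for line in filter(None, map(str.strip, script.splitlines())):
        cmd, *args = line.split()
        if cmd == "connect":
            print(f"connected to {args[0]} (simulated)")
        elif cmd == "write":
            server[args[0]] = float(args[1])      # simulated OPC write
        elif cmd == "expect":
            actual = server.get(args[0])           # simulated OPC read
            assert actual == float(args[1]), f"{args[0]}: {actual} != {args[1]}"
            print(f"PASS {args[0]} == {args[1]}")

run(SCRIPT, server={})
```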
THBHMUST01 | Multi-platform SCADA GUI Regression Testing at CERN | GUI, framework, software, Linux | 1201 |
Funding: CERN
The JCOP Framework is a toolkit used widely at CERN for the development of industrial control systems in several domains (i.e. experiments, accelerators and technical infrastructure). Software development started 10 years ago, and there is now a large base of production systems running it. For the success of the project, it was essential to formalize and automate the quality assurance process. The paper presents the overall testing strategy and describes in detail the mechanisms used for GUI testing. The choice of a commercial tool (Squish) and the architectural features making it appropriate for our multi-platform environment are described. Practical difficulties encountered when using the tool in the CERN context are discussed, as well as how these were addressed. In the light of initial experience, the test code itself has recently been reworked in an object-oriented style to facilitate future maintenance and extension. The paper concludes with a description of our initial steps towards the incorporation of full-blown Continuous Integration (CI) support.
Slides THBHMUST01 [1.878 MB]
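Squish test scripts can be written in Python; the fragment below sketches what such a GUI regression test might look like, with hypothetical object names standing in for entries from a real Squish object map.

```python
# Sketch of a Squish-style GUI regression test in Python. The application
# name and object names are hypothetical; in a real suite they come from
# the Squish object map. startApplication, waitForObject, clickButton,
# type and test.compare are standard Squish script functions.
def main():
    startApplication("pvss-panel-runner")  # assumed AUT name

    # Drive the panel like an operator would: enter a setpoint and apply it.
    type(waitForObject(":SetpointPanel.valueEdit"), "42.0")
    clickButton(waitForObject(":SetpointPanel.applyButton"))

    # Verify the displayed readback against the expected value.
    test.compare(waitForObject(":SetpointPanel.readbackLabel").text, "42.0")
```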
FRAAULT02 | STUXNET and the Impact on Accelerator Control Systems | controls, software, network, hardware | 1285 |
2010 saw wide news coverage of a new kind of computer attack, named "Stuxnet", targeting control systems. Due to its level of sophistication, it is widely acknowledged that this attack marks the very first case of cyber-war by one country against the industrial infrastructure of another, although there is still much speculation about the details. Worse yet, experts recognize that Stuxnet might be just the beginning, and that similar attacks, possibly with much less sophistication but much more collateral damage, can be expected in the years to come. Stuxnet targeted a particular model of the Siemens 400 PLC series. Similar modules are also deployed for accelerator controls, such as the LHC cryogenics and vacuum systems, and for the detector control systems of the LHC experiments. The aim of this presentation is therefore to give an insight into what this new attack does and why it is deemed special. In particular, the potential impact on accelerator and experiment control systems will be discussed, and means of properly protecting against similar attacks will be presented.
Slides FRAAULT02 [8.221 MB]
FRBHMULT05 | Middleware Trends and Market Leaders 2011 | CORBA, controls, network, Linux | 1334 |
The Controls Middleware (CMW) project was launched over ten years ago. Its main goal was to unify the middleware solutions used to operate the CERN accelerators. An important part of the project, the equipment access library RDA, was based on CORBA, an unquestionable standard at the time. RDA became an operational and critical part of the infrastructure, yet the demanding run-time environment revealed some shortcomings of the system. Accumulation of fixes and workarounds led to unnecessary complexity, and RDA became difficult to maintain and extend. CORBA proved to be a cumbersome product rather than a panacea. Fortunately, many new transport frameworks have appeared since then. They boast a better design and support concepts that make them easy to use. Wishing to profit from the new libraries, the CMW team updated the user requirements and, in their terms, investigated potential CORBA substitutes. The process consisted of several phases: a review of middleware solutions belonging to different categories (e.g. data-centric, object-oriented and message-oriented) and their applicability to the communication model of RDA; evaluation of several market-recognized products and promising start-ups; prototyping of typical communication scenarios; testing the libraries against exceptional situations and errors; and verifying that mandatory performance constraints were met. Thanks to this investigation, the team has selected a few libraries that suit their needs better than CORBA. Further prototyping will select the best candidate.
Slides FRBHMULT05 [8.508 MB]
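The abstract names message-oriented middleware as one of the evaluated categories; as an illustration of that category (not necessarily the library the CMW team selected), a ZeroMQ request/reply exchange mimicking a simple RDA-style get could look like this:

```python
# Illustration of the message-oriented middleware category evaluated by CMW:
# a ZeroMQ request/reply pair standing in for a simple RDA-style get.
# The device/property string and reply payload are hypothetical.
import zmq

context = zmq.Context()

# "Server" end, standing in for an equipment front-end.
server = context.socket(zmq.REP)
server.bind("tcp://127.0.0.1:5555")

# "Client" end, standing in for an RDA get call.
client = context.socket(zmq.REQ)
client.connect("tcp://127.0.0.1:5555")

client.send_string("get LHC.BLM/Acquisition")   # hypothetical request
print("server received:", server.recv_string())
server.send_string("loss=0.002")                # hypothetical reply
print("client received:", client.recv_string())
```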