Keyword: framework
Paper Title Other Keywords Page
MOC3O02 PID_TUNE: A PID Autotuning Software Tool on UNICOS CPC controls, cryogenics, PLC, operation 22
 
  • E. Blanco Vinuela, B. Bradu, R. Marti Martinez
    CERN, Geneva, Switzerland
  • R. Mazaeda, L. de Frutos, C. de Prada
    University of Valladolid, Valladolid, Spain
 
  PID (proportional-integral-derivative) is the most widely used feedback control algorithm in the process control industry. Despite its age, its simplicity of deployment and its efficiency on most industrial processes ensure that this technique still has a bright future. One of the biggest challenges in using PID control is finding its parameters, the so-called tuning of the controller. This may be a complex problem, as it depends mostly on the dynamics of the process being controlled. In this paper we propose a tool that provides engineers with a set of PID parameters in an automated way. Several auto-tuning methods, in both open and closed loop, are selectable, and others can be added, as the tool is designed to be flexible. The tool is fully integrated in the UNICOS framework and can be used to tune multiple controllers at the same time.
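  As an illustration of the kind of open-loop method such a tool can offer, the sketch below applies the classic Ziegler-Nichols step-response rules. This is a minimal Python sketch on synthetic data, not the PID_TUNE implementation; the two-point FOPDT estimation and the helper name are illustrative choices.

    # Minimal sketch of one classic open-loop tuning rule (Ziegler-Nichols
    # step response). Not the PID_TUNE code; the data below are synthetic.
    import numpy as np

    def zn_step_tuning(t, y, du):
        """Estimate FOPDT parameters from a step response, apply Z-N rules.
        t, y: time and process output arrays; du: amplitude of the input step."""
        y0, yss = y[0], y[-1]
        K = (yss - y0) / du                      # process static gain
        # two-point method: 28.3% and 63.2% response times
        t28 = t[np.searchsorted(y, y0 + 0.283 * (yss - y0))]
        t63 = t[np.searchsorted(y, y0 + 0.632 * (yss - y0))]
        T = 1.5 * (t63 - t28)                    # time constant estimate
        L = max(t63 - T, 1e-6)                   # dead time estimate
        Kp = 1.2 * T / (K * L)                   # Ziegler-Nichols PID rules
        return {"Kp": Kp, "Ti": 2.0 * L, "Td": 0.5 * L}

    # synthetic first-order-plus-dead-time response for demonstration
    t = np.linspace(0, 50, 2001)
    y = np.where(t < 2.0, 0.0, 3.0 * (1.0 - np.exp(-(t - 2.0) / 8.0)))
    print(zn_step_tuning(t, y, du=1.0))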
Slides: MOC3O02 [2.793 MB]
 
MOD3I01 Bayesian Reliability Model for Beam Permit System of RHIC at BNL hardware, collider, ion, operation 46
 
  • P. Chitnis
    Stony Brook University, Stony Brook, New York, USA
  • K.A. Brown
    BNL, Upton, Long Island, New York, USA
 
  Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-SC0012704 with the U.S. Department of Energy.
Bayesian analysis provides a statistical framework for updating prior knowledge as observational evidence is acquired, and can handle complex and realistic models with flexibility. The Beam Permit System (BPS) of RHIC plays a key role in safeguarding against faults occurring in the collider and hence directly impacts RHIC availability. Earlier, a multistate reliability model* incorporating manufacturer and military-handbook data was developed to study the failure characteristics of the BPS. Over the course of its 15 years of operation, RHIC has accumulated operational failure data. This work aims to integrate the earlier reliability calculations with the operational failure data using Bayesian analysis. This paper discusses the Bayesian inference of BPS reliability using a two-parameter Weibull survival model with unknown scale and shape parameters. As the joint posterior distribution for a Weibull with both parameters unknown is analytically intractable, Markov Chain Monte Carlo with the Metropolis-Hastings algorithm is used to obtain the inference. Selection criteria for the Weibull distribution, the prior density and the hyperparameters are also discussed.
*P. Chitnis et al., 'A Monte Carlo Simulation Approach to the Reliability Modeling of the Beam Permit System of Relativistic Heavy Ion Collider (RHIC) at BNL', Proc. of ICALEPCS'13, San Francisco, CA.
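  For reference, the standard two-parameter Weibull survival model underlying such an analysis has the textbook form below, with scale \lambda and shape k; this is the generic formulation, not necessarily the paper's exact parameterisation or choice of priors.

    f(t \mid \lambda, k) = \frac{k}{\lambda}\left(\frac{t}{\lambda}\right)^{k-1} e^{-(t/\lambda)^{k}},
    \qquad S(t \mid \lambda, k) = e^{-(t/\lambda)^{k}}

    p(\lambda, k \mid \mathcal{D}) \propto \pi(\lambda, k)
    \prod_{i \in \mathrm{failures}} f(t_i \mid \lambda, k)
    \prod_{j \in \mathrm{censored}} S(t_j \mid \lambda, k)

  Because this joint posterior has no closed form, Metropolis-Hastings proposes new (\lambda, k) pairs and accepts each with probability min(1, posterior ratio), yielding samples from the posterior.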
 
Slides: MOD3I01 [3.934 MB]
 
MOD3O04 Introducing the SCRUM Framework as Part of the Product Development Strategy for the ALBA Control System controls, software, operation, synchrotron 60
 
  • G. Cuní, F. Becheri, D. Fernández-Carreiras, Z. Reszela, S. Rubio-Manrique
    ALBA-CELLS Synchrotron, Cerdanyola del Vallès, Spain
 
  At Alba, the Controls Section provides the software needed to operate the accelerators, the beamlines and the peripheral laboratories. It covers a wide range of areas or subsystems, such as vacuum, motion, data acquisition and analysis, graphical interfaces and archiving. Since the installation and commissioning phases, we had been producing software solutions mostly in single-developer projects based on personal criteria. This organization scheme allowed each control engineer to gain expertise in particular areas by being the single contact responsible for developing and delivering products. In order to enrich the designs and improve the quality of the solutions, we have grouped the engineers into teams. The hierarchy of the product backlogs represents the desired features and the known defects in a transparent way. Instead of planning the whole project upfront, we try to design the products incrementally and develop them in short iterations, mitigating the risk of not satisfying the emerging user requirements. This paper describes the introduction of the Scrum framework as the product development strategy in a service-oriented organization like the Computing Division at Alba*.
*D. Fernández-Carreiras et al., 'Using Prince2 and ITIL Practices for Computing Project and Service Management in a Scientific Installation', TUMIB01, Proc. of ICALEPCS'13, San Francisco, CA.
 
Slides: MOD3O04 [2.256 MB]
 
MOPGF024 Testing Framework for the LHC Beam-based Feedback System software, feedback, hardware, real-time 140
 
  • S. Jackson, D. Alves, L. Di Giulio, K. Fuchsberger, B. Kolad, E. Pedersen
    CERN, Geneva, Switzerland
 
  During the first LHC shut-down period, software for the LHC Beam-based Feedback Controller (BFC) and Service Unit (BFSU) was migrated to new 64-bit multi-core hardware and to a new version of CERN's FESA3 real-time framework. This coincided with the transfer of responsibility to a new software team, charged with readying the systems for beam in 2015 as well as maintaining and improving the code-base in the future. In order to facilitate the comprehension of the system's 90,000+ existing lines of code, a new testing framework was developed which not only serves to define the system's functional specification, but also provides acceptance tests for future releases. This paper presents how the BFC and BFSU systems were decoupled from each other as well as from the LHC plant's measurement and correction systems, thus allowing simulation-data-driven instances to be deployed in a test environment. It also describes the resulting Java-based domain-specific language (DSL) which, when employed in JUnit, allows the formation of repeatable acceptance tests.
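  The paper's DSL is Java-based and run in JUnit; purely to illustrate the style of simulation-data-driven acceptance test it enables, here is a hypothetical pytest-flavoured Python analogue in which every class and tolerance is invented.

    # Hypothetical Python/pytest analogue of a simulation-data-driven
    # acceptance test; the paper's actual DSL is Java, executed in JUnit.
    class SimulatedPlant:
        """Stands in for the decoupled LHC measurement/correction systems."""
        def __init__(self, orbit_offset_mm):
            self.orbit_offset_mm = orbit_offset_mm
        def measure_orbit(self):
            return self.orbit_offset_mm

    class FeedbackController:
        """Toy stand-in for the BFC: corrects a constant orbit offset."""
        def __init__(self, plant, gain=0.5):
            self.plant, self.gain = plant, gain
        def iterate(self):
            self.plant.orbit_offset_mm -= self.gain * self.plant.measure_orbit()

    def test_orbit_feedback_converges():
        plant = SimulatedPlant(orbit_offset_mm=2.0)
        bfc = FeedbackController(plant)
        for _ in range(20):                       # 20 feedback iterations
            bfc.iterate()
        assert abs(plant.measure_orbit()) < 0.01  # invented tolerance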
 
MOPGF039 TIP: An Umbrella Application for all SCADA-Based Applications for the CERN Technical Infrastructure controls, laser, operation, interface 184
 
  • F. Varela, Ph. Gayet, P. Golonka, M. Gonzalez-Berges, J. Pache, P. Sollander
    CERN, Geneva, Switzerland
  • L. Goralczyk
    AGH University of Science and Technology, Kraków, Poland
 
  The WinCC Open Architecture (OA) SCADA package and the controls frameworks (UNICOS, JCOP) developed at CERN have been used successfully to implement many critical control systems at CERN. In recent years, the supervision and controls of many technical infrastructure systems (electrical distribution, cooling and ventilation, etc.) were rewritten to use this standard environment. Operators at the Technical Infrastructure desk are forced to switch continuously between the applications that allow them to monitor these systems. The Technical Infrastructure Portal (TIP) was designed, and is being developed, to provide centralized access to all technical infrastructure systems and to extend their functionality by linking to a powerful GIS-based localization system. Furthermore, it provides an environment in which operators can develop views that aggregate data from different sources, such as cooling and electricity.
Poster: MOPGF039 [1.396 MB]
 
MOPGF050 Tango-Kepler Integration at ELI-ALPS TANGO, controls, device-server, database 212
 
  • P. Ács, S. Brockhauser, L.J. Fülöp, V. Hanyecz, M. Kiss, Cs. Koncz, L. Schrettner
    ELI-ALPS, Szeged, Hungary
 
  Funding: The ELI-ALPS project (GOP-1.1.1-12/B-2012-000, GINOP-2.3.6-15-2015-00001) is supported by the European Union and co-financed by the European Regional Development Fund.
ELI-ALPS will provide a wide range of attosecond pulses which will be used for experiments by international research groups. ELI-ALPS will use the TANGO Controls framework to build up the central control system and to integrate the autonomous subsystems for software monitoring and control. Besides a robust, integrated central control system, a flexible and dynamic high-level environment could be beneficial. The envisioned users will come from diverse fields including chemistry, biology, physics and medicine, and most will not have a programming or scripting background. A workflow system, by contrast, provides visual programming facilities in which the logic can be drawn in a form understandable to these users. We have integrated TANGO into the Kepler workflow system because it offers a large set of actors covering all natural-science fields, and because it has the potential to run workflows on HPC or grid resources. We demonstrated the usability of the development with a beamline simulation. The TANGO-Kepler integration provides an easy-to-use environment for the users and can therefore facilitate, for example, the standardization of measurement protocols.
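  At the lowest level, a workflow actor wrapping TANGO reduces to device-proxy calls like the following PyTango sketch; it assumes a reachable TANGO database and the standard TangoTest device, so the names are placeholders rather than ELI-ALPS devices.

    # Minimal PyTango sketch of the kind of call a Kepler actor wraps;
    # assumes a running TANGO host and the standard TangoTest device.
    import tango

    proxy = tango.DeviceProxy("sys/tg_test/1")   # connect to a device
    print(proxy.state())                         # query its state
    value = proxy.read_attribute("double_scalar").value
    proxy.write_attribute("double_scalar_w", 3.14)
    print("double_scalar =", value)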
 
Poster: MOPGF050 [0.668 MB]
 
MOPGF056 Synchronising High-Speed Triggered Image and Meta Data Acquisition for Beamlines EPICS, hardware, data-acquisition, controls 225
 
  • N. De Maio, A.P. Bark, T.M. Cobb, J.A. Thompson
    DLS, Oxfordshire, United Kingdom
 
  High-speed image acquisition is becoming more and more common on beamlines. As experiments increase in complexity, the need to record parameters related to the environment at the same time increases with them. As a result, conventional systems for combining experimental metadata and images often struggle to deliver the speed and precision desirable for the experiment. We describe an integrated solution that addresses those needs, overcoming the performance limitations of PV monitoring by combining hardware triggering of an ADC card, coordination of signals in a Zebra box*, and three instances of areaDetector streaming to HDF5 data. This solution is expected to be appropriate for frame rates ranging from 30 Hz to 1000 Hz, with the limiting factor being the maximum speed of the camera. Conceptually, the individual data streams are arranged in pipelines controlled by a master Zebra box, expecting start/stop signals at one end and producing the data collections at the other. This design ensures efficiency on the acquisition side while allowing easy interaction with higher-level applications on the other.
*T. Cobb, Y. Chernousko, I. Uzun, 'ZEBRA: A Flexible Solution for Controlling Scanning Experiments', Proc. of ICALEPCS'13, San Francisco, CA, http://jacow.org/.
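  To give a flavour of the coordination involved, the sketch below arms three HDF5-writing pipelines and then starts the master Zebra gate over EPICS Channel Access using pyepics; every PV name here is an invented placeholder, not a real beamline PV.

    # Illustrative arming sequence over EPICS Channel Access (pyepics).
    # All PV names below are hypothetical placeholders.
    from epics import caput, caget

    pipelines = ["BL99I-CAM-01", "BL99I-CAM-02", "BL99I-ADC-01"]

    for dev in pipelines:                          # arm each HDF5 file writer
        caput(f"{dev}:HDF:FilePath", "/dls/data/run42", wait=True)
        caput(f"{dev}:HDF:Capture", 1, wait=True)  # start stream-to-HDF5
        caput(f"{dev}:CAM:Acquire", 1)             # wait for hardware triggers

    caput("BL99I-ZEBRA-01:ARM", 1, wait=True)      # master Zebra opens the gate

    # afterwards, check that every pipeline captured the expected frames
    frames = {dev: caget(f"{dev}:HDF:NumCaptured_RBV") for dev in pipelines}
    print(frames)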
 
Poster: MOPGF056 [0.456 MB]
 
MOPGF115 LabVIEW as a New Supervision Solution for Industrial Control Systems controls, LabView, PLC, database 349
 
  • O.Ø. Andreassen, F. Augrandjean, E. Blanco Vinuela, M.F. Gomez De La Cruz, A. Rijllart
    CERN, Geneva, Switzerland
  • D. Abalo Miron
    University of Oviedo, Oviedo, Spain
 
  To shorten the development time of supervision applications, CERN has developed the UNICOS framework, which simplifies the configuration of the front-end devices and of the supervision (SCADA) layer. At CERN the SCADA system of choice is WinCC OA, but for specific projects (small in size, not connected to accelerator operation, or not located at CERN) a more customisable SCADA using LabVIEW is an attractive alternative. Therefore a similar system, called UNICOS in LabVIEW (UiL), has been implemented. It provides a set of highly customisable re-usable components, devices and utilities. Because LabVIEW uses different programming methods from WinCC OA, the tools for automatic instantiation of devices on both the front-end and the supervision layer had to be re-developed, but the configuration files of the devices and of the SCADA can be reused. This paper reports how the implementation was done, describes the first project implemented in UiL, and gives an outlook on other possible applications.
Poster: MOPGF115 [4.417 MB]
 
MOPGF147 Realization of a Concept for Scheduling Parallel Beams in the Settings Management System for FAIR controls, operation, storage-ring, ion 434
 
  • H.C. Hüther, J. Fitzek, R. Müller, A. Schaller
    GSI, Darmstadt, Germany
 
  Approaching the commissioning of CRYRING, the first accelerator to be operated using the new control system for FAIR (Facility for Antiproton and Ion Research), the new settings management system will also be deployed in a production environment for the first time. A major development effort is ongoing to realize the requirements necessary to support accelerator operations at FAIR. The focus is on the pattern concept, which allows controlling the whole facility with its different parallel beams in an integrative way. Being able to utilize central parts of the new control system at CRYRING, before the first FAIR accelerators are commissioned, facilitates an early proof of concept and testing. Concurrently, refactorings and enhancements of the commonly used LSA (LHC Software Architecture) framework are taking place. At CERN, the interface to devices has been redesigned to enhance maintainability and diagnostics capabilities. At GSI, support for polynomials as a native datatype has been implemented; these will be used to represent accelerator settings as well as calibration curves. Besides functional improvements, quality assurance measures are being taken to increase code quality in view of production use.
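  To illustrate the polynomial datatype, a setting such as a magnet calibration curve reduces to a coefficient vector evaluated at run time; a small numpy sketch with invented coefficients (not GSI's actual curves or the LSA API):

    # Sketch of a polynomial as a native setting type: a hypothetical
    # magnet calibration curve mapping current [A] to field [T].
    import numpy as np

    # c0 + c1*I + c2*I^2, coefficients invented for illustration
    calibration = np.polynomial.Polynomial([0.002, 0.0153, -1.2e-6])

    currents = np.array([0.0, 50.0, 100.0, 200.0])
    fields = calibration(currents)        # evaluate curve at setting values
    print(dict(zip(currents, np.round(fields, 4))))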
Poster: MOPGF147 [1.498 MB]
 
TUD3O02 Extreme Light Infrastructure, Beamlines - Control System Architecture for the L1 Laser laser, controls, LabView, software 570
 
  • J. Naylon, P. Bakule, M.A. Drouin, B. Himmel, J. Horáček, M. Horáček, K. Kasl, T. Mazanec, P. Škoda
    ELI-BEAMS, Prague, Czech Republic
  • A. Greer, C. Mayer
    OSL, Cambridge, United Kingdom
  • B. Rus
    Czech Republic Academy of Sciences, Institute of Physics, Prague, Czech Republic
 
  Funding: Work supported by the European Regional Development Fund and the European Social Fund under Operational Programs ECOP and RDIOP.
The ELI-Beamlines facility aims to provide a selection of high-energy and high repetition-rate TW-PW femtosecond lasers driving high intensity XUV/X-ray and accelerated particle secondary sources for applications in materials, medical, nuclear and high-field physics sectors. The highest repetition rate laser in the facility will be the L1 laser, producing 1 kHz, 20 fs laser pulses of 200 mJ energy. This laser is based entirely on picosecond chirped-pulse parametric amplification and solid-state pump lasers. The high repetition rate combined with kW pump powers and advanced technologies calls for a highly automated, reliable and flexible control system. Current progress on the L1 control system is discussed, focussing on the architecture, software and hardware choices. Special attention is given to the LabVIEW-EPICS framework that was developed for the ELI Beamlines lasers. This framework offers comprehensive and scalable EPICS integration while allowing the full range of LabVIEW real-time and FPGA embedded targets to be leveraged in order to provide adaptable, high-performance control and rapid development.
 
Slides: TUD3O02 [3.306 MB]
 
WEA3O03 Towards Building Reusability in Control Systems - a Journey controls, DSL, TANGO, target 593
 
  • P. Patwari, A.S. Banerjee, G. Muralikrishna, S. Roy Chaudhuri
    Tata Research Development and Design Centre, Pune, India
 
  Development of similar systems leads to a strong motivation for reuse. Our involvement with three large experimental physics facilities led us to appreciate this better in the context of the development of their respective monitoring and control (M&C) software. We realized that the approach to enabling reuse follows the onion-skin model, that is, building re-usability into each layer of the solution. The same motivation led us to create a generic M&C architecture through our first collaborative effort, which resulted in a fairly formal M&C domain model. The second collaboration showed us the need for a common vocabulary that could be used across multiple systems to specify domain-specific M&C solutions at higher levels of abstraction, implemented using the generic underlying M&C engine. This resulted in our definition and creation of a domain-specific language for M&C. The third collaboration led us to capture domain knowledge using the common vocabulary, which will substantially further reuse; this idea has already been demonstrated through a preliminary prototype. We discuss what we learned through this journey in this paper.
Slides: WEA3O03 [1.816 MB]
 
WEB3O01 Open Source Contributions and Using OSGi Bundles at Diamond Light Source software, interface, site, controls 598
 
  • M.W. Gerring, A. Ashton, R.D. Walton
    DLS, Oxfordshire, United Kingdom
 
  This paper presents the involvement of Diamond Light Source (DLS) with the open source community and the Eclipse Science Working Group, and how DLS is changing to share software development effort better between groups. The paper explains the move from a product-based to a bundle-based software development process, which lowers reinvention, increases reuse and reduces software development and support costs. It details specific ways in which DLS is engaging with the open source community and changing the way research institutions deliver open source code.
Slides: WEB3O01 [0.940 MB]
 
WEB3O02 quasar - A Generic Framework for Rapid Development of OPC UA Servers controls, interface, toolkit, software 602
 
  • S. Schlenker, B. Farnham, P.P. Nikiel, C.-V. Soare
    CERN, Geneva, Switzerland
  • D. Abalo Miron
    University of Oviedo, Oviedo, Spain
  • V. Filimonov
    PNPI, Gatchina, Leningrad District, Russia
 
  This paper describes a new approach for the generic design and efficient development of OPC Unified Architecture (UA) servers. Development starts with the creation of a design XML file describing an object-oriented information model of the target system or device. Using this model, the framework generates an executable OPC UA server that exposes the address space described in the design, without a single line of code being written, while supporting standalone or embedded platforms. Further, the framework generates skeleton code for the interface logic of the target system or device. This approach allows both novice and expert developers to create servers for the systems they are experts in, while greatly reducing design and development effort compared to developments based on COTS OPC UA toolkits. Higher-level software such as SCADA systems may benefit from using the design description to generate client connectivity configuration and data representation, as well as validation tools. In this contribution, the concept and implementation of this framework are detailed, along with examples of actual production-level usage in the detector control system of the ATLAS experiment at CERN and beyond.
Slides: WEB3O02 [3.906 MB]
 
WEB3O03 Disruptor - Using High Performance, Low Latency Technology in the CERN Control System controls, software, hardware, data-acquisition 606
 
  • M. Gabriel, R. Gorbonosov
    CERN, Geneva, Switzerland
 
  Accelerator control systems process thousands of concurrent events per second, which adds complexity to their implementation. The Disruptor library provides an innovative single-threaded approach, which combines high performance event processing with a simplified software design, implementation and maintenance. This open-source library was originally developed by a financial company to build a low latency trading exchange. In 2014 the high-level control system for CERN experimental areas (CESAR) was renovated. CESAR calculates the states of thousands of devices by processing more than 2500 asynchronous event streams. The Disruptor was used as an event-processing engine. This allowed the code to be greatly simplified by removing the concurrency concerns. This paper discusses the benefits of the programming model encouraged by the Disruptor (simplification of the code base, performance, determinism), the design challenges faced while integrating the Disruptor into CESAR as well as the limitations it implies on the architecture.  
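  The Disruptor itself is a Java library; conceptually, its core is a pre-allocated ring buffer with sequence counters, in which a single writer and a single-threaded consumer never contend on locks. A toy Python rendition of that idea (a conceptual sketch, not the LMAX Disruptor API):

    # Toy single-producer/single-consumer ring buffer illustrating the
    # Disruptor concept: pre-allocated slots, sequence counters, no locks.
    class RingBuffer:
        def __init__(self, size=8):
            assert size & (size - 1) == 0       # power of two: cheap masking
            self.slots = [None] * size
            self.mask = size - 1
            self.write_seq = 0                  # next sequence to publish
            self.read_seq = 0                   # next sequence to consume

        def publish(self, event):
            if self.write_seq - self.read_seq == len(self.slots):
                raise RuntimeError("buffer full: consumer is behind")
            self.slots[self.write_seq & self.mask] = event
            self.write_seq += 1

        def consume(self, handler):
            while self.read_seq < self.write_seq:   # batch up to the cursor
                handler(self.slots[self.read_seq & self.mask])
                self.read_seq += 1

    rb = RingBuffer()
    for i in range(5):
        rb.publish({"device": f"dev{i}", "state": "ON"})
    rb.consume(lambda e: print("state update:", e))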
Slides: WEB3O03 [0.954 MB]
 
WEB3O04 Accelerator Modelling and Message Logging with ZeroMQ controls, CORBA, database, GUI 610
 
  • J.T.M. Chrin, M. Aiba, A. Rawat, Z. Wang
    PSI, Villigen PSI, Switzerland
 
  ZeroMQ is an emerging message-oriented middleware architecture that is being increasingly adopted in the software engineering of distributed control and data acquisition systems within the accelerator community. The rich array of built-in core messaging patterns may, however, be equally applied within the domain of high-level applications, where the seamless integration of accelerator models and message logging capabilities respectively serves to extend the effectiveness of beam dynamics applications and allows for their monitoring. Various advanced patterns that include intermediaries and proxies further provide for reliable service-oriented brokers, as may be required in real-world operations. A report on an investigation into ZeroMQ's suitability for integrating key distributed components into high-level applications, and the experience gained, are presented.
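  For a feel of the built-in patterns involved, the pyzmq sketch below implements the simplest of them, publish/subscribe, within a single process; the endpoint, topic and payload are arbitrary examples.

    # Minimal pyzmq PUB/SUB example (single process; endpoint arbitrary).
    import time
    import zmq

    ctx = zmq.Context.instance()
    pub = ctx.socket(zmq.PUB)
    pub.bind("tcp://127.0.0.1:5556")

    sub = ctx.socket(zmq.SUB)
    sub.connect("tcp://127.0.0.1:5556")
    sub.setsockopt_string(zmq.SUBSCRIBE, "model")   # topic filter

    time.sleep(0.2)                   # let the subscription propagate
    pub.send_string("model twiss-updated beta_x=17.3")
    topic, payload = sub.recv_string().split(" ", 1)
    print(topic, "->", payload)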
Slides: WEB3O04 [3.542 MB]
 
WEB3O05 Why Semantics Matter: a Demonstration on Knowledge-Based Control System Design software, PLC, controls, DSL 615
 
  • W. Pessemier, G. Deconinck, G. Raskin, P. Saey, H. Van Winckel
    KU Leuven, Leuven, Belgium
 
  Knowledge representation and reasoning are hot topics in academia and industry today, as they are enabling technologies for building the more complex and intelligent systems of the future. At the Mercator Telescope, we've built a software framework based on these technologies to support the design of our control systems. At the heart of the framework is a metamodel: a set of ontologies based on the formal semantics of the Web Ontology Language (OWL), providing meaningful reusable building blocks. Those building blocks are instantiated in the models of our control systems via a Domain Specific Language (DSL). The metamodels and models jointly form a knowledge base, i.e. an integrated model that can be viewed from different perspectives or processed by an inference engine for model verification purposes. In this paper we present a tool called OntoManager, which demonstrates the added value of semantic modeling to the engineering process. By querying the integrated model, our web-based tool is able to generate systems-engineering views, verification test reports, graphical software models, PLCopen-compliant software code, Python client-side code, and much more, in a user-friendly way.
Slides: WEB3O05 [10.408 MB]
 
WED3O02 Databroker: An Interface for NSLS-II Data Management System experiment, interface, detector, data-acquisition 645
 
  • A. Arkilic, D.B. Allan, D. Chabot, L.R. Dalesio, W.K. Lewis
    BNL, Upton, Long Island, New York, USA
 
  Funding: Brookhaven National Lab, U.S. Department of Energy
A typical experiment involves not only the raw data from a detector but also additional data from the beamline. To date, this information has largely been kept separate and manipulated individually. A much more effective approach is to integrate these different data sources and make them easily accessible to data analysis clients. The NSLS-II data flow system contains multiple backends with varying data types. Leveraging the features of these (metadatastore, filestore, channel archiver, and Olog), this library provides users with the ability to access experimental data. The service acts as a single interface for time series, data attributes, frame data access and other experiment-related information.
 
Slides: WED3O02 [2.944 MB]
 
WED3O05 Big Data Analysis and Analytics with MATLAB software, database, data-acquisition, controls 656
 
  • D.S. Willingham
    ASCo, Clayton, Victoria, Australia
 
  Using data analytics to turn large volumes of complex data into actionable information can help improve design and decision-making processes. In today's world, there is an abundance of data being generated from many different sources. However, developing effective analytics and integrating them into existing systems can be challenging. Big data represents an opportunity for analysts and data scientists to gain greater insight and to make more informed decisions, but it also presents a number of challenges. Big data sets may not fit into available memory, may take too long to process, or may stream too quickly to store. Standard algorithms are usually not designed to process big data sets in reasonable amounts of time or memory. There is no single approach to big data, and MATLAB therefore provides a number of tools to tackle these challenges. In this paper two case studies will be presented: 1) manipulating and performing computations on big datasets on lightweight machines, and 2) visualising big, multi-dimensional datasets. Related topics covered include developing predictive models, high-performance computing with clusters, cloud integration, and integration with databases, Hadoop and big-data environments.
Slides: WED3O05 [10.989 MB]
 
WEPGF025 Data Driven Simulation Framework simulation, controls, software, hardware 749
 
  • S. Roy Chaudhuri, A.S. Banerjee, P. Patwari
    Tata Research Development and Design Centre, Pune, India
  • L. Van den Heever
    SKA South Africa, National Research Foundation of South Africa, Cape Town, South Africa
 
  Funding: Tata Research Development and Design Centre, TCSL.
Control systems for radio astronomy projects such as MeerKAT* require testing the functionality of different parts of the telescope even when the system is not fully developed. The use of software simulators in such scenarios is customary. Projects build simulators for subsystems such as dishes, beamformers and so on to ensure the correctness of a) their interface to the control system and b) the logic written to coordinate and configure them. However, such simulators are developed as one-offs, even when they implement similar functionality. This leads to duplicated effort, which impacts large projects such as the Square Kilometre Array**. To mitigate this, we leverage the idea of data-driven software development and conceptualize a simulation framework that reduces the simulator development effort by: 1) capturing all the necessary information through instantiation of a well-defined simulation specification model, and 2) configuring a reusable engine that performs the required simulation functions based on the instantiated and populated model provided to it as input. The results of a proof of concept for such a simulation framework, implemented in the context of the Giant Metrewave Radio Telescope***, are presented.
*MeerKAT CAM Design Description, DNo M1500-0000-006, Rev 2, July 2014
**A.R. Taylor, "The Square Kilometre Array", Proceedings IAU Symposium, 2012
***www.gmrt.ncra.tifr.res.in
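  A data-driven simulator of this kind separates the specification from the engine completely; the Python sketch below shows the idea with an invented specification model and a generic engine (all names and behaviours are hypothetical, not the paper's actual model).

    # Conceptual sketch of a data-driven simulator: a generic engine driven
    # entirely by a declarative specification model (names invented).
    spec = {
        "dish": {
            "commands": {
                "point": {"takes": ["az", "el"], "latency_s": 0.1,
                          "sets": {"state": "POINTING"}},
                "stow":  {"takes": [], "latency_s": 0.05,
                          "sets": {"state": "STOWED"}},
            },
            "initial": {"state": "IDLE"},
        }
    }

    class SimulatedSubsystem:
        """Generic engine: behaviour comes from the spec, not from code."""
        def __init__(self, name, model):
            self.name, self.model = name, model
            self.attrs = dict(model["initial"])

        def execute(self, command, **params):
            cmd = self.model["commands"][command]
            missing = set(cmd["takes"]) - set(params)
            if missing:
                raise ValueError(f"{command} missing params: {missing}")
            self.attrs.update(cmd["sets"])        # apply declared effects
            return f"{self.name}: {command} ok ({cmd['latency_s']}s simulated)"

    dish = SimulatedSubsystem("dish", spec["dish"])
    print(dish.execute("point", az=120.0, el=45.0), dish.attrs)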
 
Poster: WEPGF025 [0.676 MB]
 
WEPGF041 Monitoring Mixed-Language Applications with Elastic Search, Logstash and Kibana (ELK) LabView, distributed, interface, network 786
 
  • O.Ø. Andreassen, C. Charrondière, A. De Dios Fuente
    CERN, Geneva, Switzerland
 
  Application logging and system diagnostics is nothing new. Ever since the first computers, scientists and engineers have been storing information about their systems, making it easier to understand what is going on and, in case of failures, what went wrong. Unfortunately there are as many different standards as there are file formats, storage types, locations, operating systems, etc. Recent developments in web technology and storage have made it much simpler to gather all the different information in one place and dynamically adapt the display. With the introduction of Logstash, with Elasticsearch as a backend, we store, index and query data, making it possible to display and manipulate it in whatever form one wishes. With Kibana as a generic and modern web interface on top, the information can be adapted at will. In this paper we show how we can process almost any type of structured or unstructured data source. We also show how data can be visualised and customised on a per-user basis and how the system scales when the data volume grows.
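  For illustration, shipping a structured log record into Elasticsearch and querying it back can be done directly with the official Python client (8.x-style API shown); the index name, host and fields below are invented.

    # Sketch: index a structured log record in Elasticsearch and query it
    # back. Assumes a local node; index and field names are invented.
    from datetime import datetime, timezone
    from elasticsearch import Elasticsearch

    es = Elasticsearch(["http://localhost:9200"])

    record = {
        "@timestamp": datetime.now(timezone.utc).isoformat(),
        "host": "cs-ccr-labview1",
        "level": "ERROR",
        "message": "DAQ loop overrun: iteration took 120 ms",
    }
    es.index(index="app-logs-2015.10", document=record)

    # Kibana-style query: recent errors from that host
    hits = es.search(index="app-logs-*", query={
        "bool": {"must": [{"match": {"level": "ERROR"}},
                          {"match": {"host": "cs-ccr-labview1"}}]}
    }, size=5)
    print(hits["hits"]["total"])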
Poster: WEPGF041 [3.848 MB]
 
WEPGF042 Scalable Web Broadcasting for Historical Industrial Control Data database, software, controls, interface 790
 
  • B. Copy, O.O. Andreassen, Ph. Gayet, M. Labrenz, H. Milcent, F. Piccinelli
    CERN, Geneva, Switzerland
 
  With the widespread use of asynchronous web communication mechanisms like WebSockets and WebRTC, it has now become possible to distribute industrial controls data originating in field devices or SCADA software in a scalable and event-based manner to a large number of web clients, in the form of rich interactive visualizations. There is, however, no simple, secure and performant way yet to query large amounts of aggregated historical data. This paper presents the implementation of a tool able to make massive quantities of pre-indexed historical data stored in Elasticsearch available to a large number of web-based consumers through asynchronous web protocols. It also presents a simple, OpenSocial-based dashboard architecture that allows users to configure and organize rich data visualizations (based on the Highcharts JavaScript libraries) and create navigation flows in a responsive, mobile-friendly user interface. Such techniques are used at CERN to display interactive reports about the status of the LHC infrastructure (e.g. vacuum or cryogenics installations) and to give access to fine-grained historical data stored in the LHC Logging database in a matter of seconds.

 
Poster: WEPGF042 [1.056 MB]
 
WEPGF046 Towards a Second Generation Data Analysis Framework for LHC Transient Data Recording data-analysis, operation, software, hardware 802
 
  • S. Boychenko, C. Aguilera-Padilla, M. Dragu, M.A. Galilée, J.C. Garnier, M. Koza, K.H. Krol, R. Orlandi, M.C. Poeschl, T.M. Ribeiro, K.S. Stamos, M. Zerlauth
    CERN, Geneva, Switzerland
  • M. Zenha-Rela
    University of Coimbra, Coimbra, Portugal
 
  During the last two years, CERN's Large Hadron Collider (LHC) and most of its equipment systems were upgraded to collide particles at twice the energy of the first operational period between 2010 and 2013. System upgrades and the increased machine energy represent new challenges for the analysis of transient data recordings, which have to be both dependable and fast. With the LHC having operated for many years already, statistical and trend analysis across the collected data sets is a growing requirement, highlighting several constraints and limitations imposed by the current software and data storage ecosystem. Based on several analysis use-cases, this paper highlights the most important aspects of, and ideas towards, an improved second-generation data analysis framework to serve a large variety of equipment experts and operation crews in their daily work.
Poster: WEPGF046 [0.501 MB]
 
WEPGF047 Smooth Migration of CERN Post Mortem Service to a Horizontally Scalable Service controls, distributed, dumping, operation 806
 
  • J.C. Garnier, C. Aguilera-Padilla, S. Boychenko, M. Dragu, M.A. Galilée, M. Koza, K.H. Krol, T. Martins Ribeiro, R. Orlandi, M.C. Poeschl, M. Zerlauth
    CERN, Geneva, Switzerland
 
  The Post Mortem service for CERN's accelerator complex stores and analyses transient data recordings of various equipment systems following certain events, like a beam dump or magnet quenches. The main purpose of this framework is to provide fast and reliable diagnostics to the equipment experts and operation crews, to decide whether accelerator operation can continue safely or whether an intervention is required. While the Post Mortem system was initially designed to serve CERN's Large Hadron Collider (LHC), its scope has rapidly been extended to also include External Post-Operational Checks and Injection Quality Checks in the LHC and its injector complex. These new use cases impose more stringent time-constraints on the storage and analysis of data, calling for a migration of the system towards better scalability in terms of storage capacity as well as I/O throughput. This paper presents an overview of the current service, the ongoing investigations and plans towards a scalable data storage solution and API, as well as the proposed strategy to ensure an entirely smooth transition for the current Post Mortem users.
Poster: WEPGF047 [1.454 MB]
 
WEPGF050 Integrated Detector Control and Calibration Processing at the European XFEL detector, controls, photon, software 814
 
  • A. Münnich, S. Hauf, B.C. Heisen, F. Januschek, M. Kuster, P.M. Lang, N. Raab, T. Rüter, J. Sztuk, M. Turcato
    XFEL. EU, Hamburg, Germany
 
  The European X-ray Free Electron Laser is a high-intensity X-ray light source currently being constructed in the Hamburg area, which will provide spatially coherent X-rays in the energy range between 0.25 keV and 25 keV. The machine will deliver 10 trains/s, each consisting of up to 2700 pulses at a 4.5 MHz repetition rate. The LPD, DSSC and AGIPD detectors are being developed to provide high dynamic-range Mpixel imaging capabilities at these repetition rates. A consequence of these detector characteristics is that they generate raw data volumes of up to 15 Gbyte/s. In addition, the detectors' on-sensor memory-cell and multi-/non-linear gain architectures pose unique challenges in data correction and calibration, requiring online access to operating conditions and control settings. We present how these challenges are addressed within XFEL's control and analysis framework Karabo, which integrates access to hardware conditions, acquisition settings (also using macros) and distributed computing. Implementation of the control and calibration software is mainly in Python, using self-optimizing (py)CUDA code, numpy and IPython parallel to achieve near-real-time performance for the calibration application.
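  The core of such a calibration is a per-memory-cell dark-offset and gain correction; the numpy sketch below shows the arithmetic with illustrative shapes and random constants (in production this kernel would run as (py)CUDA, as the abstract notes).

    # Sketch of per-memory-cell offset/gain correction; shapes and
    # constants are illustrative only, not real detector calibrations.
    import numpy as np

    cells, ny, nx = 30, 512, 128          # memory cells x sensor pixels
    rng = np.random.default_rng(0)

    raw    = rng.integers(1000, 16000, size=(cells, ny, nx)).astype(np.float32)
    offset = rng.normal(1000.0, 20.0, size=(cells, ny, nx)).astype(np.float32)
    gain   = rng.normal(8.0, 0.3, size=(cells, ny, nx)).astype(np.float32)

    corrected = (raw - offset) / gain     # signal units, cell by cell
    print(corrected.shape, float(corrected.mean()))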
Poster: WEPGF050 [3.429 MB]
 
WEPGF062 Processing High-Bandwidth Bunch-by-Bunch Observation Data from the RF and Transverse Damper Systems of the LHC Linux, diagnostics, software, controls 841
 
  • M. Ojeda Sandonís, P. Baudrenghien, A.C. Butterworth, J. Galindo, W. Höfle, T.E. Levens, J.C. Molendijk, D. Valuch
    CERN, Geneva, Switzerland
  • F. Vaga
    University of Pavia, Pavia, Italy
 
  The radiofrequency and transverse damper feedback systems of the Large Hadron Collider digitize beam phase and position measurements at the bunch repetition rate of 40 MHz. Embedded memory buffers allow a few milliseconds of full rate bunch-by-bunch data to be retrieved over the VME bus for diagnostic purposes, but experience during LHC Run I has shown that for beam studies much longer data records are desirable. A new "observation box" diagnostic system is being developed which parasitically captures data streamed directly out of the feedback hardware into a Linux server through an optical fiber link, and permits processing and buffering of full rate data for around one minute. The system will be connected to an LHC-wide trigger network for detection of beam instabilities, which allows efficient capture of signals from the onset of beam instability events. The data will be made available for analysis by client applications through interfaces which are exposed as standard equipment devices within CERN's controls framework. It is also foreseen to perform online Fourier analysis of transverse position data inside the observation box using GPUs with the aim of extracting betatron tune signals.  
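  As a sketch of the foreseen online tune analysis, a Fourier transform of turn-by-turn position data peaks at the fractional betatron tune; the data here are synthetic and numpy stands in for the GPU pipeline.

    # Sketch: extract a betatron tune from turn-by-turn position data
    # with an FFT; synthetic signal with true tune q = 0.31.
    import numpy as np

    turns = 4096
    q_true = 0.31                          # fractional tune
    n = np.arange(turns)
    pos = (np.cos(2 * np.pi * q_true * n)
           + 0.2 * np.random.default_rng(1).normal(size=turns))

    spectrum = np.abs(np.fft.rfft(pos * np.hanning(turns)))
    freqs = np.fft.rfftfreq(turns, d=1.0)  # in tune units (per turn)
    print("estimated tune:", freqs[spectrum.argmax()])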
Poster: WEPGF062 [4.412 MB]
 
WEPGF074 FPGA Firmware Framework for MTCA.4 AMC Modules interface, hardware, FPGA, LLRF 876
 
  • Ł. Butkowski, T. Kozak, B.Y. Yang
    DESY, Hamburg, Germany
  • P. Prędki
    TUL-DMCS, Łódź, Poland
  • R. Rybaniec
    Warsaw University of Technology, Institute of Electronic Systems, Warsaw, Poland
 
  Many of the modules in specific hardware architectures use the same or similar communication interfaces and I/O connectors. MicroTCA (MTCA.4) is one example of such a case. All boards communicate with the central processing unit (CPU) over PCI Express (PCIe), send data to each other using Multi-Gigabit Transceivers (MGT), use the same backplane resources, and have the same Zone3 I/O or FPGA mezzanine card (FMC) connectors. All those interfaces are connected and implemented in Field Programmable Gate Array (FPGA) chips, which makes it possible to separate the interface logic from the application logic. This structure allows firmware developed for one application to be reused, new applications to be created on the same module, and already-developed code to be reused on new boards as a library. A proper structure thus makes it easy to create new firmware. This paper presents the structure of the firmware framework and scripting ideas to speed up firmware development for the MTCA.4 architecture. The European XFEL control system firmware, which uses the described framework, is presented as an example.
Poster: WEPGF074 [0.706 MB]
 
WEPGF092 PLCverif: A Tool to Verify PLC Programs Based on Model Checking Techniques PLC, software, controls, background 911
 
  • D. Darvas, E. Blanco Vinuela, B. Fernández Adiego
    CERN, Geneva, Switzerland
 
  Model checking is a promising formal verification method to complement testing in order to improve the quality of PLC programs. However, its application typically needs deep expertise in formal methods. To overcome this problem, we introduce PLCverif, a tool that builds on our verification methodology and hides all the formal verification-related difficulties from the user, including model construction, model reduction and requirement formalisation. The goal of this tool is to make model checking accessible to the developers of the PLC programs. Currently, PLCverif supports the verification of PLC code written in ST (Structured Text), but it is open to other languages defined in IEC 61131-3. The tool can be easily extended by adding new model checkers.  
Poster: WEPGF092 [3.550 MB]
 
WEPGF094 A Modular Approach to Develop Standardized HVAC Control Systems with UNICOS CPC Framework controls, PLC, site, operation 919
 
  • W. Booth, R. Barillère, M. Bes, E. Blanco Vinuela, B. Bradu, M. Quilichini, M.Z. Zimny
    CERN, Geneva, Switzerland
 
  At CERN there are currently about 200 ventilation air handling units in production, used in many different applications, including building ventilation, pressurization of safe rooms, smoke extraction, pulsion/extraction of experimental areas (tunnel, cavern, etc.), and the ventilation of the computing centre. The PLC applications which operate these installations are currently being revamped to a new framework (UNICOS CPC). This work began three years ago, and we are now in a position to standardize the development of these HVAC applications, in order to reduce the cost of initial development (including specification and coding), testing, and long-term maintenance of the code. In this paper the various improvements to the process will be discussed and examples shown, which can thus help the community develop HVAC applications. Improvements include templates for the "Functional Analysis" specification document, standardized HVAC devices and templates for the PLC control logic, and automatically generated test documentation to help during the Factory Acceptance Test (FAT) and Site Acceptance Test (SAT) processes.
Poster: WEPGF094 [1.277 MB]
 
WEPGF101 A Modular Software Architecture for Applications that Support Accelerator Commissioning at MedAustron interface, software, database, controls 938
 
  • M. Hager, M. Regodic
    EBG MedAustron, Wr. Neustadt, Austria
 
  The commissioning and operation of an accelerator requires a large set of supportive applications. Especially in the early stages, these tools have to work with unfinished and changing systems. To allow the implementation of applications that are dynamic enough for this environment, a dedicated software architecture, the Operational Application (OpApp) architecture, has been developed at MedAustron. The main ideas of the architecture are a separation of functionality into reusable execution modules and a flexible and intuitive composition of the modules into bigger modules and applications. Execution modules are implemented for the acquisition of beam measurements, the generation of cycle dependent data, the access to a database and other tasks. On this basis, Operational Applications for a wide variety of use cases can be created, from small helper tools to interactive beam commissioning applications with graphical user interfaces. This contribution outlines the OpApp architecture and the implementation of the most frequently used applications.  
Poster: WEPGF101 [2.169 MB]
 
WEPGF116 PvaPy: Python API for EPICS PV Access EPICS, software, interface, monitoring 970
 
  • S. Veseli
    ANL, Argonne, Ilinois, USA
 
  As the number of sites deploying and adopting EPICS Version 4 grows, so does the need to support PV Access from multiple languages. Especially important are the widely used scripting languages that tend to reduce both software development time and the learning curve for new users. In this paper we describe PvaPy, a Python API for the EPICS PV Access protocol and its accompanying structured data API. Rather than implementing the protocol itself in Python, PvaPy wraps the existing EPICS Version 4 C++ libraries using the Boost.Python framework. This approach allows us to benefit from the existing code base and functionality, and to significantly reduce the Python API development effort. PvaPy objects are based on Python dictionaries and provide users with the ability to access even the most complex of PV Data structures in a relatively straightforward way. Its interfaces are easy to use, and include support for advanced EPICS Version 4 features such as implementation of client and server Remote Procedure Calls (RPC).
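  A minimal usage sketch of the API described, assuming PvaPy is installed and a pvAccess server is serving a hypothetical PV named 'demo:counter':

    # Minimal PvaPy get/monitor sketch; the PV name is a placeholder and
    # a reachable pvAccess server is assumed.
    import time
    import pvaccess

    ch = pvaccess.Channel("demo:counter")
    value = ch.get()                      # returns a PvObject
    print(value["value"])                 # dictionary-style field access

    def on_update(pv):                    # monitor callback
        print("update:", pv["value"])

    ch.subscribe("watcher", on_update)
    ch.startMonitor()
    time.sleep(2.0)                       # receive updates for a while
    ch.stopMonitor()
    ch.unsubscribe("watcher")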
Poster: WEPGF116 [0.742 MB]
 
WEPGF118 Use of Tornado in KAT-­7 and MeerKAT Framework software, controls, operation, GUI 977
 
  • C.C.A. de Villiers, B. Xaia
    SKA South Africa, National Research Foundation of South Africa, Cape Town, South Africa
 
  Funding: SKA South Africa, National Research Foundation of South Africa, Department of Science and Technology, 3rd Floor, The Park, Park Road, Pinelands, Cape Town, South Africa, 7405.
The KAT-7 and MeerKAT radio telescope control systems (www.ska.ac.za) are built on a rich Python architecture. At its core, we use KATCP (Karoo Array Telescope Communications Protocol), a text-based protocol that has served the projects very well. KATCP is supported by every device and connected software component in the system. However, its original implementation relied on threads to support asynchronous operations, and this has sometimes complicated the evolution of the software. Since MeerKAT (with 64 dishes) will be much larger and more complex than KAT-7, the Control and Monitoring (CAM) team investigated some alternatives to classical threading. We have adopted Tornado (www.tornadoweb.org) as the asynchronous engine for KATCP. Tornado, popular for Web applications, is built on a robust and very efficient coroutine paradigm that in turn is based on Python's generators. Coroutines avoid the complexity of thread re-entrancy and lifetime management, resulting in cleaner and more maintainable user code. This paper will describe our migration to a Tornado coroutine architecture, highlighting the benefits and some of the pitfalls and implementation challenges we have met.
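  A short sketch of the generator-based coroutine style adopted (2015-era Tornado API); the device request itself is simulated here and the sensor names are invented.

    # Sketch of Tornado's generator-based coroutines: several simulated
    # device requests run concurrently without any threads.
    from tornado import gen, ioloop

    @gen.coroutine
    def request_sensor_value(name):
        yield gen.sleep(0.1)              # stands in for async device I/O
        raise gen.Return({name: 42.0})    # coroutine return, generator style

    @gen.coroutine
    def poll_all():
        # yielding a list of futures runs the requests concurrently
        results = yield [request_sensor_value(f"ant{i}.temp") for i in range(3)]
        print(results)

    ioloop.IOLoop.current().run_sync(poll_all)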
 
Poster: WEPGF118 [6.066 MB]
 
WEPGF135 Using the Vaadin Web Framework for Developing Rich Accelerator Controls User Interfaces controls, GUI, interface, real-time 1025
 
  • K.A. Brown, T. D'Ottavio, W. Fu, S. Nemesure
    BNL, Upton, Long Island, New York, USA
 
  Funding: Work supported by Brookhaven Science Associates, LLC under Contract No. DE-SC0012704 with the U.S. Department of Energy
Applications used for Collider-Accelerator controls at Brookhaven National Laboratory typically run as console-level programs on a Linux operating system. One essential requirement for accelerator controls applications is bidirectional, synchronized I/O data communication. Several new web frameworks (Vaadin, GXT, node.js, etc.) have made it possible to develop web-based accelerator controls applications that provide all the features of console-based UI applications, including bidirectional I/O. Web-based applications give users flexibility by providing an architecture-independent domain for running applications. Security is established by restricting access to users within the local network, while not limiting this access strictly to Linux consoles. Additionally, the web framework provides the opportunity to develop mobile device applications that make it convenient for users to access information while away from the office. This paper explores the feasibility of using the Vaadin web framework for developing UI applications for Collider-Accelerator controls at Brookhaven National Laboratory.
 
Poster: WEPGF135 [0.990 MB]
 
WEPGF137 Adopting and Adapting Control System Studio at Diamond Light Source controls, interface, GUI, Windows 1032
 
  • M.J. Furseman, N.W. Battam, T.M. Cobb, I.J. Gillingham, M.T. Heron, G. Knap, W.A.H. Rogers
    DLS, Oxfordshire, United Kingdom
 
  Since commissioning, Diamond Light Source has used the Extensible Display Manager (EDM) to provide a GUI to its EPICS-based control system. As Linux moves away from X-Windows the future of EDM is uncertain, leading to the evaluation of Control System Studio (CS-Studio) as a replacement. Diamond has a user base accustomed to the interface provided by EDM and an infrastructure designed to launch the multiple windows associated with it. CS-Studio has been adapted to provide an interface that is similar to EDM's while keeping the new features of CS-Studio available. This will allow as simple as possible a transition to be made to using CS-Studio as Diamond's user interface to EPICS. It further opens up the possibility of integrating the control system user interface with those in the Eclipse based GDA and DAWN tools which are used for data acquisition and data analysis at Diamond.  
Poster: WEPGF137 [4.177 MB]
 
WEPGF141 Tools and Procedures for High Quality Technical Infrastructure Monitoring Reference Data at CERN monitoring, database, controls, interface 1036
 
  • R. Martini, M. Bräger, J.L. Salmon, A. Suwalska
    CERN, Geneva, Switzerland
 
  The monitoring of the technical infrastructure at CERN relies on the quality of the definition of numerous and heterogeneous data sources. In 2006, we introduced the MoDESTI* procedure for the Technical Infrastructure Monitoring** (TIM) system to promote data quality. The first step in the data integration process is the standardisation of the declaration of the various data points whether these are alarms, equipment statuses or analogue measurement values. Users declare their data points and can follow their requests, monitoring personnel ensure the infrastructure is adapted to the new data, and control room operators check that the data points are defined in a consistent and intelligible way. Furthermore, rigorous validations are carried out on input data to ensure correctness as well as optimal integration with other computer systems at CERN (maintenance management, geographical viewing tools etc.). We are now redesigning the MoDESTI procedure in order to provide an intuitive and streamlined Web based tool for managing data definition, as well as reducing the time between data point integration requests and implementation. Additionally, we are introducing a Class-Device-Property data definition model, a standard in the CERN accelerator sector, for a more flexible use of the TIM data points.
*MoDESTI: Monitoring Data Entry System for the Technical Infrastructure
**TIM: Technical Infrastructure Monitoring
 
Poster: WEPGF141 [0.509 MB]
 
WEPGF148 Unifying All TANGO Control Services in a Customizable Graphical User Interface controls, TANGO, GUI, interface 1052
 
  • S. Rubio-Manrique, G. Cuní, D. Fernández-Carreiras, C. Pascual-Izarra, D. Roldán
    ALBA-CELLS Synchrotron, Cerdanyola del Vallès, Spain
  • E. Al-Dmour
    MAX-lab, Lund, Sweden
 
  TANGO is a distributed control system with an active community of developers. The community features multiple services, like Archiving or Alarms, with a heterogeneous mix of technologies and look-and-feels that must be integrated into the final user workflow. The Viewer and Commander Control Application (VACCA) was developed on top of Taurus to provide TANGO with the user experience of a commercial SCADA, keeping the advantages of open source. The Taurus GUI application enables scientists to design their own live applications using drag-and-drop from the widget catalog. The VACCA user interface provides a template mechanism for synoptic-driven applications and extends the widget catalog to interact with all the components of the control system (Alarms, Archiving, Databases, Host Administration). The elements of VACCA are described in this paper, as well as its mechanisms to encapsulate all services in a GUI for a specific subsystem (e.g. Vacuum).
Poster: WEPGF148 [1.588 MB]
 
WEPGF152 Time Travel Made Possible at FERMI by the Time-Machine Application database, TANGO, interface, controls 1059
 
  • G. Strangolino, M. Lonza, L. Pivetta
    Elettra-Sincrotrone Trieste S.C.p.A., Basovizza, Italy
 
  The TANGO archiving system HDB++ continuously stores data over time into the historical database. The new time-machine application, a specialization of the extensively used save/restore framework, allows bringing back sets of control system variables to their values at a precise date and time in the past. Given the desired time stamp t0 and a set of TANGO attributes, the values recorded at the most recent date and time preceding or equaling t0 are fetched from the historical database. The user can examine the list of variables with their values before performing a full or partial restoration of the set. The time-machine seamlessly integrates with the well known save/restore application, sharing many of its characteristics and functionalities, such as the matrix-based subset selection, the live difference view and the simple and effective user interface.  
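  The essential query behind such a restore is "the latest archived value at or before t0, per attribute"; the sketch below shows it against an in-memory SQLite table with an illustrative schema, not the actual HDB++ schema or attribute names.

    # Core time-machine query on an illustrative (not HDB++) schema:
    # latest archived value at or before t0 for one attribute.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE hist (attr TEXT, ts TEXT, value REAL)")
    db.executemany("INSERT INTO hist VALUES (?, ?, ?)", [
        ("sr/ps/bend/current", "2015-10-17 09:00:00", 298.5),
        ("sr/ps/bend/current", "2015-10-17 11:30:00", 301.2),
        ("sr/ps/bend/current", "2015-10-18 08:00:00", 299.9),
    ])

    t0 = "2015-10-17 12:00:00"
    row = db.execute(
        "SELECT value FROM hist WHERE attr = ? AND ts <= ? "
        "ORDER BY ts DESC LIMIT 1",
        ("sr/ps/bend/current", t0),
    ).fetchone()
    print("restore value at", t0, "->", row[0])   # 301.2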
Poster: WEPGF152 [0.445 MB]
 
THHA2O02 The LANSCE FPGA Embedded Signal Processing Framework FPGA, software, hardware, interface 1079
 
  • J.O. Hill
    LANL, Los Alamos, New Mexico, USA
 
  Funding: Work supported by US Department of Energy under contract DE-AC52-06NA25396.
During the replacement of some LANSCE LINAC instrumentation systems, a common architecture for timing-system-synchronized embedded signal processing systems was developed. The design follows the trend of increasing levels of electronics system integration; a single commercial-off-the-shelf (COTS) board assumes the roles of analog-to-digital conversion and advanced signal processing while also providing the LAN-attached EPICS IOC functionality. These systems are based on agile FPGA-based COTS VITA VPX boards with a VITA FMC mezzanine site. The signal processing is primarily developed at a high level, specifying numeric algorithms in software source code to be integrated together with COTS signal processing intellectual property components for synthesis of hardware implementations. This paper discusses the requirements, the decision to select the VPX and FMC industry standards, the benefits and costs of integrating multi-vendor COTS components, the design of some of the signal processing algorithms, and the benefits and costs of embedding the EPICS IOC within an FPGA.
 
Slides: THHA2O02 [2.113 MB]
 
THHC3O01 The MeerKAT Graphical User Interface Technology Stack interface, controls, GUI, monitoring 1134
 
  • M. Alberts, F. Joubert
    SKA South Africa, National Research Foundation of South Africa, Cape Town, South Africa
 
  Funding: SKA South Africa, National Research Foundation of South Africa, Department of Science and Technology, 3rd Floor, The Park, Park Road, Pinelands, Cape Town, South Africa, 7405, +27 21 506 7300.
The South African MeerKAT radio telescope, currently being built some 90 km outside the small Northern Cape town of Carnarvon, is a precursor to the Square Kilometre Array (SKA) telescope and will be integrated into the mid-frequency component of SKA Phase 1. Providing the graphical user interface (GUI) for MeerKAT required a reassessment of currently employed technologies, with a strong focus on leveraging modern user interface technologies and design techniques. An extensive investigation was performed to evaluate and assess potential GUI technologies and frameworks. This investigative study identified a responsive web application for the frontend and an asynchronous web server for the backend. In particular, the AngularJS framework, used in combination with Material Design principles, WebSockets and other popular JavaScript layout and imaging libraries such as D3.js, proved an ideal fit for the requirements of the MeerKAT GUI frontend. This paper provides a summary of the user interface technology investigation and further expounds on the whole technology stack adopted to provide a modern user interface with real-time capabilities.
 
Slides: THHC3O01 [10.206 MB]
 
THHC3O05 National Ignition Facility (NIF) Experiment Interface Consolidation and Simplification to Support Operational User Facility experiment, software, hardware, site 1143
 
  • A.D. Casey, E.J. Bond, B.A. Conrad, M.S. Hutton, P.D. Reisdorf, S.M. Reisdorf
    LLNL, Livermore, California, USA
 
  Funding: This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344
The National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory is a 192-beam, 1.8 MJ ultraviolet laser system designed to support high-energy-density science. NIF can create extreme states of matter, including temperatures of 100 million degrees and pressures that exceed 100 billion times that of Earth's atmosphere. At these temperatures and pressures, scientists explore the physics of planetary interiors, supernovae, black holes and thermonuclear burn. In the past year, NIF has transitioned to an operational facility, and significant focus has been placed on how users interact with the experimental tools. The current toolset was developed with a view to commissioning the NIF and thus allows flexibility that most users do not require. The goals of this effort include enhancing NIF's external website, easier proposal entry, reducing both the amount and frequency of data the users have to enter, and simplifying user interactions with the tools while reducing the reliance on custom software. This paper discusses the strategies adopted to meet these goals, highlights some of the user tool improvements that have been implemented, and outlines planned future directions for the toolset.
 
Slides: THHC3O05 [3.167 MB]