Keyword: experiment
Paper Title Other Keywords Page
MOC3O01 Comprehensive Fill Pattern Control Engine: Key to Top-Up Operation Quality injection, controls, operation, radiation 18
 
  • T. Birke, F. Falkenstern, R. Müller, A. Schälicke
    HZB, Berlin, Germany
 
  Funding: Work supported by BMBF and Land Berlin.
At the light source BESSY II, numerous experiments as well as machine development studies benefit from a very flexible and stable fill pattern: the standard operation mode comprises a multibunch train for the average user, a purity-controlled, high-current camshaft bunch in a variable-length ion-clearing gap for pump/probe experiments and a mechanical pulse-picking chopper, three high-current bunches for femtosecond slicing opposite the gap, and a specific bunch close to the end of the ion-clearing gap for resonant-excitation pulse picking. The fill pattern generator and control software is based on a state machine. It controls the full chain, from gun timing, linac pulse trains, and injection and extraction elements, to next-shot predictions that allow the next DAQ cycle to be triggered. The architecture and interplay of the software components are described, along with the implemented functionality with respect to hardware control, performance surveillance, reasoning about next actions, and radiation protection requirements.
 
Slides MOC3O01 [3.692 MB]
 
MOC3O06 The Laser Megajoule Facility: The Computational System PARC simulation, laser, software, interface 38
 
  • S. Vermersch
    CEA, LE BARP cedex, France
 
The Laser MegaJoule (LMJ) is a 176-beam laser facility located at the CEA CESTA Laboratory near Bordeaux (France). It is designed to deliver about 1.4 MJ of energy to targets, for high-energy-density physics experiments, including fusion experiments. The assembly of the first line of amplification (8 beams) was achieved in October 2014. A computational system, PARC, has been developed and is being deployed to automate the laser setup process and accurately predict the laser energy and temporal shape. PARC is based on the computer simulation code MIRO. For each LMJ shot, PARC determines the characteristics of the laser injection system required to achieve the desired main-laser output, provides the parameter checking needed for all equipment protections, determines the required diagnostic setup, and supplies post-shot data analysis and reporting. This paper presents the first results provided by PARC. It also describes results obtained with the PARC demonstrator during the first experiments conducted on the LMJ facility.
Slides MOC3O06 [4.985 MB]
 
MOM305 Control System for a Dedicated Accelerator for SACLA Wide-Band Beam Line controls, electron, operation, database 74
 
  • N. Hosoda, T. Fukui
    RIKEN SPring-8 Center, Innovative Light Sources Division, Hyogo, Japan
  • M. Ishii
    JASRI/SPring-8, Hyogo-ken, Japan
  • T. Ohshima, T. Sakurai, H. Takebe
    RIKEN SPring-8 Center, Sayo-cho, Sayo-gun, Hyogo, Japan
 
This paper reports on the control system for the dedicated accelerator of the SACLA wide-band beam line (BL1): its requirements, construction strategies, and present status. In the upgrade plan for SACLA BL1, it was decided to move the SCSS test accelerator, which operated from 2005 to 2013, to the upstream end of BL1 in the undulator hall. The control system of the accelerator had to operate seamlessly with SACLA, reuse old components as much as possible, and avoid stopping SACLA user experiments during start-up. The system was constructed with MADOCA, which is already used at SACLA. Among the control components, VME optical DIO cards and chassis for magnet power supplies were reused after cleaning and checking that there was no degradation of quality. The RF conditioning of the accelerator started in October 2014, while SACLA user experiments were going on. A data collection system, myCC, was prepared, with a MADOCA-compatible interface and a database independent from SACLA. It enabled an efficient start-up, and after sufficient debugging the data collection was successfully merged into SACLA in January 2015. Beam commissioning of the accelerator is planned for autumn 2015.
Slides MOM305 [0.969 MB]
Poster MOM305 [0.368 MB]
 
MOPGF016 Improving the Compact Muon Solenoid Electromagnetic Calorimeter Control and Safety Systems for the Large Hadron Collider Run 2 detector, controls, hardware, software 117
 
  • D.R.S. Di Calafiori, G. Dissertori, L. Djambazov, O. Holme, W. Lustermann
    ETH, Zurich, Switzerland
  • P. Adzic, P. Cirkovic, D. Jovanovic
    VINCA, Belgrade, Serbia
  • S. Zelepoukine
    UW-Madison/PD, Madison, Wisconsin, USA
 
  Funding: Swiss National Science Foundation (SNSF); Ministry of Education, Science and Technological Development of Serbia
The first long shutdown of the Large Hadron Collider (LS1, 2013-2015) provided an opportunity for significant upgrades of the detector control and safety systems of the CMS Electromagnetic Calorimeter. A thorough evaluation was undertaken, building upon experience acquired during several years of detector operations. Substantial improvements were made to the monitoring systems in order to extend readout ranges and provide improved monitoring precision and data reliability. Additional remotely controlled hardware devices and automatic software routines were implemented to optimize the detector recovery time in the case of failures. The safety system was prepared in order to guarantee full support for both commercial off-the-shelf and custom hardware components throughout the next accelerator running period. The software applications were modified to operate on redundant host servers, to fulfil new requirements of the experiment. User interface extensions were also added to provide a more complete overview of the control system. This paper summarises the motivation, implementation and validation of the major improvements made to the hardware and software components during the LS1 and the early data-taking period of LHC Run 2.
 
Poster MOPGF016 [2.512 MB]
 
MOPGF020 Detector and Run Control Systems for the NA62 Fixed-Target Experiment at CERN controls, hardware, operation, detector 125
 
  • P. Golonka, R. Fantechi, M. Gonzalez-Berges, F. Varela
    CERN, Geneva, Switzerland
  • V. Falaleev
    JINR, Dubna, Moscow Region, Russia
  • N. Lurkin
    Birmingham University, Birmingham, United Kingdom
  • R.F. Page
    University of Bristol, Bristol, United Kingdom
 
The Detector and Run Control systems for the NA62 experiment, which started physics data-taking in the autumn of 2014, were designed, developed and deployed in collaboration between the Physics and Engineering Departments at CERN. Based on the commonly used control frameworks UNICOS and JCOP, they were developed with scarce manpower while meeting the challenges of extreme agility, evolving requirements, and the integration of new types of hardware. This paper presents, for the first time, the architecture of these systems and discusses the challenges and experience in developing and maintaining them during the first months of operation.
Poster MOPGF020 [4.624 MB]
 
MOPGF021 Database Archiving System for Supervision Systems at CERN: a Successful Upgrade Story database, controls, operation, software 129
 
  • P. Golonka, M. Gonzalez-Berges, J. Hofer, A. Voitier
    CERN, Geneva, Switzerland
 
Almost 200 controls applications, in domains such as LHC magnet protection, cryogenics and vacuum systems, cooling and ventilation, and electrical network supervision, have been developed and are currently maintained by the CERN Industrial Controls Group in close collaboration with several equipment groups. The supervision layer of these systems is based on the same technologies as 400 other systems running in the LHC experiments (e.g. WinCC Open Architecture, Oracle). During the two-year LHC Long Shutdown 1, the 200 systems were successfully migrated from a file-based archiver to a centralized infrastructure based on Oracle databases. This migration has homogenized the archiving chain for all CERN systems, while at the same time presenting a number of additional challenges. The paper presents the design, the necessary optimizations and the migration process that allowed us to meet unprecedented data-archiving rates (unachievable with the previously used system) and to interface with the existing long-term storage system (LHC LoggingDB) to assure data continuity.
Poster MOPGF021 [3.510 MB]
 
MOPGF025 Enhancing the Detector Control System of the CMS Experiment with Object Oriented Modelling software, toolkit, software-architecture, real-time 145
 
  • R.J. Jiménez Estupiñán, A. Andronidis, O. Chaze, C. Deldicque, M. Dobson, A.D. Dupont, D. Gigi, F. Glege, J. Hegeman, M. Janulis, L. Masetti, F. Meijers, E. Meschi, S. Morovic, C. Nunez-Barranco-Fernandez, L. Orsini, A. Petrucci, A. Racz, P. Roberts, H. Sakulin, C. Schwick, B. Stieger, S. Zaza, P. Zejdl
    CERN, Geneva, Switzerland
  • J.M. Andre, R.K. Mommsen, V. O'Dell, P. Zejdl
    Fermilab, Batavia, Illinois, USA
  • U. Behrens
    DESY, Hamburg, Germany
  • J. Branson, S. Cittolin, A. Holzner, M. Pieri
    UCSD, La Jolla, California, USA
  • G.L. Darlea, G. Gomez-Ceballos, C. Paus, K. Sumorok, J. Veverka
    MIT, Cambridge, Massachusetts, USA
  • S. Erhan
    UCLA, Los Angeles, California, USA
  • O. Holme
    ETH, Zurich, Switzerland
 
WinCC Open Architecture (WinCC OA) is used at CERN as the solution for many control system developments. This product models process variables in structures known as data points and offers a custom procedural scripting language called Control Language (CTRL). CTRL is also the language used to program the functionality of the native user interfaces (UIs) and is used by the WinCC OA based CERN control system frameworks. CTRL does not support object-oriented (OO) modelling by default. A lower-level OO application programming interface (API) is provided, but it requires significantly more expertise and development effort than CTRL. The Detector Control System group of the CMS experiment has developed CMSfwClass, a programming toolkit which adds OO behaviour to the data points and to CTRL. CMSfwClass reduces the semantic gap between high-level software design and the application domain. It increases maintainability, encapsulation, reusability and abstraction. This paper presents the details of the implementation as well as the benefits and use cases of CMSfwClass.
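The idea of adding OO behaviour on top of flat data points can be illustrated with a short sketch (in Python rather than WinCC OA CTRL, with hypothetical names; this is not the CMSfwClass API): a class wraps a flat path-to-value store so that callers work with attributes and methods instead of raw data-point paths.

```python
# Illustrative sketch only (hypothetical names, not the CMSfwClass API):
# wrap a flat data-point store in a class so that client code uses
# attributes and encapsulated behaviour instead of raw paths.

class DataPointStore:
    """Stands in for the SCADA data-point tree (flat path -> value)."""
    def __init__(self):
        self._dps = {}

    def set(self, path, value):
        self._dps[path] = value

    def get(self, path):
        return self._dps[path]

class HighVoltageChannel:
    """OO facade over the data points of one (hypothetical) HV channel."""
    def __init__(self, store, name):
        self._store = store
        self._base = f"HV/{name}"

    @property
    def setpoint(self):
        return self._store.get(f"{self._base}/vSet")

    @setpoint.setter
    def setpoint(self, volts):
        self._store.set(f"{self._base}/vSet", volts)

    def ramp_up(self, target, step=100):
        """Encapsulated behaviour: step the setpoint up towards a target."""
        v = 0
        while v < target:
            v = min(v + step, target)
            self.setpoint = v
        return self.setpoint

store = DataPointStore()
ch = HighVoltageChannel(store, "EB01")
ch.ramp_up(450)
print(store.get("HV/EB01/vSet"))  # 450
```

The gain is the one the abstract names: the path layout is hidden behind one maintainable, reusable abstraction.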
Poster MOPGF025 [1.441 MB]
 
MOPGF048 IBEX - the New EPICS Based Instrument Control System at the ISIS Pulsed Neutron and Muon Source controls, GUI, EPICS, LabView 205
 
  • F.A. Akeroyd, K.V.L. Baker, M.J. Clarke, G.D. Howells, D.P. Keymer, K.J. Knowles, C. Moreton-Smith, D.E. Oram
    STFC/RAL/ISIS, Chilton, Didcot, Oxon, United Kingdom
  • M. Bell, I.A. Bush, R.F. Nelson, K. Ward, K. Woods
    Tessella, Abingdon, United Kingdom
 
Instrument control at ISIS is migrating from a mainly locally developed system to an EPICS-based system. The new control system, called IBEX, was initially used during the commissioning of a new instrument prior to a long maintenance shutdown. This first usage provided valuable feedback, and significant progress has been made on enhancing the system during the facility maintenance period in preparation for the move into production use. Areas of particular future interest to scientists will be linking feedback from live data analysis with instrument control, and providing a simple and powerful scripting interface for facility users. In this paper we cover the architecture and design of the new control system, our choices of technologies, how the system has evolved following initial use, and our plans for moving forward.
Poster MOPGF048 [0.718 MB]
 
MOPGF052 A Framework for Hardware Integration in the LHCb Experiment Control System hardware, interface, controls, detector 221
 
  • L.G. Cardoso, F. Alessio, J. Barbosa, C. Gaspar, R. Schwemmer
    CERN, Geneva, Switzerland
  • P-Y. Duval
    CPPM, Marseille, France
 
LHCb is one of the four experiments at the LHC accelerator at CERN. For the LHCb upgrade, hundreds of new electronics boards for the central data acquisition and for the front-end readout of the different sub-detectors are being developed. These devices will need to be integrated in the Experiment Control System (ECS) that drives LHCb. Typically, they are controlled via a server running on a PC which mediates between the hardware registers and the experiment SCADA (WinCC OA). A set of tools was developed to provide easy integration of the control and monitoring of these devices into the ECS. The fwHw tool allows the abstraction of device models into the ECS: using XML files describing the structure and registers of the devices, it creates the necessary model of the hardware as a data structure in the SCADA. It then allows the control and monitoring of the defined registers by name, without the need to know the details of the underlying hardware. The fwHw tool also provides the facility of defining and applying recipes: named sets of configuration parameters which can be used to easily configure the hardware according to specific needs.
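The XML-driven modelling can be sketched generically (a made-up schema and plain Python; this is not the actual fwHw file format): a device description is parsed into a register map keyed by name, so client code addresses registers by name rather than by raw address.

```python
# Illustrative sketch: parse a hypothetical XML device description
# into a name -> register mapping (not the actual fwHw schema).
import xml.etree.ElementTree as ET

DEVICE_XML = """
<device name="ReadoutBoard">
  <register name="status"    address="0x00" access="ro"/>
  <register name="threshold" address="0x04" access="rw"/>
  <register name="mask"      address="0x08" access="rw"/>
</device>
"""

def build_register_map(xml_text):
    """Return {register name: (address, access)} from the description."""
    root = ET.fromstring(xml_text)
    return {
        reg.get("name"): (int(reg.get("address"), 16), reg.get("access"))
        for reg in root.findall("register")
    }

regs = build_register_map(DEVICE_XML)
print(regs["threshold"])  # (4, 'rw')
```

In this picture a "recipe" is simply a named mapping from register names to values that is applied in a single operation.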
Poster MOPGF052 [0.710 MB]
 
MOPGF057 Quick Experiment Automation Made Possible Using FPGA in LNLS FPGA, software, Linux, EPICS 229
 
  • M.P. Donadio, J.R. Piton, H.D. de Almeida
    LNLS, Campinas, Brazil
 
Beamlines at LNLS are being modernized to use the synchrotron light as efficiently as possible. As the photon flux increases, experiment speed constraints become more visible to the user. Experiment control has been done by ordinary computers, under a conventional operating system, running high-level software written in common programming languages. This architecture presents timing issues, as the computer is subject to interruptions from input devices such as the mouse, keyboard or network. These programs quickly became the bottleneck of the experiment. To improve experiment control and automation speed, we transferred software algorithms to an FPGA device. FPGAs are semiconductor devices based around a matrix of logic blocks that is reconfigurable by software. The results of adopting this technology with an NI CompactRIO device, whose FPGA is programmed through LabVIEW, and future improvements are briefly presented in this paper.
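The timing problem described above is easy to observe on a conventional operating system: even a loop that only tries to wake at a fixed period shows scheduling jitter, which is precisely what moving the logic into an FPGA removes. A minimal, illustrative measurement (plain Python, not LNLS code; the numbers depend on the machine and OS load):

```python
# Measure the wake-up jitter of a periodic software loop on a
# general-purpose OS (illustrative only; results vary by machine).
import time

def measure_jitter(period_s=0.001, cycles=50):
    """Sleep for a fixed period repeatedly; return the worst oversleep."""
    worst = 0.0
    for _ in range(cycles):
        t0 = time.perf_counter()
        time.sleep(period_s)
        elapsed = time.perf_counter() - t0
        worst = max(worst, elapsed - period_s)
    return worst

worst = measure_jitter()
print(f"worst oversleep: {worst * 1e6:.0f} us")
```

On a typical desktop the worst-case oversleep is orders of magnitude larger than the deterministic latencies achievable in FPGA logic.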
Poster MOPGF057 [5.365 MB]
 
MOPGF077 Drift Control Engines Stabilize Top-Up Operation at BESSY II feedback, controls, injection, operation 262
 
  • T. Birke, F. Falkenstern, R. Müller, A. Schälicke
    HZB, Berlin, Germany
 
  Funding: Work supported by BMBF and Land Berlin.
The full stability potential of orbit- and bunch-by-bunch-feedback-controlled top-up operation becomes available to the experimental users only if the remaining slow drifts of essential operational parameters are properly compensated. At the light source BESSY II these are the transverse tunes as well as the path length and energy. These compensations are realized using feedback control loops together with supervising state machines. Key to the tune control is a multi-source tune determination algorithm. For the path length correction, empirical findings are utilized. All involved software systems and data paths are sketched.
 
Poster MOPGF077 [2.068 MB]
 
MOPGF097 Architecture of Transverse Multi-Bunch Feedback Processor at Diamond feedback, controls, FPGA, EPICS 298
 
  • M.G. Abbott, G. Rehm, I.S. Uzun
    DLS, Oxfordshire, United Kingdom
 
We describe the detailed internal architecture of the Transverse Multi-Bunch Feedback processor used at Diamond for the control of multi-bunch instabilities and the measurement of betatron tunes. Bunch-by-bunch selectable control over feedback filters, gain and excitation allows fine-grained operation, so that, for example, the single bunch in a hybrid or camshaft fill pattern can be controlled independently from the bunch train. It is also possible to excite all bunches at a single frequency while simultaneously sweeping the excitation for tune measurement of a few selected bunches. The single-frequency excitation has been used for continuous measurement of the beta function. A simple programmable event sequencer provides support for up to 7 steps of programmable sweeps and changes to feedback and excitation, allowing a variety of complex and precisely timed beam characterisation experiments, including grow-damp measurements in unstable conditions and programmed bunch cleaning. Finally, input and output compensation filters allow for the correction of front-end and amplifier phasing at higher frequencies.
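The bunch-by-bunch selectability described above can be pictured as a per-bunch configuration table: each bunch index carries its own mode and gain selection, so one bunch in a camshaft fill can be treated differently from the train. A schematic sketch (plain Python with hypothetical field names; not the actual Diamond FPGA design):

```python
# Schematic per-bunch configuration table: every bunch selects its own
# operating mode and gain, as in a bunch-by-bunch feedback processor.
# (Hypothetical structure, not the actual Diamond implementation.)

N_BUNCHES = 936  # harmonic number of the Diamond storage ring

def make_config(default_mode="feedback", default_gain=1.0):
    """One configuration entry per bunch, all set to the defaults."""
    return [{"mode": default_mode, "gain": default_gain}
            for _ in range(N_BUNCHES)]

config = make_config()

# Treat the single camshaft bunch (say, index 0) independently:
# excite it for a tune measurement while the train stays under feedback.
config[0] = {"mode": "excitation", "gain": 0.5}

modes = {c["mode"] for c in config}
print(sorted(modes))  # ['excitation', 'feedback']
```

In the real processor this selection happens in FPGA logic at the bunch repetition rate; the table form is only meant to convey the addressing scheme.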
Poster MOPGF097 [0.251 MB]
 
MOPGF103 The Upgrade of Control Hardware of the CERN NA62 Beam Vacuum vacuum, controls, PLC, interface 318
 
  • F. Mateo, F. Antoniotti, S. Blanchard, R. Ferreira, P. Gomes, A. Gutierrez, B. Jenninger, H.F. Pereira
    CERN, Geneva, Switzerland
 
NA62 is the follow-up of the NA48 experiment in the SPS North Area of CERN, and reuses a large fraction of its detectors and beam-line equipment. Still, there are many new vacuum devices in the beam line (including pumps, valves and gauges), which required a thorough modification of the control system and a large number of new controllers, many of them custom-made. The NA62 vacuum control system is based on PLCs (Programmable Logic Controllers) and SCADA (Supervisory Control and Data Acquisition). The controllers and signal-conditioning electronics are accessed from the PLC via a field bus (Profibus); optical fibre is used between the surface racks and the underground gallery. The control hardware was completely commissioned during 2014. The nominal pressure levels were attained in all sectors of the experiment, and the remote control of all devices and the interlocks were successfully tested. This paper summarizes the architecture of the vacuum control system of NA62, the types of instruments to control, the communication networks, the hardware alarms and the supervisory interface.
Poster MOPGF103 [6.506 MB]
 
TUA3O01 Detector Controls Meets JEE on the Web controls, interface, detector, distributed 513
 
  • F. Glege, A. Andronidis, O. Chaze, C. Deldicque, M. Dobson, A.D. Dupont, D. Gigi, J. Hegeman, O. Holme, M. Janulis, R.J. Jiménez Estupiñán, L. Masetti, F. Meijers, E. Meschi, S. Morovic, C. Nunez-Barranco-Fernandez, L. Orsini, A. Petrucci, A. Racz, P. Roberts, H. Sakulin, C. Schwick, B. Stieger, S. Zaza, P. Zejdl
    CERN, Geneva, Switzerland
  • J.M. Andre, R.K. Mommsen, V. O'Dell
    Fermilab, Batavia, Illinois, USA
  • U. Behrens
    DESY, Hamburg, Germany
  • J. Branson, S. Cittolin, A. Holzner, M. Pieri
    UCSD, La Jolla, California, USA
  • G.L. Darlea, G. Gomez-Ceballos, C. Paus, J. Veverka
    MIT, Cambridge, Massachusetts, USA
  • S. Erhan
    UCLA, Los Angeles, California, USA
 
Remote monitoring and control has been an important aspect of physics detector controls ever since it became available. Due to the complexity of the systems, the 24/7 running requirements and limited human resources, remote access to perform interventions is essential. The amount of data to visualize, the required visualization types and cybersecurity standards demand a professional, complete solution. Using the example of the integration of the CMS detector control system into our Oracle WebCenter infrastructure, the mechanisms and tools available for integration with controls systems are discussed. Authentication has been delegated to WebCenter, and authorization is shared between the web server and the control system. Session handling exists in both systems and has to be matched, and concurrent access by multiple users has to be handled. The underlying JEE infrastructure is specialized in visualization and information sharing; on the other hand, the structure of a JEE system resembles a distributed controls system. An outlook is therefore given on tasks which could be covered by the web servers rather than by the controls system.
Slides TUA3O01 [2.611 MB]
 
TUA3O04 CS-Studio Scan System Parallelization controls, EPICS, interface, operation 517
 
  • K.-U. Kasemir, M.R. Pearson
    ORNL, Oak Ridge, Tennessee, USA
 
  Funding: This manuscript has been authored by UT-Battelle, LLC under Contract No. DE-AC05-00OR22725 with the U.S. Department of Energy.
For several years, the Control System Studio (CS-Studio) Scan System has successfully automated the operation of beam lines at the Oak Ridge National Laboratory (ORNL) High Flux Isotope Reactor (HFIR) and Spallation Neutron Source (SNS). As it is applied to additional beam lines, we need to support simultaneous adjustments of temperatures or motor positions. While this can be implemented via virtual motors or similar logic inside the Experimental Physics and Industrial Control System (EPICS) Input/Output Controllers (IOCs), doing so requires a priori knowledge of the experimenters' requirements. By adding support for the parallel control of multiple process variables (PVs) to the Scan System, we can better support ad hoc automation of experiments that benefit from such simultaneous PV adjustments.
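The benefit of parallel PV adjustment can be sketched generically (plain Python threads standing in for scan commands, with made-up PV names; this is not the CS-Studio Scan System API): several setpoints are written concurrently and the scan proceeds once all of them have settled, so the total wait is roughly one settle time rather than the sum of all of them.

```python
# Generic sketch of parallel set-and-wait on several "PVs", with
# threads standing in for scan-system commands.
# (Illustrative only; not the CS-Studio Scan System API.)
import threading
import time

def set_and_wait(pv, value, settle_s, results):
    """Pretend to write a setpoint and wait for the device to settle."""
    time.sleep(settle_s)          # stands in for motor motion / temperature ramp
    results[pv] = value

results = {}
threads = [
    threading.Thread(target=set_and_wait, args=(pv, val, 0.05, results))
    for pv, val in [("temp1", 300.0), ("motor_x", 1.5), ("motor_y", -2.0)]
]

t0 = time.perf_counter()
for t in threads:
    t.start()
for t in threads:
    t.join()                      # continue only when all PVs have settled
elapsed = time.perf_counter() - t0

# All three adjustments overlap, so the total time is close to one
# settle time, not the sum of three.
print(sorted(results), f"{elapsed:.2f} s")
```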
 
Slides TUA3O04 [2.789 MB]
 
TUB3O01 Advanced Workflow for Experimental Control controls, interface, hardware, scattering 521
 
  • D. Mannicke, N. Hauser, N. Xiong
    ANSTO, Menai, New South Wales, Australia
 
Gumtree* is a software product developed at ANSTO and used for experimental control as well as data visualization and treatment. In order to simplify the interaction with instruments and optimize the available time for users, a user-friendly multi-sample workflow has been developed for Gumtree. Within this workflow, users follow a step-by-step guide where they list available samples, set up instrument configurations and even specify sample environments. Users are then able to monitor the acquisition process in real time and receive estimates of the completion time. In addition, users can modify the previously entered information even after the acquisitions have commenced. This paper focuses on how ANSTO integrated a multi-sample workflow into Gumtree, what approaches were taken to allow realistic time estimates, what programming patterns were used to separate the user interface from the execution of the acquisition, and how standardization across multiple instruments was achieved. Furthermore, this paper summarizes the lessons learned during the development iterations, the feedback received from users and the future opportunities the approach enables.
* T. Lam, N. Hauser, A. Gotz, P. Hathaway, F. Franceschini, H. Rayner, "GumTree: an integrated scientific experiment environment", Physica B 385-386, 1330-1332 (2006)
 
Slides TUB3O01 [1.040 MB]
 
TUB3O02 Iterative Development of the Generic Continuous Scans in Sardana controls, software, hardware, data-acquisition 524
 
  • Z. Reszela, G. Cuní, C.M. Falcón Torres, D. Fernández-Carreiras, C. Pascual-Izarra, M. Rosanes Siscart
    ALBA-CELLS Synchrotron, Cerdanyola del Vallès, Spain
 
Sardana* is a software suite for supervision, control and data acquisition in scientific installations. It aims to reduce the cost and time of design, development and support of control and data acquisition systems. Sardana is used at several synchrotrons where continuous scans are the desired way of executing experiments**. Most experiments require extensive and coordinated control of many aspects such as positioning, data acquisition, synchronization and storage. Many successful ad hoc solutions have already been developed; however, they lack generalization and are hard to maintain or reuse. Sardana, thanks to the Taurus***-based applications, allows users to configure and control scan experiments. The MacroServer, a flexible Python-based sequencer, provides parametrizable turn-key scan procedures. Thanks to the Device Pool controller interfaces, heterogeneous hardware can easily be plugged into Sardana and its elements used during scans and data acquisition. The development of the generic continuous scans is an ongoing iterative process, and its current status is described in this paper.
* http://sardana-controls.org
** D. Fernandez-Carreiras, Synchronization of Motion and Detectors and Cont. Scans as the Standard Data Acquisition Technique, ICALEPCS2015
*** http://taurus-scada.org
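The step-scan versus continuous-scan distinction that motivates this work can be sketched in a few lines (generic Python with hypothetical names; not the Sardana API): instead of a stop-measure-move cycle, the motor moves at constant speed while acquisitions are triggered at precomputed positions.

```python
# Generic continuous-scan sketch: the motor moves at constant speed and
# acquisitions are triggered at equally spaced positions along the pass.
# (Illustrative only; not the Sardana scan framework.)

def continuous_scan(start, stop, n_points, speed):
    """Yield (trigger_time, position) pairs for one constant-speed pass."""
    step = (stop - start) / (n_points - 1)
    for i in range(n_points):
        pos = start + i * step
        t = (pos - start) / speed   # time at which this trigger fires
        yield t, pos

triggers = list(continuous_scan(0.0, 10.0, 5, speed=2.0))
print(triggers)  # [(0.0, 0.0), (1.25, 2.5), (2.5, 5.0), (3.75, 7.5), (5.0, 10.0)]
```

The hard parts the paper addresses (hardware synchronization of motion, triggering and data acquisition across heterogeneous controllers) sit behind this simple timing picture.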
 
Slides TUB3O02 [3.173 MB]
 
WEA3O01 The TANGO Controls Collaboration in 2015 TANGO, controls, device-server, laser 585
 
  • A. Götz, J.M. Chaize, T.M. Coutinho, J.L. Pons, E.T. Taurel, P.V. Verdier
    ESRF, Grenoble, France
  • G. Abeillé
    SOLEIL, Gif-sur-Yvette, France
  • S. Brockhauser, L.J. Fülöp
    ELI-ALPS, Szeged, Hungary
  • M.O. Cernaianu
    IFIN-HH, Bucharest - Magurele, Romania
  • I.A. Khokhriakov
    HZG, Geesthacht, Germany
  • R. Smareglia
    INAF-OAT, Trieste, Italy
  • A. Vázquez-Otero
    ELI-BEAMS, Prague, Czech Republic
 
This paper presents the latest news from the TANGO collaboration. TANGO is being used in new domains. The three ELI pillars - ELI-Beamlines, ELI-ALPS and ELI-NP, in the Czech Republic, Hungary and Romania respectively - have selected TANGO for many of their control systems. In ELI-Beamlines and ELI-ALPS, TANGO will play the role of integrating all the hardware and turn-key systems (some delivered with EPICS or LabVIEW) into one integrated system. In ELI-NP, the HPLS and LBTS will be controlled using TANGO, while the GBS will be controlled using EPICS; on the experimental side, ELI-NP will use both TANGO and EPICS control systems. TANGO will be extended with new features required by the laser community, including nanosecond time-stamping. The latest major release, TANGO V9, includes the following features: data pipes, enumerated types, dynamic commands and forwarded attributes. The collaboration has been extended to include the new members and to provide a sustainable source of resources through collaboration contracts. A new website (http://www.tango-controls.org/) has been designed which improves communication within the community.
Slides WEA3O01 [2.344 MB]
 
WEA3O02 Recent Advancements and Deployments of EPICS Version 4 EPICS, controls, detector, database 589
 
  • G.R. White, M.V. Shankar
    SLAC, Menlo Park, California, USA
  • A. Arkilic, L.R. Dalesio, M.A. Davidsaver, M.R. Kraimer, N. Malitsky, B.S. Martins
    BNL, Upton, Long Island, New York, USA
  • S.M. Hartman, K.-U. Kasemir
    ORNL, Oak Ridge, Tennessee, USA
  • D.G. Hickin
    DLS, Oxfordshire, United Kingdom
  • A.N. Johnson, S. Veseli
ANL, Argonne, Illinois, USA
  • T. Korhonen
    ESS, Lund, Sweden
  • R. Lange
    ITER Organization, St. Paul lez Durance, France
  • M. Sekoranja
    Cosylab, Ljubljana, Slovenia
  • G. Shen
    FRIB, East Lansing, Michigan, USA
 
EPICS Version 4 is a set of software modules that add to the base EPICS toolkit for advanced control systems. Version 4 adds support for process variable values of structured data, an introspection interface for dynamic typing plus some standard types, high-performance streaming, and a new front-end processing database for managing complex data I/O. A synchronous RPC-style facility has also been added, so that the EPICS environment supports service-oriented architecture. We introduce EPICS and the new features of Version 4, then describe selected deployments, particularly for high-throughput experiment data transport, experiment data management, beam dynamics and infrastructure data.
Slides WEA3O02 [2.413 MB]
 
WED3O01 MASSIVE: an HPC Collaboration to Underpin Synchrotron Science synchrotron, software, real-time, scattering 640
 
  • W.J. Goscinski
    Monash University, Faculty of Science, Clayton, Victoria, Australia
  • K. Bambery, C.J. Hall, A. Maksimenko, S. Panjikar, D. Paterson, C.G. Ryan, M. Tobin
    ASCo, Clayton, Victoria, Australia
  • C.U. Felzmann
    SLSA, Clayton, Australia
  • C. Hines, P. McIntosh
    Monash University, Clayton, Australia
  • D.A. Thompson
    CSIRO ATNF, Epping, Australia
 
MASSIVE is the Australian specialised High Performance Computing facility for imaging and visualisation. The project is a collaboration between Monash University, the Australian Synchrotron and CSIRO. MASSIVE underpins a range of advanced instruments, with a particular focus on Australian Synchrotron beamlines. This paper reports on the outcomes of the MASSIVE project since 2011, focusing in particular on instrument integration and interactive access. MASSIVE has developed a unique capability that supports an increasing number of researchers generating and processing instrument data. The facility runs an instrument integration program to help facilities move data to an HPC environment and provide in-experiment data processing. This capability is best demonstrated at the Imaging and Medical Beamline, where fast CT reconstruction and visualisation are now essential to performing effective experiments. The MASSIVE Desktop provides an easy way for researchers to begin using HPC, and is now an essential tool for scientists working with large datasets, including large images and other types of instrument data.
Slides WED3O01 [28.297 MB]
 
WED3O02 Databroker: An Interface for NSLS-II Data Management System interface, detector, framework, data-acquisition 645
 
  • A. Arkilic, D.B. Allan, D. Chabot, L.R. Dalesio, W.K. Lewis
    BNL, Upton, Long Island, New York, USA
 
  Funding: Brookhaven National Lab, U.S. Department of Energy
A typical experiment involves not only the raw data from a detector but also additional data from the beamline. To date, this information has largely been kept separate and manipulated individually. A much more effective approach is to integrate these different data sources and make them easily accessible to data analysis clients. The NSLS-II data flow system contains multiple backends with varying data types. Leveraging the features of these (metadatastore, filestore, channel archiver, and Olog), this library provides users with the ability to access experimental data. The service acts as a single interface for time series, data attributes, frame data access and other experiment-related information.
 
slides icon Slides WED3O02 [2.944 MB]  
 
WEPGF037 Data Lifecycle in Large Experimental Physics Facilities: The Approach of the Synchrotron ELETTRA and the Free Electron Laser FERMI operation, data-analysis, synchrotron, electron 777
 
  • F. Billè, R. Borghes, F. Brun, V. Chenda, A. Curri, V. Duic, D. Favretto, G. Kourousias, M. Lonza, M. Prica, R. Pugliese, M. Scarcia, M. Turcinovich
    Elettra-Sincrotrone Trieste S.C.p.A., Basovizza, Italy
 
  Producers of Big Data often face the emerging problem of data deluge. Experimental facilities such as synchrotrons and free electron lasers have additional requirements, mostly related to the necessity of managing access for thousands of scientists. A complete data lifecycle describes the seamless path that joins distinct IT tasks such as experiment proposal management, user accounts, data acquisition and analysis, archiving, cataloguing and remote access. This paper presents the data lifecycle of the synchrotron ELETTRA and the free electron laser FERMI. With the focus on data access, the Virtual Unified Office (VUO) is presented; it is a core element in scientific proposal management, the user information DB, scientific data oversight and remote access. Finally, recent developments of the beamline software are discussed, which plays the key role in data and metadata acquisition but also requires integration with the rest of the system components in order to provide data cataloguing, data archiving and remote access. The scope of this paper is to disseminate the current status of a complete data lifecycle, discuss key issues and hint at future directions.  
poster icon Poster WEPGF037 [1.137 MB]  
 
WEPGF043 Metadatastore: A Primary Data Store for NSLS-2 Beamlines database, data-analysis, EPICS, GUI 794
 
  • A. Arkilic, D.B. Allan, T.A. Caswell, L.R. Dalesio, W.K. Lewis
    BNL, Upton, Long Island, New York, USA
 
  Funding: Department of Energy, Brookhaven National Lab
The beamlines at NSLS-II are among the most highly instrumented and controlled in the world. Each beamline can produce unstructured data sets in various formats. These data should be made available for analysis and processing to beamline scientists and users. Various data flow systems are in place at numerous synchrotrons; however, these are very domain specific and cannot handle such unstructured data. We have developed a data flow service, metadatastore, that manages experimental data at NSLS-II beamlines. This service enables data analysis and visualization clients to access it either directly or via the databroker API in a consistent and partition-tolerant fashion, providing a reliable and easy-to-use interface to our state-of-the-art beamlines.
 
 
WEPGF044 Filestore: A File Management Tool for NSLS-II Beamlines data-analysis, EPICS, data-acquisition, database 796
 
  • A. Arkilic, T.A. Caswell, D. Chabot, L.R. Dalesio, W.K. Lewis
    BNL, Upton, Long Island, New York, USA
 
  Funding: Brookhaven National Lab, Department of Energy
NSLS-II beamlines can generate 72,000 data sets per day, resulting in over 2 million data sets in one year. The large number of data files generated by our beamlines poses a massive file management challenge. In response to this challenge, we have developed filestore as a means to provide users with an interface to stored data. By leveraging features of Python and MongoDB, filestore can store information about the location of a file, access and open the file, retrieve a given piece of data in that file, and provide users with a token, a unique identifier allowing them to retrieve each piece of data. Filestore does not interfere with the file source or the storage method and supports any file format, making data within files available to the NSLS-II data analysis environment.
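The token-based retrieval described above can be sketched like this. The API is hypothetical; an in-memory dict stands in for MongoDB, and the reader registry and `spec` names are invented for illustration.

```python
import uuid

class FileStore:
    """Minimal sketch of token-based file access (hypothetical API;
    an in-memory dict replaces the MongoDB collection)."""
    def __init__(self):
        self._resources = {}

    def register(self, filepath, spec, **resource_kwargs):
        # Record where the file lives and how to read it; hand back a
        # unique token that later retrieves this exact piece of data.
        token = str(uuid.uuid4())
        self._resources[token] = {"filepath": filepath, "spec": spec,
                                  "kwargs": resource_kwargs}
        return token

    def retrieve(self, token, readers):
        # Look up the resource and dispatch to the reader registered
        # for its format ("spec"); the file itself is never touched here.
        res = self._resources[token]
        return readers[res["spec"]](res["filepath"], **res["kwargs"])


fs = FileStore()
tok = fs.register("/data/scan_0001.txt", spec="TXT", column=0)
# One reader per supported format; a real deployment would register
# TIFF/HDF5/etc. readers without changing FileStore itself.
readers = {"TXT": lambda path, column: f"column {column} of {path}"}
print(fs.retrieve(tok, readers))
```

Because the store only records the file location and read parameters, any file format is supported by adding a reader, without changing how files are produced or stored.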
 
poster icon Poster WEPGF044 [0.854 MB]  
 
WEPGF053 Monitoring and Cataloguing the Progress of Synchrotron Experiments, Data Reduction, and Data Analysis at Diamond Light Source From a User's Perspective software, data-analysis, detector, radiation 822
 
  • J. Aishima
    SLSA, Clayton, Australia
  • A. Ashton, S. Fisher, K. Levik, G. Winter
    DLS, Oxfordshire, United Kingdom
 
  The high data rates produced by the latest generation of detectors, more efficient sample handling hardware and ever more remote users of the beamlines at Diamond Light Source require improved data reduction and data analysis techniques to maximize their benefit to scientists. In this paper some of the experiment data reduction and analysis steps are described, including real time image analysis with DIALS, our Fast DP and xia2-based data reduction pipelines, and Fast EP phasing and Dimple difference map calculation pipelines that aim to rapidly provide feedback about the recently completed experiment. SynchWeb, an interface to an open source laboratory information management system called ISPyB (co-developed at Diamond and the ESRF), provides a modern, flexible framework for managing samples and visualizing the data from all of these experiments and analyses, including plots, images, and tables of the analysed and reduced data, as well as showing experimental metadata, sample information.  
 
WEPGF059 The Australian Store. Synchrotron Data Management Service for Macromolecular Crystallography synchrotron, data-management, interface, real-time 830
 
  • G.R. Meyer, S. Androulakis, P.J. Bertling, A.M. Buckle, W.J. Goscinski, D. Groenewegen, C. Hines, A. Kannan, S. McGowan, S.M. Quenette, J. Rigby, P. Splawa-Neyman, J.M. Wettenhall
    Monash University, Clayton, Australia
  • D. Aragao, T. Caradoc-Davies, N. Mudie
    SLSA, Clayton, Australia
  • C.S. Bond
    University of Western Australia, Crawley, Australia
 
  Store.Synchrotron is a service for the management and publication of diffraction data from the macromolecular crystallography (MX) beamlines of the Australian Synchrotron. Since the start of development in 2013, the service has handled over 51.8 TB of raw data (~4.1 million files). Raw data and autoprocessing results are made available securely via the web and SFTP so that experimenters can sync them to their labs for further analysis. With the goal of becoming a large public repository of raw diffraction data, a guided publishing workflow was built that optionally captures discipline-specific information. The MX-specific workflow links deposited PDB coordinates to the raw data. An optionally embargoed DOI is created for convenient citation. This repository will be a valuable tool for crystallography software developers. To support complex projects, integration of other instruments such as microscopes is underway. We developed an application that captures any data from instrument computers, enabling centralised data management without the need for custom ingestion workflows. The next step is to integrate the hosted data with interactive processing and analysis tools on virtual desktops.  
poster icon Poster WEPGF059 [2.177 MB]  
 
WEPGF060 A Data Management Infrastructure for Neutron Scattering Experiments in J-PARC/MLF data-management, neutron, database, operation 834
 
  • K. Moriyama, T. Nakatani
    JAEA/J-PARC, Tokai-Mura, Naka-Gun, Ibaraki-Ken, Japan
 
  Data management makes one of the greatest contributions to the research workflow of scientific experiments such as neutron scattering. The facility is required to manage a huge amount of data safely and efficiently over the long term, and to provide effective data access for facility users, promoting the creation of scientific results. In order to meet these requirements, we are operating and updating a data management infrastructure at J-PARC/MLF which consists of a web-based integrated data management system called the MLF Experimental Database (MLF EXP-DB), a hierarchical raw data repository composed of distributed storage, and an integrated authentication system. The MLF EXP-DB creates experimental data catalogues in which raw data, measurement logs, and other contextual information on the sample, experimental proposal, investigators, etc. are interrelated. The system handles the deposition, archiving and on-demand retrieval of raw data in the repository. Facility users are able to access the experimental data via a web portal. This contribution presents an overview of our data management infrastructure and the recently updated features for high availability, scaling out, and flexible data retrieval in the MLF EXP-DB.  
poster icon Poster WEPGF060 [1.075 MB]  
 
WEPGF071 Python Scripting for Instrument Control and Online Data Treatment controls, interface, software, GUI 869
 
  • N. Xiong, N. Hauser, D. Mannicke
    ANSTO, Menai, New South Wales, Australia
 
  Scripting is an important feature of instrument control software. It allows scientists to execute a sequence of tasks to run complex experiments, and it makes software developers' lives easier when testing and deploying new features. Modern instrument control applications require easy-to-develop and reliable scripting support. At ANSTO we provide a Python scripting interface for Gumtree, an application that provides three features: instrument control, data treatment and visualisation for neutron scattering instruments. The scripting layer coordinates these three features. The language is simple and well documented, so scientists require minimal programming experience. The scripting engine has a web interface so that users can run scripts remotely from a web browser. The script interface has a numpy-like library that makes data treatment easier, and a GUI library that automatically generates control panels for scripts. The same script can be loaded in both the workbench (desktop) application and the web service application for online data treatment; in both cases a GUI is generated with a similar look and feel.
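The idea of scripts that declare parameters from which a control panel is auto-generated might look like the following. This is an illustrative sketch, not the actual Gumtree scripting API; all class and method names are invented.

```python
# Hypothetical sketch of a scripting layer whose declared parameters
# can be rendered as a control panel by either a desktop or a web GUI.
class Par:
    """One user-editable script parameter."""
    def __init__(self, name, default, typ=float):
        self.name, self.typ = name, typ
        self.value = typ(default)


class Script:
    def __init__(self, *pars):
        self.pars = {p.name: p for p in pars}

    def panel(self):
        # A GUI layer (workbench or web) would turn these tuples into
        # widgets: name, widget type, initial value.
        return [(p.name, p.typ.__name__, p.value) for p in self.pars.values()]

    def run(self, body):
        # Call the user's function with the current parameter values.
        return body(**{n: p.value for n, p in self.pars.items()})


scan = Script(Par("start", 0.0), Par("stop", 10.0), Par("steps", 5, int))
print(scan.panel())
result = scan.run(lambda start, stop, steps:
                  [start + i * (stop - start) / steps
                   for i in range(steps + 1)])
print(result)  # -> [0.0, 2.0, 4.0, 6.0, 8.0, 10.0]
```

Because the panel is derived from the parameter declarations, the same script needs no GUI code of its own, which is what lets one script serve both the desktop and the web front end.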

 
poster icon Poster WEPGF071 [3.069 MB]  
 
WEPGF090 Design of EPICS IOC Based on RAIN1000Z1 ZYNQ Module EPICS, Linux, embedded, controls 905
 
  • T. Xue, G.H. Gong, H. Li, J.M. Li
    Tsinghua University, Beijing, People's Republic of China
 
  ZYNQ is a new Xilinx FPGA architecture incorporating dual high-performance ARM Cortex-A9 processors. A new module with a Gigabit Ethernet interface, named RAIN1000Z1 and based on the ZYNQ XC7Z010, has been developed for the data acquisition of the high-purity germanium detectors in the CJPL (China Jinping underground Lab) experiment. On top of the RAIN1000Z1 hardware platform, EPICS has been ported to the ARM Cortex-A9 processor under embedded Linux, and an input/output controller (IOC) is implemented on the RAIN1000Z1 module. Thanks to ZYNQ's combination of processor and logic and its new silicon technology, embedded Linux with TCP/IP sockets and real-time, high-throughput logic written in VHDL run on a single chip with a small module size, lower power and higher performance. This paper introduces how the EPICS IOC application was ported to the ZYNQ under embedded Linux and gives a demonstration of I/O control and RS232 communication.  
poster icon Poster WEPGF090 [1.811 MB]  
 
WEPGF126 Prototype of White Rabbit Network in LHAASO network, detector, timing, controls 999
 
  • H. Li, G.H. Gong
    Tsinghua University, Beijing, People's Republic of China
  • Q. Du
    LBNL, Berkeley, California, USA
 
  Funding: Key Laboratory of Particle & Radiation Imaging, Open Research Foundation of State Key Lab of Digital Manufacturing Equipment & Technology in Huazhong Univ. of Science & Technology
Synchronization is a crucial concern in distributed measurement and control systems. White Rabbit provides sub-nanosecond accuracy and picosecond precision for large distributed systems. In the Large High Altitude Air Shower Observatory project, to guarantee the angular resolution of reconstructed air shower events, a 500 ps overall synchronization precision must be achieved among thousands of detectors. A small prototype built at Yangbajing, Tibet, China has been working well for a whole year. A portable calibration node directly synced with the grandmaster switch and a simple detector stack named Telescope are used to verify the overall synchronization precision of the whole prototype. The preliminary experimental results show that the long-term synchronization of the White Rabbit network is promising and that a 500 ps overall synchronization precision is achievable with node-by-node calibration and temperature correction.
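White Rabbit builds on Precision Time Protocol-style two-way time transfer. The basic offset/delay arithmetic behind such schemes can be sketched as follows (illustrative numbers in nanoseconds; a symmetric link is assumed, which is exactly the assumption WR refines with its calibration):

```python
# Two-way time-transfer arithmetic underlying PTP/White Rabbit sync.
def offset_and_delay(t1, t2, t3, t4):
    """t1: master sends sync, t2: slave receives it,
       t3: slave sends delay request, t4: master receives it.
       Assuming a symmetric link, solve for one-way delay and offset."""
    delay = ((t2 - t1) + (t4 - t3)) / 2
    offset = ((t2 - t1) - (t4 - t3)) / 2
    return offset, delay

# Example: the slave clock runs 5 ns ahead of the master and the true
# one-way link delay is 100 ns, so t2 = 0 + 100 + 5 and t4 = 200 + 100 - 5.
offset, delay = offset_and_delay(t1=0, t2=105, t3=200, t4=295)
print(offset, delay)  # -> 5.0 100.0
```

Any asymmetry between the two directions shows up directly as an offset error, which is why the abstract's node-by-node calibration and temperature correction matter at the 500 ps level.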
 
poster icon Poster WEPGF126 [1.233 MB]  
 
WEPGF136 Development of iBeacon Based Equipment Inventory System at STAR Experiment hardware, site, toolkit, detector 1029
 
  • J. Fujita, M.G. Cherney
    Creighton University, Omaha, NE, USA
 
  An inventory system using iBeacon technology has been developed. A specially written iOS app makes equipment easier for workers to locate during routine access to the experiment. The use of iBeacons and iOS devices allows us to distinguish one equipment rack from another very easily. Combined with 2D barcodes, iBeacons may provide better inventory management of the equipment for experiments.  
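The rack-identification idea can be sketched as follows. The UUID, rack names and RSSI values are hypothetical, and the real system is an iOS app; the logic is transliterated to Python purely for illustration.

```python
# Hypothetical mapping from an iBeacon identity (UUID, major, minor)
# to an equipment rack; each rack carries its own beacon.
RACKS = {
    ("F7826DA6-4FA2-4E98-8024-BC5B71E0893E", 1, 101): "Rack A1 (DAQ)",
    ("F7826DA6-4FA2-4E98-8024-BC5B71E0893E", 1, 102): "Rack A2 (HV)",
}

def nearest_rack(sightings):
    """sightings: (uuid, major, minor, rssi) tuples as an app would see
    them. Pick the beacon with the strongest signal (highest RSSI) and
    map its identity to a rack."""
    uuid_, major, minor, _rssi = max(sightings, key=lambda s: s[3])
    return RACKS.get((uuid_, major, minor), "unknown rack")

observed = [("F7826DA6-4FA2-4E98-8024-BC5B71E0893E", 1, 101, -75),
            ("F7826DA6-4FA2-4E98-8024-BC5B71E0893E", 1, 102, -60)]
print(nearest_rack(observed))  # -> Rack A2 (HV)
```

A 2D barcode on each item would then refine the lookup from "which rack am I standing at" to "which piece of equipment is this".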
poster icon Poster WEPGF136 [2.598 MB]  
 
WEPGF147 ALICE Monitoring in 3-D detector, controls, software, monitoring 1049
 
  • O. Pinazza
    INFN-Bologna, Bologna, Italy
  • A. Augustinus, P.M. Bond, P.Ch. Chochula, M. Lechman, J. Niedziela
    CERN, Geneva, Switzerland
  • A.N. Kurepin
    RAS/INR, Moscow, Russia
 
  The ALICE experiment is a complex hardware and software device, monitored and operated with a control system based on WinCC OA. ALICE is composed of 19 detectors installed in a cavern on the LHC at CERN; each detector is a set of modular elements, assembled in a hierarchical model called the Finite State Machine. A 3-D model of the ALICE detector has been realized in which all elements of the FSM are represented in their relative locations, giving an immediate overview of the status of the detector. Thanks to its simplicity, it can also be a useful tool for training operators. The development uses WinCC OA integrated with the JCOP fw3DViewer, based on the AliRoot geometry settings. Extracting and converting geometry data from AliRoot requires conversion libraries, which are currently being implemented. A preliminary version of ALICE 3-D is now deployed on the operator panel in the ALICE Run Control Centre. In the near future, the 3-D panel will be available on a big touch screen in the ALICE Visits Centre, providing visitors with the unique experience of navigating the experiment from both inside and out.  
poster icon Poster WEPGF147 [1.282 MB]  
 
THHA3O01 The Evolution of the ALICE Detector Control System detector, operation, controls, electronics 1087
 
  • P.Ch. Chochula, A. Augustinus, P.M. Bond, A.N. Kurepin, M. Lechman, O. Pinazza
    CERN, Geneva, Switzerland
  • A.N. Kurepin
    RAS/INR, Moscow, Russia
  • O. Pinazza
    INFN-Bologna, Bologna, Italy
 
  The ALICE Detector Control System has provided its service since 2007. Its operation in the past years proved that the initial design fulfilled all expectations and allowed the system to follow the evolution of the detectors and of the operational requirements. In order to minimize the impact of the human factor, many procedures have been optimized and new tools introduced, allowing the operator to supervise about 1 000 000 parameters from a single console. In parallel with the preparation for new runs after the LHC shutdown, prototyping has started for system extensions which shall be ready in 2018. New detectors will require new approaches to their control and configuration. The conditions data, currently collected after each run, will be provided continuously to a farm containing 100 000 CPU cores and tens of PB of storage. This paper describes the DCS design, the deployed technologies, and the experience gained during 7 years of operation, and compares the initial assumptions with the current setup. The current status of the developments for the upgraded system, which will be put into operation in less than 3 years from now, is also described.  
slides icon Slides THHA3O01 [4.556 MB]  
 
THHA3O03 Managing Neutron Beam Scans at the Canadian Neutron Beam Centre database, controls, neutron, software 1096
 
  • M.R. Vigder, M.L. Cusick, D. Dean
    CNL, Ontario, Canada
 
  The Canadian Neutron Beam Centre (CNBC) of the Canadian Nuclear Laboratories (CNL) operates six beam lines for material research. A single beam line experiment requires scientists to acquire data as a sequence of scans involving data acquisition at many points, varying sample positions, samples, wavelength, sample environment, etc. The points at which measurements must be taken can number in the thousands, with scans or their variations having to be run multiple times. At the CNBC an approach has been developed that allows scientists to specify and manage their scans using a set of processes and tools. Scans are specified using a set of constructors and a scan algebra that allows scans to be combined using a set of scan operators. Using the operators of the algebra, complex scan sequences can be constructed from simpler scans and run unattended for up to a few days. Based on the constructors and the algebra, tools are provided for scientists to build, organize and execute their scans. These tools can take the form of scripting languages, spreadsheets, or databases. This scanning technique is currently in use at CNL and has been implemented in Python on an EPICS-based control system.  
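The flavour of such a scan algebra can be sketched as follows. The constructor and operator names are hypothetical, not the CNBC implementation; a scan is modelled simply as a sequence of measurement points.

```python
# Sketch of a scan algebra: scans are lists of points (axis -> value),
# constructors build elementary scans, operators combine them.
def linscan(axis, start, stop, n):
    """Constructor: n equally spaced points along one axis."""
    step = (stop - start) / (n - 1)
    return [{axis: start + i * step} for i in range(n)]

def chain(a, b):
    """Operator: run scan a, then scan b."""
    return a + b

def product(a, b):
    """Operator: for every point of a, run all points of b (nesting)."""
    return [{**pa, **pb} for pa in a for pb in b]

temps = linscan("temperature", 10, 20, 2)
angles = linscan("two_theta", 0.0, 1.0, 3)
seq = product(temps, angles)           # 2 x 3 = 6 measurement points
print(len(seq), seq[0], seq[-1])
```

Because operators return ordinary scans, they compose arbitrarily, e.g. `chain(product(temps, angles), angles)`, which is how thousand-point unattended sequences can be assembled from a few small pieces.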
slides icon Slides THHA3O03 [0.745 MB]  
 
THHB2O03 The Global Trigger with Online Vertex Fitting for Low Energy Neutrino Research electronics, detector, simulation, photon 1107
 
  • G.H. Gong, H. Li, T. Xue
    Tsinghua University, Beijing, People's Republic of China
  • H. Gong
    TUB, Beijing, People's Republic of China
 
  Neutrino research is of great importance for particle physics, astrophysics and cosmology; JUNO (Jiangmen Underground Neutrino Observatory) is a multi-purpose neutrino experiment for neutrino mass ordering determination and precision measurement of neutrino mixing parameters. A brand new global trigger scheme with online vertex fitting has been proposed, aiming at an ultra-low anti-neutrino energy threshold down to 0.1 MeV, which is essential for the study of solar neutrinos and of the elastic scattering of neutrinos on supernova bursts. With this scheme, the TOF (time of flight) differences of photons flying through the liquid medium from the interaction point to the surface of the central detector can be corrected online in real time. The width of the trigger window needed to cover all the photons generated by a specific neutrino event can then be significantly reduced, which lessens the integrated dark noise introduced by the large number of PMT devices, so a lower energy threshold can be achieved. The scheme is compatible, flexible and easy to implement, and it can effectively extend the physics potential of JUNO for low-energy neutrino research topics.  
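The TOF correction can be illustrated numerically as follows. The numbers are illustrative assumptions (a refractive index of ~1.5 for the liquid medium and a ~17.7 m detector radius), not design figures from the paper; the point is only that subtracting each photon's flight time aligns hits from one event into a narrow window.

```python
import math

# Illustrative online TOF correction: subtract each photon's time of
# flight from the fitted vertex to its PMT so that hits from the same
# event line up, allowing a much narrower trigger window.
C_VACUUM = 0.2998   # speed of light, m/ns
N_MEDIUM = 1.5      # assumed effective refractive index of the medium

def corrected_time(t_hit, vertex, pmt):
    dist = math.dist(vertex, pmt)          # straight-line path, metres
    tof = dist * N_MEDIUM / C_VACUUM       # flight time, ns
    return t_hit - tof

vertex = (0.0, 0.0, 0.0)                   # fitted interaction point
pmts = [(17.7, 0.0, 0.0), (0.0, 17.7, 0.0)]  # two PMTs on the sphere
# Both photons were emitted at t ~ 0; raw hit times differ only by TOF.
hits = [corrected_time(88.6, vertex, p) for p in pmts]
print([round(h, 1) for h in hits])         # both near the emission time
```

After the correction, the hits cluster near the common emission time instead of being spread over the full photon transit spread, so the trigger window (and the dark noise it integrates) shrinks accordingly.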
slides icon Slides THHB2O03 [4.257 MB]  
 
THHB3O03 On-the-Fly Scans for Fast Tomography at LNLS Imaging Beamline EPICS, controls, network, LabView 1119
 
  • G.B.Z.L. Moreno, R. Bongers, M.B. Cardoso, F.P. O'Dowd, H.H. Slepicka
    LNLS, Campinas, Brazil
 
  Funding: Brazilian Synchrotron Light Laboratory.
As we go to brighter light sources and time-resolved experiments, different approaches for executing faster scans at synchrotrons are an ever-present need. In many light sources, performing scans through a sequence of hardware triggers is the most commonly used method for synchronizing instruments and motors. Thus, in order to provide a sufficiently flexible and robust solution, the X-Ray Imaging Beamline (IMX) at the Brazilian Synchrotron Light Source [1] upgraded its scanning system to a NI PXI chassis interfacing with Galil motion controllers and the EPICS environment. It currently executes point-to-point and on-the-fly scans controlled by hardware signals, fully integrated with the beamline control system under the EPICS channel access protocol. Some approaches can use CS-Studio screens and automated Python scripts to create a user-friendly interface. All programming languages used in the project are easy to use and to learn, which allows high maintainability for the delivered system. The use of the LNLS Hyppie platform [2, 3] also enables software modularity for better compatibility and scalability over different experimental setups and even different beamlines.
[1] F. P. O'Dowd et al., "X-Ray micro-tomography at the IMX beamline (LNLS)", XRM2014. [2] J. R. Piton et al., "Hyppie: A hypervisored PXI for physics instrumentation under EPICS", BIW2012.
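The gain from on-the-fly over point-to-point scanning described in the abstract can be illustrated with a simple timing model. All parameters (move, settle, exposure and acceleration times, velocity) are hypothetical round numbers, not IMX figures.

```python
# Toy comparison of point-to-point vs. on-the-fly scan duration.
def fly_triggers(start, stop, step):
    """Positions at which hardware pulses fire during a continuous move."""
    n = int(round((stop - start) / step)) + 1
    return [start + i * step for i in range(n)]

def p2p_time(n_points, t_move, t_settle, t_expose):
    # Point-to-point: move, settle and expose at every single point.
    return n_points * (t_move + t_settle + t_expose)

def fly_time(n_points, velocity, step, t_accel):
    # On-the-fly: one ramp-up and ramp-down, then a constant-velocity
    # sweep with exposures triggered by position pulses.
    return 2 * t_accel + (n_points - 1) * step / velocity

trig = fly_triggers(0.0, 180.0, 0.5)     # tomography: 0.5 deg projections
print(len(trig))                          # 361 trigger positions
print(p2p_time(361, 0.2, 0.1, 0.1))       # seconds, stop at every point
print(fly_time(361, 10.0, 0.5, 0.5))      # seconds, continuous sweep
```

Even with these generous per-point times, the continuous sweep wins by roughly an order of magnitude, which is why hardware-triggered fly scans are the natural fit for fast CT.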
 
slides icon Slides THHB3O03 [3.591 MB]  
 
THHC3O05 National Ignition Facility (NIF) Experiment Interface Consolidation and Simplification to Support Operational User Facility framework, software, hardware, site 1143
 
  • A.D. Casey, E.J. Bond, B.A. Conrad, M.S. Hutton, P.D. Reisdorf, S.M. Reisdorf
    LLNL, Livermore, California, USA
 
  Funding: This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344
The National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory is a 192-beam 1.8 MJ ultraviolet laser system designed to support high-energy-density science. NIF can create extreme states of matter, including temperatures of 100 million degrees and pressures that exceed 100 billion times Earth's atmosphere. At these temperatures and pressures, scientists explore the physics of planetary interiors, supernovae, black holes and thermonuclear burn. In the past year, NIF has transitioned to an operational facility and significant focus has been placed on how users interact with the experimental tools. The current toolset was developed with a view to commissioning the NIF and thus allows flexibility that most users do not require. The goals of this effort include enhancing NIF's external website, easing proposal entry, reducing both the amount and frequency of data users have to enter, and simplifying user interactions with the tools while reducing the reliance on custom software. This paper will discuss the strategies adopted to meet these goals, highlight some of the user tool improvements that have been implemented, and outline planned future directions for the toolset.
 
slides icon Slides THHC3O05 [3.167 MB]  