Author: Buteau, A.
Paper | Title | Page
WEPKN003 Distributed Fast Acquisitions System for Multi Detector Experiments 717
 
  • F. Langlois, A. Buteau, X. Elattaoui, C.M. Kewish, S. Lê, P. Martinez, K. Medjoubi, S. Poirier, A. Somogyi
    SOLEIL, Gif-sur-Yvette, France
  • A. Noureddine
    MEDIANE SYSTEM, Le Pecq, France
  • C. Rodriguez
    ALTEN, Boulogne-Billancourt, France
 
  An increasing number of SOLEIL beamlines need to use several detection techniques in parallel, which may involve 2D area detectors, 1D fluorescence analyzers, etc. For such experiments, we have implemented distributed fast acquisition systems for multiple detectors. Data from each detector are collected by independent software applications (in our case Tango devices), with all acquisitions triggered by a single master clock. Each detector software device then streams its own data to a common disk space, known as the spool. Each detector's data are stored in an independent NeXus file, with the help of a dedicated high-performance NeXus streaming C++ library (called NeXus4Tango). A dedicated asynchronous process, known as the DataMerger, monitors the spool and gathers all these individual temporary NeXus files into the final experiment NeXus file, stored in SOLEIL's common storage system. Metadata describing the context and environment are also added to the final file by another process (the DataRecorder device). This software architecture has proved very modular in terms of the number and type of detectors, while making users' lives easier, since all data end up in a single file at the end of the acquisition. The status of deployment and operation of this "Distributed Fast Acquisitions system for multi detector experiments" will be presented, with the examples of QuickExafs acquisitions on the SAMBA beamline and QuickSRCD acquisitions on DISCO. In particular, the complex case of the future NANOSCOPIUM beamline will be discussed in detail.
Poster WEPKN003 [0.671 MB]
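  The paper itself holds the implementation details; as a rough illustration of the spool/merge flow described above, here is a minimal Python sketch. It is not the actual DataMerger (a Tango device built on the NeXus4Tango C++ library); the names spool_dir, merge_detector_file and the "*.nxs" convention are hypothetical, and h5py stands in for the NeXus layer since NeXus files are HDF5 containers.

```python
# Hypothetical sketch of a DataMerger-like process: poll a spool
# directory for per-detector NeXus/HDF5 files and copy their contents
# into a single final experiment file.
import glob
import os
import time

import h5py  # NeXus files are HDF5 containers


def merge_detector_file(spool_path: str, final_file: h5py.File) -> None:
    """Copy every top-level entry of a temporary detector file into
    the final experiment file, then discard the temporary file."""
    with h5py.File(spool_path, "r") as src:
        for name in src:
            # Keep detector data separated by prefixing with the file stem.
            stem = os.path.splitext(os.path.basename(spool_path))[0]
            src.copy(name, final_file, name=f"{stem}_{name}")
    os.remove(spool_path)


def watch_spool(spool_dir: str, final_path: str, poll_s: float = 1.0) -> None:
    """Poll the spool and merge each detector file as it appears."""
    with h5py.File(final_path, "a") as final_file:
        while True:
            for path in glob.glob(os.path.join(spool_dir, "*.nxs")):
                merge_detector_file(path, final_file)
                final_file.flush()
            time.sleep(poll_s)
```

  In the real system each detector device writes asynchronously, so the merger must only pick up files that the writers have closed; the sketch above glosses over that coordination.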
 
THBHMUST02 Assessing Software Quality at Each Step of its Lifecycle to Enhance Reliability of Control Systems 1205
 
  • V.H. Hardion, G. Abeillé, A. Buteau, S. Lê, N. Leclercq, S. Pierre-Joseph Zéphir
    SOLEIL, Gif-sur-Yvette, France
 
  A distributed software control system aims to enhance evolvability and reliability by sharing responsibility between several components. The disadvantage is that detecting problems is harder across a large number of modules. In the Kaizen spirit, we chose to invest continuously in automation to obtain a complete overview of software quality despite the growth of legacy code. The development process was already well managed, with each lifecycle step staged through a continuous integration server based on JENKINS and MAVEN. We enhanced this process by focusing on three objectives: automatic testing, static code analysis and post-mortem supervision. The build process now automatically includes tests to detect regressions, incorrect behavior and integration incompatibilities. The in-house TANGOUNIT project addresses the difficulty of testing distributed components such as Tango devices. Next, the code has to pass a complete quality check-up: the SONAR quality server was integrated into the process to collect the results of each static code analysis and display the hot spots on summary web pages. Finally, the integration of Google BREAKPAD into every Tango device gives us essential statistics from crash reports and allows crash scenarios to be replayed at any time. These improvements already give us more visibility into current developments. Concrete results will be presented, such as enhanced reliability, better management of subcontracted software development, quicker adoption of coding standards by new developers, and a clearer understanding of the impact of moving to a new technology.
Slides THBHMUST02 [2.973 MB]
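  TANGOUNIT itself is an in-house project and is not detailed in the abstract. As a hedged illustration of the underlying idea, automatically testing a Tango device without a full control system, the sketch below uses PyTango's DeviceTestContext, which runs a device server in-process; the Counter device and the test are made-up examples, not SOLEIL code.

```python
# Sketch of a regression test for a Tango device, in the spirit of the
# in-house TANGOUNIT project. PyTango's DeviceTestContext starts an
# in-process device server, so no Tango database is needed.
from tango.server import Device, attribute, command
from tango.test_context import DeviceTestContext


class Counter(Device):
    """A trivial device used only to demonstrate the test harness."""

    def init_device(self):
        super().init_device()
        self._count = 0

    @attribute(dtype=int)
    def count(self):
        return self._count

    @command
    def Increment(self):
        self._count += 1


def test_increment_updates_count():
    # The context yields a DeviceProxy, so the test goes through the
    # real Tango communication layer rather than calling methods directly.
    with DeviceTestContext(Counter) as proxy:
        assert proxy.count == 0
        proxy.Increment()
        assert proxy.count == 1
```

  Run under a test runner such as pytest, this kind of test is exactly what a continuous integration server can execute on every build to catch regressions early.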
 
THCHAUST03 Common Data Model; A Unified Layer to Access Data from Data Analysis Point of View 1220
 
  • N. Hauser, T.K. Lam, N. Xiong
    ANSTO, Menai, New South Wales, Australia
  • A. Buteau, M. Ounsy, S. Poirier
    SOLEIL, Gif-sur-Yvette, France
  • C. Rodriguez
    ALTEN, Boulogne-Billancourt, France
 
  For almost 20 years, the scientific community of neutron and synchrotron facilities has been dreaming of a common data format, to be able to exchange experimental results and the applications that analyse them. While using HDF5 as a physical container for data quickly reached broad consensus, the big issue remains the standardisation of data organisation. By introducing a new level of indirection for data access, the CommonDataModel (CDM) framework offers a solution and makes it possible to split development efforts and responsibilities between institutes. The CDM is made of a core API that accesses data through a data-format plugin mechanism, plus application definitions (i.e. sets of logically organised keywords defined by scientists for each experimental technique). Using an innovative "mapping" system between application definitions and physical data organisations, the CDM makes it possible to develop data reduction applications regardless of data file formats AND organisations. Each institute then has to develop data access plugins for its own file formats, along with the mapping between the application definitions and its own data file organisation. Data reduction applications can thus be developed from a strictly scientific point of view and are natively able to process data coming from several institutes. A concrete example of a SAXS data reduction application accessing NeXus and EDF (ESRF Data Format) files will be discussed.
Slides THCHAUST03 [36.889 MB]
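  The CDM API itself is not shown in the abstract. The following Python sketch only illustrates the mapping idea it describes: the reduction code asks for logical keywords from an application definition, and a per-institute mapping resolves each keyword to a physical location in the file. The mapping tables, class and function names are hypothetical, and h5py stands in for a real format plugin.

```python
# Hypothetical illustration of the CDM "mapping" concept: one mapping
# per (institute, file format) translates logical keywords from an
# application definition into physical HDF5 paths.
import h5py

# SAXS application definition mapped onto two different file layouts.
SOLEIL_NEXUS_SAXS = {
    "detector_image": "/entry/instrument/detector/data",
    "sample_name": "/entry/sample/name",
}
OTHER_FACILITY_SAXS = {
    "detector_image": "/scan_0/data/image",
    "sample_name": "/scan_0/sample",
}


class MappedDataset:
    """Resolves application-definition keywords through a mapping, so
    the reduction code never sees the physical file organisation."""

    def __init__(self, path: str, mapping: dict[str, str]):
        self._file = h5py.File(path, "r")
        self._mapping = mapping

    def get(self, keyword: str):
        return self._file[self._mapping[keyword]][()]


def reduce_saxs(dataset: MappedDataset) -> None:
    # Purely "scientific" code: it only knows the logical keywords,
    # so it works unchanged on files from either facility.
    image = dataset.get("detector_image")
    print("sample:", dataset.get("sample_name"), "image shape:", image.shape)
```

  Supporting a new institute then amounts to supplying a new mapping (and, in the real CDM, a data access plugin for its file format) while the reduction function stays untouched.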