Author: Alnajjar, D.
Paper Title Page
THPV021 TATU: A Flexible FPGA-Based Trigger and Timer Unit Created on CompactRIO for the First Sirius Beamlines 908
 
  • J.R. Piton, D. Alnajjar, D.H.C. Araujo, J.L. Brito Neto, L.P. Do Carmo, L.C. Guedes, M.A.L. Moraes
    LNLS, Campinas, Brazil
 
  In modern synchrotron light sources, higher brilliance leads to shorter acquisition times at the experimental stations. For most beamlines of the fourth-generation source SIRIUS, it was imperative to shift from the usual software-based synchronization of operations to the much faster hardware triggering of some key equipment involved in the experiments. As the basis of their device control system, the SIRIUS beamlines have standard CompactRIO controllers and I/O modules along the hutches. Equipped with an FPGA and a hard processor running Linux Real-Time, this platform can handle triggers from and to other devices on the order of milliseconds and microseconds. TATU (Time and Trigger Unit) is code running on a CompactRIO unit to coordinate multiple triggering conditions and actions. TATU can be either the master pulse generator or a follower of other signals. Complex trigger patterns are generated from a user-friendly, standardized interface. EPICS process variables (by means of LNLS Nheengatu*) are used to set parameters and to follow the execution status. The concept and first field-test results from at least four SIRIUS beamlines are presented.
* D. Alnajjar, G. S. Fedel, and J. R. Piton, "Project Nheengatu: EPICS support for CompactRIO FPGA and LabVIEW-RT", ICALEPCS’19, New York, NY, USA, Oct. 2019, paper WEMPL002.
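The abstract describes TATU as coordinating multiple triggering conditions and actions, each configurable through EPICS process variables. A minimal, illustrative model of that core idea is sketched below in plain Python; the rule structure, signal names, and action vocabulary are assumptions for illustration, not the actual FPGA implementation or PV interface.

```python
# Illustrative sketch (NOT the actual TATU code): each output follows a
# user-configured rule mapping an input condition to an action, evaluated
# every cycle. Names like "P0", "I1_rising" are hypothetical placeholders.

from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Rule:
    condition: Callable[[dict], bool]  # predicate over the current input states
    action: str                        # e.g. "pulse", "high", "low"


def evaluate(rules: Dict[str, Rule], inputs: dict) -> Dict[str, str]:
    """Return the action commanded by each rule for the given input states."""
    return {
        name: (rule.action if rule.condition(inputs) else "idle")
        for name, rule in rules.items()
    }


# Example: output P0 pulses on a rising edge of input I1,
# while output P1 goes high whenever input I2 is high.
rules = {
    "P0": Rule(condition=lambda s: s["I1_rising"], action="pulse"),
    "P1": Rule(condition=lambda s: s["I2"], action="high"),
}

print(evaluate(rules, {"I1_rising": True, "I2": False}))
# {'P0': 'pulse', 'P1': 'idle'}
```

In the real system these rules live in the FPGA fabric for deterministic latency, and the parameters are set remotely through EPICS PVs exposed by Nheengatu rather than in Python.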
 
Poster THPV021 [0.618 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-THPV021
Received: 10 October 2021 • Accepted: 21 November 2021 • Issue date: 02 February 2022
 
FRBL05 RemoteVis: An Efficient Library for Remote Visualization of Large Volumes Using NVIDIA IndeX 1047
 
  • T.V. Spina, D. Alnajjar, M.L. Bernardi, F.S. Furusato, E.X. Miqueles, A.Z. Peixinho
    LNLS, Campinas, Brazil
  • A. Kuhn, M. Nienhaus
    NVIDIA, Santa Clara, USA
 
  Funding: We would like to thank the Brazilian Ministry of Science, Technology, and Innovation for the financial support.
Advancements in X-ray detector technology are increasing the amount of volumetric data available for material analysis in synchrotron light sources. Such developments are driving the creation of novel solutions to visualize large datasets both during and after image acquisition. Towards this end, we have devised a library called RemoteVis to visualize large volumes remotely in HPC nodes, using NVIDIA IndeX* as the rendering backend. RemoteVis relies on RDMA-based data transfer to move large volumes from local HPC servers, possibly connected to X-ray detectors, to remote dedicated nodes containing multiple GPUs for distributed volume rendering. RemoteVis then injects the transferred data into IndeX for rendering. IndeX is scalable software capable of using multiple nodes and GPUs to render large volumes in full resolution. As such, we have coupled RemoteVis with Slurm to dynamically schedule one or multiple HPC nodes to render any given dataset. RemoteVis was written in C/C++ and Python, providing an efficient API that requires only two functions to 1) start remote IndeX instances and 2) render regular volumes and point-cloud (diffraction) data in the web browser/Jupyter client.
*NVIDIA IndeX, https://developer.nvidia.com/nvidia-index
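The abstract states that the RemoteVis API reduces the workflow to two calls: starting remote IndeX instances and rendering a volume. A hypothetical sketch of that two-call flow is shown below with local stub functions; the function names, parameters, and return values are invented for illustration and are not the real RemoteVis API.

```python
# Hypothetical two-call workflow, modeled after the abstract's description.
# Both functions are local stubs standing in for the (unpublished here)
# RemoteVis API; in the real library, scheduling goes through Slurm and
# rendering through distributed NVIDIA IndeX instances.

def start_index_instances(n_nodes: int) -> dict:
    """Stub: request GPU rendering nodes and return a session handle."""
    return {"session": "demo", "nodes": n_nodes}


def render_volume(session: dict, volume_shape: tuple) -> str:
    """Stub: hand a regular volume to the remote instances for rendering."""
    return f"rendering {volume_shape} on {session['nodes']} node(s)"


session = start_index_instances(n_nodes=2)
print(render_volume(session, (2048, 2048, 2048)))
# rendering (2048, 2048, 2048) on 2 node(s)
```

The design point the abstract emphasizes is that the user never manages nodes, GPUs, or data movement directly: the first call hides the Slurm scheduling and the second hides the RDMA transfer and distributed rendering.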
 
Slides FRBL05 [12.680 MB]
DOI: https://doi.org/10.18429/JACoW-ICALEPCS2021-FRBL05
Received: 10 October 2021 • Revised: 28 October 2021 • Accepted: 20 November 2021 • Issue date: 01 March 2022