
Gonzalez-Berges, M.

Paper | Title | Page
ROPA02 | The High Performance Database Archiver for the LHC Experiments | 517
 
  • M. Gonzalez-Berges
    CERN, Geneva
 
  Each of the Large Hadron Collider (LHC) experiments will be controlled by a large distributed system built with the SCADA tool PVSS. There will be about 150 computers and millions of input/output channels per experiment. The values read from the hardware, the alarms generated, and the user actions will be archived for physics analysis and for debugging of the control system itself. Although the original PVSS implementation of a database archiver was adequate for standard industrial use, its performance was insufficient for this scale. A collaboration was therefore set up between CERN and ETM, the company that develops PVSS. Changes to the architecture and several optimizations were made and tested on a system of a size comparable to the final ones. As a result, performance has improved by more than an order of magnitude and, more importantly, the architecture is now scalable, being based on Oracle clustering technology (Real Application Cluster, or RAC). It can meet the requirements for insertion rate, data querying, and manageability of the high data volume (e.g., an insertion rate of more than 150,000 changes/s was achieved with a 6-node RAC cluster).
Slides
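
The abstract reports the achieved rate but not the mechanism. As a rough illustration of the key technique behind such insertion rates, batching many value changes and writing them in a single array-bind round trip, here is a minimal sketch using the python-oracledb driver; the eventhistory table, its columns, and the archive() helper are hypothetical stand-ins for illustration, not the actual PVSS/ETM archiver code.

    # Minimal sketch: batched archiving of value changes into Oracle.
    # Assumptions (not from the paper): the python-oracledb driver, a
    # hypothetical EVENTHISTORY(element_id, ts, value) table, and a
    # stream of (element_id, timestamp, value) tuples from the SCADA layer.
    import oracledb  # pip install oracledb

    BATCH_SIZE = 10_000  # large batches amortize network round trips

    INSERT_SQL = (
        "INSERT INTO eventhistory (element_id, ts, value) "
        "VALUES (:1, :2, :3)"
    )

    def archive(conn, changes):
        """Insert an iterable of (element_id, timestamp, value) tuples in bulk."""
        cur = conn.cursor()
        batch = []
        for change in changes:
            batch.append(change)
            if len(batch) >= BATCH_SIZE:
                cur.executemany(INSERT_SQL, batch)  # one array-bind round trip
                conn.commit()
                batch.clear()
        if batch:  # flush the final partial batch
            cur.executemany(INSERT_SQL, batch)
            conn.commit()

Scalability in a RAC setup would then come from running several such archiver processes, each connected to a different cluster node; how the real archiver distributes its load across the six nodes is not described in the abstract.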
RPPB07 | The System Overview Tool of the Joint Controls Project (JCOP) Framework | 618
 
  • M. Gonzalez-Berges, F. Varela
    CERN, Geneva
  • K. D. Joshi
    BARC, Mumbai
 
  For each control system of the Large Hadron Collider (LHC) experiments there will be many processes spread over many computers. Together they will form a PVSS distributed system of around 150 computers organized in a hierarchical fashion. A centralized tool has been developed for supervision, error identification, and troubleshooting in such a large system; a quick response to abnormal situations will be crucial to maximize the usage of the experiment for physics. The tool gathers data from all the systems via several paths (e.g., process monitors, the internal database) and, after some processing, presents them in different views: a hierarchy of systems, a host view, and a process view. Relations between the views are provided to help in understanding complex problems that involve more than one system. The information presented to the shift operator can also be filtered according to several criteria (e.g., node, process type, process state). Alarms are raised when undesired situations are detected. The gathered data are stored in the historical archive for further analysis. Extensions of the tool are under development to integrate information from other sources (e.g., the operating system, hardware).
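
The abstract names the tool's three views and its filtering and alarm features without showing internals (the real tool is implemented within PVSS, not Python). Below is a minimal sketch, with hypothetical record fields and state names, of how one shared set of gathered process records could back the host view, the system view, operator filters, and alarm generation.

    # Minimal sketch of the views described above. All names and states
    # are hypothetical illustrations, not the JCOP framework's actual API.
    from collections import defaultdict
    from dataclasses import dataclass

    @dataclass
    class ProcessRecord:
        system: str   # PVSS distributed system the process belongs to
        host: str     # computer the process runs on
        name: str     # process name
        state: str    # e.g., "running", "blocked", "stopped"

    def host_view(records):
        """Group records by computer (the 'host view')."""
        view = defaultdict(list)
        for r in records:
            view[r.host].append(r)
        return view

    def system_view(records):
        """Group records by PVSS system (one level of the hierarchy)."""
        view = defaultdict(list)
        for r in records:
            view[r.system].append(r)
        return view

    def filter_records(records, host=None, state=None):
        """Restrict the operator's view, e.g., only stopped processes on one node."""
        return [r for r in records
                if (host is None or r.host == host)
                and (state is None or r.state == state)]

    def alarms(records, bad_states=("stopped", "blocked")):
        """One alarm message per process found in an undesired state."""
        return [f"{r.name} on {r.host} ({r.system}) is {r.state}"
                for r in records if r.state in bad_states]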