Paper | Title | Page

ROAB01 | Software Engineering Processes Used to Develop the NIF Integrated Computer Control System | 500
R. W. Carey, R. D. Demaret, L. J. Lagin, U. P. Reddi, P. J. Van Arsdall, A. P. Ludwigsen (LLNL, Livermore)
The National Ignition Facility (NIF) at Lawrence Livermore National Laboratory is a 192-beam laser facility for high-energy-density physics experiments. NIF is operated by the Integrated Computer Control System (ICCS), which comprises 60,000 devices deployed on 850 computers. The software is built on an object-oriented framework that uses CORBA for distribution. ICCS is 85% complete, with over 1.5 million lines of verified code now deployed online. The success of this large-scale project hinged on early adoption of rigorous software engineering practices, including architecture, code design, configuration management, product integration, and formal verification testing. Verification testing is performed in a dedicated test facility after developer integration. These processes are augmented by an overarching quality assurance program featuring assessment of quality metrics and corrective actions. Engineering processes are formally documented, and releases are managed by a change control board. This talk discusses the software engineering processes and the results obtained for the NIF control system.
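
As a rough illustration of the kind of object-oriented, CORBA-distributed device framework the abstract describes, the Python sketch below models a device base class, a concrete device, and a registry standing in for a naming service. This is a minimal sketch under stated assumptions, not the ICCS API; all class and method names here are invented for illustration.

```python
# Illustrative sketch only: a minimal device-framework pattern in the spirit of
# an object-oriented control framework with remotely distributed device objects.
# FrameworkDevice, ShutterDevice, and DeviceRegistry are hypothetical names,
# not actual ICCS classes.
from abc import ABC, abstractmethod


class FrameworkDevice(ABC):
    """Base class every framework device implements."""

    def __init__(self, name: str):
        self.name = name

    @abstractmethod
    def status(self) -> dict:
        """Return the device's current state for supervisory software."""


class ShutterDevice(FrameworkDevice):
    """Example concrete device; in a distributed system this would wrap a
    remote (CORBA-like) object reference rather than local state."""

    def __init__(self, name: str):
        super().__init__(name)
        self._open = False

    def open(self) -> None:
        self._open = True

    def status(self) -> dict:
        return {"name": self.name, "open": self._open}


class DeviceRegistry:
    """Central lookup, standing in for a CORBA naming service."""

    def __init__(self):
        self._devices = {}

    def register(self, device: FrameworkDevice) -> None:
        self._devices[device.name] = device

    def lookup(self, name: str) -> FrameworkDevice:
        return self._devices[name]


if __name__ == "__main__":
    registry = DeviceRegistry()
    registry.register(ShutterDevice("beamline01/shutter"))
    shutter = registry.lookup("beamline01/shutter")
    shutter.open()
    print(shutter.status())
```

In a deployed system of this kind, the registry's role would be played by the distribution middleware, with device proxies referring to objects spread across the front-end and supervisory computers; this sketch only shows the client-side pattern.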
Slides
ROAB02 | Software Development and Testing: Approach and Challenges in a Distributed HEP Collaboration | 503
D. Burckhart-Chromek (CERN, Geneva)
In the development of the ATLAS Trigger and Data Acquisition (TDAQ) software, the iterative waterfall model, evolutionary process management, formal software inspection, and lightweight review techniques are applied. The long preparation phase with a geographically widespread team required that these standard techniques be adapted to the HEP environment. Special emphasis is given to the testing process. Unit tests and check targets in nightly project builds form the basis for the subsequent release testing of the software project. The integrated software is then run on computing farms, which provide further opportunities for gaining experience, finding faults, and gathering ideas for improvement. Dedicated tests on a farm of up to 1000 nodes address the large-scale aspects of the project. Integration test activities on the experimental site include the purpose-built event readout hardware. Deployment during detector commissioning starts the countdown towards running the final ATLAS experiment. These activities aim to understand and complete this complex system, but they also help to form a team whose members have a variety of expertise, working cultures, and professional backgrounds.
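
As a hedged illustration of the unit tests and nightly check targets the abstract mentions, the sketch below shows a small Python unit test of the kind a build's check target could run each night. It is not TDAQ code: the module under test and its merge_fragments() function are invented stand-ins, and the actual TDAQ build system is not shown.

```python
# Illustrative sketch only: a unit test of the kind a nightly "check" target
# could run after each project build. merge_fragments() is a hypothetical
# stand-in for a real unit under test.
import unittest


def merge_fragments(fragments):
    """Stand-in for a unit under test: concatenate event fragments in arrival order."""
    return b"".join(fragments)


class MergeFragmentsTest(unittest.TestCase):
    def test_merge_preserves_all_bytes(self):
        fragments = [b"aaa", b"b", b"cc"]
        merged = merge_fragments(fragments)
        self.assertEqual(len(merged), sum(len(f) for f in fragments))

    def test_empty_input_gives_empty_event(self):
        self.assertEqual(merge_fragments([]), b"")


if __name__ == "__main__":
    # A nightly build could invoke its check target along the lines of
    #   python -m unittest discover -s tests
    unittest.main()
```

In a process like the one described, such per-package tests would have to pass in the nightly build before the integrated release testing and the farm-scale runs begin.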
Slides
ROAB03 | Software Integration and Test Techniques in a Large Distributed Project: Evolution, Process Improvement, Results | 508
M. Pasquato, P. Sivera (ESO, Garching bei Muenchen)
The Atacama Large Millimeter Array (ALMA) is a radio telescope being built in Chile. The software development for the project is entrusted to the Computing Integrated Product Team (IPT), which is responsible for delivering an end-to-end software system consisting of different subsystems, each with its own development areas. Within the Computing IPT, the Integration and Test subsystem has the role of collecting the software produced, building and testing it, and preparing releases. In this paper, the complexity of the software integration and test tasks is analyzed, and the problems caused by the wide geographical distribution of the developers and the variety of software features to be integrated are highlighted. The techniques implemented are discussed, among them the use of a common development framework (the ALMA Common Software, or ACS), the use of standard development hardware, and the organization of the developers' work in Function Based Teams (FBTs). Frequent automatic builds and regression tests, repeated regularly on so-called Standard Test Environments (STEs), are also routinely used. The benefits and shortcomings of the adopted solutions are presented.
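
As a rough illustration of the frequent automatic builds and regression tests the abstract describes for the Standard Test Environments, the Python sketch below drives a checkout/build/regression cycle and reports the outcome. The step commands are placeholder echo calls, not the actual ALMA or ACS tooling, and the whole driver is an assumption of this sketch.

```python
# Illustrative sketch only: a minimal nightly build-and-regression driver of the
# kind that could run on a Standard Test Environment (STE). The commands below
# are placeholders; a real driver would invoke the project's own build and test
# tools for each subsystem in dependency order.
import datetime
import subprocess
import sys

STEPS = [
    ("checkout", ["echo", "checking out tagged subsystem modules"]),
    ("build", ["echo", "building all subsystems"]),
    ("regression tests", ["echo", "running regression suite"]),
]


def run_nightly() -> int:
    """Run all steps, stop at the first failure, and print a short report."""
    stamp = datetime.datetime.now().isoformat(timespec="seconds")
    lines = [f"Nightly run {stamp}"]
    status = 0
    for label, cmd in STEPS:
        result = subprocess.run(cmd, capture_output=True, text=True)
        ok = result.returncode == 0
        lines.append(f"{label}: {'OK' if ok else 'FAILED'}")
        if not ok:
            status = 1
            break
    print("\n".join(lines))
    return status


if __name__ == "__main__":
    sys.exit(run_nightly())
```

Running the same driver on identically configured STEs is what makes the nightly results comparable across sites; the placeholder steps above only show the shape of such a cycle.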
Slides
ROAB04 | Experience of Developing BEPCII Control System | 511
J. Zhao (IHEP Beijing, Beijing)
The project to upgrade the Beijing Electron Positron Collider (BEPC) to BEPCII started in autumn 2001, with the goal of reaching a higher luminosity of 1×10^33 cm^-2 s^-1. The first beams were stored in the storage ring in November 2006, and the e+/e- beams successfully collided in March 2007, an important milestone for BEPCII. The BEPCII control system has been rebuilt following the "standard model" with EPICS; it has 20,000 channels and about 30 VME IOCs for equipment control and high-level applications. The control system was put into operation in November 2007, and its development followed the schedule and finished on time. Over the past few years, the project went through the design, R&D, system development, testing, and installation and commissioning stages. This paper describes experiences and lessons learned in designing and developing the system, including the design considerations, the selection of standard hardware and software, the building of the development environment, and the work done in the user requirements, R&D, and other stages. The paper also discusses project management issues such as interface definition, collaboration, and staff training.
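
As a hedged illustration of the EPICS channel-based equipment control the abstract refers to, the Python sketch below reads and writes process variables over Channel Access. It assumes the pyepics client package, which is not mentioned in the abstract, and the PV names are invented examples, not actual BEPCII channels.

```python
# Illustrative sketch only: client-side access to EPICS process variables over
# Channel Access using pyepics. The PV names are hypothetical examples, and
# pyepics itself is an assumption of this sketch.
import epics

# Hypothetical PV names for a power supply setpoint and its readback.
SETPOINT_PV = "DEMO:PS01:CurrentSet"
READBACK_PV = "DEMO:PS01:CurrentRead"


def ramp_current(target_amps: float):
    """Write a setpoint and return the readback value (None if disconnected)."""
    epics.caput(SETPOINT_PV, target_amps, wait=True, timeout=5.0)
    return epics.caget(READBACK_PV, timeout=5.0)


if __name__ == "__main__":
    readback = ramp_current(120.0)
    print(f"readback: {readback}")
```

In a system like the one described, such channels would be served by the VME IOCs, with operator displays and high-level applications built on top; this sketch only shows the client-side access pattern.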
Slides