THBHMU — Quality Assurance (13-Oct-11, 10:45–12:30)
Chair: C.W. Saunders, ANL, Argonne, USA
Paper Title Page
THBHMUST01 Multi-platform SCADA GUI Regression Testing at CERN 1201
 
  • P.C. Burkimsher, M. Gonzalez-Berges, S. Klikovits
    CERN, Geneva, Switzerland
 
  Funding: CERN
The JCOP Framework is a toolkit used widely at CERN for the development of industrial control systems in several domains (experiments, accelerators and technical infrastructure). Software development started 10 years ago, and there is now a large base of production systems running it. For the success of the project, it was essential to formalize and automate the quality assurance process. The paper presents the overall testing strategy and describes in detail the mechanisms used for GUI testing. The choice of a commercial tool (Squish) and the architectural features that make it appropriate for our multi-platform environment are described. Practical difficulties encountered when using the tool in the CERN context are discussed, along with how they were addressed. In the light of initial experience, the test code itself has recently been reworked in an object-oriented style to facilitate future maintenance and extension. The paper concludes with a description of our initial steps towards incorporating full-blown Continuous Integration (CI) support.
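The object-oriented rework of GUI test code described above is typically done with a page-object style, where each panel's widget names live in exactly one class. A minimal sketch of that idea, not the authors' code: all class and widget names here are invented, and a stand-in object replaces the real Squish-driven GUI.

```python
class FakeGui:
    """Stand-in for the real GUI toolkit; records widget interactions."""
    def __init__(self):
        self.fields = {}

    def set_text(self, widget, text):
        self.fields[widget] = text

    def get_text(self, widget):
        return self.fields.get(widget, "")


class LoginPanel:
    """Page object: the only place that knows this panel's widget names,
    so a renamed widget touches one class, not every test script."""
    USER_FIELD = "loginDialog.userName"  # invented widget name

    def __init__(self, gui):
        self.gui = gui

    def enter_user(self, name):
        self.gui.set_text(self.USER_FIELD, name)

    def shown_user(self):
        return self.gui.get_text(self.USER_FIELD)


gui = FakeGui()
panel = LoginPanel(gui)
panel.enter_user("operator")
print(panel.shown_user())  # -> operator
```

Test scripts then talk only to `LoginPanel`, which is what makes the suite maintainable as the GUI evolves.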
 
slides icon Slides THBHMUST01 [1.878 MB]  
 
THBHMUST02 Assessing Software Quality at Each Step of its Lifecycle to Enhance Reliability of Control Systems 1205
 
  • V.H. Hardion, G. Abeillé, A. Buteau, S. Lê, N. Leclercq, S. Pierre-Joseph Zéphir
    SOLEIL, Gif-sur-Yvette, France
 
  A distributed software control system aims to enhance evolvability and reliability by sharing responsibility among several components. The disadvantage is that detecting problems across a significant number of modules is harder. In the Kaizen spirit, we chose to invest continuously in automation to obtain a complete overview of software quality despite the growth of legacy code. The development process was already well controlled, with each lifecycle step staged through a continuous integration server based on JENKINS and MAVEN. We enhanced this process focusing on three objectives: automatic testing, static code analysis and post-mortem supervision. The build process now automatically includes tests to detect regressions, wrong behaviour and integration incompatibilities. The in-house TANGOUNIT project addresses the difficulty of testing distributed components such as Tango devices. As a next step, the code has to pass a complete quality check-up: the SONAR quality server was integrated into the process to collect the results of each static code analysis and display the hot topics on synthetic web pages. Finally, the integration of Google BREAKPAD into every Tango device gives us essential statistics from crash reports and allows us to replay crash scenarios at any time. These measures already give us more visibility into current developments. Concrete results will be presented, such as enhanced reliability, better management of subcontracted software development, quicker adoption of coding standards by new developers, and a clearer understanding of the impact of moving to a new technology.
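The post-mortem supervision step boils down to aggregating parsed crash reports into statistics. A hedged sketch of that idea only — the dictionaries below are invented stand-ins, not real Breakpad minidumps, and the device names are made up:

```python
from collections import Counter

# Invented, pre-parsed crash reports (real Breakpad output would be
# minidumps processed into symbolized stack traces).
reports = [
    {"device": "motor/axis/1", "signal": "SIGSEGV"},
    {"device": "motor/axis/1", "signal": "SIGSEGV"},
    {"device": "vacuum/gauge/3", "signal": "SIGABRT"},
]

# Count crashes per device to surface the "hot topics".
by_device = Counter(r["device"] for r in reports)
worst, count = by_device.most_common(1)[0]
print(worst, count)  # -> motor/axis/1 2
```

Ranking devices by crash frequency is what turns raw crash dumps into the "essential statistics" the abstract mentions: effort goes first to the component that fails most.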
slides icon Slides THBHMUST02 [2.973 MB]  
 
THBHMUST03 System Design towards Higher Availability for Large Distributed Control Systems 1209
 
  • S.M. Hartman
    ORNL, Oak Ridge, Tennessee, USA
 
  Funding: SNS is managed by UT-Battelle, LLC, under contract DE-AC05-00OR22725 for the U.S. Department of Energy
Large distributed control systems for particle accelerators present a complex system engineering challenge. The system, with its significant quantity of components and their complex interactions, must be able to support reliable accelerator operations while providing the flexibility to accommodate changing requirements. System design and architecture focused on required data flow are key to ensuring high control system availability. Using examples from the operational experience of the Spallation Neutron Source at Oak Ridge National Laboratory, recommendations will be presented for leveraging current technologies to design systems for high availability in future large scale projects.
 
slides icon Slides THBHMUST03 [7.833 MB]  
 
THBHMUST04 The Software Improvement Process – Tools and Rules to Encourage Quality 1212
 
  • K. Sigerud, V. Baggiolini
    CERN, Geneva, Switzerland
 
  The Applications section of the CERN accelerator controls group has decided to apply a systematic approach to quality assurance (QA), the "Software Improvement Process" (SIP). This process focuses on three areas: the development process itself, suitable QA tools, and how to encourage developers, in practice, to do QA. For each stage of the development process we have agreed on the recommended activities and deliverables, and identified tools to automate and support the task. For example, we do more code reviews. As peer reviews are resource-intensive, we only do them for complex parts of a product; as a complement, we use static code-checking tools such as FindBugs and Checkstyle. We also encourage unit testing and have agreed on a minimum level of test coverage recommended for all products, measured using Clover. Each of these tools is well integrated with our IDE (Eclipse) and gives instant feedback to developers about the quality of their code. The major challenges of SIP have been (1) to agree on common standards and configurations, for example common code formatting and Javadoc documentation guidelines, and (2) to encourage the developers to do QA. To address the second point, we have successfully implemented 'SIP days', i.e. one day dedicated to QA work in which the whole group of developers participates, and 'Top/Flop' lists, clearly indicating the best and worst products with regard to SIP guidelines and standards, for example test coverage. This paper presents the SIP initiative in more detail, summarizing our experience over the past two years and our future plans.
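A 'Top/Flop' list is, mechanically, just a ranking of products by a quality metric against an agreed threshold. A minimal sketch of the idea — the product names, coverage figures, and the 60% threshold below are all invented for illustration, not CERN's actual numbers:

```python
# Invented per-product test-coverage percentages (Clover would supply
# the real figures in the authors' setup).
coverage = {"product-a": 82, "product-b": 35, "product-c": 67}
MIN_COVERAGE = 60  # assumed group-wide minimum, not the real SIP value

# Rank products from best to worst coverage.
ranked = sorted(coverage.items(), key=lambda kv: kv[1], reverse=True)
top = ranked[0][0]
# "Flops": products below the agreed minimum.
flops = [name for name, pct in ranked if pct < MIN_COVERAGE]

print(top)    # -> product-a
print(flops)  # -> ['product-b']
```

Publishing both ends of the ranking is the social lever: the top of the list rewards teams, the bottom makes the QA debt visible.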
slides icon Slides THBHMUST04 [5.638 MB]  
 
THBHMUKP05 Distributed Software Infrastructure for Scientific Applications
 
  • M. Livny
    University of Wisconsin-Madison, Madison, Wisconsin, USA
 
  For more than two decades we have been involved in developing and implementing distributed software tools that have been adopted by a broad spectrum of commercial and scientific infrastructures. Ranging from state-of-the-art rendering farms to distributed high-throughput computing facilities for the LHC community, our Condor software tools effectively and reliably manage large distributed infrastructures. These open-source tools are distributed and supported by commercial entities in support of enterprise-wide infrastructures and commercial applications. We believe that the design principles, the software development procedures and the software lifecycle practices we use are applicable to the Accelerator and LEPCS communities. Over the years we have learned the importance of information flow and policy decision points, as well as an appreciation for the challenges of logging and error propagation in dynamic environments. We have adopted continuous integration methodologies that are supported by a dedicated build-and-test facility using multiple packaging tools, and have devoted effort toward installation and configuration tools. A key element of our software methodology is that we use the tools we develop: we use them to manage a campus-wide computing infrastructure of more than 10K cores, as well as the more than 250 cores in our build-and-test facility that supports more than 20 software projects. We also make our infrastructure available to other communities. We work closely with our users and maintain ties with application developers that depend on our tools.
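For readers unfamiliar with Condor (now HTCondor), work enters such a facility through a submit description file. A minimal illustrative fragment — the executable and file names are invented, not taken from the talk:

```
# Hypothetical HTCondor submit description: run the same job ten times,
# with per-job stdout/stderr captured via the Cluster/Process macros.
executable   = analyze.sh
arguments    = run.dat
output       = analyze.$(Cluster).$(Process).out
error        = analyze.$(Cluster).$(Process).err
log          = analyze.log
request_cpus = 1
queue 10
```

The scheduler then matches each queued job to an available machine according to the pool's policy, which is the "policy decision point" mechanism the abstract refers to.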
slides icon Slides THBHMUKP05 [3.793 MB]