Test writing guidelines

Test types

Testing in certification is concerned with black-box tests of installed and configured node types. We do not write component tests; instead we write

  • Integration tests
  • System tests

Integration tests verify that the software is installable and configurable. They also check basic functionality of the component that does not depend on other grid services (e.g. that a daemon is up and running).

System tests verify that the component works within the grid, interacting with other grid services.

Test category: Integration Tests
  1. Installation (test location: on install node)
  2. Configuration (test location: on install node)
  3. Service ping test: basic test whether the service is up and running (test location: on install node or via another node)

Test category: System Tests
  1. Functionality: test fully supported functionality, including APIs and CLI (test location: on install node or via another node)
  2. Stress (test location: on install node or via another node)
  3. Interoperability (test location: on install node or via another node)
  4. Security (test location: on install node, via another node, or audit)

Test writing should first focus on Integration Test No. 3 and System Test No. 1. Only once these are available should one consider writing stress and security tests.

Integration Tests No. 1 and 2 are done manually by the certifier and/or automated within the build system. Interoperability tests are currently done manually.

Writing the test plan

Before starting to write tests we require a test plan. The test plan is not a manual for a test suite; it is meant to describe which functionality of a component will be tested. A CLI must be tested both with correct input, to verify the normal workflow, and with erroneous input, to verify that the proper error messages are returned. The test plan can be delivered as a TWiki page or an EGEE document registered in Indico. A test plan template that can be used is available: TestPlanTemplate

We require the following test coverage:

Service ping tests

All parts (daemons) of the service that are mentioned in the documentation.
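As an illustration, a service ping test might look like the following Bash sketch; the daemon name, host and port are hypothetical placeholders and would be replaced by the daemons documented for the node type.

    #!/bin/bash
    # Minimal service ping test (sketch): check that a daemon process is
    # running and that its TCP port answers. The daemon name, host and
    # port below are hypothetical examples, configurable via the environment.

    SERVICE_NAME=${SERVICE_NAME:-"mydaemon"}
    SERVICE_HOST=${SERVICE_HOST:-"localhost"}
    SERVICE_PORT=${SERVICE_PORT:-2811}

    # Is the daemon process alive?
    if ! pgrep -x "$SERVICE_NAME" > /dev/null 2>&1; then
        echo "ERROR: daemon $SERVICE_NAME is not running"
        exit 1
    fi

    # Does the service answer on its TCP port (requires netcat)?
    if ! nc -z "$SERVICE_HOST" "$SERVICE_PORT" > /dev/null 2>&1; then
        echo "ERROR: $SERVICE_HOST:$SERVICE_PORT does not answer"
        exit 1
    fi

    echo "OK: $SERVICE_NAME is up and answering on port $SERVICE_PORT"
    exit 0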

Functionality tests

  • All examples mentioned in the documentation
  • CLI: every command of the CLI, every available option
  • API: every function or method

For test cases we require combinations of options and values that represent typical use cases on the grid. Testing all possible combinations of options for CLI commands is not required for the first release of the tests, but an ever-increasing coverage should be aimed at in later releases. Consider techniques like cause-effect graphing and decision tables to get a clear view of the different combinations of options.
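As a rough sketch of how a decision table can drive such a test, the following Bash fragment runs a hypothetical command (glite-example-cli, a placeholder, not a real CLI) over a small table of option combinations and expected exit codes representing typical use cases.

    #!/bin/bash
    # Sketch: drive a CLI test from a small decision table of option
    # combinations. "glite-example-cli", its options and the expected
    # exit codes are hypothetical placeholders for the command under test.

    opts=( "--verbose"
           "--output /tmp/out.txt"
           "--verbose --output /tmp/out.txt"
           "--badoption" )
    expected=( 0 0 0 1 )

    failed=0
    i=0
    while [ $i -lt ${#opts[@]} ]; do
        glite-example-cli ${opts[$i]}
        rc=$?
        if [ $rc -ne ${expected[$i]} ]; then
            echo "FAIL: 'glite-example-cli ${opts[$i]}' returned $rc, expected ${expected[$i]}"
            failed=1
        else
            echo "PASS: glite-example-cli ${opts[$i]}"
        fi
        i=$((i+1))
    done
    exit $failed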

The requirements are similar for API tests. Testing different input values for functions/methods is considered part of the component tests, which are the responsibility of the developers. However, bearing in mind that the component is being tested within a grid infrastructure, giving different values to API functions/methods can also be considered system testing. The approach is as follows:

  • Component test developer: Develop extensive unit tests on function/methods. Techniques like Equivalence partitioning and Boundary value analysis should help to define a reasonable number of tests.
  • Functional test developer: Develop API tests that only use reasonable input values (based on the analysis done by the component test developer). Cause-effect graphing and decision tables might help to narrow down the possible input values.

Initially, API tests should exercise every function/method at least once, using input values typical of grid usage. In later releases test coverage should be increased following the above procedure.
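As a hedged illustration of boundary value analysis applied at this level, the sketch below probes a hypothetical numeric option with values just inside and just outside an assumed valid range; the command, the option and the range 1..86400 are placeholders, not part of any real interface.

    #!/bin/bash
    # Sketch: boundary value analysis for a numeric parameter.
    # "glite-example-cli --lifetime N" and its valid range 1..86400 are
    # hypothetical; substitute the real command and its documented limits.

    failed=0
    for value in 0 1 2 86399 86400 86401; do
        glite-example-cli --lifetime "$value"
        rc=$?
        if [ "$value" -ge 1 ] && [ "$value" -le 86400 ]; then
            # Inside the valid range: the command should succeed.
            [ $rc -eq 0 ] || { echo "FAIL: lifetime $value rejected (rc=$rc)"; failed=1; }
        else
            # Outside the valid range: the command should fail cleanly.
            [ $rc -eq 0 ] && { echo "FAIL: lifetime $value accepted"; failed=1; }
        fi
    done
    exit $failed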

Test script writing guidelines

  • Make sure that every test you write belongs to only one entry of the above table.
  • A test script should be written in one of the following scripting languages: Bash, Python or Perl. If in doubt, use Bash. API tests of course use the language of the API.
  • If the test has prerequisites on the grid (e.g. a valid VOMS proxy), this must be documented and a check at the beginning of the test has to be implemented (a sketch is shown after this list).
  • More complicated tests should be built using simple ones.
  • Tests should be written in such a way that they can be integrated into different frameworks. To achieve this, one should be able to execute a test on the command line without the use of any framework. Currently it is most likely that tests will be integrated into Nagios, and possibly also SAM.
  • Tests must be fully configurable, either by options to the script or by a configuration file (key/value pairs or environment variables for export). There must be no hardcoded paths etc.
  • Test scripts must be publicly accessible and documented.
  • Tests should have simple output (text or simple HTML). It should be possible to add a post-processing step to a test such that its output can be transformed for use with a particular presentation framework.
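The following sketch illustrates the prerequisite and configurability points above: it refuses to run without a valid VOMS proxy and takes its target from environment variables instead of hardcoded values. The variable names and the tested endpoint are assumptions made for this example only.

    #!/bin/bash
    # Sketch of a framework-independent test script: check prerequisites
    # first, take all configuration from the environment (or options),
    # and keep the output plain text. TARGET_HOST/TARGET_PORT are
    # hypothetical configuration variables for this example.

    # Prerequisite: a valid VOMS proxy must be available.
    if ! voms-proxy-info --exists --valid 0:10 > /dev/null 2>&1; then
        echo "ERROR: no valid VOMS proxy (at least 10 minutes lifetime required)"
        exit 1
    fi

    # Configuration via environment variables, no hardcoded paths or hosts.
    TARGET_HOST=${TARGET_HOST:?"TARGET_HOST not set"}
    TARGET_PORT=${TARGET_PORT:-2811}

    echo "Testing service on $TARGET_HOST:$TARGET_PORT"
    # ... actual test steps go here ...
    exit 0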

Focus on writing the test script. The CERN certification team can do the integration into a test framework.

Regression tests

Regression tests are tests bound to specific bugs. A regression test is used to verify whether a bug is fixed or not. Whenever possible, when certifying a patch, regression tests for the bugs associated with the patch have to be written. The readme file here explains how to write regression tests and how to integrate them into the existing framework.
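A regression test is just an ordinary test script tied to one bug; the sketch below uses a made-up bug number, command and scenario purely to illustrate the pattern of reproducing the reported failure and exiting non-zero if it reappears.

    #!/bin/bash
    # Regression test sketch for a hypothetical bug, e.g. bug #12345:
    # "command fails when the output file already exists".
    # The bug number, command and scenario are made up for illustration.

    OUTFILE=$(mktemp) || exit 1
    trap 'rm -f "$OUTFILE"' EXIT

    # Reproduce the originally reported scenario: run the command twice
    # against the same output file.
    glite-example-cli --output "$OUTFILE"
    glite-example-cli --output "$OUTFILE"
    rc=$?

    if [ $rc -ne 0 ]; then
        echo "FAIL: bug #12345 has reappeared (exit code $rc)"
        exit 1
    fi

    echo "OK: bug #12345 is still fixed"
    exit 0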

Test repository

We keep tests in CVS. Have a look at our list of available tests. Your tests will be added to this list.

Nagios integration

TBD

SAM integration

For reference we keep some recommendations for SAM. Bear in mind that SAM is no longer the target framework for the tests.

For SAM integration consider these points:

  • For an introduction to SAM have a look at the SAM Overview page
  • The CERN certification team collects tests in a public CVS repository: org.glite.testsuites.ctb.
  • Take a look at the already existing simple example sandbox. It would be nice if all test results had a similar layout.
  • Don't use your own exit codes. Use the SAM exit codes: $SAME_ERROR, $SAME_OK, $SAME_INFO, etc. (a sketch is shown after this list).
  • If possible use the common functions to maintain your layout. They are defined in function.sh.
  • The first parameter of the script is the name of the node on which the test is executed. Make use of this parameter and do not rely on any further positional parameters ($2, $3, ...).
  • If compilation is necessary then it is best to do it on the fly (see the hostcert-check test).
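As a hedged sketch of these points, a SAM wrapper around an existing test script might look as follows; the node-name handling and exit code variables follow the conventions above, while the sourced functions file path and the wrapped script name are assumptions that should be checked against the SAM documentation.

    #!/bin/bash
    # SAM wrapper sketch: $1 is the node the test runs against and the
    # exit code is taken from the SAM-provided variables. The sourced
    # functions file and the wrapped test script name are placeholders.

    NODE="$1"
    if [ -z "$NODE" ]; then
        echo "No node name given as first parameter"
        exit ${SAME_ERROR:-1}
    fi

    # Common layout helper functions provided by SAM (path is an assumption).
    [ -f ./function.sh ] && . ./function.sh

    # Run the ordinary, framework-independent test against the given node.
    TARGET_HOST="$NODE" ./service-ping-test.sh
    if [ $? -ne 0 ]; then
        echo "Test failed on node $NODE"
        exit ${SAME_ERROR:-1}
    fi

    echo "Test succeeded on node $NODE"
    exit ${SAME_OK:-0}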

Two interesting documents by Domenico Vicinanza: A document describing how to integrate tests into SAM and a presentation on "ALICE VOBOX tests integration within SAM environment".

SAM documentation page.

-- AndreasUnterkircher - 01 Aug 2007
