Summary of strategy for DM testing

The following broad strategy for testing within IT-GT-DMS was agreed:

Unit Tests

We will use the following unit test frameworks:

CppUnit (C++), Check (C), JUnit (Java)

Their use must not introduce any new dependencies into the released packages.

Unit tests will be integrated into ETICS, which has hooks for this.

FTS will retire the mock server currently in use, replacing it with simpler unit tests and moving the other required tests to integration testing.

Where possible, test cases should be implemented as unit tests, in order to catch problems as early in the lifecycle as possible.
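As an illustration of the xUnit pattern shared by the frameworks listed above, here is a minimal sketch using Python's unittest module; the component under test (a hypothetical SRM URL parser, with illustrative names) is not from any DM codebase:

```python
import unittest

def parse_surl(surl):
    """Hypothetical helper: split an SRM URL into (host, path)."""
    if not surl.startswith("srm://"):
        raise ValueError("not an SRM URL: %s" % surl)
    rest = surl[len("srm://"):]
    host, _, path = rest.partition("/")
    return host, "/" + path

class ParseSurlTest(unittest.TestCase):
    """Each test exercises one small, isolated behaviour."""

    def test_valid_surl(self):
        host, path = parse_surl("srm://se.example.org/dpm/example.org/file")
        self.assertEqual(host, "se.example.org")
        self.assertEqual(path, "/dpm/example.org/file")

    def test_invalid_scheme_rejected(self):
        self.assertRaises(ValueError, parse_surl, "http://example.org/file")

# Run the tests programmatically (no interpreter exit).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ParseSurlTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The same structure (one isolated behaviour per test method, grouped into a fixture) carries over directly to CppUnit, Check and JUnit.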

ETICS and post-build testing

We aim to use the automated testing capabilities of ETICS where possible. The plan is to work towards full functional testing, proceeding in steps, each of which will bring value:

  • Deployment tests - automatically upgrade a production-level instance of a service with build results, to ensure the results are installable
  • Configuration tests - check that the configuration of the previously installed node works correctly
  • Local tests - run any available local tests to verify the correct functioning of the node
  • Client/server functional tests - use the node as a networked service (where appropriate) by running client/server tests
  • System tests - test the node in a full grid context, with one or more external services, as required for a full end-to-end workflow

The above testing scenarios will be tackled in order.
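A minimal sketch of the first two steps (deployment and configuration checks), assuming an RPM/yum node and a service init script; the package name, repository name and service name are all hypothetical:

```python
import subprocess

def upgrade_command(packages, repo="etics-build"):
    """Hypothetical: the yum command used to upgrade a test node with
    freshly built packages from a build-result repository."""
    return ["yum", "-y", "--enablerepo=" + repo, "upgrade"] + list(packages)

def deployment_test(packages, service):
    """Upgrade the node, then verify the service restarts cleanly.
    Any non-zero exit code means the build is not installable."""
    steps = [
        upgrade_command(packages),
        ["service", service, "restart"],
        ["service", service, "status"],
    ]
    return all(subprocess.call(cmd) == 0 for cmd in steps)

# Example (on a disposable test node only):
#   deployment_test(["glite-fts-server"], "fts-server")
```

Keeping each step a plain command list makes it straightforward for a harness such as ETICS to run, log and time the steps individually.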

Integration tests and monitoring

The development teams will produce regular internal releases, which will be deployed on a permanently monitored testbed.

This testbed will benefit from the SAM tests (Nagios implementation), and we will try to integrate as many existing certification (or other) tests as possible.
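Probes for the SAM/Nagios framework follow the standard Nagios plugin convention: a single line of output and an exit code of 0 (OK), 1 (WARNING), 2 (CRITICAL) or 3 (UNKNOWN). A sketch of such a probe, with a hypothetical metric and illustrative thresholds:

```python
# Standard Nagios plugin exit codes.
OK, WARNING, CRITICAL, UNKNOWN = 0, 1, 2, 3

def probe_transfer_latency(latency_s, warn=30.0, crit=120.0):
    """Hypothetical probe: map a measured transfer latency onto a
    Nagios status and message; thresholds are illustrative."""
    if latency_s >= crit:
        return CRITICAL, "transfer latency %.1fs >= %.1fs" % (latency_s, crit)
    if latency_s >= warn:
        return WARNING, "transfer latency %.1fs >= %.1fs" % (latency_s, warn)
    return OK, "transfer latency %.1fs" % latency_s

# A real probe would print the message and sys.exit(status).
```

Because existing certification tests mostly already produce a pass/fail result, wrapping them in this exit-code convention is usually all that is needed to integrate them.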

The testbed will also be monitored by GStat 2.0 to verify information system correctness.

Certification

Certification testing will include all integration tests, plus any remaining tests, which may have to be run manually.

Certification is performed on a strictly controlled set of packages, in a well-defined environment, and its results are logged and archived. It is not performed on all internal checkpoint releases, only on those intended for production.

We will try to minimise the duplicated effort between integration testing and certification.

The section will likely have to maintain production-level endpoints of its own services for use by other Product Teams in their own certification.

Regression tests

We will maintain a list of bugs which are candidates for regression tests. These can subsequently be implemented by our test writers, particularly short-term visitors.

The section will conduct a review of existing open issues, which will include logging of those bugs for which regression tests should be created.
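One convenient convention is to name each regression test after its tracker ID, so that the test suite itself records which bugs are covered. A sketch, with a hypothetical bug number and a hypothetical fixed function:

```python
import unittest

def lfn_basename(lfn):
    """Hypothetical fixed function: return the last path component of a
    logical file name, tolerating a trailing slash (the reported bug)."""
    return lfn.rstrip("/").rsplit("/", 1)[-1]

class RegressionTests(unittest.TestCase):
    """One test per closed bug; the method name records the tracker ID,
    so the test list doubles as the regression-candidate list."""

    def test_bug_12345_trailing_slash(self):
        # Hypothetical tracker ID: before the fix, a trailing slash
        # produced an empty basename.
        self.assertEqual(lfn_basename("/grid/dteam/file1/"), "file1")

# Run the tests programmatically (no interpreter exit).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(RegressionTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Tests this small are well suited to short-term visitors: the bug report supplies the reproduction recipe, and the test simply encodes it.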

System and post-release testing

The EGEE and EMI release processes involve a 'staged rollout' whereby early-adopter sites install releases in production before those releases are marked as full production releases.

In addition, the DMS section maintains relationships with various interested sites, who are willing to install pre-releases in production.

The question is how to profit from both of these mechanisms without duplication of effort or unnecessary delay in releases.

The agreed strategy is to continue the direct provision of certified patches to partner sites, supported directly by the DMS team. This can be done before official release to 'staged rollout'. Where the staged rollout provides further useful exposure, it will be used. If not, patch release notes will be updated with details of the already-completed production-level testing, and sites will consequently be encouraged to install services during the 'staged rollout' phase, as the current EGEE process permits.

After certification, releases can also be made available, where possible, to external testbeds such as the one maintained by ATLAS, allowing further early exposure to production workflows.

Test Plans

We will maintain test plans for all components to keep a record of what tests are required.

Test plans can be found at https://twiki.cern.ch/twiki/bin/view/EGEE/SA3Testing

Additional functionality should result in an update of the test plan.

Testing Infrastructure

The following classes of test resource are envisaged:

  • Monitored Integration testbed
  • Certification testbed + virtual machines
  • ETICS nodes for build and subsequent deployment tests
  • Production level instances of services to be tested against
  • Are nodes dedicated to 'chaotic' developer testing required?

Testing infrastructure will be provided centrally by the IT-GT-SL team.

-- OliverKeeble - 04-Feb-2010
