EMI product team (PT) testing and certification survey. Each product entry below answers the following:

Product Team / Product / Middleware: the PT or product(s) the entry covers.
Code Static/Integrity Analysis: Is it automated? Link to tool or documentation. Is it performed at every build?
Deployment Test: Is it automated? Link to tool or documentation. Is it performed at every build?
Unit Test: Is it automated? Link to tool or documentation. Is it performed at every build?
Functionality Tests: Is it automated? Link to tool or documentation. Is it performed at every build?
Integration Tests: Is it automated? Link to tool or documentation. Is it performed at every build?
Performance Tests: Do you perform them? Are they automated? Link to tool or documentation. Are they performed at every build?
Scalability Tests: Do you perform them? Are they automated? Link to tool or documentation. Are they performed at every build?
Framework Used: Link to the framework used.
Notes & Perspectives: Additional info or comments, plus ongoing work in the PT to automate part of the certification process.

ARC Products
Code Static/Integrity Analysis: Automated for every revision committed to trunk; http://arc-emi.grid.upjs.sk/revisionTests.php
Deployment Test: Work in progress.
Unit Test: Run as part of the revision tests (i.e., with every revision from trunk).
Functionality Tests: Automated; run once a day on the daily code snapshot from trunk; http://arc-emi.grid.upjs.sk/functionalTests.php
Integration Tests: No.
Performance Tests: Currently manual; http://wiki.nordugrid.org/index.php/Performance_testing
Scalability Tests: No.
Framework Used: Python scripts (PHP for results visualization); http://svn.nordugrid.org/trac/workarea/browser/ARCTestScripts (for the revision and functionality tests)

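The ARC revision and functional tests above are driven by Python scripts (see the SVN link). Purely as an illustrative sketch of that style of per-revision check, not the PT's actual code, a minimal unittest-based functional test might look like the following; the endpoint and the client invocation are hypothetical placeholders:

#!/usr/bin/env python
"""Minimal sketch of a per-revision functional test in the style of
the ARC test scripts linked above. This is NOT the PT's actual code:
the endpoint and client command are hypothetical placeholders."""
import subprocess
import unittest

SERVICE = "https://testbed.example.org/arex"  # hypothetical test endpoint


class FunctionalSmokeTest(unittest.TestCase):
    def run_client(self, *args):
        """Run a client command, returning (exit code, combined output)."""
        proc = subprocess.run(args, capture_output=True, text=True, timeout=120)
        return proc.returncode, proc.stdout + proc.stderr

    def test_service_query(self):
        # Querying the freshly built service must succeed.
        rc, out = self.run_client("arcinfo", SERVICE)
        self.assertEqual(rc, 0, "service query failed:\n" + out)


if __name__ == "__main__":
    unittest.main(verbosity=2)
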
dCache
Code Static/Integrity Analysis: Automated within the Jenkins framework.
Deployment Test: Manual.
Unit Test: Automated within the Jenkins framework.
Functionality Tests: Automated G2 and S2 tests for SRM.
Integration Tests: HTTP(S) transfer tests will be executed when feasible.
Performance Tests: None, apart from some manual SRM tests.
Scalability Tests: None.
Framework Used: Jenkins.
Notes & Perspectives: I would like to have automatic VM setup, but this is hard to achieve with centralized provisioning.

UNICORE
Code Static/Integrity Analysis: No.
Deployment Test: No automation.
Unit Test: Yes; performed regularly after SVN commits. https://unicore-dev.zam.kfa-juelich.de/bamboo/telemetry.action
Functionality Tests: Some functionality tests exist as JUnit tests, also covered by Bamboo.
Integration Tests: No automation.
Performance Tests: Some performance tests exist as JUnit tests, also covered by Bamboo.
Scalability Tests: No automation.
Framework Used: https://unicore-dev.zam.kfa-juelich.de/bamboo/telemetry.action
Notes & Perspectives: Automated package builds are supported, so automated deployment tests are within reach.

DPM, LFC, FTS
Code Static/Integrity Analysis: No.
Deployment Test: Automated on every nightly build (using Saket).
Unit Test: Automated on every nightly build (using Saket).
Functionality Tests: Automated on every nightly build (using Saket).
Integration Tests: Only among our own clients (lcgutil).
Performance Tests: Done manually (using PerfSuite).
Scalability Tests: Done manually (using PerfSuite).
Framework Used: Saket, Saket Nightlies, PerfSuite.
Notes & Perspectives: All certification tests are merged into the nightlies. Work is ongoing on automated upgrade tests in the nightlies.

gLite Computing (CREAM, WMS, BLAH, CEMON)
Code Static/Integrity Analysis: WMS: no. CREAM: no.
Deployment Test: WMS: not automated. CREAM: no.
Unit Test: WMS: no. CREAM: yes (BLAH, CEMon: no).
Functionality Tests: WMS: automated (WmsTestSuite, WmsServiceTestSuite), but not run at every build. CREAM: yes; see http://nkua-emi.posterous.com/, https://wiki.italiangrid.it/twiki/bin/view/CREAM/CreamTesting, and https://wiki.italiangrid.it/twiki/bin/view/IGIRelease/IGITestCert
Integration Tests: WMS: see functionality tests.
Performance Tests: WMS: automated via home-made scripts, not performed at every build. CREAM: manual.
Scalability Tests: WMS: no. CREAM: yes, manual.
Framework Used: https://twiki.cern.ch/twiki/bin/view/EMI/RobotFrameworkQuickstartGuide and http://code.google.com/p/robotframework/

CaNL, gridsite, L&B, proxyrenewal
Code Static/Integrity Analysis: No code analysis.
Deployment Test: Automated; scripts at http://scientific.zcu.cz/scatter/scripts
Unit Test: Part of the ETICS nightly build.
Functionality Tests: Automated, following the EMI test plan.
Integration Tests: No integration tests.
Performance Tests: L&B: performance tests available, but not launched daily. CaNL, gridsite, proxyrenewal: no performance tests.
Scalability Tests: No scalability tests.
Framework Used: Poster: http://www.metacentrum.cz/export/sites/metacentrum/downloads/presentations/EGI-CF2012-EMI-mass-testing.pdf and results dashboard: http://scientific.zcu.cz/scatter/results/EMI2-NIGHTLY/dashboard.html
Notes & Perspectives: CZ-NGI virtual-machine capability, with a simple script launching the EMI test plan and sending the results.

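The deployment-test scripts linked above install the products and exercise the running services; as a rough sketch of that pattern (not the PT's actual scripts, and with hypothetical package and service names), an automated deployment check could be as simple as:

#!/usr/bin/env python
"""Rough sketch of an automated deployment test: install the product
on a clean node, start its service, and check that it is running.
The package and service names below are hypothetical placeholders."""
import subprocess
import sys

PACKAGE = "emi-lb"      # hypothetical metapackage name
SERVICE = "lb-server"   # hypothetical init service name


def sh(*cmd):
    """Run a command, echoing it, and abort on a non-zero exit code."""
    print("+ " + " ".join(cmd))
    rc = subprocess.call(cmd)
    if rc != 0:
        sys.exit("FAILED (exit %d): %s" % (rc, " ".join(cmd)))


if __name__ == "__main__":
    sh("yum", "-y", "install", PACKAGE)   # install from the nightly repository
    sh("service", SERVICE, "start")       # start the daemon
    sh("service", SERVICE, "status")      # verify it came up
    print("deployment test OK")
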
StoRM
Code Static/Integrity Analysis: No.
Deployment Test: Yes, but not automated at every build (T-StoRM).
Unit Test: No.
Functionality Tests: Yes, but not automated at every build (T-StoRM).
Integration Tests: In progress (T-StoRM).
Performance Tests: In progress (T-StoRM).
Scalability Tests: No.
Framework Used: T-StoRM.

VOMS
Code Static/Integrity Analysis: No.
Deployment Test: Not automated; work in progress.
Unit Test: Yes, automated for VOMS core; not currently run on every build.
Functionality Tests: Yes, automated for VOMS core; not currently run on every build.
Integration Tests: Not automated.
Performance Tests: Not automated.
Scalability Tests: Not automated.
Framework Used: DejaGnu for VOMS, JUnit for VOMS Admin.
Notes & Perspectives: The VOMS PT is working to automate all the steps involved in the build, certification, and testing of a release.

ARGUS
Code Static/Integrity Analysis: No.
Deployment Test: Manual.
Unit Test: JUnit tests run at every build.
Functionality Tests: Automated test scripts (https://twiki.cern.ch/twiki/bin/view/EMI/ArgusTestPlan), run manually.
Integration Tests: No automated tests.
Performance Tests: Manual performance and load testing with the Grinder framework; see the summary at https://twiki.cern.ch/twiki/bin/view/EGEE/AuthZTestingSummary140
Scalability Tests: See performance tests.
Framework Used: Bash and Python scripts, plus the Grinder framework; see https://twiki.cern.ch/twiki/bin/view/EGEE/AuthZLLT

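The Grinder framework used for the ARGUS performance and load tests runs Jython test scripts against the service. As an illustration of the same idea in plain Python, with a hypothetical status endpoint rather than the real ARGUS interfaces, a minimal load driver might look like:

#!/usr/bin/env python
"""Illustrative load test: N concurrent clients hit an endpoint and
report the overall request rate. The URL is a hypothetical
placeholder; the real ARGUS tests use the Grinder framework."""
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "https://argus.example.org:8154/status"  # hypothetical endpoint
CLIENTS = 20
REQUESTS_PER_CLIENT = 50


def client(_):
    """One simulated client: issue a burst of requests, count successes."""
    ok = 0
    for _ in range(REQUESTS_PER_CLIENT):
        try:
            with urllib.request.urlopen(URL, timeout=10) as resp:
                if resp.status == 200:
                    ok += 1
        except OSError:
            pass  # timeouts and connection errors count as failures
    return ok


if __name__ == "__main__":
    start = time.time()
    with ThreadPoolExecutor(max_workers=CLIENTS) as pool:
        results = list(pool.map(client, range(CLIENTS)))
    elapsed = time.time() - start
    total = CLIENTS * REQUESTS_PER_CLIENT
    print("%d/%d requests succeeded, %.1f req/s overall"
          % (sum(results), total, total / elapsed))
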
AMGA
Code Static/Integrity Analysis: No.
Deployment Test: Manual.
Unit Test: Yes; run at every build.
Functionality Tests: Automated via test suites.
Integration Tests: Yes, but not automated.
Performance Tests: No performance tests.
Scalability Tests: No.
Framework Used: Bash scripts.

APEL
Code Static/Integrity Analysis: No.
Deployment Test: Manual.
Unit Test: JUnit; not automated.
Functionality Tests: Test suite in Bash; run manually.
Integration Tests: No.
Performance Tests: No.
Scalability Tests: No.
Framework Used: None.

gLite Infosys / BDII
Code Static/Integrity Analysis: N/A.
Deployment Test: Yes/No.
Unit Test: Yes/No.
Functionality Tests: Yes/No.
Integration Tests: Yes/No.
Performance Tests: Yes/No.
Scalability Tests: Yes/No.
Framework Used: Bash.
Notes & Perspectives: Components are tested via Bash scripts, and the product is certified using Bash scripts. Performance and scalability tests are done using various methods.

gLiteMPI
Code Static/Integrity Analysis: No code analysis.
Deployment Test: Not automated.
Unit Test: Yes; run at every build.
Functionality Tests: Partially automated; run at every build.
Integration Tests: Not automated.
Performance Tests: No performance tests.
Scalability Tests: No scalability tests.
Framework Used: Bash with shunit.
Notes & Perspectives: Tests are described in the test plan.

gLite Security (gLExec-wn (gLExec, LCAS, LCMAPS, LCMAPS-plugins-c-pep), Hydra, Trustmanager, STS, Pseudonymity)
Code Static/Integrity Analysis: Hydra: n/a. Trustmanager: none. Pseudonymity: none.
Deployment Test: Hydra: manual. Trustmanager: manually triggered automated testing before release. Pseudonymity: manual.
Unit Test: Hydra: manual. Trustmanager: JUnit and Cobertura, run when needed. Pseudonymity: TestNG during every build.
Functionality Tests: Hydra: manual. Trustmanager: part of unit testing. Pseudonymity: manual.
Integration Tests: Hydra: manual. Trustmanager: none. Pseudonymity: none.
Performance Tests: Hydra: manual. Trustmanager: manual. Pseudonymity: manual.
Scalability Tests: Hydra: manual. Trustmanager: none. Pseudonymity: none.
Framework Used: Hydra: shell scripts. Trustmanager: JUnit for unit testing, scripts for deployment testing. Pseudonymity: TestNG for unit testing.

WNODES
Code Static/Integrity Analysis: No.
Deployment Test: No.
Unit Test: Yes, for one component; not automated at every build.
Functionality Tests: Yes; not automated at every build.
Integration Tests: No.
Performance Tests: Yes; not automated.
Scalability Tests: No.
Framework Used: Python scripts.