Introduction

When defining software metrics, one of two approaches is normally taken (described in [2]): either measure everything that is measurable, report on it and try to correlate the information, or collect only what is currently seen as the most important metric. The problem with the first approach is that so many measurements can be collected that, even if a metric can be produced to describe them, it may not be meaningful to the customer who wants to use it. For instance, one can collect many metrics for working code that, in the short life of the project, actually adds no value at all to the build, testing and certification of the combined middleware stack. Conversely, looking at the latest most important metric in isolation may leave the code inadequately covered by a wide enough spread of metrics, with minimal benefit to the overall project.

There is a huge number of open source and proprietary metrics [1] available to the EMI project. Therefore, the project must be very selective in its choice of what it wants to measure and what each associated metric means.

So, what approach do we take?

The EMI project proposal describes the first primary goal as:

 
Objective 1: Simplify and organize the different middleware services implementations by delivering
a streamlined, coherent, tested and standard compliant distribution able to meet and exceed the
requirements of EGI, PRACE and other distributed computing infrastructures and their user communities.

From a quality assurance point of view, this goal is relatively straightforward to summarize, but not easy to implement:

Define what is not working in the combined middleware, define what a working middleware looks like, and define metrics that will help a customer track the transition to that working state.

Goals of Task 2.3 in EMI

  • To measure and produce metrics related to the software development lifecycle and the software release lifecycle.
  • To recommend tools that will improve the product teams' code quality without introducing large overheads into the process.
  • To produce metrics specific to each middleware that will eventually converge as the project progresses towards a single EMI middleware.

Architecture for Metric Definitions

Figure 1: A sample Venn diagram of the possible middleware overlap (MiddlewareOverlap.jpg).

EMI starts with four middlewares, ARC, dCache, gLite and UNICORE, each of which is assumed at the beginning to have some or no overlap with the others. Given that the goal of the project is to produce a combined middleware structure, this goal defines the "what?" of the project; in other words, "what are we trying to do?".

The next question is "how?": how are we going to arrive at our goal? We can categorize the software in transition as:

  1. Software obsoleted due to overlap between middlewares.
  2. Software that remains part of one middleware but is not part of the core features.
  3. Software that overlaps between two or more middlewares.
  4. Software that becomes a core service of all the currently separate middlewares.

For items 2-4 categorized above, suitably defined metrics are needed for the EMI project to arrive at its final goal:

  • Individually, each middleware could produce one or more metrics for:
  1. The software code itself, written in many different programming languages.
  2. The tests that ensure the code compiles correctly, together with a set of unit and coverage tests that the code must pass.
  3. The bug tracking service for each middleware and the associated turnaround times on fixing bugs.
  4. The reliability of the repository generation.
  5. The certification process.
  • For two middlewares with some degree of overlap, the APIs will gradually converge. A metric of this overlap must show improvement over time (a sketch of one possible overlap metric follows this list).
  • For all middlewares, certain core services will gradually overlap: e.g. BES compliance, POSIX compliance and multi-platform support must be monitored over time.
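
To make the overlap metric concrete, here is a minimal sketch in Python that treats each middleware's public API as a set of operation names and reports their Jaccard similarity. The operation names and variable names are hypothetical placeholders, not actual ARC or gLite interfaces; in practice the sets would be extracted from public headers, WSDL or IDL files.

    # overlap_metric.py -- sketch of an API convergence metric.
    # The API sets are hypothetical placeholders; real sets would be
    # extracted from public headers, WSDL or IDL files.

    def jaccard(a: set, b: set) -> float:
        """Jaccard similarity |A & B| / |A | B|; 1.0 means full overlap."""
        return len(a & b) / len(a | b) if (a | b) else 1.0

    arc_api = {"SubmitJob", "CancelJob", "QueryStatus", "StageIn"}
    glite_api = {"SubmitJob", "CancelJob", "ListMatch", "GetOutput"}

    # Recorded per release, this value should rise towards 1.0 as the
    # two interfaces converge on common, standards-based operations.
    print(f"ARC/gLite API overlap: {jaccard(arc_api, glite_api):.2f}")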

Creating a Framework: The International Organization for Standardization (ISO/IEC 9126) Specification

Functionality - A set of attributes that bear on the existence of a set of functions and their specified properties. The functions are those that satisfy stated or implied needs.

  1. Suitability
    • Metric recording the deletion over time of useless or obsolete software.
  2. Accuracy
  3. Interoperability
       1.3 Define, improve, implement and validate common standards for the most important middleware
           functions, like job management, data management and information management in collaboration
           with relevant standardization initiatives, but with primary focus on their functional and operational
           aspects.  (taken from project goal 1)
       
    • Possibly: Basic Execution Service (BES) compliance
    • Possibly: GLUE 2.0
  4. Security
       1.2 Simplify the management of security credentials by reducing the complexities of handling certificates 
           and integrating different security mechanisms like Shibboleth and Kerberos across the three
           middleware stacks.  (taken from project goal 1)
       2.5 Implement common services and interfaces for service location, data storage and access, MPI
           management across the three middleware stacks and in particular between HTC and HPC services.
           (taken from project goal 2)
       
    • AAA (authentication, authorization and accounting) convergence metrics
  5. Functionality Compliance
Reliability - A set of attributes that bear on the capability of software to maintain its level of performance under stated conditions for a stated period of time.
  1. Maturity
  2. Fault Tolerance
  3. Recoverability
  4. Reliability Compliance
       1.4 Establish a common, measurable software certification process based on best-practices and put it
           at the base of the Service Level Agreements with infrastructure providers and users.
       
    • Valgrind and memory leakage metrics (a sketch of one such metric follows below).
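
As a hedged illustration of how a Valgrind-based memory-leakage metric could be extracted automatically, the sketch below runs a service binary under Valgrind and parses the "definitely lost" figure from the leak summary. The binary name ./my_service is a hypothetical placeholder.

    # leak_metric.py -- sketch: extract "definitely lost" bytes from a
    # Valgrind leak summary.  "./my_service" is a hypothetical binary.
    import re
    import subprocess

    result = subprocess.run(
        ["valgrind", "--leak-check=full", "./my_service"],
        capture_output=True, text=True,
    )
    # Valgrind writes its summary to stderr, e.g.:
    #   ==1234== definitely lost: 1,024 bytes in 4 blocks
    match = re.search(r"definitely lost: ([\d,]+) bytes", result.stderr)
    lost = int(match.group(1).replace(",", "")) if match else 0
    print(f"definitely lost: {lost} bytes")  # track per release; aim for 0
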
Usability - A set of attributes that bear on the effort needed for use, and on the individual assessment of such use, by a stated or implied set of users.
  1. Understandability
       1.5 Provide a common repository of certified middleware components, testsuites and documentation to
           allow users and application and services providers, also from commercial initiatives, to take informed
           decisions about their requirements and usage criteria. (taken from project goal 1)
       
    • Metrics tracking the population of repositories: insertions, deletions, unchanged; defined per product team, over time (a sketch follows this Usability list).
  2. Learnability
    • Extract Savannah/Bugzilla tickets related to misconfiguration caused by documentation misinterpretation.
  3. Operability
       1.1 Identify common layers of functionality in its middleware services and actively work on producing
           common components and libraries across the three middleware stacks. (taken from project goal 1)
       
    • Requires metrics to document the convergence of APIs, clients and messaging.
  4. Attractiveness
    • Most likely an EGI task, since it involves the NGIs/ROCs.
  5. Usability Compliance
    • EGI task: bug tracking based on user bug reports in Savannah/Bugzilla.
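
A minimal sketch of the repository-population metric mentioned under Understandability above: given the package lists of two successive repository snapshots, count insertions, deletions and unchanged entries. The package names below are hypothetical placeholders for real repository contents.

    # repo_population.py -- sketch of a repository-population metric.
    # Package names are hypothetical placeholders for snapshots taken
    # at two successive releases, per product team.

    previous = {"voms-2.0", "lfc-1.7", "lcg-util-1.1"}
    current = {"voms-2.0", "lfc-1.8", "arc-client-1.0"}

    inserted = current - previous      # new packages
    deleted = previous - current       # removed packages
    unchanged = previous & current     # carried over untouched

    print(f"inserted={len(inserted)}, deleted={len(deleted)}, "
          f"unchanged={len(unchanged)}")
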
Efficiency - A set of attributes that bear on the relationship between the level of performance of the software and the amount of resources used, under stated conditions.
  1. Time Behaviour
  2. Resource Utilisation
    • Most likely an EGI task, since it involves the NGIs/ROCs.
  3. Efficiency Compliance
Maintainability - A set of attributes that bear on the effort needed to make specified modifications.
  1. Analyzability
  2. Changeability
    • Metric showing how much the code changes per certified patch or per certified release (a sketch follows this list).
    • Metric based on the regularity of amendments to the change log or to version control system comments.
  3. Stability
  4. Testability
    • Unit testing metrics.
    • Coverage testing metrics applied to unit testing.
    • Certification test suites must be 100% successful.
  5. Maintainability Compliance
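
One possible realization of the changeability metric above, as a hedged sketch: count the lines added and removed between two certified release tags using git. The tag names release-1.0 and release-1.1 are hypothetical, and the same idea applies to CVS or Subversion with their respective diff tools.

    # code_churn.py -- sketch: lines changed between two release tags,
    # as a changeability metric.  Tag names are hypothetical.
    import subprocess

    out = subprocess.run(
        ["git", "diff", "--shortstat", "release-1.0", "release-1.1"],
        capture_output=True, text=True, check=True,
    ).stdout
    # Typical output: " 12 files changed, 340 insertions(+), 95 deletions(-)"
    print(out.strip())
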
Portability - A set of attributes that bear on the ability of software to be transferred from one environment to another.
  1. Adaptability
  2. Installability
    • Document the number of currently open, closed and in-progress YAIM-related bugs.
  3. Co-Existence
  4. Replaceability
  5. Portability Compliance
    • POSIX 1, POSIX 1.a and POSIX 1.b compliance towards conformance, using the Open POSIX Test Suite: http://posixtest.sourceforge.net/
    • C, C++ and Java deprecation metrics: requires a build machine using gcc-4.5.0 and Java 1.6 to produce these metrics.
    • Metric: the percentage of failed component builds out of the total components built, over time, per platform (a sketch follows this list).
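
The failed-build percentage above is straightforward to compute once build results are recorded; the sketch below does so per platform. The build records, platform labels and component names are hypothetical placeholders for output from a real build system.

    # build_failure_metric.py -- sketch of the failed-build percentage
    # per platform.  The build records below are hypothetical.

    builds = [
        {"platform": "SL5-x86_64", "component": "voms", "ok": True},
        {"platform": "SL5-x86_64", "component": "lfc", "ok": False},
        {"platform": "Debian6-x86_64", "component": "voms", "ok": True},
    ]

    for platform in sorted({b["platform"] for b in builds}):
        total = [b for b in builds if b["platform"] == platform]
        failed = sum(1 for b in total if not b["ok"])
        print(f"{platform}: {100.0 * failed / len(total):.1f}% failed")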

References

  1. ApTests, "Software Testing Specialists", http://www.aptests.com
  2. Linda Westfall, "12 Steps to Useful Software Metrics", http://www.westfallteam.com/Papers/12_steps_paper.pdf

-- EamonnKenny - 17-Jun-2010
