Software Testing Metrics

Overview: Ever wondered how to measure testing productivity or testing effectiveness? Or when to declare testing complete? For all such questions, QA metrics come to the rescue. QA metrics allow test leads and managers to make informed decisions in the various situations encountered across the project life cycle. The tricky part is that QA metrics make sense only in context: for one project a 95% pass rate may be good enough to go live, while for a safety-critical application anything less than 100% can be life-threatening. So let’s first find out why we need metrics and how they can be useful for a project.

Why do we need QA Metrics?

Software testing metrics are useful for evaluating the quality and progress of a software testing effort. Without metrics, it would be almost impossible to quantify, explain, or demonstrate software quality. Metrics also provide quick insight into the status of the testing effort, enabling better control through informed decision making. In short: what cannot be measured cannot be improved.

Here are a few reasons why we need QA metrics:

  • They help evaluate the effectiveness of the test approach
  • They help save cost through defect prevention
  • They help improve project planning (for example, by feeding accurate values into effort estimation)
  • They help improve existing processes by identifying gaps
  • They help analyze risk with actual data values (rather than a notional feeling of risk)
  • They provide coverage information and help identify the areas to be tested
  • They help identify which areas need more testing (for example, by categorizing defects by functional area)

Types of Metrics

There are four areas of test progress monitoring and reporting that can be used as metrics: risks, defects, test cases, and coverage. All four are quantifiable and measurable. It is important that the metrics align with our defined exit criteria, and vice versa.

1. Product Risk Metrics

  • These are used when we follow a risk-based testing strategy. They give you the number of risks remaining and mitigated, including the type and level of each risk.
  • At the beginning of the test execution phase, when very few areas have been exercised, risk is high. As testing progresses and bugs are identified and fixed, risk gradually comes down.
  • Mitigated and open risks can be represented as pie charts (or in any other graphical format), as sketched below.
  • This metric is not very commonly used, but it does provide the information that leadership is looking for from test teams.
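
A rough sketch of how the counts behind such a chart might be tallied (the risk register and its field names are invented for illustration):

```python
from collections import Counter

# Hypothetical risk register: each entry carries a level and a status.
risks = [
    {"id": "R1", "level": "high",   "status": "mitigated"},
    {"id": "R2", "level": "high",   "status": "open"},
    {"id": "R3", "level": "medium", "status": "open"},
    {"id": "R4", "level": "low",    "status": "mitigated"},
]

# Tally by status for the overall chart, and by level within the open risks.
by_status = Counter(r["status"] for r in risks)
open_by_level = Counter(r["level"] for r in risks if r["status"] == "open")

print(by_status)      # Counter({'mitigated': 2, 'open': 2})
print(open_by_level)  # Counter({'high': 1, 'medium': 1})
```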

2. Defect Metrics:

  • Defect metrics help you identify which functional areas are more prone to defects, and thus help you plan your testing.
  • They also help you assess the quality of the code at any given time.
  • Several defect metrics are commonly used, for example:

Defect Density: It is defined as the number of confirmed defects found divided by the size of the application (or component).

Defect Density=(Number of defects identified)/Size

  • Defect density is usually counted per thousand lines of code (KLOC).
  • It helps in deciding the readiness of a software component for release.
  • It is useful for finding the areas that need more testing.
  • One defect per thousand lines of code is commonly cited as a good benchmark (see the sketch below).
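
As a minimal illustration of the formula above (the sample numbers are made up):

```python
def defect_density(defects_found: int, lines_of_code: int) -> float:
    """Confirmed defects per thousand lines of code (KLOC)."""
    return defects_found / (lines_of_code / 1000)

# Example: 30 confirmed defects in a 25,000-line component -> 1.2 per KLOC.
print(defect_density(30, 25_000))  # 1.2
```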

Defect Removal Efficiency: It relates the bugs found before release (by internal testing) to the bugs found post release (i.e., by external users).

DRE=(Number of defects found internally by testing)/(Number of defects found internally + externally) x 100

  • DRE can also be calculated for each phase separately.
  • Mature organizations with thorough requirements and design inspection processes maintain a DRE of around 95%.
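
A minimal sketch of the DRE calculation, using invented counts:

```python
def defect_removal_efficiency(internal: int, external: int) -> float:
    """Percentage of all defects that internal testing caught before release."""
    return internal / (internal + external) * 100

# Example: 190 defects found in testing, 10 reported after release.
print(defect_removal_efficiency(190, 10))  # 95.0
```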

Defect Leakage: This metric measures the efficiency of testing, i.e., how many defects were missed by testing and found in UAT/production.

Defect Leakage=(Number of defects found post release)/(Number of defects found in testing) x 100

  • Defect leakage can be reduced by testing with real-life user journeys and production-like data, combined with rigorous testing (see the sketch below).
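
A similar sketch for defect leakage, again with invented counts:

```python
def defect_leakage(post_release: int, found_in_testing: int) -> float:
    """Defects that escaped to UAT/production, relative to those caught in testing."""
    return post_release / found_in_testing * 100

# Example: 5 defects leaked to production vs. 100 caught during testing.
print(defect_leakage(5, 100))  # 5.0
```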

Defect Rejection Ratio: It is the number of rejected defects divided by the total number of submitted defects. It provides a measure of the quality of the test team’s execution and defect reporting.

DRR=(Number of defects rejected as invalid)/(Total number of defects logged by test team) x 100
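
And the same pattern for DRR (hypothetical numbers):

```python
def defect_rejection_ratio(rejected: int, total_logged: int) -> float:
    """Share of logged defects that were rejected as invalid."""
    return rejected / total_logged * 100

# Example: 8 of 160 logged defects were rejected as invalid.
print(defect_rejection_ratio(8, 160))  # 5.0
```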

Defect categorization by Severity: This metric groups defects by severity, which in turn helps identify which areas need more testing.

Cumulative defects reported vs. resolved (defect resolution trend graph): This is a common report generated by test management tools such as Jira and gives a view of pending risk areas.

  • It shows the number of defects logged by the test team vs. the number of defects resolved by the dev team.
  • It shows whether the fix rate is keeping pace with the defect creation rate, and can thus feed into defect-fix SLA calculations.
  • If significantly more defects are logged than resolved, application quality is judged to be poor, and the test team may eventually be blocked from proceeding further. The numbers behind the trend graph are easy to derive from a defect log, as sketched below.
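
A minimal sketch of deriving these cumulative trend numbers from invented daily counts:

```python
from itertools import accumulate

# Hypothetical daily counts of defects reported and resolved.
reported_per_day = [5, 8, 6, 4, 3]
resolved_per_day = [1, 3, 5, 6, 5]

cumulative_reported = list(accumulate(reported_per_day))
cumulative_resolved = list(accumulate(resolved_per_day))
open_defects = [r - f for r, f in zip(cumulative_reported, cumulative_resolved)]

print(cumulative_reported)  # [5, 13, 19, 23, 26]
print(cumulative_resolved)  # [1, 4, 9, 15, 20]
print(open_defects)         # [4, 9, 10, 8, 6]
```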

3. Test Case Metrics:

Test case metrics are used to identify improvement areas in the test process and to track how much testing has been accomplished, i.e., for test tracking and efficiency assessment. Commonly used test case metrics are:

Test case Pass %=(Number of test cases passed)/(Number of total test cases) x 100

Test case Fail %=(Number of test cases failed)/(Number of total test cases) x 100

Test execution %=(Number of passed + failed test cases)/(Total test cases) x 100
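
These three percentages can be computed together; a minimal sketch with made-up counts:

```python
def test_case_metrics(passed: int, failed: int, total: int) -> dict:
    """Pass, fail, and execution percentages over the total planned test cases."""
    return {
        "pass_pct": passed / total * 100,
        "fail_pct": failed / total * 100,
        "execution_pct": (passed + failed) / total * 100,
    }

# Example: 170 passed and 20 failed out of 200 planned test cases.
print(test_case_metrics(170, 20, 200))
# {'pass_pct': 85.0, 'fail_pct': 10.0, 'execution_pct': 95.0}
```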

Test Progress with respect to Timeline:

  • This metric measures progress against the schedule of the test cycle: how are we progressing relative to the time elapsed? It is essentially a progress metric.
  • It is derived from comparing planned vs. actual test case execution, as sketched below.
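
A minimal sketch of the planned vs. actual comparison (the linear-schedule assumption and the sample numbers are ours):

```python
def execution_progress(executed: int, planned: int,
                       days_elapsed: int, total_days: int) -> dict:
    """Compare actual execution progress with the share of schedule consumed."""
    actual_pct = executed / planned * 100
    expected_pct = days_elapsed / total_days * 100  # assumes a linear plan
    return {
        "actual_pct": actual_pct,
        "expected_pct": expected_pct,
        "on_track": actual_pct >= expected_pct,
    }

# Example: 120 of 200 planned test cases executed, 6 of 10 cycle days elapsed.
print(execution_progress(120, 200, 6, 10))
# {'actual_pct': 60.0, 'expected_pct': 60.0, 'on_track': True}
```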

4. Coverage Metrics: Test coverage metrics are less commonly used than defect metrics and test case metrics.

  • Test coverage measures how much of the requirements, code, or design elements your tests cover.
  • It is a useful metric for measuring the effectiveness of your testing efforts.
  • To produce test coverage metrics, you need the ability to trace your tests back to the requirements and design elements they cover.
  • Test coverage is also useful for identifying and eliminating test cases that add little value in the given scenario, and for identifying areas that do not yet have test cases.
  • Examples of test coverage metrics are -

Test execution coverage %=(Number of tests run)/(Total number of tests to be run) x 100

Requirement Coverage %=(Number of requirements covered by tests)/(Total number of requirements) x 100

  • Coverage metrics can also be calculated against code by measuring statement coverage, condition coverage, etc.
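
Returning to the requirement coverage formula above, here is a minimal sketch over a hypothetical requirements-to-tests traceability mapping:

```python
# Hypothetical traceability mapping: requirement -> test cases covering it.
traceability = {
    "REQ-1": ["TC-01", "TC-02"],
    "REQ-2": ["TC-03"],
    "REQ-3": [],  # no test case yet: a coverage gap
    "REQ-4": ["TC-04"],
}

covered = [req for req, tests in traceability.items() if tests]
coverage_pct = len(covered) / len(traceability) * 100

print(f"Requirement coverage: {coverage_pct:.0f}%")                 # 75%
print("Uncovered:", [r for r, t in traceability.items() if not t])  # ['REQ-3']
```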

Conclusion: As a test manager, it is important to recognize that metrics are contextual and that reporting depends on the audience: their needs, goals, and abilities should be considered when sharing the data. Also, if any testing metric indicates a problem, we should have several options for bringing the project back under control. As with any project control decision, each option carries its own consequences and risks, which should be weighed with the relevant stakeholders before deciding.