Overview: Have you ever wondered how to measure testing productivity or effectiveness, or when to say that testing is complete? In all such questions, QA metrics come to the rescue. QA metrics allow test leads and managers to make decisions in the various situations encountered during the project life cycle. The tricky part is that QA metrics make sense only in a given context. For example, for one project a 95% pass rate may be good enough to go live, while for a safety-critical application anything less than 100% could be life-threatening. So let's first find out why we need metrics and how they can be useful for a project.
Why do we need QA Metrics?
Software testing metrics are useful for evaluating the quality and progress of a software testing effort. Without metrics, it would be almost impossible to quantify, explain, or demonstrate software quality. Metrics also provide quick insight into the status of the testing effort, resulting in better control through smarter decision making. In short: what cannot be measured cannot be improved.
In summary, we need QA metrics to quantify quality, track testing progress, and support informed decisions.
Types of Metrics
There are four areas of test progress monitoring and reporting that can be used as metrics: risks, defects, test cases, and coverage. These are quantifiable and measurable. It is important that the metrics align with our defined exit criteria, and vice versa.
1. Product Risk Metrics
2. Defect Metrics:
Defect Density: The number of confirmed defects found divided by the size of the application (size is commonly measured in KLOC or function points).
Defect Density = (Number of defects identified) / Size
Defect Removal Efficiency: The proportion of bugs found before release (by testing) relative to all bugs found, including those found post release (i.e., by external users).
DRE = (Number of defects found internally by testing) / (Number of defects found internally + externally) x 100
Defect Leakage: A metric that measures the efficiency of testing, i.e., how many defects were missed by testing and found later in UAT or production.
Defect Leakage = (Number of defects found post release) / (Number of defects found in testing) x 100
Defect Rejection Ratio: The number of rejected defects divided by the total number of submitted defects. It indicates the quality of defect reporting by the testing team.
DRR = (Number of defects rejected as invalid) / (Total number of defects logged by test team) x 100
Defect categorization by severity: This metric classifies defects by severity, which in turn helps identify which areas need more testing.
Cumulative defects reported vs. resolved (defect resolution trend graph): A common report generated by test management tools such as Jira; it gives a view of pending risk areas.
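The defect metrics above can be sketched as small helper functions. This is an illustrative sketch only; the counts in the example are hypothetical, not from a real project.

```python
def defect_density(defects: int, size: float) -> float:
    """Confirmed defects per unit of application size (e.g., per KLOC)."""
    return defects / size

def defect_removal_efficiency(internal: int, external: int) -> float:
    """DRE: share of all defects caught internally by testing, as a %."""
    return internal / (internal + external) * 100

def defect_leakage(post_release: int, in_testing: int) -> float:
    """Defects missed by testing and found in UAT/production, as a %."""
    return post_release / in_testing * 100

def defect_rejection_ratio(rejected: int, total_logged: int) -> float:
    """Share of logged defects rejected as invalid, as a %."""
    return rejected / total_logged * 100

# Hypothetical example: 90 defects found in testing, 10 found after
# release, application size 30 KLOC, 5 logged defects rejected as invalid.
print(defect_density(90, 30))             # 3.0 defects/KLOC
print(defect_removal_efficiency(90, 10))  # 90.0 %
print(defect_leakage(10, 90))             # ~11.1 %
print(defect_rejection_ratio(5, 90))      # ~5.6 %
```

Keeping each metric as a separate function makes it easy to recompute them per release or per sprint from the same defect counts.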
3. Test Case Metrics:
Test case metrics are used to identify improvement areas in the test process and to measure how much testing has been accomplished, i.e., for test tracking and efficiency assessment. Commonly used test case metrics are:
Test case pass % = (Number of test cases passed) / (Total number of test cases) x 100
Test case fail % = (Number of test cases failed) / (Total number of test cases) x 100
Test execution % = (Number of passed + failed test cases) / (Total number of test cases) x 100
Test progress with respect to the timeline (e.g., planned vs. actual test execution over time).
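The pass, fail, and execution percentages above share the same shape, so they can be sketched with one helper. The counts below are made-up example values.

```python
def pct(part: int, total: int) -> float:
    """Return part as a percentage of total."""
    return part / total * 100

# Hypothetical snapshot: 200 planned test cases, 150 passed,
# 30 failed, 20 not yet executed.
total, passed, failed = 200, 150, 30

pass_pct = pct(passed, total)                # 75.0 %
fail_pct = pct(failed, total)                # 15.0 %
execution_pct = pct(passed + failed, total)  # 90.0 %
```

Capturing these numbers at regular intervals (daily or per build) also gives the data needed to chart test progress against the timeline.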
4. Coverage Metrics: Test coverage metrics are less commonly used than defect and test case metrics.
Test execution coverage %=(Number of tests run)/(Total number of tests to be run) x 100
Requirement Coverage %=(Number of requirements covered by tests)/(Total number of requirements) x 100
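The two coverage formulas above can be sketched as follows. The requirement-to-test mapping is a hypothetical example of what a traceability matrix might contain.

```python
# Hypothetical traceability matrix: requirement -> test cases covering it.
traceability = {
    "REQ-1": ["TC-1", "TC-2"],
    "REQ-2": ["TC-3"],
    "REQ-3": [],  # requirement with no tests yet
}

# A requirement counts as covered if at least one test maps to it.
covered = sum(1 for tests in traceability.values() if tests)
requirement_coverage = covered / len(traceability) * 100  # ~66.7 %

# Execution coverage: tests actually run vs. tests planned to run.
tests_run, tests_planned = 45, 50
execution_coverage = tests_run / tests_planned * 100  # 90.0 %
```

A traceability matrix like this also surfaces exactly which requirements (here, REQ-3) still lack tests, which is often more actionable than the percentage alone.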
Conclusion: As a test manager, it is important to recognize that metrics are contextual and that reporting depends on the audience: the audience's needs, goals, and abilities should be considered when sharing the data. Also, if a testing metric indicates a problem, we should have several options for bringing the project back under control. As with any project control decision, each option carries consequences, and the risks should be weighed with the relevant stakeholders before making decisions.