Performance Testing

Software Performance Testing

Overview: Performance Testing is, in general, testing executed to determine how a system performs in terms of responsiveness and stability under a particular workload. It can also serve to investigate, measure, validate, or verify other quality attributes of the system, such as scalability, reliability, and resource usage.

Performance Testing identifies shortcomings and operational difficulties during a system test, well before the system is placed into production. If a business fails to performance test its system, the system may run slowly or even crash when subjected to high-volume traffic and peak-load data quantities. Such problems can result in:

  • Loss of revenue
  • Loss of customers
  • Underuse of expensive company systems
  • A backlog of customer orders
  • Negative publicity from the media and blog sites

Unlike Functional Testing, Performance Testing aims to identify quality attributes related to the non-functional aspects of the application. In a nutshell:

  • Performance Testing is not directly concerned with the application's inputs and expected outputs, but with the underlying outcomes of execution: the effect the code has on resource consumption, administration, efficiency, speed, and degradation, among others.
  • Because Performance Testing does not validate the functionality of the application, it is not intended to cover as extensive a set of use cases as Functional Testing would. It covers only the scenarios that are used most heavily in the application.
  • Performance Testing use case selection follows the rule of the 80-20 (also known as the law of the vital few, and the principle of factor sparsity).
    • In load testing, it is common practice to estimate that 80% of the traffic occurs during 20% of the time.
  • Performance Testing can be executed from early stages in the development, such as component testing, until after system integration has taken place.
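The 80-20 estimate above can be turned into a concrete peak-load figure. The sketch below (a minimal Python illustration with hypothetical numbers) derives a peak transactions-per-second rate from a daily transaction volume:

```python
def peak_tps(daily_transactions: int, hours_per_day: float = 24.0) -> float:
    """Estimate peak transactions/second using the 80-20 rule:
    80% of the daily volume arrives during 20% of the day."""
    peak_volume = 0.80 * daily_transactions            # 80% of the traffic
    peak_window_seconds = 0.20 * hours_per_day * 3600  # 20% of the day
    return peak_volume / peak_window_seconds

# Hypothetical example: 1,728,000 transactions/day
# -> 80% = 1,382,400 transactions in 4.8 hours (17,280 s) = 80 TPS
print(peak_tps(1_728_000))
```

For comparison, the same 1,728,000 transactions spread evenly across the day would be only 20 TPS, which is why load tests are sized to the peak window rather than the daily average.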

How Does Performance Testing Help?

  • Verifies whether the system meets the performance requirements defined in Service Level Agreements (SLAs)
  • Helps establish the capacity of existing systems
  • Creates benchmarks for future systems
  • Evaluates whether performance degrades under various loads and/or configurations
  • Provides input for sizing the production infrastructure to support business requirements
  • Provides an understanding of system stability under load beyond normal usage patterns

Performance tests come in various types, based on the conditions to be tested:

  1. Baseline Test

Baseline testing is done to make sure that application performance does not degrade over time as new changes are introduced. It can be performed by executing individual or combined APIs at low volume. It is used to identify:

  • Gross performance issues that would negate the value of testing at higher loads, and/or to provide a basis of comparison (benchmark) for future tests.
  • Baseline testing also helps solve many of the problems discovered in the preparation phase (i.e., during scripting or while the environment is being built), so it acts as a sanity test as well.
  2. Benchmark Testing
  • A benchmark test is conducted to establish a point of reference against which software products or services can be compared to assess performance.
  • Its purpose is to compare present and future software versions.
  • A benchmark must be repeatable and quantifiable.
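To make the repeatable-and-quantifiable requirement concrete, here is a minimal micro-benchmark sketch using Python's standard `timeit` module; the two statements being compared are placeholders for two versions of the same operation:

```python
import timeit

def benchmark(stmt, setup="pass", repeat=5, number=1000):
    """Run a repeatable micro-benchmark and report the best time per call.
    Taking the minimum of several repeats reduces noise, making the result
    quantifiable and comparable across software versions."""
    times = timeit.repeat(stmt, setup=setup, repeat=repeat, number=number)
    return min(times) / number  # best-case seconds per call

# Hypothetical comparison of two implementations of the same operation
old = benchmark("sorted(range(1000))")
new = benchmark("list(range(1000))")
print(f"old: {old:.2e} s/call, new: {new:.2e} s/call")
```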
  3. Load Test
  • Load testing is the simplest form of performance testing.
  • A load test is usually conducted to understand the behavior of the system under a specified expected load (this data can be derived from production metrics).
  • This load can be the expected concurrent number of users on the application performing a specific number of transactions within the given time duration.
  • A load test identifies the response times of all business-critical transactions.
  • Database and server monitoring during the load test, using tools like New Relic, can identify bottlenecks and which layer in the architecture is the cause of performance issues.
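A load test as described above is normally driven by a dedicated tool such as Apache JMeter. Purely as an illustration of the mechanics, the sketch below runs a stand-in transaction with a fixed number of concurrent users and collects per-request response times (the `time.sleep` call is a hypothetical placeholder for a real HTTP request to the system under test):

```python
import time
import statistics
from concurrent.futures import ThreadPoolExecutor

def run_load_test(transaction, concurrent_users: int, iterations: int):
    """Drive a transaction with a fixed number of concurrent users and
    collect per-request response times (a minimal sketch only)."""
    def timed_call(_):
        start = time.perf_counter()
        transaction()  # the business-critical transaction under test
        return time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        times = list(pool.map(timed_call, range(iterations)))
    return {
        "requests": len(times),
        "avg_s": statistics.mean(times),
        "max_s": max(times),
    }

# Hypothetical stand-in for an HTTP call to the system under test
result = run_load_test(lambda: time.sleep(0.01),
                       concurrent_users=5, iterations=50)
print(result)
```

Real tools add ramp-up schedules, think time, and distributed load generation on top of this basic pattern.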
  4. Stress/Saturation Testing
  • Stress tests are executed at greater-than-expected user loads on the application under test.
  • It is often used to test application and system stability and recovery or to collect data for capacity planning for future releases.
  • It may aim to max out the system, or to determine the maximum point at which the application and its landscape still function at accepted levels.
  5. Scalability Testing
  • A type of performance testing that tests the ability of a system, network, or process to continue to function well when it is changed in size or volume to meet increasing demands.
  • It measures a software application's capability to scale up in terms of any of its non-functional capabilities, such as the load supported, the number of transactions, the data volume, or throughput.
  • Normally, scalability testing is performed as a series of load tests with different hardware (or software) settings while keeping other test environment conditions unchanged.
  • Scalability testing can give answers to the following typical questions:
    1. How do hardware and software changes affect the server performance?
    2. Is it possible to increase the application’s productivity by upgrading the server hardware?
  6. Endurance (Soak) Testing
  • This test is executed over extended periods of time, normally simulating a sustained business-day peak.
  • This type of test is usually done to determine if the system can sustain the continuous expected load.
  • During soak tests, memory utilization is monitored to detect potential leaks.
  • This type of test also serves to identify performance degradation: it ensures that throughput and/or response times after long periods of sustained activity are as good as, or better than, at the beginning of the test.
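Memory monitoring during a soak test can be illustrated with Python's standard `tracemalloc` module. This sketch runs a deliberately leaky, hypothetical transaction many times and reports the memory growth a soak test would flag (in production, the server process itself would be monitored rather than the test harness):

```python
import tracemalloc

def soak_check(transaction, iterations: int = 1000):
    """Run a transaction repeatedly and report memory growth --
    the kind of monitoring done during soak tests to detect leaks."""
    tracemalloc.start()
    baseline, _ = tracemalloc.get_traced_memory()
    leaked = []  # the simulated leak: references kept across iterations
    for _ in range(iterations):
        transaction(leaked)
    current, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return current - baseline, peak

# Hypothetical leaky transaction that retains ~1 KB per call
growth, peak = soak_check(lambda sink: sink.append(bytearray(1024)))
print(f"memory growth: {growth} bytes, peak: {peak} bytes")
```

Flat memory growth across a long run is the healthy signal; steady growth like the one simulated here points to a leak.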

Metrics Collected During the Performance Tests:

  1. Client Metrics

Client-side metrics are useful for controlling the load generated against the system under test (SUT) and for assessing when an end user may start to see performance degradation; fundamentally, they measure the end user's perception of performance. Examples of client-side metrics are:

  1. Concurrent Users
  2. Response Time
  3. Transactions/second - Derived from the monthly, weekly, daily, or hourly load
  4. Hits/second - The number of "hits" the various servers within the technology stack receive under various loads.
  5. Pass/Fail statistics - The ratio between transactions that have met the performance SLA (response time, throughput, etc.) and those that have breached it.
  6. Bandwidth
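The pass/fail statistic described above is straightforward to compute from a set of measured response times; the sample values and the 500 ms SLA below are hypothetical:

```python
def sla_statistics(response_times_ms, sla_ms: float):
    """Compute the pass/fail ratio against a response-time SLA,
    one of the client-side metrics described above."""
    passed = sum(1 for t in response_times_ms if t <= sla_ms)
    failed = len(response_times_ms) - passed
    return {
        "pass": passed,
        "fail": failed,
        "pass_pct": 100.0 * passed / len(response_times_ms),
    }

# Hypothetical sample: 8 of 10 transactions within a 500 ms SLA
samples = [120, 250, 310, 480, 505, 90, 620, 430, 205, 499]
print(sla_statistics(samples, sla_ms=500))
```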
  2. Server Metrics

Server-side performance test metrics are used to find out how resources are utilized in the various layers of the technology stack as user load increases, and to identify bottlenecks in system components. Examples of server metrics are:

  1. Throughput - The rate at which the requests can be serviced by the system.
  2. Processor - Allows observing the load on the SUT; sustained utilization beyond 80% of capacity should be a concern
  3. Memory - Helps determine when the system under test is maxing out or when memory leaks are present in the system
  4. Disk - Helps identify memory shortage problems and problems with the I/O system
  5. Network - Metrics such as bandwidth and latency indicate whether the connection media is causing performance issues
  6. Web Server - The purpose of a web server is to accept requests from clients and respond with appropriate content, which can be static or dynamic web pages. Metrics such as requests per second, average response time, peak response time, thread count, and uptime are captured.
  7. Application Server - A server designed to install, operate, and host applications for IT services. Metrics such as application availability, CPU utilization, and HTTP error percentage are captured.

  8. DB - Transactions that make direct calls to the database are monitored for DB response times.
  9. Availability - Defined by mean time to failure; used as an SLA metric.
  10. Bandwidth - A characterization of the amount of data that can be transferred over a given time.
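Availability, as an SLA metric, is commonly derived from mean time to failure (MTTF) and mean time to repair (MTTR); the figures in this small sketch are hypothetical:

```python
def availability_pct(mttf_hours: float, mttr_hours: float) -> float:
    """Availability as the fraction of time the system is up:
    MTTF / (MTTF + MTTR), expressed as a percentage."""
    return 100.0 * mttf_hours / (mttf_hours + mttr_hours)

# Hypothetical figures: a failure every 720 h on average, 0.5 h to recover
print(round(availability_pct(720, 0.5), 3))  # ~99.931% availability
```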

Tools Used for Performance Testing:

There are multiple tools on the market; the most popular are:

  • Micro Focus LoadRunner
  • IBM Rational Performance Tester
  • Micro Focus Silk Performer
  • Apache JMeter

Challenges of Performance Testing:

  • Lack of a Proper Performance Testing Strategy - Performance testing requires considerable input regarding real-time transactions: which transactions generate peak volumes, what the peak loads are, and so on. Lack of proper brainstorming around these questions can cause performance testing outcomes to fall short of expectations.
    • To come up with the right strategy, the performance testing team should spend significant effort understanding the application architecture and other performance characteristics such as load distribution, availability requirements, reliability requirements, and the technology stack.
  • Lack of a Stable Test Environment - Performance tests are usually conducted in a test environment scaled up to match production. Issues arise, however, if the test environment is still defect-prone; the performance test will fail badly in such cases and will not meet its objective.
    • The testing team should first ensure that the test environment is stable and free of defects that hamper performance transactions.
  • Improper Analysis of Performance Issues - Performance issues are very difficult to analyze and fix; very often, analysis devolves into trial and error.
    • Performance defect fixes require very close collaboration between testing, development, and infrastructure teams.
    • From a skills perspective, highly skilled professionals should be involved in analyzing root causes and fixes.
    • Usually, the turnaround time to identify and fix performance issues is much longer than for functional issues.
    • With infrastructure moving to the cloud, it becomes necessary to involve cloud vendors in performance issue analysis.

Conclusion: The performance of an application makes or breaks its customer base. It is essential for businesses to leverage performance testing to ensure scalable, stable, and high-performing applications. With today's more complex applications and user journeys, the performance tester's job has become even more important.