Performance Testing

Performance testing is the process of determining the speed or effectiveness of a computer, network, software program or device. This process can involve quantitative tests done in a lab, such as measuring the response time or the number of MIPS (millions of instructions per second) at which a system functions. Qualitative attributes such as reliability, scalability and interoperability may also be evaluated. Performance testing is often done in conjunction with stress testing.

Performance testing can verify that a system meets the specifications claimed by its manufacturer or vendor. The process can compare two or more devices or programs in terms of parameters such as speed, data transfer rate, bandwidth, throughput, efficiency or reliability.

Performance testing can also be used as a diagnostic aid in locating communications bottlenecks. Often a system will work much better once a problem is resolved at a single point or in a single component. For example, even the fastest computer will function poorly on today’s Web if the connection runs at only 40 to 50 Kbps (kilobits per second): at 50 Kbps, downloading a 1 MB page takes roughly 160 seconds.

A slow data transfer rate may be inherent in the hardware but can also result from software-related problems, such as:
•    Too many applications running at the same time
•    A corrupted file in a Web browser
•    A security exploit
•    Heavy-handed antivirus software
•    Active malware on the hard disk

Effective performance testing can quickly identify the nature or location of a software-related performance problem.

Performance testing is done to provide stakeholders with information about their application’s speed, stability and scalability. More importantly, performance testing uncovers what needs to be improved before the product goes to market. Without performance testing, software is likely to suffer from issues such as running slowly while several users use it simultaneously, inconsistent behavior across different operating systems and poor usability. Performance testing determines whether the software meets its speed, scalability and stability requirements under expected workloads. Applications sent to market with poor performance metrics due to nonexistent or inadequate performance testing are likely to gain a bad reputation and fail to meet expected sales goals. Also, mission-critical applications such as space launch programs or life-saving medical equipment should be performance tested to ensure that they run for long periods of time without deviation.


Types of Performance Testing

•    Load testing – checks the application’s ability to perform under anticipated user loads. The objective is to identify performance bottlenecks before the software application goes live (a minimal load-test sketch follows this list).
•    Stress testing – involves testing an application under extreme workloads to see how it handles high traffic or data processing. The objective is to identify the breaking point of an application.
•    Endurance testing – is done to make sure the software can handle the expected load over a long period of time.
•    Spike testing – tests the software’s reaction to sudden large spikes in the load generated by users.
•    Volume testing – a large volume of data is populated in the database and the overall software system’s behavior is monitored. The objective is to check the software application’s performance under varying database volumes.
•    Scalability testing – the objective of scalability testing is to determine the software application’s effectiveness in “scaling up” to support an increase in user load. It helps plan capacity additions to your software system.
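
To make this concrete, here is a minimal load-test sketch in Python using only the standard library. The target URL, the 20 simulated users and the per-user request count are hypothetical placeholders; real load tests are usually driven by dedicated tools such as JMeter, but the structure is the same: generate concurrent requests, record response times and summarize them.

import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/"   # hypothetical endpoint under test
USERS = 20                       # simulated concurrent users
REQUESTS_PER_USER = 10

def one_user(_):
    """One simulated user issuing sequential requests; returns response times."""
    times = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        with urllib.request.urlopen(URL, timeout=10) as resp:
            resp.read()
        times.append(time.perf_counter() - start)
    return times

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=USERS) as pool:
        all_times = [t for per_user in pool.map(one_user, range(USERS)) for t in per_user]
    print(f"requests:        {len(all_times)}")
    print(f"mean response:   {statistics.mean(all_times):.3f}s")
    print(f"95th percentile: {statistics.quantiles(all_times, n=20)[-1]:.3f}s")

Raising USERS until response times degrade turns the same script into a crude stress test; holding a moderate load for hours approximates an endurance test.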


Common Performance Problems

Most performance problems revolve around speed, response time, load time and poor scalability. Speed is often one of the most important attributes of an application. A slow-running application will lose potential users. Performance testing is done to make sure an app runs fast enough to keep a user’s attention and interest. Take a look at the following list of common performance problems and notice how speed is a common factor in many of them:

•    Long load time – Load time is normally the initial time it takes an application to start. This should generally be kept to a minimum. While some applications cannot be made to load in under a minute, load time should be kept under a few seconds if possible.
•    Poor response time – Response time is the time from when a user inputs data into the application until the application outputs a response to that input. Generally this should be very quick. Again, if a user has to wait too long, they lose interest.
•    Poor scalability – A software product suffers from poor scalability when it cannot handle the expected number of users or when it does not accommodate a wide enough range of users. Load testing should be done to be certain the application can handle the anticipated number of users.
•    Bottlenecking – Bottlenecks are obstructions in a system that degrade overall performance; they occur when coding errors or hardware issues cause a drop in throughput under certain loads. Bottlenecking is often caused by a single faulty section of code. The key to fixing a bottlenecking issue is to find the section of code that is causing the slowdown and fix it there, generally by repairing the poorly performing process or by adding hardware (a minimal timing sketch for isolating a slow section follows this list). Some common performance bottlenecks are:
•    CPU utilization
•    Memory utilization
•    Network utilization
•    Operating system limitations
•    Disk usage
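
Profilers such as Python’s cProfile are the usual way to find the slow section; as a minimal illustration of the idea, the sketch below times each stage of a request with time.perf_counter. The stage names and workloads here are hypothetical stand-ins; the pattern is what matters.

import time
from contextlib import contextmanager

@contextmanager
def timed(label, results):
    """Accumulate the wall-clock time spent in a code section under `label`."""
    start = time.perf_counter()
    try:
        yield
    finally:
        results[label] = results.get(label, 0.0) + time.perf_counter() - start

def handle_request(results):
    # Hypothetical stages of a request; replace with the real code sections.
    with timed("parse input", results):
        sum(i * i for i in range(10_000))
    with timed("query database", results):
        time.sleep(0.05)  # stand-in for a slow I/O call
    with timed("render response", results):
        "".join(str(i) for i in range(1_000))

if __name__ == "__main__":
    results = {}
    for _ in range(10):
        handle_request(results)
    # The stage with the largest share of total time is the bottleneck candidate.
    for label, total in sorted(results.items(), key=lambda kv: -kv[1]):
        print(f"{label:16s} {total:.3f}s")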


Performance Parameters Monitored

The basic parameters monitored during performance testing include the following (a small sketch for sampling several of them programmatically appears after the list):

•    Processor Usage – amount of time processor spends executing non-idle threads.
•    Memory use – amount of physical memory available to processes on a computer.
•    Disk time – amount of time disk is busy executing a read or write request.
•    Bandwidth – shows the bits per second used by a network interface.
•    Private bytes – number of bytes a process has allocated that can’t be shared amongst other processes. These are used to measure memory leaks and usage.
•    Committed memory – amount of virtual memory used.
•    Memory pages/second – number of pages written to or read from the disk in order to resolve hard page faults. Hard page faults are when code not from the current working set is called up from elsewhere and retrieved from a disk.
•    Page faults/second – the overall rate at which faulted pages are processed by the processor. This again occurs when a process requires code from outside its working set.
•    CPU interrupts per second – the average number of hardware interrupts a processor receives and processes each second.
•    Disk queue length – the average number of read and write requests queued for the selected disk during a sample interval.
•    Network output queue length – length of the output packet queue, in packets. Anything more than two indicates a delay, and the bottleneck should be found and eliminated.
•    Network bytes total per second – the rate at which bytes are sent and received on the interface, including framing characters.
•    Response time – time from when a user enters a request until the first character of the response is received.
•    Throughput – the rate at which a computer or network receives requests per second.
•    Amount of connection pooling – the number of user requests that are met by pooled connections. The more requests met by connections in the pool, the better the performance will be.
•    Maximum active sessions – the maximum number of sessions that can be active at once.
•    Hit ratios – This has to do with the number of SQL statements that are handled by cached data instead of expensive I/O operations. This is a good place to start for solving bottlenecking issues.
•    Hits per second – the number of hits on a web server during each second of a load test.
•    Rollback segment – the amount of data that can be rolled back at any point in time.
•    Database locks – locking of tables and databases needs to be monitored and carefully tuned.
•    Top waits – monitored to determine which wait times can be cut down when tuning how quickly data is retrieved from memory.
•    Thread counts – an application’s health can be measured by the number of threads that are running and currently active.
•    Garbage collection – has to do with returning unused memory back to the system. Garbage collection needs to be monitored for efficiency.
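
Several of the system-level counters above can be sampled programmatically. The sketch below uses the third-party psutil library (an assumption: it is not part of the standard library and must be installed, e.g. with pip install psutil) to poll processor usage, memory use, disk activity and network traffic once per second, roughly mirroring the first few parameters in the list.

import time

import psutil  # third-party: pip install psutil

def sample(interval=1.0, count=5):
    """Print a few basic performance counters every `interval` seconds."""
    last_disk = psutil.disk_io_counters()
    last_net = psutil.net_io_counters()
    for _ in range(count):
        cpu = psutil.cpu_percent(interval=interval)  # processor usage over the interval
        mem = psutil.virtual_memory()                # physical memory use
        disk = psutil.disk_io_counters()
        net = psutil.net_io_counters()
        print(f"cpu {cpu:5.1f}%  mem {mem.percent:5.1f}%  "
              f"disk r/w {disk.read_bytes - last_disk.read_bytes}/"
              f"{disk.write_bytes - last_disk.write_bytes} B  "
              f"net rx/tx {net.bytes_recv - last_net.bytes_recv}/"
              f"{net.bytes_sent - last_net.bytes_sent} B")
        last_disk, last_net = disk, net

if __name__ == "__main__":
    sample()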


Summary

Performance testing is necessary before marketing any software product. It ensures customer satisfaction and protects an investor’s investment against product failure. The costs of performance testing are usually more than made up for by improved customer satisfaction, loyalty and retention.