
Saturday 19 October 2019

Performance Testing Terminology

Performance testing is a practice conducted to determine how a system performs in terms of responsiveness and stability under a particular workload. It can also serve to investigate, measure, validate or verify other quality attributes of the system, such as scalability, reliability and resource usage.

The focus of Performance Testing is checking a software program's:

Speed - Determines whether the application responds quickly.
Scalability - Determines the maximum user load the software application can handle.
Stability - Determines whether the application remains stable under varying loads.

Performance Testing is popularly called “Perf Testing” and is a subset of performance engineering.

There are a number of terms used to refer to the different types of performance testing and the associated tools and practices. Below is a performance testing glossary to help you understand and discuss some of the terms most commonly used.

Non-functional requirements (NFRs) - Requirements that relate not to the functioning of the system but to other qualities such as reliability, usability and performance.

Performance Engineering - Activities that ensure a system is designed and implemented to meet specific non-functional requirements. Often takes place following the completion of testing activities that highlight weaknesses in the design and implementation.

Performance Test Plan - Typically a written document that details the objectives, scope, approach, deliverables, schedule, risks, data and test environment needs for testing on a specific project.

Performance Testing - Testing designed to determine the performance levels of a system

Reliability - Related to stability, reliability is the degree to which a system provides the same result for the same action over time under load.

Scalability - The degree to which a system's performance and capacity can be increased, typically by increasing the hardware resources available to existing servers (vertical scaling) or by increasing the number of servers available to service requests (horizontal scaling).


Connect Time
Connect time is the time taken to establish a TCP connection between the client and the server. TCP guarantees delivery of data by its nature, and the connection itself is set up with the TCP three-way handshake. Only once the handshake succeeds can the client send further requests; this connection set-up happens below the HTTP layer. If no TCP connection can be established between the server and the client, the client cannot talk to the server at all. This can happen if the server is not live or is busy responding to other requests.
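As a rough illustration of what is being measured (the host and port below are assumed placeholders, and this is a hand-rolled sketch rather than anything JMeter does internally), the following Java snippet times how long the TCP three-way handshake takes:

import java.net.InetSocketAddress;
import java.net.Socket;

public class ConnectTimeDemo {
    public static void main(String[] args) throws Exception {
        String host = "example.com";   // assumed placeholder host
        int port = 443;                // assumed placeholder port

        long start = System.nanoTime();
        try (Socket socket = new Socket()) {
            // connect() returns once the TCP three-way handshake has completed
            socket.connect(new InetSocketAddress(host, port), 5000);
        }
        long connectMillis = (System.nanoTime() - start) / 1_000_000;
        System.out.println("Connect time: " + connectMillis + " ms");
    }
}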

Latency Time
Latency is the time taken for information to get to its target and back again; it is a round trip. Latency is often described as delay, which becomes a significant issue when working with remote data centers. Data hops through intermediate nodes on its way from the server, so the greater the distance, the greater the delay. Those extra hops increase the response time and can violate your service level agreements (SLAs), which is why latency is one of the hardest factors to deal with. JMeter measures latency from the moment the request is sent until the first byte of the response is received, so in JMeter, connect time is included in the latency figure. There is also network latency, which expresses the time for a packet of data to get from one designated point to another.

Elapsed Time
Elapsed time is measured from the moment the request is sent to the time the last byte of the response is received.

Latency time – Connect time = Server Processing Time

Elapsed time – Latency time = Download Time
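To make these relationships concrete, here is a minimal Java sketch that builds on the connect-time example above. It measures connect time, latency (time to first byte) and elapsed time for a single plain-HTTP request, then derives server processing time and download time using the formulas above. The host is an assumed placeholder and the request is deliberately hand-rolled for clarity:

import java.io.InputStream;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class TimingBreakdownDemo {
    public static void main(String[] args) throws Exception {
        String host = "example.com";   // assumed placeholder host (plain HTTP, port 80)
        long start = System.nanoTime();

        try (Socket socket = new Socket()) {
            // Connect time: completion of the TCP three-way handshake
            socket.connect(new InetSocketAddress(host, 80), 5000);
            long connect = millisSince(start);

            // Send a minimal HTTP request
            OutputStream out = socket.getOutputStream();
            out.write(("GET / HTTP/1.1\r\nHost: " + host + "\r\nConnection: close\r\n\r\n")
                    .getBytes(StandardCharsets.US_ASCII));
            out.flush();

            // Latency: time until the first byte of the response arrives
            InputStream in = socket.getInputStream();
            in.read();
            long latency = millisSince(start);

            // Elapsed time: time until the last byte of the response has been read
            byte[] buffer = new byte[8192];
            while (in.read(buffer) != -1) { /* discard the body */ }
            long elapsed = millisSince(start);

            System.out.println("Connect time:           " + connect + " ms");
            System.out.println("Latency (first byte):   " + latency + " ms");
            System.out.println("Elapsed time:           " + elapsed + " ms");
            System.out.println("Server processing time: " + (latency - connect) + " ms");
            System.out.println("Download time:          " + (elapsed - latency) + " ms");
        }
    }

    private static long millisSince(long startNanos) {
        return (System.nanoTime() - startNanos) / 1_000_000;
    }
}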

Throughput
Throughput is the number of units of work that can be handled per unit of time. From a performance testing perspective, it can be measured as requests per second, request calls per day, hits per second or bytes per second. JMeter also allows you to create assertions on response time, so you can set a request's pass/fail status according to its result.
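As a simple illustration of the calculation (the figures below are assumed, purely for the example):

public class ThroughputDemo {
    public static void main(String[] args) {
        // Assumed figures: 12,000 requests completed during a 10-minute (600-second) window
        long completedRequests = 12_000;
        double windowSeconds = 600.0;

        // Throughput = units of work / unit of time
        double requestsPerSecond = completedRequests / windowSeconds;
        System.out.printf("Throughput: %.1f requests/second%n", requestsPerSecond);
    }
}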

Performance Thresholds
Performance thresholds are the KPIs, or the maximum acceptable values, for a metric. There are many metrics to identify for a performance test project, such as response time, throughput, and resource-utilization levels like processor capacity, memory, disk I/O and network I/O.
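A test run is then judged against those thresholds. The sketch below shows the idea with assumed example limits and measurements; in a real project the limits would come from the agreed NFRs:

public class ThresholdCheckDemo {
    // Assumed example thresholds for illustration only
    static final long MAX_RESPONSE_TIME_MS = 2000;
    static final double MIN_THROUGHPUT_RPS = 50.0;

    public static void main(String[] args) {
        long measuredResponseTimeMs = 1850;   // assumed measurement from a test run
        double measuredThroughputRps = 62.4;  // assumed measurement from a test run

        boolean withinThresholds = measuredResponseTimeMs <= MAX_RESPONSE_TIME_MS
                && measuredThroughputRps >= MIN_THROUGHPUT_RPS;
        System.out.println(withinThresholds ? "Within thresholds" : "Threshold breached");
    }
}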

Saturation
Saturation is the point at which a resource reaches maximum utilization. For a server, this means it cannot respond to any more requests.

Soak Testing - A type of Performance Testing used to evaluate the behavior of a system or component when the system is subjected to expected load over a sustained period of time

Spike Testing - A type of Performance Testing used to evaluate the behavior of a system or component when subjected to large short-term changes in demand. Normally this is to test how the system responds to large increases in demand, e.g. User Logins, Black Friday-like sales events etc.

Stability - The degree to which a system remains free of failures and errors under normal usage, for example spurious errors when registering new users under load.

Stress Testing - A type of Performance Testing used to evaluate the behavior of a system or component when subjected to load beyond the anticipated workload, or when the resources the system can use, such as CPU or memory, are reduced.

Transaction Volume Model (TVM) - A document detailing the user journeys to be simulated, the click-path steps that make up the user journeys and associated load / transaction volume models to be tested. This should include information regarding the geographical locale from where users will be expected to interact with the system and the method of interaction e.g. mobile vs desktop.

User Journey - The path through the system under test that a group of Virtual Users will follow to simulate real users. Key performance / volume impacting journeys should be used, as it is impractical to performance test all possible User Journeys; a good rule of thumb is to use the 20% of User Journeys that generate 80% of the volume.

Virtual User - A simulated user that performs actions as a real user would during the execution of a test.