
Saturday 3 October 2015

Software Testing Life Cycle (STLC)

Software Testing Life Cycle (STLC) defines the stages/phases carried out while testing software.

The different stages in the Software Testing Life Cycle are:

  • Requirement Analysis
  • Test Planning
  • Test Case Design / Development
  • Environment setup
  • Test Execution / Reporting / Defect Tracking
  • Test Cycle Closure / Retrospective study

1) Requirement Analysis
During this phase, the test team studies the requirements from a testing point of view to identify the testable requirements. The QA team may interact with various stakeholders (client, Business Analyst, technical leads, system architects, etc.) to understand the requirements in detail. Requirements may be either functional (defining what the software must do) or non-functional (defining system performance, security, availability, etc.). Automation feasibility for the given testing project is also assessed in this stage.

Activities

  • Identify types of tests to be performed. 
  • Gather details about testing priorities and focus.
  • Prepare Requirement Traceability Matrix (RTM).
  • Identify test environment details where testing is supposed to be carried out. 
  • Automation feasibility analysis (if required).


Deliverables 
Requirement Traceability Matrix (RTM)
Automation feasibility report (if applicable)
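As an illustration only, the sketch below shows one way the RTM deliverable could be kept as structured data in Python; the requirement descriptions and test case IDs are invented for the example.

# Minimal sketch of a Requirement Traceability Matrix (RTM):
# each requirement is mapped to the test cases that cover it.
rtm = {
    "REQ-001: User can log in with valid credentials": ["TC-001", "TC-002"],
    "REQ-002: Password reset link is sent by email": ["TC-010"],
    "REQ-003: Account locks after 3 failed attempts": [],  # no coverage yet
}

# Flag requirements that are not yet covered by any test case.
for requirement, test_cases in rtm.items():
    if not test_cases:
        print("Not covered:", requirement)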

2) Test Planning

This phase is also called the Test Strategy phase. Typically, in this stage, a senior QA manager determines the effort and cost estimates for the project and prepares and finalizes the Test Plan.

Activities

  • Preparation of test plan/strategy document for various types of testing
  • Test tool selection 
  • Test effort estimation 
  • Resource planning and determining roles and responsibilities.
  • Training requirements


Deliverables 
Test plan /strategy document.
Effort estimation document.

3) Test Case Design / Development
This phase involves the creation, verification and rework of test cases and test scripts. Test data is identified/created, reviewed and then reworked as well.

Activities

  • Create test cases, automation scripts (if applicable)
  • Review and baseline test cases and scripts 
  • Create test data (If Test Environment is available)


Deliverables 
Test cases/scripts

4) Test Environment Setup
The test environment defines the software and hardware conditions under which a work product is tested. Test environment set-up is one of the critical aspects of the testing process and can be done in parallel with the Test Case Development stage. The test team may not be involved in this activity if the customer/development team provides the test environment; in that case the test team is required to do a readiness check (smoke testing) of the given environment.

Activities 

  • Understand the required architecture, environment set-up and prepare hardware and software requirement list for the Test Environment. 
  • Setup test Environment and test data 
  • Perform smoke test on the build


Deliverables 
Environment ready with test data set up
Smoke Test Results.
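As a rough sketch of such a readiness check, the snippet below pings a health endpoint before execution starts; the base URL and the /health path are assumptions for illustration, and the third-party requests library is assumed to be installed.

import requests

BASE_URL = "https://test-env.example.com"  # hypothetical test environment URL

def environment_is_ready():
    # Smoke/readiness check: the environment counts as ready only if the
    # deployed build answers its health endpoint with HTTP 200.
    try:
        response = requests.get(BASE_URL + "/health", timeout=10)
    except requests.RequestException:
        return False
    return response.status_code == 200

if __name__ == "__main__":
    print("Environment ready" if environment_is_ready() else "Environment NOT ready")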

5) Test Execution / Reporting / Defect Tracking
During this phase the test team carries out testing based on the test plans and the test cases prepared. Bugs are reported back to the development team for correction, and retesting is performed.

Activities 

  • Execute tests as per plan
  • Document test results, and log defects for failed cases 
  • Map defects to test cases in RTM 
  • Retest the defect fixes 
  • Track the defects to closure


Deliverables 
Completed RTM with execution status
Test cases updated with results
Defect reports
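For illustration, a defect logged during execution and its link back to the RTM could be recorded along the following lines; every ID and field value here is made up.

# Hypothetical defect record as it might be logged during test execution.
defect = {
    "id": "DEF-042",
    "test_case": "TC-002",      # failed test case, traceable through the RTM
    "requirement": "REQ-001",
    "severity": "Major",
    "status": "Open",           # Open -> Fixed -> Retested -> Closed
    "steps_to_reproduce": [
        "Open the login page",
        "Enter a valid user name and an 8-character password",
        "Click 'Login'",
    ],
    "expected": "User is taken to the dashboard",
    "actual": "HTTP 500 error page is shown",
}
print(defect["id"], "-", defect["severity"], "-", defect["status"])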

6) Test Cycle Closure / Retrospective study
The testing team meets, discusses and analyzes the testing artifacts to identify strategies that should be carried forward, taking lessons from the current test cycle. The idea is to remove process bottlenecks for future test cycles and to share best practices for similar projects in the future.

Activities

  • Evaluate cycle completion criteria based on Time, Test coverage, Cost, Software, Critical Business Objectives, Quality
  • Prepare test metrics based on the above parameters. 
  • Document the learning out of the project 
  • Prepare Test closure report 
  • Qualitative and quantitative reporting of quality of the work product to the customer. 
  • Test result analysis to find out the defect distribution by type and severity.


Deliverables 
Test Closure report
Test metrics
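As a small sketch of the defect analysis mentioned above, the snippet below computes the defect distribution by type and severity from a made-up defect list.

from collections import Counter

# Made-up defects used only to illustrate the distribution analysis.
defects = [
    {"type": "Functional", "severity": "Critical"},
    {"type": "Functional", "severity": "Major"},
    {"type": "UI", "severity": "Minor"},
    {"type": "Performance", "severity": "Major"},
    {"type": "Functional", "severity": "Minor"},
]

print("By type:    ", Counter(d["type"] for d in defects))
print("By severity:", Counter(d["severity"] for d in defects))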

Difference Between Smoke And Sanity Testing

Smoke Testing:

  1. Smoke testing is done to check the general health of the build and to make sure it is stable enough to continue testing. It is done at the beginning of the software testing cycle.
  2. Smoke testing is conducted on early builds to ascertain that the application's most critical features are working and it is ready for complete testing.
  3. The objective of smoke testing is to check the application's stability before starting thorough testing.
  4. It follows a shallow and wide approach, covering all the basic functionality of the software.
  5. Smoke testing is like a general health check-up where all critical areas of the application are tested without going into their details (build verification).


Sanity Testing:

  1. Sanity testing is done after thorough regression testing is over; it is done to make sure that any defect fixes or changes made after regression testing do not break the core functionality of the product. It is done towards the end of the product release phase.
  2. Sanity testing is conducted on stable builds (after multiple regression tests) to ascertain that new functionality works / bugs have been fixed and the application is ready for complete testing.
  3. The objective of sanity testing is to check that the application behaves rationally before starting thorough testing.
  4. Sanity testing follows a narrow and deep approach, with detailed testing of some limited features.
  5. Sanity testing is like a specialized health check-up where a small area of the application is thoroughly tested.
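One way to keep the two kinds of checks separate in an automated suite is sketched below using pytest markers; the login() stub, the credentials and the marker names are all invented for the example (the markers would normally also be registered in pytest.ini).

import pytest

def login(user, password):
    # Stand-in for the application under test.
    return user == "admin" and password == "secret123"

@pytest.mark.smoke   # shallow and wide: one quick check per critical area
def test_login_works_at_all():
    assert login("admin", "secret123")

@pytest.mark.sanity  # narrow and deep: detailed checks around a recent change
def test_login_rejects_wrong_password():
    assert not login("admin", "wrong")

@pytest.mark.sanity
def test_login_rejects_unknown_user():
    assert not login("guest", "secret123")

# Run as:  pytest -m smoke   (build verification)
#          pytest -m sanity  (focused check on the changed area)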

Difference between Retesting and Regression Testing

Definition of Retesting and Regression Testing:

Re-Testing: After a defect is detected and fixed, the software should be retested to confirm that the original defect has been successfully removed. This is called Confirmation Testing or Re-Testing.

Regression testing:  Testing your software application when it undergoes a code change to ensure that the new code has not affected other parts of the software.

It can be done in two scenarios:

1) Whenever a test engineer identifies a defect, then after rectification we need to test the defective functionality once again, as well as its related functionality.

2) Whenever new features are added to the application, we need to test the existing functionalities related to those new features.

A defect is logged by the tester while testing the application and is then fixed by the developer. In retesting we check whether that same defect has been fixed, using the steps to reproduce mentioned in the defect report. In regression testing we check that the defect fix has not impacted other, unchanged parts of the application, i.e. that functionality which was working previously has not been broken by the fix.
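A sketch of the distinction in an automated suite is shown below; the apply_discount() function and the defect ID are hypothetical. The re-test repeats the exact failing scenario from the defect report, while the regression checks cover the surrounding, previously working behaviour.

def apply_discount(price, percent):
    # Imaginary function that was just fixed for defect DEF-042.
    return round(price - price * percent / 100, 2)

# Re-test: repeat the exact scenario from the defect report to confirm the fix.
def test_def_042_ten_percent_discount_is_applied():
    assert apply_discount(200.0, 10) == 180.0

# Regression: make sure the fix did not break related, previously working cases.
def test_zero_percent_discount_leaves_price_unchanged():
    assert apply_discount(200.0, 0) == 200.0

def test_full_discount_gives_zero():
    assert apply_discount(200.0, 100) == 0.0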

Test Case Design Techniques In Black Box Testing With Examples

Boundary value analysis and Equivalence Class Partitioning are both test case design techniques in black box testing.

What is Boundary Value Analysis?

Boundary value analysis is a test case design technique that tests the boundary values between partitions (both the valid and the invalid boundary partitions). A boundary value is an input or output value on the border of an equivalence partition; it includes the minimum and maximum values just inside and just outside the boundaries. Boundary value analysis is normally part of stress and negative testing.

Using the boundary value analysis technique, the tester creates test cases for a required input field. For example, consider an Address text box which allows a maximum of 500 characters. Writing a test case for every possible character count would be impractical, so boundary value analysis is chosen instead.

Example for Boundary Value Analysis:
Example 1
Suppose you have a very important tool at the office which requires a valid User Name and Password to work with it, and each field accepts a minimum of 8 characters and a maximum of 12 characters. Valid range: 8-12 characters. Invalid ranges: 7 characters or fewer, and 13 characters or more.

Write Test Cases for Valid partition value, Invalid partition value and exact boundary value.
Test Case 1: Consider a password of length less than 8.
Test Case 2: Consider a password of length exactly 8.
Test Case 3: Consider a password of length between 9 and 11.
Test Case 4: Consider a password of length exactly 12.
Test Case 5: Consider a password of length more than 12.
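The same five test cases could also be written as a small automated check; validate_password_length() below is only a stand-in for the real tool's validation rule.

def validate_password_length(password):
    # Stand-in for the tool's rule: 8 to 12 characters are accepted.
    return 8 <= len(password) <= 12

# Boundary value analysis around the 8..12 limits.
assert not validate_password_length("a" * 7)   # just below the lower boundary
assert validate_password_length("a" * 8)       # exactly on the lower boundary
assert validate_password_length("a" * 10)      # inside the valid partition
assert validate_password_length("a" * 12)      # exactly on the upper boundary
assert not validate_password_length("a" * 13)  # just above the upper boundary
print("All boundary value checks passed")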

Example 2
Test cases for an application whose input box accepts numbers between 1-1000. Valid range: 1-1000. Invalid ranges: 0 or less, and 1001 or more.

Write Test Cases for Valid partition value, Invalid partition value and exact boundary value.
Test Case 1: Consider test data exactly at the boundaries of the input domain, i.e. values 1 and 1000.
Test Case 2: Consider test data just below the extreme edges of the input domain, i.e. values 0 and 999.
Test Case 3: Consider test data just above the extreme edges of the input domain, i.e. values 2 and 1001.
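Rather than hard-coding the numbers, the boundary values for any minimum/maximum range can be derived generically; the helper below is a sketch of that idea.

def boundary_values(minimum, maximum):
    # For a valid range [minimum, maximum], boundary value analysis picks the
    # values just below, on, and just above each boundary.
    return sorted({minimum - 1, minimum, minimum + 1,
                   maximum - 1, maximum, maximum + 1})

# For the 1-1000 input box this yields exactly the values used above.
print(boundary_values(1, 1000))  # [0, 1, 2, 999, 1000, 1001]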

What is Equivalence Class Partitioning?

Equivalence partitioning is a test case design technique that divides the input data of the software into different equivalence data classes. Test cases are designed for each equivalence data class. The equivalence partitions are frequently derived from the requirements specification for input data that influence the processing of the test object. Using this method reduces the time necessary for testing the software, because fewer but more effective test cases are used.
Equivalence Partitioning = Equivalence Class Partitioning = ECP
It can be used at any level of testing and is often a good technique to apply first. In this technique, only one condition needs to be tested from each partition, because we assume that all the conditions in one partition are handled in the same manner by the software: if one condition in a partition works, the others will definitely work, and if one condition does not work, none of the conditions in that partition will work.
Equivalence partitioning is a testing technique where input values are grouped into classes for testing.
Valid Input Class = Keeps all valid inputs.
Invalid Input Class = Keeps all invalid inputs.

 Example of Equivalence Class Partitioning?

A text field permits only numeric characters
Length must be 6-10 characters long
Partition according to the requirement should be like this:

Invalid partition: 0-5 characters
Valid partition: 6-10 characters
Invalid partition: 11-14 characters

While evaluating equivalence partitioning, all values within a partition are considered equivalent; that is why 0-5 are equivalent, 6-10 are equivalent and 11-14 are equivalent.
At the time of testing, test 4 and 12 as invalid values and 7 as a valid value.

It is easy to test an input range like 6-10 exhaustively, but much harder to test a range like 2-600. Testing is easier with fewer test cases, but you should be very careful: if you test only 7 as the valid input, you are trusting that the developer has coded the correct valid range (6-10).
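A sketch of the same partitions as an automated check is shown below; accepts_field_value() is an invented stand-in for the text field's validation (numeric characters only, 6-10 characters long).

def accepts_field_value(value):
    # Stand-in for the field's rule: numeric characters only, length 6-10.
    return value.isdigit() and 6 <= len(value) <= 10

# One representative value per equivalence partition is enough.
assert not accepts_field_value("1234")          # invalid partition: 0-5 characters
assert accepts_field_value("1234567")           # valid partition: 6-10 characters
assert not accepts_field_value("123456789012")  # invalid partition: 11+ characters
assert not accepts_field_value("abc123")        # invalid partition: non-numeric input
print("All equivalence partition checks passed")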

Explain About Test Data, Test Script And Metric Reports

Test Data: Test data is the data created by the test engineer based on the pre-condition (the pre-condition is the data set-up that needs to be in place before executing the test case) in order to proceed with testing.
In order to test a software application you need to enter some data for testing most of the features. Any such specifically identified data which is used in tests is known as test data.
You can keep test data in an Excel sheet, to be entered manually while executing test cases, or it can be read automatically from files (XML, flat files, a database, etc.) by automation tools.

Some test data is used to confirm the expected result (when the test data is entered, the expected result should appear), and some test data is used to verify the software's behaviour with invalid input data.
Test data is generated by testers or by automation tools which support testing. Most of the time the test data is re-used in regression testing; it is always good practice to verify the test data before re-using it in any kind of test.
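For example, externalised test data is often kept in a CSV or Excel file and read at run time; in the sketch below a small CSV is embedded as a string so the example runs on its own, and the column names are assumptions.

import csv
import io

# In practice this data would live in an external file (Excel, CSV, XML, DB).
SAMPLE_CSV = """username,password,expected_result
admin,secret123,success
admin,wrong,failure
"""

test_data = list(csv.DictReader(io.StringIO(SAMPLE_CSV)))
for row in test_data:
    print(row["username"], "->", row["expected_result"])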

Test Script: A test script in software testing is a set of instructions that will be performed on the system under test to test that the system functions as expected.
There are various means for executing test scripts.
Manual testing: These are more commonly called test cases.
Automated testing: a short program written in a programming language, used to test part of the functionality of a software system. Such test scripts can either be written using a specialized automated functional GUI test tool (such as QTP, Selenium, Test Complete, etc.) or in a well-known programming language (such as C++, C#, Java, PHP, Perl, etc.).
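As a very small example of such a script, the sketch below uses Selenium with Python; the URL, the element locator and the expected title are placeholders, and a working browser driver (e.g. ChromeDriver) is assumed to be installed.

from selenium import webdriver
from selenium.webdriver.common.by import By

# Hypothetical script: verify that searching from the home page works.
driver = webdriver.Chrome()  # assumes ChromeDriver is available on the machine
try:
    driver.get("https://app-under-test.example.com")  # placeholder URL
    search_box = driver.find_element(By.NAME, "q")    # placeholder locator
    search_box.send_keys("laptop")
    search_box.submit()
    assert "results" in driver.title.lower(), "Search results page did not open"
finally:
    driver.quit()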

Metric Reports: Metric reports return statistical data and metrics about the Workspace, rather than returning specific Issue data.  Output options include HTML Text, Graphical, and Text File Export.  The information returned by these reports can help you measure how well your organization performs and where improvement is needed.
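As a simple illustration of the statistics such a report summarises, the snippet below computes a few common test metrics from invented execution figures.

# Invented execution results used only to illustrate typical test metrics.
total_test_cases = 120
executed = 110
passed = 95
failed = 15
defects_found = 22

execution_rate = executed / total_test_cases * 100
pass_rate = passed / executed * 100
fail_rate = failed / executed * 100

print(f"Execution rate: {execution_rate:.1f}%")  # 91.7%
print(f"Pass rate:      {pass_rate:.1f}%")       # 86.4%
print(f"Fail rate:      {fail_rate:.1f}%")       # 13.6%
print(f"Defects found:  {defects_found}")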



Manual Testing Roles And Responsibilities

  1. Analyzing the requirements and interacting with BA for clarifications if any
  2. Understanding testing requirements related to the current project / product.
  3. Participating in preparing Test Plans
  4. Preparing Test Scenarios
  5. Preparing Test Cases for module, integration and system testing
  6. Preparing Test Data for the test cases
  7. Preparing Test Environment to execute the test cases
  8. Reviewing the Test Cases prepared by other team members
  9. Executing the Test Cases
  10. Logging Defects and Tracking
  11. Giving mandatory information of a defect to developers in order to fix it
  12. Conducting Review Meetings within the Team
  13. Assisting the test lead in RTM preparation before sign-off of the project / product
  14. Preparing Summary Reports
  15. Preparing Lesson Learnt documents from the previous project testing experience
  16. Preparing Suggestion Documents to improve the quality of the application