Monday 5 October 2015

Alpha and Beta Testing

Alpha testing is done before the software is made available to the general public. Typically, the developers perform Alpha testing using white box techniques, with black box and grey box techniques carried out afterwards. The focus is on simulating real users by carrying out tasks and operations that a typical user might perform. Alpha testing is normally carried out in a lab-type environment rather than in users' usual workplaces. Once these techniques have been satisfactorily completed, Alpha testing is considered complete.

The next phase of testing is known as Beta testing. Unlike Alpha testing, people outside the company are included in the testing. As the aim is to perform a sanity check before the product's release, and defects may still be found at this stage, distribution of the software is limited to a selection of users outside the company. Typically, outsourced testing companies are used, as their feedback is independent and comes from a different perspective than that of the software development company's employees. The feedback can be used to fix defects that were missed, help prepare support teams for expected issues, or in some cases even force last-minute changes to functionality.

In some cases, the Beta version of software will be made available to the general public. This can give vital 'real-world' information for software/systems that rely on acceptable performance and load to function correctly.

The techniques used during a public Beta test are typically restricted to black box techniques. This is because the general public does not have inside knowledge of the software code under test, and because the aim of a Beta test is usually a sanity check together with feedback on how the product will be used in the real world.

Various sectors of the public are often eager to take part in Beta testing, as it can give them the opportunity to see and use products before their public release. Many companies use this phase of testing to assist with marketing their product. For example, Beta versions of a software application get people using the product and talking about it which (if the application is any good) builds hype and pre-orders before its public release.

What is Compatibility testing in software testing?

Compatibility testing is used to determine if your software application has issues related to how it functions in concert with the operating system and different types of system hardware and software.

It can be of two types - forward compatibility testing and backward compatibility testing.

  • Operating System Compatibility Testing - Linux, Mac OS, Windows
  • Database Compatibility Testing - Oracle, SQL Server
  • Browser Compatibility Testing - IE, Chrome, Firefox
  • Other System Software - Web server, networking/messaging tools, etc.


Browser compatibility testing
This is the most common form of compatibility testing. It checks the software application on different browsers such as Chrome, Firefox, Internet Explorer, Safari, and Opera.

Hardware
Checks the application/software against different hardware configurations.

Network
Checks the application on different networks such as 3G and Wi-Fi.

Mobile Devices
Checks whether the application is compatible with mobile devices and their platforms such as Android, iOS, and Windows.

Operating Systems
Checks whether the application is compatible with different operating systems such as Windows, Linux, and Mac.

How to perform Compatibility testing?

  • Test the application in the same browser but in different versions. For example, to test the compatibility of the site google.com, download different versions of Firefox, install them one by one, and test the Google site. The site should behave the same in each version.
  • Test the application in different browsers. For example, test the site google.com in the available browsers such as Firefox, Safari, Chrome, Internet Explorer and Opera (a minimal automation sketch follows this list).
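
As a hedged sketch, this kind of cross-browser check can be automated with Selenium WebDriver in Python, assuming the Firefox and Chrome drivers are installed; the URL and the title assertion are illustrative, not a prescribed test:

# Minimal cross-browser check with Selenium WebDriver (Python).
# Assumes geckodriver/chromedriver are available on the machine.
from selenium import webdriver

def check_site(driver_factory, url="https://www.google.com"):
    driver = driver_factory()
    try:
        driver.get(url)
        # The same basic assertion should hold in every browser/version.
        assert "Google" in driver.title
    finally:
        driver.quit()

for factory in (webdriver.Firefox, webdriver.Chrome):
    check_site(factory)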

Common Compatibility testing defects

  • Changes in UI (look and feel)
  • Change in font size
  • Alignment related issues
  • Change in CSS style and color
  • Scroll bar related issues
  • Content or label overlapping
  • Broken tables or Frames

What is Traceability Matrix

Why use traceability matrices?

The traceability matrices are the answer to the following questions when testing any software project:
  • How is it feasible to ensure, for each phase of the SDLC, that I have correctly accounted for all the customer’s needs?
  • How can I certify that the final software product meets the customer’s needs? It lets us make sure requirements are captured in test cases.
Disadvantages of not using traceability matrices include the following:
  • More defects in production due to poor or unknown test coverage.
  • Discovering bugs later in the development cycle resulting in more expensive fixes.
  • Difficulties planning and tracking projects.
  • Misunderstandings between different teams over project dependencies, delays, etc…
Benefits of using traceability matrices include the following:
  • Making it obvious to the client that the software is being developed as required.
  • Ensuring that all requirements are included in the test cases.
  • Ensuring that developers are not creating features that no one has requested.
  • Making it easy to identify missing functionalities.
  • Making it easy to find out which test cases need updating if there are change requests.
Requirements Traceability Matrix:

In simple words, a requirements traceability matrix is a document that traces and maps user requirements, usually requirement IDs from a requirement specification document, with the test case IDs. The purpose of this document is to make sure that all the requirements are covered in test cases so that nothing is missed.

The traceability matrix document is prepared to show clients that the coverage is complete. It usually includes the following columns: requirement, baseline document reference number, test case/condition, and defect/bug ID. Using this document, one can trace from a defect ID back to the requirement it affects.

Adding a few more columns to the traceability matrix gives you a good test case coverage checklist.
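
As a hedged sketch, such a matrix can be held as a simple mapping from requirement IDs to the test case IDs that cover them; all IDs below are made up for illustration:

# Requirements traceability matrix as a simple mapping:
# requirement ID -> list of test case IDs covering it.
rtm = {
    "REQ-001": ["TC-01", "TC-02"],
    "REQ-002": ["TC-03"],
    "REQ-003": [],  # no coverage yet
}

# Flag requirements that are not covered by any test case.
uncovered = [req for req, cases in rtm.items() if not cases]
print("Requirements without test coverage:", uncovered)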


Difference between Test case and use case

A use case describes a specific action performed by the user (actor) on the system to achieve a certain predefined task. It is written in terms of the actor and deals with the steps the user follows to accomplish the task; it is not concerned with specific input data. Use cases can expose integration defects between different components as the actor exercises them. Use cases are designed from the URS (User Requirements Specification) and describe the system from the user's point of view.

TEST CASE:
A test case is a combination of test conditions and test data. It describes a particular task with a set of input data, an expected output and certain test conditions. Test cases are designed from the SRS (Software Requirements Specification).


Saturday 3 October 2015

What is Six Sigma Concept

Six Sigma is a highly disciplined process that helps us focus on developing and delivering near-perfect products and services.

Features of Six Sigma

  • Six Sigma's aim is to eliminate waste and inefficiency, thereby increasing customer satisfaction by delivering what the customer is expecting.
  • Six Sigma follows a structured methodology, and has defined roles for the participants.
  • Six Sigma is a data driven methodology, and requires accurate data collection for the processes being analyzed.
  • Six Sigma is about putting results on Financial Statements.

Six Sigma is a business-driven, multi-dimensional structured approach for:

  • Improving Processes
  • Lowering Defects
  • Reducing process variability
  • Reducing costs
  • Increasing customer satisfaction
  • Increasing profits

The word Sigma is a statistical term that measures how far a given process deviates from perfection.

The central idea behind Six Sigma: if you can measure how many "defects" you have in a process, you can systematically figure out how to eliminate them and get as close to "zero defects" as possible. Specifically, Six Sigma means a failure rate of 3.4 defects per million opportunities, i.e. 99.99966% defect-free.
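
As a quick illustration of that arithmetic, here is a short Python sketch; the defect and opportunity counts are made-up numbers chosen to land exactly on the Six Sigma level:

# Defects per million opportunities (DPMO) and process yield.
defects = 17
opportunities = 5_000_000
dpmo = defects / opportunities * 1_000_000
yield_pct = (1 - defects / opportunities) * 100
print(f"DPMO: {dpmo:.1f}")         # 3.4 -> the Six Sigma level
print(f"Yield: {yield_pct:.5f}%")  # 99.99966%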

Key Concepts of Six Sigma

At its core, Six Sigma revolves around a few key concepts:

  • Critical to Quality: Attributes most important to the customer.
  • Defect: Failing to deliver what the customer wants.
  • Process Capability: What your process can deliver.
  • Variation: What the customer sees and feels.
  • Stable Operations: Ensuring consistent, predictable processes to improve what the customer sees and feels.



Agile Testing: Scrum Methodology

What is Agile Testing?

A software testing practice that follows the principles of agile software development is called Agile Testing. Agile is an iterative development methodology in which requirements evolve through collaboration between the customer and self-organizing teams, and development is aligned with customer needs.

SCRUM is a process within the agile methodology that combines the iterative model and the incremental model.

Some of the key characteristics of SCRUM include:

  • Self-organized and focused team.
  • No huge requirement documents; instead, very precise, to-the-point stories.
  • The cross-functional team works together as a single unit.
  • Close communication with the user representative to understand the features.
  • Has a definite timeline of at most one month.
  • Instead of doing the entire “thing” at once, Scrum does a little of everything at a given interval.
  • Resource capability and availability are considered before committing to anything.


Important SCRUM Terminologies:

1. Scrum Team

A Scrum team comprises 7 plus or minus 2 members. These members are a mixture of competencies: developers, testers, database people, support people, etc., along with the product owner and a Scrum master. All these members work together in close collaboration for a recursive and definite interval to develop and implement the said features.

2. Sprint

A Sprint is a predefined interval or time frame in which the work has to be completed and made ready for review or for production deployment. This time box usually lies between 2 weeks and 1 month. When we say that we follow a 1-month Sprint cycle, it simply means that we work for one month on the tasks and make them ready for review by the end of that month.

3. Product Owner

Product owner is the key stakeholder or the lead user of the application to be developed.

The product owner is the person who represents the customer side. He/she has the final authority, should always be available for the team, and should be reachable in case anyone has doubts that need clarification. It is important for the product owner to understand this role and not to assign any new requirement in the middle of a sprint, once the sprint has already started.

4. Scrum Master

The Scrum Master is the facilitator of the Scrum team. He/she makes sure that the Scrum team is productive and progressive. In case of any impediments, the Scrum master follows up and resolves them for the team.

5. User Story

User stories are simply the requirements or features that have to be implemented. In Scrum we don't have huge requirements documents; instead, each requirement is defined in a single paragraph.

6. Product Backlog

The product backlog is a bucket or source where all the user stories are kept. It is maintained by the product owner. The product backlog can be imagined as a wish list of the product owner, who prioritizes it as per business needs. During the planning meeting (see the next section), one user story at a time is taken from the product backlog; the team brainstorms on it, understands and refines it, and collectively decides, with the product owner's input, which user stories to take.

7. Sprint Backlog

Based on priority, user stories are taken from the product backlog one at a time. The Scrum team brainstorms on each one, determines its feasibility, and decides which stories to work on in a particular sprint. The collective list of all the user stories the Scrum team works on in a particular sprint is called the Sprint backlog.

Activities done in SCRUM Methodology:

#1: Planning meeting

The planning meeting is the starting point of SCRUM. It is the meeting where the entire Scrum team gathers, the product owner selects a user story from the product backlog based on priority, and the team brainstorms on it. Based on the discussion, the Scrum team decides the complexity of the story and sizes it as per the Fibonacci series. The team identifies the tasks, along with the effort (in hours), needed to complete the implementation of the user story.

Often the planning meeting is preceded by a “pre-planning meeting”. It is like homework the Scrum team does before it sits down for the formal planning meeting: the team tries to write down the dependencies and other factors it would like to discuss in the planning meeting.

#2: Execution of sprint tasks

As the name suggests, this is the actual work done by the Scrum team to accomplish its tasks and take the user story into the “Done” state.

#3: Daily scrum meeting (call)

During the sprint cycle, the Scrum team meets every day for not more than 15 minutes (this can be a stand-up call, recommended at the beginning of the day), and each member states three points:

What the team member did yesterday
What the team member plans to do today
Any impediments (roadblocks)
It is the Scrum master who facilitates this meeting. If any team member is facing any kind of difficulty, the Scrum master follows up to get it resolved.

#4: Review meeting

At the end of every sprint cycle, the SCRUM team meets again and demonstrates the implemented user stories to the product owner. The product owner may cross-verify the stories against their acceptance criteria. It is again the responsibility of the Scrum master to preside over this meeting.

#5: Retrospective meeting

The retrospective meeting happens after the review meeting. The SCRUM team meets to discuss and document the following points:

What went well during the Sprint (best practices)
What did not go well in the Sprint
Lessons learnt
Action items

The Scrum team should continue to follow the best practices, drop the practices that did not work well, and implement the lessons learnt in subsequent sprints. The retrospective meeting thus drives continuous improvement of the SCRUM process.

What is Spiral model- advantages, disadvantages and when to use it?

The spiral model is similar to the incremental model, with more emphasis placed on risk analysis. The spiral model has four phases: Planning, Risk Analysis, Engineering and Evaluation. A software project repeatedly passes through these phases in iterations (called spirals in this model). In the baseline spiral, starting in the planning phase, requirements are gathered and risk is assessed. Each subsequent spiral builds on the baseline spiral.

Planning Phase: Requirements are gathered during the planning phase, for example the BRS (Business Requirement Specification) and the SRS (System Requirement Specification).

Risk Analysis: In the risk analysis phase, a process is undertaken to identify risks and alternate solutions. A prototype is produced at the end of this phase. If any risk is found during the analysis, alternate solutions are suggested and implemented.

Engineering Phase: In this phase the software is developed, with testing at the end of the phase.

Evaluation phase: This phase allows the customer to evaluate the output of the project to date before the project continues to the next spiral.

Diagram of Spiral model:


Advantages of Spiral model:

  • A high amount of risk analysis, hence risk avoidance is enhanced.
  • Good for large and mission-critical projects.
  • Strong approval and documentation control.
  • Additional Functionality can be added at a later date.
  • Software is produced early in the software life cycle.

Disadvantages of Spiral model:

  • Can be a costly model to use.
  • Risk analysis requires highly specific expertise.
  • Project’s success is highly dependent on the risk analysis phase.
  • Doesn’t work well for smaller projects.

 When to use Spiral model:

  • When cost and risk evaluation is important
  • For medium to high-risk projects
  • Long-term project commitment unwise because of potential changes to economic priorities
  • Users are unsure of their needs
  • Significant changes are expected (research and exploration)
  • Requirements are complex
  • New product line

What is the difference between smoke testing and functional testing?

1. Smoke testing commonly involves performing functional testing (but may include non-functional testing such as installation testing for example).

2. Fewer test cases are executed in smoke testing than in detailed functional testing. Consequently, a smoke test takes less time than a detailed functional test.

3. The focus of a smoke test is broad and shallow. A detailed functional test covers each test area in depth.

4. In order to find any major or obvious defects quickly, it is common to execute a smoke test more often than a detailed functional test.

5. A detailed functional test follows a successful smoke test.
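
As a hedged sketch of how teams often encode this split, pytest markers can separate the broad-and-shallow smoke subset from the deeper functional suite; the test names and checks below are illustrative stand-ins, not a prescribed suite:

# Broad-and-shallow smoke tests vs. deep functional tests with pytest.
# Run only the smoke subset first: pytest -m smoke
# (Register the "smoke" marker in pytest.ini to silence warnings.)
import pytest

@pytest.mark.smoke
def test_application_responds():
    # Shallow check: a stand-in for "the build basically works".
    assert sum([1, 2, 3]) == 6

def test_detailed_functional_behaviour():
    # Deep check of one area, run after the smoke test passes.
    assert sorted([3, 1, 2]) == [1, 2, 3]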

What is The Difference Between Severity And Priority?

Severity: It is the extent to which the defect can affect the software. In other words it defines the impact that a given defect has on the system.

Priority: Priority defines the order in which we should resolve a defect. Should we fix it now, or can it wait? The priority status is set by the tester for the developer, mentioning the time frame within which to fix the defect. If high priority is mentioned, the developer has to fix it at the earliest. The priority status is set based on customer requirements.

Examples:

High Priority & High Severity: An error which occurs on the basic functionality of the application and will not allow the user to use the system.

High Severity & Low Priority: An error that blocks one piece of functionality of the application, but that functionality is rarely used by the end user.

High Priority & Low Severity: The client logo is not appearing on the web site, but the site is working fine. In this case the severity is low but the priority is high, because from the company's reputation standpoint it is most important to resolve; after all, reputation wins more clients and projects and hence increases revenue.

Low Priority and Low Severity: Any cosmetic or spelling issue within a paragraph or a report.

So the bottom line is that in the priority v/s severity matrix the priority is completely business impact driven and the severity is completely technical impact driven.
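
As a purely illustrative sketch, the two fields can be modeled independently on a defect record; the defect titles below are hypothetical and echo the examples above:

# Severity and priority as two independent fields on a defect.
from enum import Enum

class Severity(Enum):   # technical impact
    LOW = 1
    HIGH = 2

class Priority(Enum):   # business urgency
    LOW = 1
    HIGH = 2

defects = [
    ("Login crashes for all users", Severity.HIGH, Priority.HIGH),
    ("Rarely used export fails",    Severity.HIGH, Priority.LOW),
    ("Client logo missing",         Severity.LOW,  Priority.HIGH),
    ("Typo in report footer",       Severity.LOW,  Priority.LOW),
]
for title, sev, pri in defects:
    print(f"{title}: severity={sev.name}, priority={pri.name}")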

What Is User Acceptance Testing? 
 
User Acceptance Testing is the software testing process in which the system is tested for acceptability and the end-to-end business flow is validated. This type of testing is executed by the client in a separate environment (similar to the production environment) to confirm whether the system meets the requirements of the requirement specification.

UAT is performed after system testing is done and all or most of the major defects have been fixed. It is conducted in the final stage of the Software Development Life Cycle (SDLC), prior to the system being delivered to a live environment. UAT users, or end users, concentrate on end-to-end scenarios, and UAT typically involves running a suite of tests on the completed system.

Acceptance testing is “black box” testing: UAT users are not aware of the internal structure of the code; they simply provide input to the system and check whether it responds with the correct result.

User Acceptance Testing is also known as Customer Acceptance Testing (CAT) if the system is being built or developed by an external supplier. CAT or UAT is the final confirmation from the client before the system is ready for production. The business customers are the primary owners of the UAT tests. The tests are created by business customers and articulated in business domain language, so ideally it is a collaboration between business customers, business analysts, testers and developers. UAT consists of test suites involving multiple test cases, and each test case contains input data (if required) as well as the expected output. The result of a test case is either pass or fail.

Prerequisites of User Acceptance Testing:

Before UAT starts, the following checkpoints should be considered:
The business requirements should be available.
The development of the software application should be complete, and the earlier levels of testing (unit testing, integration testing and system testing) should be finished.
All high-severity, high-priority defects should be verified; no showstopper defects should remain in the system.
All reported defects should be verified before UAT starts.
The traceability matrix for all testing should be complete.
Cosmetic errors are acceptable before UAT starts, but they should still be reported.
After all defects are fixed, regression testing should be carried out to check that the fixes have not broken other working areas.
A separate UAT environment, similar to production, should be ready.
Sign-off should be given by the system testing team, stating that the software application is ready for UAT execution.

What to Test in User Acceptance Testing:

Test cases are created based on the use cases from the requirements definition stage.
Test cases are also created to cover real-world scenarios for the application.
The actual testing is carried out in an environment that is a copy of the production environment, so this type of testing concentrates on the exact real-world use of the application.
Test cases are designed so that all areas of the application are covered during testing, ensuring effective User Acceptance Testing.

What are the key deliverable of User Acceptance Testing:

The completion of User Acceptance Testing is a significant milestone for a traditional testing method. The key deliverables of the User Acceptance Testing phase are:
Test Plan: outlines the testing strategy.
UAT Test Cases: help the team effectively test the application in the UAT environment.
Test Results and Error Reports: a log of all the test cases executed and the actual results.
User Acceptance Sign-off: confirmation that the system, documentation and training materials have passed all tests within acceptable margins.
Installation Instructions: a document that helps install the system in the production environment.
Documentation Materials: tested and updated user documentation and training materials, finalized during User Acceptance Testing.

Explain Positive Testing And Negative Testing With Example?

Positive Testing: When the tester tests the application from a positive point of view, it is known as positive testing.

Testing the application with valid input and data is known as positive testing.

Positive testing is a test designed to check that the application works correctly. Here the aim of the tester is to show the application passing; it is sometimes called clean testing, and the mindset is “test to pass”.

Negative Testing: When the tester tests the application from a negative point of view, it is known as negative testing.

Testing the application with invalid input and data is known as negative testing.

Example of positive testing is given below:

For example, suppose the password length defined in the requirements is 7 to 15 characters. Whenever we check the application by entering alphanumeric passwords of between 7 and 15 characters in the password field, it is positive testing, because we are testing the application with valid data/input.

Example of negative testing is given below:

For example, a phone number field should accept only numbers, not alphabets or special characters. If we type alphabets and special characters into the phone number field to check whether they are rejected, that is negative testing.
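
A minimal sketch of both mindsets against a hypothetical phone-number validator; the is_valid_phone function is an assumption for illustration, not a real API:

# Hypothetical validator used to illustrate positive vs. negative tests.
def is_valid_phone(value: str) -> bool:
    return value.isdigit() and len(value) == 10

# Positive test: valid input must be accepted.
assert is_valid_phone("9876543210")

# Negative tests: alphabets and special characters must be rejected.
assert not is_valid_phone("98765abcde")
assert not is_valid_phone("98765@#210")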

What is Test Plan ?

Test plan: A document describing the scope, approach, resources and schedule of intended test activities. It identifies, among other things, the test items, the features to be tested, the testing tasks, who will do each task, the degree of tester independence, the test environment, the test design techniques and entry and exit criteria to be used and the rationale for their choice, and any risks requiring contingency planning. It is a record of the test planning process.

TEST PLAN TEMPLATE

The format and content of a software test plan vary depending on the processes, standards, and test management tools being implemented. Nevertheless, the following format, which is based on IEEE standard for software test documentation, provides a summary of what a test plan can/should contain.

Test Plan Identifier:

Provide a unique identifier for the document. (Adhere to the Configuration Management System if you have one.)

Introduction:

  • Provide an overview of the test plan.
  • Specify the goals/objectives.
  • Specify any constraints.

References:

List the related documents, with links to them if available, including the following:
Project Plan
Configuration Management Plan

Test Items:

List the test items (software/products) and their versions.

Features to be Tested:

List the features of the software/product to be tested.
Provide references to the Requirements and/or Design specifications of the features to be tested

Features Not to Be Tested:

List the features of the software/product which will not be tested.
Specify the reasons these features won’t be tested.

Approach:

Mention the overall approach to testing.
Specify the testing levels [if it’s a Master Test Plan], the testing types, and the testing methods [Manual/Automated; White Box/Black Box/Gray Box]

Item Pass/Fail Criteria:

Specify the criteria that will be used to determine whether each test item (software/product) has passed or failed testing.

Suspension Criteria and Resumption Requirements:

Specify criteria to be used to suspend the testing activity.
Specify testing activities which must be redone when testing is resumed.

Test Deliverables:

List test deliverables, and links to them if available, including the following:
Test Plan (this document itself)
Test Cases
Test Scripts
Defect/Enhancement Logs
Test Reports


Test Environment:

Specify the properties of the test environment: hardware, software, network, etc.
List any testing or related tools.

Estimate:

Provide a summary of test estimates (cost or effort) and/or provide a link to the detailed estimation.

Schedule:

Provide a summary of the schedule, specifying key test milestones, and/or provide a link to the detailed schedule.

Staffing and Training Needs:

Specify staffing needs by role and required skills.
Identify training that is necessary to provide those skills, if not already acquired.

Responsibilities:

List the responsibilities of each team/role/individual.

Risks:

List the risks that have been identified.
Specify the mitigation plan and the contingency plan for each risk.

Assumptions and Dependencies:

List the assumptions that have been made during the preparation of this plan.
List the dependencies.

Approvals:

Specify the names and roles of all persons who must approve the plan.
Provide space for signatures and dates. (If the document is to be printed.)

Defect Life Cycle Or Bug Life Cycle

Defect:
A fault in a program, which causes the program to perform in an unintended or unanticipated manner.

Defect Life Cycle (Bug Life cycle) is the journey of a defect from its identification to its closure. The Life Cycle varies from organization to organization and is governed by the software testing process the organization or project follows and/or the Defect tracking tool being used.

1) New: When QA files a new bug.

2) Assigned: The lead or manager sets the "Assigned To" field and assigns the bug to a developer.

3) Could not reproduce: If the developer is not able to reproduce the bug from the steps given in the bug report by QA, the developer can mark the bug as CNR. QA then needs to check whether the bug still reproduces and can reassign it to the developer with detailed reproduction steps.

4) Need more information: If the developer is not clear about the reproduction steps provided by QA, he/she can mark the bug as "Need more information". In this case QA needs to add detailed reproduction steps and assign the bug back to the developer for a fix.

5) Deferred: If the bug is not related to the current build, cannot be fixed in this release, or is not important enough to fix immediately, the project manager can set the bug status to Deferred.

6) Rejected/Invalid: Sometimes the developer or team lead can mark the bug as Rejected or Invalid if the system is working according to the specifications and the bug is due to a misinterpretation.

7) Resolved/Fixed: When the developer makes the necessary code changes and verifies them, he/she can mark the bug as Fixed, and the bug is passed to the testing team.

8) Reopen: If QA is not satisfied with the fix and the bug is still reproducible, QA can mark it as Reopen so that the developer can take appropriate action.

9) Closed: If the bug is verified by the QA team, the fix is OK, and the problem is solved, QA can mark the bug as Closed.
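
As a hedged sketch, the life cycle can be modeled as a simple state machine; the status names follow the list above, while the allowed transitions are an assumption and vary by organization and tracking tool:

# Allowed status transitions in a typical defect life cycle.
TRANSITIONS = {
    "New":                   {"Assigned"},
    "Assigned":              {"Could not reproduce", "Need more information",
                              "Deferred", "Rejected", "Resolved"},
    "Could not reproduce":   {"Assigned"},  # QA adds steps, reassigns
    "Need more information": {"Assigned"},
    "Deferred":              {"Assigned"},  # picked up in a later release
    "Rejected":              set(),
    "Resolved":              {"Reopen", "Closed"},
    "Reopen":                {"Assigned"},
    "Closed":                set(),
}

def move(status: str, new_status: str) -> str:
    # Reject transitions that are not part of the life cycle.
    if new_status not in TRANSITIONS[status]:
        raise ValueError(f"Illegal transition: {status} -> {new_status}")
    return new_status

status = move("New", "Assigned")
status = move(status, "Resolved")
status = move(status, "Closed")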

Software Testing Life Cycle (STLC)

Software Testing Life Cycle (STLC) defines the steps/stages/phases in testing of software.

The different stages in Software Testing Life Cycle:

  • Requirement Analysis
  • Test Planning
  • Test Case Design / Development
  • Environment setup
  • Test Execution / Reporting / Defect Tracking
  • Test Cycle Closure / Retrospective study

1) Requirement Analysis
During this phase, the test team studies the requirements from a testing point of view to identify the testable requirements. The QA team may interact with various stakeholders (client, business analyst, technical leads, system architects, etc.) to understand the requirements in detail. Requirements can be either functional (defining what the software must do) or non-functional (defining system performance, security, availability). Automation feasibility for the given testing project is also assessed in this stage.

Activities

  • Identify types of tests to be performed. 
  • Gather details about testing priorities and focus.
  • Prepare Requirement Traceability Matrix (RTM).
  • Identify test environment details where testing is supposed to be carried out. 
  • Automation feasibility analysis (if required).


Deliverables 
RTM
Automation feasibility report. (if applicable)

2) Test Planning

This phase is also called the Test Strategy phase. Typically, in this stage, a senior QA manager determines the effort and cost estimates for the project and prepares and finalizes the Test Plan.

Activities

  • Preparation of test plan/strategy document for various types of testing
  • Test tool selection 
  • Test effort estimation 
  • Resource planning and determining roles and responsibilities.
  • Training requirement


Deliverables 
Test plan /strategy document.
Effort estimation document.

3) Test Case Design / Development
This phase involves the creation, verification and rework of test cases and test scripts. Test data is identified/created, reviewed, and then reworked as well.

Activities

  • Create test cases, automation scripts (if applicable)
  • Review and baseline test cases and scripts 
  • Create test data (If Test Environment is available)


Deliverables 
Test cases/scripts

4) Test Environment Setup
The test environment determines the software and hardware conditions under which a work product is tested. Test environment set-up is one of the critical aspects of the testing process and can be done in parallel with the Test Case Development stage. The test team may not be involved in this activity if the customer/development team provides the test environment, in which case the test team is required to do a readiness check (smoke testing) of the given environment.

Activities 

  • Understand the required architecture, environment set-up and prepare hardware and software requirement list for the Test Environment. 
  • Setup test Environment and test data 
  • Perform smoke test on the build


Deliverables 
Environment ready with test data set up
Smoke Test Results.

5) Test Execution / Reporting / Defect Tracking
During this phase the test team carries out the testing based on the test plans and the test cases prepared. Bugs are reported back to the development team for correction, and retesting is performed.

Activities 

  • Execute tests as per plan
  • Document test results, and log defects for failed cases 
  • Map defects to test cases in RTM 
  • Retest the defect fixes 
  • Track the defects to closure


Deliverables 
Completed RTM with execution status
Test cases updated with results
Defect reports

6) Test Cycle Closure / Retrospective study
The testing team will meet, discuss and analyze testing artifacts to identify strategies that should be implemented in future, taking lessons from the current test cycle. The idea is to remove process bottlenecks for future test cycles and share best practices for any similar projects in future.

Activities

  • Evaluate cycle completion criteria based on time, test coverage, cost, software, critical business objectives and quality
  • Prepare test metrics based on the above parameters. 
  • Document the learning out of the project 
  • Prepare Test closure report 
  • Qualitative and quantitative reporting of quality of the work product to the customer. 
  • Test result analysis to find out the defect distribution by type and severity.


Deliverables 
Test Closure report
Test metrics

Difference Between Smoke And Sanity Testing?

Smoke Testing:

  1. Smoke testing is done to check the normal health of the build and to make sure it is possible to continue testing. It is done at the beginning of the software testing cycle.
  2. Smoke testing is conducted on early builds to ascertain that the application's most critical features are working and that it is ready for complete testing.
  3. The objective of smoke testing is to check the application's stability before starting thorough testing.
  4. It follows a shallow and wide approach in which you cover all the basic functionality of the software.
  5. Smoke testing is like a general health check-up, where all critical areas of the application are tested without going into their details (build verification).


Sanity Testing:

  1. Sanity testing is done after thorough regression testing is over; it makes sure that any defect fixes or changes after regression testing do not break the core functionality of the product. It is done towards the end of the product release phase.
  2. Sanity testing is conducted on stable builds (after multiple regression tests) to ascertain that new functionality works, bugs have been fixed, and the application is ready for complete testing.
  3. The objective of sanity testing is to check the soundness of the application before starting thorough testing.
  4. Sanity testing follows a narrow and deep approach, with detailed testing of a few limited features.
  5. Sanity testing is like a specialized health check-up, where a small area of the application is thoroughly tested.

Difference between Retesting and Regression Testing

Definition of Retesting and Regression Testing:

Re-Testing: After a defect is detected and fixed, the software should be retested to confirm that the original defect has been successfully removed. This is called Confirmation Testing or Re-Testing

Regression testing:  Testing your software application when it undergoes a code change to ensure that the new code has not affected other parts of the software.

It can be done in 2 scenarios:

1) Whenever a test engineer identifies a defect, then after rectification we need to test the defective functionality once again, as well as its related functionality.

2) Whenever new features are added to the application, we need to test the functionalities related to those new features.

Suppose a defect is logged by a tester while testing the application and is then fixed by a developer. In retesting, we check whether that same defect is fixed, using the steps to reproduce mentioned in the defect report. In regression testing, we check that the defect fix has not impacted other, unchanged parts of the application, i.e. that functionality which was working previously has not been broken by the fix.

Test Case Design Techniques In Black Box Testing With Examples

Boundary value analysis and Equivalence Class Partitioning both are test case design techniques in black box testing.

What is Boundary value analysis:

Boundary value analysis is a test case design technique that tests boundary values between partitions (both the valid and the invalid boundary partitions). A boundary value is an input or output value on the border of an equivalence partition; it includes the minimum and maximum values at the inside and outside boundaries. Boundary value analysis is normally part of stress and negative testing.

Using the boundary value analysis technique, the tester creates test cases for a required input field. For example, consider an Address text box that allows a maximum of 500 characters. Writing a test case for every possible length would be impractical, so boundary value analysis is used instead.

Example for Boundary Value Analysis:
Example 1
Suppose you have a very important tool at the office that requires a valid User Name and Password, where the password must be a minimum of 8 and a maximum of 12 characters. Valid range: 8-12 characters. Invalid ranges: 7 or fewer characters, and 13 or more characters.

Write Test Cases for Valid partition value, Invalid partition value and exact boundary value.
Test Cases 1: Consider password length less than 8.
Test Cases 2: Consider password of length exactly 8.
Test Cases 3: Consider password of length between 9 and 11.
Test Cases 4: Consider password of length exactly 12.
Test Cases 5: Consider password of length more than 12.
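
A minimal sketch of these five cases as automated checks; the password_length_ok validator is hypothetical, and the 8-12 range comes from the example above:

# Hypothetical length check for the 8-12 character password rule.
def password_length_ok(password: str) -> bool:
    return 8 <= len(password) <= 12

# Boundary value analysis: exact boundaries plus values just outside.
assert not password_length_ok("a" * 7)   # just below minimum -> invalid
assert password_length_ok("a" * 8)       # minimum boundary   -> valid
assert password_length_ok("a" * 10)      # inside the range   -> valid
assert password_length_ok("a" * 12)      # maximum boundary   -> valid
assert not password_length_ok("a" * 13)  # just above maximum -> invalid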

Example 2
Consider an application whose input box accepts numbers between 1 and 1000. Valid range: 1-1000. Invalid ranges: 0 or less, and 1001 or more.

Write Test Cases for Valid partition value, Invalid partition value and exact boundary value.
Test Cases 1: Consider test data exactly at the boundaries of the input domain, i.e. values 1 and 1000.
Test Cases 2: Consider test data just below the extreme edges of the input domain, i.e. values 0 and 999.
Test Cases 3: Consider test data just above the extreme edges of the input domain, i.e. values 2 and 1001.

What is Equivalence Class Partitioning?

Equivalence partitioning is a test case design technique that divides the input data of the software into different equivalence data classes. Test cases are designed for each equivalence data class. The equivalence partitions are frequently derived from the requirements specification for input data that influences the processing of the test object. Using this method reduces the time necessary for testing the software, using fewer but more effective test cases.
Equivalence Partitioning = Equivalence Class Partitioning = ECP
It can be used at any level of software testing and is a good technique to use first. In this technique, only one condition needs to be tested from each partition, because we assume that all the conditions in one partition are treated in the same manner by the software. If one condition in a partition works, we assume the others will work too; likewise, if one condition in a partition does not work, we assume none of the conditions in that partition will work.
Equivalence partitioning is a testing technique in which input values are divided into classes for testing.
Valid Input Class = keeps all valid inputs.
Invalid Input Class = keeps all invalid inputs.

Example of Equivalence Class Partitioning

A text field permits only numeric characters.
Length must be 6-10 characters long.
The partitions according to this requirement are:

Invalid partition: 0-5 characters
Valid partition: 6-10 characters
Invalid partition: 11-14 characters

While evaluating equivalence partitioning, all values within a partition are considered equivalent: 0-5 are equivalent, 6-10 are equivalent, and 11-14 are equivalent.
At the time of testing, test 4 and 12 as invalid values and 7 as a valid one, as sketched below.

It is easy to test an input range like 6-10 but harder to test a range like 2-600. Fewer test cases make testing easier, but you should be careful: assuming a valid input of 7 passes means you trust that the developer coded the correct valid range (6-10).
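
A minimal sketch of testing one representative value per partition; the field_ok validator is hypothetical and encodes the numeric, 6-10 character rule above:

# Hypothetical check for the numeric, 6-10 character field above.
def field_ok(value: str) -> bool:
    return value.isdigit() and 6 <= len(value) <= 10

# One representative value per equivalence partition.
assert not field_ok("1" * 4)   # partition 0-5 characters   -> invalid
assert field_ok("1" * 7)       # partition 6-10 characters  -> valid
assert not field_ok("1" * 12)  # partition 11-14 characters -> invalid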

Explain About Test Data, Test Script And Metric Reports?

Test Data: Test data is the data created by the test engineer based on the pre-condition (a pre-condition is the data setup that needs to be created before executing the test case) in order to proceed with testing.
In order to test a software application you need to enter some data for testing most of the features. Any such specifically identified data which is used in tests is known as test data.
You can have test data in excel sheet which can be entered manually while executing test cases or it can be read automatically from files (XML, Flat Files, Database etc.) by automation tools.

Some test data is used to confirm the expected result, i.e. when the test data is entered the expected result should appear; other test data is used to verify the software's behavior with invalid input.
Test data is generated by testers or by automation tools that support testing. Most of the time in regression testing the test data is re-used; it is always good practice to verify the test data before re-using it in any kind of test.
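
As a small sketch, test data can be kept in an external file and read at execution time; the file name login_data.csv and its columns are made up for illustration:

# Read externally maintained test data (hypothetical login_data.csv
# with columns: username, password, expected).
import csv

with open("login_data.csv", newline="") as f:
    for row in csv.DictReader(f):
        # Each row would be fed into the test case under execution.
        print(row["username"], row["password"], row["expected"])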

Test Script: A test script in software testing is a set of instructions that will be performed on the system under test to test that the system functions as expected.
There are various means for executing test scripts.
Manual testing: These are more commonly called test cases.
Automated testing: A short program written in a programming language that tests part of the functionality of a software system. Automated test scripts can be written either with a dedicated functional GUI test tool (such as QTP, Selenium or TestComplete) or in a well-known programming language (such as C++, C#, Java, PHP or Perl).
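
A minimal example of an automated test script using Python's built-in unittest framework; the add function is a stand-in for the unit under test:

import unittest

def add(a, b):  # stand-in for the unit under test
    return a + b

class AddTests(unittest.TestCase):
    def test_add_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_add_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    unittest.main()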

Metric Reports: Metric reports return statistical data and metrics about the workspace, rather than returning specific issue data. Output options include HTML text, graphical, and text file export. The information returned by these reports can help you measure how well your organization performs and where improvement is needed.



Manual Testing Roles And Responsibilities

  1. Analyzing the requirements and interacting with BA for clarifications if any
  2. Understanding testing requirements related to the current project / product.
  3. Participating in preparing Test Plans
  4. Preparing Test Scenarios
  5. Preparing Test Cases for module, integration and system testing
  6. Preparing Test Data for the test cases
  7. Preparing Test Environment to execute the test cases
  8. Reviewing the Test Cases prepared by other team members
  9. Executing the Test Cases
  10. Logging Defects and Tracking
  11. Giving mandatory information of a defect to developers in order to fix it
  12. Conducting Review Meetings within the Team
  13. Assist test lead in RTM preparation before signoff from the project / product
  14. Preparing Summary Reports
  15. Preparing Lesson Learnt documents from the previous project testing experience
  16. Preparing Suggestion Documents to improve the quality of the application