Monday 30 November 2015

Manual Testing Interview Q/A - Part 5

Q61. What are the key challenges of software testing? (Deloitte, HSBC)

Following are some common challenges of software testing:
1. The application should be stable enough to be tested.
2. Testing is always done under time constraints.
3. Understanding the requirements.
4. Understanding the domain and the business user's perspective.
5. Deciding which tests to execute first.
6. Testing the complete application.
7. Regression testing.
8. Lack of skilled testers.
9. Changing requirements.
10. Lack of resources, tools, and training.

Q62. What makes a good QA or Test manager? (HSBC, Portware)

A good QA or Test manager should have the following characteristics:
• Knowledge of the software development process.
• Ability to improve teamwork and increase productivity.
• Ability to improve cooperation between software, test, and QA engineers.
• Ability to improve the QA processes.
• Strong communication skills.
• Ability to conduct meetings and keep them focused.

Q63. What is a Requirement Traceability Matrix? (HSBC, Deloitte, Accenture, Infosys)

The Requirements Traceability Matrix (RTM) is a tool used to make sure that project requirements remain covered and unchanged throughout the whole development process. An RTM is used in the development process for the following reasons:
• To determine whether the developed project meets the requirements of the user.
• To confirm that all the requirements given by the user are accounted for.
• To make sure the application requirements can be fulfilled in the verification process.
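As a rough illustration (the requirement and test case IDs below are invented), an RTM can be thought of as a mapping from each requirement to the test cases that cover it, which makes coverage gaps easy to spot:

```python
# Minimal RTM sketch: requirement IDs mapped to the test cases that cover them.
# All IDs below are hypothetical, for illustration only.
rtm = {
    "REQ-001": ["TC-101", "TC-102"],  # e.g., login requirement
    "REQ-002": ["TC-103"],            # e.g., password-reset requirement
    "REQ-003": [],                    # requirement with no covering test yet
}

# Flag requirements with no covering test case (a traceability gap).
uncovered = [req for req, tests in rtm.items() if not tests]
print("Requirements without test coverage:", uncovered)
```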

Q64. Difference between SRS, FRS and BRS (Portware, Infosys)

Distinction among SRS, FRS and BRS (top 7 points):
1. Full form: SRS means "Software Requirement Specification"; FRS means "Functional Requirement Specification"; BRS means "Business Requirement Specification".
2. Focus: SRS deals with the resources provided by the company; FRS deals with the requirements given by the client; BRS deals with the business aspects of the requirements.
3. Use cases: SRS always includes use cases to describe the interaction with the system; FRS and BRS do not include use cases.
4. Author: SRS is developed by a System Analyst and is also known as the User Requirement Specification; FRS is developed by developers and engineers; BRS is developed by a Business Analyst.
5. Content: SRS describes all the business functionalities of the application; FRS describes the particular functionality of every single page in detail, from start to end; BRS defines what exactly the customer wants, and it is the document the team follows from start to end.
6. Coverage: SRS explains all the functional and non-functional requirements; FRS explains the sequence of operations to be followed in every single process; BRS tells the story of the whole set of requirements.
7. Level of detail: SRS is a complete document that describes the behavior of the system to be developed; FRS is a document that describes the functional requirements, i.e., how all the functionalities of the system should be easy and efficient for the end user; BRS is a simple document that describes the business requirements at a fairly broad level.

Q65. What is a Test Plan? (Asked in almost all interviews)
A test plan is a document describing the scope, approach, resources, and schedule of intended test activities. It identifies, among other things, the test items, the features to be tested, the testing tasks, who will do each task, the degree of tester independence, the test environment, the test design techniques, the entry and exit criteria to be used and the rationale for their choice, and any risks requiring contingency planning. It is a record of the test planning process.
Test Plan Identifier:
Provide a unique identifier for the document. (Adhere to the Configuration Management System if you have one.)
Introduction:
Provide an overview of the test plan.
Specify the goals/objectives.
Specify any constraints.
References:
List the related documents, with links to them if available, including the following:
Project Plan
Configuration Management Plan
Test Items:
List the test items (software/products) and their versions.
Features to be Tested:
List the features of the software/product to be tested.
Provide references to the Requirements and/or Design specifications of the features to be tested.
Features Not to Be Tested:
List the features of the software/product which will not be tested.
Specify the reasons these features won’t be tested.
Approach:
Mention the overall approach to testing.
Specify the testing levels [if it's a Master Test Plan], the testing types, and the testing methods [Manual/Automated; White Box/Black Box/Gray Box].
Item Pass/Fail Criteria:
Specify the criteria that will be used to determine whether each test item (software/product) has passed or failed testing.
Suspension Criteria and Resumption Requirements:
Specify criteria to be used to suspend the testing activity.
Specify testing activities which must be redone when testing is resumed.
Test Deliverables:
List test deliverables, and links to them if available, including the following:
Test Plan (this document itself)
Test Cases
Test Scripts
Defect/Enhancement Logs
Test Reports
Test Environment:
Specify the properties of test environment: hardware, software, network etc.
List any testing or related tools.
Estimate:
Provide a summary of test estimates (cost or effort) and/or provide a link to the detailed estimation.
Schedule:
Provide a summary of the schedule, specifying key test milestones, and/or provide a link to the detailed schedule.
Staffing and Training Needs:
Specify staffing needs by role and required skills.
Identify training that is necessary to provide those skills, if not already acquired.
Responsibilities:
List the responsibilities of each team/role/individual.
Risks:
List the risks that have been identified.
Specify the mitigation plan and the contingency plan for each risk.
Assumptions and Dependencies:
List the assumptions that have been made during the preparation of this plan.
List the dependencies.
Approvals:
Specify the names and roles of all persons who must approve the plan.
Provide space for signatures and dates. (If the document is to be printed.)

Q66. How do you perform risk analysis during software testing?

Risk analysis is the process of identifying risks in the application and prioritizing them for testing. Following are some typical sources of risk:

1. New Hardware.
2. New Technology.
3. New Automation Tool.
4. Sequence of code delivery.
5. Availability of application test resources.

We prioritize risks into three categories:

• High magnitude: the defect would impact other functionality of the application.
• Medium: tolerable in the application, but not desirable.
• Low: tolerable; this type of risk has no impact on the business.
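One common way to derive such a priority (an assumption of this sketch, with made-up risks and scores) is to rate each risk's likelihood and impact and sort by their product:

```python
# Risk prioritization sketch: exposure = likelihood x impact, both on a 1-3 scale.
# The risks and scores below are hypothetical examples.
risks = [
    {"name": "New hardware",              "likelihood": 2, "impact": 3},
    {"name": "New automation tool",       "likelihood": 3, "impact": 2},
    {"name": "Sequence of code delivery", "likelihood": 1, "impact": 2},
]

for r in risks:
    r["exposure"] = r["likelihood"] * r["impact"]

# Test the highest-exposure risks first.
for r in sorted(risks, key=lambda r: r["exposure"], reverse=True):
    print(f"{r['name']}: exposure = {r['exposure']}")
```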

Q67. Difference between System Testing and Acceptance Testing (Infosys, Accenture, Portware, Cigniti)

1. The user is not involved in system testing; the user is completely involved in acceptance testing.
2. System testing is not the final stage of validation; acceptance testing is the final stage of validation.
3. System testing is conducted on a whole, integrated system to evaluate the system's compliance with its specified set of requirements; acceptance testing is conducted to evaluate the system's compliance with its specified set of user requirements.
4. System testing checks how the system functions as a whole, validating both functionality and performance; acceptance testing is done by the developer before releasing the product to check whether it meets the user's requirements, and by the user to decide whether to accept the product.
5. System testing checks whether the system meets the defined specifications; acceptance testing checks whether it meets the defined user requirements.
6. System testing satisfies the developer and tester that the system meets its specifications; acceptance testing satisfies the customer that the software product meets their needs.

Q68. What is Sanity Testing? Explain it with an example.
Sanity testing is surface-level testing in which a QA engineer verifies that all the menus, functions, and commands available in the product are working fine.
Sanity testing is performed after the build has cleared the smoke test and has been accepted by the QA team for further testing; it checks the major functionality in finer detail.
When do we perform sanity testing?
Sanity testing is performed when the development team needs a quick read on the state of the product after a code change, or when there is a controlled code change in a feature to fix a critical issue and a stringent release time-frame does not allow complete regression testing.
Conclusion:
Sanity testing is mostly done after a retest (a retest is done after fixing a bug). Smoke tests are usually scripted, but sanity tests are not.
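A minimal scripted smoke check might look like the sketch below; the base URL and endpoint list are placeholders, and the third-party `requests` library is assumed to be installed.

```python
# Minimal scripted smoke check: verify the build is alive before deeper testing.
# BASE_URL and CRITICAL_ENDPOINTS are placeholders for this sketch.
import requests

BASE_URL = "http://example.com"
CRITICAL_ENDPOINTS = ["/", "/login", "/search"]

def smoke_test():
    for path in CRITICAL_ENDPOINTS:
        resp = requests.get(BASE_URL + path, timeout=10)
        assert resp.status_code == 200, f"Smoke check failed for {path}"
    print("Smoke test passed: build accepted for sanity/regression testing.")

if __name__ == "__main__":
    smoke_test()
```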

Q69. Difference between Static and Dynamic Testing

Static Testing
• It is done in the verification phase.
• It is about prevention: it asks "how do we prevent defects?"
• It is considered a less cost-effective task.
• It is not considered a time-consuming task.
• Static testing is also known as dry run testing.
• Its techniques include inspections, reviews, and walkthroughs.

Dynamic Testing
• It is done in the validation phase.
• It is about cure: it asks "how do we fix defects?"
• It is considered a more cost-effective task.
• It is considered a time-consuming task because it requires several test cases to execute.
• Its technique is the actual execution of the software under test.
• It can find errors that static testing cannot find, and it is a higher-level exercise.
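As a small invented example of the contrast: a reviewer doing static testing can spot the missing empty-list check below just by reading the code, while dynamic testing finds the same defect only by executing it.

```python
# Hypothetical function used to contrast static and dynamic testing.
def average(values):
    # Static testing (inspection/review) can flag, without running anything,
    # that an empty list causes a ZeroDivisionError on the next line.
    return sum(values) / len(values)

# Dynamic testing exposes the same defect by actually executing the code.
def test_average_of_empty_list():
    try:
        average([])
    except ZeroDivisionError:
        print("Dynamic test exposed the defect at runtime.")

test_average_of_empty_list()
```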

Q70. What is Regression Testing?

Regression testing is conducted after a bug is fixed or any functionality is changed.
It is done to verify that the modified code does not break the existing functionality of the application and that the application still works within the requirements of the system.
Usually the module in which the bug was fixed is tested, but during regression testing the tester also checks the entire system to see whether the fix has any adverse effect on the existing functionality.
There are two main strategies for regression testing: 1) run all tests, or 2) run a subset of tests selected by a test case prioritization technique (a small sketch of the second strategy follows).
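For example, with pytest (an assumption of this sketch, as is the function under test), a quick regression subset can be selected with a marker and run via `pytest -m regression`; registering the marker in pytest.ini avoids warnings.

```python
# Sketch of a prioritized regression subset using a pytest marker.
# The function under test and the marker name are hypothetical.
import pytest

def add_to_cart(cart, item):
    # Imagine this module just received a bug fix.
    return cart + [item]

@pytest.mark.regression
def test_add_to_cart_keeps_existing_items():
    # High-priority check: the fix must not break existing behavior.
    assert add_to_cart(["book"], "pen") == ["book", "pen"]

def test_cart_label_text():
    # Lower-priority test, excluded when running only "pytest -m regression".
    assert "Cart".lower() == "cart"
```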

Q71. Why do we use Stubs and Drivers? (Accenture, Kofax)

Stubs are dummy modules that act as "called programs". They are used in top-down integration testing when the lower-level sub-programs are still under construction. Stubs simulate the low-level modules.
Drivers are dummy modules that act as "calling programs". They are used in bottom-up integration testing when the main (higher-level) programs are still under construction. Drivers simulate the high-level modules.
In short, stubs are "called" functions in top-down integration, and drivers are "calling" functions in bottom-up integration; a small sketch follows.
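As a minimal sketch (all module names here are invented for illustration), a stub stands in for an unfinished low-level tax module while the high-level checkout is tested top-down, and a driver exercises a finished low-level discount module while its main program is still under construction:

```python
# --- Top-down integration: a stub replaces an unfinished low-level module ---
def tax_service_stub(amount):
    # Stub: a "called" dummy that simulates the real (unfinished) tax module
    # by returning a canned, simplified response.
    return 0.10 * amount

def checkout(amount):
    # High-level module under test; it calls the stub instead of the real module.
    return amount + tax_service_stub(amount)

# --- Bottom-up integration: a driver replaces the unfinished main program ---
def discount(amount):
    # Low-level module under test.
    return amount * 0.9

def driver():
    # Driver: a "calling" dummy that exercises the low-level module.
    assert discount(100) == 90.0
    print("discount() passed under the driver")

print("checkout with stub:", checkout(100))
driver()
```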

Q72. Define Scrum technology (Portware, Infosys)

Scrum is an Agile framework in which software is developed in short, fixed-length iterations called sprints. In each sprint, the following tasks are carried out:
Analysis and Design
Elaborate user stories with detailed scenarios/ acceptance criteria
Development, Testing and Delivery for Sprint Review and Acceptance
Input to this phase will be the elaborated user stories generated earlier. ITCube developers and testers will work on the tasks identified for them.
Deliveries will be made to the Customer at the end of each Sprint.
Sprint Review and Acceptance
At the end of each Sprint, a Sprint Review meeting will be held where the ITCube team will demonstrate the features developed in the Sprint to the relevant stakeholders and customers.
The Customer will carry out acceptance testing against the acceptance criteria defined during the Sprint planning meeting and report back any issues.
ITCube will fix the issues and re-deliver the features/code to the Customer.
The Customer will re-test and accept the features.
The roles and their responsibilities in the Scrum/Agile methodology are listed below:

Product Owner (from: Customer)
• Drives the software product from the business perspective
• Defines and prioritizes requirements
• Leads the release planning meeting
• Arrives at the release date and content
• Leads sprint planning meetings
• Prioritizes the product backlog
• Makes sure that valuable requirements take priority
• Accepts the software work product at the end of each sprint

Scrum Master (from: ITCube)
• Ensures that the release planning meeting takes place
• Ensures that sprint planning meetings take place
• Conducts daily stand-up meetings
• Removes roadblocks
• Tracks progress
• Aligns team structure and processes as required from time to time
• Conducts demo meetings at the end of sprints
• Records learning from retrospective meetings
• Implements learning from previous sprints

Customer(s) (from: Customer)
• Define stories (requirements)
• Elaborate stories as and when required by developers and testers
• Define acceptance criteria
• Perform acceptance testing
• Accept stories upon completion of acceptance testing

Architect Council (from: Customer and ITCube)
• Led and driven by the Scrum Master
• Arrives at the initial application architecture
• Adjusts and refines the architecture as required
• Addresses architecture issues as required
• Defines best practices to be used in development, such as design patterns and domain-driven design
• Defines tools to be used in development
• Defines best practices to be used in testing
• Defines tools to be used in testing

Developer (Delivery Team) (from: ITCube)
• Analyzes stories to estimate effort
• Functional design
• Technical design
• Coding
• Unit testing
• Support for user acceptance testing
• Technical documentation

Tester (Delivery Team) (from: ITCube)
• Writing test cases
• System testing
• Stress testing

Stakeholders (from: Customer)
• Enable the project
• Reviews
Best practices
Given below are some of the best practices in the Scrum/Agile project execution methodology. These best practices revolve around various meetings and artifacts. ITCube will endeavor to follow as many of these as possible and may tailor them to suit the project requirements.
Best practices for meetings
As Agile promotes communication, these best practices are focused on the various meetings in Agile: Release Planning, Sprint Planning, Daily Standup, Demo, and Retrospectives.
Daily Standup
Each day during the sprint, a project status meeting occurs. This is called “the daily standup”. This meeting has specific guidelines:
The meeting starts precisely on time.
All are welcome, but only the Delivery Team may speak.
The meeting is time-boxed to a short duration, such as 15 minutes.
The meeting should happen at the same location and same time every day
During the meeting, each team member answers three questions:
What have you done since yesterday’s meeting?
What are you planning to do today?
Do you have any problems in accomplishing your goal?
Scrum Master should facilitate resolution of these problems. Typically this should occur outside the context of the Daily Standup.
Post-standup (optional)
Held each day, normally after the daily standup.
These meetings allow clusters of teams to discuss their work, focusing especially on areas of overlap and integration.
A designated person from each team attends.
The agenda will be the same as the Daily Standup, plus the following four questions:
What has your team done since we last met?
What will your team do before we meet again?
Is anything slowing your team down or getting in their way?
Are you about to put something in another team’s way?
Sprint Planning Meeting
At the beginning of each sprint, a "Sprint Planning Meeting" is held to:
Select what work is to be done
Prepare the Sprint Backlog that details the time it will take to do that work, with the entire team
Identify and communicate how much of the work is likely to be done during the current sprint
Eight-hour time limit:
(1st four hours) Product Owner + Team: dialogue for prioritizing the Product Backlog
(2nd four hours) Team only: hashing out a plan for the sprint, resulting in the sprint backlog
Sprint Review Meeting (Demo)
Review the work that was completed and not completed
Present the completed work to the stakeholders ("the demo")
Incomplete work cannot be demonstrated
Four hour time limit
Sprint Retrospective
All team members reflect on the past sprint
Make continuous process improvements
Two main questions are asked in the sprint retrospective:
What went well during the sprint?
What could be improved in the next sprint?
Three hour time limit
Best practices for Artifacts
If the team uses a tool like Rally, most of the artifacts will be stored in the tool.
Product backlog
The product backlog is a high-level document for the entire project.
Contains backlog items: broad descriptions of all required features, wish-list items, etc., prioritized by business value.
It is the "What" that will be built.
It is open and editable by anyone and contains rough estimates of both business value and development effort.
These estimates help the Product Owner to gauge the timeline and, to a limited extent, priority.
The product backlog is the property of the Product Owner. Business value is set by the Product Owner. Development effort is set by the Team.
Sprint backlog
Describes how the team is going to implement the features for the upcoming sprint
Features are broken down into tasks; as a best practice, tasks are normally estimated between four and sixteen hours of work
With this level of detail the whole team understands exactly what to do, and anyone can potentially pick a task from the list
Tasks on the sprint backlog are never assigned; rather, tasks are signed up for by the team members as needed, according to the set priority and the team member skills.
It is the property of the Team. Estimations are set by the Team. Often an accompanying Task Board is used to see and change the state of the tasks of the current sprint, like “to do”, “in progress” and “done”. Tools like Rally make this very easy.
Burn down charts
It is a publicly displayed chart showing remaining work in the sprint backlog.
Updated every day
Gives a simple view of the sprint progress.
Provides quick visualizations for reference.
Tools like Rally provide an up-to-date burn down chart at any point in time
It is NOT an earned value chart.
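For illustration only, the toy sketch below prints the same information a burn down chart plots, i.e., remaining sprint-backlog effort per day (the numbers are made up):

```python
# Toy burn down "chart": remaining sprint-backlog hours per day (made-up data).
remaining_hours = [80, 72, 65, 60, 48, 40, 30, 22, 10, 0]

for day, hours in enumerate(remaining_hours, start=1):
    bar = "#" * (hours // 4)          # crude text bar, one mark per 4 hours
    print(f"Day {day:2}: {hours:3}h {bar}")
```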

Q73. Bug Report (QC 10.00)

Summary: Wrong login name is displayed.
Description: A wrong login name is displayed on the home page.
Steps to Reproduce:
1. Enter the URL http:// ----------------- in the address bar.
2. Fill in the username field in the login form.
3. Fill in the password field in the login form.
4. Click the login button below the form.
Test data: Enter "xyz" in the username field.
Environment: e.g., Windows 7, Safari.
Expected Result: The correct logged-in name should be displayed on the home page.
Actual Result: After logging in as "xyz", a wrong login name is displayed on the home page, e.g., "Logged in as ABC".
Screenshot: Attach a screenshot showing where you detected the bug, saved in JPEG format.
When should testing be stopped?
It depends on the risks for the system being tested. There are some criteria based on which you can stop testing:
Deadlines (testing, release)
The test budget has been depleted
The bug rate falls below a certain level
Test cases are completed with a certain percentage passed
The alpha or beta testing period ends
Coverage of code, functionality, or requirements is met to a specified point
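As a rough illustration (the thresholds below are arbitrary examples, not industry standards), such criteria can be combined into a simple stop/continue check:

```python
# Sketch of a stop-testing decision; every threshold here is an arbitrary example.
def can_stop_testing(pass_rate, requirement_coverage, open_critical_bugs,
                     budget_left, days_to_deadline):
    quality_goal_met = (pass_rate >= 0.95
                        and requirement_coverage >= 0.90
                        and open_critical_bugs == 0)
    out_of_runway = budget_left <= 0 or days_to_deadline <= 0
    return quality_goal_met or out_of_runway

print(can_stop_testing(pass_rate=0.97, requirement_coverage=0.93,
                       open_critical_bugs=0, budget_left=5000,
                       days_to_deadline=3))  # -> True
```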

Q74. How do you deal with a bug that is not reproducible? (Helius, HSBC)

A bug may not be reproducible for the following reasons:

1. Low memory.
2. Addressing a non-available memory location.
3. Events happening in a particular sequence.

A tester can do the following to deal with a non-reproducible bug:

• Include steps that are close to the error statement.
• Evaluate the test environment.
• Examine and evaluate the test execution results.
• Keep resource and time constraints in mind.

Q75. What is the difference between QA and QC? (ADP, Accenture, Kofax)

Quality Assurance (QA): QA refers to the planned and systematic way of monitoring the quality of the process that is followed to produce a quality product. QA tracks the outcomes and adjusts the process to meet the expectation.

Quality Control (QC): QC is concerned with the quality of the product itself. QC finds defects and suggests improvements. The process set by QA is implemented by QC. QC is the responsibility of the tester.