
Tuesday 1 December 2015

QTP Interview Q/A

Basic Level

QTP Interview Q/A Part 1

QTP Interview Q/A Part 2

QTP Interview Q/A Part 3

QTP Interview Q/A Part 4

QTP Interview Q/A Part 5
QTP Interview Q/A Part 2

Q11. What phases are involved in testing an application in QTP?

The QuickTest Professional testing process consists of the following main phases:

    Analyzing the application: before preparing test cases, analyze the application to identify its testing needs.
    Preparing the testing infrastructure: based on those needs, create the required resources, such as a shared object repository and function libraries.
    Building test cases: create test scripts containing the actions to be performed during testing, and associate the object repository and function libraries with the test.
    Enhancing the test: broaden the scope of the test with checkpoints, and add logic and conditions for verification.
    Debugging, running and analyzing the test: debug the test so that it runs without interruption, then run it and analyze the test results generated by QTP.
    Reporting defects: log each bug in a defect report and send it to the development team.

Q12. How many types of recording modes are there in QTP?


QTP provides three recording modes:

    Normal (the default recording mode): QTP identifies objects irrespective of their location on the screen, recording objects relative to the application window.
    Analog recording: used when the exact mouse movements and mouse actions matter, for example when testing a paint application or a signature drawn with the mouse.
    Low-level recording: helps with objects that QTP does not recognize; it records operations by their exact coordinates on the screen.

Q13. What is object repository?

Object Repository: when QTP learns an object from the application, it stores that object, along with its properties, in the object repository. The repository is used to identify objects at run time. There are two types of object repository:

    Shared object repository: can be shared among multiple tests, but is read-only from within a test. It is mostly used in the keyword-driven methodology and is saved with the .tsr extension.
    Local object repository: linked to a single test. Here we can modify the repository, for example changing object properties or adding objects. It is saved with the .mtr extension.
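A shared repository can also be associated with an action at run time through QTP's RepositoriesCollection reserved object. A minimal, hedged sketch (the file path is an assumption, and the code runs only inside the QTP/UFT runtime):

```vbscript
' Hedged sketch: runs only inside QTP/UFT, not as a standalone script.
' Associate a shared object repository with the current action at run time,
' then remove all run-time associations when done.
RepositoriesCollection.Add "C:\QTP\Repositories\Login.tsr"   ' path is hypothetical
' ... steps that use objects from Login.tsr ...
RepositoriesCollection.RemoveAll
```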

Q14. Explain step generator in QTP?

The Step Generator in QTP helps create the steps to be performed on objects during testing. Uses of the Step Generator:

    Helps in debugging the script by making use of breakpoints.
    Adds steps that were missed during recording.
    Ensures that objects exist in the repository.
    Adds steps to a function library.

Q15. Explain the use of Action Split in QTP?


Action Split: used to split one action into two. There are two ways to split an action:

    Splitting into two sibling actions: both resulting actions are independent of each other.
    Splitting into parent-child nested actions: the second (child) action is called only after the parent action has executed, so the child depends on the parent.

When we split an action, QTP generates a duplicate copy of the object repository. We can add an object to either split action without it appearing in the other action's repository.

Q16. What is the purpose of loading QTP Add-Ins?


Add-ins are small programs or files that can be added to extend the capabilities of the system. The purposes of loading add-ins into QTP are the following:

    To increase the capabilities of the system.
    To improve the graphics quality and communications interface.
    To load only the required functions into memory.
    To access only those functions that are required for the execution of the script.

Q17. What is a data driven test in QTP?

Data-driven testing is an automation approach in which test input and output values are read from data files (which may include data pools). It is used when the same steps must be run with values that change over time. The data is loaded into variables in recorded or manually coded scripts. In QTP, data-driven testing is implemented through the parameterization process. When we make a test data-driven, we perform two extra steps:

    Converting the test to a data-driven test.
    Creating a corresponding data table.

Q18. How to use Parameterization in QTP?

Parameterization is the process of replacing recorded (hard-coded) values with variables that can take different values during script execution. QTP supports several ways of parameterizing, i.e. passing data:

    Using loop statements.
    Submitting test data dynamically.
    Using the data table.
    Fetching data from external files.
    Fetching data from databases.
    Taking test data from the front end (GUI).
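As a brief illustration of data-table parameterization, the hedged sketch below iterates over the Global data sheet and types each value into a text box. The column name "UserName" and the Login dialog objects are assumptions, not from the original text, and the code runs only inside the QTP/UFT runtime:

```vbscript
' Hedged sketch: assumes a Global data sheet with a "UserName" column
' and a hypothetical Login dialog; runs only inside QTP/UFT.
rowCount = DataTable.GetSheet(dtGlobalSheet).GetRowCount

For i = 1 To rowCount
    DataTable.SetCurrentRow i
    ' Read the parameterized value for this iteration
    user = DataTable.Value("UserName", dtGlobalSheet)
    Dialog("Login").WinEdit("Agent Name:").Set user
Next
```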

Q19. Is it possible to call from one action to another action in QTP?

Yes, QTP enables us to call one action from another. There are two ways of calling an action:

    Call to copy of action: a copy of the action is placed in the calling test, and we can make changes to that copy.
    Call to existing action: we call an action created earlier; this inserts a reference to the action, which is available in read-only mode. No copy of the script or data table is made.
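A call to an existing action can also be made from code with QTP's RunAction statement. In this hedged sketch the action name "Login" and its input parameter are assumptions:

```vbscript
' Hedged sketch: runs only inside QTP/UFT, and assumes a reusable
' action named "Login" that takes one input parameter.
' Run the called action for a single iteration, passing a value
' for its input parameter.
RunAction "Login", oneIteration, "test_user"
```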

Q20. Explain different types of action in QTP?

A newly generated test contains a single action. An action contains the steps to be performed on the application under test. There are three types of action in QTP:

    Non-reusable action: can be called only once, and only by the test in which it is stored.
    Reusable action: can be called multiple times by the test in which it is stored, as well as by other tests.
    External action: a reusable action stored in another test. We can call an external action, but it is available in read-only mode; we cannot make any changes to it from the calling test.

Monday 30 November 2015

QTP Interview Q/A Part 1

Q1.What is QuickTest Professional (QTP) ?

QuickTest is a graphical interface record-playback automation tool. It is able to work with any web, Java or Windows client application. QuickTest enables you to test standard web objects and ActiveX controls. In addition to these environments, QuickTest Professional also enables you to test Java applets and applications, multimedia objects, standard Windows applications, Visual Basic 6 applications and .NET framework applications...
QTP is Mercury Interactive's functional testing tool. QTP stands for QuickTest Professional.

Mercury QuickTest Professional: provides the industry's best solution for functional test and regression test automation - addressing every major software application and environment. This next-generation automated testing solution deploys the concept of Keyword-driven testing to radically simplify test creation and maintenance. Unique to QuickTest Professional’s Keyword-driven approach, test automation experts have full access to the underlying test and object properties, via an integrated scripting and debugging environment that is round-trip synchronized with the Keyword View.

QuickTest Professional enables you to test standard Windows applications, Web objects, ActiveX controls, and Visual Basic applications. You can also acquire additional QuickTest add-ins for a number of special environments (such as Java, Oracle, SAP Solutions, .NET Windows and Web Forms, Siebel, PeopleSoft, Web services, and terminal emulator applications).

Q2.What’s the basic concept of QuickTest Professional (QTP)?

QTP is based on two concepts:
* Recording
* Playback

Q3.Which scripting language used by QuickTest Professional (QTP)?

QTP uses VBScript.

Q4.How many types of recording facility are available in QuickTest Professional (QTP)?

QTP provides three types of recording methods-
* Context Recording (Normal)
* Analog Recording
* Low Level Recording

Q5.How many types of Parameters are available in QuickTest Professional (QTP)?

QTP provides three types of parameters:
* Method Argument
* Data Driven
* Dynamic

Q6.What’s the QuickTest Professional (QTP) testing process?

The QTP testing process consists of seven steps:
* Preparing to record
* Recording
* Enhancing your script
* Debugging
* Run
* Analyze
* Report Defects

Q7.How to Start recording using QuickTest Professional (QTP)?

Choose Test > Record or click the Record button.
When the Record and Run Settings dialog box opens:
1. In the Web tab, select Open the following browser when a record or run session begins.
2. In the Windows Applications tab, confirm that Record and run on these applications (opened on session start) is selected, and that there are no applications listed.

Q8. How to insert a check point to a image to check enable property in QTP?

The image checkpoint does not have any property to verify the enabled/disabled state.
Things you need to check:
* Find out from the developer whether he is showing different images for the activated/deactivated states, i.e. a greyed-out image. That is the only way a developer can show activation/deactivation if he is using an "image". Otherwise he might be using a button displayed with an image.
* If it is a button displayed with an image, you would need to use the object properties as a checkpoint.

Q9.How to Save your test using QuickTest Professional (QTP)?

Select File > Save or click the Save button. The Save dialog box opens to the Tests folder.
Create a folder which you want to save to, select it, and click Open.
Type your test name in the File name field.
Confirm that Save Active Screen files is selected.
Click Save. Your test name is displayed in the title bar of the main QuickTest window.

Q10.How to Run a Test using QuickTest Professional (QTP)?

1. Start QuickTest and open your test.

If QuickTest is not already open, choose Start > Programs > QuickTest Professional > QuickTest Professional.

* If the Welcome window opens, click Open Existing.
* If QuickTest opens without displaying the Welcome window, choose File > Open or click the Open button.
In the Open Test dialog box, locate and select your test, then click Open.

2. Confirm that all images are saved to the test results.

QuickTest allows you to determine when to save images to the test results.

Choose Tools > Options and select the Run tab. In the Save step screen capture to test results option, select Always.

Click OK to close the Options dialog box.

3. Start running your test.

Click Run or choose Test > Run. The Run dialog box opens.
Select New run results folder. Accept the default results folder name.
Click OK to close the Run dialog box.

Manual Testing Interview Q/A -Part 5

Q61. What are the key challenges of software testing? (Deloitte, HSBC)

Following are some challenges of software testing:
1. The application should be stable enough to be tested.
2. Testing is always under time constraints.
3. Understanding the requirements.
4. Domain knowledge and understanding of the business user's perspective.
5. Deciding which tests to execute first.
6. Testing the complete application.
7. Regression testing.
8. Lack of skilled testers.
9. Changing requirements.
10. Lack of resources, tools and training.

Q62. What makes a good QA or Test manager? (HSBC, Portware)

A good QA or Test manager should have the following characteristics:
• Knowledge of the software development process.
• Improves teamwork to increase productivity.
• Improves cooperation between software, test, and QA engineers.
• Improves the QA processes.
• Communication skills.
• Able to conduct meetings and keep them focused.

Q63. What is a Requirement Traceability Matrix? (HSBC, Deloitte, Accenture, Infosys)

The Requirements Traceability Matrix (RTM) is a tool to make sure that the project requirements remain the same throughout the whole development process. The RTM is used in the development process for the following reasons:
• To determine whether the developed project meets the requirements of the user.
• To trace all the requirements given by the user.
• To make sure the application requirements can be fulfilled in the verification process.

Q64. Difference between SRS, FRS and BRS (Portware, Infosys)

Distinctions among SRS, FRS and BRS, top 7:
1. SRS stands for "Software Requirement Specification"; FRS for "Functional Requirement Specification"; BRS for "Business Requirement Specification".
2. The SRS deals with resources provided by the company; the FRS deals with requirements given by the client; the BRS deals with aspects of the business requirements.
3. The SRS always includes use cases to describe the interaction with the system; the FRS and BRS do not include use cases.
4. The SRS is developed by the system analyst (it is also known as the User Requirement Specification); the FRS is developed by developers and engineers; the BRS is developed by the business analyst.
5. In the SRS, we describe all the business functionalities of the application; in the FRS, we describe the particular functionality of every single page in detail from start to end; the BRS defines what exactly the customer wants and is the document the team follows from start to end.
6. The SRS explains all the functional and non-functional requirements; the FRS explains the sequence of operations to be followed in every single process; the BRS tells the story of the whole requirements.
7. The SRS is a complete document that describes the behavior of the system to be developed; the FRS is a document that describes the functional requirements, i.e. that all functionalities of the system should be easy and efficient for the end user; the BRS is a simple document that describes the business requirements at a quite broad level.

Q65. What is a Test Plan? (Asked in almost all interviews)
A document describing the scope, approach, resources and schedule of intended test activities. It identifies, amongst others, test items, the features to be tested, the testing tasks, who will do each task, the degree of tester independence, the test environment, the test design techniques, the entry and exit criteria to be used and the rationale for their choice, and any risks requiring contingency planning. It is a record of the test planning process.
Test Plan Identifier:
Provide a unique identifier for the document. (Adhere to the Configuration Management System if you have one.)
Introduction:
Provide an overview of the test plan.
Specify the goals/objectives.
Specify any constraints.
References:
List the related documents, with links to them if available, including the following:
Project Plan
Configuration Management Plan
 Test Items:
List the test items (software/products) and their versions.
Features to be Tested:
List the features of the software/product to be tested.
Provide references to the Requirements and/or Design specifications of the features to be tested
Features Not to Be Tested:
List the features of the software/product which will not be tested.
Specify the reasons these features won’t be tested.
Approach:
Mention the overall approach to testing.
Specify the testing levels [if it’s a Master Test Plan], the testing types, and the testing methods [Manual/Automated; White Box/Black Box/Gray Box]
Item Pass/Fail Criteria:
Specify the criteria that will be used to determine whether each test item (software/product) has passed or failed testing.
Suspension Criteria and Resumption Requirements:
Specify criteria to be used to suspend the testing activity.
Specify testing activities which must be redone when testing is resumed.
Test Deliverables:
List test deliverables, and links to them if available, including the following:
Test Plan (this document itself)
Test Cases
Test Scripts
Defect/Enhancement Logs
Test Reports
 Test Environment:
Specify the properties of test environment: hardware, software, network etc.
List any testing or related tools.
Estimate:
Provide a summary of test estimates (cost or effort) and/or provide a link to the detailed estimation.
Schedule:
Provide a summary of the schedule, specifying key test milestones, and/or provide a link to the detailed schedule.
Staffing and Training Needs:
Specify staffing needs by role and required skills.
Identify training that is necessary to provide those skills, if not already acquired.
Responsibilities:
List the responsibilities of each team/role/individual.
Risks:
List the risks that have been identified.
Specify the mitigation plan and the contingency plan for each risk.
Assumptions and Dependencies:
List the assumptions that have been made during the preparation of this plan.
List the dependencies.
Approvals:
Specify the names and roles of all persons who must approve the plan.
Provide space for signatures and dates. (If the document is to be printed.)

Q66. Describe how to perform risk analysis during software testing.

Risk analysis is the process of identifying risks in the application and prioritizing them for testing. Following are some of the risks:

1. New hardware.
2. New technology.
3. New automation tool.
4. Sequence of code delivery.
5. Availability of application test resources.

We prioritize them into three categories:

• High magnitude: the bug impacts other functionality of the application.
• Medium: tolerable in the application but not desirable.
• Low: tolerable; this type of risk has no impact on the company's business.

Q67. Difference between System Testing and Acceptance Testing (Infosys, Accenture, Portware, Cigniti)

1. The user is not involved in system testing; the user is completely involved in acceptance testing.
2. System testing is not the final stage of validation; acceptance testing is the final stage of validation.
3. System testing is conducted on a whole, integrated system to evaluate the system's compliance with its specified set of requirements; acceptance testing is conducted to evaluate the system's compliance with its specified set of user requirements.
4. System testing checks how the system functions as a whole; here the functionality and performance of the system are validated. Acceptance testing is done by the developer before releasing the product, to check whether it meets the user's requirements, and by the user, to decide whether to accept the product.
5. System testing checks whether the system meets the defined specifications; acceptance testing checks whether it meets the defined user requirements.
6. System testing satisfies the developer and tester that the system meets its specifications; acceptance testing satisfies the customer with the software product.

Q68. What is Sanity Testing? Explain it with an example.
Sanity testing is surface-level testing in which the QA engineer verifies that all the menus, functions and commands available in the product and project are working fine.
Sanity testing is performed after the build has cleared the smoke test and has been accepted by the QA team for further testing; it checks the major functionality in finer detail.
When do we perform sanity testing?
Sanity testing is performed when the development team needs to know the state of the product quickly after a code change, or when there is a small, controlled code change to fix a critical issue and a stringent release time-frame does not allow complete regression testing.
Conclusion:
Sanity testing is mostly done after a retest (a retest is done after fixing a bug). Smoke testing is usually scripted; sanity testing usually is not.

Q69. Difference between Static and Dynamic Testing

Static Testing
It is done in the verification phase.
It is about prevention: "how we prevent" defects.
It is considered a cost-effective task, because defects are found early.
It is not considered a time-consuming task.
Static testing is also known as dry-run testing.
Techniques of static testing include inspections, reviews and walkthroughs.

Dynamic Testing
It is done in the validation phase.
It is about cure: "how we cure" defects.
It is considered a time-consuming task, because it requires several test cases to be executed.
It can find errors that static testing cannot find; it is a high-level exercise.
The technique of dynamic testing is actual software testing, i.e. executing the software.

Q70. What is Regression Testing?

Regression testing is conducted after any bug fix or any functionality change.
It is done to verify that the modified code does not break the existing functionality of the application and still works within the requirements of the system.
Usually the module in which the bug was fixed is tested, but during regression testing the tester also checks the entire system, to see whether the fix has any adverse effect on the existing system.
There are mostly two strategies for regression testing: 1) run all tests, or 2) run a subset of tests selected by a test-case prioritization technique.

Q71. Why do we use Stubs and Drivers? (Accenture, Kofax)

Stubs are dummy modules known as "called programs". They are used in top-down integration testing when the lower-level sub-programs are still under construction: a stub simulates a low-level module.
Drivers are also dummy modules, known as "calling programs". They are used in bottom-up integration testing when the main (higher-level) programs are still under construction: a driver simulates a high-level module.
In short, stubs act as "called" functions in top-down integration, while drivers are "calling" functions in bottom-up integration.
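A tiny sketch of the idea in plain VBScript; the PlaceOrder and CalculateTax routines and their values are purely hypothetical:

```vbscript
' Stub: stands in for an unfinished CalculateTax module (a "called program")
Function CalculateTax(amount)
    CalculateTax = 0.1 * amount   ' hard-coded simulation of the real logic
End Function

' Module under test calls the stub as if it were the real module
Function PlaceOrder(amount)
    PlaceOrder = amount + CalculateTax(amount)
End Function

' Driver: a throwaway "calling program" that exercises PlaceOrder
total = PlaceOrder(100)
If total = 110 Then
    WScript.Echo "PASS: total = " & total
Else
    WScript.Echo "FAIL: total = " & total
End If
```

Here the stub lets PlaceOrder be tested top-down before CalculateTax exists, while the driver plays the role of the not-yet-written caller, as in bottom-up integration.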

Q72.Define Scrum technology(Portware,Infosys)

In each sprint, the following tasks are carried out:
Analysis and Design
Elaborate user stories with detailed scenarios/ acceptance criteria
Development, Testing and Delivery for Sprint Review and Acceptance
Input to this phase will be the elaborated user stories generated earlier. ITCube developers and testers will work on the tasks identified for them.
Deliveries will be made to Customer at the end of Sprint.
Sprint Review and Acceptance
At the end of each Sprint, a Sprint Review meeting will be held where ITCube team will demonstrate the features developed in the Sprint to relevant stakeholders and customers.
Customer will carry out acceptance testing against the acceptance criteria defined during Sprint planning meeting and report back issues
ITCube will fix issues and re-deliver the features / code to Customer
Customer will re-test and accept the features
The roles and their responsibilities in the Scrum/Agile methodology are listed below (each role is followed by the party it comes from):

Product Owner (Customer):
Drives the software product from the business perspective
Defines and prioritizes requirements
Leads the release planning meeting
Arrives at the release date and content
Leads sprint planning meetings
Prioritizes the product backlog
Makes sure that valuable requirements take priority
Accepts the software work product at the end of each sprint

Scrum Master (ITCube):
Ensures that the release planning meeting takes place
Ensures sprint planning meetings take place
Conducts daily stand-up meetings
Removes roadblocks
Tracks progress
Aligns team structure and processes as required from time to time
Conducts demo meetings at the end of sprints
Records learning from retrospective meetings
Implements learning from previous sprints

Customer(s) (Customer):
Define stories (requirements)
Elaborate stories as and when required by developers and testers
Define acceptance criteria
Perform acceptance testing
Accept stories upon completion of acceptance testing

Architect Council (Customer and ITCube):
Led and driven by the Scrum Master
Arrives at the initial application architecture
Adjusts and refines the architecture as required
Addresses architecture issues as required
Defines best practices to be used in development, like design patterns and domain-driven design
Defines tools to be used in development
Defines best practices to be used in testing
Defines tools to be used in testing

Developer (Delivery Team, ITCube):
Analyzes stories for estimating effort
Functional design
Technical design
Coding
Unit testing
Supports user acceptance testing
Technical documentation

Tester (Delivery Team, ITCube):
Writing test cases
System testing
Stress testing

Stakeholders (Customer):
Enable the project
Reviews
Best practices
Given below are some of the best practices in Scrum/Agile project execution methodology. These best practices revolve around various meetings or artifacts. ITCube will endeavor to follow as many of these as possible. ITCube may tailor these to suit the project requirements.
Best practices for meetings
As Agile promotes communication, these best practices are focused around various meetings in Agile – Release Planning, Sprint Planning, Daily Standup, Demo and Retrospectives.
Daily Standup
Each day during the sprint, a project status meeting occurs. This is called “the daily standup”. This meeting has specific guidelines:
The meeting starts precisely on time.
All are welcome, but only Delivery Team may speak
The meeting is time-boxed to short duration like 15 minutes
The meeting should happen at the same location and same time every day
During the meeting, each team member answers three questions:
What have you done since yesterday’s meeting?
What are you planning to do today?
Do you have any problems in accomplishing your goal?
Scrum Master should facilitate resolution of these problems. Typically this should occur outside the context of the Daily Standup.
Post-standup (optional)
Held each day, normally after the daily standup.
These meetings allow clusters of teams to discuss their work, focusing especially on areas of overlap and integration.
A designated person from each team attends.
The agenda will be the same as the Daily Standup, plus the following four questions:
What has your team done since we last met?
What will your team do before we meet again?
Is anything slowing your team down or getting in their way?
Are you about to put something in another team’s way?
Sprint Planning Meeting
At the beginning of sprint, a “Sprint Planning Meeting” is held.
Select what work is to be done
Prepare the sprint backlog that details the time it will take to do that work, with the entire team
Identify and communicate how much of the work is likely to be done during the current sprint
Eight hour time limit
(1st four hours) Product Owner + Team: dialog for prioritizing the Product Backlog
(2nd four hours) Team only: hashing out a plan for the sprint, resulting in the sprint backlog
Sprint Review Meeting (Demo)
Review the work that was completed and not completed
Present the completed work to the stakeholders ( “the demo”)
Incomplete work cannot be demonstrated
Four hour time limit
Sprint Retrospective
All team members reflect on the past sprint
Make continuous process improvements
Two main questions are asked in the sprint retrospective:
What went well during the sprint?
What could be improved in the next sprint?
Three hour time limit
Best practices for Artifacts
If the team uses a tool like Rally, most of the artifacts will be stored in the tool.
Product backlog
The product backlog is a high-level document for the entire project.
Contains backlog items like broad descriptions of all required features, wish-list items, etc. prioritized by business value.
It is the “What” that will be built.
Open and editable by anyone and contains rough estimates of both business value and development effort.
These estimates help the Product Owner to gauge the timeline and, to a limited extent, priority.
The product backlog is the property of the Product Owner. Business value is set by the Product Owner. Development effort is set by the Team.
Sprint backlog
Describes how the team is going to implement the features for the upcoming sprint
Features are broken down into tasks; as a best practice, tasks are normally estimated between four and sixteen hours of work
With this level of detail the whole team understands exactly what to do, and anyone can potentially pick a task from the list
Tasks on the sprint backlog are never assigned; rather, tasks are signed up for by the team members as needed, according to the set priority and the team member skills.
It is the property of the Team. Estimations are set by the Team. Often an accompanying Task Board is used to see and change the state of the tasks of the current sprint, like “to do”, “in progress” and “done”. Tools like Rally make this very easy.
Burn down charts
It is a publicly displayed chart showing remaining work in the sprint backlog.
Updated every day
Gives a simple view of the sprint progress.
Provides quick visualizations for reference.
Tools like Rally give almost current burn down charts at any point of time
It is NOT an earned value chart.

Q72. Bug Report (QC 10.00)

Summary: Wrong login name is displayed.
Description: Wrong login name is displayed on the home page.
Steps to Reproduce:
1. Enter the URL Http:// ----------------- in the address bar.
2. Fill the username field in the login form.
3. Fill the password field in the login form.
4. Click the login button below the form.
Test data: Enter xyz in the username field.
Environment: Windows 7, Safari, etc.
Expected result: The correct logged-in name should be displayed on the home page.
Actual result: After logging in as xyz, a wrong login name is displayed on the home page, e.g. "logged in as ABC".
Screenshot: Attach a screenshot showing where you detected the bug, saved in JPEG format.

When should testing be stopped?
It depends on the risks for the system being tested. Some criteria on which you can stop testing:
Deadlines (testing, release)
The test budget has been depleted
The bug rate falls below a certain level
Test cases are completed with a certain percentage passed
Alpha or beta testing periods end
Coverage of code, functionality or requirements reaches a specified point

Q73. How to deal with a bug that is not reproducible? (Helius, HSBC)

A bug may not be reproducible for the following reasons:

1. Low memory.
2. Addressing an unavailable memory location.
3. Things happening in a particular sequence.

A tester can do the following to deal with a non-reproducible bug:

• Include steps that are close to the error condition.
• Evaluate the test environment.
• Examine and evaluate the test execution results.
• Keep resource and time constraints in mind.

Q74. What is the difference between QA and QC? (ADP, Accenture, Kofax)

Quality Assurance (QA): QA refers to the planned and systematic way of monitoring the quality of the process that is followed to produce a quality product. QA tracks the outcomes and adjusts the process to meet expectations.

Quality Control (QC): QC is concerned with the quality of the product. QC finds defects and suggests improvements. The process set by QA is implemented by QC. QC is the responsibility of the tester.

Manual Testing Interview Q/A -Part 4


Q47. What is a Requirements Traceability Matrix?


The Requirements Traceability Matrix (RTM) is a tool for making sure that every project requirement remains covered throughout the development process. An RTM is used in the development process for the following reasons:

• To determine whether the developed project meets the requirements of the user.
• To capture all the requirements given by the user.
• To make sure every requirement can be checked during the verification process.
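As a rough illustration (the requirement and test-case IDs below are invented), an RTM can be thought of as a mapping from each requirement to the test cases that cover it, which makes uncovered requirements immediately visible:

```python
# Hypothetical RTM: requirement ID -> covering test cases and status.
rtm = {
    "REQ-001": {"tests": ["TC-01", "TC-02"], "status": "Pass"},
    "REQ-002": {"tests": ["TC-03"], "status": "Fail"},
    "REQ-003": {"tests": [], "status": "Not covered"},
}

# An empty test list immediately exposes an untested requirement.
uncovered = [req for req, row in rtm.items() if not row["tests"]]
print(uncovered)  # ['REQ-003']
```

In practice the same cross-reference is usually kept in a spreadsheet or in a tool such as Quality Center, but the principle is identical.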

Q48. What is the difference between pilot and beta testing?

The differences between the two are listed below:

• A beta test takes place when the product is about to be released to end users, whereas pilot testing takes place in an earlier phase of the development cycle.
• In beta testing the application is given to a few real users to make sure it meets their requirements and contains no showstoppers, whereas in pilot testing a selected group of users exercises the system and gives feedback to improve the quality of the application.

Q49. Describe how to perform risk analysis during software testing.


Risk analysis is the process of identifying risks in the application and prioritizing them for testing. The following are some typical risks:

1. New Hardware.
2. New Technology.
3. New Automation Tool.
4. Sequence of code delivery.
5. Availability of application test resources.

We prioritize them into three categories:

• High magnitude: the bug impacts other functionality of the application.
• Medium: tolerable in the application but not desirable.
• Low: tolerable; this type of risk has no impact on the business.

Q50. What is the difference between a Master Test Plan and a test plan?


The differences between a Master Test Plan and a test plan are given below:

• The Master Test Plan covers all the testing and risk-involved areas of the application, whereas a test plan describes the scope, approach, resources and schedule of one specific test effort.
• The Master Test Plan details every individual test to be run during the overall development of the application, across the whole testing cycle (unit test, system test, beta test, etc.), whereas a test plan describes only the tests within its own scope.
• A Master Test Plan is created for large projects; when the same document is created for a small project, it is simply called a test plan.

Q51. How do you deal with a non-reproducible bug?


Ans. A bug may not be reproducible for the following reasons:

1. Low memory.
2. Access to an unavailable memory location.
3. Events happening in a particular sequence.

A tester can do the following to deal with a non-reproducible bug:

• Include steps that are close to the error statement.
• Evaluate the test environment.
• Examine and evaluate the test execution results.
• Keep resource and time constraints in mind.

Q52. What are the key challenges of software testing?


The following are some challenges of software testing:

1. The application should be stable enough to be tested.
2. Testing is always under time constraints.
3. Understanding the requirements.
4. Domain knowledge and understanding of the business user's perspective.
5. Deciding which tests to execute first.
6. Testing the complete application.
7. Regression testing.
8. Lack of skilled testers.
9. Changing requirements.
10. Lack of resources, tools and training.

Q53. What is the difference between QA, QC and software testing?

Quality Assurance (QA): QA refers to the planned and systematic way of monitoring the quality of the process that is followed to produce a quality product. QA tracks the outcomes and adjusts the process to meet expectations.

Quality Control (QC): QC is concerned with the quality of the product. QC finds defects and suggests improvements. The process set by QA is implemented by QC. QC is the responsibility of the tester.

Software Testing: the process of ensuring that the product developed meets the user requirements. The motive for testing is to find bugs and make sure they get fixed.

Q54. What is exhaustive testing?


Exhaustive testing, as the name suggests, means testing every component of the application with every possible combination of inputs. According to the principles of testing, exhaustive testing is impossible: covering all possible inputs would require an impractical amount of time and effort, leading to high cost and a delayed release of the application.
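A back-of-the-envelope sketch (the field and value counts are invented for illustration) shows why the input space explodes:

```python
# A form with 10 independent fields, each accepting a modest 100
# distinct values, already has 100**10 possible input combinations.
fields = 10
values_per_field = 100
combinations = values_per_field ** fields
print(combinations)  # 10**20 combinations

# Even at an optimistic 1,000 automated checks per second, running
# them all would take billions of years.
seconds = combinations / 1_000
years = seconds / (60 * 60 * 24 * 365)
print(f"{years:.1e} years")
```

This is why testers rely on techniques such as equivalence partitioning and boundary value analysis to pick a small, representative subset of inputs instead.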

Q55. What is Gray Box Testing?

Grey box testing is a hybrid of black box and white box testing. In grey box testing, the test engineer has some knowledge of the internals of the component (less than in white box testing) and designs test cases or test data based on that system knowledge. The application under test is still treated as a black box, and the tester tests it from the outside.

Q56. What is Scalability Testing?

Scalability testing is performed to evaluate, and then enhance, the functional and performance capabilities of an application so that it can meet the requirements of a growing number of end users. Scalability is measured by evaluating the application's performance under load and stress conditions; based on this evaluation, the application's capabilities are improved and enhanced.


Q57. Can you define test driver and test stub?

• A stub is called from the software component being tested. It is used in the top-down approach.
• A driver calls the component to be tested. It is used in the bottom-up approach.
• Both the test stub and the test driver are dummy software components.

We need test stubs and test drivers for the following reason:

• Suppose we want to test the interface between modules A and B, and only module A has been developed. We cannot test module A on its own, but if a dummy module B (a stub) is prepared, we can use it to test module A.
• Conversely, if module B exists but module A does not, module B cannot send or receive data by itself; some external component must call it and pass data to it. That external component is called a driver.
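A minimal Python sketch of the idea (the payment and tax modules here are hypothetical): the stub stands in for a missing called module, and the driver stands in for a missing calling module:

```python
def tax_service_stub(amount):
    """Stub standing in for the unfinished tax module: returns a
    canned value instead of performing a real calculation."""
    return 0.10 * amount

def calculate_total(amount, tax_fn):
    """Module A under test; it calls whatever tax function it is given."""
    return amount + tax_fn(amount)

# Top-down: test module A against the stub for the missing module B.
assert calculate_total(100, tax_service_stub) == 110.0

def driver_for_tax_module(tax_fn):
    """Driver: a throwaway caller that feeds inputs to the tax module
    and checks its outputs, standing in for the missing module A."""
    for amount, expected in [(0, 0.0), (100, 10.0)]:
        assert tax_fn(amount) == expected

driver_for_tax_module(tax_service_stub)  # bottom-up style check
```

In real unit-testing frameworks the same roles are usually played by mock objects and test harnesses, but the stub/driver distinction is the same.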

Q58. What is good design?


Design refers to functional design or internal design. Good internal design is indicated by software code whose overall structure is clear, understandable, easily modifiable, and maintainable; is robust with sufficient error-handling and status logging capability, and works correctly when implemented. Good functional design is indicated by an application whose functionality can be traced back to customer and end-user requirements.

Q59. What makes a good QA or Test manager?

A good QA or test manager should have the following characteristics:

• Knowledge of the software development process
• Ability to improve teamwork and increase productivity
• Ability to improve cooperation between software, test, and QA engineers
• Drive to improve the QA processes
• Communication skills
• Ability to conduct meetings and keep them focused

Q60. How does a client or server environment affect testing?


Many environmental factors affect testing, such as data transfer speed, hardware, and servers. When working with client/server technologies, testing will be extensive. When we have a time limit, we do integration testing. In most cases we prefer load, stress and performance testing to examine the capabilities of the application in the client/server environment.

Friday 16 October 2015

Manual Testing Interview Q/A - Part 3


41. On what basis do we assign priority and severity for a bug? Give one example each of high priority with low severity and high severity with low priority.
Priority is usually assigned by the team lead or business analyst; severity is assigned by the reporter of the bug. For example, High severity: hardware bugs, application crashes. Low severity: user interface bugs. High priority: error message not appearing at the right time, calculation bugs, etc. Low priority: wrong alignment, etc.

42. What do you mean by reproducing a bug? If the bug is not reproducible, what is the next step?
If you find a defect, for example you click a button and the corresponding action does not happen, it is a bug. If the developer is unable to observe this behaviour, he will ask us to reproduce the bug. In another scenario, if the client reports a defect in production, we will have to reproduce it in the test environment.
If the bug is not reproducible by the developer, the bug is assigned back to the reporter, or a meeting (formal, or informal like a walkthrough) is arranged in order to reproduce it. Sometimes bugs are inconsistent; in that case we can mark the bug as inconsistent and temporarily close it with the status "working fine now".

43. What is the responsibility of a tester when a bug arrives at the time of testing? Explain.
First check the status of the bug, then check whether the bug is valid or not, then forward the bug to the team lead, and after confirmation forward it to the concerned developer.
If we cannot reproduce it, it is not reproducible; in that case we will do further testing around it, and if we still cannot see it we will close it, and just hope it never comes back again.

44. How can we design test cases from requirements? Do the requirements represent the exact functionality of UAT?
Of course, the requirements should represent the exact functionality of UAT.
First of all, analyze the requirements very thoroughly in terms of functionality. Then choose a suitable test case design technique for writing the test cases [black box design techniques such as specification-based test cases, functional test cases, Equivalence Class Partitioning (ECP), Boundary Value Analysis (BVA), error guessing and cause-effect graphing].
Using these techniques you should design test cases that have the capability of finding defects.
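As an illustration of one of these techniques, here is a hedged sketch of Boundary Value Analysis for a hypothetical age field that must accept values from 18 to 60 inclusive:

```python
def bva_values(lo, hi):
    """Classic BVA picks values at, just inside, and just outside
    each boundary of a valid range."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

def is_valid_age(age):
    """The validation rule under test (made up for this example)."""
    return 18 <= age <= 60

cases = bva_values(18, 60)  # [17, 18, 19, 59, 60, 61]
expected = [False, True, True, True, True, False]
assert [is_valid_age(a) for a in cases] == expected
```

Six well-chosen boundary values exercise the range far more effectively than dozens of arbitrary mid-range inputs.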

45. How to launch the test cases in Quality Centre (Test Director) and where it is saved?
You create the test cases in the test plan tab and link them to the requirements in the requirement tab. Once the test cases are ready we change the status to ready and go to the “Test Lab” Tab and create a test set and add the test cases to the test set and you can run from there.
For automation, in test plan, create a new automated test and launch the tool and create the script and save it and you can run from the test lab the same way as you did for the manual test cases.
The test cases are stored in the test plan tab, or more precisely in TestDirector's (now called Quality Center's) database.

46. How is the traceability of a bug followed?
The traceability of a bug can be followed in many ways:
1. Mapping the functional requirement scenarios(FS Doc) - test cases (ID) - Failed test cases(Bugs)
2. Mapping between requirements(RS Doc) - Test case (ID) - Failed test cases.
3. Mapping between test plan (TP Doc) - test case (ID) - failed test cases.
4. Mapping between business requirements (BR Doc) - test cases (ID) - Failed test cases.
5. Mapping between high level design(Design Doc) - test cases (ID) - Failed test cases.
Usually the traceability matrix is mapping between the requirements, client requirements, function specification, test plan and test cases.

47. What is the difference between a use case, a test case and a test plan?
Use Case: Prepared by the business analyst in the Functional Requirement Specification (FRS); it captures the steps given by the customer.
Test cases: Prepared by the test engineer based on the use cases from the FRS, to check the functionality of an application thoroughly.
Test Plan: The team lead prepares the test plan; in it he describes the scope of the test, what to test and what not to test, scheduling, what to test using automation, etc.

Manual Testing Interview Q/A - Part 2


21. What is the difference between QA and testing?
The goals of QA are very different from the goals of testing. The purpose of QA is to prevent errors in the application, while the purpose of testing is to find errors.

22. What is the difference between Quality Control and Quality Assurance?
Quality control (QC) and quality assurance (QA) are closely linked but are very different concepts. While QC evaluates a developed product, the purpose of QA is to ensure that the development process is at a level that makes certain that the system or application will meet the requirements.

23. What is the difference between regression testing and retesting?
Regression testing is performing tests to ensure that modifications to a module or system do not have a negative effect on previously working functionality. Retesting is running the same test again, typically to verify that a reported defect has been fixed. Regression testing is a widely asked manual-testing interview topic, so further research to understand it is worthwhile.

24. Explain the difference between bug severity and bug priority.
Bug severity refers to the level of impact that the bug has on the application or system while bug priority refers to the level of urgency in the need for a fix.

25. What is the difference between system testing and integration testing?
For system testing, the entire system as a whole is checked, whereas for integration testing, the interaction between the individual modules are tested.

26. Explain the term bug.
A bug is an error found while running a program. Bugs fall into two categories: logical and syntax.

27. Explain the difference between functional and structural testing.
Functional testing is considered to be behavioral or black box testing in which the tester verifies that the system or application functions according to specification.  Structural testing on the other hand is based on the code or algorithms and is considered to be white box testing.

28. Define defect density.
Defect density is the number of defects found divided by the size of the code, typically expressed as defects per thousand lines of code (KLOC).
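For example, using the common defects-per-KLOC convention (the numbers below are invented):

```python
# 30 defects found in a 12,000-line module.
defects = 30
lines_of_code = 12_000
density = defects / (lines_of_code / 1000)  # defects per KLOC
print(density)  # 2.5 defects per KLOC
```

Comparing the density of different modules, rather than raw defect counts, highlights which modules are disproportionately buggy for their size.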

29. When is a test considered to be successful?
The purpose of testing is to ensure that the application operates according to the requirements and to discover as many errors and bugs as possible.  This means that tests that cover more functionality and expose more errors are considered to be the most successful.

30. What good bug tracking systems have you used?
This is a simple interview question about your experience with bug tracking. Provide the system or systems that you are most familiar with, if any at all. It would also be good to compare the pros and cons of several if you have experience. Bug tracking is central to the testing process, and questions about it come up in almost every manual-testing interview.

31. In which phase should testing begin – requirements, planning, design, or coding?
Testing should begin as early as the requirements phase.

32. Can you test a program and find 100% of the errors?
It is impossible to find all errors in an application, mostly because there is no way to calculate how many errors exist. Many factors are involved in such a calculation, such as the complexity of the program, the experience of the programmer, and so on. This is considered one of the trickier manual-testing interview questions.

33. What is the difference between debugging and testing?
The main difference between debugging and testing is that debugging is typically conducted by a developer who also fixes errors during the debugging phase.  Testing on the other hand, finds errors rather than fixes them.  When a tester finds a bug, they usually report it so that a developer can fix it.

34. How should testing be conducted?
Testing should be conducted based on the technical requirements of the application.

35. What is considered to be a good test?
Testing that covers most of the functionality of an object or system is considered to be a good test.

36. What is the difference between top-down and bottom-up testing?
Top-Down testing begins with the system and works its way down to the unit level.  Bottom-up testing checks in the opposite direction, unit level to interface to overall system. Both have value but bottom-up testing usually aids in discovering defects earlier in the development cycle, when the cost to fix errors is lower.

37. Explain how to develop a test plan and a test case.
A test plan consists of a set of test cases. Test cases are developed based on requirement and design documents for the application or system. Once these documents are thoroughly reviewed, the test cases that will make up the test plan can be created.

38. What is the role of quality assurance in a product development lifecycle?
Quality assurance should be involved very early on in the development life cycle so that they can have a better understanding of the system and create sufficient test cases. However, QA should be separated from the development team so that the team is not able to build influence on the QA engineers.

39. What is the average size of executables that you have created?
This is a simple interview question about your experience with executables. If you know the size of any that you have created, simply provide this information.

40. What versions of Oracle are you familiar with?
This is an interview question about experience. Simply provide the versions of the software that you have experience with.

Manual Testing Interview Q/A

Manual Testing Interview Q/A-Part 1


1. What are the components of an SRS?
An SRS contains the following basic components:
Introduction
Overall Description
External Interface Requirements
System Requirements
System Features

2. What is the difference between a test plan and a QA plan?
A test plan lays out what is to be done to test the product and includes how quality control will work to identify errors and defects.  A QA plan on the other hand is more concerned with prevention of errors and defects rather than testing and fixing them.

3. How do you test an application if the requirements are not available?
If requirements documentation is not available for an application, a test plan can be written based on assumptions made about the application.  Assumptions that are made should be well documented in the test plan.

4. What is a peer review?
Peer reviews are reviews conducted among people that work on the same team.  For example, a test case that was written by one QA engineer may be reviewed by a developer and/or another QA engineer.

5. How can you tell when enough test cases have been created to adequately test a system or module?
You can tell that enough test cases have been created when there is at least one test case to cover every requirement.  This ensures that all designed features of the application are being tested.

6. Who approves test cases?
The approver of test cases varies from one organization to the next. In some organizations the QA lead approves the test cases; in others they are approved as part of peer reviews.

7. Give an example of what can be done when a bug is found.
When a bug is found, it is a good idea to run more tests to be sure that the problem witnessed can be clearly detailed. For example, let's say a test case fails when Animal=Dog. A tester should run more tests to be sure that the same problem doesn't exist with Animal=Cow. Once the tester is sure of the full scope of the bug, it can be documented and adequately reported.

8. Who writes test plans and test cases?
Test plans are typically written by the quality assurance lead while testers usually write test cases.

9. Is quality assurance and testing the same?
Quality assurance and testing are not the same. Testing is considered to be a subset of QA. QA should be incorporated throughout the software development life cycle, while testing is the phase that occurs after the coding phase.

10. What is a negative test case?
Negative test cases are created based on the idea of testing in a destructive manner.  For example, testing what will happen if inappropriate inputs are entered into the application.
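A small Python sketch of the idea (validate_age is a made-up function standing in for the application): each inappropriate input should be rejected rather than accepted.

```python
def validate_age(value):
    """Toy input validator: accepts only integer ages 0..150."""
    if not isinstance(value, int) or isinstance(value, bool):
        raise ValueError("age must be an integer")
    if not 0 <= value <= 150:
        raise ValueError("age out of range")
    return value

# Negative cases: each of these should raise, not succeed.
for bad in ["abc", -1, 999, None, 3.5]:
    try:
        validate_age(bad)
    except ValueError:
        pass  # expected rejection
    else:
        raise AssertionError(f"accepted invalid input: {bad!r}")
```

A negative test passes when the application refuses the bad input gracefully; a crash or silent acceptance is a defect.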

11. If an application is in production, and one module of code is modified, is it necessary to retest just that module or should all of the other modules be tested as well?
It is a good idea to perform regression testing and to check all of the other modules as well.  At the least, system testing should be performed.

12. What should be included in a test strategy? 
The test strategy includes a plan for how to test the application and exactly what will be tested (user interface, modules, processes, etc.).  It establishes limits for testing and indicates whether manual or automated testing will be used.

13. What can be done to develop a test for a system if there are no functional specifications or any system and development documents?
When there are no functional specifications or system development documents, the tester should familiarize themselves with the product and the code.  It may also be helpful to perform research to find similar products on the market.

14. What are the functional testing types?
The following are the types of functional testing:
Compatibility
Configuration
Error handling
Functionality
Input domain
Installation
Inter-systems
Recovery

15. What is the difference between sanity testing and smoke testing?
When sanity testing is conducted, the product is sent through a preliminary round of testing with the test group in order to check the basic functionality such as button functionality.  Smoke testing, on the other hand is conducted by developers based on the requirements of the client.

16. Explain random testing.
Random testing involves checking how the application handles input data that is generated at random. Data types are typically ignored, and a random sequence of letters, numbers, and other characters is entered into the data field.
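A minimal sketch of random input generation (field_under_test is a stand-in for the real application code, which should handle arbitrary input without crashing):

```python
import random
import string

def random_input(max_len=20, seed=None):
    """Build a random string of letters, digits and punctuation."""
    rng = random.Random(seed)
    chars = string.ascii_letters + string.digits + string.punctuation
    n = rng.randint(0, max_len)
    return "".join(rng.choice(chars) for _ in range(n))

def field_under_test(text):
    """Stand-in for the application code; it must not raise."""
    return text.strip().lower()

# Hammer the field with random data; seeding makes failures replayable.
for i in range(100):
    field_under_test(random_input(seed=i))
```

Seeding the generator is the important practical detail: when a random input exposes a bug, the same seed reproduces it exactly.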

17. Define smoke testing.
Smoke testing is a form of software testing that is not exhaustive; it checks only the most crucial components of the software without going into detail.
 
18. What steps are involved in sanity testing?
Sanity testing is very similar to smoke testing. It is the initial testing of a component or application that is done to make sure that it is functioning at the most basic level and it is stable enough to continue more detailed testing.

19. What is the difference between WinRunner and Rational Robot?
WinRunner is a functional test tool but Rational Robot is capable of both functional and performance testing. Also, WinRunner has 4 verification points and Rational Robot has 13 verification points.

20. What is the purpose of the testing process?
The purpose of the testing process is to verify that input data produces the anticipated output.