Monday 30 November 2015

Manual Testing Interview Q/A - Part 4


Q47. What is Requirement Traceability Matrix?


The Requirements Traceability Matrix (RTM) is a document that maps user requirements to the test cases that cover them, ensuring that the project requirements remain consistent throughout the development process. The RTM is used in the development process for the following reasons:

• To determine whether the developed project meets the user's requirements.
• To capture all the requirements given by the user.
• To make sure every requirement can be verified during testing.
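
As an illustrative sketch (the requirement and test case IDs below are made up), an RTM can be modeled as a simple mapping from requirements to the test cases that cover them, which makes uncovered requirements easy to spot:

```python
# Hypothetical requirement IDs and test case IDs, for illustration only.
rtm = {
    "REQ-001": ["TC-01", "TC-02"],   # login requirement -> its test cases
    "REQ-002": ["TC-03"],            # password-reset requirement
    "REQ-003": [],                   # not yet covered by any test case
}

# Any requirement with no linked test case is a coverage gap.
uncovered = [req for req, tests in rtm.items() if not tests]
print(uncovered)  # ['REQ-003']
```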

Q48. What is the difference between pilot and beta testing?

The differences between these two are listed below:

• A beta test takes place when the product is about to be released to end users, whereas pilot testing takes place earlier in the development cycle.
• In beta testing, the application is given to a few real users to make sure it meets their requirements and contains no showstoppers, whereas in pilot testing a selected group of users exercises the system and gives feedback to improve the quality of the application.

Q49. Describe how to perform risk analysis during software testing.


Risk analysis is the process of identifying risks in the application and prioritizing them for testing. The following are some typical risks:

1. New Hardware.
2. New Technology.
3. New Automation Tool.
4. Sequence of code delivery.
5. Availability of application test resources.

We prioritize them into three categories:

• High magnitude: the bug impacts other functionality of the application.
• Medium: tolerable in the application, but not desirable.
• Low: tolerable; this type of risk has no impact on the company's business.
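
A minimal sketch of this prioritization (the risk names and ratings are illustrative, not from any real project): model the register as a list and sort it so high-magnitude risks are tested first.

```python
# Hypothetical risk register; names and ratings are for illustration only.
risks = [
    {"name": "New automation tool", "magnitude": "Medium"},
    {"name": "New hardware", "magnitude": "High"},
    {"name": "Sequence of code delivery", "magnitude": "Low"},
]

# Sort so High-magnitude risks come first in the test order.
order = {"High": 0, "Medium": 1, "Low": 2}
risks.sort(key=lambda r: order[r["magnitude"]])
print([r["name"] for r in risks])
# ['New hardware', 'New automation tool', 'Sequence of code delivery']
```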

Q50. What is the difference between a Master Test Plan and a Test Plan?


The differences between a Master Test Plan and a Test Plan are given below:

• A Master Test Plan covers all the testing planned for the application and the risk areas involved, whereas a test plan describes the scope, approach, resources, and schedule of one specific testing effort.
• A Master Test Plan lists every test to be run during the overall development of the application (unit test, system test, beta test, etc.), whereas a test plan describes only the test cases for its own testing cycle.
• A Master Test Plan is created for large projects; when the same document is created for a small project, we simply call it a test plan.

Q51. How do you deal with a bug that is not reproducible?


A bug may not be reproducible for the following reasons:

1. Low memory.
2. Access to a memory location that is not available.
3. Events happening only in a particular sequence.

A tester can do the following things to deal with a non-reproducible bug:

• Record steps that are as close as possible to the error condition.
• Evaluate the test environment.
• Examine and evaluate the test execution results.
• Keep resource and time constraints in mind.

Q52. What are the key challenges of software testing?


The following are some challenges of software testing:

1. The application should be stable enough to be tested.
2. Testing is always done under time constraints.
3. Understanding the requirements.
4. Domain knowledge and understanding of the business user's perspective.
5. Deciding which tests to execute first.
6. Testing the complete application.
7. Regression testing.
8. Lack of skilled testers.
9. Changing requirements.
10. Lack of resources, tools and training.

Q53. What is the difference between QA, QC and Software Testing?

Quality Assurance (QA): QA refers to the planned and systematic way of monitoring the quality of the process followed to produce a quality product. QA tracks the outcomes and adjusts the process to meet expectations.

Quality Control (QC): QC is concerned with the quality of the product. QC finds defects and suggests improvements. The process set by QA is implemented by QC. QC is the responsibility of the tester.

Software Testing: the process of ensuring that the product developed by the developer meets the user requirements. The motive of testing is to find bugs and make sure they get fixed.

Q54. What is Exhaustive Testing?


Exhaustive testing, as the name suggests, means testing every component of the application with every possible combination of inputs. According to the principles of testing, exhaustive testing is impossible, because testing all possible inputs would require enormous time and effort, leading to high cost and delay in the release of the application.
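
A quick back-of-the-envelope calculation shows why this is impractical. Even a tiny form (the field sizes below are assumed purely for illustration) explodes combinatorially:

```python
# Suppose a form has three fields, each accepting one of these value counts
# (illustrative numbers, not from any real application):
field_value_counts = [1000, 500, 200]  # e.g. name, city, product code

total_inputs = 1
for n in field_value_counts:
    total_inputs *= n

# 1000 * 500 * 200 = 100,000,000 distinct input combinations,
# far too many to test one by one.
print(total_inputs)  # 100000000
```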

Q55. What is Gray Box Testing?

Gray box testing is a hybrid of black box and white box testing. In gray box testing, the test engineer has partial knowledge of the internal code of the component and designs test cases or test data based on that system knowledge. The tester's knowledge of the code is less than in white box testing; based on this limited knowledge the test cases are designed, and the application under test is then treated as a black box and tested from the outside.

Q56. What is Scalability Testing?

Scalability testing is performed to check whether the functional and performance capabilities of the application can scale up to meet the requirements of the end users. Scalability is measured by evaluating the application's performance under load and stress conditions; depending on this evaluation, the capabilities of the application are improved and enhanced.


Q57. Can you define test driver and test stub?

• A stub is called from the software component to be tested. It is used in the top-down approach.
• A driver calls the component to be tested. It is used in the bottom-up approach.
• Both test stub and test driver are dummy software components.

We need test stubs and test drivers for the following reasons:

• Suppose we want to test the interface between modules A and B and only module A has been developed. We cannot test module A on its own, but if a dummy module is prepared to stand in for B, we can test module A against it. This dummy module is the stub.
• Conversely, if only module B exists, nothing can send data to it or receive data from it directly. In this case we transfer data to the module through an external dummy component that calls it. This external component is called a driver.
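
A minimal sketch of both ideas (the module names and behaviors are invented for illustration):

```python
# Top-down: module A (developed) calls module B (not yet developed),
# so a *stub* stands in for B while A is tested.
def module_b_stub(data):
    # Dummy implementation: returns a canned response instead of real logic.
    return "stubbed-response"

def module_a(data):
    # Real code under test; it calls B through the stub.
    return module_b_stub(data).upper()

assert module_a("order-42") == "STUBBED-RESPONSE"

# Bottom-up: module B (developed) has no caller yet,
# so a *driver* invokes B and feeds it test data.
def module_b(data):
    # Real code under test.
    return data[::-1]

def driver():
    # Dummy caller that exercises module B with sample input.
    return module_b("abc")

assert driver() == "cba"
```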

Q58.What is good design?


Design refers to functional design or internal design. Good internal design is indicated by software code whose overall structure is clear, understandable, easily modifiable, and maintainable; that is robust, with sufficient error-handling and status-logging capability; and that works correctly when implemented. Good functional design is indicated by an application whose functionality can be traced back to customer and end-user requirements.

Q59. What makes a good QA or Test manager?

A good QA or Test manager should have following characteristics:

• Knowledge of the software development process
• Ability to improve teamwork and increase productivity
• Ability to improve cooperation between software, test, and QA engineers
• Commitment to improving the QA processes
• Communication skills
• Ability to conduct meetings and keep them focused

Q60. How does a client or server environment affect testing?


Many environmental factors affect testing, such as data transfer speed, hardware, and server configuration. When working with client/server technologies, testing is extensive. When we have a time limit, we do integration testing. In most cases we prefer load, stress, and performance testing to examine the capabilities of the application in the client/server environment.

Tuesday 20 October 2015

Introduction to Apache JMeter Performance Testing

Apache JMeter is a 100% pure Java application designed to load test client/server software (such as a web application). It may be used to test performance both on static and dynamic resources such as static files, Java Servlets, ASP.NET, PHP, CGI scripts, Java objects, databases, FTP servers, and more. JMeter can be used to simulate a heavy load on a server, network or object to test its strength or to analyze overall performance under different load types.

Additionally, JMeter can help you regression test your application by letting you create test scripts with assertions to validate that your application is returning the results you expect. For maximum flexibility, JMeter lets you create these assertions using regular expressions.

But please note that JMeter is not a browser; it works at the protocol level.


Why use JMeter?

  • Open source license: being open-source software, it is freely available.
  • Friendly GUI: it has a very simple GUI.
  • Multi-protocol support: JMeter can conduct load and performance tests for many different server types: Web (HTTP, HTTPS), SOAP, databases via JDBC, LDAP, JMS, mail (POP3), etc.
  • Platform independent: it is a platform-independent tool. On Linux/Unix, JMeter can be invoked by running the JMeter shell script; on Windows, by starting the jmeter.bat file.
  • It has full Swing and lightweight component support.
  • JMeter stores its test plans in XML format, which means you can generate a test plan with a text editor.
  • Its full multi-threading framework allows concurrent sampling by many threads and simultaneous sampling of different functions by separate thread groups.
  • It can also be used to perform automated and functional testing of applications.
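
The thread-group idea can be sketched outside JMeter as well. The toy Python below (an illustration of the concept, not JMeter's actual Java implementation) runs several concurrent "virtual users" that each take one timed sample of a stand-in workload:

```python
import threading
import time

results = []
lock = threading.Lock()

def sample(thread_id):
    # Stand-in for one sampler hit (e.g. an HTTP request in JMeter).
    start = time.perf_counter()
    time.sleep(0.01)  # simulated server response time
    elapsed = time.perf_counter() - start
    with lock:
        results.append((thread_id, elapsed))

# A "thread group" of 5 concurrent virtual users.
threads = [threading.Thread(target=sample, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(results))  # 5 samples collected
```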


Performance Testing Tool

1. Apache Jmeter

2. LoadRunner

Jmeter Tutorials

Topics:- 

  1. Introduction to Apache JMeter Performance Testing
  2. How to install Jmeter in easy steps
  3. Complete Element reference for Jmeter
  4. Hands on with JMeter GUI
  5. Performance Testing using Jmeter
  6. How to use Timers in Jmeter
  7. How to use Assertions in JMeter
  8. How to use Controllers in JMeter
  9. How to use Processor in JMeter
  10. How to perform Distributed Testing in JMeter
  11. Best Practices for your Jmeter Tests
  12. How To Use Jmeter for HTTP Proxy Server Testing

Friday 16 October 2015

Manual Testing Interview Q/A - Part 3

41. On what basis do we assign priority and severity to a bug? Give one example each of high priority with low severity and high severity with low priority.
Priority is always assigned by the team leader or business analyst; severity is assigned by the reporter of the bug. For example, high severity: hardware bugs, application crashes. Low severity: user interface bugs. High priority: an error message not appearing on time, calculation bugs, etc. Low priority: wrong alignment, etc.

42. What do you mean by reproducing the bug? If the bug was not reproducible, what is the next step?
If you find a defect, for example, you click a button and the corresponding action doesn't happen, it is a bug. If the developer is unable to observe this behaviour, he will ask us to reproduce the bug. In another scenario, if the client reports a defect in production, we will have to reproduce it in the test environment.
If the bug is not reproducible by the developer, it is assigned back to the reporter, or a meeting (formal or informal, like a walkthrough) is arranged in order to reproduce it. Sometimes bugs are inconsistent; in that case we can mark the bug as inconsistent and temporarily close it with the status "working fine now".

43. What is the responsibility of a tester when a bug arrives at the time of testing? Explain.
First check the status of the bug, then check whether the bug is valid, then forward it to the team leader and, after confirmation, to the concerned developer.
If we cannot reproduce it, it is not reproducible, in which case we do further testing around it; if we still cannot see it, we close it and hope it never comes back.

44. How can we design test cases from requirements? Do the requirements represent the exact functionality of UAT?
Of course, requirements should represent the exact functionality for UAT.
First of all, analyze the requirements very thoroughly in terms of functionality. Then choose suitable test case design techniques for writing the test cases [black box design techniques such as specification-based test cases, functional test cases, Equivalence Class Partitioning (ECP), Boundary Value Analysis (BVA), error guessing and cause-effect graphing].
Using these techniques, you should design test cases capable of finding defects.
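
For instance, applying Boundary Value Analysis to a hypothetical "age must be 18 to 60" requirement (the range and validator below are invented for illustration) yields test inputs at and around each boundary:

```python
def boundary_values(low, high):
    # Classic BVA picks: just below, at, and just above each boundary.
    return [low - 1, low, low + 1, high - 1, high, high + 1]

# Hypothetical requirement: valid age is 18..60 inclusive.
def is_valid_age(age):
    return 18 <= age <= 60

for age in boundary_values(18, 60):
    print(age, is_valid_age(age))
# 17 False, 18 True, 19 True, 59 True, 60 True, 61 False
```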

45. How do you launch test cases in Quality Center (Test Director), and where are they saved?
You create the test cases in the Test Plan tab and link them to the requirements in the Requirements tab. Once the test cases are ready, change their status to Ready, go to the Test Lab tab, create a test set, add the test cases to it, and run them from there.
For automation, create a new automated test in the Test Plan tab, launch the tool, create and save the script, and then run it from the Test Lab in the same way as the manual test cases.
The test cases are stored in the Test Plan tab, or more precisely in Quality Center's database. (Test Director is now referred to as Quality Center.)

46. How is the traceability of a bug followed?
The traceability of a bug can be followed in many ways:
1. Mapping the functional requirement scenarios (FS Doc) - test cases (ID) - failed test cases (bugs).
2. Mapping between requirements (RS Doc) - test cases (ID) - failed test cases.
3. Mapping between the test plan (TP Doc) - test cases (ID) - failed test cases.
4. Mapping between business requirements (BR Doc) - test cases (ID) - failed test cases.
5. Mapping between high-level design (Design Doc) - test cases (ID) - failed test cases.
Usually the traceability matrix is a mapping between the client requirements, functional specification, test plan and test cases.

47. What is the difference between a use case, a test case and a test plan?
Use Case: prepared by the business analyst in the Functional Requirement Specification (FRS); it is nothing but the steps given by the customer.
Test cases: prepared by the test engineer based on the use cases from the FRS to check the functionality of the application thoroughly.
Test Plan: the team lead prepares the test plan; in it he describes the scope of testing (what to test and what not to test), scheduling, what to test using automation, etc.

Manual Testing Interview Q/A - Part 2

21. What is the difference between QA and testing?
The goals of QA are very different from the goals of testing. The purpose of QA is to prevent errors in the application, while the purpose of testing is to find errors.

22. What is the difference between Quality Control and Quality Assurance?
Quality control (QC) and quality assurance (QA) are closely linked but are very different concepts. While QC evaluates a developed product, the purpose of QA is to ensure that the development process is at a level that makes certain that the system or application will meet the requirements.

23. What is the difference between regression testing and retesting?
Regression testing is performing tests to ensure that modifications to a module or system do not have a negative effect on previous releases. Retesting is merely running the same test again. Regression testing is a widely asked manual testing interview topic, so further research into it is worthwhile.

24. Explain the difference between bug severity and bug priority.
Bug severity refers to the level of impact that the bug has on the application or system while bug priority refers to the level of urgency in the need for a fix.

25. What is the difference between system testing and integration testing?
For system testing, the entire system as a whole is checked, whereas for integration testing, the interaction between the individual modules are tested.

26. Explain the term bug.
A bug is an error found while running a program. Bugs fall into two categories: logical and syntax.

27. Explain the difference between functional and structural testing.
Functional testing is considered to be behavioral or black box testing in which the tester verifies that the system or application functions according to specification.  Structural testing on the other hand is based on the code or algorithms and is considered to be white box testing.

28. Define defect density.
Defect density is the total number of defects per lines of code, commonly normalized per thousand lines of code (KLOC).
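
A quick worked example (the defect and line counts are made up): normalizing per KLOC keeps the metric comparable across modules of different sizes.

```python
def defect_density(defects, lines_of_code):
    # Conventionally reported per 1000 lines of code (KLOC).
    return defects / (lines_of_code / 1000)

# Illustrative numbers: 30 defects found in a 15,000-line module.
print(defect_density(30, 15_000))  # 2.0 defects per KLOC
```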

29. When is a test considered to be successful?
The purpose of testing is to ensure that the application operates according to the requirements and to discover as many errors and bugs as possible.  This means that tests that cover more functionality and expose more errors are considered to be the most successful.

30. What good bug tracking systems have you used?
This is a simple interview question about your experience with bug tracking. Provide the system or systems you are most familiar with, if any at all. It would also be good to compare the pros and cons of several if you have experience with them. Bug tracking is the essence of the testing process and is a must-ask manual testing interview question. Do not forget this.

31. In which phase should testing begin – requirements, planning, design, or coding?
Testing should begin as early as the requirements phase.

32. Can you test a program and find 100% of the errors?
It is impossible to find all errors in an application, mostly because there is no way to calculate how many errors exist. Many factors are involved in such a calculation, such as the complexity of the program, the experience of the programmer, and so on. This is considered one of the trickiest manual testing interview questions.

33. What is the difference between debugging and testing?
The main difference between debugging and testing is that debugging is typically conducted by a developer who also fixes errors during the debugging phase.  Testing on the other hand, finds errors rather than fixes them.  When a tester finds a bug, they usually report it so that a developer can fix it.

34. How should testing be conducted?
Testing should be conducted based on the technical requirements of the application.

35. What is considered to be a good test?
Testing that covers most of the functionality of an object or system is considered to be a good test.

36. What is the difference between top-down and bottom-up testing?
Top-Down testing begins with the system and works its way down to the unit level.  Bottom-up testing checks in the opposite direction, unit level to interface to overall system. Both have value but bottom-up testing usually aids in discovering defects earlier in the development cycle, when the cost to fix errors is lower.

37. Explain how to develop a test plan and a test case.
A test plan consists of a set of test cases. Test cases are developed based on requirement and design documents for the application or system. Once these documents are thoroughly reviewed, the test cases that will make up the test plan can be created.

38. What is the role of quality assurance in a product development lifecycle?
Quality assurance should be involved very early in the development life cycle so that they can have a better understanding of the system and create sufficient test cases. However, QA should be separated from the development team so that the developers cannot influence the QA engineers.

39. What is the average size of executables that you have created?
This is a simple interview question about your experience with executables. If you know the size of any that you have created, simply provide this info.

40. What versions of Oracle are you familiar with?
This is an interview question about experience.  Simply provide the versions of the software that you have experience with.

Manual Testing Interview Q/A

Manual Testing Interview Q/A - Part 1

1. What are the components of an SRS?
An SRS contains the following basic components:
Introduction
Overall Description
External Interface Requirements
System Requirements
System Features

2. What is the difference between a test plan and a QA plan?
A test plan lays out what is to be done to test the product and includes how quality control will work to identify errors and defects.  A QA plan on the other hand is more concerned with prevention of errors and defects rather than testing and fixing them.

3. How do you test an application if the requirements are not available?
If requirements documentation is not available for an application, a test plan can be written based on assumptions made about the application.  Assumptions that are made should be well documented in the test plan.

4. What is a peer review?
Peer reviews are reviews conducted among people that work on the same team.  For example, a test case that was written by one QA engineer may be reviewed by a developer and/or another QA engineer.

5. How can you tell when enough test cases have been created to adequately test a system or module?
You can tell that enough test cases have been created when there is at least one test case to cover every requirement.  This ensures that all designed features of the application are being tested.

6. Who approves test cases?
The approver of test cases varies from one organization to the next. In some organizations, the QA lead may approve the test cases while another approves them as part of peer reviews.

7. Give an example of what can be done when a bug is found.
When a bug is found, it is a good idea to run more tests to be sure that the problem witnessed can be clearly detailed. For example, let's say a test case fails when Animal=Dog. The tester should run more tests to be sure that the same problem doesn't exist with Animal=Cow. Once the tester is sure of the full scope of the bug, it can be documented and adequately reported.

8. Who writes test plans and test cases?
Test plans are typically written by the quality assurance lead while testers usually write test cases.

9. Is quality assurance and testing the same?
Quality assurance and testing are not the same. Testing is considered to be a subset of QA. QA should be incorporated throughout the software development life cycle, while testing is the phase that occurs after the coding phase.

10. What is a negative test case?
Negative test cases are created based on the idea of testing in a destructive manner.  For example, testing what will happen if inappropriate inputs are entered into the application.
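
A negative test can be sketched as an assertion that invalid input is rejected. The `parse_age` function below is a hypothetical function under test, invented for illustration:

```python
def parse_age(text):
    # Hypothetical function under test: accepts only digits in range 0..130.
    if not text.isdigit():
        raise ValueError("age must be numeric")
    age = int(text)
    if age > 130:
        raise ValueError("age out of range")
    return age

# Negative test cases: deliberately inappropriate inputs must be rejected.
for bad_input in ["abc", "-5", "999", ""]:
    try:
        parse_age(bad_input)
        raise AssertionError(f"{bad_input!r} should have been rejected")
    except ValueError:
        pass  # expected: the application refused the bad input
print("all negative cases rejected")
```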

11. If an application is in production, and one module of code is modified, is it necessary to retest just that module or should all of the other modules be tested as well?
It is a good idea to perform regression testing and to check all of the other modules as well.  At the least, system testing should be performed.

12. What should be included in a test strategy? 
The test strategy includes a plan for how to test the application and exactly what will be tested (user interface, modules, processes, etc.).  It establishes limits for testing and indicates whether manual or automated testing will be used.

13. What can be done to develop a test for a system if there are no functional specifications or any system and development documents?
When there are no functional specifications or system development documents, the tester should familiarize themselves with the product and the code.  It may also be helpful to perform research to find similar products on the market.

14. What are the functional testing types?
The following are the types of functional testing:
Compatibility
Configuration
Error handling
Functionality
Input domain
Installation
Inter-systems
Recovery

15. What is the difference between sanity testing and smoke testing?
When smoke testing is conducted, the product is sent through a preliminary round of testing to check the most basic functionality, such as button functionality, and to confirm that the build is stable enough for further testing; it is typically run on each new build, often by developers. Sanity testing, on the other hand, is a quick, narrow check performed after a minor change or fix to verify that the affected functionality still behaves rationally.

16. Explain random testing.
Random testing involves checking how the application handles input data that is generated at random. Data types are typically ignored, and a random sequence of letters, numbers, and other characters is entered into the data field.
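
A minimal sketch of this idea (the `validate_field` function is a hypothetical stand-in for the code under test): generate random character sequences and check only that the code never crashes on them.

```python
import random
import string

def random_input(max_len=20):
    # Random sequence of letters, digits, and punctuation, ignoring data types.
    chars = string.ascii_letters + string.digits + string.punctuation
    length = random.randint(0, max_len)
    return "".join(random.choice(chars) for _ in range(length))

# Feed random data to a (hypothetical) field validator and just check
# that it never raises -- a typical goal of random testing.
def validate_field(text):
    return text.isalnum() and len(text) <= 10

random.seed(1)  # fixed seed so the run is repeatable
for _ in range(100):
    validate_field(random_input())  # must not crash
print("100 random inputs handled without crashing")
```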

17. Define smoke testing.
Smoke testing is a form of software testing that is not exhaustive and checks only the most crucial components of the software but does not check in more detail.
 
18. What steps are involved in sanity testing?
Sanity testing is very similar to smoke testing. It is the initial testing of a component or application that is done to make sure that it is functioning at the most basic level and it is stable enough to continue more detailed testing.

19. What is the difference between WinRunner and Rational Robot?
WinRunner is a functional test tool but Rational Robot is capable of both functional and performance testing. Also, WinRunner has 4 verification points and Rational Robot has 13 verification points.

20. What is the purpose of the testing process?
The purpose of the testing process is to verify that input data produces the anticipated output.

Tuesday 13 October 2015

Introduction to HP ALM(Quality Center)

Quality Center was initially a test management tool developed by Mercury Interactive.

  • HP Quality Center, a test management tool now popularly known as the Application Lifecycle Management (ALM) tool, is a web-based tool that helps organizations manage the application life cycle right from project planning and requirements gathering through testing and deployment.
  • ALM also provides integration with other HP products such as UFT and LoadRunner.


History of QC

  • Quality Center was earlier known as Test Director, which was developed by Mercury Interactive.
  • With the release of Version 8, the product was renamed Quality Center.
  • Later, HP acquired Mercury Interactive and re-branded all Mercury products as HP.
  • So Mercury Quality Center became HP Quality Center.
  • In 2011, Version 11 was released, and Quality Center was renamed HP ALM.


Why is ALM/QC used?

ALM helps make project management, from requirements to deployment, easier. It increases predictability and creates a framework to manage projects from a central repository. With ALM you will be able to:

  1. Define and maintain requirements and tests.
  2. Create Tests
  3. Organize tests into logical subsets
  4. Schedule tests and execute them
  5. Collect results and analyze the data
  6. Create, monitor and analyze defects
  7. Share defects across projects
  8. Track progress of a project
  9. Collect metrics
  10. Share asset libraries across projects
  11. Integrate ALM with HP testing tools and other third-party tools for a complete automation experience.


HP ALM Editions:
HP ALM is a commercially licensed tool, and HP offers it in the 4 different editions listed below:

  • HP ALM
  • HP ALM Essentials
  • HP Quality Center Enterprise Edition
  • HP ALM Performance Center Edition


ALM Workflow


  1. Release Planning: draft the release details; determine the number of cycles in each release and the scope of each release.
  2. Requirement Specification: draft the requirement specifications.
  3. Test Planning: based on the requirements, test plans and test cases are created.
  4. Test Execution: execute the created test plans.
  5. Defect Tracking: track and fix the defects detected during the execution stage.
During all stages, analysis is done, and reports and graphs are generated for test metrics.

HP Quality Center/ALM Tutorials

Syllabus


  • Introduction to HP ALM(Quality Center)
  • How to install HP ALM
  • Create a Domain, Project, User in HP ALM
  • Release Specifications: Understanding the Management Tab in HP ALM
  • All About Requirements Specifications module in HP ALM
  • All About Test Plan Module in HP ALM (Quality Center)
  • Working with Test Lab in HP ALM
  • How to integrate UFT(QTP) with ALM (Quality Center)
  • Defect Management in HP ALM(Quality Center)
  • All About Dashboard & Analysis in HP ALM
  • Getting used to HP ALM Interface
  • How to customize your ALM project

Friday 9 October 2015

To test Notepad Application

Functional Test Cases :
1) A new page should be blank, with the cursor at the beginning of the first line.
2) The application allows typing.
3) The application allows saving.
4) The application allows opening a saved file.
5) Validate menus and submenus.
6) Verify the types of data that can be saved (numeric, alphanumeric, special characters).
7) Verify editing of saved data (changing the font size and type, deleting, adding) and saving the changes.
8) Verify saving a blank file.
9) Validate the file name: renaming a file, keeping the default name, duplicate file names, special characters in the file name.
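
The first few functional cases above could be automated against a minimal in-memory editor model. The `EditorModel` class below is a hypothetical stand-in for Notepad, written purely to illustrate turning manual cases into executable checks:

```python
class EditorModel:
    # Hypothetical stand-in for a Notepad-like document.
    def __init__(self):
        self.text = ""          # case 1: a new page starts blank
        self.cursor = 0         # cursor at the beginning of the first line

    def type_text(self, s):     # case 2: the application allows typing
        self.text += s
        self.cursor = len(self.text)

    def save(self):             # case 3: the application allows saving
        self.saved = self.text
        return self.saved

# Functional checks mirroring the manual cases:
doc = EditorModel()
assert doc.text == "" and doc.cursor == 0   # new page blank, cursor at start
doc.type_text("hello 123 @#")               # numeric/alphanumeric/special chars
assert doc.save() == "hello 123 @#"         # saved content matches typed data
print("basic functional cases pass")
```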

Non-functional Test Cases :
1) Do performance testing by opening, say, 100 instances of Notepad (possibly using an automated tool) and determining the response time.
2) Check the maximum length of data that can be entered.

Test Case For Keyboard:

1. Check the company (make) of the keyboard.
2. Check the color of the keyboard.
3. Check the type of the keyboard: PS/2 or USB.
4. Check the number of keys on the keyboard.
5. Check whether it is a multimedia keyboard or not.
6. Check whether the drivers of the keyboard are properly installed or not.
7. Check whether the wire is properly plugged in or not.
8. Check the functionality of all alphabet keys.
9. Check the functionality of the function keys.
10. Check the functionality of the navigation keys.
11. Check the Num Lock key.
12. Check the Caps Lock key's functionality.
13. Check the Shift keys' functionality.

Test Cases For Flight Reservation

  1. Check whether it is possible to log in without providing any user details.
  2. Check login with a three-character user name.
  3. Check login with a valid username.
  4. Check login with a valid username and an invalid password.
  5. Check login with a valid username and a valid password.
  6. Check whether the password is in encrypted form while being entered.
  7. Check whether the application accepts the date in DD/MM/YY format.
  8. Check whether the application accepts the date in MM/DD/YY format.
  9. Check the date by entering it in MM/DD/YY format but providing single-digit values for the day and month, e.g. 6/7/13.
  10. Check whether the drop-down list box lists the names of the boarding places.
  11. Check whether the drop-down list box lists the names of the destination places for selection.
  12. Check whether the application shows the availability of flights with the dates and timings of departure.
  13. Check whether the cursor moves to the passenger name field for entering the name of the passenger who is going to travel.
  14. Check whether the application allows selecting the class of travel from the available options: first, second and business class.
  15. Check whether the application shows the details of the passenger's order once it has been placed.

Test Cases for Forgot Password:

1. Check that clicking the "forgot password" link directs the user to the forgot-password page.
2. Check that it asks for the email address to which the reset link should be sent.
3. Check that when the user gives his email address, the reset link arrives in that email account.
4. Check that the reset link is sent even if the user is currently signed in to the account.
5. Check that clicking the link opens a new page.
6. Check that the page asks the user to enter a new password.
7. Check that the entered new password and re-typed password must be the same.
8. Check that logging in with the new password signs the user in to the account.
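Check 7 above (the new password and re-typed password must match) can be sketched as a tiny helper; the function name and the minimum-length policy are assumptions for illustration:

```python
def can_set_new_password(new_pw, retype_pw, min_len=8):
    """Accept the reset only when both entries match and meet a minimal length."""
    return new_pw == retype_pw and len(new_pw) >= min_len

assert can_set_new_password("N3wPass!x", "N3wPass!x") is True
assert can_set_new_password("N3wPass!x", "n3wpass!x") is False  # mismatch
assert can_set_new_password("short", "short") is False          # too short
```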

Test case on compose box in mail

Functional Tests

Check that clicking "Compose mail" takes you to the "Compose mail" page.
Check whether the page has:
a) To, Cc and Bcc fields to enter email addresses.
b) A Subject field to enter the subject of the mail.
c) A text body, with space to enter the text.
Check whether:
a) In To, Cc and Bcc you can delete, edit, cut, copy and paste text.
b) In Subject you can delete, edit, cut, copy and paste text.
c) In the text body you can delete, edit, cut, copy, paste and format text.
Check whether you can attach a file.
Check whether you can send, save or discard the mail.
---
System Tests (Load Tests)
a) The number of email addresses that can be entered in To, Cc and Bcc.
b) The maximum length of the subject.
c) The maximum number of words that can be entered in the text space.
d) The maximum size of a file that can be attached.
e) The maximum number of files that can be attached.
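The load limits above are classic boundary-value candidates: for each limit, test just below, at, and just above the boundary. A minimal sketch, with made-up limits (real mail providers publish their own):

```python
# Assumed limits for illustration only.
MAX_RECIPIENTS = 100
MAX_SUBJECT_LEN = 255

def boundary_values(limit):
    """Classic boundary-value picks around an upper limit: below, at, above."""
    return [limit - 1, limit, limit + 1]

assert boundary_values(MAX_RECIPIENTS) == [99, 100, 101]
assert boundary_values(MAX_SUBJECT_LEN) == [254, 255, 256]
```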

Performance Testing: if sending and receiving mail are considered, we can test the performance of the email server as follows:
1) If one user is connected, what is the time taken to receive a single mail?
2) If thousands of users are connected, what is the time taken to receive the same mail?
3) If thousands of users are connected, what is the time taken to receive a mail with a huge attachment?

Usability Testing:
1) Check that when part of an email address is entered, the matching email addresses are displayed.
2) If the mail is sent without a subject or body text, a warning is displayed.
3) If To, Cc or Bcc contains an address without "@", a warning is displayed immediately that the mail id is invalid.
4) Composed mails should automatically be stored as drafts.
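Check 3 above (warn on an address without "@") can be sketched as a naive structural check; real email-address validation (RFC 5322) is considerably more involved, so treat this as illustration only:

```python
def looks_like_email(address):
    """Naive structural check used for an immediate warning; real
    address validation is far stricter than this."""
    local, sep, domain = address.partition("@")
    return bool(local) and sep == "@" and "." in domain

assert looks_like_email("user@example.com") is True
assert looks_like_email("userexample.com") is False   # missing @
assert looks_like_email("user@") is False             # empty domain
```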

Test case for Chair

  1. Verify that the chair is stable enough to take an average human load.
  2. Check the material used in making the chair - wood, plastic, etc.
  3. Check whether the chair's legs are level with the floor.
  4. Check the usability of the chair as an office chair or a normal household chair.
  5. Check whether there is back support in the chair.
  6. Check whether there is support for the hands.
  7. Verify the paint's type and colour.
  8. Verify whether the chair's material is brittle or not.
  9. Check whether a cushion is provided with the chair.
  10. Check the condition of the chair when washed with water (the effect of water on the chair).
  11. Verify that the dimensions of the chair are as per the specifications.
  12. Verify that the weight of the chair is as per the specifications.
  13. Check the height of the chair's seat from the floor.

Test Case For copy and paste in MS word

For negative testing of copy and paste we check against the other commands
of the package, for example:

* Pressing copy on selected content should not make it bold, italic, etc.

* Pressing copy on highlighted content should not cut it.

For positive testing of copy we can design test cases like:

1) The copy icon and command should be greyed out (disabled) if no content is selected.
2) The copy icon and command should be enabled just after selecting any content.
3) Without any selection, pressing the hot key Ctrl+C should display the clipboard.
4) It should be possible to copy again and again with different selections, storing each on the clipboard.
5) It should copy content along with its formatting.

For paste:

1) The paste icon and command should be disabled at the beginning.
2) They should be enabled just after something is copied.
3) It should paste the same content with its styles whether we use the hot key, the command or the icon.
4) It should be possible to paste the same (last copied) content n times.
5) It should paste copied content from the clipboard as per our selection from the clipboard.

Test Case for Wi-Fi Enabled devices

1. Check whether the Wi-Fi device is on or not.
2. Check that the internet wire is connected to the device.
3. Check that internet-connectivity notifications appear on the device.
4. Check on the laptop that Wi-Fi is enabled.
5. Test whether the laptop is connected to this Wi-Fi device.
6. Test whether the Wi-Fi drivers are installed properly on the laptop.
7. Test whether Wi-Fi connectivity is available.
8. Test whether the Wi-Fi drops while an application is using it.
9. Disconnect the Wi-Fi while using an application over Wi-Fi.
10. Use the internet in multiple applications at the same time and check the speed.
11. Use multiple devices on the same Wi-Fi to test the internet connection speed.
12. Test the behaviour when the user's device is out of range.
13. Test on the edge of the Wi-Fi range.
14. Connect to another Wi-Fi, use the application, then move back into the range of the Wi-Fi under test and check whether the laptop reconnects to it.

Test Case for Petrol Pump

1. First verify whether it is a petrol pump or a diesel pump.
2. If it is a petrol pump, verify whether petrol is available or not.
3. If petrol is not there, it must show some signal.
4. Check that the meter is at zero level before dispensing.
5. Check whether the meter is working correctly.
6. Check that both the quantity and the amount meters start at zero.
7. Check whether the reset button is working correctly.
8. Verify the amount displayed on the screen.

Test cases for any credit card

  1. Test the size of the card.
  2. Test the colour of the card.
  3. Test the thickness.
  4. The bank name should be there.
  5. The material used for the card should not be harmful.
  6. The card should not break if thrown from a certain height.
  7. The card should not be overly flexible.
  8. The magnetic strip should be in the proper place.
  9. The magnetic strip should be unaffected by different environmental conditions.
  10. The card contains the name of the holder.
  11. The card contains the expiry date.
  12. The card contains the account number.
  13. When we swipe the card, the machine accepts it.

Test Cases for the below Field Browse Upload

There is one more case which is an important one.
1. Instead of using the browse button, check whether the text box is editable so the user can directly type the entire path of the file, and on clicking the Upload button the file gets uploaded.
2. The negative case for the above scenario: write a wrong path, or a path where no such file exists, and click Upload. The system should give a message saying "wrong path" or "no such file found".
3. Check whether the upload functionality is restricted to authorized users, if mentioned in the specification.
4. On a successful upload, a message should be displayed that the file was uploaded successfully.
5. When an upload is in process, the Upload and Browse buttons should be disabled, to avoid clicks on those buttons while the activity is in progress.
6. Check that error handling is done: if the upload takes time, there should not be a time-out error.
7. Also check in what time a file of the maximum size is uploaded.
8. If the upload stores a file in a server folder (i.e. a server location), check that a file with the same name cannot be uploaded to the same location; if it is a database, duplicate records should not be inserted into the same tables.
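The wrong-path negative case (point 2) can be sketched as a simple existence check; the helper name is hypothetical, and a real upload would also validate size, type and permissions:

```python
import os
import tempfile

def can_upload(path):
    """Reject a path that does not point to an existing regular file;
    a real upload check would go far beyond this."""
    return os.path.isfile(path)

# A real temporary file passes; a made-up path is rejected.
with tempfile.NamedTemporaryFile(delete=False) as f:
    real_path = f.name
assert can_upload(real_path) is True
assert can_upload("/no/such/dir/report.xlsx") is False
os.unlink(real_path)
```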

Test case for Coffee Machine

1. Verify the coffee machine works properly when the power supply is switched ON.
2. Verify the coffee machine when the power supply is improper.
3. Verify that all buttons on the machine are visible.
4. Verify the indicator light shows the machine is turned ON after switching on the power supply.
5. Verify the machine when there is no water.
6. Verify the machine when there is no coffee powder.
7. Verify the machine when there is no milk.
8. Verify the machine when there is no sugar.
9. Verify the machine's operation when it is empty.
10. Verify the machine's operation when all the ingredients are filled up to the capacity level.
11. Verify the machine's operation when the water quantity is below its limit.
12. Verify the machine's operation when the milk quantity is below its capacity limit.
13. Verify the machine's operation when the coffee powder is below its capacity limit.
14. Verify the machine's operation when the available sugar is below its capacity limit.
15. Verify the machine's operation when a metal piece is stuck inside the machine.
16. Verify the machine by pressing the coffee button and check that it pours coffee with the appropriate mixture and taste.
17. Verify the machine by pressing the tea button and check that it pours tea with the appropriate mixture and taste.
18. It should fill the coffee cup with the appropriate quantity.
19. Verify the machine's operation within seconds of pouring milk, sugar, water, etc.; it should display a message.
20. Verify the operation of all the buttons.
21. Verify the machine's operation by pressing the buttons one after the other.
22. Verify the machine's operation by pressing two buttons at a time.
23. Verify the machine's operation during power fluctuations.
24. Verify the machine's operation when all the ingredients are overloaded.
25. Verify the machine's operation when one ingredient is overloaded and the others are up to the limit.
26. Verify the machine's operation when one or some of the parts inside the machine are damaged.

Write Test Cases on a Fan

Test cases on a fan are as follows:

1. It should have a hook for hanging from the ceiling.
2. It should have a minimum of three blades.
3. It should start moving once electricity passes into it.
4. The speed of the fan should be controlled by the regulator.
5. It should stop once the switch is turned off.
6. The fan should run with minimum noise.
7. The blades should be at a proper distance from the ceiling.
8. The fan, while in motion, should not vibrate.
9. The colour of the fan should be dark.

WHAT INFORMATION WOULD THE TEST MANAGER WANT OUT OF TEST CASE DOCUMENT/S

  • Which features have been tested/ will be tested eventually?
  • How many user scenarios/ use cases have been executed?
  • How many features are stable?
  • Which features need more work?
  • Are sufficient input combinations exercised?
  • Does the app give out correct error messages if the user does not use it the way it was intended to be used?
  • Does the app respond to the various browser specific functions as it should?
  • Does the UI conform to the specifications?
  • Are the features traceable to the requirement spec? Have all of them been covered?
  • Are the user scenarios traceable to the use case document? Have all of them been covered?
  • Can these tests be used as an input to automation?
  • Are the tests good enough? Are they finding defects?
  • Is software ready to ship? Is testing enough?
  • What is the quality of the application?

Thursday 8 October 2015

Test cases of pen

Test cases for a pen are as follows:

1. Verify the colour of the pen.
2. GUI testing: check the logo of the pen maker.
3. Usability testing: check the grip of the pen.
4. Verify whether the pen is a ballpoint pen or an ink pen.
5. Integration testing: the cap of the pen should fit easily onto the body of the pen.
6. Check that the pen writes continuously, without breaks.

Some functional test cases for the pen:

1. Check whether it writes on paper or not.
2. Verify whether the ink on the paper is the same colour as what we see in the refill.

Performance and load test cases for the pen:

1. Verify how it performs when writing on wet paper.
2. Verify how it performs when writing on rough paper.
3. Verify how it performs when writing on the hand, because we occasionally do that.
4. Load test: when the pen is pressed very hard against a tough surface, the refill should not come out of the pen.

Negative test cases for the pen:

1. Verify whether ink is available or not.
2. Check the case where ink is available but the pen still does not write on the paper.
3. Verify by bending the refill at multiple points and then trying to write with it.
4. Verify by dipping the pen in water and then writing with it again.
5. Check whether it writes on leaves or not.

Additional test cases for the pen:

1. Usability testing: write on a section of paper and examine whether you can write smoothly; the pen should not keep stopping and starting between breaks.
2. Capability/reliability testing: test the writing capacity (the amount of writing possible from a single refill) of the pen.
3. Robustness testing: test whether you can carry the pen in your shirt or trouser pocket using its cap; the cap's clip should be solid enough to grip the pocket.
4. Compatibility testing: write on different types of surfaces, e.g. rough paper, packing material, glass, leather, cotton, wood, plastic, metals like aluminium or iron, polythene sheet, etc.

Test Case for Gmail Login Page

1. Test without entering any username and password.
2. Test with only a username.
3. Test with only a password.
4. Test a valid username with a wrong password.
5. Test a valid password with a wrong username.
6. Test the right username with the right password.
7. Press Cancel after entering the username and password.
8. Enter a long username and password exceeding the set character limit.
9. Try copy/paste in the password text box.
10. After a successful sign-out, try the "Back" option in your browser. Check whether it takes you back to the "signed-in" page.

Test cases Bulb

  1. Check that the bulb is of the required shape and size.
  2. Check that the bulb can be fitted into and removed from the holder.
  3. Check whether the bulb glows with the required illumination or not.
  4. Check that the bulb glows when we switch it on.
  5. Check that the bulb goes off when we switch it off.
  6. Check the bulb's material.
  7. The life of the bulb should meet the requirement.

Test cases Calculator

1. It should display up to 9 numeric digits.
2. It should give the proper output based on the operation.
3. It should not allow characters.
4. It should run from a cell or battery, not through a power supply.
5. It should be small in size.
6. It should perform at least the 4 basic operations: add, subtract, divide, multiply.
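The four basic operations in point 6 can be sketched as a tiny function, purely for illustration (a real calculator would also handle chained input, overflow and display limits):

```python
def calc(a, op, b):
    """Minimal sketch of the four basic operations named above."""
    if op == "/":
        if b == 0:
            raise ZeroDivisionError("division by zero")
        return a / b
    return {"+": a + b, "-": a - b, "*": a * b}[op]

assert calc(2, "+", 3) == 5
assert calc(10, "-", 4) == 6
assert calc(6, "*", 7) == 42
assert calc(9, "/", 3) == 3.0
```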

Test cases Elevator

Some of the use cases would be:
1) The elevator is capable of moving up and down.
2) It stops at each floor.
3) It moves exactly to the floor whose number is pressed.
4) It moves up when called from above and down when called from below.
5) It waits until the 'close' button is pressed.
6) If anyone steps between the doors while they are closing, the doors should open.
7) No break points exist.
8) More use cases for the load that the elevator can carry (if required).

Test cases for 2 way switch

1. Check whether two switches are present.
2. Check whether both switches are connected properly.
3. Check power supplies for both switches.
4. Check on/off conditions are working properly.
5. Check whether any electronic appliance connected to the 2-way switches should not get power supply when both switches are either in on state or off state.
6. Check whether any electronic appliance connected to the 2-way switches should get power supply when one switch is in on state and other is in off state or vice versa.
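Cases 5 and 6 describe exclusive-or behaviour: the lamp is powered exactly when the two switch positions differ. A minimal sketch:

```python
def lamp_is_on(switch_a, switch_b):
    """In a 2-way (staircase) circuit the lamp is powered exactly when
    the two switches are in different positions, i.e. XOR."""
    return switch_a != switch_b

# Case 5: both on or both off -> no power
assert lamp_is_on(True, True) is False
assert lamp_is_on(False, False) is False
# Case 6: one on, one off -> power
assert lamp_is_on(True, False) is True
assert lamp_is_on(False, True) is True
```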

Test cases for Traffic Signal

1. Verify that the traffic light has three lights (green, yellow, red).
2. Verify that the lights turn on in a sequence.
3. Verify that the lights turn on in a sequence based on the time specified (green light 1 min, yellow light 10 sec, red light 1 min).
4. Verify that only one light glows at a time.
5. Verify whether the timing of the traffic light can be adjusted based on the traffic.
6. Verify whether the traffic lights in some spots are sensor-activated.
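The sequencing checks above can be sketched as a repeating cycle of phases; the durations are the ones assumed in the test case (green 60 s, yellow 10 s, red 60 s), and exactly one light is named at any step:

```python
from itertools import cycle, islice

# Assumed durations from the test case above.
PHASES = [("green", 60), ("yellow", 10), ("red", 60)]

def signal_sequence(n):
    """Return the first n phases of the repeating green/yellow/red cycle."""
    return list(islice(cycle(p for p, _ in PHASES), n))

assert signal_sequence(4) == ["green", "yellow", "red", "green"]
```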

Test cases for sending a message through a mobile phone (assuming all the scenarios)

1. Check the availability of the mobile.
2. Check the buttons on the mobile.
3. Check whether the mobile is locked or unlocked.
4. Check that the mobile can be unlocked.
5. Select the menu.
6. Check for Messages in the menu.
7. Select Messages.
8. Check for Write Message in the Messages menu.
9. Select Write Message.
10. Check the buttons for writing alphabets.
11. Check how many characters you can send.
12. Write the message on the Write Message screen.
13. Select Options.
14. Select Send.
15. Check that it asks for the phone number of the receiver.
16. Search for the receiver's phone number if it exists in the contacts.
17. Enter the phone number of the receiver.
18. Select OK.
19. Check that the message is sent.

Test cases for Mobile Phone

1) Check whether the battery is inserted into the mobile properly.
2) Check switching the mobile on and off.
3) Insert the SIM into the phone and check it.
4) Add one user with a name and phone number to the address book.
5) Check an incoming call.
6) Check an outgoing call.
7) Send/receive messages on the mobile.
8) Check that all the number/character keys on the phone work fine by clicking on them.
9) Remove the user from the phone book and check that the name and phone number are removed properly.
10) Check whether the network works fine.
11) If it is GPRS-enabled, check the connectivity.

Test Cases for ATM

TC 1 :- successful card insertion.
TC 2 :- unsuccessful operation due to wrong angle card insertion.
TC 3:- unsuccessful operation due to invalid account card.
TC 4:- successful entry of pin number.
TC 5:- unsuccessful operation due to wrong pin number entered 3 times.
TC 6:- successful selection of language.
TC 7:- successful selection of account type.
TC 8:- unsuccessful operation due to wrong account type selected with respect to the inserted card.
TC 9:- successful selection of withdrawal option.
TC 10:- successful selection of amount.
TC 11:- unsuccessful operation due to wrong denominations.
TC 12:- successful withdrawal operation.
TC 13:- unsuccessful withdrawal operation due to amount greater than possible balance.
TC 14:- unsuccessful due to lack of amount in ATM.
TC 15:- unsuccessful due to amount greater than the day limit.
TC 16:- unsuccessful due to server down.
TC 17:- unsuccessful operation due to clicking cancel after inserting the card.
TC 18:- unsuccessful operation due to clicking cancel after inserting the card and entering the PIN.
TC 19:- unsuccessful operation due to clicking cancel after language selection, account type selection, withdrawal selection and amount entry.
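Several of the withdrawal rules (TC 11, 13, 14, 15) can be sketched as a validation function; the day limit, note denomination and ATM cash level are invented for illustration:

```python
# Assumed machine parameters for illustration only.
DAY_LIMIT = 25_000
NOTE_DENOMINATION = 100   # the machine dispenses multiples of this note

def withdrawal_error(amount, balance, withdrawn_today=0, atm_cash=100_000):
    """Return the first failing rule (TC 11, 13, 14, 15) or None on success."""
    if amount % NOTE_DENOMINATION != 0:
        return "wrong denomination"
    if amount > balance:
        return "insufficient balance"
    if amount > atm_cash:
        return "ATM out of cash"
    if withdrawn_today + amount > DAY_LIMIT:
        return "day limit exceeded"
    return None

assert withdrawal_error(130, 5_000) == "wrong denomination"
assert withdrawal_error(6_000, 5_000) == "insufficient balance"
assert withdrawal_error(5_000, 50_000, withdrawn_today=22_000) == "day limit exceeded"
assert withdrawal_error(2_000, 5_000) is None
```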

What if there is not enough time for thorough testing?

If we have enough time to test the application, there is no problem at all. But if there isn't enough time for thorough testing, it won't be possible to test every combination of scenarios. Risk analysis plays a vital role in software testing, so we recommend using risk analysis to determine where testing should be focused. Most of the time it is not possible to test the whole application within the specified time.
Here are some points to be considered when you are in such a situation:

  1. What is the most important functionality of the project ?
  2. What is the high-risk module of the project ?
  3. Which functionality is most visible to the user ?
  4. Which functionality has the largest safety impact ?
  5. Which functionality has the largest financial impact on users ?
  6. Which aspects of the application are most important to the customer ?
  7. Which parts of the code are most complex, and thus most subject to errors ?
  8. Which parts of the application were developed in rush or panic mode ?
  9. What do the developers think are the highest-risk aspects of the application ?
  10. What kind of problems would cause the worst publicity ?
  11. What kind of problems would cause the most customer service complaints ?
  12. What kind of tests could easily cover multiple functionalities ?
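The questions above amount to risk-based prioritization. A minimal sketch that scores hypothetical modules by likelihood x impact (matching the high/medium/low buckets used in risk analysis) and orders them for testing:

```python
# Hypothetical modules with likelihood and impact each rated 1-3.
modules = [
    {"name": "payment", "likelihood": 3, "impact": 3},
    {"name": "reports", "likelihood": 2, "impact": 1},
    {"name": "login",   "likelihood": 2, "impact": 3},
]

def by_risk(mods):
    """Order modules so the highest risk (likelihood x impact) is tested first."""
    return sorted(mods, key=lambda m: m["likelihood"] * m["impact"], reverse=True)

assert [m["name"] for m in by_risk(modules)] == ["payment", "login", "reports"]
```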

Tuesday 6 October 2015

Types of testing

There are many types of testing, such as:

  • Black box testing
  • White box testing
  • Unit Testing
  • Integration Testing
  • Functional Testing
  • System Testing
  • Stress Testing
  • Performance Testing
  • Usability Testing
  • Acceptance Testing
  • Regression Testing
  • Beta Testing
  • End-to-end testing  
  • Sanity testing
  • Load testing


Black box testing
Black box testing – Internal system design is not considered in this type of testing. Tests are based on requirements and functionality.

White box testing
White box testing – This testing is based on knowledge of the internal logic of an application’s code. Also known as Glass box Testing. Internal software and code working should be known for this type of testing. Tests are based on coverage of code statements, branches, paths, conditions.

Unit Testing
Unit testing is the testing of an individual unit or group of related units. It falls under the class of white box testing. It is often done by the programmer to test that the unit he/she has implemented is producing expected output against given input.

Integration Testing
Integration testing is testing in which a group of components are combined to produce output. Also, the interaction between software and hardware is tested in integration testing if software and hardware components have any relation. It may fall under both white box testing and black box testing.

Functional Testing
Functional testing is the testing to ensure that the specified functionality required in the system requirements works. It falls under the class of black box testing.

System Testing
System testing is the testing to ensure that by putting the software in different environments (e.g., Operating Systems) it still works. System testing is done with full system implementation and environment. It falls under the class of black box testing.

Stress Testing
Stress testing is the testing to evaluate how the system behaves under unfavorable conditions. Testing is conducted beyond the limits of the specifications. It falls under the class of black box testing.

Performance Testing
Performance testing is the testing to assess the speed and effectiveness of the system and to make sure it is generating results within a specified time as in performance requirements. It falls under the class of black box testing.

Usability Testing
Usability testing is performed from the perspective of the client, to evaluate how user-friendly the GUI is. How easily can the client learn it? After learning it, how proficiently can the client use it? How pleasing is its design? This falls under the class of black box testing.

Acceptance Testing
Acceptance testing is often done by the customer to ensure that the delivered product meets the requirements and works as the customer expected. It falls under the class of black box testing.

Regression Testing
Regression testing is the testing after modification of a system, component, or a group of related units to ensure that the modification is working correctly and is not damaging or imposing other modules to produce unexpected results. It falls under the class of black box testing.

Beta Testing
Beta testing is testing done by end users, a team outside development, or by publicly releasing a full pre-release version of the product, known as the beta version. The aim of beta testing is to catch unexpected errors. It falls under the class of black box testing.

End-to-end testing
Similar to system testing, involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

Sanity testing
Testing to determine whether a new software version is performing well enough to accept it for a major testing effort. If the application is crashing on initial use, the system is not stable enough for further testing, and the build or application is sent back to be fixed.

Load testing
It is performance testing to check system behaviour under load: testing an application under heavy loads, such as testing a web site under a range of loads to determine at what point the system's response time degrades or fails.

What is a bug leak or memory leak?

Bug Leak-: something missed by the developer and caught by the tester.
Defect Leak-: something missed by the tester and caught by the client.
Memory Leak-: a memory leak happens when a program allocates memory but never releases it after use, so the available memory gradually shrinks.
Example-: repeatedly fetching large result sets from the database into buffers that are never freed will eventually exhaust the available memory.

Describe the basic elements you put in a defect report

A bug means a deviation from the requirements, i.e. the application does not perform the operations mentioned in the requirements document. We have to report that to the developer, and for that we use a tracking tool, also called a bug reporting tool. A defect report contains the following information:
1. Bug id
2. Program name
3. Version, release
4. Type of the bug
5. Attachments
6. Severity
7. Is it reproducible?
8. Steps to reproduce
9. Suggestion
10. Name of the reporter
11. Date
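The fields above can be sketched as a simple record type; the field names are illustrative, not any particular bug tracker's schema:

```python
from dataclasses import dataclass, field

@dataclass
class DefectReport:
    """Sketch of the defect-report fields listed above (illustrative only)."""
    bug_id: str
    program: str
    version: str
    bug_type: str
    severity: str
    reproducible: bool
    steps: list = field(default_factory=list)
    reporter: str = ""
    date: str = ""

r = DefectReport("BUG-101", "Flight Reservation", "1.2", "functional",
                 severity="high", reproducible=True,
                 steps=["open login page", "submit empty form"],
                 reporter="tester1", date="2015-11-30")
assert r.severity == "high" and len(r.steps) == 2
```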

What do you do if the developer does not accept the bug?

1) First of all, the developer will concentrate on bugs depending on priority. He first fixes critical or showstopper bugs, then high-priority bugs, then medium, and last the low-priority bugs. If the developer is not ready to fix the bug, it may be of low priority and he is concentrating on higher-priority bugs.
2) If the bug is not of low priority and he is still not ready to fix it, he may not have understood the bug clearly. In that case the tester sits with the developer and explains the bug in detail, with a step-by-step description of how it occurred.
3) If the developer doesn't have time to fix it because the project release is close, and the bug doesn't affect the main functionality, the bug is released as a known issue. The testing team should take care that this known issue is fixed in the next release of the project.

What are defect severity and priority?

Priority:- How quickly do we need to fix the bug? Or how soon should the bug get fixed?

Severity:- How much the bug affects the functionality of the application, or how deadly it can be for the application under consideration.

E.g.:

(1) High Priority and Low Severity
The company logo is not properly displayed on the website.

(2) High Priority and High Severity
Suppose you are shopping online and have filled in the payment information, but after submitting the form you get a message like "Order has been cancelled." Normally high-severity bugs come with high priority.

(3) Low Priority and High Severity
A typical scenario in which the application crashes, but that scenario occurs only rarely.

(4) Low Priority and Low Severity
There is a mistake like "You have registered success" instead of "successfully"; basically a general spelling mistake, which is not expected.
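The four combinations can be summarized as a small lookup table keyed by (priority, severity); the example descriptions paraphrase the scenarios above:

```python
# Illustrative examples for each (priority, severity) quadrant.
QUADRANTS = {
    ("high", "low"):  "logo rendered wrongly on the home page",
    ("high", "high"): "payment submitted but order cancelled",
    ("low", "high"):  "rare scenario that crashes the application",
    ("low", "low"):   "minor spelling mistake in a message",
}

def example_for(priority, severity):
    """Look up a sample defect for a given priority/severity combination."""
    return QUADRANTS[(priority, severity)]

assert example_for("high", "high") == "payment submitted but order cancelled"
```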

Why does software have bugs?

There are many reasons for software having errors:
1. Poor documentation.
2. Enhancements that were not designed as per the enhancement requirements.
3. The main reason: human beings make errors, which lead to defects, which lead to failure of the application.
4. Time pressure may be one constraint.
5. The application not being verified against the documents.
6. Invalid inputs may lead to errors.
7. Programming errors, e.g. suppose a loop should run from i=0 to 100, but by mistake the loop starts from i=1. This leads to an error.
8. Not properly following the documentation.

Difference between Desktop, Client-Server and Web-based Applications

There are three broad kinds: testing of desktop applications, client-server application testing and testing of web applications. Each has a different environment in which to test, and you lose control of that environment as you move from desktop to web applications.

A desktop application runs on personal computers and workstations, so when you test a desktop application you are focusing on a particular environment. You test the complete application: the GUI, functionality, load, and the backend, i.e. the DB.

A client-server application has two distinct components to be tested. The application is loaded on the server, while an application exe runs on every client machine. Testing is done in categories like GUI on both sides, functionality, load, client-server interaction and the server itself. This environment is mostly used on intranets; we know the number of clients and servers and their locations in the test case.

Projects are broadly divided into two types:
2 tier applications
3 tier applications


CLIENT / SERVER TESTING

This type of testing is usually done for 2-tier applications (usually developed for a LAN).
Here we have a frontend and a backend.
The application launched on the frontend has forms and reports which monitor and manipulate data, e.g. applications developed in VB, VC++, Core Java, C, C++, D2K, PowerBuilder, etc. The backend for these applications would be MS Access, SQL Server, Oracle, Sybase, MySQL or Quadbase.

The tests performed on this type of application would be:
- user interface testing
- manual support testing
- functionality testing
- compatibility testing & configuration testing
- intersystems testing

WEB TESTING
This is done for 3-tier applications (developed for the internet / an intranet / extranet).
Here we have a browser, a web server and a DB server.
The applications accessible in the browser would be developed in HTML, DHTML, XML, JavaScript, etc. (we monitor data through these applications).
Applications for the web server would be developed in Advanced Java, ASP, JSP, VBScript, JavaScript, Perl, ColdFusion, PHP, etc. (all the manipulations are done on the web server with the help of these programs).

The DB server would run Oracle, SQL Server, Sybase, MySQL, etc. (all data is stored in the database available on the DB server).


The tests performed on this type of application would be:
- user interface testing
- functionality testing
- security testing
- browser compatibility testing
- load / stress testing
- interoperability testing / intersystems testing
- storage and data volume testing
- storage and data volume testing

In summary, a web application is a three-tier application.
It has a browser (monitors data) [monitoring is done using HTML, DHTML, XML, JavaScript] -> a web server (manipulates data) [manipulations are done using programming languages or scripts like Advanced Java, ASP, JSP, VBScript, JavaScript, Perl, ColdFusion, PHP] -> a database server (stores data) [data storage and retrieval is done using databases like Oracle, SQL Server, Sybase, MySQL].

The types of tests which can be applied to this type of application are:
1. User interface testing for validation & user-friendliness
2. Functionality testing to validate behaviour, inputs, error handling, outputs, manipulations, service levels, order of functionality, links, content of web pages & backend coverage
3. Security testing
4. Browser compatibility testing
5. Load / stress testing
6. Interoperability testing
7. Storage & data volume testing

A client-server application is a two-tier application.
It has forms & reporting at the frontend (where monitoring & manipulation are done) [using VB, VC++, Core Java, C, C++, D2K, PowerBuilder, etc.] -> a database server at the backend (data storage & retrieval) [using MS Access, SQL Server, Oracle, Sybase, MySQL, Quadbase, etc.]

The tests performed on these applications would be:
1. User interface testing
2. Manual support testing
3. Functionality testing
4. Compatibility testing
5. Intersystems testing


When to Stop Testing

Testing should be stopped when it meets the completion criteria. Now how to find the completion criteria? Completion criteria can be derived from test plan and test strategy document. Also, re-check your test coverage.

Completion criteria should be based on Risks. Testing should be stopped when -
1. All the high priority bugs are fixed.
2. The rate at which bugs are found is too small.
3. The testing budget is exhausted.
4. The project duration is completed.
5. The risk in the project is under acceptable limit.

As testing is a never-ending process, we can never assume that 100% testing has been done; we can only minimize the risk of shipping the product to the client with X amount of testing done. The risk can be measured by risk analysis, but for a small-duration / low-budget / low-resource project, risk can be deduced simply by:

  • Measuring Test Coverage.
  • Number of test cycles.
  • Number of high priority bugs.

What is Testware

"Testware" is a term used to describe all of the materials used to perform a test. Testware includes test plans, test cases, test scripts, and any other items needed to design and perform a test.
Designing tests effectively, maintaining the test documentation, and keeping track of all the test documentation (testware) is all major challenges in the testing effort.
Generally speaking, Testware a sub-set of software with a special purpose, i.e. for software testing, especially for software testing automation
Testware: - Testware is produced by both verification and validation testing methods.
Testware includes test cases, test plan, test report and etc. Like software, testware should be placed under the control of a configuration management system, saved, faithfully maintained.