
Saturday 29 July 2017

Non-Functional Testing of Web Applications

Non-functional testing of web applications involves one or more of the following seven types of testing:

  1. Configuration Testing
  2. Usability Testing
  3. Performance Testing
  4. Scalability Testing
  5. Security Testing
  6. Recoverability Testing
  7. Reliability Testing
Let us discuss each of these types in detail:

1) Configuration Testing: This type of testing covers:

a) The operating system platforms used.

b) The type of network connection.

c) Internet service provider type.

d) Browser used (including version).

The real work in this type of testing is ensuring that the requirements and assumptions are understood by the development team, and that test environments with those configurations are put in place to test them properly.

2) Usability Testing:

For usability testing, there are standards and guidelines that have been established throughout the industry. End users tend to accept sites that follow these standards, but the designer shouldn't rely on those standards completely.

While following these standards and guidelines when building the website, the designer should also consider learnability, understandability, and operability, so that users can easily use the website.

3) Performance Testing: Performance testing involves testing a program for timely responses.

The time needed to complete an action is usually benchmarked, or compared, against either the time to perform a similar action in a previous version of the same program or against the time to perform the identical action in a similar program. The time to open a new file in one application would be compared against the time to open a new file in previous versions of that same application, as well as the time to open a new file in the competing application. When conducting performance testing, also consider the file size.

In this testing, the designer should also consider the loading time of the web page under heavier transaction volumes. A requirement can be as simple as "a web page loads in less than eight seconds," or as complex as requiring the system to handle 10,000 transactions per minute while still loading a web page within eight seconds.
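Below is a minimal sketch of such a timing check in Python; the URL and the eight-second budget are illustrative assumptions, and a real performance test would repeat the measurement many times and report averages and percentiles.

```python
# A single-request response-time check; URL and budget are assumptions.
import time
import requests

URL = "https://www.example.com/"   # hypothetical page under test
BUDGET_SECONDS = 8.0               # budget from the requirement above

start = time.perf_counter()
response = requests.get(URL, timeout=BUDGET_SECONDS * 2)
elapsed = time.perf_counter() - start

print(f"Status {response.status_code}, loaded in {elapsed:.2f}s")
assert elapsed <= BUDGET_SECONDS, f"Took {elapsed:.2f}s; budget is {BUDGET_SECONDS}s"
```

Note that this measures server and network time only, not browser rendering time; as discussed below, testing through the browser and testing directly against the server give different numbers.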

Another variant of performance testing is load testing. Load testing for a web application can be thought of as multi-user performance testing, where you want to test for performance slow-downs that occur as additional users use the application. The key difference in conducting performance testing of a web application versus a desktop application is that the web application has many physical points where slow-downs can occur. The bottlenecks may be at the web server, the application server, or at the database server, and pinpointing their root causes can be extremely difficult.
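The sketch below extends the timing check to multiple simultaneous users with a thread pool; the URL and the user count are illustrative assumptions. Pinpointing which tier is the bottleneck still requires monitoring the web, application, and database servers while the load runs.

```python
# A minimal multi-user load sketch: N simulated users fetch the page at once.
import time
import requests
from concurrent.futures import ThreadPoolExecutor

URL = "https://www.example.com/"   # hypothetical page under test
USERS = 50                         # assumed number of simultaneous users

def one_user(_):
    """Fetch the page once and return the elapsed time in seconds."""
    start = time.perf_counter()
    requests.get(URL, timeout=30)
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=USERS) as pool:
    timings = list(pool.map(one_user, range(USERS)))

print(f"avg {sum(timings) / len(timings):.2f}s, worst {max(timings):.2f}s")
```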

We can create performance test cases through the following steps:

a) Identify the software processes that directly influence the overall performance of the system.

b) For each of the identified processes, identify only the essential input parameters that influence system performance.

c) Create usage scenarios by determining realistic values for the parameters based on past use. Include both average and heavy workload scenarios. Determine the window of observation at this time.

d) If there is no historical data to base the parameter values on, use estimates based on requirements, an earlier version, or similar systems.

e) If there is a parameter whose estimated values form a range, select values that are likely to reveal useful information about the performance of the system. Each value should be made into a separate test case.

Performance testing can be done through the "window" of the browser, or directly on the server. If done on the server, some of the performance time that the browser takes is not accounted for.


4) Scalability Testing:

The term "scalability" can be defined as a web application's ability to sustain its required number of simultaneous users and/or transactions while maintaining adequate response times to its end users.

When testing scalability, the configuration of the server under test is critical. All logging levels, server timeouts, etc. need to be configured. In an ideal situation, all of the configuration files should simply be copied from the test environment to the production environment, with only minor changes to the global variables.

In order to test scalability, the web traffic loads must be determined to know what the threshold requirement for scalability should be. To do this, use existing traffic levels if there is an existing website, or choose a representative algorithm (exponential, constant, Poisson) to simulate how the user "load" enters the system.
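For a Poisson arrival process, inter-arrival times are exponentially distributed, so simulated user arrivals can be generated as in the sketch below; the arrival rate is an illustrative assumption.

```python
# Simulated user arrivals under a Poisson process: exponential gaps
# between successive users, accumulated into arrival timestamps.
import random

RATE_PER_SECOND = 5.0   # assumed average arrival rate (users per second)

clock = 0.0
arrival_times = []
for _ in range(20):     # first 20 simulated users
    clock += random.expovariate(RATE_PER_SECOND)
    arrival_times.append(round(clock, 3))

print(arrival_times)    # times at which each simulated user enters the system
```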


5) Security Testing:

Probably the most critical criterion for a web application is that of security. The need to regulate access to information, to verify user identities, and to encrypt confidential information is of paramount importance. Credit card information, medical information, financial information, and corporate information must all be protected from persons ranging from the casual visitor to the determined cracker. There are many layers of security, from password-based security to digital certificates, each of which has its pros and cons.

We can create security test cases through the following steps (the cookie checks in points (d) and (e) are sketched after the list):

a) The web server should be set up so that unauthorized users cannot browse directories or the log files in which all data from the website is stored.

b) Early in the project, encourage developers to use the POST command wherever possible, because the POST command carries data in the request body rather than in the URL and is suited to large data.

c) When testing, check URLs to ensure that there are no "information leaks" due to sensitive information being placed in the URL while using a GET command.

d) A cookie is a text file placed on a website visitor's system that identifies the user. The cookie is retrieved when the user revisits the site at a later time. Users can control whether or not to allow cookies. If the user does not accept cookies, will the site still work?

e) Is sensitive information stored in the cookie? If multiple people use a workstation, the second person may be able to read the sensitive information saved from the first person's visit. Information in a cookie should be encoded or encrypted.
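Below is a minimal sketch of the cookie checks in points (d) and (e), assuming a hypothetical URL; which flags are mandatory depends on your security requirements.

```python
# Fetch the site, then inspect each cookie for flags that limit exposure.
import requests

URL = "https://www.example.com/"   # hypothetical site under test

session = requests.Session()
session.get(URL, timeout=10)

for cookie in session.cookies:
    secure = cookie.secure                               # sent only over HTTPS?
    http_only = cookie.has_nonstandard_attr("HttpOnly")  # hidden from JavaScript?
    print(f"{cookie.name}: Secure={secure}, HttpOnly={http_only}")
    # Any value that looks like plain-text sensitive data is a finding:
    # cookie contents should be encoded or encrypted.
```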


6) Recoverability Testing:

The website should have a backup or redundant server to which traffic is rerouted when the primary server fails, and this rerouting mechanism must be tested. If users find your service unavailable for an excessive period of time, they will switch over to a competitor's website. If the site can't recover quickly, inform the user when it will be available and functional again.


7) Reliability Testing:

Reliability testing is done to evaluate the product's ability to perform its required functions and respond correctly under stated conditions for a specified period of time.

For example: users trust an online banking web application (service) to complete all of their banking transactions. One would expect the results to be consistent, up to date, and in accordance with the user's requirements.

Thursday 8 October 2015

What if there is not enough time for thorough testing?

If we have enough time to test the application, there is no problem at all. But if there isn't enough time for thorough testing, it won't be possible to test every combination of scenarios. Risk analysis plays a vital role in software testing, and we recommend using it to determine where testing should be focused, since most of the time it isn't possible to test the whole application within the specified time.
Here are some points to consider when you are in such a situation (a simple prioritization sketch follows the list):

  1. What is the most important functionality of the project ?
  2. What is the high-risk module of the project ?
  3. Which functionality is most visible to the user ?
  4. Which functionality has the largest safety impact ?
  5. Which functionality has the largest financial impact on users ?
  6. Which aspects of the application are most important to the customer ?
  7. Which parts of the code are most complex, and thus most subject to errors ?
  8. Which parts of the application were developed in rush or panic mode ?
  9. What do the developers think are the highest-risk aspects of the application ?
  10. What kind of problems would cause the worst publicity ?
  11. What kind of problems would cause the most customer service complaints ?
  12. What kind of tests could easily cover multiple functionalities ?
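One common way to turn the answers to these questions into a test order is a simple risk matrix: score each area on likelihood of failure and impact, then test in descending order of the product. The sketch below assumes hypothetical module names and a 1-5 scale.

```python
# Risk-based prioritization: risk score = likelihood x impact.
modules = {
    # name: (likelihood, impact), both on an assumed 1-5 scale
    "payment processing": (4, 5),
    "user registration":  (3, 4),
    "report export":      (2, 2),
    "help pages":         (1, 1),
}

risk_order = sorted(modules.items(),
                    key=lambda item: item[1][0] * item[1][1],
                    reverse=True)

for name, (likelihood, impact) in risk_order:
    print(f"{name}: risk score {likelihood * impact}")
```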

Tuesday 6 October 2015

Types of testing

There are many types of testing, such as:

  • Black box testing
  • White box testing
  • Unit Testing
  • Integration Testing
  • Functional Testing
  • System Testing
  • Stress Testing
  • Performance Testing
  • Usability Testing
  • Acceptance Testing
  • Regression Testing
  • Beta Testing
  • End-to-end testing  
  • Sanity testing
  • Load testing


Black box testing
Black box testing – Internal system design is not considered in this type of testing. Tests are based on requirements and functionality.

White box testing
White box testing – This testing is based on knowledge of the internal logic of an application's code. It is also known as glass box testing. The internal workings of the software and code should be known for this type of testing. Tests are based on coverage of code statements, branches, paths, and conditions.

Unit Testing
Unit testing is the testing of an individual unit or group of related units. It falls under the class of white box testing. It is often done by the programmer to test that the unit he/she has implemented produces the expected output for a given input.
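A minimal sketch of a unit test using Python's built-in unittest module is shown below; the function under test is a hypothetical stand-in for a real unit.

```python
import unittest

def add(a: int, b: int) -> int:
    """The unit under test: returns the sum of two integers."""
    return a + b

class TestAdd(unittest.TestCase):
    def test_expected_output_for_given_input(self):
        self.assertEqual(add(2, 3), 5)    # expected output for given input

    def test_negative_numbers(self):
        self.assertEqual(add(-1, -1), -2)

if __name__ == "__main__":
    unittest.main()
```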

Integration Testing
Integration testing is testing in which a group of components are combined to produce output. Also, the interaction between software and hardware is tested in integration testing if software and hardware components have any relation. It may fall under both white box testing and black box testing.

Functional Testing
Functional testing is the testing to ensure that the specified functionality required in the system requirements works. It falls under the class of black box testing.

System Testing
System testing is the testing to ensure that by putting the software in different environments (e.g., Operating Systems) it still works. System testing is done with full system implementation and environment. It falls under the class of black box testing.

Stress Testing
Stress testing is the testing done to evaluate how the system behaves under unfavorable conditions. Testing is conducted beyond the limits of the specifications. It falls under the class of black box testing.

Performance Testing
Performance testing is the testing done to assess the speed and effectiveness of the system and to make sure it generates results within the time specified in the performance requirements. It falls under the class of black box testing.

Usability Testing
Usability testing is performed from the perspective of the client, to evaluate how user-friendly the GUI is. How easily can the client learn it? After learning how to use it, how proficiently can the client perform? How pleasing is the design to use? This falls under the class of black box testing.

Acceptance Testing
Acceptance testing is often done by the customer to ensure that the delivered product meets the requirements and works as the customer expected. It falls under the class of black box testing.

Regression Testing
Regression testing is the testing done after modification of a system, component, or group of related units, to ensure that the modification works correctly and does not damage other modules or cause them to produce unexpected results. It falls under the class of black box testing.

Beta Testing
Beta testing is testing done by end users, a team outside development, or by publicly releasing a full pre-release version of the product, known as the beta version. The aim of beta testing is to uncover unexpected errors. It falls under the class of black box testing.

End-to-end testing
Similar to system testing, end-to-end testing involves testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

Sanity testing
Testing to determine whether a new software version is performing well enough to accept it for a major testing effort. If the application crashes during initial use, the system is not stable enough for further testing, and the build or application is sent back to be fixed.

Load testing
It is performance testing that checks system behavior under load: testing an application under heavy loads, such as testing a web site under a range of loads to determine at what point the system's response time degrades or fails.

Difference between Desktop, Client-Server and Web-Based Applications

Desktop application testing, client-server application testing, and web application testing each take place in a different environment, and as you move from desktop applications to web applications you lose more and more control over the environment in which the application is tested.

A desktop application runs on personal computers and workstations, so when you test a desktop application you are focusing on a specific environment. You test the complete application: the graphical user interface, functionality, load, and the backend, i.e., the database.

A client-server application has two distinct components to be tested. The application logic is loaded on the server, while an application executable runs on every client machine. Tests are usually grouped into categories such as GUI on both sides, functionality, load, client-server interaction, and the server itself. This environment is mostly used on intranets, where we know the number of clients and servers and their locations when writing test cases.

Projects are broadly divided into two types:
2 tier applications
3 tier applications


CLIENT / SERVER TESTING

This type of testing is usually done for 2-tier applications (usually developed for a LAN).
Here we have a frontend and a backend.
The application launched on the frontend has forms and reports which monitor and manipulate data, e.g., applications developed in VB, VC++, Core Java, C, C++, D2K, PowerBuilder, etc. The backend for these applications could be MS Access, SQL Server, Oracle, Sybase, MySQL, or Quadbase.

The tests performed on these types of applications would be:
- User interface testing
- Manual support testing
- Functionality testing
- Compatibility testing & configuration testing
- Intersystems testing

WEB TESTING
This is done for 3-tier applications (developed for the Internet / intranet / extranet).
Here we have a browser, a web server, and a DB server.
The applications accessible in the browser are developed in HTML, DHTML, XML, JavaScript, etc. (we monitor data through these applications).
Applications on the web server are developed in Advanced Java, ASP, JSP, VBScript, JavaScript, Perl, ColdFusion, PHP, etc.
(all manipulations are done on the web server with the help of these programs).

The DB server hosts Oracle, SQL Server, Sybase, MySQL, etc.
(all data is stored in the database available on the DB server).


The tests performed on these types of applications would be:
- User interface testing
- Functionality testing
- Security testing
- Browser compatibility testing
- Load / stress testing
- Interoperability testing / intersystems testing
- Storage and data volume testing

A web application is a three-tier application.
It has a browser (monitors data) [monitoring is done using HTML, DHTML, XML, JavaScript] -> web server (manipulates data) [manipulations are done using programming languages or scripts like Advanced Java, ASP, JSP, VBScript, JavaScript, Perl, ColdFusion, PHP] -> database server (stores data) [data storage and retrieval is done using databases like Oracle, SQL Server, Sybase, MySQL].

The types of tests which can be applied to this type of application are:
1. User interface testing for validation & user-friendliness
2. Functionality testing to validate behaviour, inputs, error handling, outputs, manipulations, service levels, order of functionality, links, web page content & backend coverage
3. Security testing
4. Browser compatibility testing
5. Load / stress testing
6. Interoperability testing
7. Storage & data volume testing

A client-server application is a two-tier application.
It has forms & reporting at the frontend (where monitoring & manipulation are done) [using VB, VC++, Core Java, C, C++, D2K, PowerBuilder, etc.] -> a database server at the backend (for data storage & retrieval) [using MS Access, SQL Server, Oracle, Sybase, MySQL, Quadbase, etc.].

The tests performed on these applications would be:
1. User interface testing
2. Manual support testing
3. Functionality testing
4. Compatibility testing
5. Intersystems testing


When to Stop Testing

Testing should be stopped when it meets the completion criteria. How do we find the completion criteria? They can be derived from the test plan and test strategy documents. Also, re-check your test coverage.

Completion criteria should be based on risk. Testing should be stopped when:
1. All the high priority bugs are fixed.
2. The rate at which bugs are found is too small.
3. The testing budget is exhausted.
4. The project duration is completed.
5. The risk in the project is under acceptable limit.

As testing is a never-ending process, we can never assume that 100% testing has been done; we can only minimize the risk of shipping the product to the client with X amount of testing done. The risk can be measured by risk analysis, but for a small-duration / low-budget / low-resource project, risk can be deduced simply by the following measures (a sketch of such a check follows the list):

  • Measuring Test Coverage.
  • Number of test cycles.
  • Number of high priority bugs.
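For such a small project, those three measures can be combined into a simple exit check, as in the sketch below; the thresholds are illustrative assumptions that each project would set for itself.

```python
# A minimal "stop testing" check based on the three measures above.
metrics = {
    "test_coverage_pct": 92,       # measured test coverage
    "test_cycles_run": 3,          # completed test cycles
    "open_high_priority_bugs": 0,  # unresolved high-priority bugs
}

ok_to_stop = (
    metrics["test_coverage_pct"] >= 90
    and metrics["test_cycles_run"] >= 3
    and metrics["open_high_priority_bugs"] == 0
)

print("OK to stop testing" if ok_to_stop else "Keep testing")
```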

What is Testware?

"Testware" is a term used to describe all of the materials used to perform a test. Testware includes test plans, test cases, test scripts, and any other items needed to design and perform a test.
Designing tests effectively, maintaining the test documentation, and keeping track of all the test documentation (testware) is all major challenges in the testing effort.
Generally speaking, Testware a sub-set of software with a special purpose, i.e. for software testing, especially for software testing automation
Testware: - Testware is produced by both verification and validation testing methods.
Testware includes test cases, test plan, test report and etc. Like software, testware should be placed under the control of a configuration management system, saved, faithfully maintained.

Monday 5 October 2015

Alpha and Beta Testing

Alpha testing is done before the software is made available to the general public. Typically, the developers will perform the Alpha testing using white box testing techniques. Subsequent black box and grey box techniques will be carried out afterwards. The focus is on simulating real users by using these techniques and carrying out tasks and operations that a typical user might perform. Normally, the actual Alpha testing itself will be carried out in a lab type environment and not in the usual workplaces. Once these techniques have been satisfactorily completed, the Alpha testing is considered to be complete.

The next phase of testing is known as Beta testing. Unlike Alpha testing, people outside of the company are included in the testing. As the aim is to perform a sanity check before the product's release, there may be defects found during this stage, so the distribution of the software is limited to a selection of users outside of the company. Typically, outsourced testing companies are used, as their feedback is independent and comes from a different perspective than that of the software development company's employees. The feedback can be used to fix defects that were missed, assist in preparing support teams for expected issues, or in some cases even force last-minute changes to functionality.

In some cases, the Beta version of software will be made available to the general public. This can give vital 'real-world' information for software/systems that rely on acceptable performance and load to function correctly.

The types of techniques used during a public Beta test are typically restricted to Black box techniques. This is due to the fact that the general public does not have inside knowledge of the software code under test, and secondly the aim of the Beta test is often to gain a sanity check, and also to retrieve future customer feedback from how the product will be used in the real world.

Various sectors of the public are often eager to take part in Beta testing, as it can give them the opportunity to see and use products before their public release. Many companies use this phase of testing to assist with marketing their product. For example, Beta versions of a software application get people using the product and talking about it which (if the application is any good) builds hype and pre-orders before its public release.

What is Compatibility Testing in Software Testing?

Compatibility testing is used to determine if your software application has issues related to how it functions in concert with the operating system and different types of system hardware and software.

It can be of two types - forward compatibility testing and backward compatibility testing.

  • Operating system Compatibility Testing - Linux , Mac OS, Windows
  • Database Compatibility Testing - Oracle, SQL Server
  • Browser Compatibility Testing - IE , Chrome, Firefox
  • Other System Software - Web server, networking/ messaging tool, etc.


Browser compatibility testing
This is the most common form of compatibility testing. It checks the compatibility of the software application in different browsers like Chrome, Firefox, Internet Explorer, Safari, Opera, etc.

Hardware
It checks the application/software compatibility with different hardware configurations.

Network
It checks the application on different networks like 3G, Wi-Fi, etc.

Mobile Devices
It checks whether the application is compatible with mobile devices and their platforms like Android, iOS, Windows, etc.

Operating Systems
It checks whether the application is compatible with different operating systems like Windows, Linux, Mac, etc.

How to perform Compatibility testing?

  • Test the application in the same browser but in different versions. For example, to test the compatibility of the site google.com, download different versions of Firefox, install them one by one, and test the Google site; it should behave the same in each version.
  • Test the application in different browsers. For example, test google.com in the different available browsers such as Firefox, Safari, Chrome, Internet Explorer, and Opera (see the cross-browser sketch after this list).
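The cross-browser check can be automated with Selenium WebDriver, as in the minimal sketch below; it assumes Firefox and Chrome with matching drivers are installed, and the expected title is an illustrative assumption.

```python
# Run the same check in each browser and compare the results.
from selenium import webdriver

URL = "https://www.google.com/"
EXPECTED_TITLE = "Google"   # assumed expected page title

for make_driver in (webdriver.Firefox, webdriver.Chrome):
    driver = make_driver()
    try:
        driver.get(URL)
        # The site should behave the same way in every browser.
        assert EXPECTED_TITLE in driver.title, (
            f"{driver.name}: unexpected title {driver.title!r}")
        print(f"{driver.name}: OK")
    finally:
        driver.quit()
```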

Common Compatibility testing defects

  • Changes in UI ( look and feel)
  • Change in font size
  • Alignment related issues
  • Change in CSS style and color
  • Scroll bar related issues
  • Content or label overlapping
  • Broken tables or Frames

What is a Traceability Matrix?


Why use traceability matrices?

The traceability matrices are the answer to the following questions when testing any software project:
  • How is it feasible to ensure, for each phase of the SDLC, that I have correctly accounted for all the customer’s needs?
  • How can I certify that the final software product meets the customer’s needs? It lets us make sure requirements are captured in test cases.
Disadvantages of not using traceability matrices include the following:
  • More defects in production due to poor or unknown test coverage.
  • Discovering bugs later in the development cycle resulting in more expensive fixes.
  • Difficulties planning and tracking projects.
  • Misunderstandings between different teams over project dependencies, delays, etc…
Benefits of using traceability matrices include the following:
  • Making it obvious to the client that the software is being developed as required.
  • Ensuring that all requirements are included in the test cases.
  • Ensuring that developers are not creating features that no one has requested.
  • Making it easy to identify missing functionalities.
  • Making it easy to find out which test cases need updating if there are change requests.
Requirements Traceability Matrix:

In simple words, a requirements traceability matrix is a document that traces and maps user requirements, usually requirement IDs from a requirement specification document, with the test case IDs. The purpose of this document is to make sure that all the requirements are covered in test cases so that nothing is missed.

The traceability matrix document is prepared to show clients that the coverage is complete. It usually includes the following columns: requirement, baseline document reference number, test case/condition, and defect/bug ID. Using this document, one can trace from a defect ID back to the requirement it affects.

Adding a few more columns to the traceability matrix gives you a good test case coverage checklist.
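A minimal sketch of such a matrix is shown below: a mapping from requirement IDs to test case IDs, where an empty list exposes a requirement with no coverage. The IDs are illustrative assumptions.

```python
# Requirements traceability matrix as a simple mapping.
rtm = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-103"],
    "REQ-003": [],               # not yet covered by any test case
}

uncovered = [req for req, test_cases in rtm.items() if not test_cases]
print("Requirements without test cases:", uncovered or "none")
```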


Difference Between Test Case and Use Case

A use case describes a specific action performed by the user (actor) on the system to achieve a predefined task. A use case is written in terms of the actor: it deals with the steps the user follows to accomplish the task, regardless of the input data. Use cases can expose integration defects between components as the actor exercises them. Use cases are designed from the URS (User Requirement Specification).
Use cases describe the system from the user's point of view.

TEST CASE:
A test case is the combination of test conditions and test data. It simply describes a particular task with a set of input data, an expected output, and certain test conditions. Test cases are designed from the SRS (Software Requirement Specification).


Saturday 3 October 2015

What is the Six Sigma Concept?

Six Sigma is a highly disciplined process that helps us focus on developing and delivering near-perfect products and services.

Features of Six Sigma

  • Six Sigma's aim is to eliminate waste and inefficiency, thereby increasing customer satisfaction by delivering what the customer is expecting.
  • Six Sigma follows a structured methodology, and has defined roles for the participants.
  • Six Sigma is a data driven methodology, and requires accurate data collection for the processes being analyzed.
  • Six Sigma is about putting results on Financial Statements.

Six Sigma is a business-driven, multi-dimensional structured approach for:

  • Improving Processes
  • Lowering Defects
  • Reducing process variability
  • Reducing costs
  • Increasing customer satisfaction
  • Increasing profits

The word Sigma is a statistical term that measures how far a given process deviates from perfection.

The central idea behind Six Sigma: if you can measure how many "defects" you have in a process, you can systematically figure out how to eliminate them and get as close to "zero defects" as possible. Specifically, it means a failure rate of 3.4 parts per million, or 99.9997% perfect.
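The sketch below shows the defect arithmetic behind that figure: defects per million opportunities (DPMO) and the corresponding yield. The counts are illustrative assumptions.

```python
# DPMO = defects / (units x opportunities) x 1,000,000.
units = 50_000            # items produced (assumed)
opportunities = 10        # defect opportunities per item (assumed)
defects = 17              # defects observed (assumed)

dpmo = defects / (units * opportunities) * 1_000_000
yield_pct = 100.0 - dpmo / 10_000   # percent defect-free

print(f"DPMO: {dpmo:.1f}")          # Six Sigma quality means DPMO <= 3.4
print(f"Yield: {yield_pct:.4f}%")   # 3.4 DPMO corresponds to ~99.99966%
```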

Key Concepts of Six Sigma

At its core, Six Sigma revolves around a few key concepts:

  • Critical to Quality: Attributes most important to the customer.
  • Defect: Failing to deliver what the customer wants.
  • Process Capability: What your process can deliver.
  • Variation: What the customer sees and feels.
  • Stable Operations: Ensuring consistent, predictable processes to improve what the customer sees and feels.



Agile Testing: Scrum Methodology

What is Agile Testing?

A software testing practice that follows the principles of agile software development is called agile testing. Agile is an iterative development methodology, where requirements evolve through collaboration between the customer and self-organizing teams, and agile aligns development with customer needs.

SCRUM is a process in the agile methodology which is a combination of the iterative model and the incremental model.

Some of the key characteristics of SCRUM include:

  • Self-organized and focused team.
  • No huge requirement documents; instead, very precise and to-the-point stories.
  • A cross-functional team works together as a single unit.
  • Close communication with the user representative to understand the features.
  • A definite timeline of at most one month.
  • Instead of doing the entire "thing" at a time, Scrum does a little of everything at a given interval.
  • Resource capability and availability are considered before committing to anything.


Important SCRUM Terminologies:

1. Scrum Team

The Scrum team comprises 7 ± 2 members. These members are a mixture of competencies: developers, testers, database people, support people, etc., along with the product owner and a Scrum master. All these members work together in close collaboration, in recurring, fixed intervals, to develop and deliver the agreed features.

2. Sprint

A sprint is a predefined interval or time frame in which the work has to be completed and made ready for review or production deployment. This time box usually lies between two weeks and one month. In day-to-day work, when we say that we follow a one-month sprint cycle, it simply means that we work for one month on the tasks and make the result ready for review by the end of that month.

3. Product Owner

The product owner is the key stakeholder or the lead user of the application to be developed.

The product owner is the person who represents the customer side. He/she has the final authority and should always be available to the team, and reachable in case anyone has doubts that need clarification. It is important for the product owner to understand this and not assign any new requirement in the middle of the sprint, once the sprint has already started.

4. Scrum Master

The Scrum master is the facilitator of the Scrum team. He/she makes sure that the Scrum team is productive and progressing. In case of any impediments, the Scrum master follows up and resolves them for the team.

5. User Story

User stories are nothing but the requirements or features which have to be implemented. In Scrum, we don't have huge requirements documents; instead, each requirement is defined in a single paragraph.

6. Product Backlog

The product backlog is a bucket or source where all the user stories are kept. It is maintained by the product owner. The product backlog can be imagined as a wish list of the product owner, who prioritizes it as per business needs. During the planning meeting (see the next section), user stories are taken from the product backlog; the team brainstorms on them, understands and refines them, and collectively decides which user stories to take on, with the involvement of the product owner.

7. Sprint Backlog

Based on priority, user stories are taken from the product backlog one at a time. The Scrum team brainstorms on each, determines its feasibility, and decides which stories to work on in a particular sprint. The collective list of all the user stories which the Scrum team works on in a particular sprint is called the sprint backlog.

Activities done in SCRUM Methodology:

#1: Planning meeting

The planning meeting is the starting point of Scrum. It is the meeting where the entire Scrum team gathers; the product owner selects a user story based on priority from the product backlog, and the team brainstorms on it. Based on the discussion, the Scrum team decides the complexity of the story and sizes it as per the Fibonacci series. The team identifies the tasks, along with the effort (in hours), needed to complete the implementation of the user story.

Many times the planning meeting is preceded by a "pre-planning meeting". It is like homework which the Scrum team does before they sit for the formal planning meeting: the team tries to write down the dependencies or other factors which they would like to discuss in the planning meeting.

#2: Execution of sprint tasks

As the name suggests, this is the actual work done by the Scrum team to accomplish their tasks and take the user story to the "Done" state.

#3: Daily scrum meeting (call)

During the sprint cycle, the Scrum team meets every day for no more than 15 minutes (it could be a stand-up call, recommended at the beginning of the day), and each member states three points:

What did the team member do yesterday?
What does the team member plan to do today?
Are there any impediments (roadblocks)?
It is the Scrum master who facilitates this meeting. If any team member is facing any kind of difficulty, the Scrum master follows up to get it resolved.

#4: Review meeting

At the end of every sprint cycle, the Scrum team meets again and demonstrates the implemented user stories to the product owner. The product owner may cross-verify the stories against their acceptance criteria. It is again the responsibility of the Scrum master to preside over this meeting.

#5: Retrospective meeting

The retrospective meeting happens after the review meeting. The Scrum team meets, discusses, and documents the following points:

What went well during the sprint (best practices)
What did not go well in the sprint
Lessons learnt
Action items

The Scrum team should continue to follow the best practices, drop what did not work, and apply the lessons learnt during subsequent sprints. The retrospective meeting enables continuous improvement of the Scrum process.

What is the Spiral model: advantages, disadvantages, and when to use it?

The spiral model is similar to the incremental model, with more emphasis placed on risk analysis. The spiral model has four phases: Planning, Risk Analysis, Engineering, and Evaluation. A software project repeatedly passes through these phases in iterations (called spirals in this model). In the baseline spiral, starting in the planning phase, requirements are gathered and risk is assessed. Each subsequent spiral builds on the baseline spiral.

Planning Phase: Requirements are gathered during the planning phase: requirements like the BRS (Business Requirement Specification) and the SRS (System Requirement Specification).

Risk Analysis: In the risk analysis phase, a process is undertaken to identify risks and alternate solutions. A prototype is produced at the end of the risk analysis phase. If any risk is found during the risk analysis, alternate solutions are suggested and implemented.

Engineering Phase: In this phase the software is developed, with testing at the end of the phase; hence both development and testing take place in this phase.

Evaluation phase: This phase allows the customer to evaluate the output of the project to date before the project continues to the next spiral.

Diagram of Spiral model:


Advantages of Spiral model:

  • High amount of risk analysis hence, avoidance of Risk is enhanced.
  • Good for large and mission-critical projects.
  • Strong approval and documentation control.
  • Additional Functionality can be added at a later date.
  • Software is produced early in the software life cycle.

Disadvantages of Spiral model:

  • Can be a costly model to use.
  • Risk analysis requires highly specific expertise.
  • Project’s success is highly dependent on the risk analysis phase.
  • Doesn’t work well for smaller projects.

 When to use Spiral model:

  • When costs and risk evaluation is important
  • For medium to high-risk projects
  • Long-term project commitment unwise because of potential changes to economic priorities
  • Users are unsure of their needs
  • Significant changes are expected (research and exploration)
  • Requirements are complex
  • New product line

What is the difference between smoke testing and functional testing?

1. Smoke testing commonly involves performing functional testing (but may include non-functional testing such as installation testing for example).

2. Fewer test cases are executed in smoke testing than in detailed functional testing; consequently, a smoke test takes less time than a detailed functional test.

3. The focus of a smoke test is broad and shallow. A detailed functional test covers each test area in depth.

4. In order to find any major or obvious defects quickly, it is common to execute a smoke test more times than a detailed functional test.

5. A detailed functional test follows a successful smoke test (see the sketch after this list).
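The contrast can be seen in the minimal sketch below: the smoke test touches each major area once (broad and shallow), while the functional test drills into a single area in depth. The endpoints and checks are illustrative assumptions.

```python
import requests

BASE = "https://www.example.com"   # hypothetical application under test

def smoke_test():
    """Broad and shallow: one quick check per major area."""
    for path in ("/", "/login", "/search", "/checkout"):
        assert requests.get(BASE + path, timeout=10).status_code == 200

def functional_test_search():
    """Deep: several detailed checks on a single area."""
    r = requests.get(BASE + "/search", params={"q": "book"}, timeout=10)
    assert r.status_code == 200
    assert "book" in r.text.lower()          # results mention the query
    empty = requests.get(BASE + "/search", params={"q": ""}, timeout=10)
    assert empty.status_code == 200          # empty query handled gracefully

smoke_test()                # run first; functional tests follow only on success
functional_test_search()
```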

What is the Difference Between Severity and Priority?

Severity: It is the extent to which the defect can affect the software. In other words it defines the impact that a given defect has on the system.

Priority: Priority defines the order in which we should resolve a defect: should we fix it now, or can it wait? This priority status is set by the tester for the developer, mentioning the time frame in which to fix the defect. If high priority is mentioned, the developer has to fix it at the earliest. The priority status is set based on customer requirements.

Examples:

High Priority & High Severity: An error which occurs on the basic functionality of the application and will not allow the user to use the system.

High Severity & Low Priority: An error which occurs in one piece of functionality and prevents the user from using it, but where that functionality is rarely used by the end user.

High Priority & Low Severity: The client logo is not appearing on the web site, but the site is working fine. In this case the severity is low, but the priority is high, because the company's reputation makes it important to resolve quickly; after all, reputation wins more clients and projects and hence increases revenue.

Low Priority and Low Severity: Any cosmetic or spelling issue within a paragraph or a report.

So the bottom line is that in the priority v/s severity matrix the priority is completely business impact driven and the severity is completely technical impact driven.

What Is User Acceptance Testing?
 
User Acceptance Testing is the software testing process in which the system is tested for acceptability and the end-to-end business flow is validated. This type of testing is executed by the client in a separate environment (similar to the production environment) to confirm whether the system meets the requirements of the requirement specification.

UAT is performed after system testing is done and all or most of the major defects have been fixed. It is conducted in the final stage of the Software Development Life Cycle (SDLC), prior to the system being delivered to a live environment. UAT users or end users concentrate on end-to-end scenarios, and UAT typically involves running a suite of tests on the completed system.

Acceptance testing is "black box" testing, meaning UAT users are not aware of the internal structure of the code; they just specify input to the system and check whether the system responds with the correct result.

User acceptance testing is also known as Customer Acceptance Testing (CAT) if the system is being built or developed by an external supplier. CAT or UAT is the final confirmation from the client before the system is ready for production. The business customers are the primary owners of the UAT tests. These tests are created by business customers and articulated in business domain language, so ideally it is a collaboration between business customers, business analysts, testers, and developers. It consists of test suites which involve multiple test cases, and each test case contains input data (if required) as well as the expected output. The result of a test case is either a pass or a fail.

Prerequisites of User Acceptance Testing:

Before UAT starts, the following checkpoints should be considered:
The business requirements should be available.
Development of the software application should be complete, and the different levels of testing (unit testing, integration testing, and system testing) should be finished.
All high-severity, high-priority defects should be verified; there should be no showstopper defects in the system.
Check that all reported defects have been verified before UAT starts.
Check that the traceability matrix for all testing is complete.
Before UAT starts, errors such as cosmetic errors are acceptable but should still be reported.
After all defects are fixed, regression testing should be carried out to confirm that the fixes have not broken other working areas.
A separate UAT environment, similar to production, should be ready for UAT to start.
Sign-off should be given by the system testing team stating that the software application is ready for UAT execution.

What to Test in User Acceptance Testing:

Test cases are created based on the use cases from the requirements definition stage.
Test cases are also created considering real-world scenarios for the application.
The actual testing is carried out in an environment that is a copy of the production environment, so this type of testing concentrates on the exact real-world use of the application.
Test cases are designed so that all areas of the application are covered during testing, to ensure effective user acceptance testing.

What are the key deliverable of User Acceptance Testing:

The completion of user acceptance testing is a significant milestone for the traditional testing method. The following are the key deliverables of the user acceptance testing phase:
Test Plan: This outlines the testing strategy.
UAT Test Cases: These help the team to effectively test the application in the UAT environment.
Test Results and Error Reports: This is a log of all the test cases executed and the actual results.
User Acceptance Sign-off: This indicates that the system, documentation, and training materials have passed all tests within acceptable margins.
Installation Instructions: This document helps to install the system in the production environment.
Documentation Materials: Tested and updated user documentation and training materials are finalized during user acceptance testing.

Explain Positive Testing and Negative Testing With Example

Positive Testing: When the tester tests the application with a positive frame of mind, it is known as positive testing.

Testing the application with valid input and data is known as positive testing.

Positive testing: a test designed to check that the application works correctly. Here the aim of the tester is to show the application passing, which is why it is sometimes called clean testing, that is, "test to pass".

Negative Testing: When the tester tests the application with a negative frame of mind, it is known as negative testing.

Testing the application with invalid input and data is known as negative testing.

Example of positive testing is given below:

For example, if the password length defined in the requirements is 7 to 15 characters, then whenever we check the application by entering an alphanumeric password of between 7 and 15 characters in the password field, it is positive testing, because we test the application with valid data/input.

Example of negative testing is given below:

For example, we know the phone number field does not accept alphabets and special characters; it only accepts numbers. If we type alphabets and special characters into the phone number field to check whether it accepts them, that is negative testing. Both examples are sketched below.
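Both examples can be written as executable checks, as in the sketch below, assuming hypothetical validators for a 7-15 character password and a digits-only phone number field.

```python
def is_valid_password(value: str) -> bool:
    """Assumed rule from the requirements: 7 to 15 characters."""
    return 7 <= len(value) <= 15

def is_valid_phone(value: str) -> bool:
    """Assumed rule: the phone number field accepts digits only."""
    return value.isdigit()

# Positive testing: valid input should be accepted.
assert is_valid_password("abc12345")        # 8 characters, within 7-15

# Negative testing: invalid input should be rejected.
assert not is_valid_password("abc123")      # 6 characters, too short
assert not is_valid_phone("98x-45@7")       # letters and special characters
```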

What is a Test Plan?

Test plan: A document describing the scope, approach, resources, and schedule of intended test activities. It identifies, among other things, the test items, the features to be tested, the testing tasks, who will do each task, the degree of tester independence, the test environment, the test design techniques, the entry and exit criteria to be used and the rationale for their choice, and any risks requiring contingency planning. It is a record of the test planning process.

TEST PLAN TEMPLATE

The format and content of a software test plan vary depending on the processes, standards, and test management tools being used. Nevertheless, the following format, which is based on the IEEE standard for software test documentation, provides a summary of what a test plan can and should contain.

Test Plan Identifier:

Provide a unique identifier for the document. (Adhere to the Configuration Management System if you have one.)

Introduction:

  • Provide an overview of the test plan.
  • Specify the goals/objectives.
  • Specify any constraints.

References:

List the related documents, with links to them if available, including the following:
Project Plan
Configuration Management Plan

Test Items:

List the test items (software/products) and their versions.

Features to be Tested:

List the features of the software/product to be tested.
Provide references to the Requirements and/or Design specifications of the features to be tested

Features Not to Be Tested:

List the features of the software/product which will not be tested.
Specify the reasons these features won’t be tested.

Approach:

Mention the overall approach to testing.
Specify the testing levels [if it’s a Master Test Plan], the testing types, and the testing methods [Manual/Automated; White Box/Black Box/Gray Box]

Item Pass/Fail Criteria:

Specify the criteria that will be used to determine whether each test item (software/product) has passed or failed testing.

Suspension Criteria and Resumption Requirements:

Specify criteria to be used to suspend the testing activity.
Specify testing activities which must be redone when testing is resumed.

Test Deliverables:

List test deliverables, and links to them if available, including the following:
Test Plan (this document itself)
Test Cases
Test Scripts
Defect/Enhancement Logs
Test Reports


Test Environment:

Specify the properties of the test environment: hardware, software, network, etc.
List any testing or related tools.

Estimate:

Provide a summary of test estimates (cost or effort) and/or provide a link to the detailed estimation.
Schedule:

Provide a summary of the schedule, specifying key test milestones, and/or provide a link to the detailed schedule.

Staffing and Training Needs:

Specify staffing needs by role and required skills.
Identify training that is necessary to provide those skills, if not already acquired.

Responsibilities:

List the responsibilities of each team/role/individual.

Risks:

List the risks that have been identified.
Specify the mitigation plan and the contingency plan for each risk.

Assumptions and Dependencies:

List the assumptions that have been made during the preparation of this plan.
List the dependencies.

Approvals:

Specify the names and roles of all persons who must approve the plan.
Provide space for signatures and dates. (If the document is to be printed.)

Defect Life Cycle (Bug Life Cycle)

Defect:
A fault in a program, which causes the program to perform in an unintended or unanticipated manner.

Defect Life Cycle (Bug Life cycle) is the journey of a defect from its identification to its closure. The Life Cycle varies from organization to organization and is governed by the software testing process the organization or project follows and/or the Defect tracking tool being used.

1) New: When QA files a new bug.

2) Assigned: The "Assigned To" field is set by the lead or manager, who assigns the bug to a developer.

3) Could not reproduce: If the developer is not able to reproduce the bug by following the steps given in the bug report by QA, the developer can mark the bug as CNR. QA then needs to check whether the bug is still reproducible and can reassign it to the developer with detailed reproduction steps.

4) Need more information: If the developer is not clear about the reproduction steps provided by QA, he/she can mark the bug as "Need more information". In this case QA needs to add detailed reproduction steps and assign the bug back to the developer for a fix.

5) Deferred: If the bug is not related to the current build, cannot be fixed in this release, or is not important enough to fix immediately, the project manager can set the bug status to deferred.

6) Rejected/Invalid: Sometimes the developer or team lead can mark a bug as rejected or invalid if the system is working according to the specifications and the bug is just due to some misinterpretation.

7) Resolved/Fixed: When the developer makes the necessary code changes and verifies them, he/she can mark the bug status as fixed, and the bug is passed to the testing team.

8) Reopen: If QA is not satisfied with the fix and the bug is still reproducible even after the fix, QA can mark it as reopen so that the developer can take appropriate action.

9) Closed: If the bug is verified by the QA team, the fix is confirmed, and the problem is solved, QA can mark the bug as closed.
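The statuses above form a small state machine, sketched below. The exact transitions vary by organization and defect tracking tool, so this table is an illustrative assumption.

```python
# Defect life cycle as a state machine: each status maps to the
# statuses it may legally move to.
TRANSITIONS = {
    "New":                   {"Assigned", "Rejected", "Deferred"},
    "Assigned":              {"Could Not Reproduce", "Need More Information",
                              "Deferred", "Rejected", "Fixed"},
    "Could Not Reproduce":   {"Assigned"},   # QA adds detailed steps
    "Need More Information": {"Assigned"},   # QA clarifies, reassigns
    "Deferred":              {"Assigned"},   # picked up in a later release
    "Rejected":              set(),
    "Fixed":                 {"Reopen", "Closed"},
    "Reopen":                {"Assigned"},
    "Closed":                set(),
}

def move(status: str, new_status: str) -> str:
    """Return the new status if the transition is legal, else raise."""
    if new_status not in TRANSITIONS[status]:
        raise ValueError(f"Illegal transition: {status} -> {new_status}")
    return new_status

status = move("New", "Assigned")
status = move(status, "Fixed")
print(move(status, "Closed"))   # Closed
```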