Sunday, 17 July 2022

First step towards Automation Testing

Automation testing 101

What is automation testing?

In the software testing world, there are two types of testing techniques – manual and automated. Both kinds aim to execute the test case, then compare the actual outcome with the expected result.

To put it simply, manual testing is a testing technique performed by human effort to ensure the software does everything it is supposed to do. So, what is automation testing? By contrast, it is the practice of running tests automatically, managing test data, and using the results to improve software quality.

If you are familiar with testing, you understand that successive development cycles require the execution of the same test suite repeatedly. This process can be extremely repetitive and time-consuming if you perform it manually. By leveraging a test automation tool, however, you can write the test suite once, replay it as required, reduce human intervention, and improve testing ROI.
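
To make the replay idea concrete, here is a minimal, self-contained sketch (the cart function and the test are illustrative toys, not a real suite):

```python
# A minimal sketch of a reusable automated check. Once scripted, the same
# suite can be replayed on every development cycle at no extra manual cost.

def add_to_cart(cart, item, price):
    """Toy system under test: returns a new cart with the item added."""
    updated = dict(cart)
    updated[item] = price
    return updated

def test_add_to_cart():
    cart = add_to_cart({}, "book", 9.99)
    assert cart == {"book": 9.99}                    # expected vs actual outcome
    assert add_to_cart(cart, "pen", 1.50)["pen"] == 1.50

# Re-running the suite is a single call, cycle after cycle:
for _ in range(3):                                   # e.g. three successive builds
    test_add_to_cart()
print("suite passed on every run")
```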

Why do we automate a test?

This is one of the questions I ask when I interview candidates for a test automation role, and to my surprise, many candidates seem to miss the main and most important reason to automate a test. Some of the answers I get are quite credible, but still not the answer I'm looking for. Among the answers I hear to the above question are:

Increase Test Coverage

This answer is quite valid, but how do we define coverage? If we have 100 tests, how can we measure the percentage coverage?

With a mature test automation practice in place, you could be running hundreds of tests in a relatively short period of time.

Because of this, we can create more test cases and test scenarios, and test with more input data for a given feature, thus gaining more confidence that the system is working as expected.

However, in testing, and especially in test automation, more tests don't really mean better quality or a better chance of finding bugs.

In a post where he discusses test coverage, Martin Fowler writes:

“If you make a certain level of coverage a target, people will try to attain it. The trouble is that high coverage numbers are too easy to reach with low quality testing. At the most absurd level you have AssertionFreeTesting. But even without that you get lots of tests looking for things that rarely go wrong distracting you from testing the things that really matter.”
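
Fowler's point can be made concrete with a hypothetical pair of tests: both execute the same code and earn identical coverage, yet only one of them can ever fail.

```python
# Illustrative sketch: both "tests" below execute the same code and count
# equally toward a coverage target, but only one performs a real check.

def apply_discount(price, percent):
    """Toy function under test."""
    return round(price * (1 - percent / 100), 2)

def test_assertion_free():
    apply_discount(200, 25)                     # runs the code, checks nothing

def test_meaningful():
    assert apply_discount(200, 25) == 150.0     # same coverage, real verification
    assert apply_discount(80, 0) == 80.0

test_assertion_free()
test_meaningful()
print("both tests pass; only one of them could ever fail")
```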

Save Time

This answer is also true: you can spend valuable time doing interesting exploratory testing while the automated tests are running. However, for a brand-new feature, it can actually take longer to write automated scripts than to test the feature manually in the first instance.

So, it is important to note that saving time with automated tests requires an initial, increased effort in scripting them, making sure they are code reviewed, and ensuring there are no hiccups in their execution.

Find More Bugs

This answer worries me sometimes, as I have never seen any metrics suggesting that more bugs were found by automation than by manual or exploratory testing. Automated tests generally check for regressions in the system after new code has been implemented.

There is always more chance of finding bugs in new features than in existing functionality. Furthermore, there are other reasons why automated tests fail to find defects.

Replace Manual Testers

This is probably the worst answer I have heard regarding why we automate a test. There is a clear distinction between what a manual tester does and what an automated test checks. Automated testing is not testing; it is the checking of facts.

In order to be able to automate a test, we have to know the expected outcome so that we can check for the valid or invalid outcome. This is what gives us true or false, positive or negative, pass or fail.
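
In other words, a check reduces to comparing a known expected outcome with the actual one. A minimal sketch (the login function is a toy stand-in):

```python
# An automated check knows the expected outcome up front and reduces to a
# pass/fail comparison. True/false, valid/invalid: nothing more.

def login_allowed(username, password):
    """Toy system under test."""
    return username == "alice" and password == "s3cret"

def check(actual, expected):
    return "pass" if actual == expected else "fail"

print(check(login_allowed("alice", "s3cret"), True))    # pass
print(check(login_allowed("alice", "wrong"), False))    # pass: invalid outcome was expected
print(check(login_allowed("alice", "wrong"), True))     # fail
```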

Testing, on the other hand, is an investigative exercise, where we design and execute tests simultaneously. Many things can behave differently in ways that only an observant human tester will notice.

Good manual testers will always be needed because of the different mindset and the ability to question the system.

Improve Quality

Although automated tests can give us quick feedback and alert us to the health of an application, so that we can revert any code change that has broken the system, automated testing on its own does not improve quality. Having a mature test automation practice in place does not guarantee that no bugs escape into production.

We can improve quality by ensuring correct practices are followed from start to finish of a development cycle. Quality is not an afterthought; it should be baked in right from the beginning. It is not enough to rely on automated tests to get a picture of the quality of the product.

Benefits of automation testing

Now that we have gone through what automation testing is, it is time to glance at several of its benefits, to help you decide whether automation testing is the right choice for your team. The following points highlight why automation testing is so important:

Simplify test execution

With automated testing tools, test scripts can be reused as often as you need, saving both time and effort. With manual testing, you would have to execute the same test case, step by step, over and over again.

Reduce human intervention

Utilizing automation tools, you can run automated tests unattended (overnight) without human intervention. Once written, the tests can be reused and executed unlimited times without additional cost. The tests are also available 24/7, unlike manual testers!

Speed up test execution

The speed of test execution and test coverage increases, thus shortening the software development cycles.

Increase test coverage on multiple platforms

Automation testing grants you the ability to run tests on multiple platforms in parallel, without writing redundant test cases for each browser version.
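
As a hypothetical sketch (the browser names are illustrative and the suite is simulated), the same checks can be fanned out across targets in parallel; in practice a tool such as Selenium Grid would drive real browsers:

```python
# Sketch: fan the same suite out across several browser targets in parallel.
# The browsers are simulated stand-ins; a real setup would launch them
# through an automation tool.
from concurrent.futures import ThreadPoolExecutor

TARGETS = ["chrome", "firefox", "edge", "safari"]

def run_suite(browser):
    # stand-in for "launch browser, run every test case, collect verdict"
    return browser, "pass"

with ThreadPoolExecutor(max_workers=len(TARGETS)) as pool:
    results = dict(pool.map(run_suite, TARGETS))

print(results)   # one verdict per target, produced concurrently
```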

When to opt for automation testing?

While QA teams turn their testing strategy towards a more inclusive automation approach to increase efficiency and coverage of the testing process, there are still testers wondering if automation testing is the right choice for them.

Automation is an integral part of a development cycle, so it is essential to determine what you want to achieve with it before making the switch. A test should meet certain criteria in order to be automated; otherwise, it may end up being a costly investment rather than a saving.

Ultimately, it is essential to remember that the goal of automation is to save time, effort, and money. Take the criteria below into account before making your decision:

  • High Risk – Business Critical test cases

    Some test cases carry severe risks that would have a negative impact on the business, including costs, customer dissatisfaction, and a poor user experience.
    If the whole testing process is run by a manual tester, even the most experienced one, there is always a higher possibility of error. Under risk-based testing, running these cases as automated tests is considered the better way, with higher priority given to preventing such unexpected errors.

  • Repetitive test cases

    There is no sense in applying automation testing tools to tests that will only ever be run once. Repeatable tests, by contrast, can be run on demand, reducing both the cost per test run and the time to complete a development cycle.

  • Functional test cases

    Functional testing is also an excellent place to take advantage of automation. You can quickly and reliably check the behavior of the functional requirements, achieving accuracy, interoperability, and compliance with ease.

5 Steps to get started with Automated Testing

Step 1: Defining the Scope of Automation

The scope of automation is the area of your Application Under Test that will be automated. Make sure you have walked through and know precisely your team's test state, the amount of test data, and the environment in which tests take place. Below are additional clues to help you determine the scope:

  • Technical feasibility
  • The complexity of test cases
  • The features or functions that are important for the business
  • The extent to which business components are reused
  • The ability to use the same test cases for cross-browser testing

Step 2: Selecting a Testing Tool

After determining your scope, it is time to pick a tool for automation testing. There is a wide range of automation tools available in the market, but the right choice depends largely on the technology on which the application is built. Each type of tool or framework serves different demands, so a thorough understanding of the various tool types is a prominent factor in choosing the best one.

Step 3: Planning, Designing, and Development

At this stage, you will create an automation strategy and plan. This plan can include the following items:

  • Your selected automation testing tool
  • Framework design and its features
  • A detailed timeline for scripting and executing test cases
  • In-scope and Out-of-scope items of automation
  • Goals and deliverables of automation testing process

Step 4: Executing Test Cases and Building Your Reports

Once you have finished all of the preceding steps, it is time to take action! You can write the scripts and run the tests automatically, either by running the code directly or by calling the application's API or user interface. After execution, the test report provides a consolidated summary of the testing performed so far on the project.
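
As a sketch of this step (the test names and their verdicts are invented), executing a batch of checks and consolidating the outcomes into a summary might look like:

```python
# Sketch of Step 4: run a batch of test cases, then build a consolidated
# report from the individual results. All checks here are illustrative.

def test_status_code():    return True
def test_payload_schema(): return True
def test_error_handling(): return False      # a deliberately failing check

suite = [test_status_code, test_payload_schema, test_error_handling]

results = {t.__name__: ("pass" if t() else "fail") for t in suite}
report = {
    "total":  len(results),
    "passed": sum(1 for r in results.values() if r == "pass"),
    "failed": sum(1 for r in results.values() if r == "fail"),
}
print(results)
print(report)                                # consolidated summary
```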

Step 5: Maintaining previous test cases

No matter how well you manage automation testing, test maintenance is unavoidable if you want to expand your collection of reusable test scripts. Once your automated tests have been scripted and are running, they still need updating whenever the application changes.

Conclusion

To sum up, this article has provided an introduction to automation testing, its benefits, and how to start your journey with it. We believe it is the best way to fulfill most testing goals within practical resource and time constraints in an Agile world. But choose the type of automation carefully against the requirements of the application, because no single tool can meet 100% of them.

Saturday, 19 October 2019

Performance Testing Terminology

Performance testing is a practice conducted to determine how a system performs in terms of responsiveness and stability under a particular workload. It can also serve to investigate, measure, validate or verify other quality attributes of the system, such as scalability, reliability and resource usage.

The focus of Performance Testing is checking a software program's:

  • Speed - Determines whether the application responds quickly
  • Scalability - Determines the maximum user load the software application can handle
  • Stability - Determines if the application is stable under varying loads

Performance Testing is popularly called “Perf Testing” and is a subset of performance engineering.

There are a number of terms used to refer to the different types of performance testing and associated tools and practices. Please find below a useful performance testing glossary to help you better understand some of the commonly used terms.

Non-functional requirements (NFRs) - Requirements that do not relate to the functioning of the system, but to other aspects of the system such as reliability, usability and performance

Performance Engineering - Activities designed to ensure a system will be designed and implemented to meet specific non-functional requirements. Often takes place after testing activities have highlighted weaknesses in the design and implementation.

Performance Test Plan - Typically a written document that details the objectives, scope, approach, deliverable, schedule, risks, data and test environment needs for testing on a specific project.

Performance Testing - Testing designed to determine the performance levels of a system

Reliability - Related to stability, reliability is the degree to which a system provides the same result for the same action over time under load.

Scalability - The degree to which a system’s performance and capacity can be increased typically by increasing available hardware resources within a set of servers (vertical scaling) or increasing the number of servers available to service requests (horizontal scaling).


Connect Time
Connect time is the time taken to establish a TCP connection between the server and the client. TCP guarantees delivery of data by its nature, using the TCP handshake. If the handshake is successful, the client can send further requests; that data send/receive task is not on the HTTP layer. If no TCP connection is made between the server and the client, the client cannot talk to the server. This can happen if the server is not live or is busy responding to other requests.

Latency Time
Latency is measured as the time taken for information to get to its target and back again: a round trip. Latency effectively means delay, which is a very problematic issue when working with remote data centers. Data hops through nodes until it reaches the client, so the greater the distance, the greater the delay. Those hops increase the response time and can violate your service level agreements (SLAs), which is why latency is the hardest factor to deal with. JMeter measures latency from the first moment of sending the request until the first byte is received, so in JMeter, connect time is included when calculating latency. There is also network latency, which expresses the time for a packet of data to get from one designated point to another.

Elapsed Time
Elapsed time is measured from the first moment of sending the request to the time the last byte of the response is received.

Latency time – Connect time = Server Processing Time

Elapsed time – Latency time = Download Time
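
Under illustrative sample figures, the two identities work out as follows:

```python
# The two identities above, applied to sample JMeter-style measurements.
# All values are in milliseconds and are invented for illustration.
connect_ms = 40      # TCP handshake established
latency_ms = 160     # first byte received (includes connect time in JMeter)
elapsed_ms = 420     # last byte received

server_processing_ms = latency_ms - connect_ms    # 160 - 40  = 120
download_ms          = elapsed_ms - latency_ms    # 420 - 160 = 260

print(server_processing_ms, download_ms)
```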

Throughput
Throughput is the number of units of work that can be handled per unit of time. From a performance testing perspective, it can be counted as requests per second, request calls per day, hits per second, or bytes per second. JMeter also lets you create assertions on response time, so you can set a request's pass/fail status according to its result.
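
As a quick illustrative calculation (the figures are made up):

```python
# Throughput as units of work per unit of time, for a sample test run.
requests_completed = 1800
test_duration_s = 120

throughput_rps = requests_completed / test_duration_s   # 1800 / 120 = 15.0 req/s
print(throughput_rps)
```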

Performance Thresholds
Performance thresholds are the KPIs or the maximum acceptable values of a metric. There are many metrics to be identified for a performance test project. That can be response time, throughput, and resource-utilization levels like processor capacity, memory, disk I/O, and network I/O.
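
A minimal sketch of threshold checking, assuming illustrative KPI names and values:

```python
# Sketch: compare measured metrics against their agreed maximum acceptable
# values and report any breaches. Names and numbers are illustrative.
thresholds = {"response_time_ms": 500, "cpu_percent": 80, "memory_percent": 75}
measured   = {"response_time_ms": 430, "cpu_percent": 92, "memory_percent": 70}

breaches = {k: v for k, v in measured.items() if v > thresholds[k]}
print(breaches)   # only the CPU threshold is breached here
```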

Saturation
Saturation means the point where a resource is at the maximum utilization point. At this point, if we are talking about a server, it cannot respond to any more requests.

Soak Testing - A type of Performance Testing used to evaluate the behavior of a system or component when the system is subjected to expected load over a sustained period of time

Spike Testing - A type of Performance Testing used to evaluate the behavior of a system or component when subjected to large short-term changes in demand. Normally this is to test how the system responds to large increases in demand, e.g. User Logins, Black Friday-like sales events etc.

Stability - The degree to which a system remains free of failures and errors under normal usage - for example, spurious errors when registering new users under load.

Stress Testing - A type of Performance Testing used to evaluate the behaviour of a system or component when subjected to load beyond the anticipated workload or by reducing the resources the system can use, such as CPU or memory.

Transaction Volume Model (TVM) - A document detailing the user journeys to be simulated, the click-path steps that make up the user journeys and associated load / transaction volume models to be tested. This should include information regarding the geographical locale from where users will be expected to interact with the system and the method of interaction e.g. mobile vs desktop.

User Journey - The path through the system under test that a group of Virtual Users will use to simulate real users. It should be noted that the key performance/volume-impacting journeys should be used, as it is impractical to performance test all possible User Journeys; a good rule of thumb is to use the 20% of User Journeys that generate 80% of the volume.
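
The 80/20 rule of thumb can be sketched as follows (the journey names and volumes are invented for illustration):

```python
# Sketch: pick the highest-volume user journeys until they cover 80% of
# total transaction volume. All data here is illustrative.
volumes = {"search": 500, "view_item": 300, "checkout": 120,
           "register": 50, "settings": 30}

total = sum(volumes.values())            # 1000 transactions overall
selected, covered = [], 0
for journey, vol in sorted(volumes.items(), key=lambda kv: kv[1], reverse=True):
    if covered >= 0.8 * total:           # stop once 80% of volume is covered
        break
    selected.append(journey)
    covered += vol

print(selected, covered)                 # two journeys already cover 800/1000
```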

Virtual User - A simulated user that performs actions as a real user would during the execution of a test.

Sunday, 30 July 2017

Test case for Whatsapp

Test case for Whatsapp

  • Verify that on downloading Whatsapp application, a user can register using a new mobile number.
  • Verify that for a new mobile number user will get a verification code on his mobile and filling the same verifies the new user account.
  • Check the maximum number of incorrect attempts allowed while filling the verification code.
  • Verify that registering an existing mobile number for new user account registration is not allowed.
  • Verify that on successful registration all the contacts in user's contact directory get imported to WhatsApp contact list.
  • Verify that user can set DP and status on Whatsapp.
  • Verify that user can update existing DP and Whatsapp status.
  • Verify that user can send a message to any individual selected from his contact list.
  • Verify that the 'Chats' window contains the list of all chats, with the DP, name, and last-message preview of the person with whom each chat was initiated.
  • Verify that clicking a chat in the chat list opens a new window containing all the chats received and sent with the other person.
  • Verify that user can check the message delivered and read time for a message in the 'Message Info' section.
  • Verify that user can share or receive a contact with the other person.
  • Verify that user can create a group adding multiple people from his contact list.
  • Verify that user can send and receive a message in group chats.
  • Verify that user can send and receive images, audio, video, emoticons in chat to individuals.
  • Verify that user can send and receive images, audio, video, emoticons in group chats.
  • Verify that user can send and receive chats in secondary languages available.
  • Verify that user can delete text, images, audio, video messages within a chat.
  • Verify that user can clear complete chat history in an individual or group chat.
  • Verify that user can archive chats in an individual or group chat.
  • Verify that user can block a user to prevent any message from getting received from the blocked contact.
  • Verify that user can make Whatsapp calls to a person in his contact list.
  • Verify that user can receive Whatsapp calls from a person in his contact list.
  • Verify that user can mark chats as favorite and access all chats marked as the favorite from the 'Favorites' section.

Chat settings test scenario


  • Verify that user can set a chat wallpaper.
  • Verify that user can set privacy settings like turning on/off last seen, online status, read receipts etc.
  • Verify that user can update notification settings like - notification sound, on/off, show preview for both group and individual chats.
  • Verify that user can take the complete chat backup of his chats.
  • Verify that user can update the phone number used by the WhatsApp application.
  • Verify that user can disable/delete his WhatsApp account.
  • Verify that user can check data usage by images, audio, video, and documents in WhatsApp chats.

Test Case for Video Uploading and Viewing

Test Case for Video Uploader Functionality


  • Verify that user can upload a single video of allowed format and size successfully.
  • Verify that while uploading, user should select the video license and type of video along with its attributes like - name, artist name, company etc.
  • Verify the maximum size of video permitted for upload, and check that any attempt to upload a video larger than the allowed value results in an error message.
  • Verify whether there is any minimum size of video permitted for upload, and that any attempt to upload a file smaller than specified results in an error message.
  • Verify all the video formats that are allowed for upload - .mp4, .3gp, .avi etc. and check that uploading file formats other than those allowed results in an error message.
  • Verify that uploading a blank file results in an error message.
  • Verify that user can upload multiple videos of allowed format and size successfully.
  • Verify that uploaders get notification of comments posted on the videos uploaded by them.
  • Verify that user can view likes, dislikes, and comments for their videos.
  • Verify that user can reply to the comments posted in their videos.


Test case for Video Viewing Functionality


  • Verify that video page can be opened by a direct link to a video.
  • Verify that on clicking the video play icon over the video, the video should play.
  • Verify all the video player controls- play, pause, volume, mute etc.
  • Verify that user can select the allowed video quality for playing the video.
  • Verify that once the video is complete, the user can replay the video using 'replay' icon.
  • Verify that video should be searchable by name, displaying the most relevant video on the top in search results.
  • Verify that other attributes of the video, like artist name and description, should also be searchable.
  • Verify that user should get auto suggestions while searching for videos in the youtube search bar.
  • Verify that search results should display information like video name, thumbnail, video length, view counts etc.
  • Verify that clicking the video thumbnails in the search results should lead to the video page.
  • Verify the video filtering and sorting options while searching for videos - like sorting by view count, likes, upload date etc.
  • Verify that user can view 'view count', 'comments', 'like' and 'dislikes' for a video.
  • Verify that with each view the 'view count' increases by one.
  • Verify that user can like or dislike a video and the corresponding count should increase by one.
  • Verify that user can comment in the comments section.
  • Verify that user should be presented with related videos in the sidebar section.
  • Verify that the related videos are related to the current video or are based on the past viewing history of the user.
  • Verify that clicking related video thumbnail should open the video.
  • Verify that for age restricted video, a user is asked to login to the youtube account.
  • Verify that logged-in user should see their history as well as recommended videos in the home page.
  • Verify that every video viewed goes to history for logged in user.
  • Verify that user can view or delete history items.

Saturday, 29 July 2017

Classification of Defects

Severity of Defects / Failures: Defects can be rated at four different levels according to the severity of the resulting failures.

The severity is the extent to which a defect causes a failure that degrades the expected operational capabilities of the system of which the software is a component. The severity is classified from the point of view of effect on business efficiency.

The software users who discover the defects are the best judges of a defect's severity.
The following classification of defect severity assumes that the software remains operational.

A) Critical Defects: These are extremely severe defects, which have already halted or are capable of halting the operation of a business system. The defect has stopped some business function, thereby forcing the deployment of manual workaround procedures into the operation.
Examples of critical defects can be

  • The software application refuses to run or stops in between;
  • Corruption of data;
  • Serious impact on customer operations, where the customer is aware of the consequences.

B) Major Defects: These are also severe defects, which have not halted the system but have seriously degraded the performance of some business function. Although this is also a failure, the business operation continues at a lower rate of performance, or only some portion of the business operation is disabled.

An example of a major defect in a financial application can be

Software allows the input of customer details but refuses to confirm the credit information. This affects the customer-facing operation, although the customer may not be aware of it.

C) Minor Defects:

These types of defects are ones that can or have caused a low-level disruption of the system or the business operation. Such defects can result in user inefficiency. Although software with a minor defect suffers from the failure, it continues to operate. Such a disruption, or the non-availability of some functionality, can be acceptable for a limited period.
Minor defects could cause corruption of some data values in a way that is tolerable for a short period, but the business operations continue despite some degradation in performance, although the business customer may not be aware of it.

D) Cosmetic Defects:

These types of defects are primarily related to the presentation or layout of the data. However, there is no danger of data corruption or incorrect values.

Examples of cosmetic defects are

Headings, banners, labels, or colors are absent or badly chosen. These can be considered failures of the software product despite the software being completely operational.
Cosmetic defects have no real effect on customer-facing operations, and many times customers never notice any degradation in performance. But cosmetic defects can cause frustration and annoyance among users due to the continued inefficiency of using such defective software.


Priority of Defects: When a defect is discovered, a suitable priority is assigned to it. This priority indicates the urgency with which the defect must be fixed.
Generally, defects of the greatest severity will be assigned the highest priority. However, due to overriding factors, a high priority may sometimes be allocated even to a minor or cosmetic defect. For example, a cosmetic defect in a presentation by the company's CEO can be assigned a high priority. Or sometimes it is wise to club together many easily resolvable, low-severity defects before undertaking the fixing of a higher-severity defect.


Different Levels of Priority:
1) Very Urgent: This priority is allocated to a defect that must be attended with immediate effect.
2) Urgent: This priority is allocated to a defect that must be attended to as early as possible generally at the next convenient break e.g. overnight or at the weekend.
3) Routine: This priority is allocated to a defect whose fixing is scheduled for the next or some particular release of the software.
4) Not Urgent: This priority is allocated to a defect that may be rectified whenever it is convenient.
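
As a sketch, assuming the severity and priority labels used in this post, a default mapping with room for overriding factors might look like:

```python
# Sketch: severity usually drives priority, but overriding factors
# (e.g. a cosmetic defect in the CEO's presentation) can bump it up.
DEFAULT_PRIORITY = {
    "critical": "very urgent",
    "major":    "urgent",
    "minor":    "routine",
    "cosmetic": "not urgent",
}

def assign_priority(severity, override=None):
    return override or DEFAULT_PRIORITY[severity]

print(assign_priority("critical"))                          # very urgent
print(assign_priority("cosmetic"))                          # not urgent
print(assign_priority("cosmetic", override="very urgent"))  # very urgent
```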

Tags: Software Testing, Software Quality, Software design defects, software data defects, quality Assurance, software defects

Root cause analysis


What is Root Cause Analysis?

Introduction:


RCA (Root cause analysis) is a mechanism for analyzing defects to identify their cause. We brainstorm, read, and dig into the defect to identify whether it was due to a “testing miss”, a “development miss”, or a “requirement or design miss”.

Doing RCA accurately helps to prevent defects in later releases or phases. If we find that a defect was due to a design miss, we can review the design documents and take appropriate measures. Similarly, if we find that a defect was due to a testing miss, we can review our test cases or metrics and update them accordingly.

RCA should not be limited to testing defects; we can do RCA on production defects as well. Based on the outcome of the RCA, we can enhance our test bed and include those production tickets as regression test cases to ensure that the defect, or similar kinds of defects, are not repeated.

How Root Cause Analysis is Done?

There are many factors which provoke defects to occur:


  • Unclear / missing / incorrect requirements
  • Incorrect design
  • Incorrect coding
  • Insufficient testing
  • Environment issues ( Hardware, software or configurations)


These factors should always be kept in mind while kicking off the RCA process.

There is no defined process for doing RCA. It basically starts and proceeds with brainstorming on the defect. The only questions we ask ourselves while doing RCA are “WHY?” and “WHAT?”. We can dig into each phase of the life cycle to trace where the defect was actually introduced.

Key Benefits of Root Cause Analysis in Software Testing


  • Analysis of near-misses
  • Reinforcement of quality control
  • Continuous improvement
  • Early identification of risks
  • Enhanced Project management
  • Improvement in Performance Management



Root Cause Analysis Techniques:


  • Failure Mode and Effects Analysis (FMEA)


This method derives from the risk analysis approach to identify points or areas where a system could fail.


  • Impact Analysis


This is yet another useful method, which provides the facility to analyze the positive and negative impact of a change on the different areas of the system or application that can be affected.


  • Kaizen or Continuous Improvement


Kaizen, or continuous improvement, gives significance to those in the organization who are able to identify places for improvement. By applying small changes or solutions, one can make the system better.


  • Identification of Defect Injected / Detected phase


The Defect Injected phase (the phase in which the defect was introduced) and the Defect Detected phase (the phase in which the defect was identified) are two important aspects for any tester to identify. The gap between the phase in which a defect was injected and the phase in which it was detected is known as the Defect Age.
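
A small sketch of the idea, assuming an illustrative ordered list of SDLC phases:

```python
# Sketch: defect age as the gap between the injected and detected phases.
# The phase list is illustrative; real projects name phases differently.
PHASES = ["requirements", "design", "coding",
          "unit_test", "system_test", "production"]

def defect_age(injected, detected):
    return PHASES.index(detected) - PHASES.index(injected)

print(defect_age("design", "system_test"))   # injected early, caught late: 3
print(defect_age("coding", "coding"))        # caught in the same phase: 0
```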

What is Defect Based Testing Technique

Defect Based Software Testing Technique:

A defect-based testing technique is one where test cases are derived on the basis of defects. Instead of using the traditional requirements documents or use cases (specification-based techniques), this strategy bases its test cases on defects.

A categorized list of defects (called a defect taxonomy) is used. The coverage achieved with this technique is not very systematic, so driving your test cases from this technique alone may not ensure a quality deliverable. It can, however, complement your test conditions and be taken as one option to increase the coverage of testing. In other words, this technique can be applied when all the test conditions and test cases have been identified and we need some extra coverage or insight into testing.


The defect-based technique can be used at any level of testing, but it is best suited to system testing. We should base our test cases on the available defect taxonomies as well; these defects can be production ones or historical ones. Root cause analysis can also be used to baseline your test cases.

The 5 step plan to write the test cases through defect based testing techniques can be:


  1. Identify and prioritize the requirements
  2. Identify and collect all the defects
  3. Brainstorm on the defect
  4. Link the defect to the requirement
  5. Write the test conditions/test cases based on the linked defects.
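The core of steps 2, 4 and 5 can be sketched in a few lines of Java; the defect and requirement IDs (DEF-101, REQ-1, and so on) are invented placeholders, not taken from any real tracker:

```java
import java.util.*;

public class DefectBasedTests {
    // Step 5: derive one test condition per defect, using the defect -> requirement links
    static List<String> deriveConditions(Map<String, String> defectToRequirement) {
        List<String> conditions = new ArrayList<>();
        for (Map.Entry<String, String> link : defectToRequirement.entrySet()) {
            conditions.add("TC: retest " + link.getKey() + " against " + link.getValue());
        }
        return conditions;
    }

    public static void main(String[] args) {
        // Step 2: collected defects; step 4: link each to the requirement it violated
        Map<String, String> links = new LinkedHashMap<>();
        links.put("DEF-101", "REQ-1");   // e.g. an input-validation defect
        links.put("DEF-201", "REQ-7");   // e.g. a concurrency defect
        deriveConditions(links).forEach(System.out::println);
    }
}
```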

Non Functional Testing --Web Applications

Non Functional Testing of Web Applications

Non-functional testing of web applications involves one or more of the following seven types of testing:

  1. Configuration Testing
  2. Usability Testing
  3. Performance Testing
  4. Scalability Testing
  5. Security Testing
  6. Recoverability Testing
  7. Reliability Testing
Let us discuss each of these types in detail:

1) Configuration Testing: This type of test covers:

a) The operating system platforms used.

b) The type of network connection.

c) Internet service provider type.

d) Browser used (including version).

The real work in this type of test is ensuring that the requirements and assumptions are understood by the development team, and that test environments covering those configurations are put in place to test them properly.

2) Usability Testing:

For usability testing, there are standards and guidelines established throughout the industry. End users tend to accept sites that follow these standards without a second thought, but the designer shouldn't rely on the standards completely.

While following these standards and guidelines when building the website, the designer should also consider learnability, understandability, and operability, so that users can operate the site easily.

3) Performance Testing: Performance testing involves testing a program for timely responses.

The time needed to complete an action is usually benchmarked, or compared, against either the time to perform a similar action in a previous version of the same program or against the time to perform the identical action in a similar program. The time to open a new file in one application would be compared against the time to open a new file in previous versions of that same application, as well as the time to open a new file in the competing application. When conducting performance testing, also consider the file size.

In this testing, the designer should also consider the page's loading time under higher transaction volumes. Requirements can be as simple as "a web page must load in less than eight seconds," or as complex as requiring the system to handle 10,000 transactions per minute while still loading a page within eight seconds.

Another variant of performance testing is load testing. Load testing for a web application can be thought of as multi-user performance testing, where you want to test for performance slow-downs that occur as additional users use the application. The key difference in conducting performance testing of a web application versus a desktop application is that the web application has many physical points where slow-downs can occur. The bottlenecks may be at the web server, the application server, or at the database server, and pinpointing their root causes can be extremely difficult.

We can create performance test cases by following steps:

a) Identify the software processes that directly influence the overall performance of the system.

b) For each of the identified processes, identify only the essential input parameters that influence system performance.

c) Create usage scenarios by determining realistic values for the parameters based on past use. Include both average and heavy workload scenarios. Determine the window of observation at this time.

d) If there is no historical data to base the parameter values on, use estimates based on requirements, an earlier version, or similar systems.

e) If the estimated values for a parameter form a range, select values that are likely to reveal useful information about the performance of the system. Each value should become a separate test case.

Performance testing can be done through the "window" of the browser, or directly on the server. If done on the server, some of the performance time that the browser takes is not accounted for.
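Steps (c) and (e) can be sketched as follows; the process name ("checkout"), parameter name, and workload values are invented for illustration, with one test case generated per candidate value:

```java
import java.util.*;

public class PerfScenarioBuilder {
    // Step (e): one test case per candidate value of a performance-critical parameter
    static List<String> buildCases(String process, String param, int[] values) {
        List<String> cases = new ArrayList<>();
        for (int v : values) {
            cases.add(process + ": " + param + "=" + v);
        }
        return cases;
    }

    public static void main(String[] args) {
        // Step (c): 50 = average workload, 500 = heavy workload,
        // ideally chosen from historical usage data
        List<String> cases = buildCases("checkout", "concurrentUsers", new int[]{50, 500});
        System.out.println(cases); // [checkout: concurrentUsers=50, checkout: concurrentUsers=500]
    }
}
```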


4) Scalability Testing:

The term "scalability" can be defined as a web application's ability to sustain its required number of simultaneous users and/or transactions while maintaining adequate response times to its end users.

When testing scalability, the configuration of the server under test is critical. All logging levels, server timeouts, etc. need to be configured. In an ideal situation, all of the configuration files should simply be copied from the test environment to the production environment, with only minor changes to the global variables.

In order to test scalability, the web traffic loads must be determined, to know what the threshold requirement for scalability should be. To do this, use existing traffic levels if there is an existing website, or choose a representative arrival model (exponential, constant, Poisson) to simulate how user "load" enters the system.
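As a rough sketch, a Poisson arrival process can be simulated by drawing exponentially distributed inter-arrival gaps; the rate and user count below are arbitrary example values:

```java
import java.util.Random;

public class PoissonLoad {
    // Generate user arrival times (seconds) with exponentially distributed
    // inter-arrival gaps, i.e. a Poisson arrival process with the given rate
    static double[] arrivalTimes(int users, double ratePerSecond, long seed) {
        Random rnd = new Random(seed);
        double[] t = new double[users];
        double clock = 0.0;
        for (int i = 0; i < users; i++) {
            // inverse-transform sampling of an exponential distribution
            clock += -Math.log(1.0 - rnd.nextDouble()) / ratePerSecond;
            t[i] = clock;
        }
        return t;
    }

    public static void main(String[] args) {
        double[] t = arrivalTimes(1000, 5.0, 42L); // ~5 users/second on average
        System.out.printf("last arrival = %.1f s%n", t[t.length - 1]);
    }
}
```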


5) Security Testing:

Probably the most critical criterion for a web application is that of security. The need to regulate access to information, to verify user identities, and to encrypt confidential information is of paramount importance. Credit card information, medical information, financial information, and corporate information must all be protected from persons ranging from the casual visitor to the determined cracker. There are many layers of security, from password-based security to digital certificates, each of which has its pros and cons.

We can create security test cases by following steps:

a) The web server should be set up so that unauthorized users cannot browse directories or the log files in which the website's data is stored.

b) Early in the project, encourage developers to use the POST command wherever possible, because POST carries data in the request body rather than exposing it in the URL.

c) When testing, check URLs to ensure that there are no "information leaks" due to sensitive information being placed in the URL while using a GET command.

d) A cookie is a text file placed on a website visitor's system that identifies the user; it is retrieved when the user revisits the site later. Users can control whether they allow cookies. If the user does not accept cookies, will the site still work?

e) Is sensitive information stored in the cookie? If multiple people use a workstation, the second person may be able to read the sensitive information saved from the first person's visit. Information in a cookie should be encoded or encrypted.
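As a minimal illustration of point (e), the JDK's Base64 codec can at least keep a cookie value from being stored as readable plain text (the cookie payload here is made up, and note that encoding only obscures the value; real protection needs proper encryption):

```java
import java.util.Base64;

public class CookieEncoding {
    public static void main(String[] args) {
        String value = "user=alice";  // hypothetical cookie payload
        // Base64 hides the value from casual inspection but is trivially reversible
        String encoded = Base64.getEncoder().encodeToString(value.getBytes());
        String decoded = new String(Base64.getDecoder().decode(encoded));
        System.out.println(encoded); // dXNlcj1hbGljZQ==
        System.out.println(decoded); // user=alice
    }
}
```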


6) Recoverability Testing:

The website should have a backup or redundant server to which traffic is rerouted when the primary server fails, and the rerouting mechanism must be tested. If users find your service unavailable for an excessive period of time, they will switch to a competitor's website. If the site can't recover quickly, inform users when it will be available and functional again.


7) Reliability Testing:

Reliability testing is done to evaluate the product's ability to perform its required functions and respond correctly under stated conditions for a specified period of time.

For example: users trust an online banking web application (service) to complete all of their banking transactions. They expect the results to be consistent, up to date, and in line with their requirements.

Sunday, 9 April 2017

Why & How WebDriver (Selenium) coding?

The limitations of Selenium IDE are:

1) Selenium IDE uses only its HTML-based command language (Selenese).
2) Conditional or branching statements (such as if/select) cannot be executed.
3) Looping statements are not directly possible in Selenium IDE's HTML language.
4) Reading from external files such as .txt or .xls is not possible.
5) Reading from external databases is not possible with the IDE.
6) There is no exception handling.
7) Neatly formatted reporting is not possible with the IDE.


Selenium IDE is very useful in the learning stage of Selenium, as it can record user actions. Recorded test cases can then be exported to the supported programming-language formats.
Sunday, 27 December 2015

Locators In Selenium

Locators:

Selenium WebDriver uses 8 locators to find elements on a web page. The following is the list of object identifiers, or locators, supported by Selenium.

We have prioritized the list of locators to be used when scripting.

id

Select the element with the specified @id attribute.

Name

Select the first element with the specified @name attribute.

Link Text

Select the link (anchor tag) element whose text matches the specified link text.

Partial Link Text

Select the link (anchor tag) element whose text contains the specified partial link text.

Tag Name

Locate elements using a tag name.

Class Name

Locate elements using the value of the @class attribute.

CSS

Select the element using CSS selectors. See the W3C CSS selectors specification for examples.

XPath

Locate an element using an XPath expression.

Locating an Element By ID:
The most efficient and preferred way to locate an element on a web page is by ID. An ID should be unique on the web page, so the element can be identified easily.
IDs are the safest and fastest locator option and should always be the first choice, even when there are other options; an ID is like an employee number or account number, which is unique.
Example 1:

<div id="toolbar">.....</div>

Example 2:

<input id="email" class="required" type="text"/>

We can write the scripts as

WebElement Ele = driver.findElement(By.id("toolbar"));

Unfortunately, there are many cases where an element does not have a unique id (or the ids are dynamically generated and unpredictable, as with GWT). In these cases we need an alternative locator strategy; however, where possible we should ask the development team of the web application to add a few ids to a page specifically for automation testing.

Locating an Element By Name:
When there is no id to use, it is next worth checking whether the desired element has a name attribute. Note, however, that names are not always unique; if multiple elements share a name, Selenium will always act on the first matching element.
Example:

<input name="register" class="required" type="text"/>
WebElement register= driver.findElement(By.name("register"));


Locating an Element By LinkText:
Finding an element by its link text is very simple, but make sure the link text is unique on the page. If there are multiple links with the same text (such as repeated header and footer menu links), Selenium will act on the first matching link.

Example:

<a href="http://www.seleniumhq.org">Downloads</a>
WebElement download = driver.findElement(By.linkText("Downloads"));


Locating an Element By Partial LinkText:
PartialLinkText works the same way as LinkText, except that you provide only part of the link text to locate the element.
Example:

<a href="seleniumhq.org">Download selenium server</a>
WebElement download = driver.findElement(By.partialLinkText("Download"));


Locating an Element By TagName:
TagName is useful for grouped elements such as select boxes, check-boxes, and dropdowns.
Below is example code:

Select select = new Select(driver.findElement(By.tagName("select")));
select.selectByVisibleText("Nov");
// or, equivalently, select by value:
select.selectByValue("11");

Locating an Element By Class Name:
There may be multiple elements with the same class name; if you use findElement with By.className, make sure the class matches only one element. If it doesn't, you need to narrow the match using the class name together with its sub-elements.
Example:

WebElement classtest = driver.findElement(By.className("sample"));

CSS Selector:
CSS is mainly used to provide style rules for web pages, but it can also be used to identify one or more elements on a page. If you start using CSS selectors to identify elements, you will love the speed compared with XPath.

CSS selectors also run at consistent speed in the IE browser, and they are often the best way to locate complex elements on a page.

Example:

WebElement checkElement = driver.findElement(By.cssSelector("input[id='email']"));

XPath Selector:
XPath is designed to allow the navigation of XML documents, with the purpose of selecting individual elements, attributes, or some other part of an XML document for specific processing

There are two types of XPath:

1. Absolute (native) XPath, which directs XPath along the full path from the root.
Example:
/html/body/table/tr/td

The advantage of an absolute path is that finding the element is straightforward, since the full path is given. But if anything in that path changes (something is added or removed), the XPath will break.

2. Relative XPath.
In a relative XPath we provide only part of the path, telling XPath to find the element starting from some point in between. The advantage is that changes elsewhere in the HTML don't break it, unless that particular sub-path changes. Locating can be slower, as every node may need to be checked to find the path.
Example:
//table/tr/td

Example Syntax to work with Image

    xpath=//img[@alt='image alt text goes here']

Example syntax to work with table

    xpath=//table[@id='table1']//tr[4]/td[2]
    xpath=(//table[@class='nice'])//th[text()='headertext']

Example syntax to work with anchor tag

    xpath=//a[contains(@href,'href goes here')]
    xpath=//a[contains(@href,'#id1')]/@class

Example syntax to work with input tags

    xpath=//input[@name='name2' and @value='yes']

Next, we will take a sample XML document and explain the different methods available to locate an element using XPath.
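The predicate-style examples above can be tried outside a browser using the JDK's built-in XPath engine; the XML fragment below is invented to mirror the table example:

```java
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import java.io.ByteArrayInputStream;

public class XPathDemo {
    public static void main(String[] args) throws Exception {
        // A tiny stand-in for a page containing a table with id 'table1'
        String xml = "<table id='table1'><tr><td>a</td><td>b</td></tr></table>";
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes()));
        XPath xpath = XPathFactory.newInstance().newXPath();

        // Relative path with a predicate, like //table[@id='table1']//tr[4]/td[2] above
        String second = xpath.evaluate("//table[@id='table1']//tr[1]/td[2]", doc);
        System.out.println(second); // b
    }
}
```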

Tuesday, 15 December 2015

Introducing WebDriver

Selenium

Selenium runs in many browsers and operating systems, and can be controlled by many programming languages and testing frameworks.

What is Selenium?

Selenium automates browsers. That's it. What you do with that power is entirely up to you. Primarily it is for automating web applications for testing purposes, but it is certainly not limited to just that. Boring web-based administration tasks can (and should) also be automated.

Selenium has the support of some of the largest browser vendors who have taken (or are taking) steps to make Selenium a native part of their browser. It is also the core technology in countless other browser automation tools, APIs and frameworks.
If you want to
  •   Create robust, browser-based regression automation
  •    Scale and distribute scripts across many environments

Then you want to use Selenium WebDriver; a collection of language specific bindings to drive a browser -- the way it is meant to be driven.
Selenium WebDriver is the successor of Selenium Remote Control which has been officially deprecated.

Introducing WebDriver

The primary new feature in Selenium 2.0 is the integration of the WebDriver API. WebDriver is designed to provide a simpler, more concise programming interface, along with addressing some limitations of the Selenium-RC API. Selenium-WebDriver was developed to better support dynamic web pages where elements of a page may change without the page itself being reloaded. WebDriver's goal is to supply a well-designed, object-oriented API that provides improved support for modern advanced web-app testing problems.

How Does WebDriver ‘Drive’ the Browser Compared to Selenium-RC?

Selenium-WebDriver makes direct calls to the browser using each browser’s native support for automation. How these direct calls are made, and the features they support depends on the browser you are using. Information on each ‘browser driver’ is provided later in this chapter.
For those familiar with Selenium-RC, this is quite different from what you are used to. Selenium-RC worked the same way for each supported browser. It ‘injected’ JavaScript functions into the browser when the browser was loaded and then used its JavaScript to drive the AUT within the browser. WebDriver does not use this technique. Again, it drives the browser directly using the browser’s built in support for automation.


Versions of Selenium

Different versions of Selenium

Selenium IDE
Firefox add-on Selenium IDE allows users to record and re-play user actions in Firefox. It supports exporting the recorded scripts into Selenium RC or Selenium WebDriver code.

Selenium 1 / Selenium RC
Selenium Remote Control, generally known simply as "Selenium" at the time and retroactively called Selenium 1, is the first version of the Selenium API. After the release of the second generation of Selenium, it came to be referred to by version number or name to distinguish it from the new API. It is now officially deprecated, but is still shipped within the Selenium WebDriver library for backward compatibility.

Selenium 2 / Selenium WebDriver
Selenium 2, a.k.a. Selenium WebDriver, is the latest API in the Selenium project; it replaces Selenium RC with fundamentally different mechanisms and currently dominates the web UI automation market.

Selenium 3
The next release of the Selenium project, which was only in staging at the time of writing. One possible major change would be breaking backward compatibility, i.e. Selenium RC would no longer be part of the Selenium release. More details can be followed and discussed on the Selenium Developers' forum.

WebDriver
The term "WebDriver" can have different meanings in various contexts:
  • A synonym for Selenium WebDriver / Selenium 2.
  • A tool called "WebDriver" that was created independently and later merged into Selenium.

Comparison of Selenium versions

Selenium 1 vs. Selenium RC: Essentially the same thing. "Selenium 1" has never been an official name, but is commonly used to distinguish between versions.

Selenium 2 vs. Selenium WebDriver: Essentially the same thing. The term "Selenium WebDriver" is now more commonly used.

Selenium RC vs. Selenium WebDriver: Selenium RC is the predecessor of Selenium WebDriver. It has been deprecated and is now shipped inside Selenium WebDriver for backward compatibility.

Selenium IDE vs. Selenium RC/WebDriver: Selenium IDE is a recording tool for automating Firefox, with the ability to generate simple RC/WebDriver code. Selenium RC/WebDriver are frameworks to automate browsers programmatically.

Selenium Grid vs. Selenium WebDriver: Selenium Grid is a tool to execute Selenium tests in parallel on different machines. Selenium WebDriver is the core library to drive web browsers on a single machine.

Selenium Relationships

Selenium RC Server / Selenium Server
Selenium RC Server was the Java-based package to run Selenium RC tests. With the release of Selenium WebDriver, Selenium (Standalone) Server was introduced as the super-set of the previous version, so that tests can be executed remotely in Selenium Grid mode. For Selenium WebDriver tests that are running locally, Selenium Server is not required.

Tuesday, 1 December 2015

QTP Interview Q/A

Basic Level

QTP Interview Q/A Part 1

QTP Interview Q/A Part 2

QTP Interview Q/A Part 3

QTP Interview Q/A Part 4

QTP Interview Q/A Part 5

 

 

 

QTP Interview Q/A Part 2

Q11. What phases are involved in testing an application in QTP?

The QuickTest Professional process consists of the following main phases:

    Analyzing the application: before preparing test cases, analyze the application to find out its testing needs.
    Preparing the testing infrastructure: based on the testing needs, create the required resources, such as a shared repository and function libraries.
    Building test cases: create test scripts containing the actions to be performed during testing. Associate the object repository and function libraries with the test.
    Enhancing the test: broaden the scope of the test by adding checkpoints, and add logic and conditions for checking purposes.
    Debugging, running and analyzing the test: debug the test so that it runs without interruption, then run it and analyze the test results generated by QTP.
    Reporting defects: log bugs in the bug report and send it to the development team.

Q12. How many types of recording modes does QTP have?


QTP provides three recording modes:

    Normal (default) recording: in this mode QTP identifies objects irrespective of their location on the screen, recording objects relative to the application window.
    Analog recording: used when the exact mouse movements and actions performed with the mouse are important, e.g. testing a paint application or a signature drawn with the mouse.
    Low-level recording: helps in identifying objects that QTP does not otherwise recognize; used when the location of an object changes on the screen.

Q13. What is an object repository?

Object Repository: when QTP learns an object from the application, it stores that object, along with its properties, in the object repository, which is then used to identify the object at run time. There are two types of object repository:

    Shared object repository: can be shared between multiple tests, but does not allow changes from within a single test. Mostly used in the keyword-driven methodology. Saved with the .tsr extension.
    Local object repository: linked to only one test. Here we can make changes in the repository, such as changing an object's properties or adding objects. Saved with the .mtr extension.

Q14. Explain the Step Generator in QTP.

The Step Generator in QTP helps create the steps that are performed on objects during testing. Uses of the Step Generator in QTP:

    Helps in debugging the script by making use of breakpoints.
    Adds steps that were missed while recording.
    Ensures that objects exist in the repository.
    Adds steps to the function library.

Q15. Explain the use of Action Split in QTP.


Action Split is used to split an action into two parts. There are two ways to split an action:

    Splitting into two sibling actions: both split actions are independent of each other.
    Splitting into a parent-child nested action: the second (child) action is called only after the parent action has executed; the child action depends on the parent.

QTP generates a duplicate copy of the object repository when we split an action. We can then add an object to one split action without adding it to the other split action's repository.

Q16. What is the purpose of loading QTP add-ins?


Add-ins are small programs or files that can be added to extend the capabilities of the system. The purposes of loading add-ins into QTP are:

    To extend QTP's ability to work with additional environments and controls.
    To improve support for the corresponding graphics and communications interfaces.
    To load only the required functions into memory.
    To access only those functions that are required for execution of the script.

Q17. What is a data-driven test in QTP?

Data-driven testing is an automation testing approach in which test input and output values are read from data files (which may include data pools). It is used when the values change over time. The data is loaded into variables in recorded or manually coded scripts. In QTP, data-driven tests are implemented through the parameterization process. When we make a test data-driven, we perform two extra steps:

    Converting the test to a data-driven test.
    Creating a corresponding data table.
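The two steps translate naturally into code; this sketch uses a plain Java array in place of QTP's data table, with invented login data and a stand-in for the real application call:

```java
public class DataDrivenSketch {
    // The "action" under test, parameterized instead of using recorded values
    static boolean login(String user, String password) {
        // stand-in for the real application call
        return "admin".equals(user) && "secret".equals(password);
    }

    public static void main(String[] args) {
        // The data table: one row per test iteration (invented values)
        String[][] table = {
            {"admin", "secret", "true"},
            {"admin", "wrong",  "false"},
            {"guest", "secret", "false"},
        };
        for (String[] row : table) {
            boolean actual = login(row[0], row[1]);
            boolean expected = Boolean.parseBoolean(row[2]);
            System.out.println(row[0] + "/" + row[1] + " -> "
                    + (actual == expected ? "PASS" : "FAIL"));
        }
    }
}
```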

Q18. How is parameterization used in QTP?

Parameterization is the process of replacing recorded values with variables that can take different values during execution of the script. QTP supports several ways of parameterizing, i.e. passing data:

    Using loop statements.
    Submitting test data dynamically.
    Using the data table.
    Fetching data from external files.
    Fetching data from databases.
    Taking test data from the front end (GUI).

Q19. Is it possible to call one action from another action in QTP?

Yes, QTP enables us to call one action from another. There are two ways of calling an action:

    Call to a copy of an action: this creates a copy of the action in our test, which we can then modify.
    Call to an existing action: this calls an action created earlier, generating a reference to it. We can access the action in read-only mode; no copy of the existing script or data table is made.

Q20. Explain the different types of actions in QTP.

When a test script is generated, it includes one action. An action contains the steps to be performed on the application in order to test it. There are three types of actions in QTP:

    Non-reusable action: can be called only once, and only by the test in which it is stored.
    Reusable action: can be called multiple times by the test in which it is stored.
    External action: a reusable action stored in another test. We can call an external action, but it is available in read-only mode; we cannot make changes to it.

Monday, 30 November 2015

QTP Interview Q/A Part 1

Q1.What is QuickTest Professional (QTP) ?

QuickTest is a graphical-interface record-playback automation tool. It is able to work with any web, Java, or Windows client application. QuickTest enables you to test standard web objects and ActiveX controls. In addition to these environments, QuickTest Professional also enables you to test Java applets and applications, multimedia objects in applications, standard Windows applications, Visual Basic 6 applications, and .NET Framework applications.
QTP was Mercury Interactive's functional testing tool; QTP stands for QuickTest Professional.

Mercury QuickTest Professional: provides the industry's best solution for functional test and regression test automation - addressing every major software application and environment. This next-generation automated testing solution deploys the concept of Keyword-driven testing to radically simplify test creation and maintenance. Unique to QuickTest Professional’s Keyword-driven approach, test automation experts have full access to the underlying test and object properties, via an integrated scripting and debugging environment that is round-trip synchronized with the Keyword View.

QuickTest Professional enables you to test standard Windows applications, Web objects, ActiveX controls, and Visual Basic applications. You can also acquire additional QuickTest add-ins for a number of special environments (such as Java, Oracle, SAP Solutions, .NET Windows and Web Forms, Siebel, PeopleSoft, Web services, and terminal emulator applications).

Q2.What’s the basic concept of QuickTest Professional (QTP)?

QTP is based on two concepts:
* Recording
* Playback

Q3.Which scripting language is used by QuickTest Professional (QTP)?

QTP uses VBScript.

Q4.How many types of recording facility are available in QuickTest Professional (QTP)?

QTP provides three types of recording methods-
* Context Recording (Normal)
* Analog Recording
* Low Level Recording

Q5.How many types of Parameters are available in QuickTest Professional (QTP)?

QTP provides three types of Parameter-
* Method Argument
* Data Driven
* Dynamic

Q6.What’s the QuickTest Professional (QTP) testing process?

The QTP testing process consists of seven steps-
* Preparing to record
* Recording
* Enhancing your script
* Debugging
* Run
* Analyze
* Report Defects

Q7.How to Start recording using QuickTest Professional (QTP)?

Choose Test > Record or click the Record button.
When the Record and Run Settings dialog box opens, do the following:
1. In the Web tab, select Open the following browser when a record or run session begins.
2. In the Windows Applications tab, confirm that Record and run on these applications (opened on session start) is selected, and that there are no applications listed.

Q8. How to insert a check point to a image to check enable property in QTP?

The image checkpoint does not have any property to verify the enabled/disabled state.
One thing you need to check:
* Find out from the developer whether he is showing different images for the active/inactive states, i.e. a greyed-out image. That is the only way a developer can show an activated/deactivated state if he is using an "image". Otherwise he might be using a button with an image heads-up display.
* If it is a button displayed with an image heads-up, you would need to use the object's properties as a checkpoint.

Q9.How to Save your test using QuickTest Professional (QTP)?

Select File > Save or click the Save button. The Save dialog box opens to the Tests folder.
Create a folder which you want to save to, select it, and click Open.
Type your test name in the File name field.
Confirm that Save Active Screen files is selected.
Click Save. Your test name is displayed in the title bar of the main QuickTest window.

Q10.How to Run a Test using QuickTest Professional (QTP)?

1 Start QuickTest and open your test.

If QuickTest is not already open, choose Start > Programs > QuickTest Professional > QuickTest Professional.

* If the Welcome window opens, click Open Existing.
* If QuickTest opens without displaying the Welcome window, choose File > Open or click the Open button.
In the Open Test dialog box, locate and select your test, then click Open.

2 Confirm that all images are saved to the test results.
QuickTest allows you to determine when to save images to the test results.

Choose Tools > Options and select the Run tab. In the Save step screen capture to test results option, select Always.

Click OK to close the Options dialog box.

3 Start running your test.

Click Run or choose Test > Run. The Run dialog box opens.
Select New run results folder. Accept the default results folder name.
Click OK to close the Run dialog box.

Manual Testing Interview Q/A -Part 5

Q61.What are the key challenges of software testing? (Deloitte, HSBC)

Following are some challenges of software testing:
1. Application should be stable enough to be tested.
2. Testing always under time constraint
3. Understanding the requirements.
4. Domain knowledge and business user perspective understanding.
5. Which tests to execute first?
6. Testing the Complete Application.
7. Regression testing.
8. Lack of skilled testers.
9. Changing requirements.
10. Lack of resources, tools and training

Q62.What makes a good QA or Test manager?(HSBC,Portware)

A good QA or test manager should have the following characteristics:
• Knowledge of the software development process
• Ability to improve teamwork to increase productivity
• Ability to improve cooperation between software, test, and QA engineers
• Ability to improve the QA processes
• Communication skills
• Ability to conduct meetings and keep them focused

Q63.What is Requirement Traceability Matrix(HSBC,Deloitte,Accenture,Infosys)

The Requirements Traceability Matrix (RTM) is a tool for making sure that project requirements remain consistent throughout the whole development process. An RTM is used in the development process for the following reasons:
To determine whether the developed project meets the requirements of the user.
To capture all the requirements given by the user.
To make sure the application requirements can be fulfilled in the verification process.
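An RTM is essentially a mapping from requirements to the test cases that cover them; this sketch (with invented REQ/TC IDs) shows how such a mapping also exposes uncovered requirements:

```java
import java.util.*;

public class TraceabilityMatrix {
    // Requirements with no covering test case - the gaps an RTM is meant to expose
    static List<String> uncovered(Map<String, List<String>> rtm) {
        List<String> gaps = new ArrayList<>();
        for (Map.Entry<String, List<String>> e : rtm.entrySet()) {
            if (e.getValue().isEmpty()) gaps.add(e.getKey());
        }
        return gaps;
    }

    public static void main(String[] args) {
        // Requirement ID -> test cases covering it (illustrative data)
        Map<String, List<String>> rtm = new LinkedHashMap<>();
        rtm.put("REQ-1", List.of("TC-01", "TC-02"));
        rtm.put("REQ-2", List.of("TC-03"));
        rtm.put("REQ-3", List.of());            // not yet covered
        System.out.println("Uncovered: " + uncovered(rtm)); // Uncovered: [REQ-3]
    }
}
```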

Q64.Difference between SRS, FRS and BRS(Portware,Infosys)

Distinction among BRS, FRS and SRS:

SRS (Software Requirement Specification):
• Deals with the resources provided by the company.
• Always includes use cases to describe the interactions of the system.
• Developed by the system analyst; also known as the User Requirement Specification.
• Describes all the business functionalities of the application.
• Explains all the functional and non-functional requirements.
• A complete document describing the behavior of the system to be developed.

FRS (Functional Requirement Specification):
• Deals with the requirements given by the client.
• Use cases are not included.
• Always developed by developers and engineers.
• Describes the particular functionality of every single page in detail, from start to end.
• Explains the sequence of operations to be followed in every single process.
• A document describing the functional requirements, i.e. all the functionalities that make the system easy and efficient for the end user.

BRS (Business Requirement Specification):
• Deals with aspects of the business requirements.
• Use cases are not included here either.
• Always developed by the business analyst.
• Defines what exactly the customer wants; this is the document the team follows from start to end.
• Explains the story of the whole requirement.
• A simple document describing the business requirements at a fairly broad level.

Q65. What is a Test Plan? (Asked in almost all interviews)
A test plan is a document describing the scope, approach, resources, and schedule of intended test activities. It identifies, among other things, the test items, the features to be tested, the testing tasks, who will do each task, the degree of tester independence, the test environment, the test design techniques, the entry and exit criteria to be used and the rationale for their choice, and any risks requiring contingency planning. It is a record of the test planning process.
Test Plan Identifier:
Provide a unique identifier for the document. (Adhere to the Configuration Management System if you have one.)
Introduction:
Provide an overview of the test plan.
Specify the goals/objectives.
Specify any constraints.
References:
List the related documents, with links to them if available, including the following:
Project Plan
Configuration Management Plan
Test Items:
List the test items (software/products) and their versions.
Features to be Tested:
List the features of the software/product to be tested.
Provide references to the Requirements and/or Design specifications of the features to be tested.
Features Not to Be Tested:
List the features of the software/product which will not be tested.
Specify the reasons these features won’t be tested.
Approach:
Mention the overall approach to testing.
Specify the testing levels [if it’s a Master Test Plan], the testing types, and the testing methods [Manual/Automated; White Box/Black Box/Gray Box]
Item Pass/Fail Criteria:
Specify the criteria that will be used to determine whether each test item (software/product) has passed or failed testing.
Suspension Criteria and Resumption Requirements:
Specify criteria to be used to suspend the testing activity.
Specify testing activities which must be redone when testing is resumed.
Test Deliverables:
List test deliverables, and links to them if available, including the following:
Test Plan (this document itself)
Test Cases
Test Scripts
Defect/Enhancement Logs
Test Reports
Test Environment:
Specify the properties of test environment: hardware, software, network etc.
List any testing or related tools.
Estimate:
Provide a summary of test estimates (cost or effort) and/or provide a link to the detailed estimation.
Schedule:
Provide a summary of the schedule, specifying key test milestones, and/or provide a link to the detailed schedule.
Staffing and Training Needs:
Specify staffing needs by role and required skills.
Identify training that is necessary to provide those skills, if not already acquired.
Responsibilities:
List the responsibilities of each team/role/individual.
Risks:
List the risks that have been identified.
Specify the mitigation plan and the contingency plan for each risk.
Assumptions and Dependencies:
List the assumptions that have been made during the preparation of this plan.
List the dependencies.
Approvals:
Specify the names and roles of all persons who must approve the plan.
Provide space for signatures and dates. (If the document is to be printed.)

Q66. Describe how to perform risk analysis during software testing.

Risk analysis is the process of identifying risks in the application and prioritizing them for testing. Following are some examples of risks:

1. New Hardware.
2. New Technology.
3. New Automation Tool.
4. Sequence of code delivery.
5. Availability of application test resources.

We prioritize risks into three categories:

• High magnitude: the bug impacts other functionality of the application.
• Medium: tolerable in the application, but not desirable.
• Low: tolerable; this type of risk has no impact on the company's business.
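One common way to turn this into a repeatable prioritization (a generic sketch, not something prescribed by the answer above; the ratings and thresholds are invented) is to score each risk as likelihood x impact and bucket the result:

```python
def risk_level(likelihood, impact):
    """Bucket a risk by likelihood x impact, each rated 1 (low) to 3 (high)."""
    score = likelihood * impact
    if score >= 6:
        return "High"
    if score >= 3:
        return "Medium"
    return "Low"

# Hypothetical risks from the list above, with made-up ratings.
risks = {
    "New automation tool": (2, 3),
    "Changing requirements": (3, 3),
    "New hardware": (1, 2),
}
# Print the risks from highest score to lowest.
for name, (l, i) in sorted(risks.items(), key=lambda kv: -(kv[1][0] * kv[1][1])):
    print(f"{name}: {risk_level(l, i)}")
```

The highest-scoring risks are then tested first, which is the essence of risk-based test prioritization.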

Q67. Difference between System Testing and Acceptance Testing (Infosys, Accenture, Portware, Cigniti)

1. The user is not involved in system testing.
1. The user is completely involved in acceptance testing.
2. System testing is not the final stage of validation.
2. Acceptance testing is the final stage of validation.
3. System testing of software or hardware is testing conducted on a whole, integrated system to evaluate the system's compliance with its specified set of requirements.
3. Acceptance testing of software or hardware is testing conducted to evaluate the system's compliance with its specified set of user requirements.
4. System testing checks how the system functions as a whole; here the functionality and performance of the system are validated.
4. Acceptance testing is done by the developer before releasing the product to check whether it meets the user's requirements, and by the user to decide whether to accept the product.
5. System testing checks whether the system meets the defined specifications.
5. Acceptance testing checks whether the system meets the defined user requirements.
6. System testing determines whether the developers and testers are satisfied that the system meets its specifications.
6. Acceptance testing determines whether the customer is satisfied with the software product.

Q68. What is Sanity Testing? Explain it with an example.
Sanity testing is surface-level testing in which a QA engineer verifies that all the menus, functions, and commands available in the product are working fine.
Sanity testing is performed after the build has cleared the smoke test and has been accepted by the QA team for further testing; it checks the major functionality at a finer level of detail.
When do we perform sanity testing?
Sanity testing is performed when the development team needs a quick read on the state of the product after code changes, or when there is a controlled code change to fix a critical issue and a stringent release time-frame does not allow complete regression testing.
Conclusion:
Sanity testing is mostly done after a retest (a retest is done after fixing a bug). Smoke tests are usually scripted, while sanity tests usually are not.

Q69.Difference between Static and Dynamic Testing

Static Testing
It is done in the verification phase.
It is about prevention: "how do we prevent defects?"
It is generally considered the more cost-effective activity, because defects are found before the code is ever executed.
It is not considered a time-consuming task.
Static testing is also known as dry-run testing.
Techniques of static testing include inspections, reviews, and walkthroughs.
Dynamic Testing
It is done in the validation phase.
It is about cure: "how do we find and fix defects?"
It is considered the less cost-effective activity, because defects found at this stage are more expensive to fix.
It is considered a time-consuming task, because it requires several test cases to be executed.
It can find errors that static testing cannot find, and it is a high-level exercise.
The technique of dynamic testing is actual test execution against the running software.

Q70. What is Regression Testing?

Regression testing is conducted after any bug fix or any functionality change.
Regression testing verifies that the modified code does not break the existing functionality of the application and still works within the requirements of the system.
Usually the module containing the bug fix is tested, but during regression testing the tester also checks the entire system to verify that the fix has no adverse effect on existing functionality.
There are broadly two strategies for regression testing: 1) run all tests, or 2) run a subset of tests chosen by a test case prioritization technique.
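The second strategy, running a prioritized subset, can be sketched as follows. The test names, priorities, and change information are invented for illustration; real test-selection tools use code coverage or dependency data instead of a hand-maintained flag:

```python
# Hypothetical regression suite: (test name, priority, touches_changed_module).
suite = [
    ("test_login", 1, True),
    ("test_checkout", 1, False),
    ("test_profile_edit", 2, True),
    ("test_help_page", 3, False),
]

def select_regression_subset(tests, max_priority=2):
    """Pick tests that exercise the changed module, plus all high-priority tests."""
    return [name for name, prio, touches in tests
            if touches or prio <= max_priority]

print(select_regression_subset(suite))
# ['test_login', 'test_checkout', 'test_profile_edit']
```

The low-priority test that does not touch the changed code is skipped, which is exactly the trade-off of strategy 2: faster feedback at the cost of slightly less coverage per run.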

Q71. Why do we use Stubs and Drivers? (Accenture, Kofax)

Stubs are dummy modules that act as "called programs". They are used in top-down integration testing when the lower-level sub-programs are still under construction.
Stubs simulate the low-level modules.
Drivers are dummy modules that act as "calling programs". They are used in bottom-up integration testing when the main programs are still under construction.
Drivers simulate the high-level modules.
In short: stubs stand in for "called" functions in top-down integration, and drivers stand in for "calling" functions in bottom-up integration.
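A minimal sketch of both ideas (the module names are hypothetical): in top-down integration the stub stands in for a lower-level module that is still under construction, while in bottom-up integration the driver exercises a finished lower-level module on behalf of a main program that does not exist yet:

```python
# --- Top-down: the high-level module is ready, the low-level one is not. ---
def get_exchange_rate_stub(currency):
    """Stub: a dummy 'called' module that fakes the real rate lookup."""
    return 1.0  # canned answer, good enough to integrate the caller

def convert_price(amount, currency, rate_lookup=get_exchange_rate_stub):
    """High-level module under test; it calls the (stubbed) low-level module."""
    return amount * rate_lookup(currency)

# --- Bottom-up: the low-level module is ready, the main program is not. ---
def apply_discount(amount, percent):
    """Finished low-level module."""
    return amount * (1 - percent / 100)

def discount_driver():
    """Driver: a dummy 'calling' program that exercises apply_discount."""
    assert apply_discount(200, 10) == 180
    return "ok"

print(convert_price(50, "EUR"), discount_driver())  # 50.0 ok
```

When the real modules are finished, the stub and the driver are simply thrown away and replaced by the genuine caller and callee.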

Q72. Define the Scrum methodology (Portware, Infosys)

Scrum is an Agile framework in which work is delivered incrementally in short, fixed-length iterations called sprints. In each sprint, the following tasks are carried out:
Analysis and Design
Elaborate user stories with detailed scenarios/ acceptance criteria
Development, Testing and Delivery for Sprint Review and Acceptance
Input to this phase will be the elaborated user stories generated earlier. ITCube developers and testers will work on the tasks identified for them.
Deliveries will be made to Customer at the end of Sprint.
Sprint Review and Acceptance
At the end of each Sprint, a Sprint Review meeting will be held where ITCube team will demonstrate the features developed in the Sprint to relevant stakeholders and customers.
The Customer will carry out acceptance testing against the acceptance criteria defined during the Sprint planning meeting and report back issues.
ITCube will fix issues and re-deliver the features/code to the Customer.
The Customer will re-test and accept the features.
The roles in the Scrum/Agile methodology, the side each role comes from, and its responsibilities are listed below:

Product Owner (Customer)
• Drives the software product from the business perspective
• Defines and prioritizes requirements
• Leads the release planning meeting
• Arrives at the release date and content
• Leads sprint planning meetings
• Prioritizes the product backlog
• Makes sure that valuable requirements take priority
• Accepts the software work product at the end of each sprint

Scrum Master (ITCube)
• Ensures that the release planning meeting takes place
• Ensures sprint planning meetings take place
• Conducts daily stand-up meetings
• Removes roadblocks
• Tracks progress
• Aligns team structure and processes as required from time to time
• Conducts demo meetings at the end of sprints
• Records learning from retrospective meetings
• Implements learning from previous sprints

Customer(s) (Customer)
• Define stories (requirements)
• Elaborate stories as and when required by developers and testers
• Define acceptance criteria
• Perform acceptance testing
• Accept stories upon completion of acceptance testing

Architect Council (Customer and ITCube)
• Led and driven by the Scrum Master
• Arrives at the initial application architecture
• Adjusts and refines the architecture as required
• Addresses architecture issues as required
• Defines best practices to be used in development, such as design patterns and domain-driven design
• Defines tools to be used in development
• Defines best practices to be used in testing
• Defines tools to be used in testing

Developer (Delivery Team, ITCube)
• Analyzes stories for estimating effort
• Functional design
• Technical design
• Coding
• Unit testing
• Supports user acceptance testing
• Technical documentation

Tester (Delivery Team, ITCube)
• Writes test cases
• System testing
• Stress testing

Stakeholders (Customer)
• Enable the project
• Reviews
Best practices
Given below are some of the best practices in Scrum/Agile project execution methodology. These best practices revolve around various meetings or artifacts. ITCube will endeavor to follow as many of these as possible. ITCube may tailor these to suit the project requirements.
Best practices for meetings
As Agile promotes communication, these best practices are focused around various meetings in Agile – Release Planning, Sprint Planning, Daily Standup, Demo and Retrospectives.
Daily Standup
Each day during the sprint, a project status meeting occurs. This is called “the daily standup”. This meeting has specific guidelines:
The meeting starts precisely on time.
All are welcome, but only Delivery Team may speak
The meeting is time-boxed to short duration like 15 minutes
The meeting should happen at the same location and same time every day
During the meeting, each team member answers three questions:
What have you done since yesterday’s meeting?
What are you planning to do today?
Do you have any problems in accomplishing your goal?
Scrum Master should facilitate resolution of these problems. Typically this should occur outside the context of the Daily Standup.
Post-standup (optional)
Held each day, normally after the daily standup.
These meetings allow clusters of teams to discuss their work, focusing especially on areas of overlap and integration.
A designated person from each team attends.
The agenda will be the same as the Daily Standup, plus the following four questions:
What has your team done since we last met?
What will your team do before we meet again?
Is anything slowing your team down or getting in their way?
Are you about to put something in another team’s way?
Sprint Planning Meeting
At the beginning of sprint, a “Sprint Planning Meeting” is held.
Select what work is to be done
Prepare the sprint backlog, which details the time it will take to do that work, with the entire team
Identify and communicate how much of the work is likely to be done during the current sprint
Eight hour time limit
(1st four hours) Product Owner + Team: dialog for prioritizing the Product Backlog
(2nd four hours) Team only: hashing out a plan for the sprint, resulting in the sprint backlog
Sprint Review Meeting (Demo)
Review the work that was completed and not completed
Present the completed work to the stakeholders (“the demo”)
Incomplete work cannot be demonstrated
Four hour time limit
Sprint Retrospective
All team members reflect on the past sprint
Make continuous process improvements
Two main questions are asked in the sprint retrospective:
What went well during the sprint?
What could be improved in the next sprint?
Three hour time limit
Best practices for Artifacts
If the team uses a tool like Rally, most of the artifacts will be stored in the tool.
Product backlog
The product backlog is a high-level document for the entire project.
Contains backlog items like broad descriptions of all required features, wish-list items, etc. prioritized by business value.
It is the “What” that will be built.
Open and editable by anyone and contains rough estimates of both business value and development effort.
These estimates help the Product Owner to gauge the timeline and, to a limited extent, priority.
The product backlog is the property of the Product Owner. Business value is set by the Product Owner. Development effort is set by the Team.
Sprint backlog
Describes how the team is going to implement the features for the upcoming sprint
Features are broken down into tasks; as a best practice, tasks are normally estimated between four and sixteen hours of work
With this level of detail the whole team understands exactly what to do, and anyone can potentially pick a task from the list
Tasks on the sprint backlog are never assigned; rather, tasks are signed up for by the team members as needed, according to the set priority and the team member skills.
It is the property of the Team. Estimations are set by the Team. Often an accompanying Task Board is used to see and change the state of the tasks of the current sprint, like “to do”, “in progress” and “done”. Tools like Rally make this very easy.
Burn down charts
It is a publicly displayed chart showing remaining work in the sprint backlog.
Updated every day
Gives a simple view of the sprint progress.
Provides quick visualizations for reference.
Tools like Rally give almost current burn down charts at any point of time
It is NOT an earned value chart.
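As a rough illustration of what a burn-down chart plots (the hours below are invented), it is simply the remaining work recorded each day, compared against an ideal straight line that burns the initial work down to zero at a constant rate:

```python
# Hypothetical 5-day sprint: remaining task hours recorded each morning.
remaining = [40, 34, 30, 18, 6]
days = len(remaining)

# Ideal line: the initial work burned down to zero at a constant rate.
ideal = [remaining[0] * (1 - d / (days - 1)) for d in range(days)]

for day, (actual, target) in enumerate(zip(remaining, ideal), start=1):
    status = "behind" if actual > target else "on track"
    print(f"Day {day}: {actual:5.1f}h remaining (ideal {target:5.1f}h) - {status}")
```

Tools like Rally produce this chart automatically; the point of the sketch is only that "behind" or "ahead" falls out of comparing the actual remaining work against the ideal line for that day.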

Q73. Bug Report (QC 10.00)

Summary: Wrong login name is displayed.
Description: Wrong login name is displayed on Home Page.
Steps to Reproduce:
1. Enter the URL Http:// ----------------- in the address bar.
2. Fill in the username field in the login form.
3. Fill in the password field in the login form.
4. Click the Login button below the form.
Test data: Enter xyz in the username field.
Environment: Windows 7, Safari, etc.
Expected Result: The correct login name should be displayed on the home page.
Actual Result: After logging in as xyz, a wrong login name is displayed on the home page, e.g. “Logged in as ABC”.
Screenshot: Attach a screenshot showing where you detected the bug, saved in JPEG format.
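The fields above form a fixed structure, so a bug report is easy to model as a simple record. A sketch using the same hypothetical values as the example (the class and field names are my own, not a QC 10.00 API):

```python
from dataclasses import dataclass

@dataclass
class BugReport:
    """Minimal bug report record mirroring the fields listed above."""
    summary: str
    description: str
    steps: list       # ordered steps to reproduce
    test_data: str
    environment: str
    expected: str
    actual: str

bug = BugReport(
    summary="Wrong login name is displayed.",
    description="Wrong login name is displayed on Home Page.",
    steps=[
        "Enter the URL in the address bar.",
        "Fill in the username field in the login form.",
        "Fill in the password field in the login form.",
        "Click the Login button below the form.",
    ],
    test_data="Enter xyz in the username field.",
    environment="Windows 7, Safari",
    expected="The correct login name should be displayed on the home page.",
    actual="After logging in as xyz, 'Logged in as ABC' is displayed.",
)
print(bug.summary)
```

Keeping the expected and actual results as separate fields is what makes a report actionable: the defect is precisely the difference between the two.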
When should the testing be stopped?
It depends on the risks for the system being tested. There are some criteria based on which you can stop testing:
Deadlines (testing, release)
The test budget has been depleted
The bug rate falls below a certain level
Test cases are completed with a certain percentage passed
The alpha or beta testing period ends
Coverage of code, functionality, or requirements reaches a specified point
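Several of these exit criteria can be combined into one mechanical check. A sketch with illustrative thresholds (the 95% pass rate and 80% coverage targets are made-up defaults, not standard values):

```python
def should_stop_testing(pass_rate, coverage, open_critical_bugs,
                        min_pass_rate=0.95, min_coverage=0.80):
    """Rough exit-criteria check: stop when the pass rate and coverage meet
    their targets and no critical bugs remain open."""
    return (pass_rate >= min_pass_rate
            and coverage >= min_coverage
            and open_critical_bugs == 0)

print(should_stop_testing(0.97, 0.85, 0))  # True
print(should_stop_testing(0.97, 0.85, 2))  # False: critical bugs still open
```

Criteria like deadlines and budget are business decisions that sit outside such a check; in practice they can override it in either direction.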

Q74. How do you deal with a bug that is not reproducible? (Helius, HSBC)
A bug may not be reproducible for the following reasons:

1. Low memory.
2. Access to a memory location that is not available.
3. Things happening in a particular sequence.

A tester can do the following things to deal with a non-reproducible bug:

• Include the steps that are close to the error statement.
• Evaluate the test environment.
• Examine and evaluate the test execution results.
• Keep resource and time constraints in mind.

Q75. What is the difference between QA and QC? (ADP, Accenture, Kofax)

Quality Assurance (QA): QA refers to the planned and systematic way of monitoring the quality of the process that is followed to produce a quality product. QA tracks the outcomes and adjusts the process to meet expectations.

Quality Control (QC): QC is concerned with the quality of the product. QC finds defects and suggests improvements. The process set by QA is implemented by QC. QC is the responsibility of the tester.