Testing and Quality Assurance Process

The goal of the quality assurance process is to ensure that the work we release is of high quality. High-quality work creates excellent end-user experiences and thereby increases client confidence. QA also improves our work processes and efficiency.

Below is the software testing life cycle (STLC) that we follow within our testing process.

1. Requirements Analysis

In this phase, testers analyze the client requirements and work with developers to understand which requirements are testable and how they will be tested.

2. Test Planning

In test planning, our QA team decides what needs to be tested, how the testing will be done, the test strategy to follow, the test environment, hardware and software availability, resources, risks, etc. A high-level test plan document covering all of these planning inputs is created and circulated to the stakeholders. (We usually use the IEEE 829 test plan template.)

3. Test Analysis

In this phase, testers dig deeper into the project and work out what testing needs to be carried out in each SDLC phase.

Automation activities are also decided in this phase: whether automation needs to be done for the product, how it will be done, how much time it will take, which features need to be automated, etc.

4. Test Design

In this phase, various black-box and white-box test design techniques are used to design the test cases. If automation testing is planned, the automation scripts are also written in this phase.

5. Test Execution and Bug Reporting

Once unit testing is done by the developers and the test team receives the test build, the test cases are executed and defects are reported in a bug tracking tool. After test execution is completed and all defects are reported, test execution reports are created and circulated to project stakeholders. Once the developers fix the bugs, they hand the updated build to the testers, who then perform re-testing and regression testing to ensure that the defects have been fixed and have not affected any other areas of the system.

6. Final Testing and Implementation

This is when the final testing is done. Non-functional testing, such as load testing, is performed in this phase. Final test execution reports and documents are also prepared here.

We carry out a few different types of testing, as described below:

Component Testing (Unit Testing)

We test the system modules to check whether individual components/units of source code are fit for use. Unit tests are typically written and run by our developers to ensure that the code meets its expected design and behaves according to the approved System Requirement Specification (SRS) and UI (PSD).
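As a minimal sketch of what such a developer-written unit test looks like, here is a hypothetical `apply_discount` function (the function and its rule are illustrative assumptions, standing in for a requirement from an SRS) exercised with Python's standard `unittest` module:

```python
import unittest

# Hypothetical unit under test: a discount rule imagined to come
# from an SRS requirement. Not part of any real product code.
def apply_discount(price, percent):
    """Return price reduced by the given percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("discount must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_discount_is_rejected(self):
        # The SRS-style rule: discounts outside 0-100% are invalid.
        with self.assertRaises(ValueError):
            apply_discount(100.0, 120)

if __name__ == "__main__":
    unittest.main(exit=False)
```

Each test checks one behavior against the specification, so a failure points directly at the requirement that was broken.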

Integration Testing

This test proves that all areas of the system interface with each other correctly and that there are no gaps in the data flow. The final integration test proves that the system works as an integrated unit once all the fixes are completed.
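The "no gaps in the data flow" idea can be sketched with two hypothetical components, an order module handing its output to an invoicing module; the test asserts that nothing is lost at the interface between them (both components and their contract are illustrative assumptions):

```python
# Minimal integration-test sketch with two hypothetical components.
def create_order(items):
    """Order component: returns an order record."""
    return {"items": items, "total": sum(price for _, price in items)}

def build_invoice(order):
    """Invoicing component: consumes the order record."""
    return {"lines": [name for name, _ in order["items"]],
            "amount_due": order["total"]}

def test_order_to_invoice_data_flow():
    order = create_order([("widget", 10.0), ("gadget", 5.5)])
    invoice = build_invoice(order)
    # Interface contract: every item appears on the invoice,
    # and the totals agree across the two components.
    assert invoice["lines"] == ["widget", "gadget"]
    assert invoice["amount_due"] == order["total"] == 15.5

test_order_to_invoice_data_flow()
```

Unlike a unit test, the assertion here is about the hand-off between components rather than either component in isolation.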

System Testing

System testing is the testing of the behavior of a complete and fully integrated software product, based on the System Requirement Specification (SRS) and System Design Specification (approved PSD/UI) documents. We focus strongly on this to ensure that developed systems meet or exceed business, functional and end-user requirements in each sprint.

User Interface (UI) Testing

User interface testing is one of the most important system tests for assuring the quality of a project. UI testing evaluates whether the system or its components pass data and control correctly to one another. It is carried out by our QA team to ensure that the graphical user interface (GUI) has been developed as specified in the approved Design/UI/PSD.

UI testing includes both desktop design and responsive design testing for agreed resolutions.

Standard screen widths: 2560px, 1920px, 1680px, 1360px, 1280px, 1199px, 1024px, 960px, 768px, 480px and 320px.

These resolutions may differ depending on client requirements.
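A responsive check over the widths above can be made data-driven, so the same test runs once per agreed resolution. A minimal sketch, assuming a hypothetical `page_renders_correctly` hook (in practice that hook would drive a real browser, for example via Selenium, resize the window, and inspect the layout):

```python
# Breakpoint list matching the standard widths above.
BREAKPOINTS = [2560, 1920, 1680, 1360, 1280, 1199, 1024, 960, 768, 480, 320]

def page_renders_correctly(width):
    # Placeholder assumption: a real implementation would resize the
    # browser window to `width` and assert on the rendered layout.
    return width >= 320

def run_responsive_checks(widths=BREAKPOINTS):
    """Return the list of widths at which the page failed to render."""
    return [w for w in widths if not page_renders_correctly(w)]

# An empty failure list means every agreed resolution passed.
assert run_responsive_checks() == []
```

Keeping the widths in one list means a client-specific resolution set only changes the data, not the test logic.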

Cross Browser Testing

With the wide range of web browsers available, end users can access web applications through many different browsers. Client-side components such as JavaScript, AJAX requests, applets, Flash and Flex may behave differently across browsers, and based on the user agent received from the client side, server-side requests may also be processed differently; hence the need to test web applications across multiple browsers. Cross-browser testing checks the compatibility of applications/websites across multiple web browsers and ensures that they work correctly on each of them.

Listed below are the main browsers on which we carry out our testing:

  • Firefox (Latest 3 versions)
  • Google Chrome (Latest 3 versions)
  • Internet Explorer (IE9, IE10 and IE11)
  • Safari (Latest version)
  • Opera (Latest version)
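The browser list above can be expressed as a small test matrix, so each (browser, version) pair becomes one test configuration. A sketch under assumptions: the `BROWSERS` mapping and the "latest-1" style version labels are illustrative, and in a real setup each pair would be dispatched to a matching driver or testing service:

```python
# Hypothetical cross-browser matrix built from the list above.
BROWSERS = {
    "Firefox": ["latest", "latest-1", "latest-2"],
    "Chrome": ["latest", "latest-1", "latest-2"],
    "IE": ["9", "10", "11"],
    "Safari": ["latest"],
    "Opera": ["latest"],
}

# Flatten into (browser, version) pairs, one per test run.
matrix = [(name, version)
          for name, versions in BROWSERS.items()
          for version in versions]

assert len(matrix) == 11  # 3 + 3 + 3 + 1 + 1 configurations
```

Driving the suite from a matrix like this keeps browser coverage visible in one place and makes adding or dropping a version a one-line change.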

In order to test applications in different environments we conduct testing on a number of mobile and tablet devices.

Regression Testing

This is a repeated test to check whether the application works as intended after a bug fix. It is conducted to check whether new fixes have had any unintended effects on other areas of the system or website.

We use bug tracking tools like JIRA, Bugzilla and Flyspray to track and manage defects that are found during test execution.

The aim of regression testing is also to ensure that any increase in functionality continues to maintain the smoothness and stability of the website. To avoid repetitive manual re-tests during regression testing, we use automation; currently we use Selenium IDE for test automation in regression testing.
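The re-test/regression split above can be sketched in plain Python (in our process this role is played by recorded Selenium IDE scripts; here `slugify` is a hypothetical function whose imagined bug, dropping the hyphens in multi-word titles, has just been fixed):

```python
def slugify(title):
    """Turn a page title into a URL slug (bug-fixed version)."""
    return "-".join(title.lower().split())

def test_fix_retest():
    # Re-test: the reported defect (multi-word titles) is now fixed.
    assert slugify("About Us") == "about-us"

def test_regression():
    # Regression: behaviour that passed before the fix still holds.
    assert slugify("home") == "home"
    assert slugify("Contact") == "contact"

test_fix_retest()
test_regression()
```

The re-test confirms the specific defect is gone; the regression tests confirm the fix did not disturb previously passing behavior, which is exactly what gets automated so it can be re-run cheaply after every fix.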

Ad-hoc Testing

In ad-hoc testing, the QA team performs an unplanned and undocumented procedure designed to assess the viability of a product. Unless an ad-hoc test detects a bug, the procedure is usually run only once. Our primary goal in ad-hoc testing is to ensure the effectiveness and efficiency of what we develop.

The above tests are carried out in three separate cycles:

  1. Cycle 1
  2. Cycle 2
  3. Cycle 3

Cycle 1

In Cycle 1, the QA team executes all test cases/scripts which are written for system and UI testing and reports the observed defects.

Cycle 2

In Cycle 2, the QA team re-tests the defects reported in Cycle 1. Automation testing is performed in this stage to achieve optimum results.

Cycle 3

In Cycle 3, the QA team performs ad-hoc testing and reports the observed defects. Finally, once those defects are fixed by the developers, we re-test to ensure that everything works as expected.