Functional Testing

Published date: April 15, 2024, Version: 1.0

Objectives

The objective of functional testing is to validate the software’s functionality and ensure that it works as per the defined requirements and acceptance criteria.

Sprint Level Activities

  • Product Owner/ Business walkthrough of the user stories

  • QE team to raise open questions and clarify in user story grooming sessions

  • QE team to prepare test cases, review them with the business, and upload them to qTest

  • QE team to update the test execution status in qTest as part of daily execution

  • Perform Zero-day readiness testing on the first day of testing and a Smoke test whenever a build is deployed

  • Proceed with ST/ SIT based on the successful Smoke test results

  • Perform Sprint-specific regression, defect regression for defect deployments, and high-severity regression for all previous sprints

  • Perform End to End testing with all integrations up in the last sprint

  • Perform Regression testing of the previous sprint’s scope

  • QE team to raise the identified defects of each testing phase in Jira

  • Dev team to fix the defects and the QE team to close/ reopen the defect

  • QE team to prepare Go/No-Go report, review and take it forward for implementation

In-Sprint (N, N+1, …) Activities

Backlog Grooming
  • Complete understanding of upcoming sprint through grooming

  • Prioritization of stories for the upcoming sprint

  • Break the large stories into small stories

  • Refine the acceptance criteria for each story

  • Acceptance criteria are considered part of the Definition of Ready (DoR) for the story

  • Backlog grooming happens in the week before the sprint

Sprint Planning

  • Estimate the stories using story points

  • Allocate the stories to each team member

  • Commit the stories to the sprint

  • Start the sprint (Scrum Master)

  • Sprint planning happens one or two days before the sprint starts

Sprint Execution

  • Daily scrum calls

  • Creation of test scenarios

  • Test scenario review/rework/ approval

  • Creation of manual test cases

  • Creation of automated scripts

  • Update existing automated scripts

  • Test case/ script review/ rework/ approval

  • Test Execution and Defect Logging

  • Stories closure post-testing and review with PO

  • In-sprint system integration testing/ Regression

  • N-1 sprint performance testing

Hardening

  • Release regression – integrated regression for the entire PI

  • UAT

  • Performance testing

  • Security testing

  • Implementation/ Backout

  • Prod Implementation

Activities by Phase

Product Owner – Sprint 0

  • Update the product backlog and release plan

ART Release Quality Manager (RQM) – Sprint 0

  • Assess and identify risks

  • Initiate project kick-off

  • Participate in and establish high-level estimates

  • Plan for quality assurance

Scrum Master – Release Planning (1-N)

  • Develop a detailed release plan

  • Perform release estimation based on velocity

Scrum Master – Sprint Execution (1-N)

  • Develop a sprint plan and track daily stats

  • Support the team in removing dependencies

  • Participate in and conduct retrospective meetings

QA Specialist (Functional SME)/ Test Automation Engineer – Sprint 0

  • Develop the Test Strategy

  • Develop the Test Plan

  • Set up the test environment

  • Understand business requirements

QA Specialist (Functional SME)/ Test Automation Engineer – Sprint Execution (1-N)

  • Develop test scenarios

  • Develop test cases

  • Create test data

  • Validate the test environment

  • Identify smoke/ sanity test cases

  • Perform smoke/ sanity testing

  • Perform test execution

  • Prepare the Test Summary

  • Provide UAT support/ set up UAT test data

Final Sprint

All sev 1-3 defects within the new features of Sprints N and N+1 are closed (conditional), and functional and regression testing has been completed and passed.

  • Full regression of all streams and core regression is executed

  • One round of End to End testing is carried out

Implementation and Go Live – All sev 1-3 defects are closed as of the hardening phase. Functional, Regression and UAT testing have been completed and passed, and the results summary is reviewed before implementation.

Build Deployments During the Sprint

  • For a regular sprint, planned deployment starts on the second day of the sprint. The succeeding builds are to be deployed twice a week.

  • An email notification will be sent for planned/ unplanned builds

  • Automated Smoke tests need to be triggered automatically once the build is deployed, irrespective of the environment, and the build will be accepted based on the smoke results

  • For applications, if the smoke test is not automated, manual smoke testing is performed, and results are shared along with release notes

  • Only build labels that pass the Smoke tests are to be deployed in the QE environment

  • Quality Engineers to perform functional testing in the sprint

  • Defects within each sprint should be resolved and tested within the sprint

  • Any defect fixes listed in the deployment notes need to be validated

  • Quality Engineers to perform sprint-specific regression and regression of past sprints in PI

  • Quality Engineers to share Test execution, defects metrics and daily status report daily with all stakeholders

  • Performance team to perform performance testing for the N-1 sprint in the performance environment

  • SAST is to be run in Dev and QE environments
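The deploy-then-smoke flow described above can be sketched as a post-deploy hook. This is a minimal illustration, not the team’s actual pipeline: the suite path and the use of pytest are assumptions, and the email notification is stood in for by a print.

```python
import subprocess

SMOKE_SUITE = "tests/smoke"  # assumed location of the automated smoke suite


def run_smoke_suite(build_label: str) -> bool:
    """Run the automated smoke suite against a freshly deployed build.

    Returns True when the whole suite passes (pytest exit code 0).
    """
    result = subprocess.run(
        ["pytest", SMOKE_SUITE, "--junitxml", f"smoke-{build_label}.xml"]
    )
    return result.returncode == 0


def build_verdict(smoke_passed: bool) -> str:
    """The build is accepted or rejected purely on the smoke result."""
    return "ACCEPTED" if smoke_passed else "REJECTED"


def on_build_deployed(build_label: str) -> str:
    """Post-deploy hook: trigger the smoke suite and record the verdict."""
    verdict = build_verdict(run_smoke_suite(build_label))
    print(f"Build {build_label}: smoke {verdict}")  # stand-in for the email notification
    return verdict
```

Wiring `on_build_deployed` into the deployment tooling means the smoke run is triggered automatically for every build, irrespective of the environment, as the process requires.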

Test Execution Planning in Test Management Tools (qTest)

Squad QE teams will use qTest as the primary test management tool and will manage the execution of test cases by setting conditions and scheduling dates for executing them. When a test case is executed manually, the tester executes the defined test steps and sets each test step to Pass or Fail depending on whether the stated expected results are observed.

When an automated test script is executed:

  1. qTest shall utilize the automation tool to execute the automated test script. If the automated script fails, results and screenshots are captured for analysis and reporting.

  2. Failed automated scripts shall be executed manually to determine if the underlying cause is the application behaviour or the script. If the test case fails manually, a defect shall be logged and managed per the Defect Management Process.

  3. If the manual test execution completes successfully, the automated script shall be examined for the root cause of the failure, and suitable corrections shall be made to the automated test script.
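The triage decision in steps 2 and 3 can be captured in a few lines. This is only a sketch of the decision rule, not part of any qTest integration:

```python
def triage_failed_automated_script(manual_rerun_passed: bool) -> str:
    """Decide the follow-up action after an automated script fails.

    Per the process above: a failed script is re-run manually. A manual
    failure points at the application, so a defect is logged per the
    Defect Management Process; a manual pass means the script itself is
    at fault and must be root-caused and corrected.
    """
    if manual_rerun_passed:
        return "fix automated script"  # application is fine; correct the script
    return "log defect in Jira"        # application behaviour is wrong


print(triage_failed_automated_script(False))  # -> log defect in Jira
print(triage_failed_automated_script(True))   # -> fix automated script
```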

Guidelines

Guideline – Phase

  • Estimation – Sprint 0

  • Test Strategy – Sprint 0

  • Test scenario and test case preparation – Sprint Execution (1-N)

  • Test case design – Sprint Execution (1-N)

  • Defect Management – Sprint Execution (1-N)

  • Risk and Mitigation Plan – Sprint 0

  • TDM data masking methodology – Sprint Execution (1-N)

  • TDM subset – Sprint Execution (1-N)

  • Test data generation – Sprint Execution (1-N)

  • Test data management – Sprint Execution (1-N)

  • Test data provisioning – Sprint Execution (1-N)