Test Design

Published date: April 15, 2024, Version: 1.0

Test coverage means having the right number of test cases for the requirements/ features being tested. The right test coverage is crucial for any project to ensure the software is delivered as expected. Test coverage is achieved in the following ways:

User Stories to Test Cases:

Identifying the Scenarios

  • QE team to attend business requirements and technical specification walkthroughs
  • Identify Test scenarios based on the basic and alternate flows of requirements and on Epic/ feature acceptance criteria
  • Create a traceability matrix from Requirements to Test scenarios
  • Review the Test scenarios with the business and obtain sign-off

Converting Test scenarios into Test cases

  • QE team should understand the user story description and acceptance criteria in detail during sprint grooming sessions
  • Map the Test scenarios to each user story
  • For each Test scenario, identify different combinations of test cases based on the input parameters, business rules, user rights, etc.
  • Create Test cases
  • Create a traceability matrix from test cases to Test scenarios and user story acceptance criteria
  • Ensure the right coverage is captured. If a test case takes a range of values as input, test it with sample data, boundary values, etc.
  • Identifying all possible Test scenarios is key to having the right test coverage
  • Review the test cases with the business and upload them to qTest

Creating End to End test cases

  • Identify the Test scenarios for all user stories at the MVP level
  • Create End to End test cases to cover MVP
  • Collaborate with the other project teams and create end-to-end test cases that are to be tested post-completion of both projects in scope
  • Create traceability with Test scenarios and feature acceptance criteria

Test coverage for Regression

  • Perform impact analysis of features/ user stories in scope
  • Identify regression test cases from the master regression suite
  • Create new test cases as required
  • Map all identified test cases to the sprint in qTest
  • Update priority for each test case

Test coverage based on Prod incidents and defects

  • Analyze the past three months' incidents and post-production defects
  • Perform root cause analysis and create test cases based on it (if no test cases exist yet)
  • Do this periodically, and add the resulting test cases to the master regression suite with priority 1
  • Mandate these test cases for execution if the related modules are in scope
  • Analyze the modules with the highest defect density over the past three releases and improve test coverage with additional combinations
  • Use production data analytics to inform test coverage

Common Practices for Waterfall and Agile

All test cases should be uploaded to qTest with all mandatory fields updated. All regression test cases should be kept under separate folders, and the QE team should pull them into the sprint or test cycle as required. Each test case should have its automation status updated (Automation ready/ Automated/ Not automated) for future metrics preparation. End-to-end traceability has to be established from test cases to requirements or to feature acceptance criteria, and traceability coverage has to be established against the epics in Jira/ qTest.

 

To ensure the right Test Coverage:

  • Adequate business review of test scenarios and test cases

  • Traceability between test scenario/ test cases to acceptance criteria and feature goals

  • Periodic review of defect density by module and revisiting test coverage

  • Periodic review of prod incidents and defects to improve test coverage

Test Case Guidelines

The following are guidelines for writing proper test cases.

  • All steps/ actions must be written in the imperative mood.

  • If the user has to provide input to the application, mention specific test data. Test data is to be updated in the respective column in the qTest tool.

  • Mention Pre-condition in the test case, wherever necessary.

  • Test Case description must be provided for every test case.

  • Test case description must begin with “To Check…” or “To Validate…” or “To Verify…”

  • The Expected Result must contain words like “should” rather than “is/ may/ shall/ might”.

  • There must be no ambiguity in the step that is to be executed.

    • Example – Login to Application as User/ Administrator – Incorrect

      • Login to Application as User – Correct

      • Login to Application as Administrator – Correct

  • Each Test case is to be identified with a Test case identifier (this will be assigned automatically as soon as test cases are uploaded to qTest).

  • Test Case naming convention must be followed in the following format.

    • Example – Test Scenario – Module Name_Scenario

    • Test Case – Module Name_Test Case ID

  • Do not include any redundant verification points in the test case.

  • Do not use any acronyms or symbols.

  • Reference to any object name, such as a window or a label, must be written within double quotes, and the object name must be written in Title Case.

    • For example – Navigate to the “Login” page. Enter ‘A001’ in the “Login ID” field.

  • Reference to any input data being provided must be given in single quotes.

  • The negative test case step must be written first, followed by the positive test step.

  • The ideal number of steps in a test case is 15 to 20.
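
For illustration only, a test case written to these guidelines might look like the following (the module, IDs, and data are hypothetical):

    Test Case Name – Login_TC001
    Description – To Verify that a registered user can log in with valid credentials.
    Pre-condition – User ‘A001’ exists and is active.
    Test Steps –
      1. Navigate to the “Login” page.
      2. Enter ‘A001’ in the “Login ID” field.
      3. Enter the valid password in the “Password” field.
      4. Click the “Login” button.
    Expected Result – The “Home” page should be displayed.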


Test Case Prioritization Guidelines:

Definition of Test Case Criticality

 

Test cases that validate/ verify a high-priority BRD requirement or high-priority feature/ user story shall be deemed critical. When designing the test scenario/ test case, a priority is assigned to the test case, aligning with the priority assigned to its corresponding requirement or feature/ user story. Priorities range from 1 to 3:

1 - Critical - Applied to tests that must be executed first and are determined to be critical. If these tests are not executed, the risk to the project shall be High. These test cases would cover P1 or “High” priority requirements.

2 - Medium - Applied to tests that should be executed as time permits. If these tests are not executed, the risk to the project shall be Medium. These test cases would cover P2 or “Medium” priority requirements.

3 - Low - Applied to tests that, if not executed, carry low risk to the testing effort and a Low risk to the project. These test cases would cover P3 or “Low” priority requirements.

During the test case review process, the priority may need to be revised based on other criteria. This would be done on an exception basis, and the priority would change only if warranted.

The Test Case priority may be modified based on the criteria given as part of the test case design and review process. The team will assess the risk to system stability to determine if the test case priority should be modified.

  • New versions of interfacing software (tests connectivity with other systems)

  • Complex functions being introduced or modified

  • Modifications to components with a history of failures or defect leakages into a production environment

  • Impacts on Business Clients / Business Systems

  • Government regulations

  • Compliance adherence

  • Changes to functionality that affect financials or interfaces with financial systems

Test Design Techniques

Test design techniques help ensure comprehensive test coverage and effective identification of defects in software applications. The following test design techniques, covering both black-box and white-box approaches, can be employed to design effective test cases for applications.

Black-box Test Design Techniques:

 

Equivalence Partitioning

 

Equivalence Partitioning is a technique that divides the input data into classes or groups, treating them as equivalent. This technique helps reduce the number of test cases while maintaining sufficient test coverage.

Steps to implement Equivalence Partitioning:

a. Identify the input domains: Determine the input variables or parameters with different valid or invalid values in the application. For example, in an online retail application, the input domain for a customer's age could be "under 18," "18-30," and "over 30."

b. Divide the input domains into equivalent partitions: Group the input domains into equivalent partitions such that the system's behaviour is expected to be the same within each partition. For example, for the age input domain, you could have the partitions "under 18," "18-30," and "over 30."

c. Design test cases: Select representative values from each partition to create test cases. For example, for the "under 18" partition, you could select the values 15 and 17 as test cases.

d. Execute test cases: Execute the designed test cases, ensuring the system behaves consistently within each partition.
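
The sketch below illustrates this technique in Python with pytest. The classify_age function is a hypothetical stand-in for the system under test, not part of any real application:

    import pytest

    # Hypothetical function under test: classifies a customer's age
    # into the partitions "under 18", "18-30", and "over 30".
    def classify_age(age: int) -> str:
        if age < 18:
            return "under 18"
        if age <= 30:
            return "18-30"
        return "over 30"

    # One representative value per equivalence partition is enough,
    # because all values inside a partition are expected to behave the same.
    @pytest.mark.parametrize("age, expected", [
        (15, "under 18"),  # representative of the "under 18" partition
        (25, "18-30"),     # representative of the "18-30" partition
        (45, "over 30"),   # representative of the "over 30" partition
    ])
    def test_age_partitions(age, expected):
        assert classify_age(age) == expected

Three test cases cover the three partitions; adding more values from the same partitions would increase effort without improving coverage.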

Boundary Value Analysis

Boundary Value Analysis (BVA) tests the boundary values of input domains. This technique is based on the observation that defects often occur at the edges of input ranges.

Steps to implement Boundary Value Analysis:

a. Identify the input variables: Identify the input variables or parameters with specific valid or invalid ranges. For example, in an application, the input variable for the quantity of a product could have a valid range of 1 to 10.

b. Determine the boundaries: Determine each input variable's lower and upper boundaries. For example, the lower boundary for the quantity could be 1, and the upper boundary could be 10.

c. Design test cases: Create test cases that include values at the boundaries and beyond them. For example, you could design test cases with values like 0, 1, 2, 9, 10, and 11 for the quantity input variable.

d. Execute test cases: Execute the designed test cases, considering how the system behaves at the boundaries.
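
As a minimal sketch, assuming a hypothetical is_valid_quantity validator with a valid range of 1 to 10, the boundary values above translate into the following pytest cases:

    import pytest

    # Hypothetical validator: quantity is valid in the range 1 to 10 inclusive.
    def is_valid_quantity(quantity: int) -> bool:
        return 1 <= quantity <= 10

    # Values sit on and immediately around each boundary, mirroring the
    # 0, 1, 2, 9, 10, 11 selection described in step c above.
    @pytest.mark.parametrize("quantity, expected", [
        (0, False),   # just below the lower boundary
        (1, True),    # lower boundary
        (2, True),    # just above the lower boundary
        (9, True),    # just below the upper boundary
        (10, True),   # upper boundary
        (11, False),  # just above the upper boundary
    ])
    def test_quantity_boundaries(quantity, expected):
        assert is_valid_quantity(quantity) == expected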

Decision Table Testing

Decision Table Testing is a technique that helps identify combinations of conditions and corresponding actions systematically. It is particularly useful when an application has complex business rules or conditional logic.

Steps to implement Decision Table Testing:

a. Identify the conditions and actions: Identify the conditions and actions involved in the decision-making process of the application. For example, in an order processing system, conditions could be "payment received," "item in stock," and actions could be "accept the order" or "reject the order."

b. Create the decision table: Create a table with rows representing combinations of conditions and columns representing the corresponding actions. Fill in the table with appropriate values.

c. Design test cases: Generate test cases by selecting different combinations of conditions and corresponding actions from the decision table.

d. Execute test cases: Execute the designed test cases, ensuring that the system behaves according to the specified actions for different combinations of conditions.
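
A minimal sketch of the order-processing example, assuming a hypothetical process_order function, shows how each row of the decision table becomes one test case:

    import pytest

    # Hypothetical rule: accept the order only when payment is received
    # AND the item is in stock; otherwise reject it.
    def process_order(payment_received: bool, item_in_stock: bool) -> str:
        if payment_received and item_in_stock:
            return "accept the order"
        return "reject the order"

    # One parametrized case per decision-table row:
    # (payment received, item in stock) -> expected action.
    @pytest.mark.parametrize("payment_received, item_in_stock, action", [
        (True,  True,  "accept the order"),
        (True,  False, "reject the order"),
        (False, True,  "reject the order"),
        (False, False, "reject the order"),
    ])
    def test_order_decision_table(payment_received, item_in_stock, action):
        assert process_order(payment_received, item_in_stock) == action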

White-box Test Design Techniques

Statement Coverage: Statement Coverage is a white-box technique that ensures every statement in the code is executed at least once.

Steps to implement Statement Coverage:

a. Identify the code modules: Identify the relevant code modules or components of the application that must be tested.

b. Design test cases: Create test cases that exercise each statement in the code. Ensure all statements are covered, including conditional statements, loops, and error-handling statements.

c. Execute test cases: Execute the designed test cases and track the execution of each statement. Use code coverage analysis tools to determine the coverage achieved.
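
The sketch below shows the idea on a hypothetical apply_discount function: two tests together execute every statement at least once.

    # Hypothetical discount rule used only to illustrate statement coverage.
    def apply_discount(total: float, is_member: bool) -> float:
        discount = 0.0               # statement 1
        if is_member:                # statement 2
            discount = total * 0.10  # statement 3 (runs only for members)
        return total - discount     # statement 4

    def test_member_discount():
        # Covers statements 1, 2, 3, and 4.
        assert apply_discount(100.0, True) == 90.0

    def test_non_member_pays_full_price():
        # Covers statements 1, 2, and 4 (statement 3 is skipped).
        assert apply_discount(100.0, False) == 100.0

With a coverage tool such as coverage.py, running coverage run -m pytest followed by coverage report shows the percentage of statements executed.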

Branch Coverage: Branch Coverage is a white-box technique that tests all possible outcomes of decision points or branches in the code.

Steps to implement Branch Coverage:

a. Identify the decision points: Identify the decision points or branches in the code where different paths can be taken based on conditions.

b. Design test cases: Create test cases that cover all possible outcomes of each decision point. Ensure that both true and false branches of conditions are tested.

c. Execute test cases: Execute the designed test cases, tracking the execution of different branches. Use code coverage analysis tools to determine the coverage achieved.
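
A minimal sketch, assuming a hypothetical shipping_fee function with two decision points: two tests suffice here to take every branch (the true and false outcome of each decision) at least once.

    # Hypothetical shipping rule used only to illustrate branch coverage.
    def shipping_fee(order_total: float, express: bool) -> float:
        if order_total >= 50.0:  # decision point 1
            fee = 0.0            # true branch
        else:
            fee = 5.0            # false branch
        if express:              # decision point 2 (false branch skips the add)
            fee += 10.0          # true branch
        return fee

    def test_large_express_order():
        # Decision 1 true, decision 2 true.
        assert shipping_fee(60.0, True) == 10.0

    def test_small_standard_order():
        # Decision 1 false, decision 2 false.
        assert shipping_fee(20.0, False) == 5.0

Note that these two tests achieve full branch coverage yet exercise only two of the four possible paths; covering the remaining combinations is the goal of path coverage, below.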

Path Coverage: Path Coverage is a white-box technique that aims to test every possible path through the code, including all combinations of branches and loops.

Steps to implement Path Coverage:

a. Identify the paths: Identify the different paths through the code, considering the combinations of branches, loops, and conditional statements.

b. Design test cases: Create test cases that cover all possible paths in the code. This may involve designing test cases that exercise different combinations of conditions and iterations.

c. Execute test cases: Execute the designed test cases, ensuring all paths are covered. Use code coverage analysis tools to determine the coverage achieved.
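
Continuing the hypothetical shipping_fee sketch from the branch-coverage example, its two independent decision points give 2 x 2 = 4 distinct paths, so one test per combination is needed:

    import pytest

    # Same hypothetical rule as in the branch-coverage sketch.
    def shipping_fee(order_total: float, express: bool) -> float:
        fee = 0.0 if order_total >= 50.0 else 5.0  # decision point 1
        if express:                                # decision point 2
            fee += 10.0
        return fee

    # One test case per path through the two decision points.
    @pytest.mark.parametrize("order_total, express, expected", [
        (60.0, False,  0.0),  # decision 1 true,  decision 2 false
        (60.0, True,  10.0),  # decision 1 true,  decision 2 true
        (20.0, False,  5.0),  # decision 1 false, decision 2 false
        (20.0, True,  15.0),  # decision 1 false, decision 2 true
    ])
    def test_every_path(order_total, express, expected):
        assert shipping_fee(order_total, express) == expected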

By combining black-box and white-box test design techniques, you can achieve comprehensive test coverage and enhance the quality of your testing efforts for applications.