3.3 Techniques in Agile Projects

Testing is testing, and most of the techniques and test levels we know from traditional testing may be applied to Agile projects as well. In addition, Agile projects often use variations in test techniques, terminologies, and documentation.

Figure: Testers need a powerful tool-box!

3.3.1 Acceptance Criteria, Adequate Coverage, and Other Information for Testing

In its simplest definition, the Scrum Product Backlog is a list of everything that needs to be done within the project. It replaces the traditional requirements specification. The items can be technical in nature or user-centric, for example in the form of User Stories. Non-functional requirements, such as performance, volume, and security requirements, are sometimes specified in the User Stories as well.

Figure: Example of a User Story. For Key, see earlier.

The User Story often plays an important role as the test basis, but you should also look for other test bases to improve test quality. Typical test bases on Agile projects include:

  • User Stories
  • Experience from previous projects or retrospective
  • Existing functions, features, and quality characteristics of the system
  • Code, architecture, and design
  • User profiles as personas
  • Information on defects from existing and previous projects
  • A categorization of defects in a defect taxonomy
  • Any relevant standards or contracts
  • User documentation
  • Quality risks documentation.

Figure: The Scrum Management Framework. For Key, see earlier.

During the Sprint (iteration), the developers code and implement the functions and features outlined in the User Stories. To decide when an activity from the Sprint Backlog is completed, the Definition of Done (DoD) is used. It is a comprehensive checklist of the activities needed to ensure that only truly done features are delivered, not only in terms of functionality but in terms of quality as well. The DoD may vary from one Scrum Team to another, but must be consistent within one team.
Consider the following criteria:

  • Each User Story consistent with the others in the iteration
  • Aligned with product theme
  • Understood by the entire Agile Team
  • Have sufficiently detailed, testable acceptance criteria
  • Card, conversation, and confirmation completed
  • User Story acceptance tests completed
  • Development and test tasks for selected User Stories identified, estimated, and within achievable velocity.

As a tester, it is important that the User Stories are testable; the acceptance ("Done") criteria should address the following topics where relevant:

  • Externally observable functional behavior
  • Relevant quality characteristics, especially non-functional ones
  • Steps to achieve goals or tasks (use cases)
  • Business rules or procedures relevant to the User Story
  • Interfaces between system and users, other systems, external data repositories, etc.
  • Design and implementation constraints
  • Format, types, and valid/invalid/default data.

In addition to the User Stories and their associated acceptance criteria, other information is relevant for the tester, including:

  • How the system is supposed to work and be used
  • The system interfaces that can be used/accessed to test the system
  • Whether current tool support is sufficient
  • Whether the tester has enough knowledge and skill to perform the necessary tests.

As a tester, you will often find that you need further information:

  • Information about the testing interfaces
  • What tools are available to support testing
  • Clarification on how the system operates and is used (if the test bases are unclear)
  • A clear Definition of Done, shared across the team, including the expected test coverage

Given the lightweight documentation, consider whether you have the necessary knowledge and skill to perform the testing. Throughout the iteration, information gaps that affect testing will be found, and testers must work collaboratively with the rest of the team to resolve them. Unlike sequential projects, obtaining relevant information for testing is an ongoing process on Agile projects, and measuring whether a specific test level or activity is done is part of the tester's role.

Here is an example of User Story Acceptance Criteria (DoD):
User Story:
As a customer, I want to be able to open a popup window that shows the last 30 transactions on my account, with backward/forward arrows allowing me to scroll through transaction history, so that I can see my transaction history without closing the “enter payment amount” window.

Acceptance Criteria (DoD):

  1. Initially populated with 30 most-recent transactions; if no transactions, display “No transaction history yet”
  2. Backward scrolls back 10 transactions; forward scrolls forward 10 transactions
  3. Transaction data retrieved only for current account
  4. Displays within 2 seconds of pressing “show transaction history” key
  5. Backward/forward arrows at the bottom
  6. Conforms to corporate UI standard
  7. Can minimize or close pop-up through standard controls at upper right
  8. Properly opens in all supported browsers
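To make the example more concrete, here is a minimal sketch of how a few of these acceptance criteria could be automated, assuming a Python/pytest setup. The module account_ui, the function open_transaction_history, the popup attributes, and the fixtures are hypothetical placeholders, not part of any real system described here.

```python
# Hypothetical pytest sketch automating some of the acceptance criteria above.
# "account_ui", "open_transaction_history", the popup attributes and the fixtures
# are placeholders for whatever test interface the real application would expose.
import time

from account_ui import open_transaction_history  # hypothetical test interface


def test_popup_shows_30_most_recent_transactions(account_with_50_transactions):
    # Criterion 1: initially populated with the 30 most recent transactions
    popup = open_transaction_history(account_with_50_transactions)
    assert popup.visible_transactions == account_with_50_transactions.latest(30)


def test_popup_with_no_history_shows_message(empty_account):
    # Criterion 1: an account without transactions shows the "no history" message
    popup = open_transaction_history(empty_account)
    assert popup.message == "No transaction history yet"


def test_backward_arrow_scrolls_back_10_transactions(account_with_50_transactions):
    # Criterion 2: the backward arrow scrolls back 10 transactions
    popup = open_transaction_history(account_with_50_transactions)
    popup.press_backward()
    assert popup.scroll_offset == 10  # hypothetical attribute


def test_popup_opens_within_2_seconds(account_with_50_transactions):
    # Criterion 4: displays within 2 seconds of pressing "show transaction history"
    start = time.monotonic()
    open_transaction_history(account_with_50_transactions)
    assert time.monotonic() - start <= 2.0
```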

Test Levels:

On Agile projects we also work with different test levels, and each needs its own Definition of Done (DoD). Often we only think about the DoD when we talk about User Stories, but it should also be defined for test levels, features, iterations, and the release.
Let us have a look at some examples of Definition of Done criteria that may be relevant for each of them:

Unit testing

  • 100% decision coverage where possible, with careful reviews of any infeasible paths
  • Static analysis performed on all code
  • No unresolved major defects (ranked based on priority and severity)
  • No known unacceptable technical debt remaining in the design and the code
  • All code, unit tests, and unit test results reviewed
  • All unit tests automated
  • Important characteristics are within agreed limits (e.g., performance)
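To give one sketch of how the decision-coverage and automation criteria above could be made measurable, a team working in Python might run its unit tests under coverage.py with branch coverage enabled and fail the build when the agreed level is not reached. The test path and the 100% threshold below are illustrative assumptions, not prescribed values.

```python
# Sketch: enforcing branch (decision) coverage for the unit test suite.
# Assumes pytest and coverage.py are installed and unit tests live in tests/unit.
import sys

import coverage
import pytest

cov = coverage.Coverage(branch=True)   # branch coverage approximates decision coverage
cov.start()
exit_code = pytest.main(["tests/unit"])
cov.stop()
cov.save()

total_percent = cov.report(show_missing=True)  # prints report, returns total coverage %
if exit_code != 0 or total_percent < 100.0:    # "100% where possible", per the DoD above
    sys.exit(1)
```

Run as part of continuous integration, a check like this makes the unit-testing part of the DoD visible on every commit.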

Integration testing

  • All functional requirements tested, including both positive and negative tests, with the number of tests based on size, complexity, and risks
  • All interfaces between units tested
  • All quality risks covered according to the agreed extent of testing
  • No unresolved major defects (prioritized according to risk and importance)
  • All defects found are reported
  • All regression tests automated, where possible, with all automated tests stored in a common repository

System testing

  • End-to-end tests of User Stories, features, and functions
  • All user personas covered
  • The most important quality characteristics of the system covered (e.g., performance, robustness, reliability)
  • Testing done in production-like environments, including all hardware and software for all supported configurations, to the extent possible
  • All quality risks covered according to the agreed extent of testing
  • All regression tests automated, where possible, with all automated tests stored in a common repository
  • All defects found are reported and possibly fixed
  • No unresolved major defects (prioritized according to risk and importance)

User Story

  • The User Stories selected for the iteration are complete, understood by the team, and have detailed, testable acceptance criteria
  • All the elements of the User Story are specified and reviewed, including the User Story acceptance tests, and have been completed
  • Tasks necessary to implement and test the selected User Stories have been identified and estimated by the team

Feature

  • All constituent User Stories, with acceptance criteria, are defined and approved by the customer
  • The design is complete, with no known technical debt
  • The code is complete, with no known technical debt or unfinished refactoring
  • Unit tests have been performed and have achieved the defined level of coverage
  • Integration tests and system tests for the feature have been performed according to the defined coverage criteria
  • No major defects remain to be corrected
  • Feature documentation is complete, which may include release notes, user manuals, and on-line help functions

Iteration (Sprint)

  • All features for the iteration are ready and individually tested according to the feature level criteria
  • Any non-critical defects that cannot be fixed within the constraints of the iteration are added to the product backlog and prioritized
  • Integration of all features for the iteration completed and tested
  • Documentation written, reviewed, and approved

Release

  • Coverage: All relevant test basis elements for all contents of the release have been covered by testing. The adequacy of the coverage is determined by what is new or changed, its complexity and size, and the associated risks of failure.
  • Quality: The defect intensity (e.g., how many defects are found per day or per transaction), the defect density (e.g., the number of defects found compared to the number of User Stories, effort, and/or quality attributes), and the estimated number of remaining defects are within acceptable limits; the consequences of unresolved and remaining defects (e.g., their severity and priority) are understood and acceptable; and the residual level of risk associated with each identified quality risk is understood and acceptable.
  • Time: If the pre-determined delivery date has been reached, the business consequences of releasing versus not releasing need to be weighed.
  • Cost: The estimated lifecycle cost should be used to calculate the return on investment for the delivered system (i.e., the calculated development and maintenance cost should be considerably lower than the expected total sales of the product). The main part of the lifecycle cost often comes from maintenance after the product has been released, due to the number of defects escaping to production.
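As an illustration of the quality criterion above, the following sketch computes defect intensity and defect density from purely made-up figures; real teams would pull these numbers from their defect tracking and backlog tools.

```python
# Illustrative calculation of release quality metrics; all numbers are invented.
defects_found = 48
test_days = 12
user_stories_delivered = 40
estimated_remaining_defects = 5

defect_intensity = defects_found / test_days              # defects found per test day
defect_density = defects_found / user_stories_delivered   # defects per User Story

print(f"Defect intensity: {defect_intensity:.1f} defects/day")
print(f"Defect density:   {defect_density:.2f} defects/User Story")
print(f"Estimated remaining defects: {estimated_remaining_defects}")
```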

 

3.3.2 Applying Acceptance Test-Driven Development

Acceptance test-driven development (ATDD) is a test-first approach that involves the whole Agile Team: developers, testers, and business representatives. The acceptance tests are defined before programming starts and can be executed manually or automated.
Normally it follows this process:

  • Workshop: User Stories are analyzed, discussed, and written
    • Any incompleteness, ambiguities, or errors in the User Story are fixed during the workshop
  • Create tests:
    • Can be done by the whole team together, or by an individual tester
    • The tests are validated by the business representatives.

The tests should give examples that show how to use the system and that describe the specific characteristics of the User Story:

  • One or more positive paths (examples, also called tests)
  • One or more negative paths
  • Examples covering the non-functional attributes described in the User Story (volume, performance, stress, security, etc.).

The tests are written in clear, plain language understandable to all stakeholders, and should, where applicable, contain:

  • Preconditions
  • The input
  • Related output.

The tests/examples should cover all functional and non-functional characteristics described in the User Story, but should not go beyond the story by giving examples of behavior that is not documented in it. Nor should two examples cover the same characteristic of the User Story.
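As a minimal sketch of what such examples can look like when automated, the test below covers one positive and one negative path for a hypothetical "transfer between own accounts" User Story, with the precondition, input, and related output made explicit. The banking module and its functions are assumptions for illustration only.

```python
# Hypothetical ATDD example: written before the code exists, then automated.
import pytest

from banking import transfer, InsufficientFunds  # hypothetical module under test


def test_transfer_within_balance():
    # Precondition: source account holds 500.00, target account is empty
    source = {"balance": 500.00}
    target = {"balance": 0.00}
    # Input: transfer 200.00 from source to target
    transfer(source, target, amount=200.00)
    # Related output: both balances updated
    assert source["balance"] == 300.00
    assert target["balance"] == 200.00


def test_transfer_exceeding_balance_is_rejected():
    # Negative path: an amount larger than the available balance is refused
    source = {"balance": 100.00}
    target = {"balance": 0.00}
    with pytest.raises(InsufficientFunds):
        transfer(source, target, amount=250.00)
    assert source["balance"] == 100.00  # related output: balances unchanged
```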

3.3.3 Functional and Non-Functional Black-box Test Design

You can use all the usual test design techniques on Agile projects as well to help developers and testers design their tests. The techniques are often applied very early in the Agile project, before programming starts; this is in line with Test-Driven Development (TDD) and follows the recommended good practice of testing early. When the team and the developers design their tests, they can apply traditional black-box test design techniques such as:

  • Equivalence partitioning
  • Boundary value analysis
  • Decision tables
  • State transition
  • Classification tree
  • And others.

For example, if a User Story for a banking loan system includes lower and upper limits for the minimum and maximum loan value, test the boundaries (valid and invalid) as well as other equivalence partitions.
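To illustrate, assume (purely for this example) that valid loans range from 1,000 to 100,000 inclusive. A table-driven pytest sketch combining boundary value analysis and equivalence partitioning could then look like this; is_valid_loan_amount is a hypothetical validation function.

```python
# Sketch: boundary values and equivalence partitions for a hypothetical
# loan limit rule (valid amounts: 1,000 to 100,000 inclusive).
import pytest

from loans import is_valid_loan_amount  # hypothetical function under test


@pytest.mark.parametrize("amount, expected", [
    (999,     False),  # just below the lower boundary (invalid partition)
    (1_000,   True),   # lower boundary (valid)
    (1_001,   True),   # just above the lower boundary
    (50_000,  True),   # representative value from the valid partition
    (99_999,  True),   # just below the upper boundary
    (100_000, True),   # upper boundary (valid)
    (100_001, False),  # just above the upper boundary (invalid partition)
])
def test_loan_amount_boundaries(amount, expected):
    assert is_valid_loan_amount(amount) == expected
```

The same table can grow during the iteration as new partitions or business rules are discovered.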
Normally non-functional requirements are also documented in the User Stories, and black-box design techniques can be used to create the tests.
For example, boundary values are useful for testing non-functional requirements: if the system should support 900 concurrent users, test at that limit (and probably beyond).
In the last chapter, we will go through the most basic black-box test design techniques. They are worth knowing for everyone working on an Agile Team, because they form the foundation of good testing practice and help secure the quality of the system deliveries.

3.3.4 Exploratory Testing and Agile Testing

Time pressure and minimal documentation are major factors in Agile projects, leaving testers limited time for test analysis. Exploratory testing techniques are therefore important. We should combine exploratory test design with other techniques as part of a reactive test strategy, since requirements are never perfect. Here are some examples of other techniques:

  • Analytical Risk-Based Testing
  • Analytical requirements-based testing
  • Model-based testing
  • Regression-averse testing.

Exploratory Testing and Reactive Strategies

  • Exploratory testing and other techniques associated with reactive test strategies are useful in all situations, since requirements are never perfect
  • In Agile projects, limited documentation and on-going change make these reactive strategies even more useful
  • Blend reactive testing with other strategies (e.g., analytical risk-based, analytical requirements-based, regression-averse)
  • For exploratory testing:
    • Analysis during iteration planning produces the test condition(s) for the Test Charter (more below) which will guide a test session (60-120 minutes) or a test thread (not time-boxed)
    • Test design and test execution occur at the same time, covering the Test Charter, once software is delivered to testers
  • Test design can use all dynamic test techniques discussed in Foundation, Advanced Test Analyst, and Advanced Technical Test Analyst, influenced by the results of the previous tests.

Test Charter

Test conditions are documented in a Test Charter, which may include the following information:

  • Actor: intended user of the system
  • Purpose: the theme of the Test Charter including what particular objective the actor wants to achieve, i.e., the test conditions
  • Setup: what needs to be in place in order to start the test execution
  • Priority: relative importance of this Test Charter, based on the priority of the associated User Story or the risk level
  • Reference: specifications (e.g., User Story), risks, or other information sources
  • Data: whatever data is needed to carry out the Test Charter
  • Activities: a list of ideas of what the actor may want to do with the system (e.g., “Log on to the system as a Super-User”) and what would be interesting to test (both positive and negative tests)
  • Oracle notes: how to evaluate the product to determine correct results (e.g., to capture what happens on the screen and compare to what is written in the user’s manual)
  • Variations: alternative actions and evaluations to complement the ideas described under activities.
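To make the structure concrete, here is a filled-in charter for the transaction-history User Story from earlier, sketched as a simple Python dictionary whose keys mirror the fields above; the content is illustrative only.

```python
# Illustrative Test Charter for the transaction-history pop-up User Story.
test_charter = {
    "actor": "Retail banking customer",
    "purpose": "Explore scrolling through transaction history in the pop-up",
    "setup": "Logged-in user; account with more than 30 transactions",
    "priority": "High (story is in the current iteration; medium risk)",
    "reference": "User Story 'last 30 transactions pop-up'; corporate UI standard",
    "data": "Accounts with 0, 29, 30, 31 and 200 transactions",
    "activities": [
        "Scroll backward past the oldest transaction and observe the behavior",
        "Scroll forward and backward rapidly, using both mouse and keyboard",
    ],
    "oracle_notes": "Compare the displayed entries with the account statement",
    "variations": "Repeat on the smallest supported browser window size",
}
```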

Session-based test management

There are different methods to manage exploratory testing, and one of them is session-based test management. A session typically consists of the following stages:

  • Survey session (to learn how it works)
  • Analysis session (evaluation of the functionality or characteristics)
  • Deep coverage (corner cases, scenarios, interactions)

The quality of the tests depends on the tester’s ability to ask relevant questions about what to test. Examples include the following:

  • What is most important to find out about the system?
  • In what way may the system fail?
  • What happens if...?
  • What should happen when...?
  • Are customer needs, requirements, and expectations fulfilled?
  • Can the system be installed (and removed if necessary) following all supported upgrade paths?
  • Also, consider heuristics such as boundaries, CRUD (Create, Read, Update, Delete), configuration variations, and possible interruptions
  • Utilize all creativity, intuition, ideas, and skills relating to:
    • The system
    • The business domain
    • The ways people use the software
    • The ways the software fails

Documentation and logging of the process

A frequent mistake is to assume that exploratory testing need not be documented and logged. Logging the process is important; otherwise, it will not be possible to trace how a problem in the system was detected. Some documentation is needed. Here are some examples of useful information:

  • Test coverage: what input data have been used, how much has been covered, and how much remains to be tested
  • Evaluation notes: observations made during testing; whether the system and feature under test seem stable; whether any defects were found; what is planned as the next step based on the current observations; and any other ideas
  • Risk and strategy list: which risks have been covered and which of the most important ones remain; whether the initial strategy will be followed or needs changes
  • Issues, questions, and anomalies: any unexpected behavior; questions regarding the efficiency of the approach; concerns about the test ideas/attempts, test environment, test data, misunderstanding of the function, test script, or the system under test
  • Actual behavior: recording of actual behavior of the system that needs to be saved (e.g., video, screen captures, output data files).
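One lightweight way to capture this information is a structured session sheet. The sketch below shows the idea as a simple Python structure with illustrative content; teams may of course record the same fields in a wiki or test management tool instead.

```python
# Illustrative exploratory-session log covering the information listed above.
session_log = {
    "charter": "Explore scrolling through transaction history in the pop-up",
    "duration_minutes": 90,
    "test_coverage": "Accounts with 0, 30 and 200 transactions; backward scrolling only",
    "evaluation_notes": "Feature seems stable; one rendering glitch at small window sizes",
    "risk_and_strategy": "Performance risk not yet covered; initial strategy still valid",
    "issues_and_questions": "Is the 2-second limit measured from click or from key press?",
    "actual_behavior": ["screenshots/popup_glitch.png"],  # saved evidence
}
```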

If insufficient logging is done, testing may need to be repeated.
Test logs should be captured and summarized in the relevant test management or task management system. In Agile projects, some of this information is often shown on the task board, making it easy for stakeholders to understand the status of all testing activities.
