2 Fundamental Agile Testing Principles, Practices, and Processes

Agile testing follows the Agile Manifesto and its principles. In addition, there are nine Agile testing principles [Hendrickson]:

  1. Testing Moves the Project Forward
  2. Testing is NOT a Phase…
  3. Everyone Tests
  4. Shortening Feedback Loops
  5. Tests Represent Expectations
  6. Keep the Code Clean
  7. Lightweight Documentation
  8. “Done Done,” Not Just Done
  9. From Test Last to Test-Driven

There are also ten further Agile testing principles, introduced in the book Agile Testing by Lisa Crispin and Janet Gregory. It is easy to confuse the two lists, but both contain good principles:

  1. Provide continuous feedback
  2. Deliver value to the customer
  3. Enable face-to-face communication
  4. Have courage
  5. Keep it simple
  6. Practice continuous improvement
  7. Respond to change
  8. Self-organize
  9. Focus on people
  10. Enjoy

Agile is different from traditional lifecycles, and the principles reflect that. If you do not remember, or have not seen, the traditional testing principles, here is a summary:
ISTQB's seven testing principles:

  1. Testing shows presence of defects
  2. Exhaustive testing is impossible
  3. Early testing
  4. Defect clustering
  5. Pesticide paradox
  6. Testing is context dependent
  7. Absence-of-errors fallacy

The seven traditional testing principles also work fine for Agile projects; they are the core of testing and should not be forgotten just because the project is Agile. It is also worth noting that traditional projects can adopt many of the Agile testing principles.

2.1 The Differences between Testing in Traditional and Agile Approaches

Figure: Traditional versus Agile testing strategies. One lifecycle shows Requirements Analysis, Design, Implementation, Testing, Maintenance; the other shows Planning and Feasibility Study, Analysis, Design, Development, Testing, Deployment, Release and Maintenance.

To work effectively and efficiently, testers must understand the differences between testing in traditional lifecycle models (e.g., sequential models such as the V-model or iterative models such as RUP) and Agile lifecycles. Agile lifecycles differ in terms of how testing and development activities are integrated, the project work products, the names, entry and exit criteria used for various levels of testing, the use of tools, and how independent testing can be utilized effectively.
Organizations vary considerably in their implementation of lifecycles. Deviation from the ideals of Agile lifecycles may represent intelligent customization and adaptation of the practices. The ability to adapt to the context of a given project, including the software development practices actually followed, is a key success factor for testers.
Let us look at some of the differences:

2.1.1 Testing and Development Activities

Table: Comparing Agile and Sequential

Agile:

  • Short iterations deliver valuable, working features
  • Release and iteration planning and quality risk analysis
  • User Stories selected in each iteration
  • Continuous test execution, overlapping test levels
  • Testers, developers, and business stakeholders test
  • Hardening iterations may occur periodically
  • Avoid accumulating technical debt
  • Pairing (at least in XP)
  • Testing and quality coaching is a best practice
  • Heavy use of test automation for regression risk
  • Change may occur during the project; deal with it
  • Lightweight work products

Sequential:

  • Longer timeframes, deliver large groups of features
  • Overall planning and risk analysis up front, with control throughout
  • Scope of requirements established up front
  • Test execution in sequential levels in last half of project
  • Testers, developers, and business stakeholders test
  • Hardening happens at end of lifecycle, in system test/SIT
  • Unrecognized technical debt is a major risk
  • Pairing is unusual
  • Testing and quality coaching is a best practice
  • Test automation is a best practice
  • Unmanaged change can result in a death march
  • Risk of over-documentation

 Big bang versus iteration deliveries

Figure: Waterfall method versus Agile method, showing Income per unit of Time, and Value Delivery relative to Risk of Failure (missed needs).

One of the main differences between traditional lifecycles and Agile lifecycles is the idea of very short iterations, each iteration resulting in working software that delivers features of value to business stakeholders. In an Agile Team, testing activities occur throughout the iteration, not as a final activity.

Roles in testing

Testers, developers, and business stakeholders all have a role in testing, as with traditional lifecycles. Developers perform unit tests as they develop features from the User Stories. Testers then test those features. Business stakeholders also test the stories during implementation. Business stakeholders might use written test cases, but they also might simply experiment with and use the feature in order to provide fast feedback to the development team.

Testers as test and quality coaches

On Agile Teams, we often see testers serve as testing and quality coaches, sharing testing knowledge and supporting quality assurance work within the team. This promotes a sense of collective ownership of product quality.

Test automation

You will find that test automation at all levels of testing occurs in many Agile Teams, and this can mean that testers spend time creating, executing, monitoring, and maintaining automated tests and results. Because of the heavy use of test automation, a higher percentage of the manual testing on Agile projects tends to be done using experience-based and defect-based techniques such as software attacks, exploratory testing, and error guessing.

Changes

One core Agile principle is that change may occur throughout the project. Therefore, lightweight work product documentation is favored in Agile projects. Changes to existing features have testing implications, especially regression testing implications. The use of automated testing is one way of managing the amount of test effort associated with change.

2.1.2 Project Work Products

Categories of work products [Agile test Syllabus]:

  1. Business-oriented work products that describe what is needed (e.g., requirements specifications) and how to use it (e.g., user documentation)
  2. Development work products that describe how the system is built (e.g., database entity-relationship diagrams), that actually implement the system (e.g., code), or that evaluate individual pieces of code (e.g., automated unit tests)
  3. Test work products that describe how the system is tested (e.g., test strategies and plans), that actually test the system (e.g., manual and automated tests), or that present test results

On Agile projects, we try to minimize documentation and avoid producing vast amounts of it. Instead, the focus is on working software, together with automated tests that demonstrate conformance to requirements.

Sufficient documentation must be provided for business, testing, development, and maintenance activities

In a successful Agile project, a balance is struck between increasing efficiency by reducing documentation and providing sufficient documentation to support business, testing, development, and maintenance activities.

Typical business-oriented work products on Agile projects

  • User Stories (the Agile form of requirements specifications)
  • Acceptance criteria

Example of a User Story (for the key, see earlier).
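
A User Story with its acceptance criteria might look something like the following minimal sketch (the account, amounts, and wording are illustrative assumptions, not the content of the figure):

  As a bank customer,
  I want to withdraw money from my account,
  so that I can pay in cash when cards are not accepted.

  Acceptance criteria:
    • Given a balance of 100, when I withdraw 40, then the balance is 60.
    • Given a balance of 100, when I try to withdraw 150, then the withdrawal is refused and the balance is unchanged.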

Typical developer work products on Agile projects

  • Code
  • Unit test (normally this will be automated)

Unit tests are created incrementally, before each portion of the code is written, in order to provide a way of verifying, once that portion of code is written, whether it works as expected. While this approach is referred to as test first or test-driven development, in reality the tests are more a form of executable low-level design specifications rather than tests.

Example of Unit Test of Bank Account project.
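
As a minimal sketch of what such a test-first unit test might look like (Python's unittest module and the BankAccount class shown are illustrative assumptions; the tests encode the acceptance criteria from the User Story sketch above and would be written before the withdraw code itself):

    import unittest

    class BankAccount:
        """Minimal illustrative account; in a test-first workflow this class
        is written (or extended) only after the tests below exist."""

        def __init__(self, balance=0):
            self.balance = balance

        def withdraw(self, amount):
            if amount > self.balance:
                raise ValueError("insufficient funds")
            self.balance -= amount

    class BankAccountWithdrawalTest(unittest.TestCase):
        def test_withdrawal_reduces_balance(self):
            account = BankAccount(balance=100)
            account.withdraw(40)
            self.assertEqual(account.balance, 60)

        def test_withdrawal_above_balance_is_refused(self):
            account = BankAccount(balance=100)
            with self.assertRaises(ValueError):
                account.withdraw(150)
            self.assertEqual(account.balance, 100)  # balance unchanged

    if __name__ == "__main__":
        unittest.main()

Until the withdraw method exists, these tests fail; once it is implemented they pass and thereafter serve as the executable low-level design specification described above.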

Typical tester work products on Agile projects

  • Manual tests
  • Automated tests
  • Test plans (lightweight)
  • Quality risk catalogues (lightweight)
  • Defect reports and results logs (lightweight)
  • Test metrics (lightweight)

Example: Agile Testing Dashboard in Enterprise Tester.

In regulated, safety critical, distributed, or highly complex projects, more documentation is often required:
On some Agile projects, teams transform User Stories and acceptance criteria into more formal requirements specifications. Vertical and horizontal traceability reports may be prepared to satisfy auditors, regulations, and other requirements.

2.1.3 Test Levels

In waterfall lifecycle models (sequential lifecycles) and the expanded V-model, the test levels are often defined such that the exit criteria of one level are part of the entry criteria for the next level. In the model below, you see five test levels:

  • Unit Testing
  • Component Testing
  • System Testing
  • Maintenance Testing
  • User Acceptance Testing

Example of V Model, showing VERIFICATION and VALIDATION against TIME. Verification involves Planning: Concept, Design, Functional, Technical, Development. Validation involves Testing: Unit Testing, Components, System, Maintenance, User Acceptance.

V-Model

In some iterative models, this rule does not apply: test levels overlap, and requirements specification, design specification, and development activities may overlap with test levels.

Agile Test Levels

Changes to requirements, design, and code happen often in Agile projects. Such changes are normally captured in the product backlog, but they can arrive at any point in an iteration. During an iteration, any given User Story will typically progress sequentially through the following test activities:

  • Unit testing (developer, often automated)
  • Feature acceptance testing, often broken into two activities:
    • Feature verification testing (developer or tester, often automated, based on acceptance criteria)
    • Feature validation testing (developers, testers, and business stakeholders, usually manual; shows usefulness and progress)
  • Regression testing throughout the iteration (via automated unit tests and feature verification tests)
  • System testing (tester, functional and non-functional)
  • Unit integration testing (developer and tester, sometimes not done)
  • System integration testing (tester, sometimes one iteration behind)
  • Acceptance testing (alpha, beta, UAT, OAT, regulatory, and contractual; at the close of each iteration, after each iteration, or after a series of iterations)

Some highlights

Often there is a parallel process of regression testing occurring throughout the iteration. This involves re-running the automated unit tests and feature-verification tests from the current iteration and previous iterations, usually via a Continuous Integration framework.
There may be a system test level, which starts once the first User Story is ready for such testing. This can involve executing functional tests, as well as non-functional tests for performance, reliability, usability, and other relevant test types. Which test types are needed is determined during the grooming (refinement) of the User Stories before the iteration starts. The Agile testing quadrants are a good aid for this analysis, helping to ensure the right quality focus for each User Story.

Example of the Agile Testing Quadrants: Business-Facing versus Technology-Facing, and Supporting the Team versus Critique of Product. Q1: Unit Tests, Component Tests; Q2: Functional Tests, Examples, Story Tests, Prototypes, Simulations; Q3: Exploratory Testing, Usability Testing, UAT, Alpha/Beta; Q4: Performance, Load, Stress, Volume, Security, … non-functional.

Internal alpha tests and external beta tests may occur either at the close of each iteration, after the completion of each iteration, or after a series of iterations. User acceptance tests, operational acceptance tests, regulatory acceptance tests, and contract acceptance tests may likewise occur at the close of each iteration, after the completion of each iteration, or after a series of iterations.

2.1.4 Testing and Configuration Management

Agile projects often rely on automated tools to develop, test, and manage software development. Developers use tools for static analysis, unit testing, and code coverage.

Example of Continuous Integration (CI), showing Source Control, Initiate CI Process, Build, Test, Report, Development, Commit.

Developers continuously check the code and unit tests into a configuration management system, using automated build and test frameworks. These frameworks allow the Continuous Integration of new software with the system, with the static analysis and unit tests run repeatedly as new software is checked in.
Automated tests at all levels, run within these build and test frameworks, are what make Continuous Integration achievable.
Software tests have to be repeated often during development cycles to ensure quality. Every time the source code is modified, tests should be repeated. For each release, the software may need to be tested on all supported operating systems and hardware configurations. Manually repeating these tests is costly and time-consuming. Once created, automated tests can be run repeatedly at no additional cost, and they are much faster than manual tests.
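
As a rough sketch of what one such automated build-and-test step might do on every check-in (the tool names flake8 and pytest and the directory layout are assumptions for illustration; a real CI server such as Jenkins or GitLab CI configures the equivalent steps declaratively):

    import subprocess
    import sys

    # Ordered pipeline steps: static analysis first, then the automated tests.
    STEPS = [
        ("static analysis", ["flake8", "src"]),            # assumed lint tool
        ("unit tests", ["pytest", "tests/unit"]),          # assumed test layout
        ("feature verification tests", ["pytest", "tests/feature"]),
    ]

    def main():
        for name, command in STEPS:
            print(f"Running {name}: {' '.join(command)}")
            if subprocess.run(command).returncode != 0:
                print(f"Build broken at step: {name}")
                sys.exit(1)  # a failing step stops the integration
        print("Build green: the change is safe to integrate")

    if __name__ == "__main__":
        main()

The point is not the script itself but the feedback loop: every commit triggers the same checks, so a regression is caught within minutes of being introduced.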

Example of Test Automation, showing Test Automation Feasibility Study, ROI Study for Test Automation, Evaluate and Select Tool for Test Automation, Identify Automation Framework, Performing Proof of Concept POC for Automation Framework, Develop Automation Framework, Review Automation Test Scripts.

Some of the goals of the automated tests are:

  • Confirm that the build is functioning and installable: if any automated test fails, the team should fix the underlying defect in time for the next code check-in.
  • Help to manage the regression risk associated with the frequent change that often occurs in Agile projects.
  • Save time and money: automated software testing can reduce the time to run repetitive tests from days to hours, a time saving that translates directly into cost savings.
  • Improve accuracy: automated tests perform the same steps precisely every time they are executed and never forget to record detailed results.
  • Increase test coverage: automated software testing can increase the depth and scope of tests to help improve software quality. Lengthy tests that are often avoided during manual testing can be run unattended, and they can even be run on multiple computers with different configurations (see the sketch after this list). Automated tests can look inside an application and check memory contents, data tables, file contents, and internal program states to determine whether the product is behaving as expected. They can easily execute thousands of different complex test cases during every test run, providing coverage that is impossible to achieve with manual tests.
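
The sketch referenced in the last bullet: a parameterized automated test that runs the same check unattended against every supported configuration (the configuration matrix and the login stub are purely illustrative assumptions; pytest's parametrize marker provides the repetition):

    import pytest

    # Illustrative configuration matrix; a real project would derive this
    # from its list of supported platforms.
    CONFIGURATIONS = [
        ("linux", "chrome"),
        ("linux", "firefox"),
        ("windows", "edge"),
    ]

    def login(os_name, browser):
        """Stand-in for driving the real system under test on one configuration."""
        return True

    @pytest.mark.parametrize("os_name,browser", CONFIGURATIONS)
    def test_login_succeeds_on_supported_configurations(os_name, browser):
        assert login(os_name, browser)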

2.1.5 Organizational Options for Independent Testing

When we think about how independent the test team is, it is important to understand that independence is not an either/or condition but a range, and that range exists in Agile Teams as well:

  • At one end of the range lies the absence of independence, where the programmer performs testing within the programming team.
  • Moving toward independence, we find an integrated tester or group of testers working alongside the programmers, but still within and reporting to the development manager.
  • Then moving a little bit more towards independence we might find a team of testers who are independent and outside the development team, but reporting to project management.
  • Near the other end of the continuum lies complete independence. We might see a separate test team reporting into the organization at a point equal to the development or project team. We might find specialists in the business domain (such as users of the system), specialists in technology (such as database experts), and specialists in testing (such as security testers, certification testers, or test automation experts) in a separate test team, as part of a larger independent test team, or as part of a contract, outsourced test team.

In some Agile Teams, developers create many of the tests in the form of automated tests. One or more testers may be embedded within the team, performing many of the testing tasks. However, given those testers’ position within the team, there is a risk of loss of independence and objective evaluation.

Other Agile Teams retain fully independent, separate test teams, and assign testers on-demand during the final days of each Sprint. This can preserve independence, and these testers can provide an objective, unbiased evaluation of the software. However, time pressures, lack of understanding of the new features in the product, and relationship issues with business stakeholders and developers often lead to problems with this approach.

A third option is to have an independent, separate test team where testers are assigned to Agile Teams on a long-term basis, at the beginning of the project, allowing them to maintain their independence while gaining a good understanding of the product and strong relationships with other team members. In addition, the independent test team can have specialized testers outside of the Agile Teams to work on long-term and/or iteration-independent activities, such as developing automated test tools, carrying out non-functional testing, creating and supporting test environments and data, and carrying out test levels that might not fit well within a Sprint.
