Agile Testing

The entire agile testing philosophy is based on the goal of having ship-ready code at the end of each iteration, where ‘ship-ready code’ means: 100% feature tested, 100% system/performance/stress tested, and zero open bugs.

This sounds like a very tall order. It requires a fundamentally different approach from the traditional practice of handing off (typically half-baked) code from development to the test organization; there is simply not enough time for that approach to work if we accept the above-stated objective. Here is the key point:

Agile iterations are not mini-waterfalls

Feature and system testing must happen concurrently with development, and for this to work, the development team must make nothing but clean code available to the testers.

Notice also how the agile approach significantly reduces project risk by avoiding the accumulation of unresolved defects – all defects are fixed within the iteration in which they are discovered.

Let’s break this down to see what is required for this to work in practice. Agile testing starts as soon as the first User Story is declared done (not at the end of the sprint!). But for this approach to have any chance of success, re-work must be minimized. By re-work we mean the traditional test and bug-fixing cycle, characteristic of waterfall development, that starts with the hand-off from development to the test organization. There are many definitions of ‘done’ for a user story, but at minimum this means:

  • Code compiles cleanly, with all static analysis warnings removed
  • Code reviewed, with all review issues resolved
  • Story has been unit tested, and ideally the unit tests are automated
  • Test coverage based on unit testing meets some minimum threshold
  • Code and automated unit tests checked into build system, and system builds and passes all unit tests
  • Build passes all predefined build tests

Next, the test team verifies the user story based on its defined acceptance criteria. The majority of stories should be passing at this point. The manufacturing analogy is the production ‘yield’, and we should be striving for the highest possible yield, say > 90%. If the yield is low (and the corresponding re-work high), then we need to dig into the reasons for this, identify root causes, and apply corrective actions to drive the yield higher. Clearly, this will not happen overnight, and may require multiple iterations, if not releases, to get there. There are a couple of additional prerequisites that go along with getting to a high first-pass success rate:

  • A continuous integration environment with a high degree of automation at both the unit test and build sanity levels
  • A high degree of system test automation
  • A continuous improvement mindset where the team routinely dissects test failures and institutes actions to push the bar higher for first-pass test success.
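The yield metric above is simple to track. Here is a minimal sketch (the function name and the 90% target applied in the example are taken from the discussion above; the field values are illustrative):

```python
def first_pass_yield(passed_first_time, total_stories):
    """Fraction of user stories whose acceptance tests passed on the first run."""
    if total_stories == 0:
        return 0.0
    return passed_first_time / total_stories

# Example: 28 of 30 stories in the iteration passed acceptance testing first time.
rate = first_pass_yield(28, 30)
print(f"First-pass yield: {rate:.0%}")  # prints "First-pass yield: 93%"
```

Tracking this number per iteration makes the trend visible, so the team can see whether its root-cause corrective actions are actually driving the yield above the target.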

Each of the above points merits its own discussion which we will get to in future articles.


Continuous Integration

One of the fundamental goals of agile development is to have deployable code at the end of every iteration. Working backwards from that challenge implies that a number of technical practices need to be in place. These technical practices need to support the organization’s definition of ‘done’ at both the story and sprint level. For example:

User Story Done Criteria

  • Story designed/coded/unit tested
  • Unit tests automated (Why? See below)
  • Tested code checked in and built without errors:
    • Static analysis tests run and passed
    • Automated unit tests run and passed
    • (Unit) test coverage measured and meets acceptable threshold
  • Independent validation of user story by QA team
  • User story acceptance criteria met
  • Zero open bugs
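The story-level criteria above amount to an all-or-nothing gate. A minimal sketch, assuming hypothetical field names and an assumed 80% coverage threshold (the text only requires "some minimum threshold"):

```python
def story_is_done(story):
    """True only when every story-level done criterion is satisfied."""
    return all([
        story["static_analysis_warnings"] == 0,  # built without warnings
        story["open_review_issues"] == 0,        # review issues resolved
        story["unit_tests_passed"],              # automated unit tests green
        story["unit_test_coverage"] >= 0.80,     # assumed minimum threshold
        story["build_passed"],                   # checked in, build tests pass
        story["acceptance_criteria_met"],        # independent QA validation
        story["open_bugs"] == 0,                 # zero open bugs
    ])
```

The point of expressing it this way is that "done" is binary: a story with 79% coverage or one open bug is simply not done, and does not count toward the sprint.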

Sprint Done Criteria

  • All user stories done
  • All system tests executed and passed
  • All performance/stress tests executed and passed
  • All regression tests executed and passed
  • Zero open bugs

How on earth are we expected to accomplish all of this in an iteration lasting a maximum of 2-4 weeks?

To make all of this happen, a number of practices must be in place:

  • There is no ‘hand-off’ from the developers to the testers. Story acceptance testing runs concurrently with development. The QA team can begin testing as soon as the first user story has been delivered cleanly through the build system.
  • Re-work must be absolutely minimized. There is simply no time for the classical back-and-forth between QA and development. The vast majority of user stories must work first time. This can only be accomplished by rigorous unit testing.
  • System-level regression and performance testing must run continuously throughout the iteration.
  • Test cases for new user stories must be automated. This requires resources and planning.
  • All changed code must be checked in, built and tested as frequently as possible. The goal is to re-build the system upon every change.
  • Fixing of broken builds must be given the highest priority.
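The "re-build on every change" practice boils down to a polling loop. A minimal sketch, where `get_head` and `build` are placeholders for real SCM and build-server calls:

```python
import time

def watch_scm(get_head, build, interval_s=60, max_polls=None):
    """Poll the SCM head revision; trigger a build on every new check-in."""
    last, polls = None, 0
    while max_polls is None or polls < max_polls:
        head = get_head()
        if head != last:   # new check-in detected
            build(head)
            last = head
        polls += 1
        time.sleep(interval_s)
```

In practice a CI server does this for you (or reacts to post-commit hooks instead of polling), but the loop shows the essential contract: no change goes unbuilt.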

When all of the above is in place we have something referred to as ‘Continuous Integration’. A typical continuous integration configuration is summarized in the following diagram.

Fig. 1 Continuous Integration

In this system, a CI server such as Hudson orchestrates all of the individual sub-systems. Here is a step-by-step summary of how the system works:

  1. Developers check code changes into the SCM system
  2. Hudson constantly polls the SCM system, and initiates a build when new check-ins are detected
  3. Automated unit tests, static analysis tests and build sanity tests are run on the new build
  4. Successful builds are copied to an internal release server, from where they can be loaded into the QA test environment
  5. The QA automated regression and performance tests are run
  6. Test results are reported back to the team
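The steps above form a fail-fast pipeline: each stage runs only if the previous one succeeded. A minimal sketch with stub stage functions standing in for the real tools:

```python
def run_pipeline(revision, stages):
    """Run CI stages in order; a failing stage halts the pipeline."""
    results = {}
    for name, stage in stages:
        ok = stage(revision)
        results[name] = ok
        if not ok:
            break  # stop here; fixing the broken build takes top priority
    return results

# Stubs standing in for the real build/test/deploy tools.
stages = [
    ("build",           lambda rev: True),  # build the new check-in
    ("unit_tests",      lambda rev: True),  # automated unit tests
    ("static_analysis", lambda rev: True),  # static analysis tests
    ("sanity_tests",    lambda rev: True),  # build sanity tests
    ("publish",         lambda rev: True),  # copy to internal release server
    ("qa_regression",   lambda rev: True),  # QA regression/performance suite
]
```

Halting at the first failure is what makes the "fix broken builds first" rule practical: the results report points directly at the stage, and therefore the change, that broke the build.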

Knowing that every change made to an evolving code-base resulted in a correctly built and defect-free image is invaluable to a development team. Inevitably, defects do get created from time to time. However, identifying and correcting these early means that the team will not be confronted with the risk of a large defect backlog near the end of a release cycle, and can be confident in delivering a high quality release on-time.
