
Agile Testing

The entire agile testing philosophy is based on the goal of having ship-ready code at the end of each iteration, where ‘ship-ready code’ means: 100% feature tested, 100% system/performance/stress tested, and zero open bugs.

This sounds like a very tall order, requiring a fundamentally different approach from the traditional practice of development hand-offs (typically of half-baked code) to the test organization. There is simply not enough time for that approach to work if we accept the above-stated objective. Here is the key point:

Agile iterations are not mini-waterfalls

Feature and system testing must happen concurrently with development, and for this to work, the development team must make nothing but clean code available to the testers.

Notice also how the agile approach significantly reduces project risk by avoiding the accumulation of unresolved defects – all defects are fixed within the iteration in which they are discovered.

Let’s break this down to see what is required for this to work in practice. Agile testing starts as soon as the first User Story is declared done (not at the end of the sprint!). But for this approach to have any chance of success, re-work must be minimized. By re-work we mean the traditional test and bug-fixing cycle, characteristic of waterfall development, that starts with the hand-off from development to the test organization. There are many definitions of ‘done’ for a user story, but at minimum this means:

  • Code compiles cleanly, with all static analysis warnings removed
  • Code reviewed, with all review issues resolved
  • Story has been unit tested, and ideally the unit tests are automated
  • Test coverage based on unit testing meets a minimum threshold
  • Code and automated unit tests checked into the build system, and the system builds and passes all unit tests
  • Build passes all predefined build tests
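The checklist above is essentially a gate that every story must pass before testers pick it up. A minimal sketch of such a gate in Python (the function name, parameters, and the 80% coverage threshold are illustrative assumptions, not part of any particular team's definition of done):

```python
# Hypothetical Definition-of-Done gate: each check mirrors one bullet above.
# Names and thresholds are illustrative assumptions.

MIN_COVERAGE = 0.80  # assumed minimum unit-test coverage threshold


def story_is_done(static_warnings: int,
                  open_review_issues: int,
                  unit_tests_passed: bool,
                  coverage: float,
                  build_green: bool,
                  build_tests_passed: bool) -> bool:
    """Return True only if every Definition-of-Done criterion is met."""
    return (static_warnings == 0          # clean compile, no static-analysis warnings
            and open_review_issues == 0   # all review issues resolved
            and unit_tests_passed         # story has been unit tested
            and coverage >= MIN_COVERAGE  # coverage meets the minimum threshold
            and build_green               # checked in, and the system builds
            and build_tests_passed)       # build passes all predefined build tests
```

The point of coding the gate rather than keeping it as a wiki checklist is that it can run automatically on every check-in, so a story can only be declared done when all criteria hold at once.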

Next, the test team verifies the user story based on its defined acceptance criteria. The majority of stories should be passing at this point. The manufacturing analogy is the production ‘yield’, and we should be striving for the highest possible yield, say > 90%. If the yield is low (and the corresponding re-work high), then we need to dig into the reasons for this, identify root causes, and apply corrective actions to drive the yield higher. Clearly, this will not happen overnight, and may require multiple iterations, if not releases, to get there. There are a couple of additional prerequisites that go along with getting to a high first-pass success rate:
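The yield metric itself is simple to track. A sketch, assuming we count a story as "first-pass" if it meets its acceptance criteria without returning to development (the function and the 90% target below come from the figure mentioned above; everything else is an illustrative assumption):

```python
def first_pass_yield(passed_first_time: int, stories_delivered: int) -> float:
    """Fraction of delivered stories that pass acceptance testing on the first attempt."""
    if stories_delivered == 0:
        return 0.0
    return passed_first_time / stories_delivered


# Example: 19 of 20 stories pass first time -> yield of 0.95, above the > 90% target.
current_yield = first_pass_yield(19, 20)
needs_root_cause_analysis = current_yield < 0.90
```

Tracking this number per iteration gives the team an objective trigger for the root-cause analysis described above, rather than a vague sense that "testing is going badly."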

  • A continuous integration environment with a high degree of automation at both the unit test and build sanity levels
  • A high degree of system test automation
  • A continuous improvement mindset, where the team routinely dissects test failures and institutes actions to push the bar higher for first-pass test success
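To make the second prerequisite concrete, acceptance criteria can be written directly as automated tests that run on every build. A toy sketch (the story, the `apply_discount` function, and the criteria are invented for illustration; any xUnit-style framework would serve):

```python
# Hypothetical automated acceptance test for a user story:
# "As a shopper, I get a percentage discount applied to my cart total."
# The feature and criteria below are invented for illustration.

def apply_discount(price: float, percent: float) -> float:
    """Toy feature under test: apply a percentage discount to a price."""
    return round(price * (1 - percent / 100), 2)


# Acceptance criteria expressed as automated checks, runnable on every build:
def test_discount_applied():
    assert apply_discount(100.0, 10) == 90.0


def test_zero_discount_leaves_price_unchanged():
    assert apply_discount(50.0, 0) == 50.0
```

Once the criteria live in the build like this, "verifying the story" is no longer a manual hand-off step, and any regression is caught in the same iteration that introduces it.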

Each of the above points merits its own discussion, which we will get to in future articles.