One of the fundamental goals of agile development is to have deployable code at the end of every iteration. Working backwards from that challenge implies that a number of technical practices need to be in place. These technical practices need to support the organization’s definition of ‘done’ at both the story and sprint level. For example:
User Story Done Criteria
- Story designed/coded/unit tested
- Unit tests automated (Why? See below)
- Tested code checked in and built without errors:
  - Static analysis tests run and passed
  - Automated unit tests run and passed
  - (Unit) test coverage measured and meets acceptable threshold
- Independent validation of user story by QA team
- User story acceptance criteria met
- Zero open bugs
Sprint Done Criteria
- All user stories done
- All system tests executed and passed
- All performance/stress tests executed and passed
- All regression tests executed and passed
- Zero open bugs
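Several of these criteria lend themselves to an automated gate rather than a manual checklist. Below is a minimal sketch in Python; the 80% coverage threshold and the input fields are illustrative assumptions, not values from the original text:

```python
# Sketch of an automated "done" gate for a user story.
# The 80% threshold and the inputs are illustrative assumptions.

COVERAGE_THRESHOLD = 0.80  # acceptable unit-test coverage


def story_is_done(unit_tests_passed: bool,
                  static_analysis_passed: bool,
                  coverage: float,
                  open_bugs: int) -> bool:
    """Return True only when every automatable 'done' criterion holds."""
    return (unit_tests_passed
            and static_analysis_passed
            and coverage >= COVERAGE_THRESHOLD
            and open_bugs == 0)


# A story with 85% coverage and no open bugs passes the gate;
# a single open bug fails it.
print(story_is_done(True, True, 0.85, 0))  # True
print(story_is_done(True, True, 0.85, 1))  # False
```

In practice such a gate would sit in the build pipeline, so a story cannot be marked done by hand while a criterion is still failing.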
How on earth are we expected to accomplish all of this in an iteration lasting a maximum of 2-4 weeks?
To make all of this happen, a number of practices must be in place:
- There is no ‘hand-off’ from the developers to the testers. Story acceptance testing runs concurrently with development. The QA team can begin testing as soon as the first user story has been delivered cleanly through the build system.
- Re-work must be absolutely minimized. There is simply no time for the classical back-and-forth between QA and development. The vast majority of user stories must work the first time, which can only be accomplished by rigorous unit testing.
- System-level regression and performance testing must run continuously throughout the iteration.
- Test cases for new user stories must be automated. This requires resources and planning.
- All changed code must be checked in, built, and tested as frequently as possible. The goal is to re-build the system upon every change.
- Fixing of broken builds must be given the highest priority.
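The "rigorous unit testing" these practices depend on means automated tests that run on every build, not manual checks. A minimal sketch using Python's built-in unittest module; the `apply_discount` function is a hypothetical piece of production code, invented here for illustration:

```python
import unittest


def apply_discount(price: float, percent: float) -> float:
    """Hypothetical production code under test."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


class ApplyDiscountTest(unittest.TestCase):
    """Automated unit tests, run by the build system on every check-in."""

    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_zero_discount(self):
        self.assertEqual(apply_discount(50.0, 0), 50.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(10.0, 150)

# Run from the build script with: python -m unittest
```

Because tests like these run automatically on every build, a story that breaks existing behaviour is caught before it ever reaches QA, which is what keeps re-work to a minimum.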
When all of the above is in place we have something referred to as ‘Continuous Integration’. A typical continuous integration configuration is summarized in the following diagram.
In this system, a CI server such as Hudson orchestrates all of the individual sub-systems. Here is a step-by-step summary of how it works:
- Developers check code changes into the SCM system
- Hudson constantly polls the SCM system, and initiates a build when new check-ins are detected.
- Automated unit tests, static analysis tests and build sanity tests are run on the new build
- Successful builds are copied to an internal release server, from where they can be loaded into the QA test environment
- The QA automated regression and performance tests are run
- Test results are reported back to the team
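The polling cycle above can be sketched as a simple loop. The SCM, builder, release-server and reporter objects here are hypothetical stand-ins for the real tools that Hudson coordinates; only the control flow is taken from the steps above:

```python
# One pass of the CI loop: poll, build, test, publish, report.
# All four collaborators are hypothetical interfaces standing in for
# the real SCM, build system, release server and notification channel.

def run_ci_cycle(scm, builder, release_server, reporter, last_revision):
    """Check for new work and, if found, build and report on it."""
    revision = scm.latest_revision()
    if revision == last_revision:
        return last_revision              # nothing new checked in; no build

    build = builder.build(revision)       # compile, unit tests, static analysis
    if build.succeeded:
        release_server.publish(build)     # QA loads builds from here for
                                          # regression and performance tests
    reporter.report(build)                # results go back to the team
    return revision

# A real deployment would wrap this in a polling loop, e.g.:
#   while True:
#       last = run_ci_cycle(scm, builder, release_server, reporter, last)
#       time.sleep(60)
```

Note that the reporter is called on failed builds too: broken builds are precisely the events the team needs to hear about first.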
Knowing that every change made to an evolving code-base results in a correctly built and defect-free image is invaluable to a development team. Inevitably, defects do get created from time to time. However, identifying and correcting these early means that the team will not be confronted with the risk of a large defect backlog near the end of a release cycle, and can be confident in delivering a high quality release on-time.