You have a large, complex system comprising multiple subsystems that interact to provide a set of services to end-users. This could be a large financial system or a piece of communications infrastructure. You have perhaps 10 development teams working on different subsystem components, and a system integration team that takes the output from the development teams and then does two things:
- Runs end-to-end testing on the fully integrated system to validate the new functionality
- Runs a full system regression to ensure that the system still performs as expected.
There are two basic approaches to this (plus many variations):
1. The system integration testers are embedded within each of the development teams. Each user story follows a work-flow, for example: Backlog -> Defined -> In Progress -> Completed -> Accepted. ‘Defined’ means that the user story is fully defined, including acceptance criteria, and hence test cases can be created. ‘Completed’ means that coding and unit testing are done on the story, and the story is now available in a branch of code for testing. System testers can take the build and test it end-to-end on a fully integrated system. Upon successful completion of this testing, the story is marked ‘Accepted’. In this model there are no separate phases or hand-offs: testing happens concurrently with development and is triggered by stories reaching the ‘Completed’ state. This model may require a longer iteration length, say 4 weeks, to give the system testers sufficient time to verify all stories end-to-end and to complete a full regression. If a full regression cannot be completed, then at least a ‘core’ regression should be run, with arrangements made to complete the remaining tests at another time – perhaps a full regression requires a 2-sprint cycle.
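The story work-flow above is effectively a small state machine, and teams often enforce it in tooling. As a sketch (the state names come from the article; the transition-checking helper is a hypothetical illustration, not a specific tool's API):

```python
from enum import Enum

class StoryState(Enum):
    BACKLOG = "Backlog"
    DEFINED = "Defined"          # acceptance criteria written; test cases can be created
    IN_PROGRESS = "In Progress"
    COMPLETED = "Completed"      # coded and unit-tested; build available for system test
    ACCEPTED = "Accepted"        # end-to-end testing passed

# Allowed forward transitions in the work-flow described above.
TRANSITIONS = {
    StoryState.BACKLOG: {StoryState.DEFINED},
    StoryState.DEFINED: {StoryState.IN_PROGRESS},
    StoryState.IN_PROGRESS: {StoryState.COMPLETED},
    StoryState.COMPLETED: {StoryState.ACCEPTED},
    StoryState.ACCEPTED: set(),
}

def advance(current: StoryState, target: StoryState) -> StoryState:
    """Move a story to `target`, enforcing the work-flow order."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"Cannot move from {current.value} to {target.value}")
    return target
```

The point of the model is that the `Completed -> Accepted` transition is owned by the embedded system testers, so acceptance is gated on end-to-end verification rather than on a later hand-off.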
2. The system integration testers run their own separate sprints, lagging one full sprint behind the development teams as in the diagram below. This is far from ideal, but we might have a situation where the system integration team has so far automated less than 50% of its library of 3,000 test cases, leaving a very significant manual test effort to get through on each iteration. In this approach the system integration team picks up a completed set of user stories at the beginning of its iteration, and completes testing of all new functionality, plus a full regression, within its cycle. This process repeats at the end of each development sprint. If the sprint length is 3 weeks, it will take 6 weeks to deliver a ‘potentially shippable product increment’ from sprint 1, but a ship-quality release is available every 3 weeks thereafter.
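The release cadence in the lagged model follows directly from the one-sprint offset: the increment from dev sprint n finishes system testing one sprint later. A minimal sketch of that arithmetic (the function name and defaults are illustrative, using the 3-week sprint from the example):

```python
def ship_week(dev_sprint: int, sprint_weeks: int = 3) -> int:
    """Week by which the increment from development sprint `dev_sprint`
    (1-based) has completed system integration testing, given that the
    test team lags one full sprint behind development."""
    # Development finishes at dev_sprint * sprint_weeks; testing takes
    # one further sprint.
    return (dev_sprint + 1) * sprint_weeks
```

So the first ship-quality increment arrives at week 6, and subsequent ones every 3 weeks, matching the schedule described above.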