Iteration Planning

Let’s summarize where we are so far. When release planning is complete, we should have the following in place:

  • A ranked set of product features
  • Features elaborated into User Stories (at least enough stories to feed the first few sprints)
  • Size estimates done for each User Story (in Story Points)

The next step is to define the development and testing tasks required to complete each story. To do this you need to have a clear definition of ‘Done’ for user stories. Here is an example:

  • Code completed and checked in to the SCM system
  • System builds without error
  • Daily or continuous product regression completed without error
  • Static analysis has no warnings
  • Unit testing complete and automated
  • Code review complete
  • Zero outstanding defects against the user story
  • Product owner has accepted the story

We go to these lengths because the goal is to get each user story to a shippable state, not to accumulate a pile of partially completed work that leads to rework – ‘technical debt’ in agile jargon. That is what being agile means. People new to agile development are frequently surprised when they learn the degree of rigor required to deliver a user story – done means done, ready to ship.

Iteration planning gets us down to a level of detail where each user story is broken into the development and other tasks that must be completed to get the story to ‘Done’. Let’s look at an example and add the tasks required to implement a user story that is part of a User Administration feature:

User_Story_0100: Users can self-register

    • Task_8001: Create new page with registration form
    • Task_8002: Create user class to support all required data (sketched below)
    • Task_8003: Create methods to collect user data and insert into database
    • Task_8004: Create database schema to support user details
    • Task_8005: Code review of all new and modified code
    • Task_8006: Create and execute unit tests for the story
    • Task_8007: Add unit tests to the unit test automation library
    • Task_8008: Create acceptance tests for the story
    • Task_8009: Execute acceptance tests
    • Task_8010: Acceptance test automation
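
To make the breakdown concrete, here is a minimal sketch of what Task_8002’s user class might look like. The field names and checks are illustrative assumptions, not part of the original story:

    from dataclasses import dataclass

    # Illustrative sketch of Task_8002's user class. The fields are
    # assumptions for the example, not a real schema.
    @dataclass
    class User:
        username: str
        email: str
        password_hash: str  # store a hash, never the plain-text password

        def is_valid(self) -> bool:
            # Minimal sanity checks; a real registration flow would also
            # enforce email format, password policy, duplicate checks, etc.
            return bool(self.username) and "@" in self.email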

For each task, we need an estimate (in hours, with two days as a guideline for the upper limit on a single task) and an owner. Once this data has been determined for all user stories, and we have re-confirmed that all of the work fits within the iteration boundaries, we have an iteration plan. For tracking purposes, we can track the overall release by burning down story points, and we can track the progress of an iteration by burning down either story points or hours.
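
As an illustration of hour-based tracking, the sketch below computes the remaining work for a burndown from per-task estimates. All task data here is hypothetical:

    # Hypothetical task data for one story: per-task hour estimates and
    # owners, as produced during iteration planning.
    tasks = {
        "Task_8001": {"estimate_hours": 8, "owner": "dev_a"},
        "Task_8002": {"estimate_hours": 6, "owner": "dev_b"},
        "Task_8003": {"estimate_hours": 12, "owner": "dev_a"},
    }

    completed = {"Task_8002"}  # tasks finished so far in the iteration

    # Burning down hours: remaining work is the sum of estimates for
    # tasks not yet done.
    remaining = sum(
        t["estimate_hours"] for name, t in tasks.items() if name not in completed
    )
    print(f"Remaining work: {remaining} hours")  # -> Remaining work: 20 hours

Plotting this remaining total at the end of each day gives the familiar burndown chart.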

The output of an iteration planning session might look something like:

[Figure: Iteration Planning – sample output of an iteration planning session]

Arguably, once a team has established a stable velocity and has become good at estimating in story points, the additional step of estimating tasks in hours may no longer be necessary. The breakdown of stories into tasks at the iteration level is still required so that the work can be defined and allocated, but all that really matters is that the team has a high probability of delivering all required stories within the sprint boundary.

Let’s say we have decided to tackle 6 of the 11 user stories defined so far in the release backlog in the first iteration. These stories amount to 18 points, comfortably within the team’s established velocity of 20 points. Our updated plan for the next iteration might be represented as follows:

[Figure: Iteration Plan]

Representing the iteration plan this way helps us maintain good visibility of how these stories relate to the features they are derived from. While this iteration is being executed, work continues on refining and elaborating the remaining features in the release. It is a good strategy, though, to try to completely finish individual features before proceeding to new ones. This minimizes feature-level WIP and supports the efforts of system- or integration-test teams to validate entire features end-to-end.


Velocity-Based Iteration Planning

The output of the release planning step is a prioritized list of stories plus size estimates in story points. Velocity-based planning proceeds in 2 steps:

  1. Select user stories in priority order from the release plan and assign them to the iteration. Continue assigning stories until the total story points assigned matches the team’s velocity (sketched in code below).
  2. Decompose each user story into the tasks required to develop and validate it, paying attention to the tasks required by your definition of ‘done’. In short, include every task needed to get a user story to a shippable state. Team members sign up for the tasks.

This approach assumes that the team has a good track record of accurately sizing stories and scoping iterations to match their velocity.
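
Step 1 is essentially a greedy walk down the ranked backlog: keep taking stories until the next one would exceed the team’s velocity. A minimal sketch, with hypothetical story data:

    # Greedy velocity-based selection (step 1). The backlog is assumed to
    # be in priority order; stories and points are hypothetical.
    backlog = [
        ("Story_0100", 5),
        ("Story_0101", 3),
        ("Story_0102", 8),
        ("Story_0103", 5),
        ("Story_0104", 2),
    ]
    velocity = 20

    iteration, assigned = [], 0
    for story, points in backlog:
        if assigned + points > velocity:
            break  # stop at the first story that would overflow the iteration
        iteration.append(story)
        assigned += points

    print(iteration, assigned)
    # -> ['Story_0100', 'Story_0101', 'Story_0102'] 16

Some teams would instead skip the overflowing story and continue down the list looking for smaller ones; either way, the iteration is capped by velocity rather than by optimism.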

Scaling with Multiple Teams

When we have a more complex system under development, one requiring a large engineering effort, we will need to distribute the work between multiple teams. There are many ways to do this, but let’s look at a simple example. Consider a Video On-Demand (VOD) system that resides at the ‘head-end’ of a cable TV operator’s network and gives subscribers the ability to browse, purchase, and stream stored TV shows and movies to their TVs, smartphones, tablets, and PCs. A simplified view of the architecture of this system could be represented as:

[Figure: A VOD System]

The system has 3 major subsystems:

  1. Content Ingest Subsystem
    • Ingest content from content providers and store on storage arrays
    • Create metadata (Title, Genre, Year Released, Running Time, Production Studio, etc.) for each ingested asset – required for content catalog creation (see the sketch after this list)
    • Transcode content – convert content for delivery to multiple device types
    • Encrypt content – provide protection of content against unauthorized use
  2. Content Publishing Subsystem
    • Create catalogs of video products that users can browse and select for viewing
    • Ensure users only see content they are entitled to
  3. Content Delivery Subsystem
    • Deliver content in different formats to various user devices
    • Use different delivery protocols depending on user devices and available network bandwidth
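
As a sketch of the metadata mentioned for the ingest subsystem, an asset record might carry fields like these. The field names follow the examples in the list above; everything else is an assumption:

    from dataclasses import dataclass

    # Illustrative metadata record for an ingested asset; the fields
    # follow the examples in the list above and are not a real schema.
    @dataclass
    class AssetMetadata:
        title: str
        genre: str
        year_released: int
        running_time_minutes: int
        production_studio: str

    asset = AssetMetadata("Example Movie", "Drama", 2012, 104, "Example Studio")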

The most straightforward approach would be to have one team working on each subsystem. This is not a bad arrangement, because each of the three subsystems operates almost independently of the others, sharing data via a database. Furthermore, each subsystem can be tested fairly independently; for example, it is easy to see how the accuracy of asset-metadata creation could be validated on the ingest subsystem without any dependency on the Content Publishing or Content Delivery subsystems.

Going back to the requirements management discussion, in this example part of the process of getting features to the ‘Defined’ state would be to identify which system components need work to deliver the overall feature. When done for all features in a release, the output of this exercise might be summarized as follows:

[Figure: Requirements Allocation to System Components]
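
One simple way to represent such an allocation is a mapping from each feature to the subsystems (and hence teams) it touches. The features below are hypothetical:

    # Hypothetical mapping of features to the VOD subsystems they touch;
    # in practice this falls out of getting each feature to 'Defined'.
    allocation = {
        "Feature_01: Parental controls": {"Content Publishing", "Content Delivery"},
        "Feature_02: New device format": {"Content Ingest", "Content Delivery"},
        "Feature_03: Catalog search": {"Content Publishing"},
    }

    # Which features need work from the Content Delivery team?
    delivery_work = [f for f, subs in allocation.items() if "Content Delivery" in subs]
    print(delivery_work)
    # -> ['Feature_01: Parental controls', 'Feature_02: New device format']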

With multiple teams, our revised iteration plan is going to look something like:

[Figure: Iteration Planning with Multiple Teams]

Note that when planning iterations, it is important to get the highest-ranked features built and demoed as early as possible – at least the ‘core’ stories for those features (leave the bells, whistles, and exception handling to a subsequent iteration). This provides the opportunity for early feedback, not only on what has been built, but also on potential re-prioritization or further refinement of the remaining features.
