(Alternate Subtitle: How the heck do you do estimation and forecasting in Kanban?)
One of the biggest changes for many teams adopting agile is in the way they slice, track and measure their work. They learn about "User Stories" which are sized using "Story Points". The team adds up the number of Story Points in a given "Sprint", which gives them their "Velocity". I don't put the words in quotes to belittle them, but to point out that they are the official terms.
In fact, there are some very good elements to this. The story format can be quite helpful for focusing teams on what it is they are trying to deliver. Further, the idea of breaking the work down into small, incremental chunks is fantastic. And story points have a great attribute: they are relative. You aren't trying to measure the exact amount of work, but rather to make a relative estimate ("This story is twice as hard as this other one, so we'll call this a 4 and that one a 2").
But the one drawback comes when you begin interacting with other teams, or looking at different methodologies. Specifically:
- How do you deal with teams with different sprint lengths?
- How do you know teams are estimating the same way?
- Can you ever really trust velocity numbers across teams?
- How do you forecast estimates across teams?
- What happens when you remove the timebox of the sprint completely?
- How do you "credit" stories which aren't completed during the Sprint?
In fact, there are several solutions to the above problems. For example, you can have an estimation session where members from each team estimate a set of items, and the teams use that as the baseline. You can do other mathematical tricks to make it work, too. So it is possible.
But what if you didn't have to do that? What if there were a way to get a more quantifiable estimate without resorting to detailed hour estimates, without giving up relative estimation, and with accuracy and confidence levels that can be demonstrated?
To understand, let's take a look at a scenario that I see played out quite often. We have a user story which a team has committed to, say to change a user entry screen. The team "completes" the user story. In fact, all of the teams complete their user stories. And yet the application still isn't shipping, there are lots of delays, and it isn't clear why.
Well, at least on the surface it isn't clear why. If you step back, you discover that after the team is finished, the work goes into User Acceptance Testing, and then packaging, and then deployment. In other words, the User Story isn't actually done, even though the team said it was. This may not seem like a huge deal, but let's look at the consequences:
- The team isnâ€™t honoring their commitment to their process
- Forecasting based on the velocity is useless, since there is additional work happening behind the scenes not being accounted for
- The team grows frustrated because they know the velocity number isn't reality
Note that none of the above is the fault of the methodology itself. The team isn't adhering to it, and that gap is causing potentially very large problems. So, if this is our reality, and it isn't possible to shrink the UAT/Packaging/Deploy work to fit within the sprint, what should the team do? And since these are variable-length stories, how can we accurately forecast and estimate?
Luckily we actually have a very simple method which combines relative estimation, Cycle Time and Classes of Service. Let's tackle the first two first. Going back to our story above, the team pulls a story to change the screen. In Scrum, they would discuss the story with the customer, break it down into tasks, and estimate it using Story Points. Instead of story points, let's imagine they used T-Shirt sizes: Extra Small, Small, Medium, Large, Extra Large. So far it isn't that much different; I generally advised teams that anything larger than an 8 shouldn't go in a sprint.
But here's where it gets different. Rather than add up the story points completed during the sprint, they measure the cycle time of the stories. The Cycle Time is the amount of time it takes for a story to go from initially being worked on through shipped, including any loopbacks or rework. Now, let's imagine we charted the T-Shirt Size and Cycle Time for each of our user stories. We might get a chart that looks like this:
[Chart: Cycle Time for each completed story, grouped by T-Shirt Size]
You can see a couple of things. First, we can quickly calculate the Average Cycle Time for a given size story:
- Small – ~4 days
- Medium – ~15 days
- Large – ~27 days
But, more importantly, we know the confidence in those estimates. For Small stories, we know that the minimum is 3 days and the maximum is 5 days, with a standard deviation of only about 1 day. But for Large stories, the minimum is 23 days and the maximum is 31 days, giving us a standard deviation of just over 4 days. So the risk is higher that we won't meet the Average Cycle Time.
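If you want to see the mechanics, here is a minimal Python sketch of that bookkeeping. The story data and dates are invented for illustration (they are not the numbers behind the chart above); the point is simply that cycle time falls out of two timestamps per story, and the per-size statistics fall out of a group-by:

```python
from datetime import date
from statistics import mean, stdev

# Hypothetical stories: (T-shirt size, date work started, date shipped).
# These dates are made up for illustration only.
stories = [
    ("Small",  date(2024, 1, 2),  date(2024, 1, 5)),
    ("Small",  date(2024, 1, 8),  date(2024, 1, 13)),
    ("Medium", date(2024, 1, 3),  date(2024, 1, 18)),
    ("Medium", date(2024, 1, 10), date(2024, 1, 26)),
    ("Large",  date(2024, 1, 2),  date(2024, 1, 25)),
    ("Large",  date(2024, 2, 1),  date(2024, 3, 3)),
]

# Cycle time = days from first being worked on until shipped (rework included).
by_size = {}
for size, started, shipped in stories:
    by_size.setdefault(size, []).append((shipped - started).days)

for size, times in by_size.items():
    print(f"{size}: avg {mean(times):.1f}d, min {min(times)}d, "
          f"max {max(times)}d, stdev {stdev(times):.1f}d")
```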
So now that we have our Average Cycle Time, we can use that for forecasting. If a certain feature consists of 5 medium stories, 3 large stories, and 10 small stories, we can estimate approximately how long it will take us: (5*15) + (3*27) + (10*4) = 196 days. But in reality, it could take as long as (5*16) + (3*31) + (10*5) = 223 days if every story took the maximum amount of time.
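That arithmetic is easy to script as well. A quick sketch using the per-size averages and maximums quoted above (the Medium maximum of 16 days is implied by the worst-case figure):

```python
# Per-size figures taken from the averages and maximums discussed above.
avg_days = {"Small": 4, "Medium": 15, "Large": 27}
max_days = {"Small": 5, "Medium": 16, "Large": 31}

# The example feature: 10 small, 5 medium and 3 large stories.
feature = {"Small": 10, "Medium": 5, "Large": 3}

expected = sum(count * avg_days[size] for size, count in feature.items())
worst = sum(count * max_days[size] for size, count in feature.items())
print(f"Expected: {expected} days, worst case: {worst} days")  # 196 and 223
```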
Whatâ€™s nice about this way of looking at forecasting and estimating is that the calculations are based on the actual time it takes the teams to do the work. And it is easy to calculate across teams, since you can take into account variances and differences as part of the calculations.
If you remember, earlier I mentioned there was one more element to Average Cycle Time forecasting: Classes of Service. Jeff Anderson has a good blog post on the concept, but in a nutshell, classes of service are a recognition that all work is not the same, and should likely not be treated the same.
What this means for us is that we can extend our Average Cycle Time estimation to include Classes of Service. Let's say that we've defined three classes of service: Normal Work, Bug Fixes and Critical Fixes. By simply marking those on our board, we can track the Average Cycle Time for each category of work. In fact, I use this with teams to help them get a better handle on work that requires the assistance of one part of the organization versus work that requires a different part.
Let's imagine that we have a separate department which houses the Database Administrators. When we make a change that requires a database change, it has to be handed off to them. That's a type of work with a different workflow and different policy requirements, meaning it is a good candidate to be labeled as a different class of service. Now our table might look like:
| T-Shirt Size | Normal Work | Database Work |
|--------------|-------------|---------------|
| Small        | 4 days      | 5 days        |
| Medium       | 15 days     | 18 days       |
| Large        | 27 days     | 34 days       |
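Mechanically, nothing new is needed: you keep a separate bucket of cycle times per (T-shirt size, class of service) pair and average each bucket. A small sketch with invented observations that simply echo the table above:

```python
from statistics import mean

# Invented cycle times (in days), keyed by (T-shirt size, class of service).
# The numbers are illustrative and chosen to average out near the table above.
observations = {
    ("Small",  "Normal Work"):   [3, 4, 5],
    ("Small",  "Database Work"): [4, 5, 6],
    ("Medium", "Normal Work"):   [13, 15, 17],
    ("Medium", "Database Work"): [16, 18, 20],
    ("Large",  "Normal Work"):   [23, 27, 31],
    ("Large",  "Database Work"): [30, 34, 38],
}

for (size, service_class), times in observations.items():
    print(f"{size:6} / {service_class:13}: avg {mean(times):.0f} days")
```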
Should you actually break your work and forecasting down to this level? Not necessarily. After all, if you have flexible dates, or can cut scope to account for work changes, then going through all of the work to understand the averages, deviations and probabilities may not be worth it. But if you have a high need for better estimates, then using Average Cycle Time may give you what you are looking for.
One final note: even if you are using Scrum, I still highly recommend tracking the Average Cycle Time of your stories. You don't have to throw out your Story Points, but you might find that you look better wearing a T-Shirt instead.