Recently there has been a lot of talk in the Agile community about moving away from estimated stories toward a more Kanban-style "pull-based" system. At Agile 2008, Joshua Kerievsky gave a talk on MicroReleases in which he discussed how his team did just that: they base the next release (which is only a couple of days away) on a gut feel of what they can accomplish. However, when you are working with teams that have a much longer release cycle, velocity estimates can be vital to product planning for both the developers and the business.
For example, when I worked at CARFAX we had to coordinate our releases with marketing efforts, campaigns, TV ads, and so on. That meant we needed a projection system we could use to estimate when we'd be done. The business (in our case, the Product Manager) would bring the user stories needed for that release, and we as a team would estimate them by relative difficulty – 1, 2, 4, 8, 16, and so on. We would then add up all of the effort estimates and add two cards – a budget card and a change card – each equal to 10% of the total effort.
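The release-budget arithmetic above is simple enough to sketch in a few lines. This is a rough illustration, not any actual CARFAX tooling; the story numbers are made up, and the 10% figures are the ones described above:

```python
# Sketch of the release budget described above, assuming story estimates
# on the doubling scale (1, 2, 4, 8, 16, ...). All numbers are illustrative.
story_estimates = [2, 4, 8, 1, 16, 4, 2]  # hypothetical stories for a release

total_effort = sum(story_estimates)        # raw effort from the team's estimates
budget_card = round(0.10 * total_effort)   # 10% cushion for estimation error
change_card = round(0.10 * total_effort)   # 10% cushion for scope changes

release_budget = total_effort + budget_card + change_card
print(total_effort, budget_card, change_card, release_budget)  # 37 4 4 45
```

The two extra cards simply make the inevitable slippage and scope change visible on the wall, instead of hiding them inside padded story estimates.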
We worked in weekly iterations, so in the first week we would commit to completing (code complete, QA'd, and business-verified) a certain set of stories. In subsequent weeks, we could "sign up" for only as many story points as we had completed in the previous iteration (a rule also known as "Yesterday's Weather").
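The sign-up rule can be captured in a tiny function. This is a minimal sketch under the assumptions above – the first iteration's commitment is a gut-feel number, and every later iteration may commit only to what was actually completed the time before; the function name and numbers are mine, not from any real planning tool:

```python
# "Yesterday's Weather": next iteration's commitment equals last
# iteration's actual completed points; the very first iteration has no
# history, so it falls back to a gut-feel estimate.
def next_commitment(completed_history, gut_feel):
    """Points the team may sign up for in the next iteration."""
    return completed_history[-1] if completed_history else gut_feel

print(next_commitment([], gut_feel=20))        # first week: gut feel -> 20
print(next_commitment([20, 14], gut_feel=20))  # afterwards: last actual -> 14
```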
What this allowed us to do was project the end date (the part we could control of scope/quality/date) with great accuracy. On 6-month projects, we typically could release the week we said we would.
On the team I'm currently working with, we are dealing with a fixed date, since our release is tied to an entire user conference. In the past, that typically meant the development team was handed a list of specs, did their best to meet it, and spent the last 3 months of a 12-18 month release cycle in the typical "death march" of 14-hour days, weekends, and so on.
One of the first things I did when I came on board was to bring all of the stories and features that were spread across SharePoint sites, Excel spreadsheets, emails, and people's heads into one centralized system – and to make that system authoritative: if developers were working on something not in it, they were working on the wrong thing.
All of the items were estimated (or, in some cases, re-estimated) using relative effort estimates. Our first iteration was planned on a gut feel of what the teams could complete, and it seemed to go very well. The second iteration didn't – mainly because the first iteration consisted of technical tasks, while in the second the agreement that teams earn velocity only once a story is coded, QA'd, and verified by the business came much more into play. One team went from a velocity of 27 in the first iteration to 1 (yes, one) in the second.
However, once we worked through those issues, we turned our focus to the backlogs and burndown charts. Based on each team's velocity, we projected where we would be at the product's cut-off date. What we found was that, on the date we wanted to be code complete, each team would still have 75% of its work remaining. As you can imagine, this was quite a shock to management, even though they had been prepped for the possibility.
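The projection itself is just arithmetic on velocity and the remaining backlog. Here is a rough sketch; the function names and the specific numbers are illustrative (chosen so they reproduce a "75% remaining" situation), not the actual figures from the teams above:

```python
# Rough burndown projection, assuming a steady velocity per weekly iteration.
def weeks_to_complete(remaining_points, velocity):
    """Whole iterations needed to burn down the remaining backlog."""
    return -(-remaining_points // velocity)  # ceiling division

def remaining_at_cutoff(remaining_points, velocity, weeks_until_cutoff):
    """Points still on the backlog when the cut-off date arrives."""
    return max(0, remaining_points - velocity * weeks_until_cutoff)

remaining, velocity, weeks_left = 400, 10, 10  # hypothetical team
left = remaining_at_cutoff(remaining, velocity, weeks_left)
print(f"{left} points ({left / remaining:.0%} of backlog) remain at cut-off")
```

Running the same two functions per team, against each team's own measured velocity, is all it takes to see months in advance whether the date is achievable.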
What happened next, however, is one of the most important things a team can do, and perhaps the make-or-break point for almost any team. Knowing we couldn't adjust cost or the date, we began to work together as a management team to reprioritize the features based on the velocity estimates. Our executives could have simply rejected the velocities and decreed 14-hour days, or ignored the results. Instead, they took the information they were given early in the process and could craft a calm response to it without having to back out of promises.
In other words, in a previous release this information would have been discovered practically at the end of the cycle – long after release plans spelling out the contents had been sent to customers, and after the conference agendas had been set. The team's only option then was to nearly kill themselves to finish everything. Now we can have serious, but calm, discussions about our strategy for truly delivering the most important business value in the time allotted.
We were able to make these decisions because we had solid information in hand about what the teams were capable of and what kind of effort lay before us. Of course, nothing is perfect: velocity may change somewhat, stories may get added, and bugs may be found. But the ability to report to the decision-makers what they can expect, coupled with their willingness to work closely with the teams to deliver business value, can dramatically change how you build software.
IMHO, it was a very wise decision to start estimating (both the coming sprint and the whole release) and tracking velocity from the very beginning.
I know quite a few teams that were reluctant to start estimating early, and then management didn't have a good vehicle for prioritizing or understanding the team's trade-offs.
Unfortunately, tracking velocity pays off only at a larger scale, and unless somebody really wants it, it is often tempting to skip what doesn't bring immediate benefits.