Happy Friday! This week’s #fasterfridays is a good chance to reflect on our work and our process through the use of retrospectives. Specifically, we’re using them to help teams develop a mechanism for inspecting, experimenting with, changing, and owning their process. If you’re interested in finding out how to apply this to your organization, don’t hesitate to reach out via Twitter or email hello at coryfoy dot com. Have a great weekend!
Transcript:
Happy Friday, and welcome to another edition of FASTER Fridays! The end of a week is always a good time for reflection, and in today’s video I want to share how I teach teams to use cadenced retrospectives as an experimentation framework for their own process.
First, if you’re not familiar with Retrospectives (or After Action Reviews, or maybe even “Postmortems”) I’d highly recommend picking up the great book Agile Retrospectives by Diana Larsen and Esther Derby. There’s no single way to run a retrospective, and it was a game-changing book for me in thinking about setting up, facilitating, and running retrospectives.
One of the challenges I run into with teams is that if they try something – or are told to try something – that becomes their process forever, and ever, amen. This often happens because they don’t have an explicit mechanism for reviewing their processes and operational models, or for trying out and reporting on different ways of working. So instead, I teach them not to think of it as “Teh Process” but instead to think of it as an experiment to explore a hypothesis. Here’s how it works.
I set up what looks like a common board for Retrospectives, with four columns: what did we expect to happen, what went well, what didn’t go so well, and what do we want to try. Let’s look at an example team and try filling out the board.
They start by saying they expected to release version 3.14 this week. What went well is that they shipped it! But what didn’t go so well is that they had a lot of critical bugs immediately after shipping, so they had to rapidly follow 3.14 with 3.15. [Pi]. The defects were caused by a miscommunication between the code reviewed by the business and what actually got shipped, and that happened for two reasons. First, a section of code was refactored, but the test coverage missed some edge cases – so even though everything passed, the tests weren’t actually exercising all of the critical logic. Second, the business requested an additional last-minute feature that the team thought would be easy to put in, but they missed that it modified a data variable used by a later calculation, which was now off.
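To make that first failure mode concrete, here’s a minimal sketch – the function, names, and numbers are all hypothetical, not from this team’s actual codebase – of how a refactoring can pass every existing test while dropping a branch of critical logic:

```python
# Hypothetical sketch: a refactoring that passes the existing test
# suite but silently drops a branch the suite never exercised.

def shipping_fee(weight_kg: float) -> float:
    """Original: flat fee, surcharge above 20 kg, free above 100 kg."""
    if weight_kg > 100:
        return 0.0
    if weight_kg > 20:
        return 5.0 + 0.5 * (weight_kg - 20)
    return 5.0

def shipping_fee_refactored(weight_kg: float) -> float:
    """Refactored for clarity -- but the free-shipping tier was lost."""
    surcharge = max(0.0, 0.5 * (weight_kg - 20))
    return 5.0 + surcharge

# The existing suite only covered the first two tiers, so it still passes:
assert shipping_fee_refactored(10) == shipping_fee(10)  # both 5.0
assert shipping_fee_refactored(30) == shipping_fee(30)  # both 10.0
# Nothing ever tested weight > 100, so this regression goes unnoticed:
# shipping_fee_refactored(150) -> 70.0, but shipping_fee(150) -> 0.0
```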
Now, does this mean refactoring is a bad thing to do, or that adding small features even late in the game is a bad thing to do? Should we simply say we shouldn’t do that? No! We need to think beyond that to the roots of the issue. We’ve already modified the unit tests to catch the boundary cases, and we’ve modified our integration tests to catch the calculation cases. We throw out a bunch of ideas, but ultimately settle on two:
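Continuing the hypothetical example above (and assuming the functions from that sketch are in scope), the kind of boundary-case unit test the team might add looks something like this – note that it fails against the buggy refactoring, which is exactly what we want to happen before release:

```python
import unittest

class ShippingFeeBoundaryTests(unittest.TestCase):
    """Edge cases the original suite missed (hypothetical example)."""

    def test_free_shipping_over_100kg(self):
        # Fails against shipping_fee_refactored above (it returns 70.0),
        # which is the point: this would have caught the regression.
        self.assertEqual(shipping_fee_refactored(150), 0.0)

    def test_tier_boundaries(self):
        self.assertEqual(shipping_fee_refactored(20), 5.0)    # no surcharge yet
        self.assertEqual(shipping_fee_refactored(100), 45.0)  # top of surcharge tier

if __name__ == "__main__":
    unittest.main()
```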
1) We have a policy that refactorings where all the tests pass don’t require a code review – but in this case, a review would likely have caught the problem. So we want to try modifying our policy so that all code requires a pull request reviewed by a second team member.
2) The data calculation bug would have been caught by the data team, but they weren’t involved in the discussion because of how late in the game it was. So we’ll set a policy that we run any change that could impact data past them for input – or hold the change if we can’t get that input in a timely fashion. (One way to make both policies mechanical is sketched just after this list.)
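For a team hosted on GitHub, one way to turn both policies into something mechanical rather than aspirational is a CODEOWNERS file combined with branch protection that requires a code-owner review before merge. The org name, team handles, and paths below are assumptions for illustration, not anything from the video:

```
# .github/CODEOWNERS (hypothetical org, teams, and paths)
# With "Require review from Code Owners" enabled on the main branch,
# no pull request merges without approval from a listed owner.

# Policy 1: every change gets a second reviewer from the app team.
*                  @example-org/app-team

# Policy 2: changes touching data calculations also loop in the data
# team. The last matching pattern wins, so both teams are listed here.
src/calculations/  @example-org/app-team @example-org/data-team
```

The tooling itself isn’t the important part, though – what matters is that the policy is an explicit, reversible experiment the team can inspect at the next retrospective.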
Now here’s where things get interesting. We agree to try both of them for three cycles – with two-week cycles, that’s six weeks. In our next retrospective, we talk about the two new processes, but in the context of what we expected to happen.
We expected that the code reviews on refactorings would catch some defects, but would also mean more time spent in review and a delay in features.
We also expected that having to loop the data team into any data-impacting change would potentially lead to fewer cross-team defects, but would cause a significant delay in work, since they aren’t always as available.
What went well is that we had two cases where the code review caught defects the tests missed. In addition, we found that it didn’t significantly impact the time we spent on reviews.
What didn’t go so well is that two stories that potentially included data work were delayed by several days waiting for the data team’s review. We also hadn’t notified the data team of the change we were making, so they weren’t prepared.
But going back to what went well: the data team agreed to reserve capacity for our requests, with a turnaround time of less than two hours.
So what we want to try is another round of both the code review policy and the data team review.
And so at the next retrospective the team might find that the code reviews still went well, so they adopt that policy permanently. The data team’s review times dropped, but the reviews uncovered a significant number of missing integration tests no one had thought about before. So the team wants to try fixing those tests this cycle, to see if they can reduce the number of items they need to refer to the data team.
What I like about this pattern is that the team stays in control of their process. They have both a mechanism and a forum for trying new things, and when they try something new, they understand that the result becomes an input for the next cycle’s retrospective.
If you’re interested in more, I go a little more in-depth in a separate blog post on retrospectives. And again, I’ll highly recommend the book “Agile Retrospectives” as well as the book “Innovation Games” by Luke Hohmann.
Hope you’ve enjoyed this. Feel free to check out all of the FASTER Friday videos on my blog, and if you’d like more information do reach out on Twitter at @Cory_foy or via email at hello at coryfoy dot com. And be sure to come back next Friday for another FASTER Fridays video!