A huge thanks to Arjay Hinek and Greg Neighbors with Red Hat for having me out at the 2014 Red Hat Agile Conference. Here’s the slides from my talks “Scaling Agile” and “Scaling Agility” (combined into one).


When the Single Responsibility Principle is taught among developers, one aspect – the responsibility – is harped on the most. But what counts as a responsibility of a class or a method? Is it the concepts it touches? The number of classes it uses? The number of methods it calls?

While each of the above is a good question to ask of your method, there is an easier measure given right in Robert Martin’s explanation – a responsibility is a reason to change. And it turns out that we can use something more than just code to determine that and help guide us toward writing good code.

As with many programming topics, code is the best place to start. Let’s look at a basic class in Ruby:

class ReportPrinter
  def print_report
    records = ReportRecords.all
    puts "Records Report"
    puts "(printed #{DateTime.now.to_s})"
    puts "-----------------------------------------"
    records.each do |record|
      puts "Title: #{record.title}"
      puts "   Amount: #{record.total}"
      puts "   Total Participants: #{record.total_count}"
    end
    puts "-----------------------------------------"
    puts "Copyright FooBar Corp, 2012"
  end
end

How many reasons could the method in this class change for?

  • We need to change where we get records from
  • We want to print different information about a record
  • New records have fields other records don’t have (conditional logic)
  • We want to output to a different format
  • We want to make sure line endings are set correctly
  • We need to change the report title
  • We want to change where the date is printed
  • We want to change the separators
  • We want to change the footer
  • We need to print the report in a different language

Ten lines of value-adding code, and at least ten reasons to change. Now, let’s compare that code to this version:

class ReportPrinter
  def print_report
    records = load_records
    header
    separator
    records(records)
    separator
    footer
  end

  def load_records
    ReportRecords.all
  end

  def header
    puts "Records Report"
    puts "(printed #{DateTime.now.to_s})"
  end

  def separator
    puts "-----------------------------------------"
  end

  def records(recs)
    recs.each do |record|
      puts "Title: #{record.title}"
      puts "   Amount: #{record.total}"
      puts "   Total Participants: #{record.total_count}"
    end
  end

  def footer
    puts "Copyright FooBar Corp, 2012"
  end
end


The first thing that should strike you is that this is exactly the same code. Yet, this class is better code because each method has a single responsibility – header prints the header, footer prints the footer, etc. We could continue the extractions by pulling out the duplication of “puts” into a writer method, and then dynamically swap that in.
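To make that last idea concrete, here is one possible sketch of the writer extraction. The injectable writer and the `write` helper are illustrative names, not from the original class:

```ruby
require 'stringio'

# A sketch of extracting the duplicated puts calls into one writer method.
# The writer object is injectable, so output can be redirected (to a file,
# a string buffer, etc.) without changing any of the printing methods.
class ReportPrinter
  def initialize(writer = $stdout)
    @writer = writer
  end

  def footer
    write "Copyright FooBar Corp, 2012"
  end

  private

  # The single place that knows how a line gets emitted.
  def write(line)
    @writer.puts(line)
  end
end

# Capturing output in a string instead of printing to the console:
buffer = StringIO.new
ReportPrinter.new(buffer).footer
buffer.string # => "Copyright FooBar Corp, 2012\n"
```

Because every method now funnels through `write`, swapping the output target is a one-line change at construction time.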

But, I want to focus on the print_report method for a minute. It seems like there are lots of reasons for it to change – it does an awful lot. However, it has an important job – one that I will title a “Sergeant Method”, since that’s the name I got from Alan Shalloway and Scott Bain. To understand its responsibility, let’s step back and look at a way of modeling software from Martin Fowler’s UML Distilled, where Fowler discusses three levels of modeling:

  • Conceptual
  • Specification
  • Implementation

Coupling should happen at the Conceptual level, and never should there be coupling between the Conceptual and Implementation levels. The way I explain these levels is that the Conceptual level is the container that holds the concepts. The Specification describes what should be implemented, and the Implementation level how it should be implemented (see John Daniels’ “Modeling with a Sense of Purpose” http://www.syntropy.co.uk/papers/modelingwithpurpose.pdf for more information).

With this in our mind, we can see that the print_report method has the responsibility of defining the specification of what it means to print a report. In other words, it gives us the algorithm for printing a report, and only needs to change if the algorithm changes. We are free to implement that in any way we choose without having to change the print_report method.

With this terminology, we can now look at the records method and be able to define what smells about it. It is operating at two levels – a specification level (loop over a set of records and print information about each one) and an implementation level (print the title, total and count). We could move the loop up to the print_report method, but that would be a true violation of Single Responsibility – it would then need to know not only the order of operations, but also how to loop over a collection. It’s better to have two methods:

def records(recs)
  recs.each do |record|
    print_record(record)
  end
end

def print_record(record)
  puts "Title: #{record.title}"
  puts "   Amount: #{record.total}"
  puts "   Total Participants: #{record.total_count}"
end

Which are now both operating at the correct level.

Sharpening your eyes to look for both increases in the reasons a class can change and the level of abstraction the class or method is operating at will open a whole new world of identifying code smells, and understanding where to put in divisions to your code to make it easier to grow and scale your code base.


On July 16th I gave a talk for the Triangle DevOps group. Here’s the slides from the presentation. Thanks for having me out!


Do you really have a Scrum Team?

Julia is the ScrumMaster for Team Gumbo. They were recently formed as their organization’s first foray into the agile world. They were all given a two-day Certified ScrumMaster class, and are collocated. But Julia has noticed that after two months of working together, they don’t seem to be hitting their Sprint Commitments – and don’t seem to care!

Many times organizations interested in adding more agile methods to their toolchain start with Scrum, since it seems one of the clearest and easiest to start with. They take a group of people, put them through some training, and tell them to go! But they really struggle, even after they should have been through the normal “Forming, Storming, Norming” phases.

One way of understanding the challenges is explained in Patrick Lencioni’s book The Five Dysfunctions of a Team, which lays out a hierarchical model showing the interdependencies of the dysfunctions. At the very top, we see the problem that Julia was seeing on her team – Inattention to Results. In Scrum, one of the keys to success is the ability to deliver increments of product, usually measured by Nebulous Units of Time (NUTs) like Story Points.

But when teams aren’t delivering increments, or adhering to their commitments, we try to understand why by holding a retrospective. But by looking at Lencioni’s model, we can see that if we haven’t solved more underlying issues, then we’re not going to have great luck:

  • The team isn’t delivering their commitments and don’t seem to care (Inattention to Results) because…
  • Team members aren’t holding each other accountable to the commitments they made (Avoidance of Accountability) because…
  • No one really committed to their sprint goals during sprint planning (Lack of Commitment) because…
  • The members knew there were issues with certain members of the team, but didn’t want to bring it up (Fear of Conflict) because…
  • They were afraid that the person most identified as causing the issues would push back on being identified, and the remaining members wouldn’t get support from management to be able to express themselves (Absence of Trust)

One of the challenges as a ScrumMaster is that, without Trust, Retrospectives are going to be fairly ineffective, since teams (or people working in close proximity being called teams) are not going to want to get to the real root issues – especially if they don’t feel like they will be resolved.

For Team Gumbo, Julia held a series of one-on-ones and discovered the issues of missing trust. She worked with management to remove the problem member from the team, which greatly increased the confidence the team had. Julia noted a fairly rapid improvement with the members sharing challenges and concerns more freely. Ultimately she found that they started performing and delivering much more effectively – and seemed happier doing it!

So next time your team doesn’t seem to care about delivery, ask yourself if that is really the tip of an iceberg that is keeping them from being a truly effective team.


Start With Expressiveness

A great thing about being a programmer today is the wide variety of libraries, packages, and patterns we have access to. That’s fortunate, because building bigger, more distributed systems is more the norm than ever before.

But code is ultimately about communication with other developers (that’s why they are called programming languages). The following three programs output the same thing (credit to Peter Welch for the last two):

puts "Hello World!"
>++++++++[<++++>-] <.>+++++++++++[<++++++++>-]<-.--------.+++
.------.--------.[-]>++++++++[<++++>- ]<+.[-]++++++++++.
Ook. Ook? Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook.
Ook. Ook. Ook. Ook. Ook! Ook? Ook? Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook.
Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook? Ook! Ook! Ook? Ook! Ook? Ook.
Ook! Ook. Ook. Ook? Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook.
Ook. Ook. Ook! Ook? Ook? Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook?
Ook! Ook! Ook? Ook! Ook? Ook. Ook. Ook. Ook! Ook. Ook. Ook. Ook. Ook. Ook. Ook.
Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook! Ook. Ook! Ook. Ook. Ook. Ook. Ook.
Ook. Ook. Ook! Ook. Ook. Ook? Ook. Ook? Ook. Ook? Ook. Ook. Ook. Ook. Ook. Ook.
Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook! Ook? Ook? Ook. Ook. Ook.
Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook? Ook! Ook! Ook? Ook! Ook? Ook. Ook! Ook.
Ook. Ook? Ook. Ook? Ook. Ook? Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook.
Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook! Ook? Ook? Ook. Ook. Ook.
Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook.
Ook. Ook? Ook! Ook! Ook? Ook! Ook? Ook. Ook! Ook! Ook! Ook! Ook! Ook! Ook! Ook.
Ook? Ook. Ook? Ook. Ook? Ook. Ook? Ook. Ook! Ook. Ook. Ook. Ook. Ook. Ook. Ook.
Ook! Ook. Ook! Ook! Ook! Ook! Ook! Ook! Ook! Ook! Ook! Ook! Ook! Ook! Ook! Ook.
Ook! Ook! Ook! Ook! Ook! Ook! Ook! Ook! Ook! Ook! Ook! Ook! Ook! Ook! Ook! Ook!
Ook! Ook. Ook. Ook? Ook. Ook? Ook. Ook. Ook! Ook. Ook! Ook? Ook! Ook! Ook? Ook!
Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook.
Ook. Ook. Ook. Ook. Ook! Ook.

My bet is that I would high-five my past self for only one of these three examples. But while our choice of languages and communication is usually easy to see, how we interface with other code is not always so apparent.

In the original Gang of Four Design Patterns book, one of the key principles was "program to an interface, not an implementation". Oftentimes this gets confused to mean you should create interfaces which your objects implement - but that is literally an implementation detail. What the authors really meant was this:

Start with expressiveness first.

What this really means is - control your own API, the way you want your code to interface. Then adapt theirs to fit yours. While I gave some examples in an earlier blog post on how this can be used to scale component teams, the technique is not limited to just scaling.

As an example, in this great StackOverflow post the poster has a Car object that can be created, updated and viewed. But there are also additional actions, such as the ability to purchase a car.

If we start with expressiveness first, it leads us to think about what really happens when we purchase a car. We aren't actually manipulating the car itself (at least, not yet). Instead, we have some set of operations that, when bundled together, form the concept of purchase. So we would set up the notion that the action of purchasing a car creates a new trigger for this set of operations, leading to the RESTful path:

POST /purchase

Of course, our code should not be littered with RESTful calls. We should instead centralize those into our own API we call. For example, using Ruby's rest-client gem, we can write the following:

require 'rest_client'

class Car
  def purchase
    RestClient.post("#{Config.server}/purchase", self.to_json)
  end
end

So our code base would merely call car.purchase and not have to know anything about REST.

But we can take this one step further, and this is where the real power of expressiveness comes into play. Let's imagine we worked through the above example, and then find out that the vendor we work with provides an API which looks like:

POST /api/v1.1/car/123/purchase

While not the end of the world, that isn't the best way they could structure that. We could modify our Car class above to call the updated structure, but then we are tying ourselves to that interface. Instead, we can continue to use the RESTful interface we think is correct, and use an Adapter to give us the interface we want.

For example, we could set up a tiny Sinatra server that looks like this:

require 'sinatra'
require 'rest-client'

post '/purchase' do
  car = params[:id]
  RestClient.post("#{Config.server}/api/v1.1/car/#{car}/purchase", params)
end

Now we rely on a RESTful service which is a thin adapter around the real service. This can be really helpful if multiple applications we are building all rely on the same RESTful services which are out of our control.

So, if your code is telling you to write something, listen to it and write the code you want to write. Only after you've done that should you go back and adapt it to the nastiness you really have to deal with. You gain expressiveness, readability, and as a bonus, an added layer of separation from your dependencies.

(Note: I don't feel this should end without a brief note about performance. Start with expressiveness, and then optimize for performance. There is no doubt an art to the balance between clean code and defensive programming, where everything is built up front and wrapped to the nth degree. But if we start with the code our problem is telling us to write, we can easily read it to figure out where we can make it faster - and better)


On Friday, I had the wonderful opportunity to present at TriAgile 2014. My talk on The Agile Mindset – Agility Across Your Organization got a lot of great feedback, and I wanted to share the slides here. The key takeaway would be: if we relentlessly and ruthlessly focus on removing the delays in our organization, we’ll begin to see the real benefits of agility that most companies – even ones saying they are doing agile – are not yet seeing. It’s time for the next step to really begin to define what true organizational agility is all about.

Thanks to the awesome Tim Wingfield for the great action shot of me juggling – and eating – apples as part of the talk to highlight one form of agility!


One of the most important patterns in modern web development is the idea of Model-View-Controller – in effect, separating out our data logic from the view of said data, as well as separating out how that data and view get wired up. This works really well when you have a single team, or group of teams, working cross-functionally in the code. As teams begin to scale, some organizations opt to split teams among components that will be assembled together, usually via service calls. This idea of a Service-Oriented Architecture has allowed some amazing advances in the ability for large organizations and teams to be surprisingly nimble in how they build software.

However, sometimes teams are split into component teams which may not work at the same pace as the consuming teams. Or the service to be consumed may come from an external vendor or contractor. Worse, the data may not end up matching the expected API, or there may be data quality problems. These problems can drastically slow down the agility of software teams, especially if they have to wait for fixes from vendors or downstream teams.

Luckily, Alistair Cockburn, in his article on Hexagonal Architecture has provided the vision of a solution for this problem. By providing adapters, we can separate out the coupling to components external to us. Further, we can provide stubbed data, and when we are ready to connect to a real data source, we can use a Remote Facade or Service Layer to connect to it. Let’s take a look at how this would work to enable agility in concurrent teams working on various components of a system.

In this picture, you see a pretty basic three-tier setup. We have a Front-End, such as a web page, that gets data from a Data Services layer, which gets its data from several different data sources, some of which they have control over, and some which they don’t. In this example, we’ll have Team Grasshopper who works on the front-end, Team Sloths who provide the data services, and Vendor Valley who provides the vendor data sources.

The teams have been struggling to rapidly deliver software. Team Grasshopper requires the data to be sent to them to know how to communicate to the end-users. Team Sloths work as quickly as they can to provide those services, but have several other teams to provide for, and find that the data they attempt to pull from the vendor and public data sources isn’t always reliable or correct. Can hexagonal architecture help here?

A trick I learned from Alan Shalloway and Scott Bain was that if you needed an API, but were having trouble getting it, you simply built the API you needed. Since both teams are struggling to get the data they need, let’s start there. Both teams create thin API layers for the information they need to retrieve. These layers are usually known as Domain Models, with a famous implementation being Rails’ Active Record. In the diagram to the right, you can see that Team Grasshopper created an API layer for the communication to the services, and Team Sloths have created three separate API layers for each of the data sources they talk to.

With these APIs in place, each team can hook up a stubbed data provider to the other side, and build away without having to worry about the data on the other side yet. This is the central point of Hexagonal Architecture – once the port to the data is extracted, then multiple ways of providing data can be inserted without the calling system having to change.
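As a small sketch of that idea (all class and method names here are invented for illustration, not taken from the teams’ actual code), the consuming side depends only on a tiny port interface, and a stubbed provider can stand in until the real adapter is wired up:

```ruby
# The "port": consumers depend only on an object that responds to
# fetch_records. A stub and the real service adapter are interchangeable.
class StubbedRecordSource
  def fetch_records
    [{ "title" => "Sample Record", "total" => 100 }]
  end
end

class ServiceRecordSource
  def initialize(client)
    @client = client
  end

  def fetch_records
    @client.get_records # the real call, wired up when the service is ready
  end
end

class FrontEnd
  def initialize(source)
    @source = source # any object responding to fetch_records
  end

  def titles
    @source.fetch_records.map { |record| record["title"] }
  end
end

FrontEnd.new(StubbedRecordSource.new).titles # => ["Sample Record"]
```

Swapping `StubbedRecordSource` for `ServiceRecordSource` later requires no change to `FrontEnd` – which is exactly the point of the port.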

However, it doesn’t end there for our teams, and this is where the power of this architecture really comes into play. As mentioned above, both teams have problems where the downstream services don’t provide the information they need. Team Sloths discovered that the information coming from the vendor data has extra characters prepended to the data, while Team Grasshopper discovered that the service which was supposed to provide 50 records at a time was only providing 1 record per call. They could, of course, go back to the downstream systems and have them fix the problems, but that would slow down development, since they are dependent upon those changes. So what should they do?

The answer lies in the original name of Hexagonal Architecture: Ports and Adapters. In the Design Pattern world, an Adapter is used to make the interface of an existing class work with other interfaces without modifying either’s source code. For our teams, that means that if the API we expected is not present, we simply write a very thin adapter to wire it to the API we put into place. This can be seen in the design to the left. By using these adapters, Team Sloths can strip the extra characters coming from the vendor’s data store before passing the data up through their API. Team Grasshopper could use the adapter to loop the single call 50 times, aggregate the data, and pass it up through the API layer while waiting for Team Sloths to modify the original service call. And once the native call is corrected, the adapter can simply pass the call through to the original service.
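A sketch of Team Grasshopper’s aggregating adapter might look like this (the service and method names are hypothetical): the adapter loops over the one-record-at-a-time call to present the batch interface the team expected.

```ruby
# The downstream service only hands back one record per call.
class SingleRecordService
  def initialize(records)
    @records = records
    @cursor = 0
  end

  def next_record
    record = @records[@cursor]
    @cursor += 1
    record
  end
end

# The adapter aggregates single calls behind the expected batch API.
# Once the downstream call is fixed, this becomes a simple pass-through.
class BatchAdapter
  BATCH_SIZE = 50

  def initialize(service)
    @service = service
  end

  def fetch_batch
    batch = []
    BATCH_SIZE.times do
      record = @service.next_record
      break if record.nil?
      batch << record
    end
    batch
  end
end

BatchAdapter.new(SingleRecordService.new([:a, :b, :c])).fetch_batch # => [:a, :b, :c]
```

Callers see the batch API they designed for, and the looping detail stays quarantined inside the adapter.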

While the overall design may look complicated, each of the layers is designed to be extremely thin and provide only the separation absolutely required. And while they are drawn as separate abstract concepts, the boundaries need not (and probably should not) be web service calls or other heavyweight communication mechanisms. The overall goal is to segregate our dependencies and be able to respond to them while keeping a stable API for internal use.

An additional advantage comes from a testing perspective. Team Grasshopper can write end-to-end integration tests that pass with the adapters and mock data in place – and continue to pass as the mocks are removed and the real data is wired up – all the way down. This can help provide an early confidence boost of having the tests in place, and knowing that they can help catch any errors on the way down.

So if you have concurrent teams which are struggling with waiting on information and data from other teams, you might find that a little adapting can go a long way.


The Triangle of Support for Great Software Teams

One of the key approaches of good software development is building software iteratively and incrementally. Iterating over our process and software gives us a chance to reflect on how we are working, while building increments of functionality allows us to begin to deliver functionality and value (in the form of working software) early. This feedback loop is critical to good software development.

What I see less talked about are the thought process and roles necessary for this kind of approach – especially in large scale systems. We want slices of value which cut through the various layers – Database, Services, UI, etc. That’s the goal of incremental development. But a common pushback comes from the notion that in working this way we throw out the vision, guidance and architecture discussions. I think this couldn’t be further from the truth, but how we implement it is a little different.

Systems Thinking, Business Thinking, Technical Thinking

In the diagram above, there are three key bubbles – Systems Thinking, Business Vision and Technical Vision. Systems Thinking is responsible for How We Work. In a Scrum implementation, this is going to be your ScrumMaster. From a Lean perspective, we’re all responsible for visualizing how we work to improve it.

The second bubble is the Business Vision. This person is responsible for driving out the What We Need of the solution. In Scrum, this is likely your Product Owner.

The final bubble is the Technical Vision. The goal of this role (or roles) is that they set the vision for the various components that will work together – database, services, systems, etc. But instead of dictating a vision, they work closely with the teams with each slice through the system (also known as a tracer bullet) to see if the overall solution is moving towards the vision, and whether the work – or the vision – needs to change.

These three roles form a critical element of vision and leadership for the teams, enabling a highly fertile container where great software can grow. So next time you are working with your teams, think about who is driving the technical vision, the business vision and the vision of how the team should work – and see if explicitly defining some of these improves your process.


The Safety Net of Retrospectives

Organizations are filled with decisions. Well-run organizations enable decisions to be made at any level – ideally at the level closest to the work being done. But sometimes it can be difficult to know if a decision is good or not. As a team, you can talk about the challenges, the benefits, and the goals, but you just don’t know what you don’t know. This fear of not knowing can lead to inaction – especially in cultures which value risk management and mitigation over innovation.

When I come across teams stuck with inaction, I usually see a tool missing from their toolbelts – retrospectives. Because it isn’t so much the action that people are stuck on, but the reaction and knowing what to do about it.

One way I teach teams to do retrospectives on a regular basis is to have 4 quadrants – Well, Less Well, Change and Expect. That last item is key. What did we expect to happen since the last time we reflected? If we put a change in place, did we expect something to happen because of that? Then, if the change was good, it can go in the Well column, and if it wasn’t good, it can go in the Less Well, allowing both to feed into new ideas we can try. We move from being a culture of inaction to one of micro-experimentation.

As an example, I was working with a leadership team earlier this week that had gone through some reshuffling and was taking a look at the way they were currently running their program. One of the questions that came up was whether they needed all of the meetings they were currently having, and specifically if they could combine two meetings (with two different sets of participants) that happened on the same day. I asked two questions:

  1. What do we expect to happen by combining these?
  2. How long should we wait to see if we’re seeing the results?

This allowed them to say that they expected to see an increased level of collaboration and a decreased level of communication delays, that they could reflect on it in 4 weeks to see if the desired impact was happening, and that they could then decide what to try next.

Small steps. Microexperimentation. Engaged participants. That’s a good step towards building – and discovering – a great organization.


Retrospective – Hats and Soup

A couple of weeks ago, I wrote about a large scale retrospective I facilitated to get some rapid insights from a large team in just one hour. This week we were able to bring in a fuller set of the team, but were still under time constraints – just 3 hours. We needed to solve several issues at once:

1) Large scale. We were expecting up to 120 participants

2) Scalable – We knew 120 people wouldn’t be able to stay in one room, so we needed a way to split them into at least 4 subgroups before splitting into smaller groups

3) Rapid – we only had 3 hours, which needed to also include a review of the previous retrospective, as well as time from leadership to be able to ask questions about the insights generated from the game

4) Specific – the last retrospective generated only one item in the “What we did well” category – “Deflecting Issues”. We wanted to explicitly give them time to think about good things that have happened

5) Long-term – we were covering a 6-month period

6) Rapid Filtering – we needed the teams to be able to quickly self-identify issues under their control, and issues which were beyond their control so they could vote on things which could actually be changed

I ended up choosing a combination of 6 Thinking Hats as well as a modified version of Spheres of Influence called Circles and Soup which I found on Diana Larsen’s blog. We ran it the following way:

Purple Hat – Things You’ve Learned: Our goal here was to get people thinking about what new skills they’ve learned while a part of the program. Some people have been on the program for two years, and while there have been lots of challenges, they also have learned a lot of new ways of working. This isn’t actually in 6 Thinking Hats, but was critical to add.

White Hat – Facts: There are lots of conjectures and guesses about things, but we wanted to focus on facts. Some were great (), while some were concerning (“There are 7 Work Days in week”).

Yellow Hat – Good things: What we thought was going to be the hardest part of the retrospective was really insightful. They were only allowed to write good things that had happened over the past 6 months. The teams didn’t use the full 10 minutes, but still generated some great insights.

Black Hat – Bad things: As we expected, the teams did use up the full 10 minutes on this, and then some. But the insights here were things that we could take action on as a leadership team, which is always a great thing. When people say, “I hate this” you can’t do a lot. But when they give specific things, you can fix or mitigate those, which is a great feeling.

Green Hat – Ideas for Improvement: The goal here was to generate actionable items. One concern we had was how to generate actionable items that, well, were actually actionable – meaning, things the organization could do something about. In the preparation, the leadership team was really concerned about how to filter these, and I’ll talk more about that below.

Red Hat – Emotive Statements: In 5 minutes, write two ‘emotive’ statements – things that just come to your mind. The example I gave was Steve Ballmer’s famous “Developers, Developers, Developers” video where he says, “I…..LOVE…..THIS…..COMPANY!”

As I mentioned above, we wanted to be able to let the teams vote and prioritize the things they thought were the biggest items. But we also wanted to filter things the team could actually control. My concern was that I didn’t want that to be a management action – the filtering needed to come from the teams. But given that we only had about 90 minutes to collect and analyze data, how could we do that?

I came up with the following visual chart. The four quadrants around the center were, clockwise from top left, New Skills, Facts, Bad and Good. In the center of the chart, I drew the Circles from Circles and Soup, with the inner circle representing the things the teams could directly control, the middle as the things the team could influence, and the outer circle being “the soup” that the team could only respond to when they found themselves in it.

With the first four hats, I had the teams consolidate the answers into the appropriate quadrant. Behind the scenes two of our coaches worked to consolidate duplicates – something the teams should have done with more time, but we were surprisingly low on space with the amount of items they were generating.

Once we got to the ideas for improvement, I introduced the circles. I told them to think about the circles as they wrote their ideas, and then post them to the appropriate circle. We did the same with the emotive statements – they put them into a circle which was closest to what they felt the statement fit into, or the control they felt over it.

I found this to be extremely powerful – it created a natural filter for showing the teams what they could control and handle. In addition, it appeared to focus their thinking into items they could do, and things they needed help with.

Once we had the Thinking Hats exercise finished, I had the teams dot vote the items they felt were most important. Each participant got two dots with a marker. In addition, if they thought something was really critical, they could grab a dot sticker and put it on the card to highlight it.

As all of this was happening, the key directors from the program were reviewing the board and watching what was coming up. So after the dot vote, I was able to turn the floor over to them to talk directly about some of the things they saw, including questions and some answers. I stressed that it was important not to promise any actions during the Q&A until we had a chance to analyze the data.

In the end, we ended up with over 350 cards of information, and some really great insights into what could be improved across the program and organization. The leadership has already begun taking action on certain items from the retrospective, which is building more trust in the teams that, when it’s within their control, action will be taken. In addition, we achieved the goal of getting the right information from the teams by helping them naturally filter the things under their control, so we could not only respond faster, but also help them see that some things we simply have to respond to when they happen.

Thanks Diana for the great exercise idea, and to Jared Richardson, Paul Mahoney and the other coaches who helped co-facilitate, collect information, and bounce ideas off of.
