On July 16th I gave a talk for the Triangle DevOps group. Here are the slides from the presentation. Thanks for having me out!

{ 0 comments }

Do you really have a Scrum Team?

Julia is the ScrumMaster for Team Gumbo. They were recently formed as their organization’s first foray into the agile world. They were all given a two-day Certified ScrumMaster class, and are collocated. But Julia has noticed that after two months of working together, they don’t seem to be hitting their Sprint Commitments – and don’t seem to care!

Many times, organizations interested in adding more agile methods to their toolchain start with Scrum, since it seems to be one of the clearest and easiest places to start. They take a group of people, put them through some training, and tell them to go! But the teams often really struggle, even after they should have been through the normal “Forming, Storming, Norming” phases.

One way of understanding these challenges is explained in the book The Five Dysfunctions of a Team, in which Lencioni lays out a hierarchical model showing the interdependencies of the dysfunctions. At the very top, we see the problem that Julia was seeing on her team – Inattention to Results. In Scrum, one of the keys to success is the ability to deliver increments of product, usually measured by Nebulous Units of Time (NUTs) like Story Points.

When teams aren’t delivering increments, or adhering to their commitments, we try to understand why by holding a retrospective. But Lencioni’s model shows that if we haven’t solved the more fundamental underlying issues, we’re not going to have much luck:

  • The team isn’t delivering their commitments and don’t seem to care (Inattention to Results) because…
  • Team members aren’t holding each other accountable to the commitments they made (Avoidance of Accountability) because…
  • No one really committed to their sprint goals during sprint planning (Lack of Commitment) because…
  • The members knew there were issues with certain members of the team, but didn’t want to bring it up (Fear of Conflict) because…
  • They were afraid that the person most identified as causing the issues would push back on being identified, and the remaining members wouldn’t get support from management to be able to express themselves (Absence of Trust)

One of the challenges as a ScrumMaster is that, without Trust, Retrospectives are going to be fairly ineffective, since teams (or people working in close proximity being called teams) are not going to want to get to the real root issues – especially if they don’t feel like they will be resolved.

For Team Gumbo, Julia held a series of one-on-ones and discovered the issues of missing trust. She worked with management to remove the problem member from the team, which greatly increased the confidence the team had. Julia noted a fairly rapid improvement with the members sharing challenges and concerns more freely. Ultimately she found that they started performing and delivering much more effectively – and seemed happier doing it!

So the next time your team doesn’t seem to care about delivery, ask yourself if that is really the tip of an iceberg that is keeping them from being a truly effective team.

{ 0 comments }

Start With Expressiveness

A great thing about being a programmer today is the wide variety of libraries, packages, and patterns we have access to. Which is great, because building bigger, more distributed systems is now the norm.

But code is ultimately about communication with other developers (that’s why they are called programming languages). The following three programs output the same thing (credit to Peter Welch for the last two):

puts "Hello World!"
>+++++++++[<++++++++>-]<.>+++++++[<++++>-]<+.+++++++..+++.[-]
>++++++++[<++++>-] <.>+++++++++++[<++++++++>-]<-.--------.+++
.------.--------.[-]>++++++++[<++++>- ]<+.[-]++++++++++.
Ook. Ook? Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook.
Ook. Ook. Ook. Ook. Ook! Ook? Ook? Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook.
Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook? Ook! Ook! Ook? Ook! Ook? Ook.
Ook! Ook. Ook. Ook? Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook.
Ook. Ook. Ook! Ook? Ook? Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook?
Ook! Ook! Ook? Ook! Ook? Ook. Ook. Ook. Ook! Ook. Ook. Ook. Ook. Ook. Ook. Ook.
Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook! Ook. Ook! Ook. Ook. Ook. Ook. Ook.
Ook. Ook. Ook! Ook. Ook. Ook? Ook. Ook? Ook. Ook? Ook. Ook. Ook. Ook. Ook. Ook.
Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook! Ook? Ook? Ook. Ook. Ook.
Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook? Ook! Ook! Ook? Ook! Ook? Ook. Ook! Ook.
Ook. Ook? Ook. Ook? Ook. Ook? Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook.
Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook! Ook? Ook? Ook. Ook. Ook.
Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook.
Ook. Ook? Ook! Ook! Ook? Ook! Ook? Ook. Ook! Ook! Ook! Ook! Ook! Ook! Ook! Ook.
Ook? Ook. Ook? Ook. Ook? Ook. Ook? Ook. Ook! Ook. Ook. Ook. Ook. Ook. Ook. Ook.
Ook! Ook. Ook! Ook! Ook! Ook! Ook! Ook! Ook! Ook! Ook! Ook! Ook! Ook! Ook! Ook.
Ook! Ook! Ook! Ook! Ook! Ook! Ook! Ook! Ook! Ook! Ook! Ook! Ook! Ook! Ook! Ook!
Ook! Ook. Ook. Ook? Ook. Ook? Ook. Ook. Ook! Ook. Ook! Ook? Ook! Ook! Ook? Ook!
Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook.
Ook. Ook. Ook. Ook. Ook! Ook.

My bet is, I'm most likely to high-five my past self for only one of these three examples. But while our choice of languages and communication style is usually easy to see, how we interface with other code is not always so obvious.

In the original Gang of Four Design Patterns book, one of the key principles was "Program to an interface, not an implementation". Oftentimes this gets confused to mean you should create interfaces which your objects implement - but that is literally an implementation detail. What the authors really meant was this:

Start with expressiveness first.

What this really means is - control your own API, the way you want your code to interface. Then adapt theirs to fit yours. While I gave some examples in an earlier blog post on how this can be used to scale component teams, the technique is not limited to just scaling.

As an example, in this great StackOverflow post the poster has a Car object that can be created, updated and viewed. But there are also additional actions, such as the ability to purchase a car.

If we start with expressiveness first, it leads us to think about what really happens when we purchase a car. We aren't actually manipulating the car itself (at least, not yet). Instead, we have some set of operations that when bundled together form the concept of purchase. So we would set up the notion that the action of purchasing a car creates a new trigger for these set of operations, leading to the RESTful path:

POST /purchase

Of course, our code should not be littered with RESTful calls. We should instead centralize those into our own API we call. For example, using Ruby's rest-client gem, we can write the following:

require 'rest-client'

class Car
  def purchase
    # POST this car's data to our own purchase endpoint
    RestClient.post("#{Config.server}/purchase", self.to_json)
  end
end

So our code base would merely call car.purchase and not have to know anything about REST.
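One nice side effect of centralizing the call is that the HTTP client becomes a seam we can swap out in tests. Here's a minimal sketch of that idea – the injected client and the `RecordingClient` fake are my own illustration, not part of the example above:

```ruby
class Car
  # Accept any client that responds to #post (RestClient in production).
  def initialize(client)
    @client = client
  end

  def purchase
    @client.post("/purchase", to_json)
  end

  def to_json
    '{"car":"demo"}'
  end
end

# A fake client that records calls instead of making real HTTP requests.
class RecordingClient
  attr_reader :calls

  def initialize
    @calls = []
  end

  def post(path, body)
    @calls << [path, body]
  end
end

client = RecordingClient.new
Car.new(client).purchase
client.calls  # => [["/purchase", "{\"car\":\"demo\"}"]]
```

The rest of the code base still just calls car.purchase, and our tests never touch the network.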

But we can take this one step further, and this is where the real power of expressiveness comes into play. Let's imagine we worked through the above example, and then find out that the vendor we work with provides an API which looks like:

POST /api/v1.1/car/123/purchase

While not the end of the world, that isn't the best way they could structure that. We could modify our Car class above to call the updated structure, but then we are tying ourselves to that interface. Instead, we can continue to use the RESTful interface we think is correct, and use an Adapter to give us the interface we want.

For example, we could set up a tiny Sinatra server that looks like this:

require 'sinatra'
require 'rest-client'

post '/purchase' do
  car = params[:id]
  # Forward to the vendor's deeper URL structure, keeping our own API clean
  RestClient.post("#{Config.server}/api/v1.1/car/#{car}/purchase", params)
end

Now we rely on a RESTful service which is a thin adapter around the real service. This can be really helpful if multiple applications we are building all rely on the same RESTful services which are out of our control.

So, if your code is telling you to write something, listen to it and write the code you want to write. Only after you've done that should you go back and adapt it to the nastiness you really have to deal with. You gain expressiveness, readability, and as a bonus, an added layer of separation from your dependencies.

(Note: I don't feel this should end without a brief note about performance. Start with expressiveness, and then optimize for performance. There is no doubt an art to the balance between clean code and defensive programming, where everything is built up front and wrapped to the nth degree. But if we start with the code our problem is telling us to write, we can easily read it to figure out where we can make it faster - and better)

{ 1 comment }

On Friday, I had the wonderful opportunity to present at TriAgile 2014. My talk on The Agile Mindset – Agility Across Your Organization got a lot of great feedback, and I wanted to share the slides here. The key takeaway would be: if we relentlessly and ruthlessly focus on removing the delays in our organization, we’ll begin to see the real benefits of agility that most companies – even ones saying they are doing agile – are not yet seeing. It’s time for the next step to really begin to define what true organizational agility is all about.

Thanks to the awesome Tim Wingfield for the great action shot of me juggling – and eating – apples as part of the talk to highlight one form of agility!

{ 0 comments }

One of the most important patterns to come from modern web development is the idea of Model-View-Controller – in effect, separating out our data logic from the view of said data, as well as the separation of how that data and view gets wired up. This works really well when you have a single team, or group of teams, working cross-functionally in the code. As teams begin to scale, some organizations opt to split teams among components that will be assembled together, usually via service calls. This idea of a Service-Oriented Architecture has allowed some amazing advances in the ability for large organizations and teams to be surprisingly nimble in how they build software.

However, sometimes teams are split into component teams which may not work at the same pace as the consuming teams. Or the service to be consumed may come from an external vendor or contractor. Worse, the data may not end up matching the expected API, or there may be data quality problems. These problems can drastically slow down the agility of software teams, especially if they have to wait for fixes from vendors or downstream teams.

Luckily, Alistair Cockburn, in his article on Hexagonal Architecture, has provided the vision of a solution for this problem. By providing adapters, we can separate out the coupling to components external to us. Further, we can provide stubbed data, and when we are ready to connect to a real data source, we can use a Remote Facade or Service Layer to connect to it. Let’s take a look at how this would work to enable agility in concurrent teams working on various components of a system.

In this picture, you see a pretty basic three-tier setup. We have a Front-End, such as a web page, that gets data from a Data Services layer, which gets its data from several different data sources, some of which they have control over, and some which they don’t. In this example, we’ll have Team Grasshopper who works on the front-end, Team Sloths who provide the data services, and Vendor Valley who provides the vendor data sources.

The teams have been struggling to rapidly deliver software. Team Grasshopper requires the data to be sent to them to know how to communicate to the end-users. Team Sloths work as quickly as they can to provide those services, but have several other teams to provide for, and find that the data they attempt to pull from the vendor and public data sources isn’t always reliable or correct. Can hexagonal architecture help here?

A trick I learned from Alan Shalloway and Scott Bain was that if you needed an API, but were having trouble getting it, you simply built the API you needed. Since both teams are struggling to get the data they need, let’s start there. Both teams create thin API layers for the information they need to retrieve. These layers are usually known as Domain Models, with a famous implementation being Rails’ Active Record. In the diagram to the right, you can see that Team Grasshopper created an API layer for the communication to the services, and Team Sloths have created three separate API layers for each of the data sources they talk to.

With these APIs in place, each team can hook up a stubbed data provider to the other side, and build away without having to worry about the data on the other side yet. This is the central point of Hexagonal Architecture – once the port to the data is extracted, then multiple ways of providing data can be inserted without the calling system having to change.
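As a minimal sketch of this idea – with class names invented purely for illustration – the port can be nothing more than an agreed-upon interface, with a stubbed source plugged in until the real one is ready:

```ruby
# A stubbed data source that satisfies the port's contract (#records),
# letting the consuming team build and test before the vendor is wired up.
class StubVendorSource
  def records
    [{ id: 1, name: "Widget" }, { id: 2, name: "Gadget" }]
  end
end

# The service depends only on the port - anything that responds to
# #records - never on a concrete data source.
class DataService
  def initialize(source)
    @source = source
  end

  def record_names
    @source.records.map { |r| r[:name] }
  end
end

service = DataService.new(StubVendorSource.new)
service.record_names  # => ["Widget", "Gadget"]
```

When the real source arrives, it just needs to honor the same contract, and DataService never changes.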

However, it doesn’t end there for our teams, and this is where the power of this architecture really comes into play. As mentioned above, both teams have problems where the downstream services don’t provide the information they need. Team Sloths discovered that the information coming from the vendor data has extra characters prepended to the data, while Team Grasshopper discovered that the service which was supposed to provide 50 records at a time was only providing 1 record per call. They could, of course, go back to the calling systems and have them fix the problems, but that would slow down development, since they are dependent upon those changes. So what should they do?

The answer lies in the original name of Hexagonal Architecture: Ports and Adapters. In the Design Pattern world, an Adapter is used to make the interface of an existing class work with other interfaces without modifying either’s source code. For our teams, that means that if the API we expected is not present, we simply write a very thin adapter to wire it to the API we put into place. This can be seen in the design to the left. By using these adapters, Team Sloths can strip the extra characters coming from the vendor’s data store before passing the data up through their API. Team Grasshopper could use the adapter to loop the single call 50 times, aggregate the data, and pass it up through the API layer while waiting for Team Sloths to be able to modify the original service call. And once the native call is corrected, the adapter can simply pass the call through to the original service.
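A sketch of Team Sloths' cleaning adapter might look like the following. The two-character prefix and all the class names here are assumptions for illustration, not the actual vendor data:

```ruby
# Stands in for the vendor source whose data arrives with junk prepended.
class RawVendorSource
  def records
    ["XXalpha", "XXbeta"]
  end
end

# Exposes the same #records interface, so callers can't tell the
# difference, but strips the extra characters before passing data up.
class CleaningAdapter
  def initialize(source)
    @source = source
  end

  def records
    @source.records.map { |r| r.sub(/\AXX/, "") }
  end
end

adapter = CleaningAdapter.new(RawVendorSource.new)
adapter.records  # => ["alpha", "beta"]
```

If the vendor later fixes their data, the adapter shrinks to a passthrough, and nothing above it has to change.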

While the overall design may look complicated, each of the layers is designed to be extremely thin and provide only the separation absolutely required. While they are drawn as separate abstract concepts, the boundaries need not (and probably should not) be web service calls or other heavyweight communication mechanisms. The overall goal is to segregate our dependencies and be able to respond to them while keeping a stable API for internal use.

An additional advantage comes from a testing perspective. Team Grasshopper can write end-to-end integration tests that pass with the adapters and mock data in place – and continue to pass as the mocks are removed and the real data is wired up – all the way down. This can help provide an early confidence boost of having the tests in place, and knowing that they can help catch any errors on the way down.

So if you have concurrent teams which are struggling with waiting on information and data from other teams, you might find that a little adapting can go a long way.

{ 0 comments }

The Triangle of Support for Great Software Teams

One of the key approaches of good software development is building software iteratively and incrementally. Iterating over our process and software gives us a chance to reflect on how we are working, while building increments of functionality allows us to begin to deliver functionality and value (in the form of working software) early. This feedback loop is critical to good software development.

What I see discussed less are the thought processes and roles necessary for this kind of approach – especially in large-scale systems. We want slices of value which cut through the various layers – Database, Services, UI, etc. That’s the goal of incremental development. But a common pushback comes from the notion that in working this way we throw out the vision, guidance and architecture discussions. I think this couldn’t be further from the truth, but how we implement it is a little different.

Systems Thinking, Business Thinking, Technical Thinking

In the diagram above, there are three key bubbles – Systems Thinking, Business Vision and Technical Vision. Systems Thinking is responsible for How We Work. In a Scrum implementation, this is going to be your ScrumMaster. From a Lean perspective, we’re all responsible for visualizing how we work to improve it.

The second bubble is the Business Vision. This person is responsible for driving out the What We Need of the solution. In Scrum, this is likely your Product Owner.

The final bubble is the Technical Vision. The goal of this role (or roles) is that they set the vision for the various components that will work together – database, services, systems, etc. But instead of dictating a vision, they work closely with the teams with each slice through the system (also known as a tracer bullet) to see if the overall solution is moving towards the vision, and whether the work – or the vision – needs to change.

These three roles form a critical element of vision and leadership for the teams, enabling a highly fertile container where great software can grow. So next time you are working with your teams, think about who is driving the technical vision, the business vision and the vision of how the team should work – and see if explicitly defining some of these improves your process.

{ 3 comments }

The Safety Net of Retrospectives

Organizations are filled with decisions. Well-run organizations enable decisions to be made at any level – ideally at the level closest to the work being done. But sometimes it can be difficult to know if a decision is good or not. As a team, you can talk about the challenges, the benefits, and the goals, but you just don’t know what you don’t know. This fear of not knowing can lead to inaction – especially in cultures which value risk management and mitigation over innovation.

When I come across teams stuck with inaction, I usually see a tool missing from their toolbelts – retrospectives. Because it isn’t so much the action that people are stuck on, but the reaction and knowing what to do about it.

One way I teach teams to do retrospectives on a regular basis is to have 4 quadrants – Well, Less Well, Change and Expect. That last item is key. What did we expect to happen since the last time we reflected? If we put a change in place, did we expect something to happen because of that? Then, if the change was good, it can go in the Well column, and if it wasn’t good, it can go in the Less Well, allowing both to feed into new ideas we can try. We move from being a culture of inaction to one of micro-experimentation.

As an example, I was working with a leadership team earlier this week that had gone through some reshuffling and was taking a look at the way they were currently running their program. One of the questions that came up was whether they needed all of the meetings they were currently having, and specifically if they could combine two meetings (with two different sets of participants) that happened on the same day. I asked two questions:

  1. What do we expect to happen by combining these?
  2. How long should we wait to see if we’re seeing the results?

This allowed them to say that we would expect to see an increased level of collaboration, decreased level of communication delays, and that we could reflect on it in 4 weeks to see if the desired impact was happening, and make a decision about what to try next then.

Small steps. Microexperimentation. Engaged participants. That’s a good step towards building – and discovering – a great organization.

{ 3 comments }

Retrospective – Hats and Soup

A couple of weeks ago, I wrote about a large scale retrospective I facilitated to get some rapid insights from a large team in just one hour. This week we were able to bring in a fuller set of the team, but were still under time constraints – just 3 hours. We needed to solve several issues at once:

1) Large scale. We were expecting up to 120 participants

2) Scalable – We knew 120 people wouldn’t be able to stay in one room, so we needed a way to split them into at least 4 subgroups before splitting into smaller groups

3) Rapid – we only had 3 hours, which needed to also include a review of the previous retrospective, as well as time from leadership to be able to ask questions about the insights generated from the game

4) Specific – the last retrospective generated only one item in the “What we did well” category – “Deflecting Issues”. We wanted to explicitly give them time to think about good things that have happened

5) Long-term – we were covering a 6-month period

6) Rapid Filtering – we needed the teams to be able to quickly self-identify issues under their control, and issues which were beyond their control so they could vote on things which could actually be changed

I ended up choosing a combination of 6 Thinking Hats as well as a modified version of Spheres of Influence called Circles and Soup which I found on Diana Larsen’s blog. We ran it the following way:

Purple Hat – Things You’ve Learned: Our goal here was to get people thinking about what new skills they’ve learned while a part of the program. Some people have been on the program for two years, and while there have been lots of challenges, they also have learned a lot of new ways of working. This isn’t actually in 6 Thinking Hats, but was critical to add.

White Hat – Facts: There are lots of conjectures and guesses about things, but we wanted to focus on facts. Some were great (), while some were concerning (“There are 7 Work Days in week”).

Yellow Hat – Good things: What we thought was going to be the hardest part of the retrospective was really insightful. They were only allowed to write good things that had happened over the past 6 months. The teams didn’t use the full 10 minutes, but still generated some great insights

Black Hat – Bad things: As we expected, the teams did use up the full 10 minutes on this, and then some. But the insights here were things that we could take action on as a leadership team, which is always a great thing. When people say, “I hate this” you can’t do a lot. But when they give specific things, you can fix or mitigate those, which is a great feeling.

Green Hat – Ideas for Improvement: The goal here was to generate actionable items. One concern we had was how to generate actionable items that, well, were actually actionable – meaning, things the organization could do something about. In the preparation, the leadership team was really concerned about how to filter these, and I’ll talk more about that below.

Red Hat – Emotive Statements: In 5 minutes, write two ‘emotive’ statements – things that just come to your mind. The example I gave was Steve Ballmer’s famous “Developers, Developers, Developers” video where he says, “I…..LOVE…..THIS…..COMPANY!”

As I mentioned above, we wanted to be able to let the teams vote and prioritize the things they thought were the biggest items. But we also wanted to filter things the team could actually control. My concern was that I didn’t want that to be a management action – the filtering needed to come from the teams. But given that we only had about 90 minutes to collect and analyze data, how could we do that?

I came up with the following visual chart. The four quadrants around the center were, clockwise from top left, New Skills, Facts, Bad and Good. In the center of the chart, I drew the Circles from Circles and Soup, with the inner circle representing the things the teams could directly control, the middle as the things the team could influence, and the outer circle being “the soup” that the team could only respond to when they found themselves in it.

With the first four hats, I had the teams consolidate the answers into the appropriate quadrant. Behind the scenes two of our coaches worked to consolidate duplicates – something the teams should have done with more time, but we were surprisingly low on space with the amount of items they were generating.

Once we got to the ideas for improvement, I introduced the circles. I told them to think about the circles as they wrote their ideas, and then post them to the appropriate circle. We did the same with the emotive statements – they put them into a circle which was closest to what they felt the statement fit into, or the control they felt over it.

I found this to be extremely powerful – it created a natural filter for showing the teams what they could control and handle. In addition, it appeared to focus their thinking into items they could do, and things they needed help with.

Once we had the Thinking Hats exercise finished, I had the teams dot vote the items they felt were most important. Each participant got two dots with a marker. In addition, if they thought something was really critical, they could grab a dot sticker and put it on the card to highlight it.

As all of this was happening, the key directors from the program were reviewing the board and watching what was coming up. So after the dot vote, I was able to turn the floor over to them to talk directly about some of the things they saw, including questions and some answers. I stressed that it was important not to promise any actions directly during the Q&A until we had a chance to analyze the data.

In the end, we ended up with over 350 cards of information, and some really great insights into what could be improved across the program and organization. The leadership has already begun taking action on certain items from the retrospective, which is building more trust in the teams that, when it’s within their control, action will be taken. In addition, we achieved the goal of getting the right information from the teams by helping them naturally filter the things under their control, so we could not only respond faster, but also help them see that some things we simply have to respond to when they happen.

Thanks Diana for the great exercise idea, and to Jared Richardson, Paul Mahoney and the other coaches who helped co-facilitate, collect information, and bounce ideas off of.

{ 1 comment }

How Agile Is Your Process?

Individuals and Interactions over Processes and Tools
Responding to Change over Following a Plan

When teams and organizations look towards better agility, they generally start with one of the known frameworks out there – most often Scrum but also SAFe or many of the other agile methods. But one of the key principles from the Agile Manifesto says:

At regular intervals, the team reflects on how to become more 
effective, then tunes and adjusts its behavior accordingly. 

What this says is that, over time, we should be getting better and better in how we work and deliver. But oftentimes this ends up running counter to the process we have in place, and it is difficult to know what to do in those cases.

Scrum by any other name

Team Grasshopper has been working with Scrum for a while. They’ve been very successful so far in delivering features to their customer. They all sit together in an open room with plenty of whiteboards, so conversation seems to flow freely throughout the day. At the end of Sprint 9, the team has a retrospective. Julia, the team’s ScrumMaster, is facilitating.

“OK team, one thing we wanted to do this retrospective is focus on our process itself. Let’s start with an open conversation and I’ll capture the data.” Julia moves to the whiteboard. Jackson, one of the developers, speaks first. “You know, I appreciate the idea of the daily standup. But it seems like we don’t have much to talk about there. After all, when we have a blocker, we put it on the board immediately, and between our Scrum board and the conversations throughout the day, we all know what we’re working on.”

Roberta jumps in. “That’s a great point, Jackson. Also, I noticed that our customers – especially Jessie – could benefit from more frequent demos so they can talk to the other vendors in advance of Sprint Planning. It seems like we should be checking in with them every week, but I know we can’t do planning any more frequently than every two weeks because of their schedule.”

Team Grasshopper is at a crossroads. If they stop doing daily standups, and move sprint demos to weekly while keeping sprint planning every two weeks, aren’t they violating the rules of Scrum?

Individuals and Interactions

Part of the confusion for teams such as Team Grasshopper, as I mention in Recreating Scrum Using Kanban, is that they don’t own all aspects of their process. Scrum is predicated on several implicit policies which come out of choosing your sprint length. So if you have a two week sprint, you do Sprint Planning every two weeks, Sprint Review every two weeks, and Retrospectives every two weeks – that is implied because of your sprint length.

As another example, in the Disciplined Agile Delivery framework it is implied that you will use iterations and demo at the end of an iteration. Or in SAFe, it is implied that if you are “Scaling Agile” you use a Team, Program, Portfolio approach.

To be clear – these aren’t necessarily bad things. For many teams and organizations, they provide a clear, proven starting point. But it isn’t clear to teams how to modify the process and still get the value – in other words, the “why” that backs the “how” of the processes. This is critical if we are to move beyond selling the term “Agile” and toward true agility. While many of the methodologies are backed by a set of principles, the idea goes beyond just principles. David Anderson refers to them as “properties” in his Kanban book, but perhaps the most familiar way of understanding these “things” is via Design Patterns.

Towards a Pattern Language

Let’s imagine a team that is having communication problems. They struggle to know what the other members are working on. So one force is that they need to find a way to communicate more frequently. However, another force is that you are dealing with multiple people, so you need a way of making it easy for them to remember when to do it. The third force is that if we are going to have frequent communications, we don’t want them to be overtly long – they should be self-limiting.

With that problem and those set of forces, we can look towards the pattern Daily Standup. The team will come together to coordinate their work, on a daily basis so there is a clear cadence, and stand up to help remind the team that this needs to be high-impact and no longer than it needs to be.

This same team needs a way to help their customer understand the work they are doing. The first force is that the customer cannot sit with them all the time. The second is that their customer isn’t a software developer, so they can’t just see code. The third force is that we need the customer to be able to help make decisions about what else they want – recognizing that they won’t know what they don’t want until they see it. This leads to a fourth force – if they don’t know what they want until they see what they don’t, we need to get working software in front of them as quickly as possible.

This set of forces leads to the pattern System Demo, which, like Daily Standup, we’ll want to set a cadence for. Talking with the customer and the teams, weekly seems about right to demo working software in a way that gives the customer enough information to influence the backlog – or at least start thinking about what they want next.

By looking at our practices in this manner, we begin to explicitly own the process we use by determining policies based on the patterns in our specific situation. Once we own the policies, we can decide when to modify them, since we understand both the challenges and the principles behind our goals.

Lean Thinking

The idea of explicit policies is core to Lean Thinking through the principle of Standard Work. Teams document their actual process as the standard way of working. They then inspect that process, allowing innovation and individuality to shine a light on it, and then as they modify it, they document that as the new standard. For example, in Kanban we start with where we are – meaning we document our value stream, and then document the explicit policies of work (see Recreating Scrum using Kanban and Explicit Policies). We then observe the flow of work, and modify our policies as appropriate.

…Over Processes and Tools

Back to Team Grasshopper. Their struggle is not with their process, but with the idea that they don’t know how to modify Scrum to fit the process they need. This isn’t Scrum’s fault per se – one could easily modify many of the methodologies to include guidance on how to pay attention to the principles and adapt as they see fit. But the Agile Manifesto is clear – Individuals and Interactions come before Processes and Tools.

In the end, Team Grasshopper ended up making their policies explicit, which helped them understand how to track the principles behind the “why” of the practices. They moved sprint demos to weekly, and kept Sprint Planning at every two weeks. They cancelled their daily standups, but added a retrospective board to their Scrum wall to keep a close eye on team communication and make sure it did not drop off.

Your team can do this as well. You own your process – not a consultant, not a website, not a training class, not a book. To modify it, you need to understand the principles and patterns behind it – so it’s OK to start with a set of prescriptive practices at first to see what needs to change. But remember that your team’s individuals and interactions come before any process or tool. Modify your tool for your process – don’t modify your process for a tool.


Putting Your Best Code Forward

I like sharing things I’m working on. When I’ve searched for a solution, and not found one, I hope that sharing what I did find will help some other poor schmuck like me in the future. In fact, more than once I’ve done a search for a problem, and found a solution – in one of my own blog posts from years back.

I’ve been working on a Rails Security talk, and needed a way to figure out just how many lines of code are present in a default Rails application. Meaning – not just the lines of code for controllers, views, etc., but also the number of lines of code for all of the default gems. So I put together the following code:

GFILE = "Gemfile.lock"
 
list = File.read(GFILE)
 
gems = []
 
list.each_line do |line|
 line = line.strip
 break if line.include?("PLATFORMS")
 next if line.include?("GEM")
 next if line.include?("remote:")
 next if line.include?("specs:")
 next if line.empty?
 gems.push(line.split(' ').first)
end
 
total = 0
gems.uniq!
gems.each do |gem|
 puts "Processing #{gem}"
 contents = `gem contents #{gem}`.split
 local = 0
 contents.each do |file|
  output = `wc -l #{file}`
  amount = output.strip.split(' ').first.to_i
  local += amount
 end
 puts " LOC: #{local}"
 total += local
end
 
puts "Total Lines: #{total}"

It did what I needed, and I could have just thrown it away. But I figured someone else might need something like this, so I posted it publicly. The problem? Let’s be frank – that’s some awful code. `GFILE`? `local`? When we post code, we should put our best code forward – examples of how to do things right. I didn’t post it as a draft – I posted it as a solution that worked. My good friend [J.B. Rainsberger][2] pointed out in a comment that I could do better. And he’s right! So let’s clean this up:

require 'set'

GEM_FILE_TO_PROCESS = "Gemfile.lock"

The first thing was to change that horrible name to something understandable. I also included the set library, because I really wanted a `set`, not an `array` that I needed to call `uniq!` on. Next, I extracted the line checks to separate methods, based on what I interpreted them to do when I came across them:

def gem_list_finished?(line)
 line.include?("PLATFORMS")
end
 
def non_gem_line?(line)
 line.include?("GEM") ||
 line.include?("remote:") ||
 line.include?("specs:") ||
 line.empty?
end
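A quick aside on the `Set` choice above: duplicates are simply dropped on insertion, which is why the later `uniq!` call disappears. A minimal illustration:

```ruby
require 'set'

# Adding the same gem name twice only stores it once --
# no uniq! pass needed afterwards
gems = Set.new
gems.add("rails")
gems.add("rails")
gems.add("rake")

puts gems.size  # 2
```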

I then extracted out the line counting to their own methods:

def gem_name_without_version(line)
 line.split(' ').first
end
 
def get_files_in_gem(gem)
 `gem contents #{gem}`.split
end
 
def line_count_for_file(file)
 output = `wc -l #{file}`
 output.strip.split(' ').first.to_i
end

This has the handy side effect of moving the system calls (code called in backticks) out of the main logic. I also pulled out the `puts` statements into a log function:

def log(message)
 puts message
end
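Incidentally, if the `wc -l` shell-out ever became a portability headache, the same count could be done in pure Ruby – a sketch of an alternative, not what this script uses (note that `wc -l` counts newlines, so the two can differ by one for files without a trailing newline):

```ruby
# Pure-Ruby alternative to shelling out to wc -l:
# File.foreach reads the file lazily, one line at a time
def line_count_for_file(file)
  File.foreach(file).count
end
```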

Now comes the meat of it. We still read in the file, but also initialize a set to store the gems, and put our other critical variable all in one place:

gemfile_list = File.read(GEM_FILE_TO_PROCESS)
gems_to_process = Set.new
total_line_count = 0

Next we process the Gemfile. Note that it’s clearer now that we have conditions where we stop processing, and conditions where we skip processing:

gemfile_list.each_line do |gem_line|
 gem_line = gem_line.strip
 break if gem_list_finished?(gem_line)
 next if non_gem_line?(gem_line)
 gems_to_process.add(gem_name_without_version(gem_line))
end

Finally, we walk the gems we found and get their contents. I could probably use inject here instead of initializing a variable, but I prefer it to be a little clearer:

gems_to_process.each do |gem|
 contents = get_files_in_gem(gem)
 gem_line_count = 0
 contents.each do |file|
  gem_line_count += line_count_for_file(file)
 end
 total_line_count += gem_line_count
end
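For the curious, the `inject` version I mentioned might look something like this – a sketch of the alternative (reusing the helpers defined earlier), not the code I settled on:

```ruby
# inject-based alternative: fold the per-gem totals into one sum,
# reusing get_files_in_gem and line_count_for_file from above
def total_lines(gems_to_process)
  gems_to_process.inject(0) do |total, gem|
    files = get_files_in_gem(gem)
    total + files.inject(0) { |sum, file| sum + line_count_for_file(file) }
  end
end
```

It works, but the running-total version reads more directly to me, which is why I kept it.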

All in all, it’s a much better representation of something I would want to share. Could it get better? Absolutely. I could create a model around the Gemfile, asking the lines questions instead of checking them explicitly. And I probably will – if I ever need to turn this into something more explicit. The full code is below, and at [the gist][1]:

require 'set'
 
GEM_FILE_TO_PROCESS = "Gemfile.lock"
 
def gem_list_finished?(line)
 line.include?("PLATFORMS")
end
 
def non_gem_line?(line)
 line.include?("GEM") ||
 line.include?("remote:") ||
 line.include?("specs:") ||
 line.empty?
end
 
def gem_name_without_version(line)
 line.split(' ').first
end
 
def get_files_in_gem(gem)
 #gem contents returns the files
 #as a line break delimited string
 `gem contents #{gem}`.split
end
 
def line_count_for_file(file)
 output = `wc -l #{file}`
 #line count is the first column from
 #the returned value
 output.strip.split(' ').first.to_i
end
 
def log(message)
 puts message
end
 
gemfile_list = File.read(GEM_FILE_TO_PROCESS)
gems_to_process = Set.new
total_line_count = 0
 
gemfile_list.each_line do |gem_line|
 gem_line = gem_line.strip
 break if gem_list_finished?(gem_line)
 next if non_gem_line?(gem_line)
 gems_to_process.add(gem_name_without_version(gem_line))
end
 
log "TOTAL GTP: #{gems_to_process.count}"
 
gems_to_process.each do |gem|
 log "Processing #{gem}"
 contents = get_files_in_gem(gem)
 gem_line_count = 0
 contents.each do |file|
   gem_line_count += line_count_for_file(file)
 end
 log " LOC: #{gem_line_count}"
 total_line_count += gem_line_count
end
 
log "Total Lines: #{total_line_count}"

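As a rough sketch of what that model might look like – the class and method names here are hypothetical, not part of the gist:

```ruby
# A small model wrapping one line of the Gemfile.lock, so callers
# ask the line questions instead of string-matching inline
class GemfileLine
  def initialize(raw)
    @line = raw.strip
  end

  def end_of_gem_list?
    @line.include?("PLATFORMS")
  end

  def gem_entry?
    !(@line.include?("GEM") || @line.include?("remote:") ||
      @line.include?("specs:") || @line.empty?)
  end

  def gem_name
    @line.split(' ').first
  end
end
```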
So what are your thoughts? Could it be improved even more?
