
“This is awesome!” Jenny, the Product Manager exclaims to the team. “We’ve been having some challenges retaining customers, and I think we’ve got just the right idea of how to fix it! It’ll require a little retooling of our sign-up flow, but I think the impact will be incredible.”

Great ideas are some of the most powerful accelerants there are. Lighting one off produces a great burst of energy, light and acceleration. But like great accelerants, it burns out very quickly if it doesn’t have clear direction and sustaining energy. Ideas in software can be especially fast-burning as we try to convert an idea into actual, working software. Not only do we have the idea itself to contend with, we have to integrate it into systems that probably weren’t conceived with the notion of this incredible new idea. That can lead to friction, bugs, and downright failure to launch.

Having worked with many teams like Jenny’s, I’ve started sharing a common flow for taking ideas, in all their uncertainty, and getting them to production with a high level of confidence. This isn’t a new model by any means, but it highlights where each step fits, and it’s a flow that has worked well with the teams I coach.

Picture of a model showing steps progressing towards confidence

We start with a wide interpretation of what the idea is. We take steps during development to focus that interpretation into a clear, releasable chunk. Once we’ve committed the feature, we have a clear idea of what we wanted to build, but not high confidence that it is working as we intended. So the second part of the diagram moves us to that high level of confidence, reusing the work we did to gain understanding and clarity to also gain confidence.

Let’s dig a little more into these steps.

Development Cycle

At the top, we start with an idea. We write Acceptance Tests to capture the high-level goals. This is ideally automated (using something like Cucumber or FitNesse), but can also include things like “We expect to see a 3% increase in conversions”. The point here is to develop concrete, measurable business objectives and goals for the outcome of the feature.

As we begin capturing our acceptance tests, the team starts looking at how things will integrate into the system, using Integration Tests. Here we think through how the feature will fit into the system, and write tests to capture what should happen once it has been integrated. These tests won’t pass yet, because the feature hasn’t been written, but they get the team thinking through the integration questions and challenges early.

Similarly, if we’re integrating into an existing system, we may need a series of Regression Tests to capture how the system works now, to make sure it continues functioning as expected. For example, if we’re adding a new module to accept gift cards, we may need regression tests around calculating tax, or total costs when there is no gift card. This gives us confidence that the new feature won’t have unintended side effects.

In parallel, the developers start writing Unit Tests (ideally using Test-First Development) to capture the behavior and functionality of the code they want to write. These tests are focused at a low level of functionality, and help drive out the design of the actual feature. They are paired with writing the code for the feature itself.

Finally, many teams use a Code Review process to look over the feature. This could happen in real time through Pair Programming, through a Pull Request process, or simply by emailing team members. The goal is to think through the logic from an outsider’s perspective, and to share knowledge about how the feature integrates with the system.

With our tests written, our design reviewed, and our clarity sharpened, we commit our code and begin our deployment cycle.

Deployment Cycle

Throughout Development, we write tests at various levels not just for testing purposes, but to drive out the understanding of how the code functions and integrates into the system. But at some point we need confidence that we’ve done what we needed.

So once code has been committed, it begins a workflow of gaining confidence. Immediately after code is checked in, the unit tests are run. This gives fast feedback about whether we’ve broken anything, or impacted code quality. We can also run Static Code Analysis, Security scans or other tools to make sure we’re following our team’s standards.

If our unit tests pass, we can now start digging deeper into the viability of our code. We start by deploying it – ideally to a test or staging server. These servers will have the data and access to run scenarios. We can then execute our integration and regression tests, ensuring our system is functioning as expected, and finally executing our acceptance tests to make sure we’re meeting the business needs of the feature. If all of these pass, we’ve widened our confidence in the feature.
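Laid end to end, the deployment cycle is essentially a gate script: fast, cheap checks run first, and the expensive suites only run if those pass. Here’s a minimal sketch of that flow as a shell script – the run_* and deploy functions are hypothetical stand-ins for your real test runner, analysis tools, and deploy process:

```shell
#!/bin/sh
# Sketch of the post-commit confidence pipeline.
# Each function below is a placeholder for a real tool.
set -e  # stop at the first failure, giving fast feedback

run_unit_tests()        { echo "unit tests: pass"; }
run_static_analysis()   { echo "static analysis: pass"; }
deploy_to_staging()     { echo "deployed to staging"; }
run_integration_tests() { echo "integration tests: pass"; }
run_acceptance_tests()  { echo "acceptance tests: pass"; }

run_unit_tests          # fast feedback first
run_static_analysis
deploy_to_staging       # only reached if the fast checks passed
run_integration_tests
run_acceptance_tests
echo "confidence gained: ready to release"
```

The key property is the ordering: cheap checks gate the expensive ones, so most failures surface in the first minute or two rather than after a full staging deploy.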

Cycles are actually continuous

So while this is laid out as a single workflow, the idea is to use automation and close collaboration to keep these cycles continually flowing. Being able to know within 10 minutes that a change a pair made to the code impacted conversion rates is a life-changing thing for many teams, allowing them to respond very quickly to the needs of the code – and the business.


Prioritization vs Sequencing

When teams get introduced to various agile methods, one of the seemingly easy aspects is the notion of the product backlog. They then ask their business partners to prioritize the work in the backlog so the team(s) can pull the highest-priority item.

It sounds easy because people assume priorities form a strict ranking. In reality, multiple things can be the highest priority. For example, a company might have a mandate to deliver a specific type of reporting feed to a government agency while also needing to hit a critical product launch. Both are high priority, and would be “tied” for the top spot.

I often get around this by introducing sequencing as a concept distinct from prioritization. For example, in the scenario above the government report might take about two weeks of effort, while the product launch takes six weeks. If both are due in eight weeks, we can sequence the work: hit the most critical portions of the product launch for four weeks, work on the government reporting feed for two weeks (leaving a two-week buffer), then finish up the product launch.
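The arithmetic behind that sequencing decision is simple enough to sketch out (the durations below are the ones from the scenario above):

```shell
# Back-of-the-envelope check of the sequencing above.
# Durations are in weeks; both items are due in week 8.
launch_critical=4   # most critical portion of the product launch
report=2            # government reporting feed
launch_rest=2       # remainder of the product launch
due=8

report_done=$((launch_critical + report))
launch_done=$((report_done + launch_rest))

echo "report ships end of week $report_done (buffer: $((due - report_done)) weeks)"
echo "launch ships end of week $launch_done"
```

Nothing fancy, but making the timeline explicit like this is often what moves the conversation from “which is more important?” to “in what order?”.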

So if you find yourself struggling to “prioritize” a product backlog with business partners, try talking about sequencing instead. That’s what you are really aiming for anyway, and it opens the door to richer conversations about risk, cost of delay, and business value delivery.


A quick protip: if you are projecting (say, for a conference or training) and have your screen split (so each display shows different things), but still want to show what you are typing in your terminal, you can use tmux to mirror your typing in a separate console window.

First, install tmux. If you are on a Mac and use Homebrew, you can do brew install tmux.

Once you have tmux installed, open up a terminal window and start a new tmux session with the following command:

tmux new -s training

This creates a new session on your localhost named training. Next, open a second terminal window and run the following command:

tmux attach -t training

This “attaches” the second window to the same session. Now, whatever you type in one terminal window will show in the other, and vice versa. tmux will even mark off the viewable area if it is smaller than your current window, so you know exactly what the other screen will see.


2015 Can Bite Me

At first it seemed silly to me to post a “2015 review” as if I do anything all that interesting, but I also know it’s good to step back and reflect on things, so here goes.

2015 can bite me.

I don’t want that to sound like everything was horrible. Back in June I came on board at The Iron Yard to build their Corporate Education / B2B business, and we’re building some incredibly awesome stuff that will hit hard in 2016. In addition, I got to play a tiny part in helping bring an awesome new platform for the deaf into play (much bigger kudos to VTCSecure, who have been working on it for years to bring it to fruition). I also launched an apprenticeship program at my former employer based on the 8th Light model.

I did some fun talks, too:

Outside of that, this was one of the most challenging years I’ve ever had, even blowing out of the water the time we bought a house and I got laid off right after. The year started off promising enough, with a new position as the CTO of a local company with 30+ developers, many of whom I knew personally. There were some interesting projects in the pipeline. But then the worst possible series of events that could have happened, did:

  • The team was behind on a critical project with tight deadlines. I was able to rally the troops, and by putting in an extraordinary effort (the final week I believe we worked well over 100 hours) we shipped it.
  • However, in the midst of that, my dad got extremely sick. He had to have emergency surgery, and we brought him up to stay with us. During that final project push, my dad had developed a blockage in his intestines that I should have been able to catch, but didn’t because I was at work.
  • So the day after we shipped, I ended up in ICU with my dad, where he spent 10 days in the hospital. I was then able to transfer him back to Florida where he ended up in the hospital twice – with me making frequent, frantic drives to Florida from NC – before he passed away in May.

I questioned a lot of the decisions I made balancing work and family. Ultimately I couldn’t have predicted things happening as they did, and I wanted to be there for both my team and my dad. You can’t take back decisions you’ve made, and as much as I talk about work/life balance, I made a decision not to focus on my dad as much as I should have. That’s rough.

Looking back, I’m incredibly grateful for many things:

  • A strong family that has supported me
  • Friends I could bounce ideas off of
  • An awesome team at The Iron Yard who understands diversity and how to make people successful (even if we’re still figuring out the growth thing)
  • An amazing group of people I met this year on the Twitter that inspires me: Saron Yitbarek with Code Newbie, Cate Huston, Lesley Carhart, Carina C. Zona, Keri Karandrakis (and her Tweet heard round the world), Coraline Ada Ehmke for her journey, and Ruth Malan for her incredible generosity and motivational tweets. And let’s not forget Taylor Swift for helping expand the reach of security topics to more people than anyone could imagine.

I’m excited by what 2016 is going to bring. Maybe it’s just me coming off the first vacation I’ve had in years, but there are a lot of amazing things all coming together, and I’m ready to uncork a bottle of fizzy drink and celebrate the end of The Year That Shall No Longer Be Named.

So 2015, I leave you with this:

Bite My Shiny Metal Ass


Huge thanks to Red Hat for having me out for Red Hat Agile Day 2015! Below are the slides from the talk!

Update: The talk was recorded! The video is posted below under the slides!

Strategic Play from Cory Foy on Vimeo.


Why are my commits attributed to devil man?

Last night, a scary-looking message came to me in our Slack channel:

Slack message asking if a commit was mine

We were referring to the user as “Devil Man” because this was his profile picture.

Picture of a devil looking cartoon

After wondering whether my SSH keys had been compromised, I came across something that makes way more sense, and isn’t sinister at all. I ran the following command on my laptop:

Corys-MacBook-Pro-2:~ foyc$ git config --list
user.name=BuildTools
user.email=unconfigured@null.spigotmc.org

Turns out, I have a new laptop that I hadn’t yet configured with all my dotfiles. But I had been playing around with a Minecraft/Raspberry Pi server, and pulling down the build tools set my global Git configuration. Since I never changed it to anything else, the PR I sent got pushed up with the above configuration, and “Devil Man” must have that email associated with his GitHub account, so GitHub helpfully linked the two.

Thankfully this wasn’t a security vulnerability per se. But in case you ever wonder why your commits are attributed to Devil Man, now you know: it isn’t his fault, but your own for not configuring your commits properly.
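If you hit the same thing, the fix is a couple of git config commands (the name and email below are placeholders – use your own):

```shell
# Replace the leftover "BuildTools" identity with your own.
# The values here are placeholders; use your real name and the
# email address associated with your GitHub account.
git config --global user.name "Jane Developer"
git config --global user.email "jane@example.com"

# Confirm what Git will now stamp on your commits:
git config user.name
git config user.email
```

Run the first two once per machine (or drop them into your dotfiles), and use git config --list to double-check before your first push from a new laptop.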

(For further reading, GitHub has a more boringly titled article, “Why are my commits linked to the wrong user?”)


This week I had the privilege of sitting in for the great Jared Richardson and giving his Continuous Testing workshop at SQE’s Better Software West. I added my own flavor to various parts, and wanted to post the slides for participants. If you are interested in finding ways to improve your practices, be sure to reach out!


Thanks to everyone who came out to my TriAgile talk on Choosing Between Scrum and Kanban! You can also see a video of it from my presentation at the SQE Conference from November.


Slides and Code Kata Recording from Triangle.rb

Last night I spoke at the Triangle.rb meetup on focused practice and code katas. Thanks to everyone who came out! Below are my slides, and a screen recording of the code kata I performed (known as Coin Changer).


Test-Driven iOS Development with Specta and Expecta

I’ve always enjoyed Test-Driven Development. I find that it helps keep me focused on thinking about small steps and object interactions. But TDD has always been a challenge in UI heavy applications, and even more so in the mobile world where much of what we do is user interface interactions. But over the past couple of years, a lot of work has been done to make iOS and Xcode much more amenable to TDD. However, I haven’t found a great tutorial that goes end-to-end in getting up and running with tests in a step-by-step fashion. Gordon Fontenot’s Test Driving iOS – A Primer comes closest, but still leaves out some critical steps.

For this article, we’ll build towards an implementation of Conway’s Game of Life – a common exercise used in Code Retreats around the world. So let’s get started!

Getting Set Up

To get set up, you’ll need to install:

  • Xcode (free from the Mac App Store)
  • CocoaPods (gem install cocoapods)

(Note that if this is your first time using Cocoapods, you will need to open a terminal after installing and run the pod setup command. Otherwise you may end up with an error like No such file or directory @ dir_initialize - /Users/foyc/.cocoapods/repos (Errno::ENOENT) the first time you try to run pod install)

Open Xcode and create a new iOS Single View Application. Name it TDD Game of Life and set the org identifier to com.example.gol (if you don’t already know what to put in there). Select the directory to put it in, and let it create the project.

Single View Application

Screenshot of the project options dialog

Once you have the project created, close Xcode and open up a Terminal prompt and navigate to the directory that houses your project. Once there, run the command pod init. This creates a default Podfile for Cocoapods. This is where you’ll list the frameworks we’ll be using – similar to a Gemfile from the Ruby world.

Open the Podfile in your favorite text editor and add the following:

pod 'Specta'
pod 'Expecta'
pod 'OCMock'

These lines go inside the target 'TDD Game of LifeTests' section:

Screenshot of the edited Podfile

Save the Podfile, and then in your terminal, run the command pod install. You’ve now installed the Pods into your Xcode project and are ready to start testing!

Write our first test

Conway’s Game of Life is based on some pretty simple rules. One of those rules is that a cell can be in one of two states – alive or dead – and can either be killed or resurrected based on the state of the cells surrounding it.

The behavior of killing and resurrecting a cell seems to be a small enough chunk of behavior we can start with. Navigate to your project directory and open the “TDD Game of Life.xcworkspace” file in Xcode. In Xcode, right-click on TDD Game of LifeTests and choose “New File”. In the dialog, choose iOS->Source->Objective-C File:

Screenshot of the new file template chooser

Name it CellSpec and leave the File Type as “Empty File”, then click Next. Make sure it is adding the file to the Tests folder (leaving everything else as defaults), and click Create. Xcode should now open the newly created file.

Inside the file, you should see an import for Foundation.h. Just below that import line, add the following:

#import <Specta/Specta.h>
#define EXP_SHORTHAND
#import "Expecta.h"
#import "Cell.h"

The first line imports the Specta library. The next two lines set up a shorthand (so we can write expect instead of EXP_expect) and include the Expecta library.

The final line imports our Cell class, which doesn’t exist yet! It doesn’t exist because no test has told us to create it (and yes, compiler errors in our tests count as test failures). So let’s try to build our test file. Either go to Product->Build or just hit Command-B. It should fail because Cell.h couldn’t be found. (If it fails for any other reason, make sure you have Cocoapods installed properly and ran the pod install command.)

To make this “test” pass, right-click on the “TDD Game of Life” folder and choose New File. Select iOS->Source->Cocoa Touch Class:

Screenshot of the Cocoa Touch Class template

Name the class Cell and leave all of the other defaults as is, then click Next. Make sure it is adding the file to the main “TDD Game of Life” folder, and click Create. Once the file is created, build the application again using Command-B. You should see the build succeed. (If your build does not succeed because of a linker error, close Xcode and reopen it so the Cocoapods changes we made earlier can take effect.)

Now let’s add some behavior. To write tests, we wrap our tests with SpecBegin/SpecEnd macros. So back in our CellSpec.m class, add the following code under the import statements:

SpecBegin(Cell)
describe(@"Cell", ^{
  it(@"is dead on creation", ^{
    Cell *cell = [[Cell alloc] init];

    expect([cell isAlive]).to.equal(NO);
  });
});
SpecEnd

There are a lot of design assumptions packed into this one test. First, we assume we can create a Cell with no other information. Second, we assume we can query a cell’s status using a method called isAlive. Last, we assume that method tells us the status as a binary YES/NO.

We’ll go with those assumptions for now and see if we can get our test to pass. Trying to compile fails because isAlive isn’t declared anywhere. In our Cell.h header, make your code look like this below the import line:

@interface Cell : NSObject
- (BOOL)isAlive;
@end

This should allow us to compile, although with warnings. If we now hit Command-U (or go to Product->Test), we should see our first test failure!

Screenshot of the failing test in Xcode

We can also open the log output by enabling the bottom view in Xcode. In the upper right-hand corner there should be three boxes with lines on the left, bottom, and right. Click on the one with the line on the bottom:

Screenshot of the Xcode view toggle buttons

and you’ll see output similar to:

Screenshot of the test log output

Alright! We have our failing test! Let’s make it green! Open Cell.m and implement our isAlive method by making our code look like the following under our import statement:

@implementation Cell

- (BOOL)isAlive
{
    return NO;
}

@end

If we now hit Command-U, we’ll see our tests pass! (If you don’t see the below in your Xcode, click on the diamond icon with the dash in the middle).

Screenshot of the passing test

Congratulations! You’ve written your first iOS Objective-C test!

Writing your second test

While writing our first test was awesome, the implementation wasn’t particularly interesting. After all, cells need to be able to be both dead and alive. So let’s add a second test to our CellSpec.m file:

it(@"is alive when brought to life", ^{
  Cell *cell = [[Cell alloc] init];
  [cell resurrect];
  expect([cell isAlive]).to.equal(YES);
});

So our whole file should look like:

SpecBegin(Cell)
describe(@"Cell", ^{
    it(@"is dead on creation", ^{
        Cell *cell = [[Cell alloc] init];

        expect([cell isAlive]).to.equal(NO);
    });
    it(@"is alive when brought to life", ^{
        Cell *cell = [[Cell alloc] init];
        [cell resurrect];
        expect([cell isAlive]).to.equal(YES);
    });
});
SpecEnd

If we try to run the tests (Command-U), we’ll get a failure because the method resurrect doesn’t exist. Again, we’re making a design assumption here that we’ll be using a method called resurrect to make a cell alive.

Let’s get our compiler happy by adding the following to Cell.h:

- (void)resurrect;

and the following implementation to Cell.m:

- (void) resurrect
{

}

Notice that the implementation is blank. That’s because we don’t actually have a test failure telling us to put anything in there. Now that our compiler is happy, let’s run our tests:

Screenshot of the failing test for resurrect

Alright, test failure! To make this test pass, we’ll need to promote the constant we’re using in isAlive to be a variable. Objective-C makes this easier by using properties, so let’s add the following to our Cell.h, just under our @interface line:

@property (nonatomic) BOOL aliveState;

Now we’ll change our Cell.m file to use that property:

- (BOOL)isAlive
{
    return [self aliveState];
}
-(void)resurrect
{
    [self setAliveState:YES];
}

And if we run our tests – success!

Screenshot of the passing tests

We now have one more test to write – killing alive cells. So let’s add the following test to our CellSpec.m class:

it(@"is dead when killed after being brought to life", ^{
    Cell *cell = [[Cell alloc] init];
    [cell resurrect];
    [cell kill];
    expect([cell isAlive]).to.equal(NO);
});

This fails because we don’t have the kill method, so let’s add that. Add the following to Cell.h:

- (void)kill;

And the following to Cell.m:

-(void)kill
{
}

Then run our tests.

Screenshot of the failing test for kill

With our test failure in place, implement the kill method:

-(void)kill
{
  [self setAliveState:NO];
}

And…..green!

Screenshot of all three tests passing

…Refactor

TDD’s lifecycle is known as the Red-Green-Refactor loop. We write a failing test, write just enough code to make it pass, and then refactor away any duplication. In this case, our production code is pretty straightforward and doesn’t have any duplication. But our test code does – namely, the creation of the Cell class over and over. It’s important to remember that test code is still code, and should be as clean as our production code (although I will sometimes keep a little duplication if removing it would hurt readability).

We can pull that up into the describe block using a beforeEach, so each test gets a fresh Cell (if we created the cell only once, the tests would share state and their order would start to matter). Change your code in CellSpec to look like:

SpecBegin(Cell)
describe(@"Cell", ^{
    __block Cell *cell;

    beforeEach(^{
        cell = [[Cell alloc] init];
    });

    it(@"is dead on creation", ^{
        expect([cell isAlive]).to.equal(NO);
    });
    it(@"is alive when brought to life", ^{
        [cell resurrect];
        expect([cell isAlive]).to.equal(YES);
    });
    it(@"is dead when killed after being brought to life", ^{
        [cell resurrect];
        [cell kill];
        expect([cell isAlive]).to.equal(NO);
    });
});
SpecEnd

Yay! Less code!

Lessons Learned

Now we have a basic TDD workflow set up with Specta and Expecta that lets us test-drive our code. Remember the cycle of Red-Green-Refactor: don’t write production code without a failing test telling you it’s needed.

Now go forth and have fun!

(You can see an example of this workflow in this repository up on GitHub)
