Test-Driven iOS Development with Specta and Expecta

I’ve always enjoyed Test-Driven Development. I find that it helps keep me focused on thinking about small steps and object interactions. But TDD has always been a challenge in UI-heavy applications, and even more so in the mobile world, where much of what we do is user interface interaction. Over the past couple of years, though, a lot of work has been done to make iOS and Xcode much more amenable to TDD. However, I haven’t found a great tutorial that goes end-to-end in getting up and running with tests in a step-by-step fashion. Gordon Fontenot’s Test Driving iOS – A Primer comes closest, but still leaves out some critical steps.

For this article, we’ll build towards an implementation of Conway’s Game of Life – a common exercise used in Code Retreats around the world. So let’s get started!

Getting Set Up

To get set up, you’ll need to install Xcode (from the Mac App Store) and CocoaPods (gem install cocoapods).

(Note that if this is your first time using CocoaPods, you will need to open a terminal after installing and run the pod setup command. Otherwise you may end up with an error like No such file or directory @ dir_initialize - /Users/foyc/.cocoapods/repos (Errno::ENOENT) the first time you try to run pod install.)

Open Xcode and create a new iOS Single View Application. Name it TDD Game of Life and set the org identifier to com.example.gol (if you don’t already know what to put in there). Select the directory to put it in, and let it create the project.

[Screenshot: the Single View Application template]

[Screenshot: the project options dialog]

Once you have the project created, close Xcode, open up a Terminal prompt, and navigate to the directory that houses your project. Once there, run the command pod init. This creates a default Podfile for CocoaPods. This is where you’ll list the frameworks we’ll be using – similar to a Gemfile from the Ruby world.

Open the Podfile in your favorite text editor and add the following inside the target 'TDD Game of LifeTests' section:

pod 'Specta'
pod 'Expecta'
pod 'OCMock'

[Screenshot: the Podfile with the test target pods added]

Save the Podfile, and then in your terminal, run the command pod install. You’ve now installed the Pods into your Xcode project and are ready to start testing!

Writing our first test

Conway’s Game of Life is based on some pretty simple rules. One of those rules is that a cell can be in one of two states – alive or dead – and can either be killed or resurrected based on the state of the cells surrounding it.

The behavior of killing and resurrecting a cell seems to be a small enough chunk of behavior we can start with. Navigate to your project directory and open the “TDD Game of Life.xcworkspace” file in Xcode. In Xcode, right-click on TDD Game of LifeTests and choose “New File”. In the dialog, choose iOS->Source->Objective-C File:

[Screenshot: the New File dialog]

Name it CellSpec and leave the File Type as “Empty File”, then click Next. Make sure it is adding the file to the Tests folder (leaving everything else as defaults), and click Create. Xcode should now open the newly created file.

Inside the file, you should see an import for Foundation.h. Just below that import line, add the following:

#import <Specta/Specta.h>
#define EXP_SHORTHAND
#import "Expecta.h"
#import "Cell.h"

The first import includes the Specta library. The second and third lines include the Expecta library and set up a shorthand so we don’t have to write expectations with the EXP_expect prefix.

The final line imports our Cell class – which doesn’t exist yet! It doesn’t exist because no test has told us to create it (and yes, compiler errors in our tests count as test failures). So let’s try to build our test file. Either go to Product->Build or just hit Command-B. The build should fail because Cell.h couldn’t be found. (If it fails for any other reason, make sure you have CocoaPods installed properly and ran the pod install command.)

To make this “test” pass, right-click on the “TDD Game of Life” folder and choose New File. Select iOS->Source->Cocoa Touch Class:

[Screenshot: the Cocoa Touch Class template]

Name the class Cell and leave all of the other defaults as is, then click Next. Make sure it is adding the file to the main “TDD Game of Life” folder, and click Create. Once the file is created, build the application again using Command-B. You should see the build succeed. (If your build does not succeed because of a linker error, close Xcode and reopen it so the CocoaPods changes we made earlier can take effect.)

Now let’s add some behavior. To write tests, we wrap our tests with SpecBegin/SpecEnd macros. So back in our CellSpec.m class, add the following code under the import statements:

SpecBegin(Cell)
describe(@"Cell", ^{
  it(@"is dead on creation", ^{
    Cell *cell = [[Cell alloc] init];

    expect([cell isAlive]).to.equal(NO);
  });
});
SpecEnd

There’s a lot of design assumptions packed into this one test. First, we assume we can create a Cell with no other information. Second, we assume we can query a cell’s status by using a method called isAlive. Lastly, we assume that method tells us that status by giving us a binary YES/NO.

We’ll go with those assumptions for now and see if we can get our test to pass. Trying to compile results in an error because isAlive is not defined. In our Cell.h header, make your code look like this below the import line:

@interface Cell : NSObject
- (BOOL)isAlive;
@end

This should allow us to compile, although with warnings. If we now hit Command-U (or go to Product->Test), we should see our first test failure!

[Screenshot: the failing test in Xcode]

We can also open the log output by enabling the bottom view in Xcode. In the upper right hand corner there should be three boxes with lines on the left, bottom, and right. Click on the one with the line on the bottom:

[Screenshot: the Xcode view toggle buttons]

and you’ll see output similar to:

[Screenshot: the test failure log output]

Alright! We have our failing test! Let’s make it green! Open Cell.m and implement our isAlive method by making our code look like the following under our import statement:

@implementation Cell

- (BOOL)isAlive
{
    return NO;
}

@end

If we now hit Command-U, we’ll see our tests pass! (If you don’t see the below in your Xcode, click on the diamond icon with the dash in the middle).

[Screenshot: the passing test]

Congratulations! You’ve written your first iOS Objective-C test!

Writing your second test

While writing our first test was awesome, the implementation wasn’t particularly interesting. After all, cells need to be able to be both dead and alive. So let’s add a second test to our CellSpec.m file:

it(@"is alive when brought to life", ^{
  Cell *cell = [[Cell alloc] init];
  [cell resurrect];
  expect([cell isAlive]).to.equal(YES);
});

So our whole file should look like:

SpecBegin(Cell)
describe(@"Cell", ^{
    it(@"is dead on creation", ^{
        Cell *cell = [[Cell alloc] init];

        expect([cell isAlive]).to.equal(NO);
    });
    it(@"is alive when brought to life", ^{
        Cell *cell = [[Cell alloc] init];
        [cell resurrect];
        expect([cell isAlive]).to.equal(YES);
    });
});
SpecEnd

If we try to run the tests (Command-U), we’ll get a failure because the method resurrect doesn’t exist. Again, we’re making a design assumption here that we’ll be using a method called resurrect to make a cell alive.

Let’s get our compiler happy by adding the following to Cell.h:

- (void)resurrect;

and the following implementation to Cell.m:

- (void)resurrect
{

}

Notice that the implementation is blank. That’s because we don’t actually have a test failure telling us to put anything in there. Now that our compiler is happy, let’s run our tests:

[Screenshot: the failing resurrect test]

Alright, test failure! To make this test pass, we’ll need to promote the constant we’re using in isAlive to be a variable. Objective-C makes this easier by using properties, so let’s add the following to our Cell.h, just under our @interface line:

@property (nonatomic) BOOL aliveState;

Now we’ll change our Cell.m file to use that property:

- (BOOL)isAlive
{
    return [self aliveState];
}
- (void)resurrect
{
    [self setAliveState:YES];
}

And if we run our tests – success!

[Screenshot: the passing tests]

We now have one more test to write – killing live cells. So let’s add the following test to our CellSpec.m file:

it(@"is dead when killed after being brought to life", ^{
    Cell *cell = [[Cell alloc] init];
    [cell resurrect];
    [cell kill];
    expect([cell isAlive]).to.equal(NO);
});

This fails because we don’t have the kill method, so let’s add that. Add the following to Cell.h:

- (void)kill;

And the following to Cell.m:

- (void)kill
{
}

Then run our tests.

[Screenshot: the failing kill test]

With our test failure in place, implement the kill method:

- (void)kill
{
  [self setAliveState:NO];
}

And…..green!

[Screenshot: the passing tests]

…Refactor

TDD’s lifecycle is known as the Red-Green-Refactor loop. We write a failing test, write just enough code to make it pass, and then refactor away any duplication. In this case, our production code is pretty straightforward and doesn’t have any duplication. But our test code does – namely, the creation of the Cell class over and over. It’s important to remember that test code is still code, and should be as clean as our production code (although I will sometimes trade readability for no duplication).

We can refactor that out in our describe block using Specta’s beforeEach, which runs before every test so each example gets a fresh cell (the __block qualifier lets us assign to the variable from inside the block). Change your code in CellSpec to look like:

SpecBegin(Cell)
describe(@"Cell", ^{
    __block Cell *cell;

    beforeEach(^{
        cell = [[Cell alloc] init];
    });

    it(@"is dead on creation", ^{
        expect([cell isAlive]).to.equal(NO);
    });

    it(@"is alive when brought to life", ^{
        [cell resurrect];
        expect([cell isAlive]).to.equal(YES);
    });

    it(@"is dead when killed after being brought to life", ^{
        [cell resurrect];
        [cell kill];
        expect([cell isAlive]).to.equal(NO);
    });
});
SpecEnd

Yay! Cleaner tests!

Lessons Learned

Now we have a basic TDD workflow set up with Specta and Expecta that gives us the opportunity to test our code. Remember the cycle of Red-Green-Refactor: don’t write production code unless a failing test is telling you to.

Now go forth and have fun!

(You can see an example of this workflow in this repository up on GitHub)


Debugging a Rails Initializer Problem

Over the past couple of weeks, I’ve been upgrading a particularly large and older Rails 3 app to Rails 4. This app uses Oracle against a legacy database, meaning a large chunk of the data structure does not follow Rails conventions.

One of the examples is that the primary key we use on the tables is not named id but is instead called something like row_no. This is defined in the table as a NUMBER type with no precision or scale (in Oracle, you can define types like NUMBER(9,2) which would have 9 significant digits, 2 of which are after the decimal). To access the database, we’re using the excellent Oracle Enhanced adapter. However, by default, the adapter treats columns defined as NUMBER with no precision or scale as a decimal type. Since we’re using them as IDs, we expect them to be integers.
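To see the distinction in Ruby terms: a column treated as a decimal type surfaces as a BigDecimal, while an ID should be a plain Integer. This is plain Ruby for illustration, not the adapter’s own code:

```ruby
require 'bigdecimal'

# A NUMBER column with no precision/scale maps to a decimal type, so a
# row_no of 123 would come back as a BigDecimal rather than an Integer.
decimal_id = BigDecimal("123")

decimal_id.is_a?(Integer)  # => false
decimal_id.to_i            # => 123
```

The values compare equal numerically, but type checks (and anything keying off Integer behavior) see two different things – which is exactly why we wanted the adapter to emulate integers for these columns.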

Prior to the Rails 4 upgrade, we had forked the adapter library to change this behavior. However, that left us in the position of having to maintain our own fork – something I didn’t want to do. The latest version of the adapter added the ability to configure the behavior of NUMBER types using an initializer:

ActiveSupport.on_load(:active_record) do
    ActiveRecord::ConnectionAdapters::OracleEnhancedAdapter.class_eval do
        self.emulate_integers_by_column_name = true
        def self.is_integer_column?(name, table_name = nil)
            !!(name =~ /row_no$/i)
        end
    end
end


This worked great to convert all of the columns. But we discovered along the way that, for one specific table, this logic wasn’t being applied. The adapter has a method called simplified_type which applies the appropriate conversion, so I put a breakpoint on it to watch what happened.

The first clue came at the timing of the breakpoint. I noticed that I hit the breakpoint for the troublesome table during startup of the Rails console, while the other tables weren’t hit until I interacted with them in the console. The second clue was that when I hit the breakpoint, emulate_integers_by_column_name was set to false – even though I had set it above. The third clue was remembering that initializers run in alphabetical order, and since my Oracle initializer was named, well, oracle.rb, there was a good chance something was running ahead of it.

And there was – but it wasn’t what I was expecting. Because this is a legacy database, there are some specific sanity checks we run on startup to verify certain aspects of it. And sure enough, we were making a call which caused the problematic table’s model to become initialized. That meant the table would be queried – and its column mappings cached – before we had a chance to configure the adapter with the behavior we really needed.

We solved the issue by renaming the database adapter initializer to run first (000_oracle.rb).
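The fix works because Rails runs the files in config/initializers in lexicographic filename order. A minimal sketch of that ordering (the sanity_checks.rb filename is illustrative, not the app’s actual file):

```ruby
# Rails loads initializers sorted by filename, so a "000_" prefix forces
# the adapter configuration to run before anything else.
initializers = ["sanity_checks.rb", "oracle.rb", "000_oracle.rb"]
load_order = initializers.sort
# => ["000_oracle.rb", "oracle.rb", "sanity_checks.rb"]
```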

This problem represents the joy and challenge of debugging. The debugger can tell you the state of the application, but that state should be input for a hypothesis. The column isn’t converted – what could lead to that? The flag isn’t set – when should it be set, and why wouldn’t it be? When could code run before other code? So when you face a challenge, start with a hypothesis before reaching for the debugger – not only will you potentially learn more, you’ll know where to point it. Use it to answer questions.

Special thanks to the folks at the Oracle Enhanced forum especially the ever awesome Lori Olson and Yasuo Honda!


Last week I gave a talk called “Choosing Between Scrum and Kanban” at the 2014 SQE Orlando conference. I recorded the talk on my laptop, and though it’s not the best quality in the world, I did put it up online below. The slides are also available.

Choosing Between Scrum and Kanban from Cory Foy on Vimeo.


Slides from: Choosing Between Scrum and Kanban

On November 13th, I gave a talk at the SQE Orlando conference on “Choosing Between Scrum and Kanban”. I’ve published the slides on Slideshare, or you can view them below!

If you’re interested in learning how you can do this in your organization, give me a shout at foyc at coryfoy dot com!


Thanks to everyone who came out to my Distributed Agility talk at Southern Fried Agile 2014. You can find the slides from the talk below:


A huge thanks to Arjay Hinek and Greg Neighbors with Red Hat for having me out at the 2014 Red Hat Agile Conference. Here are the slides from my talks “Scaling Agile” and “Scaling Agility” (combined into one).


When the Single Responsibility Principle is taught among developers, one aspect – the responsibility – is harped on the most. But what counts as a responsibility of a class or a method? Is it the concepts it touches? The number of classes it uses? The number of methods it calls?

While each of the above is a good question to ask of your method, there is an easier way, given right in Robert Martin’s explanation: a responsibility is a reason to change. And it turns out that we can use something more than just code to determine that and help guide us to write good code.

As with many programming topics, code is the best place to start. Let’s look at a basic class in Ruby:

class ReportPrinter
  def print_report
    records = ReportRecords.all
    puts "Records Report"
    puts "(printed #{DateTime.now.to_s})"
    puts "-----------------------------------------"
    records.each do |record|
      puts "Title: #{record.title}"
      puts "   Amount: #{record.total}"
      puts "   Total Participants: #{record.total_count}"
    end
    puts "-----------------------------------------"
    puts "Copyright FooBar Corp, 2012"
  end
end

How many reasons could the method in this class change for?

  • We need to change where we get records from
  • We want to print different information about a record
  • New records have fields other records don’t have (conditional logic)
  • We want to output to a different format
  • We want to make sure line endings are set correctly
  • We need to change the report title
  • We want to change where the date is printed
  • We want to change the separators
  • We want to change the footer
  • We need to print the report in a different language

10 lines of value-add code. 10 (at least) reasons to change. Now let’s compare that code to this version:

class ReportPrinter
  def print_report
    records = load_records
    header
    separator
    records(records)
    separator
    footer
  end

  def load_records
    ReportRecords.all
  end

  def header
    puts "Records Report"
    puts "(printed #{DateTime.now.to_s})"
  end

  def separator
    puts "-----------------------------------------"
  end

  def records(recs)
    recs.each do |record|
      puts "Title: #{record.title}"
      puts "   Amount: #{record.total}"
      puts "   Total Participants: #{record.total_count}"
    end
  end

  def footer
    puts "Copyright FooBar Corp, 2012"
  end
end

The first thing that should strike you is that this is exactly the same code. Yet, this class is better code because each method has a single responsibility – header prints the header, footer prints the footer, etc. We could continue the extractions by pulling out the duplication of “puts” into a writer method, and then dynamically swap that in.
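As a sketch of that last extraction – the constructor and write helper here are additions for illustration, not part of the code above – the puts duplication can be funneled through an injected writer:

```ruby
require 'stringio'

# Pared-down sketch: every line of output goes through one write method,
# and the writer object itself can be swapped at construction time.
class ReportPrinter
  def initialize(writer = $stdout)
    @writer = writer
  end

  def header
    write("Records Report")
  end

  def footer
    write("Copyright FooBar Corp, 2012")
  end

  private

  def write(line)
    @writer.puts(line)
  end
end

# Swapping in a StringIO writer lets a test capture the output:
buffer = StringIO.new
printer = ReportPrinter.new(buffer)
printer.header
printer.footer
buffer.string  # => "Records Report\nCopyright FooBar Corp, 2012\n"
```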

But I want to focus on the print_report method for a minute. It seems like there are lots of reasons for it to change – it does an awful lot. However, it has an important job – one that I will call a “Sergeant Method”, since that’s the name I got from Alan Shalloway and Scott Bain. To understand its responsibility, let’s step back and look at a way of modeling software from Martin Fowler’s UML Distilled. Fowler discusses three levels of modeling:

  • Conceptual
  • Specification
  • Implementation

Coupling should happen at the Conceptual level, and never should there be coupling between the Conceptual and Implementation levels. The way I explain these levels is that the Conceptual level is the container that holds the concepts. The Specification describes what should be implemented, and the Implementation level how it should be implemented (see John Daniels’ “Modeling with a Sense of Purpose” http://www.syntropy.co.uk/papers/modelingwithpurpose.pdf for more information).

With this in our mind, we can see that the print_report method has the responsibility of defining the specification of what it means to print a report. In other words, it gives us the algorithm for printing a report, and only needs to change if the algorithm changes. We are free to implement that in any way we choose without having to change the print_report method.
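A tiny illustration of that freedom, re-sketched with a pared-down (and hypothetical) ReportPrinter: print_report keeps specifying the algorithm while a subclass changes how one step is implemented.

```ruby
# print_report specifies the algorithm; subclasses change only how a
# single step is implemented. Class names here are illustrative.
class ReportPrinter
  def print_report
    header
    separator
  end

  def header
    puts "Records Report"
  end

  def separator
    puts "-----"
  end
end

class HtmlReportPrinter < ReportPrinter
  def separator
    puts "<hr/>"  # a different implementation, same specification
  end
end

HtmlReportPrinter.new.print_report
# prints:
# Records Report
# <hr/>
```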

With this terminology, we can now look at the records method and define what smells about it. It is operating at two levels – a specification level (loop over a set of records and print information about each one) and an implementation level (print the title, total, and count). We could move the loop up to the print_report method, but that would be a true violation of Single Responsibility – it would then need to know not only the order of operations, but also how to loop over a collection. It’s better to have two methods:

def records(recs)
  recs.each do |record|
    print_record(record)
  end
end

def print_record(record)
  puts "Title: #{record.title}"
  puts "   Amount: #{record.total}"
  puts "   Total Participants: #{record.total_count}"
end

Which are now both operating at the correct level.

Sharpening your eye for both the number of reasons a class can change and the level of abstraction a class or method operates at will open a whole new world of identifying code smells, and of understanding where to place divisions in your code so your code base is easier to grow and scale.


On July 16th I gave a talk for the Triangle DevOps group. Here are the slides from the presentation. Thanks for having me out!


Do you really have a Scrum Team?

Julia is the ScrumMaster for Team Gumbo. They were recently formed as their organization’s first foray into the agile world. They were all given a two-day Certified ScrumMaster class, and are collocated. But Julia has noticed that after two months of working together, they don’t seem to be hitting their Sprint Commitments – and don’t seem to care!

Many times organizations interested in adding more agile methods to their toolchain start with Scrum, since it seems one of the clearest and easiest to start with. They take a group of people, put them through some training, and tell them to go! But they really struggle, even after they should have been through the normal “Forming, Storming, Norming” phases.

[Image: the Five Dysfunctions triangle]

One way of understanding the challenges is explained in the book The Five Dysfunctions of a Team, where Lencioni lays out a hierarchical model that shows the interdependencies of the dysfunctions. At the very top, we see the problem that Julia was seeing on her team – Inattention to Results. In Scrum, one of the keys to success is the ability to deliver increments of product, usually measured by Nebulous Units of Time (NUTs) like Story Points.

When teams aren’t delivering increments or honoring their commitments, we try to understand why by holding a retrospective. But Lencioni’s model shows that if we haven’t addressed the more underlying issues, we’re not going to have much luck:

  • The team isn’t delivering their commitments and don’t seem to care (Inattention to Results) because…
  • Team members aren’t holding each other accountable to the commitments they made (Avoidance of Accountability) because…
  • No one really committed to their sprint goals during sprint planning (Lack of Commitment) because…
  • The members knew there were issues with certain members of the team, but didn’t want to bring it up (Fear of Conflict) because…
  • They were afraid that the person most identified as causing the issues would push back on being identified, and the remaining members wouldn’t get support from management to be able to express themselves (Absence of Trust)

One of the challenges as a ScrumMaster is that, without Trust, Retrospectives are going to be fairly ineffective, since teams (or people working in close proximity being called teams) are not going to want to get to the real root issues – especially if they don’t feel like they will be resolved.

For Team Gumbo, Julia held a series of one-on-ones and discovered the issues of missing trust. She worked with management to remove the problem member from the team, which greatly increased the confidence the team had. Julia noted a fairly rapid improvement with the members sharing challenges and concerns more freely. Ultimately she found that they started performing and delivering much more effectively – and seemed happier doing it!

So next time your team doesn’t seem to care about delivery, ask yourself if that is really the tip of an iceberg that is keeping them from being a truly effective team.


Start With Expressiveness

A great thing about being a programmer today is the wide variety of libraries, packages, and patterns we have access to. Which is great, because we’re building bigger, more distributed systems as a norm than ever before.

But code is ultimately about communication with other developers (that’s why they are called programming languages). The following three programs output the same thing (credit to Peter Welch for the last two):

puts "Hello World!"
>+++++++++[<++++++++>-]<.>+++++++[<++++>-]<+.+++++++..+++.[-]
>++++++++[<++++>-] <.>+++++++++++[<++++++++>-]<-.--------.+++
.------.--------.[-]>++++++++[<++++>- ]<+.[-]++++++++++.
Ook. Ook? Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook.
Ook. Ook. Ook. Ook. Ook! Ook? Ook? Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook.
Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook? Ook! Ook! Ook? Ook! Ook? Ook.
Ook! Ook. Ook. Ook? Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook.
Ook. Ook. Ook! Ook? Ook? Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook?
Ook! Ook! Ook? Ook! Ook? Ook. Ook. Ook. Ook! Ook. Ook. Ook. Ook. Ook. Ook. Ook.
Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook! Ook. Ook! Ook. Ook. Ook. Ook. Ook.
Ook. Ook. Ook! Ook. Ook. Ook? Ook. Ook? Ook. Ook? Ook. Ook. Ook. Ook. Ook. Ook.
Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook! Ook? Ook? Ook. Ook. Ook.
Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook? Ook! Ook! Ook? Ook! Ook? Ook. Ook! Ook.
Ook. Ook? Ook. Ook? Ook. Ook? Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook.
Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook! Ook? Ook? Ook. Ook. Ook.
Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook.
Ook. Ook? Ook! Ook! Ook? Ook! Ook? Ook. Ook! Ook! Ook! Ook! Ook! Ook! Ook! Ook.
Ook? Ook. Ook? Ook. Ook? Ook. Ook? Ook. Ook! Ook. Ook. Ook. Ook. Ook. Ook. Ook.
Ook! Ook. Ook! Ook! Ook! Ook! Ook! Ook! Ook! Ook! Ook! Ook! Ook! Ook! Ook! Ook.
Ook! Ook! Ook! Ook! Ook! Ook! Ook! Ook! Ook! Ook! Ook! Ook! Ook! Ook! Ook! Ook!
Ook! Ook. Ook. Ook? Ook. Ook? Ook. Ook. Ook! Ook. Ook! Ook? Ook! Ook! Ook? Ook!
Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook. Ook.
Ook. Ook. Ook. Ook. Ook! Ook.

My bet is, I’m most likely to high-five my past self for only one of these three examples. But while our choice of language and how it communicates is usually easy to see, how we interface with other code is not always so obvious.

In the original Gang of Four Design Patterns book, one of the key principles was “Program to an interface, not an implementation”. Oftentimes this gets confused to mean you should create interfaces for your objects to implement – but that is literally an implementation detail. What the authors really meant was this:

Start with expressiveness first.

What this really means is: control your own API – the way you want your code to interface – and then adapt theirs to fit yours. While I gave some examples in an earlier blog post on how this can be used to scale component teams, the technique is not limited to scaling.

As an example, in this great StackOverflow post the poster has a Car object that can be created, updated and viewed. But there are also additional actions, such as the ability to purchase a car.

If we start with expressiveness first, it leads us to think about what really happens when we purchase a car. We aren’t actually manipulating the car itself (at least, not yet). Instead, we have some set of operations that, when bundled together, form the concept of purchase. So we would say that the action of purchasing a car creates a new trigger for this set of operations, leading to the RESTful path:

POST /purchase

Of course, our code should not be littered with RESTful calls. We should instead centralize those into our own API we call. For example, using Ruby's rest-client gem, we can write the following:

require 'rest_client'

class Car
  def purchase
    RestClient.post("#{Config.server}/purchase", self.to_json)
  end
end

So our code base would merely call car.purchase and not have to know anything about REST.
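One payoff of centralizing the REST call is testability. Sketched here with a hypothetical injected client rather than the RestClient call above (the Car variant, FakeClient, and payload are all illustrative), a test can verify the purchase behavior without a network:

```ruby
# Hypothetical variant of Car that takes its HTTP client as a dependency;
# the FakeClient just records which paths would have been posted to.
class Car
  def initialize(client)
    @client = client
  end

  def purchase
    @client.post("/purchase", payload)
  end

  def payload
    "{}"  # placeholder body for the sketch
  end
end

class FakeClient
  attr_reader :posts

  def initialize
    @posts = []
  end

  def post(path, body)
    @posts << path
  end
end

client = FakeClient.new
Car.new(client).purchase
client.posts  # => ["/purchase"]
```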

But we can take this one step further, and this is where the real power of expressiveness comes into play. Let's imagine we worked through the above example, and then find out that the vendor we work with provides an API which looks like:

POST /api/v1.1/car/123/purchase

While not the end of the world, that isn't the best way they could structure that. We could modify our Car class above to call the updated structure, but then we are tying ourselves to that interface. Instead, we can continue to use the RESTful interface we think is correct, and use an Adapter to give us the interface we want.

For example, we could set up a tiny Sinatra server that looks like this:

require 'sinatra'
require 'rest-client'

post '/purchase' do
  car = params[:id]
  RestClient.post("#{Config.server}/api/v1.1/car/#{car}/purchase", params)
end

Now we rely on a RESTful service which is a thin adapter around the real service. This can be really helpful if multiple applications we are building all rely on the same RESTful services which are out of our control.

So, if your code is telling you to write something, listen to it and write the code you want to write. Only after you've done that should you go back and adapt it to the nastiness you really have to deal with. You gain expressiveness, readability, and as a bonus, an added layer of separation from your dependencies.

(Note: I don't feel this should end without a brief note about performance. Start with expressiveness, and then optimize for performance. There is no doubt an art to the balance between clean code and defensive programming, where everything is built up front and wrapped to the nth degree. But if we start with the code our problem is telling us to write, we can easily read it to figure out where we can make it faster - and better)
