A couple of weeks ago our team accidentally deployed the wrong build to our production environment. We deployed the latest green build from CruiseControl.NET, rather than the slightly older one that we’d intended. Fortunately, this didn’t matter too much. After all, this build had passed our test suite, so there were no serious bugs.
We decided to perform some root-cause analysis, using the 5-Whys technique, to understand how this had happened. We came up with a few root causes:
- It’s hard to tell what code changes are contained in a given build. The only way to do this is to analyse the git commit history, which can be rather tangled. To make this easier, we have now agreed on a convention for git branch naming and merging patterns, based on this one from @nvie.
- We were testing stories, not builds. This is a natural way to approach a story-driven Agile project, but stories aren’t what gets deployed to production, builds are. Come time to deploy, we’d know that stories #123, #134 and #96 are signed off, but that sign off would be tied to a point in time, not a build number. How do we know what build to deploy for these stories?
- We didn’t have a way to retrospectively ‘fail’ green cruise builds. Just because CruiseControl.NET passes a build, it doesn’t mean it’s perfect. When a QA finds a bug during manual or exploratory testing of a story, they fail that story. But we really need to fail that build. We don’t want to deploy this build or any following one until this bug is fixed.
We came up with a great, low-tech way to solve this problem: coloured stickers!
- When the developers think that they’ve finished coding a story and that it’s ready for test, they put a white sticker in the bottom left of the story card, and write the build number on it.
- If QA pass the story, they stick a green sticker on, with the build number against which they tested written on it. The story is now ready for deployment.
- If they find a problem, they use a red sticker. Again, they write on the sticker the number of the build in which they found the problem.
- When the developer commits their fix for the problem, they use another white sticker labelled with the build number containing the fix. This will be followed by another green or red sticker, and so on.
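The sticker sequence on a card behaves like a simple state machine, where the most recent sticker tells you where the story stands. As an illustrative sketch (the data shapes and names here are my own, not something the team actually built), each card can be modelled as an ordered list of (colour, build number) pairs:

```python
# Illustrative model of the sticker convention: each story card carries
# an ordered list of (colour, build_number) stickers. The colour of the
# most recent sticker tells us where the story stands.

def story_status(stickers):
    """Return the current state of a story from its sticker history."""
    if not stickers:
        return "in development"  # no stickers yet: still being coded
    colour, build = stickers[-1]
    if colour == "white":
        return f"ready for test in build {build}"
    if colour == "green":
        return f"passed QA against build {build}"
    if colour == "red":
        return f"bug found in build {build}"
    raise ValueError(f"unknown sticker colour: {colour}")

# Example: a story finished in build 2496, failed QA in build 2497,
# fixed again in build 2499, then passed in build 2500.
card = [("white", 2496), ("red", 2497), ("white", 2499), ("green", 2500)]
print(story_status(card))  # passed QA against build 2500
```

The point of the physical version is that nobody needs to run this code: the wall itself is the data structure, and the last sticker on each card is the query result.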
What does this give us?
Imagine I want to deploy stories #123, #134 and #96 to production. We know that they’ve all been passed by QA, but we need to answer some more questions.
- What build contains working versions of all these stories? It’s now very easy to answer this: just look at the green stickers on each card. To deploy these stories, we need the latest of the build numbers on the green stickers.
- Is the build I’m deploying safe to deploy? Anything on the wall with a white or red sticker not followed by a green one represents a build containing code that either is known to have a bug or has never been tested. Let’s say we want to deploy build 2503 (it has a green sticker for a story we want to deploy to our users), and the currently deployed build is 2496. If we see a white or red sticker on a story that’s still in test with build 2499 on it, we know that we can’t deploy 2503. The team can now focus on finishing the story that’s carrying the sticker for build 2499, so that we can get these features to our users as quickly as possible.
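Under the same illustrative modelling (a card as an ordered list of (colour, build number) stickers; my own sketch, not the team's tooling), both deployment questions become simple queries over the wall:

```python
# Sketch of the two deployment questions, assuming each story card is an
# ordered list of (colour, build_number) stickers.

def build_to_deploy(cards):
    """Which build contains working versions of all these stories?
    The latest build number across the stories' green stickers."""
    greens = []
    for stickers in cards:
        colour, build = stickers[-1]
        if colour != "green":
            raise ValueError("a story has not been passed by QA")
        greens.append(build)
    return max(greens)

def is_safe_to_deploy(candidate, wall):
    """Is the candidate build safe to deploy? Not if any card on the wall
    ends in a white or red sticker whose build number is at or before the
    candidate: that build contains untested or known-buggy code."""
    for stickers in wall:
        if not stickers:
            continue
        colour, build = stickers[-1]
        if colour in ("white", "red") and build <= candidate:
            return False
    return True

# The stories we want to deploy, all signed off (green last):
stories = [[("white", 2496), ("green", 2500)],
           [("white", 2498), ("green", 2503)]]
candidate = build_to_deploy(stories)  # 2503, the latest green build

# A card still in test with a white sticker for build 2499 blocks it:
wall = stories + [[("white", 2499)]]
print(is_safe_to_deploy(candidate, wall))  # False
```

This mirrors the example above: build 2503 carries the stories we want, but the white sticker for build 2499 tells us 2503 also contains untested code, so the deployment waits.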
What’s great about this is how easy it is to glance at the wall to get the answers to these questions. We don’t have to go digging in our cruise result logs or story tracking tool or the project manager’s spreadsheet.
Each time a story goes into test and fails, that is waste. In the language of lean thinking, we see:
- rework: QA may have to retest areas of functionality after the fix, and dev may have to re-code something;
- transportation: extra handoffs between QA and dev, often delayed if one party is busy working on something else;
- waiting: QA and dev waiting for each other to be available for collaboration, or dev idling while a story is QAed because they don’t want to pick up a new one and then have to switch context back to the one in test.
We are always looking to minimise waste, so we’d like to reduce the number of trips each story takes around the dev-QA cycle.
The number of stickers on each card is an immediate visual indicator of this metric. A quick look at the wall (again, no digging in the story tracking tool!) shows which stories have incurred this type of waste. We might then decide to apply some root-cause analysis and other continuous improvement activities to those stories, so that we can understand what really caused the waste and prevent it in the future.
I can’t recall whose idea this was, so credit goes to the whole team. Extra credit to Nick Ashley, who chose the type of stickers to use when I was telling him to use a completely different sort that would have been entirely impractical.