I was reading Alistair Cockburn's Agile Software Development book the other night. In it, he described a game developed by Darin Cummins to reinforce good development practices. The game is described here. This inspired me to create a fun game for continuous integration builds.
As I described in a previous blog entry, I've had problems in the past with people breaking the build. To help, I've used techniques like the rotating Build Nazi and the "put a dollar in the jar when you break the build" rule, but these are all negatively focused. How about something that rewards developers who don't break the build? How about rewarding developers for following the best practice of breaking their work into smaller chunks and checking in early and often?
So I'm thinking of a game where a developer gets, say, 1 point for getting his name on a successful build. The number of files checked in is ignored. We want to discourage big check-ins, so smaller-grained check-ins naturally earn more points, since your name will show up on more successful builds. And on the other side, you lose points when you break the build. Now, we want to keep things simple, so we could probably stop right here, but I'm thinking that some failures are worse than others. So maybe we have something like:
| Description | Reward Points |
|---|---|
| Check-in and build passes | 1 |
| One or more unit tests failed | -10 |
| Compiler error (come on now) | -20 |
| Big check-in caused build to remain in a broken state for hours | -40 |
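To make the rules concrete, here's a minimal sketch of how the table above might be encoded. The enum name, event names, and point values are just my placeholders, not a final design:

```java
/**
 * Hypothetical scoring rules for the build game, mirroring the table above.
 */
public enum BuildEvent {
    SUCCESSFUL_CHECKIN(1),
    UNIT_TEST_FAILURE(-10),
    COMPILER_ERROR(-20),
    PROLONGED_BREAKAGE(-40);

    private final int points;

    BuildEvent(int points) {
        this.points = points;
    }

    public int getPoints() {
        return points;
    }
}
```

A developer's score for the iteration would then just be the sum of getPoints() over the events attributed to him.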
It's a game, so we need something for the winner. Bragging rights are too lame, so maybe lunch on the team, or some kind of trophy kept on the winner's desk to remind everybody of his champion status.
Now, there are some negatives. Perhaps not everybody would want to play. In particular, notorious build breakers wouldn't want everybody (especially management) to see their poor results. In that case, I suppose we could publicly display only the leaders, or the top half, but that wouldn't be as fun.
People could easily cheat too. Someone might, say, write a cron job that checks out a file every hour, changes a comment, and checks it back in. We'd have to look out for that kind of thing.
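One way to look out for it, assuming we can get the changed lines of a check-in from the version control system, would be a naive filter that flags check-ins touching nothing but comments and whitespace. The class and method names here are hypothetical:

```java
import java.util.List;

/**
 * Naive guard against gaming the system: given the changed lines from a
 * check-in's diff (however your SCM tooling produces them), decide whether
 * the change contains anything besides comments and whitespace. This is a
 * rough heuristic, not a parser -- it doesn't handle comment markers inside
 * string literals, for example.
 */
public class TrivialCheckinDetector {

    public static boolean isTrivial(List<String> changedLines) {
        for (String line : changedLines) {
            // Strip a trailing // comment, then trim whitespace.
            String code = line.replaceAll("//.*", "").trim();
            // Skip blank lines and lines that look like part of a /* ... */ block.
            if (code.isEmpty() || code.startsWith("/*") || code.startsWith("*")) {
                continue;
            }
            return false; // found what looks like a real code change
        }
        return true; // nothing but comments and whitespace changed
    }
}
```

It wouldn't catch every trick, but it would catch the comment-tweaking cron job.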
What about the analysis time required to keep score? I could easily see how a post-processing Ant task could be developed to update the points for developers on a successful build. But for a failure, I think you'd need human analysis. That's a negative because it takes time, so the job could be rotated. On the plus side, what I've noticed is that analyzing why the build failed brings awareness to issues: some people needing training, a test that has to change because it's non-deterministic, or a hole in the pre-check-in process used to ensure a successful build.
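For the successful-build side, here's a rough sketch of what such a post-processing Ant task could look like. The committers attribute, the scores.properties file, and the one-point award are all assumptions on my part; in a real setup the committer list would come from the SCM or the CI server.

```java
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;
import java.util.Properties;

import org.apache.tools.ant.BuildException;
import org.apache.tools.ant.Task;

/**
 * Sketch of a post-processing Ant task that awards one point to each
 * developer whose name appears on a successful build. The "committers"
 * attribute and the scores.properties format are assumptions.
 */
public class AwardPointsTask extends Task {

    private String committers;                     // comma-separated names, e.g. "alice,bob"
    private String scoresFile = "scores.properties";

    public void setCommitters(String committers) {
        this.committers = committers;
    }

    public void setScoresFile(String scoresFile) {
        this.scoresFile = scoresFile;
    }

    @Override
    public void execute() throws BuildException {
        if (committers == null || committers.trim().isEmpty()) {
            log("No committers on this build; nothing to score.");
            return;
        }
        Properties scores = new Properties();
        try (FileReader in = new FileReader(scoresFile)) {
            scores.load(in);
        } catch (IOException e) {
            // No scores file yet -- first run, start everyone at zero.
        }
        for (String name : committers.split(",")) {
            String key = name.trim();
            int current = Integer.parseInt(scores.getProperty(key, "0"));
            scores.setProperty(key, String.valueOf(current + 1));
        }
        try (FileWriter out = new FileWriter(scoresFile)) {
            scores.store(out, "build game scores");
        } catch (IOException e) {
            throw new BuildException("Could not update " + scoresFile, e);
        }
    }
}
```

It would be invoked from the build file only after compilation and the tests have passed, say as the last step of the main target.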
To keep the game fresh and the developers motivated, we'd have to reset the points periodically. Iteration boundaries seem appropriate here.
Well, maybe I'll give it a shot and see what happens...