Releasing Every Fortnight

Genius.com’s successful adoption of agile practices has been covered at some length in earlier postings, including Presenting on Going Agile with Scrum and An Agile Fortnight. Building on this success, we have most recently reached the point where the completed user stories for any given sprint are not only ‘potentially shippable’ but are actually deployed to production. So, how did we get here and how long did it take?

Testing as the foundation

One of the key elements of our success with bi-weekly product releases is our commitment to steadily increasing automated test coverage – both unit tests and automated functional tests.

With a rapid rate of change – and new features in every release – it is imperative that developers know immediately if their check-ins have caused a build to break. This is only possible with a concerted investment in unit testing and QA automation. In our case, we proceeded in phases, each taking approximately four months to implement:

  1. All check-ins must have associated unit tests. While we did not take the time to retrofit existing code, all new or modified code was required to have associated unit tests (a minimal sketch appears after this list).
  2. All product builds must run the complete unit test suite. We use Hudson, integrated with JUnit, mbUnit, Test::Unit, jsUnity, and PHPUnit, to execute all the unit tests with every build and to report on failures at any stage.
  3. Run a build on every check-in, so developers learn within minutes whether their change broke the build.
  4. All regression tests in TestRun (our test plan management tool) must be automated using Selenium and added to the nightly build. This took some time and had to be done incrementally. When we started, a full end-to-end regression pass required three days of manual testing by the entire QA team, so incremental investments in test automation paid off quickly. Automating existing regression tests became a background task for the QA Engineers in each sprint. Developers also pitched in, writing helper functions to ease automation and writing automated tests themselves.
  5. All stories must have associated Selenium RC automated functional tests checked in and added to the nightly test run. In addition to manual functional testing, every new story must have associated automated tests checked in and executing (via Hudson) nightly, so that we are not adding to our regression debt (see the second sketch after this list).
  6. Run a subset of the functional tests as an acceptance test on every check-in.
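To make phase 1 concrete, here is a minimal sketch of the kind of unit test that accompanies a check-in. It uses PHPUnit (one of the frameworks we run under Hudson); the DiscountCalculator class is a hypothetical stand-in for whatever code the check-in touches:

    <?php
    require_once 'PHPUnit/Framework.php';
    require_once 'lib/DiscountCalculator.php';  // hypothetical class under test

    class DiscountCalculatorTest extends PHPUnit_Framework_TestCase
    {
        public function testTenPercentDiscountIsApplied()
        {
            $calc = new DiscountCalculator();
            $this->assertEquals(90.0, $calc->apply(100.0, 0.10));
        }

        public function testNegativeRateIsRejected()
        {
            $calc = new DiscountCalculator();
            $this->setExpectedException('InvalidArgumentException');
            $calc->apply(100.0, -0.5);
        }
    }

Hudson picks up failures without any custom glue because PHPUnit (like the other frameworks listed above) can emit JUnit-format XML reports, which Hudson parses natively.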
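And here is the shape of a story-level Selenium RC functional test (phase 5), written with PHPUnit’s Selenium extension. The URL, locators, and assertion text are placeholders rather than our actual application:

    <?php
    require_once 'PHPUnit/Extensions/SeleniumTestCase.php';

    class LoginStoryTest extends PHPUnit_Extensions_SeleniumTestCase
    {
        protected function setUp()
        {
            $this->setBrowser('*firefox');
            $this->setBrowserUrl('http://staging.example.com/');  // placeholder URL
        }

        public function testUserCanLogIn()
        {
            $this->open('/login');
            $this->type('id=email', 'demo@example.com');
            $this->type('id=password', 'secret');
            $this->click('id=submit');
            $this->waitForPageToLoad('30000');
            $this->assertTextPresent('Dashboard');
        }
    }

Once a test like this is checked in alongside the story, adding it to the nightly run is simply a matter of it living in the test directory Hudson already executes.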

When is a story done?

We established a very rigorous definition of ‘done’ for stories to ensure a consistent quality level. We also adopted ‘story swarming’ (applying as many developers, QA engineers, and DB engineers to a story as is practical) to shorten the time each story stays open and to avoid having many stories in flight at once.

For a story to be done:

  1. All phases completed (in our case ‘To Do’, ‘In Progress’, ‘Security Review’, ‘Ready for QA’, ‘In QA’, ‘Validated’)
  2. Unit testing complete
  3. Security reviewed (code reviewed for web application security vulnerabilities)
  4. Validated by QA
  5. Test cases documented in TestRun
  6. Automated QA testing complete
  7. Validated by Product Owner
  8. All operational considerations have been addressed

Provided all these conditions have been met, the story is demonstrated to the company at the Sprint Review on the second Friday of the two-week Sprint and released to customers the following Tuesday.

What else needs to be considered?

One of the questions I am most often asked is how, when moving this quickly, we maintain coherency in the architecture and the user experience. At Genius, we employ several methods to ensure the architecture is appropriately scalable and maintainable and that the product remains easy to use:

  1. NMI (‘needs more information’) stories. For user stories that have a significant impact on user experience or the underlying architecture, the team will first complete an NMI. NMI stories focus a subset of the team on determining user flow (with leadership from the Product Designer) and/or the underlying architecture (with leadership from the Technical Leads and the Development Director). The input to an NMI story is a list of questions that need answering (such as “How will the Marketing user…?” or “How can we ensure continuous availability of this feature during system maintenance?”). The output of an NMI is a user flow or technical design, plus a documented list of tasks for an upcoming sprint.
  2. Development framework. Ease of use is a key differentiator at Genius, as is performance. We evaluated several frameworks and determined that, to achieve the level of user interactivity required (Ajax), we would need to build our own lightweight PHP framework. This framework is now the basis for all new functionality added to the product – not only speeding development, but further ensuring consistency in coding and usability (a hypothetical sketch follows this list).
  3. Designated ‘leads’ for each of the major technical components or code bases of the product, Technical Operations, and User Experience, with primary responsibility for making the team productive – and secondary responsibility for completing story tasks for the sprint.
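To illustrate the consistency point in item 2: our framework is internal, so the names below (AjaxController, JsonResponse, the model() helper) are invented for this post, but the idea is that every Ajax endpoint follows one small convention, which keeps both the code and the resulting interaction patterns uniform across features:

    <?php
    // Hypothetical illustration only – not our actual framework API.
    class ContactSearchController extends AjaxController
    {
        public function handle(array $params)
        {
            $results = $this->model('Contact')->search($params['q']);
            return new JsonResponse(array('contacts' => $results));
        }
    }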

Another concern with bi-weekly deployments is releasing partially complete features. Because we are a SaaS provider, all the software we release to our production servers is immediately available to customers, so our goal is to complete at least a minimal feature set within each release. That said, we do make use of a beta flag (set by the provisioning team) to preview new features with customers or internally. This, combined with feature-based provisioning, gives us fine-grained control over what an individual customer user can see or access (sketched below). Of course, when work on an existing feature is only partially complete, we will typically roll the code back to the prior version (excluding it from the current sprint) to avoid presenting customers with an inconsistent experience.
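As a rough sketch of how the beta flag and feature-based provisioning combine (the flag names and the $account API here are invented; our actual provisioning system differs in the details):

    <?php
    // Hypothetical sketch of feature gating – not our production code.
    function renderReportingPage($account)
    {
        if ($account->isProvisioned('reporting')
                && $account->hasBetaFlag('new_reporting_ui')) {
            return renderNewReportingUi();    // beta customers preview the new UI
        }
        return renderClassicReportingUi();    // everyone else sees the existing UI
    }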

What’s up next?

The next step in our process evolution is to parallelize the nightly functional tests (the suite currently contains over 600 Selenium scripts and runs for over three hours) so they can be run with every build. We are taking a two-pronged approach to this:

  1. Virtualized Selenium servers in-house. These will be used to run functional tests against every build for a single browser.
  2. Sauce Labs Sauce On Demand for cross-browser Selenium testing of all the automated functional tests on a daily basis.
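One reason this split is straightforward is that a Selenium RC test does not care which server executes it. A sketch using PHPUnit’s Selenium extension – the in-house host name, staging URL, and credentials are placeholders, and the JSON browser descriptor follows Sauce’s Selenium RC documentation at the time of writing:

    <?php
    require_once 'PHPUnit/Extensions/SeleniumTestCase.php';

    class SmokeTest extends PHPUnit_Extensions_SeleniumTestCase
    {
        protected function setUp()
        {
            // In-house virtualized Selenium server (every build, one browser):
            //   $this->setHost('selenium-vm-01.internal');  // placeholder host
            //   $this->setBrowser('*firefox');

            // Sauce OnDemand (nightly, cross-browser):
            $this->setHost('ondemand.saucelabs.com');
            $this->setPort(80);
            $this->setBrowser('{"username": "OUR_USER", "access-key": "OUR_KEY", '
                . '"os": "Windows 2003", "browser": "iexplore", "browser-version": "7"}');
            $this->setBrowserUrl('http://staging.example.com/');  // placeholder URL
        }

        public function testHomePageLoads()
        {
            $this->open('/');
            $this->assertEquals('Genius', $this->getTitle());  // placeholder title
        }
    }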

In the future we will provide updates on our experiences with Sauce Labs and any other process developments.
