
Thursday, October 23, 2014

Assessing software end-to-end while it's still "in the oven"


Every once in a while, we come across a best practice that generates so much value it becomes part of our product management toolkit. I'd like to discuss a practice for evaluating software products that are still under development, one that, with a reasonable amount of effort:

  • Improves overall product quality
  • Generates great ideas, both large and small
  • Increases executive buy-in to your product and development efforts
  • Provides the development team with clear focus that produces results and changes behavior

This practice can be thought of as an extension of Scrum, since it fits well into the rhythm of sprints and the idea of always having a shippable product, although in theory it is not methodology- or framework-specific. It involves semi-formal, end-to-end reviews of products as they are being developed. We called these reviews "assessments"; they are not to be confused with the practice, often called an assessment or audit, of evaluating the effectiveness of product managers or product management organizations.

Our practice worked like this: toward the beginning of the release, we would identify milestones at which we would convene what might be characterized as a workshop to experience firsthand using our product throughout its life cycle (our product was on-premise). These were different from usability tests, unstructured testing or "bug jams" in terms of the emphasis on the life cycle, the roles of the participants and the way we conducted the exercise: a multi-function group of users working through predefined scenarios at the same time in the same place.

Perhaps a quick description of a typical assessment is the easiest way to explain this practice. Product management (with the help of Q and others) would define scenarios, including installing, configuring, using and even uninstalling our software product, based on the features that we felt were mature enough to test. We typically wrote simple scripts describing the steps at a fairly high level, which were distributed to the participants: product managers, executives, people from sales and marketing and even people from other product groups. We would typically block out an entire day, scheduling it far enough in advance to ensure broad participation. We would reserve a room with plenty of space and all the hardware necessary to execute the scripts. A few of us with deep knowledge of the product, including representatives from development, would be available at all times in the room to help participants with the issues they inevitably encountered, capture feedback and help development address issues that were identified. Beyond helping participants, predetermined members of the development team were at the ready to address bugs and other issues as quickly as possible.

The first script usually involved installing the product from scratch, something that probably doesn't receive enough emphasis in exercises like usability tests. Having novices attempt to install the product exposed complexity in a very painful way, giving execs and team leadership insight into a process that isn't always on their radar. Prerequisites that had to be installed separately, or complex installation steps requiring lots of arcane configuration information, helped us realize what we were putting our customers through. The truth is, in some cases we barely got beyond the install in the allotted time, especially early in the release.

In terms of testing features and functions, we usually scheduled most of the time for participants to execute the scripts. However, most people, especially execs, love to go off-script. We also allocated time for freeform "wandering" through the product (especially as we got closer to the release date).

Elaborating on the points at the beginning of the post, this approach commonly generated the following benefits:

  • We found that having some formality around our assessments, scheduling them in advance and inviting leadership provided, to say the least, significant focus for the entire product development team (it seemed to particularly get dev's attention)
  • Quality improved as many bugs were addressed almost immediately (with an exec waiting for the fix!)
  • Stakeholder buy-in increased as execs and others outside of the product development team understood the product better and became acquainted with the cool things it could do
  • At least one really good idea was generated, sometimes regarding neglected parts of the product such as configuration
  • The extended product development team (top to bottom) got closer

Is your organization doing this type of multi-function, life cycle-focused, script-based assessment? Do you think it would work for you?
