Hypothesis-Driven Design (Case)

A Season of Change & Disruption

“HVAC in a Hurry” (HinH) is now itself in a hurry. After decades of happy existence as a prosperous regional supplier of heating & air conditioning services for commercial properties, its industry is consolidating. The CEO, Mary Condor, feels they need to either scale, combine with another firm, or gradually be out-competed into insolvency. Scaling is her first choice.

She’s brought in Frangelico DeWitt to lead a change management and digital transformation program. His charter was to take the best of what HinH has learned and to use it to improve existing operations and facilitate rapid expansion into new regions (possibly through a franchising model). While resources are scarcer than he would like, DeWitt feels up to the challenge (with a first name like Frangelico, you have to be resilient).

Learning What to Build

It’s Friday afternoon, and the team is closing out a one-week design sprint to understand the company’s key roles/personas and the relative importance of their various jobs-to-be-done/problem-scenarios. Based on what they learned in the interviews, the team thinks the most valuable early wins will involve improving the interface between dispatch and the HVAC field technicians. The dispatchers work at headquarters and field calls and emails from customers as well as technicians about HVAC repairs. Technicians are dispatched to customer sites to execute jobs (installation, maintenance, and repair). Basically, the idea is to take what individuals at the company have learned works well and to standardize and automate it (where possible) with updated software.

The team has tuned their current charter to focus on the following design challenge: How might we improve technicians’ self-service capability on site to reduce downtime on site and the number of visits per job, and to improve customer retention?

Solving the Right Problem

To close out their current sprint, the team plans to focus on developing user stories for the JTBD+Value Proposition pairing below:

JTBD: Getting replacement parts to a job site

Current alternative: Call the office and request the part, then wait for an update on the phone or through a call-back

Value proposition: If we automate parts lookup and ordering online, then the techs will use it and it will improve outcomes.

Outcome metrics:

– Decrease in non-billable hours/week rel. baseline

– Decrease in job turnaround time rel. baseline

– Increase in customer satisfaction (via post-job survey) rel. baseline
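These metrics only work as tests if you can actually compute them against a baseline. A minimal sketch of that calculation follows; the field names and sample weekly figures are illustrative assumptions, not data from HinH’s systems.

```python
# Hypothetical sketch: computing the three outcome metrics relative to
# a pre-launch baseline. All names and numbers here are illustrative.

def pct_change(current_value, baseline_value):
    """Signed percent change vs. baseline (negative = a decrease)."""
    return (current_value - baseline_value) / baseline_value * 100

# Illustrative weekly averages before (baseline) and after the new parts flow.
baseline = {"non_billable_hours": 9.0, "turnaround_days": 3.2, "csat": 7.1}
current = {"non_billable_hours": 6.3, "turnaround_days": 2.4, "csat": 7.8}

for metric in baseline:
    delta = pct_change(current[metric], baseline[metric])
    print(f"{metric}: {delta:+.1f}% vs. baseline")
```

For the first two metrics the team wants that number to come out negative; for customer satisfaction, positive.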

Iterating to the Right Solution

The team has drafted the user stories and storyboard below. An epic that summarizes the interaction might be something like: ‘As Trent the HVAC technician, I want to identify a part that needs replacing so I can take the next steps with the customer.’

Based on their observations and interviews with the techs, they think the arc of that story might look something like this:

[Storyboard: HVAC epic story]

From there, they detailed the epic with individual user stories that are children of the epic.

Example Child Stories

As Trent the Technician, I know the part number and I want to find it on the system so I can figure out next steps on the repair.

As Trent the Technician, I don’t know the part number and I want to try to identify it online so I can figure out next steps on the repair.

As Trent the Technician, I don’t know the part number and can’t determine it, and I want help so I can figure out next steps on the repair.

As Trent the Technician, I want to see the cost of the part and time to receive it so I can decide on next steps and get agreement from the customer.

As Trent the Technician, I want to order the part so that I can complete the repair.

Towards a Testable Design

As the team gets ready to spend the next week designing and coding against these stories, they want to make sure they instrument relevant analytics as they go. For the child stories above, the team wants to finish their pre-sprint design work by answering the following:

  1. For the user stories in the case, what simple, non-technical analytical questions do you think would help the team make more purposeful decisions about how they’re doing with a given implementation?
  2. For those analytical questions, how do we answer them in Google Analytics? How do we unpack the answers into Goals, analytical views/reports, etc.? 
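One practical way to answer such questions is to send a custom event whenever a technician completes a key action, and build Goals and reports on those events. Below is a minimal sketch of the kind of payload the GA4 Measurement Protocol accepts (a `client_id` plus a list of named events with parameters). The event name `parts_lookup` and its parameters are illustrative assumptions, and the payload is only constructed here, not sent.

```python
import json

# Hypothetical sketch: an analytics event for the "find the part" stories.
# Event name and parameters are assumptions for illustration only.

def parts_lookup_event(client_id, part_found, seconds_on_task):
    """Build a GA4 Measurement Protocol-style event payload (not sent)."""
    return {
        "client_id": client_id,  # anonymous device/user identifier
        "events": [
            {
                "name": "parts_lookup",  # one custom event per child story
                "params": {
                    "part_found": part_found,  # did self-service succeed?
                    "seconds_on_task": seconds_on_task,
                },
            }
        ],
    }

payload = parts_lookup_event("tech-1138", True, 42)
print(json.dumps(payload, indent=2))
```

With events like this in place, the team could compare, say, the share of jobs where `part_found` is true against the share that still ends in a call to dispatch.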

For a specific example of what this might look like, see Exhibit: The Littlest Tutorial on Testing User Stories.

You may also find the following sketchbook/workbook helpful: Analytics Sketch Book-HVAC in a Hurry (Parts Self-Service), though that is optional. There are icons and note items in the margins; you may need to Zoom Out (in the Google Slides menu) to see those.

Exhibit: Agile User Stories- The Littlest Tutorial

TL;DR User stories are a generally accepted tool for creating design/dev. inputs to a modern software development process like agile.

User stories have a specific format:

“As a [persona],
I want to [do something]
so that I can [realize a reward].”

This format is designed to help the story writer be descriptive about the target user experience while not prescribing the implementation details. The purpose of user stories is to foster better discussions about implementation with the whole team rather than having team members work against an arbitrary specification.

Done right, the format helps prompt the following important questions:
[Figure: the questions a well-formed user story prompts]

Stories are generally organized around ‘epics’. These are user stories (same format as any other story), but they summarize the user experience, where the balance of the user stories (I refer to them as ‘child’ user stories) detail the individual components of the experience.

For example, let’s say you had this storyboard of an inside salesperson working and winning (or losing) a sale:

[Storyboard: the inside salesperson making calls]

An epic that summarizes the interaction might be something like: ‘As Ivan the Inside Salesperson, I want to decide who to call so I can maximize my contribution to quota.’

From there, you’d move on to detailing individual user stories. If you’re starting from storyboards (which is a great idea!), don’t force a 1:1 relationship between the storyboard squares and the user stories. You may find some squares require more than one user story and others are just background and don’t really need their own story.

Here’s an example set of user stories that detail the interaction.

Child Stories

As Ivan the Inside Salesperson, I want to know who to call with what proposition, so that I can make quick, quality decisions that maximize my quota.

As Ivan the Inside Salesperson, I want to mark an opportunity for callback, so I don’t forget and don’t let it distract me from my next thing.

As Ivan the Inside Salesperson, I want to close out the opportunity as a win, so I get full credit for the sale.

As Ivan the Inside Salesperson, I want to close out the opportunity as a loss, so I don’t have to keep explaining it back to management.

Right Problem vs. Right Solution & User Stories

TL;DR User stories are part of finding the right solution where jobs-to-be-done/problem scenarios are part of finding the right problem.

In product design and integrated development practices like hypothesis-driven development, there’s a lot of emphasis on decoupling problem/job vs. solution. Donald Norman’s point of view on this is one popular way of showing the separation and the relationship between these:

[Figure: Donald Norman’s separation of problem space and solution space]

In this case, you’re dealing with JTBD and user stories. JTBD are ways of encapsulating a point of view on the question of ‘What problems matter to the user?’ while user stories (and prototypes) are ways of exploring how, for a given JTBD, you might deliver a solution that’s better than the user’s current alternatives.

Exhibit: Testing User Stories- The Littlest Tutorial

The best way to focus the testability of your user stories is with simple, relevant questions, like the ones you see in the second column of the table below. Are the questions relevant? Actionable? Make sure of this before you worry about instrumenting them with metrics. That said, instrumentation is the next step and you’ll see that in the notes below.

CHILD USER STORY & NOTES/ANALYTICS

‘I want to know who to call with what proposition, so that I can make quick, quality decisions that maximize my quota.’

Does he use the dashboard for this?
– Logged calls vs. dashboard
– Time on page

‘I want to mark an opportunity for callback, so I don’t forget and don’t let it distract me from my next thing.’

Does he use the callback feature?
– Count of reminders set/calls made

‘I want to close out the opportunity as a win, so I get full credit for the sale.’

Are they closing out opportunities from the dashboard? How do salespeople perform with this vs. without it?
– Opportunities (properly) updated to win from dashboard vs. post-mortem vs. blank
– Sales, sales of priority product, and customer satisfaction vs. baseline cohort

‘I want to close out the opportunity as a loss, so I don’t have to keep explaining it back to management.’

(see above)
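The close-out questions reduce to a simple breakdown over an event log: of all closed opportunities, how many were updated from the dashboard vs. cleaned up post-mortem vs. left blank? A minimal sketch follows; the event shape and sample log are illustrative assumptions, not real CRM data.

```python
from collections import Counter

# Hypothetical sketch: answering "are they closing out opportunities from
# the dashboard?" from an event log. Channel names are assumptions.

log = [
    {"opportunity": 1, "closed_via": "dashboard"},
    {"opportunity": 2, "closed_via": "post_mortem"},
    {"opportunity": 3, "closed_via": "dashboard"},
    {"opportunity": 4, "closed_via": None},  # never properly closed out
]

# Treat missing close-outs as their own "blank" bucket.
counts = Counter(event["closed_via"] or "blank" for event in log)
total = sum(counts.values())

for channel, n in counts.items():
    print(f"{channel}: {n}/{total} ({n / total:.0%})")
```

A rising dashboard share (and a shrinking blank share) would suggest the feature is actually being adopted rather than worked around.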

Exhibit: The INVEST Checklist for Better User Stories

Writing great user stories and (more importantly) driving great discussion and development with them takes practice. Below are a few key focal points for you to consider as you draft your stories. These are based on the INVEST checklist by renowned agilist Bill Wake.

Independent

Is the story integral: does it have all three clauses ([1] As a [persona], [2] I want to [do something], [3] so that I can [achieve some kind of reward/conclusion])?

Does the story make sense as a stand-alone item? Could you sit down right now and prototype it? Making sure you’ve broken down the experience into discrete pieces will make your design better. Making sure you can work in small batches will make your practice of agile better.

Negotiable

Is the story about the experience you think the user should have vs. how that experience should be implemented? This is key to your functioning effectively with your team and generally making sure you stay focused on doing what actually makes sense for the user vs. checking off items on a to-do list (which is easier but not effective).

Here’s an extreme example: ‘As a user, I want to push a red button, so I can submit the contact form.’ This story has no reason for existing; if somehow you’re just utterly convinced you need a red button, do a prototype that shows how it would look. Instead, this story should be more about what the user is hoping to achieve by submitting the form and how they’ll know if they’re going to get it.

Also, avoid terms like ‘easily’ and ‘quickly’. No one has ever set out to build software that’s difficult or slow; difficult, slow software happens all the time, but not because someone forgot to add ‘quickly’ to their user story! If there’s some kind of benchmark you ultimately need to reach with users, that’s a good item to add to your test cases.

Valuable

What validated learning do you have that suggests this user story is worth building? What measurable outcome would it improve? See the above section Learning What to Build for more depth on this.

Estimable & Small

These aren’t the same, but they’re tightly related. If it’s hard for a developer to roughly estimate the level of effort to implement a story, it’s very likely that the story is too vague (IRL, this is what I observe 92.4% of the time). It’s also possible (though far less likely) that the developer or team is thinking of an approach to implementation that’s more expansive than is necessary or productive. For perspective on this, check out Bill Wake himself discussing ‘YAGNI’: Bill Wake on Yagni.

Testable

Personally, this is my number one with a bullet. Why? A highly testable story is very likely to score well on all the elements above (in my experience). Key to this is the third clause of the story: ‘…so that I can [achieve some kind of reward/conclusion].’ Making sure you can think through the user experience to a testable conclusion is not only key to moving forward with implementation and launch of this functionality, it will help you be more thoughtful about your design.