Your Usability Test Plan

While other types of testing are equally critical to product success, no one likes a product with poor usability. The practice of usability has improved dramatically in the last few years, and a strong capability in this area is now standard for any digital team.

Team-wide fluency with the basics is important to consistent practice. This includes enabling non-specialists to prototype and test their ideas. While testing usability requires the creation of a working item (an interactive prototype or some kind of working software), it’s actually one of the more straightforward types of testing. (For others, see the Customer Discovery Handbook.)

After reviewing this guide and the related practice, you will be able to:

  1. Identify focused and appropriate usability testing for your product and features
  2. Design said testing around your user stories
  3. Screen subjects and conduct usability testing
  4. Interpret and iterate on the results of that testing

Why test so often?

In the early phases of a big change (or new product), push yourself to diverge, developing multiple possible directions and completing preliminary testing on your favorites. The successful innovator is also a good economist, and the economics of broad prototyping in the early phases are clear:

[Figure: prototyping economy curve]

What testing do I do?

I’ve organized the balance of this section around three progressive approaches to testing: exploratory, assessment, and validation. These aren’t mutually exclusive; usually you’ll progress from one to the next.

Exploratory

This accelerates learning on what interface patterns will work through early user contact. Rather than having one or two shots at getting it right with working software, you can get ten with the same amount of time and effort by using prototypes.

You’ll start with an explicit prototype and test plan, but freely update them on a test-by-test basis as you learn what’s working and your observations encourage new ideas. Specific measurements are unimportant. Popular tools are paper prototypes, PowerPoint/Keynote prototypes, and various other lightweight tools that discourage overemphasis on details and encourage experimentation and variation. Exploratory testing drives to a decision about the fundamental approach to an interface.

Assessment

This focuses on measuring the efficacy of a new interface (or interface element). As opposed to validation testing, it’s less concerned with detailed measurements on specific items and more with whether most users can accomplish topline goals, noting the rough spots. Assessment testing typically provides inputs to further interface refinement (if it’s basically successful) or pivot and revision (if it is not).

Validation

As the name suggests, validation testing is for once you’re relatively sure you’ve got an acceptable interface, and you want a) a final check on that and b) some intuitive sense about what you’ll see in the measurements you take out in the wild (through Google Analytics, etc., with users you don’t have the chance to meet and observe directly). Try hard to leave yourself enough time to make at least a few minor changes before releasing to the public; you’ll likely identify a few rough spots well worth fixing.

You can also execute all three of these tests on a comparative basis (comparison between alternatives).

How do I do it?

Preparation

Relative to customer discovery interviews, user testing requires a relatively detailed setup. As with any detail-oriented task, it’s easy to get lost in the weeds. Resist the false gratification that task completion provides: the important thing is connecting what you’ve learned back to your core innovation engine.

User testing occurs relatively late in the overall discovery process. At this point, you should have validated personas, problem scenarios, and value propositions, something you can summarize in a Product Hypothesis. The diagram below describes all this in terms of the convergence-divergence pattern from design thinking. Your work on the Persona & Problem Hypothesis should ultimately converge on a focal problem set, to which you attach your value propositions. All this you can summarize in a Product Hypothesis to connect problem with solution.

The Product Hypothesis will include a Value Hypothesis, and that will have certain assumptions attached to it. You should test those assumptions with the quickest, least expensive MVP product proxies you think are viable to provide a definitive, actionable result (see the section on the Value Hypothesis for popular patterns). Once you’ve converged on the problems for which you can provide adequately valuable solutions, encapsulate your solution ideas in agile user stories. Then prototype, test, refine, and test some more!

[Figure: customer discovery divergence-convergence]

Every substantial interaction you plan to test should have a user story attached to it. The user stories will anchor your executions in the customer narrative and validated learning you’ve developed. For every material interface element, you should be able to answer these questions:

What user story did we design & execute this against?

What value proposition was that story delivering against? Why do we believe the user wants to do this?

What problem are we solving for the user with this value proposition?

Who is this user? Do we understand what makes them tick? What shoes they might wear?

This will not only help you converge on more valuable solutions; it will also help you zero in on the right place to start revisions when your execution doesn’t go perfectly (and it never does!). Also, successful innovation is a loop, not a line. I’ve presented most of the material as a sequence for clarity, but the successful innovator is constantly looping back through the hypothesis areas. People change, problems change (a little), and every solution is temporary.

Execution

0. Prepare

User testing almost always delivers useful results for the investment. Don’t feel like you absolutely can’t do it if you haven’t completed the items in the preceding section. That said, a little investment in those other areas (personas, problem, MVP testing) will go a long way.

1. Decide Test Type

Is your primary test objective exploratory, assessment, or validation? You can have elements of each, but I strongly advise deciding your principal focus in advance. The implications for preparation, execution, and decision (post-test) are substantially different between the three. Clearly explain the focus to the rest of your team; that will help bring the right attention to the testing.

Here’s a brief assessment, if you’re having trouble deciding:

You’re in the early phases of creating a new interface/interface element: Exploratory

You’ve drafted one or more directions that are well supported by validated learning and a review of comparables: Assessment

You think you’re done: Validation

2A. Prepare Research Design 

This has several sections in addition to the test plan; I use the term ‘research suite’ to designate the whole package for a given set of testing. At first, it may look like a lot of stuff, but every time I’ve done user testing I’ve found all this preparation well worthwhile (and skipping it costly). The example and template I reference are in the Venture Design Template/Appendix B. Notes on preparing the various sections of the research suite follow:

:: Objectives & Methods

This is for you to use internally. Describe in the clearest possible terms what you want to have happen as a result of the testing and your principal testing focus (exploratory, assessment, or validation), linking that to your objectives. Particularly if you’re early in a project, you may want to couple additional customer discovery interviews (persona, problem hypothesis) with your user testing. This is the place to explain that.

Here’s an example from Enable Quiz:

There are three general types of tests:
– Exploratory: for learning about customer problem scenarios in greater detail, sometimes with a paper or working prototype
– Assessment: for testing the usability of an early direction on product implementation
– Validation: for later stage final testing of an implementation

This test suite is exploratory and we’re preceding the user testing with customer discovery interviews from Appendix A [this is the appendix for customer discovery interviews in the template] to deepen and align our view of personas and problem scenarios with the exploratory test results.

:: Product Version

Note the version you’re planning to use for testing. See the next step (2B) on defining a test release. A preview, though, so you don’t get stuck: this need not be actual software or a real release; it could be paper prototypes or a prototype you make in PowerPoint or Keynote.

:: Subjects

Define your target subject count in terms of the personas you’ve identified (see above or tutorial on personas). You’re not trying to achieve statistical significance in most cases (and almost always in the early phases), so 5-10 subjects is perfectly OK.

Here’s an example from Enable Quiz:

Since enabling the HR manager persona to be more effective is central to our value proposition, our target weighting of subjects should reflect that. An ideal total and mix of subjects would be:

Helen (or Hank!) the HR Manager 4
Frank the Functional Manager 1-2

The screening question for both these subject types is:
How many technical job candidates did you interview in the last month?

:: Research Composition

Here you will summarize everything that happens to a subject and how long you think it will take. Here’s an example from Enable Quiz:

# Item Duration (min.) Notes
1 Intro. & Explanation 5 Here we will explain the objectives of the test and the parameters of their participation. We’ll also obtain the designated release & consent form*.
2 Discovery Questions 20 Using the interview guide, we’ll spend a few minutes on discovery to improve our personas, problem scenarios, and user stories.
3 Test Tasks 15 We’ll introduce the test scenario and then ask them to complete the Test Items.
4 Post-Test Debrief 5 Make sure we ask if it is OK to follow-up with additional questions.
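If it helps, you can sanity-check a composition like this with a few lines of code. The sketch below is purely illustrative: the items and minutes mirror the table above, and the 60-minute ceiling is my assumption (matching the "roughly 40-60 minutes" you'll tell subjects), not a rule.

```python
# Sanity-check a session plan's total duration against what you promised subjects.
# Items and durations are from the research composition above; the 60-minute
# ceiling is an illustrative assumption.

session_items = [
    ("Intro & Explanation", 5),
    ("Discovery Questions", 20),
    ("Test Tasks", 15),
    ("Post-Test Debrief", 5),
]

total = sum(minutes for _, minutes in session_items)
print(f"Planned session length: {total} min")  # 45 min

# Leave headroom: subjects run over, and you told them 40-60 minutes.
assert total <= 60, "Session plan exceeds the time you promised subjects"
```

A check like this is most useful once you start adding test items and the plan quietly creeps past the duration you quoted in your recruiting emails.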

:: Pre-Session Checklist

There’s nothing worse than starting a test with something not ready or generally not in the right state. This is a simple checklist to help you and/or your collaborators avoid false starts. Here’s an example from Enable Quiz:

# Item Notes
1 Make sure you have written versions of discovery and test questions to refer to
2 Make sure test instance is up and functional
– log in
– make sure app is on starting page
3 Make sure recording equipment* is up and functional

:: Session Design

This includes the intro you’ll do with subjects as well as the test items.

I strongly advise writing up the intro and practicing it; it takes work to put yourself in the subject’s shoes, and as things get busy (and repetitive) you’ll easily miss things.

On the test items, you’ll notice each row has four items–

Enumeration (#): This is just for reference

Research Objective: This will help keep you focused. Each item should have a research objective (otherwise, why is it there?). If you’re running an exploratory or assessment test, your user stories can provide a great anchor for the objective (see example below).

Estimated vs. Actual Time: This is for setting expectations on duration as well as evaluation. If you’re running an exploratory or assessment test, you’ll be less concerned with actuals.

Notes: This is where you set up and design the testing. I like to break each of these into a set of notes for the moderator and a set of target outputs. The outputs should closely and obviously tie to the research objective.

Here’s an example from Enable Quiz:

Intro

Thanks for making time to take part in our study. My name’s [name] and this is [observer]. [Explain participation and deal with consent form/obtain written consent]*
We’ll be using a test guide through the rest of this, so I hope you won’t mind me referring to that.
We’re here to learn about [an early version of a solution that allows HR managers to assess the technical skill set of a job candidate through an online quiz].
I’m going to ask you some questions and give you some tasks. Feel free to think out loud if you’re comfortable with that. We’re not here to test you and there are no wrong answers. Our results just help us better understand our product and its audience.
The session will take roughly [40-60] minutes.
Do you have any questions before we start?

Test Items

# Research Objective Est. v. Actual (min.) Notes
1 Exploratory Intro 5 MODERATOR GUIDE
Let’s say your job is to create one of these quizzes for an open position. Here’s a description of the position [Provide them sample job description and let them review.]. Let me know when you’ve finished reviewing it and if you have any questions.
OUTPUT
Validation that the subject understands their goal and the job description, roughly as well as they would in their current position.
2 Assess primary navigation for new quiz creation 2 MODERATOR GUIDE
Let’s say you want to create a new quiz. What would you do?
OUTPUT
Assessment of primary navigation for new quiz creation
3 How are we doing on this user story: “As an HR manager, I want to match an open position’s required skills with quiz topics so I can create a quiz relevant for candidate screening”? 5 MODERATOR GUIDE
Tell me what you think you’re seeing here?
Let’s say you wanted to choose a set of quiz topics for the open position you just reviewed. Show me how you’d do that?
OUTPUT
An assessment of the user’s relationship to the available affordances and their appropriateness to the current user narratives and tasks.

 

NOTES ON TAKEAWAYS

Personas & Problem Scenarios […]
UI and User Stories […]

 

:: Post-Test Debrief

Do you really need this? Yes, probably. This is an after-the-fact checklist to make sure you cover your bases: seeing if follow-up questions are OK, compensating the subject (in whatever way you plan), seeing if they have other thoughts, and seeing if they have ideas on other subjects. Here’s an example from Enable Quiz:

– Thanks so much. We’ll be using this to make the product and solution stuff like documentation better.
– Would you mind if we send you follow-up questions?
– (if you’re giving them some kind of tangible thank you, make sure that gets done)

*:: Note on Recording and Compliance

I don’t supply legal advice on this site and I don’t warrant these notes as fit for legal compliance. As well it should be, recording individuals is subject to various laws and regulations depending on who you are, who they are, where you are, and how the recording will be used and stored (among other factors). It’s important that you get advice from your legal counsel and maintain transparency and applicable compliance with your subjects. At a minimum, this means securing written releases for the recordings and making sure that the recordings are stored and accessed securely (if you store them at all). Regarding releases and consent, your specific compliance requirements will vary, but here are a few sample consent forms from US institutions:

Usability.gov

Indiana University

2B. Prepare Test ‘Release’

If you’re working on an actual piece of software, test what’s current but don’t make yourself crazy (and probably your subjects) by cutting it too close.

If you want to do an exploratory or assessment test against a prototype, say a Balsamiq mockup, there are alternatives to building working software.

:: Paper Prototype

Yes, you can actually get meaningful test results from playing with pieces of paper. It’s hard to believe until you do it.

To start, you need a prototype. The Balsamiq prototyping process will serve you well, assuming what you’re prototyping is software. You will essentially prepare a set of screens on paper and ask the user to interact with them: clicking (by pointing) and typing (by writing with a pen).

Modularity and layering will serve you well in your preparation. I recommend having a few templates that are regular paper layered on cardboard or a similar substrate. This will make it easier to physically handle the prototypes and exchange them with the user. The base template should look like your target device- phone, tablet, laptop, etc.

Then layer basic screens on top of those (with light paste or spray adhesive, which you can buy at any craft store). On top of those you can layer additional controls (Balsamiq lends itself to modular disposition of controls). And finally, on top of the controls you can layer Post-Its (or strips thereof) onto which users can ‘type’ (photograph the results after tests and then just replace the Post-Its).

Try it a few times and you’ll probably find you’re comfortable with the process.

:: PowerPoint or Keynote Prototype

This is the same basic idea as paper prototyping, but you’re simulating the interaction with inter-slide links in PPT or Keynote. The advantage is that everything’s on the computer if you’re not a glue-and-scissors fan, and the experience may feel more real to subjects. The disadvantage is that the linking can get confusing, improvisation is harder, and if you want the user to fill out text you’ll need a paper form for them anyway.

Create the various (static) screens you want as slides within your application of choice. Then add inter-slide links. In the current version of PowerPoint (Mac; I’ll guess it’s the same on PC but haven’t been able to check), you do this by:

– two-finger (or right) clicking on a shape
– selecting ‘Hyperlink’
– then selecting ‘Document’ in the pop-up
– and using the ‘Locate’ button to find an ‘Anchor’ (you’ll need to click the right triangle to unfurl the list of slides)

On Keynote it’s simpler: two-finger/right click a shape, select ‘Add Link’ or ‘Edit Link’ if you have one in place, and then select the target slide.

3. Prepare Test ‘Infrastructure’

When the alternative is doing nothing, you can finish a darn good test by sitting someone down in front of what you have, giving them a few goals to complete, and seeing what happens.

Few of you will have access to observation booths etc., so I’ll skip that.

If you have the team size, separating the facilitator and observer/note-taker functions is very helpful, leaving the facilitator free to focus on the experience of the subject.

Make sure the facilitator is close by, but ideally not immediately visible or over the shoulder of the subject. A good location is between the subject’s 4- and 5-o’clock, 2 to 4 feet away. The observers will generally sit behind the subject, as far away as possible while still able to see what’s happening.

A simple PC/Mac with a web-cam will do fine. For recording screen activity and a web-cam feed on a PC, I like Camtasia Studio. For the Mac, I use ScreenFlow. Make sure you have everything recording and rendering the way you expect beforehand.

Note: You have serious obligations (ethical and legal) to steward and safeguard your subjects’ privacy and obtain their explicit agreement on participation, particularly if you’re recording. See the above note on ‘Recording & Compliance’.

4. Obtain Subjects

First off, if you’re trying out a new test set up (not to mention if you’ve never done this before), find some subjects where you can ‘test the test’. This is anyone who could plausibly use the product, even if they don’t well represent one of your target personas. Things will break, you’ll fix them, don’t worry, it’s natural.

Following this, prepare a screener: a simple, factual question or questions to qualify the relevance of subjects. With usability you can be a little more lenient than with the development of your persona and problem hypothesis, but watch for the bias toward subjects who are convenient and comfortable vs. relevant.

5. Execute Test Suite

If you have a research suite along the lines of what’s above/in the template, then you have a plan.

In working the plan, practice is the best tutor. Be careful not to coach subjects too much or make them feel judged. It’s painful to watch them struggle with something they don’t understand, but better to learn about that now than subject every future user of your product to it! Give them time to work through confusion. Eventually (set a threshold for yourself) you’ll need to help them move forward, but make sure you don’t do it too soon.

Don’t forget to thank your subject, compensate them (in whatever way you plan), and ask them if follow-ups are OK.

6. Make Your Notes (ASAP!)

I recommend doing this right away. Most of the important insights you’ll have, you’ll have on the spot.

Validation (Invalidation)

As with any test, conclusions are the point. Success/a good result will vary by test type.

Exploratory: The results should help you better understand the likely journey of a typical user and, depending on where you are in designing/prototyping the interface, whether you’re headed in a workable direction. Comparison tests here are highly desirable given their low cost and possible impact.

Assessment: The key question here is whether to ‘pivot or persevere’ on a given direction. Lots of stuck and/or frustrated users means no. Be ready to iterate a lot; the change you need may not be radical. Comparison testing is also highly economical here.

Validation: Here you’ll generally have a quantitative target for time spent per task and in total on your major experience arcs. Validation is being within a reasonable deviation from that.
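As a sketch of what that evaluation might look like in practice, here's a hypothetical Python check. The task names, target times, and 20% tolerance are all illustrative assumptions; substitute your own targets and whatever deviation threshold you consider reasonable.

```python
# Hypothetical validation check: compare observed task times against targets.
# Task names, targets, and the 20% tolerance are illustrative assumptions.

TOLERANCE = 0.20  # allow actuals up to 20% over target

targets = {  # target minutes per task, from your test plan
    "create_new_quiz": 2.0,
    "select_quiz_topics": 5.0,
}

observed = {  # median observed minutes across subjects
    "create_new_quiz": 1.8,
    "select_quiz_topics": 6.5,
}

def validate(targets, observed, tolerance=TOLERANCE):
    """Return task -> (within_tolerance, fractional_deviation)."""
    results = {}
    for task, target in targets.items():
        deviation = (observed[task] - target) / target
        results[task] = (deviation <= tolerance, deviation)
    return results

for task, (ok, dev) in validate(targets, observed).items():
    print(f"{task}: {'PASS' if ok else 'REVISIT'} (deviation {dev:+.0%})")
```

Tasks flagged REVISIT are your candidates for the minor pre-release fixes mentioned above.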

Pro Tips

Focus on the Details

It is cumbersome, but make sure you’re actually getting the subjects to engage with all the individual interface components, prompting them (without leading them) through your test plan. While you’re validating a rough general concept, the way you do that is through careful test design and observation of what the subject actually does (vs. says).

Use a ‘Cheater’ for Free Text Input

If there’s a text input of any importance, consider using a ‘cheater’. This is just a printout of your prototype screen with a place for the subject to write in the text. You can just present these mid-session; don’t make a big thing about it and neither will the subject. You can see an example of this here:
Usability Testing Demo.

I like to put these print-outs on a cardboard backing and use Post-Its that I can replace between sessions. Just hold the cheater up to the camera if you don’t find that too disruptive, or loop through them at the end of your recording.

Anchor in User Stories

There’s nothing like a fully-articulated user story to make sure you’ve covered the who, the why, and, critically, how you’ll know if a user ‘got there’ on a given interface approach. Particularly with exploratory and assessment testing, I recommend organizing your test objectives around your stories. This will help you stay focused on the UX vs. getting overly fixated on some aspect of your interface idea/prototype.

Supply Motivation

You’re testing usability, not motivation, and this is not the right way to test motivation (for that, see Lean Startup and the use of MVPs). Make sure you give the user an intro that tells them the specific situation they’re in and gives them their goal. Then, with a minimum of prompting, see if they can realize those goals with your current interface. Don’t ask them if they want to do x or y, and don’t ask them to think up their own inputs and parameters unless you have a really good reason.

No Red Button Tests

If you have a big red button, don’t prompt your user with ‘Would you show me how you would press the big red button?’. You already know what’s going to happen. Instead, prompt your user with what you’re telling them is their goal and find alternative but relevant language to describe your interface instead of keying to specific labels on the page.

Turn off Hints (Balsamiq-Specific)

This is specific to the prototyping tool Balsamiq, which I use in a lot of examples and courses. While there are many tools out there, I like Balsamiq’s focus on rough, early prototyping. Anyway, if you create interactive prototypes with it and use it in ‘play’ mode for user testing make sure to turn off ‘Link Hints’ and ‘Big Arrow Cursor’. These are meant (I believe) for cases where you’re showing a colleague a prototype and want to cue them to where they should click- in user testing that’s not something you want.

Example A: Enable Quiz Usability Test Plan

Objectives & Methods

There are three general types of tests:
– Exploratory: for learning about customer problem scenarios in greater detail, sometimes with a paper or working prototype
– Assessment: for testing the usability of an early direction on product implementation
– Validation: for later stage final testing of an implementation

This test suite is exploratory and we’re preceding the user testing with customer discovery interviews to deepen and align our view of personas and problem scenarios with the exploratory test results.

Product Version

We’ll be using version [0.1] of the product for this exploratory test. [NOTE: They could easily be using paper or PowerPoint prototypes at this stage as well]

Subjects
Since enabling the HR manager persona to be more effective is central to our value proposition, our target weighting of subjects should reflect that. An ideal total and mix of subjects would be:

Helen (or Hank!)  the HR Manager 4
Frank the Functional Manager 1-2

The screening question for these subject types is:
Helen (or Hank!)  the HR Manager: How many technical job candidates did you screen in the last month?

Research Composition

# Item Duration (min.) Notes
1 Intro. & Explanation 5 Here we will explain the objectives of the test and the parameters of their participation. We’ll also obtain the designated release & consent form*.
2 Discovery Questions 20 Using the interview guide, we’ll spend a few minutes on discovery to improve our personas, problem scenarios, and user stories.
3 Test Tasks 15 We’ll introduce the test scenario and then ask them to complete the Test Items.
4 Post-Test Debrief 5 Make sure we ask if it is OK to follow-up with additional questions.

Pre-Session Checklist

# Item Notes
1 Make sure you have written versions of discovery and test questions to refer to
2 Make sure test instance is up and functional – log in

– make sure app is on starting page

3 Make sure recording equipment* is up and functional

Session Design

Intro

Thanks for making time to take part in our study. My name’s [name] and this is [observer]. [Explain participation and deal with consent form/obtain written consent]*

We’ll be using a test guide through the rest of this, so I hope you won’t mind me referring to that.

We’re here to learn about [an early version of a solution that allows HR managers to assess the technical skill set of a job candidate through an online quiz].

I’m going to ask you some questions and give you some tasks. Feel free to think out loud if you’re comfortable with that. We’re not here to test you and there are no wrong answers. Our results just help us better understand our product and its audience.

The session will take roughly [40-60] minutes.

Do you have any questions before we start?

Test Items

# Research Objective Est. v. Actual (min.) Notes
1 Exploratory Intro 5 MODERATOR GUIDE

Let’s say your job is to create one of these quizzes for an open position. Here’s a description of the position [Provide them sample job description and let them review.]. Let me know when you’ve finished reviewing it and if you have any questions.
OUTPUT

Validation that the subject understands their goal and the job description, roughly as well as they would in their current position.

2 Assess primary navigation for new quiz creation 2 MODERATOR GUIDE

Let’s say you want to create a new quiz. What would you do?
OUTPUT

Assessment of primary navigation for new quiz creation

3 How are we doing on this user story: “As an HR manager, I want to match an open position’s required skills with quiz topics so I can create a quiz relevant for candidate screening”? 5 MODERATOR GUIDE

Tell me what you think you’re seeing here?
Let’s say you wanted to choose a set of quiz topics for the open position you just reviewed. Show me how you’d do that?
OUTPUT

An assessment of the user’s relationship to the available affordances and their appropriateness to the current user narratives and tasks.

NOTES ON TAKEAWAYS

Personas & Problem Scenarios […]
UI and User Stories […]

Post-Test Debrief

– Thanks so much. We’ll be using this to make the product and solution stuff like documentation better.

– Would you mind if we send you follow-up questions?

– (if you’re giving them some kind of tangible thank you, make sure that gets done)

* See the ‘Note on Recording and Compliance’ above.

Example B: Usability Test Plan for Small Business Social Media Automation

Screener

How many times last month did you post to social media for your business? What services did you use?

Objectives & Methods

There are three general types of tests:
– Exploratory: for learning about customer problem scenarios in greater detail, sometimes with a paper or working prototype
– Assessment: for testing the usability of an early direction on product implementation
– Validation: for later stage final testing of an implementation

This test suite is an assessment test.

Product Version

We’ll be using version [x.y] of the product for this assessment test. [NOTE: They could easily be using paper or PowerPoint prototypes at this stage as well]

Subjects

Our core subject has a small company or personal brand they’re promoting. [XYZ] is central to our value proposition of [ABC], so we’re targeting a composition of subjects as follows (organized against our personas):

Sam the Small Business Owner 4

Research Composition

# Item Duration (min.) Notes
1 Intro. & Explanation 5 Here we will explain the objectives of the test and the parameters of their participation. We’ll also obtain the designated release & consent form*.
2 Test Tasks 15 We’ll introduce the test scenario and then ask them to complete the Test Items.
3 Post-Test Debrief 5 Make sure we ask if it is OK to follow-up with additional questions.

Pre-Session Checklist

# Item Notes
1 Make sure you have written versions of discovery and test questions to refer to
2 Make sure test instance is up and functional – log in

– make sure app is on starting page

3 Make sure the subject doesn’t have an account on [social media automation system] already
4 Make sure they have accounts on at least two of: FB, Twitter, LinkedIn, G+ and they know their username and password

Session Design

Intro

Thanks for making time to take part in our study. My name’s [name]. [Explain participation and deal with consent form/obtain written consent]*

We’ll be using a test guide through the rest of this, so I hope you won’t mind me referring to that.

We’re here to learn about a product that helps individuals and teams manage social media accounts.

I’m going to ask you some questions and give you some tasks. Feel free to think out loud if you’re comfortable with that. We’re not here to test you and there are no wrong answers. Our results just help us better understand our product and its audience.

The session will take roughly [40-60] minutes.

Do you have any questions before we start?

Test Items

# Research Objective Est. v. Actual (min.) Notes
1 How are we doing on this user story: “As Sam the Small Business Owner, I want to sign up for the service, so I can give it a try”? n/a MODERATOR GUIDE
What do you think you’re seeing here?
Let’s say you want to sign up. Would you show me how you’d do that?
OUTPUT
User’s understanding of the landing page. Validation that the subject can complete and understands the initial signup process.
2 How are we doing on this user story: “As Sam the Small Business Owner, I want to connect my social media accounts, so I can create automated posts to them”? n/a MODERATOR GUIDE
Let’s say you wanted to add your Twitter and LinkedIn accounts. Would you show me how you would do that?
OUTPUT
An assessment of whether the user can connect accounts and understands the process.
3 How are we doing on this user story: “As Sam the Small Business Owner, I want to schedule some posts for two days from now at 8AM [local time zone], so I know it’s going to post at the time I want”? n/a MODERATOR GUIDE
Let’s say you wanted to compose and schedule a posting for two days from now at 8AM [local time zone]. The text is:
‘Free fries from 3-6pm today!’
and the url is www.alexandercowan.com
I’ll help you make sure it’s not accidentally posted and we’ll delete it at the end. Let’s say you want it to post to both Twitter and LinkedIn. Would you show me how you’d do that?
OUTPUT
An assessment of whether the user can schedule posts and understands the process.
4 How are we doing on this user story: “As Sam the Small Business Owner, I want to update the time on a scheduled event, so I know it’s going to post at the time I want”? n/a MODERATOR GUIDE
Let’s say you wanted to change the items you just scheduled to post at 10am instead of 8am. Would you show me how you would do that?

OUTPUT
An assessment of whether the user can find scheduled posts and update their post times.

5 How are we doing on this user story: “As Sam the Small Business Owner, I want to remove a scheduled post, so I know it’s not going to post”? n/a MODERATOR GUIDE
Let’s say you wanted to remove the LinkedIn post so nothing goes out at all. Would you show me how you would do that? (Repeat for Twitter)

OUTPUT
An assessment of whether the user can locate and delete scheduled posts and understands the process.

NOTES ON TAKEAWAYS

Personas & Problem Scenarios […]
UI and User Stories […]

Post-Test Debrief

– Thanks so much. We’ll be using this to make the product and solution stuff like documentation better.
– Would you mind if we send you follow-up questions?
– (if you’re giving them some kind of tangible thank you, make sure that gets done)
– (make sure to delete their accounts and all login, password, and personal identifying data)

* See the ‘Note on Recording and Compliance’ above.