Executing Continuous Product/Market Fit

If you build a house or manufacture a crate of shoelaces, you can pretty well predict how much you’ll earn. Output and outcomes are tightly coupled. If you build the 1.0 version of a new digital application, this is not the case. On the one hand, odds are it will be worth zero or even less than zero; on the other hand, if you do connect with demand, scaling up in digital is gloriously inexpensive. Output and outcomes are loosely coupled.

This fundamental economic reality is the basis for every framework or practice associated with tech and innovation: design thinking, Lean Startup, and agile, to name a few of the headliners. Long popular with venture capitalists, the concept of ‘product/market fit’ is great at explaining the difference between the minority of digital products that are ready for explosive growth and those that aren’t.

The problem with it is that while it does a great job of explaining what happened with a product after the fact, it’s not actionable for product managers grinding their way to product/market fit week to week, quarter to quarter. Sean Ellis’ ‘40% rule’ holds that if you survey customers and at least 40% say they would be ‘very disappointed’ if the product went away, then you have product/market fit.

However, this definition relies on what customers say vs. what they do, which is deeply problematic for a number of reasons. One of the most obvious is that it requires a secondary instrument: some kind of survey. Surveys are relatively cumbersome to administer, and their results will always be confounded by selection bias: are the users who decide to respond really representative of all your users? And how do you use a survey to test the partial effect of a new feature or customer experience?

If that definition is too broad, then the alternatives that suggest always looking at a very specific metric, like how often customers return to your site (or app), are too narrow. How do you apply that definition to products with different natural cadences, like tax prep software vs. a social media app? What if changes to the UI naturally shift user behavior from quarter to quarter?

A Working Definition of Product/Market Fit?

For the products I’m working on as an advisor and founder, I’ve found I need a definition of product/market fit that does these 5 things:

1. Relies on What Users Do vs. What Users Say

Leaning on what users do vs. what they say is an established lesson from product design. It’s also an area where recently working as a professor in a stats group has helped me a lot. In science speak, behavioral observations are much less subject to ‘confounders’, the lurking variables that make inferences, and hence decisions, much less reliable.

For example, consider the case where you just had to get in touch with technical support to resolve an issue with a product that you generally like, but the support person was terrible. When the company sends you a post-call satisfaction survey, you might be inclined to respond that, no, you would not recommend the product to a friend, even though, really, you would. Or consider the opposite case: you regret buying the product, but the support person did the best they possibly could, and when that survey comes, you want to say something nice, even though you wouldn’t actually recommend the product.

This does not mean that a good take on product/market fit is just about the numbers. If a team can’t readily pair qualitative and quantitative evidence, they’re going to lose the thread. We even have a term for this in analytics/data science: ‘ground truth’. A good definition of product/market fit has to relate both types of observations, qualitative and quantitative.

2. Can be Easily Implemented with Standard Analytics Tools

I want a definition that leverages all my data on users, vs. just a few periodic survey responses. I’m a big fan of agile and continuous design, and generally of making the question of ‘Is this working the way we want?’ something that teams can easily check whenever they want. Whether the product team is using Google Analytics, Mixpanel, KISSMETRICS, etc., I want to be able to frame a few focal observations about user behavior and use them as our true north week to week, quarter to quarter, to figure out if what we’re doing is working or not.

Not only do I want to make use of all the (useful) data I have on user behavior as it comes in, but I also want comparability with what I’ve learned so far, and this is a problem with the too-narrow metrics: they’re so specific that incremental changes to the user experience (UX) or the more general customer experience (CX) make it hard to compare and apply your existing lessons learned. This is particularly important in the case of leveraging leading vs. lagging metrics.
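
To make that concrete, here’s a minimal sketch of computing a focal behavioral metric from a raw event export. The file name, the ‘part_ordered’ event, and the threshold are hypothetical placeholders for whatever your analytics tool actually gives you:

```python
import pandas as pd

# Hypothetical raw event export (one row per event): user_id, event, timestamp.
events = pd.read_csv("events.csv", parse_dates=["timestamp"])
events["week"] = events["timestamp"].dt.to_period("W")

FOCAL_EVENT = "part_ordered"  # the focal user behavior (placeholder name)
THRESHOLD = 0.30              # the team's 'line in the sand' (placeholder)

# Share of weekly active users who exhibit the focal behavior.
active = events.groupby("week")["user_id"].nunique()
focal = (
    events[events["event"] == FOCAL_EVENT]
    .groupby("week")["user_id"]
    .nunique()
)
focal_rate = (focal / active).fillna(0.0)

# A check the team can run whenever they want: is this working the way we want?
print(focal_rate.tail(4))
print("On track:", bool((focal_rate.tail(4) >= THRESHOLD).all()))
```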

3. Leverages Leading vs. Lagging Metrics

No matter how you slice it, surveys are a lagging indicator and will make for slower, more expensive decisions, which is not what you want in a hyper-competitive, innovation-intensive environment. How do you leverage what you’ve learned about leading metrics like engagement levels so that you can make a call on a new feature, customer segment, or lead source as soon as possible?
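
Here’s a minimal sketch of what acting on a leading metric can look like, assuming (purely hypothetically) that your past cohorts showed a week-one engagement bar that implied a certain retention level; all the numbers are placeholders, not estimates from real data:

```python
# Learned (hypothetically) from past cohorts: users who hit this
# engagement bar in week one went on to retain at ~60% by month six.
WEEK1_SESSIONS_BAR = 3
IMPLIED_RETENTION = 0.60

def early_call(week1_sessions: list[int]) -> str:
    """Make a pivot-or-persevere call on a new feature's cohort after one
    week, instead of waiting months for the lagging retention number."""
    hit_rate = sum(s >= WEEK1_SESSIONS_BAR for s in week1_sessions) / len(week1_sessions)
    if hit_rate >= 0.5:  # placeholder decision rule
        return f"persevere: {hit_rate:.0%} of cohort at bar (implies ~{IMPLIED_RETENTION:.0%} retention)"
    return f"pivot: only {hit_rate:.0%} of cohort hit the engagement bar"

print(early_call([1, 4, 5, 2, 3, 6, 0, 4]))  # sessions per user, week one
```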

4. Easily Extends to Testing New Features, CX’s, and Segments

Successful ventures either get lucky or iteratively test their way to success. I don’t know how to be reliably lucky, so I focus on testing. So did notable startups like Dropbox and Aardvark, and so do durable franchises like Google.

Let’s say you have three new customer segments you think might be the next source of growth for you: how do you test that? How do you figure out how much you can pay for an acquisition? Is a new feature enhancing product/market fit, or is it the first step in a journey like Evernote’s, where a mishmash of features craters product/market fit? I want to be able to observe, infer, and act on leading indicators where I feel I have reliable lessons learned on the downstream behaviors, like the relationship between engagement and retention.

For example, if you roll out something new, strong initial acquisition can mask fatal amounts of churn if you’re not looking in the right places, as in the sketch below.
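
One way to look in the right places is a simple cohort retention table; the file names and column layouts here are hypothetical:

```python
import pandas as pd

# Hypothetical exports: one row per user and one row per activity event.
users = pd.read_csv("users.csv", parse_dates=["signup_date"])      # user_id, signup_date
activity = pd.read_csv("activity.csv", parse_dates=["timestamp"])  # user_id, timestamp

users["cohort"] = users["signup_date"].dt.to_period("M")
activity = activity.merge(users[["user_id", "cohort"]], on="user_id")
activity["age_months"] = (
    activity["timestamp"].dt.to_period("M") - activity["cohort"]
).apply(lambda offset: offset.n)

# Rows: signup cohort; columns: months since signup; values: share still active.
cohort_sizes = users.groupby("cohort")["user_id"].nunique()
retention = (
    activity.groupby(["cohort", "age_months"])["user_id"].nunique()
    .unstack(fill_value=0)
    .div(cohort_sizes, axis=0)
)
# Topline user counts can grow while every row of this table decays fast.
print(retention.round(2))
```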

5. Readily ‘Cascades’ with OKR’s or Similar

This is a bigger deal at scale, but it’s important that individual teams can readily understand and align with the company’s larger goals. For example, if the company’s topline objective is to become the leading CRM for B2B manufacturing companies, and their target result for the current quarter is to increase revenue by 45%, what does that mean for the team that deals with a very specific part of the product, like interfacing with third-party data services? Or a chatbot for customer support?

The idea with using OKR’s or a similar metrics-driven approach is that the company can describe its progress toward product/market fit in specific terms at the company level and then decompose or ‘cascade’ that description to the specific work of individual departments and teams. Periodic measurements of overall customer happiness aren’t directly actionable for those teams.
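
As a minimal sketch of what such a cascade can look like (every objective, metric name, and target below is hypothetical):

```python
# All objectives, metric names, and targets here are hypothetical.
company_okr = {
    "objective": "Become the leading CRM for B2B manufacturing companies",
    "key_results": [{"metric": "quarterly_revenue_growth", "target": 0.45}],
}

# Each team frames its contribution as a focal, observable behavior it can
# actually move, rather than inheriting the revenue number directly.
team_cascades = {
    "third_party_integrations": {
        "metric": "accounts_syncing_external_data_pct",
        "target": 0.25,  # hypothesized driver of retention, hence revenue
    },
    "support_chatbot": {
        "metric": "tickets_resolved_without_escalation_pct",
        "target": 0.40,  # hypothesized driver of renewals
    },
}
```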

A Third Definition of Product/Market Fit?

OK, OK, I’ll cut to the chase: the third definition of product/market fit, the one I prefer, is to frame it in terms of user behavior: specifically, the user behaviors that constitute an individual ‘win’ on product/market fit. I find that this definition delivers the five things I’m after fairly well:

1. It allows me to rely on specific user behavior vs. generalized, circumstantial survey responses

2. I can immediately implement it with any standard web/app analytics suite (Google Analytics, Mixpanel, etc.) and continuously observe progress (or lack of progress) towards product/market fit

3. It allows me to immediately and continuously observe leading metrics for tighter, faster actionability. For example, if I know (or at least I’m ready to assume based on prior observations) that a certain level of user engagement leads to a certain level of retention and monetization, then I can more immediately invest in scaling up a new feature or CX that improves user engagement.

4. Since I’m observing specific user behaviors, I can readily test the specific effect of a new feature or CX and, for the reasons above, make quicker, more confident decisions about whether to ‘pivot or persevere’.

5. If the operation is working at scale, I can facilitate alignment between individual teams and company goals through team-relevant metrics, which gives teams the kind of outcome-focused definition of success they want for freedom of action and autonomy.

The overall framing I use to do this is ‘customer experience (CX) mapping’. For a given job you do for the user or problem you solve for them, the idea is to frame their journey in qualitative terms and then identify a focal metric (dependent variable/DV) for each step.

The example here shows how a team building an app for HVAC (heating, ventilation and air conditioning) techs to order replacement parts would unpack and measure their target CX:

Each step in the customer experience has a focal dependent variable (DV) based on observed user behavior and a ‘line in the sand’ threshold that constantly pushes the team to prioritize relative to a successful CX. Everything the team’s trying out is framed as a testable independent variable (IV) relative to the focal DV.
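
A minimal sketch of a CX map as a checkable structure might look like this; the step names, focal DVs, and thresholds are hypothetical stand-ins, not the actual map from the example:

```python
# Hypothetical stand-ins for the HVAC parts app: step names, focal DVs,
# and thresholds would come from the team's own CX map.
CX_MAP = [
    {"step": "Acquisition", "focal_dv": "signup_rate_from_landing", "threshold": 0.05},
    {"step": "Onboarding",  "focal_dv": "first_part_search_rate",   "threshold": 0.60},
    {"step": "Engagement",  "focal_dv": "weekly_order_rate",        "threshold": 0.30},
    {"step": "Retention",   "focal_dv": "retained_14_months_rate",  "threshold": 0.40},
]

def check_cx(observed: dict[str, float]) -> None:
    """Compare this period's observations to each step's line in the sand."""
    for step in CX_MAP:
        value = observed.get(step["focal_dv"])
        status = "NO DATA" if value is None else ("OK" if value >= step["threshold"] else "BELOW")
        print(f"{step['step']:<12} {step['focal_dv']:<28} {status}")

check_cx({"signup_rate_from_landing": 0.06, "weekly_order_rate": 0.22})
```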

Where’s the part about product/market fit? Great question! It’s specifically defined by the Retention behaviors. For example, an individual user accrues to product/market fit if they pay >$80/month and are retained for >14 months. This has the useful feature of making it very obvious where you need to be on leading DV’s like Acquisition: given a certain definition of Retention, you know how much you can pay for an acquisition.
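
For instance, here’s a back-of-the-envelope version of that Retention-to-Acquisition link; the margin and payback assumptions are hypothetical:

```python
# The $80/month and 14-month figures come from the Retention definition
# above; the margin and the one-third payback rule are hypothetical.
monthly_revenue = 80     # a 'win': the user pays >$80/month...
retained_months = 14     # ...and is retained for >14 months
gross_margin = 0.70      # hypothetical margin on that revenue

ltv = monthly_revenue * retained_months * gross_margin  # 80 * 14 * 0.70 = $784
max_cac = ltv / 3  # hypothetical rule of thumb: keep CAC under a third of LTV

print(f"LTV ~ ${ltv:,.0f}; pay at most ~ ${max_cac:,.0f} per acquisition")
```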

In closing, I’ll offer this: I love to build stuff and I hate worrying about product/market fit. But just blithely making things doesn’t make for a good business, unless you get lucky. So, I have to make sure what I’m doing is driving toward product/market fit, and you know that saying about how dull knives cut more people than sharp ones (because they’re clumsy)? Well, that’s kind of how I feel about dealing with product/market fit: I want something that’s operationally decisive and easy to implement so I can focus on what I like doing, which is designing and building products. For this, I’ve found that CX Mapping delivers what I want: something that’s more immediate and continuous than the 40% rule but which still gives me more operational context than just looking at a specific measurement like returning visitors.

If you’re interested in trying it out, I can’t help but recommend this delightful little volume which offers even more detail: Hypothesis-Driven Development.

Acknowledgements

I’d like to thank Colin Zima for his help improving this post and also to absolve him for any of its shortcomings. Colin is a serial founder and product executive who’s had multiple exits, including most recently as CPO of Looker, now part of Google. He’s currently figuring out how more users can get at the data they need to do their jobs better at Omni.