While I’ve put it together so (I think) it makes sense on its own, this page’s main job is to serve as a general reference for learners in my online (Coursera) courses on Continuous Delivery and Hypothesis-Driven Development. After reading this page, you will be able to:
- Explain how agile teams can structure their overall flow around a product pipeline
- Identify the relevance and tools associated with each step on said pipeline
- Facilitate discussions about where a team should invest its time in improving its execution and how it will know whether those investments are working.
A generalized product pipeline looks something like this:
The items you see in black below each major area (Analytics & Inference, Continuous Design, Agile Dev., and Continuous Delivery) are the metrics I recommend you use to assess them. I find this framing helps teams think about the relationship between what everyone does and the outcomes they should be driving for their company. The following sections describe practices and tools teams can use to improve each area. ‘Hypothesis-Driven Development’ is an overarching framework that I find helps practitioners, particularly general managers, think about how to do all this well, and you’ll see it referenced in all the subsections below.
How do we release more features that have higher engagement? Or, put another way, how do we decrease the number of features that see low engagement and should be scrapped?
Here are a few extreme(ly funny) examples: 13 New Inventions That No One Asked For. With all due gravity, though, I see lots of smart, capable teams burn themselves out building an app that no one wants. Pretty much my reason for being here on the Internet is to help you avoid that.
Great design is mythologized, especially in retrospect, usually in a way that drastically oversimplifies what the team had to do to get there. As design thought leader Mike Monteiro says, design is a job. It’s work. It takes ongoing focus and discipline, usually with a lot of buy-in and participation across the team.
All that said, there are a few easy ways to minimize waste and maximize wins and ongoing improvement in this area. Unpacking the famous ‘double diamond’ point of view on design, you can avoid a lot of waste and make time to do more work in Continuous Design by moving from left to right across the hypothesis areas. Why? Essentially, if you build something for a user who doesn’t exist, you can’t use Lean Startup to test MVPs, and there’s no point in developing a highly usable interface for that nonexistent user.
For more depth on this, check out this practitioner’s guide: Hypothesis-Driven Development.
How do you know if it’s working? I think all teams should always be looking at the proportion of features (release content) that sees high engagement vs. the proportion that doesn’t. If you haven’t killed at least 20% of your new features, you may have a problem. No one gets it right all the time.
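As a sketch of what tracking that proportion might look like, here's a minimal example. The feature names, the engagement numbers, and the idea of a single `engagement_rate` per feature are all hypothetical; real teams would pull whatever engagement signal their analytics already defines.

```python
# Hypothetical data: each released feature paired with the fraction of
# active users who actually engaged with it after release.
def engagement_report(features, low_engagement_threshold=0.2):
    """Summarize what share of released features saw low engagement.

    `features` is a list of (name, engagement_rate) pairs; anything below
    the threshold is a candidate to kill.
    """
    low = [name for name, rate in features if rate < low_engagement_threshold]
    return {"low_engagement": low, "share_low": len(low) / len(features)}

report = engagement_report([
    ("saved-searches", 0.41),
    ("dark-mode", 0.35),
    ("pdf-export", 0.07),    # candidate to kill
    ("ai-summaries", 0.12),  # candidate to kill
])
print(f"{report['share_low']:.0%} of features saw low engagement")
```

The interesting number to watch over time is `share_low`: per the rule of thumb above, a team that never kills anything (a share near zero, release after release) is probably not testing its ideas honestly.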
How do we release more features in smaller batches?
Relative to the other areas of the pipeline, this area gets more than its fair share of a team’s attention. Easily a third of the questions I field from practitioners are some version of ‘How do I prioritize my backlog?’. It’s understandable: you do have to get software out the door, and nothing looks like progress quite like a bunch of new features (except for actual progress).
What does that progress look like? It looks like validated learning on the user behaviors powering the company’s business model. One particular practice I like for this is story mapping, where teams layer and sequence their user stories to maximize this:
How do we release faster with less downtime?
Through test and deploy automation, Amazon famously releases new code every 11.6 seconds. This capability is easy to observe in a team or company, and in my recent experience it’s a story of haves and have-nots. Teams with more continuous pipelines love it, and that keeps everyone focused on keeping the pipeline healthy; teams that have to contend with legacy processes and code struggle.
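One simple way to see which side of that divide a team is on is to measure its deploy cadence. The sketch below computes the average time between production deploys from a log of timestamps; the timestamps are made up, and in practice you'd pull them from your CI/CD tool rather than hard-code them.

```python
from datetime import datetime, timedelta

# Hypothetical release log: timestamps of successful production deploys.
deploys = [
    datetime(2024, 3, 1, 9, 30),
    datetime(2024, 3, 1, 14, 5),
    datetime(2024, 3, 4, 11, 0),
    datetime(2024, 3, 6, 16, 45),
]

# Mean interval between consecutive deploys: a rough proxy for how
# "continuous" the pipeline actually is. Amazon's 11.6 seconds is one
# (extreme) end of this scale.
gaps = [b - a for a, b in zip(deploys, deploys[1:])]
mean_gap = sum(gaps, timedelta()) / len(gaps)
print(f"Average time between deploys: {mean_gap}")
```

A team tracking this number week over week gets an honest picture of whether its investment in pipeline automation is paying off, independent of how busy everyone feels.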
Analytics & Inference
How do we purposefully make evidence-based decisions from one iteration to the next? How do we create a culture of experimentation where the team is more excited about results than deadlines?
Maybe this is overly dramatic, but I see this area of the pipeline as the crown jewel of a high-functioning pipeline and team. Partly, this is a matter of correlation vs. causation: few teams graduate to doing this well without strong practice in the other three areas, where:
a. the team has a disciplined approach to Continuous Design, which is helping them bring relevant, focused hypotheses into their experimentation pipeline
b. the team has a practice of Agile Dev. where they’ve found a sustainable pace and aren’t madly going from one deadline to another
c. they have a fairly continuous pipeline where they can release a lot and a manual or error-prone release process isn’t draining the team’s energy.