7 lessons about success in digital product development

In my first years as a UX designer, about 15 years ago, there was one question I feared: “Can you prove that your design will actually result in a good user experience?” I did not yet know how to prove that my work was a success. My instincts told me my design would smoothly guide users through whatever they needed from the product, but I could not hand over any hard evidence.

The best I could do was to test a prototype with users; a time-consuming undertaking that most of the time did not deliver hard evidence either, but would uncover flaws in my design that called for a next design iteration. To be honest: I would rather trust my instinct telling me I had done a good job than face reality and discover I should have done better. Because that was how it felt: whenever a design flaw came to light, I thought I had failed as a designer. And I soooo needed that pat on my back.

I do not fear those questions anymore, and I have even started asking them myself. Because there is no shame, but a lot of value, in uncovering design flaws. And time spent on testing and experimenting is not wasted when it saves you from building the wrong thing.

Lesson 1: There is no such thing as the perfect design

Eventually I learned that designing (and building) complex digital products is a process of learning just as much as creating. There is no such thing as the perfect design; there is only a solid process to continuously improve a product’s user experience. And that actually applies to digital product development in general.

Question: When you’re in a company developing a digital product or service, when do you feel you are doing a good job? Is your success about:

  • the lines of code written,
  • story points accomplished,
  • product features added,
  • deadlines met,
  • …?

Or is it about:

  • revenue increased,
  • users’ time saved,
  • customer satisfaction improved,
  • cost of ownership lowered,
  • …?

If your answer falls into the first category, I hope you have been feeling successful lately. But even if you have, chances are that the product you’ve been working on has not been successful at all. The first category consists of output parameters: metrics of what you have produced, not of the value your work delivered. Measuring output instead of outcome is not wrong, but it can be misleading. I’ve seen well-oiled teams successfully burn through a backlog at a dizzying pace, absolutely convinced they were on top of their game, only to discover 6 months later that the problem they thought they were solving was not a big issue for their customers at all. Great work, but no value.

The second category, the outcomes, can be measured as well, but not as easily as the first. Outcomes represent the value delivered by your work on the product, for the business, the customer or the user. And that is what you as a developer, designer or manager of digital products should be looking at to know whether you are successful: did your work result in the value as intended?

Lesson 2: Separate output from outcome

It is not wrong to keep track of your production with output parameters; it gives you solid insight into your development process and helps you and your team monitor, tweak and optimize. Output parameters do in fact impact outcome in many ways (just think of the number of bugs found). But meeting your targets on output parameters does not guarantee a positive outcome for the business, customer or user, whereas missing those targets will usually have a negative effect on the outcome.

It is best to have data on both output and outcome. But what should you be measuring? On the output side, the DORA metrics (deployment frequency, lead time for changes, change failure rate and time to restore service) give a good indication of your product team’s performance; the sketch below shows how they could be derived from deployment records. But measuring success on the outcome side is different. To know what you should be measuring, you first need to know what you are trying to accomplish for the business, customer or user. And this is where things often get murky, because: why are we actually building this thing?
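
As an illustration, here is a minimal sketch of how the four DORA metrics could be computed from a list of deployment records. The record structure and field names are assumptions for this example, not a standard API; most CI/CD platforms can export something similar.

```python
from datetime import datetime

# Hypothetical deployment records; the field names are invented for this sketch.
deployments = [
    {"committed_at": datetime(2024, 5, 1, 9), "deployed_at": datetime(2024, 5, 2, 14), "failed": False},
    {"committed_at": datetime(2024, 5, 3, 10), "deployed_at": datetime(2024, 5, 3, 16), "failed": True,
     "restored_at": datetime(2024, 5, 3, 18)},
    {"committed_at": datetime(2024, 5, 6, 8), "deployed_at": datetime(2024, 5, 7, 11), "failed": False},
]

period_days = 7  # length of the observation window

# Deployment frequency: deployments per day over the window.
deployment_frequency = len(deployments) / period_days

# Lead time for changes: average time from commit to deploy, in hours.
lead_time_hours = sum((d["deployed_at"] - d["committed_at"]).total_seconds()
                      for d in deployments) / len(deployments) / 3600

# Change failure rate: share of deployments that caused a failure.
change_failure_rate = sum(d["failed"] for d in deployments) / len(deployments)

# Time to restore service: average time from failed deploy to recovery, in hours.
failures = [d for d in deployments if d["failed"]]
time_to_restore_hours = (sum((d["restored_at"] - d["deployed_at"]).total_seconds()
                             for d in failures) / len(failures) / 3600) if failures else 0.0

print(f"Deployments/day: {deployment_frequency:.2f}, lead time: {lead_time_hours:.1f} h, "
      f"failure rate: {change_failure_rate:.0%}, time to restore: {time_to_restore_hours:.1f} h")
```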

Lesson 3: Make the business everybody’s business

At a lot of companies I encounter a disconnect between the business people and the people developing the product or service. I’ve often seen one of these things happening:

  • a lack of vision and/or strategy on the business side, leaving it up to the development team (and not getting the right thing built);
  • a lack of communication between business and development, both being frustrated (and not getting the right thing built);
  • a lack of understanding between business and development, leading to a misinterpretation of the business’ intent (and not getting the right thing built);
  • a lack of strategic focus, with the business prioritizing everything that comes up and seems important (and not getting the right thing built).

Developing a product or service ultimately serves a function for the customer and user, but also should add value for the company developing it. Product teams should know what success looks like for both the customers and users as well as for the company, and how their work contributes. How else could you expect them to make the right decisions?

A lot has been written about how to bridge the gap between business and development. An excellent read on this is “Escaping the Build Trap”, by Melissa Perri. In her book she stresses the importance of aligning strategy throughout the company, in order to keep developing your product effectively towards delivering the intended value. She gives the example of having three different recurring meetings to review progress towards strategic intents and to make strategic decisions on product level:

  • Business review: Financial outcomes like revenues and costs, progress towards strategic intents and how product initiatives are contributing to this progress (and adjust product strategy accordingly);
  • Product initiative review: Progress made on the initiatives and how experiments and options are contributing to the initiative (and adjust initiative strategy accordingly);
  • Release review: Functionality that will be shipped, success metrics and roadmap updates (so marketing, sales and executive teams are aware).

Not all strategic decisions will be made in these meetings. But they do help to keep everybody in the company aligned strategically.

Lesson 4: Focus on value instead of features

Products deliver value for the company by delivering value to customers and users. So, before thinking of any features, product teams should be exploring and answering these questions:

  • Where is the value for your customers and users (fast delivery, better service, …)?
  • Where is the value for your business (increase revenue, reduce costs, …)?

You want to measure this value in order to determine how successful your product is and to know whether your efforts are paying off. The vision and strategy for how the company will deliver this value should be shared throughout the company, including with the product team, so that product goals and initiatives can be aligned with company goals and strategy. Strategy maps and roadmaps help here, at both company and product level (I will cover that in another blog, so stay tuned).

There are many ways to measure the value of a product; which ones are most useful depends on the specific goals and objectives of the product. Some common examples of metrics used to measure the success of a product are (see the sketch after this list for how a couple of them could be computed):

  • User Engagement: Metrics such as active users, time spent on the product, and frequency of use can be used to measure how engaged users are with the product.
  • Conversion Rates: Measuring the number of users who convert from free to paid plans or the percentage of website visitors who become customers can help determine if the product is delivering value.
  • Customer Satisfaction: Feedback from users, such as Net Promoter Scores or surveys, can provide insights into how satisfied customers are with the product.
  • Business Metrics: Revenue, profit margins, and other financial metrics can help determine if the product is delivering value to the business.
  • User Retention: Measuring the number of users who return to the product over time can help assess the stickiness of the product and how well it meets user needs.
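
To make this tangible, here is a minimal sketch of how two of these metrics, conversion and retention, could be derived from a raw event log. The event format and field names are invented for this example; in practice an analytics tool would compute these for you.

```python
from datetime import date

# Hypothetical event log: (user_id, event_name, date). Invented for this sketch.
events = [
    ("u1", "signup",   date(2024, 5, 1)),
    ("u1", "purchase", date(2024, 5, 2)),
    ("u2", "signup",   date(2024, 5, 1)),
    ("u3", "signup",   date(2024, 5, 2)),
    ("u3", "session",  date(2024, 5, 9)),
]

signups    = {u for u, e, _ in events if e == "signup"}
purchasers = {u for u, e, _ in events if e == "purchase"}

# Conversion rate: share of signed-up users who became paying customers.
conversion_rate = len(signups & purchasers) / len(signups)

# 7-day retention: share of signed-up users active again 7+ days after signup.
signup_date = {u: d for u, e, d in events if e == "signup"}
retained = {u for u, e, d in events
            if e != "signup" and u in signup_date and (d - signup_date[u]).days >= 7}
retention_rate = len(retained) / len(signups)

print(f"Conversion: {conversion_rate:.0%}, 7-day retention: {retention_rate:.0%}")
```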

Lesson 5: Ask yourself what is preventing you from delivering (more of) this value

After settling on the value(s) the product should deliver, you can define success indicators for your product and figure out how to measure them. To move those numbers (like the ones mentioned in the examples above) in the right direction, you need to explore the problem or opportunity and discover solutions:

  • What is preventing us from delivering (more of) this value?
  • Which of these problems should we be solving first?
  • What should we measure to know the impact of a solution to this problem?

Let’s take an example: customer satisfaction is one of our core values and we are using Net Promoter Scores to measure it. We are seeing that our scores are dropping, so before thinking of measures, we start by looking for the cause: what is causing this lower customer satisfaction? Because in this example our customer is also our user, we conduct user research to find out. Our quantitative research shows a lower task completion rate since last quarter’s release. Subsequent qualitative research then points out that many users don’t understand the new filter feature that we added in that release.
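
As an aside, the Net Promoter Score itself is straightforward to compute: the percentage of promoters (scores 9–10) minus the percentage of detractors (scores 0–6). A minimal sketch with made-up survey responses:

```python
def net_promoter_score(scores):
    """NPS = % promoters (9-10) minus % detractors (0-6), on a 0-10 survey scale."""
    promoters  = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Made-up survey responses for the previous quarter and this one.
print(net_promoter_score([10, 9, 8, 7, 9, 10, 6]))  # previous quarter: ~42.9
print(net_promoter_score([9, 7, 5, 6, 8, 10, 4]))   # this quarter: ~-14.3
```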

Once we have targeted the problem, we can explore possible solutions:

  • What could be our options to solve the problem?
  • What are direct success indicators in our solution and how could we measure them?
  • What option emerges as our best bet, after preliminary experiments?

In our example, options to solve the filter problem could be to remove the filter altogether, to improve the usability of the feature, or to make filtering optional instead of mandatory. We run some A/B tests with these options and find that task completion increases most when the filter is removed. But this filter was introduced after previous research pointed out that users wanted it, so removing it might not be our best option after all. The usability improvement (a short tutorial explaining the filter feature to the user) also resulted in slightly higher task completion rates, but not as much as making the filtering optional instead of mandatory. That last option emerges as our best bet in our example.
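
When comparing task completion rates between variants like this, it is worth checking that the observed difference is more than noise. Below is a minimal sketch of a two-sided two-proportion z-test; the sample sizes and completion counts are made up for the example:

```python
from math import sqrt, erfc

def two_proportion_z_test(success_a, n_a, success_b, n_b):
    """Two-sided z-test for the difference between two completion rates."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided p-value, normal approximation
    return p_a, p_b, p_value

# Made-up numbers: control (mandatory filter) vs variant (optional filter).
p_a, p_b, p = two_proportion_z_test(success_a=312, n_a=500, success_b=356, n_b=500)
print(f"Control: {p_a:.0%}, variant: {p_b:.0%}, p-value: {p:.3f}")
```

With numbers like these, a p-value well below 0.05 suggests the improvement is unlikely to be a fluke; with small samples, a check like this guards against scaling a variant that only looked better by chance.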

By first discovering what is preventing us from delivering more value, and by experimenting with multiple solution options, we can be confident that the selected solution will effectively solve a real problem and therefore lead to an increase in value.

Lesson 6: Experiment to learn first and scale later

Running experiments might seem like a waste of resources, because it means throwing away the less optimal solutions. But as you’ve seen in our example, it also teaches us about users’ behavior and preferences, and it eliminates most of the risk of releasing the wrong thing. Still, we should keep the costs of these experiments as low as possible. Sometimes that means using paper prototypes or other ways to experiment with little or no coding. The most reliable experiments, though, are the ones conducted in a live production environment. Luckily, cloud technology enables us to run these live experiments more easily and to scale the right solution faster.
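
A common building block for such live experiments is deterministic user bucketing: hash each user into a stable variant, so they see the same experience on every visit. A minimal sketch; the experiment name and variant split are invented for this example:

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants=("control", "optional_filter")) -> str:
    """Deterministically map a user to an experiment variant via hashing."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)  # stable across sessions and servers
    return variants[bucket]

# The same user always lands in the same bucket for a given experiment.
print(assign_variant("user-42", "filter-optional-test"))
```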

To know whether you really are successful after scaling the right solution, you need to keep measuring:

  • Initiative-level success indicators, like Task Completion rates;
  • Product-level outcomes, like Customer Satisfaction;
  • Company-level results, like Revenue.

Results at company level are so-called lagging success indicators: it takes time to notice the effect of your product initiatives there, and they provide only indirect evidence of your success, because results at this level are affected by many other things. Outcomes at product level are also lagging indicators, but provide more direct evidence of your success in developing the product. Success indicators at initiative level, lastly, are the most direct way of measuring success, because they show whether your solution is working.

Lesson 7: Not wrong long

It is kind of scary to measure success, because you might find that your investment and work have not been paying off. Nobody likes hearing that, but let’s be realistic: the sooner you know you took a wrong turn, the sooner you can correct your course towards delivering value. Experimenting and measuring success doesn’t completely eliminate the risk of being wrong, but it does make sure you are not wrong long.