How to Determine a Winner in Solution Test Interviews

This is an installment in the Solution Test Interviews series.

Don’t use your gut to figure out which solution users preferred the most.

This introduces bias and reduces morale as different team members often have conflicting gut reactions.

Use a quantitative approach.

You can still determine a credible winner even without statistical significance.

Tally the results and compare them to the benchmarks listed below.

A great solution test creates several decision points.

The most important decision point focuses on the Primary Hypothesis.

To find a winning solution for the Primary Hypothesis, pick a consistent survey question to ask every user.

Don’t ask users directly “Would you use this [solution]?”

Users aim to please. So Yes/No questions often just get a “Yes” answer.

Instead, use one of these survey methods.

For other decision points, which I refer to as Secondary Hypotheses[1], determine the winners based on users’ reactions and by comparing and contrasting their comments.

Don’t ask a survey question about every decision point as it gets tedious.

Use one of the following methods to determine a winner for the Primary Hypothesis.

The strongest form of validation is the user who commits to your product.

In economics terms, we are seeking a signal of demand from the user.

A strong demand signal is paying Tesla $1,000 for the right to get on a wait list for a car that doesn’t exist yet.

A weak signal of demand is giving out your email address in order to access a new product.

Alberto Savoia, former Director of Engineering at Google, discusses getting demand signals in a video series based on his book, The Right It.

If a consensus of users gives you a strong demand signal, then you might have a winner.

The “Disappointment” question compares the user’s life with and without your solution.

It sorts users into these categories:

  • “Very Disappointed” → Strong demand

  • “Somewhat Disappointed” → Weak demand

  • “Not Disappointed” → No demand

To determine a winner, we’re looking for 40% or more of the users to respond “Very Disappointed”.
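If it helps to see the tally spelled out, here is a minimal Python sketch, with invented responses, that checks the 40% benchmark:

    from collections import Counter

    # Hypothetical responses to the "Disappointment" question from one solution test
    responses = [
        "Very Disappointed", "Somewhat Disappointed", "Very Disappointed",
        "Not Disappointed", "Very Disappointed",
    ]

    very_pct = Counter(responses)["Very Disappointed"] / len(responses) * 100

    print(f"Very Disappointed: {very_pct:.0f}%")         # 60% in this made-up sample
    print("Winner" if very_pct >= 40 else "No winner yet")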

I use the “Disappointment” question frequently since it’s simple to understand and administer.

More tips:

  • Watch out for a high “Not Disappointed” score of more than 20%. According to Sean Ellis, who pioneered this question, the “Not Disappointed” respondents won’t convert into long-term customers.

  • Most users answer “Somewhat Disappointed”. Ask what could be improved to switch their opinion to “Very Disappointed.”

  • The wording of the “Disappointment” question works for both individual features and entire products.

  • Note that Sean Ellis typically uses this question for working software. I typically use it after testing a solution prototype, and it still works for the purpose of determining demand. That said, working software will give a better signal but could take months and several engineers to produce.

  • Reference: https://firstround.com/review/how-superhuman-built-an-engine-to-find-product-market-fit/  

The Net Promoter Score indicates whether a user will recommend your solution.

The word-of-mouth referrals that come from recommendations are valuable to companies because they are free and authentic.

To determine a winner, we’re looking for an NPS of 50 or higher.

I use NPS mostly for my own presentations and webinars since I’m seeking these word-of-mouth recommendations.

More tips:

  • First, derive your NPS from the individual responses with an online NPS calculator, or tally it yourself as in the sketch after this list.

  • Next, look up the score to see how well you’ve done.

  • Try to get above 50. For my presentations and webinars, I strive to beat 70.
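If you’d rather tally it yourself, NPS is the percentage of promoters (scores of 9–10) minus the percentage of detractors (scores of 0–6); 7s and 8s are passives and don’t count either way. Here is a minimal Python sketch with invented scores:

    # Hypothetical 0-10 answers to "How likely are you to recommend this?"
    scores = [10, 9, 9, 8, 7, 10, 6, 9, 10, 3]

    promoters = sum(1 for s in scores if s >= 9)   # 9-10 are promoters
    detractors = sum(1 for s in scores if s <= 6)  # 0-6 are detractors
    nps = (promoters - detractors) / len(scores) * 100

    print(f"NPS: {nps:.0f}")  # 40 for this made-up sample; we want 50 or higher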

Note that not every product is recommendable. These include offerings, such as health condition apps or adult websites, that users want to be discreet about.

When demand signals, NPS, and the “Disappointment” question aren’t appropriate, the team can still determine a winner using a consensus of the users’ reactions to the prototype concepts.

To determine a winner, look for agreement from 80% or more of the users (e.g., 4 out of 5 users).

We don’t ask users directly for a Yes/No answer since they often just respond “Yes.”

Instead, have the team analyze the user testing notes and then agree on a yes/no preference for each user on each hypothesis.
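As a rough illustration, once the team has agreed on a yes/no per user for each hypothesis, the tally might look like this minimal Python sketch (the preferences below are invented):

    # Hypothetical team-agreed yes/no preferences: one entry per interviewed user
    preferences = {
        "Primary Hypothesis": ["yes", "yes", "yes", "no", "yes"],
        "Secondary: text vs email": ["yes", "no", "yes", "yes", "no"],
    }

    for hypothesis, votes in preferences.items():
        agreement = votes.count("yes") / len(votes) * 100
        verdict = "winner" if agreement >= 80 else "no consensus yet"
        print(f"{hypothesis}: {agreement:.0f}% agreement -> {verdict}")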

There is a lot of gray area in analyzing solution tests so you will need to apply common sense.

You might even be tempted to stop doing solution testing altogether if you’re not immediately successful.

Don’t give up.

As you conduct interviews every week, you will learn about your users, your market, your product and surprisingly, yourself.

Product Discovery is a process where the journey matters as much as the destination.


[1] The Secondary Hypotheses are lesser concepts such as specific user interface elements, styles of communication (text vs email), wording, imagery, etc.


Jim is a coach for Product Management leaders and teams in early stage startups, tech companies and Fortune 100 corporations.

Jim co-founded PowerReviews, which grew to 1,200+ clients and sold for $168 million. He product-managed and architected one of the Internet’s first ecommerce systems at Fogdog.com, which went public at a $450 million valuation.

These days, he coaches companies to find product-market fit and accelerate growth in digital health, financial services, ecommerce, internal platforms, machine learning, computer vision, energy infrastructure and more.

He graduated from Stanford University with a BS in Computer Science. He lectures in Product Management at the University of California at Berkeley.
