Case Study: Are your renewals a house of cards?
After a reorg, my team inherited what we thought was a high-performing product.
It was a young product that already had $2.4 million in Annual Recurring Revenue (ARR).
We had big-name clients and a flashy demo that opened door after door with new prospects.
However, as we talked to clients and looked into their use of the product (or lack thereof), it was clear this product was struggling.
On paper, the product was a success. In reality, it was a house of cards ready to fall during the upcoming renewal period.
Fast forward a year, and ARR was down nearly 80%, to $500k. Why did all of these clients fail to renew? How did we respond?
A deep dive revealed several fatal flaws. Here’s a case study on what happened.
The product
No need for a subscription
Cold start problem
Lack of reference customers
Flawed premise
Sold to the wrong customer type
No usage
Flawed pricing
Too much focus on the business buyer and not enough on the end user
Resolution → How we responded
The product
I’ll refer to the product as Product Review Insights. Imagine you’re an online retailer or brand and you're sitting on a treasure trove of product reviews written by your customers.
Imagine what you could do with this data. You could improve your product assortment. You could investigate negatively reviewed products. You could learn more about why certain products are highly rated and incorporate that knowledge into future products. It’s a way to leverage your initial investment in a Product Reviews Platform (our company’s first product).
With Product Review Insights, you could make better decisions gleaned from the whole product review data set.
No need for a subscription
Though Product Review Insights had a solid set of initial clients, it was starting to show warning signs.
The first signal that renewals would be a problem was a lack of repeat visits to the application. Users would look at the data once and not return. Turns out that the main graphs and analysis were based on large amounts of data. As such, the insights they gave weren't likely to change that often. If you have 20,000 reviews, the aggregate analysis won't change much until you get thousands more reviews, which could take months.
With subscription products, the product needs to be compelling enough for users to come back again and again.
So the first problem was selling a subscription to a product whose insights could be absorbed in one or two visits.
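To make the math behind this concrete, here's a back-of-the-envelope sketch in Python. The numbers are invented for illustration; they aren't from the actual product:

```python
# Hypothetical numbers illustrating how stable a large review aggregate is.
existing_count = 20_000   # reviews already in the analysis
existing_avg = 4.2        # current average star rating

new_count = 500           # a month's worth of new reviews
new_avg = 3.0             # even if the new reviews skew sharply negative

combined_avg = (existing_count * existing_avg + new_count * new_avg) / (
    existing_count + new_count
)
print(f"{existing_avg:.2f} -> {combined_avg:.2f}")  # 4.20 -> 4.17
```

A 0.03-star shift changes no headline insight, which is exactly why users had no reason to check back week after week.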
Cold start problem
Since the main insights relied on large amounts of data, this was not an application that could be sold to a new customer.
Unfortunately, our sales team had bundled Product Review Insights into deals for several new clients. With no review data, these clients wondered why they had paid for a product that required data they didn't have.
The customer had to wait until thousands of reviews had been generated by their own customers, which could take six months or longer.
This was obvious in hindsight but not to the marketing and sales folks, since the demo they used was full of data and promised insights on day one.
You could get deep insights quickly as long as you had lots of data. With little or no data, the algorithms weren’t useful.
The client could have gotten more value from our regular reporting tools, which were already included in the subscription to the main Product Reviews Platform service.
Lack of reference customers
Since Product Review Insights was developed so fast, it wasn’t properly tested with prospective customers. And the team didn’t develop any reference customers.
These reference customers could have provided valuable feedback and would have been a solid foundation of ARR that was likely to renew.
The other problem with a lack of reference customers is that it hampered our ability to move from the innovators and early adopters to the early majority.
Once the innovators and early adopters had purchased Product Review Insights, it took more work to convert the next wave of customers (the early majority), who were more skeptical.
The innovators and early adopters were friends of our company and had a solid layer of trust built up with us. The next wave of prospective clients needed an external form of confirmation before they would buy.
Typically, high-priced products like Product Review Insights have reference customers who fill this gap of trust.
These are companies that pay for the product, use it, and agree to publish a public case study affirming the benefits the vendor claims.
The previous Product Review Insights team had not developed these public reference customers.
Flawed premise
There were no reference customers because the product was created in a vacuum. My company ran a yearly customer summit. That year, the marketing department decided to take advantage of the hype around “Big Data”. So the CEO commissioned a data insights product from the Product and Engineering organization.
And the race was on to develop something, anything, that could be demoed at the client summit just a few months away. A desire by a CEO to create a new product is either deep insight or a flawed premise.
History decides.
In this instance, given the large quantity of non-renewals, it was clearly a flawed premise that drove this product’s birth. Riding a hype cycle is a great way to land initial sales but a bad way to get renewals.
Sold to the wrong customer type
To understand this point, you’ll need some background on the main customer types for our Product Reviews Platform company.
One type is a consumer-facing retailer (think Staples, Walmart).
Another type is a consumer brand (think Sony, LG). We (wrongly) believed any company with a lot of product review data would benefit from data insights. The product was shown at our client summit, and the crowd received it really well.
At face value, Product Review Insights could offer insights equally useful to retailers and brands. Since our company’s first clients were retailers, our network was strongest there. So the first customers ended up being retailers.
But the retailers did not find the insights from reviews useful.
This seems counter-intuitive. It feels like product review data should help retailers sell more. Certainly, a customer who reads reviews is more likely to purchase. But for an employee of a retailer, the insights fell flat.
On one hand, most product reviews are fairly positive and positive reviews lead to a higher purchase rate. So why would a retailer mess with that formula? No need to analyze positively reviewed products (about 90% of products).
On the other hand, negative reviews should be actionable. They were, sort of. Turns out that retailers have so many SKUs that when they encounter a negatively reviewed product, they discount the price, move it to the Clearance section, and stop buying it from the brand. The retailer doesn't manufacture the product, so they can't fix the problems mentioned in the negative reviews.
In fact, this slowdown in sales due to negative reviews would happen on its own: consumers would slowly stop buying products whose average rating trended downward. So negatively reviewed products took care of themselves. Again, no reason for the retailer to analyze the reviews.
Unfortunately, all of the clients of Product Review Insights at that time were retailers.
Simply put, the product had been sold to the wrong customer.
We learned the hard way that brands were a better fit for Product Review Insights. Many of them had already contracted with agencies to scrape the Internet for product reviews, aggregate them, and provide insights.
The brands told us they went several steps further than product reviews and scraped other sources of data on the Internet: social media posts, expert reviews, forums and more. We eventually found the right buyer but realized we only had one piece of a larger puzzle that the buyer wanted.
No usage
In the early days of onboarding clients, users would open Product Review Insights once or twice and never return.
These users would not spread the word to their colleagues even though additional logins were free.
No user adoption and no word-of-mouth referrals were clear signals of an impending non-renewal.
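In hindsight, this signal was cheap to detect. Here's a minimal sketch, assuming you can export per-account last-login timestamps and seat counts (the field names and the 30-day dormancy threshold are hypothetical):

```python
from datetime import datetime, timedelta

def renewal_risk(accounts, today, dormant_after=timedelta(days=30)):
    """Flag accounts that are dormant or never spread beyond one seat."""
    for account in accounts:
        dormant = today - account["last_login"] > dormant_after
        no_spread = account["active_seats"] <= 1  # no word-of-mouth adoption
        if dormant or no_spread:
            yield account["name"]

# Hypothetical account records.
accounts = [
    {"name": "Acme Retail", "last_login": datetime(2024, 1, 5), "active_seats": 1},
    {"name": "Globex", "last_login": datetime(2024, 3, 28), "active_seats": 6},
]
print(list(renewal_risk(accounts, today=datetime(2024, 4, 1))))  # ['Acme Retail']
```

Any account on that list deserved a check-in call months before the renewal conversation.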
Flawed pricing
When we inherited this product, we looked into how the $2.4 million in ARR was calculated. Which client had paid the most? What had influenced the price during the sales process?
It turns out that many of the sales of Product Review Insights took the following path. A client was up for renewal for our flagship Product Reviews Platform product. They wanted a price reduction (don't they always?!). To maintain the current ARR, the sales team tossed in extra products like Product Review Insights, then attributed a portion of the sale to our product. That attributed revenue was the main source of ARR for our team.
Certainly, there were clients interested in a standalone insights product, but the bundling methodology slowly eroded its perceived value within our company.
It was rarely sold as a standalone product since it needed such a large data set of reviews to be useful. And our insights product didn’t have a way to use reviews from other sources.
We hadn’t kept the discipline within the company to maintain separate pricing.
Too much focus on the business buyer and not enough on the end user
In business-facing products, most of the marketing and some of the product features are created explicitly for the buyer, not the end user.
For example, a Director of Ecommerce buys an ecommerce system, but other people actually use it. Customer service representatives, merchandisers, and buyers are the day-to-day users. Product Management teams know this dynamic and make sure to build features that connect with the business buyer even though that person will likely never touch the product as a user.
In the case of Product Review Insights, the majority of the features were built to be shown on stage at the client summit, which was attended exclusively by business buyers, not end users. After the summit, many business buyers purchased Product Review Insights. Then we would provision end users who had never really seen the product. This resulted in a general lack of usage. Even when these end users logged in, the insights, as described above, were built for a different type of customer.
Resolution → How we responded
As we watched the slow-motion train wreck of debookings and non-renewals, we interviewed customers and analyzed product usage.
We figured out the retailer vs. brand mistake and shifted sales and marketing efforts away from retailers and onto brands.
We created features to drive repeat usage, such as daily and weekly alerts for negatively reviewed products (investigate and fix) and superstar products (repurpose for positive social media posts).
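A rough sketch of that alert logic (the thresholds and data shapes are illustrative assumptions, not the production implementation):

```python
from statistics import mean

NEGATIVE_CUTOFF = 2.5    # triggers an investigate-and-fix alert
SUPERSTAR_CUTOFF = 4.7   # triggers a repurpose-for-social alert
MIN_NEW_REVIEWS = 5      # skip products with too little fresh signal

def daily_alerts(products):
    """products: iterable of dicts like {"sku": str, "new_ratings": [int, ...]}."""
    for product in products:
        ratings = product["new_ratings"]
        if len(ratings) < MIN_NEW_REVIEWS:
            continue
        avg = mean(ratings)
        if avg <= NEGATIVE_CUTOFF:
            yield ("investigate", product["sku"], round(avg, 2))
        elif avg >= SUPERSTAR_CUTOFF:
            yield ("promote", product["sku"], round(avg, 2))

print(list(daily_alerts([{"sku": "B-100", "new_ratings": [1, 2, 1, 3, 2, 2]}])))
# [('investigate', 'B-100', 1.83)]
```

The point wasn't the sophistication of the rule; it was giving end users a reason to open the product every day.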
We contracted with a social media monitoring company to include third-party data sources.
We built the product that should have been built in the first place.
We built back a sustainable, renewing set of clients.
These days, I use these learnings to help Product organizations avoid this roller coaster of wasted time and energy.
Reach out to me if you’ve encountered similar problems and want to detect and solve them quickly.
Jim coaches Product Management organizations in startups, growth stage companies and Fortune 100s.
He's a Silicon Valley founder with over two decades of experience including an IPO ($450 million) and a buyout ($168 million). These days, he coaches Product leaders and teams to find product-market fit and accelerate growth across a variety of industries and business models.
Jim graduated from Stanford University with a BS in Computer Science and currently lectures on Product Management at the University of California, Berkeley.