In an ideal world, every time we wanted to validate a market opportunity or test a new solution, we'd go for the highest level of evidence rigor, using academic standards of experiment design and statistical certainty.
Alas, given the speed at which business moves and other constraints, this approach is often not feasible, or even prudent.
What follows is a real-life example that shows how an excessive focus on quantitative data can hinder progress and even create existential business threats.
TL;DR:
When time and resources are in limited supply, instead of obsessing over data, consider reaching out and talking to people directly.
The situation
Once upon a time I was hired to help rescue a failing product component. The software was primarily used by a big client that sent sales specialists to its retail units to help store managers improve store revenue. The product allowed the specialists to take notes about the visits, add follow-up tasks to their calendar, etc.
The component in question was a dashboard meant to be used when a specialist was sitting in front of a store manager to discuss sales performance. The idea was to help the specialist show the store manager how the store was doing and where it could improve, based on the most recent sales data and comparisons with the performance of similar stores.
The problem
After its launch, user adoption of the dashboard was extremely low (only 3% of the user base opened it on a weekly basis). The specialists would offer different excuses for continuing to rely on the manual spreadsheet they opened when talking to a store manager about sales revenue.
The big client was invested in having the specialists use the dashboard. Unlike the spreadsheet, which only contained days-old data, the dashboard provided near real-time sales performance data. But the software provider was running out of time to convince the client to continue paying for the component. Unless we found a way to improve user adoption, the client was not going to renew the add-on contract, which would represent a big loss for the provider.
The problem to be solved was clear: How can we make the dashboard more attractive to the specialists than their legacy spreadsheet? We needed to identify the appropriate changes to address product deficiencies and to create and promote delighters (elements that would exceed expectations) in order to convince more specialists to switch from the spreadsheet to the new tool.
The investigation
Because there wasn’t a lot of time to design and implement a solution, my first step consisted of talking to a few specialists in both groups: users and non-users. Very quickly, a few things became crystal clear:
The important and underserved need that the dashboard was meant to address was not in dispute. The specialists needed the data to improve the advice given to store managers.
The problem with the dashboard had nothing to do with the quality or usability of the data visualizations and charts provided. In fact, the specialists freely admitted that the dashboard was far superior to the spreadsheet both in data freshness and look-and-feel.
The real deficiency was in finding the right chart quickly. While the manual spreadsheet had older data and less powerful charts, at least it was organized in an intuitive manner. The dashboard, on the other hand, had a plethora of tiles to choose from, many with confusing or misleading names. With a tight schedule, specialists didn’t want to waste time looking for the tile that contained the right chart to show to a busy store manager.
Voilà! The root cause of the problem had nothing to do with missing features or an unappealing design. Navigating the dashboard was like trying to find your way to the living room in a house with a confusing layout. The path to the destination was convoluted and non-intuitive; no wonder users favored the spreadsheet with its well-organized tabs.
I asked a UX designer to create some low-fidelity wireframes and showed them to specialists who were visiting the office or available to speak over a conference call. The positive feedback from 12 members of the target audience confirmed we were headed in the right direction.
Having convinced the team (which, by the way, included a data scientist) that we had a good solution, the only thing left to do was to share the recommendation with top management: keep the data visualization as is, but reorganize the tiles and give them more meaningful names.
Rather than taking it as good news (there was a clear path to success, and it didn't involve investing in more expensive charts and graphs), the leadership team remained skeptical. "Where is the data?" they asked repeatedly. "How can we be certain that if we implement this change we will bring adoption and frequency of use to their target levels?"
When obsession with data becomes the enemy of progress
Senior management made it clear that they expected a fully validated hypothesis before giving the go-ahead to implement the proposed solution.
The issue wasn't that we couldn't get more data to support my initial findings. In fact, it would be possible to apply quantitative and qualitative methods all the way to the final solution:
We could expand the number of interviews and even invite all specialists to answer a survey, hoping for a sufficiently large number of responses while finding ways to prevent and account for selection and response bias.
We could conduct moderated and unmoderated user testing, either in person or remotely (again, trying to compensate for the significant selection and response bias we’d have to face given our specific set of circumstances).
We could validate the solution using a split (A/B) test, running the new tile configuration concurrently with the old one to see the impact on the metrics we were trying to improve (a rough sketch of what that comparison entails follows this list).
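For illustration only, here is a minimal sketch of the kind of comparison such a split test implies, using a standard two-proportion z-test. The variant labels and adoption counts are hypothetical (loosely anchored to the 3% weekly adoption figure mentioned earlier), not numbers from the actual project:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(adopters_a, users_a, adopters_b, users_b):
    """Two-sided z-test comparing weekly adoption rates of two dashboard variants."""
    p_a = adopters_a / users_a
    p_b = adopters_b / users_b
    # Pooled rate under the null hypothesis that both variants perform the same.
    p_pooled = (adopters_a + adopters_b) / (users_a + users_b)
    se = sqrt(p_pooled * (1 - p_pooled) * (1 / users_a + 1 / users_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_a, p_b, z, p_value

# Hypothetical numbers: old tile layout vs. reorganized, renamed tiles.
p_a, p_b, z, p_value = two_proportion_z_test(
    adopters_a=15, users_a=500,   # ~3% weekly adoption with the old layout
    adopters_b=45, users_b=500,   # ~9% weekly adoption with the new layout
)
print(f"old: {p_a:.1%}, new: {p_b:.1%}, z = {z:.2f}, p = {p_value:.4f}")
```

Even this simple version shows why the "full rigor" route was unrealistic for us: it assumes we could expose comparable groups of specialists to both layouts and wait long enough to accumulate meaningful adoption counts, which the contract timeline didn't allow.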
There were several obstacles to granting the leadership team's wishes, though. We only had a few months to launch a solution and collect enough data before the renewal date to prove to the client that the dashboard component was worth paying for. On top of the time pressure, the audience was particularly hard to reach: the specialists were constantly traveling and had little availability or incentive to answer a call or complete a survey.
We clearly couldn't afford to do all the "right things". But we had one thing on our side: we weren't facing an overly complicated or time-consuming user activity that required multiple rounds of usability testing to get right.
How much data do you truly need to confirm why no one uses a kitchen drawer that can’t be fully opened because it's blocked by the handle of the drawer next to it?
(Image: @_maehanson)
In the end, the decision-makers only agreed to move forward with what they considered “insufficient data” when it became clear that the window of opportunity to act was about to close. (We did, in fact, have the three pieces of information we needed: the jobs the customers were trying to get done with our product, the outcomes they were trying to achieve, and the constraints that were preventing them from adopting the new solution. That information just didn’t come packaged in a nice spreadsheet like the ones the executives were used to getting.)
Quantitative data is not a panacea
Trying to improve performance without appropriate measures (in the example above, user adoption and frequency of use) is like shooting at targets in the dark. You may hit a few, but your score won’t be anything to brag about because you won’t know how to adjust your aim.
In two decades working for all kinds of businesses, from Fortune 500 companies to tiny startups, I’ve witnessed time and again successes that couldn’t be projected on a numeric spreadsheet. To strike gold, sometimes we need to rely on curiosity and qualitative insights extracted from talking to and observing our target audience.