AI rich, insight poor
The challenge plaguing corporate generative AI efforts in customer service
AI-augmented applications are all the rage, but in reality most companies proclaiming their intention to “revolutionize” customer service via Generative AI won’t see a return on their investments.
I wrote in my previous post why I believe that’s the case:
The organizational impact of generative AI is largely dependent on robust data management practices capable of translating the information embedded in the organization’s operating systems into performance data.
To illustrate what I mean, consider the following example.
A company was struggling to meet service level agreements (SLAs) in contracts that guarantee certain levels of incident reaction time. To minimize losses from SLA complaints, management decided to invest in AI to scale ticket resolution. The company spent millions building a conversational AI application. While the solution achieved the stated goal, the operational costs—in particular those associated with regularly updating its knowledge library and conducting tests to ensure that the AI continues to perform accurately over time—quickly became prohibitive.
What would have prevented this undesired outcome? It starts with knowing how to prioritize an automation initiative. Eighty percent of the reported incidents were outside the scope of an SLA. For that 80%, the company only had to confirm that the incident was out of scope and instruct the customer to troubleshoot on their end.
Armed with this information, the team responsible for the conversational AI solution could have achieved the desired outcome—minimizing losses from SLA complaints—simply by building a triage system to screen out the 80% false alarms and escalate to technicians the 20% of tickets that truly needed assistance. The simplified solution would have required a fraction of the content originally ingested to fine-tune the language model, made the technicians’ workload manageable, and achieved the desired service metrics at a much lower cost.
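The triage logic described above is almost trivially simple once the right metric is known. Here is a minimal sketch, assuming hypothetical ticket fields (a `covered_by_sla` flag derived from the contract data); the field names and routing labels are illustrative, not taken from any real system:

```python
from dataclasses import dataclass

@dataclass
class Ticket:
    ticket_id: str
    covered_by_sla: bool  # hypothetical field: is this incident type within SLA scope?

def triage(ticket: Ticket) -> str:
    """Escalate SLA-covered tickets to technicians; route the rest to self-service."""
    if ticket.covered_by_sla:
        return "escalate_to_technician"
    return "self_service_troubleshooting"

# Roughly 80% of tickets fall outside SLA scope and never reach a technician.
tickets = [Ticket("T1", False), Ticket("T2", True), Ticket("T3", False)]
routes = [triage(t) for t in tickets]
```

The point is not the code itself but the prerequisite: the `covered_by_sla` flag only exists if the contract data has already been integrated with the ticketing data.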
This may seem like an obvious mistake easily avoidable by checking the relevant metrics. After all, “only 20% of the support tickets opened by our customers can result in an SLA infringement and need to be addressed by one of our technicians” is information critical for the health of a company’s operations. Why wasn’t it considered when choosing appropriate targets for AI automation?
The truth is that few companies have developed the kind of “industrial-strength analytics” required to piece together the knowledge hidden in disparate, unclear, conflicting sources. Even in large organizations with big investments in technology, it is common to see the IT department working hard to integrate internal data sources, with slow and underwhelming results.
As noted in the prior article, the culprit is a weak data strategy that encourages rogue data sets to propagate in silos. Here is one of the most common situations I’ve faced in over a decade of data science consulting work:
Client: “We want to use AI to prevent or minimize X.” (Where X can be anything from billing exceptions to avoidable support calls or erroneous location readings from IoT devices.)
Me: “OK, can you tell me the frequency at which X happens?”
Client: Blank stare followed by an admission that a reliable answer to my question wouldn’t be ready in time to inform any decisions about their urgent AI project.
Decision theory provided a formula for the value of information many decades ago. In an ideal world, we would be able to eliminate any uncertainty about a big investment decision by seeking all relevant information. In practice, we know that the cost of data acquisition may exceed its benefits. Still, if we are realistic about the uncertainty surrounding a business decision and the cost of making the wrong choice, we must care about collecting enough data to mitigate the risk of a bad investment.
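That formula is the expected value of perfect information (EVPI): the gap between the payoff you could expect if you knew the true state of the world before choosing, and the best you can do choosing under uncertainty. A minimal sketch, using purely illustrative probabilities and payoffs for a hypothetical build-versus-triage decision (none of these numbers come from the example above):

```python
# Expected value of perfect information (EVPI) with made-up numbers.
# States: does a full conversational AI build pay off ("high") or not ("low")?
states = {"high": 0.2, "low": 0.8}  # assumed probabilities

payoffs = {
    # (action, state): net payoff in $M — purely illustrative
    ("build_full_ai", "high"): 5.0,
    ("build_full_ai", "low"): -3.0,
    ("build_triage_only", "high"): 2.0,
    ("build_triage_only", "low"): 1.0,
}
actions = ["build_full_ai", "build_triage_only"]

# Best expected payoff when we must commit before learning the state:
ev_without_info = max(
    sum(p * payoffs[(a, s)] for s, p in states.items()) for a in actions
)
# Expected payoff if we could observe the state first, then choose:
ev_with_info = sum(
    p * max(payoffs[(a, s)] for a in actions) for s, p in states.items()
)
# EVPI bounds what it is rational to spend on gathering information.
evpi = ev_with_info - ev_without_info
```

With these assumed numbers, knowing the state first is worth $0.6M in expectation, so spending, say, a few weeks of analyst time to reduce that uncertainty would be clearly rational.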
Executives anxious to find proof cases showing that generative AI can deliver the promised outsize productivity gains may be tempted to make swift decisions for fear of waiting too long and falling behind competitors. But this is exactly when slowing down and investing in even partial uncertainty reduction can dramatically increase the odds of success.
In a pinch, this may require a labor-intensive ad hoc analysis to minimize the risk of misallocating limited resources. But hopefully the exercise will also lead to a better understanding of the value of integrating raw data (such as customer support logs) with other sources to turn it into information that guides decision making. And, from there, maybe more C-level executives will start to appreciate how improvements to data management enable the strategic use of operating data to minimize investment uncertainty and make the organization more responsive to market changes.
One can hope.