Percentiles & Asymptotes – Shining a Light on Extremely Negative Customer Experience
Metrics are Just Proxies – and Outliers Matter in Customer Experience – #SharedServices
Two of my recent articles explored our experiments that gave rise to “frustration-free” as our standard anchor question (article here), and our ongoing tests of various ways to visually represent 5-point Likert scale responses beyond just Mean (article here). As we continue our shared services Customer/Employee Experience (CX/EX) journey, we increasingly derive value from monitoring both subjective and objective measures.
We openly acknowledge that, just like any Excel model is by definition insufficient – able to capture only a portion of the world it seeks to represent – similarly our customer experience metrics represent only one lens. The persistent temptation is to over-index on such measures and forget that metrics are merely proxies and aren’t the actual experience, in the same way that basic demographics aren’t a person (more in a future article on ways we work to prevent the proxy becoming “the thing”).
To start, we ensure that in measuring “service health” we capture both objective and subjective indicators. I’ve already written about some of our subjective (opinion- or feeling-driven) measures, such as How’s My Driving (HMD) surveys, focus groups, and unstructured written or audio feedback. Objective measures look at empirical aspects of a service – anything that can be directly measured. We welcome traditional indicators used by many shared service operations, such as Service Level Adherence (SL/SLA), Time to Resolve (TTR), and Turnaround Time (TAT). It’s of course common to track such measures with the Mean, or in some cases the Median (the advantages of Median over Mean in many contexts are well established – notably that the Median mitigates the impact of outliers). But because we want to hold ourselves to a higher bar for employee experience, we’re increasingly benchmarking against some of the more demanding measures used by technical teams here.
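The outlier-resistance point is easy to see with a worked example. A minimal sketch, using Python’s standard `statistics` module and illustrative (not real) turnaround times:

```python
import statistics

# Ten hypothetical Time-to-Resolve values in days.
# Nine routine cases, plus one extreme 14-day outlier.
ttr_days = [0.4, 0.5, 0.5, 0.6, 0.6, 0.7, 0.7, 0.8, 0.9, 14.0]

mean_ttr = statistics.mean(ttr_days)      # pulled way up by the single outlier
median_ttr = statistics.median(ttr_days)  # barely moved by it

print(f"Mean: {mean_ttr:.2f} days")    # Mean: 1.97 days
print(f"Median: {median_ttr:.2f} days")  # Median: 0.65 days
```

One 14-day case nearly quadruples the Mean relative to the typical transaction, while the Median still reflects the experience most customers actually had – which is exactly why the Median alone can hide a badly served minority.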
One of my favorites is a group of measures referred to collectively as “Top Percentile” (referred to more succinctly by their degree of severity, such as TP90 or TP99, which are simply a service’s performance at the 90th or 99th percentile). Being percentile measures, these do a better job of holding us responsible for high variation in performance.
To illustrate, assume that one of your services delivers 100 transactions in a month, and you sort those transactions by how long they took to complete, from fastest to slowest. Find the 90th transaction in that list – say it took 2.09 days to complete – and you have your TP90 score for the month: 2.09 days. Do this for a few consecutive months and you quickly see how much harder a performance bar it is to hit TP90 or TP99 than TP50 (which is the same thing as the Median). A service that consistently hits a Median/TP50 of perhaps 0.5 days will often find itself staring at a TP90 of several days, and a TP99 of a week or longer.
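The sort-and-pick calculation above can be sketched in a few lines using the simple nearest-rank method (the function name and sample data below are illustrative; monitoring systems often use interpolated percentiles instead, which give slightly different values):

```python
import math

def top_percentile(durations, p):
    """Value at the p-th percentile of durations, via the nearest-rank method."""
    ordered = sorted(durations)                # fastest to slowest
    rank = math.ceil(p / 100 * len(ordered))   # 1-indexed position in the list
    return ordered[rank - 1]

# 100 hypothetical completion times in days: most transactions are fast,
# but a long tail of them drags on far longer.
times = [0.5] * 80 + [2.0] * 15 + [7.0] * 5

print(top_percentile(times, 50))  # TP50 / Median: 0.5 days
print(top_percentile(times, 90))  # TP90: 2.0 days
print(top_percentile(times, 99))  # TP99: 7.0 days
```

Note how a service with a comfortable half-day Median can still be handing its worst-served customers a week-long wait – the pattern TP99 is designed to surface.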
When you start publishing metrics like that, you’re forced to ask yourself hard questions about why a segment of your customers is having such a horrible experience with your service. If your service is already world-class, then you get the enviable challenge of starting to look at TP99.9 or TP99.999. Nail those, and you quickly find yourself in the rarefied air of Six Sigma-level quality. TP90-type measures are at once aligned with and contrarian to some principles in Lean/Six Sigma. They are a close cousin of control charts – which use Statistical Process Control to signal when a process has gone out of control, often due to special cause rather than common cause variation. But they’re also something of an inversion of the Pareto principle – directing you to the long tail of the distribution (hopefully not asymptotic) rather than to the 20 percent of causes driving 80 percent of the variation.
What do TP90 measures do indisputably well? Leave a service owner with a stark reminder – a clear visual signal – that something in the customer experience has gone awry and needs fixing. For that reason they’re a powerful tool for any shared services operation wanting to shine a bright light on extremes in their service delivery.