As someone who writes and speaks about customer support and leveraging data to do customer support better, I’ve had this article dropped into at least two of my Slack channels. Folks get a sense of mirth, I suspect, from needling me with articles and arguments that run contrary to the sorts of things I write about and try to persuade people of.
Yes, I will admit that I found this piece hard to swallow at first blush. Opening with…
Here’s a secret from the support team at Highrise. Customer support metrics make us feel icky.
… is a guaranteed burr in my side. Arguing against measurement from emotional premises?
So I read it, and I read it again. And I’ve spent some time thinking about it, and there are some good points to be had here. You should read it, too. Specifically the piece around finding value in qualitative data, in spending time listening to your customers – this is key to running an excellent support operation.
(If you’re just starting to think about leveraging qualitative data in your customer support organization, I’d recommend my earlier posts You’re Already Interviewing Your Customers and Research in the Right Order: When to Interview)
There are arguments in Gallo’s piece that I am absolutely sympathetic to – the idea that algorithms and other data mechanisms are inherently subject to human morality and should be treated as interpretative acts, not as pure fact. Thinking about customer support as a way to put yourself out of a job by improving the product – sure, I agree with the big idea there.
The two pieces where I find myself, even after a few reads, in disagreement with Chris:
1.) He’s discussing metrics as though they are ends, when they are means.
2.) He’s treating customer conversations as though they are the exclusive source of customer communication.
Let’s suss these out.
I’ve written in a zoomed-out way about my philosophy around metrics and measurement in the past: Metrics, Means and Maps.
Chris says, as a way of undercutting the value of data work in customer support:
Almost all data is built on biases and judgement. Because humans are deciding what to measure, how to measure, and why to measure.
…whereas, in my view, this is the very strength of measurement. Of course data is built on human judgment; that is why we have it. The first role of measurement of any kind, especially measurement around human behavior in a marketplace, is to reduce complexity to a manageable level.
I recognize that claim could be controversial. Do I really think measurements like temperature, or length, are grounded in being complexity reduction mechanisms?
Consider the purpose of the Celsius scale, or centimeters. We need these mechanisms because they reduce an otherwise massively complex thing (heat, the physical world) into a smaller chunk, so that we can then use that chunk to answer questions.
If I asked, “Should I wear a jacket?” explaining to me the source of and current level of kinetic activity in our general region of the globe would not be helpful in answering that question.
So, we intentionally reduce the complexity as a means: we use the one number (say, 2 degrees; yes, wear your coat) to answer the question (our end), even though it isn’t the full picture.
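To make the point concrete, here’s a toy sketch (the 10-degree threshold is my own invented assumption, not a claim about weather): the single number is the means, and the decision it supports is the end.

```python
# A metric (temperature in Celsius) reduces a massively complex system
# (kinetic activity in our region of the globe) to one number we can
# actually use to answer a question.

def should_wear_jacket(temp_celsius: float, threshold: float = 10.0) -> bool:
    """The metric is a means; the jacket decision is the end.

    The threshold is an arbitrary illustrative choice.
    """
    return temp_celsius < threshold

print(should_wear_jacket(2.0))   # 2 degrees -> True: yes, wear your coat
print(should_wear_jacket(24.0))  # a warm day -> False
```

The function never sees the full physics, and it doesn’t need to — that deliberate loss of detail is exactly what makes the number useful.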
Metrics, measurements, are intentionally reductive. They are reductive as a result of intentional human choice and design, because only by thinking about big ideas in smaller terms are we able to apply those ideas toward our daily work.
We don’t bother saying, “Temperature is the result of human bias and judgment,” because that bias and judgment is a natural part of the concept. Measurement is, by definition, a result of human bias and judgment.
Our businesses, and our relationship with our customers, are also massively complicated things, and only by considering them through intentionally reductive lenses can we find insights.
In this discussion it’s important for us to remember that any metric we create to further our understanding must be considered as a means to a larger end. We measure customer satisfaction because that simple survey can help point to a bigger, more complicated, unmeasurable truth: Are our customers happy?
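As a sketch of what that reduction looks like in practice (this is a generic CSAT-style calculation, not any particular survey vendor’s formula; the 1–5 scale and the “4 or above counts as satisfied” cutoff are illustrative assumptions):

```python
# A pile of individual survey responses gets reduced to one number.
# That number is a means: a pointer toward the bigger, unmeasurable
# question, "are our customers happy?" -- not the end in itself.

def csat_score(ratings: list[int], satisfied_threshold: int = 4) -> float:
    """Percent of responses at or above the threshold on a 1-5 scale."""
    if not ratings:
        raise ValueError("no survey responses yet")
    satisfied = sum(1 for r in ratings if r >= satisfied_threshold)
    return 100 * satisfied / len(ratings)

print(round(csat_score([5, 4, 3, 5, 2, 4]), 1))  # -> 66.7
```

Note everything the single number throws away: who answered, why the 2s were 2s, what the 3 almost said. That’s by design — and it’s why the score can only ever point at the truth, not be it.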
This is true for all departments: you don’t measure the ROI of your ad spend because the ROI itself is the important piece. You measure the ROI of your ad spend to better understand the financial health of your business and its advertising efforts.
The classic horror stories of Comcast et al are a great illustration of treating metrics as ends, rather than as means: giving call center employees a mandated number of calls per hour, with a certain average call time, and then leaning on those metrics and hoping for success.
The error is in treating those metrics as ends rather than as means. Yes, a great call center will likely have a certain number of calls per hour, with a certain average call time, but those metrics are pointing to something bigger and more complex: customer happiness.
Pushing on the metric alone in pursuit of the bigger goal feels incoherent; it would be like noticing that all of the best students in your class sit in the front row, so to improve the grades of the low performers, you mandate that they also sit in the front row. It’s mistaking a metric (row placement) for a much larger and more complex issue (academic horsepower).
Measuring your performance is important. How else will you know if you’re improving? Choosing the right measurement for you, your company, and your goals is also important. Ensuring that you don’t lose sight of the larger goals, above and beyond your metrics, is also key to success and happiness.
That’s it for discussing metrics as though they are ends, when they are means. Let’s move on to treating customer conversations as though they are the exclusive source of customer communication.
Your customers are talking to you. They’re talking to you on Twitter, they’re talking to you in emails and live chat, maybe they’re literally talking to you on the phone.
Those aren’t the only ways that our customers communicate with us: they send us messages with their behavior, with their purchasing habits, and with their departure from our services. They communicate with us by never using the search bar on our documentation. They communicate with us by clicking on non-clickable homepage images. These are all important and valuable pieces of data that our customers are providing us.
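Here’s a hypothetical sketch of listening to that kind of silent signal (the event names and fields are invented for illustration, not drawn from any real analytics tool):

```python
# Customers "talk" through behavior. Each click on an element that
# does nothing is a customer silently telling us they expected
# something to happen there.

from collections import Counter

# Hypothetical event log from a web analytics pipeline.
events = [
    {"type": "click", "target": "hero_image"},      # non-clickable image
    {"type": "click", "target": "signup_button"},
    {"type": "click", "target": "hero_image"},
    {"type": "search", "target": "docs_search_bar"},
]

NON_CLICKABLE = {"hero_image"}

dead_clicks = Counter(
    e["target"] for e in events
    if e["type"] == "click" and e["target"] in NON_CLICKABLE
)
print(dead_clicks)  # Counter({'hero_image': 2})
```

Nobody wrote in to say the homepage image looks clickable — but two customers just told us anyway.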
Active channels, though, capture only a small slice of our customers — by most accounts something like 4%. That means if you rely solely on active customer communication via Twitter, email, what have you, you’re missing out on what you could have learned from the other 96%. Can you imagine if a customer support rep at a company you admire told you they only listened to 4% of their customers? Can you imagine how much better their product could be?
This is what low-fidelity measurement of customer communication is about: listening better. To paint a portrait, you don’t start with the flecks of green in your subject’s eyes; you start with the background, with the broad strokes. In understanding our customers, we have to do the same – we start with the big pieces, the Google Analytics, the CSAT scores, and from there we drill into more specific, more granular information.
(More on low and high fidelity customer research here)
If we rely solely on what our customers actually, literally say to us, we’re limiting ourselves in two ways: we fail to understand the broader background context from which they’re speaking, and we only listen to 4% of our customers.
I agree with Chris that listening to our customers is a huge part of success for a SaaS company in 2016. I would push him further: we should listen to all of our customers, and seek out that communication in all its forms.
(If you’re looking for a more step-by-step way of thinking about this type of customer listening, I have a recorded talk and a series of posts from the inaugural SupConf that would be useful for you: Use the Data You Have)