What a title, am I right?
It’s a big title for a big article – strap in, this post is a long one, a question-by-question teardown of a Customer Success survey from a major media company, along with actionable take-aways. How’s that for clickbait?
Let’s start with the first part: Customer Success is still a New Thing. It’s not clearly defined across companies or industries, and lots of folks are attaching those two words to a great many different job descriptions. That’s OK – like Growth Hacking before it, Customer Success will come to be known by what we do, by the practitioners and the pioneers.
Many software companies, especially Software as a Service (SaaS) companies, are beginning to place big bets on Customer Success. Makes sense, right? It’s evident that the old way of doing business (pay a huge sum up front, get some service over time) is going the way of the dodo, and the new way of doing business (pay a smaller sum each month or each year, expect excellent service at all times) is only growing.
Even more exciting, in my view, is that companies outside of the software realm are starting to dip a toe into Customer Success work. At this point, my love for the podcast medium is well known (I’ve started two podcasts, everybody knows that by now, right?).
Imagine my delight at hearing an invitation to take part in a customer survey at the opening of one of my favorite podcasts! It was a spoken invitation, to all listeners of the podcast – not just me, mind you.
(Full Disclosure, it was the singular John Dickerson’s Whistlestop. In private, among friends, I am a political news and history junkie.)
This strikes me as a great opportunity to take a live customer survey (a key tool for the growth hacker or customer success professional in 2016) and do a line-by-line teardown. Let’s see what they’re doing well, and what they might be able to improve.
Some context:
The Panoply podcast network is part of the broader Slate media family, and as such has two big forms of revenue: advertising and membership in the Slate Plus program. I’m guessing that these are their big ones, although they probably find some revenue in live events, appearances, and sponsored content (see The Message, co-produced with GE).
Panoply has a LOT of podcasts, and the survey covers ALL of them – this strikes me as a very tricky situation when it comes to segmentation; are all of your listeners the same? Are The Message consumers similar to Whistlestop consumers?
If I were working Customer Success at Panoply, there are two big things I’d be looking to understand from any customer surveys:
1.) To better understand my customers to provide better, more meaningful advertising
2.) To better understand what makes current Slate Plus subscribers unique – so I can more easily find other folks who are likely to subscribe to Slate Plus in the future.
Survey Distribution
I came across the survey ask during a routine podcast listen – I am a subscriber to Whistlestop, so I heard the request. Audio is a tricky medium for this sort of ask, and I’d be curious to see their listeners-to-surveys-completed conversion rate.
As a data wonk, it’s also a bummer that a simple URL shared via audio doesn’t offer any tracking – you don’t want to ask listeners to type in a long and unwieldy URL so you can identify which shows are channeling folks into the survey, but similarly, if the majority of your survey results are dominated by one or two podcast offerings, you’d want to know.
We can see that different episode note pages (1, 2, 3) have linked to the survey as well – this might be an opportunity to add tracking parameters, at least partially identifying who is providing the most traffic, but these URLs are all the bare survey URL, panoply.fm/survey – bummer.
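One low-cost fix, assuming Panoply controls those show-note links: tag each link with a source parameter before it goes out. A minimal sketch in Python – the parameter names and the `whistlestop` slug are my invention for illustration, not anything Panoply actually uses:

```python
from urllib.parse import urlencode

# The bare survey URL from the show notes
BASE = "http://panoply.fm/survey"

def tracked_url(show_slug: str) -> str:
    """Build a per-show link so analytics can attribute each visit."""
    params = urlencode({"utm_source": show_slug, "utm_medium": "shownotes"})
    return f"{BASE}?{params}"

print(tracked_url("whistlestop"))
# → http://panoply.fm/survey?utm_source=whistlestop&utm_medium=shownotes
```

Each show-note page links its own tagged URL, the survey tool records the query string, and suddenly "which shows drive the most survey traffic" is a free column in the export.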
Let’s take a look!
The Survey Itself
I like this first question – this is a key piece of information that will help with analysis down the line – how do listeners’ behavior, spending habits, etc., change with the number of podcasts they listen to? We can see how this might be helpful; if you find that folks who listen to more podcasts also tend to have a higher median income, that’s important information you can take to advertisers.
The trouble with this first question is that it is too vague. Remember that Panoply hosts dozens of different podcasts – what does this question want you to report here? How many podcast episodes you listen to, or how many individual shows? This could be referring to multiple different things, and pushes interpretation onto the reader. Especially in 2016, when many podcast listeners will subscribe to many podcasts and then listen back into the show’s archives, it’s entirely possible that an individual could listen to 3 shows but more than 7 episodes per week – so which box do they check?
Insofar as the question itself is vague, the end results will be unreliable – or, worse, will indicate an insight that is not accurate.
Question number two is fine – it seems like a question they’re asking to pass the information along to potential sponsors, which makes sense.
This is only the beginning of this question – like I said, Panoply has tons and tons of podcast shows. That’s part of what makes this question tricky – it ends up taking a lot of scrolling, and also a lot of recall. It’s alphabetical, but if you listen to lots of podcasts, this is a pretty cognitively heavy question.
If the goal of this question is to associate different podcasts’ listeners across different metrics, I’d recommend using better tracking rather than huge checkbox questions, as well as a write-in section rather than a list of check boxes. Give five text inputs and ask “What Panoply shows do you listen to now?” – yes, the final result will take some cleaning up for spelling, accuracy, etc., but it’ll be more likely to be completed, and will be a better reflection of what shows are front-of-mind for your customers – whereas this is more “what shows do you recognize before you abandon this question?”
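That cleanup step doesn’t have to be manual, either. Here’s a rough sketch of fuzzy-matching write-in answers back onto a canonical show list – the catalog below is a tiny made-up sample, and the 0.6 cutoff is a guess you’d want to tune:

```python
import difflib

# A small, made-up sample of the catalog; the real list would be Panoply's
CATALOG = ["Whistlestop", "Culture Gabfest", "Political Gabfest", "Trumpcast"]

def clean_show_name(raw: str):
    """Map a write-in answer onto the catalog, tolerating typos and casing."""
    matches = difflib.get_close_matches(raw.strip().title(), CATALOG, n=1, cutoff=0.6)
    return matches[0] if matches else None

print(clean_show_name("whistle stop"))      # → Whistlestop
print(clean_show_name("poltical gabfest"))  # → Political Gabfest
```

Anything the matcher can’t place gets flagged for a human to review, which is usually a short list – and a surprisingly interesting one, since it’s where the unexpected answers live.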
If you must have a big list like this, at least include an auto-completing search box.
I like this, it makes sense. Get a feeling for how your products overlap with other offerings in the marketplace. I bet this question will also produce some surprising insights – “Oh, it turns out that Culture listeners also really like Politics podcasts. Maybe we should do something in that intersection.”
Cool! This is the type of text input I love to see – it’s quick, it relies on human recall, and while the data may not be super clean right away, it’s going to help you understand your customers better than a long list of check boxes, especially in the broader context of this (long!) survey.
Number six I really like an awful lot – especially because it will help Panoply understand:
A.) What new customers are listening to, since future customers are likely to be like current new customers, and
B.) How listening habits change (or fail to change) as a customer matures, which will help them create and curate offerings that will not only bring in new customers but continue to add value to customers in the future.
Right, good one, love it. Standard stuff here; how did you hear about us, because we want more people like you to hear about us! Especially when combined with some of the other questions, this could be really, really helpful. It could allow for insights like, “Folks who report being new listeners and also report learning about new podcasts from existing podcasts are highly correlated with folks who love the Culture Gabfest – we should use that as a channel for new podcasts that that audience would enjoy!”
This question is a must-ask, but questions like it are not often as full of insight as you’d like. It’s like Henry Ford reportedly said: “If I asked the people what they wanted, they’d have said, ‘A faster horse.’ ”
#9, I answered, “Agriculture and hand crafts,” because I’m a maniac 🙂
#10 & #11 – Likert Scales! If you’ve been following this blog, you’ll know I’m an active fan of Likert scales, especially in the survey setting. They’re intuitive, most populations are fairly familiar with them already, and they allow a great density of information in a short period of time. The Likert Scale has been around and in active use since 1932 for a reason: it works!
These aren’t exactly your classic Likert scales, which typically measure sympathy or agreement. (Wikipedia!) You might call these more of a frequency scale, or a Likert-like frequency scale. Let’s not get pedantic; the format enjoys the benefits of a Likert scale, both for the survey-taker and the final analyst, so it gets a thumbs-up from me.
That being said, I’m curious what the desired output of these questions is – to find if there’s a place where folks listen to the radio but not to podcasts? It may be that the output here is more for audience understanding and marketing than for Customer Success operations; “You should know that 85% of Panoply customers listen to podcasts during their daily commute, Tesla advertising executive.”
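Whatever the destination, answers on a frequency scale like this code neatly into numbers for that kind of headline stat. A toy sketch – the labels and responses here are invented for illustration:

```python
from statistics import mean

# Hypothetical 5-point frequency scale, coded 1 (Never) to 5 (Always)
SCALE = {"Never": 1, "Rarely": 2, "Sometimes": 3, "Often": 4, "Always": 5}

# Imagined answers to "I listen to podcasts during my commute"
responses = ["Often", "Always", "Sometimes", "Often", "Never", "Always"]

# One average per scale item gives you a compact audience profile
score = round(mean(SCALE[r] for r in responses), 2)
print(score)  # → 3.67
```

That density – one number per item per respondent – is exactly why scale questions are such efficient survey real estate.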
#12, looks good, I like it, same sort of thing as the previous two. I am curious if folks often think of podcasts and music as interchangeable – are these goods that can easily substitute for one another? Maybe this question will help answer that.
#13 is interesting because it’s a direct result of the podcast medium – unlike, say, YouTube or Wistia video, a podcaster has no sense of how far into an episode her audience is listening. When I was hosting Trellis to Table, this was a cause of no small consternation for me; I could see the number of downloads, but not the depth of play.
I suspect this is part of what makes in-house listening apps (like NPR One) so attractive for content producers; it allows them to own and then leverage their own listening data, rather than letting a third party collect and warehouse it, as iTunes or Stitcher would.
#14 – Good question, and one I would not have considered. When you think about how many of Panoply’s podcasts are relatively time-sensitive (Culture and Political Gabfests, Trumpcast), knowing the lag between download and listen could be key to getting the right message to the right ears when it’s still relevant and valuable.
Even more so as these podcasts ramp up live events; you don’t want listeners to hear about a show in their town the night after the show goes on!
#15 – When we look at #15 along with #16 and our earlier questions #2 and #4, we can see that there is getting to be a lot of overlap here. Sometimes overlap is helpful; it can ensure that you get an accurate picture from a large number of participants who may be somewhat internally inconsistent in their replies, for instance. But in a survey like this one – direct-to-consumer and unpaid – it can add unnecessary bloat, dropping participation and completion numbers.
#17 & #18 – I would love to know the answers to these – especially considering how hard it is to click on a URL that someone reads to you in an audio file. Finding ways to successfully monetize podcast content is still the Wild West – I can’t think of the last time I visited a podcast’s social media site or website; am I an outlier on that front?
Ah, here we go! #19, #20 and #21 also speak to the challenges of advertising in the podcast medium. In an ideal world, this question can be combined with other questions about podcast usage to find some correlations; this podcast has listeners who really liked the Casper mattress offer, so let’s get them more stuff like that.
In my mind #21 will be leveraged by an innovative member of Panoply to push for new and revolutionary ways to find revenue that don’t rely on the whims of advertisers – “Nearly everyone skips our ads, it’s a sinking ship, we need to innovate or die!”
To be candid, #22 #23 and #24 are what spurred me to put together this teardown in the first place.
#22 is a very dense question, placed fairly late into the survey. It’s what you could call a three-dimensional question – it’s asking you to take the combination of two variables (media source and trait, e.g. TV and Trustworthy) and then assign that combination a value – creating a three-dimensional data point: media/trait/value.
You can see how this question could produce some really helpful output; understanding customer affinity and sympathy for different media could go a long way toward attracting advertisers and understanding the way your customers view the world.
All that being true, I’d be surprised if this question gets many answers; while we’re calling it one question it’s actually 25 numbers, so 25 clicks, and 25 thoughts. They’re drop-downs, which is better than pure text inputs, but that’s still a heady question that takes a moment to understand – I bet it gets skipped a lot.
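For the respondents who do push through, the analysis side is at least straightforward: store each answer long-form and pivot. A quick sketch with invented values – the media, traits, and scores below are made up:

```python
# Invented answers to the media/trait/value grid, stored long-form
rows = [
    ("TV", "Trustworthy", 3),
    ("TV", "Entertaining", 6),
    ("Podcasts", "Trustworthy", 6),
    ("Podcasts", "Entertaining", 7),
]

# Pivot into a media -> {trait: value} lookup table
pivot = {}
for media, trait, value in rows:
    pivot.setdefault(media, {})[trait] = value

print(pivot["Podcasts"])  # → {'Trustworthy': 6, 'Entertaining': 7}
```

The long-form storage is the important choice: it lets you aggregate across respondents, media, or traits later without re-shaping the raw data.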
These three questions could be designed together more intuitively – as they stand, they’re harder to answer than they need to be, and they’re inconsistent with one another. The 1–7 scale is unfamiliar and cognitively hard to parse: “Do I avoid advertisements on TV at a four, or more like a five?”
Part of the beauty of the 5-point Likert scale is that it’s immediately understandable; it’s neutral, a little yay, a lot yay, a little boo, or a lot boo. Boom. Easy, quick, next question. Adding extra points muddies that – it’s not clear to me how much the added resolution will help the end result, and it makes the question more confusing to answer.
When I say they’re inconsistent, I mean, between #22 and #23, the axes swap; the media are on the horizontal axis for #22 but then on the vertical axis for #23 and #24.
(also, having some 5-point questions and some 7-point questions means your final visualizations won’t look quite right next to each other – but that’s more a visualization quibble than a serious objection)
I like these questions a lot! I like them because they enable the folks at Panoply to engage in some Small Data efforts – if 80% of survey respondents say that they love the Casper Mattress ads, that opens up a whole new line of inquiry. You get to go see where Casper has been advertised, see if that correlates to what shows survey respondents say they listen to, and then go listen to the ad again.
This kind of research is good for Panoply but also means they can start supporting their advertising vendors – becoming a partner in research, and adding value both directions in the chain. “Here’s a white paper we put together with our findings about what our audience loves in podcast ads – you’ll like it!”
There are 35 total questions in the survey, but the last nine are all your standard demographics questions – gender, age, household income, the usual.
Conclusions & Take-Aways
I commend Panoply for this survey – it’s awesome to see this work being done, and for folks working in the podcast space, there are so many outstanding questions – in both senses of the word. Many of these questions are outstanding, as in valuable and productive, but many are also reflective of larger, long-standing questions facing the broader podcasting community (and even the larger maker community in general!)
We can see one of those outstanding issues evidenced clearly in the way this survey is being presented; a spoken word URL spelled out in audio. This is a bigger problem for all podcast advertising – URLs are hard to say, and audio is not clickable. With the majority of podcast listeners using a third-party listening app (iTunes, Stitcher, etc), getting a clickable link to your audience is very hard.
Asking customers to have the goodwill to go from a podcast to an unpaid survey URL – that’s even harder. I’d love to see the podcast download vs. survey completion numbers, especially compared against reports from advertisers who are seeking similar behavior.
From the very start, I’d recommend using more specific tracking whenever possible. The way that this survey is set up, Panoply is missing out on some really great potential segmentation. Assuming that their customers are all similar enough, that no podcast’s listeners will be different from another’s, is leaving a lot of nuance on the table.
Practically, the simplest way to do that would be to give each show its own survey URL that automatically redirects to the survey, wherever it’s hosted. That way I’d have gone to http://www.whistlestopbook.com/survey or whatever. Done properly, the analyst could then slice the final surveys by referring URL, helping to understand the differences across products/podcasts.
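Once those per-show URLs exist, the slicing itself is trivial. A sketch with a made-up completion log – whistlestopbook.com is the real show site mentioned above, but the other domain is invented for illustration:

```python
from collections import Counter
from urllib.parse import urlparse

# Imagined log of survey completions, tagged with the referring URL
completions = [
    "http://www.whistlestopbook.com/survey",
    "http://www.whistlestopbook.com/survey",
    "http://www.trumpcastshow.com/survey",  # hypothetical domain
]

# Count completions per referring show by hostname
by_show = Counter(urlparse(u).hostname for u in completions)
print(by_show.most_common())
```

From there, every other question in the survey can be cross-tabbed by show – which is exactly the segmentation the bare panoply.fm/survey URL gives up.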
After Question #6 (“How long have you been listening to podcasts?”), I’d love to see a question like, “What was the first podcast you listened to?” – if you offer a large number of possibly overlapping products, getting an understanding for which products are the gateway to your commercial ecosystem could be hugely helpful.
Especially when combined with the other insights, you could then create a more full picture of which podcasts are good for new listeners or recent new podcast folks, and which ones are better for more mature members of the audience.
With that insight, you can better direct your outside ad spend, as well as cross-promote in a smarter, and more effective, way.
When we look at some of the overlapping questions – #2, #4, #15, #16 – especially in an unpaid survey of this length, I’d try to trim those down, or combine them into fewer questions. At 35 questions this survey is already asking a lot of respondents, especially considering the challenge of getting folks to the survey URL in the first place.
If we recall, the two pieces I estimated we’d be looking to learn from this survey were:
1.) To better understand my customers to provide better, more meaningful advertising
2.) To better understand what makes current Slate Plus subscribers unique – so I can more easily find other folks who are likely to subscribe to Slate Plus in the future.
We certainly can start to pursue that first piece – there are a lot of good questions here, and especially when combined with one another, we can start to put together a full picture of the general, broad Panoply customer base. When we slice across some of the questions, we can also start to do some segmentation.
The second piece here, though, is totally untouched; note that there is not a single question about the Slate Plus program. Perhaps they’re phasing it out? Perhaps it’s not offered or available for all of their programs? I’m surprised to see there isn’t even a nod in the form of “Are you a Slate Plus subscriber?” type yes/no question.
All things considered, I think this survey was well done; there are a few places I’d have had different recommendations, but the final outcome will surely offer the folks at Panoply some more insight into their customer base – and surely a dump truck’s worth of additional questions!