
I’m Giving Video Content a Try!

As y’all may recall, last year I was lucky enough to spend some time working with the fine folks at Locally Optimistic to produce and run some AMA content for them – they ended up being more similar to traditional interviews, but folks seemed to enjoy them!

You can find those all here!

These were well received, and generated a TON of insight for folks working in the data and analytics space – but I had a few things I wanted to try doing a little differently:

  • They could be more discoverable: it was tough to know which guests talked about what, and at about an hour each, they were a big bite of content if a viewer cared about only one topic – even with YouTube’s search function, it’s likely folks were leaving before the parts they were interested in arrived.
  • They needed a little more social support: I tweeted about each one, but different parts and points of the conversation probably warranted their own outreach.
  • The live format – where we’d schedule them, invite members of the community to join, and post afterward – was a bit tough to coordinate, and we never really got the community engagement during the calls that we had hoped for.

So, I’m putting together some videos that hopefully are a step in the right direction – I’ll chat with similar folks, luminaries in the data and analytics space, and then publish the entire conversation, but also smaller chunks (ideally one per topic) which can be posted separately so that folks who are only interested in, say, data career ladders, can easily find and watch only that piece.

I still absolutely have a lot to learn – both about being a data professional as well as producing and sharing video content! – but, I’m giving it a try! I’m also hoping to use this energy to help carry me into blogging more, once more – but that’s a perennial hope, isn’t it?

Without further ado, here is the first full-length conversation, with my friends Stephen and Emilie – I think you’re going to like it!

Cogitating on Return on Ad Spend – AKA ROAS

I’m still pretty new to this whole marketing thing: I’ve been a part of Automattic’s marketing efforts for just over a year, and I feel like I’m still learning: the pace of education hasn’t slowed down even a bit.

One of the things that was a real challenge for me was getting to understand the language of the work, especially given our interactions with a number of outside vendors and agencies: acronyms, shorthand, and unusual usages of otherwise common words are a huge part of the advertising world, and they serve many purposes.

The import of accessible language is probably something I should save for its own post: I think that, especially in a highly interdependent company like Automattic, opaque language, complex jargon, and inscrutable acronyms are more of a hindrance than a help, and in fact likely do us harm, given the way that we humans – myself included – want to feel smart and powerful, and can find it very attractive to nod along rather than ask hard questions.

If you’ve been following this blog for a little while, you know that measurement and the implications of measurement are things that I think about – here’s a piece about metrics generally.

(Here’s a slightly longer one where I take a bit of umbrage, such drama!)

My broad position on metrics is that they’re reductive – necessarily and usefully so – and that they need to be understood as means rather than as ends.

All that to say, we should also be careful not to treat our metrics as being less reductive than they really are, or to behave as though what we are measuring is simple, when in fact it is not simple at all.

Taking something complex and making it simple enough to be useful – that’s the essential core of all measurement. Taking something complex and acting like it is something simple is another thing entirely, and a very easy way to increase your overall Lifetime Error Rate.

This brings us to Return on Ad Spend, sometimes shortened to ROAS. Return on Ad Spend can be calculated like this:

    ROAS = R / S

…with R being revenue and S being spend. Generally the output is represented either by a ratio like 3:1, where for every dollar you spend on advertising, you get three dollars’ worth of revenue, or with a percentage – 3:1 would be represented as 300%.
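For the spreadsheet-averse, here’s that same arithmetic as a few lines of Python – a minimal sketch, where the function name and the example numbers are mine, purely for illustration:

    def roas(revenue, spend):
        """Return on Ad Spend: revenue divided by spend."""
        return revenue / spend

    ratio = roas(revenue=30_000, spend=10_000)
    print(f"{ratio:.1f}:1, or {ratio:.0%}")  # prints "3.0:1, or 300%"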

It looks pretty simple. It’s generally referred to as being very simple, or easy, that kind of thing. Which, well, it is, at least on the face of it.

(The rest of this post is about the sometimes hidden complexities of ROAS. If you want to learn more about using the metric in a tactical way, John at Ignite Visibility has a great write-up on how to calculate and break out ROAS, as well as some wrinkles about attribution, which I recommend if that’s what you’re looking for. Here’s a link.)

Let’s talk about this metric: ROAS. The name holds a lot of promise, right? Return on Ad Spend: something everyone who spends money on ads wants to learn, the dream of marketers everywhere. How much are we taking back in, for the amount we are putting out?

The trick of ROAS is that it comes with a set of assumptions built in: specifically, that the numbers we put in represent the whole of each of those categories. The trouble here is that there are only very specific parts of the marketing spend where that is a safe assumption: low-funnel tactics, especially for e-commerce companies shipping physical products.

In these situations, for these companies, ROAS tends to be a clean metric: you have a very clear picture of where you are spending money, and each transaction has a straightforward, static revenue.

For SaaS companies, though, ROAS can become much more complicated: imagine your company sells a single product, some type of Helpful Business Software, and it retails for $100 / year. If you run some numbers, you find that you spend on average $50 in ads to get a customer – this looks good, right? We can say we have 200% ROAS and call it a day.

Of course, one of the great advantages of having Data is that we are able to record it, and then see how it changes over time, and try to do the sorts of things in our business that move the needle in our desired direction.

For a SaaS company, two of the metrics that you live or die by are Customer Lifetime Value (sometimes called CLV or LTV) and the dreaded Churn Rate – astute readers will note that these two metrics are inextricably linked. Briefly: LTV is the amount of revenue that your business can expect to make from a given customer, and the dreaded Churn Rate is the share of your customers you expect to lose in a given period, represented as a percent, like: “Our Dreaded Churn Rate is a spooky 13%!”
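To see just how linked they are, here’s one common back-of-the-envelope model – a sketch under a big simplifying assumption (constant churn, no discounting), not how any particular finance team actually computes LTV:

    def simple_ltv(revenue_per_period, churn_rate):
        # With constant churn, the expected customer lifetime is roughly
        # 1 / churn_rate periods, so LTV is revenue per period over churn.
        return revenue_per_period / churn_rate

    # A $100/year product with 50% annual churn: customers stick around
    # about two years on average, for an expected LTV of $200.
    print(simple_ltv(100, 0.50))  # 200.0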

A savvy SaaS marketing analyst will use the expected lifetime value of a customer in the top of that fraction up there to determine Return on Ad Spend – for two great reasons. FIRST, because it is more accurate: if you’re looking to determine the total return, it makes more sense to use LTV than simply the ticket price. SECOND, because it will make her look better in her reporting.

Consider: for this same sale of our Helpful Business Software, our expected LTV isn’t $100, which is the annual cost of our product, but rather, $200. This doubles our ROAS. This is great news!

(It’s not really news at all though, right? We’re not actually improving either our ads or our product, we just used a more accurate number. Metrics are means!)

One wrinkle, though, is that now we’re not really using that equation above anymore – we’re using something more like:

    ROAS = LTV / S

If you’ve ever spent any time trying to calculate your customers’ lifetime value, you know that this has suddenly become a much more complicated metric.

What happens once we start bringing more complicated ingredients into our ROAS pie, things like LTV, is that ROAS moves from being a static sort of snapshot to a metric that depends on other parts of the business being successful.

In the above example, imagine your company has had a disastrous year, and your Dreaded Churn Rate has skyrocketed, driving your LTV down below the $50 you spend to acquire each customer (due to, let’s say, sweeping customer refunds and growing customer support costs) – now our ROAS is below 100%, even though literally nothing has changed on the advertising side. In this situation, ROAS becomes a larger aggregate metric, telling us something about the business at large.
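To make that dependency concrete, here’s a small sketch continuing the toy numbers from above – ad spend held perfectly constant at $50 per customer, with made-up LTV figures:

    spend_per_customer = 50  # nothing changes on the advertising side

    for ltv in (200, 100, 40):  # good year, mediocre year, disastrous year
        print(f"LTV ${ltv}: ROAS {ltv / spend_per_customer:.0%}")
    # LTV $200: ROAS 400%
    # LTV $100: ROAS 200%
    # LTV $40: ROAS 80%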

This brings to mind a larger question: do we want ROAS to be a heartbeat metric, an indicator of the business overall? Or do we want it to be what it was about a thousand words ago, a simple snapshot of how our advertising efforts are going?

As we move away from direct retail e-commerce businesses into more complex companies, and up what’s called the advertising funnel, ROAS becomes additionally tricky, not because the equation itself becomes more complicated, but because we start to introduce uncertainty, and even worse than that, we introduce unequal uncertainty.

Generally, you know how much you’ve spent. This is true even for less measurable marketing efforts, things like event sponsorships, branding, and so forth. What you decide to include is a little bit of a wrinkle: do you include agency fees? Payroll?

The uncertainty comes into play on the revenue side, and this is why ROAS as a metric starts to break down as we move up the funnel: the lower part of your fraction, your spend, stays certain, while the upper part, the revenue, becomes increasingly uncertain – which makes the output more and more difficult to use in a reliable way.

This is a problem that crops up a lot in marketing metrics, and something I’ve been thinking on quite a lot: we often will compare or do arithmetic on numbers which have wildly different underlying levels of base uncertainty, sometimes to our detriment, maybe sometimes to our advantage.
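One way to make that unequal uncertainty visible is to stop treating attributed revenue as a single number. Here’s a quick Monte Carlo sketch – the distribution and every figure in it are invented for illustration, not a model of real attribution error:

    import random

    random.seed(0)

    spend = 10_000  # the spend side is known exactly

    # Up-funnel, attributed revenue is an estimate; treat it as uncertain.
    samples = sorted(
        random.gauss(mu=30_000, sigma=9_000) / spend for _ in range(10_000)
    )
    low, median, high = samples[500], samples[5_000], samples[9_500]
    print(f"ROAS 90% interval: {low:.0%} to {high:.0%} (median {median:.0%})")

The same arithmetic that gave us a crisp 300% down-funnel now gives us a wide range – and comparing the two as if they were equally solid is exactly the trap.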


I’ve been working with ROAS quite a lot, and trying to really get my teeth into it, and my brain around its under-the-surface complexity. For most businesses today, ROAS is useful, but it is not as simple as it looks.

This is where I ask you to add something in the comments! What metrics are stuck in your craw this week? Do you think I spend too much time trying to become certain about uncertainty? Let me know!

It’s Good that Data is Man Made

There’s a post from the folks at Highrise that’s been going around Customer Support and Success circles over the last couple of weeks: Data is Man Made, from Chris Gallo.

As someone who writes and speaks about customer support and leveraging data to do customer support better, I’ve had this article dropped to me in at least two Slack channels. Folks get a sense of mirth, I suspect, from needling me with articles and arguments that run contrary to the sorts of things I write about and try to be persuasive around.

Yes: I will admit that I found this piece hard to swallow at first blush. Opening with…

Here’s a secret from the support team at Highrise. Customer support metrics make us feel icky.

… is a guaranteed burr in my side. Arguing against measurement from emotional premises?

Continue reading “It’s Good that Data is Man Made”

You’re Already Interviewing Your Customers

Let’s start with a story!

At Automattic, we’re lucky enough to have some pretty sophisticated internal tracking and analysis tools. I was recently involved in a conversation with my friend and colleague Martin, about a particular slice of our customer base, whose churn is higher than we would have expected.

One of the ingredients for this particular group of customers was that they had, at some point in the seven days before leaving our services, interacted with our Happiness Engineers via our live chat support offering. Given the tools at our disposal, we were able to pull together a list of all of these customers – and with the churn rate being what it was, and the total userbase for that product what it was, the list was not terrifically long. Double digits.

Some of you out there know this story, right? What better way to find out what is going on with your customers (or former customers) than asking them outright? Put together some post-churn interviews, offer an Amazon gift card, learn something new and helpful about your product or service. This is a pretty standard flow for researchers – start with Big Data to identify a focus area, then zoom in with more qualitative methods, interviews, surveys, what I think of as Small Data.

In this case, rather than jump to the usual move, and at Martin’s suggestion, I pulled up all of the chat transcripts, and read through them, categorizing them along obvious lines, pulling out noteworthy quotes and common understandings (and misunderstandings!) – treating these last live chats with churned customers like they were transcribed interviews, because in a real way, that’s what they are.

I was really surprised how insightful and interesting these live chat sessions were, especially when read back-to-back-to-back like that. In fact, I did not even feel the need to follow up with any of the customers, the picture was clear enough from what they’d already communicated with us. I was honestly floored by this, and left wondering: how much good stuff is already in these transcripts? 

Moving forward, I’m including customer email and live chat review as an integral part of any user cohort research that I do – it will allow me to come to the interviews three steps ahead, with far better questions in mind, and a much sharper understanding of what their experience might have been like.

Especially with robust data slicing tools, being able to cut down through verticals, cohorts and purchase levels means that I’ll be able to see a ton of useful, relevant conversations with customers similar to those I’m looking to learn more about.

This is also the case with you and your customers.

Even if you don’t have a user research team, or even one researcher, your support team is interviewing your customers every day. Even without data slicing tools, you can do something as simple as a full-text search on your last month of email interactions and get something close to what you’re looking to learn.
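Here’s what that might look like in practice – a minimal sketch, where the file name, column names, and keywords are all placeholders for whatever your own support tool’s export looks like:

    import csv

    KEYWORDS = ("refund", "cancel", "confusing", "broken")

    # Hypothetical export -- most support tools can dump conversations to CSV.
    with open("chat_transcripts.csv", newline="") as f:
        for row in csv.DictReader(f):
            text = row["transcript"].lower()
            hits = [kw for kw in KEYWORDS if kw in text]
            if hits:
                print(row["customer_id"], "->", ", ".join(hits))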

If you enjoy a support tool that has a taxonomy system or plugs into your existing verticals and cohorts, all the better.

This Small Data on your customers – these conversations – already exists. You don’t need to generate new information, and you don’t need to sign up for third-party user testing.

You’ve heard me say it before, folks – there’s value in the data you have. Use it!

Full SupConf Video for Use the Data You Have

Hey folks! I’m really happy to share with you the full video of my recent talk at SupConf 2016, on leveraging your existing data to build value from your support organization – here it is!

I also created a supplementary page with a three-parter on how to execute the ideas I present in the talk – you can find that at https://s12k.com/supconf/ .

SupConf East is coming soon – as a speaker and an attendee, I cannot recommend it highly enough! You can get updates on the next SupConf event by signing up for the mailing list here.