Category: marketing

Trombone Oil & Picking Good Problems

There are these three ideas that are coming together for me right now.

We’ve talked about the importance of novel intersections before – how as you explore different areas, texts, content, relationships, you find places where they approach the same problem in different ways, or you find a similar perspective being represented in unexpected ways across industries.

One of the best ways to drive innovation is to get out of the office, and we need to follow that same pattern when it comes to our information and research diets. We have to get out of the standard operating procedure sometimes, and cast a wide net, find other things that are interesting and engaging outside of our professional day-to-day.

(I think this is why we see such a strong correlation between arts and crafts and winning the Nobel!)

When I joined Disney, I read Bob Iger’s book. There are a number of useful takeaways in it (although it is a pretty classic business-guy book), but one rang out to me and has been hanging around in my mind since:

“My former boss Dan Burke once handed me a note that said: “Avoid getting into the business of manufacturing trombone oil. You may become the greatest trombone-oil manufacturer in the world, but in the end, the world only consumes a few quarts of trombone oil a year!” ”

Bob Iger, The Ride of a Lifetime

It was a little later that I first read the (now classic) Shreyas Doshi piece on the importance of not only identifying customer problems but also seeking to understand how those problems relate to one another.

After you’ve talked to a customer about a specific problem & possible solutions you could build, ask them to stack rank the problem being discussed vs. the other problems they are trying to solve for their business & org. This is where the real truth will emerge.

Shreyas Doshi

And there was the time that ol’ Brian Chesky scared me into learning about product marketing, which brought me to the very smart, very thoughtful, very valuable podcast and books of April Dunford (which I have recommended before and will recommend again!)

One of my biggest takeaways from Dunford’s Obviously Awesome (which was my Work Book of 2023, by the way!) was the importance of framing a product or solution within the broader context of your target customers or market – and being sensitive to the dynamic, shifting nature not just of the product you’re developing, but of the market itself, which can shift away from established, successful frames.

(I know classically we think about positioning as a skillset for product folks working with external customers, but I’ve started using Dunford’s positioning framework with internal platform teams, and it’s been really valuable!)

These three ideas live in the same neighborhood, one of special interest to product folks: Problem Assessment. The most important thing a healthy product organization does is ensure that it doesn’t build the wrong thing, and it’s easy to hyper-focus on a solution, on a product, and lose sight of the problems real people have – the ones you can help them solve.

It feels like on every team I talk with, someone has a story about working for months on a project, crunching to hit a deadline, and then watching the delivered product fail to attract any interest from the market. We want to avoid this!

When we chat with our customers, when we observe the platform landscape of our companies, there will always be things to improve, areas where we might deploy our resources and time. It’s important that we take the above lessons and leverage them to consider problems from a few different perspectives:

  • “Is this trombone oil?” (Assess business opportunity)
    • We want to consider, if we absolutely defeat the problem, if we build out the absolutely best possible solution and become the dominant player in that market, will that be … a big deal? Would it move the needle for our firm?
  • “How does this rank against other problems?” (Assess customer pain)
    • When we talk to our customers, do they consistently report that the problem at hand is more important, more urgent, or more painful than the broad landscape of other problems they have?
  • “Can we appropriately frame this problem?” (Assess market understanding)
    • Even if your firm has the product/engineering talent to build out an exceptional solution to a serious problem, do you know enough about your target market to bring the solution to them in a way that will communicate the value in terms and ways that resonate with that market?
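To make the three checks concrete, here’s a minimal sketch of scoring candidate problems against them. Every name, field, and threshold below is hypothetical, invented for illustration rather than taken from any real assessment process:

```python
from dataclasses import dataclass

@dataclass
class Problem:
    name: str
    market_size_usd: int        # rough annual addressable market ("is this trombone oil?")
    customer_rank: int          # median stack rank reported by customers (1 = most painful)
    framing_confidence: float   # 0..1: how well we understand the target market's language

def worth_pursuing(p: Problem,
                   min_market: int = 10_000_000,
                   max_rank: int = 3,
                   min_framing: float = 0.5) -> bool:
    """Apply the three assessments in order: business opportunity first,
    then customer pain, then our ability to frame the solution."""
    if p.market_size_usd < min_market:    # trombone oil: skip, however well we'd make it
        return False
    if p.customer_rank > max_rank:        # not near the top of customers' own stack rank
        return False
    return p.framing_confidence >= min_framing

trombone_oil = Problem("artisanal trombone oil", 500_000, 1, 0.9)
print(worth_pursuing(trombone_oil))  # False – the world only needs a few quarts a year
```

In practice the thresholds would come from your own firm’s context; the point of the sketch is only that the three questions compose into a single, ordered filter.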

Like any discovery exercise, this assessment can go as deep as you can sensibly prioritize: of course, the framing piece can be improved by learning more about a market and audience, the customer rank piece might be dynamic based on who your target customer is, and even the market sizing piece might change given other larger shifts in the macroeconomic landscape (think about the market for AI Assistants only two years ago!)

They also relate to one another: you wouldn’t want to invest a great deal in learning about the appropriate framing for a customer segment if you aren’t yet sure you have a solution to their most painful problems – and ensuring with relative confidence that you have a significant addressable market probably should come before the other assessments.

I hope that bringing these different pieces together can be a helpful lens for considering the different problems you might work against: if we can avoid trombone oil, and build things that create real value and solve real problems, that’s a great start!

The Time Brian Chesky Scared Me So Bad I Bought A Book

When I first heard that the CEO of Airbnb, Brian Chesky, had eliminated the Product Management role and transitioned that department wholesale into Product Marketing, I was surprised – and skeptical.

Surprised in part because Airbnb is a company whose product folks have had a serious impact on me and my own product practice – Lenny of course, but also more directly Nick and James from the Transform team (whom I got to know during my time in the modern data stack space). To think that a company that produced such thoughtful and successful product folks was pivoting away from this mindset and methodology (which I’ve built my professional journey on!) was jarring, to say the least.

Was this part of a larger market shift? Was Product Management going the way of the Elevator Operator? Surely that’s a little over-the-top, a dramatic overreaction from a former theater kid. But even so!

A quick Google will give you lots of hot takes and deep dives on what Chesky really meant, and how what they were really doing was shifting from the way they had found themselves doing product toward a more intentional, revenue- and market-focused direction.

That being said, on hearing the news my first reflection was more personal – I’ve had the opportunity to work with some world-class product marketers, but that part of the overall Product Toolkit was an area where I felt quite weak – not for lack of interest or adjacency; I just never happened to get around to it!

So, I did the thing any good Product person does when faced with uncertainty: I did some discovery! I reached out to the most talented product marketing professionals I know and asked for help.

And help they did – after a few Zoom-powered coffee hangouts, I felt as though I had passed through the Dunning–Kruger horizon. The depth and complexity of the topic unfolded before me in an exciting (and nervous-making) way, as any topic does once you get close enough.

One aspect of my concern was validated almost immediately: while I had worked closely with marketing teams and marketers, my own toolkit could use some sharpening in this area. So, I set about doing just that sharpening – ingesting a lot of audio and text content, YouTube videos, and overall sort of soaking in the broad Product Marketing Content Ocean – and an ocean it is! There’s no shortage of folks who are happy to opine on the many aspects of the field.

Through this effort I found two resources to be especially useful – both continue to be folks I look to for expertise and value regularly – and I share them here:

  • Jason Oakley of Productive PMM – I subscribe to his newsletter, which shares regular quick-hit examples and analysis of interesting things being done in the wild.
  • April Dunford – I will read, write, and listen to anything that April makes! I’ve been so impressed by her thinking, her storytelling style, and her deep and expert analysis. She has a podcast which is a great first stop.

It was through April’s podcast that I first started to think a lot about positioning – not something I had considered with much depth before, but something which, due to the topic itself or perhaps due to April’s natural charisma and engagement around the topic, really drew me in.

I think most folks have the experience of, occasionally, being really taken by an aspect of their own work, some line of thinking or research, or new approach or methodology, that can arise with a sort of renewed energy, a renewal of excitement and a new sort of lens on a great many things that you’ve been doing regularly without much new insight or novelty.

For me, the most recent example of this is positioning – in coming to better understand this line of thought and methodology, I’m finding it applies to more and more aspects of my own work.

It has me so fired up that, between the great fright Chesky gave me and the compelling nature of Dunford’s podcast, I did something I never thought I’d do. I bought a sales book.

I haven’t finished the sales book, but I am already seeing some unexpected intersections between how we thoughtfully market and sell to external customers, and how internal / platform product teams could more thoughtfully represent the value of their work to their partners. There’s something here, and I’m stoked to dig in more.

All this to say: if there’s a kerfuffle in your industry or your place of work, and it makes you nervous or anxious (as Chesky’s shift to Product Marketing did for me), it’s worth sitting with that internal landscape, spending some time interrogating the feeling, maybe having a coffee or two with friends or mentors. You may find there’s something new and exciting behind that anxiety that unlocks a whole new space in your journey.

Source & Medium: A Medium Sized Dilemma

Subtitle: Source, Medium, Attribution, Stale Information, and The Future of Data

Here’s our situation – we want to be able to slice reporting and dashboards by a number of dimensions, including source and medium.

MARDAT (the team I’m lucky enough to be working with) is working to make this kind of thing a simple exercise in curiosity and (dare I say) wonder. It’s really interesting to me, and has become more and more clear over the last year or so, how important enabling curiosity is. One of the great things that Google Analytics and other business intelligence tools can do is open the door to exploration and semi-indulgent curiosity fulfillment.

You can imagine, if you’re a somewhat non-technical member of a marketing or business development team, you’re really good at a lot of things. Your experience gives you a sense of intuition and interest in the information collected by and measured by your company’s tools.

If the only way you can access that information is by placing a request for another person to go do 30 minutes, two hours, or three hours of work, that represents friction in the process and latency in the answer, and you’re going to find yourself disinclined to place that kind of request unless you’re fairly certain there’s a win there – it pushes back on curiosity. It reduces your ability to access and leverage your expertise.

This is a bad thing!

That’s a little bit of a digression – let’s talk about Source and Medium. Source and Medium are defined pretty readily by most blogs and tools: these are buckets that we place our incoming traffic in. When people arrive at our websites, wherever they were right before they arrived, that’s Source and Medium.

We assign other things too – campaign name, keyword, all sorts of things. My dilemma here actually applies to the entire category of things we tag our customers with, but it’s quicker to just say, Source and Medium.

Broadly, Source is the origin (Google, another website, Twitter, and so forth) and Medium is the category (organic, referral, etc) – if this is all new to you I recommend taking a spin through this Quora thread for a little more context.

What I am struggling with, is this: for a site like WordPress.com, where folks may come and go many times before signing up, and they may enjoy our free product for a while before making a purchase, at what point do you say, “OK, THIS is the Source and Medium for this person!”

Put another way:  when you make a report, say, for all sales in May, and you say to the report, “Split up all sales by Source and Medium,” what do you want that split to tell you?

Here are some things it might tell you:

  • The source and medium for the very first page view we can attribute back to that customer, regardless of how long ago that page view was.
  • The source and medium for a view of a page we consider an entry page (landing pages, home page, etc), regardless of how long ago that page view was.
  • The source and medium for the very first page view, within a certain window of time (7 days, 30 days, 1 year)
  • The source and medium for the first entry page (landing page, homepage) within a certain window of time (7 days, 30 days, 1 year)
  • The source and medium for the visit that resulted in a signup, rather than the first ever visit.
  • The source and medium for the visit that resulted in a conversion, rather than the first ever visit.
  • The source and medium for an arrival based on some other criteria (first arrival of all time OR first arrival since being idle for 30 days, something like that)
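To make the ambiguity concrete, here’s a rough sketch of how the same visit history produces different answers under a few of the models above. The touchpoint shape, dates, and function names are all invented for illustration, not taken from any particular analytics tool:

```python
from datetime import datetime, timedelta
from typing import NamedTuple

class Touch(NamedTuple):
    when: datetime
    source: str  # e.g. "google", "friendster"
    medium: str  # e.g. "organic", "referral", "cpc"

def first_touch(touches, conversion, window_days=None):
    """First touch ever, or the first within a lookback window before conversion."""
    if window_days is not None:
        cutoff = conversion - timedelta(days=window_days)
        touches = [t for t in touches if t.when >= cutoff]
    return min(touches, key=lambda t: t.when) if touches else None

def last_touch(touches, conversion):
    """The most recent touch at or before the conversion."""
    prior = [t for t in touches if t.when <= conversion]
    return max(prior, key=lambda t: t.when) if prior else None

# One customer: a Friendster referral years ago, then a recent paid click.
history = [
    Touch(datetime(2017, 3, 1), "friendster", "referral"),
    Touch(datetime(2024, 5, 20), "google", "cpc"),
]
sale = datetime(2024, 5, 21)

for label, touch in [
    ("first touch, all time", first_touch(history, sale)),
    ("first touch, 30-day window", first_touch(history, sale, window_days=30)),
    ("last touch", last_touch(history, sale)),
]:
    print(f"{label}: {touch.source} | {touch.medium}")
```

The point isn’t the code – it’s that each of these functions is a defensible definition of “the” Source and Medium for the same sale, and which one your report uses is a choice that should be made deliberately.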

It feels like at some point Source and Medium should go bad, right? If someone came to the site seven years ago, via Friendster or Plurk or something, signed up for a free site, and then came back last week via AdWords, we wouldn’t want to assign Friendster | Referral to that sale, right?

Maybe we have to create more dynamic Source/Medium assignment: one for “First Arrival,” one for “Signup,” one for “Purchase.” Maybe even something like a Source/Medium for “Return After 60+ Days Idle.”

In the long run, it feels like having a sense of what sources are driving each of those behaviors more or less effectively would be helpful, and could help build insights – but I also feel a little crazy: does no one else have this problem with Source and Medium?

Cogitating on Return on Ad Spend – AKA ROAS

I’m still pretty new to this whole marketing thing: I’ve been a part of Automattic’s marketing efforts for just over a year, and I feel like I’m still learning: the pace of education hasn’t slowed down even a bit.

One of the things that was a real challenge for me was learning the language of the work, especially given our interactions with a number of outside vendors and agencies: the sheer volume of acronyms, shorthand, and unusual usages of otherwise common words is a huge part of the advertising world, and it serves many purposes.

The import of accessible language is probably something I should save for its own post: I think that, especially in a highly interdependent company like Automattic, opaque language, complex jargon, and inscrutable acronyms are more of a hindrance than a help, and in fact likely do us harm. We, as humans (myself included), want to feel smart and powerful, and it can be very attractive to nod along rather than ask hard questions.

If you’ve been following this blog for a little while, you know that measurement and the implications of measurement are things that I think about – here’s a piece about metrics generally.

(Here’s a slightly longer one where I take a bit of umbrage, such drama!)

My broad position on metrics is, they’re reductive, necessarily and usefully so, and need to be understood as means rather than as ends.

All that to say, we should also be careful not to treat our metrics as less reductive than they really are – to behave as though what we are measuring is simple, when in fact it is not simple at all.

Taking something complex and making it simple enough to be useful – that’s the essential core of all measurement. Taking something complex and acting like it is something simple is another thing entirely, and a very easy way to increase your overall Lifetime Error Rate.

This brings us to Return on Ad Spend, sometimes shortened to ROAS. Return on Ad Spend can be calculated like this:

ROAS = revenue / spend

…with revenue being the money your ads bring in and spend being what you paid for them. Generally the output is represented either by a ratio like 3:1, where for every dollar you spend on advertising you get three dollars’ worth of revenue, or by a percentage – 3:1 would be represented as 300%.

It looks pretty simple. It’s generally referred to as being very simple, or easy, that kind of thing. Which, well, it is, at least on the face of it.

(The rest of this Post is about the sometimes hidden complexities of ROAS. If you want to learn more about using the metric in a tactical way, John at Ignite Visibility has a great write up on how to calculate and break out ROAS, as well as some wrinkles about attribution, which I recommend if that’s what you’re looking for. Here’s a link)

Let’s talk about this metric: ROAS. The name holds a lot of promise, right? Return on Ad Spend: something everyone who spends money on ads wants to learn, the dream of marketers everywhere. How much are we taking back in, for the amount we are putting out?

The trick of ROAS is, we have built in a set of assumptions: specifically, that the numbers we put in represent the whole of each of those categories. The trouble here is that there are only very specific parts of the marketing spend where that is a safe assumption: low-funnel type tactics, especially for e-commerce companies shipping physical products.

In these situations, for these companies, ROAS tends to be a clean metric: you have a very clear picture of where you are spending money, and each transaction has a straightforward, static revenue.

The trick is, for SaaS companies ROAS can become much more complicated: imagine your company sells a single product, some type of Helpful Business Software, and it retails for $100 / year. If you run some numbers, you find that you spend on average $50 in ads to get a customer – this looks good, right? We can say we have 200% ROAS and call it a day.

Of course, one of the great advantages of having Data is that we are able to record it, and then see how it changes over time, and try to do the sorts of things in our business that move the needle in our desired direction.

For a SaaS company, two of the metrics you live or die by are Customer Lifetime Value (sometimes called CLV or LTV) and the dreaded Churn Rate – astute readers will note that these two metrics are inextricably linked. Briefly: LTV is the amount of revenue your business can expect to make from a given customer, and the dreaded Churn Rate is the share of your customers you expect to lose in a given period (generally represented as a percent, like: “Our Dreaded Churn Rate is a spooky 13%!”).
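As a concrete illustration of that link, one common back-of-the-envelope approximation treats expected customer lifetime as the reciprocal of the churn rate, so LTV is roughly annual revenue per customer divided by annual churn. A minimal sketch, with hypothetical numbers:

```python
def simple_ltv(annual_revenue_per_customer: float, annual_churn_rate: float) -> float:
    """Back-of-the-envelope LTV: expected lifetime in years ~ 1 / churn rate.
    Ignores discounting, expansion revenue, refunds, and support costs."""
    return annual_revenue_per_customer / annual_churn_rate

# A $100/year product where half of customers churn each year:
print(simple_ltv(100, 0.50))  # 200.0 – customers stick around ~2 years on average
```

This is exactly why the two metrics are inextricably linked: any move in churn flows straight through to LTV.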

A savvy SaaS marketing analyst will use the expected lifetime value of a customer in the top of the fraction up there to determine Return on Ad Spend – for two great reasons. FIRST, because it is more accurate: if you’re looking to determine the total return, it makes more sense to use LTV than simply the ticket price. SECOND, because it will make her look better in her reporting.

Consider: for this same sale of our Helpful Business Software, our expected LTV isn’t $100, which is the annual cost of our product, but rather, $200. This doubles our ROAS. This is great news!

(It’s not really news at all though, right? We’re not actually improving either our ads or our product, we just used a more accurate number. Metrics are means!)

One wrinkle, though, is that now we’re not really using that equation above anymore – we’re using something more like:

ROAS = LTV / spend

If you’ve ever spent any time trying to calculate your customers’ lifetime value, you know that this has suddenly become a much more complicated metric.

What happens once we start to bring in more complicated ingredients into our ROAS pie here, things like LTV, is that ROAS moves from being a static sort of snapshot into a metric that is much more dependent on other parts of the business to be successful.

In the above example, imagine if your company has had a disastrous year, and your Dreaded Churn Rate has skyrocketed, driving your LTV down below the $50 you spend to acquire a customer (due to, let’s say, sweeping customer refunds and growing customer support costs) – now our ROAS is below 100%, even though literally nothing has changed on the advertising side. In this situation, ROAS becomes a larger aggregate metric, telling us something about the business at large.
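A quick sketch of that swing, using the same hypothetical $50 acquisition cost as above – the LTV figures are invented for illustration:

```python
def roas_pct(revenue_per_customer: float, spend_per_customer: float) -> float:
    """ROAS as a percentage: 200.0 means two dollars back per dollar spent."""
    return 100 * revenue_per_customer / spend_per_customer

SPEND = 50  # average ad spend to acquire one customer

# Good year: the $100/year product carries an expected LTV of $200
print(roas_pct(200, SPEND))  # 400.0

# Disastrous year: refunds and churn drive LTV below the acquisition cost
print(roas_pct(40, SPEND))   # 80.0
```

Nothing in the advertising changed between those two lines – only the LTV input – which is what makes LTV-based ROAS an aggregate, whole-business signal rather than a clean snapshot of ad performance.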

This brings to mind a larger question: do we want ROAS to be a heartbeat metric, an indicator of the business overall? Or do we want it to be what it was about a thousand words ago, a simple snapshot of how our advertising efforts are going?

As we move away from direct retail e-commerce businesses into more complex companies, and up what’s called the advertising funnel, ROAS becomes additionally tricky, not because the equation itself becomes more complicated, but because we start to introduce uncertainty, and even worse than that, we introduce unequal uncertainty.

Generally, you know how much you’ve spent. This is true even for less measurable marketing efforts, things like event sponsorships, branding, and so forth. What you decide to include is a little bit of a wrinkle: do you include agency fees? Payroll?

The uncertainty comes into play on the revenue side, and this is why ROAS as a metric starts to break down as we move up the funnel: the lower part of your fraction, spend, stays certain, while the upper part, revenue, becomes increasingly uncertain, which makes the output more and more difficult to use in a reliable way.

This is a problem that crops up a lot in marketing metrics, and something I’ve been thinking on quite a lot: we often will compare or do arithmetic on numbers which have wildly different underlying levels of base uncertainty, sometimes to our detriment, maybe sometimes to our advantage.
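One way to see the trouble with unequal uncertainty is to propagate it: here’s a tiny Monte Carlo sketch where spend is known exactly but attributed revenue is modeled as noisy. The distribution and every number in it are invented purely for illustration:

```python
import random
import statistics

random.seed(42)

SPEND = 50_000.0  # known exactly, straight off the invoices

def attributed_revenue() -> float:
    """Up-funnel revenue attribution is fuzzy: model it as a wide normal,
    floored at zero. These parameters are pure invention."""
    return max(0.0, random.gauss(mu=150_000, sigma=60_000))

roas_samples = [attributed_revenue() / SPEND * 100 for _ in range(10_000)]

print(f"mean ROAS: {statistics.mean(roas_samples):.0f}%")
print(f"std dev:   {statistics.stdev(roas_samples):.0f}%")
# A single reported ROAS number hides that spread entirely – the certainty
# of the denominator does nothing to shrink the uncertainty of the numerator.
```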

 

I’ve been working with ROAS quite a lot, and trying to really get my teeth into it, and my brain around its under-the-surface complexity. For most businesses today, ROAS is useful, but it is not as simple as it looks.

This is where I ask you to add something in the comments! What metrics are stuck in your craw this week? Do you think I spend too much time trying to become certain about uncertainty? Let me know!