Category: analytics

Become an Analytics Engineer!

OK, so let’s get something out of the way up front – yes, I wanted to be a data scientist.

But you know what? Once, I also wanted to be a professional coffee roaster.

These jobs (and aspirations) are similar primarily insofar as my desire to have them took a nosedive once I got a real glimpse of what doing them was like.

If you like the idea of working with data, if you see yourself as someone who has ambitions or aspirations of working in the data space, you should read this article from Dan Friedman – Data Science: Reality Doesn't Meet Expectations.

I work closely with data scientists – in some ways I genuinely envy their approach to work, and the way that they can find impact within organizations. I am super glad they are out there and I am so grateful for the insights and thoughtfulness they bring to the table – but that job’s not for me!

The job I've found that suits my nature, allows me to have a lot of impact, and lets me work on important and interesting problems is a new one – the Analytics Engineer!

Job titles in data and in tech are hard – do we really need a new one? The Analytics Engineer is an emergent term describing an area of work that folks have been operating in for a while now, but that modern tooling and third-party solutions have made increasingly in demand.

NB: not everyone knows that they need an Analytics Engineer – often you'll see job descriptions for titles like Data Analyst, Business Analyst, Data Engineer, even Data Scientist – but the work that will be expected is Analytics Engineering work.

That work is more technical than a strictly Excel-based analyst role – no disrespect to Excel; sufficiently advanced Excel is indistinguishable from software engineering, in my opinion – but you will need some SQL chops to be effective as an Analytics Engineer. It's less statistically heavy than a data science role. It requires literacy in data engineering but, in most cases, not necessarily the chops to originate an Airflow DAG. Strong opinions about data architecture are helpful, but often you can learn that on the job!

As I talk more with folks about this kind of work, and as we struggle to find qualified candidates for our own teams, I've realized that I've repeated the same advice probably a half dozen times: sometimes to friends, at least once to an Uber driver, over Slack, and in person. When that happens, I take it as a strong signal that I ought to put up a blog post!

So here it is: this is my guide to how you can become a competitive candidate for Analytics Engineering roles (even if they’re hiring for the wrong job title!)

One of the challenges to gaining the kind of experience you need in order to become a competitive candidate is that much of the best-in-class tooling for this kind of work is either hard to use alone or prohibitively expensive – something like Airflow is a great solution and very broadly used, but it's going to be a challenge to set up locally to use with toy data. Looker is a very common tool for this kind of work, but it's terribly expensive for an individual to use as an educational tool.

So, this set of suggestions is meant to be genuinely usable by anyone – you should be able to follow this advice at low or no cost.

Yes, if a job description is looking for Airflow ETL experience or Looker modeling experience, you won't have exactly that – BUT, as someone hiring into a role with exactly that wording in our job description, I also recognize that the free tooling below is eminently transferable to the tooling we use in-house. In your cover letter, you can mention that you accomplished the same tasks with a different tool and that the skills transfer laterally – a cover letter with that kind of attention to detail is already ahead of the pack.

Here’s your stack:

FIRST you have to find some free data that you're interested in. That second part should not be neglected – if you want to see this project through to its completion (and gain your Competitive Candidate merit badge!), it is absolutely imperative that you make choices that make it as easy as possible for you to stay motivated!

Are you interested in food? See if you can get historical data from your local agricultural co-ops or agencies. I'm interested in local politics, so I filed a FOIA request for the voter registration data for the entire State of New York – it came on a CD!


Being interested in the data you're using is going to make a big difference when it comes to understanding it, modeling it, and then building some reporting – especially if the only end consumer is you! Bonus points if it is a streaming source of regularly updated data, like web traffic or an e-commerce application.

SECOND I recommend using BigQuery as your data storage solution – it has good docs, it has a free tier, and it integrates really easily with the other parts of the data stack. If you have another solution you prefer, that's fine too!
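If you want to kick the tires before wrangling your own data, BigQuery's free public datasets let you run a query immediately. A minimal sanity check (the public dataset below is a real one from Google; the name I'm searching for is just for fun):

```sql
-- Confirm your BigQuery project and console access work by querying
-- one of Google's free public datasets:
SELECT
  state,
  SUM(number) AS total_births
FROM `bigquery-public-data.usa_names.usa_1910_2013`
WHERE name = 'Dan'
GROUP BY state
ORDER BY total_births DESC
LIMIT 10;
```

Once that runs, loading your own data is mostly a matter of creating a dataset and pointing BigQuery at your CSVs.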

THIRD You must learn the excellent and open source dbt from your friends and mine at Fishtown. Here's the tutorial and here is the Slack community. dbt is what you'll use to take your ocean of raw data, transform it into tables that fit the dimensional modeling standard, and apply robust testing to those transformations.
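To give you a feel for what that looks like: a dbt model is just a SQL file, and dbt wires the dependencies together. Here's a hedged sketch of a staging model for my voter file – the source and column names are invented, and yours will differ:

```sql
-- models/staging/stg_voters.sql
-- A staging model: the one place where raw columns get renamed and
-- type-cast, so every downstream model starts from a clean foundation.

with source as (

    -- 'nys_voters' and 'raw_voter_file' are hypothetical source names
    select * from {{ source('nys_voters', 'raw_voter_file') }}

)

select
    voter_id,
    lower(county) as county,
    cast(registration_date as date) as registered_at,
    party_affiliation

from source
```

The testing half lives in an accompanying YAML file, where you'd declare things like unique and not_null tests on voter_id.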

If you have a little extra cash for this endeavor, I recommend buying The Data Warehouse Toolkit and reading the first four chapters to really dig deep into dimensional modeling. If you're trying to stay absolutely no-cost, you can suss out some blog posts and other resources for free!
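Here's the core idea of dimensional modeling in one sketch, though: a narrow fact table (one row per thing that happened) joined out to wide dimension tables (context about who, where, and when). All of the table and column names below are illustrative:

```sql
-- A classic star-schema query: the fact table in the middle,
-- dimension tables supplying the context.
select
    d.calendar_month,
    c.county_name,
    count(*) as registrations
from fct_registrations as f
join dim_date   as d on f.date_key   = d.date_key
join dim_county as c on f.county_key = c.county_key
group by 1, 2
order by 1, 2
```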

FOURTH You'll build out your final reporting using the free tier of Mode Analytics – note that in order to stay within their free tier, you may need to reduce your final reporting tables to "Thousands of Rows" – take this as an extra challenge to your transformation layer, and an opportunity to additionally leverage the power of dbt!
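In practice that means having dbt materialize a small, pre-aggregated mart rather than pointing Mode at millions of raw rows. A sketch, again with invented names – a year of daily, per-county voter data collapses into a few hundred rows:

```sql
-- models/marts/rpt_registrations_by_month.sql
-- Do the heavy lifting in dbt; hand Mode only the small result.
select
    date_trunc(registered_at, month) as registration_month,
    county,
    party_affiliation,
    count(*) as new_registrations
from {{ ref('stg_voters') }}
group by 1, 2, 3
```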

FIFTH Make sure you document the journey – I always recommend blog posts, but probably a well documented Github repo will be more interesting, and more likely to be reviewed, by most technical hiring managers.

At the end, your process would look something like this: interesting raw data, loaded into BigQuery, transformed and tested with dbt, reported on in Mode, and documented the whole way through.

I recognize that the above glosses over a lot of the work behind this proposal – a dedicated person already working full time, putting in some nights and weekends, could probably get through the above in six months. It's not a short trip, but if you're looking to make a move, this is one way to do it.

The need for Analytics Engineers is only growing, even if the job title itself is still only starting to gain steam – I hope you’ll give it a try!

Crowd Sourcing Organizational Improvement

Subtitle: It’s Called the Analytics Road Show!

Here’s the situation: our company has a data organization – it’s probably kind of like your company’s data organization: it has some data engineers, it has some governance experts, it has some analysts, some developers.

We’ve been making great strides in doing the right work, and getting better at delivering that work quickly, accurately, and in communication and consultation with our stakeholders.

But, that feels like table stakes, right? One lesson really rings out to me from my time before I worked in tech – a lesson from one of the owners of the chain of bakeries where I taught over one hundred baristas how to do latte art:

You can’t work on the business if you’re always working in the business.

(I believe this idea originates with the book The E-Myth? Correct me if I'm wrong on that one though!)

This is something I've been cogitating on a lot these days: not just how do we do what we do, and do it well, but how do we improve the improvement? How do we improve our processes, our structure, the whole way we think about and engage with our data and our measurements – even how we engage with one another within the organization?

So – I think I need to get outside of the organization to get greater insights here: I’m taking this show on the road. I’m calling it the Analytics Road Show. I have a deep and abiding love of chatting with folks – some might uncharitably call it nosiness – which I am hoping to leverage into a bunch of sit-down sessions with folks working in similar organizations but not mine.

Getting outside the building is a key part of this endeavor: I need to get at this with a beginner’s mind. So then, dear readers, where can I find folks willing to talk with ol’ SAO?

I have the great fortune of being a member of the Locally Optimistic Slack community (you should join us!), and when I dropped this idea into the #nyc channel, I got a serious, no-joke, resounding response. So, here goes nothing! July 15th and 16th (that's next Monday and Tuesday!) I'll be heading down the mighty Hudson to have coffees, lunches, and mid-level IPAs with some brand new friends in NYC.

I am really looking forward to this, as well as recording my thoughts in standard blog-post format for y’all – and internal action plans for my colleagues.

I’m Doing Live Video Interviews!

This post is a very exciting announcement for me, so I won’t do the typical online-content thing where I tell you a big, narrative tale about me and my values before I actually do the announcing – I’ll do that after.

This coming Monday – TOMORROW – June 24th, at 4PM EST, I’ll be doing the first of many live-streamed video Ask-Me-Anything style interviews with professionals working at the intersection of data and analytics!

(If you’re reading this and want to make a Google Calendar event right this minute, you know what, here’s the Zoom link: https://brooklyndata.zoom.us/j/501489762 )

This first session I'll be sitting down with my friend and yours, the singular Matt Mazur: once my colleague at Automattic, then part of the team behind the eminent customer-support software Help Scout, and now a free agent, applying his immense experience and insight to problems of analysis and data management for a number of companies, all of which are lucky to have him.

I'm putting this interview together in partnership with the Locally Optimistic team, whom I have gotten to know over the last few months and who have, honestly, consistently impressed me!

I first joined the Locally Optimistic community via their blog, as I think is also the case for many of the current members of that Slack instance. As its membership has grown, it’s been a really excellent source of insight and camaraderie: I got to meet a few folks in person at a Looker meetup in NYC (I’m just a drive up the Hudson, remember), as well as at the Marketing Analysis and Data Science conference out in San Francisco, earlier this year.

Ever since I shuttered my podcast about hop farming (more about that here), I’ve missed the kind of social access that doing regular interviews can offer: I am by nature an inquisitive person (some might uncharitably say nosy), and having access to a socially acceptable way to totally pepper someone with questions was in so many ways a rewarding experience for me.

In some ways, Trellis to Table (the hop podcast referenced above) was about connecting small groups and individuals involved in small-scale hop farming, and helping them to share value: by interviewing a totally novel little crew of twenty-something first-time farmers in Minnesota, their lessons and energy could leapfrog to the lifetime farmers in Upstate NY and in South Carolina, and suddenly this value had exploded across a network that didn't even exist before – that was the big motivation for me, by the end.

I think in some ways the intersection of software engineering, data analysis, and business intelligence is in a similar place – there’s a good post about this new type of professional, the Analytics Engineer, on LO – there is this really large, and growing, community of folks whose work doesn’t yet have a clear set of job titles, or a clear sense of what their career progression might look like.

In tapping the Locally Optimistic community for exciting, interesting folks to engage in these video conversations, we can start to create a better shared understanding of our work, what our work looks like, and how we can get better both as individuals and as a community of practice.

I’m very excited to get back into the interview game: it’s something I really enjoy, and I hope that y’all are able to get a lot out of it as well.

Matt and I will talk about his professional journey, which has taken him from being an officer in the Air Force, to leading an analytics team, to starting his own software business and becoming a business intelligence consultant.

We'll also explore the world of internal organizational communication, working with non-data teams, and having an impact as a data analyst.

As one last reminder, this first session is this coming Monday – TOMORROW – June 24th, at 4PM EST.

Here’s the Zoom link: https://brooklyndata.zoom.us/j/501489762

If you want to be super cool, I am also going to be trying to live-stream this via my Twitch channel, which I am literally creating just for this series (!) here: My Real Not a Joke Twitch Stream

Source & Medium: A Medium Sized Dilemma

Subtitle: Source, Medium, Attribution, Stale Information, and The Future of Data

Here’s our situation – we want to be able to slice reporting and dashboards by a number of dimensions, including source and medium.

MARDAT (the team I’m lucky enough to be working with) is working to make this kind of thing a simple exercise in curiosity and (dare I say) wonder. It’s really interesting to me, and has become more and more clear over the last year or so, how important enabling curiosity is. One of the great things that Google Analytics and other business intelligence tools can do is open the door to exploration and semi-indulgent curiosity fulfillment.

You can imagine, if you're a somewhat non-technical member of a marketing or business development team, that you're really good at a lot of things. Your experience gives you a sense of intuition about, and interest in, the information collected and measured by your company's tools.

If the only way you have access to that information is by placing a request for another person to go do 30 minutes, two hours, three hours of work, that represents friction in the process, that represents latency, and you're going to find yourself disinclined to place that kind of request unless you're fairly certain there's a win there – it pushes back on curiosity. It reduces your ability to access and leverage your expertise.

This is a bad thing!

That's a little bit of a digression – let's talk about Source and Medium. Source and Medium are defined pretty readily by most blogs and tools: they are buckets that we place our incoming traffic in. For the people who arrive at our websites, wherever they were right before they arrived, that's Source and Medium.

We assign other things too – campaign name, keyword, all sorts of things. My dilemma here actually applies to the entire category of things we tag our customers with, but it’s quicker to just say, Source and Medium.

Broadly, Source is the origin (Google, another website, Twitter, and so forth) and Medium is the category (organic, referral, etc.) – if this is all new to you, I recommend taking a spin through this Quora thread for a little more context.

What I am struggling with is this: for a site like WordPress.com, where folks may come and go many times before signing up, and may enjoy our free product for a while before making a purchase, at what point do you say, "OK, THIS is the Source and Medium for this person!"

Put another way: when you make a report, say, for all sales in May, and you say to the report, "Split up all sales by Source and Medium," what do you want that split to tell you?

Here are some things it might tell you (I'll sketch a couple of these in SQL after the list):

  • The source and medium for the very first page view we can attribute back to that customer, regardless of how long ago that page view was.
  • The source and medium for a view of a page we consider an entry page (landing pages, home page, etc.), regardless of how long ago that page view was.
  • The source and medium for the very first page view within a certain window of time (7 days, 30 days, 1 year).
  • The source and medium for the first entry page (landing page, homepage) within a certain window of time (7 days, 30 days, 1 year).
  • The source and medium for the visit that resulted in a signup, rather than the first ever visit.
  • The source and medium for the visit that resulted in a conversion, rather than the first ever visit.
  • The source and medium for an arrival based on some other criteria (first arrival of all time, or first arrival since being idle for 30 days – something like that).
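To make the trade-offs concrete, here's a hedged sketch of two of those definitions side by side. The tables and columns (pageviews, signups, and their fields) are invented for illustration – your event schema will look different:

```sql
-- Assumed hypothetical tables:
--   pageviews(user_id, viewed_at, source, medium)
--   signups(user_id, signed_up_at)

with first_touch_ever as (

    -- the very first page view, regardless of how long ago it was
    select user_id, source, medium
    from pageviews
    where true  -- BigQuery wants a where/group by/having alongside qualify
    qualify row_number() over (
        partition by user_id order by viewed_at
    ) = 1

),

first_touch_windowed as (

    -- the first page view within the 30 days before signup
    select p.user_id, p.source, p.medium
    from pageviews as p
    join signups as s
      on p.user_id = s.user_id
    where p.viewed_at between
          timestamp_sub(s.signed_up_at, interval 30 day) and s.signed_up_at
    qualify row_number() over (
        partition by p.user_id order by p.viewed_at
    ) = 1

)

select * from first_touch_windowed
```

Same customer, same sale – and the two CTEs can disagree completely about who gets the credit.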

It feels like at some point Source and Medium should go bad, right? If someone came to the site seven years ago, via Friendster or Plurk or something, signed up for a free site, and then came back last week via AdWords, we wouldn’t want to assign Friendster | Referral to that sale, right?

Maybe we have to create more dynamic Source/Medium assignation: have one for "First Arrival," one for "Signup," one for "Purchase." Maybe even something like a Source/Medium for "Return After 60+ Days Idle."
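Sketched out, that one-Source/Medium-per-milestone idea might produce a table like this – the milestones table and its columns are hypothetical:

```sql
-- Assumed hypothetical tables:
--   pageviews(user_id, viewed_at, source, medium)
--   milestones(user_id, milestone, happened_at)
--     where milestone is 'first_arrival', 'signup', or 'purchase'

select
    m.user_id,
    m.milestone,
    p.source,
    p.medium
from milestones as m
join pageviews as p
  on p.user_id = m.user_id
where p.viewed_at <= m.happened_at
qualify row_number() over (
    partition by m.user_id, m.milestone
    order by p.viewed_at desc  -- the last touch before each milestone
) = 1
```

One row per user per milestone, each with its own attribution – exactly the kind of table you could slice those reports and dashboards by.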

In the long run, it feels like having a sense of which sources are driving each of those behaviors, and how effectively, would be helpful and could help build insights – but I also feel a little crazy: does no one else have this problem with Source and Medium?