Tag: small data

7 Weeks of Hop Growth Data

Since the very end of May, I’ve taken weekly measurements of the height of all of the first-year hop bines in my test yard. Here are the results, by location and height:

[Chart: weekly bine height measurements by plant location]

Like any pile of data, this one leaves us with more questions than answers: are there significant differences between the locations that grew better and those that grew worse? Is there a variable at play that isn’t described by the graphic? In this case, I can tell you I hope not; they’re all watered automatically and at the same rate – I tested! They also all get almost exactly the same amount of sunlight per day, thanks to the yard’s location and alignment.

However, it is neat to notice how the different varieties of hop plants are growing differently: you can see that B2 and B3 are far outgrowing the others (at 85″ and 93″ respectively, versus a yard average of 41″ for this week). These plants are both of the Chinook variety, described by my friends and yours at Hopunion as “A high alpha hop with acceptable aroma.”

We can also see that the two laggards (A1 and B1) are both Centennials (“Very balanced, sometimes called a super Cascade.”). While I know that the first year’s growth is not necessarily indicative of any plant or variety’s long-term success, it will be interesting to see how these trends correlate with yield in future years. It’s possible that the Centennial plants are pushing out more substantial root stock than the others, which may make this apparent first-year laziness in fact an investment in greater long-term success.
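If you’re playing along at home, this kind of variety comparison is a few lines of Python away once you have a spreadsheet of weekly measurements. A minimal sketch – all heights except B2’s and B3’s are hypothetical, and the post only names varieties for four of the plants:

```python
from collections import defaultdict

# Week-7 heights in inches. Only B2 (85") and B3 (93") come from the post;
# the rest are made-up placeholders. Varieties for A1/B1 (Centennial) and
# B2/B3 (Chinook) are from the post; the others are unknown.
heights = {"A1": 20, "A2": 38, "A3": 45, "B1": 22, "B2": 85, "B3": 93}
variety = {"A1": "Centennial", "A2": "unknown", "A3": "unknown",
           "B1": "Centennial", "B2": "Chinook", "B3": "Chinook"}

# Group the measurements by variety, then compare averages.
by_variety = defaultdict(list)
for plant, height in heights.items():
    by_variety[variety[plant]].append(height)

for name, hs in sorted(by_variety.items()):
    print(f"{name}: avg {sum(hs) / len(hs):.1f} in. across {len(hs)} plant(s)")
```

Swap in an export from your own measurement log and the same grouping works for any number of plants and varieties.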

Ain’t data fun?

Finding Hospitality in the Numbers


It’s always a funny thing when you find a problem you weren’t expecting. Especially when spending time with usage data, taking a moment to blink once or twice and consider why something looks odd can really pay dividends.

When doing a fairly standard rundown of the support statistics for our in-app support, I noticed that, despite making up about 40% of our userbase, our Android app users were submitting as many support requests as our iOS users. This meant that an Android user was almost twice as likely to contact support as an iOS user.
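The comparison behind that observation is just a per-user normalization. A quick sketch with hypothetical user and ticket counts – the only figure from the post is the roughly 40% Android share:

```python
# Hypothetical monthly figures; only the ~40% Android userbase share and
# the roughly equal raw ticket volumes come from the post.
users = {"android": 40_000, "ios": 60_000}
tickets = {"android": 1_000, "ios": 1_000}

# Normalize raw ticket counts into a per-user contact rate.
rate = {platform: tickets[platform] / users[platform] for platform in users}
ratio = rate["android"] / rate["ios"]

print(f"Android contact rate: {rate['android']:.2%}")  # 2.50%
print(f"iOS contact rate: {rate['ios']:.2%}")          # 1.67%
print(f"An Android user is {ratio:.1f}x as likely to contact support")
```

With a strict two-platform split these numbers work out to 1.5×; if the userbase also includes other platforms, the iOS share shrinks and the ratio climbs toward the “almost twice” above.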

This seemed strange, so I did some digging. Was the Android app more difficult to use? The app store rating for the Android app was actually higher than that of the iOS app. It was also noteworthy that Android users accessed the in-app FAQ about half as much as iOS users – perhaps for some reason Android users tended to speed past the FAQ and go directly to support? Perhaps the FAQ wasn’t displaying properly?

Like anyone feeling stumped, I brought the question to the team, hoping someone would find insight where I hadn’t. It turned out that our Android application in fact offered more points of access to support than the iOS app – that is, the Android app offered folks a chance to reach support at points of failure and error messages, whereas the iOS app did not. None of these additional access points required a customer to go through the usual flow of FAQ before reaching out to support.

Mystery solved. We’re increasing the number of access points to support in the iOS app.

Working on the mobile apps has revealed to me again and again that the lower the barrier to entry is, the better you’ll be able to hear from your customers. They have a lot of valuable things to say – given the opportunity, they’ll help you to make better things.

If you’re keeping track, yes, this is the second story about working with the mobile team where I end up increasing the number of incoming support requests. Yes, I am the worst.

Small Data: A Case Study



Big Data is a Big Thing, an idea that often goes hand in hand with words like “Enterprise” and “scientist.” Today I’d like to share a story from my past to illustrate that data, experimentation, and testing are entirely accessible to business owners of all flavors and sizes, not just massive corporations with a dedicated team of growth hackers, data scientists, and an in-house barista.

Two jobs before Automattic, I worked for a small chain of artisan bakeries in Providence, Rhode Island, called Seven Stars. There are three locations (a very small chain), and it is owned by a lovely couple who brought me on to design and execute an improved employee training system. Once that was up and running under its own steam (after about 18 months), I became a bit more of a general utility player for them – finding problems and then solving them. It took great trust on their part, but I like to think I earned that trust in efficiency gains, improved revenues, and tastier coffee.

During a conversation with one of the owners, he mentioned that he had a real gripe with muffins – not only were they one of the more involved pastries that we sold, they also had the slimmest margins. A situation fraught with possibility. I asked him a few more questions, and headed back to my shared office to dig through some of our historical point of sale data. I didn’t know it at the time, but what I was about to embark on was the retail bakery version of growth hacking.

At the time, we offered three different muffins every day, with the selection rotating from day to day – Blueberry, Corn, and Pumpkin, say, on Monday, then Chocolate, Bran, and Blueberry on Tuesday, and so on.

After establishing a baseline (easily done with today’s computerized point of sale systems), I proposed an experiment: we would produce only two kinds of muffins per day, and only the ones that had the strongest current sales. We’d do this for six weeks, then take a look at the data and decide from there – or, as I’d say today, we would then iterate on the process.

And, thankfully, since this is a case study, it worked! After six weeks, the sales at each store had retained their pre-experiment growth rate. Now, this may not sound like a success – sales growth had not changed? How can an experiment be a success if sales growth had not improved?

Sales growth may not have changed (up or down), but the numbers behind it had shifted: muffin sales fell significantly, while other areas (specifically scones, which, interestingly, sat next to the muffins in the display) grew to match the decrease.
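That mix shift is easy to check straight from point of sale category totals. A sketch with hypothetical weekly unit counts – the post doesn’t give the actual figures:

```python
# Hypothetical weekly unit sales before and during the experiment;
# the real Seven Stars numbers aren't in the post.
before = {"muffins": 300, "scones": 200, "croissants": 250}
after = {"muffins": 180, "scones": 320, "croissants": 250}

total_before = sum(before.values())
total_after = sum(after.values())

# Per-item change in units and in share of the overall sales mix.
for item in before:
    delta = after[item] - before[item]
    print(f"{item}: {delta:+d} units "
          f"({before[item] / total_before:.0%} -> {after[item] / total_after:.0%} of mix)")

print(f"total units: {total_before} -> {total_after}")
```

The pattern to look for is exactly the one described above: flat totals, with one category’s decline absorbed by its neighbors.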

If I were to guess, I would suggest that folks who were at one time buying a muffin (perhaps the third, dropped, variety) were not simply abandoning their purchase, but rather purchasing another item, possibly even at the same price point. Since muffins were the worst-performing item revenue-wise, anything else represented greater revenue for the bakery. Moving bakery labor from muffins to another product was a second win, since muffins were also the most laborious and frustrating product.

I like to think of this kind of data implementation as Small Data – using the information that you have to run experiments that are within your grasp for small, consistent wins. You don’t need a data scientist on staff, you don’t need a degree in statistics, you just need to know your business and have a curious mind. Data can work for everyone – all you need is a willingness to experiment.