Lessons from Applying the Lean Methodology

by Mark Lurie

I’m a big fan of the lean methodology, but it has its flaws. In implementing the lean methodology over the past few months, I have learned several things.

First, you can never prove a hypothesis, only negate it. More practically, a strongly negative result can negate a hypothesis, and while you can’t prove a hypothesis, a series of great results can make you very confident in it. The hard part is interpreting a middling result, and unfortunately, middling results are the most frequent! It is easy to read middling data in a positive or negative light depending on your state of mind, which is problematic. For example, what does it mean if 50% of your initial customers also use the competition? Does this mean the market need is already being met, or that there is an untapped market the competition isn’t addressing? Regardless, you have to make a decision and keep moving.
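Part of why middling results are so ambiguous is that, on the small samples an early-stage startup actually has, a 50% observation is consistent with a wide range of true values. As a sketch (the numbers here are hypothetical, not from our tests), a normal-approximation confidence interval makes this concrete:

```python
import math

def conf_interval_95(successes: int, n: int) -> tuple[float, float]:
    """Normal-approximation 95% confidence interval for a proportion."""
    p = successes / n
    margin = 1.96 * math.sqrt(p * (1 - p) / n)
    return (max(0.0, p - margin), min(1.0, p + margin))

# Hypothetical: 20 of 40 early customers also use the competition.
low, high = conf_interval_95(20, 40)
print(f"observed 50%, 95% CI: {low:.0%} to {high:.0%}")
# → observed 50%, 95% CI: 35% to 65%
```

With only 40 customers, the true rate could plausibly be anywhere from about a third to about two thirds, so the data alone cannot settle which interpretation is right.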

Second, the fastest route to an answer isn’t always quantitative. We are focused on curating products for creative women, and we hypothesized that a certain type of woman exists and that she would like a certain type of product. We ran some landing-page tests and A/B tested products, but at the end of the day we didn’t think the results were meaningful. Instead, we began calling professional merchandisers and buyers and asking them directly whether they thought our hypothesis was correct. This turned out to be a great way to test our assumptions: ask experts who have spent years merchandising for different customers. These weren’t formal tests, but the people we spoke with have lots of experience, have seen lots of data, and seem to really know what they do and don’t know. Their strong confidence ended up feeling more meaningful than our A/B tests. I think this is fine: not all hypothesis tests have to be quantitative.
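One reason early A/B results can feel meaningless is low statistical power: at startup traffic levels, even a healthy-looking lift often isn’t distinguishable from noise. A minimal sketch with a two-proportion z-test (the traffic and signup numbers below are hypothetical) shows how a 50% relative lift can still fail a 95% significance bar:

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """z statistic for H0: the two variants convert at the same rate."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical landing-page test: 12/200 vs 18/200 signups.
z = two_proportion_z(12, 200, 18, 200)
print(f"z = {z:.2f}")  # → z = 1.14, below the 1.96 needed at 95%
```

A 6% vs 9% conversion difference on 200 visitors per arm yields z ≈ 1.14, well short of significance, which is exactly the situation where a few expert conversations can be more informative than the data.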

Third, be careful about relying on precedent. We replicated a competitor’s prelaunch invite experience and expected to see similar levels of virality. While we did see some sharing, we realized that every audience has its idiosyncrasies: to encourage ours to share, we needed to give them more value from our initial product. No test or precedent is truly representative. It can give you directional guidance, but always be skeptical, and walk through the logic and the customer experience in your own head before trusting a test.

Fourth, tests have a ‘cycle time’, and sometimes the cycle time is so long that the test is irrelevant. Unfortunately, the best examples of this are also the most important, like repeat purchase rate and lifetime value. It is very difficult to test these KPIs without actually waiting for two years of data, which is too long to wait. Sometimes you have to take a leap of faith and fix it later if you’re wrong.

Fifth, the lean methodology has not yet penetrated the investor community. We spent $5,000 on our product and could have run significantly more tests with an incremental $10,000. However, investors don’t believe you can make meaningful progress without hundreds of thousands of dollars. They don’t appreciate that lean tests are progress, and that this kind of progress requires significantly less capital. This creates a funding gap between $0 and $100,000 for lean testing of startup ideas.



