How the MVP channel can impact results

By Angela Lopes

Recently I've been thinking about how to test the viability of a business idea of mine by applying, of course, the principles of lean start-up.  As I come up with hypotheses and tests to prove or disprove them, however, I find myself wrestling with the biases introduced through test design and deployment.  In particular, how an MVP (or other marketing test) is deployed will certainly impact, and potentially massively distort, the results of the test.  Channel-related limitations of the MVP can manifest in two ways: the test participants reached can differ from the target customer, or the channel itself can elicit behavior that differs from what the actual product would produce.  Here I've explored these issues by highlighting a few examples.

Does my MVP reach my target customer?

In the Lit Motors case, Daniel Kim sought to prove demand for an enclosed two-wheel vehicle using several tests, including a display at the Electric Vehicle Festival in Portland, Oregon.  I found myself asking "who is their target customer?" and "are they reaching their target customer through this test?"  If their aim was to test demand among young urban professionals or among older traditional motorcycle enthusiasts, they were unlikely to find many members of either group at the EV festival.  Further, the test itself introduced an additional self-selection bias, since only festival attendees willing to spend time on a survey were allowed to interact with the product.  So how useful was this test, really?

Another method they used to test consumer demand was to set up a stylish website and see who would indicate interest by signing up for emails.  In general, I'm suspicious of results from any "smoke test" style experiment delivered online for a physical product that will be distributed through physical channels.  The customers who eventually have access to and purchase the product may not spend much time online, and vice versa.

In my specific case, I am trying to find survey and beta participants to test a product that is ultimately most valuable to people who want to save time.  How can I attract suitable candidates when either test requires an investment of their time and therefore deters time-poor participants?  Offering prizes or other compensation only goes so far: eventually I hit the point where candidates value their time more than the prize.  Given this limitation, can I confidently draw any conclusions about my "target customer's" behavior or needs?

Does my MVP encourage misleading behavior?

In his book "The Lean Startup", Eric Ries discusses several different MVP setups, including the concierge MVP.  The specific example he cites is the initial MVP that Food on the Table (FotT) set up for its first customer.  This customer got the VIP treatment, with personal visits from the CEO to determine her needs and the product - a shopping list of ingredients and recipes - hand-delivered to her door.  What a great way to intimately learn what the customer wants, being right there to ask questions and get feedback.  Right?

As I think about designing my own concierge MVP, however, I run into problems.  If I set up a concierge service over email, say, will this encourage use of my "product" that I wouldn't see with a complete online or mobile app solution?  The biggest concern is the push versus pull nature of the different channels.  Maybe someone will make use of a feature if it's shoved in their inbox every day, but will they seek it out if it requires them to log onto a website?  So thinking back to the FotT example, how might the "channel" or nature in which the service was delivered impact the use of the service?


Ultimately, I think the issues raised above point to a need to consider how the MVP deployment channel differs from the final product's likely go-to-market channel, and to apply healthy skepticism to any MVP test as a result.  Perhaps it is enough to be aware of the potential biases and make sure any thresholds set to define MVP success or failure are adjusted to reflect them.  In any case, I believe an MVP test deployed through a different channel can still be useful in shedding light on the relative importance of product features or preferred value propositions, e.g. through split testing, while any absolute measure should probably be ignored given the likely biases introduced.
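If I do go the split-testing route, the relative comparison itself is simple arithmetic.  Here's a minimal sketch (all counts are hypothetical, not real data from any of the examples above) of comparing signup rates between two landing-page variants with a two-proportion z-test:

```python
from math import sqrt, erf

def split_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's signup rate different from A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that both variants convert equally
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_a, p_b, z, p_value

# Hypothetical split test: 30/1000 signups on variant A, 55/1000 on variant B
p_a, p_b, z, p = split_test(30, 1000, 55, 1000)
print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}  p = {p:.4f}")
```

The point is that even if channel bias inflates or deflates both rates, the *comparison* between variants shown to the same (biased) audience can still tell me which value proposition resonates more; only the absolute signup rates should be distrusted.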

