The Survey

By Justin Ekins

One of the core tenets of the Lean Startup methodology is the value of user feedback. User feedback is critical at every stage of product development. During the ideation phase, feedback from potential users in the form of surveys, research reports, etc. can help an entrepreneur understand the market potential and the nature of an unmet need. During product design, usability tests, qualitative interviews, and ethnographic-style research that observes the pain points of a customer can inform nuanced aspects of a product’s ideal design. And the value of user feedback does not go away once a startup becomes an established company; indeed, one of the great advantages that large incumbents have over early-stage companies is active feedback from their users.

One of the most common methods that I have seen employed (and have employed myself) is the user (or prospective user) survey. The popularity of this method has more to do with how few resources are required to deploy one than with the quality of the insights it yields. (Surveys can often be run for free.)

There are a number of pitfalls that an entrepreneur must avoid when deploying a user survey.

First, entrepreneurs cannot use surveys to extract information that surveys are not capable of providing.

For instance, surveys are not very effective at prioritizing a product’s feature set or divining the most effective price point for a product. (I cringe when I take a survey that asks me my willingness to pay for some new product.) In many cases, users’ revealed preferences through their actions can be much more valuable than—and entirely different from—their stated preferences. It is much easier for me to say that I would sign up for a $15/mo. subscription service than it is for me to actually do so—and so I am likely to answer such a question on a survey in the affirmative more often than I would actually sign up for such a service in real life.

Another risk of user surveys is that the method for obtaining respondents may skew the data toward a particular subset of users who are not representative of the target market.
For our cab sharing application, we gathered some customer research by posting a SurveyMonkey link in our class Facebook group. Most of the respondents who volunteered their time to answer a survey about a cab sharing application are probably those who would be excited to use such an application. As a result, any positive feedback we get from this group should be interpreted as representing enthusiastic early adopters rather than our average user.

That’s not to say that you can’t get much value from such surveys. We learned from our surveys that users preferred email notifications to push notifications—a finding that contradicts my own personal habit of managing my email inflow down to Inbox Zero. The important thing is to understand the limitations of the particular research method employed and to respond accordingly. Specifically, this means being realistic about the kind of data you hope to get out of a survey, understanding the likely skewed respondent set that you will be working with, and accompanying surveys with other methods for more in-depth (though probably more expensive) user feedback.
