Lessons learned from applying LTV tools

by Neil Chheda

For my LTV project, I helped develop and launch the minimum viable product of Docphin, a platform that allows physicians to easily find, personalize, and share medical news, research, and other hospital-related content. I used three tools: 1) product trial, 2) focus groups, and 3) interviews.

The goal was to understand product-market fit and get customer feedback to iterate on the product in lean style. Unlike releasing social gaming products at Zynga—which immediately had millions of users—we launched Docphin only to 200 users of three types—residents, administrators, and attending physicians. As a result, we surveyed and interviewed every user to solicit feedback. While the outcome of this feedback will guide the next version of the product, I learned some critical meta-lessons about using these tools. Here they are:

1) If you want to hear your users, stop talking.

Immediately after we launched Docphin, we pushed a lot of content onto our initial 200 users. First, every user had access to personalized medical news and research with notifications. Second, each user group had access to tutorial videos explaining how features worked. Finally, we ran a short daily poll that asked users to rate their experience on multiple dimensions.

Not surprisingly, fewer than 25% of users submitted survey feedback in the first two days. Of those who did, most gave feedback that wasn’t very helpful (they rated features five stars or zero stars, without any reasoning for why the feature was good or bad).

The problem was that we assumed we knew what we wanted to learn from users. First, the content overload flustered users, and many simply didn’t know where to start when giving feedback on features that didn’t yet make sense to them. But more importantly, we spoke too much and didn’t listen. The “in-your-face” nature of the pre-populated questions only allowed users to answer what we thought we wanted to know. Instead, we should have asked open-ended questions about what users liked most, disliked most, and why.

Key Learning: Users have great insights. Talk less and give them opportunities to speak freely rather than bounding them by your notions of what you think you need to learn.

2) Not all customers are right.

Some customers loved a given feature, and some customers hated it. For example, one feature tracks which training materials have been viewed and sends automatic updates when materials are overdue. On the first day, every user who gave feedback hated it. On the second day, just before we turned it off, every user who gave feedback loved it. Which users should we listen to?

As it turns out, residents hated the feature, while attending physicians and coordinators loved it. While both user groups are critical to the product, coordinators drive the customer acquisition model, since they make the decision to adopt the product for their program. Ultimately, we kept the feature, and over time, residents came to understand why it’s necessary.

Key Learning: If you get mixed reactions, segment the users and identify which group matters most to customer acquisition.

3) Beware of bias

While we did not have a case discussion feature, we asked two questions:

1. “If we were to add a case discussion feature, would you use it?”

   30% of users said yes.

2. “If we removed the current case discussion feature, would you miss it?”

   75% of users said yes.

The point here is that surveys often introduce bias, and this bias can lead you to misunderstand your users. The specific bias here is loss aversion: the prospect of losing something you are already paying for and receiving is more painful than the delight of getting something extra.

Key Learning: When surveying, word questions carefully to avoid bias. Where possible, allow users to give open-ended feedback. And don’t just listen to what users say; watch how they behave (0% were using such a feature, since it didn’t exist!).

4) Use dynamic surveying to get the most from users

For the first few questions of the survey, we had high response rates. However, the last few questions, usually the most important ones, had the lowest response rates. In addition, many questions did not apply to the users taking the survey (e.g., “Do you use this product? If yes, what do you like most about it?”).


The problem was that we just sat down and thought about what we wanted to know, not about the user’s experience of filling out the survey. We changed this process to be more dynamic and adaptive to user responses. During an interview, if a user expressed particular dislike of a feature, we would dig into it. If the reason wasn’t clear, we asked more questions to unearth the underlying problem. Similarly, if a user had a particularly insightful piece of feedback on a specific feature, rather than move on to the next question, the interviewer would ask more about that feature.
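To make this concrete, here is a minimal sketch of what an adaptive question flow could look like if scripted in Python. The questions, the rating scale, and the follow-up thresholds are hypothetical illustrations, not the actual Docphin survey.

    # Minimal sketch of an adaptive survey flow (hypothetical questions and thresholds).
    # Questions that don't apply are skipped, and strong reactions trigger follow-ups
    # instead of marching through a fixed list.

    FEATURES = ["personalized news feed", "tutorial videos", "training tracker"]

    def ask(prompt):
        """Collect a free-text answer from the respondent."""
        return input(prompt + " ").strip()

    def ask_rating(prompt):
        """Collect a 1-5 rating, re-prompting on invalid input."""
        while True:
            answer = ask(prompt + " (1-5)")
            if answer.isdigit() and 1 <= int(answer) <= 5:
                return int(answer)

    def run_survey():
        responses = {}
        uses_product = ask("Do you use the product? (yes/no)").lower().startswith("y")
        responses["uses_product"] = uses_product
        if not uses_product:
            # Skip feature questions that don't apply; ask one open-ended question instead.
            responses["why_not"] = ask("What has kept you from using it?")
            return responses

        for feature in FEATURES:
            rating = ask_rating(f"How would you rate the {feature}?")
            responses[feature] = {"rating": rating}
            if rating <= 2:
                # Strong dislike: dig into the underlying problem.
                responses[feature]["problem"] = ask(f"What frustrates you about the {feature}?")
            elif rating >= 4:
                # Strong like: ask for more detail instead of moving on.
                responses[feature]["detail"] = ask(f"What do you like most about the {feature}?")

        responses["open_feedback"] = ask("Anything else you'd like us to know?")
        return responses

    if __name__ == "__main__":
        print(run_survey())

The point of the branching is that effort is spent where the respondent actually has something to say: irrelevant questions are skipped, and strong reactions earn follow-ups.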

Key Learning: Don’t ask a laundry list of questions. Be dynamic during interviews, and you’ll get more insight “bang” for your survey “buck.”


