Starting Something – Lessons Learned

by Michael Belkin, Ben Story, Travis Webb, Jadelind Wong

Our team developed a beta version of a social, location-based iPhone app that makes it easy to meet new people at the same venue. We were excited to get the app downloaded and tested by as many people as possible. Luckily for us, if there is one thing we know well, it is that graduate students never need a reason to attend a party. We threw a “Grad Tech Mixer” party for Boston-area graduate students and used the event as a testing ground for the app. The event turned out to be a success, and some of our most valuable learnings came from simply talking with people who were using the app or demonstrating it to people who didn’t have iPhones. The responses we received were very positive and encouraging. Below are some of our thoughts and learnings:

Software Development

Perhaps the most critical task for a lean software developer is to determine which set of features represents “minimum viability” in the eyes of the consumer. These decisions directly drive sprints and the development cycle, ultimately affecting the number of real world trials that can be achieved and the quality of the resulting analytics.

One example from our experience was the integration of push notifications for in-app messaging. Though simple in concept, the handoff between client and server, along with Apple’s complex push notification architecture, led us to de-prioritize this feature. However, at the live event, many users put their phones away after registration and weren’t aware of the messages they had received. This not only significantly reduced interaction, but also discouraged the users who attempted to reach out to people in the bar.
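To make the client/server handoff concrete, here is a minimal sketch of the server side of push messaging. All names (`PushGateway`, `register_device`, `send_message`) are hypothetical and the delivery step is stubbed out; a real implementation would forward the payload to Apple’s APNs service rather than queue it locally. It does, however, illustrate the failure mode we hit: without a registered token and a delivered push, the recipient never learns a message arrived.

```python
class PushGateway:
    """Illustrative in-memory stand-in for the server side of push messaging."""

    def __init__(self):
        self._tokens = {}   # user_id -> device token uploaded by the client app
        self._outbox = []   # payloads a real server would hand off to APNs

    def register_device(self, user_id, device_token):
        # The iPhone client obtains this token from iOS and uploads it on login.
        self._tokens[user_id] = device_token

    def send_message(self, from_user, to_user, text):
        token = self._tokens.get(to_user)
        if token is None:
            # No token on file: the recipient never sees the message --
            # the interaction-killing gap we observed at the live event.
            return False
        self._outbox.append({
            "device_token": token,
            "alert": f"New message from {from_user}: {text}",
        })
        return True


gateway = PushGateway()
gateway.register_device("alice", "abc123token")
print(gateway.send_message("bob", "alice", "Hi, come say hello!"))  # True
print(gateway.send_message("bob", "carol", "Anyone there?"))        # False
```

Even in this toy form, the sketch shows why the feature is easy to underestimate: the token registration, the lookup, and the delivery each add a cross-system dependency that has to work at event time.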

We also believe that in its early stages, software needs the opportunity to “fail quickly and often.” Our compressed timeline didn’t allow for testing opportunities before the large-scale event. This put significant pressure on the development team and resulted in some features being over-engineered, while other bugs remained undiscovered until the live event. The developer’s artificial environment is not a fair proxy for real-world usage, and future teams looking to use MVP software as a hypothesis-testing tool should plan several smaller usage tests, beginning with the most forgiving audiences before working up to significant real-world trials.

Customer Surveys

To gain insights about existing customer behavior, we started with a survey that asked basic questions about social media usage, location-based check-ins, and technology-based socialization. We also asked about potential usage behavior and features users might like to see in their favorite applications.

Our first learning was just how hard it is to word questions such that you get unbiased response data while still encouraging people to answer. We provided check lists wherever possible to reduce the time and effort required to answer. The response rate was very high for all check-box questions; anything that required added information or a qualified, typed answer got almost zero responses.

We also observed that people are much more likely to answer questions about their existing behavior and preferences than to think about potential behaviors or features they might value. There is a false-negative risk: questions about behavior change will initially be met with skepticism and negativity. We would use surveys mostly to learn about existing behavior and downplay any “wish list” questions, which are probably better addressed with other tools.

Usability Tests

To get more in-depth data on how users interact with the app, we ran a series of usability tests prior to the launch party. These tests generated a number of anecdotes that helped refine the product. In general, we found that usability tests, at least as we defined them, are really effective at testing specific flows and the intuitiveness of your UI; however, they are less effective at feature prioritization and gauging interest/intent. As a result, we’ve identified two challenges that we would encourage future generations to think about as they deploy these tools:

First, developing an effective script is challenging because the mechanism is inherently heavy-handed; nevertheless, there’s a balance to strike between guiding and observing if you have specific objectives in mind. Ultimately, the litmus test should be whether your script links back to a hypothesis you are trying to test. If your hypothesis is narrow in scope – testing a specific feature or user flow – then a rigid script probably makes sense. If your hypothesis is broader, it might be valuable to give the user free rein for the first 5-10 minutes of the test.

In that vein, the second issue we found is getting people to focus their commentary on higher-level issues rather than technical minutiae that the entrepreneur abstracts away. We were constantly trying to bring the discussion up a level from “that screen took a long time to load” to “I would use this feature because…” Again, this reiterates the need to test at different points in the product’s lifecycle for different reasons (technical QA vs. fit with the CVP) and to perform tests that are linked to the hypothesis you are in the process of testing.

