Better safe than sorry


Recently, I read an article by Erika Hall on How the ‘Failure’ Culture of Startups Is Killing Innovation. I had a talk coming up and couldn’t decide what to talk about, and since startups are “big” in Serbia, the article provided a nice catchphrase for the talk.

I agree with everything she said, but I need to elaborate, because it’s not only innovation this culture is killing. It’s killing the founders as well. Maybe not literally, but that might only be a matter of time.

Startup culture uses a bunch of clichés to tell [mostly] young people that it’s ok to invest an enormous amount of time and energy into something and then let it fail. Well, it’s not ok. It’s bollocks. There is nothing wrong with investing your time and effort into something you are passionate about, but you can make sure that the risk of failure is as small as possible.

Do your research, people!

Erika has explained very nicely what research is and isn’t, so if you haven’t read her article, go read it now. I’ll wait.

You’re done? Good. What she hasn’t explained – and that wasn’t the point of her article – is which research techniques you could use.

Techniques we use

These research techniques are well known, and numerous articles have been written about them. Google and ye shall find. What I’ll do is write briefly about each technique we use at the Studio, and how and when we usually use it.

Heuristic evaluation is very easy to organize, and we use it when we are redesigning an existing product or when at least some version of the product is ready. You will need a number of evaluators who have the knowledge to do the evaluation. This number can be anything between 2 and 5 (you won’t get a better result with more than 5 evaluators). The analysis is done against a predefined set of heuristics. You could use the ones defined by Jakob Nielsen, or you can set up your own, depending on where you want to go with your product.
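
To make the output of such an evaluation concrete, here is a minimal sketch of collating evaluators’ findings. The heuristics, ratings and averaging scheme below are illustrative assumptions, not a description of our actual process; the severity ratings follow Nielsen’s 0–4 scale.

```python
# A minimal sketch of collating heuristic-evaluation findings.
# The heuristics and ratings are hypothetical placeholders.
from collections import defaultdict

# Each finding: (heuristic violated, one severity rating per evaluator, 0-4).
findings = [
    ("Visibility of system status", [2, 3, 2]),
    ("User control and freedom", [4, 3, 4]),
    ("Consistency and standards", [1, 2, 1]),
    ("User control and freedom", [3, 2, 3]),
]

by_heuristic = defaultdict(list)
for heuristic, ratings in findings:
    # Average the evaluators' ratings so a single outlier doesn't dominate.
    by_heuristic[heuristic].append(sum(ratings) / len(ratings))

# Worst problems first, so the redesign backlog is ordered by severity.
for heuristic, scores in sorted(by_heuristic.items(), key=lambda kv: -max(kv[1])):
    print(f"{heuristic}: worst finding {max(scores):.1f}, {len(scores)} finding(s)")
```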

Competitive analysis is another technique we use often, along with heuristic evaluation. People often think this technique is used to see what the competition is doing and copy it. They couldn’t be farther from the truth. We use this technique to see where the competition falls short, so we can fill that gap and give the client’s product a competitive advantage. We begin by defining a set of goals for the product to accomplish, and we look at how successful the competition was in fulfilling those goals. Next, we map that out into a spider chart. The areas where the competition has failed stare you right in the face.
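
If you want to draw that spider chart programmatically, here is a minimal sketch using matplotlib. The goals and the 0–5 scores for the two competitors are made-up placeholders; use whatever goals and scale fit your product.

```python
# A minimal sketch of a competitive-analysis spider chart.
# The goals and scores below are hypothetical placeholders.
import numpy as np
import matplotlib.pyplot as plt

goals = ["Onboarding", "Search", "Checkout", "Support", "Mobile"]
scores = {
    "Competitor A": [4, 2, 3, 1, 2],
    "Competitor B": [3, 4, 1, 2, 3],
}

# One angle per goal; repeat the first point to close each polygon.
angles = np.linspace(0, 2 * np.pi, len(goals), endpoint=False).tolist()
angles += angles[:1]

fig, ax = plt.subplots(subplot_kw={"polar": True})
for name, values in scores.items():
    values = values + values[:1]
    ax.plot(angles, values, label=name)
    ax.fill(angles, values, alpha=0.1)

ax.set_xticks(angles[:-1])
ax.set_xticklabels(goals)
ax.set_ylim(0, 5)
ax.legend(loc="upper right")
plt.show()  # low scores across all competitors mark the gaps worth filling
```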

When dealing with a full redesign or a design from scratch, we use surveys and interviews – stakeholder, domain expert and user interviews. Stakeholder interviews are usually what we begin with. That’s how we find out what the client wants to get from the product and what they would like their users to get from it. Domain expert interviews provide invaluable insight into how the system operates. Domain experts are usually system users themselves, but they are not the ones we are designing for. The next step is the survey. Surveys are the easiest way to collect quantitative data about current and potential users. We pose typical demographic questions and some probing questions about the product we’re designing. This helps us segment the users for more in-depth research using interviews.
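
As an illustration of that segmentation step, here is a minimal sketch using pandas. The file name and the column names (age, usage_frequency) are hypothetical; your survey tool will export its own schema.

```python
# A minimal sketch of segmenting survey respondents for interview recruiting.
# "survey_responses.csv" and its columns are hypothetical placeholders.
import pandas as pd

responses = pd.read_csv("survey_responses.csv")

# Bucket respondents into age groups.
responses["age_group"] = pd.cut(
    responses["age"],
    bins=[0, 24, 34, 49, 120],
    labels=["<25", "25-34", "35-49", "50+"],
)

# Segment sizes show which groups exist and whom to recruit for interviews.
segments = responses.groupby(["age_group", "usage_frequency"], observed=True).size()
print(segments.sort_values(ascending=False))
```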

Organizing and conducting interviews is hard, time-consuming and expensive, but the benefits are worth it. That’s how you get the answer to the all-important question: Why? You can find out more about organizing and conducting interviews by reading up on Developing your interviewing skills part 1 and part 2, or by getting some advice from Whitney Hess.

Usability testing is not usually regarded as a research technique per se; it’s more often used to validate an existing product or prototype. However, we use it both in our research phase and for validation. For a new product, you can do usability testing as soon as you have sketches to show.

The last technique in this set is contextual inquiry. This is observation in context and, without any intention to deter you from it, I must warn you that it is by far the most time-consuming and most expensive of these research techniques. First of all, you need to find users who are willing to let you observe them while they work. This can be tricky if your users deal with confidential information on a daily basis (think banks). Secondly, you need to invest a tremendous amount of time in shadowing and observing the users in their day-to-day work. Nevertheless, this technique will give you the answers not only to why but also to how the users do what they do. As Dr. House would say, “People lie”, and if you rely only on answers to interview questions, you might not get the whole truth.

Conclusion

All these techniques should help you determine whether you are building the right product to solve a real problem. Sometimes you’ll need a few days for heuristic evaluation and competitive analysis. Sometimes you’ll need a few weeks for interviews. Sometimes you’ll need months or even years for contextual inquiry. You can use as many or as few of them as you think you need, as long as in the end you know where you stand with your product. There is no point in solving a problem no one has.
This is the phase in product development where startups learn what it is they should say “No” to.

I’ll admit that sometimes it’s quicker and easier to build a working product or prototype and ask questions later, but often it’s not.

Fail often, fail early but don’t forget to fail cheap.

2 Responses

  1. Janko says:

    Congrats on your first article!

    You nicely summed up the techniques we use. You didn’t mention participation, a technique we used on that tennis project, when I participated in a tournament and we realized I suck at tennis even though I thought I had decent skills :)

    Another reminder that we shouldn’t just listen to what users say, but also observe what they do.

  2. […] It’s still so frustrating to see how many companies embark on their redesigns or MVPs without doing contextual research first. You might get the usability of your product right, but without utility, it will still be useless. As Milica T. Jovanovic points out in Better safe than sorry: […]