While I really enjoy working with qualitative data, and while there are definitely camps of people who argue about the superiority of quantitative vs. qualitative data, what ultimately matters is what type of question you’re trying to answer. Certain methods are better suited to certain situations or types of information. Selecting your methods should not be about what you prefer or are most skilled at (and thus most comfortable with), but about which method will help you answer your question. Each method has its strengths and its limitations.
Here’s how I think about design research methods:

On one axis: qualitative to quantitative.
Quantitative methods are for when you want to know: does this work or not? Surveys, website analytics, A/B testing – for these, you’re looking for patterns across a large sample, whether that’s a group of people or a pile of interactions/transactions. Note: if you’ve only got data from 10 people, this is probably not going to be your best course, because even one extreme case (that’s 10%!) will skew your results.
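To make that small-sample point concrete, here’s a quick sketch in plain Python. The ratings are made up for illustration, but the math is the math:

```python
from statistics import mean

# Hypothetical satisfaction ratings from a sample of 10 people, on a 1-5 scale.
# Nine people rate us a 4; one very unhappy person rates us a 1.
ratings = [4, 4, 4, 4, 4, 4, 4, 4, 4, 1]

print(mean(ratings))      # 3.7 -- one extreme response dragged the average down
print(mean(ratings[:9]))  # 4.0 -- what the other nine people actually reported
```

With a thousand respondents, that one unhappy person barely nudges the average; with ten, they own 10% of your data.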
Qualitative methods are for when you want to understand why. Why someone does things in a certain order, or why certain functions or features are important. If someone makes a frowny face when presented with something, what were they expecting instead?
On the other axis: attitudinal to behavioral.
Attitudinal methods are for when you want to know how someone feels about something. What’s their opinion? Even if you are surveying or interviewing someone about their past behavior, this is based on recall—which will be colored by how they felt about it.
In contrast, behavioral methods allow you to observe what the person is actually doing. Quantitative behavioral methods like analyzing website visitor behavior can tell you that 30% of website visitors did in fact click on the “Donate” button from the home page. It can’t tell you why, though. A usability test* allows you to observe behavior while people are trying to use your design to complete a task—so you can see the behavior directly without it being filtered by what the person thinks is relevant to tell you (or remembers), but you also have the ability to ask them questions. Like, “I noticed you moved your mouse back and forth between the Donate button and the Join Us button a couple times before clicking on the Donate button—can you tell me more?”
*A usability test is a test of how usable your design is—it is not a test of the user! For a usability test, you typically create some realistic tasks and have an idea of how you might expect them to go about the tasks, then you give them to someone who is representative of your audience of focus. What you’re looking for is whether your design and your expectations for how someone would do that task match up to a real person’s expectations of how things should work.
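A quick aside on where a number like “30% of visitors clicked Donate” comes from: it’s just the count of unique visitors who clicked divided by the count of unique visitors. A minimal sketch, with hypothetical event data:

```python
# Hypothetical home-page events: (visitor_id, clicked_donate)
events = [
    ("v01", True),  ("v02", False), ("v03", False), ("v04", True),
    ("v05", False), ("v06", False), ("v07", True),  ("v08", False),
    ("v09", False), ("v10", False),
]

visitors = {vid for vid, _ in events}          # everyone who saw the page
clicked = {vid for vid, did in events if did}  # everyone who clicked Donate

print(f"{len(clicked) / len(visitors):.0%} of visitors clicked Donate")  # 30%
```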
Let’s walk through an example.
Say that I’m doing advocacy work and I’m trying to get people to contact their government officials about how much we would in fact like a taco truck on every corner.
- If I want to increase how many people open our emails and then sign our petitions, I’ll probably use A/B testing and look at the email analytics. (There’s a quick sketch of the A/B math right after this list.)
- If I want to know whether people who supported our taco truck on every corner initiative are interested in our other initiatives, I’d use a survey.
- If I want to understand what would motivate someone to recruit others to take action, and how we could make it easy for them to form their own street team, I’d use interviews.
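For the A/B testing bullet above, here’s a minimal sketch of the arithmetic that answers “did subject line B really beat subject line A, or did we just get lucky?” It uses a standard two-proportion z-test; the open counts and the 0.05 cutoff are hypothetical choices, not numbers from a real campaign:

```python
from math import sqrt, erf

# Hypothetical email A/B test: opens out of emails sent, per subject line.
opens_a, sent_a = 210, 1000  # subject line A: 21.0% open rate
opens_b, sent_b = 260, 1000  # subject line B: 26.0% open rate

p_a, p_b = opens_a / sent_a, opens_b / sent_b
p_pool = (opens_a + opens_b) / (sent_a + sent_b)  # pooled rate, assuming no real difference

# Two-proportion z-test: how surprising is this gap if A and B truly perform the same?
se = sqrt(p_pool * (1 - p_pool) * (1 / sent_a + 1 / sent_b))
z = (p_b - p_a) / se
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-tailed

print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}  p = {p_value:.4f}")
# With these made-up numbers, p is about 0.008 -- below the conventional 0.05
# threshold, so the difference is probably real and not noise.
```

The same test works for petition signatures, donate-button clicks, anything that boils down to a yes/no count per person.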
Typically, I will layer methods. For example, I might use web analytics or a survey to inform what questions I ask in interviews or what tasks I focus on during a usability test.
Just as the best tacos have multiple fillings with a mix of different flavors and textures, the best insights come from layering methods with a mix of different strengths – that’s what makes for the most satisfying eating experience!