When to use quantitative vs. qualitative methods

While I really enjoy working with qualitative data, and while there are definitely camps of people who argue about the superiority of quantitative vs. qualitative data, what ultimately matters is what type of question you’re trying to answer.  Certain methods are better suited for certain situations or types of information.  Selecting your methods should not be about what you prefer or are most skilled at (and thus most comfortable with), but about which method will help you answer your question.  Each method has its strengths and its limitations.

Here’s how I think about design research methods:

Image is a chart with 4 quadrants and 2 axes.  Y-axis is a spectrum of Qualitative (Tell me why) to Quantitative (Does this work?).  X-axis is a spectrum of Attitudinal (How do you feel about it?) to Behavioral (What do you do?).  Clockwise from the top left: Attitudinal & Qualitative includes Interviews; Behavioral & Qualitative includes Contextual Inquiry, Usability Testing, Service Safari; Behavioral & Quantitative includes A/B Testing, Web Analytics, Unmoderated Testing; and Attitudinal & Quantitative includes Surveys.

On one axis: qualitative to quantitative. 

Quantitative methods are for when you want to know, does this work or not?  Surveys, website analytics, A/B testing – for these, you’re looking for patterns across a large sample size, or group of people (or interactions/transactions).  Note: If you’ve only got data from 10 people, this is probably not going to be your best course, because even 1 extreme case (10%!) is going to skew your results.
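
To make that concrete, here’s a minimal sketch with made-up satisfaction ratings showing how a single extreme response shifts the average in a sample of 10:

```python
# Minimal sketch with hypothetical numbers: nine people rate us 4 out of 5, one rates us 0.
ratings = [4, 4, 4, 4, 4, 4, 4, 4, 4, 0]

mean_with_outlier = sum(ratings) / len(ratings)      # 3.6
mean_without_outlier = sum(ratings[:-1]) / 9         # 4.0

print(f"Average including the one extreme case: {mean_with_outlier:.1f}")
print(f"Average excluding the one extreme case: {mean_without_outlier:.1f}")
```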

Qualitative methods are for when you want to understand why.  Why someone does things in a certain order, or why certain functions or features are important.  If someone makes a frowny face when presented with something, what is it they were expecting instead?

On the other axis: attitudinal to behavioral.

Attitudinal methods are for when you want to know how someone feels about something.  What’s their opinion?  Even if you are surveying or interviewing someone about their past behavior, this is based on recall—which will be colored by how they felt about it. 

In contrast, behavioral methods allow you to observe what the person is actually doing.  Quantitative behavioral methods like analyzing website visitor behavior can tell you that 30% of website visitors did in fact click on the “Donate” button from the home page.  It can’t tell you why, though.  A usability test* allows you to observe behavior while people are trying to use your design to complete a task—so you can see the behavior directly without it being filtered by what the person thinks is relevant to tell you (or remembers), but you also have the ability to ask them questions.  Like, “I noticed you moved your mouse back and forth between the Donate button and the Join Us button a couple times before clicking on the Donate button—can you tell me more?”
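
As a rough illustration of the counting behind that 30% figure (the numbers here are made up, and real analytics tools do this for you), the math is just clicks divided by visitors:

```python
# Hypothetical analytics counts: visitors who saw the home page vs. those who clicked "Donate".
home_page_visitors = 1200
donate_clicks = 360

click_through_rate = donate_clicks / home_page_visitors
print(f"Donate click-through rate: {click_through_rate:.0%}")  # 30%
```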

*A usability test is a test of how usable your design is—it is not a test of the user!  For a usability test, you typically create some realistic tasks and have an idea of how you might expect them to go about the tasks, then you give them to someone who is representative of your audience of focus.  What you’re looking for is whether your design and your expectations for how someone would do that task match up to a real person’s expectations of how things should work.   

Let’s walk through an example.

Say that I’m doing advocacy work and I’m trying to get people to contact their government officials about how much we would in fact like a taco truck on every corner.

  • If I want to increase how many people open our emails and then sign our petitions, I’ll probably use A/B testing and look at the email analytics (there’s a quick sketch of that comparison after this list).
  • If I want to know whether people who supported our taco truck on every corner initiative are interested in our other initiatives, I’d use a survey.
  • If I want to understand what would motivate someone to recruit others to take action and how we could make it easy for them to form their own street team – I’d use interviews.
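
For the email example in the first bullet, here’s a minimal sketch of an A/B comparison of two subject lines.  The numbers are made up, and most email tools report this for you; the point is that you’re comparing rates across a large group rather than asking anyone why:

```python
# Hypothetical A/B test: same email body, two subject lines, sent to similar-sized groups.
from math import sqrt

sent_a, opened_a = 2000, 440   # subject line A
sent_b, opened_b = 2000, 520   # subject line B

rate_a = opened_a / sent_a     # 22% open rate
rate_b = opened_b / sent_b     # 26% open rate

# Two-proportion z-test (normal approximation): is the gap bigger than chance alone would explain?
pooled = (opened_a + opened_b) / (sent_a + sent_b)
standard_error = sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
z = (rate_b - rate_a) / standard_error

print(f"Open rate A: {rate_a:.1%}, open rate B: {rate_b:.1%}")
print(f"z = {z:.2f} (roughly, |z| > 1.96 suggests a real difference at the 95% confidence level)")
```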

Typically, I will layer methods.  For example, I might use web analytics or a survey to inform what questions I ask in interviews or what tasks I focus on during a usability test. 

Just as the best tacos have multiple fillings with a mix of different flavors and textures – the same goes for gathering data: mixing methods makes for the most enjoyable eating experience!

How to prioritize which data to collect and keep

One of the things that can feel overwhelming for a lot of people, whether you’re beginning to collect data for the first time or you’re trying to clean up a decade’s worth of data, is how to prioritize which data you need.

Of course, there’s data you’re required to collect and keep due to regulations, taxes, or grant contracts.

And then there’s the data that you will need in the course of your operations—like where to send donation thank you letters, volunteer contact information, client names, budget totals, event attendees, etc.

But of that second category, which data do you need to keep?

And, if you’re starting out, what data should you collect that would be helpful?

Here’s how I like to answer that question:

| A question you can answer using data | What action can the organization take based on this data? | What data do you have (or can you get) that would be needed to answer this? |
| --- | --- | --- |
| Example: Which social media channel generates the most conversions? | Example: We can change how much time we spend on various channels | Example: Click-through rate |

Table to use for prioritizing data to collect

By filling in this table, I can get a lot of clarity on which data is worth keeping or collecting.

First, identify some questions you’d like to be able to answer using data.

These are typically things that might help you make a decision.  For example, is our current marketing strategy working as we’d hoped?  Is our program achieving the intended goal?  Did we budget appropriately for our technology needs? 

Next, determine what actions you could take based on answering that question.

Perhaps you’ll spend more time on your TikTok content than YouTube if the data shows that’s more effective.  Maybe, combined with demographic data, you’ll notice that your program is more effective for certain age groups than for others.  Or you can budget more realistically for your technology needs in the future.

Last but not least, determine what data is needed to answer that question, and evaluate whether you can actually obtain that data and whether it would be ethical to collect or keep it.

Of course, there are plenty of things we’d love to be able to answer.  For example, we’d be interested to know what drives those seemingly random donations from new donors.  But maybe it’s really not feasible to obtain that data beyond how they got to our donation page (e.g., from an email vs. social media vs. something else).  Or perhaps we’d like to know how long the effect of our program lasts, but we work with a population that moves frequently. 

Then there’s also data that is highly sensitive in nature.  Like someone’s undocumented status or the fact that an individual has contacted your organization about services related to intimate partner violence.

If you are dealing with data that puts your constituents at risk should it end up in the wrong hands, consider whether you actually need this data to fulfill your mission and if so, what steps you can take to minimize data collected, secure it, and carefully discard it as soon as possible. 

NTEN’s Equity Guide for Nonprofit Technology is a great resource to check out for more considerations for your data collection and data management practices.