Learning about focus groups from an RCT

In my previous job at the International Initiative for Impact Evaluation (3ie), I spent a lot of time telling researchers that a randomized controlled trial (RCT) with a few focus groups thrown in for good measure doesn’t count as a mixed methods impact evaluation. In the course of repeatedly saying that focus groups are not enough, I must have developed an unconscious bias against focus groups, because I was pleasantly surprised by what I learned from a recently published article written by some of my FHI 360 colleagues. In their study, Guest et al. (2017, gated) use an RCT to compare the performance of individual interviews against focus groups for collecting certain kinds of data. The article contributes in two interesting ways: first, by showing the difference in effectiveness between the two data collection methods; and second, by demonstrating how to fuse a substantively focused study and a methodological study into one, allowing us to learn about methods and substantive topics at the same time.

The setup

The thematic focus of the study was to understand health-seeking behaviors among African American men in Durham, North Carolina. Put simply, the objective was to collect as much information as possible from a sample of these men about their situations and these behaviors. The researchers defined “as much” in two ways. They were interested in getting a broad range of responses identifying health problems in the African American community, and they were interested in how much sensitive information participants might reveal unprompted. So they set up an experiment to test whether individual interviews or focus groups are more effective at eliciting a broad range of responses and unsolicited sensitive information.

They enrolled 350 men in the study, randomly assigning 310 of them to one of 40 focus groups and the other 40 men to individual interviews. The same person conducted the interviews and facilitated the focus groups, and the same data collection instrument, which included 13 questions, was used for both. All of the data collection events, as the authors call them, were audio recorded and then transcribed, and the authors used NVivo to analyze the data.
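To make the design concrete, here is a minimal sketch of that randomization in Python. The split of the 310 men across the 40 groups is my assumption for illustration; the article reports only the totals.

```python
import random

# Illustrative only: assign 350 men to 40 individual interviews and
# 40 focus groups, mirroring the totals reported in the article.
participants = list(range(350))
random.shuffle(participants)

interviews = participants[:40]   # 40 men, one interview each
pool = participants[40:]         # the remaining 310 men
# Deal the remaining men into 40 groups of 7-8 by round robin
# (the article does not report the actual group sizes).
focus_groups = [pool[i::40] for i in range(40)]
```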

They used a single list-generating question from the instrument to measure the breadth of the responses: “What do you think are the most common health problems in the African American community in Durham?” They coded the sharing of sensitive information based on all responses to all questions, defining sensitive information as “information about one’s own experience related to topics that are highly personal, taboo, illegal, or socially stigmatized.”

Findings

For the question of which method generates a broader range of responses, the authors compare the results on a per-event basis and a per-person basis. On average, each individual interview produced 0.78 unique items (health problems) and each focus group produced 0.80 unique items. Put differently, the 40 interviews yielded 31 items, and the 40 focus groups yielded 32 items. A handful of items were identified only by one method or the other (36 unique items in total). So on a per-event basis, the two methods performed equally well. But those 40 focus groups comprised 310 men. On a per-person basis, the individual interviews still produced 0.78 unique items on average (each interview involves just one man), while the focus groups produced only 0.10. If your goal is to get a big list from brainstorming, running focus groups is not an efficient way to do it!
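The arithmetic behind those rates is easy to check. Here is a quick sketch using the counts reported above:

```python
# Per-event and per-person rates of unique items, using the
# counts cited above from the article.
interview_items, focus_group_items = 31, 32   # unique health problems elicited
n_interviews, n_focus_groups = 40, 40         # data collection events
n_focus_group_men = 310                       # men across all focus groups

print(interview_items / n_interviews)         # ~0.78 per interview (and per man)
print(focus_group_items / n_focus_groups)     # 0.80 per focus group
print(focus_group_items / n_focus_group_men)  # ~0.10 per focus group participant
```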

On the other hand, focus groups performed much better in terms of how often sensitive information was disclosed. Of the ten sensitive topics identified, two were raised only in focus groups, and none were raised only in interviews. All ten were raised more frequently in the 40 focus groups than in the 40 interviews, and for four topics that difference in frequencies is statistically significant (p-value < 0.05). To give one example, mental illness was raised 15 times in the 40 focus groups and only six times in the 40 individual interviews. (Remember, the sensitive information was always unsolicited.)
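This summary doesn’t specify which statistical test the authors used, so the following is only a sketch of one plausible approach: treating each event as either raising the topic or not, and running Fisher’s exact test on the mental illness counts. The authors’ actual analysis may well differ.

```python
# One plausible way to compare disclosure frequencies between arms:
# Fisher's exact test on the mental illness counts, assuming at most
# one "raise" per event (an assumption the article may not support).
from scipy.stats import fisher_exact

table = [[15, 40 - 15],   # focus groups: raised, not raised
         [6, 40 - 6]]     # interviews:   raised, not raised
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```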

I initially considered this finding counter-intuitive; I expected men to reveal more in private. And Guest et al.’s literature review shows this finding runs counter to previous studies. But it also makes sense to me that once someone starts sharing personal information, others might feel more comfortable sharing their own. I’d be interested in knowing more about the different patterns of disclosure of sensitive information across the focus groups. The authors also point out that the data collector (interviewer/facilitator) was a Caucasian woman, which could explain the difference in dynamics between a one-on-one interview and a focus group with several African American men.

Learning about methods

At the end of their article, the authors invite other researchers to similarly look for opportunities to conduct studies within studies to learn more about methods. What I find compelling is that this wasn’t just a “what we learned in the process” reflection on methods after conducting a study, but a carefully designed evaluation of methods from the beginning. My past experience has been so dominated by debates over quantitative vs. qualitative and experimental vs. non-experimental methods that a study of methods like this one is refreshing! There are other good examples, like this one on testing survey length, but too few of them. I hope many researchers accept my colleagues’ invitation to do more research like this.

Photo credit: People vector created by Freepik
