Mobile-based surveys: Can you hear me now?

The technologies and processes we now have at our disposal to locate individuals and populations, push information to them, and gather information from or about them are being developed and refined at breakneck speed. Tools utilizing mobile technologies alone – voice services, SMS, Interactive Voice Response (IVR), Unstructured Supplementary Service Data (USSD), location-based services, data-based survey apps, chatbots – have introduced new opportunities to reduce the time, cost, uncertainty and risk of gathering data and feedback. As mobile coverage and access have expanded globally, governments, marketing firms, research organizations and international development actors alike have been iterating on approaches for using mobile-based surveys in their initiatives and programs. This post presents key takeaway lessons on the methodology, feasibility and suitability of mobile surveys, based on experience from our Mobile Solutions, Technical Assistance and Research (mSTAR) project in Mozambique.

Design of the Mozambique Mobile Access and Use Study

With support from USAID/Mozambique and DFID, mSTAR recently completed the large-scale Mobile Access and Use Study (MAUS) employing a mobile-based survey component. The study examined the availability and accessibility of mobile technologies and the dynamic ways they are being used in the daily lives of Mozambicans.

MAUS consisted of two separate surveys in four provinces of Mozambique. One survey, the household survey, utilized traditional face-to-face interviews to obtain data from adults living in the target provinces. A second survey, the Computer-Assisted Telephone Interviewing (CATI) survey, featured remote data collection using live enumerators and dedicated software for dialing pre-loaded mobile phone numbers and capturing responses digitally.

Based in a Maputo call center, CATI survey enumerators conducted interviews with thousands of active mobile users in the target provinces in February (baseline) and June (endline) 2016. The CATI survey sought not only to examine mobile use, but also to test the impact of common methods used in mobile-based surveys to increase participation rates and retain respondents over time: reminders via SMS and airtime incentives. To test these interventions, the survey employed an experimental design in which respondents completing surveys during baseline calls were placed randomly into one of three groups: 1) a control group which received no interaction during an interim period of four months, 2) a treatment group which received a reminder SMS once a week for four months, and 3) a treatment group which received a reminder SMS and airtime credit to their account each week.
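
As a rough illustration of this kind of three-arm randomization (not a description of the study's actual tooling), a minimal Python sketch might look like this; the respondent IDs, function name and round-robin balancing are our own assumptions.

```python
import random

# Illustrative arm labels; hypothetical helper, not the study's actual tooling.
ARMS = ["control", "sms_reminder", "sms_plus_airtime"]

def assign_arms(respondent_ids, seed=2016):
    """Shuffle baseline completers, then deal them into the three arms round-robin."""
    rng = random.Random(seed)            # fixed seed makes the assignment reproducible
    ids = list(respondent_ids)
    rng.shuffle(ids)
    return {rid: ARMS[i % len(ARMS)] for i, rid in enumerate(ids)}

# Hypothetical respondent IDs for baseline completers
assignments = assign_arms([f"R{i:04d}" for i in range(1, 3001)])
```

Shuffling before a round-robin deal keeps the three groups equal in size while still randomizing who lands where.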

Our work yielded important lessons for conducting a study with mobile-based surveys. Below are key takeaway messages that summarize our experience. See here and here for the survey findings.

Five takeaways from our mobile-based survey in Mozambique
  1. Do your homework to determine whether mobile surveys are appropriate for the population you are trying to reach.
    The design of the CATI survey in Mozambique called for simple stratification by province and operator. Although this survey was not intended to be representative of the entire population, the results were revealing: our respondent population differed significantly from the general population in each province. Only 33% of those surveyed were female (compared to 51% of the general population), and 66% had received a secondary education or above (compared to 28% of the population). Similarly, in a multi-round SMS-based poll conducted during the West African Ebola outbreak in 2014, males and youth were frequently over-represented, with over 80% of the Liberian and Sierra Leonean respondents between the ages of 15 and 34. A quick way to gauge such skew is sketched at the end of this item.

    In our experience, significant research, sampling discussions and persistence are required to capture data beyond the demographic that has proven easiest to reach on mobiles: predominantly urban, male, literate and young populations who are comfortable using their phones often and can afford to do so. You may read about firms that claim to reach “virtually anyone on the globe,” but be sure to challenge that notion and think collaboratively about the feasibility of reaching your intended populations.
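
    As a back-of-the-envelope check (not the study's actual analysis), a one-sample z-statistic makes the scale of this skew concrete. The shares come from the figures above; the n of 3,000 is the approximate number of completed baseline interviews.

    ```python
    import math

    def skew_z(sample_p, pop_p, n):
        """One-sample z-statistic: how far the sample share sits from the population benchmark."""
        se = math.sqrt(pop_p * (1 - pop_p) / n)   # standard error under the benchmark
        return (sample_p - pop_p) / se

    # Shares from the post; n = 3,000 approximates the completed baseline interviews
    print(skew_z(0.33, 0.51, 3000))  # female share: large negative z -> women under-represented
    print(skew_z(0.66, 0.28, 3000))  # secondary education or above: large positive z
    ```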
  2. Carefully consider your sampling frames.
    This is really a follow-on to the first takeaway: obtaining mobile survey responses that are even close to representative of a given population requires, just as in household data collection, careful consideration of sampling frames.

    Beyond the statistics, one of the many challenges surrounding mobile surveys is gaining access to the primary sampling frame: mobile phone numbers. For programs that use mobile surveys to obtain feedback from participants, gathering mobile numbers and keeping them up to date over months or years requires forward thinking, a maintenance budget, and attention to privacy. In more challenging cases, gaining access to mobile numbers means navigating regulations and the back rooms of local mobile operators, or hiring a firm with proprietary databases or advanced calling equipment.

    Since the design of the CATI survey called for stratification only by operator and province, we approached the operators to provide us with 60,000 numbers, the minimum we determined we would need to reach 3,000 users (the sizing arithmetic is sketched at the end of this item). We had to build many relationships and track down the key decision-makers in person to make this happen. Despite working proactively, the uncertainty and extensive legwork required to obtain phone numbers caused some delays. Moreover, the operators did not share how the numbers were selected, and the numbers came with no associated demographics. To stratify by province, we filtered calls by asking whoever answered the phone where they resided. Many surveys incorporate quota sampling and similarly filter SMS or IVR participants with basic introductory demographic questions, often at the cost of extended timelines.

    Research scientists have begun to turn their attention, and a critical eye, towards some of these issues of sampling and statistical validity. Both the World Bank and the Center for Global Development are advancing the conversation on these topics and scrutinizing the ability to produce nationally representative mobile-based surveys, with mixed evidence so far. Programs like the World Food Programme’s mobile Vulnerability Analysis and Mapping (mVAM) offer guidance to staff and others on when and where mobile data collection is appropriate, and others caution that these tools may be best suited to citizen engagement or program feedback.
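
    The arithmetic behind such a sample-frame request is worth making explicit. In the minimal sketch below, the ~5% expected yield is simply what 3,000 completes out of 60,000 numbers implies; any real sizing exercise would ground this rate in pilot data.

    ```python
    import math

    def frame_size(target_completes, expected_yield):
        """How many numbers to request, given the share expected to complete a survey."""
        return math.ceil(target_completes / expected_yield)

    # 3,000 completed interviews at the ~5% yield implied by the post's figures
    print(frame_size(3000, 0.05))  # -> 60000
    ```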

  3. Connecting with respondents over time, no matter the medium, is hard – incentives help, but not as much as you might think.
    The traditional challenges of retaining the same respondents over multiple survey rounds are well known: people move around constantly and, even when you know where they are, they can easily lose interest and motivation. Mobile phones are often perceived as an effective way to contact and engage participants over time. Specialized firms are constantly honing their methods for achieving higher response and retention rates – adjusting languages, greetings and call times. One of the most commonly used methods of enticing a response is an incentive in the form of free airtime.

    MAUS sought to test these common assumptions by investigating retention rates and how they might be improved. We found the same challenges in Mozambique as in other contexts where we work: high rates of phone borrowing, gatekeeping family members, use of multiple SIMs, frequent customer churn, infrequent phone charging, inconsistent network quality, and sometimes no network availability at all. Mozambique was also in the midst of a government crackdown on unregistered SIM cards, which eliminated thousands of numbers at once.

    The retention rate for endline data collection was 45%. We anticipated wide variation in retention rates between the control and treatment groups. Surprisingly, there was no statistically significant difference in retention between the control group and the treatment group receiving only SMS reminders. We did find a statistically significant difference between the control group and the treatment group receiving an SMS plus airtime incentives; however, the difference was much smaller than anticipated, with 51% retention in the incentive group (a minimal version of this comparison is sketched at the end of this item).

    In the context of Mozambique, implementers should think carefully about whether the benefits of higher participation outweigh the costs. Our work conducting SMS surveys in Liberia and Sierra Leone showed significant respondent drop-off as well: about 21% of respondents replied to an SMS twice, but the percentage completing a survey more than three times dropped to single digits.
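
    For readers who want to replicate this kind of comparison, here is a minimal, self-contained two-proportion z-test. The group sizes and the control group's retention rate are assumptions for illustration; only the 51% figure for the incentive arm comes from our results.

    ```python
    import math

    def two_prop_ztest(x1, n1, x2, n2):
        """Two-proportion z-test on retained counts x out of group sizes n."""
        p1, p2 = x1 / n1, x2 / n2
        pooled = (x1 + x2) / (n1 + n2)                      # pooled proportion under H0
        se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
        z = (p1 - p2) / se
        p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-sided
        return z, p_value

    # Group sizes and the control rate are hypothetical; only the 51% retention
    # in the incentive arm comes from the study.
    print(two_prop_ztest(510, 1000, 420, 1000))
    ```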

  4. Mobile-based surveys can save time (a lot) and money.
    This should come as no surprise – across all mobile survey methodologies, data collection is nearly guaranteed to be quicker, more efficient and safer for widely dispersed populations. For MAUS, data collection via the CATI survey required two weeks at baseline to conduct 3,000 interviews and eight days at endline, including multiple follow-up calls. Household data collection and localized approvals took three months, with many logistical recalculations and much hand-wringing along the way. Set-up time for the scripted forms and databases was equivalent between the two methodologies.

    Costs to deploy a mobile-based survey can vary widely depending on the prevalence of phones, the sampling frames available and the solutions selected. In our case, the approximate cost per survey was 11% higher for the face-to-face household survey than for the CATI survey. And once our CATI system was operational, the marginal cost of adding data collection rounds decreased significantly; a simple cost model at the end of this item illustrates why.
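
    A simple fixed-plus-variable cost model shows why the average cost per interview falls as rounds are added; all figures in the sketch below are hypothetical, chosen only to show the shape of the curve.

    ```python
    def avg_cost_per_survey(fixed_setup, cost_per_interview, n_interviews):
        """Average cost per completed interview; fixed setup amortizes over more rounds."""
        return (fixed_setup + cost_per_interview * n_interviews) / n_interviews

    # Entirely hypothetical dollar figures
    for n in (3000, 6000, 9000):             # e.g., one, two, three rounds of 3,000
        print(n, round(avg_cost_per_survey(20000, 4.0, n), 2))
    ```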
  5. Humans are indispensable for data quality and respondent engagement.
    The CATI survey employed a team of trained enumerators fluent in five languages to conduct the survey via live calls. In the Mozambican context, the results point to much better participation rates than options such as text-based surveys using SMS or USSD, or even audio-based IVR. The CATI survey achieved a 24.6% participation rate, i.e., the percentage of people who answered the call and elected to participate. By comparison, one survey in Mozambique administered via IVR saw a 9% participation rate. Feedback from participants also revealed that speaking with a live enumerator, particularly someone from the capital, was an enjoyable experience and that they learned something new.

    Additionally, a common perception in Mozambique is that communications from unknown numbers come from scammers. When enumerators acknowledged this concern, explained the study and always walked through consent, many participants felt more comfortable and willing to participate. The right solution for a given project will depend on the budget, the need to adapt questions, and the requirements for data quality.

Looking forward, we’re continuing to explore the growing menu of mobile-based survey tools and approaches, their nuances and the many opportunities they offer for connecting with the individuals and communities we serve.

We hope these takeaways were helpful, and we look forward to reading your comments below.

Photo credit: Jodi-Ann Burey/VillageReach, Courtesy of Photoshare
