3 women leading the charge in ICT4D research

It’s no secret that the technology sector is riddled with major gender disparities. In the United States, discrepancies in employment and pay are so widespread that tech firms and the government alike regularly commission reports to evaluate why women comprise less than a quarter of the tech workforce and how this stifles growth. Couple the gender imbalances in the tech sphere with those in the research world and it’s not hard to conceive of the challenges faced by women conducting research in the information and communication technologies for development (ICT4D) field. As the 10th conference on ICT4D in Lusaka, Zambia, approaches in May, I’d like to take a moment to highlight the work of several incredibly talented women powering the evidence base for ICT4D.

Through an FHI 360-funded learning agenda project, Annette Brown and I recently created an evidence map that identifies and categorizes impact evaluations across the broad and multi-sectoral beast we term ICT4D. We used a systematic review approach to identify and code 254 impact evaluations across 11 ICT4D intervention types, such as digital identity and technology-assisted learning, that provide evidence in nine sectors. Researchers in the field have been busy – in the last five years the total number of publications providing rigorous evidence in ICT4D increased 311 percent. Below, I take a look at three pieces of evidence from the map and the women behind the work.
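To make the structure of the map concrete, here is a minimal sketch of how coded studies can be tallied into an intervention-by-sector grid. The records, intervention labels and counts below are invented placeholders for illustration, not studies from the actual map.

```python
# Minimal sketch: tallying coded impact evaluations into an evidence map
# (intervention type x sector). The records below are invented placeholders,
# not studies from the actual map.
from collections import Counter

coded_studies = [
    {"id": "study-001", "intervention": "technology-assisted learning", "sector": "education"},
    {"id": "study-002", "intervention": "digital identity", "sector": "governance"},
    {"id": "study-003", "intervention": "technology-assisted learning", "sector": "education"},
]

# Count studies in each (intervention type, sector) cell of the map.
cells = Counter((s["intervention"], s["sector"]) for s in coded_studies)

for (intervention, sector), n in sorted(cells.items()):
    print(f"{intervention} x {sector}: {n} impact evaluation(s)")
```

Each non-empty cell of that grid is a pocket of evidence; the empty cells are where the gaps in ICT4D research remain.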

Addressing bias in our systematic review of STEM research

Research is a conversation. Researchers attempt to answer a study question, and then other groups of researchers support, contest or expand on those findings. Over the years, this process produces a body of evidence representing the scientific community’s conversation on a given topic. But what did those research teams have to say? What did they determine is the answer to the question? How did they arrive at that answer?

That is where a systematic review enters the conversation. We know, for example, that a significant amount of research explores gender differences in mathematics achievement, but it is unclear how girls’ math identity contributes to or ameliorates this disparity. In response, we are conducting a systematic review to understand how improving girls’ math identity supports their participation, engagement and achievement in math. This review will help us move from a more subjective understanding of the issue to a rigorous and unbiased assessment of the evidence to date.

Developing a systematic review protocol requires thoughtful decision-making about how to reduce various forms of bias at each stage of the process. Below we discuss some of the decisions made to reduce bias in our systematic review exploring girls’ math identity, in the hopes that it will inform others undertaking similar efforts.
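As one hedged illustration of what such a decision can look like in practice, a common bias-reduction step is to have two reviewers screen each study independently and then check how well their include/exclude decisions agree. Dual screening is assumed here only as an example and may or may not be among the specific decisions discussed in the full post; the decisions in the sketch are invented placeholders.

```python
# Illustrative sketch only: dual independent screening is one common way to
# reduce selection bias in a systematic review. Cohen's kappa measures how
# much two reviewers' include/exclude decisions agree beyond chance.
# The decisions below are invented placeholders.

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two equal-length lists of binary decisions (1/0)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal include/exclude rates.
    p_a_inc = sum(rater_a) / n
    p_b_inc = sum(rater_b) / n
    expected = p_a_inc * p_b_inc + (1 - p_a_inc) * (1 - p_b_inc)
    return (observed - expected) / (1 - expected)

# 1 = include, 0 = exclude
reviewer_1 = [1, 1, 0, 0, 1, 0, 1, 0]
reviewer_2 = [1, 0, 0, 0, 1, 0, 1, 1]
print(f"Cohen's kappa: {cohens_kappa(reviewer_1, reviewer_2):.2f}")
```

A low agreement score at this stage is a signal to revisit the screening criteria before they quietly bias which studies make it into the review.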

Investigating STEM and the importance of girls’ math identity

Despite significant progress in closing the gender gap in science, technology, engineering and math (also known as STEM), inequities in girls’ and women’s participation and persistence in math and across STEM education and careers remain. According to the U.S. Census Bureau, women made up nearly half of the U.S. workforce but just 26 percent of STEM workers as of 2011. Within STEM, the largest number of new jobs is in the computer science and math fields; yet the gender gap in these careers has widened, with female representation declining since 2000.

While much of the current STEM research has focused heavily on the barriers and reasons why there aren’t more girls or women in STEM-related fields, here we argue that future research must focus on how to design and develop effective approaches, practices, situations, tools, and materials to foster girls’ interest and engagement.

5 features of a monitoring, evaluation and learning system geared towards equity in education

A great accomplishment of the era following the 1990 World Declaration on Education for All in Jomtien, Thailand, is the recognition of the gender gap in education and the mandate for sex-disaggregated reporting from funders and multilateral agencies. Data on the dramatic access and outcome disparities between male and female learners created demand for programming focused on gender inequity. Twenty-seven years after Jomtien, there is substantial evidence on solutions that build gender equity in education, and on how education systems need to adapt to help girls and boys overcome gender-related institutional barriers.

The Education Equity Research Initiative, a collaborative partnership led by FHI 360 and Save the Children, seeks to create the same dynamic around other aspects of inequity in education – be it poverty, ethnic or racial disadvantage, migration status, or disability. As a community, we create frameworks, modules, and tools, so that little by little the reams of data that get produced include a consistent set of questions around equity of program participation and equity of outcomes.

My previous blog post speaks to the need to be deliberate in building a monitoring, evaluation and learning system that generates the data and analysis that help answer the question: are we improving education equity through our programming and policy? But how do we operationalize equity in education in the context of education development programming? In Mainstreaming Equity in Education, a paper commissioned by the International Education Funders Group, our collaborative begins by recognizing that the essential purpose of an equity-oriented monitoring, evaluation and learning (MEL) system around a program or set of interventions is not just to produce data on scope and coverage, but to allow for a deeper understanding of who benefits and who does not, and to offer actionable information on what to do about it. Here I outline five features that describe such a learning system.

To achieve equity in education, we need the right data

As we work to realize the Sustainable Development Goals (SDGs) related to education, it is the responsibility of every funding, implementing and research organization working internationally to ask questions about its own contribution to building equity in education. While a great amount of data gets produced in the course of education projects, only a fraction provides the detail needed to assess intervention impact across different equity dimensions. At the technical and implementation level, organizations need to capture and use the evidence necessary to understand and respond to inequity in education provision and outcomes.

To do that, we need to be deliberate in building monitoring, evaluation and learning systems that generate the data and analysis that help answer the question: are we improving education equity through our programming and policy? Disaggregated data are the first step to understanding who is left behind in obtaining a quality education for a successful and productive adulthood. My recent paper, Mainstreaming Equity in Education, outlines key issues and challenges that need to be addressed around equity in education, and provides a way forward for mainstreaming equity-oriented programming and data analysis. In this blog post, I show how disaggregated data can make a difference in understanding impacts. I then provide evidence that, unfortunately, such disaggregated data are rarely collected.
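As a minimal sketch of why disaggregation matters, consider how an aggregate indicator can hide gaps that only appear when the same data are broken out by subgroup. The learner records, groupings and score values below are invented placeholders, not data from any education program.

```python
# Illustrative sketch: an aggregate indicator can mask large gaps that only
# disaggregated reporting reveals. The learner records below are invented
# placeholders, not data from any education program.
from statistics import mean

learners = [
    {"sex": "female", "disability": False, "reading_score": 62},
    {"sex": "female", "disability": True,  "reading_score": 41},
    {"sex": "male",   "disability": False, "reading_score": 70},
    {"sex": "male",   "disability": True,  "reading_score": 48},
    {"sex": "female", "disability": False, "reading_score": 58},
    {"sex": "male",   "disability": False, "reading_score": 67},
]

# Aggregate view: one number for the whole program.
print(f"Overall mean score: {mean(l['reading_score'] for l in learners):.1f}")

# Disaggregated view: the same indicator by sex and disability status.
for sex in ("female", "male"):
    for disability in (False, True):
        group = [l["reading_score"] for l in learners
                 if l["sex"] == sex and l["disability"] == disability]
        if group:
            label = f"{sex}, {'with' if disability else 'without'} disability"
            print(f"{label}: mean {mean(group):.1f} (n={len(group)})")
```

The overall average looks respectable; the disaggregated rows are what show which learners the program is actually failing to reach.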

Seizing an opportunity to collect user experience data

Contraceptive clinical trials routinely collect vast amounts of data, but what new data can we collect about method acceptability during this research stage? If a method has reached the clinical trial phase, we’d hope formative acceptability research was already conducted to inform its development and to determine if a potential market exists. At this point in the game, few changes can be made to a method based on acceptability findings… so what’s left to learn?

Hypothetically speaking… If we build it, will they come?

Contraceptive product development efforts, to date, have largely been premised on the notion that if we build it, they will come. Primary attention has been paid to making products that work, with the assumption that if women want to avoid pregnancy, they will use them. While the desire to avoid pregnancy is an extremely powerful motivator, it is not enough. For many women, the fear of contraceptive side effects or the challenge associated with accessing and using contraceptives is greater than the burden of another pregnancy.

Some argue that to improve uptake and continuation rates, we need to improve provider counseling around contraceptive side effects and address socio-cultural barriers, such as inequitable gender norms, that prevent women from using contraceptives. These efforts – while essential – are still insufficient. Even the most informed and empowered women can have unintended pregnancies when they don’t have access to acceptable contraceptives – methods that meet their particular needs in their particular life stage and context.

As researchers, how do we shift the model of contraceptive development to focus first on what users want from an ideal contraceptive?

Beyond research: Using science to transform women’s lives

It was a warm spring day in 2011, and eight of my colleagues were helping me celebrate the realization of a long-awaited policy change in Uganda by sipping tepid champagne out of kid-sized paper cups. A colleague asked me, amazed, “How did you guys pull this off? What’s your secret to changing national policy?” I offered up some words about patience, doggedness, and committed teamwork. My somewhat glib response is still true, but since then I’ve thought a lot about what it takes to get a policy changed.