Gearing up to enhance college readiness for underserved students: Insights from a capacity building workshop

I was fortunate to attend the National Council for Community and Education Partnerships (NCCEP)/Gaining Early Awareness and Readiness for Undergraduate Programs (GEAR UP) 2018 Capacity Building Workshop in Las Vegas, Nevada. The three-day workshop brought together GEAR UP community partners, school and district administrators, and researchers from across the country for professional learning.

Each day of the workshop began with a keynote address. Linda Cliatt-Wayman, principal and renowned school-turnaround expert, shared her motivational story of transforming Philadelphia schools on the Persistently Dangerous List. Greg Simon, President of the Biden Cancer Initiative, stressed the importance of communication, shared data, and coming together to exchange practices and strategies. Natalie Spiro, Founder and President of Drum Café West Coast, led the audience through an interactive drumming performance illustrating GEAR UP’s collective voice and unity of purpose. Even an Elvis impersonator came to liven things up!

But what I found most informative and inspiring were the seminars that followed each keynote. It was during these sessions that peers from various disciplines across the country could discuss their successes and difficulties, share strategies and practices, and, most importantly, offer alternative perspectives on common issues and topics. In this post, I share some of my personal insights from those discussions.

Strengthening capacity to track, store and use data effectively at Catholic and independent schools

I’ll be the first to confess that my research most often focuses on the public school system, with the inherent but weak assumption that strong practices and strategies in public schools can loosely apply to non-public schools (i.e., Catholic and independent schools). But as an education researcher, I’m intimately aware that settings and populations matter: just because an intervention works in a public school doesn’t mean it will work in a nearby independent school.

So, I was pleasantly surprised when a few colleagues and I recently had the opportunity to work with a network of Catholic and independent schools in Minnesota to 1) support the integration of assessment data into ongoing school improvement processes, and 2) promote best practices in data collection, assessment and use. I was excited to apply my experience directly and to learn about their data issues and concerns. Below I highlight some lessons learned from working with the schools and across the network, in the hope that they will assist other Catholic and independent schools as they seek to use data to inform their practice.

Investigating STEM and the importance of girls’ math identity

Despite significant progress in closing the gender gap in science, technology, engineering and math (also known as STEM), inequities in girls’ and women’s participation and persistence in math and across STEM education and careers remain. According to the U.S. Census Bureau, women make up nearly half of the U.S. workforce but just 26 percent of STEM workers, as of 2011. Within STEM, the largest number of new jobs are in the computer science and math fields; however, the gender gap in these careers has widened rather than narrowed, with female representation declining since 2000.

While much of the current STEM research has focused heavily on the barriers that keep girls and women out of STEM-related fields, here we argue that future research must focus on how to design and develop effective approaches, practices, situations, tools, and materials to foster girls’ interest and engagement.

5 features of a monitoring, evaluation and learning system geared towards equity in education

A great accomplishment of the era following the 1990 World Declaration on Education for All in Jomtien, Thailand, is the recognition of the gender gap in education and the mandate for sex-disaggregated reporting from funders and multilateral agencies. Data on the dramatic access and outcome disparities between male and female learners created demand for programming focused on gender inequity. Twenty-seven years after Jomtien, there is a substantial body of evidence on solutions that build gender equity in education, and on how education systems need to adapt to help girls and boys overcome gender-related institutional barriers.

The Education Equity Research Initiative, a collaborative partnership led by FHI 360 and Save the Children, seeks to create the same dynamic around other aspects of inequity in education – be it poverty, ethnic or racial disadvantage, migration status, or disability. As a community, we create frameworks, modules, and tools, so that little by little the reams of data that get produced include a consistent set of questions around equity of program participation and equity of outcomes.

My previous blog post speaks to the need to be deliberate in building a monitoring, evaluation and learning system that generates the data and analysis needed to answer the question: are we improving education equity through our programming and policy? But how do we operationalize equity in education in the context of education development programming? In Mainstreaming Equity in Education, a paper commissioned by the International Education Funders Group, our collaborative begins by recognizing that the essential purpose of an equity-oriented monitoring, evaluation and learning (MEL) system around a program or set of interventions is not just to produce data on scope and coverage, but to build a deeper understanding of who benefits and who doesn’t, and to offer actionable information on what to do about it. Here I outline five features that describe such a learning system.

To achieve equity in education, we need the right data

As we work to realize the Sustainable Development Goals (SDGs) related to education, it is the responsibility of every funding, implementing and research organization internationally to be asking questions about our own contributions to building equity in education. While a great amount of data gets produced in the course of education projects, only a fraction provides the detail that is needed to assess intervention impact on different equity dimensions. At the technical and implementation level, organizations need to capture and use the necessary evidence to understand and respond to inequity in education provision and outcomes.

To do that, we need to be deliberate in building monitoring, evaluation and learning systems that generate the data and analysis that help answer the question: are we improving education equity through our programming and policy? Disaggregated data are the first step to understanding who is left behind in obtaining a quality education for successful and productive adulthood. My recent paper, Mainstreaming Equity in Education, outlines key issues and challenges that need to be addressed around equity in education, and provides a way forward for mainstreaming equity-oriented programming and data analysis. In this blog post, I show how disaggregated data can make a difference to understanding impacts. I then provide evidence that, unfortunately, such disaggregated data are rarely collected.
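The mechanics of disaggregation are simple; the hard part is collecting the identifying variables in the first place. As a minimal sketch with entirely made-up numbers (not drawn from any real dataset), an aggregate pass rate can look acceptable while one subgroup is left entirely behind:

```python
from collections import defaultdict

# Hypothetical assessment records: (sex, rural, passed)
records = [
    ("female", True, False), ("female", True, False),
    ("female", False, True), ("female", False, True),
    ("male", True, True), ("male", True, False),
    ("male", False, True), ("male", False, True),
]

def pass_rate(rows):
    """Share of records in this group that passed the assessment."""
    return sum(passed for _, _, passed in rows) / len(rows)

# Aggregate result: 5 of 8 students passed
overall = pass_rate(records)  # 0.625

# Disaggregate by sex and rural/urban location
groups = defaultdict(list)
for sex, rural, passed in records:
    groups[(sex, rural)].append((sex, rural, passed))

for key in sorted(groups):
    print(key, pass_rate(groups[key]))
```

In this toy example the overall pass rate of 62.5 percent hides the fact that no rural girl passed at all, which is exactly the kind of gap that only disaggregated reporting can surface.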

Paper-based data collection: Moving backwards or expanding the arsenal?

Considerable effort has gone into perfecting the art of tablet data collection, which is the method typically used to collect data for evaluating education programs. The move away from paper has been a welcome shift, as for many research and evaluation professionals, paper conjures images of junior staff buried under boxes of returned questionnaires manually entering data into computers. Indeed, when our team recently began experimenting with paper-based data collection in our education projects, one colleague with decades of experience remarked warily, “It just seems like we’re moving backwards here!”

Improvements in software, however, allow us to merge new technology with “old school” methods. Digital scanners can now replace manual data entry, powered by software that reads completed questionnaires and quickly formats responses into a data set for subsequent analysis. Our team has been experimenting with new digital scanning software called Gravic to enter data from paper-based surveys quickly and easily. The Gravic digital scanning tool introduces flexibility and opens a new option for data collection across our projects, but not without some drawbacks. In this post, we make the case for paper surveys combined with the Gravic software and then review the drawbacks.

Don’t waste evidence on the youth! Recent data highlight education and employment trends

A recent New York Times article describes a major contemporary challenge facing governments: the world has too many young people. A quarter of the world’s population is young (ages 10-24), and the majority live in developing countries. Policy makers are struggling with high levels of youth unemployment in every country, but a key challenge in developing countries has been a lack of data on education and employment characteristics. To fill this evidence gap, FHI 360’s Education Policy and Data Center (EPDC) recently added country-level Youth Education and Employment profiles to the resources available on our website. In this post, I describe the data and how they were collected, and I give some examples of how these data can be used to inform policy making and program design.

7 takeaways from changes in US education grant programs

I recently had the opportunity to attend a workshop on the U.S. Department of Education’s (ED) new Education Innovation and Research (EIR) grant competition. EIR is the successor to the Investing in Innovation (i3) grant program, which invested approximately $1.4 billion through seven competitions from 2010 to 2016 to develop, validate and scale up evidence-based programs in education. Like i3, EIR uses a tiered award structure to support programs at various levels of development. This blog post summarizes my seven takeaway points from the workshop. These seven points highlight the main changes in the transition from i3 to EIR.

Gearing up to address attrition: Cohort designs with longitudinal data

As education researchers, we know that one of the greatest threats to our work is sample attrition – students dropping out of a study over time. Attrition plays havoc with our carefully designed studies by threatening internal validity and making our results uncertain. To gear up for our evaluation of the Pennsylvania State Gaining Early Awareness and Readiness for Undergraduate Programs (GEAR UP), we designed a three-pronged approach to handling sample attrition. We describe it here in case it can be helpful to others.
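Before choosing a design response, it helps to quantify the problem. A minimal sketch (with hypothetical enrollment numbers, not figures from our GEAR UP evaluation) of overall and differential attrition, the two quantities that evidence reviews typically examine:

```python
def attrition_rate(enrolled, retained):
    """Share of the originally enrolled sample lost to follow-up."""
    return (enrolled - retained) / enrolled

# Hypothetical cohorts: equal enrollment, unequal follow-up
treatment = attrition_rate(enrolled=500, retained=410)  # 0.18
control = attrition_rate(enrolled=500, retained=455)    # 0.09

# Differential attrition - groups dropping out at different rates -
# is what most threatens internal validity, since the students who
# remain may no longer be comparable across conditions.
differential = abs(treatment - control)  # 0.09

print(treatment, control, differential)
```

High overall attrition with low differential attrition is often tolerable; the combination of both is what pushes a study toward the corrective designs we describe.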

Null results should produce answers, not excuses

I recently served on a Center for Global Development (CGD) panel discussing a new study of the effects of community-based education on learning outcomes in Afghanistan (Burde, Middleton, and Samii 2016). This exemplary randomized evaluation finds some important positive results. But the authors do one thing that almost all impact evaluation researchers do: where they have null results, they make what I will call, for the sake of argument, excuses.