Teasing apart stigma and knowledge as barriers to HIV testing: A study with young Black adults in Durham, North Carolina


What experiences do young Black adults in Durham, North Carolina, have with HIV testing? And what role does stigma play in those experiences? To answer these questions, my co-authors and I recently published the results of a community-based participatory research (CBPR) study: Relationship between HIV knowledge, HIV-related stigma, and HIV testing among young Black adults in a southeastern city. Our cross-sectional survey examined barriers, facilitators and contributors to HIV testing. This blog post summarizes our findings and offers guidance on HIV prevention strategies.

7 takeaways from changes in US education grant programs


I recently had the opportunity to attend a workshop on the U.S. Department of Education’s (ED) new Education Innovation and Research (EIR) grant competition. EIR is the successor to the Investing in Innovation (i3) grant program, which invested approximately $1.4 billion through seven competitions from 2010 to 2016 to develop, validate and scale up evidence-based programs in education. Like i3, EIR uses a tiered award structure to support programs at various levels of development. This blog post summarizes my seven takeaways from the workshop, which highlight the main changes in the transition from i3 to EIR.

Gearing up to address attrition: Cohort designs with longitudinal data


As education researchers, we know that one of the greatest threats to our work is sample attrition – students dropping out of a study over time. Attrition plays havoc with our carefully designed studies, threatening internal validity and making our results uncertain. To gear up for our evaluation of the Pennsylvania State Gaining Early Awareness and Readiness for Undergraduate Programs (GEAR UP), we designed a three-pronged approach to handling sample attrition. We describe it here in case it can be helpful to others.

Learning about focus groups from an RCT


In my previous job at 3ie, I spent a lot of time telling researchers that a randomized controlled trial (RCT) with a few focus groups thrown in for good measure doesn’t count as a mixed methods impact evaluation. In the course of repeatedly saying that focus groups are not enough, I must have developed an unconscious bias against them, because I was pleasantly surprised by what I learned from a recently published article written by some of my FHI 360 colleagues. In their study, Guest et al. use an RCT to compare the performance of individual interviews and focus groups for collecting certain kinds of data.