Strengthening capacity to track, store and use data effectively at Catholic and independent schools

I’ll be the first to confess that my research most often focuses on the public school system, with the inherent but weak assumption that strong practices and strategies in public schools can loosely apply to non-public schools (i.e., Catholic and independent schools). But as an education researcher, I’m intimately aware that setting and populations matter, and just because an intervention works in a public school doesn’t mean it will work in a nearby independent school. Myriad factors may differentiate these two types of school settings – from class size, teacher experience and certification, and the extent of available administrative personnel and support to parent and student attitudes and beliefs toward school. Moreover, while public and non-public schools may have similar goals for collecting student assessment data, Catholic and independent schools tend to show greater variation in the student information systems used to store data, the assessments used to measure student performance, and the timing of assessment administration throughout the year.

Setting and populations matter – just because an intervention works in a public school doesn’t mean it will work in a nearby independent school.

So, I was pleasantly surprised when a few of my colleagues and I recently had the opportunity to work with a network of Catholic and independent schools in Minnesota to 1) support the integration of assessment data into ongoing school improvement processes, and 2) promote best practices in data collection, assessment and use. I was excited to directly apply my experience and to learn about their data issues and concerns.

Given the necessary buy-in from school administrators, our activities began with an overview of current assessments, an in-person launch meeting with school principals, and select school visits to better understand current data use before moving forward. Next, we developed tools, data guides, online dashboards, reports and workshops to support the use of assessment data to improve instruction. These included individualized school reports that answer specific questions (e.g., which grade levels/subjects are strongest, are students demonstrating growth, are there achievement gaps); online assessment dashboards, which allow schools to see growth and disaggregate data (i.e., by common demographics as well as school-specific categories such as graduation cohort and new/returning students); and a workshop to train and support teachers in analyzing class test scores using best practices.

My lessons learned will assist other Catholic and independent schools as they seek to use data to inform their practice.

Schools across the network varied in size and in the populations they served, but they were united in the desire to analyze their periodic assessment data to improve instruction and student achievement. What struck me most was that the schools – regardless of assessment, student populations, or grade levels served – struggled with similar roadblocks to effectively using their assessment data. Below I highlight some of the lessons I learned working with the schools and across the network, in the hope that they will assist other Catholic and independent schools as they seek to use data to inform their practice.

Working with data in an individual school
  #1: Adopt unique student identifiers.

    One of the first things I noticed in working with the individual schools was that while all students may have had a local identification number, it was not consistently used to link enrollment data, student demographics or assessments. To further complicate matters, students often had multiple identification numbers (e.g., a local number and an assessment number) with no common number across data sets.

    Without unique student identifiers across these data sets, analyzing achievement gaps, demonstrating growth, or even pinpointing students in need of support became a much more intensive and challenging task. Our team’s first step was to create a master identification list linking the appropriate student enrollment, demographics and assessment data together. By creating this master file, we were able to more easily respond to schools’ questions regarding specific areas of strength and improvement needs. An additional advantage of unique student identifiers is that they protect student privacy by replacing personal information and also help to improve the overall quality, access, accuracy and reliability of the data.
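
    As a rough illustration of what that master file looked like in practice, the sketch below (Python with pandas) merges hypothetical enrollment, demographic and assessment exports on a single identifier; all file and column names are made up for the example.

    ```python
    import pandas as pd

    # Hypothetical exports: file names and column names are illustrative only.
    enrollment = pd.read_csv("enrollment.csv")      # local_id, grade, first_name, last_name, ...
    demographics = pd.read_csv("demographics.csv")  # local_id, race_ethnicity, ell, ...
    assessments = pd.read_csv("assessments.csv")    # assessment_id, subject, score, ...
    crosswalk = pd.read_csv("id_crosswalk.csv")     # local_id, assessment_id

    # Build the master file: one row per student assessment record,
    # all keyed on a single unique identifier.
    master = (
        enrollment
        .merge(demographics, on="local_id", how="left")
        .merge(crosswalk, on="local_id", how="left")
        .merge(assessments, on="assessment_id", how="left")
    )

    # Drop directly identifying fields so the unique identifier stands in
    # for personal information when the file is shared.
    master = master.drop(columns=["first_name", "last_name"], errors="ignore")
    master.to_csv("master_student_file.csv", index=False)
    ```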

  #2: Simplify data collection and data elements.

    As discussed above, schools had many separate data sets. Student enrollment and demographics were commonly maintained within a licensed student information system, but common student descriptors (e.g., free or reduced-price lunch, English-language learner status, and individualized education programs) were often maintained separately. Student assessment data were typically available only through the assessment developers’ online portals, which required users to be familiar with each portal’s reporting options. This separation complicated connections between data sets and across years, making it difficult to use the data to address equity issues.

    To simplify, our team designed a short data template that combined student descriptors and the needed data fields (e.g., student identifiers, grade, race/ethnicity, assessment, period/year, discipline/subject, and score) into a master data set that could easily be filtered in Excel to allow schools to drill down to specific subjects, grade levels, or assessment periods. Our team also provided schools with detailed instructions on how to obtain the required information from the respective assessment portals. The template provided schools with a uniform data structure with minimum required fields that improved overall data management.
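
    A minimal sketch of how such a template can be used once it is filled in, again in Python with pandas; the field names mirror the ones listed above, and the file name and filter values are hypothetical.

    ```python
    import pandas as pd

    # Minimum required fields from the template (names are illustrative).
    TEMPLATE_FIELDS = [
        "student_id", "grade", "race_ethnicity",
        "assessment", "period_year", "subject", "score",
    ]

    # Load a school's completed template, keeping only the required fields
    # so every school's file shares the same structure.
    data = pd.read_csv("school_template.csv")[TEMPLATE_FIELDS]

    # The same drill-down a school would do with Excel filters:
    # fall math scores for grade 5.
    grade5_math_fall = data[
        (data["grade"] == 5)
        & (data["subject"] == "Math")
        & (data["period_year"] == "Fall 2023")
    ]
    print(grade5_math_fall["score"].describe())
    ```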

  #3: Meaningfully interpret assessment data.

    Understanding the scoring intricacies and details of one assessment may be daunting for schools with limited support, let alone multiple assessments. So, to help schools meaningfully interpret their data, our team determined proficiency by student (i.e., proficient/not proficient) and by grade level, separately for each assessment, using national/state norms and benchmarks.

    Analyzing proficiency in this way helps schools understand the percentage of students performing at or near established benchmarks by grade and subject, as well as evaluate growth from one assessment to another. Our team also developed four proficiency descriptors for each assessment: 1) exceeds the standards/benchmarks (proficient); 2) meets the standards/benchmarks (proficient); 3) partially meets the standards/benchmarks (not proficient); and 4) does not meet the standards/benchmarks (not proficient). These descriptors helped schools and teachers identify students who could immediately benefit from targeted instruction (i.e., students in the partially meets descriptor) as well as those in need of more intensive support.
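
    The sketch below shows one way the four descriptors might be assigned from scale scores; the cut scores are placeholders, since the real cuts come from each assessment's published national or state norms for a given grade and subject.

    ```python
    import pandas as pd

    # Placeholder cut scores; in practice these come from the assessment's
    # published national/state norms for a given grade and subject.
    CUTS = {"partially_meets": 190, "meets": 200, "exceeds": 215}

    def proficiency_descriptor(score: float) -> str:
        """Map a scale score to one of the four descriptors."""
        if score >= CUTS["exceeds"]:
            return "Exceeds standards/benchmarks (proficient)"
        if score >= CUTS["meets"]:
            return "Meets standards/benchmarks (proficient)"
        if score >= CUTS["partially_meets"]:
            return "Partially meets standards/benchmarks (not proficient)"
        return "Does not meet standards/benchmarks (not proficient)"

    scores = pd.DataFrame({"student_id": [101, 102, 103, 104],
                           "score": [221, 204, 193, 180]})
    scores["descriptor"] = scores["score"].apply(proficiency_descriptor)
    scores["proficient"] = scores["score"] >= CUTS["meets"]
    print(scores.groupby("descriptor").size())
    ```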

  #4: Guarantee meaningful access to data.

    One of the primary barriers schools encountered in effectively using their data was providing meaningful access to staff. Whether it was the lack of common identifiers, data sets stored and maintained separately, or complicated reporting features, school administrators and teachers found it difficult to meaningfully interact with their data to answer questions.

    For example, several schools were participating in the Minnesota Comprehensive Assessments (MCA), and their only access to their results was through PDF files that could not be easily imported or manipulated. To better facilitate access, our team transcribed the PDF documents and incorporated the data into our master data sets. We also developed a secure Power BI dashboard that allowed schools to filter, interact with, and visualize their data. Rather than relying on static reports or summary tables, schools could now drill down into their assessment data to answer specific questions about how different groups of students were performing and how these trends persisted or changed over time.
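
    As a simple stand-in for the kind of drill-down the dashboard supports, the sketch below computes percent proficient by student group and year from a master file; it assumes a boolean proficient flag like the one derived above, and the file and column names are illustrative.

    ```python
    import pandas as pd

    # Master data set combining the transcribed PDF results with other exports;
    # column names (and the boolean 'proficient' flag) are illustrative.
    data = pd.read_csv("master_student_file.csv")

    # Percent proficient in math by student group and school year:
    # the kind of question a static PDF report could not easily answer.
    math = data[data["subject"] == "Math"]
    trend = (
        math.groupby(["school_year", "race_ethnicity"])["proficient"]
            .mean()                       # share of students at or above the benchmark
            .mul(100).round(1)
            .unstack("race_ethnicity")    # one column per student group
    )
    print(trend)
    ```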

Working with data across a school network
  #5: Establish common definitions.

    Probably not a surprise to anyone who cleans and presents data across schools, but even within our small subset of Catholic and independent schools, our team needed to identify common definitions and values for measures of interest. Different schools in the same network had different definitions for fields such as race and ethnicity, Individualized Education Program participation, and the receipt of additional support and accommodations.

    To address this problem, our team identified and cleaned up discrepancies. We also provided schools with a data guide that included an overview of how the data had been structured and a detailed data dictionary with relevant descriptors. Common data definitions allowed schools to exchange and make use of data to better inform practice and allocate resources across the network.
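
    One way a shared data dictionary can be put to work is as a simple mapping from each school's local codes to the network's common values, as in the hypothetical sketch below; the codes, file name and field are illustrative.

    ```python
    import pandas as pd

    # A slice of the shared data dictionary: each school's local codes for one
    # field mapped to the network's common value (codes are illustrative).
    RACE_ETHNICITY_MAP = {
        "AA": "Black or African American",
        "Black": "Black or African American",
        "H": "Hispanic or Latino",
        "Hisp/Latino": "Hispanic or Latino",
        "W": "White",
        "Caucasian": "White",
    }

    data = pd.read_csv("school_template.csv")

    # Normalize local codes to the common definition; anything unmapped is
    # flagged for follow-up with the school rather than silently kept.
    data["race_ethnicity"] = data["race_ethnicity"].str.strip().map(RACE_ETHNICITY_MAP)
    unmapped = int(data["race_ethnicity"].isna().sum())
    print(f"{unmapped} records still need a value added to the data dictionary")
    ```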

  #6: Consider a unified assessment calendar.

    Our team encountered multiple assessments with different subtests and assessment periods across schools. Unlike public schools, which uniformly participate in their respective state assessments, Catholic and independent schools may opt to participate in one or more assessments that meet their respective needs. Even within one school, different grade levels may participate in different assessments depending on intent and focus. To complicate matters further, assessments may report results on differing scales; for example, some provide proficiency descriptors, while others provide scale scores or percentile ranks.

    While it may be difficult to immediately change assessment calendars and schedules, our team recommended that schools work together to determine the primary purpose of their assessment initiative, and weigh the pros and cons of different assessment calendar options. At a minimum, schools across the network should be assessing the same grades during similar assessment periods.

  #7: Meaningfully compare assessment data.

    Beyond understanding their own assessments, schools commonly wanted to know how they compared to other Catholic schools, to public schools in their district, to state results, and to national norms. But given the differences in assessments and the grade levels assessed, finding meaningful comparisons for Catholic and independent schools was, wait for it, complicated.

    While the proficiency descriptors and benchmarks discussed above help schools better understand their data, they also provide a basis for comparisons since they are based on national or state norms. Our team assisted schools with establishing baseline proficiencies across the network and within their individual schools, from which they could easily gauge how they compare to other network schools participating in similar assessments and track their own longitudinal growth year over year.
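
    A small sketch of that kind of baseline comparison: percent proficient by school and year, restricted to schools taking the same assessment, with year-over-year change against each school's own baseline. The assessment name, file and column names are placeholders.

    ```python
    import pandas as pd

    # Combined network file, one row per student per assessment record;
    # the assessment name, file and column names are placeholders.
    network = pd.read_csv("network_assessments.csv")

    # Baseline: percent proficient by school and year, limited to schools
    # taking the same assessment so comparisons stay apples to apples.
    baseline = (
        network[network["assessment"] == "Assessment A"]
            .groupby(["school", "school_year"])["proficient"]
            .mean().mul(100).round(1)
            .unstack("school_year")
    )

    # Year-over-year change for each school against its own first-year baseline.
    baseline["change"] = baseline.iloc[:, -1] - baseline.iloc[:, 0]
    print(baseline.sort_values("change", ascending=False))
    ```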

In this work with Catholic and independent schools in Minnesota, I have been excited to see the genuine enthusiasm and interest surrounding the use of data to inform instruction and improve student outcomes. When provided with meaningful access to data and a set of tools and supports to help navigate it, administrators and teachers become even more invested in their assessment data and ask questions that drive further improvement. They are curious about discrepancies between their perceptions and the data, and more importantly, they are engaged in a continuous cycle of improving outcomes for their students. Keeping the above lessons in mind will assist other schools as they seek to use data to inform their practice.

Acknowledgement: The author would like to acknowledge Risa Sackman and Eleanor Wang for their contributions to the project work.
