Research Article

The Unexpected Benefits of a Research-Practice Partnership’s Efforts to Strengthen Budgetary Decision-Making


ABSTRACT

Our research-practice partnership (RPP) focused on developing and testing metrics and tools to foster improved evidence-based budgetary decision-making. We expected our research findings to directly influence decisions about program expansion, contraction, or elimination. Instead, unexpected findings led to unexpected uses: changes in program implementation, administrative data collection processes, and the kinds of information provided to inform budget and program improvement decisions. We conducted a content analysis of two rounds of interviews with senior district budget decision-makers to identify recommendations for improving the budget decision process and document changes in the budget process over time. Our study highlights the sometimes unanticipated ways that RPPs can inform educational decisions.

Introduction

The initial purpose of our research-practice partnership

In 2018, Jefferson County Public Schools (JCPS) in Louisville, Kentucky, and a team of researchers from Teachers College, Columbia University, and American University received a research-practice partnership (RPP) grant from the U.S. Department of Education’s Institute of Education Sciences (IES) to devise and evaluate methods to strengthen school district budgetary decision-making. Specifically, our RPP, Exploring Academic Return on Investment as a Metric to Direct District-level Funding Towards Programs That Improve Student Outcomes, set out to develop a practically feasible yet sufficiently rigorous metric of academic return on investment (AROI) for informing budget decisions. Broadly speaking, AROI is a ratio of effects to expenditures—that is, increases in student outcomes divided by the dollars spent to produce those increases (e.g., Levenson, 2011). We aimed to calculate AROI on a range of educational investments up for review in the district’s cycle-based budgeting system and to cross-validate those metrics with external evidence and alternative approaches. These alternative approaches included cost-effectiveness analysis (CEA), in which economic costs are divided by effectiveness as measured in an experimental or quasi-experimental study (Levin et al., 2018), and program value-added analysis (Shand et al., 2022), an adaptation of teacher value-added measures to program evaluation whereby outcomes for students participating in interventions are compared to predicted outcomes based on their prior performance. We expected these results to directly inform decisions, and we planned to investigate the decision-making process and assess the extent to which our findings influenced it.
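As a rough illustration of the distinction between these metrics (using shorthand of our own rather than notation from the cited sources), AROI divides an outcome gain by the dollars spent, whereas a cost-effectiveness ratio divides economically estimated costs by the same gain:

\[
\mathrm{AROI} \;=\; \frac{\Delta O}{E}, \qquad \text{CE ratio} \;=\; \frac{C}{\Delta O},
\]

where \(\Delta O\) is the measured gain in a student outcome, \(E\) is the expenditure on the investment (e.g., the approved budget amount), and \(C\) is the program’s full economic cost. Read this way, a larger AROI and a smaller CE ratio both indicate a better return per dollar.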

In this paper, we show that, while our research findings were not used as originally intended, they led to concrete changes in program implementation, administrative data collection processes, and the kinds of information provided to inform budget decision-making. The processes and outcomes of our RPP ultimately affected how the district uses evidence to improve programs rather than substantially changing the set of district-funded programs. We first contextualize our work within the literature on the purposes and benchmarks of success for RPPs and within the setting of JCPS. Next, we discuss the RPP activities themselves in more detail, along with specific changes in program implementation and budget decision-making processes that occurred in the course of and as a result of the partnership. We report findings from interviews with senior decision-makers about their perceptions of how the process had changed after two and a half years. We conclude with a discussion of implications both for evidence-based budgetary decision-making and partnership-based research more generally.

Purpose of this paper and our research question

Our RPP comprised researchers from Teachers College, Columbia University, and American University and members of the JCPS research and evaluation unit. We present our partnership as a case study, focusing on changes in practice, policy, and data analysis that occurred because of the RPP, supported by analysis of semistructured interviews with senior district leaders focused on budgetary decision-making and the evidence that informs it. Our RPP activities addressed the questions of how to calculate and apply AROI in practice, how it compared with other metrics, and how the information we were providing affected budget decisions. In this paper, we contribute to ongoing research about the outcomes of RPPs as an approach to organizing research for educational improvement (e.g., Coburn & Penuel, 2016) by addressing the question of how RPP activities as an “intervention” can improve uses of evidence in making decisions about continuously improving program implementation, in addition to program adoption decisions.

Theoretical perspectives on RPPs

As the focus on using evidence to guide educational decisions has increased, due in part to federal mandates under the Elementary and Secondary Education Act, our understanding of how evidence enters into educational decisions has become more nuanced. Crucially, however, evidence use in annual budget allocations is understudied and represents an opportunity to systematize evidence use (F. Hollands et al., 2020; Rennie Center for Education Research & Policy, 2012). Researchers have noted the slow uptake of economic evaluation methods such as cost-effectiveness analysis to inform programmatic decisions in education (Hummel-Rossi & Ashdown, 2002; Rice, 1997).

Investigating AROI through the mechanism of an RPP has the potential to address persistent evaluation challenges in educational settings: insufficient conceptualization and measurement of outcomes, assessment of difficult-to-measure outcomes, and rigorous estimation of effects attributable to a program (Hummel-Rossi & Ashdown, 2002; Rice, 1997). Penuel and Farrell (2017) outline ways RPPs can help districts meet the evidence requirements of the Every Student Succeeds Act (ESSA) by providing results that are timely, credible, and relevant but note that participation in the research process is important for the utilization of research results. Collaborative evaluation can foster the conceptual use of evidence by promoting reflective practice, through ongoing self-evaluation, and by questioning why programs do or do not work (Weiss, 1998). Evaluation can have varying effects at different organizational levels, affecting both individual and collective decision-making (Henry & Mark, 2003). RPPs can foster an understanding of organizational context and how these levels interact.

However, greater clarity is needed on the purpose and measures of success of RPPs and on the conditions for success. We build upon prior research on the conception of the purposes and success of RPPs. Although a consensus definition of a successful RPP does not currently exist, scholars have refined conceptions of the purpose of RPPs through the lenses of organizational theory and continuous improvement (Arce-Trigatti et al., 2018; Coburn & Penuel, 2016). These studies emphasize the importance of partnerships themselves as vehicles to support organizations as they learn, promoting embedded and conceptual uses of evidence that shape the framing of problems and solutions and endure beyond symbolic or instrumental use. There is ample evidence that RPPs contribute to increased access to research evidence (Tseng, 2012; Welsh, 2021; Wentworth et al., 2017); however, a better understanding of actual research use can help RPPs fully realize their capacity-building potential for ongoing research-informed improvement (Bryk et al., 2015).

Our work also contributes to literature describing the processes by which RPPs mediate the relationship between research and practice and the conditions for greater success in doing so. Rather than simply translating research into practice, the relationship between the two is better characterized as “non-linear” (Tseng & Nutley, 2014) and an interactive process over many formal and informal engagements (Conaway, 2020). Successful partnership efforts can be measured along multiple dimensions (Henrick et al., 2017). Even when partnerships design and execute a policy- and practice-relevant research agenda, integrating research findings into practice can be challenging, especially when findings are unanticipated, not deemed relevant to the decision at hand, or insufficient (Farrell et al., 2021).

Setting and context

JCPS is a large, urban school district located in Louisville, Kentucky. With nearly 7,000 teachers serving more than 95,000 students in 165 schools, JCPS is by far the largest public school system in Kentucky and among the largest in the United States. The district’s 2021–22 budget surpassed $1.8 billion, not including Elementary and Secondary School Emergency Relief (ESSER) funds, with per-pupil spending in excess of $16,000. The district follows a school-based decision-making (SBDM) model that gives each school’s SBDM Council wide-ranging authority over policy creation, hiring, and budgeting (Jefferson County Public Schools [JCPS], 2018).

The relationship between JCPS and the university researchers dates back to 2012 when a curriculum audit conducted by Phi Delta Kappa International (2012) recommended that the district build capacity for cost-effectiveness and cost-benefit analysis to evaluate its programs and to inform decisions about program continuation, expansion, modification, or termination. The district reached out to the Center for Benefit-Cost Studies of Education, then based at Teachers College, Columbia University, about receiving such training. In 2013, Dr. Shand, then at Teachers College, delivered a workshop on economic evaluations of educational programs to JCPS’ research department. Subsequently, when JCPS research specialist Dr. Yan started developing cycle-based budgeting and AROI for JCPS, he engaged in ongoing dialogue with the university-based researchers about applying cost-effectiveness analysis in practice, the cycle-based budgeting approach, and the AROI method. A seminal meeting in which JCPS and Teachers College researchers compared and contrasted methods used in cycle-based budgeting and cost-utility analysis clarified joint interests and led to a practitioner publication on how to prioritize budget decisions (Yan & Hollands, 2018). Dr. Yan attended additional methods training on cost-effectiveness and cost-benefit analysis at Teachers College in 2018 and met with Dr. Hollands and Dr. Shand. The mutual decision to apply for the research-practice partnership grant in 2018 grew from these sustained discussions and engagements.

The JCPS budget process in 2018

At the start of the RPP, the district’s budget process was already undergoing changes as a result of new leadership and the emergence of improved systems to document and monitor investments. The bulk of the JCPS budget at the time was allocated to schools according to unweighted student enrollments, allowing schools a fair degree of autonomy over the use of those funds under SBDM. Schools were required to use funds in adherence to basic staffing model parameters approved by the Kentucky Department of Education but could also request budget “add-ons” such as additional special education teachers or mental health counselors to address school-specific student needs.

The district had recently developed and implemented cycle-based budgeting (Yan, 2022) and an online Investment Tracking System to improve the documentation and monitoring of discretionary investments with respect to alignment with district priorities and effective use of funds. The district office made most of the discretionary investments, but some were made by schools. The district adopted cycle-based budgeting to evaluate discretionary investments in multiyear cycles: after 1–5 years, each investment comes up for end-of-cycle review, at which time decision-makers are expected to review cost, outcome, and other data to help them decide whether to continue, discontinue, and/or modify the investment. External grant-funded initiatives are not part of the cycle-based budgeting process and are not documented in the Investment Tracking System. However, the district is moving toward adopting a similar review process for the entire budget.

Funding decisions at JCPS are “stepped,” meaning that requests generally funnel through program leaders to assistant superintendents first, then to department chiefs, and finally to the chief financial officer (CFO) and superintendent before ultimately being approved or denied by the board of education. Individuals at each level of this process are responsible for prioritizing the budget requests within their purview and deciding which investments to advance. For the most part, program leaders and assistant superintendents focus primarily on their own program areas. In contrast, high-level decisions about allocations between different priority areas are primarily the purview of the board, the superintendent, and, to a lesser extent, the CFO and cabinet (a district office leadership team of department chiefs).

When our RPP was being established, new district leadership desired greater consistency of educational program offerings and services across the district to facilitate evaluation and ensure more-equitable access to resources and opportunities among students from different schools. Greater consistency of offerings would limit the number of requests for new investments from individual schools. At the same time, the district was under pressure from the media and the state to reduce central office costs, leading to a complex dynamic in which decisions were being centralized and streamlined while resources were being reallocated from central office to schools.

Description of our RPP

The overall goal of our RPP was to explore cycle-based budgeting and the use of AROI to inform educational decision-making by providing an easy-to-calculate and simple metric that incorporated information about the effectiveness of budgetary investments relative to their costs. Specifically, that entailed assessing the evidence cited in budget requests, assessing the viability of calculating AROI with existing evidence and data, calculating AROI, cross-validating AROI with alternative metrics, and determining how our findings had affected educational decisions. Crucial for the completion of these tasks, the JCPS team included a research analyst devoted full-time to the RPP who assembled, cleaned, and analyzed data. This district researcher served as a liaison between internal and external researchers and programmatic, budget, and finance teams within the district, including facilitating end-of-cycle review meetings along with other members of the district’s research and evaluation team. The full RPP team initially comprised three university-based researchers supported by several research assistants, the embedded researcher at JCPS, and four members of the district’s research and evaluation team. The larger team met monthly via videoconference, with subteams working on specific analyses meeting more frequently. The external research team made two site visits to JCPS for team meetings and interviews or for training sessions. Additional in-person meetings were held at conferences.

We used two data sets to develop and cross-validate a practical AROI metric: the Intervention Tab of the district’s student information system, which contains state-required information about student-level reading and math interventions, and the district’s online Investment Tracking System, which documents investment items funded from the general budget fund. To validate our AROI approach, we compared AROI metrics for three district programs—Reading Recovery, Restorative Practices, and a school nurse program—with results from two other methods: value-added analysis (VAA; see Shand et al., 2022) and cost-effectiveness analysis (CEA; see Hollands, Leach, et al., 2022). We have described the three methods and cross-validation process in detail elsewhere (Hollands, Shand, et al., 2022); the focus here is not on the methods but on how our RPP affected decision-making and program implementation.

Initial interviews about JCPS budget decision-making process

Interview methods

At the outset of the RPP in 2018, we conducted 11 semistructured interviews with district leaders to understand how budget decisions were made and how they felt the process could be improved. These interviews, in conjunction with later interviews conducted in 2021 and described below, were part of our initial data collection plan and serve as the primary data source for this study on the RPP itself. Our original aim for the interviews was to understand how evidence was used to inform budgetary decisions initially and how that changed over the course of the partnership. Interviewees were selected to represent a cross-section of the senior leadership at the district who would be involved in making decisions about the district’s budget. The interview protocol was iteratively codesigned by the RPP team and tested with two leaders. The research partners interviewed 10 central-office administrators, including assistant superintendents and department chiefs, and a member of the board of education.

The interviews were recorded and transcribed, and each interviewer took detailed notes and participated in an initial synthesis of themes soon after the interviews were conducted so that preliminary findings and recommendations could be shared with the district quickly. We later reviewed these notes in greater depth, adding details from the audio recordings. We conducted a content analysis of the expanded notes by identifying key themes that emerged (White & Marsh, 2006). Two coders independently reviewed three interviews and compared themes before proceeding; five of the initial interviews were independently themed by two analysts, who annotated text passages with corresponding themes. The analysts then compared and discussed notes, finding high agreement in the themes identified. Because the themes were inductive and not based on a predetermined set of codes, we did not calculate a specific measure of interrater reliability; however, there was no substantive disagreement among coders about themes. The lead author themed the remaining interviews and synthesized themes from all interviews, noting patterns of convergence and divergence across interviewees as well as recurring themes and challenges. The second coder reviewed the synthesis, and the JCPS team members reviewed a synopsis of themes.

Findings from initial interviews

Interviewees generally reported that the district had begun to tighten up what was previously perceived as a “carte blanche” process for requesting and receiving funds beyond the standard allocation. Senior leadership aimed to limit the number of budget requests by requiring requesters to collaborate with the research and evaluation department to provide more-detailed information at the outset, encouraging greater needs assessment and alignment with district priorities, requiring program and department heads to apply greater scrutiny and to prioritize requests before advancing them to the cabinet and the superintendent, and limiting requests to a specific time of the year to facilitate evaluation and comparisons between requests.

Criteria and evidence used to inform decisions about budget requests

When evaluating requests for new investment items, most interviewees emphasized that they considered the need being addressed and the proposed strategy’s alignment with high-level district priorities. However, there was not always consensus on how to assess needs, what the highest priority needs were, or how to compare investments that met very different needs or had different intended outcomes. Needs being addressed included students’ safety and well-being, academic outcomes, and practical needs such as operations or staff professional development. However, one interviewee noted that the needs addressed by investments were seldom tied to the district’s formal needs assessment process; instead, the relative importance of these needs and the extent to which investments adequately addressed them appeared to be largely based on professional judgment.

Many interviewees also mentioned the importance of an investment’s impacts on intended outcomes, which could include proximal outputs (e.g., programs being implemented or students being served as intended) or more-distal student learning outcomes. The district had recently adopted the NWEA Measures of Academic Progress (MAP) periodic assessment as a universal screener for measuring student progress and monitoring program effectiveness formatively. Results from this screener were used to monitor existing programs. However, several interviewees noted challenges with isolating the effects of individual programs when so many initiatives were happening simultaneously across the district. These interviewees also noted the difficulty of measuring the effects of nonacademic investments, such as those in operations, and comparing the outcomes of unlike investments. For new investments, for which the district did not yet have any of its own data on performance, interviewees claimed they considered evidence that an intervention could have an impact. Often, this came from informal sources such as professional expertise and the experience of peer districts as well as an assessment of whether there was an implementation plan that could lead to the desired effect. Two decision-makers suggested that new investments were more likely to succeed if they built on existing district or school investments, which could provide a foundation and a record of success. Although there were few direct mentions of costs, sustainability was raised as an important issue, referring, in this case, to the district building internal capacity to continue an intervention without relying on outside funding and vendors. Decision-makers were also concerned with scale and scalability, tending to prefer programs that reached larger numbers of schools and students due to concerns about equitable access and the view that such programs would be more feasible to evaluate.

Other criteria used in making funding decisions were more distal from tangible impacts but still important to the district. These included several mentions of political considerations, such as appeasing influential funders and community leaders, with several programs being labeled as “taboo” or unable to be cut even if they failed to improve student outcomes. Leaders also often sought to protect individuals’ jobs and thus were reluctant to cut programs if those cuts might lead to layoffs or personnel reallocation. Some decision-makers also aimed to ensure nominal fairness between departments with a greater focus on “shared sacrifice,” such as across-the-board cuts, than on investing resources or making cuts according to how effectively programs met student needs. Difficult decisions to end ineffective programs or those underperforming relative to their costs were reportedly easier to make in times of leadership changes, when there were changes in funding source or availability, and when programs were not highly personnel-intensive and thus did not require significant staff reallocation.

Identified needs and suggestions for improvement

Although the district was in the process of laying the groundwork for more systematic, strategic, and evidence-based budgetary decisions, interviewees identified several opportunities for improving budgetary decisions to render them more transparent, more data driven, and less political. Almost every interviewee suggested that the current process was too political, citing several examples of ineffective programs that persisted due to political sensitivities. However, there was less clarity on how to address this dilemma. The general perception was that the budget process and criteria for evaluating which budget requests to approve were becoming more consistent, but there was still room for improvement. For example, the evidence provided to support budget requests was not always sufficient or rigorous enough to evaluate whether they met an important need, aligned with district priorities, or impacted student outcomes.

A key area for further development was overcoming status quo bias; there were concerns that it was easier to make decisions about new programs than to cut existing ones. Some interviewees went as far as to suggest developing each year’s budget from scratch, that is, zero-based budgeting:

For the most part, we rarely touch anybody’s what they already got … the way this worked for several years when I was here, is literally each … department head … like your office gets a discretionary budget. So … I’ve got a budget, when it gets sent to me, it’s a spreadsheet of last year’s numbers, asking me how I’d like to allocate that money. Instead of a blank spreadsheet that says, what’s it going to take to run your office this year …

Several interviewees noted that clearer protocols and greater transparency about the decision-making process and factors affecting the decision outcome (fund, extend funding, discontinue, or reduce funding) could help depersonalize such decisions. For example, one interviewee indicated, “When you have very clear reasons why you are selecting things and what the requirements are, we have seen much more transparent, trustworthy processes come out of that and we have seen greater consistency from that process.”

Participants disagreed about how much input department leaders should have in one another’s budgets, with one complaining about “too much democracy” and another arguing that the process could be more robust if it were more of a “contact sport,” whereby district leaders could challenge one another’s proposals to ensure resources were aligned with district priorities. However, there was widespread agreement that a more transparent process was needed to ensure fairness and equity between major programmatic areas and that greater incentives were needed to encourage cross-departmental collaboration rather than competition for scarce resources.

While some decision-makers felt they received sufficient information to help guide their funding decisions, there was widespread recognition of the need for more evidence, which could include program impacts on student learning or other evidence on factors such as program implementation. Some decision-makers noted that confounding factors make it difficult for internal evaluation to isolate the effects of a single intervention but that more-rigorous external studies may lack face validity in the district due to differing contexts. Furthermore, decision-makers do not always have the luxury of time to wait for evidence before making a decision.

Although the district does conduct internal evaluations, it only has the capacity to conduct large-scale, formal evaluations for a handful of programs each year, which creates a need for additional systematic evaluation, either by building capacity within programmatic departments for rigorous self-evaluation or connecting with appropriate outside research. One interviewee asked, “How do we really measure these programs systematically to see, is it impacting student achievement?” A further challenge was providing sufficient and timely information that is easily digestible and does not overwhelm decision-makers. Another interviewee said, “I think we still have a ways to go in terms of providing information in a format that allows people to make decisions.”

In addition to evidence on impact, some decision-makers desired more and better information on the fidelity of implementation and how staff spent their time in schools. This increased information would guide continuous improvement efforts, support better program planning and monitoring, and provide better evidence on costs that would help decision-makers understand a program’s effect relative to its resource demands as well as the resource commitment relative to the level of challenge and urgency of need. One interviewee acknowledged that such improvements take time to implement:

We are getting very good at the first part [measuring impact], and I think we are scratching the surface and beginning to kind of see movement on the latter part [monitoring and follow-up]. Not shocking as we only started to ask people to enter things [in the budget request system] 2–3 years ago.

The partnership activities

Primary RPP activities

After the baseline research period, the RPP team conducted analyses and created tools and metrics to provide more evidence to guide decisions and address some of the needs raised in preliminary interviews, as described above. We highlight several key assumptions that guided our work. Although we did not set out to test these assumptions formally, as discussed in the next section, the process of calculating AROI and conducting CEA and VAA effectively served to test them.

Informal tests of underlying assumptions

At the outset, we assumed that the Intervention Tab and Investment Tracking System data sets, combined with other student administrative data, contained sufficient data to calculate AROI and conduct VAA. While this was largely true for the Intervention Tab, we quickly discovered that many budget request proposals in the Investment Tracking System were missing crucial information regarding program participants, goals, and/or measures, precluding both AROI and VAA. At the same time, we realized that our VAA approach required more-granular participation data than AROI (Leach, Shand, et al., 2022), which necessitated contacting schools and district departments. Although JCPS colleagues provided sufficient information to conduct VAA for many investments, this process revealed several additional issues with many of the budget request proposals: a missing or misspecified theory of change or logic model (F. Hollands et al., 2020), missing participant rosters, mission drift between proposals and actual spending, and misalignment between goals and activities (Leach, Shand, et al., 2022).

Because we planned to use the ingredients method (e.g., Levin et al., 2018) to estimate costs for CEA, as opposed to using program expenditures (i.e., approved budget amounts) for AROI and VAA, we rightly anticipated that conducting the three CEAs would require substantially more time per program. Furthermore, gathering the cost data needed for CEA would require considerable assistance from program personnel. Because of this, we secured support from the Chief of Staff for each program’s JCPS division prior to contacting Reading Recovery, Restorative Practices, and school nursing program directors for assistance. Given the gracious cooperation of program directors and the prior approval from their respective division chiefs, we assumed our findings would be useful to and used by both program directors and budget decision-makers, even though none had requested a CEA. This assumption turned out to be partly true but depended on each program director’s openness to receiving nonpositive findings and the director’s autonomy to implement recommended program improvements.
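For context, the ingredients method referenced above builds program costs from the resources a program actually uses rather than from budgeted amounts; a minimal sketch of the calculation, again in notation of our own, is:

\[
C \;=\; \sum_{i=1}^{n} q_i \, p_i,
\]

where each ingredient \(i\) (personnel time, facilities, materials and equipment, training, and other inputs) is used in quantity \(q_i\) and valued at its market price \(p_i\). The resulting economic cost \(C\) underlies the CEAs, whereas the AROI and VAA calculations in this study relied on approved budget amounts, which is why the CEAs demanded far more data collection per program.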

Overall, we expected that if AROI, CEA, and VAA results for each of the three programs pointed to the same budget decision, then, for simplicity, we would prefer to report only AROI metrics to decision-makers and to use only AROI in the future. This expectation rested on the relative ease of conducting AROI, especially compared with CEA, and on its capacity to produce easily understood metrics in a timely fashion to inform budget decisions. In retrospect, our views regarding the potential use of evidence produced by our RPP were too simplistic. For example, we did not anticipate the value of the rich information on implementation fidelity that we obtained in the process of conducting CEA and VAA (e.g., Leach, Hollands, Yan, et al., 2022; Leach, Shand, et al., 2022). Because our preliminary interviews with decision-makers revealed an appetite for additional evidence on the relative contributions of the district’s programs to student outcomes, we presumed decision-makers would be receptive to using AROI metrics to at least partially inform budget decisions. While our assumption here was not explicitly wrong, no official mechanism existed for introducing evidence of returns into the budget decision-making process. Although JCPS had fully implemented its cycle-based budgeting model by the time our first presentable AROI metrics were available in 2020, the district did not have a formal process in place for reviewing end-of-cycle budget items in the Investment Tracking System. We also underestimated the challenges of helping decision-makers use and interpret an AROI metric.

Applying results of research to programmatic and policy decisions

While the production of AROI, VAA, and CEA metrics for Reading Recovery, Restorative Practices, and the school nursing program was primarily for AROI validation, a secondary aim was to provide evaluative information to program personnel. While we did not find evidence to support the cost-effectiveness of the programs as implemented, and issues with program assignment, comparability, and gaps in data meant that AROI and VAA were not as applicable or informative as we had hoped, the process of performing these studies yielded valuable insights. These insights informed changes to program implementation, future data collection, and the way results would be communicated to stakeholders, including an updated Investment Tracking System that requires budget requesters to complete a logic model. Table 1 summarizes the outcomes of the RPP, with each outcome discussed in more detail in the subsequent sections. Because the RPP itself was not a randomly assigned intervention but rather a collaboration that arose organically out of the district’s own efforts to improve evidence-based decision-making, we cannot say with great certainty what changes were caused by the RPP directly.

Table 1. Summary of outcomes of RPP.

Direct effects: Programmatic changes

Our CEA of the school nursing program revealed no significant effects of school-based licensed practical nurses (LPNs) on student attendance or chronic absenteeism despite substantial investment in the program (Leach, Hollands, Stone, et al., 2022). During the course of the CEA, we uncovered three key factors that likely contributed to our findings. First, JCPS did not have an explicit theory of change (or logic model) for the school nurse program. Second, the district did not identify and train nurses on standardized, evidence-based practices related to its desired outcomes. Third, the district did not have a monitoring plan in place to ensure best practices were being implemented. In response to these findings, the JCPS district health manager and JCPS RPP members worked together to develop an evidence-based logic model for the nursing program. Key activities included the creation of a standard operating manual and additional leadership training for six advanced practice registered nurse (APRN) supervisors. The JCPS team also worked with the health manager and APRNs to develop a standardized walkthrough tool for monitoring school nurses and with the district’s Pupil Personnel department to provide training for schools to encourage the inclusion of nurses on school-based attendance teams as a best practice for reducing chronic absenteeism (cf. Rankine et al., 2021). Notably, JCPS was recently awarded a research grant to rigorously evaluate the cost-effectiveness of these changes, reflecting a new willingness to consider randomized controlled trials, which were previously resisted.

The CEA of Restorative Practices found that the program was modestly successful at reducing suspension of Black male students after 2 years but was not reducing adverse behaviors or improving school safety and climate overall (Hollands, Leach, et al., 2022). A key discovery was that an important component of the program, restorative conferences, was not being implemented due to staff time constraints. Because the program is deeply entwined with the district’s strategic priorities to reduce disproportionality in disciplinary actions across racial groups and to reduce reliance on punitive and exclusionary discipline, the district remains committed to the program but is trying to scale it up at reduced cost by bringing training in-house. The district is also working with a partner organization to support implementation, including restorative conferences, in eight schools. The CEA of Reading Recovery found inequitable distribution of early literacy teachers relative to student needs, which the district is also taking steps to address.

The researcher-practitioner relationships established as a result of the CEAs contributed to program improvements for all three programs studied (Reading Recovery, Restorative Practices, and the school nursing program) and to capacity-building for ongoing research and evaluation efforts. However, several factors contributed to the most substantial changes being made to the school nursing program. These factors suggest contextual and programmatic circumstances that lend themselves to program improvement through RPPs. First, there had been a recent leadership change in school nursing. The new leader had research training, was amenable to change, was actively seeking to improve nursing service delivery, and had positional authority to make changes. Additionally, because school nursing is a professional practice with more discretion in implementation compared to prescriptive and well-defined interventions and less rigorous research on effectiveness, there is greater room for programmatic improvement in response to new research findings.

Indirect effects: Budget process and policy decisions

Toward the end of the 2020 budget cycle, JCPS RPP members sought to develop a formalized process for incorporating evidence into end-of-cycle budget decisions. The team focused on investments initiated by central office departments that were up for review, working iteratively within the RPP and with JCPS colleagues to create summary cards for each investment that included the length of the budget cycle, initial budget allocation versus actual expenses incurred, target populations, expected outcomes, and when available, observed outcomes. Because the formal review process itself was new, the JCPS team deemed it best at that time to address only the concepts behind AROI—scrutiny and review—rather than introduce a new metric simultaneously with the summary-card innovation. Therefore, although we computed AROI, the summary cards reported whether the investment had achieved its stated goal (“Yes” or “No”) instead of the AROI metric itself. In cases in which we lacked sufficient information to assess goal achievement, we marked goal attainment “Unclear.” The cards were distributed to division leaders, who shared them with program staff. This gave program staff an opportunity to provide their own evidence in support of their desired budget decision before division leaders made the official decision about whether to apply for continued funding. This process was disrupted when district operations went remote in response to the COVID-19 pandemic; however, many of the initial lessons learned were incorporated into the district’s updated Investment Tracking System, ITS 2.0, described in greater detail below.

Follow-up interviews

Follow-up interview methods

In Spring 2021, toward the end of the RPP’s third year, we conducted follow-up interviews with 10 senior JCPS decision-makers to elicit their perspectives on how the budget decision-making process had changed and what additional improvements were needed. The interviewees were the same as in 2018, except that one department chief had left the district and the board member was unavailable. We also interviewed a program director to help capture additional perspectives on changes to the budget decision-making process. We analyzed the interviews in a similar manner to the initial interviews but used full interview transcripts.

Follow-up interview findings

How the budget process changed

Since the initial round of interviews in 2018, the budget process underwent significant changes. The now-more-established senior leadership had streamlined its strategic priorities by focusing efforts on three pillars: racial equity, school culture and climate, and the student-led portfolio-based assessment program known as the “backpack of success skills.” These priorities were supplemented by more-concrete plans for action to achieve desired “future states” for the district’s areas of strategic focus. The COVID-19 pandemic significantly disrupted the budget process in several ways, including creating significant new health, safety, academic recovery, and mental health needs. It led to an infusion of temporary emergency resources through federal pandemic relief aid and interrupted regular assessment and data collection efforts.

Against this backdrop, one interviewee described the budget process as “cleaner” in 2021 than it was in 2018, with greater districtwide cohesion around the overall goals of the district, more diligence in setting clear goals for each investment, more collaboration across departments, and more emphasis on monitoring student learning outcomes for continuous improvement, including some evidence of more data-informed decision-making. One interviewee noted better alignment between needs assessment, monitoring goals, and allocating necessary resources to achieve goals: “You can’t move the needle with guessing. You can’t, and data doesn’t lie. We have to look at it, and we have to figure out, based on the data, what is it going to take?”

Our interviews also revealed evidence that budget requests, evidence supporting them, funding decisions, and ongoing monitoring of investments were better documented in the Investment Tracking System in 2021 relative to 2018:

I think there’s been a more concerted effort to align requests into our Investment Tracking System. … In the past, there’s been requests that come up, and money’s been allocated, and it hasn’t necessarily gone through that same process of documenting and tracking that in terms of what’s the need, what’s the goal, what are you planning on doing with this?

Relatedly, several interviewees mentioned a shift away from off-cycle requests to a more consistent budget timeline, allowing for more systematic review and comparison among investments. There was also a general increase in attention to evidence to support new investment ideas. However, evidence was still defined broadly to include experiences of other districts and implementation data, with relatively less discussion of evidence of impact on student outcomes, in part due to data limitations exacerbated by the pandemic. Finally, the already strong collaboration between program directors and the research and evaluation department deepened, with greater emphasis on program directors building capacity for self-monitoring and evaluation and seeking support from the research and evaluation team to build that capacity.

Changes to criteria and evidence for review of investments

Many of the same criteria for evaluating new and continuing investments were still being applied to decisions 3 years later but with additional criteria and new ways of assessing the budget requests against the criteria. Alignment with student needs and high-level district priorities remained the most frequently mentioned and arguably most important criteria. However, providing evidence of impact on student academic and nonacademic outcomes was also frequently reported. There was also greater emphasis on using the budget process itself to ensure that investments are well-planned and supported to be successful. It remained true that the district applied greater scrutiny to new programs and was unlikely to discontinue existing programs, although interviewees identified some underperforming programs that had been discontinued or thoroughly redesigned. In the absence of formal assessment data because of the pandemic, external research served as evidence of impact or potential impact for some investments. This included identifying programs considered “best practice” in a peer district. For high-level priorities such as the district’s focus on racial equity, however, there was growing willingness to apply rigorous criteria, such as evaluating potential new investments against the district’s Racial Equity Analysis Protocol (REAP).

Decision-makers placed greater emphasis on factors they believed would contribute to program improvement and success, including having a strong theory of change, a clear plan, and implementation monitoring. One interviewee noted, in relation to the need for and challenge of developing and testing a fully specified theory of change:

The strategies were very heavy in PD, but there wasn’t necessarily a clear through-line from the PD to that student outcome metric that we were looking at. Again, disproportionality in suspensions, how can we connect all of the professional development that’s happening around Restorative Practices, implicit bias, PBIS, all of those things? How do we know that that’s actually leading to change in teacher practice, which is leading to a change in student outcomes?

Several interviewees proposed additional criteria and metrics to consider, including financial feasibility, availability of needed resources, and monitoring of time use given the personnel-intensive nature of many investments; for example, one interviewee said, “I have some reports that we do, basically, it’s just how people spend their time, we call it organizing for impact. I wish that there was more that I had on that.”

Given capacity constraints, in some cases, evidence to evaluate investments against these criteria and support decision-making was gathered via state, national, and role-specific networks, with program-specific networks being reported as particularly helpful in sharing ideas. One decision-maker reported increased reliance on formal research repositories:

We use sources like EdReports, What Works [Clearinghouse]. We have a list of sources, and then our Kentucky Department of Education has—the ESSA resources have to be researched but evidence-based. We use those lists and really try to be true to those.

How the RPP affected evidence use in the budget process

Several interviewees mentioned the end-of-cycle review process, the Investment Tracking System, the data provided as part of cycle-based budgeting, and structured conversations with the district members of the RPP team as critical components of program review decisions. Relatively few programs were discontinued because of the end-of-cycle review. However, interviewees noted that the process was helpful in informing program improvement, building research and evaluation capacity across the district, and contributing to refinements in the budgetary review process itself. One interviewee said the RPP team was “looking at the right things” and communicated them to decision-makers in a way that was “easy to understand.” The RPP team focused attention not just on financial expenditures but also on personnel time and other resources needed to implement a program or strategy alongside impact on student outcomes. This approach helped decision-makers see connections between how resources were contributing (or not) to impact and whether a theory of change needed to be revisited. One interviewee said, “It’s more than just cash moving from one bucket line item to another, it is resourcing staff and teachers and others to help train.” These clearer connections, along with more-continuous availability of data, contributed to growth in formative implementation checks (e.g., whether personnel time was allocated to priority activities as specified in the theory of change).

Continued needs and questions

A key remaining challenge identified by JCPS decision-makers is how to agree on discontinuing ineffective programs. Political considerations, as well as concerns for individual jobs, still dominate, as illustrated by a hypothetical objection that one interviewee quoted to eliminating staff from a program that was failing to show evidence of effectiveness: “Yes, but Ms. [X] is the best part of this elementary school, and everyone loves her. Do you know she does the Christmas Bazaar? You can’t take her away from us.” Other interviewees suggested that this dilemma could be eased by reallocating personnel from one program to another rather than resorting to outright layoffs.

Interviewees argued that tighter alignment between the district’s annual needs assessment and spending plans could drive further improvement. Relatedly, continued efforts to support stronger theories of change, monitor implementation, and provide programs with the supports they need to be successful, such as ensuring adequate staffing, training, and coaching, could build success. One interviewee indicated the importance of such continued efforts: “I think support. I think we do a lot of stuff and then we leave. We abandon people, and if they’re not effective, then it’s a waste typically.”

Changes to budget request and investment tracking processes

The RPP team’s findings from the initial interviews, from a document analysis of evidence use in budget requests (F. Hollands et al., 2020), and from lessons learned from two rounds of end-of-cycle reviews informed a redesign of elements of the budgeting process. Funding requests must now describe the problem of practice, the root cause of the problem, and a logic model that defines how the newly requested resources, together with existing resources, are expected to solve the problem through activities and implementation monitoring. The request form uses familiar language from the annual Comprehensive School Improvement Plan (CSIP). Additionally, requesters must now indicate the appropriate ESSA evidence tier level (U.S. Department of Education, 2016) and cite relevant evidence to support the strategies to be enabled by the investment, deepening an evidence requirement that was already in place. To promote the use of evidence-based practices, the district’s research department compiled a list of suggested practices.

The RPP also contributed to technical changes to the budget request form and its submission process, requiring requesters to identify target participants, planned activities, and strategies for measuring success or to seek help from the research and evaluation department in meeting this requirement. This process provides a vehicle for more routine collaboration and capacity building. Before implementing these changes for the district’s general fund, JCPS piloted them with the third round of ESSER funds received from the federal government. Encouragingly, school and district leaders did not push back against providing additional information in their budget requests. Some used the planning tool to build strong logic models on their own, while the research department was able to provide support to strengthen logic models for others. The same approach will be applied to the district’s general fund. The goal is to present the investment tracking data in a way that allows school and district leaders to see, not only annually but also cumulatively, the alignment between improvement priorities, investments, and impacts, as well as the coherence and efficiency of those investment strategies.

Discussion and conclusion

Our RPP succeeded in developing, refining, and validating an AROI metric and implementing improvements to the budget request and tracking processes. However, the impact of the work on decision-making was unexpected, as our findings were not directly used to make high-stakes decisions about funding or discontinuing programs. Rather, the research findings and recommendations influenced more process-oriented and program implementation decisions. These included identifying and filling gaps in implementation data, focusing on robust theories of change, changing the budget request and review processes to encourage more-strategic uses of data, and establishing ongoing formal and informal interactions between research and programmatic staff. A major lesson of the RPP is the value of providing simple metrics and dashboards that unify several data points in one place to inform decision-making and provide program feedback. This pattern of research-evidence use aligns with Weiss’s (1998) observation that decisions are most likely to be informed by research and evaluation under certain conditions, such as when they are small-scale, relatively noncontroversial, and not accompanied by large contextual shifts.

We expected more instrumental use of evidence in collective decision-making, such as cabinet discussions or board presentations about allocating the annual budget. A key element affecting how evidence produced by the RPP was used was the composition of the team. Although one of the RPP team members was in the district’s cabinet and could influence decisions at that level, program adoption or elimination decisions in districts are not made by individuals; these decisions may, therefore, be harder to influence directly with evidence. Many factors go into starting, continuing, or ending a program, with many interested groups and politics playing a crucial role. Instead of influencing collective decisions, the RPP contributed to evidence use primarily at the individual and interpersonal levels (Henry & Mark, 2003).

Our embedded JCPS researcher, who has a strong research background and was devoted full-time to the RPP, served as a critical bridge between external researchers and program personnel, building trusting relationships, lending internal credibility to the enterprise, and helping to ensure our research activities were aligned with practical concerns. The researcher typically worked with individual program directors (e.g., of Restorative Practices and of school nursing) or small groups (e.g., three Reading Recovery Teacher Leaders) and, therefore, primarily contributed to applying evidence from the RPP to program-specific continuous improvement and implementation decisions, as opposed to program-adoption decisions. In hindsight, given their critical role in facilitating and influencing budget allocation decisions, we should have more consistently engaged the district’s chief financial officer and budget director in our research-practice activities. More direct contact with program personnel than with senior decision-makers or budget and finance teams may explain our greater success in achieving changes in program implementation than in influencing major budget allocations.

Our experience suggests that RPP teams would ideally include multiple types of practitioners—namely, budget decision-makers, program personnel, and internal researchers—to ensure direct influence at the individual, interpersonal, and collective levels. This approach may be implemented feasibly by adopting a flexible RPP structure whereby internal researchers embedded in the practitioner organization serve as dedicated analysts and key liaisons while other practitioners participate when appropriate. Such a structure may enable embedded use of evidence in what Yoshizawa (2022, p. 1162) characterizes as an influence-building process, whereby “routines and relationships” help establish meaningful and ongoing engagement with research. Further study of the “micro-processes” (Little, 2012, p. 144) by which decisions are made at various levels of the organization can identify the most-salient opportunities for ongoing engagement of RPPs with evidence-based decision-making processes.

Although RPPs often focus on increasing the use of evidence-based practices and improving student outcomes, another potential role for RPPs in facilitating research use in educational decision-making is helping districts identify needs and establish strategic priorities. While, over the course of our RPP, the district tightened alignment between needs, priorities, and investments, the district’s needs and priorities did not always appear to be identified using data and evidence. Especially in the case of strategic priorities, budget decisions appeared to be more based on leaders’ values, prior experiences, and perceptions of which problems could realistically be addressed. RPPs could help districts design mechanisms to collect and use local data routinely for the purpose of identifying needs. Partnerships could also work with district leaders to agree on a set of criteria for establishing priorities that are responsive to these needs. One of these criteria should be the consideration of research evidence, but others could be more practical, such as feasibility of execution and likelihood of success. Finally, although nearly all aspects of the budget request and review process are grounded in evidence, data, and research of one kind or another, those constructs are defined broadly with very different understandings across decision-makers. Structured end-of-cycle review conversations between research and evaluation personnel and program directors provide opportunities for further growth by acting as a launching point for developing shared language and building capacity for evidence use.

Acknowledgments

We are grateful to the anonymous interview participants for their time and candor and to the Guest Editors and anonymous referees for this special issue for their helpful comments and feedback on the paper. We also appreciate comments from Cara Jackson and participants in sessions at the Society for Research on Educational Effectiveness and the American Educational Research Association where prior versions of this work were presented.

Disclosure statement

No potential conflict of interest was reported by the authors.

Additional information

Funding

This work was supported by the Institute of Education Sciences, U.S. Department of Education, through Grant R305H180003 to Teachers College, Columbia University. The opinions expressed are those of the authors and do not represent views of the Institute or the U.S. Department of Education.

Notes on contributors

Robert Shand

Robert Shand is an Assistant Professor in the School of Education at American University. He received his PhD in economics and education from Teachers College, Columbia University. His current research focuses on teacher improvement through collaboration and professional development and how schools and teachers use data from economic evaluation to make decisions and improve over time. He is a coauthor of the third edition of Economic Evaluation in Education: Cost-Effectiveness and Benefit-Cost Analysis, and his work has been published in outlets such as the American Journal of Evaluation, Educational Evaluation and Policy Analysis, and the Journal of Research on Educational Effectiveness.

Stephen M. Leach

Stephen M. Leach is a Grant Developer with Jefferson County Public Schools. Steve is a member of the JCPS Program Evaluation Committee and enjoys conducting practice-focused quantitative research in his spare time. Steve’s research interests include psychometrics, program evaluation, and continuous improvement. Whether applying for grants, estimating statistical models, or coaching elementary soccer, his goals are to have fun, try new things, and get better. Steve earned a PhD in educational psychology measurement and evaluation and a bachelor’s degree in mathematics from the University of Louisville.

Fiona M. Hollands

Fiona M. Hollands, PhD, is the Founder and Managing Director of EdResearcher, an independent education research and evaluation organization. Previously, Fiona was a Senior Researcher at Teachers College, Columbia University, in the Department of Education Policy and Social Analysis. Fiona collaborates with other researchers and with education agencies on grant-funded research, R&D, and technical assistance projects to improve the use of evidence by education decision-makers. This includes applying cost-effectiveness analysis and cost-utility analysis to educational programs to optimize the use of available resources. Fiona applies qualitative methods to understand decision-making processes and how they can be best informed and improved.

Bo Yan

Bo Yan is a specialist at Jefferson County Public Schools in Louisville, KY. Over the past 10 years, he has been leading the effort to develop, implement, and improve cycle-based budgeting to help district leaders make informed decisions on resource use and programmatic adjustments based on alignment, coherence, and return on investment. Previously, he conducted research and evaluations for Blue Valley School District in Overland Park, KS. Recently, he has been exploring how to help districts move toward a more balanced approach to improvement through both addition and subtraction. Bo received his PhD in learning, technology, and culture from Michigan State University.

Dena Dossett

Dena Dossett, PhD, is the Chief of Accountability, Research, and Systems Improvement for Jefferson County Public Schools (JCPS), Louisville, KY. She manages the district offices of planning and systems improvement, research and program evaluation, testing, and resource development. She coordinates the district’s improvement plan and directs the district’s program evaluation work focused on key initiatives. She works closely with district and school leaders, universities, and community agencies to provide data that is useful, accurate, and timely. She has served the district in various roles for 25 years.

Florence Chang

Dr. Florence Chang is currently an Education Program Consultant in the Office of Continuous Improvement and Support at the Kentucky Department of Education (KDE). Before joining KDE in 2022, Chang was an Executive Administrator for over 10 years in research for Jefferson County Public Schools (JCPS), where she worked on program evaluation and continuous district and school improvement efforts.

Yilin Pan

Yilin Pan is an economist specializing in economic evaluation methods, including cost-effectiveness, cost-benefit, and cost-utility analysis, as well as impact evaluation and Bayesian statistics. Leveraging these methodologies, Yilin has worked with a wide range of stakeholders, from governments to educational institutions worldwide, to foster inclusive, systematic, and transparent decision-making in education. She is currently an Educational Programme Specialist at UNESCO IIEP. Previously, she worked for the World Bank and Columbia University. Yilin earned a B.A. and an M.A. from Tsinghua University and a PhD in economics and education from Columbia University.

References

  • Arce-Trigatti, P., Chukhray, I., & López Turley, R. N. (2018). Research–practice partnerships in education. In B. Schneider (Ed.), Handbook of the sociology of education in the 21st century (pp. 561–579). Springer.
  • Bryk, A. S., Gomez, L. M., Grunow, A., & LeMahieu, P. (2015). Learning to improve: How America’s schools can get better at getting better. Harvard University Press.
  • Coburn, C. E., & Penuel, W. R. (2016). Research–practice partnerships in education: Outcomes, dynamics, and open questions. Educational Researcher, 45(1), 48–54. https://doi.org/10.3102/0013189X16631750
  • Conaway, C. (2020). Maximizing research use in the world we actually live in: Relationships, organizations, and interpretation. Education Finance and Policy, 15(1), 1–10. https://doi.org/10.1162/edfp_a_00299
  • Farrell, C. C., Penuel, W. R., Coburn, C. E., Daniel, J., & Steup, L. (2021). Research-practice partnerships in education: The state of the field. William T. Grant Foundation.
  • Henrick, E. C., Cobb, P., Penuel, W. R., Jackson, K., & Clark, T. (2017). Assessing research-practice partnerships: Five dimensions of effectiveness. William T. Grant Foundation.
  • Henry, G. T., & Mark, M. M. (2003). Beyond use: Understanding evaluation’s influence on attitudes and actions. American Journal of Evaluation, 24(3), 293–314. https://doi.org/10.1177/109821400302400302
  • Hollands, F. M., Leach, S. M., Shand, R., Head, L., Wang, Y., Dossett, D., Chang, F., Yan, B., Martin, M., Pan, Y., & Hensel, S. (2022). Restorative practices: Using local evidence on costs and student outcomes to inform school district decisions about behavioral interventions. Journal of School Psychology, 92, 188–208. https://doi.org/10.1016/j.jsp.2022.03.007
  • Hollands, F. M., Shand, R., Yan, B., Leach, S. M., Dossett, D., Chang, F., & Pan, Y. (2022). A comparison of three methods for providing local evidence to inform school and district budget decisions. Leadership and Policy in Schools, 1–35. https://doi.org/10.1080/15700763.2022.2131581
  • Hollands, F., Yan, B., Leach, S. M., Shand, R., Dossett, D., Wang, Y., & Head, L. (2020, March 11–14). LEA use of evidence in budget decisions [Poster presentation]. Society for Research on Educational Effectiveness Spring 2020 Conference, Arlington, VA. https://www.sree.org/spring-2020
  • Hummel-Rossi, B., & Ashdown, J. (2002). The state of cost-benefit and cost-effectiveness analyses in education. Review of Educational Research, 72(1), 1–30. https://doi.org/10.3102/00346543072001001
  • Jefferson County Public Schools. (2018). School-based decision making. https://www.jefferson.kyschools.us/about/leadership/sbdm
  • Leach, S. M., Hollands, F. M., Stone, E., Shand, R., Head, L., Wang, Y., Yan, B., Dossett, D., Chang, F., Ginsberg, Y. C., & Pan, Y. (2022). Costs and effects of school-based licensed practical nurses on elementary student attendance and chronic absenteeism. Prevention Science, 24(1), 1–11. https://doi.org/10.1007/s11121-022-01459-0
  • Leach, S. M., Hollands, F. M., Yan, B., & Shand, R. (2022, February 3). Unexpected benefits of conducting cost-effectiveness analysis. Inside IES Research Blog. https://ies.ed.gov/blogs/research/post/unexpected-benefits-of-conducting-cost-effectiveness-analysis
  • Leach, S. M., Shand, R., Yan, B., & Hollands, F. M. (2022, February 15). Unexpected value from conducting value-added analysis. Inside IES Research Blog. https://ies.ed.gov/blogs/research/post/unexpected-value-from-conducting-value-added-analysis
  • Levenson, N. (2011). Academic ROI: What does the most good? Educational Leadership, 69(4), 34–39.
  • Levin, H. M., McEwan, P. J., Belfield, C., Bowden, A. B., & Shand, R. (2018). Economic evaluation in education: Cost-effectiveness and benefit-cost analysis. Sage.
  • Little, J. W. (2012). Understanding data use practice among teachers: The contribution of micro-process studies. American Journal of Education, 118(2), 143–166. https://doi.org/10.1086/663271
  • Penuel, W. R., & Farrell, C. (2017). Research-practice partnerships and ESSA: A learning agenda for the coming decade. In E. Quintero-Corrall (Ed.), Teaching in context: The social side of education reform (pp. 181–200). Harvard Education Press.
  • Phi Delta Kappa International. (2012). A curriculum management audit™ of the Jefferson County Public Schools, Louisville, Kentucky. International Curriculum Management Audit Center. https://www.jefferson.kyschools.us/sites/default/files/JCPS_Audit.pdf
  • Rankine, J., Goldberg, L., Miller, E., Kelley, L., & Ray, K. N. (2021). School nurse perspectives on addressing chronic absenteeism. Journal of School Nursing. Advance online publication. https://doi.org/10.1177/10598405211043872
  • Rennie Center for Education Research & Policy. (2012, October). Smart school budgeting: Resources for districts. https://www.renniecenter.org/sites/default/files/2017-01/SmartSchoolBudgeting.pdf
  • Rice, J. K. (1997). Cost analysis in education: Paradox and possibility. Educational Evaluation and Policy Analysis, 19(4), 309–317. https://doi.org/10.3102/01623737019004309
  • Shand, R., Leach, S. M., Hollands, F., Chang, F., Pan, Y., Yan, B., Dossett, D., Nayyer-Qureshi, S., Wang, Y., & Head, L. (2022). Program value-added: A feasible method for providing evidence on the effectiveness of multiple programs implemented simultaneously in schools. American Journal of Evaluation, 43(4), 584–606. https://doi.org/10.1177/10982140211071017
  • Tseng, V. (2012). Partnerships: Shifting the dynamics between research and practice. William T. Grant Foundation.
  • Tseng, V., & Nutley, S. (2014). Building the infrastructure to improve the use and usefulness of research in education. In K. S. Finnigan & A. J. Daly (Eds.), Using research evidence in education (pp. 163–175). Springer International. https://doi.org/10.1007/978-3-319-04690-7_11
  • U.S. Department of Education. (2016, September 16). Non-regulatory guidance: Using evidence to strengthen education investments. https://www2.ed.gov/policy/elsec/leg/essa/guidanceuseseinvestment.pdf
  • Weiss, C. H. (1998). Have we learned anything new about the use of evaluation? American Journal of Evaluation, 19(1), 21–33. https://doi.org/10.1177/109821409801900103
  • Welsh, R. O. (2021). Assessing the quality of education research through its relevance to practice: An integrative review of research-practice partnerships. Review of Research in Education, 45(1), 170–194. https://doi.org/10.3102/0091732X20985082
  • Wentworth, L., Mazzeo, C., & Connolly, F. (2017). Research practice partnerships: A strategy for promoting evidence-based decision-making in education. Educational Research, 59(2), 241–255. https://doi.org/10.1080/00131881.2017.1314108
  • White, M. D., & Marsh, E. E. (2006). Content analysis: A flexible methodology. Library Trends, 55(1), 22–45. https://doi.org/10.1353/lib.2006.0053
  • Yan, B. (2022). Cycle-based budgeting. https://cyclebasedbudgeting.org/
  • Yan, B., & Hollands, F. (2018). To fund or to defund: Making the hard decisions. School Business Affairs, 84(8), 11–13. https://www.wasbo.com/images/wasbo/documents/6/newsletter/Newsletter2018_12_December-Spread.pdf
  • Yoshizawa, L. (2022). The imposition of instrumental research use: How school and district practitioners enact their state’s evidence requirements. American Educational Research Journal, 59(6), 1157–1193. https://doi.org/10.3102/00028312221113556