November 1 Meeting Recap
Thank you for your time.
In sum: the fundamental decisions about which models to use and which measures to prioritize have shaped, and constrained, the range of possible outcomes in this process. The initial framing of 95% utilization as a benchmark appears to have anchored subsequent board discussions, while the way scorecard concepts like “equity” have been measured has essentially predetermined the results: the metrics correlate highly with one another and predominantly measure school size and cost, so the scorecard can only identify small and/or expensive schools rather than provide broader strategic insight. This measurement framework, combined with model sensitivity issues and communication choices about specific schools, has created a self-reinforcing narrative around “underutilization” that continues to drive major decisions. As we discussed, the scorecard cannot tell anyone ‘how many’ schools to close, but the conversation around utilization and district-set capacities is a powerful lever.
The core issue is that methodological choices made at the outset (from capacity calculations to which variables are included and how they’re measured) don’t just influence the answers; they fundamentally determine what answers are even possible. Without recognizing these embedded constraints and biases, the board risks making decisions based on a tool designed to produce a narrow set of conclusions rather than illuminate the full complexity of the situation.
Key points:
- Scorecard design: the concerns are around how specific concepts, such as equity, were measured. The way these concepts were operationalized (turned into measured variables) is problematic: the equity values correlate highly with each other (0.9), and many of the measures are related to school size. Relying on the scorecard alone is an issue. Using a scorecard as part of a broader process can be helpful if there is a clear description of where it fits in that process, but there needs to be appropriate education around what it is, what it does, and, most importantly, what it cannot do and what its limitations and biases are. As designed, this scorecard essentially provides a quantitative answer to one question: which schools are small and/or expensive? (The sketch after this list illustrates the kind of correlation check that makes this visible.) How the number of students ‘changing schools’ is calculated also needs to be clearer (specifically, whether middle school feeder patterns are included or not). From that perspective, it is not surprising the results came out the way they did: the scorecard is only designed to produce these outcomes and cannot be used for insight beyond them. The scorecards have been the foundation of all closure discussions and votes, but they are incomplete and are being used far beyond the context in which they would be useful.
- Scorecard use: building on the above, the scorecard should be a starting point for conversation. One thing we didn’t dive into as much is that scores within a single scenario can be compared to one another, although there are still limitations: it’s tough to quantify what a 2.8 means at all, much less how to interpret 2.7 versus 2.8. Furthermore, scorecard values cannot be compared across scenarios; since all scores are relative to the other schools in the same scenario, a score from one scenario has no meaning next to a score from another.
- Sensitivity analysis: sensitivity analysis involves changing the measured values of the model (both the inputs and the weighting of categories) to assess how the different inputs (e.g., number affected, gym/cafeteria, etc.) connect to the outputs (the school metrics). I have concerns about how stable the model is: a model should be robust to small changes but responsive to larger ones, and I’m not seeing that here. The danger is that the model ends up being a proxy for something else, such as school size or feeder membership. Given the measurement issues above and this rigidity, the value of the scorecard is very narrow; it should not be used as an ordered list of schools to close, but rather for identifying schools to consider more closely. (A simple version of this kind of weight-perturbation check appears in the sketch after this list.)
- Communication around results: the board doesn’t have the experience you do with the data or the scenarios. While you are working to present things in as unbiased a manner as possible, how things are communicated can have a big impact (e.g., the Haven feeder slide at 50% utilization, while technically true, likely helped focus the board on two schools within Haven; King Arts, which also has low utilization, is noticeably absent from that slide). I understand you’re working with a lot of materials and under a lot of pressure, but I want to call out that this is an important part of the process.
- Capacity measures: these should (and, as of now, do) take the smaller of the DEC and Cordogan Clark numbers. This seems much more reasonable.
- Utilization rates: there needs to be education that utilization rates are just one measure of how we could think about schools. Utilization is enrollment divided by capacity, and the denominator, capacity, is incredibly important and completely within the purview of the school district. That gives the district a tremendous amount of control over the scenarios and how they are perceived. (The end of the sketch below shows how much the choice of capacity figure can swing a utilization rate.)
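To make a few of these points concrete, here is a minimal sketch (in Python) of the kinds of checks I have in mind. Everything in it is made up for illustration: the data, the metric names, the weights, and the capacity numbers are hypothetical, not the district’s actual scorecard. Treat it as a picture of the method, not a result.

    # Illustrative sketch only: synthetic data and hypothetical metric names/weights.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)

    # Hypothetical scorecard table: one row per school, one column per metric.
    n_schools = 12
    size = rng.normal(500, 150, n_schools)  # enrollment, which ends up driving several metrics
    scorecard = pd.DataFrame({
        "equity_a": 0.8 * size + rng.normal(0, 30, n_schools),
        "equity_b": 0.9 * size + rng.normal(0, 30, n_schools),
        "cost_per_pupil": 4e6 / size + rng.normal(0, 500, n_schools),
        "utilization": size / 600 + rng.normal(0, 0.05, n_schools),
    }, index=[f"school_{i}" for i in range(n_schools)])

    # 1) Correlation check: values near +/- 0.9 mean the metrics largely measure
    #    the same underlying thing (here, school size), not independent concepts.
    print(scorecard.corr().round(2))

    # 2) Weight-perturbation check: jitter the category weights slightly, many times,
    #    and see how often the set of lowest-scoring schools changes. A robust model
    #    should barely react to small weight changes.
    z = (scorecard - scorecard.mean()) / scorecard.std()  # put metrics on a common scale
    z["cost_per_pupil"] *= -1  # orient all metrics so that higher = more favorable

    base_weights = pd.Series({"equity_a": 0.3, "equity_b": 0.3,
                              "cost_per_pupil": 0.2, "utilization": 0.2})

    def lowest_three(weights):
        composite = z.mul(weights, axis=1).sum(axis=1)
        return set(composite.nsmallest(3).index)

    baseline = lowest_three(base_weights)
    trials, changed = 500, 0
    for _ in range(trials):
        jitter = base_weights * rng.uniform(0.9, 1.1, len(base_weights))
        jitter = jitter / jitter.sum()
        if lowest_three(jitter) != baseline:
            changed += 1
    print(f"lowest-3 set changed in {changed} of {trials} small weight perturbations")

    # 3) Capacity denominator: the same enrollment reads very differently depending
    #    on which capacity figure sits in the denominator (numbers invented).
    enrollment, dec_capacity, cc_capacity = 320, 450, 640
    for label, cap in [("DEC", dec_capacity),
                       ("Cordogan Clark", cc_capacity),
                       ("smaller of the two", min(dec_capacity, cc_capacity))]:
        print(f"utilization against {label} capacity: {enrollment / cap:.0%}")

The exact numbers do not matter; the point is that checks like these are quick to run on the real scorecard table and would make the size/cost dependence, the weight sensitivity, and the leverage of the capacity denominator visible rather than implicit.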
I want to emphasize that these issues are methodological and do not mean that community or committee input was invalid. The underlying values are still there and are incredibly important. The issues are with how these values and desiderata have been translated into a model, how the results have been communicated, and the broader context in which they have been presented.