Why Test Score Data Is Confusing
Walk into any state's school report card website and you'll find at least five different ways test performance is reported: proficiency rates, scale scores, percentile ranks, growth percentiles, and various composite indices. Each metric tells a different story. Using only one (the most common mistake parents make) produces a dangerously incomplete picture.
Here's a practical field guide to understanding what you're actually reading.
Proficiency Rate: The Most Reported, Most Misunderstood Metric
A proficiency rate tells you what percentage of students scored at or above the "proficient" cutoff on a state test. Example: "72% of 4th graders are proficient in reading."
What it misses: The proficiency cutoff is arbitrary and varies by state. A student who scores just above proficient and one who scores in the 99th percentile both count as "proficient." A student scoring just below the cutoff counts the same as one scoring near the bottom. The proficiency rate collapses an entire distribution of scores into a binary pass/fail, losing enormous information in the process.
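The information loss from that pass/fail collapse is easy to see in code. Here's a minimal sketch, where the cutoff score (250) and both score lists are invented for illustration:

```python
# Two hypothetical schools with identical proficiency rates but very
# different score distributions. The cutoff (250) and scores are invented.
CUTOFF = 250

school_x = [251, 252, 253, 254, 255, 190, 195, 200, 205, 210]  # barely-proficient + far-behind
school_y = [320, 330, 340, 310, 300, 248, 249, 247, 246, 245]  # high scorers + near-misses

def proficiency_rate(scores, cutoff=CUTOFF):
    """Fraction of scores at or above the proficiency cutoff."""
    return sum(s >= cutoff for s in scores) / len(scores)

print(proficiency_rate(school_x))         # 0.5
print(proficiency_rate(school_y))         # 0.5 -- identical headline number
print(sum(school_x) / len(school_x))      # average scale score: 226.5
print(sum(school_y) / len(school_y))      # average scale score: 283.5
```

Both schools report "50% proficient," yet their average scale scores differ by almost 60 points: exactly the information the binary metric throws away.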
Use it for: Quick comparison against the state average. A school at 80% proficiency in a state where the average is 40% is a strong signal. But don't use it to compare schools across state lines: states set their own cutoffs, and they vary dramatically.
Scale Scores: More Information, Less Interpretable
Scale scores are the raw numeric scores students receive (e.g., 247 out of 350 on the state math assessment). They're reported at the school and district level as averages.
What they tell you: Year-over-year change in the same score scale. If a school's average math scale score went from 241 to 248 over three years, that's concrete improvement.
What they don't tell you: How good 248 is. That requires knowing the score distribution across all schools in the state.
Percentile Rank: Where a School Stands in the Distribution
A percentile rank tells you what percentage of comparable schools (or students) scored lower. A school at the 75th percentile outperformed 75% of schools on that measure.
Critical caveat: Most state systems report school percentile ranks within the state only, not nationally. A school at the 70th percentile in Massachusetts is likely performing at a much higher absolute level than a school at the 70th percentile in a lower-performing state.
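The percentile-rank definition above translates directly into code. A short sketch, with all school scores invented for illustration:

```python
def percentile_rank(score, all_scores):
    """Percentage of scores in the comparison group that are strictly lower."""
    below = sum(s < score for s in all_scores)
    return 100 * below / len(all_scores)

# Hypothetical average scale scores for 8 schools in one state
state_scores = [210, 225, 233, 241, 248, 252, 260, 275]

print(percentile_rank(248, state_scores))  # 50.0 -- outscored 4 of 8 schools
print(percentile_rank(275, state_scores))  # 87.5 -- outscored 7 of 8 schools
```

Note that the comparison group is everything: the same score of 248 would land at a very different percentile against a stronger state's distribution.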
Student Growth Percentile (SGP): The Most Valuable Metric Most Parents Ignore
The Student Growth Percentile answers a fundamentally different question than proficiency or scale scores: given where students started, how much did they grow compared to academically similar students elsewhere?
An SGP of 65 means: students at this school grew more than 65% of students who started at the same point in the prior year. This is the closest thing to a "teaching quality" signal in public test data.
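The core idea can be sketched in a few lines of Python: group students by where they started, then rank each student's current score within that peer group. All data here is invented, and real SGP models use quantile regression over several prior years rather than exact-score matching; this is only a conceptual illustration:

```python
from collections import defaultdict

def growth_percentiles(students):
    """students: list of (prior_score, current_score) pairs.
    For each student, returns the percentage of academic peers
    (same prior score) whose current score is strictly lower."""
    peers = defaultdict(list)
    for prior, current in students:
        peers[prior].append(current)
    result = []
    for prior, current in students:
        group = peers[prior]
        below = sum(c < current for c in group)
        result.append(100 * below / len(group))
    return result

# Five hypothetical students who all scored 230 last year
cohort = [(230, 240), (230, 245), (230, 235), (230, 250), (230, 238)]
print(growth_percentiles(cohort))  # [40.0, 60.0, 0.0, 80.0, 20.0]
```

Because every student is compared only to peers with the same starting point, a high growth percentile reflects progress rather than prior advantage.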
Why it matters for families: A school with moderate proficiency rates but high growth percentiles may be doing exceptional work with students who start behind. A school with high proficiency rates but low growth percentiles may be coasting on affluent demographics while actually adding less value than average.
Find growth data in your state's school report card system, or look for "student growth" in the school profiles on MySchoolPeek.
Reading the Data Table: A Practical Example
| Metric | School A | School B | What It Means |
|---|---|---|---|
| Math proficiency rate | 82% | 54% | School A looks much better |
| Free/reduced lunch rate | 8% | 61% | Very different demographics |
| Math growth percentile | 48 | 71 | School B grows students faster |
| Chronic absenteeism | 6% | 19% | School A has stronger attendance culture |
Reading this table: School A's high proficiency rate likely reflects its affluent demographics. School B is growing students significantly faster despite serving a much more economically challenged population, a signal of genuine instructional quality. School B's high absenteeism rate is a real concern worth investigating.
National vs. State Assessments
NAEP (National Assessment of Educational Progress) is the only truly nationally comparable test, administered to a representative sample of students in each state every two years. NAEP does not produce school-level scores โ only state, district (in large urban districts), and demographic subgroup data. It's excellent for comparing state-level performance.
State assessments (like CAASPP in California, STAAR in Texas, MCAS in Massachusetts) are used for school and district accountability. They're not comparable across state lines.
SAT and ACT data for high schools is voluntarily reported, and participation rates vary widely: a school with 40% participation will look very different from one with 95% participation even if underlying ability distributions are identical.
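This selection effect can be demonstrated with a small simulation: two schools drawing from the same ability distribution, where the low-participation school's test-takers skew toward its strongest students. All numbers are invented for illustration:

```python
import random

random.seed(0)

# Same underlying ability distribution at both schools (500 students each)
abilities = sorted(random.gauss(1050, 150) for _ in range(500))

# School P: 95% participation, roughly a random cross-section of students
takers_p = random.sample(abilities, int(0.95 * len(abilities)))

# School Q: 40% participation, concentrated among top (college-bound) students
takers_q = abilities[-int(0.40 * len(abilities)):]

def mean(xs):
    return sum(xs) / len(xs)

print(round(mean(takers_p)))  # close to the true average of ~1050
print(round(mean(takers_q)))  # substantially higher, despite identical student bodies
```

The 40%-participation school posts a much higher average without teaching anyone anything extra, which is why participation rate should always be read alongside the score.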
The Bottom Line on Test Score Interpretation
No single test score metric tells the full story. The most informed approach:
- Check proficiency rates against the state average, not as an absolute measure
- Find the growth percentile: it's the best proxy for instructional quality
- Note the demographics: free/reduced lunch rate contextualizes proficiency data
- Look at the trend over three years: improving schools are often better investments than declining ones, even if current scores are lower
- Cross-reference with qualitative data: visit, talk to parents, read local coverage
Use MySchoolPeek's comparison tool to examine multiple schools' data profiles side by side before making any decisions.