What about these gaps and areas of redundancy?

At the informational meeting I attended to learn more about the new science curriculum, I was handed two tables showing a large number of State science benchmarks and the corresponding Portage benchmarks. The tables also indicated whether each benchmark was not currently taught in Portage (a gap) or was covered more than once (an area of redundancy). Finally, they showed the percentage of students passing MEAP questions on that benchmark. This looked very interesting to me, although I wasn't sure if the School was presenting this as an analysis that backs up their claim that gaps are bad (which I'll admit does seem intuitive), or what.

To find out, I did my own analysis comparing benchmarks that were not covered (gaps) to those that were (no gap). To my surprise, there was no statistically significant difference in the MEAP scores for these two types of benchmarks, based on a Student's t-test. Click here to see the complete spreadsheet; a summary is provided below.

                     Gap            No Gap
Median               62.2           63.7
Mean                 59.5           64.2
SD                   17.5           15.2
Mean +/- 2 SD        24.5 - 94.5    33.8 - 94.6
Student's t-test     p = 0.2786
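For anyone who wants to check the arithmetic, the comparison behind this table is nothing more exotic than a two-sample Student's t-test on the per-benchmark passing percentages. Below is a minimal sketch in Python; the file name and column names are my own placeholders for data exported from the spreadsheet, not anything taken from the School's handout.

# Minimal sketch of the gap vs. no-gap comparison, assuming the spreadsheet
# has been exported to a CSV file with hypothetical columns "gap" (yes/no)
# and "pct_passing" (percent of students answering that benchmark's MEAP
# questions correctly). These names are placeholders of my own choosing.
import csv
from scipy import stats

gap_scores, no_gap_scores = [], []
with open("meap_benchmarks.csv", newline="") as f:
    for row in csv.DictReader(f):
        score = float(row["pct_passing"])
        if row["gap"].strip().lower() == "yes":
            gap_scores.append(score)
        else:
            no_gap_scores.append(score)

# Two-sample Student's t-test (equal variances assumed, as in the classic
# Student's t-test). A p-value above 0.05 means the difference between the
# two groups is not statistically significant.
t_stat, p_value = stats.ttest_ind(gap_scores, no_gap_scores)
print(f"gap mean = {sum(gap_scores) / len(gap_scores):.1f}, "
      f"no-gap mean = {sum(no_gap_scores) / len(no_gap_scores):.1f}, "
      f"p = {p_value:.4f}")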

The conclusion, which hardly makes sense, is that MEAP scores for benchmarks taught were not significantly higher than for benchmarks that were not taught.

Several explanations for this come to mind. Perhaps the so-called "gaps" do not reflect gaps in students' knowledge, at least as measured by the MEAP; they may be nothing more than artifacts of the wording used to describe the current curriculum. Or perhaps the MEAP data on the School's handout is not useful for this kind of comparison, because it shows the percentage of students who answered the questions correctly, and questions on some benchmarks are simply harder than questions on others. To eliminate that confounding factor, we would need percentile scores showing how Portage students do compared to students at other schools across the state, so that the results are, in essence, normalized across all the questions. Unless we believe the first explanation, that the gaps don't really reflect gaps in students' knowledge, we have to conclude that the School was presenting a collection of essentially meaningless numbers, hoping either to confuse parents or to intimidate them into accepting something that is not based on valid data or scientific analysis. What does this say about the rest of the thinking that went into the new curriculum?
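To make that normalization idea concrete: if we had the statewide average passing percentage for each MEAP question, we could express each Portage result relative to the state instead of as a raw percentage, which would take question difficulty out of the comparison. A sketch of the idea, using entirely made-up numbers since the statewide per-question figures are not on the School's handout:

# Sketch of the normalization idea: compare Portage's passing percentage on
# each question to the statewide percentage on the same question, so that
# hard and easy questions are put on a common footing. All numbers below
# are hypothetical, for illustration only.
portage_pct = {"Q1": 55.0, "Q2": 82.0, "Q3": 61.0}   # hypothetical
state_pct   = {"Q1": 48.0, "Q2": 90.0, "Q3": 60.0}   # hypothetical

for q in portage_pct:
    diff = portage_pct[q] - state_pct[q]
    print(f"{q}: Portage {portage_pct[q]:.0f}%, state {state_pct[q]:.0f}%, "
          f"difference {diff:+.0f} points")
# A benchmark would only look like a real "gap" if Portage students
# consistently scored below the state on its questions.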

It's apparent that not much scientific analysis went into the curriculum changes; they appear to be based almost entirely on the intuition and wishful thinking of the committee that developed them. How many practicing scientists (other than educators) served on this committee? What kinds of experiences have other school districts had with similar changes?

I think we can accept intuitively that it's good to eliminate gaps and reduce areas of redundancy where they really exist, but I'm not willing to accept that having semester survey courses in 9th and 10th grade is a good idea without data to support it.

