Friday, July 4, 2014

What does the research really say? (Language Arts Edition)


Yes, it is. And understanding decisions about teaching and the related research can be harder still. This was especially evident in the last post, where I examined how reporters struggled to make sense of educational research on first-grade math instruction. Robert Pondiscio, a Senior Fellow at the Fordham Institute, makes a similar point here:

It’s simply asking too much for even the most seasoned education reporters to develop a discerning eye for curriculum; it’s not their job, and it makes their job covering the instructional shifts taking place under Common Core uphill work.

Untangling educational issues is tough work, especially for those without the prerequisite knowledge and experience. That is why we rely on Education Experts like Mr. Pondiscio.

But what if Education Experts struggle with a study on teaching? Does this mean it is too hard for even them, or is there something else at play? These are not hypothetical questions. Recently, two separate Education Experts misinterpreted/misrepresented (you decide) a study on teaching reading in New York City elementary schools.

First, an opinion piece in the New York Daily News attacked an approach to reading instruction called “balanced literacy.” According to the author this is a failed approach and he has the study to prove it. He writes:
So according to this author, the study showed that Core Knowledge was superior to balanced literacy in almost every way. Let me use an analogy to make it clearer to those readers who might not be Education Experts.

Apple Pie Recipe
In honor of Independence Day, let’s pretend we are conducting an apple pie study. We have two different recipes (approaches) that we want to compare: Recipe B (Balanced Literacy) and Recipe C (Core Knowledge). Let’s say we have 20 bakers. Ten of the bakers get Recipe B and the other ten get Recipe C. All other variables (the apples, the ovens, a baker’s ability, any common ingredients, …) are assumed to be the same so we can focus on the recipes. When the bakers are finished, the pies are compared using some objective measures. Then we can know which recipe is best.
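
(For readers who like to see the logic spelled out, here is a minimal sketch of that clean design in Python. Every number - the baseline, the recipe effects, the noise - is invented purely for illustration, and I am assuming for the moment a single agreed-upon judging score.)

    import random

    random.seed(4)  # fixed seed so the illustration is reproducible

    def bake(recipe_effect):
        """One baker's pie score: a shared baseline plus the recipe's
        effect plus some baker-to-baker noise."""
        baseline = 70                 # the apples, ovens, skill, etc.
        noise = random.gauss(0, 5)    # everything we cannot control
        return baseline + recipe_effect + noise

    # Hypothetical "true" effects of each recipe (unknown to the judges).
    EFFECT_B, EFFECT_C = 5, 8

    pies_b = [bake(EFFECT_B) for _ in range(10)]  # ten bakers follow Recipe B
    pies_c = [bake(EFFECT_C) for _ in range(10)]  # ten bakers follow Recipe C

    print(f"Recipe B average score: {sum(pies_b) / 10:.1f}")
    print(f"Recipe C average score: {sum(pies_c) / 10:.1f}")
    # Because every baker explicitly followed one named recipe, the gap
    # between these averages estimates "Recipe C vs. Recipe B" directly.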


But how are we defining best? Taste? Flakiness of crust? The presence of some ingredient found only in Recipe C? As I said last time, the measures matter when determining what a study really says. So I went about trying to find more information about the Core Knowledge study.

What I found was another Education Expert referencing the same study to make essentially the same point. However, this author added a bit more context in his column.
A link to a New York Times article about the study was also included, but it did not seem to support the idea that the study was a direct comparison of Core Knowledge and balanced literacy.

Perhaps we ought to return to and modify our apple pie analogy. We still have 20 bakers split into two equal groups. One group still gets Recipe C, but the other bakers are not given any particular instructions about what recipe to use. There is some indication that they have used Recipe B in the past; however, it is unclear to what extent they continue to follow that recipe exactly, if at all. In fact, maybe some in the second group of bakers took a peek at Recipe C and are trying to use elements from it. We just don’t know for sure without actually watching them bake. Therefore, when the apple pies baked explicitly using Recipe C are judged superior, the best we can do is say that, in general, using Recipe C is better than the variety of recipes "typically" used by the other bakers. We cannot say that Recipe C is better than Recipe B, because Recipe B was not an explicit part of our study.
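
(Here is the same sketch adjusted for the second design. The mixture of recipes in the comparison group below is pure conjecture - that is exactly the problem - so the only point is that whatever gap we measure is "Recipe C vs. typical practice," not "Recipe C vs. Recipe B.")

    import random

    random.seed(4)

    def bake(recipe_effect):
        baseline = 70
        return baseline + recipe_effect + random.gauss(0, 5)

    EFFECT_B, EFFECT_C = 5, 8  # hypothetical effects, as before

    # Treatment arm: ten bakers explicitly follow Recipe C.
    pies_c = [bake(EFFECT_C) for _ in range(10)]

    # Comparison arm: no assigned recipe. Some bakers may stick with
    # Recipe B, some improvise, some borrow pieces of Recipe C.
    # We never watched them bake, so this mixture is invented.
    def bake_typical():
        effect = random.choice([
            EFFECT_B,                   # still following Recipe B
            0,                          # improvising from scratch
            (EFFECT_B + EFFECT_C) / 2,  # blending in parts of Recipe C
        ])
        return bake(effect)

    pies_typical = [bake_typical() for _ in range(10)]

    gap = sum(pies_c) / 10 - sum(pies_typical) / 10
    print(f"Recipe C vs. typical practice: {gap:+.1f}")
    # This gap compares Recipe C to the *mixture* of recipes actually in
    # use. It says nothing direct about a faithful Recipe B, because no
    # baker was required to follow Recipe B.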

Neither can we say that Core Knowledge is better than balanced literacy using the information provided. That is why the New York Times consistently refers to the second set of schools as “comparison” schools and not “balanced literacy” schools. In this case, the education reporter got it right and the Education Experts got it wrong. How could this happen?


Perhaps there is more to the study than was reported by the Times that supports the seemingly faulty claim made by the Education Experts. Anyone with more information can share it in the comments, and I will make any necessary edits or apologies. In the meantime, I would encourage you to be skeptical of any opinions/commentaries that use studies to support a point without providing all the details (or that misrepresent the details provided, as in this case), even if the piece was written by Education Experts, who ought to know better.


Updated (7/7/14): Another Op-Ed has appeared that misuses the Core Knowledge Study, this time in the New York Times. The central message of both opinion pieces seems to be that the authors struggled to implement balanced literacy, so no one ought to use it. Education policy should not be based on a pair of negative experiences, however, and the authors know it. Therefore, in order to bolster their position, they overstate the findings of the study. I hope that the people making the decisions around this issue see through this ruse.

Updated (7/12/14): Somehow I missed the opinion piece by E. D. Hirsch Jr., the founder of Core Knowledge, that also states that the study directly compared his program to a balanced literacy approach. He writes:
The New York City Department of Education recently did a three-year study comparing 20 schools. Ten used the Core Knowledge approach. Ten used balanced literacy.
However, he links to the same New York Times article that says:
Half of the schools adopted a curriculum designed by the education theorist E. D. Hirsch Jr.’s Core Knowledge Foundation. The other 10 used a variety of methods, but most fell under the definition of “balanced literacy,” ...
He does not provide the methodology of the original study to back up his assertion, and his direct connection to Core Knowledge raises the possibility of bias in how he is interpreting the results.

It turns out that the author of the first Op-Ed also has ties to Core Knowledge (starting at 11:18 on this podcast - which, by the way, also misrepresents the study). That means three of the four authors overstating the Core Knowledge Study have a connection to the Core Knowledge Foundation. It is understandable that they would believe in their program, and the study seems to suggest they ought to be proud to share the results - how their program compared to a comparison group. It is unfortunate that they believe they have to embellish these results in order to attack what they must see as the competition.

For me this raises the question: are these critics really interested in students' success, or is it their own program's success that has them worried?

2 comments:

  1. I don't know how that study was set up, but there are other studies with completely opposite results. You might want to investigate the research of Stephen Krashen.
    In addition, regardless of what this one study says, both the experience of many of those who commented on the second opinion piece and my own experience show the exact opposite: the lowest-performing students made the greatest gains on the end-of-year tests when I switched to a balanced literacy approach.

  2. Thanks for your comment, Stefan. I am familiar with Dr. Krashen's work and with others showing that a balanced approach to literacy can be effective. I considered including these in this post, but I wanted to focus on the authors' gross misuse of the Core Knowledge Study to make a point. Unless we maintain the integrity of the results, there is no point in using studies at all.

