Why Did the DMN Largely Ignore 35 Percent of DISD Test-takers?

How many STAAR tests did DISD students take? More than 27.


Hey, I’m back! What did I miss?

[Checks Dallas Morning News, sees a three-byline story on STAAR results, sees three blog posts by editorial board members excoriating Mike Miles for said results, sees a guest column saying that Mike Miles and reformers are bad but Aldine ISD is good.]

Good sweet heavens. It’s like I came back on Christmas. So much silliness to digest, it’s going to take three posts to do so properly.

Let’s start with the first three paragraphs of this story, which set the tone and establish the sleight of hand used for the entire piece:

Three years after Superintendent Mike Miles promised his reforms would improve Dallas ISD, the school district is losing ground to the state on most STAAR exams.

Statewide passing rates released Tuesday show another year of flat results, while DISD’s scores dropped for most tests, widening the gap with the state averages. The results are for English test takers, which is how the state reports them.

Under Miles, the gap has worsened for nine of the 11 exams in grades three through eight.

Each of these is wrong.

1st graph: Doing better on STAAR exams is one of, I don’t know, at least a dozen components of “improvement.” So to suggest they are the very barometer of improvement or failure is wrong. More important, this statement is provably false. There are 27 STAAR tests. Nine out of 27 is not “most.”

2nd graph: Half-true-ish: The paper mentions only 11 tests; DISD students took at least 27 tests (14 reading/writing tests in English and Spanish, seven in math in English, four in science in English and Spanish, and two in social studies in English. I don’t count three math tests given in Spanish, because the number/percentage of students who take those is so low.)
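The tally in that parenthetical checks out. A quick sketch, using only the categories and counts the paragraph itself gives (the dictionary labels are mine, for readability):

```python
# Tallying the STAAR tests the author counts for DISD students.
# Categories and numbers are taken from the paragraph above; the
# three Spanish-language math tests are excluded, as the author notes.
tests = {
    "reading/writing (English + Spanish)": 14,
    "math (English only)": 7,
    "science (English + Spanish)": 4,
    "social studies (English only)": 2,
}

total = sum(tests.values())
print(total)  # 27 — so nine of 27 is nowhere near "most"
```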

The second sentence is the real piece of work, though: “which is how the state reports them.” This is not really true. It is how a press release from the state announcing these interim results reported them. It is not how the state’s performance reports list them — on those final annual reports used for actual accountability analysis, they combine English and Spanish results. So this sentence is just a cover for the DMN to focus on the English-only tests. This makes no sense unless you’re trying to paint the district in the worst possible light, since English results are bad but Spanish results are pretty good. Again, presenting it this way deliberately excludes the more than 35 percent of elementary school kids in Dallas who take the Spanish exams. I guess those tests don’t count, because, c’mon, they’re not the WHITE tests. You know, the hard tests? Because when does a state press release EVER preclude a newspaper from giving you context, especially when the data are known?

3rd graph: Half-true but misleading. In the sample of 11 tests out of the 27 taken, gaps have widened. At least here they finally say they’re ignoring high school STAAR tests.

Now, I know this sounds insane. I know you think that a daily newspaper would not, for the most part, ignore that a large poor urban district has unique challenges, including trying to test a student population that is about 70 percent Hispanic (and about 40 percent ESL). You would think that said paper also wouldn’t subtly mock the superintendent for trying to point out these challenges. If you read the story, you’ll see that you think wrong.

So how did this happen? Well, you had 11 test results in English, five in Spanish, and then 11 tests in math and high school with results that are as yet unknown. How do you tell people about that if you want to put these results in proper context? Your options are as follows:

a) Split out English and Spanish tests and count them separately (I would hope you’d also explain why you made this decision other than “that’s how the state press release listed them,” but whatever)

b) Combine English and Spanish tests and report the combined weighted average passing percentages as if they’re one test (this is how DISD reported them, and how state accountability reports list them)

But the DMN chose option:

c) Split out English from Spanish, then ignore Spanish results entirely, except to mock the superintendent for pointing out the lunacy of doing this. Even though a good 35 percent of DISD elementary-school-aged kids take a Spanish-language reading or science test.

If we go with a), then you can say: “Dallas ISD loses ground to state on nine* of 27 tests.”

(*I don’t have the Spanish and English data split out going back to 2012. The DMN reporters do, it seems. It’s possible that one of the Spanish-language tests shows a loss of ground to the state, which could push the count above nine. I just can’t compute that at the moment.)

If we go with b), then you can say: “Dallas ISD loses ground to state on eight of 22 tests.”
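Option b), for what it’s worth, is just a weighted average: each language version’s passing rate is weighted by how many students took that version, which is why it can’t be gamed by ignoring the smaller group. A minimal sketch — the function name and every number below are hypothetical illustrations, not actual STAAR data:

```python
# Sketch of option (b): combining English and Spanish results for the
# same test into one weighted passing rate. All figures are invented
# for illustration; they are NOT actual STAAR numbers.

def combined_passing_rate(groups):
    """Weighted average passing rate across test-taker groups.

    groups: list of (num_test_takers, passing_rate) tuples,
    where passing_rate is a fraction between 0 and 1.
    """
    total_takers = sum(n for n, _ in groups)
    total_passers = sum(n * rate for n, rate in groups)
    return total_passers / total_takers

# Hypothetical third-grade reading results:
english = (6500, 0.60)   # 6,500 English test-takers, 60% passing
spanish = (3500, 0.74)   # 3,500 Spanish test-takers, 74% passing

rate = combined_passing_rate([english, spanish])
print(f"Combined passing rate: {rate:.1%}")  # 64.9%
```

Note the combined rate lands between the two group rates, pulled toward the larger group — which is exactly why dropping the Spanish takers from the denominator changes the story.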

I could also suggest this long but very accurate headline:

“Dallas ISD loses ground to state on nine of 16 tests, waiting on 11 more”

But only if you deliberately want to take about 35 percent of DISD kids out of your sample, and only if you want to confuse readers about the absence of math data (which has shown improvement in the last two years), do you go with the paper’s “loses ground on state on nine of 11 tests” headline.

We’re supposed to ignore the five other tests for which we have results, and we’re not supposed to know that a bunch more tests have been administered but results aren’t in yet. Instead, we’re supposed to believe this is the definitive performance assessment of the district.

The problem here is that you don’t have to do all that sleight-of-hand stuff to rap the district on the knuckles. You’ll notice I’m not really defending DISD. Consistent declines in English reading and writing performance are a very serious problem, one the administration shouldn’t ignore. I’m just not hiding the other data when I present this.

If the DMN wanted to run a story on problematic English reading and writing score performance, perhaps they should have done this. I’d love to see that story. They could talk about phonics education, and what’s really involved in teaching kids how to read and write at basic and then more advanced levels. They could talk about the problems other districts with lots of families in multigenerational poverty are having in the area of reading and writing, and the degree of correlation between poverty rates and reading scores compared to other subject areas. (It’s one reason poor kids do better on math than on reading tests.) They could look at how formerly Spanish-language-tested kids fare on English-language reading and writing tests once they get to middle school. (Spoiler alert: not well.) They could comment on whether there has been a demographic shift at all in DISD elementary schools that might explain performance declines, or if there are any changes in Spanish opt-in testing. They could rule all that out, and then get quotes from a few teachers who might blame it all on Miles’ dictates. (I’ve talked to some who complain mightily about the prompts they receive for STAAR reading and writing portions.) If they wanted to be fair, they might also ask the district for quotes from a few distinguished teachers whose classrooms have bucked this downward English-language reading and writing trend. (One teacher who complained also told me just how she overcame the bad instructional help.)

After all that, they could run another story on overall performance, using testing data from all 27-plus tests. Perhaps rather than scoring the district using some simplistic test counting methodology, they could look at the core subjects being assessed: language, math, science, and social studies, and compare performance in the aggregate for all of them relative to the state and to districts with similar demographics. And then they might STILL come to the conclusion that the data show Miles’ reforms are not helping kids overcome their challenges.

But that is writing about education. That is not a horserace narrative designed to foster outrage.

You think I’m being too tough? Tell me, what was the decision-making process behind this? In what world does a newspaper use three reporters to give us results that de-emphasize 35 percent of the district’s student body? In what world does math cease to matter? Especially when plenty of people can make legitimate criticisms of the results even if you present them correctly.

The problem is magnified when the editorial writers do their reporting. And by that I mean, when the editorial writers read the paper and come up with a HOT TAKE that isn’t based on real data but is based on the flawed news reporting. I’ll address that tomorrow.
