EDUCATION

THE AGING OF THE SAT

If this formidable college entrance exam is biased and irrelevant, why are so many people working so hard to get higher scores?

During three otherwise unremarkable Saturdays this past October, November, and December, roughly one million American teenagers experienced one of the most memorable rites of passage of their high school years. No, not their first intellectual epiphany. Not their first stint behind the wheel of a car. Not even their first sexual encounter. The correct answer is E: they took the Scholastic Aptitude Test, the dreaded college entrance examination better known by the acronym SAT.

Much like a terrifying nightmare or a stupendous faux pas, there is something quite unforgettable about the SAT. It has often been said-and it’s true in my case-that people who can’t remember their collar size still remember their SAT scores. With the exception of eye examinations, probably no single standardized test is taken by more Americans each year. And with the exception of drug and AIDS tests, none has attracted more contemplation, controversy, and criticism.

In recent years, the validity of the once-sacred SAT has been challenged by those who say the test is biased against women and minorities. But despite these critics, and despite the fact that more than half of all high school students don’t take any college entrance exams, SAT scores have become a generally accepted measure of the state of high school education. As a result, many schools-including those in the Dallas Independent School District-have instituted test-preparation programs, and SAT scores, which were once a matter of only private concern, are now cause for public wringing of the hands (when the numbers are down) and civic celebration (when the numbers are up). “The biggest problem right now,” says Janet Herald, associate dean of admissions at Texas Christian University, and one of many people in the field who insist that SAT scores don’t play a primary role in the admissions process, “is that the test has gotten so much emphasis. It’s almost overexposed.”

Until not too long ago, the SAT was shrouded in an aura of quasi-oracular mystery. Back when I took the test in 1972, for instance, we were inculcated with the notion that the SAT had the power to divine which of us possessed the stuff it took to succeed in college and which of us might be better off, say, enrolling in beauty school. Although we weren’t exactly sure how the test worked-its contents were then a closely guarded secret-I don’t remember many of us questioning the acceptability of the SAT’s role as an arbiter of college-worthiness.

The era of blissful ignorance ended in the early Eighties, when a combination of truth-in-testing legislation and public pressure prompted the College Entrance Examination Board (which sponsors the SAT) and Educational Testing Service (which writes and administers it) to start releasing past tests. This long-resisted concession unlocked Pandora’s box and let loose a host of allegations about bias in SAT questions. Robert Schaeffer, public education director of the National Center for Fair & Open Testing in Cambridge, Massachusetts, a nonprofit consumer advocate, sums up the antagonist position in a single sentence: “FairTest believes there is overwhelming evidence that the SAT is worthless in the admissions process, except for minorities and women, which it hurts.”

This charge strikes at the very core of the SAT, which was originally created in 1926 to serve as the great equalizer in the college admissions process. Whether from a large school or small, rich or poor, virtually all college-bound students must take the SAT (or the ACT-a less popular but similar standardized test written and administered by the American College Testing Program). “Colleges are looking for some sort of yardstick that serves as a common denominator,” says Arthur Kroll, vice president of ETS’s College Board Programs Division. “In a state like Texas, for instance, the valedictorian of a small school in a rural area might only be middle-of-the-class in a magnet-type school in Dallas.”

So ETS developed a test that purports to measure “developed ability” rather than mastery of curriculum, and designed it to minimize the effectiveness of short-term test preparation. Then they filled it with multiple-choice questions that call for identifying the antonym of “exculpate” and calculating the value of x – z if the average of x, y, and eighty is six more than the average of y, z, and eighty. Now do you remember why you hated the SAT? (See page 41 for answers.) The math and verbal portions of the test are graded separately on a unique but meaningless 200- to 800-point scale and then added together.
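For readers who want to check the algebra without flipping to page 41, that second question falls in two quick lines (a worked solution sketched here, not the booklet’s answer key):

$$\frac{x+y+80}{3}=\frac{y+z+80}{3}+6 \;\Longrightarrow\; x+y+80=y+z+80+18 \;\Longrightarrow\; x-z=18.$$

So the answer is 18, no matter what x, y, and z happen to be.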

If the SAT were a truly egalitarian tool, critics say, scores would run evenly across all ethnic and socioeconomic groups. They don’t. They don’t even come close-a point that SAT critics cite as prima facie evidence of the inherent bias of the test. Consider the glaring discrepancies between these nationwide averages in 1988: whites (935), Asian-Americans (930), Mexican-Americans (810), and blacks (737). The fifty-six-point gap between men (933) and women (877) is even more disturbing, since it holds true even in groups drawn largely from the same ethnic and socioeconomic pool.

Critics of the SAT say these results don’t prove that one group is smarter or better educated than another but that the test itself is flawed. In other words, women don’t score lower on the SAT because they’re dumber than men but because the cultural bias in the test discriminates against them. But while the scores certainly indicate that something’s amiss, it’s hard to find clear-cut examples of test questions that put women at a disadvantage, or enough of them to account for the fifty-six-point gap.

The cultural bias against minorities is much easier to identify. “For instance,” says Margaret Herrera, the DISD director of counseling services, “those students who have not traveled extensively might be at a disadvantage on geography questions.” By the same token, it should come as no surprise that low-income inner-city students fare poorly on questions requiring a knowledge of words such as “polo” and “regatta.” While much of the cultural bias has been weeded out of the test in recent years through the ETS Sensitivity Review Process, the correlation between SAT scores and family income remains unmistakable and compelling. “What we’re looking for,” Schaeffer says, “is a level playing field.”

Predictably and somewhat persuasively, ETS prefers to put a different spin on the story: the SAT works just fine, thank you. It’s the educational system that’s biased, putting disadvantaged students at a further disadvantage when the proctor tells them to open their test booklets. “Is the SAT biased against a student who’s recently come across the border from Mexico and barely speaks English? Sure it is,” ETS’s Kroll says. “The question isn’t, ‘Why are the scores so different?’ It’s, ‘Why did you ever expect that they’re going to perform equally in the first place?’ The quality of education varies significantly from one area to the other.”

But SAT scores themselves are not equal. College admissions officers are likely to be more impressed by a 1,000 posted by a DISD student (where the average score is 800) than a 1,000 from a student at Cistercian Preparatory School (average score 1,300). And they’re likely to be even more impressed if that DISD student is black-not because they look favorably upon minority applicants, but because most research shows that blacks perform better in college than SAT scores tend to predict. As a result, college admissions officers must take the great leveler and level it. “We don’t give you 150 points because you’re black or Hispanic, or seventy points because you’re a woman. That wouldn’t be fair,” says Dr. Leo Pucacco, senior associate director of admissions at Southern Methodist University. “But we’re willing to admit [minorities and women] with lower scores.”

The issue of coaching also generates a lot of heat for ETS and the College Board. Back when I first took the test, and in keeping with the SAT’s image as some sort of cabalistic icon, they categorically denied that commercial test-preparation classes helped raise scores. Nowadays, they grudgingly concede that these courses may prompt some improvement, but they still insist that the average aggregate increases are limited. The new party line, according to the Taking the SAT booklet, is that programs involving about twenty hours of class time may produce an improvement of up to twenty-five points aggregate, while forty-hour programs could produce between thirty-five and fifty additional points. “Unfortunately,” the booklet goes on to say, “...it is still not possible to predict ahead of time who will improve, and by how much-and who will not. For that reason, the College Board cannot recommend coaching courses...”

But despite the College Board’s emphatic and oft-repeated denials, the belief that coaching can and does produce dramatic results is now widely held among both high school counselors and college admissions officers. Most major bookstores now have an area devoted to nothing but test-prep books. Thousands of students now take commercial test-prep classes that promise improvements averaging up to 150 points. And even at prices as high as $595 a pop, these classes are hot. During the past decade, in fact, test-prep schools have become a $50 million segment of the education pie.

Traditional schools generally emphasize drill and other conventional approaches to education. Says Elissa Sommerfield, a former SMU English instructor who’s been helping Dallas-area students prepare for the SAT for the past twenty-seven years, “We work on test-taking techniques, particularly for the SAT, but I have a larger purpose in mind: an improved student makes for improved scores.” But the brash, new kid on the block-the fast-growing New York-based Princeton Review-barely bothers to pay even lip service to traditional educational values. As the company’s test-prep book (fittingly titled Cracking the System) puts it, “We’re not going to teach you math. We’re not going to teach you English. We’re going to teach you the SAT.” And this is just Chapter One. Later, things get really subversive.

According to Princeton Review, the SAT is written for a poor sap named Joe Bloggs-a mythical, stereotypical, average student who gets hammered on every difficult or tricky question on the test. Instructors explain to their students that there are four or five multiple-choice answers to every question. One of them has to be correct. A couple of the others tend to be obviously wrong. The remainder are there to distract, confuse, and otherwise screw good ol’ Joe. By identifying and rejecting the choices that Joe Bloggs is likely to select, students can often come up with the right answer simply by the process of elimination. By using this and a few similar techniques, says Jennifer Robbins, director of Dallas-area Princeton Review classes, “I can get somebody twenty or thirty points in an hour.”
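A little expected-value arithmetic suggests why elimination pays, assuming the quarter-point deduction the SAT then charged for a wrong answer on a five-choice question (an assumption added here for illustration, not a figure from Princeton Review):

$$E[\text{blind guess}]=\tfrac{1}{5}(1)+\tfrac{4}{5}\!\left(-\tfrac{1}{4}\right)=0,\qquad E[\text{guess after eliminating two Joe Bloggs picks}]=\tfrac{1}{3}(1)+\tfrac{2}{3}\!\left(-\tfrac{1}{4}\right)=\tfrac{1}{6}.$$

Once the obvious distractors are gone, in other words, guessing stops being a wash and starts earning a small positive return on every hard question.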

Rating the effectiveness of Princeton Review and the other test-prep classes remains a matter of bitter dispute, with ETS and the College Board arrayed on one side of the battleground, the test-prep schools and FairTest on the other, and consumers stuck in the middle wondering which way to run. Oddly enough, many high schools now have extracurricular test-prep classes of their own. The primary purpose is to help students, but administrators generally aren’t shy about mailing out self-congratulatory press releases when scores happen to rise.

The irony of this growing concern about the SAT is that it comes at a time when colleges and universities are publicly downplaying the role SAT scores play in the admissions process. True, virtually all four-year institutions still require some sort of entrance examination, but a 1985 survey (sponsored in part by ETS and the College Board) found that high school performance was the top criterion used by admissions officers, with SAT scores ranking second, slightly ahead of high school coursework. A survey conducted this year by USA Today ranked entrance examinations fourth behind high school grades, class rank, and curriculum. “We use the high school transcript foremost because we get a better correlation between how well they did [in high school] and how well they’ll do [in college],” says Zack Prince, registrar and director of admissions at the University of Texas at Arlington.

Of course, I remember hearing this sort of talk when I was applying to college-and not believing a word of it. Admissions officers loved to go on and on about the importance of extracurricular activities and a strong interview and a creative essay, but we knew damn well that we weren’t going to get into Harvard unless we scored 1,400-plus on our college boards. (For the record, I did neither.) Although a handful of liberal liberal-arts colleges have made the SAT optional since then, that’s still the bottom line today. And that’s why the Class of ’89 will still remember their SAT scores come the year 2000.
