prestigious Rice University in Houston. That same year, it registered the highest SAT and ACT scores in its history. Its rank in the U.S. News list climbed. In 2015, it finished in seventy-sixth place, a climb of thirty-seven places in just seven years.
Despite my issues with the U.S. News model and its status as a WMD, it’s important to note that this dramatic climb up the rankings may well have benefited TCU as a university. After all, most of the proxies in the U.S. News model reflect a school’s overall quality to some degree, just as many dieters thrive by following the caveman regime. The problem isn’t the U.S. News model but its scale. It forces everyone to shoot for exactly the same goals, which creates a rat race—and lots of harmful unintended consequences.
In the years before the rankings, for example, college-bound students could sleep a bit better knowing that they had applied to a so-called safety school, a college with lower entrance standards. If students didn’t get into their top choices, including the long shots (stretch schools) and solid bets (target schools), they’d get a perfectly fine education at the safety school—and maybe transfer to one of their top choices after a year or two.
The concept of a safety school is now largely extinct, thanks in great part to the U.S. News ranking. As we saw in the example of TCU, it helps in the rankings to be selective. If an admissions office is flooded with applications, it’s a sign that something is going right there. It speaks to the college’s reputation. And if a college can reject the vast majority of those candidates, it’ll probably end up with a higher caliber of students. Like many of the proxies, this metric seems to make sense. It follows market movements.
But that market can be manipulated. A traditional safety school, for example, can look at historical data and see that only a small fraction of the top applicants ended up going there. Most of them got into their target or stretch schools and didn’t need what amounted to an insurance policy. With the objective of boosting its selectivity score, the safety school can now reject the excellent candidates that, according to its own algorithm, are most likely not to matriculate. This process is far from exact. And the college, despite the work of the data scientists in its admissions office, no doubt loses a certain number of top students who would have chosen to attend. Those are the ones who learn, to their dismay, that so-called safety schools are no longer a sure bet.
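To make the mechanics concrete, here is a minimal sketch of how such yield-driven rejection might work. Everything in it is invented for illustration: the field names, the scoring formula, and the threshold are assumptions, not the model of any real admissions office.

```python
# Hypothetical sketch of the yield-prediction logic described above.
# The features, coefficients, and threshold are all made up; the point
# is only the shape of the incentive, not any school's actual model.
from dataclasses import dataclass

@dataclass
class Applicant:
    sat: int              # SAT score
    gpa: float            # high-school GPA
    visited_campus: bool  # crude proxy for genuine interest

def matriculation_probability(a: Applicant) -> float:
    """Stand-in for a model fit on historical yield data: the stronger
    the applicant, the less likely they are to enroll at a safety school."""
    strength = (a.sat - 1000) / 400 + (a.gpa - 3.0)
    interest = 0.3 if a.visited_campus else 0.0
    return max(0.05, min(0.95, 0.6 - 0.25 * strength + interest))

def admit(a: Applicant, yield_floor: float = 0.35) -> bool:
    # Rejecting applicants unlikely to enroll lowers the acceptance rate
    # and raises the yield, both flattering to the rankings, even when
    # the rejected applicant is excellent.
    return matriculation_probability(a) >= yield_floor

star = Applicant(sat=1560, gpa=4.0, visited_campus=False)
solid = Applicant(sat=1250, gpa=3.4, visited_campus=True)
print(admit(star), admit(solid))  # False True: the star is turned away
```

Note what the threshold is optimizing: not the quality of the entering class, but the two numbers the ranking rewards.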
The convoluted process does nothing for education. The college suffers. It loses the top students—the stars who enhance the experience for everyone, including the professors. In fact, the former safety school may now have to allocate some precious financial aid to enticing some of those stars to its campus. And that may mean less money for the students who need it the most.
It’s here that we find the greatest shortcoming of the U.S. News college ranking. The proxies the journalists chose for educational excellence make sense, after all. Their spectacular failure comes, instead, from what they chose not to count: tuition and fees. Student financing was left out of the model.
This brings us to the crucial question we’ll confront time and again. What is the objective of the modeler? In this case, put yourself in the place of the editors at U.S. News in 1988. When they were building their first statistical model, how would they know when it worked? Well, it would start out with a lot more credibility if it reflected the established hierarchy. If Harvard, Stanford, Princeton, and Yale came out on top, it would seem to validate their model, replicating the informal models that they and their customers carried in their own heads. To build such a model, they simply had to look at those top universities and count what made them so special. What did they have in common, as opposed to the safety school in the next town?
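The circularity is easy to demonstrate. Below is a toy illustration, with invented schools, proxies, and weights, of how a ranking model can be tuned to reproduce an established hierarchy; none of the numbers come from U.S. News.

```python
# Toy illustration of reverse-engineering a ranking from a hierarchy.
# All values are hypothetical, normalized to the range 0-1.
proxies = {
    "Harvard":  {"selectivity": 0.97, "reputation": 0.99, "alumni_giving": 0.90},
    "Yale":     {"selectivity": 0.95, "reputation": 0.97, "alumni_giving": 0.88},
    "Safety U": {"selectivity": 0.40, "reputation": 0.35, "alumni_giving": 0.20},
}

# Candidate weights for each proxy; note that tuition and fees
# appear nowhere in the feature set.
weights = {"selectivity": 0.4, "reputation": 0.4, "alumni_giving": 0.2}

def score(school: str) -> float:
    return sum(weights[p] * v for p, v in proxies[school].items())

ranking = sorted(proxies, key=score, reverse=True)
print(ranking)  # ['Harvard', 'Yale', 'Safety U']

# If the output contradicted the informal model in the editors' heads,
# say, Safety U above Yale, the temptation is to adjust the weights
# until it doesn't. The hierarchy validates the model, and the model
# then enforces the hierarchy.
```

The sketch shows why coming out with Harvard on top felt like validation: the model was built by counting what the top schools already had, so it could hardly produce any other answer.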