simply reflects the dynamic of any competition.
So it is with IQ. Without question, there are wide differences in intellectual abilities throughout life, and if you test one hundred thousand kids at age ten and then test them again at age twenty-six, you’re going to find that, on average, they remain in roughly the same intellectual pecking order. Many individual scores will diverge—IQ scores are known to swing as much as thirty points over time in individuals with changing circumstances—but as a group, the age-ten numbers will correlate rather well with the age-twenty-six numbers.
Surprise, surprise: most people who are pretty good at academics at age ten (compared to others the same age) are also pretty good at age twenty-six; most who are excellent at age ten are also excellent at age twenty-six. That’s what IQ stability tells us—and that’s all it tells us. It does not suggest inborn limits, and it doesn’t even hint at the extraordinary power of individuals to change their own circumstances and lift their intellectual performance.
Intelligence scores of infants are not predictive of future scores or life success. That population is still too much in flux; individuals have not yet hit their stride; the pack has not yet taken shape; population inertia has not yet set in.
Comparing raw IQ scores over nearly a century, Flynn saw that they kept going up: Nippert, “Eureka!”
IQ test takers improved over their predecessors by three points every ten years.
These comparisons draw on the raw scores—not the weighted scores, which are periodically recalibrated so that the average always remains 100.
Using a late-twentieth-century average score of 100, the comparative score for the year 1900 was calculated to be about 60—leading to the truly absurd conclusion, acknowledged Flynn, “that a majority of our ancestors were mentally retarded.”
This retroactive analysis illustrates the logical flaw in continually using a curved IQ score to dismiss the competence of anyone scoring below 100.
“[The intelligence of] our ancestors in 1900 was anchored in everyday reality,” explains Flynn. “We differ from them in that we can use abstractions and logic and the hypothetical.”
Flynn adds:
When [asked]: “What do dogs and rabbits have in common,” Americans in 1900 would be likely to say, “You use dogs to hunt rabbits.” The correct [contemporary test] answer, that both are mammals, assumes that the important thing about the world is to classify it in terms of the taxonic categories of science … Our ancestors found pre-scientific spectacles more comfortable than post-scientific spectacles, [because that’s what] showed them what they considered to be most important about the world … (Flynn, “Beyond the Flynn Effect.”)
Examples of abstract notions that simply didn’t exist in the minds of our nineteenth-century ancestors include the theory of natural selection (formulated in 1864), and the concepts of control group (1875) and random sample (1877).
This comes from a 2006 lecture by James Flynn. An extended excerpt:
Over the last century and a half, science and philosophy have expanded the language of educated people, particularly those with a university education, by giving them words and phrases that greatly increase their critical acumen. Each of these terms stands for a cluster of interrelated ideas that virtually spell out a method of critical analysis applicable to social and moral issues. I will call them “shorthand abstractions” (or SHAs), it being understood that they are abstractions with peculiar analytic significance.
I will name [some] SHAs followed by the date they entered educated usage (dates all from the Oxford English Dictionary on line):
(1) Market (1776: economics). With Adam Smith, this term altered from the merely concrete (a place where you bought something) to an abstraction (the law of supply and demand). It