
Hive Mind
How Your Nation’s IQ Matters So Much More Than Your Own
Garett Jones

Chapter 1

JUST A TEST SCORE?

Here’s the most important fact about IQ tests: skill in one area predicts skill in another. If a person has an above-average score on one part of an IQ test—the vocabulary section, for instance—she probably has an above-average score on any other part of the test. A thorough IQ test such as the Wechsler or the Stanford-Binet actually contains about a dozen separate tests. So check whether a person did well on the vocabulary test: if she did, she’s probably better than average at memorizing a long list of numbers, she could probably look at the drawing of a person talking to a police officer and instantly realize that the officer is standing knee-deep in water, and she probably did better than average on the wood block puzzle.

That’s the real surprise of IQ tests and other cognitive tests: high scores in one area tend to go along with high scores in other areas, even ones that don’t outwardly appear similar. Psychologists often talk about the “general factor of intelligence,” the “g factor,” or the “positive manifold,” but let’s call it “the da Vinci Effect,” since Leonardo’s excellence spanned so many subjects from painting to clock design to military engineering. The da Vinci Effect means that our parents and grandparents are usually wrong when they tell us “everything balances out in the end” or “if you’re weak in one area that just means you’re stronger in another.” When it comes to IQ tests—on average—if a person is stronger in one area, that’s a sign the person is probably stronger at other tasks as well.

We’ll return to the notion of the da Vinci Effect a lot, so it’s a concept worth understanding well. The claim isn’t that every relationship between mental skills is always strongly positive—there are always exceptions to every rule, just as there are people who smoke two packs a day and live to be ninety. But, as we’ll see in this chapter, many of the most commonly recognized general skills have strong positive relationships, and it’s rare to find any sort of negative relationship across large groups of people.

IQ tests are often the stuff of controversy. What can they really tell us? What can they actually measure? What real-world outcomes can they help us to predict? That’s exactly what we’ll discuss in this chapter. It’s going to focus exclusively on studies done in rich countries, studies in which test subjects are reasonably healthy and have some prospect of a real education. And I make a claim that, in these settings, the mainstream of psychology is also comfortable making: IQ tests are a rough, imperfect measure of what the typical person would call general intelligence.

Of course, a test score is just a test score until we’ve seen real evidence that it predicts something beyond other test scores. But when we see that the da Vinci Effect turns up repeatedly during IQ tests in today’s rich countries, we know we’re getting closer to the real-world version of intelligence: the ability to solve a variety of problems, quickly recall different types of information, and use deductive reasoning in multiple settings. When ordinary people say someone is intelligent, they usually mean that the person has mental skills that span a wide range. They mean that that person’s mental skills have at least a touch of the da Vinci Effect.

“True on Average”

I discuss a lot of facts in this book and make a lot of claims about general tendencies. It should go without saying but bears repeating when discussing the important topic of human intelligence: these statements are only true on average. There are many exceptions; in fact almost every case is an exception, with about half of the cases turning out better than predicted and half turning out worse.

It would be tedious if I had to repeat the phrases “true on average” or “this relationship has many exceptions” or “tends to predict” every single time I make a factual claim. So I won’t. But remember: every data-driven claim in this book is only a claim about the general tendency, and there are always exceptions. Every person we meet, every nation we visit, is an exception to the rules—but it’s still a good idea to know the rules.

Intelligence: As with Strength or Size, Oversimplification Often Helps

Suppose you were given a hundred computers and told your job was to figure out which ones were faster than others. There’s one catch: you don’t know the actual processor speed of any of the computers. How would you rank them? You might try running ten or twenty different pieces of software on each of them—a video game or two, a spreadsheet, a word processor, a couple of web browsers. For each computer, you could write down, on a scale of 1 to 100, how fast the computer runs each piece of software, and then average those numbers together to create a computer speed index for each computer. Of course, the process won’t be entirely fair—maybe you unintentionally chose a spreadsheet program that was designed specifically for one type of computer—but it’s a step in the right direction. Further, it’s probably better than just trying out one or two applications indiscriminately on each computer for half an hour and then writing up a subjective review of each machine. Structuring the evaluation process probably makes it fairer.
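To make the index arithmetic concrete, here is a minimal sketch in Python of the procedure just described: score each machine on each benchmark, average, and rank. The machine names, benchmark list, and all the numbers are invented for illustration.

```python
# A crude "speed index": average each machine's 1-100 scores across
# several software benchmarks, then rank machines by that average.
# All names and numbers below are invented for illustration.

BENCHMARKS = ["video_game", "spreadsheet", "word_processor", "browser"]

scores = {
    "computer_A": {"video_game": 88, "spreadsheet": 75, "word_processor": 90, "browser": 81},
    "computer_B": {"video_game": 55, "spreadsheet": 68, "word_processor": 60, "browser": 57},
    "computer_C": {"video_game": 72, "spreadsheet": 80, "word_processor": 70, "browser": 77},
}

def speed_index(machine_scores):
    """Average the per-benchmark scores into one summary number."""
    return sum(machine_scores[b] for b in BENCHMARKS) / len(BENCHMARKS)

for machine in sorted(scores, key=lambda m: speed_index(scores[m]), reverse=True):
    print(f"{machine}: speed index = {speed_index(scores[machine]):.1f}")
```

Note the design choice: averaging treats every benchmark as equally important, which is exactly the kind of defensible oversimplification the analogy is about.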

Now suppose you were trying to assess the overall physical strength of a hundred male Army recruits. You know that some people are great at carrying rocks and some are great at pushups and so on, but you also suspect that, on average, some people are just “stronger” than others. There will be tough cases to compare, but perhaps you could create a set of ten athletic events—call it a decathlon. People who do better in each event get more points. Wouldn’t the people with the ten highest scores generally be quite a bit stronger—in the common sense of the word—than those with the ten lowest scores? Of course they would. There’d be an exception here and there, but the ranking would work pretty well. And here’s a big claim you’ll probably agree with: recruits who did the best in the decathlon would usually be better at other lifting-punching-carrying tasks that weren’t even part of the decathlon. The decathlon score would help predict nondecathlon excellence.

Again, an index, an average, will hide some features that might be important. But for large, diverse populations, there is almost surely a da Vinci Effect for strength. It’s not impossible for an adult male who benches only seventy-five pounds to be great at pullups, but it will be relatively rare. Usually, strength in one area will predict strength in others. Some people are on average “stronger” overall. You get the point: the da Vinci Effect comes up in areas of life other than discussions of mental skill. In these other, less sensitive areas, it’s easy to see the value of a structured test. We get the same benefit by measuring intelligence in a structured way.1

It was psychologist Charles Spearman who began the century-long study of the da Vinci Effect. In a 1904 study of students at a village school in Berkshire, England, Spearman looked at student performance in six different areas: the classics (works written in Greek and Latin), as well as French, English, math, discrimination of musical pitch, and musical talent.2 And while it’s perhaps obvious that people who did better at French would usually be better at Greek and Latin, it’s not at all obvious that people with better musical pitch would be substantially better at math—and yet that’s what Spearman found.

But Spearman went further than that—he asked whether it was reasonable to sum up all of the data into just two categories: a “general factor” of intelligence, and a residual set of skills in each specific area. If you tried to sum up a person’s various academic skills—or later, his test scores—with just one number, just one “general factor,” how much information would you throw away? We do this kind of data reduction every time we sum up your body temperature with just one number. (You know you’re not the same temperature everywhere, right?) We also do this when we sum up a national economy’s productivity by its “gross domestic product per person” (which hides the various strengths and weaknesses of the medical sector, the restaurant sector, and so on), or even when we describe a person as simply “nice” or “mean.” Whether the simplification works well is a practical matter—so how practical is it to sum up all of your cognitive skills on a variety of tests with just one number?

As it turns out, it actually works pretty well. Here’s one way to sum it up for modern IQ tests: this “general factor,” this “g factor,” this weighted average of a large number of test scores, can summarize 40 to 50 percent of all of the differences across people on a modern IQ test.3 Some people do better on math sections, some do better on verbal sections, some do better on visual puzzles—but almost half the overall differences across all tests can be summed up with one number. Not bad for an oversimplification.
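For readers who want to see where a number like “40 to 50 percent” comes from, here is a minimal sketch in Python. It simulates subtest scores that all share a single latent factor, then reports the share of total variance captured by the first principal component of their correlation matrix. The principal component is a standard stand-in for the g factor here (real analyses use formal factor models), and the 0.5/0.8 weights are invented assumptions chosen to land near the figure in the text.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_subtests = 1000, 4

# Each simulated person has one latent "general factor" g; every subtest
# score mixes g with its own noise. The 0.5 / 0.8 weights are invented.
g = rng.normal(size=n_people)
subtests = np.column_stack(
    [0.5 * g + 0.8 * rng.normal(size=n_people) for _ in range(n_subtests)]
)

corr = np.corrcoef(subtests, rowvar=False)   # subtest correlation matrix
eigenvalues = np.linalg.eigvalsh(corr)       # symmetric matrix, so eigvalsh

# Share of all score differences summarized by the single best component:
share = eigenvalues.max() / eigenvalues.sum()
print(f"One number summarizes about {share:.0%} of the variance")
```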

At the same time, this g factor in mental skills helps to explain why reasonable, well-informed people can dispute the value of IQ tests. On the one hand, it’s great to know that one number can sum up so much. On the other hand, a little more than half of the information is still left on the table—so if you’re hiring someone just to solve math problems or just to write good prose, you’d obviously want to know more than just that one overall IQ score. What the g factor can tell you is that your math expert probably has a good vocabulary.

Measuring Cognitive Skills: A Rainbow of Diverse Methods

It’s worth noting that the most comprehensive IQ tests aren’t like normal tests; they’re structured more like interviews. Some skeptics dismiss IQ tests as just measuring whether you’re good at staring at a piece of paper, coming up with an answer, and writing it down. But the comprehensive IQ test used most often today—the Wechsler mentioned earlier—involves little paper-staring and almost no pencils. The person giving the test (a psychologist or other testing expert) asks you why the seasons change or asks you to recite a list of numbers that she reads out to you. You answer verbally. Later you are handed some wooden puzzle blocks and you try to assemble them into something meaningful.

And on one section, you do actually take a pencil to mark down your answers. Your job on this “coding test” is to translate small, made-up characters into numbers using the coding key at the bottom of the page. The circle with a dot inside stands for 4; an “X” with a parenthesis next to it stands for 7. Code as many as you can in a minute or two. (Note that I am not using actual items from IQ tests here. I just use examples that are similar. One doesn’t give away answers to IQ test questions.)

However, some more rudimentary IQ tests really are just written multiple-choice exams, and one of them plays an important role throughout this book and in economic research: Raven’s Progressive Matrices. Take a look at Wikipedia’s sample Raven’s question (Figure 1.1): What kind of shape in the lower-right corner would complete the pattern?4 Fortunately, the real Raven’s is multiple choice, so you needn’t solve it yourself. In all these questions, the goal is to look for a visual pattern and then choose the option that completes the pattern.

FIGURE 1.1   A problem similar to those on the Raven’s Progressive Matrices
Source: http://en.wikipedia.org/wiki/File:Raven_Matrix.svg (Under Creative Commons License, from user Life_of_Riley)

The questions eventually get quite difficult. The lower-right corner is always blank, and you choose the best multiple-choice response. Raven’s is popular because it can easily be given to a roomful of students at once (no need for one tester per student) and because it appears (note the italics) to have fewer cultural biases than some other IQ tests: the test doesn’t measure your vocabulary, your exposure to American or British history, your skill at arithmetic, or any other obviously school-taught skill. Most people don’t practice Raven’s-style questions at school or at home, so training (which obviously can distort IQ scores artificially) might not be much of a concern.

Verbal Scores Predict Visual Scores Predict Verbal Scores

The g factor or da Vinci Effect means that your scores on one part of an IQ test predict your scores on other parts. But how well do they do that? Is it almost perfect? And if so, what does an “almost perfect” relationship look like in the real world? Here’s one example: the relationship between the heights of identical twins. Identical twins are almost always almost exactly the same height as each other.5

Throughout this book, when two measures have a relationship that strong, I’ll call that a “nearly perfect” or “almost perfect” relationship.6 The two measures don’t have to be recorded in the same units: the average monthly Fahrenheit temperature in Washington, D.C., has a nearly perfect relationship with the average monthly centigrade temperature in Baltimore, for instance, rising and falling together over the course of a year. Another example of a “nearly perfect” relationship is your IQ measured this week versus your IQ measured next week. A few people have exceptionally good or bad test days, but they’re not common enough to weaken the nearly perfect relationship. Even more relevant: in one study, a person’s adult IQ has an almost perfect relationship with his IQ five years later.7

A slightly weaker but still strong relationship exists between the body mass index (BMI) of identical twins raised apart.8 BMI is a complicated ratio of weight and height that is used to measure whether people are over- or underweight. You can imagine why this relationship might be a bit weaker than the height relationship: some parents feed their kids more calories, some kids live in towns where sports are popular, and so on. But the rule that identical twins have similar BMI is still extremely useful. This is what we’ll call a “strong” or “robust” relationship. This is like the relationship between your IQ when you’re a teenager and your IQ when you’re in middle age, at least in the rich countries. High scorers in tenth grade are almost always above-average scorers in middle age, with some doing noticeably better than before and some doing noticeably worse. Here, the exceptions are interesting, noticeable, an area for future research, but only a fool would ignore the rule.9 For instance, the link between national average test scores and national income per person is strong.

Slightly weaker relationships need their own expression, and we’ll call those “modest” or “moderate” relationships.10 Here, big exceptions are extremely common, but if you’re comparing averages of small groups of people, you’ll still see the rule at work. An example we’re all familiar with is the relationship between height and gender. Men are usually taller than women, but enormous exceptions abound: indeed, few would protest the statement “men are taller than women” because we all know it’s just a generalization. These “modest” or “moderate” relationships sometimes exist between different parts of an IQ test or across very different kinds of IQ tests. For example, one study of third graders found a moderate relationship between a child’s Raven’s score and her vocabulary scores—but the same study found a strong, robust relationship between vocabulary scores and overall reading skills in the third grade, and by the fifth grade even the Raven’s score had a robust relationship with reading skills.11 As people get older, the relationships across different parts of an IQ test tend to grow more robust.

This is one of the surprising yet reliable findings of the past century: visual-spatial IQ scores have moderate to robust relationships with verbal IQ scores, so you can give one short test and have a rough estimate of how that person would do on other IQ tests. My fellow economists and I have taken advantage of this aspect of the da Vinci Effect in our research. We often have test subjects take the Raven’s matrices since it has a moderate to robust relationship with other IQ test scores and it’s quite easy to hand out copies of the written test to groups of students.

Anything less than a “modest” relationship I’ll call a “weak” relationship. That’s like the relationship between height and IQ.12 The relationship is positive, but much taller people only have slightly higher than average IQs. The relationship isn’t nothing, but it’s an effect that will only be noticeable when you compare averages over large numbers of people. Typically, a group of women who are six feet tall are probably just a little bit smarter than a group of women who are five feet tall, with the emphasis on “just a little bit.” You should still do the job interview even if she walks through the door at 4′11″.
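Notes 6, 9, and 10 pin these verbal labels to numeric correlation bands. Here is a minimal sketch in Python of that mapping; the example values are illustrative stand-ins, not measured correlations from the cited studies.

```python
def describe_correlation(r):
    """Map a correlation onto the book's verbal labels, using the
    cutoffs from notes 6, 9, and 10 (|r| rounded to the nearest 0.1)."""
    r = round(abs(r), 1)
    if r >= 0.9:
        return "nearly perfect"
    if r >= 0.6:
        return "strong / robust"
    if r >= 0.3:
        return "modest / moderate"
    return "weak"

# Illustrative values only; the real studies report their own figures.
print(describe_correlation(0.95))  # identical twins' heights
print(describe_correlation(0.7))   # teenage IQ vs. middle-age IQ
print(describe_correlation(0.4))   # Raven's vs. vocabulary, third grade
print(describe_correlation(0.15))  # height vs. IQ
```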

IQ Without a Test

Wouldn’t it be wonderful if we could get a rough measure of someone’s IQ, their average set of mental skills, without having to give any test at all? That way, all the arguments about test bias, language skills, and who went to a good school could fade into the background and we could have a useful, if only somewhat accurate, measure of a person’s IQ. Fortunately, the past few decades have presented us with just such a measure, and it comes from an MRI machine. Yes, magnetic resonance imaging, the same device that’s used to scan for tumors and heart disease.

IQ researchers use the MRI to definitively answer a question that people have asked for centuries: Do smarter people usually have bigger brains? The answer, on average, is a very clear “yes.” The correlation between a person’s IQ and his brain size is modest or moderate: big exceptions abound, but the rule is still there. There are so many studies looking at the IQ-to-brain-size question that there are now reviews that look at all the relevant studies and textbooks that review the reviews. Three quotes—two from textbooks published by Oxford and Cambridge University presses, and a third from a book in Oxford’s excellent Very Short Introduction series—give us the story:

Although the overall correlation between brain size and intelligence is not very high, there can be no doubt of its reliability.13

Clearly overall brain size is correlated with intelligence test scores. The [modest] correlation . . . is not everything, but is not to be dismissed.14

To the best that we can judge, then, the untutored guess that the cleverer person is literally more “brainy” has some modest force.15

The author of the second quote, psychologist Earl Hunt, goes on to say that even though science has verified the rough accuracy of the IQ-to-brain-size relationship,

I doubt there will be any great effort to develop this finding any more. . . .16

The reason he gives? The same MRI technology that verified the relationship has made it possible to search for the precise regions of the brain that are associated with individual cognitive skills, so one can stop looking at size and start looking for locations. The path to understanding the how of intelligence is likely to run through the where of individual mental processes. I won’t pursue these fascinating questions further, but this new frontier illustrates how MRI technology is allowing researchers to search for the precise brain processes that form the structure of intelligence.

What does the modest relationship between brain size and IQ really tell us about the nature of intelligence? By itself, not much. Nobody thinks the weak link between IQ and height is evidence that height by itself makes a person smarter, so we should be similarly cautious about concluding that a bigger brain makes a person smarter. But the modestly reliable link between IQ and brain size is useful all the same. Critics of IQ testing often claim that the tests are biased in this way or that, and these claims deserve serious attention. However, it’s unlikely that the IQ tests and the MRI machines both share the same bias. IQ researchers have found still other predictors of IQ, predictors that aren’t at all like traditional IQ tests, predictors that get closer to the idea that the brain is a computer that processes information. These predictors remind us that while speed isn’t everything, some computers are faster than others and faster is usually better.

Quick!

You’re looking at a screen and an image flashes quickly in front of you for half a second. Hmm—was that the letter F or the letter L? You say “F” out loud and soon another image flashes—this time for a quarter-second. Again, you give your best guess, another F. Then an eighth of a second flash, a sixteenth, and so on. Shorter and shorter flashes of the image make it harder to guess correctly each time.

The images are usually simple shapes, like a regular C or a backwards one, and your job is to note, for instance, whether the C is open to the left or the right. Often you won’t have to say the words “left” or “right”; you’ll just toggle a switch to the left or right.

So at what point would you no longer be able to do better than random at correctly answering “left” or “right”? When the image flashes for just an eighth of a second? A 32nd? A 128th? That’s the key variable the researcher keeps track of in this study. Some people are essentially guessing when it’s anything less than a 16th of a second; others find that’s more than enough time. Of course, there’s one other variable the researcher wants to measure: your conventional IQ score. Studies that compare IQ to how much time you need to inspect the image are known, unsurprisingly, as “inspection time” studies.
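Here is a minimal simulation in Python of the procedure just described. The psychometric curve, the 75-percent criterion, and every number are invented assumptions; the point is only to show how an “inspection time” falls out of accuracy-by-duration data.

```python
import random

random.seed(1)

def prob_correct(duration, threshold=1 / 24):
    """Invented accuracy curve: pure guessing (50%) for very short
    flashes, approaching 100% for long ones; `threshold` is the
    duration at which accuracy reaches 75%."""
    return 0.5 + 0.5 * duration / (duration + threshold)

durations = [1 / 2, 1 / 4, 1 / 8, 1 / 16, 1 / 32, 1 / 64, 1 / 128]
trials = 200
inspection_time = None

for d in durations:                      # longest flash first
    hits = sum(random.random() < prob_correct(d) for _ in range(trials))
    accuracy = hits / trials
    print(f"{d:.4f} s flash: {accuracy:.0%} correct")
    if accuracy >= 0.75:                 # one common "better than guessing" bar
        inspection_time = d              # shortest duration that still passes

print(f"Estimated inspection time: about {inspection_time:.4f} s")
```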

Can it possibly be the case that people who only need to see the image for a tiny fraction of a second tend to have higher average IQ scores than people who need to see it for an eighth or sixteenth of a second? Summarizing “dozens of studies” run on “four continents” dating back to the 1970s, psychologist Ian Deary says,

[t]he overall answer is yes, there is a moderate association between how good people are at the inspection time test and how well they score on intelligence tests.17

Another review of inspection time studies found that people with shorter inspection times—people who could more quickly identify the image—not only tended to have higher overall IQs, but in particular tended to be better at the more abstract parts of the IQ test. Good “inspectors” tended to be better at visual, Raven’s-type puzzles, but perhaps not that much better at trivia and vocabulary questions.18 Inspection time tests get us closer to the idea of IQ as processing speed, but at the same time these tests show that processing speed isn’t the whole story of IQ—just as brain size isn’t the whole story.

A third kind of non-test test has been run many more times than the MRI and inspection time tests. You have a small computer panel in front of you, with a large button in the middle of the panel and four smaller buttons above it. The smaller buttons have lights in them. Your job is to hold down the large button until one of the smaller ones lights up; then, you touch the small lighted button as fast as you can. People who press the lighted button faster tend to have higher IQ scores. These simple tasks—“elementary cognitive tasks,” or ECTs as they are known in the literature—have provided further evidence that there’s more to IQ than book learning.

Reaction time studies have tended to focus on three measures: how quickly you press the target button, how quickly you take your finger off the large button when you see the smaller one light up, and how variable a person’s responses are. Do you reliably react after exactly two tenths of a second, or do you react faster on some trials than on others? People with higher IQ scores tend to be more stable, while those with lower scores respond more erratically. Overall, the relationship between IQ and most measures of reaction time is weak, weaker than IQ’s relationship with brain size or with inspection time. But the relationship has been found so often and in so many different testing situations—partly because it’s such a cheap experiment to run—that it’s now a bedrock fact of modern IQ research: people with higher IQs tend to press the lighted button faster. They tend, on average, to be quicker.
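These three measures are just summary statistics of per-trial timings. A minimal sketch in Python, with invented data for two hypothetical subjects, shows the mean-versus-variability distinction the studies focus on:

```python
import statistics

# Per-trial reaction times, in seconds, on the lighted-button task.
# Both subjects and all numbers are invented for illustration.
steady  = [0.20, 0.21, 0.19, 0.20, 0.22, 0.20, 0.19, 0.21]
erratic = [0.16, 0.31, 0.22, 0.38, 0.18, 0.27, 0.35, 0.19]

for name, rts in [("steady", steady), ("erratic", erratic)]:
    mean_rt = statistics.mean(rts)
    sd_rt = statistics.stdev(rts)    # trial-to-trial variability
    print(f"{name}: mean RT = {mean_rt:.3f} s, SD = {sd_rt:.3f} s")

# In the studies above, higher-IQ subjects tend to look like "steady":
# a bit faster on average and noticeably less variable.
```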

Your Job Is a Test, Too

IQ tests help predict scores on other IQ tests, even tests that are quite unlike each other. And IQ tests help predict brain size and some kinds of mental and even physical quickness. But how well do IQ tests predict how effective you will be at work? Here, the answer is unequivocal and backed by decades of research: IQ tests do about as well as the best kinds of job interviews—structured job interviews, in which the interviewer carefully designs the questions beforehand and sticks to the same ones with each candidate—and IQ tests are better than most of the methods people use to choose employees. Most human resource management textbooks will tell you the same story, probably citing one of management professor Frank Schmidt and psychologist John Hunter’s summary analyses of hundreds of IQ and job performance studies.

No method of hiring is perfect, or even close to perfect, at picking the best workers—for instance, the relationship between IQ scores and eventual worker performance is modest to strong at best. But IQ tests are as good as anything that exists in the real world. And here’s one useful finding: you’re much better off forming your opinion of a worker based on her IQ score than basing it on a check of her references or (worst of all) a handwriting analysis.19

In addition, it appears that IQ tests are even better at predicting outcomes when the job requires higher skills. Back in the 1960s, the Bell Telephone System gave its entry-level management trainees an IQ-type test along with a number of personality tests. Bell’s human resources division kept the test results a secret for two decades, even from other employees in the firm. When, after two decades, the company looked back to see which tests did the best job of predicting which trainees eventually rose the highest in the company hierarchy, the IQ-type test did the best job, beating out the personality tests.20 Looking across many studies of IQ in the elite workforce, one review says,

[G]eneral cognitive ability is the best single predictor of executive/professional-level performance, just as it is of performance in the middle to high-end range of the general workforce.21

But IQ has predictive power among workers outside the elite. The U.S. military uses IQ tests routinely to screen recruits, and every year the military turns down potential recruits who do poorly on the tests. If IQ tests were useless in the workplace, the military would be foolish to turn down able-bodied, low-scoring men and women who were willing to serve. The U.S. military acts like it believes in the power of the da Vinci Effect.

And the military has sound evidence for taking IQ tests seriously. Research using the U.S. Army’s vast datasets on soldier IQ and subsequent performance found that an enlistee’s IQ score has a strong positive relationship with that soldier’s “technical proficiency” and “general soldiering” skills. The researchers ran a comparison as well, testing one predictor of job success against another: which better predicted a soldier’s overall technical proficiency and general soldiering skills, IQ or a measure of personality and temperament? IQ won the race quite handily: it had a strong relationship with these measures of job success, while personality was just a modest predictor of success. That said, the personality and temperament measure won in the attempt to predict a soldier’s leadership skills, discipline level, and physical fitness—so again, IQ isn’t everything, but it’s not something you’d leave on the table.22

In most of these studies that look at the relationship between IQ and worker quality, the quality measure is subjective: you ask the worker’s boss how well the worker did, and compare that judgment against the worker’s IQ score. Some studies can look at somewhat more objective measures, such as sales per year for a salesperson, or successful sorties for a military pilot. The more objective the measure, the stronger the relationship usually is between IQ and the measure of worker quality. But there’s another indirect measure of worker quality that’s particularly popular among economists: wages. The United Way aside, most employers aren’t running charities, so they only pay one worker more than another when they need to. And one reason they might have to pay a worker more is if he’s especially productive or especially good at his job. After all, good workers are likely to get hired away if they aren’t paid enough.

This is one reason why educated workers earn more than less educated ones: the educated workers can usually just do more stuff better. If that weren’t the case and employers were usually wrong about the (moderate) link between education and worker productivity, then any upstart firm could just hire inexpensive, less educated workers, do just as good a job at making cars or pizzas or software, and pocket the massive profits. The low-education firms would have explosive growth, blowing away the competition that remained foolishly wedded to the idea that you had to pay more to get more. Pretty soon, it would be all the buzz in the management consulting world: triple your profits by hiring the less educated!

Of course, we don’t see that, not as a general trend. At least in the private sector, there’s usually a reason why one group of workers in a particular line of work gets paid more than another, and it’s not because the owner of the firm is especially charitable or especially foolish: it’s because the higher-paid group is accomplishing more. That said, things are more complicated across lines of work: you usually have to pay people more to take on risky jobs, and you don’t have to pay them as much to do fun jobs. There will always be qualified people willing to play bass guitar in front of an audience for very little cash.

But as a rule, if we routinely see firms paying a lot for a set of skills, it’s probably because that set of skills is genuinely productive. How much more a group of workers is being paid will tell us how much the market values that set of skills. Well, how much does the market value IQ?

The Market Test: IQ and Wages

In 1957, a government agency in Wisconsin gave IQ tests to about three thousand teenage males, all high school graduates, and then checked up on the men later in their lives, first at the age of thirty-five and again at the age of fifty-three.23 As part of the study they asked some basic lifestyle questions, a bit about the subjects’ education and their parents’ education; they also used tax records to find out how much the teenagers’ parents earned and, later, how much the men themselves earned. It turns out that the subjects’ teenage IQ scores did a better job of predicting their wages at fifty-three than at thirty-five! So it looks like your IQ is something that, at least in a rich country such as the United States, you “grow into.” It takes a while for people to find their place in life, and that’s true for finding a place to use your intelligence.

So when these men were fifty-three, how much did IQ pay? The payoff to a high IQ appears moderate. Those with IQs in the top 10 percent earned about 60 percent more than those in the bottom 10 percent.

Has the Payoff to IQ Risen?

Other researchers have looked at other samples of workers, some in rich countries such as the United States, some in poor countries, and they tend to come to roughly similar results. What’s the overall picture of the IQ-wage relationship? Two of the great progressive economists of our time, Samuel Bowles and Herbert Gintis, coauthored a paper back in 2001 with the influential economist Melissa Osborne that looked at dozens of studies documenting the link between IQ and wages. Some social scientists had claimed that, in our postindustrial age, the labor market was placing more value on high-IQ workers; Bowles, Gintis, and Osborne wanted to see if that was the case.24 They showed pretty conclusively that it wasn’t. The market had valued IQ for decades, and it seemed to value IQ about as much in the early 1990s as it had in the early 1960s. Overall the IQ premium hadn’t changed.

A recent study confirms this finding: it looks at how well a young person’s IQ has predicted his or her level of education, occupational status, and income all the way from 1929 to 2003. The study, by sociologist Tarmo Strenze, assembled previous studies run in Europe, the United States, Canada, Australia, and New Zealand across the twentieth century and found that IQ overall had a moderate relationship with education and job status and a weak but positive relationship with income across the decades.25 And most important for our purposes, the link between IQ and income neither weakened nor strengthened across the decades. Young people in the 1930s and young people in the 1990s alike tended to have a weak, positive relationship between their measured IQ and their later income. So the rumors of IQ’s exploding importance have turned out to be wrong so far: it’s a reasonable guess that they’ll be wrong in the future as well. The paradox of IQ is likely to be with us well into the twenty-first century.

Coda: Intelligence Is a Key Ingredient in Emotional Intelligence

But isn’t there more than just one kind of intelligence? Aren’t emotional intelligence and social intelligence just as important as narrow IQ-type intelligence? The ability to read people, the ability to get along well with others—those skills are important, and IQ tests can’t be measuring those skills, can they? Social skills seem so different from the abstract pattern-finding of some IQ tests—but then again, being able to remember relevant facts about people you met a few weeks ago or the ability to interpret an ambiguous social situation might involve some of the same memory and puzzle-solving skills that IQ tests try to measure. Does the da Vinci Effect show up in social settings too?

One might contend that it’s even harder to measure social or emotional intelligence than it is to measure more conventional intelligence. But psychologists have tried: they’ve checked to see whether people with more social or emotional intelligence tend to have higher IQs, and so far it looks like they do. The relationship often isn’t as strong as the relationship between, say, a person’s vocabulary test scores and her score on the Raven’s matrices, so there are many exceptions, but the results are clear: IQ scores predict practical social skills.

The link between social or emotional intelligence and IQ has been tested for decades. Back in the 1920s, one early social intelligence test, the George Washington Social Intelligence Test, actually found a moderate relationship with overall IQ. That social intelligence test asked about “judgment in social situations, memory for names and faces, and recognition of the mental states behind words.”26 Another social intelligence test had people look at “film clips of brief scenes” showing people’s emotional states, “and their task was to identify that state.” Such tests have a weak to moderate relationship with a person’s IQ.27

Tests of emotional intelligence are better developed, and indeed there’s now a widely used test for “EQ,” the MSCEIT, the Mayer-Salovey-Caruso Emotional Intelligence Test. The MSCEIT measures both the perception side of emotional intelligence (“What is that person probably feeling?”) and the reasoning side (“What is the best way to handle this awkward situation?”). And perhaps by now it will come as no surprise that people who do better on the MSCEIT tend to do better on the Raven’s, an entirely visual-spatial IQ test. The relationship is modest but real: the da Vinci Effect is strong enough to span human relationships. But on its own, does EQ matter more than IQ? Let me turn it over to psychologist N. J. Mackintosh:

Contrary to some popular claims . . . there is no convincing evidence that tests of social or emotional intelligence are a better predictor of success than IQ.28

Indeed, if you know a person’s conventional IQ and you’re trying to predict job or school performance, there’s usually little benefit to learning that person’s EQ scores. But the reverse isn’t true: if all you know is a person’s score on the MSCEIT or a similar test, there’s real benefit to learning his IQ score. Intelligence tests predict emotional intelligence, and the two go together to some degree. But of the two, it’s clear which is usually more valuable. Better average social skills are typically just another benefit of having a higher IQ score, and since the economy is a social system, those social skills may prove important in explaining why higher-scoring nations tend to be more productive.

Notes

1. Both the computer speed and the strength analogies are standard metaphors in discussions of intelligence research, as is the idea of using indirect methods to measure the g factors for computer speed and strength. I would welcome a canonical reference to either idea, but am aware of none.

2. Spearman, “‘General Intelligence.’”

3. Mackintosh, IQ and Human Intelligence, 45. This is an excellent textbook.

4. Raven Matrix, User: Life of Riley.

5. Allison, Kaprio, Korkeila, Koskenvuo, Neale, and Hayakawa, “The Heritability of Body Mass Index.”

6. Technical note: “nearly perfect” indicates a correlation with an absolute value of 0.9 or greater. All correlations are rounded to the nearest 0.1.

7. Dodrill, “Long-Term Reliability of the Wonderlic Personnel Test.”

8. Allison and others, “Heritability of Body Mass Index.”

9. These “strong” or “robust” relationships are correlations between 0.6 and 0.8 in absolute value.

10. “Modest” or “moderate” equals correlations of 0.3 to 0.5.

11. Stanovich, Cunningham, and Freeman, “Intelligence, Cognitive Skills, and Early Reading Progress.”

12. Weinberg, Dietz, Penick, and McAlister, “Intelligence, Reading Achievement, Physical Size, and Social Class.”

13. Mackintosh, IQ and Human Intelligence, 132.

14. Hunt, Human Intelligence, 201. This is a provocative and informed survey and interpretation of the literature.

15. Deary, Intelligence, 48. This is an excellent introduction to IQ research.

16. Hunt, Human Intelligence, 201.

17. Deary, Intelligence, 57.

18. Kranzler and Jensen, “Inspection Time and Intelligence.”

19. Schmidt and Hunter, “The Validity and Utility of Selection Methods in Personnel Psychology.” See a graphical version of their famous Table 1 in Hunt, Human Intelligence, 331.

20. Hunt, Human Intelligence, 333.

21. Hunt, Human Intelligence, 334.

22. McHenry, Hough, Toquam, Hanson, and Ashworth, “Project A Validity Results.” See Table 4, cited in Hunt, Human Intelligence, 331.

23. Zax and Rees, “IQ, Academic Performance, Environment, and Earnings.”

24. Bowles, Gintis, and Osborne, “The Determinants of Earnings.”

25. Strenze, “Intelligence and Socioeconomic Success.”

26. Mackintosh, IQ and Human Intelligence, 242.

27. Mackintosh, IQ and Human Intelligence, 242–243.

28. Mackintosh, IQ and Human Intelligence, 250.
