Sunday, December 19, 2010

Review of 2010

Drinking from the wrong chalice? By his mid-40s, Michael Jackson had skin like parchment.

The end of 2010 is drawing nigh, and the time has come to review my predictions from last year.

Brain growth genes

Back in 2005, it was found that human populations vary considerably at two genes, ASPM and microcephalin, that control the growth of brain tissue. The finding seemed to be ‘huge’ in its implications. Then, it all fizzled out. No correlation could be found between variation at either gene and differences in mental ability or head circumference (Mekel-Bobrov et al., 2007; Rushton et al., 2007).

A recent study has now shown that ASPM and several other genes (MCPH1, CDK5RAP2, CENPJ) do in fact influence growth of brain tissue, specifically cortical tissue.

… In 2010, we’ll probably see further developments in this area. Stay tuned.

This year did see further developments. Interestingly, these gene loci seem to interact with sex and ethnicity in their effects:

[In a Norwegian study by Rimol et al.] for each of the 15 positive SNPs, the association was sex-specific with all significant results for CDK5RAP2 SNPs being found only in males, whilst the significant results for MCPH1 and ASPM were only found in females.

The second study, by Wang et al., only considered variation in the coding sequence of MCPH1 but found that one non-synonymous SNP is associated with male cranial volume but not female cranial volume in a Chinese population of nearly 900 individuals, supporting a role for sex in the action of microcephaly genes. Intriguingly, it also suggests that SNPs in the same locus can have opposite effects in males and females, as for MCPH1 an exonic SNP contributes to Chinese male cranial volume whilst intronic SNPs and SNPs downstream of the coding sequence are associated with Norwegian female brain size. As the authors discuss, these results strongly suggest some microcephaly variants may influence brain development dependent on hormonal background or through interactions with genes which are differentially expressed between the sexes, potentially contributing to sex specific differences in brain structure. (Montgomery & Mundy 2010)

But why did earlier studies find nothing?

First, many of the previous studies only tested for associations with the few, recently derived ASPM and MCPH1 haplotypes which were the focus of claims of recent positive selection, while both Rimol et al. and Wang et al. consider a larger number of SNPs for which there is no a priori evidence for selection. Second, despite the possibility of deriving clear hypotheses of what phenotypes these loci should affect, many previous studies examined traits that are, at best, not directly relevant (e.g. IQ or altruism) or quite distantly removed (e.g. adult head circumference). (Montgomery & Mundy 2010)

Many people had thought that all variation in mental capacity shows up on IQ tests. So they threw in the towel once it became apparent that IQ does not vary with genetic variation at these loci.

So how do these loci affect mental capacity? I’ve argued that the most recent ASPM variant seems to be associated with the spread of alphabetical writing. It may thus assist the visual cortex in recognizing, storing, and processing strings of alphabetical script (Frost 2008).

Alternatively, Dediu and Ladd (2007) have argued that ASPM and microcephalin variants correlate with use or non-use of tone languages. This hypothesis has been tested with Chinese, Korean, Hmong, and American Caucasians who had little training in tone recognition, i.e., they were not musicians and did not engage in singing or instrument playing. The Chinese and Koreans consistently outperformed the other participants when asked to identify the relative distance between two tones. The Hmong showed no such advantage, even though they shared the ASPM and microcephalin characteristics of the Chinese and Koreans (Hove et al., 2010). Thus, while some East Asian populations apparently are better at processing differences in pitch, this aptitude seems to be unrelated to ASPM or microcephalin.

Early modern human genome

Scientists have retrieved mtDNA from a 30,000-year-old hunter-gatherer from Kostenki, Russia. This seems to be part of a trend to study the genome of early modern humans.

The Kostenki mtDNA genome was entirely sequenced, despite problems that seemed intractable (difficulties in distinguishing between early modern human DNA and contamination from present-day human DNA). The authors concluded: “With this approach, it may even become possible to analyze the nuclear genomes of early modern humans” (Krause et al., 2010).

This development is indeed promising. If we can compare early modern DNA with present-day nuclear DNA, we’ll find out the exact genetic changes, especially those in neural wiring, that led to the ‘big bang’ of modern human evolution some 80,000 to 60,000 years ago. Unfortunately, this ‘big bang’ almost certainly took place in Africa, where the climate is much less conducive to DNA preservation.

Ethnic differences in vitamin D metabolism

This year will see further evidence that natural selection has caused differences in metabolism among different human populations, including vitamin D metabolism.

For instance, many populations have long been established at latitudes where vitamin-D synthesis is impossible for most of the year. Some of these populations can get vitamin D from dietary sources (e.g., fatty fish) but most cannot. In these circumstances, natural selection seems to have adjusted their metabolism to reduce their vitamin-D requirements. We know that the Inuit have compensated for lower production of vitamin D by converting more of this vitamin to its most active form (Rejnmark et al., 2004). They also seem to absorb calcium more efficiently, perhaps because of a different vitamin-D receptor genotype (Sellers et al., 2003). Even outside the Arctic zone, there seem to be differences in vitamin-D metabolism from one population to another. In particular, vitamin-D levels seem to be generally lower in darker-skinned populations (Frost, 2009).

… Unfortunately, our norms for adequate vitamin intake are based on subjects or populations of European origin. We are thus diagnosing vitamin-D deficiency in non-European individuals who are, in fact, perfectly normal. This is particularly true for African Americans, nearly half of whom are classified as vitamin-D deficient, even though few show signs of calcium deficiency—which would be a logical outcome. Indeed, this population has less osteoporosis, fewer fractures, and a higher bone mineral density than do Euro-Americans, who generally produce and ingest more vitamin D (Frost, 2009).

… What will be the outcome of raising vitamin-D levels in these populations? Keep in mind that we are really talking about a hormone, not a vitamin. This hormone interacts with the chromosomes and gradually shortens their telomeres if concentrations are either too low or too high. Tuohimaa (2009) argues that optimal levels may lie in the range of 40-60 nmol/L. In non-European populations the range is probably lower. It may also be narrower in those of tropical origin, since their bodies have not adapted to the wide seasonal variation of non-tropical humans.

If this optimal range is continually exceeded, the long-term effects may look like those of aging …

I hope that people of African or Native origin will resist the vitamin-D siren song. Otherwise, many of them will become shriveled-up husks by their mid-40s … just like Michael Jackson.

Evidence continued to mount this year that vitamin-D metabolism differs by ethnicity. For risk of atherosclerosis, the optimal range is lower among African Americans than among European Americans. A sample of African Americans showed a positive correlation between calcified plaque formation and blood levels of vitamin D (25(OH)D), despite a negative correlation among European Americans over the same range (Freedman et al., 2010).

Another study of African Americans found that blood levels of 25(OH)D decreased linearly with increasing African ancestry, the decrease being 2.5-2.75 nmol/L per 10% increase in African ancestry. Sunlight and diet were 46% less effective in raising these levels among subjects with high African ancestry than among those with low/medium African ancestry (Signorello et al., 2010).
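The reported relationship is simply linear, so it can be sketched in a few lines of code. This is an illustrative sketch only, not code or a model from the paper; the function name and the example ancestry percentages are my own, and the paper's baseline 25(OH)D level is not quoted here, so only the relative decrease is computed:

```python
# Illustrative sketch (hypothetical, not from Signorello et al. 2010):
# 25(OH)D reportedly falls by roughly 2.5-2.75 nmol/L for every
# 10% increase in African ancestry.

def ancestry_drop(african_ancestry_pct, slope_per_10pct=2.5):
    """Expected decrease in 25(OH)D (nmol/L) relative to 0% African ancestry."""
    return slope_per_10pct * african_ancestry_pct / 10.0

# At the low end of the reported slope, 40% African ancestry
# predicts a drop of 10.0 nmol/L:
print(ancestry_drop(40))

# At the high end of the slope, 80% ancestry predicts 22.0 nmol/L:
print(ancestry_drop(80, slope_per_10pct=2.75))
```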

The New York Times has recently covered the growing unease with vitamin D supplements:

The very high levels of vitamin D that are often recommended by doctors and testing laboratories — and can be achieved only by taking supplements — are unnecessary and could be harmful, an expert committee says.

… The 14-member expert committee was convened by the Institute of Medicine, an independent nonprofit scientific body, at the request of the United States and Canadian governments. It was asked to examine the available data — nearly 1,000 publications — to determine how much vitamin D and calcium people were getting, how much was needed for optimal health and how much was too much.

… Some labs have started reporting levels of less than 30 nanograms of vitamin D per milliliter of blood as a deficiency. With that as a standard, 80 percent of the population would be deemed deficient of vitamin D, Dr. Rosen said. Most people need to take supplements to reach levels above 30 nanograms per milliliter, he added.

But, the committee concluded, a level of 20 to 30 nanograms [50 to 75 nmol/L] is all that is needed for bone health, and nearly everyone is in that range.

… It is not clear how or why the claims for high vitamin D levels started, medical experts say. First there were two studies, which turned out to be incorrect, that said people needed 30 nanograms of vitamin D per milliliter of blood, the upper end of what the committee says is a normal range. They were followed by articles and claims and books saying much higher levels — 40 to 50 nanograms or even higher — were needed.

After reviewing the data, the committee concluded that the evidence for the benefits of high levels of vitamin D was “inconsistent and/or conflicting and did not demonstrate causality.”

Evidence also suggests that high levels of vitamin D can increase the risks for fractures and the overall death rate and can raise the risk for other diseases. (Kolata 2010)
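The committee's figures are in nanograms per milliliter, while the rest of this post uses nmol/L; the bracketed note in the quotation above applies the conventional conversion of roughly 2.5 nmol/L per ng/mL of 25(OH)D (2.496 to be exact). A minimal sketch of that conversion, with a function name of my own choosing:

```python
# Conventional conversion for serum 25(OH)D:
# 1 ng/mL is approximately 2.5 nmol/L (exact molar factor: 2.496).
NMOL_L_PER_NG_ML = 2.5

def ng_ml_to_nmol_l(ng_ml):
    """Convert a 25(OH)D level from ng/mL to nmol/L (approximate)."""
    return ng_ml * NMOL_L_PER_NG_ML

# The committee's 20-30 ng/mL adequacy range becomes 50.0-75.0 nmol/L:
print(ng_ml_to_nmol_l(20))
print(ng_ml_to_nmol_l(30))
```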

H/T to Tod


Dediu, D., and D.R. Ladd (2007). Linguistic tone is related to the population frequency of the adaptive haplogroups of two brain size genes, ASPM and Microcephalin. Proceedings of the National Academy of Sciences, 104, 10944-10949.

Freedman, B.I., L.E. Wagenknecht, K.G. Hairston, et al. (2010). Vitamin D, adiposity, and calcified atherosclerotic plaque in African-Americans. Journal of Clinical Endocrinology & Metabolism, 95, 1076-1083.

Frost, P. (2009). Black-White differences in cancer risk and the vitamin-D hypothesis, Journal of the National Medical Association, 101, 1310-1313.

Frost, P. (2008). The spread of alphabetical writing may have favored the latest variant of the ASPM gene, Medical Hypotheses, 70, 17-20.

Hove, M.J., M.E. Sutherland, and C.L. Krumhansl. (2010). Ethnicity effects in relative pitch, Psychonomic Bulletin & Review, 17, 310-316.

Kolata, G. (2010). Report Questions Need for 2 Diet Supplements, The New York Times, November 29, 2010.

Krause, J., A.W. Briggs, M. Kircher, T. Maricic, N. Zwyns, A. Derevianko, and S. Pääbo. (2010). A complete mtDNA genome of an early modern human from Kostenki, Russia, Current Biology, 20, 231-236.

Mekel-Bobrov, N., D. Posthuma, S.L. Gilbert, et al. (2007). The ongoing adaptive evolution of ASPM and Microcephalin is not explained by increased intelligence. Human Molecular Genetics, 16, 600-608.

Montgomery, S.H. and N.I. Mundy. (2010). Brain evolution: Microcephaly genes weigh in, Current Biology, 20(5), R244.

Rejnmark L, Jørgensen ME, Pedersen MB, et al. (2004). Vitamin D insufficiency in Greenlanders on a Westernized fare: ethnic differences in calcitropic hormones between Greenlanders and Danes, Calcif Tissue Int, 74, 255-263.

Rimol, L.M., I. Agartz, S. Djurovic, A.A. Brown, J.C. Roddey, A.K. Kähler, M. Mattingsdal, L. Athanasiu, A.H. Joyner, N.J. Schork, et al. for the Alzheimer’s Disease Neuroimaging Initiative (2010). Sex-dependent association of common variants of microcephaly genes with brain structure. Proceedings of the National Academy of Sciences USA, 107, 384-388.

Rushton, J.P., P.A. Vernon, and T.A. Bons. (2007). No evidence that polymorphisms of brain regulator genes Microcephalin and ASPM are associated with general mental ability, head circumference or altruism. Biology Letters, 3, 157-160.

Sellers EAC, Sharma A, Rodd C. (2003). Adaptation of Inuit children to a low-calcium diet, CMAJ, 168, 1141-1143.

Signorello, L.B., S.M. Williams, W. Zheng, J.R. Smith, J. Long, Q. Cai, M.K. Hargreaves, B.W. Hollis, and W.J. Blot. (2010). Blood vitamin D levels in relation to genetic estimation of African ancestry, Cancer Epidemiology, Biomarkers & Prevention, 19(9), 2325-2331.

Tuohimaa, P. (2009). Vitamin D and aging, Journal of Steroid Biochemistry and Molecular Biology, 114, 78-84.

Wang, J.K., Li, Y., and Su, B. (2008). A common SNP of MCPH1 is associated with cranial volume variation in Chinese population. Human Molecular Genetics, 17, 1329–1335.

Sunday, December 12, 2010

Gender/face recognition: hue and luminosity

Averaged female face (left) and averaged male face (right). The key facial regions for gender recognition, in terms of either response time or accuracy, seem to be where facial skin borders the lips or the eyes.

The human face is a special visual object. We do not learn to recognize it. Instead, it is processed by the brain via a hardwired mechanism. There seems to be an evolutionary tendency to hardwire recognition of objects that appear often enough in our visual environment and are significant enough to human existence.

One task of this mechanism is to tell a female face from a male face. To this end, we unconsciously focus on several visual cues, including skin tone. It is well established that skin pigmentation visibly differs between men and women. This sexual dimorphism reflects differences in both constitutive pigmentation (untanned skin) and facultative pigmentation (tanning capacity). In comparison to women, men have higher concentrations of melanin and hemoglobin in their skin and lower concentrations of carotene. Men also tan more readily than women do for equal exposure times.

The psychologist Richard Russell argues that this visual cue has two components: (1) the absolute luminosity of facial skin and (2) the contrast between this luminosity and that of the lips and the eyes. Now a study from the Université de Montréal points to a third component: differences in hue, i.e., the degree of ruddiness and yellowness. Hue assists gender recognition particularly where facial skin borders the mouth region. In contrast, luminosity is most helpful where facial skin borders the eye/eyebrow region.

This gender recognition works faster with hue than with luminosity. If the observer is too distant or the lighting too dim, the brain falls back on the “slow channel” of luminosity:

This suggests that humans do use chromatic cues for discriminating face gender: When it’s informative, they use it and respond rapidly (for evidence that color is perceived faster than shape, see Holcombe & Cavanagh, 2001; Moutoussis & Zeki, 1997a, 1997b); when it’s not, they have to rely on the more robust and more sluggish luminance cues.
(Dupuis-Roy et al., 2009)

Interestingly, the authors conclude that this mechanism may be located in the infero-temporal cortex, since this brain region is involved in both face perception and color perception.

H/T to Eugene


Dupuis-Roy, N., I. Fortin, D. Fiset, and F. Gosselin. (2009). Uncovering gender discrimination cues in a realistic setting. Journal of Vision, 9(2):10, 1-8, doi:10.1167/9.2.10.

Russell, R. (2003). Sex, beauty, and the relative luminance of facial features, Perception, 32, 1093-1107.

Russell, R., B. Duchaine, and K. Nakayama. (2009). Super-recognizers: People with extraordinary face recognition ability. Psychonomic Bulletin & Review, 16(2), 252-257.

Russell, R. and P. Sinha. (2007). Real-world face recognition: The importance of surface reflectance properties, Perception, 36, 1368-1374.

Russell, R., P. Sinha, I. Biederman, and M. Nederhouser. (2006). Is pigmentation important for face recognition? Evidence from contrast negation, Perception, 35, 749-759.

Sunday, December 5, 2010

Sex, ethnicity, and facial skin perception

Postgraduate students, School of Psychology, Cardiff University

A recent study from Cardiff University (Wales) has found interesting sex differences in the way men and women evaluate facial skin color, specifically for faces of white, black, or mixed-race origin. Of all the male facial photos, the female participants rated White faces the least favorably. Conversely, of all the female facial photos, the male participants rated Black faces the least favorably. Participants of both sexes gave relatively low ratings to White faces for a wide range of characteristics (attractiveness, competence, dominance, warmth, maturity, strength, masculinity).

Previous research has suggested that perceived attractiveness and personality are affected by race such that White faces are more attractive but less masculine than Black faces. Such studies, however, have been based on very small stimulus sets. The current study investigated perceived attractiveness and personality for 600 Black, White and mixed-race faces. Many of the investigated personality traits were correlated with race when rated by White participants. Attractiveness specifically was greater for Black male faces than White male faces and, among mixed-race faces, Blackness correlated with increased attractiveness. A reverse pattern was found for female faces, with Whiteness being associated with attractiveness. The results are discussed in terms of the sexual dimorphism demonstrated in skin color. (Lewis 2010)

These findings are partially consistent with previous studies:

Feinman & Gill 1978
When a thousand American students were surveyed on their physical preferences in the opposite sex, 30% of the males versus 10% of the females disliked black skin. Conversely, 56% of the males versus 82% of the females disliked very light skin.

van den Berghe & Frost 1986
According to a cross-cultural survey, lighter skin is more strongly preferred for women than for men in all culture areas.

Frost 1994
Women have varying preferences over the menstrual cycle when choosing between human faces that differ slightly in skin tone. When pairs of male faces are shown, the darker face is more strongly preferred by participants in the first two-thirds of the cycle (high ratio of estrogen to progesterone) than by those in the last third (low estrogen/progesterone ratio). In contrast, when pairs of female faces are shown, skin-tone preference remains unchanged throughout the cycle.

Nonetheless, Lewis (2010) differs from these previous studies on three points:

1). The participants were asked to evaluate major differences in human skin color that clearly have racial/ethnic significations. In contrast, the previous studies examined how men and women evaluate minor differences. Van den Berghe and Frost (1986) found a cross-cultural preference for lighter-skinned women, but only in the sense of their being lighter than average for the local population. Similarly, Frost (1994) only examined female response to minor differences in skin tone.

2). Dark skin was generally preferred. This preference was merely stronger for male faces than for female faces. In contrast, van den Berghe and Frost (1986) found that light skin was generally preferred, with this preference being stronger in response to female faces. Frost (1994) likewise found that light skin was generally preferred, with this preference being weaker with regard to male faces during the first two-thirds of the menstrual cycle. Even then, the lighter male face remained the more popular of the two.

3). There was no control for phase of menstrual cycle. The sex difference in preference would probably have been greater if the author had excluded those female participants who were in the last third of the menstrual cycle.

The first point probably explains the second one. The author examined how men and women respond to major differences in skin color, and such differences have meanings that go far beyond sexual aesthetics. Because the white British participants had to choose among very divergent skin colors, their responses were almost certainly contaminated by ‘prejudice avoidance’, i.e., they avoided giving low ratings to non-white faces for fear of seeming prejudiced. Since anti-white prejudice is not stigmatized, the tendency would be to overcompensate—to err on the safe side.

Overcompensation is suggested by the results. Black faces were given top rating on all 7 items by the female participants and on 4 of the 7 by the male participants. White faces failed to get top rating on any item. This is particularly surprising given that all of the participants were white British. They apparently wished to avoid seeming prejudiced—even to the point of systematically rejecting their own people.

This source of bias does not invalidate the overall finding, i.e., the sex difference in face ratings. All of the participants were presumably immersed in the same ideological environment, and there is no reason to believe that prejudice avoidance is weaker in men than in women.


Feinman, S., & Gill, G.W. (1978). Sex differences in physical attractiveness preferences. Journal of Social Psychology, 105, 43‑52.

Frost, P. (1994). Preference for darker faces in photographs at different phases of the menstrual cycle: Preliminary assessment of evidence for a hormonal relationship, Perceptual and Motor Skills, 79, 507-514.

Lewis, M.B. (2010). Who is the fairest of them all? Race, attractiveness and skin color sexual dimorphism, Personality and Individual Differences, 50, 159-162.

Van den Berghe, P.L., & P. Frost. (1986). Skin color preference, sexual dimorphism and sexual selection: A case of gene‑culture co‑evolution? Ethnic and Racial Studies, 9, 87‑113.

Sunday, November 28, 2010

Femmes claires, hommes foncés (English post)

Averaged faces of 22 women and 22 men (White American subjects with no makeup). Female faces are lighter-skinned than male faces, while showing more contrast between facial skin and lips/eyes (see research by Richard Russell).

Les Presses de l’Université Laval has just published my book Femmes claires, hommes foncés. Les racines oubliées du colorisme.

An old Christian manuscript recounts the story of a man who went to live in a monastery with his infant son. As the boy became a young man, he began to see strange-looking beings in his dreams. One day, he ventured with his father into the outside world. On seeing some women, he exclaimed: “Father, those are the ones who came to see me during the night!”

We do not learn to recognize human faces. Nor do we learn to identify whether they are male or female. This type of image is processed by an innate mechanism that we inherit independently of other cognitive abilities. If this mechanism ceases to function following brain damage, the result is an odd syndrome: the patient may seem to be like everyone else but will not recognize a normally positioned face more easily than any other object, including an upside-down face.

This should be no surprise. If an object appears often enough in your visual field, while playing a significant enough role, you gain from recognizing it automatically (instead of having to learn its visual characteristics). Over time, natural selection gradually hardwires recognition of familiar objects, like human faces.

Which facial features are processed by this mechanism? There are notably the eyes and the mouth. There is also skin tone. Indeed, this feature seems to be key to distinguishing between male and female faces. When shown a facial photo, human subjects can tell its sexual identity even if the image is blurred and provides only the visual cue of skin tone. This cue has two aspects. First, women are paler than men, who are browner and ruddier because their skin is richer in melanin and blood. Second, women exhibit a higher luminous contrast between their facial skin and their lips and eyes.

This sex difference is universal, being larger if the population is medium in skin color and smaller if very fair or very dark. Just as universally, people tend to ritualize this difference in traditionalist societies, in general by lightening women’s skin even further through sun avoidance, wearing of protective clothing, use of cosmetics, and so on. In contrast, it gets little attention in modern societies, where skin color means ethnicity. Hence a common reproach I hear: “But that no longer matters nowadays. Skin color is about racism!”

Perhaps. But this sex difference used to matter. It once formed part of our very notions of femininity and masculinity. And whatever we may think, this past has shaped us.

Origins of male and female skin tones

Humans are born with very little skin pigment. This paleness is striking in dark-skinned populations, which consider it a mark of infancy. In Africa, a proud mother may invite her neighbors to see the ‘white’ person who has just arrived!

Skin darkens until just before puberty, when this trend reverses in young girls, the result being a sex difference. This lightening of female skin after puberty coincides with the thickening of subcutaneous fat. It seems that both are part of the same process of sexual development, like other changes that happen at this life stage.

What use is a fairer skin for women? There are three hypotheses:

Infantile mimicry? A fairer color is one of several visual cues that identify the human infant including smooth hairless skin and a ‘baby face.’ These cues acquired the property of decreasing the observer’s aggressiveness and increasing his/her feelings of care. They were then retained by the adult female as a means to modify her partner’s behavior in the same way. Such mimicry exists in other primate species. To the degree that the sexual bond is stronger and longer-lasting, the adult female retains certain highly visible infant traits.

Hormonal side effect? Fairer skin, through a fortuitous interaction between pigmentation and the sex hormones, became a means to assess a potential partner’s fecundability. Remember that girls lighten in color after puberty. Later, women tend to darken during pregnancy and as they get older. They also darken slightly during the nonfertile phase of the menstrual cycle.

Means to prevent vitamin-D deficiency? Natural selection lightened women’s skin in order to increase vitamin-D production, thereby ensuring enough calcium and phosphorus during pregnancy and breastfeeding.

Whatever the initial cause, lighter skin also became a means to distinguish women from men, as long as this sex difference remained the main source of pigment variation.

Today, women’s lighter skin has become much less noticeable in an increasingly multiethnic context. This gender line has also been blurred with the entry of the suntanned look into women’s fashion. Thus, in a survey of university students, I found that only a quarter were aware of this sex difference.

Earlier meanings

Yet our ancestors were very much aware. Before their continent opened up to the world five centuries ago, Europeans described skin color with reference to the complexions they saw among themselves. ‘White,’ ‘brown,’ and ‘black’ corresponded to what we now call light, tan, and dark. Again contrary to current usage, these skin tones identified individuals rather than ethnic groups. A white was a fairer-skinned person and a black a darker-skinned one. This way of seeing things persists in family names that once referred to gradations in pigmentation within a single population, like Leblanc, Lebrun, and Lenoir among the French, White, Brown, and Black among the English, or Weiss, Braun, and Schwartz among the Germans.

This narrow spectrum was conducive to gendering of skin color. A woman had to have a fairer skin than average, i.e., ‘white’ in Europe or East Asia, ‘golden’ in South-East Asia, and ‘red’ in sub-Saharan Africa. Despite being normative for women, a fairer skin did not monopolize all erotic male desires. In old European folklore, some desires could target darker women, i.e., the nut-brown maid of the English, the braunes or schwarzbraunes Mädel of the Germans, the brune of the French, or the barna kislány of the Hungarians. This type of eroticism was ardent, but also stormy and short-lived.

Conversely, a man had to have a darker skin than average. There was nonetheless some ambivalence. A man was handsome if fair, but virile and strong if brown. In medieval England, the tenth token of a knight of ‘strong Corage’ required a ‘broun coloure in al the body’, a quality that many vaunted by adding ‘the brown’ to their names.

According to American psychologist Richard Russell, people still use skin color to identify gender, albeit unconsciously, by means of two visual cues: (1) the absolute luminosity of the skin and (2) the contrast between this luminosity and that of the lips and the eyes. Russell also argues that these cues have shaped the evolution of cosmetics in different culture areas and different time periods. There is a cross-cultural tendency for women to lighten facial color and to accentuate its contrast with lip and eye color.

Sexual attraction and other functions

Besides recognition of sexual identity, skin tone seems to be used for other mental tasks. First, there is sexual attraction, which nonetheless involves many other factors (personal history, social and physical context, nature of the sexual relationship, etc.). This task seems to be governed by the levels of the sex hormones, at least in women. This is what I found in presenting female participants with six pairs of facial photos, three being female and three male. Each pair of faces was identical, except for a slight difference in skin tone. The participant then had to choose the face she preferred. It turned out that preferences varied with the phase of the menstrual cycle, but only for male faces. The darker man was more strongly preferred if the participants were in the first two-thirds of their cycle (high ratio of estrogen to progesterone) than if they were in the last third (low ratio of estrogen to progesterone).

Other researchers have noted this cyclical effect for other secondary sexual characteristics, like face shape and body odor. The more the level of estrogen increases, the more the ‘masculine’ version is preferred.

Skin tone also seems to influence certain prejudices. This phenomenon has given rise to a large body of literature, the theme evidently being how children learn race prejudice. An exception would be the work of two American psychologists, Deborah Best and John Williams, who saw this prejudice as being constructed from a universal and early developing tendency to prefer fairer skin.

This was their finding in Japan and in Europe with young children who were unfamiliar with darker-colored ethnic groups. When the children were shown pictures of people or animals, they associated lighter skin with positive qualities (e.g., clean, pretty, nice) and darker skin with negative qualities (e.g., dirty, ugly, nasty). These associations did not seem to be learned: their development did not follow a learning curve, nor was there any correlation with the child’s IQ, as there would be if they were learned.

But does all of this come down to fairer skin = positive qualities and darker skin = negative qualities? When, through a translation error, the children were given the word ‘robust,’ they associated this positive quality with darker skin. It seems that Best and Williams unconsciously chose feminine-sounding positive qualities and masculine-sounding negative ones.

Upcoming …

Last year, a Chinese research team demonstrated that the human face, as a visual object, is analyzed by a distinct mental mechanism. This was a major project with many participants, but it proved in a few months what had been suspected for decades. With the same method, we could find out whether this mechanism analyzes skin tone. All that is needed is a research team ready to act.

Thirty years ago, such teams existed. Today, there are no longer any. In their time, Best and Williams attracted many collaborators and sources of funding. Then, in the 1990s, the team was dismantled and its members directed to more down-to-earth projects.

The same story played out elsewhere in North America. To a large degree, the reason was the shift to applied research. The watchword was to make research more ‘cost-effective’, more ‘targeted’ and more ‘realistic.’ This conservative discourse was taken up by other people, who saw a way to settle old scores and control the future …

It must be said that biological determinism bothers some people who see in it a fatalistic ideology, even an adversary of efforts to improve the human condition. In my opinion, such a view is exaggerated, but it was the one held by most decision makers in the 1980s and 1990s. The result? This paradigm has virtually disappeared from most social science departments. At times, this ‘departmental cleansing’ took place out in the open with much shouting and finger-pointing. But in general it happened quietly through attrition and reprioritizing.

North America has long claimed to lead the free world—a claim that is less and less true. If the social sciences are the ‘canary in the mine,’ we seem to be moving toward another model of society, one where people naively avoid difficult questions to avoid the troubles that come with them.

Sunday, November 21, 2010

Femmes claires, hommes foncés

Les Presses de l’Université Laval have just published my book Femmes claires, hommes foncés. Les racines oubliées du colorisme (Fair women, dark men: the forgotten roots of colorism).

An old Christian manuscript tells the story of a man who withdrew into a monastery with his very young son. When the boy reached adolescence, strange-looking beings began to appear in his dreams. Then, one day, while going outside with his father, he saw some women. The young man exclaimed: ‘Father, those are the ones who come to see me at night!’

We do not learn to recognize the human face. Nor do we learn to identify whether it belongs to a man or a woman. This kind of image is processed by an innate mechanism that we inherit independently of our other cognitive capacities. If this mechanism stops working, as after a brain lesion, the result is a rather peculiar pathology: the patient may seem like anyone else, but when shown a face in its normal position he recognizes it no better than any other object, including a face turned upside down.

This should not surprise us. If an object appears often enough in the visual field, while playing a sufficiently important role, there is an advantage to recognizing it automatically, without having to go through learning. Over time, natural selection thus ends up ‘hardwiring’ the recognition of certain familiar objects, like the human face.

Which elements of the face does this mechanism process? Above all the eyes and the mouth. There is also skin tone, which indeed seems crucial for telling a man from a woman. When shown a photo of a face, human subjects can guess its sexual identity even if the image is blurred and leaves skin tone as the only cue. This cue has two components. First, a woman is pale in comparison with a man, whose skin is browner and ruddier because it is richer in melanin and blood. Second, the female face shows a greater luminous contrast between the color of the skin and that of the lips and eyes.

This difference between men and women is universal, being more pronounced in medium-colored populations and less so in those with very light or very dark skin. Traditionalist societies tend to ritualize this difference, notably by making the female complexion even lighter through avoidance of the sun, the wearing of protective clothing, the use of white face powders, and so on. Modern societies, in contrast, pay it little attention: there, skin color indicates ethnicity. This is why I often hear the reproach: ‘Come on, that no longer matters today. Skin color is about racism!’

No doubt. But this sex difference mattered greatly in the past. It was then woven into our very notions of femininity and masculinity. And, like it or not, that past has shaped us.

Origins of the male and female complexions

Humans are born with lightly pigmented skin. This paleness is striking in dark-colored populations, where it is considered a mark of the newborn. In Africa, a proud mother will sometimes invite her neighbors to come and see the ‘white’ child who has arrived in her home!

Thereafter, the skin darkens until just before puberty. The trend then reverses in young women, hence the difference in complexion between the sexes. This lightening of female skin after puberty follows the thickening of subcutaneous fat. The two, it seems, are part of the same process of sexual development, like other changes that occur at this time of life.

What purpose does a light complexion serve in women? There are three hypotheses:

Infant mimicry? Light skin would be part of a set of characteristics specific to the newborn: smooth, hairless skin and a ‘baby face.’ These key characteristics would have acquired the property of lowering the observer’s level of aggressiveness, while also prompting the observer to provide care. Women would then have imitated them in order to modify the behavior of their male partners in the same way. This kind of mimicry exists in other primate species; as the pair bond lengthens and intensifies, the female retains certain visual traits of the infant.

A side effect of hormones? Light skin, through a fortuitous interaction between pigmentation and the sex hormones, would have become a means of assessing the fecundability of a potential partner. Recall that a girl’s skin lightens after puberty. Later on, a woman’s skin tends to darken during pregnancy and with age, as well as slightly during the non-fertile phase of the menstrual cycle.

A means of making up a vitamin D deficiency? Natural selection would have lightened women’s skin in order to increase their production of vitamin D and thus provide them with enough calcium and phosphorus during pregnancy and breastfeeding.

Whatever the initial cause, this paleness would also have become a means of distinguishing women from men, as long as this sex difference remained the main source of pigmentary variation.

Today, however, female paleness is much less noticeable in an increasingly multiethnic environment. The fashion for tanning has further muddied the waters. Thus, in a survey of university students, I found that only a quarter were aware of this sex difference.

Older meanings

Our forebears, however, were keenly aware of it. Before their continent opened up to the world five centuries ago, Europeans described skin color in terms of the complexions they saw among themselves. They spoke of ‘white,’ ‘brown,’ or ‘black’ skin where we would say light, olive, or dark skin. Also unlike current usage, these complexions designated individuals rather than ethnic groups: a ‘white’ was someone with a light complexion; a ‘black,’ someone with a dark one. This way of seeing survives in family names that once indicated the pigmentary gradations of a single population, like Leblanc, Lebrun, and Lenoir among the French, White, Brown, and Black among the English, or Weiss, Braun, and Schwartz among the Germans.

This range of colors, by its narrowness, made it possible to sexualize complexion. A woman was expected to have skin lighter than average: ‘white’ in Europe or East Asia, ‘golden’ in Southeast Asia, and ‘red’ in sub-Saharan Africa. Yet, although the norm for women, light skin did not monopolize male erotic desire. In the folklore of old Europe, some desires could target the brown-skinned woman: the nut-brown maid of the English, the braunes or schwarzbraunes Mädel of the Germans, the brune of the French, or the barna kislány of the Hungarians. This type of eroticism was ardent, but also brief and tempestuous.

Conversely, a man was expected to have skin darker than average. There was nonetheless a certain ambivalence. The fair-complexioned man is handsome; the brown-complexioned man, virile and strong. In medieval England, the tenth mark of a knight of ‘strong courage’ required of him ‘a brown color over the whole body,’ a quality that many English knights called the brown boasted of.

According to the American psychologist Richard Russell, we still use skin color to recognize sexual identity, albeit unconsciously, by picking up two cues: (1) the absolute luminosity of the skin and (2) the contrast between that luminosity and the luminosity of the lips and eyes. Again according to Russell, these cues have shaped the evolution of beauty care in different cultural areas and different eras. There would thus be a cross-cultural tendency for women to seek to lighten the face and to heighten its contrast with the lips and eyes.
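Russell’s two cues lend themselves to simple quantification. As a rough sketch (the luminance values and the Michelson-style contrast formula are my own assumptions for illustration, not Russell’s published measurements):

```python
# Hypothetical illustration of the two cues Russell describes.
# Luminance runs from 0.0 (black) to 1.0 (white); all values are invented.

def michelson_contrast(l_skin, l_feature):
    """Luminance contrast between skin and a facial feature (lips or eyes)."""
    return (l_skin - l_feature) / (l_skin + l_feature)

# Cue 1: absolute skin luminosity (female faces tend to be lighter).
female_skin, male_skin = 0.62, 0.55

# Cue 2: contrast between skin and lips/eyes
# (female faces tend to show greater contrast).
female_lips, male_lips = 0.38, 0.42

assert female_skin > male_skin  # cue 1
assert michelson_contrast(female_skin, female_lips) > \
       michelson_contrast(male_skin, male_lips)  # cue 2
```

In Russell’s studies the measurements came from photographs; here plain numbers stand in for mean luminance over the skin, lip, and eye regions.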

Sexual attraction and other functions

Besides recognition of sexual identity, skin tone seems to be used for other mental tasks. First, there is sexual attraction, which nonetheless involves many other factors (personal history, social and physical context, nature of the sexual relationship, etc.). This task seems to be governed by hormone levels, at least in women. This is what I found in a study in which female participants were presented with six pairs of facial photos, three female and three male. Each pair of faces was identical, except for a slight difference in skin tone. The participant then had to choose the face she preferred. It turned out that preferences varied with the phase of the menstrual cycle, but only for male faces. The darker man was more strongly preferred if the participants were in the first two-thirds of their cycle (high ratio of estrogen to progesterone) than if they were in the last third (low ratio of estrogen to progesterone).

Other researchers have noted this cyclical effect for other secondary sexual characteristics, like face shape and body odor. The more the level of estrogen increases, the more the ‘masculine’ version is preferred.

Finally, skin tone seems to feed certain prejudices. There is indeed a large body of literature on this subject, its evident theme being how children learn racial prejudice. An exception would be the work of two American psychologists, Deborah Best and John Williams, for whom these prejudices are built on a universal tendency to prefer lighter skin, one that appears in the first years of life.

This is what the two researchers found in Japan and in Europe among young children who knew very few people of color. On seeing pictures of people or animals, the children associated light skin with positive qualities (e.g., clean, pretty, nice) and dark skin with negative qualities (e.g., dirty, ugly, nasty). These associations did not seem to be learned. For one thing, their development did not follow a learning curve. For another, there was no correlation with the child’s IQ, as there would be if they were learned.

But should all of this be reduced to ‘light skin = positive qualities’ and ‘dark skin = negative qualities’? When, through a translation error, the children were presented with the word ‘robust,’ they associated this quality, though positive, with dark skin. It seems that Best and Williams unconsciously chose positive qualities with a feminine ring and negative qualities with a masculine ring.

Looking ahead …

Last year, a Chinese research team demonstrated that the human face, as a visual object, is analyzed by a distinct mental mechanism. This was a major project with many participants, but it proved in a few months what had been suspected for decades. With the same methodology, we could also determine whether this mechanism analyzes skin tone. All that is needed is a team ready to do it.

Thirty years ago, such teams existed. Today, there are almost none left. In their day, Best and Williams attracted many collaborators, as well as sources of funding. Then, in the 1990s, their team was dismantled and its members directed to more down-to-earth projects.

It was much the same story elsewhere in North America. To a large degree, the reason was the shift to applied research. The watchword was to make research more ‘cost-effective,’ better ‘targeted,’ and more ‘realistic.’ This conservative discourse was then taken up by other people, who saw in it a way to settle old scores and control the future …

It must be said that biological determinism bothers some people, who see in it a fatalistic ideology, even an adversary of efforts to improve the human condition. In my opinion, this judgment is exaggerated, but it was the one held by most decision makers during the 1980s and 1990s. The result? An almost total purge. At times there were outbursts, but in general it was carried out quietly, by citing the need to manage priorities.

North America has long claimed to lead the free world. This is less and less the case. The social sciences being, in a way, the ‘canary in the mine,’ we seem to be heading toward another model of society, one where difficult questions are avoided in the belief that the troubles that come with them can be avoided too.

Saturday, November 13, 2010

More on high-polygyny societies

Female and male bush spirits, Côte d’Ivoire

The polygyny rate varies considerably among human populations, being highest (20 to 40% of all sexual unions) in the agricultural societies of sub-Saharan Africa and Papua New Guinea.

Such high rates have consequences. The average man will have to wait for a wife until well into adulthood. The average woman will end up sharing her man with another woman. How are these consequences managed in a high-polygyny society? Surely there must be jealousy and frustration?

Many anthropologists, especially armchair ones, will deny there is any. What seems intolerable to us is quite normal to such societies.

This is half-true. Yes, people eventually resign themselves to most situations, especially if they see no alternative. The intolerable can become normal. But it would be false to claim that widespread polygyny creates no more than its fair share of frictions. Indeed, such frictions are much talked about in African oral culture.

This may be seen in a recent compendium of West African folk tales, Contes de l’Afrique de l’Ouest. Many of the tales have polygyny-related themes.

One theme is a woman’s resistance to sharing her husband with another woman. As the editor explains:

These tales are about conflicts that pit wives against the children of co-wives or about conflicts between co-wives. In this tale, the conflict pits the husband against his first wife, who does not want a co-wife. The tale, in keeping with tradition, criticizes the attitude of the woman who wishes to remain her husband’s exclusive partner.

Another theme is the difficulty in finding a wife. There is a “series of tales whose theme is the father who imposes impossible ordeals on his daughter’s suitors.” One tale is The Marriage without Bride Price:

A man had a very beautiful daughter, as beautiful as a genie. She grew up and reached the age of being married off. Her father made his requirements known. Whoever wishes to marry her will pay no bride price. He will simply have to harvest a field of fonio (the man’s field was a hundred kilometers long!). He will also have to weave (the man had yarn that formed a hill the size of our hill of Bagnon). He will furthermore have to gather the fruits of a baobab (the baobab had been bearing fruits for a hundred years, and they had never fallen; it had so many that one could no longer see its branches!). He will in addition have to drink a full cask of gruel. To end it all, he will have to strike one by one the pearls of his daughter’s belts (she wore them up to her armpits!). Whoever would accomplish all this in one day, between sunrise and sunset, would have his daughter.

A bush spirit learns of these outrageous demands and turns itself into a handsome young man. It passes all of the tests and marries the girl. It then resumes its former appearance—a filthy body covered with ticks.

“You’re not the one I love!” says the daughter.
“If you repeat that, I will kill you. However pretty a woman may be, it’s still a man who will marry her. Your father had demanded work that no one can accomplish. He said there was no bride price. But it has always been by bride price that one gets a woman. He thought there was no other man than himself.”

(Meyer 2009, pp. 89-92)

Both types of folk tale reflect the male viewpoint. One type criticizes women who reject co-wives. The other criticizes fathers who want too much in exchange for their daughters. Yet these two criticisms are mutually inconsistent. Polygyny is what drives up the cost of brides on the marriage market. You can’t have one without the other.

High-polygyny society = Failed society?

In my last post, I noted that high-polygyny societies remain simple in large part because intense sexual competition keeps them from evolving into more complex entities. The surplus males stir up endless conflict, if only because war provides them with access to women, i.e., through rape and abduction. There can never be pacification and, therefore, the formation of larger, more advanced societies.

There is a second problem that blocks the evolution of high-polygyny societies. Although widespread polygyny is resented by both men and women, this resentment cannot translate into efforts to limit the practice. Divided attitudes are part of the problem. On the one hand, younger men resent the wife shortage created by older polygynous men. On the other, they themselves hope to become polygynous as they too advance in years. Furthermore, power is generally vested in older men—the very ones most likely to have multiple wives.

Some women likewise gain from polygyny. This is particularly so with second wives who usurp resources built up by older co-wives. Other women accept polygyny as a lesser evil:

Undoubtedly one reason for the widespread acceptance of polygyny is the distaste for the alternative, which in this cultural context is most often not faithful monogamy but legal monogamy paralleled by a series of more or less open affairs. [… The reason is that] men spend less money on their wives than on their mistresses, and that the position of a wife is defined rather than fluid and uncontrollable.

It may seem remarkable that a mistress should impose a greater economic strain than an additional wife, but it should be remembered that Yoruba wives are traditionally largely self-supporting, their major economic claim upon their husbands being on behalf of their children, while a mistress in many cases only remains a mistress for as long as it is economically beneficial for her to do so.
(Ware 1979)

Widespread polygyny is thus an institution that cannot easily reform itself. Yes, it does create profound social frictions that pit men against men and women against women. These frictions, however, cannot coalesce into a movement to oppose polygyny. The potential opposition is too fragmented and too marginal.

Urban Africa and the new mating environment

This is not to say that a high-polygyny society cannot evolve into a low-polygyny one. It can, if the material conditions of life change. We see this happening as Africans move off the land and into cities and towns, where women can less easily feed themselves and their children without assistance (1). Urban African men are less likely to be polygynous because it costs them more to provide for a second wife.

This in turn has shifted the pressure of sexual competition from men to women. It is increasingly the woman who must compete to find a mate. Whereas before she only had to work hard at tending her plot of land, she must now invest in her physical appearance, notably by lengthening her hair and bleaching her skin.

This new mating environment is described by Fokuo (2009) with respect to Ghana. Traditionally, Ghanaian women were married off through family mediation and bride price. This situation has changed since World War II and especially since the 1980s. They now largely find mates on their own, and do so in an increasingly competitive market that pressures them to be as sexually attractive as possible. One result has been the spread of skin bleaching:

By the late 1980’s and 1990’s, skin bleaching was no longer practiced by prostitutes. The popular culture of the 1980s praised lighter skin tones. This praise encouraged the spread of skin bleaching across gender lines and throughout all socio-economic classes of women. (Fokuo 2009)

Interviews with Ghanaian women suggest that this practice is driven by a competitive marriage market:

Sometimes if you really want to marry a particular man, you have to bleach.
(Interview 14, 2006)

Lighter-skinned women tend to attract more men by virtue of their lightness. So if they are at marrying age they get more men coming to court them earlier and quicker than darker-skinned women. (Interview 16, 2006)

Darker-skinned women look at themselves and realize that they need to bleach to be beautiful. Just so men can call them beautiful. (Interview 17, 2006)
(Fokuo 2009)

This shift to ‘bodily commodification,’ together with the decline of matriarchy, is deplored in the literature. Yet the consequences are not entirely negative. Matriarchy meant that African women bore a very disproportionate share of labor in raising their families, especially physical labor. Today, there is a move toward a more equal balance of parental investment between African men and women. And bodily commodification is perhaps a necessary precondition for much of what we call ‘high culture,’ i.e., the pursuit of the aesthetic.


1. The future of polygyny in sub-Saharan Africa is closely linked to female autonomy. This came out in a study of polygynous families in Ibadan, Nigeria:

Perhaps the ultimate women's issue is the extent to which women can lead autonomous existences without men. Trial interviews showed that it was possible to ask the question: "Apart from having children, do women need to have husbands?" without puzzling or antagonizing the respondents (only 2 percent of wives failed to respond to this question in the final survey). Remarkably, in a society where 99 percent of women marry by the age of 40, 47 percent of women answered that women do not need husbands. They explained that women are equal to men, that marriage has many disadvantages and that in many cases women are in a better position on their own. For those women who did consider husbands to play an important role apart from fatherhood, companionship (cited by 16 percent of all wives) was the most valuable function, followed by advice (11 percent) and the value of working together (10 percent). Only 7 percent of wives thought that women needed husbands for economic support, care or protection, the remainder cited sexual satisfaction (2 percent), prestige (2 percent) and nature (3 percent). Understandably, wives in polygynous unions were less likely to stress the companionship role of the husband, but even among women with two or more co-wives, 12 percent argued for this as compared with 21 percent of the monogamously married. Overall, what is most striking is just how dispassionately women view marriage. Childlessness is a terrible tragedy never to be assuaged, but every woman can find a husband of some sort, and hence they are not very highly valued. In discussing the need for a husband, nearly a fifth of the wives mentioned companionship, but no one mentioned love.
(Ware 1979)


Fokuo, J.K. (2009). The lighter side of marriage: Skin bleaching in post-colonial Ghana, Research Review NS, 25(1), 47-66.

Meyer, G. (2009). Contes de l’Afrique de l’Ouest, Paris: Éditions Karthala.

Ware, H. (1979). Polygyny: Women's Views in a Transitional Society, Nigeria 1975, Journal of Marriage and Family, 41, 185-195.

Saturday, November 6, 2010

Extraversion: a tool for mating success

Extraversion is part of the male toolkit for mating success. It is especially useful in societies where a high incidence of polygyny means too many men must compete for too few women.

As a single man, I would spend close to $3,000 a year on dating. And that didn’t include things like buying a sportier-looking car. My behavior was also higher risk, in large part to impress women and show how ‘edgy’ I could be.

The mating game is costly. A British study found that men improve their sexual access to women at the cost of increased risk of hospitalization for accident or illness (Nettle 2005). There is thus a trade-off. If you invest more effort in chasing women, less is left over for taking care of yourself and any children you’ve fathered. You also risk early death.

The trade-off varies from one society to another. For a man, the mating game is less costly if the woman can provide for herself and her children with little male assistance. The cost may even be negative if she also provides for her man.

In such societies, it is in a man’s reproductive interest to mate with as many women as possible. The result? A ‘tragedy of the commons’ where the pursuit of self-interest ends up harming society as a whole. There won’t be enough women to go around, and many men will go a long time without a mate, or never find one at all.

This has long been the case with simple ‘horticultural’ societies in the tropical zone. The women feed themselves and their children with minimal male assistance because they can grow food year-round. And this food production is not appropriated by a State or a land-owning class.

Such societies remain simple in large part because intense sexual competition keeps them from evolving into more complex entities. The surplus males stir up endless conflict, if only because their sole access to women is through warfare, i.e., rape and abduction. There can never be pacification and, therefore, the formation of larger, more advanced societies.

How men and women adapt to high-polygyny environments

A high incidence of polygyny favors men with a different toolkit of physical and mental traits. Some personality traits, for instance, will be more advantageous than others. Such is the finding of a series of studies from rural Senegal, where 48% of men over 40 are polygynous.


Alvergne et al. (2009, 2010a, 2010b) found no correlation among Senegalese men between mating success and most personality traits, i.e., neuroticism, openness, and agreeableness. One trait, however, showed a strong correlation. This was extraversion, defined as “pro-social behavior which reflects sociability, assertiveness, activity, dominance and positive emotions.” Men with above-median extraversion were 40% more likely to have more than one wife than those with below-median extraversion, after controlling for age. Furthermore, this personality trait correlated with higher testosterone levels. Such a linkage suggests that extraversion is part of the male toolkit for mating success in a high-polygyny environment.

Of course, it may be that the relation of cause and effect is indirect. Extraversion helps men accumulate wealth, and wealthy men can afford to take second or third wives. Nonetheless, the correlation remained significant even after the researchers controlled for social class.


Among Senegalese women, reproductive success correlated with neuroticism, i.e., a tendency “to be anxious, depressive, and moody.” Women with above-median neuroticism had 12% more children than those with below-median neuroticism, after controlling for age and marital rank. This personality trait may thus be part of the female toolkit for infant survival in an environment where women are almost solely responsible for parenting.

Could the cause and effect run in the other direction? Perhaps having more children makes a woman more neurotic. Yet neuroticism did not increase with age, whereas the number of children did. Furthermore, the correlation was stronger among rich women, who presumably had less reason to worry about child care.

Inter-individual variation

On average, the Senegalese men were more oriented to polygyny and extraversion, but there was significant variation. Some seemed to be more monogamous and introverted.

Perhaps all human populations display this sort of variation in reproductive strategy, the differences among them being a matter of degree rather than of kind. Indeed, statistical differences can easily develop among human populations because the raw material for gene-culture co-evolution is already available. There is no need to wait for new mutations to come into existence.

Inter-population variation

What do the authors say about differences among human populations? They initially state, “Men in the present study had lower T [testosterone] levels than has been recorded for men from western societies using similar saliva assays” (Alvergne et al. 2009). Later on, in reviewing the literature, they qualify this statement:

It is worthy of note that inter-population differences in T levels were found to be more pronounced for young men (15-30 years) than for older men (45-60 years) (Ellison, 2003). The authors conclude that the differences between populations in patterns of T decline with age result from variations in peak levels during young adulthood and are thus highly dependent on the reproductive physiology of young males.
(Alvergne et al. 2009)

The decline in T levels with age is sharper among polygynous Senegalese men than among monogamous Senegalese men. In short, higher T levels in young adulthood mean lower ones later in life:

When men get older than 50, a reversed pattern is observed, with polygynously married men having lower T levels than monogamously married men.
(Alvergne et al. 2009).

This may account for the authors’ initial statement that T levels were lower in Senegalese men than in Western men. The Senegalese subjects were 38.3 years old on average.

A similar reversal with age has been noted in U.S. studies. African Americans have a clear testosterone advantage over Euro-Americans only from puberty to about 24 years of age. This advantage then shrinks and eventually disappears at some point during the 30s (1). The pattern then seems to reverse at older ages (Ellis & Nyborg 1992; Gapstur et al. 2002; Nyborg 1994, pp. 111-113; Ross et al. 1986; Ross et al. 1992; Tsai et al. 2006; Winters et al. 2001).

No one really knows why. We know that too much testosterone early in life causes long-term harm, e.g., increased risk of prostate cancer. Perhaps there is also degradation of the body’s capacity to produce testosterone.


A hen is an egg’s way to make another egg. I remember being told that as a child. Perhaps we should now say: A man is a sperm’s way to make more sperm.

H/T to Tod.


1. Tsai et al. (2006) found that baseline levels of both total and bioavailable testosterone were significantly higher in African Americans than in Euro-Americans with a median age of 33-34. Ellis and Nyborg (1992) found that African Americans had a slight but still significant (p=0.028) testosterone advantage over Euro-Americans among subjects with a median age of 38. It is difficult to identify the ‘tipping point’ because both studies used pools of subjects with wide age ranges.


Alvergne, A., M. Jokela, C. Faurie, and V. Lummaa. (2010). Personality and testosterone in men from a high-fertility population, Personality and Individual Differences, 49, 840-844.

Alvergne, A., M. Jokela, and V. Lummaa. (2010). Personality and reproductive success in a high-fertility human population, Proceedings of the National Academy of Sciences, 107, 11745-11750.

Alvergne, A., C. Faurie, and M. Raymond. (2009). Variation in testosterone levels and male reproductive effort: Insight from a polygynous human population, Hormones and Behavior, 56, 491-497.

Ellis, L. and H. Nyborg. (1992). Racial/ethnic variations in male testosterone levels: a probable contributor to group differences in health, Steroids, 57, 72-75.

Gapstur, S.M., P.H. Gann, P. Kopp, L. Colangelo, C. Longcope, and K. Liu. (2002). Serum androgen concentrations in young men: A longitudinal analysis of associations with age, obesity, and race. The CARDIA male hormone study. Cancer Epidemiology, Biomarkers & Prevention, 11, 1041-1047.

Nettle, D. (2005). An evolutionary approach to the extraversion continuum. Evolution and Human Behavior, 26, 363-373.

Nyborg, H. (1994). Hormones, Sex, and Society. The Science of Physiology. Westport (Conn.): Praeger.

Ross, R.K., L. Bernstein, R.A. Lobo, H. Shimizu, F.Z. Stanczyk, M.C. Pike, and B.E. Henderson. (1992). 5-alpha-reductase activity and risk of prostate cancer among Japanese and US white and black males. Lancet, 339, 887-889.

Ross, R., L. Bernstein, H. Judd, R. Hanisch, M. Pike, and B. Henderson. (1986). Serum testosterone levels in healthy young black and white men. Journal of the National Cancer Institute, 76, 45-48.

Tsai, C.J., B.A. Cohn, P.M. Cirillo, D. Feldman, F.Z. Stanczyk, and A.S. Whittemore. (2006). Sex steroid hormones in young manhood and the risk of subsequent prostate cancer: a longitudinal study in African-Americans and Caucasians (United States), Cancer Causes Control, 17, 1237-1244.

Winters, S.J., A. Brufsky, J. Weissfeld, D.L. Trump, M.A. Dyky, and V. Hadeed. (2001). Testosterone, sex hormone-binding globulin, and body composition in young adult African American and Caucasian men. Metabolism, 50, 1242-1247.

Saturday, October 30, 2010

Did human evolution accelerate?

Modern humans changed little when they initially spread out of Africa and into the Middle East. Real change occurred farther north, when they entered seasonally varying environments that differed far more from their ancestral homeland, even in summer.

Three years ago, a research team led by John Hawks found that the rate of genetic change accelerated once ancestral humans had spread from Africa to the other continents. Over the past 40,000 years, natural selection seems to have altered at least 7% of our genome. And this process speeded up even more as agriculture replaced hunting and gathering over the past 10,000 years. The rate of genetic change then increased more than a hundred-fold (Hawks et al. 2007).

This finding, however, seems to be at odds with a recent Scientific American article by Jonathan Pritchard:

As early Homo sapiens spread out from Africa starting around 60,000 years ago, they encountered environmental challenges that they could not overcome with prehistoric technology.

Many scientists thus expected that surveys of our genomes would reveal considerable evidence of novel genetic mutations that have recently spread quickly through different populations by natural selection […]

But it turns out that although the genome contains some examples of very strong, rapid natural selection, most of the detectable natural selection appears to have occurred at a far slower pace than researchers had envisioned.
(Pritchard 2010)

Is there a fundamental disagreement here between Jonathan Pritchard and John Hawks? Perhaps not. Pritchard doesn’t actually deny that genetic change accelerated in ancestral humans. He simply states that its pace has been far slower than the one envisioned by “researchers.”

Curiously, he makes no reference to John Hawks. This is all the more curious because Hawks and his colleagues are virtually the only ones who match the description of the unnamed “many scientists” and “researchers.” Until three years ago, and even today, the conventional view has been that cultural evolution replaced genetic evolution in our species. Culture provided us with faster ways to adapt. Instead of changing our genes, we changed our environment by means of new technologies, modes of subsistence, forms of shelter, and so on.

That’s what I learned as an undergrad. If Pritchard wishes to argue against Hawks’ position, why not mention him by name? Why create a fictitious ‘conventional view’ that needs to be put in its place?

I see other weaknesses in this Scientific American article, particularly in the methodology behind its conclusion that human genetic evolution has been relatively slow. One is recognized by Pritchard himself. What matters is not the degree of change at a single gene, but rather the total change at all genes that influence a single trait:

A series of papers published in 2008, for example, identified more than 50 different genes that influence human height, and certainly many more remain to be found. For each of these, one allele increases average height by just three to five millimeters compared with the other allele.

When natural selection targets human height […] it may operate in large part by tweaking the allele frequencies of hundreds of different genes. If the “short” version of every height gene became just 10 percent more common, then most people in the population would quickly come to have more “short” alleles, and the population would be shorter overall.
(Pritchard 2010)
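The arithmetic behind Pritchard’s point is worth making explicit. The sketch below uses illustrative numbers only (he cites 50+ known height loci with per-allele effects of 3-5 mm; the locus count and effect size here are assumptions for the sake of the example), and shows how individually tiny frequency shifts sum to a substantial change in the population mean:

```python
# Polygenic shift: many small allele-frequency changes add up.
# Illustrative values, loosely based on the figures in the quote above.
n_loci = 100        # Pritchard cites 50+ known height loci; assume 100
effect_mm = 4.0     # per-allele effect in mm, within his 3-5 mm range
delta_p = 0.10      # "short" allele becomes 10% more common at each locus

# Diploids carry two alleles per locus, so the expected change in mean
# height is 2 * delta_p * effect at each locus, summed over all loci.
shift_mm = n_loci * 2 * delta_p * effect_mm
print(shift_mm)  # 80 mm: a large shift, yet no single locus moved much
```

No single locus would stand out in a genome scan for strong selection, which is exactly why this mode of change is easy to miss.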

Another weakness is the impossibility of measuring the rate of genetic change directly:

It would be great if in our efforts to understand recent human evolution, we could obtain DNA samples from ancient remains and actually track the changes of favored alleles over time. (Pritchard 2010)

Because this approach is still in its infancy, Pritchard falls back on assumptions about when and where human populations have come into being. He assumes that Homo sapiens began to spread out of Africa some 60,000 years ago and then split into Europeans and East Asians some 20,000 to 30,000 years ago. These are the baselines he uses to calculate the rate of genetic change.

But these are high-end estimates. The ‘Out of Africa’ event is probably closer to 45,000 BP (1) and the best dating for the European/East Asian split is 20,000 BP (Laval et al. 2010) (2). Moreover, natural selection has not changed non-African humans at a constant rate since their ancestors left Africa. Those who remained within the tropical zone, such as Australian Aborigines, Papua-New Guineans, and Andaman Islanders, have changed surprisingly little. There has been much more evolution among those who spread out of the tropical zone and into temperate and arctic environments, beginning around 30,000 BP. Evolutionarily speaking, the key event was not when humans began to spread out of Africa. It was when they began to spread out of the Tropics (3).

So for many if not most traits, Pritchard is underestimating the rate of genetic change by a factor of two. Another source of error is his unspoken assumption that genetic change in Eurasia has never flowed back into Africa. By using Africa as a baseline for genetic change, he is excluding new Eurasian alleles that have displaced older ones even in Africa. Evidently, this error would lead to underestimation of the rate of genetic change both within and outside Africa.
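The “factor of two” follows directly from how selection intensity is inferred. Under the standard deterministic approximation for a co-dominant allele, the change in log-odds of allele frequency is roughly the selection coefficient times the number of generations, so halving the assumed divergence time doubles the inferred selection coefficient. A minimal sketch (the allele frequencies and generation length are hypothetical, chosen only to illustrate the relationship):

```python
import math

def selection_coefficient(p0, p1, generations):
    """Approximate selection coefficient for a co-dominant allele that
    rises from frequency p0 to p1 over a given number of generations,
    using the deterministic logit approximation:
        logit(p1) - logit(p0) ~ s * t
    """
    logit = lambda p: math.log(p / (1 - p))
    return (logit(p1) - logit(p0)) / generations

# Hypothetical allele rising from 1% to 50%, with 25-year generations.
# Compare a 40,000 BP baseline against a 20,000 BP baseline.
s_slow = selection_coefficient(0.01, 0.50, 40_000 / 25)
s_fast = selection_coefficient(0.01, 0.50, 20_000 / 25)
# Halving the assumed time span exactly doubles the inferred s.
```

The same frequency data thus imply selection twice as strong under the 20,000 BP baseline as under the 40,000 BP one.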


1. Uranium dating suggests that modern humans entered the Middle East c. 46,500 BP (Schwarcz et al. 1979).

2. Maps of human prehistory typically show two lines of advance out of Africa: one turning left and into Europe and another turning right and into South Asia, Southeast Asia, and East Asia. The second line of advance did exist, but was not ancestral to present-day East Asians. It was instead ancestral to relic groups like the Andaman Islanders and the Semang, as well as Papua-New Guineans and Australian Aborigines.

East Asians, like the Inuit and Amerindians, have their origins in North Asia, as seen by their ‘Arctic’ physiognomy. These early North Asians in turn came from early Europeans, specifically the reindeer-hunting nomads who spread eastward through the steppe-tundra belt of northern Eurasia. In other words, Europeans and East Asians are not siblings who parted company in the Middle East some 45,000 years ago. The latter are instead ‘offspring’ of the former, the two groups having become reproductively isolated from each other at the height of the last ice age, c. 20,000 BP.

3. In sum, the ‘Out of Africa’ event did not occur when modern humans first ventured across the present-day Suez Canal. This is an arbitrary line based on current geopolitical realities. Ecologically speaking, the Middle East is part of Africa. Real adaptive change did not begin until modern humans had spread farther north and into environments with wide seasonal variations in temperature, vegetation, wildlife, and other resources.


Hawks, J., E.T. Wang, G.M. Cochran, H.C. Harpending, and R.K. Moyzis. (2007). Recent acceleration of human adaptive evolution. Proceedings of the National Academy of Sciences (USA), 104, 20753-20758.

Laval, G., E. Patin, L.B. Barreiro, and L. Quintana-Murci. (2010). Formulating a historical and demographic model of recent human evolution based on resequencing data from noncoding regions, PLoS ONE, 5(4), e10284.

Pritchard, J.K. (2010). How we are evolving, Scientific American, October, pp. 41-47.

Schwarcz, H.P., B. Blackwell, P. Goldberg, and A.E. Marks. (1979). Uranium series dating of travertine from archaeological sites, Nahal Zin, Israel, Nature, 277, 558-560.

Saturday, October 23, 2010

Ich bin Mittel-Ostländer?

Spread of farming in Europe. Der Spiegel

New research has revealed that agriculture came to Europe amid a wave of immigration from the Middle East during the Neolithic period. The newcomers won out over the locals because of their sophisticated culture, mastery of agriculture -- and their miracle food, milk.
(Schulz 2010)

This recent Der Spiegel article has stirred up comment in the blogosphere (Hawks 2010, Khan 2010, Sailer 2010). It argues that Europeans do not descend from the reindeer hunters who once roamed the continent during the last ice age. Nor do they descend from the more recent hunter-fisher-gatherers of the Mesolithic. All of those people went extinct. They were replaced by the real ancestors of present-day Europeans—farmers who began to spread out of the Middle East only 9,000 years ago and who reached northern Europe two millennia later—almost at the dawn of history!

If you have any doubts, this finding is based on “a barrage of articles in professional journals like Nature and BMC Evolutionary Biology, [whose authors] have turned many of the prevailing views upside down over the course of the last three years.”

There is nothing wrong with the above journal articles. A lot is wrong, however, with the Der Spiegel article. It neatly confuses two separate findings, only one of which supports the ‘population replacement’ hypothesis.

1. Origins of the European allele for adult digestion of lactose

Unlike most humans, Europeans can digest milk sugar (lactose) as adults. This is made possible by an allele that allows adults to produce an enzyme, lactase, that breaks down milk sugar. So where did this allele come from? Did it originate in Europe or in some place outside Europe?

According to Der Spiegel:

But where did the first milk drinker live? Which early man was the first to feast on cow's milk without suffering the consequences?

In a bid to solve the mystery, molecular biologists have sawed into and analyzed countless Neolithic bones. The breakthrough came last year, when scientists discovered that the first milk drinkers lived in the territory of present-day Austria, Hungary and Slovakia.

The reader is left with the impression that the new allele had been brought to Europe from the Middle East. That impression is false. In fact, the above molecular biologists found no trace of this allele in DNA retrieved from the earliest European farmers. It apparently arose later as a mutation and then spread among Europeans through natural selection.

[…] the absence of the 13.910*T allele in our Neolithic samples indicates that the early farmers in Europe were not yet adapted to the consumption of unprocessed milk. Dairying is unlikely to have spread uniformly over Europe, and the use of milk in the Early Neolithic may have been rare. Although our data are consistent with strong selection for LP [lactose persistence] beginning with the introduction of cattle to Europe ≈8800 B.P., it is unlikely that fresh milk consumption was widespread in Europe before frequencies of the 13.910*T allele had risen appreciably during the millennia after the onset of farming.
(Burger et al. 2007)

Consequently, this area of research argues against the Der Spiegel article. Europeans did change genetically, but the change occurred through natural selection, not through population replacement (1).

2. Genetic divide between late hunter-fisher-gatherers and early farmers

Researchers have also retrieved mitochondrial DNA from Europe’s late hunter-fisher-gatherers and early farmers. Comparison shows a sharp genetic divide between the two groups. In particular, the first group had high incidences of haplogroup U—a genetic lineage that was rare among early farmers and still is among present-day Europeans.

The past year, however, has brought evidence of genetic continuity. After studying 92 Danish human remains that range in time from the Mesolithic to the Middle Ages, Melchior et al. (2010) found that high incidences of haplogroup U persisted among the earliest farmers and declined only in later groups.

Thus, the sharp genetic divide was not between late hunter-fisher-gatherers and early farmers. It seems to have been between the earliest farmers and groups that had been farming for at least a millennium or so. Once again, the evidence points to natural selection, and not to population replacement.

But isn’t mtDNA unresponsive to natural selection? That’s what I and others used to think. There is growing evidence, however, that some mtDNA loci can change in response to natural selection. In particular, some haplogroups seem to reflect a tradeoff between thermogenesis and ATP synthesis (Balloux et al. 2009). If true, the decline of U-type haplogroups among early farmers may reflect the differences in physical activity (leading to overheating or underheating) that exist between hunting/fishing/gathering and farming.


The jury is still out. The evidence, however, seems to be tilting toward natural selection and away from population replacement as the best way to explain these genetic changes among ancestral Europeans. In short, the Der Spiegel article is bad science.

And bad journalism.


1. Towards the end, the Der Spiegel article seems to acknowledge that the allele for adult digestion of lactose must have arisen after farming had spread to Europe:

Some [farmers] had genetic mutations that enabled them to drink milk without getting sick. They were the true progenitors of the movement.

As a result of "accelerated evolution," says Burger, lactose tolerance was selected for on a large scale within the population in the space of about 100 generations. Europe became the land of the eternal infant as people began drinking milk their whole lives.

(Schulz 2010)


Balloux, F., L.J. Handley, T. Jombart, H. Liu, and A. Manica. (2009). Climate shaped the worldwide distribution of human mitochondrial DNA sequence variation. Proceedings of the Royal Society B: Biological Sciences, 276(1672), 3447-3455.

Burger, J., M. Kirchner, B. Bramanti, W. Haak, and M.G. Thomas. (2007). Absence of the lactase-persistence-associated allele in early Neolithic Europeans, Proceedings of the National Academy of Sciences, 104(10), 3736-3741.

Hawks, J. (2010). Neolithic milk fog, John Hawks weblog, October 17.

Khan, R. (2010). Völkerwanderung back with a vengeance, Discover, October 17.

Melchior, L., N. Lynnerup, H.R. Siegismund, T. Kivisild, and J. Dissing. (2010). Genetic diversity among ancient Nordic populations, PLoS ONE, 5(7), e11898.

Sailer, S. (2010). Are Europeans all Middle Easterners? October 17.

Schulz, M. (2010). How Middle Eastern Milk Drinkers Conquered Europe, Spiegel Online International, October 15.

Saturday, October 16, 2010

The evolution of Cavalli-Sforza. Part VII

Bird in a gilded cage

Cavalli-Sforza’s last big project was the publication of The History and Geography of Human Genes, which came out in 1994. Since then, he has kept himself busy tying up loose ends.

Advancing age is only one reason why he has lowered his sights. In fact, he had originally planned to work on two major projects until the end of his life. One was on gene-culture co-evolution. It would have involved studying the Inuit to see how their hunting lifestyle had selected for a keen sense of spatial orientation, specifically the ability to disembed an object from a larger visual landscape, to store it as a spatio-temporal model in the mind, and then to convert it back into a real-world object (e.g., a soapstone carving). Adopted and non-adopted Inuit would have been studied to find out how much of this ability was innate and how much learned. The project would have then served as a springboard for comparative studies of mental traits in other hunter-gatherer groups and, later, in agricultural populations.

That project was suddenly aborted, for reasons that remain nebulous. Its place was then taken by the Human Genome Diversity Project. This would have been a continuation of work that Cavalli-Sforza had been pursuing off and on since the mid-1960s, the main aim being to reconstruct how ancestral humans had split up as they spread out of Africa to the other continents. That project too came to a sudden end—in the face of violent accusations of racism. Funding dried up and researchers shied away. Today, research is still ongoing unofficially, the unspoken premise being that an unofficial project is less likely to catch flak than an official one. And the less Cavalli-Sforza has to do with it, the better.

So what should he do in his twilight years? One possibility would be a second edition of The History and Geography of Human Genes. This massive tome is based on data collected up to 1986, so it is now a quarter of a century out of date (Cavalli-Sforza & Cavalli-Sforza 2008, p. 281). An update is sorely needed and Razib Khan (2010) has shown how some of the gene charts could be redone. Such reediting would not be difficult. Most of the work could be delegated to other people, with Cavalli-Sforza keeping overall editorial control. His opus would thus gain a new lease on life and earn itself a place in university classrooms for another quarter-century. This is something he can and should do.

Yet something tells me he won’t. He seems content, or perhaps obliged, to rest on his laurels … and be buried with them. Until then, he will certainly not suffer from lack of recognition. His eventual departure from life will be met with eulogies of praise, such as befits a great man of science, and probably a state funeral in his home country.

And then his works will fade into obscurity. THGHG will be the first to go. Ironically, his earliest works will retain attention the longest. In twenty years, he will be remembered as we now ‘remember’ great anthropologists like William Sumner and Lewis Morgan.

But who knows? These are the shadows of what might be, not what must be. Cavalli-Sforza may still surprise us. Let me give him the last word of this unauthorized biography:

Why does one fear the unknown, the future, that which is new? Some stability is necessary for everyone’s life and well-being. It is normal to fear sudden changes that could upset this equilibrium.

[…] In all the cases where we feel powerless before the unknown, we should simply keep our eyes wide open and face the situation, if possible, with a certain fatalism, as befits the old saying, “whoever will live will see.”

(Cavalli-Sforza & Cavalli-Sforza 2008, pp. 326-327)


Cavalli-Sforza, L.L. and F. Cavalli-Sforza. (2008). La génétique des populations : histoire d'une découverte, Paris: Odile Jacob. (translation of Perché la scienza : L'avventura di un ricercatore).

Cavalli-Sforza, L.L., P. Menozzi, and A. Piazza. (1994). The History and Geography of Human Genes, Princeton: Princeton University Press.

Khan, R. (2010). A generation of human genetics & genomics. Discover Magazine. October 8.