Saturday, August 25, 2012

On global inequality


Western Europe began to overtake the rest of the world long before it established colonial empires in Africa, Asia, and the Americas (source)


In Why Nations Fail, economist Daron Acemoglu sees global inequality as a legacy of colonialism. Wherever European settlers were numerous enough, they formed inclusive, democratic societies that aimed for sustainable growth. Wherever they were few in number, they created exclusive, undemocratic societies that sought to extract resources and do little else.

Colonialism thus caused a “reversal of fortunes”:

Among the countries colonized by Europeans, those that were more prosperous before colonization ended up as relatively less prosperous today. This is prima facie evidence that, at least in the sample that makes up almost half of the countries in the world, geographic factors cannot account—while institutional ones can—for differences in prosperity as these factors haven’t changed, while fortunes have. (Acemoglu & Robinson, 2012b)

Acemoglu is half-right. These differences in fortune are at least partly due to the different ways human societies organize themselves, i.e., their “institutions.” But is European colonialism responsible? Would non-European societies have continued onward and upward had it not been for the great European expansion that began around 1500 AD?

These questions have caught the interest of another economist, Michael Cembalest, who has charted per capita GDP in different world regions over the past two thousand years (Thompson, 2012). His conclusion? Europe, and Western Europe in particular, had already overtaken the rest of the world by the year 1500. The relative poverty of the non-European world cannot therefore be due to European colonialism. Instead, the arrow of causality seems to run in the other direction. Europe was able to expand into Asia, Africa, and the Americas because it already had a lead over those regions socially, economically, and technologically.

Europe’s rivals

This conclusion is all the more certain if we look at the two regions that still rivaled Western Europe in 1500. One was the Muslim world, centered on the Ottoman Empire. The other was East Asia, with China as its center. Neither region would suffer European encroachment and colonialism until much later, essentially no earlier than the late 1700s. But by then Western Europe had an even more commanding lead.

Acemoglu is right in saying that failed states suffer from ruling classes that seek to plunder wealth rather than create wealth. He is wrong, however, in seeing such rapaciousness as a perverse result of European colonialism. This mentality actually used to be normal among elites throughout the world, including those of Western Europe.

All states originate in warrior bands that seize power with a view to plunder and self-aggrandizement. In so doing, they seek to keep the plundering to themselves. Rival bands are outlawed, and the use of violence greatly limited. The State thus becomes a means for pacifying society and providing an environment that favors people who create wealth rather than steal wealth (Frost, 2010).

In time, this new economic environment leads to a new cultural environment. The violent male goes from hero to zero. Instead of being a desired sexual partner and a role model for younger males, he becomes a despised criminal to be tracked down and killed. The role model now becomes the industrious family provider. This cultural evolution is described by Gregory Clark with respect to England from the 11th century onward. With the pacification of society and the State’s monopoly on violence, successful individuals were now those who would settle disputes peacefully and display thrift, foresight, and sobriety—what would become known as middle-class values (Clark, 2007; Clark, 2009).

This process can lead to steady economic and material advancement. But it can also abort. There is no reason to assume that Europe’s rivals would have kept on going onward and upward. In fact, they were developing serious internal contradictions long before Europeans were able to exploit these weaknesses for their own benefit.

In some areas, like West Africa, this cultural evolution stalled during the early stages of State formation, specifically the one where the State imposes a monopoly on the use of violence. Lasting internal peace was impossible because of the large surplus of single males, itself due to a high polygyny rate. For these excess males, war of any sort was often their only means of securing women and becoming real men (van den Berghe, 1979, p. 65).

Ottoman Empire

Elsewhere, Clark’s model of cultural evolution would abort at a later stage. This was the case with the Ottoman Empire, which by the mid-1500s encompassed the Middle East, North Africa, southeastern Europe, and much of Ukraine and southern Russia. Yet pacification within this territory remained incomplete. Even at the height of its power, the countryside was often controlled by warlords, called ayans, who commanded their own private armies. Typically, the Ottoman state would try to co-opt the most powerful ones by appointing them to official posts, or it would play one off against another. And typically the results were disastrous. Furthermore, since the ayans were Muslim, effective action against them often meant arming the empire’s Christian subjects, but such action offended the sensibilities of Ottoman leaders (Jelavich & Jelavich, 1977, pp. 16-17, 28).

Thus, societal pacification was much less complete in the Ottoman Empire than in Western Europe. It was also largely confined to the empire’s non-Muslim subjects, who could not serve in the army and were normally forbidden to bear arms (Jelavich & Jelavich, 1977, pp. 5-6). They were thus the ones who would experience the kind of economic and demographic dynamism that was already making Western Europe so successful. Trade and industry became dominated by Greeks, Armenians, and Sephardic Jews. Slavic nations like the Bulgarians were able to rise from a position of subservience to one of relative dominance:

Thereafter [after 1829] Bulgaria became the chief supplier of food and of textiles for uniforms, blankets, and other military needs. From 1830 until 1878 the country enjoyed the market of the entire empire. It traded its agricultural products, including grains, honey, wax, silk, cattle, wine, and also manufactured goods such as pig iron, leather items, iron and metal work, and shoes and clothing. An active cottage industry specializing in woolen cloth developed in the Balkan Mountains (Jelavich & Jelavich, 1977, p. 129)

In contrast, Muslim Turks avoided trade and tried to become rentiers of one sort or another, e.g., landowners of estates worked by tenant farmers, soldiers in the pay of the army or local warlords, or civil servants:

It was estimated that half the people of Istanbul lived off the state in some way. Many, both in Istanbul and in the provincial capitals, became unsalaried hangers-on of pashas, hoping that position or graft would come their way. The crowd of relatives and parasites in the anterooms of every high official was one of the great curses of Ottoman administration, leading to favoritism, inefficiency, and bribery (Jelavich & Jelavich, 1977, p. 111)

The Muslim population thus missed out on opportunities in the expanding market economy and, hence, on opportunities for demographic growth:

By the end of the eighteenth century the Muslim population had entered a period of comparative economic and moral decline. […] This process of decay was clearly illustrated in the eighteenth century in the changing demography of the Balkan towns where Christian and national elements formed an increasingly larger proportion of the population (Jelavich & Jelavich, 1977, pp. 6-7)

We see here the same sort of change that Gregory Clark has described with respect to England: a steady expansion of what would become the middle class at the expense of less productive classes (Clark, 2007; Clark, 2009). During this early phase of capitalist development, early marriage and childbearing were the easiest way for a successful farmer or artisan to expand his workforce. Such family lineages created an ever larger middle class while, through downward mobility, steadily replacing the lower classes. By 1800, they formed the bulk of the English population.

In England, this process of population replacement strengthened the country. In the Ottoman Empire, the consequences were different. As the Christian subject peoples grew in numbers and relative wealth, they increasingly saw secession as both feasible and desirable. What other option was there? The Ottoman Empire would never come under their control, and even the possibility of joint Muslim-Christian rule seemed unrealistic. The empire was, by definition, a Muslim state.

China

What about China? Here we see many similarities with Western Europe: a steady demographic expansion of middle-class lineages at the expense of the lower classes (Unz, 1980). Yet this process failed to translate into middle-class domination of Chinese society. Nor did it form a basis for sustained economic growth and technological progress. Indeed, from the 11th century onward, China entered a period of stagnation and relative decline.

One reason was that power fell into the hands of foreign elites: first the Mongols and then the Manchus. China’s population actually declined during the 12th and 13th centuries as a result of Mongol depredations. But the most lasting damage was done by the Manchus, who came to see their own subjects as potential enemies:

Some Chinese writers have argued that the conquest by the Manchus in 1644 (the Qing Dynasty) was a major setback for China. Thanks to inventions like paper and the printing press, China was arguably on the path toward capitalism. But under the Manchus, the amount of cultivated land fell, gunpowder weapons and naval technology was lost almost completely, and scientific thought was suppressed.

Deepak Lal follows this line of reasoning in greater depth. After 1433 the Chinese abandoned their navy and began to restrict foreign trade and contacts. The shipbuilding and sea-going skills thereafter degenerated. And China remained in relative isolation until the 19th century. This closure to the outside world amounted to a closing of the Chinese mind, comparable to that in Japan following its adoption of the policy sakoku under the Tokugawa. (The Great Divergence, 2009)

Chinese merchants were also impeded by the slow development of a true market economy. Economic transactions generally occurred face-to-face between buyer and seller in a specific location, either a shop or a marketplace. That was the “market.” The underlying reason seems to have been a low level of trust—a problem that still exists in China—which may in turn have reflected an incomplete pacification of Chinese society, as seen in the prevalence of banditry well into the 20th century. There was thus a strong tendency to favor close friends and kinfolk, while feeling indifferent to anyone beyond this charmed circle. In the absence of a high-trust environment, China became an economy of markets but not a market economy. Economic activity tended to be confined to specific places and specific points in time.

This lack of trust beyond close friends and kin might have reflected the family-centeredness of Chinese religions, particularly Confucianism, and their correspondingly weak and passive role in society beyond the family level. In European societies, religion assumed a more active and binding form, to the point that it could even dictate how the elites should behave (Fukuyama, 2011).

Conclusion

Circa 1500, the European world began a great expansion that would extend its domination over most of the planet. This expansion was far from fortuitous and seems to have resulted from internal processes that had been under way for some time within Europe itself. By 1500, there remained only two other civilizations of comparable strength: the Ottoman Empire and the Chinese Empire. Both, however, suffered from internal contradictions that prevented a similar sustained expansion. These contradictions were even more evident three centuries later when both empires began to face penetration by European powers on their own territory.

References

Acemoglu, D. & J. Robinson. (2012a). Why Nations Fail: The Origins of Power, Prosperity, and Poverty, Random House.

Acemoglu, D. & J.A. Robinson. (2012b). ‘Why Nations Fail’, comment on review by J. Diamond, August 16, The New York Review of Books, http://www.nybooks.com/articles/archives/2012/aug/16/why-nations-fail/

Clark, G. (2009). The Domestication of Man: The Social Implications of Darwin, ArtefaCToS, 2, 64-80 http://campus.usal.es/~revistas_trabajo/index.php/artefactos/article/view/5427

Clark, G. (2007). A Farewell to Alms. A Brief Economic History of the World, Princeton University Press, Princeton and Oxford.

Frost, P. (2010). The Roman State and genetic pacification, Evolutionary Psychology, 8(3), 376-389 http://www.epjournal.net/wp-content/uploads/EP08376389.pdf

Fukuyama, F. (2011). The Origins of Political Order: From Prehuman Times to the French Revolution, Farrar, Straus and Giroux.

Jelavich, C. & B. Jelavich. (1977). The Establishment of the Balkan National States, 1804-1920, Seattle: University of Washington Press.

Lal, D. (2001). Unintended Consequences: The Impact of Factor Endowments, Culture, and Politics on Long-Run Economic Performance, MIT Press.

The Great Divergence (2009), MrGlobalization, August 22 http://www.mrglobalization.com/change-and-innovation/187-global-development-and-the-great-divergence

Thompson, D. (2012). The Economic History of the Last 2000 Years: Part II, The Atlantic, June 20, http://www.theatlantic.com/business/archive/2012/06/the-economic-history-of-the-world-after-jesus-in-4-slides/258762/

Unz, R. (1980). Preliminary notes on the possible sociobiological implications of the rural Chinese political economy, unpublished paper. http://www.ronunz.org/wp-content/uploads/2012/05/ChineseIntelligence.pdf

van den Berghe, P.L. (1979). Human Family Systems. An Evolutionary View. New York: Elsevier.

Saturday, August 18, 2012

He who pays the piper ...


You want to publish a book about HBD? You’ll have to find a wealthy patron.

Debate is continuing over Ron Unz’s article on Race, IQ, and Wealth. In a favorable review at Living Anthropologically, the following comment caught my eye:

Unz has money, and he uses it to publish and promote. Unz apparently gave out at least $500,000 to Gregory Cochran, co-author with Harpending on The 10,000 Year Explosion: How Civilization Accelerated Human Evolution and with John Hawks on Recent acceleration of human adaptive evolution.

Raised eyebrows ... And those people aren’t the only ones. Ron’s 2009 tax return mentions donations to Steve Sailer and Razib Khan, among others (Unz, 2009).

Once upon a time academics got grants or sabbaticals for book writing. But that option is becoming less and less feasible if you want to write about human biodiversity. Getting your manuscript published is even more problematic. In the past, you could submit it to a publishing house and they would have it assessed by an expert in the field. Today, that system is almost extinct, at least in North America. You must go through a ‘literary agent’ who will pitch your manuscript at wine and cheese parties. It’s a system that is highly prone to abuse: schmoozing, petty bribery, and sleeping with the right people.

There are only two other options: publish on a shoestring budget or find a wealthy patron. Like Ron Unz. But what do you do when your patron starts promoting ideas you feel are wrong? Do you say nothing? Or do you bite the hand that feeds you?

The act of giving money is not wholly altruistic. Implicitly, it can become a form of control. The receiver thinks twice before doing anything that might offend the giver. And the giver may drop hints …

Other happenings

- Emily Sohn has recently interviewed me for an article in Discovery News. See here.

- A journal article will soon come out on the relationship between blue eye color and feminization of facial structure. This finding is consistent with other evidence of sex linkage for non-brown eyes and non-black hair. Indeed, a twin study has shown that hair is, on average, lighter-colored in women than in men, with red hair being markedly more frequent in females. Women also show greater variation in hair color (Shekar et al., 2008). All of this, in turn, is consistent with a selection pressure, possibly sexual selection, that has acted more strongly on European women than on European men to diversify the palette of human hair colors.

References

Anon. (2012). Race IQ – Game Over: It was always all about wealth, August 9, Living Anthropologically http://www.livinganthropologically.com/2012/08/09/race-iq-game-over/

Shekar, S.N., D.L. Duffy, T. Frudakis, G.W. Montgomery, M.R. James, R.A. Sturm, & N.G. Martin. (2008). Spectrophotometric methods for quantifying pigmentation in human hair—Influence of MC1R genotype and environment. Photochemistry and Photobiology, 84, 719–726.

Sohn, E. (2012). Why do so many women go blonde? August 14, Discovery News http://news.discovery.com/human/miley-cyrus-blonde-hair-120814.html

Unz, R. (2012). Race, IQ, and Wealth, The American Conservative, July 18. http://www.theamericanconservative.com/articles/race-iq-and-wealth/

Unz, R. (2009). Return of Private Foundation (IRS) http://irs990.charityblossom.org/990PF/200912/207181582.pdf

Saturday, August 11, 2012

What you don't know can hurt you


In 1972, the U.S. passed the Clean Water Act over President Richard Nixon’s veto. Did this act also end an era of unusually high estrogen levels in the environment? (cartoon source)

There has been much concern over the presence of “environmental estrogens” in our drinking water and elsewhere in the environment. These are man-made chemicals, like DDT, PCBs, and dioxins, that mimic the effect of natural estrogens. Among other things, they’re blamed for declining sperm counts and rising male infertility.

Yet estrogen also enters our environment from a source that excites much less concern. This is the estrogen that women excrete every day in their urine. Shouldn’t that source also be cause for worry?

At first thought, no. People have been urinating for a very long time. And other animals for even longer. During that time, microorganisms have evolved to break down and feed on whatever is present in urine. Larger organisms have likewise had plenty of time to adapt to this aspect of their environment. Urine is so ubiquitous and unchanging that it could not possibly pose a danger. Or could it?

Actually, two things have changed in recent times. One is that humans have become much more numerous, with the result that much more urine is being discharged into the environment. Another is the way it is discharged.

Before the late 19th century, urine entered the environment via privies, cesspools, and ditch sewers (Rockefeller, 1996). It was thus discharged into a warm stagnant medium rich in organic matter—ideal conditions for rapid breakdown of the estrogen molecule by nitrifying bacteria (Vader et al., 2000). These same conditions, however, increasingly became a threat to public health, particularly in the ever larger and more numerous urban centers.

And so a new disposal system was developed. Human waste was now expedited via sewers to a central facility where the liquid component would be separated and rapidly discharged into the nearest cold body of water—which often doubled as the city’s source of drinking water. It was a perfect system for discharging urinary estrogen into the environment with as little biodegradation as possible … and then bringing it back into the human organism. As for urinary androgen, it was also present in wastewater but at much lower levels because of its lower solubility in water (Tabak et al., 1981).

That system survived until the late 1960s and early 1970s, when concern about pollution brought an upgrading of almost all sewage treatment facilities. The number of Americans whose wastewater went untreated peaked at 70 million in 1960 and fell to 2 million after passage of the Clean Water Act in 1972 (Copeland, 1993; US Council on Environment Quality, 1984). Primary treatment removes 35-55% of estrogen from wastewater, and this proportion rises to 50-70% for secondary treatment (Tabak et al., 1981). Today, tertiary treatment removes 90% of all natural and synthetic estrogenic compounds (LeQuire, 1999).
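
As a rough illustration of what these figures imply, here is a minimal sketch in Python that compares relative estrogen loads under the removal rates quoted above. The midpoint removal values, the arbitrary per-capita excretion unit, and the function name are illustrative assumptions, not measured parameters.

# Illustrative sketch: relative urinary-estrogen loads under different
# treatment regimes, using the removal rates quoted in the text.
# Per-capita excretion is an arbitrary unit; only the ratios matter.

REMOVAL = {
    "none": 0.00,       # untreated discharge
    "primary": 0.45,    # midpoint of the 35-55% range (Tabak et al., 1981)
    "secondary": 0.60,  # midpoint of the 50-70% range (Tabak et al., 1981)
    "tertiary": 0.90,   # ~90% removal (LeQuire, 1999)
}

def discharged_load(population, per_capita_excretion, treatment):
    """Estrogen load that passes through a given level of treatment."""
    return population * per_capita_excretion * (1.0 - REMOVAL[treatment])

# Fraction of the load that survives each treatment level
for level, removed in REMOVAL.items():
    print(f"{level:9s}: {100 * (1 - removed):.0f}% of the load passes through")

# Relative untreated load: 70 million people in 1960 vs. 2 million after 1972
before = discharged_load(70_000_000, 1.0, "none")
after = discharged_load(2_000_000, 1.0, "none")
print(f"Untreated load, 1960 vs. post-1972: roughly {before / after:.0f}x")

The point of the exercise is only that the pre-1972 regime released some 35 times more untreated wastewater than the post-Clean Water Act regime, before any removal by treatment is even considered.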

But what about the 100-year period when most wastewater went untreated? During that time, the main sources of drinking water must have been highly contaminated with estrogen. What were the effects? The most obvious ones would have been a decline in sperm counts and a rise in male infertility. But were there others? And should we soon see a reversal of these trends?

I tried to publish a paper on this subject, but the reviewers were skeptical. I was indulging in speculation that could never be proved one way or another. There simply are no records of estrogen levels in the environment for the period in question.

So I published my speculations on my blog (here, here, and here) and let the matter lie … while keeping an eye on the literature. Recently, three findings have come to my attention:


Primary treatment less effective than previously thought

A recent paper confirms that tertiary treatment of wastewater removes over 90% of all estrogen. On the other hand, primary treatment is less effective than previously thought, the removal rate being only 10% (Limpiyakorn et al., 2011).


Estrogen content of wastewater higher than previously thought

In the past, estrogen levels were measured only for the three most common kinds of estrogen: estrone (E1), estradiol (E2), and estriol (E3). Other natural estrogens, however, are present in urine, and they account for a further 18-34% of total estrogenic activity. The total estrogen level in wastewater is thus higher than previously thought:

[…] the total excretion rates of EEQ [estrogen equivalent] by estrone (E1), 17β-estradiol (E2), and estriol (E3) only accounted for 66–82% of the total excretion rate of EEQ among four different groups, and the other corresponding natural estrogens contributed 18–34%, which meant that some of the other natural estrogens may also exist in wastewater with high estrogenic activities. (Liu et al., 2009)
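
As an illustrative back-of-the-envelope calculation (mine, not one given by Liu et al.), if E1, E2, and E3 account for only 66-82% of total estrogenic activity, then the true total exceeds the E1 + E2 + E3 sum by a factor of roughly

1/0.82 \approx 1.2 \qquad \mathrm{to} \qquad 1/0.66 \approx 1.5

In other words, estimates based on the three classic estrogens alone understate total estrogenic activity by about 20-50%.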


River and ocean sediments reveal formerly high estrogen levels in the environment

It is possible to look into the past by taking cores of sediments from the bottoms of lakes, rivers, and coastal waters. A study of the River Thames has found that estrogen levels are higher in river sediments deposited before the 1960s:

There is an indication of higher concentrations of E1 and E3 in samples deemed to be deposited before the mid-1960s, prior to the introduction of biological treatment at STWs [sewage treatment works] discharging to the estuary. […] This provides indirect evidence that historical improvements to wastewater treatment have resulted in a decrease in the concentrations of steroids in the effluent, as observed for PCBs and DDT from the same core (Gomes et al., 2011)

A similar finding comes from a study of sediment cores from Japanese coastal areas:

The concentration of natural estrogens such as 17β-estradiol (E2) and estrone (E1) in the sediment of coastal areas in Japan was determined […] Core samples were sliced every 2cm from the surface to 20cm deep for the measurement of estrogen. Although the concentrations of estrogens decreased with the depth of core samples, fairly high levels of estrogens were again noticed at the layer deeper than 16cm. (Matsuoka et al., 2005)

Conclusion


The most promising line of research seems to be the use of sediment cores to estimate past levels of estrogen in the environment. One problem will be calibration of sediment dating. Gomes et al. (2011) have pointed to a possible solution by noting that the mid-1960s correspond to the first sediment layer that contains synthetic estrogen, i.e., from birth control pills. Another problem is that estrogen seems to degrade gradually over time, even in river or ocean sediments.

References


Copeland, C. (1993). Wastewater Treatment: Overview and Background [93-138 ENR] Washington, D.C.: Congressional Research Service.

Frost, P. (2009). The urinary estrogen theory. Part I, Evo and Proud, March 11
http://evoandproud.blogspot.ca/2009/03/urinary-estrogen-theory-part-i.html

Frost, P. (2009). The urinary estrogen theory. Part II, Evo and Proud, March 18
http://evoandproud.blogspot.ca/2009/03/urinary-estrogen-theory-part-ii.html

Frost, P. (2009). The urinary estrogen theory. Part III, Evo and Proud, March 26
http://evoandproud.blogspot.ca/2009/03/urinary-estrogen-theory-part-iii.html

Gomes, R.L., M.D. Scrimshaw, E. Cartmell, & J.N. Lester.  (2011). The fate of steroid estrogens: partitioning during wastewater treatment and onto river sediments, Environmental Monitoring and Assessment, 175, 431–441.

LeQuire, E. (1999). Something in the Water. InSites, 7(1).

Limpiyakorn, T., S. Homklin, & S.K. Ong. (2011). Fate of estrogens and estrogenic potentials in sewerage systems, Critical Reviews in Environmental Science and Technology, 41(13), 1231-1270.

Liu, Z., Y. Kanjo, S. Mizutani. (2009). Urinary excretion rates of natural estrogens and androgens from humans, and their occurrence and fate in the environment: A review, Science of the Total Environment, 407, 4975–4985

Matsuoka, S., R. Sakakura, M. Takiishi, Y. Kurokawa, S. Kawai, & N. Miyazaki. (2005). Determination of natural estrogens in the sediment of coastal area in Japan, Coastal Marine Science, 29(2), 141-146.
http://repository.dl.itc.u-tokyo.ac.jp/dspace/handle/2261/5593

Rockefeller, A.A. (1996). Civilization and sludge: Notes on the history of the management of human excreta. Current World Leaders, 39, 99‑113.

Tabak, H.H., R.N. Bloomhuff, & R.L. Bunch. (1981). Steroid hormones as water pollutants II. Studies on the persistence and stability of natural urinary and synthetic ovulation‑inhibiting hormones in untreated and treated wastewaters. Developments in Industrial Microbiology, 22, 497‑519.

U.S. Council on Environment Quality. (1984). Annual Report. Washington D.C.

Vader, J.S., C.G. van Ginkel, F.M.G.M. Sperling, J. de Jong, W. de Boer, J.S. de Graaf, M. van der Most, & P.G.W. Stokman. (2000). Degradation of ethinyl estradiol by nitrifying activated sludge. Chemosphere, 41, 1239‑1243. 

Saturday, August 4, 2012

Too darn hot?


Does higher IQ correlate with colder temperatures? Not among people belonging to the same cultural system, such as the Chinese. (source)

Big brains are costly, not only because of their high energy consumption but also because many genes have to interact to create neural tissue. The bigger and more complex the brain, the more it is vulnerable to accidents at the gene level, like random mutations.

Mutations happen more often at warmer temperatures. In Drosophila, an increase of 10ºC will double or triple the mutation rate. Tight underwear has probably done more to harm the human genome than fallout from nuclear testing (Sutton, 1975, p. 318).
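
This temperature sensitivity is usually summarized with a Q10 coefficient. As an illustrative framing (the formula is a standard convention, not something taken from Sutton), the mutation rate k at temperature T relative to a reference temperature T_0 scales as

k(T) = k(T_0)\, Q_{10}^{(T - T_0)/10}, \qquad Q_{10} \approx 2 \;\mathrm{to}\; 3

so a full 10ºC rise multiplies the rate by the Q10 value, while a 5ºC rise with Q10 = 2 gives roughly a 1.4-fold increase.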

Drawing on these two points, Greg Cochran is now suggesting that large human brains are a precarious outcome of evolution (here and here). However strong the natural selection may be for a bigger brain, the mutation rate is pushing back in the opposite direction. Beyond a certain size, big brains are possible only where the mutation rate is relatively low—in cooler regions at higher latitudes.

This is a seductive way of explaining why brain size correlates with latitude. And, yes, such a correlation does exist. So thought most 19th-century physical anthropologists, notably Samuel George Morton, but Stephen J. Gould (1978) concluded otherwise after a reanalysis of Morton’s data that became the centerpiece of his book The Mismeasure of Man. A team of physical anthropologists has since located and remeasured Morton’s skulls. Their conclusion? The original measurements had few errors, and the errors were distributed randomly. There was, in fact, a non-significant tendency by Morton to overestimate African skull size (Lewis et al., 2011).

Brain size and latitude also seem to correlate among ancestral hominids from A. afarensis to H. sapiens (Henneberg & de Miguel, 2004). The correlation remains even if time period is controlled. It thus cannot be due to the overall rise in cranial capacity over time and the parallel expansion of ancestral hominids into higher latitudes.

Is this correlation adequately explained by the ‘Too Darn Hot’ theory? The main supporting evidence is the finding that ‘loss of function’ mutations are much more common in sub-Saharan Africans than in other humans (MacArthur & Tyler-Smith, 2010; Tennessen et al., 2012). Natural selection seems to have more trouble weeding them out in the tropics than elsewhere.

But do these mutations need to be weeded out? Are they in fact deleterious? Some authors think so. Most don’t, including the authors of a study that Greg cites:

[…] the implicit assumption that LOF variants (and indeed other changes predicted to be damaging to the protein) are necessarily deleterious to human health is a dangerous one, especially when such an assumption is used to infer disease causality for a novel variant. In fact, the studies reviewed above demonstrate that healthy humans carry many dozens of LOF variants, most of which have little or no effect on health (at least in the heterozygous state). (MacArthur & Tyler-Smith, 2010)

In general, these mutations seem to involve genes of very low selective value, so they could very well hang around indefinitely if natural selection against them is weak enough and if the population is large enough.

Here we come to the usual explanation for Africa’s large number of ‘loss of function’ mutations. There are so many of them because Africans have largely stayed put in the same place with the same population base. In contrast, non-Africans are descended from small founder groups that took only a small portion of this junk variability on their way out of Africa:

The gene-diversity results presented here are consistent with one another and with those of many previous studies in showing higher levels of diversity in African populations than in non-African populations […] A higher level of African diversity supports the hypothesis that modern humans first arose in Africa and then colonized other parts of the world (Stoneking 1993), but genetic diversity is related not just to a population’s “age” but also to demographic events in a population’s history, such as bottlenecks and effective population size. (Jorde et al., 2000)


Alternate theories


Do we have other theories for latitudinal variation in brain size? To date, there seem to be three:

Need to reduce heat loss at higher latitudes


According to Beals, Smith & Dodd (1984), heads have grown larger at higher latitudes as a way to reduce heat loss. An object will lose less heat if it has a high ratio of volume to surface area. Natural selection has therefore favored more globular heads at higher latitudes, and the increase in brain size is incidental.
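
The geometry behind this argument can be made explicit (a standard result, not a calculation from Beals et al.): for a sphere of radius r, the ratio of surface area to volume is

\frac{4\pi r^2}{(4/3)\pi r^3} = \frac{3}{r}

so relative heat loss falls as the head grows larger; and since the sphere has the smallest surface area of any shape enclosing a given volume, a more globular head of the same size also loses less heat.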

This explanation was challenged in the comments section of the above paper. Iwatoro Morimoto pointed out that "in recent centuries, brachycranic skulls show a considerable increase in frequency in Eurasian populations, including the Japanese." Since mean temperatures have changed little in recent centuries, there must have been another factor at work.

Another commenter, Erik Trinkaus, similarly pointed out that Neanderthal cranial capacity was no bigger during glacial periods than during interglacials. The same was true for early modern humans. For populations already established at northern latitudes, cranial capacity shows no evidence of rising and falling with mean temperature.

Finally, if the increase in brain size was driven by the need for a more globular head, that goal can be met by filling the extra head space with non-neural tissue, like bone or cartilage. Neural tissue has a high maintenance cost. Why maintain something at great expense if you don’t really need it?

Increase in visual cortex at higher latitudes


Pearce and Dunbar (2011) argue that bigger brains are an adaptation to lower levels of ambient light. Specifically, dimmer light requires larger eyes, which in turn require larger visual cortices in the brain. Using 73 adult crania from populations located at different latitudes, the two authors found that both eyeball size and brain size correlate positively with latitude. The correlation was stronger with eyeball size, an indication that this factor was driving the increase in brain size.

How credible is this explanation? First, visual cortex size was not directly measured. The authors inferred that this brain area was responsible for the increase in total cranial capacity. Of course they couldn’t have done otherwise. They were measuring skulls, not intact brains.

To date, the best map of human variation in brain size is by Beals et al. (1984). If dimness of light is the main determinant, brain size should be highest in northwestern Europe, northern British Columbia, the Alaskan panhandle, and western Greenland. These regions combine high latitudes with generally overcast skies. Yet they are not the regions where humans have the biggest brains. Instead, brains are biggest among humans from the northern fringe of Arctic Asia and from northeastern Arctic Canada. These regions are, if anything, less overcast than average. They often have high levels of ambient light because of reflection from snow and ice.

Increase in cognitive demands at higher latitudes


Finally, brain size may have increased at higher latitudes because of an increase in cognitive tasks, specifically foresight. As ancestral humans spread out of the tropics and into latitudes with a predictable summer/winter cycle, it became much more advantageous to simulate the future consequences of present actions.

This point is discussed by Hoffecker (2002, p. 135). Among early modern humans, tools and weapons were more complex at arctic latitudes than at tropical latitudes. “Technological complexity in colder environments seems to reflect the need for greater foraging efficiency in settings where many resources are available only for limited periods of time.” Arctic humans planned ahead to cope with resource fluctuations and high mobility requirements, such as by developing untended devices (e.g., traps and snares) and means of food storage.

Colder environments imposed even higher cognitive demands when hunting and gathering gave way to agriculture. Food had to be grown not only for present needs but also for the next cold season. As late as the 18th century, farm families often faced starvation in early spring—when their winter provisions had run out and their spring crop had not yet come in.

The yearly cycle and the need to plan ahead thus preadapted early non-tropical humans for later cultural developments, such as invention of writing and bookkeeping, complexification of social relations, creation of towns and cities, systems of military defense, roads and highways, etc. At that point, cognitive demands were no longer driven by the yearly cycle, at least not primarily. They were now being driven by an increasingly complex cultural environment—what we call ‘civilization.’

Conclusion


How does the ‘Too Darn Hot’ theory stack up against these alternate theories? The main contender seems to be the last one, i.e., the increase in cognitive demands at higher latitudes. According to that theory, the yearly cycle has given way to gene-culture co-evolution as the main driving force behind increases in intellectual capacity.

Thus, if people live within the same cultural system and are exposed to similar cognitive demands, they should on average have the same intellectual capacity … regardless of the mean temperature of their particular locality.

Conversely, the ‘Too Darn Hot’ theory would predict the existence of a north-south cline in IQ even among people of a similar cultural background, since people at more tropical latitudes should have a higher incidence of deleterious mutations.

China, for example, covers a wide range of latitudes from the sub-Arctic to the tropics. Although the Chinese have occupied this latitudinal range for some 2,500 years, i.e., about 100 generations, their mean IQ doesn’t seem to vary along a north-south cline (see above map).

Perhaps 100 generations isn’t long enough. But what about the Amerindians? They’ve inhabited a full range of latitudes from the Arctic to the equator for some 12 to 15 thousand years. That’s 480 to 600 generations. Is there a difference in mean IQ between the Naskapi of northern Labrador and the Yanomamo of Amazonia? I’d be surprised.

References


Anon. (2011). IQ geography in China, November 19, The Slitty Eye,
http://theslittyeye.wordpress.com/2011/11/19/iq-geography-in-china/

Beals, K.L., C.L. Smith, and S.M. Dodd (1984). Brain size, cranial morphology, climate, and time machines, Current Anthropology, 25, 301–330.

Cochran, G. (2012). Changes in attitudes, West Hunter, July 18
http://westhunt.wordpress.com/2012/07/18/changes-in-attitudes/

Cochran, G. (2012). Too darn hot? West Hunter, July 14
http://westhunt.wordpress.com/2012/07/14/too-darn-hot/

Gould, S.J. (1981). The Mismeasure of Man. New York: W. W. Norton and Company.

Gould, S.J. (1978). Morton’s ranking of races by cranial capacity: unconscious manipulation of data may be a scientific norm, Science, 200, 503–509.

Henneberg, M. and C. de Miguel. (2004). Hominins are a single lineage: brain and body size variability does not reflect postulated taxonomic diversity of hominins, Journal of Comparative Human Biology, 55, 21–37

Hoffecker, J.F. (2002). Desolate Landscapes. Ice-Age Settlement in Eastern Europe. New Brunswick: Rutgers University Press.

Jorde, L.B., W.S. Watkins, M.J. Bamshad, M.E. Dixon, C.E. Ricker, M.T. Seielstad, & M.A. Batzer. (2000). The distribution of human genetic diversity: a comparison of mitochondrial, autosomal, and Y-chromosome data, American Journal of Human Genetics, 66, 979–988.

Lewis, J.E., D. DeGusta, M.R. Meyer, J.M. Monge, A.E. Mann, R.L. Holloway. (2011). The Mismeasure of Science: Stephen Jay Gould versus Samuel George Morton on Skulls and Bias, PLoS Biology, 9(6) e1001071

MacArthur, D.G., & C. Tyler-Smith. (2010). Loss-of-function variants in the genomes of healthy humans, Human Molecular Genetics, 19, R125-R130.

Pearce, E. and R. Dunbar. (2011). Latitudinal variation in light levels drives human visual system size, Biology Letters, doi: 10.1098/rsbl.2011.0570

Sutton, H.E. (1975). An Introduction to Human Genetics, New York: Holt, Rinehart and Winston.

Tennessen, J.A., A.W. Bigham, T.D. O’Connor, W. Fu, E.E Kenny, et al. (2012). Evolution and functional impact of rare coding variation from deep sequencing of human exomes, Science, 337, 64-69.