abdul-latif-jameel-poverty-action-lab

India’s largest social protection program, the Mahatma Gandhi National Rural Employment Guarantee Scheme (MGNREGS), guarantees households 100 days of work per year, typically in unskilled manual labor on infrastructure projects. For MGNREGS, the central government disburses funds to local governments based on projected spending, a system prone to extensive delays and leakage. In fiscal year 2016-17, the Indian government spent over US$6 billion on the program and reached 74 million beneficiaries. In Bihar, J-PAL affiliates tested the impact of an information technology reform that linked the flow of funds to actual expenditures and reduced the number of officials involved in the process. The reform led to a 24 percent decline in expenditure without a detectable decline in employment or assets created, and there is direct evidence that at least part of the decline was due to a reduction in fund leakage.

[…] Informed by the results of the randomized evaluation, India’s Union Cabinet approved a national reform of MGNREGS fund-flow.

The reform allows beneficiary payments across all Indian states to be made through a newly-established National Electronic Fund Management System (Ne-FMS). As detailed below, in the Bihar study MGNREGS funds flowed directly from a state government account to village councils (called panchayats), which in turn paid the beneficiaries. […] The Cabinet note cites J-PAL’s evaluation as part of the rationale for this decision. […]

The evaluation, conducted between September 2012 and March 2013, spanned twelve districts in Bihar, covering a rural population of 33 million.

Abdul Latif Jameel Poverty Action Lab (J-PAL), Fund-flow reforms for improved social program delivery in India


Added to diary 21 March 2018

abraham-lincoln

Four score and seven years ago our fathers brought forth on this continent, a new nation, conceived in Liberty, and dedicated to the proposition that all men are created equal.

Now we are engaged in a great civil war, testing whether that nation, or any nation so conceived and so dedicated, can long endure. We are met on a great battle-field of that war. We have come to dedicate a portion of that field, as a final resting place for those who here gave their lives that that nation might live. It is altogether fitting and proper that we should do this.

But, in a larger sense, we can not dedicate—we can not consecrate—we can not hallow—this ground. The brave men, living and dead, who struggled here, have consecrated it, far above our poor power to add or detract. The world will little note, nor long remember what we say here, but it can never forget what they did here. It is for us the living, rather, to be dedicated here to the unfinished work which they who fought here have thus far so nobly advanced. It is rather for us to be here dedicated to the great task remaining before us—that from these honored dead we take increased devotion to that cause for which they gave the last full measure of devotion—that we here highly resolve that these dead shall not have died in vain—that this nation, under God, shall have a new birth of freedom—and that government of the people, by the people, for the people, shall not perish from the earth.

Abraham Lincoln, Gettysburg Address, 19 November 1863 [Alexander Bliss version]


Added to diary 20 January 2018

agence-france-presse-contributors

The Kingdom of Tonga has admitted to losing millions of dollars that it made selling passports to Asians after an American whom the king appointed as his ‘‘court jester’’ invested the money in a mysterious company that later disappeared.

In the last week, two cabinet ministers have been forced to quit over the scandal as the new deputy prime minister, Clive Edwards, conceded that $26 million, held by the Tonga Trust Fund in a Bank of America account, had been lost.

The money was taken out of the bank in June 1999 and put into Millennium Asset Management in Nevada.

At the time, the government said, a Bank of America employee, Jesse Bogdonoff, became the Trust Fund’s advising officer, just after King Taufaahau Tupou IV issued a royal decree declaring him the court jester.

The fund owed its origins to the late 1980s, when a Hong Kong businessman, George Chen, won royal approval to sell Tongan citizenship and special passports mainly to Asians, with a particular eye on Hong Kong Chinese who were worried about its handover to China.

Mr. Chen put the money into a checking account at the Bank of America after the king refused to keep it in Tonga, saying the government would only spend it on roads.

At the time, Mr. Bogdonoff was working at the bank, and by his own account in a company newsletter, ‘‘he stumbled onto millions of dollars inexplicably invested in a checking account.’’ He persuaded the king to allow him to invest the money.

Millennium was established on March 25, 1999, and the fund was moved into it on June 21. The government statement concluded, though, that Millennium no longer exists and that the $26 million, plus an additional $11 million estimated to be accrued interest, was gone.

‘‘Some common questions being asked by the general public in Tonga today include: Why did the trustee deposit so much of our Foreign Reserves in such a suspicious company?’’ the government said.

Agence France-Presse, The Money Is All Gone in Tonga, And the Jester’s Role Was No Joke, The New York Times, 7 October 2001


Added to diary 18 May 2018

alex-tabarrok

The Baumol effect is easy to explain but difficult to grasp. In 1826, when Beethoven’s String Quartet No. 14 was first played, it took four people 40 minutes to produce a performance. In 2010, it still took four people 40 minutes to produce a performance. Stated differently, in the nearly 200 years between 1826 and 2010, there was no growth in string quartet labor productivity. In 1826 it took 2.66 labor hours to produce one unit of output, and it took 2.66 labor hours to produce one unit of output in 2010.

Fortunately, most other sectors of the economy have experienced substantial growth in labor productivity since 1826. We can measure growth in labor productivity in the economy as a whole by looking at the growth in real wages. In 1826 the average hourly wage for a production worker was $1.14. In 2010 the average hourly wage for a production worker was $26.44, approximately 23 times higher in real (inflation-adjusted) terms. Growth in average labor productivity has a surprising implication: it makes the output of slow productivity-growth sectors (relatively) more expensive. In 1826, the average wage of $1.14 meant that the 2.66 hours needed to produce a performance of Beethoven’s String Quartet No. 14 had an opportunity cost of just $3.02. At a wage of $26.44, the 2.66 hours of labor in music production had an opportunity cost of $70.33. Thus, in 2010 it was 23 times (70.33/3.02) more expensive to produce a performance of Beethoven’s String Quartet No. 14 than in 1826. In other words, one had to give up more other goods and services to produce a music performance in 2010 than one did in 1826. Why? Simply because in 2010, society was better at producing other goods and services than in 1826.

The 23 times increase in the relative price of the string quartet is the driving force of Baumol’s cost disease. The focus on relative prices tells us that the cost disease is misnamed. The cost disease is not a disease but a blessing. To be sure, it would be better if productivity increased in all industries, but that is just to say that more is better. There is nothing negative about productivity growth, even if it is unbalanced.

Helland, Eric, and Alexander T. Tabarrok. “Why Are the Prices So Damn High?” (2019).
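The arithmetic in the passage can be checked in a few lines (figures taken from the quote itself; this is just a back-of-the-envelope sketch, not the authors' code):

```python
# Baumol arithmetic: the same 2.66 labor hours priced at the 1826 and
# 2010 real hourly wages for a production worker.
hours = 2.66                        # four players x 40 minutes, as in the passage
wage_1826, wage_2010 = 1.14, 26.44  # real hourly wages from the passage

cost_1826 = hours * wage_1826       # opportunity cost of one performance, 1826
cost_2010 = hours * wage_2010       # opportunity cost of one performance, 2010
relative_price = cost_2010 / cost_1826

print(round(cost_2010, 2))          # 70.33
print(round(relative_price, 1))     # 23.2
```

Since the hours cancel, the relative price is just the wage ratio 26.44/1.14 ≈ 23: unbalanced wage (productivity) growth alone drives the result, with no change at all in how the quartet is produced.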


Added to diary 09 June 2019

ali-hortacsu

Economists build a database from 4000-year-old clay tablets, plug it into a trade model, and use it to locate lost bronze age cities.

They had clay tablets from ancient merchants, saying things like:

(I paid) 6.5 shekels (of tin) from the Town of the Kanishites to Timelkiya. I paid 2 shekels of silver and 2 shekels of tin for the hire of a donkey from Timelkiya to Hurama. From Hurama to Kaneš I paid 4.5 shekels of silver and 4.5 shekels of tin for the hire of a donkey and a packer.

This allowed them to measure the amount of trade between any two cities.

Then they constructed a theoretical model of the expected amount of trade between any two cities, as inversely proportional to the distance between the two cities. Given the data on the amount of trade, and the locations of the known cities, they were able to estimate the lost locations:

As long as we have data on trade between known and lost cities, with sufficiently many known compared to lost cities, a structural gravity model is able to estimate the likely geographic coordinates of lost cities [….]

We build a simple Ricardian model of trade. Further imposing that bilateral trade frictions can be summarized by a power function of geographic distance, our model makes predictions on the number of transactions between city pairs, which is observed in our data. The model can be estimated solely on bilateral trade flows and on the geographic location of at least some cities.

Gojko Barjamovic, Thomas Chaney, Kerem A. Coşar, Ali Hortaçsu, Trade, Merchants, and the Lost Cities of the Bronze Age, 2017
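The estimation idea can be sketched in a toy example. Everything below is hypothetical (made-up coordinates, an assumed distance elasticity, simulated "tablet counts"); the paper's actual structural gravity model is far richer:

```python
import math

ZETA = 2.0  # assumed distance elasticity of trade (hypothetical value)

# Hypothetical coordinates for three known cities and one "lost" city.
known = {"A": (0.0, 0.0), "B": (3.0, 1.0), "C": (1.0, 4.0)}
true_lost = (2.0, 2.0)  # location we pretend not to know

def dist(p, q):
    # Euclidean distance, floored to avoid log(0) at a known city's own site.
    return max(math.hypot(p[0] - q[0], p[1] - q[1]), 1e-6)

# Simulated tablet counts: trade with the lost city falls off as distance**-ZETA.
trade = {name: dist(xy, true_lost) ** -ZETA for name, xy in known.items()}

def sq_error(candidate):
    # Squared deviation between observed and model-implied log trade.
    return sum((math.log(trade[n]) + ZETA * math.log(dist(xy, candidate))) ** 2
               for n, xy in known.items())

# Grid-search the location that best rationalises the observed flows.
grid = [(x / 10, y / 10) for x in range(51) for y in range(51)]
estimate = min(grid, key=sq_error)
print(estimate)  # recovers (2.0, 2.0)
```

With enough known cities, the observed trade flows over-determine the lost city's position, which is why the authors need "sufficiently many known compared to lost cities."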


Added to diary 15 January 2018

amia-srinivasan

Many male octopuses, to avoid being eaten during mating, will keep their bodies as far removed from the female as possible, extending a single arm with a sperm packet towards her siphon, a manoeuvre known as ‘the reach’. […]

In 1959, Peter Dews, a Harvard scientist, trained three octopuses there to pull a lever to obtain a chunk of sardine. Two of the octopuses, Albert and Bertram, pulled the lever in a ‘reasonably consistent’ manner. But the third, Charles, would anchor his arms on the side of the tank and apply great force to the lever, eventually breaking it and bringing the experiment to a premature end. Dews also reported that Charles repeatedly pulled a lamp into his tank, and that he ‘had a high tendency to direct jets of water out of the tank; specifically … in the direction of the experimenter’. ‘This behaviour,’ Dews wrote, ‘interfered materially with the smooth conduct of the experiments, and is … clearly incompatible with lever-pulling.’ […]

Both female and male octopuses mate only once, and enter a swift and sudden decline into senescence soon after, developing white lesions on their skin, losing interest in food, and becoming unco-ordinated and confused. The females die from starvation while they tend their eggs, and the males are typically preyed on as they wander the ocean aimlessly. […] In its early evolutionary history, the octopus gave up its protective, molluscan shell in order to embrace a life of unboundaried potential. But the cost was an increased vulnerability to toothy and bony predators. An animal with a soft body and no shell cannot expect to live long, and so harmful mutations that take effect only once it has been alive for a couple of years will soon spread through the population. The result is a life that is experientially rich but conspicuously brief.

Amia Srinivasan, The Sucker, the Sucker!, London Review of Books, Vol. 39 No. 17, 7 September 2017, pages 23-25


Added to diary 16 March 2018

anonymous

What am I? I’m a bunch of bones. I get to wiggle my bones around for a few years, and then I die. And I’m supposed to affect the light cone? It’s like being a worm in a back yard, wiggling around, and hoping to maximise the price of tea in China.

Anonymous, February 2019, Oxford


Added to diary 13 February 2019

Me: Why do some people wear clothes that are generally considered very strange (e.g. punk or emo)? They clearly know most people don’t like it.
Friend: It increases the variance.
Me: But it decreases the expected value.
Friend: Well, you can fuck the variance, but you can’t fuck the expected value.

A conversation with a friend, Oxford, 2017


Added to diary 18 January 2018

aron-vallinder

Natural Turing Machines. The final issue is the choice of universal Turing machine to be used as the reference machine. The problem is that there is still subjectivity involved in this choice, since what is simple on one Turing machine may not be on another. More formally, it can be shown that for any arbitrarily complex string x as measured against the UTM U there is another UTM U′ for which x has Kolmogorov complexity 1. This result seems to undermine the entire concept of a universal simplicity measure, but it is more of a philosophical nuisance which only occurs in specifically designed pathological examples. The Turing machine would have to be absurdly biased towards the string x, which would require previous knowledge of x. The analogy here would be to hard-code some arbitrarily long complex number into the hardware of a computer system, which is clearly not a natural design. To deal with this case we make the soft assumption that the reference machine is natural in the sense that no such specific biases exist. Unfortunately there is no rigorous definition of natural, but it is possible to argue for a reasonable and intuitive definition in this context.

Hutter and Rathmanner 2011, Section 5.9 “Andrey Kolmogorov”

In section 2.4 we saw that Solomonoff’s prior is invariant under both reparametrization and regrouping, up to a multiplicative constant. But there is another form of language dependence, namely the choice of a universal Turing machine.

There are three principal responses to the threat of language dependence. First, one could accept it flat out, and admit that no language is better than any other. Second, one could admit that there is language dependence but argue that some languages are better than others. Third, one could deny language dependence, and try to show that there isn’t any.

For a defender of Solomonoff’s prior, I believe the second option is the most promising. If you accept language dependence flat out, why introduce universal Turing machines, incomputable functions, and other needlessly complicated things? And the third option is not available: there isn’t any way of getting around the fact that Solomonoff’s prior depends on the choice of universal Turing machine. Thus, we shall somehow try to limit the blow of the language dependence that is inherent to the framework. Williamson (2010) defends the use of a particular language by saying that an agent’s language gives her some information about the world she lives in. In the present framework, a similar response could go as follows. First, we identify binary strings with propositions or sensory observations in the way outlined in the previous section. Second, we pick a UTM so that the terms that exist in a particular agent’s language get low Kolmogorov complexity.

If the above proposal is unconvincing, the damage may be limited somewhat by the following result. Let K_U(x) be the Kolmogorov complexity of x relative to universal Turing machine U, and let K_T(x) be the Kolmogorov complexity of x relative to Turing machine T (which needn’t be universal). We have that K_U(x) ≤ K_T(x) + c_{U,T}. That is: the difference in Kolmogorov complexity relative to U and relative to T is bounded by a constant c_{U,T} that depends only on these Turing machines, and not on x. (See Li and Vitanyi (1997, p. 104) for a proof.) This is somewhat reassuring. It means that no other Turing machine can outperform U infinitely often by more than a fixed constant. But we want to achieve more than that. If one picks a UTM that is biased enough to start with, strings that intuitively seem complex will get a very low Kolmogorov complexity. As we have seen, for any string x it is always possible to find a UTM U such that K_U(x) = 1. If K_U(x) = 1, the corresponding Solomonoff prior will be at least 2^{−1}. So for any binary string, it is always possible to find a UTM such that we assign that string prior probability greater than or equal to 1/2. Thus some way of discriminating between universal Turing machines is called for.

Vallinder 2012, Section 4.1 “Language dependence”
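The two results the passage appeals to can be written out in standard notation (following Li and Vitanyi; the constant name is mine):

```latex
% Invariance theorem: for a universal machine U and any machine T there is
% a constant c_{U,T}, independent of x, such that
\[ K_U(x) \;\le\; K_T(x) + c_{U,T} \quad \text{for all strings } x. \]
% Biased reference machines: for any fixed string x one can construct a
% UTM U' with K_{U'}(x) = 1, so the corresponding Solomonoff prior obeys
\[ M_{U'}(x) \;\ge\; 2^{-K_{U'}(x)} \;=\; 2^{-1}. \]
```

The first inequality bounds the damage from switching machines by a constant; the second shows the constant can be made arbitrarily large by choosing a pathologically biased reference machine, which is exactly the tension Vallinder describes.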


Added to diary 15 January 2018

ben-garfinkel

a generalization of all these concepts [metaphor, deepity, motte and bailey]:

Blurry sentence: a sentence with at least two possible interpretations, where one is both much more interesting and much less plausible than the other. The effect is to make the reader feel that the sentence is both interesting and plausible. The implausible interpretation also does not help the reader to understand the plausible one.

Here’s how the generalization works:

Basic metaphor: a blurry sentence where the plausible interpretation is the most salient one to the reader.

Deepity: a blurry sentence where neither the plausible interpretation nor the implausible interpretation is much more salient than the other, leaving the reader with a sense of vagueness or ineffability. The feeling is: There is something true and interesting here, although I can’t quite put my finger on it.

Motte and bailey: a blurry sentence where the implausible interpretation is the most salient one. If the reader objects that the sentence is in fact implausible, then the writer has the option of switching to only the plausible interpretation.

These revised definitions reveal that there is a spectrum here, such that the names “basic metaphor,” “deepity,” and “motte and bailey” are only calling out particular regions of the spectrum.

Ben Garfinkel, Basic Metaphors, Deepities, and Motte-and-Baileys, The best that can happen, 16 September 2017


Added to diary 15 January 2018

bryan-caplan

But George Akerlof aggressively seizes the booby prize. He apparently keeps his wealth in money market funds and the like. His justification? Less than zero: “I know it’s utterly stupid.”

The whole article reminds me of a quote I wrongly attributed to Einstein instead of Thomas Szasz: “Clear thinking requires courage rather than intelligence.”

Bryan Caplan, Four bad role models, EconLog, 12 May 2005

All four basically said that they have portfolios that their research says are stupid portfolios. I’m just like, what is wrong with you, like why do you do this? […]

To me it’s just so frustrating. This is the way a lot of people approach economics, is it’s like a game. Say you get publications or maybe you go and write things, but you don’t actually really use it to change behavior.

Bryan Caplan, 80,000 Hours Podcast, 22 May 2018


Added to diary 23 May 2018

List of passages I highlighted in my copy of “The Case Against Education”.

On conformity and conscientiousness

Heterodox signals of your strengths, in contrast, automatically suggest offsetting weaknesses. Suppose you scored well on the SAT but never went to college. Employers will readily believe you’re smart. But if you’re so smart, why didn’t you go to college? As long as your conscientiousness and conformity were in the normal range, finishing college would have been a snap. Once employers see your SATs, they naturally infer you’re below average in conscientiousness and conformity. The higher your scores, the more suspicious your missing diploma becomes. […]

The further outside the box your substitute signal of conformity, the more it backfires. Try telling employers, “I’m not Jewish, but I keep kosher to prove I can conform to intricate rules.” They’ll take you for a freak. […]

On employer learning:

Give people a chance, observe how they do, fire them if they don’t measure up: a “Hire, Look, Flush” personnel policy sounds both profitable and fair. Yet group identity and pity get in the way. After a firm hires you, you’re part of the team. […]

Employers do have one guilt-free way to reverse a bad hiring decision. Human resources calls it “dehiring.” Instead of firing the unwanted worker, help them jump ship. Privately urge them to find new opportunities. When firms call for a reference, shade the truth—or lie. […]

For most workers, employer learning takes years or even decades, not months. Two seminal studies of employer learning found that during your first decade in the workforce, the ability premium sharply rises, while the education premium falls 25–30%. A subsequent prize-winning article found the education and ability premiums plateau after roughly ten years of experience; the education premium stops falling, and the ability premium stops rising. […]

The more fundamental reason why signals durably affect pay, though, is employers underreact to what they learn. Why? Because they want to match pay and perceived productivity without seeming unfair. When employers spot poor performance, they could swiftly respond with wage cuts, demotions, or terminations. The catch: such “unfair” measures are bad for morale—and make employers feel guilty. […]

A subpar worker can profit from their fancy degree long after their employer sees their true colors. The degree lands them a good job. As truth unfolds, the typical employer responds with stingy raises, not outright pay cuts or demotion. This slowly erodes the value of the signal, but squeamish firms show mercy long before they sync pay with performance. If and when the employer vows to eject the underperformer, both prudence and pity tell them to informally “dehire” rather than blatantly fire. As long as the subpar worker lands another position suitable for their paper persona, the cycle of disappointment, mercy, and deception is reborn. […]

A litmus test:

Imagine this stark dilemma: you can have either a Princeton education without a diploma, or a Princeton diploma without an education. Which gets you further on the job market? For a human capital purist, the answer is obvious: four years of training are vastly preferable to a page of paper. But try saying that with a straight face. Sensible versions of the signaling model don’t imply the diploma is clearly preferable; after all, Princeton teaches some useful skills. But you need signaling to explain why choosing between an education and a diploma is a head-scratcher rather than a no-brainer. […]

Counter-intuitive effects of education subsidies:

Yes, awarding a full scholarship to one poor youth makes that individual better off by helping send a fine signal to the labor market. Awarding full scholarships to all poor youths, however, changes what educational signals mean—and leads more affluent competitors to pursue further education to keep their edge. The result, as we’ve seen, is credential inflation. As education rises, workers—including the poor—need more education to get the same job. Where’s the social justice in that?

Imagine the government subsidized wedding rings for the poor. Anyone ready for marriage can go to any jewelry store in the country, knowing—whatever their income—they can buy a diamond ring. The snag: diamond rings are largely a signal of marital commitment. If diamonds were cheap as plastic, other gems would adorn our rings. They’re valuable because they’re costly. Once the government makes them affordable to all, then, diamond rings signal little or nothing. Doesn’t this “level the playing field”? Only for a heartbeat. Once the nonpoor see diamond rings don’t signal what they used to, they procure a snazzier ring to separate themselves from the pack. Thanks to government subsidies, every suitor can afford a wedding ring, but so what? Society is functionally as unequal as ever. […]

Subsidies don’t just hurt the poor by fueling credential inflation. They reshape hiring and promotion to the poor’s detriment. Picture a society where half the population can’t afford college. In this setting, reserving good jobs for college grads is bad business. “There are plenty of qualified candidates who didn’t go to college” is not wishful thinking, but literal truth. Education still signals something, but lack of education is not the kiss of death. When asked, “Why didn’t you go to college?” “I couldn’t afford it” is a great excuse. Heavy subsidies take it off the table.

Does school build human capital?

One major study tested roughly a thousand people’s knowledge of algebra and geometry. Some participants were still in high school; the rest were adults between 19 and 84 years old. The researchers had data on subjects’ full mathematical education. Main finding: Most people who take high school algebra and geometry forget about half of what they learn within five years and forget almost everything within twenty-five years. Only […]

Despite the shortage of long-term retention studies, we can fall back on a compelling shortcut. Instead of measuring the enduring effect of education on adult knowledge, we can place an upper bound on that effect. It’s a two-step process. Step one: measure adult knowledge about various school subjects. Step two: note that schools can’t be responsible for more than 100% of what adults know about these subjects. What people now know is therefore an upper bound on the school learning they retain. My shortcut is easy to implement. Surveys of adults’ knowledge of reading, math, history, civics, science, and foreign languages are already on the shelf. The results are stark: Basic literacy and numeracy are virtually the only book learning most American adults possess. […]

Barely half of American adults know the Earth goes around the sun. Only 32% know atoms are bigger than electrons. Just 14% know that antibiotics don’t kill viruses. Knowledge of evolution barely exceeds zero. Knowledge of the Big Bang is actually less than zero; respondents would have done better flipping a coin. […]

If you throw a coin straight up, how many forces act on it midair? The textbook answer is “one”: after it leaves your hand, the only force on the coin is gravity. The popular answer, however, is “two”: the force of the throw keeps sending it up, and the force of gravity keeps dragging it down. Popular with whom? Virtually everyone—physics students included. At the beginning of the semester, only 12% of college students in introductory mechanics get the coin problem right. At the end of the semester, 72% still get it wrong. […]

Learning how to learn?

The clash between teachers’ grand claims about “learning how to learn” and a century of careful research is jarring. Yet commonsense skepticism is a shortcut to the expert consensus. Teachers’ plea that “we’re mediocre at teaching what we measure, but great at teaching what we don’t measure” is comically convenient. […]

Effects on IQ:

“Flowers for Algernon” is science fiction, but life mirrors art. Making IQ higher is easy. Keeping IQ higher is hard. Researchers call this “fadeout.” Fadeout for early childhood education is especially well documented. After six years in the famous Milwaukee Project, experimental subjects’ IQs were 32 points higher than controls’. By age fourteen, this advantage had declined to 10 points. In the Perry Preschool program, experimental subjects gained 13 points of IQ, but all this vanished by age 8. Head Start raises preschoolers’ IQs by a few points, but gains disappear by the end of kindergarten. […]

In any case, suppose each year of school permanently made you a whopping 3 IQ points smarter. According to standard estimates, this would raise your earnings by about 3%, leaving a supermajority of the education premium unexplained. […]

This section, comparing three models of education (human capital, signalling, ability bias) was excellent:

Given a clear explanation, most people readily grasp the conflict between human capital and its rivals. Yet even experts occasionally confuse signaling with ability bias. Both stories agree that employers value workers’ skill; both deny that schooling enhances workers’ skill. The two stories diverge on the question of visibility. In a pure signaling story, employers never see your skill. So if your skills mismatch your credentials, the labor market rewards your credentials, not your skills. In a pure ability bias story, in contrast, employers see your skill plain as day. So if your skills mismatch your credentials, the labor market rewards your skills, not your credentials. […]

Table 3.2: Human Capital, Signaling, and Ability Bias [WYSIWYG = “What You See Is What You Get.”]

Story | Visibility of Skill | Education’s Effect on Skill | Education’s Effect on Income
Pure Human Capital | Perfect | WYSIWYG | WYSIWYG
Pure Signaling | Zero | Zero | WYSIWYG
Pure Ability Bias | Perfect | Zero | Zero
⅓ Human Capital, ⅓ Signaling, ⅓ Ability Bias | ⅔ | ⅓ × WYSIWYG | ⅔ × WYSIWYG
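The table's bottom row is just a convex combination of the three pure rows. A minimal sketch, coding Perfect/WYSIWYG as 1 and Zero as 0 (the ⅓ weights are the book's illustrative split, not an empirical estimate):

```python
from fractions import Fraction

# Rows of Table 3.2: (visibility of skill, effect on skill, effect on income),
# with Perfect / WYSIWYG coded as 1 and Zero coded as 0.
pure = {
    "human_capital": (1, 1, 1),
    "signaling":     (0, 0, 1),
    "ability_bias":  (1, 0, 0),
}
weights = {story: Fraction(1, 3) for story in pure}  # the book's 1/3 each

# Weighted average of each column across the three pure stories.
blend = tuple(sum(weights[s] * pure[s][i] for s in pure) for i in range(3))
print(blend)  # (Fraction(2, 3), Fraction(1, 3), Fraction(2, 3))
```

This reproduces the mixed row: visibility ⅔, effect on skill ⅓ × WYSIWYG, effect on income ⅔ × WYSIWYG.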

In 1999, a comprehensive review of earlier studies found that correcting for IQ reduces the education premium by an average of 18%. When researchers correct for scores on the Armed Forces Qualification Test (AFQT), an especially high-quality IQ test, the education premium typically declines by 20–30%. Correcting for mathematical ability may tilt the scales even more; the most prominent researchers to do so report a 40–50% decline in the education premium for men and a 30–40% decline for women.

Internationally, correcting for cognitive skill cuts the payoff for years of education by 20%, leaving clear rewards of mere years of schooling in all 23 countries studied. The highest serious estimate finds the education premium falls 50% after correcting for students’ twelfth-grade math, reading, and vocabulary scores, self-perception, perceived teacher ranking, family background, and location. 

A thinner body of research weighs the importance of so-called noncognitive abilities such as conscientiousness and conformity. The results parallel those for IQ: noncognitive ability pays, and correcting for noncognitive ability reduces the education premium. Correcting for AFQT, self-esteem, and fatalism (belief about the importance of luck versus effort) reduces the education premium by a total of 30%. The sole study correcting for detailed personality tests finds the education premium falls 13%. The highest serious estimate says that once you correct for intelligence and background, correcting for attitudes (such as fear of failure, personal efficacy, and trust) and personal behavior (such as church attendance, television viewing, and cleanliness) further cuts the education premium by 37%.

There are admittedly two big reasons to mistrust these basic results: reverse causation and missing abilities. The former could systematically overstate the severity of ability bias. The latter could systematically understate the severity of ability bias. Need we fret over either flaw?

Reverse causation. When you estimate the education premium correcting for ability X, you implicitly assume education does not enhance X. If this assumption is false, correcting for X leads to misleadingly low estimates of the effect of education on earnings. The best remedy for this “reverse causation” problem is to measure ability, then estimate the effect of subsequent education on earnings. Research on cognitive ability bias routinely applies this remedy—and uncovers little evidence of reverse causation. The comprehensive review article mentioned earlier separated studies into two categories: those that measured IQ before school completion and those that measured IQ after school completion. If reverse causation were at work, studies that relied on IQ after completion would report more ability bias than studies that relied on IQ before completion. In fact, both categories yield similar estimates of cognitive ability bias. 

Researchers who rely on the AFQT and related tests reach a similar result: When you correct for cognitive ability in 1980, the payoff for posttest education falls at least as much as the payoff for pretest education. Correcting for mathematical ability in the senior year of high school shaves 25–32% off the male college premium and 4–20% off the female college premium. What about reverse causation from education to noncognitive ability? Truth be told, relevant research is sparse. A few papers grapple with the issue, with mixed results. Most research, however, either measures noncognitive ability and education at the same point in time, or fails to distinguish between the effect of pre- and posttest education. The shortage of evidence hardly shows reverse causation is a serious problem, but caution is in order.

Missing abilities. Correcting for ability doesn’t fully eliminate ability bias unless you measure all relevant abilities. Are there any important abilities we’ve overlooked? Family background—via nature or nurture—is a plausible contender. Perhaps wealthy families use their money to help their kids get good educations and good jobs. Maybe college is a four-year vacation for rich kids—and a status symbol for their parents. Perhaps children from large families get less educational and professional assistance from their parents. Maybe well-educated workers come from high-achieving families—and would have been high achievers even without their schooling. The mechanism is hard to nail down, but most researchers find correcting for family background reduces the education premium by 0–15%.

On reflection, though, correcting for family background probably “double-counts.” Both cognitive and noncognitive ability are moderately to highly hereditary, so you should correct for individual ability before you conclude family background overstates school’s payoff. This caveat matters. Rare studies that correct for intelligence and family background find that correcting for intelligence alone suffices. Armed with good measures of cognitive and noncognitive ability, we can probably safely ignore family background.

The most troubling evidentiary gap: researchers usually settle for mediocre measures of noncognitive ability. Most studies that correct for noncognitive ability rely on one or two hastily measured traits and find only mild ability bias. Yet when asked, employers hail the importance of workers’ attitude and motivation—and the study with the best measures of noncognitive ability finds large ability bias. Until better measures come along, we should picture existing results as a lower bound on noncognitive ability bias rather than a solid estimate.

So how severe is ability bias, all things considered? For cognitive ability bias, 20% is a cautious estimate, and 30% is reasonable. For noncognitive ability bias, 5% is cautious, and 15% is reasonable. Figure 3.1 shows education premiums correcting for both abilities, assuming equal bias for all education levels. […]

Why don’t more companies use IQ tests?

IQ “laundering.” Human capital purists often protest, “Why on earth do workers signal ability with a four-year degree instead of a three-hour IQ test?” My response: employers reasonably fear high-IQ, low-education applicants’ low conscientiousness and conformity. Other critics of the education industry, however, have a more streamlined response: American employers rely on educational credentials rather than IQ tests because IQ tests are effectively illegal. Thanks to the landmark 1971 Griggs v. Duke Power case, later codified in the 1991 Civil Rights Act, anyone who hires by IQ risks pricey lawsuits. Why? Because IQ tests have a “disparate impact” on black and Hispanic applicants. To escape liability, employers must prove IQ testing is a “business necessity.” […]

Sheepskin effects: graduation is much more valuable than learning

Labor economists normally neglect sheepskin effects. By default, they assume all years of education are created equal, then estimate “the” effect of a year of education on earnings. Yet economists who trouble to look almost always find pay spikes for diplomas. High school graduation has a big spike: twelfth grade pays more than grades 9, 10, and 11 combined. In percentage terms, the average study finds graduation year is worth 3.4 regular years. College graduation has a huge spike: senior year of college pays over twice as much as freshman, sophomore, and junior years combined. In percentage terms, the average study finds graduation year is worth 6.7 regular years. […]

correcting for ability usually modestly cuts the effect of both years of education and diplomas—holding the relative payoff for diplomas steady. […]

The Onion and the politics of education:

Who could oppose investment in our children, our people, our nation, and our future? The Onion, the best parody site ever, once ran an article titled, “U.S. Government to Discontinue Long-Term, Low-Yield Investment in Nation’s Youth.” In it, Secretary of Education Rod Paige takes a calmly analytical approach that would cost any politician their job: “Testing is exactly the sort of research the government should do before making spending decisions,” Paige said. “How else will we know which individuals are sound investments and which are likely to waste our time and money?” […]

Bryan Caplan, The Case Against Education, 2017


Added to diary 21 April 2018

A Motte and Bailey castle is a medieval system of defence in which a stone tower on a mound (the Motte) is surrounded by an area of land (the Bailey) which in turn is encompassed by some sort of a barrier such as a ditch. Being dark and dank, the Motte is not a habitation of choice. The only reason for its existence is the desirability of the Bailey, which the combination of the Motte and ditch makes relatively easy to retain despite attack by marauders. When only lightly pressed, the ditch makes small numbers of attackers easy to defeat as they struggle across it: when heavily pressed the ditch is not defensible and so neither is the Bailey. Rather one retreats to the insalubrious but defensible, perhaps impregnable, Motte. Eventually the marauders give up, when one is well placed to reoccupy desirable land.

For my purposes the desirable but only lightly defensible territory of the Motte and Bailey castle, that is to say, the Bailey, represents a philosophical doctrine or position with similar properties: desirable to its proponent but only lightly defensible. The Motte is the defensible but undesired position to which one retreats when hard pressed. I think it is evident that Troll’s Truisms have the Motte and Bailey property, since the exciting falsehoods constitute the desired but indefensible region within the ditch whilst the trivial truth constitutes the defensible but dank Motte to which one may retreat when pressed.

An entire doctrine or theory may be a Motte and Bailey Doctrine just by virtue of having a central core of defensible but not terribly interesting or original doctrines surrounded by a region of exciting but only lightly defensible doctrines. Just as the medieval Motte was often constructed by the stonemason’s art from stone in the surrounding land, the Motte of dull but defensible doctrines is often constructed by the use of the sophist’s art from the desired but indefensible doctrines lying within the ditch.

Diagnosis of a philosophical doctrine as being a Motte and Bailey Doctrine is invariably fatal. Once made it is relatively obvious to those familiar with the doctrine that the doctrine’s survival required a systematic vacillation between exploiting the desired territory and retreating to the Motte when pressed.

The dialectic between many refutations of specific postmodernist doctrines and the postmodernist defences corresponds exactly to the dynamics of Motte and Bailey Doctrines. When pressed with refutation the postmodernists retreat to their Mottes, only to venture out and repossess the desired territory when the refutation is not in immediate evidence. For these reasons, I think the proper diagnosis of postmodernism is precisely that it is a Motte and Bailey Doctrine.

Shackel, N. (2005). The vacuity of postmodernist methodology. Metaphilosophy, 36(3), 295-320.

So the motte-and-bailey doctrine is when you make a bold, controversial statement. Then when somebody challenges you, you claim you were just making an obvious, uncontroversial statement, so you are clearly right and they are silly for challenging you. Then when the argument is over you go back to making the bold, controversial statement.

Some classic examples:

  1. The religious group that acts for all the world like God is a supernatural creator who builds universes, creates people out of other people’s ribs, parts seas, and heals the sick when asked very nicely (bailey). Then when atheists come around and say maybe there’s no God, the religious group objects “But God is just another name for the beauty and order in the Universe! You’re not denying that there’s beauty and order in the Universe, are you?” (motte). Then when the atheists go away they get back to making people out of other people’s ribs and stuff.

  2. Or…”If you don’t accept Jesus, you will burn in Hell forever.” (bailey) But isn’t that horrible and inhuman? “Well, Hell is just another word for being without God, and if you choose to be without God, God will be nice and let you make that choice.” (motte) Oh, well that doesn’t sound so bad, I’m going to keep rejecting Jesus. “But if you reject Jesus, you will BURN in HELL FOREVER and your body will be GNAWED BY WORMS.” But didn’t you just… “Metaphorical worms of godlessness!”

  3. The feminists who constantly argue about whether you can be a real feminist or not without believing in X, Y and Z and wanting to empower women in some very specific way, and who demand everybody support controversial policies like affirmative action or affirmative consent laws (bailey). Then when someone says they don’t really like feminism very much, they object “But feminism is just the belief that women are people!” (motte) Then once the person hastily retreats and promises he definitely didn’t mean women aren’t people, the feminists get back to demanding everyone support affirmative action because feminism, or arguing about whether you can be a feminist and wear lipstick.

  4. Proponents of pseudoscience sometimes argue that their particular form of quackery will cure cancer or take away your pains or heal your crippling injuries (bailey). When confronted with evidence that it doesn’t work, they might argue that people need hope, and even a placebo solution will often relieve stress and help people feel cared for (motte). In fact, some have argued that quackery may be better than real medicine for certain untreatable diseases, because neither real nor fake medicine will help, but fake medicine tends to be more calming and has fewer side effects. But then once you leave the quacks in peace, they will go back to telling less knowledgeable patients that their treatments will cure cancer.

  5. Critics of the rationalist community note that it pushes controversial complicated things like Bayesian statistics and utilitarianism (bailey) under the name “rationality”, but when asked to justify itself defines rationality as “whatever helps you achieve your goals”, which is so vague as to be universally unobjectionable (motte). Then once you have admitted that more rationality is always a good thing, they suggest you’ve admitted everyone needs to learn more Bayesian statistics.

  6. Likewise, singularitarians who predict with certainty that there will be a singularity, because “singularity” just means “a time when technology is so different that it is impossible to imagine” – and really, who would deny that technology will probably get really weird (motte)? But then every other time they use “singularity”, they use it to refer to a very specific scenario of intelligence explosion, which is far less certain and needs a lot more evidence before you can predict it (bailey).

The motte and bailey doctrine sounds kind of stupid and hard-to-fall-for when you put it like that, but all fallacies sound that way when you’re thinking about them. More important, it draws its strength from people’s usual failure to debate specific propositions rather than vague clouds of ideas. If I’m debating “does quackery cure cancer?”, it might be easy to view that as a general case of the problem of “is quackery okay?” or “should quackery be illegal?”, and from there it’s easy to bring up the motte objection.

Scott Alexander, “All in all, another brick in the motte”, Slate Star Codex, 3 November 2014

Suppose I define socialism as, “a system of totalitarian control over the economy, leading inevitably to mass poverty and death.” As a detractor of socialism, this is superficially tempting. But it’s sheer folly, for two distinct reasons.

First, this plainly isn’t what most socialists mean by “socialism.” When socialists call for socialism, they’re rarely requesting totalitarianism, poverty, and death. And when non-socialists listen to socialists, that’s rarely what they hear, either.

Second, if you buy this definition, there’s no point studying actual socialist regimes to see if they in fact are “totalitarian” or “inevitably lead to mass poverty and death.” Mere words tell you what you need to know.

What’s the problem? The problem is that I’ve provided an argumentative definition of socialism. Instead of rigorously distinguishing between what we’re talking about and what we’re saying about it, an argumentative definition deliberately interweaves the two.

The hidden hope, presumably, is that if we control the way people use words, we’ll also control what people think about the world. And it is plainly possible to trick the naive using these semantic tactics. But the epistemic cost is high: You preemptively end conversation with anyone who substantively disagrees with you - and cloud your own thinking in the process. It’s far better to neutrally define socialism as, say, “Government ownership of most of the means of production,” or maybe, “The view that each nation’s wealth is justly owned collectively by its citizens.” You can quibble with these definitions, but people can accept either definition regardless of their position on socialism itself.

Modern discussions are riddled with argumentative definitions, but the most prominent instance, lately, is feminism. Google “feminism,” and what do you get? The top hit: “the advocacy of women’s rights on the basis of the equality of the sexes.” I’ve heard many variants on this: “the theory that men and women should be treated equally,” or even “the radical notion that women are people.”

What’s argumentative about these definitions? Well, in this 2016 Washington Post/Kaiser Family Foundation survey, 40% of women and 67% of men did not consider themselves “feminists.” But over 90% of both genders agreed that “men and women should be social, political, and economic equals.” If Google’s definition of feminism conformed to standard English usage, these patterns would make very little sense. Imagine a world where 90% of men say they’re “bachelors,” but only 40% say they’re “unmarried.”

Bryan Caplan, Against Argumentative Definitions: The Case of Feminism, EconLog, 20 February 2018


Added to diary 21 April 2018

carolyn-kormann

In the Aeneid, Virgil puts forward a prophecy founded on proto-pizza consumption, which foretells where Rome shall be built. “When hunger shall drive you, landed on unknown shores, to eat the tables at your frugal meal,” Aeneas recalls his father telling him, “remember to place your first buildings there.” These “tables,” Aeneas later realizes, falling to his knees, are plates made of hard bread off which his band of Trojan refugees eat lunch. Two millennia later, Camillo (opened in September by the proprietors of the Clinton Hill standby Locanda Vini e Olii) honors pizza’s Virgilian origins—in the ultimate old-timey Brooklyn move—with pinsa, a Roman flatbread.

Carolyn Kormann, “Camillo”, Tables for Two, New Yorker Magazine, November 27, 2017


Added to diary 16 January 2018

christopher-pfalzgraf

In the inspection-time task, a person looks at two vertical lines of unequal length and is asked to identify which of the two is longer. Inspection time is the length of time of stimulus presentation each individual needs in order to discriminate which of the two lines is the longest. Some research suggests that more-intelligent individuals are able to discriminate the lengths of the lines in shorter inspection times.

Robert J. Sternberg, Encyclopædia Britannica, Intelligence

This study replicated and extended Kranzler and Jensen’s [Intelligence 13 (1989) 329] meta-analysis of the relationship between inspection time (IT) and intelligence (IQ). […] meta-analyses were conducted on obtained correlations (r’s) between IT and general IQ. […] For the total sample (N>4100), the r was −.51 after correction for artifactual effects (−.30 prior to correction).

Jennifer L. Grudnik, John H. Kranzler, “Meta-analysis of the relationship between intelligence and inspection time”, Intelligence, Volume 29, Issue 6, November–December 2001, Pages 523-535, https://doi.org/10.1016/S0160-2896(01)00078-2
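The jump from the observed r of −.30 to the corrected −.51 comes from standard psychometric corrections for measurement artifacts. One of these is Spearman’s correction for attenuation, which divides the observed correlation by the geometric mean of the two measures’ reliabilities; a minimal sketch, where the reliability values are purely illustrative assumptions (and note the study also corrects for further artifacts, such as range restriction, which is why its corrected value is larger):

```python
def disattenuate(r_observed, reliability_x, reliability_y):
    """Spearman's correction for attenuation: estimate the correlation
    between two constructs after removing measurement error, given the
    observed correlation and each measure's reliability."""
    return r_observed / (reliability_x * reliability_y) ** 0.5

# Illustrative only: an observed IT-IQ correlation of -.30 with assumed
# reliabilities of .60 (inspection time) and .85 (IQ test).
r_corrected = disattenuate(-0.30, 0.60, 0.85)
print(round(r_corrected, 2))  # -0.42
```

Because reliability coefficients are at most 1, the corrected correlation is always at least as large in magnitude as the observed one.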

Neuropsychologists and cognitive researchers often need quick estimates of global cognitive functioning [i.e., intelligence quotient (IQ)]. […]

The current article examined 11 proxy measures to determine their level of agreement with WAIS-III FSIQ across the entire sample. [(Wechsler Adult Intelligence Scale [Third Edition] full scale IQ test)] […]

Measures evaluated for this study included the Ward-7ST short form developed by Ward and modified for the WAIS-III by Pilgrim et al. (1999), the NAART, the SILS, ITBS, the Barona and Crawford demographic regression formulae, and the five OPIE3 hybrids combining demographic and WAIS-III subtest information. The final estimate examined was the ITBS (Hoover et al., 2003), a nationally recognized standardized school achievement test.

The Pearson correlation and confidence interval between WAIS-III FSIQ and each proxy measure are shown in Table 2. Correlations ranged from r = .25 for the Barona estimate to r = .95 for the Ward-7ST short form.

The performance of the proxy measures across the different cognitive ability groups was examined next. […]

Above-Average IQ Group

[…] The Ward-7ST estimate was the only proxy to correlate above r = .70 for the high ability group. […]

The most important finding of this article is how poorly the IQ proxy measures performed at the tails of the IQ distribution. The proxy measures consistently overestimated the IQ of low-functioning individuals and underestimated the IQs of high-functioning individuals.

Spinks, McKirgan, Arndt, Caspers, Yucuis and Pfalzgraf (2009). IQ estimate smackdown: comparing IQ proxy measures to the WAIS-III. Journal of the International Neuropsychological Society. 15. 590-6. doi:10.1017/S1355617709090766.


Added to diary 27 June 2018

colin-mcginn

Can I decide that my definite descriptions shall all conform to Russell’s theory? I simply stipulate that they do. It appears that I can indeed decide these things—my semantic will can create semantic facts. I can decide what my words and sentences mean, since this is a matter of stipulation. […]

[…] so I decide to mean truth conditions by my sentences, not verification conditions. What is to stop me from doing that? I freely assign truth conditions to my sentences as their meanings. It’s a free country, semantically speaking. I can mean what I choose to mean—and I choose to mean truth conditions. I might even announce outright: “The meaning of my utterances is to be understood in terms of truth conditions, not verification conditions.”

But couldn’t someone of positivist or antirealist sympathies make the contrary decision? This person suspects that the sentences she has inherited from her elders are tainted with metaphysics, and she regards the concept of truth with suspicion; she wants her meaning to be determined entirely by verification conditions. She thus stipulates that her sentences are to be understood in terms of verification conditions, not truth conditions. When she says, “John is in pain” she means that the assertion conditions for that sentence are satisfied (John is groaning, writhing, etc.). She insists that there is no inaccessible private something lurking behind the behavioral evidence—no mysterious “truth condition”; there are just the criteria we use for making assertions of this type. She accordingly stipulates that her meanings shall conform to the verification conditions theory. This does not seem logically impossible: there could be a language conforming to the verificationist conception, given appropriate beliefs and intentions on the part of speakers. The traditional dispute has been over whether our actual language is subject to a truth conditions or a verification conditions theory, not over whether each theory is logically coherent.

Colin McGinn, Philosophical Provocations, Deciding to Mean p. 99ff, 2017


Added to diary 19 January 2018

daniel-defoe

So I went to work; and here I must needs observe, that as reason is the substance and original of the mathematicks, so by stating and squaring every thing by reason, and by making the most rational judgment of things, every man may be in time master of every mechanick art.

Daniel Defoe, Robinson Crusoe, 1719


Added to diary 26 January 2018

daron-acemoglu

In the wake of Hurricane Katrina in the summer of 2005, much of the Gulf Coast had been pummeled by wind and inches upon inches of rain. Water was everywhere, but often undrinkable. Basic provisions we take for granted, like drinking water, weren’t easy to come by and the Federal Emergency Management Agency (FEMA) was caught flat-footed.

In response to catastrophic events like a hurricane or an earthquake, the caricature of private industry is that firms will gouge customers. And sometimes this is true, but in response to Katrina, there was one unlikely hero: Walmart. In fact, the Mayor of Kenner, a suburb of New Orleans, had this to say about Walmart’s response: “. . . the only lifeline in Kenner was the Walmart stores. We didn’t have looting on a mass scale because Walmart showed up with food and water so our people could survive.”

Indeed, in the three weeks after Katrina, Walmart shipped almost 2,500 truckloads of supplies to storm-damaged areas. These truckloads reached affected areas before FEMA, whose troubles responding to the storm were so great that it shipped 30,000 pounds of ice to Maine instead of Mississippi. These stories and more are in Horwitz (2009), which summarizes the divergent responses to Katrina by private industry and FEMA. How was Walmart so effective in its response? Well, it maintains a hurricane response center of its own that rivals FEMA’s, and prior to the storm’s landfall it anticipated a need for generators, water, and food, so it effectively diverted supplies to the area. Walmart’s emergency response center was in full swing as the storm approached with 50 employees managing the response from headquarters.

This sounds like the sort of response FEMA should have produced; so if that’s the job of FEMA, why did Walmart respond so heroically? Simple economics. Walmart understood that there would be an important shift of the demand curve for water, generators, and ice in response to the storm and the textbook response to such shifts is an increase in quantity supplied. Lucky for us, few are better at shipping provisions around the country than Walmart.

Daron Acemoglu, David Laibson, John List, Economics, 1st Edition, 2015


Added to diary 19 May 2018

david-foster-wallace

Laurel Manderley, who like most of the magazine’s high level interns wore exquisitely chosen and coordinated professional attire, permitted herself a small diamond stud in one nostril that Atwater found slightly distracting in face to face exchanges, but she was extremely shrewd and pragmatic—she had actually been voted Most Rational by the Class of ’96 at Miss Porter’s School. She was also all but incapable of writing a simple declarative sentence and thus could not, by any dark stretch of the imagination, ever be any kind of rival for Atwater’s salaryman position at Style. […]

Many of Style’s upper echelon interns convened for a working lunch at Chambers Street’s Tutti Mangia restaurant twice a week, to discuss issues of concern and transact any editorial or other business that was pending, after which each returned to her respective mentor and relayed whatever was germane. It was an efficient practice that saved the magazine’s paid staffers a great deal of time and emotional energy. […]

A fellow WITW staff intern, who also roomed with Laurel Manderley and three other Wellesleyites in a basement sublet near the Williamsburg Bridge, related a vignette that her therapist had once shared with her about dating his wife, whom the therapist had originally met when both of them were going through horrible divorces, and of their going out to dinner on one of their early dates and coming back and sitting with glasses of wine on her sofa, and of she all of a sudden saying, ‘You have to leave,’ and he not understanding, not knowing whether she was kicking him out or whether he’d said something inappropriate or what, and she finally explaining, ‘I have to take a dump and I can’t do it with you here, it’s too stressful,’ using the actual word dump, and of so how the therapist had gone down and stood on the corner smoking a cigarette and looking up at her apartment, watching the light in the bathroom’s frosted window go on, and simultaneously, one, feeling like a bit of an idiot for standing out there waiting for her to finish so he could go back up, and, two, realizing that he loved and respected this woman for baring to him so nakedly the insecurity she had been feeling. He had told the intern that standing on that corner was the first time in quite a long time he had not felt deeply and painfully alone, he had realized. […]

A polished, shallow, earnest, productive, consummate corporate pro. Over the past three years, Skip Atwater had turned in some 70 separate pieces to Style, of which almost 50 saw print and a handful of others ran under rewriters’ names. A volunteer fire company in suburban Tulsa where you had to be a grandmother to join. When Baby Won’t Wait—Moms who never made it to the hospital tell their amazing stories. Drinking and boating: The other DUI. Just who really was Slim Whitman. This Grass Ain’t Blue—Kentucky’s other cash crop. He Delivers—81 year old obstetrician welcomes the grandchild of his own first patient. Former Condit intern speaks out. Today’s forest ranger: He doesn’t just sit in a tower. Holy Rollers—Inline skateathon saves church from default. Eczema: The silent epidemic. Rock ’n’ Roll High School—Which future pop stars made the grade? Nevada bikers rev up the fight against myasthenia gravis. Head of the Parade—From Macy’s to the Tournament of Roses, this float designer has done them all. The All Ads All The Time cable channel. Rock of Ages—These geologists celebrate the millennium in a whole new way. Sometimes he felt that if not for his schipperkes’ love he would simply blow away and dissipate like milkweed. The women who didn’t get picked for Who Wants to Marry a Millionaire: Where did they come from, to what do they return. Leapin’ Lizards—The Gulf Coast’s new alligator plague. One Lucky Bunch of Cats—A terminally ill Lotto winner’s astounding bequest. Those new home cottage cheese makers: Marvel or ripoff? Be(-Happy-)Atitudes—This Orange County pastor claims Christ was no sourpuss. Dramamine and NASA: The untold story. Secret documents reveal Wallis Simpson cheated on Edward VIII. A Whole Lotta Dough—Delaware teen sells $40,000 worth of Girl Scout cookies . . . and isn’t finished yet! For these former agoraphobics, home is not where the heart is. Contra: The thinking person’s square dance. 
[…] Hopping Mad—This triple amputee isn’t taking health care costs lying down. The meth lab next door! Mrs. Gladys Hine, the voice behind over 1,500 automated phone menus. The Dish—This Washington D.C. caterer has seen it all. Computer solitaire: The last addiction? No Sweet Talkin’—Blue M&Ms have these consumers up in arms. Dallas commuter’s airbag nightmare. Menopause and herbs: Exciting new findings. Fat Chance—Lottery cheaters and the heavyweight squad that busts them. Seance secrets of online medium Duwayne Evans. Ice sculpture: How do they do that? […]

David Foster Wallace, The Suffering Channel, in Oblivion, 2004


Added to diary 27 June 2018

GOVERNOR: Guys, the state is getting soft. I can feel softness out there. It’s getting to be one big suburb and industrial park and mall. Too much development. People are getting complacent. They’re forgetting the way this state was historically hewn out of the wilderness. There’s no more hewing.

MR. OBSTAT: You’ve got a point there, Chief.

GOVERNOR: We need a wasteland.

MR. LUNGBERG and MR. OBSTAT: A wasteland?

GOVERNOR: Gentlemen, we need a desert.

MR. LUNGBERG and MR. OBSTAT: A desert?

GOVERNOR: Gentlemen, a desert. A point of savage reference for the good people of Ohio. A place to fear and love. A blasted region. Something to remind us of what we hewed out of. A place without malls. An Other for Ohio’s Self. Cacti and scorpions and the sun beating down. Desolation. A place for people to wander alone. To reflect. Away from everything. Gentlemen, a desert.

MR. OBSTAT: Just a super idea, Chief.

GOVERNOR: Thanks, Neil. Gentlemen may I present Mr. Ed Roy Yancey, of Industrial Desert Design, Dallas. They did Kuwait.

David Foster Wallace, The Broom of the System, 1987


Added to diary 27 June 2018

The nearby StairMasters were used almost exclusively by midlevel financial analysts, all of whom had bristly cybernetic haircuts.

David Foster Wallace, The Suffering Channel, in Oblivion, 2004


Added to diary 27 June 2018

I have a truly horrible dream which invariably occurs on the nights I am Lenoreless in my bed. I am attempting to stimulate the clitoris of Queen Victoria with the back of a tortoise-shell hairbrush. Her voluminous skirts swirl around her waist and my head. Her enormous cottage-cheese thighs rest heavy on my shoulders, spill out in front of my sweating face. The clanking of pounds of jewelry is heard as she shifts to offer herself at best advantage. There are odors. The Queen’s impatient breathing is thunder above me as I kneel at the throne. Time passes. Finally her voice is heard, overhead, metalled with disgust and frustration: “We are not aroused.” I am punched in the arm by a guard and flung into a pit at the bottom of which boil the figures of countless mice. I awake with a mouth full of fur. Begging for more time. A ribbed brush.

David Foster Wallace, The Broom of the System, 1987


Added to diary 27 June 2018

I had a sort of detached interest in the whole analysis scene, really. My problems were without exception very tiny.

David Foster Wallace, The Broom of the System, 1987


Added to diary 27 June 2018

Darlene Lilley, who was married and the mother of a large-headed toddler whose photograph adorned her desk and hutch at Team Δy, had, three fiscal quarters past, been subjected to unwelcome sexual advances by one of the four Senior Research Directors who liaisoned between the Field and Technical Processing teams and the upper echelons of Team Δy under Alan Britton, advances and duress more than sufficient for legal action in Schmidt’s and most of the rest of their Field Team’s opinions, which advances she had been able to deflect and defuse in an enormously skillful manner without raising any of the sort of hue and cry that could divide a firm along gender and/or political lines, and things had been allowed to cool down and blow over to such an extent that Darlene Lilley, Schmidt, and the three other members of their Field Team all now still enjoyed a productive working relationship with this dusky and pungent older Senior Research Director, who was now in fact overseeing Field research on the Mister Squishy-R.S.B. 
project, and Terry Schmidt was personally somewhat in awe of the self-possession and interpersonal savvy Darlene had displayed throughout the whole tense period, an awe tinged with an unwilled element of romantic attraction, and it is true that Schmidt at night in his condominium sometimes without feeling as if he could help himself masturbated to thoughts of having moist slapping intercourse with Darlene Lilley on one of the ponderous laminate conference tables of the firms they conducted statistical market research for, and this was a tertiary cause of what practicing social psychologists would call his MAM* with the board’s marker as he used a modulated tone of off-the-record confidence to tell the Focus Group about some of the more dramatic travails Reesemeyer Shannon Belt had had with establishing the product’s brand-identity and coming up with the test name Felony!, all the while envisioning in a more autonomic part of his brain Darlene delivering nothing but the standard minimal pre-GRDS instructions for her own Focus Group as she stood in her dark Hanes hosiery and the burgundy high heels she kept at work in the bottom-right cabinet of her hutch and changed out of her crosstrainers into every morning the moment she sat down and rolled her chair with small pretend whimpers of effort over to the hutch’s cabinets, sometimes (unlike Schmidt) pacing slightly in front of the whiteboard, sometimes planting one heel and rotating her foot slightly or crossing her sturdy ankles to lend her standing posture a carelessly demure aspect, sometimes taking her delicate oval eyeglasses off and not chewing on the arm but holding the glasses in such a way and in such proximity to her mouth that one got the idea she could, at any moment, put one of the frames’ arm’s plastic earguards just inside her mouth and nibble on it absently, an unconscious gesture of shyness and concentration at once.

David Foster Wallace, Mister Squishy, in Oblivion (2004)


Added to diary 28 March 2018

So the fact that I had chosen to be supposedly ‘honest’ and to diagnose myself aloud was in fact just one more move in my campaign to make sure Dr. Gustafson understood that as a patient I was uniquely acute and self-aware, and that there was very little chance he was going to see or diagnose anything about me that I wasn’t already aware of and able to turn to my own tactical advantage in terms of creating whatever image or impression of myself I wanted him to see at that moment. […]

I’ll spare you any more examples, for instance I’ll spare you the literally countless examples of my fraudulence with girls—with the ladies as they say—in just about every dating relationship I ever had, or the almost unbelievable amount of fraudulence and calculation involved in my career—not just in terms of manipulating the consumer and manipulating the client into trusting that your agency’s ideas are the best way to manipulate the consumer, but in the inter-office politics of the agency itself, like for example in sizing up what sorts of things your superiors want to believe (including the belief that they’re smarter than you and that that’s why they’re your superior) and then giving them what they want but doing it just subtly enough that they never get a chance to view you as a sycophant or yes-man (which they want to believe they do not really want) but instead see you as a tough-minded independent thinker who from time to time bows to the weight of their superior intelligence and creative firepower, etc. […]

Plus it didn’t exactly seem like a coincidence that the cancer he [Dr. Gustafson] was even then harboring was in his colon—that shameful, dirty, secret place right near the rectum—with the idea being that using your rectum or colon to secretly harbor an alien growth was a blatant symbol both of homosexuality and of the repressive belief that its open acknowledgment would equal disease and death. Dr. Gustafson and I both had a good laugh over this one after we’d both died and were outside linear time and in the process of dramatic change, you can bet on that. […]

I also inserted that there was also a good possibility that, when all was said and done, I was nothing but just another fast-track yuppie who couldn’t love, and that I found the banality of this unendurable, largely because I was evidently so hollow and insecure that I had a pathological need to see myself as somehow exceptional or outstanding at all times. Without going into much explanation or argument, I also told Fern that if her initial reaction to these reasons for my killing myself was to think that I was being much, much too hard on myself, then she should know that I was already aware that that was the most likely reaction my note would produce in her, and had probably deliberately constructed the note to at least in part prompt just that reaction, just the way my whole life I’d often said and done things designed to prompt certain people to believe that I was a genuinely outstanding person whose personal standards were so high that he was far too hard on himself, which in turn made me appear attractively modest and unsmug, and was a big reason for my popularity with so many people in all different avenues of my life—what Beverly-Elizabeth Slane had termed my ‘talent for ingratiation’—but was nevertheless basically calculated and fraudulent. I also told Fern that I loved her very much, and asked her to relay these same sentiments to Marin County for me.

David Foster Wallace, Good Old Neon, in Oblivion (2004)


Added to diary 28 March 2018

I’ll say God seems to have a kind of laid-back management style I’m not crazy about. I’m pretty much anti-death. God looks by all accounts to be pro-death. I’m not seeing how we can get together on this issue, he and I […].

David Foster Wallace, Infinite Jest, 1996


Added to diary 08 February 2018

Have you ever wondered where these particular types of unfunny T-shirts come from? the ones that say things like “HORNEY IN 2.5” or “Impeach President Clinton… AND HER HUSBAND TOO!!”? Mystery solved. They come from State Fair Expos. Right here on the main floor’s a monster-sized booth, more like an open bodega, with shirts and laminated buttons and license-plate borders, all of which, for this subphylum, Testify. This booth seems integral, somehow. The seamiest fold of the Midwestern underbelly. The Lascaux Caves of a certain rural mentality. “40 Isn’t Old… IF YOU’RE A TREE” and “The More Hair I Lose, The More Head I Get” and “Retired: No Worries, No Paycheck” and “I Fight Poverty… I WORK!!” As with New Yorker cartoons, there’s an elusive sameness about the shirts’ messages. A lot serve to I.D. the wearer as part of a certain group and then congratulate that group for its sexual dynamism—“Coon Hunters Do It All Night” and “Hairdressers Tease It Till It Stands Up” and “Save A Horse: Ride A Cowboy.” Some presume a weird kind of aggressive relation between the shirt’s wearer and its reader—“We’d Get Along Better… If You Were A BEER” and “Lead Me Not Into Temptation, I Know The Way MYSELF” and “What Part Of NO Don’t You Understand?” There’s something complex and compelling about the fact that these messages are not just uttered but worn, like they’re a badge or credential. The message compliments the wearer somehow, and the wearer in turn endorses the message by spreading it across his chest, which fact is then in further turn supposed to endorse the wearer as a person of plucky or risqué wit. It’s also meant to cast the wearer as an Individual, the sort of person who not only makes but wears a Personal Statement. 
What’s depressing is that the T-shirts’ statements are not only preprinted and mass-produced, but so dumbly unfunny that they serve to place the wearer squarely in that large and unfortunate group of people who think such messages not only Individual but funny. It all gets tremendously complex and depressing. The lady running the booth’s register is dressed like a ’68 Yippie but has a hard carny face and wants to know why I’m standing here memorizing T-shirts. All I can manage to tell her is that the “HORNEY” on these “2.5 BEERS”-shirts is misspelled; and now I really feel like an East-Coast snob, laying judgments and semiotic theories on these people who ask of life only a Republican in the White House and a black velvet Elvis on the wood-grain mantel of their mobile home.

David Foster Wallace, Ticket to the Fair, Harper’s July 1994


Added to diary 26 January 2018

They are sort of disingenuous, I believe, these magazine people. They say all they want is a sort of really big experiential postcard—go, plow the Caribbean in style, come back, say what you’ve seen. I have seen a lot of really big white ships. I have seen schools of little fish with fins that glow. I have seen a toupee on a thirteen-year-old boy. (The glowing fish liked to swarm between our hull and the cement of the pier whenever we docked.) I have seen the north coast of Jamaica. I have seen and smelled all 145 cats inside the Ernest Hemingway Residence in Key West FL. I now know the difference between straight Bingo and Prize-O, and what it is when a Bingo jackpot “snowballs.” I have seen camcorders that practically required a dolly; I’ve seen fluorescent luggage and fluorescent sunglasses and fluorescent pince-nez and over twenty different makes of rubber thong. I have heard steel drums and eaten conch fritters and watched a woman in silver lamé projectile-vomit inside a glass elevator. I have pointed rhythmically at the ceiling to the 2:4 beat of the exact same disco music I hated pointing at the ceiling to in 1977. I have learned that there are actually intensities of blue beyond very, very bright blue. I have eaten more and classier food than I’ve ever eaten, and eaten this food during a week when I’ve also learned the difference between “rolling” in heavy seas and “pitching” in heavy seas. I have heard a professional comedian tell folks, without irony, “But seriously.” I have seen fuchsia pantsuits and menstrual-pink sportcoats and maroon-and-purple warm-ups and white loafers worn without socks. I have seen professional blackjack dealers so lovely they make you want to run over to their table and spend every last nickel you’ve got playing blackjack. I have heard upscale adult U.S. 
citizens ask the Guest Relations Desk whether snorkeling necessitates getting wet, whether the skeetshooting will be held outside, whether the crew sleeps on board, and what time the Midnight Buffet is. I now know the precise mixological difference between a Slippery Nipple and a Fuzzy Navel. I know what a Coco Loco is. I have in one week been the object of over 1500 professional smiles. I have burned and peeled twice. I have shot skeet at sea. Is this enough?

David Foster Wallace, Shipping Out, Harper’s January 1996


Added to diary 26 January 2018

david-hume

In every system of morality, which I have hitherto met with, I have always remark’d, that the author proceeds for some time in the ordinary way of reasoning, and establishes the being of a God, or makes observations concerning human affairs; when of a sudden I am surpriz’d to find, that instead of the usual copulations of propositions, is, and is not, I meet with no proposition that is not connected with an ought, or an ought not. This change is imperceptible; but is, however, of the last consequence. For as this ought, or ought not, expresses some new relation or affirmation, ’tis necessary that it shou’d be observ’d and explain’d; and at the same time that a reason should be given, for what seems altogether inconceivable, how this new relation can be a deduction from others, which are entirely different from it.

David Hume, A Treatise of Human Nature, Book III, Part I, Section I


Added to diary 21 April 2018

david-laibson

In the wake of Hurricane Katrina in the summer of 2005, much of the Gulf Coast had been pummeled by wind and inches upon inches of rain. Water was everywhere, but often undrinkable. Basic provisions we take for granted, like drinking water, weren’t easy to come by and the Federal Emergency Management Agency (FEMA) was caught flat-footed.

In response to catastrophic events like a hurricane or an earthquake, the caricature of private industry is that firms will gouge customers. And sometimes this is true, but in response to Katrina, there was one unlikely hero: Walmart. In fact, the Mayor of Kenner, a suburb of New Orleans, had this to say about Walmart’s response: “. . . the only lifeline in Kenner was the Walmart stores. We didn’t have looting on a mass scale because Walmart showed up with food and water so our people could survive.”

Indeed, in the three weeks after Katrina, Walmart shipped almost 2,500 truckloads of supplies to storm-damaged areas. These truckloads reached affected areas before FEMA, whose troubles responding to the storm were so great that it shipped 30,000 pounds of ice to Maine instead of Mississippi. These stories and more are in Horwitz (2009), which summarizes the divergent responses to Katrina by private industry and FEMA. How was Walmart so effective in its response? Well, it maintains a hurricane response center of its own that rivals FEMA’s, and prior to the storm’s landfall it anticipated a need for generators, water, and food, so it effectively diverted supplies to the area. Walmart’s emergency response center was in full swing as the storm approached with 50 employees managing the response from headquarters.

This sounds like the sort of response FEMA should have produced; so if that’s the job of FEMA, why did Walmart respond so heroically? Simple economics. Walmart understood that there would be an important shift of the demand curve for water, generators, and ice in response to the storm and the textbook response to such shifts is an increase in quantity supplied. Lucky for us, few are better at shipping provisions around the country than Walmart.

Daron Acemoglu, David Laibson, John List, Economics, 1st Edition, 2015
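The textbook mechanism the passage invokes can be sketched numerically. This is an illustration only: the linear demand and supply curves and all parameter values below are made up for the example, not taken from the book.

```python
# Illustrative sketch of the "textbook response": an outward demand shift
# raises both the equilibrium price and the quantity supplied.

def equilibrium(a, b, c, d):
    """Solve demand Qd = a - b*p against supply Qs = c + d*p."""
    p = (a - c) / (b + d)
    return p, c + d * p

# Pre-storm market for bottled water (hypothetical units).
p0, q0 = equilibrium(a=100, b=2, c=10, d=4)

# The storm shifts the demand curve outward (intercept a rises); suppliers
# move along their curve to a higher quantity supplied at the new price.
p1, q1 = equilibrium(a=160, b=2, c=10, d=4)

print(p0, q0)  # 15.0 70.0
print(p1, q1)  # 25.0 110.0
```

The point of the sketch is only the comparative statics: when demand shifts out, equilibrium quantity supplied rises, which is the response Walmart anticipated.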


Added to diary 19 May 2018

david-rothschild

To reiterate, in just six days, The New York Times ran as many cover stories about Hillary Clinton’s emails as they did about all policy issues combined in the 69 days leading up to the election (and that does not include the three additional articles on October 18, and November 6 and 7, or the two articles on the emails taken from John Podesta). This intense focus on the email scandal cannot be written off as inconsequential: The Comey incident and its subsequent impact on Clinton’s approval rating among undecided voters could very well have tipped the election.

Duncan Watts and David Rothschild, Don’t blame the election on fake news. Blame it on the media, Columbia Journalism Review, 5 December 2017

Analyses by Columbia Journalism Review, the Berkman Klein Center for Internet and Society at Harvard University, and the Shorenstein Center at the Harvard Kennedy School show that the Clinton email controversy received more coverage in mainstream media outlets than any other topic during the 2016 presidential election.

Wikipedia contributors, Hillary Clinton email controversy


Added to diary 30 January 2018

douglas-adams

The real Universe arched sickeningly away beneath them. Various pretend ones flitted silently by, like mountain goats. Primal light exploded, splattering space-time as with gobbets of Jell-O. Time blossomed, matter shrank away. The highest prime number coalesced quietly in a corner and hid itself away for ever.

Douglas Adams, The Hitchhiker’s Guide to the Galaxy (1979)


Added to diary 28 March 2018

duncan-watts

To reiterate, in just six days, The New York Times ran as many cover stories about Hillary Clinton’s emails as they did about all policy issues combined in the 69 days leading up to the election (and that does not include the three additional articles on October 18, and November 6 and 7, or the two articles on the emails taken from John Podesta). This intense focus on the email scandal cannot be written off as inconsequential: The Comey incident and its subsequent impact on Clinton’s approval rating among undecided voters could very well have tipped the election.

Duncan Watts and David Rothschild, Don’t blame the election on fake news. Blame it on the media, Columbia Journalism Review, 5 December 2017

Analyses by Columbia Journalism Review, the Berkman Klein Center for Internet and Society at Harvard University, and the Shorenstein Center at the Harvard Kennedy School show that the Clinton email controversy received more coverage in mainstream media outlets than any other topic during the 2016 presidential election.

Wikipedia contributors, Hillary Clinton email controversy


Added to diary 30 January 2018

eliezer-yudkowsky

What capitalism is good at:

If I had to name the single epistemic feat at which modern human civilization is most adequate, the peak of all human power of estimation, I would unhesitatingly reply, “Short-term relative pricing of liquid financial assets, like the price of S&P 500 stocks relative to other S&P 500 stocks over the next three months.” This is something into which human civilization puts an actual effort. Millions of dollars are offered to smart, conscientious people with physics PhDs to induce them to enter the field. These people are then offered huge additional payouts conditional on actual performance—especially outperformance relative to a baseline. Large corporations form to specialize in narrow aspects of price-tuning. They have enormous computing clusters, vast historical datasets, and competent machine learning professionals. They receive repeated news of success or failure in a fast feedback loop. The knowledge aggregation mechanism—namely, prices that equilibrate supply and demand for the financial asset—has proven to work beautifully, and acts to sum up the wisdom of all those highly motivated actors. An actor that spots a 1% systematic error in the aggregate estimate is rewarded with a billion dollars—in a process that also corrects the estimate. Barriers to entry are not zero (you can’t get the loans to make a billion-dollar corrective trade), but there are thousands of diverse intelligent actors who are all individually allowed to spot errors, correct them, and be rewarded, with no central veto. This is certainly not perfect, but it is literally as good as it gets on modern-day Earth.[…]

In the thickly traded parts of the stock market, where the collective power of human civilization is truly at its strongest, I doff my hat, I put aside my pride and kneel in true humility to accept the market’s beliefs as though they were my own, knowing that any impulse I feel to second-guess and every independent thought I have to argue otherwise is nothing but my own folly. If my perceptions suggest an exploitable opportunity, then my perceptions are far more likely mistaken than the markets. That is what it feels like to look upon a civilization doing something adequately.[…]

Efficiency vs Inexploitability vs Adequacy:

So the distinction is: Efficiency: “Microsoft’s stock price is neither too low nor too high, relative to anything you can possibly know about Microsoft’s stock price.” Inexploitability: “Some houses and housing markets are overpriced, but you can’t make a profit by short-selling them, and you’re unlikely to find any substantially underpriced houses—the market as a whole isn’t rational, but it contains participants who have money and understand housing markets as well as you do.” Adequacy: “Okay, the medical sector is a wildly crazy place where different interventions have orders-of-magnitude differences in cost-effectiveness, but at least there’s no well-known but unused way to save ten thousand lives for just ten dollars each, right? Somebody would have picked up on it! Right?!”[…]

Eliezer Yudkowsky, Inadequate Equilibria, 2017


Added to diary 21 April 2018

everything Bayesians said seemed perfectly straightforward and simple, the obvious way I would do it myself; whereas the things frequentists said sounded like the elaborate, warped, mad blasphemy of dreaming Cthulhu. I didn’t choose to become a Bayesian any more than fishes choose to breathe water.

Eliezer Yudkowsky, My Bayesian Enlightenment, LessWrong, 5 October 2008


Added to diary 08 February 2018

the brain can’t successfully multiply by eight and get a larger quantity than it started with. But it ought to, normatively speaking.

Eliezer Yudkowsky, The “Intuitions” Behind “Utilitarianism”, LessWrong, 28 January 2008


Added to diary 02 February 2018

The ancient war between the Bayesians and the accursèd frequentists stretches back through decades, and I’m not going to try to recount that elder history in this blog post.

But one of the central conflicts is that Bayesians expect probability theory to be… what’s the word I’m looking for? “Neat?” “Clean?” “Self-consistent?”

As Jaynes says, the theorems of Bayesian probability are just that, theorems in a coherent proof system. No matter what derivations you use, in what order, the results of Bayesian probability theory should always be consistent – every theorem compatible with every other theorem. […]

Math! That’s the word I was looking for. Bayesians expect probability theory to be math. That’s why we’re interested in Cox’s Theorem and its many extensions, showing that any representation of uncertainty which obeys certain constraints has to map onto probability theory. Coherent math is great, but unique math is even better.

And yet… should rationality be math? It is by no means a foregone conclusion that probability should be pretty. The real world is messy – so shouldn’t you need messy reasoning to handle it? Maybe the non-Bayesian statisticians, with their vast collection of ad-hoc methods and ad-hoc justifications, are strictly more competent because they have a strictly larger toolbox. It’s nice when problems are clean, but they usually aren’t, and you have to live with that.

After all, it’s a well-known fact that you can’t use Bayesian methods on many problems because the Bayesian calculation is computationally intractable. So why not let many flowers bloom? Why not have more than one tool in your toolbox?

That’s the fundamental difference in mindset. Old School statisticians thought in terms of tools, tricks to throw at particular problems. Bayesians – at least this Bayesian, though I don’t think I’m speaking only for myself – we think in terms of laws. […]

No, you can’t always do the exact Bayesian calculation for a problem. Sometimes you must seek an approximation; often, indeed. This doesn’t mean that probability theory has ceased to apply, any more than your inability to calculate the aerodynamics of a 747 on an atom-by-atom basis implies that the 747 is not made out of atoms. Whatever approximation you use, it works to the extent that it approximates the ideal Bayesian calculation – and fails to the extent that it departs.

Bayesianism’s coherence and uniqueness proofs cut both ways. Just as any calculation that obeys Cox’s coherency axioms (or any of the many reformulations and generalizations) must map onto probabilities, so too, anything that is not Bayesian must fail one of the coherency tests. This, in turn, opens you to punishments like Dutch-booking (accepting combinations of bets that are sure losses, or rejecting combinations of bets that are sure gains).

You may not be able to compute the optimal answer. But whatever approximation you use, both its failures and successes will be explainable in terms of Bayesian probability theory. You may not know the explanation; that does not mean no explanation exists.

So you want to use a linear regression, instead of doing Bayesian updates? But look to the underlying structure of the linear regression, and you see that it corresponds to picking the best point estimate given a Gaussian likelihood function and a uniform prior over the parameters.

You want to use a regularized linear regression, because that works better in practice? Well, that corresponds (says the Bayesian) to having a Gaussian prior over the weights.

Sometimes you can’t use Bayesian methods literally; often, indeed. But when you can use the exact Bayesian calculation that uses every scrap of available knowledge, you are done. You will never find a statistical method that yields a better answer. You may find a cheap approximation that works excellently nearly all the time, and it will be cheaper, but it will not be more accurate. Not unless the other method uses knowledge, perhaps in the form of disguised prior information, that you are not allowing into the Bayesian calculation; and then when you feed the prior information into the Bayesian calculation, the Bayesian calculation will again be equal or superior.

When you use an Old Style ad-hoc statistical tool with an ad-hoc (but often quite interesting) justification, you never know if someone else will come up with an even more clever tool tomorrow. But when you can directly use a calculation that mirrors the Bayesian law, you’re done – like managing to put a Carnot heat engine into your car. It is, as the saying goes, “Bayes-optimal”.

Eliezer Yudkowsky, Beautiful Probability, 14 January 2008
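The regularized-regression correspondence the passage asserts can be checked directly: the ridge estimator and the MAP estimate under a Gaussian prior on the weights are the same formula. The data, seed, and penalty value below are arbitrary illustration choices, not anything from the post.

```python
# Sketch: ridge regression == MAP estimate with a Gaussian prior on weights.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=50)
lam = 0.7

# Ridge: minimize ||y - Xw||^2 + lam * ||w||^2 (closed form).
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)

# Bayesian MAP: Gaussian likelihood with noise variance s2, and a Gaussian
# prior with variance s2/lam on each weight, gives the same posterior mode.
s2 = 1.0
prior_precision = lam / s2
w_map = np.linalg.solve(X.T @ X / s2 + prior_precision * np.eye(3),
                        X.T @ y / s2)

assert np.allclose(w_ridge, w_map)
```

Multiplying the MAP normal equations through by s2 recovers the ridge normal equations exactly, which is why the two solutions coincide for any choice of lam.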


Added to diary 15 January 2018

In Empty Labels and then Replace the Symbol with the Substance, we saw the technique of replacing a word with its definition – the example being given:

All [mortal, ~feathers, bipedal] are mortal. Socrates is a [mortal, ~feathers, bipedal]. Therefore, Socrates is mortal.

Why, then, would you even want to have a word for “human”? Why not just say “Socrates is a mortal featherless biped”?

Because it’s helpful to have shorter words for things that you encounter often. If your code for describing single properties is already efficient, then there will not be an advantage to having a special word for a conjunction – like “human” for “mortal featherless biped” – unless things that are mortal and featherless and bipedal, are found more often than the marginal probabilities would lead you to expect.

In efficient codes, word length corresponds to probability—so the code for Z1Y2 will be just as long as the code for Z1 plus the code for Y2, unless P(Z1Y2) > P(Z1)P(Y2), in which case the code for the word can be shorter than the codes for its parts.

And this in turn corresponds exactly to the case where we can infer some of the properties of the thing, from seeing its other properties. It must be more likely than the default that featherless bipedal things will also be mortal.

Of course the word “human” really describes many, many more properties – when you see a human-shaped entity that talks and wears clothes, you can infer whole hosts of biochemical and anatomical and cognitive facts about it. To replace the word “human” with a description of everything we know about humans would require us to spend an inordinate amount of time talking. But this is true only because a featherless talking biped is far more likely than default to be poisonable by hemlock, or have broad nails, or be overconfident.

Having a word for a thing, rather than just listing its properties, is a more compact code precisely in those cases where we can infer some of those properties from the other properties. (With the exception perhaps of very primitive words, like “red”, that we would use to send an entirely uncompressed description of our sensory experiences. But by the time you encounter a bug, or even a rock, you’re dealing with nonsimple property collections, far above the primitive level.)

So having a word “wiggin” for green-eyed black-haired people, is more useful than just saying “green-eyed black-haired person”, precisely when:

  1. Green-eyed people are more likely than average to be black-haired (and vice versa), meaning that we can probabilistically infer green eyes from black hair or vice versa; or
  2. Wiggins share other properties that can be inferred at greater-than-default probability. In this case we have to separately observe the green eyes and black hair; but then, after observing both these properties independently, we can probabilistically infer other properties (like a taste for ketchup).

One may even consider the act of defining a word as a promise to this effect. Telling someone, “I define the word ‘wiggin’ to mean a person with green eyes and black hair”, by Gricean implication, asserts that the word “wiggin” will somehow help you make inferences / shorten your messages.

If green-eyes and black hair have no greater than default probability to be found together, nor does any other property occur at greater than default probability along with them, then the word “wiggin” is a lie: The word claims that certain people are worth distinguishing as a group, but they’re not.

In this case the word “wiggin” does not help describe reality more compactly—it is not defined by someone sending the shortest message—it has no role in the simplest explanation. Equivalently, the word “wiggin” will be of no help to you in doing any Bayesian inference. Even if you do not call the word a lie, it is surely an error.

And the way to carve reality at its joints, is to draw your boundaries around concentrations of unusually high probability density in Thingspace.

Eliezer Yudkowsky, Mutual information and density in thingspace, 23 February 2008
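The code-length claim above is easy to make concrete with an optimal-code approximation: an event of probability p gets a word of about -log2(p) bits, so a dedicated word for a conjunction pays off only when the conjunction is more probable than independence would predict. The probabilities below are made-up illustration values.

```python
# Sketch: when P(green AND black) > P(green) * P(black), a single word for
# the conjunction ("wiggin") is shorter than the two component words.
from math import log2

p_green = 0.10    # P(green eyes)
p_black = 0.20    # P(black hair)
p_wiggin = 0.05   # P(green eyes AND black hair), > 0.10 * 0.20 = 0.02

bits_separately = -log2(p_green) + -log2(p_black)  # ~5.64 bits
bits_joint_word = -log2(p_wiggin)                  # ~4.32 bits

assert p_wiggin > p_green * p_black
assert bits_joint_word < bits_separately
```

If the conjunction occurred exactly at the independent rate (0.02), the joint word would cost -log2(0.02) ≈ 5.64 bits, no shorter than spelling out both properties, which is the sense in which such a word "is a lie."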

And so even the labels that we use for words are not quite arbitrary. The sounds that we attach to our concepts can be better or worse, wiser or more foolish. Even apart from considerations of common usage!

Eliezer Yudkowsky, Entropy, and Short Codes, 23 February 2008



Added to diary 15 January 2018

The Foresight Institute suggests, among other sensible proposals, that the replication instructions for any nanodevice should be encrypted. Moreover, encrypted such that flipping a single bit of the encoded instructions will entirely scramble the decrypted output. If all nanodevices produced are precise molecular copies, and moreover, any mistakes on the assembly line are not heritable because the offspring got a digital copy of the original encrypted instructions for use in making grandchildren, then your nanodevices ain’t gonna be doin’ much evolving.

Eliezer Yudkowsky, No Evolutions for Corporations or Nanodevices, 17 November 2007


Added to diary 15 January 2018

eric-helland

The Baumol effect is easy to explain but difficult to grasp. In 1826, when Beethoven’s String Quartet No. 14 was first played, it took four people 40 minutes to produce a performance. In 2010, it still took four people 40 minutes to produce a performance. Stated differently, in the nearly 200 years between 1826 and 2010, there was no growth in string quartet labor productivity. In 1826 it took 2.66 labor hours to produce one unit of output, and it took 2.66 labor hours to produce one unit of output in 2010.

Fortunately, most other sectors of the economy have experienced substantial growth in labor productivity since 1826. We can measure growth in labor productivity in the economy as a whole by looking at the growth in real wages. In 1826 the average hourly wage for a production worker was $1.14. In 2010 the average hourly wage for a production worker was $26.44, approximately 23 times higher in real (inflation-adjusted) terms. Growth in average labor productivity has a surprising implication: it makes the output of slow productivity-growth sectors (relatively) more expensive. In 1826, the average wage of $1.14 meant that the 2.66 hours needed to produce a performance of Beethoven’s String Quartet No. 14 had an opportunity cost of just $3.02. At a wage of $26.44, the 2.66 hours of labor in music production had an opportunity cost of $70.33. Thus, in 2010 it was 23 times (70.33/3.02) more expensive to produce a performance of Beethoven’s String Quartet No. 14 than in 1826. In other words, one had to give up more other goods and services to produce a music performance in 2010 than one did in 1826. Why? Simply because in 2010, society was better at producing other goods and services than in 1826.

The 23 times increase in the relative price of the string quartet is the driving force of Baumol’s cost disease. The focus on relative prices tells us that the cost disease is misnamed. The cost disease is not a disease but a blessing. To be sure, it would be better if productivity increased in all industries, but that is just to say that more is better. There is nothing negative about productivity growth, even if it is unbalanced.

Eric Helland and Alexander T. Tabarrok, Why Are the Prices So Damn High?, 2019
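The passage's arithmetic can be verified in a few lines, using the figures exactly as quoted (the text rounds the 1826 opportunity cost to $3.02; the unrounded product is $3.03):

```python
# Check the Baumol-effect arithmetic in the passage above.
labor_hours = 2.66          # four players, 40 minutes each
wage_1826 = 1.14            # real hourly production wage, as quoted
wage_2010 = 26.44

cost_1826 = labor_hours * wage_1826   # opportunity cost of one performance
cost_2010 = labor_hours * wage_2010

print(round(cost_1826, 2))            # ~3.03 (text rounds to 3.02)
print(round(cost_2010, 2))            # 70.33
print(round(cost_2010 / cost_1826))   # 23
```

Since the labor input cancels in the ratio, the 23x rise in the relative price of a performance is just the 23x rise in the real wage, which is the whole mechanism of the cost disease.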


Added to diary 09 June 2019

eric-zitzewitz

We consider a simple prediction market in which traders buy and sell an all-or-nothing contract (a binary option) paying $1 if a specific event occurs, and nothing otherwise. There is heterogeneity in beliefs among the trading population, and following Manski’s notation, we denote trader j’s belief that the event will occur as q_j. These beliefs are orthogonal to wealth levels y, and are drawn from a distribution F(q). Individuals are price-takers and trade so as to maximize their subjectively expected utility. Wealth is only affected by the event via the prediction market, so there is no hedging motive for trading the contract. We first consider the case where traders have log utility, and we endogenously derive their trading activity, given the price of the contract is π. […] Thus, in this simple model, market prices are equal to the mean belief among traders.

[…]

We now turn to relaxing some of our assumptions. To preview, relaxing the assumption that budgets are orthogonal to beliefs yields the intuitively plausible result that prediction market prices are a wealth-weighted average of beliefs among market traders. And second, the result that the equilibrium price is exactly equal to the (weighted) mean of beliefs reflects the fact that demand for the prediction security is linear in beliefs, which is itself a byproduct of assuming log utility. Calibrating alternative utility functions, we find that prices can systematically diverge from mean beliefs, but that this divergence is typically small. […] The extent of the deviation depends crucially on how widely dispersed beliefs are.

[…]

We start by assuming that beliefs are drawn from a uniform distribution with a range of 10 percentage points, and solve for the mapping between mean beliefs and prices implied by each of the utility functions shown in Figure 1. (We rescale beliefs outside the (0,1) range to 0 or 1.) Figure 2 shows that for moderately dispersed beliefs, prediction market prices tend to coincide fairly closely with the mean beliefs. While there is some divergence, it is typically within a percentage point, although the risk neutral model yields larger differences. […]

Figure 2

Figure 3 shows the mapping from prices to probabilities when beliefs are more disperse (in this case the standard deviation and range were doubled). As the dispersion of beliefs widens, the number of traders with extreme beliefs increases, and hence the non-linear response to the divergence between beliefs and prices is increasingly important. As such, the biases evident in Figure 2 become even more evident as the distribution of beliefs widens. Even so, for utility functions with standard levels of risk aversion, these biases are small.

Figure 3

Justin Wolfers and Eric Zitzewitz, Interpreting Prediction Market Prices as Probabilities, NBER Working Paper No. 12200, May 2006
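The log-utility case quoted above has a closed form: writing q for a trader’s belief, y for wealth, and π for the contract price, the first-order condition gives demand x* = y(q − π)/(π(1 − π)), and market clearing makes the price a wealth-weighted mean of beliefs. A minimal sketch checking this numerically (the variable names and the bisection solver are mine, not the paper’s):

```python
import random

def demand(y, q, price):
    """Optimal holding of the $1-if-event contract for a log-utility
    trader with wealth y and belief q, at the given price.
    From the first-order condition: x* = y (q - price) / (price (1 - price))."""
    return y * (q - price) / (price * (1.0 - price))

def clearing_price(wealths, beliefs):
    """Price at which aggregate demand is zero, found by bisection
    (aggregate demand is strictly decreasing in price)."""
    lo, hi = 1e-9, 1 - 1e-9
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        excess = sum(demand(y, q, mid) for y, q in zip(wealths, beliefs))
        if excess > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

random.seed(0)
beliefs = [random.uniform(0.55, 0.65) for _ in range(1000)]  # dispersed beliefs
wealths = [1.0] * len(beliefs)                               # equal budgets

p = clearing_price(wealths, beliefs)
mean_belief = sum(beliefs) / len(beliefs)
print(round(p, 6), round(mean_belief, 6))  # the two coincide

# With unequal budgets, the clearing price is the wealth-weighted mean:
wealths2 = [random.uniform(0.5, 5.0) for _ in beliefs]
p2 = clearing_price(wealths2, beliefs)
ww_mean = sum(y * q for y, q in zip(wealths2, beliefs)) / sum(wealths2)
print(round(p2, 6), round(ww_mean, 6))
```

Because demand is linear in beliefs under log utility, the equal-wealth price lands exactly on the mean belief; with other utility functions that linearity breaks, which is the source of the small divergences the paper calibrates.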


Added to diary 20 January 2018

esther-duflo

As economists increasingly help governments design new policies and regulations, they take on an added responsibility to engage with the details of policy making and, in doing so, to adopt the mindset of a plumber. […]

There are two reasons for this need to attend to details. First, it turns out that policy makers rarely have the time or inclination to focus on them, and will tend to decide on how to address them based on hunches, without much regard for evidence. […] Second, details that we as economists might consider relatively uninteresting are in fact extraordinarily important in determining the final impact of a policy or a regulation, while some of the theoretical issues we worry about most may not be that relevant. […]

For Roth, intervening in the real world should fundamentally alter the attitude of the economist and her way of working. He sets the tone in the abstract of the paper:

Market design involves a responsibility for detail, a need to deal with all of a market’s complications, not just its principal features. Designers therefore cannot work only with the simple conceptual models used for theoretical insights into the general working of markets. Instead, market design calls for an engineering approach.

The scientist provides the general framework that guides the design. […] The engineer takes these general principles into account, but applies them to a specific situation. […] The plumber goes one step further than the engineer: she installs the machine in the real world, carefully watches what happens, and then tinkers as needed.

Esther Duflo, The Economist as Plumber, 23 January 2017


Added to diary 21 March 2018

gojko-barjamovic

Economists build a database from 4000-year-old clay tablets, plug it into a trade model, and use it to locate lost Bronze Age cities.

They had clay tablets from ancient merchants, saying things like:

(I paid) 6.5 shekels (of tin) from the Town of the Kanishites to Timelkiya. I paid 2 shekels of silver and 2 shekels of tin for the hire of a donkey from Timelkiya to Hurama. From Hurama to Kaneš I paid 4.5 shekels of silver and 4.5 shekels of tin for the hire of a donkey and a packer.

This allowed them to measure the amount of trade between any two cities.

Then they constructed a theoretical model of the expected amount of trade between any two cities, as inversely proportional to the distance between them. Given the data on the amount of trade, and the locations of the known cities, they are able to estimate the lost locations:

As long as we have data on trade between known and lost cities, with sufficiently many known compared to lost cities, a structural gravity model is able to estimate the likely geographic coordinates of lost cities [….]

We build a simple Ricardian model of trade. Further imposing that bilateral trade frictions can be summarized by a power function of geographic distance, our model makes predictions on the number of transactions between city pairs, which is observed in our data. The model can be estimated solely on bilateral trade flows and on the geographic location of at least some cities.

Gojko Barjamovic, Thomas Chaney, Kerem A. Coşar, Ali Hortaçsu, Trade, Merchants, and the Lost Cities of the Bronze Age, 2017
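The estimation strategy in the quoted passage (fit a gravity equation in which trade falls off as a power of distance, then pick the lost city’s coordinates to minimize the residuals) can be sketched with synthetic data. Everything below is invented for illustration: the city names, coordinates, and distance elasticity are mine, and the paper’s actual model is richer than this.

```python
import math, itertools

# Hypothetical coordinates for four "known" cities and one "lost" city;
# all names, locations, and the elasticity are invented.
known = {"A": (0.0, 0.0), "B": (4.0, 0.0), "C": (0.0, 3.0), "D": (5.0, 4.0)}
true_lost = (2.0, 2.0)
sigma = 2.0  # trade falls off as distance**(-sigma)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Observed trade between the lost city and each known city, generated
# from the gravity equation itself (no noise, for clarity).
trade = {name: dist(xy, true_lost) ** -sigma for name, xy in known.items()}

def loss(candidate):
    """Sum of squared log-residuals of the gravity equation at a
    candidate location for the lost city."""
    total = 0.0
    for name, xy in known.items():
        d = dist(xy, candidate)
        if d == 0.0:
            return float("inf")  # candidate sits exactly on a known city
        total += (math.log(trade[name]) - math.log(d ** -sigma)) ** 2
    return total

# Grid search over candidate coordinates; the residuals vanish only at
# the true location, which the search recovers.
grid = [i / 10 for i in range(61)]
best = min(itertools.product(grid, grid), key=loss)
print(best)  # (2.0, 2.0)
```

With enough known cities relative to lost ones, the distances pin the location down, which is the identification argument the authors make.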


Added to diary 15 January 2018

gottfried-leibniz

if controversies were to arise, there would be no more need of disputation between two philosophers than between two calculators. For it would suffice for them to take their pencils in their hands and to sit down at the abacus, and say to each other (and if they so wish also to a friend called to help): Let us calculate.

Gottfried Leibniz


Added to diary 15 January 2018

gregory-lewis

Effective Altruists, then, know the price of everything and the value of nothing. […] Heir apparent to Bentham’s reductive credo, they aspire to prize apart the rib cage of eudaimonia to feast on its entrails of utility.

Gregory Lewis, in a comment on “Philosophical Critiques of Effective Altruism by Prof Jeff McMahan”, Effective altruism forum, 10 May 2016


Added to diary 09 February 2018

hilary-greaves

I think what the […] the people who are concerned about the excessive sacrifice argument are likely to say here is like, “Well, in so far as you’re right about morality being this extremely demanding thing, it looks like we’re going to have reason to talk about a second thing as well, which is maybe like watered down morality or pseudo morality, and that the second thing is going to be something like […] what we actually plan to act according to or what is reasonable to ask of people, or what we’re going to issue as advice to the government given that they don’t have morally perfect motivations or something like that and then, by the way, the second thing is the thing that’s going to be more directly action guiding in practice. So yeah, you philosophers can go and have your conversation about what this abstract morality abstractly requires, but nobody’s actually going to pay any attention to it when they act. They’re going to pay attention to this other thing. So, by the way, we’ve won the practical argument. I think there’s something to that line of response.

Hilary Greaves, 80,000 Hours Podcast, 23 October 2018


Added to diary 03 December 2018

immanuel-kant

Der gewöhnliche Probierstein: ob etwas bloße Überredung, oder wenigstens subjektive Überzeugung, d. i. festes Glauben sei, was jemand behauptet, ist das Wetten. Öfters spricht jemand seine Sätze mit so zuversichtlichem und unlenkbarem Trotze aus, daß er alle Besorgnis des Irrtums gänzlich abgelegt zu haben scheint. Eine Wette macht ihn stutzig. Bisweilen zeigt sich, daß er zwar Überredung genug, die auf einen Dukaten an Wert geschätzt werden kann, aber nicht auf zehn, besitze. Denn den ersten wagt er noch wohl, aber bei zehn wird er allererst inne, was er vorher nicht bemerkte, daß es nämlich doch wohl möglich sei, er habe sich geirrt. Wenn man sich in Gedanken vorstellt, man solle worauf das Glück des ganzen Lebens verwetten, so schwindet unser triumphierendes Urteil gar sehr, wir werden überaus schüchtern und entdecken so allererst, daß unser Glaube so weit nicht zulange. So hat der pragmatische Glaube nur einen Grad, der nach Verschiedenheit des Interesses, das dabei im Spiele ist, groß oder auch klein sein kann.

Immanuel Kant, Kritik der reinen Vernunft, 1781

The usual touchstone, whether that which someone asserts is merely his persuasion – or at least his subjective conviction, that is, his firm belief – is betting. It often happens that someone propounds his views with such positive and uncompromising assurance that he seems to have entirely set aside all thought of possible error. A bet disconcerts him. Sometimes it turns out that he has a conviction which can be estimated at a value of one ducat, but not of ten. For he is very willing to venture one ducat, but when it is a question of ten he becomes aware, as he had not previously been, that it may very well be that he is in error. If, in a given case, we represent ourselves as staking the happiness of our whole life, the triumphant tone of our judgment is greatly abated; we become extremely diffident, and discover for the first time that our belief does not reach so far. Thus pragmatic belief always exists in some specific degree, which, according to differences in the interests at stake, may be large or may be small.

Immanuel Kant, Critique of Pure Reason, 1781
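Kant’s observation that larger stakes expose weaker beliefs falls out of expected-utility reasoning. Assuming a log-utility bettor (my modeling choice, not Kant’s), the minimum belief needed to accept an even-money bet rises with the stake:

```python
import math

def min_belief_to_bet(wealth, stake):
    """Smallest probability at which a log-utility bettor accepts an
    even-money bet of `stake`: solves
    p*log(w + s) + (1 - p)*log(w - s) = log(w) for p."""
    return (math.log(wealth) - math.log(wealth - stake)) / (
        math.log(wealth + stake) - math.log(wealth - stake))

wealth = 20.0  # a hypothetical bettor worth 20 ducats
print(round(min_belief_to_bet(wealth, 1), 3))   # risking one ducat takes little conviction
print(round(min_belief_to_bet(wealth, 10), 3))  # risking ten demands much more
```

As the stake approaches the bettor’s whole fortune, the required belief approaches certainty, which is exactly Kant’s thought experiment of wagering the happiness of one’s whole life.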


Added to diary 17 December 2018

isaac-newton

To determine by what modes or actions light produceth in our minds the phantasm of colour is not so easie.

Isaac Newton, in a letter to Henry Oldenburg, 1672


Added to diary 21 April 2018

jacob-lagerros

In order to get the actual motion of the planets correct, both Ptolemy and Copernicus had to bolster their models with many more epicycles, and epicycles upon epicycles, than shown in the above figure and video. Copernicus even considered introducing an epicyclepicyclet — “an epicyclet whose center was carried round by an epicycle, whose center in turn revolved on the circumference of a deferent concentric with the sun as the center of the universe”… (Complete Dictionary of Scientific Biography, 2008).

Pondering his creation, Copernicus concluded an early manuscript outline of his theory thus: “Mercury runs on seven circles in all, Venus on five, the earth on three with the moon around it on four, and finally Mars, Jupiter, and Saturn on five each. Thus 34 circles are enough to explain the whole structure of the universe and the entire ballet of the planets” (MacLachlan & Gingerich, 2005).

These inventions might appear remarkably awkward — if not ingenious — ways of making a flawed system fit the observational data. There is however quite an elegant reason why they worked so well: they form a primitive version of Fourier analysis, a modern technique for function approximation. Thus, in the constantly expanding machinery of epicycles and epicyclets, Ptolemy and Copernicus had gotten their hands on a powerful computational tool, which would in fact have allowed them to approximate orbits of a very large number of shapes, including squares and triangles (Hanson, 1960)!

Despite these geometric acrobatics, Copernicus’ theory did not fit the available data better than Ptolemy’s. In the second half of the 16th century, the renowned imperial astronomer Tycho Brahe produced the most rigorous astronomical observations to date — and found that in some places they fit Copernicus’ theory even worse than Ptolemy’s (Gingerich, 1973, 1975).

Jacob Lagerros, The Copernican Revolution from the inside, 26 October 2017
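The claim that epicycles-on-epicycles amount to a primitive Fourier analysis can be checked directly: a truncated Fourier series of a closed path in the complex plane is a finite stack of epicycles, one rotating circle per frequency. A sketch, approximating a square orbit as Hanson describes (the square path and the sample and epicycle counts are my choices):

```python
import cmath

def square_path(n_samples=400):
    """Uniform samples along the boundary of a square, as complex numbers."""
    corners = [1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j, 1 + 1j]
    per_side = n_samples // 4
    pts = []
    for a, b in zip(corners, corners[1:]):
        pts += [a + (b - a) * i / per_side for i in range(per_side)]
    return pts

def epicycle_coeffs(path, n_epicycles):
    """Discrete Fourier coefficients for the n_epicycles lowest
    frequencies; each coefficient c_k is one epicycle of radius |c_k|
    turning k times per full revolution."""
    n = len(path)
    freqs = sorted(range(-(n // 2), n // 2), key=abs)[:n_epicycles]
    return {k: sum(z * cmath.exp(-2j * cmath.pi * k * m / n)
                   for m, z in enumerate(path)) / n
            for k in freqs}

def reconstruct(coeffs, t, n):
    """Position at time step t: the sum of all rotating epicycle vectors."""
    return sum(c * cmath.exp(2j * cmath.pi * k * t / n) for k, c in coeffs.items())

path = square_path()
n = len(path)
errs = []
for n_epi in (3, 11, 51):
    coeffs = epicycle_coeffs(path, n_epi)
    errs.append(max(abs(reconstruct(coeffs, t, n) - z) for t, z in enumerate(path)))
    print(n_epi, "epicycles -> max error", round(errs[-1], 3))
```

Adding epicycles shrinks the worst-case error toward zero, so a sufficiently patient Ptolemaic astronomer really could have fit a square orbit.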


Added to diary 15 January 2018

james-henderson-burns

Dr. Priestley published his Essay on Government in 1768. He there introduced as the only reasonable and proper object of government, ‘the greatest happiness of the greatest number.’ […]

Somehow or other, shortly after its publication, a copy of this pamphlet found its way into the little circulating library belonging to a little coffee-house, called Harper’s coffee-house, attached, as it were, to Queen’s College, Oxford, and deriving, from the popularity of that college, the whole of its subsistence. It was a corner house, having one front towards the High Street, another towards a narrow lane, which on that side skirts Queen’s College, and loses itself in a lane issuing from one of the gates of New College. To this library the subscription was a shilling a quarter, or, in the University phrase, a shilling a term. Of this subscription the produce was composed of two or three newspapers, with magazines one or two, and now and then a newly-published pamphlet; a moderate sized octavo was a rare, if ever exemplified spectacle: composed partly of pamphlets, partly of magazines, half-bound together, a few dozen volumes made up this library, which formed so curious a contrast with the Bodleian Library, and those of Christ’s Church and All Souls. […]

This year, 1768, was the latest of all the years in which this pamphlet could have come into my hands. Be this as it may, it was by that pamphlet, and this phrase in it, that my principles on the subject of morality, public and private together, were determined. It was from that pamphlet and that page of it, that I drew the phrase, the words and import of which have been so widely diffused over the civilized world. At the sight of it, I cried out, as it were, in an inward ecstasy, like Archimedes on the discovery of the fundamental principle of hydrostatics, eureka!

Jeremy Bentham, “Deontology, or the Science of Morality”, The British Critic, Quarterly Theological Review, and Ecclesiastical Record, vol. 16, no. 32 (October, 1834), pp. 279-280

(See also Bentham’s quote in full)



[T]he origins and history of the phrase and of Bentham’s use of it have been the subject of protracted scholarly debate. The seeds of uncertainty were sown by Bentham himself in confused and inconclusive recollections recorded by John Bowring. The question of origins at least was definitively resolved over thirty years ago by Robert Shackleton, in an elegant piece of research. This demonstrated that by far the likeliest source of the phrase as Bentham used it is the English translation of Beccaria’s Dei delitti e delle pene, published in 1768. That was the year in which Bentham sometimes thought, mistakenly, that he had found the phrase in a work by Joseph Priestley, and it seems likely that he read the Beccaria translation in what he later called ‘a most interesting year’ – 1769.

Burns, J. H. (2005). Happiness and utility: Jeremy Bentham’s equation. Utilitas, 17(1), 46-61.


Added to diary 17 April 2018

jared-diamond

Among the world’s thousands of wild grass species, Blumler tabulated the 56 with the largest seeds, the cream of nature’s crop: the grass species with seeds at least 10 times heavier than the median grass species (see Table 8.1). Virtually all of them are native to Mediterranean zones or other seasonally dry environments. Furthermore, they are overwhelmingly concentrated in the Fertile Crescent or other parts of western Eurasia’s Mediterranean zone, which offered a huge selection to incipient farmers: about 32 of the world’s 56 prize wild grasses! […] In contrast, the Mediterranean zone of Chile offered only two of those species, California and southern Africa just one each, and southwestern Australia none at all. That fact alone goes a long way toward explaining the course of human history.

Jared Diamond, Guns, Germs, and Steel, Chapter 8, 1997


Added to diary 24 December 2018

jason-dyer

jen-banbury

The world—and, specifically, the oil and gas industry—needs commercial divers like Hovey who can go to the seabed to perform the delicate maneuvers required to put together, maintain, and disassemble offshore wells, rigs, and pipelines, everything from flipping flow valves, to tightening bolts with hydraulic jacks, to working in tight confines around a blowout preventer. Remotely operated vehicles don’t have the touch, maneuverability, or judgment for the job. And so, a solution. Experiments in the 1930s showed that, after a certain time at pressure, divers’ bodies become fully saturated with inert gas, and they can remain at that pressure indefinitely, provided they get one long decompression at the end. In 1964, naval aquanauts occupied the first Sea Lab—a metal-encased living quarters lowered to a depth of 192 feet. The aquanauts could move effortlessly between their pressurized underwater home and the surrounding water, and they demonstrated the enormous commercial potential of saturation diving. It soon became apparent that it would be easier and cheaper to monitor and support the divers if the pressurized living quarters weren’t themselves at the bottom of the sea. At this moment, all around the world, there are commercial divers living at pressure inside saturation systems (mostly on ships, occasionally on rigs or barges), and commuting to and from their jobsites in pressurized diving bells. They can each put in solid six-hour working days on the bottom.

Jen Banbury, The Weird, Dangerous, Isolated Life of the Saturation Diver, Atlas Obscura, 9 May 2018


Added to diary 13 May 2018

jennifer-grudnik

In the inspection-time task, a person looks at two vertical lines of unequal length and is asked to identify which of the two is longer. Inspection time is the length of time of stimulus presentation each individual needs in order to discriminate which of the two lines is the longest. Some research suggests that more-intelligent individuals are able to discriminate the lengths of the lines in shorter inspection times.

Robert J. Sternberg, Encyclopædia Britannica, Intelligence

This study replicated and extended Kranzler and Jensen’s [Intelligence 13 (1989) 329] meta-analysis of the relationship between inspection time (IT) and intelligence (IQ). […] meta-analyses were conducted on obtained correlations (r’s) between IT and general IQ. […] For the total sample (N>4100), the r was −.51 after correction for artifactual effects (−.30 prior to correction).

Jennifer L. Grudnik, John H. Kranzler, “Meta-analysis of the relationship between intelligence and inspection time”, Intelligence, Volume 29, Issue 6, November–December 2001, Pages 523-535, https://doi.org/10.1016/S0160-2896(01)00078-2

Neuropsychologists and cognitive researchers often need quick estimates of global cognitive functioning [i.e., intelligence quotient (IQ)]. […]

The current article examined 11 proxy measures to determine their level of agreement with WAIS-III FSIQ across the entire sample. [(Wechsler Adult Intelligence Scale [Third Edition] full scale IQ test)] […]

Measures evaluated for this study included the Ward-7ST short form developed by Ward and modified for the WAIS-III by Pilgrim et al. (1999), the NAART, the SILS, ITBS, the Barona and Crawford demographic regression formulae, and the five OPIE3 hybrids combining demographic and WAIS-III subtest information. The final estimate examined was the ITBS (Hoover et al., 2003), a nationally recognized standardized school achievement test.

The Pearson correlation and confidence interval between WAIS-III FSIQ and each proxy measure are shown in Table 2. Correlations ranged from r = .25 for the Barona estimate to r = .95 for the Ward-7ST short form.

The performance of the proxy measures across the different cognitive ability groups was examined next. […]

Above-Average IQ Group

[…] The Ward-7ST estimate was the only proxy to correlate above r = .70 for the high ability group. […]

The most important finding of this article is how poorly the IQ proxy measures performed at the tails of the IQ distribution. The proxy measures consistently overestimated the IQ of low-functioning individuals and underestimated the IQs of high-functioning individuals.

Spinks, Mckirgan, Arndt, Caspers, Yucuis and Pfalzgraf (2009). IQ estimate smackdown: comparing IQ proxy measures to the WAIS-III. Journal of the International Neuropsychological Society. 15. 590-6. doi:10.1017/S1355617709090766.


Added to diary 27 June 2018

jeremy-bentham

Dr. Priestley published his Essay on Government in 1768. He there introduced as the only reasonable and proper object of government, ‘the greatest happiness of the greatest number.’ […]

Somehow or other, shortly after its publication, a copy of this pamphlet found its way into the little circulating library belonging to a little coffee-house, called Harper’s coffee-house, attached, as it were, to Queen’s College, Oxford, and deriving, from the popularity of that college, the whole of its subsistence. It was a corner house, having one front towards the High Street, another towards a narrow lane, which on that side skirts Queen’s College, and loses itself in a lane issuing from one of the gates of New College. To this library the subscription was a shilling a quarter, or, in the University phrase, a shilling a term. Of this subscription the produce was composed of two or three newspapers, with magazines one or two, and now and then a newly-published pamphlet; a moderate sized octavo was a rare, if ever exemplified spectacle: composed partly of pamphlets, partly of magazines, half-bound together, a few dozen volumes made up this library, which formed so curious a contrast with the Bodleian Library, and those of Christ’s Church and All Souls. […]

This year, 1768, was the latest of all the years in which this pamphlet could have come into my hands. Be this as it may, it was by that pamphlet, and this phrase in it, that my principles on the subject of morality, public and private together, were determined. It was from that pamphlet and that page of it, that I drew the phrase, the words and import of which have been so widely diffused over the civilized world. At the sight of it, I cried out, as it were, in an inward ecstasy, like Archimedes on the discovery of the fundamental principle of hydrostatics, eureka!

Jeremy Bentham, “Deontology, or the Science of Morality”, The British Critic, Quarterly Theological Review, and Ecclesiastical Record, vol. 16, no. 32 (October, 1834), pp. 279-280

(See also Bentham’s quote in full)



[T]he origins and history of the phrase and of Bentham’s use of it have been the subject of protracted scholarly debate. The seeds of uncertainty were sown by Bentham himself in confused and inconclusive recollections recorded by John Bowring. The question of origins at least was definitively resolved over thirty years ago by Robert Shackleton, in an elegant piece of research. This demonstrated that by far the likeliest source of the phrase as Bentham used it is the English translation of Beccaria’s Dei delitti e delle pene, published in 1768. That was the year in which Bentham sometimes thought, mistakenly, that he had found the phrase in a work by Joseph Priestley, and it seems likely that he read the Beccaria translation in what he later called ‘a most interesting year’ – 1769.

Burns, J. H. (2005). Happiness and utility: Jeremy Bentham’s equation. Utilitas, 17(1), 46-61.


Added to diary 17 April 2018

That which has no existence cannot be destroyed — that which cannot be destroyed cannot require anything to preserve it from destruction. Natural rights is simple nonsense: natural and imprescriptible rights, rhetorical nonsense — nonsense upon stilts.

Jeremy Bentham, Anarchical Fallacies: Being an Examination of the Declarations of Rights Issued During the French Revolution, 1843


Added to diary 26 January 2018

The French have already discovered that the blackness of the skin is no reason why a human being should be abandoned without redress to the caprice of a tormentor. It may one day come to be recognized that the number of legs, the villosity of the skin, or the termination of the os sacrum are reasons equally insufficient for abandoning a sensitive being to the same fate. What else is it that should trace the insuperable line? Is it the faculty of reason, or perhaps the faculty of discourse? But a full-grown horse or dog is beyond comparison a more rational, as well as a more conversable animal, than an infant of a day or a week or even a month, old. But suppose they were otherwise, what would it avail? The question is not Can they reason?, nor Can they talk?, but Can they suffer?

Jeremy Bentham, An Introduction to the Principles of Morals and Legislation, Chapter 17, 1789


Added to diary 26 January 2018

joe-simmons

It’s a famous study. Give a mug to a random subset of a group of people. Then ask those who got the mug (the sellers) to tell you the lowest price they’d sell the mug for, and ask those who didn’t get the mug (the buyers) to tell you the highest price they’d pay for the mug. You’ll find that sellers’ minimum selling prices exceed buyers’ maximum buying prices by a factor of 2 or 3 (.pdf).

This famous finding, known as the endowment effect, is presumed to have a famous cause: loss aversion. Just as loss aversion maintains that people dislike losses more than they like gains, the endowment effect seems to show that people put a higher price on losing a good than on gaining it. The endowment effect seems to perfectly follow from loss aversion.

But a 2012 paper by Ray Weaver and Shane Frederick convincingly shows that loss aversion is not the cause of the endowment effect (.pdf). Instead, “the endowment effect is often better understood as the reluctance to trade on unfavorable terms,” in other words “as an aversion to bad deals.” [1] […]

Weaver and Frederick’s theory is simple: Selling and buying prices reflect two concerns. First, people don’t want to sell the mug for less, or buy the mug for more, than their own value of it. Second, they don’t want to sell the mug for less, or buy the mug for more, than the market price. This is because people dislike feeling like a sucker. [2]

To see how this produces the endowment effect, imagine you are willing to pay $1 for the mug and you believe it usually sells for $3. As a buyer, you won’t pay more than $1, because you don’t want to pay more than it’s worth to you. But as a seller, you don’t want to sell for as little as $1, because you’ll feel like a chump selling it for much less than it is worth. [3] Thus, because there’s a gap between people’s perception of the market price and their valuation of the mug, there’ll be a large gap between selling ($3) and buying ($1) prices.

Weaver and Frederick predict that the endowment effect will arise whenever market prices differ from valuations.

However, when market prices are not different from valuations, you shouldn’t see the endowment effect. For example, if people value a mug at $2 and also think that its market price is $2, then both buyers and sellers will price it at $2.

And this is what Weaver and Frederick find. Repeatedly. There is no endowment effect when valuations are equal to perceived market prices. Wow.

Joe Simmons, “A Better Explanation Of The Endowment Effect”, 27 May 2015, Data Colada
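Weaver and Frederick’s two-concern account can be stylized as a max/min rule: a seller quotes the larger of her valuation and the perceived market price, a buyer the smaller. This toy formalization is mine, for illustration, not the paper’s model:

```python
def selling_price(valuation, perceived_market):
    """A seller will accept neither less than her own valuation nor a
    price that feels like a bad deal relative to the market."""
    return max(valuation, perceived_market)

def buying_price(valuation, perceived_market):
    """A buyer will pay neither more than his own valuation nor more
    than the going market rate."""
    return min(valuation, perceived_market)

# The mug example from the text: valuation $1, perceived market price $3.
print(selling_price(1, 3), buying_price(1, 3))  # 3 1 -> a threefold gap

# When valuation and perceived market price coincide, the gap vanishes,
# matching Weaver and Frederick's finding.
print(selling_price(2, 2), buying_price(2, 2))  # 2 2
```

The rule produces an “endowment effect” whenever valuation and perceived market price diverge, and none when they agree, with no appeal to loss aversion at all.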


Added to diary 21 June 2018

john-kennedy-toole

‘Employers sense in me a denial of their values.’ He rolled over onto his back. ‘They fear me. I suspect that they can see that I am forced to function in a century I loathe.’

John Kennedy Toole, A Confederacy of Dunces, 1994

Apparently I lack some particular perversion which today’s employer is seeking.

John Kennedy Toole, A Confederacy of Dunces, 1994


Added to diary 17 January 2018

I am at the moment writing a lengthy indictment against our century. When my brain begins to reel from my literary labors, I make an occasional cheese dip.

John Kennedy Toole, A Confederacy of Dunces, 1994


Added to diary 17 January 2018

john-kranzler

In the inspection-time task, a person looks at two vertical lines of unequal length and is asked to identify which of the two is longer. Inspection time is the length of time of stimulus presentation each individual needs in order to discriminate which of the two lines is the longest. Some research suggests that more-intelligent individuals are able to discriminate the lengths of the lines in shorter inspection times.

Robert J. Sternberg, Encyclopædia Britannica, Intelligence

This study replicated and extended Kranzler and Jensen’s [Intelligence 13 (1989) 329] meta-analysis of the relationship between inspection time (IT) and intelligence (IQ). […] meta-analyses were conducted on obtained correlations (r’s) between IT and general IQ. […] For the total sample (N>4100), the r was −.51 after correction for artifactual effects (−.30 prior to correction).

Jennifer L. Grudnik, John H. Kranzler, “Meta-analysis of the relationship between intelligence and inspection time”, Intelligence, Volume 29, Issue 6, November–December 2001, Pages 523-535, https://doi.org/10.1016/S0160-2896(01)00078-2

Neuropsychologists and cognitive researchers often need quick estimates of global cognitive functioning [i.e., intelligence quotient (IQ)]. […]

The current article examined 11 proxy measures to determine their level of agreement with WAIS-III FSIQ across the entire sample. [(Wechsler Adult Intelligence Scale [Third Edition] full scale IQ test)] […]

Measures evaluated for this study included the Ward-7ST short form developed by Ward and modified for the WAIS-III by Pilgrim et al. (1999), the NAART, the SILS, ITBS, the Barona and Crawford demographic regression formulae, and the five OPIE3 hybrids combining demographic and WAIS-III subtest information. The final estimate examined was the ITBS (Hoover et al., 2003), a nationally recognized standardized school achievement test.

The Pearson correlation and confidence interval between WAIS-III FSIQ and each proxy measure are shown in Table 2. Correlations ranged from r = .25 for the Barona estimate to r = .95 for the Ward-7ST short form.

The performance of the proxy measures across the different cognitive ability groups was examined next. […]

Above-Average IQ Group

[…] The Ward-7ST estimate was the only proxy to correlate above r = .70 for the high ability group. […]

The most important finding of this article is how poorly the IQ proxy measures performed at the tails of the IQ distribution. The proxy measures consistently overestimated the IQ of low-functioning individuals and underestimated the IQs of high-functioning individuals.

Spinks, Mckirgan, Arndt, Caspers, Yucuis and Pfalzgraf (2009). IQ estimate smackdown: comparing IQ proxy measures to the WAIS-III. Journal of the International Neuropsychological Society. 15. 590-6. doi:10.1017/S1355617709090766.


Added to diary 27 June 2018

john-list

In the wake of Hurricane Katrina in the summer of 2005, much of the Gulf Coast had been pummeled by wind and inches upon inches of rain. Water was everywhere, but often undrinkable. Basic provisions we take for granted, like drinking water, weren’t easy to come by and the Federal Emergency Management Agency (FEMA) was caught flat-footed.

In response to catastrophic events like a hurricane or an earthquake, the caricature of private industry is that firms will gouge customers. And sometimes this is true, but in response to Katrina, there was one unlikely hero: Walmart. In fact, the Mayor of Kenner, a suburb of New Orleans, had this to say about Walmart’s response: “. . . the only lifeline in Kenner was the Walmart stores. We didn’t have looting on a mass scale because Walmart showed up with food and water so our people could survive.”

Indeed, in the three weeks after Katrina, Walmart shipped almost 2,500 truckloads of supplies to storm-damaged areas. These truckloads reached affected areas before FEMA, whose troubles responding to the storm were so great that it shipped 30,000 pounds of ice to Maine instead of Mississippi. These stories and more are in Horwitz (2009), which summarizes the divergent responses to Katrina by private industry and FEMA. How was Walmart so effective in its response? Well, it maintains a hurricane response center of its own that rivals FEMA’s, and prior to the storm’s landfall it anticipated a need for generators, water, and food, so it effectively diverted supplies to the area. Walmart’s emergency response center was in full swing as the storm approached, with 50 employees managing the response from headquarters.

This sounds like the sort of response FEMA should have produced; so if that’s the job of FEMA, why did Walmart respond so heroically? Simple economics. Walmart understood that there would be an important shift of the demand curve for water, generators, and ice in response to the storm, and the textbook response to such shifts is an increase in quantity supplied. Lucky for us, few are better at shipping provisions around the country than Walmart.

Daron Acemoglu, David Laibson, John List, Economics, 1st Edition, 2015


Added to diary 19 May 2018

jonathan-haidt

They concluded that most of the bizarre and depressing research findings make perfect sense once you see reasoning as having evolved not to help us find truth but to help us engage in arguments, persuasion, and manipulation in the context of discussions with other people. As they put it, “skilled arguers … are not after the truth but after arguments supporting their views.” This explains why the confirmation bias is so powerful, and so ineradicable. How hard could it be to teach students to look on the other side, to look for evidence against their favored view? Yet, in fact, it’s very hard, and nobody has yet found a way to do it. It’s hard because the confirmation bias is a built-in feature (of an argumentative mind), not a bug that can be removed (from a platonic mind).

Jonathan Haidt, The Righteous Mind, 2012


Added to diary 27 June 2018

I was talking with a taxi driver who told me that he had just become a father. I asked him if he planned on staying in the United States or returning to his native Jordan. I’ll never forget his response: “We will return to Jordan because I never want to hear my son say ‘fuck you’ to me.”

Jonathan Haidt, The Righteous Mind, 2012


Added to diary 27 June 2018

For the first billion years or so of life, the only organisms were prokaryotic cells (such as bacteria). Each was a solo operation, competing with others and reproducing copies of itself. But then, around 2 billion years ago, two bacteria somehow joined together inside a single membrane, which explains why mitochondria have their own DNA, unrelated to the DNA in the nucleus. These are the two-person rowboats in my example. Cells that had internal organelles could reap the benefits of cooperation and the division of labor (see Adam Smith). There was no longer any competition between these organelles, for they could reproduce only when the entire cell reproduced, so it was “one for all, all for one.” Life on Earth underwent what biologists call a “major transition.” Natural selection went on as it always had, but now there was a radically new kind of creature to be selected. There was a new kind of vehicle by which selfish genes could replicate themselves.

Single-celled eukaryotes were wildly successful and spread throughout the oceans. A few hundred million years later, some of these eukaryotes developed a novel adaptation: they stayed together after cell division to form multicellular organisms in which every cell had exactly the same genes. These are the three-boat septuplets in my example. Once again, competition is suppressed (because each cell can only reproduce if the organism reproduces, via its sperm or egg cells). A group of cells becomes an individual, able to divide labor among the cells (which specialize into limbs and organs). A powerful new kind of vehicle appears, and in a short span of time the world is covered with plants, animals, and fungi. It’s another major transition. Major transitions are rare. The biologists John Maynard Smith and Eörs Szathmáry count just eight clear examples over the last 4 billion years (the last of which is human societies). But these transitions are among the most important events in biological history, and they are examples of multilevel selection at work. It’s the same story over and over again: Whenever a way is found to suppress free riding so that individual units can cooperate, work as a team, and divide labor, selection at the lower level becomes less important, selection at the higher level becomes more powerful, and that higher-level selection favors the most cohesive superorganisms. (A superorganism is an organism made out of smaller organisms.) As these superorganisms proliferate, they begin to compete with each other, and to evolve for greater success in that competition.

This competition among superorganisms is one form of group selection. There is variation among the groups, and the fittest groups pass on their traits to future generations of groups. Major transitions may be rare, but when they happen, the Earth often changes. Just look at what happened more than 100 million years ago when some wasps developed the trick of dividing labor between a queen (who lays all the eggs) and several kinds of workers who maintain the nest and bring back food to share. This trick was discovered by the early hymenoptera (members of the order that includes wasps, which gave rise to bees and ants) and it was discovered independently several dozen other times (by the ancestors of termites, naked mole rats, and some species of shrimp, aphids, beetles, and spiders). In each case, the free rider problem was surmounted and selfish genes began to craft relatively selfless group members who together constituted a supremely selfish group. These groups were a new kind of vehicle: a hive or colony of close genetic relatives, which functioned as a unit (e.g., in foraging and fighting) and reproduced as a unit. These are the motorboating sisters in my example, taking advantage of technological innovations and mechanical engineering that had never before existed. It was another transition. Another kind of group began to function as though it were a single organism, and the genes that got to ride around in colonies crushed the genes that couldn’t “get it together” and rode around in the bodies of more selfish and solitary insects. The colonial insects represent just 2 percent of all insect species, but in a short period of time they claimed the best feeding and breeding sites for themselves, pushed their competitors to marginal grounds, and changed most of the Earth’s terrestrial ecosystems (for example, by enabling the evolution of flowering plants, which need pollinators). Now they’re the majority, by weight, of all insects on Earth.

What about human beings? Since ancient times, people have likened human societies to beehives. But is this just a loose analogy? If you map the queen of the hive onto the queen or king of a city-state, then yes, it’s loose. A hive or colony has no ruler, no boss. The queen is just the ovary. But if we simply ask whether humans went through the same evolutionary process as bees—a major transition from selfish individualism to groupish hives that prosper when they find a way to suppress free riding—then the analogy gets much tighter. Many animals are social: they live in groups, flocks, or herds. But only a few animals have crossed the threshold and become ultrasocial, which means that they live in very large groups that have some internal structure, enabling them to reap the benefits of the division of labor. Beehives and ant nests, with their separate castes of soldiers, scouts, and nursery attendants, are examples of ultrasociality, and so are human societies.

One of the key features that has helped all the nonhuman ultra-socials to cross over appears to be the need to defend a shared nest. The biologists Bert Hölldobler and E. O. Wilson summarize the recent finding that ultrasociality (also called “eusociality”) is found among a few species of shrimp, aphids, thrips, and beetles, as well as among wasps, bees, ants, and termites:

In all the known [species that] display the earliest stages of eusociality, their behavior protects a persistent, defensible resource from predators, parasites, or competitors. The resource is invariably a nest plus dependable food within foraging range of the nest inhabitants.

Hölldobler and Wilson give supporting roles to two other factors: the need to feed offspring over an extended period (which gives an advantage to species that can recruit siblings or males to help out Mom) and intergroup conflict. All three of these factors applied to those first early wasps camped out together in defensible naturally occurring nests (such as holes in trees). From that point on, the most cooperative groups got to keep the best nesting sites, which they then modified in increasingly elaborate ways to make themselves even more productive and more protected. Their descendants include the honeybees we know today, whose hives have been described as “a factory inside a fortress.”

Those same three factors applied to human beings. Like bees, our ancestors were (1) territorial creatures with a fondness for defensible nests (such as caves) who (2) gave birth to needy offspring that required enormous amounts of care, which had to be given while (3) the group was under threat from neighboring groups. For hundreds of thousands of years, therefore, conditions were in place that pulled for the evolution of ultrasociality, and as a result, we are the only ultrasocial primate. The human lineage may have started off acting very much like chimps, but by the time our ancestors started walking out of Africa, they had become at least a little bit like bees. And much later, when some groups began planting crops and orchards, and then building granaries, storage sheds, fenced pastures, and permanent homes, they had an even steadier food supply that had to be defended even more vigorously. Like bees, humans began building ever more elaborate nests, and in just a few thousand years, a new kind of vehicle appeared on Earth—the city-state, able to raise walls and armies. City-states and, later, empires spread rapidly across Eurasia, North Africa, and Mesoamerica, changing many of the Earth’s ecosystems and allowing the total tonnage of human beings to shoot up from insignificance at the start of the Holocene (around twelve thousand years ago) to world domination today.

As the colonial insects did to the other insects, we have pushed all other mammals to the margins, to extinction, or to servitude. The analogy to bees is not shallow or loose. Despite their many differences, human civilizations and beehives are both products of major transitions in evolutionary history.

Jonathan Haidt, The Righteous Mind, 2012


Added to diary 27 June 2018

judea-pearl

Let us now examine how the surgery interpretation resolves Russell’s enigma concerning the clash between the directionality of causal relations and the symmetry of physical equations. The equations of physics are indeed symmetrical, but when we compare the phrases “A causes B” versus “B causes A,” we are not talking about a single set of equations. Rather, we are comparing two world models, represented by two different sets of equations: one in which the equation for A is surgically removed; the other where the equation for B is removed. Russell would probably stop us at this point and ask: “How can you talk about two world models when in fact there is only one world model, given by all the equations of physics put together?” The answer is: yes. If you wish to include the entire universe in the model, causality disappears because interventions disappear – the manipulator and the manipulated lose their distinction.

Judea Pearl, Causality, 2009, Epilogue, p. 419


Added to diary 17 January 2018

Remark: The exclusion of unmeasured variables from the definition of statistical parameters is devised to prevent one from hiding causal assumptions under the guise of latent variables. Such constructions, if permitted, would qualify any quantity as statistical and would thus obscure the important distinction between quantities that can be estimated from statistical data alone, and those that require additional assumptions beyond the data.

[…]

The sharp distinction between statistical and causal concepts can be translated into a useful principle: behind every causal claim there must lie some causal assumption that is not discernable from the joint distribution and, hence, not testable in observational studies. Such assumptions are usually provided by humans, resting on expert judgment. Thus, the way humans organize and communicate experiential knowledge becomes an integral part of the study, for it determines the veracity of the judgments experts are requested to articulate.

Another ramification of this causal–statistical distinction is that any mathematical approach to causal analysis must acquire new notation. The vocabulary of probability calculus, with its powerful operators of expectation, conditionalization, and marginalization, is defined strictly in terms of distribution functions and is therefore insufficient for expressing causal assumptions or causal claims. To illustrate, the syntax of probability calculus does not permit us to express the simple fact that “symptoms do not cause diseases,’’ let alone draw mathematical conclusions from such facts. All we can say is that two events are dependent – meaning that if we find one, we can expect to encounter the other, but we cannot distinguish statistical dependence, quantified by the conditional probability P(disease | symptom), from causal dependence, for which we have no expression in standard probability calculus.

The preceding two requirements: (1) to commence causal analysis with untested, judgmental assumptions, and (2) to extend the syntax of probability calculus, constitute the two main obstacles to the acceptance of causal analysis among professionals with traditional training in statistics (Pearl 2003c, also sections 11.1.1 and 11.6.4). This book helps overcome the two barriers through an effective and friendly notational system based on symbiosis of graphical and algebraic approaches.

Judea Pearl, Causality, 2009, Section 1.4, p. 39ff
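Pearl’s point that statistical dependence, P(disease | symptom), and causal dependence come apart can be made concrete with a small simulation. This is my own sketch, not Pearl’s code, with arbitrary illustrative probabilities: in a structural model where disease causes the symptom, conditioning on the symptom raises the probability of disease well above its base rate, while surgically setting the symptom (the do-operator) leaves it at the base rate, because the surgery severs no arrow pointing into disease.

```python
# Sketch: observational vs. interventional probability in a two-variable
# structural model, disease -> symptom. Probabilities are made up for
# illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
disease = rng.random(n) < 0.10              # base rate 10%
symptom = disease | (rng.random(n) < 0.05)  # disease causes symptom, plus noise

# Observational: condition on seeing the symptom.
p_cond = disease[symptom].mean()            # P(disease | symptom)

# Interventional: "surgery" sets symptom := 1 for everyone, leaving the
# disease equation untouched.
symptom_do = np.ones(n, dtype=bool)
p_do = disease[symptom_do].mean()           # P(disease | do(symptom = 1))

print(p_cond, p_do)  # p_cond is far above 0.10; p_do stays near 0.10
```

The two numbers differ even though the joint distribution alone cannot distinguish them, which is exactly the gap in probability calculus the excerpt describes.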


Added to diary 16 January 2018

justin-wolfers

We consider a simple prediction market in which traders buy and sell an all-or-nothing contract (a binary option) paying $1 if a specific event occurs, and nothing otherwise. There is heterogeneity in beliefs among the trading population, and following Manski’s notation, we denote trader j’s belief that the event will occur as q_j. These beliefs are orthogonal to wealth levels y, and are drawn from a distribution F(q). Individuals are price-takers and trade so as to maximize their subjectively expected utility. Wealth is only affected by the event via the prediction market, so there is no hedging motive for trading the contract. We first consider the case where traders have log utility, and we endogenously derive their trading activity, given the price of the contract is π. […] Thus, in this simple model, market prices are equal to the mean belief among traders.
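The log-utility result can be checked numerically. This is my own sketch, not the authors’ code, and the notation (belief q, wealth y, price pi) is mine: maximizing q·log(y + x(1 − pi)) + (1 − q)·log(y − x·pi) over the number of contracts x gives the demand x = y(q − pi)/(pi(1 − pi)), and clearing the market (total demand zero) recovers the paper’s result that the equilibrium price equals the mean belief when wealth is equal across traders.

```python
# Sketch: equilibrium price of a binary contract with log-utility traders.
# Demand per trader follows from the first-order condition of expected
# log utility; the market-clearing price is found by root-finding.
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(0)
q = rng.uniform(0.4, 0.6, size=1000)  # heterogeneous beliefs
y = np.ones_like(q)                   # equal wealth, orthogonal to beliefs

def excess_demand(pi):
    # Each trader demands y*(q - pi) / (pi*(1 - pi)) contracts.
    return np.sum(y * (q - pi) / (pi * (1.0 - pi)))

pi_star = brentq(excess_demand, 1e-6, 1 - 1e-6)
print(pi_star, q.mean())  # the two coincide
```

With wealth correlated with beliefs instead of constant, the same clearing condition yields the wealth-weighted mean of beliefs, matching the relaxation discussed next in the excerpt.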

[…]

We now turn to relaxing some of our assumptions. To preview, relaxing the assumption that budgets are orthogonal to beliefs yields the intuitively plausible result that prediction market prices are a wealth-weighted average of beliefs among market traders. And second, the result that the equilibrium price is exactly equal to the (weighted) mean of beliefs reflects the fact that demand for the prediction security is linear in beliefs, which is itself a byproduct of assuming log utility. Calibrating alternative utility functions, we find that prices can systematically diverge from mean beliefs, but that this divergence is typically small. […] The extent of the deviation depends crucially on how widely dispersed beliefs are.

[…]

We start by assuming that beliefs are drawn from a uniform distribution with a range of 10 percentage points, and solve for the mapping between mean beliefs and prices implied by each of the utility functions shown in Figure 1. (We rescale beliefs outside the (0,1) range to 0 or 1.) Figure 2 shows that for moderately dispersed beliefs, prediction market prices tend to coincide fairly closely with the mean beliefs. While there is some divergence, it is typically within a percentage point, although the risk neutral model yields larger differences. […]

Figure 2

Figure 3 shows the mapping from prices to probabilities when beliefs are more disperse (in this case the standard deviation and range were doubled). As the dispersion of beliefs widens, the number of traders with extreme beliefs increases, and hence the non-linear response to the divergence between beliefs and prices is increasingly important. As such, the biases evident in Figure 2 become even more evident as the distribution of beliefs widens. Even so, for utility functions with standard levels of risk aversion, these biases are small.

Figure 3

Justin Wolfers and Eric Zitzewitz, Interpreting Prediction Market Prices as Probabilities, NBER Working Paper No. 12200, May 2006


Added to diary 20 January 2018

karl-marx

Die klassische Ökonomie liebte es von jeher, das gesellschaftliche Kapital als eine fixe Größe von fixem Wirkungsgrad aufzufassen. Aber das Vorurteil ward erst zum Dogma befestigt durch den Urphilister Jeremias Bentham, dies nüchtern pedantische, schwatzlederne Orakel des gemeinen Bürgerverstandes des 19. Jahrhunderts. […] Jeremias Bentham ist ein rein englisches Phänomen. Selbst unsern Philosophen Christian Wolff nicht ausgenommen, hat zu keiner Zeit und in keinem Land der hausbackenste Gemeinplatz sich jemals so selbstgefällig breitgemacht.

Karl Marx, Das Kapital, Bd. I, Verwandlung von Mehrwert in Kapital

Classical economy always loved to conceive social capital as a fixed magnitude of a fixed degree of efficiency. But this prejudice was first established as a dogma by the arch-Philistine, Jeremy Bentham, that insipid, pedantic, leather-tongued oracle of the ordinary bourgeois intelligence of the 19th century. […] Bentham is a purely English phenomenon. Not even excepting our philosopher, Christian Wolff, in no time and in no country has the most homespun commonplace ever strutted about in so self-satisfied a way.

Karl Marx, Capital, Book I, Conversion of Surplus-Value into Capital.


Added to diary 18 April 2018

karthik-muralidharan

Public employment programs play a major role in the anti-poverty strategy of many developing countries. Besides the direct wages provided to the poor, such programs are likely to affect their welfare by changing broader labor market outcomes including wages and private employment. These general equilibrium effects may accentuate or attenuate the direct benefits of the program, but have been difficult to estimate credibly. We estimate the general equilibrium effects of a technological reform that improved the implementation quality of India’s public employment scheme on the earnings of the rural poor, using a large-scale experiment which randomized treatment across sub-districts of 60,000 people. We find that this reform had a large impact on the earnings of low-income households, and that these gains were overwhelmingly driven by higher private-sector earnings (90%) as opposed to earnings directly from the program (10%). These earnings gains reflect a 5.7% increase in market wages for rural unskilled labor, and a similar increase in reservation wages. We do not find evidence of distortions in factor allocation, including labor supply, migration, and land use. Our results highlight the importance of accounting for general equilibrium effects in evaluating programs, and also illustrate the feasibility of using large-scale experiments to study such effects.

Karthik Muralidharan, Paul Niehaus, and Sandip Sukhtankar, General Equilibrium Effects of (Improving) Public Employment Programs: Experimental Evidence from India, 2017

the cool thing is it mattered – study helped convince gov’t not to scrap the program, putting $100Ms annually into hands of the poor

@paulfniehaus, Twitter


Added to diary 15 January 2018

kathryn-schulz

Zimmerman was still downstairs when he heard her scream. He sprinted up to join her, and the two of them stood in the doorway, aghast. Their bedroom walls were crawling with insects—not dozens of them but hundreds upon hundreds. Stone knew what they were, because she’d seen a few around the house earlier that year and eventually posted a picture of one on Facebook and asked what it was. That’s a stinkbug, a chorus of people had told her—specifically, a brown marmorated stinkbug. Huh, Stone had thought at the time. Never heard of them. Now they were covering every visible surface of her bedroom.

“It was like a horror movie,” Stone recalled. She and Zimmerman fetched two brooms and started sweeping down the walls. Pre-stinkbug crisis, the couple had been unwinding after work (she is an actress, comedian, and horse trainer; he is a horticulturist), and were notably underdressed, in tank tops and boxers, for undertaking a full-scale extermination. The stinkbugs, attracted to warmth, kept thwacking into their bodies as they worked. Stone and Zimmerman didn’t dare kill them—the stink for which stinkbugs are named is released when you crush them—so they periodically threw the accumulated heaps back outside, only to realize that, every time they opened the doors to do so, more stinkbugs flew in. […]

The defining ugliness of a stinkbug, however, is its stink. Olfactory defense mechanisms are not uncommon in nature: wolverines, anteaters, and polecats all have scent glands that produce an odor rivalling that of a skunk; bombardier beetles, when threatened, emit a foul-smelling chemical hot enough to burn human skin; vultures keep predators at bay by vomiting up the most recent bit of carrion they ate; honey badgers achieve the same effect by turning their anal pouch inside out. All these creatures produce a smell worse than the stinkbug’s, but none of them do so in your home.

Slightly less noxious but vastly more pervasive, the smell of the brown marmorated stinkbug is often likened to that of cilantro, chiefly because the same chemical is present in both. In reality, stinkbugs smell like cilantro only in the way that rancid cilantro-mutton stew smells like cilantro, which is to say, they do not. […]

What makes the brown marmorated stinkbug so impressively omnivorous is also what makes it a bug. Technically speaking, bugs are not synonymous with insects but are a subset of them: those which possess mouthparts that pierce and suck (as opposed to, say, caterpillars and termites, whose mouths are built, like ours, to chew). Yet even among those insects which share its basic physiology, the stinkbug is an outlier; Michael Raupp, an entomologist at the University of Maryland, described its host range as “huge, huge, wildly huge. You’re right up there now with the big guys, with gypsy moths and Japanese beetles.” […]

[A]s it turns out, the brown marmorated stinkbug is exceptionally hard to kill with pesticides. Peter Jentsch, an entomologist with Cornell University’s Hudson Valley research laboratory, calls it the Hummer of insects: a highly armored creature built to maximize its defensive capabilities. Its relatively long legs keep it perched above the surface of its food, which limits its exposure to pesticide applications. […] A class of pesticides known as pyrethroids, which are used to control native stinkbugs, initially appeared to work just as well on the brown marmorated kind–until a day or two later, when more than a third of the ostensibly dead bugs rose up, Lazarus-like, and calmly resumed the business of demolition. […]

Once it settles down for the season, it enters a state known as diapause–a kind of insect hibernation, during which its metabolism slows to near-moribund conditions. It cannot mate or reproduce, it does not need to eat, and although it can still both crawl and fly, it performs each activity slowly and poorly. […] It is also thanks to diapause that stinkbugs, indoors, seem inordinately graceless and impossibly dumb. But, as we all now know, being graceless and dumb is no obstacle to being powerful and horrifying. […]

Unlike household pests such as ants and fruit flies, they are not particularly drawn to food and drink; then again, as equal-opportunity invaders they aren’t particularly not drawn to them, either. This has predictable but unfortunate consequences. One poor soul spooned up a stinkbug that had blended into her granola, putting her off fruit-and-nut cereals for life. Another discovered too late that a stinkbug had percolated in her coffeemaker, along with her morning brew. A third removed a turkey from the oven on Thanksgiving Day and discovered a cooked stinkbug at the bottom of the roasting pan. Other people have reported accidentally ingesting stinkbugs in, among other things, salads, berries, raisin bran, applesauce, and chili. By all accounts, the bugs release their stink upon being crunched, and taste pretty much the way they smell. […]

Raupp, who has been studying non-native species for forty-one years, called its arrival on our shores “one of the most productive incidents in the history of invasive pests in the United States.” Because the stinkbug is, as he put it, “magnificent and dastardly,” it has attracted an almost unprecedented level of scientific attention. It has spawned multimillion-dollar grants, dozens of master’s degrees and Ph.D.s, and a huge collaborative partnership that includes the federal government, land-grant colleges, Ivy League universities, extension programs, environmental organizations, trade groups, small farmers, and agribusiness. “From a research perspective,” Raupp said, “this was and continues to be one of the major drivers in the history of entomology in the United States.”

Kathryn Schulz, Home Invasion, Annals of Ecology, The New Yorker Magazine, 12 March 2018


Added to diary 18 March 2018

kendrick-lamar

So I was takin’ a walk the other day
And I seen a woman—a blind woman
Pacin’ up and down the sidewalk
She seemed to be a bit frustrated
As if she had dropped somethin’ and
Havin’ a hard time findin’ it
So after watchin’ her struggle for a while
I decide to go over and lend a helping hand, you know?
“Hello ma’am, can I be of any assistance?
It seems to me that you have lost something
I would like to help you find it.”
She replied: “Oh yes, you have lost something
You’ve lost… your life.”

Kendrick Lamar, “BLOOD.”, 14 April 2017


Added to diary 19 May 2018

kerem-cosar

Economists build a database from 4000-year-old clay tablets, plug it into a trade model, and use it to locate lost bronze age cities.

They had clay tablets from ancient merchants, saying things like:

(I paid) 6.5 shekels (of tin) from the Town of the Kanishites to Timelkiya. I paid 2 shekels of silver and 2 shekels of tin for the hire of a donkey from Timelkiya to Hurama. From Hurama to Kaneš I paid 4.5 shekels of silver and 4.5 shekels of tin for the hire of a donkey and a packer.

This allowed them to measure the amount of trade between any two cities.

Then they constructed a theoretical model of the expected amount of trade between any two cities, as inversely proportional to the distance between the two cities. Given the data on amount of trade, and the locations of the known cities, they are able to estimate the lost locations:

As long as we have data on trade between known and lost cities, with sufficiently many known compared to lost cities, a structural gravity model is able to estimate the likely geographic coordinates of lost cities [….]

We build a simple Ricardian model of trade. Further imposing that bilateral trade frictions can be summarized by a power function of geographic distance, our model makes predictions on the number of transactions between city pairs, which is observed in our data. The model can be estimated solely on bilateral trade flows and on the geographic location of at least some cities.

Gojko Barjamovic, Thomas Chaney, Kerem A. Coşar, Ali Hortaçsu, Trade, Merchants, and the Lost Cities of the Bronze Age, 2017
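The estimation idea (trade falls off as a power function of distance; choose the lost coordinates that best fit observed flows) can be sketched in a toy example. This is my own illustration, not the paper’s structural estimator: the distance elasticity alpha is taken as known here (the paper estimates it jointly with the locations), the cities and flows are simulated, and a general-purpose optimizer stands in for their estimation procedure.

```python
# Sketch: recover a "lost" city's coordinates from gravity-style trade
# flows with known cities. Trade ~ distance^(-alpha); fit by least
# squares on log trade. All data are simulated for illustration.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
known = rng.uniform(0, 10, size=(6, 2))  # coordinates of six known cities
lost_true = np.array([4.0, 7.0])         # true location, hidden from the fit
alpha = 2.0                              # assumed distance elasticity

def trade(a, b):
    return np.linalg.norm(a - b) ** (-alpha)

# Observed trade between the lost city and each known city.
flows = np.array([trade(lost_true, k) for k in known])

def loss(xy):
    pred = np.array([trade(xy, k) for k in known])
    return np.sum((np.log(pred) - np.log(flows)) ** 2)

fit = minimize(loss, x0=np.array([5.0, 5.0]), method="Nelder-Mead")
print(fit.x)  # recovers approximately (4.0, 7.0)
```

Distances to a handful of non-collinear known cities pin down a location in the plane, which is why the authors need "sufficiently many known compared to lost cities."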


Added to diary 15 January 2018

kevin-simler

No matter how fast the economy grows, there remains a limited supply of sex and social status […].

Robin Hanson and Kevin Simler, The Elephant in the Brain, 2017


Added to diary 26 June 2018

[T]here’s a very real sense in which we are the Press Secretaries within our minds. In other words, the parts of the mind that we identify with, the parts we think of as our conscious selves (“I,” “myself,” “my conscious ego”), are the ones responsible for strategically spinning the truth for an external audience. […]

Body language also facilitates discretion by being less quotable to third parties, relative to spoken language. If Peter had explicitly told a colleague, “I want to get Jim fired,” the colleague could easily turn around and relay Peter’s agenda to others in the office. Similarly, if Peter had asked his flirting partner out for a drink, word might get back to his wife—in which case, bad news for Peter. […]

[S]peaking functions in part as an act of showing off. Speakers strive to impress their audience by consistently delivering impressive remarks. This explains how speakers foot the bill for the costs of speaking we discussed earlier: they’re compensated not in-kind, by receiving information reciprocally, but rather by raising their social value in the eyes (and ears) of their listeners. […]

Participants evaluate each other not just as trading partners, but also as potential allies. Speakers are eager to impress listeners by saying new and useful things, but the facts themselves can be secondary. Instead, it’s more important for speakers to demonstrate that they have abilities that are attractive in an ally. […]

But why do speakers need to be relevant in conversation? If speakers deliver high-quality information, why should listeners care whether the information is related to the current topic? A plausible answer is that it’s simply too easy to rattle off memorized trivia. You can recite random facts from the encyclopedia until you’re blue in the face, but that does little to advertise your generic facility with information. Similarly, when you meet someone for the first time, you’re more eager to sniff each other out for this generic skill, rather than to exchange the most important information each of you has gathered to this point in your lives. In other words, listeners generally prefer speakers who can impress them wherever a conversation happens to lead, rather than speakers who steer conversations to specific topics where they already know what to say. […]

In fact, patients show surprisingly little interest in private information on medical quality. For example, patients who would soon undergo a dangerous surgery (with a few percent chance of death) were offered private information on the (risk-adjusted) rates at which patients died from that surgery with individual surgeons and hospitals in their area. These rates were large and varied by a factor of three. However, only 8 percent of these patients were willing to spend even $50 to learn these death rates. Similarly, when the government published risk-adjusted hospital death rates between 1986 and 1992, hospitals with twice the risk-adjusted death rates saw their admissions fall by only 0.8 percent. In contrast, a single high-profile news story about an untoward death at a hospital resulted in a 9 percent drop in patient admissions at that hospital. […]

When John F. Kennedy described the space race with his famous speech in 1962, he dressed up the nation’s ambition in a suitably prosocial motive. “We set sail on this new sea,” he told the crowd, “because there is new knowledge to be gained, and new rights to be won, and they must be won and used for the progress of all people.” Everyone, of course, knew the subtext: “We need to beat the Russians!” In the end, our motives were less important than what we managed to achieve by them. We may be competitive social animals, self-interested and self-deceived, but we cooperated our way to the god-damned moon.

Robin Hanson and Kevin Simler, The Elephant in the Brain, 2017


Added to diary 26 June 2018

kristin-caspers

In the inspection-time task, a person looks at two vertical lines of unequal length and is asked to identify which of the two is longer. Inspection time is the length of time of stimulus presentation each individual needs in order to discriminate which of the two lines is the longest. Some research suggests that more-intelligent individuals are able to discriminate the lengths of the lines in shorter inspection times.

Robert J. Sternberg, Encyclopædia Britannica, Intelligence

This study replicated and extended Kranzler and Jensen’s [Intelligence 13 (1989) 329] meta-analysis of the relationship between inspection time (IT) and intelligence (IQ). […] meta-analyses were conducted on obtained correlations (r’s) between IT and general IQ. […] For the total sample (N>4100), the r was −.51 after correction for artifactual effects (−.30 prior to correction).

Jennifer L. Grudnik, John H. Kranzler, “Meta-analysis of the relationship between intelligence and inspection time”, Intelligence, Volume 29, Issue 6, November–December 2001, Pages 523-535, https://doi.org/10.1016/S0160-2896(01)00078-2
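The jump from the observed r of −.30 to the corrected −.51 reflects correction for statistical artifacts. As an editorial illustration only (the meta-analysis corrects for several artifacts, and the reliabilities below are hypothetical values chosen to reproduce the magnitude), Spearman's classic correction for attenuation divides the observed correlation by the geometric mean of the two measures' reliabilities:

```python
from math import sqrt

def disattenuate(r_obs: float, rel_x: float, rel_y: float) -> float:
    """Spearman's correction for attenuation: estimate the true-score
    correlation from an observed correlation and two reliabilities."""
    return r_obs / sqrt(rel_x * rel_y)

# Hypothetical reliabilities for inspection time and IQ, chosen so an
# observed r of -.30 corrects to roughly the meta-analytic -.51:
print(round(disattenuate(-0.30, 0.60, 0.58), 2))
```

With perfectly reliable measures (both reliabilities 1.0) the correction leaves the observed correlation unchanged.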

Neuropsychologists and cognitive researchers often need quick estimates of global cognitive functioning [i.e., intelligence quotient (IQ)]. […]

The current article examined 11 proxy measures to determine their level of agreement with WAIS-III FSIQ across the entire sample. [(Wechsler Adult Intelligence Scale [Third Edition] full scale IQ test)] […]

Measures evaluated for this study included the Ward-7ST short form developed by Ward and modified for the WAIS-III by Pilgrim et al. (1999), the NAART, the SILS, ITBS, the Barona and Crawford demographic regression formulae, and the five OPIE3 hybrids combining demographic and WAIS-III subtest information. The final estimate examined was the ITBS (Hoover et al., 2003), a nationally recognized standardized school achievement test.

The Pearson correlation and confidence interval between WAIS-III FSIQ and each proxy measure are shown in Table 2. Correlations ranged from r = .25 for the Barona estimate to r = .95 for the Ward-7ST short form.

The performance of the proxy measures across the different cognitive ability groups was examined next. […]

Above-Average IQ Group

[…] The Ward-7ST estimate was the only proxy to correlate above r = .70 for the high ability group. […]

The most important finding of this article is how poorly the IQ proxy measures performed at the tails of the IQ distribution. The proxy measures consistently overestimated the IQ of low-functioning individuals and underestimated the IQs of high-functioning individuals.

Spinks, McKirgan, Arndt, Caspers, Yucuis and Pfalzgraf (2009). “IQ estimate smackdown: comparing IQ proxy measures to the WAIS-III”. Journal of the International Neuropsychological Society, 15, 590–596. doi:10.1017/S1355617709090766.


Added to diary 27 June 2018

larissa-macfarquhar

“We have to go further back, to 2005. I’m in Warri, in Delta State, I’m working as a doctor, and my mom and I are having a fight. She’s saying, You’re stagnating, you read medicine and you haven’t gone further, you could do better! I was happy, I was in this quiet place becoming a provincial doctor, but in Nigeria that is a lack of ambition, so my mom was angry. She showed me a photograph in a magazine of a young woman with beads in her hair, and she said, Look at this small girl, she has written a book of horticulture, about flowers—you could do something like that. She didn’t care what I did, really, she just wanted me to do more. So she told me, Write books! Don’t just sit there dishing out Tylenol. I said O.K. So I got a computer and started writing.”

Eghosa Imasuen was twenty-eight. He was living near his parents, in a small city some two hundred and fifty miles southeast of Lagos. He read a lot, mostly thrillers and science fiction, pulp paperbacks he bought from secondhand bookshops for a dollar or less. “Literature to me was recommended reading in school, which was Chinua Achebe. ‘Things Fall Apart,’ ‘Arrow of God,’ ‘Things Fall Apart,’ ‘Arrow of God,’ ‘Things Fall Apart,’ ‘Arrow of God.’ I tried to read Ben Okri once—I couldn’t get past page 10. […]

When Chimamanda read Barack Obama’s memoir and learned how his father had deserted his white American wife, whom he had married despite already having a wife in Kenya, she judged the father less harshly than Obama did. “It’s easy to understand it as deceitful,” she says. “But I don’t see it that way. To be an African man of that time, to have this privilege of being educated, often by people in your home town contributing money to pay part of your school fees—not only do you owe them money, you owe them in an emotional way, because you’re a shining star for them, you’re theirs. Then you go off and fall in love with somebody who would not be acceptable to them, and you feel torn. Often, the village wins. And so, reading Obama’s book as a person who was familiar with stories of that sort, part of me wanted to say, It’s not that he didn’t love you.” […]

She had always imagined that she would marry someone flamboyantly unfamiliar—she pictured herself shocking the family by bringing home “a spiky-haired Mongolian-Sri-Lankan-Rwandan”—but the man she ended up marrying, in 2009, was almost comically suitable: a Nigerian doctor who practiced in America, whose father was a doctor and a friend of her parents, and whose sister was her sister’s close friend. Before they had a baby, she spent about half the year in Nigeria, and her husband would join her when he could. But her husband doesn’t want to be apart from the baby for too long, so now she spends less time in Nigeria. “One of the perils of a feminist marriage is that the man actually wants to be there,” she says. “He is so present and he does every damn thing! And the child adores him. I swear to God, sometimes I look at her and say, I carried you for nine months, my breasts went down because of you, my belly is slack because of you, and now Papa comes home and you run off and ignore me. Really?”

In America, they live in a big, new house in a suburb of Baltimore, on a cul-de-sac alongside four other similar houses arranged in a semicircle. In “Americanah,” she describes Princeton as having no smell, and her neighborhood has no smell, either. It is calm, spacious, bland, empty—the opposite of Lagos. If she looks out the window, she sees nothing. She doesn’t know many people in Maryland, and doesn’t want to. She can go out and people don’t recognize her. It’s a good place to work.

In Nigeria, when a woman in her family had a baby, all of her female relatives came to help and she lay in bed like a dying queen. She loved the idea of that in some ways, but when she had her baby, in Maryland, she instructed her mother not to come for a month. She realized afterward that she had internalized what she took to be an American notion, that having help with a newborn was something to be slightly ashamed of. You were supposed to do everything on your own, or else you weren’t properly bonding, or suffering enough, or something like that. […]

Her husband told her that her father had been kidnapped, and she screamed, then vomited, then started to cry. Her father had been in a car driving from Nsukka to Abba, but he had not arrived. When her mother tried to call him, his phone was switched off, as was the phone of his driver. Two hours later, her mother received a call from his phone: the kidnapper told her, Madam, we have him, and hung up. Her mother had not called Chimamanda to tell her the news, fearing she would have a miscarriage.

She pulled herself together and started making phone calls. She called the governor of Anambra, her home state. She called the American consul-general in Lagos, because her father was an American citizen, through his two elder daughters, who were born in Berkeley. The American consul sent a Nigerian-American F.B.I. agent, a kidnapping expert, to her mother’s house; he told her mother what to say when the kidnappers called back. Chimamanda called the house to talk to the F.B.I. agent. He told her he was a big fan and had read all her books. Later, she would find this funny.

There were no demands until the next day. This was the usual method: kidnappers delayed, so that you worked yourself up into a panic. The next day, they called and demanded five million naira—around fourteen thousand dollars—and told her mother that if she told the police they would kill him. They didn’t call for another day. On the third day, they demanded ten million naira. There were laws against taking out too much money at once, but kidnappings were common enough that the banks made an exception.

She was terrified that her father was dead. When the kidnappers called her mother, her mother had asked to hear her husband’s voice, but the man on the line refused. Her father was diabetic and didn’t have his medicine with him. The F.B.I. man told her mother to forge an emotional connection with the kidnapper, so she called him “my dear son,” and told him she was an old, old lady, and begged him for mercy. The family made a plan to drop off the money. The kidnappers knew all about them: they said that Okey or a particular son-in-law could go, but no one else. Okey drove to a point on the highway near Nsukka, then, as instructed, set off on a motorcycle taxi for the designated meeting place, carrying ten million naira in a sack. Nobody knew if he would be seen again. They had heard that sometimes a family member would bring the money, only to find that the victim was already dead, and then be killed himself.

Okey rode on the back of the motorcycle, talking to the kidnapper on his phone. The motorcycle driver asked where they were going, there was nothing around here. Okey said to him, Just keep driving. When they entered a forest, the kidnapper told Okey to stop. The kidnapper told him not to look to the right or left, just keep walking, then drop the bag. Okey obeyed; the kidnapper on the phone told him to leave. Back at the house, the family held their phones, willing them to ring and afraid that they would ring. Then her father was delivered. […]

Her child is two. Soon she will have to go to school and become part of the world, and this brings up several quandaries that Chimamanda has postponed thinking about. She recently wrote a short manual on rearing a child—“Dear Ijeawele, or A Feminist Manifesto in Fifteen Suggestions”—but although she is now a published authority on the subject, and holds fully formed opinions on questions such as how gender stereotypes imprison boys as well as girls, she finds that when one descends from principles to logistics things become complicated. She cannot create a child in the way that she can create a character, of course, but she can choose the setting and the language of her daughter’s childhood, which is already to choose one set of possible selves over another.

She wants to raise her child in Nigeria, because she wants her to be protected as she herself was protected, growing up there: not knowing she is black. Someday she will talk to her about what it means to be black, but not yet. She wants her daughter to be in a place where race as she has encountered it in America does not exist.

Even as a privileged Americanah, she found that arriving at an American airport was often jarring—a reminder that she was once again black and foreign. And it wasn’t just the white customs officers who hassled her. “There is a certain kind of black American that deeply resents an African whom they think of as privileged,” she says. “Privileged Nigerians especially. My husband and I have got to the airport and they’ve said to us, You’re Nigerian, I bet you have twenty-five thousand dollars in your bag, let’s see it.”

Her neighborhood in Maryland is more diverse than most, but it’s still America. She moved into her current house just before the 2016 election, and when, the morning after Trump won, she began reading about post-election vandalism in Baltimore, and about how someone had spray-painted “nigger” on a black woman’s car, and how Trump had been elected not by the white working class after all but by suburbanites, she started to panic. She became convinced that her new neighbors had guns and were going to shoot them because they were black and supported Clinton. All day, she refused to leave the house. Then the doorbell rang, and it was the neighbors bearing welcome gifts, and they turned out to be a Japanese couple, a Bangladeshi couple, a white-black couple, and a lefty white couple. She was so relieved that she almost cried.

“There aren’t enough middle-class black folks to go around,” Bill said. “Lots of liberal white folks are looking for black friends.”

[…]

On the other hand, raising her daughter in Nigeria would mean that she would likely learn much sooner, and more definitively than she would in America, that she was a girl. She doesn’t want her to know that too early, either. Of course, there was sexism in America as well, but nobody was going to say to her in an American school, You! Go to the girls’ line. In “We Should All Be Feminists,” she told the story of her ambition, when she was nine, to be class monitor, because the monitor was empowered to patrol the classroom, holding a cane, and write down the names of noisemakers. Told that the child who scored the highest mark on a test would become monitor, she concentrated hard and attained the highest mark, only to be told by the teacher that the monitor had to be a boy. The boy who got the second-highest mark duly took up the post, although he was unsuited for its responsibilities. “The boy was a sweet, gentle soul who had no interest in patrolling the class with a cane,” she said, “whereas I was full of ambition to do so.” Should her daughter grow up cherishing similar ambitions, she did not want them thwarted.

Larissa MacFarquhar, “Writing Home: Chimamanda Ngozi Adichie Comes to Terms with Global Fame”, New Yorker Magazine, June 4 & 11, 2018 Issue


Added to diary 11 June 2018

louis-ck

Flying is the worst one, because people come back from flights, and they tell you their story…. They’re like, “It was the worst day of my life…. We get on the plane and they made us sit there on the runway for forty minutes.”… Oh really, then what happened next? Did you fly through the air, incredibly, like a bird? Did you soar into the clouds, impossibly? Did you partake in the miracle of human flight, and then land softly on giant tires that you couldn’t even conceive how they fuckin’ put air in them?

Louis C.K., quoted in Steven Pinker, Enlightenment Now


Added to diary 21 April 2018

lowell-mckirgan

In the inspection-time task, a person looks at two vertical lines of unequal length and is asked to identify which of the two is longer. Inspection time is the length of time of stimulus presentation each individual needs in order to discriminate which of the two lines is the longest. Some research suggests that more-intelligent individuals are able to discriminate the lengths of the lines in shorter inspection times.

Robert J. Sternberg, Encyclopædia Britannica, Intelligence

This study replicated and extended Kranzler and Jensen’s [Intelligence 13 (1989) 329] meta-analysis of the relationship between inspection time (IT) and intelligence (IQ). […] meta-analyses were conducted on obtained correlations (r’s) between IT and general IQ. […] For the total sample (N>4100), the r was −.51 after correction for artifactual effects (−.30 prior to correction).

Jennifer L. Grudnik, John H. Kranzler, “Meta-analysis of the relationship between intelligence and inspection time”, Intelligence, Volume 29, Issue 6, November–December 2001, Pages 523-535, https://doi.org/10.1016/S0160-2896(01)00078-2

Neuropsychologists and cognitive researchers often need quick estimates of global cognitive functioning [i.e., intelligence quotient (IQ)]. […]

The current article examined 11 proxy measures to determine their level of agreement with WAIS-III FSIQ across the entire sample. [(Wechsler Adult Intelligence Scale [Third Edition] full scale IQ test)] […]

Measures evaluated for this study included the Ward-7ST short form developed by Ward and modified for the WAIS-III by Pilgrim et al. (1999), the NAART, the SILS, ITBS, the Barona and Crawford demographic regression formulae, and the five OPIE3 hybrids combining demographic and WAIS-III subtest information. The final estimate examined was the ITBS (Hoover et al., 2003), a nationally recognized standardized school achievement test.

The Pearson correlation and confidence interval between WAIS-III FSIQ and each proxy measure are shown in Table 2. Correlations ranged from r = .25 for the Barona estimate to r = .95 for the Ward-7ST short form.

The performance of the proxy measures across the different cognitive ability groups was examined next. […]

Above-Average IQ Group

[…] The Ward-7ST estimate was the only proxy to correlate above r = .70 for the high ability group. […]

The most important finding of this article is how poorly the IQ proxy measures performed at the tails of the IQ distribution. The proxy measures consistently overestimated the IQ of low-functioning individuals and underestimated the IQs of high-functioning individuals.

Spinks, McKirgan, Arndt, Caspers, Yucuis and Pfalzgraf (2009). “IQ estimate smackdown: comparing IQ proxy measures to the WAIS-III”. Journal of the International Neuropsychological Society, 15, 590–596. doi:10.1017/S1355617709090766.


Added to diary 27 June 2018

ludwig-wittgenstein

Denn die philosophischen Probleme entstehen, wenn die Sprache feiert.

Ludwig Wittgenstein, Philosophische Untersuchungen, 38, 1953

For philosophical problems arise when language goes on holiday.

Ludwig Wittgenstein, Philosophical Investigations, 38, 1953

Aber welches sind die einfachen Bestandteile, aus denen sich die Realität zusammensetzt? - Was sind die einfachen Bestandteile eines Sessels? - Die Stücke Holz, aus denen er zusammengefügt ist? Oder die Moleküle, oder die Atome? - »Einfach« heißt: nicht zusammengesetzt. Und da kommt es darauf an: in welchem Sinne ›zusammengesetzt‹? Es hat gar keinen Sinn von den ›einfachen Bestandteilen des Sessels schlechtweg‹ zu reden. […] Auf die philosophische Frage: »Ist das Gesichtsbild dieses Baumes zusammengesetzt, und welches sind seine Bestandteile?« ist die richtige Antwort: »Das kommt drauf an, was du unter ›zusammengesetzt‹ verstehst.« (Und das ist natürlich keine Beantwortung, sondern eine Zurückweisung der Frage.)

Ludwig Wittgenstein, Philosophische Untersuchungen, 47, 1953

But what are the simple constituent parts of which reality is composed?—What are the simple constituent parts of a chair?—The bits of wood of which it is made? Or the molecules, or the atoms?— “Simple” means: not composite. And here the point is: in what sense ‘composite’? It makes no sense at all to speak absolutely of the ‘simple parts of a chair’. […] To the philosophical question: “Is the visual image of this tree composite, and what are its component parts?” the correct answer is: “That depends on what you understand by ‘composite’.” (And that is of course not an answer but a rejection of the question.)

Ludwig Wittgenstein, Philosophical Investigations, 47, 1953

Betrachte z.B. einmal die Vorgänge, die wir »Spiele« nennen. Ich meine Brettspiele, Kartenspiele, Ballspiel, Kampfspiele, usw. Was ist allen diesen gemeinsam? - Sag nicht: »Es muß ihnen etwas gemeinsam sein, sonst hießen sie nicht ›Spiele‹ « - sondern schau, ob ihnen allen etwas gemeinsam ist.

Ludwig Wittgenstein, Philosophische Untersuchungen, 66, 1953

Consider for example the proceedings that we call “games”. I mean board-games, card-games, ball-games, Olympic games, and so on. What is common to them all? – Don’t say: “There must be something common, or they would not be called ‘games’” – but look and see whether there is anything common to all.

Ludwig Wittgenstein, Philosophical Investigations, 66, 1953

Diese [philosophischen Probleme] sind freilich keine empirischen, sondern sie werden durch eine Einsicht in das Arbeiten unserer Sprache gelöst, und zwar so, daß dieses erkannt wird: entgegen einem Trieb, es mißzuverstehen. Diese Probleme werden gelöst, nicht durch Beibringen neuer Erfahrung, sondern durch Zusammenstellung des längst Bekannten. Die Philosophie ist ein Kampf gegen die Verhexung unsres Verstandes durch die Mittel unserer Sprache.

Ludwig Wittgenstein, Philosophische Untersuchungen, 109, 1953

These [philosophical problems] are, of course, not empirical problems; they are solved, rather, by looking into the workings of our language, and that in such a way as to make us recognize those workings: in despite of an urge to misunderstand them. The problems are solved, not by giving new information, but by arranging what we have always known. Philosophy is a battle against the bewitchment of our intelligence by means of language.

Ludwig Wittgenstein, Philosophical Investigations, 109, 1953

Fragen wir uns: Warum empfinden wir einen grammatischen Witz als tief? (Und das ist ja die philosophische Tiefe.)

Ludwig Wittgenstein, Philosophische Untersuchungen, 111, 1953

Let us ask ourselves: why do we feel a grammatical joke to be deep? (And that is what the depth of philosophy is.)

Ludwig Wittgenstein, Philosophical Investigations, 111, 1953

»Die Zahl meiner Freunde ist n und n² + 2n + 2 = 0.« Hat dieser Satz Sinn? Es ist ihm unmittelbar nicht anzusehen. Man sieht an diesem Beispiel, wie es zugehen kann, daß etwas aussieht wie ein Satz, den wir verstehen, was doch keinen Sinn ergibt.

Ludwig Wittgenstein, Philosophische Untersuchungen, 513, 1953

Or: “I have n friends and n² + 2n + 2 = 0”. Does this sentence make sense? This cannot be seen immediately. This example shews how it is that something can look like a sentence which we understand, and yet yield no sense.

Ludwig Wittgenstein, Philosophical Investigations, 513, 1953
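A quick arithmetic check (editorial, not Wittgenstein's) of why no number of friends can satisfy the equation: the quadratic has a negative discriminant, so both roots are complex,

```latex
n^2 + 2n + 2 = 0 \;\Longrightarrow\; n = \frac{-2 \pm \sqrt{4 - 8}}{2} = -1 \pm i,
```

which is why a string that looks like a sentence we understand can nonetheless yield no sense.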


Added to diary 19 January 2018

Die Sprache verkleidet den Gedanken. Und zwar so, dass man nach der äußeren Form des Kleides, nicht auf die Form des bekleideten Gedankens schließen kann; weil die äußere Form des Kleides nach ganz anderen Zwecken gebildet ist als danach, die Form des Körpers erkennen zu lassen.
Die stillschweigenden Abmachungen zum Verständnis der Umgangssprache sind enorm kompliziert.

Ludwig Wittgenstein, Tractatus Logico-Philosophicus, 4.002, 1921

Language disguises thought. So much so, that from the outward form of the clothing it is impossible to infer the form of the thought beneath it, because the outward form of the clothing is not designed to reveal the form of the body, but for entirely different purposes.
The tacit conventions on which the understanding of everyday language depends are enormously complicated.

Ludwig Wittgenstein, Tractatus Logico-Philosophicus, 4.002, 1921 [Translation: Pears/McGuinness]


Added to diary 19 January 2018

Denn die philosophischen Probleme entstehen, wenn die Sprache feiert.

Ludwig Wittgenstein, Philosophische Untersuchungen, 38, 1953

For philosophical problems arise when language goes on holiday.

Ludwig Wittgenstein, Philosophical Investigations, 38, 1953


Added to diary 15 January 2018

marcus-hutter

Natural Turing Machines. The final issue is the choice of Universal Turing machine to be used as the reference machine. The problem is that there is still subjectivity involved in this choice since what is simple on one Turing machine may not be on another. More formally, it can be shown that for any arbitrarily complex string x as measured against the UTM U there is another UTM machine U′ for which x has Kolmogorov complexity 1. This result seems to undermine the entire concept of a universal simplicity measure but it is more of a philosophical nuisance which only occurs in specifically designed pathological examples. The Turing machine U′ would have to be absurdly biased towards the string x which would require previous knowledge of x. The analogy here would be to hard-code some arbitrarily long complex number into the hardware of a computer system which is clearly not a natural design. To deal with this case we make the soft assumption that the reference machine is natural in the sense that no such specific biases exist. Unfortunately there is no rigorous definition of natural but it is possible to argue for a reasonable and intuitive definition in this context.

Hutter and Rathmanner 2011, Section 5.9 “Andrey Kolmogorov”
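The two facts Hutter and Rathmanner are invoking are standard results about Kolmogorov complexity (see Li and Vitányi for proofs). The invariance theorem says that for any two universal machines U and V there is a constant c_{U,V}, independent of the string x, with

```latex
|K_U(x) - K_V(x)| \le c_{U,V},
```

while for any fixed x one can construct a pathologically biased universal machine U′ with K_{U′}(x) as small as 1, the "absurdly biased" case the passage dismisses as unnatural.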

In section 2.4 we saw that Solomonoff’s prior is invariant under both reparametrization and regrouping, up to a multiplicative constant. But there is another form of language dependence, namely the choice of a universal Turing machine.

There are three principal responses to the threat of language dependence. First, one could accept it flat out, and admit that no language is better than any other. Second, one could admit that there is language dependence but argue that some languages are better than others. Third, one could deny language dependence, and try to show that there isn’t any.

For a defender of Solomonoff’s prior, I believe the second option is the most promising. If you accept language dependence flat out, why introduce universal Turing machines, incomputable functions, and other needlessly complicated things? And the third option is not available: there isn’t any way of getting around the fact that Solomonoff’s prior depends on the choice of universal Turing machine. Thus, we shall somehow try to limit the blow of the language dependence that is inherent to the framework. Williamson (2010) defends the use of a particular language by saying that an agent’s language gives her some information about the world she lives in. In the present framework, a similar response could go as follows. First, we identify binary strings with propositions or sensory observations in the way outlined in the previous section. Second, we pick a UTM so that the terms that exist in a particular agent’s language get low Kolmogorov complexity.

If the above proposal is unconvincing, the damage may be limited somewhat by the following result. Let K_U(x) be the Kolmogorov complexity of x relative to universal Turing machine U, and let K_T(x) be the Kolmogorov complexity of x relative to Turing machine T (which needn’t be universal). We have that K_U(x) ≤ K_T(x) + c_{U,T}. That is: the difference in Kolmogorov complexity relative to U and relative to T is bounded by a constant that depends only on these Turing machines, and not on x. (See Li and Vitanyi (1997, p. 104) for a proof.) This is somewhat reassuring. It means that no other Turing machine can outperform U infinitely often by more than a fixed constant. But we want to achieve more than that. If one picks a UTM that is biased enough to start with, strings that intuitively seem complex will get a very low Kolmogorov complexity. As we have seen, for any string x it is always possible to find a UTM U such that K_U(x) = 1. If K_U(x) = 1, the corresponding Solomonoff prior will be at least 2^−1. So for any binary string, it is always possible to find a UTM such that we assign that string prior probability greater than or equal to 1/2. Thus some way of discriminating between universal Turing machines is called for.

Vallinder 2012, Section 4.1 “Language dependence”
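A toy way to see the "biased reference machine" point: treat compressed length under DEFLATE as a crude stand-in for Kolmogorov complexity, and build a biased machine by preloading the compressor's dictionary with the target string itself (the `zdict` parameter of `zlib.compressobj`). This is an editorial illustration under that analogy, not the formal construction:

```python
import os
import zlib

def complexity(s: bytes, zdict: bytes = b"") -> int:
    """Compressed length as a crude proxy for Kolmogorov complexity
    relative to a 'reference machine' (DEFLATE + preset dictionary)."""
    c = zlib.compressobj(9, zlib.DEFLATED, zlib.MAX_WBITS, 9,
                         zlib.Z_DEFAULT_STRATEGY, zdict)
    return len(c.compress(s) + c.flush())

x = os.urandom(200)                 # an incompressible ("complex") string
k_plain = complexity(x)             # complexity on the plain machine
k_biased = complexity(x, zdict=x)   # machine "hard-coded" with x itself

# The machine biased toward x assigns it a far smaller complexity,
# even though x looks maximally complex to the plain machine.
print(k_plain, k_biased)
```

As with the formal result, the bias had to be built in with prior knowledge of x; no single preset dictionary makes every string simple.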


Added to diary 15 January 2018

mark-twain

It was a crisp and spicy morning in early October. The lilacs and laburnums, lit with the glory-fires of autumn, hung burning and flashing in the upper air, a fairy bridge provided by kind Nature for the wingless wild things that have their homes in the tree-tops and would visit together; the larch and the pomegranate flung their purple and yellow flames in brilliant broad splashes along the slanting sweep of the woodland; the sensuous fragrance of innumerable deciduous flowers rose upon the swooning atmosphere; far in the empty sky a solitary esophagus slept upon motionless wing; everywhere brooded stillness, serenity, and the peace of God.

Mark Twain, A Double Barrelled Detective Story, 1902


Added to diary 26 June 2018

ADDRESS TO THE VIENNA PRESS CLUB, NOVEMBER 21, 1897, DELIVERED IN GERMAN [Here in literal translation] […]

I am indeed the truest friend of the German language […]. I would only some changes effect. I would only the language method–the luxurious, elaborate construction compress, the eternal parenthesis suppress, do away with, annihilate; the introduction of more than thirteen subjects in one sentence forbid; the verb so far to the front pull that one it without a telescope discover can.

Mark Twain, The Awful German Language, in A Tramp Abroad, 1880


Added to diary 26 June 2018

mary-ann-bates

Policy makers repeatedly face this generalizability puzzle—whether the results of a specific program generalize to other contexts—and there has been a long-standing debate among policy makers about the appropriate response. But the discussion is often framed by confusing and unhelpful questions, such as: Should policy makers rely on less rigorous evidence from a local context or more rigorous evidence from elsewhere? And must a new experiment always be done locally before a program is scaled up?

These questions present false choices. Rigorous impact evaluations are designed not to replace the need for local data but to enhance their value. This complementarity between detailed knowledge of local institutions and global knowledge of common behavioral relationships is fundamental to the philosophy and practice of our work at the Abdul Latif Jameel Poverty Action Lab (J-PAL). […]

To give a sense of our philosophy, it may help to first examine four common, but misguided, approaches about evidence-based policy making that our work seeks to resolve.

Can a study inform policy only in the location in which it was undertaken? Kaushik Basu has argued that an impact evaluation done in Kenya can never tell us anything useful about what to do in Rwanda because we do not know with certainty that the results will generalize to Rwanda. To be sure, we will never be able to predict human behavior with certainty, but the aim of social science is to describe general patterns that are helpful guides, such as the prediction that, in general, demand falls when prices rise. Describing general behaviors that are found across settings and time is particularly important for informing policy. The best impact evaluations are designed to test these general propositions about human behavior.

Should we use only whatever evidence we have from our specific location? In an effort to ensure that a program or policy makes sense locally, researchers such as Lant Pritchett and Justin Sandefur argue that policy makers should mainly rely on whatever evidence is available locally, even if it is not of very good quality. But while good local data are important, to suggest that decision makers should ignore all evidence from other countries, districts, or towns because of the risk that it might not generalize would be to waste a valuable resource. The challenge is to pair local information with global evidence and use each piece of evidence to help understand, interpret, and complement the other.

Should a new local randomized evaluation always precede scale-up? One response to the concern for local relevance is to use the global evidence base as a source for policy ideas but always to test a policy with a randomized evaluation locally before scaling it up. Given J-PAL’s focus on this method, our partners often assume that we will always recommend that another randomized evaluation be done—we do not. With limited resources and evaluation expertise, we cannot rigorously test every policy in every country in the world. We need to prioritize. For example, there have been more than 30 analyses of 10 randomized evaluations in nine low- and middle-income countries on the effects of conditional cash transfers. While there is still much that could be learned about the optimal design of these programs, it is unlikely to be the best use of limited funds to do a randomized impact evaluation for every new conditional cash transfer program when there are many other aspects of antipoverty policy that have not yet been rigorously tested.

Must an identical program or policy be replicated a specific number of times before it is scaled up? One of the most common questions we get asked is how many times a study needs to be replicated in different contexts before a decision maker can rely on evidence from other contexts. We think this is the wrong way to think about evidence. There are examples of the same program being tested at multiple sites: For example, a coordinated set of seven randomized trials of an intensive graduation program to support the ultra-poor in seven countries found positive impacts in the majority of cases. This type of evidence should be weighted highly in our decision making. But if we only draw on results from studies that have been replicated many times, we throw away a lot of potentially relevant information. […]

Focusing on mechanisms, and then judging whether a mechanism is likely to apply in a new setting, has a number of practical advantages for policy making. […] We suggest the use of a four-step generalizability framework that seeks to answer a crucial question at each step:

Step 1: What is the disaggregated theory behind the program?
Step 2: Do the local conditions hold for that theory to apply?
Step 3: How strong is the evidence for the required general behavioral change?
Step 4: What is the evidence that the implementation process can be carried out well?

Bates, M. A., & Glennerster, R. (2017). “The generalizability puzzle”. Stanford Social Innovation Review, Summer 2017. Leland Stanford Jr. University.


Added to diary 22 March 2018

matthew-adam-kocher

In a preventive war scenario, the rising state (the one that is becoming more powerful) would like to guarantee that it would not use its powerful position to exploit the declining state in the future. The declining state would like to accept such a guarantee. Both states would prefer such a guarantee to risky and costly fighting. Yet both states know that the guarantee would be worthless once the rising state achieves a dominant position. Hence, the declining state may launch a war now in order to avoid being exploited in the future.

Matthew Adam Kocher, Commitment Problems and Preventive War, 8 August 2013

Complete-information bargaining can break down in this setting if the shift in the distribution of power is sufficiently large and rapid. To see why, consider the situation confronting a temporarily weak bargainer who expects to be stronger in the future (that is, the amount that this bargainer can lock in will increase). In order to avoid the inefficient use of power, this bargainer must buy off its temporarily strong adversary. To do this, the weaker party must promise the stronger at least as much of the flow as the latter can lock in. But when the once-weak bargainer becomes stronger, it may want to exploit its better bargaining position and renege on the promised transfer. Indeed, if the shift in the distribution of power is sufficiently large and rapid, the once-weak bargainer is certain to want to renege. Foreseeing this, the temporarily strong adversary uses its power to lock in a higher payoff while it still has the chance.

Robert Powell (2006). War as a Commitment Problem. International Organization, 60(1), 169-203. doi:10.1017/S0020818306060061
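
Powell’s mechanism can be seen in a stripped-down model. The sketch below is my own construction with invented numbers, not Powell’s notation: a pie of 1 is divided each period and the future is discounted by delta. The declining state D can fight now to lock in share p1 (at cost c) forever; after the shift it could lock in only p2, so the only credible post-shift promise is D’s reservation value p2 - c.

```python
def peace_feasible(p1, p2, c, delta):
    """Toy two-phase commitment model (illustrative, not Powell's own setup).

    D can fight today and lock in share p1 of the pie, minus fighting cost c,
    in every period. After the power shift, the rising state R will renege on
    any promise above D's post-shift reservation value p2 - c, so the best
    credible peaceful deal gives D the whole pie today plus that stream.
    """
    war_value = (p1 - c) / (1 - delta)               # fight now, lock in p1
    best_peace = 1 + delta * (p2 - c) / (1 - delta)  # today's full pie + credible stream
    return best_peace >= war_value

# No shift in power: a credible peaceful deal exists.
print(peace_feasible(p1=0.9, p2=0.9, c=0.1, delta=0.9))  # True
# Large, rapid shift: no feasible offer today compensates D, so D attacks.
print(peace_feasible(p1=0.9, p2=0.5, c=0.1, delta=0.9))  # False
```

With no shift, even costly war is avoidable; once the shift is large and rapid, every promise R could make becomes worthless, exactly as in the passage above.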


Added to diary 15 March 2018

nate-silver

Few news organizations gave the story more velocity than The New York Times. On the morning of Oct. 29, Comey stories stretched across the print edition’s front page, accompanied by a photo showing Clinton and her aide Huma Abedin, Weiner’s estranged wife. Although some of these articles contained detailed reporting, the headlines focused on speculation about the implications for the horse race — “NEW EMAILS JOLT CLINTON CAMPAIGN IN RACE’S LAST DAYS.” […]

Clinton’s standing in the polls fell sharply. She’d led Trump by 5.9 percentage points in FiveThirtyEight’s popular vote projection at 12:01 a.m. on Oct. 28. A week later — after polls had time to fully reflect the letter — her lead had declined to 2.9 percentage points. That is to say, there was a shift of about 3 percentage points against Clinton. And it was an especially pernicious shift for Clinton because (at least according to the FiveThirtyEight model) Clinton was underperforming in swing states as compared to the country overall. In the average swing state, Clinton’s lead declined from 4.5 percentage points at the start of Oct. 28 to just 1.7 percentage points on Nov. 4. If the polls were off even slightly, Trump could be headed to the White House. […]

So while one can debate the magnitude of the effect, there’s a reasonably clear consensus of the evidence that the Comey letter mattered — probably by enough to swing the election. This ought not be one of the more controversial facts about the 2016 campaign; the data is pretty straightforward. Why the media covered the story as it did and how to weigh the Comey letter against the other causes for Clinton’s defeat are the more complicated parts of the story.

Nate Silver, The Comey Letter Probably Cost Clinton The Election, FiveThirtyEight, 3 May 2017


Added to diary 30 January 2018

neima-jahromi

By February, when blizzards coat the oily streets, the world outside will resemble the bar’s Black Manta, a rye drink with black sesame and on-trend charcoal, made unsettlingly frothy with the aid of egg whites.

Neima Jahromi, Dromedary Bar, Bar Tab, New Yorker Magazine, 4 December 2017


Added to diary 16 January 2018

nicholas-shackel

A Motte and Bailey castle is a medieval system of defence in which a stone tower on a mound (the Motte) is surrounded by an area of land (the Bailey) which in turn is encompassed by some sort of a barrier such as a ditch. Being dark and dank, the Motte is not a habitation of choice. The only reason for its existence is the desirability of the Bailey, which the combination of the Motte and ditch makes relatively easy to retain despite attack by marauders. When only lightly pressed, the ditch makes small numbers of attackers easy to defeat as they struggle across it: when heavily pressed the ditch is not defensible and so neither is the Bailey. Rather one retreats to the insalubrious but defensible, perhaps impregnable, Motte. Eventually the marauders give up, when one is well placed to reoccupy desirable land.

For my purposes the desirable but only lightly defensible territory of the Motte and Bailey castle, that is to say, the Bailey, represents a philosophical doctrine or position with similar properties: desirable to its proponent but only lightly defensible. The Motte is the defensible but undesired position to which one retreats when hard pressed. I think it is evident that Troll’s Truisms have the Motte and Bailey property, since the exciting falsehoods constitute the desired but indefensible region within the ditch whilst the trivial truth constitutes the defensible but dank Motte to which one may retreat when pressed.

An entire doctrine or theory may be a Motte and Bailey Doctrine just by virtue of having a central core of defensible but not terribly interesting or original doctrines surrounded by a region of exciting but only lightly defensible doctrines. Just as the medieval Motte was often constructed by the stonemason’s art from stone in the surrounding land, the Motte of dull but defensible doctrines is often constructed by the use of the sophist’s art from the desired but indefensible doctrines lying within the ditch.

Diagnosis of a philosophical doctrine as being a Motte and Bailey Doctrine is invariably fatal. Once made it is relatively obvious to those familiar with the doctrine that the doctrine’s survival required a systematic vacillation between exploiting the desired territory and retreating to the Motte when pressed.

The dialectic between many refutations of specific postmodernist doctrines and the postmodernist defences correspond exactly to the dynamics of Motte and Bailey Doctrines. When pressed with refutation the postmodernists retreat to their Mottes, only to venture out and repossess the desired territory when the refutation is not in immediate evidence. For these reasons, I think the proper diagnosis of postmodernism is precisely that it is a Motte and Bailey Doctrine.

Shackel, N. (2005). The vacuity of postmodernist methodology. Metaphilosophy, 36(3), 295-320.

So the motte-and-bailey doctrine is when you make a bold, controversial statement. Then when somebody challenges you, you claim you were just making an obvious, uncontroversial statement, so you are clearly right and they are silly for challenging you. Then when the argument is over you go back to making the bold, controversial statement.

Some classic examples:

  1. The religious group that acts for all the world like God is a supernatural creator who builds universes, creates people out of other people’s ribs, parts seas, and heals the sick when asked very nicely (bailey). Then when atheists come around and say maybe there’s no God, the religious group objects “But God is just another name for the beauty and order in the Universe! You’re not denying that there’s beauty and order in the Universe, are you?” (motte). Then when the atheists go away they get back to making people out of other people’s ribs and stuff.

  2. Or…”If you don’t accept Jesus, you will burn in Hell forever.” (bailey) But isn’t that horrible and inhuman? “Well, Hell is just another word for being without God, and if you choose to be without God, God will be nice and let you make that choice.” (motte) Oh, well that doesn’t sound so bad, I’m going to keep rejecting Jesus. “But if you reject Jesus, you will BURN in HELL FOREVER and your body will be GNAWED BY WORMS.” But didn’t you just… “Metaphorical worms of godlessness!”

  3. The feminists who constantly argue about whether you can be a real feminist or not without believing in X, Y and Z and wanting to empower women in some very specific way, and who demand everybody support controversial policies like affirmative action or affirmative consent laws (bailey). Then when someone says they don’t really like feminism very much, they object “But feminism is just the belief that women are people!” (motte) Then once the person hastily retreats and promises he definitely didn’t mean women aren’t people, the feminists get back to demanding everyone support affirmative action because feminism, or arguing about whether you can be a feminist and wear lipstick.

  4. Proponents of pseudoscience sometimes argue that their particular form of quackery will cure cancer or take away your pains or heal your crippling injuries (bailey). When confronted with evidence that it doesn’t work, they might argue that people need hope, and even a placebo solution will often relieve stress and help people feel cared for (motte). In fact, some have argued that quackery may be better than real medicine for certain untreatable diseases, because neither real nor fake medicine will help, but fake medicine tends to be more calming and has fewer side effects. But then once you leave the quacks in peace, they will go back to telling less knowledgeable patients that their treatments will cure cancer.

  5. Critics of the rationalist community note that it pushes controversial complicated things like Bayesian statistics and utilitarianism (bailey) under the name “rationality”, but when asked to justify itself defines rationality as “whatever helps you achieve your goals”, which is so vague as to be universally unobjectionable (motte). Then once you have admitted that more rationality is always a good thing, they suggest you’ve admitted everyone needs to learn more Bayesian statistics.

  6. Likewise, singularitarians who predict with certainty that there will be a singularity, because “singularity” just means “a time when technology is so different that it is impossible to imagine” – and really, who would deny that technology will probably get really weird (motte)? But then every other time they use “singularity”, they use it to refer to a very specific scenario of intelligence explosion, which is far less certain and needs a lot more evidence before you can predict it (bailey).

The motte and bailey doctrine sounds kind of stupid and hard-to-fall-for when you put it like that, but all fallacies sound that way when you’re thinking about them. More important, it draws its strength from people’s usual failure to debate specific propositions rather than vague clouds of ideas. If I’m debating “does quackery cure cancer?”, it might be easy to view that as a general case of the problem of “is quackery okay?” or “should quackery be illegal?”, and from there it’s easy to bring up the motte objection.

Scott Alexander, “All in all, another brick in the motte”, Slate Star Codex, 3 November 2014

Suppose I define socialism as, “a system of totalitarian control over the economy, leading inevitably to mass poverty and death.” As a detractor of socialism, this is superficially tempting. But it’s sheer folly, for two distinct reasons.

First, this plainly isn’t what most socialists mean by “socialism.” When socialists call for socialism, they’re rarely requesting totalitarianism, poverty, and death. And when non-socialists listen to socialists, that’s rarely what they hear, either.

Second, if you buy this definition, there’s no point studying actual socialist regimes to see if they in fact are “totalitarian” or “inevitably lead to mass poverty and death.” Mere words tell you what you need to know.

What’s the problem? The problem is that I’ve provided an argumentative definition of socialism. Instead of rigorously distinguishing between what we’re talking about and what we’re saying about it, an argumentative definition deliberately interweaves the two.

The hidden hope, presumably, is that if we control the way people use words, we’ll also control what people think about the world. And it is plainly possible to trick the naive using these semantic tactics. But the epistemic cost is high: You preemptively end conversation with anyone who substantively disagrees with you - and cloud your own thinking in the process. It’s far better to neutrally define socialism as, say, “Government ownership of most of the means of production,” or maybe, “The view that each nation’s wealth is justly owned collectively by its citizens.” You can quibble with these definitions, but people can accept either definition regardless of their position on socialism itself.

Modern discussions are riddled with argumentative definitions, but the most prominent instance, lately, is feminism. Google “feminism,” and what do you get? The top hit: “the advocacy of women’s rights on the basis of the equality of the sexes.” I’ve heard many variants on this: “the theory that men and women should be treated equally,” or even “the radical notion that women are people.”

What’s argumentative about these definitions? Well, in this 2016 Washington Post/Kaiser Family Foundation survey, 40% of women and 67% of men did not consider themselves “feminists.” But over 90% of both genders agreed that “men and women should be social, political, and economic equals.” If Google’s definition of feminism conformed to standard English usage, these patterns would make very little sense. Imagine a world where 90% of men say they’re “bachelors,” but only 40% say they’re “unmarried.”

Bryan Caplan, Against Argumentative Definitions: The Case of Feminism, EconLog, 20 February 2018


Added to diary 21 April 2018

nick-bostrom

So, “maximize expected value”, say, is a quantity we could define. It just doesn’t help us very much, because whenever you try to do something specific you’re still virtually as far away as you had been. On the other hand, if you set some more concrete objective, like maximize the number of people in this room, or something like that, we can now easily tell like how many people there are, and we have ideas about how we could maximize it. So any particular action we think of we might easily see how it fares on this objective of maximizing the people in this room. However, we might feel it’s very difficult to get strong reasons for knowing whether more people in this room is better, or whether there is some inverse relationship. A good signpost would strike a reasonable compromise between being visible from afar and also being such that we can have strong reason to be sure of its sign.

Nick Bostrom, Crucial Considerations and Wise Philanthropy, Good Done Right conference, 2014


Added to diary 12 December 2018

Suppose you’re an administrator here in Oxford, you’re working in the Computer Science department, and you’re the secretary there. Suppose you find some way to make the department run slightly more efficiently: you create this mailing list so that everybody can, when they have an announcement to make, just email it to the mailing list rather than having to put each person individually in the address field. And that’s a useful thing, that’s a great thing: it didn’t cost anything, other than a one-off cost, and now everybody can go about their business more easily. From this perspective, it’s very non-obvious whether that is, in fact, a good thing. It might be contributing to AI—that might be the main effect of this, other than the very small general effect on economic growth. And it may well be that you have made the world worse in expectation by making this little efficiency improvement. So this project of trying to think this through is in a sense a little bit like the Nietzschean Umwertung aller Werte—the revaluation of all values—a project that he never had a chance to complete, because he went mad before.

Nick Bostrom, Crucial Considerations and Wise Philanthropy, Good Done Right conference, 2014


Added to diary 12 December 2018

nicolas-niarchos

Ass Juice, a rather unpleasantly named punch, is one of the specials at this Alphabet City dive. […] The concoction is also deceptively priced: one is four dollars, and two are nine. The ingenious valuation recently led a former hedge-fund manager to declaim, “It’s very New York!” Around him, a mix of yuppies and leather-clad locals clustered at the bar while televisions above them alternated between shots of rowdy concerts and graphic pornography. “Is that legal?” a drunk patron queried, of the questionable fare onscreen. Nearby, a man in a checked shirt took a swig from a can of Genesee beer and mumbled to his friend, “I’m more than just a back-end-data guy.”

Nicolas Niarchos, Double Down Saloon, Bar Tab, New Yorker Magazine, 5 February 2018


Added to diary 03 February 2018

paul-christiano

When I suggest that supporting technological development may be an efficient way to improve the world, I often encounter the reaction:

Markets already incentivize technological development; why would we expect altruists to have much impact working on it?

When I talk about more extreme cases, like subsidizing corporate R&D or tech startups, I seem to get this reaction even more strongly and with striking regularity: “But that’s a for-profit enterprise, right? If it were worthwhile to spend any more money on R&D, then they’d do it.” […]

If I notice a promising opportunity and believe that I can capture all of the gains from pursuing it, I might suspect that the market would have scooped it up if it were really as good as it seems. But if I can only capture some of the gains, this reasoning falls apart. If the opportunity involves diminishing returns and allows capturing only a tiny fraction of all of the social gains, then it might be worth investing a tiny bit for profit, leaving significant room for further altruistic investment. I think the existence of diminishing returns is really doing the work in this argument; it will generally cause even very good altruistic opportunities to be very profitable at first, despite a huge gap between social value and profit potential.

Paul Christiano, Altruism and profit, Rational Altruist, 11 July 2013

This part can be hard to follow: “If the opportunity involves diminishing returns and allows capturing only a tiny fraction of all of the social gains”. The explanation is: the more convex the demand curve (quickly diminishing returns), the greater the fraction of total surplus captured by a monopolist. The more concave the demand (slowly diminishing returns), the less the fraction of social surplus captured by a monopolist. See Malueg 1994 (Monopoly Output and Welfare: The Role of Curvature of the Demand Function, Proposition 2). The same result extends to Cournot oligopoly (Anderson and Renault 2001, Efficiency and surplus bounds in Cournot competition).

And why does Paul say “very profitable at first”? Because every natural monopoly must end someday: patents expire, know-how diffuses, capital depreciates, and so on.
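
The logic of “very profitable at first” can also be made concrete with a toy calculation. All numbers below are invented for illustration; the square-root technology and the 5% capture rate are assumptions, not from the post:

```python
def social_value(x):
    """Illustrative diminishing-returns technology (assumed): S(x) = 100 * sqrt(x)."""
    return 100 * x ** 0.5

def marginal_social_value(x):
    return 50 / x ** 0.5  # derivative of social_value

CAPTURE = 0.05  # the innovator appropriates only 5% of the social gains (assumed)

# A profit-maximizer invests until its *captured* marginal return falls to
# $1 per $1 invested: CAPTURE * 50 / sqrt(x) = 1, i.e. x = (CAPTURE * 50)**2.
x_private = (CAPTURE * 50) ** 2            # 6.25: the opportunity is very profitable at first

# At that stopping point, society still gains $20 per extra dollar invested...
residual_return = marginal_social_value(x_private)  # 20.0

# ...so an altruist could keep funding until the *social* marginal return
# hits $1, at x = 50**2 = 2500, four hundred times the private investment.
x_social = 50 ** 2
```

The firm finds the first dollars extremely profitable, yet it stops while each additional dollar still produces twenty dollars of social value, which is exactly the “significant room for further altruistic investment” in the quote.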


Added to diary 01 April 2018

I think it’s useful to split up plans into two parts:

  1. Trying to achieve some observable goals, where we can make many attempts and improve each time.
  2. Hoping that achieving these goals will lead to a positive impact. […]

So I think the upshot is to choose plans for which the arguments supporting step (2) are as simple as possible. Arguments without many moving parts, particularly which are substantiated by a direct appeal to historical regularity, may hold up even if you never get to check them. Conversely, load as much of the difficult work as possible into step (1).

Paul Christiano, Guesswork, feedback, and impact, Rational Altruist, 6 December 2012


Added to diary 28 March 2018

No matter how “dumb” the monkey is, if it is unbiased then there is no free lunch. For a time we can do what we desire at the expense of what we cesire, but any cognitive policy that does so will eventually become unappealing.

Paul Christiano, The monkey and the machine: a dual process theory, The sideways view, 9 February 2017


Added to diary 12 February 2018

paul-graham

Hacker News is definitely useful. I’ve learned a lot from things I’ve read on HN. I’ve written several essays that began as comments there. So I wouldn’t want the site to go away. But I would like to be sure it’s not a net drag on productivity. What a disaster that would be, to attract thousands of smart people to a site that caused them to waste lots of time. I wish I could be 100% sure that’s not a description of HN. I feel like the addictiveness of games and social applications is still a mostly unsolved problem. The situation now is like it was with crack in the 1980s: we’ve invented terribly addictive new things, and we haven’t yet evolved ways to protect ourselves from them. We will eventually, and that’s one of the problems I hope to focus on next.

Paul Graham, What I’ve Learned from Hacker News, February 2009


Added to diary 27 June 2018

paul-niehaus

Public employment programs play a major role in the anti-poverty strategy of many developing countries. Besides the direct wages provided to the poor, such programs are likely to affect their welfare by changing broader labor market outcomes including wages and private employment. These general equilibrium effects may accentuate or attenuate the direct benefits of the program, but have been difficult to estimate credibly. We estimate the general equilibrium effects of a technological reform that improved the implementation quality of India’s public employment scheme on the earnings of the rural poor, using a large-scale experiment which randomized treatment across sub-districts of 60,000 people. We find that this reform had a large impact on the earnings of low-income households, and that these gains were overwhelmingly driven by higher private-sector earnings (90%) as opposed to earnings directly from the program (10%). These earnings gains reflect a 5.7% increase in market wages for rural unskilled labor, and a similar increase in reservation wages. We do not find evidence of distortions in factor allocation, including labor supply, migration, and land use. Our results highlight the importance of accounting for general equilibrium effects in evaluating programs, and also illustrate the feasibility of using large-scale experiments to study such effects.

Karthik Muralidharan, Paul Niehaus, and Sandip Sukhtankar, General Equilibrium Effects of (Improving) Public Employment Programs: Experimental Evidence from India, 2017

the cool thing is it mattered – study helped convince gov’t not to scrap the program, putting $100Ms annually into hands of the poor

@paulfniehaus, Twitter


Added to diary 15 January 2018

plato

The rhetor doesn’t know the things themselves, what is good or bad, what is fine or shameful or just or unjust, but has devised persuasion about them so that though he doesn’t know, among those who don’t know he appears to know.

Plato, Gorgias 459d


Added to diary 15 January 2018

As I say, then, cookery is flattery disguised as medicine; and cosmetics is disguised as gymnastics in the same way – crooked, deceptive, mean, slavish, deceiving by shaping, colouring, smoothing, dressing, making people assume a beauty which is not their own, and neglecting the beauty of their own which would come through gymnastics. […] As cosmetics is to gymnastics, so is sophistry to legislation, and as cookery is to medicine, so is rhetoric to justice.

Plato, Gorgias 465b


Added to diary 15 January 2018

rachel-glennerster

Policy makers repeatedly face this generalizability puzzle—whether the results of a specific program generalize to other contexts—and there has been a long-standing debate among policy makers about the appropriate response. But the discussion is often framed by confusing and unhelpful questions, such as: Should policy makers rely on less rigorous evidence from a local context or more rigorous evidence from elsewhere? And must a new experiment always be done locally before a program is scaled up?

These questions present false choices. Rigorous impact evaluations are designed not to replace the need for local data but to enhance their value. This complementarity between detailed knowledge of local institutions and global knowledge of common behavioral relationships is fundamental to the philosophy and practice of our work at the Abdul Latif Jameel Poverty Action Lab (J-PAL). […]

To give a sense of our philosophy, it may help to first examine four common, but misguided, approaches to evidence-based policy making that our work seeks to address.

Can a study inform policy only in the location in which it was undertaken? Kaushik Basu has argued that an impact evaluation done in Kenya can never tell us anything useful about what to do in Rwanda because we do not know with certainty that the results will generalize to Rwanda. To be sure, we will never be able to predict human behavior with certainty, but the aim of social science is to describe general patterns that are helpful guides, such as the prediction that, in general, demand falls when prices rise. Describing general behaviors that are found across settings and time is particularly important for informing policy. The best impact evaluations are designed to test these general propositions about human behavior.

Should we use only whatever evidence we have from our specific location? In an effort to ensure that a program or policy makes sense locally, researchers such as Lant Pritchett and Justin Sandefur argue that policy makers should mainly rely on whatever evidence is available locally, even if it is not of very good quality. But while good local data are important, to suggest that decision makers should ignore all evidence from other countries, districts, or towns because of the risk that it might not generalize would be to waste a valuable resource. The challenge is to pair local information with global evidence and use each piece of evidence to help understand, interpret, and complement the other.

Should a new local randomized evaluation always precede scale up? One response to the concern for local relevance is to use the global evidence base as a source for policy ideas but always to test a policy with a randomized evaluation locally before scaling it up. Given J-PAL’s focus on this method, our partners often assume that we will always recommend that another randomized evaluation be done—we do not. With limited resources and evaluation expertise, we cannot rigorously test every policy in every country in the world. We need to prioritize. For example, there have been more than 30 analyses of 10 randomized evaluations in nine low- and middle- income countries on the effects of conditional cash transfers. While there is still much that could be learned about the optimal design of these programs, it is unlikely to be the best use of limited funds to do a randomized impact evaluation for every new conditional cash transfer program when there are many other aspects of antipoverty policy that have not yet been rigorously tested.

Must an identical program or policy be replicated a specific number of times before it is scaled up? One of the most common questions we get asked is how many times a study needs to be replicated in different contexts before a decision maker can rely on evidence from other contexts. We think this is the wrong way to think about evidence. There are examples of the same program being tested at multiple sites: For example, a coordinated set of seven randomized trials of an intensive graduation program to support the ultra-poor in seven countries found positive impacts in the majority of cases. This type of evidence should be weighted highly in our decision making. But if we only draw on results from studies that have been replicated many times, we throw away a lot of potentially relevant information. […]

Focusing on mechanisms, and then judging whether a mechanism is likely to apply in a new setting, has a number of practical advantages for policy making. […] We suggest the use of a four-step generalizability framework that seeks to answer a crucial question at each step:

Step 1: What is the disaggregated theory behind the program?
Step 2: Do the local conditions hold for that theory to apply?
Step 3: How strong is the evidence for the required general behavioral change?
Step 4: What is the evidence that the implementation process can be carried out well?

Bates, M. A., & Glennerster, R. (2017). “The generalizability puzzle”. Stanford Social Innovation Review, Summer 2017. Leland Stanford Jr. University.


Added to diary 22 March 2018

So, is this the recipe for moving from innovation to testing to scale: Evaluate a program, work with an organization that can scale, replicate in multiple contexts, and scale to improve millions of lives?

That model can work. It was appropriate for the graduation program, which is expensive and thus required a higher burden of evidence of effectiveness before scaling. The graduation program is also very complex implying the results may well vary by implementer, so it is worth testing with several implementers and scaling with those implementers who saw success.

But we shouldn’t conclude that this is the only model for getting to scale. In the rest of this post I will discuss examples of the “innovate, test, scale” approach in which there was no need for replication in multiple contexts; others where the testing was (appropriately) done with a different type of organization than that which scaled it; and finally, examples of evidence impacting millions of lives when it was not a program that was tested, but a theory.

Rachel Glennerster, When do innovation and evidence change lives?


Added to diary 15 January 2018

rachel-laudan

Animals, if you think of the standard picture of a cow, they first of all spend a lot of time wandering around, chewing grass, which is tough. And then they have stomachs and they spend much of the day digesting this food. It takes a huge amount of energy to digest food. So that when you cook, what you are essentially doing is outsourcing digesting–chewing and digesting–into the kitchen. And doing it previously. And that saves a lot of energy for the humans who are lucky enough to eat the cooked food. Of course, the energy has to come from somewhere, and part of it is from the thermal energy of the fire; but part of it is from the energy of the people or animals or later on wind or water or steam that are doing the hard work of grinding.

Rachel Laudan on the History of Food and Cuisine, EconTalk, 17 August 2015


Added to diary 15 January 2018

ralph-langner

Unrecognized by most who have written on Stuxnet, the malware contains two strikingly different attack routines. While literature on the subject has focused almost exclusively on the smaller and simpler attack routine that changes the speeds of centrifuge rotors, the “forgotten” routine is about an order of magnitude more complex and qualifies as a plain nightmare for those who understand industrial control system security. Viewing both attacks in context is a prerequisite for understanding the operation and the likely reasoning behind the scenes.

Both attacks aim at damaging centrifuge rotors, but use different tactics. The first (and more complex) attack attempts to over-pressurize centrifuges, the second attack tries to over-speed centrifuge rotors and to take them through their critical (resonance) speeds. […]

But how does one use thousands of fragile centrifuges in a sensitive industrial process that doesn’t tolerate even minor equipment hiccups? In order to achieve that, Iran uses a Cascade Protection System which is quite unique as it is designed to cope with ongoing centrifuge trouble by implementing a crude version of fault tolerance. […]

The cyber attack against the Cascade Protection System infects Siemens S7-417 controllers with a matching configuration. The S7-417 is a top-of-the-line industrial controller for big automation tasks. In Natanz, it is used to control the valves and pressure sensors of up to six cascades (or 984 centrifuges) that share common feed, product, and tails stations. Immediately after infection the payload of this early Stuxnet variant takes over control completely. Legitimate control logic is executed only as long as malicious code permits it to do so; it gets completely de-coupled from electrical input and output signals. The attack code makes sure that when the attack is not activated, legitimate code has access to the signals; in fact it is replicating a function of the controller’s operating system that would normally do this automatically but was disabled during infection.

In what is known as a man-in-the-middle scenario in cyber security, the input and output signals are passed from the electrical peripherals to the legitimate program logic and vice versa by attack code that has positioned itself “in the middle”.

Things change after activation of the attack sequence, which is triggered by a combination of highly specific process conditions that are constantly monitored by the malicious code. Then, the much-publicized manipulation of process values inside the controller occurs. Process input signals (sensor values) are recorded for a period of 21 seconds. Those 21 seconds are then replayed in a constant loop during the execution of the attack, and will ultimately show on SCADA screens in the control room, suggesting normal operation to human operators and any software-implemented alarm routines. During the attack sequence, legitimate code continues to execute but receives fake input values, and any output (actuator) manipulations of legitimate control logic no longer have any effect. […]
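
The record-and-replay trick described above can be sketched in a few lines. This is purely an illustrative reconstruction (all names, and the one-reading-per-second timing, are my own assumptions, not Stuxnet code): a man-in-the-middle keeps a rolling 21-second window of real sensor values, then loops that window back once the attack activates.

```python
# Illustrative sketch of the record-and-replay tactic Langner describes:
# sensor values are captured for 21 seconds, then looped back to the
# legitimate logic and the SCADA layer while the real process diverges.
# All names here are hypothetical; this is not Stuxnet's actual code.

from collections import deque

class ReplayMiddleman:
    """Sits between the sensor inputs and the legitimate control logic."""

    RECORD_SECONDS = 21  # length of the captured window

    def __init__(self):
        self.recording = deque()
        self.attack_active = False
        self._replay_pos = 0

    def on_input(self, t, sensor_value):
        """Called once per second with the live sensor reading."""
        if not self.attack_active:
            # Pre-attack: keep a rolling 21-second window of real values.
            self.recording.append(sensor_value)
            if len(self.recording) > self.RECORD_SECONDS:
                self.recording.popleft()
            return sensor_value  # pass the real value through untouched
        # Attack: replay the captured window in a constant loop, so
        # operators and alarm routines keep seeing "normal" data.
        value = self.recording[self._replay_pos % len(self.recording)]
        self._replay_pos += 1
        return value
```

On the real controller the replayed values feed both the (decoupled) legitimate logic and the SCADA layer; here a single pass-through stands in for both.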

The detailed pin-point manipulations of these sub-controllers indicate a deep physical and functional knowledge of the target environment; whoever provided the required intelligence may as well know the favorite pizza toppings of the local head of engineering. […]

The attack continues until the attackers decide that enough is enough, based on monitoring centrifuge status, most likely vibration sensors, which suggests a mission abort before the matter hits the fan. If the idea was catastrophic destruction, one would simply have to sit and wait. But causing a solidification of process gas would have resulted in simultaneous destruction of hundreds of centrifuges per infected controller. While at first glance this may sound like a goal worthwhile achieving, it would also have blown cover since its cause would have been detected fairly easily by Iranian engineers in post mortem analysis. The implementation of the attack with its extremely close monitoring of pressures and centrifuge status suggests that the attackers instead took great care to avoid catastrophic damage. The intent of the overpressure attack was more likely to increase rotor stress, thereby causing rotors to break early – but not necessarily during the attack run.

Nevertheless, the attackers faced the risk that the attack might not work at all because it is so overengineered that even the slightest oversight – or any configuration change – would have resulted in zero impact or, worst case, in a program crash that would have been detected by Iranian engineers quickly. It is obvious and documented later in this paper that over time Iran did change several important configuration details such as the number of centrifuges and enrichment stages per cascade, all of which would have rendered the overpressure attack useless; a fact that the attackers must have anticipated.

Whatever the effect of the overpressure attack was, the attackers decided to try something different in 2009. That may have been motivated by the fact that the overpressure attack was lethal just by accident, that it didn’t achieve anything, or – that somebody simply decided to check out something new and fresh.

The new variant that was not discovered until 2010 was much simpler and much less stealthy than its predecessor. It also attacked a completely different component: the Centrifuge Drive System (CDS) that controls rotor speeds. The attack routines for the overpressure attack were still contained in the payload, but no longer executed – a fact that must be viewed as deficient OPSEC. It provided us by far the best forensic evidence for identifying Stuxnet’s target, and without the new, easy-to-spot variant the earlier predecessor may never have been discovered. That also means that the most aggressive cyber-physical attack tactics would still be unknown to the public – unavailable for use in copycat attacks, and unusable as a deterrent display of cyber power.

Stuxnet’s early version had to be physically installed on a victim machine, most likely a portable engineering system, or it could have been passed on a USB stick carrying an infected configuration file for Siemens controllers. Once the configuration file was opened by the vendor’s engineering software, the respective computer was infected. But no engineering software to open the malicious file, equals no propagation. That must have seemed to be insufficient or impractical for the new version, as it introduced a method of self-replication that allowed it to spread within trusted networks and via USB sticks even on computers that did not host the engineering software application. The extended dropper suggests that the attackers had lost the capability to transport the malware to its destination by directly infecting the systems of authorized personnel, or that the Centrifuge Drive System was installed and configured by other parties to which direct access was not possible. The self-replication would ultimately even make it possible to infiltrate and identify potential clandestine nuclear sites that the attackers didn’t know about. All of a sudden, Stuxnet became equipped with the latest and greatest MS Windows exploits and stolen digital certificates as the icing on the cake, allowing the malicious software to pose as legitimate driver software and thus not be rejected by newer versions of the Windows operating system. Obviously, organizations had joined the club that have a stash of zero-days to choose from and could pop up stolen certificates just like that. Whereas the development of the overpressure attack can be viewed as a process that could be limited to an in-group of top notch industrial control system security experts and coders who live in an exotic ecosystem quite remote from IT security, the circle seems to have gotten much wider, with a new center of gravity in Maryland. 
It may have involved a situation where the original crew is taken out of command by a casual “we’ll take it from here” by people with higher pay grades. Stuxnet had arrived in big infosec. But the use of the multiple zero-days came with a price. The new Stuxnet variant was much easier to identify as malicious software than its predecessor as it suddenly displayed very strange and very sophisticated behavior at the IT layer. In comparison, the dropper of the initial version looked pretty much like a legitimate or, worst case, pirated Step7 software project for Siemens controllers; the only strange thing was that a copyright notice and license terms were missing. Back in 2007, one would have to use extreme forensic efforts to realize what Stuxnet was all about – and one would have to specifically look for it, which was out of everybody’s imagination at the time. The newer version, equipped with a wealth of exploits that hackers can only dream about, signaled even the least vigilant anti-virus researcher that this was something big, warranting a closer look. […]

The new attack works by changing rotor speeds. With rotor wall pressure being a function of process pressure and rotor speed, the easy road to trouble is to over-speed the rotors, thereby increasing rotor wall pressure. Which is what Stuxnet did. Normal operating speed of the IR-1 centrifuge is 63,000 rpm, as disclosed by A. Q. Khan himself in his 2004 confession. Stuxnet increases that speed by a good one-third to 84,600 rpm for fifteen minutes, including the acceleration phase which will likely take several minutes. It is not clear if that is hard enough on the rotors to crash them in the first run, but it seems unlikely – even if just because a month later, a different attack tactic is executed, indicating that the first sequence may have left a lot of centrifuges alive, or at least more alive than dead. The next consecutive run brings all centrifuges in the cascade basically to a stop (120 rpm), only to speed them up again, taking a total of fifty minutes. A sudden stop like “hitting the brake” would predictably result in catastrophic damage, but it is unlikely that the frequency converters would permit such a radical maneuver. It is more likely that when told to slow down, the frequency converter smoothly decelerates just like in an isolation / run-down event, only to resume normal speed thereafter. The effect of this procedure is not deterministic but offers a good chance of creating damage. The IR-1 is a supercritical design, meaning that operating speed is above certain critical speeds which cause the rotor to vibrate (if only briefly). Every time a rotor passes through these critical speeds, also called harmonics, it can break. […]
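
The physics behind those figures is simple to check: centrifugal wall stress in a thin rotor scales roughly with the square of angular velocity, so the quoted jump from 63,000 to 84,600 rpm implies roughly an 80 percent increase in rotor wall stress. This is my own back-of-envelope arithmetic, not a figure from the paper:

```python
# Back-of-envelope check (my own arithmetic): centrifugal wall stress in
# a thin rotor scales with the square of angular velocity, so a "good
# one-third" overspeed raises wall stress by about 80 percent.
normal_rpm = 63_000   # IR-1 operating speed, per A. Q. Khan
attack_rpm = 84_600   # speed commanded by Stuxnet for fifteen minutes

speed_ratio = attack_rpm / normal_rpm   # ~1.34, i.e. a good one-third
stress_ratio = speed_ratio ** 2         # ~1.80, i.e. ~80% more wall stress
```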

The most common technical misconception about Stuxnet that appears in almost every publication on the malware is that the rotor speed attack would record and play back process values by means of the recording and playback of signal inputs that we uncovered back in 2010 […]. Slipping the attention of most people writing about Stuxnet, this particular and certainly most intriguing attack component is only used in the overpressure attack. The S7-315 attack against the Centrifuge Drive System simply doesn’t do this, and as implemented in the CPS attack it wouldn’t even work on the smaller controller for technical reasons. The rotor speed attack is much simpler. During the attack, legitimate control code is simply suspended. The attack sequence is executed, thereafter a conditional BLOCK END directive is called which tells the runtime environment to jump back to the top of the main executive that is constantly looped on the single-tasking controller, thereby re-iterating the attack and suspending all subsequent code.
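
The control-flow trick Langner describes here, an attack sequence followed by a conditional jump back to the top of the main executive, can be sketched as follows. This is an illustrative Python rendering with hypothetical function names; the real logic is Siemens statement-list code running on a single-tasking S7-315.

```python
# Illustrative rendering (hypothetical names; the real code is Siemens
# STL on a single-tasking S7-315) of the conditional early exit: while
# the trigger holds, every scan runs the attack sequence and "ends the
# block" before the legitimate control logic is ever reached.

def scan_cycle(attack_triggered, run_attack_step, run_legitimate_logic):
    """One pass of the controller's main executive, called in an endless loop."""
    if attack_triggered:
        run_attack_step()
        # Conditional BLOCK END: return to the top of the main executive,
        # so the legitimate control logic below is never executed.
        return
    run_legitimate_logic()
```

Because the controller simply loops the main executive, suspending the legitimate logic needs no scheduler tricks at all, only an early exit; this is also why the stale speed values left in controller memory look perfectly normal to SCADA polling.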

The attackers did not care to have the legitimate code continue execution with fake input data most likely because it wasn’t needed. Centrifuge rotor speed is constant during normal operation; if shown on a display, one would expect to see static values all the time. It is also a less dramatic variable to watch than operating pressure because rotor speed is not a controlled variable; there is no need to fine-tune speeds manually, and there is no risk that for whatever reason (short of a cyber attack) speeds would change just like stage process pressure. Rotor speed is simply set and then held constant by the frequency converter. If a SCADA application did monitor rotor speeds by communicating with the infected S7-315 controllers, it would simply have seen the exact speed values from the time before the attack sequence executes. The SCADA software gets its information from memory in the controller, not by directly talking to the frequency converter. Such memory must be updated actively by the control logic, reading values from the converter. However if legitimate control logic is suspended, such updates no longer take place, resulting in static values that perfectly match normal operation. Nevertheless, the implementation of the attack is quite rude; blocking control code from execution for up to an hour is something that experienced control system engineers would sooner or later detect, for example by using the engineering software’s diagnostic features, or by inserting code for debugging purposes. Certainly they would have needed a clue that something was at odds with rotor speed. It is unclear if post mortem analysis provided enough hints; the fact that both overspeed and transition through critical speeds were used certainly caused disguise. However, at some point in time the attack should have been recognizable by plant floor staff just by the old ear drum. 
Bringing 164 centrifuges or multiples thereof from 63,000 rpm to 120 rpm and getting them up to speed again would have been noticeable – if experienced staff had been cautious enough to remove protective headsets in the cascade hall. […]

Summing up, the differences between the two Stuxnet variants discussed here are striking. In the newer version, the attackers became less concerned about being detected. It seems a stretch to say that they wanted to be discovered, but they were certainly pushing the envelope and accepting the risk. […]

Much has been written about the failure of Stuxnet to destroy a substantial number of centrifuges, or to significantly reduce Iran’s LEU production. While that is undisputable, it doesn’t appear that this was the attackers’ intention. If catastrophic damage was caused by Stuxnet, that would have been by accident rather than by purpose. The attackers were in a position where they could have broken the victim’s neck, but they chose continuous periodical choking instead. […]

Reasons for such tactics are not difficult to identify. When Stuxnet was first deployed, Iran did already master the production of IR-1 centrifuges at industrial scale. It can be projected that simultaneous catastrophic destruction of all operating centrifuges would not have set back the Iranian nuclear program for longer than the two years setback that I have estimated for Stuxnet. During the summer of 2010 when the Stuxnet attack was in full swing, Iran operated about four thousand centrifuges, but kept another five thousand in stock, ready to be commissioned. Apparently, Iran is not in a rush to build up a sufficient stockpile of LEU that can then be turned into weapon-grade HEU but favoring a long-term strategy. A one-time destruction of their operational equipment would not have jeopardized that strategy, just like the catastrophic destruction of 4,000 centrifuges by an earthquake back in 1981 did not stop Pakistan on its way to get the bomb. […]

Somebody among the attackers may also have recognized that blowing cover would come with benefits. Uncovering Stuxnet was the end to the operation, but not necessarily the end of its utility. It would show the world what cyber weapons can do in the hands of a superpower. Unlike military hardware, one cannot display USB sticks at a military parade. […]

Stuxnet will not be remembered as a significant blow against the Iranian nuclear program. It will be remembered as the opening act of cyber warfare […].

Ralph Langner, “To Kill a Centrifuge: A Technical Analysis of What Stuxnet’s Creators Tried to Achieve”, November 2013, The Langner Group


Added to diary 21 May 2018

rebecca-yucuis

In the inspection-time task, a person looks at two vertical lines of unequal length and is asked to identify which of the two is longer. Inspection time is the length of time of stimulus presentation each individual needs in order to discriminate which of the two lines is the longest. Some research suggests that more-intelligent individuals are able to discriminate the lengths of the lines in shorter inspection times.

Robert J. Sternberg, Encyclopædia Britannica, Intelligence

This study replicated and extended Kranzler and Jensen’s [Intelligence 13 (1989) 329] meta-analysis of the relationship between inspection time (IT) and intelligence (IQ). […] meta-analyses were conducted on obtained correlations (r’s) between IT and general IQ. […] For the total sample (N>4100), the r was −.51 after correction for artifactual effects (−.30 prior to correction).
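
The move from −.30 to −.51 comes from correcting observed correlations for statistical artifacts such as measurement unreliability (the meta-analysis also corrects for sampling error and range variation). The classic piece of that machinery is Spearman's correction for attenuation, sketched here with made-up reliability values purely for illustration:

```python
import math

def disattenuate(r_obs, rel_x, rel_y):
    """Spearman's correction for attenuation: estimate the correlation
    between the underlying constructs from an observed correlation and
    the reliabilities of the two measures."""
    return r_obs / math.sqrt(rel_x * rel_y)

# With illustrative (made-up) reliabilities of .80 and .90, an observed
# r of .30 corrects upward to about .35; lower reliabilities produce
# larger corrections, which is the direction of the IT-IQ adjustment.
example = disattenuate(0.30, 0.80, 0.90)
```

The published −.51 figure folds in several artifact corrections at once, so the reliabilities needed to reproduce it exactly cannot be read off from the quoted numbers.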

Jennifer L. Grudnik, John H. Kranzler, “Meta-analysis of the relationship between intelligence and inspection time”, Intelligence, Volume 29, Issue 6, November–December 2001, Pages 523-535, https://doi.org/10.1016/S0160-2896(01)00078-2

Neuropsychologists and cognitive researchers often need quick estimates of global cognitive functioning [i.e., intelligence quotient (IQ)]. […]

The current article examined 11 proxy measures to determine their level of agreement with WAIS-III FSIQ across the entire sample. [(Wechsler Adult Intelligence Scale [Third Edition] full scale IQ test)] […]

Measures evaluated for this study included the Ward-7ST short form developed by Ward and modified for the WAIS-III by Pilgrim et al. (1999), the NAART, the SILS, ITBS, the Barona and Crawford demographic regression formulae, and the five OPIE3 hybrids combining demographic and WAIS-III subtest information. The final estimate examined was the ITBS (Hoover et al., 2003), a nationally recognized standardized school achievement test.

The Pearson correlation and confidence interval between WAIS-III FSIQ and each proxy measure are shown in Table 2. Correlations ranged from r = .25 for the Barona estimate to r = .95 for the Ward-7ST short form.

The performance of the proxy measures across the different cognitive ability groups was examined next. […]

Above-Average IQ Group

[…] The Ward-7ST estimate was the only proxy to correlate above r = .70 for the high ability group. […]

The most important finding of this article is how poorly the IQ proxy measures performed at the tails of the IQ distribution. The proxy measures consistently overestimated the IQ of low-functioning individuals and underestimated the IQs of high-functioning individuals.

Spinks, Mckirgan, Arndt, Caspers, Yucuis and Pfalzgraf (2009). IQ estimate smackdown: comparing IQ proxy measures to the WAIS-III. Journal of the International Neuropsychological Society. 15. 590-6. doi:10.1017/S1355617709090766.


Added to diary 27 June 2018

richard-dawkins

In frogs, for instance, neither sex has a penis. Perhaps, then, the words male and female have no general meaning. They are, after all, only words, and if we do not find them helpful for describing frogs, we are quite at liberty to abandon them. We could arbitrarily divide frogs into Sex 1 and Sex 2 if we wished. However, there is one fundamental feature of the sexes which can be used to label males as males, and females as females, throughout animals and plants. This is that the sex cells or ‘gametes’ of males are much smaller and more numerous than the gametes of females. This is true whether we are dealing with animals or plants. One group of individuals has large sex cells, and it is convenient to use the word female for them. The other group, which it is convenient to call male, has small sex cells. The difference is especially pronounced in reptiles and in birds, where a single egg cell is big enough and nutritious enough to feed a developing baby for several weeks. Even in humans, where the egg is microscopic, it is still many times larger than the sperm. As we shall see, it is possible to interpret all the other differences between the sexes as stemming from this one basic difference.

In certain primitive organisms, for instance some fungi, maleness and femaleness do not occur, although sexual reproduction of a kind does. In the system known as isogamy the individuals are not distinguishable into two sexes. Anybody can mate with anybody else. There are not two different sorts of gametes—sperms and eggs—but all sex cells are the same, called isogametes. New individuals are formed by the fusion of two isogametes, each produced by meiotic division. If we have three isogametes, A, B, and C, A could fuse with B or C, B could fuse with A or C. The same is never true of normal sexual systems. If A is a sperm and it can fuse with B or C, then B and C must be eggs and B cannot fuse with C.

When two isogametes fuse, both contribute equal numbers of genes to the new individual, and they also contribute equal amounts of food reserves. Sperms and eggs too contribute equal numbers of genes, but eggs contribute far more in the way of food reserves: indeed, sperms make no contribution at all and are simply concerned with transporting their genes as fast as possible to an egg. At the moment of conception, therefore, the father has invested less than his fair share (i.e. 50 per cent) of resources in the offspring. Since each sperm is so tiny, a male can afford to make many millions of them every day. This means he is potentially able to beget a very large number of children in a very short period of time, using different females. This is only possible because each new embryo is endowed with adequate food by the mother in each case. This therefore places a limit on the number of children a female can have, but the number of children a male can have is virtually unlimited. Female exploitation begins here.

Parker and others showed how this asymmetry might have evolved from an originally isogamous state of affairs. In the days when all sex cells were interchangeable and of roughly the same size, there would have been some that just happened to be slightly bigger than others. In some respects a big isogamete would have an advantage over an average-sized one, because it would get its embryo off to a good start by giving it a large initial food supply. There might therefore have been an evolutionary trend towards larger gametes. But there was a catch. The evolution of isogametes that were larger than was strictly necessary would have opened the door to selfish exploitation. Individuals who produced smaller than average gametes could cash in, provided they could ensure that their small gametes fused with extra-big ones. This could be achieved by making the small ones more mobile, and able to seek out large ones actively. The advantage to an individual of producing small, rapidly moving gametes would be that he could afford to make a larger number of gametes, and therefore could potentially have more children. Natural selection favoured the production of sex cells that were small and that actively sought out big ones to fuse with. So we can think of two divergent sexual ‘strategies’ evolving. There was the large-investment or ‘honest’ strategy. This automatically opened the way for a small-investment exploitative strategy. Once the divergence between the two strategies had started, it would have continued in runaway fashion. Medium-sized intermediates would have been penalized, because they did not enjoy the advantages of either of the two more extreme strategies. The exploiters would have evolved smaller and smaller size, and faster mobility. The honest ones would have evolved larger and larger size, to compensate for the ever-smaller investment contributed by the exploiters, and they became immobile because they would always be actively chased by the exploiters anyway. 
Each honest one would ‘prefer’ to fuse with another honest one. But the selection pressure to lock out exploiters would have been weaker than the pressure on exploiters to duck under the barrier: the exploiters had more to lose, and they therefore won the evolutionary battle. The honest ones became eggs, and the exploiters became sperms.
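
The payoff asymmetry driving this runaway can be caricatured with toy numbers (my own construction, not from the book): give every individual a fixed resource budget, let fitness be gamete count times embryo survival, and require a minimum combined provision for the embryo to survive.

```python
# Toy illustration (my own numbers, not Dawkins'): with a fixed budget,
# making smaller gametes means making more of them, but the fused pair
# must jointly provision the embryo past a survival threshold.

BUDGET = 100.0        # resources each individual spends on gametes
MIN_PROVISION = 60.0  # combined gamete size needed for the embryo to live

def fitness(own_size, partner_size):
    """Expected offspring: number of gametes made, times embryo survival."""
    count = BUDGET / own_size
    survives = 1.0 if own_size + partner_size >= MIN_PROVISION else 0.0
    return count * survives

# An exploiter (size 1) fusing with an honest provider (size 60) does
# spectacularly well; an intermediate (size 30) fusing with an exploiter
# gets nothing, while a large provider still rescues the embryo.
```

This is why medium-sized intermediates are penalized once exploiters exist: they pay most of the provisioning cost without the numbers advantage, and fail entirely whenever they fuse with a small gamete.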

. . . the number of children a male can have is virtually unlimited. Female exploitation begins here.

It now seems misleading to emphasize the disparity between sperm and egg size as the basis of sex roles. Even if one sperm is small and cheap, it is far from cheap to make millions of sperms and successfully inject them into a female against all the competition. I now prefer the following approach to explaining the fundamental asymmetry between males and females.

Suppose we start with two sexes that have none of the particular attributes of males and females. Call them by the neutral names A and B. All we need specify is that every mating has to be between an A and a B. Now, any animal, whether an A or a B, faces a trade-off. Time and effort devoted to fighting with rivals cannot be spent on rearing existing offspring, and vice versa. Any animal can be expected to balance its effort between these rival claims. The point I am about to come to is that the A’s may settle at a different balance from the B’s and that, once they do, there is likely to be an escalating disparity between them.

To see this, suppose that the two sexes, the A’s and the B’s, differ from one another, right from the start, in whether they can most influence their success by investing in children or by investing in fighting (I’ll use ‘fighting’ to stand for all kinds of direct competition within one sex). Initially the difference between the sexes can be very slight, since my point will be that there is an inherent tendency for it to grow. Say the A’s start out with fighting making a greater contribution to their reproductive success than parental behaviour does; the B’s, on the other hand, start out with parental behaviour contributing slightly more than fighting to variation in their reproductive success. This means, for example, that although an A of course benefits from parental care, the difference between a successful carer and an unsuccessful carer among the A’s is smaller than the difference between a successful fighter and an unsuccessful fighter among the A’s. Among the B’s, just the reverse is true. So, for a given amount of effort, an A can do itself good by fighting, whereas a B is more likely to do itself good by shifting its effort away from fighting and towards parental care.

In subsequent generations, therefore, the A’s will fight a bit more than their parents, the B’s will fight a bit less and care a bit more than their parents. Now the difference between the best A and the worst A with respect to fighting will be even greater, the difference between the best A and the worst A with respect to caring will be even less. Therefore an A has even more to gain by putting its effort into fighting, even less to gain by putting its effort into caring. Exactly the opposite will be true of the B’s as the generations go by. The key idea here is that a small initial difference between the sexes can be self-enhancing: selection can start with an initial, slight difference and make it grow larger and larger, until the A’s become what we now call males, the B’s what we now call females. The initial difference can be small enough to arise at random. After all, the starting conditions of the two sexes are unlikely to be exactly identical.
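
A minimal numerical caricature of this self-enhancing divergence (my own toy model with an arbitrary update rule, not anything from the book): let each sex's effort drift toward whichever activity it already leans on, with a pull that grows with the lean. A perfectly balanced start never moves; the tiniest asymmetry is amplified until the sexes specialize in opposite roles.

```python
# Toy model (my own construction) of the self-enhancing divergence
# between the A's and the B's: effort drifts toward the activity a sex
# already favours, and the pull grows with the existing allocation.

def diverge(a_fight=0.51, b_fight=0.49, rate=0.1, generations=200):
    """a_fight, b_fight: fraction of effort each sex puts into fighting
    (the remainder goes into parental care)."""
    for _ in range(generations):
        a_fight += rate * a_fight * (1 - a_fight) * (2 * a_fight - 1)
        b_fight += rate * b_fight * (1 - b_fight) * (2 * b_fight - 1)
    return a_fight, b_fight

a, b = diverge()  # from a 0.51 / 0.49 start, the A's end up nearly
                  # all-fighting and the B's nearly all-caring
```

The 50/50 allocation is an unstable equilibrium in this sketch, which mirrors Dawkins' point that the initial difference can be small enough to arise at random.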

As you will notice, this is rather like the theory, originating with Parker, Baker, and Smith and discussed on page 142, of the early separation of primitive gametes into sperms and eggs. The argument just given is more general. The separation into sperms and eggs is only one aspect of a more basic separation of sexual roles. Instead of treating the sperm-egg separation as primary, and tracing all the characteristic attributes of males and females back to it, we now have an argument that explains the sperm-egg separation and other aspects all in the same way. We have to assume only that there are two sexes who have to mate with each other; we need know nothing more about those sexes. Starting from this minimal assumption, we positively expect that, however equal the two sexes may be at the start, they will diverge into two sexes specializing in opposite and complementary reproductive techniques. The separation between sperms and eggs is a symptom of this more general separation, not the cause of it.

Richard Dawkins, The Selfish Gene, 30th Anniversary Edition, 2006

Footnotes (1989 edition) are doubly indented.


Added to diary 27 June 2018

Any user of a digital computer knows how precious computer time and memory storage space are. At many large computer centres they are literally costed in money; or each user may be allotted a ration of time, measured in seconds, and a ration of space, measured in ‘words’. The computers in which memes live are human brains.

p. 197 The computers in which memes live are human brains.

It was obviously predictable that manufactured electronic computers, too, would eventually play host to self-replicating patterns of information—memes. Computers are increasingly tied together in intricate networks of shared information. Many of them are literally wired up together in electronic mail exchange. Others share information when their owners pass floppy discs around. It is a perfect milieu for self-replicating programs to flourish and spread. When I wrote the first edition of this book I was naïve enough to suppose that an undesirable computer meme would have to arise by a spontaneous error in the copying of a legitimate program, and I thought this an unlikely event. Alas, that was a time of innocence. Epidemics of ‘viruses’ and ‘worms’, deliberately released by malicious programmers, are now familiar hazards to computer-users all over the world. My own hard disc has to my knowledge been infected in two different virus epidemics during the past year, and that is a fairly typical experience among heavy computer-users.

Richard Dawkins, The Selfish Gene, 30th Anniversary Edition, 2006

Footnotes (1989 edition) are doubly indented.


Added to diary 27 June 2018

As we have seen, it is strongly to be expected on evolutionary grounds that, where the sexes differ, it should be the males that advertise and the females that are drab. Modern western man is undoubtedly exceptional in this respect. It is of course true that some men dress flamboyantly and some women dress drably but, on average, there can be no doubt that in our society the equivalent of the peacock’s tail is exhibited by the female, not by the male. Women paint their faces and glue on false eyelashes. Apart from special cases, like actors, men do not.

Richard Dawkins, The Selfish Gene, 30th Anniversary Edition, 2006


Added to diary 27 June 2018

Grafen notes that there are at least four approaches to the handicap principle. These can be called the Qualifying Handicap (any male who has survived in spite of his handicap must be pretty good in other respects, so females choose him); the Revealing Handicap (males perform some onerous task in order to expose their otherwise concealed abilities); the Conditional Handicap (only high-quality males develop a handicap at all); and finally Grafen’s preferred interpretation, which he calls the Strategic Choice Handicap (males have private information about their own quality, denied to females, and use this information to ‘decide’ whether to grow a handicap and how large it should be). […] But males are assumed to have genes that are switched on conditionally upon the male’s own quality (and privileged access to this information is a not unreasonable assumption; a male’s genes, after all, are immersed in his internal biochemistry and far better placed than female genes to respond to his quality). Different males adopt different rules. For instance, one male might follow the rule ‘Display a tail whose size is proportional to my true quality’; another might follow the opposite rule.

Richard Dawkins, The Selfish Gene, 30th Anniversary Edition, 2006


Added to diary 27 June 2018

The interesting point is that chemical signals of old age need not in any normal sense be deleterious in themselves. For instance, suppose that it incidentally happens to be a fact that a substance S is more concentrated in the bodies of old individuals than of young individuals. S in itself might be quite harmless, perhaps some substance in the food which accumulates in the body over time. But automatically, any gene that just happened to exert a deleterious effect in the presence of S, but which otherwise had a good effect, would be positively selected in the gene pool, and would in effect be a gene ‘for’ dying of old age. The cure would simply be to remove S from the body.

Richard Dawkins, The Selfish Gene, 30th Anniversary Edition, 2006


Added to diary 27 June 2018

We no longer have to resort to superstition when faced with the deep problems: Is there a meaning to life? What are we for? What is man? After posing the last of these questions, the eminent zoologist G. G. Simpson put it thus: ‘The point I want to make now is that all attempts to answer that question before 1859 are worthless and that we will be better off if we ignore them completely.

attempts to answer that question before 1859 are worthless. . .

Some people, even non-religious people, have taken offence at the quotation from Simpson. I agree that, when you first read it, it sounds terribly philistine and gauche and intolerant, a bit like Henry Ford’s ‘History is more or less bunk’. But, religious answers apart (I am familiar with them; save your stamp), when you are actually challenged to think of pre-Darwinian answers to the questions ‘What is man?’ ‘Is there a meaning to life?’ ‘What are we for?’, can you, as a matter of fact, think of any that are not now worthless except for their (considerable) historic interest? There is such a thing as being just plain wrong, and that is what, before 1859, all answers to those questions were.

Richard Dawkins, The Selfish Gene, 30th Anniversary Edition, 2006

Footnotes (1989 edition) are doubly indented.


Added to diary 27 June 2018

Let me end with a brief manifesto, a summary of the entire selfish gene/extended phenotype view of life. It is a view, I maintain, that applies to living things everywhere in the universe. The fundamental unit, the prime mover of all life, is the replicator. A replicator is anything in the universe of which copies are made. Replicators come into existence, in the first place, by chance, by the random jostling of smaller particles. Once a replicator has come into existence it is capable of generating an indefinitely large set of copies of itself. No copying process is perfect, however, and the population of replicators comes to include varieties that differ from one another. Some of these varieties turn out to have lost the power of self-replication, and their kind ceases to exist when they themselves cease to exist. Others can still replicate, but less effectively. Yet other varieties happen to find themselves in possession of new tricks: they turn out to be even better self-replicators than their predecessors and contemporaries. It is their descendants that come to dominate the population. As time goes by, the world becomes filled with the most powerful and ingenious replicators. Gradually, more and more elaborate ways of being a good replicator are discovered. Replicators survive, not only by virtue of their own intrinsic properties, but by virtue of their consequences on the world. These consequences can be quite indirect. All that is necessary is that eventually the consequences, however tortuous and indirect, feed back and affect the success of the replicator at getting itself copied. The success that a replicator has in the world will depend on what kind of a world it is—the pre-existing conditions. Among the most important of these conditions will be other replicators and their consequences. Like the English and German rowers, replicators that are mutually beneficial will come to predominate in each other’s presence. 
At some point in the evolution of life on our earth, this ganging up of mutually compatible replicators began to be formalized in the creation of discrete vehicles—cells and, later, many-celled bodies. Vehicles that evolved a bottlenecked life cycle prospered, and became more discrete and vehicle-like. This packaging of living material into discrete vehicles became such a salient and dominant feature that, when biologists arrived on the scene and started asking questions about life, their questions were mostly about vehicles—individual organisms. The individual organism came first in the biologist’s consciousness, while the replicators—now known as genes—were seen as part of the machinery used by individual organisms. It requires a deliberate mental effort to turn biology the right way up again, and remind ourselves that the replicators come first, in importance as well as in history. One way to remind ourselves is to reflect that, even today, not all the phenotypic effects of a gene are bound up in the individual body in which it sits. Certainly in principle, and also in fact, the gene reaches out through the individual body wall and manipulates objects in the world outside, some of them inanimate, some of them other living beings, some of them a long way away. With only a little imagination we can see the gene as sitting at the centre of a radiating web of extended phenotypic power. And an object in the world is the centre of a converging web of influences from many genes sitting in many organisms. The long reach of the gene knows no obvious boundaries. The whole world is criss-crossed with causal arrows joining genes to phenotypic effects, far and near. It is an additional fact, too important in practice to be called incidental but not necessary enough in theory to be called inevitable, that these causal arrows have become bundled up. Replicators are no longer peppered freely through the sea; they are packaged in huge colonies—individual bodies. 
And phenotypic consequences, instead of being evenly distributed throughout the world, have in many cases congealed into those same bodies. But the individual body, so familiar to us on our planet, did not have to exist. The only kind of entity that has to exist in order for life to arise, anywhere in the universe, is the immortal replicator.

Richard Dawkins, The Selfish Gene, 30th Anniversary Edition, 2006


Added to diary 27 June 2018

To take just one particular example, many ground-nesting birds perform a so-called ‘distraction display’ when a predator such as a fox approaches. The parent bird limps away from the nest, holding out one wing as though it were broken. The predator, sensing easy prey, is lured away from the nest containing the chicks. Finally the parent bird gives up its pretence and leaps into the air just in time to escape the fox’s jaws. It has probably saved the life of its nestlings, but at some risk to itself. […]

Honey bees suffer from an infectious disease called foul brood. This attacks the grubs in their cells. Of the domestic breeds used by beekeepers, some are more at risk from foul brood than others, and it turns out that the difference between strains is, at least in some cases, a behavioural one. There are so-called hygienic strains which quickly stamp out epidemics by locating infected grubs, pulling them from their cells and throwing them out of the hive. The susceptible strains are susceptible because they do not practise this hygienic infanticide. The behaviour actually involved in hygiene is quite complicated. The workers have to locate the cell of each diseased grub, remove the wax cap from the cell, pull out the larva, drag it through the door of the hive, and throw it on the rubbish tip. Doing genetic experiments with bees is quite a complicated business for various reasons. Worker bees themselves do not ordinarily reproduce, and so you have to cross a queen of one strain with a drone (= male) of the other, and then look at the behaviour of the daughter workers. This is what W. C. Rothenbuhler did. He found that all first-generation hybrid daughter hives were non-hygienic: the behaviour of their hygienic parent seemed to have been lost, although as things turned out the hygienic genes were still there but were recessive, like human genes for blue eyes. When Rothenbuhler ‘back-crossed’ first-generation hybrids with a pure hygienic strain (again of course using queens and drones), he obtained a most beautiful result. The daughter hives fell into three groups. One group showed perfect hygienic behaviour, a second showed no hygienic behaviour at all, and the third went half way. This last group uncapped the wax cells of diseased grubs, but they did not follow through and throw out the larvae. Rothenbuhler surmised that there might be two separate genes, one gene for uncapping, and one gene for throwing-out. 
Normal hygienic strains possess both genes, susceptible strains possess the alleles—rivals—of both genes instead. The hybrids who only went half way presumably possessed the uncapping gene (in double dose) but not the throwing-out gene. Rothenbuhler guessed that his experimental group of apparently totally non-hygienic bees might conceal a subgroup possessing the throwing-out gene, but unable to show it because they lacked the uncapping gene. He confirmed this most elegantly by removing caps himself. Sure enough, half of the apparently non-hygienic bees thereupon showed perfectly normal throwing-out behaviour.*

Hygienic bees

If the original book had had footnotes, one of them would have been devoted to explaining—as Rothenbuhler himself scrupulously did—that the bee results were not quite so neat and tidy. Out of the many colonies that, according to theory, should not have shown hygienic behaviour, one nevertheless did. In Rothenbuhler’s own words, ‘We cannot disregard this result, regardless of how much we would like to, but we are basing the genetic hypothesis on the other data.’ A mutation in the anomalous colony is a possible explanation, though it is not very likely.

Mole-crickets amplify their song to stentorian loudness by singing down in a burrow which they carefully dig in the shape of a double exponential horn, or megaphone. […]

In the course of their experiments they had occasion to introduce into magpie nests the eggs and chicks of cuckoos, and, for comparison, eggs and chicks of other species such as swallows. On one occasion they introduced a baby swallow into a magpie’s nest. The next day they noticed one of the magpie eggs lying on the ground under the nest. It had not broken, so they picked it up, replaced it, and watched. What they saw is utterly remarkable. The baby swallow, behaving exactly as if it was a baby cuckoo, threw the egg out. They replaced the egg again, and exactly the same thing happened. The baby swallow used the cuckoo method of balancing the egg on its back between its wing-stubs, and walking backwards up the side of the nest until the egg toppled out. Perhaps wisely, Alvarez and his colleagues made no attempt to explain their astonishing observation. How could such behaviour evolve in the swallow gene pool? It must correspond to something in the normal life of a swallow. Baby swallows are not accustomed to finding themselves in magpie nests. They are never normally found in any nest except their own. Could the behaviour represent an evolved anti-cuckoo adaptation? Has the natural selection been favouring a policy of counter-attack in the swallow gene pool, genes for hitting the cuckoo with his own weapons? It seems to be a fact that swallows’ nests are not normally parasitized by cuckoos. Perhaps this is why. According to this theory, the magpie eggs of the experiment would be incidentally getting the same treatment, perhaps because, like cuckoo eggs, they are bigger than swallow eggs. But if baby swallows can tell the difference between a large egg and a normal swallow egg, surely the mother should be able to as well. In this case why is it not the mother who ejects the cuckoo egg, since it would be so much easier for her to do so than the baby? 
The same objection applies to the theory that the baby swallow’s behaviour normally functions to remove addled eggs or other debris from the nest. Once again, this task could be—and is—performed better by the parent. The fact that the difficult and skilled egg-rejecting operation was seen to be performed by a weak and helpless baby swallow, whereas an adult swallow could surely do it much more easily, compels me to the conclusion that, from the parent’s point of view, the baby is up to no good. It seems to me just conceivable that the true explanation has nothing to do with cuckoos at all. The blood may chill at the thought, but could this be what baby swallows do to each other? Since the firstborn is going to compete with his yet unhatched brothers and sisters for parental investment, it could be to his advantage to […]. The chief objection to this theory is that it is very difficult to believe that nobody would have seen this diabolical behaviour if it really occurred. I have no convincing explanation for this. There are different races of swallow in different parts of the world. It is known that the Spanish race differs from, for example, the British one, in certain respects. The Spanish race has not been subjected to the same degree of intensive observation as the British one, and I suppose it is just conceivable that fratricide occurs but has been overlooked. […]

The physical characteristics of the calls seem to be ideally shaped to be difficult to locate. If an acoustic engineer were asked to design a sound that a predator would find it hard to approach, he would produce something very like the real alarm calls of many small songbirds. […]

Unfortunately, we know too little at present to assign realistic numbers to the costs and benefits of various outcomes in nature.

We now have some good field measurements of costs and benefits in nature, which have been plugged into particular ESS models. One of the best examples comes from great golden digger wasps in North America. Digger wasps are not the familiar social wasps of our autumn jam-pots, which are neuter females working for a colony. Each female digger wasp is on her own, and she devotes her life to providing shelter and food for a succession of her larvae. Typically, a female begins by digging a long bore-hole into the earth, at the bottom of which is a hollowed-out chamber. She then sets off to hunt prey (katydids or longhorned grasshoppers in the case of the great golden digger wasp). When she finds one she stings it to paralyse it, and drags it back into her burrow. Having accumulated four or five katydids she lays an egg on the top of the pile and seals up the burrow. The egg hatches into a larva, which feeds on the katydids. The point about the prey being paralysed rather than killed, by the way, is that they don’t decay but are eaten alive and are therefore fresh. It was this macabre habit, in the related Ichneumon wasps, that provoked Darwin to write: ‘I cannot persuade myself that a beneficent and omnipotent God would have designedly created the Ichneumonidae with the express intention of their feeding within the living bodies of Caterpillars . . .” He might as well have used the example of a French chef boiling lobsters alive to preserve the flavour. Returning to the life of the female digger wasp, it is a solitary one except that other females are working independently in the same area, and sometimes they occupy one another’s burrows rather than go to the trouble of digging a new one. Dr Jane Brockmann is a sort of wasp equivalent of Jane Goodall. She came from America to work with me at Oxford, bringing her copious records of almost every event in the life of two entire populations of individually identified female wasps. 
These records were so complete that individual wasp time-budgets could be drawn up. Time is an economic commodity: the more time spent on one part of life, the less is available for other parts. Alan Grafen joined the two of us and taught us how to think correctly about time-costs and reproductive benefits. We found evidence for a true mixed ESS in a game played between female wasps in a population in New Hampshire, though we failed to find such evidence in another population in Michigan. Briefly, the New Hampshire wasps either Dig their own nests or Enter a nest that another wasp has dug. According to our interpretation, wasps can benefit by entering because some nests are abandoned by their original diggers and are reusable. It does not pay to enter a nest that is occupied, but an enterer has no way of knowing which nests are occupied and which abandoned. She runs the risk of going for days in double-occupation, at the end of which she may come home to find the burrow sealed up, and all her efforts in vain—the other occupant has laid her egg and will reap the benefits. If too much entering is going on in a population, available burrows become scarce, the chance of double-occupation goes up, and it therefore pays to dig. Conversely, if plenty of wasps are digging, the high availability of burrows favours entering. There is a critical frequency of entering in the population at which digging and entering are equally profitable. If the actual frequency is below the critical frequency, natural selection favours entering, because there is a good supply of available abandoned burrows. If the actual frequency is higher than the critical frequency, there is a shortage of available burrows and natural selection favours digging. So a balance is maintained in the population. 
The detailed, quantitative evidence suggests that this is a true mixed ESS, each individual wasp having a probability of digging or entering, rather than the population containing a mixture of digging and entering specialists.

[…]

p. 173 . . . it seems to be only in the social insects that [the evolution of sterile workers] has actually happened.

That is what we all thought. We had reckoned without naked mole rats. Naked mole rats live in extensive networks of underground burrows. Colonies typically number 70 or 80 individuals, but they can increase into the hundreds. The network of burrows occupied by one colony can be two or three miles in total length, and one colony may excavate three or four tons of soil annually. Tunnelling is a communal activity. A face worker digs at the front with its teeth, passing the soil back through a living conveyor belt, a seething, scuffling line of half a dozen little pink animals. From time to time the face-worker is relieved by one of the workers behind. Only one female in the colony breeds, over a period of several years. Jarvis, in my opinion legitimately, adopts social insect terminology and calls her the queen. The queen is mated by two or three males only. All the other individuals of both sexes are nonbreeding, like insect workers. And, as in many social insect species, if the queen is removed some previously sterile females start to come into breeding condition and then fight each other for the position of queen. The sterile individuals are called ‘workers’, and again this is fair enough. Workers are of both sexes, as in termites (but not ants, bees and wasps, among which they are females only). What mole rat workers actually do depends on their size. The smallest ones, whom Jarvis calls ‘frequent workers’, dig and transport soil, feed the young, and presumably free the queen to concentrate on childbearing. She has larger litters than is normal for rodents of her size, again reminiscent of social insect queens. The largest nonbreeders seem to do little except sleep and eat, while intermediate-sized nonbreeders behave in an intermediate manner: there is a continuum as in bees, rather than discrete castes as in many ants. Jarvis originally called the largest nonbreeders nonworkers. But could they really be doing nothing? 
There is now some suggestion, both from laboratory and field observations, that they are soldiers, defending the colony if it is threatened; snakes are the main predators. There is also a possibility that they act as ‘food vats’ like ‘honeypot ants’ (see p. 171). Mole rats are homocoprophagous, which is a polite way of saying that they eat one another’s faeces (not exclusively: that would run foul of the laws of the universe). Perhaps the large individuals perform a valuable role by storing up their faeces in the body when food is plentiful, so that they can act as an emergency larder when food is scarce—a sort of constipated commissariat. To me, the most puzzling feature of naked mole rats is that, although they are like social insects in so many ways, they seem to have no equivalent caste to the young winged reproductives of ants and termites. They have reproductive individuals, of course, but these don’t start their careers by taking wing and dispersing their genes to new lands. As far as anyone knows, naked mole rat colonies just grow at the margins by expanding the subterranean burrow system. Apparently they don’t throw off long-distance dispersing individuals, the equivalent of winged reproductives. This is so surprising to my Darwinian intuition that it is tempting to speculate. My hunch is that one day we shall discover a dispersal phase which has hitherto, for some reason, been overlooked. It is too much to hope that the dispersing individuals will literally sprout wings! But they might in various ways be equipped for life above ground rather than underground. They could be hairy instead of naked, for instance. Naked mole rats don’t regulate their individual body temperatures in the way that normal mammals do; they are more like ‘cold-blooded’ reptiles. Perhaps they control temperature socially—another resemblance to termites and bees. Or could they be exploiting the well-known constant temperature of any good cellar? 
At all events, my hypothetical dispersing individuals might well, unlike the underground workers, be conventionally ‘warm-blooded’. Is it conceivable that some already known hairy rodent, hitherto classified as an entirely different species, might turn out to be the lost caste of the naked mole rat? There are, after all, precedents for this kind of thing. Locusts, for instance. Locusts are modified grasshoppers, and they normally live the solitary, cryptic, retiring life typical of a grasshopper. But under certain special conditions they change utterly—and terribly. They lose their camouflage and become vividly striped. One could almost fancy it a warning. If so, it is no idle one, for their behaviour changes too. They abandon their solitary ways and gang together, with menacing results. From the legendary biblical plagues to the present day, no animal has been so feared as a destroyer of human prosperity. They swarm in their millions, a combined harvester thrashing a path tens of miles wide, sometimes travelling at hundreds of miles per day, engulfing 2,000 tons of crops per day, and leaving a wake of starvation and ruin. And now we come to the possible analogy with mole rats. The difference between a solitary individual and its gregarious incarnation is as great as the difference between two ant castes. Moreover, just as we were postulating for the ‘lost caste’ of the mole rats, until 1921 the grasshopper Jekylls and their locust Hydes were classified as belonging to different species. But alas, it doesn’t seem terribly likely that mammal experts could have been so misled right up to the present day. I should say, incidentally, that ordinary, untransformed naked mole rats are sometimes seen above ground and perhaps travel farther than is generally thought. But before we abandon the ‘transformed reproductive’ speculation completely, the locust analogy does suggest another possibility. 
Perhaps naked mole rats do produce transformed reproductives, but only under certain conditions—conditions that have not arisen in recent decades. In Africa and the Middle East, locust plagues are still a menace, just as they were in biblical times. But in North America, things are different. Some grasshopper species there have the potential to turn into gregarious locusts. But, apparently because conditions haven’t been right, no locust plagues have occurred in North America this century (although cicadas, a totally different kind of plague insect, still erupt regularly and, confusingly, they are called ‘locusts’ in colloquial American speech). Nevertheless, if a true locust plague were to occur in America today, it would not be particularly surprising: the volcano is not extinct; it is merely dormant. But if we didn’t have written historical records and information from other parts of the world it mould be a nasty surprise because the animals would be, as far as anyone knew, just ordinary, solitary, harmless grasshoppers. What if naked mole rats are like American grasshoppers, primed to produce a distinct, dispersing caste, but only under conditions which, for some reason, have not been realized this century? Nineteenth-century East Africa could have suffered swarming plagues of hairy mole rats migrating like lemmings above ground, without any records surviving to us. Or perhaps they are recorded in the legends and sagas of local tribes?

Richard Dawkins, The Selfish Gene, 30th Anniversary Edition, 2006

Footnotes (1989 edition) are doubly indented.


Added to diary 27 June 2018

Species are grouped together into genera, genera into orders, and orders into classes. Lions and antelopes are both members of the class Mammalia, as are we. Should we then not expect lions to refrain from killing antelopes, ‘for the good of the mammals’? Surely they should hunt birds or reptiles instead, in order to prevent the extinction of the class. But then, what of the need to perpetuate the whole phylum of vertebrates? […]

I am using the word gene to mean a genetic unit that is small enough to last for a large number of generations and to be distributed around in the form of many copies. This is not a rigid all-or-nothing definition, but a kind of fading-out definition, like the definition of ‘big’ or ‘old’. The more likely a length of chromosome is to be split by crossing-over, or altered by mutations of various kinds, the less it qualifies to be called a gene in the sense in which I am using the term.

Following Williams, I made much of the fragmenting effects of meiosis in my argument that the individual organism cannot play the role of replicator in natural selection. I now see that this was only half the story. The other half is spelled out in The Extended Phenotype (pp. 97–9) and in my paper ‘Replicators and Vehicles’. If the fragmenting effects of meiosis were the whole story, an asexually reproducing organism like a female stick-insect would be a true replicator, a sort of giant gene. But if a stick insect is changed—say it loses a leg—the change is not passed on to future generations. Genes alone pass down the generations, whether reproduction is sexual or asexual. Genes, therefore, really are replicators. In the case of an asexual stick-insect, the entire genome (the set of all its genes) is a replicator. But the stick-insect itself is not. A stick-insect body is not moulded as a replica of the body of the previous generation. The body in any one generation grows afresh from an egg, under the direction of its genome, which is a replica of the genome of the previous generation. All printed copies of this book will be the same as one another. They will be replicas but not replicators. They will be replicas not because they have copied one another, but because all have copied the same printing plates. They do not form a lineage of copies, with some books being ancestral to others. A lineage of copies would exist if we xeroxed a page of a book, then xeroxed the xerox, then xeroxed the xerox of the xerox, and so on. In this lineage of pages, there really would be an ancestor/descendant relationship. A new blemish that showed up anywhere along the series would be shared by descendants but not by ancestors. An ancestor/descendant series of this kind has the potential to evolve. Superficially, successive generations of stick-insect bodies appear to constitute a lineage of replicas. 
But if you experimentally change one member of the lineage (for instance by removing a leg), the change is not passed on down the lineage. By contrast, if you experimentally change one member of the lineage of genomes (for instance by X-rays), the change will be passed on down the lineage. This, rather than the fragmenting effect of meiosis, is the fundamental reason for saying that the individual organism is not the ‘unit of selection’—not a true replicator. It is one of the most important consequences of the universally admitted fact that the ‘Lamarckian’ theory of inheritance is false.

Just as it is not convenient to talk about quanta and fundamental particles when we discuss the workings of a car, so it is often tedious and unnecessary to keep dragging genes in when we discuss the behaviour of survival machines. In practice it is usually convenient, as an approximation, to regard the individual body as an agent ‘trying’ to increase the numbers of all its genes in future generations. […]

It is theoretically possible that a gene could arise which conferred an externally visible ‘label’, say a pale skin, or a green beard, or anything conspicuous, and also a tendency to be specially nice to bearers of that conspicuous label. It is possible, but not particularly likely. Green beardedness is just as likely to be linked to a tendency to develop ingrowing toenails or any other trait, and a fondness for green beards is just as likely to go together with an inability to smell freesias. It is not very probable that one and the same gene would produce both the right label and the right sort of altruism. Nevertheless, what may be called the Green Beard Altruism Effect is a theoretical possibility. […]

There is an even more serious shortcoming in Wilson’s definition of kin selection. He deliberately excludes offspring: they don’t count as kin!* Now of course he knows perfectly well that offspring are kin to their parents, but he prefers not to invoke the theory of kin selection in order to explain altruistic care by parents of their own offspring. He is, of course, entitled to define a word however he likes, but this is a most confusing definition, and I hope that Wilson will change it in future editions of his justly influential book. Genetically speaking, parental care and brother/sister altruism evolve for exactly the same reason: in both cases there is a good chance that the altruistic gene is present in the body of the beneficiary. […]

There is one example of a mistake which is so extreme that you may prefer to regard it not as a mistake at all, but as evidence against the selfish gene theory. This is the case of bereaved monkey mothers who have been seen to steal a baby from another female, and look after it. I see this as a double mistake, since the adopter not only wastes her own time; she also releases a rival female from the burden of child-rearing, and frees her to have another child more quickly. It seems to me a critical example which deserves some thorough research. We need to know how often it happens; what the average relatedness between adopter and child is likely to be; and what the attitude of the real mother of the child is—it is, after all, to her advantage that her child should be adopted; do mothers deliberately try to deceive naive young females into adopting their children? (It has also been suggested that adopters and baby-snatchers might benefit by gaining valuable practice in the art of childrearing.) […]

So we conclude that the ‘true’ relatedness may be less important in the evolution of altruism than the best estimate of relatedness that animals can get. This fact is probably a key to understanding why parental care is so much more common and more devoted than brother/sister altruism in nature, and also why animals may value themselves more highly even than several brothers. Briefly, what I am saying is that, in addition to the index of relatedness, we should consider something like an index of ‘certainty’. Although the parent/child relationship is no closer genetically than the brother/sister relationship, its certainty is greater. It is normally possible to be much more certain who your children are than who your brothers are. And you can be more certain still who you yourself are! […]

I recommend Alan Grafen’s essay ‘Natural Selection, Kin Selection and Group Selection’ as a clear-thinking, and I hope now definitive, sorting out of the neo-group-selection problem. […]

When a woman reached the age where the average chance of each child reaching adulthood was just less than half the chance of each grandchild of the same age reaching adulthood, any gene for investing in grandchildren in preference to children would tend to prosper. Such a gene is carried by only one in four grandchildren, whereas the rival gene is carried by one in two children, but the greater expectation of life of the grandchildren outweighs this, and the ‘grandchild altruism’ gene prevails in the gene pool. A woman could not invest fully in her grandchildren if she went on having children of her own. Therefore genes for becoming reproductively infertile in middle age became more numerous, since they were carried in the bodies of grandchildren whose survival was assisted by grandmotherly altruism. This is a possible explanation of the evolution of the menopause in females. The reason why the fertility of males tails off gradually rather than abruptly is probably that males do not invest so much as females in each individual child anyway. Provided he can sire children by young women, it will always pay even a very old man to invest in children rather than in grandchildren. […]
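The threshold in this argument is simple arithmetic: a gene in a grandmother sits in a given child with probability ½ but in a given grandchild with only ¼, so grandchild care pays only once a grandchild's survival prospect is more than double a child's. A minimal sketch of that comparison (the survival probabilities are hypothetical, chosen only for illustration):

```python
from fractions import Fraction

R_CHILD = Fraction(1, 2)       # a grandmother's gene is in a given child with prob 1/2
R_GRANDCHILD = Fraction(1, 4)  # ...and in a given grandchild with prob 1/4

def expected_copies(relatedness, p_survive):
    """Expected surviving copies of the gene per unit of care invested."""
    return relatedness * p_survive

# Hypothetical survival-to-adulthood chances for an ageing mother's next
# child versus a same-aged grandchild:
p_child, p_grandchild = 0.35, 0.8

# Grandchild care wins exactly when p_child < p_grandchild / 2, i.e. when
# each child's chance is 'just less than half' a grandchild's chance.
prefer_grandchild = (expected_copies(R_GRANDCHILD, p_grandchild)
                     > expected_copies(R_CHILD, p_child))
print(prefer_grandchild)
```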

What Alexander is basically saying is this. A gene that made a child grab more than his fair share when he was a child, at the expense of his parent’s total reproductive output, might indeed increase his chances of surviving. But he would pay the penalty when he came to be a parent himself, because his own children would tend to inherit the same selfish gene, and this would reduce his overall reproductive success. He would be hoist with his own petard. Therefore the gene cannot succeed, and parents must always win the conflict. Our suspicions should be immediately aroused by this argument, because it rests on the assumption of a genetic asymmetry which is not really there. Alexander is using the words ‘parent’ and ‘offspring’ as though there was a fundamental genetic difference between them. As we have seen, although there are practical differences between parent and child, for instance parents are older than children, and children come out of parents’ bodies, there is really no fundamental genetic asymmetry. The relatedness is 50 per cent, whichever way round you look at it. To illustrate what I mean, I am going to repeat Alexander’s words, but with ‘parent’, ‘juvenile’ and other appropriate words reversed. ‘Suppose that a parent has a gene that tends to cause an even distribution of parental benefits. A gene which in this fashion improves an individual’s fitness when it is a parent could not fail to have lowered its fitness more when it was a juvenile.’ We therefore reach the opposite conclusion to Alexander, namely that in any parent/offspring conflict, the child must win! Obviously something is wrong here. Both arguments have been put too simply. The purpose of my reverse quotation is not to prove the opposite point to Alexander, but simply to show that you cannot argue in that kind of artificially asymmetrical way. 
Both Alexander’s argument, and my reversal of it, erred through looking at things from the point of view of an individual—in Alexander’s case, the parent, in my case, the child. I believe this kind of error is all too easy to make when we use the technical term ‘fitness’. This is why I have avoided using the word in this book. There is really only one entity whose point of view matters in evolution, and that entity is the selfish gene. Genes in juvenile bodies will be selected for their ability to outsmart parental bodies; genes in parental bodies will be selected for their ability to outsmart the young. There is no paradox in the fact that the very same genes successively occupy a juvenile body and a parental body. Genes are selected for their ability to make the best use of the levers of power at their disposal: they will exploit their practical opportunities. When a gene is sitting in a juvenile body its practical opportunities will be different from when it is sitting in a parental body. Therefore its optimum policy will be different in the two stages in its body’s life history. There is no reason to suppose, as Alexander does, that the later optimum policy should necessarily overrule the earlier. There is another way of putting the argument against Alexander. He is tacitly assuming a false asymmetry between the parent/child relationship on the one hand, and the brother/sister relationship on the other. You will remember that, according to Trivers, the cost to a selfish child of grabbing more than his share, the reason why he only grabs up to a point, is the danger of loss of his brothers and sisters who each bear half his genes. But brothers and sisters are only a special case of relatives with a 50 per cent relatedness. The selfish child’s own future children are no more and no less ‘valuable’ to him than his brothers and sisters. 
Therefore the total net cost of grabbing more than your fair share of resources should really be measured, not only in lost brothers and sisters, but also in lost future offspring due to their selfishness among themselves. Alexander’s point about the disadvantage of juvenile selfishness spreading to your own children, thereby reducing your own long-term reproductive output, is well taken, but it simply means we must add this in to the cost side of the equation. An individual child will still do well to be selfish so long as the net benefit to him is at least half the net cost to close relatives. But ‘close relatives’ should be read as including, not just brothers and sisters, but future children of one’s own as well. An individual should reckon his own welfare as twice as valuable as that of his brothers, which is the basic assumption Trivers makes. But he should also value himself twice as highly as one of his own future children. Alexander’s conclusion that there is a built-in advantage on the parent’s side in the conflict of interests is not correct.[…]

There is, then, no general answer to the question of who is more likely to win the battle of the generations. What will finally emerge is a compromise between the ideal situation desired by the child and that desired by the parent. It is a battle comparable to that between cuckoo and foster parent, not such a fierce battle to be sure, for the enemies do have some genetic interests in common—they are only enemies up to a point, or during certain sensitive times. However, many of the tactics used by cuckoos, tactics of deception and exploitation, may be employed by a parent’s own young, although the parent’s own young will stop short of the total selfishness that is to be expected of a cuckoo. […]

A male can achieve the same result without necessarily killing step-children. He can enforce a period of prolonged courtship before he copulates with a female, driving away all other males who approach her, and preventing her from escaping. In this way he can wait and see whether she is harbouring any little step-children in her womb, and desert her if so. […]

Let us take Maynard Smith’s method of analysing aggressive contests, and apply it to sex. This idea of trying to find an evolutionarily stable mix of strategies within one sex, balanced by an evolutionarily stable mix of strategies in the other sex, has now been taken further by Maynard Smith himself and, independently but in a similar direction, by Alan Grafen and Richard Sibly. Grafen and Sibly’s paper is the more technically advanced, Maynard Smith’s the easier to explain in words. Briefly, he begins by considering two strategies, Guard and Desert, which can be adopted by either sex. As in my ‘coy/fast and faithful/philanderer’ model, the interesting question is, what combinations of strategies among males are stable against what combinations of strategies among females? The answer depends upon our assumption about the particular economic circumstances of the species. Interestingly, though, however much we vary the economic assumptions, we don’t have a whole continuum of quantitatively varying stable outcomes. The model tends to home in on one of only four stable outcomes. The four outcomes are named after animal species that exemplify them. There is the Duck (male deserts, female guards), the Stickleback (female deserts, male guards), the Fruit-fly (both desert) and the Gibbon (both guard). And here is something even more interesting. Remember from Chapter 5 that ESS models can settle at either of two outcomes, both equally stable? Well, that is true of this Maynard Smith model, too. What is especially interesting is that particular pairs, as opposed to other pairs, of these outcomes are jointly stable under the same economic circumstances. For instance, under one range of circumstances, both Duck and Stickleback are stable. Which of the two actually arises depends upon luck or, more precisely, upon accidents of evolutionary history—initial conditions. Under another range of circumstances, both Gibbon and Fruit-fly are stable. 
Again, it is historical accident that determines which of the two occurs in any given species. But there are no circumstances in which Gibbon and Duck are jointly stable, no circumstances in which Duck and Fruit-fly are jointly stable. This ‘stablemate’ (to coin a double pun) analysis of congenial and uncongenial combinations of ESSs has interesting consequences for our reconstructions of evolutionary history. For instance, it leads us to expect that certain kinds of transitions between mating systems in evolutionary history will be probable, others improbable. Maynard Smith explores these historical networks in a brief survey of mating patterns throughout the animal kingdom, ending with the memorable rhetorical question: ‘Why don’t male mammals lactate?’ […]
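The Guard/Desert model can be caricatured in a few lines of code. The payoff structure below is my own assumption for illustration, not the parameters of Maynard Smith's paper: brood survival rises with the number of guarding parents, a deserting female lays more eggs, and a deserting male gets some chance of a second brood. The point is only that exhaustively checking best replies recovers the stable pairs, including jointly stable pairs under one set of circumstances:

```python
STRATEGIES = ("Guard", "Desert")

def stable_pairs(P, eggs_guard, eggs_desert, remate):
    """Return (male, female) strategy pairs where neither sex gains by switching.

    P[k] is brood survival with k guarding parents; a deserting female lays
    eggs_desert (> eggs_guard) eggs; a deserting male sires a second brood
    with probability `remate`.
    """
    def payoffs(m, f):
        eggs = eggs_guard if f == "Guard" else eggs_desert
        surviving = eggs * P[(m == "Guard") + (f == "Guard")]
        male = surviving * (1 + remate) if m == "Desert" else surviving
        return male, surviving  # female payoff is just her surviving brood

    stable = []
    for m in STRATEGIES:
        for f in STRATEGIES:
            best_m = all(payoffs(m, f)[0] >= payoffs(m2, f)[0] for m2 in STRATEGIES)
            best_f = all(payoffs(m, f)[1] >= payoffs(m, f2)[1] for f2 in STRATEGIES)
            if best_m and best_f:
                stable.append((m, f))
    return stable

# Care from both parents essential: only 'Gibbon' (both guard) is stable.
print(stable_pairs(P=(0.0, 0.1, 0.9), eggs_guard=9, eggs_desert=10, remate=0.1))

# Guarding adds nothing: only 'Fruit-fly' (both desert) is stable.
print(stable_pairs(P=(0.5, 0.5, 0.5), eggs_guard=8, eggs_desert=10, remate=0.5))

# One parent almost as good as two: 'Duck' (male deserts, female guards) and
# 'Stickleback' (female deserts, male guards) are jointly stable, and
# historical accident decides which one a species ends up at.
print(stable_pairs(P=(0.1, 0.8, 0.85), eggs_guard=9, eggs_desert=10, remate=0.3))
```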

Among birds and mammals these cases of paternal devotion are exceptionally rare, but they are common among fish. Why?* This is a challenge for the selfish gene theory which has puzzled me for a long time. An ingenious solution was recently suggested to me in a tutorial by Miss T. R. Carlisle. She makes use of Trivers’s ‘cruel bind’ idea, referred to above, as follows. Many fish do not copulate, but instead simply spew out their sex cells into the water. Fertilization takes place in the open water, not inside the body of one of the partners. This is probably how sexual reproduction first began. Land animals like birds, mammals and reptiles, on the other hand, cannot afford this kind of external fertilization, because their sex cells are too vulnerable to drying-up. The gametes of one sex—the male, since sperms are mobile—are introduced into the wet interior of a member of the other sex—the female. So much is just fact. Now comes the idea. After copulation, the land-dwelling female is left in physical possession of the embryo. It is inside her body. Even if she lays the fertilized egg almost immediately, the male still has time to vanish, thereby forcing the female into Trivers’s ‘cruel bind’. The male is inevitably provided with an opportunity to take the prior decision to desert, closing the female’s options, and forcing her to decide whether to leave the young to certain death, or whether to stay with it and rear it. Therefore, maternal care is more common among land animals than paternal care. But for fish and other water-dwelling animals things are very different. If the male does not physically introduce his sperms into the female’s body there is no necessary sense in which the female is left ‘holding the baby’. Either partner might make a quick getaway and leave the other one in possession of the newly fertilized eggs. But there is even a possible reason why it might often be the male who is most vulnerable to being deserted. 
It seems probable that an evolutionary battle will develop over who sheds their sex cells first. The partner who does so has the advantage that he or she can then leave the other one in possession of the new embryos. On the other hand, the partner who spawns first runs the risk that his prospective partner may subsequently fail to follow suit. Now the male is more vulnerable here, if only because sperms are lighter and more likely to diffuse than eggs. If a female spawns too early, i.e. before the male is ready, it will not greatly matter because the eggs, being relatively large and heavy, are likely to stay together as a coherent clutch for some time. Therefore a female fish can afford to take the ‘risk’ of spawning early. The male dare not take this risk, since if he spawns too early his sperms will have diffused away before the female is ready, and she will then not spawn herself, because it will not be worth her while to do so. Because of the diffusion problem, the male must wait until the female spawns, and then he must shed his sperms over the eggs. But she has had a precious few seconds in which to disappear, leaving the male in possession, and forcing him on to the horns of Trivers’s dilemma. So this theory neatly explains why paternal care is common in water but rare on dry land.

cases of paternal devotion . . . common among fish. Why?

Tamsin Carlisle’s undergraduate hypothesis about fish has now been tested comparatively by Mark Ridley, in the course of an exhaustive review of paternal care in the entire animal kingdom. His paper is an astonishing tour de force which, like Carlisle’s hypothesis itself, also began as an undergraduate essay written for me. Unfortunately he did not find in favour of the hypothesis.

It was Hamilton who brilliantly realized that, at least in the ants, bees, and wasps, the workers may actually be more closely related to the brood than the queen herself is! This led him, and later Trivers and Hare, on to one of the most spectacular triumphs of the selfish gene theory. The reasoning goes like this. Insects of the group known as the Hymenoptera, including ants, bees, and wasps, have a very odd system of sex determination. Termites do not belong to this group and they do not share the same peculiarity. A hymenopteran nest typically has only one mature queen. She made one mating flight when young and stored up the sperms for the rest of her long life—ten years or even longer. She rations the sperms out to her eggs over the years, allowing the eggs to be fertilized as they pass out through her tubes. But not all the eggs are fertilized. The unfertilized ones develop into males. A male therefore has no father, and all the cells of his body contain just a single set of chromosomes (all obtained from his mother) instead of a double set (one from the father and one from the mother) as in ourselves. In terms of the analogy of Chapter 3, a male hymenopteran has only one copy of each ‘volume’ in each of his cells, instead of the usual two. A female hymenopteran, on the other hand, is normal in that she does have a father, and she has the usual double set of chromosomes in each of her body cells. Whether a female develops into a worker or a queen depends not on her genes but on how she is brought up. That is to say, each female has a complete set of queen-making genes, and a complete set of worker-making genes (or, rather, sets of genes for making each specialized caste of worker, soldier, etc.). Which set of genes is ‘turned on’ depends on how the female is reared, in particular on the food she receives. Although there are many complications, this is essentially how things are. We do not know why this extraordinary system of sexual reproduction evolved. 
No doubt there were good reasons, but for the moment we must just treat it as a curious fact about the Hymenoptera. Whatever the original reason for the oddity, it plays havoc with Chapter 6’s neat rules for calculating relatedness. It means that the sperms of a single male, instead of all being different as they are in ourselves, are all exactly the same. A male has only a single set of genes in each of his body cells, not a double set. Every sperm must therefore receive the full set of genes rather than a 50 per cent sample, and all sperms from a given male are therefore identical. Let us now try to calculate the relatedness between a mother and son. If a male is known to possess a gene A, what are the chances that his mother shares it? The answer must be 100 per cent, since the male had no father and obtained all his genes from his mother. But now suppose a queen is known to have the gene B. The chance that her son shares the gene is only 50 per cent, since he contains only half her genes. This sounds like a contradiction, but it is not. A male gets all his genes from his mother, but a mother only gives half her genes to her son. The solution to the apparent paradox lies in the fact that a male has only half the usual number of genes. There is no point in puzzling over whether the ‘true’ index of relatedness is ½ or 1. The index is only a man-made measure, and if it leads to difficulties in particular cases, we may have to abandon it and go back to first principles. From the point of view of a gene A in the body of a queen, the chance that the gene is shared by a son is ½, just as it is for a daughter. From a queen’s point of view therefore, her offspring, of either sex, are as closely related to her as human children are to their mother. Things start to get intriguing when we come to sisters. Full sisters not only share the same father: the two sperms that conceived them were identical in every gene. 
The sisters are therefore equivalent to identical twins as far as their paternal genes are concerned. If one female has a gene A, she must have got it from either her father or her mother. If she got it from her mother then there is a 50 per cent chance that her sister shares it. But if she got it from her father, the chances are 100 per cent that her sister shares it. Therefore the relatedness between hymenopteran full sisters is not ½, as it would be for normal sexual animals, but ¾. It follows that a hymenopteran female is more closely related to her full sisters than she is to her offspring of either sex.* As Hamilton realized (though he did not put it in quite the same way) this might well predispose a female to farm her own mother as an efficient sister-making machine. A gene for vicariously making sisters replicates itself more rapidly than a gene for making offspring directly. Hence worker sterility evolved. It is presumably no accident that true sociality, with worker sterility, seems to have evolved no fewer than eleven times independently in the Hymenoptera and only once in the whole of the rest of the animal kingdom, namely in the termites. However, there is a catch. If the workers are successfully to farm their mother as a sister-producing machine, they must somehow curb her natural tendency to give them an equal number of little brothers as well. From the point of view of a worker, the chance of any one brother containing a particular one of her genes is only ¼. Therefore, if the queen were allowed to produce male and female reproductive offspring in equal proportions, the farm would not show a profit as far as the workers are concerned. They would not be maximizing the propagation of their precious genes. Trivers and Hare realized that the workers must try to bias the sex ratio in favour of females. 
They took the Fisher calculations on optimal sex ratios (which we looked at in the previous chapter) and re-worked them for the special case of the Hymenoptera. It turned out that the stable ratio of investment for a mother is, as usual, 1: 1. But the stable ratio for a sister is 3: 1 in favour of sisters rather than brothers. If you are a hymenopteran female, the most efficient way for you to propagate your genes is to refrain from breeding yourself, and to make your mother provide you with reproductive sisters and brothers in the ratio 3:1. But if you must have offspring of your own, you can benefit your genes best by having reproductive sons and daughters in equal proportions. As we have seen, the difference between queens and workers is not a genetic one. As far as her genes are concerned, an embryo female might be destined to become either a worker, who ‘wants’ a 3: 1 sex ratio, or a queen, who ‘wants’ a 1: 1 ratio. So what does this ‘wanting’ mean? It means that a gene that finds itself in a queen’s body can propagate itself best if that body invests equally in reproductive sons and daughters. But the same gene finding itself in a worker’s body can propagate itself best by making the mother of that body have more daughters than sons. There is no real paradox here. A gene must take best advantage of the levers of power that happen to be at its disposal. If it finds itself in a position to influence the development of a body that is destined to turn into a queen, its optimal strategy to exploit that control is one thing. If it finds itself in a position to influence the way a worker’s body develops, its optimal strategy to exploit that power is different. This means there is a conflict of interests down on the farm. The queen is ‘trying’ to invest equally in males and females. The workers are trying to shift the ratio of reproductives in the direction of three females to every one male. 
If we are right to picture the workers as the farmers and the queen as their brood mare, presumably the workers will be successful in achieving their 3: 1 ratio. If not, if the queen really lives up to her name and the workers are her slaves and the obedient tenders of the royal nurseries, then we should expect the 1: 1 ratio which the queen ‘prefers’ to prevail. Who wins in this special case of a battle of the generations? This is a matter that can be put to the test and that is what Trivers and Hare did, using a large number of species of ants. The sex ratio that is of interest is the ratio of male to female reproductives. These are the large winged forms which emerge from the ants’ nest in periodic bursts for mating flights, after which the young queens may try to found new colonies. It is these winged forms that have to be counted to obtain an estimate of the sex ratio. Now the male and female reproductives are, in many species, very unequal in size. This complicates things since, as we saw in the previous chapter, the Fisher calculations about optimal sex ratio strictly apply, not to numbers of males and females, but to quantity of investment in males and females. Trivers and Hare made allowance for this by weighing them. They took 20 species of ant and estimated the sex ratio in terms of investment in reproductives. They found a rather convincingly close fit to the 3: 1 female to male ratio predicted by the theory that the workers are running the show for their own benefit.* It seems then that in the ants studied, the conflict of interests is ‘won’ by the workers. This is not too surprising since worker bodies, being the guardians of the nurseries, have more power in practical terms than queen bodies. Genes trying to manipulate the world through queen bodies are outmanœuvred by genes manipulating the world through worker bodies. 
It is interesting to look around for some special circumstances in which we might expect queens to have more practical power than workers. Trivers and Hare realized that there was just such a circumstance which could be used as a critical test of the theory. This arises from the fact that there are some species of ant that take slaves. The workers of a slave-making species either do no ordinary work at all or are rather bad at it. What they are good at is going on slaving raids. True warfare in which large rival armies fight to the death is known only in man and in social insects. In many species of ants the specialized caste of workers known as soldiers have formidable fighting jaws, and devote their time to fighting for the colony against other ant armies. Slaving raids are just a particular kind of war effort. The slavers mount an attack on a nest of ants belonging to a different species, attempt to kill the defending workers or soldiers, and carry off the unhatched young. These young ones hatch out in the nest of their captors. They do not ‘realize’ that they are slaves and they set to work following their built-in nervous programs, doing all the duties that they would normally perform in their own nest. The slave-making workers or soldiers go on further slaving expeditions while the slaves stay at home and get on with the everyday business of running an ants’ nest, cleaning, foraging, and caring for the brood. The slaves are, of course, blissfully ignorant of the fact that they are unrelated to the queen and to the brood that they are tending. Unwittingly they are rearing new platoons of slave-makers. No doubt natural selection, acting on the genes of the slave species, tends to favour anti-slavery adaptations. However, these are evidently not fully effective because slavery is a widespread phenomenon. The consequence of slavery that is interesting from our present point of view is this. 
The queen of the slave-making species is now in a position to bend the sex ratio in the direction she ‘prefers’. This is because her own true-born children, the slavers, no longer hold the practical power in the nurseries. This power is now held by the slaves. The slaves ‘think’ they are looking after their own siblings and they are presumably doing whatever would be appropriate in their own nests to achieve their desired 3: 1 bias in favour of sisters. But the queen of the slave-making species is able to get away with countermeasures and there is no selection operating on the slaves to neutralize these counter-measures, since the slaves are totally unrelated to the brood. For example, suppose that in any ant species, queens ‘attempt’ to disguise male eggs by making them smell like female ones. Natural selection will normally favour any tendency by workers to ‘see through’ the disguise. We may picture an evolutionary battle in which queens continually ‘change the code’, and workers ‘break the code’. The war will be won by whoever manages to get more of her genes into the next generation, via the bodies of the reproductives. This will normally be the workers, as we have seen. But when the queen of a slave-making species changes the code, the slave workers cannot evolve any ability to break the code. This is because any gene in a slave worker ‘for breaking the code’ is not represented in the body of any reproductive individual, and so is not passed on. The reproductives all belong to the slave-making species, and are kin to the queen but not to the slaves. If the genes of the slaves find their way into any reproductives at all, it will be into the reproductives that emerge from the original nest from which they were kidnapped. The slave workers will, if anything, be busy breaking the wrong code! 
Therefore, queens of a slave-making species can get away with changing their code freely, without there being any danger that genes for breaking the code will be propagated into the next generation. The upshot of this involved argument is that we should expect in slave-making species that the ratio of investment in reproductives of the two sexes should approach 1: 1 rather than 3:1. For once, the queen will have it all her own way. This is just what Trivers and Hare found, although they only looked at two slave-making species. I must stress that I have told the story in an idealized way. Real life is not so neat and tidy. For instance, the most familiar social insect species of all, the honey bee, seems to do entirely the ‘wrong’ thing. There is a large surplus of investment in males over queens—something that does not appear to make sense from either the workers’ or the mother queen’s point of view.
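The relatedness figures driving this whole argument can be checked in a few lines. This is a sketch of the haplodiploid bookkeeping described above, not of Trivers and Hare's full Fisher-style calculation of stable investment:

```python
from fractions import Fraction

HALF = Fraction(1, 2)

# A gene in a hymenopteran female came from her father or her mother with
# equal probability. Her father's sperms are genetically identical, so a
# full sister shares any paternal gene with certainty, and shares any
# maternal gene with probability 1/2:
r_sister = HALF * 1 + HALF * HALF      # 3/4

# A brother develops from an unfertilized egg: he carries no paternal
# genes at all, and shares a maternal gene with probability 1/2:
r_brother = HALF * 0 + HALF * HALF     # 1/4

# Offspring of either sex carry half their mother's genes:
r_offspring = HALF                     # 1/2

# Sisters (3/4) beat offspring (1/2), which is the pull towards worker
# sterility; and a worker values sisters over brothers by 3 to 1, the
# ratio of investment Trivers and Hare predict workers will enforce:
print(r_sister, r_brother, r_sister / r_brother)
```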

Alan Grafen pointed out to me another and more worrying problem with the account of hymenopteran sex ratios given in the first edition of this book. I have explained his point in The Extended Phenotype (pp. 75–6). Here is a brief extract:

The potential worker is still indifferent between rearing siblings and rearing offspring at any conceivable population sex ratio. Thus suppose the population sex ratio is female-biased, even suppose it conforms to Trivers and Hare’s predicted 3:1. Since the worker is more closely related to her sister than to her brother or her offspring of either sex, it might seem that she would ‘prefer’ to rear siblings over offspring given such a female-biased sex ratio: is she not gaining mostly valuable sisters (plus only a few relatively worthless brothers) when she opts for siblings? But this reasoning neglects the relatively great reproductive value of males in such a population as a consequence of their rarity. The worker may not be closely related to each of her brothers, but if males are rare in the population as a whole each one of those brothers is correspondingly highly likely to be an ancestor of future generations.

Richard Dawkins, The Selfish Gene, 30th Anniversary Edition, 2006

Footnotes (1989 edition) are doubly indented.


Added to diary 27 June 2018

Darwin’s ‘survival of the fittest’ is really a special case of a more general law of survival of the stable. The universe is populated by stable things. […] But when the replicators became numerous, building blocks must have been used up at such a rate that they became a scarce and precious resource. […] Some of them may even have ‘discovered’ how to break up molecules of rival varieties chemically, and to use the building blocks so released for making their own copies. These proto-carnivores simultaneously obtained food and removed competing rivals. Other replicators perhaps discovered how to protect themselves, either chemically, or by building a physical wall of protein around themselves. This may have been how the first living cells appeared. Replicators began not merely to exist, but to construct for themselves containers, vehicles for their continued existence. The replicators that survived were the ones that built survival machines for themselves to live in. […]

A society of ants, bees, or termites achieves a kind of individuality at a higher level. Food is shared to such an extent that one may speak of a communal stomach. Information is shared so efficiently by chemical signals and by the famous ‘dance’ of the bees that the community behaves almost as if it were a unit with a nervous system and sense organs of its own. Foreign intruders are recognized and repelled with something of the selectivity of a body’s immune reaction system. The rather high temperature inside a beehive is regulated nearly as precisely as that of the human body, even though an individual bee is not a ‘warm blooded’ animal. Finally and most importantly, the analogy extends to reproduction. The majority of individuals in a social insect colony are sterile workers. The ‘germ line’—the line of immortal gene continuity—flows through the bodies of a minority of individuals, the reproductives. These are the analogues of our own reproductive cells in our testes and ovaries. The sterile workers are the analogy of our liver, muscle, and nerve cells. […]

I think that a new kind of replicator has recently emerged on this very planet. It is staring us in the face. It is still in its infancy, still drifting clumsily about in its primeval soup, but already it is achieving evolutionary change at a rate that leaves the old gene panting far behind. The new soup is the soup of human culture. We need a name for the new replicator, a noun that conveys the idea of a unit of cultural transmission, or a unit of imitation. ‘Mimeme’ comes from a suitable Greek root, but I want a monosyllable that sounds a bit like ‘gene’. I hope my classicist friends will forgive me if I abbreviate mimeme to meme.*

Meme

The word meme seems to be turning out to be a good meme. It is now quite widely used and in 1988 it joined the official list of words being considered for future editions of Oxford English Dictionaries. This makes me the more anxious to repeat that my designs on human culture were modest almost to vanishing point. My true ambitions—and they are admittedly large—lead in another direction entirely. I want to claim almost limitless power for slightly inaccurate self-replicating entities, once they arise anywhere in the universe. This is because they tend to become the basis for Darwinian selection which, given enough generations, cumulatively builds systems of great complexity. I believe that, given the right conditions, replicators automatically band together to create systems, or machines, that carry them around and work to favour their continued replication. The first ten chapters of The Selfish Gene had concentrated exclusively on one kind of replicator, the gene. In discussing memes in the final chapter I was trying to make the case for replicators in general, and to show that genes were not the only members of that important class. Whether the milieu of human culture really does have what it takes to get a form of Darwinism going, I am not sure. But in any case that question is subsidiary to my concern. Chapter 11 will have succeeded if the reader closes the book with the feeling that DNA molecules are not the only entities that might form the basis for Darwinian evolution. My purpose was to cut the gene down to size, rather than to sculpt a grand theory of human culture.

Fundamentally, the reason why it is good policy for us to try to explain biological phenomena in terms of gene advantage is that genes are replicators. As soon as the primeval soup provided conditions in which molecules could make copies of themselves, the replicators themselves took over. For more than three thousand million years, DNA has been the only replicator worth talking about in the world. But it does not necessarily hold these monopoly rights for all time. Whenever conditions arise in which a new kind of replicator can make copies of itself, the new replicators will tend to take over, and start a new kind of evolution of their own. Once this new evolution begins, it will in no necessary sense be subservient to the old. The old gene-selected evolution, by making brains, provided the ‘soup’ in which the first memes arose. Once self-copying memes had arisen, their own, much faster, kind of evolution took off.

What if a mutant gene arose that just happened to have an effect, not upon something obvious like eye colour or curliness of hair, but upon meiosis itself? Suppose it happened to bias meiosis in such a way that it, the mutant gene itself, was more likely than its allelic partner to end up in the egg. There are such genes and they are called segregation distorters. They have a diabolical simplicity. When a segregation distorter arises by mutation, it will spread inexorably through the population at the expense of its allele. It is this that is known as meiotic drive. It will happen even if the effects on bodily welfare, and on the welfare of all the other genes in the body, are disastrous. […]

The individual organism is something whose existence most biologists take for granted, probably because its parts do pull together in such a united and integrated way. Questions about life are conventionally questions about organisms. Biologists ask why organisms do this, why organisms do that. They frequently ask why organisms group themselves into societies. They don’t ask—though they should—why living matter groups itself into organisms in the first place. Why isn’t the sea still a primordial battleground of free and independent replicators? Why did the ancient replicators club together to make, and reside in, lumbering robots, and why are those robots—individual bodies, you and me—so large and so complicated? […]

The phenotypic effects of a gene are normally seen as all the effects that it has on the body in which it sits. This is the conventional definition. But we shall now see that the phenotypic effects of a gene need to be thought of as all the effects that it has on the world. It may be that a gene’s effects, as a matter of fact, turn out to be confined to the succession of bodies in which the gene sits. But, if so, it will be just as a matter of fact. It will not be something that ought to be part of our very definition. In all this, remember that the phenotypic effects of a gene are the tools by which it levers itself into the next generation. All that I am going to add is that the tools may reach outside the individual body wall. What might it mean in practice to speak of a gene as having an extended phenotypic effect on the world outside the body in which it sits? Examples that spring to mind are artefacts like beaver dams, bird nests and caddis houses. […] [C]addis larvae are anything but nondescript. They are among the most remarkable creatures on earth. Using cement of their own manufacture, they skilfully build tubular houses for themselves out of materials that they pick up from the bed of the stream. The house is a mobile home, carried about as the caddis walks, like the shell of a snail or hermit crab except that the animal builds it instead of growing it or finding it. Some species of caddis use sticks as building materials, others fragments of dead leaves, others small snail shells. But perhaps the most impressive caddis houses are the ones built in local stone. The caddis chooses its stones carefully, rejecting those that are too large or too small for the current gap in the wall, even rotating each stone until it achieves the snuggest fit. Incidentally, why does this impress us so? 
If we forced ourselves to think in a detached way we surely ought to be more impressed by the architecture of the caddis’s eye, or of its elbow joint, than by the comparatively modest architecture of its stone house. After all, the eye and the elbow joint are far more complicated and ‘designed’ than the house. Yet, perhaps because the eye and elbow joint develop in the same kind of way as our own eyes and elbows develop, a building process for which we, inside our mothers, claim no credit, we are illogically more impressed by the house. […]

Although geneticists may think it an odd idea, it is therefore sensible for us to speak of genes ‘for’ stone shape, stone size, stone hardness and so on. […] A geneticist might wish to claim that the direct influence of the genes is upon the nervous system that mediates the stone-choosing behaviour, not upon the stones themselves. But I invite such a geneticist to look carefully at what it can ever mean to speak of genes exerting an influence on a nervous system. All that genes can really influence directly is protein synthesis. […]

To quite a large extent the interests of parasite genes and host genes may coincide. From the selfish gene point of view we can think of both fluke genes and snail genes as ‘parasites’ in the snail body. Both gain from being surrounded by the same protective shell, though they diverge from one another in the precise thickness of shell that they ‘prefer’. This divergence arises, fundamentally, from the fact that their method of leaving this snail’s body and entering another one is different. For the snail genes the method of leaving is via snail sperms or eggs. For the fluke’s genes it is very different. Without going into the details (they are distractingly complicated) what matters is that their genes do not leave the snail’s body in the snail’s sperms or eggs. […]

Wood-boring ambrosia beetles (of the species Xyleborus ferrugineus) are parasitized by bacteria that not only live in their host’s body but also use the host’s eggs as their transport into a new host. The genes of such parasites therefore stand to gain from almost exactly the same future circumstances as the genes of their host. The two sets of genes can be expected to ‘pull together’ for just the same reasons as all the genes of one individual organism normally pull together. It is irrelevant that some of them happen to be ‘beetle genes’, while others happen to be ‘bacterial genes’. Both sets of genes are ‘interested’ in beetle survival and the propagation of beetle eggs, because both ‘see’ beetle eggs as their passport to the future. So the bacterial genes share a common destiny with their host’s genes, and in my interpretation we should expect the bacteria to cooperate with their beetles in all aspects of life. It turns out that ‘cooperate’ is putting it mildly. The service they perform for the beetles could hardly be more intimate. These beetles happen to be haplodiploid, like bees and ants (see Chapter 10). If an egg is fertilized by a male, it always develops into a female. An unfertilized egg develops into a male. Males, in other words, have no father. The eggs that give rise to them develop spontaneously, without being penetrated by a sperm. But, unlike the eggs of bees and ants, ambrosia beetle eggs do need to be penetrated by something. This is where the bacteria come in. They prick the unfertilized eggs into action, provoking them to develop into male beetles. […]

We can take this argument to its logical conclusion and apply it to normal, ‘own’ genes. Our own genes cooperate with one another, not because they are our own but because they share the same outlet—sperm or egg—into the future. If any genes of an organism, such as a human, could discover a way of spreading themselves that did not depend on the conventional sperm or egg route, they would take it and be less cooperative. This is because they would stand to gain by a different set of future outcomes from the other genes in the body. […]

Beaver lakes are extended phenotypic effects of beaver genes, and they can extend over several hundreds of yards. A long reach indeed![…]

The group of organisms—the flock of birds, the pack of wolves—does not merge into a single vehicle, precisely because the genes in the flock or the pack do not share a common method of leaving the present vehicle. To be sure, packs may bud off daughter packs. But the genes in the parent pack don’t pass to the daughter pack in a single vessel in which all have an equal share. The genes in a pack of wolves don’t all stand to gain from the same set of events in the future. A gene can foster its own future welfare by favouring its own individual wolf, at the expense of other individual wolves. An individual wolf, therefore, is a vehicle worthy of the name. A pack of wolves is not. Genetically speaking, the reason for this is that all the cells except the sex cells in a wolf’s body have the same genes, while, as for the sex cells, all the genes have an equal chance of being in each one of them. But the cells in a pack of wolves do not have the same genes, nor do they have the same chance of being in the cells of sub-packs that are budded off. […]

Everywhere we find that life, as a matter of fact, is bundled into discrete, individually purposeful vehicles like wolves and bee-hives. But the doctrine of the extended phenotype has taught us that it needn’t have been so. Fundamentally, all that we have a right to expect from our theory is a battleground of replicators, jostling, jockeying, fighting for a future in the genetic hereafter. The weapons in the fight are phenotypic effects, initially direct chemical effects in cells but eventually feathers and fangs and even more remote effects. It undeniably happens to be the case that these phenotypic effects have largely become bundled up into discrete vehicles, each with its genes disciplined and ordered by the prospect of a shared bottleneck of sperms or eggs funnelling them into the future. But this is not a fact to be taken for granted. It is a fact to be questioned and wondered at in its own right. Why did genes come together into large vehicles, each with a single genetic exit route? Why did genes choose to gang up and make large bodies for themselves to live in? In The Extended Phenotype I attempt to work out an answer to this difficult problem. […]

I shall divide the question up into three. Why did genes gang up in cells? Why did cells gang up in many-celled bodies? And why did bodies adopt what I shall call a ‘bottlenecked’ life cycle? […]

And no matter how many cells, of no matter how many specialized types, cooperate to perform the unimaginably complicated task of running an adult elephant, the efforts of all those cells converge on the final goal of producing single cells again—sperms or eggs. The elephant not only has its beginning in a single cell, a fertilized egg. Its end, meaning its goal or end-product, is the production of single cells, fertilized eggs of the next generation. The life cycle of the broad and bulky elephant both begins and ends with a narrow bottleneck. This bottlenecking is characteristic of the life cycles of all many-celled animals and most plants. Why? What is its significance? We cannot answer this without considering what life might look like without it. It will be helpful to imagine two hypothetical species of seaweed called bottle-wrack and splurge-weed. Splurge-weed grows as a set of straggling, amorphous branches in the sea. Every now and then branches break off and drift away. These breakages can occur anywhere in the plants, and the fragments can be large or small. As with cuttings in a garden, they are capable of growing just like the original plant. This shedding of parts is the species’s method of reproducing. As you will notice, it isn’t really different from its method of growing, except that the growing parts become physically detached from one another. Bottle-wrack looks the same and grows in the same straggly way. There is one crucial difference, however. It reproduces by releasing single-celled spores which drift off in the sea and grow into new plants. These spores are just cells of the plant like any others. As in the case of splurge-weed, no sex is involved. The daughters of a plant consist of cells that are clone-mates of the cells of the parent plant. 
The only difference between the two species is that splurge-weed reproduces by hiving off chunks of itself consisting of indeterminate numbers of cells, while bottle-wrack reproduces by hiving off chunks of itself always consisting of single cells. […]

There is only a limited amount of change that can be achieved by direct transformation in the ‘swords to ploughshares’ manner. Really radical change can be achieved only by going ‘back to the drawing board’, throwing away the previous design and starting afresh. When engineers go back to the drawing board and create a new design, they do not necessarily throw away the ideas from the old design. But they don’t literally try to deform the old physical object into the new one. The old object is too weighed down with the clutter of history. Maybe you can beat a sword into a ploughshare, but try ‘beating’ a propellor engine into a jet engine! You can’t do it. You have to discard the propellor engine and go back to the drawing board. Living things, of course, were never designed on drawing boards. But they do go back to fresh beginnings. They make a clean start in every generation. Every new organism begins as a single cell and grows anew. […]

With mutations around, the cells within a plant of splurge-weed will not have all the same genetic interests at heart. A gene in a splurge-weed cell stands to gain by promoting the reproduction of its cell. It does not necessarily stand to gain by promoting the reproduction of its ‘individual’ plant. Mutation will make it unlikely that the cells within a plant are genetically identical, so they won’t collaborate wholeheartedly with one another in the manufacture of organs and new plants. Natural selection will choose among cells rather than ‘plants’. In bottle-wrack, on the other hand, all the cells within a plant are likely to have the same genes, because only very recent mutations could divide them. Therefore they will happily collaborate in manufacturing efficient survival machines. Cells in different plants are more likely to have different genes. After all, cells that have passed through different bottlenecks may be distinguished by all but the most recent mutations—and this means the majority. Selection will therefore judge rival plants, not rival cells as in splurge-weed. So we can expect to see the evolution of organs and contrivances that serve the whole plant. […]

We can think of an individual organism as a ‘group’ of cells. A form of group selection can be made to work, provided some means can be found for increasing the ratio of between-group variation to within-group variation. Bottle-wrack’s reproductive habit has exactly the effect of increasing this ratio; splurge-weed’s habit has just the opposite effect. […]

Let me end with a brief manifesto, a summary of the entire selfish gene/extended phenotype view of life. It is a view, I maintain, that applies to living things everywhere in the universe. The fundamental unit, the prime mover of all life, is the replicator. A replicator is anything in the universe of which copies are made. Replicators come into existence, in the first place, by chance, by the random jostling of smaller particles. Once a replicator has come into existence it is capable of generating an indefinitely large set of copies of itself. No copying process is perfect, however, and the population of replicators comes to include varieties that differ from one another. Some of these varieties turn out to have lost the power of self-replication, and their kind ceases to exist when they themselves cease to exist. Others can still replicate, but less effectively. Yet other varieties happen to find themselves in possession of new tricks: they turn out to be even better self-replicators than their predecessors and contemporaries. It is their descendants that come to dominate the population. As time goes by, the world becomes filled with the most powerful and ingenious replicators. Gradually, more and more elaborate ways of being a good replicator are discovered. Replicators survive, not only by virtue of their own intrinsic properties, but by virtue of their consequences on the world. These consequences can be quite indirect. All that is necessary is that eventually the consequences, however tortuous and indirect, feed back and affect the success of the replicator at getting itself copied. The success that a replicator has in the world will depend on what kind of a world it is—the pre-existing conditions. Among the most important of these conditions will be other replicators and their consequences. Like the English and German rowers, replicators that are mutually beneficial will come to predominate in each other’s presence. 
At some point in the evolution of life on our earth, this ganging up of mutually compatible replicators began to be formalized in the creation of discrete vehicles—cells and, later, many-celled bodies. Vehicles that evolved a bottlenecked life cycle prospered, and became more discrete and vehicle-like. This packaging of living material into discrete vehicles became such a salient and dominant feature that, when biologists arrived on the scene and started asking questions about life, their questions were mostly about vehicles—individual organisms. The individual organism came first in the biologist’s consciousness, while the replicators—now known as genes—were seen as part of the machinery used by individual organisms. It requires a deliberate mental effort to turn biology the right way up again, and remind ourselves that the replicators come first, in importance as well as in history. One way to remind ourselves is to reflect that, even today, not all the phenotypic effects of a gene are bound up in the individual body in which it sits. Certainly in principle, and also in fact, the gene reaches out through the individual body wall and manipulates objects in the world outside, some of them inanimate, some of them other living beings, some of them a long way away. With only a little imagination we can see the gene as sitting at the centre of a radiating web of extended phenotypic power. And an object in the world is the centre of a converging web of influences from many genes sitting in many organisms. The long reach of the gene knows no obvious boundaries. The whole world is criss-crossed with causal arrows joining genes to phenotypic effects, far and near. It is an additional fact, too important in practice to be called incidental but not necessary enough in theory to be called inevitable, that these causal arrows have become bundled up. Replicators are no longer peppered freely through the sea; they are packaged in huge colonies—individual bodies. 
And phenotypic consequences, instead of being evenly distributed throughout the world, have in many cases congealed into those same bodies. But the individual body, so familiar to us on our planet, did not have to exist. The only kind of entity that has to exist in order for life to arise, anywhere in the universe, is the immortal replicator.

Richard Dawkins, The Selfish Gene, 30th Anniversary Edition, 2006

Footnotes (1989 edition) are doubly indented.


Added to diary 27 June 2018

INTRODUCTION TO THE 30TH ANNIVERSARY EDITION

It is sobering to realise that I have lived nearly half my life with The Selfish Gene—for better, for worse. Over the years, as each of my seven subsequent books has appeared, publishers have sent me on tour to promote it. Audiences respond to the new book, whichever one it is, with gratifying enthusiasm, applaud politely and ask intelligent questions. Then they line up to buy, and have me sign . . . The Selfish Gene. […]

Many critics, especially vociferous ones learned in philosophy as I have discovered, prefer to read a book by title only. No doubt this works well enough for The Tale of Benjamin Bunny or The Decline and Fall of the Roman Empire, but I can readily see that ‘The Selfish Gene’ on its own, without the large footnote of the book itself, might give an inadequate impression of its contents. […]

In the dozen years since The Selfish Gene was published its central message has become textbook orthodoxy. This is paradoxical, but not in the obvious way. It is not one of those books that was reviled as revolutionary when published, then steadily won converts until it ended up so orthodox that we now wonder what the fuss was about. Quite the contrary. From the outset the reviews were gratifyingly favourable and it was not seen, initially, as a controversial book. Its reputation for contentiousness took years to grow until, by now, it is widely regarded as a work of radical extremism. […]

But a change of vision can, at its best, achieve something loftier than a theory. It can usher in a whole climate of thinking, in which many exciting and testable theories are born, and unimagined facts laid bare. […] I prefer not to make a clear separation between science and its ‘popularization’. Expounding ideas that have hitherto appeared only in the technical literature is a difficult art. It requires insightful new twists of language and revealing metaphors. If you push novelty of language and metaphor far enough, you can end up with a new way of seeing. And a new way of seeing, as I have just argued, can in its own right make an original contribution to science. […]

If superior creatures from space ever visit earth, the first question they will ask, in order to assess the level of our civilization, is: ‘Have they discovered evolution yet?’

When Oxford University Press approached me for a second edition they insisted that a conventional, comprehensive, page by page revision was inappropriate. There are some books that, from their conception, are obviously destined for a string of editions, and The Selfish Gene was not one of them. The first edition borrowed a youthful quality from the times in which it was written. There was a whiff of revolution abroad, a streak of Wordsworth’s blissful dawn. A pity to change a child of those times, fatten it with new facts or wrinkle it with complications and cautions. So, the original text should stand, warts, sexist pronouns and all. Notes at the end would cover corrections, responses and developments. And there should be entirely new chapters, on subjects whose novelty in their own time would carry forward the mood of revolutionary dawn.

Richard Dawkins, The Selfish Gene, 30th Anniversary Edition, 2006


Added to diary 27 June 2018

rob-eastaway

Think of those occasions where you are moving house. You know that the volume of items you need to shift is enough to fill about twenty identical boxes – except that it would require far too much time and planning to work out what to put where. Instead, you go for the easiest method of packing, which is to put items at random into the first box, and, when the next item you pick up doesn’t fit, you seal up the box you have been filling and start on a new one. This is known as a ‘first-fit strategy’. How efficient is it? It turns out that, however unlucky you are with the order in which things come to hand, the number of boxes you need will always be within 70 per cent of what could be achieved with perfect allocation. So if your best possible is 20 boxes, you can reassure yourself that even the lazy first-fit strategy should require at most 34 boxes. If this isn’t good enough – and, let’s be honest, 70 per cent is a bit of a waste, though this is the worst-case scenario – a strategy of packing the biggest things first and the smallest last always turns out to be within 22 per cent of the very best solution. This means a worst case of 25 boxes, instead of the optimal 20. And, since many of us tend to use a biggest-first strategy, especially when filling the car boot, this shows that when it comes to packing, common sense is a good substitute for deep mathematical thinking. […]
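
Both strategies are easy to sketch in code. This is my own rough illustration, not from the book; note that the sealed-box procedure Eastaway describes is, strictly, usually called ‘next-fit’, while the version below keeps every started box open, which is first-fit proper – the variant the 70 per cent bound is quoted for.

```python
def first_fit(items, capacity):
    """Place each item, in the order it comes to hand, into the first
    open box that still has room; open a new box if none does."""
    boxes = []
    for item in items:
        for box in boxes:
            if sum(box) + item <= capacity:
                box.append(item)
                break
        else:  # no existing box had room
            boxes.append([item])
    return boxes


def first_fit_decreasing(items, capacity):
    """The biggest-things-first strategy: sort largest first, then pack."""
    return first_fit(sorted(items, reverse=True), capacity)
```

With box capacity 10 and items of sizes 6, 5, 4, 3, 2, 1, first-fit uses three boxes ([6, 4], [5, 3, 2], [1]), which in this lucky case matches the best possible packing.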

[T]he building needs thirty floors just to enable the lift to reach top speed for a few moments. And that’s assuming that the lift gets a clear run. In a busy building, a lift is likely to make lots of short trips, which means it will rarely have a chance to accelerate to high speed. Added to this, the time saved in speeding up the lift will be small compared with the time spent opening and closing the doors to let passengers in and out. In other words, making the lifts go faster has little impact on the overall waiting time.[…]

For example, imagine you are three floors down in the basement of a building with two lifts. The indicator tells you that there are lifts sitting on higher floors, one on the ground floor and one on the third floor. You call the lift, and notice that the one that comes to collect you is the one from the third floor – despite being twice as far away. Why didn’t the ground-floor lift come to you? The answer is that ‘intelligent’ lifts are often programmed to have a slight bias towards sticking to the ground floor, where most of the passengers get on. The intelligent lift may calculate that it’s worth sending a more remote lift to collect you if it means that it can keep a lift waiting at the ground floor, where a flurry of passengers could arrive at any moment. You are being sacrificed (modestly) for the greater good. Here is another possible sacrifice. An intelligent lift is seeking to keep down both the average and the maximum waiting time. A customer on Floor 6 calls a lift, but is dismayed when it bypasses them to collect somebody at Floor 9. The reason might be that the modern intelligent lift is aware that the Floor 9 person has already been waiting for a minute and is therefore top priority. With this urgent case on Floor 9, the lift reckons that you on Floor 6 can wait a few more seconds. […]

In fact the Black Death entered Europe when a Tartar army catapulted infected corpses into a Genoese trading post […]. […]

Proving Benford’s Law is tricky, but here is one way of seeing why it might be true. Imagine you are setting up a raffle, in which you will randomly draw a number out of a hat. If you sell only four raffle tickets, numbered 1, 2, 3, 4, and then put them into a hat, what is the chance that the winning number will begin with 1? It is 1 in 4, of course, or 25 per cent. If you now start to sell more raffle tickets with higher numbers 5, 6, 7, and so on, your chance of drawing the 1 goes down, until it drops to 1 in 9, or 11 per cent, when nine tickets have been sold. When you add ticket number 10, however, two of the ten tickets now start with a 1 (namely 1 and 10), so the odds of having a leading 1 leap up to 2 in 10, or 20 per cent. This chance continues to climb as you sell tickets 11, 12, 13… up to 19 (when your odds are actually 11⁄19, or 58 per cent). As you add the 20s, 30s and above, your chances of getting a leading 1 fall again, so that when the hat contains the numbers 1 to 99, only 11⁄99, or about 11 per cent, have a leading 1. But what if you put in more than 100 numbers? Your chances increase once more. By the time you get to 199 raffle tickets, the chance that the first digit of the winning ticket will be 1 is 111⁄199, which is over 50 per cent again. You can plot your chance of winning this game on a graph. On the vertical axis is the chance that the number you draw will begin with a 1, and along the bottom is the number of raffle tickets sold. Interestingly, the chance zigzags between about 58 per cent and 11 per cent as the number of tickets sold increases. You don’t know how many will be sold in the end, but you can see that the ‘average’ chance is going to be somewhere around the middle of those two, which is what Benford’s Law predicts. The exact chance that a number will begin with digit N as predicted by Benford’s Law is: log (N+1) – log (N), where log is the logarithm of the number to base 10 (the log button on most calculators). 
For N = 1, this predicts log (2) – log (1), or 0.301, which is 30.1 per cent. […]
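
The raffle argument and the formula can both be checked numerically; a minimal sketch (the function names are mine, not Eastaway’s):

```python
import math


def benford(d):
    """Benford's Law prediction for leading digit d: log(d+1) - log(d), base 10."""
    return math.log10(d + 1) - math.log10(d)


def leading_one_chance(n):
    """Chance that a ticket drawn uniformly from 1..n has leading digit 1."""
    return sum(1 for k in range(1, n + 1) if str(k)[0] == '1') / n
```

benford(1) gives 0.301, and leading_one_chance reproduces the zigzag from the text: 11⁄19 (about 58 per cent) with 19 tickets, 11⁄99 (about 11 per cent) with 99, and 111⁄199 (over 50 per cent) with 199. The nine digit probabilities also sum to 1, as they must.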

In 1985, a poem entitled ‘Shall I die?’ was discovered in the Bodleian Library of Oxford University. On the manuscript were the initials WS. Could this be a forgotten Shakespeare work? The investigations began. One early analysis was based on the patterns of words that Shakespeare used as his career progressed. In each new work, it turned out, Shakespeare had always included a certain number of words that had never appeared in any of his earlier works. (Fortunately, computers are able to do all of the word counting to prove this. Imagine how tedious this sort of analysis was before the electronic age.) It was therefore possible to predict how many new words might be expected in a new work. If there were too many, it would be pretty clear that the author couldn’t be Shakespeare. No new words at all, and it would look suspiciously as though somebody had tried too hard to copy Shakespeare’s style. The mathematical prediction was that the poem ‘Shall I die?’ should contain about seven new words. In fact it contained nine, which was pretty close. This was used as evidence to confirm Shakespeare’s authorship.

Rob Eastaway, How Long Is a Piece of String?, 2002


Added to diary 26 June 2018

robert-j-sternberg

In the inspection-time task, a person looks at two vertical lines of unequal length and is asked to identify which of the two is longer. Inspection time is the length of time of stimulus presentation each individual needs in order to discriminate which of the two lines is the longer. Some research suggests that more-intelligent individuals are able to discriminate the lengths of the lines in shorter inspection times.

Robert J. Sternberg, Encyclopædia Britannica, Intelligence

This study replicated and extended Kranzler and Jensen’s [Intelligence 13 (1989) 329] meta-analysis of the relationship between inspection time (IT) and intelligence (IQ). […] meta-analyses were conducted on obtained correlations (r’s) between IT and general IQ. […] For the total sample (N>4100), the r was −.51 after correction for artifactual effects (−.30 prior to correction).

Jennifer L. Grudnik, John H. Kranzler, “Meta-analysis of the relationship between intelligence and inspection time”, Intelligence, Volume 29, Issue 6, November–December 2001, Pages 523-535, https://doi.org/10.1016/S0160-2896(01)00078-2

Neuropsychologists and cognitive researchers often need quick estimates of global cognitive functioning [i.e., intelligence quotient (IQ)]. […]

The current article examined 11 proxy measures to determine their level of agreement with WAIS-III FSIQ across the entire sample. [(Wechsler Adult Intelligence Scale [Third Edition] full scale IQ test)] […]

Measures evaluated for this study included the Ward-7ST short form developed by Ward and modified for the WAIS-III by Pilgrim et al. (1999), the NAART, the SILS, ITBS, the Barona and Crawford demographic regression formulae, and the five OPIE3 hybrids combining demographic and WAIS-III subtest information. The final estimate examined was the ITBS (Hoover et al., 2003), a nationally recognized standardized school achievement test.

The Pearson correlation and confidence interval between WAIS-III FSIQ and each proxy measure are shown in Table 2. Correlations ranged from r = .25 for the Barona estimate to r = .95 for the Ward-7ST short form.

The performance of the proxy measures across the different cognitive ability groups was examined next. […]

Above-Average IQ Group

[…] The Ward-7ST estimate was the only proxy to correlate above r = .70 for the high ability group. […]

The most important finding of this article is how poorly the IQ proxy measures performed at the tails of the IQ distribution. The proxy measures consistently overestimated the IQ of low-functioning individuals and underestimated the IQs of high-functioning individuals.

Spinks, McKirgan, Arndt, Caspers, Yucuis and Pfalzgraf (2009). IQ estimate smackdown: comparing IQ proxy measures to the WAIS-III. Journal of the International Neuropsychological Society. 15. 590-6. doi:10.1017/S1355617709090766.


Added to diary 27 June 2018

robert-powell

In a preventive war scenario, the rising state (the one that is becoming more powerful) would like to guarantee that it would not use its powerful position to exploit the declining state in the future. The declining state would like to accept such a guarantee. Both states would prefer such a guarantee to risky and costly fighting. Yet both states know that the guarantee would be worthless once the rising state achieves a dominant position. Hence, the declining state may launch a war now in order to avoid being exploited in the future.

Matthew Adam Kocher, Commitment Problems and Preventive War, 8 August 2013

Complete-information bargaining can break down in this setting if the shift in the distribution of power is sufficiently large and rapid. To see why, consider the situation confronting a temporarily weak bargainer who expects to be stronger in the future (that is, the amount that this bargainer can lock in will increase). In order to avoid the inefficient use of power, this bargainer must buy off its temporarily strong adversary. To do this, the weaker party must promise the stronger at least as much of the flow as the latter can lock in. But when the once-weak bargainer becomes stronger, it may want to exploit its better bargaining position and renege on the promised transfer. Indeed, if the shift in the distribution of power is sufficiently large and rapid, the once-weak bargainer is certain to want to renege. Foreseeing this, the temporarily strong adversary uses its power to lock in a higher payoff while it still has the chance.

Robert Powell (2006). War as a Commitment Problem. International Organization, 60(1), 169-203. doi:10.1017/S0020818306060061
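Powell's mechanism can be made concrete with a stylized two-period sketch (an illustrative simplification of my own, not the model in the paper): a declining state wins a war with probability p_now today but only p_later after the shift, fighting destroys `cost` of a one-unit flow for each side, and neither side can commit to future transfers.

```python
def declining_state_fights(p_now, p_later, cost, discount=0.95):
    """Stylized two-period commitment problem (illustrative, not
    Powell's model). Returns True if the declining state prefers
    preventive war now.

    Fighting now locks in a flow of p_now - cost in both periods.
    Under peace, the rising state concedes at most its own war value
    today (so the decliner gets up to p_now + cost), but once power
    has shifted it reneges and concedes only the decliner's diminished
    war value, p_later - cost.
    """
    war_value = (1 + discount) * (p_now - cost)
    peace_value = (p_now + cost) + discount * (p_later - cost)
    return war_value > peace_value

# Algebraically, war occurs iff discount * (p_now - p_later) > 2 * cost:
# bargaining breaks down only when the power shift is large and fast
# relative to the costs of fighting.
print(declining_state_fights(0.7, 0.3, 0.05))   # large shift -> True
print(declining_state_fights(0.6, 0.55, 0.05))  # small shift -> False
```

The comparison reproduces the quoted logic: a small or slow shift can always be bought off, while a large, rapid one cannot.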


Added to diary 15 March 2018

robert-recorde

Wherefore in all great works are Clerkes so much desired? Wherefore are Auditors so richly fed? What causeth Geometricians so highly to be enhaunsed? Why are Astronomers so greatly advanced? Because that by number such things they finde, which else would farre excell mans minde.

Robert Recorde, Arithmetic: or, The Ground of Arts, London, 1543, p. 34


Added to diary 31 March 2018

robert-wiblin

“So, what got you into optimising the universe?”

“Actually, when I was young, the issue affected me personally — my mother died of the suboptimality of the universe.”

Robert Wiblin, Facebook post, 17 June 2016


Added to diary 17 April 2018
  • Recent surveys of hundreds of thousands of people, in over 150 countries, show that richer people report being more satisfied with their lives overall, but that the richer you become, the more money you need to increase your satisfaction further. This is because people spend money on the most important things first. Someone earning $100,000 per year is only a little more satisfied than someone earning $50,000. The best available study found that each doubling of your income correlated with a life satisfaction 0.5 points higher on a scale of 1 to 10.
  • If you look at how ‘happy’ people say they are right now, the relationship is weaker. One large study found people in countries with average incomes of $32,000 were only 10% happier with their lives than those in countries with average incomes of just $2,000; another within the US could find no effect above a $40,000 income for a single person.
  • Moreover, some and maybe even most of this relationship is not causal. For example, healthier people will be both happier and capable of earning more. This means the effect of gaining extra money on your happiness is weaker than the above correlations suggest. Unfortunately, how much of the above relationships are caused by money making people happier is still not known with confidence. Once you get to an individual income of around $40,000, other factors, such as health, relationships and a sense of purpose, seem far more important than income.

Robert Wiblin, Everything you need to know about whether money makes you happy, 80,000 Hours blog, 2 March 2016
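The "each doubling of income correlated with about +0.5 points" relationship is a logarithmic curve, which can be sketched directly (a correlational figure, per the caveats above; the 0.5 slope is the post's own estimate):

```python
from math import log2

def satisfaction_gain(income, reference_income, points_per_doubling=0.5):
    """Extra life satisfaction (on a 1-10 scale) over a reference income,
    assuming each doubling of income adds `points_per_doubling` points."""
    return points_per_doubling * log2(income / reference_income)

print(satisfaction_gain(100_000, 50_000))  # one doubling   -> 0.5
print(satisfaction_gain(32_000, 2_000))    # four doublings -> 2.0
```

The logarithm is what makes each extra dollar buy less satisfaction: going from $2,000 to $32,000 spans four doublings, while the same $30,000 added to a $100,000 income is less than half a doubling.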


Added to diary 15 April 2018

robin-hanson

No matter how fast the economy grows, there remains a limited supply of sex and social status […].

Robin Hanson and Kevin Simler, The Elephant in the Brain, 2017


Added to diary 26 June 2018

[T]here’s a very real sense in which we are the Press Secretaries within our minds. In other words, the parts of the mind that we identify with, the parts we think of as our conscious selves (“I,” “myself,” “my conscious ego”), are the ones responsible for strategically spinning the truth for an external audience. […]

Body language also facilitates discretion by being less quotable to third parties, relative to spoken language. If Peter had explicitly told a colleague, “I want to get Jim fired,” the colleague could easily turn around and relay Peter’s agenda to others in the office. Similarly, if Peter had asked his flirting partner out for a drink, word might get back to his wife—in which case, bad news for Peter. […]

[S]peaking functions in part as an act of showing off. Speakers strive to impress their audience by consistently delivering impressive remarks. This explains how speakers foot the bill for the costs of speaking we discussed earlier: they’re compensated not in-kind, by receiving information reciprocally, but rather by raising their social value in the eyes (and ears) of their listeners. […]

Participants evaluate each other not just as trading partners, but also as potential allies. Speakers are eager to impress listeners by saying new and useful things, but the facts themselves can be secondary. Instead, it’s more important for speakers to demonstrate that they have abilities that are attractive in an ally. […]

But why do speakers need to be relevant in conversation? If speakers deliver high-quality information, why should listeners care whether the information is related to the current topic? A plausible answer is that it’s simply too easy to rattle off memorized trivia. You can recite random facts from the encyclopedia until you’re blue in the face, but that does little to advertise your generic facility with information. Similarly, when you meet someone for the first time, you’re more eager to sniff each other out for this generic skill, rather than to exchange the most important information each of you has gathered to this point in your lives. In other words, listeners generally prefer speakers who can impress them wherever a conversation happens to lead, rather than speakers who steer conversations to specific topics where they already know what to say. […]

In fact, patients show surprisingly little interest in private information on medical quality. For example, patients who would soon undergo a dangerous surgery (with a few percent chance of death) were offered private information on the (risk-adjusted) rates at which patients died from that surgery with individual surgeons and hospitals in their area. These rates were large and varied by a factor of three. However, only 8 percent of these patients were willing to spend even $50 to learn these death rates. Similarly, when the government published risk-adjusted hospital death rates between 1986 and 1992, hospitals with twice the risk-adjusted death rates saw their admissions fall by only 0.8 percent. In contrast, a single high-profile news story about an untoward death at a hospital resulted in a 9 percent drop in patient admissions at that hospital. […]

When John F. Kennedy described the space race with his famous speech in 1962, he dressed up the nation’s ambition in a suitably prosocial motive. “We set sail on this new sea,” he told the crowd, “because there is new knowledge to be gained, and new rights to be won, and they must be won and used for the progress of all people.” Everyone, of course, knew the subtext: “We need to beat the Russians!” In the end, our motives were less important than what we managed to achieve by them. We may be competitive social animals, self-interested and self-deceived, but we cooperated our way to the god-damned moon.

Robin Hanson and Kevin Simler, The Elephant in the Brain, 2017


Added to diary 26 June 2018

ruth-spinks

In the inspection-time task, a person looks at two vertical lines of unequal length and is asked to identify which of the two is longer. Inspection time is the length of time of stimulus presentation each individual needs in order to discriminate which of the two lines is the longest. Some research suggests that more-intelligent individuals are able to discriminate the lengths of the lines in shorter inspection times.

Robert J. Sternberg, Encyclopædia Britannica, Intelligence

This study replicated and extended Kranzler and Jensen’s [Intelligence 13 (1989) 329] meta-analysis of the relationship between inspection time (IT) and intelligence (IQ). […] meta-analyses were conducted on obtained correlations (r’s) between IT and general IQ. […] For the total sample (N>4100), the r was −.51 after correction for artifactual effects (−.30 prior to correction).

Jennifer L. Grudnik, John H. Kranzler, “Meta-analysis of the relationship between intelligence and inspection time”, Intelligence, Volume 29, Issue 6, November–December 2001, Pages 523-535, https://doi.org/10.1016/S0160-2896(01)00078-2

Neuropsychologists and cognitive researchers often need quick estimates of global cognitive functioning [i.e., intelligence quotient (IQ)]. […]

The current article examined 11 proxy measures to determine their level of agreement with WAIS-III FSIQ across the entire sample. [(Wechsler Adult Intelligence Scale [Third Edition] full scale IQ test)] […]

Measures evaluated for this study included the Ward-7ST short form developed by Ward and modified for the WAIS-III by Pilgrim et al. (1999), the NAART, the SILS, ITBS, the Barona and Crawford demographic regression formulae, and the five OPIE3 hybrids combining demographic and WAIS-III subtest information. The final estimate examined was the ITBS (Hoover et al., 2003), a nationally recognized standardized school achievement test.

The Pearson correlation and confidence interval between WAIS-III FSIQ and each proxy measure are shown in Table 2. Correlations ranged from r = .25 for the Barona estimate to r = .95 for the Ward-7ST short form.

The performance of the proxy measures across the different cognitive ability groups was examined next. […]

Above-Average IQ Group

[…] The Ward-7ST estimate was the only proxy to correlate above r = .70 for the high ability group. […]

The most important finding of this article is how poorly the IQ proxy measures performed at the tails of the IQ distribution. The proxy measures consistently overestimated the IQ of low-functioning individuals and underestimated the IQs of high-functioning individuals.

Spinks, McKirgan, Arndt, Caspers, Yucuis and Pfalzgraf (2009). IQ estimate smackdown: comparing IQ proxy measures to the WAIS-III. Journal of the International Neuropsychological Society. 15. 590-6. doi:10.1017/S1355617709090766.


Added to diary 27 June 2018

sam-harris

When we see a person walking down the street talking to himself, we generally assume that he is mentally ill (provided he is not wearing a headset of some kind). But we all talk to ourselves constantly—most of us merely have the good sense to keep our mouths shut. We rehearse past conversations—thinking about what we said, what we didn’t say, what we should have said. We anticipate the future, producing a ceaseless string of words and images that fill us with hope or fear. We tell ourselves the story of the present, as though some blind person were inside our heads who required continuous narration to know what is happening: “Wow, nice desk. I wonder what kind of wood that is. Oh, but it has no drawers. They didn’t put drawers in this thing? How can you have a desk without at least one drawer?” Who are we talking to? No one else is there. And we seem to imagine that if we just keep this inner monologue to ourselves, it is perfectly compatible with mental health. Perhaps it isn’t.

Sam Harris, Waking up (2014)


Added to diary 28 March 2018

samuel-rathmanner

Natural Turing Machines. The final issue is the choice of Universal Turing machine to be used as the reference machine. The problem is that there is still subjectivity involved in this choice since what is simple on one Turing machine may not be on another. More formally, it can be shown that for any arbitrarily complex string x as measured against the UTM U there is another UTM machine U′ for which x has Kolmogorov complexity 1. This result seems to undermine the entire concept of a universal simplicity measure but it is more of a philosophical nuisance which only occurs in specifically designed pathological examples. The Turing machine U′ would have to be absurdly biased towards the string x, which would require previous knowledge of x. The analogy here would be to hard-code some arbitrarily long complex number into the hardware of a computer system which is clearly not a natural design. To deal with this case we make the soft assumption that the reference machine is natural in the sense that no such specific biases exist. Unfortunately there is no rigorous definition of natural but it is possible to argue for a reasonable and intuitive definition in this context.

Rathmanner and Hutter 2011, Section 5.9 “Andrey Kolmogorov”

In section 2.4 we saw that Solomonoff’s prior is invariant under both reparametrization and regrouping, up to a multiplicative constant. But there is another form of language dependence, namely the choice of a universal Turing machine.

There are three principal responses to the threat of language dependence. First, one could accept it flat out, and admit that no language is better than any other. Second, one could admit that there is language dependence but argue that some languages are better than others. Third, one could deny language dependence, and try to show that there isn’t any.

For a defender of Solomonoff’s prior, I believe the second option is the most promising. If you accept language dependence flat out, why introduce universal Turing machines, incomputable functions, and other needlessly complicated things? And the third option is not available: there isn’t any way of getting around the fact that Solomonoff’s prior depends on the choice of universal Turing machine. Thus, we shall somehow try to limit the blow of the language dependence that is inherent to the framework. Williamson (2010) defends the use of a particular language by saying that an agent’s language gives her some information about the world she lives in. In the present framework, a similar response could go as follows. First, we identify binary strings with propositions or sensory observations in the way outlined in the previous section. Second, we pick a UTM so that the terms that exist in a particular agent’s language get low Kolmogorov complexity.

If the above proposal is unconvincing, the damage may be limited somewhat by the following result. Let K_U(x) be the Kolmogorov complexity of x relative to universal Turing machine U, and let K_T(x) be the Kolmogorov complexity of x relative to Turing machine T (which needn’t be universal). We have that K_U(x) ≤ K_T(x) + C_{U,T}. That is: the difference in Kolmogorov complexity relative to U and relative to T is bounded by a constant C_{U,T} that depends only on these Turing machines, and not on x. (See Li and Vitanyi (1997, p. 104) for a proof.) This is somewhat reassuring. It means that no other Turing machine can outperform U infinitely often by more than a fixed constant. But we want to achieve more than that. If one picks a UTM that is biased enough to start with, strings that intuitively seem complex will get a very low Kolmogorov complexity. As we have seen, for any string x it is always possible to find a UTM U such that K_U(x) = 1. If K_U(x) = 1, the corresponding Solomonoff prior will be at least 2^{−1} = 1/2. So for any binary string, it is always possible to find a UTM such that we assign that string prior probability greater than or equal to 1/2. Thus some way of discriminating between universal Turing machines is called for.

Vallinder 2012, Section 4.1 “Language dependence”
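The language dependence being debated here is easy to feel empirically. Kolmogorov complexity itself is incomputable, but off-the-shelf compressors give crude upper bounds, and two different compressors can play the role of two different reference machines: they disagree on absolute description lengths while broadly agreeing on what is simple. A sketch using Python's standard compressors (zlib and bz2 are my stand-ins here, not anything from the quoted papers):

```python
import bz2
import zlib

def c_zlib(s: bytes) -> int:
    """Description length of s in the 'zlib language' (a crude upper bound)."""
    return len(zlib.compress(s, 9))

def c_bz2(s: bytes) -> int:
    """Description length of s in the 'bz2 language' (a crude upper bound)."""
    return len(bz2.compress(s, 9))

simple = b"a" * 4096                  # highly regular string
varied = bytes(range(256)) * 16       # same length, much more varied

for s in (simple, varied):
    print(c_zlib(s), c_bz2(s))

# The two "machines" assign different absolute lengths, but both rank the
# regular string as far simpler. The invariance theorem bounds their
# disagreement on any string by a constant independent of the string.
```

The pathological UTMs in the quoted passages correspond to a "compressor" with the target string hard-coded into it, which is exactly the unnatural bias the soft assumption rules out.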


Added to diary 15 January 2018

sandip-sukhtankar

Public employment programs play a major role in the anti-poverty strategy of many developing countries. Besides the direct wages provided to the poor, such programs are likely to affect their welfare by changing broader labor market outcomes including wages and private employment. These general equilibrium effects may accentuate or attenuate the direct benefits of the program, but have been difficult to estimate credibly. We estimate the general equilibrium effects of a technological reform that improved the implementation quality of India’s public employment scheme on the earnings of the rural poor, using a large-scale experiment which randomized treatment across sub-districts of 60,000 people. We find that this reform had a large impact on the earnings of low-income households, and that these gains were overwhelmingly driven by higher private-sector earnings (90%) as opposed to earnings directly from the program (10%). These earnings gains reflect a 5.7% increase in market wages for rural unskilled labor, and a similar increase in reservation wages. We do not find evidence of distortions in factor allocation, including labor supply, migration, and land use. Our results highlight the importance of accounting for general equilibrium effects in evaluating programs, and also illustrate the feasibility of using large-scale experiments to study such effects.

Karthik Muralidharan, Paul Niehaus, and Sandip Sukhtankar, General Equilibrium Effects of (Improving) Public Employment Programs: Experimental Evidence from India, 2017

the cool thing is it mattered – study helped convince gov’t not to scrap the program, putting $100Ms annually into hands of the poor

@paulfniehaus, Twitter


Added to diary 15 January 2018

scott-aaronson

I’ll start with the “Muddy Children Puzzle,” which is one of the greatest logic puzzles ever invented.  How many of you have seen this one? OK, so the way it goes is, there are a hundred children playing in the mud.  Naturally, they all have muddy foreheads.  At some point their teacher comes along and says to them, as they all sit around in a circle: “stand up if you know your forehead is muddy.”  No one stands up.  For how could they know?  Each kid can see all the other 99 kids’ foreheads, so knows that they’re muddy, but can’t see his or her own forehead.  (We’ll assume that there are no mirrors or camera phones nearby, and also that this is mud that you don’t feel when it’s on your forehead.)

So the teacher tries again.  “Knowing that no one stood up the last time, now stand up if you know your forehead is muddy.”  Still no one stands up.  Why would they?  No matter how many times the teacher repeats the request, still no one stands up.

Then the teacher tries something new.  “Look, I hereby announce that at least one of you has a muddy forehead.”  After that announcement, the teacher again says, “stand up if you know your forehead is muddy”—and again no one stands up.  And again and again; it continues 99 times.  But then the hundredth time, all the children suddenly stand up.

(There’s a variant of the puzzle involving blue-eyed islanders who all suddenly commit suicide on the hundredth day, when they all learn that their eyes are blue—but as a blue-eyed person myself, that’s always struck me as needlessly macabre.)

What’s going on here?  Somehow, the teacher’s announcing to the children that at least one of them had a muddy forehead set something dramatic in motion, which would eventually make them all stand up—but how could that announcement possibly have made any difference?  After all, each child already knew that at least 99 children had muddy foreheads!

Like with many puzzles, the way to get intuition is to change the numbers. So suppose there were two children with muddy foreheads, and the teacher announced to them that at least one had a muddy forehead, and then asked both of them whether their own forehead was muddy. Neither would know. But each child could reason as follows: “if my forehead weren’t muddy, then the other child would’ve seen that, and would also have known that at least one of us has a muddy forehead. Therefore she would’ve known, when asked, that her own forehead was muddy. Since she didn’t know, that means my forehead is muddy.” So then both children know their foreheads are muddy, when the teacher asks a second time.

Now, this argument can be generalized to any (finite) number of children.  The crucial concept here is common knowledge.  We call a fact “common knowledge” if, not only does everyone know it, but everyone knows everyone knows it, and everyone knows everyone knows everyone knows it, and so on.  It’s true that in the beginning, each child knew that all the other children had muddy foreheads, but it wasn’t common knowledge that even one of them had a muddy forehead.  For example, if your forehead and mine are both muddy, then I know that at least one of us has a muddy forehead, and you know that too, but you don’t know that I know it (for what if your forehead were clean?), and I don’t know that you know it (for what if my forehead were clean?).

What the teacher’s announcement did, was to make it common knowledge that at least one child has a muddy forehead (since not only did everyone hear the announcement, but everyone witnessed everyone else hearing it, etc.). And once you understand that point, it’s easy to argue by induction: after the teacher asks and no child stands up (and everyone sees that no one stood up), it becomes common knowledge that at least two children have muddy foreheads (since if only one child had had a muddy forehead, that child would’ve known it and stood up). Next it becomes common knowledge that at least three children have muddy foreheads, and so on, until after a hundred rounds it’s common knowledge that everyone’s forehead is muddy, so everyone stands up.

The moral is that the mere act of saying something publicly can change the world—even if everything you said was already obvious to every last one of your listeners.  For it’s possible that, until your announcement, not everyone knew that everyone knew the thing, or knew everyone knew everyone knew it, etc., and that could have prevented them from acting.

Scott Aaronson, Common Knowledge and Aumann’s Agreement Theorem, Shtetl-Optimized, 16 August 2015
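The induction can be checked mechanically with a small possible-worlds simulation (a sketch written for illustration, not from the post). Worlds are assignments of muddy/clean foreheads; the teacher's announcement removes the all-clean world; each silent round publicly removes every world in which some child would already have known:

```python
from itertools import product

def muddy_children(actual):
    """Return (round, who stands) after the teacher's public announcement
    that at least one forehead is muddy. `actual` is a tuple of 0/1 flags
    (1 = muddy)."""
    n = len(actual)
    # Worlds still jointly considered possible after the announcement.
    worlds = [w for w in product((0, 1), repeat=n) if any(w)]
    rnd = 0
    while True:
        rnd += 1

        def knows(i, w):
            # In world w, child i cannot see its own forehead, so it
            # considers possible every remaining world that agrees with w
            # on everyone else's forehead.
            cands = [v for v in worlds
                     if all(v[j] == w[j] for j in range(n) if j != i)]
            return all(v[i] for v in cands)  # muddy in every candidate

        standing = [i for i in range(n) if knows(i, tuple(actual))]
        if standing:
            return rnd, standing
        # Everyone saw that no one stood: it becomes common knowledge that
        # no child knew, so prune every world in which one would have.
        worlds = [w for w in worlds
                  if not any(knows(i, w) for i in range(n))]

print(muddy_children((1, 1, 1)))  # three muddy children stand in round 3
```

Running it with k muddy children reproduces the argument: nobody stands until round k, when all and only the muddy children stand at once.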


Added to diary 27 June 2018

In my view, any assessment of Robin’s abrasive, tone-deaf, and sometimes even offensive intellectual style has to grapple with the fact that, over his career, Robin has originated not one but several hugely important ideas—and his ability to do so strikes me as clearly related to his style, not easily detachable from it. Most famously, Robin is one of the major developers of prediction markets, and also the inventor of futarchy—a proposed system of government that would harness prediction markets to get well-calibrated assessments of the effects of various policies. Robin also first articulated the concept of the Great Filter in the evolution of life in our universe. It’s Great Filter reasoning that tells us, for example, that if we ever discover fossil microbial life on Mars (or worse yet, simple plants and animals on extrasolar planets), then we should be terrified, because it would mean that several solutions to the Fermi paradox that don’t involve civilizations like ours killing themselves off would have been eliminated. Sure, once you say it, it sounds pretty obvious … but did you think of it?

Scott Aaronson, The Zeroth Commandment, Shtetl-Optimized, 6 May 2018


Added to diary 06 May 2018

scott-alexander

A Motte and Bailey castle is a medieval system of defence in which a stone tower on a mound (the Motte) is surrounded by an area of land (the Bailey) which in turn is encompassed by some sort of a barrier such as a ditch. Being dark and dank, the Motte is not a habitation of choice. The only reason for its existence is the desirability of the Bailey, which the combination of the Motte and ditch makes relatively easy to retain despite attack by marauders. When only lightly pressed, the ditch makes small numbers of attackers easy to defeat as they struggle across it: when heavily pressed the ditch is not defensible and so neither is the Bailey. Rather one retreats to the insalubrious but defensible, perhaps impregnable, Motte. Eventually the marauders give up, when one is well placed to reoccupy desirable land.

For my purposes the desirable but only lightly defensible territory of the Motte and Bailey castle, that is to say, the Bailey, represents a philosophical doctrine or position with similar properties: desirable to its proponent but only lightly defensible. The Motte is the defensible but undesired position to which one retreats when hard pressed. I think it is evident that Troll’s Truisms have the Motte and Bailey property, since the exciting falsehoods constitute the desired but indefensible region within the ditch whilst the trivial truth constitutes the defensible but dank Motte to which one may retreat when pressed.

An entire doctrine or theory may be a Motte and Bailey Doctrine just by virtue of having a central core of defensible but not terribly interesting or original doctrines surrounded by a region of exciting but only lightly defensible doctrines. Just as the medieval Motte was often constructed by the stonemason’s art from stone in the surrounding land, the Motte of dull but defensible doctrines is often constructed by the use of the sophist’s art from the desired but indefensible doctrines lying within the ditch.

Diagnosis of a philosophical doctrine as being a Motte and Bailey Doctrine is invariably fatal. Once made it is relatively obvious to those familiar with the doctrine that the doctrine’s survival required a systematic vacillation between exploiting the desired territory and retreating to the Motte when pressed.

The dialectic between many refutations of specific postmodernist doctrines and the postmodernist defences correspond exactly to the dynamics of Motte and Bailey Doctrines. When pressed with refutation the postmodernists retreat to their Mottes, only to venture out and repossess the desired territory when the refutation is not in immediate evidence. For these reasons, I think the proper diagnosis of postmodernism is precisely that it is a Motte and Bailey Doctrine.

Shackel, N. (2005). The vacuity of postmodernist methodology. Metaphilosophy, 36(3), 295-320.

So the motte-and-bailey doctrine is when you make a bold, controversial statement. Then when somebody challenges you, you claim you were just making an obvious, uncontroversial statement, so you are clearly right and they are silly for challenging you. Then when the argument is over you go back to making the bold, controversial statement.

Some classic examples:

  1. The religious group that acts for all the world like God is a supernatural creator who builds universes, creates people out of other people’s ribs, parts seas, and heals the sick when asked very nicely (bailey). Then when atheists come around and say maybe there’s no God, the religious group objects “But God is just another name for the beauty and order in the Universe! You’re not denying that there’s beauty and order in the Universe, are you?” (motte). Then when the atheists go away they get back to making people out of other people’s ribs and stuff.

  2. Or…”If you don’t accept Jesus, you will burn in Hell forever.” (bailey) But isn’t that horrible and inhuman? “Well, Hell is just another word for being without God, and if you choose to be without God, God will be nice and let you make that choice.” (motte) Oh, well that doesn’t sound so bad, I’m going to keep rejecting Jesus. “But if you reject Jesus, you will BURN in HELL FOREVER and your body will be GNAWED BY WORMS.” But didn’t you just… “Metaphorical worms of godlessness!”

  3. The feminists who constantly argue about whether you can be a real feminist or not without believing in X, Y and Z and wanting to empower women in some very specific way, and who demand everybody support controversial policies like affirmative action or affirmative consent laws (bailey). Then when someone says they don’t really like feminism very much, they object “But feminism is just the belief that women are people!” (motte) Then once the person hastily retreats and promises he definitely didn’t mean women aren’t people, the feminists get back to demanding everyone support affirmative action because feminism, or arguing about whether you can be a feminist and wear lipstick.

  4. Proponents of pseudoscience sometimes argue that their particular form of quackery will cure cancer or take away your pains or heal your crippling injuries (bailey). When confronted with evidence that it doesn’t work, they might argue that people need hope, and even a placebo solution will often relieve stress and help people feel cared for (motte). In fact, some have argued that quackery may be better than real medicine for certain untreatable diseases, because neither real nor fake medicine will help, but fake medicine tends to be more calming and has fewer side effects. But then once you leave the quacks in peace, they will go back to telling less knowledgeable patients that their treatments will cure cancer.

  5. Critics of the rationalist community note that it pushes controversial complicated things like Bayesian statistics and utilitarianism (bailey) under the name “rationality”, but when asked to justify itself defines rationality as “whatever helps you achieve your goals”, which is so vague as to be universally unobjectionable (motte). Then once you have admitted that more rationality is always a good thing, they suggest you’ve admitted everyone needs to learn more Bayesian statistics.

  6. Likewise, singularitarians who predict with certainty that there will be a singularity, because “singularity” just means “a time when technology is so different that it is impossible to imagine” – and really, who would deny that technology will probably get really weird (motte)? But then every other time they use “singularity”, they use it to refer to a very specific scenario of intelligence explosion, which is far less certain and needs a lot more evidence before you can predict it (bailey).

The motte and bailey doctrine sounds kind of stupid and hard-to-fall-for when you put it like that, but all fallacies sound that way when you’re thinking about them. More important, it draws its strength from people’s usual failure to debate specific propositions rather than vague clouds of ideas. If I’m debating “does quackery cure cancer?”, it might be easy to view that as a general case of the problem of “is quackery okay?” or “should quackery be illegal?”, and from there it’s easy to bring up the motte objection.

Scott Alexander, “All in all, another brick in the motte”, Slate Star Codex, 3 November 2014

Suppose I define socialism as, “a system of totalitarian control over the economy, leading inevitably to mass poverty and death.” As a detractor of socialism, this is superficially tempting. But it’s sheer folly, for two distinct reasons.

First, this plainly isn’t what most socialists mean by “socialism.” When socialists call for socialism, they’re rarely requesting totalitarianism, poverty, and death. And when non-socialists listen to socialists, that’s rarely what they hear, either.

Second, if you buy this definition, there’s no point studying actual socialist regimes to see if they in fact are “totalitarian” or “inevitably lead to mass poverty and death.” Mere words tell you what you need to know.

What’s the problem? The problem is that I’ve provided an argumentative definition of socialism. Instead of rigorously distinguishing between what we’re talking about and what we’re saying about it, an argumentative definition deliberately interweaves the two.

The hidden hope, presumably, is that if we control the way people use words, we’ll also control what people think about the world. And it is plainly possible to trick the naive using these semantic tactics. But the epistemic cost is high: You preemptively end conversation with anyone who substantively disagrees with you - and cloud your own thinking in the process. It’s far better to neutrally define socialism as, say, “Government ownership of most of the means of production,” or maybe, “The view that each nation’s wealth is justly owned collectively by its citizens.” You can quibble with these definitions, but people can accept either definition regardless of their position on socialism itself.

Modern discussions are riddled with argumentative definitions, but the most prominent instance, lately, is feminism. Google “feminism,” and what do you get? The top hit: “the advocacy of women’s rights on the basis of the equality of the sexes.” I’ve heard many variants on this: “the theory that men and women should be treated equally,” or even “the radical notion that women are people.”

What’s argumentative about these definitions? Well, in this 2016 Washington Post/Kaiser Family Foundation survey, 40% of women and 67% of men did not consider themselves “feminists.” But over 90% of both genders agreed that “men and women should be social, political, and economic equals.” If Google’s definition of feminism conformed to standard English usage, these patterns would make very little sense. Imagine a world where 90% of men say they’re “bachelors,” but only 40% say they’re “unmarried.”

Bryan Caplan, Against Argumentative Definitions: The Case of Feminism, EconLog, 20 February 2018


Added to diary 21 April 2018

sean-carroll

List of passages I highlighted in “The Big Picture”.

On ontology:

We will see how our best approach to describing the universe is not a single, unified story but an interconnected series of models appropriate at different levels. Each model has a domain in which it is applicable, and the ideas that appear as essential parts of each story have every right to be thought of as “real.” […]

While there is one world, there are many ways of talking about it. We refer to these ways as “models” or “theories” or “vocabularies” or “stories”; it doesn’t matter. Aristotle and his contemporaries weren’t just making things up; they told a reasonable story about the world they actually observed. Science has discovered another set of stories, harder to perceive but of greater precision and wider applicability. It’s not good enough that the stories succeed individually; they have to fit together. […]

The different stories or theories use utterly different vocabularies; they are different ontologies, despite describing the same underlying reality. In one we talk about the density, pressure, and viscosity of the fluid; in the other we talk about the position and velocity of all the individual molecules. […]

The case of fluid dynamics emerging from molecules is as simple as it gets. One theory can directly be obtained from the other by a process known as coarse-graining. […]

Typically—though not necessarily—the theory that has a wider domain of applicability will also be the one that is more computationally cumbersome. There tends to be a trade-off between comprehensiveness of a theory and its practicality. […]

There are several different questions here, which are related to one another but logically distinct. Are the most fine-grained (microscopic, comprehensive) stories the most interesting or important ones? As a research program, is the best way to understand macroscopic phenomena to first understand microscopic phenomena, and then derive the emergent description? Is there something we learn by studying the emergent level that we could not understand by studying the microscopic level, even if we were as smart as Laplace’s Demon? Is behavior at the macroscopic level incompatible—literally inconsistent with—how we would expect the system to behave if we knew only the microscopic rules? […]

(A similar view was put forward by Stephen Hawking and Leonard Mlodinow, under the label “model-dependent realism.”) […]

To evaluate a model of the world, the questions we need to ask include “Is it internally consistent?,” “Is it well-defined?,” and “Does it fit the data?” When we have multiple distinct theories that overlap in some regime, they had better be compatible with one another; otherwise they couldn’t both fit the data at the same time. The theories may involve utterly different kinds of concepts; one may have particles and forces obeying differential equations, and another may have human agents making choices. That’s fine, as long as the predictions of the theories line up in their overlapping domains of applicability. The success of one theory doesn’t mean that another one is wrong; that only happens when a theory turns out to be internally incoherent, or when it does a bad job at describing the observed phenomena. […]

“Causation,” which after all is itself a derived notion rather than a fundamental one, is best thought of as acting within individual theories that rely on the concept. Thinking of behavior in one theory as causing behavior in a completely different theory is the first step toward a morass of confusion from which it is difficult to extract ourselves. […]

The way we talk about human beings and their interactions is going to end up being less crisp and precise than our theories of elementary particles. It might be harmless, and even useful, to borrow terms from one story because they are useful in another one—“diseases are caused by microscopic germs” being an obvious example. Drawing relations between different vocabularies, such as when Boltzmann suggested that the entropy of a gas was related to the number of indistinguishable arrangements of the molecules of which it was composed, can be extremely valuable and add important insights. But if a theory is any good, it has to be able to speak sensibly about the phenomena it purports to describe all by itself, without leaning on causes being exerted to or from theories at different levels of focus. […]

On reductionism:

Galileo observed that Jupiter has moons, implying that it is a gravitating body just like the Earth. Isaac Newton showed that the force of gravity is universal, underlying both the motion of the planets and the way that apples fall from trees. John Dalton demonstrated how different chemical compounds could be thought of as combinations of basic building blocks called atoms. Charles Darwin established the unity of life from common ancestors. James Clerk Maxwell and other physicists brought together such disparate phenomena as lightning, radiation, and magnets under the single rubric of “electromagnetism.” Close analysis of starlight revealed that stars are made of the same kinds of atoms as we find here on Earth, with Cecilia Payne-Gaposchkin eventually proving that they are mostly hydrogen and helium. Albert Einstein unified space and time, joining together matter and energy along the way. Particle physics has taught us that every atom in the periodic table of the elements is an arrangement of just three basic particles: protons, neutrons, and electrons. Every object you have ever seen or bumped into in your life is made of just those three particles. We’re left with a very different view of reality from where we started. At a fundamental level, there aren’t separate “living things” and “nonliving things,” “things here on Earth” and “things up in the sky,” “matter” and “spirit.” There is just the basic stuff of reality, appearing to us in many different forms. How far will this process of unification and simplification go? It’s impossible to say for sure. But we have a reasonable guess, based on our progress thus far: it will go all the way. We will ultimately understand the world as a single, unified reality, not caused or sustained or influenced by anything outside itself. That’s a big deal. […]

On truth and falsity of models:

Consider a coffee cup sitting at rest on a table. It is in its natural state, in this case at rest. (Unless we were to pull the table out from beneath it, in which case it would naturally fall, but let’s not do that.) Now imagine we exert a violent motion, pushing the cup across the table. As we push it, it moves; when we stop, it returns to its natural state of rest. In order to keep it moving, we would have to keep pushing on it. As Aristotle says, “Everything that is in motion must be moved by something.” This is manifestly how coffee cups do behave in the real world. The difference between Galileo and Aristotle wasn’t that one was saying true things and the other was saying false things; it’s that the things Galileo chose to focus on turned out to be a useful basis for a more rigorous and complete understanding of phenomena beyond the original set of examples, in a way that Aristotle’s did not. […]

On simulation:

To simulate the entire universe with good accuracy, you basically have to be the universe. […]

On the arrow of time:

When a later event has great leverage over an earlier one, we call the latter a “record” of the former; when the earlier event has great leverage over a later one, we call the latter a “cause” of the former. […]

On quantum mechanics:

Physicists were forced to throw out what we mean by the “state” of a physical system—the complete description of its current situation—and replace it with something utterly different. What is worse, we had to reinvent an idea we thought was pretty straightforward: the concept of a measurement or observation. […]

On cells:

Keeping the cell membrane intact and robust turns out to be a kind of Bayesian reasoning. […]

On information:

As the universe evolves from this very specific configuration to increasingly generic ones, correlations between different parts of the universe develop very naturally. It becomes useful to say that one part carries information about another part. It’s just one of the many helpful ways we have of talking about the world at an emergent, macroscopic level. […]

On fine-tuning:

The fine-tuning argument plays by the rules of how we come to learn about the world. It takes two theories, naturalism and theism, and then tests them by making predictions and going out and looking at the world to test which prediction comes true. It’s the best argument we have for God’s existence. […]

naturalists need to face fine-tuning head-on. That means understanding what the universe is predicted to look like under both theism and naturalism, so that we can legitimately compare how our observations affect our credences. We’ll see that the existence of life provides, at best, a small boost to the probability that theism is true—while related features of the universe provide an extremely large boost for naturalism. […]

On the evolution of higher intelligence:

If you’ve spent much time swimming or diving, you know that you can’t see as far underwater as you can in air. The attenuation length—the distance past which light is mostly absorbed by the medium you are looking through—is tens of meters through clear water, while in air it’s practically infinite. (We have no trouble seeing the moon, or distant objects on our horizon.) What you can see has a dramatic effect on how you think. If you’re a fish, you move through the water at a meter or two per second, and you see some tens of meters in front of you. Every few seconds you are entering a new perceptual environment. As something new looms into your view, you have only a very brief amount of time in which to evaluate how to react to it. Is it friendly, fearsome, or foodlike? Under those conditions, there is enormous evolutionary pressure to think fast. See something, respond almost immediately. A fish brain is going to be optimized to do just that. Quick reaction, not leisurely contemplation, is the name of the game. Now imagine you’ve climbed up onto land. Suddenly your sensory horizon expands enormously. Surrounded by clear air, you can see for kilometers—much farther than you can travel in a couple of seconds. At first, there wasn’t much to see, since there weren’t any other animals up there with you. But there is food of different varieties, obstacles like rocks and trees, not to mention the occasional geological eruption. And before you know it, you are joined by other kinds of locomotive creatures. Some friendly, some tasty, some simply to be avoided. Now the selection pressures have shifted dramatically. Being simple-minded and reactive might be okay in some circumstances, but it’s not the best strategy on land. When you can see what’s coming long before you are forced to react, you have the time to contemplate different possible actions, and weigh the pros and cons of each. 
You can even be ingenious, putting some of your cognitive resources into inventing plans of action other than those that are immediately obvious. Out in the clear air, it pays to use your imagination. […]

On our righteous minds:

Kahneman compares System 2 to “a supporting character who believes herself to be the lead actor and often has little idea of what’s going on.” […]

On cognitive science:

The study of how we think and feel, not to mention how to think about who we are, is in its relative infancy. As neuroscientist and philosopher Patricia Churchland has put it, “We’re pre-Newton, pre-Kepler. We’re still sussing out that there are moons around Jupiter.” […]

On philosophy of mind:

Frank Jackson himself has subsequently repudiated the original conclusion of the knowledge argument. Like most philosophers, he now accepts that consciousness arises from purely physical processes: “Although I once dissented from the majority, I have capitulated,” he writes. Jackson believes that Mary the Color Scientist helps pinpoint our intuition about why conscious experience can’t be purely physical, but that this isn’t enough to qualify as a compelling argument for such a conclusion. The interesting task is to show how our intuition has led us astray—as, science keeps reminding us, it so often does. […]

Sean Carroll, The Big Picture: On the Origins of Life, Meaning, and the Universe Itself, 2016


Added to diary 21 April 2018

second-westminster-company

Finally, brethren, whatsoever things are true, whatsoever things are honest, whatsoever things are just, whatsoever things are pure, whatsoever things are lovely, whatsoever things are of good report; if there be any virtue, and if there be any praise, think on these things.

Philippians 4:8, King James Version


Added to diary 20 January 2018

shauna-lyon

Fried pork-belly skewers were eerily similar to corn dogs. Tamarind baby-back ribs were slick with grease but not sauce, and octopus with cilantro was far from tender. But robata-grilled yellowtail was fresh and juicy, and fried duck tongues were as cute as you’d want, crunchy and cheerful and dusted with chili powder. The squid-ink “pasta” noodles were made from fish—slippery and firm, they took well to bottarga. […]

Here is where Takayama’s influence is deeply felt, in perfect little pieces of nigiri or delicate temaki on crisp nori. The rice is pillowy, with just enough vinegar to provide counterpoint to the soft, silky slabs of mackerel, scallop, salmon, amberjack, even maitake mushroom. Perhaps a bit of cynicism can be detected in the uni-toro nigiri: it sounds good, but these two worshipped ingredients really don’t belong together; the metallic taste of the tuna belly overpowers the delicate sweetness of the sea urchin, plus, one piece costs sixteen dollars.

Shauna Lyon, Tetsu, Tables for Two, New Yorker Magazine, 26 February 2018


Added to diary 24 February 2018

Grilled Spanish mackerel sits in warm ponzu next to plums and “yolk jam”—a wonderfully pure, almost solid egg yolk, the texture attained, according to a server, by “cooking the yolk over low heat for a very long time.” Perfect little agnolotti ooze Fontina cheese and carrot butter; Proechel adds tender braised lamb neck and a dice of pickled squash and raw carrot to take it over the top.

And then there’s the côte de boeuf, aged for sixty days, “from Kansas,” the waiter repeats to each table, in various sizes, starting, on one recent night, at $183 for thirty-five ounces (including a hefty bone), with “all the fixings.” Unlike any fixings ever, these include black-garlic jam (if mahogany had a flavor it would be this), whipped buttermilk with charred cipollini onions (like a tart, zingy whipped cream, utterly delicious), a bowl of broth with bland unsalted potato dumplings and beef-fat-soaked croutons, and an addictive Brussels-sprout slaw.

Not everything works. There’s a reason you rarely see rutabaga; its sharpness is jarring next to perfectly seared duck breast. Beets with black-sesame tahini goes too dark.

Shauna Lyon, Ferris, Tables for Two, New Yorker Magazine, 4 December 2017


Added to diary 16 January 2018

stephan-arndt

In the inspection-time task, a person looks at two vertical lines of unequal length and is asked to identify which of the two is longer. Inspection time is the length of time of stimulus presentation each individual needs in order to discriminate which of the two lines is the longest. Some research suggests that more-intelligent individuals are able to discriminate the lengths of the lines in shorter inspection times.

Robert J. Sternberg, “Intelligence”, Encyclopædia Britannica

This study replicated and extended Kranzler and Jensen’s [Intelligence 13 (1989) 329] meta-analysis of the relationship between inspection time (IT) and intelligence (IQ). […] meta-analyses were conducted on obtained correlations (r’s) between IT and general IQ. […] For the total sample (N>4100), the r was −.51 after correction for artifactual effects (−.30 prior to correction).

Jennifer L. Grudnik, John H. Kranzler, “Meta-analysis of the relationship between intelligence and inspection time”, Intelligence, Volume 29, Issue 6, November–December 2001, Pages 523-535, https://doi.org/10.1016/S0160-2896(01)00078-2
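
The “correction for artifactual effects” in psychometric meta-analyses of this kind typically includes Spearman’s classic correction for attenuation due to measurement unreliability. A minimal sketch of that one component (my own illustration, not from the paper; the reliability values below are hypothetical, chosen only so that an observed r of −.30 disattenuates to roughly the reported −.51):

```python
import math

def disattenuate(r_observed, rel_x, rel_y):
    """Spearman's correction for attenuation: estimate the correlation
    between true scores, given an observed correlation and the
    reliabilities of the two measures."""
    return r_observed / math.sqrt(rel_x * rel_y)

# Hypothetical reliabilities for inspection time and IQ; an observed
# correlation of -.30 disattenuates to about -.51.
print(round(disattenuate(-0.30, 0.60, 0.58), 2))  # -0.51
```

Meta-analyses in this tradition generally correct for other artifacts as well (sampling error, range restriction), so this formula is only part of the adjustment reported above.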

Neuropsychologists and cognitive researchers often need quick estimates of global cognitive functioning [i.e., intelligence quotient (IQ)]. […]

The current article examined 11 proxy measures to determine their level of agreement with WAIS-III FSIQ across the entire sample. [i.e., the Wechsler Adult Intelligence Scale (Third Edition) Full Scale IQ test] […]

Measures evaluated for this study included the Ward-7ST short form developed by Ward and modified for the WAIS-III by Pilgrim et al. (1999), the NAART, the SILS, ITBS, the Barona and Crawford demographic regression formulae, and the five OPIE3 hybrids combining demographic and WAIS-III subtest information. The final estimate examined was the ITBS (Hoover et al., 2003), a nationally recognized standardized school achievement test.

The Pearson correlation and confidence interval between WAIS-III FSIQ and each proxy measure are shown in Table 2. Correlations ranged from r = .25 for the Barona estimate to r = .95 for the Ward-7ST short form.

The performance of the proxy measures across the different cognitive ability groups was examined next. […]

Above-Average IQ Group

[…] The Ward-7ST estimate was the only proxy to correlate above r = .70 for the high ability group. […]

The most important finding of this article is how poorly the IQ proxy measures performed at the tails of the IQ distribution. The proxy measures consistently overestimated the IQ of low-functioning individuals and underestimated the IQs of high-functioning individuals.

Spinks, McKirgan, Arndt, Caspers, Yucuis and Pfalzgraf, “IQ estimate smackdown: comparing IQ proxy measures to the WAIS-III”, Journal of the International Neuropsychological Society, 15, 590–596, 2009, doi:10.1017/S1355617709090766


Added to diary 27 June 2018

stephen-hsu

Many people lack standard cognitive tools useful for understanding the world around them. Perhaps the most egregious case: probability and statistics, which are central to understanding health, economics, risk, crime, society, evolution, global warming, etc. Very few people have any facility for calculating risk, visualizing a distribution, understanding the difference between the average, the median, variance, etc.

A remnant of the cold war era curriculum still in place in the US: if students learn advanced math it tends to be calculus, whereas a course on probability, statistics and thinking distributionally would be more useful. (I say this reluctantly, since I am a physical scientist and calculus is in the curriculum largely for its utility in fields related to mine.)

In the post below, blogger Mark Liberman (a linguist at Penn) notes that our situation parallels the absence of concepts for specific numbers (i.e., “ten”) among primitive cultures like the Piraha of the Amazon. We may find their condition amusing, or even sad. Personally, I find it tragic that leading public intellectuals around the world are mostly innumerate and don’t understand basic physics.

The Pirahã language and culture seem to lack not only the words but also the concepts for numbers, using instead less precise terms like “small size”, “large size” and “collection”. And the Pirahã people themselves seem to be surprisingly uninterested in learning about numbers, and even actively resistant to doing so, despite the fact that in their frequent dealings with traders they have a practical need to evaluate and compare numerical expressions. A similar situation seems to obtain among some other groups in Amazonia, and a lack of indigenous words for numbers has been reported elsewhere in the world.

Many people find this hard to believe. These are simple and natural concepts, of great practical importance: how could rational people resist learning to understand and use them? I don’t know the answer. But I do know that we can investigate a strictly comparable case, equally puzzling to me, right here in the U.S. of A.

Until about a hundred years ago, our language and culture lacked the words and ideas needed to deal with the evaluation and comparison of sampled properties of groups. Even today, only a minuscule proportion of the U.S. population understands even the simplest form of these concepts and terms. Out of the roughly 300 million Americans, I doubt that as many as 500 thousand grasp these ideas to any practical extent, and 50,000 might be a better estimate. The rest of the population is surprisingly uninterested in learning, and even actively resists the intermittent attempts to teach them, despite the fact that in their frequent dealings with social and biomedical scientists they have a practical need to evaluate and compare the numerical properties of representative samples.

[OK, perhaps 500k is an underestimate… Surely >1% of the population has been exposed to these ideas and remembers the main points?]

…Before 1900 or so, only a few mathematical geniuses like Gauss (1777-1855) had any real ability to deal with these issues. But even today, most of the population still relies on crude modes of expression like the attribution of numerical properties to prototypes (“A woman uses about 20,000 words per day while a man uses about 7,000”) or the comparison of bare-plural nouns (“men are happier than women”).

Sometimes, people are just avoiding more cumbersome modes of expression – “Xs are P-er than Ys” instead of (say) “The mean P measurement in a sample of Xs was greater than the mean P measurement in a sample of Ys, by an amount that would arise by chance fewer than once in 20 trials, assuming that the two samples were drawn from a single population in which P is normally distributed”. But I submit that even most intellectuals don’t really know how to think about the evaluation and comparison of distributions – not even simple univariate gaussian distributions, much less more complex situations. And many people who do sort of understand this, at some level, generally fall back on thinking (as well as talking) about properties of group prototypes rather than properties of distributions of individual characteristics.

Stephen Hsu, Bounded cognition, Information Processing, 9 October 2007
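
Liberman’s “cumbersome mode of expression” is, in effect, a two-sample significance test. A minimal sketch of the idea (my own illustration, not from the post; the data are invented) uses Welch’s t statistic, where |t| above roughly 2 corresponds to “would arise by chance fewer than once in 20 trials” under the usual normality assumptions:

```python
import math
import statistics

def welch_t(xs, ys):
    """Welch's t statistic: difference between two sample means,
    scaled by the standard error of that difference."""
    nx, ny = len(xs), len(ys)
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    vx, vy = statistics.variance(xs), statistics.variance(ys)
    return (mx - my) / math.sqrt(vx / nx + vy / ny)

# Invented measurements of some property P for two groups.
xs = [21.1, 19.8, 22.4, 20.3, 21.7, 20.9, 19.5, 22.0]
ys = [18.9, 20.1, 19.2, 18.4, 19.8, 18.7, 19.5, 19.0]

t = welch_t(xs, ys)
# |t| > ~2 is the usual "fewer than once in 20 trials" threshold.
print(round(t, 2), abs(t) > 2.0)
```

The point of the quoted passage survives the sketch: the comparison is between two sample distributions, with their variances in view, not between two prototypes.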


Added to diary 15 January 2018

steven-pinker

Thinking of language as an instinct inverts the popular wisdom, especially as it has been passed down in the canon of the humanities and social sciences. Language is no more a cultural invention than is upright posture. It is not a manifestation of a general capacity to use symbols: a three-year-old, we shall see, is a grammatical genius, but is quite incompetent at the visual arts, religious iconography, traffic signs, and the other staples of the semiotics curriculum. Though language is a magnificent ability unique to Homo sapiens among living species, it does not call for sequestering the study of humans from the domain of biology, for a magnificent ability unique to a particular living species is far from unique in the animal kingdom. Some kinds of bats home in on flying insects using Doppler sonar. Some kinds of migratory birds navigate thousands of miles by calibrating the positions of the constellations against the time of day and year. In nature’s talent show we are simply a species of primate with our own act, a knack for communicating information about who did what to whom by modulating the sounds we make when we exhale. Once you begin to look at language not as the ineffable essence of human uniqueness but as a biological adaptation to communicate information, it is no longer as tempting to see language as an insidious shaper of thought, and, we shall see, it is not. […]

To the broody hen the notion would probably seem monstrous that there should be a creature in the world to whom a nestful of eggs was not the utterly fascinating and precious and never-to-be-too-much-sat-upon object which it is to her. Thus we may be sure that, however mysterious some animals’ instincts may appear to us, our instincts will appear no less mysterious to them. And we may conclude that, to the animal which obeys it, every impulse and every step of every instinct shines with its own sufficient light, and seems at the moment the only eternally right and proper thing to do. What voluptuous thrill may not shake a fly, when she at last discovers the one particular leaf, or carrion, or bit of dung, that out of all the world can stimulate her ovipositor to its discharge? Does not the discharge then seem to her the only fitting thing? And need she care or know anything about the future maggot and its food? […]

The universality of complex language is a discovery that fills linguists with awe, and is the first reason to suspect that language is not just any cultural invention but the product of a special human instinct. Cultural inventions vary widely in their sophistication from society to society; within a society, the inventions are generally at the same level of sophistication. Some groups count by carving notches on bones and cook on fires ignited by spinning sticks in logs; others use computers and microwave ovens. Language, however, ruins this correlation. There are Stone Age societies, but there is no such thing as a Stone Age language. Earlier in this century the anthropological linguist Edward Sapir wrote, “When it comes to linguistic form, Plato walks with the Macedonian swineherd, Confucius with the headhunting savage of Assam.” […]

Behind such “simple” sentences as Where did he go? or The guy I met killed himself, used automatically by any English speaker, are dozens of subroutines that arrange the words to express the meaning. Despite decades of effort, no artificially engineered language system comes close to duplicating the person in the street, HAL and C3PO notwithstanding. But though the language engine is invisible to the human user, the trim packages and color schemes are attended to obsessively. Trifling differences between the dialect of the mainstream and the dialect of other groups, like isn’t any versus ain’t no, those books versus them books, and dragged him away versus drug him away, are dignified as badges of “proper grammar.” But they have no more to do with grammatical sophistication than the fact that people in some regions of the United States refer to a certain insect as a dragonfly and people in other regions refer to it as a darning needle, or that English speakers call canines dogs whereas French speakers call them chiens. It is even a bit misleading to call Standard English a “language” and these variations “dialects,” as if there were some meaningful difference between them. The best definition comes from the linguist Max Weinreich: a language is a dialect with an army and a navy. […]

Inside the educational and writing establishments, the rules survive by the same dynamic that perpetuates ritual genital mutilations and college fraternity hazing: I had to go through it and am none the worse, so why should you have it any easier? Anyone daring to overturn a rule by example must always worry that readers will think he or she is ignorant of the rule, rather than challenging it. (I confess that this has deterred me from splitting some splitworthy infinitives.) Perhaps most importantly, since prescriptive rules are so psychologically unnatural that only those with access to the right schooling can abide by them, they serve as shibboleths, differentiating the elite from the rabble.

When speakers of different languages have to communicate to carry out practical tasks but do not have the opportunity to learn one another’s languages, they develop a makeshift jargon called a pidgin. Pidgins are choppy strings of words borrowed from the language of the colonizers or plantation owners, highly variable in order and with little in the way of grammar. Sometimes a pidgin can become a lingua franca and gradually increase in complexity over decades, as in the “Pidgin English” of the modern South Pacific. (Prince Philip was delighted to learn on a visit to New Guinea that he is referred to in that language as fella belong Mrs. Queen.) But the linguist Derek Bickerton has presented evidence that in many cases a pidgin can be transmuted into a full complex language in one fell swoop: all it takes is for a group of children to be exposed to the pidgin at the age when they acquire their mother tongue. That happened, Bickerton has argued, when children were isolated from their parents and were tended collectively by a worker who spoke to them in the pidgin. Not content to reproduce the fragmentary word strings, the children injected grammatical complexity where none existed before, resulting in a brand-new, richly expressive language. The language that results when children make a pidgin their native tongue is called a creole. Bickerton’s main evidence comes from a unique historical circumstance. Though the slave plantations that spawned most creoles are, fortunately, a thing of the remote past, one episode of creolization occurred recently enough for us to study its principal players. Just before the turn of the century there was a boom in Hawaiian sugar plantations, whose demands for labor quickly outstripped the native pool. Workers were brought in from China, Japan, Korea, Portugal, the Philippines, and Puerto Rico, and a pidgin quickly developed. 
Many of the immigrant laborers who first developed that pidgin were alive when Bickerton interviewed them in the 1970s. Here are some typical examples of their speech: Me capé buy, me check make. Building—high place—wall pat—time—nowtime—an’ den—a new tempecha eri time show you. Good, dis one. Kaukau any-kin’ dis one. Pilipine islan’ no good. No mo money. From the individual words and the context, it was possible for the listener to infer that the first speaker, a ninety-two-year-old Japanese immigrant talking about his earlier days as a coffee farmer, was trying to say “He bought my coffee; he made me out a check.” But the utterance itself could just as easily have meant “I bought coffee; I made him out a check,” which would have been appropriate if he had been referring to his current situation as a store owner. The second speaker, another elderly Japanese immigrant, had been introduced to the wonders of civilization in Los Angeles by one of his many children, and was saying that there was an electric sign high up on the wall of the building which displayed the time and temperature. The third speaker, a sixty-nine-year-old Filipino, was saying “It’s better here than in the Philippines; here you can get all kinds of food, but over there there isn’t any money to buy food with.” (One of the kinds of food was “pfrawg,” which he caught for himself in the marshes by the method of “kank da head.”) In all these cases, the speaker’s intentions had to be filled in by the listener. The pidgin did not offer the speakers the ordinary grammatical resources to convey these messages—no consistent word order, no prefixes or suffixes, no tense or other temporal and logical markers, no structure more complex than a simple clause, and no consistent way to indicate who did what to whom. But the children who had grown up in Hawaii beginning in the 1890s and were exposed to the pidgin ended up speaking quite differently. 
Here are some sentences from the language they invented, Hawaiian Creole. The first two are from a Japanese papaya grower born in Maui; the next two, from a Japanese/Hawaiian ex-plantation laborer born on the big island; the last, from a Hawaiian motel manager, formerly a farmer, born in Kauai: Da firs japani came ran away from japan come. “The first Japanese who arrived ran away from Japan to here.” Some filipino wok o’he-ah dey wen’ couple ye-ahs in filipin islan’. “Some Filipinos who worked over here went back to the Philippines for a couple of years.” People no like t’come fo’ go wok. “People don’t want to have him go to work [for them].” One time when we go home inna night dis ting stay fly up. “Once when we went home at night this thing was flying about.” One day had pleny of dis mountain fish come down. “One day there were a lot of these fish from the mountains that came down [the river].” Do not be misled by what look like crudely placed English verbs, such as go, stay, and came, or phrases like one time. They are not haphazard uses of English words but systematic uses of Hawaiian Creole grammar: the words have been converted by the creole speakers into auxiliaries, prepositions, case markers, and relative pronouns. […]

Indeed, creoles are bona fide languages, with standardized word orders and grammatical markers that were lacking in the pidgin of the immigrants and, aside from the sounds of words, not taken from the language of the colonizers. […]

Until recently there were no sign languages at all in Nicaragua, because its deaf people remained isolated from one another. When the Sandinista government took over in 1979 and reformed the educational system, the first schools for the deaf were created. The schools focused on drilling the children in lip reading and speech, and as in every case where that is tried, the results were dismal. But it did not matter. On the playgrounds and schoolbuses the children were inventing their own sign system, pooling the makeshift gestures that they used with their families at home. Before long the system congealed into what is now called the Lenguaje de Signos Nicaragüense (LSN). Today LSN is used, with varying degrees of fluency, by young deaf adults, aged seventeen to twenty-five, who developed it when they were ten or older. Basically, it is a pidgin. Everyone uses it differently, and the signers depend on suggestive, elaborate circumlocutions rather than on a consistent grammar. But children like Mayela, who joined the school around the age of four, when LSN was already around, and all the pupils younger than her, are quite different. Their signing is more fluid and compact, and the gestures are more stylized and less like a pantomime. In fact, when their signing is examined close up, it is so different from LSN that it is referred to by a different name, Idioma de Signos Nicaragüense (ISN). LSN and ISN are currently being studied by the psycholinguists Judy Kegl, Miriam Hebe Lopez, and Annie Senghas. ISN appears to be a creole, created in one leap when the younger children were exposed to the pidgin signing of the older children—just as Bickerton would have predicted. ISN has spontaneously standardized itself; all the young children sign it in the same way. The children have introduced many grammatical devices that were absent in LSN, and hence they rely far less on circumlocutions. 
For example, an LSN (pidgin) signer might make the sign for “talk to” and then point from the position of the talker to the position of the hearer. But an ISN (creole) signer modifies the sign itself, sweeping it in one motion from a point representing the talker to a point representing the hearer. This is a common device in sign languages, formally identical to inflecting a verb for agreement in spoken languages. Thanks to such consistent grammar, ISN is very expressive. A child can watch a surrealistic cartoon and describe its plot to another child. The children use it in jokes, poems, narratives, and life histories, and it is coming to serve as the glue that holds the community together. A language has been born before our eyes. But ISN was the collective product of many children communicating with one another. If we are to attribute the richness of language to the mind of the child, we really want to see a single child adding some increment of grammatical complexity to the input the child has received. Once again the study of the deaf grants our wish. When deaf infants are raised by signing parents, they learn sign language in the same way that hearing infants learn spoken language. But deaf children who are not born to deaf parents—the majority of deaf children—often have no access to sign language users as they grow up, and indeed are sometimes deliberately kept from them by educators in the “oralist” tradition who want to force them to master lip reading and speech. (Most deaf people deplore these authoritarian measures.) When deaf children become adults, they tend to seek out deaf communities and begin to acquire the sign language that takes proper advantage of the communicative media available to them. But by then it is usually too late; they must then struggle with sign language as a difficult intellectual puzzle, much as a hearing adult does in foreign language classes. 
Their proficiency is notably below that of deaf people who acquired sign language as infants, just as adult immigrants are often permanently burdened with accents and conspicuous grammatical errors. Indeed, because the deaf are virtually the only neurologically normal people who make it to adulthood without having acquired a language, their difficulties offer particularly good evidence that successful language acquisition must take place during a critical window of opportunity in childhood. The psycholinguists Jenny Singleton and Elissa Newport have studied a nine-year-old profoundly deaf boy, to whom they gave the pseudonym Simon, and his parents, who are also deaf. Simon’s parents did not acquire sign language until the late ages of fifteen and sixteen, and as a result they acquired it badly. In ASL, as in many languages, one can move a phrase to the front of a sentence and mark it with a prefix or suffix (in ASL, raised eyebrows and a lifted chin) to indicate that it is the topic of the sentence. The English sentence Elvis I really like is a rough equivalent. But Simon’s parents rarely used this construction and mangled it when they did. For example, Simon’s father once tried to sign the thought My friend, he thought my second child was deaf. It came out as My friend thought, my second child, he thought he was deaf—a bit of sign salad that violates not only ASL grammar but, according to Chomsky’s theory, the Universal Grammar that governs all naturally acquired human languages (later in this chapter we will see why). Simon’s parents had also failed to grasp the verb inflection system of ASL. In ASL, the verb to blow is signed by opening a fist held horizontally in front of the mouth (like a puff of air). Any verb in ASL can be modified to indicate that the action is being done continuously: the signer superimposes an arclike motion on the sign and repeats it quickly. 
A verb can also be modified to indicate that the action is being done to more than one object (for example, several candles): the signer terminates the sign in one location in space, then repeats it but terminates it at another location. These inflections can be combined in either of two orders: blow toward the left and then toward the right and repeat, or blow toward the left twice and then blow toward the right twice. The first order means “to blow out the candles on one cake, then another cake, then the first cake again, then the second cake again”; the second means “to blow out the candles on one cake continuously, and then blow out the candles on another cake continuously.” This elegant set of rules was lost on Simon’s parents. They used the inflections inconsistently and never combined them onto a verb two at a time, though they would occasionally use the inflections separately, crudely linked with signs like then. In many ways Simon’s parents were like pidgin speakers. Astoundingly, though Simon saw no ASL but his parents’ defective version, his own signing was far better ASL than theirs. He understood sentences with moved topic phrases without difficulty, and when he had to describe complex videotaped events, he used the ASL verb inflections almost perfectly, even in sentences requiring two of them in particular orders. Simon must somehow have shut out his parents’ ungrammatical “noise.” He must have latched on to the inflections that his parents used inconsistently, and reinterpreted them as mandatory. And he must have seen the logic that was implicit, though never realized, in his parents’ use of two kinds of verb inflection, and reinvented the ASL system of superimposing both of them onto a single verb in a specific order. Simon’s superiority to his parents is an example of creolization by a single living child. Actually, Simon’s achievements are remarkable only because he is the first one who showed them to a psycholinguist. 
There must be thousands of Simons: ninety to ninety-five percent of deaf children are born to hearing parents. Children fortunate enough to be exposed to ASL at all often get it from hearing parents who themselves learned it, incompletely, to communicate with their children. Indeed, as the transition from LSN to ISN shows, sign languages themselves are surely products of creolization. Educators at various points in history have tried to invent sign systems, sometimes based on the surrounding spoken language. But these crude codes are always unlearnable, and when deaf children learn from them at all, they do so by converting them into much richer natural languages. […]

In contemporary middle-class American culture, parenting is seen as an awesome responsibility, an unforgiving vigil to keep the helpless infant from falling behind in the great race of life. The belief that Motherese is essential to language development is part of the same mentality that sends yuppies to “learning centers” to buy little mittens with bull’s-eyes to help their babies find their hands sooner. One gets some perspective by examining the folk theories about parenting in other cultures. The !Kung San of the Kalahari Desert in southern Africa believe that children must be drilled to sit, stand, and walk. They carefully pile sand around their infants to prop them upright, and sure enough, every one of these infants soon sits up on its own. We find this amusing because we have observed the results of the experiment that the San are unwilling to chance: we don’t teach our children to sit, stand, and walk, and they do it anyway, on their own schedule. But other groups enjoy the same condescension toward us. In many communities of the world, parents do not indulge their children in Motherese. In fact, they do not speak to their prelinguistic children at all, except for occasional demands and rebukes. This is not unreasonable. After all, young children plainly can’t understand a word you say. So why waste your breath in soliloquies? Any sensible person would surely wait until a child has developed speech and more gratifying two-way conversations become possible. As Aunt Mae, a woman living in the South Carolina Piedmont, explained to the anthropologist Shirley Brice Heath: “Now just how crazy is dat? White folks uh hear dey kids say sump’n, dey say it back to ’em, dey aks ’em ’gain and ’gain ’bout things, like they ’posed to be born knowin’.” […]

Here is another interview, this one between a fourteen-year-old girl called Denyse and the late psycholinguist Richard Cromer; the interview was transcribed and analyzed by Cromer’s colleague Sigrid Lipka.

I like opening cards. I had a pile of post this morning and not one of them was a Christmas card. A bank statement I got this morning! [A bank statement? I hope it was good news.] No it wasn’t good news. [Sounds like mine.] I hate…, My mum works over at the, over on the ward and she said “not another bank statement.” I said “it’s the second one in two days.” And she said “Do you want me to go to the bank for you at lunchtime?” and I went “No, I’ll go this time and explain it myself.” I tell you what, my bank are awful. They’ve lost my bank book, you see, and I can’t find it anywhere. I belong to the TSB Bank and I’m thinking of changing my bank ’cause they’re so awful. They keep, they keep losing…[someone comes in to bring some tea] Oh, isn’t that nice. [Uhm. Very good.] They’ve got the habit of doing that. They lose, they’ve lost my bank book twice, in a month, and I think I’ll scream. My mum went yesterday to the bank for me. She said “They’ve lost your bank book again.” I went “Can I scream?” and I went, she went “Yes, go on.” So I hollered. But it is annoying when they do things like that. TSB, Trustees aren’t…uh the best ones to be with actually. They’re hopeless.

I have seen Denyse on videotape, and she comes across as a loquacious, sophisticated conversationalist—all the more so, to American ears, because of her refined British accent. (My bank are awful, by the way, is grammatical in British, though not American, English.) It comes as a surprise to learn that the events she relates so earnestly are figments of her imagination. Denyse has no bank account, so she could not have received any statement in the mail, nor could her bank have lost her bankbook. Though she would talk about a joint bank account she shared with her boyfriend, she had no boyfriend, and obviously had only the most tenuous grasp of the concept “joint bank account” because she complained about the boyfriend taking money out of her side of the account. In other conversations Denyse would engage her listeners with lively tales about the wedding of her sister, her holiday in Scotland with a boy named Danny, and a happy airport reunion with a long-estranged father. But Denyse’s sister is unmarried, Denyse has never been to Scotland, she does not know anyone named Danny, and her father has never been away for any length of time. In fact, Denyse is severely retarded. She never learned to read or write and cannot handle money or any of the other demands of everyday functioning. Denyse was born with spina bifida (“split spine”), a malformation of the vertebrae that leaves the spinal cord unprotected. Spina bifida often results in hydrocephalus, an increase in pressure in the cerebrospinal fluid filling the ventricles (large cavities) of the brain, distending the brain from within. For reasons no one understands, hydrocephalic children occasionally end up like Denyse, significantly retarded but with unimpaired—indeed, overdeveloped—language skills. (Perhaps the ballooning ventricles crush much of the brain tissue necessary for everyday intelligence but leave intact some other portions that can develop language circuitry.)
The various technical terms for the condition include “cocktail party conversation,” “chatterbox syndrome,” and “blathering.” […]

If a language has only two color words, they are for black and white (usually encompassing dark and light, respectively). If it has three, they are for black, white, and red; if four, black, white, red, and either yellow or green. Five adds in both yellow and green; six, blue; seven, brown; more than seven, purple, pink, orange, or gray. But the clinching experiment was carried out in the New Guinea highlands with the Grand Valley Dani, a people speaking one of the black-and-white languages. The psychologist Eleanor Rosch found that the Dani were quicker at learning a new color category that was based on fire-engine red than a category based on an off-red. The way we see colors determines how we learn words for them, not vice versa. […]
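The implicational hierarchy in the opening sentences above can be read as a simple lookup rule. A minimal sketch (the encoding is mine, not from the text): given how many basic color terms a language has, it returns the colors that are predicted to be named for certain, plus any slot that is still an either/or choice.

```python
def predicted_terms(n: int) -> tuple[set[str], set[str]]:
    """For a language with n basic color words, return
    (colors named for certain, set from which one more is chosen),
    following the hierarchy described in the passage."""
    certain: set[str] = set()
    choice: set[str] = set()
    if n >= 2: certain |= {"black", "white"}
    if n >= 3: certain |= {"red"}
    if n == 4: choice = {"yellow", "green"}   # fourth term: either one
    if n >= 5: certain |= {"yellow", "green"} # five terms: both
    if n >= 6: certain |= {"blue"}
    if n >= 7: certain |= {"brown"}
    if n > 7:  choice = {"purple", "pink", "orange", "gray"}
    return certain, choice

# e.g. a four-term language: black, white, red, and either yellow or green
print(predicted_terms(4))
```

The point of encoding it this way is that the hierarchy is cumulative: each stage presupposes all the earlier ones, which is exactly what the chained `if n >= …` tests express.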

How might the combinatorial grammar underlying human language work? The most straightforward way to combine words in order is explained in Michael Frayn’s novel The Tin Men. The protagonist, Goldwasser, is an engineer working at an institute for automation. He must devise a computer system that generates the standard kinds of stories found in the daily papers, like “Paralyzed Girl Determined to Dance Again.” Here he is hand-testing a program that composes stories about royal occasions:

He opened the filing cabinet and picked out the first card in the set. Traditionally, it read. Now there was a random choice between cards reading coronations, engagements, funerals, weddings, comings of age, births, deaths, or the churching of women. The day before he had picked funerals, and been directed on to a card reading with simple perfection are occasions for mourning. Today he closed his eyes, drew weddings, and was signposted on to are occasions for rejoicing. The wedding of X and Y followed in logical sequence, and brought him a choice between is no exception and is a case in point. Either way there followed indeed. Indeed, whichever occasion one had started off with, whether coronations, deaths, or births, Goldwasser saw with intense mathematical pleasure, one now reached this same elegant bottleneck. He paused on indeed, then drew in quick succession it is a particularly happy occasion, rarely, and can there have been a more popular young couple. From the next selection, Goldwasser drew X has won himself/herself a special place in the nation’s affections, which forced him to go on to and the British people have cleverly taken Y to their hearts already. Goldwasser was surprised, and a little disturbed, to realise that the word “fitting” had still not come up. But he drew it with the next card—it is especially fitting that. This gave him the bride/bridegroom should be, and an open choice between of such a noble and illustrious line, a commoner in these democratic times, from a nation with which this country has long enjoyed a particularly close and cordial relationship, and from a nation with which this country’s relations have not in the past been always happy. Feeling that he had done particularly well with “fitting” last time, Goldwasser now deliberately selected it again. It is also fitting that, read the card, to be quickly followed by we should remember, and X and Y are not mere symbols—they are a lively young man and a very lovely young woman. 
Goldwasser shut his eyes to draw the next card. It turned out to read in these days when. He pondered whether to select it is fashionable to scoff at the traditional morality of marriage and family life or it is no longer fashionable to scoff at the traditional morality of marriage and family life. The latter had more of the form’s authentic baroque splendor, he decided.
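Goldwasser's filing cabinet is, in effect, a word-chain (finite-state) device: each card licenses a small set of cards that may follow it, and a story is a random walk through the chain until a terminal card is reached. A minimal sketch in Python; the transition table is my own invention, seeded with a few phrases from the excerpt, and is not Frayn's actual card set.

```python
import random

# Each "card" maps to the cards that may legally follow it.
# An empty successor list marks a terminal card.
CARDS: dict[str, list[str]] = {
    "START": ["Traditionally,"],
    "Traditionally,": ["coronations", "weddings", "funerals"],
    "coronations": ["are occasions for rejoicing."],
    "weddings": ["are occasions for rejoicing."],
    "funerals": ["are occasions for mourning."],
    "are occasions for rejoicing.": ["The wedding of X and Y"],
    "are occasions for mourning.": ["The wedding of X and Y"],
    "The wedding of X and Y": ["is no exception.", "is a case in point."],
    # Whichever occasion one starts with, the chain reaches the
    # same "elegant bottleneck," just as in the novel.
    "is no exception.": ["Indeed,"],
    "is a case in point.": ["Indeed,"],
    "Indeed,": ["it is a particularly happy occasion."],
    "it is a particularly happy occasion.": [],  # terminal card
}

def generate(rng: random.Random) -> list[str]:
    """Walk the card file from START until a terminal card."""
    phrase, out = "START", []
    while CARDS[phrase]:
        phrase = rng.choice(CARDS[phrase])
        out.append(phrase)
    return out

story = generate(random.Random(0))
print(" ".join(story))
```

A word-chain device like this captures local transitions between phrases but nothing more, which is why (as the chapter goes on to argue) it is far too weak a model for human grammar.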

[…]

The psychologist Laura Ann Petitto has a startling demonstration that the arbitrariness of the relation between a symbol and its meaning is deeply entrenched in the child’s mind. Shortly before they turn two, English-speaking children learn the pronouns you and me. Often they reverse them, using you to refer to themselves. The error is forgivable. You and me are “deictic” pronouns, whose referent shifts with the speaker: you refers to you when I use it but to me when you use it. So children may need some time to get that down. After all, Jessica hears her mother refer to her, Jessica, using you; why should she not think that you means “Jessica”? Now, in ASL the sign for “me” is a point to one’s chest; the sign for “you” is a point to one’s partner. What could be more transparent? One would expect that using “you” and “me” in ASL would be as foolproof as knowing how to point, which all babies, deaf and hearing, do before their first birthday. But for the deaf children Petitto studied, pointing is not pointing. The children used the sign of pointing to their conversational partners to mean “me” at exactly the age at which hearing children use the spoken sound you to mean “me.” The children were treating the gesture as a pure linguistic symbol; the fact that it pointed somewhere did not register as being relevant. […]

Moreover, it pays to give objects several labels in mentalese, designating different-sized categories like “cottontail rabbit,” “rabbit,” “mammal,” “animal,” and “living thing.” There is a tradeoff involved in choosing one category over another. It takes less effort to determine that Peter Cottontail is an animal than that he is a cottontail (for example, an animallike motion will suffice for us to recognize that he is an animal, leaving it open whether or not he is a cottontail). But we can predict more new things about Peter if we know he is a cottontail than if we merely know he is an animal. If he is a cottontail, he likes carrots and inhabits open country or woodland clearings; if he is merely an animal, he could eat anything and live anywhere, for all one knows. The middle-sized or “basic-level” category “rabbit” represents a compromise between how easy it is to label something and how much good the label does you. […]

What sense, then, can we make of the suggestion that images, numbers, kinship relations, or logic can be represented in the brain without being couched in words? In the first half of this century, philosophers had an answer: none. Reifying thoughts as things in the head was a logical error, they said. A picture or family tree or number in the head would require a little man, a homunculus, to look at it. And what would be inside his head—even smaller pictures, with an even smaller man looking at them? But the argument was unsound. It took Alan Turing, the brilliant British mathematician and philosopher, to make the idea of a mental representation scientifically respectable. Turing described a hypothetical machine that could be said to engage in reasoning. In fact this simple device, named a Turing machine in his honor, is powerful enough to solve any problem that any computer, past, present, or future, can solve. And it clearly uses an internal symbolic representation—a kind of mentalese—without requiring a little man or any occult processes. […]

The overall impression is that Universal Grammar is like an archetypal body plan found across vast numbers of animals in a phylum. For example, among all the amphibians, reptiles, birds, and mammals, there is a common body architecture, with a segmented backbone, four jointed limbs, a tail, a skull, and so on. The various parts can be grotesquely distorted or stunted across animals: a bat’s wing is a hand, a horse trots on its middle toes, whales’ forelimbs have become flippers and their hindlimbs have shrunken to invisible nubs, and the tiny hammer, anvil, and stirrup of the mammalian middle ear are jaw parts of reptiles. But from newts to elephants, a common topology of the body plan—the shin bone connected to the thigh bone, the thigh bone connected to the hip bone—can be discerned. Many of the differences are caused by minor variations in the relative timing and rate of growth of the parts during embryonic development. Differences among languages are similar. There seems to be a common plan of syntactic, morphological, and phonological rules and principles, with a small set of varying parameters, like a checklist of options. Once set, a parameter can have far-reaching changes on the superficial appearance of the language. […]

Some ancient tribe must have taken over most of Europe, Turkey, Iran, Afghanistan, Pakistan, northern India, western Russia, and parts of China. The idea has excited the imagination of a century of linguists and archeologists, though even today no one really knows who the Indo-Europeans were. Ingenious scholars have made guesses from the reconstructed vocabulary. Words for metals, wheeled vehicles, farm implements, and domesticated animals and plants suggest that the Indo-Europeans were a late Neolithic people. The ecological distributions of the natural objects for which there are Proto-Indo-European words—elm and willow, for example, but not olive or palm—have been used to place the speakers somewhere in the territory from inland northern Europe to southern Russia. Combined with words for patriarch, fort, horse, and weapons, the reconstructions led to an image of a powerful conquering tribe spilling out of an ancestral homeland on horseback to overrun most of Europe and Asia. The word “Aryan” became associated with the Indo-Europeans, and the Nazis claimed them as ancestors. More sanely, archeologists have linked them to artifacts of the Kurgan culture in the southern Russian steppes from around 3500 B.C., a band of tribes that first harnessed the horse for military purposes. […]

Since ears don’t move the way eyes do, the psychologists Peter Eimas and Peter Jusczyk devised a different way to see what a one-month-old finds interesting. They put a switch inside a rubber nipple and hooked up the switch to a tape recorder, so that when the baby sucked, the tape played. As the tape droned on with ba ba ba ba…, the infants showed their boredom by sucking more slowly. But when the syllables changed to pa pa pa…, the infants began to suck more vigorously, to hear more syllables. Moreover, they were using the sixth sense, speech perception, rather than just hearing the syllables as raw sound: two ba’s that differed acoustically from each other as much as a ba differs from a pa, but that are both heard as ba by adults, did not revive the infants’ interest. And infants must be recovering phonemes, like b, from the syllables they are smeared across. Like adults, they hear the same stretch of sound as a b if it appears in a short syllable and as a w if it appears in a long syllable. Infants come equipped with these skills; they do not learn them by listening to their parents’ speech. Kikuyu and Spanish infants discriminate English ba’s and pa’s, which are not used in Kikuyu or Spanish and which their parents cannot tell apart. English-learning infants under the age of six months distinguish phonemes used in Czech, Hindi, and Inslekampx (a Native American language), but English-speaking adults cannot, even with five hundred trials of training or a year of university coursework. Adult ears can tell the sounds apart, though, when the consonants are stripped from the syllables and presented alone as chirpy sounds; they just cannot tell them apart as phonemes. […]

By ten months they are no longer universal phoneticians but have turned into their parents; they do not distinguish Czech or Inslekampx phonemes unless they are Czech or Inslekampx babies. Babies make this transition before they produce or understand words, so their learning cannot depend on correlating sound with meaning. That is, they cannot be listening for the difference in sound between a word they think means bit and a word they think means beet, because they have learned neither word. […]

Between the late twos and the mid-threes, children’s language blooms into fluent grammatical conversation so rapidly that it overwhelms the researchers who study it, and no one has worked out the exact sequence. Sentence length increases steadily, and because grammar is a discrete combinatorial system, the number of syntactic types increases exponentially, doubling every month, reaching the thousands before the third birthday. You can get a feel for this explosion by seeing how the speech of a little boy called Adam grows in sophistication over the period of a year, starting with his early word combinations at the age of two years and three months (“2;3”):

2;3: Play checkers. Big drum. I got horn. A bunny-rabbit walk.

2;4: See marching bear go? Screw part machine. That busy bulldozer truck.

2;5: Now put boots on. Where wrench go? Mommy talking bout lady. What that paper clip doing?

2;6: Write a piece a paper. What that egg doing? I lost a shoe. No, I don’t want to sit seat.

2;7: Where piece a paper go? Ursula has a boot on. Going to see kitten. Put the cigarette down. Dropped a rubber band. Shadow has hat just like that. Rintintin don’t fly, Mommy.

2;8: Let me get down with the boots on. Don’t be afraid a horses. How tiger be so healthy and fly like kite? Joshua throw like a penguin.

2;9: Where Mommy keep her pocket book? Show you something funny. Just like turtle make mud pie.

2;10: Look at that train Ursula brought. I simply don’t want put in chair. You don’t have paper. Do you want little bit, Cromer? I can’t wear it tomorrow.

2;11: That birdie hopping by Missouri in bag. Do want some pie on your face? Why you mixing baby chocolate? I finish drinking all up down my throat. I said why not you coming in? Look at that piece a paper and tell it. Do you want me tie that round? We going turn light on so you can’t see.

3;0: I going come in fourteen minutes. I going wear that to wedding. I see what happens. I have to save them now. Those are not strong mens. They are going sleep in wintertime. You dress me up like a baby elephant.

3;1: I like to play with something else. You know how to put it back together. I gon’ make it like a rocket to blast off with. I put another one on the floor. You went to Boston University? You want to give me some carrots and some beans? Press the button and catch it, sir. I want some other peanuts. Why you put the pacifier in his mouth? Doggies like to climb up.

3;2: So it can’t be cleaned? I broke my racing car. Do you know the lights wents off? What happened to the bridge? When it’s got a flat tire it’s need a go to the station. I dream sometimes. I’m going to mail this so the letter can’t come off. I want to have some espresso. The sun is not too bright. Can I have some sugar? Can I put my head in the mailbox so the mailman can know where I are and put me in the mailbox? Can I keep the screwdriver just like a carpenter. […]
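The claim above that the number of syntactic types doubles every month is easy to sanity-check with back-of-the-envelope arithmetic (the starting inventory is my own assumption): nine monthly doublings separate 2;3 from 3;0, a factor of 512, which turns even a handful of early sentence types into thousands by the third birthday.

```python
# Toy check of the "doubling every month" growth claim.
start_types = 8        # assumed small inventory of sentence types at 2;3
months = 9             # from age 2;3 to age 3;0
total = start_types * 2 ** months
print(total)           # 4096 -- "reaching the thousands"
```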

If a person is asked to shadow someone else’s speech (repeat it as the talker is talking) and, simultaneously, to tap a finger to the right or the left hand, the person has a harder time tapping with the right finger than with the left, because the right finger competes with language for the resources of the left hemisphere. Remarkably, the psychologist Ursula Bellugi and her colleagues have shown that the same thing happens when deaf people shadow one-handed signs in American Sign Language: they find it harder to tap with their right finger than with their left finger. The gestures must be tying up the left hemispheres, but it is not because they are gestures; it is because they are linguistic gestures. When a person (either a signer or a speaker) has to shadow a goodbye wave, a thumbs-up sign, or a meaningless gesticulation, the fingers of the right hand and the left hand are slowed down equally. The study of aphasia in the deaf leads to a similar conclusion. Deaf signers with damage to their left hemispheres suffer from forms of sign aphasia that are virtually identical to the aphasia of hearing victims with similar lesions. [They] are unimpaired at nonlinguistic tasks that place similar demands on the eyes and hands, such as gesturing, pantomiming, recognizing faces, and copying designs. Injuries to the right hemisphere of deaf signers produce the opposite pattern: they remain flawless at signing but have difficulty performing visuospatial tasks, just like hearing patients with injured right hemispheres. […]

Just as crumpling a newspaper can appear to scramble the pictures and text, a side view of a brain is a misleading picture of which regions are adjacent. Gazzaniga’s coworkers have developed a technique that uses MRI pictures of brain slices to reconstruct what the person’s cortex would look like if somehow it could be unwrinkled into a flat sheet. […]

Some aphasics leave out verbs, inflections, and function words; others use the wrong ones. Some cannot comprehend complicated sentences involving traces (like The man who the woman kissed (trace) hugged the child) but can comprehend complex sentences involving reflexives (like The girl said that the woman washed herself). Other patients do the reverse. There are Italian patients who mangle their language’s inflectional suffixes (similar to the -ing, -s, and -ed of English) but are almost flawless with its derivational suffixes (similar to -able, -ness, and -er). The mental thesaurus, in particular, is sometimes torn into pieces with clean edges. Among anomic patients (those who have trouble using nouns), different patients have problems with different kinds of nouns. Some can use concrete nouns but not abstract nouns. Some can use abstract nouns but not concrete nouns. Some can use nouns for nonliving things but have trouble with nouns for living things; others can use nouns for living things but have trouble with nouns for nonliving things. Some can name animals and vegetables but not foods, body parts, clothing, vehicles, or furniture. There are patients who have trouble with nouns for anything but animals, patients who cannot name body parts, patients who cannot name objects typically found indoors, patients who cannot name colors, and patients who have trouble with proper names. One patient could not name fruits or vegetables: he could name an abacus and a sphinx but not an apple or a peach. The psychologist Edgar Zurif, mocking the neurologists’ habit of giving a fancy name to every syndrome, has suggested that it be called anomia for bananas, or “banananomia.” […]

Does this mean that the brain has a produce section? No one has found one, nor centers for inflections, traces, phonology, and so on. Pinning brain areas to mental functions has been frustrating. Frequently one finds two patients with lesions in the same general area but with different kinds of impairment, or two patients with the same impairment but lesions in different areas. Sometimes a circumscribed impairment, like the inability to name animals, can be caused by massive lesions, brain-wide degeneration, or a blow to the head. […]

What does natural selection do when faced with these tradeoffs? In general, it will favor an option with benefits to the young organism and costs to the old one over an option with the same average benefit spread out evenly over the life span. This asymmetry is rooted in the inherent asymmetry of death. If a lightning bolt kills a forty-year-old, there will be no fifty-year-old or sixty-year-old to worry about, but there will have been a twenty-year-old and a thirty-year-old. Any bodily feature designed for the benefit of the potential over-forty incarnations, at the expense of the under-forty incarnations, will have gone to waste. And the logic is the same for unforeseeable death at any age: the brute mathematical fact is that all things being equal, there is a better chance of being a young person than being an old person. So genes that strengthen young organisms at the expense of old organisms have the odds in their favor and will tend to accumulate over evolutionary timespans, whatever the bodily system, and the result is overall senescence.

Steven Pinker, The Language Instinct, 1994


Added to diary 26 June 2018

Another historian, Carlo Cipolla, noted:

In preindustrial Europe, the purchase of a garment or of the cloth for a garment remained a luxury the common people could only afford a few times in their lives. One of the main preoccupations of hospital administration was to ensure that the clothes of the deceased should not be usurped but should be given to lawful inheritors. During epidemics of plague, the town authorities had to struggle to confiscate the clothes of the dead and to burn them: people waited for others to die so as to take over their clothes—which generally had the effect of spreading the epidemic.

Steven Pinker, Enlightenment Now: The Case for Reason, Science, Humanism, and Progress, 2017


Added to diary 21 April 2018

The “natural affection” is far from automatic. Daly and Wilson, and later the anthropologist Edward Hagen, have proposed that postpartum depression and its milder version, the baby blues, are not a hormonal malfunction but the emotional implementation of the decision period for keeping a child. (Postpartum depression as adaptation: Hagen, 1999; Daly & Wilson, 1988, pp. 61–77.) Mothers with postpartum depression often feel emotionally detached from their newborns and may harbor intrusive thoughts of harming them. Mild depression, psychologists have found, often gives people a more accurate appraisal of their life prospects than the rose-tinted view we normally enjoy. The typical rumination of a depressed new mother—how will I cope with this burden?—has been a legitimate question for mothers throughout history who faced the weighty choice between a definite tragedy now and the possibility of an even greater tragedy later. As the situation becomes manageable and the blues dissipate, many women report falling in love with their baby, coming to see it as a uniquely wonderful individual.

Hagen examined the psychiatric literature on postpartum depression to test five predictions of the theory that it is an evaluation period for investing in a newborn. As predicted, postpartum depression is more common in women who lack social support (they are single, separated, dissatisfied with their marriage, or distant from their parents), who had had a complicated delivery or an unhealthy infant, and who were unemployed or whose husbands were unemployed. He found reports of postpartum depression in a number of non-Western populations which showed the same risk factors (though he could not find enough suitable studies of traditional kin-based societies). Finally, postpartum depression is only loosely tied to measured hormonal imbalances, suggesting that it is not a malfunction but a design feature.

Many cultural traditions work to distance people’s emotions from a newborn until its survival seems likely. People may be enjoined from touching, naming, or granting legal personhood to a baby until a danger period is over, and the transition is often marked by a joyful ceremony, as in our own customs of the christening and the bris. Some traditions have a series of milestones, such as traditional Judaism, which grants full legal personhood to a baby only after it has survived thirty days.

Steven Pinker, The Better Angels of Our Nature, Chapter 7


Added to diary 15 January 2018

We have seen that during periods of humanitarian reform, a recognition of the rights of one group can lead to a recognition of others by analogy, as when the despotism of kings was analogized to the despotism of husbands, and when two centuries later the civil rights movement inspired the women’s rights movement. The protection of abused children also benefited from an analogy—in this case, believe it or not, with animals.

In Manhattan in 1874, the neighbors of ten-year-old Mary Ellen McCormack, an orphan being raised by an adoptive mother and her second husband, noticed suspicious cuts and bruises on the girl’s body. They reported her to the Department of Public Charities and Correction, which administered the city’s jails, poorhouses, orphanages, and insane asylums. Since there were no laws that specifically protected children, the caseworker contacted the American Society for the Prevention of Cruelty to Animals. The society’s founder saw an analogy between the plight of the girl and the plight of the horses he rescued from violent stable owners. He engaged a lawyer who presented a creative interpretation of habeas corpus to the New York State Supreme Court and petitioned to have her removed from her home. The girl calmly testified:

Mamma has been in the habit of whipping and beating me almost every day. She used to whip me with a twisted whip—a rawhide. I have now on my head two black-and-blue marks which were made by Mamma with the whip, and a cut on the left side of my forehead which was made by a pair of scissors in Mamma’s hand…. I never dared speak to anybody, because if I did I would get whipped.

The New York Times reprinted the testimony in an article entitled “Inhumane Treatment of a Little Waif,” and the girl was removed from the home and eventually adopted by her caseworker. Her lawyer set up the New York Society for the Prevention of Cruelty to Children, the first protective agency for children anywhere in the world. Together with other agencies founded in its wake, it set up shelters for battered children and lobbied for laws that punished their abusive parents. Similarly, in England the first legal case to protect a child against an abusive parent was taken up by the Royal Society for the Prevention of Cruelty to Animals, and out of it grew the National Society for the Prevention of Cruelty to Children.

Steven Pinker, The Better Angels of Our Nature, Chapter 7


Added to diary 15 January 2018

John Maynard Smith, the biologist who first applied game theory to evolution, modeled this kind of standoff as a War of Attrition game. Each of two contestants competes for a valuable resource by trying to outlast the other, steadily accumulating costs as he waits. In the original scenario, they might be heavily armored animals competing for a territory who stare at each other until one of them leaves; the costs are the time and energy the animals waste in the standoff, which they could otherwise use in catching food or pursuing mates. A game of attrition is mathematically equivalent to an auction in which the highest bidder wins the prize and both sides have to pay the loser’s low bid. And of course it can be analogized to a war in which the expenditure is reckoned in the lives of soldiers.

The War of Attrition is one of those paradoxical scenarios in game theory (like the Prisoner’s Dilemma, the Tragedy of the Commons, and the Dollar Auction) in which a set of rational actors pursuing their interests end up worse off than if they had put their heads together and come to a collective and binding agreement. One might think that in an attrition game each side should do what bidders on eBay are advised to do: decide how much the contested resource is worth and bid only up to that limit. The problem is that this strategy can be gamed by another bidder. All he has to do is bid one more dollar (or wait just a bit longer, or commit another surge of soldiers), and he wins. He gets the prize for close to the amount you think it is worth, while you have to forfeit that amount too, without getting anything in return. You would be crazy to let that happen, so you are tempted to use the strategy “Always outbid him by a dollar,” which he is tempted to adopt as well. You can see where this leads. Thanks to the perverse logic of an attrition game, in which the loser pays too, the bidders may keep bidding after the point at which the expenditure exceeds the value of the prize. They can no longer win, but each side hopes not to lose as much. The technical term for this outcome in game theory is “a ruinous situation.” It is also called a “Pyrrhic victory”; the military analogy is profound.

One strategy that can evolve in a War of Attrition game (where the expenditure, recall, is in time) is for each player to wait a random amount of time, with an average wait time that is equivalent in value to what the resource is worth to them. In the long run, each player gets good value for his expenditure, but because the waiting times are random, neither is able to predict the surrender time of the other and reliably outlast him. In other words, they follow the rule: At every instant throw a pair of dice, and if they come up (say) 4, concede; if not, throw them again. This is, of course, like a Poisson process, and by now you know that it leads to an exponential distribution of wait times (since a longer and longer wait depends on a less and less probable run of tosses). Since the contest ends when the first side throws in the towel, the contest durations will also be exponentially distributed. Returning to our model where the expenditures are in soldiers rather than seconds, if real wars of attrition were like the “War of Attrition” modeled in game theory, and if all else were equal, then wars of attrition would fall into an exponential distribution of magnitudes.
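The mixed strategy described above can be simulated in a few lines. A minimal sketch, assuming an arbitrary per-tick concession probability; the geometric distribution of durations it produces is the discrete analogue of the exponential distribution the passage describes:

```python
import random

def contest_duration(p_concede=0.01, rng=random):
    """Each instant, each of two players 'throws the dice' and concedes
    with a small fixed probability. The contest ends when either player
    concedes; memorylessness makes its length geometrically distributed,
    the discrete analogue of the exponential."""
    t = 0
    while True:
        t += 1
        if rng.random() < p_concede or rng.random() < p_concede:
            return t

random.seed(0)
durations = [contest_duration() for _ in range(10_000)]
mean_duration = sum(durations) / len(durations)
tail_fraction = sum(d > 100 for d in durations) / len(durations)
print(mean_duration, tail_fraction)
```

With a per-tick stopping probability of roughly 2p, the mean contest length comes out near 1/(2p) = 50 ticks, and the fraction of contests lasting past twice the mean decays toward e^-2, the geometric tail the exponential argument predicts.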

Of course, real wars fall into a power-law distribution, which has a thicker tail than an exponential (in this case, a greater number of severe wars). But an exponential can be transformed into a power law if the values are modulated by a second exponential process pushing in the opposite direction. And attrition games have a twist that might do just that. If one side in an attrition game were to leak its intention to concede in the next instant by, say, twitching or blanching or showing some other sign of nervousness, its opponent could capitalize on the “tell” by waiting just a bit longer, and it would win the prize every time. As Richard Dawkins has put it, in a species that often takes part in wars of attrition, one expects the evolution of a poker face.

Now, one also might have guessed that organisms would capitalize on the opposite kind of signal, a sign of continuing resolve rather than impending surrender. If a contestant could adopt some defiant posture that means “I’ll stand my ground; I won’t back down,” that would make it rational for his opposite number to give up and cut its losses rather than escalate to mutual ruin. But there’s a reason we call it “posturing.” Any coward can cross his arms and glower, but the other side can simply call his bluff. Only if a signal is costly—if the defiant party holds his hand over a candle, or cuts his arm with a knife—can he show that he means business. (Of course, paying a self-imposed cost would be worthwhile only if the prize is especially valuable to him, or if he had reason to believe that he could prevail over his opponent if the contest escalated.)

In the case of a war of attrition, one can imagine a leader who has a changing willingness to suffer a cost over time, increasing as the conflict proceeds and his resolve toughens. His motto would be: “We fight on so that our boys shall not have died in vain.” This mindset, known as loss aversion, the sunk-cost fallacy, and throwing good money after bad, is patently irrational, but it is surprisingly pervasive in human decision-making. People stay in an abusive marriage because of the years they have already put into it, or sit through a bad movie because they have already paid for the ticket, or try to reverse a gambling loss by doubling their next bet, or pour money into a boondoggle because they’ve already poured so much money into it. Though psychologists don’t fully understand why people are suckers for sunk costs, a common explanation is that it signals a public commitment. The person is announcing: “When I make a decision, I’m not so weak, stupid, or indecisive that I can be easily talked out of it.” In a contest of resolve like an attrition game, loss aversion could serve as a costly and hence credible signal that the contestant is not about to concede, preempting his opponent’s strategy of outlasting him just one more round.

I already mentioned some evidence from Richardson’s dataset which suggests that combatants do fight longer when a war is more lethal: small wars show a higher probability of coming to an end with each succeeding year than do large wars. The magnitude numbers in the Correlates of War Dataset also show signs of escalating commitment: wars that are longer in duration are not just costlier in fatalities; they are costlier than one would expect from their durations alone. If we pop back from the statistics of war to the conduct of actual wars, we can see the mechanism at work. Many of the bloodiest wars in history owe their destructiveness to leaders on one or both sides pursuing a blatantly irrational loss-aversion strategy. Hitler fought the last months of World War II with a maniacal fury well past the point when defeat was all but certain, as did Japan. Lyndon Johnson’s repeated escalations of the Vietnam War inspired a protest song that has served as a summary of people’s understanding of that destructive war: “We were waist-deep in the Big Muddy; The big fool said to push on.”

The systems biologist Jean-Baptiste Michel has pointed out to me how escalating commitments in a war of attrition could produce a power-law distribution. All we need to assume is that leaders keep escalating as a constant proportion of their past commitment—the size of each surge is, say, 10 percent of the number of soldiers that have fought so far. A constant proportional increase would be consistent with the well-known discovery in psychology called Weber’s Law: for an increase in intensity to be noticeable, it must be a constant proportion of the existing intensity. (If a room is illuminated by ten lightbulbs, you’ll notice a brightening when an eleventh is switched on, but if it is illuminated by a hundred lightbulbs, you won’t notice the hundred and first; someone would have to switch on another ten bulbs before you noticed the brightening.) Richardson observed that people perceive lost lives in the same way: “Contrast for example the many days of newspaper-sympathy over the loss of the British submarine Thetis in time of peace with the terse announcement of similar losses during the war. This contrast may be regarded as an example of the Weber-Fechner doctrine that an increment is judged relative to the previous amount.” The psychologist Paul Slovic has recently reviewed several experiments that support this observation. The quotation falsely attributed to Stalin, “One death is a tragedy; a million deaths is a statistic,” gets the numbers wrong but captures a real fact about human psychology.

If escalations are proportional to past commitments (and a constant proportion of soldiers sent to the battlefield are killed in battle), then losses will increase exponentially as a war drags on, like compound interest. And if wars are attrition games, their durations will also be distributed exponentially. Recall the mathematical law that a variable will fall into a power-law distribution if it is an exponential function of a second variable that is distributed exponentially. My own guess is that the combination of escalation and attrition is the best explanation for the power-law distribution of war magnitudes.

Steven Pinker, The Better Angels of Our Nature, Chapter 5
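The mechanism the passage attributes to Jean-Baptiste Michel can be checked numerically. A sketch with invented growth and stopping rates: durations are exponentially distributed (constant chance of the war ending each instant), losses compound at a constant rate while it lasts, and substituting one exponential into the other gives a power-law tail P(M > m) = m^(-stop_rate/growth):

```python
import math
import random

random.seed(1)

def war_magnitude(growth=0.5, stop_rate=1.0, rng=random):
    """Duration is exponentially distributed (a constant chance of the war
    ending each instant); losses compound at a constant rate as it drags on
    (each escalation a fixed proportion of past commitment). Magnitude is
    then an exponential function of an exponential variable: a power law."""
    duration = rng.expovariate(stop_rate)
    return math.exp(growth * duration)

samples = [war_magnitude() for _ in range(100_000)]

def survival(m):
    """Fraction of simulated wars larger than m; theory predicts
    m ** (-stop_rate / growth), i.e. m ** -2 with these parameters."""
    return sum(x > m for x in samples) / len(samples)

for m in (2, 4, 8):
    print(m, survival(m), m ** -2.0)
```

The simulated survival fractions track the predicted m^-2 line: on log-log paper they would fall on a straight slope of -2, the signature of a power-law distribution.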

Though conventional terrorism, as John Kerry gaffed, is a nuisance to be policed rather than a threat to the fabric of life, terrorism with weapons of mass destruction would be something else entirely. The prospect of an attack that would kill millions of people is not just theoretically possible but consistent with the statistics of terrorism. The computer scientists Aaron Clauset and Maxwell Young and the political scientist Kristian Gleditsch plotted the death tolls of eleven thousand terrorist attacks on log-log paper and saw them fall into a neat straight line. Terrorist attacks obey a power-law distribution, which means they are generated by mechanisms that make extreme events unlikely, but not astronomically unlikely.

The trio suggested a simple model that is a bit like the one that Jean-Baptiste Michel and I proposed for wars, invoking nothing fancier than a combination of exponentials. As terrorists invest more time into plotting their attack, the death toll can go up exponentially: a plot that takes twice as long to plan can kill, say, four times as many people. To be concrete, an attack by a single suicide bomber, which usually kills in the single digits, can be planned in a few days or weeks. The 2004 Madrid train bombings, which killed around two hundred, took six months to plan, and 9/11, which killed three thousand, took two years. But terrorists live on borrowed time: every day that a plot drags on brings the possibility that it will be disrupted, aborted, or executed prematurely. If the probability is constant, the plot durations will be distributed exponentially. (Cronin, recall, showed that terrorist organizations drop like flies over time, falling into an exponential curve.) Combine exponentially growing damage with an exponentially shrinking chance of success, and you get a power law, with its disconcertingly thick tail. Given the presence of weapons of mass destruction in the real world, and religious fanatics willing to wreak untold damage for a higher cause, a lengthy conspiracy producing a horrendous death toll is within the realm of thinkable probabilities.

A statistical model, of course, is not a crystal ball. Even if we could extrapolate the line of existing data points, the massive terrorist attacks in the tail are still extremely (albeit not astronomically) unlikely. More to the point, we can’t extrapolate it. In practice, as you get to the tail of a power-law distribution, the data points start to misbehave, scattering around the line or warping it downward to very low probabilities. The statistical spectrum of terrorist damage reminds us not to dismiss the worst-case scenarios, but it doesn’t tell us how likely they are.

Steven Pinker, The Better Angels of Our Nature, Chapter 6
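The Clauset, Young, and Gleditsch mechanism described in this passage can be simulated the same way; the hazard and growth rates below are invented for illustration. Comparing the simulated death tolls against a plain exponential shows the "disconcertingly thick tail":

```python
import math
import random

random.seed(2)

HAZARD = 1.0   # chance per unit time that a plot is disrupted (illustrative)
GROWTH = 0.7   # rate at which potential deaths compound with planning time

def plot_deaths(rng=random):
    """Planning time survives a constant disruption hazard, so it is
    exponentially distributed; deaths grow exponentially with planning
    time. Their combination produces a power-law death toll."""
    planning_time = rng.expovariate(HAZARD)
    return math.exp(GROWTH * planning_time)

power_tolls = [plot_deaths() for _ in range(100_000)]
expo_tolls = [random.expovariate(1.0) for _ in range(100_000)]

threshold = 50
p_power = sum(x > threshold for x in power_tolls) / len(power_tolls)
p_expo = sum(x > threshold for x in expo_tolls) / len(expo_tolls)
print(p_power, p_expo)
```

Past the threshold, the exponential tail is effectively empty while the power-law tail still holds a measurable fraction of events: extreme attacks are unlikely, but not astronomically so.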



Added to diary 15 January 2018

the-economist-contributors

GENTRIFIER has surpassed many worthier slurs to become the dirtiest word in American cities. In the popular telling, hordes of well-to-do whites are descending upon poor, minority neighbourhoods that were made to endure decades of discrimination. With their avocado on toast, beard oil and cappuccinos, these people snuff out local culture. As rents rise, lifelong residents are evicted and forced to leave. In this view, the quintessential scene might be one witnessed in Oakland, California, where a miserable-looking homeless encampment rests a mere ten-minute walk from a Whole Foods landscaped with palm trees and bougainvillea, offering chia and flax seed upon entry. An ancient, sinister force lurks behind the overpriced produce. “‘Gentrification’ is but a more pleasing name for white supremacy,” wrote Ta-Nehisi Coates. It is “the interest on enslavement, the interest on Jim Crow, the interest on redlining, compounding across the years.”

This story is better described as an urban myth. The supposed ills of gentrification—which might be more neutrally defined as poorer urban neighbourhoods becoming wealthier—lack rigorous support. The most careful empirical analyses conducted by urban economists have failed to detect a rise in displacement within gentrifying neighbourhoods. Often, they find that poor residents are more likely to stay put if they live in these areas. At the same time, the benefits of gentrification are scarcely considered. Longtime residents reap the rewards of reduced crime and better amenities. Those lucky enough to own their homes come out richer. The left usually bemoans the lack of investment in historically non-white neighbourhoods, white flight from city centres and economic segregation. Yet gentrification straightforwardly reverses each of those regrettable trends.

The anti-gentrification brigades often cite anecdotes from residents forced to move. Yet the data suggest a different story. An influential study by Lance Freeman and Frank Braconi found that poor residents living in New York’s gentrifying neighbourhoods during the 1990s were actually less likely to move than poor residents of non-gentrifying areas. A follow-up study by Mr Freeman, using a nationwide sample, found scant association between gentrification and displacement. A more recent examination found that financially vulnerable residents in Philadelphia—those with low credit scores and no mortgages—are no more likely to move if they live in a gentrifying neighbourhood.

These studies undermine the widely held belief that for every horrid kale-munching millennial moving in, one longtime resident must be chucked out. The surprising result is explained by three underlying trends.

The first is that poor Americans are obliged to move very frequently, regardless of the circumstances of their district, as the Princeton sociologist Matthew Desmond so harrowingly demonstrated in his research on eviction. The second is that poor neighbourhoods have lacked investment for decades, and so have considerable slack in their commercial and residential property markets. A lot of wealthier city dwellers can thus move in without pushing out incumbent residents or businesses. “Given the typical pattern of low-income renter mobility in New York City, a neighbourhood could go from a 30% poverty population to 12% in as few as ten years without any displacement whatsoever,” noted Messrs Freeman and Braconi in their study. Indeed, the number of poor people living in New York’s gentrifying neighbourhoods barely budged from 1990 to 2014, according to a study by New York University’s Furman Centre. […]

Residents of gentrifying neighbourhoods who own their homes have reaped considerable windfalls. One black resident of Logan Circle, a residential district in downtown Washington, bought his home in 1993 for $130,000. He recently sold it for $1.6m. Businesses gain from having more customers, with more to spend. Having new shops, like well-stocked grocery stores, and sources of employment nearby can reduce commuting costs and time. Tax collection surges and so does political clout. Crime, already on the decline in American city centres, seems to fall even further in gentrifying neighbourhoods, as MIT economists observed after Cambridge, Massachusetts, undid its rent-control scheme.

Those who bemoan segregation and gentrification simultaneously risk contradiction. The introduction of affluent, white residents into poor, minority districts boosts racial and economic integration. […]

A second reason gentrification is disliked is culture. The argument is that the arrival of yuppie professionals sipping kombucha will alter the character of a place in an unseemly way. “Don’t Brooklyn my Detroit” T-shirts are now a common sight in Motor City. In truth, Detroit would do well with a bit more Brooklyn. Across big American cities, for every gentrifying neighbourhood ten remain poor. Opposing gentrification has become a way for people to display their anti-racist bona fides. This leads to the exaggerated equation of gentrification with white supremacy. Such objections parallel those made by white NIMBYs who fret that a new bus stop or apartment complex will bring people who might also alter the culture of their neighbourhood—for the worse.

The term gentrification has become tarred. But called by any other name—revitalisation, reinvestment, renaissance—it would smell sweet.

In praise of gentrification, The Economist, June 21st 2018


Added to diary 26 June 2018

The four biggest British supermarket chains all offer some form of price-match guarantee, promising that their customers could not save any money by shopping elsewhere. On the face of it, they seem like a good thing: a sign that fierce competition is lowering prices. But economists have long been suspicious of such promises, which can leave consumers worse off.

The problem is that price-match guarantees can blunt the logic of competition. Suppose a car dealership worries about a rival undercutting its prices and stealing customers. Even if the dealership can respond by cutting its prices too, it might lose sales in the interim. A price-match guarantee offers a pre-emptive defence. […]

There is no evidence that Britain’s current crop of price-match guarantees has hurt consumers. However, researchers have linked similar promises elsewhere to sustained high prices for groceries (Hess and Gerstner 1991), tyres (Arbatskaya et al. 1999a) and even shares (Edlin and Emch 1998). Wonks have confirmed the finding in the laboratory, too (Datta and Offenberg 2005). […]

Another mooted justification for the guarantees is price discrimination: selling to different types of consumers at different prices. For instance, if some customers are too busy to shop around, a firm can sell to them at a high price while using a guarantee to attract more price-sensitive, hassle-tolerating customers. This is great for profits, but sometimes benefits consumers too.

Finally, a price-match guarantee may be an attempt to signal genuinely lower prices and thus stand out from the crowd. That is probably how most consumers interpret them. This works only if there is a genuine difference in efficiency between rival stores, such that only one can afford to sell on the cheap. Then, the nimble firm might want a price war in order to speed the lumbering one’s demise. In such circumstances, any attempt by the lumbering firm to collude tacitly is futile; if it offers a guarantee, its bluff will be called. So when low-cost firms make such promises, consumers can take them as a sign of a competitive offering.

This does not seem to be what is happening in Britain, however. There, it is the more expensive supermarkets that are promising to match each other’s prices. Only one has pledged to match the deals on offer at Aldi and Lidl, nimble low-cost rivals that are making inroads into the market.

The Economist, Guaranteed profits, 12 February 2015


Added to diary 28 March 2018

For some British millennials, Monzo is as close to a cult as a bank can be. Its coral-pink cards are hard to miss. “People in bars will get very excited if they see you are a fellow Monzo user,” says Mr Matthews, who is 29.

The Economist, Attack of the minnows, 17 February 2018


Added to diary 19 February 2018

Around 40% of Japanese are still virgins at the age of 34, whereas 90% of men and women in America have had sex before turning 22.

The Economist, Seventh heaven at 7-Eleven, 17 February 2018


Added to diary 19 February 2018

A judge dismissed a case against Taylor Swift brought by two songwriters, who argued that the lyrics in her single, “Shake it Off”, infringed on their copyright. The judge ruled that the phrase “haters gonna hate” lacked “the modicum of originality and creativity required for copyright protection”, observing that American popular culture was already “heavily steeped in the concepts of players, haters and player haters”.

The Economist, The world this week, 17 February 2018


Added to diary 19 February 2018

thomas-chaney

Economists build a database from 4000-year-old clay tablets, plug it into a trade model, and use it to locate lost Bronze Age cities.

They had clay tablets from ancient merchants, saying things like:

(I paid) 6.5 shekels (of tin) from the Town of the Kanishites to Timelkiya. I paid 2 shekels of silver and 2 shekels of tin for the hire of a donkey from Timelkiya to Hurama. From Hurama to Kaneš I paid 4.5 shekels of silver and 4.5 shekels of tin for the hire of a donkey and a packer.

This allowed them to measure the amount of trade between any two cities.

Then they constructed a theoretical model in which the expected amount of trade between any two cities is inversely proportional to the distance between them. Given the data on the amount of trade and the locations of the known cities, they were able to estimate the lost locations:

As long as we have data on trade between known and lost cities, with sufficiently many known compared to lost cities, a structural gravity model is able to estimate the likely geographic coordinates of lost cities [….]

We build a simple Ricardian model of trade. Further imposing that bilateral trade frictions can be summarized by a power function of geographic distance, our model makes predictions on the number of transactions between city pairs, which is observed in our data. The model can be estimated solely on bilateral trade flows and on the geographic location of at least some cities.

Gojko Barjamovic, Thomas Chaney, Kerem A. Coşar, Ali Hortaçsu, Trade, Merchants, and the Lost Cities of the Bronze Age, 2017
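The estimation idea the abstract describes can be sketched with synthetic data. Everything below is invented for illustration (four known cities, one lost city, a distance elasticity alpha of 2); the real paper estimates a full structural gravity model, but the core move, picking the coordinates that make a gravity equation best fit the observed trade flows, looks like this:

```python
import math

# Invented example: four known cities and one "lost" city whose true
# coordinates are used only to fabricate the trade data.
known = {"A": (0.0, 0.0), "B": (4.0, 0.0), "C": (0.0, 3.0), "D": (5.0, 5.0)}
lost_true = (2.0, 2.0)
alpha = 2.0  # distance elasticity of trade (illustrative)

def dist(p, q):
    # Floor at a tiny value so log-distance is defined everywhere.
    return max(math.hypot(p[0] - q[0], p[1] - q[1]), 1e-9)

# "Observed" trade between the lost city and each known city:
# gravity says trade falls off as distance ** -alpha.
trade = {name: dist(lost_true, xy) ** -alpha for name, xy in known.items()}

def loss(p):
    """Squared error between observed log trade and the gravity prediction
    if the lost city sat at candidate point p."""
    return sum((math.log(trade[n]) + alpha * math.log(dist(p, xy))) ** 2
               for n, xy in known.items())

# Grid-search the coordinates that best reproduce the observed flows.
grid = [(x / 10, y / 10) for x in range(61) for y in range(61)]
estimate = min(grid, key=loss)
print(estimate)
```

With noiseless synthetic flows the grid search recovers the planted coordinates exactly; the paper's contribution is showing that real, noisy tablet counts still pin down lost cities when enough known cities anchor the system.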


Added to diary 15 January 2018

thomas-paine

I view things as they are, without regard to place or person; my country is the world, and my religion is to do good.

Thomas Paine, Rights of Man, Part 2.7 Chapter V, 1791


Added to diary 26 January 2018

Each of those churches show certain books, which they call revelation, or the word of God. The Jews say, that their word of God was given by God to Moses, face to face; the Christians say, that their word of God came by divine inspiration: and the Turks say, that their word of God (the Koran) was brought by an angel from Heaven. Each of those churches accuse the other of unbelief; and for my own part, I disbelieve them all.

Thomas Paine, The Age of Reason, Part I, 1794


Added to diary 26 January 2018

tim-harford

The elevator

One day, on her way to work, a woman decides that she’s going to take a mass-transit system instead of her usual method. Just before she gets on board, she looks at an app on her phone that gives her position with the exact latitude and longitude. The journey is smooth and perfectly satisfactory, despite frequent stops, and when the woman disembarks she checks her phone again. Her latitude and longitude haven’t changed at all. What’s going on? The answer: this lady works in a tall office building, and rather than taking the stairs, she’s taken the lift. We don’t tend to think of elevators as mass-transportation systems, but they are: they move hundreds of millions of people every day, and China alone is installing 700,000 elevators a year. The tallest building in the world, the Burj Khalifa in Dubai, has more than 300,000 square metres of floor space; the brilliantly engineered Sears Tower in Chicago has more than 400,000. Imagine such skyscrapers sliced into fifty or sixty low-rise chunks, then surrounding each chunk with a car park and connecting all the car parks together with roads, and you’d have an office park the size of a small town. The fact that so many people can work together in huge buildings on compact sites is possible only because of the elevator.

The index fund

Index funds now seem completely natural – part of the very language of investing. But as recently as 1976, they didn’t exist. Before you can have an index fund, you need an index. In 1884, a financial journalist called Charles Dow had the bright idea that he could take the price of some famous company stocks and average them, then publish the average going up and down. He ended up founding not only the Dow Jones company, but also the Wall Street Journal. […] And Samuelson went further: he said that since professional investors didn’t seem to be able to beat the market, somebody should set up an index fund – a way for ordinary people to invest in the performance of the stock market as a whole, without paying a fortune in fees for fancy professional fund managers to try, and fail, to be clever. At this point, something interesting happened: a practical businessman actually paid attention to what an academic economist had written. John Bogle had just founded a company called Vanguard, whose mission was to provide simple mutual funds for ordinary investors – no fuss, no fancy stuff, low fees. And what could be simpler and cheaper than an index fund – as recommended by the world’s most respected economist? And so Bogle decided he was going to make Paul Samuelson’s wish come true. He set up the world’s first index fund, and waited for investors to rush in.  

Epilogue: Light

Back in the mid-1990s the economist William Nordhaus conducted a series of simple experiments. One day, for example, he used a prehistoric technology: he lit a wood fire. Humans have been gathering and chopping and burning wood for tens of thousands of years. But Nordhaus also had a piece of high-tech equipment with him: a Minolta light meter. He burned 20 pounds of wood, kept track of how long it burned for and carefully recorded the dim, flickering firelight with his meter. Another day, Nordhaus bought a Roman oil lamp – a genuine antique, he was assured – fitted it with a wick, and filled it with cold-pressed sesame oil. He lit the lamp and watched the oil burn down, again using the light meter to measure its soft, even glow. Nordhaus’s open wood fire had burned for just three hours when fuelled with 20 pounds of wood. But a mere eggcup of oil burned all day, and more brightly and controllably. […] To see why [keeping track of inflation] is difficult, consider the price of travelling from – say – Lisbon in Portugal to Luanda in Angola. When the journey was first made by Portuguese explorers, it would have been an epic expedition, taking months. Later, by steam ship, it would have taken a few days. Then by plane, a few hours. An economic historian who wanted to measure inflation could start by tracking the price of passage on the steamer. But then, once an air route opens up, which price do you look at? Perhaps you simply switch to the airline ticket price once more people start flying than sailing. But flying is a different service – faster, more convenient. If more travellers are willing to pay twice as much to fly, it hardly makes sense for inflation statistics to record that the cost of the journey has suddenly doubled. How, then, do we measure inflation when what we’re able to buy changes so radically over time? 
[…] Because we don’t have a good way to compare an iPod today to a gramophone a century ago, we don’t really have a good way to quantify how much all the inventions described in this book have really expanded the choices available to us. We probably never will. But we can try – and Bill Nordhaus was trying as he fooled around with wood fires, antique oil lamps and Minolta light meters. He wanted to unbundle the cost of a single quality that humans have cared deeply about since time immemorial, using the state-of-the-art technology of different ages: illumination. That’s measured in lumens, or lumen-hours. A candle, for example, gives off 13 lumens while it burns; a typical modern light bulb is almost a hundred times as bright as that. […] Switch off a light bulb for an hour and you’re saving illumination that would have cost our ancestors all week to create. It would have taken Benjamin Franklin’s contemporaries all afternoon. But someone in a rich industrial economy today could earn the money to buy that illumination in a fraction of a second.

Tim Harford, Fifty inventions that shaped the modern economy

Here’s Nordhaus’ paper, “Do Real-Output and Real-Wage Measures Capture Reality?”


Added to diary 15 January 2018

tracy-chapman

This training ground for punks and thieves
Home of poor white retirees
Who didn’t bail
And couldn’t sell
When color made the grass less green

Tracy Chapman, “3000 Miles”, 13 September 2005


Added to diary 12 July 2018

vernon-smith

At the heart of economics is a scientific mystery: How is it that the pricing system accomplishes the world’s work without anyone being in charge? Like language, no one invented it. None of us could have invented it, and its operation depends in no way on anyone’s comprehension or understanding of it. Somehow, it is a product of culture; yet in important ways, the pricing system is what makes culture possible. Smash it in the command economy and it rises as a Phoenix with a thousand heads, as the command system becomes shot through with bribery, favors, barter and underground exchange. Indeed, these latter elements may prevent the command system from collapsing. No law and no police force can stop it, for the police may become as large a part of the problem as of the solution. The pricing system – How is order produced from freedom of choice? – is a scientific mystery as deep, fundamental, and inspiring as that of the expanding universe or the forces that bind matter. For to understand it is to understand something about how the human species got from hunting-gathering through the agricultural and industrial revolutions to a state of affluence that allows us to ask questions about the expanding universe, the weak and strong forces that bind particles, and the nature of the pricing system, itself.

Vernon L. Smith, Microeconomic Systems as an Experimental Science, The American Economic Review, Vol. 72, No. 5 (Dec., 1982), pp. 923-955


Added to diary 19 May 2018

wikipedia-contributors

A hyperforeignism is a type of qualitative hypercorrection that involves speakers misidentifying the distribution of a pattern found in loanwords and extending it to other environments, including words and phrases not borrowed from the language that the pattern derives from. The result of this process does not reflect the rules of either language. For example, habanero is sometimes pronounced as though it were spelled with an <ñ> (habañero), which is not the Spanish form from which the English word was borrowed. […]

A number of words of French origin feature a final <e> that is pronounced in English but silent in the original language. For example, forte (used to mean “strength” in English as in “not my forte”) is often pronounced /ˈfɔːrteɪ/ or /fɔːrˈteɪ/, by confusion with the Italian musical term of the same spelling (and same Latin origin, but meaning “loud”), which is pronounced [ˈfɔrte]. In French, the word “forte” is pronounced [fɔʁt], with silent final <e> […].

Wikipedia Contributors, Hyperforeignism


Added to diary 13 February 2019

The Debate between bird and fish is a literature essay of the Sumerian language, on clay tablets from the mid to late 3rd millennium BC. […]

The debate, in short summary

Fish speaks first

The initial speech of Fish:

“…Bird…there is no insult, ..! Croaking …noise in the marshes …squawking! Forever gobbling away greedily, while your heart is dripping with evil! Standing on the plain you can keep pecking away until they chase you off! The farmer’s sons lay lines and nets for you..(and continues)..You cause damage in the vegetable plots..(more)..Bird, you are shameless: you fill the courtyard with your droppings. The courtyard sweeper-boy who cleans the house chases after you…(etc)”

The 2nd and 3rd paragraphs continue:

“They bring you into the fattening shed. They let you moo like cattle, bleat like sheep. They pour out cool water in jugs for you. They drag you away for the daily sacrifice.” (the 2nd, 3rd paragraphs continue for several lines)

Bird’s initial retort

Bird replies:

“How has your heart become so arrogant, while you yourself are so lowly? Your mouth is flabby(?), but although your mouth goes all the way round, you can not see behind you. You are bereft of hips, as also of arms, hands and feet – try bending your neck to your feet! Your smell is awful; you make people throw-up; they sneer at you! ….”

Bird continues:

“But I am the beautiful and clever Bird! Fine artistry went into my adornment. But no skill has been expended on your holy shaping! Strutting about in the royal palace is my glory; my warbling is considered a decoration in the courtyard. The sound I produce, in all its sweetness, is a delight for the person of Culgi, son of Enlil….”

Šulgi rules in favor of Bird

After the initial speech and retort, Fish attacks Bird’s nest. Battle ensues between the two of them, in more words. Near the end Bird requests that Culgi decide in Bird’s favor:

Šulgi proclaims:

“To strut about in the E-kur is a glory for Bird, as its singing is sweet. At Enlil’s holy table, Bird …precedence over you …! “(lacuna)…Bird …Because Bird was victorious over Fish in the dispute between Fish and Bird, Father Enki be praised!”-(end of line 190, final line)

Wikipedia contributors, Debate between bird and fish

Seven major debates are known, with specific titles […]:

  • Debate between bird and fish
  • Debate between cattle and grain
  • Debate between the millstone and the gulgul-stone
  • Debate between the pickaxe and the plough
  • Debate between silver and mighty copper
  • Debate between Summer and Winter
  • Debate between tree and the reed

Wikipedia contributors, Sumerian disputations


Added to diary 05 April 2018

To reiterate, in just six days, The New York Times ran as many cover stories about Hillary Clinton’s emails as they did about all policy issues combined in the 69 days leading up to the election (and that does not include the three additional articles on October 18, and November 6 and 7, or the two articles on the emails taken from John Podesta). This intense focus on the email scandal cannot be written off as inconsequential: The Comey incident and its subsequent impact on Clinton’s approval rating among undecided voters could very well have tipped the election.

Duncan Watts and David Rothschild, Don’t blame the election on fake news. Blame it on the media, Columbia Journalism Review, 5 December 2017

Analyses by Columbia Journalism Review, the Berkman Klein Center for Internet and Society at Harvard University, and the Shorenstein Center at the Harvard Kennedy School show that the Clinton email controversy received more coverage in mainstream media outlets than any other topic during the 2016 presidential election.

Wikipedia contributors, Hillary Clinton email controversy


Added to diary 30 January 2018

Given the ubiquity of the wheel in human technology, and the existence of biological analogues of many other technologies (such as wings and lenses), the lack of wheels in the natural world would seem to demand explanation—and the phenomenon is broadly explained by two main factors. First, there are several developmental and evolutionary obstacles to the advent of a wheel by natural selection, addressing the question “Why can’t life evolve wheels?” Secondly, wheels are often at a competitive disadvantage when compared with other means of propulsion (such as walking, running, or slithering) in natural environments, addressing the question “If wheels could evolve, why might they be rare nonetheless?” This environment-specific disadvantage also explains why at least one historical civilization abandoned the wheel as a mode of transport. […]

Biological barriers to wheeled organisms

Richard Dawkins describes the matter: “The wheel may be one of those cases where the engineering solution can be seen in plain view, yet be unattainable in evolution because it lies [on] the other side of a deep valley, cutting unbridgeably across the massif of Mount Improbable.” In such a fitness landscape, wheels might sit on a highly favorable “peak”, but the valley around that peak may be too deep or wide for the gene pool to migrate across by genetic drift or natural selection. […]

The greatest anatomical impediment to wheeled multicellular organisms is the interface between the static and rotating components of the wheel. In either a passive or driven case, the wheel (and possibly axle) must be able to rotate freely relative to the rest of the machine or organism.[Note 2] Unlike animal joints, which have a limited range of motion, a wheel must be able to rotate through an arbitrary angle without ever needing to be “unwound”. As such, a wheel cannot be permanently attached to the axle or shaft about which it rotates (or, if the axle and wheel are fixed together, the axle cannot be affixed to the rest of the machine or organism). There are several functional problems created by this requirement. […]

In animals, motion is typically achieved by the use of skeletal muscles, which derive their energy from the metabolism of nutrients from food. Because these muscles are attached to both of the components that must move relative to each other, they are not capable of directly driving a wheel. […]

Another potential problem that arises at the interface between wheel and axle (or axle and body) is the limited ability of an organism to transfer materials across this interface. If the tissues that make up a wheel are living, they will need to be supplied with oxygen and nutrients and have wastes removed to sustain metabolism. […]

Disadvantages of wheels

Wheels incur mechanical and other disadvantages in certain environments and situations that would represent a decreased fitness when compared with limbed locomotion. These disadvantages suggest that, even barring the biological constraints discussed above, the absence of wheels in multicellular life may not be the “missed opportunity” of biology that it first seems. In fact, given the mechanical disadvantages and restricted usefulness of wheels when compared with limbs, the central question can be reversed: not “Why does nature not produce wheels?”, but rather, “Why do human vehicles not make more use of limbs?” The use of wheels rather than limbs in many engineered vehicles can likely be attributed to the complexity of design required to construct and control limbs, rather than to a consistent functional advantage of wheels over limbs. […]

Although stiff wheels are more energy efficient than other means of locomotion when traveling over hard, level terrain (such as paved roads), wheels are not especially efficient on soft terrain such as soil, because they are vulnerable to rolling resistance. In rolling resistance, a vehicle loses energy to the deformation of its wheels and the surface on which they are rolling. […]

When moving through a fluid, rotating systems carry an efficiency advantage only at extremely low Reynolds numbers (i.e. viscosity-dominated flows) such as those experienced by bacterial flagella, whereas oscillating systems have the advantage at higher (inertia-dominated) Reynolds numbers. Whereas ship propellers typically have efficiencies around 60% and aircraft propellers up to around 80% (achieving 88% in the human-powered Gossamer Condor), much higher efficiencies, in the range of 96%–98%, can be achieved with an oscillating flexible foil like a fish tail or bird wing.

Wheels are prone to slipping—an inability to generate traction—on loose or slippery terrain. Slipping wastes energy and can potentially lead to a loss of control or becoming stuck, as with an automobile on mud or snow. This limitation of wheels can be seen in the realm of human technology: in an example of biologically inspired engineering, legged vehicles find use in the logging industry, where they allow access to terrain too challenging for wheeled vehicles to navigate.

Wikipedia, Rotating locomotion in living systems


Added to diary 15 January 2018

william-butler-yeats

Things fall apart; the centre cannot hold;
Mere anarchy is loosed upon the world,
The blood-dimmed tide is loosed, and everywhere
The ceremony of innocence is drowned;
The best lack all conviction, while the worst
Are full of passionate intensity.

W. B. Yeats, The Second Coming, 1921


Added to diary 23 September 2018

william-child

‘In the use of words’, Wittgenstein writes in Philosophical Investigations, ‘one might distinguish “surface grammar” from “depth grammar” ’ (PI §664). Words that have the same ‘surface grammar’ may have very different ‘depth grammars’. And philosophical problems arise when we are misled by ‘certain analogies between the forms of expression in different regions of our language’ (PI §90) into thinking that the phenomena we are talking about when we use those expressions are similarly analogous (we will shortly consider an example). So there is a significant parallel between Wittgenstein’s early and later views about the source of philosophical problems. And there is a significant parallel in his view about the proper response to philosophical problems: we should not try to solve those problems by producing a philosophical theory; instead, we should dissolve them, by showing that they were not genuine problems at all. ‘We cannot give any answer to [philosophical questions]’, Wittgenstein writes in the Tractatus, ‘but can only point out that they are nonsensical’ (TLP: 4.003). Similarly, in Philosophical Investigations: ‘The results of philosophy are the discovery of some piece of plain nonsense and the bumps that the understanding has got by running up against the limits of language’ (PI §119). ‘What I want to teach’, he says, ‘is: to pass from unobvious nonsense to obvious nonsense’ (PI §464). ‘The clarity that we are aiming at is indeed complete clarity. But this simply means that the philosophical problems should completely disappear’ (PI §133).

William Child, Wittgenstein, 2011


Added to diary 19 January 2018

william-macaskill

On many issues, I find that people hold the following two views:

  • If many people did this thing, then change would happen.
  • But any individual person doesn’t make a difference.

Holding that combination of views is usually a mistake when we consider expected value.

Consider ethical consumption, like switching to fair-trade coffee, or reducing how much meat you buy. Suppose someone stops buying chicken breasts, instead choosing vegetarian options, in order to reduce the amount of animal suffering on factory farms. Does that person make a difference? You might think not. If one person decides against buying chicken breast one day but the rest of the meat eaters on the planet continue to buy chicken, how could that possibly affect how many chickens are killed for human consumption? When a supermarket decides how much chicken to buy, they don’t care that one fewer breast was purchased on a given day. However, if thousands or millions of people stopped buying chicken breasts, the number of chickens raised for food would decrease—supply would fall to meet demand. But then we’re left with a paradox: individuals can’t make a difference, but millions of individuals do. But the actions of millions of people are just the sum of the actions of many individual people. Moreover, an iron law of economics is that, in a well-functioning market, if demand for a product decreases, the quantity of the product that’s supplied decreases. How, then, can we reconcile these thoughts?

The answer lies with expected value. If you decline to buy some chicken breast, then most of the time you’ll make no difference: the supermarket will buy the same amount of chicken in the future. Sometimes, however, you will make a difference. Occasionally, the manager of the store will assess the number of chicken breasts bought by consumers and decide to decrease their intake of stock, even though they wouldn’t have done so had the number of chicken breasts bought been one higher. (Perhaps they follow a rule like: “If fewer than five thousand chicken breasts were bought this month, decrease stock intake.”) And when that manager does decide to decrease their stock intake, they will decrease stock by a large amount. Perhaps your decision against purchasing chicken breast will have an effect on the supermarket only one in a thousand times, but in that one time, the store manager will decide to purchase approximately one thousand fewer chicken breasts.
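The threshold logic above can be put in one line of arithmetic. A minimal sketch, using the passage’s own illustrative figures (a 1-in-1,000 chance of being the purchase that trips the manager’s rule, and a roughly 1,000-breast cut when it does):

```python
# Expected effect of forgoing one chicken-breast purchase under the
# threshold model described above. Both numbers are the passage's own
# illustrative figures, not empirical estimates.
p_pivotal = 1 / 1000   # chance your forgone purchase is the one that trips the rule
stock_cut = 1000       # breasts cut from the next order when the rule trips

expected_effect = p_pivotal * stock_cut
print(expected_effect)  # 1.0 -- on average, one fewer breast produced per breast forgone
```

The rare, large effect and the common, zero effect cancel out to an expected impact of about one for one.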

This isn’t just a theoretical argument. Economists have studied this issue [Norwood & Lusk 2011, p. 223] and worked out how, on average, a consumer affects the number of animal products supplied by declining to buy that product. They estimate that, on average, if you give up one egg, total production ultimately falls by 0.91 eggs; if you give up one gallon of milk, total production falls by 0.56 gallons. Other products are somewhere in between: economists estimate that if you give up one pound of beef, beef production falls by 0.68 pounds; if you give up one pound of pork, production ultimately falls by 0.74 pounds; if you give up one pound of chicken, production ultimately falls by 0.76 pounds.
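The empirical estimates quoted above can be read as expected-value multipliers. A minimal sketch, using only the Norwood & Lusk figures from the passage (the function name and dictionary are illustrative, not from the source):

```python
# Cumulative market response per unit a consumer gives up, as quoted
# above from Norwood & Lusk (2011): units of production ultimately
# forgone per unit of forgone consumption.
impact_per_unit = {
    "eggs": 0.91,     # eggs per egg
    "milk": 0.56,     # gallons per gallon
    "beef": 0.68,     # pounds per pound
    "pork": 0.74,     # pounds per pound
    "chicken": 0.76,  # pounds per pound
}

def expected_reduction(product: str, units_forgone: float) -> float:
    """Expected long-run fall in total production of `product`."""
    return impact_per_unit[product] * units_forgone

print(expected_reduction("chicken", 10))  # 7.6 -- pounds of chicken, in expectation
```

So giving up ten pounds of chicken reduces production by about 7.6 pounds in expectation, even though any single consumer’s effect is usually zero.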

This same reasoning can be applied when considering the value of participating in political rallies. Suppose there’s some policy that a group of people want to see implemented. Suppose everyone agrees that if no one attends a rally on this policy, the policy won’t go through, but if one million people show up, the policy will go through. What difference do you make by showing up at this rally? You’re just one body among thousands of others—surely the difference you make is negligible. Again, the solution is to think in terms of expected value. The chance of you being the person who makes the difference is very small, but if you do make the difference, it will be very large indeed. This isn’t just a speculative model. Professors of political science at Harvard and Stockholm Universities analyzed Tea Party rallies held on Tax Day, April 15, 2009 [Madestam, Shoag, Veuger & Yanagizawa-Drott 2013]. They used the weather in different constituencies as a natural experiment: if the weather was bad on the day of a rally, fewer people would show up. This allowed them to assess whether increased numbers of people at a rally made a difference to how influential the rally was. They found that policy was significantly influenced by those rallies that attracted more people, and that the larger the rally, the greater the degree to which those protestors’ representatives in Congress voted conservatively.

William MacAskill, Doing Good Better, 2015


Added to diary 15 April 2018

Emphasis is mine.

As I and the Centre for Effective Altruism define it, effective altruism is the project of using evidence and reason to figure out how to benefit others as much as possible, and taking action on that basis.

On this definition, effective altruism is an intellectual and practical project rather than a normative claim, in the same way that science is an intellectual and practical project rather than a body of any particular normative and empirical claims. Its aims are welfarist, impartial, and maximising: effective altruists aim to maximise the wellbeing of all, where (on some interpretation) everyone counts for one, and no-one for more than one. But it is not a mere restatement of consequentialism: it does not claim that one is always obligated to maximise the good, impartially considered, with no room for one’s personal projects; and it does not claim that one is permitted to violate side-constraints for the greater good.

Effective altruism is an idea with a community built around it. That community champions certain values that aren’t part of the definition of effective altruism per se. These include serious commitment to benefiting others, with many members of the community pledging to donate at least 10% of their income to charity; scientific mindset, and willingness to change one’s mind in light of new evidence or argument; openness to many different cause-areas, such as extreme poverty, farm animal welfare, and risks of human extinction; integrity, with a strong commitment to honesty and transparency; and a collaborative spirit, with an unusual level of cooperation between people with different moral projects.

William MacAskill, “Effective Altruism: Introduction”, Essays in Philosophy: Vol. 18, Iss. 1, Article 1. doi:10.7710/1526-0569.1580


Added to diary 20 March 2018

william-shakespeare

There are more things in heaven and earth, Horatio,
Than are dreamt of in your philosophy.

William Shakespeare, Hamlet, Act I, Scene V


Added to diary 18 March 2018

KING HENRY:

Once more unto the breach, dear friends, once more;
Or close the wall up with our English dead!
In peace there’s nothing so becomes a man,
As modest stillness and humility;
But when the blast of war blows in our ears,
Then imitate the action of the tiger:
Stiffen the sinews, conjure up the blood,
Disguise fair nature with hard-favoured rage:
Then lend the eye a terrible aspect;
Let it pry through the portage of the head,
Like the brass cannon; let the brow o’erwhelm it
As fearfully as doth a galled rock
O’erhang and jutty his confounded base,
Swill’d with the wild and wasteful ocean.

William Shakespeare, Henry V, Act III, Scene I


Added to diary 15 January 2018

woody-allen