India’s largest social protection program, the Mahatma Gandhi National Rural Employment Guarantee Scheme (MGNREGS), guarantees households 100 days of work per year, typically in unskilled manual labor on infrastructure projects. For MGNREGS, the central government disburses funds to local governments based on projected spending, a system that has extensive delays and leakages. In fiscal year 2016-17, the Indian government spent over US$6 billion on the program and reached 74 million beneficiaries. In Bihar, J-PAL affiliates tested the impact of an information technology reform that linked the flow of funds to actual expenditures and reduced the number of officials involved in the process. The reform led to a 24 percent decline in expenditure without a detectable decline in employment or assets created, and there is direct evidence that at least part of the decline was due to a reduction in fund leakage. […] Informed by the results of the randomized evaluation, India’s Union Cabinet approved a national reform of MGNREGS fund flow. The reform allows beneficiary payments across all Indian states to be made through a newly established National Electronic Fund Management System (Ne-FMS). As detailed below, in the Bihar study MGNREGS funds flowed directly from a state government account to village councils (called panchayats), which in turn paid the beneficiaries. […] The Cabinet note cites J-PAL’s evaluation as part of the rationale for this decision. […] The evaluation, conducted between September 2012 and March 2013, spanned twelve districts in Bihar, covering a rural population of 33 million.

Abdul Latif Jameel Poverty Action Lab (J-PAL), Fund-flow reforms for improved social program delivery in India

Added to diary 21 March 2018

# abraham-lincoln

Four score and seven years ago our fathers brought forth on this continent, a new nation, conceived in Liberty, and dedicated to the proposition that all men are created equal.
Now we are engaged in a great civil war, testing whether that nation, or any nation so conceived and so dedicated, can long endure. We are met on a great battle-field of that war. We have come to dedicate a portion of that field, as a final resting place for those who here gave their lives that that nation might live. It is altogether fitting and proper that we should do this. But, in a larger sense, we can not dedicate—we can not consecrate—we can not hallow—this ground. The brave men, living and dead, who struggled here, have consecrated it, far above our poor power to add or detract. The world will little note, nor long remember what we say here, but it can never forget what they did here. It is for us the living, rather, to be dedicated here to the unfinished work which they who fought here have thus far so nobly advanced. It is rather for us to be here dedicated to the great task remaining before us—that from these honored dead we take increased devotion to that cause for which they gave the last full measure of devotion—that we here highly resolve that these dead shall not have died in vain—that this nation, under God, shall have a new birth of freedom—and that government of the people, by the people, for the people, shall not perish from the earth.

Abraham Lincoln, Gettysburg Address, 19 November 1863 [Alexander Bliss version]

Added to diary 20 January 2018

# agence-france-presse-contributors

The Kingdom of Tonga has admitted to losing millions of dollars that it made selling passports to Asians after an American whom the king appointed as his “court jester” invested the money in a mysterious company that later disappeared. In the last week, two cabinet ministers have been forced to quit over the scandal as the new deputy prime minister, Clive Edwards, conceded that $26 million, held by the Tonga Trust Fund in a Bank of America account, had been lost.

The money was taken out of the bank in June 1999 and put into Millennium Asset Management in Nevada.

At the time, it said, a Bank of America employee, Jesse Bogdonoff, became the Trust Fund’s advising officer, just after King Taufaahau Tupou IV issued a royal decree declaring him the court jester.

The fund owed its origins to the late 1980’s when a Hong Kong businessman, George Chen, won royal approval to sell Tongan citizenship and special passports mainly to Asians, with a particular eye on Hong Kong Chinese who were worried about its handover to China.

Mr. Chen put the money into a checking account at the Bank of America after the king refused to keep it in Tonga, saying the government would only spend it on roads.

At the time, Mr. Bogdonoff was working at the bank, and by his own account in a company newsletter, ‘‘he stumbled onto millions of dollars inexplicably invested in a checking account.’’ He persuaded the king to allow him to invest the money.

Millennium was established on March 25, 1999, and the fund was moved into it on June 21. The government statement concluded, though, that Millennium no longer exists and that the $26 million, plus an additional $11 million estimated to be accrued interest, was gone.

“Some common questions being asked by the general public in Tonga today include: Why did the trustee deposit so much of our Foreign Reserves in such a suspicious company?” the government said.

Agence France-Presse, The Money Is All Gone in Tonga, And the Jester’s Role Was No Joke, The New York Times, 7 Oct. 2001

Added to diary 18 May 2018

# alex-tabarrok

The Baumol effect is easy to explain but difficult to grasp. In 1826, when Beethoven’s String Quartet No. 14 was first played, it took four people 40 minutes to produce a performance. In 2010, it still took four people 40 minutes to produce a performance. Stated differently, in the nearly 200 years between 1826 and 2010, there was no growth in string quartet labor productivity. In 1826 it took 2.66 labor hours to produce one unit of output, and it took 2.66 labor hours to produce one unit of output in 2010.

Fortunately, most other sectors of the economy have experienced substantial growth in labor productivity since 1826. We can measure growth in labor productivity in the economy as a whole by looking at the growth in real wages. In 1826 the average hourly wage for a production worker was $1.14. In 2010 the average hourly wage for a production worker was $26.44, approximately 23 times higher in real (inflation-adjusted) terms. Growth in average labor productivity has a surprising implication: it makes the output of slow productivity-growth sectors (relatively) more expensive. In 1826, the average wage of $1.14 meant that the 2.66 hours needed to produce a performance of Beethoven’s String Quartet No. 14 had an opportunity cost of just $3.02. At a wage of $26.44, the 2.66 hours of labor in music production had an opportunity cost of $70.33. Thus, in 2010 it was 23 times (70.33/3.02) more expensive to produce a performance of Beethoven’s String Quartet No. 14 than in 1826. In other words, one had to give up more other goods and services to produce a music performance in 2010 than one did in 1826. Why? Simply because in 2010, society was better at producing other goods and services than in 1826.

The 23 times increase in the relative price of the string quartet is the driving force of Baumol’s cost disease. The focus on relative prices tells us that the cost disease is misnamed. The cost disease is not a disease but a blessing. To be sure, it would be better if productivity increased in all industries, but that is just to say that more is better. There is nothing negative about productivity growth, even if it is unbalanced.

Helland, Eric, and Alexander T. Tabarrok. “Why Are the Prices So Damn High?” (2019).
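A quick sanity check of the arithmetic in the excerpt, using its own figures (2.66 labor hours, wages of $1.14 and $26.44; the small discrepancy on the 1826 cost is rounding in the source):

```python
# Baumol-effect arithmetic from the Helland & Tabarrok excerpt.
hours = 2.66                       # 4 players x 40 minutes ≈ 2.66 labor hours
wage_1826, wage_2010 = 1.14, 26.44
cost_1826 = hours * wage_1826      # opportunity cost of one performance, 1826
cost_2010 = hours * wage_2010      # opportunity cost of one performance, 2010
print(round(cost_1826, 2))         # 3.03 (the excerpt rounds to 3.02)
print(round(cost_2010, 2))         # 70.33
print(round(cost_2010 / cost_1826, 1))  # 23.2: the relative-price increase
```

Because the labor hours cancel, the 23× increase in the relative price is exactly the wage ratio 26.44/1.14: productivity growth elsewhere is the whole story.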

Added to diary 09 June 2019

# ali-hortacsu

Economists build a database from 4000-year-old clay tablets, plug it into a trade model, and use it to locate lost Bronze Age cities.

They had clay tablets from ancient merchants, saying things like:

(I paid) 6.5 shekels (of tin) from the Town of the Kanishites to Timelkiya. I paid 2 shekels of silver and 2 shekels of tin for the hire of a donkey from Timelkiya to Hurama. From Hurama to Kaneš I paid 4.5 shekels of silver and 4.5 shekels of tin for the hire of a donkey and a packer.

This allowed them to measure the amount of trade between any two cities.

Then they constructed a theoretical model of the expected amount of trade between any two cities, as inversely proportional to the distance between the two cities. Given the data on the amount of trade and the locations of the known cities, they are able to estimate the lost locations:

As long as we have data on trade between known and lost cities, with sufficiently many known compared to lost cities, a structural gravity model is able to estimate the likely geographic coordinates of lost cities […]

We build a simple Ricardian model of trade. Further imposing that bilateral trade frictions can be summarized by a power function of geographic distance, our model makes predictions on the number of transactions between city pairs, which is observed in our data. The model can be estimated solely on bilateral trade flows and on the geographic location of at least some cities.

Gojko Barjamovic, Thomas Chaney, Kerem A. Coşar, Ali Hortaçsu, Trade, Merchants, and the Lost Cities of the Bronze Age, 2017
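The estimation idea can be sketched in a toy version. This is only an illustration, not the paper’s structural estimator: the city names, coordinates, and the pure 1/distance trade rule are all invented for the example.

```python
import math

# Toy gravity sketch: simulate trade as 1/distance from a hypothetical
# "lost" city to three cities of known location, then recover the lost
# city's coordinates by least squares over a grid of candidate points.
known = {"A": (0.0, 0.0), "B": (10.0, 0.0), "C": (0.0, 10.0)}
true_lost = (4.0, 3.0)  # used only to simulate the "observed" trade flows

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

observed = {c: 1.0 / dist(true_lost, xy) for c, xy in known.items()}

def loss(p):
    # Squared error between predicted (1/distance) and observed trade.
    return sum((1.0 / dist(p, xy) - observed[c]) ** 2
               for c, xy in known.items())

candidates = [(x / 10, y / 10) for x in range(1, 100) for y in range(1, 100)]
best = min(candidates, key=loss)
print(best)  # (4.0, 3.0): the grid point matching the simulated lost city
```

The paper estimates the distance elasticity jointly with the coordinates inside a structural gravity model; the grid search here just shows why observed trade flows pin down a location.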

Added to diary 15 January 2018

# amia-srinivasan

Many male octopuses, to avoid being eaten during mating, will keep their bodies as far removed from the female as possible, extending a single arm with a sperm packet towards her siphon, a manoeuvre known as ‘the reach’. […]

In 1959, Peter Dews, a Harvard scientist, trained three octopuses there to pull a lever to obtain a chunk of sardine. Two of the octopuses, Albert and Bertram, pulled the lever in a ‘reasonably consistent’ manner. But the third, Charles, would anchor his arms on the side of the tank and apply great force to the lever, eventually breaking it and bringing the experiment to a premature end. Dews also reported that Charles repeatedly pulled a lamp into his tank, and that he ‘had a high tendency to direct jets of water out of the tank; specifically … in the direction of the experimenter’. ‘This behaviour,’ Dews wrote, ‘interfered materially with the smooth conduct of the experiments, and is … clearly incompatible with lever-pulling.’ […]

Both female and male octopuses mate only once, and enter a swift and sudden decline into senescence soon after, developing white lesions on their skin, losing interest in food, and becoming unco-ordinated and confused. The females die from starvation while they tend their eggs, and the males are typically preyed on as they wander the ocean aimlessly. […] In its early evolutionary history, the octopus gave up its protective, molluscan shell in order to embrace a life of unboundaried potential. But the cost was an increased vulnerability to toothy and bony predators. An animal with a soft body and no shell cannot expect to live long, and so harmful mutations that take effect only once it has been alive for a couple of years will soon spread through the population. The result is a life that is experientially rich but conspicuously brief.

Amia Srinivasan, The Sucker, the Sucker!, London Review of Books, Vol. 39 No. 17, 7 September 2017, pages 23-25

Added to diary 16 March 2018

# anonymous

What am I? I’m a bunch of bones. I get to wiggle my bones around for a few years, and then I die. And I’m supposed to affect the light cone? It’s like being a worm in a back yard, wiggling around, and hoping to maximise the price of tea in China.

Anonymous, February 2019, Oxford

Added to diary 13 February 2019

Me: Why do some people wear clothes that are generally considered very strange (e.g. punk or emo)? They clearly know most people don’t like it.
Friend: It increases the variance.
Me: But it decreases the expected value.
Friend: Well, you can fuck the variance, but you can’t fuck the expected value.

A conversation with a friend, Oxford, 2017

Added to diary 18 January 2018

# aron-vallinder

Natural Turing Machines. The final issue is the choice of Universal Turing machine to be used as the reference machine. The problem is that there is still subjectivity involved in this choice since what is simple on one Turing machine may not be on another. More formally, it can be shown that for any arbitrarily complex string $x$ as measured against the UTM $U$ there is another UTM $U'$ for which $x$ has Kolmogorov complexity $1$. This result seems to undermine the entire concept of a universal simplicity measure but it is more of a philosophical nuisance which only occurs in specifically designed pathological examples. The Turing machine $U'$ would have to be absurdly biased towards the string $x$ which would require previous knowledge of $x$. The analogy here would be to hard-code some arbitrarily long complex number into the hardware of a computer system which is clearly not a natural design. To deal with this case we make the soft assumption that the reference machine is natural in the sense that no such specific biases exist. Unfortunately there is no rigorous definition of natural but it is possible to argue for a reasonable and intuitive definition in this context.

Hutter and Rathmanner 2011, Section 5.9 “Andrey Kolmogorov”

In section 2.4 we saw that Solomonoff’s prior is invariant under both reparametrization and regrouping, up to a multiplicative constant. But there is another form of language dependence, namely the choice of a universal Turing machine.

There are three principal responses to the threat of language dependence. First, one could accept it flat out, and admit that no language is better than any other. Second, one could admit that there is language dependence but argue that some languages are better than others. Third, one could deny language dependence, and try to show that there isn’t any.

For a defender of Solomonoff’s prior, I believe the second option is the most promising. If you accept language dependence flat out, why introduce universal Turing machines, incomputable functions, and other needlessly complicated things? And the third option is not available: there isn’t any way of getting around the fact that Solomonoff’s prior depends on the choice of universal Turing machine. Thus, we shall somehow try to limit the blow of the language dependence that is inherent to the framework. Williamson (2010) defends the use of a particular language by saying that an agent’s language gives her some information about the world she lives in. In the present framework, a similar response could go as follows. First, we identify binary strings with propositions or sensory observations in the way outlined in the previous section. Second, we pick a UTM so that the terms that exist in a particular agent’s language get low Kolmogorov complexity.

If the above proposal is unconvincing, the damage may be limited somewhat by the following result. Let $K_U(x)$ be the Kolmogorov complexity of $x$ relative to universal Turing machine $U$, and let $K_T(x)$ be the Kolmogorov complexity of $x$ relative to Turing machine $T$ (which needn’t be universal). We have that $K_U(x) \leq K_T(x) + C_{TU}$. That is: the difference in Kolmogorov complexity relative to $U$ and relative to $T$ is bounded by a constant $C_{TU}$ that depends only on these Turing machines, and not on $x$. (See Li and Vitanyi (1997, p. 104) for a proof.) This is somewhat reassuring. It means that no other Turing machine can outperform $U$ infinitely often by more than a fixed constant. But we want to achieve more than that. If one picks a UTM that is biased enough to start with, strings that intuitively seem complex will get a very low Kolmogorov complexity. As we have seen, for any string $x$ it is always possible to find a UTM $T$ such that $K_T(x) = 1$. If $K_T(x) = 1$, the corresponding Solomonoff prior $M_T(x)$ will be at least $0.5$. So for any binary string, it is always possible to find a UTM such that we assign that string prior probability greater than or equal to $0.5$. Thus some way of discriminating between universal Turing machines is called for.

Vallinder 2012, Section 4.1 “Language dependence”
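The step in the quoted passage from $K_T(x) = 1$ to $M_T(x) \geq 0.5$ uses the standard lower bound on the Solomonoff prior: the shortest program for $x$ alone contributes $2^{-K_T(x)}$ to the sum defining the prior, so

$$M_T(x) \;\geq\; 2^{-K_T(x)}, \qquad \text{and thus} \quad K_T(x) = 1 \;\implies\; M_T(x) \geq 2^{-1} = 0.5.$$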

Added to diary 15 January 2018

# ben-garfinkel

a generalization of all these concepts [metaphor, deepity, motte and bailey]:

Blurry sentence: a sentence with at least two possible interpretations, where one is both much more interesting and much less plausible than the other. The effect is to make the reader feel that the sentence is both interesting and plausible. The implausible interpretation also does not help the reader to understand the plausible one.

Here’s how the generalization works:

Basic metaphor: a blurry sentence where the plausible interpretation is the most salient one to the reader.

Deepity: a blurry sentence where neither the plausible interpretation nor the implausible interpretation is much more salient than the other, leaving the reader with a sense of vagueness or ineffability. The feeling is: There is something true and interesting here, although I can’t quite put my finger on it.

Motte and bailey: a blurry sentence where the implausible interpretation is the most salient one. If the reader objects that the sentence is in fact implausible, then the writer has the option of switching to only the plausible interpretation.

These revised definitions reveal that there is a spectrum here, such that the names “basic metaphor,” “deepity,” and “motte and bailey” are only calling out particular regions of the spectrum.

Ben Garfinkel, Basic Metaphors, Deepities, and Motte-and-Baileys, The best that can happen, 16 September 2017

Added to diary 15 January 2018

# bryan-caplan

But George Akerlof aggressively seizes the booby prize. He apparently keeps his wealth in money market funds and the like. His justification? Less than zero: “I know it’s utterly stupid.”

The whole article reminds me of a quote I wrongly attributed to Einstein instead of Thomas Szasz: “Clear thinking requires courage rather than intelligence.”

Bryan Caplan, Four bad role models, EconLog, 12 May 2005

All four basically said that they have portfolios that their research says are stupid portfolios. I’m just like, what is wrong with you, like why do you do this? […]

To me it’s just so frustrating. This is the way a lot of people approach economics, is it’s like a game. Say you get publications or maybe you go and write things, but you don’t actually really use it to change behavior.

Bryan Caplan, 80,000 Hours Podcast, 22 May 2018

Added to diary 23 May 2018

List of passages I highlighted in my copy of “The Case Against Education”.

On conformity and conscientiousness:

Heterodox signals of your strengths, in contrast, automatically suggest offsetting weaknesses. Suppose you scored well on the SAT but never went to college. Employers will readily believe you’re smart. But if you’re so smart, why didn’t you go to college? As long as your conscientiousness and conformity were in the normal range, finishing college would have been a snap. Once employers see your SATs, they naturally infer you’re below average in conscientiousness and conformity. The higher your scores, the more suspicious your missing diploma becomes. […]

The further outside the box your substitute signal of conformity, the more it backfires. Try telling employers, “I’m not Jewish, but I keep kosher to prove I can conform to intricate rules.” They’ll take you for a freak. […]

On employer learning:

Give people a chance, observe how they do, fire them if they don’t measure up: a “Hire, Look, Flush” personnel policy sounds both profitable and fair. Yet group identity and pity get in the way. After a firm hires you, you’re part of the team. […]

Employers do have one guilt-free way to reverse a bad hiring decision. Human resources calls it “dehiring.” Instead of firing the unwanted worker, help them jump ship. Privately urge them to find new opportunities. When firms call for a reference, shade the truth—or lie. […]

For most workers, employer learning takes years or even decades, not months. Two seminal studies of employer learning found that during your first decade in the workforce, the ability premium sharply rises, while the education premium falls 25–30%. A subsequent prize-winning article found the education and ability premiums plateau after roughly ten years of experience; the education premium stops falling, and the ability premium stops rising. […]

The more fundamental reason why signals durably affect pay, though, is employers underreact to what they learn. Why? Because they want to match pay and perceived productivity without seeming unfair. When employers spot poor performance, they could swiftly respond with wage cuts, demotions, or terminations. The catch: such “unfair” measures are bad for morale—and make employers feel guilty. […]

A subpar worker can profit from their fancy degree long after their employer sees their true colors. The degree lands them a good job. As truth unfolds, the typical employer responds with stingy raises, not outright pay cuts or demotion. This slowly erodes the value of the signal, but squeamish firms show mercy long before they sync pay with performance. If and when the employer vows to eject the underperformer, both prudence and pity tell them to informally “dehire” rather than blatantly fire. As long as the subpar worker lands another position suitable for their paper persona, the cycle of disappointment, mercy, and deception is reborn. […]

A litmus test:

Imagine this stark dilemma: you can have either a Princeton education without a diploma, or a Princeton diploma without an education. Which gets you further on the job market? For a human capital purist, the answer is obvious: four years of training are vastly preferable to a page of paper. But try saying that with a straight face. Sensible versions of the signaling model don’t imply the diploma is clearly preferable; after all, Princeton teaches some useful skills. But you need signaling to explain why choosing between an education and a diploma is a head-scratcher rather than a no-brainer. […]

Counter-intuitive effects of education subsidies:

Yes, awarding a full scholarship to one poor youth makes that individual better off by helping send a fine signal to the labor market. Awarding full scholarships to all poor youths, however, changes what educational signals mean—and leads more affluent competitors to pursue further education to keep their edge. The result, as we’ve seen, is credential inflation. As education rises, workers—including the poor—need more education to get the same job. Where’s the social justice in that?

Imagine the government subsidized wedding rings for the poor. Anyone ready for marriage can go to any jewelry store in the country, knowing—whatever their income—they can buy a diamond ring. The snag: diamond rings are largely a signal of marital commitment. If diamonds were cheap as plastic, other gems would adorn our rings. They’re valuable because they’re costly. Once the government makes them affordable to all, then, diamond rings signal little or nothing. Doesn’t this “level the playing field”? Only for a heartbeat. Once the nonpoor see diamond rings don’t signal what they used to, they procure a snazzier ring to separate themselves from the pack. Thanks to government subsidies, every suitor can afford a wedding ring, but so what? Society is functionally as unequal as ever. […]

Subsidies don’t just hurt the poor by fueling credential inflation. They reshape hiring and promotion to the poor’s detriment. Picture a society where half the population can’t afford college. In this setting, reserving good jobs for college grads is bad business. “There are plenty of qualified candidates who didn’t go to college” is not wishful thinking, but literal truth. Education still signals something, but lack of education is not the kiss of death. When asked, “Why didn’t you go to college?” “I couldn’t afford it” is a great excuse. Heavy subsidies take it off the table.

Does school build human capital?

One major study tested roughly a thousand people’s knowledge of algebra and geometry. Some participants were still in high school; the rest were adults between 19 and 84 years old. The researchers had data on subjects’ full mathematical education. Main finding: Most people who take high school algebra and geometry forget about half of what they learn within five years and forget almost everything within twenty-five years. Only […]

Despite the shortage of long-term retention studies, we can fall back on a compelling shortcut. Instead of measuring the enduring effect of education on adult knowledge, we can place an upper bound on that effect. It’s a two-step process. Step one: measure adult knowledge about various school subjects. Step two: note that schools can’t be responsible for more than 100% of what adults know about these subjects. What people now know is therefore an upper bound on the school learning they retain. My shortcut is easy to implement. Surveys of adults’ knowledge of reading, math, history, civics, science, and foreign languages are already on the shelf. The results are stark: Basic literacy and numeracy are virtually the only book learning most American adults possess. […]

Barely half of American adults know the Earth goes around the sun. Only 32% know atoms are bigger than electrons. Just 14% know that antibiotics don’t kill viruses. Knowledge of evolution barely exceeds zero. Knowledge of the Big Bang is actually less than zero; respondents would have done better flipping a coin. […]

If you throw a coin straight up, how many forces act on it midair? The textbook answer is “one”: after it leaves your hand, the only force on the coin is gravity. The popular answer, however, is “two”: the force of the throw keeps sending it up, and the force of gravity keeps dragging it down. Popular with whom? Virtually everyone—physics students included. At the beginning of the semester, only 12% of college students in introductory mechanics get the coin problem right. At the end of the semester, 72% still get it wrong. […]

Learning how to learn?

The clash between teachers’ grand claims about “learning how to learn” and a century of careful research is jarring. Yet commonsense skepticism is a shortcut to the expert consensus. Teachers’ plea that “we’re mediocre at teaching what we measure, but great at teaching what we don’t measure” is comically convenient. […]

Effects on IQ:

“Flowers for Algernon” is science fiction, but life mirrors art. Making IQ higher is easy. Keeping IQ higher is hard. Researchers call this “fadeout.” Fadeout for early childhood education is especially well documented. After six years in the famous Milwaukee Project, experimental subjects’ IQs were 32 points higher than controls’. By age fourteen, this advantage had declined to 10 points. In the Perry Preschool program, experimental subjects gained 13 points of IQ, but all this vanished by age 8. Head Start raises preschoolers’ IQs by a few points, but gains disappear by the end of kindergarten. […]

In any case, suppose each year of school permanently made you a whopping 3 IQ points smarter. According to standard estimates, this would raise your earnings by about 3%, leaving a supermajority of the education premium unexplained. […]

This section, comparing three models of education (human capital, signalling, ability bias) was excellent:

Table 3.2: Human Capital, Signaling, and Ability Bias [WYSIWYG = “What You See Is What You Get.”]

| Story | Visibility of Skill | Education’s Effect on Skill | Education’s Effect on Income |
| --- | --- | --- | --- |
| Pure Human Capital | Perfect | WYSIWYG | WYSIWYG |
| Pure Signaling | Zero | Zero | WYSIWYG |
| Pure Ability Bias | Perfect | Zero | Zero |
| ⅓ Human Capital, ⅓ Signaling, ⅓ Ability Bias | ⅔ | ⅓ × WYSIWYG | ⅔ × WYSIWYG |
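The mixture row is just the equal-weight average of the three pure stories. Coding “Perfect”/“WYSIWYG” as 1 and “Zero” as 0 (my encoding, not Caplan’s) makes the check mechanical:

```python
# Columns: visibility of skill, effect on skill, effect on income,
# with Perfect/WYSIWYG coded as 1 and Zero coded as 0.
rows = {
    "pure_human_capital": (1, 1, 1),
    "pure_signaling":     (0, 0, 1),
    "pure_ability_bias":  (1, 0, 0),
}
# Average each column across the three pure stories.
mixture = tuple(sum(col) / 3 for col in zip(*rows.values()))
print(mixture)  # ≈ (0.67, 0.33, 0.67): ⅔, ⅓ × WYSIWYG, ⅔ × WYSIWYG
```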

In 1999, a comprehensive review of earlier studies found that correcting for IQ reduces the education premium by an average of 18%. When researchers correct for scores on the Armed Forces Qualification Test (AFQT), an especially high-quality IQ test, the education premium typically declines by 20–30%. Correcting for mathematical ability may tilt the scales even more; the most prominent researchers to do so report a 40–50% decline in the education premium for men and a 30–40% decline for women.

Internationally, correcting for cognitive skill cuts the payoff for years of education by 20%, leaving clear rewards of mere years of schooling in all 23 countries studied. The highest serious estimate finds the education premium falls 50% after correcting for students’ twelfth-grade math, reading, and vocabulary scores, self-perception, perceived teacher ranking, family background, and location.

A thinner body of research weighs the importance of so-called noncognitive abilities such as conscientiousness and conformity. The results parallel those for IQ: noncognitive ability pays, and correcting for noncognitive ability reduces the education premium. Correcting for AFQT, self-esteem, and fatalism (belief about the importance of luck versus effort) reduces the education premium by a total of 30%. The sole study correcting for detailed personality tests finds the education premium falls 13%. The highest serious estimate says that once you correct for intelligence and background, correcting for attitudes (such as fear of failure, personal efficacy, and trust) and personal behavior (such as church attendance, television viewing, and cleanliness) further cuts the education premium by 37%.

There are admittedly two big reasons to mistrust these basic results: reverse causation and missing abilities. The former could systematically overstate the severity of ability bias. The latter could systematically understate the severity of ability bias. Need we fret over either flaw?

Reverse causation. When you estimate the education premium correcting for ability X, you implicitly assume education does not enhance X. If this assumption is false, correcting for X leads to misleadingly low estimates of the effect of education on earnings. The best remedy for this “reverse causation” problem is to measure ability, then estimate the effect of subsequent education on earnings. Research on cognitive ability bias routinely applies this remedy—and uncovers little evidence of reverse causation. The comprehensive review article mentioned earlier separated studies into two categories: those that measured IQ before school completion and those that measured IQ after school completion. If reverse causation were at work, studies that relied on IQ after completion would report more ability bias than studies that relied on IQ before completion. In fact, both categories yield similar estimates of cognitive ability bias.

Researchers who rely on the AFQT and related tests reach a similar result: When you correct for cognitive ability in 1980, the payoff for posttest education falls at least as much as the payoff for pretest education. Correcting for mathematical ability in the senior year of high school shaves 25–32% off the male college premium and 4–20% off the female college premium. What about reverse causation from education to noncognitive ability? Truth be told, relevant research is sparse. A few papers grapple with the issue, with mixed results. Most research, however, either measures noncognitive ability and education at the same point in time, or fails to distinguish between the effect of pre- and posttest education. The shortage of evidence hardly shows reverse causation is a serious problem, but caution is in order.

Missing abilities. Correcting for ability doesn’t fully eliminate ability bias unless you measure all relevant abilities. Are there any important abilities we’ve overlooked? Family background—via nature or nurture—is a plausible contender. Perhaps wealthy families use their money to help their kids get good educations and good jobs. Maybe college is a four-year vacation for rich kids—and a status symbol for their parents. Perhaps children from large families get less educational and professional assistance from their parents. Maybe well-educated workers come from high-achieving families—and would have been high achievers even without their schooling. The mechanism is hard to nail down, but most researchers find correcting for family background reduces the education premium by 0–15%.

On reflection, though, correcting for family background probably “double-counts.” Both cognitive and noncognitive ability are moderately to highly hereditary, so you should correct for individual ability before you conclude family background overstates school’s payoff. This caveat matters. Rare studies that correct for intelligence and family background find that correcting for intelligence alone suffices. Armed with good measures of cognitive and noncognitive ability, we can probably safely ignore family background.

The most troubling evidentiary gap: researchers usually settle for mediocre measures of noncognitive ability. Most studies that correct for noncognitive ability rely on one or two hastily measured traits and find only mild ability bias. Yet when asked, employers hail the importance of workers’ attitude and motivation—and the study with the best measures of noncognitive ability finds large ability bias. Until better measures come along, we should picture existing results as a lower bound on noncognitive ability bias rather than a solid estimate.

So how severe is ability bias, all things considered? For cognitive ability bias, 20% is a cautious estimate, and 30% is reasonable. For noncognitive ability bias, 5% is cautious, and 15% is reasonable. Figure 3.1 shows education premiums correcting for both abilities, assuming equal bias for all education levels. […]

Why don’t more companies use IQ tests?

IQ “laundering.” Human capital purists often protest, “Why on earth do workers signal ability with a four-year degree instead of a three-hour IQ test?” My response: employers reasonably fear high-IQ, low-education applicants’ low conscientiousness and conformity. Other critics of the education industry, however, have a more streamlined response: American employers rely on educational credentials rather than IQ tests because IQ tests are effectively illegal. Thanks to the landmark 1971 Griggs v. Duke Power case, later codified in the 1991 Civil Rights Act, anyone who hires by IQ risks pricey lawsuits. Why? Because IQ tests have a “disparate impact” on black and Hispanic applicants. To escape liability, employers must prove IQ testing is a “business necessity.” […]

Sheepskin effects: graduation is much more valuable than learning

Labor economists normally neglect sheepskin effects. By default, they assume all years of education are created equal, then estimate “the” effect of a year of education on earnings. Yet economists who trouble to look almost always find pay spikes for diplomas. High school graduation has a big spike: twelfth grade pays more than grades 9, 10, and 11 combined. In percentage terms, the average study finds graduation year is worth 3.4 regular years. College graduation has a huge spike: senior year of college pays over twice as much as freshman, sophomore, and junior years combined. In percentage terms, the average study finds graduation year is worth 6.7 regular years. […]

correcting for ability usually modestly cuts the effect of both years of education and diplomas—holding the relative payoff for diplomas steady. […]

The Onion and the politics of education:

Who could oppose investment in our children, our people, our nation, and our future? The Onion, the best parody site ever, once ran an article titled, “U.S. Government to Discontinue Long-Term, Low-Yield Investment in Nation’s Youth.” In it, Secretary of Education Rod Paige takes a calmly analytical approach that would cost any politician their job: “Testing is exactly the sort of research the government should do before making spending decisions,” Paige said. “How else will we know which individuals are sound investments and which are likely to waste our time and money?” […]

Bryan Caplan, The Case Against Education, 2017

Added to diary 21 April 2018

A Motte and Bailey castle is a medieval system of defence in which a stone tower on a mound (the Motte) is surrounded by an area of land (the Bailey) which in turn is encompassed by some sort of a barrier such as a ditch. Being dark and dank, the Motte is not a habitation of choice. The only reason for its existence is the desirability of the Bailey, which the combination of the Motte and ditch makes relatively easy to retain despite attack by marauders. When only lightly pressed, the ditch makes small numbers of attackers easy to defeat as they struggle across it: when heavily pressed the ditch is not defensible and so neither is the Bailey. Rather one retreats to the insalubrious but defensible, perhaps impregnable, Motte. Eventually the marauders give up, when one is well placed to reoccupy desirable land.

For my purposes the desirable but only lightly defensible territory of the Motte and Bailey castle, that is to say, the Bailey, represents a philosophical doctrine or position with similar properties: desirable to its proponent but only lightly defensible. The Motte is the defensible but undesired position to which one retreats when hard pressed. I think it is evident that Troll’s Truisms have the Motte and Bailey property, since the exciting falsehoods constitute the desired but indefensible region within the ditch whilst the trivial truth constitutes the defensible but dank Motte to which one may retreat when pressed.

An entire doctrine or theory may be a Motte and Bailey Doctrine just by virtue of having a central core of defensible but not terribly interesting or original doctrines surrounded by a region of exciting but only lightly defensible doctrines. Just as the medieval Motte was often constructed by the stonemasons art from stone in the surrounding land, the Motte of dull but defensible doctrines is often constructed by the use of the sophists art from the desired but indefensible doctrines lying within the ditch.

Diagnosis of a philosophical doctrine as being a Motte and Bailey Doctrine is invariably fatal. Once made it is relatively obvious to those familiar with the doctrine that the doctrine’s survival required a systematic vacillation between exploiting the desired territory and retreating to the Motte when pressed.

The dialectic between many refutations of specific postmodernist doctrines and the postmodernist defences correspond exactly to the dynamics of Motte and Bailey Doctrines. When pressed with refutation the postmodernists retreat to their Mottes, only to venture out and repossess the desired territory when the refutation is not in immediate evidence. For these reasons, I think the proper diagnosis of postmodernism is precisely that it is a Motte and Bailey Doctrine.

Shackel, N. (2005). The vacuity of postmodernist methodology. Metaphilosophy, 36(3), 295-320.

So the motte-and-bailey doctrine is when you make a bold, controversial statement. Then when somebody challenges you, you claim you were just making an obvious, uncontroversial statement, so you are clearly right and they are silly for challenging you. Then when the argument is over you go back to making the bold, controversial statement.

Some classic examples:

1. The religious group that acts for all the world like God is a supernatural creator who builds universes, creates people out of other people’s ribs, parts seas, and heals the sick when asked very nicely (bailey). Then when atheists come around and say maybe there’s no God, the religious group objects “But God is just another name for the beauty and order in the Universe! You’re not denying that there’s beauty and order in the Universe, are you?” (motte). Then when the atheists go away they get back to making people out of other people’s ribs and stuff.

2. Or…”If you don’t accept Jesus, you will burn in Hell forever.” (bailey) But isn’t that horrible and inhuman? “Well, Hell is just another word for being without God, and if you choose to be without God, God will be nice and let you make that choice.” (motte) Oh, well that doesn’t sound so bad, I’m going to keep rejecting Jesus. “But if you reject Jesus, you will BURN in HELL FOREVER and your body will be GNAWED BY WORMS.” But didn’t you just… “Metaphorical worms of godlessness!”

3. The feminists who constantly argue about whether you can be a real feminist or not without believing in X, Y and Z and wanting to empower women in some very specific way, and who demand everybody support controversial policies like affirmative action or affirmative consent laws (bailey). Then when someone says they don’t really like feminism very much, they object “But feminism is just the belief that women are people!” (motte) Then once the person hastily retreats and promises he definitely didn’t mean women aren’t people, the feminists get back to demanding everyone support affirmative action because feminism, or arguing about whether you can be a feminist and wear lipstick.

4. Proponents of pseudoscience sometimes argue that their particular form of quackery will cure cancer or take away your pains or heal your crippling injuries (bailey). When confronted with evidence that it doesn’t work, they might argue that people need hope, and even a placebo solution will often relieve stress and help people feel cared for (motte). In fact, some have argued that quackery may be better than real medicine for certain untreatable diseases, because neither real nor fake medicine will help, but fake medicine tends to be more calming and has fewer side effects. But then once you leave the quacks in peace, they will go back to telling less knowledgeable patients that their treatments will cure cancer.

5. Critics of the rationalist community note that it pushes controversial complicated things like Bayesian statistics and utilitarianism (bailey) under the name “rationality”, but when asked to justify itself defines rationality as “whatever helps you achieve your goals”, which is so vague as to be universally unobjectionable (motte). Then once you have admitted that more rationality is always a good thing, they suggest you’ve admitted everyone needs to learn more Bayesian statistics.

6. Likewise, singularitarians who predict with certainty that there will be a singularity, because “singularity” just means “a time when technology is so different that it is impossible to imagine” – and really, who would deny that technology will probably get really weird (motte)? But then every other time they use “singularity”, they use it to refer to a very specific scenario of intelligence explosion, which is far less certain and needs a lot more evidence before you can predict it (bailey).

The motte and bailey doctrine sounds kind of stupid and hard-to-fall-for when you put it like that, but all fallacies sound that way when you’re thinking about them. More important, it draws its strength from people’s usual failure to debate specific propositions rather than vague clouds of ideas. If I’m debating “does quackery cure cancer?”, it might be easy to view that as a general case of the problem of “is quackery okay?” or “should quackery be illegal?”, and from there it’s easy to bring up the motte objection.

Scott Alexander, “All in all, another brick in the motte”, Slate Star Codex, 3 November 2014

Suppose I define socialism as, “a system of totalitarian control over the economy, leading inevitably to mass poverty and death.” As a detractor of socialism, this is superficially tempting. But it’s sheer folly, for two distinct reasons.

First, this plainly isn’t what most socialists mean by “socialism.” When socialists call for socialism, they’re rarely requesting totalitarianism, poverty, and death. And when non-socialists listen to socialists, that’s rarely what they hear, either.

Second, if you buy this definition, there’s no point studying actual socialist regimes to see if they in fact are “totalitarian” or “inevitably lead to mass poverty and death.” Mere words tell you what you need to know.

What’s the problem? The problem is that I’ve provided an argumentative definition of socialism. Instead of rigorously distinguishing between what we’re talking about and what we’re saying about it, an argumentative definition deliberately interweaves the two.

The hidden hope, presumably, is that if we control the way people use words, we’ll also control what people think about the world. And it is plainly possible to trick the naive using these semantic tactics. But the epistemic cost is high: You preemptively end conversation with anyone who substantively disagrees with you - and cloud your own thinking in the process. It’s far better to neutrally define socialism as, say, “Government ownership of most of the means of production,” or maybe, “The view that each nation’s wealth is justly owned collectively by its citizens.” You can quibble with these definitions, but people can accept either definition regardless of their position on socialism itself.

Modern discussions are riddled with argumentative definitions, but the most prominent instance, lately, is feminism. Google “feminism,” and what do you get? The top hit: “the advocacy of women’s rights on the basis of the equality of the sexes.” I’ve heard many variants on this: “the theory that men and women should be treated equally,” or even “the radical notion that women are people.”

What’s argumentative about these definitions? Well, in this 2016 Washington Post/Kaiser Family Foundation survey, 40% of women and 67% of men did not consider themselves “feminists.” But over 90% of both genders agreed that “men and women should be social, political, and economic equals.” If Google’s definition of feminism conformed to standard English usage, these patterns would make very little sense. Imagine a world where 90% of men say they’re “bachelors,” but only 40% say they’re “unmarried.”

Bryan Caplan, Against Argumentative Definitions: The Case of Feminism, EconLog, 20 February 2018

Added to diary 21 April 2018

# carolyn-kormann

In the Aeneid, Virgil puts forward a prophecy founded on proto-pizza consumption, which foretells where Rome shall be built. “When hunger shall drive you, landed on unknown shores, to eat the tables at your frugal meal,” Aeneas recalls his father telling him, “remember to place your first buildings there.” These “tables,” Aeneas later realizes, falling to his knees, are plates made of hard bread off which his band of Trojan refugees eat lunch. Two millennia later, Camillo (opened in September by the proprietors of the Clinton Hill standby Locanda Vini e Olii) honors pizza’s Virgilian origins—in the ultimate old-timey Brooklyn move—with pinsa, a Roman flatbread.

Carolyn Kormann, Camillo, Tables for two, New Yorker Magazine, November 27, 2017

Added to diary 16 January 2018

# christopher-pfalzgraf

In the inspection-time task, a person looks at two vertical lines of unequal length and is asked to identify which of the two is longer. Inspection time is the length of time of stimulus presentation each individual needs in order to discriminate which of the two lines is the longest. Some research suggests that more-intelligent individuals are able to discriminate the lengths of the lines in shorter inspection times.

Robert J. Sternberg, Encyclopædia Britannica, Intelligence

This study replicated and extended Kranzler and Jensen’s [Intelligence 13 (1989) 329] meta-analysis of the relationship between inspection time (IT) and intelligence (IQ). […] meta-analyses were conducted on obtained correlations (r’s) between IT and general IQ. […] For the total sample (N>4100), the r was −.51 after correction for artifactual effects (−.30 prior to correction).

Jennifer L. Grudnik, John H. Kranzler, “Meta-analysis of the relationship between intelligence and inspection time”, Intelligence, Volume 29, Issue 6, November–December 2001, Pages 523-535, https://doi.org/10.1016/S0160-2896(01)00078-2

Neuropsychologists and cognitive researchers often need quick estimates of global cognitive functioning [i.e., intelligence quotient (IQ)]. […]

The current article examined 11 proxy measures to determine their level of agreement with WAIS-III FSIQ across the entire sample. [(Wechsler Adult Intelligence Scale [Third Edition] full scale IQ test)] […]

Measures evaluated for this study included the Ward-7ST short form developed by Ward and modified for the WAIS-III by Pilgrim et al. (1999), the NAART, the SILS, ITBS, the Barona and Crawford demographic regression formulae, and the five OPIE3 hybrids combining demographic and WAIS-III subtest information. The final estimate examined was the ITBS (Hoover et al., 2003), a nationally recognized standardized school achievement test.

The Pearson correlation and confidence interval between WAIS-III FSIQ and each proxy measure are shown in Table 2. Correlations ranged from r = .25 for the Barona estimate to r = .95 for the Ward-7ST short form.

The performance of the proxy measures across the different cognitive ability groups was examined next. […]

Above-Average IQ Group

[…] The Ward-7ST estimate was the only proxy to correlate above r = .70 for the high ability group. […]

The most important finding of this article is how poorly the IQ proxy measures performed at the tails of the IQ distribution. The proxy measures consistently overestimated the IQ of low-functioning individuals and underestimated the IQs of high-functioning individuals.

Spinks, Mckirgan, Arndt, Caspers, Yucuis and Pfalzgraf (2009). IQ estimate smackdown: comparing IQ proxy measures to the WAIS-III. Journal of the International Neuropsychological Society. 15. 590-6. doi:10.1017/S1355617709090766.

Added to diary 27 June 2018

# colin-mcginn

Can I decide that my definite descriptions shall all conform to Russell’s theory? I simply stipulate that they do. It appears that I can indeed decide these things—my semantic will can create semantic facts. I can decide what my words and sentences mean, since this is a matter of stipulation. […]

[…] so I decide to mean truth conditions by my sentences, not verification conditions. What is to stop me from doing that? I freely assign truth conditions to my sentences as their meanings. It’s a free country, semantically speaking. I can mean what I choose to mean—and I choose to mean truth conditions. I might even announce outright: “The meaning of my utterances is to be understood in terms of truth conditions, not verification conditions.”

But couldn’t someone of positivist or antirealist sympathies make the contrary decision? This person suspects that the sentences she has inherited from her elders are tainted with metaphysics, and she regards the concept of truth with suspicion; she wants her meaning to be determined entirely by verification conditions. She thus stipulates that her sentences are to be understood in terms of verification conditions, not truth conditions. When she says, “John is in pain” she means that the assertion conditions for that sentence are satisfied (John is groaning, writhing, etc.). She insists that there is no inaccessible private something lurking behind the behavioral evidence—no mysterious “truth condition”; there are just the criteria we use for making assertions of this type. She accordingly stipulates that her meanings shall conform to the verification conditions theory. This does not seem logically impossible: there could be a language conforming to the verificationist conception, given appropriate beliefs and intentions on the part of speakers. The traditional dispute has been over whether our actual language is subject to a truth conditions or a verification conditions theory, not over whether each theory is logically coherent.

Colin McGinn, Philosophical Provocations, Deciding to Mean p. 99ff, 2017

Added to diary 19 January 2018

# daniel-defoe

So I went to work; and here I must needs observe, that as reason is the substance and original of the mathematicks, so by stating and squaring every thing by reason, and by making the most rational judgment of things, every man may be in time master of every mechanick art.

Daniel Defoe, Robinson Crusoe, 1719

Added to diary 26 January 2018

# daron-acemoglu

In the wake of Hurricane Katrina in the summer of 2005, much of the Gulf Coast had been pummeled by wind and inches upon inches of rain. Water was everywhere, but often undrinkable. Basic provisions we take for granted, like drinking water, weren’t easy to come by and the Federal Emergency Management Agency (FEMA) was caught flat-footed.

In response to catastrophic events like a hurricane or an earthquake, the caricature of private industry is that firms will gouge customers. And sometimes this is true, but in response to Katrina, there was one unlikely hero: Walmart. In fact, the Mayor of Kenner, a suburb of New Orleans, had this to say about Walmart’s response: “. . . the only lifeline in Kenner was the Walmart stores. We didn’t have looting on a mass scale because Walmart showed up with food and water so our people could survive.”

Indeed, in the three weeks after Katrina, Walmart shipped almost 2,500 truckloads of supplies to storm-damaged areas. These truckloads reached affected areas before FEMA, whose troubles responding to the storm were so great that it shipped 30,000 pounds of ice to Maine instead of Mississippi. These stories and more are in Horwitz (2009), which summarizes the divergent responses to Katrina by private industry and FEMA. How was Walmart so effective in its response? Well, it maintains a hurricane response center of its own that rivals FEMA’s, and prior to the storm’s landfall it anticipated a need for generators, water, and food, so it effectively diverted supplies to the area. Walmart’s emergency response center was in full swing as the storm approached with 50 employees managing the response from headquarters.

This sounds like the sort of response FEMA should have produced; so if that’s the job of FEMA, why did Walmart respond so heroically? Simple economics. Walmart understood that there would be an important shift of the demand curve for water, generators, and ice in response to the storm and the textbook response to such shifts is an increase in quantity supplied. Lucky for us, few are better at shipping provisions around the country than Walmart.

Daron Acemoglu, David Laibson, John List, Economics, 1st Edition, 2015

Added to diary 19 May 2018

# david-foster-wallace

Laurel Manderley, who like most of the magazine’s high level interns wore exquisitely chosen and coordinated professional attire, permitted herself a small diamond stud in one nostril that Atwater found slightly distracting in face to face exchanges, but she was extremely shrewd and pragmatic—she had actually been voted Most Rational by the Class of ’96 at Miss Porter’s School. She was also all but incapable of writing a simple declarative sentence and thus could not, by any dark stretch of the imagination, ever be any kind of rival for Atwater’s salaryman position at Style. […]

Many of Style’s upper echelon interns convened for a working lunch at Chambers Street’s Tutti Mangia restaurant twice a week, to discuss issues of concern and transact any editorial or other business that was pending, after which each returned to her respective mentor and relayed whatever was germane. It was an efficient practice that saved the magazine’s paid staffers a great deal of time and emotional energy. […]

A fellow WITW staff intern, who also roomed with Laurel Manderley and three other Wellesleyites in a basement sublet near the Williamsburg Bridge, related a vignette that her therapist had once shared with her about dating his wife, whom the therapist had originally met when both of them were going through horrible divorces, and of their going out to dinner on one of their early dates and coming back and sitting with glasses of wine on her sofa, and of she all of a sudden saying, ‘You have to leave,’ and he not understanding, not knowing whether she was kicking him out or whether he’d said something inappropriate or what, and she finally explaining, ‘I have to take a dump and I can’t do it with you here, it’s too stressful,’ using the actual word dump, and of so how the therapist had gone down and stood on the corner smoking a cigarette and looking up at her apartment, watching the light in the bathroom’s frosted window go on, and simultaneously, one, feeling like a bit of an idiot for standing out there waiting for her to finish so he could go back up, and, two, realizing that he loved and respected this woman for baring to him so nakedly the insecurity she had been feeling. He had told the intern that standing on that corner was the first time in quite a long time he had not felt deeply and painfully alone, he had realized. […]

A polished, shallow, earnest, productive, consummate corporate pro. Over the past three years, Skip Atwater had turned in some 70 separate pieces to Style, of which almost 50 saw print and a handful of others ran under rewriters’ names. A volunteer fire company in suburban Tulsa where you had to be a grandmother to join. When Baby Won’t Wait—Moms who never made it to the hospital tell their amazing stories. Drinking and boating: The other DUI. Just who really was Slim Whitman. This Grass Ain’t Blue—Kentucky’s other cash crop. He Delivers—81 year old obstetrician welcomes the grandchild of his own first patient. Former Condit intern speaks out. Today’s forest ranger: He doesn’t just sit in a tower. Holy Rollers—Inline skateathon saves church from default. Eczema: The silent epidemic. Rock ’n’ Roll High School—Which future pop stars made the grade? Nevada bikers rev up the fight against myasthenia gravis. Head of the Parade—From Macy’s to the Tournament of Roses, this float designer has done them all. The All Ads All The Time cable channel. Rock of Ages—These geologists celebrate the millennium in a whole new way. Sometimes he felt that if not for his schipperkes’ love he would simply blow away and dissipate like milkweed. The women who didn’t get picked for Who Wants to Marry a Millionaire: Where did they come from, to what do they return. Leapin’ Lizards—The Gulf Coast’s new alligator plague. One Lucky Bunch of Cats—A terminally ill Lotto winner’s astounding bequest. Those new home cottage cheese makers: Marvel or ripoff? Be(-Happy-)Atitudes—This Orange County pastor claims Christ was no sourpuss. Dramamine and NASA: The untold story. Secret documents reveal Wallis Simpson cheated on Edward VIII. A Whole Lotta Dough—Delaware teen sells $40,000 worth of Girl Scout cookies . . . and isn’t finished yet! For these former agoraphobics, home is not where the heart is. Contra: The thinking person’s square dance. 
[…] Hopping Mad—This triple amputee isn’t taking health care costs lying down. The meth lab next door! Mrs. Gladys Hine, the voice behind over 1,500 automated phone menus. The Dish—This Washington D.C. caterer has seen it all. Computer solitaire: The last addiction? No Sweet Talkin’—Blue M&Ms have these consumers up in arms. Dallas commuter’s airbag nightmare. Menopause and herbs: Exciting new findings. Fat Chance—Lottery cheaters and the heavyweight squad that busts them. Seance secrets of online medium Duwayne Evans. Ice sculpture: How do they do that? […]

David Foster Wallace, The Suffering Channel, in Oblivion, 2004

Added to diary 27 June 2018

GOVERNOR: Guys, the state is getting soft. I can feel softness out there. It’s getting to be one big suburb and industrial park and mall. Too much development. People are getting complacent. They’re forgetting the way this state was historically hewn out of the wilderness. There’s no more hewing.

MR. OBSTAT: You’ve got a point there, Chief.

GOVERNOR: We need a wasteland.

MR. LUNGBERG and MR. OBSTAT: A wasteland?

GOVERNOR: Gentlemen, we need a desert.

MR. LUNGBERG and MR. OBSTAT: A desert?

GOVERNOR: Gentlemen, a desert. A point of savage reference for the good people of Ohio. A place to fear and love. A blasted region. Something to remind us of what we hewed out of. A place without malls. An Other for Ohio’s Self. Cacti and scorpions and the sun beating down. Desolation. A place for people to wander alone. To reflect. Away from everything. Gentlemen, a desert.

MR. OBSTAT: Just a super idea, Chief.

GOVERNOR: Thanks, Neil. Gentlemen may I present Mr. Ed Roy Yancey, of Industrial Desert Design, Dallas. They did Kuwait.

David Foster Wallace, The Broom of the System, 1987

Added to diary 27 June 2018

The nearby StairMasters were used almost exclusively by midlevel financial analysts, all of whom had bristly cybernetic haircuts.
David Foster Wallace, The Suffering Channel, in Oblivion, 2004

Added to diary 27 June 2018

I have a truly horrible dream which invariably occurs on the nights I am Lenoreless in my bed. I am attempting to stimulate the clitoris of Queen Victoria with the back of a tortoise-shell hairbrush. Her voluminous skirts swirl around her waist and my head. Her enormous cottage-cheese thighs rest heavy on my shoulders, spill out in front of my sweating face. The clanking of pounds of jewelry is heard as she shifts to offer herself at best advantage. There are odors. The Queen’s impatient breathing is thunder above me as I kneel at the throne. Time passes. Finally her voice is heard, overhead, metalled with disgust and frustration: “We are not aroused.” I am punched in the arm by a guard and flung into a pit at the bottom of which boil the figures of countless mice. I awake with a mouth full of fur. Begging for more time. A ribbed brush.

David Foster Wallace, The Broom of the System, 1987

Added to diary 27 June 2018

I had a sort of detached interest in the whole analysis scene, really. My problems were without exception very tiny.
David Foster Wallace, The Broom of the System, 1987

Added to diary 27 June 2018

Darlene Lilley, who was married and the mother of a large-headed toddler whose photograph adorned her desk and hutch at Team Δy, had, three fiscal quarters past, been subjected to unwelcome sexual advances by one of the four Senior Research Directors who liaisoned between the Field and Technical Processing teams and the upper echelons of Team Δy under Alan Britton, advances and duress more than sufficient for legal action in Schmidt’s and most of the rest of their Field Team’s opinions, which advances she had been able to deflect and defuse in an enormously skillful manner without raising any of the sort of hue and cry that could divide a firm along gender and/or political lines, and things had been allowed to cool down and blow over to such an extent that Darlene Lilley, Schmidt, and the three other members of their Field Team all now still enjoyed a productive working relationship with this dusky and pungent older Senior Research Director, who was now in fact overseeing Field research on the Mister Squishy-R.S.B. project, and Terry Schmidt was personally somewhat in awe of the self-possession and interpersonal savvy Darlene had displayed throughout the whole tense period, an awe tinged with an unwilled element of romantic attraction, and it is true that Schmidt at night in his condominium sometimes without feeling as if he could help himself masturbated to thoughts of having moist slapping intercourse with Darlene Lilley on one of the ponderous laminate conference tables of the firms they conducted statistical market research for, and this was a tertiary cause of what practicing social psychologists would call his MAM* with the board’s marker as he used a modulated tone of off-the-record confidence to tell the Focus Group about some of the more dramatic travails Reesemeyer Shannon Belt had had with establishing the product’s brand-identity and coming up with the test name Felony!, all the while envisioning in a more autonomic part of his brain Darlene delivering nothing but the standard minimal pre-GRDS instructions for her own Focus Group as she stood in her dark Hanes hosiery and the burgundy high heels she kept at work in the bottom-right cabinet of her hutch and changed out of her crosstrainers into every morning the moment she sat down and rolled her chair with small pretend whimpers of effort over to the hutch’s cabinets, sometimes (unlike Schmidt) pacing slightly in front of the whiteboard, sometimes planting one heel and rotating her foot slightly or crossing her sturdy ankles to lend her standing posture a carelessly demure aspect, sometimes taking her delicate oval eyeglasses off and not chewing on the arm but holding the glasses in such a way and in such proximity to her mouth that one got the idea she could, at any moment, put one of the frames’ arm’s plastic earguards just inside her mouth and nibble on it absently, an unconscious gesture of shyness and concentration at once.
David Foster Wallace, Mister Squishy, in Oblivion, 2004

Added to diary 28 March 2018

So the fact that I had chosen to be supposedly ‘honest’ and to diagnose myself aloud was in fact just one more move in my campaign to make sure Dr. Gustafson understood that as a patient I was uniquely acute and self-aware, and that there was very little chance he was going to see or diagnose anything about me that I wasn’t already aware of and able to turn to my own tactical advantage in terms of creating whatever image or impression of myself I wanted him to see at that moment. […]

I’ll spare you any more examples, for instance I’ll spare you the literally countless examples of my fraudulence with girls—with the ladies as they say—in just about every dating relationship I ever had, or the almost unbelievable amount of fraudulence and calculation involved in my career—not just in terms of manipulating the consumer and manipulating the client into trusting that your agency’s ideas are the best way to manipulate the consumer, but in the inter-office politics of the agency itself, like for example in sizing up what sorts of things your superiors want to believe (including the belief that they’re smarter than you and that that’s why they’re your superior) and then giving them what they want but doing it just subtly enough that they never get a chance to view you as a sycophant or yes-man (which they want to believe they do not really want) but instead see you as a tough-minded independent thinker who from time to time bows to the weight of their superior intelligence and creative firepower, etc. […]

Plus it didn’t exactly seem like a coincidence that the cancer he [Dr. Gustafson] was even then harboring was in his colon—that shameful, dirty, secret place right near the rectum—with the idea being that using your rectum or colon to secretly harbor an alien growth was a blatant symbol both of homosexuality and of the repressive belief that its open acknowledgment would equal disease and death. Dr. Gustafson and I both had a good laugh over this one after we’d both died and were outside linear time and in the process of dramatic change, you can bet on that. […]

I also inserted that there was also a good possibility that, when all was said and done, I was nothing but just another fast-track yuppie who couldn’t love, and that I found the banality of this unendurable, largely because I was evidently so hollow and insecure that I had a pathological need to see myself as somehow exceptional or outstanding at all times. Without going into much explanation or argument, I also told Fern that if her initial reaction to these reasons for my killing myself was to think that I was being much, much too hard on myself, then she should know that I was already aware that that was the most likely reaction my note would produce in her, and had probably deliberately constructed the note to at least in part prompt just that reaction, just the way my whole life I’d often said and done things designed to prompt certain people to believe that I was a genuinely outstanding person whose personal standards were so high that he was far too hard on himself, which in turn made me appear attractively modest and unsmug, and was a big reason for my popularity with so many people in all different avenues of my life—what Beverly-Elizabeth Slane had termed my ‘talent for ingratiation’—but was nevertheless basically calculated and fraudulent. I also told Fern that I loved her very much, and asked her to relay these same sentiments to Marin County for me.
David Foster Wallace, Good Old Neon, in Oblivion (2004)

Added to diary 28 March 2018

I’ll say God seems to have a kind of laid-back management style I’m not crazy about. I’m pretty much anti-death. God looks by all accounts to be pro-death. I’m not seeing how we can get together on this issue, he and I […].

David Foster Wallace, Infinite Jest, 1996

Added to diary 08 February 2018

Have you ever wondered where these particular types of unfunny T-shirts come from? the ones that say things like “HORNEY IN 2.5” or “Impeach President Clinton… AND HER HUSBAND TOO!!”? Mystery solved. They come from State Fair Expos. Right here on the main floor’s a monster-sized booth, more like an open bodega, with shirts and laminated buttons and license-plate borders, all of which, for this subphylum, Testify. This booth seems integral, somehow. The seamiest fold of the Midwestern underbelly. The Lascaux Caves of a certain rural mentality. “40 Isn’t Old… IF YOU’RE A TREE” and “The More Hair I Lose, The More Head I Get” and “Retired: No Worries, No Paycheck” and “I Fight Poverty… I WORK!!” As with New Yorker cartoons, there’s an elusive sameness about the shirts’ messages. A lot serve to I.D. the wearer as part of a certain group and then congratulate that group for its sexual dynamism—“Coon Hunters Do It All Night” and “Hairdressers Tease It Till It Stands Up” and “Save A Horse: Ride A Cowboy.” Some presume a weird kind of aggressive relation between the shirt’s wearer and its reader—“We’d Get Along Better… If You Were A BEER” and “Lead Me Not Into Temptation, I Know The Way MYSELF” and “What Part Of NO Don’t You Understand?” There’s something complex and compelling about the fact that these messages are not just uttered but worn, like they’re a badge or credential. The message compliments the wearer somehow, and the wearer in turn endorses the message by spreading it across his chest, which fact is then in further turn supposed to endorse the wearer as a person of plucky or risqué wit.
It’s also meant to cast the wearer as an Individual, the sort of person who not only makes but wears a Personal Statement. What’s depressing is that the T-shirts’ statements are not only preprinted and mass-produced, but so dumbly unfunny that they serve to place the wearer squarely in that large and unfortunate group of people who think such messages not only Individual but funny. It all gets tremendously complex and depressing. The lady running the booth’s register is dressed like a ’68 Yippie but has a hard carny face and wants to know why I’m standing here memorizing T-shirts. All I can manage to tell her is that the “HORNEY” on these “2.5 BEERS”-shirts is misspelled; and now I really feel like an East-Coast snob, laying judgments and semiotic theories on these people who ask of life only a Republican in the White House and a black velvet Elvis on the wood-grain mantel of their mobile home.

David Foster Wallace, Ticket to the fair, Harper’s July 1994

Added to diary 26 January 2018

They are sort of disingenuous, I believe, these magazine people. They say all they want is a sort of really big experiential postcard—go, plow the Caribbean in style, come back, say what you’ve seen. I have seen a lot of really big white ships. I have seen schools of little fish with fins that glow. I have seen a toupee on a thirteen-year-old boy. (The glowing fish liked to swarm between our hull and the cement of the pier whenever we docked.) I have seen the north coast of Jamaica. I have seen and smelled all 145 cats inside the Ernest Hemingway Residence in Key West FL. I now know the difference between straight Bingo and Prize-O, and what it is when a Bingo jackpot “snowballs.” I have seen camcorders that practically required a dolly; I’ve seen fluorescent luggage and fluorescent sunglasses and fluorescent pince-nez and over twenty different makes of rubber thong.
I have heard steel drums and eaten conch fritters and watched a woman in silver lamé projectile-vomit inside a glass elevator. I have pointed rhythmically at the ceiling to the 2:4 beat of the exact same disco music I hated pointing at the ceiling to in 1977. I have learned that there are actually intensities of blue beyond very, very bright blue. I have eaten more and classier food than I’ve ever eaten, and eaten this food during a week when I’ve also learned the difference between “rolling” in heavy seas and “pitching” in heavy seas. I have heard a professional comedian tell folks, without irony, “But seriously.” I have seen fuchsia pantsuits and menstrual-pink sportcoats and maroon-and-purple warm-ups and white loafers worn without socks. I have seen professional blackjack dealers so lovely they make you want to run over to their table and spend every last nickel you’ve got playing blackjack. I have heard upscale adult U.S. citizens ask the Guest Relations Desk whether snorkeling necessitates getting wet, whether the skeetshooting will be held outside, whether the crew sleeps on board, and what time the Midnight Buffet is. I now know the precise mixological difference between a Slippery Nipple and a Fuzzy Navel. I know what a Coco Loco is. I have in one week been the object of over 1500 professional smiles. I have burned and peeled twice. I have shot skeet at sea. Is this enough?

David Foster Wallace, Shipping Out, Harper’s January 1996

Added to diary 26 January 2018

# david-hume

In every system of morality, which I have hitherto met with, I have always remark’d, that the author proceeds for some time in the ordinary way of reasoning, and establishes the being of a God, or makes observations concerning human affairs; when of a sudden I am surpriz’d to find, that instead of the usual copulations of propositions, is, and is not, I meet with no proposition that is not connected with an ought, or an ought not.
This change is imperceptible; but is, however, of the last consequence. For as this ought, or ought not, expresses some new relation or affirmation, ’tis necessary that it shou’d be observ’d and explain’d; and at the same time that a reason should be given, for what seems altogether inconceivable, how this new relation can be a deduction from others, which are entirely different from it.

David Hume, A Treatise of Human Nature, Book III, Section 1, Part i

Added to diary 21 April 2018

# david-laibson

In the wake of Hurricane Katrina in the summer of 2005, much of the Gulf Coast had been pummeled by wind and inches upon inches of rain. Water was everywhere, but often undrinkable. Basic provisions we take for granted, like drinking water, weren’t easy to come by and the Federal Emergency Management Agency (FEMA) was caught flat-footed. In response to catastrophic events like a hurricane or an earthquake, the caricature of private industry is that firms will gouge customers. And sometimes this is true, but in response to Katrina, there was one unlikely hero: Walmart. In fact, the Mayor of Kenner, a suburb of New Orleans, had this to say about Walmart’s response: “. . . the only lifeline in Kenner was the Walmart stores. We didn’t have looting on a mass scale because Walmart showed up with food and water so our people could survive.” Indeed, in the three weeks after Katrina, Walmart shipped almost 2,500 truckloads of supplies to storm-damaged areas. These truckloads reached affected areas before FEMA, whose troubles responding to the storm were so great that it shipped 30,000 pounds of ice to Maine instead of Mississippi. These stories and more are in Horwitz (2009), which summarizes the divergent responses to Katrina by private industry and FEMA. How was Walmart so effective in its response?
Well, it maintains a hurricane response center of its own that rivals FEMA’s, and prior to the storm’s landfall it anticipated a need for generators, water, and food, so it effectively diverted supplies to the area. Walmart’s emergency response center was in full swing as the storm approached with 50 employees managing the response from headquarters. This sounds like the sort of response FEMA should have produced; so if that’s the job of FEMA, why did Walmart respond so heroically? Simple economics. Walmart understood that there would be an important shift of the demand curve for water, generators, and ice in response to the storm and the textbook response to such shifts is an increase in quantity supplied. Lucky for us, few are better at shipping provisions around the country than Walmart.

Daron Acemoglu, David Laibson, John List, Economics, 1st Edition, 2015

Added to diary 19 May 2018

# david-rothschild

To reiterate, in just six days, The New York Times ran as many cover stories about Hillary Clinton’s emails as they did about all policy issues combined in the 69 days leading up to the election (and that does not include the three additional articles on October 18, and November 6 and 7, or the two articles on the emails taken from John Podesta). This intense focus on the email scandal cannot be written off as inconsequential: The Comey incident and its subsequent impact on Clinton’s approval rating among undecided voters could very well have tipped the election.

Duncan Watts and David Rothschild, Don’t blame the election on fake news. Blame it on the media, Columbia Journalism Review, 5 December 2017

Analyses by Columbia Journalism Review, the Berkman Klein Center for Internet and Society at Harvard University, and the Shorenstein Center at the Harvard Kennedy School show that the Clinton email controversy received more coverage in mainstream media outlets than any other topic during the 2016 presidential election.
Wikipedia contributors, Hillary Clinton email controversy

Added to diary 30 January 2018

# douglas-adams

The real Universe arched sickeningly away beneath them. Various pretend ones flitted silently by, like mountain goats. Primal light exploded, splattering space-time as with gobbets of Jell-O. Time blossomed, matter shrank away. The highest prime number coalesced quietly in a corner and hid itself away for ever.

Douglas Adams, The Hitchhiker’s Guide to the Galaxy (1979)

Added to diary 28 March 2018

# duncan-watts

To reiterate, in just six days, The New York Times ran as many cover stories about Hillary Clinton’s emails as they did about all policy issues combined in the 69 days leading up to the election (and that does not include the three additional articles on October 18, and November 6 and 7, or the two articles on the emails taken from John Podesta). This intense focus on the email scandal cannot be written off as inconsequential: The Comey incident and its subsequent impact on Clinton’s approval rating among undecided voters could very well have tipped the election.

Duncan Watts and David Rothschild, Don’t blame the election on fake news. Blame it on the media, Columbia Journalism Review, 5 December 2017

Analyses by Columbia Journalism Review, the Berkman Klein Center for Internet and Society at Harvard University, and the Shorenstein Center at the Harvard Kennedy School show that the Clinton email controversy received more coverage in mainstream media outlets than any other topic during the 2016 presidential election.
Wikipedia contributors, Hillary Clinton email controversy

Added to diary 30 January 2018

# eliezer-yudkowsky

What capitalism is good at: If I had to name the single epistemic feat at which modern human civilization is most adequate, the peak of all human power of estimation, I would unhesitatingly reply, “Short-term relative pricing of liquid financial assets, like the price of S&P 500 stocks relative to other S&P 500 stocks over the next three months.” This is something into which human civilization puts an actual effort. Millions of dollars are offered to smart, conscientious people with physics PhDs to induce them to enter the field. These people are then offered huge additional payouts conditional on actual performance—especially outperformance relative to a baseline. Large corporations form to specialize in narrow aspects of price-tuning. They have enormous computing clusters, vast historical datasets, and competent machine learning professionals. They receive repeated news of success or failure in a fast feedback loop. The knowledge aggregation mechanism—namely, prices that equilibrate supply and demand for the financial asset—has proven to work beautifully, and acts to sum up the wisdom of all those highly motivated actors. An actor that spots a 1% systematic error in the aggregate estimate is rewarded with a billion dollars—in a process that also corrects the estimate. Barriers to entry are not zero (you can’t get the loans to make a billion-dollar corrective trade), but there are thousands of diverse intelligent actors who are all individually allowed to spot errors, correct them, and be rewarded, with no central veto.
This is certainly not perfect, but it is literally as good as it gets on modern-day Earth. […] In the thickly traded parts of the stock market, where the collective power of human civilization is truly at its strongest, I doff my hat, I put aside my pride and kneel in true humility to accept the market’s beliefs as though they were my own, knowing that any impulse I feel to second-guess and every independent thought I have to argue otherwise is nothing but my own folly. If my perceptions suggest an exploitable opportunity, then my perceptions are far more likely mistaken than the markets. That is what it feels like to look upon a civilization doing something adequately. […] Efficiency vs Inexploitability vs Adequacy: So the distinction is: Efficiency: “Microsoft’s stock price is neither too low nor too high, relative to anything you can possibly know about Microsoft’s stock price.” Inexploitability: “Some houses and housing markets are overpriced, but you can’t make a profit by short-selling them, and you’re unlikely to find any substantially underpriced houses—the market as a whole isn’t rational, but it contains participants who have money and understand housing markets as well as you do.” Adequacy: “Okay, the medical sector is a wildly crazy place where different interventions have orders-of-magnitude differences in cost-effectiveness, but at least there’s no well-known but unused way to save ten thousand lives for just ten dollars each, right? Somebody would have picked up on it! Right?!” […]

Eliezer Yudkowsky, Inadequate Equilibria, 2017

Added to diary 21 April 2018

everything Bayesians said seemed perfectly straightforward and simple, the obvious way I would do it myself; whereas the things frequentists said sounded like the elaborate, warped, mad blasphemy of dreaming Cthulhu. I didn’t choose to become a Bayesian any more than fishes choose to breathe water.
Eliezer Yudkowsky, My Bayesian Enlightenment, LessWrong, 5 October 2008

Added to diary 08 February 2018

the brain can’t successfully multiply by eight and get a larger quantity than it started with. But it ought to, normatively speaking.

Eliezer Yudkowsky, The “Intuitions” Behind “Utilitarianism”, LessWrong, 28 January 2008

Added to diary 02 February 2018

The ancient war between the Bayesians and the accursèd frequentists stretches back through decades, and I’m not going to try to recount that elder history in this blog post. But one of the central conflicts is that Bayesians expect probability theory to be… what’s the word I’m looking for? “Neat?” “Clean?” “Self-consistent?” As Jaynes says, the theorems of Bayesian probability are just that, theorems in a coherent proof system. No matter what derivations you use, in what order, the results of Bayesian probability theory should always be consistent – every theorem compatible with every other theorem. […] Math! That’s the word I was looking for. Bayesians expect probability theory to be math. That’s why we’re interested in Cox’s Theorem and its many extensions, showing that any representation of uncertainty which obeys certain constraints has to map onto probability theory. Coherent math is great, but unique math is even better. And yet… should rationality be math? It is by no means a foregone conclusion that probability should be pretty. The real world is messy – so shouldn’t you need messy reasoning to handle it? Maybe the non-Bayesian statisticians, with their vast collection of ad-hoc methods and ad-hoc justifications, are strictly more competent because they have a strictly larger toolbox. It’s nice when problems are clean, but they usually aren’t, and you have to live with that. After all, it’s a well-known fact that you can’t use Bayesian methods on many problems because the Bayesian calculation is computationally intractable. So why not let many flowers bloom? Why not have more than one tool in your toolbox?
That’s the fundamental difference in mindset. Old School statisticians thought in terms of tools, tricks to throw at particular problems. Bayesians – at least this Bayesian, though I don’t think I’m speaking only for myself – we think in terms of laws. […] No, you can’t always do the exact Bayesian calculation for a problem. Sometimes you must seek an approximation; often, indeed. This doesn’t mean that probability theory has ceased to apply, any more than your inability to calculate the aerodynamics of a 747 on an atom-by-atom basis implies that the 747 is not made out of atoms. Whatever approximation you use, it works to the extent that it approximates the ideal Bayesian calculation – and fails to the extent that it departs. Bayesianism’s coherence and uniqueness proofs cut both ways. Just as any calculation that obeys Cox’s coherency axioms (or any of the many reformulations and generalizations) must map onto probabilities, so too, anything that is not Bayesian must fail one of the coherency tests. This, in turn, opens you to punishments like Dutch-booking (accepting combinations of bets that are sure losses, or rejecting combinations of bets that are sure gains). You may not be able to compute the optimal answer. But whatever approximation you use, both its failures and successes will be explainable in terms of Bayesian probability theory. You may not know the explanation; that does not mean no explanation exists. So you want to use a linear regression, instead of doing Bayesian updates? But look to the underlying structure of the linear regression, and you see that it corresponds to picking the best point estimate given a Gaussian likelihood function and a uniform prior over the parameters. You want to use a regularized linear regression, because that works better in practice? Well, that corresponds (says the Bayesian) to having a Gaussian prior over the weights. Sometimes you can’t use Bayesian methods literally; often, indeed. 
But when you can use the exact Bayesian calculation that uses every scrap of available knowledge, you are done. You will never find a statistical method that yields a better answer. You may find a cheap approximation that works excellently nearly all the time, and it will be cheaper, but it will not be more accurate. Not unless the other method uses knowledge, perhaps in the form of disguised prior information, that you are not allowing into the Bayesian calculation; and then when you feed the prior information into the Bayesian calculation, the Bayesian calculation will again be equal or superior. When you use an Old Style ad-hoc statistical tool with an ad-hoc (but often quite interesting) justification, you never know if someone else will come up with an even more clever tool tomorrow. But when you can directly use a calculation that mirrors the Bayesian law, you’re done – like managing to put a Carnot heat engine into your car. It is, as the saying goes, “Bayes-optimal”.

Eliezer Yudkowsky, Beautiful Probability, 14 January 2008

Added to diary 15 January 2018

In Empty Labels and then Replace the Symbol with the Substance, we saw the technique of replacing a word with its definition – the example being given:

All [mortal, ~feathers, bipedal] are mortal.
Socrates is a [mortal, ~feathers, bipedal].
Therefore, Socrates is mortal.

Why, then, would you even want to have a word for “human”? Why not just say “Socrates is a mortal featherless biped”? Because it’s helpful to have shorter words for things that you encounter often. If your code for describing single properties is already efficient, then there will not be an advantage to having a special word for a conjunction – like “human” for “mortal featherless biped” – unless things that are mortal and featherless and bipedal, are found more often than the marginal probabilities would lead you to expect.
In efficient codes, word length corresponds to probability—so the code for Z1Y2 will be just as long as the code for Z1 plus the code for Y2, unless P(Z1Y2) > P(Z1)P(Y2), in which case the code for the word can be shorter than the codes for its parts. And this in turn corresponds exactly to the case where we can infer some of the properties of the thing, from seeing its other properties. It must be more likely than the default that featherless bipedal things will also be mortal. Of course the word “human” really describes many, many more properties – when you see a human-shaped entity that talks and wears clothes, you can infer whole hosts of biochemical and anatomical and cognitive facts about it. To replace the word “human” with a description of everything we know about humans would require us to spend an inordinate amount of time talking. But this is true only because a featherless talking biped is far more likely than default to be poisonable by hemlock, or have broad nails, or be overconfident. Having a word for a thing, rather than just listing its properties, is a more compact code precisely in those cases where we can infer some of those properties from the other properties. (With the exception perhaps of very primitive words, like “red”, that we would use to send an entirely uncompressed description of our sensory experiences. But by the time you encounter a bug, or even a rock, you’re dealing with nonsimple property collections, far above the primitive level.) So having a word “wiggin” for green-eyed black-haired people, is more useful than just saying “green-eyed black-haired person”, precisely when: 1. Green-eyed people are more likely than average to be black-haired (and vice versa), meaning that we can probabilistically infer green eyes from black hair or vice versa; or 2. Wiggins share other properties that can be inferred at greater-than-default probability. 
In this case we have to separately observe the green eyes and black hair; but then, after observing both these properties independently, we can probabilistically infer other properties (like a taste for ketchup). One may even consider the act of defining a word as a promise to this effect. Telling someone, “I define the word ‘wiggin’ to mean a person with green eyes and black hair”, by Gricean implication, asserts that the word “wiggin” will somehow help you make inferences / shorten your messages. If green-eyes and black hair have no greater than default probability to be found together, nor does any other property occur at greater than default probability along with them, then the word “wiggin” is a lie: The word claims that certain people are worth distinguishing as a group, but they’re not. In this case the word “wiggin” does not help describe reality more compactly—it is not defined by someone sending the shortest message—it has no role in the simplest explanation. Equivalently, the word “wiggin” will be of no help to you in doing any Bayesian inference. Even if you do not call the word a lie, it is surely an error. And the way to carve reality at its joints, is to draw your boundaries around concentrations of unusually high probability density in Thingspace.

Eliezer Yudkowsky, Mutual information and density in thingspace, 23 February 2008

And so even the labels that we use for words are not quite arbitrary. The sounds that we attach to our concepts can be better or worse, wiser or more foolish. Even apart from considerations of common usage!

Eliezer Yudkowsky, Entropy, and Short Codes, 23 February 2008

Added to diary 15 January 2018

The Foresight Institute suggests, among other sensible proposals, that the replication instructions for any nanodevice should be encrypted. Moreover, encrypted such that flipping a single bit of the encoded instructions will entirely scramble the decrypted output.
If all nanodevices produced are precise molecular copies, and moreover, any mistakes on the assembly line are not heritable because the offspring got a digital copy of the original encrypted instructions for use in making grandchildren, then your nanodevices ain’t gonna be doin’ much evolving.

Eliezer Yudkowsky, No Evolutions for Corporations or Nanodevices, 17 November 2007

Added to diary 15 January 2018

# eric-helland

The Baumol effect is easy to explain but difficult to grasp. In 1826, when Beethoven’s String Quartet No. 14 was first played, it took four people 40 minutes to produce a performance. In 2010, it still took four people 40 minutes to produce a performance. Stated differently, in the nearly 200 years between 1826 and 2010, there was no growth in string quartet labor productivity. In 1826 it took 2.66 labor hours to produce one unit of output, and it took 2.66 labor hours to produce one unit of output in 2010. Fortunately, most other sectors of the economy have experienced substantial growth in labor productivity since 1826. We can measure growth in labor productivity in the economy as a whole by looking at the growth in real wages. In 1826 the average hourly wage for a production worker was $1.14. In 2010 the average hourly wage for a production worker was $26.44, approximately 23 times higher in real (inflation-adjusted) terms. Growth in average labor productivity has a surprising implication: it makes the output of slow productivity-growth sectors (relatively) more expensive. In 1826, the average wage of $1.14 meant that the 2.66 hours needed to produce a performance of Beethoven’s String Quartet No. 14 had an opportunity cost of just $3.02. At a wage of $26.44, the 2.66 hours of labor in music production had an opportunity cost of $70.33. Thus, in 2010 it was 23 times (70.33/3.02) more expensive to produce a performance of Beethoven’s String Quartet No. 14 than in 1826.
In other words, one had to give up more other goods and services to produce a music performance in 2010 than one did in 1826. Why? Simply because in 2010, society was better at producing other goods and services than in 1826. The 23 times increase in the relative price of the string quartet is the driving force of Baumol’s cost disease. The focus on relative prices tells us that the cost disease is misnamed. The cost disease is not a disease but a blessing. To be sure, it would be better if productivity increased in all industries, but that is just to say that more is better. There is nothing negative about productivity growth, even if it is unbalanced.

Helland, Eric, and Alexander T. Tabarrok, “Why Are the Prices So Damn High?” (2019)

Added to diary 09 June 2019

# eric-zitzewitz

We consider a simple prediction market in which traders buy and sell an all-or-nothing contract (a binary option) paying $1 if a specific event occurs, and nothing otherwise. There is heterogeneity in beliefs among the trading population, and following Manski’s notation, we denote trader $j$’s belief that the event will occur as $q_j$. These beliefs are orthogonal to wealth levels $(y)$, and are drawn from a distribution, $F(q)$. Individuals are price-takers and trade so as to maximize their subjectively expected utility. Wealth is only affected by the event via the prediction market, so there is no hedging motive for trading the contract. We first consider the case where traders have log utility, and we endogenously derive their trading activity, given the price of the contract is $\pi$. […] Thus, in this simple model, market prices are equal to the mean belief among traders.

[…]

We now turn to relaxing some of our assumptions. To preview, relaxing the assumption that budgets are orthogonal to beliefs yields the intuitively plausible result that prediction market prices are a wealth-weighted average of beliefs among market traders. And second, the result that the equilibrium price is exactly equal to the (weighted) mean of beliefs reflects the fact that demand for the prediction security is linear in beliefs, which is itself a byproduct of assuming log utility. Calibrating alternative utility functions, we find that prices can systematically diverge from mean beliefs, but that this divergence is typically small. […] The extent of the deviation depends crucially on how widely dispersed beliefs are.

[…]

We start by assuming that beliefs are drawn from a uniform distribution with a range of 10 percentage points, and solve for the mapping between mean beliefs and prices implied by each of the utility functions shown in Figure 1. (We rescale beliefs outside the (0,1) range to 0 or 1.) Figure 2 shows that for moderately dispersed beliefs, prediction market prices tend to coincide fairly closely with the mean beliefs. While there is some divergence, it is typically within a percentage point, although the risk neutral model yields larger differences. […]

Figure 2

Figure 3 shows the mapping from prices to probabilities when beliefs are more disperse (in this case the standard deviation and range were doubled). As the dispersion of beliefs widens, the number of traders with extreme beliefs increases, and hence the non-linear response to the divergence between beliefs and prices is increasingly important. As such, the biases evident in Figure 2 become even more evident as the distribution of beliefs widens. Even so, for utility functions with standard levels of risk aversion, these biases are small.

Figure 3

Justin Wolfers and Eric Zitzewitz, Interpreting Prediction Market Prices as Probabilities, NBER Working Paper No. 12200, May 2006

Added to diary 20 January 2018
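The log-utility result quoted above (price equals mean belief) can be checked numerically. The sketch below is illustrative only, not the authors' code: it gives every trader equal wealth, draws beliefs from a uniform distribution, uses the closed-form log-utility position x* = y(q − π)/(π(1 − π)) in the $1 contract, and bisects for the price at which aggregate demand clears.

```python
import numpy as np

rng = np.random.default_rng(0)
beliefs = rng.uniform(0.45, 0.55, size=10_000)  # q_j, dispersed around 0.5
wealth = 1.0                                    # equal wealth for all traders

def aggregate_demand(price):
    # Log-utility demand for the $1 binary contract: x* = y (q - p) / (p (1 - p)).
    return np.sum(wealth * (beliefs - price) / (price * (1.0 - price)))

# Aggregate demand is decreasing in price, so bisect for the clearing price.
lo, hi = 0.01, 0.99
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if aggregate_demand(mid) > 0:
        lo = mid
    else:
        hi = mid

print(mid, beliefs.mean())  # clearing price coincides with the mean belief
```

With equal wealth the clearing condition reduces to the sum of (q_j − π) being zero, so the price is exactly the mean belief; letting wealth correlate with beliefs turns this into the wealth-weighted average the paper describes.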

# esther-duflo

As economists increasingly help governments design new policies and regulations, they take on an added responsibility to engage with the details of policy making and, in doing so, to adopt the mindset of a plumber. […]

There are two reasons for this need to attend to details. First, it turns out that policy makers rarely have the time or inclination to focus on them, and will tend to decide on how to address them based on hunches, without much regard for evidence. […] Second, details that we as economists might consider relatively uninteresting are in fact extraordinarily important in determining the final impact of a policy or a regulation, while some of the theoretical issues we worry about most may not be that relevant. […]

For Roth, intervening in the real world should fundamentally alter the attitude of the economist and her way of working. He sets the tone in the abstract of the paper:

Market design involves a responsibility for detail, a need to deal with all of a market’s complications, not just its principal features. Designers therefore cannot work only with the simple conceptual models used for theoretical insights into the general working of markets. Instead, market design calls for an engineering approach.

The scientist provides the general framework that guides the design. […] The engineer takes these general principles into account, but applies them to a specific situation. […] The plumber goes one step further than the engineer: she installs the machine in the real world, carefully watches what happens, and then tinkers as needed.

Esther Duflo, The Economist as Plumber, 23 January 2017

Added to diary 21 March 2018

# gojko-barjamovic

Economists build a database from 4000-year-old clay tablets, plug it into a trade model, and use it to locate lost bronze age cities.

They had clay tablets from ancient merchants, saying things like:

(I paid) 6.5 shekels (of tin) from the Town of the Kanishites to Timelkiya. I paid 2 shekels of silver and 2 shekels of tin for the hire of a donkey from Timelkiya to Hurama. From Hurama to Kaneš I paid 4.5 shekels of silver and 4.5 shekels of tin for the hire of a donkey and a packer.

This allowed them to measure the amount of trade between any two cities.

Then they constructed a theoretical model of the expected amount of trade between any two cities, as inversely proportional to the distance between the two cities. Given the data on amount of trade, and the locations of the known cities, they are able to estimate the lost locations:

As long as we have data on trade between known and lost cities, with sufficiently many known compared to lost cities, a structural gravity model is able to estimate the likely geographic coordinates of lost cities [….]

We build a simple Ricardian model of trade. Further imposing that bilateral trade frictions can be summarized by a power function of geographic distance, our model makes predictions on the number of transactions between city pairs, which is observed in our data. The model can be estimated solely on bilateral trade flows and on the geographic location of at least some cities.

Gojko Barjamovic, Thomas Chaney, Kerem A. Coşar, Ali Hortaçsu, Trade, Merchants, and the Lost Cities of the Bronze Age, 2017

Added to diary 15 January 2018
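The estimation idea in the excerpt can be illustrated with a toy version: assume trade between two cities falls off as a power of their distance, then search for the coordinates that best rationalize the observed flows. Everything below (city locations, trade flows, the distance elasticity) is synthetic, not the paper's data or method in detail.

```python
# Toy structural-gravity recovery of a "lost" city's location. Trade between
# cities is assumed proportional to distance**(-alpha); given flows between
# the lost city and known cities, a grid search finds the coordinates that
# minimize squared errors in log trade.
import math

known = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0), (5.0, -3.0)]
true_lost = (3.0, 4.0)   # location we pretend not to know
alpha = 2.0              # assumed distance elasticity of trade

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# "Observed" trade between the lost city and each known city.
trade = [dist(true_lost, c) ** (-alpha) for c in known]

def loss(candidate):
    """Squared error in log trade implied by a candidate location."""
    d = [dist(candidate, c) for c in known]
    if min(d) < 1e-9:            # candidate sits on a known city; rule out
        return float("inf")
    return sum((math.log(t) + alpha * math.log(di)) ** 2
               for t, di in zip(trade, d))

# Grid search over candidate coordinates on a 0.1-unit lattice.
best = min(((x / 10.0, y / 10.0) for x in range(101) for y in range(101)),
           key=loss)
print(best)
```

In the paper the lost-city coordinates are estimated jointly with the model's other parameters from real tablet counts; the grid search here only conveys the core identification logic, that distances implied by trade flows triangulate a location.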

# gottfried-leibniz

if controversies were to arise, there would be no more need of disputation between two philosophers than between two calculators. For it would suffice for them to take their pencils in their hands and to sit down at the abacus, and say to each other (and if they so wish also to a friend called to help): Let us calculate.

Gottfried Leibniz

Added to diary 15 January 2018

# gregory-lewis

Effective Altruists, then, know the price of everything and the value of nothing. […] Heir apparent to Bentham’s reductive credo, they aspire to prize apart the rib cage of eudaimonia to feast on its entrails of utility.

Gregory Lewis, in a comment on “Philosophical Critiques of Effective Altruism by Prof Jeff McMahan”, Effective altruism forum, 10 May 2016

Added to diary 09 February 2018

# hilary-greaves

I think what the […] the people who are concerned about the excessive sacrifice argument are likely to say here is like, “Well, in so far as you’re right about morality being this extremely demanding thing, it looks like we’re going to have reason to talk about a second thing as well, which is maybe like watered down morality or pseudo morality, and that the second thing is going to be something like […] what we actually plan to act according to or what is reasonable to ask of people, or what we’re going to issue as advice to the government given that they don’t have morally perfect motivations or something like that and then, by the way, the second thing is the thing that’s going to be more directly action guiding in practice. So yeah, you philosophers can go and have your conversation about what this abstract morality abstractly requires, but nobody’s actually going to pay any attention to it when they act. They’re going to pay attention to this other thing. So, by the way, we’ve won the practical argument.” I think there’s something to that line of response.

Hilary Greaves, 80,000 Hours Podcast, 23 October 2018

Added to diary 03 December 2018

# immanuel-kant

Der gewöhnliche Probierstein: ob etwas bloße Überredung, oder wenigstens subjektive Überzeugung, d. i. festes Glauben sei, was jemand behauptet, ist das Wetten. Öfters spricht jemand seine Sätze mit so zuversichtlichem und unlenkbarem Trotze aus, daß er alle Besorgnis des Irrtums gänzlich abgelegt zu haben scheint. Eine Wette macht ihn stutzig. Bisweilen zeigt sich, daß er zwar Überredung genug, die auf einen Dukaten an Wert geschätzt werden kann, aber nicht auf zehn, besitze. Denn den ersten wagt er noch wohl, aber bei zehn wird er allererst inne, was er vorher nicht bemerkte, daß es nämlich doch wohl möglich sei, er habe sich geirrt. Wenn man sich in Gedanken vorstellt, man solle worauf das Glück des ganzen Lebens verwetten, so schwindet unser triumphierendes Urteil gar sehr, wir werden überaus schüchtern und entdecken so allererst, daß unser Glaube so weit nicht zulange. So hat der pragmatische Glaube nur einen Grad, der nach Verschiedenheit des Interesses, das dabei im Spiele ist, groß oder auch klein sein kann.

Immanuel Kant, Kritik der reinen Vernunft, 1781

The usual touchstone, whether that which someone asserts is merely his persuasion – or at least his subjective conviction, that is, his firm belief – is betting. It often happens that someone propounds his views with such positive and uncompromising assurance that he seems to have entirely set aside all thought of possible error. A bet disconcerts him. Sometimes it turns out that he has a conviction which can be estimated at a value of one ducat, but not of ten. For he is very willing to venture one ducat, but when it is a question of ten he becomes aware, as he had not previously been, that it may very well be that he is in error. If, in a given case, we represent ourselves as staking the happiness of our whole life, the triumphant tone of our judgment is greatly abated; we become extremely diffident, and discover for the first time that our belief does not reach so far. Thus pragmatic belief always exists in some specific degree, which, according to differences in the interests at stake, may be large or may be small.

Immanuel Kant, Critique of Pure Reason, 1781

Added to diary 17 December 2018

# isaac-newton

To determine by what modes or actions light produceth in our minds the phantasm of colour is not so easie.

Isaac Newton, in a letter to Henry Oldenburg, 1672

Added to diary 21 April 2018

# jacob-lagerros

In order to get the actual motion of the planets correct, both Ptolemy and Copernicus had to bolster their models with many more epicycles, and epicycles upon epicycles, than shown in the above figure and video. Copernicus even considered introducing an epicyclepicyclet — “an epicyclet whose center was carried round by an epicycle, whose center in turn revolved on the circumference of a deferent concentric with the sun as the center of the universe”… (Complete Dictionary of Scientific Biography, 2008).

Pondering his creation, Copernicus concluded an early manuscript outlining his theory thus: “Mercury runs on seven circles in all, Venus on five, the earth on three with the moon around it on four, and finally Mars, Jupiter, and Saturn on five each. Thus 34 circles are enough to explain the whole structure of the universe and the entire ballet of the planets” (MacLachlan & Gingerich, 2005).

These inventions might appear like remarkably awkward — if not ingenious — ways of making a flawed system fit the observational data. There is however quite an elegant reason why they worked so well: they form a primitive version of Fourier analysis, a modern technique for function approximation. Thus, in the constantly expanding machinery of epicycles and epicyclets, Ptolemy and Copernicus had gotten their hands on a powerful computational tool, which would in fact have allowed them to approximate orbits of a very large number of shapes, including squares and triangles (Hanson, 1960)!

Despite these geometric acrobatics, Copernicus’ theory did not fit the available data better than Ptolemy’s. In the second half of the 16th century, renowned imperial astronomer Tycho Brahe produced the most rigorous astronomical observations to date — and found that they even fit Copernicus’ model worse than Ptolemy’s in some places (Gingerich, 1973, 1975).

Jacob Lagerros, The Copernican Revolution from the inside, 26 October 2017

Added to diary 15 January 2018

# james-henderson-burns

Dr. Priestley published his Essay on Government in 1768. He there introduced as the only reasonable and proper object of government, ‘the greatest happiness of the greatest number.’ […]

Somehow or other, shortly after its publication, a copy of this pamphlet found its way into the little circulating library belonging to a little coffee-house, called Harper’s coffee-house, attached, as it were, to Queen’s College, Oxford, and deriving, from the popularity of that college, the whole of its subsistence. It was a corner house, having one front towards the High Street, another towards a narrow lane, which on that side skirts Queen’s College, and loses itself in a lane issuing from one of the gates of New College. To this library the subscription was a shilling a quarter, or, in the University phrase, a shilling a term. Of this subscription the produce was composed of two or three newspapers, with magazines one or two, and now and then a newly-published pamphlet; a moderate sized octavo was a rare, if ever exemplified spectacle: composed partly of pamphlets, partly of magazines, half-bound together, a few dozen volumes made up this library, which formed so curious a contrast with the Bodleian Library, and those of Christ’s Church and All Souls. […]

This year, 1768, was the latest of all the years in which this pamphlet could have come into my hands. Be this as it may, it was by that pamphlet, and this phrase in it, that my principles on the subject of morality, public and private together, were determined. It was from that pamphlet and that page of it, that I drew the phrase, the words and import of which have been so widely diffused over the civilized world. At the sight of it, I cried out, as it were, in an inward ecstasy, like Archimedes on the discovery of the fundamental principle of hydrostatics, eureka!

Jeremy Bentham, “Deontology, or the Science of Morality”, The British Critic, Quarterly Theological Review, and Ecclesiastical Record, vol. 16, no. 32 (October, 1834), pp. 279-280

[T]he origins and history of the phrase and of Bentham’s use of it have been the subject of protracted scholarly debate. The seeds of uncertainty were sown by Bentham himself in confused and inconclusive recollections recorded by John Bowring. The question of origins at least was definitively resolved over thirty years ago by Robert Shackleton, in an elegant piece of research. This demonstrated that by far the likeliest source of the phrase as Bentham used it is the English translation of Beccaria’s Dei delitti e delle pene, published in 1768. That was the year in which Bentham sometimes thought, mistakenly, that he had found the phrase in a work by Joseph Priestley, and it seems likely that he read the Beccaria translation in what he later called ‘a most interesting year’ – 1769.

Burns, J. H. (2005). Happiness and utility: Jeremy Bentham’s equation. Utilitas, 17(1), 46-61.

Added to diary 17 April 2018

# jared-diamond

Among the world’s thousands of wild grass species, Blumler tabulated the 56 with the largest seeds, the cream of nature’s crop: the grass species with seeds at least 10 times heavier than the median grass species (see Table 8.1). Virtually all of them are native to Mediterranean zones or other seasonally dry environments. Furthermore, they are overwhelmingly concentrated in the Fertile Crescent or other parts of western Eurasia’s Mediterranean zone, which offered a huge selection to incipient farmers: about 32 of the world’s 56 prize wild grasses! […] In contrast, the Mediterranean zone of Chile offered only two of those species, California and southern Africa just one each, and southwestern Australia none at all. That fact alone goes a long way toward explaining the course of human history.

Jared Diamond, Guns, Germs, and Steel, Chapter 8, 1997

Added to diary 24 December 2018

# jen-banbury

The world—and, specifically, the oil and gas industry—needs commercial divers like Hovey who can go to the seabed to perform the delicate maneuvers required to put together, maintain, and disassemble offshore wells, rigs, and pipelines, everything from flipping flow valves, to tightening bolts with hydraulic jacks, to working in tight confines around a blowout preventer. Remotely operated vehicles don’t have the touch, maneuverability, or judgment for the job. And so, a solution. Experiments in the 1930s showed that, after a certain time at pressure, divers’ bodies become fully saturated with inert gas, and they can remain at that pressure indefinitely, provided they get one long decompression at the end. In 1964, naval aquanauts occupied the first Sea Lab—a metal-encased living quarters lowered to a depth of 192 feet. The aquanauts could move effortlessly between their pressurized underwater home and the surrounding water, and they demonstrated the enormous commercial potential of saturation diving. It soon became apparent that it would be easier and cheaper to monitor and support the divers if the pressurized living quarters weren’t themselves at the bottom of the sea. At this moment, all around the world, there are commercial divers living at pressure inside saturation systems (mostly on ships, occasionally on rigs or barges), and commuting to and from their jobsites in pressurized diving bells. They can each put in solid six-hour working days on the bottom.

Jen Banbury, The Weird, Dangerous, Isolated Life of the Saturation Diver, Atlas Obscura, 9 May 2018

Added to diary 13 May 2018

# jennifer-grudnik

In the inspection-time task, a person looks at two vertical lines of unequal length and is asked to identify which of the two is longer. Inspection time is the length of time of stimulus presentation each individual needs in order to discriminate which of the two lines is the longest. Some research suggests that more-intelligent individuals are able to discriminate the lengths of the lines in shorter inspection times.

Robert J. Sternberg, Encyclopædia Britannica, Intelligence

This study replicated and extended Kranzler and Jensen’s [Intelligence 13 (1989) 329] meta-analysis of the relationship between inspection time (IT) and intelligence (IQ). […] meta-analyses were conducted on obtained correlations (r’s) between IT and general IQ. […] For the total sample (N>4100), the r was −.51 after correction for artifactual effects (−.30 prior to correction).

Jennifer L. Grudnik, John H. Kranzler, “Meta-analysis of the relationship between intelligence and inspection time”, Intelligence, Volume 29, Issue 6, November–December 2001, Pages 523-535, https://doi.org/10.1016/S0160-2896(01)00078-2

Neuropsychologists and cognitive researchers often need quick estimates of global cognitive functioning [i.e., intelligence quotient (IQ)]. […]

The current article examined 11 proxy measures to determine their level of agreement with WAIS-III FSIQ across the entire sample. [(Wechsler Adult Intelligence Scale [Third Edition] full scale IQ test)] […]

Measures evaluated for this study included the Ward-7ST short form developed by Ward and modified for the WAIS-III by Pilgrim et al. (1999), the NAART, the SILS, ITBS, the Barona and Crawford demographic regression formulae, and the five OPIE3 hybrids combining demographic and WAIS-III subtest information. The final estimate examined was the ITBS (Hoover et al., 2003), a nationally recognized standardized school achievement test.

The Pearson correlation and confidence interval between WAIS-III FSIQ and each proxy measure are shown in Table 2. Correlations ranged from r = .25 for the Barona estimate to r = .95 for the Ward-7ST short form.

The performance of the proxy measures across the different cognitive ability groups was examined next. […]

Above-Average IQ Group

[…] The Ward-7ST estimate was the only proxy to correlate above r = .70 for the high ability group. […]

The most important finding of this article is how poorly the IQ proxy measures performed at the tails of the IQ distribution. The proxy measures consistently overestimated the IQ of low-functioning individuals and underestimated the IQs of high-functioning individuals.

Spinks, Mckirgan, Arndt, Caspers, Yucuis and Pfalzgraf (2009). IQ estimate smackdown: comparing IQ proxy measures to the WAIS-III. Journal of the International Neuropsychological Society. 15. 590-6. doi:10.1017/S1355617709090766.

Added to diary 27 June 2018
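The jump from −.30 to −.51 in the Grudnik–Kranzler abstract comes from correcting the observed correlation for statistical artifacts. Their meta-analysis corrects for several artifacts at once; the flavor can be conveyed by the classical Spearman correction for measurement unreliability alone. The reliability values below are hypothetical, chosen purely for illustration, not taken from the paper.

```python
# Classical (Spearman) correction for attenuation: an observed correlation
# is divided by the square root of the product of the two measures'
# reliabilities, estimating the correlation between true scores. This is
# only one of the corrections a psychometric meta-analysis applies.
import math

def disattenuate(r_observed, reliability_x, reliability_y):
    """Estimate the true-score correlation from an observed correlation."""
    return r_observed / math.sqrt(reliability_x * reliability_y)

r_obs = -0.30                                # observed IT-IQ r, from the quote
r_true = disattenuate(r_obs, 0.70, 0.50)     # hypothetical reliabilities
print(f"corrected r = {r_true:.2f}")
```

The correction always increases the magnitude of the correlation, since unreliable measurement can only wash association out, never manufacture it.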

# jeremy-bentham

Dr. Priestley published his Essay on Government in 1768. He there introduced as the only reasonable and proper object of government, ‘the greatest happiness of the greatest number.’ […]

Somehow or other, shortly after its publication, a copy of this pamphlet found its way into the little circulating library belonging to a little coffee-house, called Harper’s coffee-house, attached, as it were, to Queen’s College, Oxford, and deriving, from the popularity of that college, the whole of its subsistence. It was a corner house, having one front towards the High Street, another towards a narrow lane, which on that side skirts Queen’s College, and loses itself in a lane issuing from one of the gates of New College. To this library the subscription was a shilling a quarter, or, in the University phrase, a shilling a term. Of this subscription the produce was composed of two or three newspapers, with magazines one or two, and now and then a newly-published pamphlet; a moderate sized octavo was a rare, if ever exemplified spectacle: composed partly of pamphlets, partly of magazines, half-bound together, a few dozen volumes made up this library, which formed so curious a contrast with the Bodleian Library, and those of Christ’s Church and All Souls. […]

This year, 1768, was the latest of all the years in which this pamphlet could have come into my hands. Be this as it may, it was by that pamphlet, and this phrase in it, that my principles on the subject of morality, public and private together, were determined. It was from that pamphlet and that page of it, that I drew the phrase, the words and import of which have been so widely diffused over the civilized world. At the sight of it, I cried out, as it were, in an inward ecstasy, like Archimedes on the discovery of the fundamental principle of hydrostatics, eureka!

Jeremy Bentham, “Deontology, or the Science of Morality”, The British Critic, Quarterly Theological Review, and Ecclesiastical Record, vol. 16, no. 32 (October, 1834), pp. 279-280

[T]he origins and history of the phrase and of Bentham’s use of it have been the subject of protracted scholarly debate. The seeds of uncertainty were sown by Bentham himself in confused and inconclusive recollections recorded by John Bowring. The question of origins at least was definitively resolved over thirty years ago by Robert Shackleton, in an elegant piece of research. This demonstrated that by far the likeliest source of the phrase as Bentham used it is the English translation of Beccaria’s Dei delitti e delle pene, published in 1768. That was the year in which Bentham sometimes thought, mistakenly, that he had found the phrase in a work by Joseph Priestley, and it seems likely that he read the Beccaria translation in what he later called ‘a most interesting year’ – 1769.

Burns, J. H. (2005). Happiness and utility: Jeremy Bentham’s equation. Utilitas, 17(1), 46-61.

Added to diary 17 April 2018

That which has no existence cannot be destroyed — that which cannot be destroyed cannot require anything to preserve it from destruction. Natural rights is simple nonsense: natural and imprescriptible rights, rhetorical nonsense — nonsense upon stilts.

Jeremy Bentham, Anarchical Fallacies: Being an Examination of the Declarations of Rights Issued During the French Revolution, 1843

Added to diary 26 January 2018

The French have already discovered that the blackness of the skin is no reason why a human being should be abandoned without redress to the caprice of a tormentor. It may one day come to be recognized that the number of legs, the villosity of the skin, or the termination of the os sacrum are reasons equally insufficient for abandoning a sensitive being to the same fate. What else is it that should trace the insuperable line? Is it the faculty of reason, or perhaps the faculty of discourse? But a full-grown horse or dog is beyond comparison a more rational, as well as a more conversable animal, than an infant of a day or a week or even a month, old. But suppose they were otherwise, what would it avail? The question is not Can they reason?, nor Can they talk?, but Can they suffer?

Jeremy Bentham, An Introduction to the Principles of Morals and Legislation, Chapter 17, 1789

Added to diary 26 January 2018

# joe-simmons

It’s a famous study. Give a mug to a random subset of a group of people. Then ask those who got the mug (the sellers) to tell you the lowest price they’d sell the mug for, and ask those who didn’t get the mug (the buyers) to tell you the highest price they’d pay for the mug. You’ll find that sellers’ minimum selling prices exceed buyers’ maximum buying prices by a factor of 2 or 3 (.pdf).

This famous finding, known as the endowment effect, is presumed to have a famous cause: loss aversion. Just as loss aversion maintains that people dislike losses more than they like gains, the endowment effect seems to show that people put a higher price on losing a good than on gaining it. The endowment effect seems to perfectly follow from loss aversion.

But a 2012 paper by Ray Weaver and Shane Frederick convincingly shows that loss aversion is not the cause of the endowment effect (.pdf). Instead, “the endowment effect is often better understood as the reluctance to trade on unfavorable terms,” in other words “as an aversion to bad deals.” [1] […]

Weaver and Frederick’s theory is simple: Selling and buying prices reflect two concerns. First, people don’t want to sell the mug for less, or buy the mug for more, than their own value of it. Second, they don’t want to sell the mug for less, or buy the mug for more, than the market price. This is because people dislike feeling like a sucker. [2]

To see how this produces the endowment effect, imagine you are willing to pay $1 for the mug and you believe it usually sells for $3. As a buyer, you won’t pay more than $1, because you don’t want to pay more than it’s worth to you. But as a seller, you don’t want to sell for as little as $1, because you’ll feel like a chump selling it for much less than it is worth. [3] Thus, because there’s a gap between people’s perception of the market price and their valuation of the mug, there’ll be a large gap between selling ($3) and buying ($1) prices.

Weaver and Frederick predict that the endowment effect will arise whenever market prices differ from valuations.

However, when market prices are not different from valuations, you shouldn’t see the endowment effect. For example, if people value a mug at $2 and also think that its market price is $2, then both buyers and sellers will price it at $2.

And this is what Weaver and Frederick find. Repeatedly. There is no endowment effect when valuations are equal to perceived market prices.

Wow.

Joe Simmons, “A Better Explanation Of The Endowment Effect”, 27 May 2015, Data Colada

Added to diary 21 June 2018

# john-kennedy-toole

‘Employers sense in me a denial of their values.’ He rolled over onto his back. ‘They fear me. I suspect that they can see that I am forced to function in a century I loathe.’

John Kennedy Toole, A Confederacy of Dunces, 1994

Apparently I lack some particular perversion which today’s employer is seeking.

John Kennedy Toole, A Confederacy of Dunces, 1994

Added to diary 17 January 2018

I am at the moment writing a lengthy indictment against our century. When my brain begins to reel from my literary labors, I make an occasional cheese dip.

John Kennedy Toole, A Confederacy of Dunces, 1994

Added to diary 17 January 2018

# john-kranzler

In the inspection-time task, a person looks at two vertical lines of unequal length and is asked to identify which of the two is longer. Inspection time is the length of time of stimulus presentation each individual needs in order to discriminate which of the two lines is the longest. Some research suggests that more-intelligent individuals are able to discriminate the lengths of the lines in shorter inspection times.

Robert J. Sternberg, Encyclopædia Britannica, Intelligence

This study replicated and extended Kranzler and Jensen’s [Intelligence 13 (1989) 329] meta-analysis of the relationship between inspection time (IT) and intelligence (IQ). […] meta-analyses were conducted on obtained correlations (r’s) between IT and general IQ.
[…] For the total sample (N>4100), the r was −.51 after correction for artifactual effects (−.30 prior to correction).

Jennifer L. Grudnik, John H. Kranzler, “Meta-analysis of the relationship between intelligence and inspection time”, Intelligence, Volume 29, Issue 6, November–December 2001, Pages 523-535, https://doi.org/10.1016/S0160-2896(01)00078-2

Neuropsychologists and cognitive researchers often need quick estimates of global cognitive functioning [i.e., intelligence quotient (IQ)]. […]

The current article examined 11 proxy measures to determine their level of agreement with WAIS-III FSIQ across the entire sample. [(Wechsler Adult Intelligence Scale [Third Edition] full scale IQ test)] […]

Measures evaluated for this study included the Ward-7ST short form developed by Ward and modified for the WAIS-III by Pilgrim et al. (1999), the NAART, the SILS, ITBS, the Barona and Crawford demographic regression formulae, and the five OPIE3 hybrids combining demographic and WAIS-III subtest information. The final estimate examined was the ITBS (Hoover et al., 2003), a nationally recognized standardized school achievement test.

The Pearson correlation and confidence interval between WAIS-III FSIQ and each proxy measure are shown in Table 2. Correlations ranged from r = .25 for the Barona estimate to r = .95 for the Ward-7ST short form.

The performance of the proxy measures across the different cognitive ability groups was examined next. […]

Above-Average IQ Group

[…] The Ward-7ST estimate was the only proxy to correlate above r = .70 for the high ability group. […]

The most important finding of this article is how poorly the IQ proxy measures performed at the tails of the IQ distribution. The proxy measures consistently overestimated the IQ of low-functioning individuals and underestimated the IQs of high-functioning individuals.

Spinks, Mckirgan, Arndt, Caspers, Yucuis and Pfalzgraf (2009). IQ estimate smackdown: comparing IQ proxy measures to the WAIS-III.
Journal of the International Neuropsychological Society. 15. 590-6. doi:10.1017/S1355617709090766.

Added to diary 27 June 2018

# john-list

In the wake of Hurricane Katrina in the summer of 2005, much of the Gulf Coast had been pummeled by wind and inches upon inches of rain. Water was everywhere, but often undrinkable. Basic provisions we take for granted, like drinking water, weren’t easy to come by and the Federal Emergency Management Agency (FEMA) was caught flat-footed. In response to catastrophic events like a hurricane or an earthquake, the caricature of private industry is that firms will gouge customers. And sometimes this is true, but in response to Katrina, there was one unlikely hero: Walmart. In fact, the Mayor of Kenner, a suburb of New Orleans, had this to say about Walmart’s response: “. . . the only lifeline in Kenner was the Walmart stores. We didn’t have looting on a mass scale because Walmart showed up with food and water so our people could survive.” Indeed, in the three weeks after Katrina, Walmart shipped almost 2,500 truckloads of supplies to storm-damaged areas. These truckloads reached affected areas before FEMA, whose troubles responding to the storm were so great that it shipped 30,000 pounds of ice to Maine instead of Mississippi. These stories and more are in Horwitz (2009), which summarizes the divergent responses to Katrina by private industry and FEMA.

How was Walmart so effective in its response? Well, it maintains a hurricane response center of its own that rivals FEMA’s, and prior to the storm’s landfall it anticipated a need for generators, water, and food, so it effectively diverted supplies to the area. Walmart’s emergency response center was in full swing as the storm approached with 50 employees managing the response from headquarters. This sounds like the sort of response FEMA should have produced; so if that’s the job of FEMA, why did Walmart respond so heroically? Simple economics.
Walmart understood that there would be an important shift of the demand curve for water, generators, and ice in response to the storm and the textbook response to such shifts is an increase in quantity supplied. Lucky for us, few are better at shipping provisions around the country than Walmart.

Daron Acemoglu, David Laibson, John List, Economics, 1st Edition, 2015

Added to diary 19 May 2018

# jonathan-haidt

They concluded that most of the bizarre and depressing research findings make perfect sense once you see reasoning as having evolved not to help us find truth but to help us engage in arguments, persuasion, and manipulation in the context of discussions with other people. As they put it, “skilled arguers … are not after the truth but after arguments supporting their views.” This explains why the confirmation bias is so powerful, and so ineradicable. How hard could it be to teach students to look on the other side, to look for evidence against their favored view? Yet, in fact, it’s very hard, and nobody has yet found a way to do it. It’s hard because the confirmation bias is a built-in feature (of an argumentative mind), not a bug that can be removed (from a platonic mind).

Jonathan Haidt, The Righteous Mind, 2012

Added to diary 27 June 2018

I was talking with a taxi driver who told me that he had just become a father. I asked him if he planned on staying in the United States or returning to his native Jordan. I’ll never forget his response: “We will return to Jordan because I never want to hear my son say ‘fuck you’ to me.”

Jonathan Haidt, The Righteous Mind, 2012

Added to diary 27 June 2018

For the first billion years or so of life, the only organisms were prokaryotic cells (such as bacteria). Each was a solo operation, competing with others and reproducing copies of itself.
But then, around 2 billion years ago, two bacteria somehow joined together inside a single membrane, which explains why mitochondria have their own DNA, unrelated to the DNA in the nucleus. These are the two-person rowboats in my example. Cells that had internal organelles could reap the benefits of cooperation and the division of labor (see Adam Smith). There was no longer any competition between these organelles, for they could reproduce only when the entire cell reproduced, so it was “one for all, all for one.” Life on Earth underwent what biologists call a “major transition.” Natural selection went on as it always had, but now there was a radically new kind of creature to be selected. There was a new kind of vehicle by which selfish genes could replicate themselves. Single-celled eukaryotes were wildly successful and spread throughout the oceans.

A few hundred million years later, some of these eukaryotes developed a novel adaptation: they stayed together after cell division to form multicellular organisms in which every cell had exactly the same genes. These are the three-boat septuplets in my example. Once again, competition is suppressed (because each cell can only reproduce if the organism reproduces, via its sperm or egg cells). A group of cells becomes an individual, able to divide labor among the cells (which specialize into limbs and organs). A powerful new kind of vehicle appears, and in a short span of time the world is covered with plants, animals, and fungi. It’s another major transition.

Major transitions are rare. The biologists John Maynard Smith and Eörs Szathmáry count just eight clear examples over the last 4 billion years (the last of which is human societies). But these transitions are among the most important events in biological history, and they are examples of multilevel selection at work.
It’s the same story over and over again: Whenever a way is found to suppress free riding so that individual units can cooperate, work as a team, and divide labor, selection at the lower level becomes less important, selection at the higher level becomes more powerful, and that higher-level selection favors the most cohesive superorganisms. (A superorganism is an organism made out of smaller organisms.) As these superorganisms proliferate, they begin to compete with each other, and to evolve for greater success in that competition. This competition among superorganisms is one form of group selection. There is variation among the groups, and the fittest groups pass on their traits to future generations of groups. Major transitions may be rare, but when they happen, the Earth often changes. Just look at what happened more than 100 million years ago when some wasps developed the trick of dividing labor between a queen (who lays all the eggs) and several kinds of workers who maintain the nest and bring back food to share. This trick was discovered by the early hymenoptera (members of the order that includes wasps, which gave rise to bees and ants) and it was discovered independently several dozen other times (by the ancestors of termites, naked mole rats, and some species of shrimp, aphids, beetles, and spiders). In each case, the free rider problem was surmounted and selfish genes began to craft relatively selfless group members who together constituted a supremely selfish group. These groups were a new kind of vehicle: a hive or colony of close genetic relatives, which functioned as a unit (e.g., in foraging and fighting) and reproduced as a unit. These are the motorboating sisters in my example, taking advantage of technological innovations and mechanical engineering that had never before existed. It was another transition. 
Another kind of group began to function as though it were a single organism, and the genes that got to ride around in colonies crushed the genes that couldn’t “get it together” and rode around in the bodies of more selfish and solitary insects. The colonial insects represent just 2 percent of all insect species, but in a short period of time they claimed the best feeding and breeding sites for themselves, pushed their competitors to marginal grounds, and changed most of the Earth’s terrestrial ecosystems (for example, by enabling the evolution of flowering plants, which need pollinators). Now they’re the majority, by weight, of all insects on Earth. What about human beings? Since ancient times, people have likened human societies to beehives. But is this just a loose analogy? If you map the queen of the hive onto the queen or king of a city-state, then yes, it’s loose. A hive or colony has no ruler, no boss. The queen is just the ovary. But if we simply ask whether humans went through the same evolutionary process as bees—a major transition from selfish individualism to groupish hives that prosper when they find a way to suppress free riding—then the analogy gets much tighter. Many animals are social: they live in groups, flocks, or herds. But only a few animals have crossed the threshold and become ultrasocial, which means that they live in very large groups that have some internal structure, enabling them to reap the benefits of the division of labor. Beehives and ant nests, with their separate castes of soldiers, scouts, and nursery attendants, are examples of ultrasociality, and so are human societies. One of the key features that has helped all the nonhuman ultrasocials to cross over appears to be the need to defend a shared nest. The biologists Bert Hölldobler and E. O.
Wilson summarize the recent finding that ultrasociality (also called “eusociality”) is found among a few species of shrimp, aphids, thrips, and beetles, as well as among wasps, bees, ants, and termites: In all the known [species that] display the earliest stages of eusociality, their behavior protects a persistent, defensible resource from predators, parasites, or competitors. The resource is invariably a nest plus dependable food within foraging range of the nest inhabitants. Hölldobler and Wilson give supporting roles to two other factors: the need to feed offspring over an extended period (which gives an advantage to species that can recruit siblings or males to help out Mom) and intergroup conflict. All three of these factors applied to those first early wasps camped out together in defensible naturally occurring nests (such as holes in trees). From that point on, the most cooperative groups got to keep the best nesting sites, which they then modified in increasingly elaborate ways to make themselves even more productive and more protected. Their descendants include the honeybees we know today, whose hives have been described as “a factory inside a fortress.” Those same three factors applied to human beings. Like bees, our ancestors were (1) territorial creatures with a fondness for defensible nests (such as caves) who (2) gave birth to needy offspring that required enormous amounts of care, which had to be given while (3) the group was under threat from neighboring groups. For hundreds of thousands of years, therefore, conditions were in place that pulled for the evolution of ultrasociality, and as a result, we are the only ultrasocial primate. The human lineage may have started off acting very much like chimps, but by the time our ancestors started walking out of Africa, they had become at least a little bit like bees. 
And much later, when some groups began planting crops and orchards, and then building granaries, storage sheds, fenced pastures, and permanent homes, they had an even steadier food supply that had to be defended even more vigorously. Like bees, humans began building ever more elaborate nests, and in just a few thousand years, a new kind of vehicle appeared on Earth—the city-state, able to raise walls and armies. City-states and, later, empires spread rapidly across Eurasia, North Africa, and Mesoamerica, changing many of the Earth’s ecosystems and allowing the total tonnage of human beings to shoot up from insignificance at the start of the Holocene (around twelve thousand years ago) to world domination today. As the colonial insects did to the other insects, we have pushed all other mammals to the margins, to extinction, or to servitude. The analogy to bees is not shallow or loose. Despite their many differences, human civilizations and beehives are both products of major transitions in evolutionary history.

Jonathan Haidt, The Righteous Mind, 2012

Added to diary 27 June 2018

# judea-pearl

Let us now examine how the surgery interpretation resolves Russell’s enigma concerning the clash between the directionality of causal relations and the symmetry of physical equations. The equations of physics are indeed symmetrical, but when we compare the phrases “A causes B” versus “B causes A,” we are not talking about a single set of equations. Rather, we are comparing two world models, represented by two different sets of equations: one in which the equation for A is surgically removed; the other where the equation for B is removed. Russell would probably stop us at this point and ask: “How can you talk about two world models when in fact there is only one world model, given by all the equations of physics put together?” The answer is: yes.
If you wish to include the entire universe in the model, causality disappears because interventions disappear – the manipulator and the manipulated lose their distinction.

Judea Pearl, Causality, 2009, Epilogue, p. 419

Added to diary 17 January 2018

Remark: The exclusion of unmeasured variables from the definition of statistical parameters is devised to prevent one from hiding causal assumptions under the guise of latent variables. Such constructions, if permitted, would qualify any quantity as statistical and would thus obscure the important distinction between quantities that can be estimated from statistical data alone, and those that require additional assumptions beyond the data. […] The sharp distinction between statistical and causal concepts can be translated into a useful principle: behind every causal claim there must lie some causal assumption that is not discernable from the joint distribution and, hence, not testable in observational studies. Such assumptions are usually provided by humans, resting on expert judgment. Thus, the way humans organize and communicate experiential knowledge becomes an integral part of the study, for it determines the veracity of the judgments experts are requested to articulate. Another ramification of this causal–statistical distinction is that any mathematical approach to causal analysis must acquire new notation. The vocabulary of probability calculus, with its powerful operators of expectation, conditionalization, and marginalization, is defined strictly in terms of distribution functions and is therefore insufficient for expressing causal assumptions or causal claims. To illustrate, the syntax of probability calculus does not permit us to express the simple fact that “symptoms do not cause diseases,” let alone draw mathematical conclusions from such facts.
All we can say is that two events are dependent – meaning that if we find one, we can expect to encounter the other, but we cannot distinguish statistical dependence, quantified by the conditional probability P(disease | symptom), from causal dependence, for which we have no expression in standard probability calculus. The preceding two requirements: (1) to commence causal analysis with untested, judgmental assumptions, and (2) to extend the syntax of probability calculus, constitute the two main obstacles to the acceptance of causal analysis among professionals with traditional training in statistics (Pearl 2003c, also sections 11.1.1 and 11.6.4). This book helps overcome the two barriers through an effective and friendly notational system based on symbiosis of graphical and algebraic approaches.

Judea Pearl, Causality, 2009, Section 1.4, p. 39ff

Added to diary 16 January 2018

# justin-wolfers

We consider a simple prediction market in which traders buy and sell an all-or-nothing contract (a binary option) paying $1 if a specific event occurs, and nothing otherwise. There is heterogeneity in beliefs among the trading population, and following Manski’s notation, we denote trader $j$’s belief that the event will occur as $q_j$. These beliefs are orthogonal to wealth levels $(y)$, and are drawn from a distribution, $F(q)$. Individuals are price-takers and trade so as to maximize their subjectively expected utility. Wealth is only affected by the event via the prediction market, so there is no hedging motive for trading the contract. We first consider the case where traders have log utility, and we endogenously derive their trading activity, given the price of the contract is $\pi$. […] Thus, in this simple model, market prices are equal to the mean belief among traders.

[…]

We now turn to relaxing some of our assumptions. To preview, relaxing the assumption that budgets are orthogonal to beliefs yields the intuitively plausible result that prediction market prices are a wealth-weighted average of beliefs among market traders. And second, the result that the equilibrium price is exactly equal to the (weighted) mean of beliefs reflects the fact that demand for the prediction security is linear in beliefs, which is itself a byproduct of assuming log utility. Calibrating alternative utility functions, we find that prices can systematically diverge from mean beliefs, but that this divergence is typically small. […] The extent of the deviation depends crucially on how widely dispersed beliefs are.
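The log-utility benchmark quoted above can be checked numerically. With log utility, a price-taking trader's optimal position is $x(q) = y(q - \pi)/(\pi(1 - \pi))$, which is linear in beliefs, so market clearing forces the price to the mean belief. A minimal sketch; the belief values, wealth level, and helper names are illustrative, not from the paper:

```python
from statistics import mean

def demand(q, price, wealth=1.0):
    # Log-utility trader: maximize q*log(w + x*(1-p)) + (1-q)*log(w - x*p).
    # The first-order condition yields this demand, linear in the belief q.
    return wealth * (q - price) / (price * (1 - price))

def clearing_price(beliefs, lo=1e-6, hi=1 - 1e-6, iters=100):
    # Net demand is decreasing in price, so bisect to find where it is zero.
    for _ in range(iters):
        mid = (lo + hi) / 2
        if sum(demand(q, mid) for q in beliefs) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

beliefs = [0.35, 0.40, 0.45, 0.50, 0.55, 0.60, 0.65]
price = clearing_price(beliefs)
# With equal wealth and log utility, the clearing price equals the mean belief.
assert abs(price - mean(beliefs)) < 1e-6
```

Giving traders unequal `wealth` in the clearing condition reproduces the relaxation discussed above: the price becomes the wealth-weighted mean of beliefs.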

[…]

We start by assuming that beliefs are drawn from a uniform distribution with a range of 10 percentage points, and solve for the mapping between mean beliefs and prices implied by each of the utility functions shown in Figure 1. (We rescale beliefs outside the (0,1) range to 0 or 1.) Figure 2 shows that for moderately dispersed beliefs, prediction market prices tend to coincide fairly closely with the mean beliefs. While there is some divergence, it is typically within a percentage point, although the risk neutral model yields larger differences. […]

Figure 2

Figure 3 shows the mapping from prices to probabilities when beliefs are more dispersed (in this case the standard deviation and range were doubled). As the dispersion of beliefs widens, the number of traders with extreme beliefs increases, and hence the non-linear response to the divergence between beliefs and prices is increasingly important. As such, the biases evident in Figure 2 become even more evident as the distribution of beliefs widens. Even so, for utility functions with standard levels of risk aversion, these biases are small.

Figure 3

Justin Wolfers and Eric Zitzewitz, Interpreting Prediction Market Prices as Probabilities, NBER Working Paper No. 12200, May 2006

Added to diary 20 January 2018

# karl-marx

Die klassische Ökonomie liebte es von jeher, das gesellschaftliche Kapital als eine fixe Größe von fixem Wirkungsgrad aufzufassen. Aber das Vorurteil ward erst zum Dogma befestigt durch den Urphilister Jeremias Bentham, dies nüchtern pedantische, schwatzlederne Orakel des gemeinen Bürgerverstandes des 19. Jahrhunderts. […] Jeremias Bentham ist ein rein englisches Phänomen. Selbst unsern Philosophen Christian Wolf nicht ausgenommen, hat zu keiner Zeit und in keinem Land der hausbackenste Gemeinplatz sich jemals so selbstgefällig breitgemacht.

Karl Marx, Das Kapital, Bd. I, Verwandlung von Mehrwert in Kapital

Classical economy always loved to conceive social capital as a fixed magnitude of a fixed degree of efficiency. But this prejudice was first established as a dogma by the arch-Philistine, Jeremy Bentham, that insipid, pedantic, leather-tongued oracle of the ordinary bourgeois intelligence of the 19th century. […] Bentham is a purely English phenomenon. Not even excepting our philosopher, Christian Wolff, in no time and in no country has the most homespun commonplace ever strutted about in so self-satisfied a way.

Karl Marx, Capital, Book I, Conversion of Surplus-Value into Capital.

Added to diary 18 April 2018

# karthik-muralidharan

Public employment programs play a major role in the anti-poverty strategy of many developing countries. Besides the direct wages provided to the poor, such programs are likely to affect their welfare by changing broader labor market outcomes including wages and private employment. These general equilibrium effects may accentuate or attenuate the direct benefits of the program, but have been difficult to estimate credibly. We estimate the general equilibrium effects of a technological reform that improved the implementation quality of India’s public employment scheme on the earnings of the rural poor, using a large-scale experiment which randomized treatment across sub-districts of 60,000 people. We find that this reform had a large impact on the earnings of low-income households, and that these gains were overwhelmingly driven by higher private-sector earnings (90%) as opposed to earnings directly from the program (10%). These earnings gains reflect a 5.7% increase in market wages for rural unskilled labor, and a similar increase in reservation wages. We do not find evidence of distortions in factor allocation, including labor supply, migration, and land use. Our results highlight the importance of accounting for general equilibrium effects in evaluating programs, and also illustrate the feasibility of using large-scale experiments to study such effects.

Karthik Muralidharan, Paul Niehaus, and Sandip Sukhtankar, General Equilibrium Effects of (Improving) Public Employment Programs: Experimental Evidence from India, 2017

the cool thing is it mattered – study helped convince gov’t not to scrap the program, putting $100Ms annually into hands of the poor

@paulfniehaus, Twitter

Added to diary 15 January 2018

# kathryn-schulz

Zimmerman was still downstairs when he heard her scream. He sprinted up to join her, and the two of them stood in the doorway, aghast. Their bedroom walls were crawling with insects—not dozens of them but hundreds upon hundreds. Stone knew what they were, because she’d seen a few around the house earlier that year and eventually posted a picture of one on Facebook and asked what it was. That’s a stinkbug, a chorus of people had told her—specifically, a brown marmorated stinkbug. Huh, Stone had thought at the time. Never heard of them. Now they were covering every visible surface of her bedroom. “It was like a horror movie,” Stone recalled. She and Zimmerman fetched two brooms and started sweeping down the walls. Pre-stinkbug crisis, the couple had been unwinding after work (she is an actress, comedian, and horse trainer; he is a horticulturist), and were notably underdressed, in tank tops and boxers, for undertaking a full-scale extermination. The stinkbugs, attracted to warmth, kept thwacking into their bodies as they worked. Stone and Zimmerman didn’t dare kill them—the stink for which stinkbugs are named is released when you crush them—so they periodically threw the accumulated heaps back outside, only to realize that, every time they opened the doors to do so, more stinkbugs flew in. […] The defining ugliness of a stinkbug, however, is its stink.
Olfactory defense mechanisms are not uncommon in nature: wolverines, anteaters, and polecats all have scent glands that produce an odor rivalling that of a skunk; bombardier beetles, when threatened, emit a foul-smelling chemical hot enough to burn human skin; vultures keep predators at bay by vomiting up the most recent bit of carrion they ate; honey badgers achieve the same effect by turning their anal pouch inside out. All these creatures produce a smell worse than the stinkbug’s, but none of them do so in your home. Slightly less noxious but vastly more pervasive, the smell of the brown marmorated stinkbug is often likened to that of cilantro, chiefly because the same chemical is present in both. In reality, stinkbugs smell like cilantro only in the way that rancid cilantro-mutton stew smells like cilantro, which is to say, they do not. […] What makes the brown marmorated stinkbug so impressively omnivorous is also what makes it a bug. Technically speaking, bugs are not synonymous with insects but are a subset of them: those which possess mouthparts that pierce and suck (as opposed to, say, caterpillars and termites, whose mouths are built, like ours, to chew). Yet even among those insects which share its basic physiology, the stinkbug is an outlier; Michael Raupp, an entomologist at the University of Maryland, described its host range as “huge, huge, wildly huge. You’re right up there now with the big guys, with gypsy moths and Japanese beetles.” […] [A]s it turns out, the brown marmorated stinkbug is exceptionally hard to kill with pesticides. Peter Jentsch, an entomologist with Cornell University’s Hudson Valley research laboratory, calls it the Hummer of insects: a highly armored creature built to maximize its defensive capabilities. Its relatively long legs keep it perched above the surface of its food, which limits its exposure to pesticide applications. 
[…] A class of pesticides known as pyrethroids, which are used to control native stinkbugs, initially appeared to work just as well on the brown marmorated kind–until a day or two later, when more than a third of the ostensibly dead bugs rose up, Lazarus-like, and calmly resumed the business of demolition. […] Once it settles down for the season, it enters a state known as diapause–a kind of insect hibernation, during which its metabolism slows to near-moribund conditions. It cannot mate or reproduce, it does not need to eat, and although it can still both crawl and fly, it performs each activity slowly and poorly. […] It is also thanks to diapause that stinkbugs, indoors, seem inordinately graceless and impossibly dumb. But, as we all now know, being graceless and dumb is no obstacle to being powerful and horrifying. […] Unlike household pests such as ants and fruit flies, they are not particularly drawn to food and drink; then again, as equal-opportunity invaders they aren’t particularly not drawn to them, either. This has predictable but unfortunate consequences. One poor soul spooned up a stinkbug that had blended into her granola, putting her off fruit-and-nut cereals for life. Another discovered too late that a stinkbug had percolated in her coffeemaker, along with her morning brew. A third removed a turkey from the oven on Thanksgiving Day and discovered a cooked stinkbug at the bottom of the roasting pan. Other people have reported accidentally ingesting stinkbugs in, among other things, salads, berries, raisin bran, applesauce, and chili. By all accounts, the bugs release their stink upon being crunched, and taste pretty much the way they smell. 
[…] Raupp, who has been studying non-native species for forty-one years, called its arrival on our shores “one of the most productive incidents in the history of invasive pests in the United States.” Because the stinkbug is, as he put it, “magnificent and dastardly,” it has attracted an almost unprecedented level of scientific attention. It has spawned multimillion-dollar grants, dozens of master’s degrees and Ph.D.s, and a huge collaborative partnership that includes the federal government, land-grant colleges, Ivy League universities, extension programs, environmental organizations, trade groups, small farmers, and agribusiness. “From a research perspective,” Raupp said, “this was and continues to be one of the major drivers in the history of entomology in the United States.”

Kathryn Schulz, Home Invasion, Annals of Ecology, The New Yorker Magazine, 12 March 2018

Added to diary 18 March 2018

# kendrick-lamar

So I was takin’ a walk the other day
And I seen a woman—a blind woman
Pacin’ up and down the sidewalk
She seemed to be a bit frustrated
As if she had dropped somethin’ and
Havin’ a hard time findin’ it
So after watchin’ her struggle for a while
I decide to go over and lend a helping hand, you know?
“Hello ma’am, can I be of any assistance?
It seems to me that you have lost something
I would like to help you find it.”
She replied: “Oh yes, you have lost something
You’ve lost… your life.”

Kendrick Lamar, “BLOOD.”, 14 April 2017

Added to diary 19 May 2018

# kerem-cosar

Economists build a database from 4000-year-old clay tablets, plug it into a trade model, and use it to locate lost bronze age cities. They had clay tablets from ancient merchants, saying things like:

(I paid) 6.5 shekels (of tin) from the Town of the Kanishites to Timelkiya. I paid 2 shekels of silver and 2 shekels of tin for the hire of a donkey from Timelkiya to Hurama. From Hurama to Kaneš I paid 4.5 shekels of silver and 4.5 shekels of tin for the hire of a donkey and a packer.
This allowed them to measure the amount of trade between any two cities. Then they constructed a theoretical model of the expected amount of trade between any two cities, as inversely proportional to the distance between the two cities. Given the data on amount of trade, and the locations of the known cities, they are able to estimate the lost locations:

As long as we have data on trade between known and lost cities, with sufficiently many known compared to lost cities, a structural gravity model is able to estimate the likely geographic coordinates of lost cities […] We build a simple Ricardian model of trade. Further imposing that bilateral trade frictions can be summarized by a power function of geographic distance, our model makes predictions on the number of transactions between city pairs, which is observed in our data. The model can be estimated solely on bilateral trade flows and on the geographic location of at least some cities.

Gojko Barjamovic, Thomas Chaney, Kerem A. Coşar, Ali Hortaçsu, Trade, Merchants, and the Lost Cities of the Bronze Age, 2017

Added to diary 15 January 2018

# kevin-simler

No matter how fast the economy grows, there remains a limited supply of sex and social status […].

Robin Hanson and Kevin Simler, The Elephant in the Brain, 2017

Added to diary 26 June 2018

[T]here’s a very real sense in which we are the Press Secretaries within our minds. In other words, the parts of the mind that we identify with, the parts we think of as our conscious selves (“I,” “myself,” “my conscious ego”), are the ones responsible for strategically spinning the truth for an external audience. […] Body language also facilitates discretion by being less quotable to third parties, relative to spoken language. If Peter had explicitly told a colleague, “I want to get Jim fired,” the colleague could easily turn around and relay Peter’s agenda to others in the office.
Similarly, if Peter had asked his flirting partner out for a drink, word might get back to his wife—in which case, bad news for Peter. […] [S]peaking functions in part as an act of showing off. Speakers strive to impress their audience by consistently delivering impressive remarks. This explains how speakers foot the bill for the costs of speaking we discussed earlier: they’re compensated not in-kind, by receiving information reciprocally, but rather by raising their social value in the eyes (and ears) of their listeners. […] Participants evaluate each other not just as trading partners, but also as potential allies. Speakers are eager to impress listeners by saying new and useful things, but the facts themselves can be secondary. Instead, it’s more important for speakers to demonstrate that they have abilities that are attractive in an ally. […] But why do speakers need to be relevant in conversation? If speakers deliver high-quality information, why should listeners care whether the information is related to the current topic? A plausible answer is that it’s simply too easy to rattle off memorized trivia. You can recite random facts from the encyclopedia until you’re blue in the face, but that does little to advertise your generic facility with information. Similarly, when you meet someone for the first time, you’re more eager to sniff each other out for this generic skill, rather than to exchange the most important information each of you has gathered to this point in your lives. In other words, listeners generally prefer speakers who can impress them wherever a conversation happens to lead, rather than speakers who steer conversations to specific topics where they already know what to say. […] In fact, patients show surprisingly little interest in private information on medical quality. 
For example, patients who would soon undergo a dangerous surgery (with a few percent chance of death) were offered private information on the (risk-adjusted) rates at which patients died from that surgery with individual surgeons and hospitals in their area. These rates were large and varied by a factor of three. However, only 8 percent of these patients were willing to spend even $50 to learn these death rates. Similarly, when the government published risk-adjusted hospital death rates between 1986 and 1992, hospitals with twice the risk-adjusted death rates saw their admissions fall by only 0.8 percent. In contrast, a single high-profile news story about an untoward death at a hospital resulted in a 9 percent drop in patient admissions at that hospital. […]

When John F. Kennedy described the space race with his famous speech in 1962, he dressed up the nation’s ambition in a suitably prosocial motive. “We set sail on this new sea,” he told the crowd, “because there is new knowledge to be gained, and new rights to be won, and they must be won and used for the progress of all people.” Everyone, of course, knew the subtext: “We need to beat the Russians!” In the end, our motives were less important than what we managed to achieve by them. We may be competitive social animals, self-interested and self-deceived, but we cooperated our way to the god-damned moon.

Robin Hanson and Kevin Simler, The Elephant in the Brain, 2017

Added to diary 26 June 2018

# kristin-caspers

In the inspection-time task, a person looks at two vertical lines of unequal length and is asked to identify which of the two is longer. Inspection time is the length of time of stimulus presentation each individual needs in order to discriminate which of the two lines is the longest. Some research suggests that more-intelligent individuals are able to discriminate the lengths of the lines in shorter inspection times.

Robert J. Sternberg, Encyclopædia Britannica, Intelligence

This study replicated and extended Kranzler and Jensen’s [Intelligence 13 (1989) 329] meta-analysis of the relationship between inspection time (IT) and intelligence (IQ). […] meta-analyses were conducted on obtained correlations (r’s) between IT and general IQ. […] For the total sample (N>4100), the r was −.51 after correction for artifactual effects (−.30 prior to correction).

Jennifer L. Grudnik, John H. Kranzler, “Meta-analysis of the relationship between intelligence and inspection time”, Intelligence, Volume 29, Issue 6, November–December 2001, Pages 523-535, https://doi.org/10.1016/S0160-2896(01)00078-2
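The move from r = −.30 to −.51 quoted above comes from correcting the observed correlation for statistical artifacts such as measurement unreliability. A minimal sketch of one such correction, Spearman's classic attenuation formula; the reliability values below are hypothetical placeholders, and the meta-analysis corrects for more artifacts than unreliability alone:

```python
import math

def disattenuate(r_obs, rel_x, rel_y):
    # Spearman's correction for attenuation: divide the observed
    # correlation by the geometric mean of the two measures' reliabilities.
    return r_obs / math.sqrt(rel_x * rel_y)

# Hypothetical reliabilities for the inspection-time task and the IQ battery.
r_corrected = disattenuate(-0.30, 0.70, 0.50)
# The correction always moves a nonzero correlation away from zero.
assert abs(r_corrected) > 0.30
```

With perfectly reliable measures (both reliabilities equal to 1) the correction leaves the observed correlation unchanged, which is why noisier measures imply a larger gap between observed and corrected values.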

Neuropsychologists and cognitive researchers often need quick estimates of global cognitive functioning [i.e., intelligence quotient (IQ)]. […]

The current article examined 11 proxy measures to determine their level of agreement with WAIS-III FSIQ across the entire sample. [(Wechsler Adult Intelligence Scale [Third Edition] full scale IQ test)] […]

Measures evaluated for this study included the Ward-7ST short form developed by Ward and modified for the WAIS-III by Pilgrim et al. (1999), the NAART, the SILS, ITBS, the Barona and Crawford demographic regression formulae, and the five OPIE3 hybrids combining demographic and WAIS-III subtest information. The final estimate examined was the ITBS (Hoover et al., 2003), a nationally recognized standardized school achievement test.

The Pearson correlation and confidence interval between WAIS-III FSIQ and each proxy measure are shown in Table 2. Correlations ranged from r = .25 for the Barona estimate to r = .95 for the Ward-7ST short form.

The performance of the proxy measures across the different cognitive ability groups was examined next. […]

Above-Average IQ Group

[…] The Ward-7ST estimate was the only proxy to correlate above r = .70 for the high ability group. […]

The most important finding of this article is how poorly the IQ proxy measures performed at the tails of the IQ distribution. The proxy measures consistently overestimated the IQ of low-functioning individuals and underestimated the IQs of high-functioning individuals.

Spinks, McKirgan, Arndt, Caspers, Yucuis and Pfalzgraf (2009). IQ estimate smackdown: comparing IQ proxy measures to the WAIS-III. Journal of the International Neuropsychological Society, 15, 590–596. doi:10.1017/S1355617709090766.

Added to diary 27 June 2018

# larissa-macfarquhar

“We have to go further back, to 2005. I’m in Warri, in Delta State, I’m working as a doctor, and my mom and I are having a fight. She’s saying, You’re stagnating, you read medicine and you haven’t gone further, you could do better! I was happy, I was in this quiet place becoming a provincial doctor, but in Nigeria that is a lack of ambition, so my mom was angry. She showed me a photograph in a magazine of a young woman with beads in her hair, and she said, Look at this small girl, she has written a book of horticulture, about flowers—you could do something like that. She didn’t care what I did, really, she just wanted me to do more. So she told me, Write books! Don’t just sit there dishing out Tylenol. I said O.K. So I got a computer and started writing.”

Eghosa Imasuen was twenty-eight. He was living near his parents, in a small city some two hundred and fifty miles southeast of Lagos. He read a lot, mostly thrillers and science fiction, pulp paperbacks he bought from secondhand bookshops for a dollar or less. “Literature to me was recommended reading in school, which was Chinua Achebe. ‘Things Fall Apart,’ ‘Arrow of God,’ ‘Things Fall Apart,’ ‘Arrow of God,’ ‘Things Fall Apart,’ ‘Arrow of God.’ I tried to read Ben Okri once—I couldn’t get past page 10. […]

When Chimamanda read Barack Obama’s memoir and learned how his father had deserted his white American wife, whom he had married despite already having a wife in Kenya, she judged the father less harshly than Obama did. “It’s easy to understand it as deceitful,” she says. “But I don’t see it that way. To be an African man of that time, to have this privilege of being educated, often by people in your home town contributing money to pay part of your school fees—not only do you owe them money, you owe them in an emotional way, because you’re a shining star for them, you’re theirs. Then you go off and fall in love with somebody who would not be acceptable to them, and you feel torn. Often, the village wins. And so, reading Obama’s book as a person who was familiar with stories of that sort, part of me wanted to say, It’s not that he didn’t love you.” […]

She had always imagined that she would marry someone flamboyantly unfamiliar—she pictured herself shocking the family by bringing home “a spiky-haired Mongolian-Sri-Lankan-Rwandan”—but the man she ended up marrying, in 2009, was almost comically suitable: a Nigerian doctor who practiced in America, whose father was a doctor and a friend of her parents, and whose sister was her sister’s close friend. Before they had a baby, she spent about half the year in Nigeria, and her husband would join her when he could. But her husband doesn’t want to be apart from the baby for too long, so now she spends less time in Nigeria. “One of the perils of a feminist marriage is that the man actually wants to be there,” she says. “He is so present and he does every damn thing! And the child adores him. I swear to God, sometimes I look at her and say, I carried you for nine months, my breasts went down because of you, my belly is slack because of you, and now Papa comes home and you run off and ignore me. Really?”

In America, they live in a big, new house in a suburb of Baltimore, on a cul-de-sac alongside four other similar houses arranged in a semicircle. In “Americanah,” she describes Princeton as having no smell, and her neighborhood has no smell, either. It is calm, spacious, bland, empty—the opposite of Lagos. If she looks out the window, she sees nothing. She doesn’t know many people in Maryland, and doesn’t want to. She can go out and people don’t recognize her. It’s a good place to work.

In Nigeria, when a woman in her family had a baby, all of her female relatives came to help and she lay in bed like a dying queen. She loved the idea of that in some ways, but when she had her baby, in Maryland, she instructed her mother not to come for a month. She realized afterward that she had internalized what she took to be an American notion, that having help with a newborn was something to be slightly ashamed of. You were supposed to do everything on your own, or else you weren’t properly bonding, or suffering enough, or something like that. […]

Her husband told her that her father had been kidnapped, and she screamed, then vomited, then started to cry. Her father had been in a car driving from Nsukka to Abba, but he had not arrived. When her mother tried to call him, his phone was switched off, as was the phone of his driver. Two hours later, her mother received a call from his phone: the kidnapper told her, Madam, we have him, and hung up. Her mother had not called Chimamanda to tell her the news, fearing she would have a miscarriage.

She pulled herself together and started making phone calls. She called the governor of Anambra, her home state. She called the American consul-general in Lagos, because her father was an American citizen, through his two elder daughters, who were born in Berkeley. The American consul sent a Nigerian-American F.B.I. agent, a kidnapping expert, to her mother’s house; he told her mother what to say when the kidnappers called back. Chimamanda called the house to talk to the F.B.I. agent. He told her he was a big fan and had read all her books. Later, she would find this funny.

There were no demands until the next day. This was the usual method: kidnappers delayed, so that you worked yourself up into a panic. The next day, they called and demanded five million naira—around fourteen thousand dollars—and told her mother that if she told the police they would kill him. They didn’t call for another day. On the third day, they demanded ten million naira. There were laws against taking out too much money at once, but kidnappings were common enough that the banks made an exception.

She was terrified that her father was dead. When the kidnappers called her mother, her mother had asked to hear her husband’s voice, but the man on the line refused. Her father was diabetic and didn’t have his medicine with him. The F.B.I. man told her mother to forge an emotional connection with the kidnapper, so she called him “my dear son,” and told him she was an old, old lady, and begged him for mercy. The family made a plan to drop off the money. The kidnappers knew all about them: they said that Okey or a particular son-in-law could go, but no one else. Okey drove to a point on the highway near Nsukka, then, as instructed, set off on a motorcycle taxi for the designated meeting place, carrying ten million naira in a sack. Nobody knew if he would be seen again. They had heard that sometimes a family member would bring the money, only to find that the victim was already dead, and then be killed himself.

Okey rode on the back of the motorcycle, talking to the kidnapper on his phone. The motorcycle driver asked where they were going, there was nothing around here. Okey said to him, Just keep driving. When they entered a forest, the kidnapper told Okey to stop. The kidnapper told him not to look to the right or left, just keep walking, then drop the bag. Okey obeyed; the kidnapper on the phone told him to leave. Back at the house, the family held their phones, willing them to ring and afraid that they would ring. Then her father was delivered. […]

Her child is two. Soon she will have to go to school and become part of the world, and this brings up several quandaries that Chimamanda has postponed thinking about. She recently wrote a short manual on rearing a child—“Dear Ijeawele, or A Feminist Manifesto in Fifteen Suggestions”—but although she is now a published authority on the subject, and holds fully formed opinions on questions such as how gender stereotypes imprison boys as well as girls, she finds that when one descends from principles to logistics things become complicated. She cannot create a child in the way that she can create a character, of course, but she can choose the setting and the language of her daughter’s childhood, which is already to choose one set of possible selves over another.

She wants to raise her child in Nigeria, because she wants her to be protected as she herself was protected, growing up there: not knowing she is black. Someday she will talk to her about what it means to be black, but not yet. She wants her daughter to be in a place where race as she has encountered it in America does not exist.

Even as a privileged Americanah, she found that arriving at an American airport was often jarring—a reminder that she was once again black and foreign. And it wasn’t just the white customs officers who hassled her. “There is a certain kind of black American that deeply resents an African whom they think of as privileged,” she says. “Privileged Nigerians especially. My husband and I have got to the airport and they’ve said to us, You’re Nigerian, I bet you have twenty-five thousand dollars in your bag, let’s see it.”

Her neighborhood in Maryland is more diverse than most, but it’s still America. She moved into her current house just before the 2016 election, and when, the morning after Trump won, she began reading about post-election vandalism in Baltimore, and about how someone had spray-painted “nigger” on a black woman’s car, and how Trump had been elected not by the white working class after all but by suburbanites, she started to panic. She became convinced that her new neighbors had guns and were going to shoot them because they were black and supported Clinton. All day, she refused to leave the house. Then the doorbell rang, and it was the neighbors bearing welcome gifts, and they turned out to be a Japanese couple, a Bangladeshi couple, a white-black couple, and a lefty white couple. She was so relieved that she almost cried.

“There aren’t enough middle-class black folks to go around,” Bill said. “Lots of liberal white folks are looking for black friends.”

[…]

On the other hand, raising her daughter in Nigeria would mean that she would likely learn much sooner, and more definitively than she would in America, that she was a girl. She doesn’t want her to know that too early, either. Of course, there was sexism in America as well, but nobody was going to say to her in an American school, You! Go to the girls’ line. In “We Should All Be Feminists,” she told the story of her ambition, when she was nine, to be class monitor, because the monitor was empowered to patrol the classroom, holding a cane, and write down the names of noisemakers. Told that the child who scored the highest mark on a test would become monitor, she concentrated hard and attained the highest mark, only to be told by the teacher that the monitor had to be a boy. The boy who got the second-highest mark duly took up the post, although he was unsuited for its responsibilities. “The boy was a sweet, gentle soul who had no interest in patrolling the class with a cane,” she said, “whereas I was full of ambition to do so.” Should her daughter grow up cherishing similar ambitions, she did not want them thwarted.

Larissa MacFarquhar, “Writing Home: Chimamanda Ngozi Adichie Comes to Terms with Global Fame”, New Yorker Magazine, June 4 & 11, 2018 Issue

Added to diary 11 June 2018

# louis-ck

Flying is the worst one, because people come back from flights, and they tell you their story…. They’re like, “It was the worst day of my life…. We get on the plane and they made us sit there on the runway for forty minutes.”… Oh really, then what happened next? Did you fly through the air, incredibly, like a bird? Did you soar into the clouds, impossibly? Did you partake in the miracle of human flight, and then land softly on giant tires that you couldn’t even conceive how they fuckin’ put air in them?

Louis C.K., quoted in Pinker, Enlightenment Now

Added to diary 21 April 2018

# lowell-mckirgan

In the inspection-time task, a person looks at two vertical lines of unequal length and is asked to identify which of the two is longer. Inspection time is the length of time of stimulus presentation each individual needs in order to discriminate which of the two lines is the longer. Some research suggests that more-intelligent individuals are able to discriminate the lengths of the lines in shorter inspection times.

Robert J. Sternberg, Encyclopædia Britannica, Intelligence

This study replicated and extended Kranzler and Jensen’s [Intelligence 13 (1989) 329] meta-analysis of the relationship between inspection time (IT) and intelligence (IQ). […] meta-analyses were conducted on obtained correlations (r’s) between IT and general IQ. […] For the total sample (N>4100), the r was −.51 after correction for artifactual effects (−.30 prior to correction).

Jennifer L. Grudnik, John H. Kranzler, “Meta-analysis of the relationship between intelligence and inspection time”, Intelligence, Volume 29, Issue 6, November–December 2001, Pages 523-535, https://doi.org/10.1016/S0160-2896(01)00078-2
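The jump from the raw correlation of −.30 to the corrected −.51 is the kind of adjustment psychometric meta-analysis makes for measurement artifacts. The Grudnik–Kranzler correction covers several artifacts at once (sampling error, unreliability, range restriction); the sketch below illustrates only the classic Spearman correction for attenuation, and the reliability values are hypothetical, chosen purely to show how a −.30 observed correlation can imply a true-score correlation of roughly −.51.

```python
import math

def disattenuate(r_obs: float, rel_x: float, rel_y: float) -> float:
    """Spearman's correction for attenuation: estimate the true-score
    correlation from an observed correlation and the reliabilities
    of the two measures."""
    return r_obs / math.sqrt(rel_x * rel_y)

# Hypothetical reliabilities for the inspection-time task and the IQ
# battery (illustrative only; not the values used in the meta-analysis).
r_corrected = disattenuate(-0.30, 0.50, 0.70)
print(round(r_corrected, 2))  # -0.51
```

Because the observed correlation is divided by a quantity less than one, the corrected estimate is always larger in magnitude; how much larger depends entirely on the assumed reliabilities.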

Neuropsychologists and cognitive researchers often need quick estimates of global cognitive functioning [i.e., intelligence quotient (IQ)]. […]

The current article examined 11 proxy measures to determine their level of agreement with WAIS-III FSIQ [Wechsler Adult Intelligence Scale, Third Edition, Full Scale IQ] across the entire sample. […]

Measures evaluated for this study included the Ward-7ST short form developed by Ward and modified for the WAIS-III by Pilgrim et al. (1999), the NAART, the SILS, the Barona and Crawford demographic regression formulae, and the five OPIE3 hybrids combining demographic and WAIS-III subtest information. The final estimate examined was the ITBS (Hoover et al., 2003), a nationally recognized standardized school achievement test.

The Pearson correlation and confidence interval between WAIS-III FSIQ and each proxy measure are shown in Table 2. Correlations ranged from r = .25 for the Barona estimate to r = .95 for the Ward-7ST short form.

The performance of the proxy measures across the different cognitive ability groups was examined next. […]

Above-Average IQ Group

[…] The Ward-7ST estimate was the only proxy to correlate above r = .70 for the high ability group. […]

The most important finding of this article is how poorly the IQ proxy measures performed at the tails of the IQ distribution. The proxy measures consistently overestimated the IQs of low-functioning individuals and underestimated the IQs of high-functioning individuals.

Spinks, McKirgan, Arndt, Caspers, Yucuis and Pfalzgraf (2009). "IQ estimate smackdown: comparing IQ proxy measures to the WAIS-III". Journal of the International Neuropsychological Society, 15, 590–596. doi:10.1017/S1355617709090766.

Added to diary 27 June 2018

# ludwig-wittgenstein

Denn die philosophischen Probleme entstehen, wenn die Sprache feiert.

Ludwig Wittgenstein, Philosophische Untersuchungen, 38, 1953

For philosophical problems arise when language goes on holiday.

Ludwig Wittgenstein, Philosophical Investigations, 38, 1953

Aber welches sind die einfachen Bestandteile, aus denen sich die Realität zusammensetzt? - Was sind die einfachen Bestandteile eines Sessels? - Die Stücke Holz, aus denen er zusammengefügt ist? Oder die Moleküle, oder die Atome? - »Einfach« heißt: nicht zusammengesetzt. Und da kommt es darauf an: in welchem Sinne ›zusammengesetzt‹? Es hat gar keinen Sinn von den ›einfachen Bestandteilen des Sessels schlechtweg‹ zu reden. […] Auf die philosophische Frage: »Ist das Gesichtsbild dieses Baumes zusammengesetzt, und welches sind seine Bestandteile?« ist die richtige Antwort: »Das kommt drauf an, was du unter ›zusammengesetzt‹ verstehst.« (Und das ist natürlich keine Beantwortung, sondern eine Zurückweisung der Frage.)

Ludwig Wittgenstein, Philosophische Untersuchungen, 47, 1953

But what are the simple constituent parts of which reality is composed?—What are the simple constituent parts of a chair?—The bits of wood of which it is made? Or the molecules, or the atoms?— “Simple” means: not composite. And here the point is: in what sense ‘composite’? It makes no sense at all to speak absolutely of the ‘simple parts of a chair’. […] To the philosophical question: “Is the visual image of this tree composite, and what are its component parts?” the correct answer is: “That depends on what you understand by ‘composite’.” (And that is of course not an answer but a rejection of the question.)

Ludwig Wittgenstein, Philosophical Investigations, 47, 1953

Betrachte z.B. einmal die Vorgänge, die wir »Spiele« nennen. Ich meine Brettspiele, Kartenspiele, Ballspiel, Kampfspiele, usw. Was ist allen diesen gemeinsam? - Sag nicht: »Es muß ihnen etwas gemeinsam sein, sonst hießen sie nicht ›Spiele‹ « - sondern schau, ob ihnen allen etwas gemeinsam ist.

Ludwig Wittgenstein, Philosophische Untersuchungen, 66, 1953

Consider for example the proceedings that we call “games”. I mean board-games, card-games, ball-games, Olympic games, and so on. What is common to them all? – Don’t say: “There must be something common, or they would not be called ‘games’ “-but look and see whether there is anything common to all.

Ludwig Wittgenstein, Philosophical Investigations, 66, 1953

Diese [philosophischen Probleme] sind freilich keine empirischen, sondern sie werden durch eine Einsicht in das Arbeiten unserer Sprache gelöst, und zwar so, daß dieses erkannt wird: entgegen einem Trieb, es mißzuverstehen. Diese Probleme werden gelöst, nicht durch Beibringen neuer Erfahrung, sondern durch Zusammenstellung des längst Bekannten. Die Philosophie ist ein Kampf gegen die Verhexung unsres Verstandes durch die Mittel unserer Sprache.

Ludwig Wittgenstein, Philosophische Untersuchungen, 109, 1953

These [philosophical problems] are, of course, not empirical problems; they are solved, rather, by looking into the workings of our language, and that in such a way as to make us recognize those workings: in despite of an urge to misunderstand them. The problems are solved, not by giving new information, but by arranging what we have always known. Philosophy is a battle against the bewitchment of our intelligence by means of language

Ludwig Wittgenstein, Philosophical Investigations, 109, 1953

Fragen wir uns: Warum empfinden wir einen grammatischen Witz als tief? (Und das ist ja die philosophische Tiefe.)

Ludwig Wittgenstein, Philosophische Untersuchungen, 111, 1953

Let us ask ourselves: why do we feel a grammatical joke to be deep? (And that is what the depth of philosophy is.)

Ludwig Wittgenstein, Philosophical Investigations, 111, 1953

»Die Zahl meiner Freunde ist n und n² + 2n + 2 = 0.« Hat dieser Satz Sinn? Es ist ihm unmittelbar nicht anzusehen. Man sieht an diesem Beispiel, wie es zugehen kann, daß etwas aussieht wie ein Satz, den wir verstehen, was doch keinen Sinn ergibt.

Ludwig Wittgenstein, Philosophische Untersuchungen, 513, 1953

Or: “I have n friends and n² + 2n + 2 = 0”. Does this sentence make sense? This cannot be seen immediately. This example shews how it is that something can look like a sentence which we understand, and yet yield no sense.

Ludwig Wittgenstein, Philosophical Investigations, 513, 1953

Added to diary 19 January 2018

Die Sprache verkleidet den Gedanken. Und zwar so, daß man nach der äußeren Form des Kleides, nicht auf die Form des bekleideten Gedankens schließen kann; weil die äußere Form des Kleides nach ganz anderen Zwecken gebildet ist als danach, die Form des Körpers erkennen zu lassen.
Die stillschweigenden Abmachungen zum Verständnis der Umgangssprache sind enorm kompliziert.

Ludwig Wittgenstein, Tractatus Logico-Philosophicus, 4.002, 1921

Language disguises thought. So much so, that from the outward form of the clothing it is impossible to infer the form of the thought beneath it, because the outward form of the clothing is not designed to reveal the form of the body, but for entirely different purposes.
The tacit conventions on which the understanding of everyday language depends are enormously complicated.

Ludwig Wittgenstein, Tractatus Logico-Philosophicus, 4.002, 1921 [Translation: Pears/McGuinness]

Added to diary 19 January 2018

Denn die philosophischen Probleme entstehen, wenn die Sprache feiert.

Ludwig Wittgenstein, Philosophische Untersuchungen, 38, 1953

For philosophical problems arise when language goes on holiday.

Ludwig Wittgenstein, Philosophical Investigations, 38, 1953

Added to diary 15 January 2018

# marcus-hutter

Natural Turing Machines. The final issue is the choice of universal Turing machine to be used as the reference machine. The problem is that there is still subjectivity involved in this choice, since what is simple on one Turing machine may not be on another. More formally, it can be shown that for any arbitrarily complex string $x$ as measured against the UTM $U$ there is another UTM $U'$ for which $x$ has Kolmogorov complexity $1$. This result seems to undermine the entire concept of a universal simplicity measure, but it is more of a philosophical nuisance which only occurs in specifically designed pathological examples. The Turing machine $U'$ would have to be absurdly biased towards the string $x$, which would require previous knowledge of $x$. The analogy here would be to hard-code some arbitrarily long, complex number into the hardware of a computer system, which is clearly not a natural design. To deal with this case we make the soft assumption that the reference machine is natural, in the sense that no such specific biases exist. Unfortunately there is no rigorous definition of natural, but it is possible to argue for a reasonable and intuitive definition in this context.

Hutter and Rathmanner 2011, Section 5.9 “Andrey Kolmogorov”
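The hard-coding trick described above can be made concrete with a toy model. Nothing below is a real universal Turing machine: the two interpreters are stand-ins for reference machines, `TARGET` is an arbitrary string playing the role of $x$, and `complexity` is a brute-force analogue of Kolmogorov complexity (length of the shortest binary program producing a given output). Under the generic machine every string is its own shortest description; under the biased machine, `TARGET` is wired in and is reproduced by the one-symbol program `"1"`.

```python
from itertools import product

TARGET = "1011001110001111"  # plays the role of the "complex" string x

def interp_generic(program: str) -> str:
    # Generic reference machine: a program denotes itself, so the
    # shortest description of TARGET is TARGET itself (length 16).
    return program

def interp_biased(program: str) -> str:
    # Pathologically biased machine: TARGET is hard-coded, so the
    # one-symbol program "1" suffices to produce it.
    if program == "1":
        return TARGET
    return "0" + program  # every other program yields some other output

def complexity(interp, x: str, max_len: int = 16) -> int:
    # Length of the shortest binary program that makes `interp` output x.
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            if interp("".join(bits)) == x:
                return n
    raise ValueError("no program found up to max_len")

print(complexity(interp_generic, TARGET))  # 16
print(complexity(interp_biased, TARGET))   # 1
```

The biased interpreter only "wins" because `TARGET` was copied into its definition beforehand, which is exactly the previous-knowledge-of-$x$ objection in the passage: such a machine is not a natural design.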

In section 2.4 we saw that Solomonoff’s prior is invariant under both reparametrization and regrouping, up to a multiplicative constant. But there is another form of language dependence, namely the choice of a universal Turing machine.

There are three principal responses to the threat of language dependence. First, one could accept it flat out, and admit that no language is better than any other. Second, one could admit that there is language dependence but argue that some languages are better than others. Third, one could deny language dependence, and try to show that there isn’t any.

For a defender of Solomonoff’s prior, I believe the second option is the most promising. If you accept language dependence flat out, why introduce universal Turing machines, incomputable functions, and other needlessly complicated things? And the third option is not available: there isn’t any way of getting around the fact that Solomonoff’s prior depends on the choice of universal Turing machine. Thus, we shall somehow try to limit the blow of the language dependence that is inherent to the framework. Williamson (2010) defends the use of a particular language by saying that an agent’s language gives her some information about the world she lives in. In the present framework, a similar response could go as follows. First, we identify binary strings with propositions or sensory observations in the way outlined in the previous section. Second, we pick a UTM so that the terms that exist in a particular agent’s language get low Kolmogorov complexity.

If the above proposal is unconvincing, the damage may be limited somewhat by the following result. Let $K_U(x)$ be the Kolmogorov complexity of $x$ relative to universal Turing machine $U$, and let $K_T(x)$ be the Kolmogorov complexity of $x$ relative to Turing machine $T$ (which needn’t be universal). We have that $K_U(x) \leq K_T(x) + C_{TU}$. That is: the difference in Kolmogorov complexity relative to $U$ and relative to $T$ is bounded by a constant $C_{TU}$ that depends only on these Turing machines, and not on $x$. (See Li and Vitanyi (1997, p. 104) for a proof.) This is somewhat reassuring. It means that no other Turing machine can outperform $U$ infinitely often by more than a fixed constant. But we want to achieve more than that. If one picks a UTM that is biased enough to start with, strings that intuitively seem complex will get a very low Kolmogorov complexity. As we have seen, for any string $x$ it is always possible to find a UTM $T$ such that $K_T(x) = 1$. If $K_T(x) = 1$, the corresponding Solomonoff prior $M_T(x)$ will be at least $0.5$. So for any binary string, it is always possible to find a UTM such that we assign that string prior probability greater than or equal to $0.5$. Thus some way of discriminating between universal Turing machines is called for.

Vallinder 2012, Section 4.1 “Language dependence”
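The step from $K_T(x) = 1$ to a prior of at least $0.5$ follows in one line from the definition of the Solomonoff prior as a weighted sum over programs. A sketch, assuming the standard prefix-machine formulation in which each program $p$ contributes weight $2^{-|p|}$:

$$M_T(x) \;=\; \sum_{p \,:\, T(p) = x} 2^{-|p|} \;\geq\; 2^{-K_T(x)} \;=\; 2^{-1} \;=\; 0.5$$

The inequality holds because the sum includes, at minimum, the shortest program for $x$, whose length is $K_T(x)$ by definition.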

Added to diary 15 January 2018

# mark-twain

It was a crisp and spicy morning in early October. The lilacs and laburnums, lit with the glory-fires of autumn, hung burning and flashing in the upper air, a fairy bridge provided by kind Nature for the wingless wild things that have their homes in the tree-tops and would visit together; the larch and the pomegranate flung their purple and yellow flames in brilliant broad splashes along the slanting sweep of the woodland; the sensuous fragrance of innumerable deciduous flowers rose upon the swooning atmosphere; far in the empty sky a solitary esophagus slept upon motionless wing; everywhere brooded stillness, serenity, and the peace of God.

Mark Twain, A Double Barrelled Detective Story, 1902

Added to diary 26 June 2018

ADDRESS TO THE VIENNA PRESS CLUB, NOVEMBER 21, 1897, DELIVERED IN GERMAN [Here in literal translation] […]

I am indeed the truest friend of the German language […]. I would only some changes effect. I would only the language method–the luxurious, elaborate construction compress, the eternal parenthesis suppress, do away with, annihilate; the introduction of more than thirteen subjects in one sentence forbid; the verb so far to the front pull that one it without a telescope discover can.

Mark Twain, The Awful German Language, in A Tramp Abroad, 1880

Added to diary 26 June 2018

# mary-ann-bates

Policy makers repeatedly face this generalizability puzzle—whether the results of a specific program generalize to other contexts—and there has been a long-standing debate among policy makers about the appropriate response. But the discussion is often framed by confusing and unhelpful questions, such as: Should policy makers rely on less rigorous evidence from a local context or more rigorous evidence from elsewhere? And must a new experiment always be done locally before a program is scaled up?

These questions present false choices. Rigorous impact evaluations are designed not to replace the need for local data but to enhance their value. This complementarity between detailed knowledge of local institutions and global knowledge of common behavioral relationships is fundamental to the philosophy and practice of our work at the Abdul Latif Jameel Poverty Action Lab (J-PAL). […]

To give a sense of our philosophy, it may help to first examine four common, but misguided, approaches about evidence-based policy making that our work seeks to resolve.

Can a study inform policy only in the location in which it was undertaken? Kaushik Basu has argued that an impact evaluation done in Kenya can never tell us anything useful about what to do in Rwanda because we do not know with certainty that the results will generalize to Rwanda. To be sure, we will never be able to predict human behavior with certainty, but the aim of social science is to describe general patterns that are helpful guides, such as the prediction that, in general, demand falls when prices rise. Describing general behaviors that are found across settings and time is particularly important for informing policy. The best impact evaluations are designed to test these general propositions about human behavior.

Should we use only whatever evidence we have from our specific location? In an effort to ensure that a program or policy makes sense locally, researchers such as Lant Pritchett and Justin Sandefur argue that policy makers should mainly rely on whatever evidence is available locally, even if it is not of very good quality. But while good local data are important, to suggest that decision makers should ignore all evidence from other countries, districts, or towns because of the risk that it might not generalize would be to waste a valuable resource. The challenge is to pair local information with global evidence and use each piece of evidence to help understand, interpret, and complement the other.

Should a new local randomized evaluation always precede scale-up? One response to the concern for local relevance is to use the global evidence base as a source for policy ideas but always to test a policy with a randomized evaluation locally before scaling it up. Given J-PAL’s focus on this method, our partners often assume that we will always recommend that another randomized evaluation be done—we do not. With limited resources and evaluation expertise, we cannot rigorously test every policy in every country in the world. We need to prioritize. For example, there have been more than 30 analyses of 10 randomized evaluations in nine low- and middle-income countries on the effects of conditional cash transfers. While there is still much that could be learned about the optimal design of these programs, it is unlikely to be the best use of limited funds to do a randomized impact evaluation for every new conditional cash transfer program when there are many other aspects of antipoverty policy that have not yet been rigorously tested.

Must an identical program or policy be replicated a specific number of times before it is scaled up? One of the most common questions we get asked is how many times a study needs to be replicated in different contexts before a decision maker can rely on evidence from other contexts. We think this is the wrong way to think about evidence. There are examples of the same program being tested at multiple sites: For example, a coordinated set of seven randomized trials of an intensive graduation program to support the ultra-poor in seven countries found positive impacts in the majority of cases. This type of evidence should be weighted highly in our decision making. But if we only draw on results from studies that have been replicated many times, we throw away a lot of potentially relevant information. […]

Focusing on mechanisms, and then judging whether a mechanism is likely to apply in a new setting, has a number of practical advantages for policy making. […] We suggest the use of a four-step generalizability framework that seeks to answer a crucial question at each step:

Step 1: What is the disaggregated theory behind the program?
Step 2: Do the local conditions hold for that theory to apply?
Step 3: How strong is the evidence for the required general behavioral change?
Step 4: What is the evidence that the implementation process can be carried out well?

Bates, M. A., & Glennerster, R. (2017). “The generalizability puzzle”. Stanford Social Innovation Review, Summer 2017. Leland Stanford Jr. University.

Added to diary 22 March 2018

In a preventive war scenario, the rising state (the one that is becoming more powerful) would like to guarantee that it would not use its powerful position to exploit the declining state in the future. The declining state would like to accept such a guarantee. Both states would prefer such a guarantee to risky and costly fighting. Yet both states know that the guarantee would be worthless once the rising state achieves a dominant position. Hence, the declining state may launch a war now in order to avoid being exploited in the future.

Matthew Adam Kocher, Commitment Problems and Preventive War, 8 August 2013

Complete-information bargaining can break down in this setting if the shift in the distribution of power is sufficiently large and rapid. To see why, consider the situation confronting a temporarily weak bargainer who expects to be stronger in the future (that is, the amount that this bargainer can lock in will increase). In order to avoid the inefficient use of power, this bargainer must buy off its temporarily strong adversary. To do this, the weaker party must promise the stronger at least as much of the flow as the latter can lock in. But when the once-weak bargainer becomes stronger, it may want to exploit its better bargaining position and renege on the promised transfer. Indeed, if the shift in the distribution of power is sufficiently large and rapid, the once-weak bargainer is certain to want to renege. Foreseeing this, the temporarily strong adversary uses its power to lock in a higher payoff while it still has the chance.

Robert Powell (2006). War as a Commitment Problem. International Organization, 60(1), 169-203. doi:10.1017/S0020818306060061

Added to diary 15 March 2018

# nate-silver

Few news organizations gave the story more velocity than The New York Times. On the morning of Oct. 29, Comey stories stretched across the print edition’s front page, accompanied by a photo showing Clinton and her aide Huma Abedin, Weiner’s estranged wife. Although some of these articles contained detailed reporting, the headlines focused on speculation about the implications for the horse race — “NEW EMAILS JOLT CLINTON CAMPAIGN IN RACE’S LAST DAYS.” […]

Clinton’s standing in the polls fell sharply. She’d led Trump by 5.9 percentage points in FiveThirtyEight’s popular vote projection at 12:01 a.m. on Oct. 28. A week later — after polls had time to fully reflect the letter — her lead had declined to 2.9 percentage points. That is to say, there was a shift of about 3 percentage points against Clinton. And it was an especially pernicious shift for Clinton because (at least according to the FiveThirtyEight model) Clinton was underperforming in swing states as compared to the country overall. In the average swing state, Clinton’s lead declined from 4.5 percentage points at the start of Oct. 28 to just 1.7 percentage points on Nov. 4. If the polls were off even slightly, Trump could be headed to the White House. […]

So while one can debate the magnitude of the effect, there’s a reasonably clear consensus of the evidence that the Comey letter mattered — probably by enough to swing the election. This ought not be one of the more controversial facts about the 2016 campaign; the data is pretty straightforward. Why the media covered the story as it did and how to weigh the Comey letter against the other causes for Clinton’s defeat are the more complicated parts of the story.

Nate Silver, The Comey Letter Probably Cost Clinton The Election, FiveThirtyEight, 3 May 2017

Added to diary 30 January 2018

# neima-jahromi

By February, when blizzards coat the oily streets, the world outside will resemble the bar’s Black Manta, a rye drink with black sesame and on-trend charcoal, made unsettlingly frothy with the aid of egg whites.

Neima Jahromi, Dromedary Bar, Bar tab, New Yorker Magazine, December 4, 2017

Added to diary 16 January 2018

# nicholas-shackel

A Motte and Bailey castle is a medieval system of defence in which a stone tower on a mound (the Motte) is surrounded by an area of land (the Bailey) which in turn is encompassed by some sort of a barrier such as a ditch. Being dark and dank, the Motte is not a habitation of choice. The only reason for its existence is the desirability of the Bailey, which the combination of the Motte and ditch makes relatively easy to retain despite attack by marauders. When only lightly pressed, the ditch makes small numbers of attackers easy to defeat as they struggle across it: when heavily pressed the ditch is not defensible and so neither is the Bailey. Rather one retreats to the insalubrious but defensible, perhaps impregnable, Motte. Eventually the marauders give up, when one is well placed to reoccupy desirable land.

For my purposes the desirable but only lightly defensible territory of the Motte and Bailey castle, that is to say, the Bailey, represents a philosophical doctrine or position with similar properties: desirable to its proponent but only lightly defensible. The Motte is the defensible but undesired position to which one retreats when hard pressed. I think it is evident that Troll’s Truisms have the Motte and Bailey property, since the exciting falsehoods constitute the desired but indefensible region within the ditch whilst the trivial truth constitutes the defensible but dank Motte to which one may retreat when pressed.

An entire doctrine or theory may be a Motte and Bailey Doctrine just by virtue of having a central core of defensible but not terribly interesting or original doctrines surrounded by a region of exciting but only lightly defensible doctrines. Just as the medieval Motte was often constructed by the stonemason’s art from stone in the surrounding land, the Motte of dull but defensible doctrines is often constructed by the use of the sophist’s art from the desired but indefensible doctrines lying within the ditch.

Diagnosis of a philosophical doctrine as being a Motte and Bailey Doctrine is invariably fatal. Once made it is relatively obvious to those familiar with the doctrine that the doctrine’s survival required a systematic vacillation between exploiting the desired territory and retreating to the Motte when pressed.

The dialectic between many refutations of specific postmodernist doctrines and the postmodernist defences corresponds exactly to the dynamics of Motte and Bailey Doctrines. When pressed with refutation the postmodernists retreat to their Mottes, only to venture out and repossess the desired territory when the refutation is not in immediate evidence. For these reasons, I think the proper diagnosis of postmodernism is precisely that it is a Motte and Bailey Doctrine.

Shackel, N. (2005). The vacuity of postmodernist methodology. Metaphilosophy, 36(3), 295-320.

So the motte-and-bailey doctrine is when you make a bold, controversial statement. Then when somebody challenges you, you claim you were just making an obvious, uncontroversial statement, so you are clearly right and they are silly for challenging you. Then when the argument is over you go back to making the bold, controversial statement.

Some classic examples:

1. The religious group that acts for all the world like God is a supernatural creator who builds universes, creates people out of other people’s ribs, parts seas, and heals the sick when asked very nicely (bailey). Then when atheists come around and say maybe there’s no God, the religious group objects “But God is just another name for the beauty and order in the Universe! You’re not denying that there’s beauty and order in the Universe, are you?” (motte). Then when the atheists go away they get back to making people out of other people’s ribs and stuff.

2. Or…”If you don’t accept Jesus, you will burn in Hell forever.” (bailey) But isn’t that horrible and inhuman? “Well, Hell is just another word for being without God, and if you choose to be without God, God will be nice and let you make that choice.” (motte) Oh, well that doesn’t sound so bad, I’m going to keep rejecting Jesus. “But if you reject Jesus, you will BURN in HELL FOREVER and your body will be GNAWED BY WORMS.” But didn’t you just… “Metaphorical worms of godlessness!”

3. The feminists who constantly argue about whether you can be a real feminist or not without believing in X, Y and Z and wanting to empower women in some very specific way, and who demand everybody support controversial policies like affirmative action or affirmative consent laws (bailey). Then when someone says they don’t really like feminism very much, they object “But feminism is just the belief that women are people!” (motte) Then once the person hastily retreats and promises he definitely didn’t mean women aren’t people, the feminists get back to demanding everyone support affirmative action because feminism, or arguing about whether you can be a feminist and wear lipstick.

4. Proponents of pseudoscience sometimes argue that their particular form of quackery will cure cancer or take away your pains or heal your crippling injuries (bailey). When confronted with evidence that it doesn’t work, they might argue that people need hope, and even a placebo solution will often relieve stress and help people feel cared for (motte). In fact, some have argued that quackery may be better than real medicine for certain untreatable diseases, because neither real nor fake medicine will help, but fake medicine tends to be more calming and has fewer side effects. But then once you leave the quacks in peace, they will go back to telling less knowledgeable patients that their treatments will cure cancer.

5. Critics of the rationalist community note that it pushes controversial complicated things like Bayesian statistics and utilitarianism (bailey) under the name “rationality”, but when asked to justify itself defines rationality as “whatever helps you achieve your goals”, which is so vague as to be universally unobjectionable (motte). Then once you have admitted that more rationality is always a good thing, they suggest you’ve admitted everyone needs to learn more Bayesian statistics.

6. Likewise, singularitarians who predict with certainty that there will be a singularity, because “singularity” just means “a time when technology is so different that it is impossible to imagine” – and really, who would deny that technology will probably get really weird (motte)? But then every other time they use “singularity”, they use it to refer to a very specific scenario of intelligence explosion, which is far less certain and needs a lot more evidence before you can predict it (bailey).

The motte and bailey doctrine sounds kind of stupid and hard-to-fall-for when you put it like that, but all fallacies sound that way when you’re thinking about them. More important, it draws its strength from people’s usual failure to debate specific propositions rather than vague clouds of ideas. If I’m debating “does quackery cure cancer?”, it might be easy to view that as a general case of the problem of “is quackery okay?” or “should quackery be illegal?”, and from there it’s easy to bring up the motte objection.

Scott Alexander, “All in all, another brick in the motte”, Slate Star Codex, 3 November 2014

Suppose I define socialism as, “a system of totalitarian control over the economy, leading inevitably to mass poverty and death.” As a detractor of socialism, this is superficially tempting. But it’s sheer folly, for two distinct reasons.

First, this plainly isn’t what most socialists mean by “socialism.” When socialists call for socialism, they’re rarely requesting totalitarianism, poverty, and death. And when non-socialists listen to socialists, that’s rarely what they hear, either.

Second, if you buy this definition, there’s no point studying actual socialist regimes to see if they in fact are “totalitarian” or “inevitably lead to mass poverty and death.” Mere words tell you what you need to know.

What’s the problem? The problem is that I’ve provided an argumentative definition of socialism. Instead of rigorously distinguishing between what we’re talking about and what we’re saying about it, an argumentative definition deliberately interweaves the two.

The hidden hope, presumably, is that if we control the way people use words, we’ll also control what people think about the world. And it is plainly possible to trick the naive using these semantic tactics. But the epistemic cost is high: You preemptively end conversation with anyone who substantively disagrees with you - and cloud your own thinking in the process. It’s far better to neutrally define socialism as, say, “Government ownership of most of the means of production,” or maybe, “The view that each nation’s wealth is justly owned collectively by its citizens.” You can quibble with these definitions, but people can accept either definition regardless of their position on socialism itself.

Modern discussions are riddled with argumentative definitions, but the most prominent instance, lately, is feminism. Google “feminism,” and what do you get? The top hit: “the advocacy of women’s rights on the basis of the equality of the sexes.” I’ve heard many variants on this: “the theory that men and women should be treated equally,” or even “the radical notion that women are people.”

What’s argumentative about these definitions? Well, in this 2016 Washington Post/Kaiser Family Foundation survey, 40% of women and 67% of men did not consider themselves “feminists.” But over 90% of both genders agreed that “men and women should be social, political, and economic equals.” If Google’s definition of feminism conformed to standard English usage, these patterns would make very little sense. Imagine a world where 90% of men say they’re “bachelors,” but only 40% say they’re “unmarried.”

Bryan Caplan, Against Argumentative Definitions: The Case of Feminism, EconLog, 20 February 2018

Added to diary 21 April 2018

# nick-bostrom

So, “maximize expected value”, say, is a quantity we could define. It just doesn’t help us very much, because whenever you try to do something specific you’re still virtually as far away as you had been. On the other hand, if you set some more concrete objective, like maximize the number of people in this room, or something like that, we can now easily tell how many people there are, and we have ideas about how we could maximize it. So for any particular action we think of, we might easily see how it fares on this objective of maximizing the people in this room. However, we might feel it’s very difficult to get strong reasons for knowing whether more people in this room is better, or whether there is some inverse relationship. A good signpost would strike a reasonable compromise between being visible from afar and also being such that we can have strong reason to be sure of its sign.

Nick Bostrom, Crucial Considerations and Wise Philanthropy, Good Done Right conference, 2014

Added to diary 12 December 2018

Suppose you’re an administrator here in Oxford, you’re working in the Computer Science department, and you’re the secretary there. Suppose you find some way to make the department run slightly more efficiently: you create this mailing list so that everybody can, when they have an announcement to make, just email it to the mailing list rather than having to put each person individually in the address field. And that’s a useful thing, that’s a great thing: it didn’t cost anything, other than a one-off cost, and now everybody can go about their business more easily. From this perspective, it’s very non-obvious whether that is, in fact, a good thing. It might be contributing to AI—that might be the main effect of this, other than the very small general effect on economic growth. And it may well be that you have made the world worse in expectation by making this little efficiency improvement. So this project of trying to think through this is, in a sense, a little bit like the Nietzschean Umwertung aller Werte — the revaluation of all values—project that he never had a chance to complete, because he went mad before.

Nick Bostrom, Crucial Considerations and Wise Philanthropy, Good Done Right conference, 2014

Added to diary 12 December 2018

# nicolas-niarchos

Ass Juice, a rather unpleasantly named punch, is one of the specials at this Alphabet City dive. […] The concoction is also deceptively priced: one is four dollars, and two are nine. The ingenious valuation recently led a former hedge-fund manager to declaim, “It’s very New York!” Around him, a mix of yuppies and leather-clad locals clustered at the bar while televisions above them alternated between shots of rowdy concerts and graphic pornography. “Is that legal?” a drunk patron queried, of the questionable fare onscreen. Nearby, a man in a checked shirt took a swig from a can of Genesee beer and mumbled to his friend, “I’m more than just a back-end-data guy.”

Nicolas Niarchos, Double Down Saloon, Bar Tab, New Yorker Magazine, 5 February 2018

Added to diary 03 February 2018

# paul-christiano

When I suggest that supporting technological development may be an efficient way to improve the world, I often encounter the reaction:

Markets already incentivize technological development; why would we expect altruists to have much impact working on it?

When I talk about more extreme cases, like subsidizing corporate R&D or tech startups, I seem to get this reaction even more strongly and with striking regularity: “But that’s a for-profit enterprise, right? If it were worthwhile to spend any more money on R&D, then they’d do it.” […]

If I notice a promising opportunity and believe that I can capture all of the gains from pursuing it, I might suspect that the market would have scooped it up if it were really as good as it seems. But if I can only capture some of the gains, this reasoning falls apart. If the opportunity involves diminishing returns and allows capturing only a tiny fraction of all of the social gains, then it might be worth investing a tiny bit for profit, leaving significant room for further altruistic investment. I think the existence of diminishing returns is really doing the work in this argument; it will generally cause even very good altruistic opportunities to be very profitable at first, despite a huge gap between social value and profit potential.

Paul Christiano, Altruism and profit, Rational Altruist, 11 July 2013

This part can be hard to follow: “If the opportunity involves diminishing returns and allows capturing only a tiny fraction of all of the social gains”. The explanation is: the more convex the demand curve (quickly diminishing returns), the greater the fraction of total surplus captured by a monopolist. The more concave the demand (slowly diminishing returns), the smaller the fraction of social surplus captured by a monopolist. See Malueg 1994 (Monopoly Output and Welfare: The Role of Curvature of the Demand Function, Proposition 2). The same result extends to Cournot oligopoly (Anderson and Renault 2001, Efficiency and surplus bounds in Cournot competition).

And why does Paul say “very profitable at first”? Because every natural monopoly must end someday: patents expire, know-how diffuses, capital depreciates, and so on.
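Paul’s “very profitable at first, despite a huge gap between social value and profit potential” can be made concrete with a toy calculation (my own sketch, not from the post; the square-root value function, the 10% appropriable share, and the unit cost of capital are all illustrative assumptions):

```python
import math

# Toy model: social value of x dollars invested is S(x) = sqrt(x)
# (diminishing returns), of which a firm can privately capture only
# a 10% share. Both numbers are assumptions for illustration.
CAPTURE = 0.10          # appropriable fraction of social gains
COST_OF_CAPITAL = 1.0   # opportunity cost per marginal dollar

def social_marginal(x):
    return 0.5 / math.sqrt(x)       # S'(x) for S(x) = sqrt(x)

def private_marginal(x):
    return CAPTURE * social_marginal(x)

# The firm invests up to the point where its marginal private return
# equals the cost of capital: 0.1 * 0.5 / sqrt(x*) = 1  =>  x* = 0.0025.
x_star = (CAPTURE * 0.5 / COST_OF_CAPITAL) ** 2

print(f"profit-maximizing investment x* = {x_star:.4f}")               # 0.0025
print(f"marginal social return at x*:   {social_marginal(x_star):.1f}")  # 10.0
```

Near zero the marginal private return is unbounded, so even a 10% share makes the opportunity very profitable at first; yet at the firm’s stopping point every further dollar still yields ten dollars of social value, which is the room left for altruistic investment.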

Added to diary 01 April 2018

I think it’s useful to split up plans into two parts:

1. Trying to achieve some observable goals, where we can make many attempts and improve each time.
2. Hoping that achieving these goals will lead to a positive impact. […]

So I think the upshot is to choose plans for which the arguments supporting step (2) are as simple as possible. Arguments without many moving parts, particularly which are substantiated by a direct appeal to historical regularity, may hold up even if you never get to check them. Conversely, load as much of the difficult work as possible into step (1).

Paul Christiano, Guesswork, feedback, and impact, Rational Altruist, 6 December 2012

Added to diary 28 March 2018

No matter how “dumb” the monkey is, if it is unbiased then there is no free lunch. For a time we can do what we desire at the expense of what we cesire, but any cognitive policy that does so will eventually become unappealing.

Paul Christiano, The monkey and the machine: a dual process theory, The sideways view, 9 February 2017

Added to diary 12 February 2018

# paul-graham

Hacker News is definitely useful. I’ve learned a lot from things I’ve read on HN. I’ve written several essays that began as comments there. So I wouldn’t want the site to go away. But I would like to be sure it’s not a net drag on productivity. What a disaster that would be, to attract thousands of smart people to a site that caused them to waste lots of time. I wish I could be 100% sure that’s not a description of HN. I feel like the addictiveness of games and social applications is still a mostly unsolved problem. The situation now is like it was with crack in the 1980s: we’ve invented terribly addictive new things, and we haven’t yet evolved ways to protect ourselves from them. We will eventually, and that’s one of the problems I hope to focus on next.

Paul Graham, What I’ve Learned from Hacker News, February 2009

Added to diary 27 June 2018

# paul-niehaus

Public employment programs play a major role in the anti-poverty strategy of many developing countries. Besides the direct wages provided to the poor, such programs are likely to affect their welfare by changing broader labor market outcomes including wages and private employment. These general equilibrium effects may accentuate or attenuate the direct benefits of the program, but have been difficult to estimate credibly. We estimate the general equilibrium effects of a technological reform that improved the implementation quality of India’s public employment scheme on the earnings of the rural poor, using a large-scale experiment which randomized treatment across sub-districts of 60,000 people. We find that this reform had a large impact on the earnings of low-income households, and that these gains were overwhelmingly driven by higher private-sector earnings (90%) as opposed to earnings directly from the program (10%). These earnings gains reflect a 5.7% increase in market wages for rural unskilled labor, and a similar increase in reservation wages. We do not find evidence of distortions in factor allocation, including labor supply, migration, and land use. Our results highlight the importance of accounting for general equilibrium effects in evaluating programs, and also illustrate the feasibility of using large-scale experiments to study such effects.

Karthik Muralidharan, Paul Niehaus, and Sandip Sukhtankar, General Equilibrium Effects of (Improving) Public Employment Programs: Experimental Evidence from India, 2017

• Moreover, some and maybe even most of this relationship is not causal. For example, healthier people will be both happier and capable of earning more. This means the effect of gaining extra money on your happiness is weaker than the above correlations suggest. Unfortunately, how much of the above relationships are caused by money making people happier is still not known with confidence. Once you get to an individual income of around $40,000, other factors, such as health, relationships and a sense of purpose, seem far more important than income.

Robert Wiblin, Everything you need to know about whether money makes you happy, 80,000 Hours blog, 2 March 2016

Added to diary 15 April 2018

# robin-hanson

No matter how fast the economy grows, there remains a limited supply of sex and social status […].

Robin Hanson and Kevin Simler, The Elephant in the Brain, 2017

Added to diary 26 June 2018

[T]here’s a very real sense in which we are the Press Secretaries within our minds. In other words, the parts of the mind that we identify with, the parts we think of as our conscious selves (“I,” “myself,” “my conscious ego”), are the ones responsible for strategically spinning the truth for an external audience. […]

Body language also facilitates discretion by being less quotable to third parties, relative to spoken language. If Peter had explicitly told a colleague, “I want to get Jim fired,” the colleague could easily turn around and relay Peter’s agenda to others in the office. Similarly, if Peter had asked his flirting partner out for a drink, word might get back to his wife—in which case, bad news for Peter. […]

[S]peaking functions in part as an act of showing off. Speakers strive to impress their audience by consistently delivering impressive remarks. This explains how speakers foot the bill for the costs of speaking we discussed earlier: they’re compensated not in-kind, by receiving information reciprocally, but rather by raising their social value in the eyes (and ears) of their listeners. […]

Participants evaluate each other not just as trading partners, but also as potential allies. Speakers are eager to impress listeners by saying new and useful things, but the facts themselves can be secondary. Instead, it’s more important for speakers to demonstrate that they have abilities that are attractive in an ally. […]

But why do speakers need to be relevant in conversation? If speakers deliver high-quality information, why should listeners care whether the information is related to the current topic? A plausible answer is that it’s simply too easy to rattle off memorized trivia. You can recite random facts from the encyclopedia until you’re blue in the face, but that does little to advertise your generic facility with information. Similarly, when you meet someone for the first time, you’re more eager to sniff each other out for this generic skill, rather than to exchange the most important information each of you has gathered to this point in your lives. In other words, listeners generally prefer speakers who can impress them wherever a conversation happens to lead, rather than speakers who steer conversations to specific topics where they already know what to say. […]

In fact, patients show surprisingly little interest in private information on medical quality. For example, patients who would soon undergo a dangerous surgery (with a few percent chance of death) were offered private information on the (risk-adjusted) rates at which patients died from that surgery with individual surgeons and hospitals in their area. These rates were large and varied by a factor of three. However, only 8 percent of these patients were willing to spend even $50 to learn these death rates. Similarly, when the government published risk-adjusted hospital death rates between 1986 and 1992, hospitals with twice the risk-adjusted death rates saw their admissions fall by only 0.8 percent. In contrast, a single high-profile news story about an untoward death at a hospital resulted in a 9 percent drop in patient admissions at that hospital. […]

When John F. Kennedy described the space race with his famous speech in 1962, he dressed up the nation’s ambition in a suitably prosocial motive. “We set sail on this new sea,” he told the crowd, “because there is new knowledge to be gained, and new rights to be won, and they must be won and used for the progress of all people.” Everyone, of course, knew the subtext: “We need to beat the Russians!” In the end, our motives were less important than what we managed to achieve by them. We may be competitive social animals, self-interested and self-deceived, but we cooperated our way to the god-damned moon.

Robin Hanson and Kevin Simler, The Elephant in the Brain, 2017

Added to diary 26 June 2018

# ruth-spinks

In the inspection-time task, a person looks at two vertical lines of unequal length and is asked to identify which of the two is longer. Inspection time is the length of time of stimulus presentation each individual needs in order to discriminate which of the two lines is the longer. Some research suggests that more-intelligent individuals are able to discriminate the lengths of the lines in shorter inspection times.

Robert J. Sternberg, Encyclopædia Britannica, Intelligence

This study replicated and extended Kranzler and Jensen’s [Intelligence 13 (1989) 329] meta-analysis of the relationship between inspection time (IT) and intelligence (IQ). […] meta-analyses were conducted on obtained correlations (r’s) between IT and general IQ. […] For the total sample (N>4100), the r was −.51 after correction for artifactual effects (−.30 prior to correction).

Jennifer L. Grudnik, John H. Kranzler, “Meta-analysis of the relationship between intelligence and inspection time”, Intelligence, Volume 29, Issue 6, November–December 2001, Pages 523-535, https://doi.org/10.1016/S0160-2896(01)00078-2
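The jump from −.30 to −.51 “after correction for artifactual effects” refers to standard psychometric meta-analysis corrections. A minimal sketch of one of them, Spearman’s correction for attenuation due to measurement unreliability (the reliability values below are illustrative assumptions of mine, not figures from the paper):

```python
import math

def disattenuate(r_observed, rel_x, rel_y):
    """Spearman's correction: estimated true-score correlation, given an
    observed correlation and the reliabilities of the two measures."""
    return r_observed / math.sqrt(rel_x * rel_y)

# Hypothetical reliabilities for an inspection-time task and an IQ battery:
r_corrected = disattenuate(-0.30, 0.70, 0.50)
print(round(r_corrected, 2))   # -0.51
```

With these (made-up) reliabilities the observed −.30 disattenuates to roughly −.51; the meta-analysis also corrects for other artifacts, such as range restriction, which this sketch omits.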

Neuropsychologists and cognitive researchers often need quick estimates of global cognitive functioning [i.e., intelligence quotient (IQ)]. […]

The current article examined 11 proxy measures to determine their level of agreement with WAIS-III FSIQ across the entire sample. [(Wechsler Adult Intelligence Scale [Third Edition] full scale IQ test)] […]

Measures evaluated for this study included the Ward-7ST short form developed by Ward and modified for the WAIS-III by Pilgrim et al. (1999), the NAART, the SILS, ITBS, the Barona and Crawford demographic regression formulae, and the five OPIE3 hybrids combining demographic and WAIS-III subtest information. The final estimate examined was the ITBS (Hoover et al., 2003), a nationally recognized standardized school achievement test.

The Pearson correlation and confidence interval between WAIS-III FSIQ and each proxy measure are shown in Table 2. Correlations ranged from r = .25 for the Barona estimate to r = .95 for the Ward-7ST short form.

The performance of the proxy measures across the different cognitive ability groups was examined next. […]

Above-Average IQ Group

[…] The Ward-7ST estimate was the only proxy to correlate above r = .70 for the high ability group. […]

The most important finding of this article is how poorly the IQ proxy measures performed at the tails of the IQ distribution. The proxy measures consistently overestimated the IQ of low-functioning individuals and underestimated the IQs of high-functioning individuals.

Spinks, Mckirgan, Arndt, Caspers, Yucuis and Pfalzgraf (2009). IQ estimate smackdown: comparing IQ proxy measures to the WAIS-III. Journal of the International Neuropsychological Society. 15. 590-6. doi:10.1017/S1355617709090766.

Added to diary 27 June 2018

# sam-harris

When we see a person walking down the street talking to himself, we generally assume that he is mentally ill (provided he is not wearing a headset of some kind). But we all talk to ourselves constantly—most of us merely have the good sense to keep our mouths shut. We rehearse past conversations—thinking about what we said, what we didn’t say, what we should have said. We anticipate the future, producing a ceaseless string of words and images that fill us with hope or fear. We tell ourselves the story of the present, as though some blind person were inside our heads who required continuous narration to know what is happening: “Wow, nice desk. I wonder what kind of wood that is. Oh, but it has no drawers. They didn’t put drawers in this thing? How can you have a desk without at least one drawer?” Who are we talking to? No one else is there. And we seem to imagine that if we just keep this inner monologue to ourselves, it is perfectly compatible with mental health. Perhaps it isn’t.

Sam Harris, Waking up (2014)

Added to diary 28 March 2018

# samuel-rathmanner

Natural Turing Machines. The final issue is the choice of Universal Turing machine to be used as the reference machine. The problem is that there is still subjectivity involved in this choice since what is simple on one Turing machine may not be on another. More formally, it can be shown that for any arbitrarily complex string $x$ as measured against the UTM $U$ there is another UTM $U'$ for which $x$ has Kolmogorov complexity $1$. This result seems to undermine the entire concept of a universal simplicity measure but it is more of a philosophical nuisance which only occurs in specifically designed pathological examples. The Turing machine $U'$ would have to be absurdly biased towards the string $x$ which would require previous knowledge of $x$. The analogy here would be to hard-code some arbitrarily long complex number into the hardware of a computer system which is clearly not a natural design. To deal with this case we make the soft assumption that the reference machine is natural in the sense that no such specific biases exist. Unfortunately there is no rigorous definition of natural but it is possible to argue for a reasonable and intuitive definition in this context.

Hutter and Rathmanner 2011, Section 5.9 “Andrey Kolmogorov”

In section 2.4 we saw that Solomonoff’s prior is invariant under both reparametrization and regrouping, up to a multiplicative constant. But there is another form of language dependence, namely the choice of a universal Turing machine.

There are three principal responses to the threat of language dependence. First, one could accept it flat out, and admit that no language is better than any other. Second, one could admit that there is language dependence but argue that some languages are better than others. Third, one could deny language dependence, and try to show that there isn’t any.

For a defender of Solomonoff’s prior, I believe the second option is the most promising. If you accept language dependence flat out, why introduce universal Turing machines, incomputable functions, and other needlessly complicated things? And the third option is not available: there isn’t any way of getting around the fact that Solomonoff’s prior depends on the choice of universal Turing machine. Thus, we shall somehow try to limit the blow of the language dependence that is inherent to the framework. Williamson (2010) defends the use of a particular language by saying that an agent’s language gives her some information about the world she lives in. In the present framework, a similar response could go as follows. First, we identify binary strings with propositions or sensory observations in the way outlined in the previous section. Second, we pick a UTM so that the terms that exist in a particular agent’s language get low Kolmogorov complexity.

If the above proposal is unconvincing, the damage may be limited somewhat by the following result. Let $K_U(x)$ be the Kolmogorov complexity of $x$ relative to universal Turing machine $U$, and let $K_T(x)$ be the Kolmogorov complexity of $x$ relative to Turing machine $T$ (which needn’t be universal). We have that $K_U(x) \leq K_T(x) + C_{TU}$. That is: the difference in Kolmogorov complexity relative to $U$ and relative to $T$ is bounded by a constant $C_{TU}$ that depends only on these Turing machines, and not on $x$. (See Li and Vitanyi (1997, p. 104) for a proof.) This is somewhat reassuring. It means that no other Turing machine can outperform $U$ infinitely often by more than a fixed constant. But we want to achieve more than that. If one picks a UTM that is biased enough to start with, strings that intuitively seem complex will get a very low Kolmogorov complexity. As we have seen, for any string $x$ it is always possible to find a UTM $T$ such that $K_T(x) = 1$. If $K_T(x) = 1$, the corresponding Solomonoff prior $M_T(x)$ will be at least $0.5$. So for any binary string, it is always possible to find a UTM such that we assign that string prior probability greater than or equal to $0.5$. Thus some way of discriminating between universal Turing machines is called for.

Vallinder 2012, Section 4.1 “Language dependence”

Added to diary 15 January 2018

# sandip-sukhtankar

Public employment programs play a major role in the anti-poverty strategy of many developing countries. Besides the direct wages provided to the poor, such programs are likely to affect their welfare by changing broader labor market outcomes including wages and private employment. These general equilibrium effects may accentuate or attenuate the direct benefits of the program, but have been difficult to estimate credibly. We estimate the general equilibrium effects of a technological reform that improved the implementation quality of India’s public employment scheme on the earnings of the rural poor, using a large-scale experiment which randomized treatment across sub-districts of 60,000 people. We find that this reform had a large impact on the earnings of low-income households, and that these gains were overwhelmingly driven by higher private-sector earnings (90%) as opposed to earnings directly from the program (10%). These earnings gains reflect a 5.7% increase in market wages for rural unskilled labor, and a similar increase in reservation wages. We do not find evidence of distortions in factor allocation, including labor supply, migration, and land use. Our results highlight the importance of accounting for general equilibrium effects in evaluating programs, and also illustrate the feasibility of using large-scale experiments to study such effects.

Karthik Muralidharan, Paul Niehaus, and Sandip Sukhtankar, General Equilibrium Effects of (Improving) Public Employment Programs: Experimental Evidence from India, 2017

the cool thing is it mattered – study helped convince gov’t not to scrap the program, putting $100Ms annually into hands of the poor

@paulfniehaus, Twitter

Added to diary 15 January 2018

# scott-aaronson

I’ll start with the “Muddy Children Puzzle,” which is one of the greatest logic puzzles ever invented. How many of you have seen this one? OK, so the way it goes is, there are a hundred children playing in the mud. Naturally, they all have muddy foreheads. At some point their teacher comes along and says to them, as they all sit around in a circle: “stand up if you know your forehead is muddy.” No one stands up. For how could they know? Each kid can see all the other 99 kids’ foreheads, so knows that they’re muddy, but can’t see his or her own forehead. (We’ll assume that there are no mirrors or camera phones nearby, and also that this is mud that you don’t feel when it’s on your forehead.) So the teacher tries again. “Knowing that no one stood up the last time, now stand up if you know your forehead is muddy.” Still no one stands up. Why would they? No matter how many times the teacher repeats the request, still no one stands up. Then the teacher tries something new. “Look, I hereby announce that at least one of you has a muddy forehead.” After that announcement, the teacher again says, “stand up if you know your forehead is muddy”—and again no one stands up. And again and again; it continues 99 times. But then the hundredth time, all the children suddenly stand up. (There’s a variant of the puzzle involving blue-eyed islanders who all suddenly commit suicide on the hundredth day, when they all learn that their eyes are blue—but as a blue-eyed person myself, that’s always struck me as needlessly macabre.) What’s going on here?
Somehow, the teacher’s announcing to the children that at least one of them had a muddy forehead set something dramatic in motion, which would eventually make them all stand up—but how could that announcement possibly have made any difference? After all, each child already knew that at least 99 children had muddy foreheads! Like with many puzzles, the way to get intuition is to change the numbers. So suppose there were two children with muddy foreheads, and the teacher announced to them that at least one had a muddy forehead, and then asked both of them whether their own forehead was muddy. Neither would know. But each child could reason as follows: “if my forehead weren’t muddy, then the other child would’ve seen that, and would also have known that at least one of us has a muddy forehead. Therefore she would’ve known, when asked, that her own forehead was muddy. Since she didn’t know, that means my forehead is muddy.” So then both children know their foreheads are muddy, when the teacher asks a second time. Now, this argument can be generalized to any (finite) number of children. The crucial concept here is common knowledge. We call a fact “common knowledge” if, not only does everyone know it, but everyone knows everyone knows it, and everyone knows everyone knows everyone knows it, and so on. It’s true that in the beginning, each child knew that all the other children had muddy foreheads, but it wasn’t common knowledge that even one of them had a muddy forehead. For example, if your forehead and mine are both muddy, then I know that at least one of us has a muddy forehead, and you know that too, but you don’t know that I know it (for what if your forehead were clean?), and I don’t know that you know it (for what if my forehead were clean?). What the teacher’s announcement did, was to make it common knowledge that at least one child has a muddy forehead (since not only did everyone hear the announcement, but everyone witnessed everyone else hearing it, etc.).
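The two-child argument generalizes to any number of children, and the resulting dynamic can be checked with a short simulation (my own sketch, not from Aaronson’s post): after $r-1$ silent rounds it is common knowledge that at least $r$ foreheads are muddy, so a muddy child stands up in the first round whose number exceeds the count of muddy foreheads it can see.

```python
def muddy_children(foreheads):
    """Simulate the puzzle after the teacher's public announcement.

    foreheads: list of bools, True = muddy.
    Returns (round_number, indices of the children who stand up).
    """
    total_muddy = sum(foreheads)
    for rnd in range(1, len(foreheads) + 1):
        standing = []
        for i, muddy in enumerate(foreheads):
            seen = total_muddy - (1 if muddy else 0)  # muddy foreheads child i sees
            # After rnd-1 silent rounds, it is common knowledge that at
            # least rnd children are muddy; a child who sees fewer than
            # rnd muddy foreheads can conclude its own forehead is muddy.
            if seen < rnd:
                standing.append(i)
        if standing:
            return rnd, standing
    return None

# With a hundred muddy children, everyone stands on round 100:
print(muddy_children([True] * 100)[0])  # → 100
```

With two muddy children the simulation reproduces the two-child reasoning exactly: both stand on round two.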
And once you understand that point, it’s easy to argue by induction: after the teacher asks and no child stands up (and everyone sees that no one stood up), it becomes common knowledge that at least two children have muddy foreheads (since if only one child had had a muddy forehead, that child would’ve known it and stood up). Next it becomes common knowledge that at least three children have muddy foreheads, and so on, until after a hundred rounds it’s common knowledge that everyone’s forehead is muddy, so everyone stands up. The moral is that the mere act of saying something publicly can change the world—even if everything you said was already obvious to every last one of your listeners. For it’s possible that, until your announcement, not everyone knew that everyone knew the thing, or knew everyone knew everyone knew it, etc., and that could have prevented them from acting.

Scott Aaronson, Common Knowledge and Aumann’s Agreement Theorem, Shtetl-Optimized, 16 August 2015

Added to diary 27 June 2018

In my view, any assessment of Robin’s abrasive, tone-deaf, and sometimes even offensive intellectual style has to grapple with the fact that, over his career, Robin has originated not one but several hugely important ideas—and his ability to do so strikes me as clearly related to his style, not easily detachable from it. Most famously, Robin is one of the major developers of prediction markets, and also the inventor of futarchy—a proposed system of government that would harness prediction markets to get well-calibrated assessments of the effects of various policies. Robin also first articulated the concept of the Great Filter in the evolution of life in our universe.
It’s Great Filter reasoning that tells us, for example, that if we ever discover fossil microbial life on Mars (or worse yet, simple plants and animals on extrasolar planets), then we should be terrified, because it would mean that several solutions to the Fermi paradox that don’t involve civilizations like ours killing themselves off would have been eliminated. Sure, once you say it, it sounds pretty obvious … but did you think of it?

Scott Aaronson, The Zeroth Commandment, Shtetl-Optimized, 6 May 2018

Added to diary 06 May 2018

# scott-alexander

A Motte and Bailey castle is a medieval system of defence in which a stone tower on a mound (the Motte) is surrounded by an area of land (the Bailey) which in turn is encompassed by some sort of a barrier such as a ditch. Being dark and dank, the Motte is not a habitation of choice. The only reason for its existence is the desirability of the Bailey, which the combination of the Motte and ditch makes relatively easy to retain despite attack by marauders. When only lightly pressed, the ditch makes small numbers of attackers easy to defeat as they struggle across it: when heavily pressed the ditch is not defensible and so neither is the Bailey. Rather one retreats to the insalubrious but defensible, perhaps impregnable, Motte. Eventually the marauders give up, when one is well placed to reoccupy desirable land. For my purposes the desirable but only lightly defensible territory of the Motte and Bailey castle, that is to say, the Bailey, represents a philosophical doctrine or position with similar properties: desirable to its proponent but only lightly defensible. The Motte is the defensible but undesired position to which one retreats when hard pressed.
I think it is evident that Troll’s Truisms have the Motte and Bailey property, since the exciting falsehoods constitute the desired but indefensible region within the ditch whilst the trivial truth constitutes the defensible but dank Motte to which one may retreat when pressed. An entire doctrine or theory may be a Motte and Bailey Doctrine just by virtue of having a central core of defensible but not terribly interesting or original doctrines surrounded by a region of exciting but only lightly defensible doctrines. Just as the medieval Motte was often constructed by the stonemason’s art from stone in the surrounding land, the Motte of dull but defensible doctrines is often constructed by the use of the sophist’s art from the desired but indefensible doctrines lying within the ditch. Diagnosis of a philosophical doctrine as being a Motte and Bailey Doctrine is invariably fatal. Once made it is relatively obvious to those familiar with the doctrine that the doctrine’s survival required a systematic vacillation between exploiting the desired territory and retreating to the Motte when pressed. The dialectic between many refutations of specific postmodernist doctrines and the postmodernist defences corresponds exactly to the dynamics of Motte and Bailey Doctrines. When pressed with refutation the postmodernists retreat to their Mottes, only to venture out and repossess the desired territory when the refutation is not in immediate evidence. For these reasons, I think the proper diagnosis of postmodernism is precisely that it is a Motte and Bailey Doctrine.

Shackel, N. (2005). The vacuity of postmodernist methodology. Metaphilosophy, 36(3), 295–320.

So the motte-and-bailey doctrine is when you make a bold, controversial statement. Then when somebody challenges you, you claim you were just making an obvious, uncontroversial statement, so you are clearly right and they are silly for challenging you.
Then when the argument is over you go back to making the bold, controversial statement. Some classic examples:

1. The religious group that acts for all the world like God is a supernatural creator who builds universes, creates people out of other people’s ribs, parts seas, and heals the sick when asked very nicely (bailey). Then when atheists come around and say maybe there’s no God, the religious group objects “But God is just another name for the beauty and order in the Universe! You’re not denying that there’s beauty and order in the Universe, are you?” (motte). Then when the atheists go away they get back to making people out of other people’s ribs and stuff.
2. Or…”If you don’t accept Jesus, you will burn in Hell forever.” (bailey) But isn’t that horrible and inhuman? “Well, Hell is just another word for being without God, and if you choose to be without God, God will be nice and let you make that choice.” (motte) Oh, well that doesn’t sound so bad, I’m going to keep rejecting Jesus. “But if you reject Jesus, you will BURN in HELL FOREVER and your body will be GNAWED BY WORMS.” But didn’t you just… “Metaphorical worms of godlessness!”
3. The feminists who constantly argue about whether you can be a real feminist or not without believing in X, Y and Z and wanting to empower women in some very specific way, and who demand everybody support controversial policies like affirmative action or affirmative consent laws (bailey). Then when someone says they don’t really like feminism very much, they object “But feminism is just the belief that women are people!” (motte) Then once the person hastily retreats and promises he definitely didn’t mean women aren’t people, the feminists get back to demanding everyone support affirmative action because feminism, or arguing about whether you can be a feminist and wear lipstick.
4. Proponents of pseudoscience sometimes argue that their particular form of quackery will cure cancer or take away your pains or heal your crippling injuries (bailey). When confronted with evidence that it doesn’t work, they might argue that people need hope, and even a placebo solution will often relieve stress and help people feel cared for (motte). In fact, some have argued that quackery may be better than real medicine for certain untreatable diseases, because neither real nor fake medicine will help, but fake medicine tends to be more calming and has fewer side effects. But then once you leave the quacks in peace, they will go back to telling less knowledgeable patients that their treatments will cure cancer.
5. Critics of the rationalist community note that it pushes controversial complicated things like Bayesian statistics and utilitarianism (bailey) under the name “rationality”, but when asked to justify itself defines rationality as “whatever helps you achieve your goals”, which is so vague as to be universally unobjectionable (motte). Then once you have admitted that more rationality is always a good thing, they suggest you’ve admitted everyone needs to learn more Bayesian statistics.
6. Likewise, singularitarians who predict with certainty that there will be a singularity, because “singularity” just means “a time when technology is so different that it is impossible to imagine” – and really, who would deny that technology will probably get really weird (motte)? But then every other time they use “singularity”, they use it to refer to a very specific scenario of intelligence explosion, which is far less certain and needs a lot more evidence before you can predict it (bailey).

The motte and bailey doctrine sounds kind of stupid and hard-to-fall-for when you put it like that, but all fallacies sound that way when you’re thinking about them.
More important, it draws its strength from people’s usual failure to debate specific propositions rather than vague clouds of ideas. If I’m debating “does quackery cure cancer?”, it might be easy to view that as a general case of the problem of “is quackery okay?” or “should quackery be illegal?”, and from there it’s easy to bring up the motte objection.

Scott Alexander, “All in all, another brick in the motte”, Slate Star Codex, 3 November 2014

Suppose I define socialism as, “a system of totalitarian control over the economy, leading inevitably to mass poverty and death.” As a detractor of socialism, this is superficially tempting. But it’s sheer folly, for two distinct reasons. First, this plainly isn’t what most socialists mean by “socialism.” When socialists call for socialism, they’re rarely requesting totalitarianism, poverty, and death. And when non-socialists listen to socialists, that’s rarely what they hear, either. Second, if you buy this definition, there’s no point studying actual socialist regimes to see if they in fact are “totalitarian” or “inevitably lead to mass poverty and death.” Mere words tell you what you need to know. What’s the problem? The problem is that I’ve provided an argumentative definition of socialism. Instead of rigorously distinguishing between what we’re talking about and what we’re saying about it, an argumentative definition deliberately interweaves the two. The hidden hope, presumably, is that if we control the way people use words, we’ll also control what people think about the world. And it is plainly possible to trick the naive using these semantic tactics. But the epistemic cost is high: You preemptively end conversation with anyone who substantively disagrees with you - and cloud your own thinking in the process.
It’s far better to neutrally define socialism as, say, “Government ownership of most of the means of production,” or maybe, “The view that each nation’s wealth is justly owned collectively by its citizens.” You can quibble with these definitions, but people can accept either definition regardless of their position on socialism itself. Modern discussions are riddled with argumentative definitions, but the most prominent instance, lately, is feminism. Google “feminism,” and what do you get? The top hit: “the advocacy of women’s rights on the basis of the equality of the sexes.” I’ve heard many variants on this: “the theory that men and women should be treated equally,” or even “the radical notion that women are people.” What’s argumentative about these definitions? Well, in this 2016 Washington Post/Kaiser Family Foundation survey, 40% of women and 67% of men did not consider themselves “feminists.” But over 90% of both genders agreed that “men and women should be social, political, and economic equals.” If Google’s definition of feminism conformed to standard English usage, these patterns would make very little sense. Imagine a world where 90% of men say they’re “bachelors,” but only 40% say they’re “unmarried.”

Bryan Caplan, Against Argumentative Definitions: The Case of Feminism, EconLog, 20 February 2018

Added to diary 21 April 2018

# sean-caroll

List of passages I highlighted in “The Big Picture”. On ontology: We will see how our best approach to describing the universe is not a single, unified story but an interconnected series of models appropriate at different levels. Each model has a domain in which it is applicable, and the ideas that appear as essential parts of each story have every right to be thought of as “real.” […] While there is one world, there are many ways of talking about it. We refer to these ways as “models” or “theories” or “vocabularies” or “stories”; it doesn’t matter.
Aristotle and his contemporaries weren’t just making things up; they told a reasonable story about the world they actually observed. Science has discovered another set of stories, harder to perceive but of greater precision and wider applicability. It’s not good enough that the stories succeed individually; they have to fit together. […] The different stories or theories use utterly different vocabularies; they are different ontologies, despite describing the same underlying reality. In one we talk about the density, pressure, and viscosity of the fluid; in the other we talk about the position and velocity of all the individual molecules. […] The case of fluid dynamics emerging from molecules is as simple as it gets. One theory can directly be obtained from the other by a process known as coarse-graining. […] Typically—though not necessarily—the theory that has a wider domain of applicability will also be the one that is more computationally cumbersome. There tends to be a trade-off between comprehensiveness of a theory and its practicality. […] There are several different questions here, which are related to one another but logically distinct. Are the most fine-grained (microscopic, comprehensive) stories the most interesting or important ones? As a research program, is the best way to understand macroscopic phenomena to first understand microscopic phenomena, and then derive the emergent description? Is there something we learn by studying the emergent level that we could not understand by studying the microscopic level, even if we were as smart as Laplace’s Demon? Is behavior at the macroscopic level incompatible—literally inconsistent with—how we would expect the system to behave if we knew only the microscopic rules?
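The coarse-graining relationship between the two vocabularies can be made concrete with a toy sketch (my own illustration, not from the book): the microscopic "theory" lists every molecule's position, while the macroscopic one keeps only a density profile on a coarse grid.

```python
import random

random.seed(0)

# Microscopic description: the position of every individual "molecule"
# in a one-dimensional box of length 10.
positions = [random.uniform(0.0, 10.0) for _ in range(100_000)]

# Coarse-graining: discard molecular detail, keeping only the fraction
# of molecules in each of a few coarse cells -- a density profile.
n_cells = 10
counts = [0] * n_cells
for x in positions:
    counts[min(int(x), n_cells - 1)] += 1
density = [c / len(positions) for c in counts]

# The emergent description is vastly smaller than the microscopic one,
# yet it is the level at which fluid dynamics operates.
print([round(d, 3) for d in density])
```

Many distinct microscopic configurations coarse-grain to the same density profile, which is exactly the sense in which the macroscopic vocabulary throws away information while remaining a perfectly good description.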
[…] (A similar view was put forward by Stephen Hawking and Leonard Mlodinow, under the label “model-dependent realism.”) […] To evaluate a model of the world, the questions we need to ask include “Is it internally consistent?,” “Is it well-defined?,” and “Does it fit the data?” When we have multiple distinct theories that overlap in some regime, they had better be compatible with one another; otherwise they couldn’t both fit the data at the same time. The theories may involve utterly different kinds of concepts; one may have particles and forces obeying differential equations, and another may have human agents making choices. That’s fine, as long as the predictions of the theories line up in their overlapping domains of applicability. The success of one theory doesn’t mean that another one is wrong; that only happens when a theory turns out to be internally incoherent, or when it does a bad job at describing the observed phenomena. […] “Causation,” which after all is itself a derived notion rather than a fundamental one, is best thought of as acting within individual theories that rely on the concept. Thinking of behavior in one theory as causing behavior in a completely different theory is the first step toward a morass of confusion from which it is difficult to extract ourselves. […] The way we talk about human beings and their interactions is going to end up being less crisp and precise than our theories of elementary particles. It might be harmless, and even useful, to borrow terms from one story because they are useful in another one—“diseases are caused by microscopic germs” being an obvious example. Drawing relations between different vocabularies, such as when Boltzmann suggested that the entropy of a gas was related to the number of indistinguishable arrangements of the molecules of which it was composed, can be extremely valuable and add important insights. 
But if a theory is any good, it has to be able to speak sensibly about the phenomena it purports to describe all by itself, without leaning on causes being exerted to or from theories at different levels of focus. […] On reductionism: Galileo observed that Jupiter has moons, implying that it is a gravitating body just like the Earth. Isaac Newton showed that the force of gravity is universal, underlying both the motion of the planets and the way that apples fall from trees. John Dalton demonstrated how different chemical compounds could be thought of as combinations of basic building blocks called atoms. Charles Darwin established the unity of life from common ancestors. James Clerk Maxwell and other physicists brought together such disparate phenomena as lightning, radiation, and magnets under the single rubric of “electromagnetism.” Close analysis of starlight revealed that stars are made of the same kinds of atoms as we find here on Earth, with Cecilia Payne-Gaposchkin eventually proving that they are mostly hydrogen and helium. Albert Einstein unified space and time, joining together matter and energy along the way. Particle physics has taught us that every atom in the periodic table of the elements is an arrangement of just three basic particles: protons, neutrons, and electrons. Every object you have ever seen or bumped into in your life is made of just those three particles. We’re left with a very different view of reality from where we started. At a fundamental level, there aren’t separate “living things” and “nonliving things,” “things here on Earth” and “things up in the sky,” “matter” and “spirit.” There is just the basic stuff of reality, appearing to us in many different forms. How far will this process of unification and simplification go? It’s impossible to say for sure. But we have a reasonable guess, based on our progress thus far: it will go all the way. 
We will ultimately understand the world as a single, unified reality, not caused or sustained or influenced by anything outside itself. That’s a big deal. […] On truth and falsity of models: Consider a coffee cup sitting at rest on a table. It is in its natural state, in this case at rest. (Unless we were to pull the table out from beneath it, in which case it would naturally fall, but let’s not do that.) Now imagine we exert a violent motion, pushing the cup across the table. As we push it, it moves; when we stop, it returns to its natural state of rest. In order to keep it moving, we would have to keep pushing on it. As Aristotle says, “Everything that is in motion must be moved by something.” This is manifestly how coffee cups do behave in the real world. The difference between Galileo and Aristotle wasn’t that one was saying true things and the other was saying false things; it’s that the things Galileo chose to focus on turned out to be a useful basis for a more rigorous and complete understanding of phenomena beyond the original set of examples, in a way that Aristotle’s did not. […] On simulation: To simulate the entire universe with good accuracy, you basically have to be the universe. […] On the arrow of time: When a later event has great leverage over an earlier one, we call the latter a “record” of the former; when the earlier event has great leverage over a later one, we call the latter a “cause” of the former. […] On quantum mechanics: Physicists were forced to throw out what we mean by the “state” of a physical system—the complete description of its current situation—and replace it with something utterly different. What is worse, we had to reinvent an idea we thought was pretty straightforward: the concept of a measurement or observation. […] On cells: Keeping the cell membrane intact and robust turns out to be a kind of Bayesian reasoning.
[…] On information: As the universe evolves from this very specific configuration to increasingly generic ones, correlations between different parts of the universe develop very naturally. It becomes useful to say that one part carries information about another part. It’s just one of the many helpful ways we have of talking about the world at an emergent, macroscopic level. […] On fine-tuning: The fine-tuning argument plays by the rules of how we come to learn about the world. It takes two theories, naturalism and theism, and then tests them by making predictions and going out and looking at the world to test which prediction comes true. It’s the best argument we have for God’s existence. […] naturalists need to face fine-tuning head-on. That means understanding what the universe is predicted to look like under both theism and naturalism, so that we can legitimately compare how our observations affect our credences. We’ll see that the existence of life provides, at best, a small boost to the probability that theism is true—while related features of the universe provide an extremely large boost for naturalism. […] On the evolution of higher intelligence: If you’ve spent much time swimming or diving, you know that you can’t see as far underwater as you can in air. The attenuation length—the distance past which light is mostly absorbed by the medium you are looking through—is tens of meters through clear water, while in air it’s practically infinite. (We have no trouble seeing the moon, or distant objects on our horizon.) What you can see has a dramatic effect on how you think. If you’re a fish, you move through the water at a meter or two per second, and you see some tens of meters in front of you. Every few seconds you are entering a new perceptual environment. As something new looms into your view, you have only a very brief amount of time in which to evaluate how to react to it. Is it friendly, fearsome, or foodlike? 
Under those conditions, there is enormous evolutionary pressure to think fast. See something, respond almost immediately. A fish brain is going to be optimized to do just that. Quick reaction, not leisurely contemplation, is the name of the game. Now imagine you’ve climbed up onto land. Suddenly your sensory horizon expands enormously. Surrounded by clear air, you can see for kilometers—much farther than you can travel in a couple of seconds. At first, there wasn’t much to see, since there weren’t any other animals up there with you. But there is food of different varieties, obstacles like rocks and trees, not to mention the occasional geological eruption. And before you know it, you are joined by other kinds of locomotive creatures. Some friendly, some tasty, some simply to be avoided. Now the selection pressures have shifted dramatically. Being simple-minded and reactive might be okay in some circumstances, but it’s not the best strategy on land. When you can see what’s coming long before you are forced to react, you have the time to contemplate different possible actions, and weigh the pros and cons of each. You can even be ingenious, putting some of your cognitive resources into inventing plans of action other than those that are immediately obvious. Out in the clear air, it pays to use your imagination. […] On our righteous minds: Kahneman compares System 2 to “a supporting character who believes herself to be the lead actor and often has little idea of what’s going on.” […] On cognitive science: The study of how we think and feel, not to mention how to think about who we are, is in its relative infancy. As neuroscientist and philosopher Patricia Churchland has put it, “We’re pre-Newton, pre-Kepler. We’re still sussing out that there are moons around Jupiter.” […] On philosophy of mind: Frank Jackson himself has subsequently repudiated the original conclusion of the knowledge argument. 
Like most philosophers, he now accepts that consciousness arises from purely physical processes: “Although I once dissented from the majority, I have capitulated,” he writes. Jackson believes that Mary the Color Scientist helps pinpoint our intuition about why conscious experience can’t be purely physical, but that this isn’t enough to qualify as a compelling argument for such a conclusion. The interesting task is to show how our intuition has led us astray—as, science keeps reminding us, it so often does. […]

Sean Carroll, The Big Picture: On the Origins of Life, Meaning, and the Universe Itself, 2016

Added to diary 21 April 2018

# second-westminster-company

Finally, brethren, whatsoever things are true, whatsoever things are honest, whatsoever things are just, whatsoever things are pure, whatsoever things are lovely, whatsoever things are of good report; if there be any virtue, and if there be any praise, think on these things.

Philippians 4:8, King James Version

Added to diary 20 January 2018

# shauna-lyon

Fried pork-belly skewers were eerily similar to corn dogs. Tamarind baby-back ribs were slick with grease but not sauce, and octopus with cilantro was far from tender. But robata-grilled yellowtail was fresh and juicy, and fried duck tongues were as cute as you’d want, crunchy and cheerful and dusted with chili powder. The squid-ink “pasta” noodles were made from fish—slippery and firm, they took well to bottarga. […] Here is where Takayama’s influence is deeply felt, in perfect little pieces of nigiri or delicate temaki on crisp nori. The rice is pillowy, with just enough vinegar to provide counterpoint to the soft, silky slabs of mackerel, scallop, salmon, amberjack, even maitake mushroom.
Perhaps a bit of cynicism can be detected in the uni-toro nigiri: it sounds good, but these two worshipped ingredients really don’t belong together; the metallic taste of the tuna belly overpowers the delicate sweetness of the sea urchin, plus, one piece costs sixteen dollars.

Shauna Lyon, Tetsu, Tables for Two, New Yorker Magazine, 26 February 2018

Added to diary 24 February 2018

Grilled Spanish mackerel sits in warm ponzu next to plums and “yolk jam”—a wonderfully pure, almost solid egg yolk, the texture attained, according to a server, by “cooking the yolk over low heat for a very long time.” Perfect little agnolotti ooze Fontina cheese and carrot butter; Proechel adds tender braised lamb neck and a dice of pickled squash and raw carrot to take it over the top. And then there’s the côte de boeuf, aged for sixty days, “from Kansas,” the waiter repeats to each table, in various sizes, starting, on one recent night, at $183 for thirty-five ounces (including a hefty bone), with “all the fixings.” Unlike any fixings ever, these include black-garlic jam (if mahogany had a flavor it would be this), whipped buttermilk with charred cipollini onions (like a tart, zingy whipped cream, utterly delicious), a bowl of broth with bland unsalted potato dumplings and beef-fat-soaked croutons, and an addictive Brussels-sprout slaw.

Not everything works. There’s a reason you rarely see rutabaga; its sharpness is jarring next to perfectly seared duck breast. Beets with black-sesame tahini goes too dark.

Shauna Lyon, Ferris, Tables for Two, New Yorker Magazine, 4 December 2017

Added to diary 16 January 2018

# stephan-arndt

In the inspection-time task, a person looks at two vertical lines of unequal length and is asked to identify which of the two is longer. Inspection time is the length of time of stimulus presentation each individual needs in order to discriminate which of the two lines is the longest. Some research suggests that more-intelligent individuals are able to discriminate the lengths of the lines in shorter inspection times.

Robert J. Sternberg, Encyclopædia Britannica, Intelligence

This study replicated and extended Kranzler and Jensen’s [Intelligence 13 (1989) 329] meta-analysis of the relationship between inspection time (IT) and intelligence (IQ). […] meta-analyses were conducted on obtained correlations (r’s) between IT and general IQ. […] For the total sample (N>4100), the r was −.51 after correction for artifactual effects (−.30 prior to correction).

Jennifer L. Grudnik, John H. Kranzler, “Meta-analysis of the relationship between intelligence and inspection time”, Intelligence, Volume 29, Issue 6, November–December 2001, Pages 523-535, https://doi.org/10.1016/S0160-2896(01)00078-2
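The “correction for artifactual effects” above bundles several adjustments; in this meta-analytic tradition the standard ones are sampling error, measurement unreliability, and range restriction. The unreliability piece alone is Spearman’s correction for attenuation, sketched below with hypothetical reliability values (not taken from the study); the full multi-artifact correction is why the reported corrected r (−.51) is larger in magnitude than this single adjustment would produce.

```python
import math

def disattenuate(r_obs, rel_x, rel_y):
    """Spearman's correction for attenuation: estimate the construct-level
    correlation from an observed correlation and the reliabilities of the
    two measures (r_true = r_obs / sqrt(rel_x * rel_y))."""
    return r_obs / math.sqrt(rel_x * rel_y)

# Hypothetical reliabilities for an inspection-time task and an IQ battery:
print(disattenuate(-0.30, 0.7, 0.9))  # ≈ -0.378
```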

Neuropsychologists and cognitive researchers often need quick estimates of global cognitive functioning [i.e., intelligence quotient (IQ)]. […]

The current article examined 11 proxy measures to determine their level of agreement with WAIS-III FSIQ across the entire sample. [(Wechsler Adult Intelligence Scale [Third Edition] full scale IQ test)] […]

Measures evaluated for this study included the Ward-7ST short form developed by Ward and modified for the WAIS-III by Pilgrim et al. (1999), the NAART, the SILS, ITBS, the Barona and Crawford demographic regression formulae, and the five OPIE3 hybrids combining demographic and WAIS-III subtest information. The final estimate examined was the ITBS (Hoover et al., 2003), a nationally recognized standardized school achievement test.

The Pearson correlation and confidence interval between WAIS-III FSIQ and each proxy measure are shown in Table 2. Correlations ranged from r = .25 for the Barona estimate to r = .95 for the Ward-7ST short form.

The performance of the proxy measures across the different cognitive ability groups was examined next. […]

Above-Average IQ Group

[…] The Ward-7ST estimate was the only proxy to correlate above r = .70 for the high ability group. […]

The most important finding of this article is how poorly the IQ proxy measures performed at the tails of the IQ distribution. The proxy measures consistently overestimated the IQ of low-functioning individuals and underestimated the IQs of high-functioning individuals.

Spinks, McKirgan, Arndt, Caspers, Yucuis, and Pfalzgraf (2009). IQ estimate smackdown: comparing IQ proxy measures to the WAIS-III. Journal of the International Neuropsychological Society, 15, 590–596. doi:10.1017/S1355617709090766.

Added to diary 27 June 2018

# stephen-hsu

Many people lack standard cognitive tools useful for understanding the world around them. Perhaps the most egregious case: probability and statistics, which are central to understanding health, economics, risk, crime, society, evolution, global warming, etc. Very few people have any facility for calculating risk, visualizing a distribution, understanding the difference between the average, the median, variance, etc.

A remnant of the cold war era curriculum still in place in the US: if students learn advanced math it tends to be calculus, whereas a course on probability, statistics and thinking distributionally would be more useful. (I say this reluctantly, since I am a physical scientist and calculus is in the curriculum largely for its utility in fields related to mine.)

In the post below, blogger Mark Liberman (a linguist at Penn) notes that our situation parallels the absence of concepts for specific numbers (i.e., “ten”) among primitive cultures like the Pirahã of the Amazon. We may find their condition amusing, or even sad. Personally, I find it tragic that leading public intellectuals around the world are mostly innumerate and don’t understand basic physics.

The Pirahã language and culture seem to lack not only the words but also the concepts for numbers, using instead less precise terms like “small size”, “large size” and “collection”. And the Pirahã people themselves seem to be surprisingly uninterested in learning about numbers, and even actively resistant to doing so, despite the fact that in their frequent dealings with traders they have a practical need to evaluate and compare numerical expressions. A similar situation seems to obtain among some other groups in Amazonia, and a lack of indigenous words for numbers has been reported elsewhere in the world.

Many people find this hard to believe. These are simple and natural concepts, of great practical importance: how could rational people resist learning to understand and use them? I don’t know the answer. But I do know that we can investigate a strictly comparable case, equally puzzling to me, right here in the U.S. of A.

Until about a hundred years ago, our language and culture lacked the words and ideas needed to deal with the evaluation and comparison of sampled properties of groups. Even today, only a minuscule proportion of the U.S. population understands even the simplest form of these concepts and terms. Out of the roughly 300 million Americans, I doubt that as many as 500 thousand grasp these ideas to any practical extent, and 50,000 might be a better estimate. The rest of the population is surprisingly uninterested in learning, and even actively resists the intermittent attempts to teach them, despite the fact that in their frequent dealings with social and biomedical scientists they have a practical need to evaluate and compare the numerical properties of representative samples.

[OK, perhaps 500k is an underestimate… Surely >1% of the population has been exposed to these ideas and remembers the main points?]

…Before 1900 or so, only a few mathematical geniuses like Gauss (1777-1855) had any real ability to deal with these issues. But even today, most of the population still relies on crude modes of expression like the attribution of numerical properties to prototypes (“A woman uses about 20,000 words per day while a man uses about 7,000”) or the comparison of bare-plural nouns (“men are happier than women”).

Sometimes, people are just avoiding more cumbersome modes of expression – “Xs are P-er than Ys” instead of (say) “The mean P measurement in a sample of Xs was greater than the mean P measurement in a sample of Ys, by an amount that would arise by chance fewer than once in 20 trials, assuming that the two samples were drawn from a single population in which P is normally distributed”. But I submit that even most intellectuals don’t really know how to think about the evaluation and comparison of distributions – not even simple univariate gaussian distributions, much less more complex situations. And many people who do sort of understand this, at some level, generally fall back on thinking (as well as talking) about properties of group prototypes rather than properties of distributions of individual characteristics.

Stephen Hsu, Bounded cognition, Information Processing, 9 October 2007
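The “cumbersome mode of expression” parodied above (a difference in sample means that “would arise by chance fewer than once in 20 trials,” i.e. p < 0.05) boils down to a two-sample comparison. A minimal sketch of the core quantity, Welch’s t statistic, using made-up data:

```python
import math
import statistics

def welch_t(xs, ys):
    """Welch's t statistic: the difference of sample means divided by the
    standard error of that difference (unequal variances allowed)."""
    vx = statistics.variance(xs)  # sample variance of group X
    vy = statistics.variance(ys)  # sample variance of group Y
    se = math.sqrt(vx / len(xs) + vy / len(ys))
    return (statistics.mean(xs) - statistics.mean(ys)) / se

# Hypothetical P measurements for a sample of Xs and a sample of Ys:
xs = [5, 6, 7, 8, 9]
ys = [1, 2, 3, 4, 5]
print(welch_t(xs, ys))  # → 4.0
```

The statistic expresses the mean difference in units of its standard error; the “once in 20 trials” criterion then amounts to comparing it against the appropriate quantile of a t distribution.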

Added to diary 15 January 2018

# steven-pinker

Thinking of language as an instinct inverts the popular wisdom, especially as it has been passed down in the canon of the humanities and social sciences. Language is no more a cultural invention than is upright posture. It is not a manifestation of a general capacity to use symbols: a three-year-old, we shall see, is a grammatical genius, but is quite incompetent at the visual arts, religious iconography, traffic signs, and the other staples of the semiotics curriculum. Though language is a magnificent ability unique to Homo sapiens among living species, it does not call for sequestering the study of humans from the domain of biology, for a magnificent ability unique to a particular living species is far from unique in the animal kingdom. Some kinds of bats home in on flying insects using Doppler sonar. Some kinds of migratory birds navigate thousands of miles by calibrating the positions of the constellations against the time of day and year. In nature’s talent show we are simply a species of primate with our own act, a knack for communicating information about who did what to whom by modulating the sounds we make when we exhale. Once you begin to look at language not as the ineffable essence of human uniqueness but as a biological adaptation to communicate information, it is no longer as tempting to see language as an insidious shaper of thought, and, we shall see, it is not. […]

To the broody hen the notion would probably seem monstrous that there should be a creature in the world to whom a nestful of eggs was not the utterly fascinating and precious and never-to-be-too-much-sat-upon object which it is to her. Thus we may be sure that, however mysterious some animals’ instincts may appear to us, our instincts will appear no less mysterious to them. And we may conclude that, to the animal which obeys it, every impulse and every step of every instinct shines with its own sufficient light, and seems at the moment the only eternally right and proper thing to do. What voluptuous thrill may not shake a fly, when she at last discovers the one particular leaf, or carrion, or bit of dung, that out of all the world can stimulate her ovipositor to its discharge? Does not the discharge then seem to her the only fitting thing? And need she care or know anything about the future maggot and its food? […]

The universality of complex language is a discovery that fills linguists with awe, and is the first reason to suspect that language is not just any cultural invention but the product of a special human instinct. Cultural inventions vary widely in their sophistication from society to society; within a society, the inventions are generally at the same level of sophistication. Some groups count by carving notches on bones and cook on fires ignited by spinning sticks in logs; others use computers and microwave ovens. Language, however, ruins this correlation. There are Stone Age societies, but there is no such thing as a Stone Age language. Earlier in this century the anthropological linguist Edward Sapir wrote, “When it comes to linguistic form, Plato walks with the Macedonian swineherd, Confucius with the headhunting savage of Assam.” […]

Behind such “simple” sentences as Where did he go? or The guy I met killed himself, used automatically by any English speaker, are dozens of subroutines that arrange the words to express the meaning. Despite decades of effort, no artificially engineered language system comes close to duplicating the person in the street, HAL and C3PO notwithstanding. But though the language engine is invisible to the human user, the trim packages and color schemes are attended to obsessively. Trifling differences between the dialect of the mainstream and the dialect of other groups, like isn’t any versus ain’t no, those books versus them books, and dragged him away versus drug him away, are dignified as badges of “proper grammar.” But they have no more to do with grammatical sophistication than the fact that people in some regions of the United States refer to a certain insect as a dragonfly and people in other regions refer to it as a darning needle, or that English speakers call canines dogs whereas French speakers call them chiens. It is even a bit misleading to call Standard English a “language” and these variations “dialects,” as if there were some meaningful difference between them. The best definition comes from the linguist Max Weinreich: a language is a dialect with an army and a navy. […]

Inside the educational and writing establishments, the rules survive by the same dynamic that perpetuates ritual genital mutilations and college fraternity hazing: I had to go through it and am none the worse, so why should you have it any easier? Anyone daring to overturn a rule by example must always worry that readers will think he or she is ignorant of the rule, rather than challenging it. (I confess that this has deterred me from splitting some splitworthy infinitives.) Perhaps most importantly, since prescriptive rules are so psychologically unnatural that only those with access to the right schooling can abide by them, they serve as shibboleths, differentiating the elite from the rabble.

Indeed, creoles are bona fide languages, with standardized word orders and grammatical markers that were lacking in the pidgin of the immigrants and, aside from the sounds of words, not taken from the language of the colonizers. […]

In contemporary middle-class American culture, parenting is seen as an awesome responsibility, an unforgiving vigil to keep the helpless infant from falling behind in the great race of life. The belief that Motherese is essential to language development is part of the same mentality that sends yuppies to “learning centers” to buy little mittens with bull’s-eyes to help their babies find their hands sooner. One gets some perspective by examining the folk theories about parenting in other cultures. The !Kung San of the Kalahari Desert in southern Africa believe that children must be drilled to sit, stand, and walk. They carefully pile sand around their infants to prop them upright, and sure enough, every one of these infants soon sits up on its own. We find this amusing because we have observed the results of the experiment that the San are unwilling to chance: we don’t teach our children to sit, stand, and walk, and they do it anyway, on their own schedule. But other groups enjoy the same condescension toward us. In many communities of the world, parents do not indulge their children in Motherese. In fact, they do not speak to their prelinguistic children at all, except for occasional demands and rebukes. This is not unreasonable. After all, young children plainly can’t understand a word you say. So why waste your breath in soliloquies? Any sensible person would surely wait until a child has developed speech and more gratifying two-way conversations become possible. As Aunt Mae, a woman living in the South Carolina Piedmont, explained to the anthropologist Shirley Brice Heath: “Now just how crazy is dat? White folks uh hear dey kids say sump’n, dey say it back to ’em, dey aks ’em ’gain and ’gain ’bout things, like they ’posed to be born knowin’.” […]

Here is another interview, this one between a fourteen-year-old girl called Denyse and the late psycholinguist Richard Cromer; the interview was transcribed and analyzed by Cromer’s colleague Sigrid Lipka.

I like opening cards. I had a pile of post this morning and not one of them was a Christmas card. A bank statement I got this morning! [A bank statement? I hope it was good news.] No it wasn’t good news. [Sounds like mine.] I hate…, My mum works over at the, over on the ward and she said “not another bank statement.” I said “it’s the second one in two days.” And she said “Do you want me to go to the bank for you at lunchtime?” and I went “No, I’ll go this time and explain it myself.” I tell you what, my bank are awful. They’ve lost my bank book, you see, and I can’t find it anywhere. I belong to the TSB Bank and I’m thinking of changing my bank ’cause they’re so awful. They keep, they keep losing…[someone comes in to bring some tea] Oh, isn’t that nice. [Uhm. Very good.] They’ve got the habit of doing that. They lose, they’ve lost my bank book twice, in a month, and I think I’ll scream. My mum went yesterday to the bank for me. She said “They’ve lost your bank book again.” I went “Can I scream?” and I went, she went “Yes, go on.” So I hollered. But it is annoying when they do things like that. TSB, Trustees aren’t…uh the best ones to be with actually. They’re hopeless.

I have seen Denyse on videotape, and she comes across as a loquacious, sophisticated conversationalist—all the more so, to American ears, because of her refined British accent. (My bank are awful, by the way, is grammatical in British, though not American, English.) It comes as a surprise to learn that the events she relates so earnestly are figments of her imagination. Denyse has no bank account, so she could not have received any statement in the mail, nor could her bank have lost her bankbook. Though she would talk about a joint bank account she shared with her boyfriend, she had no boyfriend, and obviously had only the most tenuous grasp of the concept “joint bank account” because she complained about the boyfriend taking money out of her side of the account. In other conversations Denyse would engage her listeners with lively tales about the wedding of her sister, her holiday in Scotland with a boy named Danny, and a happy airport reunion with a long-estranged father. But Denyse’s sister is unmarried, Denyse has never been to Scotland, she does not know anyone named Danny, and her father has never been away for any length of time. In fact, Denyse is severely retarded. She never learned to read or write and cannot handle money or any of the other demands of everyday functioning. Denyse was born with spina bifida (“split spine”), a malformation of the vertebrae that leaves the spinal cord unprotected. Spina bifida often results in hydrocephalus, an increase in pressure in the cerebrospinal fluid filling the ventricles (large cavities) of the brain, distending the brain from within. For reasons no one understands, hydrocephalic children occasionally end up like Denyse, significantly retarded but with unimpaired—indeed, overdeveloped—language skills. (Perhaps the ballooning ventricles crush much of the brain tissue necessary for everyday intelligence but leave intact some other portions that can develop language circuitry.)
The various technical terms for the condition include “cocktail party conversation,” “chatterbox syndrome,” and “blathering.” […]

If a language has only two color words, they are for black and white (usually encompassing dark and light, respectively). If it has three, they are for black, white, and red; if four, black, white, red, and either yellow or green. Five adds in both yellow and green; six, blue; seven, brown; more than seven, purple, pink, orange, or gray. But the clinching experiment was carried out in the New Guinea highlands with the Grand Valley Dani, a people speaking one of the black-and-white languages. The psychologist Eleanor Rosch found that the Dani were quicker at learning a new color category that was based on fire-engine red than a category based on an off-red. The way we see colors determines how we learn words for them, not vice versa. […]

How might the combinatorial grammar underlying human language work? The most straightforward way to combine words in order is explained in Michael Frayn’s novel The Tin Men. The protagonist, Goldwasser, is an engineer working at an institute for automation. He must devise a computer system that generates the standard kinds of stories found in the daily papers, like “Paralyzed Girl Determined to Dance Again.” Here he is hand-testing a program that composes stories about royal occasions:

[…]

The psychologist Laura Ann Petitto has a startling demonstration that the arbitrariness of the relation between a symbol and its meaning is deeply entrenched in the child’s mind. Shortly before they turn two, English-speaking children learn the pronouns you and me. Often they reverse them, using you to refer to themselves. The error is forgivable. You and me are “deictic” pronouns, whose referent shifts with the speaker: you refers to you when I use it but to me when you use it. So children may need some time to get that down. After all, Jessica hears her mother refer to her, Jessica, using you; why should she not think that you means “Jessica”? Now, in ASL the sign for “me” is a point to one’s chest; the sign for “you” is a point to one’s partner. What could be more transparent? One would expect that using “you” and “me” in ASL would be as foolproof as knowing how to point, which all babies, deaf and hearing, do before their first birthday. But for the deaf children Petitto studied, pointing is not pointing. The children used the sign of pointing to their conversational partners to mean “me” at exactly the age at which hearing children use the spoken sound you to mean “me.” The children were treating the gesture as a pure linguistic symbol; the fact that it pointed somewhere did not register as being relevant. […]

Moreover, it pays to give objects several labels in mentalese, designating different-sized categories like “cottontail rabbit,” “rabbit,” “mammal,” “animal,” and “living thing.” There is a tradeoff involved in choosing one category over another. It takes less effort to determine that Peter Cottontail is an animal than that he is a cottontail (for example, an animallike motion will suffice for us to recognize that he is an animal, leaving it open whether or not he is a cottontail). But we can predict more new things about Peter if we know he is a cottontail than if we merely know he is an animal. If he is a cottontail, he likes carrots and inhabits open country or woodland clearings; if he is merely an animal, he could eat anything and live anywhere, for all one knows. The middle-sized or “basic-level” category “rabbit” represents a compromise between how easy it is to label something and how much good the label does you. […]

What sense, then, can we make of the suggestion that images, numbers, kinship relations, or logic can be represented in the brain without being couched in words? In the first half of this century, philosophers had an answer: none. Reifying thoughts as things in the head was a logical error, they said. A picture or family tree or number in the head would require a little man, a homunculus, to look at it. And what would be inside his head—even smaller pictures, with an even smaller man looking at them? But the argument was unsound. It took Alan Turing, the brilliant British mathematician and philosopher, to make the idea of a mental representation scientifically respectable. Turing described a hypothetical machine that could be said to engage in reasoning. In fact this simple device, named a Turing machine in his honor, is powerful enough to solve any problem that any computer, past, present, or future, can solve. And it clearly uses an internal symbolic representation—a kind of mentalese—without requiring a little man or any occult processes. […]

The overall impression is that Universal Grammar is like an archetypal body plan found across vast numbers of animals in a phylum. For example, among all the amphibians, reptiles, birds, and mammals, there is a common body architecture, with a segmented backbone, four jointed limbs, a tail, a skull, and so on. The various parts can be grotesquely distorted or stunted across animals: a bat’s wing is a hand, a horse trots on its middle toes, whales’ forelimbs have become flippers and their hindlimbs have shrunken to invisible nubs, and the tiny hammer, anvil, and stirrup of the mammalian middle ear are jaw parts of reptiles. But from newts to elephants, a common topology of the body plan—the shin bone connected to the thigh bone, the thigh bone connected to the hip bone—can be discerned. Many of the differences are caused by minor variations in the relative timing and rate of growth of the parts during embryonic development. Differences among languages are similar. There seems to be a common plan of syntactic, morphological, and phonological rules and principles, with a small set of varying parameters, like a checklist of options. Once set, a parameter can have far-reaching changes on the superficial appearance of the language. […]

Some ancient tribe must have taken over most of Europe, Turkey, Iran, Afghanistan, Pakistan, northern India, western Russia, and parts of China. The idea has excited the imagination of a century of linguists and archeologists, though even today no one really knows who the Indo-Europeans were. Ingenious scholars have made guesses from the reconstructed vocabulary. Words for metals, wheeled vehicles, farm implements, and domesticated animals and plants suggest that the Indo-Europeans were a late Neolithic people. The ecological distributions of the natural objects for which there are Proto-Indo-European words—elm and willow, for example, but not olive or palm—have been used to place the speakers somewhere in the territory from inland northern Europe to southern Russia. Combined with words for patriarch, fort, horse, and weapons, the reconstructions led to an image of a powerful conquering tribe spilling out of an ancestral homeland on horseback to overrun most of Europe and Asia. The word “Aryan” became associated with the Indo-Europeans, and the Nazis claimed them as ancestors. More sanely, archeologists have linked them to artifacts of the Kurgan culture in the southern Russian steppes from around 3500 B.C., a band of tribes that first harnessed the horse for military purposes. […]

Since ears don’t move the way eyes do, the psychologists Peter Eimas and Peter Jusczyk devised a different way to see what a one-month-old finds interesting. They put a switch inside a rubber nipple and hooked up the switch to a tape recorder, so that when the baby sucked, the tape played. As the tape droned on with ba ba ba ba…, the infants showed their boredom by sucking more slowly. But when the syllables changed to pa pa pa…, the infants began to suck more vigorously, to hear more syllables. Moreover, they were using the sixth sense, speech perception, rather than just hearing the syllables as raw sound: two ba’s that differed acoustically from each other as much as a ba differs from a pa, but that are both heard as ba by adults, did not revive the infants’ interest. And infants must be recovering phonemes, like b, from the syllables they are smeared across. Like adults, they hear the same stretch of sound as a b if it appears in a short syllable and as a w if it appears in a long syllable. Infants come equipped with these skills; they do not learn them by listening to their parents’ speech. Kikuyu and Spanish infants discriminate English ba’s and pa’s, which are not used in Kikuyu or Spanish and which their parents cannot tell apart. English-learning infants under the age of six months distinguish phonemes used in Czech, Hindi, and Inslekampx (a Native American language), but English-speaking adults cannot, even with five hundred trials of training or a year of university coursework. Adult ears can tell the sounds apart, though, when the consonants are stripped from the syllables and presented alone as chirpy sounds; they just cannot tell them apart as phonemes. […]

By ten months they are no longer universal phoneticians but have turned into their parents; they do not distinguish Czech or Inslekampx phonemes unless they are Czech or Inslekampx babies. Babies make this transition before they produce or understand words, so their learning cannot depend on correlating sound with meaning. That is, they cannot be listening for the difference in sound between a word they think means bit and a word they think means beet, because they have learned neither word. […]

Between the late twos and the mid-threes, children’s language blooms into fluent grammatical conversation so rapidly that it overwhelms the researchers who study it, and no one has worked out the exact sequence. Sentence length increases steadily, and because grammar is a discrete combinatorial system, the number of syntactic types increases exponentially, doubling every month, reaching the thousands before the third birthday. You can get a feel for this explosion by seeing how the speech of a little boy called Adam grows in sophistication over the period of a year, starting with his early word combinations at the age of two years and three months (“2;3”):

2;3: Play checkers. Big drum. I got horn. A bunny-rabbit walk.

2;4: See marching bear go? Screw part machine. That busy bulldozer truck.

2;5: Now put boots on. Where wrench go? Mommy talking bout lady. What that paper clip doing?

2;6: Write a piece a paper. What that egg doing? I lost a shoe. No, I don’t want to sit seat.

2;7: Where piece a paper go? Ursula has a boot on. Going to see kitten. Put the cigarette down. Dropped a rubber band. Shadow has hat just like that. Rintintin don’t fly, Mommy.

2;8: Let me get down with the boots on. Don’t be afraid a horses. How tiger be so healthy and fly like kite? Joshua throw like a penguin.

2;9: Where Mommy keep her pocket book? Show you something funny. Just like turtle make mud pie.

2;10: Look at that train Ursula brought. I simply don’t want put in chair. You don’t have paper. Do you want little bit, Cromer? I can’t wear it tomorrow.

2;11: That birdie hopping by Missouri in bag. Do want some pie on your face? Why you mixing baby chocolate? I finish drinking all up down my throat. I said why not you coming in? Look at that piece a paper and tell it. Do you want me tie that round? We going turn light on so you can’t see.

3;0: I going come in fourteen minutes. I going wear that to wedding. I see what happens. I have to save them now. Those are not strong mens. They are going sleep in wintertime. You dress me up like a baby elephant.

3;1: I like to play with something else. You know how to put it back together. I gon’ make it like a rocket to blast off with. I put another one on the floor. You went to Boston University? You want to give me some carrots and some beans? Press the button and catch it, sir. I want some other peanuts. Why you put the pacifier in his mouth? Doggies like to climb up.

3;2: So it can’t be cleaned? I broke my racing car. Do you know the lights wents off? What happened to the bridge? When it’s got a flat tire it’s need a go to the station. I dream sometimes. I’m going to mail this so the letter can’t come off. I want to have some espresso. The sun is not too bright. Can I have some sugar? Can I put my head in the mailbox so the mailman can know where I are and put me in the mailbox? Can I keep the screwdriver just like a carpenter. […]

If a person is asked to shadow someone else’s speech (repeat it as the talker is talking) and, simultaneously, to tap a finger to the right or the left hand, the person has a harder time tapping with the right finger than with the left, because the right finger competes with language for the resources of the left hemisphere. Remarkably, the psychologist Ursula Bellugi and her colleagues have shown that the same thing happens when deaf people shadow one-handed signs in American Sign Language: they find it harder to tap with their right finger than with their left finger. The gestures must be tying up the left hemispheres, but it is not because they are gestures; it is because they are linguistic gestures. When a person (either a signer or a speaker) has to shadow a goodbye wave, a thumbs-up sign, or a meaningless gesticulation, the fingers of the right hand and the left hand are slowed down equally. The study of aphasia in the deaf leads to a similar conclusion. Deaf signers with damage to their left hemispheres suffer from forms of sign aphasia that are virtually identical to the aphasia of hearing victims with similar lesions. [They] are unimpaired at nonlinguistic tasks that place similar demands on the eyes and hands, such as gesturing, pantomiming, recognizing faces, and copying designs. Injuries to the right hemisphere of deaf signers produce the opposite pattern: they remain flawless at signing but have difficulty performing visuospatial tasks, just like hearing patients with injured right hemispheres. […]

Just as crumpling a newspaper can appear to scramble the pictures and text, a side view of a brain is a misleading picture of which regions are adjacent. Gazzaniga’s coworkers have developed a technique that uses MRI pictures of brain slices to reconstruct what the person’s cortex would look like if somehow it could be unwrinkled into a flat sheet. […]

Some aphasics leave out verbs, inflections, and function words; others use the wrong ones. Some cannot comprehend complicated sentences involving traces (like The man who the woman kissed (trace) hugged the child) but can comprehend complex sentences involving reflexives (like The girl said that the woman washed herself). Other patients do the reverse. There are Italian patients who mangle their language’s inflectional suffixes (similar to the -ing, -s, and -ed of English) but are almost flawless with its derivational suffixes (similar to -able, -ness, and -er). The mental thesaurus, in particular, is sometimes torn into pieces with clean edges. Among anomic patients (those who have trouble using nouns), different patients have problems with different kinds of nouns. Some can use concrete nouns but not abstract nouns. Some can use abstract nouns but not concrete nouns. Some can use nouns for nonliving things but have trouble with nouns for living things; others can use nouns for living things but have trouble with nouns for nonliving things. Some can name animals and vegetables but not foods, body parts, clothing, vehicles, or furniture. There are patients who have trouble with nouns for anything but animals, patients who cannot name body parts, patients who cannot name objects typically found indoors, patients who cannot name colors, and patients who have trouble with proper names. One patient could not name fruits or vegetables: he could name an abacus and a sphinx but not an apple or a peach. The psychologist Edgar Zurif, jesting the neurologist’s habit of giving a fancy name to every syndrome, has suggested that it be called anomia for bananas, or “banananomia.” […]

Does this mean that the brain has a produce section? No one has found one, nor centers for inflections, traces, phonology, and so on. Pinning brain areas to mental functions has been frustrating. Frequently one finds two patients with lesions in the same general area but with different kinds of impairment, or two patients with the same impairment but lesions in different areas. Sometimes a circumscribed impairment, like the inability to name animals, can be caused by massive lesions, brain-wide degeneration, or a blow to the head. […]

What does natural selection do when faced with these tradeoffs? In general, it will favor an option with benefits to the young organism and costs to the old one over an option with the same average benefit spread out evenly over the life span. This asymmetry is rooted in the inherent asymmetry of death. If a lightning bolt kills a forty-year-old, there will be no fifty-year-old or sixty-year-old to worry about, but there will have been a twenty-year-old and a thirty-year-old. Any bodily feature designed for the benefit of the potential over-forty incarnations, at the expense of the under-forty incarnations, will have gone to waste. And the logic is the same for unforeseeable death at any age: the brute mathematical fact is that all things being equal, there is a better chance of being a young person than being an old person. So genes that strengthen young organisms at the expense of old organisms have the odds in their favor and will tend to accumulate over evolutionary timespans, whatever the bodily system, and the result is overall senescence.

Steven Pinker, The Language Instinct, 1994

Added to diary 26 June 2018

Another historian, Carlo Cipolla, noted:

In preindustrial Europe, the purchase of a garment or of the cloth for a garment remained a luxury the common people could only afford a few times in their lives. One of the main preoccupations of hospital administration was to ensure that the clothes of the deceased should not be usurped but should be given to lawful inheritors. During epidemics of plague, the town authorities had to struggle to confiscate the clothes of the dead and to burn them: people waited for others to die so as to take over their clothes—which generally had the effect of spreading the epidemic.

Steven Pinker. Enlightenment Now: The Case for Reason, Science, Humanism, and Progress, 2017

Added to diary 21 April 2018

The “natural affection” is far from automatic. Daly and Wilson, and later the anthropologist Edward Hagen, have proposed that postpartum depression and its milder version, the baby blues, are not a hormonal malfunction but the emotional implementation of the decision period for keeping a child. (Postpartum depression as adaptation: Hagen, 1999; Daly & Wilson, 1988, pp. 61–77.) Mothers with postpartum depression often feel emotionally detached from their newborns and may harbor intrusive thoughts of harming them. Mild depression, psychologists have found, often gives people a more accurate appraisal of their life prospects than the rose-tinted view we normally enjoy. The typical rumination of a depressed new mother—how will I cope with this burden?—has been a legitimate question for mothers throughout history who faced the weighty choice between a definite tragedy now and the possibility of an even greater tragedy later. As the situation becomes manageable and the blues dissipate, many women report falling in love with their baby, coming to see it as a uniquely wonderful individual.

Hagen examined the psychiatric literature on postpartum depression to test five predictions of the theory that it is an evaluation period for investing in a newborn. As predicted, postpartum depression is more common in women who lack social support (they are single, separated, dissatisfied with their marriage, or distant from their parents), who had had a complicated delivery or an unhealthy infant, and who were unemployed or whose husbands were unemployed. He found reports of postpartum depression in a number of non-Western populations which showed the same risk factors (though he could not find enough suitable studies of traditional kin-based societies). Finally, postpartum depression is only loosely tied to measured hormonal imbalances, suggesting that it is not a malfunction but a design feature.

Many cultural traditions work to distance people’s emotions from a newborn until its survival seems likely. People may be enjoined from touching, naming, or granting legal personhood to a baby until a danger period is over, and the transition is often marked by a joyful ceremony, as in our own customs of the christening and the bris. Some traditions have a series of milestones, such as traditional Judaism, which grants full legal personhood to a baby only after it has survived thirty days.

Steven Pinker, The Better Angels of our Nature, Chapter 7

Added to diary 15 January 2018

We have seen that during periods of humanitarian reform, a recognition of the rights of one group can lead to a recognition of others by analogy, as when the despotism of kings was analogized to the despotism of husbands, and when two centuries later the civil rights movement inspired the women’s rights movement. The protection of abused children also benefited from an analogy—in this case, believe it or not, with animals.

In Manhattan in 1874, the neighbors of ten-year-old Mary Ellen McCormack, an orphan being raised by an adoptive mother and her second husband, noticed suspicious cuts and bruises on the girl’s body. They reported her to the Department of Public Charities and Correction, which administered the city’s jails, poorhouses, orphanages, and insane asylums. Since there were no laws that specifically protected children, the caseworker contacted the American Society for the Protection of Animals. The society’s founder saw an analogy between the plight of the girl and the plight of the horses he rescued from violent stable owners. He engaged a lawyer who presented a creative interpretation of habeas corpus to the New York State Supreme Court and petitioned to have her removed from her home. The girl calmly testified:

Mamma has been in the habit of whipping and beating me almost every day. She used to whip me with a twisted whip—a rawhide. I have now on my head two black-and-blue marks which were made by Mamma with the whip, and a cut on the left side of my forehead which was made by a pair of scissors in Mamma’s hand…. I never dared speak to anybody, because if I did I would get whipped.

The New York Times reprinted the testimony in an article entitled “Inhumane Treatment of a Little Waif,” and the girl was removed from the home and eventually adopted by her caseworker. Her lawyer set up the New York Society for the Prevention of Cruelty to Children, the first protective agency for children anywhere in the world. Together with other agencies founded in its wake, it set up shelters for battered children and lobbied for laws that punished their abusive parents. Similarly, in England the first legal case to protect a child against an abusive parent was taken up by the Royal Society for the Prevention of Cruelty to Animals, and out of it grew the National Society for the Prevention of Cruelty to Children.

Steven Pinker, The Better Angels of our Nature, Chapter 7

Added to diary 15 January 2018

John Maynard Smith, the biologist who first applied game theory to evolution, modeled this kind of standoff as a War of Attrition game. Each of two contestants competes for a valuable resource by trying to outlast the other, steadily accumulating costs as he waits. In the original scenario, they might be heavily armored animals competing for a territory who stare at each other until one of them leaves; the costs are the time and energy the animals waste in the standoff, which they could otherwise use in catching food or pursuing mates. A game of attrition is mathematically equivalent to an auction in which the highest bidder wins the prize and both sides have to pay the loser’s low bid. And of course it can be analogized to a war in which the expenditure is reckoned in the lives of soldiers.

The War of Attrition is one of those paradoxical scenarios in game theory (like the Prisoner’s Dilemma, the Tragedy of the Commons, and the Dollar Auction) in which a set of rational actors pursuing their interests end up worse off than if they had put their heads together and come to a collective and binding agreement. One might think that in an attrition game each side should do what bidders on eBay are advised to do: decide how much the contested resource is worth and bid only up to that limit. The problem is that this strategy can be gamed by another bidder. All he has to do is bid one more dollar (or wait just a bit longer, or commit another surge of soldiers), and he wins. He gets the prize for close to the amount you think it is worth, while you have to forfeit that amount too, without getting anything in return. You would be crazy to let that happen, so you are tempted to use the strategy “Always outbid him by a dollar,” which he is tempted to adopt as well. You can see where this leads. Thanks to the perverse logic of an attrition game, in which the loser pays too, the bidders may keep bidding after the point at which the expenditure exceeds the value of the prize. They can no longer win, but each side hopes not to lose as much. The technical term for this outcome in game theory is “a ruinous situation.” It is also called a “Pyrrhic victory”; the military analogy is profound.

One strategy that can evolve in a War of Attrition game (where the expenditure, recall, is in time) is for each player to wait a random amount of time, with an average wait time that is equivalent in value to what the resource is worth to them. In the long run, each player gets good value for his expenditure, but because the waiting times are random, neither is able to predict the surrender time of the other and reliably outlast him. In other words, they follow the rule: At every instant throw a pair of dice, and if they come up (say) 4, concede; if not, throw them again. This is, of course, like a Poisson process, and by now you know that it leads to an exponential distribution of wait times (since a longer and longer wait depends on a less and less probable run of tosses). Since the contest ends when the first side throws in the towel, the contest durations will also be exponentially distributed. Returning to our model where the expenditures are in soldiers rather than seconds, if real wars of attrition were like the “War of Attrition” modeled in game theory, and if all else were equal, then wars of attrition would fall into an exponential distribution of magnitudes.

Of course, real wars fall into a power-law distribution, which has a thicker tail than an exponential (in this case, a greater number of severe wars). But an exponential can be transformed into a power law if the values are modulated by a second exponential process pushing in the opposite direction. And attrition games have a twist that might do just that. If one side in an attrition game were to leak its intention to concede in the next instant by, say, twitching or blanching or showing some other sign of nervousness, its opponent could capitalize on the “tell” by waiting just a bit longer, and it would win the prize every time. As Richard Dawkins has put it, in a species that often takes part in wars of attrition, one expects the evolution of a poker face.

Now, one also might have guessed that organisms would capitalize on the opposite kind of signal, a sign of continuing resolve rather than impending surrender. If a contestant could adopt some defiant posture that means “I’ll stand my ground; I won’t back down,” that would make it rational for his opposite number to give up and cut its losses rather than escalate to mutual ruin. But there’s a reason we call it “posturing.” Any coward can cross his arms and glower, but the other side can simply call his bluff. Only if a signal is costly—if the defiant party holds his hand over a candle, or cuts his arm with a knife—can he show that he means business. (Of course, paying a self-imposed cost would be worthwhile only if the prize is especially valuable to him, or if he had reason to believe that he could prevail over his opponent if the contest escalated.)

In the case of a war of attrition, one can imagine a leader who has a changing willingness to suffer a cost over time, increasing as the conflict proceeds and his resolve toughens. His motto would be: “We fight on so that our boys shall not have died in vain.” This mindset, known as loss aversion, the sunk-cost fallacy, and throwing good money after bad, is patently irrational, but it is surprisingly pervasive in human decision-making. People stay in an abusive marriage because of the years they have already put into it, or sit through a bad movie because they have already paid for the ticket, or try to reverse a gambling loss by doubling their next bet, or pour money into a boondoggle because they’ve already poured so much money into it. Though psychologists don’t fully understand why people are suckers for sunk costs, a common explanation is that it signals a public commitment. The person is announcing: “When I make a decision, I’m not so weak, stupid, or indecisive that I can be easily talked out of it.” In a contest of resolve like an attrition game, loss aversion could serve as a costly and hence credible signal that the contestant is not about to concede, preempting his opponent’s strategy of outlasting him just one more round.

I already mentioned some evidence from Richardson’s dataset which suggests that combatants do fight longer when a war is more lethal: small wars show a higher probability of coming to an end with each succeeding year than do large wars. The magnitude numbers in the Correlates of War Dataset also show signs of escalating commitment: wars that are longer in duration are not just costlier in fatalities; they are costlier than one would expect from their durations alone. If we pop back from the statistics of war to the conduct of actual wars, we can see the mechanism at work. Many of the bloodiest wars in history owe their destructiveness to leaders on one or both sides pursuing a blatantly irrational loss-aversion strategy. Hitler fought the last months of World War II with a maniacal fury well past the point when defeat was all but certain, as did Japan. Lyndon Johnson’s repeated escalations of the Vietnam War inspired a protest song that has served as a summary of people’s understanding of that destructive war: “We were waist-deep in the Big Muddy; The big fool said to push on.”

The systems biologist Jean-Baptiste Michel has pointed out to me how escalating commitments in a war of attrition could produce a power-law distribution. All we need to assume is that leaders keep escalating as a constant proportion of their past commitment—the size of each surge is, say, 10 percent of the number of soldiers that have fought so far. A constant proportional increase would be consistent with the well-known discovery in psychology called Weber’s Law: for an increase in intensity to be noticeable, it must be a constant proportion of the existing intensity. (If a room is illuminated by ten lightbulbs, you’ll notice a brightening when an eleventh is switched on, but if it is illuminated by a hundred lightbulbs, you won’t notice the hundred and first; someone would have to switch on another ten bulbs before you noticed the brightening.) Richardson observed that people perceive lost lives in the same way: “Contrast for example the many days of newspaper-sympathy over the loss of the British submarine Thetis in time of peace with the terse announcement of similar losses during the war. This contrast may be regarded as an example of the Weber-Fechner doctrine that an increment is judged relative to the previous amount.” The psychologist Paul Slovic has recently reviewed several experiments that support this observation. The quotation falsely attributed to Stalin, “One death is a tragedy; a million deaths is a statistic,” gets the numbers wrong but captures a real fact about human psychology.

If escalations are proportional to past commitments (and a constant proportion of soldiers sent to the battlefield are killed in battle), then losses will increase exponentially as a war drags on, like compound interest. And if wars are attrition games, their durations will also be distributed exponentially. Recall the mathematical law that a variable will fall into a power-law distribution if it is an exponential function of a second variable that is distributed exponentially. My own guess is that the combination of escalation and attrition is the best explanation for the power-law distribution of war magnitudes.

Steven Pinker, The Better Angels of our Nature, Chapter 5
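The mechanism the chapter describes—exponentially distributed war durations combined with exponentially escalating losses—can be checked with a few lines of simulation. This is a sketch with made-up parameters, not Pinker's, Richardson's, or Michel's actual data:

```python
import math
import random

random.seed(42)

# Hypothetical simulation of the mechanism described above:
# war durations follow an exponential distribution (the attrition game),
# and losses grow exponentially with duration (proportional escalation).
# An exponential function of an exponentially distributed variable is
# power-law distributed: P(losses > x) ~ x**(-lam/g).

lam = 1.0  # rate at which wars end (illustrative value)
g = 0.5    # proportional escalation rate of cumulative losses (illustrative)

losses = [math.exp(g * random.expovariate(lam)) for _ in range(100_000)]

def tail_fraction(xs, x):
    """Fraction of samples exceeding x."""
    return sum(1 for v in xs if v > x) / len(xs)

# For a power-law tail with exponent a, tail_fraction scales as x**(-a),
# so comparing two thresholds recovers a; statistically it should land
# near lam/g = 2.0.
a_est = math.log(tail_fraction(losses, 2) / tail_fraction(losses, 20)) / math.log(10)
print(a_est)
```

On a log-log plot the simulated tail is a straight line of slope −lam/g, which is exactly the "thick tail" behavior the passage attributes to real war magnitudes.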

Though conventional terrorism, as John Kerry gaffed, is a nuisance to be policed rather than a threat to the fabric of life, terrorism with weapons of mass destruction would be something else entirely. The prospect of an attack that would kill millions of people is not just theoretically possible but consistent with the statistics of terrorism. The computer scientists Aaron Clauset and Maxwell Young and the political scientist Kristian Gleditsch plotted the death tolls of eleven thousand terrorist attacks on log-log paper and saw them fall into a neat straight line. Terrorist attacks obey a power-law distribution, which means they are generated by mechanisms that make extreme events unlikely, but not astronomically unlikely.

The trio suggested a simple model that is a bit like the one that Jean-Baptiste Michel and I proposed for wars, invoking nothing fancier than a combination of exponentials. As terrorists invest more time into plotting their attack, the death toll can go up exponentially: a plot that takes twice as long to plan can kill, say, four times as many people. To be concrete, an attack by a single suicide bomber, which usually kills in the single digits, can be planned in a few days or weeks. The 2004 Madrid train bombings, which killed around two hundred, took six months to plan, and 9/11, which killed three thousand, took two years. But terrorists live on borrowed time: every day that a plot drags on brings the possibility that it will be disrupted, aborted, or executed prematurely. If the probability is constant, the plot durations will be distributed exponentially. (Cronin, recall, showed that terrorist organizations drop like flies over time, falling into an exponential curve.) Combine exponentially growing damage with an exponentially shrinking chance of success, and you get a power law, with its disconcertingly thick tail. Given the presence of weapons of mass destruction in the real world, and religious fanatics willing to wreak untold damage for a higher cause, a lengthy conspiracy producing a horrendous death toll is within the realm of thinkable probabilities.

A statistical model, of course, is not a crystal ball. Even if we could extrapolate the line of existing data points, the massive terrorist attacks in the tail are still extremely (albeit not astronomically) unlikely. More to the point, we can’t extrapolate it. In practice, as you get to the tail of a power-law distribution, the data points start to misbehave, scattering around the line or warping it downward to very low probabilities. The statistical spectrum of terrorist damage reminds us not to dismiss the worst-case scenarios, but it doesn’t tell us how likely they are.

Steven Pinker, The Better Angels of our Nature, Chapter 6

Added to diary 15 January 2018

# the-economist-contributors

GENTRIFIER has surpassed many worthier slurs to become the dirtiest word in American cities. In the popular telling, hordes of well-to-do whites are descending upon poor, minority neighbourhoods that were made to endure decades of discrimination. With their avocado on toast, beard oil and cappuccinos, these people snuff out local culture. As rents rise, lifelong residents are evicted and forced to leave. In this view, the quintessential scene might be one witnessed in Oakland, California, where a miserable-looking homeless encampment rests a mere ten-minute walk from a Whole Foods landscaped with palm trees and bougainvillea, offering chia and flax seed upon entry. An ancient, sinister force lurks behind the overpriced produce. “‘Gentrification’ is but a more pleasing name for white supremacy,” wrote Ta-Nehisi Coates. It is “the interest on enslavement, the interest on Jim Crow, the interest on redlining, compounding across the years.”

This story is better described as an urban myth. The supposed ills of gentrification—which might be more neutrally defined as poorer urban neighbourhoods becoming wealthier—lack rigorous support. The most careful empirical analyses conducted by urban economists have failed to detect a rise in displacement within gentrifying neighbourhoods. Often, they find that poor residents are more likely to stay put if they live in these areas. At the same time, the benefits of gentrification are scarcely considered. Longtime residents reap the rewards of reduced crime and better amenities. Those lucky enough to own their homes come out richer. The left usually bemoans the lack of investment in historically non-white neighbourhoods, white flight from city centres and economic segregation. Yet gentrification straightforwardly reverses each of those regrettable trends.

The anti-gentrification brigades often cite anecdotes from residents forced to move. Yet the data suggest a different story. An influential study by Lance Freeman and Frank Braconi found that poor residents living in New York’s gentrifying neighbourhoods during the 1990s were actually less likely to move than poor residents of non-gentrifying areas. A follow-up study by Mr Freeman, using a nationwide sample, found scant association between gentrification and displacement. A more recent examination found that financially vulnerable residents in Philadelphia—those with low credit scores and no mortgages—are no more likely to move if they live in a gentrifying neighbourhood.

These studies undermine the widely held belief that for every horrid kale-munching millennial moving in, one longtime resident must be chucked out. The surprising result is explained by three underlying trends.

The first is that poor Americans are obliged to move very frequently, regardless of the circumstances of their district, as the Princeton sociologist Matthew Desmond so harrowingly demonstrated in his research on eviction. The second is that poor neighbourhoods have lacked investment for decades, and so have considerable slack in their commercial and residential property markets. A lot of wealthier city dwellers can thus move in without pushing out incumbent residents or businesses. “Given the typical pattern of low-income renter mobility in New York City, a neighbourhood could go from a 30% poverty population to 12% in as few as ten years without any displacement whatsoever,” noted Messrs Freeman and Braconi in their study. Indeed, the number of poor people living in New York’s gentrifying neighbourhoods barely budged from 1990 to 2014, according to a study by New York University’s Furman Centre. […]

Residents of gentrifying neighbourhoods who own their homes have reaped considerable windfalls. One black resident of Logan Circle, a residential district in downtown Washington, bought his home in 1993 for $130,000. He recently sold it for $1.6m. Businesses gain from having more customers, with more to spend. Having new shops, like well-stocked grocery stores, and sources of employment nearby can reduce commuting costs and time. Tax collection surges and so does political clout. Crime, already on the decline in American city centres, seems to fall even further in gentrifying neighbourhoods, as MIT economists observed after Cambridge, Massachusetts, undid its rent-control scheme.

Those who bemoan segregation and gentrification simultaneously risk contradiction. The introduction of affluent, white residents into poor, minority districts boosts racial and economic integration. […]

A second reason gentrification is disliked is culture. The argument is that the arrival of yuppie professionals sipping kombucha will alter the character of a place in an unseemly way. “Don’t Brooklyn my Detroit” T-shirts are now a common sight in Motor City. In truth, Detroit would do well with a bit more Brooklyn. Across big American cities, for every gentrifying neighbourhood ten remain poor. Opposing gentrification has become a way for people to display their anti-racist bona fides. This leads to the exaggerated equation of gentrification with white supremacy. Such objections parallel those made by white NIMBYs who fret that a new bus stop or apartment complex will bring people who might also alter the culture of their neighbourhood—for the worse.

The term gentrification has become tarred. But called by any other name—revitalisation, reinvestment, renaissance—it would smell sweet.

In praise of gentrification, The Economist, June 21st 2018

Added to diary 26 June 2018

The four biggest British supermarket chains all offer some form of price-match guarantee, promising that their customers could not save any money by shopping elsewhere. On the face of it, they seem like a good thing: a sign that fierce competition is lowering prices. But economists have long been suspicious of such promises, which can leave consumers worse off.

The problem is that price-match guarantees can blunt the logic of competition. Suppose a car dealership worries about a rival undercutting its prices and stealing customers. Even if the dealership can respond by cutting its prices too, it might lose sales in the interim. A price-match guarantee offers a pre-emptive defence. […]

There is no evidence that Britain’s current crop of price-match guarantees has hurt consumers. However, researchers have linked similar promises elsewhere to sustained high prices for groceries (Hess and Gerstner 1991), tyres (Arbatskaya et al. 1999a) and even shares (Edlin and Emch 1998). Wonks have confirmed the finding in the laboratory, too (Datta and Offenberg 2005). […]

Another mooted justification for the guarantees is price discrimination: selling to different types of consumers at different prices. For instance, if some customers are too busy to shop around, a firm can sell to them at a high price while using a guarantee to attract more price-sensitive, hassle-tolerating customers. This is great for profits, but sometimes benefits consumers too.

Finally, a price-guarantee may be an attempt to signal genuinely lower prices and thus stand out from the crowd. That is probably how most consumers interpret them. This works only if there is a genuine difference in efficiency between rival stores, such that only one can afford to sell on the cheap. Then, the nimble firm might want a price war in order to speed the lumbering one’s demise. In such circumstances, any attempt by the lumbering firm to collude tacitly is futile; if it offers a guarantee, its bluff will be called. So when low-cost firms make such promises, consumers can take them as a sign of a competitive offering.

This does not seem to be what is happening in Britain, however. There, it is the more expensive supermarkets that are promising to match each other’s prices. Only one has pledged to match the deals on offer at Aldi and Lidl, nimble low-cost rivals that are making inroads into the market.

The Economist, Guaranteed profits, 12 February 2015

Added to diary 28 March 2018
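The blunting mechanism the article describes—undercutting wins no customers once the rival matches—can be made concrete with a toy two-firm payoff calculation (hypothetical numbers, not from the article):

```python
# Two firms, 100 customers who buy at the lowest effective price.
# Without a guarantee, undercutting steals the whole market; with firm 1
# offering a price-match guarantee, firm 2 gains nothing by cutting prices.

HIGH, LOW = 10.0, 8.0

def profits(p1, p2, firm1_matches):
    """Return (firm1, firm2) revenue; a matching firm sells at the min price."""
    eff1 = min(p1, p2) if firm1_matches else p1
    if eff1 < p2:
        q1, q2 = 100, 0
    elif eff1 > p2:
        q1, q2 = 0, 100
    else:
        q1 = q2 = 50
    return eff1 * q1, p2 * q2

# Without a guarantee, firm 2 gains by undercutting firm 1:
stay = profits(HIGH, HIGH, firm1_matches=False)[1]       # 500.0
undercut = profits(HIGH, LOW, firm1_matches=False)[1]    # 800.0

# With firm 1's guarantee, the price cut is matched instantly, so firm 2
# keeps only half the market at the lower price:
stay_m = profits(HIGH, HIGH, firm1_matches=True)[1]      # 500.0
undercut_m = profits(HIGH, LOW, firm1_matches=True)[1]   # 400.0

print(stay, undercut, stay_m, undercut_m)
```

The guarantee flips the sign of the incentive: 800 > 500 becomes 400 < 500, so neither firm starts a price war and both can sustain the high price—the "pre-emptive defence" the article mentions.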

For some British millennials, Monzo is as close to a cult as a bank can be. Its coral-pink cards are hard to miss. “People in bars will get very excited if they see you are a fellow Monzo user,” says Mr Matthews, who is 29.

The Economist, Attack of the minnows, 17 February 2018

Added to diary 19 February 2018

Around 40% of Japanese are still virgins at the age of 34, whereas 90% of men and women in America have had sex before turning 22.

The Economist, Seventh heaven at 7-Eleven, 17 February 2018

Added to diary 19 February 2018

A judge dismissed a case against Taylor Swift brought by two songwriters, who argued that the lyrics in her single, “Shake it Off”, infringed on their copyright. The judge ruled that the phrase “haters gonna hate” lacked “the modicum of originality and creativity required for copyright protection”, observing that American popular culture was already “heavily steeped in the concepts of players, haters and player haters”.

The Economist, The world this week, 17 February 2018

Added to diary 19 February 2018

# thomas-chaney

Economists build a database from 4000-year-old clay tablets, plug it into a trade model, and use it to locate lost Bronze Age cities.

They had clay tablets from ancient merchants, saying things like:

(I paid) 6.5 shekels (of tin) from the Town of the Kanishites to Timelkiya. I paid 2 shekels of silver and 2 shekels of tin for the hire of a donkey from Timelkiya to Hurama. From Hurama to Kaneš I paid 4.5 shekels of silver and 4.5 shekels of tin for the hire of a donkey and a packer.

This allowed them to measure the amount of trade between any two cities.

Then they constructed a theoretical model of the expected amount of trade between any two cities, as inversely proportional to the distance between the two cities. Given the data on amount of trade, and the locations of the known cities, they were able to estimate the lost locations:

As long as we have data on trade between known and lost cities, with sufficiently many known compared to lost cities, a structural gravity model is able to estimate the likely geographic coordinates of lost cities […]

We build a simple Ricardian model of trade. Further imposing that bilateral trade frictions can be summarized by a power function of geographic distance, our model makes predictions on the number of transactions between city pairs, which is observed in our data. The model can be estimated solely on bilateral trade flows and on the geographic location of at least some cities.

Gojko Barjamovic, Thomas Chaney, Kerem A. Coşar, Ali Hortaçsu, Trade, Merchants, and the Lost Cities of the Bronze Age, 2017

Added to diary 15 January 2018
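The estimation idea in the quoted passage can be sketched in a few lines. This is a toy, not the paper's method in full: the coordinates, trade flows, and decay parameter below are all invented, and I fix the gravity parameters that the paper estimates jointly. Given trade between known cities and one "lost" city, we search for the coordinates that best fit trade ∝ distance^(−α):

```python
import numpy as np

# Toy setup (all numbers invented, not the paper's): three cities with known
# coordinates, one "lost" city whose location we recover from trade flows alone.
known = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]])  # known city coordinates
alpha, k = 2.0, 10.0  # distance-decay and scale, fixed here; the paper estimates them
true_loc = np.array([1.0, 1.0])  # ground truth, used only to generate synthetic trade
trade = k * np.linalg.norm(known - true_loc, axis=1) ** (-alpha)

# Grid search for the lost-city coordinates that best fit the observed flows.
grid = np.linspace(0.1, 1.9, 181)
best, best_loss = None, np.inf
for x in grid:
    for y in grid:
        d = np.linalg.norm(known - np.array([x, y]), axis=1)
        loss = np.sum((np.log(trade) - np.log(k * d ** (-alpha))) ** 2)
        if loss < best_loss:
            best, best_loss = (x, y), loss
# `best` lands at (1.0, 1.0), the location that generated the trade data.
```

With real tablets the decay parameter and scale are estimated from trade among the known cities at the same time as the lost coordinates, which is why the authors need "sufficiently many known compared to lost cities".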

# thomas-paine

I view things as they are, without regard to place or person; my country is the world, and my religion is to do good.

Thomas Paine, Rights of Man, Part 2.7 Chapter V, 1791

Added to diary 26 January 2018

Each of those churches show certain books, which they call revelation, or the word of God. The Jews say, that their word of God was given by God to Moses, face to face; the Christians say, that their word of God came by divine inspiration: and the Turks say, that their word of God (the Koran) was brought by an angel from Heaven. Each of those churches accuse the other of unbelief; and for my own part, I disbelieve them all.

Thomas Paine, The Age of Reason, Part I, 1794

Added to diary 26 January 2018

# The elevator

One day, on her way to work, a woman decides that she’s going to take a mass-transit system instead of her usual method. Just before she gets on board, she looks at an app on her phone that gives her position with the exact latitude and longitude. The journey is smooth and perfectly satisfactory, despite frequent stops, and when the woman disembarks she checks her phone again. Her latitude and longitude haven’t changed at all. What’s going on? The answer: this lady works in a tall office building, and rather than taking the stairs, she’s taken the lift. We don’t tend to think of elevators as mass-transportation systems, but they are: they move hundreds of millions of people every day, and China alone is installing 700,000 elevators a year. The tallest building in the world, the Burj Khalifa in Dubai, has more than 300,000 square metres of floor space; the brilliantly engineered Sears Tower in Chicago has more than 400,000. Imagine such skyscrapers sliced into fifty or sixty low-rise chunks, then surrounding each chunk with a car park and connecting all the car parks together with roads, and you’d have an office park the size of a small town. The fact that so many people can work together in huge buildings on compact sites is possible only because of the elevator.

# The index fund

Index funds now seem completely natural – part of the very language of investing. But as recently as 1976, they didn’t exist. Before you can have an index fund, you need an index. In 1884, a financial journalist called Charles Dow had the bright idea that he could take the price of some famous company stocks and average them, then publish the average going up and down. He ended up founding not only the Dow Jones company, but also the Wall Street Journal. […] And Samuelson went further: he said that since professional investors didn’t seem to be able to beat the market, somebody should set up an index fund – a way for ordinary people to invest in the performance of the stock market as a whole, without paying a fortune in fees for fancy professional fund managers to try, and fail, to be clever. At this point, something interesting happened: a practical businessman actually paid attention to what an academic economist had written. John Bogle had just founded a company called Vanguard, whose mission was to provide simple mutual funds for ordinary investors – no fuss, no fancy stuff, low fees. And what could be simpler and cheaper than an index fund – as recommended by the world’s most respected economist? And so Bogle decided he was going to make Paul Samuelson’s wish come true. He set up the world’s first index fund, and waited for investors to rush in.

# Epilogue: Light

Back in the mid-1990s the economist William Nordhaus conducted a series of simple experiments. One day, for example, he used a prehistoric technology: he lit a wood fire. Humans have been gathering and chopping and burning wood for tens of thousands of years. But Nordhaus also had a piece of high-tech equipment with him: a Minolta light meter. He burned 20 pounds of wood, kept track of how long it burned for and carefully recorded the dim, flickering firelight with his meter. Another day, Nordhaus bought a Roman oil lamp – a genuine antique, he was assured – fitted it with a wick, and filled it with cold-pressed sesame oil. He lit the lamp and watched the oil burn down, again using the light meter to measure its soft, even glow. Nordhaus’s open wood fire had burned for just three hours when fuelled with 20 pounds of wood. But a mere eggcup of oil burned all day, and more brightly and controllably. […] To see why [keeping track of inflation] is difficult, consider the price of travelling from – say – Lisbon in Portugal to Luanda in Angola. When the journey was first made by Portuguese explorers, it would have been an epic expedition, taking months. Later, by steam ship, it would have taken a few days. Then by plane, a few hours. An economic historian who wanted to measure inflation could start by tracking the price of passage on the steamer. But then, once an air route opens up, which price do you look at? Perhaps you simply switch to the airline ticket price once more people start flying than sailing. But flying is a different service – faster, more convenient. If more travellers are willing to pay twice as much to fly, it hardly makes sense for inflation statistics to record that the cost of the journey has suddenly doubled. How, then, do we measure inflation when what we’re able to buy changes so radically over time? 
[…] Because we don’t have a good way to compare an iPod today to a gramophone a century ago, we don’t really have a good way to quantify how much all the inventions described in this book have really expanded the choices available to us. We probably never will. But we can try – and Bill Nordhaus was trying as he fooled around with wood fires, antique oil lamps and Minolta light meters. He wanted to unbundle the cost of a single quality that humans have cared deeply about since time immemorial, using the state-of-the-art technology of different ages: illumination. That’s measured in lumens, or lumen-hours. A candle, for example, gives off 13 lumens while it burns; a typical modern light bulb is almost a hundred times as bright as that. […] Switch off a light bulb for an hour and you’re saving illumination that would have cost our ancestors all week to create. It would have taken Benjamin Franklin’s contemporaries all afternoon. But someone in a rich industrial economy today could earn the money to buy that illumination in a fraction of a second.

Tim Harford, Fifty inventions that shaped the modern economy

Here’s Nordhaus’ paper, “Do Real-Output and Real-Wage Measures Capture Reality?”

Added to diary 15 January 2018

# tracy-chapman

This training ground for punks and thieves
Home of poor white retirees
Who didn’t bail
And couldn’t sell
When color made the grass less green

Tracy Chapman, “3000 Miles”, 13 September 2005

Added to diary 12 July 2018

# vernon-smith

At the heart of economics is a scientific mystery: How is it that the pricing system accomplishes the world’s work without anyone being in charge? Like language, no one invented it. None of us could have invented it, and its operation depends in no way on anyone’s comprehension or understanding of it. Somehow, it is a product of culture; yet in important ways, the pricing system is what makes culture possible. Smash it in the command economy and it rises as a Phoenix with a thousand heads, as the command system becomes shot through with bribery, favors, barter and underground exchange. Indeed, these latter elements may prevent the command system from collapsing. No law and no police force can stop it, for the police may become as large a part of the problem as of the solution. The pricing system – How is order produced from freedom of choice? – is a scientific mystery as deep, fundamental, and inspiring as that of the expanding universe or the forces that bind matter. For to understand it is to understand something about how the human species got from hunting-gathering through the agricultural and industrial revolutions to a state of affluence that allows us to ask questions about the expanding universe, the weak and strong forces that bind particles, and the nature of the pricing system, itself.

Vernon L. Smith, Microeconomic Systems as an Experimental Science, The American Economic Review, Vol. 72, No. 5 (Dec., 1982), pp. 923-955

Added to diary 19 May 2018

# wikipedia-contributors

A hyperforeignism is a type of qualitative hypercorrection that involves speakers misidentifying the distribution of a pattern found in loanwords and extending it to other environments, including words and phrases not borrowed from the language that the pattern derives from. The result of this process does not reflect the rules of either language. For example, habanero is sometimes pronounced as though it were spelled with an <ñ> (habañero), which is not the Spanish form from which the English word was borrowed. […]

A number of words of French origin feature a final <e> that is pronounced in English but silent in the original language. For example, forte (used to mean “strength” in English as in “not my forte”) is often pronounced /ˈfɔːrteɪ/ or /fɔːrˈteɪ/, by confusion with the Italian musical term of the same spelling (and same Latin origin, but meaning “loud”), which is pronounced [ˈfɔrte]. In French, the word “forte” is pronounced [fɔʁt], with silent final <e> […].

Wikipedia Contributors, Hyperforeignism

Added to diary 13 February 2019

The Debate between bird and fish is a literature essay of the Sumerian language, on clay tablets from the mid to late 3rd millennium BC. […]

# The debate, in short summary

## Fish speaks first

The initial speech of Fish:

“…Bird…there is no insult, ..! Croaking …noise in the marshes …squawking! Forever gobbling away greedily, while your heart is dripping with evil! Standing on the plain you can keep pecking away until they chase you off! The farmer’s sons lay lines and nets for you..(and continues)..You cause damage in the vegetable plots..(more)..Bird, you are shameless: you fill the courtyard with your droppings. The courtyard sweeper-boy who cleans the house chases after you…(etc)”

The 2nd and 3rd paragraphs continue:

“They bring you into the fattening shed. They let you moo like cattle, bleat like sheep. They pour out cool water in jugs for you. They drag you away for the daily sacrifice.” (the 2nd, 3rd paragraphs continue for several lines)

## Bird’s initial retort

Bird replies:

“How has your heart become so arrogant, while you yourself are so lowly? Your mouth is flabby(?), but although your mouth goes all the way round, you can not see behind you. You are bereft of hips, as also of arms, hands and feet – try bending your neck to your feet! Your smell is awful; you make people throw-up; they sneer at you! ….”

Bird continues:

“But I am the beautiful and clever Bird! Fine artistry went into my adornment. But no skill has been expended on your holy shaping! Strutting about in the royal palace is my glory; my warbling is considered a decoration in the courtyard. The sound I produce, in all its sweetness, is a delight for the person of Culgi, son of Enlil….”

## Šulgi rules in favor of Bird

After the initial speech and retort, Fish attacks Bird’s nest. Battle ensues between the two of them, in more words. Near the end Bird requests that Šulgi decide in Bird’s favor:

Šulgi proclaims:

“To strut about in the E-kur is a glory for Bird, as its singing is sweet. At Enlil’s holy table, Bird …precedence over you …! “(lacuna)…Bird …Because Bird was victorious over Fish in the dispute between Fish and Bird, Father Enki be praised!”-(end of line 190, final line)

Wikipedia contributors, Debate between bird and fish

Seven major debates are known, with specific titles […]:

• Debate between bird and fish
• Debate between cattle and grain
• Debate between the millstone and the gulgul-stone
• Debate between the pickaxe and the plough
• Debate between silver and mighty copper
• Debate between Summer and Winter
• Debate between tree and the reed

Wikipedia contributors, Sumerian disputations

Added to diary 05 April 2018

To reiterate, in just six days, The New York Times ran as many cover stories about Hillary Clinton’s emails as they did about all policy issues combined in the 69 days leading up to the election (and that does not include the three additional articles on October 18, and November 6 and 7, or the two articles on the emails taken from John Podesta). This intense focus on the email scandal cannot be written off as inconsequential: The Comey incident and its subsequent impact on Clinton’s approval rating among undecided voters could very well have tipped the election.

Duncan Watts and David Rothschild, Don’t blame the election on fake news. Blame it on the media, Columbia Journalism Review, 5 December 2017

Analyses by Columbia Journalism Review, the Berkman Klein Center for Internet and Society at Harvard University, and the Shorenstein Center at the Harvard Kennedy School show that the Clinton email controversy received more coverage in mainstream media outlets than any other topic during the 2016 presidential election.

Wikipedia contributors, Hillary Clinton email controversy

Added to diary 30 January 2018

Given the ubiquity of the wheel in human technology, and the existence of biological analogues of many other technologies (such as wings and lenses), the lack of wheels in the natural world would seem to demand explanation—and the phenomenon is broadly explained by two main factors. First, there are several developmental and evolutionary obstacles to the advent of a wheel by natural selection, addressing the question “Why can’t life evolve wheels?” Secondly, wheels are often at a competitive disadvantage when compared with other means of propulsion (such as walking, running, or slithering) in natural environments, addressing the question “If wheels could evolve, why might they be rare nonetheless?” This environment-specific disadvantage also explains why at least one historical civilization abandoned the wheel as a mode of transport. […]

# Biological barriers to wheeled organisms

Richard Dawkins describes the matter: “The wheel may be one of those cases where the engineering solution can be seen in plain view, yet be unattainable in evolution because it lies [on] the other side of a deep valley, cutting unbridgeably across the massif of Mount Improbable.” In such a fitness landscape, wheels might sit on a highly favorable “peak”, but the valley around that peak may be too deep or wide for the gene pool to migrate across by genetic drift or natural selection. […]

The greatest anatomical impediment to wheeled multicellular organisms is the interface between the static and rotating components of the wheel. In either a passive or driven case, the wheel (and possibly axle) must be able to rotate freely relative to the rest of the machine or organism.[Note 2] Unlike animal joints, which have a limited range of motion, a wheel must be able to rotate through an arbitrary angle without ever needing to be “unwound”. As such, a wheel cannot be permanently attached to the axle or shaft about which it rotates (or, if the axle and wheel are fixed together, the axle cannot be affixed to the rest of the machine or organism). There are several functional problems created by this requirement. […]

In animals, motion is typically achieved by the use of skeletal muscles, which derive their energy from the metabolism of nutrients from food. Because these muscles are attached to both of the components that must move relative to each other, they are not capable of directly driving a wheel. […]

Another potential problem that arises at the interface between wheel and axle (or axle and body) is the limited ability of an organism to transfer materials across this interface. If the tissues that make up a wheel are living, they will need to be supplied with oxygen and nutrients and have wastes removed to sustain metabolism. […]

Wheels incur mechanical and other disadvantages in certain environments and situations that would represent a decreased fitness when compared with limbed locomotion. These disadvantages suggest that, even barring the biological constraints discussed above, the absence of wheels in multicellular life may not be the “missed opportunity” of biology that it first seems. In fact, given the mechanical disadvantages and restricted usefulness of wheels when compared with limbs, the central question can be reversed: not “Why does nature not produce wheels?”, but rather, “Why do human vehicles not make more use of limbs?” The use of wheels rather than limbs in many engineered vehicles can likely be attributed to the complexity of design required to construct and control limbs, rather than to a consistent functional advantage of wheels over limbs. […]

Although stiff wheels are more energy efficient than other means of locomotion when traveling over hard, level terrain (such as paved roads), wheels are not especially efficient on soft terrain such as soil, because they are vulnerable to rolling resistance. In rolling resistance, a vehicle loses energy to the deformation of its wheels and the surface on which they are rolling. […]

When moving through a fluid, rotating systems carry an efficiency advantage only at extremely low Reynolds numbers (i.e. viscosity-dominated flows) such as those experienced by bacterial flagella, whereas oscillating systems have the advantage at higher (inertia-dominated) Reynolds numbers. Whereas ship propellers typically have efficiencies around 60% and aircraft propellers up to around 80% (achieving 88% in the human-powered Gossamer Condor), much higher efficiencies, in the range of 96%–98%, can be achieved with an oscillating flexible foil like a fish tail or bird wing.

Wheels are prone to slipping—an inability to generate traction—on loose or slippery terrain. Slipping wastes energy and can potentially lead to a loss of control or becoming stuck, as with an automobile on mud or snow. This limitation of wheels can be seen in the realm of human technology: in an example of biologically inspired engineering, legged vehicles find use in the logging industry, where they allow access to terrain too challenging for wheeled vehicles to navigate.

Wikipedia, Rotating locomotion in living systems

Added to diary 15 January 2018

# william-butler-yeats

Things fall apart; the centre cannot hold;
Mere anarchy is loosed upon the world,
The blood-dimmed tide is loosed, and everywhere
The ceremony of innocence is drowned;
The best lack all conviction, while the worst
Are full of passionate intensity.

W. B. Yeats, The Second Coming, 1921

Added to diary 23 September 2018

# william-child

‘In the use of words’, Wittgenstein writes in Philosophical Investigations, ‘one might distinguish “surface grammar” from “depth grammar” ’ (PI §664). Words that have the same ‘surface grammar’ may have very different ‘depth grammars’. And philosophical problems arise when we are misled by ‘certain analogies between the forms of expression in different regions of our language’ (PI §90) into thinking that the phenomena we are talking about when we use those expressions are similarly analogous (we will shortly consider an example). So there is a significant parallel between Wittgenstein’s early and later views about the source of philosophical problems. And there is a significant parallel in his view about the proper response to philosophical problems: we should not try to solve those problems by producing a philosophical theory; instead, we should dissolve them, by showing that they were not genuine problems at all. ‘We cannot give any answer to [philosophical questions]’, Wittgenstein writes in the Tractatus, ‘but can only point out that they are nonsensical’ (TLP: 4.003). Similarly, in Philosophical Investigations: ‘The results of philosophy are the discovery of some piece of plain nonsense and the bumps that the understanding has got by running up against the limits of language’ (PI §119). ‘What I want to teach’, he says, ‘is: to pass from unobvious nonsense to obvious nonsense’ (PI §464). ‘The clarity that we are aiming at is indeed complete clarity. But this simply means that the philosophical problems should completely disappear’ (PI §133).

William Child, Wittgenstein, 2011

Added to diary 19 January 2018

On many issues, I find that people hold the following two views:

• If many people did this thing, then change would happen.
• But any individual person doesn’t make a difference.

Holding that combination of views is usually a mistake when we consider expected value.

Consider ethical consumption, like switching to fair-trade coffee, or reducing how much meat you buy. Suppose someone stops buying chicken breasts, instead choosing vegetarian options, in order to reduce the amount of animal suffering on factory farms. Does that person make a difference? You might think not. If one person decides against buying chicken breast one day but the rest of the meat eaters on the planet continue to buy chicken, how could that possibly affect how many chickens are killed for human consumption? When a supermarket decides how much chicken to buy, they don’t care that one fewer breast was purchased on a given day. However, if thousands or millions of people stopped buying chicken breasts, the number of chickens raised for food would decrease—supply would fall to meet demand. But then we’re left with a paradox: individuals can’t make a difference, but millions of individuals do. But the actions of millions of people are just the sum of the actions of many individual people. Moreover, an iron law of economics is that, in a well-functioning market, if demand for a product decreases, the quantity of the product that’s supplied decreases. How, then, can we reconcile these thoughts?

The answer lies with expected value. If you decline to buy some chicken breast, then most of the time you’ll make no difference: the supermarket will buy the same amount of chicken in the future. Sometimes, however, you will make a difference. Occasionally, the manager of the store will assess the number of chicken breasts bought by consumers and decide to decrease their intake of stock, even though they wouldn’t have done so had the number of chicken breasts bought been one higher. (Perhaps they follow a rule like: “If fewer than five thousand chicken breasts were bought this month, decrease stock intake.”) And when that manager does decide to decrease their stock intake, they will decrease stock by a large amount. Perhaps your decision against purchasing chicken breast will have an effect on the supermarket only one in a thousand times, but in that one time, the store manager will decide to purchase approximately one thousand fewer chicken breasts.

This isn’t just a theoretical argument. Economists have studied this issue [Norwood & Lusk 2011, p. 223] and worked out how, on average, a consumer affects the number of animal products supplied by declining to buy that product. They estimate that, on average, if you give up one egg, total production ultimately falls by 0.91 eggs; if you give up one gallon of milk, total production falls by 0.56 gallons. Other products are somewhere in between: economists estimate that if you give up one pound of beef, beef production falls by 0.68 pounds; if you give up one pound of pork, production ultimately falls by 0.74 pounds; if you give up one pound of chicken, production ultimately falls by 0.76 pounds.

This same reasoning can be applied when considering the value of participating in political rallies. Suppose there’s some policy that a group of people want to see implemented. Suppose everyone agrees that if no one attends a rally on this policy, the policy won’t go through, but if one million people show up, the policy will go through. What difference do you make by showing up at this rally? You’re just one body among thousands of others—surely the difference you make is negligible. Again, the solution is to think in terms of expected value. The chance of you being the person who makes the difference is very small, but if you do make the difference, it will be very large indeed. This isn’t just a speculative model. Professors of political science at Harvard and Stockholm Universities analyzed Tea Party rallies held on Tax Day, April 15, 2009 [Madestam, Shoag, Veuger & Yanagizawa-Drott 2013]. They used the weather in different constituencies as a natural experiment: if the weather was bad on the day of a rally, fewer people would show up. This allowed them to assess whether increased numbers of people at a rally made a difference to how influential the rally was. They found that policy was significantly influenced by those rallies that attracted more people, and that the larger the rally, the greater the degree to which those protestors’ representatives in Congress voted conservatively.

William MacAskill, Doing Good Better, 2015

Added to diary 15 April 2018
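MacAskill’s threshold argument can be checked with a toy simulation. The threshold and cut sizes follow his illustrative rule (“if fewer than five thousand chicken breasts were bought this month, decrease stock intake” by about a thousand); the demand distribution is my own assumption, chosen only so the threshold is sometimes crossed:

```python
import random

random.seed(0)

# Illustrative numbers from the passage; the uniform demand range is assumed.
THRESHOLD = 5000   # manager cuts the next order if purchases fall below this
CUT = 1000         # size of the reduction when the rule triggers
TRIALS = 100_000

def stock_cut(purchases):
    """Chickens removed from next month's order, given this month's purchases."""
    return CUT if purchases < THRESHOLD else 0

extra_cut = 0
for _ in range(TRIALS):
    others = random.randint(4001, 6000)  # everyone else's purchases this month
    # Effect of your abstention: the manager's decision without your purchase
    # minus the decision with it. Nonzero only when you tip the threshold.
    extra_cut += stock_cut(others) - stock_cut(others + 1)

expected_effect = extra_cut / TRIALS
# Analytically: P(tipping) * CUT = (1/2000) * 1000 = 0.5 fewer chickens
# per forgone purchase, which the simulation approaches.
```

The point of the exercise is that a 1-in-2000 chance of triggering a 1000-unit cut is worth 0.5 units in expectation, even though on almost every individual occasion your purchase changes nothing.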

Emphasis is mine.

As I and the Centre for Effective Altruism define it, effective altruism is the project of using evidence and reason to figure out how to benefit others as much as possible, and taking action on that basis.

On this definition, effective altruism is an intellectual and practical project rather than a normative claim, in the same way that science is an intellectual and practical project rather than a body of any particular normative and empirical claims. Its aims are welfarist, impartial, and maximising: effective altruists aim to maximise the wellbeing of all, where (on some interpretation) everyone counts for one, and no-one for more than one. But it is not a mere restatement of consequentialism: it does not claim that one is always obligated to maximise the good, impartially considered, with no room for one’s personal projects; and it does not claim that one is permitted to violate side-constraints for the greater good.

Effective altruism is an idea with a community built around it. That community champions certain values that aren’t part of the definition of effective altruism per se. These include serious commitment to benefiting others, with many members of the community pledging to donate at least 10% of their income to charity; scientific mindset, and willingness to change one’s mind in light of new evidence or argument; openness to many different cause-areas, such as extreme poverty, farm animal welfare, and risks of human extinction; integrity, with a strong commitment to honesty and transparency; and a collaborative spirit, with an unusual level of cooperation between people with different moral projects.

William MacAskill, “Effective Altruism: Introduction”, Essays in Philosophy: Vol. 18: Iss. 1, Article 1. doi:10.7710/1526-0569.1580

Added to diary 20 March 2018

# william-shakespeare

There are more things in heaven and earth, Horatio,
Than are dreamt of in your philosophy.

William Shakespeare, Hamlet, Act I, Scene V

Added to diary 18 March 2018

KING HENRY:

Once more unto the breach, dear friends, once more;
Or close the wall up with our English dead!
In peace there’s nothing so becomes a man,
As modest stillness and humility;
But when the blast of war blows in our ears,
Then imitate the action of the tiger:
Stiffen the sinews, conjure up the blood,
Disguise fair nature with hard-favoured rage:
Then lend the eye a terrible aspect;
Let it pry through the portage of the head,
Like the brass cannon; let the brow o’erwhelm it
As fearfully as doth a galled rock
O’erhang and jutty his confounded base,
Swill’d with the wild and wasteful ocean.

William Shakespeare, Henry V, Act III, Scene I

Added to diary 15 January 2018