Reality Is Not Always “Elegant”

Source of book image: http://images.betterworldbooks.com/067/Ordinary-Geniuses-Segre-Gino-9780670022762.jpg

(p. C9) In the summer of 1953, while visiting Berkeley, Gamow was shown a copy of the article in Nature where Watson and Crick spelled out some of the genetic implications of their discovery that DNA is structured as a double helix. He immediately realized what was missing. Each helix is a linear sequence of four molecules known as bases. The sequence contains all the information that guides the manufacture of the proteins from which living things are made. Proteins are assembled from 20 different amino acids. What is the code that takes you from the string of bases to the amino acids? Gamow seems to have been the first to look at the problem in quite this way.

But he made a physicist’s mistake: He thought that the code would be “elegant”–that each amino acid would be specified by only one string of bases. (These strings were dubbed “codons.”) He produced a wonderfully clever code in which each codon consisted of three bases. That was the only part that was right. In the actual code sometimes three different codons correspond to the same amino acid, while some codons do not code for an amino acid at all. These irregularities are the results of evolutionary stops and starts, and no amount of cleverness could predict them.

For the full review, see:
JEREMY BERNSTEIN. “The Inelegant Universe.” The Wall Street Journal (Sat., August 13, 2011): C9.

The book under review is:
Segrè, Gino. Ordinary Geniuses: Max Delbruck, George Gamow, and the Origins of Genomics and Big Bang Cosmology. New York: Viking, 2011.
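
A quick back-of-the-envelope calculation shows why a three-base codon was the natural guess, and why a strictly one-to-one "elegant" code was never in the cards. The short Python sketch below is my own illustration, not something from the review or the book:

    # Illustrative sketch (not from the review): why codons must be at least
    # three bases long, and why the real code has to be redundant.
    from itertools import product

    BASES = "ACGT"      # the four DNA bases
    AMINO_ACIDS = 20    # the amino acids from which proteins are assembled

    for length in (1, 2, 3):
        codons = len(list(product(BASES, repeat=length)))  # 4 ** length
        verdict = "enough" if codons >= AMINO_ACIDS else "too few"
        print(f"codon length {length}: {codons} possible codons ({verdict} for {AMINO_ACIDS} amino acids)")
    # prints 4, 16, and 64 possible codons for lengths 1, 2, and 3

With 64 three-base codons available for only 20 amino acids, either several codons must map to the same amino acid or some codons must go unused; a one-codon-per-amino-acid scheme cannot work, which is exactly the irregularity Gamow's clever code tried to wish away.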

Paul Samuelson, in 2009 Interview, Says Economists Should Study Economic History

Conor Clarke interviewed Paul Samuelson in the summer of 2009. Since Samuelson died in December 2009, the interview was one of his last.
Samuelson was a student of Joseph Schumpeter at Harvard, and Schumpeter worked to get Samuelson financial support and a job. Near the end of his life, Schumpeter was ridiculed when he warned National Bureau of Economic Research (NBER) economists that they should not neglect economic history.
It took Paul Samuelson a long time to appreciate Schumpeter’s truth.

Very last thing. What would you say to someone starting graduate study in economics? Where do you think the big developments in modern macro are going to be, or in the micro foundations of modern macro? Where does it go from here and how does the current crisis change it?

Well, I’d say, and this is probably a change from what I would have said when I was younger: Have a very healthy respect for the study of economic history, because that’s the raw material out of which any of your conjectures or testings will come. And I think the recent period has illustrated that. The governor of the Bank of England seems to have forgotten or not known that there was no bank insurance in England, so when Northern Rock got a run, he was surprised. Well, he shouldn’t have been.
But history doesn’t tell its own story. You’ve got to bring to it all the statistical testings that are possible. And we have a lot more information now than we used to.

For the full interview, see:
Clarke, Conor. “An Interview with Paul Samuelson, Part Two.” The Atlantic (2009), http://www.theatlantic.com/politics/archive/2009/06/an-interview-with-paul-samuelson-part-two/19627/.
(Note: the first paragraph above is Conor Clarke’s question, which appears in bold in the original.)
(Note: the interview was posted on The Atlantic’s website, but I do not believe that it ever appeared in the print version of the magazine.)

No Amount of Econometric Sophistication Will Substitute for Good Data

(p. 234) Using a powerful method due to Singh, we have established a relationship between God’s attitude toward man and the amount of prayer (p. 235) transmitted to God. The method presented here is applicable to a number of important problems. Provided conditional density (1) is assumed, we do not need to observe a variable to compute its conditional expectation with respect to another variable whose density can be estimated. For example, one can extend current empirical work in a variety of areas of economics to estimate the effect of income on happiness or the effect of income inequality on democracy. We conjecture that this powerful method can be extended to the more general case when X is not observed either.

For the full article, from which the above is quoted, see:
Heckman, James. “The Effect of Prayer on God’s Attitude toward Mankind.” Economic Inquiry 48, no. 1 (Jan. 2010): 234-35.
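
The methodological point of the satire is easy to see in code: once a conditional density is simply assumed, the conditional expectation of a variable nobody has ever observed can be "computed" to as many decimal places as you like. The sketch below is my own illustration, not Heckman's; the normal density and the variable names are made up for the example:

    # Illustrative sketch of the satire's method (the assumptions here are
    # mine, not Heckman's): once a conditional density f(y | x) is simply
    # *assumed*, E[Y | X = x] can be "computed" without ever observing Y.
    from scipy import integrate, stats

    def assumed_conditional_density(y, x):
        """Assumed (not estimated) density of the unobserved Y given X."""
        return stats.norm.pdf(y, loc=0.5 * x, scale=1.0)

    def conditional_expectation(x, lo=-50.0, hi=50.0):
        """E[Y | X = x] by numerical integration of the assumed density."""
        value, _ = integrate.quad(lambda y: y * assumed_conditional_density(y, x), lo, hi)
        return value

    # "Findings" that rest entirely on the assumption, not on data about Y:
    for x in (1.0, 2.0, 4.0):
        print(f"E[Y | X = {x}] = {conditional_expectation(x):.3f}")

The numbers come out looking precise, but no amount of numerical or econometric sophistication substitutes for actually observing the variable, which is the point of the satire.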

Reference Point Ignored Due to “Theory-Induced Blindness”

(p. 290) The omission of the reference point from the indifference map is a surprising case of theory-induced blindness, because we so often encounter cases in which the reference point obviously matters. In labor negotiations, it is well understood by both sides that the reference point is the existing contract and that the negotiations will focus on mutual demands for concessions relative to that reference point. The role of loss aversion in bargaining is also well understood: making concessions hurts. You have much (p. 291) personal experience of the role of reference point. If you changed jobs or locations, or even considered such a change, you surely remember that the features of the new place were coded as pluses or minuses relative to where you were. You may also have noticed that disadvantages loomed larger than advantages in this evaluation–loss aversion was at work. It is difficult to accept changes for the worse. For example, the minimal wage that unemployed workers would accept for new employment averages 90% of their previous wage, and it drops by less than 10% over a period of one year.

Source:
Kahneman, Daniel. Thinking, Fast and Slow. New York: Farrar, Straus and Giroux, 2011.
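
The asymmetry Kahneman describes is usually formalized with the prospect-theory value function, in which a loss is weighted more heavily than an equal-sized gain relative to the reference point. The sketch below is my own illustration, using the parameter estimates commonly cited from Tversky and Kahneman's 1992 paper; it is not taken from Thinking, Fast and Slow:

    # Illustrative sketch of loss aversion (my example, not from the book),
    # using commonly cited Tversky-Kahneman (1992) parameter estimates.
    ALPHA = 0.88    # curvature for gains
    BETA = 0.88     # curvature for losses
    LAMBDA = 2.25   # loss-aversion coefficient: losses loom ~2.25x larger

    def value(x):
        """Subjective value of a gain (x >= 0) or loss (x < 0) relative to the reference point."""
        if x >= 0:
            return x ** ALPHA
        return -LAMBDA * ((-x) ** BETA)

    print(f"value of a $100 gain: {value(100):+.1f}")   # about +57.5
    print(f"value of a $100 loss: {value(-100):+.1f}")  # about -129.4

On these estimates a $100 pay cut is felt roughly 2.25 times as strongly as a $100 raise, which is why the minuses of a new job loom larger than its pluses.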

Kahneman Grants that “the Basic Concepts of Economics Are Essential Intellectual Tools”

(p. 286) Most graduate students in economics have heard about prospect theory and loss aversion, but you are unlikely to find these terms in the index of an introductory text in economics. I am sometimes pained by this omission, but in fact it is quite reasonable, because of the central role of rationality in basic economic theory. The standard concepts and results that undergraduates are taught are most easily explained by assuming that Econs do not make foolish mistakes. This assumption is truly necessary, and it would be undermined by introducing the Humans of prospect theory, whose evaluations of outcomes are unreasonably short-sighted.
There are good reasons for keeping prospect theory out of introductory texts. The basic concepts of economics are essential intellectual tools, which are not easy to grasp even with simplified and unrealistic assumptions about the nature of the economic agents who interact in markets. Raising questions about these assumptions even as they are introduced would be confusing, and perhaps demoralizing. It is reasonable to put priority on helping students acquire the basic tools of the discipline. Furthermore, the failure of rationality that is built into prospect theory is often irrelevant to the predictions of economic theory, which work out with great precision in some situations and provide good approximations in many others. In some contexts, however, the difference becomes significant: the Humans described by prospect theory are (p. 287) guided by the immediate emotional impact of gains and losses, not by long-term prospects of wealth and global utility.
I emphasized theory-induced blindness in my discussion of flaws in Bernoulli’s model that remained unquestioned for more than two centuries. But of course theory-induced blindness is not restricted to expected utility theory. Prospect theory has flaws of its own, and theory-induced blindness to these flaws has contributed to its acceptance as the main alternative to utility theory.

Source:
Kahneman, Daniel. Thinking, Fast and Slow. New York: Farrar, Straus and Giroux, 2011.

Models Often “Ignore the Messiness of Reality”

Source of book image: http://www.namingandtreating.com/wp-content/uploads/2011/04/SuperCooperators_small.png

(p. 18) Nowak is one of the most exciting modelers working in the field of mathematical biology today. But a model, of course, is only as good as its assumptions, and biology is much messier than physics or chemistry. Nowak tells a joke about a man who approaches a shepherd and asks, “If I tell you how many sheep you have, can I have one?” The shepherd agrees and is astonished when the stranger answers, “Eighty-three.” As he turns to leave, the shepherd retorts: “If I guess your profession, can I have the animal back?” The stranger agrees. “You must be a mathematical biologist.” How did he know? “Because you picked up my dog.”

. . .
Near the end of the book, Nowak describes Gustav Mahler’s efforts, in his grandiloquent Third Symphony, to create an all-encompassing structure in which “nature in its totality may ring and resound,” adding, “In my own way, I would like to think I have helped to give nature her voice too.” But there remains a telling gap between the precision of the models and the generality of the advice Nowak offers for turning us all into supercooperators. We humans really are infinitely more complex than falling apples, metastasizing colons, even ant colonies. Idealized accounts of the world often need to ignore the messiness of reality. Mahler understood this. In 1896 he invited Bruno Walter to Lake Attersee to glimpse the score of the Third. As they walked beneath the mountains, Walter admonished Mahler to look at the vista, to which he replied, “No use staring up there — I’ve already composed it all away into my symphony!”

For the full review, see:
OREN HARMAN. “A Little Help from Your Friends.” The New York Times Book Review (Sun., April 10, 2011): 18.
(Note: ellipsis added.)
(Note: the online version of the review has the date April 8, 2011, and has the title “How Evolution Explains Altruism.”)

The full reference for the book under review is:
Nowak, Martin A., and Roger Highfield. Supercooperators: Altruism, Evolution, and Why We Need Each Other to Succeed. New York: Free Press, 2011.

Sticking with Expected Utility Theory as an Example of “Theory-Induced Blindness”

(p. 286) Perhaps carried away by their enthusiasm, [Rabin and Thaler] . . . concluded their article by recalling the famous Monty Python sketch in which a frustrated customer attempts to return a dead parrot to a pet store. The customer uses a long series of phrases to describe the state of the bird, culminating in “this is an ex-parrot.” Rabin and Thaler went on to say that “it is time for economists to recognize that expected utility is an ex-hypothesis.” Many economists saw this flippant statement as little short of blasphemy. However, the theory-induced blindness of accepting the utility of wealth as an explanation of attitudes to small losses is a legitimate target for humorous comment.

Source:
Kahneman, Daniel. Thinking, Fast and Slow. New York: Farrar, Straus and Giroux, 2011.
(Note: bracketed names and ellipsis added.)

“Theory-Induced Blindness”

(p. 276) The mystery is how a conception of the utility of outcomes that is vulnerable to . . . obvious counterexamples survived for so long. I can explain (p. 277) it only by a weakness of the scholarly mind that I have often observed in myself. I call it theory-induced blindness: once you have accepted a theory and used it as a tool in your thinking, it is extraordinarily difficult to notice its flaws. If you come upon an observation that does not seem to fit the model, you assume that there must be a perfectly good explanation that you are somehow missing. You give the theory the benefit of the doubt, trusting the community of experts who have accepted it. . . . As the psychologist Daniel Gilbert observed, disbelieving is hard work, and System 2 is easily tired.

Source:
Kahneman, Daniel. Thinking, Fast and Slow. New York: Farrar, Straus and Giroux, 2011.
(Note: ellipses added.)

How Politics Trumps Peer Review in Medical Research

Abstract

The U.S. public biomedical research system is renowned for its peer review process that awards federal funds to meritorious research performers. Although congressional appropriators do not earmark federal funds for biomedical research performers, I argue that they support allocations for those research fields that are most likely to benefit performers in their constituencies. Such disguised transfers mitigate the reputational penalties to appropriators of interfering with a merit‐driven system. I use data on all peer‐reviewed grants by the National Institutes of Health during the years 1984-2003 and find that performers in the states of certain House Appropriations Committee members receive 5.9-10.3 percent more research funds than those at unrepresented institutions. The returns to representation are concentrated in state universities and small businesses. Members support funding for the projects of represented performers in fields in which they are relatively weak and counteract the distributive effect of the peer review process.

Source:
Hegde, Deepak. “Political Influence Behind the Veil of Peer Review: An Analysis of Public Biomedical Research Funding in the United States.” Journal of Law and Economics 52, no. 4 (Nov. 2009): 665-90.

Economists Have “the Tools to Slap Together a Model to ‘Explain’ Any and All Phenomena”

(p. 755) The economist of today has the tools to slap together a model to ‘explain’ any and all phenomena that come to mind. The flood of models is rising higher and higher, spouting from an ever increasing number of journal outlets. In the midst of all this evidence of highly trained cleverness, it is difficult to retain the realisation that we are confronting a complex system ‘the working of which we do not understand’. . . . That the economics profession might be humbled by recent events is a realisation devoutly to be wished.

Source:
Leijonhufvud, Axel. “Out of the Corridor: Keynes and the Crisis.” Cambridge Journal of Economics 33, no. 4 (July 2009): 741-57.
(Note: ellipsis added.)
(Note: the passage above was quoted on the back cover of The Cato Journal 30, no. 2 (Spring/Summer 2010).)

Simple Algorithms Predict Better than Trained Experts

(p. 222) I never met Meehl, but he was one of my heroes from the time I read his Clinical vs. Statistical Prediction: A Theoretical Analysis and a Review of the Evidence.
In the slim volume that he later called “my disturbing little book,” Meehl reviewed the results of 20 studies that had analyzed whether clinical predictions based on the subjective impressions of trained professionals were more accurate than statistical predictions made by combining a few scores or ratings according to a rule. In a typical study, trained counselors predicted the grades of freshmen at the end of the school year. The counselors interviewed each student for forty-five minutes. They also had access to high school grades, several aptitude tests, and a four-page personal statement. The statistical algorithm used only a fraction of this information: high school grades and one aptitude test. Nevertheless, the formula was more accurate than 11 of the 14 counselors. Meehl reported generally sim-(p. 223)ilar results across a variety of other forecast outcomes, including violations of parole, success in pilot training, and criminal recidivism.
Not surprisingly, Meehl’s book provoked shock and disbelief among clinical psychologists, and the controversy it started has engendered a stream of research that is still flowing today, more than fifty years after its publication. The number of studies reporting comparisons of clinical and statistical predictions has increased to roughly two hundred, but the score in the contest between algorithms and humans has not changed. About 60% of the studies have shown significantly better accuracy for the algorithms. The other comparisons scored a draw in accuracy, but a tie is tantamount to a win for the statistical rules, which are normally much less expensive to use than expert judgment. No exception has been convincingly documented.

Source:
Kahneman, Daniel. Thinking, Fast and Slow. New York: Farrar, Straus and Giroux, 2011.
(Note: italics in original.)
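
For readers who wonder what “combining a few scores or ratings according to a rule” looks like in practice, here is a minimal sketch. The weights and the applicant records are made up for illustration and are not from Meehl's studies; the rule is simply an equal-weight sum of two standardized predictors (high school grades and one aptitude test), with no interview and no four-page personal statement:

    # Minimal sketch of a "statistical prediction" rule of the kind Meehl
    # compared with clinical judgment (illustrative only; the equal weights
    # and the records below are made up, not taken from any of the studies).
    import statistics

    def standardize(xs):
        """Convert raw scores to z-scores."""
        mean, sd = statistics.mean(xs), statistics.stdev(xs)
        return [(x - mean) / sd for x in xs]

    # Hypothetical applicant records: high school GPA and aptitude test score.
    gpas = [3.9, 3.2, 2.8, 3.6, 2.5]
    tests = [1350, 1100, 980, 1250, 900]

    # The "rule": an equal-weight sum of the two standardized predictors.
    scores = [g + t for g, t in zip(standardize(gpas), standardize(tests))]

    # Rank the applicants by the formula's prediction.
    ranking = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    for rank, i in enumerate(ranking, start=1):
        print(f"{rank}. applicant {i}: GPA {gpas[i]}, test {tests[i]}, score {scores[i]:+.2f}")

Rules this simple are exactly what the forty-five-minute interviews and subjective impressions failed to beat in most of the comparisons Meehl and his successors reviewed.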