Smugly Believing Those Who Disagree with Us Are Stupid

(p. 3) Many liberals, but not conservatives, believe there is an important asymmetry in American politics. These liberals believe that people on opposite sides of the ideological spectrum are fundamentally different. Specifically, they believe that liberals are much more open to change than conservatives, more tolerant of differences, more motivated by the public good and, maybe most of all, smarter and better informed.
The evidence for these beliefs is not good. Liberals turn out to be just as prone to their own forms of intolerance, ignorance and bias. But the beliefs are comforting to many. They give their bearers a sense of intellectual and even moral superiority. And they affect behavior. They inform the condescension and self-righteousness with which liberals often treat conservatives.
. . .
. . . my strongest memory of Mr. Stewart, like that of many other conservatives, is probably going to be his 2010 interview with the Berkeley law professor John Yoo. Mr. Yoo had served in Mr. Bush’s Justice Department and had drafted memos laying out what techniques could and couldn’t be used to interrogate Al Qaeda detainees. Mr. Stewart seemed to go into the interview expecting a menacing Clint Eastwood type, who was fully prepared to zap the genitals of some terrorist if that’s what it took to protect America’s women and children.
Mr. Stewart was caught unaware by the quiet, reasonable Mr. Yoo, who explained that he had been asked to determine what legally constituted torture so the government could safely stay on this side of the line. The issue, in other words, wasn’t whether torture was justified but what constituted it and what didn’t. Ask yourself how intellectually curious Mr. Stewart really could be, not to know that this is what Bush administration officials had been saying all along?

For the full commentary, see:
GERARD ALEXANDER. “Jon Stewart, Patron Saint of Liberal Smugness.” The New York Times, SundayReview Section (Sun., AUG. 9, 2015): 3.
(Note: the online version of the commentary has the date AUG. 7, 2015.)
(Note: ellipses added, italics in original.)

Computer Programs “Lack the Flexibility of Human Thinking”

(p. A11) . . . let’s not panic. “Superintelligent” machines won’t be arriving soon. Computers today are good at narrow tasks carefully engineered by programmers, like balancing checkbooks and landing airplanes, but after five decades of research, they are still weak at anything that looks remotely like genuine human intelligence.
. . .
Even the best computer programs out there lack the flexibility of human thinking. A teenager can pick up a new videogame in an hour; your average computer program still can only do just the single task for which it was designed. (Some new technologies do slightly better, but they still struggle with any task that requires long-term planning.)

For the full commentary, see:
GARY MARCUS. “Artificial Intelligence Isn’t a Threat–Yet; Superintelligent machines are still a long way off, but we need to prepare for their future rise.” The Wall Street Journal (Sat., Dec. 13, 2014): A11.
(Note: ellipsis added.)
(Note: the online version of the commentary has the date Dec. 11, 2014.)

Cultural and Institutional Differences Between Europe and U.S. Keep Europe from Having a Silicon Valley

(p. B7) “They all want a Silicon Valley,” Jacob Kirkegaard, a Danish economist and senior fellow at the Peterson Institute for International Economics, told me this week. “But none of them can match the scale and focus on the new and truly innovative technologies you have in the United States. Europe and the rest of the world are playing catch-up, to the great frustration of policy makers there.”
Petra Moser, assistant professor of economics at Stanford and its Europe Center, who was born in Germany, agreed that “Europeans are worried.”
“They’re trying to recreate Silicon Valley in places like Munich, so far with little success,” she said. “The institutional and cultural differences are still too great.”
. . .
There is . . . little or no stigma in Silicon Valley to being fired; Steve Jobs himself was forced out of Apple. “American companies allow their employees to leave and try something else,” Professor Moser said. “Then, if it works, great, the mother company acquires the start-up. If it doesn’t, they hire them back. It’s a great system. It allows people to experiment and try things. In Germany, you can’t do that. People would hold it against you. They’d see it as disloyal. It’s a very different ethic.”
Europeans are also much less receptive to the kind of truly disruptive innovation represented by a Google or a Facebook, Mr. Kirkegaard said.
He cited the example of Uber, the ride-hailing service that despite its German-sounding name is a thoroughly American upstart. Uber has been greeted in Europe like the arrival of a virus, and its reception says a lot about the power of incumbent taxi operators.
“But it goes deeper than that,” Mr. Kirkegaard said. “New Yorkers don’t get all nostalgic about yellow cabs. In London, the black cab is seen as something that makes London what it is. People like it that way. Americans tend to act in a more rational and less emotional way about the goods and services they consume, because it’s not tied up with their national and regional identities.”
. . .
With its emphasis on early testing and sorting, the educational system in Europe tends to be very rigid. “If you don’t do well at age 18, you’re out,” Professor Moser said. “That cuts out a lot of people who could do better but never get the chance. The person who does best at a test of rote memorization at age 17 may not be innovative at 23.” She added that many of Europe’s most enterprising students go to the United States to study and end up staying.
She is currently doing research into creativity. “The American education system is much more forgiving,” Professor Moser said. “Students can catch up and go on to excel.”
Even the vaunted European child-rearing, she believes, is too prescriptive. While she concedes there is as yet no hard scientific evidence to support her thesis, “European children may be better behaved, but American children may end up being more free to explore new things.”

For the full story, see:
JAMES B. STEWART. “Common Sense; A Fearless Culture Fuels Tech.” The New York Times (Fri., JUNE 19, 2015): B1 & B7.
(Note: ellipses added.)
(Note: the online version of the story has the date JUNE 18, 2015, and has the title “Common Sense; A Fearless Culture Fuels U.S. Tech Giants.”)

Babies “Have a Positive Hunger for the Unexpected”

(p. C2) In an amazingly clever new paper in the journal Science, Aimee Stahl and Lisa Feigenson at Johns Hopkins University show systematically that 11-month-old babies, like scientists, pay special attention when their predictions are violated, learn especially well as a result, and even do experiments to figure out just what happened.
They took off from some classic research showing that babies will look at something longer when it is unexpected. The babies in the new study either saw impossible events, like the apparent passage of a ball through a solid brick wall, or straightforward events, like the same ball simply moving through an empty space.
. . .
The babies explored objects more when they behaved unexpectedly. They also explored them differently depending on just how they behaved unexpectedly. If the ball had vanished through the wall, the babies banged the ball against a surface; if it had hovered in thin air, they dropped it. It was as if they were testing to see if the ball really was solid, or really did defy gravity, much like Georgie testing the fake eggs in the Easter basket.
In fact, these experiments suggest that babies may be even better scientists than grown-ups often are. Adults suffer from “confirmation bias”–we pay attention to the events that fit what we already know and ignore things that might shake up our preconceptions. Charles Darwin famously kept a special list of all the facts that were at odds with his theory, because he knew he’d otherwise be tempted to ignore or forget them.
Babies, on the other hand, seem to have a positive hunger for the unexpected. Like the ideal scientists proposed by the philosopher of science Karl Popper, babies are always on the lookout for a fact that falsifies their theories.

For the full commentary, see:
ALISON GOPNIK. “MIND AND MATTER; How 1-Year-Olds Figure Out the World.” The Wall Street Journal (Sat., April 15, 2015): C2.
(Note: ellipsis added.)
(Note: the online version of the commentary has the date April 15, 2015, and has the title “MIND AND MATTER; How 1-Year-Olds Figure Out the World.”)

The scientific article mentioned in the passages quoted is:
Stahl, Aimee E., and Lisa Feigenson. “Observing the Unexpected Enhances Infants’ Learning and Exploration.” Science 348, no. 6230 (April 3, 2015): 91-94.

We Often “See” What We Expect to See

(p. 9) The Justice Department recently analyzed eight years of shootings by Philadelphia police officers. Its report contained two sobering statistics: Fifteen percent of those shot were unarmed; and in half of these cases, an officer reportedly misidentified a “nonthreatening object (e.g., a cellphone) or movement (e.g., tugging at the waistband)” as a weapon.
Many factors presumably contribute to such shootings, ranging from carelessness to unconscious bias to explicit racism, all of which have received considerable attention of late, and deservedly so.
But there is a lesser-known psychological phenomenon that might also explain some of these shootings. It’s called “affective realism”: the tendency of your feelings to influence what you see — not what you think you see, but the actual content of your perceptual experience.
. . .
The brain is a predictive organ. A majority of your brain activity consists of predictions about the world — thousands of them at a time — based on your past experience. These predictions are not deliberate prognostications like “the Red Sox will win the World Series,” but unconscious anticipations of every sight, sound and other sensation you might encounter in every instant. These neural “guesses” largely shape what you see, hear and otherwise perceive.
. . .
. . . , our lab at Northeastern University has conducted experiments to document affective realism. For example, in one study we showed an affectively neutral face to our test subjects, and using special equipment, we secretly accompanied it with a smiling or scowling face that the subjects could not consciously see. (The technique is called “continuous flash suppression.”) We found that the unseen faces influenced the subjects’ bodily activity (e.g., how fast their hearts beat) and their feelings. These in turn influenced their perceptions: In the presence of an unseen scowling face, our subjects felt unpleasant and perceived the neutral face as less likable, less trustworthy, less competent, less attractive and more likely to commit a crime than when we paired it with an unseen smiling face.
These weren’t just impressions; they were actual visual changes. The test subjects saw the neutral faces as having a more furrowed brow, a more surly mouth and so on. (Some of these findings were published in Emotion in 2012.)
. . .
. . . the brain is wired for prediction, and you predict most of the sights, sounds and other sensations in your life. You are, in large measure, the architect of your own experience.

For the full commentary, see:
Feldman Barrett, Lisa, and Jolie Wormwood. “When a Gun Is Not a Gun.” The New York Times, SundayReview Section (Sun., April 19, 2015): 9.
(Note: italics in original; ellipses added.)
(Note: the date of the online version of the commentary is APRIL 17, 2015.)

The academic article mentioned in the passage quoted above is:
Anderson, Eric, Erika Siegel, Dominique White, and Lisa Feldman Barrett. “Out of Sight but Not out of Mind: Unseen Affective Faces Influence Evaluations and Social Impressions.” Emotion 12, no. 6 (Dec. 2012): 1210-21.

Authentic Happiness Requires Engagement and Meaning

(p. 278) Recent research into what happiness is and what makes people happy sheds some contemporary light on the connection Aristotle claimed between wisdom and happiness. Students of the “science of happiness” try to measure happiness, identify its components, determine its causes, and specify its consequences. This work doesn’t tell us what should make people happy. It aims to tell us what does make people happy.
Ed Diener is perhaps the world’s leading researcher on happiness. His recent book, written in collaboration with his son, Robert Biswas-Diener, confirms some things we might expect. The major determinants (p. 279) of happiness (or “well-being,” as it is sometimes called) include material wealth (though much less than most people think, especially when their standard of living is above subsistence), physical health, freedom, political democracy, and physical, material, and psychological security. None of these determinants of happiness seems to have much to do with practical wisdom. But two other factors, each of them extremely important, do. Well-being depends critically on being part of a network of close connections to others. And well-being is enhanced when we are engaged in our work and find meaning in it.
The work of Martin Seligman, a distinguished psychologist at the University of Pennsylvania, points in the same direction. Seligman launched a whole new discipline–dubbed “positive” psychology–in the 1990s, when he was president of the American Psychological Association. We’ve talked to Seligman often about his work. He had long been concerned that psychologists focused too exclusively on curing the problems of their patients (he himself was an expert on depression) and spent too little time investigating those things that would positively promote their well-being. He kick-started positive psychology with his book Authentic Happiness.
The word authentic is there to distinguish what Seligman is talking about from what many of us sometimes casually take happiness to be–feeling good. Feeling good–experiencing positive emotion–is certainly important. But just as important are engagement and meaning. Engagement is about throwing yourself into the activities of your life. And meaning is about connecting what you do to the lives of others–knowing that what you do makes the lives of others better. Authentic happiness, says Seligman, is a combination of engagement, meaning, and positive emotion. Seligman collected a massive amount of data from research on people all over the world. He found that people who considered themselves happy had certain character strengths and virtues. He further found that in each individual, some of these strengths were more prominent than others. Seligman concluded that promoting a person’s particular (p. 280) strengths–he dubbed these a person’s “signature strengths”–promoted authentic happiness.
The twenty-four character strengths Seligman identified include things like curiosity, open-mindedness, perspective, kindness and generosity, loyalty, duty, fairness, leadership, self-control, caution, humility, bravery, perseverance, honesty, gratitude, optimism, and zest. He organized these strengths into six virtues: courage, humanity and love, justice, temperance, transcendence, and wisdom and knowledge. Aristotle would have recognized many of these strengths as the kind of “excellences” or virtues he considered necessary for eudaimonia, a flourishing or happy life.

Source:
Schwartz, Barry, and Kenneth Sharpe. Practical Wisdom: The Right Way to Do the Right Thing. New York: Riverhead Books, 2010.
(Note: italics in original.)

Chimps Are Willing to Delay Gratification in Order to Receive Cooked Food

This is a big deal because cooking allows humans to spend far less energy digesting food, freeing up more energy for the brain. So one theory is that cooking technology allowed humans eventually to develop cognitive abilities superior to those of other primates.

(p. A3) . . . scientists from Harvard and Yale found that chimps have the patience and foresight to resist eating raw food and to place it in a device meant to appear, at least to the chimps, to cook it.
. . .
But they found that chimps would give up a raw slice of sweet potato in the hand for the prospect of a cooked slice of sweet potato a bit later. That kind of foresight and self-control is something any cook who has eaten too much raw cookie dough can admire.
The research grew out of the idea that cooking itself may have driven changes in human evolution, a hypothesis put forth by Richard Wrangham, an anthropologist at Harvard, and several colleagues about 15 years ago in an article in Current Anthropology, and more recently in his book, “Catching Fire: How Cooking Made Us Human.”
He argued that cooking may have begun something like two million years ago, even though hard evidence only dates back about one million years. For that to be true, some early ancestors, perhaps not much more advanced than chimps, had to grasp the whole concept of transforming the raw into the cooked.
Felix Warneken at Harvard and Alexandra G. Rosati, who is about to move from Yale to Harvard, both of whom study cognition, wanted to see if chimpanzees, which often serve as stand-ins for human ancestors, had the cognitive foundation that would prepare them to cook.
. . .
Dr. Rosati said the experiments showed not only that chimps had the patience for cooking, but that they had the “minimal causal understanding they would need” to make the leap to cooking.

For the full story, see:
JAMES GORMAN. “Chimpanzees Would Cook if Given Chance, Research Says.” The New York Times (Weds., JUNE 3, 2015): A3.
(Note: ellipses added.)
(Note: the online version of the story has the date JUNE 2, 2015, and has the title “Chimpanzees Would Cook if Given the Chance, Research Says.”)

The academic article discussed in the passages quoted above is:
Warneken, Felix, and Alexandra G. Rosati. “Cognitive Capacities for Cooking in Chimpanzees.” Proceedings of the Royal Society of London B: Biological Sciences 282, no. 1809 (June 22, 2015).

Little Progress Toward Complex Autonomous Robots

(p. A8) [In June 2015] . . . , the Defense Advanced Research Projects Agency, a Pentagon research arm, . . . [held] the final competition in its Robotics Challenge in Pomona, Calif. With $2 million in prize money for the robot that performs best in a series of rescue-oriented tasks in under an hour, the event . . . offer[ed] what engineers refer to as the “ground truth” — a reality check on the state of the art in the field of mobile robotics.

A preview of their work suggests that nobody needs to worry about a Terminator creating havoc anytime soon. Given a year and a half to improve their machines, the roboticists, who shared details about their work in interviews before the contest in June, appear to have made limited progress.
. . .
“The extraordinary thing that has happened in the last five years is that we have seemed to make extraordinary progress in machine perception,” said Gill Pratt, the Darpa program manager in charge of the Robotics Challenge.
Pattern recognition hardware and software has made it possible for computers to make dramatic progress in computer vision and speech understanding. In contrast, Dr. Pratt said, little headway has been made in “cognition,” the higher-level humanlike processes required for robot planning and true autonomy. As a result, both in the Darpa contest and in the field of robotics more broadly, there has been a re-emphasis on the idea of human-machine partnerships.
“It is extremely important to remember that the Darpa Robotics Challenge is about a team of humans and machines working together,” he said. “Without the person, these machines could hardly do anything at all.”
In fact, the steep challenge in making progress toward mobile robots that can mimic human capabilities is causing robotics researchers worldwide to rethink their goals. Now, instead of trying to build completely autonomous robots, many researchers have begun to think instead of creating ensembles of humans and robots, an approach they describe as co-robots or “cloud robotics.”
Ken Goldberg, a University of California, Berkeley, roboticist, has called on the computing world to drop its obsession with singularity, the much-ballyhooed time when computers are predicted to surpass their human designers. Rather, he has proposed a concept he calls “multiplicity,” with diverse groups of humans and machines solving problems through collaboration.
For decades, artificial-intelligence researchers have noted that the simplest tasks for humans, such as reaching into a pocket to retrieve a quarter, are the most challenging for machines.
“The intuitive idea is that the more money you spend on a robot, the more autonomy you will be able to design into it,” said Rodney Brooks, an M.I.T. roboticist and co-founder of two early companies, iRobot and Rethink Robotics. “The fact is actually the opposite is true: The cheaper the robot, the more autonomy it has.”
For example, iRobot’s Roomba robot is autonomous, but the vacuuming task it performs by wandering around rooms is extremely simple. By contrast, the company’s Packbot is more expensive, designed for defusing bombs, and must be teleoperated or controlled wirelessly by people.

For the full story, see:
JOHN MARKOFF. “A Reality Check for A.I.” The New York Times (Tues., MAY 26, 2015): D2.
(Note: ellipses, and bracketed expressions, added. I corrected a misspelling of “extraordinary.”)
(Note: the online version of the story has the date MAY 25, 2015, and has the title “Relax, the Terminator Is Far Away.”)

George Bailey Wanted to Make Money, But He Wanted to Do More than Just Make Money

(p. 219) Actually, it’s not so strange. The norm for bankers was never just moneymaking, any more than it was for doctors or lawyers. Bankers made a livelihood, often quite a good one, by serving their clients–the depositors and borrowers–and the communities in which they worked. But traditionally, the aim of banking–even if sometimes honored only in the breach–was service, not just moneymaking.
In the movie It’s a Wonderful Life, James Stewart plays George Bailey, a small-town banker faced with a run on the bank–a liquidity crisis. When the townspeople rush into the bank to withdraw their money, Bailey tells them, “You’re thinking of this place all wrong. As if I had the money back in a safe. The money’s not here.” He goes on. “Your money’s in Joe’s house. Right next to yours. And in the Kennedy house, and Mrs. Backlin’s house, and a hundred others. Why, you’re lending them the money to build, and they’re going to pay you back, as best they can…. What are you going to do, foreclose on them?”
No, says George Bailey, “we’ve got to stick together. We’ve got to have faith in one another.” Fail to stick together, and the community will be ruined. Bailey took all the money he could get his hands on and gave it to his depositors to help see them through the crisis. Of course, George Bailey was interested in making money, but money was not the only point of what Bailey did.
Relying on a Hollywood script to provide evidence of good bankers is at some level absurd, but it does indicate something valuable about society’s expectations regarding the role of bankers. The norm for a “good banker” throughout most of the twentieth century was in fact someone who was trustworthy and who served the community, who was responsible to clients, and who took an interest in them.

Source:
Schwartz, Barry, and Kenneth Sharpe. Practical Wisdom: The Right Way to Do the Right Thing. New York: Riverhead Books, 2010.
(Note: italics in original.)

Intel Entrepreneur Gordon Moore Was “Introverted”

(p. A11) “In the world of the silicon microchip,” [Thackray, Brock and Jones] write, “Moore was a master strategist and risk taker. Even so, he was not especially a self-starter.” Mr. Moore possesses many of the stereotypical character traits of an introverted Ph.D. chemist: working for hours on his own, avoiding small talk and favoring laconic statements. Indeed, as a manager he often avoided conflict, even when a colleague’s errors persisted in plain sight.
. . .
After two leadership changes at Fairchild in 1967 and 1968, which unsettled its talented employees, Mr. Moore departed to help found a new firm, Intel, with a fellow Fairchild engineer, the charming and brilliant Robert Noyce (another of the “traitorous eight”). They also brought along a younger colleague, the confrontational and hyper-energetic Andy Grove. Each one of the famous triumvirate would serve as CEO at some point over the next three decades.

For the full review, see:
SHANE GREENSTEIN. “BOOKSHELF; Silicon Valley’s Lawmaker; What became Moore’s law first emerged in a 1965 article modestly titled ‘Cramming More Components Onto Integrated Circuits’.” The Wall Street Journal (Tues., May 26, 2015): A11.
(Note: ellipsis, and bracketed names, added.)
(Note: the online version of the review has the date May 25, 2015.)

The book under review is:
Thackray, Arnold, David C. Brock, and Rachel Jones. Moore’s Law: The Life of Gordon Moore, Silicon Valley’s Quiet Revolutionary. New York: Basic Books, 2015.

Insights More Likely When Mood Is Positive and Distractions Few

If insights are more likely in the absence of distractions, then why are business executives so universally gung-ho about imposing on their workers the open-office layouts that are guaranteed to maximize distractions?

(p. C7) We can’t put a mathematician inside an fMRI machine and demand that she have a breakthrough over the course of 20 minutes or even an hour. These kinds of breakthroughs are too mercurial and rare to be subjected to experimentation.

We are, however, able to study the phenomenon more generally. Enter John Kounios and Mark Beeman, two cognitive neuroscientists and the authors of “The Eureka Factor.” Messrs. Kounios and Beeman focus their book on the science behind insights and how to cultivate them.
As Mr. Irvine recognizes, studying insights in the lab is difficult. But it’s not impossible. Scientists have devised experiments that can provoke in subjects these kinds of insights, ones that feel genuine but occur on a much smaller scale.
. . .
The book includes some practical takeaways of how to improve our odds of getting insights as well. Blocking out distractions can create an environment conducive to insights. So can having a positive mood. While many of the suggestions contain caveats, as befits the delicate nature of creativity, ultimately it seems that there are ways to be more open to these moments of insight.

For the full review, see:
SAMUEL ARBESMAN. “Every Man an Archimedes; Insights can seem to appear spontaneously, but fully formed. No wonder the ancients spoke of muses.” The Wall Street Journal (Sat., May 23, 2015): C7.
(Note: ellipsis added.)
(Note: the online version of the review has the date May 22, 2015.)

The book under review is:
Kounios, John, and Mark Beeman. The Eureka Factor: Aha Moments, Creative Insight, and the Brain. New York: Random House, 2015.