Autism Is “Inseparably Tied to Innovation”

(p. 11) “NeuroTribes” is beautifully told, humanizing, important. It has earned its enthusiastic foreword from Oliver Sacks; it has found its place on the shelf next to “Far From the Tree,” Andrew Solomon’s landmark appreciation of neurological differences. At its heart is a plea for the world to make accommodations for those with autism, not the other way around, and for researchers and the public alike to focus on getting them the services they need. They are, to use Temple Grandin’s words, “different, not less.” Better yet, indispensable: inseparably tied to innovation, showing us there are other ways to think and work and live.

For the full review, see:
JENNIFER SENIOR. “Skewed Diagnosis; A Science Journalist’s Reading of Medical History Suggests that the ‘Autism Pandemic’ Is an Optical Illusion.” The New York Times Book Review (Sun., AUG. 23, 2015): 11.
(Note: the online version of the review has the date AUG. 17, 2015, and has the title “‘NeuroTribes,’ by Steve Silberman.”)

The book under review is:
Silberman, Steve. NeuroTribes: The Legacy of Autism and the Future of Neurodiversity. New York: Avery/Penguin Random House, 2015.

Should We Have a Right to the Silence that “Contributes to Creativity and Innovation”?

(p. D5) The benefits of silence are off the books. They are not measured in the gross domestic product, yet the availability of silence surely contributes to creativity and innovation. They do not show up explicitly in social statistics such as level of educational achievement, yet one consumes a great deal of silence in the course of becoming educated.
. . .
Or do we? Silence is now offered as a luxury good. In the business-class lounge at Charles de Gaulle Airport, I heard only the occasional tinkling of a spoon against china. I saw no advertisements on the walls. This silence, more than any other feature, is what makes it feel genuinely luxurious. When you step inside and the automatic doors whoosh shut behind you, the difference is nearly tactile, like slipping out of haircloth into satin. Your brow unfurrows, your neck muscles relax; after 20 minutes you no longer feel exhausted.
Outside, in the peon section, is the usual airport cacophony. . . .
. . .
To engage in inventive thinking during those idle hours spent at an airport requires silence.
. . .
I think we need to sharpen the conceptually murky right to privacy by supplementing it with a right not to be addressed. This would apply not, of course, to those who address me face to face as individuals, but to those who never show their faces, and treat my mind as a resource to be harvested.

For the full commentary, see:
MATTHEW B. CRAWFORD. “OPINION; The Cost of Paying Attention.” The New York Times, SundayReview Section (Sun., MARCH 8, 2015): 5.
(Note: ellipses added.)
(Note: the online version of the commentary has the date MARCH 7, 2015.)

The commentary quoted above is related to the author’s book:
Crawford, Matthew B. The World Beyond Your Head: On Becoming an Individual in an Age of Distraction. New York: Farrar, Straus and Giroux, 2015.

Too Much Positive Thinking Creates Relaxed Complacency

(p. D5) In her smart, lucid book, “Rethinking Positive Thinking: Inside the New Science of Motivation,” Dr. Oettingen critically re-examines positive thinking and gives readers a more nuanced — and useful — understanding of motivation based on solid empirical evidence.
Conventional wisdom has it that dreams are supposed to excite us and inspire us to act. Putting this to the test, Dr. Oettingen recruits a group of undergraduate college students and randomly assigns them to two groups. She instructs the first group to fantasize that the coming week will be a knockout: good grades, great parties and the like; students in the second group are asked to record all their thoughts and daydreams about the coming week, good and bad.
Strikingly, the students who were told to think positively felt far less energized and accomplished than those who were instructed to have a neutral fantasy. Blind optimism, it turns out, does not motivate people; instead, as Dr. Oettingen shows in a series of clever experiments, it creates a sense of relaxed complacency. It is as if in dreaming or fantasizing about something we want, our minds are tricked into believing we have attained the desired goal.
There appears to be a physiological basis for this effect: Studies show that just fantasizing about a wish lowers blood pressure, while thinking of that same wish — and considering not getting it — raises blood pressure. It may feel better to daydream, but it leaves you less energized and less prepared for action.
. . .
In one study, she taught a group of third graders a mental-contrast exercise: They were told to imagine a candy prize they would receive if they finished a language assignment, and then to imagine several of their own behaviors that could prevent them from winning. A second group of students was instructed only to fantasize about winning the prize. The students who did the mental contrast outperformed those who just dreamed.

For the full review, see:
RICHARD A. FRIEDMAN, M.D. “Books; Dare to Dream of Falling Short.” The New York Times (Tues., DEC. 23, 2014): D5.
(Note: italics in original; ellipsis added.)
(Note: the online version of the review has the date DEC. 22, 2014.)

The book under review is:
Oettingen, Gabriele. Rethinking Positive Thinking: Inside the New Science of Motivation. New York: Current, 2014.

Smugly Believing Those Who Disagree with Us Are Stupid

(p. 3) Many liberals, but not conservatives, believe there is an important asymmetry in American politics. These liberals believe that people on opposite sides of the ideological spectrum are fundamentally different. Specifically, they believe that liberals are much more open to change than conservatives, more tolerant of differences, more motivated by the public good and, maybe most of all, smarter and better informed.
The evidence for these beliefs is not good. Liberals turn out to be just as prone to their own forms of intolerance, ignorance and bias. But the beliefs are comforting to many. They give their bearers a sense of intellectual and even moral superiority. And they affect behavior. They inform the condescension and self-righteousness with which liberals often treat conservatives.
. . .
. . . my strongest memory of Mr. Stewart, like that of many other conservatives, is probably going to be his 2010 interview with the Berkeley law professor John Yoo. Mr. Yoo had served in Mr. Bush’s Justice Department and had drafted memos laying out what techniques could and couldn’t be used to interrogate Al Qaeda detainees. Mr. Stewart seemed to go into the interview expecting a menacing Clint Eastwood type, who was fully prepared to zap the genitals of some terrorist if that’s what it took to protect America’s women and children.
Mr. Stewart was caught unaware by the quiet, reasonable Mr. Yoo, who explained that he had been asked to determine what legally constituted torture so the government could safely stay on this side of the line. The issue, in other words, wasn’t whether torture was justified but what constituted it and what didn’t. Ask yourself how intellectually curious Mr. Stewart really could be, not to know that this is what Bush administration officials had been saying all along?

For the full commentary, see:
GERARD ALEXANDER. “Jon Stewart, Patron Saint of Liberal Smugness.” The New York Times, SundayReview Section (Sun., AUG. 9, 2015): 3.
(Note: the online version of the commentary has the date AUG. 7, 2015.)
(Note: ellipses added, italics in original.)

Computer Programs “Lack the Flexibility of Human Thinking”

(p. A11) . . . let’s not panic. “Superintelligent” machines won’t be arriving soon. Computers today are good at narrow tasks carefully engineered by programmers, like balancing checkbooks and landing airplanes, but after five decades of research, they are still weak at anything that looks remotely like genuine human intelligence.
. . .
Even the best computer programs out there lack the flexibility of human thinking. A teenager can pick up a new videogame in an hour; your average computer program still can only do just the single task for which it was designed. (Some new technologies do slightly better, but they still struggle with any task that requires long-term planning.)

For the full commentary, see:
GARY MARCUS. “Artificial Intelligence Isn’t a Threat–Yet; Superintelligent machines are still a long way off, but we need to prepare for their future rise.” The Wall Street Journal (Sat., Dec. 13, 2014): A11.
(Note: ellipsis added.)
(Note: the online version of the commentary has the date Dec. 11, 2014.)

Cultural and Institutional Differences Between Europe and U.S. Keep Europe from Having a Silicon Valley

(p. B7) “They all want a Silicon Valley,” Jacob Kirkegaard, a Danish economist and senior fellow at the Peterson Institute for International Economics, told me this week. “But none of them can match the scale and focus on the new and truly innovative technologies you have in the United States. Europe and the rest of the world are playing catch-up, to the great frustration of policy makers there.”
Petra Moser, assistant professor of economics at Stanford and its Europe Center, who was born in Germany, agreed that “Europeans are worried.”
“They’re trying to recreate Silicon Valley in places like Munich, so far with little success,” she said. “The institutional and cultural differences are still too great.”
. . .
There is . . . little or no stigma in Silicon Valley to being fired; Steve Jobs himself was forced out of Apple. “American companies allow their employees to leave and try something else,” Professor Moser said. “Then, if it works, great, the mother company acquires the start-up. If it doesn’t, they hire them back. It’s a great system. It allows people to experiment and try things. In Germany, you can’t do that. People would hold it against you. They’d see it as disloyal. It’s a very different ethic.”
Europeans are also much less receptive to the kind of truly disruptive innovation represented by a Google or a Facebook, Mr. Kirkegaard said.
He cited the example of Uber, the ride-hailing service that despite its German-sounding name is a thoroughly American upstart. Uber has been greeted in Europe like the arrival of a virus, and its reception says a lot about the power of incumbent taxi operators.
“But it goes deeper than that,” Mr. Kirkegaard said. “New Yorkers don’t get all nostalgic about yellow cabs. In London, the black cab is seen as something that makes London what it is. People like it that way. Americans tend to act in a more rational and less emotional way about the goods and services they consume, because it’s not tied up with their national and regional identities.”
. . .
With its emphasis on early testing and sorting, the educational system in Europe tends to be very rigid. “If you don’t do well at age 18, you’re out,” Professor Moser said. “That cuts out a lot of people who could do better but never get the chance. The person who does best at a test of rote memorization at age 17 may not be innovative at 23.” She added that many of Europe’s most enterprising students go to the United States to study and end up staying.
She is currently doing research into creativity. “The American education system is much more forgiving,” Professor Moser said. “Students can catch up and go on to excel.”
Even the vaunted European child-rearing, she believes, is too prescriptive. While she concedes there is as yet no hard scientific evidence to support her thesis, “European children may be better behaved, but American children may end up being more free to explore new things.”

For the full story, see:
JAMES B. STEWART. “Common Sense; A Fearless Culture Fuels Tech.” The New York Times (Fri., JUNE 19, 2015): B1 & B7.
(Note: ellipses added.)
(Note: the online version of the story has the date JUNE 18, 2015, and has the title “Common Sense; A Fearless Culture Fuels U.S. Tech Giants.”)

Babies “Have a Positive Hunger for the Unexpected”

(p. C2) In an amazingly clever new paper in the journal Science, Aimee Stahl and Lisa Feigenson at Johns Hopkins University show systematically that 11-month-old babies, like scientists, pay special attention when their predictions are violated, learn especially well as a result, and even do experiments to figure out just what happened.
They took off from some classic research showing that babies will look at something longer when it is unexpected. The babies in the new study either saw impossible events, like the apparent passage of a ball through a solid brick wall, or straightforward events, like the same ball simply moving through an empty space.
. . .
The babies explored objects more when they behaved unexpectedly. They also explored them differently depending on just how they behaved unexpectedly. If the ball had vanished through the wall, the babies banged the ball against a surface; if it had hovered in thin air, they dropped it. It was as if they were testing to see if the ball really was solid, or really did defy gravity, much like Georgie testing the fake eggs in the Easter basket.
In fact, these experiments suggest that babies may be even better scientists than grown-ups often are. Adults suffer from “confirmation bias”–we pay attention to the events that fit what we already know and ignore things that might shake up our preconceptions. Charles Darwin famously kept a special list of all the facts that were at odds with his theory, because he knew he’d otherwise be tempted to ignore or forget them.
Babies, on the other hand, seem to have a positive hunger for the unexpected. Like the ideal scientists proposed by the philosopher of science Karl Popper, babies are always on the lookout for a fact that falsifies their theories.

For the full commentary, see:
ALISON GOPNIK. “MIND AND MATTER; How 1-Year-Olds Figure Out the World.” The Wall Street Journal (Sat., April 15, 2015): C2.
(Note: ellipsis added.)
(Note: the online version of the commentary has the date April 15, 2015, and has the title “MIND AND MATTER; How 1-Year-Olds Figure Out the World.”)

The scientific article mentioned in the passages quoted is:
Stahl, Aimee E., and Lisa Feigenson. “Observing the Unexpected Enhances Infants’ Learning and Exploration.” Science 348, no. 6230 (April 3, 2015): 91-94.

We Often “See” What We Expect to See

(p. 9) The Justice Department recently analyzed eight years of shootings by Philadelphia police officers. Its report contained two sobering statistics: Fifteen percent of those shot were unarmed; and in half of these cases, an officer reportedly misidentified a “nonthreatening object (e.g., a cellphone) or movement (e.g., tugging at the waistband)” as a weapon.
Many factors presumably contribute to such shootings, ranging from carelessness to unconscious bias to explicit racism, all of which have received considerable attention of late, and deservedly so.
But there is a lesser-known psychological phenomenon that might also explain some of these shootings. It’s called “affective realism”: the tendency of your feelings to influence what you see — not what you think you see, but the actual content of your perceptual experience.
. . .
The brain is a predictive organ. A majority of your brain activity consists of predictions about the world — thousands of them at a time — based on your past experience. These predictions are not deliberate prognostications like “the Red Sox will win the World Series,” but unconscious anticipations of every sight, sound and other sensation you might encounter in every instant. These neural “guesses” largely shape what you see, hear and otherwise perceive.
. . .
. . . , our lab at Northeastern University has conducted experiments to document affective realism. For example, in one study we showed an affectively neutral face to our test subjects, and using special equipment, we secretly accompanied it with a smiling or scowling face that the subjects could not consciously see. (The technique is called “continuous flash suppression.”) We found that the unseen faces influenced the subjects’ bodily activity (e.g., how fast their hearts beat) and their feelings. These in turn influenced their perceptions: In the presence of an unseen scowling face, our subjects felt unpleasant and perceived the neutral face as less likable, less trustworthy, less competent, less attractive and more likely to commit a crime than when we paired it with an unseen smiling face.
These weren’t just impressions; they were actual visual changes. The test subjects saw the neutral faces as having a more furrowed brow, a more surly mouth and so on. (Some of these findings were published in Emotion in 2012.)
. . .
. . . the brain is wired for prediction, and you predict most of the sights, sounds and other sensations in your life. You are, in large measure, the architect of your own experience.

For the full commentary, see:
Feldman Barrett, Lisa, and Jolie Wormwood. “When a Gun Is Not a Gun.” The New York Times, SundayReview Section (Sun., April 19, 2015): 9.
(Note: italics in original; ellipses added.)
(Note: the date of the online version of the commentary is APRIL 17, 2015.)

The academic article mentioned in the passage quoted above is:
Anderson, Eric, Erika Siegel, Dominique White, and Lisa Feldman Barrett. “Out of Sight but Not out of Mind: Unseen Affective Faces Influence Evaluations and Social Impressions.” Emotion 12, no. 6 (Dec. 2012): 1210-21.

Authentic Happiness Requires Engagement and Meaning

(p. 278) Recent research into what happiness is and what makes people happy sheds some contemporary light on the connection Aristotle claimed between wisdom and happiness. Students of the “science of happiness” try to measure happiness, identify its components, determine its causes, and specify its consequences. This work doesn’t tell us what should make people happy. It aims to tell us what does make people happy.
Ed Diener is perhaps the world’s leading researcher on happiness. His recent book, written in collaboration with his son, Robert Biswas-Diener, confirms some things we might expect. The major determinants (p. 279) of happiness (or “well-being,” as it is sometimes called) include material wealth (though much less than most people think, especially when their standard of living is above subsistence), physical health, freedom, political democracy, and physical, material, and psychological security. None of these determinants of happiness seems to have much to do with practical wisdom. But two other factors, each of them extremely important, do. Well-being depends critically on being part of a network of close connections to others. And well-being is enhanced when we are engaged in our work and find meaning in it.
The work of Martin Seligman, a distinguished psychologist at the University of Pennsylvania, points in the same direction. Seligman launched a whole new discipline– dubbed “positive” psychology– in the 1990s, when he was president of the American Psychological Association. We’ve talked to Seligman often about his work. He had long been concerned that psychologists focused too exclusively on curing the problems of their patients (he himself was an expert on depression) and spent too little time investigating those things that would positively promote their well-being. He kick-started positive psychology with his book Authentic Happiness.
The word authentic is there to distinguish what Seligman is talking about from what many of us sometimes casually take happiness to be– feeling good. Feeling good– experiencing positive emotion– is certainly important. But just as important are engagement and meaning. Engagement is about throwing yourself into the activities of your life. And meaning is about connecting what you do to the lives of others– knowing that what you do makes the lives of others better. Authentic happiness, says Seligman, is a combination of engagement, meaning, and positive emotion. Seligman collected a massive amount of data from research on people all over the world. He found that people who considered themselves happy had certain character strengths and virtues. He further found that in each individual, some of these strengths were more prominent than others. Seligman concluded that promoting a person’s particular (p. 280) strengths– he dubbed these a person’s “signature strengths”– promoted authentic happiness.
The twenty-four character strengths Seligman identified include things like curiosity, open-mindedness, perspective, kindness and generosity, loyalty, duty, fairness, leadership, self-control, caution, humility, bravery, perseverance, honesty, gratitude, optimism, and zest. He organized these strengths into six virtues: courage, humanity and love, justice, temperance, transcendence, and wisdom and knowledge. Aristotle would have recognized many of these strengths as the kind of “excellences” or virtues he considered necessary for eudaimonia, a flourishing or happy life.

Source:
Schwartz, Barry, and Kenneth Sharpe. Practical Wisdom: The Right Way to Do the Right Thing. New York: Riverhead Books, 2010.
(Note: italics in original.)

Chimps Are Willing to Delay Gratification in Order to Receive Cooked Food

This is a big deal because cooking lets humans spend far less energy digesting food, freeing more energy for use by the brain. So one theory is that cooking technology allowed humans to eventually develop cognitive abilities superior to those of other primates.

(p. A3) . . . scientists from Harvard and Yale found that chimps have the patience and foresight to resist eating raw food and to place it in a device meant to appear, at least to the chimps, to cook it.
. . .
But they found that chimps would give up a raw slice of sweet potato in the hand for the prospect of a cooked slice of sweet potato a bit later. That kind of foresight and self-control is something any cook who has eaten too much raw cookie dough can admire.
The research grew out of the idea that cooking itself may have driven changes in human evolution, a hypothesis put forth by Richard Wrangham, an anthropologist at Harvard and several colleagues about 15 years ago in an article in Current Anthropology, and more recently in his book, “Catching Fire: How Cooking Made Us Human.”
He argued that cooking may have begun something like two million years ago, even though hard evidence only dates back about one million years. For that to be true, some early ancestors, perhaps not much more advanced than chimps, had to grasp the whole concept of transforming the raw into the cooked.
Felix Warneken at Harvard and Alexandra G. Rosati, who is about to move from Yale to Harvard, both of whom study cognition, wanted to see if chimpanzees, which often serve as stand-ins for human ancestors, had the cognitive foundation that would prepare them to cook.
. . .
Dr. Rosati said the experiments showed not only that chimps had the patience for cooking, but that they had the “minimal causal understanding they would need” to make the leap to cooking.

For the full story, see:
JAMES GORMAN. “Chimpanzees Would Cook if Given Chance, Research Says.” The New York Times (Wed., JUNE 3, 2015): A3.
(Note: ellipses added.)
(Note: the online version of the story has the date JUNE 2, 2015, and has the title “Chimpanzees Would Cook if Given the Chance, Research Says.”)

The academic article discussed in the passages quoted above is:
Warneken, Felix, and Alexandra G. Rosati. “Cognitive Capacities for Cooking in Chimpanzees.” Proceedings of the Royal Society of London B: Biological Sciences 282, no. 1809 (June 22, 2015).

Little Progress Toward Complex Autonomous Robots

(p. A8) [In June 2015] . . . , the Defense Advanced Research Projects Agency, a Pentagon research arm, . . . [held] the final competition in its Robotics Challenge in Pomona, Calif. With $2 million in prize money for the robot that performs best in a series of rescue-oriented tasks in under an hour, the event . . . offer[ed] what engineers refer to as the “ground truth” — a reality check on the state of the art in the field of mobile robotics.

A preview of their work suggests that nobody needs to worry about a Terminator creating havoc anytime soon. Given a year and a half to improve their machines, the roboticists, who shared details about their work in interviews before the contest in June, appear to have made limited progress.
. . .
“The extraordinary thing that has happened in the last five years is that we have seemed to make extraordinary progress in machine perception,” said Gill Pratt, the Darpa program manager in charge of the Robotics Challenge.
Pattern recognition hardware and software has made it possible for computers to make dramatic progress in computer vision and speech understanding. In contrast, Dr. Pratt said, little headway has been made in “cognition,” the higher-level humanlike processes required for robot planning and true autonomy. As a result, both in the Darpa contest and in the field of robotics more broadly, there has been a re-emphasis on the idea of human-machine partnerships.
“It is extremely important to remember that the Darpa Robotics Challenge is about a team of humans and machines working together,” he said. “Without the person, these machines could hardly do anything at all.”
In fact, the steep challenge in making progress toward mobile robots that can mimic human capabilities is causing robotics researchers worldwide to rethink their goals. Now, instead of trying to build completely autonomous robots, many researchers have begun to think instead of creating ensembles of humans and robots, an approach they describe as co-robots or “cloud robotics.”
Ken Goldberg, a University of California, Berkeley, roboticist, has called on the computing world to drop its obsession with singularity, the much-ballyhooed time when computers are predicted to surpass their human designers. Rather, he has proposed a concept he calls “multiplicity,” with diverse groups of humans and machines solving problems through collaboration.
For decades, artificial-intelligence researchers have noted that the simplest tasks for humans, such as reaching into a pocket to retrieve a quarter, are the most challenging for machines.
“The intuitive idea is that the more money you spend on a robot, the more autonomy you will be able to design into it,” said Rodney Brooks, an M.I.T. roboticist and co-founder of two early companies, iRobot and Rethink Robotics. “The fact is actually the opposite is true: The cheaper the robot, the more autonomy it has.”
For example, iRobot’s Roomba robot is autonomous, but the vacuuming task it performs by wandering around rooms is extremely simple. By contrast, the company’s Packbot is more expensive, designed for defusing bombs, and must be teleoperated or controlled wirelessly by people.

For the full story, see:
JOHN MARKOFF. “A Reality Check for A.I.” The New York Times (Tues., MAY 26, 2015): D2.
(Note: ellipses, and bracketed expressions, added. I corrected a misspelling of “extraordinary.”)
(Note: the online version of the story has the date MAY 25, 2015, and has the title “Relax, the Terminator Is Far Away.”)