“Meditation Is Demotivating”

(p. 6) . . . on the face of it, mindfulness might seem counterproductive in a workplace setting. A central technique of mindfulness meditation, after all, is to accept things as they are. Yet companies want their employees to be motivated. And the very notion of motivation — striving to obtain a more desirable future — implies some degree of discontentment with the present, which seems at odds with a psychological exercise that instills equanimity and a sense of calm.
To test this hunch, we recently conducted five studies, involving hundreds of people, to see whether there was a tension between mindfulness and motivation. As we report in a forthcoming article in the journal Organizational Behavior and Human Decision Processes, we found strong evidence that meditation is demotivating.

For the full commentary, see:
Kathleen D. Vohs and Andrew C. Hafenbrack. “GRAY MATTER; Don’t Meditate at Work.” The New York Times, SundayReview Section (Sunday, June 17, 2018): 6.
(Note: ellipsis added.)
(Note: the online version of the commentary has the date June 14, 2018, and has the title “GRAY MATTER; Hey Boss, You Don’t Want Your Employees to Meditate.”)

The article by Hafenbrack and Vohs, mentioned above, is:
Hafenbrack, Andrew C., and Kathleen D. Vohs. “Mindfulness Meditation Impairs Task Motivation but Not Performance.” Organizational Behavior and Human Decision Processes 147 (July 2018): 1-15.

“Infatuation with Deep Learning May Well Breed Myopia . . . Overinvestment . . . and Disillusionment”

(p. B1) For the past five years, the hottest thing in artificial intelligence has been a branch known as deep learning. The grandly named statistical technique, put simply, gives computers a way to learn by processing vast amounts of data.
. . .
But now some scientists are asking whether deep learning is really so deep after all.
In recent conversations, online comments and a few lengthy essays, a growing number of A.I. experts are warning that the infatuation with deep learning may well breed myopia and overinvestment now — and disillusionment later.
“There is no real intelligence there,” said Michael I. Jordan, a professor at the University of California, Berkeley, and the author of an essay published in April intended to temper the lofty expectations surrounding A.I. “And I think that trusting these brute force algorithms too much is a faith misplaced.”
The danger, some experts warn, is (p. B4) that A.I. will run into a technical wall and eventually face a popular backlash — a familiar pattern in artificial intelligence since that term was coined in the 1950s. With deep learning in particular, researchers said, the concerns are being fueled by the technology’s limits.
Deep learning algorithms train on a batch of related data — like pictures of human faces — and are then fed more and more data, which steadily improve the software’s pattern-matching accuracy. Although the technique has spawned successes, the results are largely confined to fields where those huge data sets are available and the tasks are well defined, like labeling images or translating speech to text.
The technology struggles in the more open terrains of intelligence — that is, meaning, reasoning and common-sense knowledge. While deep learning software can instantly identify millions of words, it has no understanding of a concept like “justice,” “democracy” or “meddling.”
Researchers have shown that deep learning can be easily fooled. Scramble a relative handful of pixels, and the technology can mistake a turtle for a rifle or a parking sign for a refrigerator.
In a widely read article published early this year on arXiv.org, a site for scientific papers, Gary Marcus, a professor at New York University, posed the question: “Is deep learning approaching a wall?” He wrote, “As is so often the case, the patterns extracted by deep learning are more superficial than they initially appear.”

For the full story, see:
Steve Lohr. “Researchers Seek Smarter Paths to A.I.” The New York Times (Thursday, June 21, 2018): B1 & B4.
(Note: ellipses added.)
(Note: the online version of the story has the date June 20, 2018, and has the title “Is There a Smarter Path to Artificial Intelligence? Some Experts Hope So.” The June 21st date is the publication date in my copy of the National Edition.)

The essay by Jordan, mentioned above, is:
Jordan, Michael I. “Artificial Intelligence – the Revolution Hasn’t Happened Yet.” Medium.com, April 18, 2018.

The manuscript by Marcus, mentioned above, is:
Marcus, Gary. “Deep Learning: A Critical Appraisal.” arXiv.org, Jan. 2, 2018.

The Diversity That Matters Most Is Diversity of Thought

(p. A15) If you want anyone to pay attention to you in meetings, don’t ever preface your opposition to a proposal by saying: “Just to play devil’s advocate . . .” If you disagree with something, just say it and hold your ground until you’re convinced otherwise. There are many such useful ideas in Charlan Nemeth’s “In Defense of Troublemakers,” her study of dissent in life and the workplace. But if this one alone takes hold, it could transform millions of meetings, doing away with all those mushy, consensus-driven hours wasted by people too scared of disagreement or power to speak truth to gibberish. Not only would better decisions get made, but the process of making them would vastly improve.
. . .
In the latter part of her book, Ms. Nemeth explores in more detail how dissent improves the way in which groups think. She is ruthless toward conventional “brainstorming,” which tends toward the uncritical accumulation of bad ideas rather than the argumentative heat that forges better ideas. It’s only through criticism that concepts receive proper scrutiny. “Repeatedly we find that dissent has value, even when it is wrong, even when we don’t like the dissenter, and even when we are not convinced of his position,” she writes. “Dissent . . . enables us to think more independently” and “also stimulates thought that is open, divergent, flexible, and original.”
. . .
Ms. Nemeth’s punchy book also has an invaluable section on diversity in groups. All too often, she writes, in pursuit of diversity we focus on everything but the way people think. We look at a group’s gender, color or experience, and once the palette looks right declare it diverse. But you can have all of that and still have a group that thinks the same and reinforces a wrong-headed consensus.
By contrast, you can have a group that is demographically homogeneous yet violently heterogeneous in the way it thinks. The kind of diversity that leads to well-informed decisions is not necessarily the kind of diversity that gives the appearance of social justice. That will be a hard message for many organizations to swallow. But as with many of the arguments that Ms. Nemeth makes in her book, it is one that she gamely delivers and that all managers interested in the quality and integrity of their decision-making would do well to heed.

For the full review, see:
Philip Delves Broughton. “BOOKSHELF; Rocking The Boat.” The Wall Street Journal (Thursday, May 9, 2018): A15.
(Note: ellipsis internal to a paragraph, in original; ellipses between paragraphs, added.)
(Note: the online version of the review has the date May 10, 2018, and has the title “BOOKSHELF; ‘In Defense of Troublemakers’ Review: Rocking the Boat.”)

The book under review is:
Nemeth, Charlan. In Defense of Troublemakers: The Power of Dissent in Life and Business. New York: Basic Books, 2018.

A.I. “Will Never Match the Creativity of Human Beings or the Fluidity of the Real World”

(p. A21) If you read Google’s public statement about Google Duplex, you’ll discover that the initial scope of the project is surprisingly limited. It encompasses just three tasks: helping users “make restaurant reservations, schedule hair salon appointments, and get holiday hours.”
Schedule hair salon appointments? The dream of artificial intelligence was supposed to be grander than this — to help revolutionize medicine, say, or to produce trustworthy robot helpers for the home.
The reason Google Duplex is so narrow in scope isn’t that it represents a small but important first step toward such goals. The reason is that the field of A.I. doesn’t yet have a clue how to do any better.
. . .
The narrower the scope of a conversation, the easier it is to have. If your interlocutor is more or less following a script, it is not hard to build a computer program that, with the help of simple phrase-book-like templates, can recognize a few variations on a theme. (“What time does your establishment close?” “I would like a reservation for four people at 7 p.m.”) But mastering a Berlitz phrase book doesn’t make you a fluent speaker of a foreign language. Sooner or later the non sequiturs start flowing.
. . .
To be fair, Google Duplex doesn’t literally use phrase-book-like templates. It uses “machine learning” techniques to extract a range of possible phrases drawn from an enormous data set of recordings of human conversations. But the basic problem remains the same: No matter how much data you have and how many patterns you discern, your data will never match the creativity of human beings or the fluidity of the real world. The universe of possible sentences is too complex. There is no end to the variety of life — or to the ways in which we can talk about that variety.
. . .
Today’s dominant approach to A.I. has not worked out. Yes, some remarkable applications have been built from it, including Google Translate and Google Duplex. But the limitations of these applications as a form of intelligence should be a wake-up call. If machine learning and big data can’t get us any further than a restaurant reservation, even in the hands of the world’s most capable A.I. company, it is time to reconsider that strategy.

For the full commentary, see:
Gary Marcus and Ernest Davis. “A.I. Is Harder Than You Think.” The New York Times (Saturday, May 19, 2018): A21.
(Note: ellipses added.)
(Note: the online version of the commentary has the date May 18, 2018.)

Philosopher Argued Artificial Intelligence Would Never Reach Human Intelligence

(p. A28) Professor Dreyfus became interested in artificial intelligence in the late 1950s, when he began teaching at the Massachusetts Institute of Technology. He often brushed shoulders with scientists trying to turn computers into reasoning machines.
. . .
Inevitably, he said, artificial intelligence ran up against something called the common-knowledge problem: the vast repository of facts and information that ordinary people possess as though by inheritance, and can draw on to make inferences and navigate their way through the world.
“Current claims and hopes for progress in models for making computers intelligent are like the belief that someone climbing a tree is making progress toward reaching the moon,” he wrote in “Mind Over Machine: The Power of Human Intuition and Expertise in the Era of the Computer” (1985), a book he collaborated on with his younger brother Stuart, a professor of industrial engineering at Berkeley.
His criticisms were greeted with intense hostility in the world of artificial intelligence researchers, who remained confident that success lay within reach as computers grew more powerful.
When that did not happen, Professor Dreyfus found himself vindicated, doubly so when research in the field began incorporating his arguments, expanded upon in a second edition of “What Computers Can’t Do” in 1979 and “What Computers Still Can’t Do” in 1992.
. . .
For his 2006 book “Philosophy: The Latest Answers to the Oldest Questions,” Nicholas Fearn broached the topic of artificial intelligence in an interview with Professor Dreyfus, who told him: “I don’t think about computers anymore. I figure I won and it’s over: They’ve given up.”

For the full obituary, see:
William Grimes. “Hubert L. Dreyfus, Who Put Computing In Its Place, Dies at 87.” The New York Times (Wednesday, May 3, 2017): A28.
(Note: ellipses added.)
(Note: the online version of the obituary has the date May 2, 2017, and has the title “Hubert L. Dreyfus, Philosopher of the Limits of Computers, Dies at 87.”)

Dreyfus’s last book on the limits of artificial intelligence was:
Dreyfus, Hubert L. What Computers Still Can’t Do: A Critique of Artificial Reason. Cambridge, MA: The MIT Press, 1992.

Happiness “Emerges from the Pursuit of Purpose”

(p. C7) The modern positive-psychology movement — . . . — is a blend of wise goals, good studies, surprising discoveries, old truths and overblown promises. Daniel Horowitz’s history deftly reveals the eternal lessons that underlie all its incarnations: Money can’t buy happiness; human beings need social bonds, satisfying work and strong communities; a life based entirely on the pursuit of pleasure ultimately becomes pleasureless. As Viktor Frankl told us, “Happiness cannot be pursued; it must ensue. One must have a reason to ‘be happy.’ ” That reason, he said, emerges from the pursuit of purpose.

For the full review, see:
Carol Tavris. “How Smiles Were Packaged and Sold.” The Wall Street Journal (Saturday, March 31, 2018): C5 & C7.
(Note: ellipsis added.)
(Note: the online version of the review has the date March 29, 2018, and has the title “‘Happier?’ and ‘The Hope Circuit’ Reviews: How Smiles Were Packaged and Sold.”)

The book under review is:
Horowitz, Daniel. Happier?: The History of a Cultural Movement That Aspired to Transform America. New York: Oxford University Press, 2017.

“A Litigious, Protective Culture Has Gone Too Far”

(p. A1) SHOEBURYNESS, England — Educators in Britain, after decades spent in a collective effort to minimize risk, are now, cautiously, getting into the business of providing it.
. . .
Limited risks are increasingly cast by experts as an experience essential to childhood development, useful in building resilience and grit.
Outside the Princess Diana Playground in Kensington Gardens in London, which attracts more than a million visitors a year, a placard informs parents that risks have been “intentionally provided, so that your child can develop an appreciation of risk in a controlled play environment rather than taking similar risks in an uncontrolled and unregulated wider world.”
This view is tinged with nostalgia for an earlier Britain, in which children were tougher and more self-reliant. It resonates both with right-wing tabloids, which see it as a corrective to the cosseting of a liberal nanny state; and with progressives, drawn to a freer and more natural childhood.
. . .
(p. A12) Britain is one of a number of countries where educators and regulators say a litigious, protective culture has gone too far, leaching healthy risks out of childhood. Guidelines on play from the government agency that oversees health and safety issues in Britain state that “the goal is not to eliminate risk.”

For the full story, see:
Ellen Barry. “In Britain, Learning to Accept Risk, and the Occasional ‘Owie’.” The New York Times, First Section (Sunday, March 11, 2018): A1 & A12.
(Note: ellipses added.)
(Note: the online version of the story has the date March 10, 2018, and has the title “In Britain’s Playgrounds, ‘Bringing in Risk’ to Build Resilience.”)

Brain as Computer “Is a Bad Metaphor”

(p. A13) In “The Biological Mind: How Brain, Body, and Environment Collaborate to Make Us Who We Are,” Mr. Jasanoff, the director of the MIT Center for Neurobiological Engineering, presents a lucid primer on current brain science that takes the form of a passionate warning about its limitations. He argues that the age of popular neurohype has persuaded many of us to identify completely with our brains and to misunderstand the true nature of these marvelous organs.
We hear constantly, for example, that the brain is a computer. This is a bad metaphor, Mr. Jasanoff insists. Computers run on electricity, so we concentrate on the electrical activity within the brain; yet there is also chemical and hormonal signaling, for which there are no good computing analogies.

For the full review, see:
Steven Poole. “BOOKSHELF; Identify Your Self.” The Wall Street Journal (Friday, April 6, 2018): A13.
(Note: the online version of the review has the date April 5, 2018, and has the title “BOOKSHELF; ‘The Biological Mind’ Review: Identify Your Self.”)

The book under review is:
Jasanoff, Alan. The Biological Mind: How Brain, Body, and Environment Collaborate to Make Us Who We Are. New York: Basic Books, 2018.

Labor-Intensive Tinkering Can Advance Science

(p. A24) When John E. Sulston was 5 years old and growing up in Britain, the son of an Anglican priest, his parents sent him to a private school. There, he discovered, sports were his nemesis.
“I absolutely loathed games,” he said. “I was hopeless.”
When it came to schoolwork, he said, he was “not a books person.”
He had only one consuming interest: science. He liked to tinker, to figure out how things were put together.
. . .
The Nobel he received, shared with two other scientists, recognized the good data he amassed in his work on the tiny transparent roundworm C. elegans in an effort to better understand how organisms develop.
. . .
At the time, it was widely believed that the 558 cells the worm had when it hatched were all it would ever have. But Dr. Sulston noticed that, in fact, the worm kept gaining cells as it developed. And by tracing the patterns of divisions that gave rise to those new cells, he found, surprisingly, that the worm also lost cells in a predictable way. Certain cells were destined to die at a specific time, digesting their own DNA.
Dr. Sulston’s next major project was to trace the fate of every single cell in a worm. It was a task so demanding and labor-intensive that other scientists still shake their heads in amazement that he got it done.
Each day, bending over his microscope for eight or more hours, he would start with a worm embryo and choose one of its cells. He would then watch the cell as it divided and follow each of its progeny cells as, together, they grew and formed the organism. This went on for a total of 18 months.
In the end, he had a complete map of every one of the worm’s 959 cells (not counting sperm and egg cells).

For the full obituary, see:
Gina Kolata. “John Sulston, 75; Tiny Worm Guided Him to Nobel.” The New York Times (Friday, March 16, 2018): A24.
(Note: ellipses added.)
(Note: the online version of the obituary has the date March 15, 2018, and has the title “John E. Sulston, 75, Dies; Found Clues to Genes in a Worm.”)

Individualistic Cultures Foster Innovation

Source of graph: online version of the WSJ commentary quoted and cited below.

(p. B1) Luther matters to investors not because of the religion he founded, but because of the cultural impact of challenging the Catholic Church’s grip on society. By ushering in what Edmund Phelps, the Nobel-winning director of Columbia University’s Center on Capitalism and Society, calls “the age of the individual,” Luther laid the groundwork for capitalism.
. . .
(p. B10) Mr. Phelps and collaborators Saifedean Ammous, Raicho Bojilov and Gylfi Zoega show that even in recent years, countries with more individualistic cultures have more innovative economies. They demonstrate a strong link between countries that surveys show to be more individualistic, and total factor productivity, a proxy for innovation that measures growth due to more efficient use of labor and capital. Less individualistic cultures, such as France, Spain and Japan, showed little innovation while the individualistic U.S. led.
As Mr. Bojilov points out, correlation doesn’t prove causation, so they looked at the effects of country of origin on the success of second-, third- and fourth-generation Americans as entrepreneurs. The effects turn out to be significant but leave room for debate about how important individualistic attitudes are to financial and economic success.

For the full commentary, see:
James Mackintosh. “STREETWISE; What Martin Luther Says About Capitalism.” The Wall Street Journal (Friday, Nov. 3, 2017): B1 & B10.
(Note: ellipsis added.)
(Note: the online version of the commentary has the date Nov. 2, 2017, and has the title “STREETWISE; What 500 Years of Protestantism Teaches Us About Capitalism’s Future.” Where there are minor differences in wording in the two versions, the passages quoted above follow the online version.)

Macron Gives France Hope That “Tomorrow Can Be Better Than Today”

(p. A27) PARIS — When people used to ask me what I missed about America, I would say, “The optimism.” I grew up in the land of hope, then moved to one whose catchphrases are “It’s not possible” and “Hell is other people.” I walked around Paris feeling conspicuously chipper.
But lately I’ve had a kind of emotional whiplash. France is starting to seem like an upbeat, can-do country, while Americans are less sure that everything will be O.K.
. . .
The French haven’t become magically cheerful, but there’s a creeping sense that hope isn’t idiotic, and life can actually improve. As is common with a new president, there was a jump in optimism after Emmanuel Macron was elected last year. But this time, optimism has remained strong, and in January it hit an eight-year high.
It helps that France’s economy is finally growing more and that Mr. Macron has made good on promises ranging from overhauling the labor laws to shrinking class sizes at kindergartens in disadvantaged areas.
. . .
“The France of the optimists has won, and is dragging the other part of France toward its own side,” said Claudia Senik, an economist who heads the Well-Being Observatory, an academic think tank here.
The French are even taking an intellectual interest in this alien idea. There are optimism clubs, conferences and school programs, scholars of positivity and books like “50+1 Good Reasons to Choose Optimism.” In September Mr. Macron was a patron of the Global Positive Forum, a study group of “positive initiatives” in business and government. (“Tomorrow can be better than today,” the forum’s website insists.)

For the full commentary, see:
Pamela Druckerman. “The New French Optimism.” The New York Times (Friday, March 23, 2018): A27.
(Note: ellipses added.)
(Note: the online version of the commentary has the date March 22, 2018, and has the title “Are the French the New Optimists?”)