Rats, Mice, and Humans Fail to Ignore Sunk Costs

(p. D6) Suppose that, seeking a fun evening out, you pay $175 for a ticket to a new Broadway musical. Seated in the balcony, you quickly realize that the acting is bad, the sets are ugly and no one, you suspect, will go home humming the melodies.
Do you head out the door at the intermission, or stick it out for the duration?
Studies of human decision-making suggest that most people will stay put, even though money spent in the past logically should have no bearing on the choice.
This “sunk cost fallacy,” as economists call it, is one of many ways that humans allow emotions to affect their choices, sometimes to their own detriment. But the tendency to factor past investments into decision-making is apparently not limited to Homo sapiens.
In a study published on Thursday [July 12, 2018] in the journal Science, investigators at the University of Minnesota reported that mice and rats were just as likely as humans to be influenced by sunk costs.
The more time they invested in waiting for a reward — in the case of the rodents, flavored pellets; in the case of the humans, entertaining videos — the less likely they were to quit the pursuit before the delay ended.
“Whatever is going on in the humans is also going on in the nonhuman animals,” said A. David Redish, a professor of neuroscience at the University of Minnesota and an author of the study.
This cross-species consistency, he and others said, suggested that in some decision-making situations, taking account of how much has already been invested might pay off.

For the full story, see:
Erica Goode. "'Sunk Cost Fallacy' Claims More Victims." The New York Times (Tuesday, July 17, 2018): D6.
(Note: bracketed date added.)
(Note: the online version of the story has the date July 12, 2018, and has the title “Mice Don’t Know When to Let It Go, Either.”)

Human Intelligence Helps A.I. Work Better

(p. B3) A recent study at the M.I.T. Media Lab showed how biases in the real world could seep into artificial intelligence. Commercial software is nearly flawless at telling the gender of white men, researchers found, but not so for darker-skinned women.
And Google had to apologize in 2015 after its image-recognition photo app mistakenly labeled photos of black people as “gorillas.”
Professor Nourbakhsh said that A.I.-enhanced security systems could struggle to determine whether a nonwhite person was arriving as a guest, a worker or an intruder.
One way to parse the system’s bias is to make sure humans are still verifying the images before responding.
“When you take the human out of the loop, you lose the empathetic component,” Professor Nourbakhsh said. “If you keep humans in the loop and use these systems, you get the best of all worlds.”

For the full story, see:
Paul Sullivan. “WEALTH MATTERS; Can Artificial Intelligence Keep Your Home Secure?” The New York Times (Saturday, June 30, 2018): B3.
(Note: the online version of the story has the date June 29, 2018.)

The “recent study” mentioned above, is:
Buolamwini, Joy, and Timnit Gebru. “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” Proceedings of Machine Learning Research 81 (2018): 1-15.
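
The disparity described above comes down to disaggregating a classifier's accuracy by demographic subgroup instead of reporting one overall number. Below is a minimal sketch of that kind of audit in Python; the subgroup labels, the toy predictions, and the accuracy_by_group helper are hypothetical illustrations, not the benchmark or code used in the study.

```python
# Hypothetical audit sketch: report a classifier's accuracy per subgroup
# rather than as a single aggregate. The records below are made up; the
# Buolamwini-Gebru study used a face benchmark balanced by gender and skin type.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (subgroup, predicted_label, true_label) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {group: correct[group] / total[group] for group in total}

# Toy predictions: near-perfect on one subgroup, much weaker on another.
records = [
    ("lighter-skinned men", "male", "male"),
    ("lighter-skinned men", "male", "male"),
    ("lighter-skinned men", "male", "male"),
    ("lighter-skinned men", "male", "male"),
    ("darker-skinned women", "female", "female"),
    ("darker-skinned women", "male", "female"),
    ("darker-skinned women", "female", "female"),
    ("darker-skinned women", "male", "female"),
]

overall = sum(p == a for _, p, a in records) / len(records)
print(f"overall accuracy: {overall:.0%}")   # 75% -- looks passable in aggregate
for group, acc in accuracy_by_group(records).items():
    print(f"{group}: {acc:.0%}")            # 100% vs. 50% -- the hidden gap
```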

Earned Income Matters More Than Equal Income

(p. A13) The concept of a universal basic income, or UBI, has become part of the moral armor of Silicon Valley moguls who want a socially conscious defense against the charge that technology is making humanity obsolete.
. . .
We need policies that encourage job creation and working, not policies that pay people not to work.
In the mid-1960s, about 5% of men aged 25 to 54 were jobless. For 40 years that share has risen, and for much of the past decade the rate has remained over 15%. Suicide, divorce and opioid abuse are all associated with nonemployment, and many facts suggest that the misery of joblessness is far worse than that of a low-paying job. According to the most recent data, only 7% of working men in households earning less than $35,000 report being dissatisfied with their lives. But that share soars to 18% among the nonemployed of all incomes. This suggests that promoting employment is more important than reducing inequality.
. . . 50 years of evidence about labor supply in the U.S. suggests that giving people money will lead them to work less.
The Negative Income Tax experiments of the 1970s — when poorer households in a number of states received direct cash payments to keep them at a minimum income — are the closest America has come to a UBI. But they did not show “minimal impact on work,” as Mr. Yang suggests. Rather, they produced a quite significant work-hours reduction of between 5% and 25%, as well as “employment rate reductions . . . from about 1 to 10 percentage points,” according to one capable study.

For the full review, see:
Edward Glaeser. “BOOKSHELF; ‘Give People Money’ and ‘The War on Normal People’ Review: The Cure for Poverty? A guaranteed income does nothing to address the misery of joblessness, nor the associated plagues of divorce, opioid abuse and suicide.” The Wall Street Journal (Tuesday, July 10, 2018): A13.
(Note: first two ellipses added; third ellipsis in original.)
(Note: the online version of the review has the date July 9, 2018, and has the title “BOOKSHELF; ‘Give People Money’ and ‘The War on Normal People’ Review: The Cure for Poverty? A guaranteed income does nothing to address the misery of joblessness, nor the associated plagues of divorce, opioid abuse and suicide.”)

“Meditation Is Demotivating”

(p. 6) . . . on the face of it, mindfulness might seem counterproductive in a workplace setting. A central technique of mindfulness meditation, after all, is to accept things as they are. Yet companies want their employees to be motivated. And the very notion of motivation — striving to obtain a more desirable future — implies some degree of discontentment with the present, which seems at odds with a psychological exercise that instills equanimity and a sense of calm.
To test this hunch, we recently conducted five studies, involving hundreds of people, to see whether there was a tension between mindfulness and motivation. As we report in a forthcoming article in the journal Organizational Behavior and Human Decision Processes, we found strong evidence that meditation is demotivating.

For the full commentary, see:
Kathleen D. Vohs and Andrew C. Hafenbrack. “GRAY MATTER; Don’t Meditate at Work.” The New York Times, SundayReview Section (Sunday, June 17, 2018): 6.
(Note: ellipsis added.)
(Note: the online version of the commentary has the date June 14, 2018, and has the title “GRAY MATTER; Hey Boss, You Don’t Want Your Employees to Meditate.”)

The article by Hafenbrack and Vohs, mentioned above, is:
Hafenbrack, Andrew C., and Kathleen D. Vohs. “Mindfulness Meditation Impairs Task Motivation but Not Performance.” Organizational Behavior and Human Decision Processes 147 (July 2018): 1-15.

“Infatuation with Deep Learning May Well Breed Myopia . . . Overinvestment . . . and Disillusionment”

(p. B1) For the past five years, the hottest thing in artificial intelligence has been a branch known as deep learning. The grandly named statistical technique, put simply, gives computers a way to learn by processing vast amounts of data.
. . .
But now some scientists are asking whether deep learning is really so deep after all.
In recent conversations, online comments and a few lengthy essays, a growing number of A.I. experts are warning that the infatuation with deep learning may well breed myopia and overinvestment now — and disillusionment later.
“There is no real intelligence there,” said Michael I. Jordan, a professor at the University of California, Berkeley, and the author of an essay published in April intended to temper the lofty expectations surrounding A.I. “And I think that trusting these brute force algorithms too much is a faith misplaced.”
The danger, some experts warn, is (p. B4) that A.I. will run into a technical wall and eventually face a popular backlash — a familiar pattern in artificial intelligence since that term was coined in the 1950s. With deep learning in particular, researchers said, the concerns are being fueled by the technology’s limits.
Deep learning algorithms train on a batch of related data — like pictures of human faces — and are then fed more and more data, which steadily improve the software’s pattern-matching accuracy. Although the technique has spawned successes, the results are largely confined to fields where those huge data sets are available and the tasks are well defined, like labeling images or translating speech to text.
The technology struggles in the more open terrains of intelligence — that is, meaning, reasoning and common-sense knowledge. While deep learning software can instantly identify millions of words, it has no understanding of a concept like “justice,” “democracy” or “meddling.”
Researchers have shown that deep learning can be easily fooled. Scramble a relative handful of pixels, and the technology can mistake a turtle for a rifle or a parking sign for a refrigerator.
In a widely read article published early this year on arXiv.org, a site for scientific papers, Gary Marcus, a professor at New York University, posed the question: “Is deep learning approaching a wall?” He wrote, “As is so often the case, the patterns extracted by deep learning are more superficial than they initially appear.”

For the full story, see:
Steve Lohr. “Researchers Seek Smarter Paths to A.I.” The New York Times (Thursday, June 21, 2018): B1 & B4.
(Note: ellipses added.)
(Note: the online version of the story has the date June 20, 2018, and has the title “Is There a Smarter Path to Artificial Intelligence? Some Experts Hope So.” The June 21st date is the publication date in my copy of the National Edition.)

The essay by Jordan, mentioned above, is:
Jordan, Michael I. “Artificial Intelligence – the Revolution Hasn’t Happened Yet.” Medium.com, April 18, 2018.

The manuscript by Marcus, mentioned above, is:
Marcus, Gary. “Deep Learning: A Critical Appraisal.” arXiv.org, Jan. 2, 2018.
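
The “scramble a relative handful of pixels” failure quoted above refers to adversarial examples: tiny, deliberately chosen input changes that flip a model’s prediction. Below is a minimal toy sketch of the idea in Python, a gradient-sign perturbation against a made-up linear classifier; it illustrates the principle only and is not the turtle-for-rifle attack or any model from the article.

```python
# Toy illustration of an adversarial perturbation (an assumed setup, not the
# published turtle/rifle attack): every "pixel" of the input moves by only
# 0.01, but because the changes are chosen adversarially they add up across
# thousands of features and flip the classifier's decision.
import numpy as np

rng = np.random.default_rng(0)
d = 10_000  # number of input features ("pixels")

# A fixed "trained" linear classifier: label = sign(w . x).
w = rng.normal(size=d)

def predict(x):
    return 1 if w @ x > 0 else -1

# An input the classifier labels +1 with a modest margin of 5.0.
x = rng.normal(size=d)
x += (5.0 - w @ x) / (w @ w) * w
assert predict(x) == 1

# Gradient-sign perturbation: for a linear model the gradient of the score
# with respect to x is w itself, so step every feature by -eps * sign(w).
eps = 0.01
x_adv = x - eps * np.sign(w)

print("clean prediction:     ", predict(x))                  # +1
print("perturbed prediction: ", predict(x_adv))               # -1
print("max per-pixel change: ", np.max(np.abs(x_adv - x)))    # 0.01
```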

The Diversity That Matters Most Is Diversity of Thought

(p. A15) If you want anyone to pay attention to you in meetings, don’t ever preface your opposition to a proposal by saying: “Just to play devil’s advocate . . .” If you disagree with something, just say it and hold your ground until you’re convinced otherwise. There are many such useful ideas in Charlan Nemeth’s “In Defense of Troublemakers,” her study of dissent in life and the workplace. But if this one alone takes hold, it could transform millions of meetings, doing away with all those mushy, consensus-driven hours wasted by people too scared of disagreement or power to speak truth to gibberish. Not only would better decisions get made, but the process of making them would vastly improve.
. . .
In the latter part of her book, Ms. Nemeth explores in more detail how dissent improves the way in which groups think. She is ruthless toward conventional “brainstorming,” which tends toward the uncritical accumulation of bad ideas rather than the argumentative heat that forges better ideas. It’s only through criticism that concepts receive proper scrutiny. “Repeatedly we find that dissent has value, even when it is wrong, even when we don’t like the dissenter, and even when we are not convinced of his position,” she writes. “Dissent . . . enables us to think more independently” and “also stimulates thought that is open, divergent, flexible, and original.”
. . .
Ms. Nemeth’s punchy book also has an invaluable section on diversity in groups. All too often, she writes, in pursuit of diversity we focus on everything but the way people think. We look at a group’s gender, color or experience, and once the palette looks right declare it diverse. But you can have all of that and still have a group that thinks the same and reinforces a wrong-headed consensus.
By contrast, you can have a group that is demographically homogeneous yet violently heterogeneous in the way it thinks. The kind of diversity that leads to well-informed decisions is not necessarily the kind of diversity that gives the appearance of social justice. That will be a hard message for many organizations to swallow. But as with many of the arguments that Ms. Nemeth makes in her book, it is one that she gamely delivers and that all managers interested in the quality and integrity of their decision-making would do well to heed.

For the full review, see:
Philip Delves Broughton. “BOOKSHELF; Rocking The Boat.” The Wall Street Journal (Thursday, May 9, 2018): A15.
(Note: ellipsis internal to a paragraph, in original; ellipses between paragraphs, added.)
(Note: the online version of the review has the date May 10, 2018, and has the title “BOOKSHELF; ‘In Defense of Troublemakers’ Review: Rocking the Boat.”)

The book under review is:
Nemeth, Charlan. In Defense of Troublemakers: The Power of Dissent in Life and Business. New York: Basic Books, 2018.

A.I. “Will Never Match the Creativity of Human Beings or the Fluidity of the Real World”

(p. A21) If you read Google’s public statement about Google Duplex, you’ll discover that the initial scope of the project is surprisingly limited. It encompasses just three tasks: helping users “make restaurant reservations, schedule hair salon appointments, and get holiday hours.”
Schedule hair salon appointments? The dream of artificial intelligence was supposed to be grander than this — to help revolutionize medicine, say, or to produce trustworthy robot helpers for the home.
The reason Google Duplex is so narrow in scope isn’t that it represents a small but important first step toward such goals. The reason is that the field of A.I. doesn’t yet have a clue how to do any better.
. . .
The narrower the scope of a conversation, the easier it is to have. If your interlocutor is more or less following a script, it is not hard to build a computer program that, with the help of simple phrase-book-like templates, can recognize a few variations on a theme. (“What time does your establishment close?” “I would like a reservation for four people at 7 p.m.”) But mastering a Berlitz phrase book doesn’t make you a fluent speaker of a foreign language. Sooner or later the non sequiturs start flowing.
. . .
To be fair, Google Duplex doesn’t literally use phrase-book-like templates. It uses “machine learning” techniques to extract a range of possible phrases drawn from an enormous data set of recordings of human conversations. But the basic problem remains the same: No matter how much data you have and how many patterns you discern, your data will never match the creativity of human beings or the fluidity of the real world. The universe of possible sentences is too complex. There is no end to the variety of life — or to the ways in which we can talk about that variety.
. . .
Today’s dominant approach to A.I. has not worked out. Yes, some remarkable applications have been built from it, including Google Translate and Google Duplex. But the limitations of these applications as a form of intelligence should be a wake-up call. If machine learning and big data can’t get us any further than a restaurant reservation, even in the hands of the world’s most capable A.I. company, it is time to reconsider that strategy.

For the full commentary, see:
Gary Marcus and Ernest Davis. “A.I. Is Harder Than You Think.” The New York Times (Saturday, May 19, 2018): A21.
(Note: ellipses added.)
(Note: the online version of the commentary has the date May 18, 2018.)
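
The “phrase-book-like templates” that Marcus and Davis contrast with Duplex’s learned phrases can be made concrete in a few lines: a handful of patterns cover the expected variations on a scripted exchange, and anything off-script falls straight through. The sketch below is a hypothetical illustration of that template approach, not Google Duplex’s actual mechanism, which, as the excerpt notes, learns its phrases from recorded conversations.

```python
# Hypothetical phrase-book-style handler for a narrow, scripted exchange.
# A few regex templates cover the expected variations; anything off-script
# (a non sequitur) is simply not understood.
import re

TEMPLATES = [
    (re.compile(r"what time do(?:es)? (?:you|your \w+) close", re.I),
     "We close at 9 p.m. tonight."),
    (re.compile(r"reservation for (\d+) (?:people|person)s? at "
                r"(\d{1,2}(?::\d{2})? ?(?:a\.?m\.?|p\.?m\.?)?)", re.I),
     "Booked: a table for {0} at {1}."),
    (re.compile(r"are you open on (\w+day)", re.I),
     "Yes, we are open on {0}."),
]

def respond(utterance):
    for pattern, reply in TEMPLATES:
        match = pattern.search(utterance)
        if match:
            return reply.format(*match.groups())
    return "Sorry, I didn't understand that."  # off-script: the system is lost

print(respond("What time does your establishment close?"))
print(respond("I would like a reservation for 4 people at 7 p.m."))
print(respond("Actually, my cousin's band is playing that night. Thoughts?"))
```

The narrower the scope, the more of the conversation such templates can cover, which is exactly the authors’ point about why Duplex’s launch tasks are so limited.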

Philosopher Argued Artificial Intelligence Would Never Reach Human Intelligence

(p. A28) Professor Dreyfus became interested in artificial intelligence in the late 1950s, when he began teaching at the Massachusetts Institute of Technology. He often brushed shoulders with scientists trying to turn computers into reasoning machines.
. . .
Inevitably, he said, artificial intelligence ran up against something called the common-knowledge problem: the vast repository of facts and information that ordinary people possess as though by inheritance, and can draw on to make inferences and navigate their way through the world.
“Current claims and hopes for progress in models for making computers intelligent are like the belief that someone climbing a tree is making progress toward reaching the moon,” he wrote in “Mind Over Machine: The Power of Human Intuition and Expertise in the Era of the Computer” (1985), a book he collaborated on with his younger brother Stuart, a professor of industrial engineering at Berkeley.
His criticisms were greeted with intense hostility in the world of artificial intelligence researchers, who remained confident that success lay within reach as computers grew more powerful.
When that did not happen, Professor Dreyfus found himself vindicated, doubly so when research in the field began incorporating his arguments, expanded upon in a second edition of “What Computers Can’t Do” in 1979 and “What Computers Still Can’t Do” in 1992.
. . .
For his 2006 book “Philosophy: The Latest Answers to the Oldest Questions,” Nicholas Fearn broached the topic of artificial intelligence in an interview with Professor Dreyfus, who told him: “I don’t think about computers anymore. I figure I won and it’s over: They’ve given up.”

For the full obituary, see:
William Grimes. “Hubert L. Dreyfus, Who Put Computing In Its Place, Dies at 87.” The New York Times (Wednesday, May 3, 2017): A28.
(Note: ellipses added.)
(Note: the online version of the obituary has the date May 2, 2017, and has the title “Hubert L. Dreyfus, Philosopher of the Limits of Computers, Dies at 87.”)

Dreyfus’s last book on the limits of artificial intelligence was:
Dreyfus, Hubert L. What Computers Still Can’t Do: A Critique of Artificial Reason. Cambridge, MA: The MIT Press, 1992.

Happiness “Emerges from the Pursuit of Purpose”

(p. C7) The modern positive-psychology movement — . . . — is a blend of wise goals, good studies, surprising discoveries, old truths and overblown promises. Daniel Horowitz’s history deftly reveals the eternal lessons that underlie all its incarnations: Money can’t buy happiness; human beings need social bonds, satisfying work and strong communities; a life based entirely on the pursuit of pleasure ultimately becomes pleasureless. As Viktor Frankl told us, “Happiness cannot be pursued; it must ensue. One must have a reason to ‘be happy.’ ” That reason, he said, emerges from the pursuit of purpose.

For the full review, see:
Carol Tavris. “How Smiles Were Packaged and Sold.” The Wall Street Journal (Saturday, March 31, 2018): C5 & C7.
(Note: ellipsis added.)
(Note: the online version of the review has the date March 29, 2018, and has the title “‘Happier?’ and ‘The Hope Circuit’ Reviews: How Smiles Were Packaged and Sold.”)

The book under review is:
Horowitz, Daniel. Happier?: The History of a Cultural Movement That Aspired to Transform America. New York: Oxford University Press, 2017.

“A Litigious, Protective Culture Has Gone Too Far”

(p. A1) SHOEBURYNESS, England — Educators in Britain, after decades spent in a collective effort to minimize risk, are now, cautiously, getting into the business of providing it.
. . .
Limited risks are increasingly cast by experts as an experience essential to childhood development, useful in building resilience and grit.
Outside the Princess Diana Playground in Kensington Gardens in London, which attracts more than a million visitors a year, a placard informs parents that risks have been “intentionally provided, so that your child can develop an appreciation of risk in a controlled play environment rather than taking similar risks in an uncontrolled and unregulated wider world.”
This view is tinged with nostalgia for an earlier Britain, in which children were tougher and more self-reliant. It resonates both with right-wing tabloids, which see it as a corrective to the cosseting of a liberal nanny state; and with progressives, drawn to a freer and more natural childhood.
. . .
(p. A12) Britain is one of a number of countries where educators and regulators say a litigious, protective culture has gone too far, leaching healthy risks out of childhood. Guidelines on play from the government agency that oversees health and safety issues in Britain state that “the goal is not to eliminate risk.”

For the full story, see:
Ellen Barry. “In Britain, Learning to Accept Risk, and the Occasional ‘Owie’.” The New York Times, First Section (Sunday, March 11, 2018): A1 & A12.
(Note: ellipses added.)
(Note: the online version of the story has the date March 10, 2018, and has the title “In Britain’s Playgrounds, ‘Bringing in Risk’ to Build Resilience.”)

Brain as Computer “Is a Bad Metaphor”

(p. A13) In “The Biological Mind: How Brain, Body, and Environment Collaborate to Make Us Who We Are,” Mr. Jasanoff, the director of the MIT Center for Neurobiological Engineering, presents a lucid primer on current brain science that takes the form of a passionate warning about its limitations. He argues that the age of popular neurohype has persuaded many of us to identify completely with our brains and to misunderstand the true nature of these marvelous organs.
We hear constantly, for example, that the brain is a computer. This is a bad metaphor, Mr. Jasanoff insists. Computers run on electricity, so we concentrate on the electrical activity within the brain; yet there is also chemical and hormonal signaling, for which there are no good computing analogies.

For the full review, see:
Steven Poole. “BOOKSHELF; Identify Your Self.” The Wall Street Journal (Friday, April 6, 2018): A13.
(Note: the online version of the review has the date April 5, 2018, and has the title “BOOKSHELF; ‘The Biological Mind’ Review: Identify Your Self.”)

The book under review is:
Jasanoff, Alan. The Biological Mind: How Brain, Body, and Environment Collaborate to Make Us Who We Are. New York: Basic Books, 2018.