“I’d Rather Be Optimistic and Wrong than Pessimistic and Right”

(p. A17) There is no question that Tesla’s culture is different from that of conventional automakers or even other Silicon Valley companies — . . . . That is largely by Mr. Musk’s design, and certainly reflects his outsize presence. His web appearance late Thursday [Sept. 6, 2018] was the latest evidence.
He was the guest of the comedian Joe Rogan, an advocate for legalizing marijuana, and the repartee included an exchange over what Mr. Musk was smoking.
“Is that a joint, or is it a cigar?” Mr. Musk asked after his host took out a large joint and lit it up.
“It’s marijuana inside of tobacco,” Mr. Rogan replied, and he asked if Mr. Musk had ever had it.
“Yeah, I think I tried one once,” he replied, laughing.
The comedian then asked if smoking on air would cause issues with stockholders, to which Mr. Musk responded, “It’s legal, right?” He then proceeded to take a puff. Marijuana is legal for medical and recreational use in California, where the interview was recorded.
After Mr. Musk announced on Aug. 7 that he intended to take Tesla private at $420 a share, there was speculation that the figure was chosen because “420” is a code for marijuana in the drug subculture.
In an interview with The New York Times while the gambit was still in play, Mr. Musk didn’t deny a connection. But he did try to clarify his state of mind in hatching the plan — and the shortcomings of mind-altering.
“It seemed like better karma at $420 than at $419,” he said. “But I was not on weed, to be clear. Weed is not helpful for productivity. There’s a reason for the word ‘stoned.’ You just sit there like a stone on weed.”
. . .
If he is feeling any insecurity, it was not reflected in his webcast with Mr. Rogan. He appeared at ease, sipping whiskey, and spoke, at one point, about artificial intelligence and how it could not be controlled.
“You kind of have to be optimistic about the future,” Mr. Musk said. “There’s no point in being pessimistic. I’d rather be optimistic and wrong than pessimistic and right.”

For the full story, see:
Neal E. Boudette. “Tesla Stock Dips As Musk Puffs On … What?” The New York Times (Saturday, Sept. 8, 2018): A1 & A17.
(Note: ellipses in quotes, and bracketed date, added; ellipsis in title, in original.)
(Note: the online version of the story has the date Sept. 7, 2018, and has the title “Tesla Shaken by a Departure and What Elon Musk Was Smoking.”)

Technologies Can Offer “Extraordinary Learning” Where “Children’s Interests Turn to Passion”

(p. B1) The American Academy of Pediatrics once recommended parents simply limit children’s time on screens. The association changed those recommendations in 2016 to reflect profound differences in levels of interactivity between TV, on which most previous research was based, and the devices children use today.
Where previous guidelines described all screen time for (p. B4) young children in terms of “exposure,” as if screen time were a toxic substance, new guidance allows for up to an hour a day for children under 5 and distinguishes between different kinds of screen use–say, FaceTime with Grandma versus a show on YouTube.
. . .
Instead of enforcing time-based rules, parents should help children determine what they want to do–consume and create art, marvel at the universe–and make it a daily part of screen life, says Anya Kamenetz, a journalist and author of the coming book “The Art of Screen Time–How Your Family Can Balance Digital Media and Real Life.”
In doing so, parents can offer “extraordinary learning” experiences that weren’t possible before such technology came along, says Mimi Ito, director of the Connected Learning Lab at the University of California, Irvine and a cultural anthropologist who has studied how children actually use technology for over two decades.
“Extraordinary learning” is what happens when children’s interests turn to passion, and a combination of tech and the internet provides a bottomless well of tools, knowledge and peers to help them pursue these passions with intensity characteristic of youth.
It’s about more than parents spending time with children. It includes steering them toward quality and letting them–with breaks for stretching and visual relief, of course–dive deep without a timer.
There are many examples of such learning, whether it is children teaching themselves to code with the videogame Minecraft or learning how to create music and shoot videos. Giving children this opportunity allows them to learn at their own, often-accelerated pace.

For the full commentary, see:
Christopher Mims. “KEYWORDS; Not All Screen Time Is Equal. Screen Time Isn’t Toxic After All.” The Wall Street Journal (Monday, Jan. 22, 2018): B1 & B4.
(Note: ellipsis added.)
(Note: the online version of the commentary was last updated Jan. 22, 2018, and has the title “KEYWORDS; What If Children Should Be Spending More Time With Screens?”)

The book mentioned above is:
Kamenetz, Anya. The Art of Screen Time: How Your Family Can Balance Digital Media and Real Life. New York: PublicAffairs, 2018.

Zuckerberg Calls Musk “Pretty Irresponsible” on A.I. “Doomsday” Fears

(p. 1) SAN FRANCISCO — Mark Zuckerberg thought his fellow Silicon Valley billionaire Elon Musk was behaving like an alarmist.
Mr. Musk, the entrepreneur behind SpaceX and the electric-car maker Tesla, had taken it upon himself to warn the world that artificial intelligence was “potentially more dangerous than nukes” in television interviews and on social media.
So, on Nov. 19, 2014, Mr. Zuckerberg, Facebook’s chief executive, invited Mr. Musk to dinner at his home in Palo Alto, Calif. Two top researchers from Facebook’s new artificial intelligence lab and two other Facebook executives joined them.
As they ate, the Facebook contingent tried to convince Mr. Musk that he was wrong. But he wasn’t budging. “I genuinely believe this is dangerous,” Mr. Musk told the table, according to one of the dinner’s attendees, Yann LeCun, the researcher who led Facebook’s A.I. lab.
Mr. Musk’s fears of A.I., distilled to their essence, were simple: If we create machines that are smarter than humans, they could turn against us. (See: “The Terminator,” “The Matrix,” and “2001: A Space Odyssey.”) Let’s for once, he was saying to the rest of the tech industry, consider the unintended consequences of what we are creating before we unleash it on the world.
. . .
(p. 6) Since their dinner three years ago, the debate between Mr. Zuckerberg and Mr. Musk has turned sour. Last summer, in a live Facebook video streamed from his backyard as he and his wife barbecued, Mr. Zuckerberg called Mr. Musk’s views on A.I. “pretty irresponsible.”
Panicking about A.I. now, so early in its development, could threaten the many benefits that come from things like self-driving cars and A.I. health care, he said.
“With A.I. especially, I’m really optimistic,” Mr. Zuckerberg said. “People who are naysayers and kind of try to drum up these doomsday scenarios — I just, I don’t understand it.”

For the full story, see:
Cade Metz. “Moguls and Killer Robots.” The New York Times, SundayBusiness Section (Sunday, June 10, 2018): 1 & 6.
(Note: ellipsis added.)
(Note: the online version of the story has the date June 9, 2018, and has the title “Mark Zuckerberg, Elon Musk and the Feud Over Killer Robots.”)

Rats, Mice, and Humans Fail to Ignore Sunk Costs

(p. D6) Suppose that, seeking a fun evening out, you pay $175 for a ticket to a new Broadway musical. Seated in the balcony, you quickly realize that the acting is bad, the sets are ugly and no one, you suspect, will go home humming the melodies.
Do you head out the door at the intermission, or stick it out for the duration?
Studies of human decision-making suggest that most people will stay put, even though money spent in the past logically should have no bearing on the choice.
This “sunk cost fallacy,” as economists call it, is one of many ways that humans allow emotions to affect their choices, sometimes to their own detriment. But the tendency to factor past investments into decision-making is apparently not limited to Homo sapiens.
In a study published on Thursday [July 12, 2018] in the journal Science, investigators at the University of Minnesota reported that mice and rats were just as likely as humans to be influenced by sunk costs.
The more time they invested in waiting for a reward — in the case of the rodents, flavored pellets; in the case of the humans, entertaining videos — the less likely they were to quit the pursuit before the delay ended.
“Whatever is going on in the humans is also going on in the nonhuman animals,” said A. David Redish, a professor of neuroscience at the University of Minnesota and an author of the study.
This cross-species consistency, he and others said, suggested that in some decision-making situations, taking account of how much has already been invested might pay off.

For the full story, see:
Erica Goode. “‘Sunk Cost Fallacy’ Claims More Victims.” The New York Times (Tuesday, July 17, 2018): D6.
(Note: bracketed date added.)
(Note: the online version of the story has the date July 12, 2018, and has the title “Mice Don’t Know When to Let It Go, Either.”)
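
The forward-looking logic that makes this a “fallacy” can be made concrete with a small illustrative calculation (my own sketch, with made-up numbers, not anything from the study): at intermission the $175 is gone whether you stay or leave, so only the value each option offers from that point on should matter.

def best_choice(options):
    # Compare only the value each option offers from this point forward.
    return max(options, key=lambda option: option[1])

sunk_ticket_price = 175  # already spent under either choice (hypothetical)
options = [
    ("stay for the second act", -10),       # sitting through a bad show
    ("leave and salvage the evening", 20),  # value of the best alternative
]

print(best_choice(options))  # ('leave and salvage the evening', 20)

Subtracting the sunk $175 from both options shifts both values equally, so it can never change which option wins; that is the sense in which money spent in the past logically should have no bearing on the choice.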

Human Intelligence Helps A.I. Work Better

(p. B3) A recent study at the M.I.T. Media Lab showed how biases in the real world could seep into artificial intelligence. Commercial software is nearly flawless at telling the gender of white men, researchers found, but not so for darker-skinned women.
And Google had to apologize in 2015 after its image-recognition photo app mistakenly labeled photos of black people as “gorillas.”
Professor Nourbakhsh said that A.I.-enhanced security systems could struggle to determine whether a nonwhite person was arriving as a guest, a worker or an intruder.
One way to parse the system’s bias is to make sure humans are still verifying the images before responding.
“When you take the human out of the loop, you lose the empathetic component,” Professor Nourbakhsh said. “If you keep humans in the loop and use these systems, you get the best of all worlds.”

For the full story, see:
Paul Sullivan. “WEALTH MATTERS; Can Artificial Intelligence Keep Your Home Secure?” The New York Times (Saturday, June 30, 2018): B3.
(Note: the online version of the story has the date June 29, 2018.)

The “recent study” mentioned above is:
Buolamwini, Joy, and Timnit Gebru. “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” Proceedings of Machine Learning Research 81 (2018): 1-15.
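
Professor Nourbakhsh’s point about keeping “humans in the loop” corresponds to a common design pattern in which uncertain model outputs are routed to a person before the system acts. The sketch below is my own illustration of that pattern, with a made-up confidence threshold and labels; it is not drawn from any system discussed in the article.

from dataclasses import dataclass

@dataclass
class Prediction:
    label: str         # e.g. "guest", "worker", "unknown person"
    confidence: float  # 0.0 to 1.0, as reported by a hypothetical model

CONFIDENCE_THRESHOLD = 0.90  # made-up value; would be tuned in practice

def handle(prediction):
    # Act automatically only on confident predictions; everything else
    # is queued for a person to verify before the system responds.
    if prediction.confidence >= CONFIDENCE_THRESHOLD:
        return "auto: treat arrival as " + prediction.label
    return "queued for human review"

print(handle(Prediction("guest", 0.97)))           # auto: treat arrival as guest
print(handle(Prediction("unknown person", 0.55)))  # queued for human review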

Earned Income Matters More Than Equal Income

(p. A13) The concept of a universal basic income, or UBI, has become part of the moral armor of Silicon Valley moguls who want a socially conscious defense against the charge that technology is making humanity obsolete.
. . .
We need policies that encourage job creation and working, not policies that pay people not to work.
In the mid-1960s, about 5% of men aged 25 to 54 were jobless. For 40 years that share has risen, and for much of the past decade the rate has remained over 15%. Suicide, divorce and opioid abuse are all associated with nonemployment, and many facts suggest that the misery of joblessness is far worse than that of a low-paying job. According to the most recent data, only 7% of working men in households earning less than $35,000 report being dissatisfied with their lives. But that share soars to 18% among the nonemployed of all incomes. This suggests that promoting employment is more important than reducing inequality.
. . . 50 years of evidence about labor supply in the U.S. suggests that giving people money will lead them to work less.
The Negative Income Tax experiments of the 1970s–when poorer households in a number of states received direct cash payments to keep them at a minimum income–are the closest America has come to a UBI. But they did not show “minimal impact on work,” as Mr. Yang suggests. Rather, they produced a quite significant work-hours reduction of between 5% and 25%, as well as “employment rate reductions . . . from about 1 to 10 percentage points,” according to one capable study.

For the full review, see:
Edward Glaeser. “BOOKSHELF; ‘Give People Money’ and ‘The War on Normal People’ Review: The Cure for Poverty? A guaranteed income does nothing to address the misery of joblessness, nor the associated plagues of divorce, opioid abuse and suicide.” The Wall Street Journal (Tuesday, July 10, 2018): A13.
(Note: first two ellipses added; third ellipsis in original.)
(Note: the online version of the review has the date July 9, 2018, and has the title “BOOKSHELF; ‘Give People Money’ and ‘The War on Normal People’ Review: The Cure for Poverty? A guaranteed income does nothing to address the misery of joblessness, nor the associated plagues of divorce, opioid abuse and suicide.”)

“Meditation Is Demotivating”

(p. 6) . . . on the face of it, mindfulness might seem counterproductive in a workplace setting. A central technique of mindfulness meditation, after all, is to accept things as they are. Yet companies want their employees to be motivated. And the very notion of motivation — striving to obtain a more desirable future — implies some degree of discontentment with the present, which seems at odds with a psychological exercise that instills equanimity and a sense of calm.
To test this hunch, we recently conducted five studies, involving hundreds of people, to see whether there was a tension between mindfulness and motivation. As we report in a forthcoming article in the journal Organizational Behavior and Human Decision Processes, we found strong evidence that meditation is demotivating.

For the full commentary, see:
Kathleen D. Vohs and Andrew C. Hafenbrack. “GRAY MATTER; Don’t Meditate at Work.” The New York Times, SundayReview Section (Sunday, June 17, 2018): 6.
(Note: ellipsis added.)
(Note: the online version of the commentary has the date June 14, 2018, and has the title “GRAY MATTER; Hey Boss, You Don’t Want Your Employees to Meditate.”)

The article by Hafenbrack and Vohs, mentioned above, is:
Hafenbrack, Andrew C., and Kathleen D. Vohs. “Mindfulness Meditation Impairs Task Motivation but Not Performance.” Organizational Behavior and Human Decision Processes 147 (July 2018): 1-15.

“Infatuation with Deep Learning May Well Breed Myopia . . . Overinvestment . . . and Disillusionment”

(p. B1) For the past five years, the hottest thing in artificial intelligence has been a branch known as deep learning. The grandly named statistical technique, put simply, gives computers a way to learn by processing vast amounts of data.
. . .
But now some scientists are asking whether deep learning is really so deep after all.
In recent conversations, online comments and a few lengthy essays, a growing number of A.I. experts are warning that the infatuation with deep learning may well breed myopia and overinvestment now — and disillusionment later.
“There is no real intelligence there,” said Michael I. Jordan, a professor at the University of California, Berkeley, and the author of an essay published in April intended to temper the lofty expectations surrounding A.I. “And I think that trusting these brute force algorithms too much is a faith misplaced.”
The danger, some experts warn, is (p. B4) that A.I. will run into a technical wall and eventually face a popular backlash — a familiar pattern in artificial intelligence since that term was coined in the 1950s. With deep learning in particular, researchers said, the concerns are being fueled by the technology’s limits.
Deep learning algorithms train on a batch of related data — like pictures of human faces — and are then fed more and more data, which steadily improve the software’s pattern-matching accuracy. Although the technique has spawned successes, the results are largely confined to fields where those huge data sets are available and the tasks are well defined, like labeling images or translating speech to text.
The technology struggles in the more open terrains of intelligence — that is, meaning, reasoning and common-sense knowledge. While deep learning software can instantly identify millions of words, it has no understanding of a concept like “justice,” “democracy” or “meddling.”
Researchers have shown that deep learning can be easily fooled. Scramble a relative handful of pixels, and the technology can mistake a turtle for a rifle or a parking sign for a refrigerator.
In a widely read article published early this year on arXiv.org, a site for scientific papers, Gary Marcus, a professor at New York University, posed the question: “Is deep learning approaching a wall?” He wrote, “As is so often the case, the patterns extracted by deep learning are more superficial than they initially appear.”

For the full story, see:
Steve Lohr. “Researchers Seek Smarter Paths to A.I.” The New York Times (Thursday, June 21, 2018): B1 & B4.
(Note: ellipses added.)
(Note: the online version of the story has the date June 20, 2018, and has the title “Is There a Smarter Path to Artificial Intelligence? Some Experts Hope So.” The June 21st date is the publication date in my copy of the National Edition.)

The essay by Jordan, mentioned above, is:
Jordan, Michael I. “Artificial Intelligence – the Revolution Hasn’t Happened Yet.” Medium.com, April 18, 2018.

The manuscript by Marcus, mentioned above, is:
Marcus, Gary. “Deep Learning: A Critical Appraisal.” arXiv.org, Jan. 2, 2018.
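
The fragility the article describes, in which scrambling a relative handful of pixels flips the label, is usually demonstrated with adversarial perturbations. The sketch below is my own toy illustration of the idea using a tiny hand-built linear classifier (not a deep network, and not any system from the article): a uniform per-pixel nudge far smaller than the spread of the input is enough to change the predicted class.

# Toy sketch of "scramble a few pixels" fragility (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 100))   # hypothetical weights: 2 classes, 100 "pixels"
x = rng.normal(size=100)        # one flattened input

def predict(v):
    return int(np.argmax(W @ v))  # class with the higher score

orig = predict(x)
other = 1 - orig

# Direction in input space that most quickly favors the other class,
# and the smallest uniform per-pixel step that flips the prediction.
direction = W[other] - W[orig]
margin = float((W[orig] - W[other]) @ x)
step = margin / float(np.abs(direction).sum()) + 1e-9
x_adv = x + step * np.sign(direction)

print(orig, predict(x_adv))   # the two labels differ
print(round(step, 4))         # each "pixel" moved by only this much

Gradient-based attacks on real image classifiers work on the same principle, though they are far more elaborate than this two-class toy.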

The Diversity That Matters Most Is Diversity of Thought

(p. A15) If you want anyone to pay attention to you in meetings, don’t ever preface your opposition to a proposal by saying: “Just to play devil’s advocate . . .” If you disagree with something, just say it and hold your ground until you’re convinced otherwise. There are many such useful ideas in Charlan Nemeth’s “In Defense of Troublemakers,” her study of dissent in life and the workplace. But if this one alone takes hold, it could transform millions of meetings, doing away with all those mushy, consensus-driven hours wasted by people too scared of disagreement or power to speak truth to gibberish. Not only would better decisions get made, but the process of making them would vastly improve.
. . .
In the latter part of her book, Ms. Nemeth explores in more detail how dissent improves the way in which groups think. She is ruthless toward conventional “brainstorming,” which tends toward the uncritical accumulation of bad ideas rather than the argumentative heat that forges better ideas. It’s only through criticism that concepts receive proper scrutiny. “Repeatedly we find that dissent has value, even when it is wrong, even when we don’t like the dissenter, and even when we are not convinced of his position,” she writes. “Dissent . . . enables us to think more independently” and “also stimulates thought that is open, divergent, flexible, and original.”
. . .
Ms. Nemeth’s punchy book also has an invaluable section on diversity in groups. All too often, she writes, in pursuit of diversity we focus on everything but the way people think. We look at a group’s gender, color or experience, and once the palette looks right declare it diverse. But you can have all of that and still have a group that thinks the same and reinforces a wrong-headed consensus.
By contrast, you can have a group that is demographically homogeneous yet violently heterogeneous in the way it thinks. The kind of diversity that leads to well-informed decisions is not necessarily the kind of diversity that gives the appearance of social justice. That will be a hard message for many organizations to swallow. But as with many of the arguments that Ms. Nemeth makes in her book, it is one that she gamely delivers and that all managers interested in the quality and integrity of their decision-making would do well to heed.

For the full review, see:
Philip Delves Broughton. “BOOKSHELF; Rocking The Boat.” The Wall Street Journal (Thursday, May 9, 2018): A15.
(Note: ellipsis internal to a paragraph, in original; ellipses between paragraphs, added.)
(Note: the online version of the review has the date May 10, 2018, and has the title “BOOKSHELF; ‘In Defense of Troublemakers’ Review: Rocking the Boat.”)

The book under review is:
Nemeth, Charlan. In Defense of Troublemakers: The Power of Dissent in Life and Business. New York: Basic Books, 2018.

A.I. “Will Never Match the Creativity of Human Beings or the Fluidity of the Real World”

(p. A21) If you read Google’s public statement about Google Duplex, you’ll discover that the initial scope of the project is surprisingly limited. It encompasses just three tasks: helping users “make restaurant reservations, schedule hair salon appointments, and get holiday hours.”
Schedule hair salon appointments? The dream of artificial intelligence was supposed to be grander than this — to help revolutionize medicine, say, or to produce trustworthy robot helpers for the home.
The reason Google Duplex is so narrow in scope isn’t that it represents a small but important first step toward such goals. The reason is that the field of A.I. doesn’t yet have a clue how to do any better.
. . .
The narrower the scope of a conversation, the easier it is to have. If your interlocutor is more or less following a script, it is not hard to build a computer program that, with the help of simple phrase-book-like templates, can recognize a few variations on a theme. (“What time does your establishment close?” “I would like a reservation for four people at 7 p.m.”) But mastering a Berlitz phrase book doesn’t make you a fluent speaker of a foreign language. Sooner or later the non sequiturs start flowing.
. . .
To be fair, Google Duplex doesn’t literally use phrase-book-like templates. It uses “machine learning” techniques to extract a range of possible phrases drawn from an enormous data set of recordings of human conversations. But the basic problem remains the same: No matter how much data you have and how many patterns you discern, your data will never match the creativity of human beings or the fluidity of the real world. The universe of possible sentences is too complex. There is no end to the variety of life — or to the ways in which we can talk about that variety.
. . .
Today’s dominant approach to A.I. has not worked out. Yes, some remarkable applications have been built from it, including Google Translate and Google Duplex. But the limitations of these applications as a form of intelligence should be a wake-up call. If machine learning and big data can’t get us any further than a restaurant reservation, even in the hands of the world’s most capable A.I. company, it is time to reconsider that strategy.

For the full commentary, see:
Gary Marcus and Ernest Davis. “A.I. Is Harder Than You Think.” The New York Times (Saturday, May 19, 2018): A21.
(Note: ellipses added.)
(Note: the online version of the commentary has the date May 18, 2018.)
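
The contrast the authors draw between phrase-book-like templates and open-ended conversation can be made concrete. The sketch below is my own illustration of the narrow, scripted approach (the commentary itself notes that Google Duplex does not literally work this way): a few hand-written patterns cover the restaurant-booking script, and anything off the script falls through to a canned apology, which is the non sequitur problem in miniature.

# Sketch of narrow, template-style "understanding" (my illustration;
# not how Google Duplex is implemented).
import re

TEMPLATES = [
    (re.compile(r"what time do(?:es)? (?:you|your \w+) close", re.I),
     "We close at 10 p.m."),
    (re.compile(r"reservation for (\w+) (?:people|person) at ([\d:]+ ?[ap]\.?m\.?)", re.I),
     "Booked: a table for {0} at {1}"),
]

def reply(utterance):
    for pattern, response in TEMPLATES:
        match = pattern.search(utterance)
        if match:
            return response.format(*match.groups())
    return "Sorry, I didn't understand that."  # off-script input goes nowhere

print(reply("What time does your establishment close?"))
print(reply("I would like a reservation for four people at 7 p.m."))
print(reply("Do you think the chef is having a good day?"))

The last line is the point: the moment the caller wanders outside the script, the system has nothing to fall back on, no matter how many templates are added.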

Philosopher Argued Artificial Intelligence Would Never Reach Human Intelligence

(p. A28) Professor Dreyfus became interested in artificial intelligence in the late 1950s, when he began teaching at the Massachusetts Institute of Technology. He often brushed shoulders with scientists trying to turn computers into reasoning machines.
. . .
Inevitably, he said, artificial intelligence ran up against something called the common-knowledge problem: the vast repository of facts and information that ordinary people possess as though by inheritance, and can draw on to make inferences and navigate their way through the world.
“Current claims and hopes for progress in models for making computers intelligent are like the belief that someone climbing a tree is making progress toward reaching the moon,” he wrote in “Mind Over Machine: The Power of Human Intuition and Expertise in the Era of the Computer” (1985), a book he collaborated on with his younger brother Stuart, a professor of industrial engineering at Berkeley.
His criticisms were greeted with intense hostility in the world of artificial intelligence researchers, who remained confident that success lay within reach as computers grew more powerful.
When that did not happen, Professor Dreyfus found himself vindicated, doubly so when research in the field began incorporating his arguments, expanded upon in a second edition of “What Computers Can’t Do” in 1979 and “What Computers Still Can’t Do” in 1992.
. . .
For his 2006 book “Philosophy: The Latest Answers to the Oldest Questions,” Nicholas Fearn broached the topic of artificial intelligence in an interview with Professor Dreyfus, who told him: “I don’t think about computers anymore. I figure I won and it’s over: They’ve given up.”

For the full obituary, see:
William Grimes. “Hubert L. Dreyfus, Who Put Computing In Its Place, Dies at 87.” The New York Times (Wednesday, May 3, 2017): A28.
(Note: ellipses added.)
(Note: the online version of the obituary has the date May 2, 2017, and has the title “Hubert L. Dreyfus, Philosopher of the Limits of Computers, Dies at 87.”)

Dreyfus’s last book on the limits of artificial intelligence was:
Dreyfus, Hubert L. What Computers Still Can’t Do: A Critique of Artificial Reason. Cambridge, MA: The MIT Press, 1992.