To Be Happy, “We Need to Have Goals”

(p. B8) A little over a year ago, I drove home from the airport with the windows down and the radio on full blast after filming the last scenes for the Netflix docu-series “The Innocent Man.” I was so proud of the work I’d done investigating two wrongful murder convictions in a small city in Oklahoma in the 1980s. This was work that mattered, and I was thrilled to be a part of it.

A few days later, I sat in my truck and cried. An empty work schedule yawned before me, and I was sure that my most meaningful achievement was in my rearview mirror.

This wave of hopelessness has a name: I was experiencing arrival fallacy.

“Arrival fallacy is this illusion that once we make it, once we attain our goal or reach our destination, we will reach lasting happiness,” said Tal Ben-Shahar, the Harvard-trained positive psychology expert who is credited with coining the term.

. . .

To be clear, acknowledging the power of arrival fallacy does not mean we should settle for a life of mediocrity.

“We need to have goals,” Dr. Ben-Shahar said. “We need to think about the future.” And, he noted, we are also a “future-oriented” species. In fact, studies have shown that the mortality rate rises by 2 percent among men who retire right when they become eligible to collect Social Security, and that retiring early may lead to early death, even among those who are healthy when they do so. Purpose and meaning can generate satisfaction, which is part of the happiness equation, Dr. Gruman said.

So wait. Reaching a goal can make us unhappy, but setting goals makes us happy? It sounds like a conundrum, but it’s not if you plan correctly, Dr. Ben-Shahar said. His advice is to lay out multiple concurrent goals, both in and out of your work life.

For the full commentary, see:

A.C. Shilton. “Success Doesn’t Always Bring Happiness.” The New York Times (Monday, June 3, 2019): B8.

(Note: ellipsis added.)

(Note: the online version of the commentary has the date May 28, 2019, and has the title “You Accomplished Something Great. So Now What?”)

No Evidence Base that Smartphones Cause Anxiety and Depression in Teens

(p. B1) SAN FRANCISCO — It has become common wisdom that too much time spent on smartphones and social media is responsible for a recent spike in anxiety, depression and other mental health problems, especially among teenagers.

But a growing number of academic researchers have produced studies that suggest the common wisdom is wrong.

The latest research, published on Friday [January 17, 2020] by two psychology professors, combs through about 40 studies that have examined the link between social media use and both depression and anxiety among adolescents. That link, according to the professors, is small and inconsistent.

“There doesn’t seem to be an evidence base that would explain the level of panic and consternation around these issues,” said Candice L. Odgers, a professor at the University of California, Irvine, and the lead author of the paper, which was published in the Journal of Child Psychology and Psychiatry.

For the full story, see:

Nathaniel Popper. “The Menace of Screen Time Could Be More of a Mirage.” The New York Times (Saturday, January 18, 2020): B1 & B6.

(Note: bracketed date added.)

(Note: the online version of the story has the date Jan. 17, 2020, and has the title “Panicking About Your Kids’ Phones? New Research Says Don’t.”)

The Odgers paper, mentioned in the passage quoted above, is:

Odgers, Candice L., and Michaeline R. Jensen. “Annual Research Review: Adolescent Mental Health in the Digital Age: Facts, Fears, and Future Directions.” Journal of Child Psychology and Psychiatry (published first online Jan. 17, 2020).

Clint Eastwood’s “Stubborn Libertarian Streak”

(p. C6) Though he acts bravely and responsibly at a moment of crisis, Jewell (Paul Walter Hauser) isn’t entirely a hero, and “Richard Jewell” doesn’t quite belong in the gallery with “Sully” and “American Sniper,” Eastwood’s other recent portraits of exceptional Americans in trying circumstances.

. . .

Eastwood, Ray and Hauser (who is nothing short of brilliant) cleverly invite the audience to judge Jewell the way his tormentors eventually will: on the basis of prejudices we might not even admit to ourselves. He’s overweight. He lives with his mother, Bobi (Kathy Bates). He has a habit of taking things too seriously — like his job as a campus police officer at a small liberal-arts college — and of trying a little too hard to fit in. He treats members of the Atlanta Police Department and the F.B.I. like his professional peers, and seems blind to their condescension. “I’m law enforcement too,” he says to the agents who are investigating him as a potential terrorist, with an earnestness that is both comical and pathetic.

Most movies, if they bothered with someone like Jewell at all, would make fun of him or relegate him to a sidekick role. Eastwood, instead, makes the radical decision to respect him as he is, and to show how easily both his everyday shortcomings and his honesty and decency are distorted and exploited by the predators who descend on him at what should be his moment of glory.

. . .

Eastwood has always had a stubborn libertarian streak, and a fascination with law enforcement that, like Jewell’s, is shadowed by ambivalence and outright disillusionment.

. . .

“Richard Jewell” is a rebuke to institutional arrogance and a defense of individual dignity, sometimes clumsy in its finger-pointing but mostly shrewd and sensitive in its effort to understand its protagonist and what happened to him. The political implications of his ordeal are interesting to contemplate, but its essential nature is clear enough. He was bullied.

For the full film review, see:

A.O. Scott. “The Jagged Shrapnel Still Flies Years Later.” The New York Times (Friday, December 13, 2019): C6.

(Note: ellipses added.)

(Note: the online version of the film review was last updated December 23 [sic], 2019, and has the title “‘Richard Jewell’ Review: The Wrong Man.”)

Does Musk Want to Reach Mars or Conspicuously Consume Real Estate?

In my book Openness to Creative Destruction, I describe and praise those whom I call “project entrepreneurs.” These are innovative entrepreneurs, like Walt Disney and Cyrus Field, who are motivated primarily by a desire to bring their project into the world, rather than a desire for conspicuous personal consumption. I have been unsure whether to count Elon Musk as a project entrepreneur. The evidence quoted below suggests the answer is “no.”

(p. M1) Over the last seven years, Mr. Musk and limited-liability companies tied to him have amassed a cluster of six houses on two streets in the “lower” and “mid” areas of the Bel-Air neighborhood of Los Angeles, a celebrity-filled, leafy enclave near the Hotel Bel-Air.

Those buys—plus a grand, 100-year-old estate in Northern California near the headquarters of Tesla, the electric car concern he heads—mean Mr. Musk or LLCs with ties to him have spent around $100 million on seven properties.

For the full story, see:

Nancy Keates. “Elon Musk’s Big Buyout.” The Wall Street Journal (Friday, December 6, 2019): M1 & M6.

(Note: the online version of the story has the date Dec. 5, 2019, and has the title “Elon Musk Buys Out the Neighbors.”)

My book, mentioned at the top, is:

Diamond, Arthur M., Jr. Openness to Creative Destruction: Sustaining Innovative Dynamism. New York: Oxford University Press, 2019.

A.I. Needs People to Set the Objectives

(p. A2) Although DeepMind’s AlphaZero can beat a grand master at computer chess, it would still bomb at Attie Chess—the version of the game played by my 3-year-old grandson Atticus. In Attie Chess, you throw all of the pieces into the wastebasket, pick each one up, try to put them on the board and then throw them all in the wastebasket again. This apparently simple physical task is remarkably challenging even for the most sophisticated robots.

But . . . there’s a more profound way in which human intelligence is different from artificial intelligence, and there’s another reason why Attie Chess may be important.

. . .

The basic technique is to give the computer millions of examples of games, images or previous judgments and to provide feedback. Which moves led to a high score? Which pictures did people label as dogs?

. . .

But people also can decide to change their objectives. A great judge can argue that slavery should be outlawed or that homosexuality should no longer be illegal. A great curator can make the case for an unprecedented new kind of art, like Cubism or Abstract Expressionism, that is very different from anything in the past. We invent brand new games and play them in new ways.

. . .

Indeed, the point of each new generation is to create new objectives—new games, new categories and new judgments. And yet, somehow, in a way that we don’t understand at all, we don’t merely slide into relativism. We can decide what is worth doing in a way that AI can’t.

. . .

. . . , we are the only creatures who can decide not only what we want but whether we should want it.

For the full commentary, see:

Alison Gopnik. “MIND & MATTER; What A.I. Is Still Far From Figuring Out.” The Wall Street Journal (Saturday, March 23, 2019): A2.

(Note: ellipses added.)

(Note: the online version of the commentary has the date March 20, 2019, and has the same title as the print version.)

Alison Gopnik’s comments, quoted above, are related to her paper:

Gopnik, Alison. “AIs Versus Four-Year-Olds.” In Possible Minds: Twenty-Five Ways of Looking at AI, edited by John Brockman. New York: Penguin Press, 2019, pp. 219-30.

“Our Creative Yield Increases with Age”

(p. C1) . . . precocious achievement is the exception, not the norm. The fact is, we mature and develop at different rates. All of us will have multiple cognitive peaks throughout our lives, and the talents and passions that we have to offer can emerge across a range of personal circumstances, not just in formal educational settings focused on a few narrow criteria of achievement. Late bloomers are everywhere once you know to look for them.

. . .

What about creativity and innovation? That realm must belong to the young, with their exuberance and fresh ideas, right? Not necessarily. For instance, the average age of scientists when they are doing work that eventually leads to a Nobel Prize is 39, according to a 2008 Northwestern University study. The average age of U.S. patent applicants is 47.

Our creative yield increases with age, says Elkhonon Goldberg, a clinical professor of neurology at New York University. Dr. Goldberg thinks that the brain’s right and left hemispheres are connected by a “salience network” that helps us to evaluate novel perceptions from the right side by comparing them to the stored images and patterns on our left side. Thus a child will have greater novel perceptions than a middle-aged adult but will lack the context to turn them into creative insights.

Take Ken Fisher, who today runs Fisher Investments, a stock fund with $100 billion under management and 50,000 customers. After graduating from high school, he flunked out of a junior college. “I had no particular direction,” he said. He went back to school to study forestry, hoping for a career outdoors, but switched to economics and got his degree in 1972. In his early 20s, he hung out his shingle as a financial adviser, following his father’s career. To bring in extra money, he took construction jobs, and he played slide guitar in a bar. But he also read and read: “Books about management and business—and maybe thirty trade magazines a month for years,” he says. By the time he reached his 30s, an idea had gelled that would make him his fortune. As he puts it, during that period of reflection, “I developed a theory about valuing companies that was a bit unconventional.”

For the full commentary, see:

Rich Karlgaard. “It’s Never Too Late to Start a Brilliant Career; Our obsession with early achievement shortchanges people of all ages. Research shows that our brains keep developing deep into adulthood and so do our capabilities.” The Wall Street Journal (Saturday, May 4, 2019): C1-C2.

(Note: ellipses added.)

(Note: the online version of the commentary has the date May 3, 2019, and has the same title as the print version.)

The passages quoted above are from a commentary that is adapted from:

Karlgaard, Rich. Late Bloomers: The Power of Patience in a World Obsessed with Early Achievement. New York: Currency, 2019.

The research by Elkhonon Goldberg, mentioned above, is described in:

Goldberg, Elkhonon. Creativity: The Human Brain in the Age of Innovation. New York: Oxford University Press, 2018.

How Drinking Coffee Makes Us Younger and More Open-Minded

(p. C2) . . . , if a baby monkey heard a new sound pattern many times, her neurons (brain cells) would adjust to respond more to that sound pattern. Older monkeys’ neurons didn’t change in the same way.

At least part of the reason for this lies in neurotransmitters, chemicals that help to connect one neuron to another. Young animals have high levels of “cholinergic” neurotransmitters that make the brain more plastic, easier to change. Older animals start to produce inhibitory chemicals that counteract the effect of the cholinergic ones. They actually actively keep the brain from changing.

. . .

In the new research, Jay Blundon and colleagues at St. Jude Children’s Research Hospital in Memphis, Tenn., tried to restore early-learning abilities to adult mice. As in the earlier experiments, they exposed the mice to a new sound and recorded whether their neurons changed in response. But this time the researchers tried making the adult mice more flexible by keeping the inhibitory brain chemicals from influencing the neurons.

In some studies, they actually changed the mouse genes so that the animals no longer produced the inhibitors in the same way. In others, they injected other chemicals that counteracted the inhibitors. (Caffeine seems to work in this way, by counteracting inhibitory neurotransmitters. That’s why coffee makes us more alert and helps us to learn.)

In all of these cases in the St. Jude study, the adult brains started to look like the baby brains.

For the full commentary, see:

Alison Gopnik. “MIND & MATTER; How to Get Old Brains to Think Like Young Ones.” The Wall Street Journal (Saturday, July 8, 2017): C2.

(Note: ellipses added.)

(Note: the online version of the commentary has the date July 7, 2017, and has the same title as the print version.)

The article co-authored by Jay Blundon and mentioned above is:

Blundon, Jay A., Noah C. Roy, Brett J. W. Teubner, Jing Yu, Tae-Yeon Eom, K. Jake Sample, Amar Pani, Richard J. Smeyne, Seung Baek Han, Ryan A. Kerekes, Derek C. Rose, Troy A. Hackett, Pradeep K. Vuppala, Burgess B. Freeman, and Stanislav S. Zakharenko. “Restoring Auditory Cortex Plasticity in Adult Mice by Restricting Thalamic Adenosine Signaling.” Science 356, no. 6345 (June 30, 2017): 1352-56.

Stalin’s “Despotism in Mass Bloodshed”

(p. A13) In the aftermath of Lenin’s death in January 1924, Joseph Stalin—already secretary-general of the Communist Party—emerged as the outright leader of the Soviet Union. “Right through 1927,” Stephen Kotkin notes, Stalin “had not appeared to be a sociopath in the eyes of those who worked most closely with him.” But by 1929-30, he “was exhibiting an intense dark side.” Mr. Kotkin’s “Stalin: Waiting for Hitler, 1929-1941,” the second volume of a planned three-volume biography, tracks the Soviet leader’s transformation during these crucial years. “Impatient with dictatorship,” Mr. Kotkin says, Stalin set out to forge “a despotism in mass bloodshed.”

The three central episodes of Mr. Kotkin’s narrative, all from the 1930s, are indeed violent and catastrophic, if in different ways: the forced collectivization of Soviet agriculture; the atrocities of the Great Terror, when Stalin “arrested and murdered immense numbers of loyal people”; and the rise of Adolf Hitler, the man who would become Stalin’s ally and then, as Mr. Kotkin puts it, his “principal nemesis.” In each case, as Mr. Kotkin shows, Stalin’s personal character—a combination of ruthlessness and paranoia—played a key role in the unfolding of events.

For the full review, see:

Joshua Rubenstein. “BOOKSHELF; The Turn to Tyranny; We may never know what degree of personal obsession, political calculation and ideological zeal drove Stalin to kill and persecute so many.” The Wall Street Journal (Wednesday, Nov. 1, 2017): A13.

(Note: the online version of the review has the date Oct. 31, 2017, and has the title “BOOKSHELF; Review: The Turn to Tyranny; We may never know what degree of personal obsession, political calculation and ideological zeal drove Stalin to kill and persecute so many.”)

The book under review is:

Kotkin, Stephen. Stalin: Volume 2: Waiting for Hitler, 1929-1941. New York: Penguin Press, 2017.

Manic Energy from Bipolar Disorder May Enable “Heights of Success”

(p. A17) Dr. Ronald R. Fieve, who was a pioneer in the prescription of lithium to treat mania and other mood disorders — while avowing that some gifted individuals, like Abraham Lincoln, Theodore Roosevelt and Winston Churchill, might have benefited from being bipolar — died on Jan. 2 [2018] at his home in Palm Beach, Fla.

. . .

He cited estimates that as many as one in 15 people experienced a manic episode during their lifetimes, and that bipolar disorder — characterized by swings from elation, hyperactivity and a decreased need for sleep to incapacitating depression — was often misclassified as schizophrenia or other illnesses, or undiagnosed altogether.

He cautioned, however, that some highly creative, exuberant and energetic people have derived benefits from the condition because they have what he called “a hypomanic edge.”

“I have found that some of the most gifted individuals in our society suffer from this condition — including many outstanding writers, politicians, business executives and scientists — where tremendous amounts of manic energy have enabled them to achieve their heights of success,” Dr. Fieve told a symposium in 1973.

But without proper treatment, he said, those individuals afflicted with manic depression “more often than not either go too ‘high’ or suddenly crash into a devastating depression that we only hear about after a successful suicide.”

In contrast to antidepressant drugs or electroshock treatments, he said, regular doses of lithium carbonate appeared to stabilize mood swings without cramping creativity, memory or personality.

. . .

Before it was approved to treat depression, lithium was found in the late 1940s to be potentially unsafe as a salt substitute. But Dr. Fieve pointed out that lithium had been found in natural mineral waters prescribed by Greek and Roman physicians 1,500 years earlier to treat what were then called manic insanity and melancholia.

For the full obituary, see:

Sam Roberts. “Dr. Ronald Fieve, Pioneer In Lithium, Is Dead at 87.” The New York Times (Wednesday, Jan. 17, 2018): A17.

(Note: ellipses, and bracketed year, added.)

(Note: the online version of the obituary has the date Jan. 12, 2018, and has the title “Dr. Ronald Fieve, 87, Dies; Pioneered Lithium to Treat Mood Swings.”)

Much of the “Intelligence” in Artificial Intelligence Is Human, Not Artificial

(p. B5) Everything we’re injecting artificial intelligence into—self-driving vehicles, robot doctors, the social-credit scores of more than a billion Chinese citizens and more—hinges on a debate about how to make AI do things it can’t, at present.

. . .

On one side of this debate are the proponents of “deep learning”—an approach that, since a landmark paper in 2012 by a trio of researchers at the University of Toronto, has exploded in popularity.

. . .

On the other side of this debate are researchers such as Gary Marcus, former head of Uber Technologies Inc.’s AI division and currently a New York University professor, who argues that deep learning is woefully insufficient for accomplishing the sorts of things we’ve been promised. It could never, for instance, be able to usurp all white collar jobs and lead us to a glorious future of fully automated luxury communism.

Dr. Marcus says that to get to “general intelligence”—which requires the ability to reason, learn on one’s own and build mental models of the world—will take more than what today’s AI can achieve.

“That they get a lot of mileage out of [deep learning] doesn’t mean that it’s the right tool for theory of mind or abstract reasoning,” says Dr. Marcus.

To go further with AI, “we need to take inspiration from nature,” says Dr. Marcus. That means coming up with other kinds of artificial neural networks, and in some cases giving them innate, pre-programmed knowledge—like the instincts that all living things are born with.

. . .

Until we figure out how to make our AIs more intelligent and robust, we’re going to have to hand-code into them a great deal of existing human knowledge, says Dr. Marcus. That is, a lot of the “intelligence” in artificial intelligence systems like self-driving software isn’t artificial at all. As much as companies need to train their vehicles on as many miles of real roads as possible, for now, making these systems truly capable will still require inputting a great deal of logic that reflects the decisions made by the engineers who build and test them.

For the full commentary, see:

Christopher Mims. “KEYWORDS; Should Artificial Intelligence Copy the Brain?” The Wall Street Journal (Saturday, October 26, 2017): B5.

(Note: ellipses added.)

(Note: the online version of the commentary has the same date as the print version, and has the title “KEYWORDS; Should Artificial Intelligence Copy the Human Brain?”)

Big, Frequent Meetings Are Unproductive and Crowd Out Deep Thought

(p. 7) To figure out why the workers in Microsoft’s device unit were so dissatisfied with their work-life balance, the organizational analytics team examined the metadata from their emails and calendar appointments. The team divided the business unit into smaller groups and looked for differences in the patterns between those where people were satisfied and those where they were unhappy.

It seemed as if the problem would involve something about after-hours work. But no matter how Ms. Klinghoffer and Mr. Fuller crunched the data, there weren’t any meaningful correlations to be found between groups that had a lot of tasks to do at odd times and those that were unhappy. Gut instincts about overwork just weren’t supported by the numbers.

The two kept iterating until something emerged in the data. People in Mr. Ostrum’s division were spending an awful lot of time in meetings: an average of 27 hours a week. That wasn’t so much more than the typical team at Microsoft. But what really distinguished those teams with low satisfaction scores from the rest was that their meetings tended to include a lot of people — 10 or 20 bodies arrayed around a conference table coordinating plans, as opposed to two or three people brainstorming ideas.

The issue wasn’t that people had to fly to China or make late-night calls. People who had taken jobs requiring that sort of commitment seemed to accept these things as part of the deal. The issue was that their managers were clogging their schedules with overcrowded meetings, reducing available hours for tasks that rewarded more focused concentration — thinking deeply about trying to solve a problem.

Data alone isn’t insight. But once the Microsoft executives had shaped the data into a form they could understand, they could better question employees about the source of their frustrations. Staffers’ complaints about spending evenings and weekends catching up with more solitary forms of work started to make more sense. Now it was clearer why the first cuts of the data didn’t reveal the problem. An engineer sitting down to do individual work for several hours on a Saturday afternoon probably wouldn’t bother putting it on her calendar, or create digital exhaust in the form of trading emails with colleagues during that time.

Anyone familiar with the office-drone lifestyle might scoff at what it took Microsoft to get here. Does it really take that much analytical firepower, and the acquisition of an entire start-up, to figure out that big meetings make people sad?

For the full story, see:

Neil Irwin. “How to Win at Winner-Take-All.” The New York Times, SundayBusiness Section (Sunday, June 15, 2019): 1 & 6-7.

(Note: the online version of the story has the date June 15, 2019, and has the title “The Mystery of the Miserable Employees: How to Win in the Winner-Take-All Economy.”)

The article quoted above is adapted from:

Irwin, Neil. How to Win in a Winner-Take-All World: The Definitive Guide to Adapting and Succeeding in High-Performance Careers. New York: St. Martin’s Press, 2019.