Good Scientific Questions Can Be Answered With Empirical Experiments: “In Science, Reality Rules”

(p. A17) . . . I hit it off with the legendary Columbia University physics professor and Nobel Prize winner I.I. Rabi, who discovered the basis for magnetic resonance imaging, among other techniques through which we access and harness the quantum world.

. . .

Naturally, our conversations often wandered across physics. I was full of theoretical ideas and quasi-philosophical speculations. Rabi pressed me—gently, with a twinkle in his eye, yet relentlessly—to describe their concrete meaning. In the process we often discovered that there wasn’t any!

But not always—and the questions that survived those dialogues were leaner and stronger. I internalized this experience, and since then my inner Rabi (he died in 1988) has been a wise, inspiring companion.

. . .

Fully worked-out answers to good scientific questions should include solid experimental prospects.

That is a surprisingly controversial view today, as some prominent philosophers of science promote a “post-empirical physics” that doesn’t require proof, or evidence. And there’s no doubt that physically inspired mathematics, or for that matter pure mathematics, can bring people great joy. But I lean toward Rabi’s attitude: In science, reality rules.

. . .

Another characteristic of most good questions is that the answer is just a little bit out of reach. It should not be too obvious, but it should not be utterly inaccessible either.

. . .

The foolproof way to find good questions is to come up with a lot of them and then throw out the ones that are too vague, too easy, too hard or too inconsequential.

For the full commentary, see:

Frank Wilczek. “WILCZEK’S UNIVERSE; Sifting for the Right Questions in Science.” The Wall Street Journal (Saturday, July 29, 2023): C4.

(Note: ellipses added.)

(Note: the online version of the commentary has the date July 28, 2023, and has the same title as the print version.)

All Conclusions in Science Are Open to Further Inquiry

(p. C3) Victory is often temporary. In December 2014, a nurse named Nina Pham contracted Ebola from a patient in Dallas. She was transferred to the National Institutes of Health in Bethesda, Md., and treated by a team led by Anthony Fauci, director of the National Institute of Allergy and Infectious Diseases.

When Ms. Pham was discharged, the cameras captured an indelible moment: Together with NIH Director Francis Collins, Dr. Fauci, dressed in a crisp white lab coat, walked her out with his arm draped over her shoulder. This conveyed a critical message at a time when public fear about the disease was widespread. “We would not be releasing Ms. Pham if we were not completely confident in the knowledge that she has fully recovered, is virus free and poses no public health threat,” an NIH statement read.

But scientific certainty often carries an asterisk. Six months later, doctors in Atlanta discovered that in some patients who survive, the Ebola virus could still be found hidden away in parts of the body. This did not indicate that they could transmit the disease, but it meant that they could no longer be declared “virus-free” with certainty. This episode demonstrated how quickly our knowledge about public health threats can alter. What we once thought was true for the Ebola virus had changed, and no doubt will continue to evolve.

For the full commentary, see:

Jeremy Brown. “What Past Crises Tell Us About the Coronavirus.” The Wall Street Journal (Saturday, Feb. 1, 2020 [sic]): C3.

(Note: the online version of the commentary was updated Jan. 31, 2020 [sic], and has the same title as the print version. In both the online and print versions, the first sentence quoted above is in bold font.)

“Linguistic Diversity Is Precious” Because Languages Are “Natural Experiments” in “Ways of Seeing, Understanding, and Living”

(p. A13) Linguistic variety is “often seen as a problem, the curse of Babel,” but for a linguist, New York City is a riotous collection of living specimens—a “greenhouse, not a graveyard.”  . . .  Mr. Perlin, who has a doctorate in linguistics, helps run the Endangered Language Alliance, which works to document such minority tongues.  . . .

The heart of “Language City” is portraits of individual New York-based speakers. Mr. Perlin writes about their work as well as his, capturing the grind of immigrant life with empathy, balance and wit.  . . .  “If the country was rich we would never leave,” says Husniya, a Wakhi speaker from bleak post-Soviet Tajikistan. But she savors the city’s entrepreneurial energy: “New York opened my eyes. It shapes you to be a human being, not dividing based on religion, face, or race, or anything.”

. . .

Wonderfully rich, “Language City” is in part an introduction to the diverse ways different languages work. Seke and other “evidential” languages, for example, have different grammatical forms to indicate how the speaker knows what she’s asserting—whether from observation or inference, hearsay or hunch. Other languages syntactically “tag the speaker’s surprise at unexpected information” or have a special temporal marking “just for things happening today.”

. . .

Yet linguistic diversity is precious, Mr. Perlin stresses, and should be celebrated, not just tolerated.  . . .  . . ., languages “represent thousands of natural experiments” that encode wildly different “ways of seeing, understanding, and living.” Constructed by generations of collective effort, they are invisible cathedrals bigger and more democratic than any building.

For the full review, see:

Timothy Farrington. “BOOKSHELF; The Words On the Street.” The Wall Street Journal (Friday, Feb. 23, 2024): A13.

(Note: ellipses added.)

(Note: the online version of the review has the date February 22, 2024, and has the title “BOOKSHELF; ‘Language City’ Review: The Words on the Street.”)

The book under review is:

Perlin, Ross. Language City: The Fight to Preserve Endangered Mother Tongues in New York. Washington, D.C.: Atlantic Monthly Press, 2024.

“Adoption of Singular ‘Gold Standard’ Models” Closes “Off Other Important Avenues of Inquiry”

(p. A15) Ubiquitous and persuasive, models . . . drive decisions—one reason why, in Ms. Thompson’s view, they require our urgent attention. She tells us that, as a graduate student studying North Atlantic storms, she noticed how different models predicted different overall effects and produced contradictory results.

. . .

The problem is that Model Land is easy to enter but difficult to escape. Having built “a beautiful internally consistent model,” Ms. Thompson writes, it can be “emotionally difficult to acknowledge that the initial assumptions on which the whole thing is built are literally not true.”

There are all sorts of ways that models can lead us astray. A small measurement error on an input can lead to wildly inaccurate forecasts—a phenomenon known as the Butterfly Effect. Fortunately, this type of uncertainty is often manageable. Far more problematic are what Ms. Thompson calls “unquantifiable unknowns”—things that are left out of a model’s calculation because they can’t be anticipated, such as the unexpected arrival of a transformative technology or the abrupt collapse of a robust market. It is not always true, she observes, that the data we have now will be relevant to the future—as traders discovered in the stock-market crash of 1987, when their models catastrophically failed.
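
(An aside, not part of the quoted review or of Ms. Thompson’s book: the short Python sketch below illustrates the kind of sensitivity the Butterfly Effect names. The chaotic logistic map stands in for a model, and the one-part-in-a-billion perturbation and 50-step horizon are arbitrary choices for the sketch.)

# Illustrative sketch only: a tiny error in a model's input can swamp its forecast.
# The "model" here is the chaotic logistic map; the 1e-9 perturbation is arbitrary.

def forecast(x0, steps=50, r=4.0):
    """Iterate the logistic map x -> r * x * (1 - x) from initial condition x0."""
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

measured = 0.300000000    # the input we think we measured
actual = 0.300000001      # the true input, off by one part in a billion

print(forecast(measured))  # forecast from the measured input
print(forecast(actual))    # forecast from the true input; after ~50 steps it
                           # bears no resemblance to the forecast above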

. . .  We may be inclined to regard models as objective expressions of truth, yet they are deliberately constructed interpretations, imbued with the values and viewpoints of the modelers—primarily, as Ms. Thompson notes, well-educated, middle-class individuals. During the pandemic, models “took more account of harms to some groups of people than others,” resulting in a “moral case” for lockdowns that was “partial and biased.” Modelers who worked from home—while others maintained the supply chain—often overlooked “all of the possible harms” of the actions their models were suggesting.  . . .

The promise and peril of models, Ms. Thompson recognizes, has deep resonance in biomedicine, where so-called model organisms, like yeast and zebrafish, have led to foundational insights and accelerated the development of therapeutics. At the same time, treatments that work brilliantly in Model Land often fail in people, devastating patients and disappointing drug developers. The search for improved disease models can be complicated when proponents of one model suppress research into alternative approaches, as the late journalist Sharon Begley documented in a powerful 2019 report. Ms. Thompson perceptively critiques the adoption of singular “gold standard” models, noting that the “solidification” of one set of assumptions can lock us into one way of thinking and close off other important avenues of inquiry.

For the full review, see:

David A. Shaywitz. “BOOKSHELF; Seduced By Numbers.” The Wall Street Journal (Wednesday, Dec. 28, 2022): A15.

(Note: ellipses added.)

(Note: the online version of the review has the date December 27, 2022, and has the title “BOOKSHELF; ‘Escape From Model Land’ Review: Seduced by Numbers.”)

The book under review is:

Thompson, Erica. Escape from Model Land: How Mathematical Models Can Lead Us Astray and What We Can Do About It. New York: Basic Books, 2022.

Sharon Begley’s “powerful” 2019 report, mentioned above, is:

Begley, Sharon. “The Maddening Saga of How an Alzheimer’s ‘Cabal’ Thwarted Progress toward a Cure for Decades.” STAT; Reporting from the Frontiers of Health and Medicine, Posted June 25, 2019. Available from https://www.statnews.com/2019/06/25/alzheimers-cabal-thwarted-progress-toward-cure/.

Economists’ Models of Growth and Inflation Predicted a Recession That Has Not Happened; So “Economists Can Learn a Huge, Healthy Dose of Humility”

(p. B1) Many economists spent early 2023 predicting a painful downturn, a view so widely held that some commentators started to treat it as a given. Inflation had spiked to the highest level in decades, and a range of forecasters thought that it would take a drop in demand and a prolonged jump in unemployment to wrestle it down.

Instead, the economy grew 3.1 percent last year, up from less than 1 percent in 2022 and faster than the average for the five years leading up to the pandemic.

. . .

(p. B3) . . . what is clear is that old models of how growth and inflation relate did not serve as accurate guides.

. . .

“It’s not like we understood the macro economy perfectly before, and this was a pretty unique time,” said Jason Furman, a Harvard economist and former Obama administration economic official who thought that lowering inflation would require higher unemployment. “Economists can learn a huge, healthy dose of humility.”

. . .

Many economists previously thought that a more marked slowdown was likely to be necessary to fully stamp out rapid inflation. Mr. Summers, for instance, predicted that it would take years of joblessness above 5 percent to wrestle price increases back under control.

“I was of the view that soft landings” were “the triumph of hope over experience,” Mr. Summers said. “This is looking like a case where hope has triumphed over experience.”

. . .

“I would have thought that it was an iron law that disinflation is painful,” said Laurence M. Ball, a Johns Hopkins economist who was an author of an influential 2022 paper that argued bringing down inflation would probably require driving up unemployment. “The broad lesson, which we never seem to completely learn, is that it’s very hard to forecast things and we shouldn’t be too confident, and especially when there’s a very weird, historic event like Covid.”

For the full story, see:

Jeanna Smialek and Ben Casselman. “How Experts Got It Wrong On Economy.” The New York Times (Saturday, January 27, 2024): B1 & B3.

(Note: ellipses added.)

(Note: the online version of the story has the date Jan. 26, 2024, and has the title “Economists Predicted a Recession. So Far They’ve Been Wrong.”)

Cleaning Pigeon Poop Off Their Antenna to Rule Out a Cause of Static

(p. B11) Arno A. Penzias, whose astronomical probes yielded incontrovertible evidence of a dynamic, evolving universe with a clear point of origin, confirming what became known as the Big Bang theory, died on Monday [January 22, 2024] in San Francisco.

. . .

In 1964, while preparing the antenna to measure the properties of the Milky Way galaxy, Dr. Penzias and Dr. Wilson, another young radio astronomer who was new to Bell Labs, encountered a persistent, unexplained hiss of radio waves that seemed to come from everywhere in the sky, detected no matter which way the antenna was pointed. Perplexed, they considered various sources of the noise. They thought they might be picking up radar, or noise from New York City, or radiation from a nuclear explosion. Or might pigeon droppings be the culprit?

Examining the antenna, Dr. Penzias and Dr. Wilson “subjected its electric circuits to scrutiny comparable to that used in preparing a manned spacecraft,” Walter Sullivan wrote in The New York Times in 1965. Yet the mysterious hiss remained.

The cosmological underpinnings of the noise were finally explained with help from physicists at Princeton University, who had predicted that there might be radiation coming from all directions left over from the Big Bang. The buzzing, it turned out, was just that: a cosmic echo. It confirmed that the universe wasn’t infinitely old and static but rather had begun as a primordial fireball that left the universe bathed in background radiation.

. . .

. . . Dr. Penzias’s path to stumbling onto the answer to one of humanity’s most central questions started . . ., when he joined Bell Laboratories as a member of its radio research group in Holmdel.

There, he saw the potential of AT&T’s new satellite communications antenna, a giant radio telescope known as the Holmdel Horn, as a tool for cosmological observation. In teaming up with Dr. Wilson in 1964 to use the antenna, Dr. Wilson said in a recent interview, one of their goals was to advance the nascent field of radio astronomy by accurately measuring several bright celestial sources.

Soon after they started their measurements, however, they heard the hiss. They spent months ruling out possible causes, including pigeons.

“The pigeons would go and roost at the small end of the horn, and they deposited what Arno called a white dielectric material,” Dr. Wilson said. “And we didn’t know if the pigeon poop might have produced some radiation.” So the men climbed up and cleaned it out. The noise persisted.

It was finally Dr. Penzias’s fondness for chatting on the telephone that led to a fortuitous breakthrough. (“It was a good thing he worked for the phone company, because he liked to use their instrument,” Dr. Wilson said. “He talked to a lot of people.”)

In January 1965, Dr. Penzias dialed Bernard Burke, a fellow radio astronomer, and in the course of their conversation he mentioned the puzzling hiss. Dr. Burke suggested that Dr. Penzias call a physicist at Princeton who had been trying to prove that the Big Bang had left traces of cosmological radiation. He did.

Intrigued, scientists from Princeton visited Dr. Penzias and Dr. Wilson, and together they made the connection to the Big Bang. Theory and observation were then brought together in a pair of papers published in 1965.

For the full obituary, see:

Katie Hafner. “Arno A. Penzias Is Dead at 90; Confirmed the Big Bang Theory.” The New York Times (Thursday, January 25, 2024): B11.

(Note: ellipses, and bracketed date, added.)

(Note: the online version of the obituary has the date Jan. 22, 2024, and has the title “Arno A. Penzias, 90, Dies; Nobel Physicist Confirmed Big Bang Theory.”)

Biologists Surprised That Marine Animals Are “Having a Blast” in “Great Pacific Garbage Patch”

(p. A3) Biologists who fished toothbrushes, rope and broken bottle shards from the Great Pacific Garbage Patch found them studded with gooseneck barnacles and jet-black sea anemones glistening like buttons. All told, they found 484 marine invertebrates from 46 species clinging to the detritus, they reported Monday [April 17, 2023] in the journal Nature Ecology & Evolution.

. . .

Marine ecologists said they would expect most coastal species to struggle to survive outside their shoreline habitats. On the Great Pacific Garbage Patch, animals were found growing and reproducing.

“They’re having a blast,” said study author Matthias Egger, head of environmental and social affairs at the Dutch nonprofit The Ocean Cleanup. “That’s really a shift in the scientific understanding.”

Anemones like to protect themselves with grains of sand, Dr. Egger said, but out in the garbage patch they are covered in seed-like microplastics. Squeeze an anemone and the shards spew out, he said: “They’re all fully loaded with plastic on the outside and inside.”

. . .

The patch is also a haven for animals that are at home on the open ocean. Such species—sea snails, blue button jellyfish, and a relative called by-the-wind sailors—gather more densely where there is more plastic, Dr. Helm and her team said in a study posted online ahead of peer-review.

Removing the plastic would mean uprooting them, Dr. Helm said: “Cleaning it up is not actually that simple.”

For the full story, see:

Nidhi Subbaraman. “Ocean Garbage Patch Hosts Critters.” The Wall Street Journal (Tuesday, Apr. 18, 2023): A3.

(Note: ellipses, and bracketed date, added.)

(Note: the online version of the story was updated April 17, 2023, and has the title “Pacific Ocean Garbage Patch Is Bursting With Life.”)

The published version of the “posted online” article mentioned above is:

Haram, Linsey E., James T. Carlton, Luca Centurioni, Henry Choong, Brendan Cornwell, Mary Crowley, Matthias Egger, Jan Hafner, Verena Hormann, Laurent Lebreton, Nikolai Maximenko, Megan McCuller, Cathryn Murray, Jenny Par, Andrey Shcherbina, Cynthia Wright, and Gregory M. Ruiz. “Extent and Reproduction of Coastal Species on Plastic Debris in the North Pacific Subtropical Gyre.” Nature Ecology & Evolution 7, no. 5 (April 17, 2023): 687-97.

“Context Switching Is the Mindkiller”

(p. B7) “My mind often feels…like a very wild storm,” Musk said Wednesday in the same interview. “I’m a fountain of ideas. I mean I have more ideas than I could possibly execute. So I have no shortage of ideas. Innovation is not a problem, execution is a problem.”

He was speaking at the New York Times DealBook Summit on Wednesday [Nov. 29, 2023] in New York City, a high-profile event run by one of the media juggernauts he has been openly needling.

He was only there, Musk said, because of his friendship with the host, Andrew Ross Sorkin. Or, as Musk called him on stage, “Jonathan.”

“I’m Andrew,” Sorkin said.

. . .

“Context switching is the mindkiller,” he tweeted the day after Thanksgiving, a favorite axiom of his that mixes a quote from the sci-fi book “Dune” with computer lingo for multitasking.

In “Dune,” fear is the mindkiller—the idea that the primal reaction to fear is to recoil rather than go forward. In essence, fear is an obstacle to be overcome to reach success. For Musk, the challenge to overcome is being able to handle switching between rockets and tweets and cars and brain computers and drilling machines and superhuman artificial intelligence.

. . .

In the moment that ricocheted around the world, Musk told advertisers unhappy with him to go f— themselves, saying he was unwilling to pander to their “blackmail” and warned they threatened to bankrupt the social-media platform he acquired slightly more than a year ago. And if they were successful, he warned, “See how Earth responds to that.”

. . .

To Musk, the likes of Disney are trying to squelch his freedom of speech. To others, they are simply exercising their rights to walk away.

“Go. F—. Yourself,” Musk said on stage to a stunned audience. “Is that clear? I hope it is. Hey, Bob, if you’re in the audience.”

For the full commentary, see:

Tim Higgins. “Storm in Musk’s Mind Casts Shadow on Vehicle Launch.” The Wall Street Journal (Monday, Dec. 4, 2023): B7.

(Note: ellipses, and bracketed date, added.)

(Note: the online version of the commentary has the date December 2, 2023, and has the title “The Storm Brewing Inside Elon Musk’s Mind Gets Out.” The 7th, 8th, and 9th sentences quoted above appear in the online, but not in the print, version of the commentary. Also, the online version of the sentence on being able to handle switching contains seven added words of detail.)

The science-fiction Dune book mentioned above is:

Herbert, Frank. Dune. Deluxe ed. New York: Ace, 2019 [1st ed. 1965].

“Serendipitous” Discoveries Related to Two “Odd-Looking” Animals Were the Source of Weight-Loss Drugs

(p. A1) The blockbuster diabetes drugs that have revolutionized obesity treatment seem to have come out of nowhere, turning the diet industry upside down in just the past year. But they didn’t arrive suddenly. They are the unlikely result of two separate bodies of science that date back decades and began with the study of (p. A2) two unsightly creatures: a carnivorous fish and a poisonous lizard.

In 1980, researchers at Massachusetts General Hospital wanted to use new technology to find the gene that encodes a hormone called glucagon. The team decided to study Anglerfish, which have special organs that make the hormone, simplifying the task of gathering samples of pure tissue.

. . .

After plucking out organs the size of Lima beans with scalpels, they dropped them into liquid nitrogen and drove back to Boston. Then they determined the genetic sequence of glucagon, which is how they learned that the same gene encodes related hormones known as peptides. One of them was a key discovery that would soon be found in humans, too.

It was called glucagon-like peptide-1 and its nickname was GLP-1.

After they found GLP-1, others would determine its significance. Scientists in Massachusetts and Europe learned that it encourages insulin release and lowers blood sugar. That held out hope that it could help treat diabetes. Later they discovered that GLP-1 makes people feel fuller faster and slows down emptying of food from the stomach.

. . .

The key to the first drug would come from a serendipitous discovery inside another odd-looking animal.

Around the time Goodman was cutting open fish, Jean-Pierre Raufman was studying insect and animal venoms to see if they stimulated digestive enzymes in mammals.

“We got a tremendous response from Gila monster venom,” he recalled.

It was a small discovery that could have been forgotten, but for a lucky break nearly a decade later when Raufman gave a lecture on that work at the Bronx Veterans Administration. John Eng, an expert in identifying peptides, was intrigued. The pair had collaborated on unrelated work a few years before. Eng proposed they study Gila monsters.

. . .

Eng isolated a small peptide that he called Exendin-4, which they found was similar to human GLP-1.

Eng then tested his new peptide on diabetic mice and found something intriguing: It not only reduced blood glucose, it did so for hours. If the same effect were to be observed in humans, it could be the key to turning GLP-1 into a meaningful advance in diabetes treatment, not just a seasickness simulator in an IV bag.

Jens Juul Holst, a pioneering GLP-1 researcher, remembers standing in an exhibit hall at a European conference next to Eng. The two had put up posters that displayed their work, hoping top researchers would stop by to discuss it. But other scientists were skeptical that anything derived from a lizard would work in humans.

“He was extremely frustrated,” recalled Holst. “Nobody was interested in his work. None of the important people. It was too strange for people to accept.”

After three years, tens of thousands of dollars in patent-related fees and thousands of miles traveled, Eng found himself standing with his poster in San Francisco. This time, he caught the attention of Andrew Young, an executive from a small pharmaceutical company named Amylin.

“I saw the results in the mice and realized this could be druggable,” Young said.

When an Eli Lilly executive leaned over his shoulder to look at Eng’s work, Young worried he might miss his chance. Not long after, Amylin licensed the patent.

They worked to develop Exendin-4 into a drug by synthesizing the Gila monster peptide. They weren’t sure what would happen in humans. “We couldn’t predict weight loss or weight gain with these drugs,” recalled Young. “They enhance insulin secretion. Usually that increases body weight.” But the effect on slowing the stomach’s processing of food was more pronounced and Young’s team found as they tested their new drug that it caused weight loss.

To get a better understanding of Exendin-4, Young consulted with Mark Seward, a dentist raising more than 100 Gila monsters in his Colorado Springs, Colo., basement. The lizard enthusiast’s task was to feed them and draw blood. One took exception to the needle in its tail, slipped its restraint and snapped its teeth on Seward’s palm—the only time he’s been bitten in the decades he’s raised the animals. “It’s like a wasp sting,” he said, “but much worse.”

Nine years after the chance San Francisco meeting between Eng and Young, the Food and Drug Administration approved the first GLP-1-based treatment in 2005.

The twice-daily injection remained in the bloodstream for hours, helping patients manage Type 2 diabetes. Eng would be paid royalties as high as $6.7 million per year for the drug, . . .

For the full story, see:

Rolfe Winkler and Ben Cohen. “Two Monsters Spawned Huge Drugs.” The Wall Street Journal (Saturday, June 24, 2023): A1-A2.

(Note: ellipses added.)

(Note: the online version of the story has the date June 23, 2023, and has the title “Monster Diet Drugs Like Ozempic Started With Actual Monsters.” The sentence about “a serendipitous discovery” appears in the online, but not the print, version of the article. The passages quoted above also include several other sentences that appear in the more extensive online version, but not in the print version.)

The Most Powerful A.I. Systems Still Do Not Understand, Have No Common Sense, and Cannot Explain Their Decisions

(p. B1) David Ferrucci, who led the team that built IBM’s famed Watson computer, was elated when it beat the best-ever human “Jeopardy!” players in 2011, in a televised triumph for artificial intelligence.

But Dr. Ferrucci understood Watson’s limitations. The system could mine oceans of text, identify word patterns and predict likely answers at lightning speed. Yet the technology had no semblance of understanding, no human-style common sense, no path of reasoning to explain why it reached a decision.

Eleven years later, despite enormous advances, the most powerful A.I. systems still have those limitations.

. . .

(p. B7) The big, so-called deep learning programs have conquered tasks like image and speech recognition, and new versions can even pen speeches, write computer programs and have conversations.

They are also deeply flawed. They can generate biased or toxic screeds against women, minorities and others. Or occasionally stumble on questions that any child could answer. (“Which is heavier, a toaster or a pencil? A pencil is heavier.”)

“The depth of the pattern matching is exceptional, but that’s what it is,” said Kristian Hammond, an A.I. researcher at Northwestern University. “It’s not reasoning.”

Elemental Cognition is trying to address that gap.

. . .

Eventually, Dr. Ferrucci and his team made progress with the technology. In the past few years, they have presented some of their hybrid techniques at conferences and they now have demonstration projects and a couple of initial customers.

. . .

The Elemental Cognition technology is largely an automated system. But that system must be trained. For example, the rules and options for a global airline ticket are spelled out in many pages of documents, which are scanned.

Dr. Ferrucci and his team use machine learning algorithms to convert them into suggested statements in a form a computer can interpret. Those statements can be facts, concepts, rules or relationships: Qantas is an airline, for example. When a person says “go to” a city, that means add a flight to that city. If a traveler adds four more destinations, that adds a certain amount to the cost of the ticket.
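
(An aside, not part of the quoted article: the toy Python sketch below is a hypothetical illustration of the kind of machine-interpretable statements described above, that is, facts, rules and relationships. The names, rule forms and prices are invented for the sketch and are not Elemental Cognition’s actual representation.)

# Hypothetical illustration only; not Elemental Cognition's actual format.
# It shows the flavor of machine-interpretable statements the article describes:
# facts ("Qantas is an airline"), rules ("'go to' a city means add a flight"),
# and relationships (extra destinations add to the ticket's cost).

facts = [("Qantas", "is_a", "airline")]   # a simple fact/relationship triple

def interpret(command, itinerary):
    """Toy rule: a command of the form 'go to <city>' adds a flight to that city."""
    if command.lower().startswith("go to "):
        itinerary.append(command[len("go to "):].strip())
    return itinerary

def ticket_cost(itinerary, base_fare=2500, per_extra_stop=200):
    """Toy rule: destinations beyond the first four each add to the ticket's cost."""
    extra_stops = max(0, len(itinerary) - 4)
    return base_fare + extra_stops * per_extra_stop

trip = []
for city in ["Sydney", "Tokyo", "London", "Nairobi", "Lima"]:
    trip = interpret("go to " + city, trip)

print(trip, ticket_cost(trip))   # five stops: one beyond four, so one surcharge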

In training the round-the-world ticket assistant, an airline expert reviews the computer-generated statements, as a final check. The process eliminates most of the need for hand coding knowledge into a computer, a crippling handicap of the old expert systems.

Dr. Ferrucci concedes that advanced machine learning — the dominant path pursued by the big tech companies and well-funded research centers — may one day overcome its shortcomings. But he is skeptical from an engineering perspective. Those systems, he said, are not made with the goals of transparency and generating rational decisions that can be explained.

“The big question is how do we design the A.I. that we want,” Dr. Ferrucci said. “To do that, I think we need to step out of the machine-learning box.”

For the full story, see:

Steve Lohr. “You Can Lead A.I. to Answers, but Can You Make It Think?” The New York Times (Monday, August 29, 2022): B1 & B7.

(Note: ellipses added.)

(Note: the online version of the story was updated Sept. 8, 2022, and has the title “One Man’s Dream of Fusing A.I. With Common Sense.”)

“You Will Do Your Best Creative Work by Yourself”

(p. A12) The value of gathering to swap loosely formed thoughts is highly suspect, despite being a major reason many companies want workers back in offices.

“You do not get your best ideas out of these freewheeling brainstorming sessions,” says Sheena Iyengar, a professor at Columbia Business School. “You will do your best creative work by yourself.”

Iyengar has compiled academic research on idea generation, including a decade of her own interviews with more than a thousand people, into a book called “Think Bigger.” It concludes that group brainstorming is usually a waste of time.

Pitfalls include blabbermouths with mediocre suggestions and introverts with brilliant ones that they keep to themselves.

. . .

Plenty of people have always bemoaned brainstorming. Longtime Wall Street Journal readers may recall a 2006 “Cubicle Culture” column that skewered the popular practice, and Harvard Business Review published a research-based case against the usefulness of brainstorming in 2015.

. . .

Sometimes leaders bring employees together to create the illusion of wide-open input, says Erika Hall, co-founder of Mule Design Studio, a management consulting firm in San Francisco. In-person brainstorming is part of the back-to-office rationale for many of her clients, and she generally advises the ones that truly want to improve collaboration to first carve out some alone time for their workers.

When Hall needs inspiration, she goes for a run.

“It’s freaky,” she says. “I will go run on a problem, and things will happen in my head that do not happen under any other circumstance.”

Others might find “Aha!” moments in the shower or while listening to music. Leaving breakthroughs to private serendipity can feel, to bosses, like losing control, she acknowledges, but it might be more effective than trying to schedule magic in a conference room.

For the full commentary, see:

Callum Borchers. “ON THE CLOCK; Switch Off Brainstorming If You Want Brighter Ideas.” The Wall Street Journal (Thursday, May 18, 2023): A12.

(Note: ellipses added.)

(Note: the online version of the commentary was updated May 18, 2023, and has the title “ON THE CLOCK; Office Brainstorms Are a Waste of Time.”)

The book by Iyengar mentioned above is:

Iyengar, Sheena. Think Bigger: How to Innovate. New York: Columbia Business School Publishing, 2023.