“The Car Emancipated the Masses”

(p. 11) Ear-shredding noise, toxic air, interminable traffic jams, chaos and death — all the result of untrammeled population expansion. Is this a description of a contemporary urban nightmare? Not quite: We’re talking about 19th-century London, although the situation in Paris and other major cities wasn’t much better. And the cause of all this misery was … the horse.

As recounted by Bryan Appleyard in his compelling new book, “The Car,” by 1900 the 50,000 horses required to meet London’s transportation needs deposited 500 tons of excrement daily. Hooves and carriage wheels threw up curtains of fetid muck. Accidents caused by mechanical failures and spooked animals were often fatal to passengers, drivers and the horses themselves. New York City employed 130,000 horses and predictions were made that by 1930 that city’s streets would be piled three stories high with dung. Yet another dire prophecy fallen victim to the continuity fallacy — the belief that a current trend will endure forever.

Things change because when problems arise, people work at solving them, and sometimes they arrive at solutions. The answer to the psychosocial and physical degradation brought on by too many people employing too many horses in the burgeoning Industrial Age was, of course, the development of the motor vehicle. Specifically, one powered by the internal combustion engine.

. . .

For all the carping and finger-pointing leveled at traditional automobiles — much of which Appleyard acknowledges as valid — he is unabashed about his appreciation for the most important machine in human history. As he points out, “The car emancipated the masses far more effectively than any political ideology; that it did so at a cost should not obliterate the importance of that freedom.”

Well said. Vroom.

For the full review, see:

Jonathan Kellerman. “Auto Erotica.” The New York Times Book Review (Sunday, September 25, 2022): 11.

(Note: ellipsis added.)

(Note: the online version of the review was updated Sept. 23, 2022, and has the title “How the Car Created the Modern World.”)

The book under review is:

Appleyard, Bryan. The Car: The Rise and Fall of the Machine That Made the Modern World. New York: Pegasus Books, 2022.

Deep-Sea Nodules Are a New Source of Scarce Metals

(p. B11) Companies could start mining the ocean floor for metals used to make electric-vehicle batteries within the next year, a development that could occur despite broad concerns about the environmental impact of deep-sea mining.

The International Seabed Authority, a United Nations observer organization that regulates deep-sea mining in international waters, is drawing up a final regulatory framework for deep-sea mining that all 168 members would need to agree to within the next 12 months. The U.S. isn’t a member of the ISA. With or without the finalized rules, the ISA will permit seabed mining by July 2023, according to people familiar with the matter.

The intent of deep-sea mining is to scrape the ocean floor for polymetallic nodules—tennis-ball-size pieces of rock that contain iron and manganese oxide layers. A seabed in the Pacific Ocean called the Clarion Clipperton Zone, which covers 1.7 million square miles between Mexico and Hawaii, contains a high volume of nodules made of battery metals, such as cobalt, manganese and lithium. The International Seabed Authority in 2010 estimated the zone had roughly 30 billion metric tons of nodules. Cobalt and other metals used in making rechargeable batteries that power products from phones to electric vehicles are in high demand, setting off a race to find and procure them. Prices of these metals are soaring as mining for them comes under scrutiny, curtailing supply.

For the full story see:

Yusuf Khan. “Deep-Sea Mining Nears Reality.” The Wall Street Journal (Tuesday, Aug. 23, 2022): B11.

(Note: as of Sept. 9, 2022, the article was not available online.)

Trang Almost Did Not Perform Simple Experiment Because “People Would Have Known This Already”

(p. D3) A team of scientists has found a cheap, effective way to destroy so-called forever chemicals, a group of compounds that pose a global threat to human health.

The chemicals — known as PFAS, or per- and polyfluoroalkyl substances — are found in a spectrum of products and contaminate water and soil around the world. Left on their own, they are remarkably durable, remaining dangerous for generations.

Scientists have been searching for ways to destroy them for years. In a study published Thursday [Aug. 18, 2022] in the journal Science, a team of researchers rendered PFAS molecules harmless by mixing them with two inexpensive compounds at a low boil. In a matter of hours, the PFAS molecules fell apart.

. . .

At the end of a PFAS molecule’s carbon-fluorine chain, it is capped by a cluster of other atoms. Many types of PFAS molecules have heads made of a carbon atom connected to a pair of oxygen atoms, for example.
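To make that structure concrete, here is a minimal sketch using the open-source RDKit toolkit and PFOA (perfluorooctanoic acid), a widely studied perfluorocarboxylic acid chosen here only as an illustration and not named in the passage. It counts the durable carbon-fluorine bonds of the chain and the carbon-oxygen bonds of the "head" the article describes.

```python
from rdkit import Chem

# PFOA: a carbon-fluorine chain capped by a carboxylic-acid "head"
# (one carbon bonded to a pair of oxygen atoms). Example molecule only.
pfoa = Chem.MolFromSmiles(
    "OC(=O)C(F)(F)C(F)(F)C(F)(F)C(F)(F)C(F)(F)C(F)(F)C(F)(F)F"
)

cf_bonds = sum(
    1 for b in pfoa.GetBonds()
    if {b.GetBeginAtom().GetSymbol(), b.GetEndAtom().GetSymbol()} == {"C", "F"}
)
co_bonds = sum(
    1 for b in pfoa.GetBonds()
    if {b.GetBeginAtom().GetSymbol(), b.GetEndAtom().GetSymbol()} == {"C", "O"}
)
print(f"C-F bonds (the durable chain): {cf_bonds}")    # 15
print(f"C-O bonds (the removable head): {co_bonds}")   # 2
```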

Dr. Dichtel came across a study in which chemists at the University of Alberta found an easy way to pry carbon-oxygen heads off other chains. He suggested to his graduate student, Brittany Trang, that she give it a try on PFAS molecules.

Dr. Trang was skeptical. She had tried to pry off carbon-oxygen heads from PFAS molecules for months without any luck. According to the Alberta recipe, all she’d need to do was mix PFAS with a common solvent called dimethyl sulfoxide, or DMSO, and bring it to a boil.

“I didn’t want to try it initially because I thought it was too simple,” Dr. Trang said. “If this happens, people would have known this already.”

An older grad student advised her to give it a shot. To her surprise, the carbon-oxygen head fell off.

It appears that DMSO makes the head fragile by altering the electric field around the PFAS molecule, and without the head, the bonds between the carbon atoms and the fluorine atoms become weak as well. “This oddly simple method worked,” said Dr. Trang, who finished her Ph.D. last month and is now a journalist.

For the full commentary see:

Carl Zimmer. “MATTER; Fighting Forever Chemicals With Chemicals.” The New York Times (Tuesday, Aug. 23, 2022): D3.

(Note: ellipsis, and bracketed date, added.)

(Note: the online version of the commentary has the date August 18, 2022, and has the title “MATTER; Forever Chemicals No More? PFAS Are Destroyed With New Technique.”)

The research summarized in the passages quoted above was published in:

Trang, Brittany, Yuli Li, Xiao-Song Xue, Mohamed Ateia, K. N. Houk, and William R. Dichtel. “Low-Temperature Mineralization of Perfluorocarboxylic Acids.” Science 377, no. 6608 (Aug. 18, 2022): 839-45.

Nobody Wanted to Buy Tony Fadell’s Early Inventions

(p. C7) Tony Fadell began his Silicon Valley career in 1991 at General Magic, which he calls “the most influential startup nobody has ever heard of.” He sketches his early persona as the all-too-typical engineering nerd dutifully donning an interview suit only to be told to ditch the jacket and tie before the meeting begins. He started at the bottom, building tools to check the work of others, many of whom just happened to be established legends from the original Apple Macintosh team.

General Magic failed at its ambitious goal: to create demand for its ingenious hand-held computer at a time when most people didn’t know they needed a computer at all. In other words, though the company had a great product, the product didn’t solve a pressing problem for consumers. Mr. Fadell offers candid reasons why such a smart group of people could have overlooked this basic market reality. Relaying the “gut punch of our failure,” he describes what it’s like “when you think you know everything (p. C8) then suddenly realize you have no idea what you’re doing.”

Four years later, Mr. Fadell landed as chief technology officer at Philips, the 300,000-employee Dutch electronics company, where he had a big title, a new team, a budget and a mission: The company was going to make a hand-held computer for now-seasoned desktop users who were beginning to see the need for a mobile device. Using Microsoft Windows CE as the operating system, it launched the Philips Velo in 1997. This was a $599.99 “personal digital assistant”—keyboard, email, docs, calendar, the works—in a friendly 14-ounce package. All the pieces were there, the author writes, except “a real sales and retail partnership.” No one—not Best Buy, not Circuit City, not Philips itself—knew how to sell the product, or whom to sell it to. So here was another “lesson learned via gut punch”: There is a lot more to a successful product than a good gadget, even an excellent one.

. . .

He uses his problem-solution-failure style to share stories about how he built the Nest thermostat and the Nest Labs company—from fundraising, building a retail channel and navigating patent litigation to marketing, packaging and customer support.

The best moments in this section, and perhaps the most difficult for Mr. Fadell to write, are about the acquisition of Nest by Google. He pulls no punches in describing what an outsider might call a botched venture integration. Google paid $3.2 billion for Nest in 2014 but within two years began to consider selling. “In utter frustration,” Mr. Fadell walked away. The lessons in these pages are as much for big companies acquiring startups as they are for the startups being acquired.

For the full review, see:

Steven Sinofsky. “Running The Tortuous ‘Idea Maze’.” The Wall Street Journal (Saturday, June 18, 2022): C7-C8.

(Note: ellipsis added.)

(Note: the online version of the review has the date June 17, 2022, and has the title “‘Build’ Review: Failure Is the Mother of Invention.”)

The book under review is:

Fadell, Tony. Build: An Unorthodox Guide to Making Things Worth Making. New York: Harper Business, 2022.

A.I. Cannot Learn What a 4-Year-Old Learns From Trial-and-Error Experiments

(p. C3) A few weeks ago a Google engineer got a lot of attention for a dramatic claim: He said that the company’s LaMDA system, an example of what’s known in artificial intelligence as a large language model, had become a sentient, intelligent being.

Large language models like LaMDA or San Francisco-based OpenAI’s rival GPT-3 are remarkably good at generating coherent, convincing writing and conversations—convincing enough to fool the engineer. But they use a relatively simple technique to do it: The models see the first part of a text that someone has written and then try to predict which words are likely to come next. If a powerful computer does this billions of times with billions of texts generated by millions of people, the system can eventually produce a grammatical and plausible continuation to a new prompt or a question.
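The next-word-prediction mechanic described above can be shown with a toy sketch. The tiny corpus and greedy word-picking below are drastic simplifications of what systems like LaMDA or GPT-3 actually do at scale, but the core task is the same.

```python
from collections import Counter, defaultdict

# A toy "language model": count which word follows which in a tiny corpus,
# then continue a prompt by repeatedly appending the most likely next word.
corpus = "the car emancipated the masses and the car changed the city".split()

follow_counts = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follow_counts[current_word][next_word] += 1

def continue_prompt(prompt, n_words=3):
    """Greedily append the most frequent continuation of the last word."""
    words = prompt.split()
    for _ in range(n_words):
        candidates = follow_counts.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(continue_prompt("the"))  # -> "the car emancipated the"
```

Real systems replace the word counts with neural networks trained on billions of texts, which is what makes their continuations so plausible.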

. . .

In what’s known as the classic “Turing test,” Alan Turing in 1950 suggested that if you couldn’t tell the difference in a typed conversation between a person and a computer, the computer might qualify as intelligent. Large language models are getting close. But Turing also proposed a more stringent test: For true intelligence, a computer should not only be able to talk about the world like a human adult—it should be able to learn about the world like a human child.

In my lab we created a new online environment to implement this second Turing test—an equal playing field for children and AI systems. We showed 4-year-olds on-screen machines that would light up when you put some combinations of virtual blocks on them but not others; different machines worked in different ways. The children had to figure out how the machines worked and say what to do to make them light up. The 4-year-olds experimented, and after a few trials they got the right answer. Then we gave state-of-the-art AI systems, including GPT-3 and other large language models, the same problem. The language models got a script that described each event the children saw and then we asked them to answer the same questions we asked the kids.
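The passage does not give the lab's exact design, so the sketch below simulates a hypothetical version of the task: a virtual machine that lights up only for certain block combinations, and a brute-force learner that discovers the hidden rule by running trials, the way the children could.

```python
import itertools

# Invented example rule, not the lab's actual design: the machine lights up
# only when a red block is on it.
BLOCKS = ["red", "blue", "green"]

def machine_lights_up(blocks_on_machine):
    return "red" in blocks_on_machine  # the hidden causal rule

# Trial-and-error experimentation: try every combination and record results.
observations = []
for combo in itertools.chain.from_iterable(
    itertools.combinations(BLOCKS, k) for k in range(1, len(BLOCKS) + 1)
):
    observations.append((set(combo), machine_lights_up(combo)))

# Infer which block is necessary: it appears in every lit-up trial and never
# suffices to explain a trial that stayed dark.
lit = [c for c, on in observations if on]
dark = [c for c, on in observations if not on]
candidates = set.intersection(*lit) - set().union(*dark)
print(f"Blocks that explain the machine: {candidates}")  # {'red'}
```

A language model that only reads a transcript of such trials must pattern-match over text, whereas the child (or this learner) can intervene, test and revise.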

We thought the AI systems might be able to extract the right answer to this simple problem from all those billions of earlier words. But nobody in those giant text databases had seen our virtual colored-block machines before. In fact, GPT-3 bombed. Some other recent experiments had similar results. GPT-3, for all its articulate speech, can’t seem to solve cause-and-effect problems.

If you want to solve a new problem, googling it or going to the library may be a first step. But ultimately you have to experiment, the way the children did. GPT-3 can tell you what the most likely outcome of a story will be. But innovation, even for 4-year-olds, depends on the surprising and unexpected—on discovering unlikely outcomes, not predictable ones.

For the full commentary see:

Alison Gopnik. “What AI Still Doesn’t Know How To Do.” The Wall Street Journal (Saturday, July 16, 2022): C3.

(Note: ellipsis added.)

(Note: the online version of the commentary has the date July 15, 2022, and has the same title as the print version.)

Increasing Tax Rates Will Reduce Venture Funding for Cancer Research

(p. A17) In his last year as vice president, Joe Biden launched a “cancer moonshot” to accelerate cures for the disease. It was short-lived, but he did help negotiate an agreement in Congress easing regulation of breakthrough drugs and medical devices.

In February [2022], President Biden revived the initiative, setting a goal of reducing cancer death rates by at least 50% over the next 25 years. It’s ambitious but may be achievable given how rapidly scientific knowledge and treatments are advancing. Other Biden policies, however, are at odds with the goals of this one.

Two pharmaceutical breakthroughs were announced only last week that could save tens of thousands of lives each year and redefine cancer care. Yet the tax hikes and drug-price controls that the Biden administration is pitching would discourage the private investment that has delivered these potential cures.

. . .

Oncologists were blown away by the results reported last week in the New England Journal of Medicine: All 12 patients receiving the drug achieved complete remission after six months of treatment. None needed surgery, chemotherapy or radiation. Although some may relapse, the 100% success rate is unprecedented even for a small trial.

. . .

Last week AstraZeneca in partnership with Daiichi Sankyo reported that Enhertu reduced the risk of death by 36% in patients with metastatic breast cancer with low HER2 and by half for the subset who were hormone-receptor negative. These results blow the outcomes for other metastatic breast-cancer therapies out of the water.

. . .

These treatment breakthroughs aren’t happening because of government programs. They’re happening because pharmaceutical companies have invested decades and hundreds of billions of dollars in drug research and development. It typically takes 10.5 years and $1.3 billion to bring a new drug to market. About 95% of cancer drugs fail.

This is important to keep in mind as Mr. Biden and Democrats in Congress push for Medicare to “negotiate”—i.e., cap—drug prices and raise taxes on corporations and investors. The large profits that drugmakers notch from successful drugs are needed to reward shareholders for their investment risk and encourage future investment. Capital is mobile.

Mr. Biden’s proposal to increase the top marginal individual income-tax rates, including on capital gains, would punish venture capitalists who seed biotech startups, which do most early-stage research on cancer drugs and are often acquired by large drugmakers. At the same time, his proposed corporate global minimum tax would raise costs of intellectual property, which is often taxed at lower rates abroad.

There aren’t many things to celebrate nowadays, but biotech innovation is one. Let’s hope the president doesn’t kill his own cancer moonshot.

For the full commentary see:

Allysia Finley. “Biden May Stop His Cancer Moonshot’s Launch.” The Wall Street Journal (Thursday, June 16, 2022): A17.

(Note: ellipses, and bracketed year, added.)

(Note: the online version of the commentary has the date June 15, 2022, and has the same title as the print version.)

Feds Requiring EV Chargers in Desolate Parts of the West That Are Off the Electric Grid

(p. B1) The U.S. government wants fast EV-charging stations every 50 miles along major highways. Some Western states say the odds of making that work are as remote as their rugged landscapes.

States including Utah, Wyoming, Montana, New Mexico and Colorado are raising concerns about rules the Biden administration has proposed for receiving a share of the coming $5 billion in federal funding to help jump-start a national EV-charging network. The states say it will be difficult, if not impossible, to run EV chargers along desolate stretches of highway.

“There are plenty of places in Montana and other states here out West where it’s well more than 50 miles between gas stations,” said Rob Stapley, an official with the Montana Department of Transportation. “Even if there’s an exit, or a place for people to pull off, the other big question is: Is there anything on the electrical grid at a location or even anywhere close to make that viable?”

. . .

(p. B2) Some Western states are unhappy over the federal determination of which U.S. highways should have the chargers, which is a carry-over from 2015 legislation for alternative-fuels roadways.

Mr. De La Rosa of New Mexico said it could result in a disproportionate number of charging stations in the southeast part of the state, and none in the northwest. “It’s not apparent here in New Mexico how those decisions were made,” he said.

Utah’s population is largely clustered in cities along the Wasatch Front and Interstate-15 in the northern and southern parts of the state, and there are concerns that spending on remote locations could skip serving the routes most delivery drivers and residents use, said Kim Frost, executive director of the Utah Clean Air Partnership.

For the full story see:

Jennifer Hiller. “Plan for EV Chargers Meets Skepticism in West.” The Wall Street Journal (Tuesday, June 14, 2022): B1-B2.

(Note: ellipsis added.)

(Note: the online version of the story has the date June 13, 2022, and has the title “Biden Plan for EV Chargers Meets Skepticism in Rural West.”)

The “Intellect” and “Bravado” Behind the Success of Thiel, Musk, and the “PayPal Mafia”

(p. C7) Next week marks the 20th anniversary of PayPal becoming a publicly traded company. The IPO valued the online payments processor at nearly $1 billion—an eye-opening sum at the time. Back in the day, technology firms marked such occasions with glitzy celebrations. PayPal took a different path. Its youthful employees gathered in the parking lot of their Palo Alto, Calif., office building, where the company’s enigmatic chief executive, Peter Thiel, performed a keg stand and then played 10 simultaneous games of speed chess, winning nine of them.

Jimmy Soni tells that story and many others in “The Founders,” a gripping account of PayPal’s origins and a vivid portrait of the geeks and contrarians who made its meteoric rise possible. His richly reported narrative includes corporate intrigue, workplace hijinks, breakthrough innovation and first-class nerdiness.

. . .

Julie Anderson, one of X.com’s early employees, dropped the company’s California-based telephone customer-service provider and relaunched the service in Nebraska. Why there? Because many of her relatives lived there.

. . .

Confirming a cliché, staffers do spend all night at the office—sometimes sleeping under their desks, though not always. “There’s this massive value that you harness when you’re doing an all-nighter,” says Mr. Levchin, “when you’ve gone for presumably seven or eight hours of work, and you’re really getting up to a point when something’s about to be born—and then you go for eight more hours! And instead of stopping to go to sleep and letting these ideas dissipate, you actually focus on the findings you’ve made in the last few hours, and you just go crazy and do some more of that.”

. . .

Why did PayPal thrive when others—eMoneyMail, PayPlace, c2it—failed? One key was limiting the losses from fraud. If the company had taken a traditional approach, observes a member of the fraud-analytics team, it “would have hired people who had been building logistic regression models for banks for twenty years but never innovated.” Instead it turned to young, open-minded engineers who devised unorthodox methods.
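For readers unfamiliar with the "logistic regression models" the fraud analyst mentions, here is a generic sketch of that traditional scoring approach, with invented features and synthetic data. It is not a reconstruction of what banks or PayPal actually built.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features: transaction amount (USD) and account age (days).
rng = np.random.default_rng(0)
n = 1000
amount = rng.exponential(scale=100, size=n)
account_age = rng.integers(1, 1000, size=n)
X = np.column_stack([amount, account_age])

# Synthetic labels: fraud is more likely for large amounts on new accounts.
fraud_odds = 0.02 * amount - 0.005 * account_age - 1.0
y = rng.random(n) < 1 / (1 + np.exp(-fraud_odds))

# The traditional approach: fit a logistic regression on historical
# transactions, then score new ones by predicted fraud probability.
model = LogisticRegression(max_iter=1000).fit(X, y)

new_transactions = np.array([[500.0, 5], [20.0, 800]])
print(model.predict_proba(new_transactions)[:, 1])  # fraud probability per row
```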

. . .

. . . “The Founders” makes crystal-clear that PayPal’s human capital—a potent cocktail of intellect, bravado and competitiveness, complemented by the occasional keg stand—laid the foundation for success.

For the full review, see:

Matthew Rees. “Making the Future Click.” The Wall Street Journal (Saturday, Feb. 12, 2022): C7.

(Note: ellipses added.)

(Note: the online version of the review has the date February 11, 2022, and has the title “‘The Founders’ Review: Making the Future Click.”)

The book under review is:

Soni, Jimmy. The Founders: The Story of PayPal and the Entrepreneurs Who Shaped Silicon Valley. New York: Simon & Schuster, 2022.

A.I. Remains Useful Mainly for “Uncinematic Back-Office Logistics”

(p. B4) After years of companies emphasizing the potential of artificial intelligence, researchers say it is now time to reset expectations.

With recent leaps in the technology, companies have developed more systems that can produce seemingly humanlike conversation, poetry and images. Yet AI ethicists and researchers warn that some businesses are exaggerating the capabilities—hype that they say is brewing widespread misunderstanding and distorting policy makers’ views of the power and fallibility of such technology.

“We’re out of balance,” says Oren Etzioni, chief executive of the Allen Institute for Artificial Intelligence, a Seattle-based research nonprofit.

. . .

The belief that AI is becoming—or could ever become—conscious remains on the fringes in the broader scientific community, researchers say.

In reality, artificial intelligence encompasses a range of techniques that largely remain useful for a range of uncinematic back-office logistics like processing data from users to better target them with ads, content and product recommendations.

. . .

The gap between perception and reality isn’t new. Mr. Etzioni and others pointed to the marketing around Watson, the AI system from International Business Machines Corp. that became widely known after besting humans on the quiz show “Jeopardy.” After a decade and billions of dollars in investment, the company said last year it was exploring the sale of Watson Health, a unit whose marquee product was supposed to help doctors diagnose and cure cancer.

. . .

Elizabeth Kumar, a computer-science doctoral student at Brown University who studies AI policy, says the perception gap has crept into policy documents. Recent local, federal and international regulations and regulatory proposals have sought to address the potential of AI systems to discriminate, manipulate or otherwise cause harm in ways that assume a system is highly competent. They have largely left out the possibility of harm from such AI systems’ simply not working, which is more likely, she says.

For the full story see:

Karen Hao and Miles Kruppa. “AI Hype Doesn’t Match Reality.” The Wall Street Journal (Thursday, June 30, 2022): B4.

(Note: ellipses added.)

(Note: the online version of the story was updated July 5, 2022, and has the title “Tech Giants Pour Billions Into AI, but Hype Doesn’t Always Match Reality.”)

Brynjolfsson Made “Long Bet” with Gordon that A.I. Will Increase Productivity

(p. B1) For years, it has been an article of faith in corporate America that cloud computing and artificial intelligence will fuel a surge in wealth-generating productivity. That belief has inspired a flood of venture funding and company spending. And the payoff, proponents insist, will not be confined to a small group of tech giants but will spread across the economy.

It hasn’t happened yet.

Productivity, which is defined as the value of goods and services produced per hour of work, fell sharply in the first quarter this year, the government reported this month. The quarterly numbers are often volatile, but the report seemed to dash earlier hopes that a productivity revival was finally underway, helped by accelerated investment in digital technologies during the pandemic.

The growth in productivity since the pandemic hit now stands at about 1 percent annually, in line with the meager rate since 2010 — and far below the last stretch of robust improvement, from 1996 to 2004, when productivity grew more than 3 percent a year.
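Using the definition in the passage (output value per hour worked), a quick back-of-the-envelope calculation shows how far the roughly 1 percent and 3 percent growth rates diverge over a decade; the dollar figure below is made up for illustration.

```python
# Productivity = value of goods and services produced per hour of work.
# Illustrative (made-up) starting point: $80 of output per hour.
base_output_per_hour = 80.0
years = 10

for annual_growth in (0.01, 0.03):  # ~1% since 2010 vs. ~3% in 1996-2004
    future = base_output_per_hour * (1 + annual_growth) ** years
    gain = 100 * (future / base_output_per_hour - 1)
    print(f"{annual_growth:.0%} growth for {years} years -> "
          f"${future:.2f}/hour ({gain:.0f}% cumulative gain)")
# 1% compounds to about a 10% gain over a decade; 3% compounds to about 34%.
```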

. . .

(p. B6) The current productivity puzzle is the subject of spirited debate among economists. Robert J. Gordon, an economist at Northwestern University, is the leading skeptic. Today’s artificial intelligence, he says, is mainly a technology of pattern recognition, poring through vast troves of words, images and numbers. Its feats, according to Mr. Gordon, are “impressive but not transformational” in the way that electricity and the internal combustion engine were.

Erik Brynjolfsson, director of Stanford University’s Digital Economy Lab, is the leader of the optimists’ camp. He confesses to being somewhat disappointed that the productivity pickup is not yet evident, but is convinced it is only a matter of time.

“Real change is happening — a tidal wave of transformation is underway,” Mr. Brynjolfsson said. “We’re seeing more and more facts on the ground.”

It will probably be years before there is a definitive answer to the productivity debate. Mr. Brynjolfsson and Mr. Gordon made a “long bet” last year, with the winner determined at the end of 2029.

For the full story see:

Steve Lohr. “Why Isn’t A.I. Increasing Productivity?” The New York Times (Wednesday, May 25, 2022): B1 & B6.

(Note: ellipsis added.)

(Note: the online version of the story was updated May 27, 2022, and has the title “Why Isn’t New Technology Making Us More Productive?”)

Log4j Open Source Bug Created “Endemic” Risk for “a Decade or Longer”

Continuing worries about the Log4j software bug are consistent with the skepticism of open source software that I express in my book Openness to Creative Destruction. You can find a brief discussion in the chapter defending patents.

(p. A6) WASHINGTON—A major cybersecurity bug detected last year in a widely used piece of software is an “endemic vulnerability” that could persist for more than a decade as an avenue for hackers to infiltrate computer networks, a U.S. government review has concluded.

. . .

“The Log4j event is not over,” the report said. “The board assesses that Log4j is an ‘endemic vulnerability’ and that vulnerable instances of Log4j will remain in systems for many years to come, perhaps a decade or longer. Significant risk remains.”

. . .

Security researchers uncovered last December a major flaw in Log4j, an open-source software logging tool. It is a widely used piece of free code that logs activity in computer networks and applications.

For the full story, see:

Dustin Volz. “‘Endemic’ Risk Seen In Log4j Cyber Bug.” The Wall Street Journal (Friday, July 15, 2022): A6.

(Note: ellipses added.)

(Note: the online version of the story has the date July 14, 2022, and has the title “Major Cyber Bug in Log4j to Persist as ‘Endemic’ Risk for Years to Come, U.S. Board Finds.”)

My book, mentioned above, is:

Diamond, Arthur M., Jr. Openness to Creative Destruction: Sustaining Innovative Dynamism. New York: Oxford University Press, 2019.