Philosopher MacAskill’s “Effective Altruism” Was Neither Effective Nor Altruistic

(p. B1) In short order, the extraordinary collapse of the cryptocurrency exchange FTX has vaporized billions of dollars of customer deposits, prompted investigations by law enforcement and destroyed the fortune and reputation of the company’s founder and chief executive, Sam Bankman-Fried.

It has also dealt a significant blow to the corner of philanthropy known as effective altruism, a philosophy that advocates applying data and evidence to doing the most good for the many and that is deeply tied to Mr. Bankman-Fried, one of its leading proponents and donors. Now nonprofits are scrambling to replace millions in grant commitments from Mr. Bankman-Fried’s charitable vehicles, and members of the effective altruism community are asking themselves whether they might have helped burnish his reputation.

“Sam and FTX had a lot of good will — and some of that good will was the result of association with ideas I have spent my career promoting,” the philosopher William MacAskill, a founder of the effective altruism movement who has known Mr. Bankman-Fried since the FTX founder was an undergraduate at M.I.T., wrote on Twitter on Friday (Nov. 11, 2022). “If that good will laundered fraud, I am ashamed.”

Mr. MacAskill was one of five people from the charitable vehicle known as the FTX Future Fund who jointly announced their resignation on Thursday (Nov. 10, 2022).

. . .

(p. B5) Benjamin Soskis, senior research associate in the Center on Nonprofits and Philanthropy at the Urban Institute, said that the issues raised by Mr. Bankman-Fried’s reversal of fortune acted as a “distorted fun-house mirror of a lot of the problems with contemporary philanthropy,” in which very young donors control increasingly enormous fortunes.

. . .

Mr. Bankman-Fried’s fall from grace may have cost effective-altruist causes billions of dollars in future donations.  . . .

His connection to the movement in fact predates the vast fortune he won and lost in the cryptocurrency field. Over lunch a decade ago while he was still in college, Mr. Bankman-Fried told Mr. MacAskill, the philosopher, that he wanted to work on animal-welfare issues. Mr. MacAskill suggested the young man could do more good earning large sums of money and donating the bulk of it to good causes instead.

. . .

A significant share of the grants went to groups focused on building the effective altruist movement rather than organizations working directly on its causes. Many of those groups had ties to Mr. Bankman-Fried’s own team of advisers. The largest single grant listed on the Future Fund website was $15 million to a group called Longview, which according to its website counts the philosopher Mr. MacAskill and the chief executive of the FTX Foundation, Nick Beckstead, among its own advisers.

The second-largest grant, in the amount of $13.9 million, went to the Center for Effective Altruism. Mr. MacAskill was a founder of the center. Both Mr. Beckstead and Mr. MacAskill are on the group’s board of trustees, with Mr. MacAskill serving as the chair of the United Kingdom board and Mr. Beckstead as the chair of the U.S. subsidiary.

For the full story, see:

Nicholas Kulish. “Collapse of FTX Strikes a Philanthropy Movement.” The New York Times (Monday, November 14, 2022): B1 & B5.

(Note: ellipses, and bracketed dates, added.)

(Note: the online version of the story was updated Nov. 14, 2022, and has the title “FTX’s Collapse Casts a Pall on a Philanthropy Movement.”)

The Most Powerful A.I. Systems Still Do Not Understand, Have No Common Sense, and Cannot Explain Their Decisions

(p. B1) David Ferrucci, who led the team that built IBM’s famed Watson computer, was elated when it beat the best-ever human “Jeopardy!” players in 2011, in a televised triumph for artificial intelligence.

But Dr. Ferrucci understood Watson’s limitations. The system could mine oceans of text, identify word patterns and predict likely answers at lightning speed. Yet the technology had no semblance of understanding, no human-style common sense, no path of reasoning to explain why it reached a decision.

Eleven years later, despite enormous advances, the most powerful A.I. systems still have those limitations.

. . .

(p. B7) The big, so-called deep learning programs have conquered tasks like image and speech recognition, and new versions can even pen speeches, write computer programs and have conversations.

They are also deeply flawed. They can generate biased or toxic screeds against women, minorities and others. Or occasionally stumble on questions that any child could answer. (“Which is heavier, a toaster or a pencil? A pencil is heavier.”)

“The depth of the pattern matching is exceptional, but that’s what it is,” said Kristian Hammond, an A.I. researcher at Northwestern University. “It’s not reasoning.”

Elemental Cognition is trying to address that gap.

. . .

Eventually, Dr. Ferrucci and his team made progress with the technology. In the past few years, they have presented some of their hybrid techniques at conferences and they now have demonstration projects and a couple of initial customers.

. . .

The Elemental Cognition technology is largely an automated system. But that system must be trained. For example, the rules and options for a global airline ticket are spelled out in many pages of documents, which are scanned.

Dr. Ferrucci and his team use machine learning algorithms to convert them into suggested statements in a form a computer can interpret. Those statements can be facts, concepts, rules or relationships: Qantas is an airline, for example. When a person says “go to” a city, that means add a flight to that city. If a traveler adds four more destinations, that adds a certain amount to the cost of the ticket.

In training the round-the-world ticket assistant, an airline expert reviews the computer-generated statements, as a final check. The process eliminates most of the need for hand coding knowledge into a computer, a crippling handicap of the old expert systems.
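
The statement-conversion step described in the preceding paragraphs lends itself to a concrete illustration. Below is a minimal Python sketch of how such computer-interpretable statements (facts like "Qantas is an airline" and rules like "adding a destination adds to the cost") might be encoded. All class names, fields, and fare figures are hypothetical illustrations of the general idea, not Elemental Cognition's actual representation.

from dataclasses import dataclass, field

@dataclass
class Fact:
    """A simple subject-predicate-object statement, e.g. Qantas is_a airline."""
    subject: str
    predicate: str
    obj: str

@dataclass
class Itinerary:
    """Encodes two rules: 'go to' a city adds a flight; extra stops add cost."""
    base_fare: float
    per_extra_stop: float = 150.0        # hypothetical pricing rule
    destinations: list = field(default_factory=list)

    def go_to(self, city: str) -> None:
        # Rule: when a traveler says "go to" a city, add a flight to that city.
        self.destinations.append(city)

    def cost(self) -> float:
        # Rule: each destination beyond the first adds a fixed amount to the fare.
        extra_stops = max(0, len(self.destinations) - 1)
        return self.base_fare + extra_stops * self.per_extra_stop

facts = [Fact("Qantas", "is_a", "airline")]
trip = Itinerary(base_fare=2500.0)       # hypothetical base fare
for city in ["Sydney", "Singapore", "Athens", "London"]:
    trip.go_to(city)
print(trip.cost())                       # 2500 + 3 * 150 = 2950.0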

Dr. Ferrucci concedes that advanced machine learning — the dominant path pursued by the big tech companies and well-funded research centers — may one day overcome its shortcomings. But he is skeptical from an engineering perspective. Those systems, he said, are not made with the goals of transparency and generating rational decisions that can be explained.

“The big question is how do we design the A.I. that we want,” Dr. Ferrucci said. “To do that, I think we need to step out of the machine-learning box.”

For the full story, see:

Steve Lohr. “You Can Lead A.I. to Answers, but Can You Make It Think?” The New York Times (Monday, August 29, 2022): B1 & B7.

(Note: ellipses added.)

(Note: the online version of the story was updated Sept. 8, 2022, and has the title “One Man’s Dream of Fusing A.I. With Common Sense.”)

“Cochrane Reviews Are Often Referred to as Gold Standard Evidence in Medicine”

The credibility of Cochrane reviews matters. One of their most important reviews, which I cite in my in-progress work on clinical trials, suggests that results of randomized double-blind clinical trials usually agree with results of observational studies on the same topic. This matters a lot, because observational studies can give us more actionable results more quickly, saving lives.

(p. A23) Cochrane reviews are often referred to as gold standard evidence in medicine because they aggregate results from many randomized trials to reach an overall conclusion — a great method for evaluating drugs, for example, which often are subjected to rigorous but small trials. Combining their results can lead to more confident conclusions.

. . .

. . . what we learn from the Cochrane review is that, especially before the pandemic, distributing masks didn’t lead people to wear them, which is why their effect on transmission couldn’t be confidently evaluated.
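
The aggregation step that makes Cochrane reviews powerful rests on a standard statistical building block: weighting each trial's effect estimate by its precision. The short Python sketch below shows a fixed-effect, inverse-variance pooling of made-up trial results. It is a minimal illustration of the core idea only; actual Cochrane reviews use more elaborate procedures, such as random-effects models and risk-of-bias assessments.

import math

def pooled_effect(effects, std_errors):
    """Fixed-effect meta-analysis: weight each trial by 1/SE^2 (its precision)."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))   # standard error of pooled estimate
    return pooled, pooled_se

# Three hypothetical small trials: log risk ratios and their standard errors.
trial_effects = [-0.20, -0.05, -0.30]
trial_ses = [0.15, 0.20, 0.25]
estimate, se = pooled_effect(trial_effects, trial_ses)
print(f"pooled log risk ratio: {estimate:.3f} (SE {se:.3f})")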

For the full commentary, see:

Zeynep Tufekci. “In Fact, the Science Is Clear That Masks Work.” The New York Times (Saturday, March 11, 2023): A23.

(Note: ellipses added.)

(Note: the online version of the commentary has the date March 10, 2023, and has the title “Here’s Why the Science Is Clear That Masks Work.”)

“Flowers Never Bend, With the Rainfall”

Sometimes, when I am in a dark mood, I wonder how you keep moving forward when you do not know how much time is left. Some seek an answer in religion. I am more open to a kind of stoicism combined with the other gift of Prometheus: blind hope.

(p. 3) A few months into treatment, I realized that Josh might not make it to the next spring, when we would normally visit my extended family in Greece. I told Dr. Sara that I would like to take my husband to Greece, because he might not get the chance again.

. . .

My diary reminds me that while we were there, I asked Josh what he would do differently in life. “Not get cancer,” he said.

. . .

As for me, I kept hearing the lyrics to a Simon and Garfunkel song in my head: “So, I’ll continue to continue to pretend, my life will never end, and flowers never bend, with the rainfall.” It was my soundtrack.

For the full commentary, see:

Anemona Hartocollis. “My Husband’s Doctor, Onscreen.” The New York Times, SundayStyles Section (Sunday, November 20, 2022): 1-3.

(Note: the online version of the commentary was updated June 20, 2023 [sic], and has the title “Cancer, My Husband’s Doctor, and Catherine Deneuve.”)

Public Sector Unions Make Government Unaccountable to the Will of the People

(p. A17) In 2008, six years after securing control over New York City’s public schools, Mayor Michael Bloomberg and schools chancellor Joel Klein put forward a program to tie teacher tenure to student performance. The goal was to reward the best-performing teachers with job security, encourage better student outcomes, and hold teachers accountable for demonstrated results. To most New York residents, it surely sounded like a good idea.

To New York’s teachers’ unions, however, the program was utterly unacceptable. Union leaders lobbied Albany, threatened state lawmakers (who could pass legislation binding the mayor) with the loss of political support, and walked away with a two-year statewide prohibition on the use of student test performance in tenure evaluations. In short, the union thwarted the mayor’s authority over the city’s schools and commandeered the state’s legislative power.

In this case and many others, a public-sector union served its own interests at the expense of the public’s. In “Not Accountable,” Philip Howard shows in vivid detail how such practices have made government at all levels unmanageable, inefficient and opposed to the common good. He argues that, in fact, public unions—that is, unions whose members work for the government—are forbidden by the Constitution. The argument, he notes, would have been familiar to President Franklin Roosevelt and George Meany, the longtime president of the AFL-CIO, both of whom championed private-sector labor but believed that public workers—teachers, fire fighters, policemen, civil-service employees—had no right to bargain collectively with the government.

. . .

Mr. Howard makes a persuasive case, but the chances of seeing it affect American political life are, at the moment, remote.

. . .

Still, the goal is admirable and worth pursuing. In place of public-sector collective bargaining, Mr. Howard calls for a merit-based system for hiring and evaluating government employees. Instead of stultifying work rules that thwart creativity, he envisions a public-sector structure in which employees can use their talents and judgment to improve the functioning of government. Fundamentally, Mr. Howard views the Constitution, and the law generally, as a mechanism for both action and accountability, one that entrusts powers to inevitably fallible human beings while subjecting them to the checks of others in authority and, ultimately, to the will of the people.

For the full review, see:

John Ketcham. “BOOKSHELF; Unelected Legislators.” The Wall Street Journal (Tuesday, March 14, 2023): A17.

(Note: ellipses added.)

(Note: the online version of the review was updated March 13, 2023, and has the title “BOOKSHELF; ‘Not Accountable’ Review: Unelected Legislators.”)

The book under review is:

Howard, Philip K. Not Accountable: Rethinking the Constitutionality of Public Employee Unions. Garden City, NY: Rodin Books, 2023.

Tim Cook’s Apple Is Silent on Communist China’s Suppression of Human Rights

(p. A19) Apple CEO Tim Cook has been taking a beating over his company’s coziness with Beijing. It comes amid protests across China against the government’s strict Covid-19 lockdowns, including at a factory in Zhengzhou where most of the world’s iPhones are made. Hillary Vaughn of Fox News perfectly captured Mr. Cook’s embarrassment on Capitol Hill Thursday [Dec. 1, 2022] when she peppered him with questions:

“Do you support the Chinese people’s right to protest? Do you have any reaction to the factory workers that were beaten and detained for protesting Covid lockdowns? Do you regret restricting AirDrop access that protesters used to evade surveillance from the Chinese government? Do you think it’s problematic to do business with the Communist Chinese Party when they suppress human rights?”

A stone-faced Mr. Cook responded with silence.

. . .

CEOs can always justify their operations by pointing to the economic benefits their companies bring to the communities in which they operate. Or CEOs can go the progressive route, presenting their companies as moral paragons. But they can’t have it both ways: holding themselves up as courageous in places where the risk from speaking out is low while keeping quiet about real oppression in places where speaking out can really hurt the bottom line.

For the full commentary, see:

William McGurn. “MAIN STREET; Tim Cook’s Bad Day on China.” The Wall Street Journal (Tuesday, Dec. 6, 2022): A19.

(Note: ellipsis, and bracketed date, added.)

(Note: the online version of the commentary has the date December 5, 2022, and has the same title as the print version.)

“Effective Altruism” Is Woke, Sanctimonious, Fraudulent, and Ineffective

(p. A1) Sam Bankman-Fried said he wanted to prevent nuclear war and stop future pandemics. And he publicly pledged to use his vast and growing wealth to do so.

But the collapse of Mr. Bankman-Fried’s firm, FTX, and the revelations that he mixed FTX’s money with that of its customers, have upended those declared lofty philanthropic goals.

Run by self-described idealists spending the wealth of their billionaire patron to make the world a better place, Mr. Bankman-Fried’s FTX Foundation and its flagship Future Fund touted deep pockets, ambitious goals and fast turnarounds.

Now Mr. Bankman-Fried’s fortune has disappeared, and the self-described philosopher-executives running the organizations have resigned. Grant recipients are scrambling for cash to plug the shortfall and fretting about the provenance of FTX’s largess after the company’s lawyers said this week that a “substantial amount” of assets were missing and possibly stolen.

. . .

(p. A6) Mr. Bankman-Fried often claimed philanthropy was his primary motivation for amassing a fortune. “It’s the thing that matters the most in the end,” he said in an April interview on the “80,000 Hours” podcast.

Mr. Bankman-Fried has said his law-professor parents instilled in him an interest in utilitarianism, the philosophy of trying to do the greatest good for the greatest number of people.

. . .

Will MacAskill, then a philosophy graduate student, pitched Mr. Bankman-Fried on the idea of effective altruism, a way of applying some utilitarian ideas to charitable giving.

. . .

Mr. Bankman-Fried had considered different career paths, he said in the “80,000 Hours” interview, but Mr. MacAskill suggested he could do the most good by making a lot of money and giving it away, a popular idea in the community.

. . .

Future Fund pledged hundreds of grants worth more than $160 million by September [2022], according to its website.  . . .

Its two largest public grants, of $15 million and $13.9 million, were awarded to effective altruism groups where Mr. MacAskill held roles. Mr. MacAskill, now a professor at Oxford University, wasn’t paid for his involvement in those organizations “other than expenses,” a spokeswoman for one of them said.

. . .

Mr. MacAskill distanced himself from FTX as it was crumbling. In a string of tweets, he accused Mr. Bankman-Fried of personal betrayal and abandoning the principles of effective altruism. He was also one of the Future Fund staffers who quit.

Last week, Mr. Bankman-Fried exchanged messages with a writer at Vox, a news organization that Building A Stronger Future had also pledged to fund.

“You were really good at talking about ethics,” she said.

“I had to be,” Mr. Bankman-Fried responded. He went on to explain it as “this dumb game we woke westerners play where we say all the right shiboleths [sic.] and so everyone likes us.”

For the full story, see:

Rachel Louise Ensign and Ben Cohen. “FTX’s Collapse Wiped Out Founder’s Philanthropic Aims.” The Wall Street Journal (Friday, Nov. 25, 2022): A1 & A6.

(Note: ellipses, and bracketed year, added. The bracketed [sic.] is in the original.)

(Note: the online version of the story has the date November 24, 2022, and has the title “Sam Bankman-Fried Said He Would Give Away Billions. Broken Promises Are All That’s Left.”)

In His Bathysphere Beebe “Maintained a Sense of Childlike Optimism”

(p. 28) Beautifully written and beautifully made, “The Bathysphere Book” is a piece of poetic nonfiction that strives to conjure up the crushing blackness of the midnight zone. Full color, overflowing with stunning illustrations of the uncanny creatures that live beyond the sun, it raises questions of exploration and wonder, of nature and humanity, and lets readers find answers on their own.

. . .

As he slipped deeper and deeper beneath the waves, Beebe bore witness to “a black so black it called his very existence into question,” and saw creatures that could be recorded only by describing them to Else Bostelmann, a painter who worked like a police sketch artist to render animals she would never see in colors like “bittersweet orange, metallic opaline green, orange rufous and orange chrome.”

. . .

. . . he maintained a sense of childlike optimism that pervades the book, cutting through the limitless cold of the sea: “Having traveled the world from the depths of the sea to the highest mountains, tramped through jungles and flown across continents, Beebe was more and more adamant that wonder was not produced by swashbuckling adventures — it was a way of seeing, an attitude toward experience that was always available. At every turn, the world’s marvels were right before our eyes.”

For the full review, see:

W. M. Akers. “Under the Sea.” The New York Times Book Review (Sunday, June 4, 2023): 28-29.

(Note: ellipses added.)

(Note: the online version of the review was updated May 31, 2023, and has the title “Deep-Sea Creatures of Bittersweet Orange and Metallic Opaline Green.”)

The book under review is:

Fox, Brad. The Bathysphere Book: Effects of the Luminous Ocean Depths. New York: Astra Publishing House, 2023.

The “Woke-Mind” Is “Anti-Science, Anti-Merit and Anti-Human”

(p. 9) At various moments in “Elon Musk,” Walter Isaacson’s new biography of the world’s richest person, the author tries to make sense of the billionaire entrepreneur he has shadowed for two years — sitting in on meetings, getting a peek at emails and texts, engaging in “scores of interviews and late-night conversations.” Musk is a mercurial “man-child,” Isaacson writes, who was bullied relentlessly as a kid in South Africa until he grew big enough to beat up his bullies. Musk talks about having Asperger’s, which makes him “bad at picking up social cues.”

. . .

At one point, Isaacson asks why Musk is so offended by anything he deems politically correct, and Musk, as usual, has to dial it up to 11. “Unless the woke-mind virus, which is fundamentally anti-science, anti-merit and anti-human in general, is stopped,” he declares, “civilization will never become multiplanetary.”

. . .

The musician Grimes, the mother of three of Musk’s children (. . .), calls his roiling anger “demon mode” — a mind-set that “causes a lot of chaos.” She also insists that it allows him to get stuff done.

. . .

He is mostly preoccupied with his businesses, where he expects his staff to abide by “the algorithm,” his workplace creed, which commands them to “question every requirement” from a department, including “the legal department” and “the safety department”; and to “delete any part or process” they can. “Comradery is dangerous,” is one of the corollaries. So is this: “The only rules are the ones dictated by the laws of physics. Everything else is a recommendation.”

Still, Musk has accrued enough power to dictate his own rules. In one of the book’s biggest scoops, Isaacson describes Musk secretly instructing his engineers to “turn off” Starlink satellite internet coverage to prevent Ukraine from launching a surprise drone attack on Russian forces in Crimea. (Isaacson has since posted on X that contrary to what he writes in the book, Musk didn’t shut down coverage but denied a request to extend the network’s range.)

. . .

Isaacson believes that Musk wanted to buy Twitter because he had been so bullied as a kid and “now he could own the playground.”  . . .  Owning a playground won’t stop you from getting bullied.

For the full review, see:

Jennifer Szalai. “Self-Driving Czar.” The New York Times Book Review (Sunday, September 24, 2023): 9.

(Note: ellipses added.)

(Note: the online version of the review was updated Sept. 11, 2023, and has the title “Elon Musk Wants to Save Humanity. The Only Problem: People.”)

The book under review is:

Isaacson, Walter. Elon Musk. New York: Simon & Schuster, 2023.

Okinawans Think Ikigai (a Reason for Living) Is Important for Long Life

(p. A11) Ask most people if they want to live to be 100 and the response is likely to be “Sure!” followed by “Wait a sec . . .” Questions suddenly abound: Am I going to be healthy? Am I going to be lonely? Will I be financially stable? Will I have outlived everyone I knew and loved? What author-researcher Dan Buettner set out to demonstrate in “Live to 100: Secrets of the Blue Zones” is that the solutions to those concerns are also the keys to longevity itself.

. . .

What is clear early on is that what Mr. Buettner “discovers” during his visits to Sardinia; Singapore; Okinawa, Japan; Ikaria, Greece; and even Loma Linda, Calif., is largely what we would expect: that much of what helps people live longer isn’t necessarily the purple Japanese sweet potatoes, or going to church every day, or having the limited stress load of a Greek shepherd. It is an Okinawan diet rich in nutrients and fiber, the walking uphill to the Sardinian church, and the community to which one belongs in Loma Linda when one is, for instance, a Seventh Day Adventist who plays pickleball.

. . .

There are many correlating clues to a longer life across the locations in “Live to 100.” Okinawans emphasize the importance of having an ikigai, or reason for living; in Costa Rica the same thing is called one’s plan de vida.

For the full television review, see:

John Anderson. “Netflix’s Lessons in Longevity.” The Wall Street Journal (Wednesday, Aug. 30, 2023): A11.

(Note: ellipses added.)

(Note: the online version of the television review has the date August 29, 2023, and has the title “‘Live to 100: Secrets of the Blue Zones’ Review: Lessons in Longevity.” In the original the word ikigai and the phrase plan de vida are in italics.)

Buettner’s latest book on blue zones is:

Buettner, Dan. The Blue Zones Secrets for Living Longer: Lessons from the Healthiest Places on Earth. Washington, D.C.: National Geographic, 2023.

Improved AI Models Do Worse at Identifying Prime Numbers

(p. A2) . . . new research released this week reveals a fundamental challenge of developing artificial intelligence: ChatGPT has become worse at performing certain basic math operations.

The researchers at Stanford University and the University of California, Berkeley said the deterioration is an example of a phenomenon known to AI developers as drift, where attempts to improve one part of the enormously complex AI models make other parts of the models perform worse.

“Changing it in one direction can worsen it in other directions,” said James Zou, a Stanford professor who is affiliated with the school’s AI lab and is one of the authors of the new research. “It makes it very challenging to consistently improve.”

. . .

The goal of the team of researchers, consisting of Lingjiao Chen, a computer-science Ph.D. student at Stanford, along with Zou and Berkeley’s Matei Zaharia, is to systematically and repeatedly see how the models perform over time at a range of tasks.

Thus far, they have tested two versions of ChatGPT: version 3.5, available free online to anyone, and version 4.0, available via a premium subscription.

The results aren’t entirely promising. They gave the chatbot a basic task: identify whether a particular number is a prime number. This is the sort of math problem that is complicated for people but simple for computers.

Is 17,077 prime? Is 17,947 prime? Unless you are a savant you can’t work this out in your head, but it is easy for computers to evaluate. A computer can just brute force the problem—try dividing by two, three, five, etc., and see if anything works.
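
To make the brute-force idea concrete, here is a minimal trial-division primality check in Python. Testing divisors only up to the square root of n suffices, because any factor larger than the square root pairs with one smaller than it.

def is_prime(n: int) -> bool:
    """Trial division: try dividing by 2, then by odd numbers up to sqrt(n)."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

# The article's two examples: 17,077 is prime; 17,947 = 131 * 137 is not.
print(is_prime(17077), is_prime(17947))   # True False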

To track performance, the researchers fed ChatGPT 1,000 different numbers. In March, the premium GPT-4 correctly identified whether 84% of the numbers were prime or not. (Pretty mediocre performance for a computer, frankly.) By June its success rate had dropped to 51%.

. . .

The phenomenon of unpredictable drift is known to researchers who study machine learning and AI, Zou said. “We had the suspicion it could happen here, but we were very surprised at how fast the drift is happening.”

For the full commentary, see:

Josh Zumbrun. “THE NUMBERS; AI Surprise: It’s Unlearning Basic Math.” The Wall Street Journal (Saturday, Aug. 5, 2023): A2.

(Note: ellipses added.)

(Note: the online version of the commentary has the date August 4, 2023, and has the title “THE NUMBERS; Why ChatGPT Is Getting Dumber at Basic Math.”)