Category: Economics
Large Randomized Controlled Trial Finds Little Benefit in Free Money to Poor, Undermining Case for Universal Basic Income (UBI)
A variety of arguments have been made in support of a Universal Basic Income (UBI). I am most interested in the argument that technology will destroy the jobs of the worst off, and so, for them to survive, society would be justified in giving them a basic income. I do not believe that in a free society technological progress will on balance destroy the jobs of the worst off. If innovative entrepreneurs are free to innovate, especially in labor markets, they will find ways to employ the worst off.
Others have argued that giving a basic income to the worst off will make them better parents, measurable as better child outcomes in language skills, behavior, and cognition. Several years ago these advocates set up a big, expensive randomized controlled trial to test their argument. The results? None of their hypotheses were supported. The passages quoted below are from a front-page New York Times article in which they express their surprise, and for some, their incredulity.
(p. A1) If the government wants poor children to thrive, it should give their parents money. That simple idea has propelled an avid movement to send low-income families regular payments with no strings attached.
Significant but indirect evidence has suggested that unconditional cash aid would help children flourish. But now a rigorous experiment, in a more direct test, found that years of monthly payments did nothing to boost children’s well-being, a result that defied researchers’ predictions and could weaken the case for income guarantees.
After four years of payments, children whose parents received $333 a month from the experiment fared no better than similar children without that help, the study found. They were no more likely to develop language skills, avoid behavioral problems or developmental delays, demonstrate executive function or exhibit brain activity associated with cognitive development.
“I was very surprised — we were all very surprised,” said Greg J. Duncan, an economist at the University of California, Irvine and one of six researchers who led the study, called Baby’s First Years. “The money did not (p. A15) make a difference.”
The findings could weaken the case for turning the child tax credit into an income guarantee, as the Democrats did briefly four years ago in a pandemic-era effort to fight child poverty.
. . .
Though an earlier paper showed promising activity on a related neurological measure in the high-cash infants, that trend did not endure. The new study detected “some evidence” of other differences in neurological activity between the two groups of children, but its significance was unclear.
While researchers publicized the earlier, more promising results, the follow-up study was released quietly and has received little attention. Several co-authors declined to comment on the results, saying that it was unclear why the payments had no effect and that the pattern could change as the children age.
For the full story see:
(Note: ellipsis added.)
(Note: the online version of the story has the date July 28, 2025, and has the title “Study May Undercut Idea That Cash Payments to Poor Families Help Child Development.”)
The academic presentation of the research discussed above can be found in:
AI Cannot Know What People Think “At the Very Edge of Their Experience”
The passages quoted below mention “the advent of generative A.I.” From previous reading, I had the impression that “generative A.I.” meant A.I. that had reached human-level cognition. But when I looked up the meaning of the phrase, I found that it means A.I. that can generate new content. Then I smiled. I was at Wabash College as an undergraduate from 1971 to 1974 (I graduated in three years). Sometime during those years, Wabash acquired its first minicomputer, and I took a course in BASIC computer programming. I distinctly remember programming a template for a brief poem in which, at key locations, I inserted a random word variable. Wherever the random word variable occurred, the program randomly selected one of a number of rhyming words. So each time the program was run, a new rhyming poem would be “generated.” That was new content, and sometimes it was even amusing. But it wasn’t any good, and it did not have deep meaning, and if what it generated was true, it was only by accident. So I guess “the advent of generative A.I.” goes back at least to the early 1970s, when Art Diamond messed around with a DEC.
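For the curious, the whole trick can be sketched in a few lines. Below is a minimal modern rendering of the idea in Python rather than BASIC; the couplet template and the rhyming-word list are invented for illustration, not the ones I actually used.

```python
import random

# A fixed couplet template whose line endings are filled with
# randomly chosen rhyming words. The word list and template are
# hypothetical stand-ins, not those from my 1970s program.
RHYMES = ["bright", "light", "night", "sight", "kite"]

TEMPLATE = "The moon above the hill hangs {w1},\nA lantern burning in the {w2}."

def generate_poem() -> str:
    # Each run picks new words, so each run "generates" new content.
    return TEMPLATE.format(
        w1=random.choice(RHYMES),
        w2=random.choice(RHYMES),
    )

if __name__ == "__main__":
    print(generate_poem())
```

As then, the output rhymes by construction, but any meaning it carries is accidental.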
This is not the main point of the passages quoted below. The main point is that the frontiers of human thought are not on the internet, and so cannot be part of the training of A.I. So whatever A.I. can do, it can’t think at the human “edge.”
(p. B3) Dan Shipper, the founder of the media start-up Every, says he gets asked a lot whether he thinks robots will replace writers. He swears they won’t, at least not at his company.
. . .
Mr. Shipper argues that the advent of generative A.I. is merely the latest step in a centuries-long technological march that has brought writers closer to their own ideas. Along the way, most typesetters and scriveners have been erased. But the part of writing that most requires humans remains intact: a perspective and taste, and A.I. can help form both even though it doesn’t have either on its own, he said.
“One example of a thing that journalists do that language models cannot is come and have this conversation with me,” Mr. Shipper said. “You’re going out and talking to people every day at the very edge of their experience. That’s always changing. And language models just don’t have access to that, because it’s not on the internet.”
For the full story see:
(Note: ellipsis added.)
(Note: the online version of the story has the date May 21, 2025, and has the title “Will Writing Survive A.I.? This Media Company Is Betting on It.”)
If AI Takes Some Jobs, New Human Jobs Will Be Created
In the passage quoted below, Atkinson makes a sound general case for optimism about the effects of AI on the labor market. I would add to that case that many are currently overestimating the potential cognitive effectiveness of AI. Humans have a vast reservoir of unarticulated common-sense knowledge that is not accessible to AI training. In addition, AI cannot innovate at the frontiers of knowledge, which are not yet posted to the internet.
(p. A15) AI doomsayers frequently succumb to what economists call the “lump of labor” fallacy: the idea that there is a limited amount of work to be done, and if a job is eliminated, it’s gone for good. This fails to account for second-order effects, whereby the saving from increased productivity is recycled back into the economy in the form of higher wages, higher profits and reduced prices. This creates new demand that in turn creates new jobs. Some of these are entirely new occupations, such as “content creator assistant,” but others are existing jobs that are in higher demand now that people have more money to spend—for example, personal trainers.
Suppose an insurance firm uses AI to handle many of the customer-service functions that humans used to perform. Assume the technology allows the firm to do the same amount of work with 50% less labor. Some workers would lose their jobs, but lower labor costs would decrease insurance premiums. Customers would then be able to spend less money on insurance and more on other things, such as vacations, restaurants or gym memberships.
In other words, the savings don’t get stuffed under a mattress; they get spent, thereby creating more jobs.
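The arithmetic of Atkinson’s insurance example can be made concrete with a toy calculation. Every number below is hypothetical; the point is only to trace the savings being recycled into new demand.

```python
# Toy illustration of Atkinson's insurance example. All numbers
# are hypothetical; the point is that the savings get recycled.

labor_cost = 10_000_000        # firm's annual customer-service labor cost
savings = labor_cost * 0.50    # AI does the same work with 50% less labor

# Suppose competition forces the savings into lower premiums, and
# customers spend the freed-up dollars on, say, gym memberships.
membership_price = 500         # hypothetical annual gym membership
memberships_per_trainer = 50   # hypothetical memberships supporting one trainer

new_memberships = savings / membership_price
trainer_jobs = new_memberships / memberships_per_trainer

print(f"Savings passed on to customers: ${savings:,.0f}")
print(f"New gym memberships funded:     {new_memberships:,.0f}")
print(f"Personal-trainer jobs created:  {trainer_jobs:,.0f}")
```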
For the full commentary, see:
(Note: the online version of the commentary has the date June 5, 2025, and has the same title as the print version.)
Substrate Startup Develops Less Complex and Cheaper Way to Etch Computer Chips
To prepare for a workshop next week I have been reading a lot about Stuart Kauffman and Roger Koppl’s theory of the adjacent possible (TAP), as it is applied to the growth of technology. One of the implications of TAP is that new technology gets progressively more complex, in the sense of using an ever larger number of components. I think that is often true, but I can think of a couple of counter-examples. So I was interested to read yesterday that the production of computer chips may provide another counter-example.
(p. B1) In March [2025], James Proud, an unassuming British-born American without a college degree, sat in Vice President JD Vance’s office and explained how his Silicon Valley start-up, Substrate, had developed an alternative manufacturing process for semiconductors, one of the most fundamental and difficult challenges in tech.
For the past decade, semiconductors have been manufactured by a school-bus-size machine that uses light to etch patterns onto silicon wafers inside sterile, $25 billion factories. The machine, from the Dutch company ASML, is so critical to the chips in smartphones, A.I. systems and weaponry that Washington has effectively blocked sales of it to China.
But Mr. Proud said his company, which has received more than $100 million from investors, had developed a solution that would cut the manufacturing cost in half by channeling light from a giant instrument known as a particle accelerator through a tool the size of a car. The technique had allowed Substrate to print a high-resolution microchip layer comparable to images produced by the world’s leading semiconductor plants.
. . .
(p. B4) Mr. Proud moved to San Francisco from London in 2011 as a member of the first Thiel Fellowship class, a college alternative for aspiring founders created by Peter Thiel, the venture capitalist.
. . .
After the Trump administration persuaded TSMC to build a plant in Arizona, Mr. Proud decided to build his own company. He and his brother Oliver, 25, started reading books and academic papers on semiconductor lithography. They questioned why the process had become so complex and expensive.
One of the major costs in modern lithography machines, which have more than 100,000 parts, is how they use high-powered lasers to turn droplets of molten tin into a burst of extreme ultraviolet light. The machines use the light to etch a wafer of silicon in a process known as EUV lithography.
. . .
The team spent much of 2023 building a custom lithography tool. It featured thousands of parts and was small enough to fit in the back of a U-Haul. They tested it in computer simulations.
In early 2024, Substrate reserved a Bay Area particle accelerator for a make-or-break test. The company ran into problems when vibrations near the particle accelerator caused the tool to gyrate and blur the image, Mr. Proud said.
A frantic, daylong search found that the air-conditioning system was causing the vibration. Substrate adjusted the fan speed until the process printed “very beautiful and tiny things repeatedly” on a silicon wafer, Mr. Proud said.
For the full story see:
(Note: ellipses, and bracketed year, added.)
(Note: the online version of the story has the date Oct. 28, 2025, and has the title “Can a Start-Up Make Computer Chips Cheaper Than the Industry’s Giants?”)
Trump’s Budget Director Is Competently Dedicated to Dismantling the Deep State
Before the 2024 Presidential election I quoted an op-ed piece by Walter Block and another by Thomas Sowell in which Block argued, and Sowell implied, that given the choice between Donald Trump and Kamala Harris, the better choice was Trump. I still agree with their op-eds.
A related but different issue is whether, on balance, Trump’s policies will hurt or help the economy. His tariffs, industrial policy, and crony deals will hurt. His deregulation and downsizing of government will help. I hope, but do not know, that the helps will help more than the hurts hurt.
The New York Times ran a long front-page article on Trump’s Budget Director Russell T. Vought that bolsters my hope. I quote from that article below. Vought is serious, competent, and dedicated to “a much smaller bureaucracy.” When he nominated Vought, Trump wrote, “Russ knows exactly how to dismantle the Deep State and end weaponized government.”
But a Vought failure would not prove Block and Sowell wrong. Even if Trump does more to harm the economy than to help it, he still will not match the harm that would have been done by Harris.
(p. A1) Russell T. Vought, the White House budget director, was preparing the Trump administration’s 2026 budget proposal this spring when his staff got some surprising news: Elon Musk’s cost-cutting team was unilaterally axing items that Mr. Vought had intended to keep.
Mr. Vought, a numbers wonk who rarely raises his voice, could barely contain his frustration, telling colleagues that he felt sidelined and undermined by the haphazard chaos of the Musk-led Department of Government Efficiency, according to six people with knowledge of his comments who, like others interviewed for this article, spoke on the condition of anonymity for fear of retribution.
. . .
Mr. Vought, who also directed the White House Office of Management and Budget in President Trump’s first term, had spent four years in exile from power. He worked through Joseph R. Biden Jr.’s presidency from an old rowhouse near the Capitol, where he complained of pigeons infesting his ceiling and coordinated with other Trump loyalists to draw up sweeping, detailed plans for a comeback.
He had carefully analyzed mistakes from the first term. And he had laid out steps to achieve the long-sought conservative goal of a president with dramatically expanded authority over the executive branch, including the power (p. A14) to cut off spending, fire employees, control independent agencies and deregulate the economy.
. . .
He works long hours and weekends in his suite in the Eisenhower Executive Office Building next to the White House, where he oversees a staff of more than 500.
On the wall is a photo of his favorite president, Calvin Coolidge, the farm boy and small-town mayor historians say most purely embodied the conservative principles of small government and fiscal austerity.
. . .
“Russ knows exactly how to dismantle the Deep State and end weaponized government,” Mr. Trump wrote in a statement when nominating Mr. Vought.
. . .
Rob Fairweather, who spent 42 years at the Office of Management and Budget and wrote a book about how it operates, said there is reason for Mr. Vought to have confidence in a legal victory.
“What he’s doing is radical, but it’s well thought out,” Mr. Fairweather said. “He’s had all these years to plan. He’s looked clearly at the authorities and boundaries that are there, and is pushing past them on the assumption that at least some of it will hold up in the courts.”
Mr. Vought is already looking forward to that outcome, declaring on Glenn Beck’s show this spring: “We will have a much smaller bureaucracy as a result of it.”
For the full story see:
(Note: ellipses added.)
(Note: the online version of the story was updated Oct. 3, 2025, and has the title “The Man Behind Trump’s Push for an All-Powerful Presidency.”)
The Review of Austrian Economics Publishes Diamond’s Review of Creative Destruction
The Review of Austrian Economics published my review of Dalton and Logan’s Creative Destruction book on Sept. 17. It can be viewed, but not printed or saved, at: https://rdcu.be/eIMJN
National Academy of Sciences Paper Warns Scientific “Fraud Is Growing Exponentially”
In previous blog entries I have cited evidence that top medical scientists have committed fraud in the areas of Alzheimer’s and cancer research. The research discussed in the passages quoted below reports a related but broader problem. In those earlier accounts the fraud consisted mainly of doctored data and images; the new evidence reveals that paper mills are increasingly cranking out wholly fabricated text as well.
The journals accepting these papers are presumably mainly lower-level, less-cited journals, so this fraud may be less damaging to the ongoing progress of science than the more sophisticated fraud carried out by top scientists and published in top journals. This argument assumes that scientists build on work published in the top journals. A problem with the argument is that truly pathbreaking innovations are often at first rejected by “top” journals and only accepted by “lower”-level journals. (For instance, Hans Krebs’s paper on what is now known as the “Krebs cycle,” which must be memorized by all aspiring doctors, was rejected by the prestigious Nature and published by the much less prestigious Enzymologia (Lane 2022, p. 55).)
The newly revealed fraud reduces even further the credibility of those on the left who order ordinary citizens to “follow the science” rather than follow their own eyes and their own judgment.
(BTW, Dr. Elisabeth Bik, who is quoted in a couple of the passages below, is also a prominent source in Charles Piller’s Doctored, which documented widespread high-level fraud in the Alzheimer’s research community.)
(p. D1) For years, whistle-blowers have warned that fake results are sneaking into the scientific literature at an increasing pace. A new statistical analysis backs up the concern.
A team of researchers found evidence of shady organizations churning out fake or low-quality studies on an industrial scale. And their output is rising fast, threatening the integrity of many fields.
“If these trends are not stopped, science is going to be destroyed,” said Luís A. Nunes Amaral, a data scientist at Northwestern University and an author of the study, which was published in the Proceedings of the National Academy of Sciences on Monday [Aug. 4, 2025].
. . .
“Science relies on trusting what others did, so you do not have to repeat everything,” Dr. Amaral said.
By the 2010s, journal editors and watchdog organizations were warning that this trust was under threat. They flagged a growing number of papers with fabricated data and doctored images. In the years that followed, the factors driving this increase grew more intense.
As more graduate students were trained in labs, the competition for a limited number of research jobs sharpened. High-profile papers became essential for success, not just for landing a job, but also for getting promotions and grants.
Academic publishers have responded to the demand by opening thousands of new scientific journals every year. “All of the incentives are for publishers to publish more and more,” said Dr. Ivan Oransky, the executive director of the Center for Scientific Integrity.
. . .
(p. D3) Elisabeth Bik, a California-based expert on scientific fraud who was not involved in the study, said that it confirmed her early suspicions. “It’s fantastic to see all the work we’ve done now solidified into a much higher-level analysis,” she said.
Dr. Amaral and his colleagues warn that fraud is growing exponentially. In their new study, they calculated that the number of suspicious new papers appearing each year was doubling every 1.5 years. That’s far faster than the increase of scientific papers overall, which is doubling every 15 years.
. . .
In an executive order in May on “gold-standard science,” President Trump drew attention to the problem of scientific fraud. “The falsification of data by leading researchers has led to high-profile retractions of federally funded research,” the order stated.
. . .
Dr. Bik proposed that scientific publishers dedicate more of their profits to monitoring manuscripts for fraud, similar to how credit card companies check for suspicious purchases.
. . .
Dr. Oransky said that the way scientists are rewarded for their work would have to change as well. “To paraphrase James Carville, it’s the incentives, stupid,” he said. “We need to stop making it profitable to game the system.”
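The two doubling times quoted above (1.5 years for suspicious papers, 15 years for papers overall) imply starkly different growth rates. A quick back-of-the-envelope calculation, mine rather than the study’s, shows how fast the suspicious share of the literature would rise if both trends continued.

```python
import math

# Doubling every T years means an annual growth factor of 2**(1/T).
suspicious_doubling = 1.5    # years, for suspicious papers (from the study)
overall_doubling = 15.0      # years, for all papers (from the study)

suspicious_rate = 2 ** (1 / suspicious_doubling)   # about 1.59x per year
overall_rate = 2 ** (1 / overall_doubling)         # about 1.05x per year

print(f"Suspicious papers grow about {(suspicious_rate - 1) * 100:.0f}% per year")
print(f"All papers grow about {(overall_rate - 1) * 100:.1f}% per year")

# At those rates, the suspicious *share* of the literature would
# rise tenfold roughly every:
years = math.log(10) / (math.log(suspicious_rate) - math.log(overall_rate))
print(f"Tenfold rise in the suspicious share every {years:.1f} years")
```

The answer comes to about five and a half years, which is why Dr. Amaral’s warning is so stark.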
For the full story see:
(Note: ellipses, and bracketed date and year, added.)
(Note: the online version of the story has the date Aug. 4, 2025, and has the title “Fraudulent Scientific Papers Are Rapidly Increasing, Study Finds.” Where there was a minor difference in the wording between the online and print versions, the passages quoted above follow the online version.)
The academic paper documenting the substantial increase in scientific fraud is:
Nick Lane’s book, cited in my introductory comments, is:
Lane, Nick. Transformer: The Deep Chemistry of Life and Death. New York: W. W. Norton & Company, 2022.
Was Schumpeter Mean to Hayek?
I have sometimes been surprised by the level of hostility of some Austrian economists toward Joseph Schumpeter. I once asked a distinguished Austrian economist why there is so much hostility. His answer: ‘Schumpeter was mean to Hayek.’ Of course, Schumpeter and F.A. Hayek disagreed on some issues of method and theory, but other Austrians, such as Murray Rothbard and Hayek, also disagreed with each other. I have read a few biographies of Schumpeter and have never read that Schumpeter was ever personally mean to Hayek. To the contrary, when I spent a day in the Schumpeter archives at Harvard, I ran across a carbon copy of a letter that Schumpeter wrote to Stephen P. Duggan, co-founder and president of the Institute of International Education. Schumpeter wrote that Hayek wanted to give a lecture tour of the United States in March and April and asked if Duggan “would undertake the management of the trip.” Schumpeter wrote that “very many economists in this country would like an exchange of ideas with so outstanding a man.” (The letter was dated “January 16,” with a typo in the year, but with a jotted correction indicating, I think, “1940”; Hayek did visit the United States in 1940.)
Skimming Schumpeter’s letters in the archive leaves the impression that Schumpeter was almost always gracious to everybody almost all of the time, Hayek included.
David Henderson Gives Trump Credit Where Credit Is Due, on Deregulation
Latest “So-Called Reasoning Systems” Hallucinate MORE Than Earlier A.I. Systems
Since the more sophisticated “reasoning” A.I. systems are increasingly inaccurate on the facts, it is unlikely that such systems will threaten any job where performance depends on getting the facts right. Wouldn’t that include most jobs? The article quoted below suggests it would most clearly include jobs working with “court documents, medical information or sensitive business data.”
(p. B1) The newest and most powerful technologies — so-called reasoning systems from companies like OpenAI, Google and the Chinese start-up DeepSeek — are generating more errors, not fewer. As their math skills have notably improved, their handle on facts has gotten shakier. It is not entirely clear why.
Today’s A.I. bots are based on complex mathematical systems that learn their skills by analyzing enormous amounts of digital data. They do not — and cannot — decide what (p. B6) is true and what is false. Sometimes, they just make stuff up, a phenomenon some A.I. researchers call hallucinations. On one test, the hallucination rates of newer A.I. systems were as high as 79 percent.
. . .
The A.I. bots tied to search engines like Google and Bing sometimes generate search results that are laughably wrong. If you ask them for a good marathon on the West Coast, they might suggest a race in Philadelphia. If they tell you the number of households in Illinois, they might cite a source that does not include that information.
Those hallucinations may not be a big problem for many people, but it is a serious issue for anyone using the technology with court documents, medical information or sensitive business data.
“You spend a lot of time trying to figure out which responses are factual and which aren’t,” said Pratik Verma, co-founder and chief executive of Okahu, a company that helps businesses navigate the hallucination problem. “Not dealing with these errors properly basically eliminates the value of A.I. systems, which are supposed to automate tasks for you.”
. . .
For more than two years, companies like OpenAI and Google steadily improved their A.I. systems and reduced the frequency of these errors. But with the use of new reasoning systems, errors are rising. The latest OpenAI systems hallucinate at a higher rate than the company’s previous system, according to the company’s own tests.
The company found that o3 — its most powerful system — hallucinated 33 percent of the time when running its PersonQA benchmark test, which involves answering questions about public figures. That is more than twice the hallucination rate of OpenAI’s previous reasoning system, called o1. The new o4-mini hallucinated at an even higher rate: 48 percent.
When running another test called SimpleQA, which asks more general questions, the hallucination rates for o3 and o4-mini were 51 percent and 79 percent. The previous system, o1, hallucinated 44 percent of the time.
. . .
For years, companies like OpenAI relied on a simple concept: The more internet data they fed into their A.I. systems, the better those systems would perform. But they used up just about all the English text on the internet, which meant they needed a new way of improving their chatbots.
So these companies are leaning more heavily on a technique that scientists call reinforcement learning. With this process, a system can learn behavior through trial and error. It is working well in certain areas, like math and computer programming. But it is falling short in other areas.
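For readers wondering what “reinforcement learning” means, here is a minimal sketch of the trial-and-error idea: a toy two-armed bandit that learns which action pays off more. It illustrates only the concept, not how OpenAI or Google actually train their systems; every number in it is invented.

```python
import random

# Trial-and-error learning in miniature: an epsilon-greedy "bandit"
# that discovers which of two actions pays off more often.
TRUE_REWARD = {"A": 0.3, "B": 0.7}   # hidden payoff probabilities (hypothetical)
estimates = {"A": 0.0, "B": 0.0}     # the learner's running estimates
counts = {"A": 0, "B": 0}
EPSILON = 0.1                        # how often to explore at random

for _ in range(10_000):
    if random.random() < EPSILON:                  # explore: try something random
        action = random.choice(["A", "B"])
    else:                                          # exploit: use the best estimate
        action = max(estimates, key=estimates.get)
    reward = 1.0 if random.random() < TRUE_REWARD[action] else 0.0
    counts[action] += 1
    # Incremental average: nudge the estimate toward the observed reward.
    estimates[action] += (reward - estimates[action]) / counts[action]

print(estimates)   # converges near the true payoffs: {'A': ~0.3, 'B': ~0.7}
```

Trial and error works well here because the reward signal is unambiguous, which may be why the technique shines in math and programming and falls short where the facts are harder to verify.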
For the full story see:
(Note: ellipses added.)
(Note: the online version of the story was updated May 6, 2025, and has the title “A.I. Is Getting More Powerful, but Its Hallucinations Are Getting Worse.”)
