Minimum Wage May Destroy Jobs Overall, In Spite of Card and Krueger

The economists’ consensus about the job-destroying aspect of the minimum wage is less strong than it used to be.  In the late 1970s, 90% of economists surveyed agreed or partly agreed with the statement, "a minimum wage increases unemployment among young and unskilled workers."  By 2003, that percentage had fallen to 73%.  Still a strong consensus, but a weaker one than before.  What happened?

The answer:  One major study and a book by economists David Card, now at the University of California, Berkeley, and Alan Krueger of Princeton.  In a 1994 study of the effect of a minimum wage increase in New Jersey, they found higher growth of jobs at fast-food restaurants in New Jersey than in Pennsylvania, whose state government had not increased the minimum wage.  This study convinced a lot of people, including some economists.  It was almost comical to see Sen. Edward Kennedy hype this study when he had never before mentioned any economic studies of the minimum wage.

Based on criticism of their data from David Neumark and economist William Wascher of the Federal Reserve Board, Messrs. Card and Krueger moderated their findings, later concluding that fast-food jobs grew no more slowly, rather than more quickly, in New Jersey than in Pennsylvania.  But they never answered a more fundamental criticism, namely that the standard economists’ minimum-wage analysis makes no predictions about narrowly defined industries.  As Donald Deere and Finis Welch of Texas A&M University, and Kevin M. Murphy of the University of Chicago, pointed out, an increased minimum wage can help expand jobs at franchised fast-food outlets by hobbling competition from local pizza places and sandwich shops.

 

For the full commentary, see:

DAVID R. HENDERSON. "If Only Most Americans Understood." The Wall Street Journal (Tues., August 1, 2006): A12.

 

The citation for the article by Deere, Murphy and Welch:

Deere, Donald, Kevin M. Murphy, and Finis Welch. "Sense and Nonsense on the Minimum Wage." Regulation 18, no. 1 (1995).

 

 

French Slow Innovation By Violating Apple’s Intellectual Property Rights

The French take pride in their revolutions, which are usually hard to miss — mass uprisings, heads rolling and such.  So, with the scent of tear gas in the air this past month from the giant protests against a youth labor law, it was easy to overlook the French National Assembly’s approval of a bill that would require Apple Computer to crack open the software codes of its iTunes music store and let the files work on players other than the iPod.  While seemingly minor, the move is actually rather startling and has left many experts wondering (as ever):  What has possessed the French?

. . .  

If the French gave away the codes, Apple would lose much of its rationale for improving iTunes.  Right now, after the royalty payment to the label (around 65 cents) and the processing fee to the credit card company (as high as 23 cents), not to mention other costs, Apple’s margin on 99-cent music is thin.  Yet it continues to add free features to iTunes because it helps sell iPods.

Opening the codes threatens that link.  Apple would need to pay for iTunes features with profits from iTunes itself.  Prices would rise.  Innovation would slow.

Even worse, sharing the codes could make it easier for hackers to unravel Apple’s FairPlay software.  Without strong copy protection, labels would not supply as much new music.

 

For the full commentary, see:

Austan Goolsbee.  "ECONOMIC SCENE; In iTunes War, France Has Met the Enemy. Perhaps It Is France."  The New York Times  (Thurs., April 27, 2006):  C3.

Over-regulating Lenders Is a Recipe for Stagnation

Microfinance is based on a simple idea: banks, finance companies, and charities lend small sums — often no more than a few hundred dollars — to poor third world entrepreneurs. The loan recipients open businesses like tailoring shops or small grocery stores, thereby bolstering local economies.

But does microfinance, in fact, help the poor?

To help answer this question, I visited Hyderabad, India, in June.  . . .

. . .

My visit suggested that microfinance is working, but it is often more corporate, more commercial and under more attack than I had expected.

. . .

Near Hyderabad, in the state of Andhra Pradesh, political opposition to microfinance has begun. State officials have fed negative stories to the media. They charge that microfinance debts have driven some people to ruin or perhaps suicide. They call Spandana’s programs “coercive” and “barbaric.” They question whether the “community pressure” behind repayment is sometimes too severe.

The motives behind this campaign are twofold. The state is not a neutral umpire but rather has its own “self-help group” banking model, which lends at the micro level. Spandana and some of the other private microfinance groups are unwelcome competition. More generally, opposition to money lending has been frequent in the history of both India and the West. Not every loan will have a positive outcome, and it is easy to focus on the victims. Not all Indians have accepted the future of their country as an open commercial society with winners and losers.

. . .

The Indian political authorities must decide whether they will allow new businesses to spread, even when commercialization leads to some disappointments or competes with a state interest. The stipulation that no one can be harmed by progress is a sure recipe for stagnation.

 

For the full commentary, see: 

Tyler Cowen.  "ECONOMIC SCENE; Microloans May Work, but There Is Dispute in India Over Who Will Make Them."  The New York Times  (Thurs., August 10, 2006):  C4. 

 

Doctors Face Perverse Incentives and Constraints

Kevin MD’s blog provides an illuminating discussion of how hard we make it for good people to practice medicine.  The case discussed involves an MD who is successfully sued for not performing a heart cath on a patient, even though two previous treadmill tests did not reveal any problems.  (The heart cath procedure itself has a nontrivial risk of death and other serious complications.)   

The discussion on the Kevin MD blog illustrates the difficult incentives and constraints faced by the conscientious physician.  In terms of a patient’s health, a cost/benefit analysis may imply that a medical test should not be performed, but in terms of an MD’s income and legal liability, a cost/benefit analysis may imply that the test should be performed.

Something is wrong with our reward structure and legal institutions when MDs who make the right medical call for the patient are "rewarded" by earning less and by increasing their chances of being sued.

 

Read the full discussion at:

http://www.kevinmd.com/blog/2006/06/liable-for-not-doing-heart-cath-on-49.html

 


More and Better Jobs Gained by ‘Insourcing’ Than Are Lost to ‘Outsourcing’

N. Gregory Mankiw, former chairman of President George W. Bush’s Council of Economic Advisers. The media, most Democrats, and some Republicans skewered Mankiw in 2004 for simply and clearly stating the truth about outsourcing. Source of photo:  online version of the NYT article cited below.

 

In December 2005, the McKinsey Global Institute predicted that 1.4 million jobs would be outsourced overseas from 2004 to 2008, or about 280,000 a year.  That’s a drop in the bucket.  In July, there were 135.35 million payroll jobs in the United States, according to the Bureau of Labor Statistics.  Thanks to the forces of creative destruction, more jobs are created and lost in a few months than will be outsourced in a year.  Diana Farrell, director of the McKinsey Global Institute, notes that in May 2005 alone, 4.7 million Americans started new jobs with new employers.

What’s more, the threat of outsourcing varies widely by industry.  Lots of services require face-to-face interaction for people to do their jobs.  That is particularly true for the biggest sectors, retail and health care.  As a result, according to a McKinsey study, only 3 percent of retail jobs and 8 percent of health care jobs can possibly be outsourced.  By contrast, McKinsey found that nearly half the jobs in packaged software and information technology services could be done offshore.  But those sectors account for only about 2 percent of total employment.  The upshot:  “Only 11 percent of all U.S. services jobs could theoretically be performed offshore,” Ms. Farrell says.

Economists have also found that jobs or sectors susceptible to outsourcing aren’t disappearing.  Quite the opposite.  Last fall, J. Bradford Jensen, deputy director at the Institute for International Economics, based in Washington, and Lori G. Kletzer, professor of economics at the University of California, Santa Cruz, documented the degree to which various service sectors and jobs were “tradable,” ranging from computer and mathematical occupations (100 percent) to food preparation (4 percent).

Not surprisingly, Mr. Jensen and Professor Kletzer found that in recent years there has been greater job insecurity in the tradable job categories.  But they also concluded that jobs in those industries paid higher wages, and that tradable industries had grown faster than nontradable industries.  “That could mean that this is our competitive advantage,” Mr. Jensen says.  “In other words, what the U.S. does well is the highly skilled, higher-paid jobs within those tradable services.”

There is evidence that within sectors, lower-paying jobs are being outsourced while the more skilled ones are being kept here.  In a 2005 study, Catherine L. Mann, senior fellow at the Institute for International Economics, found that from 1999 to 2003, when outsourcing was picking up pace, the United States lost 125,000 programming jobs but added 425,000 jobs for higher-skilled software engineers and analysts.

 

For the full commentary, see:

DANIEL GROSS. "Economic View; Why ‘Outsourcing’ May Lose Its Power as a Scare Word." The New York Times, Section 3 (Sun., August 13, 2006):  5. 

25% Increase in Oil Production Capacity by 2015

Source of graphic:  online version of the WSJ article cited below.

 

Despite fears of "running out" of oil, Cambridge Energy Research Associates’ new analysis of oil-industry activity points to a considerable growth in the capacity to produce oil in the years ahead.  Based upon our field-by-field examination of current activity and of 360 new projects that are either underway or very likely, we see capacity growing from its current 89 mbd to 110 mbd by 2015, a 25% increase.  A substantial part of this growth reflects the advance of technology, i.e., the rapid growth in "non-traditional" hydrocarbons, such as from very deep offshore waters, Canadian oil sands, and liquids made from natural gas.  (We are not counting in this increase the additional supplement that will come from ethanol and other fuels made from plants.)

There are important qualifications, however.  First, this is physical capacity to produce, not actual flows, which, as we have seen over the last year, can be disrupted by everything from natural disasters to government decisions, to conflict and geopolitical discord.  Second, while prices are going up rapidly, so are costs;  and shortages of equipment and people can slow things down.  Third, greater scale and technical complexity can generate delays.  Still, a 25% increase in physical capacity by 2015 is a reasonable expectation, based upon today’s evidence, and that would go a long way to meeting the growing demand from China, India and other motorizing countries.

Admittedly, it may be hard to conceive of this kind of increase when oil prices are climbing the wall of worry, when each new disruption reverberates around the world, when Iranian politicians threaten $100 or $250 oil in the event of sanctions, and when so many geopolitical trends seem so adverse.  All this underlines the fact that while the challenges below ground are extensive, the looming uncertainties — and risks — remain above ground. 

 

For the full commentary, see:

Daniel Yergin.  "Crisis in the Pipeline."  The Wall Street Journal  (Weds., August 9, 2006):  A10.   

Parts Order Is a Major Step Towards a Nuclear Renaissance

WASHINGTON, Aug. 3 — A partnership established to build nuclear reactors has ordered the heavy steel parts needed to make a reactor vessel, as well as other crucial components, apparently the first hardware order for a plant since the 1970’s.

The order, which an executive of the partnership said was worth “tens of millions of dollars,” was a major step toward actual construction after several years of speculation about a nuclear renaissance.

. . .

The design is derived from the Westinghouse layout already in service, but with several changes.  It is 1,600 megawatts, about a third larger than the largest reactor operating here.  It has a double-walled containment building designed to withstand the crash of a large aircraft.  It has four emergency core cooling systems, any one of which would be sufficient in an emergency, so that it can continue operating even if some of the systems are deactivated for maintenance and repair.  And because of design changes, it has 47 percent fewer valves, 16 percent fewer pumps and 50 percent fewer tanks than a typical existing model.

 

For the full story, see:

MATTHEW L. WALD.  "Nuclear Power Venture Orders Crucial Parts for Reactor."  The New York Times (Fri., August 4, 2006):  C2.

“When Beds Are Available, Physicians Figure Out a Way to Fill Them”

Source of graphic:  online version of the WSJ article cited below.

 

(p. D1)  The Dartmouth investigators say there is no evidence that higher amounts and greater intensity of care lead to better outcomes for patients.  They note past studies done at Dartmouth — looking at Medicare patients with heart attacks, hip fractures and colon cancer — that suggest centers with the most high-intensity care actually have slightly higher death rates than those with a lower intensity of care.  As a result, the researchers say, the bills for patients with similar illnesses may be two or three times higher at some prestigious institutions, with no apparent additional benefit — and perhaps some risk of harm.

. . .

(p. D4)  John E. Wennberg, principal investigator for the Atlas project, has pioneered research into variation of medical services.  He says the differences among academic medical centers are particularly striking since the medical community depends on these institutions to develop effective treatment strategies.  "If the academic medical centers don’t know how to do it, nobody will," Dr. Wennberg says.

He says his research suggests the primary reason for the differences is the capacity of services, such as hospital beds, intensive care units and specialist physicians, within communities.  There isn’t any evidence that people are sicker in the markets of high-intensity services than in low ones, says Dr. Wennberg, but when beds are available, physicians figure out a way to fill them.

 

For the full story, see:

RON WINSLOW.  "Care Varies Widely At Top Medical Centers; Utilization of ICU for Sickest Patients Is 5 Times Higher at Some Than Others; NYU Vs. Mayo."  The Wall Street Journal  (Tues., May 16, 2006):  D1 & D4.

 

  Source of graphic:  online version of the WSJ article cited above.

U.S. Economy Can Prosper, Even if G.M. Does Not

The fragility of success for large corporations is documented in the early chapters of the Foster and Kaplan book mentioned below.

(p. 1)  The announcement last week that General Motors would cut 25,000 jobs and close several factories is yet another blow to the Goliath of automakers and its workers.  But only if you work for G.M. is the company’s decline a worry.  For consumers, the decline can be seen as a symbol of healthy competition.

G.M.’s sales, market share and work force have all been falling for a generation, even as the quality of its vehicles has gone up.  Why?  Because its competitors’ products have improved even more.  Today’s auto buyers enjoy an unprecedented array of well-built, well-equipped, reasonably priced vehicles offered by many manufacturers.

. . .

(p. 3)  . . .  even if a new generation is drawn to G.M.’s products, recovery of its former position seems unlikely.  Other brands have improved, too:  J.D. Power estimates that for the auto industry overall, manufacturing defects declined 32 percent since 1998 alone.

There is also great pressure to hold prices down, which is bad for companies like G.M. with vast amounts of overhead.  According to the consumer price index, new cars and light trucks today cost less in real-dollar terms than in 1982, despite having air bags, antilock brakes, CD players, power windows and other features either unavailable or considered luxury options back then.

This means that during the very period that General Motors has declined, American car buyers have become better off.  Competition can have the effect of “creative destruction,” in the economist Joseph Schumpeter’s famous term, harming workers in some places, while everyone else comes out ahead.

. . .

As it continues to shrink, G.M. may serve as an exemplar of what the world economy will do in many arenas — knock off established leaders, while improving quality and cutting prices.  In their 2001 book “Creative Destruction,” Richard Foster and Sarah Kaplan, analysts at McKinsey & Company, documented how even powerhouse companies that are “built to last” usually succumb to competition.

Competition can be a utilitarian force that brings the greatest good to the greatest number.  Someday when the remaining divisions of General Motors are bought by some start-up company that doesn’t even exist yet, try to keep that in mind.

 

For the full commentary, see: 

GREGG EASTERBROOK.  "What’s Bad for G.M. Is . . ."  The New York Times, Section 4  (Sunday, June 12, 2005):  1 & 3.

(Note:  the ellipsis in the title is in the original title; the ellipses in the article were added.)

 

The full reference to the Foster and Kaplan book is:

Foster, Richard and Sarah Kaplan.  Creative Destruction:  Why Companies That Are Built to Last Underperform the Market—and How to Successfully Transform Them.  New York:  Currency Books, 2001.

 

Static Assumptions Undermine Economic Policy Analysis


Over 50 years ago, Schumpeter emphasized that static models of capitalism miss what is most important about it.  Yet static analysis still dominates most policy discussions.  But there is hope:


(p. A14) A bit of background:  Most official analysis of tax policy is based on what economists call "static assumptions."  While many microeconomic behavioral responses are included, the future path of macroeconomic variables such as the capital stock and GNP is assumed to stay the same, regardless of tax policy.  This approach is not realistic, but it has been the tradition in tax analysis mainly because it is simple and convenient.

In his 2007 budget, President Bush directed the Treasury staff to develop a dynamic analysis of tax policy, and we are now reaping the fruits of those efforts.  The staff uses a model that does not consider the short-run effects of tax policy on the business cycle, but instead focuses on its longer run effects on economic growth through the incentives to work, save and invest, and to allocate capital among competing uses.
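
As a rough illustration of the difference between the two approaches, here is a toy sketch in Python with invented numbers (a minimal sketch only, not the Treasury staff's actual model):  under static scoring the tax base is held fixed, while under dynamic scoring the base is allowed to respond to the lower rate.

# Toy illustration of "static" vs. "dynamic" scoring of a hypothetical tax cut.
# All numbers are invented for this example; this is NOT the Treasury model.
baseline_gnp = 100.0               # hypothetical GNP, in arbitrary units
old_rate, new_rate = 0.30, 0.25    # a hypothetical cut in the average tax rate

baseline_revenue = old_rate * baseline_gnp

# Static scoring: GNP is assumed to stay the same regardless of tax policy.
static_revenue = new_rate * baseline_gnp

# Dynamic scoring: lower rates are assumed to strengthen incentives to work,
# save, and invest, so GNP itself responds.  Here we simply posit a 3% rise.
assumed_growth_response = 0.03
dynamic_revenue = new_rate * baseline_gnp * (1 + assumed_growth_response)

print(f"Static estimate of revenue loss:  {baseline_revenue - static_revenue:.2f}")
print(f"Dynamic estimate of revenue loss: {baseline_revenue - dynamic_revenue:.2f}")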

 

For the full story, see:

ROBERT CARROLL and N. GREGORY MANKIW.  "Dynamic Analysis."  The Wall Street Journal  (Weds., July 26, 2006):  A14.


Taking the Red Pill in China

Surfing the Web last fall, a Chinese high-school student who calls himself Zivn noticed something missing.  It was Wikipedia, an online encyclopedia that accepts contributions or edits from users, and that he himself had contributed to.

The Chinese government, in October, had added Wikipedia to a list of Web sites and phrases it blocks from Internet users’ access.  For Zivn, trying to surf this and many other Web sites, including the BBC’s Chinese-language news service, brought just an error message.  But the 17-year-old had had a taste of that wealth of information and wanted more.  "There were so many lies among the facts, and I could not find where the truth is," he writes in an instant-message interview.

Then some friends told him where to find Freegate, a tiny software program that thwarts the Chinese government’s vast system to limit what its citizens see.  Freegate — by connecting computers inside of China to servers in the U.S. — allows Zivn and others to keep reading and writing to Wikipedia and countless other sites.

Behind Freegate is a North Carolina-based Chinese hacker named Bill Xia.  He calls it his red pill, a reference to the drug in the "Matrix" movies that vaulted unconscious captives of a totalitarian regime into the real world.  Mr. Xia likes to refer to the villainous Agent Smith from the Matrix films, noting that the digital bad guy in sunglasses "guards the Matrix like China’s Public Security Bureau guards the Internet."

. . .

(p. A9)  . . . , with each new version of Freegate — now on its sixth release — the censors "just keep improving and adding more manpower to monitor what we have been doing," Mr. Xia says.  In turn, he and volunteer programmers keep tweaking Freegate.

At first, the software would automatically change its Internet Protocol address — a sort of phone number for a Web site — faster than China could block it.  That worked until September 2002, when China blocked Freegate’s domain name, not just its number, in the Internet phone book.

More than three years later, Mr. Xia is still amazed by the bold move, calling it a "hijacking."  Ultimately he prevailed, however, through a solution he won’t identify for fear of being shut down for good.

Confident in that solution, Mr. Xia continues to send out his red pill, and users like Zivn continue to take it.  The teen credits his cultural and political perspective to a "generation gap" that has come of having access to more information.  "I am just gradually getting used to the truth about the real world," he writes.
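
As a rough illustration of the distinction drawn above between a site's domain name and its numeric address (ordinary domain-name lookup code in Python, unrelated to Freegate's actual software), a few lines show the mapping that censors can block at either level:

import socket

# A domain name is a human-friendly label; DNS resolves it to the numeric
# IP address that computers actually connect to.  Blocking the name and
# blocking the number are two separate censorship levers.
domain = "www.wikipedia.org"   # example site mentioned in the story; any domain works
ip_address = socket.gethostbyname(domain)
print(f"{domain} resolves to {ip_address}")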

 

For the full story, see: 

Geoffrey A. Fowler.  "Chinese Internet Censors Face ‘Hacktivists’ in U.S."  The Wall Street Journal  (Monday, February 13, 2006):  A1 & A9.