Kits Let Model T Owners Transform Their Cars into Tractors, Snowmobiles, Roadsters and Trucks

ModelTtractorConversion2013-10-25.jpg “OFF ROAD; Kits to take the Model T places Henry Ford never intended included tractor conversions, . . . ” Source of caption and photo: online version of the NYT article quoted and cited below.

(p. 1) WHEN Henry Ford started to manufacture his groundbreaking Model T on Sept. 27, 1908, he probably never imagined that the spindly little car would remain in production for 19 years. Nor could Ford have foreseen that his company would eventually build more than 15 million Tin Lizzies, making him a billionaire while putting the world on wheels.

But nearly as significant as the Model T’s ubiquity was its knack for performing tasks far beyond basic transportation. As quickly as customers left the dealers’ lot, they began transforming their Ts to suit their specialized needs, assisted by scores of new companies that sprang up to cater exclusively to the world’s most popular car.
Following the Model T’s skyrocketing success came mail-order catalogs and magazine advertisements filled with parts and kits to turn the humble Fords into farm tractors, mobile sawmills, snowmobiles, racy roadsters and even semi-trucks. Indeed, historians credit the Model T — which Ford first advertised as The Universal Car — with launching today’s multibillion-dollar automotive aftermarket industry.

For the full story, see:
LINDSAY BROOKE. “Mr. Ford’s T: Mobility With Versatility.” The New York Times, Automobiles Section (Sun., July 20, 2008): 1 & 14.
(Note: the online version of the story has the title “Mr. Ford’s T: Versatile Mobility.”)

Kerosene Creatively Destroyed Whale Oil

WhaleOilLamps2013-10-25.jpg “The whale-oil lamps at the Sag Harbor Whaling and Historical Museum are obsolete, though at one time, whale oil lighted much of the Western world.” Source of caption and photo: online version of the NYT article quoted and cited below.

(p. 20) Like oil, particularly in its early days, whaling spawned dazzling fortunes, depending on the brute labor of tens of thousands of men doing dirty, sweaty, dangerous work. Like oil, it began with the prizes closest to home and then found itself exploring every corner of the globe. And like oil, whaling at its peak seemed impregnable, its product so far superior to its trifling rivals, like smelly lard oil or volatile camphene, that whaling interests mocked their competitors.

“Great noise is made by many of the newspapers and thousands of the traders in the country about lard oil, chemical oil, camphene oil, and a half-dozen other luminous humbugs,” The Nantucket Inquirer snorted derisively in 1843. It went on: “But let not our envious and — in view of the lard oil mania — we had well nigh said, hog-gish opponents, indulge themselves in any such dreams.”
But, in fact, whaling was already just about done, said Eric Jay Dolin, who . . . is the author of “Leviathan: The History of Whaling in America.” Whales near North America were becoming scarce, and the birth of the American petroleum industry in 1859 in Titusville, Pa., allowed kerosene to supplant whale oil before the electric light replaced both of them and oil found other uses.
. . .
Mr. Dolin said the message for today was that one era’s irreplaceable energy source could be the next one’s relic. Like whaling, he said, big oil is ripe to be replaced by something newer, cleaner, more appropriate for its moment.

For the full story, see:
PETER APPLEBOME. “OUR TOWNS; Once They Thought Whale Oil Was Indispensable, Too.” The New York Times, First Section (Sun., August 3, 2008): 20.
(Note: ellipses added.)
(Note: the online version of the story has the title, “OUR TOWNS; They Used to Say Whale Oil Was Indispensable, Too.”)

Dolin’s book is:
Dolin, Eric Jay. Leviathan: The History of Whaling in America. New York: W. W. Norton & Company, Inc., 2007.

Companies Do Less R&D in Countries that Steal Intellectual Property

The conclusions of Gupta and Wang, quoted below, are consistent with research done many years ago by economist Edwin Mansfield.

(p. A15) China’s indigenous innovation program, launched in 2006, has alarmed the world’s technology giants more than any other policy measure since the start of economic reforms in 1978. A recent report from the U.S. Chamber of Commerce even went so far as to call this program “a blueprint for technology theft on a scale the world has not seen before.”
. . .
A comparison with India is illustrative. India has no equivalent to indigenous innovation rules. The government also is content to allow companies to set up R&D facilities without any rules about sharing technology with local partners or the like.
These policy differences appear to have a significant influence on corporate behavior. Consider the top 10 U.S.-based technology giants that received the most patents from the U.S. Patent and Trademark Office (USPTO) between 2006 and 2010: IBM, Microsoft, Intel, Hewlett-Packard, Micron, GE, Cisco, Texas Instruments, Broadcom and Honeywell.
Half of these companies appear not to be doing any significant R&D work in China. Between 2006 and 2010, the USPTO did not award a single patent to any China-based units of five out of the 10 companies. In contrast, only one of the 10 did not receive a patent for an innovation developed in India.

For the full commentary, see:
Anil K. Gupta and Haiyan Wang. “How Beijing Is Stifling Chinese Innovation.” The Wall Street Journal (Thurs., September 1, 2011): A15.
(Note: ellipsis added.)
(Note: the online version of the commentary has the title “Beijing Is Stifling Chinese Innovation.”)

Mansfield’s relevant paper is:
Mansfield, Edwin. “Unauthorized Use of Intellectual Property: Effects on Investment, Technology Transfer, and Innovation.” In Global Dimensions of Intellectual Property Rights in Science and Technology, edited by M. E. Mogee, M. B. Wallerstein, and R. A. Schoen. Washington, D.C.: National Academy Press, 1993, pp. 107-45.

Mansfield’s research on this issue is discussed on pp. 1611-1612 of:
Diamond, Arthur M., Jr. “Edwin Mansfield’s Contributions to the Economics of Technology.” Research Policy 32, no. 9 (Oct. 2003): 1607-17.

If Feds Stalled Skype Deal, Google Would Have Been “Stuck with a Piece of Shit”

Even just the plausible possibility of a government veto of an acquisition can stop the acquisition from happening. The feds thereby kill efficiency- and innovation-enhancing reconfigurations of assets and business units.

(p. 234) . . . , an opportunity arose that Google’s leaders felt compelled to consider: Skype was available. It was a onetime chance to grab hundreds of millions of Internet voice customers, merging them with Google Voice to create an instant powerhouse. Wesley Chan believed that this was a bad move. Skype relied on a technology called peer to peer, which moved information cheaply and quickly through a decentralized network that emerged through the connections of users. But Google didn’t need that system because it had its own efficient infrastruc-(p. 235)ture. In addition, there was a question whether eBay, the owner of Skype, had claim to all the patents to the underlying technology, so it was unclear what rights Google would have as it tried to embellish and improve the peer-to-peer protocols. Finally, before Google could take possession, the U.S. government might stall the deal for months, maybe even two years, before approving it. “We would have paid all this money, but the value would go away and then we’d be stuck with a piece of shit,” says Chan.

Source:
Levy, Steven. In the Plex: How Google Thinks, Works, and Shapes Our Lives. New York: Simon & Schuster, 2011.
(Note: ellipsis added.)

Better Batteries Would Be a General Purpose Technology (GPT)

Economists of technology have been thinking about General Purpose Technologies (GPTs) for the last 10 years or so. As the name implies, a GPT is a technology with broad applications, and new applications are invented as the price of the GPT declines. My guess is that a breakthrough in battery technology would be a very important GPT. The progress sketched below is probably not a breakthrough, but progress is good.

(p. C4) People take batteries for granted, but they shouldn’t. All kinds of technological advances hinge on developing smaller and more powerful mobile energy sources.
Researchers at Harvard University and the University of Illinois are reporting just such a creation, one that happens to be no bigger than a grain of sand. These tiny but powerful lithium-ion batteries raise the prospect of a new generation of medical and other devices that can go where traditional hulking batteries can’t.
. . .
Jennifer Lewis, a materials scientist at Harvard, says these batteries can store more energy because 3-D printing enables the stacking of electrodes in greater volume than the thin-film methods now used to make microbatteries.

For the full story, see:
DANIEL AKST. “R AND D: Batteries on the Head of a Pin.” The Wall Street Journal (Sat., June 22, 2013): C4.
(Note: ellipsis added.)
(Note: the online version of the interview has the date June 21, 2013.)

Google Used Auction Model to Allocate Internal Resources

(p. 202) Google’s chief economist, Hal Varian, would later explain how it worked when new data centers open: “We’ll build a nice new data center and say, ‘Hey, Google Docs, would you move your machines over here?’ And they say, ‘Sure, next month.’ Because nobody wants to go through the disruption of shifting. So I suggested we run an auction similar to what airlines do when they oversell a plane — they keep offering bigger vouchers until enough customers are willing to give up their seats. In our case, we offer more machines in exchange for moving. One group might do it for fifty new ones, another for a hundred, and another won’t move unless we give them three hundred. So we give them to the lowest bidder — they get their extra capacity, and we get computation shifted to the new data center.”
Google eventually devised an elaborate auction model for divvying up existing resources. In a paper entitled “Using a Market Economy to Provision Computer Resources Across Planet-wide Clusters,” a group of Google engineers, along with a Stanford professor of management science and engineering, reported a project that essentially made Google’s computational resources into a silicon Wall Street. Supply and demand worked here not to fix stock prices but to place a value on resources. The system not only allowed projects at Google to get fair access to storage and computational cycles but identified shortages in computers, storage, and bandwidth. Instead of the Vickrey auction used by AdWords, the system used an “ascending clock auction.” At the beginning, the current price of each resource would be displayed, and Google engineers in competing projects could claim them at that price. The ideal outcome would ensure sufficient resources for everyone, in which case the auction stopped. Otherwise, the automated auctioneer would raise the prices for the next “time slot,” and (p. 203) remaining competitors for those resources had to decide whether to bid higher. And so on, until the engineers not willing to stake their budgets on the most contested resources dropped out. “Hence,” write the paper’s authors, “the auction allows users to ‘discover’ prices in which all users pay/receive payment in proportion to uniform resource prices.”
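The ascending clock mechanism described above can be sketched in a few lines of Python. This is a toy illustration, not code from the Google paper; the resource, the project names, and the bid numbers are all hypothetical.

```python
# Toy sketch of an ascending clock auction for one contested resource
# (say, machines in a data center). All names and numbers are hypothetical.

def ascending_clock_auction(supply, bidders, start_price=1, increment=1):
    """Advance the clock price until total demand fits within supply.

    `bidders` maps a project name to (max per-unit price it will pay,
    quantity it wants). Returns the clearing price and the winners.
    """
    price = start_price
    while True:
        # Projects still willing to claim the resource at the clock price.
        active = {name: qty for name, (max_price, qty) in bidders.items()
                  if max_price >= price}
        if sum(active.values()) <= supply:
            return price, active  # demand fits: the auction stops
        price += increment  # shortage remains: raise the price

# Three projects compete for 100 machines; the least price-sensitive
# project "discovers" the price at which demand no longer exceeds supply.
price, winners = ascending_clock_auction(
    supply=100,
    bidders={"search": (10, 60), "ads": (7, 50), "mail": (4, 40)},
)
print(price, winners)  # 8 {'search': 60}
```

In the real system bidders could presumably also respond to a price increase by reducing the quantity they request rather than dropping out entirely; this sketch keeps each project's demand all-or-nothing for brevity.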

Source:
Levy, Steven. In the Plex: How Google Thinks, Works, and Shapes Our Lives. New York: Simon & Schuster, 2011.

Google Had the Most “Massive Parallelized Redundant Computer Network” in the World

(p. 198) . . . by perfecting its software, owning its own fiber, and innovating in conservation techniques, Google was able to run its computers spending only a third of what its competitors paid. “Our true advantage was actually the fact that we had this massive parallelized redundant computer network, probably more than anyone in the world, including governments,” says Jim Reese. “And we realized that maybe it’s not in our best interests to let our competitors know.”

Source:
Levy, Steven. In the Plex: How Google Thinks, Works, and Shapes Our Lives. New York: Simon & Schuster, 2011.
(Note: ellipsis added.)

Silicon Valley Is Open to Creative Destruction, But Tired of Taxes

(p. A15) Rancho Palos Verdes, Calif.
When the howls of creative destruction blew through the auto and steel industries, their executives lobbied Washington for bailouts and tariffs. For now, Silicon Valley remains optimistic enough that its executives don’t mind having their own businesses creatively destroyed by newer technologies and smarter innovations. That’s an encouraging lesson from this newspaper’s recent All Things Digital conference, which each year attracts hundreds of technology leaders and investors.
. . .
In a 90-minute grilling by the Journal’s Walt Mossberg and Kara Swisher, Apple Chief Executive Tim Cook assured the audience that his company has “some incredible plans that we’ve been working on for a while.”
Mr. Cook’s sunny outlook was clouded only by his dealings with Washington. He was recently the main witness at hearings called by Sen. Carl Levin, a Michigan Democrat, who accused Apple of violating tax laws. In fact, Apple’s use of foreign subsidiaries is entirely legal–and Apple is the largest taxpayer in the U.S., contributing $6 billion a year to the government’s coffers.
Mr. Cook put on a brave face about the hearings, saying, “I thought it was very important to go tell our side of the story and to view that as an opportunity instead of a pain in the [expletive].” Mr. Cook’s foul language was understandable. “Just gut the [tax] code,” he told the conference. “It’s 7,500 pages long. . . . Apple’s tax return is two feet high. It’s crazy.”
When the audience applauded, Ms. Swisher quipped, “All right, Rand Paul.” A woman shouted: “No, I’m a Democrat!” One reason the technology industry remains the center of innovation may be that many technologists of all parties view trips to Washington as a pain.

For the full commentary, see:
L. GORDON CROVITZ. “INFORMATION AGE; Techies Cheer Creative Destruction.” The Wall Street Journal (Mon., June 3, 2013): A15.
(Note: ellipsis between paragraphs added; italics in original; ellipsis, and bracketed words, within next-to-last paragraph, in original.)
(Note: the online version of the commentary has the date June 2, 2013.)

Larry Page: “At His Core He Cares about Latency”

(p. 184) Speed had always been an obsession at Google, especially for Larry Page. It was almost instinctual for him. “He’s always measuring everything,” says early Googler Megan Smith. “At his core he cares about latency.” More accurately, he despises latency and is always trying to remove it, like Lady Macbeth washing guilt from her hands. Once Smith was walking down the street with him in Morocco and he suddenly dragged her into a random Internet café with maybe three machines. Immediately, he began timing how long it took web pages to load into a browser there.
Whether due to pathological impatience or a dead-on conviction that speed is chronically underestimated as a factor in successful products, Page had been insisting on faster delivery for everything Google from the beginning. The minimalism of Google’s home page, allowing for lightning-quick (p. 185) loading, was the classic example. But early Google also innovated by storing cached versions of web pages on its own servers, for redundancy and speed.
“Speed is a feature,” says Urs Hölzle. “Speed can drive usage as much as having bells and whistles on your product. People really underappreciate it. Larry is very much on that line.”

Source:
Levy, Steven. In the Plex: How Google Thinks, Works, and Shapes Our Lives. New York: Simon & Schuster, 2011.

Dohrmann and Quevedo Survive Creative Destruction of Inacom

DohrmannHokampQuevedoCosentry2013-10-07.jpg “Cosentry, an Omaha-based provider of data center storage and managed technology services, has a new CEO, Brad Hokamp, center. With him at the Cosentry data center in Papillion are company founders Kevin Dohrmann, left, and Manny Quevedo.” Source of caption and photo: online version of the Omaha World-Herald article quoted and cited below.

Innovation through creative destruction brings us the new products and processes that make our lives longer, richer and more satisfying. The major downside of creative destruction is the job loss of those working for firms that are creatively destroyed.

Sometimes, in class, I use Omaha’s Inacom as a concrete example. Inacom was a value-added retailer of computer equipment. They would buy PCs from IBM, Compaq and the like, then add software and hardware, and re-sell and install for firms, at a mark-up. They were creatively destroyed by Dell’s process innovation of customizing and selling direct, at much lower prices than Inacom charged. When I arrived in Omaha, Inacom was one of the city’s handful of Fortune 500 firms. Now Inacom is gone.

But just because a firm is creatively destroyed does not imply that all those who worked for the firm are creatively destroyed. Dohrmann and Quevedo were executives at Inacom. They had the skills, knowledge, resilience and work ethic to create their own entrepreneurial startup that has thrived. Not everyone can do what Dohrmann and Quevedo did. But everyone should be able to improve their skills, knowledge, resilience, and work ethic, so that if creative destruction destroys the firm that employs them, they will still survive and possibly thrive.

(p. 1D) Cosentry’s regional data center footprint has grown far from its “humble beginnings” 12 years ago of just 4,000 square feet in the old Southroads Mall in Bellevue.

“Everyone saw it as a mall that was in deterioration, and I walked in and saw the most beautiful building in Omaha,” co-founder Manny Quevedo said, (p. 3D) remembering solid walls and below-grade space for computer systems.
Investments from Omaha firms Waitt Co. and McCarthy Capital along the way helped the firm grow; it was sold in 2011 to Boston private equity firm TA Associates but still has its headquarters at 127th Street and West Dodge Road.
. . .
The company’s workforce has approximately doubled in the last five years to nearly 200, more than half of them in Nebraska, and will continue to grow gradually with the expansion as Cosentry hires more engineers and technicians, Quevedo said.
Today the company has six data centers, including two each in the Kansas City and Sioux Falls, S.D., metropolitan areas. If you use utilities or health care services or do any shopping or banking in the region, there’s a chance some of your information has been stored or processed through Cosentry’s servers.
Cosentry started with what Quevedo said was a handful of clients and grew to hundreds within its first five years.
. . .
(p. 3D) Cosentry Timeline
2001: With investment from Waitt Co., Cosentry is started by Manny Quevedo and Kevin Dohrmann, former employees of InaCom, the former Omaha Fortune 500 computer dealer that began as a division of Valmont Industries but merged with VanStar of Atlanta in 2000 and later declared bankruptcy. Cosentry creates a data center in Bellevue.
2005: Cosentry, also called IPR Inc., sold its IP Revolution division to a Kansas firm, Choice Solutions. IP Revolution sold voice and data communications services and systems. Cosentry doubles the size of its Bellevue data center and expands to the Kansas City and Sioux Falls, S.D., markets.
2008: Omaha investment firm McCarthy Capital invests in the firm. At the time, Cosentry had 95 employees.
2010: Cosentry cuts the ribbon on the $26 million Midlands Data Center in Papillion, a joint project with Alegent Health, which uses the center to store electronic medical records.
2011: Boston investment firm TA Associates buys Cosentry for an undisclosed amount from McCarthy and Waitt. The local management team continues to operate and have an ownership stake in Cosentry. The firm expands with second data centers in both the Sioux Falls and Kansas City markets.
2013: Cosentry refinances its credit facilities to provide up to $100 million to enable expansion, including the expansion of the Midlands Data Center. Today, Cosentry has nearly 200 employees and six data centers in three metropolitan areas.

For the full story, see:
Barbara Soderlin. “A Growing Tech Footprint: As Businesses’ Data Storage Needs Expand, Cosentry Adds to Its Papillion Center.” Omaha World-Herald (MONDAY, AUGUST 26, 2013): 1D & 3D.
(Note: ellipses added; bold in original print version of article.)
(Note: the online version of the article has the title “As Businesses’ Data Storage Needs Expand, Cosentry Adds to Its Papillion Center.”)

CosentryScottCappsAtPapillionDataCenterCoolingSystem2013-10-07.jpg “Scott Capps of Cosentry’s Papillion data center with the cooling system that helped Cosentry earn an Energy Star certification, which is given by the Environmental Protection Agency based on energy efficiency and lower emissions. It’s the only data center in Nebraska with the certification.” Source of caption and photo: the archived online version of the Omaha World-Herald article quoted and cited above.

Google’s Redundant, Fault-Tolerant System Worked with Cheap, Low-Quality, Failure-Prone Equipment

(p. 183) Google was a tough client for Exodus; no company had ever jammed so many servers into so small an area. The typical practice was to put between five and ten servers on a rack; Google managed to get eighty servers on each of its racks. The racks were so closely arranged that it was difficult for a human being to squeeze into the aisle between them. To get an extra rack in, Google had to get Exodus to temporarily remove the side wall of the cage. “The data centers had never worried about how much power and AC went into each cage, because it was never close to being maxed out,” says Reese. “Well, we completely maxed out. It was on an order of magnitude of a small suburban neighborhood,” Reese says. Exodus had to scramble to install heavier circuitry. Its air-conditioning was also overwhelmed, and the colo bought a portable AC truck. They drove the eighteen-wheeler up to the colo, punched three holes in the wall, and pumped cold air into Google’s cage through PVC pipes.
. . .
The key to Google’s efficiency was buying low-quality equipment dirt cheap and applying brainpower to work around the inevitably high failure rate. It was an outgrowth of Google’s earliest days, when Page and Brin had built a server housed by Lego blocks. “Larry and Sergey proposed that we design and build our own servers as cheaply as we can — massive numbers of servers connected to a high-speed network,” says Reese. The conventional wisdom was that an equipment failure should be regarded as, well, a failure. Generally the server failure rate was between 4 and 10 percent. To keep the failures at the lower end of the range, technology companies paid for high-end equipment from Sun Microsystems or EMC. “Our idea was completely opposite,” says Reese. “We’re going to build hundreds and thousands of cheap servers knowing from the get-go that a certain percentage, maybe 10 percent, are going to fail,” says Reese. Google’s first CIO, Douglas Merrill, once noted that the disk drives Google purchased were “poorer quality than you would put into your kid’s computer at home.”
(p. 184) But Google designed around the flaws. “We built capabilities into the software, the hardware, and the network — the way we hook them up, the load balancing, and so on — to build in redundancy, to make the system fault-tolerant,” says Reese. The Google File System, written by Jeff Dean and Sanjay Ghemawat, was invaluable in this process: it was designed to manage failure by “sharding” data, distributing it to multiple servers. If Google search called for certain information at one server and didn’t get a reply after a couple of milliseconds, there were two other Google servers that could fulfill the request.
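The fallback behavior described in this passage can be illustrated with a short Python sketch. This is not GFS code; the three-way replication and the roughly 10 percent failure rate come from the passage above, while the class, function, and key names are hypothetical.

```python
import random

# Toy illustration of redundancy over cheap, failure-prone servers, as the
# passage describes: data is "sharded" onto several replicas, and a read
# falls back to another replica when one server fails. Not actual GFS code.

REPLICATION = 3  # the passage implies three copies: one server plus two others

class Server:
    def __init__(self, failure_rate=0.10):  # ~10 percent of cheap servers fail
        self.failure_rate = failure_rate
        self.data = {}

    def read(self, key):
        if random.random() < self.failure_rate:
            raise IOError("server unavailable")
        return self.data[key]

def replicated_read(replicas, key):
    """Try each replica in turn; fail only if every copy is down."""
    for server in replicas:
        try:
            return server.read(key)
        except IOError:
            continue  # fall back to the next replica
    raise IOError("all replicas failed")

# With independent 10 percent failures, all three replicas are down only
# 0.1 ** 3 = 0.1 percent of the time: reliability comes from redundancy,
# not from expensive hardware.
random.seed(0)  # make the demonstration deterministic
replicas = [Server() for _ in range(REPLICATION)]
for s in replicas:
    s.data["shard-42"] = "hello"
print(replicated_read(replicas, "shard-42"))  # hello
```

The design choice mirrors the quote: failure is handled in software, so the hardware underneath can be as cheap and unreliable as the budget allows.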

Source:
Levy, Steven. In the Plex: How Google Thinks, Works, and Shapes Our Lives. New York: Simon & Schuster, 2011.
(Note: ellipsis added.)