FCC Gains Arbitrary Power Over Internet Innovation

(p. A11) Imagine if Steve Jobs, Larry Page or Mark Zuckerberg had been obliged to ask bureaucrats in Washington if it was OK to launch the iPhone, Gmail, or Facebook’s forthcoming Oculus virtual-reality service. Ridiculous, right? Not anymore.
A few days before the Independence Day holiday weekend, the Federal Communications Commission announced what amounts to a system of permission slips for the Internet.
. . .
As the FCC begins to issue guidance and enforcement actions, it’s becoming clearer that critics who feared there would be significant legal uncertainty were right. Under its new “transparency” rule, for example, the agency on June 17 conjured out of thin air an astonishing $100 million fine against AT&T, even though the firm explained its mobile-data plans on its websites and in numerous emails and texts to customers.
The FCC’s new “Internet Conduct Standard,” meanwhile, is no standard at all. It is an undefined catchall for any future behavior the agency doesn’t like.
. . .
From the beginning, Internet pioneers operated in an environment of “permissionless innovation.” FCC Chairman Tom Wheeler now insists that “it makes sense to have somebody watching over their shoulder and ready to jump in if necessary.” But the agency is jumping in to demand that innovators get permission before they offer new services to consumers. The result will be less innovation.

For the full commentary, see:
BRET SWANSON. “Permission Slips for Internet Innovation; The FCC’s new Web rules are already as onerous as feared and favor some business models over others.” The Wall Street Journal (Sat., Aug. 15, 2015): A11.
(Note: ellipses added.)
(Note: the online version of the commentary has the date Aug. 14, 2015.)

“We Embrace New Technology”

(p. 2D) . . . , the first digital images created by the earliest digital cameras “were terrible,” Rockbrook’s Chuck Fortina said. “These were real chunky images made by big, clunky cameras.”
Viewing those results, some retailers dismissed the new digital technology and clung doggedly to film. But Rockbrook Camera began stocking digital cameras alongside models that used film, Fortina said.
“Film sales were great, but we just knew digital was going to take over,” Fortina said. As those cameras and their images improved, the retailer saw a huge opportunity. “Instead of thinking this is going to kill our business, we were thinking people are going to have to buy all new gear,” Fortina said of the switch from analog to digital.
“By 2000, film was over,” he said. Companies that didn’t refocus their business found themselves struggling or forced to close their doors.
Today, Rockbrook Camera is constantly scouring the Internet, attending trade shows and quizzing customers and employees in search of new technologies, Fortina said. “We embrace new technology,” he said.

For the full story, see:
Janice Podsada. “More Ready than Not for Tech Shifts; How 3 Omaha-area businesses altered course and thrived amid technological changes.” Omaha World-Herald (Sun., Sept. 27, 2015): 1D-2D.
(Note: ellipsis added.)
(Note: the online version of the story has the title “How 3 Omaha-area businesses altered course and thrived amid technological changes.”)

John Paul Stapp Thumbed His Nose at the Precautionary Principle

(p. C7) In the early 19th century, a science professor in London named Dionysius Lardner rejected the future of high-speed train travel because, he said, “passengers, unable to breathe, would die of asphyxia.” A contemporary, the famed engineer Thomas Tredgold, agreed, noting “that any general system of conveying passengers . . . [traveling] at a velocity exceeding 10 miles an hour, or thereabouts, is extremely improbable.”
The current land speed record for a human being is 763 miles an hour, or thereabouts, thanks in large part to the brilliance, bravery and dedication of a U.S. Air Force lieutenant colonel named John Paul Stapp, a wonderfully iconoclastic medical doctor, innovator and renegade consumer activist who repeatedly put his own life in peril in search of the line beyond which human survival at speed really was “extremely improbable.”
. . .
Initial tests were carried out on a crash-test dummy named Oscar Eightball, then chimpanzees and pigs. There was plenty of trial and error–the term “Murphy’s Law” was coined during the Gee Whiz experiments–until Stapp couldn’t resist strapping himself into the Gee Whiz to experience firsthand what the cold data could never reveal: what it felt like. On May 5, 1948, for example, he “took a peak deceleration of an astounding twenty-four times the force of gravity,” the author writes. “This was the equivalent of a full stop from 75 miles per hour in just seven feet or, in other words, freeway speed to zero in the length of a very tall man.”
Stapp endured a total of 26 rides on the Gee Whiz over the course of 50 months, measuring an array of physiological factors as well as testing prototype helmets and safety belts. Along the way he suffered a broken wrist, torn rib cartilage, a bruised collarbone, a fractured coccyx, busted capillaries in both eyes and six cracked dental fillings. Colleagues became increasingly concerned for his health every time he staggered, gamely, off the sled, but, according to Mr. Ryan, he never lost his sense of humor, nor did these ordeals stop Dr. Stapp from voluntarily making house calls at night for families stationed on the desolate air base.
. . .
After 29 harrowing trips down the track, Stapp prepared for one grand finale, what he called the “Big Run,” hoping to achieve 600 miles per hour, the speed beyond which many scientists suspected that human survivability was–really, this time–highly improbable. On Dec. 10, 1954, Sonic Wind marked a speed of 639 miles per hour, faster than a .45 caliber bullet shot from a pistol. Film footage of the test shows the sled rocketing past an overhead jet plane that was filming the event. The Big Run temporarily blinded Stapp, and he turned blue for a few days, but the experiment landed him on the cover of Time magazine as the fastest man on earth. The record stood for the next 30 years.

For the full review, see:
PATRICK COOKE. “Faster Than a Speeding Bullet–Really.” The Wall Street Journal (Sat., Aug. 22, 2015): C7.
(Note: first ellipsis, and bracketed word, in original; other ellipses added.)
(Note: the online version of the review has the date Aug. 21, 2015.)

The book under review is:
Ryan, Craig. Sonic Wind: The Story of John Paul Stapp and How a Renegade Doctor Became the Fastest Man on Earth. New York: Liveright Publishing Corp., 2015.

Fire-Cooked Carbohydrates Fed Bigger Brains

(p. D5) Scientists have long recognized that the diets of our ancestors went through a profound shift with the addition of meat. But in the September issue of The Quarterly Review of Biology, researchers argue that another item added to the menu was just as important: carbohydrates, bane of today’s paleo diet enthusiasts. In fact, the scientists propose, by incorporating cooked starches into their diet, our ancestors were able to fuel the evolution of our oversize brains.
. . .
Cooked meat provided increased protein, fat and energy, helping hominins grow and thrive. But Mark G. Thomas, an evolutionary geneticist at University College London, and his colleagues argue that there was another important food sizzling on the ancient hearth: tubers and other starchy plants.
Our bodies convert starch into glucose, the body’s fuel. The process begins as soon as we start chewing: Saliva contains an enzyme called amylase, which begins to break down starchy foods.
Amylase doesn’t work all that well on raw starches, however; it is much more effective on cooked foods. Cooking makes the average potato about 20 times as digestible, Dr. Thomas said: “It’s really profound.”
. . .
Dr. Thomas and his colleagues propose that the invention of fire, not farming, gave rise to the need for more amylase. Once early humans started cooking starchy foods, they needed more amylase to unlock the precious supply of glucose.
Mutations that gave people extra amylase helped them survive, and those mutations spread because of natural selection. That glucose, Dr. Thomas and his colleagues argue, provided the fuel for bigger brains.

For the full story, see:
Carl Zimmer. “MATTER; For Evolving Brains, a ‘Paleo’ Diet of Carbs.” The New York Times (Tues., AUG. 18, 2015): D5.
(Note: ellipses added.)
(Note: the online version of the story has the date AUG. 13, 2015.)

The academic article summarized in the passages above is:
Hardy, Karen, Jennie Brand-Miller, Katherine D. Brown, Mark G. Thomas, and Les Copeland. “The Importance of Dietary Carbohydrate in Human Evolution.” The Quarterly Review of Biology 90, no. 3 (Sept. 2015): 251-68.

More Danger from Existing Artificial Stupidity than from Fictional Artificial Intelligence

(p. B6) In the kind of artificial intelligence, or A.I., that most people seem to worry about, computers decide people are a bad idea, so they kill them. That is undeniably bad for the human race, but it is a potentially smart move by the computers.
But the real worry, specialists in the field say, is a computer program rapidly overdoing a single task, with no context. A machine that makes paper clips proceeds unfettered, one example goes, and becomes so proficient that overnight we are drowning in paper clips.
In other words, something really dumb happens, at a global scale. As for those “Terminator” robots you tend to see in scary news stories about an A.I. apocalypse, forget it.
“What you should fear is a computer that is competent in one very narrow area, to a bad degree,” said Max Tegmark, a professor of physics at the Massachusetts Institute of Technology and the president of the Future of Life Institute, a group dedicated to limiting the risks from A.I.
In late June, when a worker in Germany was killed by an assembly line robot, Mr. Tegmark said, “it was an example of a machine being stupid, not doing something mean but treating a person like a piece of metal.”
. . .
“These doomsday scenarios confuse the science with remote philosophical problems about the mind and consciousness,” Oren Etzioni, chief executive of the Allen Institute for Artificial Intelligence, a nonprofit that explores artificial intelligence, said. “If more people learned how to write software, they’d see how literal-minded these overgrown pencils we call computers actually are.”
What accounts for the confusion? One big reason is the way computer scientists work. “The term ‘A.I.’ came about in the 1950s, when people thought machines that think were around the corner,” Mr. Etzioni said. “Now we’re stuck with it.”
It is still a hallmark of the business. Google’s advanced A.I. work is at a company it acquired called DeepMind. A pioneering company in the field was called Thinking Machines. Researchers are pursuing something called Deep Learning, another suggestion that we are birthing intelligence.
. . .
DeepMind made a program that mastered simple video games, but it never took the learning from one game into another. The 22 rungs of a neural net it climbs to figure out what is in a picture do not operate much like human image recognition and are still easily defeated.

For the full story, see:
QUENTIN HARDY. “The Real Threat Computers Pose: Artificial Stupidity, Not Intelligence.” The New York Times (Mon., JULY 13, 2015): B6.
(Note: ellipses added.)
(Note: the online version of the story has the date JULY 11, 2015, and has the title “The Real Threat Posed by Powerful Computers.”)

Marie Curie Opposed Patents Because Women Could Not Own Property in France

(p. C6) Ms. Wirtén, a professor at Linköping University in Sweden, pays special attention to the decision not to patent and how it was treated in the founding texts of the Curie legend: Curie’s 1923 biography of her husband, “Pierre Curie,” and their daughter Eve’s 1937 biography of her mother, “Madame Curie.” The books each recount a conversation in which husband and wife agree that patenting their radium method would be contrary to the spirit of science.
It is not quite that simple. As Ms. Wirtén points out, the Curies derived a significant portion of their income from Pierre’s patents on instruments. Various factors besides beneficence could have affected their decision not to extend this approach to their radium process. Intriguingly, the author suggests that the ineligibility of women to own property under French law might have shaped Curie’s perspective. “Because the law excluded her from the status of person upon which these intellectual property rights depend,” Ms. Wirtén writes, “the ‘property’ road was closed to Marie Curie. The persona road was not.”

For the full review, see:
EVAN HEPLER-SMITH. “Scientific Saint; After scandals in France, Curie was embraced by American women as an intellectual icon.” The Wall Street Journal (Sat., March 21, 2015): C6.
(Note: the online version of the review has the date March 20, 2015.)

The book under review is:
Wirtén, Eva Hemmungs. Making Marie Curie: Intellectual Property and Celebrity Culture in an Age of Information. Chicago: University of Chicago Press, 2015.

Pentagon Seeks Innovation from Private Start-Ups Since “They’ve Realized that the Old Model Wasn’t Working Anymore”

(p. A3) SAN FRANCISCO — A small group of high-ranking Pentagon officials made a quiet visit to Silicon Valley in December to solicit national security ideas from start-up firms with little or no history of working with the military.
The visit was made as part of an effort to find new ways to maintain a military advantage in an increasingly uncertain world.
In announcing its Defense Innovation Initiative in a speech in California in November, Chuck Hagel, then the defense secretary, mentioned examples of technologies like robotics, unmanned systems, miniaturization and 3-D printing as places to look for “game changing” technologies that would maintain military superiority.
“They’ve realized that the old model wasn’t working anymore,” said James Lewis, director of the Strategic Technologies Program at the Center for Strategic and International Studies in Washington. “They’re really worried about America’s capacity to innovate.”
There is a precedent for the initiative. Startled by the Soviet launch of the Sputnik satellite in 1957, President Dwight D. Eisenhower created the Advanced Research Projects Agency, or ARPA, at the Pentagon to ensure that the United States would not be blindsided by technological advances.
Now, the Pentagon has decided that the nation needs more than ARPA, renamed the Defense Advanced Research Projects Agency, or Darpa, if it is to find new technologies to maintain American military superiority.
. . .
The Pentagon focused on smaller companies during its December visit; it did not, for example, visit Google. Mr. Welby acknowledged that Silicon Valley start-ups were not likely to be focused on the Pentagon as a customer. The military has captive suppliers and a long and complex sales cycle, and it is perceived as being a small market compared with the hundreds of millions of customers for consumer electronics products.
Mr. Welby has worked for three different Darpa directors, but he said that Pentagon officials now believed they had to look beyond their own advanced technology offices.
“The Darpa culture is about trying to understand high-risk technology,” he said. “It’s about big leaps.” Today, however, the Pentagon needs to break out of what can be seen as a “not invented here” culture, he said.
“We’re thinking about what the world is going to look like in 2030 and what tools the department will need in 20 or 30 years,” he added.

For the full story, see:
JOHN MARKOFF. “Pentagon Shops in Silicon Valley for Game Changers.” The New York Times (Fri., FEB. 27, 2015): A3.
(Note: ellipsis added.)
(Note: the online version of the story has the date FEB. 26, 2015.)

More Tech Stars Skip College, at Least for a While

(p. B1) The college dropout-turned-entrepreneur is a staple of Silicon Valley mythology. Steve Jobs, Bill Gates and Mark Zuckerberg all left college.
In their day, those founders were very unusual. But a lot has changed since 2005, when Mr. Zuckerberg left Harvard. The new crop of dropouts has grown up with the Internet and smartphones. The tools to create new technology are more accessible. The cost to start a company has plunged, while the options for raising money have multiplied.
Moreover, the path isn’t as lonely.
. . .
Not long ago, dropping out of school to start a company was considered risky. For this generation, it is a badge of honor, evidence of ambition and focus. Very few dropouts become tycoons, but “failure” today often means going back to school or taking a six-figure job at a big tech company.
. . .
(p. B5) There are no hard numbers on the dropout trend, but applicants for the Thiel Fellowship tripled in the most recent year; the fellowship won’t disclose numbers.
. . .
It has tapped 82 fellows in the past five years.
“I don’t think college is always bad, but our society seems to think college is always good, for everyone, at any cost–and that is what we have to question,” says Mr. Thiel, a co-founder of PayPal and an early investor in Facebook.
Of the 43 fellows in the initial classes of 2011 and 2012, 26 didn’t return to school and continued to work on startups or independent projects. Five went to work for large tech firms, including a few through acquisitions. The remaining 12 went back to school.
Mr. Thiel says companies started by the fellows have raised $73 million, a record that he says has attracted additional applicants. He says fellows “learned far more than they would have in college.”

For the full story, see:
DAISUKE WAKABAYASHI. “College Dropouts Thrive in Tech.” The Wall Street Journal (Thurs., June 4, 2015): B1 & B10.
(Note: ellipses added. The phrase “the fellowship won’t disclose numbers” was in the online, but not the print, version of the article.)
(Note: the online version of the article has the date June 3, 2015, and has the title “College Dropouts Thrive in Tech.”)

Computer Programs “Lack the Flexibility of Human Thinking”

(p. A11) . . . let’s not panic. “Superintelligent” machines won’t be arriving soon. Computers today are good at narrow tasks carefully engineered by programmers, like balancing checkbooks and landing airplanes, but after five decades of research, they are still weak at anything that looks remotely like genuine human intelligence.
. . .
Even the best computer programs out there lack the flexibility of human thinking. A teenager can pick up a new videogame in an hour; your average computer program still can only do just the single task for which it was designed. (Some new technologies do slightly better, but they still struggle with any task that requires long-term planning.)

For the full commentary, see:
GARY MARCUS. “Artificial Intelligence Isn’t a Threat–Yet; Superintelligent machines are still a long way off, but we need to prepare for their future rise.” The Wall Street Journal (Sat., Dec. 13, 2014): A11.
(Note: ellipsis added.)
(Note: the online version of the commentary has the date Dec. 11, 2014.)

Cultural and Institutional Differences Between Europe and U.S. Keep Europe from Having a Silicon Valley

(p. B7) “They all want a Silicon Valley,” Jacob Kirkegaard, a Danish economist and senior fellow at the Peterson Institute for International Economics, told me this week. “But none of them can match the scale and focus on the new and truly innovative technologies you have in the United States. Europe and the rest of the world are playing catch-up, to the great frustration of policy makers there.”
Petra Moser, assistant professor of economics at Stanford and its Europe Center, who was born in Germany, agreed that “Europeans are worried.”
“They’re trying to recreate Silicon Valley in places like Munich, so far with little success,” she said. “The institutional and cultural differences are still too great.”
. . .
There is . . . little or no stigma in Silicon Valley to being fired; Steve Jobs himself was forced out of Apple. “American companies allow their employees to leave and try something else,” Professor Moser said. “Then, if it works, great, the mother company acquires the start-up. If it doesn’t, they hire them back. It’s a great system. It allows people to experiment and try things. In Germany, you can’t do that. People would hold it against you. They’d see it as disloyal. It’s a very different ethic.”
Europeans are also much less receptive to the kind of truly disruptive innovation represented by a Google or a Facebook, Mr. Kirkegaard said.
He cited the example of Uber, the ride-hailing service that despite its German-sounding name is a thoroughly American upstart. Uber has been greeted in Europe like the arrival of a virus, and its reception says a lot about the power of incumbent taxi operators.
“But it goes deeper than that,” Mr. Kirkegaard said. “New Yorkers don’t get all nostalgic about yellow cabs. In London, the black cab is seen as something that makes London what it is. People like it that way. Americans tend to act in a more rational and less emotional way about the goods and services they consume, because it’s not tied up with their national and regional identities.”
. . .
With its emphasis on early testing and sorting, the educational system in Europe tends to be very rigid. “If you don’t do well at age 18, you’re out,” Professor Moser said. “That cuts out a lot of people who could do better but never get the chance. The person who does best at a test of rote memorization at age 17 may not be innovative at 23.” She added that many of Europe’s most enterprising students go to the United States to study and end up staying.
She is currently doing research into creativity. “The American education system is much more forgiving,” Professor Moser said. “Students can catch up and go on to excel.”
Even the vaunted European child-rearing, she believes, is too prescriptive. While she concedes there is as yet no hard scientific evidence to support her thesis, “European children may be better behaved, but American children may end up being more free to explore new things.”

For the full story, see:
JAMES B. STEWART. “Common Sense; A Fearless Culture Fuels Tech.” The New York Times (Fri., JUNE 19, 2015): B1 & B7.
(Note: ellipses added.)
(Note: the online version of the story has the date JUNE 18, 2015, and has the title “Common Sense; A Fearless Culture Fuels U.S. Tech Giants.”)

Chimps Are Willing to Delay Gratification in Order to Receive Cooked Food

This is a big deal because cooking lets us humans spend far less energy digesting our food, freeing up much more energy for the brain. So one theory is that cooking technology allowed humans eventually to develop cognitive abilities superior to those of other primates.

(p. A3) . . . scientists from Harvard and Yale found that chimps have the patience and foresight to resist eating raw food and to place it in a device meant to appear, at least to the chimps, to cook it.
. . .
But they found that chimps would give up a raw slice of sweet potato in the hand for the prospect of a cooked slice of sweet potato a bit later. That kind of foresight and self-control is something any cook who has eaten too much raw cookie dough can admire.
The research grew out of the idea that cooking itself may have driven changes in human evolution, a hypothesis put forth by Richard Wrangham, an anthropologist at Harvard, and several colleagues about 15 years ago in an article in Current Anthropology, and more recently in his book, “Catching Fire: How Cooking Made Us Human.”
He argued that cooking may have begun something like two million years ago, even though hard evidence only dates back about one million years. For that to be true, some early ancestors, perhaps not much more advanced than chimps, had to grasp the whole concept of transforming the raw into the cooked.
Felix Warneken at Harvard and Alexandra G. Rosati, who is about to move from Yale to Harvard, both of whom study cognition, wanted to see if chimpanzees, which often serve as stand-ins for human ancestors, had the cognitive foundation that would prepare them to cook.
. . .
Dr. Rosati said the experiments showed not only that chimps had the patience for cooking, but that they had the “minimal causal understanding they would need” to make the leap to cooking.

For the full story, see:
JAMES GORMAN. “Chimpanzees Would Cook if Given Chance, Research Says.” The New York Times (Weds., JUNE 3, 2015): A3.
(Note: ellipses added.)
(Note: the online version of the story has the date JUNE 2, 2015, and has the title “Chimpanzees Would Cook if Given the Chance, Research Says.”)

The academic article discussed in the passages quoted above is:
Warneken, Felix, and Alexandra G. Rosati. “Cognitive Capacities for Cooking in Chimpanzees.” Proceedings of the Royal Society of London B: Biological Sciences 282, no. 1809 (June 22, 2015).