Drones “Stifled” by Stringent Regulations

(p. B5) The commercial drone industry is being stifled by unnecessarily stringent federal safety rules enforced by regulators who frequently pay only lip service to easing restrictions or streamlining decision-making, according to a report by the National Academies of Sciences, Engineering and Medicine.
The unusually strongly worded report released Monday [June 11, 2018] urges “top-to-bottom” changes in how the Federal Aviation Administration assesses and manages risks from drones.
. . .
. . . minimal but persistent levels of risk already are accepted by the public, according to the report. A fundamental issue is “what are we going to compare [drone] safety to?” said consultant George Ligler, who served as chairman of the committee that drafted the document.
“We do not ground airplanes because birds fly in the airspace, although we know birds can and do bring down aircraft,” the report said.

For the full story, see:
Andy Pasztor. “FAA’s Safety Rules for Commercial Drones Are Overly Strict, Report Says.” The Wall Street Journal (Tuesday, June 12, 2018): B5.
(Note: ellipses, and bracketed date, added.)
(Note: the online version of the story has the date June 11, 2018, and has the title “FAA’s Safety Rules for Commercial Drones Are Overly Strict, Report Says.”)

History of Energy Shows Power of Human Ingenuity to Solve Problems

(p. 16) In this meticulously researched work, Rhodes brings his fascination with engineers, scientists and inventors along as he presents an often underappreciated history: four centuries through the evolution of energy and how we use it. He focuses on the introduction of each new energy source, and the discovery and gradual refinement of technologies that eventually made them dominant. The result is a book that is as much about innovation and ingenuity as it is about wood, coal, kerosene or oil.

. . .

Moreover, there is a familiar pattern when one energy source supplants another: As each obstacle is cleared, a new one appears. The distillation of Pennsylvania “rock oil,” for instance, established that it offered a superior mode of lighting, a discovery that immediately presented the challenge of producing such oil — then collected from places where it bubbled to the surface — in sufficient quantities. Similarly, the invention of the petroleum-fueled internal combustion engine required Charles F. Kettering and Thomas Midgley Jr. to resolve the pressing problem of “engine knock” that resulted from small, damaging explosions in the cylinders.

. . .

. . . , by the end one gets a sense of boosted confidence about the ability of technology and human ingenuity to solve even those problems that at first seem insurmountable.

For the full review, see:
Meghan L. O’Sullivan. “Power On.” The New York Times Book Review (Sunday, June 24, 2018): 16.
(Note: ellipses added.)
(Note: the online version of the review has the date June 18, 2018, and has the title “A History of the Energy We Have Consumed.”)

The book under review is:
Rhodes, Richard. Energy: A Human History. New York: Simon & Schuster, 2018.

Regulations Support Car Incumbents and Undermine Tesla Profitability

(p. A13) . . . governments everywhere have decided, perversely, that electric cars will not be profitable. In every major market–the U.S., Europe, China–the same political dispensation now applies: Established auto makers effectively will be required to make and sell electric cars at a loss in order to continue profiting from gas-powered vehicles.
This has rapidly become the institutional structure of the electric-car industry world-wide, for the benefit of the incumbents, whether GM in the U.S. or Daimler in Germany. Let’s face it, the political class always had a bigger investment in these incumbents than it ever did in Tesla.
Tesla has a great brand, great technology and great vehicles. To survive, it also needs to mate itself to a nonelectric pickup truck business. . . .
We’ll save for another day the relating of this phenomenon to Mr. Musk’s recently erratic behavior and pronouncements. . . . Keep your eye on the bigger picture–the bigger picture is the global regulatory capture of the electric car moment by the status quo. And note the irony that Tesla’s home state of California was the original pioneer of this insiders’ regulatory bargain with its so-called zero-emissions-vehicle mandate.
Electric cars were going to remain a niche in any case, but public policy is quickly ruling out the possibility (which Tesla needed) of them at least being a profitable niche.

For the full commentary, see:
Holman W. Jenkins, Jr. “BUSINESS WORLD; A Tesla Crackup Foretold; The real problem is that governments everywhere have ordained that electric cars will be sold at a loss.” The Wall Street Journal (Saturday, June 23, 2018): A13.
(Note: ellipses added.)
(Note: the online version of the commentary has the date June 22, 2018.)

Libertarian Peter Thiel Predicts Communist China’s Tech Success (What?)

(p. B1) The Trump administration gave ZTE, which employs 75,000 people and is the world’s No. 4 maker of telecom gear, a stay of execution on Thursday. ZTE, which had violated American sanctions, agreed to pay a $1 billion fine and to allow monitors to set up shop in its headquarters. In return, the company — once a symbol of China’s progress and engineering know-how — will be allowed to buy the American-made microchips, software and other tools it needs to survive.
China’s technology boom, it turns out, has been largely built on top of Western technology.
The ZTE incident, as it is called in China, may be the country’s Sputnik moment. Like the United States in 1957, watching helplessly as the Soviet Union launched the first human-made satellite, many people in China now see how far the country still has to go.
“We realized,” said Dong Jielin, an adjunct professor at the Research Center for Technological Innovation at Tsinghua University in Beijing, “that China’s prosperity was built on sand.”
. . .
(p. B3) . . . many in China — and many cheerleaders of the Chinese tech scene — . . . found themselves in a feedback loop of their own making. The powerful propaganda machine flooded out rational voices, said Ms. Dong of Tsinghua University. The tech boom fits perfectly into Beijing’s grand narrative of a national rejuvenation. Innovation and entrepreneurship are top national policies, with enormous financial backing from the government. Even now, some articles critical of China’s lagging semiconductor industry have disappeared from the internet there.
And it wasn’t just Chinese people. Michael Moritz, the American venture capital investor, warned that China “is leaving Donald Trump’s America behind.” Peter Thiel, a PayPal co-founder, wondered how long it would take for China to overtake the United States. Three to four years, he concluded.
The boom kept many from asking hard questions. They promoted China’s surge in patent filings without looking at whether the patents were any good. They didn’t ask why China still imports 90 percent of its semiconductor components even though the industry became a national priority in 2000.

For the full commentary, see:
Li Yuan. “China’s Sputnik Moment.” The New York Times (Monday, June 11, 2018): B1 & B3.
(Note: ellipses added.)
(Note: the online version of the commentary has the date June 10, 2018, and has the title “THE NEW NEW WORLD; ZTE’s Near-Collapse May Be China’s Sputnik Moment.”)

A.I. Assists, but Does Not Replace, Humans

(p. B4) Some Phoenix-area residents have been hailing rides in minivans with no drivers and no human safety operators inside. But that doesn’t mean they’re on their own if trouble arises.
From a command center, employees at Alphabet Inc.’s Waymo driverless-car unit monitor the test vehicles on computer screens, able to wirelessly peer in through the minivan’s cameras. If the robot brain maneuvering the vehicle gets confused by a situation–say, a car unexpectedly stalled in front of it or closed lanes of traffic–it will stop the vehicle and ask the command center to verify what it is seeing. If the human confirms the situation, the robot will calculate how it should navigate around the hazard.
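
The story gives no implementation details, but the decision flow it describes (the car stops when unsure, asks a remote operator to confirm what it sees, then computes its own new route) can be sketched roughly in Python. Everything below is a hypothetical illustration; the names (CommandCenter, request_confirmation) and the 0.8 confidence threshold are my own placeholders, not anything from Waymo.

# Hypothetical sketch of the "stop and ask" flow described above; not Waymo code.
from dataclasses import dataclass

@dataclass
class SceneAssessment:
    description: str   # e.g. "car stalled in the lane ahead"
    confidence: float  # the planner's confidence in its own interpretation

class CommandCenter:
    """Stand-in for the remote operators who can peer in through the car's cameras."""
    def request_confirmation(self, assessment: SceneAssessment) -> bool:
        print("Operator asked to verify:", assessment.description)
        return True  # in reality a human looks at the camera feed and answers

def handle_scene(assessment: SceneAssessment, center: CommandCenter) -> str:
    if assessment.confidence >= 0.8:  # assumed threshold, for illustration only
        return "proceed with current plan"
    # Low confidence: stop the vehicle and ask the command center to verify.
    if center.request_confirmation(assessment):
        # The human only confirms the situation; the car still plans the detour itself.
        return "calculate a route around the confirmed hazard"
    return "remain stopped and wait"

print(handle_scene(SceneAssessment("car stalled in the lane ahead", 0.4), CommandCenter()))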

For the full story, see:
Tim Higgins. “Driverless Autos Get Help From Humans Watching Remotely.” The Wall Street Journal (Monday, June 7, 2018): B4.
(Note: the online version of the story has the date June 5, 2018, and has the title “Driverless Cars Still Handled by Humans–From Afar.”)

Human Intelligence Helps A.I. Work Better

(p. B3) A recent study at the M.I.T. Media Lab showed how biases in the real world could seep into artificial intelligence. Commercial software is nearly flawless at telling the gender of white men, researchers found, but not so for darker-skinned women.
And Google had to apologize in 2015 after its image-recognition photo app mistakenly labeled photos of black people as “gorillas.”
Professor Nourbakhsh said that A.I.-enhanced security systems could struggle to determine whether a nonwhite person was arriving as a guest, a worker or an intruder.
One way to parse the system’s bias is to make sure humans are still verifying the images before responding.
“When you take the human out of the loop, you lose the empathetic component,” Professor Nourbakhsh said. “If you keep humans in the loop and use these systems, you get the best of all worlds.”

For the full story, see:
Paul Sullivan. “WEALTH MATTERS; Can Artificial Intelligence Keep Your Home Secure?” The New York Times (Saturday, June 30, 2018): B3.
(Note: the online version of the story has the date June 29, 2018.)

The “recent study” mentioned above is:
Buolamwini, Joy, and Timnit Gebru. “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” Proceedings of Machine Learning Research 81 (2018): 1-15.

Collaborative Robots (Cobots) Fall in Price and Rise in Ease of Programming

(p. B4) Robots are moving off the assembly line.
Collaborative robots that work alongside humans–“cobots”–are getting cheaper and easier to program. That is encouraging businesses to put them to work at new tasks in bars, restaurants and clinics.
In the Netherlands, a cobot scales a 26-foot-high bar to tap bottles of homemade gin, whiskey and limoncello so that bartenders don’t need to climb ladders. In Japan, a cobot boxes takeout dumplings. In Singapore, robots give soft-tissue massages.
Cobots made up just 5% of the $14 billion industrial-robot market in 2017, according to research by Minneapolis-based venture-capital firm Loup Ventures. Loup estimates sales will jump to 27% of a $33 billion market by 2025 as demand for the robotic arms rises. About 20 manufacturers around the world have started selling such robots in the past decade.

For the full story, see:
Natasha Khan. “Robots Shift From Factories to New Jobs.” The Wall Street Journal (Monday, June 11, 2018): B4.
(Note: the online version of the story has the date June 9, 2018, and has the title “Your Next Robot Encounter: Dinner, Drinks and a Massage.”)

China Will Fail to Corner the Lithium Market

(p. B12) Since emerging as an industrial superpower in the 2000s, China has repeatedly tried to lock up essential resources like iron ore and so-called rare earths. The latest example is lithium, a key battery element: . . . .
. . .
The reality is more mundane.
. . .
. . . it will take just $13 billion in investment to satisfy annual lithium consumption as of 2030, against more than $100 billion for nickel and copper.
Even if only a relatively small amount of mining capital spending migrates from mainstays like iron ore into lithium over the next decade, supply probably won’t be a huge problem.

For the full story, see:
Nathaniel Taplin. “China Won’t Be Able to Dominate Lithium Mining Forever.” The Wall Street Journal (Friday, May 18, 2018): B12.
(Note: ellipses added.)
(Note: the online version of the story has the date May 17, 2018, and has the title “China Won’t Dominate Lithium Forever.” The last sentence quoted above appeared in the online, but not in the print, version of the article.)

Assigning Property Rights to Internet Data Creators

(p. C3) Congress has stepped up talk of new privacy regulations in the wake of the scandal involving Cambridge Analytica, which improperly gained access to the data of as many as 87 million Facebook users. Even Facebook chief executive Mark Zuckerberg testified that he thought new federal rules were “inevitable.” But to understand what regulation is appropriate, we need to understand the source of the problem: the absence of a real market in data, with true property rights for data creators. Once that market is in place, implementing privacy protections will be easy.
We often think of ourselves as consumers of Facebook, Google, Instagram and other internet services. In reality, we are also their suppliers–or more accurately, their workers. When we post and label photos on Facebook or Instagram, use Google maps while driving, chat in multiple languages on Skype or upload videos to YouTube, we are generating data about human behavior that the companies then feed into machine-learning programs.
These programs use our personal data to learn patterns that allow them to imitate human behavior and understanding. With that information, computers can recognize images, translate languages, help viewers choose among shows and offer the speediest route to the mall. Companies such as Facebook, Google and Microsoft (where one of us works) sell these tools to other companies. They also use our data to match advertisers with consumers.
Defenders of the current system often say that we don’t give away our personal data for free. Rather, we’re paid in the form of the services that we receive. But this exchange is bad for users, bad for society and probably not ideal even for the tech companies. In a real market, consumers would have far more power over the exchange: Here’s my data. What are you willing to pay for it?
An internet user today probably would earn only a few hundred dollars a year if companies paid for data. But that amount could grow substantially in the coming years. If the economic reach of AI systems continues to expand–into drafting legal contracts, diagnosing diseases, performing surgery, making investments, driving trucks, managing businesses–they will need vast amounts of data to function.
And if these systems displace human jobs, people will have plenty of time to supply that data. Tech executives fearful that AI will cause mass unemployment have advocated a universal basic income funded by increased taxes. But the pressure for such policies would abate if users were simply compensated for their data.

For the full commentary, see:
Eric A. Posner and E. Glen Weyl. “Want Our Personal Data? Pay for It.” The Wall Street Journal (Saturday, April 21, 2018): C3.
(Note: the online version of the commentary has the date April 20, 2018.)

The commentary quoted above is based on:
Posner, Eric A., and E. Glen Weyl. Radical Markets: Uprooting Capitalism and Democracy for a Just Society. Princeton, NJ: Princeton University Press, 2018.

Stornetta and Haber’s Work Made Nakamoto’s Bitcoin Possible

(p. C18) In 1990, the physicist Scott Stornetta had a eureka moment while getting ice cream with his family at a Friendly’s restaurant in Morristown, N.J. He and his cryptographer colleague, Stuart Haber, had been thinking about the proliferation of digital files that accompanied the rise of personal computing and the ease with which files could be altered. They wondered how we might know for certain what was true about the past. What would prevent tampering with the historical record–and would it be possible to protect such information for future generations?
The sticking point was the need to trust a central authority. But at Friendly’s, an answer came to Dr. Stornetta: He realized that instead of a central record-keeper, the system could have many dispersed but interconnected copies of a shared ledger. The truth could never be typed over if there were too many linked ledgers to alter.
Drs. Haber and Stornetta were working at the time at Bellcore, a research center descended from the legendary Bell Labs. The pair set out to build a cryptographically secure archive–a way to verify records without revealing their contents.
. . .
. . . there is no mistaking their crucial contribution. When the founding document of bitcoin was published in 2008 under the name “Satoshi Nakamoto”–a pseudonym for one or more scientists–it had just eight citations of previous works. Three of them were papers co-authored by Drs. Haber and Stornetta.
. . .
The Nakamoto paper revolutionized the foundational work of Drs. Stornetta and Haber by adding the concept of “mining” cryptocurrencies. It created financial incentives for participation in retaining and verifying parts of the blockchain ledger.
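
The commentary describes the linked-ledger insight only in words. A toy hash chain in Python (my own illustration, far simpler than Haber and Stornetta’s actual scheme or bitcoin’s blockchain) shows why a past record cannot be quietly typed over: each entry’s hash commits to the previous one, so altering any early record changes every later hash, and a tamperer would have to rewrite every dispersed copy of the ledger.

# Toy hash chain, for illustration only: each entry commits to the hash of the
# previous entry, so tampering with an old record breaks every later hash.
import hashlib

def entry_hash(prev_hash: str, content: str) -> str:
    return hashlib.sha256((prev_hash + content).encode()).hexdigest()

def build_chain(records):
    chain, prev = [], "0" * 64  # arbitrary starting value for the first link
    for content in records:
        prev = entry_hash(prev, content)
        chain.append((content, prev))
    return chain

original = build_chain(["deed recorded", "contract signed", "payment made"])
tampered = build_chain(["deed ALTERED", "contract signed", "payment made"])
print(original[-1][1] == tampered[-1][1])  # False: the final hashes no longer agree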

For the full commentary, see:
Amy Whitaker. “The Eureka Moment That Made Bitcoin Possible; A key insight for the technology came to a physicist almost three decades ago at a Friendly’s restaurant in New Jersey.” The Wall Street Journal (Saturday, May 26, 2018): C18.
(Note: ellipses added.)
(Note: the online version of the commentary has the date May 25, 2018.)

“Infatuation with Deep Learning May Well Breed Myopia . . . Overinvestment . . . and Disillusionment”

(p. B1) For the past five years, the hottest thing in artificial intelligence has been a branch known as deep learning. The grandly named statistical technique, put simply, gives computers a way to learn by processing vast amounts of data.
. . .
But now some scientists are asking whether deep learning is really so deep after all.
In recent conversations, online comments and a few lengthy essays, a growing number of A.I. experts are warning that the infatuation with deep learning may well breed myopia and overinvestment now — and disillusionment later.
“There is no real intelligence there,” said Michael I. Jordan, a professor at the University of California, Berkeley, and the author of an essay published in April intended to temper the lofty expectations surrounding A.I. “And I think that trusting these brute force algorithms too much is a faith misplaced.”
The danger, some experts warn, is (p. B4) that A.I. will run into a technical wall and eventually face a popular backlash — a familiar pattern in artificial intelligence since that term was coined in the 1950s. With deep learning in particular, researchers said, the concerns are being fueled by the technology’s limits.
Deep learning algorithms train on a batch of related data — like pictures of human faces — and are then fed more and more data, which steadily improve the software’s pattern-matching accuracy. Although the technique has spawned successes, the results are largely confined to fields where those huge data sets are available and the tasks are well defined, like labeling images or translating speech to text.
The technology struggles in the more open terrains of intelligence — that is, meaning, reasoning and common-sense knowledge. While deep learning software can instantly identify millions of words, it has no understanding of a concept like “justice,” “democracy” or “meddling.”
Researchers have shown that deep learning can be easily fooled. Scramble a relative handful of pixels, and the technology can mistake a turtle for a rifle or a parking sign for a refrigerator.
In a widely read article published early this year on arXiv.org, a site for scientific papers, Gary Marcus, a professor at New York University, posed the question: “Is deep learning approaching a wall?” He wrote, “As is so often the case, the patterns extracted by deep learning are more superficial than they initially appear.”
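
The “scramble a handful of pixels” fragility is easy to reproduce on a toy model. The sketch below is my own illustration in plain NumPy, not code from the article or from the Marcus paper: it uses a randomly weighted linear “classifier” as a stand-in and nudges every pixel by at most 0.1 in the direction that favors the runner-up class, which is usually enough to change the predicted label. Real adversarial examples against deep networks are built along the same gradient-following lines.

# Toy adversarial-example sketch (illustration only, not any production system).
import numpy as np

rng = np.random.default_rng(0)
num_pixels, num_classes = 28 * 28, 10
W = rng.normal(scale=0.05, size=(num_classes, num_pixels))  # stand-in "trained" weights
x = rng.uniform(size=num_pixels)                            # stand-in image, pixels in [0, 1]

logits = W @ x
label = int(np.argmax(logits))        # what the toy model currently "sees"
target = int(np.argsort(logits)[-2])  # the runner-up class we push it toward

# Nudge each pixel by at most 0.1 in whichever direction favors the target class
# over the current label (for a linear model this is the gradient's sign).
x_adv = np.clip(x + 0.1 * np.sign(W[target] - W[label]), 0, 1)

print("before:", label, "after:", int(np.argmax(W @ x_adv)))  # labels typically differ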

For the full story, see:
Steve Lohr. “Researchers Seek Smarter Paths to A.I.” The New York Times (Thursday, June 21, 2018): B1 & B4.
(Note: ellipses added.)
(Note: the online version of the story has the date June 20, 2018, and has the title “Is There a Smarter Path to Artificial Intelligence? Some Experts Hope So.” The June 21st date is the publication date in my copy of the National Edition.)

The essay by Jordan, mentioned above, is:
Jordan, Michael I. “Artificial Intelligence – the Revolution Hasn’t Happened Yet.” Medium.com, April 18, 2018.

The manuscript by Marcus, mentioned above, is:
Marcus, Gary. “Deep Learning: A Critical Appraisal.” arXiv.org, Jan. 2, 2018.