A.I. Researchers’ Joke: Whenever You Ask, Real A.I. Is 30 Years in the Future

On February 1, 2019, at a conference at Texas A&M, I saw a demonstration of prototype A.I. driverless-car technology. One of the lead researchers told us that it would be 30 years before we saw real driverless cars on the road.

(p. B3) While the A.C.L.U. is ringing alarm bells about the use of video analytics now, it’s anyone’s guess how quickly the technology will advance.

“The joke in A.I. is that you ask a bunch of A.I. researchers, ‘When are we going to achieve A.I.?’ and the answer always has been, ‘In 30 years,’” Mr. Vondrick said.

For the full story, see:

Niraj Chokshi. “Intelligent ‘Robot Surveillance’ Poses Threats, A.C.L.U. Warns.” The New York Times (Friday, July 14, 2019): B3.

(Note: the online version of the story has the date July 13, 2019, and has the title “How Surveillance Cameras Could Be Weaponized With A.I.”)

Thai Royal Navy Seizes Tiny Floating Galt’s Gulch

(p. A1) American software engineer Chad Elwartowski thought he had found the perfect refuge from the long arm of meddlesome, overbearing governments. It was a home floating in the turquoise waters far off the coasts of Thailand and Indonesia.

Last year, he joined a project that built an octagonal fiberglass pod and mounted it atop a floating steel spar that reached 65 feet down into the ocean, like a giant keel.

It was to be a place for people to gather and live by their own rules, he said, beyond the jurisdiction of any government. “I was free for a moment,” he wrote on his Facebook page after settling in with his girlfriend in March. “Probably the freest person in the world.”

Not anymore. He and his (p. A8) girlfriend, Supranee Thepdet, are in hiding on dry land after the Royal Thai Navy said their nautical haven was within Thai jurisdiction and accused them of trying to set up their own micro-nation. Last Monday, a utility ship towed the abandoned seastead to shore as evidence. Police say they are figuring out whether to request an arrest warrant for endangering Thai sovereignty—which potentially carries the death penalty.

The concept of a seastead—a homestead at sea—is a popular one in libertarian and cryptocurrency circles. Mr. Elwartowski, 46 years old, described it in a YouTube video as the closest he could get to the secret enclave cut off from the rest of society depicted in Ayn Rand’s novel “Atlas Shrugged.”

For the full story, see:

James Hookway. “Libertarian Nirvana at Sea Runs Into an Opponent: the Thai Navy.” The Wall Street Journal (Monday, April 29, 2019): A1 & A8.

(Note: the online version of the story has the date April 28, 2019, and the title “A Libertarian Nirvana at Sea Runs Into a Stubborn Opponent: the Thai Navy.”)

The Ayn Rand novel mentioned above is:

Rand, Ayn. Atlas Shrugged. New York: Random House, 1957.

Facebook Hires More Humans to Do What Its AI Cannot Do

(p. B5) If telling us what to look at next is Facebook’s raison d’être, then the AI that enables that endless spoon-feeding of content is the company’s most important, and sometimes most controversial, intellectual property.

. . .

At the same time, the company’s announcement that it is hiring more humans to screen ads and filter content shows there is so much essential to Facebook’s functionality that AI alone can’t accomplish.

AI algorithms are inherently black boxes whose workings can be next to impossible to understand—even by many Facebook engineers.

For the full commentary, see:

Christopher Mims. “KEYWORDS; The Algorithm Driving Facebook.” The Wall Street Journal (Monday, October 23, 2017): B1 & B5.

(Note: ellipses added.)

(Note: the online version of the commentary has the date Oct. 22, 2017, and the title “KEYWORDS; How Facebook’s Master Algorithm Powers the Social Network.”)

“If You Do No Harm, Then You Do No Harm to the Cancer, Either”

(p. B16) James F. Holland, a founding father of chemotherapy who helped pioneer a lifesaving drug treatment for pediatric leukemia patients, died on Thursday [March 22, 2018] at his home in Scarsdale, N.Y.

. . .

“Patients have to be subsidiaries of the trial,” he told The New York Times in 1986. “I’m not interested in holding patients’ hands. I’m interested in curing cancer.”

He acknowledged that some patients become guinea pigs, and that they sometimes suffer discomfort in the effort to eradicate tumors, but he said that even those who die provide lessons for others who will survive.

“If you do no harm,” Dr. Holland said, “then you do no harm to the cancer, either.”

. . .

Dr. Holland acknowledged that while experimenting with drug treatment sometimes amounts to trial and error, the primary killer is typically the disease itself.

“The thing to remember,” he said, “is that the deadliest thing about cancer chemotherapy is not the chemotherapy.”

Forecasts “of Doom and Gloom” Fail Because “Lot of Moving Parts That Are Not Well Understood”

(p. A3) The science community now believes tornadoes most likely build from the ground up and not from a storm cloud down, potentially making them harder to spot via radar early in the formation process. But scientists still struggle to say with certainty when and where a tornado will form, or why some storms spawn them and neighboring storms don’t.

“Sometimes the science and the atmosphere remind us of the limitations of what we can predict,” said Bill Bunting, chief of forecast operations at the National Oceanic and Atmospheric Administration’s Storm Prediction Center.

. . .

“We have big outlooks of doom and gloom, and nothing happens because there are a lot of moving parts that are not well understood yet,” said Erik Rasmussen, a research scientist with NOAA’s National Severe Storms Laboratory.

For the full story, see:

Erin Ailworth. “Tornadoes Outrun Forecaster Data.” The Wall Street Journal (Thursday, May 30, 2019): A3.

(Note: ellipsis added.)

(Note: the online version of the story has the date May 29, 2019, and the title “New Science Explains Why Tornadoes Are So Hard to Forecast.”)

Art Diamond on EconTalk 8/12/19 Podcast with Russ Roberts

The podcast will be posted sometime during the morning of Mon., 8/12/19. EconTalk podcasts can be downloaded from (or listened to at) econtalk.org.

Much of the “Intelligence” in Artificial Intelligence Is Human, Not Artificial

(p. B5) Everything we’re injecting artificial intelligence into—self-driving vehicles, robot doctors, the social-credit scores of more than a billion Chinese citizens and more—hinges on a debate about how to make AI do things it can’t, at present.

. . .

On one side of this debate are the proponents of “deep learning”—an approach that, since a landmark paper in 2012 by a trio of researchers at the University of Toronto, has exploded in popularity.

. . .

On the other side of this debate are researchers such as Gary Marcus, former head of Uber Technologies Inc.’s AI division and currently a New York University professor, who argues that deep learning is woefully insufficient for accomplishing the sorts of things we’ve been promised. It could never, for instance, be able to usurp all white collar jobs and lead us to a glorious future of fully automated luxury communism.

Dr. Marcus says that to get to “general intelligence”—which requires the ability to reason, learn on one’s own and build mental models of the world—will take more than what today’s AI can achieve.

“That they get a lot of mileage out of [deep learning] doesn’t mean that it’s the right tool for theory of mind or abstract reasoning,” says Dr. Marcus.

To go further with AI, “we need to take inspiration from nature,” says Dr. Marcus. That means coming up with other kinds of artificial neural networks, and in some cases giving them innate, pre-programmed knowledge—like the instincts that all living things are born with.

. . .

Until we figure out how to make our AIs more intelligent and robust, we’re going to have to hand-code into them a great deal of existing human knowledge, says Dr. Marcus. That is, a lot of the “intelligence” in artificial intelligence systems like self-driving software isn’t artificial at all. As much as companies need to train their vehicles on as many miles of real roads as possible, for now, making these systems truly capable will still require inputting a great deal of logic that reflects the decisions made by the engineers who build and test them.

For the full commentary, see:

Christopher Mims. “KEYWORDS; Should Artificial Intelligence Copy the Brain?” The Wall Street Journal (Saturday, October 26, 2017): B5.

(Note: ellipses added.)

(Note: the online version of the commentary has the same date as the print version, and has the title “KEYWORDS; Should Artificial Intelligence Copy the Human Brain?”)

Big, Frequent Meetings Are Unproductive and Crowd Out Deep Thought

(p. 7) To figure out why the workers in Microsoft’s device unit were so dissatisfied with their work-life balance, the organizational analytics team examined the metadata from their emails and calendar appointments. The team divided the business unit into smaller groups and looked for differences in the patterns between those where people were satisfied and those where they were unhappy.

It seemed as if the problem would involve something about after-hours work. But no matter how Ms. Klinghoffer and Mr. Fuller crunched the data, there weren’t any meaningful correlations to be found between groups that had a lot of tasks to do at odd times and those that were unhappy. Gut instincts about overwork just weren’t supported by the numbers.

The two kept iterating until something emerged in the data. People in Mr. Ostrum’s division were spending an awful lot of time in meetings: an average of 27 hours a week. That wasn’t so much more than the typical team at Microsoft. But what really distinguished those teams with low satisfaction scores from the rest was that their meetings tended to include a lot of people — 10 or 20 bodies arrayed around a conference table coordinating plans, as opposed to two or three people brainstorming ideas.

The issue wasn’t that people had to fly to China or make late-night calls. People who had taken jobs requiring that sort of commitment seemed to accept these things as part of the deal. The issue was that their managers were clogging their schedules with overcrowded meetings, reducing available hours for tasks that rewarded more focused concentration — thinking deeply about trying to solve a problem.

Data alone isn’t insight. But once the Microsoft executives had shaped the data into a form they could understand, they could better question employees about the source of their frustrations. Staffers’ complaints about spending evenings and weekends catching up with more solitary forms of work started to make more sense. Now it was clearer why the first cuts of the data didn’t reveal the problem. An engineer sitting down to do individual work for several hours on a Saturday afternoon probably wouldn’t bother putting it on her calendar, or create digital exhaust in the form of trading emails with colleagues during that time.

Anyone familiar with the office-drone lifestyle might scoff at what it took Microsoft to get here. Does it really take that much analytical firepower, and the acquisition of an entire start-up, to figure out that big meetings make people sad?
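To make the group-level comparison described above concrete, here is a minimal, purely hypothetical Python sketch of that kind of analysis: the team labels, meeting records, satisfaction scores, and column names are all my own invention for illustration, not Microsoft’s actual data or tooling.

```python
import pandas as pd

# Hypothetical calendar "digital exhaust": one row per meeting, tagged by team.
# All names and numbers below are invented for illustration.
meetings = pd.DataFrame({
    "team":        ["A", "A", "B", "B", "C", "C"],
    "hours":       [1.0, 2.0, 1.5, 1.0, 2.0, 1.0],
    "attendees":   [3, 4, 15, 20, 12, 18],
    "after_hours": [0, 1, 0, 0, 1, 1],   # 1 = meeting held outside normal hours
})

# Hypothetical survey results: average work-life-balance satisfaction per team.
satisfaction = pd.Series({"A": 4.2, "B": 2.1, "C": 2.6}, name="satisfaction")

# Aggregate the calendar metadata by team, then line it up with the survey scores.
by_team = meetings.groupby("team").agg(
    weekly_meeting_hours=("hours", "sum"),
    avg_meeting_size=("attendees", "mean"),
    after_hours_share=("after_hours", "mean"),
).join(satisfaction)

# Compare candidate explanations: which feature tracks (un)happiness across teams?
print(by_team.corr()["satisfaction"])
```

In this made-up example, satisfaction falls most sharply with average meeting size rather than with after-hours work, which mirrors the pattern the Microsoft analysts say they eventually found.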

For the full story, see:

Neil Irwin. “How to Win at Winner-Take-All.” The New York Times, SundayBusiness Section (Sunday, June 15, 2019): 1 & 6-7.

(Note: the online version of the story has the date June 15, 2019, and has the title “The Mystery of the Miserable Employees: How to Win in the Winner-Take-All Economy.”)

The article quoted above is adapted from:

Irwin, Neil. How to Win in a Winner-Take-All World: The Definitive Guide to Adapting and Succeeding in High-Performance Careers. New York: St. Martin’s Press, 2019.

“If You Lower the Hurdles to Innovation . . . , You’ll Get More of It”

(p. A2) You’d think from the debate raging in Washington that taxes are the key to economic growth. They aren’t. In the long run, innovation matters way more, and that depends on inspiration, experimentation and luck, not tax-law changes.

Yet presidents matter for promoting innovation even if it’s less glamorous than taxes. Their support often takes the form of directing money toward basic research or favored industries such as defense or renewable energy.

Under President Donald Trump the place to look is the regulators. Two of his appointees in particular, Food and Drug Administration Commissioner Scott Gottlieb and Federal Communications Commission Chairman Ajit Pai, have prioritized reducing regulatory hurdles to private investment as a way of boosting innovation. It’s too early to gauge their success, but the efforts merit more attention at a time when the growth debate is focused on steep, deficit-financed tax cuts.

. . .

At the FCC, Mr. Pai has targeted the “digital divide,” the gap in broadband access between some communities, especially in rural areas, and others. The share of U.S. households with a fixed broadband connection has stalled at roughly a third in recent years. Mr. Pai thinks the solution is “setting rules that maximize private investment in high-speed networks.”

Controversially, that includes a proposed rollback of his predecessor’s imposition of utility-like regulation so that internet service providers (ISPs) adhere to “net neutrality”—charging all content providers the same to access their networks. Without those limitations, he reckons ISPs will have more incentive to expand capacity and thus access; critics worry this will favor rich, established content providers over innovative newcomers.

. . .

. . . , Mr. Gottlieb’s and Mr. Pai’s theory is that if you lower the hurdles to innovation in specific sectors, you’ll get more of it. It offers a potentially more tangible payoff than fiddling with the tax code.

For the full commentary, see:

Greg Ip. “CAPITAL ACCOUNT; Why Innovation Tops Tax Cuts.” The Wall Street Journal (Thursday, October 26, 2017): A2.

(Note: ellipses added.)

(Note: the online version of the commentary has the date Oct. 25, 2017, and the title “CAPITAL ACCOUNT; Trump’s Regulators Aim to Boost Growth by Lowering Hurdles to Innovation.”)

Robots Relieve Restaurant Workers of Small, Mundane, Tedious Tasks

(p. A1) John Miller, chief executive and founder of CaliBurger LLC, finds it harder to find employees these days. His solution is Flippy, a robot that turns the burgers and cleans the hot, greasy grill.

The chain plans to install Flippy in up to ten of its 50 restaurants by year end. CaliBurger doesn’t intend to kick humans to the curb as a result. Flippy will handle the gruntwork, freeing employees to tidy the dining rooms and refill drinks, less arduous work that might make it easier to recruit and retain workers.

“We’re a long way from teaching a robot to walk the restaurant and do those things,” Mr. Miller said.

Experts have warned for years that robots will replace humans in restaurants. Instead, a twist on that prediction is unfolding. Amid the lowest unemployment in years, fast-food restaurants are turning to machines—not to get rid of workers, but because they can’t find enough.

. . .

(p. A10) Dunkin’ conducted focus groups with former employees to pinpoint the mundane tasks that made them want to leave and geared automation around that.

Workers used to create thousands of hand-written labels daily for everything from coffee to cheese expirations. Last year, Dunkin’ installed small terminals that print out expiration times.

Brewing a single pot involved grinding and weighing coffee and comparing its fineness and coarseness to a perfect sample. Now, some Dunkin’ shops use digital refractometers to determine if coffee meets specifications.

. . .

Alexandra Guajardo, the morning shift leader at a Dunkin’ Donuts shop in Corona, Calif., said she’s likely to stick with the job longer now than she otherwise would have.

“I don’t have to constantly be worried about other smaller tasks that were tedious,” she said. “I can focus on other things that need my attention in the restaurant.”

Mr. Murphy said he can’t see a time when a Dunkin’ Donuts shop is fully automated. The company experimented with a robot barista nearly two years ago at an innovation lab in Massachusetts. The robot did fine at making simple drinks, but couldn’t grasp custom orders, such as “light sugar.”

The machine also required a lot of cleaning and maintenance, and at up to $100,000 per robot, Mr. Murphy said he couldn’t see a return on the investment.

For the full story, see:

Julie Jargon and Eric Morath. “Short of Workers, Robots Man the Grill.” The Wall Street Journal (Monday, June 25, 2018): A1 & A10.

(Note: ellipses added.)

(Note: the online version of the story has the date June 24, 2018, and the title “Short of Workers, Fast-Food Restaurants Turn to Robots.”)