“No Clear Path” for A.I. to Match Humans in “Broad, Integrated, Flexible and Robust Understanding of the World”

The author of the comments quoted below is a Duke University Professor of Computer Science.

(p. A15) For those not working in AI, it can be difficult to interpret achievements in the field.
. . .
. . . the AI system solves problems in a very different way than humans.
. . .
Tasks that require responding to the same kind of standardized input over and over, with a clear measure of success, are a natural fit. Such tasks range from the diagnosis of medical images to flipping burgers. On the other hand, jobs that are messy and unpredictable and require an understanding of people and the broader world–I like to think of kindergarten teachers–will likely remain safe for a long time.
Much progress has been made in AI in a short time, so future breakthroughs are not unthinkable. For now, humans remain unsurpassed in their broad, integrated, flexible and robust understanding of the world.
. . .
. . . currently there is no clear path toward building such systems.

For the full commentary, see:
Vincent Conitzer. “Natural Intelligence Still Has Its Advantages; AI is disruptive, but it hasn’t rendered humanity obsolete.” The Wall Street Journal (Wednesday, Aug. 29, 2018): A15.
(Note: ellipses added.)
(Note: the online version of the commentary has the date Aug. 28, 2018.)

To Bacharach, Retiring from Music “Is Like Dying”

(p. 6B) NEW YORK (AP) — At age 90, Burt Bacharach hasn’t lost faith in the power of music.
“Music softens the heart, makes you feel something if it’s good, brings in emotion that you might not have felt before,” he said. “It’s a very powerful thing if you’re able to do it, if you have it in your heart to do something like that.”
. . .
Bacharach says he has no plans to stop writing or performing. He contributes music to a new album by Elvis Costello, a longtime admirer with whom Bacharach has worked before, and he continues to tour.
“You can throw up your hands and say, ‘I can’t do this anymore,’ but it’s what I do. I’m not just going to stop and retire, that is like dying, you know.”

For the full story, see:
AP. “School shootings inspire song by Bacharach, 90.” Omaha World-Herald (Tuesday, September 28, 2018): 6B.
(Note: ellipsis added.)

Dr. Charles Wilson Had Surgical Intuition, “Sort of an Invisible Hand”

(p. A19) Dr. Wilson sometimes worked in three operating rooms simultaneously: Residents would surgically open and prepare patients for his arrival, and he would then enter to seal an aneurysm or remove a tumor before moving on to the next case.
“He never spent much more than 30 or 60 minutes on each case, and we were left to close the case and make sure everything was O.K.,” Dr. Mitchel Berger, a former resident who is chairman of U.C.S.F.’s neurosurgical department, said in an interview. “It was unorthodox, but it worked. He demanded excellence and we gave him excellence.”
They also gave him silence. He allowed no music, no ringing phones and no idle chatter. Scrub nurses were expected to anticipate his requests.
“He would manage any break of silence with a stern look,” said Dr. Brian Andrews, a neurosurgeon who was one of Dr. Wilson’s residents and also his biographer, with the book “Cherokee Surgeon” (2011). (Dr. Wilson was one-eighth Cherokee.)
Dr. Wilson became world renowned for excising pituitary tumors through the sinus in a surgery called transsphenoidal resection.
. . .
The writer Malcolm Gladwell, in a profile of Dr. Wilson in The New Yorker in 1999, described one of those pituitary cancer surgeries. Looking at a tumor through a surgical microscope, Dr. Wilson used an instrument called a ring curette to peel the tumor from the gland.
“It was, he would say later, like running a squeegee across a windshield,” Mr. Gladwell wrote, “except that in this case, the windshield was a surgical field one centimeter in diameter, flanked on either side by the carotid arteries, the principal sources of blood to the brain.”
A wrong move could nick an artery or damage a nerve, endangering the patient’s vision or his life.
When Dr. Wilson saw bleeding from one side of the gland, he realized that he had not gotten all of the tumor. He found it and removed it. The surgery took only 25 minutes.
Dr. Wilson performed the surgery more than 3,300 times.
He told Mr. Gladwell that he had a special feel for surgery that he could not entirely explain.
“It’s sort of an invisible hand,” he said. “It begins almost to seem mystical. Sometimes a resident asks, ‘Why did you do that?’ ” His response, he told Mr. Gladwell, was to shrug and say, “Well, it just seemed like the right thing.”

For the full obituary, see:
Richard Sandomir. “Charles Wilson, 88, Lauded For Excising Brain Tumors, Sometimes Several in a Day.” The New York Times (Monday, March 5, 2018): A19.
(Note: ellipsis, and bracketed year, added.)
(Note: the online version of the obituary has the date March 2, 2018, and has the title “Charles Wilson, Top Brain Surgeon and Researcher, Dies at 88.”)

The biography of Wilson, mentioned above, is:
Andrews, Brian T. Cherokee Neurosurgeon: A Biography of Charles Byron Wilson, M.D. Scotts Valley, CA: CreateSpace Independent Publishing Platform, 2011.

“Machines Are Not Capable of Creativity”

(p. A11) New York
“I rarely have an urge to whisper,” says George Gilder–loudly–as he settles onto a divan by the window of his Times Square hotel room. I’d asked him to speak as audibly as possible into my recording device, and his response, while literal, could also serve as a metaphor: Nothing Mr. Gilder says or writes is ever delivered at anything less than the fullest philosophical decibel.
. . .
Citing Claude Shannon, the American mathematician acknowledged as the father of information theory, Mr. Gilder says that “information is surprise. Creativity always comes as a surprise to us. If it wasn’t surprising, we wouldn’t need it.” However useful they may be, “machines are not capable of creativity.” Human minds can generate counterfactuals, imaginative flights, dreams. By contrast, “a surprise in a machine is a breakdown. You don’t want your machines to have surprising outcomes!”
The narrative of human obsolescence, Mr. Gilder says, is giving rise to a belief that the only way forward is to provide redundant citizens with some sort of “guaranteed annual income,” which would mean the end of the market economy: . . .
. . .
For all the gloom about Silicon Valley that appears to suffuse his new book, Mr. Gilder insists that he’s not a tech-pessimist. “I think technology has fabulous promise,” he says, as he describes blockchain and cryptocurrency as “a new technological revolution that is rising up as we speak.” He says it has generated “a huge efflorescence of peer-to-peer technology and creativity, and new companies.” The decline of initial public offerings in the U.S., he adds, has been “redressed already by the rise of the ICO, the ‘initial coin offering,’ which has raised some $12 billion for several thousand companies in the last year.”
It is clear that Mr. Gilder is smitten with what he calls “this cryptographic revolution,” and believes that it will heal some of the damage to humanity that has been inflicted by the “machine obsessed” denizens of Silicon Valley. Blockchain “endows individuals with control of their data, their identity, the truths that they want to assert, their transactions, their visions, their content and their security.” Here Mr. Gilder sounds less like a tech guru than a poet, and his words tumble out in a romantic cascade.

For the full interview, see:
Tunku Varadarajan, interviewer. “Sage Against the Machine; A leading Google critic on why he thinks the era of ‘big data’ is done, why he opposes Trump’s talk of regulation, and the promise of blockchain.” The Wall Street Journal (Saturday, Sept. 1, 2018): A11.
(Note: ellipses added.)
(Note: the online version of the interview has the date Aug. 31, 2018.)

The “new book” by Gilder, mentioned above, is:
Gilder, George. Life after Google: The Fall of Big Data and the Rise of the Blockchain Economy. Washington, D.C.: Regnery Gateway, 2018.

Carl Reiner Says Having a Project Motivates Vibrant Longevity

(p. 6B) LOS ANGELES (AP) — Ask 12-time Emmy Award winner Carl Reiner how it feels to be nominated again, and he fires back a wisecrack.
. . .
Reiner is nominated as host-narrator of “If You’re Not in the Obit, Eat Breakfast,” a documentary about how perennial high achievers, including Mel Brooks and Tony Bennett, both 92, stay vibrant.
. . .
Reiner, the oldest-ever Emmy nominee, is willing to look in the rearview mirror, but only to fuel new work.
“When I finish anything, I have to start a new project or I have no reason to get up. Most people are that way — if they have something to do, they hang around,” said Reiner.

For the full story, see:
Lynn Elber for the Associated Press. “Comedy Legend Carl Reiner Turns His Emmy Shot into a Punchline.” Omaha World-Herald (Monday, Aug. 27, 2018): 6B.
(Note: ellipses added.)

“I’d Rather Be Optimistic and Wrong than Pessimistic and Right”

(p. A17) There is no question that Tesla’s culture is different from that of conventional automakers or even other Silicon Valley companies — . . . . That is largely by Mr. Musk’s design, and certainly reflects his outsize presence. His web appearance late Thursday [Sept. 6, 2018] was the latest evidence.
He was the guest of the comedian Joe Rogan, an advocate for legalizing marijuana, and the repartee included an exchange over what Mr. Musk was smoking.
“Is that a joint, or is it a cigar?” Mr. Musk asked after his host took out a large joint and lit it up.
“It’s marijuana inside of tobacco,” Mr. Rogan replied, and he asked if Mr. Musk had ever had it.
“Yeah, I think I tried one once,” he replied, laughing.
The comedian then asked if smoking on air would cause issues with stockholders, to which Mr. Musk responded, “It’s legal, right?” He then proceeded to take a puff. Marijuana is legal for medical and recreational use in California, where the interview was recorded.
After Mr. Musk announced on Aug. 7 that he intended to take Tesla private at $420 a share, there was speculation that the figure was chosen because “420” is a code for marijuana in the drug subculture.
In an interview with The New York Times while the gambit was still in play, Mr. Musk didn’t deny a connection. But he did try to clarify his state of mind in hatching the plan — and the shortcomings of mind-altering.
“It seemed like better karma at $420 than at $419,” he said. “But I was not on weed, to be clear. Weed is not helpful for productivity. There’s a reason for the word ‘stoned.’ You just sit there like a stone on weed.”
. . .
If he is feeling any insecurity, it was not reflected in his webcast with Mr. Rogan. He appeared at ease, sipping whiskey, and spoke, at one point, about artificial intelligence and how it could not be controlled.
“You kind of have to be optimistic about the future,” Mr. Musk said. “There’s no point in being pessimistic. I’d rather be optimistic and wrong than pessimistic and right.”

For the full story, see:
Neal E. Boudette. “Tesla Stock Dips As Musk Puffs On … What?” The New York Times (Saturday, Sept. 8, 2018): A1 & A17.
(Note: ellipses in quotes, and bracketed date, added; ellipsis in title, in original.)
(Note: the online version of the story has the date Sept. 7, 2018, and has the title “Tesla Shaken by a Departure and What Elon Musk Was Smoking.”)

Technologies Can Offer “Extraordinary Learning” Where “Children’s Interests Turn to Passion”

(p. B1) The American Academy of Pediatrics once recommended parents simply limit children’s time on screens. The association changed those recommendations in 2016 to reflect profound differences in levels of interactivity between TV, on which most previous research was based, and the devices children use today.
Where previous guidelines described all screen time for (p. B4) young children in terms of “exposure,” as if screen time were a toxic substance, new guidance allows for up to an hour a day for children under 5 and distinguishes between different kinds of screen use–say, FaceTime with Grandma versus a show on YouTube.
. . .
Instead of enforcing time-based rules, parents should help children determine what they want to do–consume and create art, marvel at the universe–and make it a daily part of screen life, says Anya Kamenetz, a journalist and author of the coming book “The Art of Screen Time–How Your Family Can Balance Digital Media and Real Life.”
In doing so, parents can offer “extraordinary learning” experiences that weren’t possible before such technology came along, says Mimi Ito, director of the Connected Learning Lab at the University of California, Irvine and a cultural anthropologist who has studied how children actually use technology for over two decades.
“Extraordinary learning” is what happens when children’s interests turn to passion, and a combination of tech and the internet provides a bottomless well of tools, knowledge and peers to help them pursue these passions with intensity characteristic of youth.
It’s about more than parents spending time with children. It includes steering them toward quality and letting them–with breaks for stretching and visual relief, of course–dive deep without a timer.
There are many examples of such learning, whether it is children teaching themselves to code with the videogame Minecraft or learning how to create music and shoot videos. Giving children this opportunity allows them to learn at their own, often-accelerated pace.

For the full commentary, see:
Christopher Mims. “KEYWORDS; Not All Screen Time Is Equal; Screen Time Isn’t Toxic After All.” The Wall Street Journal (Monday, Jan. 22, 2018): B1 & B4.
(Note: ellipsis added.)
(Note: the online version of the commentary was last updated Jan. 22, 2018, and has the title “KEYWORDS; What If Children Should Be Spending More Time With Screens?”)

The book mentioned above is:
Kamenetz, Anya. The Art of Screen Time: How Your Family Can Balance Digital Media and Real Life. New York: PublicAffairs, 2018.

Zuckerberg Calls Musk “Pretty Irresponsible” on A.I. “Doomsday” Fears

(p. 1) SAN FRANCISCO — Mark Zuckerberg thought his fellow Silicon Valley billionaire Elon Musk was behaving like an alarmist.
Mr. Musk, the entrepreneur behind SpaceX and the electric-car maker Tesla, had taken it upon himself to warn the world that artificial intelligence was “potentially more dangerous than nukes” in television interviews and on social media.
So, on Nov. 19, 2014, Mr. Zuckerberg, Facebook’s chief executive, invited Mr. Musk to dinner at his home in Palo Alto, Calif. Two top researchers from Facebook’s new artificial intelligence lab and two other Facebook executives joined them.
As they ate, the Facebook contingent tried to convince Mr. Musk that he was wrong. But he wasn’t budging. “I genuinely believe this is dangerous,” Mr. Musk told the table, according to one of the dinner’s attendees, Yann LeCun, the researcher who led Facebook’s A.I. lab.
Mr. Musk’s fears of A.I., distilled to their essence, were simple: If we create machines that are smarter than humans, they could turn against us. (See: “The Terminator,” “The Matrix,” and “2001: A Space Odyssey.”) Let’s for once, he was saying to the rest of the tech industry, consider the unintended consequences of what we are creating before we unleash it on the world.
. . .
(p. 6) Since their dinner three years ago, the debate between Mr. Zuckerberg and Mr. Musk has turned sour. Last summer, in a live Facebook video streamed from his backyard as he and his wife barbecued, Mr. Zuckerberg called Mr. Musk’s views on A.I. “pretty irresponsible.”
Panicking about A.I. now, so early in its development, could threaten the many benefits that come from things like self-driving cars and A.I. health care, he said.
“With A.I. especially, I’m really optimistic,” Mr. Zuckerberg said. “People who are naysayers and kind of try to drum up these doomsday scenarios — I just, I don’t understand it.”

For the full story, see:
Cade Metz. “Moguls and Killer Robots.” The New York Times, SundayBusiness Section (Sunday, June 10, 2018): 1 & 6.
(Note: ellipsis added.)
(Note: the online version of the story has the date June 9, 2018, and has the title “Mark Zuckerberg, Elon Musk and the Feud Over Killer Robots.”)

Rats, Mice, and Humans Fail to Ignore Sunk Costs

(p. D6) Suppose that, seeking a fun evening out, you pay $175 for a ticket to a new Broadway musical. Seated in the balcony, you quickly realize that the acting is bad, the sets are ugly and no one, you suspect, will go home humming the melodies.
Do you head out the door at the intermission, or stick it out for the duration?
Studies of human decision-making suggest that most people will stay put, even though money spent in the past logically should have no bearing on the choice.
This “sunk cost fallacy,” as economists call it, is one of many ways that humans allow emotions to affect their choices, sometimes to their own detriment. But the tendency to factor past investments into decision-making is apparently not limited to Homo sapiens.
In a study published on Thursday [July 12, 2018] in the journal Science, investigators at the University of Minnesota reported that mice and rats were just as likely as humans to be influenced by sunk costs.
The more time they invested in waiting for a reward — in the case of the rodents, flavored pellets; in the case of the humans, entertaining videos — the less likely they were to quit the pursuit before the delay ended.
“Whatever is going on in the humans is also going on in the nonhuman animals,” said A. David Redish, a professor of neuroscience at the University of Minnesota and an author of the study.
This cross-species consistency, he and others said, suggested that in some decision-making situations, taking account of how much has already been invested might pay off.
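
As an aside, the logic behind ignoring sunk costs in the Broadway-ticket example can be made concrete with a small sketch. The dollar values below are invented for illustration and are not from the study; the only point is that adding the already-spent $175 to every remaining option cannot change which option ranks best.

```python
# Minimal sketch of sunk-cost-free decision-making for the theater example.
# The enjoyment values assigned to each option are hypothetical illustrations.

TICKET_PRICE = 175  # already paid; identical under every remaining option

def best_choice(options):
    """Pick the option with the highest future value; sunk costs cancel out."""
    return max(options, key=options.get)

# Hypothetical value (in dollars of enjoyment) of how the rest of the evening could go.
future_value = {
    "stay for the second act": 20,
    "leave at intermission and do something else": 60,
}

# Subtracting the sunk ticket price from every option shifts all values equally,
# so it cannot change the ranking. That is why the $175 "logically should have
# no bearing on the choice."
with_sunk_cost = {option: value - TICKET_PRICE for option, value in future_value.items()}

assert best_choice(future_value) == best_choice(with_sunk_cost)
print(best_choice(future_value))  # -> "leave at intermission and do something else"
```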

For the full story, see:
Erica Goode. “‘Sunk Cost Fallacy’ Claims More Victims.” The New York Times (Tuesday, July 17, 2018): D6.
(Note: bracketed date added.)
(Note: the online version of the story has the date July 12, 2018, and has the title “Mice Don’t Know When to Let It Go, Either.”)

Human Intelligence Helps A.I. Work Better

(p. B3) A recent study at the M.I.T. Media Lab showed how biases in the real world could seep into artificial intelligence. Commercial software is nearly flawless at telling the gender of white men, researchers found, but not so for darker-skinned women.
And Google had to apologize in 2015 after its image-recognition photo app mistakenly labeled photos of black people as “gorillas.”
Professor Nourbakhsh said that A.I.-enhanced security systems could struggle to determine whether a nonwhite person was arriving as a guest, a worker or an intruder.
One way to parse the system’s bias is to make sure humans are still verifying the images before responding.
“When you take the human out of the loop, you lose the empathetic component,” Professor Nourbakhsh said. “If you keep humans in the loop and use these systems, you get the best of all worlds.”
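
As an aside, the “humans in the loop” arrangement Professor Nourbakhsh describes amounts to a simple verification gate in front of the automated system. The sketch below is only an assumption about how such a gate might be wired up; the class, the confidence threshold, and the ask_human_to_verify step are hypothetical and do not describe any product mentioned in the story.

```python
# Hypothetical sketch of a human-in-the-loop check for an A.I. security camera.
# None of these names come from a real system; they only illustrate the pattern.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "guest", "worker", "intruder"
    confidence: float   # the model's own score, between 0 and 1

CONFIDENCE_THRESHOLD = 0.9  # assumed cutoff below which a person must review

def ask_human_to_verify(detection: Detection) -> bool:
    """Placeholder for routing the flagged image to a human reviewer."""
    answer = input(f"Camera flagged '{detection.label}' "
                   f"(confidence {detection.confidence:.2f}). Confirm? [y/n] ")
    return answer.strip().lower() == "y"

def respond(detection: Detection) -> str:
    # Low-confidence or high-stakes calls are verified by a person
    # before any alarm or notification is sent.
    if detection.label == "intruder" or detection.confidence < CONFIDENCE_THRESHOLD:
        if not ask_human_to_verify(detection):
            return "no action: human reviewer overruled the model"
    return f"act on '{detection.label}'"

print(respond(Detection(label="intruder", confidence=0.55)))
```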

For the full story, see:
Paul Sullivan. “WEALTH MATTERS; Can Artificial Intelligence Keep Your Home Secure?” The New York Times (Saturday, June 30, 2018): B3.
(Note: the online version of the story has the date June 29, 2018.)

The “recent study” mentioned above is:
Buolamwini, Joy, and Timnit Gebru. “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” Proceedings of Machine Learning Research 81 (2018): 1-15.

Earned Income Matters More Than Equal Income

(p. A13) The concept of a universal basic income, or UBI, has become part of the moral armor of Silicon Valley moguls who want a socially conscious defense against the charge that technology is making humanity obsolete.
. . .
We need policies that encourage job creation and working, not policies that pay people not to work.
In the mid-1960s, about 5% of men aged 25 to 54 were jobless. For 40 years that share has risen, and for much of the past decade the rate has remained over 15%. Suicide, divorce and opioid abuse are all associated with nonemployment, and many facts suggest that the misery of joblessness is far worse than that of a low-paying job. According to the most recent data, only 7% of working men in households earning less than $35,000 report being dissatisfied with their lives. But that share soars to 18% among the nonemployed of all incomes. This suggests that promoting employment is more important than reducing inequality.
. . . 50 years of evidence about labor supply in the U.S. suggests that giving people money will lead them to work less.
The Negative Income Tax experiments of the 1970s–when poorer households in a number of states received direct cash payments to keep them at a minimum income–are the closest America has come to a UBI. But they did not show “minimal impact on work,” as Mr. Yang suggests. Rather, they produced a quite significant work-hours reduction of between 5% and 25%, as well as “employment rate reductions . . . from about 1 to 10 percentage points,” according to one capable study.
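
As an aside, the work-disincentive mechanism behind those numbers is easy to see in a stylized negative-income-tax calculation. The guarantee level and phase-out rate below are invented for illustration and are not the parameters used in the 1970s experiments.

```python
# Stylized negative income tax (NIT): benefit = guarantee - phase_out_rate * earnings,
# floored at zero. Both parameters are illustrative assumptions only.

GUARANTEE = 10_000     # hypothetical minimum annual income
PHASE_OUT_RATE = 0.5   # hypothetical: benefit falls 50 cents per extra dollar earned

def nit_benefit(earnings: float) -> float:
    return max(0.0, GUARANTEE - PHASE_OUT_RATE * earnings)

def total_income(earnings: float) -> float:
    return earnings + nit_benefit(earnings)

# While the benefit is phasing out, an extra $1,000 of earnings raises total income
# by only $500: an implicit 50% marginal tax, which is one mechanism consistent with
# the reported reductions in hours worked.
for earnings in (0, 10_000, 20_000, 30_000):
    print(f"earned {earnings:>6}: benefit {nit_benefit(earnings):>6.0f}, "
          f"total {total_income(earnings):>6.0f}")
```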

For the full review, see:
Edward Glaeser. “BOOKSHELF; ‘Give People Money’ and ‘The War on Normal People’ Review: The Cure for Poverty? A guaranteed income does nothing to address the misery of joblessness, nor the associated plagues of divorce, opioid abuse and suicide.” The Wall Street Journal (Tuesday, July 10, 2018): A13.
(Note: first two ellipses added; third ellipsis in original.)
(Note: the online version of the review has the date July 9, 2018, and has the title “BOOKSHELF; ‘Give People Money’ and ‘The War on Normal People’ Review: The Cure for Poverty? A guaranteed income does nothing to address the misery of joblessness, nor the associated plagues of divorce, opioid abuse and suicide.”)