A Dog (But Not A.I.) Can Put Together What It Learns in Two Separate Contexts, and Apply It in a Third Context

(p. 6) . . . an engineer named Blake Lemoine . . . worked on artificial intelligence at Google, specifically on software that can generate words on its own — what’s called a large language model. He concluded the technology was sentient; his bosses concluded it wasn’t.

. . .

There is no evidence this technology is sentient or conscious — two words that describe an awareness of the surrounding world.

That goes for even the simplest form you might find in a worm, said Colin Allen, a professor at the University of Pittsburgh who explores cognitive skills in both animals and machines. “The dialogue generated by large language models does not provide evidence of the kind of sentience that even very primitive animals likely possess,” he said.

Alison Gopnik, a professor of psychology who is part of the A.I. research group at the University of California, Berkeley, agreed. “The computational capacities of current A.I. like the large language models,” she said, “don’t make it any more likely that they are sentient than that rocks or other machines are.”

. . .

(p. 7) “A conscious organism — like a person or a dog or other animals — can learn something in one context and learn something else in another context and then put the two things together to do something in a novel context they have never experienced before,” Dr. Allen of the University of Pittsburgh said. “This technology is nowhere close to doing that.”

For the full story, see:

Cade Metz. “A.I. Does Not Have Thoughts, No Matter What You Think.” The New York Times, SundayBusiness Section (Sunday, August 7, 2022): 6-7.

(Note: ellipses added.)

(Note: the online version of the story was updated Aug. 11 [sic], 2022, and has the title “A.I. Is Not Sentient. Why Do People Say It Is?”)
