Common sense and competence
In Part I of this series, we saw examples of how machines, putatively endowed with “Artificial Intelligence,” commit laughably stupid mistakes doing grade-school arithmetic (see Dumb machines Part I).
You’d think that if machines can make such stupid blunders in a domain where they are alleged to have superhuman powers—a simple task compared with, say, getting your kid to school when the bus has broken down and your car is in the shop—then they could never be expected to achieve a level of competence across many domains sufficient for world domination.
Possibly machines are not capable of the “common sense” that is vital to real, complicated life, where we range across many domains, often nearly simultaneously.
A trivial example from Part I: the machine correctly calculates 68 when asked for the product of “17 x 4.” But it calculates “17 x 4” as 69. Stupid, right? A human looks at the discrepancy and says, aha! It’s the missing period that threw it off. Getting the correct answer would require knowing something about punctuation. The period is not a mathematical object, it’s a grammatical object. Grasping the difference requires bridging from math to grammar—another common-sense activity we do without consciously missing a beat.
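To see what that bridging looks like in the crudest possible terms, here is a toy sketch in Python. It is purely illustrative, an assumption about the kind of preprocessing involved rather than a peek inside the actual machine’s internals: the trick is simply to treat a trailing period as sentence punctuation rather than as part of the number, so both forms of the question get the same answer.

```python
# Toy illustration (not the real system's internals): normalize punctuation
# before doing the math, so "17 x 4" and "17 x 4." yield the same product.
def product(expression):
    a, _, b = expression.strip().split()   # e.g. ["17", "x", "4."]
    b = b.rstrip(".?!,;:")                 # the period is grammar, not math
    return int(a) * int(b)

print(product("17 x 4"), product("17 x 4."))  # 68 68
```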
Nontrivial—“AI hacking” and rethinking AI
Within the AI community there are larger concerns than misunderstood punctuation, as you will find in an article by Chris Baraniuk in the April 27 issue of New Scientist.
To the right is the illustration that appears at the beginning of Baraniuk’s article. To you, the two images look like two halves of an orange, although the half on the right has some pixels altered. No matter, you say, it’s still half an orange.
But the caption reads “One of these is a power drill.” That was the image-processing software’s identification of the right-hand half of the orange.
Oops, Mr. Artificial Intelligence. You goofed big-time.
Other notable blunders: an AI misclassified a turtle as a rifle, and two skiers as a dog.
All of these mistakes resulted from people deliberately modifying images. The thrust of Baraniuk’s article is that computer image-recognition software is susceptible to “AI hacking,” which is of great concern to software engineers in disciplines ranging from facial recognition to military weaponry.
In one case, researchers fooled a self-driving car by sticking pieces of black tape on a stop sign. To any human observer it was obviously still a stop sign, but the self-driving car drove itself right on through. That’s a pretty low-tech attack on an AI, making one wonder what sort of mistakes can be provoked at higher levels. Hence the “captchas” designed to trip up bots.
These mistakes were made by an artificial “neural network,” produced by layering digital functions that, as Baraniuk puts it, “loosely mimic neurons in the brain.” The network is “trained” by being fed large quantities of images and learning to recognize patterns it has seen before.
Sounds a bit like a human brain’s activity, right? A young child sees a golden retriever, a collie, a bull terrier, a chihuahua, and is told each of them is a dog, and pretty soon the kid has a concept of dogginess that works to identify a dog radically different from any she has seen before, say a Great Dane wearing a saddle and a funny hat—“dog!”
The procedure for the neural network is similar—compare the new image against known images—but the processing method is different. The neural network scans images pixel-by-pixel (really fast), and does statistics on the pixels to build a representation from the bottom up, rather than comprehending images as a whole. The neural network doesn’t have any concept of dogginess. This is why “AI hacking,” by removing or changing a few pixels, can fool the network into misclassifying half an orange as a power drill.
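To make the pixel-level vulnerability concrete, here is a minimal sketch of the kind of perturbation an “AI hacker” can compute, in the style of the well-known fast-gradient-sign method. The toy model, the random stand-in image, and the step size are all illustrative assumptions of mine; Baraniuk’s article does not say how the orange was altered.

```python
# Minimal sketch of a gradient-based "AI hack" (fast-gradient-sign style), using PyTorch.
# The tiny model and random image are stand-ins, purely for illustration.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # toy "image classifier"
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # stand-in for the half orange
label = torch.tensor([3])                              # its correct class, say "orange"

# Ask the network how the loss changes with respect to each input pixel...
loss = loss_fn(model(image), label)
loss.backward()

# ...then nudge every pixel a tiny step in the direction that increases the loss.
epsilon = 0.01
hacked = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

# To a human eye the two images are indistinguishable; the classifier's verdict may not be.
print(model(image).argmax(dim=1).item(), model(hacked).argmax(dim=1).item())
```

Because the network has no concept of “orangeness,” only pixel statistics, a nudge too small for a person to notice can push an image across a decision boundary.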
Some machine-learning researchers are greatly troubled by this lack of resiliency, in image-recognition software and in neural-network data analysis generally. Rethinking AI seems to be in order, but there’s no clear consensus on what direction to take.
The human edge: the multi-faceted mind
Interestingly, under certain circumstances, humans can be fooled in much the same way as a neural network. Shown images flashed for a split second, people also misclassified them when they were tweaked in the same way that fooled the AI. But, given a few more milliseconds, people correctly recognized images that the neural networks were still misclassifying.
What’s happening is still not fully understood, but Apple machine learning researcher Ian Goodfellow has a hypothesis: in the first few milliseconds, “human perception involves neurons firing in a one-way cascade, much like the way artificial neural networks function,” but subsequently “deeper layers of the brain talk back to the earlier ones, and you can update what you thought about the image.” This sounds very much like how you perceive intuitively—that’s a car, medium size, a Honda, oh no it’s a Toyota, it’s a Camry, new? a hybrid I think, I need a closer look . . . do hybrids come in black . . . ?
Cognitive scientist Chaz Firestone at Johns Hopkins puts it more simply. “Your mind has parts to it. Some parts can get confused by things that other parts of your mind don’t get confused by.”
Can machines achieve general intelligence? Can they be conscious?
If image-processing machines can get stumped by simple digital tweaks, and no one has a definite way to correct it, you have to wonder how far machine “intelligence” has to go before machines can take over the world. After all, image processing is just one part of what endows a brain with general intelligence.
Different parts of the mind speaking to each other and providing corrective feedback raises the question, is this what generates what we call consciousness? Could consciousness be something as simple—or complicated—as different parts of the brain talking back and forth to each other? Is common sense bound up with consciousness, in that you don’t have one without the other?
Some data scientists claim that intelligence does not require consciousness, and machines could take over the world without being “conscious” or self-aware in the way we are—they just do stuff. In fact, the second-guessing done by consciousness could create inefficiencies, and from a machine’s perspective, inefficiencies must be minimized for maximum utility. Is consciousness just a side-effect of intelligence, one that is even worse than useless?
Consciousness might be useless, but common sense definitely is not.
Neural networks have many layers structured in a hierarchy. Maybe they could be improved by adding a parallel (common sense?) network that says to its companion things like, “Hey! Are you sure that’s a power drill? You wanna take another look?” A toy sketch of the idea follows.
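Here is one way to picture that companion network, again as a purely hypothetical sketch rather than a real architecture. The second model and the confidence threshold are my assumptions: the companion gets to object to the main network’s verdict, or to an unconfident one, and send the image back for another look.

```python
# Hypothetical "second opinion" wiring, not a real architecture: a companion
# network can veto the main network's answer and force a second look.
import torch
import torch.nn as nn

main_net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))   # toy classifier
companion = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # toy "common sense" check

def classify_with_second_opinion(image, threshold=0.9):
    probs = main_net(image).softmax(dim=1)
    confidence, label = probs.max(dim=1)
    second_label = companion(image).argmax(dim=1)
    # "Hey! Are you sure that's a power drill?" -- if the two networks disagree,
    # or the main network is wavering, defer the decision instead of answering.
    if label.item() != second_label.item() or confidence.item() < threshold:
        return None  # take another look
    return label.item()

print(classify_with_second_opinion(torch.rand(1, 3, 32, 32)))
```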
Why is this important now?
Even supergenius Ray Kurzweil, the most optimistic booster of technological advances, AI included, does not predict that AIs with human-level general intelligence will show up before 2029. Time to relax. (Just as we relaxed thirty years ago about Global Warming.)
BUT think about it this way: one of the leading lights of contemporary computer science, Stuart Russell, observed:
Some have argued there’s no conceivable risk to humanity [from machine intelligence] for centuries to come, perhaps forgetting that the interval of time between Ernest Rutherford’s confident assertion that atomic energy would never be feasibly extracted and Leó Szilárd’s invention of the neutron-induced chain reaction was less than twenty-four hours.*
Note that Ernest Rutherford was one of the towering figures of 20th-century science. He is sometimes called “the father of nuclear physics” for conducting experiments demonstrating that the atom is mostly empty space surrounding a massive nucleus. He formulated the Rutherford (planetary) model of the atom, and won a Nobel Prize in 1908. He discovered the proton in 1919 and hypothesized the existence of the neutron—the neutron being the very particle which Leó Szilárd later figured out could, thanks to its lack of charge, plow through clouds of electrons and smash into an atomic nucleus, scattering more neutrons to . . . well, you know the rest.
You know the rest now, but profound thinker Ernest Rutherford didn’t see it coming eighty years ago. Something to chew on when you smugly wave away the AI threat while asserting that machines are too dumb to take over the world.
On a cheerier note, if you have an hour to listen to four extremely bright, deeply informed people calmly discuss the limitations of AI, you might want to check out:
================= footnote =================
* From the essay “Will They Make Us Better People?” in the collection What to Think About Machines That Think, edited by John Brockman, Harper Perennial, 2015.