Know Thyself—Man, Rat or Bot

Whether it is an eerily human bot in a virtual-reality game, an animal looking at you with soulful eyes or a patient in a vegetative state, the question nags and nags and won't go away: is there a thinking, self-aware, conscious mind in there? Not one that merely exhibits intelligence, since silicon chips do calculations that leave the human brain in the dust and even discover mathematical proofs. And not one merely capable of empathy or grief or cooperation, which chimps, elephants and species in between all manage. No, the capacity that distinguishes humans has come down to something Augustine identified 1,600 years ago when he asked what "can be the purport of the injunction, know thyself? I suppose it is that the mind should reflect upon itself."

It's called metacognition—the ability to think about your thoughts, to engage in self-reflection, to introspect. It was long thought to be not just something we do more of, or better, than machines or animals, but something we have and they lack. Knowing what you know is not only the mark of a skilled game-show contestant who is quick (but not too quick) on the buzzer; it is also a mark of consciousness, the last stand of human exceptionalism. Now, however, that claim is on the rocks, as both animals and machines show signs that they can engage in self-reflection.

In the latest study, scientists tested for introspection in rats. Jonathon Crystal and Allison Foote of the University of Georgia trained rats to push one lever when they heard a short burst of static, and a second lever when they heard a long burst. The reward for a right answer was six food pellets. A wrong answer yielded nothing. But refusing to answer—like a student fleeing an exam room upon seeing the impossible questions—earned the rat a consolation prize of three morsels. Clearly, the smart strategy was to respond if sure of the answer, but pass if not.
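The arithmetic behind that smart strategy is worth spelling out. Here is a minimal sketch of the decision rule, assuming only the payoffs described above (six pellets for a correct answer, none for a wrong one, three for declining); the names are illustrative, not anything from the study itself.

```python
# Toy model of the payoff logic described above, not the researchers' code.

def expected_pellets(p_correct: float, decline: bool) -> float:
    """Expected reward: 6 pellets for a correct answer, 0 for a wrong one,
    a guaranteed 3 pellets for declining the test."""
    return 3.0 if decline else 6.0 * p_correct

def smart_strategy(p_correct: float) -> str:
    """Answer only when the gamble beats the sure thing (6 * p > 3, i.e. p > 0.5)."""
    if expected_pellets(p_correct, decline=False) > expected_pellets(p_correct, decline=True):
        return "answer"
    return "decline"

for p in (0.95, 0.6, 0.5):   # confident, shaky, pure guess
    print(f"confidence {p:.2f}: {smart_strategy(p)}")
```

On this rule, taking the test pays only when the animal's chance of being right is better than even, which matches the pattern reported below: decline whenever accuracy would be no better than chance.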

The rats got almost perfect scores when they had to identify two-second or eight-second bursts. But when they heard static of intermediate duration and had to choose "long" or "short," they were twice as likely as on the easy trials to decline the test and take the three pellets; they knew what they didn't know. To make sure the rats were truly introspecting, the scientists then eliminated the opt-out choice and required the rats to choose "long" or "short" for the medium bursts. The animals got half right, no better than guessing, which suggests that when they opted out it was indeed because they had assessed the contents of their mind—do I know this?—and made the rational choice, the scientists report in Current Biology. "Rats can reflect on their internal mental states," says Crystal. "They know when they don't know." Other scientists have gotten similar results with dolphins and rhesus monkeys, who also decline to take a test when they don't know the answer. They think about thinking.

Some defenders of humanity's lock on consciousness have argued that a rat or monkey need not be self-aware to tell that it doesn't know something; ignorance might be expressed as "no test for me, thanks" unconsciously. You could wire up a simple—unconscious—circuit to do the same thing, they say.
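For what it's worth, the critics' hypothetical circuit is easy to draw. Here is a sketch of the kind of rule they have in mind, with thresholds invented purely for illustration: it opts out of ambiguous trials without consulting any self-model at all.

```python
# The critics' point, in code: a fixed rule with no self-model can still
# "decline" the hard trials. Thresholds are invented for illustration only.

def unconscious_circuit(burst_seconds: float) -> str:
    if burst_seconds <= 3.0:      # clearly short
        return "short lever"
    if burst_seconds >= 6.0:      # clearly long
        return "long lever"
    return "decline"              # ambiguous middle band: opt out reflexively

for t in (2.0, 4.4, 8.0):
    print(f"{t:.1f}s burst -> {unconscious_circuit(t)}")
```

Whether such a reflex counts as knowing what you don't know is exactly what is in dispute.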

Funny they should mention circuits. After decades in which metacognition was written off by many researchers in artificial intelligence, it is getting serious attention, says Michael Cox of BBN Technologies. There are now computer systems that can reason about what went wrong in a calculation and consider whether to continue on their current path to a solution or switch to a new strategy—both of which, if a person did them, we would call introspection and self-awareness. Next month an AI conference in Hawaii will feature a dozen studies on introspective machines. "I don't think there is an inherent barrier to self-understanding on the part of machines," says Cox. "There is nothing magical, mystical, spiritual or uniquely human about introspection and metacognition."
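In software terms, the metacognitive loop Cox describes looks something like the sketch below: a solver that tracks its own progress and abandons a strategy when it judges itself stuck. This is a schematic of the general idea, not any particular BBN system; the problem and strategy interfaces here are hypothetical.

```python
# Schematic of a metacognitive control loop: monitor your own progress,
# and switch strategies when the current one seems to have stalled.
# The `problem` and `strategies` interfaces are hypothetical placeholders.

def solve_with_monitoring(problem, strategies, max_steps=100):
    for strategy in strategies:
        state = problem.initial_state()
        best_so_far = float("-inf")
        for _ in range(max_steps):
            state = strategy.step(state)
            if problem.solved(state):
                return state                 # object-level success
            score = problem.progress(state)  # meta-level self-assessment
            if score <= best_so_far:         # "this isn't working..."
                break                        # ...so try a different strategy
            best_so_far = score
    return None                              # every strategy gave up
```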

Questions about consciousness have become especially poignant in the case of unresponsive patients. Last year scientists in England reported on a young woman who, despite being in a vegetative state, showed brain activity indistinguishable from that of healthy volunteers in response to spoken requests that she imagine walking around her home or playing tennis. Critics dismissed the brain signals as little more than reflex: maybe the word "tennis" automatically triggered the activity, they said. If so, it was nothing special, and certainly not a sign of a self-reflecting mind. Perhaps, but the new research suggests we need to be careful about expecting too much of consciousness. As self-awareness dawns on machines and as scientists find it in animals, it may be that vegetative patients are not the only ones whose glimmers of consciousness can be dismissed as nothing special.