Consciousness Redux: Testing for Consciousness in Machines; September/October 2011; Scientific American Mind; by Christof Koch and Giulio Tononi
How would we know whether a machine is conscious? As computers inch closer to human-level performance (witness the victory of IBM’s Watson over the all-time champions of the television quiz show Jeopardy!), this question is becoming more pressing. So far, though, despite their ability to crunch data at superhuman speed, we suspect that, unlike us, computers do not truly “see” a visual scene full of shapes and colors in front of their cameras; they do not “hear” a question through their microphones; they do not feel anything. Why do we think so, and how could we test whether they experience a scene the way we do?
Consciousness, we have suggested, has two fundamental properties [see the July/August 2009 column by Christof Koch, “A Theory of Consciousness”]. First, every experience is highly informative. Any particular conscious state rules out an immense number of other possible states, from which it differs in its own particular way. Even the simple percept of pitch-blackness implies that you do not see a well-lit living room, the intricate canopy of the jungle or any of countless other scenes that could present themselves to the mind: think of all the frames from all the movies you have ever seen.
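A rough way to make “highly informative” concrete is Shannon’s measure of information; this is our back-of-the-envelope illustration, not the authors’ formal quantity, which is integrated information, Φ. Discriminating one conscious state among $N$ equally likely alternatives conveys

\[
I = \log_2 N \ \text{bits}.
\]

Even with a purely hypothetical repertoire of $N = 10^{12}$ distinguishable scenes, that amounts to $I = \log_2 10^{12} \approx 40$ bits specified in a single moment, and the true repertoire of possible experiences is vastly larger still.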