5/24/04

Thinking Without Thinking: Marvin Minsky, The Dalai Lama, & Artificial Intelligence

Some people regard conscious computers as a contradiction in terms. That is, they believe that computers can never be conscious, because consciousness is a uniquely human quality.

Marvin Minsky, in keeping with his writings, believes that conscious artificial intelligence is not at all out of the question. His thinking is in no way religious, and it holds nothing about human ability sacred. Would religious leaders differ with him? Readers expecting firm disagreement from the Tibetan Buddhist community might be surprised to learn that the Dalai Lama does not take exception to a view like Minsky's. Here, then, are two perspectives on the issue: first Minsky's, then the Dalai Lama's.

  • Minsky: "Just as we walk without thinking, we think without thinking! We don't know how our muscles make us walk--nor do we know much more about the agencies that do our mental work. When you have a hard problem to solve, you think about it for a time. Then, perhaps, the answer seems to come all at once, and you say, 'Aha, I've got it. I'll do such and such.' But if someone were to ask how you found the solution, you could rarely say more than things like the following:

    'I suddenly realized . . .'

    'I just got the idea . . .'

    'It occurred to me that . . .'

    If we could really sense the workings of our minds, we wouldn't act so often in accord with motives we don't suspect. We wouldn't have such varied and conflicting theories for psychology. And when we're asked how people get their good ideas, we wouldn't be reduced to metaphors about 'ruminating' and 'digesting,' 'conceiving' and 'giving birth' to concepts--as though our thoughts were anywhere but in the head. If we could see inside our minds, we'd surely have more useful things to say.

    Many people seem absolutely certain that no computer could ever be sentient, conscious, self-willed, or in any other way 'aware' of itself. But what makes everyone so sure that they themselves possess those admirable qualities? It's true that if we're sure of anything at all, it is that 'I'm aware--hence, I'm aware.' Yet what do such convictions really mean? If self-awareness means to know what's happening inside one's mind, no realist could maintain for long that people have much insight, in the literal sense of seeing-in. Indeed, the evidence that we are self-aware--that is, that we have any special aptitude for finding out what's happening inside ourselves--is very weak indeed. It is true that certain people have a special excellence at assessing the attitudes and motivations of other persons (and, more rarely, of themselves). But this does not justify the belief that how we learn things about people, including ourselves, is fundamentally different from how we learn about other things. Most of the understandings we call 'insights' are merely variants of our other ways to 'figure out' what's happening." (Marvin Minsky, The Society of Mind, Simon and Schuster)

  • The Dalai Lama once said that there is no theoretical limit to artificial intelligence. If "conscious" computers are some day developed, he will give them the same consideration as sentient beings. (Salon Magazine, 27 February 1997, interview with Jeff Greenwald.) Elsewhere, he had this to say about artificial intelligence: "It is very difficult to say that it's not a living being, that it doesn't have cognition, even from the Buddhist point of view. We maintain that there are certain types of births in which a preceding continuum of consciousness is the basis. The consciousness doesn't actually arise from the matter, but a continuum of consciousness might conceivably come into it." (Jeremy Hayward and Francisco Varela, Gentle Bridges: Conversations with the Dalai Lama on the Sciences of Mind, Shambhala)