Saturday, March 12, 2011

A Hard Case of AI

Watson: Rooted in place (AFP/Getty Images)

Gilbert Ryle once wrote:
engineers stretch, twist, compress and batter bits of metal until they collapse, but it is just by such tests that they determine the strains which the metal will withstand. In somewhat the same way, philosophical arguments bring out the logical powers of the ideas under investigation, by fixing the precise forms of logical mishandling under which they refuse to work.
If that's the work of philosophy, then Artificial Intelligence (AI) is one of philosophy's branches. Rod Brooks, for many years director of MIT's AI Lab, and one of AI's great plain talkers, not to mention visionaries, defines artificial intelligence something like this: it's when a machine does something that, if it were done by a person, we'd say it was intelligent, thoughtful, or human.
Wait a second! What does "what we would say" have to do with whether a machine is thinking?
But that's just the point. AI is applied philosophy. AI curates opportunities for us to think about what we would say about the hard cases. At its best, AI gives us new hard cases. That's what IBM's Jeopardy-winning Watson is.
But first, a real-world case: ants remove their dead from the nest and so avoid contamination. This looks like smart behavior. Now dead ants, it turns out, give off oleic acid, and experimenters have been able to demonstrate that ants will eject even live healthy ants from the nest if (thanks to meddling scientists) they have been daubed with oleic acid. What had at first appeared to be a sensitive response of the ants to the threat of harmful bacteria turns out to be a brute response triggered by the presence of a chemical.
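To make the contrast vivid, here is a toy sketch, in Python, of what a "brute response triggered by the presence of a chemical" looks like. It is purely illustrative: the names and the NestMate structure are invented, and this is not a model of real ant biology or of the experiments described above. The point is only that the rule consults the chemical cue and never the fact that seems to matter, namely whether the nestmate is alive or a health risk.

```python
# Toy illustration (not a biological model): the "undertaking" behavior
# reduces to a single chemical trigger, not to any judgment about disease.

from dataclasses import dataclass

@dataclass
class NestMate:
    name: str
    alive: bool
    oleic_acid: bool  # present through decomposition, or daubed on by experimenters

def undertaker_response(nestmate: NestMate) -> str:
    # The rule never consults `alive` -- that is the whole point.
    return "eject from nest" if nestmate.oleic_acid else "leave alone"

print(undertaker_response(NestMate("dead ant", alive=False, oleic_acid=True)))        # eject from nest
print(undertaker_response(NestMate("daubed live ant", alive=True, oleic_acid=True)))  # eject from nest
print(undertaker_response(NestMate("healthy ant", alive=True, oleic_acid=False)))     # leave alone
```

Whether you call that rule smart or stupid is exactly the interpretive question the next paragraph raises.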
Is the ant smart? Or stupid? Maybe neither. Or, most intriguingly of all, maybe it is both? Is there an experimentum crucis that we might perform to settle a question like this once and for all?
No. Intelligence isn't like that. It isn't something that happens inside the bug, or inside us. If intelligence is anything, it is an appropriate and autonomous responsiveness to the world around us. Flexible, real-time sensitivity to actual situations is what we have in mind when we talk about intelligence. And this means that intelligence is always going to be not just a matter of degree, but one of interpretation.
So back to Watson: it won! Watson produced answers to real questions, and it did so quickly and in ways that could only dimly be anticipated or understood by its designers. It beat its human opponents! This is a stunning achievement. A dazzling display of real-world, real-time responsiveness in action. Watson can think!
  But hold on. Not so fast. Even if Watson is bristling and buzzing with intelligence, we can legitimately wonder whether it's the natural intelligence of its programmers that is in evidence, rather than that of Watson.
And then there's the issue of that little pronoun. People wonder whether it's legitimate to talk of Watson as a He, but really the more pressing question is whether we can even speak of an It. In an important sense, there is no Watson. If Watson is a machine, then it is a machine in the way that a nuclear power plant is a machine. Watson is a system, a distributed local network. The avatar, the voice, the name — these are sleights of hand. The Watson System is staged to manipulate strings of symbols that have no meaning for it. At no point, anywhere in its processes, does the meaning, or context, or point of what it is doing ever get into the act. The Watson System no more understands what's going on around it, or what it is itself doing, than the ant understands the public health risks of decomposition. It may be a useful tool for us to deploy (for winning games on Jeopardy, or diagnosing illnesses, or whatever — mazal tov!), but it isn't smart.
But again, we need to slow down. Think of the ants once more. The ants do have good reasons to eject the oleic-acid-bearing ants from the nest, even if they aren't clever enough to understand that they do. Natural selection built the ants to act in accord with reasons they cannot themselves understand. And so with Watson. The IBM design team led by David Ferrucci built Watson to act as if it understood meanings that are, in fact, not available to it. And maybe that's the upshot of what Dan Dennett has called Darwin's dangerous idea: that's the way, the only way, meaning and thinking get into the world, through natural (or artificial) design. Watson is surely nothing like us, as we fantasize ourselves to be. But if Darwin and Dennett are right, we may turn out to be a lot more like Watson than we ever imagined.
Whatever we say about Dennett's elegant and beautiful theory, you'd have to be drunk on moonshine to take seriously the idea that Watson exhibits a human-like mindfulness. And the reason is that the Watson System fails to exhibit even an animal-like mindfulness.
Remember: animals are basically plants that move. Plants are deaf, blind and dumb. Vegetables have little option but to take what comes. Animals, in contrast, can give chase, take cover, seek out both prey and mates, and hide from predators. Animals need to be perceptually sensitive to the places they happen to find themselves, and they need to make choices about what they want or need. In short, animals need to be smart.
Now here's the rub. Watson, biologically speaking, if you get my drift, is a plant. Watson is big and it is rooted. Like all plants, it is deaf, blind, and immobile; it is basically incapable of directing action of any kind on the world around it. But now we come up against Ryle's question as to just how much logical mishandling the concept of intelligence can tolerate. For it is right there — in the space that opens up between the animal and the world, in the situations that require of the animal that it shape and guide and organize its own actions and interactions with its surroundings — that intelligence ever enters the scene.
It's important to appreciate that language is no work-around here. Language is just one of the techniques animals use to manage their dealings with the world around them. Giving a plant a camera won't make it see, and giving it language won't let it think. Which is just a way of reminding us that Watson understands no language. Unlike the ant, who acts as though it has reasons for its actions, Watson acts like a plant that talks.
