If you saw my dog, Sadie, as she scopes me out in the morning when I put on my shoes, you'd think the answer to the question is obvious. She's always checking for the running shoes. When it's clear we're going running, she goes ballistic.
But does she really have "fun"? Or is this just a result of evolution, a dance between her genes and mine in which her ancestors enjoyed some benefit (my ancestors fed them?) when they exhibited what looked like enthusiasm for going hunting, or out into the fields for the workday?
It's a question I think about a lot, so I was totally intrigued by a piece in the April 8 Nature by Clive D. L. Wynne, from the University of Florida.
Wynne's taking on the question of whether it is useful to anthropomorphize animal behavior. Anthropomorphism is a distinguished tradition:
The complexity of animal behaviour naturally prompts us to use terms that are familiar from everyday descriptions of our own actions. Charles Darwin used mentalistic terms freely when describing, for example, pleasure and disappointment in dogs; the cunning of a cobra; and sympathy in crows. Darwin's careful anthropomorphism, when combined with meticulous description, provided a scientific basis for obvious resemblances between the behaviour and psychology of humans and other animals. It raised few objections.
It's also, Wynne argues, wrong.
He's doing something careful here, and I am not. Wynne is looking for some sort of rigorous framework, while I'm just having a primitive relationship with my four-legged running buddy. I'm doing what he'd label "naive anthropomorphism" ("the impulse that prompts children to engage in conversations with the family dog") while he's talking about something much more rigorous. "Critical anthropomorphism," in Wynne's formulation (he's quoting Gordon Burghardt here) "uses the assumption of animal consciousness as a 'heuristic method to formulate research agendas that result in publicly verifiable data that move our understanding of behaviour forward.'"
And when you get into the realm of consciousness vs. hardwired behavior, that's when things get sticky. It's really a central debate of contemporary philosophy: how one might distinguish, as Kwame Anthony Appiah puts it, between a robot and our mom.
Suppose the computer in question is in a robot, which, like androids in science fiction, looks exactly like a person. It's a very smart computer, so that its "body" responds exactly like a particular person: your mother, for example. For that reason I'll call the robot "M." Would you have as much reason for thinking that M had a mind as you have for thinking your mother does?
But what about Sadie? How might I distinguish between the run being "fun," in the same way that our human consciousness experiences "fun," and a hardwired, behaviorist exhibition of fun-like symptoms? Wynne's not arguing, I think, that Sadie doesn't have consciousness, merely that when we say she does, we're not saying anything terribly useful in terms of understanding what's up.
For me, I've solved it at a more practical level. Of course she's having fun. The poor dog can barely sit still for me to put the leash on her. What else could it be?
I guess I'm just a naive anthropomorphist.
In the end, I suspect the question of consciousness will be settled much like the question of value.
A service or an object has value if someone is willing to perform or provide it for a certain cost, and someone else is willing to pay to have it at that cost. The value then is not intrinsic to the thing, but a result of an external relationship: the thing has value, by definition or agreement, because of the role it fulfills. (This holds even when the value is determined by scarcity or some other factor; those factors are all external to the thing.)
I find it questionable whether the concept of consciousness even represents anything that actually exists. The question of whether something is having an emotion, or just presenting responses consistent with some emotion, is meaningless, like asking whether submarines can swim. (That is, the query is about a concept which covers multiple issues partially, not all of which apply to the object, and which depends more on the definition of the concept than on the properties of the object; it's just a badly formed question.)
So you can, say, define fun as a particular set of behavioural responses, or a particular set of synaptic patterns in the limbic system, and come to a conclusion (possibly).
But the concept of emotion encompasses an ineffable, visceral response that occurs in me, one that I have to take on faith when someone else tells me it occurs in them. It is easier to accept because I can see myself in others (just as we can see human traits in animals or even in things).
Posted by: ed__ on April 21, 2004 12:53 AM

You said "and of course Mom has consciousness and the computer doesn't, you silly argumentative philosopher you," and that's where I really don't agree with you.
Human beings, like any other animal, are no different from machines. They are made of sensors (mainly eyes, ears, skin, mouth, and nose). Those sensors pick up information from the outside world, which is carried to the brain; the brain computes a reaction based on complex algorithms (which, I think, are determined by your genes) and on your memory, and sends the response to motors (mainly muscles).

Each time you do something, you might think you could have done it differently, but the reality is that you couldn't. You did the thing this way because, at the end of the day, it's what your brain decided after running a very complex calculation over a lot of sources (data from the external sensors and data from your memory).

I'm not an expert and don't claim to be one, but that's how I understand the way animals (human beings included, of course) work. I don't see how it is different from robots or computers. And I don't see why it should be different. I don't even see why some people tend to think human beings are different from robots.
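In code terms, the loop I'm imagining looks roughly like this toy sketch (every name and the arithmetic here are made-up stand-ins for illustration, not a claim about how brains, or any real robot API, actually compute):

    # Toy sketch of the sensors -> brain -> motors loop described above.
    # All names and the "algorithm" are invented stand-ins for illustration.

    def brain(genes, memory, sensed):
        # Deterministic: identical genes, memory, and input always yield
        # the same output; the sense in which you "couldn't have done
        # otherwise".
        return (sum(genes) + sum(memory) + sensed) % 3

    def live_one_step(genes, memory, sensed):
        memory.append(sensed)                # sensed data also becomes memory
        return brain(genes, memory, sensed)  # response goes to the "motors"

    genes = [2, 7, 1]                    # fixed parameters, set at birth
    memory = []                          # grows with experience
    for observation in [0, 1, 2, 1]:     # stand-in sensor readings
        print(live_one_step(genes, memory, observation))

Run it twice with the same genes and the same readings and you get the same responses; the only way to get different behavior is to change the inputs or the accumulated memory, which is exactly my point.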
Posted by: Fernando Morientes on April 21, 2004 01:12 AM