
What do Inkheart, the Turing Test, and viruses have to do with each other?

Here’s one that’ll bust your noodle: it looks like fictional characters could pass the Turing Test. Imagine that we’re text-messaging each other, and I answer all your questions as one of my favorite imaginary people, like Lyra Belacqua. Our conversation might go something like this:

Interviewer: Hello, Lyra.
Lyra: How do you know my name, then?
Interviewer: Well, I–
Lyra: What is this place? You’re one of them Gobblers, en’t you?
Lyra: Where did I learn how to type?

Theoretically, this could go on indefinitely. I'm using all the intelligence of my own brain to convince the interviewer of Lyra's intelligence. It only breaks down when I reach the limits of my knowledge about Lyra, or when we run into logical inconsistencies because she's from another dimension. You can see that the second problem cropped up pretty fast, but maybe somebody skilled enough (Lyra's creator?) could fool the interviewer every time. Are we supposed to conclude that imaginary people are intelligent? Where do they keep their brains?

I'm bringing up the subject because it's so compelling for writers such as myself. One of the common experiences for fiction writers, across all genres, is the feeling that one's characters are alive. My characters are like the most wonderful imaginary friends; I don't write their dialogue, I just let them talk. And this experience isn't limited to authors. Some characters have captured the public's attention so thoroughly (the Tin Woodman, for example, or Harry Potter) that they can live independently of their creators. But if characters are intelligent, then we writers are all psychotic. We're all multiple personality disorder cases with a bunch of knights and talking dragons bopping around in our heads. And some days, that's what it feels like.

Maybe the best way to look at this is to go back to what the Turing test was supposed to prove in the first place. We tend to regard it as just a sentience test. But the grand old man Turing himself introduced the idea with a sort of parlor game in which a man and a woman each try to convince an interviewer that they're the woman. Somebody runs typewritten messages between them and the interviewer (if only he'd known about Instant Messenger!). If the man wins the game by convincing the interviewer that he's the woman, Turing says we can conclude he's got a pretty good understanding of gender. Likewise, in the more famous version of the test, if a computer can convince the interviewer that it's a thinking human, it must be doing some pretty good thinking of its own. Okay. So far, so good. So, if an interviewer becomes convinced that Lyra Belacqua is a real person, then either:

1. Lyra Belacqua must be pretty smart, or
2. I must be a pretty good actress.

Which is it?

Here's what I think, anyway. Characters are particularly good memes: ideas that can hop from head to head, taking on lives of their own. (Yes, kind of like LOLcats, but more sophisticated than that.) Viruses aren't alive, but they can hijack the machinery of a cell in order to act like they're alive and reproduce. Likewise, characters aren't intelligent; they're "personality fragments" that can seize hold of a person's imagination and act like they're real. Lyra uses my brain (with my permission, I hope!) to answer the interviewer's questions. And there's a plot idea in that: what if a character's personality were so virulent that it drove his author nuts?

Prognostication

So, I was reading Turing's Turing Test essay ("Computing Machinery and Intelligence"), and I came across the following passage. Turing has set forth the question, "Can machines think?" and, to answer it, he's trying to come up with a definition of "machine" that would rule out human beings.

One might for instance insist that the team of engineers be all of one sex, but this would not really be satisfactory, for it is probably possible to rear a complete individual from a single cell of the skin (say) of a man. To do so would be a feat of biological technique deserving of the very highest praise, but we would not be inclined to regard it as a case of “constructing a thinking machine.”

Prescient, no? This guy wrote this in 1950, and he wasn’t even a biologist.