
A month or so ago my friend R., who feels you have a talent for futurology, said to me, “I have a question I want you to ask Pamela.”

He and I had been watching the television series Westworld, about an American “Wild West” theme park populated with androids who begin to gain consciousness.

“Ask Pamela if she thinks this is possible, that AIs will become conscious.”

I kept forgetting to ask you his question, until finally on one of our recent Skype calls I remembered. You told me you didn’t have a clear answer to R.’s question, but gave me your thoughts on the subject. One thing you said, and I might be misquoting you, was that the people who work on artificial intelligence projects often have the kind of personality that feels more of an affinity for the logical processes of computers than for the often messy emotionality and coded communication of human relationships. Another thing you said was that AIs, if they favor a logical sort of thinking, might make choices that humans would find unethical. Putting these two ideas together, I wondered whether, rather than just exploring the question of AIs becoming conscious or becoming more intelligent than humans, we should also look into what kind of intelligence AIs are developing. People are a very diverse breed, and our individual specimens think in very different ways. If the designers of AI tend to have a certain kind of intelligence, and AI as a result comes to resemble its designers, perhaps AI will still be missing something—something that has to do with the diversity among humans and our desire to form relationships with one another.

On that same Skype call, after our discussion of AI, you said, “Given that R. was so interested in my opinion, I now have a question I would like you to ask him.”

Earlier in the conversation you and I had been talking about the Landscape of Change: that there is so much talk these days about “entrepreneurship” and “follow your passion,” but maybe not everyone has an individual passion worth following and turning into an entrepreneurial business. I mentioned R. as an example, saying he was a bright and curious individual, and at a stage in his life when he was looking for something larger than himself to lend his life meaning.

“Ask R. for his ideas on how I can get people like him”—bright, curious and in search of something larger than themselves—“interested in participating in my work,” you said.

In the following days I kept forgetting to tell R. your response to his question, as well as your question for him. Meanwhile, on our next Skype call, two weeks later, you and I had a nice breakthrough on the article you’re writing on what it means to be Exponentially Human. You were struggling with the question of how to write an essay as if it were written ten years in the future, when ten years in the future you expect writing to be eclipsed by other forms of communication. You mentioned, for example, that a decade from now each person might have a “bot” that provides a stream of images and audio personalized to the individual. With such “bots” available, what reason will there be to communicate in writing? I almost wanted to say to you, “Forget about logical inconsistencies for now, Pamela, and just write the article.” But we stayed with the question and came to a breakthrough. Why in the future (or even the present) would someone choose to communicate in such an antiquated, static and imperfect mode as writing? For no other reason—you and I concluded during our Skype conversation—than to let the reader know that there was a person at the other end of this written essay, a writer, who chose these words, imperfect as they are, in hopes of connecting with you, the reader, another person, and sharing my imperfections with you—that which makes us human.

My face lit up as we had this conversation, as it made me think of a similar conversation I had with you one late night at Hub Westminster, three years ago, when I was attempting, with much frustration, to design an algorithm that would sort through an archive of videos I was developing and deliver to the viewer exactly what he wanted. I showed you the archive, then selected several videos I thought you might enjoy, explaining that my algorithm would do just that (only better): choose the right videos for the right people.

“But I don’t care about the algorithm,” you said. “I’m interested in these videos because YOU CHOSE them.”

The day after our latest Skype conversation, R. and I watched the film The Imitation Game, about Alan Turing and the creation of the machine that cracked the German Enigma code during World War II. The Alan Turing character in the film, who I should note was partly fictionalized, had a personality that made me think of what you said about how those who work on developing AI often feel an affinity with the logical processes of computers. There was also a line the Alan Turing character says when he is asked whether he believes machines can think as people do. I’ve found the quote:

Of course machines can’t think as people do. A machine is different from a person. Hence, they think differently. The interesting question is, just because something, uh… thinks differently from you, does that mean it’s not thinking? Well, we allow for humans to have such divergences from one another. You like strawberries, I hate ice-skating, you cry at sad films, I am allergic to pollen. What is the point of… different tastes, different… preferences, if not to say that our brains work differently, that we think differently? And if we can say that about one another, then why can’t we say the same thing for brains… built of copper and wire, steel?

All of this got me thinking about the diversity among people (and between people and machines). I remembered your response to R.’s question about AIs developing consciousness and shared it with him. But first I asked him your question, about how to get people like him interested in your work.

“Make it something like what we just saw,” he said. “A story. Not a novel. People don’t read those these days. Even better than what we just saw [a movie] is a television series. Because that’s how people get their information these days. I also think there’s something about images that’s more natural.”

The same weekend R. and I saw The Imitation Game, he and I watched a documentary by Werner Herzog, Cave of Forgotten Dreams, about a cave in France, discovered in 1994, that contains the oldest known cave drawings, some of them 32,000 years old. Near the end of the film one of the archaeologists studying the cave says there’s something about images, versus language, that communicates to us—to all of us humans—across time.

Finally, a postscript: R. says he has another question for you, about the impact of emoji symbols on the English language and on language in general. He says he has noticed that when he types a message on his iPhone, it will suggest emoji symbols to replace some of his words, for example a smiley-face symbol instead of the word happy. He’s curious about the consequences of this, negative and positive. Negative, in that emoji are developed by corporations; in that this is a sort of “artificial intelligence,” in which your phone thinks for you when it suggests an emoji; and in that this is also a sort of “artificial emotion,” substituting a symbol of a smile for the actual, face-to-face thing. And finally there are the (potentially) positive consequences, captured in R.’s question to you: Is there going to be an international language based on [emoji-like, visual] symbols?