Emotional uncanny valleys

I was speaking at the GameHorizon conference in Newcastle earlier this week. One of its greatest pleasures was getting to listen to other inspiring speakers, in particular one of my all-time games-and-big-ideas heroes, Jesse Schell.

Schell gave a typically expansive and delightful account of possible futures for gaming, focusing on the evolution of characters in games – and how these may not only grow ever more realistic in their behaviour and looks, but actually morph from being mere avatars into something more like virtual companions. “Mario,” he joked, “is a terrible friend,” because even after three decades of fun together he doesn’t recognize you when you start playing a new game. But persistent databases and powerful AI could change this, gifting gamers a long-term relationship with characters, and potentially taking such relationships far beyond gaming.

It was heady stuff, and reminded me of the science fiction notion of a shadowy “e” version of the self trailing our future counterparts, connecting them seamlessly and intelligently to whatever the Internet might look like a thousand years hence. Yet parts of it, I felt, also failed to ring true.

Leaving aside the technological issues, I can see tremendous value in the idea of something that’s able over time to learn about your speech patterns, looks, behaviour, preferences, history and so on, and then to act as an interface between you and the digital realm. I was struck, though, by an issue that came up when Schell showed us slides of cutting-edge puppetry in places like Disneyland.

I visited Disney World in Florida earlier this year, and loved the onscreen real-time puppeteering in attractions like Monsters, Inc., where computer-animated characters from the films bantered with their audience thanks to actors behind the scenes who controlled the characters and whose words were mapped onto them. In the flesh, though, the dynamic was rather different. And Schell, who I think has been inside several Disney character costumes in his time, recalled the unexpected anger some children showed when they met costumed Disney characters. When they got up close, the children suddenly realised that this wasn’t the cartoon creature they had watched on TV, but a person in a suit. They had been lied to, and they knew it.

People hate feeling manipulated; they also hate certain kinds of ambiguity and wrongness. And so I found myself wondering about an idea that was invoked during the questions after Schell’s talk: the emotional uncanny valley. This phrase is an extension of the notion of the uncanny valley, which describes the creeping feeling of unease you get when you see something that looks very close to being human or real, but isn’t quite. Why shouldn’t the same apply emotionally as well as visually?

Quasi-human interaction with virtual characters is an area bursting with potential unease – something Schell acknowledged. And this made me wonder about the whole idea of “digital companions” as a technological objective, at least outside the constraints of entertainment and fiction.

Most people already have a close relationship with much of the technology in their lives: with cars, houses, smartphones, clothes, computers and so on. We are pretty comfortable with the idea of interweaving powerful devices into our lives – creations that are variously functional, smart, entertaining and delightful, but that don’t pretend to be our friends or ask us about our feelings.

I can imagine a day when portable, personal software is able to learn an incredible amount about our preferences, speech patterns, bodies and even minds. Quite possibly we will choose to link this software to everything from our bank balances to our browsing habits and motor vehicle choices. But when I wonder what form this smartness and intimacy will take, my vision of the future looks less like Mickey Mouse following me around and more like, well, an iPhone – albeit one that discreetly houses a machine mind many orders of magnitude more potent than an A4 chip. I don’t want anthropomorphism; I don’t want a friend. I want a brilliant, beautiful tool that helps me feel good about myself because it’s brilliant and beautiful – and not because it appears to care whether I’ve had a bad day.

While personalised future technologies may well shadow, augment and interrogate our every move, I have the feeling that they’ll leave the business of feelings to us. Or am I just stuck on the near side of the uncanny valley, unable to imagine the sunlit uplands beyond?