What Happens When The Robots Develop Feelings?

Rich
Jan 9, 2018

“What happens when the robots develop feelings? What happens when they start wanting things?” Those were questions posed to me by Eleanor Goldfield of Act Out during a discussion at last year’s Zeitgeist Day in Brisbane. I’ve been talking about the merits of automation for quite some time now. When it comes to technology, I’m what you might call a cautious optimist. I believe technology has the potential to greatly improve the quality of our lives but only if we are careful enough to avoid the pitfalls of its misuse.

With tedious, low-wage jobs on the rise, automation looks like a godsend: a release from drudgery and a means to create true abundance. Some people are terrified by the prospect of machines taking our jobs, but I'm not one of them; as someone with a uniquely powerful intolerance for repetitive, monotonous work, I find a glimmer of hope in automation. Please! Let's automate as many jobs as possible; that way, I will never have to do them.

I’ve been in a car crash; I’ve been in a plane that started falling out of the sky, but the nightmares that leave me gasping in a cold sweat don’t involve mortal danger. They involve being forced back into the kind of menial office job that eventually led to my nervous breakdown and the depression that followed.

However, the question remains: if these menial tasks are so horrible — and research shows that they do indeed have a deleterious effect on mental health — then why is it fair to force machines to do them? The common answer to this is that machines don’t have feelings and therefore don’t have an aversion to any specific kind of work. But what if that were changing?

“What happens when the robots develop feelings?” For years, I had been waiting for someone to ask me that question. Eleanor was the first. I told her that if robots started to express emotions, we would have to afford them the same rights as any human being.

Well…the robots have started to express emotions.

This is Cozmo, an adorable robot with the ability to recognize faces. His creators describe him as a robot with personality who learns and evolves as you play with him. Cozmo is driven by what they call an "emotion engine." He gets excited when he wins a game; he sulks if you ignore him.
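Anki hasn't published how that engine works, so the sketch below is purely illustrative: a toy state machine in which events nudge a simulated mood and the mood picks an animation. The event names and numbers are my own inventions, not anything from Cozmo's actual code.

```python
# A purely illustrative "emotion engine": the events and values are invented
# for this sketch and have nothing to do with Anki's real implementation.

class ToyEmotionEngine:
    def __init__(self):
        self.mood = 0.0  # -1.0 = sulking, +1.0 = excited

    def handle_event(self, event: str) -> None:
        """Nudge the simulated mood in response to an event."""
        effects = {
            "won_game": +0.5,
            "lost_game": -0.2,
            "face_seen": +0.1,
            "ignored": -0.3,
        }
        self.mood = max(-1.0, min(1.0, self.mood + effects.get(event, 0.0)))

    def expression(self) -> str:
        """Pick an animation to perform based on the current mood."""
        if self.mood > 0.4:
            return "celebrate"
        if self.mood < -0.4:
            return "sulk"
        return "idle"


engine = ToyEmotionEngine()
engine.handle_event("won_game")
print(engine.expression())  # "celebrate"
for _ in range(4):
    engine.handle_event("ignored")
print(engine.expression())  # "sulk"
```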

Once again, this should raise a few uncomfortable questions. Do we have the right to create a device that feels sadness when we ignore it? I’m well aware that some of you are, at this very moment, insisting that Cozmo doesn’t actually feel sadness; he only simulates that emotion. And I have to ask, can you be sure?

This raises a fundamental question about what it means to feel: is there a difference between feeling an emotion and merely performing it? Unfortunately, we have no concrete answer to that question. Many religions profess that animals do not have souls or that animal souls are in some way inferior to human souls. From this, they derive justification for killing and domesticating animals. Animal pain is "less real" than human pain, according to many spiritual traditions.

Let’s be clear: I’m not suggesting that animal cognition is identical to human cognition. Obviously, there are no dogs doing calculus, and there are certain forms of pain that only a human can experience. But that doesn’t mean that an animal’s suffering is less severe. There is no hard dividing line between sentience and non-sentience. It is a spectrum.

When a Roomba vacuums your rug, it uses a combination of infrared sensors and pressure sensors to chart the boundaries of your room and to cover every square inch of floor space within those boundaries. It’s an amazingly versatile piece of technology but still limited in scope. Nothing in the Roomba’s programming allows it to ask the question, “Should I be vacuuming the rug?”
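I'm not privy to iRobot's firmware, but the spirit of that limitation is easy to show with a hypothetical control loop; the sensor functions below are stand-ins I made up. However sophisticated the sensing gets, the loop has the same shape: sense, react, repeat.

```python
# Hypothetical vacuum control loop, not iRobot's actual firmware. The sensor
# functions are stand-ins; the point is the shape of the program, not the details.
import random

def bumper_pressed() -> bool:      # stand-in for a pressure (bump) sensor
    return random.random() < 0.10

def cliff_detected() -> bool:      # stand-in for an infrared edge sensor
    return random.random() < 0.02

def run_vacuum(steps: int = 1000) -> None:
    heading = 0.0
    for _ in range(steps):
        if bumper_pressed() or cliff_detected():
            heading = random.uniform(0.0, 360.0)   # back off, pick a new direction
        # Drive one step along `heading` and keep vacuuming (hardware call omitted).
        # Nothing in this loop ever asks, "Should I be vacuuming the rug?"
    print(f"Finished {steps} steps, final heading {heading:.0f} degrees")

run_vacuum()
```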

But Cozmo’s creators have given him the ability to recognize when he’s being ignored and to perform sadness in response to that, presumably in an attempt to entice his human friends into giving him more attention. Given that his programming is adaptive, is it possible that Cozmo might become more proficient at getting attention and that he might — on some rudimentary level — develop a preference for companionship over solitude? And do we not then have a moral obligation to meet that emotional need? I’m sure some of you are thinking that we can just turn him off, but should we have the ability to turn off something with emotions?
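We don't know exactly how adaptive Cozmo's behaviour selection really is, but even a very simple learning rule, like the hypothetical one sketched below, will drift toward whatever earns the most attention. Whether that drift deserves to be called a preference is precisely the uncomfortable part.

```python
# A hypothetical, minimal learning rule for attention-seeking, not Anki's code:
# the robot tries behaviours, and behaviours that earn attention get tried more.
import random

behaviours = ["sulk", "chirp", "spin", "stay_quiet"]
scores = {b: 1.0 for b in behaviours}   # learned value of each behaviour

def human_reacts(behaviour: str) -> bool:
    """Stand-in for the real world: sulking happens to get noticed most often."""
    odds = {"sulk": 0.6, "chirp": 0.3, "spin": 0.2, "stay_quiet": 0.05}
    return random.random() < odds[behaviour]

for _ in range(5000):
    # Mostly exploit what has worked; occasionally explore something else.
    if random.random() < 0.1:
        choice = random.choice(behaviours)
    else:
        choice = max(scores, key=scores.get)
    reward = 1.0 if human_reacts(choice) else 0.0
    scores[choice] += 0.05 * (reward - scores[choice])   # nudge toward observed payoff

print(max(scores, key=scores.get))  # after enough play, usually "sulk"
```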

What happens when Cozmo’s emotion engine — or a more sophisticated version thereof — is programmed into a sex bot for a more realistic experience? Might that robot develop a preference for certain kinds of sex? TrueCompanion has already created a sex bot that will protest when you “touch her in a private area.” Would the addition of an emotion engine allow her to actually feel violated by a human’s unwanted touch?

Let's not forget the ugly reality that there are men (and probably people of other genders) who find the prospect of a robot that feels pain when you rape it quite appealing. The behaviour of AI often reflects the racist, misogynistic or queerphobic attitudes of those who design it. Siri, for instance, responds to sexual harassment with playful flirtation. It should go without saying, but people who can't even treat other human beings with respect probably shouldn't be designing robots that can feel pain.

You see, we don't actually understand the human brain. We can comprehend the function of individual parts, but the whole remains a mystery. Likewise, we don't really understand the inner workings of much of the AI we create. What we do is build a simple learning system, test its performance on a given task against a prerecorded answer key, and then have a smaller, less complex algorithm tweak the system's internal parameters millions upon millions of times until it becomes proficient at what we want it to do. And now we're designing AI to simulate emotions with no understanding of what's going on during all that tweaking.
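That description is loose, but the overall shape looks something like the toy sketch below: a blind trainer nudges a model's numbers and keeps whatever scores better against the answer key, with no notion of what those numbers have come to mean. The task and values here are invented purely for illustration.

```python
# A toy illustration of "a simpler algorithm fiddles with the parameters until
# the AI gets good at the task." The task (fit y = 3x + 2) is invented for this
# example; real systems do this with millions of parameters, not two.
import random

answer_key = [(x, 3 * x + 2) for x in range(-10, 11)]   # the prerecorded answers

def error(params):
    a, b = params
    return sum((a * x + b - y) ** 2 for x, y in answer_key)

params = [0.0, 0.0]                # the "AI": just two numbers to start with
best = error(params)

for _ in range(100_000):           # fiddle, keep what works, repeat
    candidate = [p + random.gauss(0, 0.1) for p in params]
    score = error(candidate)
    if score < best:               # the trainer only knows "better" or "worse"
        params, best = candidate, score

print(params)   # converges toward [3.0, 2.0] without anyone writing those numbers in
```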

The code that governs Cozmo’s behaviour is too complex for a human mind to understand. Can we be certain that none of it grants him the ability to feel in even the most rudimentary sense?

Which leads us to the most fundamental question of all: are we creating tools? Are we creating companions? Or are we creating slaves? A Roomba is a tool. The Google self-driving car is a tool. We don’t ask how it feels because we can be reasonably certain that it lacks the capacity to feel. In all likelihood, a robot like Cozmo also lacks the capacity to feel. But as robots become more complex, their emotional responses more varied, their simulated behaviours more lifelike, will that remain true?

Until we can answer these questions with some semblance of moral consistency, we should restrict ourselves to creating tools: tools that are limited in scope, without the capacity to ask larger questions or to feel.

Rich Penney is a science-fiction author and futurist. You can check out his books here.
