Emotional intelligence (EQ) — the ability to pick up on what other people are feeling or thinking, primarily from body language and tone of voice — is difficult even for some humans. When a human misreads a fellow human, they can lose a friendship, a relationship, a job. The stakes are even higher for bots: if their makers can’t teach them empathy, they could cease to exist.
Naveen Joshi, the founder and CEO of enterprise development company Allerin, recently wrote about how EQ will make all the difference in whether AI becomes more widely used by society. “Even the most sophisticated AI technologies lack essential factors like emotional intelligence and the ability to contextualize information like human beings,” he wrote, nailing the basic stumbling block.
Bots like Alexa don’t actually know us. They don’t know how we’re feeling or what we’re thinking. They can’t pick up on unspoken cues: the gestures, the frowns. They lack even basic empathy, essentially communicating only in trivia and small talk.
The curious thing is that this isn’t obvious in everyday use. When we talk to Alexa, we tend to see the bot as another person, someone who lives inside a small speaker. The bot talks; it tells jokes. Part of the reason we don’t want to think too hard about EQ in bots is that there’s a bit of an “uncanny valley” for AI: an awkward gap where our minds make up the difference between what is obviously a set of algorithms and something that seems more human. We bridge that gap mentally, but as bots evolve, get smarter, and show more emotion, we’ll actually start questioning them more; we’ll start realizing they are not human.
The “uncanny valley” describes the unease we feel when an artificial figure looks almost, but not quite, human. The divide is hard to bridge: the more human the avatar looks, the more gaps we fill in, until at some point we realize it isn’t human at all. That’s when things start falling apart.
Think of the most recent Final Fantasy movie, called Kingsglaive: Final Fantasy XV. At first, it’s astounding how much the digital actors look like humans. Then there’s a slight misalignment, or a facial twitch, or a squint that doesn’t look quite right. I never finished the movie because eventually I stopped believing it was real and I stopped caring about these digital actors.
This will happen with bots, too. First, we’ll stop seeing them as digital creations and start connecting to them emotionally. But they will always be subroutines on top of subroutines, and at some point we’ll stop bridging the gap between ourselves and Alexa or Cortana. This is where things will become the most interesting, because bot developers will have to solve the massive problem of understanding you, the user. Are you sick? In a bad mood? Recently broken up with a boyfriend? Tired? If the bot can’t read you, you won’t find it valuable. “Alexa, how is the weather?” works fine for now, but soon we will want a lot more.
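To make that gap concrete, here is a deliberately naive sketch, in Python, of the kind of surface-level mood detection an assistant could attempt today. Everything in it is hypothetical (the cue lists, the `guess_mood` function, the labels); it is not how any shipping assistant works. The point is what it cannot see: tone of voice, facial expression, history, context.

```python
# Hypothetical sketch: keyword-based mood guessing from surface text alone.
# No tone of voice, no gestures, no frowns, no memory of the user's day.

NEGATIVE_CUES = {"tired", "sick", "awful", "sad", "exhausted", "broke up"}
POSITIVE_CUES = {"great", "excited", "happy", "thrilled", "wonderful"}

def guess_mood(utterance: str) -> str:
    """Return a crude mood label based only on surface keywords."""
    text = utterance.lower()
    if any(cue in text for cue in NEGATIVE_CUES):
        return "negative"
    if any(cue in text for cue in POSITIVE_CUES):
        return "positive"
    return "neutral"  # no lexical signal: the bot is guessing blind

# "I'm exhausted" trips a keyword, but a flat "How is the weather?" from
# someone who just lost a job reads as neutral: the empathy gap, in code.
print(guess_mood("Alexa, I'm exhausted, how is the weather?"))  # negative
print(guess_mood("Alexa, how is the weather?"))                 # neutral
```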
This valley — programmatic accomplishment rising alongside our eventual mistrust as bots seem more and more human — is the single greatest challenge AI developers face. That’s because humans can’t trust things that show no empathy. It’s not possible; it goes against our nature. We don’t last long in a job, a friendship, or any relationship that isn’t built on trust and empathy. And we won’t rely more and more on a bot unless it seeks to understand us and demonstrates that it “knows” us.
The worst part? We don’t know when this split will occur. For now, bots are mindless minions that do our bidding. Google Home is a sidekick that tells us NFL scores. But what about when we send a bot in an autonomous car to pick up the kids? When a bot fills in for us in an interview? When we want a bot to care for an elderly person? The AI of the not-so-distant future had better be ready to tackle more complex challenges than simply looking up the weather.