09/19/2017 - 10:47 Written by Steve Cocheo
“I’m going to pick up a slice.”
Most humans would recognize that the speaker is talking about pizza, not their deteriorating golf game or a nasty papercut. But a chatbot may not recognize American vernacular.
“Humans communicate without context. Human conversation is not clean. We use slang and we use dialects,” said Darrius Jones, assistant vice-president, Enterprise Innovation, at USAA. For that matter, he added, “how many of you understand 100% of what your boss says?”
Chatbots continually surprise
Jones, speaking during a session about chatbots and artificial intelligence at Finovate Fall, acknowledged that the vagaries of chatbots can be downright embarrassing. More than once, he said, he has demoed chatbot tech for someone and … crickets. No response from the bot. And that can even happen, he said, when the chatbot answered the same query properly the day before.
Who hasn’t had their everyday personal assistant—Apple’s Siri, Microsoft’s Cortana, Google Assistant, or Amazon’s Alexa—return complete non sequiturs or just plain gibberish in response to what, to us, is a plain question? How often do users rephrase a request more than once to get the bot to understand? Who is training whom?
Moderator Michael Meyer, venture capitalist and CEO of RegTechLab, had opened the session by asking the audience “Who enjoys chatting to a chatbot?” (Technically, some experts set the likes of Siri and its competitors apart from chatbots, referring to them as “conversational artificial intelligence.”)
He scanned the room. “Oh, I got half a hand there.”
“We have a ways to go,” said Meyer. He then cited HSBC research released earlier this year, conducted across multiple international markets, which found that 20% of respondents would trust a robot to open their parachute, yet only 11% would trust a bot to give them mortgage advice.
Potential for reliable chatbots
Meyer’s panel consisted of players involved in developing advanced chatbot technology on some level. One observation coming out of the discussion: While hopes for improving the chatbot experience hinge to a great degree on adopting improved technology, there remains a significant role for human intelligence in establishing and scoping what a chatbot should do.
Getting chatbots right has great potential. Hari Gopalkrishnan, managing director, Client Facing Platforms Technology, Bank of America, said his mother could pick up a phone and say, “Send my son $50,” to a bot, and yet she would never use an app to perform the same transaction.
Chatbots thus have the potential “to involve a demographic that’s been untapped” for greater tech-based service, said Gopalkrishnan.
A shortcoming of traditional bank website navigation, he said, is that “it’s hard to find stuff.” A simple matter like replacing a lost bank card can be complicated if customers can’t find the right place on a bank site. And they can’t call the 800 number on the back of the card, he added, if they’ve lost the card.
Enabling people to make such requests in plain everyday speech to a chatbot is the goal behind BofA’s Erica chatbot. Erica was announced as a concept at Money20/20 last October. Since then the bot has been in development and refinement, and has not yet been released for customer use.
Looking at the science
“The challenge is that when you are chatting with a chatbot you feel constrained, that there is a correct way to say things,” said Jason Mars, a University of Michigan computer science professor and co-founder and CEO of Clinc, an artificial intelligence startup. He said the hope is to apply new developments in A.I. to produce the next generation of Siri and the like, and then put that technology into the hands of banks for their chatbots.
“I think we’re getting there. The science is out there,” Mars said. “It’s changing what’s possible.”
“Any engineer in the country can build a mobile app,” he added. “It’s a commodity.” Solving the chatbot challenge is the next big thing.
The next generation of bank customers will expect chatbots to respond to them, and won’t adapt to the bots, the panel indicated. Meyer said his kids won’t use Alexa. This didn’t come as a surprise to Jones. Millennials, he said, resist learning to talk in bot-speak, feeling that “I shouldn’t have to say what you expect me to say.”
Mars said he thinks the breakthroughs will come in the next two years. “We’ll get to that quality bar,” he said.
Stop trying to boil the ocean
Successive generations of Star Trek, from the 50-year-old original series on, have featured talking computers that characters would simply address out loud from anywhere on their ship. The computer could usually find out anything or set up anything, instantaneously, and it was a great plot device.
But panelists suggested that banks may err when they try to come up with chatbots that can answer nearly anything that developers can dream up that a customer might ask.
“Trying to boil the ocean is not the right answer,” said Gopalkrishnan.
Jones said that USAA developed a set of 10,000 potential answers its chatbot could give out, and yet it just wasn’t right. Trying to anticipate everything the team could think of a customer asking spread the bot too thin.
Jones said that further work found that developing a set of just 3,000 answers—with greater precision and depth—actually produced better results for customers.
In August, USAA launched a pilot of a new “skill” for Alexa built in cooperation with Mars’ Clinc. A company news release gave examples of the chat possible with the approach, which is designed to learn as it goes.
“Alexa, how much money do I have?”
“You have a total of $2,618.51 in your two bank accounts.”
“Alexa, how much do I have in my checking account?”
“You have $1,809.97 in your classic checking account.”
“Alexa, can I spend $100 on a new phone?”
“You typically spend an average of $200 on electronics in a month. So far this month you’ve spent $50. This will leave you with a balance of $2,518.51.”
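The exchange above can be sketched as a minimal intent router: a small set of well-defined responses, with anything out of scope deferred rather than guessed at, in the spirit of USAA’s “fewer but deeper” answer set. The keyword rules, account names, and balances below are illustrative assumptions only, not USAA’s or Clinc’s actual implementation.

```python
# Hypothetical sketch of a banking skill's intent routing.
# Account names and balances are made up to mirror the article's dialogue.
ACCOUNTS = {"classic checking": 1809.97, "savings": 808.54}

def total_balance(accounts):
    """Sum across all accounts, e.g. for "how much money do I have?"."""
    return round(sum(accounts.values()), 2)

def route(utterance, accounts):
    """Map a plain-speech request to one of a few precise responses."""
    text = utterance.lower()
    if "checking" in text:
        bal = accounts["classic checking"]
        return f"You have ${bal:,.2f} in your classic checking account."
    if "how much" in text or "balance" in text:
        return (f"You have a total of ${total_balance(accounts):,.2f} "
                f"in your {len(accounts)} bank accounts.")
    # Out-of-scope request: defer rather than give a thin, wrong answer.
    return "Sorry, I can't help with that yet."
```

The fallback branch reflects the panel’s point: a smaller answer set handled with depth beats a sprawling one that answers everything badly.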
USAA’s chatbot is far from perfected, however. As time has gone on, Jones told Banking Exchange after the panel discussion, USAA has found that only 25% of the answers in the 3,000-answer set fall within the top 10% of customer requests.
“You have to be willing to iterate over and over again,” said Jones.