Organisations must allow humans to take over beyond AI's reasoning capabilities, says Suman Reddy of Pegasystems – BusinessLine

Posted: November 30, 2019 at 10:08 am

If we want humans to trust artificial intelligence (AI), then we need to incorporate empathy, which is at the heart of ethics issues related to AI systems, says Suman Reddy, Managing Director, Pegasystems, India.

As businesses attempt to improve customer service, many use AI to help make customer decisions faster, cheaper and more intelligently. "They are also using chatbots or intelligent virtual assistants to streamline conversations by taking over routine customer queries. However, in their quest to personalise interactions and reduce interaction times, the human touch is being eliminated," Reddy told Business Line.

Eventually, customer service becomes impersonal and inefficient, and fails to adapt to the customer's context, he points out.

The point was borne out by a survey recently conducted by Pegasystems, in which 70 per cent of respondents said they preferred to speak to a human at the other end of the line, not a chatbot.

The nature of AI means that, although efficient, it sometimes operates in a way that lacks what a human would describe as empathy, or bombards customers with recommendations that may actually be detrimental to customer loyalty.

Reddy insists that for more complex customer engagement scenarios beyond AI's reasoning capabilities, organisations must allow humans to take over.

"They can use their natural capabilities of judgement and reasoning to resolve cases. By combining AI with live human customer agents, customers get the benefit of human judgement with AI, allowing the agent to vet AI recommendations before deciding which path to go down," he adds.

This integrated approach, he continues, could help manage difficult conversations that do not fit a pre-defined response or require a lot of nuanced judgement.
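The integrated approach described above — routine queries handled automatically, nuanced cases escalated to a human agent who vets the AI's suggestion — can be sketched roughly as follows. This is an illustrative sketch only; the function and class names are hypothetical, not Pega APIs, and the escalation criteria are assumptions.

```python
# Minimal sketch of a human-in-the-loop routing pattern: the AI proposes a
# response, and a human agent vets it before it reaches the customer when the
# case is nuanced or the model's confidence is low. All names are illustrative.

from dataclasses import dataclass


@dataclass
class Recommendation:
    reply: str
    confidence: float  # model's self-reported confidence, 0.0-1.0


def route_query(query: str, recommendation: Recommendation,
                threshold: float = 0.8) -> str:
    """Send routine, high-confidence cases straight through; escalate
    low-confidence or complaint-like ones to a live agent, who sees the
    AI suggestion and decides which path to go down."""
    routine = "complaint" not in query.lower()
    if routine and recommendation.confidence >= threshold:
        return f"auto: {recommendation.reply}"
    return f"escalated to agent, with AI suggestion: {recommendation.reply}"


# Routine balance query is answered automatically; the complaint is escalated.
print(route_query("What is my balance?",
                  Recommendation("Your balance is ...", 0.95)))
print(route_query("I have a complaint about these fees",
                  Recommendation("Offer a fee waiver", 0.60)))
```

The escalation test here (a keyword plus a confidence threshold) stands in for whatever richer case-complexity signal a production system would use.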

Citing an example, he says a bank's sales team could be using data analysis and pattern recognition through their AI to boost their quarterly numbers. "While the AI algorithm might be complying with mandated regulations, it could offer a high-interest loan or insurance premium to a family that can barely afford it. The plan might not be sustainable for them or the bank. In this case, the AI will not have the discretion to withhold that offer, given it will be configured to achieve the bank's objective of maximising profit, even if that comes at the expense of the customer's interests through an over-expensive proposition," says Reddy.
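The discretion Reddy says the AI lacks amounts to a guardrail on top of the profit objective. A minimal sketch, assuming a simple repayment-to-income affordability rule (the function name, threshold, and figures are all hypothetical, not the bank's or Pega's actual logic):

```python
# Illustrative affordability guardrail: a purely profit-maximising engine
# would push the high-interest offer regardless; this check withholds offers
# whose repayments exceed a set share of the customer's income.

def should_make_offer(monthly_repayment: float, monthly_income: float,
                      max_burden: float = 0.35) -> bool:
    """Return True only if the repayment stays within an assumed
    affordable share (here 35 per cent) of monthly income."""
    return monthly_repayment <= max_burden * monthly_income


# A family earning 2,000 a month can barely afford a 900-a-month repayment,
# so the guardrail withholds that offer but allows the 400-a-month one.
print(should_make_offer(monthly_repayment=900, monthly_income=2000))  # False
print(should_make_offer(monthly_repayment=400, monthly_income=2000))  # True
```

The point of the sketch is the balancing act Reddy describes: the business objective is still served, but only within bounds the customer can sustain.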

In such a scenario, there should be a way for AI "to make more empathetic decisions that balance business needs and customer needs so, in the end, everyone wins. A mutually beneficial transaction helps the business bottom line and engenders trust from the customer, which will translate into customer loyalty in the long run," he adds.

AI today is already used to predict the future. Police forces use it to map crime, doctors use it to predict when a patient is most likely to have a heart attack or stroke, and researchers are even trying to incorporate AI into their experiments to plan for unexpected consequences.

Many decisions require a good forecast, and AI agents are almost always better at forecasting than their human counterparts, goes the general thinking. Yet for all these technological advances, Reddy says consumers still seem to deeply lack confidence in AI predictions.

Recent findings from a global Pega survey found 65 per cent of respondents don't trust that companies have their best interests at heart.

"If they don't trust companies, how can they trust the AI that companies use to engage with them?" questions Reddy. The survey showed only 30 per cent of customers felt comfortable with businesses using AI to correspond with them.

"While AI has tremendous advantages, people are sceptical about whether businesses really care about and empathise with their particular situations," he adds.


"Customer service teams can perform much better when empathy is shown. This becomes especially critical when agents who deal with hundreds of customers find it impossible to contextualise responses based on the behavioural or contextual data captured previously," adds Reddy.

For their part, organisations also need to ensure their AI models are tuned to be more empathetic to customer needs and not just optimised to squeeze every last possible dollar from them.

The Pega study showed only 12 per cent of respondents feel they have interacted with an automated system that has exhibited empathy. "Enterprises must take advantage of advanced AI which can drive unprecedented business benefits: making intelligent decisions at scale, monetising huge volumes of data, and jump-starting revenues, all while delighting customers," says Reddy.

Unfortunately, he adds, the problem with AI today is that it just looks for patterns. "The systems cannot usually explain how they came to a decision, because the neural network struggles to explain how it thought about the problem and worked towards a suggested action. These sorts of systems can also compound problems when judged from a human point of view in terms of decision-making or cognitive analysis. These are very important things that almost no company is addressing now," he adds.
