When UX meets AI
When UX and AI collide - what is the result? Tom Wood explores.
Tom Wood
25th February 2016
Artificial intelligence is attracting more and more interest in the business world.
It looks increasingly likely that AI software is going to find applications on the front line between companies and their customers. So it’s probably the right time to start thinking about how AI will weave itself into user experience in the coming years.
One of our neighbours is a company called Rainbird which is building a commercial AI platform. We paid them a visit to learn more about AI and its intersection with design.
What do we mean by AI?
Artificial Intelligence is a very wide, multi-disciplinary field. This means that there’s no single model for how software and hardware come together to create practical applications. But there are three pretty common elements which tend to turn up in emerging products and services:
Knowledge modelling: This is a way to gather and organise human input to capture a particular area of knowledge - for example, the soil conditions needed to grow the juiciest tomatoes, or the suitability of various savings products for people at different stages of life. This knowledge model becomes the basis for the AI to learn and grow its own knowledge base.
Machine learning: These are algorithms that can learn from and make predictions on data. They draw on, and add to, the knowledge model. Essentially these allow the system to make data-driven predictions rather than following a strict set of rules (like the Expert Systems which were the focus of AI in the 1980s).
Natural language processing (NLP): This is a branch of human-computer interaction which wrestles with how to align the languages of machines and humans. It’s got two major challenges: how can a computer derive meaning from the rich and nuanced universe of human language and meaning? And how can it generate language in response which will be understood (and trusted and liked) by the human it is addressing?
Bring all three of these elements together and you have the potential to create services which draw on human knowledge and build on it to consult with humans in natural language, helping them make decisions or answer their questions.
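To make that concrete, here is a minimal, hypothetical Python sketch of how the three elements could fit together in a savings-advice consultation. The questions, rules and product suggestions are invented for illustration; a real platform would refine its knowledge model with machine learning and parse free text with NLP rather than rely on hand-written rules and exact keywords.

```python
# A minimal, hypothetical sketch (not Rainbird's platform) of the three elements together:
# a hand-authored knowledge model, the slot where learned behaviour would sit, and a
# consultation loop that asks questions rather than serving answers. All questions,
# rules and product names below are invented.

# 1. Knowledge model: expert input captured as the facts the system needs and rules over them.
KNOWLEDGE_MODEL = {
    "questions": {
        "goal": "Are you saving for something short-term or long-term?",
        "under_40": "Are you under 40? (yes/no)",
    },
    "rules": [
        # (condition over gathered facts, conclusion to offer)
        (lambda f: f["goal"] == "short-term", "An easy-access savings account may suit you."),
        (lambda f: f["goal"] == "long-term" and f["under_40"] == "yes",
         "A stocks-and-shares ISA may suit you."),
        (lambda f: f["goal"] == "long-term", "A fixed-rate bond may suit you."),
    ],
}

def consult():
    """2 + 3. Gather facts conversationally, then apply the model to reach a conclusion.
    In a real system, machine learning would refine the rules and NLP would parse free text."""
    facts = {}
    for key, question in KNOWLEDGE_MODEL["questions"].items():
        facts[key] = input(question + " ").strip().lower()
    for condition, conclusion in KNOWLEDGE_MODEL["rules"]:
        if condition(facts):
            return conclusion
    return "I'd suggest speaking to a human adviser."

if __name__ == "__main__":
    print(consult())
```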
Where does this fit into the commercial world?
There are thousands of potential applications from running nuclear power plants to controlling stock levels in your local supermarket. The simplest question is probably this: where do companies currently spend a lot of money getting humans to apply a body of knowledge to create a business outcome? Rainbird call the AI opportunity ‘knowledge work automation’ and a big, fat application of this sort of system is in customer service.
Think for a second about the volume of voice calls that banks, telcos and energy companies handle, with employees guiding customers towards the right product or tariff. Even now, at the end of the second decade of digital self-service, it’s hundreds of millions of calls every month.
AI could save industry billions by both decreasing the volume of inbound customer contact and also by automating more of those contacts where they can’t be avoided. AI could improve outcomes from customer-driven self-service (content optimisation, online Q&A). Or failing that drive web chat. Or failing that automate email or phone conversations. Or failing that support a human agent behind the scenes in a traditional human-to-human conversation. And, if the customer can be identified in all of this, it could not only drive a consistent dialogue across all of these channels, but also learn from that interplay and apply that learning to drive better, more efficient outcomes for both company and customer in the future.
All of this hinges on the ability of AI systems to enter into a consultation with a human, rather than just retrieve the answer to a question. When you ask an AI system a question, it tends to respond with a question. So if your query is something like “There’s something on my bill I don’t understand. Can you explain it to me?”, the AI agent will ask you a series of questions which help it zero in on you: your context, your customer status and relationship, your usage data and tariff details, and instances where this (or a similar) question has been asked before and how it was successfully resolved. And it has to do this faster and better than a human, so it can’t be asking what seem like dumb questions in an unnatural way.
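As a rough illustration of consultation rather than retrieval, here is a hypothetical Python sketch of that billing query being triaged. The customer record, candidate explanations and question wording are all invented; the point is that the agent only asks for what it doesn’t already know.

```python
# A hypothetical sketch of "consultation, not retrieval": the agent works through candidate
# explanations for a billing query, pulls answers from the customer record where it already
# has them, and only asks the customer about what is genuinely missing - so it never asks
# "dumb" questions. All fields, explanations and questions are invented.

CUSTOMER_RECORD = {"tariff": "unlimited-calls", "roamed_last_month": True}

CANDIDATE_EXPLANATIONS = [
    # (fact needed, value that confirms it, explanation to give)
    ("roamed_last_month", True, "The extra amount looks like a roaming charge from your trip abroad."),
    ("exceeded_data_cap", True, "The extra amount looks like an out-of-allowance data charge."),
]

QUESTIONS = {
    "exceeded_data_cap": "Did we text you about going over your data allowance? (yes/no)",
}

def explain_bill():
    facts = dict(CUSTOMER_RECORD)
    for fact, confirming_value, explanation in CANDIDATE_EXPLANATIONS:
        if fact not in facts:
            # Only ask when the answer isn't already on file.
            facts[fact] = input(QUESTIONS[fact] + " ").strip().lower() == "yes"
        if facts[fact] == confirming_value:
            return explanation
    return "Let me pass you to a colleague who can look into this with you."

if __name__ == "__main__":
    print(explain_bill())
```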
This is quite a challenge, but the financial benefits are just too huge to ignore. Many of the world’s biggest companies are looking at how AI could be woven into the customer experience. This raises a spectre from the past. In the 90s and 00s these same companies tackled the same problem by moving call-centres offshore. If we can’t automate the contact, they reasoned, we can at least lower the cost of human service by doing it in places where labour is cheap. Offshore call centres became almost totemic as a demonstration of how big companies put profit ahead of customers. So much so that lots of contact centres have been brought onshore again, with companies scoring a point in marketing by shouting about local customer service.
AI-driven customer service is certainly coming, but it will be a big test of big business’ commitment to customer-centricity. Will the application of AI make customer service interactions more difficult and more uncomfortable, and weaken bonds of loyalty and brand preference? How AI manifests in the customer’s world needs very careful consideration.
Three user experience challenges AI needs to overcome
AI will have to overcome challenges on these three fronts if it is to reach mass acceptance by customers:
Our mental model of how computers answer questions
Our expectations of interaction with computers have developed in the age of search. We make a query, we get an answer. Sometimes there’s a really good match between what we wanted to know and the answers we get. Sometimes there isn’t. We’ve learned that search is good for some types of query but bad for others. And for those kinds of questions, we often look for human help.
In the future, though, a question like “What’s the best mobile phone?” will be answered with a question (perhaps “What do you use your current phone to do?”). And in fact the path to a satisfactory answer will be a whole series of questions, with you providing the answers.
For AI to succeed in customer service there will have to be a shift in expectation: some computers don’t serve ‘dumb answers’ but ‘smart questions’. So what are the rules and etiquette of computers asking us questions and conversing with us?
What has already been learned about the use and acceptance of decision trees and wizards might be useful - in particular, how users form a view about whether the time investment in this kind of interaction is likely to yield a positive reward. If customers can sense or ‘smell’ a waste of time they will bail out. The earliest moments of interaction with AI are likely to be critical, and they probably need deliberate design.
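For what it’s worth, here is a toy Python wizard in that spirit: a tiny, invented decision tree for the mobile phone example above, with a hard cap on the number of questions so the interaction stops before the user decides their time is being wasted. The tree contents and the budget are assumptions, not anyone’s product.

```python
# A hypothetical wizard-style sketch of the "smart questions" pattern: a small decision
# tree with a deliberately low question budget, because users who sense their time is
# being wasted will bail out. Tree contents are invented.

DECISION_TREE = {
    "question": "What do you use your current phone for most? (photos/battery/price)",
    "answers": {
        "photos": "Look at flagship phones with the best cameras.",
        "battery": "Look at phones with 5,000 mAh batteries or larger.",
        "price": {
            "question": "Is your budget under £200? (yes/no)",
            "answers": {
                "yes": "Look at last year's mid-range models.",
                "no": "Look at current mid-range models.",
            },
        },
    },
}

MAX_QUESTIONS = 3  # respect the user's time; hand over rather than interrogate

def run_wizard(node=DECISION_TREE, asked=0):
    if isinstance(node, str):        # leaf: we have a recommendation
        return node
    if asked >= MAX_QUESTIONS:       # bail-out point: stop before the user does
        return "Let me show you our full range instead."
    answer = input(node["question"] + " ").strip().lower()
    next_node = node["answers"].get(answer)
    if next_node is None:
        return "Let me show you our full range instead."
    return run_wizard(next_node, asked + 1)

if __name__ == "__main__":
    print(run_wizard())
```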
Feel and authenticity
As well as requiring users to adjust to the new conceptual model of AI, the algorithmic models behind the technology’s data-driven responses must also be able to gauge the user’s emotional state so as not to alienate or offend.
Rainbird told us that many AI ventures are looking to the five-factor model as a way to gauge the emotional state of the human they are addressing. It describes personality along five broad traits. The ‘Big Five’ are:
Openness to experience
Conscientiousness
Extraversion
Agreeableness
Neuroticism
By programming a degree of emotional awareness into the system, the software will begin to identify and differentiate between data which are fixed and factual (e.g. name, date of birth) and those which are transitory, such as mood. The AI system can then draw inferences from all of the information it is gathering, assess the customer’s emotional state and personality type, and tailor its own style in handling the contact.
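A deliberately naive Python sketch of that separation is below: fixed facts kept apart from transitory signals, one crude score standing in for a Big Five dimension, and the score used to pick the agent’s tone. The word lists, thresholds and tones are invented; a real system would use a trained model rather than keyword counting.

```python
# A deliberately naive sketch: keep fixed facts separate from transitory signals such as
# mood, score one Big Five-style dimension from the customer's wording, and let that score
# pick the agent's tone. Word lists, thresholds and tones are invented for illustration.

FIXED_FACTS = {"name": "Sam", "date_of_birth": "1984-03-02"}   # stable, factual data

ANXIOUS_WORDS = {"worried", "angry", "frustrated", "urgent", "confused"}

def transient_signals(message: str) -> dict:
    """Estimate transitory state (a crude proxy for one trait) from the customer's message."""
    words = set(message.lower().split())
    return {"anxiety_score": len(words & ANXIOUS_WORDS)}

def choose_tone(signals: dict) -> str:
    """Tailor the agent's style to the inferred emotional state."""
    if signals["anxiety_score"] >= 2:
        return "calm, step-by-step, no upselling"
    if signals["anxiety_score"] == 1:
        return "reassuring and brief"
    return "neutral and efficient"

message = "I'm really worried and frustrated - my bill has doubled"
print(choose_tone(transient_signals(message)))   # -> "calm, step-by-step, no upselling"
```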
Again questions of convention and etiquette start to arise. It’s one thing that the system I’m talking to is trying to assess my level of neuroticism. It's quite another if I notice that it’s doing it. For the user experience of AI to be successful there will need to be a lot of experimentation around the tone and style of the AI agent.
Our suspicion is that successful early implementations will personify as a ‘dumb but well-meaning robot’ until users become familiar with the new interaction dynamic and chalk up some successes with AI agents.
User interface
There seem to be two big challenges here.
The first is about successfully building knowledge models. Rainbird were quick to point out that (pretty obviously) the tools for inputting human knowledge into the AI system have been developed by AI software engineers (and not people who know about tomatoes or savings accounts). If the cost and effort of building useful knowledge models is too great, AI will never get out of the starting blocks. So there’s a huge user experience challenge linked to developing systems which can be populated with knowledge without someone with a PhD in Machine Learning having to be in the room. This seems to hark back to the very earliest days of commercial computing, when only computer scientists could make or use computers. Some lessons from those times will need to be relearned. Usability skills will probably mark out the winners from the losers in the race to commercial success for AI software firms.
The second, more obvious, challenge is the interaction design between computer and human.
Screen-based interactions (e.g. webchat) are probably going to be easier. There’s already lots of learning about user experience for this kind of interaction - and clearer expectations from customers about what good looks like. For this reason it’s likely that the first mass-market implementations of AI will be in this space. But we can expect some ‘uncanny valley’ moments as we move from spotting whether a human or an AI is driving the interaction, to not spotting the difference, and finally to not caring.
Voice interface with AI seems harder and riskier. Customer expectations are set by the two extremes of warm, flexible human conversation and the stilted, robotic IVR we already encounter in call-centre queues. Natural language interfaces like Siri have already set some user expectations about how it will work and feel, but when the AI is running the conversation by posing questions it’s not certain that this interface style is the solution. Short conversations with Siri can be useful, long ones can be a drag.
So what happens next?
There’s simply too much money on the table for major brands not to start experimenting with AI-driven customer service in the near future.
History teaches us that bad early implementations – where the needs of the customer are considered secondary to the needs of the business – will slow the rate of adoption and acceptance. If poorly planned, poorly tested AI implementations are forced onto customers, they will avoid them. And annoying AI interactions will become a meme, just as offshore call centres did by the mid-00s.
To avoid that, AI services will need to be implemented with care and patience. Web-based services are probably the place to start. But it’s also possible that AI-assisted human customer service can be a gateway to full automation of the voice interface in the future. Let’s not be neurotic about it.