Recently, I came across a post on the Boarding Area blog detailing a negative customer experience while on an American Airlines business class flight from New York to LA. The issue? All the power outlets on the plane were broken, preventing the writer from working during the six-hour flight. He sent an email complaining to customer support, and got a reply less than 20 minutes later—lightning fast for the industry.
Yet, the speed of the email reply didn’t solve the customer’s problem. Instead, the email failed to appropriately address the complaint and only added to the customer’s frustration.
It was immediately clear to me that the off-the-mark email response was either a) a poorly personalized email template or b) a mismanaged auto-response.
Had American Airlines been using machine learning applications, they could have easily prevented this snafu and saved this otherwise loyal customer from an unfortunate experience.
While we may never know exactly what happened at American Airlines, we can make educated guesses as to what likely went wrong. And, based on our experience with intelligent automation within the context of support, we can pose thoughtful solutions to help others prevent problems like these in the future.
What went wrong: Limited templates in the response library
It’s possible that the customer service agent selected the best available macro—that is, a command which automatically triggers a set of instructions to perform a specific task—and that a more appropriate response simply didn’t exist in American Airlines’ template library.
Solution: Broaden the response library
If you’re using email templates or macros to help assist your customer service agents, ensure that there’s a broad variety of content to appropriately address all potential incoming customer issues. Otherwise, you run the risk of sending a response that fails to address the customer’s problem, thereby creating more frustration and less trust for the future.
Creating content can be a tedious manual task, but intelligent response analysis tools can help uncover gaps in your template library. These tools use machine learning to compare agent responses to existing templates, identify gaps in the template library, and suggest new templates to add.
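To make the idea concrete, here is a minimal sketch of how such a gap analysis might work. It uses a simple bag-of-words cosine similarity in place of a production model, and the threshold value is an illustrative assumption, not any vendor’s actual method:

```python
from collections import Counter
import math

def bag(text: str) -> Counter:
    """Tokenize a reply into a bag-of-words (lowercase, whitespace split)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bags of words."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def find_gaps(agent_replies, templates, threshold=0.5):
    """Flag hand-written replies that don't resemble any existing template.

    A reply whose best template match scores below the threshold is a
    candidate for a brand-new template in the library.
    """
    gaps = []
    for reply in agent_replies:
        best = max((cosine(bag(reply), bag(t)) for t in templates), default=0.0)
        if best < threshold:
            gaps.append(reply)
    return gaps
```

A reply about broken power outlets, for example, would score low against a library that only covers delays and vouchers, surfacing it as a template gap.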
What went wrong: Agent errors
Another potential scenario is that an agent chose the incorrect macro for this customer’s problem, unaware that a better option for that particular issue already existed.
Solution: Implement thorough agent training
If the correct content exists, the issue shifts to proper training of support agents as to where to find solutions in the library of options. Of course, if you have many macros—even a few hundred, let alone a thousand or more—then proper macro training rapidly becomes challenging.
Download our Pinterest Case Study to learn how Pinterest achieved 87% first-touch close rates and 250% improvement on CSAT.
Additionally, it’s important that the agent takes the time to properly review the customer’s inquiry and modify an existing template appropriately. If the agent was rushed, they may have fired off the response without taking the time to check if it truly addressed the customer’s issue.
Solution: Machine learning tools
Machine learning analyzes closed tickets, reading the language and customer data patterns, in order to choose the most appropriate templates for new tickets going forward. When a new ticket comes in, the application reads it and presents the agent with the top recommended templates to apply, saving the agent time, ensuring consistent brand messaging, and reducing the chance of error.
This benefits agents in three key ways:
- Speeds up the training process by helping familiarize agents with the available responses and assisting with making the best selection.
- Provides more time for agents to personalize the responses before sending them to customers, cutting down the time they have to spend searching for the right template.
- Drives greater consistency and efficiency across the entire support team, because the tool continues to learn over time and improves its recommendations with every new ticket.
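As a rough sketch of the recommendation step described above, the snippet below assumes a simple nearest-neighbor approach over closed tickets; a real product would use a trained model, but the workflow is the same: score templates against the new ticket and surface the top matches to the agent.

```python
from collections import Counter
import math

def tokens(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recommend(new_ticket, closed_tickets, top_k=3):
    """Suggest templates for a new ticket from (ticket_text, template_id) pairs.

    Closed tickets most similar to the new ticket vote for the templates
    that resolved them; duplicates are collapsed, best match first.
    """
    query = tokens(new_ticket)
    ranked = sorted(closed_tickets,
                    key=lambda pair: cosine(query, tokens(pair[0])),
                    reverse=True)
    seen, recs = set(), []
    for _text, template_id in ranked:
        if template_id not in seen:
            seen.add(template_id)
            recs.append(template_id)
        if len(recs) == top_k:
            break
    return recs
```

Because every newly closed ticket becomes another (ticket, template) pair, the recommendations improve as the support team works, which is the “learns with every new ticket” behavior described above.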
What went wrong: A mismanaged auto-response
More than likely, this corporate mistake was the result of an auto-generated response triggered by a poorly established, keyword-based manual business rule. This is the old-school way of sending automated responses, and unfortunately, it tends to create a negative customer experience. Keyword-based systems are complicated to set up, difficult to maintain, and tough to track. Plus, they are often inaccurate.
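To illustrate the brittleness, here is a hypothetical keyword rule engine (the rules and canned replies are invented for this example, not American Airlines’ actual system). The first keyword that happens to appear in the message wins, regardless of what the customer is actually complaining about:

```python
# Hypothetical keyword-based business rules: (keyword, canned reply),
# checked in order; first match wins.
RULES = [
    ("delay", "We're sorry your flight was delayed. Here is a travel voucher."),
    ("refund", "Refund requests can take 7-10 business days to process."),
]

DEFAULT = "Thank you for contacting us. An agent will follow up shortly."

def auto_respond(message: str) -> str:
    """Return the first canned reply whose keyword appears in the message."""
    text = message.lower()
    for keyword, reply in RULES:
        if keyword in text:
            return reply
    return DEFAULT
```

A complaint like “The power outlets were broken the whole flight; please refund my upgrade fee” trips the generic refund rule and never addresses the broken outlets, which is exactly the kind of off-the-mark reply the customer in this story received.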
While sending a quick reply that acknowledges the customer’s problem and contains the information needed to fix it is a great experience, sending a quick reply that totally misses the mark is not. Great customer service is not only about speed; the service engagement must also resolve a customer’s problem.
Solution: AI-powered auto-responses
Artificial intelligence-powered auto-response systems, which learn from successful, historical agent-customer interactions, are more accurate, controllable, and transparent. The result is reliable performance that maximizes customer satisfaction while minimizing the danger of damaging your customer relationships—all driven by machine learning applications.
Want to learn more about how you can leverage machine learning to deliver a superior customer experience? Download our Case Study.