This article originally appeared in DMN News.
If the future were perfect, every digital marketer would install an AI system, flip a switch, and let it operate on auto-pilot. It would manage every step in the customer journey, from the first website visit to the final mouse click to buy.
Well, we’re living in that future, and it ain’t perfect. AI is just starting to progress from buzzword to mainstream. Consultants and practitioners agree that you need to keep a human in the loop. The question is where in the loop to place the human: the beginning, the middle, the end, or throughout?
The human needs to touch the system
Salesforce’s Einstein has been around since 2016, focused on delivering AI for customer relationship management, offering predictive and recommended advice to its human users who want to convert prospects into customers. Users “are coming to us with a business problem,” said Allison Witherspoon, director of product marketing for Salesforce Einstein. They need to augment their decision making, using tools like lead scoring and engagement scoring to rate prospects. Only that scoring work is automated by the AI, which can be adjusted to suit the user’s needs. Witherspoon referred to this as “augmented” rather than “artificial” intelligence.
So where is the human? At the end, as the user, but also in front, as the customer. The AI is “watching the customer interact” — say, with an e-mail message, explained Meghann York, director of product marketing at Salesforce. Customers provide input into Einstein as they interact with web pages and pitched messages, which the system analyzes. This should yield a recommendation for the marketer to take an action and, more importantly, tell the user why a particular recommendation is best, Witherspoon added.
“Humans are a bit unpredictable,” York added. “The model will learn and follow it.”
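The idea of a recommendation that comes with its reasons can be sketched in a few lines. This is a hypothetical illustration, not Einstein's actual model: the feature names and weights are invented, and a real system would learn them from conversion data.

```python
# Minimal sketch of "augmented" lead scoring: the model scores a lead
# and also reports which signals drove the score, so the human user
# sees *why* the recommendation was made. Weights are hypothetical.
import math

WEIGHTS = {  # assumed signal weights learned from past conversions
    "email_opens": 0.8,
    "pages_visited": 0.5,
    "days_since_last_visit": -0.3,
}

def score_lead(features):
    """Return a 0-1 conversion score plus the signals behind it,
    ordered by how strongly each one influenced the score."""
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items()}
    raw = sum(contributions.values())
    score = 1 / (1 + math.exp(-raw))  # squash to a probability-like score
    reasons = sorted(contributions,
                     key=lambda k: abs(contributions[k]), reverse=True)
    return score, reasons

score, reasons = score_lead({"email_opens": 4, "pages_visited": 6,
                             "days_since_last_visit": 10})
print(f"score={score:.2f}, driven mostly by: {reasons[0]}")
```

The point of returning `reasons` alongside the score is exactly the transparency Witherspoon describes: the human decides whether the explanation makes sense before acting on it.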
The human needs to tap the system
AI depends on machine learning, but there comes a point where the system “tapers off” for lack of new data to learn from. Cosmas Wong is founder and CEO at Grey Jean Technologies: “Once it tapers off, you see if there is a need for it to be returned [to its functioning state],” he said. “You look at it once every two weeks,” he said, tracking how the system continues to learn.
Sometimes errors occur for lack of data. Wong offered one example of an AI-powered affinity engine that would pick articles for people to read, based on reader profiles and past preferences. The system tried to approximate a selection by picking an article for a person whose profile most closely resembled that of another person. “It looped back to itself,” he said. The solution was to fix the algorithm to filter out the false choice. “A human being has to do it.”
“AI is intelligent, but it is still going to be artificial,” Wong pointed out. A human must always be kept in the AI loop “to determine if the output is what we want.”
The human needs to teach the system
An AI system is only as good as the data you use to teach it. “People often overlook this step,” noted Marius Kierski, CEO for Sigmoidal, an AI consulting firm. Even when an AI system starts out with a good, curated data set, it will acquire more uncurated data as it “learns”, but that extra data dilutes the quality of that starter set, leading to “catastrophic forgetting,” Kierski explained. “If you just let it go, you will lose the added value of a person looking at the data.”
Which leads to another concept: confidence. Once AI is trained on a data set, how confident will that system be that it is making the correct decision? The system should flag a decision where it is unsure and “alert the human to help make the decision,” he said. “The system refuses to make a decision where its confidence is low.”
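That refusal behavior is simple to sketch. The threshold value and the action names below are illustrative assumptions, not anything Sigmoidal has published:

```python
# Minimal sketch of confidence gating as Kierski describes it: the
# system acts on its own only when confident, and otherwise defers
# to a human. Threshold and actions are illustrative assumptions.
CONFIDENCE_THRESHOLD = 0.85

def decide(probabilities):
    """probabilities: dict mapping each candidate action to the model's
    estimated probability. Returns (action, needs_human_review)."""
    action = max(probabilities, key=probabilities.get)
    confidence = probabilities[action]
    if confidence < CONFIDENCE_THRESHOLD:
        # Refuse to decide autonomously; flag for human review.
        return action, True
    return action, False

print(decide({"send_offer": 0.92, "wait": 0.08}))  # confident -> automate
print(decide({"send_offer": 0.55, "wait": 0.45}))  # unsure -> escalate
```

The design choice here is that a low-confidence decision is not discarded; it is surfaced with the model's best guess attached, so the human starts from something rather than nothing.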
The human needs to doubt the system
AI learns from data compiled by humans. So won’t the data be as flawed as the people who compile it?
“Artificial intelligence is a tool…It is not intended to stand on its own,” said Risto Miikkulainen, VP for research at Sentient Technologies and professor of computer science at the University of Texas at Austin. “The data determines what the behavior will be. The big challenge is that sometimes there are hidden biases in the data you don’t want there,” he said.
“You don’t want the AI to propagate a bias…[but] if you are conscious about it, you can fix it,” Miikkulainen added. Humans can mitigate instances of bias once detected, to avoid offending users, or worse, customers. The outcome should be a system that changes over time with adjustments. The commercial example Miikkulainen gave is a website that self-adjusts to suit the users accessing it. That, of course, is a personalization approach — one that would hopefully increase the likelihood of conversions or other desired outcomes.
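Being “conscious about it” in practice means measuring. One simple check — a sketch under assumed group labels and an assumed tolerance, not a complete fairness audit — is to compare how often the system makes a positive recommendation for each user group and flag large gaps for human review:

```python
# Minimal sketch of the bias check Miikkulainen's point suggests:
# compare the model's positive-recommendation rate across user groups
# and flag big gaps for a human. Groups and tolerance are assumptions.
def selection_rates(decisions):
    """decisions: list of (group, recommended: bool) pairs."""
    totals, positives = {}, {}
    for group, recommended in decisions:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + int(recommended)
    return {g: positives[g] / totals[g] for g in totals}

def flag_bias(decisions, tolerance=0.2):
    """True if the gap between the most- and least-favored group
    exceeds the tolerance -- a cue for a human to investigate."""
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values()) > tolerance

sample = [("A", True)] * 8 + [("A", False)] * 2 + \
         [("B", True)] * 3 + [("B", False)] * 7
print(flag_bias(sample))  # rates 0.8 vs 0.3 -> gap 0.5 -> True
```

A flag like this does not fix the bias — per the article, that remains the human's job — but it turns a hidden bias into a visible one.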
The human needs to “be” the system
So where in the loop do you put the human? Everywhere.
“[I]t is always prudent to set up an ongoing performance-monitoring system that shows how predictive performance metrics are evolving over time,” said Prasad Chalasani, Chief Scientist at MediaMath. “Unusual changes in these metrics do require a human to intervene and see whether there are any data anomalies, or unforeseen edge cases. Depending on the domain, such human intervention can occur daily or weekly.”
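The kind of monitor Chalasani describes can be as simple as comparing the latest metric value against a rolling baseline. The window size, cutoff, and metric below are illustrative assumptions, not MediaMath's actual setup:

```python
# Minimal sketch of ongoing performance monitoring: keep a rolling
# baseline of a predictive metric and alert a human when the latest
# value drifts unusually far from it. Window and cutoff are assumed.
import statistics

def needs_intervention(history, latest, window=14, z_cutoff=3.0):
    """history: recent daily metric values (e.g. click-through rate).
    Returns True when `latest` is an unusual change vs. the baseline."""
    recent = history[-window:]
    mean = statistics.mean(recent)
    stdev = statistics.pstdev(recent) or 1e-9  # avoid divide-by-zero
    z = abs(latest - mean) / stdev
    return z > z_cutoff

history = [0.031, 0.029, 0.030, 0.032, 0.030, 0.031, 0.029]
print(needs_intervention(history, 0.030))  # typical day -> False
print(needs_intervention(history, 0.010))  # sharp drop  -> True
```

Note that the monitor only decides *whether* to summon a human; diagnosing the root cause — data anomaly, edge case, or something else — is left to the person, exactly as Chalasani's daily-or-weekly cadence implies.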
A true AI system learns as it goes, self-adjusting as it receives new data. That also raises the risk that data and operations can interact in unexpected ways, producing unexpected results. “Usually it is easy to figure out why, but occasionally figuring out the root cause can take a day or more,” Chalasani said.
Thankfully, online marketing is a realm where AI mistakes can be obnoxious, but not fatal. “In some areas such as speech recognition, we are already seeing nearly fully autonomous AI,” Chalasani said. “In other areas, especially where life-and-death decisions are involved (e.g. in military or medical domains), it is doubtful we can completely eliminate humans from the loop.”