AI-based predictive analytics is gaining steam across industries, helping insurance firms predict customer churn, retailers anticipate customer preferences, and physicians identify patients likely to make a return visit. It is becoming quite popular, helping firms root out problems before they even occur. The problem is that prescriptive AI has not quite kept pace with predictive AI: knowing what the problem is can be a far cry from knowing how to solve it.
I recently explored this topic in an article I wrote for the Forbes Technology Council. In it, I examined the differences between predictive and prescriptive analytics, and how prescriptive AI can actually pose an ethical dilemma when left unchecked.
Prescriptive analytics can be tricky because it can be used to manipulate human behavior. While truly effective prescriptive systems are still a ways off, social media companies such as Facebook have been researching and experimenting with ways to do just that. About 10 years ago, Facebook conducted an experiment to see how it might influence human emotions and behavior: the site populated half of users' news feeds with positive information and the other half with negative information. After a week, individuals whose news feeds had the positive slant posted more positive comments, while those with the negative slant did the opposite.
While prescriptive analytics may be attractive to marketers, the industry should beware of deceptive techniques that manipulate behavior, much like subliminal advertising, a practice that prompted an FCC crackdown.
Despite the potential for prescriptive AI to be put to less-than-beneficial uses, it will certainly play a role for good, helping organizations act on the data-driven intelligence they receive through predictive AI. Not only do the two forms of AI need to work together, but they will also require humans in the loop to provide sound reasoning, judgment and oversight, a need that will never go away.