Artificial Intelligence (AI), and the promise of intelligent software, was all over the news in 2019. From detecting fraud and identifying medical conditions to fighting climate change, much has been reported on the subject. We now know that AI technology is mature enough to help us solve real problems in business, healthcare, government and the environment. New tools from the likes of IBM, Google, Microsoft and others are launched daily, and faster, more powerful GPU processing is in overdrive. So the technology is advancing, but what we really need to figure out in 2020 is what exactly to do with it all.
As we look out over the horizon to 2020, the new year will be marked by a focus on bringing AI from theory to practice among companies of all sizes.
Consider the following five key trends shaping its role in the new year:
- AI adoption is slower than the hype. While articles on AI seem to be everywhere, adoption is still in its early stages. The author of a CIO Journal article expects AI usage to follow an “S” curve – slow at the beginning, followed by a steep rise in adoption. A key factor limiting growth right now is that companies just aren’t sure how to use it. They need to be more targeted and focused in how they define the business problems they want to solve and how they will approach them. Additionally, since algorithms depend on good-quality data to deliver accurate results, companies need to get their data sources in order. They need to find out where key data is located in their organizations, get it out of silos, and clean it up – critical requirements before implementing an AI program.
- Innovation sprints increase. Many companies are reluctant to pull the trigger on major AI implementations. They are trying to figure out the value they can expect, since these programs can cost hundreds of thousands of dollars and take time before results are realized. To “kick the AI tires” at a fraction of the cost of a full-blown AI investment, companies will increasingly turn to AI Innovation Sprint projects. This trial approach provides an incremental, faster and more cost-effective way not only to test the accuracy of an algorithm in solving a business challenge, but also to determine whether a business problem is even a good candidate for AI at all.
- Hyper-personalization is on the rise. Companies are using AI to determine a customer’s interests, likes, dislikes and so on, so they can offer more relevant content, products and services. Netflix suggests movies based on our prior choices, Amazon serves up product suggestions based on what others with similar interests selected, and so on. While narrowing down our choices can be convenient, it becomes problematic when we are only served information that matches our profiles. This hyper-personalization practice, which produces what some call filter bubbles, has led to news silos where, for example, conservative-leaning people see only one perspective on news and information, and liberals see only another. It is not only misleading, since no one is getting the complete story, but it also heightens polarization. Unfortunately, hyper-personalization is only expected to increase in 2020.
- AI will both enable and destroy deepfakes. Created by applying deep learning to huge amounts of data, deepfakes have begun tricking the world. These fake versions of a face, a voice or a full body, used in video, audio or other media, have included a falsified photo of President Trump pinning a medal on what appears to be a military dog. In another example, a doctored video was made to look as if CNN reporter Jim Acosta was pushing a White House intern. As we move toward the election, deepfakes are expected to multiply in the political realm. Yet, interestingly, the same AI technology that enables deepfakes will also be used to combat them. Tech giants such as Google, Microsoft and Facebook, which are under pressure to help manage deepfakes, are working with major universities to build huge databases of deepfake examples that can be used to train AI programs to detect these imposters.
- Regulation is coming. Between the Cambridge Analytica scandal, which exposed how Facebook users’ data was harvested around the 2016 election, security concerns over TikTok, and the rise of deepfakes, the need for regulation is clear. In 2020, there will be more legislation regulating data privacy, as well as requiring disclosure of manipulated data and images. This is already starting to happen: this year, California passed legislation making it illegal to distribute deepfakes of political candidates within 60 days of an election, and Democratic Rep. Adam Schiff of California suggested holding social media companies liable for content that appears on their platforms. Be on the lookout for more regulation in 2020.
AI advances have gotten ahead of our ability to manage them, yet 2020 will be the year we stop chasing shiny objects and finally start to figure it all out. Through strategic and pragmatic implementation, and careful regulation, we stand at the threshold of a new generation in which AI is ready to fulfill its promise.