
Imagine entering a movie theater to watch the latest blockbuster. As soon as it's over, another movie immediately begins and hooks you from the start. Once that one is over, a voice-over directs you to the film playing one door down, where another intriguing movie is about to begin. Before you know it, you've spent most of the day – and a whole lot of money – in the theater, which has become a vast vortex of entertainment in overdrive with no clear exit route. This may sound like the stuff of a bad horror movie, but in fact it's happening today in the virtual world, where streaming services, e-commerce sites and others are vying for our attention.

Yet, while this scenario is quite real, we're more apt to hear the horror stories of AI falsely accusing innocent people of crimes, triggering health scares through incorrect diagnoses or denying a qualified person a loan based simply on their skin color or address. But there is a subtler yet just as certain danger when AI is left unregulated: its ability to manipulate people without their even knowing it.

The movie theater vortex scenario plays out online every day. When was the last time you opened a YouTube video that was supposed to last two minutes, only to look up at the clock and find that 30 minutes had passed because you were fed increasingly intriguing content? And even when you don't have a minute to spare, who wouldn't click on a headline declaring that aliens have invaded our airspace? The goal of social media platforms, streaming services and even political sites is to keep you there – whatever the cost.

While these tactics, driven by advanced AI, may seem innocent enough, they are essentially manipulating how we think, act and spend our time, controlling what we consume and directing our decisions, beliefs and actions.

How AI Creates Your Digital Persona and Enables the Spread of Misinformation

The problem (and the phenomenon) is that AI algorithms can understand us even better than we understand ourselves. They're able to do this because of the treasure trove of data we leave along our digital paths. Each time we stream a movie, run a Google search, visit a retail site or share on Instagram, we leave behind a piece of information about ourselves. The algorithms are trained to identify patterns in this information and to infer our interests, what motivates us and even what repulses us.
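To make this concrete, here is a minimal, hypothetical sketch of how a pile of interaction signals can be distilled into an interest profile and used to rank what gets shown next. The topic tags, weights and titles are all invented for illustration – no real platform works exactly this way – but the engagement-weighted feedback loop is the basic idea:

```python
# Toy sketch: turning interaction data into an "interest profile."
# All names and numbers are illustrative, not any platform's real system.
import numpy as np

# Pretend each piece of content is tagged along a few interest dimensions.
TOPICS = ["sports", "politics", "cooking", "conspiracy", "tech"]

# A user's history: (content vector, engagement weight) pairs, where the
# weight might reflect watch time, likes or shares.
history = [
    (np.array([0.0, 0.8, 0.0, 0.6, 0.0]), 5.0),  # long watch of a political clip
    (np.array([0.0, 0.9, 0.0, 0.8, 0.0]), 8.0),  # shared a sensational video
    (np.array([0.2, 0.0, 0.9, 0.0, 0.0]), 1.0),  # briefly viewed a recipe
]

# The "profile" is just an engagement-weighted average of what was consumed.
profile = sum(w * v for v, w in history) / sum(w for _, w in history)
for topic, weight in zip(TOPICS, profile):
    print(f"{topic:11s} {weight:.2f}")

def score(item: np.ndarray) -> float:
    """Cosine similarity: how closely an item matches the inferred profile."""
    return float(item @ profile / (np.linalg.norm(item) * np.linalg.norm(profile)))

candidates = {
    "local sports recap":     np.array([0.9, 0.1, 0.0, 0.0, 0.0]),
    "outrage-bait politics":  np.array([0.0, 0.9, 0.0, 0.9, 0.0]),
    "weeknight pasta recipe": np.array([0.0, 0.0, 1.0, 0.0, 0.0]),
}

# Ranking by similarity naturally pushes more of whatever already hooked the user.
for title, vec in sorted(candidates.items(), key=lambda kv: -score(kv[1])):
    print(f"{score(vec):.2f}  {title}")
```

Notice the loop this creates: whatever the user engaged with most dominates the profile, so similar content gets ranked higher, which in turn generates more engagement data of the same kind.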

This all may seem harmless enough, but when certain content is pushed to us without our asking for it, it's a form of manipulation. It can also be a breeding ground for one-sided "news" or viewpoints that can lead to extremism, misinformation and fear tactics.

It also presents major challenges to tech firms, governments and researchers concerned with the growth of this type of online misinformation. In fact, in a study cited in a Brookings Institution TechStream article, scholars at the University of Washington "searched nearly 4 dozen terms related to vaccine misinformation on Amazon. They found 36,000 results and more than 16,000 recommendations. Of these search results, 10.47% (nearly 5,000 unique products) contained misinformation."


The Role of Tech Leaders

Social media and tech players such as Facebook, Twitter, Apple and others didn't set out to enable the spread of misinformation, discord or manipulation. Their platforms were designed to build community and connection. Yet make no mistake: monetizing those platforms on the backs of user data has always been part of the strategy. The more views and the stickier the site, the greater the ad revenue. The job of the AI algorithms is to maximize those views.
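To see why an objective of "maximize views" shapes what gets surfaced, consider this deliberately simplified sketch. The titles and probabilities are invented; the point is only that a system optimizing expected watch time, with nothing else in its objective, rewards whatever keeps people watching:

```python
# Hypothetical illustration: a ranker rewarded for predicted engagement
# (clicks, watch time), not accuracy, balance or wellbeing. Data is invented.

videos = [
    # (title, predicted click probability, predicted minutes watched)
    ("calm 2-minute explainer",     0.30,  2.0),
    ("aliens-invaded clickbait",    0.70, 12.0),
    ("part 4 of an outrage series", 0.55, 25.0),
]

def expected_watch_time(click_prob: float, minutes: float) -> float:
    """The quantity an engagement-maximizing system optimizes for."""
    return click_prob * minutes

# Sorting by expected watch time surfaces clickbait and outrage ahead of the
# short explainer, even if the explainer is what the user actually wanted.
for title, p, m in sorted(videos, key=lambda v: -expected_watch_time(v[1], v[2])):
    print(f"{expected_watch_time(p, m):5.2f} expected min  ->  {title}")
```

Nothing in that objective asks whether the content is true or good for the viewer; it only asks what will hold attention longest.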

Some tech firms are beginning to get it, however. In March 2021, Google announced that it would stop selling ads based on individuals' web browsing and stop using technology that tracks users across websites. Apple, for its part, plans to limit the tracking of mobile app usage. Unfortunately, the initiatives of tech firms and the private sector are not enough. The federal government needs to take a stronger stance.

Federal Regulation of Manipulative AI

The private sector had the opportunity to do right by users when it monetized their data, and it chose not to take it. Now it's up to the federal government to make sure that manipulative AI practices are thwarted.

There is a lesson in the establishment of antitrust laws: the government recognized the dangers of certain companies dominating and controlling markets, so it put laws in place to enable free competition and protect consumers from predatory business practices. The same needs to happen with AI-driven online data collection.

To prevent manipulative practices online, the federal government needs to set strict regulations around certain uses of AI. Features such as autoplay, which seamlessly starts a new movie the moment the one you were watching ends, should be banned; there really is no need for the technology other than as a marketing ploy to keep you in your seat. And when it comes to children, companies that use deceptive practices to keep them on a site should be penalized.

As a baseline measure, there should be a federal requirement that every site collecting user information display a disclaimer. While that is merely a band-aid, it's a start that may make people think twice about the sites they visit and the information they share. California already has such a law, but it applies only at the state level.

Business leaders and government need to come together and recognize that the subtle danger of AI tracking our every move online, feeding us the information marketers want us to have and steering us where they want us to go is very real and intolerable. While AI provides real value in many areas, including e-commerce, there is no place for manipulative practices that can cause great harm in the subtlest of ways.

The use of AI to manipulate human behavior is a complex issue that raises important ethical concerns. While AI has the potential to bring many benefits, it is important to approach its development and use with caution and awareness of the potential risks. By developing ethical guidelines and standards, increasing transparency around AI algorithms, and continuing to research and test their accuracy and fairness, we can work towards a future where AI is used to empower and enhance human capabilities, rather than control or manipulate them.

