
Regulating Artificial Intelligence Projects

Recently, the White House issued a set of guiding principles on how federal agencies should regulate Artificial Intelligence (AI) in the private sector, while taking care not to stifle innovation. While this is a welcome step in the right direction, it’s clear that more needs to be done. The guiding principles were just that; the White House did not lay out any concrete steps to address the issues at hand. We’re already late to the party when it comes to ensuring that AI is managed ethically, truthfully and transparently.

Unfortunately, we’ve all become familiar with the challenges of unregulated AI. Take, for example, the news about election interference resulting from Cambridge Analytica’s use of Facebook data to manipulate potential voters. More recently, we’ve seen how deepfakes – images or videos doctored to make it look like someone is saying or doing something they are not – can trick people into believing falsehoods. Deepfakes can have dangerous repercussions, and some states have begun introducing legislation to regulate them. California, for example, has made it illegal to disseminate deepfakes of candidates in the run-up to an election.

There are also concerns about biases being programmed into AI algorithms – purposefully or inadvertently – in ways that can negatively impact people’s lives, such as deciding who gets a mortgage or loan. It’s clear that we need carefully thought-out legislation to protect everyone.

Here are five critical ways that AI can and should be regulated – not only by government, but by enterprises as well:

  • Focus on big tech. The reality is that firms like Facebook, Google and other big tech companies have a monopoly on people’s data. Unbridled, unregulated access to that data can wreak havoc, as Cambridge Analytica’s actions – which affected approximately 87 million people – demonstrated. Yet it’s a balancing act, because too much regulation could stifle competition and continued growth. How can small start-ups, which traditionally disrupt the market with innovation, hope to compete against these tech giants without better regulation?
  • Address privacy concerns. First and foremost, people must be protected. They have a right to their privacy and to own their own data, and they should be in charge of who can access it. Organizations must be transparent about what they are doing, request permission to use a person’s data, and give individuals the option to opt in rather than having to opt out. The EU’s General Data Protection Regulation (GDPR) is an example of how governments are codifying individuals’ rights to privacy and data protection.
  • Anonymize most data. It’s reasonable that companies want to use data to understand customer trends for marketing, product development and other purposes, but in these cases and many others they should anonymize the data at the source, so it is not tied to any particular individual (see the anonymization sketch after this list). Similarly, companies must have best practices in place to ensure that sensitive data and personally identifiable information – like Social Security numbers and cellphone numbers – are handled securely.
  • Put anti-bias practices in place. Organizations should be held accountable for following best practices that reduce bias. First, they should have a diverse team of data scientists and data engineers who reflect the population they serve, so they are not inadvertently programming smart apps according to their own biases. Another key consideration is understanding where the data comes from: if an algorithm is built on data from a homogeneous population, it will not accurately or adequately reflect – or be sensitive to – a diverse population. Natural Language Processing (NLP)-based AI programs should be written in the language of the end users, by native speakers. Additionally, data scientists should purposely and continually test their algorithms for bias and make adjustments as needed (see the bias-check sketch after this list).
  • Be transparent and ethical. People need to know where organizations are getting their data and how they are determining outcomes and making decisions, in order to feel confident that those decisions are fair. For example, if a mortgage app decides who qualifies for a loan based on predictions about who will default, applicants should have the opportunity to see how that determination was made (see the decision-explanation sketch after this list). In addition, organizations must allow people to question predictions and, based on what is uncovered, revise the algorithm accordingly.
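
To make “anonymize at the source” concrete, here is a minimal Python sketch of one common approach: dropping direct identifiers and replacing the record key with a keyed hash (which is, strictly speaking, pseudonymization rather than full anonymization). The field names, SECRET_KEY and helper functions are hypothetical, for illustration only.

```python
import hashlib
import hmac

# Hypothetical secret held by the data owner; managed separately, never
# stored alongside the data it protects.
SECRET_KEY = b"replace-with-a-managed-secret"

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a keyed hash so records can still
    be linked for analytics without exposing the raw value."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

def anonymize_record(record: dict) -> dict:
    """Drop direct identifiers and pseudonymize the join key at the source."""
    direct_pii = {"name", "ssn", "cellphone", "email"}
    cleaned = {k: v for k, v in record.items() if k not in direct_pii}
    cleaned["customer_id"] = pseudonymize(record["customer_id"])
    return cleaned

record = {
    "customer_id": "C-1042",
    "name": "Jane Doe",
    "ssn": "000-12-3456",
    "cellphone": "555-0100",
    "zip3": "009",               # coarsened location instead of a full address
    "purchases_last_90d": 7,
}
print(anonymize_record(record))  # direct identifiers removed, key hashed
```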
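
Similarly, continual bias testing can start with something as simple as comparing outcomes across demographic groups. The sketch below computes per-group approval rates and the ratio of the lowest to the highest (a basic disparate-impact check); the sample data, group labels and the 0.8 rule of thumb are illustrative assumptions, not a complete fairness audit.

```python
from collections import defaultdict

def approval_rates(decisions):
    """Approval rate per group; decisions is a list of (group, approved) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest to the highest group approval rate; a common
    rule of thumb flags ratios below 0.8 for investigation."""
    return min(rates.values()) / max(rates.values())

decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
rates = approval_rates(decisions)
print(rates)                    # group A ~ 0.67, group B ~ 0.33
print(disparate_impact(rates))  # 0.5 -> well below 0.8, worth investigating
```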
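
Finally, here is one sketch of what decision transparency could look like for the mortgage example: exposing each feature’s contribution to the score so an applicant can see why they were approved or declined. The features, weights and threshold are invented for illustration; a real system would rely on explanation tooling appropriate to its model rather than this toy linear scorer.

```python
# Hypothetical linear scoring model: weight * feature value, summed.
WEIGHTS = {"debt_to_income": -3.0, "late_payments": -1.5, "years_employed": 0.4}
THRESHOLD = -2.0  # hypothetical approval cutoff

def explain_decision(applicant: dict):
    """Return the decision plus each feature's contribution to the score."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    return score >= THRESHOLD, score, contributions

approved, score, why = explain_decision(
    {"debt_to_income": 0.45, "late_payments": 2, "years_employed": 3}
)
print("approved:", approved, "score:", round(score, 2))
for feature, impact in sorted(why.items(), key=lambda kv: kv[1]):
    print(f"  {feature}: {impact:+.2f}")
```

Surfacing the per-feature contributions alongside the decision gives applicants a concrete basis on which to question a prediction, which ties directly back to the point above.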

Not all types of AI, however, need to be monitored or regulated – just those that impact people. For example, technology that monitors crops for optimal growing conditions, or that predicts which devices being manufactured might have defects, does not need to be regulated. The challenge will be in deciding where to draw the line.

We’re at a critical juncture at which we need to tame the AI wild west and its impact on people. We’ve seen how valuable AI can be in helping to diagnose diseases, improve the environment and more. Yet at the same time, we’ve also seen the downsides and potential abuses that can arise. We need intelligent, thoughtful legislation in areas such as regulating big tech’s use of data, and ensuring transparency and ethical behavior. We must also protect personal data and privacy and do whatever we can to avoid bias. We need to implement regulations to create a better, brighter future with AI, and we need to act now.
