Taking Bias out of AI Begins with Diversity

In a recent article I wrote for the Forbes Technology Council, I discussed the growing need to build diversity into AI algorithms in order to prevent biased decision-making. The subject is drawing intense attention from businesses, industry pundits and the media alike, and stories about the consequences of biased AI abound.

As I mentioned in the article, as more organizations rely on algorithms to help with decision-making, we have a responsibility to ensure that we are not programming bias into our AI systems. A recent report released by the AI Now Institute at New York University found pervasive bias in the AI industry, which is predominantly composed of white males. A major concern is that the bias that has crept into so many of our policies and practices in hiring, education and mortgage lending is being programmed into AI applications.

Some of this bias can happen when the data used in AI comes from a homogeneous group. Take, for example, the pharma industry. In the past, pharma companies often tested drugs only on male subjects, operating under the mistaken assumption that the results would apply to women as well. What they learned was that these studies were of limited value for women, because women metabolize drugs differently than men. As a result, eight FDA-approved drugs had to be taken off the market over a four-year span in the late 1990s and early 2000s because, while they had not caused serious issues for men, they posed "unacceptable health risks to women."

The realization that diverse teams achieve better outcomes than homogeneous ones is really hitting home. A McKinsey study found that public companies in the top quartile for ethnic and racial diversity were 33 percent more likely than their peers to achieve above-average returns. Whether it's pricing stocks or determining guilt or innocence in a trial, a diverse group is more likely to examine the facts, remain objective and reach more accurate conclusions.

The AI ecosystem is no different from the real world: diversity is the wellspring of well-functioning algorithms, and it requires both diverse datasets and diverse teams of data scientists to create realistic solutions.

So how do you build this diversity into AI development? At Wovenware, diversity is a central focus. It's not enough to have highly trained data scientists and data engineers; it's just as important to have a staff with a variety of backgrounds, experiences and perspectives to teach AI algorithms the nuances they need to learn and predict more accurately.
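For data science teams, one practical way to act on this is to measure who is actually represented in the training data before a model is built. The sketch below is a minimal illustration in Python using pandas; the column names ("gender", "approved") and the loan-approval framing are hypothetical examples, not drawn from any specific Wovenware project.

import pandas as pd

def audit_representation(df, group_col, label_col):
    """Report each group's share of the dataset and its positive-outcome rate."""
    return df.groupby(group_col).agg(
        share=(label_col, lambda s: len(s) / len(df)),
        positive_rate=(label_col, "mean"),
    )

# Hypothetical loan-approval training data, for illustration only
data = pd.DataFrame({
    "gender":   ["M", "M", "M", "M", "F", "F"],
    "approved": [1, 1, 0, 1, 0, 0],
})

print(audit_representation(data, "gender", "approved"))
# Large gaps in "share" or "positive_rate" across groups are a signal to
# collect more representative data or rebalance it before training a model.

A check like this doesn't remove bias by itself, but it makes gaps visible early, when a diverse team can still decide how to close them.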

We can't allow the discrimination and biases of the real world to be mirrored in AI; instead, we must train our AI solutions to operate on a diverse and more level playing field.
