In a recent Forbes blog post, I posed the question: Is there such a thing as a prejudiced AI algorithm? It’s a topic that has been getting a lot of attention lately from the media, as well as from tech leaders, as complex AI solutions take on a life of their own and make decisions for us.
But what continues to make AI tick is the data fed to it, and that data is often delivered by real-life people who may bring their own preconceived opinions, experiences and even biases. As I outline in the blog post, the only way to ensure that an algorithm is trained to make fair and unbiased decisions is to make sure there is a broad range of perspectives and experiences among the people who will train it. Diversity is key to the data science team.
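To make the point concrete: one practical way a team can catch bias before it reaches a model is to audit the labeled training data for imbalances across groups. The sketch below is illustrative only; the group names, labels and records are hypothetical, not from the post:

```python
from collections import defaultdict

def positive_rate_by_group(records):
    """Fraction of positive labels per group in labeled training data."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for group, label in records:
        counts[group][0] += label
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

# Hypothetical labeled records: (demographic group, positive outcome?)
records = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
rates = positive_rate_by_group(records)

# Disparity ratio: lowest group rate over highest (1.0 means parity).
disparity = min(rates.values()) / max(rates.values())
```

A large gap in the disparity ratio doesn't prove the labels are unfair, but it flags exactly the kind of skew that a diverse team is more likely to notice and question.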
As I mention in the post, there are a few ways to make sure the data science team brings diversity to the training, including carefully reviewing candidate qualifications that go beyond tech credentials; outsourcing parts of the project to gain perspective beyond your company; offering internships and other programs to engage the next generation of professionals; and investing in training programs.
This Forbes post sheds light on a topic that is of great concern to us here at Wovenware, and we’ll continue to explore ways to ensure that any AI solution we create supports fair, unbiased and ethical decision-making. The future depends on greater awareness and acceptance of diversity in the workplace, and the creation of AI is no exception.
We’d love to hear your thoughts. Please reach out to us: firstname.lastname@example.org