
Cue ominous music. “That Terminator is out there, it can’t be bargained with, it can’t be reasoned with, it doesn’t feel pity or remorse or fear, and it absolutely will not stop.”

These lines from the blockbuster movie The Terminator tap into a fear of robots that will stop at nothing until they destroy civilization as we know it. And it's not the first time: with movies like 2001: A Space Odyssey and The Matrix, Hollywood is full of foreboding about machines that will one day turn on their creators.

Is this fact or fiction? Could our AI initiatives today be the beginning of the end, laying the groundwork for this apocalyptic vision?

Interestingly, several technology and scientific leaders think this type of Hollywood-inspired threat of AI could be real.

Real Concerns About the Future Dangers of AI

A 2015 Wired Magazine article mentioned a conference in Puerto Rico, attended by such luminaries as Elon Musk and Stephen Hawking, where delegates signed an open letter promising to “conduct AI for good.” And more recently, a Wall Street Journal article reported on a gathering of U.S. governors, where Elon Musk warned about the potential dangers of AI and pushed for a regulatory body to guide its development.

Just last week, Facebook shut down an AI experiment after two chatbots developed their own language that humans could not understand. While the scenario itself was innocent, many people were concerned about the potential ramifications of this type of capability.

Even without malicious intent, there are potential disasters that could be caused by AI: robots taking jobs away from humans; accidents caused by self-driving cars; robots making bad decisions in military situations; or stock market crashes due to trading algorithms. Because of these and many other potential scenarios, it's smart to tread cautiously down the AI path and put regulatory overseers in place to ensure AI is evolving safely and being used wisely.

Balancing Risk with Reward

As with any technological innovation, or breakthrough of any kind, for that matter, AI has the capacity to improve the world or to cause it harm. Innovation should not be stifled because of the latter; rather, it needs to be strategically controlled.

The key purpose of AI in the future will be to augment humans. While we have great capacity for judgment, wisdom, and compassion, among other traits, we do not have the computing capacity to retain and process the huge volumes of information that computers can. Imagine what we can accomplish when an advanced artificial intelligence system like IBM's Watson reads 25 million published medical papers and keeps up with the latest clinical trials and treatments to offer up options that doctors might have missed.

To cite other examples, through deep learning algorithms, AI can help ensure the safety of airports, help cities operate more efficiently, and help companies serve their customers better.

While another Terminator character famously said, "I'll be back," the truth of the matter is that AI is not going to leave. As deep learning algorithms continue to create greater intelligence that augments the progress people, governments, and businesses can achieve, AI will become a part of our daily lives. The key is to use this enormous capability wisely.

