
Should We Listen to the Doomsday Scenarios about Generative AI?

Summary: Generative AI holds immense potential for positive change, but its transformative power also raises concerns about extinction-level risks from misuse. Addressing these concerns requires ethical guidelines, regulatory oversight and strategic governance, as outlined by initiatives such as The White House’s AI Bill of Rights and proposed guidelines for responsible AI practices.

Generative AI is all the rage today, capable of producing high-quality content, code or information at lightning speed from a few simple prompts. It is widely described as the next major disruption in technology, one that will drastically change the way we live, work and communicate. Yet along with its power to transform for the better comes its ability to cause harm without proper governance.

This was a topic I recently tackled in an article for Forbes, soon after more than 350 tech executives and scientists signed a joint statement to express their concerns and warn of the dangers of AI. In their statement, the group claimed that AI poses an “extinction risk” on par with pandemics and nuclear war. Highlighting this concern, a Yale CEO Summit survey found that 42 percent of attending executives believe AI has the potential to destroy humanity within ten years.

Calling AI an extinction risk is quite a bold claim, but what exactly are the scenarios that could lead to this outcome? Many pundits speculate that bad actors could leverage AI’s massive datasets to design bioweapons or engineer new strains of lethal viruses. Others point to AI being used to hack into nuclear systems or to deliberately spread false information that causes worldwide panic.

While many of these doomsday scenarios may never come to pass, AI does have the power to cause harm in the wrong hands. It is simply too powerful to leave unchecked without regulatory oversight.

To that end, almost a year ago The White House Office of Science and Technology Policy released a Blueprint for an AI Bill of Rights, intended to protect privacy and equity when AI is built or used. It identified five principles that should guide the design, use and deployment of automated systems to protect the American public.

These guidelines include:

• Safe and Effective Systems: AI solutions should be thoroughly tested to evaluate concerns, risks and potential impact.

• Algorithmic Discrimination Protection: Solutions should be designed in an equitable way to guard against algorithmic bias.

• Data Privacy: People should have agency over how their data is used and protected against violations of privacy.

• Notice and Explanation: It should be made clearly visible when you are interacting with AI and how it is being used.

• Human Alternatives, Consideration and Fallback: You should be able to opt out of interactions with AI in favor of a human alternative.

More recently, the Biden administration has been meeting regularly with industry leaders and legislators to better understand AI and develop strategies to regulate it. The European Union is also establishing its own regulations.

As I shared in the Forbes article, below are some guidelines that could mitigate the potential harms of transformative AI. Many of them are already being considered today:

Establish a standards body. Much like the Good Manufacturing Practices (GMP) regulations established by the FDA for life sciences companies, clearly defined guidelines need to be developed and communicated to companies that want to earn a “good AI practices” designation.

Enforce transparency. Whether generative AI is being used to develop content, marketing materials, software code or research, a highly visible public disclaimer should be required, indicating that parts or all of the work were machine-generated.
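
To make this concrete, below is a minimal sketch in Python of what such a disclosure might look like. The wrapper function, field names and model name are hypothetical illustrations, not part of any existing standard:

```python
# Minimal sketch: attaching a visible, machine-readable AI disclosure
# to generated content. The function and field names are hypothetical.
import json
from datetime import datetime, timezone

def with_disclosure(generated_text: str, model_name: str) -> str:
    """Prepend a human-readable notice and append provenance metadata."""
    notice = f"[Disclosure: parts or all of this content were generated by {model_name}.]"
    provenance = json.dumps({
        "ai_generated": True,
        "model": model_name,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    })
    return f"{notice}\n\n{generated_text}\n\n<!-- provenance: {provenance} -->"

print(with_disclosure("Q3 market summary ...", "example-llm-1"))
```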

Conduct risk assessments. Recently, Google and its AI research laboratory DeepMind recommended a number of steps to ensure that “high-risk AI systems” provide detailed documentation about their solutions. Among those recommendations is that risk assessments by independent organizations should be mandatory.

Make AI explainable. When AI is making decisions that affect people’s lives, individuals should be entitled to a clear explanation of how the algorithm arrived at its decision.
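
For a simple linear model, a per-decision explanation can be as small as the sketch below. The loan-approval scenario, features and training data are hypothetical, and production systems would call for more rigorous explainability techniques:

```python
# Minimal sketch: explaining one decision of a simple linear model.
# The loan-approval scenario and all data here are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income_10k", "debt_ratio", "years_employed"]
X = np.array([[5.5, 0.40, 3.0],
              [8.2, 0.15, 8.0],
              [3.0, 0.65, 1.0],
              [6.1, 0.30, 5.0]])
y = np.array([1, 1, 0, 1])  # 1 = approved, 0 = denied

model = LogisticRegression(max_iter=1000).fit(X, y)

def explain(applicant: np.ndarray) -> None:
    """Print the decision and each feature's contribution to the log-odds."""
    label = "approved" if model.predict(applicant.reshape(1, -1))[0] == 1 else "denied"
    contributions = model.coef_[0] * applicant
    print(f"Decision: {label}")
    for name, c in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
        print(f"  {name}: {c:+.3f}")

explain(np.array([4.5, 0.55, 2.0]))
```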

Establish cloud oversight. When deploying AI in a public cloud, organizations should be required not only to obtain permission from the federal government, but also to accept monitoring by federal employees whose sole job is to closely watch the cloud and the projects being deployed.

Teach AI ethics. Software engineering and data science students should be required to complete studies in AI ethics before they can work in the industry, a type of “Hippocratic Oath” for data scientists.

Generative AI is not going away anytime soon. In fact, we’re really just on the cusp of its evolution. And, as with anything new, there’s often fear of the unknown even as we recognize its inherent usefulness. Yet this time the fear reflects very real possibilities if AI is left unchecked. It needs to be governed, closely monitored and rolled out strategically; to do anything less could have profound implications.

Interested in learning more about AI and how it can safely be rolled out at your organization? Reach out to us at info@wovenware.com

