Summary: The Wovenware AI Update provides insights into the evolving landscape of generative AI, highlighting recent developments such as White House initiatives, Senate hearings on AI regulation, concerns from authors about content usage, and the European Union’s strict AI laws.
A Snapshot and Summary of What’s Happening to Enable the Safe Use of Generative AI
Last year at this time, it would have been unusual for people outside of the tech industry to be talking daily about (or actively using) artificial intelligence. Yet today, ChatGPT is one of the fastest-growing websites ever, and more than 110 million people and counting have used the generative AI tool for a variety of tasks.
As its usage expands and it becomes embedded into the fabric of business and society, there are clear concerns about creating guardrails and governance to ensure that it is used for good.
Recently, Bill Gates weighed in on the fears and concerns surrounding AI, expressing that the fears are real but manageable. He called it "the most transformative innovation any of us will see in our lifetimes, and a healthy public debate will depend on everyone being knowledgeable about the technology, its benefits, and its risks."
Staying knowledgeable about generative AI can be a daunting task when policy and product innovations are being broadcast and written about daily. We've started the Wovenware AI Update to serve as a resource to help you navigate the unfolding world of generative AI, providing unbiased information that can help you get up to speed and shape your own opinions.
White House Initiatives Underway
The Biden Administration reached an agreement with seven major AI companies to put guardrails around AI: Amazon, Anthropic, Google, Inflection, Meta Platforms, Microsoft and OpenAI. "AI poses risks and opportunities. We'll see more technology change in the next 10 years, or even in the next few years, than we've seen in the last 50. That has been an astounding revelation to me, quite frankly," Biden said, adding, "This is a serious responsibility. We have to get it right."
Meanwhile, the Senate Judiciary Subcommittee on Privacy, Technology and the Law has held two hearings this summer to help guide senators as they strive to address AI regulation. U.S. senators proposed specific ideas, such as requiring licensing for companies pursuing high-risk AI development, creating an AI testing and auditing regime, creating a new federal agency to oversee AI, imposing legal limits on some uses of AI in elections and nuclear warfare, and requiring watermarks and transparency when AI is being used.
According to the Brookings Institution, with so many dissenting viewpoints, the issues can be broken down into three challenges. The first is the lightning-fast pace of innovation: big tech players are working at breakneck speed to lead in generative AI, and it's outpacing the ability of government to properly regulate it. The remaining two challenges are questions about what exactly we are regulating, and who will be responsible for doing so and how.
Arts & Entertainment Industries Raise Their Concerns
This summer, authors such as James Patterson and Margaret Atwood signed an open letter demanding that AI companies compensate them for use of their content. Others have followed suit, filing lawsuits for allegedly training AI models on their content without permission. We can expect this issue to heat up as AI models continue to be trained on current Internet content.
Europe Moves Full Force to Implement Strict AI Laws
On the European front, lawmakers from the European Union have created a draft version of the AI Act, which will be negotiated with the Council of the European Union and EU member states before becoming law. It proposes that systems such as real-time facial recognition in public spaces, predictive policing tools and social scoring systems be banned outright.
It also aims to set tight restrictions on "high-risk AI applications," such as systems used to influence voters and social media platforms that recommend content to their users. The proposed AI Act also outlines transparency requirements for AI systems. For example, ChatGPT would have to disclose that "content was AI-generated, distinguish deep-fake images from real ones and provide safeguards against the generation of illegal content." Further, it goes on to say that companies violating the regulations could be fined up to $43 million, or 7 percent of a company's annual income, whichever is higher.
Many companies in the EU are pushing back on the EU’s proposed AI legislation, saying that it could hinder the EU’s competitiveness and cause a reduction in investment in the region.
Stay tuned for more developments in the evolving world of generative AI in the September edition. In the meantime, please share your questions, concerns and opinions about the AI-driven era. We'd love to hear from you; please reach out at firstname.lastname@example.org.