A Snapshot and Summary of What’s Happening to Enable the Safe and Effective Use of AI
Welcome to the latest installment of the Wovenware Monthly AI Index, where we’ve curated the news stories and issues shaping the AI-driven world. In this month’s edition, learn about the new startups shaping the generative AI landscape, new AI regulation being proposed at the state level and how the motion picture studio behind the Hunger Games franchise is leaning further into AI.
Happy reading!
Linda Savage, Content Director, Wovenware, a Maxar Company
BUSINESS DEVELOPMENTS
AI startup Anthropic goes enterprise-wide: Anthropic announced the Claude Enterprise Plan, a new tier of its Claude large language model (LLM) offering. It includes security controls and integration with key enterprise solutions, among other capabilities aligned with enterprise needs. The enterprise AI market is becoming a key focus for startups as well as established players such as OpenAI.
Other AI startups join the fray: OpenAI’s co-founder Ilya Sutskever started a new AI firm this past June, Safe Superintelligence (SSI), which recently raised $1 billion. The company is committed to building safer AI models. Sutskever left OpenAI after an attempt to oust CEO Sam Altman. While the company has not yet launched a product, it says that “its mission is to create AI systems that are both highly capable and aligned with human interests.”
In other startup investment news, Nvidia invested in a Japanese AI company, Sakana AI, which was created by Google engineers last year. The company said it would be working closely with Nvidia on AI research, the creation of data centers and building AI’s focus in Japan. Together, they plan to develop new techniques for efficiently creating foundation models that can give Japan a competitive advantage in AI, leveraging Nvidia’s latest technologies.
OpenAI gets ready to harvest Strawberry: OpenAI is getting ready to launch a new AI model called o1 (what the industry had been referring to as “Strawberry”). OpenAI calls it a “reasoning” model, trained to work through complex questions, including coding and math problems, faster than a human can. It’s expected that o1 will be more expensive than previous OpenAI models.
Salesforce sets agents loose: Slack (owned by Salesforce) is adding a number of new AI features to the platform, including the ability to incorporate AI-powered agents from Salesforce, while supporting agents from Adobe, Anthropic, Cohere and others. The goal is to advance the idea that generative AI can power agents that act autonomously instead of serving simply as co-pilots to humans.
U.S. AND EU REGULATIONS
Employees and big tech bosses at odds over California AI regulation: Approximately 113 employees of leading AI companies, such as Google, DeepMind and Meta, have signed an open letter in support of SB 1047, the AI safety bill proposed by California state Senator Scott Wiener. The employees signing the letter contradict their employers’ opposition to the bill. SB 1047 would make developers of AI models liable for major harms their models cause if the developers failed to take appropriate safety measures. The legislation would apply only to developers of models that cost at least $100 million to train.
Another AI bill, AB 3211, which would require tech companies to label AI-generated content, has already passed California’s state Senate Appropriations Committee. This content could include anything from harmless memes to deepfakes aimed at spreading misinformation about political candidates. Many big tech firms endorse the bill. For example, OpenAI believes that for AI-generated content, transparency and requirements around provenance, such as watermarking, are important, especially in an election year.
Will federal AI regulation ever come to pass?: Despite working on federal AI regulation for months, policymakers remain divided on whether legislation will pass by the end of the year. While the fate of federal legislation is unclear, state regulation continues to move forward in the U.S. (with the above-mentioned SB 1047 in California), as well as legislation in states such as Colorado, Virginia and Florida. Washington has a poor track record of moving quickly when it comes to technology, and with AI, it’s even more difficult, since lawmakers are typically not AI experts.
ARTS & ENTERTAINMENT — AND EVERYTHING IN BETWEEN
School is back in session: Can students use AI?
Universities, for the most part, are letting professors decide whether, and to what extent, generative AI tools can be used in the classroom. For example, Colorado State University history professor Jonathan Rees plans to tell his students they can’t use AI tools to write essays for his class. “The policy I picked is ‘Don’t use AI,’” he said. Other professors are seizing the moment, recognizing the rise of ChatGPT as an opportunity to combine inquiry and scholarship with preparation for an ever-changing world.
More California AI bills protect entertainers: Two more AI bills coming out of California are addressing the entertainment industry. AB 2602 bars contract provisions that “facilitate the use of a digital replica of a performer in a project instead of an in-person performance from that human being, unless there is a reasonably specific description of the intended use of the digital replica and the performer was represented by legal counsel or a labor union in negotiations.” AB 1836, meanwhile, requires entertainment employers to gain the consent of a deceased performer’s estate before using a digital replica of that person.
Hunger Games studio goes all out on AI: Lionsgate, the production studio behind films like the Hunger Games, announced that it is partnering with AI startup Runway to create a new customized video generation model intended to help filmmakers, directors and other creative talent augment their work through AI. Studios have increasingly begun implementing AI, despite many filmmakers’ concerns that the technology’s use could threaten their jobs. This was one of the major issues that led to the SAG-AFTRA strike last year.
That’s it for this month’s Wovenware Monthly AI Index. We hope you gained new AI insights and food for thought.
Please share your questions, concerns and opinions about the AI-driven era. We’d love to hear from you. Please reach out to info@wovenware.com.