
The Wovenware Monthly AI Index February 2024



A Snapshot and Summary of What’s Happening to Enable the Safe and Effective Use of AI

Summary: Generative AI (GenAI) is quickly becoming a household term, but it’s not without its risks, challenges and complexity. We’re here to help you join in on the conversation by staying current on the most important news and trends. Welcome to the latest installment of the Wovenware AI Index, where we’ve curated the most important news of the month. Happy reading!


Meta is pushing to create industry standards for labeling and identifying AI content. Facebook and Instagram users will start seeing labels on AI-generated images that appear on those platforms, providing greater transparency. Meta’s president of global affairs, Nick Clegg, didn’t specify when the labels would appear but said they would roll out “in the coming months” and in different languages, noting that a “number of important elections are taking place around the world.”

In news from another tech giant, in February Google announced a subscription plan for its AI chatbot, taking a page from Microsoft and OpenAI, which offer their premium GenAI platforms on a subscription basis. Google’s recently renamed solution, Gemini Advanced (previously known as Bard), will cost $19.99 a month. A free version of the chatbot using more basic AI technology will remain available. The Wall Street Journal reports that Google Bard had been struggling to catch up to ChatGPT, attracting about one-fifth of OpenAI’s user base.

On the collaboration front, more than 200 organizations joined the U.S. AI Safety Institute Consortium to help promote the safe use of AI. Including companies such as Meta, Adobe and OpenAI, the consortium is aligned with the National Institute of Standards and Technology and is tasked with bringing about the goals set by President Joe Biden in his recent executive order.


Commerce Secretary Gina Raimondo announced the U.S. AI Safety Institute Consortium (AISIC) this month, which includes the leading AI companies along with more than 200 other organizations and individuals. Notable members include OpenAI, Alphabet’s Google, Anthropic, Microsoft, Meta, Palantir, Intel, JPMorgan Chase and Bank of America.

Meanwhile, Europe made significant strides in the move to adopt AI rules, with EU countries endorsing a political deal reached in December 2023. The next step for the AI Act to become legislation is a vote by a committee of EU lawmakers, followed by a European Parliament vote in March or April.


Deepfakes are being taken to a whole new level. A series of AI-generated, sexually explicit images of pop star Taylor Swift was published across social media channels, drawing outrage from fans and renewing calls for a crackdown on AI misinformation. Because of advanced and widely available AI tools, anyone can create high-quality images or videos featuring anyone’s likeness in any possible scenario. It is illegal in almost every state to share real nude images of someone without their consent, but the laws around AI-generated pornography are much weaker: there is no federal law concerning deepfake porn, and only about 10 states have statutes banning it.

On the sports front, AI is helping the National Football League (NFL) improve the safety of players. The Digital Athlete platform, which the NFL built with Amazon Web Services (AWS), uses computer vision and machine learning to predict and identify plays and body positions most likely to lead to player injury.

That’s it for this month, but stay tuned for more developments in the world of AI in next month’s Wovenware Monthly AI Index.

Please share your questions, concerns and opinions about the AI-driven era. We’d love to hear from you. Reach out at
