Wovenware, a nearshore provider of Artificial Intelligence (AI) and digital transformation solutions, today announced that the U.S. Patent and Trademark Office has issued it Patent No. 11,024,032 B1. The patent covers the company’s system and method for generating synthetic overhead imagery based on cluster sampling and spatial aggregation factors of existing datasets. Wovenware uses the patented method to develop highly accurate deep learning-based computer vision models for government and commercial applications in regulated industries.
“The training of deep learning models using large amounts of annotated data has become essential to the accuracy of computer vision applications,” said Christian González, CEO, Wovenware. “The issuance of this new patent recognizes Wovenware’s unique approach to generating synthetic datasets and reinforces our expertise in developing highly accurate solutions that enable automated object detection, such as in satellite imagery or molecular object identification, within the public and private sectors.”
The newly issued patent protects Wovenware’s mechanisms and processes for enhancing limited datasets by generating synthetic overhead imagery based on cluster sampling and spatial aggregation factors (SAFs). It generates synthetic images through a unique process of cropping objects from original images and inserting them into uniform, natural or synthetic backgrounds. The objects are selected by clusters based on pixel-distribution similarity and through SAF mining.
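The general crop-cluster-paste idea described above can be sketched in a few lines of NumPy. This is an illustrative assumption of how such a pipeline might look, not the patented implementation: the function names, the histogram-based similarity signature, and the naive k-means step are all hypothetical stand-ins for the cluster-sampling and SAF-mining stages the patent describes.

```python
# Hypothetical sketch of a crop / cluster / paste pipeline for synthetic
# overhead imagery. All names and design choices here are illustrative
# assumptions; the patented method itself is not reproduced.
import numpy as np

def crop_objects(image, boxes):
    """Crop annotated objects, given as (x, y, w, h) boxes, from a source image."""
    return [image[y:y + h, x:x + w] for (x, y, w, h) in boxes]

def pixel_histogram(crop, bins=16):
    """Normalized intensity histogram, used here as a pixel-distribution signature."""
    hist, _ = np.histogram(crop, bins=bins, range=(0, 256))
    return hist / max(hist.sum(), 1)

def cluster_by_histogram(crops, k=2, iters=10, seed=0):
    """Naive k-means over histogram signatures, grouping crops by
    pixel-distribution similarity (a stand-in for the patent's clustering)."""
    rng = np.random.default_rng(seed)
    feats = np.stack([pixel_histogram(c) for c in crops])
    centers = feats[rng.choice(len(feats), size=k, replace=False)]
    for _ in range(iters):
        dists = ((feats[:, None] - centers[None]) ** 2).sum(-1)
        labels = np.argmin(dists, axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = feats[labels == j].mean(axis=0)
    return labels

def paste(background, crop, top_left):
    """Insert a cropped object into a copy of a background image."""
    out = background.copy()
    y, x = top_left
    h, w = crop.shape[:2]
    out[y:y + h, x:x + w] = crop
    return out
```

In a real pipeline the sampled clusters would drive which objects get composited, and the pasted images (with their known box locations) would feed directly into detector training as labeled data.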
In 2020, Wovenware was listed as a Strong Performer in The Forrester New Wave™: Computer Vision Consultancies report of the top 13 computer vision providers, joining companies such as Accenture, Capgemini, Deloitte and PwC.
Wovenware’s computer vision services are designed to help organizations optimize their operations by extracting valuable insights from the physical world. When images are limited, Wovenware customizes algorithms to generate synthetic images with varying textures, backgrounds and other conditions, to complement actual images and create a diverse and large enough dataset to train a high-performing computer vision model.
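The augmentation strategy described above can be illustrated with a small, self-contained sketch: composite an object crop onto several backgrounds while jittering brightness and adding noise, so a handful of real examples yields a larger, more varied training set. The function name, jitter ranges, and noise model below are illustrative assumptions, not Wovenware's production algorithms.

```python
# Hypothetical sketch of synthetic-variant generation: one object crop is
# composited onto multiple backgrounds with brightness and noise jitter.
# All parameters are illustrative assumptions.
import numpy as np

def synth_variants(obj, backgrounds, n_per_bg=2, seed=0):
    """Return a list of (image, (x, y, w, h)) pairs: synthetic images with
    the known location of the pasted object, ready to use as labeled data."""
    rng = np.random.default_rng(seed)
    h, w = obj.shape[:2]
    out = []
    for bg in backgrounds:
        for _ in range(n_per_bg):
            img = bg.astype(np.float32).copy()
            # Random placement within the background bounds.
            y = int(rng.integers(0, bg.shape[0] - h + 1))
            x = int(rng.integers(0, bg.shape[1] - w + 1))
            # Brightness jitter varies the object's appearance.
            img[y:y + h, x:x + w] = obj * rng.uniform(0.8, 1.2)
            # Additive noise simulates varying sensor conditions.
            img += rng.normal(0.0, 2.0, img.shape)
            out.append((np.clip(img, 0, 255).astype(np.uint8), (x, y, w, h)))
    return out
```

Because each synthetic image carries its object's exact bounding box, the annotations come for free, which is a key practical advantage of synthetic data over manually labeled imagery.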
The company leverages its own proprietary algorithms and model factory to accelerate the testing and optimization of models built on computer vision architectures such as SSD, Mask R-CNN and RetinaNet. Whether helping nurses classify ulcer wounds or helping researchers classify mosquitoes, for example, the company’s computer vision applications tag, slice and dice visual data assets so that humans can dedicate more time to meaningful interactions with customers and to solving complex problems.