Wovenware Launches Design Experience Lab to Deliver Human-Centric Approach to AI and Digital Transformation

SAN JUAN, Puerto Rico — Sep. 1, 2020 — Wovenware, a nearshore provider of AI and digital transformation solutions, today announced that it has created the Wovenware Design Experience Lab to formalize its focus on human-centered design as a core part of its AI and digital transformation services. It also announced that Dana Montenegro has joined the company’s senior management team as Chief Design Officer. Dana will integrate his extensive experience in design thinking with Wovenware’s proven technology expertise to provide customers with the full lifecycle of services, from design and innovation to development and deployment.

“At Wovenware, we’ve always been committed to building things right, but before we build something right, we need to make sure we design the right thing,” said Christian González, Wovenware’s CEO. “Bringing a design-driven approach to each digital transformation project we initiate is increasingly becoming the critical component to successful tech adoption, and Dana’s vast expertise will help us bring this focus on design thinking to our deep bench of tech talent and business acumen.”

Before joining Wovenware, Dana co-founded SeriouslyCreative, a highly successful innovation, strategy and experience design firm, which continues to operate from its headquarters in Puerto Rico. Prior to that, he was design director for FJORD, the popular design and innovation studio within Accenture, in Washington, D.C., where he worked to bring strategy, service design and creative leadership to its studio engagements and was responsible for ensuring quality and innovation across a wide variety of civic challenges. Before that role, Dana was a service design lead for the same studio, where he led transformation projects that reimagined how federal health, civilian, public safety and revenue agencies deliver better services to their customers. Earlier in his career, he held positions with Red Bull, most recently as Driver of Culture, Innovation & Inspiration. Dana received a bachelor’s degree from American University’s School of International Service.

The Wovenware Design Experience Lab combines design thinking with Agile and Lean methodologies, as well as creative problem solving, storytelling and business discipline, to help customers identify their unique business challenges and to ensure that the digital transformation technologies Wovenware develops for them are designed to address those challenges specifically. The company’s Design Experience team will be integrated with multidisciplinary project teams composed of project managers, software engineers, data architects, data scientists and others to collaboratively craft digital experiences for customers.

“Companies are quickly realizing that in order to effect change, we need to start with humans. This means engaging customers, stakeholders and employees to truly understand their pain points, wants and needs, and to turn them into opportunities that leverage technology as an experience,” said Dana Montenegro. “I’m excited to join the Wovenware team because of its commitment to putting customer needs at the center of its focus. Instead of simply collecting data, design thinking holds that we learn through empathy – talking to people across all disciplines and all stakeholder groups and hearing how they work, what they do and what’s important to them. We then apply digital expertise, delivered through seamless experiences, to address those needs.”

About Wovenware

As a design-driven firm, Wovenware delivers customized AI and other digital transformation solutions that create measurable value for customers. Through its nearshore capabilities, the company has become the partner of choice for organizations needing to re-engineer their systems and processes to increase profitability, boost user experience and seize new market opportunities. Wovenware leverages a multidisciplinary team of world-class experience designers, software engineers and data scientists to create solutions for cloud transformation, advanced AI innovation and application modernization. Headquartered in Puerto Rico, Wovenware partners with customers across North America and around the world.

Visit us on the web at www.wovenware.com, or connect with us on Twitter, Facebook, LinkedIn, Google+ or Instagram.

Managing Data Science Teams – Core Practices to Drive AI Innovation

Managing data science teams requires a skill set very different from the one needed to lead other teams in the technology industry. Unlike in other areas, the scientific process that drives artificial intelligence (AI) innovation can introduce a whole new level of uncertainty and perceived chaos, which very few organizations are prepared to manage. Project managers, product owners and Scrum masters with many years of experience working with software developers are being asked to quickly shift their mindsets when it comes to managing AI projects. They must refocus their priorities and adapt to a new way of approaching their roles.

Assessing Organizational Readiness

Before initiating any AI project, however, it’s important that the data science manager assess the organization’s readiness to incorporate AI into its daily practices. A recent McKinsey survey found that while AI adoption is steadily increasing, few companies are implementing “the foundational practices needed to generate value at scale.” According to the survey, 30 percent of organizations that were not classified as AI high performers did not have an AI strategy aligned with their corporate strategy. A data science manager and his or her team, therefore, must be supported by an organization committed to core AI practices. The data science manager should determine this by asking:

  • Are senior leaders committed to AI initiatives?
  • Are business leaders educated in the potential and limitations of AI?
  • Have the problems to be solved and questions to be answered been clearly stated with success measures and acceptable margins of error?
  • Is data easily accessible to the data science team?
  • Does the organization have the internal talent required to execute AI work?
  • Does the organization have external partnerships to complement internal teams?
  • Do business units trust insights generated by AI models?

Identifying and removing barriers will be paramount in order to lead a high-performing data science team that is set up for success.

The Evolving Role of Managers

Artificial intelligence is disrupting nearly every industry. It is changing the way we work, and how managers manage. Traditionally, most managers’ primary responsibility has been execution: turning a vision into reality through leadership and example. Today, data science managers’ main responsibility is innovation: creating a better vision through leadership and experimentation. The following are some of the ways data science managers can accomplish this:

Break Organizational Barriers
To drive AI innovation, managers should put their Scrum master skills to good use and eliminate any impediments their teams may encounter. When managing data science teams, strategic planning, combined with good old-fashioned persuading, negotiating, collaboration and persistence will be needed to break organizational barriers and pave the way for innovation.

Build Bridges with Business Units
Since AI is an emerging technology, business units are usually not equipped to understand its full potential. Data science managers need to keep in constant communication with business units to understand their pain points, unique domain knowledge and strategic goals to identify potential AI opportunities across the organization.

Build Bridges with Other Technical Units
Some organizations report that lack of defined integration processes can cause miscommunication between data science teams and software engineering teams who are tasked with integrating an R or Python model into an existing .Net or Java application. Bridging the gap between research and implementation is a process in and of itself and data science managers will be at the center of it.

Choose the Right Problems to Work On
Assessing AI opportunities within the organization based on available data and resources and potential value is critical for data science leaders. Data science teams have extensive technical knowledge but may not have the domain knowledge and business acumen to identify the most impactful problems to work on.

Manage Uncertainty
Managing uncertainty in data science projects often takes on a life of its own: experiments don’t just sometimes fail, they often fail. Managers need to quantify the acceptable margin of error for any given model (e.g., an 80 percent accurate model for churn prediction may be acceptable, while an 80 percent accurate model for diagnosing cancer may not be).

Extract Understandable Insights
Data scientists can apply complex mathematical algorithms and create sophisticated models to extract valuable insights for an organization. Managers must ensure the insights are clearly communicated in plain English and can be understood by all business users. They must make sure it is “not all Greek” to key stakeholders.

Implementing an Innovation Culture

In an insightful HBR article, Gary Pisano summarizes the behaviors that must be put in place to create and sustain an innovative culture. Promoting creativity and experimentation does not imply lowering expectations on performance; innovators will always be held to a higher standard. Managing data science teams requires cementing an innovation culture in the organization. According to Pisano, here are some of the characteristics required of an innovative culture:

  1. “Willingness to Experiment but highly disciplined”- Experiments are well thought out, planned, and follow experimental design best practices.
  2. “Tolerance for failure but no tolerance for incompetence”- Experiments fail more often than not, but failure due to a lack of hard work or due diligence should not be tolerated in any AI team.
  3. “Collaboration but with individual accountability”- When each team member takes ownership of and responsibility for the quality of his or her work, the results of true collaboration are maximized.
  4. “Psychologically safe but brutally candid”- Innovation is only cultivated in an open environment where ideas can be safely shared, and criticism delivered candidly and respectfully.
  5. “Flat but strong leadership”- Doing away with structured hierarchies helps remove unconscious emotional or mental constraints that hinder openly sharing ideas. But every team will need a single leader to set goals, give direction and be empowered to make decisions.
These behaviors may seem contradictory in some cases, so leaders driving AI innovation must constantly make the right judgment calls to balance creativity, openness and experimentation with discipline, accountability and structure.

Identifying and Retaining Talent

If you do a Google search for “Best Jobs of 2019”, chances are that data scientist, software developer and statistician will rank in the top five. Data scientists are in extremely high demand and managers need to identify, hire and retain qualified talent. Yet, Google, Facebook and now Walmart (which hired 1,500 data scientists last year) may be shrinking the pool, luring away new graduates and promising talent.

But despite this, not all data scientists need to hold PhDs. To build a well-balanced team, you should identify internal resources and provide them with opportunities to upskill and engage in lifelong learning. Identify quick learners: driven, hungry critical thinkers who can perform just as well as, if not better than, formally educated data scientists.

The most important responsibility of a data science manager is retaining talent. Given all the AI hype, data scientists expect to have real impact on organizations and the world. If your organization poses barriers and does not live up to expectations, your data scientists will look for bigger and better jobs.

Here at Wovenware, we strive to keep data science teams challenged and motivated. Our thinking is that the best way to keep a data scientist engaged is to always provide new questions to answer and new problems to solve that result in extracting business insights and expanding our intellectual property. Data scientists are passionate and curious, with heightened critical-thinking skills, and a curious mind needs to be constantly fed new unanswered questions. Otherwise, data scientists will get bored and may go looking for new opportunities.

Identifying External Talent

To offset the challenges of hiring and retaining data scientists, managers should reach out and create partnerships with external providers and build an extended team. Many organizations are turning to AI outsourcing and nearshoring as a cost-effective and faster approach to initiating AI projects.

Autonomous and Accountable Data Science Teams
Successfully managing data science teams requires empowering every individual with autonomy and responsibility, encouraging the team to experiment in the pursuit of continuous improvement, and embracing lifelong learning. These are the foundations of building an innovative culture in a data science team. In a recent TED Talk, former Marine Corps lieutenant Drew Humphreys draws a thoughtful and surprising connection between the way we implement machine learning models and the way leaders should be agents of innovation. Humphreys invites everyone to “think differently about leadership…and empower people, the way we empower machines.”

Driving AI innovation and managing data science teams requires shifting the focus to empowering talent, embracing uncertainty and finding new ways of communicating. Now more than ever, managers have become facilitators for unlocking the creativity that will generate insight-driven transformation in organizations.

RetinaNet Implementation for Object Detection Using the xView Dataset

Sometimes it can be difficult to fairly compare different object detectors. After all, no single object detector fits every dataset, and understanding the data can give us a sense of direction in terms of which architectures to use. Recently, I set out to test which models or strategies could improve the detection of small-scale objects. I ran some experiments with the RetinaNet implementation from the paper Focal Loss for Dense Object Detection. After experimenting with it, I can see why RetinaNet is a popular one-stage object detection model that can work well with small-scale objects. In this post, I will explain my experience training RetinaNet to detect small cars in satellite imagery.

RetinaNet introduced two improvements over existing single-stage object detection models: the use of a Feature Pyramid Network (FPN) and the focal loss. For a more in-depth explanation, I suggest reading a blog post titled The intuition behind RetinaNet. Below we can see a figure from the paper Focal Loss for Dense Object Detection explaining the network architecture.
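As a rough illustration of the second improvement, below is a minimal NumPy sketch of the binary focal loss described in the paper, which down-weights the loss of well-classified examples so training focuses on hard ones. The paper’s defaults are γ = 2 and α = 0.25; this is my own simplified version, not the Fizyr implementation.

```python
import numpy as np

def focal_loss(y_true, y_pred, alpha=0.25, gamma=2.0, eps=1e-7):
    """Binary focal loss: FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t).

    y_true holds 0/1 labels, y_pred the predicted foreground probabilities.
    With gamma = 0 and alpha = 1 this reduces to plain cross-entropy.
    """
    y_pred = np.clip(y_pred, eps, 1.0 - eps)           # avoid log(0)
    p_t = np.where(y_true == 1, y_pred, 1.0 - y_pred)  # probability of the true class
    alpha_t = np.where(y_true == 1, alpha, 1.0 - alpha)
    return -alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)

# A confidently correct prediction contributes far less loss than a wrong one,
# which is what keeps the many easy background anchors from dominating training.
easy = focal_loss(np.array([1.0]), np.array([0.9]))
hard = focal_loss(np.array([1.0]), np.array([0.1]))
```

With γ = 2, the (1 − p_t)² factor shrinks the easy example’s contribution by roughly two orders of magnitude relative to plain cross-entropy.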

For the experiments, I used the Keras implementation of RetinaNet found in Fizyr’s GitHub repository. A great advantage of this approach is that it can be quickly and easily downloaded and installed by following the instructions in the repository. In my case, I had created a container with this repository installed beforehand, which gave me access to the following commands on the terminal:

  • retinanet-train
  • retinanet-evaluate
  • retinanet-debug
  • retinanet-convert-model


Later in the blog, I will explain how I used each of these commands. After the environment was set up, I needed to gather the following to train RetinaNet:

  • The pre-processed dataset
  • A backbone model from the ResNet family

I used a small car dataset, where the images were a subset of satellite images from the xView dataset. The training dataset had 104,069 annotations in a CSV file, with each annotation containing the image path, bounding-box coordinates and class name. There was also another CSV file containing the class-name mapping, with each line following the format below:

class_name,id

Lastly, the repository provides pre-trained ResNet model weights, of which I used the default ResNet50.
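For reference, this implementation expects each annotation line in the form path,x1,y1,x2,y2,class_name. A hypothetical pair of files might look like the following (the file names, paths and coordinates here are illustrative, not taken from the actual dataset):

```
train_cars.csv:
images/tile_0001.png,214,388,226,401,small_car
images/tile_0001.png,90,41,103,55,small_car

classes.csv:
small_car,0
```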

Before I started training, I checked whether the annotations were correctly formatted for the model. I did this by using the retinanet-debug command as follows:

retinanet-debug csv data-volume/train/train_cars.csv data-volume/train/classes.csv

This command displays the annotations: green means matching anchors are available, while red indicates that no anchors are available, and annotations without matching anchors won’t contribute to training. The default anchor parameters work well for most objects, but since I was working with aerial images, some objects were smaller. The red annotations told me that my dataset contained objects smaller than the default anchor configuration could match, and that I needed to change it. To choose new anchor box parameters, I used the Anchor Optimization repository, which calculated the best anchor parameters for the dataset and recommended the following ratios and scales:

Final best anchor configuration
Ratios: [0.544, 1.0, 1.837]
Scales: [0.4, 0.519, 0.636]
Number of labels that don’t have any matching anchor: 190
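The matching behind those red and green annotations is based on intersection over union (IoU) between each ground-truth box and the candidate anchors; in the RetinaNet paper, an anchor is a positive match at an IoU of 0.5 or higher. The sketch below is my own illustration of the computation, with made-up boxes showing why a small object can fail to match a much larger default anchor:

```python
def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter)

# A 12x12-pixel car centered inside a 32x32 anchor overlaps it with
# IoU = 144 / 1024, about 0.14 -- far below a 0.5 positive threshold,
# so such an annotation would show up red in retinanet-debug.
overlap = iou((10, 10, 22, 22), (0, 0, 32, 32))
```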

Looking at the default anchor configuration, I saw that there were other parameters besides ratios and scales, but changing them is not advised in this RetinaNet implementation: sizes are tied to how the network processes an image, and strides to how the network strides over an image. I only needed to change the ratios and scaling factors used per anchor location, so I saved the new configuration in a config.ini file like this:

[anchor_parameters]
sizes = 32 64 128 256 512
strides = 8 16 32 64 128
ratios = 0.544 1.0 1.837
scales = 0.4 0.519 0.636
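To build some intuition for what these values control, each (ratio, scale) pair at a given base size expands into one anchor shape per feature-map location. The sketch below follows the standard scheme (an anchor of area (size × scale)² with aspect ratio h/w equal to the ratio); it is my own illustration, not the library’s exact code:

```python
import math

def anchor_shapes(size, ratios, scales):
    """Enumerate (width, height) anchor shapes for one pyramid level:
    each anchor covers an area of (size * scale)**2 at aspect ratio h/w = ratio."""
    shapes = []
    for ratio in ratios:
        for scale in scales:
            area = (size * scale) ** 2
            w = math.sqrt(area / ratio)
            h = w * ratio
            shapes.append((w, h))
    return shapes

# With the optimized config, the smallest square anchor at base size 32 is
# 32 * 0.4 = 12.8 pixels on a side, a much better fit for small cars than
# the 32-pixel default.
small = anchor_shapes(32, [0.544, 1.0, 1.837], [0.4, 0.519, 0.636])
```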

After that, I was ready to start training, but I first needed to understand the retinanet-train command parameters:

retinanet-train --help

The parameters I decided to use were:

  • --snapshot-path – path to store snapshots of the model during training
  • --tensorboard-dir – log directory for TensorBoard output
  • --config – path to the .ini file of configuration parameters
  • csv – positional arguments giving the training CSV path and the class CSV path

There were other parameters that could be changed, including epochs, weights and backbone; I decided to keep the defaults for those. Since I was working on a server with multiple GPUs, I first had to specify which GPU to use with CUDA_VISIBLE_DEVICES and where to save the output:

CUDA_VISIBLE_DEVICES=2 retinanet-train --snapshot-path data-volume/outputs/snapshots-cars/ --tensorboard-dir data-volume/outputs/tensorboard/dir --config data-volume/train/config.ini csv data-volume/train/train_cars.csv data-volume/train/classes.csv &> data-volume/outputs/output_retinanet.txt

After executing the training task, I monitored the progress and confirmed it was training successfully with the correct configuration. The model was slow to train, running overnight for almost 18 hours. After it completed, I converted it for evaluation using the retinanet-convert-model command. To convert the model, I needed:

  • The path of the snapshot to convert
  • The path to save the converted model
  • The path of the configuration file

After I gathered those paths, I ran the following command:

retinanet-convert-model data-volume/outputs/snapshots-april24/resnet50_csv_50.h5 --config data-volume/train/config.ini data-volume/outputs/models/model_cars.h5

While figuring out how to convert the model, I caught some errors along the way. I also had to convert the model more than once because I didn’t include the config file, so remember to pass the config.ini file. After the model was saved, it was ready to be evaluated. For this I needed:

  • The path to save the inference results
  • The path of the configuration file
  • The path of the csvs with the annotation and classes of dataset to evaluate
  • The path of the model to evaluate

After gathering those paths, I first evaluated the model on the training set using the retinanet-evaluate command as follows:

CUDA_VISIBLE_DEVICES=2 retinanet-evaluate --save-path data-volume/outputs/inference_results-optimization/ --config data-volume/train/config.ini csv data-volume/train/train_cars.csv data-volume/train/classes.csv data-volume/outputs/models/model_cars.h5

Running the evaluation on the training set resulted in a mean average precision (mAP) of 0.8015. I then used a similar command with the test-set CSV paths, which resulted in an mAP of 0.5090, indicating that the model was generalizing reasonably well to unseen imagery.
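For context on what that mAP number means: the evaluation computes, per class, the area under an interpolated precision-recall curve built from detections ranked by confidence, and then averages across classes (keras-retinanet follows the Pascal VOC-style computation). The sketch below is my own simplified single-class version, not the library’s code:

```python
import numpy as np

def average_precision(scores, is_true_positive, num_ground_truth):
    """VOC-style all-point interpolated AP for one class.

    scores: confidence of each detection; is_true_positive: 1 if the
    detection matched a ground-truth box (e.g. IoU >= 0.5), else 0.
    """
    order = np.argsort(scores)[::-1]  # rank detections by confidence
    tp = np.asarray(is_true_positive, dtype=float)[order]
    fp = 1.0 - tp
    cum_tp, cum_fp = np.cumsum(tp), np.cumsum(fp)
    recall = cum_tp / num_ground_truth
    precision = cum_tp / (cum_tp + cum_fp)
    # enforce a monotonically decreasing precision envelope...
    for i in range(len(precision) - 2, -1, -1):
        precision[i] = max(precision[i], precision[i + 1])
    # ...then integrate precision over recall
    ap, prev_recall = 0.0, 0.0
    for r, p in zip(recall, precision):
        ap += (r - prev_recall) * p
        prev_recall = r
    return ap

# Two correct detections out of two ground-truth cars gives AP = 1.0;
# a high-confidence false positive drags the score down.
perfect = average_precision(np.array([0.9, 0.8]), [1, 1], 2)
```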

To summarize, I learned that knowing the data helps when training RetinaNet, that initialization of the network also plays an important role, and that having the correct anchor parameters improves performance significantly. To compare results, I also ran an experiment in which I trained RetinaNet on this dataset using the default anchor parameters, and got approximately 0.2843 mAP. I recommend this RetinaNet implementation because it is simple to use, and I obtained good results without much customization.

Putting Humanity at the Heart of Business

When we closed the office back in March and went to a fully remote workplace because of a government-imposed lockdown, we established a daily 8:30 a.m. management meeting. The idea of the meeting was to make sure the executive and middle management teams were in sync and that problems and issues could be promptly identified and resolved. Early on, it was decided that Friday meetings would be different: instead of talking about work, we would talk about ourselves as individuals. Each Friday, a different pair of team members runs the meeting.

Wovenware Management Team

Although the office is now open, we have kept these daily meetings, as they have proven valuable to our operations, and the Friday meetings have become something we look forward to. They ensure that we forge deeper connections with one another and maintain the humanity in our business.

Today the meeting was led by our human resources department, and the team went above and beyond. As a surprise to all of us, they got each manager’s family members to record videos of appreciation. They were beautiful and often emotional.

This kind of intangible yet rewarding gift reinforces what we’ve come to achieve at Wovenware – a successful organization made up of empathetic humans who are creating meaningful solutions for other humans. We treat our people as the creative and talented individuals that they are and work to nurture their skills. We understand that though these times are hard we are strengthened by family, friends and coworkers.

This unexpected gesture of appreciation serves as a reminder that business is not only about numbers to reach and milestones to meet, but about humanity in all its simplicity. By keeping humanity at the core of all we do, we’re able to build a better business and, more importantly, a better world.

The Missteps That Can Lead to Failure in AI Development

The benefits of AI are becoming well-established when it is applied to the right problems, but why do AI projects sometimes fail to deliver on the promise? This was a question I examined in a recent article I wrote for the Forbes Technology Council.

Below are a few of the missteps I shared:

  • Gathering the wrong or incomplete data. Companies need to take great care to make sure their data is accurate, clean and representative; otherwise, as the expression goes, “garbage in, garbage out.” The problem is that when you don’t pay enough attention to your data or take the proper steps to validate it, you might not even know that you’re getting flawed results.
  • Glamorizing AI. Many companies think AI will be the answer to all of their problems, but that’s not the case; some problems can’t be solved through AI. AI doesn’t work in situations where the data is random, constantly changing or lacking any discernible pattern, or where there are too many variables. Examples include trying to predict the stock market or the weather.
  • Using biased data. Data engineers need to consider where the data is coming from and whether it is representative and diverse. Without a representative, diverse sample, AI algorithms can deliver incomplete or false outcomes or, worse, promote gender or racial discrimination.

Check out the Forbes article to learn what actions can be taken to address these missteps and put you on a solid path toward beneficial AI. After all, AI has great potential to be a helpful partner to humans, but it’s important to be aware of the hazards and set realistic expectations.

Wovenware Named to Inc. Magazine’s 2020 Inc. 5000, Annual List of Fastest-Growing Private U.S. Companies, for the Fifth Time

SAN JUAN, Puerto Rico — Aug. 12, 2020 — Wovenware, a nearshore provider of AI and digital transformation services and solutions, today announced that, for the fifth time, it has been named to the Inc. 5000, an annual list of America’s fastest-growing private companies published by Inc. Magazine. The list represents a unique look at the most successful companies within the American economy’s most dynamic segment—its independent small businesses. Intuit, Zappos, Under Armour, Microsoft, Patagonia, and many other well-known names gained their first national exposure as honorees on the Inc. 5000.

“We’re honored to once again be included in the Inc. 5000 list of successful companies,” said Christian Gonzalez, CEO and co-founder, Wovenware. “As we continue to grow our offerings and customer base, this ranking not only recognizes our hard work and dedication, but it reinforces the growing demand for AI and other digitally transformative solutions that are solving very real business needs.”

Not only have the companies listed on the 2020 Inc. 5000 been very competitive within their markets, but the list as a whole shows staggering growth compared with prior lists. The 2020 Inc. 5000 achieved a three-year average growth of over 500 percent, and a median rate of 165 percent. The Inc. 5000’s aggregate revenue was $209 billion in 2019, accounting for over one million jobs over the past three years.

Wovenware was ranked number 3,335 on this year’s list. Complete results of the Inc. 5000, including company profiles and an interactive database that can be sorted by industry, region, and other criteria, can be found at www.inc.com/inc5000.

“The companies on this year’s Inc. 5000 come from nearly every realm of business,” says Inc. editor-in-chief Scott Omelianuk. “From health and software to media and hospitality, the 2020 list proves that no matter the sector, incredible growth is based on the foundations of tenacity and opportunism.”

The 2020 Inc. 5000 is ranked according to percentage revenue growth when comparing 2016 and 2019. To qualify, companies must have been founded and generating revenue by March 31, 2016. They had to be U.S.-based, privately held, for profit, and independent—not subsidiaries or divisions of other companies—as of December 31, 2019. (Since then, a number of companies on the list have gone public or been acquired.) The minimum revenue required for 2016 is $100,000; the minimum for 2019 is $2 million. As always, Inc. reserves the right to decline applicants for subjective reasons.