There is Never Enough Data

Data scientists can all agree: there is never enough data. Even with the enormous amount of data that companies are accumulating, there never seems to be enough of the right data: the data needed to train algorithms to perform specific tasks. In our recent article, Data, Data Everywhere, But Not a Drop to Drink, we discuss precisely this: no matter how much data you have, it is never enough.

Gartner found that poor data quality costs businesses an average of between $9.7 million and $14.2 million annually. That is a lot of money. And even when the data is of excellent quality, companies still need more of it: an estimated 10,000 labeled data points are typically needed to provide enough information to develop an algorithm that can extract insights and generate predictions.

That said, data can be difficult to collect. The extraction process can be strenuous and more time-consuming than building the actual machine learning models. This leads data scientists to turn to synthetic data.

Synthetic data is exactly what the name suggests: data that is artificially created, based on possible scenarios.

The Importance of Synthetic Data

Synthetic data is not created without conscious effort. As previously mentioned, it is based on possible scenarios or outcomes, and it is generated from the statistical properties of real datasets.

Here is an example of the effective use of synthetic data. A healthcare insurer needed to calculate how frequently customers with kidney disease file claims, and for what reasons. Lacking sufficient internal data, the insurer turned to synthetic data to better support the algorithm. The synthetic data was created from made-up but plausible scenarios related to ailments of patients with chronic kidney disease, using the initial dataset as a guide.

It is a cycle: synthetic data needs real data to be created, and the combined data feeds back into training, making the algorithms continuously more accurate.
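As a minimal sketch of this idea (the features and numbers below are invented purely for illustration), a data scientist might estimate the statistical properties of a small real dataset and then sample as many new records as needed from the fitted distribution:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Stand-in for a small "real" dataset: 500 records with two correlated
# numeric features (say, claim amount and patient age). Illustrative only.
real = rng.multivariate_normal(
    mean=[5000.0, 60.0],
    cov=[[250_000.0, 300.0], [300.0, 100.0]],
    size=500,
)

# Estimate the statistical properties of the real data...
mean = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

# ...and draw new synthetic records from the fitted distribution.
synthetic = rng.multivariate_normal(mean, cov, size=10_000)

print(synthetic.shape)  # (10000, 2)
```

Real tabular data is rarely this well-behaved; dedicated tools and generative models are needed to handle mixed types, skew, and complex correlations. But the principle is the same: learn the distribution, then sample from it.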

Synthetic data can also help develop computer vision apps. For example, suppose an urban planner needs to count how many eight-wheelers use a specific stretch of highway each year, and the computer vision application is tested on identifying each one in an image. If no real images are available, one can build a 3D model of an eight-wheeler and strategically place it in plausible scenes. These rendered examples train the computer vision app to distinguish eight-wheelers from smaller trucks and cars.

Data scientists have found that by combining a quality base model with synthetic data, algorithms can become highly accurate and deliver greater value.

In addition to adding value and enabling effective AI, synthetic data is also proving to be a solution for privacy concerns. Regulated industries, such as healthcare, finance, and banking, must safeguard data privacy. This becomes an obstacle when that sensitive data is needed to generate accurate outcomes for algorithms.

By generating synthetic samples from real datasets, companies can take advantage of key characteristics of the original data without raising privacy concerns.
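As a rough sanity check of that claim (a sketch only; the data below is invented, and genuine privacy guarantees require formal techniques such as differential privacy), one can verify that synthetic records preserve summary statistics without exactly reproducing any real record:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Stand-in "real" dataset: 200 records with two numeric features.
real = rng.normal(loc=[100.0, 40.0], scale=[15.0, 5.0], size=(200, 2))

# Synthetic records drawn from the fitted marginal distributions.
synthetic = rng.normal(real.mean(axis=0), real.std(axis=0), size=(1000, 2))

# No synthetic row exactly reproduces a real row (continuous draws
# collide with negligible probability)...
real_rows = {tuple(row) for row in real}
exact_copies = sum(tuple(row) in real_rows for row in synthetic)
print("exact copies of real records:", exact_copies)  # 0

# ...yet the summary statistics of the original are preserved.
print(np.allclose(synthetic.mean(axis=0), real.mean(axis=0), atol=3.0))  # True
```

Note that avoiding exact copies is a necessary but not sufficient condition for privacy; near-duplicates and rare outliers still need careful handling in practice.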

Four Tips for Successfully Leveraging Synthetic Data

Not all data is created equal, and this includes synthetic data. Its role is necessary and growing. Here's how to ensure the quality of your synthetic data and confirm that it is adding value to your algorithms by making them more accurate and effective.

  1. It’s all in the base. Ensuring the quality and accuracy of your original datasets allows the algorithm to get smarter as you add more of the same kind of data.
  2. Consider the source. Partnering with a synthetic data provider with experience in the full cycle of AI development guarantees you a partner that understands the importance of clean data, how much is needed, and the role of testing.
  3. It takes more than a tool. Creating synthetic data requires human knowledge. It is a complex process that also requires advanced frameworks, which in turn require talent trained on those systems.
  4. Look for data that goes deep. When sourcing data, target datasets specific to your use case. The best synthetic data is aligned with a specific problem.

The growing need for training data that creates better-informed machine learning algorithms will lead businesses and data scientists alike to synthetic data to produce more accurate solutions. And since algorithms are never satiated, there will always be a need for more data. Make sure you find the right data.

Sign up for our Monthly Newsletter:

Advantages of Nearshoring in Puerto Rico

Nearshoring, the practice of outsourcing your manufacturing or services to nearby countries, becomes even more appealing when those outsourcing projects land in Puerto Rico. Nearshoring in Puerto Rico has exclusive perks that will benefit your IT projects. Learn about them here.

The advantages of outsourcing projects to nearby countries include:

  • Big savings. For a few decades, outsourcing has appealed to large companies seeking to cut costs and continue to grow their business.
  • Better collaboration. The advantage of partnering with providers in similar time zones is better communication. Working similar hours means real-time communication across teams, so any concern that arises can be addressed and resolved promptly.
  • Steady growth. A nearshore partnership allows you to quickly expand your team with skilled professionals equipped across a diverse range of areas. It is difficult for an in-house team to be equipped for everything, and this kind of team expansion solves that.

Nearshoring in Puerto Rico

Puerto Rico is one of the most popular destinations for nearshore outsourcing and an attractive solution for companies all over the United States. Besides a large and diverse pool of skilled professionals and a fitting geographical location, our unique competitive advantage is that we are part of the United States; we are all U.S. citizens. Here is what that means:

  • U.S. Laws. When working with a nearshore provider based in Puerto Rico, the same familiar intellectual property protections and federal laws apply.
  • Education. Our software developers, analysts, testers, and others are all trained and educated following the same standards and best practices required in the U.S.
  • Quality. The same standards and best practices used in the U.S. apply to any project assigned to a nearshore staff.
  • Distance. Our location makes quick visits possible, which helps strengthen the professional relationship between your company and your nearshore provider. Puerto Rico also stays on Atlantic Standard Time year-round, with no Daylight Saving Time changes, which keeps scheduling simple.
  • Language & Culture. English and Spanish are both official languages of Puerto Rico, guaranteeing you a team of bilingual engineers. Additionally, as part of the U.S., our island shares a similar culture and is familiar with the same business practices as mainland American companies.
  • Price. Puerto Rico is part of the U.S., yet it offers very competitive prices compared to those on the mainland. This is an advantage for short-term project costs and hiring.
  • Entrepreneurial Ecosystem. Puerto Rico has a vibrant community dedicated to startup programs and to partnerships between academia, government, and private industry that promote innovation and job creation.

Interested in partnering with a nearshore provider located in Puerto Rico? Consider your company’s path towards digital transformation and analyze the possible gaps that can be closed with the right nearshore provider. Puerto Rico’s entrepreneurial ecosystem, convenient location, and diverse pool of bilingual engineers are excellent advantages to have for your software development projects. 


The Rising Need for Data Regulation

While many senators expressed concern during the testimony of Facebook whistleblower Frances Haugen, the key question remains: what are they going to do about it? Where people land on the truths or misconceptions Haugen raised is not the key issue. The real threat that was exposed is how social media platforms design algorithms that, unknown to users, influence and change human behavior. Without regulating this core issue, dangerous practices will continue.

The initial promise of social media technology, as a way to stay connected, build bridges, and move humanity forward, has a darker side that is finally coming to light, fueled by the desire to increase engagement, and thus profits, at all costs. To address the dire consequences social media can cause, Haugen and others are advocating for greater transparency and stricter oversight. And while nothing has been announced yet, senators are now discussing the possibility of strengthening privacy and child-protection regulations and applying antitrust laws.

While well-intentioned, these efforts are wholly inadequate and don’t go far enough.  Instead, Congress must put an end to the use of algorithms to influence or change human behavior – period. These algorithms, designed to increase engagement, advertising, and ultimately profits, end up fanning the flames of controversy, conspiracies, and disinformation that influence people’s emotions and actions. 

The power to change human behavior

Studies have shown how social media sites can affect the body image and moods of children and teens, leading to increased anxiety, depression, and suicidal thoughts. And we have all seen the increase of polarization around politics, vaccines, and other issues, and how it has led to confusion, anger, and, in extreme cases, violence.

Concerns about using hidden techniques to influence human behavior are nothing new, nor is the outrage that comes with them. Subliminal advertising messages, such as those flashed on movie screens telling theatergoers to “eat popcorn,” have been banned in many countries. While the jury is still out on whether those techniques are effective, the FCC and others recognized their potential for abuse.

Yet, punishing, regulating or breaking up Facebook won’t solve the problem. Facebook and other social media sites did not intentionally try to create these issues, and to turn it into some evil plot designed to destroy humanity is best left to Hollywood. Rather, they’ve been doing what all companies do: working to increase profits. Unfortunately, the technique they use to increase engagement, advertising dollars, and ultimately profits, has inadvertently led to large social, emotional and political problems. 

Unless stopped in its tracks, the practice of using AI algorithms to manipulate people will continue to grow across platforms because, well quite simply, it works.

Addressing the problem at its source

It’s not realistic to expect social media platforms to police themselves and balance shareholder profit against the negative consequences of their practices. Nor is it enough to address or regulate only the symptoms that have arisen from manipulative practices, which is where Congress seems to be setting its sights.

We must go further and ensure there is a strong regulatory body that can set strict policies to end the use of algorithms that can influence human emotion and behavior.  And we need to give this agency the teeth to enforce it, by banning behavior-modifying algorithms outright, and enacting severe criminal and civil punishments for companies that violate that ban. To ensure compliance, companies must become more transparent, and open up their data and practices to the regulatory commission and other relevant bodies through ongoing inspections and audits. 

We certainly have a way to go in regulating social media and it’s no easy task. Yet, until lawmakers address it, they will just be dealing with the many different ways that human manipulation algorithms can be manifested, and get caught up in a fruitless effort to try to stamp them out in a whack-a-mole scenario. We must establish broad-based algorithm manipulation laws before it’s too late.


Outsourcing & Nearshoring: What are they?

In the last few decades, companies have turned to outsourcing for their short-term and long-term strategic endeavors. Outsourcing, the process of moving operations to external locations, comes in several models: offshoring, farshoring, onshoring, and nearshoring, among others. Learn about the rising popularity of nearshore outsourcing in recent years.

Seeking to cut costs, many industries found outsourcing to be the right solution. While in-house development teams make sense when you have an abundance of highly skilled engineers or a vast budget, outsourcing often offers better opportunities. Is outsourcing the right decision for your business? Here are some advantages to consider.

The Value of Outsourcing

To better understand the value that outsourcing could bring to your projects, you need to understand the advantages it presents.

  • Rapid growth. As we’ve previously mentioned, companies sometimes don’t have the necessary talent at hand and the process of hiring can be a long one, or might not give the desired results. Outsourcing your workload allows you to continue your growth and expand your team. The company can then stay focused on other tasks, all while having a steady growth rate. 
  • Big savings. In-house teams and manufacturing processes can get expensive quickly. By outsourcing, you’ll be able to do more while saving costs, which will fuel the growth of your business. 
  • Flexible staff.  Every industry has a high season and a low season.  Outsourcing grants you the flexibility of having the necessary staff for your busy projects, without the commitment of hiring and retaining in-house employees. 
  • Diversified skill set. It is nearly impossible for in-house teams to be experts in every single specialized skill. Outsource providers can complement and fill the gaps of your organization with a diversified set of skills. 

Nearshoring: Best of Both Worlds

Nearshore Software Development is the outsourcing of software development to nearby countries. U.S. companies are looking for more and better ways to outsource while still staying close to home base. Nearshoring provides the best of both worlds.

Nearshoring, an outsourcing model, benefits from the same advantages as other outsourcing models, but here are the additional advantages to consider. 

  • Better communication. Working in similar time zones promotes a higher flow of communication and collaboration. Your nearshore team will be working when you are, so any high-priority issue or quick consultation can be addressed as often as needed.
  • Shorter travel time. The ability to brainstorm ideas in in-person meetings is a great advantage of an in-house team. By working with a nearshore partner, you can still have a quick meeting the same day, helping build a stronger partnership.

By partnering with a nearshore provider, you can enjoy the same appeal that made businesses seek out outsourcing partners, while also developing a stronger connection. Your nearshore team becomes an extension of your business. Nearshoring might be the right practice to adopt for your company.

Developing Multiplayer Games Using Unity with MLAPI

by Antonio, Software Developer at Wovenware

One of my favorite hobbies growing up was playing video games. The truth is that they made my youth a cheerful one, filled with excitement. So, naturally, I wondered about how they work. This is where my adventure into the world of computing started.

Video games are a great medium for telling stories. They are an interactive medium, one where you can be part of the experience. Video games revolutionized one of humanity’s greatest skills: storytelling. If you are reading this, you too probably have a few stories you would love to share interactively, or maybe, like me, you want to have an interactive experience with others.

The Unity Game Engine

Today, I will teach you how to create a small multiplayer experience using one of the most powerful game engines out there: the Unity game engine. It is a great engine for game making, as it comes with a slew of premade tools that you can mix and match at your leisure to craft your game.

A newly integrated tool is the Unity Multiplayer Networking library, or MLAPI. It allows you to create remote procedure calls with annotations, and it offers security that was lacking in Unity’s previous networking library.

Let’s Get Started

  • Create a new project and name it “MyMLAPITest”. Make sure to select the 3D template.
[Screenshot]

Creating our Simple Multiplayer Experience

First, we need to create our network manager in the sample scene where we will build our game. Right-click on the Hierarchy tab and click on “Create Empty”, as in the screenshot below:

[Screenshot]

Now we will add a component to this empty GameObject. Click on the GameObject we just created.

[Screenshot]

Next, in the Inspector tab, click on “Add Component” and type “Network Manager”. Add that component to the empty GameObject.

[Screenshot]

Next, we need to select a transport for our networking. Click on “UnetTransport” in the select-a-transport section of the Network Manager. You don’t need to know exactly what a transport is; just know that it is the layer MLAPI uses to move data over the network.

[Screenshot]

We are now ready to create the player prefab that will be instantiated when players connect to our server. If you don’t know what a prefab is, it’s a collection of game objects and components that can be stored on disk and reused across scenes. It is an abstraction that lets us manipulate complex composite game objects.

Let’s start creating the player prefab by creating a cube, as shown below. Right-click on the Scene Hierarchy tab and select “3D Object > Cube”.

[Screenshot]

Next, we are going to make the cube a networked GameObject by adding components to it, as we did for the network manager. The difference is that this time we add two components: one called NetworkTransform and the other called NetworkObject.

[Screenshot]

[Screenshot]

Now we save our prefab as a file by dragging and dropping the PlayerPrefab GameObject into the tab below, which shows the project’s files.

[Screenshot]

Now we add the prefab to the list of prefabs instantiated when a player joins the server. Click on the empty GameObject that holds the Network Manager component. Then, in the NetworkPrefabs section, click the “+” icon and drag the player prefab in from the file system. Make sure to check the “Default Player Prefab” checkbox.

[Screenshot]

Next, we need to download the HelloWorldManager script from:

Download it to the Assets folder inside our current project. Then add the script as a component to the NetworkManager.

[Screenshot]

We then add physics to our game by giving the player prefab a rigid body. Click on the prefab, then in the Inspector tab add a component named “Rigidbody”.

[Screenshot]

At this point, we can also add a material to the PlayerPrefab cube. For more details, you can visit this tutorial:

Next, we add terrain for our players to fall onto. Right-click on the Scene Hierarchy tab and click on “3D Object > Terrain”.

[Screenshot]

Now we move the terrain below the position where our players spawn. Click on the terrain object in the scene view, then in the Inspector edit the values of the Transform component to: X: -500, Y: -10, Z: -500.

[Screenshot]

Move the camera so that you can see the new players fall onto the terrain. Again click in the scene view, but this time select the object named Main Camera, then in the Inspector edit the values of the Transform component to: X: 0, Y: -10, Z: -10.

[Screenshot]

Testing the Game

Let’s compile and run our game. Go to “File > Build Settings > Player Settings”, and under Fullscreen Mode select “Windowed” and enter the desired resolution.

[Screenshot]

Build the game by clicking on “File > Build Settings” and then on “Build”. Select the folder where your game will be saved.

[Screenshot]

Once the game is built, open two instances of it and verify that every time a player connects, a new cube is created and dropped. It should look something like this:

[Screenshot]

Have Fun

In this multiplayer game example, we use an authoritative server as our network topology. This means that all clients connect to a central client (the host), which runs the game logic. Note that networked games may use other topologies; another is peer-to-peer, commonly used in real-time strategy games.

In this tutorial, we did not have to write any code because we used the built-in behavior that MLAPI comes with. But for more complex games, you can use another tool MLAPI provides: remote procedure calls (RPCs), a powerful way to create network-enabled software.

MLAPI wraps around Unity libraries, allowing you to use NetworkedObjects, the basic building blocks of all network-enabled game objects. Within a networked object, MLAPI lets you specify remote procedure calls from server to a single client, from server to all clients, or from client to server. This lets us implement any kind of network behavior we wish for our game.
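As a rough sketch of what those RPCs look like in code (the class and method names here are made up for illustration, and exact attribute and namespace names vary by MLAPI version, so treat this as a shape rather than copy-paste code):

```csharp
using MLAPI;
using MLAPI.Messaging;
using UnityEngine;

// Hypothetical networked behavior: a client asks the server to move its
// player, and the server broadcasts the result to every client.
public class PlayerMover : NetworkBehaviour
{
    // [ServerRpc] methods run on the server when invoked by a client.
    [ServerRpc]
    private void MoveServerRpc(Vector3 direction)
    {
        transform.position += direction;           // authoritative update
        SyncPositionClientRpc(transform.position); // fan out to clients
    }

    // [ClientRpc] methods run on every client when invoked by the server.
    [ClientRpc]
    private void SyncPositionClientRpc(Vector3 newPosition)
    {
        transform.position = newPosition;
    }
}
```

Because the server owns the authoritative state, clients cannot simply teleport themselves; any movement must pass through the server-side RPC, which is exactly the safeguard an authoritative-server topology provides.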

Apart from enabling remote procedure calls, MLAPI comes with special access modifiers that let us put locks and safeguards in place against hacking, or against a player modifying an object they should not be able to.

If you want to see more examples of games built with Unity and MLAPI, you can view a game I made at: I made it for the Genre Mash mini biweekly game jam. The source code for it can be found here:

Games are the reason I got into computers, and I can truly say I am happy with my decision. Computer science is a challenging, ever-changing, and constantly growing field. Video games are a very good way to get into computer science, as they let you see your work in action. The knowledge of networking I gained through my participation in game jams helps me better understand some of the technology stacks out there.


Offshoring vs Nearshoring: Choosing your Outsourcing Model

We’ve talked about nearshoring before, but how different is it from offshoring? Between both, which would better suit your IT projects? Learn more before choosing the right outsourcing model for your development services.

Outsourcing has been a great growth opportunity for businesses for a few decades now. Seeking to cut costs, large manufacturing companies have been moving operations offshore since the early 1970s. Later, with the arrival of the internet and tech companies, industries searched overseas for partners to outsource projects to and expand their teams.

Offshore Software Development is the process of outsourcing work to distant countries, such as India and Japan. Nearshore Software Development is the process of outsourcing work to nearby countries, such as Mexico and Puerto Rico.

When COVID-19 hit in 2020, industries faced unprecedented obstacles and had to find ways to work around them. Everything was hurt, from supply chains to companies’ workflows. The best-equipped companies continued on their path toward digital transformation, whereas those not equipped struggled to keep up. But prepared and unprepared industries alike were able to turn to outsourcing for a range of needs; some, for example, searched for the right provider to manufacture their products.

Nowadays, businesses everywhere continue to evolve by partnering with development teams located outside the U.S., but they are increasingly looking for better ways to outsource. Enter: nearshoring.

There are additional benefits to outsourcing with nearshore providers versus outsourcing with offshoring providers.

Potential cost savings are one of the main reasons U.S. companies search for the right outsourcing partner. Nearshoring provides prices as attractive as offshoring’s, with the added advantage of a similar time zone, which makes for better collaboration.

While both offshore and nearshore teams are used to remote work, the nearshore teams’ ability to work during similar hours as your team facilitates real-time communication across groups. This means that any issues that arise may be addressed immediately.

Additionally, nearshore developers can help address the labor shortage by expanding your team with skilled teammates who are ready to go and expert in the methodologies and workflows you’re already familiar with. Unlike with offshore teams, cultural barriers tend to be few when partnering with a nearshore team, which allows for easier collaboration and communication. Nearshore developers are proficient in English and up to date on U.S. current events and pop culture.

Unsure which outsourcing model is the right fit? Consider your company’s goals and determine which model would better support its journey toward digital transformation. Partnering with the right offshore or nearshore development team can reduce, and even eliminate, many challenges that have arisen in the wake of the COVID-19 crisis.
