
GPU Computing: The Key to Unleashing the Mysteries of All That Data


This post originally appeared as "GPU Computing: The Key to Unleashing the Mysteries of All That Data" on our COO Carlos Meléndez's Under Development blog at InfoWorld, and is reprinted with permission from IDG.

A few years ago, Marc Andreessen accurately predicted that "software is eating the world." An update to that declaration today could very well be that data is eating the world.

According to IDC, by 2025 the world will create 180 zettabytes of data per year (up from 4.4 zettabytes in 2013). Until recently, most of that data was structured, meaning it was organized in rows and columns and easily entered, stored and sorted. Today, much of it is unstructured, coming from different websites, devices and databases, and taking the form of video, images, graphics and more.

These reams of structured and unstructured data hold the key to helping organizations better understand their customers, follow patterns of behavior to predict future actions, uncover breakthroughs to cure diseases and make communities safer, among a myriad of other things.

What to Do With All That Data

To harness this goldmine of data, artificial intelligence and machine learning have emerged, using algorithms trained on data to find patterns. The problem is that traditional CPUs (central processing units) just can't adequately handle the bulk processing this complex boatload of data requires, and this is where graphics processing unit (GPU) computing comes in.

GPU-based servers are fast becoming as necessary for processing artificial intelligence data (especially for deep learning algorithms) as CPUs have been for virtually everything else. While CPUs can eventually chew through tons of unstructured data in a matter of days or hours, GPUs can do so in a matter of minutes.

If CPUs Are Race Cars, Then GPUs Are Cargo Trucks

Yet while GPU computing is the hot new technology, it's really nothing new. GPUs have been used for many years in gaming applications, as well as for video playback and analysis. They were designed to operate on big arrays of data, processing many elements in parallel rather than one at a time.

At Wovenware, we like to describe the difference between CPUs and GPUs as the difference between a race car and a cargo truck moving lots of supplies from one place to another. A race car is fast and compact and can get where it needs to go quickly, yet it may need to make many trips to accomplish the task. A cargo truck, on the other hand, may not be as fast, but it may need only one trip to deliver the goods.

The Forerunner of GPU-Enabled Deep Learning

The current leader of the "cargo truck" fleet is NVIDIA, which started out as a video card manufacturer providing superior 3-D rendering for gamers. Yet it saw how its chips could help process big data, so it built a computing platform that lets data scientists use GPUs to process that data more efficiently and quickly than ever before.

NVIDIA has enabled this level of use for deep learning algorithms through its software platform, CUDA, which allows the hardware to be used for something other than what it was originally intended for: general-purpose, massively parallel computation.
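To make that programming model concrete, here is a minimal, illustrative CUDA sketch of our own (the function name and values are hypothetical, not NVIDIA sample code) that adds two large arrays, with one GPU thread handling each element; error checking is omitted for brevity.

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Kernel: each GPU thread adds one pair of elements, so the whole
// array is processed in parallel (the cargo truck's single trip).
__global__ void vectorAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        c[i] = a[i] + b[i];
    }
}

int main() {
    const int n = 1 << 20;                // one million elements
    const size_t bytes = n * sizeof(float);

    // Allocate and fill the host (CPU) arrays.
    float *ha = (float *)malloc(bytes);
    float *hb = (float *)malloc(bytes);
    float *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Allocate device (GPU) arrays and copy the inputs over.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes);
    cudaMalloc(&db, bytes);
    cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    const int threadsPerBlock = 256;
    const int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    vectorAdd<<<blocks, threadsPerBlock>>>(da, db, dc, n);

    // Copy the result back and spot-check one value.
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("hc[0] = %f\n", hc[0]);        // expect 3.000000

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}

Compiled with NVIDIA's nvcc compiler (for example, nvcc vector_add.cu), this same pattern of copying data to the device, launching thousands of threads and copying results back scales from this toy example to the matrix math at the heart of deep learning.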

As the market continues to heat up, however, new players are entering the ring with competitive advantages and new features and capabilities, and big companies such as Intel and Google continue to focus on GPU computing. This competition will only serve to inspire continued innovation and lower costs.

GPU Investment Considerations

Adopting this technology is not something to be taken lightly, however. Consider these requirements:

Cost. One GPU card alone can cost more than $10,000, and most deep learning projects require a minimum of four GPU cards.

Specialized Hardware Architecture. Companies looking to deploy GPU servers should understand that the cost goes beyond the GPUs themselves: a special hardware architecture is needed, along with plenty of CPUs and RAM. GPU servers are almost always custom-built to meet the developer's specific machine learning needs.

Specialized Facilities. GPU servers require special housing, with specific power, airflow and temperature demands.

Another thing to note: operating GPU servers requires specialized expertise, ongoing maintenance and a significant financial investment. Most companies, especially mid-market ones, just don't have the ability or budget to handle it in-house, and they are turning to the few artificial intelligence and software engineering service providers who can supply the GPU capabilities needed to build and manage machine learning projects.

GPU computing could eventually become the standard for all software development projects as machine learning takes over. Understanding its origins, capabilities and possibilities gives organizations of all shapes and sizes a considerable leg up in the fast-paced world of business.

