NVIDIA GTC Conference Sheds Light on Power of GPUs

December 04, 2020

In October I had the privilege of speaking at the NVIDIA GPU Technology Conference (GTC), an annual event for developers, researchers, engineers, and innovators in the NVIDIA community. The conference helps them enhance their skills by sharing ideas, best practices, and inspiration for new ways to approach AI projects. Noted speakers at this year’s event hailed from leading academic institutions, such as MIT and Johns Hopkins; major companies, such as Facebook and Amazon; and, of course, senior-level experts at NVIDIA.

As a member of the NVIDIA Inception Program, Wovenware has been fortunate to have access to training from NVIDIA’s Deep Learning Institute, as well as the ability to participate in its developer forums, along with other key benefits. So, we were honored when Wovenware was invited to submit a topic that would be considered for presentation at GTC.

I submitted an abstract on how to port Keras Lambda layers to TensorFlow.js, and was thrilled when it was accepted. The challenge was boiling down a complex subject into a five-minute discussion.

I shared a specific project we worked on: the development of an edge-deployed deep learning solution for satellite imagery. Our deep learning model was trained using the xView dataset.

I shared how custom Lambda layers let users wrap an arbitrary Python function as a Keras Layer object when building Keras models. These layers are typically used in constructing sequential and functional API models.
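To illustrate the idea, here is a minimal sketch of wrapping a function as a Lambda layer; the function and layer names are hypothetical examples, not taken from our satellite project:

```python
import numpy as np
from tensorflow import keras

# Hypothetical preprocessing function; Lambda wraps it so Keras can
# treat it like any other Layer object in the model.
def scale_pixels(x):
    return x / 255.0

model = keras.Sequential([
    keras.layers.Input(shape=(4,)),
    keras.layers.Lambda(scale_pixels, name="scale_pixels"),
    keras.layers.Dense(2, name="head"),
])

out = model.predict(np.ones((1, 4), dtype="float32"))
print(out.shape)  # (1, 2)
```

The same Lambda layer could just as easily be dropped into a functional API model, since it behaves like any stock layer once wrapped.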

The challenge, however, is that although Keras Lambda layers give users the flexibility to go beyond what stock layers can do, the Python-based implementation of such layers cannot be ported to run on TensorFlow.js at runtime without some hand-tailored intervention. I shared how we solved this complex problem, porting trained Keras models with custom Lambda layers to TensorFlow.js for object detection.
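To make the problem concrete: a Lambda layer serializes only a reference to its Python function, so the function body never reaches the exported model, and TensorFlow.js has nothing to execute. The sketch below shows one common workaround, swapping the Lambda for an equivalent stock layer that serializes by configuration alone; this is an illustration of the general technique, not necessarily the exact intervention we used:

```python
import numpy as np
from tensorflow import keras

# A Lambda layer carries arbitrary Python code that cannot be
# reconstructed by the TensorFlow.js runtime.
lambda_model = keras.Sequential([
    keras.layers.Input(shape=(4,)),
    keras.layers.Lambda(lambda x: x / 255.0),
])

# One workaround: replace the Lambda with an equivalent stock layer
# (Rescaling here), whose behavior is fully described by its config.
portable_model = keras.Sequential([
    keras.layers.Input(shape=(4,)),
    keras.layers.Rescaling(1.0 / 255.0),
])

# Both models compute the same transformation on the same input.
x = np.random.rand(1, 4).astype("float32")
same = np.allclose(lambda_model.predict(x), portable_model.predict(x))
print(same)  # True
```

When no stock layer matches the wrapped function, the alternative is hand-tailored: implementing a matching custom layer on the JavaScript side so the converted model can find it at runtime.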

Our goal was to convert deep learning models with a publicly available tool to run in a browser. By running in a browser, we could avoid the use of an external resource, such as a remote server that would need to be hosted and paid for. Pushed out to the browser, the models would live on a local computer and execute on that device at no external cost. In this way, we would bring our models to the edge, as close to the end-device hardware as possible.

The NVIDIA GTC inspired me with new ways of leveraging the power of NVIDIA’s GPUs and advanced AI tools and technologies to create new and innovative solutions to support the challenges of business, government and the world at large. I hope my presentation inspired others as well to find new ways to solve complex AI challenges.