An Overview of Large-Scale Monitoring of Infrastructure Using Deep Learning for Satellite Imagery

September 11, 2020

In the past half-decade in Puerto Rico, we have had our fair share of catastrophic events such as hurricanes and earthquakes. These events have left significant parts of the population without power for extended periods of time, often ranging from days to weeks, or even months.

As society becomes ever more connected and reliant on power and utilities, these catastrophic phenomena become increasingly disruptive to essential services for millions of people. This is one of the driving reasons behind the growing adoption of self-sustainable technologies such as solar power generation.

As we continue to develop as a country, we’re learning from these experiences and can appreciate the need to consider every option that may allow us to receive vital services in the event of another catastrophe. One of those options is the use of solar farms.

By implementing deep learning for satellite imagery on a location-independent, distributed platform unaffected by regional events, we can design a solution that quickly analyzes infrastructure over large regions.

This approach enables us to put in place a disaster-resilient pipeline that could provide us with insights into the status of critical infrastructure, even after significant catastrophic events. We believe that such a system can potentially help mitigate the impacts of prolonged service interruption by providing resilient solutions that reside outside the scope of these events.

As an example, in this blog we’ll go over the high-level process of building a solution for solar power installation assessment, using deep learning models on high-resolution satellite imagery. Such a system could be used during and after catastrophic or disruptive events to assess whether the infrastructure suffered any large-scale damage, such as the loss of some of its solar panels.

Why Use Very High-Resolution Satellite Imagery?

With continuous advancements in satellite imagery technology, the amount and complexity of feasible analysis also continues to increase. At the time of this writing, Maxar’s WorldView-3 and WorldView-4 satellites provide 30 cm per pixel images, the highest resolution commercially available for satellite imagery. There are many considerations in satellite image data acquisition, but resolution is of extreme importance in the context of small objects such as solar panels. As you can imagine, there are clear benefits to working with 30 cm imagery, including the ability to create more accurate models and an easier data annotation process, given that the objects are more clearly delineated. Therefore, it is a good idea to use the highest resolution available.

50 cm/px vs 30 cm/px Satellite Imagery Resolutions

Note: Both images were originally 512 × 512 pixels, but minor cropping was performed to illustrate the effect of resolution.


50 cm/px resolution


30 cm/px resolution

The satellite imagery with 50 cm/px resolution:

  • Although at first glance the image appears larger, each pixel averages the value of a larger ground area and therefore contains less information about specific details, yielding a lower effective resolution.
  • If you zoomed in to observe the solar panels, they would quickly become pixelated.

The satellite imagery with 30 cm/px resolution:

  • Although at first glance the image appears smaller, each pixel averages the value of a smaller ground area and therefore contains finer detail, yielding a higher effective resolution.
  • Even at the current zoom level, you can observe what appear to be discernible features on the solar panels.

For reference, the figures used here put the footprint of a solar installation at roughly 936 cm across for residential installations and 1,528.8 cm across for commercial installations. This translates to approximately 31 pixels (residential) to 51 pixels (commercial) across at 30 cm resolution, and approximately 19 pixels (residential) to 31 pixels (commercial) across at 50 cm resolution. As you can see, the amount and quality of information available for models to learn from depends on the resolution of the data and on the effects of other processes inherent to satellite imagery, such as orthorectification.
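The back-of-the-envelope conversion above can be sketched in a few lines of Python (the function name is illustrative, and the footprint values simply mirror the figures quoted in the text):

```python
# Back-of-the-envelope calculation: how many pixels span an object of a
# given ground footprint at a given satellite image resolution.

def pixels_across(footprint_cm: float, resolution_cm_per_px: float) -> int:
    """Approximate number of pixels spanning an object's footprint."""
    return round(footprint_cm / resolution_cm_per_px)

# Footprint figures quoted in the text, evaluated at both resolutions.
for label, footprint in [("residential", 936.0), ("commercial", 1528.8)]:
    for res in (30.0, 50.0):
        print(f"{label}: ~{pixels_across(footprint, res)} px at {res:.0f} cm/px")
```

Running this reproduces the pixel counts above: roughly 31 and 51 pixels at 30 cm/px, versus 19 and 31 pixels at 50 cm/px.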

Image of Panel Resolutions


50 cm/px resolution


30 cm/px resolution


  • 50 cm/px resolution: When zoomed in, individual panels become blurred.
  • 30 cm/px resolution: Even when zoomed in, you can still spot the separation between panels.

Why Utilize Deep Learning?

The role of object detection is to extract from images the information of what objects exist within them and where they are located. Although many advanced machine learning and computer vision strategies exist for object detection, few have proven as efficient as deep learning. Deep learning minimizes both the time and the possibility of error when analyzing high-dimensional satellite imagery to detect objects of interest, in this case solar panels. Models based on the Single Shot Detector (SSD) architecture can accurately detect, classify, locate, and approximately bound the areas of an image containing solar panels, with the added benefit of being relatively fast. We have chosen this model for the task, but many others could be used in its place. In the rest of this article, whenever we refer to the “model” or “detector,” we mean this specific implementation of an SSD.

Single Shot MultiBox Detector

Images describing the SSD architecture and the training process this model uses to detect objects. Images taken from the Single Shot MultiBox Detector paper.
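To make the architecture diagram a bit more concrete, here is a minimal, hypothetical sketch of one of SSD’s core ideas: tiling the image with a fixed grid of default (“anchor”) boxes at several aspect ratios, which the network then classifies and refines. Real implementations generate boxes over several feature maps and scales; this sketch covers a single grid.

```python
import itertools

# Minimal sketch of SSD-style default ("anchor") box generation for one
# feature map. Each grid cell gets one box per aspect ratio, all sharing
# the same scale; boxes are (cx, cy, w, h) in normalized [0, 1] coordinates.

def default_boxes(grid_size: int, scale: float, aspect_ratios=(1.0, 2.0, 0.5)):
    boxes = []
    for i, j in itertools.product(range(grid_size), repeat=2):
        cx = (j + 0.5) / grid_size  # box center at the cell center
        cy = (i + 0.5) / grid_size
        for ar in aspect_ratios:
            boxes.append((cx, cy, scale * ar ** 0.5, scale / ar ** 0.5))
    return boxes

boxes = default_boxes(grid_size=4, scale=0.2)
print(len(boxes))  # 4 x 4 cells x 3 aspect ratios = 48 default boxes
```

During training, each ground-truth solar panel annotation is matched to the default boxes that overlap it best, and the network learns to score and adjust those boxes.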

So, Let’s Create This System

First, we would select our regions of interest and acquire the data. In doing so, we should take care to select data containing features similar to the ones we are interested in detecting, with sufficient quality and resolution for the model to learn from and for us to properly annotate the required features. Then, using specialized software such as QGIS, we would carefully and cleanly annotate all of the solar panels we can find.

The Annotation Process

QGIS: An open source software that allows for the annotation of georeferenced images.


Annotations: All solar panels in image have been annotated by drawing a polygon over their area.

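Before training, georeferenced annotations like these must be mapped into pixel coordinates. Below is a minimal sketch using a GDAL-style affine geotransform; the coordinate values are hypothetical, and a north-up image with no rotation is assumed.

```python
# Convert a georeferenced point to pixel coordinates using a GDAL-style
# geotransform: (origin_x, px_width, 0, origin_y, 0, -px_height).
# Assumes a north-up image with no rotation terms.

def geo_to_pixel(x: float, y: float, geotransform) -> tuple:
    origin_x, px_w, _, origin_y, _, px_h = geotransform
    col = round((x - origin_x) / px_w)
    row = round((y - origin_y) / px_h)  # px_h is negative for north-up images
    return row, col

# Hypothetical 30 cm/px tile whose top-left corner sits at (200000, 2100000)
# in a projected CRS measured in meters (e.g., UTM).
gt = (200000.0, 0.3, 0.0, 2100000.0, 0.0, -0.3)
print(geo_to_pixel(200015.0, 2099991.0, gt))  # (30, 50)
```

Applying the same conversion to each vertex of an annotated polygon yields the pixel-space bounding boxes the detector trains on.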

Second, once the annotation process is finished and we have a satisfactory number of annotations, comes the arduous task of creating, debugging, training, assessing, and tweaking a model. For the purposes of this blog, and as mentioned above, we will use a Single Shot Detector because it has characteristics that are desirable for the task at hand. Satellite imagery usually comes in very large TIFF files that need to be cut into smaller, more manageable pieces for performance reasons, so some preprocessing is needed. With all of this done, we are finally ready to train the SSD model. The training time will vary depending on the amount of data and the training configuration.
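The tiling step can be sketched as follows. This minimal version uses a NumPy array as a stand-in for the scene; real pipelines typically read windows directly from the TIFF with rasterio or GDAL rather than loading the whole file into memory.

```python
import numpy as np

# Sketch of cutting a large raster into fixed-size tiles before training.
# Yields (row_offset, col_offset, tile_array) for each full tile; edge
# remainders smaller than the tile size are skipped in this sketch.

def tile_image(image: np.ndarray, tile: int = 512):
    h, w = image.shape[:2]
    for r in range(0, h - tile + 1, tile):
        for c in range(0, w - tile + 1, tile):
            yield r, c, image[r:r + tile, c:c + tile]

scene = np.zeros((1024, 1536, 3), dtype=np.uint8)  # stand-in for a satellite scene
tiles = list(tile_image(scene))
print(len(tiles))  # 2 rows x 3 cols = 6 tiles
```

Keeping the (row, col) offsets alongside each tile lets you map detections back into the coordinate frame of the original scene after inference.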

Finally, once training is done, we need to assess the results and make any necessary tweaks. Assessing the quality of the model is a problem-specific and somewhat subjective process. In general, by looking at the output of the validation phase and comparing the resulting metrics to those of similar projects, you can decide when your model is ready for production.
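A common building block for that assessment is intersection over union (IoU), which scores how well a predicted box overlaps a ground-truth box; detection metrics such as precision and recall are typically computed by matching predictions to ground truth at an IoU threshold (0.5 is conventional). A minimal sketch:

```python
# Intersection over union for axis-aligned boxes given as
# (x_min, y_min, x_max, y_max) in pixel coordinates.

def iou(a, b) -> float:
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))  # intersection width
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))  # intersection height
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

pred = (10, 10, 50, 50)    # hypothetical predicted panel box
truth = (12, 12, 52, 52)   # hypothetical annotated panel box
print(round(iou(pred, truth), 3))  # ~0.822, well above the 0.5 threshold
```

A prediction that clears the threshold for some ground-truth box counts as a true positive; unmatched predictions and unmatched annotations drive precision and recall down, respectively.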


Inferences made by the model

The inferences made by the model (the green squares on top of houses) correctly predict all solar panels on rooftops in a region adjacent to the one the model was trained on.

Data, Model, Now… What?

The last step in developing this monitoring system is to incorporate the model into a larger solution that feeds it data, runs the predictions, and delivers the results in a smooth, streamlined fashion. At this point we would develop a group of additional modules to handle each of the pre- and post-processing tasks, and then deploy our solution.
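At a high level, the deployed solution simply wires these modules together. The sketch below uses hypothetical stand-ins for each stage, just to show the shape of the pipeline:

```python
# Minimal sketch of the surrounding pipeline: preprocessing feeds tiles to
# the detector, and postprocessing aggregates detections into a report.
# All three stage implementations are hypothetical stand-ins.

def preprocess(scene_id: str):
    """Stand-in for acquisition + tiling: returns tile identifiers."""
    return [f"{scene_id}-tile-{i}" for i in range(3)]

def detect(tile) -> int:
    """Stand-in for the SSD detector: returns panels found in a tile."""
    return 2

def postprocess(counts):
    """Stand-in for aggregation: summarizes detections into a report."""
    return {"tiles": len(counts), "panels": sum(counts)}

def run_pipeline(scene_id: str):
    tiles = preprocess(scene_id)
    return postprocess([detect(t) for t in tiles])

print(run_pipeline("scene-001"))  # {'tiles': 3, 'panels': 6}
```

In a real deployment each stage would be a separate module or service, so that new imagery can flow through acquisition, inference, and reporting without manual intervention.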

Congratulations! You now know the high-level process of creating a deep learning model for satellite imagery. This solution can be used at any point by acquiring the desired satellite imagery at the appropriate resolution and running it through the data acquisition, inference, and analysis pipeline. It should provide essential and timely insights into the target infrastructure, in this case solar panels, when they are most needed, such as during or after a catastrophic event.

We at Wovenware currently have solutions in place and the expertise to leverage deep learning for satellite imagery in this and other scenarios. While we described some of the challenges of working with satellite imagery, every project comes with its own, and we continue to work hard to solve them as they arise. Feel free to contact us with any questions about this project or any of your own image detection or advanced software development needs. You can reach us here or at 877-249-0090.
