The increasing amount of available satellite imagery has led to advances in aerospace applications, as it provides a wealth of information that needs to be analyzed. This, in turn, has driven the growth of deep learning, an effective AI tool for object detection and broad area search in satellite images.
Wovenware’s data science team often works with deep learning models that have applications in broad area searches on satellite imagery. In this blog, I will discuss how saliency maps can be used to visualize which regions or objects our models consider important features while we train our object detection models. To illustrate this, I will document a saliency map analysis based on our recent work on solar panel detection.
Since the insights obtained from deep learning models can have drastic impacts on decision making, public policy, and the lives of the general population, some deep learning applications require accountability and transparency in their decision-making process. This means that we need to be able to explain the inclusion of features in our data and justify our models’ predictions. Although the “black box” problem of deep learning remains unresolved, there are ways to visually demonstrate which features drive the decisions made by our object detection models. Saliency maps, usually represented as heat maps, help us identify which areas of an image contribute most to the object detector’s final decision, and allow us to assess whether those areas are directly related to the object of interest. Saliency can also help us visualize biases the model acquires from the dataset, whether they originate in the annotation process or any other source.
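To make the idea concrete, here is a minimal NumPy sketch (not our production pipeline) of the simplest kind of saliency: for a linear scoring model where the score is the dot product of weights and pixels, each pixel's saliency is the magnitude of its contribution, min-max normalized into a [0, 1] heat map. The function name and toy data are hypothetical, for illustration only.

```python
import numpy as np

def linear_saliency(weights, image):
    """Per-pixel saliency |w * x| for a linear score w.x, normalized to [0, 1]."""
    contrib = np.abs(weights * image)          # magnitude of each pixel's contribution
    lo, hi = contrib.min(), contrib.max()
    if hi == lo:                               # flat map: nothing stands out
        return np.zeros_like(contrib)
    return (contrib - lo) / (hi - lo)          # min-max normalize into a heat map

# Toy 4x4 "image" where the model's weights focus on the top-left patch
image = np.ones((4, 4))
weights = np.zeros((4, 4))
weights[:2, :2] = 1.0                          # only this region drives the score
heatmap = linear_saliency(weights, image)
print(heatmap[:2, :2].mean(), heatmap[2:, 2:].mean())  # salient vs. non-salient regions
```

For a deep network, the same picture is produced with gradients of the output score with respect to the input pixels, but the interpretation is identical: bright regions are the ones the model's decision depends on.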
Solar Panel Detection Project
To observe what insights can be obtained from our object detection models for satellite imagery, we generated saliency maps from their feature maps. The resulting visuals serve as proxy explanations of our models’ decision-making process. We will explain how these visualizations can help prevent bias in our data and improve the data collection process. The objective is to demonstrate how saliency analysis can enhance our object detection models for satellite imagery, in the context of solar panel detection.
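One common way to derive a saliency map from a convolutional layer's feature maps (and a reasonable approximation of the approach described above, though the exact method is not specified here) is to average the activations across channels, keep the positive evidence, normalize, and upsample back to image resolution. A NumPy sketch under those assumptions, with hypothetical names and toy data:

```python
import numpy as np

def feature_map_saliency(activations, out_size):
    """Collapse a (channels, h, w) activation volume into an out_size x out_size
    saliency heat map: channel-wise mean -> ReLU -> min-max normalize -> upsample."""
    sal = activations.mean(axis=0)             # average activation per spatial cell
    sal = np.maximum(sal, 0.0)                 # keep only positive evidence
    lo, hi = sal.min(), sal.max()
    sal = (sal - lo) / (hi - lo) if hi > lo else np.zeros_like(sal)
    scale = out_size // sal.shape[0]           # nearest-neighbor upsample to image size
    return np.kron(sal, np.ones((scale, scale)))

# Toy volume: 8 channels on a 4x4 grid, strongly activated in one cell
acts = np.zeros((8, 4, 4))
acts[:, 1, 2] = 5.0                            # the cell covering a "solar panel"
heat = feature_map_saliency(acts, 16)          # 16x16 heat map over the input image
print(heat.shape, heat.max())
```

The upsampled heat map can then be overlaid on the input image, which is how the figures discussed below were read: bright regions mark what the detector attended to.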
Like any other machine learning project, the first step consisted of a data collection and curation stage that included a thorough annotation process for solar panels. In the second stage, a series of convolutional neural networks were trained on the resulting datasets. Saliency maps were generated as we trained our baseline model, and we compared the resulting visuals to the actual detections made by the model. In Figure 1-A, we can see that our model correctly identified solar panels, marked with red bounding boxes. To visually understand what drove our model to correctly detect the solar panels, we generated a saliency map for the input image, shown in Figure 1-B.
From the images in Figure 1 we can see that the model correctly recognizes the area containing the solar panels as important. However, since a rectangular outline is visible around the area that contains the panels, the model also appears to recognize the house’s roof. This suggests that rooftops contribute to its final decision on whether a solar panel has been detected. Based on this information, we can expect our model to struggle in areas where solar panels are not installed on rooftops. Therefore, we tested our model to see if it would find panels on a solar farm (Figure 2-A), but it was unable to find any in the area. This suggests that our model underperforms when detecting solar panels in solar farms, as it was only trained on panels located on rooftops. Let’s see what insights can be obtained from the saliency map generated for these predictions.
We can see from Figure 2-B that our model does not know which areas of the solar farm are likely to contain panels. To eliminate the rooftop bias and improve our model’s ability to detect solar panels on solar farms, we went back to our data collection and curation stage and collected examples of panels located in solar farms. We then trained and tested our new model against the same images.
The new model seems to be an improvement, as it identified, in Figure 3-A, an additional panel that went undetected by our previous model in Figure 1-A. The detections alone, however, don’t tell us whether it still relies on the roof. The saliency map shown in Figure 3-B no longer shows the roof outline seen with our previous model in Figure 1-B, further supporting the notion that we have reduced the bias that favored panels on rooftops. We can now expect our model to find solar panels on solar farms.
We can see in Figure 4-A that, even though our new model did not find all the panels in the solar farm, it did find most of them. In this sense, it greatly improved when compared to the previous one, which couldn’t find a single panel on the farm. The saliency map in Figure 4-B supports the premise that our new model is capable of finding solar panels of various types and in different regions.
In the end, saliency maps can indeed provide important insights about our models and their decision-making process. During this analysis, we first noticed that, in order to improve our models, we needed to acquire more solar panel data, specifically panels located in solar farms. The visual explanations generated by the saliency maps guided the data collection process, saving us time and minimizing costs, since acquiring satellite image data can be a time-consuming and expensive endeavor.