Semantic Segmentation on Unbalanced Remote Sensing Classes for Active Fire (Papers Track)

Xikun Hu (KTH Royal Institute of Technology); Alberto Costa Nogueira Junior (IBM Research Brazil); Tian Jin (College of Electronic Science, National University of Defense Technology)

Disaster Management and Relief · Computer Vision & Remote Sensing


Wildfires attract considerable research interest due to their increasing frequency under global climate change. Future wildfire detection sensors would be equipped with an on-orbit processing module that filters out uninformative raw images before data transmission. To detect heat anomalies efficiently in a single large scene, we must handle the severe class imbalance between the few active fire pixels and the large, complex background. In this study, we address this problem by enhancing the target feature representation in three ways. First, we preprocess the training images by constraining the sampling ranges and removing background-only patches. Second, we use the object-contextual representation (OCR) module, built on a self-attention unit, to strengthen the representation of active fire pixels; an HRNet backbone provides the multi-scale pixel representations fed into the OCR module. Finally, a combined loss of weighted cross-entropy and Lovász hinge loss further improves segmentation accuracy by directly optimizing the IoU of the foreground class. Performance is evaluated on the aerial FLAME dataset, in which the ratio of labeled active fire pixels to background pixels is 5.6%. The proposed framework improves the mIoU from 83.10% (baseline U-Net) to 90.81%. Future research will extend the technique to active fire detection in satellite images.
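To make the loss design concrete, here is a minimal NumPy sketch of a combined weighted cross-entropy and Lovász hinge loss for binary (fire vs. background) segmentation. This is an illustrative reconstruction, not the authors' code: the weight `pos_weight=10.0` and the mixing coefficient `alpha=0.5` are assumed values, and real training would use an autodiff framework rather than NumPy.

```python
import numpy as np

def lovasz_grad(gt_sorted):
    # Gradient of the Lovász extension of the Jaccard (IoU) loss,
    # given ground-truth labels sorted by decreasing hinge error.
    gts = gt_sorted.sum()
    intersection = gts - np.cumsum(gt_sorted)
    union = gts + np.cumsum(1.0 - gt_sorted)
    jaccard = 1.0 - intersection / union
    jaccard[1:] = jaccard[1:] - jaccard[:-1]
    return jaccard

def lovasz_hinge(logits, labels):
    # Lovász hinge: a convex surrogate that directly optimizes
    # the IoU of the foreground (active fire) class.
    signs = 2.0 * labels - 1.0          # map {0,1} -> {-1,+1}
    errors = 1.0 - logits * signs       # hinge errors per pixel
    perm = np.argsort(-errors)          # sort errors in decreasing order
    grad = lovasz_grad(labels[perm])
    return np.dot(np.maximum(errors[perm], 0.0), grad)

def weighted_bce(logits, labels, pos_weight=10.0):
    # Numerically stable sigmoid cross-entropy; pos_weight up-weights
    # the rare fire pixels to counter the class imbalance.
    log_sig = -np.logaddexp(0.0, -logits)       # log sigmoid(x)
    log_one_minus = -np.logaddexp(0.0, logits)  # log(1 - sigmoid(x))
    per_pixel = -(pos_weight * labels * log_sig
                  + (1.0 - labels) * log_one_minus)
    return per_pixel.mean()

def combined_loss(logits, labels, alpha=0.5, pos_weight=10.0):
    # Weighted sum of the two terms; alpha is an assumed hyperparameter.
    logits, labels = logits.ravel(), labels.ravel()
    return (alpha * weighted_bce(logits, labels, pos_weight)
            + (1.0 - alpha) * lovasz_hinge(logits, labels))
```

A correct prediction (large positive logits on fire pixels, large negative elsewhere) drives both terms toward zero, while the Lovász term penalizes mistakes in proportion to their effect on the foreground IoU rather than per-pixel accuracy.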
