Continual VQA for Disaster Response Systems (Papers Track)

Aditya Kane (Pune Institute of Computer Technology); V Manushree (Manipal Institute Of Technology); Sahil S Khose (Georgia Institute of Technology)

Paper PDF Slides PDF Recorded Talk NeurIPS 2022 Poster Topia Link Cite


Visual Question Answering (VQA) is a multi-modal task that involves answering natural-language questions about an input image, which requires semantically understanding the image's contents. Applying VQA to disaster management is an important line of research because of the range of assessment questions such a system can answer. The main challenge, however, is the delay caused by label generation when assessing the affected areas. To tackle this, we deploy a pre-trained CLIP model, which is trained on image-text pairs. However, we empirically observe that the model has poor zero-shot performance. We therefore instead use the pre-trained text and image embeddings from this model for supervised training and surpass previous state-of-the-art results on the FloodNet dataset. We extend this to a continual-learning setting, which is a more realistic scenario, and tackle the problem of catastrophic forgetting using various experience replay methods.
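The experience replay idea mentioned above can be sketched with a small buffer that retains a bounded sample of past-task examples to mix into current-task batches. The sketch below is illustrative only and uses reservoir sampling as one concrete strategy; the `ReplayBuffer` class, its capacity, and the batch-mixing step are assumptions for the example, not the paper's exact method.

```python
import random

class ReplayBuffer:
    """Fixed-size experience replay buffer using reservoir sampling.

    Keeps a uniform random subset of all examples seen so far, so old-task
    examples can be replayed during later tasks to mitigate forgetting.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []
        self.seen = 0  # total number of examples offered to the buffer

    def add(self, example):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            # Reservoir sampling: each seen example survives with
            # probability capacity / seen.
            idx = random.randrange(self.seen)
            if idx < self.capacity:
                self.data[idx] = example

    def sample(self, k):
        """Draw up to k stored examples to mix into the current batch."""
        return random.sample(self.data, min(k, len(self.data)))


# Usage: interleave replayed examples with the current task's batch.
buffer = ReplayBuffer(capacity=100)
for example in range(1000):   # stand-in for (embedding, answer) pairs
    buffer.add(example)

current_batch = list(range(1000, 1032))     # hypothetical new-task batch
training_batch = current_batch + buffer.sample(32)
```

In the supervised setup described in the abstract, each stored `example` would be a frozen CLIP (image, question) embedding pair with its answer label, so replay costs only embedding storage rather than raw images.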
