Agile Modeling for Bioacoustic Monitoring (Tutorials Track)

Tom Denton (Google); Jenny Hamer (Google Research); Rob Laber (Google)

NeurIPS 2023
Keywords: Ecosystems & Biodiversity; Active Learning

Abstract

Bird, insect, and other wild animal populations are rapidly declining, highlighting the need for better monitoring, understanding, and protection of Earth’s remaining wild places. However, direct monitoring of biodiversity is difficult. Passive Acoustic Monitoring (PAM) enables detection of the vocalizing species in an ecosystem, many of which can be difficult or impossible to detect by satellite or camera trap. Large-scale PAM deployments using low-cost devices allow measuring trends over time and responses to environmental change, and targeted deployments can discover and monitor endangered or invasive species. Machine learning methods are needed to analyze the thousands or even millions of hours of audio produced by large-scale deployments. However, the number of potential signals to target for bioacoustic measurement is enormous, and many of the most interesting lack training data: many rare species are difficult to observe, and detecting specific call types and juvenile calls can give further insight into behavior and population health, yet almost no structured datasets exist for these use cases. No single classifier can address all of these needs, so practitioners regularly need to create new classifiers for novel problems. Soundscape annotation efforts are very expensive, and machine learning experts are scarce, creating a bottleneck in analysis. We aim to eliminate this bottleneck by providing an efficient, self-contained active learning workflow for biologists. In this tutorial, we present an integrated workflow for analyzing large unlabeled bioacoustic datasets, adapting new agile modeling techniques to audio. Our goal is to allow experts to create a new high-quality classifier for a novel class with under one hour of effort. We achieve this by leveraging transfer learning from high-quality bioacoustic models, vector search over audio databases, and a lightweight Python notebook UX. The workflow begins from as little as a single example, proceeds through an efficient active learning loop, and finally applies the resulting classifier to the full mass of unlabeled data to produce insights for ecologists and land managers.
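
To make the shape of the workflow concrete, below is a minimal sketch of the embed–search–label–retrain loop the abstract describes. It is not the tutorial's implementation: the embedding database, the `cosine_search` and `oracle_label` helpers, and all dimensions and counts are illustrative assumptions, with random vectors standing in for embeddings from a pretrained bioacoustic model and for human review in the notebook UI.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in embedding database: one 256-d unit vector per audio window.
# In a real workflow these would come from a frozen pretrained bioacoustic model.
unlabeled_db = rng.normal(size=(20_000, 256)).astype(np.float32)
unlabeled_db /= np.linalg.norm(unlabeled_db, axis=1, keepdims=True)

def cosine_search(query, db, k=50):
    """Return indices of the k windows most similar to the query embedding."""
    scores = db @ (query / np.linalg.norm(query))
    return np.argsort(-scores)[:k]

def oracle_label(indices):
    """Stand-in for an expert reviewing candidate windows in a notebook UI."""
    return rng.integers(0, 2, size=len(indices))  # 1 = target call present

# 1. Start from a single example of the target vocalization.
seed_embedding = rng.normal(size=256).astype(np.float32)
candidates = cosine_search(seed_embedding, unlabeled_db)

labeled_idx = np.array([], dtype=int)
labels = np.array([], dtype=int)
clf = LogisticRegression(max_iter=1000)

# 2. Active learning loop: review candidates, retrain, pick new candidates.
for _ in range(5):
    labeled_idx = np.concatenate([labeled_idx, candidates])
    labels = np.concatenate([labels, oracle_label(candidates)])
    if len(np.unique(labels)) < 2:
        continue  # need examples of both classes before fitting
    clf.fit(unlabeled_db[labeled_idx], labels)
    scores = clf.decision_function(unlabeled_db)
    scores[labeled_idx] = -np.inf  # do not re-surface already-reviewed windows
    candidates = np.argsort(-scores)[:50]

# 3. Apply the classifier to the full unlabeled corpus for downstream analysis.
detections = clf.predict(unlabeled_db)
print(f"Windows flagged as containing the target call: {int(detections.sum())}")
```

Because the heavy lifting is done once by the pretrained embedding model, each retraining step only fits a small linear classifier over cached embeddings, which is what keeps the loop fast enough for a single expert to complete in well under an hour.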