Challenges in Applying Audio Classification Models to Datasets Containing Crucial Biodiversity Information (Papers Track)

Jacob G Ayers (UC San Diego); Yaman Jandali (University of California, San Diego); Yoo-Jin Hwang (Harvey Mudd College); Erika Joun (University of California, San Diego); Gabriel Steinberg (Binghamton University); Mathias Tobler (San Diego Zoo Wildlife Alliance); Ian Ingram (San Diego Zoo Wildlife Alliance); Ryan Kastner (University of California San Diego); Curt Schurgers (University of California San Diego)

Ecosystems & Biodiversity · Natural Language Processing

Abstract

The acoustic signature of a natural soundscape can reveal consequences of climate change on biodiversity. Hardware costs, human labor time, and the expertise required to label audio are impediments to conducting acoustic surveys across a representative portion of an ecosystem. These barriers are quickly eroding with the advent of low-cost, easy-to-use, open-source hardware and the expansion of the machine learning field, which provides pre-trained neural networks to test on retrieved acoustic data. One consistent challenge in passive acoustic monitoring (PAM) is that neural networks which show promising results on publicly available training and test sets lack reliability on audio recordings collected in the field, the very recordings that contain crucial biodiversity information. To demonstrate this challenge, we tested a hybrid recurrent neural network (RNN) and convolutional neural network (CNN) binary classifier, trained for bird presence/absence, on two Peruvian bird audio datasets. The network achieved an area under the receiver operating characteristic curve (AUROC) of 95% on a dataset assembled from Xeno-canto and Google's AudioSet ontology, in contrast to 65% on a stratified random sample of field recordings collected from the Madre de Dios region of the Peruvian Amazon. In an attempt to alleviate this discrepancy, we applied various audio data augmentation techniques during the network's training process, which raised the AUROC to 77% on the field recordings.
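
The abstract mentions applying audio data augmentation during training and evaluating bird presence/absence with AUROC. The sketch below illustrates what waveform-level augmentation and an AUROC check of that kind might look like; it is a minimal example assuming NumPy and scikit-learn, and the helper names (add_background_noise, random_time_shift, augment), the specific augmentations, and the toy labels are hypothetical, not taken from the paper's pipeline.

    # Illustrative sketch only: waveform-level augmentations and an AUROC check
    # of the kind described in the abstract. Names and parameters are assumptions.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    def add_background_noise(waveform, snr_db=10.0, rng=None):
        """Mix in Gaussian noise at a target signal-to-noise ratio (dB)."""
        rng = rng or np.random.default_rng()
        signal_power = np.mean(waveform ** 2)
        noise_power = signal_power / (10 ** (snr_db / 10))
        noise = rng.normal(0.0, np.sqrt(noise_power), size=waveform.shape)
        return waveform + noise

    def random_time_shift(waveform, max_fraction=0.2, rng=None):
        """Circularly shift the clip by up to max_fraction of its length."""
        rng = rng or np.random.default_rng()
        max_shift = int(len(waveform) * max_fraction)
        shift = rng.integers(-max_shift, max_shift + 1)
        return np.roll(waveform, shift)

    def augment(waveform, rng=None):
        """Apply a random subset of augmentations to one training clip."""
        rng = rng or np.random.default_rng()
        if rng.random() < 0.5:
            waveform = add_background_noise(waveform, snr_db=rng.uniform(5, 20), rng=rng)
        if rng.random() < 0.5:
            waveform = random_time_shift(waveform, rng=rng)
        return waveform

    # AUROC on held-out clips: y_true holds presence/absence labels (1/0),
    # y_score holds the classifier's predicted probabilities of bird presence.
    y_true = np.array([0, 1, 1, 0, 1])
    y_score = np.array([0.2, 0.9, 0.6, 0.4, 0.8])
    print(f"AUROC: {roc_auc_score(y_true, y_score):.2f}")

In practice such augmentations would be applied on the fly to each training clip (e.g., augment(clip) inside the data loader) so the network sees noise and timing conditions closer to field recordings, which is the general idea behind the augmentation experiments the abstract reports.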
