
Automated Location Invariant Animal Detection In Camera Trap Images Using Publicly Available Data Sources
  • Andrew Shepley, University of New England School of Science and Technology
  • Greg Falzon, University of New England
  • Paul D. Meek, NSW Dept Primary Industries
  • Paul Kwan, Melbourne Institute of Technology

Abstract

1. A time-consuming challenge faced by ecologists is the extraction of meaningful data from camera trap images to inform ecological management. Automated object detection solutions are increasingly available; however, most are not sufficiently robust to be deployed on a large scale due to lack of location invariance across sites. This prevents optimal use of ecological data and results in significant resource expenditure to annotate and retrain object detectors.

2. In this study, we aimed to (a) assess the value of publicly available image datasets, including FlickR and iNaturalist (FiN), when training deep learning models for camera trap object detection, (b) develop an approach for training location invariant object detection models, and (c) explore the use of small subsets of camera trap images for optimization training.

3. We collected and annotated 3 datasets of images of striped hyena, rhinoceros and pig from FiN, and used transfer learning to train 3 object detection models in the task of animal detection. We compared the performance of these models to that of 3 models trained on the Wildlife Conservation Society and Camera CATalogue datasets, when tested on out-of-sample Snapshot Serengeti datasets. Furthermore, we optimized the FiN models via infusion of small subsets of camera trap images to increase robustness in challenging detection cases.

4. In all experiments, the mean Average Precision (mAP) of the FiN models was significantly higher (82.33-88.59%) than that achieved by the models trained only on camera trap datasets (38.5-66.74%). The infusion of camera trap images into FiN training further improved mAP, with increases ranging from 1.78-32.08%.

5. Ecology researchers can use FiN images to train robust, location invariant, out-of-the-box, deep learning object detection solutions for camera trap image processing. This would allow AI technologies to be deployed on a large scale in ecological applications.
Datasets and code related to this study are open source and available at: https://github.com/ashep29/infusion

Peer review status: Published

17 Oct 2020: Submitted to Ecology and Evolution
21 Oct 2020: Submission Checks Completed
21 Oct 2020: Assigned to Editor
23 Oct 2020: Review(s) Completed, Editorial Evaluation Pending
21 Dec 2020: Editorial Decision: Revise Minor
27 Jan 2021: 1st Revision Received
28 Jan 2021: Review(s) Completed, Editorial Evaluation Pending
28 Jan 2021: Submission Checks Completed
28 Jan 2021: Assigned to Editor
31 Jan 2021: Editorial Decision: Accept
10 Mar 2021: Published in Ecology and Evolution. 10.1002/ece3.7344