
Camera settings and habitat type influence the accuracy of citizen science approaches to camera trap image classification.
  • Nicole Egna,
  • David O'Connor,
  • Jenna Stacy-Dawes,
  • Mathias Tobler,
  • Nicholas Pilfold,
  • Kristin Neilsen,
  • Brooke Simmons,
  • Elizabeth Davis,
  • Mark Bowler,
  • Symon Masiaine,
  • Daniel Lenaipa,
  • Jonathan Lenyakopiro,
  • Lexson Larpei,
  • Johnson Lekushan,
  • Ruth Lekupanai,
  • Jesus Lekalgitele,
  • Joseph Lemirgishan,
  • Lekuran Lemingani,
  • Ranis Lesipiti,
  • Masenge Lororua,
  • Sebastian Rabhayo,
  • Kirstie Ruppert,
  • Jenny Glikman,
  • Julian Fennessy,
  • Arthur Muneza,
  • Megan Owen
Nicole Egna
Duke University Nicholas School of the Environment

Corresponding Author: [email protected]

David O'Connor
San Diego Zoo Institute for Conservation Research
Jenna Stacy-Dawes
San Diego Zoo Institute for Conservation Research
Mathias Tobler
San Diego Zoo Institute for Conservation Research
Nicholas Pilfold
San Diego Zoo Institute for Conservation Research
Kristin Neilsen
San Diego Zoo Institute for Conservation Research
Brooke Simmons
Lancaster University Department of Physics
Elizabeth Davis
San Diego Zoo Institute for Conservation Research
Mark Bowler
University of Suffolk
Symon Masiaine
San Diego Zoo Institute for Conservation Research
Daniel Lenaipa
Namunyak Wildlife Conservation Trust
Jonathan Lenyakopiro
Namunyak Wildlife Conservation Trust
Lexson Larpei
Loisaba Conservancy
Johnson Lekushan
Namunyak Wildlife Conservation Trust
Ruth Lekupanai
Namunyak Wildlife Conservation Trust
Jesus Lekalgitele
Namunyak Wildlife Conservation Trust
Joseph Lemirgishan
Namunyak Wildlife Conservation Trust
Lekuran Lemingani
Namunyak Wildlife Conservation Trust
Ranis Lesipiti
Namunyak Wildlife Conservation Trust
Masenge Lororua
Namunyak Wildlife Conservation Trust
Sebastian Rabhayo
Namunyak Wildlife Conservation Trust
Kirstie Ruppert
San Diego Zoo Institute for Conservation Research
Jenny Glikman
San Diego Zoo Institute for Conservation Research
Julian Fennessy
Giraffe Conservation Foundation
Arthur Muneza
Giraffe Conservation Foundation
Megan Owen
San Diego Zoo Institute for Conservation Research

Abstract

Scientists are increasingly using the volunteer efforts of citizen scientists to classify images captured by motion-activated trail cameras. The rising popularity of citizen science reflects its potential to engage the public in conservation science and to accelerate the processing of the large volume of images generated by trail cameras. While image classification accuracy by citizen scientists can vary across species, the influence of other factors on accuracy is poorly understood. Inaccuracy diminishes the value of citizen science-derived data and prompts the need for specific best-practice protocols to decrease error. We compared accuracy among three programs that use crowdsourced citizen scientists to process images online: Snapshot Serengeti, Wildwatch Kenya, and AmazonCam Tambopata. We hypothesized that habitat type and camera settings would influence accuracy. To evaluate these factors, each photo was circulated to multiple volunteers, and all volunteer classifications were aggregated to a single best answer for each photo using a plurality algorithm. A subset of these images then underwent expert review and was compared to the citizen scientist results. Classification errors were categorized by the nature of the error (e.g., false species or false empty) and the reason for the false classification (e.g., misidentification). Our results show that Snapshot Serengeti had the highest accuracy (97.9%), followed by AmazonCam Tambopata (93.5%) and then Wildwatch Kenya (83.4%). Error type was influenced by habitat, with false empty images more prevalent in open, grassy habitat (27%) than in woodlands (10%). For medium to large animal surveys across all habitat types, our results suggest that to significantly improve accuracy in crowdsourced projects, researchers should use a trail-camera setup protocol with a burst of three consecutive photos and a short field of view, and should consider appropriate camera sensitivity. Accuracy comparisons such as this study can improve the reliability of future citizen science projects and, in turn, encourage greater use of such data.
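The abstract describes aggregating multiple volunteer classifications of each photo to a single best answer with a plurality algorithm. As a rough illustration only (the function name, the support score, and the tie-breaking behavior below are assumptions, not the projects' actual implementation), a plurality rule can be sketched in a few lines of Python:

```python
from collections import Counter

def plurality_classification(volunteer_labels):
    """Aggregate volunteer labels for one photo into the most common answer.

    Illustrative sketch of a plurality rule; ties are broken by first
    occurrence (Counter.most_common ordering), which is an assumption.
    """
    counts = Counter(volunteer_labels)
    label, votes = counts.most_common(1)[0]
    support = votes / len(volunteer_labels)  # fraction of volunteers agreeing
    return label, support

# Example: ten hypothetical volunteer classifications for one camera-trap photo
labels = ["giraffe", "giraffe", "empty", "giraffe", "zebra",
          "giraffe", "giraffe", "empty", "giraffe", "giraffe"]
print(plurality_classification(labels))  # ('giraffe', 0.7)
```

A support score like the one returned above could also be used to flag low-agreement photos for expert review, in the spirit of the expert comparison described in the abstract.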
04 May 2020 – Submitted to Ecology and Evolution
05 May 2020 – Submission Checks Completed
05 May 2020 – Assigned to Editor
06 May 2020 – Reviewer(s) Assigned
19 Jun 2020 – Review(s) Completed, Editorial Evaluation Pending
22 Jun 2020 – Editorial Decision: Revise Minor
05 Aug 2020 – 1st Revision Received
05 Aug 2020 – Submission Checks Completed
05 Aug 2020 – Assigned to Editor
05 Aug 2020 – Review(s) Completed, Editorial Evaluation Pending
06 Aug 2020 – Editorial Decision: Accept