Robotics, artificial intelligence and the integration of
image analysis and barcode sequencing
Exciting advances are being made in the areas of robotics and image
analysis for arthropods. Species identification via image analysis currently
relies on convolutional neural networks (CNNs), a deep learning (DL) technique
in which complex image patterns are classified with the help of labelled
training sets (Valan et al., 2019; Valan, Vondráček, & Ronquist, 2021).
However, training CNNs requires large image sets in which each image is
labelled with reliable taxonomic information. Perhaps not surprisingly, such
datasets are available for bees and butterflies (Buschbacher, Ahrens, Espeland,
& Steinhage, 2020), but are largely missing for the bulk of arthropods
collected by standardised trapping.
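To make concrete what such training involves, the sketch below shows a minimal
fine-tuning loop for a species classifier. It is illustrative only and not
drawn from any of the cited studies: the directory layout (one folder per
putative species), the PyTorch/torchvision dependency and the pretrained
ResNet-18 backbone are all assumptions made for this example.

```python
# Minimal sketch (assumes PyTorch and torchvision >= 0.13 are installed) of
# fine-tuning a pretrained CNN on labelled arthropod images. The hypothetical
# layout is data/train/<putative_species>/<image>.jpg.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# ImageNet-style preprocessing so the pretrained weights remain useful.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# ImageFolder reads the class label from each image's parent directory name.
train_set = datasets.ImageFolder("data/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Replace the classification head with one output per putative species.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):  # a handful of epochs, purely for illustration
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.3f}")
```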
The challenge is to generate these training sets, and this is where a
combination of robotic specimen handling and HTS barcoding can help: robotics
can generate the images, and HTS barcoding can sort them into putative species.
Recently, Wührl et al. (2021) presented a first-generation robot for this
purpose. It detects, images and measures specimens before moving them into a
96-well microplate for DNA barcode sequencing. This approach opens the door to
a transition away from multiplex barcode sequencing of all individuals toward
taxonomic assignment by image recognition, because every image paired with a
barcode sequence adds to the pool of training images for machine learning.
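As a purely hypothetical illustration of this pairing step, the sketch below
joins robot-generated specimen images to barcode-derived labels via a shared
plate/well key; the file-naming scheme and CSV columns are assumptions, not
part of the published workflow.

```python
# Sketch of attaching barcode-derived labels to the robot's specimen images.
# Assumptions: each image is named <plate>_<well>.jpg and the sequencing
# pipeline wrote a table mapping the same plate/well to a putative species
# (e.g. a BIN or OTU label).
import csv
from pathlib import Path

IMAGE_DIR = Path("images")                     # e.g. images/plate01_A01.jpg
ASSIGNMENTS = Path("barcode_assignments.csv")  # columns: plate,well,putative_species

# Read the barcode-based assignments into a lookup keyed by (plate, well).
label_by_well = {}
with ASSIGNMENTS.open(newline="") as handle:
    for row in csv.DictReader(handle):
        label_by_well[(row["plate"], row["well"])] = row["putative_species"]

# Pair every image with its label; unmatched wells (failed sequencing, empty
# wells) are skipped rather than entering the training set unlabelled.
training_records = []
for image_path in sorted(IMAGE_DIR.glob("*.jpg")):
    plate, well = image_path.stem.split("_", 1)
    label = label_by_well.get((plate, well))
    if label is not None:
        training_records.append({"image": str(image_path), "label": label})

# Write the merged table that a CNN training script (such as the sketch above)
# could consume after sorting images into per-label folders.
with open("training_set.csv", "w", newline="") as out:
    writer = csv.DictWriter(out, fieldnames=["image", "label"])
    writer.writeheader()
    writer.writerows(training_records)
```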
However, it remains unclear whether automated image-based approaches alone will
eventually reach the approximately species-level resolution obtained
with sequence data. Image-based specimen identification could nevertheless be
used as an external validation of molecular diversity estimates at, for
example, genus level. Similarly, image analyses can yield information on sample
biomass and abundance (Ärje et al., 2020; Schneider et al., 2022; Wührl, 2021).