Of the 30 bioinformatic tasks identified (see Table 1 for a description of the tasks), only 11 were implemented in more than half of the papers (n>55) (Fig. 3). Quality filtering (n=92) and OTU delimitation (n=89) were the most frequently reported tasks. Some of the less frequently reported tasks were those associated with uncommon bioinformatic requirements of metabarcoding data, such as assembly or degapping; others, such as preclustering, have become redundant with modern computational power. Low reporting of such tasks is likely an accurate reflection of rare implementation; however, many other tasks that are fundamental to metabarcoding bioinformatics are also poorly reported. For example, primer trimming was reported by just over half of the papers (n=67), yet it is an indispensable step. Similarly, adapter trimming was underreported (n=21); although in most cases this step is probably performed by sequencing facilities before the authors receive their data, reporting it, including the parameters and tools used, is fundamental to verifying the stringency of the read preparation procedures. The mapping of by-sample reads to OTUs was reported by only one third (n=30) of the papers that employed OTU delimitation, despite this being a necessary step for producing the ecological data used in downstream analyses. Furthermore, OTU mapping is not a trivial step: the level of filtering/processing applied to the reads used for mapping (as opposed to that applied to the sequences used for OTU delimitation), the similarity threshold, and the tie-breaking algorithm employed to assign reads to OTU clusters could all substantially affect the community data generated. Accurate reporting of this step is therefore important for assessing a pipeline's validity, its comparability across studies, and its reproducibility.
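To make concrete why the mapping parameters matter, the sketch below illustrates one possible read-to-OTU mapping scheme: each read is compared against every OTU representative sequence, assigned only if the best match exceeds a similarity threshold, and ties are broken in favour of the more abundant OTU. All names and choices here are illustrative assumptions, not any specific published pipeline; in particular, `difflib.SequenceMatcher` is used as a crude stand-in for the pairwise global-alignment identity that real tools (e.g. VSEARCH) compute, and abundance-based tie-breaking is only one of several conventions in use.

```python
from difflib import SequenceMatcher


def identity(read, centroid):
    # Crude similarity stand-in: real mapping tools derive identity
    # from a pairwise global alignment, not from SequenceMatcher.
    return SequenceMatcher(None, read, centroid).ratio()


def map_reads_to_otus(reads, centroids, abundances, threshold=0.97):
    """Assign each read to the best-matching OTU representative.

    reads:      dict of read_id -> sequence (per-sample reads)
    centroids:  dict of otu_id -> representative sequence
    abundances: dict of otu_id -> cluster size, used to break ties
    threshold:  minimum identity required for any assignment
    """
    table = {}  # read_id -> otu_id; unassigned reads are omitted
    for read_id, seq in reads.items():
        scores = {otu: identity(seq, c) for otu, c in centroids.items()}
        best = max(scores.values())
        if best < threshold:
            continue  # below threshold: read left unassigned
        # Tie-breaking: among equally good hits, prefer the most
        # abundant OTU (one common convention; others pick randomly).
        ties = [otu for otu, s in scores.items() if s == best]
        table[read_id] = max(ties, key=lambda otu: abundances[otu])
    return table
```

Changing `threshold`, the similarity measure, or the tie-breaking rule in this sketch changes which OTU each read is counted toward, which is precisely why reporting these choices is necessary for reproducibility.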