Heterogeneity in the sequence of tasks may reflect the careful design and adaptation of bioinformatic procedures within each study to the type and structure of the sample and sequence data and/or the specific research question, rather than the simple duplication of previously published pipelines. However, high heterogeneity may equally result from the omission of important tasks or their inappropriate implementation within the pipelines, reducing comparability, integration and replicability across studies. One clear example concerns the Filtering tasks that remove erroneous sequence reads. Denoising (i.e. the removal of sequencing errors based on models of error frequency parameterised by between-sequence similarity, error sensitivity and/or relative frequency) was employed in just 18 studies, and its relative position within the pipelines was highly variable (see Table 1 and Fig. 3). While some sequencing errors will be disregarded during OTU clustering, failure to incorporate denoising can lead to false OTUs and thus OTU inflation (Shum & Palumbi, 2021). Furthermore, the trend towards examining haplotypic variation in metazoan wocDNA metabarcoding through the use of amplicon sequence variants (ASVs; Callahan et al., 2017) requires minimising the number of spurious sequences, and thus relies on stringent filtering such as denoising. Similarly, filtering to remove sequences with low copy number (which are often considered highly likely to be erroneous) was reported in only half (n = 57) of the studies, despite being generally recommended (Calderón‐Sanou et al., 2020; Ficetola et al., 2017) and a critical step for reducing spurious sequences that survive denoising, including nuclear mitochondrial (NUMT; Lopez et al., 1994) copies (Andújar et al., 2021). It should be noted that while many task absences are cases of under-implementation, some may instead reflect underreporting (see below).
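To make the two filtering tasks discussed above concrete, the sketch below illustrates the underlying logic in Python. It is not the implementation of any published tool: it is a minimal, greedy denoiser in the general spirit of abundance-skew approaches, in which a sequence is merged into a more abundant "parent" when the two are sufficiently similar and the putative error is sufficiently rare relative to that parent, followed by a simple low-copy-number filter. The function names (`denoise`, `filter_min_count`) and parameter values (`alpha`, `max_dist`, `min_count`) are illustrative assumptions only.

```python
def hamming(a: str, b: str) -> int:
    """Number of mismatches between two equal-length sequences."""
    return sum(x != y for x, y in zip(a, b))


def denoise(counts: dict[str, int], alpha: float = 2.0,
            max_dist: int = 2) -> dict[str, int]:
    """Greedy abundance-based denoising (simplified sketch).

    Sequences are visited from most to least abundant. A sequence is
    treated as an error of an already-accepted parent if it lies within
    `max_dist` mismatches of that parent and its count falls below a
    distance-dependent fraction of the parent's count; its reads are
    then merged into the parent. Otherwise it is accepted as genuine.
    """
    accepted: dict[str, int] = {}
    for seq, n in sorted(counts.items(), key=lambda kv: -kv[1]):
        parent = None
        for cand, m in accepted.items():
            if len(cand) == len(seq):
                d = hamming(cand, seq)
                # Abundance-skew criterion: the more distant the
                # sequence, the rarer it must be to count as an error.
                if d <= max_dist and n <= m / (2 ** (alpha * d)):
                    parent = cand
                    break
        if parent is None:
            accepted[seq] = n
        else:
            accepted[parent] += n
    return accepted


def filter_min_count(counts: dict[str, int],
                     min_count: int = 2) -> dict[str, int]:
    """Discard low-copy-number sequences (e.g. singletons)."""
    return {s: n for s, n in counts.items() if n >= min_count}


# Toy example: the 1-mismatch variant (count 4) is merged into its
# abundant parent; the unrelated singleton survives denoising but is
# then removed by the low-copy-number filter.
counts = {"ACGTACGT": 120, "ACGTACGA": 4, "TTTTTTTT": 1}
denoised = denoise(counts)             # {"ACGTACGT": 124, "TTTTTTTT": 1}
retained = filter_min_count(denoised)  # {"ACGTACGT": 124}
```

As the toy example shows, the two tasks are complementary rather than redundant: the unrelated singleton is untouched by similarity-based denoising and is only removed by the copy-number filter, consistent with the point above that low-copy-number filtering catches spurious sequences that survive denoising.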