I see GBIF as the aggregation platform for all evidence we have of the distribution (and, as far as possible, the abundance) of species in time and space. Soundscapes can and should be included. They share some characteristics with satellite imagery and metagenomic data. In all cases, we have instruments that collect streams of digital information from which we can derive assertions of the presence or abundance of different species.
For metagenomics, sequencing gives us a stream of sequences that we can interpret through a combination of reference libraries such as BOLD, UNITE and INSDC and suitable clustering algorithms. The quality of the resulting data is a product of the quality of the original sequences, of the coverage, completeness and correctness of the reference libraries, and of the specific algorithms used.
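To make the dependency on reference coverage and thresholds concrete, here is a minimal, purely illustrative sketch of assigning reads against a reference library. The sequences, species names and the 0.9 identity threshold are all assumptions for illustration; a real pipeline would use proper alignment or k-mer methods rather than `difflib`.

```python
# Toy sketch (not a real pipeline): assign each read to the best-matching
# reference sequence if identity exceeds a threshold, else leave it
# unassigned (a candidate for clustering into an OTU).
from difflib import SequenceMatcher

REFERENCE = {  # hypothetical reference library: sequence -> species
    "ACGTACGTGGTT": "Species A",
    "TTGGCCAATTGG": "Species B",
}

def identity(a, b):
    """Rough pairwise identity between two sequences (0.0 - 1.0)."""
    return SequenceMatcher(None, a, b).ratio()

def assign(read, threshold=0.9):
    """Return the best reference species above the threshold, else None."""
    best = max(REFERENCE, key=lambda ref: identity(read, ref))
    return REFERENCE[best] if identity(read, best) >= threshold else None

reads = ["ACGTACGTGGTT", "ACGTACGTGGTA", "GGGGGGGGGGGG"]
assignments = [assign(r) for r in reads]
```

Note how the output depends jointly on read quality, reference coverage (the third read matches nothing) and the chosen threshold, which is exactly the quality product described above.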
For hyperspectral satellite imagery, we might derive, for example, assessments of proportional tree cover by different species for a given area. Again, this would be a product of the quality of the satellite imagery, of the reference datasets for tree spectra, and of the algorithms used.
For soundscapes, data quality will be a product of the quality of the recordings and of the human or machine processes used to identify species.
I would like to see infrastructures in place that store the original sequences/images/recordings and publish versionable sampling-event (Event Core) datasets that list species (with applicable abundance measures) and link back 1) to the digital artifacts from which they derive and 2) to the versions of the algorithms used to derive the species lists. Over time, as reference data and algorithms improve, this processing can be repeated to increase the completeness or quality of the lists.
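A record in such a dataset might look like the sketch below. The standard Darwin Core terms (eventID, eventDate, scientificName, organismQuantity) are real; the provenance fields for source artifacts and algorithm versions are illustrative assumptions, not an agreed schema, and the URL and tool name are hypothetical.

```python
# Sketch of one versioned Event Core record that links derived species
# occurrences back to 1) the source artifacts and 2) the algorithm version.
import json

event = {
    "eventID": "soundscape-2024-001",
    "eventDate": "2024-05-01",
    "datasetVersion": "1.2.0",                 # bumped on reprocessing
    "sourceArtifacts": [                       # 1) the original recordings
        "https://example.org/recordings/site42/20240501.flac",
    ],
    "derivationAlgorithm": {                   # 2) how the list was derived
        "name": "example-classifier",          # hypothetical tool
        "version": "3.1",
        "referenceLibraryVersion": "2024-04",
    },
    "occurrences": [
        {"scientificName": "Turdus merula",
         "organismQuantity": 14,
         "organismQuantityType": "detections"},
    ],
}

print(json.dumps(event, indent=2))
```

Because the record carries both links explicitly, a later rerun with an improved classifier or reference library simply produces a new dataset version against the same artifacts.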
A key challenge, particularly in the case of soundscapes, will be to document the effective spatial area being sampled. Initially this should be represented in the metadata for the dataset/sample, but over time it could come to be attached at the level of each species, based on profiles of the effective range of each species' sounds.
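The per-species refinement can be sketched as follows, assuming a simple circular detection model around a single recorder. The species names and detection radii here are hypothetical illustrations; real effective ranges would come from measured detection-probability profiles.

```python
# Sketch: per-species effective sampled area from an assumed circular
# detection radius. Radii are hypothetical, not measured values.
import math

DETECTION_RADIUS_M = {          # hypothetical effective ranges
    "Turdus merula": 150,       # loud, far-carrying song
    "Regulus regulus": 40,      # quiet, high-frequency song
}

def effective_area_ha(radius_m):
    """Circular effective sampling area (hectares) for one recorder."""
    return math.pi * radius_m ** 2 / 10_000

areas = {species: round(effective_area_ha(r), 2)
         for species, r in DETECTION_RADIUS_M.items()}
```

The point of the sketch is that two species recorded at the same station can represent samples of very different areas, so a single dataset-level footprint is only a first approximation.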