@sformel, many thanks for contributing to this discussion.
It’s interesting that you see the core notion as a hypothesis, because that opens the question “How do you test for a significant result?” In other words, what variables would you have to control to statistically examine the connection between the sharing of biodiversity data and positive, real-world conservation benefits, and how would you avoid the trap of a false-cause inference (post hoc ergo propter hoc: the benefits were coming anyway, for other reasons)?
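To make the difficulty concrete, here is a minimal sketch of how the hypothesis might be framed as a statistical model, and of how confounding produces exactly the false-cause trap above. Everything in it is invented for illustration: the variable names (data_shared, political_support, campaign_intensity, funding), the simulated data and the effect sizes. A real test would need a proper causal design (matched sites, a natural experiment, etc.), not just a regression.

```python
# Hypothetical sketch: does sharing biodiversity data predict a positive
# conservation outcome once confounders are controlled for?
# All variables and data below are invented for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 500  # hypothetical conservation decisions

# Confounders that could drive both data sharing and outcomes:
political_support = rng.normal(size=n)    # e.g. post-election climate
campaign_intensity = rng.normal(size=n)   # public-campaign pressure
funding = rng.normal(size=n)              # agency resources

# Simulate a world where data sharing has NO causal effect, but is
# correlated with campaign intensity (campaigns commission surveys).
data_shared = (campaign_intensity + rng.normal(size=n) > 0).astype(int)
logit_p = 0.8 * political_support + 1.2 * campaign_intensity + 0.5 * funding
outcome = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

df = pd.DataFrame(dict(outcome=outcome, data_shared=data_shared,
                       political_support=political_support,
                       campaign_intensity=campaign_intensity,
                       funding=funding))

# Naive model: data_shared looks "significant" (post hoc ergo propter hoc).
naive = smf.logit("outcome ~ data_shared", data=df).fit(disp=0)
# Adjusted model: the apparent effect vanishes once confounders are included.
adjusted = smf.logit("outcome ~ data_shared + political_support + "
                     "campaign_intensity + funding", data=df).fit(disp=0)

print(naive.summary2().tables[1][["Coef.", "P>|z|"]])
print(adjusted.summary2().tables[1][["Coef.", "P>|z|"]])
```

In this simulated world data sharing has no causal effect at all, yet the naive model reports a “significant” coefficient for data_shared; including the confounders makes it disappear. That is the post hoc trap in miniature, and in the real world you rarely get to measure all the confounders.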
In my experience in Australia, the variables needing to be controlled make up a long list and are themselves subject to stochastic change, e.g. after elections. What’s been clear in many conservation wins is that the underlying science played little part in the success.
In fact, in many cases the science has been abused. Suppose a campaign is started to legally reserve a block of forest that is slated to be logged. To focus public attention on the block, a biodiversity survey is done that generates occurrence data for a rare or otherwise glamorous species in the block. The campaign propaganda features these occurrences: “Stop the logging! Save the {X}!”
Whether or not the campaign succeeds, several ecological questions are likely to remain unanswered:
- Is the rarity of {X} real, or is it an artifact of inadequate sampling? (See the detection sketch after this list.)
- Will logging actually disadvantage {X}?
- What are the current and near-term threats (other than logging) to the continued existence of {X} in the block of forest?
- What management is required to maintain {X} in unlogged forest, and who will do it?
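On the first of those questions, a back-of-envelope detection calculation shows how easily “rarity” can be a sampling artifact. The per-visit detection probabilities below are invented for illustration, and the assumption of independent visits is a strong one:

```python
# Hypothetical sketch: apparent rarity vs. sampling effort.
# If a species is present at a site but is detected on any single survey
# visit with probability p, the chance of missing it entirely across
# k independent visits is (1 - p) ** k.
import math

def visits_needed(p, confidence=0.95):
    """Minimum number of visits for a `confidence` chance of at least
    one detection, assuming independent visits (a strong assumption)."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - p))

for p in (0.5, 0.2, 0.05):  # illustrative per-visit detection probabilities
    print(f"p = {p:.2f}: chance of total miss after 3 visits = "
          f"{(1 - p) ** 3:.2f}, visits for 95% detection = {visits_needed(p)}")
```

With a per-visit detection probability of 0.05, a cryptic species needs roughly 59 visits for a 95% chance of being recorded at all, so a one-off survey that turns up a handful of records says very little about whether {X} is genuinely rare.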
You can argue that the propaganda use of occurrence records is not relevant to the science in this case, and that the important thing is that conservation planners and policy-makers now have additional data to inform their decisions. My counter-argument is that those decisions are likely to be made for reasons that have nothing to do with the occurrence data, which brings us back to how you could test your hypothesis.