This thread will capture the questions that arose during and after the second support hour for Nodes.
Watch the recording, which describes the technical components of GBIF step by step.
Question 1: Are the flags on the GBIF validator the same as the flags on GBIF.org?
As mentioned in the presentation, the validator uses the same processing pipeline as GBIF.org. However, the flags relating to the ingestion process (for example, metadata validation errors or occurrence duplicate flags) are only shown in the validator. In other words, the validator will show all the flags that will be present on GBIF.org, and more.
Question 2: Does the validator have an API?
No.
Question 3: Datasets get new DOIs for major new versions, is that correct?
The publisher decides what constitutes a major version. So if a given publisher considers that a dataset has not changed in a major way, the DOI will remain the same.
Question 4: Can the validator identify records where coordinates are missing?
Records where coordinates are missing are not flagged because the lack of coordinates is not an issue per se: it is fine to publish records without coordinates. This means that the validator will not identify records where coordinates are missing.
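If you do want to locate such records after publication, one option is the occurrence search API discussed in Question 7. As a sketch (assuming the search endpoint's hasCoordinate filter; the dataset key below is a placeholder, not a real dataset):

```python
from urllib.parse import urlencode

# Build an occurrence-search URL that keeps only records lacking
# coordinates in one dataset. "YOUR_DATASET_KEY" is a placeholder:
# replace it with a real dataset UUID.
BASE = "https://api.gbif.org/v1/occurrence/search"
params = {
    "datasetKey": "YOUR_DATASET_KEY",
    "hasCoordinate": "false",  # only records without coordinates
    "limit": 20,
}
url = f"{BASE}?{urlencode(params)}"
print(url)
```

The returned JSON lists matching records under a "results" key, so you can page through them with the offset parameter.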
Question 5: Could there be some GBIF technical Support to help with updating IPTs?
We don’t have access to your servers, so we can’t perform the update for you. That being said, we are happy to make IPT updates the topic of the next technical support hour for Nodes and give a demonstration. We can also consider arranging a call later if needed.
Question 6: Does checklistbank.org correspond to the GBIF backbone and is it related to the Catalogue of Life changes?
Checklistbank.org was developed by GBIF for the Catalogue of Life. It does more than host checklists: it is also a tool to compare checklists, build new checklists from existing ones, etc. The Catalogue of Life (CoL) uses checklistbank.org to work on the CoL taxonomy as well as on the Extended Catalogue of Life, which will include more checklist sources and some modifications. Right now, GBIF has its own backbone taxonomy, built and maintained in house. In the future, we would like to replace the GBIF backbone with the Extended Catalogue of Life, but we do not have a timeline for that change.
Note that you can ask for an account on checklistbank.org to download and upload checklists.
Question 7: Can you search occurrences based on eventIDs with the GBIF API?
Yes, you can: eventID is a parameter of the occurrence API (both search and download). Here is an example: https://api.gbif.org/v1/occurrence/search?dataset_key=372ac467-9b19-4373-8a0f-85fb3a91b6ca&event_id=e14_2018-02-18
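As a small sketch, the same query can be assembled from Python with only the standard library (the dataset key and eventID are the ones from the example URL above):

```python
from urllib.parse import urlencode

# Reconstruct the example query: all occurrences in one dataset that
# share a given eventID.
BASE = "https://api.gbif.org/v1/occurrence/search"
params = {
    "dataset_key": "372ac467-9b19-4373-8a0f-85fb3a91b6ca",
    "event_id": "e14_2018-02-18",
}
url = f"{BASE}?{urlencode(params)}"
print(url)
# To run the query, GET this URL (e.g. with urllib.request.urlopen)
# and read the "results" list in the returned JSON.
```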
Question 8: Could you advise on the best way to model the following in Darwin Core? A study where samples correspond to several limbs of the same organism, as well as their microbiomes.
The answer here is a summary of a few things that were discussed together in the call:
- You would probably want to publish the microbiome as a separate dataset.
- You could try the Preservation Extension (https://rs.gbif.org/extension/ggbn/preservation.xml) with an event or occurrence core to convey the different limb preservations.
- You could use the Resource Relationship extension to capture the relationships between the different components.
- You could publish one occurrence per limb but use the same organismID, as this allows users to get all the occurrences from one given organism in the web interface. See this example: Search - Note that the GBIF “clustering” function only compares records across datasets, so it would not group records published in the same dataset.