Thanks @elyw. It will only make sense for these functions to be included within collection management systems if the following things are true:
There are benefits to institutions from having good, current information on their collections readily accessible. Maybe there are parallels here with ORCID. As ORCIDs increasingly become a useful tool for researchers to integrate and organise information about their work, having one becomes more essential. If collection records come to be important linkage points for information and services that matter to collections, they will become mission-critical.
There are efficient publication pathways for collection records that can readily be integrated into the workflow of collection management systems - management of a TDWG Collection Descriptions record could make this true.
In terms of metadata, I also think this is a great idea. If this could be automated and pulled periodically from the database it is one less thing a collection manager has to do or remember to update.
In terms of metrics, there was a presentation at the SPNHC 2019 meeting about Arctos interfacing with some external sources and then providing metrics on things like publication and use for specimens. I would love to see something like that for Specify. It would make compiling annual reports easier and also make it easier to demonstrate the impact of our collection.
Thanks @emeyke and tkarim - I’m glad to see the interest in this idea. The Arctos example is a really excellent one. It shows how a collection management system can be a tool for curating information from the institution's side of the knowledge graph: ownership and responsibility for data owned by the institution/researcher, combined with the ability to mesh with BOLD and ecological data systems. I know that some other collection management systems (including EarthCape and Specify) have similar interests.
“Most natural history collections maintain data on their specimens in a collection management system (CMS) such as Specify, Symbiota, EMu, DarWIN or BRAHMS.”
may not be exactly true in many parts of the world. Unless you can call Excel a collection management system…
For those using CMSs, interfaces with the catalogue would be great and save people a lot of time and effort. But we should not neglect those using simple spreadsheets, for whom other solutions would be needed (the beauty of the IPT model, allowing just uploading a simple file).
Thanks @pzermoglio. This is very much in line with our thinking on the sketch we put together. Going beyond even the IPT model, which requires someone to install a server, having some simple web forms to fill in, along with the ability to upload Excel files with standardised field headings, could really lower the technical threshold for many to have an online search portal. Excel and databases like FileMaker are commonplace, and we need a technical solution that allows easy participation. I propose we consider offering that natively in the catalogue itself.
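To make the idea concrete, here is a minimal sketch of how an upload endpoint might check that a submitted spreadsheet uses standardised field headings. The heading names below are purely illustrative, not the actual TDWG Collection Descriptions terms:

```python
import csv
import io

# Illustrative heading set only -- NOT the real TDWG Collection
# Descriptions terms; a production catalogue would use the published
# standard's term names here.
EXPECTED_HEADINGS = {"collectionName", "institutionCode",
                     "taxonomicScope", "specimenCount", "contactEmail"}

def validate_upload(file_text: str):
    """Check an uploaded CSV's header row against the standard headings.

    Returns (ok, missing_headings).
    """
    reader = csv.reader(io.StringIO(file_text))
    headings = set(next(reader, []))
    missing = EXPECTED_HEADINGS - headings
    return (not missing, missing)

# Example: a file missing one required column.
ok, missing = validate_upload(
    "collectionName,institutionCode,taxonomicScope,contactEmail\n"
    "Example Herbarium,XYZ,Plantae,curator@example.org\n")
print(ok, missing)  # False {'specimenCount'}
```

A web form could run the same check client-side before the file ever leaves the collection manager's machine.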
@trobertson if you are talking about just a collections catalogue and maintaining one's own collection record, then an Excel upload seems like a bit of overkill. A simple web form for creating/updating the record should do it.
Well this is interesting and I might have missed a big discussion on that. Are we talking about direct upload of occurrence datasets via csv/excel files into GBIF? We can take this elsewhere as it seems to be drifting off the current topic.
One aspect where a link between a CMS and a Global Catalogue might be useful is to provide metadata on digitization progress in a collection. As an example, Index Herbariorum entries now have this optional set of fields (data shown for NY Botanical Garden):
Thanks, @Rich87. Please know GBIF are actively working on bringing this into GRSciColl, and it is available in the API already. For example, you can see these counts in the collectionSummary field at the bottom of the response for NY Botanical Garden. Today this is populated only for records synced from IH (which now happens automatically), but it will be expanded to more collections and surfaced in the user interface.
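For anyone who wants to explore this programmatically: the sketch below pulls the collectionSummary out of a GRSciColl collection record. The JSON here is a trimmed, made-up stand-in (summary key names and counts are placeholders, not the API's actual values); a real record comes from `https://api.gbif.org/v1/grscicoll/collection/{uuid}`:

```python
import json

# Trimmed, ILLUSTRATIVE response shape. Fetch a real record from
# https://api.gbif.org/v1/grscicoll/collection/{uuid}; the summary keys
# and counts below are invented placeholders.
sample_response = """
{
  "name": "Example Herbarium",
  "collectionSummary": {
    "totalSpecimens": 7800000,
    "databasedSpecimens": 3100000,
    "imagedSpecimens": 2500000
  }
}
"""

record = json.loads(sample_response)
summary = record.get("collectionSummary") or {}
for metric, count in sorted(summary.items()):
    print(f"{metric}: {count:,}")
```

A CMS could run the same parse on its own synced record to compare the registry's view against local digitization counts.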
This is limited of course, and more expressive descriptors should be available when metadata in (N)CD Standard is provided.
Given the existence of this operational infrastructure today, I wonder if we should consider exploring a revision of the EML profile in use by GBIF and others, along with promotion of metadata-only resource sharing as @dhobern described in this thread.
I am very curious about the MIDS standard - where could I get more information about that standard? I think this is a very pertinent issue that likely is of interest to quite a few people in our community.
I’m hoping we can have a sort of “software summit” to gather developers and some of the key users of these software platforms for a decadal meeting (where have we come in 10 years, where do we want to go?). In this meeting, we’d talk about needs and a path to integrating TDWG DQ standards into CMSs, and about the interoperability needed, or opportunities for more interoperability. Example: agents tables linked to Wikidata.

We could also move toward (or at least discuss the feasibility of) “metadata” tables in the CMS to reduce the burden of creating, mapping, exporting, updating, and publishing these data. This would have the added benefit of simplifying annual reporting. As you point out @elyw, often this data must be compiled anew each time it’s needed locally, let alone regionally or globally. The globally aggregated understanding of what we have is still far from complete and very difficult to estimate. As we continue to digitize, our estimates will continue to improve.
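The agents-table example above could be as simple as storing a global identifier next to each local name, so that exports carry resolvable links rather than free-text strings. A minimal sketch, assuming Wikidata QIDs and ORCID iDs as the linkage points (the rows and identifiers are invented examples):

```python
# Sketch of a CMS "agents" table carrying global identifiers alongside
# local names. All rows and identifiers below are invented examples.
agents = [
    {"local_id": 1, "name": "A. Collector",
     "wikidata": "Q1234567", "orcid": None},
    {"local_id": 2, "name": "B. Determiner",
     "wikidata": None, "orcid": "0000-0002-1825-0097"},
]

def agent_uri(agent):
    """Prefer a Wikidata item URI, fall back to ORCID, else None."""
    if agent.get("wikidata"):
        return f"http://www.wikidata.org/entity/{agent['wikidata']}"
    if agent.get("orcid"):
        return f"https://orcid.org/{agent['orcid']}"
    return None

for a in agents:
    print(a["name"], "->", agent_uri(a))
```

With a column like this in place, a "publish metadata" step can emit the same agent links every time instead of recompiling them for each report.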
This is an interesting idea to explore as a possible STEP 1.
If we added CD fields to it, I could see it working.
My main observation is that many of the EML fields, being free text, often result in confusing data.
Sometimes collections describe their dataset in the EML file. Other times they describe their entire collections (which may not all be included in the dataset). And sometimes, a mix. This makes it difficult to understand what’s being described in the EML w/o looking at the associated data records.
IF we could use the EML (+ CD fields) and then
a) make it clear we’re looking for metadata
b) make it possible to link to any specimen-record-level datasets being shared
This could be a first pass at a workable system I think.
Even better: once collections have metadata tables in their databases, a view of the table data could be linked automatically to the IPT.
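To sketch what a metadata-only record built from such a table might look like: the snippet below assembles a tiny EML-like document. The namespace matches EML 2.1.1 as used in GBIF's metadata profile, but the `collectionDescription` element is a hypothetical placeholder for whatever CD fields a revised profile might add, not a real EML or CD term:

```python
import xml.etree.ElementTree as ET

# Namespace from EML 2.1.1 (GBIF metadata profile). The
# "collectionDescription" block below is a HYPOTHETICAL stand-in for
# future CD fields, not part of EML or the CD standard.
EML_NS = "eml://ecoinformatics.org/eml-2.1.1"
ET.register_namespace("eml", EML_NS)

eml = ET.Element(f"{{{EML_NS}}}eml")
dataset = ET.SubElement(eml, "dataset")
ET.SubElement(dataset, "title").text = "Example Herbarium collection metadata"
ET.SubElement(dataset, "abstract").text = (
    "Metadata-only record describing the whole collection, "
    "not a specimen-level dataset.")
cd = ET.SubElement(dataset, "collectionDescription")  # placeholder element
ET.SubElement(cd, "specimenCount").text = "125000"    # invented count

xml_text = ET.tostring(eml, encoding="unicode")
print(xml_text)
```

The abstract here also shows how point (a) above could be enforced: the record states explicitly that it describes the whole collection rather than a dataset.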
How might one link a collection’s wikidata page into this vision of using the IPT for EML metadata files?