Thanks @emeyke and @tkarim - I’m glad to see the interest in this idea. The Arctos example is an excellent one. It shows how a collection management system can be a tool for curating information from the knowledge-graph side - ownership of and responsibility for data held by the institution/researcher, combined with the ability to mesh with BOLD and ecological data systems. I know that some other collection management systems (including EarthCape and Specify) have similar interests.
“Most natural history collections maintain data on their specimens in a collection management system (CMS) such as Specify, Symbiota, EMu, DarWIN or BRAHMS.”
may not be exactly true in many parts of the world. Unless you can call Excel a collection management system…
For those using CMSs, interfaces with the catalogue would be great and save people a lot of time and effort. But we should not neglect those using simple spreadsheets, for whom other solutions would be needed (the beauty of the IPT model, allowing just uploading a simple file).
Thanks @pzermoglio. This is very much in line with our thinking on the sketch we put together. Going beyond even the IPT model, which requires someone to install a server, having some simple web forms to fill in, along with the ability to upload Excel files with standardised field headings, could really lower the technical threshold for many to have an online search portal. Excel and databases like FileMaker are commonplace, and we need a technical solution that allows easy participation. I propose we consider offering that natively in the catalogue itself.
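To make the “standardised field headings” idea concrete, here is a minimal sketch of a pre-upload check a catalogue could run on a submitted spreadsheet. The heading names below are placeholders I invented for illustration, not an agreed standard:

```python
# Sketch of a pre-upload check: compare a submitted file's column headings
# against a standardised set before accepting the upload.
# The heading names here are hypothetical, not an agreed standard.

EXPECTED_HEADINGS = {"collectionName", "institutionCode", "preservationType", "specimenCount"}

def check_headings(headings):
    """Return (missing, unrecognised) heading sets for a submitted file."""
    provided = set(headings)
    return EXPECTED_HEADINGS - provided, provided - EXPECTED_HEADINGS

missing, unrecognised = check_headings(["collectionName", "institutionCode", "numSpecimens"])
# missing tells the uploader which standard headings are absent;
# unrecognised flags columns the catalogue would not know how to map.
```

A check like this could sit behind a simple web form, giving spreadsheet users immediate feedback without requiring any server installation on their side.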
@trobertson if we are talking about just the collections catalogue and maintaining one’s own collection record, then Excel upload seems like a bit of overkill. A simple web form for creating/updating the record should do it.
Well this is interesting and I might have missed a big discussion on that. Are we talking about direct upload of occurrence datasets via csv/excel files into GBIF? We can take this elsewhere as it seems to be drifting off the current topic.
One aspect where a link between a CMS and a Global Catalogue might be useful is to provide metadata on digitization progress in a collection. As an example, Index Herbariorum entries now have this optional set of fields (data shown for NY Botanical Garden):
Thanks, @Rich87. Please know GBIF are actively working on bringing this into GRSciColl, and it is available in the API already. For example, you can see these counts in the collectionSummary field at the bottom of the response for NY Botanical Garden. Today this is populated only for records synced from IH (which now happens automatically), but it will be expanded to more collections and surfaced in the user interface.
This is limited of course, and more expressive descriptors should be available when metadata in (N)CD Standard is provided.
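For anyone wanting to try this, here is a sketch of reading the collectionSummary block from a GRSciColl collection record as returned by the GBIF API. The trimmed response below is illustrative only, and the summary keys shown are assumptions rather than a guaranteed schema:

```python
import json

# Sketch: parse the collectionSummary block from a GRSciColl collection
# record (e.g. GET /v1/grscicoll/collection/{key} on api.gbif.org).
# This sample response is trimmed and the summary keys are illustrative,
# not a guaranteed schema.

sample_response = json.loads("""
{
  "name": "Example Herbarium",
  "collectionSummary": {"numTotal": 7800000, "numDatabased": 2900000, "numImaged": 3100000}
}
""")

summary = sample_response.get("collectionSummary", {})
# Derive a simple digitization-progress indicator from the counts.
databased_share = summary["numDatabased"] / summary["numTotal"]
```

In practice you would fetch the JSON over HTTP with the collection’s key; the parsing step stays the same.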
Given the existence of this operational infrastructure today, I wonder if we should consider exploring a revision of the EML profile in use by GBIF and others, along with promotion of metadata-only resource sharing as @dhobern described in this thread.
I am very curious about the MIDS standard - where could I get more information about that standard? I think this is a very pertinent issue that likely is of interest to quite a few people in our community.
I’m hoping we can have a sort of “software summit” to gather developers and some of the key users of these software platforms for a decadal meeting (where have we come in 10 years, and where do we want to go?). In this meeting, we’d talk about needs and a path to integrating TDWG DQ standards into CMSs, and about interoperability needed / opportunities for more interoperability. Example: agents tables linked to Wikidata. And we could move toward (or at least discuss the feasibility of) “metadata” tables in the CMS to reduce the burden of creating, mapping, exporting, updating, and publishing these data. This would also have the added benefit of simplifying annual reporting. As you point out @elyw, often these data must be compiled anew each time they’re needed locally, let alone regionally or globally. The globally aggregated understanding of what we have is still far from complete, and very difficult to estimate. As we continue to digitize, our estimates will continue to improve.
This is an interesting idea to explore as a possible STEP 1.
If we added CD fields to it, I could see it working.
My main observation is that many of the EML fields, being free text, often result in confusing data.
Sometimes collections describe their dataset in the EML file. Other times they describe their entire collections (which may not all be included in the dataset). And sometimes, a mix. This makes it difficult to understand what’s being described in the EML w/o looking at the associated data records.
If we could use the EML (+ CD fields) and then
a) make it clear we’re looking for metadata, and
b) make it possible to link to any specimen-record-level datasets being shared,
this could be a first pass at a workable system, I think.
Better still, once collections have metadata tables in their databases, a view of the table data could be linked automatically to the IPT.
How might one link a collection’s Wikidata page into this vision of using the IPT for EML metadata files?
Among the various ways of providing data to the catalogue that have been mentioned in this and other topics of this consultation, integration with a CDMS would be my preferred route.
I would like to be able to manage collection information (that is then channeled to this catalogue or other destinations) seamlessly alongside information on the specimens that make up these collections. Once a collection is defined in terms of the specimens that are its constituent parts, or by other criteria, I would want automatic updates of information aggregated at the level of the collection based on the specimen-level information, and I would like to be able to configure automatic publication of that collection-level data.
I would not want to compile, manage and export data at the collection level with a second, independent system.
This might be especially useful for the wider data linkages based on aggregated specimen data and corresponding services discussed in topics 2.5 and 2.6, respectively.
Also, when collections are defined according to intensional criteria (e.g., all specimens collected by a particular collector), such integration would automatically pick up on updated knowledge about the specimens - we expect that digitization will expose many such hitherto unknown specimen properties.
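The intensional idea above can be sketched in a few lines: the collection is a filter over specimen records, and its summary is recomputed from whatever currently matches. The field names and records below are illustrative only:

```python
from collections import Counter

# Sketch: a collection defined intensionally ("all specimens collected by
# a particular collector"), with collection-level information aggregated
# automatically from specimen-level records. Field names are illustrative.

specimens = [
    {"catalogNumber": "A-001", "recordedBy": "Smith", "family": "Rosaceae"},
    {"catalogNumber": "A-002", "recordedBy": "Smith", "family": "Fabaceae"},
    {"catalogNumber": "B-001", "recordedBy": "Jones", "family": "Rosaceae"},
]

def collection_summary(records, criterion):
    """Aggregate the specimen records matching an intensional criterion."""
    members = [r for r in records if criterion(r)]
    return {
        "size": len(members),
        "families": Counter(r["family"] for r in members),
    }

# Re-running this after specimens gain new or corrected attributions picks
# up the change automatically - no separately maintained collection record.
smith = collection_summary(specimens, lambda r: r["recordedBy"] == "Smith")
```

The point is that the collection-level view is derived, never hand-compiled, so newly digitized or re-attributed specimens flow into it for free.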
It is not quite that simple, at least for Symbiota: it functions like Specify, Arctos, EMu, etc., but it also functions as a primary aggregator, like VertNet or SpeciesLink. You can either enter data directly into Symbiota, use Excel and upload a CSV version, or batch-upload an EMu dataset via the IPT.
From our experience developing an integrated Biodiversity Information Management System for collections of different types, one of the requirements was to support collection management, particularly the annual estimation of specimens in the different collections. This is important because the collections are considered National Heritage in some of our countries, and specimens are never given away; they are lent in perpetuity, so they need to be tracked. But this can’t be accurately calculated or tracked unless the whole operation is digitized, which was never the case: there was always something in the backlog, something being prepared, something from the wet collection being mounted, something not there yet, so estimations at the catalogue level are always necessary. Apart from the challenge of creating and maintaining totals for collections that couldn’t be added, considering the loans/duplicates sent to other institutions was vital in determining how realistic the estimations were, and country reports were dependent on this information at the catalogue level.