Analyzing/mining specimen data for novel applications

By generating demand for the data and knowledge generated by the basic efforts. When collections, collection institutions, and the collections community provide indispensable resources and services to society, their work will become essential - whether that of a taxonomic expert, collections specialist, assistant, or manager.

This sounds very much like inter- and transdisciplinary cooperation. I hadn’t connected collections with transdisciplinary work, though your thoughts and descriptions now remind me of the i2insights blog, which I find a great resource for information and inspiration: https://i2insights.org/



Part of the answer might be modular, versatile software with UIs and UX that can easily be modified and geared towards the specific needs and preferences of projects and users.

I am thinking about the software architectures of R (https://www.r-project.org/) and Nextcloud (https://nextcloud.com/). Both provide a general-purpose default environment that can be modified (by replacing default modules) and extended (with additional modules) according to the user's choices.
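A minimal sketch of what I mean by such a module registry, in Python and with purely hypothetical names - defaults come pre-registered, and a project can swap them out or add its own:

```python
# Minimal sketch of a replaceable/extendable module registry.
# All names (ModuleRegistry, "data_entry", "map_view", ...) are hypothetical.

from typing import Callable, Dict


class ModuleRegistry:
    """Holds default modules; users can replace or add modules by name."""

    def __init__(self) -> None:
        self._modules: Dict[str, Callable[[], str]] = {}

    def register(self, name: str, module: Callable[[], str]) -> None:
        # Registering under an existing name replaces the default module.
        self._modules[name] = module

    def run(self, name: str) -> str:
        return self._modules[name]()


registry = ModuleRegistry()
registry.register("data_entry", lambda: "default data-entry form")

# A project replaces the default form and adds an extra module of its own.
registry.register("data_entry", lambda: "simplified citizen-science form")
registry.register("map_view", lambda: "interactive occurrence map")

print(registry.run("data_entry"))  # -> simplified citizen-science form
print(registry.run("map_view"))    # -> interactive occurrence map
```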

For example, a new user in a citizen science project might start out encountering a highly simplified interface, which allows them to add reports, field observations, etc. to the project's dataset quickly and intuitively. This can result in high-quality, standardized data, even though the new user doesn't (yet) fully understand the project, the input fields, and so on.
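Roughly like this (a sketch only - the project name and field names are invented, not taken from a particular standard): the project configuration supplies the fixed, standard-conformant parts, and the new user only fills in a few intuitive fields.

```python
# Sketch: a deliberately simple input still yields a standardized record,
# because the project configuration contributes the fixed parts.
from datetime import date
from typing import Dict, Optional

PROJECT_DEFAULTS = {
    "datasetName": "City Park Pollinator Survey",   # hypothetical project
    "basisOfRecord": "HumanObservation",
    "coordinateSystem": "WGS84",
}


def simple_observation(species: str, lat: float, lon: float,
                       when: Optional[date] = None) -> Dict[str, str]:
    """Turn three intuitive inputs into a complete, standardized record."""
    record = dict(PROJECT_DEFAULTS)
    record.update({
        "scientificName": species.strip(),
        "decimalLatitude": f"{lat:.5f}",
        "decimalLongitude": f"{lon:.5f}",
        "eventDate": (when or date.today()).isoformat(),
    })
    return record


print(simple_observation("Bombus terrestris", 52.52, 13.405))
```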

Still, over time even a digital non-native might grow in confidence about what they are doing; they might then want to explore the digital environment further, have more options, be more in charge, and even start their own projects. The software therefore needs to offer different levels of power-user functionality at the same time. As an example, consider Inkscape (https://inkscape.org/): I only use the graphical interface, though all of its functionality can also be accessed via the command line.
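In code, that "one core, several surfaces" idea could look something like the sketch below (Python, with an invented function and option names): the same core function is reachable from a simplified UI and, for those who have grown into it, from the command line.

```python
# Sketch: the same core functionality exposed to both a simple UI call
# and a command-line interface. Names and options are made up.
import argparse
from typing import List


def count_records(dataset: List[str], taxon: str) -> int:
    """Core functionality, independent of any user interface."""
    return sum(1 for name in dataset if name == taxon)


def main() -> None:
    # Power users reach the same core via a CLI, e.g.:
    #   python records.py --taxon "Bombus terrestris"
    parser = argparse.ArgumentParser(description="Count records for a taxon")
    parser.add_argument("--taxon", required=True)
    args = parser.parse_args()

    demo_dataset = ["Bombus terrestris", "Apis mellifera", "Bombus terrestris"]
    print(count_records(demo_dataset, args.taxon))


if __name__ == "__main__":
    main()
```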

At the level of project leaders, give them all the freedom to design their input forms and data structures. At the same time, provide guidance, e.g. via templates and context-dependent information, so that it is easy for them to make decisions that keep their project adhering to and compatible with standards, and fulfilling minimum data requirements.
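A sketch of what such template-guided form design could look like (again with illustrative field names and an invented "minimum requirements" list): the template already contains the required fields, and a simple check warns the project leader if their customized form would drop one of them.

```python
# Sketch: template-guided form design with a minimum-data-requirements check.
MINIMUM_REQUIRED_FIELDS = {"scientificName", "eventDate",
                           "decimalLatitude", "decimalLongitude"}

OBSERVATION_TEMPLATE = sorted(MINIMUM_REQUIRED_FIELDS) + ["recordedBy", "notes"]


def validate_form_design(fields: list) -> list:
    """Return the required fields that the designed form is still missing."""
    return sorted(MINIMUM_REQUIRED_FIELDS - set(fields))


# A project leader starts from the template, removes a field,
# and adds project-specific ones.
custom_form = [f for f in OBSERVATION_TEMPLATE if f != "eventDate"]
custom_form += ["hostPlant", "flowerColour"]

missing = validate_form_design(custom_form)
if missing:
    print("Warning: the form is missing required fields:", missing)
```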

By enabling users and projects to reuse the same, already familiar platform for different purposes (e.g. new projects), through the ability to design their own input forms and workflows, users will keep coming back, interacting with the project, entering data, and so on - no matter whether they are newbies, power users, or project leaders. There is no “oh god, another program that I need to learn first” moment.

Plus, if the software allows it, they can see their data entries in the context of the existing dataset and of additional datasets (closely related ones, or ones of personal interest). They immediately see the progress their additions bring to the project and might thus recognize gaps, which they may be able to close easily.
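Such immediate feedback could be as simple as the following sketch (dataset and coverage target invented): after new entries are added, the updated dataset is compared against the expected coverage and the remaining gaps are reported back to the contributor.

```python
# Sketch: report which parts of an assumed project season still lack records.
from collections import Counter

existing_dataset = ["2024-03", "2024-04", "2024-04", "2024-06"]
new_entries = ["2024-05", "2024-05"]

updated = existing_dataset + new_entries
per_month = Counter(updated)

all_months = [f"2024-{m:02d}" for m in range(3, 9)]   # assumed project season
gaps = [m for m in all_months if m not in per_month]

print("Records per month:", dict(per_month))
print("Months still without records:", gaps)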

The architecture and functionality considerations on the data-provider side should be mirrored on the data-user side, for data discovery, use, and export/interoperability.
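On the export/interoperability side, even a very plain mechanism goes a long way - a sketch (column names mirror the illustrative fields above, not a formal standard):

```python
# Sketch: export records as CSV with a fixed column order for reuse elsewhere.
import csv
import io

records = [
    {"scientificName": "Bombus terrestris", "eventDate": "2024-05-12",
     "decimalLatitude": "52.52000", "decimalLongitude": "13.40500"},
    {"scientificName": "Apis mellifera", "eventDate": "2024-05-13",
     "decimalLatitude": "52.51800", "decimalLongitude": "13.40100"},
]

COLUMNS = ["scientificName", "eventDate", "decimalLatitude", "decimalLongitude"]

buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=COLUMNS)
writer.writeheader()
writer.writerows(records)

print(buffer.getvalue())
```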

