FAIRness Reference Model

Q4 2026
  • FAIR metrics & certification
  • Result description

    OSTrails will produce the FAIRness reference model, a metadata schema for identifying and exchanging tests, i.e., the software components that test the FAIR metrics. Such a schema is currently missing from the landscape, and OSTrails is developing it not only for digital objects such as data, but also for software and other disciplinary data types, in close collaboration with the EOSC-A Task Force on FAIR Metrics and Digital Objects [Elli Papadopoulou and Mark Wilkinson are the co-chairs]. The old Task Force was asked to create a roadmap of deliverables for the new task force. To avoid confusion, OSTrails made sure that project objectives remain separate from those of the task force.

    Problem addressed

    It is currently not common practice to share FAIR assessment results. A data schema is therefore needed to identify results when they are shared, so that they can be included in scientific knowledge graphs and in DMPs, and "consumed" from DMPs to assess the information they contain. In addition, OSTrails is working on FAIR assessment guidance: not only providing a score after a test, but also guiding users on how to improve. A further problem is consensus. Different research communities do not agree on what constitutes FAIRness for their data; some are strict, others lax. By providing a common definition and the tests for FAIRness, OSTrails addresses both sharing and consensus: through the definition, both humans and machines gain a way to evaluate FAIRness. OSTrails will provide tools plus tests, plus an additional governance layer to streamline the criteria for FAIRness.
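To make the idea of a shareable, machine-consumable assessment result concrete, here is a minimal sketch in Python. All field names (`subject`, `test`, `metric`, `passed`, `guidance`) and identifiers are illustrative assumptions, not the actual OSTrails schema, which is still being produced.

```python
# Hypothetical sketch: a machine-readable FAIR assessment result record
# that could be shared, loaded into a knowledge graph, or consumed by a
# DMP tool. Field names are assumptions for illustration only.

import json

def make_assessment_record(dataset_pid, test_id, metric, passed, guidance):
    """Build a minimal, shareable FAIR assessment result."""
    return {
        "subject": dataset_pid,   # PID of the digital object assessed
        "test": test_id,          # identifier of the test software used
        "metric": metric,         # the FAIR metric the test checks
        "passed": passed,         # boolean outcome of the test
        "guidance": guidance,     # how to improve if the test failed
    }

record = make_assessment_record(
    dataset_pid="https://doi.org/10.1234/example",
    test_id="test:metadata-persistent-id",
    metric="F1: (Meta)data are assigned a globally unique and persistent identifier",
    passed=False,
    guidance="Register the dataset with a PID-minting service.",
)

# Serializing to JSON is one way such a record could travel between tools.
print(json.dumps(record, indent=2))
```

Note the `guidance` field: it reflects the point above that an assessment should not stop at a score but should tell the user how to improve.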

    Governance: There are competing interpretations of what constitutes FAIRness, along with differences in the willingness of various infrastructures to address their specific needs with respect to FAIR data. OSTrails provides the tools to share both the results and the tests. Governance here concerns maintaining these definitions of, and criteria for, FAIRness.

    The landscape of FAIR assessment tools currently lacks alignment: different tools that claim to perform FAIR assessment produce different results on the same dataset. OSTrails proposes to harmonize the landscape by working with 10 different FAIR assessment tools.
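A brief sketch of what harmonization means in practice: mapping each tool's native output onto one shared result shape so outputs become comparable. The two tool output formats and the common field names below are assumptions for illustration; they are not the real interfaces of any existing assessment tool.

```python
# Hypothetical sketch of harmonizing heterogeneous FAIR-tool outputs.
# "tool_a" and "tool_b" and their output shapes are invented examples.

def normalize(tool_name, raw):
    """Map one tool's native output onto a shared result shape."""
    if tool_name == "tool_a":   # assumed to return {"metric": ..., "ok": bool}
        return {"metric": raw["metric"], "passed": raw["ok"]}
    if tool_name == "tool_b":   # assumed to return {"check": ..., "score": 0..1}
        return {"metric": raw["check"], "passed": raw["score"] >= 0.5}
    raise ValueError(f"unknown tool: {tool_name}")

results = [
    normalize("tool_a", {"metric": "F1", "ok": True}),
    normalize("tool_b", {"check": "F1", "score": 0.9}),
]

# Once results share a shape, disagreement between tools becomes visible.
agree = len({r["passed"] for r in results}) == 1
print("tools agree on F1:", agree)
```

The design point is that harmonization does not force every tool to run the same code; it only requires that their outputs map onto a common, comparable record.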

    Who can use the result

    Researchers, Funders, Publishers

    Timeline

    Q1 (2025) and Q4 (2026) for the toolbox of testing services and thematic evaluation extensions

    How to use the result