FAIR Assessment Tools: Towards an “Apples to Apples” Comparison

Relevance

The report emphasises the need for standardisation of FAIR assessment tools so that they deliver consistent and reliable evaluations of the FAIRness of research data. It discusses the adoption of FAIR Signposting to guide data publishing and metadata harvesting, bringing clarity and uniformity to FAIRness assessments. The report targets stakeholders in data management, highlighting the collaborative efforts to harmonise FAIR validation tools and to improve data interoperability and reusability across research communities.
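
Signposting works by attaching typed links (HTTP Link headers or HTML link elements) to a landing page so that harvesting agents can locate identifiers, metadata, and content without guesswork. As a rough illustration of the harvesting side, and not tooling from the report itself, the following Python sketch uses the requests library to read common Signposting relations such as cite-as, describedby, and item from a landing page's headers; the URL is a placeholder.

    import requests

    # Hypothetical landing page URL; any real dataset landing page would do.
    LANDING_PAGE = "https://example.org/dataset/123"

    # Signposting link relations a harvester typically looks for:
    #   cite-as     -> persistent identifier of the object
    #   describedby -> machine-actionable metadata records
    #   item        -> the content files making up the object
    SIGNPOSTING_RELS = ("cite-as", "describedby", "item", "license", "type")

    resp = requests.head(LANDING_PAGE, allow_redirects=True, timeout=10)

    # requests parses the HTTP Link header into a dict keyed by rel value.
    for rel in SIGNPOSTING_RELS:
        link = resp.links.get(rel)
        print(f"{rel}: {link['url'] if link else 'not advertised'}")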

Scope

The scope of this whitepaper is to establish a governance model that ensures FAIRness assessments are understandable and trusted, helping tools and services provide clear and consistent evaluations. It concentrates on defining the metrics and maturity indicators to be tested, and on how FAIRness can be represented both qualitatively and quantitatively; a minimal sketch of such a representation follows.
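
To make the qualitative/quantitative distinction concrete, here is a minimal sketch, not drawn from the report, of one way an assessment result could be structured: each maturity indicator receives a qualitative outcome with supporting evidence, and an aggregate score summarises the result quantitatively. All identifiers and field names are illustrative.

    from dataclasses import dataclass, field

    @dataclass
    class IndicatorResult:
        indicator_id: str   # identifier of a maturity indicator (illustrative)
        outcome: str        # qualitative result: "pass", "fail", or "indeterminate"
        evidence: str = ""  # human-readable justification for the outcome

    @dataclass
    class FairnessAssessment:
        resource: str       # PID or URL of the assessed object
        results: list = field(default_factory=list)

        def score(self) -> float:
            """Quantitative view: fraction of tested indicators that passed."""
            if not self.results:
                return 0.0
            return sum(r.outcome == "pass" for r in self.results) / len(self.results)

    assessment = FairnessAssessment(resource="https://doi.org/10.1234/example")
    assessment.results.append(IndicatorResult("F1-01", "pass", "Persistent identifier found"))
    assessment.results.append(IndicatorResult("I2-01", "fail", "No FAIR vocabulary detected"))
    print(f"{assessment.score():.0%} of tested indicators passed")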

Main highlights

The main highlights of the report include the organisation of a hackathon divided into two groups. The first group concentrated on FAIR metric testing and evaluation, while the second worked on developing a comprehensive benchmark reference environment, referred to as the Apples-to-Apples (A2A) benchmark repository. This repository is crucial because it enables the continuous creation of new reference challenges to accommodate evolving interpretations of FAIR Signposting and the inclusion of expert domains with their own specific standards. These benchmarks are intended to keep FAIR assessment tools uniform with one another and to improve the consistency of metadata harvesting across different domains; a simplified sketch of such a benchmark comparison follows.
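
The principle behind the A2A repository can be illustrated with a purely hypothetical sketch: reference challenges whose expected outcomes are agreed in advance are run through several assessment tools, and any divergence is flagged. The tool stubs and challenge data below are invented for illustration.

    # Hypothetical reference challenge: a resource whose FAIRness properties
    # are known in advance, so tool outputs can be compared like for like.
    REFERENCE_CHALLENGE = {
        "resource": "https://example.org/a2a/challenge-001",
        "expected": {"F1-01": "pass", "A1-01": "pass", "I2-01": "fail"},
    }

    # Stand-ins for real assessment tools; each maps indicator -> outcome.
    def tool_a(resource):
        return {"F1-01": "pass", "A1-01": "pass", "I2-01": "fail"}

    def tool_b(resource):
        return {"F1-01": "pass", "A1-01": "fail", "I2-01": "fail"}

    for name, tool in [("tool_a", tool_a), ("tool_b", tool_b)]:
        observed = tool(REFERENCE_CHALLENGE["resource"])
        mismatches = {
            indicator: (expected, observed.get(indicator))
            for indicator, expected in REFERENCE_CHALLENGE["expected"].items()
            if observed.get(indicator) != expected
        }
        status = "consistent" if not mismatches else f"diverges: {mismatches}"
        print(f"{name}: {status}")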

Key recommendations

The key recommendations from the report emphasise the need for harmonisation among FAIR assessment tools so that they operate uniformly across different (meta)data publishing paradigms. The report highlights the importance of FAIR Signposting for consistent metadata harvesting workflows and encourages its adoption by key stakeholders such as funding agencies and the EOSC-A. It also encourages the development and testing of new FAIR assessment tools within established benchmarking environments to minimise variation in FAIRness evaluations, anticipating that this harmonisation process will continue to evolve to accommodate new types of research outputs beyond data, such as software and workflows.