Result description
FAIR-IMPACT will build on the outputs of the RDA/ReSA/FORCE11 FAIR for Research Software WG and existing guidelines and metrics for research software to adapt and enhance the FAIR principles for research software.
Problem addressed
This deliverable from Task 5.2 (FAIR metrics for research software), “Metrics for automated FAIR software assessment in a disciplinary context”, is part of Work Package 5 on “Metrics, Certification and Guidelines” within the FAIR-IMPACT project. It builds on the outputs of the RDA/ReSA/FORCE11 FAIR for Research Software WG and on existing guidelines and metrics for research software to define metrics for assessing research software against the FAIR Principles for Research Software (FAIR4RS Principles). FAIR software can be defined as research software that adheres to these principles, and the extent to which a principle has been satisfied can be measured against the criteria set out in a metric. This work on software metrics was coordinated with Task 4.3 (Standard metadata for research software) from Work Package 4 on “Metadata and Ontologies”, which focuses on “Guidelines for recommended metadata standard for research software within EOSC”, to ensure that each metric is linked to its recommended metadata properties.
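To illustrate how a metric, its criteria, and a recommended metadata property can be related in a machine-actionable way, the minimal Python sketch below shows one possible representation; the identifier, field names, and example values are illustrative assumptions rather than the deliverable's actual schema.

    from dataclasses import dataclass, field

    @dataclass
    class SoftwareMetric:
        """One assessment metric tied to a FAIR4RS principle (illustrative, not the deliverable's schema)."""
        metric_id: str                  # hypothetical identifier for the metric
        principle: str                  # the FAIR4RS principle the metric addresses
        criteria: list[str] = field(default_factory=list)             # conditions to be satisfied
        metadata_properties: list[str] = field(default_factory=list)  # recommended metadata properties to check

    # Hypothetical findability metric: the software has a persistent identifier,
    # which an automated test could look for in the recommended metadata.
    example_metric = SoftwareMetric(
        metric_id="EX-F1",
        principle="F1: Software is assigned a globally unique and persistent identifier",
        criteria=["A release of the software has a resolvable persistent identifier, e.g. a DOI"],
        metadata_properties=["identifier"],
    )

    print(example_metric.principle)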
The deliverable defines 17 metrics that can be used to automate the assessment of research software against the FAIR4RS Principles, and provides examples of how they might be implemented in one exemplar disciplinary context, the social sciences. The FAIR-IMPACT project will then implement the metrics as practical tests by extending existing assessment tools such as F-UJI; this work will be reported in Q2 2024. Feedback will be sought from the community through webinars and an open request for comments, and the information from all these sources will be used to publish a revised version of the metrics.
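As an indication of the kind of practical test envisaged, the standalone Python sketch below checks whether a local software repository declares a license in its codemeta.json metadata file; it is a minimal illustration written for this description, not part of F-UJI, and the file layout and pass/fail behaviour are assumptions.

    import json
    from pathlib import Path

    def has_license_metadata(repo_path: str) -> bool:
        """Illustrative automated check: does codemeta.json declare a license?

        A minimal sketch only; a real assessment tool would also resolve the
        repository remotely and validate the licence against a registry such as SPDX.
        """
        codemeta = Path(repo_path) / "codemeta.json"
        if not codemeta.is_file():
            return False  # no machine-readable software metadata found
        try:
            metadata = json.loads(codemeta.read_text(encoding="utf-8"))
        except json.JSONDecodeError:
            return False  # metadata file exists but is not valid JSON
        # CodeMeta's "license" property should point to a licence document or SPDX URL.
        return bool(metadata.get("license"))

    if __name__ == "__main__":
        print(has_license_metadata("."))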