The INFRAEOSC EU project FAIR-IMPACT has released its latest project result.
The publication identifies 17 metrics that can be used to assess research software against the FAIR Principles for Research Software (FAIR4RS Principles). It also provides an example implementation case study in the social sciences, carried out by EOSC Association (EOSC-A) member CESSDA, the Consortium of European Social Science Data Archives.
This is the first project result to be highlighted from the EOSC implementation Macro-Roadmap since its release in September. The Macro-Roadmap – a joint effort of the European Commission and EOSC-A, with the support of EOSC Focus – is a visual mapping of the results of the EU projects implementing EOSC. The tool displays project results over time and according to selected high-level objectives and the respective Action Areas of the EOSC Partnership’s Strategic Research & Innovation Agenda. In the present case, the FAIR-IMPACT report is mapped under Implementation Challenges → FAIR metrics and certification.
Assessing FAIR research software
The report, Metrics for automated FAIR software assessment in a disciplinary context, is a deliverable of FAIR-IMPACT’s Work Package 5, Metrics, Certification and Guidelines. It builds on the outputs of the RDA/ReSA/FORCE11 FAIR for Research Software Working Group as well as existing guidelines and metrics for research software. It also builds on community input from a workshop run at the Research Data Alliance Plenary Meeting 20 in Stockholm.
FAIR software can be defined as research software that adheres to the FAIR4RS principles, and the extent to which a principle has been satisfied can be measured against the criteria in a metric. This work on software metrics was coordinated with FAIR-IMPACT Work Package 4 on Metadata and Ontologies, in particular the deliverable Guidelines for recommended metadata standard for research software within EOSC, to ensure that the metrics are linked to the recommended metadata properties they depend on.
The FAIR-IMPACT project will work to implement the metrics as practical tests by extending existing assessment tools such as F-UJI. This work will be reported in Q2 2024. Feedback will be sought from the community through webinars and an open request for comments, resulting in a revised version of the report.
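To give a flavour of what such automated tests look like, the sketch below shows a minimal, hypothetical assessment routine in Python. It is not taken from the report or from F-UJI; the function name, the chosen file names, and the two checks (a licence file, loosely corresponding to FAIR4RS principle R1.1, and a `codemeta.json` with an `identifier` property, loosely corresponding to the Findability principles) are illustrative assumptions only. Real tools apply many more metrics and consult external registries.

```python
import json
from pathlib import Path


def assess_repository(repo_path):
    """Run two illustrative FAIR checks on a local software repository.

    This is a simplified stand-in for an automated assessment tool,
    not the actual F-UJI or FAIR-IMPACT implementation.
    Returns a dict mapping check names to booleans.
    """
    repo = Path(repo_path)
    results = {}

    # Illustrative check 1: a licence file is present
    # (cf. FAIR4RS R1.1: software is given a clear and accessible licence).
    results["license_present"] = any(
        (repo / name).is_file() for name in ("LICENSE", "LICENSE.md", "COPYING")
    )

    # Illustrative check 2: machine-readable metadata exists and carries
    # an identifier (cf. FAIR4RS Findability principles). codemeta.json is
    # one community convention for software metadata.
    has_identifier = False
    codemeta = repo / "codemeta.json"
    if codemeta.is_file():
        try:
            metadata = json.loads(codemeta.read_text(encoding="utf-8"))
            has_identifier = bool(metadata.get("identifier"))
        except json.JSONDecodeError:
            has_identifier = False
    results["metadata_identifier"] = has_identifier

    return results
```

A production tool would score each metric rather than return a plain pass/fail, and would resolve identifiers and licences against authoritative services, but the basic pattern of file- and metadata-level tests is the same.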