
Adopting FAIR research software practices

What is FAIR research software?

FAIR stands for Findable, Accessible, Interoperable, and Reusable and comprises a set of principles designed to increase the visibility and usefulness of your research to others. The FAIR data principles, first published in 2016, are widely known and are applied today to other areas too, including software, scientific workflows, and machine learning projects.

The FAIR principles applied to software mean that it should be:

  • Findable - software and its associated metadata must be easy to discover by humans and machines.
  • Accessible - in order to reuse software, the software and its metadata must be retrievable by standard protocols, free and legally usable.
  • Interoperable - when interacting with other software, it should exchange data and/or metadata through standardised protocols and application programming interfaces (APIs).
  • Reusable - software should be usable (can be executed) and reusable (can be understood, modified, built upon, or incorporated into other software).

Let’s have a quick look at what each of the above principles means in practice. Five Recommendations for FAIR Software also gives a quick overview of what making software more FAIR entails.

How can we make our software findable?
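
One concrete step towards findability is shipping machine-readable metadata alongside the code. As a sketch, a minimal `CITATION.cff` file lets both humans and indexing services discover and cite the software; all of the names, versions, dates, and the DOI below are hypothetical placeholders, not a real deposit:

```yaml
# Minimal CITATION.cff sketch; every value here is a hypothetical placeholder.
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
title: "example-tool"
version: "1.0.0"
doi: "10.5281/zenodo.1234567"   # placeholder DOI, not a real record
date-released: "2024-01-15"
authors:
  - family-names: "Doe"
    given-names: "Jane"
```

Services such as code repositories and software registries can parse files like this automatically, which is what makes the metadata discoverable "by machines" as well as by people.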

How can we make our software accessible?

  • Make sure people can obtain a copy of your software using standard communication protocols (e.g. HTTP, FTP, etc.)
  • The code and its description (metadata) should be available even when the software is no longer actively developed (this includes earlier versions of the software) - see [software archiving][archiving_software]
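
The "standard communication protocols" point can be illustrated with plain HTTPS plus the DOI resolver: a persistent identifier resolves to the archive's landing page, so anyone with a stock HTTP client can retrieve the software. The DOI below is a hypothetical placeholder, and the sketch only constructs the request rather than sending it:

```python
import urllib.request

# Hypothetical placeholder DOI for an archived software release.
doi = "10.5281/zenodo.1234567"

# A DOI resolves over standard HTTPS via the doi.org resolver, so no
# special tooling is needed to reach the archived copy.
url = f"https://doi.org/{doi}"
request = urllib.request.Request(url, headers={"Accept": "application/json"})
print(request.full_url)
```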

How can we make our software interoperable?
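
As a small illustration of exchanging data through standardised formats, serialising to JSON means any JSON-aware tool, in any language, can consume the output. The record below is invented for the example:

```python
import json

# Hypothetical record produced by one tool and consumed by another.
record = {"tool": "example-tool", "version": "1.0.0", "inputs": ["a.csv"]}

payload = json.dumps(record)         # serialise to a standard format
round_tripped = json.loads(payload)  # any JSON-aware consumer can read it back

assert round_tripped == record
print(payload)
```

The same idea applies to standardised APIs: agreeing on a common, documented exchange format is what lets independently developed tools interoperate.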

How can we make our software reusable?

FAIR and quality

FAIR software sits squarely within the broader umbrella of quality research software. Quality software is defined by multiple aspects - e.g. correctness, performance, maintainability, usability, robustness, and reproducibility, among others. Reproducibility (the “openness & reusability” slice of software quality) often hinges on the FAIR principles: if your code and metadata are not findable or accessible, no one can rerun it; if it is not interoperable or reusable, others cannot adapt, extend, or use it to verify your results.

So, FAIR is a crucial subset of quality, primarily ensuring that your software can actually be discovered, understood, and exercised by others (or by you, months down the line). A truly high-quality, reproducible research software package will typically satisfy both classical software-engineering criteria (tests, style, documentation, performance) and the FAIR principles.

Tools and practices for FAIR

There are various tools and practices that support the development of FAIR research software - some of them are listed above. These tools and practices work together; no single tool or practice fully addresses any one principle on its own, but each can contribute to multiple principles simultaneously.

It is important to note that while FAIR can improve software quality in several aspects - it does not say anything about its functionality. This means that software may be FAIR but still not very good in terms of what it does - other practices need to be employed (e.g. testing software) to make sure it works on different platforms/operating systems and that it is correct and does what it sets out to do.
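
For example, correctness has to be established separately from FAIRness, typically with automated tests. A minimal sketch (the function and values are invented for illustration; in practice the checks would live in a test suite such as pytest):

```python
# FAIR metadata cannot tell you whether this function is right; only
# tests can. The conversion below follows the standard formula.
def celsius_to_kelvin(temp_c: float) -> float:
    """Convert a temperature from degrees Celsius to Kelvin."""
    return temp_c + 273.15


# Simple correctness checks, independent of any FAIR consideration.
assert celsius_to_kelvin(0.0) == 273.15
assert celsius_to_kelvin(-273.15) == 0.0
print("checks passed")
```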

Tools and frameworks exist for assessing software FAIRness:

  • FAIR software checklist - a self-assessment tool developed by the Australian Research Data Commons (ARDC) and the Netherlands eScience Center
  • FAIRsoft Evaluator - OpenEBench’s tool for assessing the FAIRness of a software tool from its metadata
  • howfairis - a command-line tool to evaluate a software repository’s compliance with the FAIR principles
  • CODECHECK - an approach for independent execution of computations underlying research articles
  • Common metrics for Research Software that may be used to assess each of the FAIR4RS principles

These tools are not meant to criticise or discredit software or its authors. Their role is to make quality aspects visible, help researchers identify strengths and areas for improvement, and support the evolution of good practices. In the context of research software, such assessments are diagnostic rather than evaluative: they guide reflection, transparency, and learning, not scoring or ranking. By using them, researchers can better understand how their software performs across different aspects of FAIRness and make informed decisions about how to improve it.
