
Crosswalk with FAIR4RS

We summarize below how the FAIR-BioRS guidelines, as well as other relevant guidelines, align with the FAIR4RS Principles. For each principle, a description of the compliance and a score are provided (0 = does not provide actionable instructions for complying with the corresponding FAIR4RS Principle, 1 = provides actionable instructions that allow only partial compliance with the corresponding FAIR4RS Principle, 2 = provides actionable instructions for complying fully with the corresponding FAIR4RS Principle). For each principle, the first score and description refer to the FAIR-BioRS guidelines and the second to the NIH's Best Practices for Sharing Research Software.

FAIR4RS Principles | FAIR-BioRS guidelines | NIH's Best Practices for Sharing Research Software
F1. Software is assigned a globally unique and persistent identifier.

Score = 2

Archiving the software on Zenodo/Figshare (step 5.2) will assign a Digital Object Identifier (DOI) which is a unique and persistent identifier. Archiving the software on Software Heritage (step 5.3) will assign a SoftWare Heritage persistent IDentifier (SWHID) which is also a unique and persistent identifier. Bio.tools/RRID Portal will issue a unique and persistent identifier as well (bio.tools ID and RRID, respectively) when the software is registered (step 6).

Score = 2

Archiving the software on Zenodo (FAQ 5) will assign a Digital Object Identifier (DOI) which is a unique and persistent identifier.

F1.1. Components of the software representing levels of granularity are assigned distinct identifiers.

Score = 2

Bio.tools/RRID Portal (step 6) will assign a unique identifier for the entire software. Archiving each version of the software on Zenodo/Figshare (step 5.2) will assign a distinct identifier (DOI) for each version. Archiving the software on Software Heritage (step 5.3) will assign a distinct identifier (SWHID) to any level of granularity of the software (software, releases, files, commits, code fragments, etc.).

Score = 1

Archiving each version of the software on Zenodo (FAQ 5) will assign a distinct identifier (DOI) for each version. The guidelines do not include any recommendation for assigning identifiers to levels of granularity below software versions.

F1.2. Different versions of the software are assigned distinct identifiers.

Score = 2

Archiving each version of the software on Zenodo/Figshare (step 5.2) will assign a distinct identifier (DOI) for each version. Archiving on Software Heritage (step 5.3) will also assign a distinct identifier to each release of the software. Changes between versions will be documented in the CHANGELOG file (step 3.2).

Score = 2

Archiving each version of the software on Zenodo (FAQ 5) will assign a distinct identifier (DOI) for each version.

F2. Software is described with rich metadata.

Score = 2

Rich metadata covering a variety of aspects will be provided through the code-level documentation (step 2.1), the recording of dependencies (step 2.2), the prescribed documentation (step 3), the prescribed metadata files (step 4), the repository-specific metadata on Zenodo/Figshare (step 5.2), and the registry-specific metadata on bio.tools/RRID Portal (step 6).

Score = 1

Rich metadata covering a variety of aspects will be provided through the code-level documentation (FAQ 11), the CITATION.cff metadata file (FAQ 5), and the repository-specific metadata on Zenodo (FAQ 5). The guidelines do not specify what to include in the CITATION.cff metadata file.

F3. Metadata clearly and explicitly include the identifier of the software they describe.

Score = 2

The README file will include the DOI from Zenodo/Figshare in a "How to cite" or similar section (step 3.1). The codemeta.json and CITATION.cff files (step 4) will include the DOI from Zenodo/Figshare in their "identifier" and "identifiers" fields, respectively (see the sketch after this row). The DOI from Zenodo/Figshare is always included in that repository's metadata (step 5.2). The DOI will also be included in the bio.tools/RRID Portal's metadata, which also includes their respective IDs (step 6).

Score = 1

The CITATION.cff file (FAQ 5) will include the DOI from Zenodo in the "identifiers" field (if provided), but the guidelines do not specify that the identifier must be included in the CITATION.cff file. The DOI from Zenodo is always included in the repository's metadata (FAQ 5).
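
To make the F3 alignment above concrete, here is a minimal, illustrative sketch (Python, standard library only) of recording a DOI in the codemeta.json "identifier" field and the CITATION.cff "identifiers" field mentioned above. The DOI, software name, and author are placeholders, not values prescribed by either set of guidelines.

```python
# Illustrative only: record a (placeholder) DOI in the codemeta.json
# "identifier" field and the CITATION.cff "identifiers" field.
import json

doi = "10.5281/zenodo.0000000"  # hypothetical DOI minted by Zenodo/Figshare

codemeta = {
    "@context": "https://doi.org/10.5063/schema/codemeta-2.0",
    "@type": "SoftwareSourceCode",
    "name": "my-research-software",          # hypothetical software name
    "identifier": f"https://doi.org/{doi}",  # identifier included in the metadata (F3)
}
with open("codemeta.json", "w") as f:
    json.dump(codemeta, f, indent=2)

citation_cff = """\
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
title: my-research-software
authors:
  - family-names: Doe
    given-names: Jane
identifiers:
  - type: doi
    value: 10.5281/zenodo.0000000
"""
with open("CITATION.cff", "w") as f:
    f.write(citation_cff)
```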

F4. Metadata are FAIR, searchable and indexable.

Score = 2

FAIR, searchable, and indexable metadata that follow community standards and use controlled vocabularies are provided through Zenodo (aligns with DataCite's Metadata Schema minimum and recommended terms, with a few additional enrichments)/Figshare (aligns with DataCite's Metadata Schema) (step 5.2), Software Heritage (follows the CodeMeta vocabulary) (step 5.3), and Bio.tools (uses the biotoolsSchema and EDAM ontology)/RRID Portal (follows the Resource Description Framework (RDF) and aligns with the Biomedical Resource Ontology (BRO) and the Eagle-i Resource Ontology (ERO), along with a few additions) (step 6). The prescribed license documentation (step 1.2), development best practices (steps 2.1 and 2.2), documentation of the software (step 3), and prescribed metadata files (step 4) will contain additional metadata that also follow community standards, use controlled vocabularies, and are typically searchable through the suggested version control system platforms (step 1.1).

Score = 2

FAIR, searchable, and indexable metadata that follow community standards and use controlled vocabularies are provided through Zenodo (aligns with DataCite's Metadata Schema minimum and recommended terms, with a few additional enrichments) (FAQ 5). The CITATION.cff file (FAQ 5) will contain additional metadata that also follow community standards, use controlled vocabularies, and are typically searchable through the suggested version control system platforms (FAQ 2).

A1. Software is retrievable by its identifier using a standardised communications protocol.

Score = 2

The software archive can be retrieved by the DOI generated by Zenodo/Figshare (step 5.2) using HTTP, which is a standardized protocol. The software will be retrievable through the version control system platform (step 1.1), the deployment repository if applicable (step 5.1), and Software Heritage (step 5.3), also using HTTP. A sketch of resolving a DOI over HTTP is provided after this row.

Score = 2

The software archive can be retrieved by the DOI generated by Zenodo (FAQ 5) using HTTP, which is a standardized protocol. The software will be retrievable through the version control system platform (FAQ 2) also using HTTP.
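
As an illustration of the A1 alignment above, the sketch below (Python, standard library only) resolves a DOI through https://doi.org, i.e., over HTTP. The DOI is a placeholder; a real Zenodo/Figshare DOI would redirect to the landing page of the archived software record.

```python
# Illustrative only: retrieve a software record by its DOI over HTTP.
from urllib.error import HTTPError
from urllib.request import urlopen

doi = "10.5281/zenodo.0000000"  # hypothetical DOI; replace with a real one
try:
    with urlopen(f"https://doi.org/{doi}") as response:
        # doi.org resolves the persistent identifier and redirects (over HTTP)
        # to the landing page of the archived record.
        print(response.status, response.url)
except HTTPError as err:
    # The placeholder DOI above does not resolve; a real DOI would.
    print("DOI did not resolve:", err.code)
```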

A1.1. The protocol is open, free, and universally implementable.

Score = 2

The HTTP protocol is open, free, and universally implementable.

Score = 2

The HTTP protocol is open, free, and universally implementable.

A1.2. The protocol allows for an authentication and authorization procedure, where necessary.

Score = 2

Version control system platforms (step 1.1), deployment repositories (step 5.1), and Zenodo/Figshare (step 5.2) have a process in place to allow for an authentication and authorization procedure for software shared under closed/restricted access. Everything on Software Heritage (step 5.3) is open access and does not require any authentication or authorization.

Score = 2

Version control system platforms (FAQ 2), deployment repositories (FAQ 3), and Zenodo (FAQ 5) have a process in place to allow for an authentication and authorization procedure for software shared under closed/restricted access.

A2. Metadata are accessible, even when the software is no longer available.

Score = 2

Once archived on Zenodo or Figshare (step 5.2) and on Software Heritage (step 5.3), both the software and metadata will always be available and accessible for the lifetime of these repositories. Moreover, Zenodo and Figshare send metadata from the software to DataCite for generating a DOI, and that metadata will always remain accessible through DataCite's registry (see the sketch after this row). Additionally, Zenodo keeps metadata stored in high-availability database servers separate from the software files. Bio.tools/RRID Portal (step 6) will also keep the metadata accessible even if the software is no longer available, e.g., on the version control system platform or any of the archiving repositories.

Score = 2

Once archived on Zenodo (FAQ 5), both the software and metadata will always be available and accessible for the lifetime of this repository. Moreover, Zenodo sends metadata from the software to DataCite for generating a DOI, and that metadata will always remain accessible through DataCite's registry. Additionally, Zenodo keeps metadata stored in high-availability database servers separate from the software files.
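
To illustrate the A2 alignment above, the sketch below (Python, standard library only) fetches the metadata record associated with a DOI from DataCite's public REST API. The DOI is a placeholder, and the exact fields returned depend on what the repository deposited.

```python
# Illustrative only: fetch the DataCite metadata record for a (placeholder) DOI.
# This metadata remains accessible in DataCite's registry even if the archived
# files themselves were to become unavailable.
import json
from urllib.error import HTTPError
from urllib.request import urlopen

doi = "10.5281/zenodo.0000000"  # hypothetical DOI; replace with a real one
try:
    with urlopen(f"https://api.datacite.org/dois/{doi}") as response:
        record = json.load(response)
    attributes = record["data"]["attributes"]
    print(attributes.get("titles"), attributes.get("creators"))
except HTTPError as err:
    # The placeholder DOI is not registered; a real DOI would return its metadata.
    print("No DataCite record found:", err.code)
```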

I1. Software reads, writes and exchanges data in a way that meets domain-relevant community standards.

Score = 2

Step 2.4 will ensure that the inputs/outputs of the software follow any applicable community standards. Those standards will be documented in the README file under a "Standards followed" or similar section (step 3.1). They can also be documented in the bio.tools metadata using the EDAM ontology to specify the nature and format of the input and output data.

Score = 0

There are no instructions in the guidelines relevant to this Principle.

I2. Software includes qualified references to other objects.

Score = 2

The README file/documentation will contain qualified references to other objects associated with the software under a "Parameters and data required to run the software" or similar section (step 3.1). The fields "isPartOf", "hasPart", and "relatedLink" of the codemeta.json file (step 4.1) will also provide qualified references to other objects (see the sketch after this row). The Zenodo metadata (step 5.2) include a "Related identifiers" field that can be used to provide qualified references to other objects.

Score = 0

The Zenodo metadata (FAQ 5) include a "Related identifiers" field that can be used to provide qualified references to other objects, but the guidelines do not instruct users to provide that metadata on Zenodo.
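
As a complement to the I2 description above, here is a minimal, illustrative sketch of populating the codemeta.json "isPartOf", "hasPart", and "relatedLink" fields with qualified references to other objects. All identifiers and names are placeholders.

```python
# Illustrative only: qualified references to other objects in codemeta.json.
import json

codemeta = {
    "@context": "https://doi.org/10.5063/schema/codemeta-2.0",
    "@type": "SoftwareSourceCode",
    "name": "my-research-software",                             # hypothetical name
    "isPartOf": "https://doi.org/10.0000/example-project",      # hypothetical parent project
    "hasPart": ["https://doi.org/10.0000/example-submodule"],   # hypothetical component
    "relatedLink": ["https://doi.org/10.0000/example-dataset"], # hypothetical related dataset
}
with open("codemeta.json", "w") as f:
    json.dump(codemeta, f, indent=2)
```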

R1. Software is described with a plurality of accurate and relevant attributes.

Score = 2

The software will be described with a plurality of accurate and relevant attributes through the development history captured by the version control system platform (step 1.1), the prescribed documentation (step 3), the prescribed metadata files (step 4), the repository-specific metadata (step 5), and the registry-specific metadata (step 6), which all have several overlapping elements.

Score = 1

The software will be described with a plurality of accurate and relevant attributes through the development history captured by the version control system platform (FAQ 2), the CITATION.cff metadata (FAQ 5), and the Zenodo metadata (FAQ 5) which all have several overlapping elements. It is not explicitly specified what metadata to include in the CITATION.cff file and on Zenodo.

R1.1. Software is given a clear and accessible license.

Score = 2

The software will be given a clear and accessible license through step 1.2 which instructs selecting a license and including a LICENSE file with usage terms. The metadata of the software repository in the version control system platform (step 1.1), the metadata files (step 4), the repository-specific metadata (step 5), and the registry-specific metadata (step 6) will all include the name of the license.

Score = 1

The software will be given a clear and accessible license through FAQ 4 which instructs selecting an OSI-approved license. The metadata of the software repository in the version control system platform (FAQ 2), the CITATION.cff file (FAQ 5), and the Zenodo metadata (FAQ 5) could include the name of the license. It is not specified how the license terms should be included.

R1.2. Software is associated with detailed provenance.

Score = 2

Detailed provenance (why and how the software came to be, as well as who contributed what, when, and where, etc.) will be provided in several ways: in the development history maintained by the version control system platform (step 1.1), which will also get archived in Software Heritage (step 5.3); in the README through an "Overall description of the software" and a "How to cite" or similar sections (step 3.1); in the codemeta.json file through several fields such as Software description/abstract ("description") and Authors ("givenName", "familyName") with their Organization name ("affiliation") (step 4.1); in the CITATION.cff file through several fields such as Authors ("given-names", "family-names") with their Organization name ("affiliation") (step 4.2); in the repository-specific metadata (step 5); and in the registry-specific metadata (step 6). A sketch of reading provenance from the version control history is provided after this row.

Score = 1

Detailed provenance (why and how the software came to be, as well as who contributed what, when, and where, etc.) will be provided in several ways: in the development history maintained by the version control system platform (FAQ 2), in the CITATION.cff file through several fields such as Authors ("given-names", "family-names") with their Organization name ("affiliation") (FAQ 5), and in the Zenodo metadata (FAQ 5). The guidelines do not specify what to include in the CITATION.cff metadata file.
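
To complement the R1.2 descriptions above, the sketch below (Python, standard library only, run inside a git repository) shows one generic way to extract part of that provenance (who changed what, and when) from the version control history. It is an illustrative example, not a step prescribed by either set of guidelines.

```python
# Illustrative only: read contributor/date/message provenance from the
# version control history. Must be run inside a git repository.
import subprocess

log = subprocess.run(
    ["git", "log", "--date=short", "--format=%h %ad %an %s"],
    capture_output=True, text=True, check=True,
)
print(log.stdout)
```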

R2. Software includes qualified references to other software.

Score = 2

The software dependencies file (step 2.2) will contain qualified references to other software required to run the source code (see the sketch after this row). Following language-specific best practices (step 2.3) will also allow including dependencies in the code (e.g., imports in Python code). The README file (step 3.1) will contain qualified references to other software under a "High-level dependencies of the software" or similar section. The fields "isPartOf", "hasPart", and "relatedLink" of the codemeta.json file (step 4.1) will provide qualified references to other software. The Zenodo metadata (step 5.2) also include a "Related identifiers" field that can be used to provide qualified references to other software. The bio.tools metadata (step 6) include a "Relations" class that can be used to provide qualified references to other software registered on bio.tools.

Score = 0

The Zenodo metadata (FAQ 5) include a "Related identifiers" field that can be used to provide qualified references to other software, but the guidelines do not instruct users to use that field.
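
As an illustration of the dependency recording mentioned for R2 above, here is a minimal sketch (Python, standard library only) that writes a pip-style pinned dependencies file; the package names and versions are placeholders chosen for the example.

```python
# Illustrative only: a pinned, pip-style dependencies file provides qualified
# references to the other software needed to run the source code.
from pathlib import Path

requirements = "\n".join([
    "numpy==1.26.4",   # hypothetical pinned dependency
    "pandas==2.2.2",   # hypothetical pinned dependency
]) + "\n"
Path("requirements.txt").write_text(requirements)
```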

R3. Software meets domain-relevant community standards.

Score = 2

Steps 1, 2, and 3 will ensure that the software, including its documentation and license, meet domain-relevant community standards and best practices. Sharing software on a deployment repository (if applicable) will also help meet domain-relevant community standards and best practices (step 5.1).

Score = 1

Including code-level comments (FAQ 11) and sharing software on a deployment repository (FAQ 3) will help meet domain-relevant community standards and best practices, although more is needed, especially in terms of software dependencies and documentation.
