
The {riskmetric} Package

About the Package

Contributed R packages are developed by anyone and everyone, and may differ widely in popularity and accuracy. For this reason, the R Validation Hub developed an R package, {riskmetric}, whose goal is to assess the risk of contributed R packages.

{riskmetric} has four groups of metric criteria:

  • Unit testing metrics - includes unit test coverage and composite coverage of dependencies
  • Documentation metrics - availability of vignettes, news tracking, example(s) and return object description for exported functions
  • Community engagement - number of downloads, availability of the code in a public repository, formal bug tracking and user interaction
  • Maintainability and reuse - number of active contributors, author / maintainer contacts, and type of license
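
To see how these metrics come together, here is a minimal sketch of the core {riskmetric} workflow, assuming {riskmetric} and {dplyr} are installed: create a reference to a package, assess it against the available metrics, and convert the assessments into numeric scores. The exact columns returned will depend on your {riskmetric} version and which metrics can be evaluated on your system.

    library(dplyr)
    library(riskmetric)

    # Reference an installed package, run the available assessments,
    # and convert each assessment into a numeric score
    pkg_ref("riskmetric") %>%
      pkg_assess() %>%
      pkg_score()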

Note: Some aspects of software quality are measurable, but others are not. For example, assessing the accuracy of a contributed open-source R package should be done outside of {riskmetric}. Here, accuracy refers to the risk of an error in the code that, when the code is used, leads to an incorrect calculation, which in turn may lead to an incorrect decision during data analysis. The relative impact of such an error should be determined by the individual organisation; impact is therefore not part of the risk assessment performed by {riskmetric}.

With this type of data at your fingertips, you can analyze package risk statistics with a plot like the one below, which allocates packages into subgroups based on the developers’ membership in the tidyverse / pharmaverse and on groups defined by “most downloads”.
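
For illustration, a plot of that kind could be drawn with {ggplot2}. The data frame below is purely hypothetical: it stands in for a summary table with one overall score per package (for example, derived from pkg_score() output) and a manually assigned subgroup label.

    library(ggplot2)

    # Hypothetical summary table: one row per package, with an overall
    # score and a manually assigned subgroup label
    scores <- data.frame(
      package = c("pkg_a", "pkg_b", "pkg_c", "pkg_d", "pkg_e", "pkg_f"),
      score   = c(0.92, 0.88, 0.74, 0.81, 0.67, 0.95),
      group   = c("tidyverse", "tidyverse", "pharmaverse",
                  "pharmaverse", "most downloads", "most downloads")
    )

    # Compare score distributions across the subgroups
    ggplot(scores, aes(x = group, y = score)) +
      geom_boxplot() +
      labs(x = "Package subgroup", y = "Package score")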

For a comprehensive list of metrics assessed via {riskmetric}, see the current state of our package reference guide or browse the Metric Development Progress GitHub project.


Are you interested in supporting package development?