Task Force on Data, Software, and Reproducibility in Publication

Leader: Michael Lesk

Members: Alex Wade, Bruce Childers, Dave Grove, Dirk Beyer, Gianluca Setti, Grigori Fursin, Gwendal Simon, Henning Schulzrinne, Juliana Freire, Limor Peer, Mark Gross, Micah Altman, Michael Heroux, Michela Taufer, Sheila Morrissey, Simon Harper, Stephen Spencer, Stratos Idreos, Tim Hopkins, Wilfred Pinfold, Randy Leveque, Ron Boisvert, Jack Davidson, Nik Dutt, Bernie Rous, Wayne Graves, Scott Delman, Craig Rodkin

A scientific result is not fully established until it has been independently reproduced. Unfortunately, much published research is never independently verified, and in the rare cases when a systematic effort has been made to do so, the results have not been encouraging [1]. This threatens to undermine public confidence in the scientific enterprise, and it has led to calls for improvements to the process of reporting and reviewing scientific research in, for example, the biomedical sciences [2]. The record in computer science is no better [3].

Changing this state of affairs is not easy. Experimental procedures are rarely described in sufficient detail for others to repeat them. And even though experiments in computing are often virtual in nature, and hence more easily transportable, the underlying software and data are rarely shared. Mandates are only a first step: while they can lead to the release of artifacts, they provide little incentive to make those artifacts actually usable.

Professional societies have an important role in developing and promoting an open science ecosystem. As arbiters and curators of the research literature, they can play a key part in changing the incentive structure to promote higher standards of reproducibility. The necessary infrastructure includes:

  • Standards, guidelines and best practices for reproducibility of published research
  • Operational procedures for the formal review of reproducibility
  • Identification and branding of reproducible research within the published literature
  • Indexing and storage of artifacts that support replication and reuse
  • Standards and procedures for the citation of research artifacts as first-class objects
  • Commitment to long-term curation on behalf of the research community

ACM has already taken steps in this regard. From the inception of its highly regarded Digital Library (DL), provisions have been in place for archiving arbitrary supplemental material. ACM has now convened this Task Force, charged with envisioning how the organization can effectively promote reproducibility within the computing research community.

The Task Force brings together several groups within ACM that have independently initiated grassroots efforts to develop reviewing procedures and provide incentives for reproducibility in research reported in ACM conferences and journals. Coordinating these experiences should facilitate the development of shared best practices, guidelines, and standards that can be propagated to the entire ACM and the broader scientific publishing community.

  1. “Trouble at the Lab,” The Economist, Oct. 19, 2013.
  2. “Proposed Principles and Guidelines for Reporting Preclinical Research,” National Institutes of Health, http://www.nih.gov/about/reporting-preclinical-research.htm.
  3. R. D. Peng, “Reproducible Research in Computational Science,” Science, vol. 334, no. 6060, Dec. 2, 2011.
