Software and Data Artifacts in the ACM Digital Library

ACM encourages authors to submit software and data sets with their papers, and for years it has provided mechanisms for submitting software, data sets, videos, and other supplemental artifacts alongside research papers. We have recently made these artifacts more discoverable through search and more prominent on abstract pages and Tables of Contents.

ACM’s Reproducibility Task Force has been working with SIG conferences and journal Editors-in-Chief to understand and articulate common Best Practices for preparing and reviewing software and data artifacts, for integrating them with the ACM Digital Library, and for reflecting them in publications and enabling their re-use. Many of ACM’s technical communities are developing standardized documentation and review processes to improve the chances of successful experiment re-runs and artifact re-use.

A number of ACM conferences and journals have already instituted formal processes and are implementing Best Practices for artifact review. ACM provides standard terms and definitions for labeling successful artifact reviews, along with badge icons for the associated articles, thereby establishing uniformity across ACM publications and any other venues that choose to adopt its Best Practices.

Changes to Badging Terminology

The ACM Publications Board recently approved a change to the terminology used for its artifact review badges to ensure that ACM’s definitions of “reproducibility” and “replication” are consistent with those used by research communities outside the field of Computing. Following discussions with the National Information Standards Organization (NISO), which recommended that ACM harmonize its terminology and definitions with those of the broader scientific research community, the ACM Publications Board voted to interchange the definitions of “Results Replicated” and “Results Reproduced” to adopt the NISO standard. Under the harmonized definitions, “Results Reproduced” indicates that the main results were obtained by a team other than the authors using artifacts the authors supplied, while “Results Replicated” indicates that the results were obtained independently, without the use of author-supplied artifacts.

Reproducibility of Results in the ACM DL

There is a growing need for access to the artifacts and experiments associated with scholarly publications in computer science and engineering. To serve this need, new “active digital curation” platforms are emerging that support creating and delivering artifacts and experiments as dynamic, interactive content that can be associated with scientific articles. Integrating these platforms with publications has the potential to radically change the processes and mechanisms of scholarly dissemination.

ACM Digital Library–Curation Platform Integrations

ACM encourages all authors to submit a snapshot of their software and data sets for permanent archiving in the ACM DL. At the same time, ACM has begun building integrations with external software curation platforms where authors can create, evolve, and modify their projects. ACM has conducted pilot integrations for three use cases to understand the landscape of these emerging platforms and how they might be integrated with the ACM Digital Library.
 


ACM Reproducibility Task Force & Workshops

ACM created its Reproducibility Task Force in 2015 to address the difficulty of installing and re-running experimental software and reusing associated data. The Task Force organized two workshops that produced ACM’s Artifact Review and Badging Policy, a standard set of badges, and a draft set of Best Practices. The Third ACM Workshop will be held in early December 2017 in New York to finalize the Best Practices Guide and plan the next steps of its implementation.
