In systems research, artifacts play an important role, since results are often tied to the artifact that produced them. Artifacts include not only software systems but also datasets, benchmarks, models, and test suites. In many cases, it is impossible to reproduce the results without the artifact. Yet, as a community, we offer no formal means to submit and evaluate anything but the paper. This effort is a small first step towards bridging that gap.

Apart from validating the major results presented in the paper, artifact evaluation also aims to recognize authors who put in the effort to make their artifacts available for use by other researchers and practitioners. We model the artifact evaluation process on the guidelines set forth by ACM. You can learn more about our badging process here.

Towards realizing this mission, we have gathered a motivated team of early-career researchers and senior graduate students with a shared interest in reproducibility in systems research. In this inaugural year, we limit our evaluation effort to papers that have been selected for publication. At the conclusion of this effort, we will produce a report to share our experiences and to encourage the broader systems community to adopt artifact evaluation.