Artifact Review Summary: Ditto: An Elastic and Adaptive Memory-Disaggregated Caching System

Artifact Details

Badges Awarded

Artifacts Available (v1.1)
Artifacts Evaluated - Functional (v1.1)
Results Reproduced (v1.1)

Description of the Artifact

The submitted artifact (https://github.com/dmemsys/Ditto/tree/79f9c621bb9e5bca71ea6cc0694a3156d362603a) consists of a git repository hosted on GitHub, which comprises:

  1. The source code of Ditto.
  2. A CloudLab profile to start a reservation on nodes similar to those used in the paper.
  3. Utility scripts to set up the test environment on nodes and install dependencies.
  4. Scripts for running the paper's evaluation and plotting the results, covering almost all of its claims (Figs. 1-2 and 13-25).
  5. The necessary documentation, including:
    • a description,
    • the supported environments, and
    • running instructions with an ETA for each step.

Environment(s) Used for Testing

A cluster of 10 r650 nodes on CloudLab.

Step-By-Step Instructions to Exercise the Artifact

We followed the steps described in the README to set up the environment and download the workload. Then, the steps in experiments/scripts/README.md were used to reproduce the experiment results.
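
As a concrete illustration, the following is a minimal sketch of the sanity check we ran from the controller node before starting the setup. The hostnames and the use of passwordless SSH are our assumptions about a typical CloudLab reservation, not part of the artifact:

    #!/usr/bin/env python3
    # Check that every node in the reservation answers over SSH before
    # running the artifact's setup scripts. Hostnames are hypothetical.
    import subprocess

    NODES = [f"node-{i}" for i in range(10)]  # assumed CloudLab hostnames

    def reachable(host: str) -> bool:
        """Return True if the node answers a trivial SSH command."""
        result = subprocess.run(
            ["ssh", "-o", "ConnectTimeout=5", host, "true"],
            capture_output=True,
        )
        return result.returncode == 0

    if __name__ == "__main__":
        down = [h for h in NODES if not reachable(h)]
        if down:
            raise SystemExit(f"unreachable nodes: {down}")
        print("all 10 nodes reachable; proceed with the README setup steps")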

How The Artifact Supports The Paper

Why did we choose to award it the Available badge?

  • ✔ The artifact is available on a public website.
  • ✔ The artifact has a “read me” file with a reference to the paper (title and conference).

The artifact is available on a public website and includes the source code, run scripts, and detailed documentation describing how to run the experiments.

Why did we choose to award it the Functional badge?

  • ✔ The artifact has a “read me” file with high-level documentation.
    • ✔ It has a description.
    • ✔ It includes a list of supported environments.
    • ✔ It has compilation and running instructions.
    • ✔ It has usage instructions to run experiments.
    • ✔ It has instructions for a “minimal working example.”
  • ✔ The source code looks good.
  • ✔ The artifact includes all experiments detailed in the paper, except Figs. 3-5 and Table 3.

Following the steps in the README file, we successfully ran the minimal working example (kick-the-tires.py). Moreover, with the instructions in experiments/scripts/README.md, we performed all experiments detailed in the paper except Figs. 3-5 (motivation simulations) and Table 3 (SLOC comparison of the caching-algorithm implementations).
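
For reference, the order in which we invoked the scripts can be sketched as follows. The names kick-the-tires.py, fig14.py, and fig17.py appear in the artifact; the uniform figN.py naming for the remaining figures and the plain python3 invocation are our assumptions:

    #!/usr/bin/env python3
    # Sketch of the order in which we ran the experiment scripts:
    # the minimal working example first, then one script per figure.
    import subprocess

    # Figs. 3-5 and Table 3 have no scripts, hence the gap in the list.
    FIGURES = [1, 2] + list(range(13, 26))

    def run(script: str) -> None:
        print(f"--- running {script} ---")
        subprocess.run(["python3", script], check=True)

    if __name__ == "__main__":
        run("kick-the-tires.py")   # minimal working example first
        for n in FIGURES:
            run(f"fig{n}.py")      # fig17.py consumes fig14.py's output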

Why did we choose to award it the Reproduced badge?

  • ✔ The artifact has a “read me” file that documents:
    • ✔ The exact environment the authors used.
    • ✔ The exact commands to run to reproduce each claim from the paper.
    • ✔ The approximate time required per claim.
  • ✔ The results of the evaluation match the paper’s claims.

The artifact was easy to exercise, and the results we obtained (Figs. 1-2 and 13-25) closely match the figures in the paper. Although the collected data showed minor deviations from the published numbers, it fully supports the paper's claims.

Additional Notes and Resources

  1. For Figure 17, fig17.py relies on the result file results/fig14.json generated by fig14.py, so fig14.py must be run first (see the first sketch after this list).
  2. CloudLab sometimes resets the default shell to bash after a reboot, but the artifact's scripts and environment setup assume zsh.
  3. In rare cases, the ulimit -n unlimited command can fail. Adjust it to ulimit -S -n unlimited in fig{1,17}.py (see the second sketch below).
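
For note 1, a minimal sketch of the guard we suggest before plotting Figure 17; the paths match the artifact layout described above, but the automatic re-run step is our suggestion, not part of the artifact:

    #!/usr/bin/env python3
    # fig17.py reads results/fig14.json, so make sure fig14.py has
    # produced it before plotting Figure 17.
    import pathlib
    import subprocess

    DEP = pathlib.Path("results/fig14.json")

    if not DEP.exists():
        print(f"{DEP} missing; running fig14.py to generate it")
        subprocess.run(["python3", "fig14.py"], check=True)
    subprocess.run(["python3", "fig17.py"], check=True)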
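
For note 3, the likely reason ulimit -S -n succeeds where the plain form fails is that bash's ulimit without -S or -H attempts to set both the soft and the hard limit, and raising the hard limit requires privileges; restricting the change to the soft limit does not. The same adjustment can be made in-process with Python's standard resource module (our illustration, not what the artifact does):

    #!/usr/bin/env python3
    # Raise the open-file soft limit up to the hard limit, the in-process
    # equivalent of "ulimit -S -n <hard>": an unprivileged process may
    # raise its soft limit, but not raise the hard limit.
    import resource

    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
    print(f"RLIMIT_NOFILE soft limit raised from {soft} to {hard}")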