See the overall guidelines for ICMR Reproducibility here.

Important notice: Due to the COVID-19 pandemic, the reproducibility track will only consider papers that were submitted to the 2021 edition but were not accepted in that cycle. The next regular call will be for ICMR 2023, inviting submissions from both ICMR 2021 and 2022.

2022 Reproducibility Committee

Chairs:

  • Martin Aumüller, IT University of Copenhagen, Denmark
  • Björn Þór Jónsson, IT University of Copenhagen, Denmark

Committee members:

  • Andreas Leibetseder, Klagenfurt University, Austria
  • Bei Liu, Microsoft Research Asia, China
  • Maria Sinziiana Astefanoaei, IT University of Copenhagen, Denmark
  • Matteo Ceccarello, Free University of Bolzano, Italy

ICMR 2021 - Call for Papers (Archival)

This page gives the instructions for submitting reproducibility companion papers at the 2021 edition of ACM ICMR. Such a submission has essentially two parts:

  • Companion paper: The companion paper is 2–3.5 pages in length (with an optional page for references). It contains a high-level description of the experiments carried out in the original paper and implemented in the archive.

  • Archive: Contains the artifacts (e.g., code, scripts, data sets, protocols), which are cleanly packaged, ready for download and use to reproduce the results from the original paper. It contains detailed readme file(s), examples, and all information needed to successfully carry out the experiments.

These instructions are split into sections that detail:

  • the general submission guidelines,
  • the badge targeted for this edition,
  • the contents of the companion paper,
  • the contents of the archive of artifacts,
  • the packaging guidelines, and
  • the timeline for submission.

General Submission Guidelines for 2021

Authors who have a regular long/short or a special session paper published at ACM ICMR 2020 are invited to submit a short reproducibility companion paper to the ACM ICMR Reproducibility track at ACM ICMR 2021. The companion paper typically focuses on the technical details of the work published at ACM ICMR 2020.

The companion paper should be submitted as a short paper that is 2–3.5 pages long, excluding references. It must follow the standard ACM double-column style format. It has to involve a majority of the authors of the original paper and clearly state their names and affiliations. The original ACM ICMR 2020 contribution associated with this companion paper must be clearly referenced.

The reproducibility companion paper and the associated artifacts will undergo a reproducibility review, which will result in an accept or a reject decision. Rejected papers are not disclosed.

If accepted, a Results Reproduced ACM badge is added to the original paper and to the companion paper, which are both stored in the ACM Digital Library, together with the artifacts. A badged companion paper will appear in the ACM ICMR 2021 proceedings and will be presented as a poster in the ACM ICMR 2021 Reproducibility poster session. The reviewers of the badged companion paper add a section documenting their efforts and become co-authors of the paper. The final version may be up to 4 pages (with an optional page for references).

If, during the evaluation, a serious flaw invalidating the scientific results published in the original contribution is discovered, then the companion paper is rejected and the authors are encouraged to publish an erratum.

Note: If your original paper turns out to be especially challenging to reproduce, the committee will recommend that the companion paper be published at ACM ICMR 2022 instead of ACM ICMR 2021. If replication cannot be completed in time for ACM ICMR 2022, then the paper cannot be accepted.


Badge for the 2021 Edition

For ICMR 2021, we are committed to setting a high standard for reproducibility at ACM ICMR, and the Reproducibility Committee asks authors to target a top-quality badge for the companion papers they submit. The target is the Results Reproduced badge:

ACM gives the following definition of this Results Reproduced badge:

The main results of the paper have been obtained in a subsequent study by a person or team other than the authors, using, in part, artifacts provided by the author.
» ACM DL. Read it here.


Contents of the Companion Paper

The companion paper must provide the means for the committee to download the artifacts that enable replicating the findings of the original ACM ICMR contribution. The author-created artifacts that are relevant to this paper must have been placed on a publicly accessible archival repository (e.g., GitHub).

The companion paper must describe a procedure that allows reviewers to easily find their way around the artifacts. That procedure might, for example, explain how the artifacts are organized into a hierarchy of folders. It might specify which items inside the artifacts must be read, and in which order, and which main elements should be reviewed first. It might clearly establish the relationships between items in the artifacts and the corresponding elements in the original scientific contribution. It might explain and justify why a particular part of the original paper is not reproducible. The paper should indicate the expected duration of running the whole experimental pipeline.

Ideally, the companion paper should include text, schemas, and illustrations that describe what the artifacts contain and how they should be deployed and used. The companion paper should also contain notes about parameters that can be set or adjusted and about how to recreate the plots. It must include commented examples. You can of course create documents in the archive that provide additional text, schemas, or illustrations; in this case, the companion paper should clearly indicate where these can be found in the artifacts.


Contents of the Archive of Artifacts

Replicability is grounded in the code, scripts, and datasets that you provide, which form the so-called “artifacts”. More formally, ACM defines artifacts as follows:

By “artifact” we mean a digital object that was either created by the authors to be used as part of the study or generated by the experiment itself. For example, artifacts can be software systems, scripts used to run experiments, input datasets, raw data collected in the experiment, or scripts used to analyze results.
» ACM DL. Read it here.

Artifacts are digital objects that supplement the companion paper. They typically consist of a series of files, possibly organized in a clear and easy-to-grasp hierarchy. Artifacts include, for example:

  1. The configuration files and scripts to set up and deploy the environment needed for the reviewers to subsequently run your code,
  2. The source code if you expose your system as a white box,
  3. The input data: either the process to generate the input data should be made available, or, when the data is not generated, the actual data itself or a link to the data should be provided,
  4. The set of experiments (system configuration and initialization, scripts, workload, measurement protocol, …) used to run the experiments that produce the raw experimental data,
  5. The scripts needed to transform the raw experimental data into the graphs, tables, and plots that can be found in the original submission already published in the ACM ICMR proceedings. All this material should be extensively described, documented, commented, and easy to understand, for example in dedicated files accompanying what they describe.

Note: We strongly recommend exposing your system as a white box so that anyone can reproduce and reuse your results. If the system is only made accessible as a black box, reviewers may still require confidential access to the source code so that they can properly assess the validity of the reproducibility results.

About Experiments

The central results and claims of the corresponding published paper should be supported by the submitted experiments, meaning that we can recreate result data and graphs that demonstrate behavior similar to that shown in the paper. Typically, when the results concern response times, we do not expect to get identical numbers. Instead, we expect the overall behavior to match the conclusions of the paper, e.g., that a given algorithm is significantly faster than another one, or that a given parameter positively or negatively affects the behavior of a system.

Given a system, the authors should provide the complete set of experiments to reproduce the results that are in the original paper. Typically, each experiment will consist of the following parts.

  • A setup phase where parameters are configured and data is loaded,
  • A running phase where a workload is applied and measurements are taken,
  • A clean-up phase where the system is prepared to avoid interference with the next round of experiments.

The authors should document (i) how to perform the setup, running and clean-up phases, and (ii) how to check that these phases complete as they should. The authors should document the expected effect of the setup phase (e.g., a cold file cache is enforced) and the different steps of the running phase, e.g., by documenting the combination of command line options used to run a given experiment script.
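
As an illustration only, a single experiment could be organized along these three phases as in the following minimal Python sketch; every file, binary, and parameter name here is hypothetical and would be replaced by the authors' own setup.

```python
import json
import shutil
import subprocess
import time
from pathlib import Path

DATA_DIR = Path("data")        # hypothetical location of the shipped input data
RESULTS_DIR = Path("results")  # hypothetical location for raw measurements

def setup(num_items: int) -> Path:
    """Setup phase: configure parameters and prepare the input data."""
    RESULTS_DIR.mkdir(exist_ok=True)
    return DATA_DIR / f"input_{num_items}.txt"  # generate or fetch here if not shipped

def run(dataset: Path) -> dict:
    """Running phase: apply the workload and take the measurement."""
    start = time.perf_counter()
    subprocess.run(["./my_system", str(dataset)], check=True)  # hypothetical binary
    return {"dataset": dataset.name, "seconds": time.perf_counter() - start}

def cleanup() -> None:
    """Clean-up phase: remove temporary state so it cannot interfere with the next run."""
    shutil.rmtree("tmp", ignore_errors=True)

if __name__ == "__main__":
    data = setup(num_items=10_000)
    measurement = run(data)
    cleanup()
    (RESULTS_DIR / "measurement.json").write_text(json.dumps(measurement))
```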

Each experiment should be automatic, e.g., via a script that takes a range of values for each experiment parameter as arguments, rather than manual, e.g., via a script that must be edited so that a constant takes the value of a given experiment parameter.
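
For example, a driver script could accept the parameter range on the command line instead of requiring the reviewer to edit a hard-coded constant. The sketch below is hypothetical: it assumes an `experiment` binary with a `--threads` option and writes all measurements to a CSV file.

```python
import argparse
import csv
import subprocess
import time

def run_one(num_threads: int) -> float:
    """Run a single configuration and return the measured wall-clock time."""
    start = time.perf_counter()
    subprocess.run(["./experiment", "--threads", str(num_threads)], check=True)
    return time.perf_counter() - start

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Run the experiment over a range of thread counts.")
    parser.add_argument("--min-threads", type=int, default=1)
    parser.add_argument("--max-threads", type=int, default=8)
    parser.add_argument("--output", default="results.csv")
    args = parser.parse_args()

    with open(args.output, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["threads", "seconds"])
        for t in range(args.min_threads, args.max_threads + 1):
            writer.writerow([t, run_one(t)])
```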

We do not expect the authors to perform any additional experiments on top of the ones in their original paper. Any additional experiments submitted will be considered and tested, but they are not required.

About Graphs and Plots

For each graph or plot in the original paper, the authors should describe how it is obtained from the experimental measurements. The submission should contain the scripts (or spreadsheets) used to generate the graphs and plots. We strongly encourage authors to provide scripts for all their graphs using a tool such as Gnuplot or Matplotlib. Two useful tutorials for Gnuplot are a brief manual and tutorial, and a tutorial with details about creating EPS figures and embedding them using LaTeX; for Matplotlib, see the examples from SciPy and a step-by-step tutorial discussing many features.
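
As a sketch of such a plotting script (assuming Matplotlib is installed and a hypothetical `results.csv` file with `threads` and `seconds` columns, as produced by the driver sketch above):

```python
import csv
import matplotlib.pyplot as plt

# Read the raw measurements produced by the experiment script (hypothetical file and columns).
threads, seconds = [], []
with open("results.csv") as f:
    for row in csv.DictReader(f):
        threads.append(int(row["threads"]))
        seconds.append(float(row["seconds"]))

# Regenerate the figure in the same form as it appears in the paper.
plt.plot(threads, seconds, marker="o")
plt.xlabel("Number of threads")
plt.ylabel("Running time (s)")
plt.savefig("figure_1.pdf", bbox_inches="tight")
```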

The authors must provide similar procedures for recreating the tables in the original paper.

Ideal archive of artifacts

Authors are encouraged to strive for this ideal submission for truly replicable work…

At a minimum, the authors should provide a complete set of scripts to install the system, produce the data, run the experiments, and produce the resulting graphs, along with detailed readme file(s) that describe the process step by step so it can be easily redone by a reviewer.

The ideal submission consists of an extremely careful and detailed description of the experiments and their parameters, together with a master script that:

  1. installs all systems needed,
  2. generates or fetches all needed input data,
  3. reruns all experiments and generates all results,
  4. generates all graphs, plots and tables, and finally,
  5. recompiles the sources of the paper

… to produce a brand new PDF of the paper with all the reproduced experiments. This allows the obtained material to be compared with that in the original contribution accepted at ACM ICMR.
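
A master script of this kind could look roughly like the following Python sketch; every step script named here is hypothetical and stands in for the authors' own tooling.

```python
import subprocess

def step(description: str, command: list[str]) -> None:
    """Run one stage of the pipeline and stop immediately if it fails."""
    print(f"==> {description}")
    subprocess.run(command, check=True)

if __name__ == "__main__":
    step("Install all required systems",       ["python", "scripts/install.py"])
    step("Generate or fetch all input data",   ["python", "scripts/fetch_data.py"])
    step("Rerun all experiments",              ["python", "scripts/run_experiments.py"])
    step("Regenerate graphs, plots and tables", ["python", "scripts/make_figures.py"])
    step("Recompile the paper sources",        ["latexmk", "-pdf", "paper/paper.tex"])
```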

Packaging Guidelines

These packaging guidelines are meant to cover general cases. Please keep in mind that every individual case is slightly different.

Environment

Authors should explicitly specify the operating system and the tools that must be installed as the environment. The specification should include dependencies on specific hardware features (e.g., 25 GB of RAM are needed) and dependencies within the environment (e.g., the compiler must be run with a specific version of the operating system).

Note: The submitted artifacts should not require specific hardware or software that is hard to get access to. The committee will do its best to find reviewers who can set up the required environments; however, if no reviewer can be found, the companion paper cannot be accepted.

System

System setup is one of the most challenging aspects of replicating experiments, and it is much easier to conduct when it is automatic rather than manual. Authors should test that the system they distribute can actually be installed in a new environment. The documentation should detail every step of the system setup:

  • How to obtain the system?
  • How to configure the environment if need be (e.g., environment variables, paths)?
  • How to compile the system? (existing compilation options should be mentioned)
  • How to use the system? (What are the configuration options and parameters to the system?)
  • How to make sure that the system is installed correctly?

The above tasks should be achieved by executing a set of scripts provided by the authors that download the needed components (systems, libraries), initialize the environment, check that the software and hardware are compatible, and deploy the system.
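
Such a compatibility check could be as simple as the following hypothetical Python sketch, which verifies the operating system, the available memory, and the presence of the required tools before deployment proceeds (the 25 GB RAM figure mirrors the example above; the tool names are placeholders).

```python
import platform
import shutil
import sys

REQUIRED_TOOLS = ["cmake", "gcc"]  # hypothetical build dependencies
MIN_RAM_GB = 25                    # matches the example hardware requirement above

def available_ram_gb() -> float:
    """Best-effort RAM check on Linux via /proc/meminfo (value is reported in kB)."""
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemTotal:"):
                return int(line.split()[1]) / (1024 ** 2)
    return 0.0

if __name__ == "__main__":
    problems = []
    if platform.system() != "Linux":
        problems.append("these artifacts were only tested on Linux")
    elif available_ram_gb() < MIN_RAM_GB:
        problems.append(f"at least {MIN_RAM_GB} GB of RAM are required")
    problems += [f"'{t}' not found on PATH" for t in REQUIRED_TOOLS if shutil.which(t) is None]
    if problems:
        sys.exit("Environment check failed: " + "; ".join(problems))
    print("Environment check passed.")
```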

Tools

The committee suggests that authors use ReproZip to streamline this process. ReproZip can be used to capture the environment, the input files, the expected output files, and the required libraries. A detailed how-to guide (installing, packing experiments, unpacking experiments) can be found in the ReproZip documentation. ReproZip helps both the authors and the evaluators to seamlessly rerun experiments.
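
For illustration, the basic ReproZip workflow (tracing one complete experiment run and packing it into an .rpz bundle) could be driven as in the sketch below; this is only a sketch based on the ReproZip documentation, and `run_experiments.py` is a hypothetical entry point.

```python
import subprocess

# Trace one complete run of the experiment pipeline, recording the files and libraries it touches.
subprocess.run(["reprozip", "trace", "python", "run_experiments.py"], check=True)

# Pack the traced run into a self-contained bundle that reviewers can later unpack with reprounzip.
subprocess.run(["reprozip", "pack", "my_experiments.rpz"], check=True)
```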

If using ReproZip to capture the experiments proves to be difficult for a particular paper, the committee will work with the authors to find the proper solution based on the specifics of the paper and the environment needed. More tools are available here: https://reproduciblescience.org/reproducibility-directory/


Timeline for 2021

The important dates along the timeline for submitting the companion paper and the associated artifacts are:

  • December 11, 2020: deadline for submitting the .pdf and reproducibility archive
  • March, 2021 (tentative): notification of acceptance/rejection (same date as for regular papers),
  • April, 2021: deadline for preparing the final version of the accepted companion paper (same deadline as for regular papers),
  • June, 2021: present your reproducibility work at a specific poster session during the 2021 edition of ACM ICMR in Taiwan.

2021 Reproducibility Committee

Chairs:

  • Martin Aumüller, IT University of Copenhagen, Denmark
  • Björn Þór Jónsson, IT University of Copenhagen, Denmark

Committee members:

  • Andreas Leibetseder, Klagenfurt University, Austria
  • Bei Liu, Microsoft Research Asia, China
  • Hong-Han Shuai, National Chiao Tung University, Taiwan
  • Jinyoung Moon, Electronics and Telecommunications Research Institute (ETRI), Korea
  • Maria Sinziiana Astefanoaei, IT University of Copenhagen, Denmark
  • Matteo Ceccarello, Free University of Bolzano, Italy