dc.contributor.author | Pineau, Joelle | |
dc.contributor.author | Vincent-Lamarre, Philippe | |
dc.contributor.author | Sinha, Koustuv | |
dc.contributor.author | Larivière, Vincent | |
dc.contributor.author | Beygelzimer, Alina | |
dc.contributor.author | d’Alché-Buc, Florence | |
dc.contributor.author | Fox, Emily | |
dc.contributor.author | Larochelle, Hugo | |
dc.date.accessioned | 2021-11-02T18:55:04Z | |
dc.date.available | NO_RESTRICTION | fr |
dc.date.available | 2021-11-02T18:55:04Z | |
dc.date.issued | 2021 | |
dc.identifier.uri | http://hdl.handle.net/1866/25785 | |
dc.publisher | Microtome Publishing | fr |
dc.rights | This work is licensed under a Creative Commons Attribution 4.0 International License. | |
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | |
dc.subject | Reproducibility | fr |
dc.subject | NeurIPS 2019 | fr |
dc.title | Improving reproducibility in machine learning research: a report from the NeurIPS 2019 reproducibility program | fr |
dc.type | Article | fr |
dc.contributor.affiliation | Université de Montréal. Faculté des arts et des sciences. École de bibliothéconomie et des sciences de l'information | fr |
dcterms.abstract | One of the challenges in machine learning research is to ensure that presented and published results are sound and reliable. Reproducibility, that is, obtaining results similar to those presented in a paper or talk, using the same code and data (when available), is a necessary step in verifying the reliability of research findings. Reproducibility is also an important step toward promoting open and accessible research, allowing the scientific community to quickly integrate new findings and convert ideas to practice. Reproducibility also promotes the use of robust experimental workflows, which can reduce unintentional errors. In 2019, the Neural Information Processing Systems (NeurIPS) conference, the premier international conference for research in machine learning, introduced a reproducibility program designed to improve the standards across the community for how we conduct, communicate, and evaluate machine learning research. The program contained three components: a code submission policy, a community-wide reproducibility challenge, and the inclusion of the Machine Learning Reproducibility Checklist in the paper submission process. In this paper, we describe each of these components, how each was deployed, and what we learned from this initiative. | fr |
dcterms.isPartOf | urn:ISSN:1532-4435 | fr |
dcterms.isPartOf | urn:ISSN:1533-7928 | fr |
dcterms.language | eng | fr |
UdeM.ReferenceFournieParDeposant | https://jmlr.org/papers/v22/20-303.html | fr |
UdeM.VersionRioxx | Version publiée / Version of Record | fr |
oaire.citationTitle | Journal of machine learning research | fr |
oaire.citationVolume | 22 | fr |