  • Estimating open access mandate effectiveness: the MELIBEA score 

    Vincent-Lamarre, Philippe; Boivin, Jade; Gargouri, Yassine; Larivière, Vincent; Harnad, Stevan (Association for Information Science and Technology, 2015-12-23)
    MELIBEA is a directory of institutional open-access policies for research output that uses a composite formula with eight weighted conditions to estimate the "strength" of open access (OA) mandates (registered in ROARMAP). We analyzed total Web of ...
  • Improving reproducibility in machine learning research: a report from the NeurIPS 2019 reproducibility program 

    Pineau, Joelle; Vincent-Lamarre, Philippe; Sinha, Koustuv; Larivière, Vincent; Beygelzimer, Alina; d’Alché-Buc, Florence; Fox, Emily; Larochelle, Hugo (Microtome Publishing, 2021)
    One of the challenges in machine learning research is to ensure that presented and published results are sound and reliable. Reproducibility, that is obtaining similar results as presented in a paper or talk, using the same code and data (when ...
  • Predatory publishers’ latest scam: bootlegged and rebranded papers 

    Siler, Kyle; Vincent-Lamarre, Philippe; Sugimoto, Cassidy R.; Larivière, Vincent (Nature Research, 2021-10-26)
    To thwart publishing rackets that undermine scholars and scholarly publishing, legitimate journals should show their workings.
  • Textual analysis of artificial intelligence manuscripts reveals features associated with peer review outcome 

    Vincent-Lamarre, Philippe; Larivière, Vincent (MIT Press, 2021-07-15)
    We analyzed a data set of scientific manuscripts that were submitted to various conferences in artificial intelligence. We performed a combination of semantic, lexical, and psycholinguistic analyses of the full text of the manuscripts and compared them ...