Full item record

dc.contributor.author: Lanovaz, Marc
dc.contributor.author: Primiani, Rachel
dc.date.accessioned: 2022-04-26T13:00:39Z
dc.date.available: NO_RESTRICTION
dc.date.available: 2022-04-26T13:00:39Z
dc.date.issued: 2022-04-25
dc.identifier.uri: http://hdl.handle.net/1866/26620
dc.publisher: Springer
dc.rights: This work is licensed under a Creative Commons Attribution - NonCommercial 4.0 International License.
dc.rights.uri: https://creativecommons.org/licenses/by-nc/4.0/
dc.subject: AB design
dc.subject: Baseline
dc.subject: Data analysis
dc.subject: Machine learning
dc.subject: n-of-1 trial
dc.subject: Single-case design
dc.title: Waiting for baseline stability in single-case designs: Is it worth the time and effort?
dc.type: Article
dc.contributor.affiliation: Université de Montréal. Faculté des arts et des sciences. École de psychoéducation
dc.identifier.doi: 10.3758/s13428-022-01858-9
dcterms.abstract: Researchers and practitioners often use single-case designs (SCDs), or n-of-1 trials, to develop and validate novel treatments. Standards and guidelines have been published to provide guidance as to how to implement SCDs, but many of their recommendations are not derived from the research literature. For example, one of these recommendations suggests that researchers and practitioners should wait for baseline stability prior to introducing an independent variable. However, this recommendation is not strongly supported by empirical evidence. To address this issue, we used Monte Carlo simulations to generate graphs with fixed, response-guided, and random baseline lengths while manipulating trend and variability. Then, our analyses compared the type I error rate and power produced by two methods of analysis: the conservative dual-criteria method (a structured visual aid) and a support vector classifier (a model derived from machine learning). The conservative dual-criteria method produced fewer errors when using response-guided decision-making (i.e., waiting for stability) and random baseline lengths. In contrast, waiting for stability did not reduce decision-making errors with the support vector classifier. Our findings question the necessity of waiting for baseline stability when using SCDs with machine learning, but the study must be replicated with other designs and graph parameters that change over time to support our results.
dcterms.isPartOf: urn:ISSN:1554-351X
dcterms.isPartOf: urn:ISSN:1554-3528
dcterms.language: eng
dcterms.relation: https://doi.org/10.17605/OSF.IO/H7BSG
UdeM.ReferenceFournieParDeposant: Lanovaz, M. J., & Primiani, R. (2022). Waiting for baseline stability in single-case designs: Is it worth the time and effort? Behavior Research Methods. https://doi.org/10.3758/s13428-022-01858-9
UdeM.VersionRioxx: Version of Record
oaire.citationTitle: Behavior Research Methods
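As an illustration of the kind of decision rule compared in the abstract, below is a minimal Python sketch of the conservative dual-criteria (CDC) method applied to one simulated AB graph. This is not the authors' implementation (their materials are available through the OSF link above); the series lengths, level shift, and noise level are arbitrary illustration values, and the binomial cut-off used here only approximates the published CDC criteria.

# Minimal sketch of the conservative dual-criteria (CDC) method on one
# simulated AB graph. Not the authors' code; parameters are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# One AB graph: 5 baseline (A) points and 10 treatment (B) points,
# with a programmed level increase in phase B (illustrative values).
baseline = rng.normal(loc=10.0, scale=1.5, size=5)
treatment = rng.normal(loc=13.0, scale=1.5, size=10)

def cdc_detects_change(a, b, increase_expected=True, alpha=0.05):
    """Return True if the CDC method flags a systematic change in phase B."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    shift = 0.25 * a.std(ddof=1)          # conservative 0.25 SD adjustment
    sign = 1.0 if increase_expected else -1.0

    # Criterion 1: baseline mean line, shifted in the expected direction.
    mean_line = np.full(b.size, a.mean()) + sign * shift

    # Criterion 2: baseline OLS trend line extended into phase B, also shifted.
    x_a = np.arange(a.size)
    slope, intercept = np.polyfit(x_a, a, 1)
    x_b = np.arange(a.size, a.size + b.size)
    trend_line = intercept + slope * x_b + sign * shift

    # Count treatment points falling beyond BOTH criterion lines.
    if increase_expected:
        exceed = np.sum((b > mean_line) & (b > trend_line))
    else:
        exceed = np.sum((b < mean_line) & (b < trend_line))

    # Binomial criterion: smallest count unlikely under chance (p = 0.5).
    k = np.arange(b.size + 1)
    critical = k[stats.binom.sf(k - 1, b.size, 0.5) < alpha].min()
    return exceed >= critical

print(cdc_detects_change(baseline, treatment))  # decision for this one graph

In a Monte Carlo study such as the one described in the abstract, a rule like this would be applied to many thousands of simulated graphs with and without a programmed effect, under different baseline-length policies, to estimate power and type I error rates.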

