Show item record

dc.contributor.author: Deschênes, Marie-France
dc.contributor.author: Dionne, Éric
dc.contributor.author: Dorion, Michelle
dc.contributor.author: Grondin, Julie
dc.date.accessioned: 2024-01-15T15:16:57Z
dc.date.available: NO_RESTRICTION
dc.date.available: 2024-01-15T15:16:57Z
dc.date.issued: 2023-06-19
dc.identifier.uri: http://hdl.handle.net/1866/32344
dc.publisher: University of Massachusetts at Amherst
dc.subject: Evaluation
dc.subject: Concordance test
dc.subject: Aggregate scoring
dc.subject: Judgment
dc.subject: Minority French-speaking communities
dc.title: Examination of the aggregate scoring method in a judgment concordance test
dc.type: Article
dc.contributor.affiliation: Université de Montréal. Faculté des sciences infirmières
dc.identifier.doi: 10.7275/pare.1258
dcterms.abstract: The use of the aggregate scoring method for scoring concordance tests requires the weighting of test items to be derived from the performance of a group of experts who take the test under the same conditions as the examinees. However, the average score of experts constituting the reference panel remains a critical issue in the use of these tests. This study aims to examine the distribution of panelists' scores on the judgment concordance test (JCT) using the aggregate scoring method. A test composed of 32 items was developed and completed by 14 experts. The mean scores of the experts were calculated based on whether their choices of response categories for the 32 JCT items were included or excluded. Descriptive statistics were computed. The mean scores of the experts showed a difference of 5.76%, depending on the approach used. The approach that excludes the experts' response category choices was found to be more penalizing (76.16% ± 8.9) than the approach including their own choices (81.92% ± 8.1). It is recommended that researchers make their computational approaches explicit, in addition to outlining the distribution of expert results retained for the purpose of determining scores in concordance tests. Further research is required to refine our understanding of the quality of score-setting in this type of test.
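The include-vs-exclude comparison described in the abstract can be sketched in Python. This is a minimal illustration, not the article's actual procedure: the function names are invented, and the weighting rule (each response category credited in proportion to how many panelists chose it, relative to the modal category) is one common aggregate-scoring convention assumed here for concreteness.

```python
from collections import Counter

def item_weights(panel_answers):
    """Aggregate-scoring weights for one item: each response category
    earns credit proportional to the number of panelists who chose it,
    scaled so the modal (most frequently chosen) category is worth 1."""
    counts = Counter(panel_answers)
    modal = max(counts.values())
    return {category: n / modal for category, n in counts.items()}

def score_expert(expert_idx, answers_by_item, include_own=True):
    """Mean score of one panelist across all items. Their own response is
    either kept in the reference panel (include_own=True) or removed from
    it before the item weights are derived (include_own=False)."""
    total = 0.0
    for panel in answers_by_item:
        own = panel[expert_idx]
        reference = (panel if include_own
                     else panel[:expert_idx] + panel[expert_idx + 1:])
        weights = item_weights(reference)
        # A response chosen by no one in the reference panel earns 0.
        total += weights.get(own, 0.0)
    return total / len(answers_by_item)
```

Under this convention, excluding a panelist's own response can only lower (never raise) the credit their modal-or-minority choices receive, which is consistent with the direction of the difference the study reports.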
dcterms.isPartOf: urn:ISSN:1531-7714
dcterms.language: eng
UdeM.ReferenceFournieParDeposant: Deschênes, M.-F., Dionne, E., Dorion, M. and Grondin, J. (2023). Examination of the aggregate scoring method in a professional judgment concordance test. Practical Assessment, Research and Evaluation, 28, Article 8. https://scholarworks.umass.edu/pare/vol28/iss1/8
UdeM.VersionRioxx: Version publiée / Version of Record
oaire.citationTitle: Practical Assessment, Research and Evaluation
oaire.citationVolume: 28
oaire.citationIssue: 1



This document, disseminated on Papyrus, is the exclusive property of the copyright holders and is protected by the Copyright Act (R.S.C. 1985, c. C-42). It may be used for fair dealing and non-commercial purposes, for private study or research, criticism and review as provided by law. For any other use, written authorization from the copyright holders is required.