Examination of the aggregate scoring method in a judgment concordance test
dc.contributor.author | Deschênes, Marie-France | |
dc.contributor.author | Dionne, Éric | |
dc.contributor.author | Dorion, Michelle | |
dc.contributor.author | Grondin, Julie | |
dc.date.accessioned | 2024-01-15T15:16:57Z | |
dc.date.available | NO_RESTRICTION | fr |
dc.date.available | 2024-01-15T15:16:57Z | |
dc.date.issued | 2023-06-19 | |
dc.identifier.uri | http://hdl.handle.net/1866/32344 | |
dc.publisher | University of Massachusetts at Amherst | fr |
dc.subject | Evaluation | fr |
dc.subject | Concordance test | fr |
dc.subject | Aggregate scoring | fr |
dc.subject | Judgment | fr |
dc.subject | Minority French-speaking communities | fr |
dc.title | Examination of the aggregate scoring method in a judgment concordance test | fr |
dc.type | Article | fr |
dc.contributor.affiliation | Université de Montréal. Faculté des sciences infirmières | fr |
dc.identifier.doi | 10.7275/pare.1258 | |
dcterms.abstract | The use of the aggregate scoring method for scoring concordance tests requires the weighting of test items to be derived from the performance of a group of experts who take the test under the same conditions as the examinees. However, the average score of the experts constituting the reference panel remains a critical issue in the use of these tests. This study aims to examine the distribution of panelists’ scores on the judgment concordance test (JCT) using the aggregate scoring method. A test composed of 32 items was developed and completed by 14 experts. The mean scores of the experts were calculated based on whether their choices of response categories for the 32 JCT items were included or excluded. Descriptive statistics were computed. The mean scores of the experts showed a difference of 5.76%, depending on the approach used. The approach that excludes the experts’ response category choices was found to be more penalizing (76.16%±8.9) than the approach that includes their own choices (81.92%±8.1). It is recommended that researchers make their computational approaches explicit, in addition to outlining the distribution of expert results retained for the purpose of determining scores in concordance tests. Further research is required to refine our understanding of the quality of score-setting in this type of test. | fr |
dcterms.isPartOf | urn:ISSN:1531-7714 | fr |
dcterms.language | eng | fr |
UdeM.ReferenceFournieParDeposant | Deschênes, M.-F., Dionne, E., Dorion, M. et Grondin, J. (2023). Examination of the aggregate scoring method in a professional judgment concordance test. Practical Assessment, Research and Evaluation. Vol. 28, Article 8. https://scholarworks.umass.edu/pare/vol28/iss1/8 | fr |
UdeM.VersionRioxx | Version publiée / Version of Record | fr |
oaire.citationTitle | Practical assessment, research and evaluation | fr |
oaire.citationVolume | 28 | fr |
oaire.citationIssue | 1 | |
This document, distributed on Papyrus, is the exclusive property of the copyright holders and is protected by the Copyright Act (R.S.C. (1985), c. C-42). It may be used for fair and non-commercial dealing, for purposes of private study or research, criticism or review, as provided by the Act. Any other use requires the written authorization of the copyright holders.