Using qualitative feedback data to support educators in providing quality feedback to undergraduate medical students
Date
2025
Publisher
Saudi Digital Library
Abstract
Introduction
Assessments in medical education are essential for evaluating the competencies of future healthcare professionals. Among these, Objective Structured Clinical Examinations (OSCEs) play a pivotal role by offering a structured and objective approach to evaluating clinical skills. Despite their widespread use, significant discrepancies between observed scores and global rating scores have emerged, raising concerns about the reliability and validity of these assessments. These discrepancies often lead to the provision of generic, non-specific feedback that fails to offer students actionable guidance for improvement. This thesis investigates how qualitative feedback data can better support educators in providing actionable, high-quality feedback. It addresses the discrepancies between observed scores and global rating scores and aims to develop a feedback system that is both specific and meaningful, empowering educators to guide undergraduate medical students toward clinical proficiency.
Methods
The thesis is organised into four interrelated studies, each contributing to the development of a novel structured feedback tool for OSCEs.

The first study, A comparative analysis of OSCE observed scores and global rating scores using a novel approach, was a retrospective observational analysis of scoring discrepancies between the two systems. Data were collected from 1,571 anonymised undergraduate medical students across nine cohorts, and statistical methods, including ordinal regression models and raincloud plots, were employed to identify and analyse discrepancies between observed scores and global rating scores in OSCE assessments (a minimal sketch of this style of analysis is given below).

The second study, A retrospective feedback analysis of objective structured clinical examination performance of undergraduate medical students, applied text-mining techniques to written feedback from 1,034 anonymised OSCE performance records. R software was used to identify common descriptors in the feedback (also sketched below), revealing a reliance on generic, non-specific terms and underscoring the need for more detailed and actionable feedback.

In response to these feedback gaps, the third study, A systematic review of effective quality feedback measurement tools used in clinical skills assessment, reviewed existing feedback measurement tools in clinical education. Searches of PubMed, Medline, and Scopus identified 14 eligible studies, from which ten key determinants of effective feedback (such as specificity, balance, and behavioural focus) were distilled to inform the design of a new feedback tool.
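As a rough illustration of the first study's approach, the R sketch below fits a proportional-odds ordinal regression of global rating on observed checklist score and draws a raincloud plot of score distributions within each rating band. It is a minimal sketch only: the column names, rating bands, and synthetic data are hypothetical and not drawn from the thesis dataset.

library(MASS)     # polr() for proportional-odds ordinal regression
library(ggplot2)
library(ggdist)   # stat_halfeye() supplies the "cloud" half of a raincloud plot

# Hypothetical stand-in for the anonymised records: one row per station attempt
set.seed(1)
n <- 200
observed_score <- round(runif(n, 5, 30))
global_rating <- cut(observed_score + rnorm(n, 0, 4),   # noisy link mimics rater judgement
                     breaks = c(-Inf, 10, 15, 20, 25, Inf),
                     labels = c("Fail", "Borderline", "Pass", "Good", "Excellent"),
                     ordered_result = TRUE)
osce <- data.frame(observed_score, global_rating)

# Ordinal regression: how well does the checklist score predict the global rating?
fit <- polr(global_rating ~ observed_score, data = osce, Hess = TRUE)
summary(fit)

# Raincloud plot: score distributions per rating band; overlap between adjacent
# bands is where score/rating discrepancies become visible
ggplot(osce, aes(x = global_rating, y = observed_score)) +
  stat_halfeye(adjust = 0.6, width = 0.5, justification = -0.2, .width = 0) +
  geom_boxplot(width = 0.12, outlier.shape = NA) +
  geom_jitter(width = 0.05, alpha = 0.4) +
  coord_flip()

In the same spirit, the second study's descriptor counting can be approximated with a tidytext pipeline; the feedback strings here are invented placeholders, not records from the study.

library(dplyr)
library(tidytext)

feedback <- data.frame(
  id   = 1:3,
  text = c("Good history taking overall",
           "Satisfactory examination, good rapport",
           "Good communication but needs more structure")
)

# Tokenise the free-text comments, drop stop words, and rank the descriptors;
# a heavy tail of generic terms ("good", "satisfactory") flags non-specific feedback
feedback |>
  unnest_tokens(word, text) |>
  anti_join(get_stopwords(), by = "word") |>
  count(word, sort = TRUE)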
The fourth study, Development and preliminary validation of a content validity index for an OSCE feedback tool in medical education, convened an expert panel of seven medical educators to evaluate the tool's relevance and clarity across domains including communication, task knowledge, and professionalism. Content validity index (CVI) scores were calculated to assess how effectively the structured feedback tool captures and conveys essential feedback elements within the OSCE framework.
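For concreteness, a minimal sketch of the CVI arithmetic follows, assuming each of the seven experts rated every tool item for relevance on a 4-point scale (1 = not relevant to 4 = highly relevant); the ratings matrix and item names are invented for illustration.

# Item-level CVI (I-CVI): proportion of experts rating the item 3 or 4
ratings <- matrix(
  c(4, 4, 3, 4, 3, 4, 4,    # communication
    3, 4, 4, 2, 4, 3, 4,    # task knowledge
    4, 3, 4, 4, 4, 4, 3),   # professionalism
  nrow = 3, byrow = TRUE,
  dimnames = list(c("communication", "task_knowledge", "professionalism"),
                  paste0("expert_", 1:7))
)

i_cvi <- rowMeans(ratings >= 3)   # e.g. 6/7 = 0.86 if one expert rates below 3
s_cvi_ave <- mean(i_cvi)          # scale-level CVI: average of the item indices

i_cvi
s_cvi_ave

With a panel of six to ten experts, an I-CVI of at least 0.78 and an S-CVI/Ave of at least 0.90 are commonly cited benchmarks for acceptable content validity.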
Results
The findings of this thesis highlighted significant discrepancies between observed scores and global rating scores in OSCEs, particularly in mid-range scoring categories, underscoring the uncertainty in current assessment practices. A retrospective analysis of the feedback provided to medical students further revealed that much of it was generic, lacking the depth and specificity required for actionable guidance. The final studies introduced and validated an enhanced feedback tool, demonstrating its potential to address these gaps by giving medical students more detailed, constructive, and actionable feedback in support of their clinical development.
Discussion
The identified discrepancies between observed scores and global rating scores highlight limitations of the current OSCE assessment framework. Although OSCEs remain an essential tool for evaluating clinical competencies, their effectiveness may be undermined by scoring inconsistencies and generic feedback. These findings emphasise the need to recalibrate the feedback process so that it both reflects student performance and directs future improvement. The structured feedback tool introduced here offers a solution that enhances the specificity and relevance of feedback and aligns more closely with the educational goal of developing clinical proficiency in medical students.
Conclusion
This thesis stresses the need for a more structured, actionable feedback system within OSCEs. By addressing the identified discrepancies between observed scores and global rating scores and by introducing an enhanced feedback tool, this research improves the accuracy and relevance of feedback provided to medical students. The developed tool aims to bridge the gap between assessment and actionable feedback, ultimately improving the educational value of OSCEs and fostering the development of more competent and prepared healthcare professionals.
Keywords
OSCEs, Feedback, Assessments, Medical Education, Global Rating Score