Assessment of Resident Physicians' Communicator and Collaborator Competencies by Interprofessional Clinicians: A Mixed-Methods Study

ABSTRACT

Phenomenon: As medical training moves toward competency-based medical education, greater emphasis is being placed on assessing a more comprehensive skill set, including the ability to communicate and collaborate effectively in the workplace. Nonphysician members of interprofessional (IP) teams have valuable perspectives on actual resident performance, yet they are often not adequately engaged in providing feedback to residents. Drawing on the educational theories of collaborative evaluation and social constructivism, this research examined the ability of IP clinicians to provide feedback to residents. The aim of this study was to examine IP clinicians' perceptions of their ability to provide formative feedback, through their observations and assessments of residents during developmental pediatrics rotations, compared to physician supervisors on the rotation, and to qualitatively explore potential barriers to the feedback process from their perspective.

Approach: This explanatory, sequential mixed-methods study first examined which, and how many, of the 40 CanMEDS Communicator and Collaborator training objectives were considered observable and assessable by IP clinicians and physicians. The mean number of objectives observed and practically assessed was compared (a) between groups (IP clinicians vs. physicians) and (b) between clinical service teams during the core developmental pediatrics rotations, using independent t tests. Second, a thematic qualitative analysis of focus groups was used to develop a contextual understanding of the factors that influenced this process. Data were analyzed using three levels of open coding and descriptive qualitative analysis techniques.

Findings: Physicians reported that they could observe (M = 33.3, SD = 5.2, 83.3%) and assess (M = 31.5, SD = 7.3, 79%) a larger number of objectives than the IP clinician group (M = 24.7, SD = 8.6, 61.8%, and M = 20.3, SD = 10.6, 51%, respectively). There were no differences between the clinical service teams (i.e., preschool/school-age and pediatric rehabilitation). The objective most observable and assessable by the IP clinicians was “Demonstrates a respectful attitude towards other colleagues and members of an interprofessional team.” Four themes identified by the IP clinicians provided more in-depth qualitative information: (a) assessment requires more than simple observation, (b) assumptions and indirect observation influence assessment, (c) clinic culture and structure shape observation and assessment, and (d) specific assessment criteria are required by IP clinicians.

Insights: IP clinicians have both the desire and the ability to provide formative feedback to residents. Formalized processes with specific evaluation criteria would facilitate meaningful feedback from IP clinicians in the assessment of residents as they journey toward competence.
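To illustrate the group comparison described in the Approach, the sketch below runs an independent t test from the summary statistics reported in the abstract (mean and SD of observable objectives, out of 40). This is not the authors' analysis code; the group sizes and the use of Welch's correction are assumptions for illustration only.

```python
# Hypothetical sketch, not the study's actual analysis:
# independent t test comparing the mean number of CanMEDS objectives
# reported as observable by physicians vs. IP clinicians,
# using only the summary statistics given in the abstract.
from scipy.stats import ttest_ind_from_stats

N_PHYSICIANS = 10      # assumed group size; not reported in the abstract
N_IP_CLINICIANS = 20   # assumed group size; not reported in the abstract

result = ttest_ind_from_stats(
    mean1=33.3, std1=5.2, nobs1=N_PHYSICIANS,      # physicians: observable objectives
    mean2=24.7, std2=8.6, nobs2=N_IP_CLINICIANS,   # IP clinicians: observable objectives
    equal_var=False,  # Welch's t test; an assumption, since unequal SDs are reported
)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```

The same call could be repeated for the "assessable" objectives (M = 31.5 vs. M = 20.3) or replaced with `scipy.stats.ttest_ind` if the raw per-rater counts were available.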
