The relative reliability of actively participating and passively observing raters in a simulation-based assessment for selection to specialty training in anaesthesia.

M. J. Roberts*, T. C. E. Gale, P. J. A. Sice, I. R. Anderson

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Selection to specialty training is a high-stakes assessment demanding valuable consultant time. In one initial entry level and two higher level anaesthesia selection centres, we investigated the feasibility of using staff participating in simulation scenarios, rather than observing consultants, to rate candidate performance. We compared participant and observer scores using four different outcomes: inter-rater reliability; score distributions; correlation of candidate rankings; and percentage of candidates whose selection might be affected by substituting participants' for observers' ratings. Inter-rater reliability between observers was good (correlation coefficient 0.73-0.96) but lower between participants (correlation coefficient 0.39-0.92), particularly at the higher level centres, where participants also rated candidates more favourably than did observers. Station rank orderings were strongly correlated between the rater groups at entry level (rho 0.81, p < 0.001) but more weakly correlated at the two higher level centres (rho 0.52, p = 0.018; rho 0.58, p = 0.001). Substituting participants' for observers' ratings had less effect once scores were combined with those from other selection centre stations. Selection decisions for 0-20% of candidates could have changed, depending on the numbers of training posts available. We conclude that using participating raters is feasible at initial entry level only.
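
The abstract compares the two rater groups using rank correlations (Spearman's rho) and inter-rater correlation coefficients. As an illustrative sketch only, and not the authors' actual analysis, the following Python snippet shows how such a comparison between participant and observer station scores might be computed; the variable names and scores are hypothetical.

```python
# Illustrative sketch (not from the paper): comparing participant vs. observer
# ratings with the kinds of statistics named in the abstract.
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Hypothetical station scores for the same candidates from the two rater groups.
observer_scores = np.array([14, 17, 12, 19, 15, 11, 18, 13])
participant_scores = np.array([15, 18, 11, 20, 16, 13, 17, 12])

# Agreement between the two rater groups on raw scores.
r, r_p = pearsonr(observer_scores, participant_scores)

# Agreement on candidate rank order (rho, as used for station rankings).
rho, rho_p = spearmanr(observer_scores, participant_scores)

print(f"Pearson r = {r:.2f} (p = {r_p:.3f})")
print(f"Spearman rho = {rho:.2f} (p = {rho_p:.3f})")
```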
Original language: English
Pages (from-to): 591-599
Number of pages: 9
Journal: Anaesthesia
Volume: 68
Issue number: 6
Publication status: Published - Jun 2013

Keywords

  • Anesthesiology
  • Clinical Competence
  • Education
  • Medical
  • Graduate
  • Feasibility Studies
  • Humans
  • Observer Variation
  • Patient Simulation
  • Personnel Selection
  • Reproducibility of Results