Abstract
Selection into specialty training is a high-stakes assessment that demands valuable consultant time. At one initial entry-level and two higher-level anaesthesia selection centres, we investigated the feasibility of using staff participating in simulation scenarios, rather than observing consultants, to rate candidate performance. We compared participant and observer scores on four outcomes: inter-rater reliability; score distributions; correlation of candidate rankings; and the percentage of candidates whose selection might be affected by substituting participants' ratings for observers'. Inter-rater reliability between observers was good (correlation coefficient 0.73-0.96) but lower between participants (correlation coefficient 0.39-0.92), particularly at the higher level, where participants also rated candidates more favourably than observers did. Station rank orderings were strongly correlated between the rater groups at entry level (rho 0.81, p < 0.001) but more weakly correlated at the two higher-level centres (rho 0.52, p = 0.018; rho 0.58, p = 0.001). Substituting participants' ratings for observers' had less effect once scores were combined with those from other selection centre stations; depending on the number of training posts available, selection decisions could have changed for 0-20% of candidates. We conclude that using participating raters is feasible at initial entry level only.
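The abstract reports two kinds of statistic: a correlation coefficient between pairs of raters (inter-rater reliability) and Spearman's rho between candidate rank orderings, with a p value. The sketch below illustrates how such figures can be computed; it is not the study's analysis code, all scores are hypothetical, and the abstract does not state which inter-rater coefficient was used, so plain Pearson r is assumed here.

```python
# Illustrative sketch only: the two statistics named in the abstract,
# computed on hypothetical data for six candidates at one station.
from scipy.stats import pearsonr, spearmanr

# Hypothetical station scores from two observers and the participant group.
observer_a = [7.0, 5.5, 8.0, 6.0, 4.5, 7.5]
observer_b = [7.5, 5.0, 8.5, 6.5, 4.0, 7.0]
participant_mean = [7.8, 6.0, 8.2, 7.0, 5.5, 7.9]
observer_mean = [(a + b) / 2 for a, b in zip(observer_a, observer_b)]

# Inter-rater reliability between the two observers (the abstract reports
# 0.73-0.96 for observer pairs; Pearson r is an assumption, not the paper's
# stated method).
r, _ = pearsonr(observer_a, observer_b)
print(f"observer inter-rater correlation: r = {r:.2f}")

# Agreement between the rank orderings produced by the two rater groups,
# analogous to the abstract's rho and p values (e.g. rho 0.81, p < 0.001
# at entry level).
rho, p = spearmanr(participant_mean, observer_mean)
print(f"rank-order agreement: rho = {rho:.2f}, p = {p:.3f}")
```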
| Original language | English |
| --- | --- |
| Pages (from-to) | 591-599 |
| Number of pages | 9 |
| Journal | Anaesthesia |
| Volume | 68 |
| Issue number | 6 |
| DOIs | |
| Publication status | Published - Jun 2013 |
Keywords
- Anesthesiology
- Clinical Competence
- Education, Medical, Graduate
- Feasibility Studies
- Humans
- Observer Variation
- Patient Simulation
- Personnel Selection
- Reproducibility of Results