Abstract
Experiments often produce a hit rate and a false alarm rate in each of two conditions. These response rates are summarized into a single-point sensitivity measure such as d', and t tests are conducted to test for experimental effects. Using large-scale Monte Carlo simulations, we evaluate the Type I error rates and power that result from four commonly used single-point measures: d', A', percent correct, and gamma. We also test a newly proposed measure called gammaC. For all measures, we consider several ways of handling cases in which false alarm rate = 0 or hit rate = 1. The results of our simulations indicate that power is similar for these measures but that the Type I error rates are often unacceptably high. Type I errors are minimized when the selected sensitivity measure is theoretically appropriate for the data.
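The abstract does not reproduce the simulation code, so the following is a minimal Python sketch of the four established single-point measures it compares, assuming a yes/no design with equal numbers of signal and noise trials. The function names and the particular correction for extreme rates (replacing a rate of 0 with 1/(2N) and a rate of 1 with 1 - 1/(2N)) are illustrative choices, not the authors' implementation, and the proposed gammaC measure is omitted because it is not defined here.

```python
# Hedged sketch: single-point sensitivity measures from a hit rate (H) and a
# false-alarm rate (F), with one common correction for H = 1 or F = 0.
from scipy.stats import norm

def correct_extremes(rate, n_trials):
    """Nudge rates of exactly 0 or 1 so z-transforms stay finite (1/(2N) rule)."""
    if rate == 0.0:
        return 1.0 / (2 * n_trials)
    if rate == 1.0:
        return 1.0 - 1.0 / (2 * n_trials)
    return rate

def d_prime(hit, fa):
    """d' = z(H) - z(F), assuming equal-variance Gaussian evidence distributions."""
    return norm.ppf(hit) - norm.ppf(fa)

def a_prime(hit, fa):
    """Nonparametric A' (Pollack & Norman), with the symmetric form when H < F."""
    if hit >= fa:
        return 0.5 + ((hit - fa) * (1 + hit - fa)) / (4 * hit * (1 - fa))
    return 0.5 - ((fa - hit) * (1 + fa - hit)) / (4 * fa * (1 - hit))

def percent_correct(hit, fa):
    """Proportion correct, assuming equal numbers of signal and noise trials."""
    return (hit + (1 - fa)) / 2

def gamma(hit, fa):
    """Single-point Goodman-Kruskal gamma: (H - F) / (H + F - 2HF)."""
    return (hit - fa) / (hit + fa - 2 * hit * fa)

# Illustrative case: 24 hits on 25 signal trials, 0 false alarms on 25 noise trials.
n = 25
h = correct_extremes(24 / n, n)
f = correct_extremes(0 / n, n)
print(d_prime(h, f), a_prime(h, f), percent_correct(h, f), gamma(h, f))
```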
| Original language | English |
| --- | --- |
| Pages (from-to) | 389-401 |
| Number of pages | 13 |
| Journal | Percept Psychophys |
| Volume | 70 |
| Issue number | 2 |
| DOIs | |
| Publication status | Published - Feb 2008 |
Keywords
- Attention
- Bias
- Cues
- Data Interpretation, Statistical
- Decision Making
- Discrimination Learning
- Humans
- Models
- Monte Carlo Method
- Normal Distribution
- Orientation
- Pattern Recognition, Visual
- Psychology, Experimental
- Psychophysics
- ROC Curve
- Research Design
- Sensitivity and Specificity
- Signal Detection, Psychological