Date of Award


Document Type


Degree Name

Doctor of Philosophy (PhD)


Department of Educational and Counseling Psychology


Educational Psychology and Methodology

Content Description

1 online resource (v, 126 pages) : illustrations.

Dissertation/Thesis Chair

Kimberly Colvin

Committee Members

Lisa Keller, Mariola Moeyaert


Keywords

reliability, single-item assessment, psychometrics, educational evaluation

Subject Categories

Educational Psychology | Psychology

Abstract

Single-item assessments have become increasingly popular across diverse areas, even though there is no consensus on whether they are sufficiently reliable. Researchers have developed several methods to estimate the reliability of single-item assessments: some are based on factor analysis (method FA) or correction for attenuation (method CA), while others employ Molenaar and Sijtsma's theory (method MS), coefficient λ6 (method λ6), or a latent class model (method LCRC). However, no empirical study has investigated which of these methods estimates the reliability of a single-item assessment most precisely. This study addressed that question via a simulation study. To represent assessments as found in practice, the simulation varied several aspects: the item discrimination parameter, test length, sample size, and the correlation between the single-item assessment and its corresponding multi-item assessment. Results suggest that by using methods CA, MS, and FA concurrently, researchers can obtain the most precise estimate of the range of a single-item assessment's reliability in 94.44% of cases. The range of the single-item assessments' population reliability was (.28, .59), while the range of the estimated reliability was (.15, .70). Test length, the item discrimination parameter, sample size, and the correlation between the single-item and multi-item assessments had no clear impact on method choice, nor did these four aspects show a consistent relation with the estimate of a single-item assessment's reliability.
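To illustrate one of the approaches named above, here is a minimal sketch of a correction-for-attenuation (CA) style estimate. It assumes the common formulation in which a single item's reliability is estimated as the squared item–scale correlation divided by the multi-item scale's reliability; the function name and exact formula are illustrative assumptions, not necessarily the dissertation's implementation.

```python
def ca_single_item_reliability(r_sm: float, rel_m: float) -> float:
    """Correction-for-attenuation estimate of a single item's reliability.

    Assumed formulation (illustrative): r_single = r_sm**2 / rel_m, where
    r_sm is the observed correlation between the single-item score and the
    multi-item total, and rel_m is the multi-item scale's reliability
    (e.g., Cronbach's alpha).
    """
    if not 0.0 < rel_m <= 1.0:
        raise ValueError("multi-item reliability must be in (0, 1]")
    return r_sm ** 2 / rel_m


# Example: an item-scale correlation of .60 and scale reliability of .80
# yield an estimated single-item reliability of .36 / .80 = .45.
print(ca_single_item_reliability(0.60, 0.80))  # -> 0.45
```

The logic behind the correction is that the observed item–scale correlation is attenuated by the unreliability of the multi-item criterion, so dividing the squared correlation by the criterion's reliability disattenuates the estimate.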