A vast literature on interviewer effects (interviewer measurement error variance) is devoted to estimating these effects and understanding their causes and associated factors. However, consideration of interviewer effects in active quality control (QC) does not appear to be widespread, despite their known role in reducing the precision of survey estimates. This dissertation addresses that gap, with the overarching goal of using item-level paradata (keystrokes and time stamps generated during the computer-assisted interviewing process) in a systematic manner to develop an active interviewer monitoring system that controls interviewer effects.

The dissertation is structured around exploring associations between paradata, indicators of interviewing quality, and interviewer effects. Our hypothesis is that different levels of interviewing quality produce different paradata patterns. Differing levels of interviewing quality also result in different between-interviewer response means even after controlling for respondent characteristics, leading to interviewer effects. Thus, interviewing quality is conceptualized as a common cause of both interviewer effects and paradata patterns, making it possible to treat paradata patterns as potentially effective proxies for interviewer effects.

Little is known about what paradata say about the actual quality of an interview. We explore this in Chapter 2, where we use paradata patterns to predict either the proportion of flags in an interview (interview-level analysis) or the occurrence of a QC flag for an item (item-level analysis). The results show that paradata patterns have strong associations with interviewing quality. A key finding is that a multivariate approach to paradata use is necessary.

Chapter 3 turns to associations between indicators of interviewing quality and interviewer effects. Survey QC systems monitor interviewers for compliance with the interviewing protocol, but it is not clear whether deviations from protocol are also associated with interviewer effects. Our analysis shows moderate associations in this regard; we also find that QC variables are complementary to other interviewer-level characteristic variables, and that, when used together, they explain a fair share of the interviewer variance.

Building on the foundations laid by Chapters 2 and 3, Chapter 4 uses paradata to directly predict interviewer effects. We find that paradata are fairly strong predictors of interviewer effects for the items we analyzed, explaining more than half the magnitude of interviewer effects on average. Paradata also outperformed interviewer-level demographic and work-related variables in explaining interviewer effects. While most of the focus in the literature and in practice has been on time-based paradata (e.g., item times), we find that non-time-based paradata (e.g., the frequency of item revisits) outperform time-based paradata for a large majority of items. We discuss how survey organizations can use the dissertation's findings in active quality control. All analyses use data from the 2015 wave of the Panel Study of Income Dynamics.
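For readers unfamiliar with how interviewer effects are typically quantified, a minimal sketch under the standard random-intercept formulation follows (this is an assumed, generic specification for illustration; the dissertation's exact models may differ). For a response y_{ij} from respondent i interviewed by interviewer j, with respondent-level covariates x_{ij}:

    y_{ij} = \beta_0 + \mathbf{x}_{ij}'\boldsymbol{\beta} + u_j + e_{ij},
    \quad u_j \sim N(0, \sigma^2_u), \quad e_{ij} \sim N(0, \sigma^2_e)

The interviewer effect is commonly summarized by the intra-interviewer correlation

    \rho_{\text{int}} = \frac{\sigma^2_u}{\sigma^2_u + \sigma^2_e},

and the share of interviewer variance "explained" by adding interviewer-level predictors z_j (e.g., paradata summaries or QC indicators) can be expressed as

    R^2_{\text{int}} = \frac{\hat{\sigma}^2_{u,\text{null}} - \hat{\sigma}^2_{u,\text{adj}}}{\hat{\sigma}^2_{u,\text{null}}},

where the "null" model omits z_j and the "adjusted" model includes it. Statements such as "explaining more than half the magnitude of interviewer effects" correspond to measures of this general form.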