When did you last see an empirical study in the social sciences that reported measurement error in a quantitative survey and how that error might affect the research findings?
In a previous blog, we discussed inter-rater reliability (IRR) as a tool to reduce bias in surveys. Our advice there was to use IRR as often as possible prior to the launch of a survey, especially in questions where subjectivity is a likely source of error.
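To make IRR concrete, here is a minimal sketch of one common IRR statistic, Cohen's kappa, computed for two enumerators coding the same set of answers. The ratings and variable names are illustrative assumptions, not data from any of our surveys:

```python
# Cohen's kappa for two raters: agreement corrected for chance.
# Ratings below are made-up examples for illustration only.
from collections import Counter

rater_a = ["yes", "yes", "no", "yes", "no", "no", "yes", "no"]
rater_b = ["yes", "no", "no", "yes", "no", "yes", "yes", "no"]

n = len(rater_a)
# Observed agreement: share of items both raters coded identically.
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Expected agreement by chance, from each rater's marginal label frequencies.
counts_a, counts_b = Counter(rater_a), Counter(rater_b)
labels = set(rater_a) | set(rater_b)
expected = sum((counts_a[l] / n) * (counts_b[l] / n) for l in labels)

# Kappa: 1 = perfect agreement, 0 = no better than chance.
kappa = (observed - expected) / (1 - expected)
print(round(kappa, 2))  # -> 0.5 for these example ratings
```

A kappa well below 1 before launch is a signal that the question wording or the enumerator training needs work.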
Our new Technical Brief picks up this theme and explores what happens when error creeps into a survey. The brief discusses the types and possible sources of measurement error in survey data, using the example of a potential RCT studying best-practice adoption among maize farmers. Using a synthetic dataset, we demonstrate that the effect of measurement error is not negligible and can change the research findings.
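The mechanics behind this are easy to reproduce. The sketch below (our own illustration, not the brief's actual dataset) simulates classical measurement error in an explanatory variable and shows how it attenuates a regression estimate toward zero; all numbers and variable names are assumptions chosen for the example:

```python
# Illustrative simulation of attenuation bias from classical measurement
# error. True effect of yield on the outcome is set to 2.0; measuring
# yield with noise pulls the estimated slope toward zero.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

true_yield = rng.normal(50, 10, n)                 # true maize yield
outcome = 2.0 * true_yield + rng.normal(0, 5, n)   # outcome, true slope 2.0

# Mismeasured yield, e.g. from recall error, with noise SD equal to the
# signal SD. The theoretical attenuation factor is then
# var(x) / (var(x) + var(e)) = 100 / 200 = 0.5.
noisy_yield = true_yield + rng.normal(0, 10, n)

def ols_slope(x, y):
    """Slope of a simple one-variable OLS regression of y on x."""
    x_c = x - x.mean()
    return (x_c @ (y - y.mean())) / (x_c @ x_c)

print(round(ols_slope(true_yield, outcome), 2))   # close to 2.0
print(round(ols_slope(noisy_yield, outcome), 2))  # attenuated, near 1.0
```

Halving an estimated treatment-relevant coefficient is exactly the kind of "not negligible" impact on findings the brief documents.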
When to look out for measurement error
Every activity involving human judgment is prone to error – this is virtually unavoidable. But it pays to be aware and take extra steps to mitigate errors. Especially if:
- You are collecting data on things that are difficult to measure (think income, productivity);
- Your survey instruments ask questions that rely on recall or are time-sensitive;
- You are working with data that depends on the judgment of enumerators;
- You are collecting data based on observations;
- You are asking the same question about a household to different respondents across households (for example, a woman in one household and a man in another), since they may perceive things differently;
- Your research design relies on the parallel trends assumption holding in the real world.
Our Technical Brief wraps up with some lessons we learned through our data collection projects in East Africa, and a list of strategies that can be deployed to mitigate the effects of measurement errors.
Technical Brief: Measurement error in survey data: what is it and why does it matter