
When the measure is broken, the solution is wrong – Part 2.

A project on intimate partner violence in Nairobi shows why choosing the right measure matters.

Measures enable us to estimate behavior change due to a particular intervention. In research settings, reliable and valid measures are a prerequisite for answering research questions and drawing conclusions about the world. In applied settings, good measures form the basis on which well-informed policies can be rolled out.

In this two-part series on measures, we look at some of the ways we have had to recalibrate measures to ensure we get accurate results from our work. Our work on measures is part of the Culture, Research Ethics, and Methods (CREME) agenda at Busara. We have established a team within our Research & Innovation division to pursue independent research on three topics central to behavioral science: cross-cultural validation, ethics, and methods. Specifically, we are looking to understand behavior and psychological constructs, improve the experiences of research participants, and examine the methodological practices that work best in the Global South.

Self-reported data is notoriously prone to bias. People have many reasons not to tell the truth when directly asked about issues, particularly when those issues are personal or sensitive. This is exactly the case with intimate partner violence, where respondents may hesitate to answer a question such as “Have you ever been hit by your partner?” because they fear the information could end up in the wrong hands, putting them at risk. In such cases, indirect methods of measurement could provide important advantages over more direct methods, both by protecting the interests of participants and by allowing more valid and accurate estimates of the rate of intimate partner violence. If they do lead to more valid estimates, these could directly inform and improve policies to reduce intimate partner violence.

We compared the validity of one type of indirect measurement method, the so-called “List Experiment,” to a more direct method as a way of understanding how researchers and policymakers should measure intimate partner violence. A List Experiment estimates a target characteristic, such as having experienced intimate partner violence, without ever requiring participants to report that characteristic directly.

A List Experiment

We chose a variation of the List Experiment called a Double List Experiment, which offers more precision than a conventional List Experiment.
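For readers unfamiliar with the method: in a standard List Experiment, a control group sees a short list of non-sensitive statements and reports only how many apply to them, while a treatment group sees the same list with the sensitive item added; the difference in average counts estimates the prevalence of the sensitive characteristic. In the double-list variant, two lists are used and each respondent serves as a treatment observation for one list and a control observation for the other, and the two estimates are averaged. Below is a minimal sketch of that estimator in Python, using made-up numbers and hypothetical column names rather than anything from the actual study:

```python
import pandas as pd

# Hypothetical respondent-level data for a double list experiment.
# Each respondent reports only the COUNT of statements on each list
# that apply to them; the sensitive item (e.g. ever experiencing IPV)
# is appended to exactly one of the two lists per respondent.
df = pd.DataFrame({
    "count_list_a": [3, 2, 2, 3, 3, 2],
    "count_list_b": [2, 2, 1, 3, 2, 2],
    "treated_on_a": [1, 0, 1, 0, 1, 0],  # 1 = sensitive item added to list A, 0 = added to list B
})

def list_experiment_estimate(counts, treated):
    """Difference-in-means estimator for a single list:
    prevalence ~ mean count (treatment) - mean count (control)."""
    return counts[treated == 1].mean() - counts[treated == 0].mean()

# One estimate per list: respondents treated on list A are controls on list B, and vice versa.
est_a = list_experiment_estimate(df["count_list_a"], df["treated_on_a"])
est_b = list_experiment_estimate(df["count_list_b"], 1 - df["treated_on_a"])

# The double list estimate averages the two, using every respondent as both
# a treatment and a control observation, which improves precision.
double_list_estimate = (est_a + est_b) / 2
print(f"Estimated prevalence: {double_list_estimate:.2f}")
```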

Our study included 1,000 men and women aged 18 and above in the low-income settlements of Kibera and Kawangware in Nairobi. All respondents had to have a current or former partner and be able to read, write, and understand English and/or Swahili. Half the participants answered the survey via phone call (enumerator-administered) and the other half answered the survey online (self-administered).

Results

Intimate partner violence is a very sensitive topic, so we expected the indirect method to yield a higher prevalence rate than the direct method, since people would likely underreport their experiences with IPV. However, the results surprised us: they indicated the opposite. This tentatively suggests that, at least in Nairobi County, it might be preferable to use direct questioning.

In line with our expectations, we also found that people with lower levels of education were more likely to experience intimate partner violence. Respondents with lower levels of education also showed lower compliance with the List Experiment than those with higher levels of education. This means the List Experiment lacks validity evidence for low-income, low-education samples, especially when an enumerator is not present to help people understand the instructions.

Direct and indirect estimates were most similar for those respondents who complied with the method. However, had we restricted our analysis to compliers only, we would have had a biased sample that excluded the vulnerable, low-income, and less educated populations who were unable to comprehend or follow the method, the very people who are the focus of development work.

What does this tell us about measurement in development?

Measurement is not a project afterthought. Measurement raises direct questions about the content of the construct being measured, the response process, and the internal structure of the construct. For example, measurement questions around intimate partner violence might include what counts as “abuse” and according to whom, why people withhold whether they have been abused when asked directly, or why people might feel comfortable talking openly about past abuse. Measurement questions are often explicit, get straight to the point, help you define what you intend to measure, and should be discussed in the design and planning phase of projects. Further, thinking about measurement at these initial stages can help to reduce potential systematic measurement error.

There is no one-size-fits-all measure that serves all purposes for all people in all contexts. There are multiple ways to measure a construct, and each has its own strengths and weaknesses. Researchers should be thoughtful when choosing measures, carefully considering the trade-offs between the properties that make a measure suitable and those that make it inappropriate.

Collect validity evidence for your measure before using it. Validity evidence is evidence that a measure is “good” and suitable in a given context, and it is vital for supporting the interpretation of a measure in that context. If this evidence does not exist, test for it. For example, one may assume that directly asking about intimate partner violence is bad. That assumption, however, may not be accurate in all contexts, as the results of our study show. In other words, without validity evidence, you are taking a risk if you interpret the score as informative about the construct. Therefore, test, test, test and test.

Follow us on social media to get updates every time we upload new content: Twitter, Facebook, Instagram, LinkedIn and YouTube.
