An emerging area of financial inclusion research is understanding the balance between opportunity and risk when using algorithms to determine creditworthiness. This topic is increasingly important in Rwanda given the recent rise in digital financial services, driven largely by a series of policy changes in response to the COVID-19 pandemic. For example, there was a 450% increase in person-to-person mobile money transfers between January and April 2020, and the number of unique subscribers sending a person-to-person transfer doubled within a week of lockdown.

In this context, the Center for Financial Inclusion published The Stories Algorithms Tell: Bias and Financial Inclusion at the Data Margins. The report shows that while algorithms have the potential to reduce bias, lower costs, and increase the speed of loan decisions, they are also more likely to miscategorize the creditworthiness of vulnerable groups. The report draws on key informant interviews with industry actors across 12 countries, but it is missing the perspective of consumers.

Studying financial inclusion with early adopters in Rwanda

To fill this gap, in early 2021 Laterite partnered with the Center for Financial Inclusion on a small study to explore consumer perceptions, behavior, and knowledge about the data ecosystem driving digital financial services in Rwanda. As part of this research, Laterite conducted 30 qualitative semi-structured interviews with users of digital lending products. Even though the participants were early adopters of digital financial services, several concepts covered in the interviews – such as the use of algorithms and personal digital data to inform lending decisions – were new to many of them.

The results, published by the Center for Financial Inclusion, show that consumers generally trust digital lending decisions over those made by loan officers. However, consumers raised concerns and questions when given examples of the kinds of data used by the decision algorithms. The study also highlights the need to learn more about how best to collect data and conduct research on abstract topics of data ownership and data rights. In this blog post we discuss the operational lessons learned from collecting data for this project, for others who wish to conduct similar research.

Tip #1: Build in time to create awareness of new concepts

Interviewers found it essential to build time into the survey to introduce new concepts before moving on to the survey questions. This included using simplified language where possible, providing explanations and examples, and allowing time for participants to ask questions. One topic that required extra attention was the use of alternative data sources in assessing creditworthiness. In general, respondents were not aware that providers make credit assessments using sources such as utility payments, frequency of airtime top-ups, phone battery usage, number of text messages sent and received daily, and the size of an individual’s contact list.

Tip #2: Pilot the instrument

Given the complex terms and concepts introduced in the study, piloting the instrument was crucial for assessing participants’ understanding, and identifying areas of ambiguity. For example, an initial version of the survey asked participants, “What data do you think is appropriate to use in deciding whether or not you would be a good borrower and pay on time?” followed by “Which of the previously mentioned sources is most/least effective to use…?”

With this wording, most respondents interpreted appropriateness as effectiveness. After recognizing this during the pilot, we revised the question to replace “appropriate” with “fair” and ask: “Do you think it is fair to use the following data…?” This wording prompted better responses about whether the use of data was just or ethical, as opposed to merely useful.

Tip #3: Consider a focus group format

This qualitative study consisted of a series of semi-structured interviews conducted over the phone. This methodology was the most logistically feasible given the limited ability to gather in groups or collect data face-to-face due to the COVID-19 restrictions in place at the time. The format also allowed interviewers to personalize the conversation and adapt the level of explanation to the needs of each individual. However, given the low familiarity with many of the abstract topics in this study, it also meant that interviewers spent a lot of time explaining the same topics over and over again. We suggest that focus group discussions might be a better format to cover these kinds of topics, if restrictions allow. In a group setting, interviewers can build collective awareness of a topic, while participants can respond to and build on each other’s answers.

Tip #4: Make the most of the phone interview

Phone surveys are not the preferred format for this type of interview but, if they are indeed necessary (for example, during a lockdown), we suggest several techniques to make them more feasible. First, data collectors can use computer-assisted telephone interviewing software (Laterite uses the package built into SurveyCTO). This allows both sides of the conversation to be recorded and makes it easier to complete interview notes afterwards. Second, for long surveys, respondents may become fatigued more quickly over the phone than in person. Interviewers may offer to split the survey into two rounds to avoid tiring respondents and to better accommodate their schedules.

As research on the emerging topic of data rights and digital lending grows, especially in the era of COVID-19 when remote data collection is increasingly the norm, we hope these findings are useful for others conducting similar work.
This blog post was contributed by Maggie Vinyard, Research Manager; Lydie Shima, Program Associate; and Laura Langbeen, Research Analyst.