Declining survey response rates are a problem—here's why

After more than 40 years in the field of survey research, John Boyle still loves to dig into the data. Surveys, after all, remain an important tool for understanding public sentiment toward everything from the current presidential election to the economy and public health. “What’s great about my job in collecting data,” John says, “is that I get to answer questions where, if you didn’t have the data, I could make the case for either side.”

But how do you answer questions when the data is harder to get? Survey response rates have been declining for the past 20 years. So John—a senior survey advisor at ICF—and a research team set out to look at why people participate in surveys. One of the resulting papers was recently published on ScienceDirect, and we asked him to share some highlights from the findings:

Q: The paper discusses the decline in U.S. survey response rates over many years. Why is this a concern for researchers?

A: There are three primary reasons that declining response rates are a big deal. First is cost. As response rates decline, you have to work a lot harder—make a lot more calls or send out a lot more questionnaires, and possibly offer incentives—just to get enough respondents. All of that increases the cost of each survey conducted.

Second is credibility. For years, we’ve used response rate as the gold standard of survey quality. We’ve seen response rates decline on major surveys—in some major federal surveys from 70% to 40% or less—over the last 20 years. The 2018 Survey of Medicaid in Ohio had a response rate of just 12%. On its face, that looks bad, and it undermines our ability to judge how good these surveys are and how much we can rely on them.

Third is non-response bias, which is the real problem. The larger the proportion of non-respondents to respondents, the more the opportunity for non-response bias grows. You may have a perfectly drawn sample, but if a majority are non-respondents, you don’t know how that group differs from the respondents. The overall results of the research can get skewed by over-indexing on the types of people who are more likely to respond to surveys.

Q: How is non-response related to potential bias, particularly in health measures?

A: It’s important to look at whether the completed sample differs from the general population, and then to understand how. Those differences can throw off your estimates in either of two directions. If the people in the completed sample are healthier than the general population, your estimates will show a healthier population than is really the case. But if the people who respond are sicker, then the study will produce estimates that say the general population is less healthy than it truly is. That’s a big problem for our understanding of a whole range of health measures.
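
To make the direction of that bias concrete, here is a toy simulation in Python. The numbers are invented for illustration (a hypothetical population with 30% prevalence of a chronic condition), not figures from the paper.

```python
import random

random.seed(0)

# Hypothetical population of 100,000 people; 30% have a chronic condition.
population = [random.random() < 0.30 for _ in range(100_000)]

def survey_estimate(pop, p_respond_sick, p_respond_healthy):
    """Simulate one survey where response propensity depends on health status,
    then return the naive prevalence estimate from the respondents alone."""
    respondents = [
        has_condition
        for has_condition in pop
        if random.random() < (p_respond_sick if has_condition else p_respond_healthy)
    ]
    return sum(respondents) / len(respondents)

print(f"True prevalence:               {sum(population) / len(population):.1%}")
# If sicker people respond twice as often, the estimate runs high...
print(f"Sick respond 2x more often:    {survey_estimate(population, 0.20, 0.10):.1%}")
# ...and if healthier people respond twice as often, it runs low.
print(f"Healthy respond 2x more often: {survey_estimate(population, 0.10, 0.20):.1%}")
```

Note that collecting more respondents under the same protocol does not fix this: the skew comes from who responds, not from how many.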

Q: Who’s more likely to participate in biomarker collection, healthy people or sick people?

A: One of the biggest and most important surveys that uses biomarkers is the National Health and Nutrition Examination Survey. After their household interview, respondents are invited to go to a physical site for biomarker collection. Many assumed that people who agreed to the physical exam and clinical assessment were more likely to have unmet health needs or lack access to health care.

Our survey data suggests that this is both true—and maybe not true. People with unmet health needs or chronic conditions are in fact more likely to agree to do a health survey that includes the collection of biomarkers. But if you really look at the data, their propensity to participate is more about the salience of the health issues to them than about trying to get additional health care. If we call it a general health survey, people with no conditions are less likely to respond than people with any condition. And if we say that we’re interested in a particular issue (e.g., asthma), then people with related conditions—asthma, allergies, or other respiratory issues—will be more likely to respond than someone with heart disease or diabetes.

Q: There’s an inclination for researchers to gather as much information as possible, including biomarkers such as blood pressure and weight. How can the potential non-response bias be balanced against the value of the additional information?

A: The first issue is to assess whether there is potential non-response bias. The way we traditionally do surveys is to use one protocol for everybody—one appeal, one cover letter, one script. That treats everyone the same, but it also assumes that people participate in surveys out of social utility (a desire to perform a service for the community or country). If it turns out that not everyone has the same reason for participating, then we have concerns about bias.

In the past, researchers primarily looked at demographics. They compared the demographics of the completed sample to those of the general population and made any corrections through sample weighting. But certain non-demographic factors can’t be corrected that way. Our survey found significant differences by health status (whether or not the person has certain health conditions, or whether they think health care is important) in the willingness to participate in surveys or biomarker collection.
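
Here is a minimal sketch of that weighting logic and its limit. All of the numbers below (age groups, shares, prevalences) are hypothetical, not values from the study.

```python
# Population is 40% under 50 and 60% aged 50+, but the completed sample skews older.
population_share = {"under_50": 0.40, "50_plus": 0.60}
sample_share     = {"under_50": 0.25, "50_plus": 0.75}

# Hypothetical prevalence of a condition among respondents in each age group.
respondent_prevalence = {"under_50": 0.10, "50_plus": 0.40}

# Post-stratification weight for each group = population share / sample share.
weights = {g: population_share[g] / sample_share[g] for g in population_share}

unweighted = sum(sample_share[g] * respondent_prevalence[g] for g in sample_share)
weighted   = sum(sample_share[g] * weights[g] * respondent_prevalence[g]
                 for g in sample_share)

print(f"Unweighted estimate: {unweighted:.1%}")  # 32.5% -- skewed by the older sample
print(f"Weighted estimate:   {weighted:.1%}")    # 28.0% -- age mix corrected

# The limit: if, within each age group, sicker people were more likely to
# respond, respondent_prevalence itself is inflated -- and no demographic
# weight can correct that.
```

The weighting repairs the demographic mix, but it silently assumes that respondents within each cell look like the non-respondents in that cell, which is exactly what differences in health-related willingness to participate break.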

Q: Now that you’ve determined non-response bias is likely, what can be done about it?

A: Playing on social utility as motivation may be the cheapest way to complete a survey, but it’s not the only answer. You have to recognize that social utility doesn’t resonate with some groups—particularly lower-income groups. I’ve heard respondents say on telephone surveys: “Why should I participate? You’re being paid to conduct it but I’m not being paid for my time.”

We have to address differences in the characteristics of those who do and don’t respond to surveys by using a tailored approach. If you get refusals and non-respondents, then you should tailor the appeal in a different way. Maybe give them extra interview attempts. Maybe give them a financial incentive for their participation and change the survey introduction to mention those incentives. Maybe tell them how this survey impacts their community, or how health issues also translate to economic issues that may affect them.
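
As a rough sketch of how that tailoring might be encoded operationally: the segments, appeals, and incentive amounts below are invented for illustration, not the protocol from the study.

```python
# One-size-fits-all default: the social-utility appeal.
DEFAULT_PROTOCOL = {"appeal": "social_utility", "incentive_usd": 0, "max_attempts": 3}

# Hypothetical tailored protocols for groups the default appeal misses.
TAILORED_PROTOCOLS = {
    "refused_once":  {"appeal": "community_impact",   "incentive_usd": 10, "max_attempts": 5},
    "low_income":    {"appeal": "paid_for_your_time", "incentive_usd": 20, "max_attempts": 3},
    "distrusts_gov": {"appeal": "university_sponsor", "incentive_usd": 0,  "max_attempts": 3},
}

def protocol_for(segment: str) -> dict:
    """Pick the outreach protocol for a sample member's segment,
    falling back to the one-size-fits-all default."""
    return TAILORED_PROTOCOLS.get(segment, DEFAULT_PROTOCOL)

print(protocol_for("refused_once"))
# -> {'appeal': 'community_impact', 'incentive_usd': 10, 'max_attempts': 5}
```

The point is not the data structure but the shift it represents: the appeal becomes a variable you adjust per segment rather than a constant applied to everyone.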

If we fail to recognize that segments of the population vary in their willingness to participate, then those differences are going to affect the outcomes of our survey. Changing the appeal and protocol for different groups may introduce bias in other ways, but it’s the best approach to get a final sample that is closer to the general population. We need to think about which levers to use and how to use them to get a relatively unbiased sample. 

Q: This research suggests that there is an underlying propensity to participate in biomarker collection. Do you think that propensity likely extends to screening for the novel coronavirus?

A: One of the population segments with a low likelihood of response has high distrust of government. We tested this by asking about the likelihood of response if the survey were sponsored by the government versus other potential sponsors. The response to the government as sponsor was much lower than if the survey was sponsored by a university or a non-profit organization. So if you’re conducting a health survey and you have a choice between mentioning the university that has the grant and the federal agency that gave the grant, mentioning only the university may produce a higher response rate—and potentially less bias.

In the case of COVID-19, you not only have issues with distrust of the federal government, but you have people who think it's a hoax. You have people who think the science behind it is biased and misleading. I have real concerns that if we try to go out and do a cross-sectional survey on the prevalence of the virus, we’ll have an underrepresentation of people who don’t believe it exists. 

At the end of the day, the biggest underlying issue we’re trying to address in our project on the dimensions of participation is understanding what drives response rate. Unfortunately, the theories of survey participation are lacking in empirical research to back them up. Our paper offers the first detailed look at why people say that they participate in surveys. What we saw is that the population is not homogenous in terms of the willingness to participate.

If we want to improve response rates, reduce non-response bias, and correct any imbalance in the sample, we have to take a different approach. We have to throw out the one-size-fits-all approach that is based on social utility, because we know that only appeals to one segment of the population. Now that we’ve recognized what the problem is, we can get to work on the solution.

Read the full paper on ScienceDirect to learn more.

Meet the author
John Boyle, Senior Advisor, Survey Research

John is a research expert with more than 30 years of experience in the design, execution, analysis, and reporting of large-scale health surveys.
