

# Not Really a New Paradigm for Polling

Published on Oct 13, 2023

I was asked to comment on Michael Bailey’s (2023) “A New Paradigm for Polling” because *HDSR* had done an interview with me about the history of the exit poll and they wanted a comment from “someone in the polling profession.” I agree that this piece really needs feedback from people in the profession since the author has not referenced the past research in the field. There is a valuable research agenda in this piece, but we first need to get through the packaging.

Bailey is certainly correct that probability-based polling has been facing stiff challenges from falling response rates. He gives a good description of the problems and then suggests that polling is ripe for a paradigm shift led by his work with the Meng equation, because the equation characterizes error for any sampling approach and because the correlation it contains shows the impact when an individual’s decision to respond is correlated with how they respond.
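For readers unfamiliar with it, the identity in question can be sketched as follows. This is the decomposition in the notation of Meng's original paper, not necessarily as Bailey writes it:

```latex
% Error of an observed sample mean \bar{Y}_n relative to the
% population mean \bar{Y}_N (N = population size, n = respondents):
\bar{Y}_n - \bar{Y}_N
  \;=\; \underbrace{\rho_{R,Y}}_{\text{data defect}}
  \;\times\; \underbrace{\sqrt{\frac{N-n}{n}}}_{\text{data quantity}}
  \;\times\; \underbrace{\sigma_Y}_{\text{problem difficulty}}
```

Here $\rho_{R,Y}$ is the correlation between the decision to respond ($R$) and the answer given ($Y$). When that correlation is zero the error vanishes, which is why the equation puts the response-answer relationship at center stage.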

It is true that the survey profession is going through a paradigm shift away from traditional probability-based sampling. The shift has led to elaborate forms of weighting and to statistically sophisticated nonprobability methods, which the major polling firms continually refine.

However, the insight from the Meng equation of characterizing error for any sampling approach is not new. The concept of ‘total survey error,’ which encompasses all forms of survey error, has been around for a long time. In fact, a search on Google Scholar returns 6,800 citations using that term, including books with the phrase in their titles.

There is also a parallel concept in the survey literature to the correlation in the Meng equation, called ‘differential nonresponse,’ which likewise measures the impact when the response is related to the measure being studied. A search returns over 2,800 citations using this term. Survey researchers have been looking at this problem for a long time. In fact, Dan Merkle and I wrote a chapter analyzing differential nonresponse in the exit poll for *Survey Nonresponse*, a well-received collection of research papers on nonsampling errors edited by Groves and Dillman, which came out in 2002, over 20 years ago.
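To make the concept concrete, here is a minimal simulation of differential nonresponse. The numbers are invented for illustration and come from neither article: when the decision to respond is correlated with the answer, the respondent mean is biased no matter how many people respond.

```python
import random
import statistics

random.seed(42)

# Hypothetical population: support for a proposition (y = 1) runs at 50%.
N = 100_000
population = [1 if random.random() < 0.5 else 0 for _ in range(N)]

# Differential nonresponse: supporters answer the poll at a 5% rate,
# opponents at a 10% rate. Who responds is related to what they answer.
respondents = [y for y in population
               if random.random() < (0.05 if y == 1 else 0.10)]

true_mean = statistics.mean(population)       # close to 0.50
observed_mean = statistics.mean(respondents)  # biased down, near 1/3
```

With these made-up rates the observed support lands near one third even though true support is one half, and collecting more responses at the same differential rates would not repair the gap; only changing who responds would.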

With the packaging of a ‘paradigm shift’ removed, I can applaud the cleverness of Bailey’s investigation into recent examples of the impact of differential nonresponse. I also agree with Bailey’s criticism that the industry could do a better job of examining the error that arises when an individual’s decision to respond is related to how they respond. Because this relationship is hard to measure, it is often ignored and rarely mentioned in methodology statements.

However, I have trouble seeing what additional benefit the Meng equation brings to this problem that is not already covered by the concepts of ‘total survey error’ and ‘differential nonresponse.’ It might have some direct value in computing an error for naïve surveys, such as Bailey’s reference to “a Twitter poll initiated by an unpredictable billionaire,” but I do not see how it can be used for the sophisticated modeling in the nonprobability methods of the major survey firms. For example, YouGov uses a sample-matching procedure in respondent selection, where I think it would be quite difficult to pull out the effect of the Meng correlation.

In the fourth section, “The New Paradigm in Practice,” Bailey (2023) convincingly shows the problems that arise when there is non-ignorable nonresponse in the ANES and Ipsos polls. I found this analysis quite interesting and hope that he continues this line of research. However, it is noteworthy that in this discussion he often refers to the Meng correlation when discussing nonresponse but never tries to estimate it. And given that the ‘non-ignorable nonresponse,’ which he does test for significance, is the same as the existing survey term ‘differential nonresponse,’ his applied work fits well within the existing framework of survey research and is certainly not a paradigm shift.

Overall, I applaud the direction Bailey is taking by tackling the problem of differential nonresponse, which survey researchers ignore at their peril. And I welcome his future efforts to keep pointing out the problem and improving our work. However, his use of the Meng equation appears to be just a way of emphasizing that differential nonresponse can be a real problem in surveys, without offering a new way of estimating the effect of the problem. The message is certainly important, but I am not convinced that referring to the Meng equation improves the message. I am confident in predicting that his insights will be better received by survey researchers if they are not cloaked as a ‘paradigm shift.’

# Disclosure Statement

Murray Edelman has no financial or non-financial disclosures to share for this article.

# Reference List

Bailey, M. A. (2023). A new paradigm for polling. *Harvard Data Science Review, 5*(3). https://doi.org/10.1162/99608f92.9898eede

Merkle, D. M., & Edelman, M. (2002). Nonresponse in exit polls: A comprehensive analysis. In R. M. Groves, D. A. Dillman, J. L. Eltinge, & R. J. A. Little (Eds.), *Survey nonresponse* (pp. 243–258). John Wiley & Sons.

©2023 Murray Edelman. This article is licensed under a Creative Commons Attribution (CC BY 4.0) International license, except where otherwise indicated with respect to particular material included in the article.
