An Interview With Murray Edelman on the History of the Exit Poll

An interview with Murray Edelman by Liberty Vittert and Xiao-Li Meng
Published on Feb 24, 2021

Column Editors’ Note: In this column, survey methodologist Murray Edelman, one of the creators of the “Exit Poll,” speaks with Harvard Data Science Review’s Xiao-Li Meng and Liberty Vittert. The invention of political polling provided one of the most visible and high-stakes roles for the use of sampling in the history of statistics. Edelman discusses some of political junkies’ favorite topics from a data science perspective, from the role of “shy voters” and network “calling” of races to the difference between “polling” and “projections” and the ever-present possibility of controversy.  

Listen to the interview and read the transcript below.


Liberty Vittert (LV): Hello and welcome to the Harvard Data Science Review’s Conversations with Leaders. I’m Liberty Vittert, Media Feature Editor for Harvard Data Science Review, and I’m joined by my co-host Xiao-Li Meng, our Editor-in-Chief.

Today we are speaking with Murray Edelman, an American political scientist who is regularly called the “father of the exit poll” and who was in charge of exit polls and projections used by ABC, CBS, CNN, FOX, NBC, and the Associated Press until 2003. We are sitting down with him to talk about the founding of the exit poll and what we can learn from the current election controversy. Mr. Edelman, thanks for being here.

Xiao-Li Meng (XLM): Well, thank you so much, Dr. Murray Edelman, for joining us. I was very excited when I learned that you were the one who actually created this whole concept of exit polls. I think it is something many of us have heard about, and we know how important it is. People, particularly during election times, try to learn from it. But I think very few people actually understand its history and how it was created. So my first question is: how was it created? What did you initially intend for the exit poll to accomplish? To collect what otherwise would not be available?

Murray Edelman (ME): Well, first, let me start: I worked very closely with Warren Mitofsky, and together we developed it. He was the one in charge of the unit; I was just a grad student at the time. To answer your question, let me take you back to 1967. Warren was working at the Census Bureau, and that's where I met him. When I got my bachelor's, my first job was at the Census Bureau. He was offered a job at CBS to do this projecting. I was at grad school at the University of North Carolina—I was doing it more to help myself through graduate school—and we brought the idea of probability modeling. The whole basic idea in probability sampling is that every unit in the population has a chance to be selected, and then you weight them based on that probability. And, lo and behold, you have a good estimate. When I teach beginning statistics, I always tell people it's sort of like a compact. It's like the Ten Commandments: you follow these rules of probability sampling and you are given an estimate that's within this range, and, you know, 99 out of 100 times, you're going to be within this range. In the real world, though, it doesn't always fit that well. But that's the idea.
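To make the "compact" Edelman describes concrete, here is a minimal Python sketch of probability sampling with inverse-probability weighting; the population values and sample size are invented purely for illustration.

```python
import random

# Population values invented purely for illustration.
population = [random.gauss(50, 10) for _ in range(100_000)]

# Simple random sampling: every unit has the same known inclusion
# probability p, and each sampled unit is weighted by 1/p.
n = 1000
p = n / len(population)
sample = random.sample(population, n)

# Horvitz-Thompson-style estimate: each sampled unit stands in
# for 1/p population units.
est_total = sum(y / p for y in sample)
est_mean = est_total / len(population)

print(f"true mean:      {sum(population) / len(population):.2f}")
print(f"estimated mean: {est_mean:.2f}")
```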

So, in the case of what we were dealing with at the time, we had a list of all the precincts in a state, so we could actually take a sample, a probability sample, of those precincts, weighting them proportionate to their size. And we could make a nice, really solid estimate of how a state is going to go. So, if a state has 5,000 precincts and we take 100 precincts, we're going to have a really good estimate, and we're going to have a really good model of that estimate as well. If we're off by one in a thousand or five in a thousand, that's a really good fit, and you could take it to the bank. And that was the model that we were using. Now, the problem is, you have states like Kentucky that have multiple time zones. So, at the time, the networks were calling races when the majority of the precincts closed. At that time, we treated Kentucky as closing at six o'clock, even though about a third or a quarter of the state closed at seven o'clock. So the problem was, what do we do about that part of the state? Because it votes differently. And if you really trust the sampling, you've got to do something there. That's when Warren had the idea of doing an exit poll. That was the first exit poll, which we did in 1967 when we were doing Kentucky: we had an exit poll of people leaving the polling place. The whole point of it was to get an estimate of that part of the state. It worked fairly well.
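As an illustration of the precinct design Edelman describes, here is a minimal Python sketch of sampling precincts with probability proportional to size (PPS); the precinct counts and vote shares are invented, and sampling is done with replacement for simplicity.

```python
import random

random.seed(1)

# Hypothetical precincts: each has a size (votes cast) and a candidate share.
precincts = [
    {"size": random.randint(200, 2000), "dem_share": random.uniform(0.3, 0.7)}
    for _ in range(5000)
]

total_size = sum(p["size"] for p in precincts)

# Sample 100 precincts with probability proportional to size, with
# replacement for simplicity. Because big precincts are selected more
# often, a plain average of sampled shares estimates the size-weighted
# statewide share.
sample = random.choices(precincts, weights=[p["size"] for p in precincts], k=100)
est_share = sum(p["dem_share"] for p in sample) / len(sample)

true_share = sum(p["size"] * p["dem_share"] for p in precincts) / total_size
print(f"true statewide share: {true_share:.3f}   PPS estimate: {est_share:.3f}")
```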

And then in 1968, we did that in selected states that had multiple poll closings. We would have samples of exit polls. I remember, shortly after the election in '68, I think in '69, I had this really bright idea, and I talked to Warren about it. I said, 'This exit poll could be something more.' You know, when you do a pre-election poll, you're trying to guess who the voters are. That's the whole problem with a pre-election poll. But that's always been a problem: you have to guess who's going to vote. And we have our models; we call them likely voter models. It's sort of referred to as the secret sauce of polling, because a lot of people have their own models. And I said, 'You're still guessing, but here, we actually have voters. We could find out about the voters, and it would be good.' And he said, 'But there are all these problems, and our administration is doing this and doing that. Do you really trust it?' And I said, 'Well, it's going to be as good as the other. Plus, we weight it to the final outcome. So, you're covering all your problems, because you could weight it to the actual outcome as well.' And so, we did that in, I think, '70 and '72, and we had a national exit poll, and that's kind of how it started. It initially started as a projection thing. But then it became a way to do demographic analysis, and then we started adding opinion questions to it after that. And so, through the 70s, we were doing a national exit poll and two or three states. That's how it started. Then in 1980, NBC decided to use exit polls for calling races. ABC and CBS had to join in '82, and we started doing exit polls that way for projecting. And from then on, exit polls were used for both projecting and for analysis. That's your origin story.

XLM: That's a great story to hear directly from someone like you, starting from day one. That's great. Let me follow up on a question, which you already alluded to. You said probability sampling is great, right? Because we know all the theory, we do all those things—but in the real world, it may or may not work out that well. One of the biggest problems, as you know well, is that you ask people, but people can refuse to give you an answer, even assuming they're not lying. There are all kinds of potential bias: nonresponse bias, so-called shy voters, or whatever it is. I want to ask you about that; I don't think these issues are new. They started probably from day one. How do you deal with these issues, and what techniques have you been using to deal with all these kinds of possible bias in exit polls?

ME: We've done a lot of different research, like with interviewers and just how the interviewer approaches the person and things like that. One of the things that is very noticeable is that there are different nonresponse patterns by age. And so, what we've done is we have it set up now where the interviewer guesses the age, race, and gender of the person who doesn't respond. Then we do a nonresponse adjustment by age, race, and gender. Now, you might argue that people can't guess ages that well. And that's true. However, we're really gauging categories. We're doing 18 to 29, 30 to 59, and 60-plus. We've done different periods of research where we've had people guess everybody, and then we compare it to the questionnaires. And it's pretty close, because you're really guessing categories. You're going to be off on a few people, sure. But overall, that's pretty good.
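A minimal sketch of the kind of weighting-class nonresponse adjustment Edelman describes, using only age categories (the real adjustment also uses race and gender, based on interviewer observations of nonrespondents); all counts below are invented.

```python
# Invented tallies: how many voters the interviewer approached in each
# observed age category, and how many completed a questionnaire.
approached = {"18-29": 120, "30-59": 200, "60+": 180}
responded  = {"18-29": 90,  "30-59": 150, "60+": 100}

# Weighting-class adjustment: each respondent's weight is scaled up by
# approached/responded for their class, restoring that class's share of
# everyone approached.
adjustment = {g: approached[g] / responded[g] for g in approached}

for g in approached:
    rate = responded[g] / approached[g]
    print(f"{g}: response rate {rate:.0%}, weight {adjustment[g]:.2f}")
```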

XLM: So, age affects people's nonresponse behavior. Is it that when you're getting older, you tend not to respond?

ME: Right, and that's one reason we often have older interviewers, because they're more comfortable that way.

LV: I’m not telling anyone my age if they ask me. I'm right there. I will not tell a soul.

ME: But you might fill it out on a questionnaire. See, one of the nice things about exit polling is it's impersonal. We don't have to know anything. For a while, later on, I started really pushing hard for asking about gay and lesbian identity on the exit poll. And it was a perfect place to do it, because you don't know the person's phone number, you don't know their address, you don't know anything about them. It's just something that they're answering. And it's totally confidential—they fold it and put it in the ballot box. So you're getting a good estimate there.

XLM: It's a great point.

LV: So that makes it a better estimate than calling on the phone because if all of a sudden someone calls me on the phone, I'm nervous. I don't want them to know who I am. Of course. That makes a lot more sense.

ME: And the same is true of who you vote for. You know, you can get that. So, I think you get a lot more honesty. There are a lot of pluses there. But there is the problem of who responds, which is always the case. That's always a problem with surveys, and that's a problem we have. So that's one way we deal with it. The other way we deal with it is we weight our data. Now, initially, when you see the early exit polls, like around eight o'clock, right at poll closing, it's just our best estimate. Our estimate keeps improving as the night goes on, and by the end of the night we actually have the results. We weight the data based on geographic area and by party, so we're weighting it to the outcome. That's another way we improve it. And then we compare it to other surveys, and in fact it compares pretty well. One of the things we saw over the years is that we tended to overestimate the higher-education group, particularly postgraduates. What we've developed is a way of using the Current Population Survey, which is a very large survey with a very high response rate. We tend to adjust by education based on our past relationship with it. So those are the different ways that we ensure that it's a representative sample of the electorate.
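For the education adjustment Edelman mentions, here is a minimal post-stratification sketch; the real procedure is richer (it also weights by geography and party and benchmarks against the Current Population Survey), and every number below is invented for illustration.

```python
# Invented shares: education distribution among exit poll respondents
# versus a benchmark for the full electorate (e.g., CPS-derived).
poll_share = {"hs_or_less": 0.25, "some_college": 0.30,
              "college": 0.28, "postgrad": 0.17}
benchmark  = {"hs_or_less": 0.32, "some_college": 0.31,
              "college": 0.25, "postgrad": 0.12}

# Post-stratification: each group's weight is benchmark share / poll share.
# Overrepresented groups (here, postgrads) are weighted below 1.
weights = {g: benchmark[g] / poll_share[g] for g in poll_share}

for g, w in weights.items():
    print(f"{g}: weight {w:.2f}")
```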

LV: I sort of feel like now I understand a little bit more about how the exit poll was really started and created with a purpose, but I wanted to get into some of the actual elections, something like the 2000 election controversy.

ME: Just to pick a random one.

LV: Yeah, just to pick a random one, you know, not an important one at all. Um, it's sort of a two-part question. How did the 2000 election controversy change the way you approach your work? And also, does the existence of what is certainly, in recent memory, the most controversial election change the way people understand the role and meaning of data in presidential elections, especially as we're now seeing it again in 2020?

ME: Wow, that's a big one. Well, 2000 changed things in a lot of ways. I think over the years you've seen people paying more and more and more attention to polls, and this year has been amazing. I talk about polls with people that I never would have talked about polls with a few years ago. I mean, I was amazed how many people have been fascinated by my job and everything else. It's like so many things have shifted. You know, when I was a grad student working for the networks, I was an activist, very much a gay activist, a gay political activist. And I saw the media as partly—maybe not so much the enemy, but not the friends, you know? And I figured, well, at least it's value-neutral. Now the media is seen as the left. It's like the whole thing has shifted, and science is taking on a whole new meaning. Polling has taken on a whole new meaning. Our election work is very different. So that's been a really clear kind of change.

2000 was not really a challenge to polling; it was a challenge to projections. One of the things that happened is the exit poll got trashed, because people saw us as projecting elections. But the exit poll was fine. The estimate in Florida from the exit poll was perfectly fine. It was a very different kind of problem. We corrected it, but it was a different kind of problem. That's how it's changed. The other way things changed is just the competitiveness around calling races. In the 60s and the 70s, and even the 80s, the networks were competing, and that was quite a big deal. Then in 1990, they formed Voter Research and Surveys, and in '93 it became Voter News Service, which is when I took charge. During that period, I was part of the pool. We made the calls and that was that, and they were fine with it. Then in '94, ABC started calling things ahead of everybody, and so from '96 on, the networks had to compete again. And in the 90s they were competing. That was partly what fueled that mess in 2000. After that, they were still competing, but they're very hesitant to compete now. In other words, they each have their own decision desk. They each do their own everything, and people proceed much more carefully than they did. It was a change that way, more than anything.

XLM: Speaking of change, another big one we want to ask you about: over the years, there has been an increase in absentee voters and early voters, as well as mail voters, particularly this year, and this seems like such a challenge to the concept of exit polls. As you said, the best part of the exit poll is that people are there and you can talk to them without their being worried about being identified or anything. But now, in this environment, with that kind of increase in voters not being there in person, how do you do something like an exit poll that keeps that kind of advantage and gets that information?

ME: Yeah. Yeah. Well, we've been dealing with that over the years. Our model of sampling precincts was great when we started. But what happened is, states like California started doing more and more absentee voting, and, depending on the state, there's always been some absentee voting. Originally you had to prove that you were out of the county in order to get a ballot. Then California started making it a lot easier: you don't have to give a reason. We started doing telephone surveys in California, so we would do both. We do an exit poll and a telephone survey. We would have a pre-election estimate of what percent is going to be absentee and what percent is going to be Election Day. We would combine them, and then, as all the votes came in, we would divide it up a little differently so it reflected the actual result. We were doing that in some states in the 80s and 90s, and it just became more states. Then some of the states started early voting. Then they had alternative polling places that you could go to a couple of weeks before. We developed ways of sampling those polling places. It's the same thing: you select a polling place, you assign an interviewer to the polling place, you give them a sampling interval. They just do it over days rather than that one day.
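A minimal sketch, under invented numbers, of the blending Edelman describes: combine an Election Day exit poll with an absentee telephone survey using a pre-election estimate of the absentee share, then re-blend once the actual split is known.

```python
# Invented figures: candidate share among Election Day voters (exit poll)
# and among absentee voters (telephone survey).
exit_poll_dem = 0.46
phone_dem = 0.58

# Before returns arrive, blend using the pre-election estimate of how much
# of the vote will be absentee.
pre_absentee_share = 0.30
early_estimate = ((1 - pre_absentee_share) * exit_poll_dem
                  + pre_absentee_share * phone_dem)

# Once returns show the actual absentee share, re-blend so the combined
# estimate reflects the real mix of voting modes.
actual_absentee_share = 0.42
updated_estimate = ((1 - actual_absentee_share) * exit_poll_dem
                    + actual_absentee_share * phone_dem)

print(f"poll-closing estimate: {early_estimate:.3f}")
print(f"updated estimate:      {updated_estimate:.3f}")
```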

XLM: I see.

ME: So, you give them so many hours a day, and you vary that. We developed all these methods. Now, during this period, when it first started, the absentees were generally more Republican than the Election Day voting. And that was pretty much the pattern. As it started getting a larger share of the vote, it became more even. That's really the way it was in the 2000s, the last 10 or 15 years. Then we had Trump come along, who started really putting down mail balloting, and it totally added a whole other element to our work in 2020, because we've never had such huge differences between mail voting and Election Day voting. It's always been a few points, maybe, and sometimes not at all. So, when you looked at vote returns, you weren't that concerned whether it was Election Day or mail, unless it was a very close race, because there wasn't that big of a difference. Now, the mail voting is 2-1 Democrat. The Election Day is 2-1 Republican. When we're looking at vote returns, we don't know what we've got half the time. Some of the time we do. Sometimes we don't. That made it really hard to project the races. As far as our estimates, we were able to keep it together, because we had our methods for it. But it was definitely a challenge. It was a very big challenge.

LV: How did the actual election night projections work? How do you call an election just an hour after the polls close? What exact data is it that you're using?

ME: Well, so, in a given state, right at poll closing, we have the exit poll for the state. We have pre-election polls to give us an idea what to expect. We'll look at whether the exit poll confirms our pre-election polls; if the margin is wide, we might very well call it at poll closing. If the exit poll goes in a different direction, we definitely won't. In races that are closer, we just wait. So then, after the polls close, we have sample precincts come in. In a given state—like Pennsylvania, for example—we might have one hundred precincts, and we might have 40 exit poll precincts, because the exit poll precincts cost a lot more. With the sample precincts, we just need a reporter to phone in what the final votes are for the precinct. So first, we have the exit poll, then we get the final votes, then we project. We keep our projection, and we keep looking at it. We have an error term, and we decide if it looks like a good call or not, if it looks really safe to call. If it isn't, we just wait for more. And then we start getting the vote tabulation. That comes in, and we modify that, and we have different ways of playing with that. And we look at all that together. And as it keeps coming in, we decide if it's safe enough to call, if it's a clear winner or not. That's how we do it.
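To illustrate the "safe to call" logic Edelman outlines, here is a minimal Python sketch that estimates a margin from sample precincts and compares it with its standard error; the precinct margins and the critical value are invented, and real decision desks use far richer models.

```python
import statistics

# Invented data: the Dem-minus-Rep margin in each reported sample precinct.
precinct_margins = [0.06, 0.11, 0.02, 0.09, 0.04, 0.08, 0.05, 0.10, 0.07, 0.03]

mean_margin = statistics.mean(precinct_margins)
std_error = statistics.stdev(precinct_margins) / len(precinct_margins) ** 0.5

# Hypothetical decision rule: only call the race if the margin is several
# standard errors away from zero; otherwise wait for more data.
CRITICAL_VALUE = 3.0

if abs(mean_margin) > CRITICAL_VALUE * std_error:
    winner = "Dem" if mean_margin > 0 else "Rep"
    print(f"Call the race for {winner} (margin {mean_margin:.3f} ± {std_error:.3f})")
else:
    print("Too close to call; wait for more precincts and tabulated votes")
```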

LV: As I've listened to your other interviews and things you've done, there's this issue of calling too early versus accuracy, and there are always people who want you to call earlier or who don't want you to. With that tension between calling too early and accuracy, if we think about current times as basically the history of the future, what do you think we can learn from now for the future? What about the current controversy could we learn from in order to do better in the future?

ME: When you're talking about the current controversy, you mean the current controversies around the pre-election polling?

LV: Yes.

ME: I don't think there's any controversy around the election projections. And I think in terms of projecting elections, we've got it down pretty well. I thought we were very careful, and we didn't call anything that got really close. At least, we didn't. I think Fox had a bit of a surprise with Arizona. They called that kind of early, and they were probably praying that in. But I always look at it this way: if I'm losing sleep, or if I'm really worried about something, then I shouldn't call it. That's my rule.

LV: Has that ever happened in the past?

ME: Oh, yeah.

LV: You have past experiences that you've learned from?

ME: And I think I've had five mistakes out of about 3500 calls that I’ve been associated with.

LV: That's not a bad percentage to go with.

ME: Yeah. I don't think it's bad. The problem is, one of them was in Florida in 2000 and that's what everybody noticed. But yeah. Five out of 3500, that's a good record you know.

LV: Yeah. I think you have a pretty solid record there.

ME: Yeah. And believe me, I learn from every mistake because you really do learn a lot from mistakes. I guarantee you that.

LV: To finish, and I think you touched on this earlier: with the rise of Nate Silver, FiveThirtyEight, the Upshot, and all that, people are paying a lot more attention to the polls. But the problem is that people are sort of seeing the polls as predictive. Do you feel that we're going to be able to change that? Or do you think people are going to lose trust in the polls because they see them as predictive? What do you see the future being, in that sense?

ME: Well, I mean, this has been going on ever since I've been involved, which, as we know, is a very long time. And that is why we put out our polls and we say pay attention to the demographics and the analysis and all that. But the headline is always who's winning. It's always the horse race. And that's really where the consumers are: they want to know who's winning. I don't think that's going to change. I think that we'll find out what went wrong. I'm sure we will. I'm sure we'll have some really good analysis of it, because people are doing post-election polls. There are going to be people matching their interviews with the actual vote database. So, we'll be able to see just who we were missing. And I think we'll develop some corrections and some improvements, and it'll look better going forward, and then there'll be some other mistakes. That's just the way it goes. The job we have is unlike pretty much anything else. When we call races, when we do our polling, we get hit in the head with the final result that night—or in this case, a day or two later. But you really get hit really hard. Where else in science, and certainly in polling or statistics, do you get hit with the truth? Where do you get hit with the truth so quickly? Where do you get hit by the truth, period? Where do you get hit so hard and so fast? That's the big challenge. You know, it's a big challenge.

LV: We were talking earlier about how you verify an exit poll. It's like, well, you find out pretty darn quick.

ME: Yeah, you find out, and you compare with other sources, and you can tell. You find out really quick. And you learn to be cautious, you know.

XLM: Well, thank you. I think that really means that we need more exit polls to find out. You're right, election polling is one of the few fields—maybe the only one, because obviously I have the exact same question of where else you can find out the truth so quickly—where the job is extremely exciting. But it's also very challenging, because you can be proven wrong very quickly as well.

ME: Right. Right.

XLM: And so, we want to really thank you again for talking to us and, most importantly, for doing an amazing job for all these years to ensure these polls get better and better. And I'm sure the best part of this job is that you will always have an exciting job. Thank you very much.

ME: Thank you. Thank you so much.


This interview is © 2021 by the author(s). The editorial is licensed under a Creative Commons Attribution (CC BY 4.0) International license (https://creativecommons.org/licenses/by/4.0/legalcode), except where otherwise indicated with respect to particular material included in the article. The article should be attributed to the authors identified above.
