
Post-Election Interview With Allan Lichtman

An interview with Allan Lichtman by Liberty Vittert and Xiao-Li Meng
Published on Nov 19, 2020

Listen to the interview or read the transcript below


Liberty Vittert (LV): Hello, and welcome to the Harvard Data Science Review special theme on the 2020 U.S. election. I'm Liberty Vittert, media feature editor for the Harvard Data Science Review, and I'm joined by my co-host, Xiao-Li Meng, our Editor-in-Chief. Today we are speaking with Professor Allan Lichtman, a historian at American University in Washington, D.C., the author of The Case for Impeachment, a regular commentator on major networks and cable channels and—little-known fact—the winner of $110,000 in 1981 on the quiz show Tic-Tac-Dough with Wink Martindale. And no, I am not making up that name. Professor Lichtman predicted a Biden win in the Harvard Data Science Review in October 2020, and he is here today to discuss that correct prediction and the aftermath of this election. So, Professor Lichtman, you're famous for the 13 Keys method and, just for our listeners, could you give a brief recap or overview of what your 13 Keys are for predicting the U.S. presidential elections since 1980, '84?

Allan Lichtman (AL): 1984. I first predicted Ronald Reagan's reelection in April 1982, two-and-a-half years ahead of time, and in the midst of what was then the worst recession since the Great Depression. And so, my predictive career goes back nigh on 40 years now. How do I do it?

I do it, first of all, by ignoring the terribly misleading polls, by not paying any attention to the pundits, and by not looking at the day-to-day events of the campaign. The speeches, the debates, the ads, the fundraising, the dirty tricks, the sound bites play no role in my prediction system. Rather, the 13 Keys to the White House gauge the big picture of the strength and performance of the party holding the White House. Believe it or not, it's governing, not campaigning, that counts. And the way it works is if six or more of these Keys that gauge the strength and performance of the White House party go against them, they are predicted losers. Six strikes and you're out, any six. It's purely non-linear, entirely non-weighted.
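
[Editor's note: for readers who want to see the decision rule in code, here is a minimal sketch. The rule itself, that six or more unfavorable Keys predict the White House party's defeat with no weighting, is as Professor Lichtman states it; the Key names and the example answers below are hypothetical placeholders, not his published calls.]

```python
# Minimal sketch of the "six strikes and you're out" rule described above.
# Each Key is True when it favors the party holding the White House and
# False when it counts against it. The rule is unweighted: any six or more
# False answers predict defeat.

def predict_white_house_party(keys: dict) -> str:
    """Return 'win' or 'lose' for the party holding the White House."""
    strikes = sum(1 for favorable in keys.values() if not favorable)
    return "lose" if strikes >= 6 else "win"

# Hypothetical answers, for illustration only (seven Keys against the incumbent party).
example = {
    "party_mandate": False,
    "no_primary_contest": True,
    "incumbency": True,
    "no_third_party": True,
    "short_term_economy": False,
    "long_term_economy": False,
    "major_policy_change": True,
    "no_social_unrest": False,
    "no_scandal": False,
    "foreign_military_success": False,
    "no_foreign_military_failure": True,
    "incumbent_charisma": False,
    "uncharismatic_challenger": True,
}

print(predict_white_house_party(example))  # -> 'lose'
```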

LV: So, I have to ask: as a statistician, the idea of some of your Keys is hard for me to reconcile in my mind because they're qualitative. I want to understand, for something like ‘incumbent charisma’ as one of your Keys, how do you determine if the person has charisma or not? Is it something you just know, or do we all just see it? How do you make that determination?

AL: Well, it's not quite Justice Brennan's comment, ‘I know something is pornographic when I see it’; it's a little better defined than that. And I’ve got to tell you, when I first came out with the Keys, I was blasted by the professional forecasters for the sin of subjectivity. And although I'm Jewish, I will confess I am a sinner. But it's not really subjectivity, it's judgment. I tried to tell everyone: historians make judgments all the time. That doesn't mean our work is irrelevant or not valuable. We're dealing with human beings and you can't reduce them—sorry, Liberty—just to numbers.

Well, it took me about 20 years to convince them, but about 20 years later, all of a sudden, the professional forecasters realized that an excellent way to forecast is to combine judgmental indicators with cut-and-dried factors. Suddenly the Keys to the White House were the hottest thing. I twice, believe it or not, keynoted the International Forecasting Summit, published in every forecasting journal, twice gave presentations to the American Political Science Association. Now, you asked me about the one that I have been criticized the most for, for being, I will say, judgmental: the charisma Key. And it seems like that should be in the eye of the beholder. And here's one of the great secrets to the Keys as a forecasting model. You can have your own opinions and that's fine. But if you're going to evaluate according to the Keys, your personal opinion doesn't matter. That's why the Keys are nonpartisan. They don't have my personal opinion stuck in there. And if you look at the charisma Key, it is very narrowly defined. It is defined as that very rare, once-in-a-generation inspirational candidate who broadly appeals to the American people, like FDR for the Democrats or Ronald Reagan for the Republicans. And I've also answered this Key all the way back to 1860, retrospectively. Trump is a great showman, but he only appeals to a narrow slice of the electorate. Over 60 percent of the American people don't like him and don't trust him. I didn't give him the Key in 2016, I don't give it to him now. And we've had candidates like him before in my retrospective analysis. Barry Goldwater in 1964 appealed to passionate conservatives who would walk through walls for him, or George McGovern, same thing with liberals, but I did not give the Key to either one of them because of their lack of broad appeal.

LV: I think it's important to note that when you say we can't reduce people to data, we see that very clearly in how wrong the pollsters were in 2016 and how wrong they've been in 2020. You're right: these judgments do need to come into the data, because the data itself doesn't tell us everything.

AL: It's a great point, Liberty, and I have a term for this. I call it the ‘fallacy of false precision.’ You know, the polls give us these really precise numbers down to decimal points, and compilers of the polls like Nate Silver, whom I just call a clerk because he doesn't have a theory of how elections work, also give us very precise numbers, you know, a 91.23% or a 71.29% chance that Trump will win. But these are dependent on the base data, which is the polls. And number one, polls are snapshots. They're not predictors. They're abused as predictors.

And number two, as you know as a statistician, the error margin is way larger than they tell you. When they say the error margin is plus or minus 3%, that's pure sampling statistical error. It doesn't take into account error from the fact that so few people actually respond to the polls, so you have to weight your sample. It doesn't take into account respondent error, and most importantly, it doesn't take into account error in estimating who the likely voters are. And we know both times they way underestimated likely Trump voters. So, the real error is vastly larger than what they tell you.
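
[Editor's note: a back-of-the-envelope sketch of this point, with hypothetical numbers. The reported margin of error covers only sampling variability; even modest differential nonresponse between the two camps can shift the raw estimate by more than that margin.]

```python
import math

# The reported "margin of error" is typically just the 95% sampling margin for a
# simple random sample of size n: 1.96 * sqrt(p * (1 - p) / n).
n, p = 1000, 0.5
sampling_moe = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"sampling-only margin of error: +/- {sampling_moe:.1%}")  # about +/- 3.1%

# Hypothetical illustration of nonresponse bias, which that figure ignores:
# suppose candidate A's supporters are 20% less likely to answer the pollster.
true_share_a = 0.50
resp_a, resp_b = 0.04, 0.05  # hypothetical response rates by camp
observed_share_a = (true_share_a * resp_a) / (true_share_a * resp_a + (1 - true_share_a) * resp_b)
print(f"shift from differential response alone: {observed_share_a - true_share_a:+.1%}")  # about -5.6%
```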

Xiao-Li Meng (XLM): I really have to agree with you, as much as my data science friends will say, "Xiao-Li, you're crazy," you know. As Liberty knows, I wrote an article back in 2018 exactly trying to measure what I call ‘data quality,’ which includes this kind of nonresponse bias, all kinds of things. And interestingly, retrospectively, you can quantify that, because once you know the answer, you can back up to know what the error is—you can quantify those things. And you're right, they turn out to be far larger than what the sampling errors would be. Sampling error is essentially the minor part of all these errors. But let me get to a more general point, jumping into this conversation: in data science, most of us do quantitative analysis, but there is an increasing awareness of doing qualitative data science. We actually have an article that I hope will be accepted (it's being revised now) that emphasizes the importance of qualitative study in data science, because there are a lot of judgment calls. And, you know, we have articles by philosophers writing about the fact that there's no such thing as raw data, because anytime you collect anything, there are a lot of judgments being made. So now, to talk about judgment, I do think I want to push you a little bit, if you don't mind. Your method does have an important quantitative part, which is the number six. Why six Keys, right? Why not nine Keys? Why not three Keys? How did you develop that? How did you decide that six is just about right to tackle this?
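
[Editor's note: the 2018 article referred to here is Meng's "Statistical Paradises and Paradoxes in Big Data (I)." Its central identity, stated below in slightly simplified notation, decomposes the error of a sample mean into three factors, which is what makes the retrospective quantification described above possible once the true answer is known.]

$$
\bar{Y}_n - \bar{Y}_N \;=\; \underbrace{\rho_{R,Y}}_{\text{data quality}} \;\times\; \underbrace{\sqrt{\frac{N-n}{n}}}_{\text{data quantity}} \;\times\; \underbrace{\sigma_Y}_{\text{problem difficulty}}
$$

Here $\bar{Y}_n$ is the sample mean, $\bar{Y}_N$ the population mean, $N$ the population size, $n$ the realized sample size, $\sigma_Y$ the population standard deviation of the outcome, and $\rho_{R,Y}$ the finite-population correlation between the outcome and the indicator of whether an individual ends up in the sample (the 'data defect correlation'). Once the election result supplies $\bar{Y}_N$, the total error on the left is known, so $\rho_{R,Y}$, and hence the size of the nonresponse-driven error relative to pure sampling error, can be computed after the fact.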

AL: Great question. Let me comment first on this broader issue of raw data. You know, I still clash with some political scientists who are so narrow-minded as to say you can't have judgment at all. And then I look at their models, and their models are filled with judgments. For example, GDP—is that objective? Of course not, it includes all kinds of judgments. Liberty, as you know, the big exclusion is unpaid household labor. That's not objective. That's a subjective decision. And so, this notion that you can draw this hard line between what's subjective and what's objective, as you point out, is just bogus. So how did I come up with the 13 Keys? Do you have time for a little story?

XLM: Absolutely. We're all here for as long as you have.

AL: All right. I would love to tell you that I came up with this by brilliant, deep, and long contemplation. But were I to tell you that, to quote the late, great Richard Nixon, "that would be wrong." I developed the 13 Keys by accident. I was a distinguished visiting scholar at Caltech in 1981 in Southern California. And, of course, while I was there, I did what every distinguished scholar does, I went on the quiz show Tic-Tac-Dough and won over $100,000. But that's not the real story. The real story is I met Vladimir Keilis-Borok, the world's leading authority in earthquake prediction. It was his idea to collaborate, and, of course, being foresightful, I said, "Absolutely not. You know, earthquakes may be a big deal here in Southern California. I have to go back to Washington, D.C. Nobody cares about earthquakes there." He said, "No, no, I already solved earthquakes here." Look, get this, in 1963, Keilis-Borok was a member of the Soviet scientific delegation that came to Washington, D.C. under JFK and negotiated the most important treaty in the history of the world. The treaty is why young people like you, Liberty, are still here. The Nuclear Test Ban Treaty that stopped us from poisoning our atmosphere, our oceans, our soil. And he said he fell in love with politics. He said, "I got a big problem. I live in the Soviet Union. Elections? Forget it. But you're an expert in American history, politics, and the presidency." So we became the odd couple of political research. And the key to finding the 13 Keys was twofold. Number one, we reconceptualized presidential elections in geophysical terms. Not as Reagan versus Carter, liberal versus conservative, or Republican versus Democrat, but as stability—the White House party wins—and earthquake—the White House party loses. We then looked retrospectively at every election from 1860 in the horse-and-buggy days of politics when Abraham Lincoln was elected, up to 1980. We didn't use regression models like most forecasters do. We used Keilis-Borok's method of pattern recognition to see what patterns were associated with stability and earthquake. And you're right, we could have come up with six Keys, nine Keys, but it turned out the 13 Keys best separated stability from earthquake, along with our six-Key decision rule. And we got criticized, when I first published my book back in 1990, by some very smart people saying you could have predicted based on a smaller number of Keys. And you know what? If I had listened to them, I would have made errors.

XLM: Thank you for that great story. What I was asking is: I know you came up with 13 Keys, but how did you decide to base the decision on six Keys, and not, say, seven or eight?

AL: Because if you tried five, seven, or eight, you made errors. This is retrospective—by the way, you may know this or not, but a lot of forecasters blur together retrospective analysis with real forecasts. Back in 2012, I think, a bunch of University of Colorado forecasters claimed they had a model that predicted something like the last eight elections. I'm saying, wait a minute, I never heard of this model. How can that be? And it turned out they were all retrospective, retrodictions. And their prediction of a Romney landslide didn't exactly pan out. And we haven't heard from them again.

XLM: I see. So what you did, technically, is back-testing.

AL: Exactly.

XLM: You developed the 13 Keys, then you went back over however many elections, and you looked for the decision rule that would maximize the accuracy.

AL: That's exactly right. And I'm very careful to distinguish between that retrospective analysis and my predictions, which start in 1982.
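
[Editor's note: in data-science terms, the procedure described here is choosing a classification threshold by retrodiction: scan the candidate thresholds over the historical elections and keep the one that gets the most past outcomes right. A minimal sketch with made-up data:]

```python
# Hypothetical back-test of the decision threshold. Each entry is
# (keys_against_the_white_house_party, white_house_party_lost) for one
# past election; the data here are invented for illustration only.
history = [(3, False), (4, False), (5, False), (6, True), (7, True), (8, True)]

def retrodictive_accuracy(threshold: int) -> float:
    """Share of past elections classified correctly by 'lose if strikes >= threshold'."""
    hits = sum((strikes >= threshold) == lost for strikes, lost in history)
    return hits / len(history)

best_threshold = max(range(1, 14), key=retrodictive_accuracy)
print(best_threshold, retrodictive_accuracy(best_threshold))  # -> 6 1.0 on this toy data
```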

XLM: I see. I see.

AL: I didn't know Abe Lincoln, as old as I am.

XLM: Now, here is where I get excited as a nerd. You made it clear that when you have these Keys, you do not weight them. Every one is given exactly the same weight; there's no priority, and you don't care which six. Most people would imagine that these 13 Keys are all important, but some might be more important than others. There must be something in your formulation of the 13 Keys that you balanced in a way that lets you weigh them equally.

AL: And our idea was to use vectors of simple, integral parameters of 0 and 1. I never contemplated weighting because my study of modeling to that point showed me that weighting is excellent, again, for back-checking, for retrospective analysis. But weights can change unpredictably from one election to another. Now, people always say to me, like your great question here, ‘Come on, Professor Lichtman, how can the Great Depression of the 1930s count for only two economic Keys?’ And the answer is the secret to the Keys. And it's called trigger effects. I don't have to weight, because if something is important enough and big enough in our country, it will trigger other Keys. So, the Great Depression triggered social unrest. It triggered a loss of the mandate Key because Republicans suffered midterm election losses after they had controlled everything. It precipitated the candidacy of a charismatic candidate, Franklin Roosevelt, who had no intention of running before that, so it triggered three more Keys. Or the Vietnam War in 1968, we know, triggered social unrest, midterm election losses. It triggered Lyndon Johnson not to run again, losing the incumbency Key. And of course, the pandemic—although I don't have a pandemic Key—triggered other Keys like the short- and long-term economy Keys, and contributed to social unrest.

LV: So COVID basically was the triggering point that then caused these other Keys to turn against Trump. Because my recollection is that your Keys before COVID were actually pro-Trump, right? Trump would have won, according to the Keys, before COVID?

AL: Yes, but I'll give a slightly different take on that. At the end of 2019, Trump was down four Keys. That is two Keys short of defeat. You're right. If things had stopped there, Trump would be a predicted winner. But I didn't make a final prediction because I know how crazy things changed in the year of Trump. But then we got the COVID pandemic and the cries for social and racial justice. But most importantly, we got Trump's failed response to these crises. You know, when I predicted Trump in 2016, I got this very nice note after the election saying, ‘Professor, congrats, good call.’ And in big Sharpie letters, Donald J. Trump. So he acknowledged my prediction, but he's not that much of a reader, never got far enough to understand the deeper meaning that when you're the incumbent, not the challenger, you're judged by your record. And instead of dealing with these crises, as we know from Bob Woodward's tapes, he reverted to his 2016 challenger playbook and tried to talk his way out of them. And that, of course, didn't work. The pandemic surged out of control, killed the economy. And, of course, we got social unrest. So he goes from four Keys down to seven Keys down, one more than needed to predict his defeat. And this has never happened before. We have never seen an incumbent president suffer such a sudden and dramatic reversal of fortune in just a matter of a few months. And he has no one to blame but himself. You know, he loves to distract and deflect. But as Harry Truman once said, for the presidency, the buck stops here.

LV: You know, I wish I could bottle up all of your fabulous quotes because I feel like you need to write a book, like the best presidential quotes or something here. I want to ask on sort of a broader scale, we touched on how in 2016 and 2020 the pollsters got it so wrong. And regardless of how this election specifically turns out, as the counts keep going, there is no doubt that the public's trust in pollsters has been lost. And so, at some point, do you think that these pollsters are going to come back from this? Or do you think we should get rid of pollsters altogether? Or where do you think the pollsters can go in terms of regaining the public's trust and what they're doing?

AL: They should go far, far away from us, as Tevye said about the Tsar in Fiddler on the Roof. I have nothing against polls. It's horse race polls that give me heartburn.

LV: Can you explain the difference to us?

AL: One is a public opinion poll. What do you think of Donald Trump? Do you think he's honest and trustworthy? Do you think we should abolish the Electoral College? Where do you stand on Medicare or Medicaid for all? I think those are probably pretty good, but they have their own problems, of course. But the horse race polls have tremendous problems. And those are, you know, matching up Trump and Biden: Who do you support? They present this false picture, to start with, of elections. You know, everything's a sports metaphor in America. They present this false picture of a presidential election as candidates sprinting ahead and falling behind every day on the events of the campaign with the pollsters keeping score. Look, the Keys to the White House suggest that's a completely misleading view of presidential elections, which turn on the record of the party holding the White House. It's governance, not campaigning, that counts. So that's horse race polls. They made all these mistakes last time and they claim to have fixed them, and they're worse, way worse. There are huge errors. Nate Silver had Biden winning Wisconsin by seven or eight points. He won it by a fraction of a point. That's a monumental error, even though he might have been right about Biden winning Wisconsin. So, in many interviews, and I'll reiterate it, I've told my friends in journalism, never again report or comment on a horse race poll. You're just misleading us and maybe even distorting voter turnout while you are doing it. You know, the Keys often enable you to predict elections way ahead of time because they tell you exactly what factors decide the election and they can fall into place early. And I predicted the 2012 hard-to-call election for Obama in 2010. And guess who viciously attacked me? Nate Silver. He claimed, ‘No, no, you can never make a prediction this early!’ And he wrote a 20-page attack. And of course, being a humble person, I wrote back a 15-page response and I said, ‘Of course, you can't do it by your methods; polling only works a few months before the election. But I can do it with my structural method.’ Well, eventually, Nate came around and I wrote him a very nice email saying, all right, you know, I forgive you for your attack. So let's write a joint article explaining how two different analysts using totally different methods came to the same conclusion. Never heard from the guy again. Right now, my advice to Nate Silver is stick to sports betting and stay out of politics.

LV: I have to ask, to dig a little bit deeper into these horse race polls. My prediction for the election was that it was going to be won by a razor-thin margin by either side. And the reason is not because I did statistical models or I'm as smart as the pollsters or did anything statistically. It was because I just talked to some people. You know, one guy that I really respect, who has strong convictions and is in no way shy—loves his family, loves his country, incredible guy—he said that if he votes for Trump, which he did, not only would he not tell anyone that he was voting for Trump, but he would lie and say he was voting for Biden. And this is not a shy guy. It's because he was scared: scared of the retribution that would come to him if he told, and of the harassment, or the upset, or he just didn't want to deal with telling people he was voting for Trump. So my feeling was that there were these shy or scared voters that were just not going to tell pollsters the truth. Do you think that that's really where that error came from, or do you think it's something else?

AL: No, I think that's definitely one component among several, including the fact that people don't respond to pollsters at all, so you've got to weight the polls, which is a huge problem. But, you know, as brilliant a statistician as you are, Liberty, your methods are even more qualitative than mine. And they seem to work as well as my methods. But yes, that is certainly one component, and it's not easily correctable. It's not as if the pollsters can just fix it. You know, as I said, they thought they fixed it this time and it was much, much worse.

XLM: Well, fixing that is really, really hard, because you need to know with what probability people behave as shy voters. I mean, essentially, if you really knew that, you would kind of know the truth. But back to your story, which is very interesting: you said you reached out to Nate Silver about writing a joint article. That reminds me to bring up a much larger question. As I mentioned, we are talking about how to use qualitative methods in data science itself. So let me make a very general point: whether it's qualitative judgment or quantitative data collected from people, these are all forms of information. There's some misinformation, some good information, some questionable information. Another general question here, and I hope it's a unique question that you get from a data science journal, is how do you think we should bring these together? How do we use qualitative information and quantitative information together? Obviously, both are useful, and if the two sets of information give you very different answers, that also is very useful information. It tells you that something's not quite right, that there is more uncertainty in the system. What are your thoughts on this, because you obviously think a lot about these issues? How do we in the data science community get the best out of all of us so we can all become more reliable? Do you have any thoughts on that?

AL: I do. First of all, we need to open our minds. I take my advice from the late, great physicist Richard Feynman. He was asked, you know, what is the best way to understand the phenomenon of light and other phenomena of physics? Is there one path to understanding? And he said, absolutely not. He said, I am open to all different paths, including intuitive ones, not necessarily those that can be reduced to mathematical equations. And, you know, if you read the biographies of the great Albert Einstein, the epitome of advanced science, a lot of his insights started with certain kinds of intuitions that he had. I think the first step is to realize that, as religions teach us, there are many pathways to God, there are many pathways to the truth, and we should never close our minds to any given pathway. Liberty's brilliant analysis of how she came up with different results from the quantitative pollsters illustrates that.

Unfortunately, there is still too much narrow-mindedness in academia. It's a lot better than it was in 1982 when I first began this enterprise, but I still butt heads (I won't name names) with political scientists who believe that political science must be strictly quantitative. No matter how many examples or proofs I give them, they still seem to stick to that point of view. And you know what it's done? It's kind of rendered political science irrelevant. Who reads the American Political Science Review, other than a few other professors? They're not speaking to the broader public. They're not really communicating with the American people. You know, I've been an expert witness in 100 civil rights cases, and the key to being an expert witness and convincing judges and achieving social progress is not just doing excellent analysis. That's fine. But that's only the start. You've got to be able to communicate it clearly, compellingly, and persuasively to a judge who is not an expert. And, you know, people tell me, oh, my work is too complicated to communicate, and my answer is no, if you can't communicate it, then there's something wrong with your work. I was in one case and we had a brilliant Harvard political scientist, world-renowned, I won't mention his name. And he began to give his testimony and it was absolutely brilliant. And the judges said, stop. We don't understand one word of what you're saying.

XLM: I definitely hear you, because part of what we publish in Harvard Data Science Review is about communication, and without communication, it's very hard to have an impact. That's definitely a very important point. So I get it. You know, we need to be open-minded, which is obviously important not only for data science but for other things. Open-mindedness is always good. And communication is another important point. On top of that, do you have any other advice, other ways you can think of, for how we get the best out of qualitative study and quantitative study so we can all get better, more reliable information?

AL: Yeah, I always tell my audiences what the secret is to successful forecasting. It's not knowing math; they've got no math. It's not knowing history; they've got no history. It's not knowing politics; they've got no politics. It's keeping your own personal prejudices out of it, whether they're prejudices about privileging certain kinds of data or political preferences and prejudices. If you let them in, then your work is going to be tainted. The second thing I'd tell everyone: as an historian, I was one of the pioneers back, I hate to say it, in the 1970s in what's called the new history, which is the use of mathematical models and social science methodology for history. It hadn't really been done before. But what I've always tried to do in all my historical books is combine, say, mathematical analysis of voting patterns with qualitative research, with what people are saying, with the kinds of bills that passed, the legislative debates, and I think it is that combination that is probably the strongest way to look at the world. I always tell those who want to rely on qualitative data, don't make it random, don't make it just your opinion. Do what I tried to do in the Keys: define it as tightly as you can, and give us practical examples of how you've applied it. I also say, unless you can predict, there's no reason I would believe your analysis. The pundits are all friends of mine. You know, I go on their shows, I love them, but I get a kick out of their after-the-fact analysis. You know, my favorite was after Romney lost in 2012, when half the analysts said, oh, he lost because he wasn't conservative enough and he lost his base. And the other half said, he lost because he was too conservative and he lost the middle. After-the-fact analysis: if you want to rely on that, it's at your own risk.

LV: Hindsight also becomes 20/20 in that sense.

AL: It really does. I love it.

XLM: And there's also a data science term for it; it's called ‘overfitting.’

AL: Yes, I know it.

XLM: You develop these certain Keys and you define them as narrowly as possible. You guard yourself in the sense that you don't let your own ideological preference into your judgment of how you apply these Keys, which I think is essentially a way to prevent yourself from overfitting. So, the question I have for you is, have others used your Keys, and how successful are they? Because your Keys are open. They are published in Harvard Data Science Review [an open access publication], thank you very much, and others can read them, people can apply them—but I can see that when they apply them, they then make a judgment. Let's say they just don't like a particular candidate. It's easy for them to think, well, you know, I don't think that Key actually applies. This is where people worry about this kind of qualitative judgment, because you can easily talk yourself out of it and come up with six or seven in whichever direction you want. What do you tell people? How should they guard themselves when they apply your Keys?

AL: Yeah, that's really, really a deep issue. A good friend of mine is Richard Bond. He ran George H.W. Bush's 1988 campaign and was a former head of the Republican National Committee. And he was doing a fundraiser for George W. Bush in 2004—Rich is a huge fan of the Keys—and he handed them out to everyone at the Bush fundraiser and explained that an answer of ‘true’ favors Bush's reelection. They had lunch and they answered them all. Rich says, how many of you had 13 truths? They all raised their hands. One of the great strengths of the Keys is they’re the everyman's predictor. What can the ordinary citizen do with some big regression equation? But anyone can take the Keys and become a do-it-yourself predictor. The downside of that is that very few people take the time to read my entire book and figure out how narrowly the Keys are defined and how I've answered them before. Instead, they tend to insert their own opinions. It takes years of training, Liberty, to be a statistician like you. It takes years of training to learn how to analyze information impartially. And that's what historians do. That's what we're taught to do in, you know, five, six years of graduate school and umpteen years of being a pre-tenured professor.

XLM: In data science we talk a lot about education. You teach methodology, you teach regression, you teach machine learning, you teach all those things. Now, the question is, how do you teach this kind of qualitative work? How do you teach judgment? How do you teach objective judgment? The judgment is not your personal opinion; it's a judgment of the situation and of how to apply your Keys.

AL: You teach it by testing people's opinions against reality and by posing one opinion against another. Part of it, I hate to say this, is almost psychoanalysis. You break down people's opinions and see what they derive from, and try to get at the underlying impulses and tendencies that may not be impartial, just as, you know, some kinds of psychotherapy get at the underlying feelings that may be leading people astray.

LV: And I guess, you know, in data science, we actually have to differentiate between opinion and judgment a lot. Think about designing a survey and making sure you're not putting your personal opinion into the question. Xiao-Li, you and I were talking about that before this: how do you not lead the person when you're asking them a question? In those sorts of situations, judgment and personal opinion come into a lot of things.

AL: Indeed. And, you know, I was on a panel the other day with Charles Tien, you may know him, he's a modeler, and he presented his model, and then he said, I'm going to change it because this GDP result is too odd, it's too unusual. You know, it produced something like a 38 percent popular vote for Trump. He said, I know that's wrong, so he adjusted it. But how did he adjust it? Judgmentally. There was no objective way to jigger GDP. Judgment creeps in all the time, even for those who claim to be the most objective.

XLM: That actually reminds me of a story about Nate Silver. He was giving a talk at Harvard years ago, I was in the audience, and some students asked him what he did right that other people got wrong at that time. And he said one thing which really raised my respect for him. He said, ‘I resist tuning my models when obviously something is wrong; I don't just try to adjust, adjust, adjust.’ That's discipline. It's very easy to say, let me adjust and adjust. But when you do that, you basically start overfitting like crazy, right?

AL: I agree.

XLM: To be fair to all the pollsters, the quantitative people, they're saying: great, you developed this very important prediction for the presidency, but there are a zillion other things. There are all the congressional seats, the Senators and the House. The question then is, do you have similar Keys for predicting these individual races? How do they work? What's the story there?

AL: Back in the '90s, I developed Keys to the Senate, which, by the way, were totally different from the Keys to the White House. For example, a bad economy helped an incumbent senator, because they wanted to get aid to their state. Fundraising was in there; it's not in the Keys to the White House. I thought I did pretty well with one set of Keys for 50 states, and I got about 85 to 90 percent right. Of course, what everyone focused on was the 10 or 15 percent that I got wrong. So I stopped that, because I didn't want to tarnish my presidential predictions. But the one insight, really the only insight, I got is that presidential elections really are unique. The dynamics of presidential elections are not the same as any other election. I think we validated that this year.

LV: You know, we're going to come back to you in spring of 2024 and ask for your prediction again for the 2024 presidential election. So, I will not hold you to what you're going to say in the next couple of minutes. But what's going to happen in 2024?

AL: Well, the nice thing about doing presidencies is I can't be wrong for four more years, and maybe then I'll be so old I won't care. But Biden enters office with a lot of advantages. I'm going to assume for a moment he runs again. That gives him the incumbency Key. He won't be challenged within his own party. That gives him the internal party contest Key. There's not likely to be a third party, so that gives him the third-party Key. He's certainly going to achieve major policy change as compared to the Trump administration, so that gives him the policy change Key. He is not a cheater. He's not dishonest. He's not corrupt, whatever you may think of him. So that gives him the scandal Key. So he comes into 2024, presuming he runs again, with huge advantages. And you heard it here first. It's the first time I've said that.

XLM: Well, thank you so much. This has been fabulous, and I think you've given us a lot of food for thought. And I know that not every quantitative person agrees with this kind of qualitative approach. But I think the most important thing you mentioned is that we should all keep an open mind. And I have to put in a plug for the Harvard Data Science Review: the reason we created this journal is to talk to all kinds of people. On the Board, we have everyone from philosophers to, you know, literally a physicist. We have all these individuals because of the very diverse nature of data science.

LV: Well, I think we just have to say thank you so much. And most importantly, we heard it here first. 2024, here we come. Never too early to start thinking about it.

AL: That's the beauty of the Keys. The polls mean nothing, but the Keys tell you the elements that decide elections, and they can fall into place early. 2012, I called in January 2010. And now, don't take this as a definitive call, by the way, just a quick preview for a few privileged people.

LV: Well, we are looking forward to speaking with you four years from now.

AL: Take care, and you know where to find me.

XLM: Thank you very much.

LV: Thanks.

AL: Thank you. Bye-bye.


Disclosure Statement

Allan Lichtman, Liberty Vittert, and Xiao-Li Meng have no financial or non-financial disclosures to share for this interview.


©2020 Allan Lichtman, Liberty Vittert, and Xiao-Li Meng. This interview is licensed under a Creative Commons Attribution (CC BY 4.0) International license, except where otherwise indicated with respect to particular material included in the interview.
