First in the nation
I'm launching the first episode of Cross Tabs today, January 15. It's the day Iowa's Republican voters will be the first to officially express their preferences about who the GOP's nominee should be in 2024. As I write this, it is -14°F in Des Moines, Iowa; on Caucus Day it will warm up a little – the high temperature will be 0°F. The last time both parties held competitive caucuses in the state, in 2016, the turnout was less than 20%.
As is the custom, The Iowa Poll – conducted by J. Ann Selzer and backed by NBC News, The Des Moines Register and Mediacom – came out the day before the election. There are no obvious upsets to report here – the frontrunner is who you expect, and the candidates fighting it out for second place are also who you'd expect. Of course, anything can happen – inclement weather, along with the psychology of voters, could influence who turns out and in what numbers.
It's an auspicious time to launch an election-year podcast and newsletter about political polling, in other words.
The paradox of public polling
The mechanism of the political survey is constantly evolving; the way one pollster assembles their mechanism can get them closer to or further from the truth; the way the press reports on these mechanisms can amplify and sometimes distort their meanings; and the way ordinary people consume the product of these mechanisms can influence their opinions about issues and candidates – and even whether or not they vote.
But most people don't know much about how polls and polling work. Which is why we get a lot of hand-wringing about whether you can "trust" polls or "rely on" them. Polls have been, in general, fairly accurate – but people (including pundits and journalists) will tell you they're wrong. Polls are conducted on an ongoing basis all the way up to the election – which is itself the final, definitive poll – but those same people will tell you that polls can't predict its outcome.
We'll be swimming in data for the next eight months or so – but most of us will be at least a little confused about what that data means. Just because it's all happening in public doesn't mean it's transparent and easy to understand.
So I want to help explain.
No takes given
I'm not going to give you any takes on the campaigns, the issues, the candidates. My style of prediction is generally darkly pessimistic, bordering on the absurd. So we're not going to do any of that either, for your mental health and mine.
Instead, we're just going to try to demystify polls and polling. Some questions we'll tackle:
- How are polls conducted? How have polls changed?
- Who responds to polls and who doesn't? Why?
- What's the difference between poll questions about issues and poll questions about candidates or parties?
- How do we know who likely voters are?
- What are the common forms of survey bias?
- How do pollsters come up with their questions?
- How do you tell if a poll is "high quality"?
- Are polls still/ever good for democracy?
- How do poll results actually influence public opinion?
- How do you tell if people are lying to pollsters? Is it even lying?
- How do pollsters get a representative sample? What even is that?
- What's a margin of error? How should you think about it? (A small sketch follows this list.)
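To give you a taste of the demystifying to come, here's a minimal sketch of where a poll's reported margin of error comes from. This is the textbook formula for a simple random sample; real pollsters weight their samples and adjust for design effects, so their published numbers won't come from exactly this calculation. The candidate's share and sample size below are made up for illustration.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Half-width of an approximate 95% confidence interval for a
    sample proportion, assuming a simple random sample.
    Real polls use weighting, which widens this a bit."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical numbers: a candidate polling at 48% among 800 likely voters.
p, n = 0.48, 800
moe = margin_of_error(p, n)
print(f"{p:.0%} ± {moe:.1%}")  # prints: 48% ± 3.5%
```

Notice the square root: quadrupling the sample size only cuts the margin of error in half, which is one reason you rarely see public polls with samples much larger than a thousand or so respondents.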
The schedule
Here's what I'm going to try – every week, I'll send out a newsletter. Every other week, I'll publish an episode of the show. A guest and I – often my friend Paul Soldera (a former public pollster in New Zealand and now my favorite quantitative researcher, who still does internal polling for issue groups and candidates, among other kinds of research for other kinds of organizations) – will explore a different facet of polling that most people might not understand all that well and try to explain it as clearly as we can.
There's going to be a lot of polling this year, so we'll also take you through an interesting poll – interesting because it asks a question in an unusual way, or approaches sampling in a way worth talking about, or because its methodology or question design is just too strange to ignore, or because the press is interpreting it in a way that we think reveals something about how the poll was designed and conducted. We might even run a few experiments in poll design ourselves and share with you what we learn.
The point
I want you to be able to listen to a report about polling, or read a story about polls, and know enough to evaluate how much attention you should pay to it. I want you to be informed enough to be skeptical when skepticism is called for, and trusting when a poll deserves it. We all rely on polling – at least during election years – for a sense of what our fellow citizens think and care about. We should understand what polls really say, what they can say, and why they say it.
I promise this is going to be interesting, and I'm going to try to make it fun, too. So, I hope you subscribe and tell a friend.
We'll post episodes here as well as publish them to all your favorite podcatchers. Here are some links you might enjoy that informed the first episode:
Links
- An episode of All In with Chris Hayes from December about two polls gauging people's support for keeping former President Trump off the ballot. Both polls, along with Chris's takes, are discussed in the first seven minutes or so.
- Here is the NYT/Siena College poll he references.
- He also talks about a YouGov poll, which asked three questions related to the Colorado Supreme Court's decision (pending review by SCOTUS) to remove Trump from that state's ballot.
- The Iowa Poll is a "gold standard" poll conducted by a highly respected pollster. Here's a profile of J. Ann Selzer, the pollster herself.
- Here's the final Iowa Poll itself, issued January 14, 2024.