Tuesday Polls - Clarity & Judgment

Notes on a new poll from ABC News; thoughts on that Joe Kahn interview in Semafor

I’ll release an episode this week that’s more of a quick(ish) history of polling told through two important figures. I won’t spoil it for you but one of the things I hope it drives home is the long, historical arc of news organizations reporting on data. Whether it was tracking the fertility rates of French women in the 18th century, cholera outbreaks in the 1840s, Civil War troop movements in the 1860s, or the first national straw polls in the late 1890s, news organizations have a centuries-long love of data.

But just because news organizations love data doesn’t mean they report on it clearly. And while I don’t want to turn this into a critic’s corner, I was sent an article reporting a poll that stood out to me for its… real lack of clarity.

This survey was commissioned by ABC News, designed and analyzed by an independent research firm, and fielded on Ipsos’ probability-based KnowledgePanel. The topline report is authored by the head of the research firm. At the end of the article, they report a few details about the methodology, including a sampling error of 2 percentage points based on a total sample of 2,260 US adults.

I give ABC a lot of credit for publishing quite a bit of methodological detail about what they demand of the pollsters, which you can find here.

But I have some general nits to pick about survey reporting:

  • Public opinion polls should put the survey-level margin of error up front in the reporting and include the margin of error for each question on every chart presented. 
  • When public opinion polls are not using the voter file to verify registered voters, they should say so explicitly. Since voter registration status is verifiable, available information, a pollster who uses some other method to identify registered voters should explain that method in the report.
  • When public polls decide to “test” multiple definitions of a likely voter, they should be extremely clear about their definitions.
  • When public polls want to slice their data by “all respondents”, “registered voters” and various versions of “likely voters”, they should disclose the margin of error within each of these subsets – sampling error necessarily rises as the sub-sample size falls (see the sketch after this list). 
  • When public polls decide to weight their sample, they should indicate which groups were weighted and what weights were applied. Otherwise, there’s no way to compare one pollster’s weighting to another’s. ABC does disclose the types of factors they apply weights to, but the research firm did not include those notes in its published report.
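To make those last two points concrete, here’s a minimal sketch of how the margin of error grows as the base shrinks. It assumes simple random sampling at 95% confidence and ignores any design effect from weighting (which would widen these further); the subgroup sizes below are hypothetical, not ABC’s actual bases.

```python
import math

def moe_95(n, p=0.5):
    """Worst-case 95% margin of error for a single proportion,
    assuming simple random sampling and no design effect."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

# Full sample size from the ABC/Ipsos report, plus hypothetical subgroup sizes
for label, n in [("All adults", 2260),
                 ("Registered voters (hypothetical)", 1800),
                 ("Likely voters, screen A (hypothetical)", 1400),
                 ("Likely voters, screen B (hypothetical)", 1100)]:
    print(f"{label:<40} n={n:>5}  MoE ~ +/-{moe_95(n) * 100:.1f} pts")
```

On the full sample of 2,260, that formula lands right around the reported 2 points; cut the base down to a likely-voter screen of about 1,100 and it’s closer to 3.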

That’s just general nitpicking. Let's get specific.

Plain Language Matters

For this particular report, I would have liked an ABC News editor to provide some notes on writing for clarity. This passage stood out to me as something only another opinion researcher would be able to parse, maybe.

“This finds the race at 42% for Trump and 40% for Biden, with 12% for Robert F. Kennedy Jr., 2% for Cornel West and 1% for Jill Stein. (That, of course, assumes Kennedy, West and Stein are on the ballot in all states, an open question.) Among registered voters in the five-way race, it's 42-42%, Biden-Trump, and Biden is a non-significant +3 or +4 points in likely voter models.”

“Biden is a non-significant +3 or +4 points in likely voter models.” Is Biden “up”, “ahead”, or “winning by” 3 or 4 points? It’s very clunky writing – we don’t need clunky right now. We need crystal clarity.
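For readers wondering how a 3- or 4-point edge can be “non-significant”: the margin of error on the gap between two candidates is roughly double the margin on either candidate’s individual share, because the two shares move against each other. Here’s a minimal sketch, assuming simple random sampling, no design effect, and a hypothetical likely-voter base of 1,500 (the report doesn’t state the exact subsample size); the vote shares are illustrative, not the report’s figures.

```python
import math

def lead_moe_95(p1, p2, n):
    """95% margin of error on the lead (p1 - p2) between two candidates
    drawn from the same sample, assuming simple random sampling."""
    var = (p1 * (1 - p1) + p2 * (1 - p2) + 2 * p1 * p2) / n
    return 1.96 * math.sqrt(var)

n = 1500                         # hypothetical likely-voter base
p_biden, p_trump = 0.45, 0.42    # illustrative shares only
lead = p_biden - p_trump
moe = lead_moe_95(p_biden, p_trump, n)
print(f"Lead: {lead * 100:+.0f} pts, MoE on the lead: +/-{moe * 100:.1f} pts")
print("Significant at 95%?", abs(lead) > moe)
```

Under those assumptions the margin on the lead is nearly 5 points, so a 3-point edge genuinely isn’t significant. Saying that plainly (“Biden leads by 3, within the margin of error”) would have been clearer.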

Explain Registered and Likely Voter Definitions

If we’re not going to come to a consensus about what a likely voter is – and we seem unwilling or unable to do so – pollsters should at least be clear about what definitions they selected, and how that affected the base sizes and therefore the MoE.

The firm mentioned testing “a few versions” of likely voters and then gave examples. One version is people who say they are registered, or say they will register, and who are certain to vote this November. If you're wondering why these aren’t simply registered voters, I suppose the qualifier that they “are certain to vote in November” is the key distinction. Another version is people who say they are or will be registered, say they will vote in November, and also say they voted in 2020 (unless they weren’t old enough to vote in 2020).

Again, for clarity’s sake, I’d call these groups something other than likely voters. Maybe I’d call them “RVs/strong intenders” and “RVs/2020 voters”. 
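To show how those two screens differ in practice, here’s a hypothetical sketch of them as boolean filters over respondent-level data. The column names are my own placeholders, not the pollster’s actual variables.

```python
import pandas as pd

# Hypothetical respondent-level data; these columns are placeholders,
# not the actual variables from the ABC/Ipsos survey.
df = pd.DataFrame({
    "registered":       [True,  False, True,  True ],
    "will_register":    [False, True,  False, False],
    "certain_to_vote":  [True,  True,  True,  False],
    "voted_2020":       [True,  False, False, True ],
    "eligible_in_2020": [True,  True,  True,  True ],
})

is_rv = df["registered"] | df["will_register"]

# "RVs/strong intenders": registered (or will register) and certain to vote in November
rv_strong_intenders = is_rv & df["certain_to_vote"]

# "RVs/2020 voters": the above, and voted in 2020 (unless not eligible in 2020)
rv_2020_voters = rv_strong_intenders & (df["voted_2020"] | ~df["eligible_in_2020"])

print("RV/strong intender base:", int(rv_strong_intenders.sum()))
print("RV/2020 voter base:     ", int(rv_2020_voters.sum()))
```

Each additional screen shrinks the base, which is exactly why the subgroup margins of error belong in the report.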

There are a variety of models available for determining what a likely voter might be, but please – tell me which model (or models) you chose.

The News Value of Polls

If you ever open up the linked PDFs of full reports or published crosstabs, you’ll often find that some questions in the survey are withheld for later release. This is fairly standard practice in reporting on news organization-sponsored public polling. Remember, these polls are content. They're not cheap, but they’re still less expensive than investigative or other in-depth reporting. 

But polls aren’t just content to be reported – they’re also a tool for deciding which issues and topics get more in-depth reporting and analysis. New York Times Executive Editor Joe Kahn said as much in an interview with Ben Smith of Semafor. He put it this way:

“It’s our job to cover the full range of issues that people have. At the moment, democracy is one of them. But it’s not the top one — immigration happens to be the top [of polls], and the economy and inflation is the second. Should we stop covering those things because they’re favorable to Trump and minimize them?”

There’s a lot that’s just… odd about this statement. I looked through some recent polling by big organizations. The only recent survey from a non-partisan public pollster I could find that ranked immigration first is the February Gallup poll. It was conducted around the time the Senate blocked, despite bipartisan support, a bill that some say would have been one of the toughest border bills in history, so the topic had relatively high salience and was something government seemed to be failing to act on. This ranking is also driven almost entirely by Republicans: according to Gallup, 57% of Republicans named immigration their top issue, 22% of Independents did, and only 10% of Democrats did. 
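As a back-of-the-envelope illustration of how a lopsided partisan split rolls up into a topline number, here’s a quick calculation using the Gallup figures above and an assumed (purely hypothetical) party composition:

```python
# Share naming immigration the top issue, by party (Gallup figures cited above)
top_issue_by_party = {"Republicans": 0.57, "Independents": 0.22, "Democrats": 0.10}

# Hypothetical party composition of the adult population -- not Gallup's actual weights
party_share = {"Republicans": 0.30, "Independents": 0.40, "Democrats": 0.30}

topline = sum(top_issue_by_party[p] * party_share[p] for p in party_share)
print(f"Implied topline: {topline:.0%}")   # roughly 29% under these assumed shares
```

A topline around 30% can put an issue at the top of the list even when only one party’s voters are really driving it.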

This happens a lot in issue polling. Abortion, for example, ranks in the middle of the pack on a lot of top-line issue polls; but when you sort by party, or more to the point, by gender, abortion can leap into the top 3 issues. 

As for protecting democracy, there’s a brand new NBC News poll out this week that asks about democracy in interesting ways. Their poll has “threats to democracy” ranked third most important to voters, after “inflation and the cost of living” and “immigration and the situation at the border”, but they also asked a revealing follow-up question.

“Some people feel so strongly about an issue that they will vote for or against a candidate on that basis regardless of the candidate's stand on most other issues. I'm going to read you a list of issues and ask whether you consider any one issue so important that you would vote for or against a candidate solely on that basis. If you do not feel strongly enough about any of the issues to determine your vote, please just say so.... (RANDOMIZE) (ACCEPT UP TO 2 RESPONSES)”

The responses here are telling. “Protecting democracy or constitutional rights” is now number 1. “Immigration and border security” is now number 2. And the 3rd highest-ranked issue here? Abortion.

Now, the lists of issues in both questions are not identical – I’d have kept them the same, myself. But based on these results, one has to wonder: if an editor of a major news organization is letting “the polls” dictate news coverage, which polls should they look at? Which polling questions should be prioritized? And what should you do when a “top” issue is highly partisan, meaning only one party’s members care a lot about it? Is an effort to appear unbiased working against you? And is resting the responsibility on the polls – and by implication on the voters – how you want to run a newsroom?

I’ve already gone long (again), so I’ll leave off here. But there are some material questions about how we do issue polling, how to understand the impact of its results, and how those results, filtered through editorial decision-making, end up in a dialogue with voters, influencing them as much as measuring them. We’ll return to that in future editions of this newsletter and in upcoming interviews.

🚄
Quick question for you – should I try to gate crash AAPOR the way I used to do at SXSW? Like, is there a great public swimming hole and breakfast tacos and a comfy hotel lobby I can camp in while I accidentally-on-purpose “run into” people? Let me know. I might bring a microphone and try to make this happen.
