Episode 9 - This Is How We Pew It with Andrew Mercer
More ways to poll, more possibilities, more problems
Political polling has been around for over a century. And while survey technology, statistical methods, and sample sourcing have all advanced significantly, each new development brings its own challenges and complexities.
In our latest episode of Cross Tabs, I talk to senior research methodologist Andrew Mercer from Pew Research Center about the current state of political polling methodologies and the factors that can influence the accuracy and reliability of poll results.
You can listen to that episode here on Spotify:
The Impact of Survey Mode on Polling Outcomes
We talked about how much influence the mode of a survey can have on polling results. Mercer explains that how a survey is conducted, whether by phone, online, or through mixed methods, can lead to notable differences in responses. For example, respondents may be more likely to admit to socially undesirable behaviors in online surveys than in phone interviews, and people attend to answer options differently depending on whether they hear a list read aloud or read it on a screen. Understanding these mode effects is crucial for both survey design and analysis. For a fascinating look at how polling modes are changing, and how quickly, see Pew's report "How Public Polling Has Changed in the 21st Century". In it, they look at how polling changed after the 2016 election, and how it changed again after 2020.
The Growing Problem of Bogus Respondents in Online Opt-In Samples
Another major challenge facing political pollsters is the rise of bogus respondents in online opt-in samples. Mercer highlights research showing that certain demographic groups, particularly young adults and Hispanic adults, are disproportionately affected by this issue. People misrepresenting themselves as members of these demographic cohorts can skew poll results, leading to misleading narratives and inaccurate conclusions. Pollsters – and poll consumers – must remain vigilant in identifying and mitigating the impact of bogus respondents to maintain the integrity of their findings. And journalists and commentators (as well as marketers!) should be skeptical about sensational headlines based on online surveys of either group.
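To make that vigilance concrete, here is a minimal Python sketch of the kinds of automated checks researchers commonly run to flag suspect interviews: speeding, straight-lining, failed attention checks, and internal inconsistencies. The column names and thresholds are invented for this example; it is not Pew's actual screening procedure.

```python
# Illustrative bogus-respondent checks. All column names (duration_sec,
# grid_q1..grid_q5, attention_check, stated_age, birth_year) and all
# thresholds are hypothetical, not Pew's actual screening procedure.
from datetime import date

import pandas as pd

GRID_COLS = [f"grid_q{i}" for i in range(1, 6)]

def flag_suspect_rows(df: pd.DataFrame) -> pd.DataFrame:
    flags = pd.DataFrame(index=df.index)
    # Speeding: finished implausibly fast relative to the median respondent.
    flags["speeder"] = df["duration_sec"] < 0.3 * df["duration_sec"].median()
    # Straight-lining: identical answers across an entire grid battery.
    flags["straightliner"] = df[GRID_COLS].nunique(axis=1) == 1
    # Attention check: respondents were told to pick a specific option.
    flags["failed_attention"] = df["attention_check"] != "Somewhat agree"
    # Internal inconsistency: stated age doesn't match reported birth year.
    implied_age = date.today().year - df["birth_year"]
    flags["inconsistent_age"] = (df["stated_age"] - implied_age).abs() > 1
    flags["n_flags"] = flags.sum(axis=1)
    return flags

# Drop respondents who trip two or more checks rather than any single one:
# clean = df[flag_suspect_rows(df)["n_flags"] < 2]
```

Requiring multiple flags before excluding anyone is a common design choice: any single check produces false positives, but several tripped at once is a strong signal of a bot or a wholly inattentive respondent.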
Designing Effective Survey Questions in a Changing Landscape
Crafting survey questions that accurately capture public opinion is an ever-evolving challenge in political polling. Mercer emphasizes the importance of considering the nuances of question wording, response options, and the potential for respondent confusion. He also discusses the need to adapt survey designs to accommodate the growing prevalence of mobile devices, as poorly optimized surveys can lead to lower-quality data. By carefully designing survey questions and staying attuned to technological shifts, pollsters can improve the reliability of their results. Survey designers should all take a cue from Pew and stop using matrix-style grid questions.
The Importance of Transparency in Political Polling Practices
Throughout the conversation, Mercer stresses the crucial role of transparency in political polling. He argues that pollsters should comprehensively explain their methodologies, including details on sampling, weighting, and contact methods. By being transparent about their practices, pollsters can build trust with the public and enable more informed interpretations of their findings. Mercer also cautions against relying on polls that disclose little methodological information, as these results may be less reliable.
It's not just politics
It's not just pollsters who have to be concerned about the use of tech-driven research solutions (ResTech). Marketers, advertisers, business consultants, and business decision-makers, as well as academics who rely on surveys for their research, all have skin in this game.
Online opt-in sample providers have democratized research over the past 20 years, making it more accessible and affordable than ever before. Online survey platforms have made it possible for just about anyone to create and program a survey instrument. And mobile technology has made it easier to reach respondents where they are and serve them questionnaires on the devices they actually use.
But where we save time and money, we can also lose quality. This episode is the first of three exploring the implications of low-quality sample in online research – what it means for data quality, research practice, partner selection, and reputation – and how to address the challenges that this democratization has created for us all.
If you're an Apple Podcasts kid, here you go:
And if you could – share this newsletter with a friend!
Mentioned Resources
- National Public Opinion Reference Survey (Pew Research Center): An extensive, high-quality, high-response-rate survey Pew Research Center conducts annually to provide benchmarks for weighting on characteristics the government doesn't measure, such as party identification and religious affiliation. (A toy weighting sketch follows this list.)
- Study on PartyID (Pew Research Center): A study that used statistical modeling to examine the full history of Pew's partisanship data, adjusting for mode effects between telephone and online surveys so the trends connect across different survey modes.
- Online opt-in polls can produce misleading results, especially for young people and Hispanic adults: Pew looked at how bogus respondents can skew results in the cross tabs, and lead to misleading headlines about certain demographic groups.
- How Public Polling Has Changed in the 21st Century: Pew looked at how most public pollsters changed their research methods after 2016, and again after 2020.
- Assessing the Risks to Online Polls From Bogus Respondents: Pew looked at the way bogus respondents respond to polls and found that the error introduced is not only significant, but also systematic.
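Since a couple of these resources hinge on weighting survey samples to external benchmarks, here is a minimal sketch of raking (iterative proportional fitting), a standard technique for that kind of adjustment. The target shares below are made up for illustration and are not NPORS figures.

```python
# A toy raking (iterative proportional fitting) sketch. Targets and
# category labels are hypothetical, not real NPORS benchmarks.
import pandas as pd

def rake(df, targets, max_iter=50, tol=1e-6):
    """Iteratively adjust weights until each variable's weighted category
    shares match its target shares."""
    w = pd.Series(1.0, index=df.index)
    for _ in range(max_iter):
        max_shift = 0.0
        for var, target in targets.items():
            shares = w.groupby(df[var]).sum() / w.sum()  # current weighted shares
            factors = pd.Series(target) / shares         # per-category correction
            w = w * df[var].map(factors)
            max_shift = max(max_shift, (factors - 1).abs().max())
        if max_shift < tol:  # stop once all margins are (nearly) matched
            break
    return w * len(df) / w.sum()  # rescale so weights average 1.0

# Hypothetical benchmark shares (illustrative only):
targets = {
    "party_id": {"Rep": 0.29, "Dem": 0.33, "Ind/Other": 0.38},
    "religion": {"Christian": 0.63, "Unaffiliated": 0.29, "Other": 0.08},
}
# weights = rake(sample_df, targets)
```

Real weighting pipelines combine raking with other adjustments and typically trim extreme weights; the point here is only to show mechanically what "weighting to benchmarks" means.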
Our Guest
Andrew Mercer is a senior research methodologist at Pew Research Center. He is an expert on probability-based online panels, nonprobability survey methods, survey nonresponse, and statistical analysis. His research focuses on methods of identifying and correcting bias in survey samples. He leads the Center's research on nonprobability samples and has co-authored several reports and publications.
He also served on the American Association for Public Opinion Research's task force on Data Quality Metrics for Online Samples. He has authored blog posts and analyses, making methodological concepts such as margin of error and oversampling accessible to a general audience.
Prior to joining the Center, Mercer was a senior survey methodologist at Westat. He received a bachelor's degree in political science from Carleton College and master's and doctoral degrees in survey methodology from the University of Maryland. His research has been published in Public Opinion Quarterly and the Journal of Survey Statistics and Methodology.
Your Host
Farrah Bostic is the founder and Head of Research & Strategy at The Difference Engine, a strategic insights consultancy. With over 20 years of experience turning audience insights into practical strategies for B2B and B2C companies, Farrah helps business leaders make big decisions across various industries. Learn more at thedifferenceengine.co and connect with Farrah on LinkedIn.
Subscribe to Cross Tabs
Don't miss an episode! Subscribe to Cross Tabs on your favorite podcast platform: