
Political pollsters have gotten a black eye. Here’s how they hope to do better

Mark Z. Barabak
Los Angeles Times (TNS)
Donald Trump and Hillary Clinton on stage during the second debate between the Republican and Democratic presidential candidates on Oct. 9, 2016, at Washington University in St. Louis. (Christian Gooden/St. Louis Post-Dispatch/TNS)

It’s been a rough few years for those taking the nation’s political pulse.

In 2016, pollsters underestimated Donald Trump’s appeal, feeding perceptions that Hillary Clinton was headed to the White House. Then, after they swore to do better, a similar thing happened in 2020, when surveys overestimated Joe Biden’s strength and Democratic chances of romping to big majorities in the House and Senate.

Recently some of the country’s top poll takers got together in Chicago for their first in-person confab since the pandemic began. There were the usual convention staples: Awards banquet. A “fun run.” Karaoke! And plenty of arcane panels along the lines of “Measuring and Modeling Neighborhoods” and “Universal Adaptability: A New Method to Draw Inference from Non-Probability Surveys and Other Data Sources.”


Sampling: Some of the yeastiest conversation was about the polling industry’s black eye and ways to avoid future embarrassment by broadening and refining their samples.

“If it’s Trump-specific, the problem will take care of itself” should the former president not run again, said Scott Keeter, director of survey research at the nonpartisan Pew Research Center in Washington.

The statement was not a political or value judgment. It was simply a reflection of Trump’s capacity to frazzle political pollsters, among others.

“If it’s about a kind of voter or some attitude voters are bringing into the process” — like refusing to take part in political polling — “that may be more difficult for us to deal with,” said Keeter, who had just returned from the conference.

Why try?: First, a question: Why even bother conducting these kinds of who’s-up-who’s-down surveys?

The short answer is people want to know which candidate is winning, even before a single vote is cast and even if pollsters repeatedly emphasize their surveys are only a snapshot of a moment and not predictive of an election outcome. (Tell that to political junkies, who salivate over polling data the way a Doberman regards a ribeye.)

Media outlets also have an incentive to report on those kinds of buzzy surveys, which draw more eyeballs than, say, an explication of a candidate’s position on Social Security or NATO expansion. That’s why news organizations, which often pay for name-branded polling, have a shared interest in getting the numbers right; the missteps of the past few years have only served to undermine the already low level of trust many Americans have in the news business.

So how to improve things?

Much of the discussion in Chicago revolved around new ways of contacting voters and better weighting the results, said Daron Shaw, a Republican pollster and government professor at the University of Texas, who also attended the gathering of the American Assn. for Public Opinion Research.

“I’m always trying to steal good ideas,” Shaw said of the notes he brought back to Austin.

Some may wonder how a survey of just a few hundred or a few thousand people can capture the sentiment of voters in a statewide or national election. The polling pioneer George Gallup had a ready response: You need only a few drops of blood, not several quarts, to perform an accurate blood test.
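Gallup’s blood-test point can be made concrete with standard statistics (this sketch is an illustration, not any pollster’s method): under simple random sampling, a poll’s 95% margin of error for a proportion shrinks with the square root of the sample size, which is why a few thousand respondents can suffice for a national estimate.

```python
import math

def margin_of_error(n, p=0.5):
    """Approximate 95% margin of error for a proportion p
    under simple random sampling with sample size n."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

# Doubling the sample size does not halve the error; it only
# shrinks it by a factor of about 1.4 (the square root of 2).
for n in (500, 1000, 2000):
    print(f"n={n}: +/- {margin_of_error(n) * 100:.1f} points")
```

The catch, as the column goes on to explain, is that this math assumes a truly random sample; when certain voters systematically refuse to respond, no sample size fixes the bias.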

Randomness: The key, of course, is obtaining a representative sample; that’s where things get tricky and can easily go sideways.

“What makes a poll valid? It’s that a sample is random,” said Mark Mellman, a Democratic pollster with extensive experience in California and the West. “What randomness means, in short, is everybody has an equal probability of being contacted.”

Not, he emphasized, just those willing to take the time to answer a whole bunch of questions from an inquisitive interviewer.

It used to be much easier to draw broad samples of public opinion by dialing telephone numbers at random. But gone are the days when people felt obliged to answer the phone simply because they didn’t know who might be trying to reach them. (Caller ID took care of that, routing a lot of calls to the purgatory of voicemail.)

So among the innovations being tested are text messaging and emailing individuals to ask if they’d be willing to participate in an opinion survey, either online or in a call back. Some are even experimenting — holy rotary phone! — with sending invitations via snail mail.

The other challenge has to do with weighting survey results.

Pollsters do their best to model their results on the turnout they expect for a given election. They will adjust their findings to make sure the sample has shares of men and women, young and old, and people of different races, ethnic groups and income levels that match what is known from census data. Then, as a second step, most pollsters adjust the results to reflect who they think will actually cast a ballot.

Various criteria are used to aim for the right turnout model, but those modifications still amount to educated guesswork.
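The demographic-adjustment step described above can be sketched in a few lines. This is a minimal, hypothetical illustration of post-stratification weighting (the respondents, groups and population shares are invented, not drawn from any real poll): each respondent is weighted by their group’s population share divided by its share of the sample.

```python
# Hypothetical sample: (education group, supports candidate A: 1 or 0).
# College-educated voters are overrepresented here, as the column
# notes they often are in real surveys.
respondents = [
    ("college", 1), ("college", 1), ("college", 0), ("college", 1),
    ("college", 0), ("college", 1), ("college", 1),
    ("no_college", 0), ("no_college", 0), ("no_college", 1),
]

# Hypothetical population shares, e.g. from census data.
population_share = {"college": 0.40, "no_college": 0.60}

# Each group's share of the sample.
n = len(respondents)
sample_share = {}
for group, _ in respondents:
    sample_share[group] = sample_share.get(group, 0) + 1 / n

# Weight = population share / sample share, so overrepresented
# groups count less and underrepresented groups count more.
weights = [population_share[g] / sample_share[g] for g, _ in respondents]

# Raw vs. weighted support for candidate A.
raw = sum(v for _, v in respondents) / n
weighted = sum(w * v for w, (_, v) in zip(weights, respondents)) / sum(weights)
print(f"raw: {raw:.2f}, weighted: {weighted:.2f}")
```

In this toy example the weighting pulls the estimate down, because the underrepresented non-college group favors candidate A less. The limitation the column describes follows directly: pollsters can only weight on factors they think to include, which is why missing education in 2016 mattered.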

Getting the weight right: In 2016, education levels proved a significant indicator of how people ended up voting. Many pollsters, however, failed to weight their samples for educational achievement. Their surveys didn’t have enough non-college-educated voters, who are less likely to respond to polls, and therefore missed Trump’s strong showing among them.

In 2020, chastened poll takers adjusted their results to account for education, but many still fell short in gauging Trump’s support.

So other ways of profiling voters and weighting the results are being discussed, among them factors like home ownership, engagement on social media and whether respondents harbor “anti-establishment” sentiments reflected by, say, scorning the media or believing the 2020 election was stolen.

“Because demography has become increasingly important in people’s political behavior, getting that weighting right has become all the more important,” Mellman said.

Ideally, every eligible citizen would vote and every voter would be highly informed about the candidates and issues before them. There would be no horse-race polling and thus no candidate gaining an advantage in fundraising and publicity by leading in pre-election surveys.

But that’s never going to happen.

So it’s good the polling industry is looking inward and seeking ways to do better and boost public faith in its findings. As Shaw put it, “Anything worth doing is worth doing well.”

Especially when it’s our government and election system at stake.

— Mark Z. Barabak is a columnist for the Los Angeles Times, focusing on politics in California and the West.