
Opinion | Are Vaccine Polls Flawed?

A much smaller survey conducted by Axios and Ipsos was off by only 4 percentage points, the Nature article says. That survey is based on an online panel with only about 1,000 responses per week, but it uses best practices for obtaining a representative cross-section.


By one measure the Delphi-Facebook survey produced results that are no better than a simple random sample of just 10 people, the authors of the Nature article write. “Our central message,” they conclude, “is that data quality matters more than data quantity, and that compensating the former with the latter is a mathematically provable losing proposition.”

The research was conducted by a team of six scholars from Harvard University and the University of Oxford, a mix of statisticians, political scientists and computer scientists. The lead authors are Valerie Bradley of Oxford and Shiro Kuriwaki, formerly of Harvard and now an incoming assistant professor at Yale. The senior team members are Seth Flaxman of Oxford and Xiao-Li Meng of Harvard. The other two authors are Michael Isakov of Harvard and Dino Sejdinovic of Oxford.

Meng has been calling attention to the big data paradox for years. I wrote about his work earlier this year when I was working for Bloomberg Businessweek. The intuition is that if you solicit opinions about Taylor Swift while you’re at one of her concerts, you won’t be getting a good read on overall opinion. As I wrote:

In a perfectly random sample there’s no correlation between someone’s opinion and their chance of being included in the data. If there’s even a 0.5 percent correlation — i.e., a small amount of selection bias — the non-random sample of 2.3 million will be no better than the random sample of 400, Meng says.

That’s a reduction in effective sample size of 99.98 percent.
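Meng's arithmetic can be checked directly. His identity says the error of a biased sample mean is the data defect correlation times sqrt((N − n)/n) times the population standard deviation; equating that mean squared error with the variance of a simple random sample gives an effective sample size of n / (ρ² × (N − n)). A minimal sketch, using an assumed U.S. adult population of roughly 250 million alongside the figures quoted above:

```python
# Effective sample size under Meng's big data paradox.
# Identity: error of a biased sample mean = rho * sqrt((N - n) / n) * sigma,
# where rho is the correlation between a person's response and their
# chance of inclusion. Matching that MSE to a simple random sample's
# variance sigma^2 / n_eff yields n_eff = n / (rho^2 * (N - n)).

def effective_sample_size(n: int, N: int, rho: float) -> float:
    """Size of the simple random sample that matches a biased
    sample of n from a population of N with defect correlation rho."""
    return n / (rho**2 * (N - n))

# Illustrative inputs: n and rho follow the 2016-polling example in the
# text; N ~ 250 million is an assumed figure for the U.S. adult population.
N = 250_000_000
n = 2_300_000
rho = 0.005  # a 0.5 percent correlation

n_eff = effective_sample_size(n, N, rho)
print(f"effective sample size: {n_eff:.0f}")       # a few hundred
print(f"reduction: {100 * (1 - n_eff / n):.2f}%")  # ~99.98 percent
```

With these assumed inputs the 2.3 million responses are worth only a few hundred truly random ones, matching the roughly 400-person equivalence and the 99.98 percent reduction described above.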

That’s not just theory: Statisticians estimate that there was a 0.5 percent correlation contaminating the 2016 presidential polls, presumably because supporters of Donald Trump were slightly less likely to express their preference to pollsters. That’s why so many pollsters were caught by surprise when Trump won. The 2020 polls suffered similar problems.

Both the Delphi-Facebook survey and the Census Bureau’s Household Pulse Survey have been widely quoted in the news and cited in academic research. The Delphi Research Group, a research team at Carnegie Mellon University that collaborates with Facebook, has a web page with links to 15 publications, including ones in journals such as Science and The Lancet Digital Health. (There are ways for scholars to make valid use of imperfect data, such as re-weighting it to better approximate the makeup of the population, but they need to tread carefully, the authors say.)

I reached out to the Delphi-Facebook and Census Bureau teams for their responses to the Nature article. A Census Bureau spokeswoman directed me to the bureau’s description of the Household Pulse Survey, which says that it “is designed to deploy quickly and efficiently” but is experimental. The page adds, “Census Bureau experimental data may not meet all of our quality standards.”
