After elections, questions are often raised about the accuracy of polling data, especially when the results differ from predictions made by polling companies. These questions can snowball into more widespread concern about the accuracy and usefulness of market research as a whole.
However, not all market research is the same. Our methodology differs from the techniques typically used by political pollsters in numerous ways, which means it isn't subject to the same challenges. This isn't to say the approaches used by pollsters are wrong or inherently flawed - rather, pollsters are trying to achieve a specific and difficult task within tight constraints, and many of the challenges they face therefore don't apply (at least to the same extent) to other forms of market research.
Online vs telephone interviewing
Our surveys are fielded online, while opinion polls tend to use telephone-based approaches. Although this means pollsters can reach individuals who aren't online, telephone interviews can miss respondents who are too busy to take part in research during the work day. Additionally, opinion polls often aren't incentivized, which can discourage people from taking part.
As our main data sets are designed to represent internet users only, we're able to use an online methodology. Our interviews are conducted anonymously and at respondents' leisure, so we're more likely to reach people who are too busy to take part when initially contacted. In addition, our respondents are incentivized to take part, leading to good response rates across all demographic groups.
Whilst best practice dictates that political polls aren't signposted as such at the start of an interview, many individuals will be keenly aware of the ongoing polls throughout an election and may be inclined to reject polling calls from unknown sources, especially if primed to distrust the mainstream media and its sources of information.
Our survey invitations, on the other hand, are distributed via panel providers, who build long-term relationships of trust with their panellists. This means consumer research like ours is less likely to underrepresent specific groups of voters who may be reluctant to participate in political polling but have no issue answering surveys about consumer topics from a source they trust.
When comparing our data to political polls conducted online, it's important to note that election polls are fielded with far greater urgency, with the large majority conducted over just a few days. This can bias their results towards respondents who are able to respond quickly. Our surveys have no such strict deadlines, and many of our panel providers give respondents a few weeks to finish a survey. This means those who work away from a computer, or who are simply busier, can still take part.
Sample size and sampling
As preferences for a candidate change over the course of a campaign, it's important that polls are collected over a very short period. This means the sample sizes of these studies are relatively small, and demographic quotas often need to be relaxed - with harder-to-reach groups, like those with lower education levels or ethnic minorities, boosted using weighting. As a result, a small number of people from hard-to-reach groups can end up representing far more people than those from easier-to-reach groups, as the sketch below illustrates.
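To make the weighting point concrete, here's a minimal post-stratification sketch in Python with purely hypothetical figures (none of these numbers come from a real poll): a group making up 12% of the population but only 4% of the achieved sample, so each of its respondents has to stand in for three times as many people.

```python
# A minimal post-stratification weighting sketch. All figures are
# hypothetical, purely for illustration.

def post_strat_weight(population_share: float, sample_share: float) -> float:
    """Weight applied to each respondent in a group so the group's
    weighted share of the sample matches its population share."""
    return population_share / sample_share

# An easy-to-reach group, slightly over-sampled: weights stay near 1.
print(post_strat_weight(0.30, 0.34))  # ~0.88

# A hard-to-reach group, badly under-sampled: each respondent counts
# three times over, so any quirk of this small group is amplified 3x.
print(post_strat_weight(0.12, 0.04))  # 3.0
```

The larger the weight, the more the final estimate hinges on a handful of respondents - which is exactly why relaxed quotas make polls fragile.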
Our data, on the other hand, is collected over a longer period (three months for GWI USA and GWI Core). This means we're able to collect a bigger sample and take the time needed to ensure our quotas are met, making us less likely to under-represent specific demographic groups in our data.
Even our custom studies - which are often collected with a faster turnaround - have an easier time fulfilling quotas than political surveys conducted over the telephone, as we use panel providers who know the demographic make-up of their respondents and can target invites to ensure an even spread of responses, along the lines sketched below.
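As a rough illustration of that targeting, here's a sketch with hypothetical quota targets and response rates (a real panel provider's model would be far more granular):

```python
# A hypothetical quota-targeting sketch: given a per-group quota and an
# expected response rate, work out how many invites to send per group.

quotas = {"18-24": 250, "25-34": 250, "35-54": 250, "55+": 250}
response_rates = {"18-24": 0.10, "25-34": 0.15, "35-54": 0.20, "55+": 0.25}

invites = {
    group: round(target / response_rates[group])
    for group, target in quotas.items()
}
print(invites)
# {'18-24': 2500, '25-34': 1667, '35-54': 1250, '55+': 1000}
```

Because invites are scaled to each group's expected response rate, the achieved sample lands close to quota without needing heavy corrective weighting afterwards.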
Likely voters
Finally, it's important to consider the challenge pollsters face in deciding which voters and non-voters to include in their final results. One of the key issues with the 2020 US polls, for example, was a higher-than-expected turnout among Republican voters. These voters were underrepresented in the polls, as many who ended up voting had been classified as unlikely voters and excluded from the final results. The sketch below shows how this kind of screening can shift a topline estimate.
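Here's a minimal sketch, with entirely made-up numbers, of how a likely-voter screen that disproportionately excludes one candidate's actual voters biases the result:

```python
# Hypothetical likely-voter screening. Among 1,000 actual voters,
# candidate B's supporters are more often misclassified as "unlikely"
# and dropped from the final results.

voters = {
    # candidate: (actual voters, share the screen retains)
    "A": (480, 0.90),  # well captured by the screen
    "B": (520, 0.75),  # disproportionately screened out
}

kept_a = voters["A"][0] * voters["A"][1]    # 432 respondents retained
kept_b = voters["B"][0] * voters["B"][1]    # 390 respondents retained

true_share_b = 520 / 1000                    # 52.0% among actual voters
polled_share_b = kept_b / (kept_a + kept_b)  # ~47.4% after screening
print(f"true: {true_share_b:.1%}, polled: {polled_share_b:.1%}")
```

The screen turns a 52% candidate into an apparent 47% candidate, without a single respondent giving an inaccurate answer.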
We don't need to categorise respondents for inclusion or exclusion in the same way. Each of our data sets represents all consumers within its specified universe, and therefore isn't affected by researchers' judgements about who should and shouldn't be included in the data. Meanwhile, the majority of our custom studies filter respondents on more objective criteria, such as their current behaviour, demographics, or habits, rather than likely future behaviour.