Sunday, October 11, 2015

Who got it right in 2011: Pollsters and Accuracy

This has been cross posted at iPolitics

It can be difficult at the best of times to make sense of the flood of polling data pouring in from the Internet and the airwaves. And there’s a fair amount of social media buzz out there about which pollster is getting it right in this election.

Two of Canada’s major polling firms, EKOS and Nanos, have been reporting different parties in the lead: the Liberals in the case of Nanos over the past week, the Conservatives in the case of EKOS (although the latter’s most recent release gives the Liberals a slight lead that puts them in a statistical tie with the Conservatives, given the margin of error).
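The notion of a statistical tie comes down to simple arithmetic: two parties are effectively tied when the gap between them is smaller than the poll’s margin of error. A minimal sketch of the standard calculation, using hypothetical figures (a party at 32 per cent in a sample of 1,000 respondents; neither number is taken from the polls discussed here):

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical example: a party polling at 32% in a sample of 1,000.
moe = margin_of_error(0.32, 1000)
print(round(moe * 100, 1))  # about 2.9 percentage points
```

With each party carrying roughly a three-point margin, a lead of a point or two between two parties falls well inside the combined uncertainty, hence the "statistical tie".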

So who is getting it right? We won’t know for sure until we take the ballots out of the box on Oct. 19; until then, we can do little more than speculate.

What we can do is look back at earlier elections. In 2011, when it came to the national numbers, the firm with the smallest errors was Angus Reid, a repeat of its 2008 performance. Reid finished just ahead of Nanos, which, in its earlier incarnation as SES, had been closest to the mark in the 2006 election.

The table below compares the performance of polling companies in 2011 with respect to national vote shares:

[Table: 2011 national vote shares, pollsters vs. actual result]

No one pollster gets it right all the time. Compas, which no longer publishes polling reports, was last on this list in 2011 but the most accurate in 2004 (with Ipsos and Léger just a whisker behind). EKOS ranked relatively low in 2011 (to its credit, it conducted a post-election evaluation of its own performance). However, EKOS was closest to the election result in 1997, while Environics had that honour in 2000. The title has been widely shared.

One interesting pattern easily discerned from this list is that most firms underestimated how well the Conservatives would do in 2011. On the other hand, Conservative support was widely overestimated in 2004, and Liberal support underestimated. As a result, we had election-night surprises: a Conservative majority in 2011 and a stronger-than-expected Liberal minority in 2004.

While getting close to the actual result is important to firms, in terms of election outcomes the national number is something of a beauty contest. When it comes to seats, what’s happening in the provinces and regions is what really counts, and it’s difficult and expensive to obtain accurate polling numbers that capture all of Canada’s political diversity.

The table below takes the regional numbers from the pollsters above (except Compas) and calculates, for each party, the average of each firm’s errors across the regions. A combined total lets us compare the firms; you will see that the order of finish is somewhat different.

[Table: average regional polling errors by party, 2011]

While the rankings differ from the national picture, that’s not what matters most. What matters is that many of the regional errors are relatively large. In a first-past-the-post system, a deviation of two or three points one way or the other can have a significant impact on the number of ridings won or lost.
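The error-averaging behind a table like the one above is simple to reproduce. A minimal sketch, using made-up regional errors for a single hypothetical firm rather than the actual 2011 figures:

```python
# Hypothetical regional polling errors (poll minus result, in percentage
# points) for one firm; the real 2011 figures would replace these.
errors = {
    "Conservative": {"Atlantic": -2.1, "Quebec": -1.0, "Ontario": -4.3, "West": -3.5},
    "NDP":          {"Atlantic":  1.5, "Quebec":  2.8, "Ontario":  0.9, "West":  1.2},
    "Liberal":      {"Atlantic":  0.4, "Quebec": -0.6, "Ontario":  2.1, "West":  1.1},
}

# Average the absolute error across regions for each party...
party_avgs = {
    party: sum(abs(e) for e in by_region.values()) / len(by_region)
    for party, by_region in errors.items()
}

# ...then total the party averages to get a single score per firm,
# which can be used to rank the firms against each other.
total = sum(party_avgs.values())
print(party_avgs, total)
```

Note the absolute values: a firm that overshoots one region and undershoots another should not have the two errors cancel out, since both misses would distort seat expectations.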

The underestimate of the Conservative vote led directly to forecasters missing the impending Harper majority. Going into voting day, Harper held an average lead of 6.2 percentage points over the second-place NDP, but wound up with a nine-point lead. Of particular significance was a 9.4-point poll lead in Ontario that became a 19-point lead on voting day.

Since 2011 we have had polling fiascos in Alberta in 2012 and British Columbia in 2013; the polls in both elections created strong expectations of an opposition win that never materialized. This year, despite the radical change the Alberta election produced, the polls were generally accurate in forecasting the outcome (Léger Marketing came closest to the mark).

Methodologies have changed and diversified: traditional live-interviewer telephone polling from Nanos and Environics, automated telephone polling from EKOS, Forum Research and others, and online surveys drawn from large Internet panels by Ipsos, Abacus, Léger Marketing, Angus Reid and others. Resistance to answering polls has also increased, adding to the variation in results and to the uncertainty.

A new Internet methodology that has seen some use in the United States is the Google Consumer Survey, a short questionnaire that randomly pops up on computer screens. According to polling guru Nate Silver, a Google survey was the second-most accurate poll in the 2012 presidential election; this could be a glimpse of our polling future.

I am old enough to remember provincial election campaigns with no polls at all. Today, polls have proliferated as never before, as have the individual riding surveys that attempt to drill down into the dynamics of close-fought local races.