
CURATOR
A pinboard by
George Ng

I've been in Information Technology for close to 30 years, and I've never been more excited about data mining, machine learning and their impact on our society.

I love to seek out insights from data through visualization in a refreshing way.

PINBOARD SUMMARY

Voters have a history of jumping on the bandwagon and choosing a party with an improved poll rating

What is the 'Bandwagon Effect'?

Research suggests a party's fortunes in the polls can convince more voters to back it by 'jumping on the bandwagon'. As the UK was heading for an early election, the question was: would Labour's resurgence in the surveys translate into an upset?

Don't believe it?

A large national survey of Danish voters revealed that polls make voters 'float along'. In other words, voters exposed to news of either an upward or downward movement in a party's ratings tend to shift their voting intentions accordingly. The effect is strongest when a party is gaining popularity: more voters will then back it.

Hang on, but don't polls get it spectacularly wrong sometimes?

Well, it happens - remember last year's Brexit vote and the US presidential election? Not only pollsters but also some researchers were left red-faced.

A common flaw of many election prediction models is that they do not incorporate time uncertainty: the further away election day is in calendar time, the more unstable the prediction.
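
One way to picture this: treat a party's true support as a random walk between now and polling day, so the forecast interval widens with the square root of the days remaining. A quick sketch in Python (the 40% share and the daily volatility are made-up numbers, purely for illustration):

```python
import numpy as np

def forecast_interval(current_share, days_to_election, daily_sd=0.15, level=1.96):
    """Approximate a forecast interval for a party's vote share,
    treating day-to-day movement in true support as a random walk.

    current_share    : latest poll estimate (percentage points)
    days_to_election : calendar days until the vote
    daily_sd         : assumed standard deviation of daily drift (pp) -- illustrative
    level            : z-value for the interval width (1.96 ~ 95%)
    """
    # Random-walk variance accumulates linearly in time,
    # so the interval widens with sqrt(days).
    sd = daily_sd * np.sqrt(days_to_election)
    return current_share - level * sd, current_share + level * sd

for days in (7, 30, 90, 180):
    lo, hi = forecast_interval(40.0, days)
    print(f"{days:>3} days out: {lo:.1f}% to {hi:.1f}%")
```

Run it and the 180-day interval comes out roughly five times wider than the 7-day one, which is why early-cycle predictions are so shaky.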

There is also the issue of voters not always reporting their true voting intentions. Research has found that this happens when the 'cost' of the election is high and the electorate is large: respondents then have a stake in influencing the voting behaviour of others by misreporting their own preferences.

Poll authors and meta-poll analysts also tend to gloss over undecided voters, simply assigning them to candidates according to static rules. But when the number of undecided voters is high, these unrealistic allocations bias election predictions.
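
To see how much the allocation rule matters, here's a toy two-candidate example in Python (the poll numbers are invented): splitting the undecided evenly, proportionally, or with a late break towards one candidate gives noticeably different headlines.

```python
def allocate_undecided(decided, undecided, rule="even"):
    """Return final predicted shares after allocating undecided voters.

    decided   : dict of candidate -> decided share (fractions summing to < 1)
    undecided : undecided share (fraction)
    rule      : 'even'         -> split 50/50
                'proportional' -> split in proportion to decided support
                'break_b'      -> assume a late 2:1 break towards B (illustrative)
    """
    a, b = decided["A"], decided["B"]
    if rule == "even":
        wa, wb = 0.5, 0.5
    elif rule == "proportional":
        wa, wb = a / (a + b), b / (a + b)
    elif rule == "break_b":
        wa, wb = 1 / 3, 2 / 3
    else:
        raise ValueError(rule)
    return {"A": a + wa * undecided, "B": b + wb * undecided}

poll = {"A": 0.44, "B": 0.41}   # hypothetical poll with 15% undecided
for rule in ("even", "proportional", "break_b"):
    shares = allocate_undecided(poll, 0.15, rule)
    print(rule, {k: round(v, 3) for k, v in shares.items()})
```

With 15% undecided, the 'even' and 'proportional' rules both call it for A, while a modest 2:1 break towards B flips the winner - exactly the kind of bias the first pinned paper describes for 2016.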

Bookies vs Scholars

Another interesting question is whether betting markets predict outcomes better than forecast models. A few weeks before the 2016 presidential election, pollsters were still forecasting a Clinton victory, while bookmakers had started slashing their odds, entertaining the possibility of a Trump win. However, studies show that the bookmakers' predictions are not superior to poll-based models. Their advantage is that they can factor the polls into their prices, whereas pollsters do not base their predictions on betting markets.
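
For the curious, bookmakers' prices can be read as probabilities once their built-in margin (the 'overround') is stripped out. A minimal sketch with hypothetical decimal odds, not the actual 2016 prices:

```python
def implied_probabilities(decimal_odds):
    """Convert decimal odds into implied probabilities,
    normalising away the bookmaker's margin (overround)."""
    raw = {k: 1.0 / v for k, v in decimal_odds.items()}
    overround = sum(raw.values())           # > 1 because of the margin
    return {k: p / overround for k, p in raw.items()}

# Hypothetical prices, purely for illustration
odds = {"Clinton": 1.25, "Trump": 4.0}
print(implied_probabilities(odds))   # ~{'Clinton': 0.76, 'Trump': 0.24}
```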

14 ITEMS PINNED

Biased polls and the psychology of voter indecisiveness

Abstract: Accounting for undecided and uncertain voters is a challenging issue for predicting election results from public opinion polls. Undecided voters typify the uncertainty of swing voters in polls but are often ignored or allocated to each candidate in a simplistic manner. Historically this has been adequate because first, the undecided tend to settle on a candidate as the election day draws closer, and second, they are comparatively small enough to assume that the undecided voters do not affect the relative proportions of the decided voters. These assumptions are used by poll authors and meta-poll analysts, but in the presence of high numbers of undecided voters these static rules may bias election predictions. In this paper, we examine the effect of undecided voters in the 2016 US presidential election. This election was unique in that a) there was a relatively high number of undecided voters and b) the major party candidates had high unfavorability ratings. We draw on psychological theories of decision making such as decision field theory and prospect theory to explain the link between candidate unfavorability and voter indecisiveness, and to describe how these factors likely contributed to a systematic bias in polling. We then show that the allocation of undecided voters in the 2016 election biased polls and meta-polls in a manner consistent with these theories. These findings imply that, given the increasing number of undecided voters in recent elections, it will be important to take into account the underlying psychology of voting when making predictions about elections.

Pub.: 28 Mar '17, Pinned: 06 Jun '17

How the polls can be both spot on and dead wrong: using choice blindness to shift political attitudes and voter intentions.

Abstract: Political candidates often believe they must focus their campaign efforts on a small number of swing voters open for ideological change. Based on the wisdom of opinion polls, this might seem like a good idea. But do most voters really hold their political attitudes so firmly that they are unreceptive to persuasion? We tested this premise during the most recent general election in Sweden, in which a left- and a right-wing coalition were locked in a close race. We asked our participants to state their voter intention, and presented them with a political survey of wedge issues between the two coalitions. Using a sleight-of-hand we then altered their replies to place them in the opposite political camp, and invited them to reason about their attitudes on the manipulated issues. Finally, we summarized their survey score, and asked for their voter intention again. The results showed that no more than 22% of the manipulated replies were detected, and that a full 92% of the participants accepted and endorsed our altered political survey score. Furthermore, the final voter intention question indicated that as many as 48% (±9.2%) were willing to consider a left-right coalition shift. This can be contrasted with the established polls tracking the Swedish election, which registered maximally 10% voters open for a swing. Our results indicate that political attitudes and partisan divisions can be far more flexible than what is assumed by the polls, and that people can reason about the factual issues of the campaign with considerable openness to change.

Pub.: 18 Apr '13, Pinned: 06 Jun '17

Forecasting daily political opinion polls using the fractionally cointegrated vector auto-regressive model

Abstract: We examine forecasting performance of the recent fractionally cointegrated vector auto-regressive (FCVAR) model. We use daily polling data of political support in the UK for 2010–2015 and compare with popular competing models at several forecast horizons. Our findings show that the four variants of the FCVAR model considered are generally ranked as the top four models in terms of forecast accuracy, and the FCVAR model significantly outperforms both univariate fractional models and the standard cointegrated vector auto-regressive model at all forecast horizons. The relative forecast improvement is higher at longer forecast horizons, where the root-mean-squared forecast error of the FCVAR model is up to 15% lower than that of the univariate fractional models and up to 20% lower than that of the cointegrated vector auto-regressive model. In an empirical application to the 2015 UK general election, the estimated common stochastic trend from the model follows the vote share of the UK Independence Party very closely, and we thus interpret it as a measure of Euroscepticism in public opinion rather than an indicator of the more traditional left–right political spectrum. In terms of prediction of vote shares in the election, forecasts generated by the FCVAR model leading to the election appear to provide a more informative assessment of the current state of public opinion on electoral support than the hung Parliament prediction of the opinion poll.

Pub.: 21 Nov '16, Pinned: 06 Jun '17
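
The FCVAR model itself isn't available in the usual Python libraries (implementations tend to be in R or Matlab), but the standard cointegrated VAR it is benchmarked against can be fitted with statsmodels. Here's a sketch on synthetic daily polling series, just to show the shape of such a forecasting exercise; none of the numbers relate to the paper's UK data:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM

# Synthetic daily poll shares for three parties, driven by one common
# stochastic trend plus noise -- a stand-in for real polling data.
rng = np.random.default_rng(0)
n = 500
trend = np.cumsum(rng.normal(0, 0.2, n))          # shared slow-moving component
polls = pd.DataFrame({
    "party_a": 35 + 0.8 * trend + rng.normal(0, 0.5, n),
    "party_b": 30 - 0.6 * trend + rng.normal(0, 0.5, n),
    "party_c": 15 + 0.3 * trend + rng.normal(0, 0.5, n),
})

# Standard cointegrated VAR (the benchmark model in the paper);
# the FCVAR would additionally estimate a fractional integration order.
model = VECM(polls, k_ar_diff=2, coint_rank=2, deterministic="ci")
res = model.fit()

# Forecast the next 30 days of poll shares.
forecast = res.predict(steps=30)
print(pd.DataFrame(forecast, columns=polls.columns).round(2).head())
```

Roughly speaking, the 'fractional' part of the FCVAR adds a long-memory parameter that lets shocks to poll shares decay more slowly than in this standard model, which is consistent with the larger forecast gains the abstract reports at longer horizons.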