Methodology

This is a pretty simple aggregation of state level polling for the 2020 presidential general election. We pull the latest polling data, pollster ratings, and pollster bias from FiveThirtyEight's git repo. We do a few things to calculate the weight of each poll (see below), then run 20,000 election simulations each day to determine a probability of each candidate winning overall and in each state (again, see below).

This model checks for the latest polls every 30 minutes and, if new polls are found, re-runs all needed simulations. Note: some states (smaller, less competitive ones) may have no polls. In that case we use the 2016 vote % for that state instead (these are faded out and italicized in the states list). We also take past election results (2016-2000) into account when calculating probability, though the more polls we have in a state, the less the past results are considered.
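For the curious, here's a rough sketch (in Python) of how that blending of polls and past results could work; the function name and weighting constants are illustrative assumptions, not the site's actual code:

# A minimal sketch (not the production code) of blending a state's polling
# average with its past results, trusting past elections less as poll counts grow.
def blended_margin(poll_margin, past_margins, num_polls):
    """poll_margin: current adjusted polling margin (Dem minus Rep, in points).
    past_margins: margins from the 2016-2000 results, newest first.
    num_polls: number of usable polls in the state."""
    if num_polls == 0:
        # No polls at all: fall back to the most recent (2016) result.
        return past_margins[0]
    past_avg = sum(past_margins) / len(past_margins)
    # The more polls we have, the less the past results count (illustrative cap).
    poll_weight = min(0.9, num_polls / (num_polls + 3))
    return poll_weight * poll_margin + (1 - poll_weight) * past_avg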

Charts

The top Win Probability chart shows how our model's calculated win probability for each candidate has changed over the last several months.

The Electoral Votes chart below shows each state and its current projected winner. The color is shaded by how 'safe' it is (how far ahead the projected winner is). You can hover over a state to see the current polling difference, the projection, the number of electoral votes the state has, and the total number of electoral votes the current leader in that state would have if they won it.

Below that we also show a chart of how the electoral votes have changed over time. We have 3 different ways of showing electoral votes:

  • Projection - This is the number of electoral votes each candidate would receive based upon our projection for every state (ties aren't considered).
  • Estimated - This is the average electoral votes for each candidate in the 20,000 tests we run. It's an estimate of the average number of electoral votes the candidate will receive (and can, as such, include partial numbers).
  • Polls - This is the number of electoral votes each candidate would receive if all the polls were accurate (so that the leader in each state's polling average won that state).

We only show the 'Projection' number on the chart (but you can turn on the others). We also show a 95% range of all simulations (the light colored area behind the line charts).
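To make those three numbers (and the 95% band) concrete, here's a small illustrative sketch of how they could be computed from a set of simulated outcomes; the names and data shapes are assumptions, not the site's code:

import numpy as np

# Illustrative sketch of the three electoral-vote numbers described above, given
# per-state projections, polling leaders, and an array of simulated outcomes.
def electoral_vote_summaries(states, sim_dem_evs):
    """states: list of dicts with 'ev', 'projected_winner', 'poll_leader'.
    sim_dem_evs: array of Dem electoral-vote totals, one per simulation."""
    projection = sum(s["ev"] for s in states if s["projected_winner"] == "dem")
    polls = sum(s["ev"] for s in states if s["poll_leader"] == "dem")
    estimated = float(np.mean(sim_dem_evs))              # can be a partial number
    low, high = np.percentile(sim_dem_evs, [2.5, 97.5])  # the shaded 95% band
    return projection, estimated, polls, (low, high)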

We also show an 'Electoral College Simulations' chart which shows all the electoral college outcomes in our latest simulation test.

States

In the States section, we show the current polling gap, the swing from the 2016 results, and a projection of the win probability for each state. The 'Tipping Point' state (here, the state that would put the leading candidate over the winning line based on the current polls) is shaded in either blue or red. States are sorted by their 'Projection' (not their polling lead).
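As a rough illustration, here's one way a tipping point state could be found from the current polling margins; the sorting approach and data shapes are assumptions for the example, not a copy of the site's code:

# Rough sketch of finding a 'tipping point' state: order states from safest to
# closest for the polling leader and find the one that crosses 270 electoral votes.
def tipping_point(states):
    """states: list of (name, electoral_votes, margin) where margin > 0 favors
    the overall polling leader. Returns the state that pushes them past 270."""
    total = 0
    for name, ev, margin in sorted(states, key=lambda s: s[2], reverse=True):
        total += ev
        if total >= 270:
            return name
    return None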

You can click on a state's name to view a much fuller picture of that state: the current win probability and how it's changed over time, the swing from 2016, a bell curve of how our simulations currently see the race, and the latest polls as well as how the polling average has changed over time.

Change From 2016

The Change From 2016 section shows how each state's numbers have changed from 2016 to today. For each state (and Nationally) it shows both how the 2016 polls compare to the 2020 polls and how the 2016 results compare to the 2020 polls.

Polls

The Polls section shows a list of all the 2020 election cycle polls we use in the model (plus the national polls used in our national average, which aren't used in the model itself), as well as the adjusted polling numbers and the weight we give to each poll.

Here it's important to point out how we adjust polls and calculate their weight.

Poll Adjustment
Polls are adjusted based upon the pollster bias as determined by FiveThirtyEight's pollster ratings, using their 'Mean-Reverted Bias' value. You can see the result of this adjustment in the Polls section's 'Diff (adj)' column. Those adjusted polling numbers are what we use throughout the model.
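In sketch form, the adjustment amounts to something like this (the sign convention is an assumption for the example):

# A minimal sketch of the bias adjustment described above. The sign convention
# (positive bias = the pollster historically overstates the Democrat) is assumed.
def adjust_margin(raw_margin, mean_reverted_bias):
    """raw_margin: Dem minus Rep in the poll, in points.
    mean_reverted_bias: FiveThirtyEight's 'Mean-Reverted Bias' for the pollster."""
    # Subtract the pollster's historical lean to get the 'Diff (adj)' value.
    return raw_margin - mean_reverted_bias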

Poll Weight
When calculating the weight to give a poll, we start with a value of 1. We have several things that can either add or remove weight from that value. These are:

  • Age of the Poll: The older the poll, the less weight it has. That weight degradation slows over time, but as we get nearer the election, the degradation speeds up.
  • Pollster Rating: Again, we use FiveThirtyEight's pollster ratings. If it's a good pollster (ex: an 'A+' rating) we don't take away any weight. If it's not good (ex: a 'D' rating) we take away quite a bit of weight (in our 'D' rating example, we take away 0.3). If a pollster doesn't have a rating, we assume it's a D rated pollster.
  • Sample Size: If a poll's sample size is less than 1,000, we take away a bit of weight. If it's over 1,000, we add a bit of weight. Each one is a sliding scale, with limits of how much weight can be added or removed just by sample size.
  • Voters in Poll: If a poll is of 'Likely Voters' we give it more weight. If it's 'Registered Voters' we do nothing. If it's 'All Adults' we take away some weight.

When a poll's weight drops below a certain point, we stop using it in the model. Though we do make sure each state uses at least 4 polls (if available).
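Putting the rules above together, a weight calculation might look roughly like this; the exact decay curve, rating penalties, and sample-size scale are assumptions (only the starting value of 1 and the 'D' penalty of 0.3 come from the description above):

from datetime import date

# Hypothetical penalties per rating; only the 'D' value of 0.3 is from the text.
RATING_PENALTY = {"A+": 0.0, "A": 0.02, "B": 0.1, "C": 0.2, "D": 0.3}

def poll_weight(poll_date, rating, sample_size, population, today=None):
    today = today or date.today()
    weight = 1.0

    # Age: older polls lose weight, with a decay that slows as polls get older.
    age_days = (today - poll_date).days
    weight -= min(0.5, 0.05 * age_days ** 0.5)

    # Pollster rating: unrated pollsters are treated like a 'D'.
    weight -= RATING_PENALTY.get(rating, RATING_PENALTY["D"])

    # Sample size: sliding scale around 1,000 respondents, capped both ways.
    weight += max(-0.1, min(0.1, (sample_size - 1000) / 10000))

    # Voter screen: likely voters gain, all adults lose, registered unchanged.
    weight += {"lv": 0.1, "rv": 0.0, "a": -0.1}.get(population, 0.0)

    return max(weight, 0.0)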

Win Probability / Projections

When we're calculating the Win Probability for each candidate (overall and in each state) we do the following:

  • Calculate a range of polling changes to test for each state based upon the following:
    • How many (and how high quality) the polls are in that state
    • How far the numbers are from 50%
    • How many 'Undecided' voters are in the state's polls
  • Then in each simulation we use that calculated range to randomly select a polling swing. More simulations are done with swings closer to the current polling, fewer are done with the furthest swings.
  • We also adjust testing so that it can calculate different regional swings as follows (each state gets its own swing, but sometimes in line with other states):
    • 30% of tests - states in the same defined 'region' move together (ex: Pennsylvania and Michigan might move together, but Arizona and Texas might move in another direction)
    • 40% of tests - the country as a whole moves in a similar direction (ex: a test assumes most polls change to bump up Trump by around 2 points, though, again, the actual swing in each state will be adjusted slightly)
    • 20% of tests - states all swing independently of each other (ex: the simulation might give Trump another 4% in Michigan, but give Biden another 2% in Pennsylvania)
  • As the election gets nearer, we give more weight to more recent polls and assume the polls will swing less than they would, say, 6 months out. In other words, the model gets more aggressive as we near the election. You can see that aggressive election day version of the model by clicking the If Election Today filter.

Every day we run 20,000 simulations of the election and report the share of wins as each candidate's probability, overall and in each state.
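Here's a very rough sketch of what one of those simulations could look like given the swing modes above; the distributions, the handling of the remaining 10% of tests (which isn't spelled out above and is lumped in with the independent case here), and all inputs are assumptions for illustration:

import numpy as np

# Very rough sketch of one simulation pass using the swing modes described above.
def simulate_once(states, rng):
    """states: dict of name -> {'margin', 'swing_range', 'region', 'ev'},
    with margin as Dem minus Rep. Returns the Dem electoral-vote total."""
    mode = rng.choice(["regional", "national", "independent"], p=[0.3, 0.4, 0.3])
    national_shift = rng.normal(0, 2) if mode == "national" else 0.0
    regional_shift = {}

    dem_ev = 0
    for name, s in states.items():
        if mode == "regional":
            # States in the same region share a common component of the swing.
            regional_shift.setdefault(s["region"], rng.normal(0, 2))
            shift = regional_shift[s["region"]]
        else:
            shift = national_shift
        # Each state still gets its own noise, scaled by its calculated range.
        swing = shift + rng.normal(0, s["swing_range"])
        if s["margin"] + swing > 0:
            dem_ev += s["ev"]
    return dem_ev

# Running 20,000 of these gives the win probability, e.g.:
# rng = np.random.default_rng(0)
# p_win = sum(simulate_once(states, rng) >= 270 for _ in range(20000)) / 20000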

Compare to 2016

One of my favorite features is the ability to Compare To 2016 (now always shown). This runs the 2016 data through the same model to show you what it would have said about that election. Again, I used FiveThirtyEight's 2016 Polling data.

Clearly, this model gave Clinton a high chance to win in 2016. And she lost, so take that as you will. But we've added a few filters to let you play around with the numbers...

Filters

+ 2016 Polling Error
This gives you the ability to Assume the Same Polling Errors as 2016 ('errors' of course isn't quite the right word, since polls aren't meant to be perfect, but it's the best word I could think of). This takes all the current 2020 state level polling averages, adjusts them by how far off the polls were from the actual results in 2016, then runs the simulations.
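In sketch form, that adjustment is roughly this (field names are assumptions for the example):

# Sketch of the '2016 Polling Error' filter as described: shift each state's
# 2020 polling average by how far the 2016 polls missed that state's result.
def apply_2016_error(avg_2020, polls_2016, result_2016):
    """All inputs are dicts of state -> Dem-minus-Rep margin in points."""
    adjusted = {}
    for state, margin in avg_2020.items():
        error = result_2016[state] - polls_2016[state]  # how far 2016 polls missed
        adjusted[state] = margin + error
    return adjusted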

If Election Today
Another filter is the If Election Today checkbox. This will change the numbers to assume today is election day. We're more aggressive as we get nearer to the election so this will show higher highs, lower lows, and bigger jumps. Of course, as we get nearer to the election, we'll get closer to what this filter is simulating so it has less effect.

What's Missing

The model is based 100% on state level polling data (with just a dash of past election results thrown in). That's by design. We want to look just at what the state polls say. As such, it does NOT include some stuff other models might include, like:

  • Fundamentals - it doesn't factor in presidential approval ratings, economic indicators, etc.
  • Demographics - it doesn't look at a state's demographics or states with similar demographics or anything like that.
  • Expert Predictions - it doesn't include anything like the ratings from Cook Political Report or other experts.
  • Down Ballot Polls - it doesn't take senate or house race polls (or results) into account.
  • National Polls - it specifically avoids national numbers (we show the national average, but don't use it in our state by state projections at all); it tries its best to look just at the electoral college.

If you're looking for stuff like that, there are several other good models out there worth checking out.

Again, this is a pretty simple model...though an increasingly complex one. It's mostly just something I'm doing for fun (and to give myself the illusion of control in a chaotic time :). But it does accurately show what the polls are currently saying.

If you have any ideas or concerns or questions or know of any other models I should check out, please let me know! @electoralpolls