04 February 2013

The moment of truth: how did the pollsters do?

With the Presidential Election of 2012 now firmly in the rear-view mirror and all of the voting statistics set in stone, it's time to take a hard look at which pollsters were good and which were really, really bad in 2012.

There is good and bad news in this report for people from all sides of the spectrum.

Leading up to Electoral Landscape No. 8 (the final landscape), which was published early on November 6th, 2012, long before the polling places around the country closed, I collected every single state and national poll that was published in that year. All of those polls, linked and referenced, were kept in one large, tabbed EXCEL document, which you can read online HERE.

Using that very same document, I have now added the exit-polling results from election day as well as the exact percentages and the actual percentage margin for each and every state. I then ran a simple mathematical comparison between the state poll margins and the actual margin. By state polls, I mean generally the very latest polls, the ones that went into the final polling average. Sometimes I ran the comparison all the way back to the beginning of October, just to get a feel for how the margins looked over time, but those older polls are usually not in the final poll mix.

The difference between a pollster's predicted margin and the actual margin is what I call "mathematical bias" - especially when it recurs for the same pollster across a large number of states, almost always in the same direction.

All of those results are published in an EXCEL table HERE. Feel free to click and go to your state to see how things shaped up.

When we look at such numbers, common sense must always rule. Here are some easy guidelines to help sift through that EXCEL table.

1.) A state with many polling results and a rich polling "DNA" for averages is easier to gauge than a state with just one or two polls for the entire year.

2.) A mathematical bias of +/-3 points or more looks pretty extreme in a very tight state (as was the case in FL, NC, and to some degree, OH), but that same 3 point bias may be nothing more than statistical noise in a state that went for a candidate by +30 or more, like HI or ID.  In other words, "mathematical bias" can best be understood within the environment in which it appears.

So, here we go:


I. Which states were polled in 2012?

Most of the states in the Union were polled in 2012, but not all:

Alaska, Delaware, Mississippi and Wyoming were not polled even once in 2012.

There are also some states that were polled only once or twice in the entire year: Alabama (1), DC (2), Hawaii (2), Idaho (1), Kansas (1), Kentucky (1), Louisiana (1), South Carolina (1), and West Virginia (2). The lone South Carolina poll (Ipsos/Reuters) was from January 16th, 2012, so far removed from the event horizon that we could easily say that for all intents and purposes, South Carolina was not polled in 2012, either.

There were also some states that received polls from only one pollster, spread throughout the year:

Hawaii - 2 polls from the Merriman Group (R).
Oklahoma - 3 "Sooner" polls between May and October 2012.
South Dakota - 6 polls, all from Nielson Brothers, from February to October 2012.
Vermont - 3 "Castleton State" polls, from February to August 2012.
West Virginia - 2 polls from "R.L. Repass and Partners", from May to August 2012.

There is of course nothing wrong with this, and often a certain hometown pollster puts out the best results for its state, but of course, they can never be mixed into a polling average.

Adding to the misery of those data-poor states is the fact that not all states received exit-polls this time around. The following states have no exit-polling data for 2012:

Alaska, Arkansas, Delaware, DC, Georgia, Hawaii, Idaho, Kentucky, Louisiana, Nebraska, North Dakota, Oklahoma, Rhode Island, South Carolina, South Dakota, Tennessee, Texas, Utah, West Virginia and Wyoming (20 "states").

Most of the states from the lists above also appear on the no-exit-polling list; the exceptions are Mississippi and Vermont, which did receive exit polls but little other polling. That makes 22 states total where information is either practically non-existent or so limited that comparisons are going to be difficult:

It looks like this on a map:




The vast majority of states that were not exit-polled were so-called "Red States". Hmmmm...

This means that we have 31 states with some kind of polling "gene-pool" to work with.


II. What do the calculations look like?

I randomly picked a state from the table to illustrate:

Nr. WISCONSIN Date Sample MoE Obama Romney Und Mar. DIFF: Simplified:
AVERAGE (one week): N/A N/A N/A 50,23 45,71 4,06 4,52 -2,42 -2
AVERAGE (final 3 days): 50,20 46,00 3,80 4,20 -2,74 -3
Exit Polling 06.11.12 3845 52,1 46,4 1,5 5,7 -1,24 -1
Actual Results 2012 06.11.12 52,83 45,89 1,28 6,94

Most Recent (one week, no repeaters):

65 Angus Reid 05.11.12 454 LV +/-4.5 53 46 1 7 0,06 0
64 Pulse / Let Freedom Ring (Tea-Party) 04.11.12 1000 LV +/-3.0 49 48 3 1 -5,94 -6
63 YouGov 04.11.12 1225 LV +/-3.1 50 46 4 4 -2,94 -3
62 PPP (D) – FINAL POLL 03.11.12 1256 LV +/-2.8 51 48 1 3 -3,94 -4
61 Grove (D) / PNA / USAction 03.11.12 500 LV +/-4.4 48 42 10 6 -0,94 -1
60 WAA (R) 02.11.12 1210 LV +/-3.0 51,5 44,8 3,7 6,7 -0,24 0
59 Wenzel (R) / Citizens United (R) 01.11.12 1074 LV +/-3.0 49 47 4 2 -4,94 -5
58 Rasmussen 01.11.12 500 LV +/-4.5 49 49 2 0 -6,94 -7
57 St. Norbert Poll 01.11.12 402 LV +/-5.0 52 43 5 9 2,06 2
56 NBC / WSJ / Marist 31.10.12 1065 LV +/-3.0 49 46 5 3 -3,94 -4
54 Marquette 31.10.12 1243 LV +/-2.8 51 43 6 8 1,06 1

What you see above is exactly the format I used in the nightly battleground reports and also in the electoral landscapes for 2012. What is slightly different: I have eliminated all "Obama vs. other GOP" polling and added two columns to the right: "DIFF:" and "Simplified". So, let's go through this:

The end polling average for Wisconsin for the final seven day period was: Obama +4.52%. Obama actually won the state with +6.94%. So, subtracting the actual result from the polling average = -2.42%. This means that the average of the 11 polls within the final 7 day time-frame was 2.42% TO THE RIGHT of the actual results. Here I am not speaking of any one individual poll, but rather, the summation of the end polls, which was used to predict the likely winner on election night.

This also means that we need to learn to interpret the color coding and the minus signs: a positive value for "blue" (which automatically means a negative value for "red") = a polling result to the LEFT of the actual result. We could call this a "Liberal/Democratic mathematical bias".

Conversely, a positive value for "red" (which automatically means a negative value for "blue") = a polling result to the RIGHT of the actual result. We could call this a "Conservative/Republican mathematical bias".

The column next to "DIFF:", "Simplified", simply rounds to a whole number, since most polling results are published in whole numbers. But I do the rounding only after calculating the result to 1/100th of a percent, which keeps things more accurate.

Now, look at the values calculated for the individual polls above in the Wisconsin table. The HIGHER a difference number, then the farther away that poll was from reality. Put simply:

Low number = good, high number = bad.
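
The DIFF and Simplified columns can be reproduced with a couple of lines of arithmetic. Here is a minimal sketch in Python - the function names are my own, purely for illustration; margins are signed, with a positive number meaning an Obama lead:

```python
def diff(poll_margin, actual_margin):
    """DIFF: the signed gap between a poll's margin and the actual margin.

    Margins are signed: positive = Obama (blue) lead, negative = Romney (red) lead.
    A negative DIFF means the poll sat to the RIGHT of the actual result;
    a positive DIFF means it sat to the LEFT.
    """
    return round(poll_margin - actual_margin, 2)


def simplified(poll_margin, actual_margin):
    """Round to a whole number, but only AFTER computing DIFF to 1/100th of a percent."""
    return round(diff(poll_margin, actual_margin))


# Wisconsin 2012: Obama won by +6.94
actual = 6.94
print(diff(4.52, actual))        # one-week polling average: -2.42 (2.42 points to the RIGHT)
print(simplified(4.52, actual))  # -2
print(diff(7.0, actual))         # Angus Reid, Obama +7: 0.06
print(simplified(0.0, actual))   # Rasmussen's 49-49 tie: -7
```

Rounding only at the very end is what keeps a -2.42 from quietly becoming a -3.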

In the case of Wisconsin, two pollsters absolutely nailed it: WAA (We Ask America - R) and Angus Reid (from Canada) take first place, both with essentially a "0" value. Not only did Angus Reid nail the margin, he also nailed the toplines as well, which is a pretty rare feat. Can't get better than that. That makes Angus Reid the absolute top dog in Wisconsin polling.

In close second place are Grove Insight (D) and Marquette (considered the Gold Standard for WI), which were either 1 point to the right or to the left of reality.

A +/-1 difference is certainly within the realm of normal variance: it means that if only 1/2 of one percent had been recorded differently, that poll would have also hit the nail right on the head. Similarly, a +/-2 difference is really not bad at all: it means that the pollster put only 1% in the wrong column, causing a 2-point difference in the margin. These things can happen even when a pollster is using a common-sense model and not putting any fingers on the scales with false weighting of values, etc.
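
The arithmetic behind that claim is simple: every point a pollster records in the wrong column moves the margin by two points, so a 1/2-point mis-recording moves it by one. A tiny illustration in Python, with made-up toplines:

```python
def margin(candidate_a, candidate_b):
    """Signed margin between two toplines."""
    return candidate_a - candidate_b

# Made-up poll: 49-47, a 2-point margin.
print(margin(49, 47))  # 2

# Record just 1 point in the wrong column (49-47 becomes 48-48):
# the margin swings by 2 full points.
print(margin(48, 48))  # 0
```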

However, since the standard margin of error is +/-3.5% and these pollsters make their money claiming accuracy, I consider anything at +/-4 and above to be a disaster. It means that the pollster really did not do its job in that particular state. Remember this commentary, for you are going to see lots and lots of +/-4s and above from a couple of key pollsters across the board.

The big losers in Wisconsin were:

Rasmussen, which claimed a 49-49 tie and was therefore to the RIGHT by a walloping 7 points.

Pulse (R, a subsidiary of Rasmussen, or Ras-Light, as I call it) was off to the RIGHT by 6 points.

Wenzel (which polls for World Net Daily, a birther organization) was off to the RIGHT by 5 points.

NBC and PPP (D) were both to the RIGHT by 4 points. The difference between the two is that PPP showed Obama over 50%, whereas NBC had him below the 50 mark. This also dispels the notion that PPP (D) and NBC only produced results to the LEFT. That is not true, and Wisconsin proves it.

Please also notice that the exit-polling was only off by 1 point to the RIGHT. The exit polling results were more accurate than the end polling average, but the average did indeed point to an Obama win.

Now, how do I condense all of this information onto a simpler, easier-to-read chart?

I take a -1 difference in a state that Obama won and write it as R +1 for that pollster in that state, so the results of the EXCEL table above become easier to navigate, like this:

State Pollster Pollster Margin Actual Margin Bias Bias (simplified)
Wisconsin St. Norbert Poll D +9 D +6,94 D +2,06 D +2
Wisconsin Marquette D +8 D +6,94 D +1,06 D +1
Wisconsin Angus Reid D +7 D +6,94 D +0,06 0
Wisconsin WAA (R) D +6,7 D +6,94 R +0,24 0
Wisconsin Grove (D) / PNA / USAction D +6 D +6,94 R +0,94 R +1
Wisconsin Exit Polling D +5,7 D +6,94 R +1,24 R +1
Wisconsin YouGov D +4 D +6,94 R +2,94 R +3
Wisconsin PPP (D) – FINAL POLL D +3 D +6,94 R +3,94 R +4
Wisconsin NBC / WSJ/ Marist D +3 D +6,94 R +3,94 R +4
Wisconsin Wenzel (R) / Citizens United (R) D +2 D +6,94 R +4,94 R +5
Wisconsin Pulse / Let Freedom Ring (Tea-Party) D +1 D +6,94 R +5,94 R +6
Wisconsin Rasmussen 0 (tie) D +6,94 R +6,94 R +7

The two pollsters that are shaded in light blue are the two that absolutely found the "sweet spot" in the Badger State.


III. The Big List

There were 1,176 state polls recorded in 2012. I added the 31 exit polls, which brings us to 1,207 state polls. Throughout the entire nation, all of the final polling added up to 299 polls, or just about 25% of all polls for the year. You can see that table within the big EXCEL document linked above at the top, or you can also see it all by itself HERE on blogspot, sorted in a number of ways:

In hourglass form, by BIAS value: The top of the table shows the poll with the most errant Democratic BIAS value, descending to "0", then ascends with the Republican BIAS values, ending with the most errant value on the Right. This kind of table helps to see the extremes more easily.
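
Producing that hourglass ordering is just a sort on the signed bias value, from the largest Democratic bias down through zero to the largest Republican bias. A sketch in Python, using a few of the Wisconsin values from section II (Democratic bias stored as positive, Republican as negative):

```python
# (state, pollster, signed bias): positive = bias to the LEFT (Democratic),
# negative = bias to the RIGHT (Republican)
polls = [
    ("Wisconsin", "Rasmussen", -6.94),
    ("Wisconsin", "St. Norbert Poll", 2.06),
    ("Wisconsin", "Angus Reid", 0.06),
    ("Wisconsin", "PPP (D)", -3.94),
]

# Hourglass form: most errant Democratic value on top, descending toward 0,
# then the Republican values growing toward the bottom.
hourglass = sorted(polls, key=lambda row: row[2], reverse=True)

for state, pollster, bias in hourglass:
    side = "D" if bias > 0 else "R"
    print(f"{state}  {pollster}  {side} +{abs(bias):.2f}")
```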

IV. Individual Polling organizations - an evaluation

Here is an in-depth analysis of 13 polling organizations and how they did, with a summary table at the very end.


The two most accurate end-pollsters for 2012 were the exit polls themselves - "conducted by Edison Research of Somerville, N.J., for the National Election Pool, a consortium of ABC News, Associated Press, CBS News, CNN, Fox News and NBC News. The national results are based on voters in 350 randomly chosen precincts across the United States, and include absentee voters and early voters interviewed by telephone" - and PPP (D).

For each pollster, I am providing an individual table (in hourglass form), a map and a quick analysis.

EXIT POLLS:

State Pollster Pollster Margin Actual Margin Bias Bias (simplified)
Montana Exit Polling R +8,8 R +13,65 D +4,85 D +5
Kansas Exit Polling R +19,2 R +21,72 D +2,52 D +3
Maine Exit Polling D +17,8 D +15,29 D +2,51 D +3
New Hampshire Exit Polling D +6,4 D +5,58 D +0,82 D +1
New Jersey Exit Polling D +18,31 D +17,68 D +0,63 D +1
Nevada Exit Polling D +7,3 D +6,68 D +0,61 0
Florida Exit Polling D +1,2 D +0,88 D +0,32 0
Iowa Exit Polling D +6,1 D +5,81 D +0,29 0
Alabama Exit Polling R +22 R +22,19 D +0,22 0
Pennsylvania Exit Polling D +5,4 D +5,38 D +0,02 0
Virginia Exit Polling D +3,8 D +3,87 R +0,07 0
Vermont Exit Polling D +35,5 D +35,60 R +0,10 0
Mississippi Exit Polling R +12 R +11,50 R +0,5 R +1
North Carolina Exit Polling R +2,80 R +2,04 R +0,76 R +1
Michigan Exit Polling D +8,7 D +9,48 R +0,78 R +1
Indiana Exit Polling R +11 R +10,20 R +0,80 R +1
Minnesota Exit Polling D +6,8 D +7,69 R +0,89 R +1
Massachusetts Exit Polling D +22,1 D +23,14 R +1,04 R +1
Ohio Exit Polling D +1,9 D +2,97 R +1,07 R +1
Washington State Exit Polling D +13,6 D +14,78 R +1,18 R +1
Wisconsin Exit Polling D +5,7 D +6,94 R +1,24 R +1
New Mexico Exit Polling D +8,3 D +10,15 R +1,85 R +2
Missouri Exit Polling R +11,3 R +9,38 R +1,92 R +2
Maryland Exit Polling D +24,1 D +26,08 R +1,98 R +2
Connecticut Exit Polling D +15 D +17,33 R +2,33 R +2
Colorado Exit Polling D +2,5 D +5,37 R +2,87 R +3
Arizona Exit Polling R +12 R +9,04 R +2,94 R +3
Illinois Exit Polling D +13,6 D +16,86 R +3,26 R +3
New York Exit Polling D +24,6 D +28,13 R +3,53 R +4
Oregon Exit Polling D +7,5 D +12,09 R +4,59 R +5
California Exit Polling D +17,5 D +23,12 R +5,62 R +6
Bias (ALL): R +0,90
Bias (12 battlegrounds): R +0,58


Of the 31 exit polls conducted, 7 were exactly on target: Nevada, Iowa, Florida, Alabama, Pennsylvania, Virginia and Vermont. The exit polls were slightly to the LEFT in 4 states and very much to the LEFT in 1 state (Montana). They were slightly to the RIGHT in 16 states and very much to the RIGHT in 3 states. In other words, the exit polls were very close to or right on the mark in 27 of 31 states!

It looks this way on a map:



Using the idea of "gold" to mean "gold standard", the gold states are where the exit polling was truly identical to the final results. The exit polling nailed 1/2 of the battleground states and all 31 polls correctly called the actual winner.  Again, the colors you see above are not for who won each state, but rather, how much the exit polls diverged from the actual results. Romney won Kansas, but the exit polling was just ever so slightly to the LEFT of the actual results. Obama won Wisconsin, but the exit polling was just ever so slightly to the RIGHT of the actual results.

Of course, exit polling has it easier: it does not have to sift out "undecided" voters, for once votes are cast, there are no undecideds in the room. That being said, a lot of noise was made in 2004 about how bad the exit polling supposedly was, but in 2012, the exit polling was EERILY good. In many cases, even the toplines were perfectly matched. See: Pennsylvania and Virginia.

Please note the BIAS average for all 31 states: R +0.90% and only R +0.58% for the 12 battleground states (Florida, North Carolina, Ohio, Virginia, Colorado, Pennsylvania, New Hampshire, Iowa, Nevada, Wisconsin, Minnesota and Michigan)
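
As far as I can tell, those composite BIAS rows are straight averages of the signed per-state differences, which means one strongly biased state can pull the figure noticeably. A sketch in Python - the sample below is only a subset of the exit-poll table above, with Democratic bias positive and Republican negative:

```python
def composite_bias(signed_biases):
    """Mean of the signed per-state biases.

    Positive result = net Democratic (LEFT) lean,
    negative result = net Republican (RIGHT) lean.
    """
    return sum(signed_biases) / len(signed_biases)

# Subset of the 2012 exit-poll biases:
# MT +4.85, KS +2.52, NH +0.82, VA -0.07, WI -1.24, CA -5.62
sample = [4.85, 2.52, 0.82, -0.07, -1.24, -5.62]
print(round(composite_bias(sample), 2))  # 0.21 -- a slight net Democratic lean for this subset
```

Across the full 31-state table the same calculation tips the other way, to the R +0.90 noted above.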


PPP (D):

PPP (D), a polling outfit from North Carolina, did most of the early polling in 2011 and covered all of the battleground states plus many more in 2012. Of the 12 battleground states, PPP called all of them correctly, except for North Carolina, where PPP called a tie. A tie is automatically a mis-call, because on election night, there are no ties: someone will end up winning the state.

State Pollster Pollster Margin Actual Margin Bias Bias (simplified)
Montana PPP (D) – FINAL POLL R +7 R +13,65 D +6,65 D +7
Missouri PPP (D) R +7 R +9,38 D +3,38 D +3
Arizona PPP (D) – FINAL POLL R +7 R +9,04 D +2,04 D +2
North Carolina PPP (D) – Final Poll 0 R +2,04 D +2,04 D +2
Ohio PPP (D) – FINAL POLL D +5 D +2,97 D +2,03 D +2
Colorado PPP (D) – Final Poll D +6 D +5,37 D +0,63 D +1
Pennsylvania PPP (D) – FINAL POLL D +6 D +5,38 D +0,62 D +1
Maine PPP (D) – FINAL POLL D +15 D +15,29 D +0,29 0
Virginia PPP (D) – FINAL POLL D +4 D +3,87 D +0,13 0
Florida PPP (D) – Final Poll D +1 D +0,88 D +0,12 0
Minnesota PPP (D) / FINAL POLL D +8 D +7,69 R +0,31 0
New Mexico PPP (D) D +9 D +10,15 R +1,15 R +1
Nevada PPP (D) – Final Poll D +4 D +6,68 R +2,68 R +3
Michigan PPP (D) – FINAL POLL D +6 D +9,48 R +3,48 R +3
New Hampshire PPP (D) – FINAL POLL D +2 D +5,58 R +3,58 R +4
DC PPP (D) / Washington City Paper/ Kojo Nnamdi Show D +80 D +83,63 R +3,63 R +4
Iowa PPP (D) – FINAL POLL D +2 D +5,81 R +3,81 R +4
Wisconsin PPP (D) – FINAL POLL D +3 D +6,94 R +3,94 R +4
Connecticut PPP (D) – FINAL POLL D +13 D +17,33 R +4,33 R +4
Oregon PPP (D) / LCV – FINAL POLL D +6 D +12,09 R +6,09 R +6
Washington State PPP (D) – FINAL POLL D +7 D +14,78 R +7,78 R +8
Massachusetts PPP (D) – FINAL POLL D +15 D +23,14 R +8,14 R +8
Bias (ALL): R +1,41
Bias (12 battlegrounds): R +1,01

PPP had absolute bullseyes in four states: Maine, Virginia, Florida and Minnesota. It was one of the only pollsters to correctly call Florida.

Of the 22 states where PPP had end-polls, it was within +/-2 in 10 of them. You can also see that where PPP, a Democratic-leaning polling outfit, erred most, it erred with a Conservative bias, not a Liberal one. Let's tear that apart some: PPP was off by 4 to the RIGHT in DC, but Obama won DC by almost 84 points; being off by 4 there is like being off by 0.1 in a tight state like Florida. Likewise, in Montana, PPP was off by 7 to the LEFT, which seems surprising until you see that even Rasmussen was off to the LEFT in Montana. Pretty much everyone was off to the LEFT in that state. But in critical battlegrounds, all of which PPP correctly called, it was off to the RIGHT by 4: WI, IA, NH. Even a pollster as good as PPP was using too restrictive a model for calculating results.

Still, in spite of this, PPP was without a doubt the best overall pollster of 2012. It ended up with a mathematical bias of R +1.41% across all 22 states, but only R +1.01 for the 12 battlegrounds - pretty much where it was in 2008. Remember that the next time someone says that PPP's results are way too liberal: 11 of these 22 polls had a bias to the RIGHT, not to the LEFT!

Here is how PPP's mathematical bias looks on a map:


Rasmussen:

In direct contrast to PPP (D), Rasmussen was without a doubt the worst pollster of the 2012 season, if only by virtue of the fact that it miscalled 6 of the 12 battleground states: OH, FL, VA, IA, WI and CO!

State Pollster Pollster Margin Actual Margin Bias Bias (simplified)
North Dakota Rasmussen R +14 R +19,62 D +5,62 D +6
Montana Rasmussen R +10 R +13,65 D +3,65 D +4
Indiana Rasmussen R +9 R +10,20 D +1,20 D +1
Arizona Rasmussen / CBS 5 R +8 R +9,04 D +1,04 D +1
New Mexico* Rasmussen (10/14/2012) D +11 D +10,15 D +0,85 D +1
Pennsylvania* Rasmussen D +5 D +5,38 R +0,38 0
Missouri* Rasmussen R +11 R +9,38 R +1,62 R +2
Washington State* Rasmussen D +13 D +14,78 R +1,78 R +2
Minnesota Rasmussen D +5 D +7,69 R +2,69 R +3
Florida* Rasmussen R +2 D +0,88 R +2,88 R +3
Ohio Rasmussen – Final Poll 0 D +2,97 R +2,97 R +3
New Hampshire Rasmussen D +2 D +5,58 R +3,58 R +4
North Carolina* Rasmussen R +6 R +2,04 R +3,96 R +4
Massachusetts Rasmussen D +19 D +23,14 R +4,14 R +4
Michigan Rasmussen D +5 D +9,48 R +4,48 R +4
Nevada* Rasmussen D +2 D +6,68 R +4,68 R +5
Virginia Rasmussen R +2 D +3,87 R +5,87 R +6
Iowa Rasmussen R +1 D +5,81 R +6,81 R +7
Wisconsin Rasmussen 0 (tie) D +6,94 R +6,94 R +7
Colorado Rasmussen R +3 D +5,37 R +8,37 R +8
Connecticut Rasmussen D +7 D +17,33 R +10,33 R +10
Bias (ALL): R +2,81
Bias (12 battlegrounds): R +4,50

Even a broken watch is right twice a day, and so it was with Rasmussen: it did hit the nail on the head in Pennsylvania, but that was pretty much it. Rasmussen called 49-49 ties in both Ohio and Wisconsin, both of which Obama won. It called Colorado, Iowa, Virginia and Florida for Romney: Obama won all four of those. In total, in 6 states, worth 85 electoral votes (15.8% of the electoral college), Rasmussen completely missed it. Also, in North Carolina, Romney's narrowest state of the night, Rasmussen was off by - you guessed it - 4 points to the RIGHT.

Of the 21 states where Rasmussen had end-polls, there was mathematical bias to the RIGHT in 15 of them, and 10 of those 15 had a mathematical bias of +4 or more to the RIGHT. And 4 of the 5 states where Rasmussen had a mathematical bias to the LEFT were landslide GOP states (New Mexico being the lone exception).

The end results are quite obvious: of all 21 states, Rasmussen has a mathematical bias of R +2.81%, which jumps to R +4.50% in the 12 battlegrounds combined! This is the third election cycle in a row where Rasmussen has put out such terribly inaccurate results, and once again, to the RIGHT on the order of 4 points!

Probably the thing that irks most about Rasmussen is the absolute dishonesty: where PPP says quite clearly that it is a Democratic polling organization, Rasmussen claims to be neutral, when in reality Scott Rasmussen is constantly giving speeches, doing write-ups and accepting contract orders from the most right-wing of organizations. The wording of his questions has also raised eyebrows more than once, for it is intended to stoke Conservative emotion. It gets worse: PPP releases all of its internals; Rasmussen releases none.

Gravis (R):

Gravis (R), something of a mini-Rasmussen copycat, had fewer end-polls than Rasmussen but has the honor of being the only polling organization whose every single end-poll was off to the RIGHT. No bullseyes for Gravis (R) and certainly no results to the Left:

State Pollster Pollster Margin Actual Margin Bias Bias (simplified)
Florida Gravis (R) - FINAL POLL 0 D +0,88 R +0,88 R +1
Iowa Gravis (R) D +4 D +5,81 R +1,81 R +2
North Carolina Gravis (R) -Final Poll R +4 R +2,04 R +1,96 R +2
Ohio Gravis (R) - FINAL POLL D +1 D +2,97 R +1,97 R +2
Pennsylvania Gravis (R) - FINAL POLL D +3 D +5,38 R +2,38 R +2
New Hampshire Gravis (R) - FINAL POLL D +1 D +5,58 R +4,58 R +5
Nevada* Gravis (R) D +1 D +6,68 R +5,68 R +6
Bias (ALL): R +2,86
Bias (7 battlegrounds): R +2,86

Gravis polled only battleground states, and only 7 of the 12, so its overall bias and its battleground bias are, of course, identical. That being said, its mathematical bias for the battlegrounds is less extreme than Rasmussen's. And where Rasmussen miscalled 6 battlegrounds, Gravis miscalled only one: Florida (remember, a tie is automatically a mis-call).

Gravis started the year by publishing results down to 1/100th of a percentage point, but as the election drew near, it moved to whole numbers.

Here is how Gravis' mathematical bias looks on a map:



I bet Gravis (R) is having a pretty deep internal conversation over how badly it must have miscalculated the Latino vote in Nevada, for at the end of the day, Nevada was not a very close race at all. Obama won Nevada by +6.68%, a considerably higher margin than McCain's +5.20% win in Georgia in 2008, and yet no one really thought that Obama was going to take Georgia in 2008...

Grove (D):

A good mirror image of Gravis (R) is Grove (D), which end-polled 9 of the battlegrounds:

State Pollster Pollster Margin Actual Margin Bias Bias (simplified)
North Carolina* Grove (D) / PNA D +3 R +2,04 D +5,04 D +5
Ohio Grove Insight (D) / PNA / USAction D +4 D +2,97 D +1,03 D +1
Florida Grove (D) / PNA D +1 D +0,88 D +0,12 0
Nevada Grove Insight (D) / Project New America D +6 D +6,68 R +0,68 R +1
Wisconsin Grove (D) / PNA / USAction D +6 D +6,94 R +0,94 R +1
Colorado Grove New Insight (D) / PNA D +3 D +5,37 R +2,37 R +2
Michigan Grove (D) / PNA / USAction D +7 D +9,48 R +2,48 R +2
New Hampshire* Grove (D) / PNA D +3 D +5,58 R +2,58 R +3
Iowa Grove Insight (D) / PNA / USAction D +3 D +5,81 R +2,81 R +3
Bias (ALL): R +0,67
Bias (9 battlegrounds): R +0,67

Grove (D) was one of the few pollsters to nail Florida, but, like Gravis, it missed one state: North Carolina, and that by a substantial bias of 5 points to the LEFT. That being said, of the 9 states polled by Grove (D), 6 produced a Conservative bias, and at the end of the day, Grove posted one of the smallest overall biases: R +0.67. Notice that not even the most Democratic of pollsters finished the year with a Democratic bias.

This is how Grove (D) looks on a map:



Grove was one of the few pollsters to have a very slight bias in most of its states, excepting North Carolina, where it really missed the call big-time.

Back to the mini-Rasmussens:

Pulse (R) for LET FREEDOM RING (Tea Party):

A Tea Party organization contracted Pulse (R), which is a fully owned and operated subsidiary of Rasmussen that uses exactly the same software and mainframe as its parent company, to do polls of 5 critical battleground states:

State Pollster Pollster Margin Actual Margin Bias Bias (simplified)
Ohio Pulse (R) / Let Freedom Ring (Tea Party) D +3 D +2,97 D +0,03 0
Pennsylvania Pulse / Let Freedom Ring (Tea-Party) D +3 D +5,38 R +2,38 R +2
Virginia Pulse (R) / Let Freedom Ring (Tea Party) D +1 D +3,87 R +2,87 R +3
Florida Pulse / Let Freedom Ring (Tea-Party) R +2 D +0,88 R +2,88 R +3
Wisconsin Pulse / Let Freedom Ring (Tea-Party) D +1 D +6,94 R +5,94 R +6
Bias (ALL): R +2,60
Bias (5 battlegrounds): R +2,60

These results are fascinating and this would be a good point to stop and say that Rasmussen has some 'splainin' to do, for we see that Pulse (R) absolutely nailed it in the most critical battleground of all: Ohio. Pulse miscalled Florida, just as Rasmussen did, but it got Pennsylvania and Virginia right. In Wisconsin, it was just 1 point away from the same miscall as Rasmussen made (Ras called a 49-49 tie in the Badger state, which Obama won by almost 7 points). So, the question is: how can two pollsters, using exactly the same equipment and methodology and conducting the poll in almost exactly the same time frame, come up with such different results in Ohio?

To their credit, this Tea Party organization decided to publish the Ohio numbers and not hide them or anything like that.

Here is how Pulse (R) looks on a map:



Notice that LET FREEDOM RING stayed east of the Mississippi river. I don't know why for sure, but I bet that finances played the larger role in this decision.

SUSA:

Survey USA (SUSA) was much quieter in 2012 than it was in 2008, when it put out 2 complete 50-state polling tables. This time around, SUSA really limited its polling and gave no reason why. That being said, SUSA did put out 11 end-polls, and its mathematical bias is outstanding:

State Pollster Pollster Margin Actual Margin Bias Bias (simplified)
Minnesota SUSA (KSTP) D +11 D +7,69 D +3,31 D +3
Missouri SUSA R +7 R +9,38 D +2,38 D +2
Ohio SUSA – Final Poll D +5 D +2,97 D +2,03 D +2
New York SUSA D +29 D +28,13 D +0,87 D +1
Georgia SUSA R +8 R +7,81 R +0,19 0
Washington State SUSA D +14 D +14,78 R +0,78 R +1
Florida SUSA 0 D +0,88 R +0,88 R +1
Nevada SUSA D +4 D +6,68 R +2,68 R +3
North Carolina SUSA R +5 R +2,04 R +2,96 R +3
New Jersey* SUSA D +14 D +17,68 R +3,68 R +4
Connecticut SUSA D +13 D +17,33 R +4,33 R +4
Bias (ALL): R +0,67
Bias (4 battlegrounds): R +0,75

We see that SUSA nailed the results in Georgia and that its overall bias is only R +0.67%. With a figure like that, we could claim that SUSA was the best pollster of all, but SUSA polled only 4 of the 12 battlegrounds (Ohio received the most polling from SUSA over the year), and where there was bias, it was inconsistent: SUSA was the only pollster to show a real Liberal bias in Minnesota and one of the few to show one in Missouri. However, its polling in New York, following Hurricane Sandy, was exceptional. SUSA claims to be totally neutral, and based on the wording of its questions and its willingness to take on contracts from all sides of the political spectrum, I take SUSA at its word. What a shame: I wish SUSA had polled all of the states at least twice, as it did in 2008.

Here is how SUSA's mathematical bias looks on a map:

Indeed, a very interesting mix of mathematical bias.

We Ask America (WAA):

"We Ask America", out of Illinois, got started in 2010 and focused a lot on the Mark Kirk senatorial race in that state. It was at that time, in my opinion, a rabidly Right-Wing oriented polling outlet. But it appears that the company has gone more mainstream with time. WAA put out end-polls in 9 states, 6 of them battlegrounds, and the results were indeed fascinating:

State Pollster Pollster Margin Actual Margin Bias Bias (simplified)
Nebraska WAA (R) R +13 R +21,78 D +8,78 D +9
Ohio WAA (R) D +4 D +2,97 D +1,03 D +1
Wisconsin WAA (R) D +6,7 D +6,94 R +0,24 0
Illinois WAA (R) D +16 D +16,86 R +0,86 R +1
Florida WAA (R) R +0,9 D +0,88 R +1,78 R +2
Colorado WAA (R) D +3,4 D +5,37 R +1,97 R +2
Missouri WAA (R) R +11,6 R +9,38 R +2,22 R +2
Virginia WAA (R) D +0,9 D +3,87 R +2,97 R +3
Iowa WAA (R) D +2 D +5,81 R +3,81 R +4
Bias (ALL): R +0,44
Bias (6 battlegrounds): R +1,67

To its credit, WAA nailed it in Wisconsin, and 6 of its 9 polled states were within +/-2 in bias. It actually had a +1 Liberal bias in Ohio but a +1 Conservative bias in Illinois - but remember, +/-1 is really, really good work. The truly surprising result was the massive Liberal bias in GOP bedrock Nebraska, for which there is no really good accounting. But that one massive outlier kept WAA's mathematical bias close to 0: R +0.44 overall and R +1.67% for the battlegrounds. There is no doubt that WAA was a more accurate pollster in 2012 than Rasmussen. If I were a conservative, I would pay more attention to WAA's numbers in 2014 and 2016 than to Rasmussen's faulty math.

Here is how WAA looks on a map:

I think that is the most fascinating bias-map of 2012.

Here are two organizations that the Right loved to lambaste as liberal-heavy. Let's see how they did:

NBC/WSJ/Marist:

NBC, the Wall Street Journal and Marist polling (out of New York State) combined their efforts once again and put out a considerable amount of polling for 2012. Their final polls were of 8 battlegrounds:

State Pollster Pollster Margin Actual Margin Bias Bias (simplified)
Ohio NBC / WSJ / Marist D +6 D +2,97 D +3,03 D +3
Florida NBC / WSJ / Marist D +2 D +0,88 D +1,12 D +1
Iowa NBC / WSJ/ Marist D +6 D +5,81 D +0,19 0
Virginia NBC / WSJ / Marist D +1 D +3,87 R +2,87 R +3
New Hampshire NBC / WSJ/ Marist D +2 D +5,58 R +3,58 R +4
Nevada NBC / WSJ / Marist D +3 D +6,68 R +3,68 R +4
Wisconsin NBC / WSJ/ Marist D +3 D +6,94 R +3,94 R +4
Colorado* NBC / WSJ / Marist 0 D +5,37 R +5,37 R +5
Bias (ALL): R +2,00
Bias (8 battlegrounds): R +2,00

NBC/WSJ/Marist absolutely nailed the margin in Iowa, a state where many polls carried a heavy right-leaning bias. An interesting point: though very off in Ohio (3 points to the LEFT), 5 of these 8 end-polls have a bias to the RIGHT, and notice the composite bias: R +2.00. Why, that is more to the RIGHT than WAA and only slightly to the left of Pulse! NBC missed the call in Colorado (a tie is automatically a mis-call). The next time people want to claim that NBC is a pure liberal outfit, remind them of these numbers!

Here is how NBC/WSJ/Marist looks on a bias map:



Hand in hand with NBC is often Quinnipiac:

Quinnipiac (NYT/CBS):

State Pollster Pollster Margin Actual Margin Bias Bias (simplified)
Ohio Quinnipiac / NYT / CBS D +5 D +2.97 D +2.03 D +2
Florida Quinnipiac / NYT / CBS D +1 D +0.88 D +0.12 0
New York* Quinnipiac D +28 D +28.13 R +0.13 0
Virginia Quinnipiac / NYT / CBS D +2 D +3.87 R +1.87 R +2
Connecticut Quinnipiac D +14 D +17.33 R +3.33 R +3
New Jersey* Quinnipiac D +8 D +17.68 R +9.68 R +10
Bias (ALL): R +2.17
Bias (3 battlegrounds): 0

As is the case with Marist, Quinnipiac has teamed up with a major newspaper and a major news network (the NYT and CBS) to produce polling. There were 6 end polls; 3 were from battleground states.

Two big points here:

-Quinnipiac absolutely nailed the margin in Florida and New York.

-Though it has a conservative bias of R +2.17 for the 6 polls combined, among the three battlegrounds (Ohio, Florida and Virginia) its bias is exactly zero. This is the only pollster to have achieved that mark, but then again, Quinnipiac only put out results for 3 of the 12 battlegrounds. What "skewed" the overall result were the terrible poll numbers out of New Jersey (R +10!), a miss that happened to a number of pollsters, most likely because of Hurricane Sandy.

But just as was the case with Marist, no Liberal bias to be seen here!

Here is how Quinnipiac looks on a bias-map:



There are two new polling organizations that put out lots of results for 2012: Mellman (D) and Pharos.


Mellman (D):

State Pollster Pollster Margin Actual Margin Bias Bias (simplified)
Ohio Mellman (D) / AUC D +5 D +2.97 D +2.03 D +2
Florida Mellman (D) / AUC – Final Poll D +2 D +0.88 D +1.12 D +1
Nevada Mellman (D) / AUC D +6 D +6.68 R +0.68 R +1
Virginia Mellman (D) / AUC – Final Poll D +3 D +3.87 R +0.87 R +1
Iowa Mellman (D) / AUC D +2 D +5.81 R +3.81 R +4
Bias (ALL): R +0.60
Bias (5 battlegrounds): R +0.60

Mellman (D) did considerable polling in the above 5 battleground states, most of all in Florida. There were no bullseyes, but there were also no mis-calls, and three of the five end-polls are within +/-1, which is very, very good. At a composite R +0.60 bias, Mellman was one of the best, though it did not poll nearly as many states as PPP or Rasmussen. I am pretty sure that we will be seeing a lot of Mellman in 2016.

Here is how Mellman (D) looks on a bias-map:


This map will probably provide a pretty good baseline for 2016, to see where this new pollster goes.


Pharos:

We have a real problem with Pharos. It put out a slew of polls at the beginning of October, but only 6 end-polls, and the combination of said end-polls is just plain old weird. Look:

State Pollster Pollster Margin Actual Margin Bias Bias (simplified)
Montana Pharos Research R +7.7 R +13.65 D +5.95 D +6
North Dakota Pharos Research R +17 R +19.62 D +2.62 D +3
Nebraska Pharos Research R +20 R +21.78 D +1.78 D +2
Ohio Pharos Research D +2.7 D +2.97 R +0.27 0
Pennsylvania* Pharos Research D +3.94 D +5.38 R +1.44 R +1
Indiana Pharos Research R +13 R +10.20 R +2.80 R +3
Bias (ALL): D +1.16
Bias (2 battlegrounds): R +0.50


Only 6 of the roughly 20 states that Pharos polled right before the first debate in October then received final polls. Of those 6, only two are battleground states: OH and PA. And indeed, Pharos nailed it in Ohio, but was way off to the LEFT in Montana. Because of so few polls and this combination of bias numbers, Pharos is the only pollster with a composite Liberal bias: D +1.16. But again, when you average the two battlegrounds, there is still a bias to the Right: R +0.50.
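The composite figures appear to be a plain average of the simplified state biases, with D and R pulling in opposite directions. Under that assumption (the sign convention and variable names below are mine), a short sketch reproduces Pharos's split personality: Liberal overall, Conservative in the battlegrounds.

```python
# Hypothetical reconstruction of the composite-bias average: take each
# state's simplified bias (D positive, R negative) and average them.

pharos = {                       # state: (simplified bias, is_battleground)
    "Montana":      (+6, False), # D +6
    "North Dakota": (+3, False), # D +3
    "Nebraska":     (+2, False), # D +2
    "Ohio":         ( 0, True),  # dead on
    "Pennsylvania": (-1, True),  # R +1
    "Indiana":      (-3, False), # R +3
}

def composite(biases):
    vals = list(biases)
    return sum(vals) / len(vals)

overall = composite(b for b, _ in pharos.values())
battleground = composite(b for b, bg in pharos.values() if bg)

print(f"{overall:+.2f}")       # +1.17 (the tables show D +1.16; the source rounds slightly differently)
print(f"{battleground:+.2f}")  # -0.50, i.e. R +0.50 in the battlegrounds
```

The three heavy D outliers in deep-red states drag the overall average left even though both battleground polls leaned right.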

It's a real shame that Pharos did not put out final polls for all 20 or so states, for then we would have had a great baseline to compare. Perhaps Pharos will do this in 2016.

Here is how Pharos looks on a map:



There is one final pollster I want to analyze in detail, and it is completely different from all others: YouGov.

YouGov:

This pollster is out of England and does internet-only polls, which makes its "call-base" completely different. Amazingly, the results were actually very, very good:

State Pollster Pollster Margin Actual Margin Bias Bias (simplified)
Tennessee YouGov R +11 R +20.40 D +9.10 D +9
Indiana YouGov R +7 R +10.20 D +3.20 D +3
Pennsylvania YouGov D +8 D +5.38 D +2.62 D +3
Illinois YouGov D +19 D +16.86 D +2.14 D +2
Arizona YouGov R +8 R +9.04 D +1.04 D +1
North Carolina YouGov R +2 R +2.04 D +0.04 0
Ohio YouGov D +3 D +2.97 D +0.03 0
Georgia YouGov R +8 R +7.81 R +0.19 0
Minnesota YouGov D +7 D +7.69 R +0.69 R +1
Washington State YouGov D +14 D +14.78 R +0.78 R +1
New Hampshire YouGov D +4 D +5.58 R +1.58 R +2
Missouri YouGov R +11 R +9.38 R +1.62 R +2
Virginia YouGov D +2 D +3.87 R +1.87 R +2
Florida YouGov R +1 D +0.88 R +1.88 R +2
Connecticut YouGov D +15 D +17.33 R +2.33 R +2
Michigan YouGov D +7 D +9.48 R +2.48 R +2
Nevada YouGov D +4 D +6.68 R +2.68 R +3
Wisconsin YouGov D +4 D +6.94 R +2.94 R +3
Massachusetts YouGov D +20 D +23.14 R +3.14 R +3
Texas YouGov R +19 R +15.78 R +3.22 R +3
Maryland YouGov D +22 D +26.08 R +4.08 R +4
New Mexico YouGov D +6 D +10.15 R +4.15 R +4
Colorado YouGov D +1 D +5.37 R +4.37 R +4
Iowa YouGov D +1 D +5.81 R +4.81 R +5
New York YouGov D +23 D +28.13 R +5.13 R +5
New Jersey YouGov D +12 D +17.68 R +5.68 R +6
California YouGov D +15 D +23.12 R +8.12 R +8
Bias (ALL): R +1.63
Bias (12 battlegrounds): R +1.75

YouGov polled 27 states and put out end-polls in all 27, including all 12 battlegrounds. It mis-called Florida, so it got as many of the battleground states right as PPP did. It nailed the results in three states (Ohio, North Carolina and Georgia), two of them critical battlegrounds. But 19 of its states showed a Conservative mathematical bias while only 5 showed a Liberal one, making for a composite bias of R +1.63, which becomes R +1.75 across the 12 battlegrounds.

Perhaps the strangest bias extreme was in Tennessee, with D +9 in a Romney landslide state. And YouGov had this same result for the Volunteer State twice in October!

YouGov has been able to do what Zogby still has not figured out: how to put out accurate internet-only polls. Bravo to YouGov for a good first time out of the gate for state polling in a US presidential election.

Here is how YouGov looks on a bias-map:



YouGov was also one of the few pollsters with a large slate of end-polls to have polled California, a state that suffered under a massive Conservative polling bias in 2012. That could partly explain the bad national polling numbers, since California makes up about 11% of the total electorate.

IV. Summary Table

Here are the 13 polling organizations I just analyzed, in a table, alphabetically:

Pollster Total states Total battlegrounds Total correct Battlegrounds correct Total % Battleground % Mis-calls Bias Battleground bias
Exit polling 31 12 31 12 100.00% 100.00% 0 R +0.90 R +0.58
Gravis (R) 7 7 6 6 85.71% 85.71% 1 R +2.86 R +2.86
Grove (D) 9 9 8 8 88.89% 88.89% 1 R +0.67 R +0.67
Marist 8 8 7 7 87.50% 87.50% 1 R +2.00 R +2.00
Mellman (D) 5 5 5 5 100.00% 100.00% 0 R +0.60 R +0.60
Pharos 6 2 6 2 100.00% 100.00% 0 D +1.15 R +0.50
PPP (D) 22 12 21 11 95.45% 91.67% 1 R +1.41 R +1.01
Pulse (R) 5 5 4 4 80.00% 80.00% 1 R +2.60 R +2.60
Quinnipiac 6 3 6 3 100.00% 100.00% 0 R +2.17 0
Rasmussen 21 12 15 6 71.43% 50.00% 6 R +2.81 R +4.50
SUSA 11 4 10 3 90.91% 75.00% 1 R +0.75 R +0.67
WAA (R) 9 6 8 5 88.89% 83.33% 1 R +0.44 R +1.67
YouGov 27 12 26 11 96.30% 91.67% 1 R +1.63 R +1.75

Here they are, according to composite bias:

Pollster Total states Total battlegrounds Total correct Battlegrounds correct Total % Battleground % Mis-calls Bias Battleground bias
Pharos 6 2 6 2 100.00% 100.00% 0 D +1.15 R +0.50
WAA (R) 9 6 8 5 88.89% 83.33% 1 R +0.44 R +1.67
Mellman (D) 5 5 5 5 100.00% 100.00% 0 R +0.60 R +0.60
Grove (D) 9 9 8 8 88.89% 88.89% 1 R +0.67 R +0.67
SUSA 11 4 10 3 90.91% 75.00% 1 R +0.75 R +0.67
Exit polling 31 12 31 12 100.00% 100.00% 0 R +0.90 R +0.58
PPP (D) 22 12 21 11 95.45% 91.67% 1 R +1.41 R +1.01
YouGov 27 12 26 11 96.30% 91.67% 1 R +1.63 R +1.75
Marist 8 8 7 7 87.50% 87.50% 1 R +2.00 R +2.00
Quinnipiac 6 3 6 3 100.00% 100.00% 0 R +2.17 0
Pulse (R) 5 5 4 4 80.00% 80.00% 1 R +2.60 R +2.60
Rasmussen 21 12 15 6 71.43% 50.00% 6 R +2.81 R +4.50
Gravis (R) 7 7 6 6 85.71% 85.71% 1 R +2.86 R +2.86

And here they are, according to Battleground State bias:

Pollster Total states Total battlegrounds Total correct Battlegrounds correct Total % Battleground % Mis-calls Bias Battleground bias
Quinnipiac 6 3 6 3 100.00% 100.00% 0 R +2.17 0
Pharos 6 2 6 2 100.00% 100.00% 0 D +1.15 R +0.50
Exit polling 31 12 31 12 100.00% 100.00% 0 R +0.90 R +0.58
Mellman (D) 5 5 5 5 100.00% 100.00% 0 R +0.60 R +0.60
Grove (D) 9 9 8 8 88.89% 88.89% 1 R +0.67 R +0.67
SUSA 11 4 10 3 90.91% 75.00% 1 R +0.75 R +0.67
PPP (D) 22 12 21 11 95.45% 91.67% 1 R +1.41 R +1.01
WAA (R) 9 6 8 5 88.89% 83.33% 1 R +0.44 R +1.67
YouGov 27 12 26 11 96.30% 91.67% 1 R +1.63 R +1.75
Marist 8 8 7 7 87.50% 87.50% 1 R +2.00 R +2.00
Pulse (R) 5 5 4 4 80.00% 80.00% 1 R +2.60 R +2.60
Gravis (R) 7 7 6 6 85.71% 85.71% 1 R +2.86 R +2.86
Rasmussen 21 12 15 6 71.43% 50.00% 6 R +2.81 R +4.50

No matter how you slice it, Rasmussen comes out on the bottom, and considering the number of polls it put out, PPP comes out on top. End of story.
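The two orderings above fall out of a simple sort on a signed bias value (D negative, R positive, so a Liberal-leaning composite sorts first, just as in the tables). A small sketch using a few rows copied from the summary table:

```python
# Sketch: reproduce the orderings of the summary tables by sorting on a
# signed bias. Convention: Liberal (D) bias negative, Conservative (R)
# positive, so ascending order puts the D-leaning pollster first.

pollsters = [
    # (name, composite bias, battleground bias) - values from the table above
    ("Pharos",      -1.15, +0.50),
    ("WAA (R)",     +0.44, +1.67),
    ("Mellman (D)", +0.60, +0.60),
    ("PPP (D)",     +1.41, +1.01),
    ("Quinnipiac",  +2.17,  0.00),
    ("Rasmussen",   +2.81, +4.50),
]

by_composite = sorted(pollsters, key=lambda p: p[1])
by_battleground = sorted(pollsters, key=lambda p: p[2])

print(by_composite[0][0])      # Pharos: the only composite Liberal bias
print(by_battleground[-1][0])  # Rasmussen: worst battleground bias
```

Either key puts Rasmussen at or near the bottom, which is exactly the point of the tables.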

Coming up: individual battleground state analyses, in depth. But this pretty much clears up the question of who was really good in 2012 and who was really bad.

And please notice that even the most Democratic-friendly pollsters had a composite Conservative bias on average. Wow.
