Thursday, September 16, 2004

Pew Research is blatantly and obviously spinning its poll in an attempt to favor John Kerry.

The news that Pew's poll had the race as a virtual "dead heat" was widely reported today, for example at Yahoo News:

The first of two national polls by Pew, done Sept. 8-10, reflected the president's post-convention bounce. Bush was ahead of Democrat John Kerry 52-40 among registered voters and by an even wider margin, 54-39, among likely voters, a narrower group.

By the second poll, done Sept. 11-14, the Bush lead had evaporated. In that poll, Bush and Kerry were knotted at 46 percent among registered voters. Among likely voters, Bush was at 47 percent and Kerry at 46 percent.

Pew's contention is that within the space of 3 days, with the 9/11 ceremonies going on, Rathergate erupting, and Kerry talking in favor of gun control, Bush lost 12 points of his lead among registered voters and 14 points among likely voters??? Are they nuts?

Every poll has the potential to be an "outlier": a poll which, through random bad luck, has an unrepresentative sample. The larger the sample, the more these things even out; the smaller the sample, the more volatility. Obviously, larger samples are more reliable than smaller ones.
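To put a rough number on that sample-size point: under simple random sampling, the 95% margin of error shrinks with the square root of the sample size. Here's a back-of-the-envelope sketch in Python (the sample sizes are just illustrative, and real polls use design weights that widen the error somewhat):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a simple random sample
    of size n, taken at the worst case p = 0.5."""
    return z * math.sqrt(p * (1 - p) / n)

# A single three-day wave vs. a combined six-day sample:
print(round(100 * margin_of_error(725), 1))   # smaller sample, larger error: 3.6
print(round(100 * margin_of_error(1450), 1))  # combined sample, smaller error: 2.6
```

Halving the sample doesn't double the error, but it does make each individual wave noticeably noisier than the combined poll would be.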

With this Pew poll, I guess you could ask whether it was the first sample or the second that had such a problem. Another, perhaps more reasonable thought struck me, though: what if they were each unrepresentative? The first biased toward Bush, the second toward Kerry.

If these errors were roughly equal, this would result in a "true" picture, over a SIX day poll, of a 6-point Bush lead among registered voters and an 8-point lead among likely voters. Quite a difference from a "dead heat", don't you agree?
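The arithmetic behind those figures, using the numbers quoted above, is just an average of the two waves (a simple unweighted average; a real combination would weight by each wave's sample size):

```python
# Topline numbers quoted from the two three-day Pew waves.
rv_wave1 = {"Bush": 52, "Kerry": 40}   # registered voters, 9/8-10
rv_wave2 = {"Bush": 46, "Kerry": 46}   # registered voters, 9/11-14
lv_wave1 = {"Bush": 54, "Kerry": 39}   # likely voters, 9/8-10
lv_wave2 = {"Bush": 47, "Kerry": 46}   # likely voters, 9/11-14

def combine(a, b):
    """Unweighted average of two waves' toplines."""
    return {k: (a[k] + b[k]) / 2 for k in a}

rv = combine(rv_wave1, rv_wave2)       # {'Bush': 49.0, 'Kerry': 43.0}
lv = combine(lv_wave1, lv_wave2)       # {'Bush': 50.5, 'Kerry': 42.5}
print(rv["Bush"] - rv["Kerry"])        # 6-point Bush lead among RVs
print(lv["Bush"] - lv["Kerry"])        # 8-point Bush lead among LVs
```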

Now for the really interesting part. I got to thinking: why would Pew do two three-day polls, one right after the other? Why didn't they combine them to make a larger, more reliable sample? It's not like they are trying to do a "rolling poll" like Rasmussen does. So what made them do this? Is this what they always do?

The answer is a resounding NO! As these pages show, Pew has been doing presidential polls pretty much monthly since at least February of this year. And each of these polls has been over a period of at least 6 days.

Here is the relevant information from August's poll:

Results for the August 2004 News Interest Index survey are based on telephone interviews conducted under the direction of Princeton Survey Research Associates International among a nationwide sample of 1,512 adults, 18 years of age or older, during the period August 5-10, 2004.

Strangely enough, in August they didn't see the need to split the 6 day poll into two "waves" like they did this month.

In July, the same story:

Results for the July 2004 Foreign Policy and Party Images survey are based on telephone interviews conducted under the direction of Princeton Survey Research Associates International among a nationwide sample of 2,009 adults, 18 years of age or older, during the period July 8-18, 2004.

No "waves" in July either.

The same story holds for the June poll, the May poll, and the March poll (They seem to have skipped polling in April.)

These polls were all consistent:

They have been a month or more apart.
They have been taken over a period of at least 6 days.
None of them talked about "waves".

The only Pew poll this year that deviates from this pattern (other than this month's, of course) was in February. That seems to be the first month they polled Bush against Kerry (which makes sense). What is really interesting here, though, is that they conducted two polls during the month. They were EACH taken over 6 days, and they were separated in time by about two weeks. So how were they reported? They were combined!

So given this established history, and the increased variability inherent in smaller sample sizes, why would Pew suddenly decide to commission two polls right in a row, each with half the time and half the sample size of all the other polls this year? (Notice that the "2nd wave" poll of 9/11-14 has a mere 725 registered voters, compared to August's 1166, July's 1568, and June's 1426.)

The answer: they didn't. They commissioned their regular monthly 6-day poll, but saw their chance to manipulate the reporting of the data to show an incredible (both meanings intended) boost in Kerry's support.

After Dan Rather and CBS, the AP's "boos" fiasco, etc., I'm not even surprised anymore.

UPDATE: I noticed that the 725 figure for the recent "3 day" Pew poll was actually likely voters. They had 1002 registered voters, with an additional 970 for the 9/8-10 period.
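With the registered-voter counts from this update, a sample-size-weighted average of the two waves can be sketched as follows (Pew's actual weighting procedure is more involved, so this is only illustrative):

```python
# Registered-voter counts from the update: 970 RVs in the 9/8-10 wave,
# 1002 RVs in the 9/11-14 wave, with each wave's quoted toplines.
n1, n2 = 970, 1002
bush  = (52 * n1 + 46 * n2) / (n1 + n2)
kerry = (40 * n1 + 46 * n2) / (n1 + n2)
print(round(bush), round(kerry))   # roughly 49 and 43
```

Weighting by sample size barely moves the unweighted average, since the two waves were close in size: still roughly a 6-point Bush lead among registered voters.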

They interviewed 2494 adults this month (all six days), 1512 last month, 2009 in July, and 1806 in June. I don't know if this increase is due to more calls this month, or if they simply had an abnormally high response rate.

UPDATE 2: Welcome to Musil and Kausfile readers.

Via a David Hoberg thread on polls comes this page at Pew. It certainly raises some problems with the contention that there were two polls instead of one. For instance, in the header is this gem:


and this:

SEPTEMBER 8 - 13, 2004 TOTAL N=2003; RV N=1580

Note that it says "survey" in the singular, not the plural.

And finally, the true results of the September poll, which they were so determined to avoid reporting:

September, 2004:
Bush/Lean Bush:    49  (Strongly 33, Mod. 15, DK Only 1)
Kerry/Lean Kerry:  43  (Strongly 22, Mod. 20, DK Only 1)
Nader/Lean Nader:   1
Don't Know:         7
(Total: 100)

In other words, there was a 49 to 43 Bush lead for September's Pew Survey.