Something Rotten In The State Of Political Polling: Has Strategic Vision Been Making Up Polling Data?
-by Doug Kahn
Over at Pollster.com today, Mark Blumenthal will probably explain why President Obama's 'favorable' rating is suddenly five percentage points better than it was yesterday. On Monday, Steve Singiser of DailyKos argued that a 54.7%–37.5% favorability rating for Obama was no disaster, nothing portending a rout of House Democrats running for reelection, because he had a similar rating just before the November election. He used this chart, which no longer exists on Pollster.com. So the President is plus-17% in the polls. Except today, the Pollster.com chart shows the President with a 56.8%–34.6% rating, or plus-22%. The difference is that today's chart is missing about 200,000 voters Rasmussen polled over the past 11 months.
Mark is one of the internet's top polling analysts, and recently he's been commenting on the controversy surrounding polling results released by Strategic Vision, a Republican consulting group. It seems possible they've been making up their results. Nate Silver of fivethirtyeight.com found anomalies in their published figures that make it clear they couldn't be the result of random polling. The final digits of their reported percentages weren't uniformly distributed: 1, 2, and 3 showed up more often than chance allows. So someone was just sitting around making up numbers.
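Here's a minimal sketch, in Python, of the kind of trailing-digit test that catches this sort of thing. It is not Silver's actual code, and the sample numbers below are invented for illustration; the idea is simply that honest sampling noise makes final digits roughly uniform, and a chi-squared test flags big departures.

```python
# A minimal sketch (not Nate Silver's actual analysis): test whether the
# final digits of published poll percentages are plausibly uniform.
from collections import Counter
from scipy.stats import chisquare

def trailing_digit_test(percentages):
    """Chi-squared test of the final digits of integer poll results."""
    digits = [p % 10 for p in percentages]
    counts = Counter(digits)
    observed = [counts.get(d, 0) for d in range(10)]
    # Null hypothesis: each digit 0-9 is equally likely.
    return chisquare(observed)

# Invented numbers from a hypothetical fabricator who favors 1, 2, and 3:
fake = [51, 42, 33, 61, 23, 52, 41, 31, 22, 53, 43, 12, 32, 21, 13, 62]
stat, p = trailing_digit_test(fake)
print(f"chi-squared = {stat:.1f}, p = {p:.5f}")  # tiny p: digits aren't uniform
```

Run on enough published results, a vanishingly small p-value says the digits almost certainly didn't come from genuine polling.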
Someone pretty dumb, that is. Anyone who makes even a tiny effort to examine his/her own thinking will understand we all have favorite numbers. What's your favorite number from 0 to 9? And how could you possibly have a favorite number, anyway; are they like people? (I hope no one comes and takes me away for this, but to me all 10 digits have distinct personalities.) All of us have a certain amount of magical thinking going on upstairs. Feel free to discount this conclusion about all of us; my logic is compromised by my belief in magic. Or not.
Anyway, the designated charlatan over at Strategic Vision didn't grok that the very intention of making up random results keeps the numbers you produce from being random. Maybe your favorite number or numbers will come up more often, and maybe you'll make an effort to compensate for that, but your compensation won't be random either. Even computerized 'random number generators' can't provide truly random results. What could be more obvious: you can't be intentionally or systematically random.
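That last point about computers is easy to demonstrate: a pseudo-random number generator is a deterministic algorithm, and feeding it the same seed reproduces the identical 'random' sequence. A quick illustration:

```python
import random

# Pseudo-random generators are deterministic algorithms:
# the same seed reproduces the exact same "random" sequence.
random.seed(42)
first_run = [random.randint(0, 9) for _ in range(5)]

random.seed(42)
second_run = [random.randint(0, 9) for _ in range(5)]

print(first_run == second_run)  # True: the sequence only looks random
```

The output passes statistical tests of randomness, but it's produced by a fixed rule, much the way a human fabricator unconsciously follows rules of his own.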
Back to Rasmussen: they report weekly on President Obama's favorability rating among likely voters, as well as on his job approval. The difference between the two somewhat eludes me, but I accept the results of hundreds of polls: we're more likely to feel favorably about someone than we are to approve of the job they're doing. I suppose that means liking them despite their faults. I try to resist that with the President, because when he's not doing a good job, it negatively affects the lives of hundreds of millions of people. Tens of millions of them are on the brink of personal disaster, financially and otherwise. Whether I like the guy had damn well better be vanishingly insignificant.
Let me just say this directly: I believe that some of the polling numbers from Rasmussen Reports are really the result of deliberately skewed polling. I'm not saying they make up the numbers, I'm not saying that at all. I'm just saying their 'methods' make Obama look less popular than he really is. Because they don't like him and his policies, and they don't like Democrats and Democratic policies.
Pollster.com reports the results of all the major polling firms, and then 'aggregates' the numbers in charts showing 18 months' worth of polling. These aggregates are considered reliable, and are very influential. (Steve Singiser wouldn't use them otherwise.) They do this even though each polling company uses somewhat different methodologies and asks questions with slightly different wording. Presumably, aggregation makes erroneous polls ('outliers', in the jargon of polling math) less influential by overwhelming them with more accurate data. I'm skeptical. Lending legitimacy to funky polling by weighting its results equally with more rigorous work is something to be avoided, even if that means excluding certain polling firms.
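A toy simulation makes the worry concrete. This is not Pollster.com's actual model (they fit a trend line to the data); it's just a sketch, with invented numbers, of how one pollster that releases far more often than the others and carries a consistent lean can drag an unweighted aggregate toward itself:

```python
import random
import statistics

random.seed(1)  # reproducible illustration

TRUE_FAVORABLE = 55.0  # the 'real' favorability we pretend to know

def poll(house_effect=0.0, noise=2.0):
    """One poll result: truth, plus any house lean, plus sampling noise."""
    return TRUE_FAVORABLE + house_effect + random.gauss(0, noise)

# Nine unbiased firms release one poll each this month...
others = [poll() for _ in range(9)]
# ...while one firm with a 4-point lean against the President releases weekly.
frequent = [poll(house_effect=-4.0) for _ in range(4)]

print(f"aggregate with the frequent, leaning pollster: "
      f"{statistics.mean(others + frequent):.1f}")
print(f"aggregate without it: {statistics.mean(others):.1f}")
# The leaning firm supplies 4 of 13 data points, so its lean shifts the
# simple average by about 4 * (4/13), roughly 1.2 points.
```

The more often the leaning pollster publishes, the more data points it contributes, and the further the aggregate drifts from what the other nine firms are measuring.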
Not expecting a response, I posted a 'suggestion' on the Pollster.com website:
August 9
"When the results of a single pollster (Rasmussen) among 10 raises the average disapproval figure by 4%(!), it's time to recognize that the single pollster isn't measuring the same question. You need to remove the Rasmussen result."
August 10
"Doug,
Thanks for the feedback. For what it's worth, we've written about this issue previously, especially here and here."
August 10
"Mark,
Having now seen the posts you sent me, and the comments by numerous mathematicians, I understand your reply to me is probably automated. If so, I don't blame you for it.
Nevertheless:
I'm no math whiz, so the following is a guess. Using a regression analysis, as you do, will tend to weight the more numerous data points, like Rasmussen's, more heavily. That is, it has the effect of surmising that Rasmussen is more accurate because it is more frequent. A better method would take brownie points away from Rasmussen (and Gallup) for being such consistent outliers.
Let's be straightforward about the matter: the real question is whether the Rasmussen result belongs in the same class as the results of other pollsters, which is what you assume when you include their data points. Two questions arise: are they measuring the same thing, and are they measuring it in the same way? (Relatively speaking, of course, since methods vary among pollsters.)
Rasmussen assigns party identification in a different way when selecting its sample. I suggest you examine how much the Rasmussen results in the Democratic, Republican, and Independent segments differ from the larger group. You'll find the differences to be very small compared to the difference in the aggregated sample. Explain how you can conclude anything other than that Rasmussen exaggerates the Republican result.
Have they explained to you how their results always add up to 98, 99, or 100%?! May I suggest to you that they discard as nonresponsive many of the wishy-washy answers on the positive side, further skewing away from Democrats?
I think there's a high probability that Rasmussen is gaming you. If true, that will become quite a bit more obvious as time goes on. Since you have legitimate methodological reasons for excluding their results, I advise you to spare yourselves the embarrassment, and the diminution of your professional reputations, by getting out in front of this."
August 11
"Doug,
I assure you none of my email is automated (I wish), it's just that I get a lot of it and don't have time to re-argue points via email that we've already discussed on the blog. Sorry for that... The results Rasmussen releases omit a very small "no answer" category. They're included in the calculation but not in their daily release or tables."
August 11
"Mark,
Thanks. No reason to apologize. Anyone carefully reading your site should reach the conclusion that you're committed to intellectual honesty.
The reason I pay attention to your analysis (the site itself) is your serious attention to the mathematical nuts and bolts of polling. It's the difference between real journalism (you) and all the wingnuttery available on the web. I know you're well aware of this, but it bears repeating: your reputation for honest analysis is bound to elicit false testimony.
It would be obvious to you if Rasmussen were straying from honesty. But what if they've simply leaned to one side or the other of the allowable variations in methodology in a series of mathematical choices, each one compounding the previous slant, until in the end their result doesn't deserve to be aggregated with the other pollsters in your chart? (That's an exercise worth doing yourselves.) Whether they're doing so deliberately is almost beside the point. I wish you'd work from the premise that someone must be providing such a result (and with great frequency), simply because there's a 'market' for it.
When you talk about house effect, you're still implicitly accepting that Rasmussen is asking the 'same question'. (Not literally, of course.) Do they give you enough information about their practical application of professional standards for you to determine whether they're producing a 'political' result? I'm well-acquainted, by experience, with the methods of firms that produce polling results for federal candidates.
I think you're influential enough that Rasmussen would have to comply with requests for information detailed enough so you can make a real judgment, so long as rejection of their result were a credible option."
September 27
"Mark,
No Rasmussen presidential favorability for 9/13 through 9/20? It's been weekly."
September 28
"Doug,
Thanks for noticing and contacting us about this. For some reason, they started labeling their 'favorable rating' with the labels normally used for their job approval question. We assume it was an error, but have a call in to check. That's the reason for the delay."
So that's all, except that on Tuesday, Pollster.com removed 47 weeks of Rasmussen polling results from the Obama favorability analysis and chart. Probability that Mark will explain that to us soon: 100%.
The Singiser post is at: