Data-Jockeying the Polls

A range, not a number

Now, I realize that many of you think public opinion polling is a silly thing to do in an increasingly authoritarian country like ours. And I grant that the massive disparities between polls we’re seeing – Capriles is leading by a couple of points and Chávez is ahead by thirty! - can only inspire a healthy dose of skepticism.

My feeling is that if I’m sailing in thick fog, of course I’d prefer a state-of-the-art GPS…but if I can’t get it, I’ll gladly take a rusty old compass. It may not tell me precisely where I am or exactly where I’m going, but in difficult circumstances, it’s better than nothing.

There’s a lot of magical thinking about polls. No matter how often pollsters say so, people find it hard to accept that a poll is necessarily backward-looking – it can estimate where public opinion was when the question was asked, which, by definition, is in the past. Pollsters are not clairvoyant: basing a forward-looking prediction on a backward-looking study is an irreducibly fraught exercise.

Pollsters are keenly aware of this – poll readers, much less so.

Another key word that tends to get lost is “estimate”. A poll is an estimate. You take a random sample from a population and you apply certain statistical techniques to infer something about the behaviour of the population as a whole from the behaviour of the sample. Modern statistical techniques allow you to precisely calculate the odds that your estimate does or does not match the population as a whole, but we’re still talking about odds. The headline number pollsters report is just the center of the distribution of likely real results – the best guess about the characteristics of a population that can be made when you’ve talked to only a part of that population.

In other words, when a pollster tells you Chávez is at 44% with a margin of error of ±3%, there isn’t anything “magical” about the number “44”. What it really means is that there’s a 95% chance that Chávez’s support in the population as a whole at the moment the poll was taken was higher than 41% and lower than 47%. It also means, of course, that there’s a 5% chance it was outside that range, which is something else that’s too seldom appreciated: it is a matter of mathematical certainty that one out of 20 well-conducted polls will be off by more than the margin of error.
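For readers who want to see the arithmetic behind that range: the ±3% figure falls out of the usual normal approximation for a sampled proportion. The sketch below is illustrative only – the sample size of 1,067 is my own assumption (it happens to be roughly the size that produces a three-point margin), not the size of any actual Venezuelan poll.

```python
from math import sqrt

def margin_of_error(p, n, z=1.96):
    """Half-width of the 95% confidence interval for a proportion p
    estimated from a simple random sample of size n (normal approx.)."""
    return z * sqrt(p * (1 - p) / n)

# Assumed sample size of 1,067 -- purely illustrative.
moe = margin_of_error(0.44, 1067)       # roughly 0.03, i.e. ~3 points
low, high = 0.44 - moe, 0.44 + moe      # roughly 41% to 47%
```

Note that the margin shrinks only with the square root of the sample size, which is why pollsters rarely go far beyond samples of one or two thousand.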

And those are just the inherent limitations of polling as a research mechanism, before we even get to the specific difficulties of trying it in Venezuela. In an environment as challenging as ours, you need some quantitative sophistication to beat the polling we have into presentable form. And nobody’s been administering those beatings with more gusto than Iñaki Sagarzazu, currently at the University of Glasgow and – more relevantly, for my purposes – of YVPolis.

Iñaki’s gone to more trouble than most to identify the Bias Profile of each Venezuelan pollster, and uses the results to “correct” their latest polls, giving a kind of synthetic poll-of-polls – a range of estimates corrected for each pollster’s past bias.
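Mechanically, the correction is straightforward once you have a per-pollster bias estimate: subtract each firm’s historical bias from its latest headline number, then report the spread of the corrected figures. A minimal sketch, with invented pollster names, bias figures and poll numbers (these are not Iñaki’s actual estimates):

```python
# All inputs below are hypothetical, for illustration only.
historical_bias = {"Pollster A": 4.0, "Pollster B": -3.0, "Pollster C": 1.0}
latest_chavez   = {"Pollster A": 50.0, "Pollster B": 40.0, "Pollster C": 48.0}

# Subtract each firm's past bias from its latest headline number.
corrected = {name: latest_chavez[name] - historical_bias[name]
             for name in latest_chavez}

low, high = min(corrected.values()), max(corrected.values())
mean = sum(corrected.values()) / len(corrected)   # center of the range
```

The point of reporting the whole corrected range, rather than just the mean, is that the bias estimates themselves are uncertain.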

His result, at this point?

Chávez is somewhere in the range between 39 and 49, with an average of 46%. Capriles is in the range between 27 and 43, with an average of 34. These ranges overlap by 5 points, which means this election has not been decided yet, especially if we consider that most of these polls were conducted before the campaign officially began and before people started paying attention to the election.

One last note. One pattern that’s been clear over the last several election cycles is that while the early polling is all over the place, poll results do tend to converge around the real outcome as the election draws near. You can see that in Iñaki’s slides, which are based on the final public poll before each election, and which show that most pollsters tend to do OK in most elections.

We’re still more than two months out from October 7th, and so we still have to consider the polls we have now “early polls”. We’re just now getting to the period when polling becomes really useful as a guide to what’s about to happen. So watch this space.

40 thoughts on “Data-Jockeying the Polls”

  1. Just what we talked about several times! That the polls and pollsters had to be studied first, before criticizing the results.

    And that some bias-correction factor was needed for each pollster, so as to know “which leg each one limps on” and thus obtain a more reliable, normalized result.

    Happily, Iñaki did an analysis that was vital for bringing numerical rigor to the debate.

    Personally, I believe that reversion to the mean is as strong as gravity in certain domains, such as polling. Seven weeks of campaign remain, and I see Chávez in quite a bind.

    Hugs to all.

  2. Those kinds of projections are part of what any campaign HQ should check, and does check… The public is often much too obsessed with the horserace, and thus has been misled both by journalists and, we need to say so, by pollsters who presume to be futurologists (or crave media fame).

    That snippet from Iñaki’s study – which is a careful assessment of polls over the last decade, and should perhaps be made into a journal paper – gives a proper narrative for the election, its competitive field and its mirages, and explains why the government engaged in the “polls’ war” early on: only early polls seemed to give it a huge lead, and that was the self-fulfilling prophecy they wanted to seed into the public.

    • BTW, I’m not saying the government’s tactic has not reaped any rewards… I believe the opposition’s tactics have been better – in a mostly blunder-free campaign – and it shows it will be a close race.

      An extraordinary thing, considering both the amount of public money and propaganda bombarding the average public.

  3. Congratulations to Iñaki for his research and to Quico for his interpretation of it. Hopefully commentary based on facts rather than mere opinion and hearsay about Venezuelan polls can help us better understand not just the candidates’ chances during the last couple of months of the campaign but also the workings (and limitations) of the local polling industry.

    A couple of observations about Iñaki´s work, specifically his sources.

    In the parliamentary elections of 2010, Datanalisis’ last omnibus poll before the elections in fact estimated a 52-48 split in favor of Chavez, with a margin of error of 3 percent. What Iñaki is reporting suggests Datanalisis got it wrong by a combined difference of more than 10 points. Iñaki should correct his data. Source: http://www.el-carabobeno.com/impreso/articulo/t160910-e07/datanlisis-voto-de-indecisos-definir-resultado-de-las-parlamentarias

    Second observation: if I am not mistaken, Consultores 21 did not conduct a poll immediately before the referendum of 2009. The data reported by Iñaki corresponds to the regular Perfil 21 conducted in November-December 2008, not to a poll designed specifically for the referendum of February 15, 2009. If we recall, the referendum was called immediately after the regional elections of November 2008, which left many (including those in the polling industry) with little time to prepare for the campaign and the elections (or a new and always costly poll). This means that the C21 poll reflects the post-regional-election climate and not necessarily the pre-February-referendum climate. And those few months, as has been previously discussed, were crucial in overturning an electoral trend that had been favorable to the opposition for the better part of 2008 (campaign machinery demobilized and tired after the regionals, December vacations, opposition voters demotivated by a much-questioned referendum called at the last minute).

    Both of my observations, while dealing with different firms and elections, touch upon what quico discusses in the last paragraphs of his post. The “photography” or “cross-section” that is a poll usually sharpens only in the last month before an election, as trends tend to stabilize among the electorate who have by then made a choice about whether or not to participate and if so who to vote for.

    My hope is that Iñaki’s efforts can now serve as a starting point for a discussion about how the polls work, and that observations like the ones I make here serve not to discredit the science of public opinion or its applicability in Venezuela, but instead to correct and enhance our interpretation of electoral trends.

    • kliq, the last Datanalisis poll that I have is the same one you mention, 52-48. Thanks to your comment I noticed that the graph is wrong…sorry! It is showing the mean values for each pollster instead of the last survey. The rest of the graphs are fine. I just made the new graph and I shall post a correction sometime tonight…

    • Kliq: also, as to the Consultores 21 poll. I am not proud to have used Wikipedia to get some info, but that was the only thing available. According to this info, the only survey done by Consultores 21 was published in January. Not sure what the field dates were. As I mentioned in the blog, the data I have for that election (2009) would not pass a peer-review process, but it was important to include some sort of measure for that election.

  4. gtaveledo,

    Of course campaign HQ’s look at trends. But independent assessments (such as Iñaki’s) are always more valuable due to their objectivity.

    The rest is rhetoric.

    For example, you can check news headlines from the ’98 election, back when Venezuela was lost. Salas Römer affirmed in September that voting intentions were about to cross, in October that both lines signalled a technical tie, and in December that he (we) would easily win. The sad outcome was a 56-39 chavista victory…

    A question for you all:

    Unfortunately I am not on the ground in Venezuela at this time. Can someone tell me if the government is really using their famed “money bazooka”? It seems to me they have been very inefficient this time around, and money is not flowing downstream… Too few free homes, too few free refrigerators, too few free washing machines. The government apparatus is so paralyzed nowadays that it is even incapable of buying votes! And “cadenas” alone do not win elections… So HCF’s only hope would be to promise more and more salary/pension rises and perhaps a fat “aguinaldo” in November. And even this would not buy enough support.

    Abrazos a todos

  5. A tight race means nothing for a dictator like Chavez, because win or lose he will cheat. Win, and he will pad the numbers to claim a mandate for full speed ahead to Communism. Lose, and the CNE will flip the vote a la Iran. The farce of an election is just so the ALBA leeches and the Russian and Chinese enablers can keep a straight face and say “he’s a democrat”. The only way to keep cockroaches out of a democracy is not to let them in from the start.
    http://static.eluniversal.com/2012/07/31/merc.jpg.520.360.thumb

  6. For me, polls are retro, especially during the gallop phase of the horse race.

    By the time number crunching is analyzed, post-facto, some other political event has occupied the mental space of opinion of average citizens. What happens next is that along come the poll results (ta-da!) from when the poll was taken, say, two weeks earlier — gotta work those numbers to make polling look like mumbo-jumbo, certainly not something anyone else can do except for the *expert*.

  7. Quico, you’re giving our friend Iñaki way too much credit here. He doesn’t take into account the track record of pollsters in regional elections, and he gives way too much weight to polls for the 2009 Referendum, which was particularly tricky to poll since it was announced really quickly and was one where the opposition had basically zero cash to campaign. I also think he’s comparing the *last* polls taken before national elections to predict the accuracy of polls taken right now – three months before an election. The result is that according to his flawed criteria, Consultores 21 comes out as “unreliable.”

    Sorry, that doesn’t cut it for me. If you’re going to look at a pollster’s track record, you need to look at the whole thing and not discard data points just because they don’t fit some random, flawed criteria.

    • He’s doing C21 (and several other pollsters) a *favor* by looking only at the last poll before an election – they’d come out looking far worse otherwise!

      The regional thing is also, well, debatable. There are serious complications to polling locally and regionally that don’t apply to national polls. Remember that in 2008, Datanalisis had Liliana Hernandez leading the race for Chacao mayor by some margin – she came in dead last. And a couple of good pollsters had Guanipa ahead of Eveling – in the end Eveling just crushed him. Why? Because they didn’t have enough experience to know how to build a representative sample in Chacao or Maracaibo – which is a very different skill from building a proper sample nationally, something all pollsters have expertise in.

      Reasonable people can differ on these things, and Iñaki’s methodology strikes me as defensible.

      (My bigger concern is with his data – which is difficult to verify because pollsters are just so tight-fisted about this stuff…)

      At any rate, my claim isn’t that he’s the best, just that he’s better than the alternatives – largely because I’m not a BoA client and so I don’t get to play with F-Rod’s database :( …

      • No, you’re confusing things here. Yes, polling for *primaries* is tricky, more so if they are regional in nature. But polling for state-wide races should not be discarded so easily. For example, pollsters predicting Diosdado would beat Capriles have issues …

    • Juan, a couple of comments.
      1) I am not discarding elections. If you get me enough data points, I’ll include them. The data points I have are the only ones I could find.
      2) I am not using the last poll to predict the 2012 race. I am using the average of the polls presented in each race (which has a whole set of problems) to generate the bias in each election. I then argue that we could have two main dynamics: a 2004/2009-style race or a 2007/2010-style race. Table 1 shows the average for each pollster of all their polls for each race and overall. So if C21 comes out as unreliable, it is not because they lack a good track record overall, but because they made big mistakes in two very difficult elections.

      • I think Iñaki’s approach and methodology are correct. If anything, we need better data, as Quico argues. Like Juan, though, I am not comfortable with the poor rating Iñaki’s model gives Consultores 21, but I think (as in the 2009 case I mention) that it has more to do with Iñaki’s inability to come by data that pollsters and their clients are so sensitive about.

        Someone should lobby these firms a bit harder for the sake of transparency. I have been trying my best. If anyone else wants to join in, I recommend starting here…

        http://www.avai.org.ve/

  8. For predictive purposes, I would suggest looking into weighting the biases by the biases that came before them, then circling the final bias back to weight the first one, and repeating until stabilization is reached. This would be similar to what some neural networks do when used for prediction with fuzzy data.
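One way to read that suggestion is as a fixed-point iteration: form a consensus from bias-weighted polls, re-derive each pollster’s bias against that consensus, and loop until nothing moves. The sketch below is my own loose interpretation – the weighting rule and the poll numbers are invented, and this is not a description of any model actually in use.

```python
def iterate_biases(polls, tol=1e-9, max_iter=500):
    """Fixed-point sketch: downweight the more biased pollsters when
    forming the consensus, recompute each bias against that consensus,
    and repeat until the estimates stabilize."""
    biases = [0.0] * len(polls)
    for _ in range(max_iter):
        weights = [1.0 / (1.0 + abs(b)) for b in biases]
        consensus = sum(w * p for w, p in zip(weights, polls)) / sum(weights)
        new_biases = [p - consensus for p in polls]
        if max(abs(a - b) for a, b in zip(new_biases, biases)) < tol:
            return consensus, new_biases
        biases = new_biases
    return consensus, biases

# Invented poll numbers: two clustered pollsters and one outlier.
consensus, biases = iterate_biases([44.0, 46.0, 55.0])
```

With these inputs, the iteration pulls the consensus below the plain average, since the outlying pollster ends up with the largest bias and hence the smallest weight.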

  9. At some point, the conversation might be better focused on possible (future) voting irregularities rather than polling irregularities. Nothing so far has convinced me that either the voting or the vote count will be fair. However, I’m sure there will be many safeguards in place to make irregularities difficult and limit the amount of voter fraud.

    Meanwhile, I’m sure that the polls will not have many people identifying themselves as Capriles supporters, out of fear. My take is that if a pollster called me, I would first try to figure out whether it was the Chavista thought police calling. My response would depend on my confidence that I could give my real opinion.

  10. I don’t know what assumptions go into polling in Venezuela, but that “there’s a 95% chance that Chávez’s support in the population as a whole at the moment the poll was made was higher than 41% and lower than 47%” assumes a normal distribution, right?

    Do pollsters consider skewness in the distribution? Or the possibility that the distribution is not normal? I am curious. Maybe the voter distribution is normal and the model fits reasonably well.

    I sense that if they have equal error bands around a mean, they are using a normal distribution with no skewness. Or perhaps they do that just to put it in simpler terms.

  11. A statistical bias occurs when you actually use data. I can’t even be sure that some of the pollsters reporting results are actually conducting and analyzing surveys; in that case their “bias” is a manual construct, not a mathematical object. This makes the chart useless, in my opinion.

    • I think the same: the differences are so big that they cannot be coming from statistical variance due to sampling effects; at the very least there are “problems” in the way the surveys were done, if they were done at all… (starting with the questions and the sample of people asked…). But can you expect a properly conducted poll from somebody like J. Chacón?
      If somebody took the time to call, say, 50 randomly selected mobile phones, they could get something better, I would say. Obviously there is a problem: would people really tell an unknown caller what they think?
      I would say, forget the polls. Don’t waste your time doing scientific analysis on junk data.

      • Little correction: don’t forget all the polls; rather, try to identify the better ones, and take them with a big grain of salt, like Miguel does.

  12. In the 2006 presidential elections Penn & Schoen’s Doug Schoen proved that there was a consistent 14-point bias in favor of the Chávez vote built into the results of standard in-home polls. (Forget about telephone polls — there the bias must be around 25 points.)

    Meaning 7% of the respondents consistently lied, saying they’d vote for Chávez when in fact their intention was to vote for Rosales. How did he prove this? He ran two simultaneous polls with the same questionnaire, one in-home, and one in a public space; the in-home poll was administered by the pollster, the on-the-street poll was self-administered. He did this in August and then in October, and then compared the results. They were coherent. I’ve written about this here, if you want the details: http://porlaconciencia.com/?p=3458.

    Alfredo Keller and I recently went over his March and June poll data, and compared it to C21 polls from the same period. Keller acknowledges that the 14-point pro-regime bias (Schoen called it the Fear Factor) was valid for 2006, and now estimates the 2012 bias at a minimum of 16, possibly 18 points.

    This means that you can safely take the Chávez 46% score and shave off 8 points (half of 16) and get a 38% score, and take the Capriles 34% score and add 8 points, and get 42%.
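Using the commenter’s own numbers, the adjustment just moves half of the estimated fear-factor bias from one column to the other – a trivial sketch:

```python
# Reproducing the arithmetic above: shift half of the estimated
# 16-point fear-factor bias from Chávez's score to Capriles'.
fear_factor = 16.0
chavez_raw, capriles_raw = 46.0, 34.0

chavez_adj = chavez_raw - fear_factor / 2       # 38.0
capriles_adj = capriles_raw + fear_factor / 2   # 42.0
```

Splitting the bias evenly between the two candidates is itself an assumption; it holds only if every “fearful” respondent who names Chávez would otherwise have named Capriles.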

    I’d say that’s a good estimate of where things were in mid-June (when the field work was done).

    No pollster here, including Keller, will stand up publicly and say “all our data is skewed because a significant percentage of poll respondents are afraid to tell us the truth”, pero es un secreto a voces. Why won’t they admit it? Because they’d have to go back to the drawing board and re-engineer the way they do their polling. Plus it’s more expensive, a lot more expensive, to poll 2,000 people in one day at 200 demographically representative public places around the country (as Penn & Schoen did) than it is to run a 1,200-sample in-home polling operation over several days with maybe a fifth or a tenth of the manpower.

    I look at all polls through this lens, and I’d suggest you armchair analysts do the same. I spent three days on the road with Capriles last week, and wherever he goes these days it’s mass hysteria. The guy’s got the stature and the chops of a big-league rock star. Whoever says he lacks charisma or can’t develop an emotional bond with his audiences just isn’t paying attention. I think we’ve reached a tipping point, and Chávez’s campaign is washing out. I agree with Syd that polls are a snapshot into the rearview mirror; he has no place to go in the polls but up, up & up. Today I’d say it’s a 50-40 race, in Capriles’ favor.

    That doesn’t mean he’ll win, of course. Far from it. But that’s another issue altogether.

    Capriles and his advisors are blind (as are many in this group) to the fact that the 2004 RR vote was systematically rigged, and that Chávez in fact lost by somewhere around 44 to 56. María Mercedes Febres-Cordero and Bernardo Márquez proved that to the satisfaction of the peer-review committee at the International Statistical Review, which in 2006 published their landmark study “A Statistical Approach to Assess Referendum Results: the Venezuelan Recall Referendum 2004” (http://bit.ly/gFZela). But since Teodoro Petkoff has labeled any reference to vote rigging as “leyendas urbanas,” the easily impressed who won’t think for themselves generally dismiss the evidence and prefer anecdotal “evidence” to the contrary, such as Comando Venezuela number-cruncher Roberto Picón’s “yo ví el quick count de Ojo Electoral y no era así…”

    It’s a pity Capriles prefers anecdotes to science. If he were to look at the Febres Cordero-Márquez study “Regional Elections Venezuela (23N08): A Statistical Analysis of Coherence”, he wouldn’t be so quick to crow that he’s never lost an election and that there’s no evidence for claims of vote fraud, since the study shows, using the Newcomb-Benford Law and the Febres-Márquez (FM) methodology, that he didn’t beat Diosdado Cabello by 53% to 47%, as the CNE says, but rather by 70% to 30%. Other states that showed extremely strong discrepancies in the 2008 elections were Aragua, Bolívar, Carabobo, Miranda, Vargas and Zulia.

    But nobody in the official opposition wants to hear this. They’d rather shout down any substantiated warnings about vote-rigging by chanting “con testigos en todas las mesas no puede haber fraude!” until they’re blue in the face. And that’s the problem. Henrique didn’t win by his real margin in Miranda in 2008, and we won’t win nationwide in two months.

    Two weeks ago I published an article ( http://www.lapatilla.com/site/2012/07/16/eric-ekvall-el-cuento-del-gallo-pelon-o-el-circulo-vicioso-de-la-logica-falaz/) on the frustration I experienced having a conversation on this subject with two otherwise intelligent women (Rayma and Colette Capriles, btw, although I didn’t name them in the article). You might find it interesting.

    • The danger of Eric is that he is so articulate…and seems so data-based…

      But there’s a reason no one in the opposition is buying what he’s selling.

      First, because people on the inside know that (many of) the pollsters in Venezuela have for years included their own measurements for detecting fear or reluctance to participate in polls. And they have not seen a problem anything like the scale Eric describes. (This is not some completely-unique-to-Venezuela phenomenon – pollsters operate in difficult environments all over the world.) For example, in an atmosphere where fear was widespread, you would see many more people refusing to participate than usual. This is not happening: rejection rates are comparable to other countries’.

      The reason it is so, so important that most of the opposition has left this magical thinking (we’re winning! the only reason you can’t see it is because of fear!) behind is that it has allowed Capriles and his campaign to confront the reality of today’s electorate. That’s why they are making progress — because they understand how things really stand and what drives the vote in today’s Venezuela.

      Capriles is indeed generating lots of crowds…which is great…it does prove that he has generated enthusiasm in the opposition base…but it does not provide evidence of where the race stands nationally. That millions of Venezuelans feel passionately that Chavez has to go is not new information.

      In sum — Capriles is doing well…it’s a close race…and I’m thankful that they are committed to doing their best to stay away from the toxic, self-defeating process arguments of old.

      • Lucia, are you suggesting that there is a way to explain away the math behind Febres, or Delfino, or Hausmann, on and on, all of which have put every detail of their methodologies up for peer review, and all of which use more advanced and much stronger statistics than any pollster?

        At this electoral crossroads, let’s err on the side of paranoia, please.

          • In my brief experience in academia, statistics is just another way to tell a lie ;). I would be careful with peer review, as this world is filled with Nature and Science articles that have been proven wrong and are not science but noise.

          I am not saying that these studies are wrong, but I am skeptical, that’s all.

            • Publishing in peer-reviewed journals is not a matter of what is disputed or not. Rather, the act of submitting for peer review points to the seriousness of a contributor, wishing to inform the upper echelons of his or her intellectual community of a serious intent.
              It also separates the wheat from the chaff, the serious-minded from those with poorly thought-out ideas. A good thing. Analogies to the world of sports are many.

            • Syd,
              I agree with you, but the “seriousness” sometimes comes from scientists who are not curious about what happens in nature in an open-minded way; instead, they have a hypothesis that they want to impose as law. The war of arguments. Here statistics, particularly advanced and obscure statistics, has served an important role.

              Febres et al. had a huge bias toward proving there was fraud. And one can run statistical test after test until one proves one’s point. That’s no way to do science. For these reasons, and because their complex statistics beats my knowledge of the field, I remain skeptical.

                • If you bring up Febres et al.’s bias toward proving that there was fraud, you must admit Carter et al. had the same bias toward proving there was none. If you think Febres et al. would do test after test until one proved their point, you must admit Carter et al. would do the same to prove theirs.

                So, if you are to remain skeptical, be fair, and remain skeptical both ways. I remained skeptical until I took sides. There is no doubt a new sample and a recount were in order, and that’s all they were asking for, back then.

          • Rodrigo Linares, sorry, by statistics I meant mathematical statistics. The names I mentioned all use mathematical statistics. The only way to tell a lie with these is by choosing an incorrect tool or applying it incorrectly for the data at hand. The peer-review process was not just a matter of publishing; it included answering to others who know M.Statistics.

            Rigobón’s analysis is one of the simplest, purest and least debatable of those presented. It is based on a basic theorem: any random sample must be representative of its universe in every M.Statistical property of that universe. If a sample is not representative of its universe in any such property, it does not fulfill the theorem’s requirement and therefore cannot be considered random, *even if the sample was obtained randomly*.

            For example, if you flip a coin 100 times but randomly choose a section from the series that is 6 heads in a row, those 6 heads in a row are not a valid random sample of the 100 even if they were chosen randomly.
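That coin-flip point is easy to demonstrate in code: a window selected *because* it is all heads tells you nothing about the full series, even when the choice among such windows is random. A small simulation (using 10,000 flips instead of 100, so that an all-heads run is essentially guaranteed to exist):

```python
import random

random.seed(0)
flips = [random.choice([0, 1]) for _ in range(10_000)]   # 1 = heads

overall_rate = sum(flips) / len(flips)   # near 0.5 for a fair coin

# "Randomly" choose a 6-flip window -- but only from among the
# windows that happen to be all heads.
all_heads_starts = [i for i in range(len(flips) - 6)
                    if flips[i:i + 6] == [1] * 6]
start = random.choice(all_heads_starts)
sample_rate = sum(flips[start:start + 6]) / 6   # 1.0 by construction
```

The window was chosen at random, yet it is not a random sample of the series – exactly the distinction the theorem draws.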

            When Rigobón proved that the Carter Center’s cleaned-up data was not representative of the universe in a particular property, by about 10%, the Carter Center was able to repeat the result independently. But since, by not cleaning the data, the discrepancy did not show up with the tool of correlation, the CC decided they would quite simply not clean the data so that they could claim randomness via correlation. Yet when the CC graphed their dirty-data results, the difference between the lines on the graph in their report was, ta-da, 10%. Incredibly, the person who made the graph then took manual steps (i.e., steps worksheet applications cannot take) to change the axis so as to visually minimize the difference that was there. Even when Rigobón explained why correlation was the wrong M.Statistical tool, the CC insisted on using it to “prove” that the sample was valid.

            It wasn’t just the data that was dirty, but that wasn’t a statistical conclusion, so you can understand why someone as professional as Rigobón would have kept it to himself.

      • “The danger of Eric…”

        Oh, my goodness!

        Now I’m a danger?

        The only danger, Lucia, is opening your eyes and seeing something you never imagined was in front of you all the time.

        I never thought I’d quote Barry Goldwater, but here goes: “Extremism in the defense of liberty is no vice.”

        • While you’re busy ringing the alarms of self-defence, Eric, may I point to what I raised a few comments below?

      • Lucia,

        I would be interested in seeing where you got this information:

        “For example, in an atmosphere where fear was widespread, you would see many more people refusing to participate than usual ”

        It doesn’t seem logical to me that, given the nature of Venezuelans and the climate in which they live, this would be true.

        I think it would be true that they could be reluctant to admit their preferences in a job situation that might favor Chavez, but I highly doubt that lying to a pollster would produce much fear… Why would it?

    • Eric, as usual, superb analysis, and well backed up. As for the polls, I’m sure the superior Penn and Schoen technique detected a bias which probably holds true today to the extent you mention, and, if anything, perhaps it shouldn’t be split down the middle but skewed more toward the Oppo candidate. The Referendum was obviously fixed, as anyone could see even empirically from the published cases where voting machines jammed at certain levels of “sí’s” and then spewed out hundreds of sequential “no’s”, and the statistical studies proved it. Sorry, Carter validated the results after receiving orders from Washington not to rock the boat due to the close Bush-Gore race, and by accepting the CNE’s computer program for auditing, since supposedly Carter’s own program was incompatible with the CNE computer process. The Captahuellas are there to keep the Fear Factor alive, and in a close race they will be determinant.

      • As for the 3.5 million Mision Vivienda huellas: apart from the Fear Factor of losing the fantasy chance of a vivienda, there is a real chance of these applicants being fraudulently/electronically introduced as voters for Chavez. Here, I suppose, the Capriles voting-mesa testigos will be key, as will real-time tweeting of mesa results once voting has closed. Capriles’ personal appearances are generating all the street cred/emotion that Chavez quietly did in 1998, and more – which was responsible for his election victory then. This time, at least, we won’t be seeing the sorry spectacle, post-election, of the Oppo dirigencia huddled in the barely-lit Tamanaco lobby in the early post-Referendum morning hours (Petkoff was there, I believe) wondering what to do. Capriles has to win big to keep the wolf from the door, and, if he overcomes the Fear Factor, he will do so.

  13. One way to check the accuracy of these polls is to stand on a street in Caracas for an hour or two and count the number of political bumper stickers you see on cars. Things like “Impeach Chavez”, “HCF is Obama’s puppy”, “Where’s the oil money, Huey?”, and “Who cares – either of you and your cronies are going to screw us AGAIN, so who cares!!!!”

    • I like it. Suggest you start your polling firm, today. You could call it bumper sticker polling technologies — international, of course. Or BS Potechi.

  14. Thank you, Eric and Lucia for that great debate, perhaps unintended.

    Here’s my thinking on Eric’s position.
    “Keller acknowledges that the 14-point pro-regime bias (Schoen called it the Fear Factor) was valid for 2006, and now estimates the 2012 bias at a minimum of 16, possibly 18 points.”

    One would expect that this so-called fear factor, “discovered” in 2006, would in the intervening years have been built into the polling questions so as to mitigate the paranoia-induced bias in favour of Chavez. That being the case, there is much less need to shave percentage points off the polling results, as Eric suggests we do.

    That the FCM 2006 report was flawless from a statistical point of view is one thing. That it was too convoluted a thinking process for the average person to digest is another.

    All I know is that I’m glad we have the oppo that we do today, and not the scrappy and unprepared oppo of years past. What a disaster it would have been if the oppo had won back then! (A sad truth that many are unwilling to acknowledge.)

    For that reason, I yield to Lucía’s arguments, especially these:
    1. pollsters operate in difficult environments all over the world.
    They no doubt have low-cost techniques to incorporate any inherent bias. If they don’t, then they are doubly useless – to me.

    2. most of the opposition has left this magical thinking (we’re winning! the only reason you can’t see it is because of fear!) behind.
    Like a well-trained and prepared athlete, the oppo we have today can stand on its own two feet without needing to engage in the whining of yesterday’s oppo. That makes the oppo stronger, more independent, and better prepared for the fight. We have seen growing evidence of this strength with every passing month. If the oppo were falling back on what Lucía calls the “self-defeating process arguments of old”, you would never see the muscle you see today.

    3. Capriles is indeed generating lots of crowds…but it does not provide evidence of where the race stands nationally.
    Capriles and his team have not covered every single community in Venezuela. There’s a reason for that: in many communities, he would be pelted out of town, as already evidenced by the few incursions of Comando Venezuela into hostile territory. In simpler terms, just because you’re a fan of the Rolling Stones doesn’t mean everyone is.

Comments are closed.