Les Déplorables

In what world is Nassim Nicholas Taleb an idiot? That dude has forgotten more about any subject than you've learned in your entire life.

That's an idiotic way to describe someone as knowledgeable.
 
Ah insults. The tool of the real intellectual.
 
[Image: CrR2jj7XgAI8fc6.jpg]

This is interesting, but US elections are not simple binary processes because of the Electoral College. Each state could potentially be modeled as a binary process and simulated with a Bernoulli trial on a state-by-state basis, and the electoral college votes could be summed at the end of each replicate. Maybe that is what he is describing, but I did not see those details in the posted description. Estimating the p of those state-by-state Bernoulli trials is not a difficult task. The probability of success for Clinton (pc) is a beta-distributed random variable that is essentially the average of the available polling data. In a two-person race, pTrump is simply 1 − pc. In a four-person race, you could use a Dirichlet distribution to estimate these quantities simultaneously, or you could estimate them independently and then normalize after the fact. Either way is common practice when trying to estimate the probabilities in a multinomial process. Not too hard to implement in a Bayesian hierarchical modeling platform like WinBUGS or JAGS.
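The state-by-state scheme described above can be sketched in a few lines. This is a toy illustration, not a real forecast: the three states, their electoral vote counts, and the Beta parameters standing in for averaged polling data are all made up.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical three-state example: (electoral votes, Beta parameters for
# Clinton's win probability, loosely standing in for averaged polls).
states = [
    (29, 62.0, 58.0),   # a swing state leaning slightly Clinton
    (38, 40.0, 80.0),   # a safe Trump state
    (55, 90.0, 30.0),   # a safe Clinton state
]
total_ev = sum(ev for ev, _, _ in states)

def simulate_election(n_reps=100_000):
    """Per replicate: draw pc for each state from its Beta, run a
    Bernoulli trial per state, and sum Clinton's electoral votes."""
    clinton_ev = np.zeros(n_reps)
    for ev, a, b in states:
        p_c = rng.beta(a, b, size=n_reps)    # polling uncertainty in pc
        wins = rng.random(n_reps) < p_c      # one Bernoulli trial per replicate
        clinton_ev += ev * wins
    return clinton_ev

ev = simulate_election()
p_clinton = np.mean(ev > total_ev / 2)
```

For a four-person race the per-state Beta draws would be replaced with Dirichlet draws, as the post suggests.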
 
I understood it to mean he was modeling the popular vote. I could be wrong though.
 
Ok, doesn't strike me as a super useful exercise. Also, the national popular vote should probably be done as a multinomial given that Johnson is polling at about 10%.
 
Given how often the winner of the popular vote falls under 50%, a binary model is problematic.

A strong model actually includes the decision to not vote at all.
 
I mean, it was more proposed as a critique of Silver's methodology than an actual replacement. Clearly the model can be fleshed out more. It also passes the common sense test, unless you believe that Trump's chances of winning the election have actually gone from 2% to 40% in a matter of weeks.
 
The binary nature of the model reflects the two outcomes of the election, not two choices on the ballot.
 
Hard to understand the common sense test. What is that?

I think the proposal/critique is weak, right from the very first sentence, where it presents the election as a binomial process, which it is not. At best it is a set of independent binomial processes at the state-by-state level. But even then it should be estimated as a multinomial probability set, since there are more than two candidates and therefore more than two possible outcomes. A binomial estimator would fail because the probability of success and the probability of failure have to add to 1, and in this case they would not. Often the estimation of a binomial probability directly estimates the probability of success and derives the probability of failure as 1 − p(success), so it falls apart if the probability of Trump winning is 0.42 and the probability of Clinton winning is 0.44: what happens to the rest of the probability? A far more parsimonious analysis would use a multinomial estimation.
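The multinomial point can be illustrated with a quick Dirichlet posterior. Everything here is hypothetical: the pooled poll counts for the four candidates are invented, and the flat prior is just the simplest choice. The useful property is that every posterior draw is a complete probability vector, so no probability mass goes missing the way it would if each candidate got a separate binomial fit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pooled poll counts for a four-candidate race
# (Clinton, Trump, Johnson, Stein) -- illustrative numbers only.
counts = np.array([440, 420, 100, 40])
prior = np.ones(4)               # flat Dirichlet prior

# Posterior over the vote shares is Dirichlet(prior + counts).
# Each draw sums to exactly 1 across all four candidates.
draws = rng.dirichlet(prior + counts, size=50_000)

posterior_mean = draws.mean(axis=0)
p_trump_leads = np.mean(draws[:, 1] > draws[:, 0])
```

The independent-estimates-then-normalize approach mentioned earlier in the thread approximates the same thing, but the Dirichlet gets the constraint for free.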
 
As birdman points out, there are more than two outcomes. That's mostly because there are more than two choices on the ballot.
 
Common sense in that the probability of a win at any given time during the election cycle should be dominated by future uncertainty, rather than current uncertainty. So the probability of either outcome is close to 50% up until right before the election occurs. During early August, Silver had the probability of Trump winning the election at 2%. I think you'd be hard pressed to find anybody except for maybe the rjkarls of the world that believe that if you ran 100 simulations of the election from early August on that Trump would win twice. Going by Silver's model, the statistical likelihood of Trump even being at this point is astronomically small. Not sure how you could possibly take issue with that critique.
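The "dominated by future uncertainty" argument can be sketched numerically. This is my own toy framing, not Taleb's actual model: treat the polling margin as a random walk with an assumed daily volatility, so a forecast made far from the election has to discount today's lead against all the noise still to come, pulling the win probability toward 50%.

```python
from math import erf, sqrt

sigma = 0.005  # assumed daily volatility of the polling margin (made up)

def win_probability(margin_today, days_left):
    """P(final margin > 0) if the margin diffuses as a random walk
    until election day, accumulating variance sigma^2 per day."""
    remaining_sd = sigma * sqrt(days_left)
    if remaining_sd == 0:
        return 1.0 if margin_today > 0 else 0.0
    z = margin_today / remaining_sd
    return 0.5 * (1 + erf(z / sqrt(2)))   # standard normal CDF at z

# The same 2-point lead implies a far less confident forecast
# 90 days out than 3 days out:
early = win_probability(0.02, days_left=90)
late = win_probability(0.02, days_left=3)
```

Under this framing, a forecast that swings from 2% to 40% months before the election is implicitly claiming almost no future uncertainty, which is the heart of the critique.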

Nate Silver is undoubtedly a smart person, but his credibility comes into question when he does stuff like arbitrarily give Trump a 5% chance of winning the nomination, reasoning that there are four "stages" of the nomination (again, arbitrarily determined by him) and Trump has a 50% chance of clearing each of them.

As for whether or not a binomial process is appropriate, I was always more at home with deterministic systems, and my probability theory is admittedly rusty, so since I feel little compulsion to brush up on it to defend someone else's model on the Internet, especially while there is football on, I will concede that point. In my estimation this doesn't really have any impact on the nature of the critique, but I'm happy to be corrected on that front.
 
I don't recall Silver ever giving Trump only a 2% chance of winning, except in the "now-cast" analysis, which tries to predict the election if it were held on that day. The other models, Polls-only and Polls-plus, try to add in the temporal aspect you are discussing; the lowest I've seen those give was in the mid-teens for Trump. You are right that it is quite curious that either of those models would exhibit such strong swings from month to month and week to week. There are obviously hard-to-predict events (like Clinton fainting at a 9/11 event), but a 30% drop in Clinton's probability of winning is strange model behavior. There may be built-in volatility to his model so that people keep coming back to his website.
 