Nate Silver has been in the news lately for his prediction about the 2014 congressional election. He says that the Republicans have a 60% chance of taking over the Senate. Several Democrats have reacted to the prediction by pointing out that Silver made a similar prediction in 2012 (a 61% chance) and he was "wrong" because it didn't happen. Therefore, we shouldn't trust his prediction this time.
That's a complete misunderstanding of Silver's prediction. He never said that the Republicans would win. He merely said that the probability they would win was larger than the probability that they would not win.
For example, suppose I hand you a bag with 10 poker chips in it and I tell you that four of the chips are blue and six are red. You can't see inside the bag. You only know what I've told you, and based on my statement there is a 60% chance that a randomly selected chip will be red.
You shake the bag to mix up the chips, reach in, and pull out a chip. If the chip is blue, would you say that I was "wrong" when I told you that six of the ten were red? Of course not; there was a 40% probability that the chip would be blue.
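You can check this intuition with a quick simulation. The sketch below (a hypothetical illustration, not part of Silver's method) fills a bag with six red and four blue chips and draws from it many times; the blue chip turns up in roughly 40% of the draws, even though red is the "prediction" a majority of the time.

```python
import random

# A bag with six red chips and four blue chips, as in the example.
bag = ["red"] * 6 + ["blue"] * 4

random.seed(0)  # fixed seed so the run is reproducible
trials = 100_000

# Count how often a single random draw comes up blue.
blue_draws = sum(1 for _ in range(trials) if random.choice(bag) == "blue")

print(blue_draws / trials)  # roughly 0.40
```

Drawing a blue chip on any single trial doesn't make the "60% red" statement wrong; it's exactly what the statement predicts will happen 40% of the time.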
What if the Republicans don't win this year either? Would that make Silver wrong?
Let's go back to the bag. If there are six red chips and four blue chips, what's the probability that you'd pull out a blue chip twice in a row (assuming that you put the first chip back)?
It's (0.40)(0.40) = 0.16. A sixteen percent chance isn't exceptionally large, but it's not tiny either.
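The same simulation idea extends to two draws with replacement. This sketch (again purely illustrative) draws two chips per trial, putting the first one back, and counts how often both are blue; the result lands near the (0.40)(0.40) = 0.16 worked out above.

```python
import random

# Six red chips and four blue chips, drawn with replacement.
bag = ["red"] * 6 + ["blue"] * 4

random.seed(1)  # fixed seed so the run is reproducible
trials = 100_000

# Count trials where both independent draws come up blue.
both_blue = sum(
    1 for _ in range(trials)
    if random.choice(bag) == "blue" and random.choice(bag) == "blue"
)

print(both_blue / trials)  # roughly 0.16
```

So even two "wrong" outcomes in a row are entirely consistent with the stated probabilities.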
You can't say that Silver is "wrong" based on his specific prediction. If a knowledgeable statistician wants to go back through Silver's process in detail to look at where and how he got his data and how he analyzed it, it's entirely possible that they would disagree with something. Even that wouldn't necessarily make Silver wrong. There are legitimate disagreements in the statistical world on how to obtain and analyze data. There might be actual mistakes in someone's process, but disagreements in the discipline aren't mistakes.
In this case, it's more likely an example of confirmation bias. Perhaps the greatest barrier to effective use of data in any organization is getting past our tendency to accept data that confirms our predetermined biases and reject data that contradicts them. That's not use of statistics; that's abuse of statistics.