President Obama triumphed Tuesday. But the biggest winner may have been math.
After decades of relying on predictions from political pundits and wildly gyrating polls, Americans saw a small band of number crunchers redefine the business of election forecasting. Armed with computer simulations and confidence in cold, hard data, these self-described geeks called the presidential race and a slew of smaller contests with stunning accuracy.
Their foresight proved astonishing and provided the political class with endless talking points to debate in the weeks leading up to Election Day.
In the process, these statisticians may have fundamentally changed the way that political campaigns are watched and conducted in America. Think of it as “Moneyball,” which revolutionized baseball, applied to the most important pennant race of all.
The captain of the math brigade, Nate Silver, a former baseball statistician turned New York Times blogger, correctly called 50 of 50 states in the Electoral College, assuming Florida remains blue. Sam Wang, a Princeton University neuroscientist who moonlights as an election forecaster, accurately predicted Obama would capture 51.1 percent of all votes cast nationwide. Drew Linzer, a political-science professor at Emory University, predicted five months ago that Obama would win 332 electoral votes, which will hold up if Florida goes to Obama.
Statistics over bias
All told, about a dozen math wizards entered the political fray this campaign cycle, championing statistical methods and advanced computing power over partisan bias and conventional wisdom.
“The principle behind this movement is that numbers aren’t ideology,” said Scott Elliott, a computer engineer in North Carolina who operates the site electionprojection.com.
A Christian conservative, Elliott voted for Mitt Romney. But his computer model predicted months ago that Obama would easily win the Electoral College. Elliott correctly called every state except Florida.
“The poll data don’t come in wearing a blue shirt or a red shirt,” he said. “They are what they are.”
Like other analysts in the blossoming field of election probability, Elliott depends heavily on data from the many hundreds of state and national polls taken over the course of the campaign. In simple terms, these forecasters aggregate the polls, average them and then use high-powered processors to run tens of thousands of simulated elections. Then they base their predictions on the most frequent outcomes of those simulations.
Each poll, conducted by groups including Gallup, Public Policy Polling and Rasmussen, might have a margin of error of 5 or more percentage points. By combining all of them, that error margin diminishes to near-invisibility, said Wang, the neuroscience professor. His simulations predicted Obama winning the Electoral College handily and nailed upsets, such as Heidi Heitkamp’s winning a Senate seat in North Dakota.
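The aggregate-average-simulate recipe described above can be sketched in a few lines of Python. Everything below is invented for illustration — the three states, their poll margins and margins of error are made-up numbers, not real 2012 data — and the actual forecasters' models are far more elaborate, but the basic logic is the same: average each state's polls, treat the reported margin of error as roughly 1.96 standard deviations, shrink it by the square root of the number of polls averaged, then count how often the candidate wins across thousands of simulated elections.

```python
import random
import statistics

# Toy poll-aggregation model. All figures are invented for illustration.
# Each entry: recent poll margins (Democratic lead, in points), a typical
# single-poll margin of error, and the state's electoral votes.
STATE_POLLS = {
    "Ohio":     {"margins": [2.0, 3.0, 1.5, 2.5], "moe": 5.0, "ev": 18},
    "Florida":  {"margins": [0.5, -1.0, 1.0, 0.0], "moe": 5.0, "ev": 29},
    "Virginia": {"margins": [1.0, 2.0, 0.5, 1.5], "moe": 5.0, "ev": 13},
}
EV_TO_WIN = 31  # majority of the 60 electoral votes in this toy map


def win_probability(n_sims=20_000, seed=42):
    """Average each state's polls, then run n_sims simulated elections."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_sims):
        ev = 0
        for data in STATE_POLLS.values():
            avg = statistics.mean(data["margins"])
            # A reported margin of error is ~1.96 standard deviations;
            # averaging k independent polls shrinks it by ~sqrt(k).
            sigma = (data["moe"] / 1.96) / len(data["margins"]) ** 0.5
            if rng.gauss(avg, sigma) > 0:
                ev += data["ev"]
        if ev >= EV_TO_WIN:
            wins += 1
    return wins / n_sims


if __name__ == "__main__":
    print(f"Simulated win probability: {win_probability():.1%}")
```

With these hypothetical numbers the leader wins the large majority of simulations even though no single state poll is outside its margin of error — which is exactly the point the forecasters make about aggregation.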
It is a method predicated on the belief that the more data on hand, the more accurately outcomes can be predicted. Yet Wang and others of his ilk were roundly attacked before the election for supposedly slanting the results to match their political preferences.
“At the national level, pundits were taking brickbats at us because they felt we were in the tank for Obama, but in reality, we were in the tank for getting it right,” Wang said.
Nobody caught more flak than Silver, whose FiveThirtyEight blog (named for the total number of electoral votes) became the online phenomenon of the year. Churning out new predictions and deep-dive analyses of polling methodologies on nearly a daily basis, his blog was followed religiously by political junkies.
Silver picked Obama to win from the start. Over the campaign’s final weekend, he put the president’s chances above 90 percent. That evoked yips of joy from Democrats, but furious cries from conservative commentators, including Dick Morris, who called Silver’s work skewed and predicted a “reckoning” after the election.
Silver was even taken to task by his own newspaper. The New York Times public editor criticized him last month for sparring with MSNBC host Joe Scarborough. Silver challenged Scarborough to a $1,000 bet that Obama would win, after the TV personality called him an ideologue and a joke.
Scarborough didn’t take that bet. Perhaps he knew better than to challenge Silver, who stunned the baseball world in 2008 by accurately predicting that the last-place Tampa Bay Rays would turn it around and become one of the best teams in the American League. In fact, they went on to make the World Series.
“This probably does rebuke the pundits,” said Dean Chambers, a conservative who also tried his hand at computer-aided poll analysis this year. “Nate Silver was right on the mark.”
Chambers’ own calculations showed Romney winning big. But the longtime commentator and consultant erred, he said, because he didn’t take polls at face value, refusing to include some polls out of concern that they over-sampled Democrats. In other words, Chambers said, he threw out data that seemed to favor Obama too much.
“I think a lot of us should have a bit more faith in the accuracy of these polls after this,” said Chambers, who lives in Duffield, Va.
Linzer, the Emory professor, acknowledged he’s an Obama supporter. But he said that played no role in the numerous simulations he conducted this year that showed Obama winning. As a social scientist, he said, these kinds of models are useful for predicting a great range of outcomes based on available data.
By focusing on the numbers, he said, it’s possible to look past momentary events that seem to have great import but in the end don’t shape the election. For example, he said, Romney’s notorious “47 percent” comment may have briefly moved some polls, but had a negligible effect on the final result.
And none of that, he said, came as a surprise to the candidates.
“The most sophisticated quantitative work is not happening with people like me, but by those inside the campaigns themselves,” Linzer said. He and other election analysts said candidates employ high-powered math whizzes of their own to help predict outcomes and have far larger budgets than any college professor.
“Their work doesn’t show up in a blog or newspaper, but it’s their secret sauce,” he said.
Expense may limit data
Linzer predicts that many more websites like his votamatic.org will emerge in coming election cycles, and wonders whether other news outlets will adopt such a sophisticated approach. He worries that the flood of useful data could ebb because of the expense involved in producing it. In 2008, for example, about 1,700 state polls were conducted. This year, there were only 1,200.
Others decry the injection of mathematics into something as personal and heated as presidential politics. Their fear is that forecasts driven by computers, rather than by well-spoken pundits, might not only take the fun out of the races but also change the way they’re conducted.
Wang, the Princeton professor, believes pundits and computer-aided analysts can coexist.
“It’s possible to be Homer and write about the wine-dark sea,” he said. “But sometimes you want the guy with the thermometer.”