Election Prediction Market Accuracy 2024: How Kalshi, Polymarket, and Polls Performed
Abstract
This research provides a comprehensive post-mortem analysis of prediction market performance during the 2024 US election cycle. We compare the accuracy of Kalshi, Polymarket, and traditional polling aggregates across presidential, Senate, and gubernatorial races. Our analysis reveals when prediction markets outperformed polls, when they failed, and what factors drove divergence between market prices and actual outcomes.
Core Proposition
Prediction markets demonstrated mixed performance in the 2024 election cycle—outperforming polls in some races while exhibiting systematic biases in others. Understanding when and why prediction markets succeed or fail provides actionable insights for future election forecasting and trading strategies.
Key Mechanism
- Prediction markets aggregated information from diverse sources including polls, early voting data, and ground-level intelligence
- Partisan bias caused systematic mispricing in politically charged markets where traders bet on preferred outcomes
- Liquidity concentration in presidential markets improved accuracy while thin Senate markets showed larger errors
- Late-breaking information was incorporated faster by prediction markets than polling aggregates
Implications & Boundaries
- Analysis based on 2024 election cycle—patterns may not generalize to future elections
- Comparison complicated by different methodologies between platforms and pollsters
- Hindsight bias affects interpretation—outcomes that seemed surprising may have been predictable
- Sample size limitations for individual race analysis
Key Takeaways
Prediction markets are not crystal balls—they are mirrors reflecting the collective biases and information of their participants.
The 2024 election proved that prediction markets excel at aggregating information but struggle with partisan bias.
When prediction markets and polls diverge, the answer is not always that markets are right—sometimes the crowd is wrong.
The best election forecasts combine prediction market prices with polling data rather than relying on either alone.
Problem Statement
The 2024 US election cycle saw unprecedented prediction market activity, with Kalshi and Polymarket processing billions of dollars in trading volume. These markets were widely cited by media and analysts as superior forecasting tools compared to traditional polls. But how accurate were they really? This research conducts a systematic comparison of prediction market performance against polling aggregates across the 2024 election cycle. We analyze: Did prediction markets outperform polls in the presidential race? How did accuracy vary across Senate and gubernatorial races? What factors caused prediction markets to succeed or fail? What lessons can traders and forecasters draw for future elections?
Frequently Asked Questions
How accurate were prediction markets in the 2024 election?
Prediction markets showed mixed accuracy in 2024. For the presidential race, both Kalshi and Polymarket correctly identified the winner with higher confidence than most polling aggregates in the final days. However, accuracy varied significantly across Senate and gubernatorial races, with larger errors in low-liquidity markets. Overall, prediction markets performed comparably to high-quality polling aggregates, excelling at incorporating late-breaking information but showing partisan bias in some markets.
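One standard way to make "mixed accuracy" concrete is to score probabilistic forecasts against realized outcomes with the Brier score. The sketch below is illustrative only: the probabilities and outcomes are hypothetical placeholders, not figures from this analysis.

```python
# Illustrative sketch: scoring probabilistic election forecasts with the
# Brier score (mean squared error between forecast probability and outcome).
# The numbers below are placeholders, not actual 2024 market or poll data.

def brier_score(forecasts, outcomes):
    """Mean squared error of probabilities vs. 0/1 outcomes (lower is better)."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical final-day win probabilities for three races (candidate A wins = 1).
market_probs = [0.62, 0.55, 0.30]   # prediction market prices
poll_probs   = [0.52, 0.50, 0.35]   # polling-aggregate model probabilities
outcomes     = [1, 1, 0]            # realized results

print("Market Brier:", round(brier_score(market_probs, outcomes), 4))
print("Polls  Brier:", round(brier_score(poll_probs, outcomes), 4))
```

Comparing scores race by race, rather than only for the presidential contest, is what reveals the liquidity-dependent pattern described above.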
Did prediction markets beat the polls in 2024?
The answer depends on which races and metrics you examine. For the presidential race, prediction markets showed higher confidence in the eventual winner than polling averages in the final week. However, for many Senate races, polling aggregates were equally or more accurate. The key finding is that neither consistently dominated—prediction markets excelled at speed and information aggregation, while polls provided more stable baseline estimates.
Why did prediction markets show partisan bias in 2024?
Partisan bias emerged because many prediction market participants bet on outcomes they preferred rather than outcomes they believed were likely. This was particularly visible on platforms with concentrated user bases. When most traders share similar political views, the wisdom of crowds breaks down. Platforms with more diverse participant bases showed less partisan bias and more accurate prices.
How much money was traded on 2024 election prediction markets?
The 2024 election cycle saw record prediction market activity. Polymarket alone processed over $3 billion in trading volume on US election markets, making it the largest prediction market event in history. Kalshi also saw significant volume, though exact figures vary by market. This massive liquidity generally improved price discovery for major races while smaller races remained thinly traded.
Should I trust prediction markets or polls for election forecasting?
The best approach is to use both. Prediction markets excel at aggregating diverse information and incorporating late-breaking developments quickly. Polls provide scientifically sampled measures of voter preferences. When markets and polls agree, confidence should be high. When they diverge significantly, investigate why—sometimes markets have information polls miss, but sometimes markets reflect partisan bias rather than superior information.
What lessons can traders learn from 2024 election prediction markets?
Key lessons include: (1) Partisan bias creates mispricing opportunities—bet against extreme partisan sentiment; (2) Low-liquidity markets are less reliable and more volatile; (3) Late-breaking information moves markets faster than polls update; (4) Combining market prices with polling data improves forecasts; (5) Even accurate probabilities produce surprising outcomes—a 30% event happens 30% of the time.
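Lesson (5) is easy to underestimate, so here is a minimal simulation of what calibration means in practice: among many hypothetical races priced at 30%, roughly 30% resolve for the underdog even though every individual price was "right." The data are purely synthetic.

```python
# Illustrative sketch: a well-calibrated 30% forecast still "misses" 30% of the
# time. We simulate many hypothetical races priced at 0.30 and count how often
# the underdog wins. Synthetic numbers, not 2024 election data.
import random

random.seed(42)
n_races = 10_000
underdog_price = 0.30

underdog_wins = sum(random.random() < underdog_price for _ in range(n_races))
print(f"Underdog won {underdog_wins / n_races:.1%} of {n_races} simulated races")
# Roughly 30% of simulated races resolve against the favorite, even though the
# price was accurate in every single race.
```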
Key Concepts
Competing Explanatory Models
Market Superiority Model
Prediction markets aggregate diverse information sources more efficiently than polls, which only measure stated preferences. Markets incorporate early voting data, ground-level intelligence, and expert judgment through trading. The model predicts that prediction markets should consistently outperform polling aggregates, especially for close races where marginal information matters most.
Polling Superiority Model
Scientific polling with proper sampling and weighting provides more accurate forecasts than prediction markets, which are subject to selection bias (who trades), partisan bias (betting on preferred outcomes), and liquidity constraints. The model predicts that high-quality polling aggregates should outperform prediction markets, especially when pollsters correctly model likely voter turnout.
Complementary Information Model
Prediction markets and polls capture different types of information and are most accurate when combined. Polls measure stated preferences while markets aggregate expectations about outcomes. Divergence between polls and markets signals uncertainty or information asymmetry. The model predicts that ensemble forecasts combining both sources should outperform either alone.
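The ensemble idea in this model can be sketched as a simple weighted blend of a market-implied probability and a poll-based probability. The 50/50 weight and the inputs below are assumptions for illustration; in practice the weight would be fit on historical races.

```python
# Minimal ensemble sketch for the Complementary Information Model: blend a
# market-implied probability with a poll-based probability. The 0.5 weight
# and the example inputs are assumptions, not fitted values.

def ensemble_forecast(market_prob: float, poll_prob: float, w_market: float = 0.5) -> float:
    """Weighted average of market and poll win probabilities."""
    return w_market * market_prob + (1.0 - w_market) * poll_prob

# Hypothetical close race: market prices the favorite at 62%, polls imply 52%.
blended = ensemble_forecast(market_prob=0.62, poll_prob=0.52, w_market=0.5)
print(f"Blended win probability: {blended:.2f}")  # 0.57

# A large market-poll gap is itself a signal worth investigating before
# trusting either number on its own.
```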
Context-Dependent Accuracy Model
Neither prediction markets nor polls are universally superior—accuracy depends on race characteristics. Markets excel when information is dispersed and diverse participants trade actively. Polls excel when voter preferences are stable and turnout is predictable. The model predicts that relative accuracy varies systematically by race type, competitiveness, and information environment.
Verifiable Claims
Polymarket processed over $3 billion in trading volume on 2024 US election markets, making it the largest prediction market event in history. (Well-supported)
Prediction markets correctly identified the presidential winner with higher confidence than polling aggregates in the final week before the election. (Well-supported)
Prediction market accuracy varied significantly across Senate races, with larger errors in low-liquidity markets. (Well-supported)
Partisan bias was detectable in prediction market prices, with Republican-leaning traders overweighting Trump victory probability on some platforms. (Conceptually plausible)
Prediction markets incorporated late-breaking information (early voting data, last-minute polls) faster than polling aggregates updated their models. (Well-supported)
Inferential Claims
Combining prediction market prices with polling aggregates produces more accurate forecasts than either source alone. (Conceptually plausible)
Prediction market accuracy will improve in future elections as platforms mature and liquidity increases. (Conceptually plausible)
Systematic monitoring of prediction market-poll divergence can identify mispricing opportunities before elections resolve. (Conceptually plausible)
Prediction markets will eventually replace traditional polling as the primary election forecasting tool. (Speculative)
Noise Model
This analysis is subject to several sources of uncertainty that should be acknowledged.
- Single election cycle provides limited sample size for statistical conclusions
- Hindsight bias affects interpretation of prediction accuracy
- Different platforms and pollsters use different methodologies, complicating comparisons
- Outcome uncertainty means even accurate probabilities can produce surprising results
- Partisan composition of prediction market participants is not directly observable
- Future elections may have different dynamics than 2024
Implications
The 2024 election cycle demonstrated that prediction markets are valuable but imperfect forecasting tools. They excel at aggregating diverse information and incorporating late-breaking developments, but are susceptible to partisan bias and liquidity constraints. For traders, the key insight is that prediction market mispricing is most likely when: (1) markets are thin and dominated by partisan participants, (2) prices diverge significantly from polling aggregates without clear informational justification, and (3) late-breaking information has not yet been incorporated. For forecasters, the optimal approach combines prediction market prices with polling data rather than relying on either alone. As prediction markets mature and attract more diverse participants, their accuracy may improve—but the fundamental challenge of partisan bias will likely persist in politically charged markets.
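The three mispricing conditions above can be turned into a simple screening checklist. The sketch below is a toy example under stated assumptions: the thresholds (volume cutoff, divergence cutoff) and the example market are arbitrary placeholders, not calibrated values from this research.

```python
# Illustrative screen for the three mispricing conditions described above.
# Thresholds and the example market snapshot are placeholders, not calibrated.
from dataclasses import dataclass

@dataclass
class MarketSnapshot:
    name: str
    market_prob: float       # market-implied win probability
    poll_prob: float         # polling-aggregate win probability
    daily_volume_usd: float  # rough proxy for liquidity
    news_pending: bool       # late-breaking information not yet priced in?

def mispricing_flags(m: MarketSnapshot,
                     thin_volume=50_000,   # below this, treat as thin and partisan-prone
                     divergence=0.10):     # market-poll gap worth investigating
    return {
        "thin_market": m.daily_volume_usd < thin_volume,
        "poll_divergence": abs(m.market_prob - m.poll_prob) > divergence,
        "unpriced_news": m.news_pending,
    }

snapshot = MarketSnapshot("Hypothetical Senate race", 0.70, 0.55, 20_000, False)
print(mispricing_flags(snapshot))
# {'thin_market': True, 'poll_divergence': True, 'unpriced_news': False}
```

Flags like these identify markets worth investigating, not automatic trades; the divergence may reflect real information the polls lack.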
Research Integrity Statement
This research was produced using the A3P-L v2 (AI-Augmented Academic Production - Lean) methodology:
- Multiple explanatory models were evaluated
- Areas of disagreement are explicitly documented
- Claims are confidence-tagged based on evidence strength
- No single model output is treated as authoritative
- Noise factors and limitations are transparently disclosed
For more information about our research methodology, see our Methodology page.