A growing literature proposes conditioning forecasts on historically similar states rather than treating past observations as equally informative. Relevance-Based Prediction (RBP) formalises this idea by weighting observations according to their similarity to current conditions and their historical informativeness, with extreme realisations typically assigned higher relevance. Proponents argue that such relevance weighting should weakly dominate Ordinary Least Squares (OLS) whenever observations differ in predictive relevance. We revisit this claim using a stylised nonlinear example, an empirical backtest that uses global asset-class returns to predict S&P 500 returns, and controlled simulations calibrated to realistic higher-order moments. Across these settings, relevance-weighted estimation does not generally outperform OLS; in fact, discarding the most relevant observations often yields equal or superior predictive accuracy. These results reflect a fundamental bias–variance trade-off: relevance weighting can reduce the bias that arises from pooling heterogeneous states, but it often induces substantial variance inflation through sample truncation and weight concentration. In weak-signal, noisy financial market environments, this variance penalty dominates. RBP is thus not generally superior to OLS as purported and should instead be treated as a context-dependent estimation tool.

Highlights

• Relevance-based prediction does not reliably outperform OLS in practice.
• Discarding high-relevance (low-probability) observations can improve forecasts.
• A bias–variance trade-off underpins relevance-based conditioning.
• Variance inflation from sample truncation dominates in weak-signal settings.
• Results hold across theory, simulation, and empirical data.
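The mechanism behind the bias–variance trade-off described above can be sketched in a few lines. The snippet below is an illustrative, simplified rendering of relevance-style weighting, not the paper's exact estimator: it scores each past observation by Mahalanobis similarity to current conditions plus Mahalanobis distance from the sample mean (informativeness), forms a forecast from the most relevant half of the sample, and compares it with an OLS forecast. All data, the weak-signal coefficients, and the 50% truncation threshold are hypothetical choices made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: weak linear signal buried in noise, mimicking the
# weak-signal financial setting discussed in the abstract.
n, k = 200, 3
X = rng.normal(size=(n, k))
beta = np.array([0.10, -0.05, 0.02])            # weak signal (assumed)
y = X @ beta + rng.normal(scale=1.0, size=n)    # noisy outcomes
x_t = rng.normal(size=k)                        # current conditions

# --- OLS benchmark: pool all observations equally ---
Z = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
yhat_ols = coef[0] + x_t @ coef[1:]

# --- Relevance-style weighting (simplified sketch) ---
# similarity: negative Mahalanobis distance from current conditions
# informativeness: Mahalanobis distance of each observation from the mean
mu = X.mean(axis=0)
Om_inv = np.linalg.inv(np.cov(X, rowvar=False))
sim = -0.5 * np.einsum('ij,jk,ik->i', X - x_t, Om_inv, X - x_t)
info = 0.5 * np.einsum('ij,jk,ik->i', X - mu, Om_inv, X - mu)
rel = sim + info

# Keep only the most relevant half of the sample (truncation), then form
# a relevance-weighted forecast from deviations around the subsample mean.
keep = rel >= np.median(rel)
r = rel[keep] - rel[keep].mean()
yhat_rbp = y[keep].mean() + r @ (y[keep] - y[keep].mean()) / (keep.sum() - 1)
```

The truncation step makes the variance mechanism concrete: the relevance forecast is built from roughly half as many observations, with weights concentrated on a few extreme points, so in a weak-signal regime its sampling variance can easily swamp any bias reduction relative to the full-sample OLS fit.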