Abstract
This paper provides novel tests for comparing the out-of-sample predictive ability of two or more
competing models that are possibly overlapping. The tests do not require pre-testing; they
allow for dynamic misspecification and are valid under different estimation schemes and loss
functions. In pairwise model comparisons, the test is constructed by adding a random perturbation to both the numerator and denominator of a standard Diebold-Mariano test statistic.
This prevents degeneracy in the presence of overlapping models but becomes asymptotically
negligible otherwise. The test is shown to control the Type I error probability asymptotically
at the nominal level, uniformly over all null data generating processes. A similar idea is used
to develop a superior predictive ability test for the comparison of multiple models against a
benchmark. Monte Carlo simulations demonstrate that our tests exhibit very good size control
in finite samples, reducing both over- and under-rejection relative to their competitors. Finally, an
application to forecasting U.S. excess bond returns provides evidence in favour of models using
macroeconomic factors.
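
To fix ideas, the display below sketches one possible form of the perturbed pairwise statistic; the notation $\bar{d}_P$, $\hat{\sigma}_P^2$, $\varepsilon_P$, and $\xi$, as well as the exact construction, are illustrative assumptions rather than the paper's definitions.
\[
  t_P \;=\; \frac{\sqrt{P}\,\bar{d}_P \,+\, \varepsilon_P\,\xi}{\sqrt{\hat{\sigma}_P^2 \,+\, \varepsilon_P^2}},
  \qquad \xi \sim \mathcal{N}(0,1)\ \text{independent of the data},
\]
where $\bar{d}_P$ is the mean loss differential over $P$ out-of-sample forecasts, $\hat{\sigma}_P^2$ estimates its long-run variance, and $\varepsilon_P > 0$ with $\varepsilon_P \to 0$. When the models overlap, $\bar{d}_P$ and $\hat{\sigma}_P^2$ both degenerate to zero and the statistic reduces to $\xi$, which remains standard normal; otherwise the perturbation is asymptotically negligible and a standard Diebold-Mariano statistic is recovered.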