
Table 4 Out-of-sample \(R^2\) (%) of Monitoring Forecasts

From: Robust monitoring machine: a machine learning solution for out-of-sample R\(^2\)-hacking in return predictability monitoring

First year in evaluation sample

| Forecasts | 1947 | 1957 | 1967 | 1977 | 1987 | 1997 | 2007 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Proposed forecast \(f^{(a)}\) | 0.50 | 0.37 | 0.36 | 0.14 | − 0.09 | − 0.10 | − 0.24 |
| Robust monitoring machine: \(f^{(m)}\) | 0.57 | 0.55 | 0.52 | 0.34 | 0.35 | 0.32 | 0.18 |
| Shrinkage: \((f^{(a)}+f^{(b)})/2\) | 0.29 | 0.21 | 0.21 | 0.09 | − 0.02 | − 0.03 | − 0.12 |
| Traditional robust monitoring forecasts | | | | | | | |
| DMSFE (60 months, \(\delta =1.0\)) | 0.40 | 0.30 | 0.39 | 0.24 | 0.04 | 0.01 | − 0.10 |
| DMSFE (60 months, \(\delta =0.5\)) | 0.45 | 0.42 | 0.38 | 0.19 | 0.10 | 0.14 | 0.09 |
| Logistic regression (feature engineering) | 0.21 | 0.11 | 0.06 | − 0.05 | − 0.36 | − 0.48 | − 0.46 |
| Logistic regression (no feature engineering) | 0.37 | 0.23 | 0.25 | 0.15 | − 0.08 | − 0.30 | − 0.60 |

  1. This table repeats Table 1, reporting the forecasting performance of monitoring forecasts in terms of out-of-sample \(R^2\) (%). Traditional robust monitoring forecasts are based on the discounted mean squared forecast error (DMSFE) of Stock and Watson (2004) and on logistic regressions. Each column corresponds to a different sample-split year, from 1947 to 2007. All evaluation periods end in December 2017.
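The two performance measures behind the table can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes the standard out-of-sample \(R^2\) definition (improvement in squared error over a benchmark forecast, typically the historical mean) and the DMSFE combination weights of Stock and Watson (2004), in which forecast \(i\) receives weight proportional to the inverse of its discounted sum of past squared errors with discount factor \(\delta\).

```python
import numpy as np

def oos_r2(actual, forecast, benchmark):
    """Out-of-sample R^2 in percent: 1 - SSE(forecast)/SSE(benchmark),
    where the benchmark is usually the historical-mean forecast."""
    sse_f = np.sum((np.asarray(actual) - np.asarray(forecast)) ** 2)
    sse_b = np.sum((np.asarray(actual) - np.asarray(benchmark)) ** 2)
    return 100.0 * (1.0 - sse_f / sse_b)

def dmsfe_weights(errors, delta=1.0):
    """DMSFE combination weights (Stock & Watson, 2004).

    errors : (T, K) array of past forecast errors for K candidate
             forecasts over a rolling window of T months (60 in Table 4).
    delta  : discount factor; delta < 1 down-weights older errors,
             delta = 1 reduces to equal discounting (plain MSFE).
    """
    errors = np.asarray(errors, dtype=float)
    T, _ = errors.shape
    # Oldest observation (t = 0) gets discount delta^(T-1), newest gets 1.
    discounts = delta ** np.arange(T - 1, -1, -1)
    m = discounts @ (errors ** 2)      # discounted MSFE per forecast, shape (K,)
    inv = 1.0 / m
    return inv / inv.sum()             # weights sum to one
```

For example, a forecast that matches the realized returns exactly scores an out-of-sample \(R^2\) of 100%, while one that underperforms the benchmark scores below zero, as in the right-hand columns of the table.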