GARCH-MIDAS model
To explore the contribution of the monthly frequency EPU index to the long-term volatility of daily frequency EUA futures, we adopt the GARCH-MIDAS model proposed by Engle et al. (2013). Unlike standard GARCH-type models, the GARCH-MIDAS model decomposes the conditional variance into long-term and short-term components. Short-term fluctuations remain driven by historical return information, whereas long-term fluctuations are characterized by low-frequency macroeconomic variables. The basic forms are as follows:
$$\begin{aligned} r_{i,t}= \mu +\sqrt{l_{t}s_{i,t}}\varepsilon _{i,t},\ \forall i=1,\ldots ,N_{t}, \end{aligned}$$
(1)
$$\begin{aligned} \sigma ^{2}_{i,t}= l_{t}s_{i,t}, \end{aligned}$$
(2)
where \(r_{i,t}\) refers to the log return of financial assets on day i of month t, and \(\mu\) is the unconditional mean of the return series. The term \(N_{t}\) denotes the number of trading days in month t. \(\varepsilon _{i,t}|\Phi _{i-1,t}\sim N(0,1)\), given the information set \(\Phi _{i-1,t}\) up to day (i − 1) of month t. The conditional variance of the daily return is divided into two components: a short-run component \(s_{i,t}\) and a long-run component \(l_{t}\), with \(\sigma ^{2}_{i,t}\) denoting the total conditional variance. The short-run volatility component \(s_{i,t}\) follows a traditional GARCH(1,1) process:
$$\begin{aligned} s_{i,t}=(1-\alpha -\beta )+\alpha \frac{(r_{i-1,t}-\mu )^{2}}{l_{t}}+\beta s_{i-1,t}, \end{aligned}$$
(3)
where \(\alpha\) and \(\beta\) are the parameters to be estimated for the ARCH and GARCH components, respectively, with \(\alpha >0\), \(\beta >0\), and \(\alpha +\beta <1\). Because the growth rate of the EPU index, denoted by \(X_{t-k}\), can take negative values, we follow Engle et al. (2013) and specify the long-term component in logarithmic form. This can be expressed as follows:
$$\begin{aligned} log(l_{t})=m+\theta \sum ^{K}_{k=1}\varphi _{k}(\omega _{1},\omega _{2})X_{t-k}, \end{aligned}$$
(4)
where m is an intercept and \(\theta\) is the slope that captures the weighted effect of the lagged low-frequency macroeconomic variable on the long-term volatility of financial asset returns. The term K denotes the maximum lag order in the MIDAS filter. The marginal effect depends on \(\theta\) and \(\omega\) (Conrad et al. 2014). The term \(\varphi _{k}(\omega _{1},\omega _{2})\) represents the beta weighting scheme with parameters \(\omega _{1}\) and \(\omega _{2}\), which can be expressed as follows:
$$\begin{aligned} \varphi _{k}(\omega _{1},\omega _{2})=\frac{(k/K)^{\omega _{1}-1}(1-k/K)^{\omega _{2}-1}}{\sum ^{K}_{j=1}(j/K)^{\omega _{1}-1}(1-j/K)^{\omega _{2}-1}}, \end{aligned}$$
(5)
$$\begin{aligned} \varphi _{k}(1,\omega _{2})= \frac{(1-k/K)^{\omega _{2}-1}}{\sum ^{K}_{j=1}(1-j/K)^{\omega _{2}-1}}, \end{aligned}$$
(6)
Equation 5 is the unrestricted weighting scheme, which can produce both monotonically decaying and hump-shaped weight distributions. Imposing the constraint \(\omega _{1}=1\) on Eq. 5 yields the restricted weighting scheme in Eq. 6. The restricted weighting function can only generate a decaying weight distribution, whose decay rate is determined by the parameter \(\omega _{2}\): the larger the value of \(\omega _{2}\), the faster the decay, and vice versa. Both beta weighting functions can be applied in the estimation of the GARCH-MIDAS model. Following Conrad and Loch (2015), we select the restricted weight function with \(\omega _{1}=1\). Equations 1, 3, 4, and 6 form a GARCH-MIDAS model based on the EPU growth rate. Quasi-maximum likelihood estimation (QMLE) is adopted to estimate the parameters in the parameter space \(\Theta =\{\mu ,\alpha ,\beta ,m,\theta ,\omega \}\). A sketch of the weighting scheme and the long-run component follows.
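To make the filter concrete, the following Python sketch implements the beta weights of Eqs. (5)–(6) and the logarithmic long-run component of Eq. (4). It is a minimal NumPy illustration; the function names and the parameter values in the usage comment are ours, not estimates from this study.

```python
import numpy as np

def beta_weights(K, w1=1.0, w2=5.0):
    """Beta lag polynomial of Eqs. (5)-(6); w1 = 1 yields the restricted,
    monotonically decaying scheme selected in this study."""
    k = np.arange(1, K + 1)
    raw = (k / K) ** (w1 - 1) * (1 - k / K) ** (w2 - 1)
    return raw / raw.sum()

def long_run_component(x, K, m, theta, w2):
    """Eq. (4): log(l_t) = m + theta * sum_k phi_k(1, w2) * X_{t-k}.
    x: 1-D array of monthly EPU growth rates, ordered oldest to newest.
    Returns l_t for every month with K available lags."""
    phi = beta_weights(K, 1.0, w2)
    # phi[0] weights the most recent lag X_{t-1}, phi[K-1] the oldest X_{t-K}.
    log_l = np.array([m + theta * phi @ x[t - K:t][::-1]
                      for t in range(K, len(x))])
    return np.exp(log_l)

# Illustrative call with K = 12 monthly lags (parameter values are arbitrary):
# l_t = long_run_component(epu_growth, K=12, m=0.05, theta=0.1, w2=3.0)
```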
Regression-based test for the specification of one-component GARCH
According to Conrad and Schienle (2020), GARCH models should be tested for misspecification in the sense of an omitted multiplicative long-term component. Hence, we apply the regression-based test proposed by Conrad and Schienle (2020) as a preliminary check before estimating the GARCH-MIDAS model. The regression model is considered in logarithmic form:
$$\begin{aligned} ln(\overline{RV}_{t})= c+w_{0}X_{t}+v_{t}, \end{aligned}$$
(7)
$$\begin{aligned} \overline{RV}_{t}= \sum ^{N_{t}}_{i=1}r^{2}_{i,t}/{\hat{\sigma }}^{2}_{i,t}, \end{aligned}$$
(8)
where \(X_{t}\) denotes the monthly explanatory variable, \(v_{t}\) is an independent and identically distributed error term, and \({\hat{\sigma }}^{2}_{i,t}\) is the variance estimated under the null hypothesis of a simple GARCH model. \(r_{i,t}\) refers to the daily log return. We define \(\overline{RV}_{t}\) as the sum of the volatility-adjusted squared daily returns within each month. The regression-based test checks whether the null hypothesis \(H_{0}: w_{0}=0\), under which the one-component GARCH is correctly specified, can be rejected when EPU is used as the explanatory variable.
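As a rough illustration, the following Python sketch performs this check using the `arch` and `statsmodels` packages: it fits the one-component GARCH(1,1) under the null, builds the monthly volatility-adjusted realized variance of Eq. (8), and runs the log regression of Eq. (7). The HAC standard errors are a simple stand-in for the misspecification-robust variance estimator of Conrad and Schienle (2020), and all function names are ours.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from arch import arch_model

def regression_based_test(daily_returns, monthly_x, hac_lags=12):
    """daily_returns: pd.Series of daily log returns with a DatetimeIndex.
    monthly_x: pd.Series of the monthly explanatory variable (e.g., EPU
    growth), indexed by month end. Returns (w0_hat, p_value)."""
    # 1. Estimate the one-component GARCH(1,1) under the null hypothesis.
    res = arch_model(daily_returns, mean='Constant', vol='GARCH',
                     p=1, q=1).fit(disp='off')
    sigma2 = res.conditional_volatility ** 2

    # 2. Monthly volatility-adjusted realized variance (Eq. 8).
    rv_bar = (daily_returns ** 2 / sigma2).groupby(pd.Grouper(freq='M')).sum()

    # 3. Log regression of Eq. (7); HAC standard errors substitute for the
    #    authors' robust variance estimator.
    df = pd.concat([np.log(rv_bar), monthly_x], axis=1, join='inner').dropna()
    ols = sm.OLS(df.iloc[:, 0], sm.add_constant(df.iloc[:, 1])).fit(
        cov_type='HAC', cov_kwds={'maxlags': hac_lags})
    return ols.params.iloc[1], ols.pvalues.iloc[1]
```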
MCS test
To assess the predictive power of the volatility forecasts from the GARCH-MIDAS and GARCH-type models, various loss functions are used to compare the accuracy of the different models, as using more criteria yields a more robust evaluation (Kou et al. 2021). Following Hansen et al. (2011), we employ six loss functions as criteria for evaluating the prediction accuracy of the volatility models in the empirical examination. The six loss functions are defined as follows:
$$\begin{aligned} MAE=\frac{1}{T}\sum ^{T}_{i=1}|\sigma ^{2}_{i}-{\hat{\sigma }}^{2}_{i}|, \end{aligned}$$
(9)
$$\begin{aligned} MSE= \frac{1}{T}\sum ^{T}_{i=1}(\sigma ^{2}_{i}-{\hat{\sigma }}^{2}_{i})^{2}, \end{aligned}$$
(10)
$$\begin{aligned} MAD= \frac{1}{T}\sum ^{T}_{i=1}|\sigma _{i}-{\hat{\sigma }}_{i}|, \end{aligned}$$
(11)
$$\begin{aligned} MSD= \frac{1}{T}\sum ^{T}_{i=1}(\sigma _{i}-{\hat{\sigma }}_{i})^{2}, \end{aligned}$$
(12)
$$\begin{aligned} QLIKE=\frac{1}{T}\sum ^{T}_{i=1}(log({\hat{\sigma }}^{2}_{i})+\sigma ^{2}_{i}/{\hat{\sigma }}^{2}_{i}), \end{aligned}$$
(13)
$$\begin{aligned} R^{2}LOG= \frac{1}{T}\sum ^{T}_{i=1}(log(\sigma ^{2}_{i}/{\hat{\sigma }}^{2}_{i}))^{2}, \end{aligned}$$
(14)
where \({\hat{\sigma }}^{2}_{i}\) denotes the variance on day i predicted by the different models, and \(\sigma ^{2}_{i}\) is the actual daily variance. The realized variance computed from intraday data would be the ideal proxy for the true conditional variance (Patton 2011). Nevertheless, owing to the unavailability of such data, we use the squared daily return as a proxy for the actual daily variance (Wei et al. 2017). Moreover, \(T\) represents the size of the out-of-sample prediction window.
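For reference, the six criteria can be computed in a few lines. The sketch below is our own helper, assuming squared daily returns serve as the variance proxy.

```python
import numpy as np

def loss_functions(actual_var, pred_var):
    """Eqs. (9)-(14). actual_var: daily variance proxy (here, squared daily
    returns); pred_var: model forecasts of the daily variance. Zero-return
    days should be floored at a small positive value before calling, since
    QLIKE and R2LOG involve logarithms and ratios."""
    a2 = np.asarray(actual_var, dtype=float)
    f2 = np.asarray(pred_var, dtype=float)
    a, f = np.sqrt(a2), np.sqrt(f2)
    return {
        'MAE':   np.mean(np.abs(a2 - f2)),       # Eq. (9)
        'MSE':   np.mean((a2 - f2) ** 2),        # Eq. (10)
        'MAD':   np.mean(np.abs(a - f)),         # Eq. (11)
        'MSD':   np.mean((a - f) ** 2),          # Eq. (12)
        'QLIKE': np.mean(np.log(f2) + a2 / f2),  # Eq. (13)
        'R2LOG': np.mean(np.log(a2 / f2) ** 2),  # Eq. (14)
    }
```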
However, the loss functions alone cannot establish whether the loss differences between models are statistically significant. After obtaining the loss function values, we therefore employ the model confidence set (MCS) test proposed by Hansen et al. (2011) to compare the prediction accuracy of the models. The MCS test has an advantage over conventional tests in that it does not require a benchmark model. Moreover, it allows for the possibility of multiple optimal models. The MCS test proceeds as follows:
First, an initial model set \(M_{0}\) is defined, containing the \(m\) volatility forecasting models to be evaluated. After calculating the out-of-sample forecast losses, we write them as \(L^{i}_{u,t}\) for model \(u\) under loss function \(i\) (out-of-sample window \(t=1,\ldots ,n\)). For any two models in \(M_{0}\), the relative loss differential is denoted as \(d^{i}_{uv,t}=L^{i}_{u,t}-L^{i}_{v,t}\ (u,v\in M_{0})\). The set of superior models is defined as \(M^{*}\), which can be expressed as follows:
$$\begin{aligned} M^{*}\equiv \{u\in M_{0}:E(d^{i}_{uv,t})\le 0\ for\ all\ v\in M_{0}\}, \end{aligned}$$
(15)
where \(E(d^{i}_{uv,t})\) represents the mathematical expectation of \(d^{i}_{uv,t}\) under the specific loss function \(i\). The MCS test is a sequence of significance tests on the models in \(M_{0}\), in which any model shown to be significantly inferior to the others is eliminated. The null hypothesis of the MCS test is as follows:
$$\begin{aligned} H_{0,M}:E(d^{i}_{uv,t})=0\ for\ all\ u,v\in M\subseteq M_{0} \end{aligned}$$
(16)
As shown above, the null hypothesis states that any two models in the set have equal predictive power. The MCS test is conducted through a sequence of significance tests in which models with significantly inferior predictive power are eliminated from \(M_{0}\) until no further model can be excluded. Given a significance level \(\alpha\), a p value larger than \(\alpha\) indicates that the model possesses superior out-of-sample predictive ability and survives the MCS test; the larger the p value, the higher the prediction accuracy of the corresponding model. Conversely, a model whose p value is smaller than \(\alpha\) is judged to have poor out-of-sample predictive ability and is removed during the MCS test.
Additionally, Hansen et al. (2011) recommend using the semi-quadratic statistic \(T_{SQ}\) and the range statistic \(T_{R}\) in the model assessment process. These statistics are defined as follows:
$$\begin{aligned} T_{SQ}=\mathop {max}_{u,v\in M_{0}}\frac{(\overline{d}^{i}_{uv})^{2}}{var(\overline{d}^{i}_{uv})}, \end{aligned}$$
(17)
$$\begin{aligned} T_{R}= \mathop {max}_{u,v\in M_{0}}\frac{|\overline{d}^{i}_{uv}|}{\sqrt{var(\overline{d}^{i}_{uv})}}, \end{aligned}$$
(18)
$$\begin{aligned} \overline{d}^{i}_{uv}= \frac{1}{n}\sum ^{n}_{t=1}d^{i}_{uv,t}. \end{aligned}$$
(19)
If the p values of the \(T_{SQ}\) and \(T_{R}\) statistics are larger than the given significance level \(\alpha\), then the null hypothesis \(H_{0}\) cannot be rejected. Because the asymptotic distributions of the statistics \(T_{SQ}\) and \(T_{R}\) depend on nuisance parameters, their true distributions are extraordinarily complicated. However, the bootstrap method can overcome this difficulty and readily yields the statistics \(T_{SQ}\) and \(T_{R}\) and the corresponding p values.
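To illustrate the procedure end to end, the sketch below implements a simplified MCS with the range statistic \(T_{R}\) of Eq. (18) and a moving-block bootstrap. It is a compact approximation of the Hansen et al. (2011) algorithm under our own simplifying conventions (e.g., the p value assigned to surviving models), not a full implementation.

```python
import numpy as np

def mcs_range(losses, alpha=0.10, B=1000, block=10, seed=0):
    """Simplified MCS test. losses: (n, m) array of per-period losses for
    m models under one loss function. Returns surviving model indices and
    MCS p values (larger = better)."""
    rng = np.random.default_rng(seed)
    n, m = losses.shape
    alive = list(range(m))
    pvals, p_run = {}, 0.0
    n_blocks = -(-n // block)  # ceiling division

    while len(alive) > 1:
        d = losses[:, alive, None] - losses[:, None, alive]  # d_{uv,t}
        dbar = d.mean(axis=0)                                 # Eq. (19)

        # Moving-block bootstrap replications of dbar.
        starts = rng.integers(0, n - block + 1, size=(B, n_blocks))
        reps = np.stack([d[(s[:, None] + np.arange(block)).ravel()[:n]]
                         .mean(axis=0) for s in starts])
        var_dbar = reps.var(axis=0)
        np.fill_diagonal(var_dbar, 1.0)                       # avoid 0/0 at u = v

        t = dbar / np.sqrt(var_dbar)
        T_R = np.abs(t).max()                                 # Eq. (18)
        T_boot = (np.abs(reps - dbar) / np.sqrt(var_dbar)).max(axis=(1, 2))
        p_run = max(p_run, float((T_boot >= T_R).mean()))     # MCS p value

        if p_run >= alpha:  # equal predictive ability not rejected: stop
            break
        worst = alive[int(t.max(axis=1).argmax())]  # largest relative loss
        pvals[worst] = p_run
        alive.remove(worst)

    # Surviving models enter the MCS; a single best model gets p = 1.
    for u in alive:
        pvals[u] = 1.0 if len(alive) == 1 else p_run
    return alive, pvals
```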