Humboldt-Universität zu Berlin - High Dimensional Nonstationary Time Series

C1 - Volatility

 

The availability of financial data on the lowest possible aggregation level has opened up new possibilities to estimate daily variances and covariances. A common and mathematically convenient way to describe high-frequency asset price processes is to model them on the basis of a continuous-time martingale which is, however, contaminated by an additive noise component capturing market microstructure frictions. Substantial progress on the estimation of the quadratic variation and integrated variance of asset prices under various assumptions on the noise process and possible jump components has been made over the past decade.
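As a simple illustration of this setting, the following sketch simulates an efficient log-price as a discretized martingale, adds i.i.d. microstructure noise, and shows how the naive realized variance is inflated at the highest sampling frequencies. All parameter values (number of ticks, volatility, noise standard deviation) are illustrative assumptions, not taken from the literature.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions: one tick per second over a 6.5-hour day,
# an assumed daily volatility of the efficient price, and i.i.d. Gaussian noise.
n = 23400
sigma = 0.2 / np.sqrt(252)      # assumed daily volatility of the efficient price
noise_std = 5e-5                # assumed microstructure noise standard deviation

dt = 1.0 / n
efficient = np.cumsum(sigma * np.sqrt(dt) * rng.standard_normal(n))   # martingale part
observed = efficient + noise_std * rng.standard_normal(n)             # noisy observations

def realized_variance(prices, step):
    """Sum of squared log-returns sampled every `step` observations."""
    sampled = prices[::step]
    return np.sum(np.diff(sampled) ** 2)

true_iv = sigma ** 2   # integrated variance of the simulated efficient price
for step in (1, 10, 60, 300):
    rv = realized_variance(observed, step)
    print(f"sampling every {step:4d} ticks: RV = {rv:.3e}, true IV = {true_iv:.3e}")
```

With these assumed values the noise term inflates the one-tick realized variance noticeably; removing exactly this bias is what noise-robust estimators of integrated variance are designed for.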
 
However, how to optimally use high-frequency data to estimate high-dimensional covariance matrices is still an open question. The latter are crucial in risk management, portfolio analysis and asset pricing. Besides overcoming the problem of the asynchronicity of processes, resulting estimators must be well conditioned in order to be of practical use. Barndorff-Nielsen et al (2011) propose a multivariate realized kernel estimator which can handle large dimensions and is computationally tractable. However, its way of synchronizing observations leads to a dramatic loss of data, which makes the estimator quite inefficient in high-dimensional problems. Hautsch et al (2010) propose a blocking and regularization approach in which the covariance matrix is estimated block-wise (and thus more efficiently) and regularized in a second step. This approach is pushed to its limit by Lunde et al (2011), who estimate each entry of the covariance matrix individually, resulting in a composite kernel which, however, produces an ill-conditioned covariance matrix estimate. Both theoretically and empirically it is still unclear how to use high-frequency data most efficiently while ensuring positive definiteness and well-conditioning. There is an obvious (but unknown) tradeoff between the efficiency gained by block-wise or even element-wise estimation in the first step and the regularization required in the second step. Moreover, it is unknown how to regularize optimally for a specific application, e.g., in a portfolio optimization problem.
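To make the two-step idea concrete, the following toy sketch (a stylized illustration, not the estimators of the cited papers) mimics an element-wise covariance estimate under asynchronous observation by pairwise deletion and then regularizes it by linear shrinkage towards its diagonal. The missingness mechanism, the shrinkage weight `alpha` and all data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def pairwise_covariance(returns, observed):
    """Toy composite estimator: each entry (i, j) uses only the periods in
    which both series are observed (random missingness mimics asynchronicity)."""
    T, d = returns.shape
    cov = np.empty((d, d))
    for i in range(d):
        for j in range(d):
            both = observed[:, i] & observed[:, j]
            xi, xj = returns[both, i], returns[both, j]
            cov[i, j] = np.mean((xi - xi.mean()) * (xj - xj.mean()))
    return cov

def shrink_to_diagonal(cov, alpha):
    """Linear shrinkage towards the diagonal target; for alpha large enough
    this restores positive definiteness, since the target is positive definite."""
    return (1 - alpha) * cov + alpha * np.diag(np.diag(cov))

# Illustrative data: 50 correlated return series, 80 periods, 30% of points missing.
d, T = 50, 80
returns = rng.standard_normal((T, d)) @ rng.standard_normal((d, d)) * 0.01
observed = rng.random((T, d)) > 0.3

raw = pairwise_covariance(returns, observed)
regularized = shrink_to_diagonal(raw, alpha=0.5)

print("smallest eigenvalue, raw        :", np.linalg.eigvalsh(raw).min())
print("smallest eigenvalue, regularized:", np.linalg.eigvalsh(regularized).min())
```

In practice the shrinkage intensity (or the block structure in Hautsch et al, 2010) would itself have to be chosen data-dependently, which is precisely the open tuning problem raised above.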
 
In the spirit of Reiß (2011), an abstract result by Bibinger and Reiß shows that for two assets the estimation of each asset's integrated volatility can be improved by using the other asset's data, provided that the (unknown) correlation between the two assets does not vanish. This gain in efficiency is accomplished by noise reduction over different time scales. It is conjectured that for highly correlated portfolios this gain in efficiency grows like 1/m in the number m of assets. This conjecture has to be verified mathematically for simple models, and a more robust estimator has to be developed for more realistic scenarios, e.g. taking into account an unknown structure in the noise.
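A deliberately stylized caricature of this effect (an assumption-laden toy model, not the Bibinger-Reiß estimator): if m assets share the same efficient price but carry independent noise, averaging their observed prices shrinks the noise variance by 1/m, and the noise-induced bias of a simple realized variance shrinks accordingly.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative assumptions: perfectly correlated efficient prices and i.i.d. noise.
n, sigma, noise_std = 23400, 0.2 / np.sqrt(252), 5e-5
dt = 1.0 / n
efficient = np.cumsum(sigma * np.sqrt(dt) * rng.standard_normal(n))

def realized_variance(prices):
    return np.sum(np.diff(prices) ** 2)

for m in (1, 5, 25):
    observed = efficient[:, None] + noise_std * rng.standard_normal((n, m))
    pooled = observed.mean(axis=1)   # cross-sectional average of the m noisy prices
    print(f"m = {m:2d}: RV of pooled price = {realized_variance(pooled):.3e} "
          f"(true IV = {sigma**2:.3e})")
```

Whether and how this 1/m behaviour carries over to realistic correlation and noise structures is exactly the open question stated above.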
 
Besides the estimation of high-dimensional asset price co-variations, a further task is to model these processes in order to make predictions. Given the high dimensionality of the underlying process, this requires techniques of dimension reduction to break the curse of dimensionality and to reduce the impact of estimation error. In this context, factor models have proven useful, as they also directly produce well-conditioned covariance matrices. This is particularly true when not only the covariance matrix but also its inverse has to be predicted, see Fan et al (2008). Hautsch et al (2011) propose a model for (pre-estimated) covariance matrices building on a decomposition of the factorized covariance into its spectral components. The latter evolve on different frequencies according to a parametric multivariate time series structure. As an application-driven approach, they provide first empirical evidence on the usefulness of high-frequency data in realistic (out-of-sample) prediction settings. However, also in this context, it remains unclear how dimension reduction techniques, (potential) regularization and modelling steps should be optimally brought together.
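As a minimal sketch of why factor structure helps (a generic PCA-based construction on assumed synthetic data, not the model of Hautsch et al, 2011): keeping the leading principal components and replacing the residual part by its diagonal yields a covariance estimate that is well conditioned and cheap to invert.

```python
import numpy as np

rng = np.random.default_rng(3)

def factor_covariance(returns, k=3):
    """Low-rank-plus-diagonal covariance: keep the k leading principal
    components as 'factors' and diagonalize the idiosyncratic remainder."""
    sample = np.cov(returns, rowvar=False)
    eigval, eigvec = np.linalg.eigh(sample)          # eigenvalues in ascending order
    top_val, top_vec = eigval[-k:], eigvec[:, -k:]
    systematic = (top_vec * top_val) @ top_vec.T     # rank-k factor part
    idiosyncratic = np.diag(np.diag(sample - systematic))
    return systematic + idiosyncratic

# Illustrative data: 100 series, 120 periods, a true 3-factor structure plus noise.
d, T = 100, 120
factors = rng.standard_normal((T, 3))
loadings = rng.standard_normal((3, d))
returns = factors @ loadings + 0.5 * rng.standard_normal((T, d))

sample = np.cov(returns, rowvar=False)
structured = factor_covariance(returns, k=3)

print("condition number, sample covariance :", np.linalg.cond(sample))
print("condition number, factor covariance :", np.linalg.cond(structured))
```

The number of factors k and the synthetic data are assumptions chosen only to illustrate the conditioning argument.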
 
The need for a (parametric) prediction model, however, opens up the possibility to directly incorporate high-frequency and low-frequency (e.g., daily) information. Such an approach is pursued, e.g., by Noureldin et al (2011) and Hansen et al (2010), who directly combine realized measures and low-frequency (typically daily) observations in multivariate GARCH-type settings. The idea behind mixing frequencies is to 'smooth' over time and make models less sensitive to erratic changes that challenge prediction performance. Indeed, several studies (e.g., Hautsch et al, 2011, Halbleib and Voev, 2011) show that daily movements of realized correlations are much more unstable than those of variances. Hence, smoothing over time by incorporating low-frequency information stabilizes and improves predictions.
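A stylized scalar recursion in the spirit of these mixed-frequency GARCH-type models (the parameter values are illustrative assumptions, not estimates from the cited papers): tomorrow's conditional variance is driven by today's realized measure and smoothed through its own lag.

```python
import numpy as np

def heavy_type_forecast(realized_measures, omega=0.05, alpha=0.35, beta=0.60):
    """One-step-ahead conditional variance recursion
       h[t+1] = omega + alpha * RM[t] + beta * h[t]   (illustrative parameters)."""
    h = np.empty(len(realized_measures) + 1)
    h[0] = realized_measures.mean()      # start at the unconditional level
    for t, rm in enumerate(realized_measures):
        h[t + 1] = omega + alpha * rm + beta * h[t]
    return h

# Usage with a toy series of 250 daily realized variances
rng = np.random.default_rng(4)
rm = np.abs(1.0 + 0.1 * rng.standard_normal(250))
forecasts = heavy_type_forecast(rm)
print("last five one-step-ahead variance forecasts:", np.round(forecasts[-5:], 4))
```

The persistence parameter beta is what 'smooths' over time: the larger it is, the less the forecast reacts to a single erratic realized measure.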
 
In the context of multivariate GARCH-type models, the positive definiteness of covariance estimates is only achieved by quite restrictive parametric assumptions. Here, too, an obvious alternative is to model each entry of the covariance matrix individually (also including realized measures) and to regularize the resulting covariance matrix estimate accordingly. In a pure GARCH setting, such an approach has recently been proposed by Rosenow (2008) using methods from random matrix theory.
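A minimal eigenvalue-clipping sketch in this spirit (a generic random-matrix-theory recipe on assumed synthetic data, not the exact procedure of Rosenow, 2008): eigenvalues of the sample correlation matrix below the Marchenko-Pastur upper edge are treated as noise and flattened to their mean.

```python
import numpy as np

def clip_eigenvalues(corr, T):
    """Replace the 'noise bulk' of eigenvalues (below the Marchenko-Pastur
    upper edge for i.i.d. data) by its mean and rescale to unit diagonal."""
    d = corr.shape[0]
    lambda_max = (1.0 + np.sqrt(d / T)) ** 2
    eigval, eigvec = np.linalg.eigh(corr)
    noise = eigval < lambda_max
    eigval[noise] = eigval[noise].mean()
    cleaned = (eigvec * eigval) @ eigvec.T
    scale = np.sqrt(np.diag(cleaned))
    return cleaned / np.outer(scale, scale)

# Illustrative data: a one-factor market plus idiosyncratic noise.
rng = np.random.default_rng(5)
d, T = 100, 250
market = rng.standard_normal((T, 1))
returns = 0.5 * market @ np.ones((1, d)) + rng.standard_normal((T, d))
corr = np.corrcoef(returns, rowvar=False)
cleaned = clip_eigenvalues(corr, T)

print("condition number before:", np.linalg.cond(corr))
print("condition number after :", np.linalg.cond(cleaned))
```

The Marchenko-Pastur edge used here is the i.i.d. benchmark; how to adapt such cleaning to dependent, noisy high-frequency estimates is part of the open regularization question.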
 
These considerations and results still leave numerous theoretical and empirical questions open. Despite its importance in financial practice, this research field is still very young and widely unexplored. While some of the problems, e.g., regarding the optimality of regularization and the tradeoff between efficiency and regularization, are mathematically hard and can presumably only be tackled in quite simplified frameworks, other problems are of a more empirical nature and require careful data analysis and simulation evidence.
 
 
 
Coordination
 
  • Markus Reiß: His main research interest is mathematical statistics. His research includes work on nonparametric statistics, statistics for stochastic processes, statistical inverse problems, stochastic differential equations, applications in finance and medical imaging.

 

  • Zhi Liu: His main research interests are nonparametric statistics, volatility estimation, high dimensional financial econometrics.

 

  • Kent Wang: His main research interests are corporate security analysis, derivatives and risk management. His research includes work on asset pricing, financial engineering, investment, risk analysis.

 

  • Nikolaus Hautsch: His main interests are in Econometric models and Empirical Finance. His recent research concentrates on linear and nonlinear time series models, latent factor models, econometrics for financial transaction data, market microstructure analysis, information processing and analysis of limit order book markets.

 

  • Ming Lin: His main interests are Monte Carlo methods, self-selection and dimension reduction methods. His research includes work on Monte Carlo algorithms, Bayesian statistics and nonparametric statistics.

 

  • Linlin Niu: Her interests are Macro Finance, International Economics, Applied Econometrics.