
Humboldt-Universität zu Berlin - High Dimensional Nonstationary Time Series

Joint Work

IRTG cooperation: joint work by HU and XMU students
1 Leveraged ETF options implied volatility paradox: a statistical study

Sergey Nasekin, Zhiwu Hong, Wolfgang K. Härdle

(Journal of Financial Econometrics, revise and resubmit)

HTML link (SFB DP): SFB 649 Discussion Paper 2016-004



We study the statistical properties of the moneyness scaling transformation of Leung and Sircar (2015). This transformation adjusts the moneyness coordinate of the implied volatility smile in an attempt to remove the discrepancy between the IV smiles of leveraged and unleveraged ETF options. We construct bootstrap uniform confidence bands which indicate that, in a statistical sense, the implied volatility smiles may still differ even after moneyness scaling has been performed. This points to possible arbitrage opportunities in the (L)ETF market which can be exploited by traders. We build such arbitrage strategies by constructing portfolios of LETF shares and options which (possibly) have a positive value at the point of creation and a non-negative value at expiration. An empirical data application shows that such opportunities do exist in the market and result in risk-free gains for the investor. A dynamic "trade-with-the-smile" strategy based on a dynamic semiparametric factor model is also presented. This strategy exploits the dynamic structure of the implied volatility surface, allowing out-of-sample forecasting, and uses information on unleveraged ETF options to construct theoretical one-step-ahead implied volatility surfaces.
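The bootstrap uniform confidence band idea above can be sketched in a few lines. This is a minimal illustration, not the paper's actual procedure: the cubic pilot smooth, the residual bootstrap, and the toy smiles are all illustrative assumptions.

```python
import numpy as np

def uniform_band(x, y, n_boot=500, level=0.95, seed=0):
    """Residual-bootstrap uniform confidence band for a curve observed
    with noise: fit a pilot smooth, resample residuals, and use the
    (level)-quantile of the sup-norm deviation as the band half-width."""
    rng = np.random.default_rng(seed)
    coef = np.polyfit(x, y, deg=3)              # cubic pilot smooth (assumption)
    fit = np.polyval(coef, x)
    resid = y - fit
    sup = np.empty(n_boot)
    for b in range(n_boot):
        y_star = fit + rng.choice(resid, size=len(x), replace=True)
        fit_star = np.polyval(np.polyfit(x, y_star, deg=3), x)
        sup[b] = np.max(np.abs(fit_star - fit))
    half = np.quantile(sup, level)
    return fit - half, fit + half

# toy check: does a second (moneyness-scaled) smile stay inside the band?
x = np.linspace(-0.5, 0.5, 60)
rng = np.random.default_rng(1)
smile_a = 0.2 + 0.8 * x**2 + rng.normal(0, 0.01, x.size)
lo, hi = uniform_band(x, smile_a)
smile_b = 0.2 + 0.9 * x**2                      # hypothetical scaled LETF smile
inside = np.all((smile_b >= lo) & (smile_b <= hi))
```

If `inside` is False, the band rejects equality of the two smiles at the chosen level, which is the statistical reading of the "paradox" in the abstract.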



2 Likelihood ratio tests for dependent and noisy observations from functional data

Sebastian Holtz, Yingxing Li


In this work, we consider the task of testing hypotheses about the structure of the mean function from noisy functional data. The additive noise consists of an unknown continuous Gaussian process and i.i.d. errors. The main result concerns the generalisation of a pseudo-likelihood ratio test that attains the optimal rate. The test is based on a spectral pre-estimator of the unknown noise coefficients, and a key ingredient is an optimal choice of the truncation index. For this, strong asymptotic equivalence in Le Cam's sense between the initial model and its projection onto a finite dimensional subspace is established.


3 Autoregressive Conditional Localized Expectile Model

Xiu Xu, Andrija Mihoci

HTML link (SFB DP): SFB 649 Discussion Paper 2015-052



We account for time-varying parameters in the conditional expectile-based Value at Risk (EVaR) model. EVaR is more sensitive to the magnitude of portfolio losses than the quantile-based Value at Risk (QVaR); nevertheless, by fitting models over relatively long, ad hoc fixed time intervals, existing research ignores potentially time-varying parameters. Our work addresses this issue by exploiting the local parametric approach to quantify tail risk dynamics. By achieving a balance between parameter variability and modelling bias, one can safely fit a parametric expectile model over a stable interval of homogeneity. Empirical evidence from three stock markets over 2005-2014 shows that the intervals of parameter homogeneity cover approximately 1-6 months of daily observations. Our method performs favourably compared with models using one-year fixed intervals, as well as with quantile-based candidates, when employing a time-invariant portfolio protection (TIPP) strategy for the DAX portfolio. The tail risk measure implied by our model thus provides valuable insights for asset allocation and portfolio insurance.
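The contrast between expectiles and quantiles mentioned above is easy to see numerically. The following sketch computes a sample tau-expectile by iteratively reweighted averaging; it illustrates the definition only, not the paper's localized estimation procedure.

```python
import numpy as np

def expectile(x, tau=0.95, tol=1e-10, max_iter=100):
    """Sample tau-expectile via iteratively reweighted averaging.
    The tau-expectile minimises the asymmetric squared loss
    sum_i |tau - 1{x_i <= e}| * (x_i - e)^2; tau = 0.5 gives the mean."""
    x = np.asarray(x, dtype=float)
    e = x.mean()
    for _ in range(max_iter):
        w = np.where(x > e, tau, 1.0 - tau)     # asymmetric weights
        e_new = np.sum(w * x) / np.sum(w)
        if abs(e_new - e) < tol:
            break
        e = e_new
    return e

rng = np.random.default_rng(0)
sample = rng.normal(size=10_000)
```

Unlike the 0.95-quantile, the 0.95-expectile moves when tail losses grow in magnitude even if their frequency is unchanged, which is the sensitivity property the abstract refers to.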


4 High dimensional nonparametric hypothesis testing using the Monge-Kantorovich depth

Alexandra Suvorikova, Xiu Xu


In this project we consider quantiles, ranks and signs of a measure on R^d with d > 1. The idea is to use the Monge-Kantorovich depth, which generalises these concepts to higher dimensions. The procedure evaluates the optimal transport (OT) map (with respect to quadratic cost) of the measure into the uniform distribution on the unit ball. It can be shown that the OT-based notions specialise to their usual counterparts for d = 1 and for elliptical families. Furthermore, they can be evaluated empirically. As an application, we propose to test nonparametric hypotheses in a high dimensional setting.
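Empirically, the OT map under quadratic cost can be approximated on a finite sample by a discrete assignment problem. The following is a small illustrative sketch using scipy's assignment solver, not the authors' implementation; the reference sample on the unit ball is an assumption of the sketch.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def mk_ranks(data, seed=0):
    """Empirical Monge-Kantorovich ranks/signs for a point cloud in R^d:
    pair each observation with a reference point drawn uniformly on the
    unit ball by solving the discrete OT (assignment) problem under
    quadratic cost. The assigned point's norm acts as a multivariate
    rank and its direction as a multivariate sign."""
    n, d = data.shape
    rng = np.random.default_rng(seed)
    # uniform on the unit ball: random direction times radius^(1/d)
    dirs = rng.normal(size=(n, d))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    ref = dirs * rng.uniform(size=(n, 1)) ** (1.0 / d)
    # quadratic-cost matrix and optimal assignment
    cost = ((data[:, None, :] - ref[None, :, :]) ** 2).sum(axis=2)
    rows, cols = linear_sum_assignment(cost)
    ranks = np.empty_like(ref)
    ranks[rows] = ref[cols]
    return ranks

rng = np.random.default_rng(1)
cloud = rng.normal(size=(200, 2)) * [2.0, 0.5]   # elliptical toy cloud
r = mk_ranks(cloud)
```

Each row of `r` lies in the unit ball; test statistics for nonparametric hypotheses can then be built from these distribution-free ranks and signs.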


5 Joint Yield Curve Modelling with High Dimensional Functional Data

Chen Huang, Linlin Niu


Dozens of yield curves are observed across various bond types and maturities. We use high dimensional functional data techniques to extract the common patterns of the multivariate time series as principal components via sparse factor analysis. Compared with the Dynamic Nelson-Siegel (DNS) model, our proposed method can jointly estimate all bond types and reduce the dimension of the curves efficiently. The identified common factors explain characteristics of the whole macro market, e.g. the dynamics of credit risk and liquidity risk factors, as well as the dispersion between curves. Term structure analysis of the factor loadings, in turn, recovers the individual variations. In addition, we show that the new approach outperforms DNS in both in-sample fitting and out-of-sample forecasting.
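The common-factor extraction can be illustrated with a plain (non-sparse) principal component decomposition of a yield panel via SVD. This is a simplified sketch with simulated level and slope factors, not the proposed sparse factor method.

```python
import numpy as np

def curve_pca(curves, n_comp=3):
    """Principal components of a panel of curves (rows = days,
    columns = maturities) via SVD of the centred data matrix."""
    mean = curves.mean(axis=0)
    centred = curves - mean
    u, s, vt = np.linalg.svd(centred, full_matrices=False)
    loadings = vt[:n_comp]            # level/slope/curvature-type patterns
    scores = centred @ loadings.T     # factor time series
    explained = s[:n_comp] ** 2 / np.sum(s ** 2)
    return mean, loadings, scores, explained

# toy panel: level + slope factors across 10 maturities, 250 days (assumed)
rng = np.random.default_rng(0)
mats = np.linspace(0.25, 10, 10)
level = rng.normal(4, 0.5, size=(250, 1))
slope = rng.normal(0, 0.3, size=(250, 1))
panel = level + slope * (mats / 10) + rng.normal(0, 0.01, (250, 10))
mean, load, scores, expl = curve_pca(panel, n_comp=2)
```

In the joint-modelling setting, panels for several bond types would be stacked column-wise before the decomposition, so that the leading components capture market-wide factors shared across curves.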


6 Adaptive Penalized Macro-Factors in Bond Risk Excess Premium

Xinjue Li, Lenka Zboňáková


This project concerns forecasting high dimensional time series. The initial idea comes from the paper "Adaptive dynamic Nelson-Siegel term structure model with applications" by Ying Chen and Linlin Niu, published in the Journal of Econometrics in 2014. In the adaptive method, one searches for the longest homogeneous interval in the historical data, on which the prediction is then based. This has empirical applications, for example in modelling the bond risk excess premium, where one deals with high dimensionality in the form of a large number of macro factors influencing the premium. To simplify interpretation and prediction, we reduce the dimension using the iteratively weighted SCAD penalty. We proved the consistency of these estimators and combined the method with the adaptive method of the aforementioned publication. The combination of the two methods should select a sparse vector of coefficients that is homogeneous over the longest possible time interval. Having found this interval, one can explain which macro factors were significant in driving the premium historically and predict future values with lower prediction error and better interpretability. Simulations and a comparison of the proposed method with the method of "Adaptive dynamic Nelson-Siegel term structure model with applications" are currently in progress.
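The SCAD penalty mentioned above (Fan and Li, 2001) can be written down directly; it behaves like the lasso near zero but levels off, so large coefficients are not over-shrunk. A short sketch of the penalty function itself (the iterative weighting scheme is not shown):

```python
import numpy as np

def scad(theta, lam, a=3.7):
    """SCAD penalty: linear (lasso-like) for |t| <= lam, quadratic
    transition for lam < |t| <= a*lam, constant beyond a*lam."""
    t = np.abs(theta)
    small = t <= lam
    mid = (t > lam) & (t <= a * lam)
    return np.where(small, lam * t,
           np.where(mid, (2 * a * lam * t - t**2 - lam**2) / (2 * (a - 1)),
                    lam**2 * (a + 1) / 2))

pen = scad(np.array([0.1, 1.0, 5.0]), lam=0.5)
```

The three pieces join continuously at |t| = lam and |t| = a*lam, and the flat tail is what removes the bias that a pure L1 penalty would impose on large macro-factor coefficients.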


7 Robust Inference for Quantile Predictive Regressions with Persistent Predictors 

Haiqiang Chen, Chen Huang and Xiaosai Liao


This paper aims to solve the problems caused by persistent predictors in quantile predictive regressions and to provide an inference theory that is robust across different persistence categories for testing predictability.

The related literature focuses on two main issues. One is the so-called embedded endogeneity: although there is no asymptotic bias, a finite-sample bias still exists. The other is the unknown degree of persistence of the predictors, which leads to non-pivotal statistics that cannot be estimated consistently. We propose a new two-step estimation procedure in the quantile regression framework to address these two problems in turn. In the first step, a linear projection on the residuals is introduced to correct the bias; in the second step, a pivotal statistic, free of nuisance parameters, is constructed. The consistency and asymptotic normality of the estimator are derived formally. The power and size of the proposed test are compared with existing methods such as IVX-filtered quantile regression (Lee, 2016) in simulations. The predictability of US stock returns is revisited with our approach at different quantile levels. Risk management efficiency is examined by backtesting the estimated Value at Risk.
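The quantile regression building block underlying the procedure can be written as a linear program. The sketch below shows the standard LP formulation (minimise the pinball loss) solved with scipy; it illustrates plain quantile regression only, not the paper's two-step bias correction, and the toy data are assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def quantile_reg(X, y, tau=0.5):
    """Linear quantile regression in LP form: minimise
    sum(tau*u + (1-tau)*v) subject to y - X@beta = u - v, u, v >= 0,
    with beta split into positive and negative parts."""
    n, p = X.shape
    c = np.concatenate([np.zeros(2 * p),
                        tau * np.ones(n), (1 - tau) * np.ones(n)])
    A_eq = np.hstack([X, -X, np.eye(n), -np.eye(n)])
    res = linprog(c, A_eq=A_eq, b_eq=y,
                  bounds=[(0, None)] * (2 * p + 2 * n), method="highs")
    z = res.x
    return z[:p] - z[p:2 * p]

# toy: median regression recovers the slope under symmetric heavy-tailed noise
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, (300, 1))
X = np.hstack([np.ones((300, 1)), x])
y = 1.0 + 2.0 * x[:, 0] + rng.standard_t(df=3, size=300) * 0.1
beta = quantile_reg(X, y, tau=0.5)
```

Replacing `x` with a highly persistent (near-unit-root) predictor is exactly the setting in which the naive version of this estimator suffers the finite-sample bias the paper corrects.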


8 Hedge Strategy Based on Spectral Risk Measurement

Michael Fan, Meng-Jou Lu


The paper focuses on risk management, specifically the determination of the optimal hedge ratio. Starting from the marginal utility function, spectral risk measures are employed to empirically compute the optimal hedge ratio. Moreover, copulae are proposed to model the dependence structure between the spot and the futures position. The study provides valuable insights for hedging strategies that account for the risk attitude of investors. This is a joint HUB/XMU research project with Michael Qing Liang Fan.
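A spectral risk measure weights the quantiles of the loss distribution by a risk-aversion function. The sketch below uses the common exponential weighting and a grid search for the hedge ratio; the Gaussian spot/futures model and the parameter values are illustrative assumptions, and the copula-based dependence modelling of the paper is not shown.

```python
import numpy as np

def spectral_risk(losses, k=5.0, grid=1000):
    """Spectral risk measure with exponential risk-aversion weights
    phi(p) = k*exp(-k*(1-p)) / (1-exp(-k)); larger k puts more weight
    on the worst losses. Approximated on an equally spaced p-grid."""
    p = (np.arange(grid) + 0.5) / grid
    phi = k * np.exp(-k * (1.0 - p)) / (1.0 - np.exp(-k))
    return np.mean(phi * np.quantile(losses, p))

# toy hedge: spot losses and correlated futures losses (assumed model)
rng = np.random.default_rng(0)
spot = rng.normal(0.0, 1.0, 20_000)
fut = 0.9 * spot + rng.normal(0.0, 0.3, spot.size)
hs = np.linspace(0.0, 2.0, 41)
h_opt = hs[np.argmin([spectral_risk(spot - h * fut, k=5.0) for h in hs])]
```

Varying `k` changes the investor's risk attitude: a larger `k` concentrates the weight `phi` on the far tail, which is how the hedge ratio comes to depend on risk aversion.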


9 Multilevel Analysis of fMRI Data

Yingxing Li, Wolfgang Karl Härdle, Chen Huang


In neuroeconomics, an important research question is to investigate individuals' brain activity and infer their risk attitudes and perception. fMRI, a non-invasive technique for recording brain signals, provides measures related to the blood-oxygen-level-dependent (BOLD) signal. In this study, we analyse fMRI data recorded over time during multiple investment decisions in order to detect stimulus effects. As the data form a time series of more than 1000 images with roughly 100,000 voxels per individual, we must address the challenges posed by the complicated dynamics of such a massive data set. We exploit multilevel functional principal component analysis to extract both intra- and inter-subject information. Specifically, we employ fast multivariate penalized spline techniques to identify the active brain regions and project the high dimensional input onto a low dimensional subspace. We then analyse the relationship between the estimated loadings and individuals' risk attitudes to demonstrate the effectiveness of our methods.


10 Risk measures for the crypto-currency market and their applications

Haiqiang Chen and Simon Trimborn


The crypto-currency market has captured much public attention since Bitcoin's success in 2009. According to recent statistics, several hundred crypto currencies have been created and traded in the market. Crypto currencies are regarded as potential alternatives to standard fiat currencies, due to advantages such as low or no fees, a controlled and known algorithm of currency creation, and transparent information on all transactions (see Kristoufek, 2014). Unlike traditional currency or security markets, the crypto-currency market changes frequently, with new crypto currencies being created continuously and some existing ones being traded very infrequently. Moreover, most of the trading still comes from retail investors, and market prices are largely driven by investor sentiment or events. However, for such a risky but popular market, no proper risk measure has yet been developed, which makes risk management hard to implement. Using high-frequency trading data, this project addresses the following issues: 1) design a representative crypto-currency market index (CRIX) with a floating number of constituents in order to reflect the fast changes in the market; 2) construct a portfolio of other tradable market indices that mimics the movement of CRIX; 3) develop risk management strategies based on the index and portfolio from 1) and 2).
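The "floating number of constituents" idea can be sketched with a minimal cap-weighted index in which only currently listed coins (positive market cap) enter each day's weights. This is a simplified illustration in the spirit of such an index, not the actual CRIX construction rules; the toy caps and returns are assumptions.

```python
import numpy as np

def floating_index(caps, returns, base=1000.0):
    """Cap-weighted index with a floating constituent set: on each day
    only coins with cap > 0 receive weight, so listings and delistings
    change the basket without jumping the index level."""
    T, n = returns.shape
    idx = np.empty(T + 1)
    idx[0] = base
    for t in range(T):
        active = caps[t] > 0                      # today's constituent set
        w = caps[t, active] / caps[t, active].sum()
        idx[t + 1] = idx[t] * (1.0 + w @ returns[t, active])
    return idx

# toy: the third coin only lists on day 3 (cap 0 before that)
caps = np.array([[100., 50., 0.],
                 [100., 50., 0.],
                 [100., 50., 20.]])
rets = np.array([[0.01, -0.02, 0.00],
                 [0.00, 0.05, 0.10],
                 [0.02, 0.02, 0.02]])
index = floating_index(caps, rets)
```

A mimicking portfolio (issue 2 in the list above) would then be fitted to track the returns of `index` using tradable instruments.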


11 CRIX and PIIGS, are they co-integrating?

Mingyang Li, Simon Trimborn


This paper investigates including crypto currencies in investment portfolios to hedge the risk of economic crises. A preliminary statistical analysis shows that crypto currencies have a hedging effect against economic risks: the return of CRIX, an index of crypto currencies proposed in Trimborn and Härdle (2016), shows positive correlation with the Treasury bond yields of the "PIIGS" countries during periods when these bond yields are highly volatile. For a crisis country such as Portugal, portfolios containing both crypto currencies and stock market indices show large improvements in return compared with portfolios containing stock market indices alone. Further extensions of the paper could develop methods for investing in a market with limited market capitalisation and liquidity depth, such as the crypto-currency market, without distorting the market.


12 FRM: A Financial Risk Meter based on penalizing tail events occurrence

Lining Yu, Lukas Borke, Thijs Benschop

Published as SFB 649 Discussion Paper 2017-003


We propose a new measure for systemic risk: the Financial Risk Meter (FRM). This measure is based on the penalization parameter (lambda) of a linear quantile lasso regression. The FRM is calculated as the average of the penalization parameters over the 100 largest US publicly traded financial institutions. We demonstrate the suitability of this risk measure by comparing the proposed FRM to other measures of systemic risk, such as VIX, SRISK and Google Trends. We find that these measures are highly positively correlated with the FRM. In addition, mutual Granger causality exists between the FRM and these measures, which supports the validity of the FRM as a systemic risk measure. The implementation of this project is carried out using parallel computing. The visualization and the up-to-date FRM can be found at http://frm.wiwi.hu-berlin.de/.
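The core building block, an L1-penalised quantile regression, can be sketched with plain subgradient descent. This is an illustrative toy fit, not the project's solver or tuning rule; in an FRM-style application the tuned lambda of each institution's regression would be averaged, which is not shown here.

```python
import numpy as np

def quantile_lasso(X, y, tau=0.05, lam=0.1, steps=2000, lr=0.01):
    """L1-penalised quantile regression by subgradient descent:
    minimises mean pinball loss + lam * ||beta||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(steps):
        r = y - X @ beta
        # subgradient of the pinball loss, plus the lasso subgradient
        grad = -X.T @ np.where(r > 0, tau, tau - 1.0) / n
        beta -= lr * (grad + lam * np.sign(beta))
    return beta

# toy tail regression: the third regressor is irrelevant (assumed data)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -1.0, 0.0]) + 0.1 * rng.normal(size=200)
beta_hat = quantile_lasso(X, y, tau=0.05, lam=0.01)
```

The tuning parameter `lam` is the quantity of interest in the FRM: the stronger the penalisation needed to keep the tail regression sparse, the higher the measured systemic stress.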


13 Multivariate Factorisable Sparse Asymmetric Least Squares Regression

Shih-Kang Chao, Chen Huang

Submitted to Journal of Computational and Graphical Statistics


Numerous applications in finance, neuroeconomics, demographics as well as weather and climate analysis require extracting common patterns and thus prompt joint modelling of individual curve variation. Joint variation analysis has traditionally focused on fluctuations around a mean curve, a statistical task that can be solved via functional PCA. In many questions arising in the above applications, however, one is more interested in the tail, calling for studies of tail event curves (TEC). With increasing dimension of the curves and complexity of the covariates, one faces numerical problems and has to address sparsity-related issues. Here we propose FActorisable Sparse Tail Event Curves (FASTEC) via multivariate asymmetric least squares (expectile) regression in a high dimensional framework. Expectile regression captures the tail moments globally, and the smooth loss function improves the convergence rate of the iterative estimation algorithm compared with quantile regression. The necessary penalization is carried out via the nuclear norm. Finite-sample oracle properties of the estimator associated with the asymmetric squared error loss and the nuclear norm regularizer are studied formally in this paper.
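The nuclear norm penalisation mentioned above is applied, in proximal-type algorithms, through singular value soft-thresholding. The sketch below shows that operator on a toy low-rank-plus-noise matrix; it illustrates the penalty's proximal map only, not the FASTEC estimation algorithm, and the matrix sizes are assumptions.

```python
import numpy as np

def svt(M, threshold):
    """Singular value soft-thresholding: the proximal operator of the
    nuclear norm. Shrinking singular values towards zero induces a
    low-rank (factorisable) coefficient matrix."""
    u, s, vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - threshold, 0.0)
    return u @ np.diag(s_shrunk) @ vt

rng = np.random.default_rng(0)
low_rank = rng.normal(size=(20, 2)) @ rng.normal(size=(2, 15))  # rank 2
noisy = low_rank + rng.normal(0, 0.1, (20, 15))
recovered = svt(noisy, threshold=2.0)
```

Because the noise singular values are small, a moderate threshold zeroes them out and `recovered` is again (at most) rank 2, which is the factorisation the method exploits.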


14 Multivariate tail risk modelling for non-stationary time series via the local parametric approach

Yegor Klochkov, Xiu Xu


We consider the problem of multi-quantile estimation in multivariate time series using regression quantiles. This model has recently been used successfully to reveal tail dependence between seemingly independent assets. A natural approach assumes a vector autoregressive (VAR) model for the conditional quantiles and usually ignores possible time variation in the parameters. We account for parameter dynamics by using the local parametric approach (LPA) with multiscale change point detection. A testing procedure is (to be) used to choose an optimal data interval that balances estimator variability and modelling bias.


15 Document clustering to tackle Topic Modelling

Larisa Adamyan, Linxi Wang


Topic modelling aims to find patterns of words in document collections using probabilistic models called topic models. The discovered patterns often identify the semantic themes within documents and reflect the underlying topics that combine to form the documents. Topic modelling can therefore be viewed as a technique for clustering documents in topic space. To tackle this problem we propose a novel non-parametric method of cluster analysis called Adaptive Weights Clustering (AWC). For evaluation purposes we use SFB discussion paper abstracts.
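As a point of reference for what "clustering documents in topic space" means, the sketch below clusters tiny bag-of-words documents with TF-IDF features and plain k-means. This is a baseline illustration only, not the AWC procedure, and the toy corpus and deterministic initialisation are assumptions.

```python
import numpy as np

def tfidf(docs):
    """TF-IDF matrix for a list of token lists (rows = documents)."""
    vocab = sorted({w for d in docs for w in d})
    idx = {w: j for j, w in enumerate(vocab)}
    tf = np.zeros((len(docs), len(vocab)))
    for i, d in enumerate(docs):
        for w in d:
            tf[i, idx[w]] += 1.0
        tf[i] /= len(d)
    df = (tf > 0).sum(axis=0)
    return tf * np.log(len(docs) / df)

def kmeans(X, init=(0, 2), iters=50):
    """Plain k-means with fixed initial centres (a baseline, not AWC)."""
    centres = X[list(init)].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centres[None]) ** 2).sum(-1), axis=1)
        for j in range(len(init)):
            if np.any(labels == j):
                centres[j] = X[labels == j].mean(axis=0)
    return labels

docs = [["risk", "quantile", "lasso"], ["quantile", "risk", "var"],
        ["topic", "cluster", "word"], ["word", "topic", "document"]]
labels = kmeans(tfidf(docs), init=(0, 2))
```

In the actual evaluation, the documents would be SFB discussion paper abstracts and the fixed number of clusters would be replaced by AWC's adaptive, non-parametric grouping.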