Research Assistant, University of Ibadan
To inform economists and financial analysts in the choice of model when making decisions about risk portfolios.
GARCH models have been developed to account for empirical regularities in financial data. Many financial time series share a number of characteristics: asset prices are generally non-stationary while returns are usually stationary; some financial time series are fractionally integrated; return series usually show little or no autocorrelation; serial independence between the squared values of the series is often rejected, pointing towards non-linear relationships between subsequent observations; volatility of the return series appears to be clustered; normality has to be rejected in favor of some thick-tailed distribution; and some series exhibit the so-called leverage effect, that is, changes in stock prices tend to be negatively correlated with changes in volatility.
An empirical analysis of the mean return and conditional variance of the Nigeria Stock Exchange (NSE) index is performed using various error innovations in GARCH models. The conventional GARCH model, which assumes a normal error term, fails to capture volatility clustering, leptokurtosis and the leverage effect, since the normal distribution has zero skewness and zero excess kurtosis. We re-specify the error distribution of the GARCH(p,q) model using some thick-tailed distributions. Quasi-Maximum Likelihood Estimation (QMLE) was used for parameter estimation. Our results show that GARCH(1,1) and APARCH(1,1) models with non-normal densities improve the overall estimation of the conditional variance. The most robust model for the NSE index is determined by log-likelihood and model selection criteria. The prediction performance of these conditionally heteroscedastic models is compared using Root Mean Square Error (RMSE) and Mean Absolute Percentage Error (MAPE). The APARCH(1,1) model with Generalized Length-Biased Scaled-t innovations is the most robust for forecasting the Nigeria Stock Exchange index.
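To illustrate the volatility clustering and heavy tails that motivate these models, here is a minimal GARCH(1,1) simulation with Student-t innovations in pure Python. The parameter values are hypothetical and not estimates from the NSE data; they are only chosen so that alpha + beta < 1.

```python
import math
import random

random.seed(42)

# Hypothetical GARCH(1,1) parameters (NOT estimates from the NSE data);
# alpha + beta < 1 ensures a finite unconditional variance.
omega, alpha, beta = 0.05, 0.10, 0.85
nu = 5  # degrees of freedom of the heavy-tailed Student-t innovation


def student_t(df):
    """Draw from Student-t as a standard normal over sqrt(chi-square / df)."""
    z = random.gauss(0.0, 1.0)
    chi2 = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(df))
    return z / math.sqrt(chi2 / df)


def simulate_garch11(n):
    h = omega / (1.0 - alpha - beta)  # start at the unconditional variance
    returns = []
    for _ in range(n):
        r = math.sqrt(h) * student_t(nu)
        returns.append(r)
        h = omega + alpha * r ** 2 + beta * h  # conditional variance recursion
    return returns


rets = simulate_garch11(2000)
```

Plotting `rets` shows quiet periods interrupted by bursts of large moves of either sign: the clustering that a constant-variance normal model cannot reproduce.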
Abstract: The regularized theories are non-local at the scale of the cutoff, thus leading to the usual difficulties of non-local theories. In this work the conservation laws and causality are investigated for theories with multi-cluster action. The conservation laws are found to play a less significant role than in local theories because, due to the non-locality, the conserved quantities are not integrals of the motion, and they can exist even without underlying symmetries. Moreover, conservation of the energy cannot prevent the instability brought about by the unboundedness of the energy from below; a sufficient condition for stability is thereby lost. Theories obtained by appropriate point splitting of local interactions are shown to be causal.
Pub.: 07 Aug '17, Pinned: 21 Aug '17
Abstract: We provide novel characterizations of multivariate normality that incorporate both the characteristic function and the moment generating function, and we employ these results to construct a class of affine invariant, consistent and easy-to-use goodness-of-fit tests for normality. The test statistics are suitably weighted $L^2$-statistics, and we provide their asymptotic behavior both for i.i.d. observations and in the context of testing that the innovation distribution of a multivariate GARCH model is Gaussian. We also study the finite-sample behavior of the new tests and compare the new criteria with alternative existing tests.
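The idea behind such tests can be sketched informally: compare the empirical characteristic function of standardized data with the standard normal characteristic function exp(-t^2/2). The sketch below averages the squared discrepancy over a grid of t values; this is only an illustration, not the paper's weighted $L^2$ statistic, and the sample size and grid are arbitrary choices.

```python
import cmath
import math
import random

random.seed(1)

# Synthetic normal sample, standardized to mean 0 and variance 1
data = [random.gauss(0.0, 1.0) for _ in range(400)]
m = sum(data) / len(data)
s = math.sqrt(sum((x - m) ** 2 for x in data) / len(data))
z = [(x - m) / s for x in data]


def ecf(t):
    """Empirical characteristic function at t."""
    return sum(cmath.exp(1j * t * zi) for zi in z) / len(z)


# Average squared distance to the standard normal CF exp(-t^2/2) on a grid
grid = [0.1 * k for k in range(1, 31)]
dist = sum(abs(ecf(t) - math.exp(-t * t / 2)) ** 2 for t in grid) / len(grid)
# A small 'dist' is consistent with normality; the paper turns a weighted
# integral of this kind of discrepancy into a formal affine-invariant test.
```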
Pub.: 09 Jun '17, Pinned: 20 Aug '17
Abstract: Reconstruction of networks underlying complex systems is one of the most crucial problems in many areas of engineering and science. In this paper, rather than identifying parameters of complex systems governed by pre-defined models or taking some polynomial and rational functions as prior information for subsequent model selection, we put forward a general framework for nonlinear causal network reconstruction from time series with limited observations. Obtaining multi-source datasets via a data-fusion strategy, we propose a novel method, group lasso nonlinear conditional Granger causality, to handle the nonlinearity and directionality of complex networked systems. Specifically, our method can exploit different sets of radial basis functions to approximate the nonlinear interactions between each pair of nodes and integrates sparsity into grouped variable selection. The performance of our approach is first assessed with two types of simulated datasets, from a nonlinear vector autoregressive model and from nonlinear dynamic models, and then verified on the benchmark datasets from DREAM3 Challenge4. Effects of data size and noise intensity are also discussed. All of the results demonstrate that the proposed method achieves a higher area under the precision-recall curve.
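For contrast with the nonlinear, group-lasso formulation above, the classical linear Granger causality idea it generalizes can be sketched in a few lines: regress the target on its own lag with and without the lagged driver, and compare residual sums of squares. The data here are synthetic and the lag-1 setup is a deliberate simplification; least squares is solved via the normal equations in pure Python.

```python
import random

random.seed(0)

# Synthetic bivariate series where x drives y with one lag
n = 500
x = [random.gauss(0, 1)]
y = [random.gauss(0, 1)]
for t in range(1, n):
    x.append(0.5 * x[-1] + random.gauss(0, 1))
    # y_t depends on y_{t-1} and x_{t-1}, so x "Granger-causes" y
    y.append(0.3 * y[-1] + 0.8 * x[-2] + random.gauss(0, 1))


def ols_rss(rows, target):
    """Least squares via normal equations; returns residual sum of squares."""
    k = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    for i in range(k):  # augment with X'y
        A[i].append(sum(r[i] * ti for r, ti in zip(rows, target)))
    for i in range(k):  # forward elimination (X'X is positive definite)
        for j in range(i + 1, k):
            f = A[j][i] / A[i][i]
            A[j] = [aj - f * ai for aj, ai in zip(A[j], A[i])]
    coef = [0.0] * k
    for i in reversed(range(k)):  # back substitution
        coef[i] = (A[i][k] - sum(A[i][j] * coef[j] for j in range(i + 1, k))) / A[i][i]
    return sum((ti - sum(c * ri for c, ri in zip(coef, r))) ** 2
               for r, ti in zip(rows, target))


# Restricted model: y_t on [1, y_{t-1}]; unrestricted adds x_{t-1}
tgt = [y[t] for t in range(1, n)]
rss_r = ols_rss([[1.0, y[t - 1]] for t in range(1, n)], tgt)
rss_u = ols_rss([[1.0, y[t - 1], x[t - 1]] for t in range(1, n)], tgt)
# A large drop in RSS when the lagged driver is added signals Granger causality.
```

The paper's method replaces the linear lag terms with groups of radial basis function expansions and selects whole groups with a lasso penalty, which is what lets it capture nonlinear, directed interactions.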
Pub.: 09 Jun '17, Pinned: 20 Aug '17
Abstract: We use a nonparametric causality-in-quantile test to analyze the predictive ability of the wealth-to-income ratio (wy) for excess stock returns and their volatility. Our results reveal that wy is nonlinearly related to excess stock returns, and hence results from linear Granger causality tests cannot be deemed robust. When we apply the nonparametric causality-in-quantile test, we find that wy can predict excess stock returns over the majority of the conditional distribution, the exception being the extreme ends, that is, when the market is in deep bear or bull phases. However, wy has no predictability for the volatility of excess stock returns.
Pub.: 06 Jul '17, Pinned: 20 Aug '17
Abstract: The realized GARCH framework is extended to incorporate the two-sided Weibull distribution, for the purpose of volatility and tail risk forecasting in a financial time series. The realized range, as a competitor to realized variance or daily returns, is also employed in the realized GARCH framework. In addition, sub-sampling and scaling methods are applied to both the realized range and realized variance, to help deal with inherent micro-structure noise and inefficiency. An adaptive Bayesian Markov chain Monte Carlo method is developed and employed for estimation and forecasting; its properties are assessed and compared with maximum likelihood via a simulation study. Compared to a range of well-known parametric GARCH, GARCH with two-sided Weibull distribution and realized GARCH models, tail risk forecasting results across 7 market index return series and 2 individual assets clearly favor the realized GARCH models incorporating the two-sided Weibull distribution, especially those employing the sub-sampled realized variance and sub-sampled realized range, over a six-year period that includes the global financial crisis.
Pub.: 11 Jul '17, Pinned: 20 Aug '17
Abstract: Since interactions in neural systems occur across multiple temporal scales, it is likely that information flow will exhibit a multiscale structure, thus requiring a multiscale generalization of classical temporal precedence causality analysis like Granger's approach. However, the computation of multiscale measures of information dynamics is complicated by theoretical and practical issues such as filtering and undersampling. To overcome these problems, we propose a wavelet-based approach for multiscale Granger causality (GC) analysis, characterized by the following properties: (i) only the candidate driver variable is wavelet transformed, and (ii) the decomposition is performed using the à trous wavelet transform with a cubic B-spline filter. We measure GC at a given scale by including the wavelet coefficients of the driver time series, at that scale, in the regression model of the target. To validate our method, we apply it to publicly available scalp EEG signals, and we find that the eyes-closed resting condition is characterized by enhanced GC among channels at slow scales relative to the eyes-open condition, whilst standard Granger causality is not significantly different between the two conditions.
Pub.: 12 Jul '17, Pinned: 20 Aug '17
Abstract: Hamed Ahmad Almahadin and Gulcay Tuna, Asia-Pacific Journal of Accounting & Economics, published 2017-07-19. Article URL: http://www.tandfonline.com/doi/full/10.1080/16081625.2017.1354709?ai=10zf0&mi=47tg1r&af=R
Pub.: 19 Jul '17, Pinned: 20 Aug '17
Abstract: The study of causality or causal inference - how much a given treatment causally affects a given outcome in a population - goes well beyond correlation or association analysis of variables, and is critical in making sound data-driven decisions and policies in a multitude of applications. The gold standard in causal inference is performing "controlled experiments", which often is not possible due to logistical or ethical reasons. As an alternative, inferring causality on "observational data" based on the "Neyman-Rubin potential outcome model" has been extensively used in statistics, economics, and social sciences over several decades. In this paper, we present a formal framework for sound causal analysis on observational datasets that are given as multiple relations and where the population under study is obtained by joining these base relations. We study a crucial condition for inferring causality from observational data, called the "strong ignorability assumption" (the treatment and outcome variables should be independent in the joined relation given the observed covariates), using known conditional independences that hold in the base relations. We also discuss how the structure of the conditional independences in base relations given as graphical models helps infer new conditional independences in the joined relation. The proposed framework combines concepts from databases, statistics, and graphical models, and aims to initiate new research directions spanning these fields to facilitate powerful data-driven decisions in today's big data world.
Pub.: 08 Aug '17, Pinned: 20 Aug '17
Abstract: The rising number of novel pathogens threatening the human population has motivated the application of mathematical modeling for forecasting the trajectory and size of epidemics. We summarize the real-time forecasting results of the logistic equation during the 2015 Ebola challenge, which focused on predicting synthetic data derived from a detailed individual-based model of Ebola transmission dynamics and control. We also carry out a post-challenge comparison of two simple phenomenological models. In particular, we systematically compare the logistic growth model and a recently introduced generalized Richards model (GRM) that captures a range of early epidemic growth profiles, from sub-exponential to exponential growth. Specifically, we assess the performance of each model in estimating the reproduction number, generating short-term forecasts of the epidemic trajectory, and predicting the final epidemic size. During the challenge the logistic equation consistently underestimated the final epidemic size, peak timing and the number of cases at peak timing, with average mean absolute percentage errors (MAPE) of 0.49, 0.36 and 0.40, respectively. Post-challenge, the GRM, which has the flexibility to reproduce a range of epidemic growth profiles from early sub-exponential to exponential growth dynamics, outperformed the logistic growth model in ascertaining the final epidemic size as more incidence data were made available, while the logistic model underestimated the final epidemic size even with an increasing amount of data on the evolving epidemic. Incidence forecasts provided by the generalized Richards model performed better across all scenarios and time points than those of the logistic growth model, with mean RMS decreasing from 78.00 (logistic) to 60.80 (GRM).
Both models provided reasonable predictions of the effective reproduction number, but the GRM slightly outperformed the logistic growth model, with a MAPE of 0.08 compared to 0.10, averaged across all scenarios and time points. Our findings further support the inclusion of transmission models that incorporate flexible early epidemic growth profiles in the forecasting toolkit. Such models are particularly useful for quickly evaluating a developing infectious disease outbreak using only the case incidence time series of its early phase.
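The error metrics used to score these forecasts are straightforward to compute. A minimal sketch, using a toy logistic-growth trajectory and a deliberately misspecified forecast that underestimates the final size (all numbers illustrative, not from the challenge data):

```python
import math


def logistic(t, K, r, c0):
    """Logistic growth: carrying capacity K, growth rate r, initial size c0."""
    return K / (1.0 + (K / c0 - 1.0) * math.exp(-r * t))


K_true, r_true, c0 = 1000.0, 0.3, 5.0
observed = [logistic(t, K_true, r_true, c0) for t in range(30)]
# Forecast with an underestimated final size, mimicking the logistic
# model's bias described above (illustrative parameters only)
forecast = [logistic(t, 700.0, r_true, c0) for t in range(30)]

# Mean Absolute Percentage Error and Root Mean Square Error
mape = sum(abs(o - f) / o for o, f in zip(observed, forecast)) / len(observed)
rmse = math.sqrt(sum((o - f) ** 2 for o, f in zip(observed, forecast))
                 / len(observed))
```

MAPE expresses the average error relative to the observed values (so 0.10 means 10% off on average), while RMSE is in the units of the data and penalizes large misses more heavily; that is why the two metrics can rank competing forecasts differently.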
Pub.: 04 Dec '16, Pinned: 20 Aug '17