GRADUATE ASSISTANT, UNIVERSITY OF CALABAR, CALABAR
The research proposed a new distribution.
Distribution theory is a mathematical area of statistics in which new mathematical models are proposed, proven with associated theorems, and applied to both life and non-life data sets to demonstrate their flexibility. This paves the way for researchers in other areas to appreciate the usefulness and application of statistics in human life.
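To make that workflow concrete, here is a minimal sketch, not taken from the research described above, of how a proposed distribution's flexibility is typically assessed: fit it alongside standard candidates by maximum likelihood and compare a fit criterion such as AIC. The data set and the candidate families below are placeholders.

```python
# Illustrative sketch: compare candidate distributions on a data set
# by maximum-likelihood fit and AIC (lower AIC = better trade-off).
# The data and candidate families are placeholders, not the author's.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)
data = rng.weibull(1.5, size=500)  # placeholder "life" data set

candidates = {
    "exponential": stats.expon,
    "gamma": stats.gamma,
    "weibull": stats.weibull_min,
}

for name, dist in candidates.items():
    params = dist.fit(data)                      # maximum-likelihood fit
    loglik = np.sum(dist.logpdf(data, *params))  # log-likelihood at the MLE
    aic = 2 * len(params) - 2 * loglik           # penalize extra parameters
    print(f"{name:12s} AIC = {aic:.1f}")
```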
Abstract: We revisit logistic regression and its nonlinear extensions, including multilayer feedforward neural networks, by showing that these classifiers can be viewed as converting input or higher-level features into Dempster-Shafer mass functions and aggregating them by Dempster's rule of combination. The probabilistic outputs of these classifiers are the normalized plausibilities corresponding to the underlying combined mass function. This mass function is more informative than the output probability distribution. In particular, it makes it possible to distinguish lack of evidence (when none of the features provides discriminant information) from conflicting evidence (when different features support different classes). This expressivity of mass functions allows us to gain insight into the role played by each input feature in logistic regression, and to interpret hidden unit outputs in multilayer neural networks. It also makes it possible to use alternative decision rules, such as interval dominance, which select a set of classes when the available evidence does not unambiguously point to a single class, thus trading a reduced error rate for higher imprecision.
Pub.: 05 Jul '18, Pinned: 08 Jul '18
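The construction described in the abstract above can be made concrete with a small sketch. The two-class frame and the per-feature mass values below are illustrative assumptions, not the paper's actual feature-to-mass mapping; the code shows Dempster's rule of combination and how normalized plausibilities yield a probabilistic output.

```python
# Sketch of the abstract's view of a classifier: per-feature evidence as
# Dempster-Shafer mass functions over a two-class frame {A, B}, combined
# with Dempster's rule; class probabilities are normalized plausibilities.
# Mass values are illustrative assumptions only.
from itertools import product

A, B = frozenset("A"), frozenset("B")
AB = A | B  # the whole frame: mass here expresses ignorance

def combine(m1, m2):
    """Dempster's rule: conjunctive combination, then renormalization."""
    raw, conflict = {}, 0.0
    for (s1, v1), (s2, v2) in product(m1.items(), m2.items()):
        inter = s1 & s2
        if inter:
            raw[inter] = raw.get(inter, 0.0) + v1 * v2
        else:
            conflict += v1 * v2  # product mass falling on the empty set
    return {s: v / (1.0 - conflict) for s, v in raw.items()}

def plausibility(m, s):
    return sum(v for t, v in m.items() if t & s)

# Feature 1 supports class A; feature 2 weakly supports B (some conflict).
m_feat1 = {A: 0.6, AB: 0.4}
m_feat2 = {B: 0.2, AB: 0.8}
m = combine(m_feat1, m_feat2)

pl_A, pl_B = plausibility(m, A), plausibility(m, B)
total = pl_A + pl_B
print("P(A) =", pl_A / total, " P(B) =", pl_B / total)
```

Note that the combined mass function `m` retains the mass assigned to the whole frame, so lack of evidence (mass on {A, B}) and conflicting evidence (mass discarded in normalization) remain distinguishable even when the normalized plausibilities are identical.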
Abstract: Publication date: Available online 28 June 2018. Source: Journal of Econometrics. Author(s): Tingting Cheng, Jiti Gao, Peter C.B. Phillips. The ergodic theorem shows that ergodic averages of the posterior draws converge in probability to the posterior mean under the stationarity assumption. The literature also shows that the posterior distribution is asymptotically normal as the sample size of the original data goes to infinity. To the best of our knowledge, there is little discussion of the large-sample behaviour of the posterior mean. In this paper, we aim to fill this gap. In particular, we extend the posterior mean idea to the conditional mean case, conditioning on a given vector of summary statistics of the original data. We establish a new asymptotic theory for the conditional mean estimator for the case when both the sample size of the original data and the number of Markov chain Monte Carlo iterations go to infinity. Simulation studies show that this conditional mean estimator has very good finite-sample performance. In addition, we employ the conditional mean estimator to estimate a GARCH(1,1) model for S&P 500 stock returns and find that it performs better than quasi-maximum likelihood estimation in terms of out-of-sample forecasting.
Pub.: 30 Jun '18, Pinned: 08 Jul '18
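To illustrate the ergodic-average idea this abstract builds on, here is a minimal sketch under simplified assumptions: a normal-mean model with a flat prior, sampled by random-walk Metropolis. It is not the paper's conditional mean estimator for summary statistics; it only shows the average of posterior draws converging to the (here analytically known) posterior mean as the number of MCMC iterations grows.

```python
# Illustrative only: ergodic average of MCMC draws vs. the posterior mean.
# Model: y_i ~ N(mu, 1) with a flat prior on mu, so the posterior mean
# equals the sample mean of y. Sampler: random-walk Metropolis.
import numpy as np

rng = np.random.default_rng(seed=1)
y = rng.normal(loc=2.0, scale=1.0, size=200)  # placeholder data

def log_post(mu):
    return -0.5 * np.sum((y - mu) ** 2)  # log-posterior up to a constant

draws, mu = [], 0.0
for _ in range(20_000):
    prop = mu + rng.normal(scale=0.5)  # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(mu):
        mu = prop                      # Metropolis accept step
    draws.append(mu)

burned = np.array(draws[5_000:])              # discard burn-in
print("ergodic average:", burned.mean())      # approximates posterior mean
print("analytic posterior mean:", y.mean())   # flat prior => sample mean
```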