In this talk, we consider the estimation problem in generalized linear models when there are many potential predictors and some of them may have no influence on the response of interest. We consider two competing models: one includes all predictors, while the other restricts the coefficients to a candidate linear subspace based on subject-matter knowledge, prior information, or auxiliary data. We investigate the relative performance of Stein-type shrinkage, pretest, and penalty estimators (L1GLM, adaptive L1GLM, and SCAD) with respect to the full-model maximum likelihood estimator (MLE). We establish the asymptotic properties of the pretest and shrinkage estimators, including derivations of their asymptotic distributional biases and risks. In particular, we give conditions under which the shrinkage estimators are asymptotically more efficient than the full-model MLE. A Monte Carlo simulation study shows that the mean squared error (MSE) of an adaptive shrinkage estimator is comparable to that of the penalty estimators in many situations, and that it outperforms the penalty estimators when the dimension of the restricted parameter space is large. The Stein-type shrinkage and penalty estimators all improve substantially on the full-model MLE. Finally, the methodology is evaluated through an application to a real data set.
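The idea of shrinking the full-model MLE toward a restricted submodel can be sketched in a small simulation. The example below is a minimal illustration, not the estimators studied in the talk: it uses a Gaussian linear model (a GLM with identity link, so the MLE is ordinary least squares), a candidate restriction that sets the last block of coefficients to zero, and a simple Stein-type combination of the full and restricted estimators driven by a Wald-type statistic. All variable names and the simulation settings are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_mse(n=100, p1=4, p2=10, reps=500):
    """Compare the full-model MLE with a Stein-type shrinkage
    estimator that shrinks toward the restricted model in which
    the last p2 coefficients are zero. Sketch only."""
    p = p1 + p2
    beta = np.concatenate([np.ones(p1), np.zeros(p2)])  # restriction holds
    mse_full = mse_shrink = 0.0
    for _ in range(reps):
        X = rng.standard_normal((n, p))
        y = X @ beta + rng.standard_normal(n)
        # full-model MLE (OLS under the Gaussian GLM with identity link)
        b_full, *_ = np.linalg.lstsq(X, y, rcond=None)
        # restricted MLE: fit using only the unrestricted predictors
        b_restr = np.zeros(p)
        b_restr[:p1], *_ = np.linalg.lstsq(X[:, :p1], y, rcond=None)
        # Wald-type statistic for the candidate restriction
        resid = y - X @ b_full
        sigma2 = resid @ resid / (n - p)
        d = b_full - b_restr
        T = d @ (X.T @ X) @ d / sigma2
        # Stein-type shrinkage: pull the full MLE toward the restricted fit
        b_shrink = b_restr + (1.0 - (p2 - 2) / T) * d
        mse_full += np.sum((b_full - beta) ** 2)
        mse_shrink += np.sum((b_shrink - beta) ** 2)
    return mse_full / reps, mse_shrink / reps
```

When the restriction is correct (as in this simulation), the shrinkage estimator's aggregate MSE falls below that of the full-model MLE, mirroring the efficiency gains discussed in the abstract; as the true coefficients move away from the candidate subspace, the gain shrinks and the full-model MLE becomes competitive again.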
Perspectives on Human Bias versus Machine Bias: Generalized Linear Models
Thursday, April 18, 2013 - 16:00
Syed Ejaz Ahmed, Professor and Dean, Faculty of Mathematics & Science, Brock University
Room 4192, Earth Sciences Building (2207 Main Mall)