
Shrinkage Parameter Selection In Generalized Linear And Mixed Models


DOWNLOAD

Download Shrinkage Parameter Selection In Generalized Linear And Mixed Models in PDF/ePub format, or read books online in Mobi format. Click the Download or Read Online button to get the Shrinkage Parameter Selection In Generalized Linear And Mixed Models book now. This website allows unlimited access to, at the time of writing, more than 1.5 million titles, including hundreds of thousands of titles in various foreign languages. If the content is not found or appears blank, refresh this page.



Shrinkage Parameter Selection In Generalized Linear And Mixed Models


DOWNLOAD
Author : Erin K. Melcon
Language : en
Publisher :
Release Date : 2014

Shrinkage Parameter Selection In Generalized Linear And Mixed Models was written by Erin K. Melcon and released in 2014. It is available in PDF, TXT, EPUB, Kindle, and other formats.


Penalized likelihood methods such as the lasso, adaptive lasso, and SCAD have been widely used in linear models. Selection of the penalty parameter is an important step in modeling with penalized techniques. Traditionally, information criteria or cross-validation are used to select the penalty parameter. Although methods of selecting this parameter have been evaluated in linear models, generalized linear models and linear mixed models have not been as thoroughly explored. This dissertation introduces a data-driven bootstrap approach (Empirical Optimal Selection, or EOS) for selecting the penalty parameter, with a focus on model selection. We implement EOS to select the penalty parameter for the lasso and adaptive lasso. In generalized linear models, we introduce the method, show simulations comparing EOS to information criteria and cross-validation, and give theoretical justification for this approach. We also consider a practical upper bound for the penalty parameter, with theoretical justification. In linear mixed models, we use EOS with two different objective functions: the traditional log-likelihood approach (which requires an EM algorithm), and a predictive approach. In both cases, we compare selecting the penalty parameter with EOS to selection with information criteria. Theoretical justification for both objective functions and a practical upper bound for the penalty parameter in the log-likelihood case are given. We also apply our technique to two datasets: the South African heart data (logistic regression) and the Yale infant data (a linear mixed model). For the South African data, we compare the final models chosen by EOS and by information criteria via the mean squared prediction error (MSPE). For the Yale infant data, we compare our results to those obtained by Ibrahim et al. (2011).
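The abstract does not spell out the EOS algorithm itself, but the general flavor of a bootstrap-driven, selection-focused choice of the lasso penalty can be sketched. The Python sketch below scores each candidate penalty by how stably it reselects the same variable set across bootstrap resamples; the stability score, the function names, and the synthetic data are our illustrative assumptions, not Melcon's actual procedure.

```python
# Hypothetical sketch of bootstrap-based penalty selection for the lasso.
# It illustrates a data-driven, selection-focused criterion; it is NOT the
# EOS algorithm of the dissertation, whose details the abstract leaves out.
import numpy as np
from sklearn.linear_model import Lasso

def lasso_support(X, y, lam):
    """Set of indices of nonzero lasso coefficients at penalty lam."""
    fit = Lasso(alpha=lam, max_iter=10_000).fit(X, y)
    return frozenset(np.flatnonzero(fit.coef_))

def bootstrap_penalty(X, y, lambdas, n_boot=100, seed=0):
    """Pick the penalty whose selected model is most stable under resampling."""
    rng = np.random.default_rng(seed)
    n = len(y)
    scores = []
    for lam in lambdas:
        full_support = lasso_support(X, y, lam)   # model chosen on the full data
        agree = 0
        for _ in range(n_boot):
            idx = rng.integers(0, n, size=n)      # bootstrap resample
            agree += lasso_support(X[idx], y[idx], lam) == full_support
        scores.append(agree / n_boot)             # selection stability in [0, 1]
    return lambdas[int(np.argmax(scores))]

# Example on synthetic sparse data (note: raw stability alone can favor an
# empty model at very large penalties, so the grid is kept moderate):
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(size=200)
print("selected penalty:", bootstrap_penalty(X, y, np.logspace(-2, -0.5, 8)))
```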



Variable Selection Procedures For Generalized Linear Mixed Models In Longitudinal Data Analysis


DOWNLOAD
Author :
Language : en
Publisher :
Release Date : 2004

Variable Selection Procedures For Generalized Linear Mixed Models In Longitudinal Data Analysis was released in 2004. It is available in PDF, TXT, EPUB, Kindle, and other formats.


Model selection is important for longitudinal data analysis, but to date little work has been done on variable selection for generalized linear mixed models (GLMMs). In this paper we propose and study a class of variable selection methods. A full likelihood (FL) approach is proposed for simultaneous model selection and parameter estimation. Because of the intensive computation involved in the FL approach, a penalized quasi-likelihood (PQL) procedure is developed so that model selection in GLMMs can proceed in the framework of linear mixed models. Since the PQL approach produces biased parameter estimates for sparse binary longitudinal data, a two-stage penalized quasi-likelihood (TPQL) approach is proposed to correct the estimation bias of PQL: PQL is used for model selection at the first stage, and existing software is used for parameter estimation at the second stage. A marginal approach for some special types of data is also developed. A robust estimator of the standard error of the fitted parameters is derived based on a sandwich formula, and a bias correction is proposed to improve the estimation accuracy of PQL for binary data. The sampling performance of the four proposed procedures is evaluated through extensive simulations and their application to real data analysis. In terms of model selection, all of them perform similarly. As for parameter estimation, FL, AML and TPQL yield similar results. Compared with FL, the other procedures greatly reduce the computational load. The proposed procedures can be extended to longitudinal data analysis involving missing data, and the shrinkage-penalty-based approach allows them to work even when the number of observations n is less than the number of parameters d.
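The TPQL recipe above (select with a penalized fit, then re-estimate the chosen submodel with standard software) has a simple fixed-effects analogue that can be sketched in Python. The sketch below uses an L1-penalized logistic regression for the selection stage and an unpenalized refit for the estimation stage; it omits random effects entirely, so it illustrates the two-stage workflow rather than the paper's PQL-based method, and the data and penalty strength are our own.

```python
# Two-stage "select, then refit" sketch in the spirit of TPQL. Random effects
# are omitted for brevity, so this is an analogy to the workflow only.
import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 8))
eta = 1.2 * X[:, 0] - 0.8 * X[:, 2]                  # only columns 0 and 2 matter
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))

# Stage 1: L1-penalized fit, used only to choose variables.
stage1 = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
selected = np.flatnonzero(stage1.coef_.ravel())

# Stage 2: unpenalized refit of the selected submodel for (less biased) estimation.
stage2 = sm.Logit(y, sm.add_constant(X[:, selected])).fit(disp=0)
print("selected columns:", selected)
print(stage2.params)
```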



Linear And Generalized Linear Mixed Models And Their Applications


DOWNLOAD
Author : Jiming Jiang
Language : en
Publisher : Springer Nature
Release Date : 2021-03-22

Linear And Generalized Linear Mixed Models And Their Applications was written by Jiming Jiang and published by Springer Nature on 2021-03-22 in the Medical category. It is available in PDF, TXT, EPUB, Kindle, and other formats.


This book covers two major classes of mixed effects models, linear mixed models and generalized linear mixed models. It presents an up-to-date account of theory and methods in the analysis of these models as well as their applications in various fields. The book offers a systematic approach to inference about non-Gaussian linear mixed models. Furthermore, it includes recently developed methods, such as mixed model diagnostics, mixed model selection, and the jackknife method in the context of mixed models. The book is aimed at students, researchers and other practitioners who are interested in using mixed models for statistical data analysis.



Linear Mixed Model Selection Via Minimum Approximated Information Criterion


DOWNLOAD
Author : Olivia Abena Atutey
Language : en
Publisher :
Release Date : 2020

Linear Mixed Model Selection Via Minimum Approximated Information Criterion was written by Olivia Abena Atutey and released in 2020 in the Linear models (Statistics) category. It is available in PDF, TXT, EPUB, Kindle, and other formats.


The analyses of correlated, repeated measures, or multilevel data with a Gaussian response are often based on models known as linear mixed models (LMMs). LMMs are built from both fixed effects and random effects. The random intercepts (RI) and random intercepts and slopes (RIS) models are the two special cases of linear mixed models considered here. Our primary focus in this dissertation is to propose an approach for simultaneous selection and estimation of fixed effects only in LMMs. This dissertation, inspired by recent research on methods and criteria for model selection, aims to extend a variable selection procedure referred to as the minimum approximated information criterion (MIC) of Su et al. (2018). Our contribution presents further use of the MIC for variable selection and sparse estimation in LMMs. Thus, we design a penalized log-likelihood procedure referred to as the minimum approximated information criterion for LMMs (lmmMAIC), which is used to find a parsimonious model that better generalizes data with a group structure. Our proposed lmmMAIC method enforces variable selection and sparse estimation simultaneously by adding a penalty term to the negative log-likelihood of the linear mixed model. The method differs from existing regularized methods mainly in its penalty parameter and penalty function.

With regard to the penalty function, the lmmMAIC mimics the traditional Bayesian information criterion (BIC)-based best subset selection (BSS) method but requires a continuous, smooth approximation to the L0-norm penalty of BSS. In this context, lmmMAIC performs sparse estimation by optimizing an approximated information criterion, which requires approximating the L0-norm penalty of BSS with a continuous unit dent function. A unit dent function, motivated by bump functions called mollifiers (Friedrichs, 1944), is an even continuous function with range [0, 1]. Among several unit dent functions, the hyperbolic tangent function is preferred. The approximation replaces the discrete L0-norm penalty of BSS with a continuous, smooth one, making our method less computationally expensive. Moreover, the hyperbolic tangent function has a simple form, and its derivatives are easy to compute. This shrinkage-based method fits a linear mixed model containing all p predictors instead of comparing and selecting a correct sub-model across 2^p candidate models. On this account, the lmmMAIC is feasible for high-dimensional data. The replacement, however, does not by itself enforce sparsity, since the hyperbolic tangent function is not singular at the origin. To handle this issue, a reparameterization trick of the regression coefficients is needed to achieve sparsity.

For a finite number of parameters, numerical investigations by Shi and Tsai (2002) show that a traditional information criterion (IC)-based procedure like BIC can consistently identify a model. Following these suggestions of consistent variable selection and computational efficiency, we retain the fixed BIC penalty parameter. Thus, our newly proposed procedure avoids commonly applied practices such as generalized cross-validation (GCV) for selecting an optimal penalty parameter in our penalized likelihood framework. The lmmMAIC also requires less computation time than other regularization methods.

We formulate the lmmMAIC procedure as a smooth optimization problem and solve for the fixed effects parameters by minimizing the penalized log-likelihood function. The implementation involves an initial step that uses the simulated annealing algorithm to obtain estimates. We then use these estimates as starting values for the modified Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm, run until convergence. After this step, we plug the estimates obtained from the modified BFGS into the reparameterized hyperbolic tangent function to obtain our fixed effects estimates. Alternatively, the optimization of the penalized log-likelihood can be solved using generalized simulated annealing.

Our research explores the performance and asymptotic properties of the lmmMAIC method through extensive simulation studies with different model settings. The numerical results of our simulations are compared to standard shrinkage-based methods for LMMs, such as lasso, ridge, and elastic net. The results provide evidence that lmmMAIC is more consistent and efficient than the existing shrinkage-based methods under study. Furthermore, two applications with real-life examples illustrate the effectiveness of the lmmMAIC method.
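The smooth L0 surrogate described above is easy to state concretely: each term tanh(a*beta_j^2) is near 0 when beta_j is near 0 and near 1 otherwise, so the sum smoothly approximates the count of nonzero coefficients in a BIC-type criterion. Below is a minimal Python sketch with a Gaussian linear-model objective; the scale constant a and the objective itself are our illustrative choices, not the exact lmmMAIC criterion.

```python
# Sketch of a BIC-type objective with the smoothed L0 penalty described above:
# log(n) * ||beta||_0 is replaced by log(n) * sum_j tanh(a * beta_j^2).
# The scale constant `a` and the plain linear-model likelihood are our
# illustrative assumptions, not the exact lmmMAIC objective.
import numpy as np

def approx_bic(beta, X, y, a=100.0):
    """Gaussian negative log-likelihood (up to constants) plus smoothed L0 penalty."""
    n = len(y)
    rss = np.sum((y - X @ beta) ** 2)
    smooth_l0 = np.sum(np.tanh(a * beta ** 2))   # ~ number of nonzero coefficients
    return n * np.log(rss / n) + np.log(n) * smooth_l0

# tanh(a * b^2) is ~0 at b = 0 and ~1 for |b| away from 0, so the penalty
# counts "effectively nonzero" coefficients while remaining differentiable:
print(np.tanh(100.0 * np.array([0.0, 0.01, 0.5, 2.0]) ** 2))
```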



Multivariate Statistical Modelling Based On Generalized Linear Models


DOWNLOAD
Author : Ludwig Fahrmeir
Language : en
Publisher : Springer Science & Business Media
Release Date : 2013-11-11

Multivariate Statistical Modelling Based On Generalized Linear Models was written by Ludwig Fahrmeir and published by Springer Science & Business Media on 2013-11-11 in the Mathematics category. It is available in PDF, TXT, EPUB, Kindle, and other formats.


Concerned with the use of generalised linear models for univariate and multivariate regression analysis, this is a detailed introductory survey of the subject, based on the analysis of real data drawn from a variety of fields such as the biological sciences, economics, and the social sciences. Where possible, technical details and proofs are deferred to an appendix in order to provide an accessible account for non-experts. Topics covered include: models for multi-categorical responses, model checking, time series and longitudinal data, random effects models, and state-space models. Throughout, the authors have taken great pains to discuss the underlying theoretical ideas in ways that relate well to the data at hand. As a result, numerous researchers whose work relies on the use of these models will find this an invaluable account.



Data Science With Matlab Predictive Techniques Generalized Linear Models And Nonlinear Regression


DOWNLOAD
Author : A. Vidales
Language : en
Publisher : Independently Published
Release Date : 2019-02-10

Data Science With Matlab Predictive Techniques Generalized Linear Models And Nonlinear Regression was written by A. Vidales and published by Independently Published on 2019-02-10 in the Business & Economics category. It is available in PDF, TXT, EPUB, Kindle, and other formats.


Data science includes a set of statistical techniques that allow knowledge to be extracted from data automatically. One of the fundamental techniques in data science is the treatment of regression models. Regression is the process of fitting models to data; the models must have numerical responses. The regression process depends on the model. If a model is parametric, regression estimates the parameters from the data. If a model is linear in the parameters, estimation is based on methods from linear algebra that minimize the norm of a residual vector. If a model is nonlinear in the parameters, estimation is based on search methods from optimization that minimize the norm of a residual vector.

The outcome of a response variable might be one of a restricted set of possible values. If there are only two possible outcomes, such as a yes or no answer to a question, these responses are called binary responses. If there are multiple outcomes, they are called polytomous responses. Examples include the degree of a disease (mild, medium, severe), preferred districts to live in a city, and so on. When the response variable is nominal, there is no natural order among the response categories. Nominal response models explain and predict the probability that an observation is in each category of a categorical response variable. A nominal response model is one of several natural extensions of the binary logit model and is also called a multinomial logit model. The multinomial logit model explains the relative risk of being in one category versus being in the reference category, k, using a linear combination of predictor variables. Consequently, the probability of each outcome is expressed as a nonlinear function of p predictor variables.

Lasso is a regularization technique for estimating generalized linear models. Lasso includes a penalty term that constrains the size of the estimated coefficients; in this respect it resembles ridge regression. Lasso is a shrinkage estimator: it generates coefficient estimates that are biased to be small. Nevertheless, a lasso estimator can have smaller error than an ordinary maximum likelihood estimator when applied to new data. Unlike ridge regression, as the penalty term increases, the lasso technique sets more coefficients to zero. This means that the lasso estimator yields a smaller model, with fewer predictors. As such, lasso is an alternative to stepwise regression and other model selection and dimensionality reduction techniques. Elastic net is a related technique, akin to a hybrid of ridge regression and lasso regularization. Like lasso, elastic net can generate reduced models by producing zero-valued coefficients. Empirical studies suggest that the elastic net technique can outperform lasso on data with highly correlated predictors.

Generalized linear mixed-effects (GLME) models describe the relationship between a response variable and independent variables using coefficients that can vary with respect to one or more grouping variables, for data with a response variable distribution other than normal. You can think of GLME models as extensions of generalized linear models (GLMs) for data that are collected and summarized in groups, or as a generalization of linear mixed-effects models (LMEs) for data where the response variable is not normally distributed. A mixed-effects model consists of fixed-effects and random-effects terms. Fixed-effects terms are usually the conventional linear regression part of the model. Random-effects terms are associated with individual experimental units drawn at random from a population, and account for variations between groups that might affect the response. The random effects have prior distributions, whereas the fixed effects do not.
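The contrast the description draws between ridge (shrinks all coefficients), lasso (shrinks and zeroes some out), and elastic net (a compromise suited to correlated predictors) is easy to demonstrate. The book itself works in MATLAB; the following equivalent sketch uses Python's scikit-learn on simulated, highly correlated predictors of our own choosing.

```python
# Ridge keeps all coefficients nonzero; lasso zeroes some out; elastic net
# sits in between. A quick demonstration on correlated predictors.
import numpy as np
from sklearn.linear_model import ElasticNet, Lasso, Ridge

rng = np.random.default_rng(2)
z = rng.normal(size=(150, 1))
X = np.hstack([z + 0.1 * rng.normal(size=(150, 1)) for _ in range(5)])  # 5 correlated columns
y = X[:, 0] + rng.normal(size=150)

for model in (Ridge(alpha=1.0), Lasso(alpha=0.1), ElasticNet(alpha=0.1, l1_ratio=0.5)):
    coef = model.fit(X, y).coef_
    print(f"{type(model).__name__:>10}: nonzero = {np.count_nonzero(coef)}, coef = {np.round(coef, 2)}")
```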



Generalized Linear Mixed Models


DOWNLOAD
Author : Charles E. McCulloch
Language : en
Publisher : IMS
Release Date : 2003

Generalized Linear Mixed Models was written by Charles E. McCulloch and published by IMS in 2003 in the Mathematics category. It is available in PDF, TXT, EPUB, Kindle, and other formats.


Wiley Series in Probability and Statistics. A modern perspective on mixed models: the availability of powerful computing methods in recent decades has thrust linear and nonlinear mixed models into the mainstream of statistical application. This volume offers a modern perspective on generalized, linear, and mixed models, presenting a unified and accessible treatment of the newest statistical methods for analyzing correlated, nonnormally distributed data. As a follow-up to Searle's classic Linear Models and to Variance Components by Searle, Casella, and McCulloch, this new work progresses from the basic one-way classification to generalized linear mixed models. A variety of statistical methods are explained and illustrated, with an emphasis on maximum likelihood and restricted maximum likelihood. An invaluable resource for applied statisticians and industrial practitioners, as well as students interested in the latest results, Generalized, Linear, and Mixed Models features:
* A review of the basics of linear models and linear mixed models
* Descriptions of models for nonnormal data, including generalized linear and nonlinear models
* Analysis and illustration of techniques for a variety of real data sets
* Information on the accommodation of longitudinal data using these models
* Coverage of the prediction of realized values of random effects
* A discussion of the impact of computing issues on mixed models



Evaluation Of Smoothing In The Context Of Generalized Linear Mixed Models


DOWNLOAD
Author : Muhammad Mullah
Language : en
Publisher :
Release Date : 2017

Evaluation Of Smoothing In The Context Of Generalized Linear Mixed Models was written by Muhammad Mullah and released in 2017. It is available in PDF, TXT, EPUB, Kindle, and other formats.


"Nonparametric regression models continue to receive more attention and appreciation with the advance in both statistical methodology and computing software over the last three decades. These methods use smooth, flexible functional forms of the predictor to describe the dependency of the mean of responses on a set of covariates. The shape of the smooth curve is directly estimated from the data. While several competing approaches are available for such modelling, penalized splines (P-splines) are a powerful and applicable smoothing technique that restricts the influence of knots in regression splines. P-splines can be viewed as a particular case of generalized linear mixed models (GLMMs). To achieve a smooth function, we can use the GLMM to shrink the regression coefficients of knot points from a regression spline towards zero, by including them as random effects. The resulting models are referred to as semiparametric mixed models (SPMMs). The main advantage of this approach is that the smoothing parameter, which controls the trade-off between bias and variance, may be directly estimated from the data. Moreover, we can take full advantage of existing methods and software for GLMMs. This thesis addresses several unresolved methodological issues related to the implementation of SPMMs, especially for binary outcomes. First, how best to estimate flexible regression curves when the outcomes are correlated and binary is unclear, especially when cluster sizes are small and also when the validity of the model assumptions are violated. In this regard, in the first manuscript, I compare the performance of the likelihood-based and Bayesian approaches to estimate SPMMs for correlated binary data. I also investigate the effect of concurvity (analogous to multicollinearity in linear regression) among covariates on estimates of SPMMs components, an issue that has not yet been studied in the SPMMs context. Next, while it is evident that SPMMs performed very well in recapturing the true curves, it remained unclear how curve fitting via SPMMs impacts the estimation of correlation and variance parameters in complicated data situations arising from, for example, longitudinal studies where data are both over-dispersed and serially correlated. In the second manuscript, I extend the SPMM for analyzing over-dispersed and serially correlated longitudinal data and systematically evaluate the effect of smoothing using SPMMs on the correlation and variance parameter estimates. I also compare the performance of SPMMs to other simpler approaches for estimating the nonlinear association such as fractional polynomials, and quadratic polynomial. Finally, in the third manuscript, I introduce a novel LASSO type penalized splines in the SPMM setting to investigate if the curve fitting performance can be improved using a LASSO type absolute value penalty (to the changes in fit at knots) rather than using typical ridge regression penalty. All these methods are also applied to different real-life data sets." --



Shrinkage Tuning Parameter Selection With A Diverging Number Of Parameters


DOWNLOAD
Author : Hansheng Wang
Language : en
Publisher :
Release Date : 2008

Shrinkage Tuning Parameter Selection With A Diverging Number Of Parameters was written by Hansheng Wang and released in 2008. It is available in PDF, TXT, EPUB, Kindle, and other formats.


Contemporary statistical research frequently deals with problems involving a diverging number of parameters. For those problems, various shrinkage methods (e.g., LASSO and SCAD) are particularly useful for variable selection (Fan and Peng, 2004; Huang et al., 2007b). Nevertheless, the desirable performance of those shrinkage methods hinges heavily on an appropriate selection of the tuning parameters. With a fixed predictor dimension, Wang et al. (2007b) and Wang and Leng (2007) demonstrated that the tuning parameters selected by a BIC-type criterion can identify the true model consistently. In this work, similar results are extended to the situation with a diverging number of parameters, for both unpenalized and penalized estimators (Fan and Peng, 2004; Huang et al., 2007b). Consequently, our theoretical results enlarge not only the applicable scope of the traditional BIC-type criteria but also that of those shrinkage estimation methods (Tibshirani, 1996; Huang et al., 2007b; Fan and Li, 2001; Fan and Peng, 2004).
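A fixed-dimension prototype of the BIC-type tuning parameter selection studied here is straightforward: fit the lasso along a grid of penalties and keep the one minimizing a BIC-style criterion. In the Python sketch below the degrees of freedom are taken as the number of nonzero coefficients, a standard choice for the lasso; the criterion's exact form in the diverging-dimension setting is simplified away.

```python
# BIC-type tuning parameter selection for the lasso: minimize
# n * log(RSS / n) + df * log(n) over a penalty grid, with df estimated
# by the number of nonzero coefficients.
import numpy as np
from sklearn.linear_model import Lasso

def bic_select(X, y, lambdas):
    """Return the penalty in `lambdas` minimizing the BIC-type criterion."""
    n = len(y)
    best_lam, best_bic = None, np.inf
    for lam in lambdas:
        fit = Lasso(alpha=lam, max_iter=10_000).fit(X, y)
        rss = np.sum((y - fit.predict(X)) ** 2)
        df = np.count_nonzero(fit.coef_)         # estimated degrees of freedom
        bic = n * np.log(rss / n) + df * np.log(n)
        if bic < best_bic:
            best_lam, best_bic = lam, bic
    return best_lam
```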



Shrinkage Estimators For Generalized Linear Models


DOWNLOAD
Author : Geoffrey Kwok Fai Tso
Language : en
Publisher :
Release Date : 1989

Shrinkage Estimators For Generalized Linear Models was written by Geoffrey Kwok Fai Tso and released in 1989. It is available in PDF, TXT, EPUB, Kindle, and other formats.