
Shrinkage Estimation






Shrinkage Estimation


Author: Dominique Fourdrinier
Language: en
Publisher: Springer
Release Date: 2018-11-27

Shrinkage Estimation, written by Dominique Fourdrinier, was published by Springer on 2018-11-27 in the Mathematics category.


This book provides a coherent framework for understanding shrinkage estimation in statistics. The term refers to modifying a classical estimator by moving it closer to a target which could be known a priori or arise from a model. The goal is to construct estimators with improved statistical properties. The book focuses primarily on point and loss estimation of the mean vector of multivariate normal and spherically symmetric distributions. Chapter 1 reviews the statistical and decision theoretic terminology and results that will be used throughout the book. Chapter 2 is concerned with estimating the mean vector of a multivariate normal distribution under quadratic loss from a frequentist perspective. In Chapter 3 the authors take a Bayesian view of shrinkage estimation in the normal setting. Chapter 4 introduces the general classes of spherically and elliptically symmetric distributions. Point and loss estimation for these broad classes are studied in subsequent chapters. In particular, Chapter 5 extends many of the results from Chapters 2 and 3 to spherically and elliptically symmetric distributions. Chapter 6 considers the general linear model with spherically symmetric error distributions when a residual vector is available. Chapter 7 then considers the problem of estimating a location vector which is constrained to lie in a convex set. Much of the chapter is devoted to one of two types of constraint sets, balls and polyhedral cones. In Chapter 8 the authors focus on loss estimation and data-dependent evidence reports. Appendices cover a number of technical topics including weakly differentiable functions; examples where Stein’s identity doesn’t hold; Stein’s lemma and Stokes’ theorem for smooth boundaries; harmonic, superharmonic and subharmonic functions; and modified Bessel functions.
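
The canonical example of the improvement described above is the James-Stein estimator, which moves the usual sample mean of a multivariate normal toward a target (here the origin) and, for dimension p ≥ 3, achieves strictly lower quadratic risk. The following Python sketch is our own Monte Carlo illustration, with parameter choices (p = 10, true mean 0.5 in every coordinate) that are assumptions for the demo rather than anything taken from the book:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n_trials = 10, 2000
theta = np.full(p, 0.5)                      # true mean, moderately close to the origin

# one observation X ~ N(theta, I_p) per trial, unit variance
x = rng.normal(theta, 1.0, size=(n_trials, p))

# James-Stein: shrink each observation toward the origin by a data-dependent factor
norms2 = np.sum(x ** 2, axis=1, keepdims=True)
js = (1.0 - (p - 2) / norms2) * x

risk_mle = np.mean(np.sum((x - theta) ** 2, axis=1))   # about p = 10
risk_js = np.mean(np.sum((js - theta) ** 2, axis=1))   # strictly smaller for p >= 3
```

The shrinkage factor adapts to the data: observations far from the origin are barely moved, while those near it are pulled in strongly, which is where the risk gain comes from.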



Shrinkage Estimation For Mean And Covariance Matrices


Author: Hisayuki Tsukuma
Language: en
Publisher: Springer Nature
Release Date: 2020-04-16

Shrinkage Estimation For Mean And Covariance Matrices, written by Hisayuki Tsukuma, was published by Springer Nature on 2020-04-16 in the Medical category.


This book provides a self-contained introduction to shrinkage estimation for matrix-variate normal distribution models. More specifically, it presents recent techniques and results on estimation of mean and covariance matrices in a high-dimensional setting, where the sample covariance matrix is singular. Such high-dimensional models can be analyzed using the same arguments as low-dimensional models, yielding a unified approach to both high- and low-dimensional shrinkage estimation. The unified shrinkage approach not only integrates modern and classical shrinkage estimation but is also required for further development of the field. Beginning with the notion of decision-theoretic estimation, the book explains matrix theory, group invariance, and other mathematical tools for finding better estimators. It also includes examples of shrinkage estimators that improve on standard estimators, such as least squares, maximum likelihood, and minimum risk invariant estimators, and discusses the historical background and related topics in decision-theoretic estimation of parameter matrices. The book is useful for researchers and graduate students in mathematical statistics and in any field requiring data-analysis skills.
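
A common entry point to this material is linear shrinkage of the sample covariance matrix toward a scaled-identity target, which repairs the singularity that arises when the dimension p exceeds the sample size n. The Python sketch below is our own illustration, not code from the book, and the shrinkage intensity `alpha` is fixed by hand rather than estimated:

```python
import numpy as np

def shrink_cov(X, alpha):
    """Convex combination of the sample covariance and a scaled-identity target."""
    S = np.cov(X, rowvar=False)                 # singular when p > n - 1
    p = S.shape[0]
    target = (np.trace(S) / p) * np.eye(p)      # identity scaled to the average variance
    return (1.0 - alpha) * S + alpha * target

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 50))                   # n = 20 samples in p = 50 dimensions
S = np.cov(X, rowvar=False)                     # rank at most n - 1 = 19
Sigma_hat = shrink_cov(X, alpha=0.3)            # positive definite, hence invertible
```

Because the target adds `alpha` times a strictly positive multiple of the identity, every eigenvalue of the shrunk matrix is bounded away from zero even though the sample covariance itself is rank-deficient.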



Penalty Shrinkage And Pretest Strategies


Author: S. Ejaz Ahmed
Language: en
Publisher: Springer Science & Business Media
Release Date: 2013-12-11

Penalty Shrinkage And Pretest Strategies, written by S. Ejaz Ahmed, was published by Springer Science & Business Media on 2013-12-11 in the Mathematics category.


The objective of this book is to compare the statistical properties of penalty and non-penalty estimation strategies for some popular models. Specifically, it considers the full model, submodel, penalty, pretest and shrinkage estimation techniques for three regression models before presenting the asymptotic properties of the non-penalty estimators and their asymptotic distributional efficiency comparisons. Further, the risk properties of the non-penalty and penalty estimators are explored through a Monte Carlo simulation study. Showcasing examples based on real datasets, the book will be useful for students and applied researchers in a wide range of fields. Its level of presentation and style make it accessible to a broad audience, with clear, succinct expositions of each estimation strategy and, more importantly, clear descriptions of how to use each strategy for the problem at hand. The book is largely self-contained, as are the individual chapters, so that anyone interested in a particular topic or area of application may read only that specific chapter. It is designed for graduate students who want to understand the foundations and concepts underlying penalty and non-penalty estimation and its applications, is well suited as a textbook for senior undergraduate and graduate courses surveying penalty and non-penalty estimation strategies, and can also serve as a reference for related subjects, including meta-analysis. Professional statisticians will find it a valuable reference work, since nearly all chapters are self-contained.
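
The pretest strategy compared in the book can be sketched generically: fit the full model, test whether the candidate restrictions hold, and report the submodel fit only when the test does not reject. The Python illustration below is our own simplified version using an F-test under homoskedastic normal errors; the function name and parameter choices are ours, not the book's:

```python
import numpy as np
from scipy import stats

def pretest_estimator(X, y, restricted, alpha=0.05):
    """Pretest strategy: F-test of H0 that the coefficients in `restricted` are zero;
    return the submodel OLS fit if H0 is not rejected, the full OLS fit otherwise."""
    n, p = X.shape
    beta_full = np.linalg.lstsq(X, y, rcond=None)[0]
    rss_full = np.sum((y - X @ beta_full) ** 2)

    keep = [j for j in range(p) if j not in restricted]
    beta_sub = np.zeros(p)
    beta_sub[keep] = np.linalg.lstsq(X[:, keep], y, rcond=None)[0]
    rss_sub = np.sum((y - X @ beta_sub) ** 2)

    q = len(restricted)
    F = ((rss_sub - rss_full) / q) / (rss_full / (n - p))
    return beta_sub if stats.f.sf(F, q, n - p) > alpha else beta_full

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 4))
y = X @ np.array([1.0, 2.0, 5.0, 0.0]) + rng.normal(size=n)
# coefficient 2 is strongly nonzero, so the pretest should keep the full model
beta = pretest_estimator(X, y, restricted=[2, 3])
```

When the restricted coefficients really are near zero, the same call would usually return the submodel fit, trading a small bias for lower variance.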



Shrinkage Estimation For Penalised Regression Loss Estimation And Topics On Largest Eigenvalue Distributions


Author: Rajendran Narayanan
Language: en
Publisher:
Release Date: 2012

Shrinkage Estimation For Penalised Regression Loss Estimation And Topics On Largest Eigenvalue Distributions, written by Rajendran Narayanan, was released in 2012 in the Estimation theory category.


The dissertation comprises four projects, presented in four chapters: (a) Stein estimation for $\ell_1$-penalised regression and model selection, (b) loss estimation for model selection, (c) largest-eigenvalue distributions of random matrices, and (d) the maximum domain of attraction of the Tracy-Widom distribution. In the first project, we construct Stein-type shrinkage estimators for the coefficients of a linear model, based on a convex combination of the Lasso and the least squares estimator. Since the Lasso constraint set is a closed and bounded polyhedron (a cross-polytope), we observe that under a general quadratic loss function the Lasso solution can be treated as a metric projection of the least squares estimator onto the constraint set. We derive analytical expressions for the decision-theoretic risk difference between the proposed Stein-type estimators and the Lasso, and establish data-based, verifiable conditions under which the proposed estimators achieve risk gains over the Lasso. Following Stein's Unbiased Risk Estimation (SURE) framework, we further derive expressions for unbiased estimates of prediction error for selecting the optimal tuning parameter. In the second project, we consider the following problem. For a random vector X, estimation of the unknown location parameter $\theta$ using an estimator d(X) is often accompanied by a loss function L(d(X), $\theta$). Performance of such an estimator is usually evaluated using the risk of d(X). We consider estimating the loss function itself using an estimator $\lambda(X)$ that is conditional on the actual observations, as opposed to an average over the sampling distribution of d(X). In this context, we consider estimating the loss function when the unknown mean vector $\theta$ of a multivariate normal distribution with an arbitrary covariance matrix is estimated using both the MLE and a shrinkage estimator.
We derive sufficient conditions for inadmissibility of the unbiased estimators of loss for such a random vector. We further establish conditions for improved estimators of the loss function for a linear model when the Lasso is used as a model selection tool, and exhibit such an improved estimator. The largest eigenvalue of the Gaussian and Jacobi ensembles plays an important role in classical multivariate analysis and random matrix theory. Historically, the exact distribution of the largest eigenvalue has required extensive tables or specialised software. More recently, asymptotic approximations for the cumulative distribution function of the largest eigenvalue in both settings have been shown to have the Tracy-Widom limit. Our main results use a unified approach to derive the exact cumulative distribution function of the largest eigenvalue in both settings, in terms of elements of a matrix that have explicit scalar analytical forms. In the fourth chapter, the maximum of i.i.d. Tracy-Widom distributed random variables arising from the Gaussian unitary ensemble is shown to belong to the Gumbel domain of attraction. This result has potential applications wherever multiple comparisons are made using the greatest-root statistic.
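
The SURE idea used in the first project can be illustrated on the simplest shrinkage rule with a closed-form risk estimate: soft thresholding of a normal mean, using the Donoho-Johnstone formula for unit variance. This Python sketch is a stand-in for the Lasso-specific expressions derived in the dissertation, with the sparse mean and threshold chosen by us; it checks numerically that the SURE values track the actual (unobservable) loss on average:

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sure_soft(x, t):
    """Stein's unbiased estimate of the risk E||soft_threshold(x, t) - theta||^2
    for x ~ N(theta, I_n) (Donoho-Johnstone formula, sigma = 1)."""
    n = x.size
    return n - 2 * np.sum(np.abs(x) <= t) + np.sum(np.minimum(np.abs(x), t) ** 2)

rng = np.random.default_rng(0)
n = 50
theta = np.concatenate([np.full(5, 3.0), np.zeros(n - 5)])   # sparse true mean

t = 1.0
sures, losses = [], []
for _ in range(2000):
    x = rng.normal(theta, 1.0)
    sures.append(sure_soft(x, t))
    losses.append(np.sum((soft_threshold(x, t) - theta) ** 2))
# averaged over trials, the SURE values and the true losses agree closely
```

In practice one would evaluate `sure_soft` on a grid of thresholds and pick the minimizer, which is exactly the tuning-parameter selection role SURE plays in the project above.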



Shrinkage Estimation Of A Linear Regression


Author: Kazuhiro Ohtani
Language: en
Publisher:
Release Date: 2000

Shrinkage Estimation Of A Linear Regression, written by Kazuhiro Ohtani, was released in 2000 in the Business & Economics category.


This book deals with shrinkage regression estimators obtained by shrinking the ordinary least squares (OLS) estimator towards the origin. The author's main concern is to compare the sampling properties of a family of Stein-rule (SR) estimators with those of a family of minimum mean squared error (MMSE) estimators.
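
A minimal Python sketch of the SR family discussed here: OLS is multiplied by a data-dependent factor that shrinks more when the estimated signal is weak relative to the noise. The particular shrinkage constant `a` below is one choice from the Stein-rule literature and is our assumption, not necessarily the one the author analyzes:

```python
import numpy as np

def stein_rule(X, y, a):
    """Stein-rule estimator: beta_SR = (1 - a * s^2 / (b' X'X b)) * b,
    where b is the OLS estimator and s^2 the unbiased error-variance estimate."""
    n, k = X.shape
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    s2 = np.sum((y - X @ b) ** 2) / (n - k)
    g = b @ (X.T @ X) @ b            # signal strength; shrink more when this is small
    return (1.0 - a * s2 / g) * b

rng = np.random.default_rng(0)
n, k = 100, 5
X = rng.normal(size=(n, k))
y = X @ np.array([2.0, -1.0, 1.5, 0.5, -2.0]) + rng.normal(size=n)
a = (k - 2) / (n - k + 2)            # a value inside the classical dominance range (k >= 3)
b_sr = stein_rule(X, y, a)           # slightly shorter than the OLS vector
```

With a strong signal, the factor is close to 1 and the SR estimate barely differs from OLS; the risk gains concentrate where the true coefficient vector is close to the origin.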



Shrinkage Estimation In Prediction


Author: Tahir Naweed Mahdi
Language: en
Publisher:
Release Date: 1997

Shrinkage Estimation In Prediction, written by Tahir Naweed Mahdi, was released in 1997.




Sequential Shrinkage Estimation


Author: David MacLeod Nickerson
Language: en
Publisher:
Release Date: 1985

Sequential Shrinkage Estimation, written by David MacLeod Nickerson, was released in 1985 in the Multivariate analysis category.


This dissertation is concerned with sequential estimation of the multivariate normal mean, estimation of the regression coefficient in a normal linear regression model, and estimation of the difference of the mean vectors of two multivariate normal distributions in the presence of unknown and possibly unequal variance-covariance matrices. For estimating the p(> 3)-variate normal mean, we consider two situations. In the first, the covariance matrix is known up to a multiplicative constant; in the second, it is entirely unknown but diagonal. In both cases the sample mean is the maximum likelihood estimator of the population mean. When the covariance matrix is known up to a multiplicative constant, a class of James-Stein estimators is developed which dominates the sample mean under the sequential sampling schemes of M. Ghosh, B. K. Sinha, and N. Mukhopadhyay (1976, Journal of Multivariate Analysis 6, 281-294). Asymptotic risk expansions of the sample mean vector and the James-Stein estimators are provided up to the second-order term. In this case, a Monte Carlo simulation also compares the risks of the sample mean vector, the James-Stein estimators, and a rival class of estimators. In the second case, a class of James-Stein estimators is given which dominates the sample mean asymptotically, by considering second-order risk expansions. The next part concerns estimation of regression parameters in a Gauss-Markoff setup. Here the classical estimator of the regression coefficient is the least squares estimator, and the sampling scheme used is that of N. Mukhopadhyay (1974, Journal of the Indian Statistical Association 12, 39-43). Once again, a class of James-Stein estimators that dominates the least squares estimator is developed, and asymptotic risk expansions are given for both the least squares and James-Stein estimators.
Finally, we consider the estimation of the difference of two normal mean vectors, using the sampling schemes developed in 1984 by R. Chou and W. Hwang at the Institute of Applied Mathematics, National Tsing Hua University. A class of James-Stein estimators that dominates the difference of the sample mean vectors is given, and asymptotic risk expansions are provided.



Shrinkage Estimation For The Diagonal Multivariate Natural Exponential Families


Author: Nikolas Siapoutis
Language: en
Publisher:
Release Date: 2022

Shrinkage Estimation For The Diagonal Multivariate Natural Exponential Families, written by Nikolas Siapoutis, was released in 2022.


In this dissertation, we derive and study shrinkage estimators of the parameters of a high-dimensional diagonal natural exponential family of probability distributions. More broadly, we study shrinkage estimation of the parameters of distributions for which the diagonal entries of the covariance matrix are certain quadratic functions of the mean parameter. We propose two classes of semi-parametric shrinkage estimators for the population mean and construct unbiased estimators of the corresponding risk. We establish the asymptotic consistency and convergence rates of these shrinkage estimators under squared error loss as both $n$, the sample size, and $p$, the dimension, tend to infinity. Further, we specialize these results to the diagonal multivariate natural exponential families, which have been classified as consisting of the normal, Poisson, gamma, multinomial, negative multinomial, and hybrid classes of distributions. We deduce consistency of our estimators in the normal, gamma, and negative multinomial cases if $p n^{-1/3}(\log n)^{4/3} \rightarrow 0$ as $n,p \rightarrow \infty$, and in the Poisson and multinomial cases if $pn^{-1/2} \rightarrow 0$ as $n,p \rightarrow \infty$. To evaluate the performance of our mean shrinkage estimators, we carry out a simulation study for the multivariate gamma and multivariate Poisson classes of distributions. We begin by deriving the probability density functions of these two classes and establishing some related regression properties. We propose several acceptance-rejection sampling algorithms and apply two versions of the Metropolis algorithm, all-at-once and variable-at-a-time, to generate data from the multivariate gamma distribution. We propose reduction schemes for simulating observations from a multivariate Poisson distribution.
We also approximate the probability density function of the multivariate Poisson distribution using the saddlepoint approximation method, and apply a variable-at-a-time Metropolis algorithm to generate data from the approximated density. The simulation studies show that the proposed shrinkage estimators achieve lower risk than the maximum likelihood estimator.
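
A variable-at-a-time Metropolis sampler of the kind mentioned above can be sketched generically in Python. Since the dissertation's multivariate gamma density is not reproduced in this blurb, we target independent Gamma(shape = 2, rate = 1) marginals as a stand-in, and the step size, dimension, and chain length are our own illustrative choices:

```python
import numpy as np

def metropolis_vat(log_density, x0, n_sweeps, step=1.0, rng=None):
    """Variable-at-a-time (single-site) random-walk Metropolis: in each sweep,
    propose a change to one coordinate at a time and accept with the usual ratio."""
    rng = rng if rng is not None else np.random.default_rng()
    x = np.array(x0, dtype=float)
    lp = log_density(x)
    out = np.empty((n_sweeps, x.size))
    for t in range(n_sweeps):
        for j in range(x.size):
            prop = x.copy()
            prop[j] += step * rng.normal()          # perturb only coordinate j
            lp_prop = log_density(prop)
            if np.log(rng.uniform()) < lp_prop - lp:
                x, lp = prop, lp_prop
        out[t] = x
    return out

# stand-in target: three independent Gamma(shape=2, rate=1) marginals,
# log density (up to a constant): sum over coordinates of log(x) - x, for x > 0
def log_gamma(x):
    return -np.inf if np.any(x <= 0) else float(np.sum(np.log(x) - x))

chain = metropolis_vat(log_gamma, x0=np.ones(3), n_sweeps=6000,
                       rng=np.random.default_rng(0))
post = chain[1000:]     # drop burn-in; each marginal mean should settle near 2
```

The all-at-once variant mentioned in the abstract would instead perturb the whole vector in a single proposal, trading per-coordinate acceptance for fewer density evaluations.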



Generalized Shrinkage Estimation For Multiple Linear Regression


Author: Kok Huat Lee
Language: en
Publisher:
Release Date: 1985

Generalized Shrinkage Estimation For Multiple Linear Regression, written by Kok Huat Lee, was released in 1985 in the Mathematical statistics category.




Shrinkage Estimation Of High Dimensional Factor Models With Structural Instabilities


Author: Xu Cheng
Language: en
Publisher:
Release Date: 2013

Shrinkage Estimation Of High Dimensional Factor Models With Structural Instabilities written by Xu Cheng and has been published by this book supported file pdf, txt, epub, kindle and other format this book has been release on 2013 with Economics categories.


In high-dimensional factor models, both the factor loadings and the number of factors may change over time. This paper proposes a shrinkage estimator that detects and disentangles these instabilities. The new method simultaneously and consistently estimates the number of pre- and post-break factors, which liberates researchers from sequential testing and achieves uniform control of the family-wise model selection errors over an increasing number of variables. The shrinkage estimator only requires the calculation of principal components and the solution of a convex optimization problem, which makes its computation efficient and accurate. The finite sample performance of the new method is investigated in Monte Carlo simulations. In an empirical application, we study the change in factor loadings and emergence of new factors during the Great Recession.
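
The principal-components step that the estimator builds on can be sketched in a few lines of Python. This is a generic PC factor estimator under our own illustrative data-generating process, not the paper's shrinkage procedure (which adds a penalized criterion on top to handle the breaks):

```python
import numpy as np

def pc_factors(X, r):
    """Principal-components estimation of an approximate factor model
    X ~ F @ L.T, with F (T x r factors) and L (N x r loadings)."""
    T, _ = X.shape
    X = X - X.mean(axis=0)
    vals, vecs = np.linalg.eigh(X @ X.T)          # T x T eigenproblem
    top = np.argsort(vals)[::-1][:r]              # r largest eigenvalues
    F = np.sqrt(T) * vecs[:, top]                 # normalization F'F / T = I_r
    L = X.T @ F / T
    return F, L

rng = np.random.default_rng(0)
T, N, r = 200, 100, 2
F0 = rng.normal(size=(T, r))
L0 = rng.normal(size=(N, r))
X = F0 @ L0.T + 0.1 * rng.normal(size=(T, N))     # strong factors, weak noise

F, L = pc_factors(X, r)
resid = X - X.mean(axis=0) - F @ L.T
frac = np.sum(resid ** 2) / np.sum((X - X.mean(axis=0)) ** 2)
# frac: share of variation left unexplained; small when r factors dominate
```

Estimated factors and loadings are identified only up to a rotation, which is why procedures built on this step (including the paper's) state results in terms of the factor space rather than individual columns.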