Regularized Rao-Blackwellization
An Extension of a Classical Technique with Applications to Gibbs Point Process Statistics
by Henning Höllwarth
Date of Examination: 2020-12-03
Date of issue: 2021-06-15
Advisor: Prof. Dr. Dominic Schuhmacher
Referee: Prof. Dr. Dominic Schuhmacher
Referee: Prof. Dr. Lutz Mattner
Files in this item
Name: Dissertation_Hoellwarth_eDiss.pdf
Size: 13.8 MB
Format: PDF
Description: Dissertation
Abstract
In statistics, Rao-Blackwellization is a well-known technique for improving estimators by removing ancillary information that does not help in making inference on the parameter of interest. The present thesis reveals this concept as an inverse problem that is often ill-posed: Rao-Blackwellization generally fails to be continuous with respect to a semi-norm that measures the amount of some ancillary part of an estimator. If, however, the underlying statistical model is misspecified, inference cannot go beyond that inaccuracy and hence requires a continuous surrogate for Rao-Blackwellization. We therefore propose regularizations of this ill-posed Rao-Blackwell inverse problem and, eventually, introduce and analyze the concept of regularized Rao-Blackwellization. In classical examples, this new concept leads to new estimators as well as to new interpretations of existing ones. For more complex statistical models, such as several models in Gibbs point process statistics, regularized Rao-Blackwellizations can be computed at least approximately. A simulation study of the Lennard-Jones model demonstrates the computational feasibility and the benefit of these results, especially for constructing parametric bootstrap confidence regions based on the maximum likelihood estimator.
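To make the classical concept concrete, the following is a minimal illustrative sketch (not taken from the thesis): for i.i.d. Bernoulli(p) observations, the crude unbiased estimator X_1 is Rao-Blackwellized by conditioning on the sufficient statistic S = X_1 + … + X_n, which yields E[X_1 | S] = S/n, the sample mean, with variance reduced by a factor of n.

```python
import numpy as np

# Illustrative sketch of classical Rao-Blackwellization (assumed toy setup,
# not the thesis's Gibbs point process application).
# Raw estimator: X_1, unbiased for p with variance p(1-p).
# Rao-Blackwellized estimator: E[X_1 | S] = S/n, variance p(1-p)/n.

rng = np.random.default_rng(0)
p, n, reps = 0.3, 10, 20_000

samples = rng.binomial(1, p, size=(reps, n))
raw = samples[:, 0].astype(float)      # crude unbiased estimator X_1
rao_blackwell = samples.mean(axis=1)   # conditional expectation S/n

print(raw.var(), rao_blackwell.var())  # variance drops by a factor of about n
```

Both estimators are unbiased for p; the simulation makes the variance reduction of the conditional expectation visible, which is the improvement the Rao-Blackwell theorem guarantees.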
Keywords: Rao-Blackwellization; ill-posed inverse problems; Gibbs point processes; Tikhonov regularization; maximum likelihood estimation; confidence region; misspecified statistical models