
Multiscale Change-point Segmentation: Beyond Step Functions

dc.contributor.advisor: Munk, Axel Prof. Dr.
dc.contributor.author: Guo, Qinghai
dc.date.accessioned: 2017-03-08T09:38:25Z
dc.date.available: 2017-03-08T09:38:25Z
dc.date.issued: 2017-03-08
dc.identifier.uri: http://hdl.handle.net/11858/00-1735-0000-0023-3DCC-D
dc.identifier.uri: http://dx.doi.org/10.53846/goediss-6178
dc.language.iso: eng
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject.ddc: 510
dc.title: Multiscale Change-point Segmentation: Beyond Step Functions
dc.type: doctoralThesis
dc.contributor.referee: Munk, Axel Prof. Dr.
dc.date.examination: 2017-02-03
dc.description.abstract (eng): Many multiscale segmentation methods have proven successful at detecting multiple change-points, mainly because they provide faithful statistical statements while at the same time allowing for efficient computation. The underpinning theory, however, has been studied exclusively for models that assume the signal is an unknown step function. When the signal is only approximately piecewise constant, as often occurs in practical applications, the behavior of multiscale segmentation methods is still not well understood. To narrow this gap, we investigate the asymptotic properties of a certain class of \emph{multiscale change-point segmentation} methods in a general nonparametric regression setting. The main contribution of this work is the adaptation property of these methods over a wide range of function classes, even though they are designed for step functions. On the one hand, this includes optimal convergence rates (up to a log-factor) for step functions with a bounded, or even growing, number of jumps. On the other hand, for models beyond step functions, which are characterized by certain approximation spaces, we show the optimal rates (up to a log-factor) as well; this covers bounded variation functions and (piecewise) H\"{o}lder functions of smoothness order $0 < \alpha \leq 1$. All results are formulated in terms of $L^p$-loss, $0 < p < \infty$, both almost surely and in expectation. In addition, we show that the convergence rates readily imply accuracy of feature detection, such as change-points, modes, and troughs. The practical performance is examined in various numerical simulations.
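The multiscale estimators the abstract refers to are beyond the scope of a short sketch, but the underlying change-point idea can be illustrated with a toy single-change CUSUM scan on a noisy step function. This is not the thesis's method; the signal, noise level, and change location below are invented for illustration only:

```python
import numpy as np

def cusum_changepoint(y):
    """Return the split index k in 1..n-1 maximizing the CUSUM-type
    statistic for a single change in mean of the sequence y."""
    n = len(y)
    stats = []
    for k in range(1, n):
        left, right = y[:k], y[k:]
        # standardized difference of means at candidate split k
        t = abs(left.mean() - right.mean()) * np.sqrt(k * (n - k) / n)
        stats.append(t)
    return int(np.argmax(stats)) + 1

# toy data: a single jump of height 1 at tau, plus Gaussian noise
rng = np.random.default_rng(0)
n, tau = 200, 120
signal = np.where(np.arange(n) < tau, 0.0, 1.0)
y = signal + 0.3 * rng.standard_normal(n)

tau_hat = cusum_changepoint(y)
```

At this signal-to-noise ratio the estimated location `tau_hat` falls close to the true jump at `tau`; the thesis studies what happens when, unlike here, the signal between jumps is only approximately constant.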
dc.contributor.coReferee: Krajina, Andrea Prof. Dr.
dc.contributor.thirdReferee: Huckemann, Stephan Prof. Dr.
dc.contributor.thirdReferee: Luke, Russell Prof. Dr.
dc.contributor.thirdReferee: Zhu, Chenchang Prof. Dr.
dc.contributor.thirdReferee: Habeck, Michael Dr.
dc.subject.eng: Adaptive estimation
dc.subject.eng: approximation spaces
dc.subject.eng: jump detection
dc.subject.eng: model misspecification
dc.subject.eng: multiscale inference
dc.subject.eng: nonparametric regression
dc.subject.eng: robustness
dc.identifier.urn: urn:nbn:de:gbv:7-11858/00-1735-0000-0023-3DCC-D-8
dc.affiliation.institute: Fakultät für Mathematik und Informatik
dc.subject.gokfull: Mathematik (PPN61756535X)
dc.identifier.ppn: 881652482

