# Random Function Iterations for Stochastic Feasibility Problems

Neal Hermer

Doctoral thesis
Date of Examination:
2019-01-24

Date of issue:
2019-03-21

Advisor:
Luke, Russell Prof. Dr.

Referee:
Luke, Russell Prof. Dr.

Referee:
Sturm, Anja Prof. Dr.

Persistent Address: http://hdl.handle.net/11858/00-1735-0000-002E-E5DD-D

## Files in this item

Name:
Hermer Dissertation (ohne CV) Fast Web View.pdf

Size:
1013.0 KB

Format:
PDF

## Abstract

### English

The aim of this thesis is to develop a theory that describes errors in fixed point iterations stochastically, treating the iterations as a Markov chain and analyzing their convergence in distribution. Markov chains of this kind are also called iterated random functions. The convergence theory for iterated random averaged operators turns out to be simple in $\mathbb{R}^n$: if the Markov operator possesses an invariant measure, the chain converges to an invariant measure, which may depend on the initial distribution. The stochastic fixed point problem is hence to find invariant measures of the Markov operator. We formulate different error models and study whether the corresponding Markov operator possesses an invariant measure; in some cases, rates of convergence with respect to metrics on the space of probability measures can also be computed (geometric rates). Two major types of convergence occur: weak convergence of the distributions of the iterates (or of their averages), and almost sure convergence. The stochastic fixed point problem can be viewed as either a consistent or an inconsistent stochastic feasibility problem; almost sure convergence is observed in the former and weak convergence in the latter, so the type of convergence determines the consistency of the problem. We give conditions under which convergence in the above senses can be expected, under general assumptions on the underlying metric space and for nonexpansive, paracontractive, or averaged mappings. Since the focus of this thesis is probabilistic, when the theory is applied to algorithms for optimization, convergence is in distribution and the fixed points are measures.
This perspective is particularly useful when the underlying problem models systems with measurement errors, or even when the problem itself is deterministic but the algorithm for its numerical solution is implemented on conventional computers with finite-precision arithmetic.

**Keywords:** averaged mappings; nonexpansive mappings; stochastic feasibility; stochastic fixed point problem; iterated random functions; convergence of Markov chains
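To make the notion of an iterated random function concrete, the following is a minimal numerical sketch, not taken from the thesis itself: a randomized projection (Kaczmarz-type) iteration $x_{k+1} = T_{\xi_k}(x_k)$, where each $T_i$ is the projection onto a random hyperplane. Projections are firmly nonexpansive (hence averaged), and for a consistent system the chain converges almost surely to a point of the intersection, matching the consistent case described in the abstract. All names and parameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (not from the thesis): a consistent linear system
# A x* = b, so the hyperplanes {y : a_i^T y = b_i} have a common point x*.
A = rng.standard_normal((20, 3))
x_star = np.array([1.0, -2.0, 0.5])
b = A @ x_star

x = np.zeros(3)
for _ in range(5000):
    # Random choice of the map T_i applied at this step: this randomness
    # is what makes the iteration a Markov chain / iterated random function.
    i = rng.integers(len(A))
    a, beta = A[i], b[i]
    # Orthogonal projection of x onto the hyperplane a^T y = beta
    # (a firmly nonexpansive, hence averaged, mapping).
    x = x + (beta - a @ x) / (a @ a) * a

print(np.linalg.norm(x - x_star))
```

In this consistent case the residual shrinks geometrically and the iterates converge almost surely; perturbing each offset `b[i]` with independent noise at every step would make the problem inconsistent, and one would then observe convergence of the *distribution* of the iterates to an invariant measure rather than of the iterates themselves.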