In this paper, we address the risk-sensitive filtering problem, which involves
minimising the expectation of the exponential of the squared estimation
error scaled by a risk-sensitive parameter. Such filtering can be more robust to plant and noise uncertainty than minimum error variance filtering. It is
virtually equivalent to so-called $H_{\infty}$ filtering.
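As a sketch of the cost being minimised (the symbols $x_k$, $\hat{x}_k$, $\theta$, and $\mathcal{Y}_k$ are notation assumed here for illustration, not taken verbatim from the paper), the risk-sensitive filtered estimate at time $k$ can be written as

```latex
% Risk-sensitive estimate: minimise the expected exponential of the
% (scaled) squared estimation error, conditioned on the observations Y_k.
\hat{x}_k \;=\; \arg\min_{\xi}\;
  \mathbb{E}\!\left[\exp\!\left(\tfrac{\theta}{2}\,\|x_k - \xi\|^2\right)
  \,\middle|\, \mathcal{Y}_k\right],
\qquad \theta > 0 .
```

Larger values of $\theta$ penalise large errors more heavily, which is the source of the robustness to model and noise uncertainty noted above.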
We consider discrete-time
nonlinear and linear Gauss-Markov state-space models and
hidden Markov models (HMMs) with finite discrete state spaces. For each signal model,
we present linear recursions for the information state and an expression for
the filtered estimate that minimises the risk-sensitive cost index. We also
present fixed-interval smoothing results for each of these signal models.
We also describe the connection
between $L_2$ filtering (termed here risk-neutral filtering) and risk-sensitive
filtering via limiting results as the risk-sensitive
parameter tends to zero. Indeed, it becomes clear that risk-sensitive
filtering theory retains the simplicity and elegance of the theory for the less
general risk-neutral case.
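The risk-neutral limit can be seen from a first-order expansion of the exponential (with $e_k$ denoting the estimation error and $\theta$ the risk-sensitive parameter, notation assumed here for illustration):

```latex
% First-order expansion of the risk-sensitive cost in the parameter theta.
\mathbb{E}\!\left[\exp\!\left(\tfrac{\theta}{2}\,\|e_k\|^2\right)\right]
  \;=\; 1 \;+\; \tfrac{\theta}{2}\,\mathbb{E}\!\left[\|e_k\|^2\right]
  \;+\; O(\theta^2),
```

so, to first order in $\theta$, minimising the risk-sensitive cost is equivalent to minimising the error variance $\mathbb{E}[\|e_k\|^2]$, i.e. the $L_2$ (risk-neutral) criterion.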
The technique used in this paper is the so-called reference probability method,
which defines a new probability measure under which the observations are independent
and translates the problem to this new measure. The optimisation problem is
then solved using simple estimation theory under the new measure, and the results
are interpreted as solutions under the original measure.