Identification of a generic MGRF model in Eq. (6.2.5) involves recovering a characteristic neighbourhood and estimating the potential vector from the signal statistics collected over a given training image.
There is a chicken-and-egg problem in the identification. On the one hand, the potentials need to be known in order to compute the partial energies in Eq. (6.2.3) that guide the selection of clique families into the characteristic neighbourhood. On the other hand, the potentials can only be estimated after an explicit interaction structure has been fixed. An analytic first approximation of the potentials is proposed to work around this problem: an approximation of the potentials is computed first and used to recover the characteristic neighbourhood, and the potentials are then refined with a more accurate method (e.g., stochastic approximation) over the estimated neighbourhood. Identification of a generic MGRF model therefore has three main steps: computing the analytic first approximation of the potentials, recovering the characteristic neighbourhood, and refining the potentials.
Model parameters in Eq. (6.2.5), i.e. the potentials, are estimated by maximum likelihood estimation (MLE): given a neighbourhood and a training image, the estimate maximises the log-likelihood of the training image under the model.
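A minimal sketch of this log-likelihood in generic Gibbs-distribution notation (the symbols \(\mathbf{V}_a\), \(\mathbf{F}_a\), and \(Z_{\mathbf{V}}\) below are assumed for illustration rather than taken from Eq. (6.2.5)):

\[
L(\mathbf{V} \mid g^{\circ}) \;=\; \sum_{a \in \mathcal{N}} \mathbf{V}_a^{\mathsf{T}}\,\mathbf{F}_a(g^{\circ}) \;-\; \log Z_{\mathbf{V}},
\qquad
Z_{\mathbf{V}} \;=\; \sum_{g} \exp\Big(\sum_{a \in \mathcal{N}} \mathbf{V}_a^{\mathsf{T}}\,\mathbf{F}_a(g)\Big),
\]

where \(\mathbf{F}_a(g^{\circ})\) collects the grey-level co-occurrence statistics of clique family \(a\) over the training image and \(Z_{\mathbf{V}}\) is the partition function. It is the intractability of \(Z_{\mathbf{V}}\) that motivates the approximations below.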
In a generic MGRF model, a quadratic approximation of the log-likelihood function, based on its second-order Taylor expansion, is applied to simplify the maximisation and allows a first approximation of the potentials to be computed analytically. For each clique family, this first approximation of the potential is given by Eq. (6.3.2).
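A hedged sketch of the typical shape of such an analytic approximation (the common scaling factor \(\lambda_{0}\) and the marginals \(f_a(\cdot \mid g^{\circ})\) and \(f_{\mathrm{irf}}\) are assumed notation, not necessarily the exact form of Eq. (6.3.2)):

\[
\mathbf{V}^{[0]}_a(q, q') \;\approx\; \lambda_{0}\,\big( f_a(q, q' \mid g^{\circ}) - f_{\mathrm{irf}}(q, q') \big),
\]

that is, each potential value is roughly proportional to how much the empirical co-occurrence frequency of the signal pair \((q, q')\) in clique family \(a\) deviates from the corresponding frequency for an independent random field.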
The first approximation of the potentials might not be adequate for computing an accurate posterior distribution, but it gives a sub-optimal estimate of the partial energy of each clique family and thereby allows the characteristic neighbourhood of the model to be identified.
The characteristic neighbourhood represents an interaction structure constituted by the most energetic clique families. Most clique families in a texture have rather low partial energy, so the related pixel interactions have little or no impact on the texture pattern. In contrast, only a small number of the most energetic clique families are major contributors to a texture. This observation suggests that only the most energetic clique families should be included in the interaction structure.
The selection of clique families is based on the partial energy, which measures the probabilistic strength of the related pixel interactions. Given the first approximation of the potentials in Eq. (6.3.2) and the partial energy in Eq. (6.1.3), an approximation of the partial energy of each clique family can be computed from the training sample. Since the dimension of the image lattice and the scaling factor are constant across clique families, the approximation can be simplified further to a relative partial energy.
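Under the first approximation above, a sketch of what this relative partial energy reduces to (again in the assumed notation \(f_a\) and \(f_{\mathrm{irf}}\), with constant factors dropped):

\[
e_a(g^{\circ}) \;=\; \sum_{(q, q')} f_a(q, q' \mid g^{\circ})\,\big( f_a(q, q' \mid g^{\circ}) - f_{\mathrm{irf}}(q, q') \big),
\]

so that ranking the clique families requires only their co-occurrence histograms over the training image and no sampling from the model.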
The relative partial energy provides a relative measure of each family's contribution and allows all clique families to be ranked accordingly. In the simplest case, the most characteristic clique families are found by thresholding. To a good approximation, the relative partial energies of all clique families are assumed to follow a Gaussian distribution, so the threshold is decided heuristically as a function of the mean and the standard deviation of that Gaussian [37].
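A minimal sketch of this thresholding step, assuming a rule of the form mean plus a multiple of the standard deviation (the constant `k` is an illustrative choice, not the heuristic of [37]):

```python
import numpy as np

def select_clique_families(rel_energies, k=2.0):
    """Pick the clique families whose relative partial energy exceeds
    a Gaussian-based threshold (assumed rule: mean + k * std)."""
    e = np.asarray(rel_energies, dtype=float)   # one relative energy per candidate family
    threshold = e.mean() + k * e.std()          # heuristic threshold on the assumed Gaussian
    selected = np.flatnonzero(e > threshold)    # indices of the most energetic families
    return selected, threshold
```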
However, this over-simplified method of determining the interaction structure by thresholding does not take the statistical interplay among clique families into account, which might result in neglecting some important clique families; in other words, only statistically independent clique families should be included in the interaction structure [39]. In another approach, clique families are selected iteratively based on their statistical impact on the probability distribution [107]. In this method, the interaction structure grows from the single clique family with maximum energy, and only one clique family is added at each iteration. The statistical impact of a clique family is defined by the change to the probability distribution after adding the family to the structure. This method involves the computationally expensive operations of re-sampling the distribution and re-collecting the GLCH statistics at each step, but the recovered structure is more adequate and considerably smaller than the structure obtained by thresholding.
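A sketch of this sequential selection loop under the description above; the helper `statistical_impact` (which in [107] involves re-sampling the model and re-collecting GLCH statistics) and the stopping constants are hypothetical placeholders:

```python
def grow_interaction_structure(families, rel_energies, statistical_impact,
                               min_impact=1e-3, max_size=20):
    """Greedy structure recovery (sketch): seed with the most energetic clique
    family, then repeatedly add the candidate with the largest statistical impact."""
    order = sorted(range(len(families)), key=lambda i: rel_energies[i], reverse=True)
    structure = [families[order[0]]]                      # seed: maximum-energy family
    candidates = [families[i] for i in order[1:]]
    while candidates and len(structure) < max_size:
        impacts = [statistical_impact(structure, c) for c in candidates]
        best = max(range(len(candidates)), key=lambda i: impacts[i])
        if impacts[best] < min_impact:                    # assumed stopping rule
            break
        structure.append(candidates.pop(best))
    return structure
```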
Given the obtained interaction structure, the first approximation of the potentials should be refined in order to determine the posterior probability distribution in Eq. (6.2.5). A stochastic approximation algorithm [85] is applied to iteratively refine the potentials toward the MLE. Basically, the process constructs a Markov chain of images and updates the potentials at each iteration according to the update equation of [38].
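A hedged sketch of the usual form such an update takes for MGRF potentials (the symbols \(g^{[t]}\) and \(\lambda_t\) and the exact schedule are assumptions; [38] gives the precise rule):

\[
\mathbf{V}^{[t+1]} \;=\; \mathbf{V}^{[t]} \;+\; \lambda_t\,\big( \mathbf{F}(g^{\circ}) - \mathbf{F}(g^{[t]}) \big),
\]

that is, the potentials are pushed in the direction of the difference between the training-image statistics and the statistics of an image \(g^{[t]}\) sampled from the current model, with a decreasing scale factor \(\lambda_t\).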
Here, the image at each step is generated by sampling the probability distribution defined by the potentials from the previous step, using Gibbs sampling [34] or the Metropolis algorithm [77], and the update is weighted by a step-dependent scale factor. The initial scale and the control parameters of the scaling schedule can be decided either analytically or empirically.
The termination condition of the process is given in Eq. (6.3.8).
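A compact sketch of this refinement loop under the assumptions above; the sampler `sample_gibbs`, the statistics collector `collect_stats`, the scale schedule, and the stopping test are hypothetical placeholders rather than the exact rules of [38] and Eq. (6.3.8):

```python
import numpy as np

def refine_potentials(V0, train_stats, sample_gibbs, collect_stats,
                      lambda0=0.1, c=100.0, n_steps=500, tol=1e-4):
    """Stochastic-approximation refinement of MGRF potentials (sketch).

    V0            -- first approximation of the potential vector
    train_stats   -- GLCH statistics of the training image
    sample_gibbs  -- hypothetical sampler: potentials -> synthesised image
    collect_stats -- hypothetical GLCH collector: image -> statistics vector
    """
    V = np.array(V0, dtype=float)
    for t in range(n_steps):
        g_t = sample_gibbs(V)                     # sample from the current model
        delta = train_stats - collect_stats(g_t)  # mismatch of sufficient statistics
        lam = lambda0 * c / (c + t)               # assumed decreasing scale schedule
        V += lam * delta                          # stochastic-approximation update
        if np.max(np.abs(lam * delta)) < tol:     # placeholder stopping test
            break
    return V
```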
The stochastic approximation is rather computation-intensive and usually takes more than a few hundred steps to attain convergence. The speed of convergence and a comparison with an alternative MCMC-based technique, Controllable Simulated Annealing (CSA), are discussed in [38].