The fundamental goal of texture analysis is to derive a compact representation of texture, providing a formal framework for applying logic and mathematics to texture-related vision problems.
In this chapter, texture analysis methods are divided into two categories: descriptive and generic approaches. Descriptive approaches provide feature-based quantitative descriptions of textures, representing a texture as a point in a multi-dimensional feature space. Since it is easy to derive a similarity metric (e.g., a distance) from the resulting description, descriptive approaches are particularly useful for discriminating textures in tasks such as classification and segmentation. Generic approaches derive generic models of textures, which extend observations or domain knowledge about textures to more general conditions. Syntactic models are largely deterministic, describing a texture as a string of symbols (texture elements) generated by a grammar. In contrast, probability models assume a texture is generated by a stochastic process and derive a joint probability distribution over texture descriptions. The resulting models can usually also be used for synthesising new textures.
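As a minimal illustration of the descriptive view, a texture can be reduced to a feature vector and compared by distance. The sketch below uses a grey-level histogram as a toy feature on synthetic images; both the feature choice and the data are hypothetical, not taken from the chapter.

```python
# Toy sketch of a descriptive approach: represent each texture as a
# feature vector (here, a normalised grey-level histogram) and compare
# textures by Euclidean distance in that feature space.
import numpy as np

def grey_histogram(image, bins=8):
    """Hypothetical feature extractor: normalised grey-level histogram."""
    hist, _ = np.histogram(image, bins=bins, range=(0, 256))
    return hist / hist.sum()

def texture_distance(f1, f2):
    """Euclidean distance between two feature vectors."""
    return float(np.linalg.norm(f1 - f2))

# Two synthetic "texture classes": dark and bright random fields.
rng = np.random.default_rng(0)
dark = rng.integers(0, 64, size=(32, 32))
dark2 = rng.integers(0, 64, size=(32, 32))
bright = rng.integers(192, 256, size=(32, 32))

f_dark, f_dark2, f_bright = map(grey_histogram, (dark, dark2, bright))
```

With such a representation, same-class textures lie closer in feature space than different-class ones, which is what classification and segmentation exploit.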
Although both auto-models and the FRAME model are specified by Gibbs distributions for texture description, they involve different modelling scenarios [113]. In auto-models, local image structures are specified in terms of conditional probabilities with respect to an MRF that is related to the global properties by Markov-Gibbs equivalence. In contrast, the FRAME model first selects salient features from statistics of filter responses and learns the joint probability distribution of the features based on the MaxEnt principle.
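The shared Gibbs form underlying both scenarios can be written generically as follows (the notation here is illustrative, not the chapter's): auto-models define potentials over cliques of the MRF neighbourhood, while FRAME weights histograms of filter responses with Lagrange multipliers obtained from the MaxEnt constraints.

```latex
p(\mathbf{x}) \;=\; \frac{1}{Z}\,
  \exp\!\Big(-\sum_{c\in\mathcal{C}} V_c(\mathbf{x}_c)\Big),
\qquad
p_{\mathrm{FRAME}}(\mathbf{x}) \;=\; \frac{1}{Z(\Lambda)}\,
  \exp\!\Big(-\sum_{k=1}^{K} \big\langle \lambda_k,\, H_k(\mathbf{x}) \big\rangle\Big)
```

Here $\mathcal{C}$ is the set of cliques, $V_c$ the clique potentials, $H_k(\mathbf{x})$ the histogram of responses of the $k$-th selected filter, and $\lambda_k$ the corresponding multipliers; $Z$ normalises each distribution.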
Most of today's synthesis approaches are either model-based or rely on non-parametric sampling. MGRF model-based methods [20,13,7,37,36,38,113] identify a probability model of a training texture, specified by a Gibbs probability distribution on selected sufficient statistics of image signals. These methods offer better generalisation and a principled route to texture analysis and synthesis, but their computational complexity is rather high. Non-parametric sampling [22,29,104,65,59,28,78] enables fast texture synthesis via a non-parametric statistical approach. Instead of building an explicit parametric model, it attempts to mimic a texture from samples of image signals taken directly from a training image. The synthesis is essentially a process of rearranging image signals of the training texture with due account of their statistical dependency. An MGRF model of the texture is usually assumed implicitly, and pixel sampling is constrained by similarity metrics defined with respect to local neighbourhood systems. Non-parametric sampling methods can be either pixel- or block-based; most share a similar synthesis procedure but differ slightly in the distance metric used for pixel selection. Algorithm 4 provides a general template for implementing a typical texture synthesis algorithm based on non-parametric sampling. Recent developments in block-based non-parametric sampling also focus on how to stitch two neighbouring patches with minimal visual discontinuity; this problem has been formulated as a graph cut problem [59].
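A pixel-based instance of this template, in the spirit of Efros–Leung sampling [22], might be sketched as follows. This is a simplified illustration, not Algorithm 4 itself: the causal neighbourhood, exhaustive search, and toy stripe texture are all choices made for brevity.

```python
# Pixel-based non-parametric sampling sketch: each output pixel is copied
# from the training pixel whose already-synthesised (causal) neighbourhood
# matches best under a sum-of-squared-differences metric.
import numpy as np

def synthesise(train, out_shape, half=1, seed=0):
    rng = np.random.default_rng(seed)
    th, tw = train.shape
    out = np.zeros(out_shape, dtype=train.dtype)
    for y in range(out_shape[0]):
        for x in range(out_shape[1]):
            if y < half or x < half:
                # Seed border pixels with random training pixels.
                out[y, x] = train[rng.integers(th), rng.integers(tw)]
                continue
            best, best_d = train[half, half], np.inf
            for ty in range(half, th - half):
                for tx in range(half, tw - half):
                    d = 0.0
                    # Causal neighbourhood: the row above plus the left pixel.
                    for dy in range(-half, 1):
                        for dx in range(-half, half + 1):
                            if dy == 0 and dx >= 0:
                                break
                            ox = x + dx
                            if ox < 0 or ox >= out_shape[1]:
                                continue
                            diff = float(out[y + dy, ox]) - float(train[ty + dy, tx + dx])
                            d += diff * diff
                    if d < best_d:
                        best_d, best = d, train[ty, tx]
            out[y, x] = best
    return out

# Hypothetical training texture: vertical black/white stripes.
train = np.tile(np.array([[0, 255]], dtype=np.uint8), (8, 4))
result = synthesise(train, (6, 6))
```

Because every output pixel is copied from the training image, the synthesised texture is literally a rearrangement of training signals, exactly as the template describes.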
Non-parametric sampling approaches outperform model-based ones in synthesis speed. On the downside, because they skip the image modelling stage entirely, non-parametric sampling methods may fail to preserve the global structure of a texture and have difficulty automating the selection of a local neighbourhood, which varies from texture to texture. Finding an optimal cut along a patch boundary can be regarded as extra effort to compensate for the absence of a texture model.
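The optimal-cut idea can be illustrated with a simpler dynamic-programming variant: a minimum-error vertical seam through the overlap of two patches, in the spirit of image quilting. The full graph-cut formulation of [59] generalises this to arbitrary seam shapes; the function and data below are illustrative only.

```python
# Minimum-error-boundary cut through the overlap region of two patches:
# a dynamic-programming seam that keeps pixels from patch A on its left
# and pixels from patch B on its right, minimising visual discontinuity.
import numpy as np

def min_error_seam(overlap_a, overlap_b):
    """Return, per row, the column index of the minimal-cost vertical seam."""
    err = (overlap_a.astype(float) - overlap_b.astype(float)) ** 2
    h, w = err.shape
    cost = err.copy()
    # Forward pass: accumulate the cheapest path cost from the top.
    for y in range(1, h):
        for x in range(w):
            lo, hi = max(0, x - 1), min(w, x + 2)
            cost[y, x] += cost[y - 1, lo:hi].min()
    # Backtrack from the cheapest bottom cell.
    seam = [int(np.argmin(cost[-1]))]
    for y in range(h - 2, -1, -1):
        x = seam[-1]
        lo, hi = max(0, x - 1), min(w, x + 2)
        seam.append(lo + int(np.argmin(cost[y, lo:hi])))
    return seam[::-1]

# Toy overlap: the two patches agree only in the middle column,
# so the seam should run straight down column 1.
a = np.array([[5, 0, 9], [5, 0, 9], [5, 0, 9]])
b = np.array([[7, 0, 2], [7, 0, 2], [7, 0, 2]])
seam = min_error_seam(a, b)
```

Routing the seam through pixels where the patches already agree hides the transition, which is precisely the role that explicit texture models would otherwise play at patch boundaries.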
A texture usually exhibits both stationarity and locality [104]. Stationarity is a global property suggesting that different image regions look similar to each other (periodicity), while locality is a local property indicating that a single pixel interacts with a coherent local neighbourhood (local structure). Probability models derived from global image signal statistics emphasise stationarity, while non-parametric sampling techniques focus on locality. This is a major conceptual difference between the two approaches to texture analysis and synthesis.