Image Processing: Dealing With Texture (Hardcover)
Maria Petrou, Pedro Garcia Sevilla
- Publisher: Wiley
- Publication date: 2006-03-01
- Price: $1,406
- Language: English
- Pages: 634
- Binding: Hardcover
- ISBN: 0470026286
- ISBN-13: 9780470026281
Other editions:
Image Processing: Dealing with Texture, 2/e (Hardcover)
Description
Self-contained text covering practical image processing methods and theory for image texture analysis.
Techniques for the analysis of texture in digital images are essential to a range of applications in areas as diverse as robotics, defence, medicine and the geo-sciences. In biological vision, texture is an important cue allowing humans to discriminate objects. This is because the brain is able to decipher important variations in data at scales smaller than those of the viewed objects. In order to deal with texture in digital data, many techniques have been developed by image processing researchers.
With a wholly practical approach and many worked examples, Image Processing: Dealing with Texture is a comprehensive guide to these techniques, including chapters on mathematical morphology, fractals, Markov random fields, Gabor functions and wavelets. Structured around a series of questions and answers, enabling readers to easily locate information on specific problems, this book also:
- provides detailed descriptions of methods used to analyse binary as well as grey texture images
- presents information on two levels: an easy-to-follow narrative explaining the basics, and an advanced, in-depth study of mathematical theorems and concepts
- looks at ‘good’ and ‘bad’ image processing practice, with wrongly designed algorithms illustrating ‘what not to do’
- includes an accompanying website setting out all algorithms discussed in the text.
An ideal self-teaching aid for senior undergraduate and Masters students taking courses in image processing and pattern recognition, this book is also a valuable reference for PhD students, electrical and biomedical engineers, mathematicians, and informatics researchers designing image processing applications.
Table of Contents
Preface.
1 Introduction.
What is texture?
Why are we interested in texture?
How do we cope with texture when texture is a nuisance?
How does texture give us information about the material of the imaged object?
Are there non-optical images?
What is the meaning of texture in non-optical images?
What is the albedo of a surface?
Can a surface with variable albedo appear non-textured?
Can a rough surface of uniform albedo appear non-textured?
What are the problems of texture which image processing is trying to solve?
What are the limitations of image processing in trying to solve the above problems?
How may the limitations of image processing be overcome for recognising textures?
What is this book about?
Box 1.1. An algorithm for the isolation of textured regions.
2 Binary textures.
Why are we interested in binary textures?
What is this chapter about?
Are there any generic tools appropriate for all types of texture?
Can we at least distinguish classes of texture?
Which are the texture classes?
Which tools are appropriate for each type of texture?
2.1 Shape grammars.
What is a shape grammar?
Box 2.1. Shape grammars.
What happens if the placement of the primitive pattern is not regular?
What happens if the primitive pattern itself is not always the same?
What happens if the primitive patterns vary in a continuous way?
2.2 Boolean models.
What is a 2D Boolean model?
Box 2.2. How can we draw random numbers according to a given probability density function?
Box 2.3. What is a Poisson process?
How can we use the 2D Boolean model to describe a binary texture?
How can we estimate some aggregate parameters of the 2D Boolean model?
How can we estimate some individual parameters of the 2D Boolean model?
Box 2.4. How can we relate the individual parameters to the aggregate parameters of the 2D Boolean model?
What is the simplest possible primitive pattern we may have in a Boolean model?
What is a 1D Boolean model?
How may the 1D Boolean model be used to describe textures?
How can we create 1D strings from a 2D image?
Box 2.5. Hilbert curves.
How can we estimate the parameters of the 1D Boolean model?
Box 2.6. Parameter estimation for the discrete 1D Boolean model.
What happens if the primitive patterns are very irregular?
2.3 Mathematical morphology.
What is mathematical morphology?
What is dilation?
What is erosion?
Is there any way to lose details smaller than a certain size but leave the size of larger details unaffected?
What is opening?
What is closing?
How do we do morphological operations if the structuring element is not symmetric about its centre?
Since the structuring element looks like a small image, can we exchange the roles of object and structuring element?
Is closing a commutative operation?
Can we use different structuring elements for the erosion and the dilation parts of the opening and closing operators?
Can we apply morphological operators to the white pixels of an image instead of applying them to the black pixels?
Can we apply more than one morphological operator to the same image?
Is erosion an associative operation as well?
How can we use morphological operations to characterise a texture?
Box 2.7. Formal definitions in mathematical morphology.
What is the “take home” message of this chapter?
3 Stationary grey texture images.
What is a stationary texture image?
What is this chapter about?
Are any of the methods appropriate for classifying binary textures useful for the analysis of grey textures?
3.1 Image binarisation.
How may a grey image be analysed into a set of binary images by thresholding?
How may a grey image be analysed into a set of binary images by bit-slicing?
Is there any relationship between the binary planes produced by thresholding and the bit planes?
3.2 Grey scale mathematical morphology.
How does mathematical morphology generalise for grey images?
How is the complement of an image defined for grey images?
What is a non-flat structuring element?
What is the relationship between the morphological operations applied to an image and those applied to its complement?
What is the purpose of using a non-flat structuring element?
How can we perform granulometry with a grey image?
Can we extract in one go the details of a signal, peaks or valleys, smaller than a certain size?
How can we use the pattern spectrum to classify textures?
3.3 Fractals.
What is a fractal?
What is the fractal dimension?
Which statistical properties remain the same at all scales in non-deterministic fractals?
Box 3.1. What is self-affine scaling?
Box 3.2. What is the relationship between the fractal dimension and exponent H?
Box 3.3. What is the range of values of H?
What is a fractional Brownian motion?
Box 3.4. Prove that the range of values of H for a fractional Brownian motion is (0,1).
Box 3.5. What is the correlation between two increments of a fractional Brownian motion?
Box 3.6. What is the power spectrum of a fractal?
Box 3.7. Robust line fitting using the RANSAC method.
Box 3.8. What is the autocorrelation function of a fractal?
Is fractal dimension a good texture descriptor?
Is there a way to enrich the description of textures offered by fractal models?
What is lacunarity?
3.4 Markov random fields.
What is a Markov random field?
Which are the neighbouring pixels of a pixel?
How can we use MRFs to characterise textures?
What is texture synthesis by analysis?
How can we apply the Markov model to create textures?
Can we apply the method discussed in the previous section to create images with 256 grey levels?
What is the auto-normal Markov random field model?
How can we estimate the Markov parameters of a texture?
What is maximum likelihood estimation?
What is the log-likelihood?
Box 3.9. What is the relationship between maximum likelihood estimation and Bayesian estimation?
How can we apply maximum likelihood estimation to estimate the parameters of a Markov random field?
How do we know which parameter values to try when we apply MLE to estimate the Markov parameters?
How can we estimate the Markov parameters with the least square error estimation method?
Box 3.10. Least square parameter estimation for the MRF parameters.
Is a Markov random field always realisable given that we define it arbitrarily?
What conditions make an MRF self-consistent?
What is a clique in a neighbourhood structure?
3.5 Gibbs distributions.
What is a Gibbs distribution?
What is a clique potential?
Can we have a Markov random field with only singleton cliques?
What is the relationship between the clique potentials and the Markov parameters?
Box 3.11. Prove the equivalence of Markov random fields and Gibbs distributions (Hammersley–Clifford theorem).
How can we use the Gibbs distribution to create textures?
How can we create an image compatible with a Gibbs model if we are not interested in fixing the histogram of the image?
What is the temperature of a Gibbs distribution?
How does the temperature parameter of the Gibbs distribution determine how distinguishable one configuration is from another?
What is the critical temperature of a Markov random field?
3.6 The autocorrelation function as a texture descriptor.
How can we compute the autocorrelation function of an MRF?
Can we use the autocorrelation function itself to characterise a texture?
How can we use the autocorrelation function directly for texture characterisation?
How can we infer the periodicity of a texture from the autocorrelation function?
How can we extract parametric features from the autocorrelation function?
Box 3.12. Least square fitting in 2D and 1D.
3.7 Texture features from the Fourier transform.
Can we infer the periodicity of a texture directly from its power spectrum?
Does the phase of the Fourier transform convey any useful information?
Since the phase conveys more information for a pattern than its power spectrum, why don’t we use the phase to describe textures?
Is it possible to compute from the image phase a function the value of which changes only due to genuine image changes?
How do we perform phase unwrapping?
What are the drawbacks of the simple phase unwrapping algorithm?
3.8 Co-occurrence matrices.
Can we use non-parametric descriptions of texture?
How is a co-occurrence matrix defined?
How do we compute the co-occurrence matrix in practice?
How can we recognise textures with the help of the co-occurrence matrix?
How can we choose the parameters of the co-occurrence matrix?
What are the higher-order co-occurrence matrices?
What is the “take home” message of this chapter?
4 Non-stationary grey texture images.
What is a non-stationary texture image?
What is this chapter about?
Why can’t we use the methods developed in the previous chapter here?
How can we be sure that the texture inside an image window is stationary?
4.1 The uncertainty principle and its implications in signal and image processing.
What is the uncertainty principle in signal processing?
Box 4.1. Prove the uncertainty principle in signal processing.
Does the window we choose in order to extract local information influence the result?
How can we estimate “what is happening where” in a digital signal?
How can we deal with the variability of the values of a feature?
How do we know which size window we should use?
How is the uncertainty principle generalised to 2D?
4.2 Gabor functions.
What is a Gabor function?
Why are Gabor functions useful in analysing a signal?
How can we use the Gabor functions in practice?
How is a Gabor function generalised in 2D?
How may we use the 2D Gabor functions to analyse an image?
Can we have alternative tessellations of the frequency domain?
How can we define a Gaussian window in polar coordinates in the frequency domain?
What is an octave?
How do we express a frequency in octaves?
How may we choose the parameters of the Gaussian window in the frequency space?
4.3 Prolate spheroidal sequence functions.
Is it possible to have a window with sharp edges in one domain which has minimal side ripples in the other domain?
Box 4.2. Of all the band-limited sequences one can define, which sequence has the maximum energy concentration between a given set of indices?
Box 4.3. Do prolate spheroidal wave functions exist in the digital domain?
What is the relationship of two band-limited functions, the Fourier transforms of which are given by the real functions F(ω_x, ω_y) and F(−ω_x, −ω_y), respectively?
How can we construct a filter which is band-limited in two bands which are symmetrically placed about the origin of the axes in the frequency domain?
Box 4.4. How may we generalise the prolate spheroidal sequence functions to 2D?
Could we construct the 2D prolate spheroidal sequence filters as separable filters?
What is the advantage of using separable filters?
4.4 Wavelets.
Is there a way other than using Gabor functions to span the whole spatio-frequency space?
What is a wavelet?
How can we use wavelets to analyse a signal?
Box 4.5. How should we choose the mother wavelet?
Box 4.6. Does the wavelet function minimise the uncertainty inequality?
How is the wavelet transform adapted for digital signals?
How do we compute the wavelet coefficients in practice?
Why is the continuous wavelet transform invertible and the discrete wavelet transform non-invertible?
How can we span the part of the “what happens when” space which contains the direct component of the signal?
Can we span the whole “what is where” space by using only the scaling function?
What is a Laplacian pyramid?
Why is the creation of a Laplacian pyramid associated with the application of a Gaussian function at different scales, and the subtraction of the results?
Why may the second derivative of a Gaussian function be used as a filter to estimate the second derivative of a signal?
How can we extract the coarse resolution content of a signal from its content at a finer resolution?
How can we choose the scaling function?
How do we perform the multiresolution analysis of a signal in practice?
Why in tree wavelet analysis do we always analyse the part of the signal which contains the low frequencies only?
Box 4.7. How do we recover the original signal from its wavelet coefficients in practice?
How many different wavelet filters exist?
How may we use wavelets to process images?
How may we use wavelets to construct texture features?
What is the maximum overlap algorithm?
What is the relationship between Gabor functions and wavelets?
4.5 Where Image Processing and Pattern Recognition meet.
Why in wavelet analysis do we always split the band with the maximum energy?
What is feature selection?
How can we visualise the histogram of more than one feature in order to decide whether they constitute a good feature set?
What is the feature space?
What is the histogram of distances in a feature space?
Is it possible that the histogram of distances does not pick up the presence of clusters, even though clusters are present?
How do we segment the image once we have produced a set of features for each pixel?
What is the K-means algorithm?
What is deterministic annealing?
Box 4.8. Maximum entropy clustering.
How may we assess the quality of a segmentation?
How is the Bhattacharyya distance defined?
How can we compute the Bhattacharyya distance in practice?
How may we assess the quality of a segmentation using a manual segmentation as reference?
What is a confusion matrix?
What are the over- and under-detection errors?
4.6 Laws’ masks and the “what looks like where” space.
Is it possible to extract image features without referring to the frequency domain?
How are Laws’ masks defined?
Is there a systematic way to construct features that span the “what looks like where” space completely?
How can we expand a local image neighbourhood in terms of the Walsh elementary images?
Can we use convolution to compute the coefficients of the expansion of a sub-image in terms of a set of elementary images?
Is there any other way to express the local structure of the image?
4.7 Local binary patterns.
What is the local binary pattern approach to texture representation?
How can we make this representation rotationally invariant?
How can we make this representation appropriate for macro-textures?
How can we use the local binary patterns to characterise textures?
What is a metric?
What is a pseudo-metric?
Why should one wish to use a pseudo-metric and not a metric?
How can we measure the difference between two histograms?
How can we use the local binary patterns to segment textures?
How can we overcome the shortcomings of the LBP segmentation?
4.8 The Wigner distribution.
What is the Wigner distribution?
How is the Wigner distribution used for the analysis of digital signals?
What is the pseudo-Wigner distribution?
What is the Kaiser window?
What is the Nyquist frequency?
Why does the use of the pseudo-Wigner distribution require signals which have been sampled at twice their Nyquist frequency?
Should we worry about aliasing when we use the pseudo-Wigner distribution for texture analysis?
How is the pseudo-Wigner distribution defined for the analysis of images?
How can the pseudo-Wigner distribution be used for texture segmentation?
What is the “take-home” message of this chapter?
Bibliographical notes.
References.
Index.