Pattern Recognition and Image Processing
171,70 €
Arcler Education Inc
Page count: 242 pages
Format: Hardcover
Publication year: 2016 (published 30.11.2016)
Language: English
Pattern recognition and image processing are rapidly growing technologies within engineering and computer science. Pattern recognition algorithms aim to observe the environment, learn to distinguish patterns of interest from their background, and assign them to pattern classes. A range of methods has been developed to group individual patterns into classes according to their common properties. Driven by emerging, computationally demanding applications, a wide spectrum of algorithms has been developed for problems such as data mining, classification of multimedia data, and biometric recognition based on features such as faces and fingerprints. Methods for contrast enhancement, connected-component labeling, segmentation, and feature detection build on existing signal processing algorithms.

Clustering is a widely used concept in pattern recognition and image processing. It becomes scientific not through the uniqueness of its solutions but through transparent and open communication of the choices involved. Various desirable characteristics of clusterings and approaches to defining a context-dependent truth are listed, and the impact these ideas can have on the comparison and choice of clustering methods in practical applications is discussed. This subject is treated in detail in the first section of this book. The following six chapters present methods for solving problems of pattern and texture recognition. The remaining content focuses on advances in specific methods and algorithms in the field of image processing. The advances discussed include dynamic detection and rejection of liveness-recognition pair outliers, a system to recognize patterns at the cellular and specimen levels, scale-adapted features, and various techniques to encode visual structure. Advances in solving TV deblurring and denoising problems and in the suppression of mixed Gaussian and impulsive noise in color images are also investigated.

Dynamic detection and rejection of liveness-recognition pair outliers for spoofed samples is investigated in a true multi-modal configuration, with its inherent challenge of normalization. Bootstrap aggregating (bagging) classifiers for a fingerprint spoof-detection algorithm are presented. Experiments on recent face video databases and a fingerprint spoofing database illustrate the effectiveness of the proposed techniques.
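
As a rough illustration of the bagging idea referred to above (not the book's specific spoof-detection pipeline), a bootstrap-aggregated classifier over hypothetical fingerprint features could be trained with scikit-learn roughly as follows:

    # Minimal sketch of bootstrap aggregating (bagging) for spoof detection.
    # The feature matrix below is a synthetic stand-in, not real fingerprint data.
    import numpy as np
    from sklearn.ensemble import BaggingClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 32))      # 200 samples x 32 hypothetical texture features
    y = rng.integers(0, 2, size=200)    # label 1 = spoofed sample

    # Each base learner (a decision tree by default) is trained on a bootstrap
    # resample of the data; predictions are aggregated by majority vote.
    bagger = BaggingClassifier(n_estimators=50, random_state=0)
    bagger.fit(X, y)
    print(bagger.predict(X[:5]))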

A system to recognize patterns at the cellular and specimen levels in images of HEp-2 cells is developed. Ensembles of SVMs are trained to classify cells into six classes based on sparse encoding of texture features with cell pyramids, capturing spatial, multi-scale structure. A similar approach is used to classify specimens into seven classes. Detailed descriptions and extensive experiments with various features and encoding methods are provided.
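
A minimal sketch of the cell-pyramid idea, assuming simple max pooling of per-pixel feature codes over a two-level spatial pyramid (the book's actual sparse encoding and pooling scheme may differ):

    import numpy as np

    def pyramid_pool(codes, levels=(1, 2)):
        # Max-pool a (H, W, K) map of feature codes over a spatial pyramid.
        # Each level splits the cell image into level x level regions; the
        # pooled region vectors are concatenated into one descriptor.
        H, W, K = codes.shape
        parts = []
        for L in levels:
            for i in range(L):
                for j in range(L):
                    region = codes[i * H // L:(i + 1) * H // L,
                                   j * W // L:(j + 1) * W // L]
                    parts.append(region.max(axis=(0, 1)))
        return np.concatenate(parts)

    # Hypothetical 64x64 cell image encoded with K = 100 codes per pixel.
    descriptor = pyramid_pool(np.random.rand(64, 64, 100))   # length 100 * (1 + 4)

The resulting descriptor could then be fed to an SVM ensemble of the kind described above.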

Scale-adapted features are computed with reference to the estimated scale of an image, based on the distribution of scale-normalized Laplacian responses in a scale-space representation. Intrinsic-scale adaptation is performed to compute features independent of the intrinsic texture scale, leading to significantly increased discriminative power for a large number of texture classes. In a final step, the rotation- and scale-invariant features are combined in a multi-resolution representation, which significantly improves accuracy in texture classification scenarios involving scaling and rotation.
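
A rough sketch of how a dominant scale can be estimated from scale-normalized Laplacian responses (Lindeberg-style scale selection); the scale range and the energy criterion below are illustrative choices, not necessarily those used in the book:

    import numpy as np
    from scipy.ndimage import gaussian_laplace

    def estimate_scale(image, sigmas=np.geomspace(1.0, 16.0, 12)):
        # Pick the scale whose scale-normalized Laplacian energy is largest.
        energies = []
        for s in sigmas:
            # Multiplying by s**2 gives the scale-normalized Laplacian response.
            response = (s ** 2) * gaussian_laplace(image.astype(float), sigma=s)
            energies.append(np.mean(response ** 2))
        return sigmas[int(np.argmax(energies))]

    print(estimate_scale(np.random.rand(128, 128)))

Features computed relative to the returned scale are then approximately invariant to uniform scaling of the texture.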

A linear-time method for computing a canonical form is suggested. This method uses Euclidean distances between pairs of vertices from a small subset of the mesh. The approach has comparable retrieval accuracy but lower time complexity than using global geodesic distances, allowing it to be applied to higher-resolution meshes, or to more meshes within a given time budget.
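
Canonical forms of meshes are commonly obtained by embedding pairwise vertex distances with multidimensional scaling; purely as a loose illustration (classical MDS on a small vertex subset, not the linear-time method described above):

    import numpy as np

    def classical_mds(D, dim=3):
        # Embed an (n, n) matrix of pairwise distances into dim dimensions.
        n = D.shape[0]
        J = np.eye(n) - np.ones((n, n)) / n    # centering matrix
        B = -0.5 * J @ (D ** 2) @ J            # double-centered Gram matrix
        w, V = np.linalg.eigh(B)
        idx = np.argsort(w)[::-1][:dim]        # keep the largest eigenvalues
        return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

    # Hypothetical: Euclidean distances between a small subset of mesh vertices.
    pts = np.random.rand(50, 3)
    D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    canonical = classical_mds(D)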

Vision is one of the most important senses, and humans use it extensively during navigation. Different types of image and video frame descriptors are computed that could be used to determine distinctive visual landmarks for localizing a person based on what is seen by a camera that they carry. For each type of descriptor, different techniques to encode visual structure and to search between journeys to estimate a user's position are also tested. The techniques include single-frame descriptors, descriptors using sequences of frames, and both color and achromatic descriptors. The results suggest that appearance-based information could be an additional source of navigational data indoors, offering a complementary approach to methods based on simultaneous localization and mapping (SLAM) algorithms.
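
As a crude sketch of appearance-based matching between journeys, using a global color histogram as the frame descriptor and an L1 nearest-neighbour search (only one of many possible descriptor and search choices, not necessarily those evaluated in the book):

    import numpy as np

    def frame_descriptor(frame, bins=8):
        # Global RGB histogram of one (H, W, 3) frame, L1-normalized.
        hist, _ = np.histogramdd(frame.reshape(-1, 3),
                                 bins=(bins, bins, bins), range=[(0, 256)] * 3)
        hist = hist.ravel()
        return hist / hist.sum()

    def localize(query_frame, reference_frames):
        # Index of the most similar frame along a previously recorded journey.
        q = frame_descriptor(query_frame)
        refs = np.stack([frame_descriptor(f) for f in reference_frames])
        return int(np.argmin(np.abs(refs - q).sum(axis=1)))

    # Hypothetical frames: a query frame and a short reference journey.
    frames = [np.random.randint(0, 256, size=(48, 64, 3)) for _ in range(10)]
    print(localize(frames[3], frames))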

A method of utilising social context and scene context to improve behaviour analysis is developed. It is found that in a crowded scene the application of mutual-information-based social context makes it possible to prevent self-justifying groups and to propagate anomalies through a social network, granting greater anomaly detection capability. Scene context uniformly improves the detection of anomalies in both datasets.

Visual security metrics are deterministic measures with the (claimed) ability to assess whether an encryption method for visual data achieves its defined goal. These metrics are usually developed together with a particular encryption method in order to evaluate that method based on its visual output. Therefore, a methodology for assessing the performance of security metrics based on common media encryption scenarios is considered.

New inexact explicit shrinkage formulas are introduced for a class of problems whose regularization terms have translation-invariant overlapping groups. These results are applied to TV deblurring and denoising problems with overlapping group sparsity, and the alternating direction method of multipliers (ADMM) is used to solve them iteratively.

It is shown that the choice of interpolation method when rotating textures greatly influences recognition capability. Lanczos 3 and B-spline interpolation are comparable to rotating the textures prior to image acquisition, whereas recognition capability is significantly and increasingly lower for the frequently used third-order cubic, linear, and nearest-neighbour interpolation. It is also shown that including generated rotations of the texture samples in the training data improves classification accuracy. For many of the descriptors, this strategy compensates for the shortcomings of the poorer interpolation methods to such a degree that the choice of interpolation method has only a minor impact.
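
The shrinkage step at the heart of such group-sparsity problems has a well-known closed form in the simple, non-overlapping case; a minimal sketch of that standard group soft-thresholding operator (the book's inexact formulas for translation-invariant overlapping groups are more involved):

    import numpy as np

    def group_shrink(v, lam):
        # Proximal operator of lam * ||v||_2 for one group of coefficients:
        # the whole group is scaled toward zero, and set to zero if its norm
        # falls below lam.
        norm = np.linalg.norm(v)
        if norm == 0:
            return v
        return max(0.0, 1.0 - lam / norm) * v

    print(group_shrink(np.array([3.0, 4.0]), lam=2.0))   # scales [3, 4] by 0.6

Within ADMM, an update of this kind is applied to each group at every iteration, alternating with a data-fidelity step.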

To enable an appropriate and fair comparison, a new texture dataset is introduced which contains hardware and interpolated rotations of 25 texture classes.

An optimisation approach to the design of a binary descriptor is proposed, in which the detected keypoint is described using several scale-dependent patches. Each patch is divided into disjoint blocks of pixels, and binary tests between the blocks' intensities, as well as their gradients, are used to obtain the binary string. Since the number of image patches and their relative sizes influence the descriptor creation pipeline, a simulated annealing algorithm is used to determine them, optimising the recall and precision of keypoint matching. Simulated annealing is also used for dimensionality reduction of long binary strings.
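
A generic simulated-annealing loop of the kind used for such configuration search; the objective and the neighbourhood move below are placeholders, not the book's keypoint-matching criterion:

    import math, random

    def simulated_annealing(initial, energy, neighbour,
                            t_start=1.0, t_end=1e-3, steps=5000):
        # Minimize energy over configurations with geometric cooling.
        current, best = initial, initial
        for k in range(steps):
            t = t_start * (t_end / t_start) ** (k / steps)
            candidate = neighbour(current)
            delta = energy(candidate) - energy(current)
            # Always accept improvements; accept worse moves with Boltzmann probability.
            if delta < 0 or random.random() < math.exp(-delta / t):
                current = candidate
                if energy(current) < energy(best):
                    best = current
        return best

    # Placeholder objective: pick four patch sizes whose total is close to 64.
    energy = lambda sizes: abs(sum(sizes) - 64)
    neighbour = lambda sizes: [max(1, s + random.choice([-1, 1])) for s in sizes]
    print(simulated_annealing([8, 8, 8, 8], energy, neighbour))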

A new filter is created by improving the standard Kuwahara filter. It allows more efficient noise reduction without blurring edges, and prepares images for segmentation and further analysis. One of the biggest and most common restrictions encountered in filter algorithms is the need to explicitly specify the filter window size or the number of times an operation should be repeated.
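
For reference, the standard Kuwahara filter that the modified version builds on replaces each pixel with the mean of the least-variable of four overlapping quadrants of its window; a straightforward, unoptimized sketch:

    import numpy as np

    def kuwahara(image, r=2):
        # Standard Kuwahara filter for a 2-D grayscale image: for each pixel,
        # four overlapping (r+1) x (r+1) quadrants of its (2r+1) x (2r+1)
        # window are examined, and the output is the mean of the quadrant
        # with the smallest variance, which smooths while preserving edges.
        img = np.pad(image.astype(float), r, mode="reflect")
        out = np.empty(image.shape, dtype=float)
        H, W = image.shape
        for y in range(H):
            for x in range(W):
                win = img[y:y + 2 * r + 1, x:x + 2 * r + 1]
                quads = [win[:r + 1, :r + 1], win[:r + 1, r:],
                         win[r:, :r + 1], win[r:, r:]]
                out[y, x] = min(quads, key=np.var).mean()
        return out

    print(kuwahara(np.random.rand(16, 16) * 255)[0, :4])

The fixed window radius r illustrates exactly the kind of declarative parameter discussed above.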

A novel technique designed for the suppression of mixed Gaussian and impulsive noise in color images is proposed. The new denoising scheme is based on a weighted averaging of pixels contained in a filtering block. The main novelty of the proposed solution lies in the new definition of the similarity between the samples of the processing block and a small window centered at the block's central pixel. Instead of directly comparing pixels, a measure based on the similarity between a given pixel and the samples from the neighborhood of the central pixel is used. This measure is defined as the sum of distances, in a given color space, between a pixel of the block and a certain number of the most similar samples from the filtering window. The main advantage of the proposed scheme is that the new similarity measure is not influenced by outliers injected into the image by the impulsive noise, while the averaging process ensures the effectiveness of the new filter in reducing Gaussian noise.
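
A minimal sketch of the weighted-averaging idea described above for one filtering block, assuming Euclidean distances in RGB space and an exponential weighting function (the exponential form and the parameter values are illustrative assumptions, not taken from the book):

    import numpy as np

    def denoise_center(block, window, k=4, h=20.0):
        # block:  (N, 3) RGB samples of the filtering block
        # window: (M, 3) RGB samples of a small window around the block's center
        # For each block pixel, the dissimilarity measure is the sum of its
        # distances to the k most similar window samples, which makes the
        # measure robust to impulsive outliers.
        d = np.linalg.norm(block[:, None, :] - window[None, :, :], axis=-1)
        measure = np.sort(d, axis=1)[:, :k].sum(axis=1)
        weights = np.exp(-measure / h)               # illustrative weighting
        return (weights[:, None] * block).sum(axis=0) / weights.sum()

    # Hypothetical 5x5 block and 3x3 central window from a noisy color image.
    block = np.random.randint(0, 256, size=(25, 3)).astype(float)
    window = np.random.randint(0, 256, size=(9, 3)).astype(float)
    print(denoise_center(block, window))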

ISBN: 9781680944471