Eigenvalues play an important role in image processing. For example, the sharpness of an image can be measured using eigenvalues, and in human face segmentation with an elliptical model, the largest and smallest eigenvalues of the covariance matrix determine the shape of the ellipse. Of course, you can fully understand how this works only after studying the math behind it.
One application that is fairly easy to understand is image compression (also called dimension reduction). Image compression reduces the size of a graphics file for more convenient storage, and it also reduces the time needed to send large files over the Web. One such compression method is principal component analysis (PCA). This technique uses the idea that any image can be represented as a superposition of weighted base images.
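To make the superposition idea concrete, here is a toy sketch (the two base images and their weights are invented purely for illustration): a small image is written as a weighted sum of base images.

```python
import numpy as np

# Two hypothetical 2x2 base images.
base1 = np.array([[1.0, 0.0],
                  [0.0, 1.0]])
base2 = np.array([[0.0, 1.0],
                  [1.0, 0.0]])

# A weighted superposition of the base images yields the image itself.
weights = (3.0, 2.0)
image = weights[0] * base1 + weights[1] * base2
# image == [[3, 2], [2, 3]]
```

PCA finds base images (the eigenimages discussed below) for which a few large weights capture most of the picture, so the remaining weights can be dropped.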
Take this image as an example.
We divide the image into blocks of 10×10 pixels and flatten each block. These sub-blocks are arranged into an n × p matrix, where n is the number of blocks and p is the number of elements in each block. We apply PCA to this matrix, which returns a set of eigenvalues, eigenvectors, and principal components.
This produces eigenimages, which are the essential building blocks of the compressed image. The eigenvalues tell how much a particular eigenvector contributes to making up the complete image. Based on these values, expressed as percentages, we choose the most important eigenvectors and reconstruct the image from them, which is essentially the reverse of PCA.
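The selection-and-reconstruction step can be sketched like this (again a self-contained sketch on an assumed synthetic 100×100 image; `reconstruct` is a hypothetical helper, not a function from the article):

```python
import numpy as np

# Same setup as before: 10x10 blocks of a synthetic image, then PCA.
rng = np.random.default_rng(0)
image = rng.random((100, 100))
b = 10
X = np.array([image[i:i + b, j:j + b].ravel()
              for i in range(0, 100, b) for j in range(0, 100, b)])
mean = X.mean(axis=0)
Xc = X - mean
eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Eigenvalue importance "expressed in percentages", as in the text above.
explained = 100 * eigvals / eigvals.sum()

def reconstruct(k):
    """Keep the k most important eigenvectors and invert the projection."""
    V = eigvecs[:, :k]                 # p x k
    approx = (Xc @ V) @ V.T + mean     # project down, then back up
    # Reassemble the flattened blocks into a full image.
    out = np.zeros_like(image)
    idx = 0
    for i in range(0, 100, b):
        for j in range(0, 100, b):
            out[i:i + b, j:j + b] = approx[idx].reshape(b, b)
            idx += 1
    return out
```

Keeping more eigenvectors can only lower the reconstruction error; keeping all of them reproduces the original image exactly, since the eigenvector matrix is orthogonal.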
Compressed reconstructions of the image using 1, 3, 5, and 10 eigenvectors, respectively.