# Digital Image Processing Using MATLAB

## Ch. 8: Image Compression
### Introduction

- Images use a lot of memory to store and a lot of bandwidth to transmit.
- How can we store them compactly?
- There are 3 redundancies in images: coding, interpixel, and psychovisual.
- We wish to remove these redundancies to save space.

### Introduction (2)

- Image compression is not built into MATLAB.
- We will be looking at code to compress images, both MATLAB code and C code called from MATLAB.
### Image Compression Systems

- Two blocks: encoder and decoder.
- f(x,y) → encoder → c(x,y)
- c(x,y) → decoder → F(x,y)
- F(x,y) = f(x,y): lossless compression
- F(x,y) ≠ f(x,y): lossy compression
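The lossless/lossy distinction above is simply a round-trip test: does decoding the encoded image reproduce the original exactly? A minimal Python sketch (the book's code is MATLAB; the codecs here are toy examples, not from the text):

```python
import numpy as np

def is_lossless(f, encode, decode):
    """Lossless iff the decoded image equals the original exactly."""
    return np.array_equal(decode(encode(f)), f)

f = np.arange(16, dtype=np.uint8).reshape(4, 4)

# Identity codec is trivially lossless.
print(is_lossless(f, lambda f: f, lambda c: c))          # True
# Quantizing to 4 levels loses information, so it is lossy.
print(is_lossless(f, lambda f: f // 4, lambda c: c * 4)) # False
```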
2

### Compression Systems (2)

- Scientific or medical images: preserving the original data is important.
- Pictures: the original data is not important as long as the visual quality is acceptable.
- For image compression, the compression ratio cR of the image is important: cR = (# bits original) / (# bits compressed). We want cR > 1.0.
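As a minimal illustration of the ratio (a Python sketch, not the book's MATLAB):

```python
def compression_ratio(original_bits, compressed_bits):
    """cR = (# bits original) / (# bits compressed); cR > 1 means savings."""
    return original_bits / compressed_bits

# An 8-bit 512 x 512 image compressed down to 262144 bits:
cr = compression_ratio(512 * 512 * 8, 262144)
print(cr)  # 8.0
```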
### MATLAB Code pp. 283 – 284

- The code to calculate the compression ratio uses the function bytes(arg) to find the number of bytes in the 2 images. Function imratio divides the bytes in the plain image by the bytes in the encoded image to get the ratio.
- If arg is a string (an image filename), the dir function gets the number of bytes.

### MATLAB Code pp. 283 – 284 (2)

- If arg is a structure variable, the code adds up the lengths of the fields. The function fieldnames returns the names of the fields.
- If arg is anything else, the whos function calculates the bytes.
### MATLAB Code pp. 283 – 284 (3)

- Usage (p. 284): r = imratio(imread('imagename'), 'imagename') prints r = 35.1612 for the image bubbles25.jpg.
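A rough Python analogue of the dispatch logic described above (the function and variable names here are hypothetical stand-ins; the book's bytes/imratio are MATLAB functions, and a Python dict stands in for a structure variable):

```python
import os
import sys

def num_bytes(arg):
    """Rough analogue of the book's bytes(arg) helper (hypothetical name)."""
    if isinstance(arg, str):
        return os.path.getsize(arg)              # like MATLAB's dir(...).bytes
    if isinstance(arg, dict):                    # stands in for a struct variable
        return sum(sys.getsizeof(v) for v in arg.values())
    return sys.getsizeof(arg)                    # like MATLAB's whos

def imratio(f1, f2):
    """Compression ratio: bytes in f1 divided by bytes in f2."""
    return num_bytes(f1) / num_bytes(f2)
```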

### Error in Lossy Compression

- In lossy compression, the decoded image F is different from the original image f. The error at pixel (x,y) is e(x,y) = F(x,y) − f(x,y).
- The total error is found by summing the pixel errors from y = 0 to N − 1 and from x = 0 to M − 1 (p. 285).
### Error in Lossy Compression (2)

- The root-mean-square error is found as follows:
  - calculate the square of the error for each pixel and add them all up
  - divide the result by the image size MN
  - take the square root of the result
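The steps above can be sketched in Python/NumPy (rather than the book's MATLAB):

```python
import numpy as np

def rms_error(f, F):
    """Root-mean-square error between original f and decoded F."""
    e = F.astype(float) - f.astype(float)  # e(x,y) = F(x,y) - f(x,y)
    return np.sqrt(np.mean(e ** 2))        # sqrt( sum of e^2 / (M*N) )

f = np.array([[10, 20], [30, 40]], dtype=np.uint8)
F = np.array([[12, 20], [30, 44]], dtype=np.uint8)
print(rms_error(f, F))  # sqrt((4 + 16)/4) = sqrt(5), about 2.236
```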

### Error in Lossy Compression (3)

- Code for the function compare is on pp. 285 – 286. It computes the root-mean-square error and, if it is not 0, outputs an error image scaled from −emax to emax (the maximum error at any pixel), scaled by any desired value (default 1), as well as a histogram of the error image.
### Image Encoder

- The encoder has 3 parts:
  - a mapper, which reduces interpixel redundancy
  - a quantizer, which eliminates psychovisual redundancy
  - a symbol coder, which eliminates coding redundancy

### Image Decoder

- The image decoder has 2 parts:
  - a symbol decoder, which reverses the actions of the symbol coder
  - an inverse mapper, which reverses the actions of the mapper
- Note that quantizing cannot be undone.
### What Next?

- We will look first at symbol encoding and decoding (Section 8.2).
- We will look next at interpixel redundancy (Section 8.3).
- After that, we will look at image quantization (Section 8.4).
- Finally, we will look briefly at JPEG examples (Sections 8.5 and 8.6).

### 8.2 Coding Redundancy

- A grey-level image with levels 0 to 255 can be represented by an array of 8-bit bytes.
- An M x N image can thus be represented using 8MN bits.
- Using Huffman coding, we can reduce the number of bits required to store the image.
### Huffman Coding

- The idea is to assign frequently occurring grey levels a small number of bits, and less frequently occurring levels a larger number of bits.
- In the usual case, this will reduce the number of bits required to store the M x N image.
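The greedy merge behind Huffman coding can be sketched as follows (a Python sketch with a hypothetical helper name, not the book's MATLAB implementation; it repeatedly merges the two least probable symbol groups, and every merge adds one bit to the code of each symbol in the merged groups). Using the Table 8.1 probabilities from the example below, it reproduces the Code 2 lengths 3, 1, 3, 2:

```python
import heapq
from itertools import count

def huffman_lengths(probs):
    """Code length per symbol via the greedy Huffman merge (symbols = indices)."""
    tiebreak = count()  # keeps heap entries comparable on probability ties
    heap = [(p, next(tiebreak), [k]) for k, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)   # two least probable groups
        p2, _, s2 = heapq.heappop(heap)
        for k in s1 + s2:                 # each merge adds one bit to every
            lengths[k] += 1               # symbol in the merged groups
        heapq.heappush(heap, (p1 + p2, next(tiebreak), s1 + s2))
    return lengths

print(huffman_lengths([0.1875, 0.5, 0.125, 0.1875]))  # [3, 1, 3, 2]
```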

### Huffman Coding (2)

- Let rk be a discrete random variable for k = 1, 2, …, L, where rk represents grey level k.
- We count how often rk occurs in the image (nk times) and let its probability be p(rk) = nk/n, where n = M x N.
### Huffman Coding (3)

- Let N(rk) be the number of bits used to encode rk. Then the average number of bits required per pixel is Lavg = the sum from k = 1 to L of N(rk) p(rk), and the number of bits to encode the M x N image is B = MN Lavg.

### Example (p. 287, Table 8.1)

- In this example, there are 4 grey levels.
- Code 1 uses 2 bits for each pixel, so Lavg = 2.
- Code 2 uses a different number of bits for each level: 3, 1, 3, and 2. Its Lavg = 3 x 0.1875 + 1 x 0.5 + 3 x 0.125 + 2 x 0.1875 = 1.8125 bits.
### Example (continued)

- The compression ratio is 2/1.8125 = 1.103.
- A 4 x 4 image using the 4 grey levels with the 2-bit code would take 32 bits (4 x 4 x 2).
- A 4 x 4 image using Code 2 would take 29 bits.
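The arithmetic of this example is easy to check (Python rather than the book's MATLAB; the probabilities and code lengths are the Table 8.1 values quoted above):

```python
probs   = [0.1875, 0.5, 0.125, 0.1875]  # p(rk) from Table 8.1
lengths = [3, 1, 3, 2]                   # N(rk) for Code 2

L_avg = sum(N * p for N, p in zip(lengths, probs))
print(L_avg)          # 1.8125 bits per pixel
print(2 / L_avg)      # compression ratio vs the 2-bit code, about 1.103
print(4 * 4 * L_avg)  # bits for a 4 x 4 image: 29.0
```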