Digital filters are a remarkable group of image-processing methods. They operate either in the time (spatial) domain or in the frequency domain. This chapter introduces the principles of digital filters.

Goal: to complete the self-test successfully.

Digital filters are image-processing methods that manipulate the image in the time domain or in the frequency domain. Time-domain methods were used in the early days of image processing; today they are called spatial-domain methods. The frequency-domain representation of an image is called its spectrum.

Let the image be *s(t)*, its spectrum *S(f)*, and let *F* denote the Fourier transform. The time and frequency domains are connected by the Fourier transform:

*S(f) = F {s(t)}*

The inverse operation is:

*s(t) = F^{-1}{S(f)}*

The effect of some digital filters is described by a function (transfer function) in the frequency domain (e.g. convolution filters), while others are defined directly in the time domain (e.g. median filters). If we multiply the spectrum of an image by a rectangular function that is 1 on the interval [-f, f] and 0 elsewhere, the frequency components above the bound f are removed, i.e. the image is smoothed. The smoothed image is obtained by applying the inverse Fourier transform to the altered spectrum. The degree of smoothing depends on the upper frequency bound: the lower the bound, the stronger the smoothing.

This is how digital filters operate in general:

- take the function *s(t)* (the digital image itself)
- apply the Fourier transform, i.e. compute its spectrum: *S(f) = F{s(t)}*
- multiply the spectrum by the transfer function: *S'(f) = S(f) A(f)*, where *A(f)* is the transfer function (the function used in the example above)
- finally, apply the inverse Fourier transform to the new spectrum to obtain the filtered image: *s'(t) = F^{-1}{S'(f)}*
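The four steps above can be sketched as follows. This is a minimal illustration, assuming a 1-D signal and an ideal (rectangular) low-pass transfer function *A(f)*; the function name `lowpass_filter` and the cutoff parameter are illustrative assumptions, not from the text.

```python
import numpy as np

def lowpass_filter(s, cutoff):
    """Remove frequency components above `cutoff` from the signal s."""
    S = np.fft.fft(s)                           # S(f) = F{s(t)}
    f = np.fft.fftfreq(len(s), d=1.0 / len(s))  # frequency of each bin
    A = (np.abs(f) <= cutoff).astype(float)     # rectangular transfer function
    S_filtered = S * A                          # S'(f) = S(f) A(f)
    return np.fft.ifft(S_filtered).real         # s'(t) = F^{-1}{S'(f)}

# A 3 Hz wave plus a 40 Hz disturbance; cutting above 10 Hz removes the latter.
t = np.linspace(0, 1, 256, endpoint=False)
signal = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
smoothed = lowpass_filter(signal, cutoff=10)
```

Lowering `cutoff` removes more high-frequency components and therefore smooths more strongly, matching the observation above.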

Most "well-behaved" functions have a Fourier transform. Since the transfer function is known in advance (it is defined according to what we want to do with the image), its inverse Fourier transform can always be computed. According to the convolution identity, the convolution of two functions in the time domain corresponds to the product of their spectra:

*s(t) ∗ a(t) = F^{-1}{S(f) A(f)}*

Applying the convolution identity to a given transfer function, filtering can therefore be performed in the time domain by convolving the original image with the inverse Fourier transform of the transfer function:

*s'(t) = s(t) ∗ F^{-1}{A(f)},* where *A(f)* is the transfer function.
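The identity can be checked numerically on discrete data. The sketch below, using illustrative 1-D random signals, compares circular convolution computed directly in the time domain with the inverse FFT of the product of the two spectra:

```python
import numpy as np

N = 64
rng = np.random.default_rng(0)
s = rng.standard_normal(N)   # the "image" (1-D for brevity)
a = rng.standard_normal(N)   # the kernel a(t) = F^{-1}{A(f)}

# Time domain: (s * a)[k] = sum_n s[n] a[(k - n) mod N]
direct = np.array([sum(s[n] * a[(k - n) % N] for n in range(N))
                   for k in range(N)])

# Frequency domain: F^{-1}{S(f) A(f)}
via_fft = np.fft.ifft(np.fft.fft(s) * np.fft.fft(a)).real
```

The two results agree to floating-point precision; this equivalence is exactly what makes kernel-based (time-domain) filtering interchangeable with spectral filtering.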

The functions discussed above are continuous. For discrete functions, the Fourier transform is called the Discrete Fourier Transform (DFT); its fast and efficient implementation is the Fast Fourier Transform (FFT). The discrete setting uses the following quantities:

- *s(t)* — the digital image itself
- *A(f)* — a discrete transfer function
- *a(t)* — the inverse transform of *A(f)*, the sampling function

Convolution in the time domain uses the values of the sampling function *a(t)*.

The term kernel has been introduced for the discrete inverse Fourier transform of the transfer function. Kernels are widely used not only when the effect of a filter can be defined as the product of the spectrum and a transfer function, but also when the time-domain operation cannot be expressed in the frequency domain at all (e.g. rank filters such as the median filter).

Most digital filters use kernels. Their operation can be summarized in the following steps:

- take a kernel of size (2k+1) × (2k+1)
- slide the kernel window over every pixel of the image so that the centre of the window lies on the current pixel
- compute the filtered value of the current pixel from the values covered by the kernel, using a suitable algorithm.
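The steps above can be sketched as follows. This is a minimal illustration, not a production implementation: the border handling (edge replication) and the names `kernel_filter` and `reduce` are assumptions made for the example.

```python
import numpy as np

def kernel_filter(image, k, reduce=np.mean):
    """Slide a (2k+1) x (2k+1) window over every pixel and apply `reduce`."""
    padded = np.pad(image, k, mode="edge")    # replicate border pixels
    out = np.empty(image.shape, dtype=float)
    for y in range(image.shape[0]):
        for x in range(image.shape[1]):
            # Window centred on the current pixel (y, x)
            window = padded[y:y + 2 * k + 1, x:x + 2 * k + 1]
            out[y, x] = reduce(window)        # filtered value from the window
    return out

img = np.arange(25, dtype=float).reshape(5, 5)
blurred = kernel_filter(img, k=1)                     # 3x3 mean (smoothing)
medianed = kernel_filter(img, k=1, reduce=np.median)  # 3x3 median (rank filter)
```

Passing a different `reduce` function turns the same sliding-window machinery into a mean, median, or any other rank filter.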

This is usually formalized as follows: let *g(x,y) = F{f(x,y)}*, where *g(x,y)* is the filtered image at coordinate *(x,y)*, *f* is the original image, and the operator *F* computes the filtered value at *(x,y)* from the intensity values of the neighbours of *(x,y)* in the original image. We exploit the observation that the intensity values of nearby points are more strongly correlated than those of distant points. Another important property of these filters is that they are not applied recursively: the procedure relies on the intensity values of the original image, i.e. the input data are always taken from the original image, never from already-filtered pixels.
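The non-recursive rule can be made concrete by contrasting it with an (incorrect) in-place variant. The sketch below is 1-D for brevity, and the three-point averaging filter is an illustrative choice:

```python
import numpy as np

def smooth_non_recursive(f):
    """Correct: neighbour intensities are always read from the original f."""
    g = f.astype(float).copy()
    for x in range(1, len(f) - 1):
        g[x] = (f[x - 1] + f[x] + f[x + 1]) / 3   # neighbours read from f
    return g

def smooth_in_place(f):
    """Incorrect: already-filtered values leak into later pixels."""
    g = f.astype(float).copy()
    for x in range(1, len(g) - 1):
        g[x] = (g[x - 1] + g[x] + g[x + 1]) / 3   # g[x-1] is already filtered!
    return g

f = np.array([0.0, 0.0, 9.0, 0.0, 0.0])
print(smooth_non_recursive(f))  # [0. 3. 3. 3. 0.]
print(smooth_in_place(f))       # differs: the change at x = 1 propagates right
```

In the in-place version the result depends on the scan order, which is exactly why the definition above insists on reading from the original image.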

The project "Társadalominformatika: development of modular curricula and interdisciplinary content- and knowledge-management systems" was realised with the support of the European Union, co-financed by the European Social Fund, within the framework of the ELTE TÁMOP 4.1.2.A/1-11/1-2011-0056 project.
