I: Concept and Background of Image Processing, Techniques, and Big Data
Introduction to Advanced Digital Image Processing
Digital image processing is used ubiquitously across many fields, and its adoption has been growing rapidly. This is largely due to the extensive use of digital images in remote sensing, medicine, machine vision, video processing, microscopic imaging, and so on. Image processing involves manipulating image data using various electronic devices and software. Along with these devices, digital image processing requires applying different algorithms, as the task demands, to convert a physical image into a digital image and extract the desired information or features (Figure 1.1).
A digital image is a representation of a two-dimensional image as a finite set of digital picture elements termed pixels. These pixel values represent parameters such as the gray level, height, color, and opacity of an image in the form of binary digits, and those binary digits can in turn be manipulated mathematically. The size of a digital image is determined by the matrix used to store its pixels. To access a particular pixel, its coordinates along the x and y axes are specified. Each pixel has its own intensity and brightness; pixels take different values from image to image, otherwise images would not appear distinct from one another. Mixtures of primary colors produce a color image. Pixel dimensions are the horizontal and vertical measurements of an image in pixels, and the bit depth of each pixel is determined by the number of bits used to store it. Resolution is the spatial scale of a digital image; it indicates the spatial frequency with which the image has been sampled and can be measured in lpi, ppi, and dpi.
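As a minimal sketch of these ideas in plain Python (a nested list stands in for the pixel matrix, and the helper name is illustrative):

```python
# A toy 4x4 grayscale "image": a matrix of 8-bit pixel values (0-255).
# Rows correspond to the y axis, columns to the x axis.
image = [
    [  0,  64, 128, 255],
    [ 32,  96, 160, 224],
    [ 16,  80, 144, 208],
    [  8,  72, 136, 200],
]

height = len(image)     # number of rows
width = len(image[0])   # number of columns

def get_pixel(img, x, y):
    """Access the pixel at column x, row y."""
    return img[y][x]

print(width, height)           # 4 4
print(get_pixel(image, 3, 0))  # 255 (the brightest pixel in the top row)
```

Each stored value here is the pixel's intensity; a larger matrix simply means more pixels and a bigger image.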
FIGURE 1.1 Process of digital image processing.
Lpi stands for lines per inch and is used generally for magazine printing. Ppi stands for pixels per inch and refers to the pixel arrays depicting the real-world image. Dpi stands for dots per inch and is used to describe the printer’s resolution.
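The relationship between pixel dimensions and ppi determines the physical size of a printed image; a small illustrative calculation (the function name is hypothetical):

```python
def print_size_inches(width_px, height_px, ppi):
    """Physical print size of an image at a given pixels-per-inch."""
    return width_px / ppi, height_px / ppi

# A 3000 x 2400 pixel image printed at 300 ppi:
w_in, h_in = print_size_inches(3000, 2400, 300)
print(w_in, h_in)  # 10.0 8.0
```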
Categorization of Digital Images
Digital images may be categorized as shown in Figure 1.2.
Binary Image
Binary images are images whose pixels have only two possible values, normally 0 and 1, displayed as black and white; such an image is termed monochrome or bitonal. Each pixel consists of a single bit and can therefore represent only one of two shades. More broadly, a monochrome image is composed exclusively of shades of a single color, ranging from the brightest to the darkest hue.
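A common way to obtain a binary image is to threshold a grayscale one; a minimal sketch (the threshold value and function name are illustrative):

```python
def to_binary(gray_image, threshold=128):
    """Map each grayscale pixel (0-255) to 0 (black) or 1 (white)."""
    return [[1 if p >= threshold else 0 for p in row] for row in gray_image]

gray = [[10, 200],
        [130, 90]]
print(to_binary(gray))  # [[0, 1], [1, 0]]
```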
Black and White Image
This image consists of only two colors, black and white, combined in a continuous fashion to create different shades of gray. The range is represented by 256 gray values lying between 0 and 255, where 0 refers to black, 255 refers to white, 127 stands for mid-gray, and the intermediate values refer to neutral tonal values between black and white; such images are commonly termed grayscale (or, in remote sensing, panchromatic) images. Earlier monitors used 4 bits and could display only 16 shades between white and black. In the present-day scenario, however, 8-bit grayscale is used, meaning that 8 bits store the shade of each pixel, permitting 256 different intensities between black and white.
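A color pixel is commonly reduced to one of these 256 gray values by a weighted sum of its channels; a sketch using the widely used ITU-R BT.601 luminance weights (the function name is illustrative):

```python
def rgb_to_gray(r, g, b):
    """Luminance-weighted grayscale value (ITU-R BT.601 weights)."""
    return round(0.299 * r + 0.587 * g + 0.114 * b)

print(rgb_to_gray(255, 255, 255))  # 255 (white)
print(rgb_to_gray(0, 0, 0))        # 0   (black)
print(rgb_to_gray(255, 0, 0))      # 76  (pure red maps to a dark gray)
```

The weights reflect the eye's differing sensitivity to the three channels, with green contributing the most.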
8-Bit Color Format
This image format offers 256 different colors. Each pixel is stored in the computer's memory using one byte, that is, 8 bits; thus, the maximum number of colors that can be displayed is 256.
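In the palette-based (indexed) variant described below, each pixel byte is an index into a table of colors rather than a color itself; a minimal sketch (the tiny 4-entry palette is hypothetical, whereas a real 8-bit palette holds up to 256 entries):

```python
# Hypothetical 4-entry palette; each entry is a full RGB triple.
palette = [
    (0, 0, 0),        # index 0: black
    (255, 0, 0),      # index 1: red
    (0, 255, 0),      # index 2: green
    (255, 255, 255),  # index 3: white
]

# Each pixel stores only a palette index (one byte in a true 8-bit image).
indexed_image = [[0, 1],
                 [2, 3]]

# Decoding replaces every index with its RGB triple from the palette.
decoded = [[palette[i] for i in row] for row in indexed_image]
print(decoded[1][1])  # (255, 255, 255)
```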
A color image is basically formed from three primary colors, red, green, and blue. There are two forms of 8-bit color graphics. One form uses a palette of 256 entries, where each entry holds a full 24-bit color, 8 bits each for red, green, and blue, chosen from 16,777,216 possible colors. Sometimes 18 or 12 bits are used to define the palette colors instead: 18 bits allocate 6 bits each to red, green, and blue (RGB), giving a palette drawn from 262,144 colors, while 12 bits allocate 4 bits to each channel, giving 4,096 colors.
FIGURE 1.2 Categorization of digital images.
The other form of the 8-bit color format allocates three bits to red, three bits to green, and two bits to blue. This second form is often called 8-bit truecolor, as it does not use a palette at all. Most 8-bit image formats instead store a local image palette of 256 colors; because the graphics hardware's global color palette is overwritten by the local image palette, it is highly possible for image colors to become distorted. This is one of the major reasons that programs written for 8-bit hardware, such as web browsers, must cope with images from various sources: each image may carry its own palette, which is ultimately mapped to a single local palette, causing some form of dithering. Popular file formats that support 8-bit color include GIF, PNG, and BMP. When a 24-bit image is converted into an 8-bit image, it loses quality and sharpness.
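The 3-3-2 layout can be sketched with bit operations; packing keeps only the top bits of each 8-bit channel, so unpacking recovers an approximation (function names are illustrative):

```python
def pack_332(r, g, b):
    """Pack 8-bit R, G, B into one 3-3-2 byte by keeping the top bits."""
    return ((r >> 5) << 5) | ((g >> 5) << 2) | (b >> 6)

def unpack_332(byte):
    """Expand a 3-3-2 byte back to approximate 8-bit channels."""
    r = (byte >> 5) & 0b111
    g = (byte >> 2) & 0b111
    b = byte & 0b11
    return (r * 255 // 7, g * 255 // 7, b * 255 // 3)

packed = pack_332(255, 128, 64)
print(packed)               # 241
print(unpack_332(packed))   # (255, 145, 85) -- note the lost precision
```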
16-Bit Color Format
In this type of image format there are 65,536 different colors, and hence it is termed the high color format. The 16 bits are divided among the three primary colors red, green, and blue, with a typical distribution of 5 bits for red, 6 bits for green, and 5 bits for blue. The one extra bit is allocated to green because, among the three colors, the human eye is most sensitive to it.
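The 5-6-5 distribution can likewise be sketched with bit shifts (the function name is illustrative):

```python
def pack_565(r, g, b):
    """Pack 8-bit R, G, B into a 16-bit 5-6-5 word (green gets the extra bit)."""
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

print(pack_565(255, 255, 255))  # 65535 (white uses all 16 bits)
print(pack_565(0, 255, 0))      # 2016  (pure green: the middle 6 bits set)
```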
24-Bit Color Format
The 24-bit color format is also known as the true color format. As in the 16-bit format, the 24 bits are distributed among red, green, and blue; since 24 divides evenly by three, each channel receives 8 bits. The distribution is as follows:
• 8 bits for R (red),
• 8 bits for G (green),
• 8 bits for B (blue).
Compared to indexed color images, true color images lack a color lookup table: a pixel does not hold an index referring to a specific color in such a table. Instead, every pixel carries its own RGB color value and, depending on the file format, may also carry a value for transparency (RGBA). The main advantage of true color images is the availability of the full range of more than 16 million colors.
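A 24-bit pixel is often handled as one integer with the three channels side by side; a sketch of packing and unpacking (function names are illustrative):

```python
def pack_rgb(r, g, b):
    """Pack three 8-bit channels into one 24-bit integer (R high, B low)."""
    return (r << 16) | (g << 8) | b

def unpack_rgb(value):
    """Recover the three 8-bit channels from a 24-bit integer."""
    return ((value >> 16) & 0xFF, (value >> 8) & 0xFF, value & 0xFF)

orange = pack_rgb(255, 165, 0)
print(hex(orange))         # 0xffa500
print(unpack_rgb(orange))  # (255, 165, 0)
```

Unlike the 3-3-2 case, no precision is lost here: packing and unpacking are exact inverses.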
Some of the important image attributes to be considered can be listed as follows:
a. Width: This represents the number of columns in the image.
b. Height: It represents the number of rows in the image.
c. Class: It represents the data type used by the image such as uint8.
d. Image type: It represents the type of image such as intensity, true color, binary, or indexed.
e. Minimum intensity: For intensity images, this is the lowest intensity value of any pixel; for indexed images, it is the lowest index value into the color map. (This attribute is not included for binary or true color images.)
f. Maximum intensity: For intensity images, this is the highest intensity value of any pixel; for indexed images, it is the highest index value into the color map. (This attribute is not included for binary or true color images.)
g. Image intensity: This refers to the amount of light recorded per pixel in an image; it can also be viewed as the data matrix that records the light value for each pixel. However, it is difficult to perceive the intensity of an individual pixel: higher-resolution images use more pixels per unit area, producing an image in which individual pixels are not easily distinguishable.
h. Image brightness: Image brightness is a relative term: a pixel is considered bright in comparison to its neighboring pixels. Brightness depends on the wavelength as well as the amplitude of the light. For example, given an image with pixel intensities of 6, 80, 150, and 180, the pixel with value 180 is the brightest. Hence, the higher the intensity, the brighter the pixel.
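Several of the attributes above can be computed directly from the pixel matrix; a sketch for a grayscale image (the function and the attribute keys mirror the list in the text but are not part of any standard library):

```python
def image_attributes(image):
    """Collect basic attributes of a grayscale image given as a list of rows."""
    flat = [p for row in image for p in row]  # all pixel values
    return {
        "width": len(image[0]),           # number of columns
        "height": len(image),             # number of rows
        "min_intensity": min(flat),       # lowest pixel value
        "max_intensity": max(flat),       # highest (brightest) pixel value
    }

img = [[6, 80],
       [150, 180]]
print(image_attributes(img))
# {'width': 2, 'height': 2, 'min_intensity': 6, 'max_intensity': 180}
```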
Phases of Digital Image Processing
Image processing is an expensive yet important method of performing operations on an image in order to produce an enhanced version of the original, or to extract fine details from it for use in important interpretations. It is a type of signal processing in which the input is an image and the output is an image with the required characteristics. Image processing remains one of the trending technologies being pursued as a research area.
Image processing basically consists of multiple tasks; however, it includes the following four main steps:
• Capturing an image using image acquisition tools
• Analyzing and manipulating the image
• Enhancing and restoring the image
• Producing a refined image that can be used for the required task or information.
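The four steps above can be sketched as a small pipeline; the function names and the contrast-stretching enhancement are illustrative stand-ins, not a prescribed implementation:

```python
def acquire():
    """Step 1: image acquisition (here, a synthetic low-contrast image)."""
    return [[100, 110],
            [120, 130]]

def analyze(image):
    """Step 2: analysis, e.g. finding the intensity range of the image."""
    flat = [p for row in image for p in row]
    return min(flat), max(flat)

def enhance(image, lo, hi):
    """Step 3: enhancement via linear contrast stretching to 0-255."""
    scale = 255 / (hi - lo)
    return [[round((p - lo) * scale) for p in row] for row in image]

def pipeline():
    """Step 4: produce the refined image, ready for the required task."""
    image = acquire()
    lo, hi = analyze(image)
    return enhance(image, lo, hi)

print(pipeline())  # [[0, 85], [170, 255]]
```

The narrow 100-130 intensity range is spread across the full 0-255 scale, which is one simple form of enhancement.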
Image processing may be either analog or digital. Analog images are captured using photographic sensors, which detect variations in the energy intensity of objects, whereas digital images are captured using electro-optical sensors and consist of arrays of pixels with varied intensities.
Analog image processing can be used for hard copies such as printouts and photographs; image analysts apply various fundamentals of interpretation when using these visual techniques. Digital image processing techniques, by contrast, manipulate digital images, i.e., images composed of a matrix of small pixels, using computers and software. The three general phases that all types of data undergo in the digital technique are pre-processing, enhancement and display, and information extraction. A digital signal processing system converts analog signals to digital signals, or vice versa, with the help of a converter; it is fast, allows easy retrieval of any image, and yields good quality.
FIGURE 1.3 Different phases of image processing.
As shown in Figure 1.3, the different phases of image processing can be described as follows: