Image Compression Techniques
Image compression techniques are divided into two categories as follows (Figure 6.2):
Lossless Compression Techniques
The fundamental description of lossless compression techniques is provided in Chapter 1. This section surveys practical approaches based on these compression techniques.
Juliet et al. (2016) presented a novel methodology that incorporates the Ripplet transform to provide good image quality and achieve a high compression rate. In this work, the images are represented at multiple scales and directions to achieve a higher compression rate. Cyriac and Chellamuthu (2012) proposed a Visually Lossless Run Length Encoder/Decoder to address the growth problem of the conventional Run Length Encoder/Decoder. The proposed method compresses the image adequately and also supports faster hardware implementation for real-time applications. Brahimi et al. (2017) described a novel compression method in which a signal and an image are jointly compressed with a single codec. The main idea of this approach is to embed the wavelet-decomposed signal into the decomposed image, and the resulting image is used for the compression. Arif et al. (2015) proposed an efficient technique for the lossless compression of fluoroscopic images. In this work, the extracted region of interest (ROI) is compressed with a combination of Run Length and Huffman coding. The final results show that the proposed technique achieves a compression ratio of 400% in comparison with other conventional methods. Das and Kundu (2013) reported a lossless medical image watermarking technique based on the concept of the ROI. The main aim of this work is to provide solutions to various issues concerning medical data distribution, such as security, content authentication, safe archiving, and safe retrieval and transfer of data. In this paper, seven different modalities are
FIGURE 6.2 Image compression techniques.
used to evaluate and compare the results, which show that the proposed technique is simple and effective in securing the medical database. Lucas et al. (2017) introduced a novel lossless compression method based on three-dimensional (3-D) minimum rate predictors (MRPs) for volumetric sets of medical images. The presented technique builds on the MRP mechanism. The paper concludes that the presented method improves the prediction stage of the MRP algorithm and achieves higher compression efficiency than high-efficiency video coding (HEVC) and other standards for volumetric medical signals. Spelic and Zalik (2012) proposed a novel algorithm, termed the segmented voxel compression algorithm, to compress 3-D CT images and effectively transmit graphical data acquired from a CT scanner. The paper describes how the Hounsfield scale is first used to segment the medical data, after which compression is applied. A prototype system is used to evaluate the efficiency of the proposed algorithm. Anusuya et al. (2014) presented a novel lossless codec using an entropy coder to compress 3-D brain images. In this work, the MRI modality was used to analyze the efficiency of the proposed algorithm, and the work focuses on reducing computation time through parallel processing. Xiao et al. (2016) introduced the Integer Discrete Tchebichef Transform to compress a variety of images without any loss of information. The proposed method is based on the factorization of the N x N Discrete Tchebichef Transform into N + 1 single-row elementary reversible matrices with minimal rounding errors. The proposed technique achieves integer-to-integer mapping for effective lossless compression.
In this paper, the medical modalities used to evaluate the results are CT and MRI, and it is concluded that the proposed algorithm achieves a higher compression ratio than iDCT. Amri et al. (2017) introduced two lossless compression techniques, termed wREPro.TIFF (watermarked Reduction/Expansion Protocol combined with the TIFF format) and wREPro.JLS (wREPro combined with the JPEG-LS format). The presented methods are used to reduce image size with encoding algorithms for lossless compression. The proposed approach preserves image quality at high compression rates and also provides several improvements over the conventional JPEG image compression standard. Ramesh and Shanmugam (2010) described the Wavelet Decomposition Prediction Method for lossless compression of medical images. In this approach, the prediction equation for each sub-band is based on correlation analysis. Experimental results show that the proposed approach gives a higher compression rate compared with SPIHT and the JPEG2000 standard. Ibraheem et al. (2016) introduced two new lossless compression methods based on a logarithmic algorithm. The proposed approaches give improved image quality compared with the conventional DWT. Avramovic and Banjac (2012) presented a novel lossless compression method using a simple context-based entropy coder. The approach relies on prediction to remove spatial redundancy in images and effectively compresses images without any loss of information. It is concluded that the proposed approach achieves performance comparable to other standardized algorithms on typical images. Bairagi (2015) exploited the idea of symmetry for the compression of medical images. The presented approach is lossless and able to remove redundant information from the image effectively and efficiently.
In this work, the reported idea is combined with existing methods to draw conclusions. Zuo et al. (2015) introduced an improved medical image compression approach, IMIC-ROI, to compress medical images effectively and efficiently. The proposed technique is based on the concept of ROI and non-ROI regions. The presented approach achieves a higher compression ratio and good values of GSM and the structural similarity index (SSIM) compared with other conventional techniques. Srinivasan et al. (2011) described a coder for the effective compression of the electroencephalograph (EEG) signal matrix. The described mechanism consists of two stages: a lossy coding layer (SPIHT) and a residual coding layer (arithmetic coding). It is concluded that the two-stage compression scheme is effective; preprocessing provides a 6% improvement, and the second stage yields a further 3% improvement in compression. Taquet and Labit (2012) reported a hierarchical oriented prediction approach for scalable lossless and near-lossless compression of medical images. The proposed approach is best suited to near-lossless compression, since it gives a slightly better or equivalent PSNR at high bit rates compared with the JPEG 2000 standard.
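Several of the lossless schemes surveyed above (e.g., Cyriac and Chellamuthu; Arif et al.) build on run-length encoding before an entropy-coding stage. The following is a minimal, illustrative sketch of run-length encoding and decoding, not any particular author's codec:

```python
def rle_encode(data):
    """Encode a sequence as (value, run_length) pairs."""
    if not data:
        return []
    runs = []
    current, count = data[0], 1
    for value in data[1:]:
        if value == current:
            count += 1
        else:
            runs.append((current, count))
            current, count = value, 1
    runs.append((current, count))
    return runs

def rle_decode(runs):
    """Expand (value, run_length) pairs back to the original sequence."""
    out = []
    for value, count in runs:
        out.extend([value] * count)
    return out

# A row of pixel values with long runs compresses to a short list of pairs;
# in the surveyed schemes these pairs would then be Huffman-coded.
row = [0, 0, 0, 255, 255, 0, 0, 0, 0]
encoded = rle_encode(row)
assert rle_decode(encoded) == row
```

Because decoding exactly reverses encoding, the pipeline is lossless, which is why run-length coding pairs naturally with Huffman coding in ROI-based medical schemes.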
Lossy Compression Techniques
Bruylants et al. (2015) introduced a novel wavelet-based framework that supports JPEG 2000 and its volumetric extension. The presented approach enhances the performance of JPEG2000 for volumetric medical image compression. In this work, the generic codec framework, directional wavelet transforms, and a generic intraband prediction mode are tested over a wide range of compression settings for volumetric compression. Three medical modalities (CT, MRI, and US) are considered to determine the efficiency of the proposed approach. Ayoobkhan et al. (2017) introduced PE-VQ, a novel method for the lossy compression of medical images. In this approach, artificial bee colony and genetic algorithms are used to construct an optimal codebook, and prediction error and vector quantization concepts are combined for effective compression of the images. The proposed technique achieves a higher PSNR for a given compression ratio compared with other algorithms. Rufai et al. (2013) described a novel lossy compression technique for medical images based on Singular Value Decomposition (SVD) and Huffman coding. Simulation results show that the reported approach gives better quantitative and visual results than other conventional methods such as Huffman coding and JPEG2000. Selvi and Nadarajan (2017) proposed a two-dimensional (2-D) lossy compression method for MRI and CT images based on the Wavelet-Based Contourlet Transform and the Binary Array Technique. It is concluded that the proposed approach requires less processing time and produces accurate output in comparison with existing wavelet-based set partitioning in hierarchical trees and embedded block coders.
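The SVD stage of approaches like Rufai et al.'s can be illustrated by rank truncation: keeping only the largest singular values of the image matrix. This sketch shows only that core idea; the Huffman-coding stage of their pipeline is omitted, and the data here are synthetic:

```python
import numpy as np

def svd_compress(image, rank):
    """Keep only the largest `rank` singular values of the image matrix."""
    u, s, vt = np.linalg.svd(image, full_matrices=False)
    # Reconstruct from the top-`rank` factors only (lossy approximation).
    return u[:, :rank] @ np.diag(s[:rank]) @ vt[:rank, :]

rng = np.random.default_rng(0)
img = rng.random((64, 64))
approx = svd_compress(img, rank=8)
# Storing the rank-8 factors needs 8*(64 + 64 + 1) values instead of 64*64,
# at the cost of a controlled reconstruction error.
```

Raising the rank lowers the reconstruction error monotonically, which gives a simple quality/size knob for lossy compression.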
Sriraam and Shyamsunder (2011) presented a 3-D wavelet encoder approach to compress 3-D medical images. The reported approach works in two stages: first, encoding is performed with one of four wavelet transforms (Daubechies 4, Daubechies 6, Cohen-Daubechies-Feauveau 9/7, and Cohen-Daubechies-Feauveau 5/3), and then coding is carried out with 3-D SPIHT, 3-D SPECK, or 3-D BISK. Hosseini and Naghsh-Nilchi (2012) described contextual vector quantization for medical image compression. The ultrasound modality is used to simulate the analysis and conclude the results. The proposed approach achieves a higher compression ratio and PSNR than other conventional algorithms (JPEG, JPEG2K, and SPIHT). Bairagi et al. (2013) reported a texture-based approach to compress medical images effectively; the reported mechanism addresses visual quality rather than pixel-wise fidelity. Prabhu et al. (2013) introduced the 3-D Warped Discrete Cosine Transform (WDCT) for the effective compression of MRI images. The presented approach builds on the 2-D WDCT concept, and an image coding scheme based on the 3-D WDCT is used for large datasets.
Hybrid Compression Techniques
Mofreh et al. (2016) reported LPC-DWT-Huffman, a novel image compression method to improve the compression rate. The reported technique is a combination of LPC-Huffman and DWT-Huffman, and it gives a higher compression rate compared with Huffman and DWT-Huffman coding. Raza et al. (2012) introduced a hybrid lossless compression technique for medical image sequences. The reported technique achieves an improved compression rate compared with other existing methods. Eben Sophia and Anitha (2017) proposed an improved context-based compression technique for medical images based on the concepts of wavelet transform, normalization, and prediction. The proposed approach achieves good image quality relative to the original image for the selected contextual region, and it performs better both quantitatively and subjectively. Parikh et al. (2017) described the use of HEVC for medical image compression. In this work, three medical modalities (MRI, CT, and CR) are used to process the test results, which are compared with JPEG2000. The presented method shows an increase in compression performance of 54% compared with JPEG2000. Somassoundaram and Subramaniam (2018) reported a hybrid approach in which a 2-D bi-orthogonal multiwavelet transform and a SPECK-Deflate encoder are used. The main purpose of this approach is to reduce transmission bandwidth by compressing the medical data. The proposed approach achieves a higher compression ratio than other conventional algorithms. Haddad et al. (2017) proposed a novel joint watermarking scheme for medical images, combining JPEG-LS and bit-substitution watermarking modulation.
The proposed technique provides equivalent watermarked images with high security benefits compared with other methods. Perumal and Rajasekaran (2016) introduced a hybrid algorithm, DWT-BP, for medical image compression. In this paper, the authors consider DWT coding, a Back Propagation Neural Network, and the hybrid DWT-BP to analyze the performance of the presented approach. It is concluded that the proposed hybrid method gives a better compression ratio and achieves a higher PSNR. Karthikeyan and Thirumoorthi (2016) described the Sparse Fast Fourier Transform, a hybrid technique for medical image compression. The authors also compare the proposed technique with three other compression methods: the Karhunen-Loeve Transform, the Walsh-Hadamard Transform, and the Fast Fourier Transform. The proposed method gives improved and efficient results on all evaluation measures in comparison with the other techniques. Thomas et al. (2014) reported a hybrid image compression approach for medical images using both lossy and lossless mechanisms for telemedicine applications. The proposed hybrid method achieves a higher compression ratio with little loss of information through the effective use of arithmetic entropy coding. Vaishnav et al. (2017) proposed a novel hybrid method for the lossy and lossless compression of medical images, combining the dual-tree wavelet transform with arithmetic coding. The proposed approach is more effective and efficient than other conventional algorithms such as DWT and SPIHT, and achieves a higher PSNR and compression ratio. Rani et al. (2018) reported a novel hybrid technique for medical image compression.
The reported approach is based on the Haar Wavelet Transform (HWT) and Particle Swarm Optimization (PSO), and it achieves a higher compression ratio and PSNR. Jiang et al. (2012) introduced a hybrid algorithm for medical image compression. The main goal of the proposed approach is to compress the diagnostically relevant information with a high compression ratio. The proposed approach achieves a good PSNR and an effective running time compared with the other described algorithms. Sanchez et al. (2010) described a new mechanism for the compression of 3-D medical images with volume of interest (VOI) coding. The proposed approach achieves better reconstruction quality compared with 3-D JPEG2000 and MAXSHIFT with VOI coding. Hsu (2012) proposed a mechanism to segment the tumor from a mammogram with the help of an improved watershed transform using prior information. The goal of the proposed mechanism is to compress the mammogram efficiently without compromising the quality of the required region. Experimental results show that the proposed mechanism performs effectively and efficiently for mammogram compression.
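The HWT stage used by several hybrid schemes above can be sketched as a single-level 2-D Haar transform, which splits an image into averages (approximation) and differences (detail). This is an illustrative implementation only; the PSO parameter search and entropy coding of the reported pipelines are not reproduced:

```python
import numpy as np

def haar2d(block):
    """One level of the 2-D Haar transform (averages and details)."""
    # Rows: pairwise average (low-pass) and difference (high-pass), scaled by 1/2.
    lo = (block[:, 0::2] + block[:, 1::2]) / 2.0
    hi = (block[:, 0::2] - block[:, 1::2]) / 2.0
    rows = np.hstack([lo, hi])
    # Columns: the same split applied to the row-transformed data.
    lo = (rows[0::2, :] + rows[1::2, :]) / 2.0
    hi = (rows[0::2, :] - rows[1::2, :]) / 2.0
    return np.vstack([lo, hi])

x = np.arange(16, dtype=float).reshape(4, 4)
coeffs = haar2d(x)
# Top-left quadrant holds the low-resolution approximation; the other three
# quadrants hold detail coefficients that are typically small and compress well.
```

For smooth medical images most detail coefficients are near zero, which is what the subsequent quantization/entropy-coding stages exploit.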
Some Advanced Image Compression Techniques
Vector Quantization (VQ)
VQ is one of the advanced lossy data compression techniques. Vector quantization is an efficient coding technique for quantizing signal vectors. It has been widely used in signal and image processing, for example, in pattern recognition and in speech and image coding. A VQ compression system has two main steps: codebook training (sometimes also referred to as codebook generation) and coding (i.e., code vector matching). In the training step, similar vectors in a training sequence are grouped into clusters, and each cluster is assigned a single representative vector called a code vector. In the coding step, each input vector is compressed by replacing it with the nearest code vector, referenced by a simple cluster index. The index (or address) of the matched code vector in the codebook is then transmitted to the decoder over a channel and is used by the decoder to retrieve the same code vector from an identical codebook; this is the reconstructed reproduction of the corresponding input vector. Compression is thus obtained by transmitting the index of the code vector rather than the entire code vector itself.
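The codebook-training step described above can be sketched as a k-means-style clustering in the spirit of the LBG algorithm: assign training vectors to their nearest code vector, then move each code vector to the mean of its cluster. The data and parameters here are illustrative only:

```python
import numpy as np

def train_codebook(training, K, iters=20):
    """Group training vectors into K clusters; return the cluster centroids."""
    # Seed the codebook with K vectors spread through the training set.
    codebook = training[:: len(training) // K][:K].astype(float)
    for _ in range(iters):
        # Assign each training vector to its nearest code vector
        # (squared Euclidean distance).
        d = ((training[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        nearest = d.argmin(axis=1)
        # Move each code vector to the mean of its cluster.
        for k in range(K):
            members = training[nearest == k]
            if len(members):
                codebook[k] = members.mean(axis=0)
    return codebook

# Two well-separated groups of 2-D training vectors yield two code vectors.
training = np.vstack([np.zeros((50, 2)), np.ones((50, 2))])
codebook = train_codebook(training, K=2)
```

Each iteration can only reduce the total within-cluster distortion, so the codebook converges toward representative vectors for the training data.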
A vector quantizer maps k-dimensional vectors in the vector space R^k into a finite set of vectors Y = {y_i : i = 1, 2, 3, ..., N}. Each vector y_i is called a code vector or a codeword, and the set of all the codewords is called a codebook (Figure 6.3).
The amount of compression is described in terms of the rate, measured in bits per sample. Suppose we have a codebook of size K and the input vector is of dimension L. We need ceil(log2 K) bits to specify which of the code vectors was selected. The rate for an L-dimensional vector quantizer with a codebook of size K is therefore ceil(log2 K)/L bits per sample.
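The rate formula above can be checked with a small worked example; the codebook size and block dimension below are chosen purely for illustration:

```python
import math

def vq_rate(K, L):
    """Rate in bits per sample for an L-dimensional quantizer with K code vectors."""
    return math.ceil(math.log2(K)) / L

# A 256-entry codebook over 4x4 image blocks (L = 16 samples per vector):
# each block costs one 8-bit index, i.e. 8 / 16 = 0.5 bits per sample.
print(vq_rate(256, 16))
```

Compared with 8 bits per pixel for a raw grayscale image, this configuration gives a 16:1 reduction before any entropy coding of the indices.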
Highlights of Vector quantization:
- • Vector quantization was first proposed by Gray in 1984.
- • First, construct a codebook composed of code vectors.
- • For each vector being encoded, find the nearest vector in the codebook (determined by the Euclidean distance).
- • Replace the vector by the index in the codebook.
- • When decoding, look up the vector corresponding to the index in the codebook.
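The encode/decode steps listed above can be sketched as follows, using a tiny hand-made codebook for illustration (a real system would train the codebook first, e.g. with LBG/k-means):

```python
import numpy as np

codebook = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 1.0]])  # 3 code vectors

def vq_encode(vectors, codebook):
    """Replace each input vector by the index of its nearest code vector."""
    # Squared Euclidean distance from every input to every code vector.
    d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def vq_decode(indices, codebook):
    """Look the indices up in an identical codebook at the decoder."""
    return codebook[indices]

data = np.array([[0.1, 0.2], [0.9, 1.1], [0.1, 0.95]])
idx = vq_encode(data, codebook)    # only these indices are transmitted
recon = vq_decode(idx, codebook)   # reproduction vectors at the decoder
```

Only the indices cross the channel; the reconstruction error is the distance between each input vector and its chosen code vector, which codebook training aims to minimize.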
Applications of Vector Quantization
The vector quantization technique is used efficiently in various areas of biometrics, such as fingerprint pattern recognition and face recognition, by generating codebooks of the desired size.
FIGURE 6.3 Vector quantization working diagram.