III: Medical Image Processing and Other Healthcare Applications

Lossless Medical Image Compression Using Hybrid Block-Based Algorithm for Telemedicine Application

8.1 Introduction

The importance of image compression techniques has escalated with advances in communication technology. Reducing the number of bits needed to store and transmit an image, without any loss of information, is essential. Diagnosis of diseases with the help of DICOM (Digital Imaging and Communications in Medicine) images, and their storage, plays a vital role in the medical field, but these images demand large bandwidth. For medical applications, DICOM images must be transferred to diverse destinations. The principal features of medical images should be preserved during compression; achieving a high compression ratio (CR) while retaining the ability to decode compressed images at their original quality are the major issues in medical image compression techniques [1]. Moreover, the reconstructed image should contain little redundancy while preserving good human visual perception at the receiver end. Efficient compression therefore enables effective image storage and transmission in telemedicine, teleradiology, and real-time teleconsultation.

Advancements in image compression have been proposed in response to expanding requirements in the field of medical imaging. JPEG 2000 is one of the best image compression algorithms [2-4]; it utilizes a coding method called embedded block coding with optimized truncation (EBCOT) of embedded bitstreams. Information loss in the compression algorithm is entirely avoided [5]. It [6,7] achieves efficient coding in both lossy and lossless operation. The compression technique in [8-11] reduces power consumption and computational complexity by combining data extension and lifting-scheme methods. The integer wavelet transform (IWT) has been presented [12,13], mainly targeting the joint problem of optimal factorizations and the effects of finite-precision representation.
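The appeal of the IWT for lossless coding is that its lifting steps map integers to integers and are exactly invertible. A minimal sketch of this idea, using the integer Haar (S-) transform as an example; the function names are illustrative, not from the chapter:

```python
import numpy as np

def iwt_haar_forward(x):
    """Forward integer Haar (S-transform) via lifting: integer-to-integer
    and exactly invertible, so no information is lost. x: even-length ints."""
    x = np.asarray(x, dtype=np.int64)
    a, b = x[0::2], x[1::2]
    d = b - a               # predict step: detail coefficients
    s = a + (d >> 1)        # update step: approximation (floor division)
    return s, d

def iwt_haar_inverse(s, d):
    """Undo the lifting steps in reverse order, recovering the exact input."""
    a = s - (d >> 1)
    b = d + a
    out = np.empty(a.size + b.size, dtype=np.int64)
    out[0::2], out[1::2] = a, b
    return out
```

Because each lifting step is undone exactly, the round trip reproduces the original pixel values bit-for-bit, which is the property lossless medical compression requires.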

In [14,15], the IWT implementation leads to adequate lossless and lossy compression performance. A novel compression algorithm proposed in [16] achieves a high CR and low latency. The medical image was split into nonoverlapping blocks according to its nature, and a first-order polynomial representation was applied to discard the redundancy between neighboring pixels, thereby achieving a better compression rate [17].
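The block-based first-order prediction idea can be sketched as follows; the 8×8 block size, raster-order predictor, and function name here are illustrative assumptions, not the exact scheme of [16,17]:

```python
import numpy as np

def block_residuals(img, bs=8):
    """Split an image into non-overlapping bs x bs blocks and replace each
    pixel by its residual against the previous pixel in raster order (a
    first-order model); the first pixel of each block is kept as an anchor.
    The mapping is exactly invertible, so coding the residuals is lossless."""
    h, w = img.shape
    res = img.astype(np.int32).copy()
    for i in range(0, h, bs):
        for j in range(0, w, bs):
            block = img[i:i+bs, j:j+bs].astype(np.int32)
            flat = block.ravel()
            r = np.diff(flat, prepend=flat[0])  # neighbor differences
            r[0] = flat[0]                      # keep the anchor pixel
            res[i:i+bs, j:j+bs] = r.reshape(block.shape)
    return res
```

Since neighboring medical-image pixels are highly correlated, the residuals cluster near zero and entropy-code far more compactly than the raw intensities.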

The techniques used in [18] consume low power by combining a data-extension procedure with the lifting-based discrete wavelet transform (DWT) core in an embedded extension algorithm. Analysis of [19] indicates that high compression performance is achieved with outstanding reconstruction. Compression techniques [20] extract quantitative information and present the original image content to human viewers with a reduced amount of data. The selective image compression technique [21] compresses regions of interest in a lossless manner, whereas image regions containing irrelevant information are compressed in a lossy manner.

Classical vector quantization (VQ) [22] with a stationary wavelet transform was used for volumetric image compression. For entropy coding, Huffman and arithmetic coding are used. The stationary wavelet transform generates efficient results when compared with the discrete wavelet transform, lifting transform, and discrete cosine transform [23,24]. The efficiency of the VQ compression technique was improved by the incorporation of a fuzzy S-tree [25]. Compression quality was enhanced when the curvelet transform was coupled with the VQ algorithm [26,27].
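The core VQ step, mapping each block vector to its nearest codeword and transmitting only the index, can be sketched as below; the function names and Euclidean distance metric are illustrative assumptions, not the exact pipeline of [22-27]:

```python
import numpy as np

def vq_encode(blocks, codebook):
    """Map each flattened image block to the index of its nearest codeword
    (squared Euclidean distance). Only these indices need be transmitted."""
    d = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def vq_decode(indices, codebook):
    """Reconstruct blocks by codebook lookup; lossy unless every block
    happens to coincide with a codeword."""
    return codebook[indices]
```

Because plain VQ is lossy, the hybrid schemes above pair it with transforms or entropy coders, or restrict it to regions where exact reconstruction is not required.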

The wavelet transform was coupled with a prediction model, producing robust results for CT/MR and US images [28]. A hybrid compression model comprising VQ with the Artificial Bee Colony (ABC) and Genetic Algorithm (GA) was used for optimum codebook selection [29]. A lossless prediction model with the wavelet transform was proposed in [30,31]; it produces efficient results when compared with the JPEG standard.

The authors compared the performance of spatial tree partitioning (STW) and set partitioning in hierarchical trees (SPIHT) in terms of mean square error (MSE), peak signal-to-noise ratio (PSNR), CR, and file size for decomposition levels ranging from 1 to 8. Results show that STW outperforms SPIHT [13,32,33]. The amalgamation of context and hyper-prior models was compared with JPEG, JPEG2000, and BPG to determine rate-distortion performance on the Kodak dataset; the combined model provides good rate-distortion behavior [34].
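The metrics used in such comparisons are simple to compute. A small sketch with illustrative function names, assuming 8-bit images for the PSNR peak value:

```python
import numpy as np

def mse(orig, rec):
    """Mean square error between original and reconstructed images."""
    diff = orig.astype(np.float64) - rec.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(orig, rec, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for a perfect (lossless)
    reconstruction, since the MSE is then zero."""
    m = mse(orig, rec)
    return float('inf') if m == 0 else 10.0 * np.log10(peak ** 2 / m)

def compression_ratio(original_bits, compressed_bits):
    """CR = uncompressed size / compressed size."""
    return original_bits / compressed_bits
```

For a lossless codec the PSNR is infinite by definition, so comparisons between lossless schemes hinge on CR (or, equivalently, bits per pixel) rather than distortion.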

Both the KITTI Stereo and KITTI General datasets have been utilized for deep image compression that focuses mainly on decoder-side information and yields state-of-the-art performance, measured by the Pearson correlation score at various bits per pixel [35]. A deep convolutional autoencoder (CAE) was demonstrated on the Kodak image set to achieve excellent PSNR; moreover, the CAE is followed by principal component analysis (PCA) to enhance coding efficiency [36]. Deep learning is a prominent class of machine learning algorithms and plays a vital role in the classification [37,38] and compression of medical images [39]. The simulated annealing algorithm was used for codebook selection in contextual vector quantization (CVQ); although lossy, it yields fruitful results in terms of reconstructed-image quality when compared with VQ, CVQ, and classical compression algorithms [40]. The least-squares-based prediction algorithm is a lossless technique that yields efficient results when compared with JPEG lossless, CVQ, and BAT-VQ algorithms [41].

This chapter is organized as follows. Section 8.2 explains the technical background, namely the wavelet transform and the Hadamard transform. Section 8.3 describes the materials and methods in detail. Section 8.4 compares the IWT-lossless Hadamard transform (LHT)-Huffman, IWT-LHT-arithmetic, and JPEG lossless encoding techniques. Finally, Section 8.5 concludes the chapter.
