HWCD: A hybrid approach for image compression using wavelet, encryption using confusion, and decryption using diffusion scheme

Abstract: Image data play an important role in various real-time online and offline applications. The biomedical field has adopted imaging systems to detect, diagnose, and prevent several types of diseases and abnormalities. Biomedical imaging data contain a large amount of information, which requires huge storage space. Moreover, telemedicine and IoT-based remote health monitoring systems are now widely deployed, in which data are transmitted from one place to another. Transmission of such large volumes of data consumes considerable bandwidth. In addition, during transmission, attackers can target the communication channel and obtain important and secret information. Hence, biomedical image compression and encryption are considered the solution to these issues. Several techniques have been presented, but achieving the desired performance for a combined module is a challenging task. Hence, in this work, a novel combined approach for image compression and encryption is developed. First, an image compression scheme using the wavelet transform is presented, and later a cryptography scheme is presented using confusion and diffusion schemes. The outcome of the proposed approach is compared with various existing techniques. The experimental analysis shows that the proposed approach achieves better performance in terms of autocorrelation, histogram, information entropy, PSNR, MSE, and SSIM.


Introduction
Nowadays, tremendous growth is noticed in the medical field. The current advancement in this field has led to appropriate diagnosis, improvement in the medical infrastructure of the country, and helped to overcome health-related issues. These advanced diagnosis systems use numerous techniques such as surgery, vaccination, medications, and many more. In order to perform the diagnosis, the medical experts require the data of patient, which can be in the form of text, image, or video. Currently, biomedical images are widely adopted by clinicians for diagnosis because these images are considered as a source of rich medical information. These medical images are obtained by using various imaging technologies such as radiographs, ultrasound, X-ray, Magnetic Resonance Imaging (MRI), Computed Tomography (CT), Photoacoustic imaging, and many others [1].

Image compression techniques
This section describes various existing techniques of medical image compression. Bruylants et al. [25] reported that JPEG 2000 is a promising technique for DICOM image compression; however, its performance can be improved for volumetric medical image compression. In order to take advantage of JPEG 2000, the authors presented a novel framework based on a generic coding framework. This framework supports the JP3D volumetric extension along with different types of wavelet transforms and intra-band prediction modes. Zuo et al. [26] focused on both lossless and lossy image compression. It is well known that lossy compression schemes lose some data, which is not suitable for medical images, whereas lossless schemes preserve the data but have low compression rates. Hence, the authors took advantage of both schemes and presented a region of interest (ROI)-aided compression scheme where the ROI part of the image is compressed using a lossless approach and the non-ROI region is compressed using a wavelet-based lossy compression scheme.
Lone [27] reported that conventional lossless compression techniques suffer from computational complexity and require large memory to encode and decode the data. Lossless compression schemes at lower bitrates also suffer from this computational complexity issue. To overcome these issues, the author developed a coding algorithm which is efficient in compression, computational complexity, and memory. This coding scheme is similar to the wavelet block tree coding algorithm, which uses a spatial orientation block tree coding approach. In this scheme, the image is divided into 2 × 2 blocks which are processed through the block tree coding to obtain the redundancy information in the subband. Song et al. [28] presented a novel scheme based on irregular segmentation and region-based prediction to improve the performance of lossless compression. The first phase of this approach introduces a new scheme for adaptive irregular segmentation, obtained by combining geometry-adaptive and quadtree partitioning. Later, least-squares predictors are designed for the different regions and sub-blocks. Along with spatial correlation, this scheme utilizes local structure similarity to improve the reconstruction. Geetha et al. [29] reported that vector quantization (VQ) is widely adopted for image compression. Linde-Buzo-Gray (LBG) is the most widely used type of VQ, which compresses the image by constructing a locally optimal codebook. In this work, the authors considered codebook construction as an optimization problem which is solved using a bio-inspired optimization technique.

Image encryption techniques
Hua et al. [19] presented a new approach for medical image encryption. This approach mainly consists of three phases: first, random data bits are generated and padded to the image; in the next step, high-speed scrambling along with pixel-adaptive diffusion is performed, which shuffles the neighboring pixels and spreads the padded data over the entire image. The pixel-adaptive diffusion uses two types of operations: bitwise XOR, which has higher efficiency on hardware platforms, and modulo arithmetic, which achieves faster speed in software applications. Nematzadeh et al. [20] adopted an optimization scheme and presented a combination of a modified genetic algorithm and coupled map lattices for medical image encryption. In the first phase, this scheme applies the coupled map lattice to obtain secure cipher-images which are used as the initial population for the genetic algorithm. In the next phase, the modified genetic algorithm (GA) is applied to enhance the entropy and minimize the computation time. Belazi et al. [21] developed an encryption method based on the combination of chaos and deoxyribonucleic acid (DNA)-based encryption for medical images. This scheme performs two encryption rounds driven by key generation, permutation, substitution, and diffusion operations. In addition, the secure hash algorithm (SHA)-256 hash function is applied to generate the secret key. The approach is composed of six stages including permutation, substitution, encoding, decoding, and diffusion. Kumar et al. [22] introduced a novel approach using a chaotic map with the help of fractional discrete cosine transform (FrDCT) coefficients. The complete approach is divided into two phases: first, the FrDCT is applied to the image to generate the chaotic map, and later the FrDCT coefficients are obtained. Amirtharajan et al.
[23] developed a hybrid scheme where chaotic maps are generated for the DICOM image by using the integer wavelet transform and are later fused with a DNA sequence in the spatial domain. A 3D Lorenz attractor helps to generate the chaotic map, and logistic maps help to generate the keys for encryption. Similar to the scheme reported in ref. [21], this scheme consists of several steps such as substitution, permutation, encoding, and decoding. Ding et al. [24] developed a deep-learning-based image encryption and decryption network. In this scheme, a cycle-generative adversarial network is applied to learn the mapping from the original image to a target domain, which realizes the encryption mechanism. The encrypted output image is stored in text form and processed through a reconstruction network to obtain the decrypted output.

Combined compression and encryption
This section discusses combined techniques of image compression and encryption for biomedical images. Raja [30] focused on the development of a combined framework which uses a public key cryptography scheme for security and an encoding scheme for the compression of medical images stored in the cloud. Multiscale transform techniques are discussed for compression: the wavelet transform offers appropriate localization of data in the time and frequency domains, the curvelet transform can handle discontinuities along curves, the bandlet transform helps to capture geometric regularity, and the contourlet transform helps to obtain smooth contours and edge information at different orientations. For encryption, the author adopted the RSA algorithm. Ghaffari [31] introduced a joint compression-encryption technique using 2D sparse representation and a chaotic system. In the first phase, the input image is extended in the transform domain, which is used to obtain the sparse representation. Further, this sparse representation is scrambled with the help of chaotic confusion. This scrambling step ensures better security and improves the sparse recovery. Further, singular value decomposition is applied for compression, and an XOR operation is performed to obtain the final encrypted data. Gan et al. [32] reported that the information entropy of compressed-sensing-based schemes is lower than 7, which makes them vulnerable to entropy attacks. In order to overcome this issue, the authors developed a compressed sensing and game of life (GOL)-based scheme. In the first phase, GOL-based scrambling is applied to shuffle the coefficient matrix of the plain image, and a permutation matrix is formulated based on the rules of GOL. This matrix helps to reduce the pixel correlation and improves the scrambling. In the next phase, the confused matrix is processed and compressed with the help of compressed sensing, and it is further diffused by using a key matrix to generate the cipher image.
Moreover, a 5D memristive hyperchaotic system is also used to generate the chaotic sequence. Zhang et al. [33] developed a joint scheme for image compression and encryption based on compressive sensing and Fourier transform. This scheme uses a chaos system and two-dimensional fractional Fourier transform to perform the encryption.

The proposed model
In this section, the proposed joint solution for image compression and encryption for biomedical images is presented. The first phase of this section describes the proposed image compression module and second phase presents the proposed solution for encryption.

Overview of the proposed model
This approach is mainly focused on minimizing the data storage requirement by using image compression and improving the security by incorporating hashing and encryption mechanisms. The overall architecture of this approach is depicted in Figure 1. First, image compression is performed by using the forward lifting wavelet scheme, which generates approximation and detailed coefficients. These coefficients are processed through the Huffman encoding phase, which generates the binary encoded sequence. Then, considering this encoded sequence, the proposed encryption standard is applied, which includes the SHA-256 algorithm for hash generation, chaotic map generation, and random sequence generation. At this stage, the compressed, encoded, and encrypted data are obtained, which can be transmitted over a wireless channel. After receiving these data at the receiver end, data decryption, Huffman decoding, and image decompression steps are applied to reconstruct the image data.
Lifting wavelet transform: this is a type of wavelet transform, also known as the second-generation wavelet transform, used for designing wavelets and performing the DWT. This scheme factorizes the DWT into a series of elementary convolution operations, called lifting steps. This simplifies the computation and reduces the number of arithmetic operations by roughly a factor of two.
Huffman coding: it is a well-known data compression scheme which is used to reduce the size of data without losing any bit or detail of the data. It generates a tree based on the occurrence of frequencies of symbols and then generates code for each character.
SHA-256 hash algorithm: the SHA-256 algorithm is part of the SHA-2 family of algorithms. It produces a 256-bit digest for input data of arbitrary length.
Chaotic map: it is a model based on chaos theory which is used to perform cryptographic operations. During this process, repeated rounds of encryption help to obtain the desired diffusion and spread the initial region over the entire space (Table 1).

Image compression
In this section, the proposed image compression scheme is described. The proposed scheme is a hybrid of linear predictive coding (LPC), the discrete wavelet transform, and Huffman coding. These encoding and decoding schemes help to achieve lossless compression. According to this scheme, first, the input image is processed through LPC [34], which generates a coded image that is given as input to the DWT. In the next stage, the wavelet-decomposed image is processed through Huffman coding, where zigzag scanning (as used in DCT-based coders) is applied. This step generates the compressed data. Later, the compressed data are processed through Huffman decoding, the inverse DWT, and inverse LPC to reconstruct the image.
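As a concrete illustration of the predictive-coding stage, the sketch below implements a minimal first-order predictor that codes each pixel as the difference from its left neighbour, so values concentrate near zero before wavelet and Huffman coding. The actual predictor of ref. [34] is not specified here, so this is an illustrative sketch rather than the paper's exact method:

```python
import numpy as np

def lpc_encode(img):
    """Left-neighbor predictive coding: keep the first column as-is and
    store horizontal differences for the remaining columns."""
    img = img.astype(np.int16)        # widen so differences cannot overflow
    residual = img.copy()
    residual[:, 1:] = img[:, 1:] - img[:, :-1]
    return residual

def lpc_decode(residual):
    """Invert the predictor by cumulative summation along each row."""
    return np.cumsum(residual, axis=1).astype(np.int16)

img = np.array([[10, 12, 11],
                [200, 198, 199]], dtype=np.uint8)
res = lpc_encode(img)
rec = lpc_decode(res)
assert np.array_equal(rec, img.astype(np.int16))   # lossless round trip
```

Because the residuals are small integers clustered around zero, the subsequent entropy coder can assign them short codes.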

Wavelet transform
In this subsection, the implementation of the wavelet transform for the image compression phase is described. The wavelet transform decomposes the image into four sub bands: HH, HL, LL, and LH. The DWT-based scheme uses the Haar filter in the lifting scheme with type-I filters, where h1 denotes the filter coefficient for the prediction phase and h2 denotes the filter coefficient for the update phase of the lifting scheme. The first phase of the wavelet transform is the forward lifting phase. Figure 2 shows the architecture of the forward lifting scheme, which contains split, predict, and update operations. The split operation divides the input samples into even and odd sets, the predict step replaces each odd element with its deviation from the predicted value, and the update step computes the average and replaces the existing even elements with the obtained average. The correlation between the odd and even samples is thereby exploited: the difference between the original and predicted samples is known as the wavelet (detail) coefficient, and this process is called the lifting scheme.
In the next step, the update phase is applied, where the even sample values are updated based on the odd samples. This generates the scaling coefficients, which are passed to the next stage for further processing. In the lifting notation, P denotes the prediction operator and U denotes the update operator. After finishing these steps, the odd elements are replaced by the differences and the even elements by the average values. This approach yields integer coefficients, which makes the transform reversible. Similarly, the reverse lifting scheme is applied to reconstruct the original signal, i.e., the inverse wavelet transform. The reverse operation has three steps: update, predict, and merge. Figure 3 shows the architecture of the reverse lifting scheme.
Figure 2: Forward lifting scheme.
In this process, the number of samples is reduced by a factor of two at each stage, and the final step produces a single output. In this phase, the input signal is processed through the low-pass and high-pass filters simultaneously, and the output data are down-sampled in each stage. The complete process is repeated until a single output is generated by combining the four output bands.
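The split, predict, and update steps described above can be sketched with the integer Haar lifting scheme. The exact type-I filter coefficients h1 and h2 are not reproduced in the text, so the difference predictor and rounded-average update below are standard assumptions for the Haar case:

```python
import numpy as np

def haar_lift_forward(x):
    """One level of integer Haar lifting: split, predict, update."""
    even, odd = x[0::2].copy(), x[1::2].copy()
    d = odd - even            # predict: detail = odd minus its prediction
    s = even + (d >> 1)       # update: approximation = rounded pair average
    return s, d

def haar_lift_inverse(s, d):
    """Undo update, then predict, then merge even/odd samples."""
    even = s - (d >> 1)
    odd = d + even
    x = np.empty(even.size + odd.size, dtype=even.dtype)
    x[0::2], x[1::2] = even, odd
    return x

x = np.array([5, 7, 3, 4, 10, 10], dtype=np.int32)
s, d = haar_lift_forward(x)
assert np.array_equal(haar_lift_inverse(s, d), x)   # perfectly reversible
```

Because only integer additions, subtractions, and shifts are used, the transform is exactly invertible, which is what makes the compression stage lossless.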

Huffman coding
In this section, Huffman coding for image compression is discussed. Huffman coding is a widely adopted technique for lossless data compression. Mainly, this scheme assigns variable-length codes to the input data, where the code length depends on the frequencies of characters in the input data stream. Generally, characters with high-frequency occurrence are assigned short codes and characters with low occurrence are assigned longer codes. The variable-length codes are known as prefix codes: no code assigned to one character is a prefix of the code of any other character. This ensures there is no ambiguity while decoding the data. The Huffman coding scheme contains two main steps: construction of the tree and assignment of the prefix codes to the characters.
• Huffman encoding: In this phase, the Huffman tree is constructed from the unique characters and their frequencies of occurrence. For each unique character, a leaf node is built. All leaf nodes are inserted into a minimum heap, which serves as a priority queue. In each iteration, the two nodes with the minimum frequency are extracted from the heap and assigned as the left and right children of a new internal node, which is pushed back into the heap. This process is repeated until the heap contains only one node, which becomes the root of the tree.
• Huffman decoding: The Huffman tree, which contains the entire information about the characters, is considered.
At the receiver end, the receiver traverses this tree from the root for each incoming bit: a scan towards the left child corresponds to a "0" and a scan towards the right child corresponds to a "1"; reaching a leaf emits the corresponding decoded character.
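The tree construction and prefix-code assignment described above can be sketched as follows; the heap tie-breaking order is an implementation choice, not something fixed by the text:

```python
import heapq
from collections import Counter

def build_codes(data):
    """Build a Huffman code table: frequent symbols get shorter codes."""
    freq = Counter(data)
    # Heap entries are (frequency, tie-breaker, tree); a tree is either a
    # leaf symbol or a (left, right) pair of subtrees.
    heap = [(f, i, sym) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    if count == 1:                       # degenerate single-symbol input
        return {heap[0][2]: "0"}
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)    # two least-frequent nodes
        f2, _, right = heapq.heappop(heap)
        count += 1
        heapq.heappush(heap, (f1 + f2, count, (left, right)))
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):          # internal node: 0 left, 1 right
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:
            codes[node] = prefix
    walk(heap[0][2], "")
    return codes

codes = build_codes("aaaabbc")
# The most frequent symbol never gets a longer code than a rarer one.
assert len(codes["a"]) <= len(codes["b"]) <= len(codes["c"])
```

Because the two least-frequent nodes are merged first, rare symbols end up deepest in the tree and therefore receive the longest codes, which is exactly the prefix-code property described above.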

Image encryption
This section presents the proposed solution for image encryption using chaotic maps, permutations, and diffusion operations. Initially, the preliminaries of the system are described: a hash operation that maps data of arbitrary size to a fixed size, hyper-chaotic sequence generation, and random sequence generation for the encryption and decryption process.

Hash function
Hash function is a function which takes data of any arbitrary size and maps it to a structure of fixed-size elements. In this approach, the SHA-256 hash function is adopted to generate a 256-bit hash value V. This hash value is divided into 32 blocks of 8 bits each, so the i-th block can be represented as vi (i = 1, 2, ..., 32). Let us consider an input I of size m × n; an integer k is obtained using fix(·), which rounds its argument toward zero. With the help of this integer, a sequence H with 32k elements is produced by replicating the blocks (a repmat operation), which generates a large matrix H. From this matrix, two matrices are generated: one of size 2 × (n + 2), built from the first 2n + 4 elements of H, and another of size 2 × m, built by taking 2m elements of H.
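A minimal sketch of this hash-to-sequence construction is shown below. The exact formula for k and the offsets of the elements taken from H are only partially legible in the source, so k = fix(mn/32), the np.tile replication (a repmat analogue), and the slicing positions are assumptions:

```python
import hashlib
import numpy as np

m, n = 16, 16                    # example image dimensions (assumed)
img = np.arange(m * n, dtype=np.uint8).reshape(m, n)

digest = hashlib.sha256(img.tobytes()).digest()   # 256-bit hash value V
v = np.frombuffer(digest, dtype=np.uint8)         # 32 blocks of 8 bits each
assert v.size == 32

k = int(np.fix(m * n / 32))      # fix(.) rounds toward zero (assumed formula)
H = np.tile(v, k)                # repmat analogue: H has 32k elements
assert H.size == 32 * k

# Two auxiliary matrices carved out of H, as described in the text;
# the second slice offset is an assumption.
A = H[: 2 * (n + 2)].reshape(2, n + 2)                  # size 2 x (n + 2)
B = H[2 * (n + 2): 2 * (n + 2) + 2 * m].reshape(2, m)   # size 2 x m
```

Because SHA-256 is deterministic, any receiver holding the same input can regenerate H, A, and B exactly, which is what allows the later decryption stage to reproduce the key material.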

Chaotic map generation
The proposed approach uses a hyper-chaotic system and Chebyshev maps to accomplish the encryption process. A four-dimensional hyper-chaotic system with four initial conditions and four initial parameters is adopted, governed by the system parameters a, b, c, d, and e, whose values are 35, 3, 12, 7, and 0.085, respectively. Moreover, two Lyapunov exponents are incorporated in this chaotic system, with values 0.5 and 0.15 for exponents 1 and 2, respectively. The Chebyshev maps are parameterized by u1 and u2, which denote the initial values of the system.
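The Chebyshev-map part of this construction can be sketched as follows; the 4D hyper-chaotic ODE system is not reproduced here, and the map degree and initial value below are illustrative assumptions. The Chebyshev map iterates x_{n+1} = cos(k · arccos(x_n)) on [-1, 1] and is chaotic for degree k ≥ 2:

```python
import numpy as np

def chebyshev_sequence(u0, degree, length):
    """Iterate the Chebyshev map x_{n+1} = cos(degree * arccos(x_n)).
    u0 must lie in [-1, 1]; the orbit stays in [-1, 1]."""
    seq = np.empty(length)
    x = float(u0)
    for i in range(length):
        # clip guards against tiny floating-point excursions outside [-1, 1]
        x = np.cos(degree * np.arccos(np.clip(x, -1.0, 1.0)))
        seq[i] = x
    return seq

u1 = chebyshev_sequence(0.3, 4, 1000)
assert np.all((u1 >= -1.0) & (u1 <= 1.0))
# A tiny change in the initial value quickly decorrelates the orbits,
# which is the sensitivity that key-dependent sequences rely on.
u2 = chebyshev_sequence(0.3 + 1e-10, 4, 1000)
assert np.max(np.abs(u1 - u2)) > 0.1
```

This sensitivity to initial conditions is what makes the generated key streams unpredictable without the exact initial values u1 and u2.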

Algorithm 1. Image compression and securing process
Step 1: Input the image and simulation parameters.
Step 2: Obtain the wavelet bands of the input image.
Step 3: Identify the approximation and detailed coefficients.
Step 4: Apply Huffman encoding on the obtained coefficients.
Step 5: Generate the hash of these coefficients.
Step 6: Generate the chaotic map of the hashed data.
Step 7: Apply the encryption process as described in Algorithm 2.
Step 8: Initialize the reconstruction phase.
Step 9: Rearrange the random sequence.
Step 10: Rearrange the chaotic maps of the current image.
Step 11: Apply the de-hashing process to obtain the original information.
Step 12: Rearrange the wavelet bands to reconstruct the image.

Random sequence generation
In the chaotic sequence generation, the initial values of the chaotic system are used to generate four sequences X, Y, Z, and W. These are iterated to produce two further sequences U1 and U2. At this stage, the six chaotic sequences X, Y, Z, W, U1, and U2 are transformed into real-valued sequences, and these are further transformed into integer-valued sequences.

Here D1 is applied to scramble the images, and the other sequences are used for permutation and diffusion. The main idea of the proposed approach is to use a permutation- and diffusion-based approach for image encryption. The chaotic sequence helps to shuffle the pixels. The permutation does not affect the pixel values but establishes a complicated statistical relationship between the cipher and the key; thus, attackers cannot infer the data. Similarly, diffusion is a process in which the plaintext data affect the bits of the ciphertext, which helps to improve the sensitivity.
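A sketch of mapping a real-valued chaotic sequence to integers, and of using a sequence such as D1 for scrambling, is given below. The scaling constant 1e14 and the argsort-based permutation are common conventions assumed here rather than taken from the source:

```python
import numpy as np

def to_integer_sequence(real_seq, modulus=256):
    """Map a real-valued chaotic sequence to integers in [0, modulus)."""
    return np.floor(np.abs(real_seq) * 1e14).astype(np.int64) % modulus

def scramble(vec, real_seq):
    """Permute pixels by the sort order of a chaotic sequence (e.g. D1)."""
    perm = np.argsort(real_seq[: vec.size])
    return vec[perm], perm

def unscramble(scrambled, perm):
    """Invert the permutation to restore the original pixel order."""
    inv = np.empty_like(perm)
    inv[perm] = np.arange(perm.size)
    return scrambled[inv]

rng = np.random.default_rng(0)       # stand-in for a chaotic sequence
d1 = rng.random(16)
pixels = np.arange(16)
shuffled, perm = scramble(pixels, d1)
assert np.array_equal(unscramble(shuffled, perm), pixels)
```

As the text notes, this permutation leaves pixel values untouched; only their positions change, so a matching diffusion step is still needed to alter the values themselves.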

Encryption process
This section presents the proposed encryption approach, which comprises three steps. First, the chaotic sequence generation module is applied; in the next phase, the permutation module is applied; and the third phase consists of the diffusion process. The complete approach is described as follows:

Algorithm 2. Image encryption
Step 1: Consider an input image of size m × n, denoted P. After processing through the hash function, the padded data P′ have size (m + 2) × (n + 2). These image data are then transformed into a one-dimensional vector P0.
Step 2: Generate the chaotic sequences D1, D2, S, V, and T of length l. These chaotic sequences are used for encryption.
Step 3: Perform the permutation operation on the image vector P0. In order to prevent attacks on the scrambling sequence, a factor g is incorporated into the scrambled sequence.
Step 4: Apply the combined confusion and diffusion steps to encrypt the element-wise data of the P0 vector.
Step 5: Encrypt the i-th element of the image vector data using the chained diffusion relation.
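The element-wise confusion and diffusion of the final steps can be illustrated with a minimal chained-diffusion sketch. The exact update rule and the role of the factor g are not fully legible in the source, so the key stream, the seed value g, and the modular update below are illustrative assumptions:

```python
import numpy as np

def encrypt(p0, key, g=73):
    """Chained diffusion: each cipher byte depends on the key stream and
    the previous cipher byte, so a one-pixel change propagates forward."""
    c = np.empty_like(p0)
    prev = g                          # seed plays the role of the factor g
    for i in range(p0.size):
        c[i] = (int(p0[i]) + int(key[i]) + prev) % 256
        prev = int(c[i])
    return c

def decrypt(c, key, g=73):
    """Invert the chained diffusion element by element."""
    p = np.empty_like(c)
    prev = g
    for i in range(c.size):
        p[i] = (int(c[i]) - int(key[i]) - prev) % 256
        prev = int(c[i])
    return p

key = (np.arange(16) * 7 % 256).astype(np.uint8)   # stand-in key stream
p0 = np.arange(16, dtype=np.uint8)
assert np.array_equal(decrypt(encrypt(p0, key), key), p0)  # round trip
```

Because each cipher byte feeds into the next, flipping a single plaintext pixel changes every subsequent cipher byte, which is the diffusion property the scheme relies on.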

Decryption process
In this section, the decryption process is presented to reconstruct the original image from the encrypted data. The complete decryption process is as follows:
Step 1: Initially, generate the required parameters: the chaotic sequences D1, D2, S, V, and T, along with the dynamic indexes kt1 and kt2.
Step 2: Transform the ciphertext image into a one-dimensional vector and obtain the intermediate ciphertext.
Step 3: Obtain the decrypted last element of P0. This process is repeated from the last pixel to the first pixel until all elements of the sequence P0 are obtained; P0 is then transformed into a matrix P′ of size (m + 2) × (n + 2). From this P′ matrix, the first row, last row, first column, and last column are removed to obtain the matrix P of size m × n.
Experimental analysis
In this section, the experimental analysis of the proposed combined approach of image compression and encryption is presented. The experiments are conducted using MATLAB and Python tools installed on the Windows platform. This approach is tested on biomedical images spanning different modalities such as ultrasound, CT, and MRI. For each modality, five images are considered. Ultrasound images are obtained from https://www.ultrasoundcases.info/cases/abdomen-and-retroperitoneum/, while CT and MRI images are obtained from https://www.osirix-viewer.com/. In order to evaluate the performance of the proposed approach, several analyses are performed, such as histogram analysis, autocorrelation of adjacent pixels, information entropy, number of pixels change rate (NPCR), unified average changing intensity (UACI), PSNR, and mean squared error (MSE).

System requirements
The proposed approach is simulated using the MATLAB simulation tool running on the Windows platform. The test machine has an NVIDIA graphics card with 4 GB of memory, 8 GB of RAM, 1 TB of storage, and an Intel i7 10th-generation processor.

Qualitative analysis by using histogram assessment
In this section, the histogram analysis of the proposed approach for different images is presented. The histogram of an image is a graphical representation of its tone distribution. Similarity between the histograms of encrypted images indicates better cryptography, whereas the histograms of different original images differ from each other. The histogram amplitudes of the original and reconstructed images show the deviation in quality introduced by reconstruction. Figure 4 depicts the histogram analysis of image encryption. Column 1 of Figure 4 shows the sample original images, which include three multimodal images, namely, ultrasound, MRI, and CT. Column 2 shows the histogram of each image, which illustrates a right-skewed distribution, and column 3 shows the encrypted image data. We also performed the histogram analysis on the encrypted images, as depicted in column 4 of Figure 4. The decrypted images and their corresponding histograms are presented in columns 5 and 6 of Figure 4, respectively. The histogram of the original image shows a right-skewed distribution, the histogram of the corresponding encrypted image shows a normal distribution, and the final reconstructed output shows a distribution similar to the original histogram.
Here chaotic map generation helps to randomize the distribution which ensures the robustness of the encryption. Similarly, the quality of reconstruction depends on the approximation and detailed coefficient fusion and reconstruction along with Huffman coding.

Quantitative analysis
Further, we perform the quantitative analysis by considering different sets of images from three modalities: ultrasound, MRI, and CT. Figure 5 shows the sample images of these modalities, which are used for autocorrelation and information entropy analysis.
Figure 5: Sample images used for quantitative analysis: 1st row ultrasound images, 2nd row MRI images, and 3rd row CT images.

Autocorrelation analysis
In this section, the autocorrelation analysis of adjacent pixels for different images is presented. For this analysis, horizontal (H), vertical (V), and diagonal (D) pixels are considered. The horizontal analysis considers horizontal pixel matrix, vertical analysis considers vertical pixel matrix, and diagonal analysis considers diagonal pixel matrix. For each analysis, five images from each category are considered. Table 2 shows a comparative analysis of autocorrelation of five different ultrasound images. The obtained performance is compared with the existing techniques such as GA, grey wolf optimization (GWO) and self-adaptive GWO [37].
The autocorrelation analysis shows the similarity between the original and reconstructed images. Although the proposed approach performs several steps, such as the lifting wavelet transform, Huffman encoding, SHA-256 hashing, and diffusion, the robustness of the lifting scheme helps to maintain the pixel quality, SHA-256 helps to secure the information and facilitates appropriate reconstruction, and the enhanced diffusion scheme helps to enhance the quality of the image by merging the detailed coefficients. Similarly, different CT images are considered, and the autocorrelation analysis of adjacent pixels is applied. The comparative analysis for CT images is presented in Table 3.
Further, the autocorrelation analysis for MRI images is performed. The obtained outcome is presented in Table 4.
The autocorrelation analysis shows that the proposed approach achieves better performance when compared with the existing techniques for each scenario of analysis including horizontal, vertical, and diagonal analysis.

Information entropy analysis
Here the information entropy analysis is presented for images of different modalities. Generally, the information entropy H(X) is a measurement of the uncertainty in the data. It can be computed as H(X) = -Σ_i p(x_i) log2 p(x_i), where X represents a discrete random variable and p(x_i) denotes the probability of occurrence of the symbol x_i. For an 8-bit image, the ideal entropy is 8. Table 5 shows the comparative analysis of information entropy.
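The entropy computation can be sketched directly from the formula above; a uniform 8-bit image attains the ideal value of 8 bits per pixel:

```python
import numpy as np

def information_entropy(img):
    """Shannon entropy H(X) = -sum p(x_i) log2 p(x_i) over pixel values."""
    counts = np.bincount(img.ravel(), minlength=256)
    p = counts[counts > 0] / img.size      # drop zero-probability symbols
    return float(-np.sum(p * np.log2(p)))

# A perfectly uniform 8-bit image reaches the ideal entropy of 8.
uniform = np.tile(np.arange(256, dtype=np.uint8), 256)
assert abs(information_entropy(uniform) - 8.0) < 1e-12

# A constant image carries no uncertainty at all.
constant = np.zeros(100, dtype=np.uint8)
assert information_entropy(constant) == 0.0
```

Entropy values of encrypted images close to 8 therefore indicate that the cipher output is statistically close to uniform noise.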

Differential analysis
Generally, image encryption schemes are sensitive to changes in the plain image, i.e., a minor change in the plain image affects the encryption output. Differential analysis is a process in which an attacker changes the plain image and regenerates the encrypted image. This analysis is performed by using the NPCR and the UACI. Let the cipher image obtained before modifying a pixel be denoted as C1 and the one obtained after the modification as C2. The NPCR and UACI can be computed as

NPCR = (Σ_{i,j} D(i, j) / (W × H)) × 100%
UACI = (1 / (W × H)) × Σ_{i,j} (|C1(i, j) − C2(i, j)| / 255) × 100%

where D represents a two-dimensional matrix of the same size as C1 and C2, with D(i, j) = 1 if C1(i, j) ≠ C2(i, j) and 0 otherwise, W denotes the width, and H denotes the height of the image. Table 6 shows the comparative analysis in terms of NPCR and UACI. The obtained performance is compared with the existing techniques.
The proposed SHA and diffusion-based schemes help to reconstruct the images efficiently and also these schemes ensure the prevention of different attacks.
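The two differential measures can be sketched as follows, following the standard NPCR/UACI definitions for 8-bit images:

```python
import numpy as np

def npcr(c1, c2):
    """Percentage of pixel positions that differ between two cipher images."""
    return float(np.mean(c1 != c2) * 100.0)

def uaci(c1, c2):
    """Mean absolute intensity difference, normalised by the 255 peak."""
    diff = np.abs(c1.astype(np.int16) - c2.astype(np.int16))
    return float(np.mean(diff) / 255.0 * 100.0)

c1 = np.array([[0, 255], [10, 20]], dtype=np.uint8)
c2 = np.array([[0, 0], [10, 30]], dtype=np.uint8)
assert npcr(c1, c2) == 50.0            # 2 of 4 positions differ
```

For a strong cipher, changing a single plaintext pixel should drive NPCR close to 99.6% and UACI close to 33.4%, the expected values for two independent uniformly random 8-bit images.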

Comparative analysis for image compression
In this section, the comparative analysis for image compression scheme is presented. Similar to previous experiment, in this section, histogram analysis for the considered medical images is performed. Figure 6 depicts the outcome of histogram analysis for ultrasound, CT, and MRI images.
Similar to the experiment presented for Figure 4, we perform the histogram analysis for compressed images. Histograms of the original, compressed, and reconstructed images are depicted in Figure 6. Column 1 of Figure 6 depicts the sample original images, which include three multimodal images, namely, ultrasound, MRI, and CT. These images are used as input for compression. Column 2 shows the histogram of each image, which illustrates a normal distribution, while column 3 shows the compressed image data. The histogram of the compressed image is presented in column 4, which shows a random distribution, from which it can be concluded that the detailed coefficients carry the most significant part of the image. Similarly, the reconstructed images and their histograms are depicted in columns 5 and 6.
The performance of the proposed approach is measured in terms of PSNR, MSE, and SSIM. These parameters can be computed as mentioned in Table 7.
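The MSE and PSNR metrics of Table 7 can be computed as follows (SSIM is omitted for brevity; the peak value of 255 assumes 8-bit images):

```python
import numpy as np

def mse(original, reconstructed):
    """Mean squared error between original and reconstructed images."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(original, reconstructed, peak=255.0):
    """PSNR = 10 log10(peak^2 / MSE); infinite for a perfect reconstruction."""
    e = mse(original, reconstructed)
    return float("inf") if e == 0 else float(10.0 * np.log10(peak ** 2 / e))

a = np.full((4, 4), 100, dtype=np.uint8)
b = a.copy()
b[0, 0] = 110                      # one pixel off by 10
assert mse(a, b) == 100 / 16       # squared error averaged over 16 pixels
```

Higher PSNR (lower MSE) indicates that the decompressed image is closer to the original, which is how the lossless claim of the compression pipeline is verified numerically.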
The outcome of the proposed approach is compared with the existing schemes. Table 8 shows a comparative analysis for image compression.
The comparative analysis shows that the proposed approach achieves better performance when compared with the existing techniques in terms of PSNR, MSE, and SSIM.

Conclusion
In this work, the focus is on biomedical imaging, where telemedicine diagnosis systems are now widely adopted. In these systems, the data are transmitted to a remote location, which consumes considerable bandwidth, and the medical images require huge storage space. Moreover, maintaining security during transmission is also a prime task. Hence, this work presents a combined approach with compression to reduce the storage requirement and encryption to maintain security. The compression scheme is based on a hybrid of predictive coding, Huffman coding, and DWT, whereas the encryption scheme is based on a confusion and diffusion framework. The work presents an extensive experimental analysis, and the outcomes of the proposed approach are compared with existing techniques. However, this approach is tested only on 2D biomedical images; hence, 3D and multispectral image processing remain challenging tasks that can be incorporated in future research.
Funding information: No funds or grants were received by any of the authors.
Author contributions: Latha H.R. and A. Ramaprasath contributed to the design and methodology of this study, the assessment of the outcomes, and the writing of the manuscript.