CC BY 4.0 license | Open Access | Published by De Gruyter, April 26, 2021

A Novel Approach Based Multi Biometric Finger Vein Template Recognition System using HGF

Rahul Dev, Rohit Tripathi and Ruqaiya Khanam
From the journal Open Computer Science

Abstract

Finger vein based biometrics is an approach to personal identification that has recently received much attention. Methods based on low-level features, such as the grey texture of the finger vein, are taken as the standard; however, they typically face many difficulties, including sensitivity to noise and low local consistency. Finger vein recognition based on high-level feature representation has proved to be a promising way to overcome these limitations and improve system performance. This work proposes a finger vein based recognition technique that uses a hybrid BM3D filter along with group-based sparse representation for image denoising, and feature selection (Local Binary Pattern, LBP; Scale Invariant Feature Transform, SIFT) to evaluate features and key points and perform recognition. The experimental results on two open finger vein databases, HKPU and SDU, show that the proposed method improves the overall performance of the finger vein recognition system compared with other existing methods.

1 Introduction

Automatic recognition of human identity for access control and security is a worldwide concern. Financial losses due to fraud can be severe, and the integrity of security systems can be compromised. Hence, automatic verification systems for access control have found application in criminal identification, autonomous publishing and computerized banking, among others. Among the many authentication systems that have been developed and deployed, finger vein biometrics is emerging as a robust technique for automated personal identification. The finger vein is a distinctive physiological biometric for recognizing individuals based on the physical characteristics of the vein patterns in the human finger [1].

Palm prints can be easily worn. Voice signatures can be easily replicated or imitated. Face recognition can be difficult in situations such as wearing makeup, glare, face lifts, or wearing hats or caps. Hence, there is a need for a cost-effective, accurate and reliable biometric system [2]. Likewise, the state of the finger's surface (e.g., sweat, dryness) and skin distortion can degrade recognition accuracy. Although face recognition has advantages in terms of user convenience, its performance depends heavily on facial expression and lighting. Iris recognition is the most accurate, but the cost of the scanning device can be high and the configuration can be inconvenient since the user must align the iris with the camera. To overcome these issues, vein patterns, such as palm veins and hand veins, have been studied. Vein recognition uses vascular patterns inside the human body. These vascular patterns become visible under infrared illumination. Consequently, this approach has the advantage that it is hard to forge [4].

The conventional finger vein recognition process primarily comprises four stages (Figure 2): acquiring the finger vein image, pre-processing the acquired image, extracting image features, and matching for recognition [3].

The finger vein has numerous advantages over other biometrics:

  1. A distinctive finger vein pattern that is unique to each person.

  2. A vein is not visible externally and is hidden inside the body, so it is extremely hard to forge or steal.

  3. A finger vein cannot be left behind during a verification procedure and cannot be copied.

  4. Finger vein samples can only be acquired from the individual in person.

  5. A finger vein pattern remains stable over time, so re-enrolment of the vein pattern may not be required for a long period. Additionally, finger vein recognition systems can work with low-resolution images [2].

Figure 1 Biometric authentication for finger vein [1]

Figure 2 Traditional Finger Vein Recognition System [3]

Figure 3 Flowchart of tri-branch vein extraction algorithm

1.1 Finger Vein Recognition Applications

Vascular/vein pattern recognition (VPR) technology has been developed commercially by Hitachi since 1997; infrared light absorbed by the haemoglobin in a subject's veins is recorded by a CCD camera behind a transparent surface. Some examples where this technique has been used are as follows:

  1. Finger scanning devices have been deployed in Japanese financial institutions, kiosks and turnstiles.

  2. Mantra Softech marketed a device in India that scans vein patterns in palms for attendance recording.

  3. Fujitsu developed a version that does not require direct physical contact with the vein scanner, for improved hygiene in electronic point-of-sale devices.

  4. Computer security expert Bruce Schneier stated that a key advantage of vein patterns for biometric identification is the absence of a known method for producing a usable “sham”, as is possible with fingerprints.

1.2 Comparing Finger Vein Recognition with Other Biometrics

Finger vein verification is a biometric technology based on vein patterns beneath the skin's surface that are unique to each finger and each individual. The three fundamental advantages of finger vein authentication are:

  1. As veins are hidden inside the body, there is little risk of forgery, and the surface condition of the hands has no impact on verification.

  2. The use of infrared light allows non-invasive, contactless imaging that ensures both convenience and cleanliness for the user.

  3. Vein patterns are stable and clearly defined, allowing the use of low-resolution cameras to capture vein images with simple image processing.

2 Background

Liu et al. [5] introduce a novel identification system that uses superpixel-based features (SPFs) of the finger vein for high-level feature representation [5]. When comparing two finger veins, the features of every pixel are first extracted as base attributes by conventional methods. Then, after superpixel over-segmentation, the SPF of each finger vein is obtained from its base attributes using statistical methods [5].

Sapkale and Rajbhoj [10] proposed an embedded finger vein recognition system for authentication [10]. The system is implemented using a novel finger vein recognition algorithm. Lacunarity, fractal dimension and Gabor filters are the techniques used in this system. The algorithms perform feature extraction and matching of the extracted features using a distance classifier [10].

Liu et al. [7] adopt a seven-layer CNN comprising five convolutional layers and two fully connected layers [7]. This system achieves a recognition rate of 99.53%, which outperforms conventional algorithms [7].

Hsia [8] proposes another finger vein recognition system that uses binary robust invariant elementary features from an accelerated segment test [8]. The system focuses on an adaptive thresholding approach using multi-image quality assessment (MQA), leading to a second-stage verification [8].

Yang et al. [9] chose the DSST (Discrete Separable Shearlet Transform) as the image decomposition and feature extraction tool, which is a fast implementation of the shearlet transform and performs better than other multiscale geometric analysis (MGA) methods [9]. In a comparative experiment, the method is evaluated against MHD (Modified Hausdorff Distance) based features, which include relative distance, template, wavelet, ridgelet and curvelet features, for recognition analysis [9].

Sapkale and Rajbhoj [6] proposed a biometric authentication system based on finger vein recognition [6]. The proposed system is implemented using a novel finger vein recognition algorithm. Lacunarity, fractal dimension and Gabor filter algorithms are used in this system.

Lu et al. [11] present a representative finger vein database captured by a portable device, named MMCBNU_6000 [11]. First, MMCBNU_6000 was built with 100 volunteers from 20 countries, so it contains images acquired from people with different skin colours. Second, statistical information on nationality, age, gender and blood type is recorded to aid the investigation of finger vein images. Third, as in real applications, effects from translation, rotation, scale, uneven illumination, scattering, collection posture, finger tissue and finger pressure are considered in the imaging procedure [11].

Chen et al. [12] proposed a deformable finger vein recognition framework involving an enhanced vein PCA-SIFT feature and bidirectional deformable spatial pyramid matching (BDSPM) [12]. In addition, they build a finger vein database to reflect image deformation in real applications. The experimental results, on one self-built deformation database and one open database, show the effectiveness of the proposed framework in dealing with the image deformation issue [12].

Mulyono and Jinn [13] introduce a preliminary process to improve image quality degraded by lighting effects and noise from the web camera [13]. The vein pattern is then segmented using an adaptive thresholding method and matched using improved template matching. The preliminary results show that even if the image quality is not good, as long as the veins are clear, an appropriate methodology allows them to be used for personal identification, achieving up to 100% identification accuracy [13].

3 Existing Algorithm

In the existing work, a bifurcation point together with its three local vein branches is known as the tri-branch vein structure. The structure is extracted from the vein pattern and the patterns are then matched. A two-level filter is also developed in the existing work, as shown in Figure 3.

3.1 Extraction of Tri-branch vein structure

  1. Denoising and Thinning: The single-pixel wide vein skeleton is extracted from the vein pattern by a morphological thinning operation. The intersection of a burr and a vein branch can be erroneously detected as a bifurcation point, so burrs are removed first.

  2. Detection of Bifurcation: At a bifurcation point there are three connected vein branches. Denoting the current point by p(x, y), its eight neighbouring points are denoted P = {p1, p2, ..., p8}. The point p(x, y) is regarded as a bifurcation if NS equals 6, where NS is defined as follows (a code sketch of this check is given after the list).

    (1) $N_S = \sum_{i=1}^{8} \left| p_{i+1} - p_i \right|$, where $p_9 = p_1$

  3. Branch Tracking: Three nonzero neighbouring points can be identified for each bifurcation point, and these neighbours are the starting points of the three vein branches.

  4. Dot Product and Morphological Dilation: A morphological dilation operation is performed on the single-pixel wide tri-branch vein structure.
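For illustration, the following is a minimal sketch of the thinning and bifurcation detection steps based on Eq. (1). The skeletonization helper from scikit-image, the array names and the pure-Python loop are assumptions made for readability; they are not part of the original algorithm description.

import numpy as np
from skimage.morphology import skeletonize  # assumed helper for the thinning step

def find_bifurcations(vein_mask):
    """Return (row, col) points whose crossing number N_S from Eq. (1) equals 6."""
    skel = skeletonize(vein_mask > 0).astype(np.uint8)      # single-pixel wide skeleton
    # 8-neighbourhood visited in circular order, so |p_{i+1} - p_i| counts 0/1 transitions
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    points = []
    rows, cols = skel.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            if skel[r, c] == 0:
                continue
            p = [int(skel[r + dr, c + dc]) for dr, dc in offsets]
            p.append(p[0])                                   # p_9 = p_1
            n_s = sum(abs(p[i + 1] - p[i]) for i in range(8))
            if n_s == 6:                                     # three branches meet here
                points.append((r, c))
    return points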

Figure 4 Flowchart of Tri-branch vein structure extraction

4 Proposed Method

The purpose of image restoration is to reconstruct the original high-quality image x from the observed degraded version y [2]. This can be formulated as:

(2) $y = Hx + n$

where x and y are lexicographically stacked representations of the original image and the degraded image, respectively, H is a matrix representing a non-invertible linear degradation operator, and n is typically additive white Gaussian noise.

To cope with the ill-posed nature of image restoration, prior knowledge about the image is usually employed to regularize the solution of the following minimization problem:

(3) $\hat{x} = \arg\min_{x} \frac{1}{2} \| Hx - y \|_2^2 + \lambda \Psi(x)$,

where $\frac{1}{2}\| Hx - y \|_2^2$ is the $\ell_2$ data fidelity term, $\Psi(x)$ is the regularization term that encodes the image prior, and $\lambda$ is the regularization parameter.
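To make Eqs. (2) and (3) concrete, a small sketch of how the degradation model and the regularized objective can be evaluated is given below. The convolutional blur operator, the total-variation style regularizer and the parameter values are illustrative assumptions only; they are not the operator or the prior used in the proposed method.

import numpy as np
from scipy.ndimage import convolve  # assumed available

def degrade(x, h, sigma):
    """Eq. (2): y = Hx + n, with H realized as an (assumed) convolutional blur plus Gaussian noise."""
    y = convolve(x, h, mode="reflect")
    return y + sigma * np.random.randn(*x.shape)

def objective(x, y, h, lam):
    """Eq. (3): 0.5 * ||Hx - y||_2^2 + lambda * Psi(x), with an illustrative TV-like prior as Psi."""
    residual = convolve(x, h, mode="reflect") - y
    fidelity = 0.5 * np.sum(residual ** 2)
    # Psi(x): anisotropic total variation, used here purely as an example prior
    psi = np.sum(np.abs(np.diff(x, axis=0))) + np.sum(np.abs(np.diff(x, axis=1)))
    return fidelity + lam * psi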

This work proposes a finger vein based recognition technique that uses a hybrid BM3D filter along with group-based sparse representation for image denoising, and feature selection (LBP, SIFT) to evaluate features and key points and perform recognition. A neural network is used to perform the classification. The methodology used to implement the proposed technique is as follows:

Input: Finger Image, X
Output: Matching Result
Step 1: Resize the input image to 256×256
Step 2: Image pre-processing
    Apply BM3D filter
Step 3: Apply tri-branch vein extraction using eq. (1)
Step 4: Extract features using LBP
Step 5: Train the network
Step 6: Perform sparse representation, xk = Rk(x), where xk is the kth patch of image X and Rk is the extraction operator.
Step 7: Construction of matched image using group sparse representations
Step 8: Output the matching result

The methodology comprises the steps shown in the pseudocode above. The image is denoised and converted to a binary (black and white) image. Thereafter, tri-branch vein extraction is performed to extract the finger vein structure. The bifurcation and termination points are extracted to describe the image. In the following sections these steps are explained in detail.

  1. Load the finger image from the database.

  2. Resize the image to 256×256.

  3. Apply the hybrid BM3D filter along with group-based sparse representation for denoising and perform image enhancement.

  4. Extract the veins from the image using a grey value-based method, a curvature-based method or a convolution response-based method. A minimal code sketch of this processing chain is given after this list.
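The sketch below illustrates the front end of the pipeline (resize, denoise, LBP features) and a toy matching rule. The OpenCV denoising call is only a stand-in for the hybrid BM3D filter, and the LBP parameters, histogram matching rule and threshold are illustrative assumptions; the actual implementation described in this paper is in MATLAB and uses the neural network classifier mentioned above.

import numpy as np
import cv2                                          # assumed available (OpenCV)
from skimage.feature import local_binary_pattern    # assumed available (scikit-image)

def vein_template(path):
    """Steps 1-4 of the pseudocode: load, resize, denoise, extract LBP features."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = cv2.resize(img, (256, 256))                            # Step 1
    img = cv2.fastNlMeansDenoising(img, h=10)                    # stand-in for the hybrid BM3D filter
    lbp = local_binary_pattern(img, P=8, R=1, method="uniform")  # Step 4
    n_bins = 8 + 2                                               # uniform LBP with P = 8 yields P + 2 codes
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist

def match(template_a, template_b, threshold=0.95):
    """Toy matching rule: cosine similarity between LBP histograms (illustrative only)."""
    score = np.dot(template_a, template_b) / (
        np.linalg.norm(template_a) * np.linalg.norm(template_b) + 1e-12)
    return score, score >= threshold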

Figure 5 Flowchart of proposed method

4.1 Hybrid BM3D Filter with Sparse Representation

In the BM3D algorithm, similar patches form a 3D array and this array is filtered in the transform domain with a suitably chosen threshold [15]. Because distant patches exhibit a smaller noise correlation than adjacent neighbourhoods, this gives better results than local neighbourhood-based filtering schemes. The technique has been studied extensively in recent years [16], and it has been shown to produce results close to the achievable performance.

Algorithm: BM3D filter

Consider an image region D centred around the pixel x(n, m). The algorithm proceeds as follows; a simplified code sketch of the hard-thresholding stage is given after the steps.

  1. Look for regions D' that are similar to the considered region D. Note that this similarity check is performed in the transform domain. The transform coefficients are compared against a threshold and those below the threshold are set to 0. Similar patches are chosen relatively close to the considered patch in order to simplify the search procedure.

  2. Similar patches are stacked into a 3D array. A 3D discrete linear transform of the 3D array is computed. The transform coefficients are thresholded and all coefficients below the threshold are removed. Temporary filtered blocks are obtained using the inverse 3D transform. Note that, in practice, separable 2D/1D transforms are used instead of a true 3D transform for efficiency reasons.

  3. These filtered blocks are returned to the image. The procedure is performed for every pixel. Pixels belong to different numbers of blocks, so an aggregation of the filtered blocks must be performed. The weighting coefficients used in aggregation are determined from the number of transform coefficients that are above the threshold in the 3D transforms; more coefficients above the threshold imply more residual noise in the blocks, and vice versa.

  4. Similar blocks are found for the filtered image from the previous step.

  5. These similar blocks form the 3D array.

  6. The 3D transform is computed for the 3D array.

  7. The 3D transform coefficients are filtered using the Wiener filter.

  8. The inverse 3D transform is applied to obtain the final version of the filtered patches.

  9. These filtered patches are again aggregated to obtain the final estimate.
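The following is a highly simplified sketch of the collaborative hard-thresholding stage (steps 1–3 above). Real BM3D uses separable 2D/1D transforms, weighted aggregation over the whole image and the Wiener stage, all of which are omitted here; the block size, search window, group size and threshold are illustrative assumptions.

import numpy as np
from scipy.fftpack import dctn, idctn  # assumed transform; published BM3D uses other separable transforms

def bm3d_hard_threshold(img, ref, block=8, search=16, top_k=8, thr=30.0):
    """Group blocks similar to the block at `ref`, then hard-threshold them in the transform domain."""
    r0, c0 = ref
    ref_block = img[r0:r0 + block, c0:c0 + block]
    # Step 1: block matching inside a local search window (spatial-domain SSD for simplicity)
    candidates = []
    for r in range(max(0, r0 - search), min(img.shape[0] - block, r0 + search) + 1):
        for c in range(max(0, c0 - search), min(img.shape[1] - block, c0 + search) + 1):
            patch = img[r:r + block, c:c + block]
            candidates.append((np.sum((patch - ref_block) ** 2), r, c))
    candidates.sort(key=lambda t: t[0])
    group = np.stack([img[r:r + block, c:c + block] for _, r, c in candidates[:top_k]])
    # Step 2: 3D transform, hard thresholding, inverse transform
    coeffs = dctn(group, norm="ortho")
    coeffs[np.abs(coeffs) < thr] = 0.0
    filtered = idctn(coeffs, norm="ortho")
    # Step 3 (weighted aggregation of the filtered blocks back into the image) is omitted in this sketch
    return filtered, [(r, c) for _, r, c in candidates[:top_k]]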

4.2 Sparse Representation

The patch is the fundamental unit of sparse representation. Let $x \in \mathbb{R}^N$ denote the vector representation of the original image and let $x_k \in \mathbb{R}^{B_s}$ denote the image patch of size $\sqrt{B_s} \times \sqrt{B_s}$ extracted at location $k$. Then,

$x_k = R_k(x)$

where $R_k$ is the operator that extracts the patch $x_k$ from the image $x$.
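A small sketch of the extraction operator $R_k$ and its transpose (which places a patch back into the image and is needed when averaging overlapping patches) is given below. The raster-order enumeration of locations k is an assumption about indexing, not something specified in the text.

import numpy as np

def extract_patch(x, k, bs):
    """R_k(x): extract the bs-by-bs patch whose top-left corner is location k (raster order)."""
    _, w = x.shape
    r, c = divmod(k, w - bs + 1)               # assumed raster-order enumeration of patch locations
    return x[r:r + bs, c:c + bs].reshape(-1)   # x_k in R^{B_s}, with B_s = bs * bs

def put_patch_back(canvas, counts, patch, k, bs):
    """R_k^T: add a patch back at location k; `counts` tracks overlaps for later averaging."""
    _, w = canvas.shape
    r, c = divmod(k, w - bs + 1)
    canvas[r:r + bs, c:c + bs] += patch.reshape(bs, bs)
    counts[r:r + bs, c:c + bs] += 1.0
    return canvas, counts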

4.2.1 Group Based Sparse Representation

Group-based sparse representation models the image in units of groups rather than individual patches, aiming to exploit the local sparsity and the non-local self-similarity of natural images simultaneously within a unified framework. Each group is represented in the form of a matrix composed of non-local patches with similar structures.
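A minimal sketch of the group construction step is shown below, reusing the raster-order patch indexing assumed in the previous sketch: for a reference patch, the most similar patches within a search window are found by Euclidean distance and stacked as the columns of a matrix, which forms one group.

import numpy as np

def build_group(x, ref_k, bs, num_similar=16, window=20):
    """Stack the patches most similar to patch ref_k into a B_s-by-c matrix (one group)."""
    h, w = x.shape
    stride = w - bs + 1
    r0, c0 = divmod(ref_k, stride)
    ref = x[r0:r0 + bs, c0:c0 + bs].reshape(-1)
    scored = []
    for r in range(max(0, r0 - window), min(h - bs, r0 + window) + 1):
        for c in range(max(0, c0 - window), min(w - bs, c0 + window) + 1):
            patch = x[r:r + bs, c:c + bs].reshape(-1)
            scored.append((np.sum((patch - ref) ** 2), patch))
    scored.sort(key=lambda t: t[0])
    # Columns are similar patches; sparse coding is then applied per group rather than per patch
    return np.stack([p for _, p in scored[:num_similar]], axis=1)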

Figure 6 A Group construction

Figure 7 Sparse Group Based Representation Modelling

5 Experiments and Analysis

The proposed technique is implemented in MATLAB and its performance is evaluated using two open finger vein databases.

5.1 Database of HKPU

In this database, each of the first 210 fingers has 12 images, acquired in two sessions, and each of the last 102 fingers has 6 images, acquired in one session. All images are 8-bit grey-level BMP files with a resolution of 513×256 pixels.

5.2 Database of SDU

The database contains 636 fingers, each with 6 images, acquired in one session. The images are 8-bit grey-level BMP files with a resolution of 320×240 pixels.

Table 1 and Table 2 show results in terms of the equal error rate (EER) and the ratio of filtered imposters to all enrolled users. Vein patterns extracted using six different methods (RLT, MaxC, WLD, MeanC, Gabor and ASAVE) are considered, as listed in the header row of both tables. The proposed technique achieves a better EER (%) than the other approaches, namely the whole vein pattern, the tri-branch vein structure, the common threshold-based framework and the user-specific threshold-based framework. The equal error rates of the different recognition methods from Table 1 and Table 2 are illustrated in Figure 8 and Figure 9, respectively.
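For reference, the equal error rate reported in the tables can be computed from genuine and imposter matching scores as sketched below; the score arrays in the usage comment are placeholders, not data from this paper.

import numpy as np

def equal_error_rate(genuine_scores, imposter_scores):
    """EER: the operating point where the false accept rate and false reject rate are equal."""
    thresholds = np.unique(np.concatenate([genuine_scores, imposter_scores]))
    best_gap, eer = np.inf, 1.0
    for t in thresholds:
        frr = np.mean(genuine_scores < t)      # genuine pairs rejected at this threshold
        far = np.mean(imposter_scores >= t)    # imposter pairs accepted at this threshold
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2.0
    return eer

# Example with placeholder scores (higher score = better match):
# eer = equal_error_rate(np.array([0.90, 0.85, 0.80]), np.array([0.30, 0.50, 0.40]))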

Table 1

EER (%) of Recognition Methods Using Database of HKPU. Values in parentheses give the ratio (%) of filtered imposters to all enrolled users.

Method | RLT | MaxC | WLD | MeanC | Gabor | ASAVE
Whole vein pattern | 2.17 | 2.16 | 1.80 | 1.72 | 1.50 | 1.65
Tri-branch vein structure | 15.90 | 8.92 | 20.93 | 20.63 | 3.79 | 3.44
Common threshold-based framework | 2.17 (0) | 2.16 (0) | 1.80 (0) | 1.72 (0) | 1.49 (0.43) | 1.63 (2.54)
User-specific threshold-based framework | 0.86 (93.52) | 1.60 (80.25) | 1.39 (54.36) | 1.39 (50.80) | 0.74 (93.59) | 0.0075 (96.40)
Proposed technique | 0.84 | 1.41 | 1.21 | 1.21 | 0.65 | 0.0070

Table 2

EER (%) of Recognition Methods Using Database of SDU. Values in parentheses give the ratio (%) of filtered imposters to all enrolled users.

Method | RLT | MaxC | WLD | MeanC | Gabor | ASAVE
Whole vein pattern | 6.16 | 5.59 | 4.94 | 4.54 | 5.12 | 6.00
Tri-branch vein structure | 15.25 | 12.87 | 26.69 | 13.53 | 6.46 | 6.08
Common threshold-based framework | 6.16 (0) | 5.29 (0) | 4.94 (0) | 4.54 (0) | 5.12 (0) | 5.98 (9.56e–04)
User-specific threshold-based framework | 5.21 (57.74) | 4.29 (63.99) | 4.30 (40.97) | 3.46 (61.19) | 4.04 (76.02) | 4.37 (77.42)
Proposed technique | 5.04 | 4.12 | 3.99 | 3.11 | 3.87 | 4.24

Figure 8 EER (%) of Recognition Methods Using Database of HKPU

Figure 9 EER (%) of Recognition Methods Using SDU Database

Figure 10 compares vein feature extraction using various methods. As shown in the graph, the proposed technique achieves a better equal error rate across the different methods; for example, the EER of HOG with the RLT extraction method is much higher than that of the proposed technique.

Figure 10 Proposed Framework on HKPU Database and EER (%) Comparison between Some Typical Vein Features

Table 3 and Table 4 show a comparison among some vein features, namely the histogram of oriented gradients (HOG), region-based axis projection (RAP), neighbour pattern coding (NPC) and the proposed technique. The compared features are extracted from vein patterns identified by four different strategies. These tables demonstrate that the proposed method achieves the best performance.

Table 3

Proposed Framework on HKPU Database and Equal Error Rate (EER (%)) Comparison between Some Typical Vein Features

Methods | RLT | MaxC | MeanC | Gabor
HOG | 3.62 | 4.34 | 6.81 | 3.57
RAP | 1.74 | 5.16 | 8.36 | 2.45
NPC | 2.83 | 3.04 | 1.77 | 2.05
Tri-branch vein structure based detection technique | 0.86 | 1.60 | 1.39 | 0.74
Proposed technique | 0.75 | 1.41 | 1.21 | 0.65

Table 4

Proposed Framework on SDU Database and EER (%) Comparison between Some Typical Vein Features

Methods | RLT | MaxC | MeanC | Gabor
HOG | 6.05 | 4.95 | 4.55 | 6.65
RAP | 8.53 | 8.00 | 5.80 | 6.67
NPC | 6.57 | 6.63 | 4.76 | 5.89
Tri-branch vein structure based detection technique | 5.21 | 4.29 | 3.46 | 4.04
Proposed technique | 4.55 | 4.14 | 3.08 | 3.87

Table 5 and Table 6 show experimental results on the first-session images and on the two-session images, respectively. Table 7 gives the time costs of the main steps in the tri-branch vein structure extraction. From the table we can see that the time cost of the structure extraction is low on both databases. The first stage of the structure extraction, i.e., thinning and denoising, takes a noticeable share of the time. The main reason is that the deburring in the denoising step is performed twice, since there are more burrs in the vein pattern obtained from a low-quality image. Additionally, the time cost per image on the SDU database is nearly 50% of that per image on the HKPU database; these computational results are shown in Figure 14. The reason is that the images in the SDU database (320×240 pixels) are considerably smaller than those in the HKPU database (513×256 pixels).

Table 5

EER (%) Comparison between Some Typical Finger Vein Recognition Methods and the Proposed Framework on First Session Images of HKPU Database

Method | EER (%)
LBP (Local binary pattern) | 2.34
LLBP (Local line binary pattern) | 2.48
Superpixel-based feature (SBF) | 2.73
Competitive coding | 2.46
Tri-branch vein structure based detection technique | 0.75
Proposed technique | 0.70

Table 6

EER (%) Comparison between Some Typical Finger Vein Recognition Methods and the Proposed Framework on Two Session Images of HKPU Database

Method | EER (%)
Gabor | 4.61
ELBP | 5.59
Fusion-based method | 4.47
Tri-branch vein structure based detection technique | 3.89
Proposed technique | 3.14

Table 7

Time Costs (seconds) in Tri-Branch Vein Structure Extraction. Each cell gives Existing / Proposed.

Database | Denoising and Thinning | Detection of Bifurcation | Branch Tracking | Dot Product and Dilation | Total
HKPU database | 0.0253 / 0.0198 | 0.0044 / 0.0041 | 0.0031 / 0.0028 | 0.0056 / 0.0051 | 0.0384 / 0.0318
SDU database | 0.0132 / 0.0125 | 0.0019 / 0.0017 | 0.0014 / 0.0014 | 0.0037 / 0.0028 | 0.0202 / 0.0182

Figure 11 EER (%) Comparison between Some Typical Vein Features and Proposed Framework on SDU Database (Extraction method)

Figure 12 EER (%) Comparison between Some Typical Finger Vein Recognition Methods and the Proposed Framework on First Session Images of HKPU Database

Figure 13 EER (%) Comparison between Some Typical Finger Vein Recognition Methods and the Proposed Framework on Two Session Images of HKPU Database

Figure 14 Time Costs in Tri-Branch Vein Structure Extraction

Figure 15 Main Steps Time Costs (seconds) in Tri-Branch Vein Structure Extraction: (A) HKPU database, (B) SDU database

6 Conclusion

In this paper, we propose a novel finger vein based recognition technique using a hybrid BM3D filter along with group-based sparse representation for image denoising, and feature selection (LBP, SIFT) to evaluate features and key points and perform recognition. A neural network is used to perform the classification. All the experimental results presented show that the proposed technique performs very well compared with the existing techniques, with all methods implemented in MATLAB. The proposed technique is also evaluated in terms of peak signal to noise ratio. Apart from analysing the equal error rate, we also observe that the time cost per image on the SDU database is about 50% of that per image on the HKPU database. The experimental results demonstrate that the proposed technique outperforms typical non-vein pattern-based methods and earlier vein pattern-based methods.

References

[1] Wang, K., Ma, H., Popoola, O. P., & Li, J. (2011). Finger vein recognition. Biometrics, 31–53. doi:10.5772/18025

[2] Zhang, J., Zhao, D., & Gao, W. (2014). Group-based sparse representation for image restoration. IEEE Transactions on Image Processing, 23(8), 3336–3351. doi:10.1109/TIP.2014.2323127

[3] Lee, E. C., Lee, H. C., & Park, K. R. (2009). Finger vein recognition using minutia-based alignment and local binary pattern-based feature extraction. International Journal of Imaging Systems and Technology, 19(3), 179–186. doi:10.1002/ima.20193

[4] Yang, L., Yang, G., Xi, X., Meng, X., Zhang, C., & Yin, Y. (2017). Tri-branch vein structure assisted finger vein recognition. IEEE Access, 5, 21020–21028. doi:10.1109/ACCESS.2017.2728797

[5] Liu, Z., Yin, Y., Wang, H., Song, S., & Li, Q. (2010). Finger vein recognition with manifold learning. Journal of Network and Computer Applications, 33(3), 275–282. doi:10.1016/j.jnca.2009.12.006

[6] Sapkale, M., & Rajbhoj, S. M. (2016, June). A finger vein recognition system. In 2016 Conference on Advances in Signal Processing (CASP) (pp. 306–310). IEEE. doi:10.1109/CASP.2016.7746185

[7] Liu, W., Li, W., Sun, L., Zhang, L., & Chen, P. (2017, June). Finger vein recognition based on deep learning. In 2017 12th IEEE Conference on Industrial Electronics and Applications (ICIEA) (pp. 205–210). IEEE. doi:10.1109/ICIEA.2017.8282842

[8] Hsia, C. H. (2017). New verification strategy for finger-vein recognition system. IEEE Sensors Journal, 18(2), 790–797. doi:10.1109/JSEN.2017.2772799

[9] Yang, X., Yang, C., & Yao, Z. (2016, October). The finger vein recognition based on shearlet. In 2016 8th International Conference on Wireless Communications & Signal Processing (WCSP) (pp. 1–5). IEEE. doi:10.1109/WCSP.2016.7752499

[10] Sapkale, M., & Rajbhoj, S. M. (2016, August). A biometric authentication system based on finger vein recognition. In 2016 International Conference on Inventive Computation Technologies (ICICT) (Vol. 3, pp. 1–4). IEEE. doi:10.1109/INVENTIVE.2016.7830222

[11] Lu, Y., Xie, S. J., Yoon, S., Wang, Z., & Park, D. S. (2013, December). An available database for the research of finger vein recognition. In 2013 6th International Congress on Image and Signal Processing (CISP) (Vol. 1, pp. 410–415). IEEE. doi:10.1109/CISP.2013.6744030

[12] Chen, Q., Yang, L., Yang, G., Yin, Y., & Meng, X. (2017, March). DFVR: Deformable finger vein recognition. In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 1278–1282). IEEE. doi:10.1109/ICASSP.2017.7952362

[13] Mulyono, D., & Jinn, H. S. (2008, April). A study of finger vein biometric for personal identification. In 2008 International Symposium on Biometrics and Security Technologies (pp. 1–8). IEEE. doi:10.1109/ISBAST.2008.4547655

[14] Buades, A., Coll, B., & Morel, J. M. (2005, June). A non-local algorithm for image denoising. In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05) (Vol. 2, pp. 60–65). IEEE.

[15] Dabov, K., Foi, A., Katkovnik, V., & Egiazarian, K. (2007). Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Transactions on Image Processing, 16(8), 2080–2095. doi:10.1109/TIP.2007.901238

[16] Singh, K. K., Mehrotra, A., Nigam, M. J., & Pal, K. (2011, December). A novel edge preserving filter for impulse noise removal. In 2011 International Conference on Multimedia, Signal Processing and Communication Technologies (pp. 103–106). IEEE. doi:10.1109/MSPCT.2011.6150448

Received: 2020-02-12
Accepted: 2020-05-08
Published Online: 2021-04-26

© 2021 Rahul Dev et al., published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
