Feature Extraction

LBP is an effective and widely used texture feature descriptor [22] in biometric recognition. Not only does LBP achieve good performance in many applications, it is also computationally simple [43]. Compared with LBP, LDP was proposed as a high-order texture encoding scheme for local patterns; it extracts the derivative direction variation information of each pixel in the image. The 2D-Gabor filter is sensitive to orientations, which makes it particularly promising for extracting local palmprint and dorsal hand vein features [2,5]. In addition, DCNNs have achieved significant performance in image classification [44]. A DCNN has a powerful ability to learn abstract and compact feature representations by executing several nonlinear convolutional layers. Usually, abundant training data are necessary to train the parameters of a DCNN. In particular, the features derived from a certain layer can be utilised as the DCF for biometric authentication [10,44].

In this subsection, the classical feature extractors, including LBP, LDP, 2D-Gabor, and DCNN, are introduced as follows. Each will be applied to the hyperspectral palmprint and dorsal hand vein ROIs (refer to Sections 1.4.2-1.4.4).

1. LBP: Texture has proved to be an effective pattern for biometric recognition [9] due to its rich local characteristics. Given an ROI image, the key step in transforming a pixel into its LBP code is to binarise its eight neighbouring pixels, using the value of the centre pixel as the threshold. Each pixel can then be encoded as follows:

LBP(L, C) = Σ_{d=1..8} s(v_d − v) · 2^(d−1),  where s(x) = 1 if x ≥ 0 and s(x) = 0 otherwise

where v is the value of the pixel at location (L, C) in the image, and v_d is the value of the dth neighbour pixel. Finally, an LBP vector can be generated by computing a histogram over all the encoded values. As shown in Figure 1.7, the LBP descriptor can be defined with a variety of sizes (LBP_{r,d}), where d denotes the number of adjacent neighbour points and r denotes the radius.


FIGURE 1.7 LBP neighbourhood sizes.
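As an illustrative sketch (in NumPy; the function names are ours, not from the chapter), the encoding and histogram steps for the r = 1 case can be written as:

```python
import numpy as np

def lbp_code(img, r=1):
    """8-bit LBP code of every interior pixel: each of the 8 neighbours at
    radius r is binarised against the centre pixel (the threshold), and the
    resulting bits are packed into one code."""
    img = np.asarray(img, dtype=np.int64)
    h, w = img.shape
    # Offsets of the 8 neighbours, ordered clockwise from the top-left.
    offsets = [(-r, -r), (-r, 0), (-r, r), (0, r),
               (r, r), (r, 0), (r, -r), (0, -r)]
    codes = np.zeros((h - 2 * r, w - 2 * r), dtype=np.int64)
    centre = img[r:h - r, r:w - r]
    for d, (dy, dx) in enumerate(offsets):
        neighbour = img[r + dy:h - r + dy, r + dx:w - r + dx]
        codes += (neighbour >= centre).astype(np.int64) << d
    return codes

def lbp_histogram(img):
    """Normalised 256-bin histogram of the codes: the LBP feature vector."""
    hist, _ = np.histogram(lbp_code(img), bins=256, range=(0, 256))
    return hist / hist.sum()
```

On a constant image every neighbour ties with its centre, so every pixel receives the all-ones code and the histogram collapses into a single bin.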

2. LDP: The LDP [9] is utilised to encode the local direction pattern. Given the ROI image I(Z), we define its first-order derivatives along different orientations as I'_∂(Z), where ∂ = 0°, 45°, 90°, and 135°. Here, we assume that Z_0 is a point in I(Z) and that Z_i (i = 1, ..., 8) (see Figure 1.8) denotes the ith neighbour pixel. The first-order derivatives of Z_0 are therefore calculated as follows:

I'_0°(Z_0) = I(Z_0) − I(Z_4)
I'_45°(Z_0) = I(Z_0) − I(Z_3)
I'_90°(Z_0) = I(Z_0) − I(Z_2)
I'_135°(Z_0) = I(Z_0) − I(Z_1)

The second-order derivative pattern of Z_0 along ∂ (∂ = 0°, 45°, 90°, and 135°) can be described as follows:

LDP_∂(Z_0) = {f(I'_∂(Z_0), I'_∂(Z_1)), f(I'_∂(Z_0), I'_∂(Z_2)), ..., f(I'_∂(Z_0), I'_∂(Z_8))}


FIGURE 1.8 Surrounding pixels around the centre point Z_0.

where f(·, ·) is a binary transformation:

f(I'_∂(Z_0), I'_∂(Z_i)) = 0 if I'_∂(Z_0) · I'_∂(Z_i) > 0, and 1 otherwise

At last, a 32-bit feature vector is generated by concatenating the four 8-bit patterns over the different orientations:

LDP(Z_0) = {LDP_∂(Z_0) | ∂ = 0°, 45°, 90°, 135°}
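A minimal NumPy sketch of this scheme, assuming the Z_1...Z_8 numbering of Figure 1.8 runs clockwise from the top-left (so Z_4 is the right neighbour, Z_2 the upper one); helper names are ours:

```python
import numpy as np

def ldp_feature(img):
    """Second-order LDP codes of every interior pixel.

    For each orientation the first-order derivative image is computed; each
    pixel's derivative is then compared with those of its 8 neighbours: a
    bit is 0 when the two derivatives share a sign and 1 otherwise, giving
    an 8-bit code per orientation (32 bits over the four orientations)."""
    img = np.asarray(img, dtype=np.int32)
    # First-order derivatives I'_d(Z) = I(Z) - I(neighbour in direction d).
    deriv = {
        0:   img[1:-1, 1:-1] - img[1:-1, 2:],   # 0 deg: right neighbour (Z4)
        45:  img[1:-1, 1:-1] - img[:-2, 2:],    # 45 deg: upper-right (Z3)
        90:  img[1:-1, 1:-1] - img[:-2, 1:-1],  # 90 deg: upper (Z2)
        135: img[1:-1, 1:-1] - img[:-2, :-2],   # 135 deg: upper-left (Z1)
    }
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    patterns = {}
    for ang, d in deriv.items():
        h, w = d.shape
        code = np.zeros((h - 2, w - 2), dtype=np.int64)
        centre = d[1:-1, 1:-1]
        for i, (dy, dx) in enumerate(offsets):
            nb = d[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
            # f(a, b) = 0 if a*b > 0 (same sign), 1 otherwise.
            code += ((centre * nb <= 0).astype(np.int64) << i)
        patterns[ang] = code
    return patterns
```

On a horizontal intensity ramp the 0° derivatives all share a sign (code 0 everywhere), while the 90° derivatives are all zero, which the transformation maps to the all-ones code.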

3. 2D-Gabor: Owing to its good 2D spectral selectivity, the 2D-Gabor filter is frequently exploited for orientation feature extraction [2,5]. The 2D-Gabor filter is defined as follows:

G(x, y, θ, u, σ) = (1/(2πσ²)) exp{−(x² + y²)/(2σ²)} exp{2πiu(x cos θ + y sin θ)}

where i = √−1, u denotes the frequency of the sinusoidal wave, θ denotes the direction, and σ denotes the standard deviation. Usually, a 2D-Gabor filter bank contains a set of filters over n orientations at the same scale, the orientation of the jth filter being φ_j = jπ/n (j = 0, 1, ..., n − 1).

Then, the 2D-Gabor filter is convolved with the palmprint image to obtain a line response as follows:

r(x, y) = I(x, y) * G(φ_j)

where I denotes the image, G(φ_j) denotes the filter at the jth orientation, * is the convolutional operator, r is the convolution result, and (x, y) denotes the position of a pixel in I.
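The filter bank and its responses can be sketched as follows (plain NumPy with naive sliding-window filtering; the parameter defaults and function names are illustrative, not from the chapter):

```python
import numpy as np

def gabor_filter(size, theta, u, sigma):
    """Circular 2D-Gabor filter: a complex sinusoid of frequency u along
    direction theta, modulated by an isotropic Gaussian of scale sigma."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    gauss = np.exp(-(x**2 + y**2) / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2)
    wave = np.exp(2j * np.pi * u * (x * np.cos(theta) + y * np.sin(theta)))
    return gauss * wave

def gabor_bank_responses(img, n=6, u=0.1, sigma=3.0, size=17):
    """Filter the image with n same-scale filters at orientations j*pi/n
    and return the magnitude responses over the valid region."""
    h, w = img.shape
    half = size // 2
    out = []
    for j in range(n):
        g = gabor_filter(size, j * np.pi / n, u, sigma)
        resp = np.zeros((h - 2 * half, w - 2 * half), dtype=complex)
        # Direct sliding-window filtering (cross-correlation), valid region.
        for dy in range(size):
            for dx in range(size):
                resp += g[dy, dx] * img[dy:dy + h - 2 * half,
                                        dx:dx + w - 2 * half]
        out.append(np.abs(resp))
    return out
```

In practice the per-pixel magnitudes (or the index of the strongest-responding orientation) are then taken as the orientation feature.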

4. DCF: A DCNN usually includes a variety of components, such as pooling [11], convolution, ReLU [12], and Softmax-loss layers [23], as shown in Figure 1.9. LeCun [11] first applied LeNet to handwritten digit classification; since then, DCNNs with similar non-linear structures have been widely used [23]. There are usually thousands of parameters across the different layers, so compact and discriminative characteristics can be obtained after several convolutions with the trained parameters. In particular, the Softmax-loss layer serves as the classifier in a DCNN. Here, we discard the Softmax-loss layer and extract discriminative features as DCFs directly from the second FC layer (see Figure 1.9) of the DCNN.


FIGURE 1.9 The architecture of DCNN for VGG-F [45].
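Purely to illustrate the mechanics of taking the second FC layer's activations as the DCF (this toy network with random, untrained parameters stands in for VGG-F; all names and shapes are ours), a forward pass can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def conv2d_valid(img, kernel):
    """Naive valid-mode 2D filtering of a single-channel image."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for dy in range(kh):
        for dx in range(kw):
            out += kernel[dy, dx] * img[dy:dy + h - kh + 1,
                                        dx:dx + w - kw + 1]
    return out

def extract_dcf(img, conv_k, w_fc1, w_fc2):
    """Forward pass up to the second FC layer, whose activations form the
    DCF; the Softmax-loss layer that would follow is simply dropped."""
    x = relu(conv2d_valid(img, conv_k))   # one convolutional layer + ReLU
    x = x[::2, ::2]                       # crude 2x2 subsampling as "pooling"
    x = relu(w_fc1 @ x.ravel())           # first fully connected layer
    return relu(w_fc2 @ x)                # second FC layer -> DCF vector

# Random, untrained stand-in parameters (a real system would load trained
# VGG-F weights): 16x16 input -> 14x14 conv -> 7x7 pool -> FC(64) -> FC(128).
conv_k = rng.standard_normal((3, 3))
w_fc1 = 0.1 * rng.standard_normal((64, 49))
w_fc2 = 0.1 * rng.standard_normal((128, 64))
dcf = extract_dcf(rng.random((16, 16)), conv_k, w_fc1, w_fc2)
```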

Feature Fusion and Matching

In the feature extraction phase (refer to Section 1.3.2), the different features, including LBP, LDP, 2D-Gabor, and DCF, can be extracted from each ROI image in the palmprint and dorsal hand vein cubes, respectively. Fusing the images from all of the bands for recognition would be costly and time consuming. Consequently, we select, for each type of feature, the optimal bands that achieve the best recognition results on palmprint and dorsal hand vein, respectively.

Let F_palm = [f_1, f_2, ..., f_t, ..., f_n] ∈ R^(d×n) and P_dhv = [p_1, p_2, ..., p_s, ..., p_n] ∈ R^(d×n) denote the hyperspectral palmprint features and the hyperspectral dorsal hand vein features, respectively, where f_t is the feature vector of the tth band palmprint image, p_s is the feature vector of the sth band dorsal hand vein ROI, d denotes the dimensionality of the feature, and n denotes the number of spectral bands. Afterwards, the optimal features can be fused as follows:

W = [O(F_palm), O(P_dhv)]

where O(·) selects the optimal feature from F_palm or P_dhv, namely the feature vector obtaining the highest recognition accuracy. Specifically, W is the concatenation of the optimal O(F_palm) and the optimal O(P_dhv).
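The band selection and concatenation can be sketched as follows, with O(·) realised as an argmax over per-band recognition accuracies (function names and the accuracy arrays are illustrative assumptions):

```python
import numpy as np

def select_optimal(features, accuracies):
    """O(.): pick the per-band feature column with the best recognition
    accuracy. `features` is d x n (one column per spectral band) and
    `accuracies` holds the recognition rate measured for each band."""
    return features[:, int(np.argmax(accuracies))]

def fuse(f_palm, acc_palm, p_dhv, acc_dhv):
    """W: concatenation of the optimal palmprint and dorsal hand vein
    feature vectors."""
    return np.concatenate([select_optimal(f_palm, acc_palm),
                           select_optimal(p_dhv, acc_dhv)])
```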

After feature fusion, we use the Euclidean distance for the final matching:

d(X, Y) = ‖X − Y‖₂

where X and Y are the fused feature vectors extracted from two objects; a smaller distance indicates a closer match.
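The matching score is then a one-liner (the function name is ours):

```python
import numpy as np

def match_score(x, y):
    """Euclidean distance between two fused feature vectors; a smaller
    score means a better match."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return float(np.sqrt(np.sum((x - y) ** 2)))
```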
