Evaluation 2: Without-Glass vs. With-Glass

This section details the experimental results for the evaluation protocol discussed in Section 8.5.1, i.e., training the model on multi-spectral ocular instances from the "Without-Glass" data and testing it on the "With-Glass" data for gender classification. The purpose of this evaluation is to analyse the effect of eyeglasses on the accuracy of the algorithms for ocular gender classification. It also reflects a realistic scenario in which the training and testing datasets are acquired under different environmental conditions, thereby indicating the robustness of the classification model.

To present our results and, at the same time, provide a fair comparison with our proposed approach, we again report the performance of the eight individual bands and of the fused bands (produced by the three fusion methods discussed earlier) independently, using five state-of-the-art feature extraction methods with an SVM classifier for two-class gender prediction. We therefore present the performance of the individual bands (Section 8.5.3.1) and of the fused bands (Section 8.5.3.2) in the following sections, followed by a comparison with our proposed approach.

Individual Band Comparison

Table 8.7 summarises the average gender classification accuracy obtained after 10-fold cross-validation, and Figure 8.6 shows the corresponding mean and variance plot. The results show a decrease in the overall classification accuracy of the individual bands compared to the previous evaluation, indicating the effect of wearing eyeglasses. Based on the classification results obtained, we summarise our specific observations for this evaluation as follows:

  • The maximum average gender classification accuracy across the individual bands is 71.04%, obtained for the 710 nm band using the BSIF feature extractor with the SVM classifier. The lowest average accuracy is 52.88%, obtained for the 530 nm band using the LBP feature extractor with the SVM classifier.
  • As the results show, classification accuracy falls between 55% and 65% for most individual spectral bands across the state-of-the-art feature descriptors and classifier. This drastic degradation in performance is due to the presence of eyeglasses in the ocular region, indicating the vulnerability of these algorithms to variations in the data.
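The per-band pipeline described above (texture descriptor, SVM classifier, 10-fold cross-validated accuracy) can be sketched roughly as follows. This is a minimal illustration, not the chapter's implementation: it uses a hand-rolled 8-neighbour LBP histogram and synthetic random images in place of the multi-spectral ocular samples.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def lbp_histogram(img):
    """Histogram of basic 8-neighbour LBP codes as the image's feature vector."""
    c = img[1:-1, 1:-1]
    neighbours = (img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:],
                  img[1:-1, 2:], img[2:, 2:], img[2:, 1:-1],
                  img[2:, :-2], img[1:-1, :-2])
    codes = np.zeros(c.shape, dtype=np.uint8)
    for bit, n in enumerate(neighbours):
        # Set one bit per neighbour that is >= the centre pixel.
        codes |= (n >= c).astype(np.uint8) << bit
    hist, _ = np.histogram(codes, bins=256, range=(0, 256), density=True)
    return hist

rng = np.random.default_rng(0)
# 40 synthetic 64x64 grey images per class as a placeholder dataset.
X = np.array([lbp_histogram(rng.random((64, 64))) for _ in range(80)])
y = np.repeat([0, 1], 40)

# 10-fold cross-validated accuracy of the LBP-SVM pipeline.
scores = cross_val_score(SVC(kernel="linear"), X, y, cv=10)
print(f"mean accuracy: {scores.mean():.2%}")
```

The reported per-band figures in Table 8.7 correspond to the mean of such fold accuracies, computed on the real ocular data for each spectral band.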

Fused Band Comparison

In this section, we combine the eight individual spectral bands into a single composite image using the three fusion methods employed in the previous evaluation. The aim is to assess the combined effect of the individual bands against variations in the probe ocular data, such as the wearing of eyeglasses. Table 8.8 tabulates the average gender classification accuracy, and Figure 8.5 illustrates the mean and variance plot of the classification accuracy of the three fusion methods across the five feature extraction methods. Based on the results obtained, we present the major observations below:

TABLE 8.7

Average Gender Classification Accuracy (in %) across Individual Bands and the Proposed Method, When the Training Ocular Sample Images Belong to the Without-Glass Category and the Testing Ocular Sample Images Belong to the With-Glass Category

Algorithm           530 nm   590 nm   650 nm   710 nm   770 nm   890 nm   950 nm   1,000 nm
LBP-SVM             52.88    54.00    56.44    58.47    59.42    61.97    58.47    58.85
LPQ-SVM             57.10    56.52    64.54    64.30    66.43    66.01    59.72    66.35
HOG-SVM             53.27    55.03    70.26    67.17    63.51    55.80    57.60    59.70
GIST-SVM            63.48    66.32    66.88    66.73    64.77    63.27    64.89    64.08
BSIF-SVM            60.34    64.15    69.79    71.04    70.25    68.53    67.38    67.00
Proposed approach   72.50


FIGURE 8.6 Average classification accuracy (%) shown as a mean and variance plot for the Without-Glass vs. With-Glass evaluation for gender prediction.

TABLE 8.8

Average Gender Classification Accuracy (in %) across Fusion of Bands and the Proposed Method, When the Training Ocular Sample Images Belong to the Without-Glass Category and the Testing Ocular Sample Images Belong to the With-Glass Category

Fusion Method       LBP-SVM   LPQ-SVM   HOG-SVM   GIST-SVM   BSIF-SVM
IMF                 54.23     57.25     56.75     63.40      59.79
GFF                 54.88     55.79     55.87     64.96      60.82
2-DWT               59.65     62.74     68.98     62.71      67.11
Proposed approach   72.50

  • The highest average classification accuracy of 68.98% is obtained with the 2-DWT fusion method using the HOG-SVM algorithm, while the lowest average classification accuracy of 54.23% is obtained with the IMF fusion method using the LBP-SVM algorithm.

  • Consistent with the previous fusion-method evaluation (Section 8.5.2.2), 2-DWT performs better than GFF and IMF across all five feature extraction methods with the SVM classifier.
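The 2-DWT fusion idea referred to above can be sketched as follows. This is a rough illustration, not the chapter's exact implementation: it applies a single-level Haar wavelet transform to each band, averages the approximation subbands, and keeps the maximum-magnitude detail coefficients before inverting the transform.

```python
import numpy as np

def haar2(img):
    """Single-level 2-D Haar transform: approximation + 3 detail subbands."""
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    return ((a + b + c + d) / 4,   # LL: approximation
            (a + b - c - d) / 4,   # LH: horizontal detail
            (a - b + c - d) / 4,   # HL: vertical detail
            (a - b - c + d) / 4)   # HH: diagonal detail

def ihaar2(LL, LH, HL, HH):
    """Exact inverse of haar2."""
    out = np.empty((2 * LL.shape[0], 2 * LL.shape[1]))
    out[0::2, 0::2] = LL + LH + HL + HH
    out[0::2, 1::2] = LL + LH - HL - HH
    out[1::2, 0::2] = LL - LH + HL - HH
    out[1::2, 1::2] = LL - LH - HL + HH
    return out

def dwt_fuse(bands):
    """Fuse bands: mean of approximations, max-magnitude detail per pixel."""
    coeffs = [haar2(b) for b in bands]
    LL = np.mean([c[0] for c in coeffs], axis=0)
    fused = []
    for i in (1, 2, 3):
        stack = np.stack([c[i] for c in coeffs])
        idx = np.abs(stack).argmax(axis=0)
        fused.append(np.take_along_axis(stack, idx[None], axis=0)[0])
    return ihaar2(LL, *fused)

rng = np.random.default_rng(0)
bands = [rng.random((64, 64)) for _ in range(8)]  # eight spectral bands
composite = dwt_fuse(bands)
print(composite.shape)  # (64, 64)
```

The composite image produced this way is then fed to the same feature-extraction-plus-SVM pipeline as the individual bands; practical implementations typically use several decomposition levels and other wavelet families.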

As both evaluations (individual bands and the fusion approaches) clearly show, the overall classification accuracy degrades when the ocular samples in the testing set are covered with eyeglasses. Nevertheless, our proposed approach still outperforms both the individual bands and the fused bands in this evaluation: it achieves an average classification accuracy of 72.50%, demonstrating its superiority over the other methods under varying environmental conditions.

To summarise, gender classification based on multi-spectral imaging of the ocular region has demonstrated reasonable accuracy, underlining the value of exploiting the inherent properties of multi-spectral imaging sensors. Performance degrades somewhat when the ocular instances are covered with eyeglasses, which reveals a vulnerability of multi-spectral imaging to eyeglasses. Further, our proposed approach has proven effective across both evaluations (i.e., Without-Glass vs. Without-Glass and Without-Glass vs. With-Glass).
