RELATED WORKS

The presence of eyeglasses is considered one of the major noise factors degrading biometric performance, as shown by various studies (Bowyer et al., 2008; Bharadwaj et al., 2010; Proenca & Alexandre, 2010). Another set of works has specifically

TABLE 8.1
Summary of Most Relevant Gender Classification Research from Ocular Images

Authors | Database | Features | Classification | Accuracy

Visible Spectrum
Merkow et al. (2010) | Proprietary | LBP | LDA-NN, PCA-NN, SVM | 85.00%
Lyle et al. (2010) | FRGC | LBP | SVM | 93.00%
Kumari et al. (2012) | FERET | ICA | BPNN, RBFNN, PNN | 90.00%
Castrillon-Santana et al. (2016) | GROUPS | LBP, HOG, LTP, WLD, LOSIB | SVM | 92.46%
Rattani et al. (2017) | VISOB | LBP, HOG, LTP, LPQ, BSIF | SVM, MLP | 92.00%
Rattani et al. (2018) | VISOB | RPI | VGG, ResNet | 90.00%
Tapia et al. (2019a) | CSIP, MICHE, MOBBIO, INACAP | SRCNNs | RF | 90.00%

Near-Infra-Red Spectrum
Bobeldyk and Ross (2016) | BioCOP | BSIF | SVM | 85.70%
Kuehlkamp et al. (2017) | GFI | LBP, GF, RPI | MLP, CNN | 66.00%
Tapia & Aravena (2018) | ND-GFI | RPI | CNN | 87.26%
Viedma et al. (2019) | 5 Public DBs | ULBP, HOG, RPI | SVM, NECA | 89.22%

Visible and Near-Infra-Red Spectrum
Dong & Woodard (2011) | FRGC, MBGC | GSF, LAF, CPF | MD, EDA, SVM | FRGC: 97.00%; MBGC: 96.00%
Lyle et al. (2012) | FRGC, MBGC | LBP, HOG, DCT, LCH | ANN, SVM | FRGC: 97.30%; MBGC: 90.00%
Tapia et al. (2019b) | 10 Public DBs | RPI | CNN | 86.60%

Multi-spectral Imaging
Raja et al. (2020) | Proprietary | GIST | CRC | 81.00%

FRGC, Face Recognition Grand Challenge; FERET, Face Recognition Technology; GROUPS, The Images of Groups; VISOB, Visible Light Mobile Ocular Biometric; CSIP, Cross-Sensor Iris and Periocular; MICHE, Mobile Iris Challenge Evaluation; MOBBIO, a multimodal database captured with a portable handheld device; INACAP, hand-made periocular iris image database captured from cellphones; BioCOP, FBI Biometric Collection of People; GFI, Gender from Iris; ND-GFI, University of Notre Dame Iris Image Dataset; MBGC, Multiple Biometric Grand Challenge; GSF, Global Shape Features; LAF, Local Area Features; CPF, Critical Point Features; GF, Gabor Filter; RPI, Raw Pixel Intensity; RF, Random Forest; NECA, Nine Ensemble Classifier Algorithm.

studied the impact of wearing eyeglasses to establish the performance degradation in ocular biometrics (Lee et al., 2001; Vetrekar et al., 2018). These works further conclude that the presence of eyeglasses over the ocular region seriously deteriorates the overall accuracy. Despite the serious problem posed by the influence of eyeglasses on the performance of ocular biometrics, the set of available related works is very limited. We therefore present the related works in the subsequent section, listing the approaches and open challenges specifically for gender classification based on ocular information.

Earlier works on ocular region-based gender classification have focused on the VIS spectrum (Merkow et al., 2010; Lyle et al., 2010; Kumari et al., 2012; Castrillon-Santana et al., 2016; Rattani et al., 2017, 2018; Tapia et al., 2019a), the NIR spectrum (Bobeldyk & Ross, 2016; Kuehlkamp et al., 2017; Tapia & Aravena, 2018; Tapia et al., 2019a; Viedma et al., 2019), combined VIS and NIR spectra (Dong & Woodard, 2011; Lyle et al., 2012; Tapia et al., 2019b) and, more recently, multi-spectral imaging operating in nine narrow spectral bands across the VIS and NIR wavelength range (Raja et al., 2020). All related works in this direction are listed in Table 8.1.

Visible Spectrum

One of the early works, by Merkow et al. (2010), studied gender classification using the ocular region cropped from face images in order to analyse the reliability of the ocular region for gender classification. The facial database employed in this work consists of low-resolution Joint Photographic Experts Group (JPEG) face images acquired with a web crawler from Flickr (Inc., 2010, available online). Ocular gender classification was performed using three classification methods: Linear Discriminant Analysis with a 1-Nearest-Neighbor classifier (LDA-1NN), Principal Component Analysis with a 1-Nearest-Neighbor classifier (PCA-1NN), and an SVM classifier. Each of these classification methods was used in conjunction with the LBP feature descriptor. The authors reported an accuracy of 85.00% for classifying gender when considering images with a frontal face pose and minimal occlusion and pitch.

In another work, by Lyle et al. (2010), ocular features were employed to classify soft biometric information such as gender and ethnicity. Using the LBP texture descriptor on grey-scale images, the work illustrated the effectiveness of such features for predicting soft biometric traits. A gender classification accuracy of 93.00% with SVM was reported on the Face Recognition Grand Challenge (FRGC) database (Phillips et al., 2005), while demonstrating that combining soft biometric information with an existing ocular-based authentication system improves its performance accuracy.
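The LBP-descriptor-plus-SVM pipeline used in these early works can be sketched as follows. This is a minimal illustration on synthetic patches, not the authors' implementation: the basic 8-neighbour LBP is hand-rolled in NumPy, and textured-vs-noise patches stand in for the male/female ocular crops.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def lbp_histogram(img):
    """Basic 8-neighbour LBP: compare each pixel's ring to the centre pixel,
    pack the 8 comparison bits into a code, then histogram the codes."""
    c = img[1:-1, 1:-1]
    ring = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:], img[1:-1, 2:],
            img[2:, 2:], img[2:, 1:-1], img[2:, :-2], img[1:-1, :-2]]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, n in enumerate(ring):
        codes |= ((n >= c).astype(np.uint8) << bit)
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()  # normalised 256-bin texture descriptor

# Synthetic stand-in data: textured (checkerboard) vs. unstructured patches.
rng = np.random.default_rng(0)
X, y = [], []
for label in (0, 1):
    for _ in range(40):
        if label == 0:
            img = (np.indices((32, 32)).sum(axis=0) % 2) * 200.0
        else:
            img = rng.uniform(0, 255, (32, 32))
        X.append(lbp_histogram(img + rng.normal(0, 10, (32, 32))))
        y.append(label)
X, y = np.array(X), np.array(y)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25,
                                      stratify=y, random_state=0)
acc = SVC(kernel="rbf").fit(Xtr, ytr).score(Xte, yte)
```

On real data, the descriptor would typically be computed per block of the ocular image and the block histograms concatenated before training the SVM.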

Kumari et al. (2012) attempted to classify gender from poor-quality grayscale ocular regions using the FERET face database (Phillips et al., 2000). The authors applied Independent Component Analysis (ICA) to the high-dimensional data and classified gender using neural network methods: the Back Propagation Neural Network (BPNN), Radial Basis Function Neural Network (RBFNN), and Probabilistic Neural Network (PNN). Although the evaluation was performed on low-quality images, the reported ocular gender classification accuracy of 90.00% demonstrated satisfactory applicability.
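The ICA-then-neural-network scheme can be sketched with scikit-learn. This is a hedged stand-in, not the paper's setup: synthetic flattened "pixel" vectors with a class-dependent offset replace the FERET crops, and a generic MLP replaces the BPNN/RBFNN/PNN variants.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for flattened grayscale ocular crops: 200 samples of
# 400 "pixels", with a class-dependent intensity offset baked in.
rng = np.random.default_rng(1)
y = np.array([0] * 100 + [1] * 100)
X = rng.normal(0.0, 1.0, (200, 400)) + y[:, None] * 1.5

# ICA projects the high-dimensional pixel data onto a small number of
# statistically independent components before classification.
ica = FastICA(n_components=8, random_state=1, max_iter=1000)
Z = ica.fit_transform(X)

# Shuffled split, then a small feed-forward network on the ICA features.
idx = rng.permutation(200)
train, test = idx[:150], idx[150:]
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=1)
clf.fit(Z[train], y[train])
acc = clf.score(Z[test], y[test])
```

The point of the ICA step is dimensionality reduction: the classifier sees 8 components instead of 400 raw pixel values.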

Castrillon-Santana et al. (2016) exhaustively studied the problem of gender classification based on ocular information on a most challenging dataset in the wild. The purpose of this work was to demonstrate the validity of using the ocular region in a large population and the use of complementary information from different feature descriptors to improve the overall accuracy. Features including LBP, HOG, Local Ternary Patterns (LTP), the Weber Local Descriptor (WLD), and the Local Oriented Statistics Information Booster (LOSIB) were employed in their system along with an SVM classifier with a Radial Basis Function (RBF) kernel. The classification results computed on the GROUPS database (Gallagher & Chen, 2009) showed 92.46% gender classification accuracy.

Further, Rattani et al. (2017) explored the problem of gender estimation using ocular images acquired with three different smartphones: iPhone 5s, Samsung Note 4, and Oppo N1. Texture descriptors such as LBP, LTP, HOG, LPQ, and BSIF were used in conjunction with SVM and Multi-Layer Perceptron (MLP) classifiers on the publicly available VISOB database of periocular images (Rattani et al., 2016). A maximum of 80.00% classification accuracy was obtained with the SVM classifier on the LPQ descriptor, while 91.60% accuracy was obtained with the MLP classifier on the HOG descriptor for the ocular data captured from smartphones. Later, in an extended work, Rattani et al. (2018) performed an extensive deep-learning-based evaluation on ocular images collected using smartphones. Pretrained CNNs, namely the very deep convolutional network for large-scale image recognition (VGG) and the residual network (ResNet), were employed for gender classification with an accuracy of 90.00%.

Along similar lines, selfie images collected using smartphones have also been used for ocular gender classification, by cropping the ocular region from selfie images of individual faces, as demonstrated by Tapia et al. (2019a). The authors employed Super-Resolution Convolutional Neural Networks (SRCNNs) to improve the overall quality of the ocular region cropped from selfie images by factors of 2X and 3X prior to gender classification. Results were obtained on three existing databases, CSIP (Santos et al., 2015), MICHE (Marsico et al., 2015), and MOBBIO (Sequeira et al., 2014), as well as on their in-house INACAP database. The study demonstrated a gender classification accuracy of 90.15% using Random Forest by employing SRCNNs on ocular images.
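The SRCNN used for this kind of pre-processing is a small three-layer network applied after a conventional bicubic upscale. The sketch below shows the standard 9-1-5 architecture with an untrained network on a dummy input; it illustrates the pipeline shape only, since the actual weights come from training on low-/high-resolution patch pairs.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SRCNN(nn.Module):
    """Three-layer SRCNN: patch extraction (9x9), non-linear mapping (1x1),
    reconstruction (5x5), applied to a bicubically upscaled input."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv2d(64, 32, kernel_size=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=5, padding=2),
        )

    def forward(self, x):
        return self.net(x)

# 2X pipeline: bicubic upscale first, then let the network refine detail.
low_res = torch.rand(1, 1, 24, 24)  # a 24x24 grayscale ocular crop
upscaled = F.interpolate(low_res, scale_factor=2, mode="bicubic",
                         align_corners=False)
with torch.no_grad():
    restored = SRCNN()(upscaled)
```

The padded convolutions preserve spatial size, so the restored image keeps the upscaled 2X resolution before being passed to the gender classifier.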

Overall, works on ocular gender classification in the VIS spectrum depend significantly on prominent features such as the eyebrow (structural information) (Dong & Woodard, 2011). We also note that, while ocular regions cropped from holistic facial images collected for facial databases are available in the public domain for academic research (Tapia & Aravena, 2018; Bobeldyk & Ross, 2016; Lyle et al., 2010), dedicated datasets with equal gender balance are not. Finally, performance accuracy was observed to decrease when eyebrows were excluded from the analysis (Dong & Woodard, 2011).

 