11.5 INDEXING IRIS IMAGES

The iris is one of the most widely used biometric traits. It contains rich texture information in the form of colour, minutiae, spots, filaments, rifts, etc. that makes it unique. It has a lower false-match rate than other available biometric traits [13,15], making it a stable biometric trait for authentication. It is an internally protected organ and therefore cannot be easily duplicated [30]. Iris database indexing consists of two phases, namely, indexing and retrieval. Indexing refers to associating the features extracted from the iris images with an index. During the retrieval phase, the feature vector of the probe image is generated and the most similar index is identified. The candidates stored under that index are then retrieved for comparison with the probe iris image, which reduces the search space remarkably.
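As a simple illustration of this two-phase idea, the following Python sketch enrols feature vectors under a coarse index key and, at retrieval time, compares the probe only against candidates sharing its key. The key function, the number of bins, and the random feature vectors are illustrative assumptions, not part of any particular published scheme.

from collections import defaultdict

import numpy as np


def index_key(feature_vec, n_bins=16):
    # Hypothetical coarse key: quantise the mean feature value into one of n_bins.
    return int(np.clip(feature_vec.mean(), 0.0, 1.0) * (n_bins - 1))

def enrol(database, subject_id, feature_vec):
    database[index_key(feature_vec)].append((subject_id, feature_vec))

def retrieve(database, probe_vec):
    # Only candidates in the probe's bin are compared, shrinking the search space.
    return database[index_key(probe_vec)]

database = defaultdict(list)
rng = np.random.default_rng(0)
for subject_id in range(100):
    enrol(database, subject_id, rng.random(64))

candidates = retrieve(database, rng.random(64))
print(f"candidates to compare: {len(candidates)} of 100 enrolled images")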

11.5.1 Indexing of Iris Database Based on Local Features

The technique proposed in Ref. [32] uses local features extracted from the iris images for indexing, employing three transformation methods, namely, the Discrete Cosine Transform (DCT) [33], the Discrete Wavelet Transform (DWT) [36], and Singular Value Decomposition (SVD) [19]. Before applying these methods, the acquired iris image is pre-processed by segmentation, normalisation, and enhancement. The iris is segmented using Canny edge detection [3] and the circular Hough transform. The segmented iris is then normalised using Daugman's rubber sheet model [31], which converts the iris into a rectangular region, and enhanced using the CLAHE approach [56]. The pre-processing steps along with their corresponding outputs are shown in Figure 11.11.


FIGURE 11.11 Pre-processing of the iris image (a) eye image, (b) edge detection, (c) iris localisation, (d) iris segmentation, and (e) iris normalisation [32].
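A pre-processing pipeline of this kind can be sketched in Python with OpenCV, using Canny edges and a circular Hough transform for localisation, a Daugman-style rubber-sheet mapping for normalisation, and CLAHE for enhancement. The parameter values and the simplified single-circle localisation below are assumptions made for illustration, not the exact settings of Ref. [32].

import cv2
import numpy as np


def localise_iris(gray):
    # Edge map (Figure 11.11b) followed by a circular Hough transform; the strongest
    # circle is taken as the outer iris boundary (a deliberate simplification).
    edges = cv2.Canny(gray, 50, 150)
    circles = cv2.HoughCircles(edges, cv2.HOUGH_GRADIENT, dp=1, minDist=gray.shape[0],
                               param1=100, param2=30, minRadius=20, maxRadius=120)
    x, y, r = np.round(circles[0, 0]).astype(int)
    return (x, y), r

def rubber_sheet(gray, centre, r_pupil, r_iris, radial=64, angular=512):
    # Daugman-style normalisation: map the annular iris region to a fixed-size rectangle.
    cx, cy = centre
    thetas = np.linspace(0, 2 * np.pi, angular, endpoint=False)
    rect = np.zeros((radial, angular), dtype=gray.dtype)
    for i, rho in enumerate(np.linspace(0, 1, radial)):
        radius = r_pupil + rho * (r_iris - r_pupil)
        xs = np.clip((cx + radius * np.cos(thetas)).astype(int), 0, gray.shape[1] - 1)
        ys = np.clip((cy + radius * np.sin(thetas)).astype(int), 0, gray.shape[0] - 1)
        rect[i] = gray[ys, xs]
    return rect

def preprocess(eye_bgr):
    gray = cv2.cvtColor(eye_bgr, cv2.COLOR_BGR2GRAY)
    centre, r_iris = localise_iris(gray)
    norm = rubber_sheet(gray, centre, r_pupil=int(0.3 * r_iris), r_iris=r_iris)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))   # contrast enhancement
    return clahe.apply(norm)                                      # normalised, enhanced strip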

For feature extraction, the pre-processed image is divided into 8×8 blocks, and local features are extracted from each block. DWT decomposes each block into seven sub-bands and helps differentiate between the textures. DCT transforms the image from the spatial domain to the frequency domain; when applied to each sub-band, it produces spectral sub-bands of differing importance. SVD is then applied to these blocks, and the important features are selected and expressed as a series of singular values (SVs). Scalable K-means++ is applied to these features to divide them into distinctive groups, leading to the creation of two B-trees. The block diagram of the proposed approach is shown in Figure 11.12.
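The block-wise feature chain can be sketched as follows: a two-level DWT of each 8×8 block yields seven sub-bands, a DCT moves each sub-band to the frequency domain, SVD provides the leading singular values, and the resulting vectors are clustered. The choice of wavelet, the number of singular values retained per sub-band, and the use of scikit-learn's standard k-means++ initialisation as a stand-in for the scalable (k-means||) variant are all assumptions.

import numpy as np
import pywt
from scipy.fft import dctn
from sklearn.cluster import KMeans


def block_features(block):
    # Two-level DWT of an 8x8 block yields seven sub-bands: cA2 plus two detail triplets.
    coeffs = pywt.wavedec2(block, wavelet="haar", level=2)
    subbands = [coeffs[0]] + [band for triplet in coeffs[1:] for band in triplet]
    feats = []
    for sb in subbands:
        spectral = dctn(sb, norm="ortho")           # DCT: spatial domain -> frequency domain
        s = np.linalg.svd(spectral, compute_uv=False)
        feats.extend(s[:2])                         # keep the two largest singular values
    return np.array(feats)

def extract_features(norm_iris, block=8):
    h, w = norm_iris.shape
    rows = []
    for r in range(0, h - h % block, block):
        for c in range(0, w - w % block, block):
            rows.append(block_features(norm_iris[r:r + block, c:c + block].astype(float)))
    return np.vstack(rows)

# Group the block features into distinctive clusters (k-means++ initialisation).
features = extract_features(np.random.default_rng(0).random((64, 512)))
groups = KMeans(n_clusters=8, init="k-means++", n_init=10, random_state=0).fit_predict(features)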

For indexing, a global key is generated for every image stored in the database. The key consists of the group number that contains the image's sub-band features together with a combined key value. Every group is then sorted in increasing order of the first SV. The groups are then divided into two bins, containing the features of the first and second SVs, respectively. This divides the database into two B-trees, which are traversed using the generated global key. The images are stored at the leaf nodes of the B-trees. The B-tree structure is shown in Figure 11.13.
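The indexing structure can be approximated with the following sketch, in which plain sorted lists stand in for the B-trees: every enrolled image receives a global key built from its group number and leading singular values, and each group is kept sorted by singular value in two parallel structures. The key layout and the bin width are illustrative assumptions rather than the exact format used in Ref. [32].

import bisect
from collections import defaultdict

# One sorted structure per singular value; tree[group] holds (value, image id) pairs
# kept in increasing order of the value (image ids are assumed to be integers).
first_sv_tree = defaultdict(list)
second_sv_tree = defaultdict(list)


def enrol(image_id, group, first_sv, second_sv):
    # Insert the image into both structures under its group, preserving sort order,
    # and return the global key that is stored alongside the image.
    bisect.insort(first_sv_tree[group], (first_sv, image_id))
    bisect.insort(second_sv_tree[group], (second_sv, image_id))
    return (group, first_sv, second_sv)

def bin_candidates(tree, group, sv, width=0.05):
    # Image ids in the group whose stored value lies within +/- width of sv.
    bucket = tree[group]
    lo = bisect.bisect_left(bucket, (sv - width, float("-inf")))
    hi = bisect.bisect_right(bucket, (sv + width, float("inf")))
    return [image_id for _, image_id in bucket[lo:hi]]

enrol(image_id=7, group=2, first_sv=0.41, second_sv=0.22)
enrol(image_id=9, group=2, first_sv=0.44, second_sv=0.35)
print(bin_candidates(first_sv_tree, group=2, sv=0.43))    # -> [7, 9]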

During identification, when a probe iris image is presented to the system, its features are extracted using the procedure described above. The bins closest to the extracted features are identified and traversed through the B-tree using the global key. Similar candidates are then located inside the bin using a half-interval (binary) search. This produces a candidate list for comparison with the probe image.
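The probe-side step can be sketched as below: once the closest bin has been reached, a half-interval (binary) search over the sorted bin contents returns the nearest stored entries as the candidate list. The bin contents, tolerance, and list size are illustrative; in Ref. [32] the bins sit at the leaf nodes of the two B-trees.

import bisect

# One bin: (first singular value, image id) pairs kept sorted by value at enrolment.
bin_contents = [(0.11, 7), (0.14, 3), (0.18, 12), (0.25, 4), (0.31, 9)]


def candidates_from_bin(probe_sv, bin_entries, k=3):
    # Half-interval search for the insertion point, then take the k nearest neighbours.
    pos = bisect.bisect_left(bin_entries, (probe_sv,))
    lo, hi = max(0, pos - k), min(len(bin_entries), pos + k)
    window = bin_entries[lo:hi]
    return sorted(window, key=lambda entry: abs(entry[0] - probe_sv))[:k]

print(candidates_from_bin(0.20, bin_contents))   # nearest stored candidates for matching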


FIGURE 11.12 Block diagram of the proposed approach [32].


FIGURE 11.13 B-tree structure that is used to store the bins [32].

11.5.2 Results

The performance of the above-discussed technique has been tested on three databases, viz., CASIA-IrisV3-Interval [43], the BATH University database [55], and the IITK database [47]. The CASIA-IrisV3-Interval database contains 54,607 iris images collected from 1,800 real and 1,000 virtual subjects. The images were captured in two different sessions separated by at least one month. The BATH University database contains 2,000 iris images collected from both the left and right eyes of 50 subjects; the images are in grey-scale format with a resolution of 1,280 × 960. The IITK database consists of 1,800 images collected from the left eyes of 600 subjects. Sample images from these databases are shown in Figure 11.14.


FIGURE 11.14 Sample images taken from three databases: (a) BATH, (b) CASIA, and (c) IITK [32].

The proposed technique achieved penetration rates of 0.98%, 0.13%, and 0.12% and bin miss rates of 0.3037%, 0.4226%, and 0.2019% on the CASIA-IrisV3-Interval, BATH University, and IITK databases, respectively. The proposed method has also been compared with three methods proposed in Refs. [4,46,58], including a DCT energy histogram and a key-point descriptor. The comparison shows that the proposed method has a higher penetration rate, which is attributed to the smaller number of sub-bands used.
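For reference, the two reported metrics are commonly computed as follows: the penetration rate is the average fraction of the enrolled database that must be examined per probe, and the bin miss rate is the fraction of probes whose true mate does not appear in the retrieved candidate list. The toy candidate lists below are purely illustrative.

def penetration_rate(candidate_lists, database_size):
    # Average fraction of the enrolled database retrieved per probe.
    retrieved = sum(len(candidates) for candidates in candidate_lists)
    return retrieved / (len(candidate_lists) * database_size)

def bin_miss_rate(candidate_lists, true_ids):
    # Fraction of probes whose true mate is absent from the retrieved candidates.
    misses = sum(1 for candidates, true_id in zip(candidate_lists, true_ids)
                 if true_id not in candidates)
    return misses / len(true_ids)

candidate_lists = [[3, 7, 12], [5, 9], [1, 4, 8, 10]]   # retrieved candidates for three probes
true_ids = [7, 2, 8]                                    # enrolled identity of each probe
print(penetration_rate(candidate_lists, database_size=1000))   # 0.003 -> 0.3%
print(bin_miss_rate(candidate_lists, true_ids))                # 0.333... -> one probe missed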

 