In the case of CB, a PBI is generated by employing some transformation function and a user-specific key. Here, matching is always performed in the transformed domain. For designing an effective CB scheme, four fundamental requirements, i.e., non-invertibility, revocability, unlinkability, and system usability, need to be addressed simultaneously. Thus, for measuring the performance of CB-based techniques, these four requirements need to be quantitatively assessed. It should be noted that although several CB-based techniques have been proposed in the literature, not much work has been carried out on metrics for their quantitative assessment.

Performance Measures for Non-invertibility Analysis

For the quantitative assessment of non-invertibility, conditional Shannon entropy can be used. However, in the case of CB, the generation of the PBI makes it difficult to quantify Shannon entropy directly; thus, to measure non-invertibility, the authors in Ref. [76] proposed studying several attacks, i.e., the zero-effort attack, exhaustive (brute force) attack, stolen biometric attack, stolen token attack, and worst-case attack. The decision result for any CB-based system can be defined as follows:

Rx = 1 (accept) if DT(Ax, A′x) ≤ ε; Rx = 0 (reject) otherwise,

where DT stands for a distance function in the transformed domain. Ax represents the CB template of user x [a combination of the feature vector fx and kx (secret key or transformation parameters)] generated during enrollment, while A′x (a combination of the query feature vector f′x and kx) represents the CB template of user x generated during authentication. Here, Rx represents the decision result, and ε is a decision threshold chosen by the user.

In the zero-effort attack, the imposter makes no effort during authentication and simply presents his own biometrics (Av) to the system. In the brute force (exhaustive) attack, the imposter tries many different random values of his own biometrics (Av) with the intention that some Av eventually matches Ax. The other forms of attack, i.e., the stolen biometric attack, stolen token attack, and worst-case attack, are more serious. In these cases, the imposter somehow gains access either to the genuine user's feature vector (fx), or to the genuine user's transformation parameters (kx), or, in the extreme case of the worst-case attack, to both fx and kx.
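A minimal sketch of the decision rule and these attack scenarios, assuming a keyed random projection as the cancelable transform (the transform, threshold, and all parameters here are illustrative assumptions, not the scheme of Ref. [76]):

```python
import numpy as np

rng = np.random.default_rng(0)

def transform(f, key_seed):
    # hypothetical cancelable transform: random projection keyed by a seed
    proj = np.random.default_rng(key_seed).normal(size=(len(f), len(f)))
    return proj @ f

def decide(template, query, eps=2.0):
    # Rx = 1 (accept) if the distance in the transformed domain is <= eps
    return int(np.linalg.norm(template - query) <= eps)

fx = rng.normal(size=16)                     # genuine feature vector
fx_query = fx + 0.05 * rng.normal(size=16)   # noisy genuine query
fv = rng.normal(size=16)                     # imposter's own biometric
kx, kv = 42, 99                              # user-specific key seeds

Ax = transform(fx, kx)                       # enrolled template

print(decide(Ax, transform(fx_query, kx)))   # genuine attempt: accepted
print(decide(Ax, transform(fv, kv)))         # zero-effort attack: rejected
print(decide(Ax, transform(fv, kx)))         # stolen-token attack: key alone does not help
print(decide(Ax, transform(fx, kx)))         # worst case: fx and kx both stolen, accepted
```

The worst-case attack succeeds by construction, which is why it bounds the security of any CB scheme from below.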

Performance Measures for Unlinkability Analysis

Linkage across different databases can disclose different pieces of information about an individual and thus can allow an adversary to attack by consolidating information. Thus, it is necessary to ensure unlinkability across biometric templates stored in different databases. Recently, two measures, a local and a global one [29], have been proposed to quantify unlinkability.

a. [local] D↔(s) ∈ [0, 1]: This metric depends upon the likelihood ratio between the mated (probe and gallery belong to the same subject but are transformed using different keys) and non-mated (probe and gallery belong to different subjects and are transformed using different keys) score distributions, and it evaluates the local linkability of the system at each score s.

In this measure, D↔(s) = 0 signifies “high” unlinkability, while D↔(s) = 1 signifies “low” unlinkability at score s.

b. [global] D↔sys ∈ [0, 1]: This metric is independent of the individual score, and it measures the global linkability of the entire system. In this measure, D↔sys = 0 indicates “high” unlinkability, while D↔sys = 1 indicates “low” unlinkability. It is defined as follows:

D↔sys = ∫ D↔(s) · p(s|Hm) ds,

where p(s|Hm) indicates the distribution of scores generated from mated samples.
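These two measures can be estimated from sampled score distributions. The sketch below uses synthetic mated and non-mated scores and assumes ω = 1 in the likelihood-ratio-based definition of [29]; all distributions and parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical score samples from a CB system (higher = more similar)
mated = rng.normal(0.30, 0.05, 10000)      # same subject, different keys
non_mated = rng.normal(0.29, 0.05, 10000)  # different subjects, different keys

bins = np.linspace(0.0, 1.0, 101)
p_m, _ = np.histogram(mated, bins=bins, density=True)
p_nm, _ = np.histogram(non_mated, bins=bins, density=True)

# likelihood ratio LR(s) = p(s|Hm) / p(s|Hnm), guarded against empty bins
lr = np.divide(p_m, p_nm, out=np.ones_like(p_m), where=p_nm > 0)

# local measure D(s): clipped posterior difference, 0 = unlinkable at s
d_local = np.clip(2 * lr / (1 + lr) - 1, 0, None)
d_local[p_m == 0] = 0

# global measure: expectation of D(s) under the mated score distribution
d_sys = np.sum(d_local * p_m * np.diff(bins))
print(round(d_sys, 3))  # near 0 when mated and non-mated scores overlap heavily
```

Because the two synthetic distributions nearly coincide, the global measure comes out close to 0, i.e., the simulated templates are effectively unlinkable.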

Performance Measures for System Usability Analysis

Ensuring the usability of the system is a functional attribute. Measures for quantifying this attribute are the same as those used for quantifying the performance of any traditional biometric system. These measures are mainly classified into two subcategories: (i) performance measures for verification and (ii) performance measures for identification. Mainly, the false acceptance rate (FAR), false rejection rate (FRR), equal error rate (EER), and decidability index (DI) are used as metrics in the cancelable verification domain. These terms are described below.

FAR: It specifies how many unauthorised persons get access to the system. It is defined as follows:

FAR = (number of accepted imposter attempts / total number of imposter attempts) × 100.

FRR: It specifies how many authorised persons are denied access by the system. It is defined as follows:

FRR = (number of rejected genuine attempts / total number of genuine attempts) × 100.

EER: It is the point at which the FRR value is equal to the FAR. A lower EER value indicates a better biometric system.

DI: This measure gives the separability between the imposter and genuine score distributions. It is defined as follows:

DI = |μg − μim| / √((σg² + σim²) / 2),

where μg, μim, σg², and σim² are the means and variances of the genuine and imposter distributions, respectively. Apart from these regular verification metrics, for measuring the system usability once the biometric template is transformed, a metric has recently been proposed [76], which is defined as follows:

where FARt and FRRt in the numerator represent the FAR and FRR of the transformed template, while FARo and FRRo in the denominator denote the FAR and FRR of the original biometric templates. This metric measures the ratio of the receiver operating characteristic (ROC) curves. Here, At = 1 indicates the ideal (perfect) scenario, while a negative value of At indicates deteriorating performance.
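The verification measures above can be sketched on synthetic score data (the score distributions and all numbers here are illustrative assumptions, not results from the chapter):

```python
import numpy as np

rng = np.random.default_rng(2)
genuine = rng.normal(0.7, 0.1, 5000)   # hypothetical genuine match scores
imposter = rng.normal(0.4, 0.1, 5000)  # hypothetical imposter match scores

thresholds = np.linspace(0.0, 1.0, 1001)
far = np.array([(imposter >= t).mean() for t in thresholds])  # FAR(t)
frr = np.array([(genuine < t).mean() for t in thresholds])    # FRR(t)

# EER: operating point where the FAR and FRR curves cross
i = np.argmin(np.abs(far - frr))
eer = (far[i] + frr[i]) / 2

# DI: separability of the genuine and imposter score distributions
di = abs(genuine.mean() - imposter.mean()) / np.sqrt(
    (genuine.var() + imposter.var()) / 2)

print(f"EER = {eer:.3f} at threshold {thresholds[i]:.2f}, DI = {di:.2f}")
```

With these assumed distributions (means 0.7 and 0.4, equal standard deviation 0.1), the crossing point sits near a threshold of 0.55 and DI comes out close to 3.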

Correct recognition rate (CRR) is another commonly used metric for assessing CB identification performance. It measures the percentage of correct matches, and it is defined as follows:

CRR = (number of correctly identified samples / total number of test samples) × 100.
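For identification, a minimal rank-1 CRR computation might look like the following sketch, where the gallery and probes are synthetic stand-ins for transformed templates:

```python
import numpy as np

rng = np.random.default_rng(3)
n_subjects, dim = 50, 32
gallery = rng.normal(size=(n_subjects, dim))             # one enrolled template per subject
probes = gallery + 0.3 * rng.normal(size=gallery.shape)  # one noisy query per subject

# rank-1 identification: each probe is assigned to its nearest gallery template
dists = np.linalg.norm(probes[:, None, :] - gallery[None, :, :], axis=2)
predicted = dists.argmin(axis=1)
crr = 100.0 * (predicted == np.arange(n_subjects)).mean()
print(f"CRR = {crr:.1f}%")
```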

Performance Measures for Revocability Analysis

For ensuring revocability, as suggested in Ref. [23], a distribution curve between the imposter and pseudo-imposter distributions is drawn. The claim of revocability is preserved when the mean (μpseudo) and variance (varpseudo) of the pseudo-imposter distribution are close to μim and varim of the imposter distribution and far from μg and varg of the genuine distribution.
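A toy pseudo-imposter experiment can check this: match a template under a revoked key against templates of the same user issued with fresh keys, and compare the resulting score statistics with those of true imposters. The keyed random projection below is an illustrative assumption, not the scheme of Ref. [23]:

```python
import numpy as np

rng = np.random.default_rng(4)

def transform(f, seed):
    # hypothetical keyed random projection as the cancelable transform
    return np.random.default_rng(seed).normal(size=(16, 16)) @ f

fx = rng.normal(size=16)
old = transform(fx, 100)  # template under the revoked key

# pseudo-imposter scores: same biometric, freshly issued keys
pseudo = [np.linalg.norm(old - transform(fx, s)) for s in range(1000)]
# imposter scores: different biometrics, different keys
imposter = [np.linalg.norm(old - transform(rng.normal(size=16), s))
            for s in range(2000, 3000)]

print(np.mean(pseudo), np.var(pseudo))      # pseudo-imposter mean and variance
print(np.mean(imposter), np.var(imposter))  # imposter mean and variance
```

If the two printed means and variances are close, reissued templates are indistinguishable from imposter attempts, which is exactly the behaviour revocability requires.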

Databases Used in Cancelable Biometrics

Most of the work in the cancelable domain is mainly concentrated on three popular biometric traits, i.e., face, iris, and fingerprints. It is worth mentioning that in the cancelable domain, there is no standard protocol defined for training and


[Table 2.6: Key Databases Used in Cancelable Biometrics — columns: Biometric Trait, Database, No. of Subjects, Remarks. Listed databases include ND-IRIS-0405 (a commonly used cancelable iris dataset) and FVC2002 DB-1,2,3,4 (a small and not very challenging dataset); one entry is described as the largest and most challenging dataset, collected over 15 sessions.]

testing images. As a result, different numbers of training and testing images are used by researchers in various works [48,96]. Table 2.6 illustrates the key cancelable databases in the literature along with their advantages and limitations. It should be noted that most of the work in the cancelable domain has been conducted on small datasets, despite the availability of large datasets, particularly in the face domain, such as MS-Celeb and FaceNet. Developing CB techniques for voluminous, challenging datasets that can represent the real-world population is the need of the hour.
