PERFORMANCE MEASURES AND DATASETS IN CANCELABLE BIOMETRICS
In the case of CB, a PBI is generated by employing a transformation function and a user-specific key. Here, matching is always performed in the transformed domain. As we know, for designing an effective CB scheme, four fundamental requirements, i.e., non-invertibility, revocability, unlinkability, and system usability, need to be addressed simultaneously. Thus, for measuring the performance of CB-based techniques, these four requirements need to be quantitatively assessed. It should be noted that several CB-based techniques have been proposed in the literature, but not much work has been carried out on proposing metrics for their quantitative assessment.
Performance Measures for Non-Invertibility Analysis
For the quantitative assessment of non-invertibility, conditional Shannon entropy can be used. However, in the case of CB, due to the generation of a PBI, it is difficult to quantify Shannon entropy directly. Thus, to measure non-invertibility, the authors in Ref. [76] have proposed to study several attacks such as the zero-effort attack, exhaustive (brute-force) attack, stolen biometric attack, stolen token attack, and worst-case attack. The decision result for any CB-based system can be defined as follows:
where D_T stands for the distance function in the transformed domain. A_x represents the CB template of user x [combination of a feature vector, say f_x, and k_x (secret key or transformation)] generated during enrollment, while A′_x (combination of a query feature vector, say f′_x, and k_x) represents the CB template of user x generated during authentication. Here, R_x represents the decision result, and ε is a decision threshold chosen by the user. The zero-effort attack is quantified as follows:
In this case, during authentication, the imposter makes no effort and presents his own biometrics (A_v) to the system. In the case of a brute-force attack, the imposter tries different random values of his own biometrics (A_v) with the intention that A_v somehow matches A_x. The other forms of attack, i.e., the stolen biometric attack, stolen token attack, and worst-case attack, are more serious. In these cases, the imposter somehow gets access either to the genuine user's feature vector (f_x) or to the genuine user's transformation parameters (k_x), or, in the extreme case of the worst-case attack, to both f_x and k_x.
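The decision rule and the zero-effort attack condition referred to above can be sketched as follows. This is our reconstruction from the surrounding notation, not the authors' exact equations:

```latex
% Decision rule in the transformed domain (reconstruction):
R_x =
\begin{cases}
  1 \ (\text{accept}), & \text{if } D_T\!\left(A_x, A'_x\right) \le \varepsilon \\
  0 \ (\text{reject}), & \text{otherwise}
\end{cases}
% The zero-effort attack succeeds when the imposter's own template
% A_v falls within the decision threshold of the genuine template:
D_T\!\left(A_v, A_x\right) \le \varepsilon
```

Under the brute-force (exhaustive) attack, the imposter repeats this test for many randomly chosen A_v, so the success probability grows with the number of attempts.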
Performance Measures for Unlinkability Analysis
Linkage across different databases can disclose different pieces of information about an individual and thus can allow an adversary attack by consolidating information. Thus, it is necessary to ensure unlinkability across biometric templates stored in different databases. Recently, two measures, a local and a global [29], have been proposed to quantify unlinkability.
a. [local] D_↔(s) ∈ [0,1]: This metric depends upon the likelihood ratio between the mated (probe and gallery that belong to the same subject but are transformed using different keys) and non-mated (probe and gallery that belong to different subjects and are transformed using different keys) score distributions to evaluate the local linkability of the system at each score.
In this measure, D_↔(s) = 0 signifies “high” unlinkability, while D_↔(s) = 1 signifies “low” unlinkability at score s.
b. [global] D^sys_↔ ∈ [0,1]: This metric is independent of the individual score, and it measures the global linkability of the entire system. In this measure, D^sys_↔ = 0 indicates “high” unlinkability, while D^sys_↔ = 1 indicates “low” unlinkability. It is defined as follows:
where p(s|H_m) denotes the score distribution of the mated samples.
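A sketch of these definitions, as we understand them from Ref. [29] (with LR(s) = p(s|H_m)/p(s|H_nm) the likelihood ratio between the mated and non-mated score distributions, and ω the prior ratio of mated to non-mated comparisons):

```latex
\mathrm{D}_{\leftrightarrow}(s) =
\begin{cases}
  2\,\dfrac{\omega \cdot \mathrm{LR}(s)}{1 + \omega \cdot \mathrm{LR}(s)} - 1,
    & \text{if } \mathrm{LR}(s) > 1 \\[1ex]
  0, & \text{otherwise}
\end{cases}
\qquad
\mathrm{D}^{\mathit{sys}}_{\leftrightarrow} =
\int \mathrm{D}_{\leftrightarrow}(s)\; p(s \mid H_m)\, \mathrm{d}s
```

Intuitively, the global measure is the local linkability averaged over the mated score distribution, so a system is globally unlinkable only if it is unlinkable at the scores where mated comparisons actually occur.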
Performance Measures for System Usability Analysis
Ensuring the usability of the system is a functional attribute. The measures for quantifying this attribute are the same as those used for quantifying the performance of any traditional biometric system. These measures are mainly classified into two subcategories: (i) performance measures for verification and (ii) performance measures for identification. Mainly, the false acceptance rate (FAR), false rejection rate (FRR), equal error rate (EER), and decidability index (DI) are used as metrics in the cancelable verification domain. These terms are described below.
FAR: It specifies how many unauthorised persons get access to the system. It is defined as follows:
FRR: It specifies how many authorised persons are denied access by the system. It is defined as follows:
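The elided FAR and FRR definitions follow the standard form (our reconstruction, not the chapter's exact equations):

```latex
\mathrm{FAR} = \frac{\text{number of false acceptances}}
                    {\text{total number of imposter attempts}} \times 100\%
\qquad
\mathrm{FRR} = \frac{\text{number of false rejections}}
                    {\text{total number of genuine attempts}} \times 100\%
```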
EER: It is the point at which the FRR value is equal to the FAR. A lower EER value indicates a better biometric system.
DI: This measure gives the separability between the imposter and genuine score distributions. It is defined as follows:
where μ_g, μ_im, σ²_g, and σ²_im are the means and variances of the genuine and imposter distributions, respectively. Apart from these regular verification metrics, for measuring the system usability once the biometric template is transformed, a metric has recently been proposed [76], which is defined as follows:
where FAR_T and FRR_T in the numerator represent the FAR and FRR of the transformed template, while FAR_O and FRR_O in the denominator denote the FAR and FRR of the original biometric template. This metric measures the ratio of the receiver operating characteristic curves. A value of 1 indicates the ideal (perfect) scenario, while a negative value indicates deteriorating performance.
The correct recognition rate (CRR) is another commonly used metric for assessing CB identification performance. It measures the percentage of correct identifications, and it is defined as follows:
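The usability metrics above are straightforward to compute from raw score lists. The following is an illustrative sketch, not code from the chapter; all function names and the toy data are our own, and scores are assumed to be similarities (genuine high, imposter low):

```python
# Sketch: computing FAR, FRR, EER, DI, and CRR from score lists.
from statistics import mean, pvariance
from math import sqrt

def far_frr(genuine, imposter, threshold):
    """FAR: share of imposter attempts accepted (score >= threshold).
    FRR: share of genuine attempts rejected (score < threshold)."""
    far = sum(s >= threshold for s in imposter) / len(imposter)
    frr = sum(s < threshold for s in genuine) / len(genuine)
    return far, frr

def eer(genuine, imposter):
    """Approximate EER: scan candidate thresholds and return the
    error rate where |FAR - FRR| is smallest."""
    best = None
    for t in sorted(set(genuine) | set(imposter)):
        far, frr = far_frr(genuine, imposter, t)
        if best is None or abs(far - frr) < best[0]:
            best = (abs(far - frr), (far + frr) / 2)
    return best[1]

def decidability_index(genuine, imposter):
    """DI = |mu_g - mu_im| / sqrt((var_g + var_im) / 2)."""
    return abs(mean(genuine) - mean(imposter)) / sqrt(
        (pvariance(genuine) + pvariance(imposter)) / 2)

def crr(correct_identifications, total_attempts):
    """CRR as a percentage of correct identifications."""
    return 100.0 * correct_identifications / total_attempts
```

For perfectly separated toy scores such as `genuine = [0.9, 0.8, 0.7]` and `imposter = [0.2, 0.3, 0.4]`, `eer` returns 0 and `decidability_index` is large (about 6.1), reflecting well-separated distributions.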
Performance Measures for Revocability Analysis
For ensuring revocability, as suggested in Ref. [23], a distribution curve between the imposter and pseudo-imposter distributions is drawn. The claim of revocability is preserved when the mean (μ_pseudo) and variance (var_pseudo) of the pseudo-imposter distribution are close to μ_im and var_im of the imposter distribution and far from μ_g and var_g of the genuine distribution.
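This criterion can be checked programmatically once the three score lists are available. The helper below is our own construction (not from Ref. [23]); pseudo-imposter scores are obtained by matching a template against re-issued (re-keyed) templates of the same user, and revocability holds when they behave like imposter scores:

```python
# Sketch: check the revocability criterion on score distributions.
from statistics import mean

def revocability_ok(genuine, imposter, pseudo_imposter, tol=0.1):
    """Revocability is supported when the pseudo-imposter mean is
    close to the imposter mean and far from the genuine mean.
    (Variances can be compared in the same way.)"""
    close_to_imposter = abs(mean(pseudo_imposter) - mean(imposter)) < tol
    far_from_genuine = abs(mean(pseudo_imposter) - mean(genuine)) > tol
    return close_to_imposter and far_from_genuine
```

For example, with genuine scores around 0.85 and imposter scores around 0.25, pseudo-imposter scores near 0.25 would pass the check, while pseudo-imposter scores near 0.85 would fail it, indicating that a re-issued template still matches the old one.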
Databases Used in Cancelable Biometrics
Most of the work in the cancelable domain is mainly concentrated on three popular biometric traits, i.e., face, iris, and fingerprint. It is worth mentioning that in the cancelable domain, there is no standard protocol defined for training and
TABLE 2.6
Key Databases Used in Cancelable Biometrics
Biometric Trait   Database         No. of Subjects   Remarks
Face              CMU-PIE          68
                  FERET            1199              Largest and most challenging dataset, collected over 15 sessions
                  AR               126
                  FEI              200
Iris              CASIA-IrisV3     396               Commonly used cancelable iris dataset
                  MMU1             100
                  IITD             224
                  ND-IRIS-0405     356
Fingerprint       FVC2002 DB1-4    110               Small dataset and not very challenging
testing images. As a result, different numbers of training and testing images are used by researchers in various works [48,96]. Table 2.6 illustrates key cancelable databases in the literature along with their advantages and limitations. It should be noted that most of the work in the cancelable domain has been conducted on small datasets despite the availability of large datasets, particularly in the face domain, such as MSCeleb and FaceNet. Developing CB techniques for voluminous, challenging datasets that can represent the real-world population is the current need of the hour.