# CANCELABLE BIOMETRIC SYSTEMS: INTRODUCTION AND REVIEW

The concept of cancelable biometrics proposes that a biometric template/feature should never be used in its raw format for storage and matching purposes. Unlike other schemes, which transform biometrics using encryption or invertible vaults, cancelable biometrics follows a one-way, non-invertible approach to map an ‘original biometric’ identity into a ‘pseudo-biometric’ identity (PI) with the help of some auxiliary data (AD). Figure 7.1 shows this distortion effect, where an original template ‘M’ is transformed into a pseudo-biometric identity ‘PI’ using a transformation function that takes the user-specific AD or key as its input arguments. An essential property of this transform is that it must be non-invertible and must preserve the discriminability of the original features after distortion. This implies that, after distortion, the biometric features belonging to the same user must have a similar distribution and those belonging to different users must have distinct distributions, i.e., the inter-user and intra-user variations must be maintained in the transformed domain.

The basic cancelable biometric setup is shown in Figure 7.2. The transformation function is incorporated as an intermediate step in a conventional biometric authentication system, where only the ‘pseudo-biometric’ identity is generated at enrollment or authentication, while the AD is provided to the user in a tokenised manner (e.g., on a smart card). At enrollment, the original biometric identity ‘B’ of a user is transformed with the help of a secret key/AD to generate a transformed feature/PI, which is stored as the reference template. At authentication, the probe biometric (B′) of the same user is transformed in the same way to generate the transformed query template (PI′). The transformed reference and query templates are then matched to determine access.
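This enrollment/authentication flow can be sketched in a few lines of NumPy. The transform below (a random projection followed by sign binarisation) is a toy stand-in for the schemes reviewed later, not any specific published method; the seeds, dimensions, and the 0.3 matching threshold are illustrative assumptions.

```python
import numpy as np

def transform(features, key_seed):
    """Toy non-invertible transform standing in for a cancelable scheme:
    project onto a key-dependent random subspace, then binarise by sign
    (a many-to-one thresholding step)."""
    rng = np.random.default_rng(key_seed)      # key_seed plays the role of AD
    R = rng.standard_normal((features.size, 32))
    return (features @ R > 0).astype(np.uint8)

def match(pi_ref, pi_query, threshold=0.3):
    """Match two pseudo-identities by normalised Hamming distance."""
    return np.mean(pi_ref != pi_query) <= threshold

# Enrollment: original biometric B -> stored pseudo-identity PI
B = np.random.default_rng(1).standard_normal(64)   # stand-in feature vector
PI_ref = transform(B, key_seed=1234)

# Authentication: a slightly noisy probe B' of the same user, same AD/key
B_probe = B + 0.1 * np.random.default_rng(2).standard_normal(64)
PI_query = transform(B_probe, key_seed=1234)

print(bool(match(PI_ref, PI_query)))   # genuine attempt: small intra-user distance
```

Note that only `PI_ref` is stored; revoking a compromised template amounts to re-enrolling `B` with a fresh `key_seed`.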

Cancelable biometric systems are characterised by their ability to satisfy four important template protection requirements: *discriminability, revocability, diversity,* and *non-invertibility.* These characteristics can be followed from Figure 7.1: (a) the transformed identity PI must preserve the discriminating characteristics of the original biometric template M (*discriminability*); (b) if a PI is compromised, a new one can be regenerated from the same template M by changing the transformation function or AD (*revocability*); likewise, the same template can be mapped to different PIs for the diverse usage of biometrics across different applications (*diversity*); and (c) in the case of compromise, the original template is not revealed, owing to the non-invertible nature of the transform (*non-invertibility*).

FIGURE 7.1 Cancelable biometric transformation process.

FIGURE 7.2 Enrolment and authentication processes with cancelable biometrics.

While it is of utmost importance that any template protection scheme deliver these requirements, the challenge is to design a transformation paradigm that distorts the biometric features, yet not to the extent that their discriminability is compromised. The balance between discriminability and non-invertibility is central to any claim about the security of the system. The next section reviews the conventional template transformation schemes, followed by the effect on these transformations of the technology shift towards neural networks.

## Conventional Template Transformation Techniques

Template transformation paradigms are broadly classified as *biometric salting* and *non-invertible transforms.* Biometric salting techniques distort the data by mixing it with random noise. The AD is obtained externally and interacts directly with the biometric to increase the entropy of the template, which makes it difficult for an adversary to guess. The salting operation is generally followed by some many-to-one mapping in order to impart non-invertibility. The techniques under this category can be further classified as *Random Projection, Random Convolution, Random Noise,* and *Random Mapping-based* transforms. The techniques under these categories are summarised in Figure 7.3 and discussed below.

Random Projection (RP)-based transformations are the most widely used biometric salting techniques. RP transforms biometric data by projecting it onto a random subspace defined by a user-specific key. Teoh et al. (2004) proposed the most popular biometric salting technique, known as BioHashing [8]. Here, the biometric features are salted by projecting them onto a random subspace defined by orthonormal random matrices. The result is then quantised into binary codes via thresholding operations to achieve many-to-one mapping and non-invertibility. Although the approach is well known to preserve discriminability, it is also susceptible to inversion if the transformed biometric and the projection matrix are leaked [9,10]. Various techniques, such as Random Multi-space Quantisation (RMQ) in BioHash [11], Multispace Random

FIGURE 7.3 Categorywise depiction of conventional template transformation techniques.

Projections (MRP) [12], User-dependent Multi-state Discretisation (Ud-MsD) BioHash [13], RP with vector translation [14], Sectored Random Projections [15], and Dynamic Random Projections [16] have been proposed to address these drawbacks.
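The core BioHashing steps described above (orthonormal random projection followed by thresholding) can be sketched as follows. This is a minimal illustration, not the published algorithm: the function name `biohash`, the code length `n_bits`, the threshold `tau`, and the use of QR factorisation to orthonormalise the key-derived matrix are all assumptions made for the sketch.

```python
import numpy as np

def biohash(features, key_seed, n_bits=16, tau=0.0):
    """Sketch of BioHashing-style salting: project the feature vector onto
    an orthonormal random subspace derived from a user-specific key, then
    threshold into a binary BioCode (many-to-one quantisation)."""
    rng = np.random.default_rng(key_seed)
    R = rng.standard_normal((features.size, n_bits))
    Q, _ = np.linalg.qr(R)             # orthonormalise the random basis
    return (features @ Q > tau).astype(np.uint8)

x = np.random.default_rng(0).standard_normal(64)   # stand-in feature vector
code1 = biohash(x, key_seed=42)
code2 = biohash(x, key_seed=43)   # new key -> a different, revoked template
```

Changing `key_seed` yields a fresh, unrelated BioCode from the same features, which is exactly the revocability/diversity behaviour described above.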

Random Convolution-based transformations convolve the biometric signal with a random kernel to generate transformed templates. Savvides et al. (2004) transformed face images by convolving them with random kernels [17]. However, deconvolution can be attempted to recover the features if the random kernel is known. Maiorana et al. (2010) proposed BioConvolving, which uses a random user-specific key to divide the original feature vector into fixed-size segments that are later convolved to generate transformed templates [18]. However, the discriminability and non-invertibility properties are not justified in the stolen-token scenario. Wang et al. (2014) used curtailed circular convolution, in which binary fingerprint features are convolved with random binary strings in a circular manner to impart non-invertibility [19].
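A curtailed circular convolution can be sketched as below, in the spirit of Wang et al. (2014) but much simplified: the FFT-based implementation, the choice of kernel, and the decision to discard the first half of the output (the "curtailing" that leaves the inverse underdetermined) are assumptions of this sketch, not details from the original paper.

```python
import numpy as np

def curtailed_circular_convolve(template, key_seed):
    """Sketch of a random-convolution transform: circularly convolve a
    binary feature string with a user-specific random binary kernel,
    then discard part of the output so exact inversion is underdetermined."""
    rng = np.random.default_rng(key_seed)
    kernel = rng.integers(0, 2, size=template.size)
    # Circular convolution computed via the FFT (convolution theorem)
    full = np.fft.ifft(np.fft.fft(template) * np.fft.fft(kernel)).real
    full = np.round(full).astype(int)
    return full[template.size // 2:]   # curtail: keep only half the coefficients

bits = np.random.default_rng(3).integers(0, 2, size=32)   # stand-in features
protected = curtailed_circular_convolve(bits, key_seed=99)
```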

Random Noise-based transformations distort biometric templates by adding random noise patterns. Teoh et al. (2006) proposed BioPhasoring, which generates a set of complex vectors in which the original features form the real part and user-specific random vectors form the imaginary part [20]. The phase/arctangent of each complex vector is used as the non-invertible transformed template. Leng et al. (2011, 2013) improved the BioHashing and BioPhasoring techniques for the palmprint modality, extending the transformation algorithm to 2D in both cases to generate templates with reduced computational complexity and storage cost. Zuo et al. (2008) proposed GRAY salting (template-based salting) and BIN salting (code-based salting) for generating cancelable iris templates [21]. These techniques add unique random noise or synthetic textures to the underlying Gabor features. Kaur and Khanna (2017) XORed the original features with random patterns, followed by median filtering to ensure non-invertibility [22].
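The phase idea behind BioPhasoring can be illustrated as follows. This is only a sketch of the core operation: the real BioPhasor involves an iterated mixing step, whereas here a single `arctan2` of (random part, feature part) is taken; the function name and seeds are assumptions.

```python
import numpy as np

def biophasor_sketch(features, key_seed):
    """Sketch of the BioPhasor idea: treat the biometric features as the
    real part and key-derived random values as the imaginary part of
    complex numbers, then keep only the phase. The magnitude (and hence
    the original feature scale) is discarded, hindering inversion
    without the key."""
    rng = np.random.default_rng(key_seed)
    r = rng.standard_normal(features.size)   # user-specific random vector
    return np.arctan2(r, features)           # phase of (features + j*r)

x = np.random.default_rng(5).standard_normal(32)   # stand-in feature vector
t = biophasor_sketch(x, key_seed=7)
```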

**Random Mapping Transforms** initially map biometric features to other values in a transform domain, such as decimal values, indices, distances, or slopes. Dwivedi et al. (2016) proposed randomised look-up table mapping to generate cancelable iris templates [23]. Consistent bits are extracted from the features to generate randomly mapped decimal values, but the mapping can be inverted if the look-up table and transformation parameters are known. Another scheme, proposed by Jin et al. (2018), maps real-valued iris features into discrete (max-ranked) index hashed codes. It is based on locality-sensitive hashing (LSH) and is known as ‘Index-of-Max (IoM)’ hashing [24]. Kaur and Khanna (2018) proposed a method that maps biometric features and some random user-specific data as points in Cartesian space. The slopes and intercepts of the lines passing through these feature and random points are calculated to generate the transformed features [25]. In another work, instead of computing slopes, the distances between the feature points and the random points are used for the same purpose [26].
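The Index-of-Max idea can be sketched as below: each hash records only *which* random direction gives the maximum projection, so the code is rank-based and discrete. The parameters `m` (number of hashes) and `q` (directions per hash) and the Gaussian choice of projections are illustrative assumptions, not the exact configuration of Jin et al. (2018).

```python
import numpy as np

def iom_hash(features, key_seed, m=8, q=4):
    """Sketch of Index-of-Max (IoM) hashing: for each of m hash functions,
    project the feature vector onto q key-derived random directions and
    keep only the index of the maximum response. Retaining ranks rather
    than values makes the mapping many-to-one and hard to invert."""
    rng = np.random.default_rng(key_seed)
    codes = []
    for _ in range(m):
        W = rng.standard_normal((features.size, q))
        codes.append(int(np.argmax(features @ W)))
    return np.array(codes)

x = np.random.default_rng(11).standard_normal(64)   # stand-in feature vector
code = iom_hash(x, key_seed=2024)
```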

**Non-invertible Transforms** map biometric features to a new random subspace such that the inverse mapping is not possible. Ratha et al. (2007) proposed three concrete functions that randomly map fingerprint minutiae points to a new subspace using Cartesian, polar, and surface folding transforms [27]. In spite of the many-to-one mappings used by these transforms, Quan et al. (2008) proved that they are invertible when the transformed templates and parameters are simultaneously known [28]. Similarly, Farooq et al. (2007) and Lee and Kim (2010) proposed many-to-one mappings of minutiae features onto a predefined 3D array based on a user-specific key and a reference minutia’s position and orientation [29,30]. However, the mapping used here tends to compromise discriminability, and inverse attacks are possible if the user-specific keys are revealed. Recently, Alam et al. (2018) have proposed improvements that preserve discriminability and non-invertibility using minutiae-based bit-string methods [31]. Yang et al. (2013) extracted local structures of minutiae features using Delaunay triangulation, which were then subjected to a non-invertible polar transformation [32]. Rathgeb et al. (2013) proposed a template protection approach that mapped input binary iris features to hashed vectors consisting of only zeros and ones using the concept of bloom filters [33]. However, the claimed irreversibility of bloom filters was challenged shortly afterwards by Hermans et al. (2014), who also observed that the technique is vulnerable to cross-matching attacks [34]. Barrero et al. (2016) discussed an improvement built upon the original concept of bloom-filter-based template protection, followed by an additional feature rearrangement step to provide unlinkability and irreversibility [35]. Wang et al. (2017) used a partial discrete Fourier transform to achieve good performance, as the local structures of minutiae points preserve discriminability after non-invertible distortion [36]. Teoh and Wang (2018) proposed the random permutation maxout transform, which maps a real-valued face feature vector into a discrete index code used as the transformed template [37].
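The bloom-filter mapping used by Rathgeb et al. (2013) can be sketched as follows: columns of a binary iris code are read as small integers, and each integer sets one bit in a per-block bloom filter, giving a many-to-one, order-insensitive template. The parameters `word_size` and `block_words`, and the toy 8×32 iris code, are assumptions of this sketch rather than values from the paper.

```python
import numpy as np

def bloom_filter_template(iris_code, word_size=4, block_words=8):
    """Sketch of bloom-filter-based iris template protection: split the
    binary iris code into blocks of columns, read each word_size-bit
    column as an integer, and set the corresponding bit in that block's
    bloom filter. Multiple columns can map to the same bit, so the
    transform is many-to-one."""
    n_cols = iris_code.shape[1]
    filters = []
    for start in range(0, n_cols, block_words):
        bf = np.zeros(2 ** word_size, dtype=np.uint8)
        for col in iris_code[:word_size, start:start + block_words].T:
            idx = int("".join(map(str, col)), 2)   # column bits -> integer index
            bf[idx] = 1
        filters.append(bf)
    return np.concatenate(filters)

iris = np.random.default_rng(13).integers(0, 2, size=(8, 32))  # toy iris code
template = bloom_filter_template(iris)
```

Because the filter only records *which* column values occurred, not where, the mapping absorbs rotational misalignment of the iris code, which is part of the scheme's appeal; as noted above, however, this same structure is what later analyses exploited for inversion and cross-matching.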