Fraud Attack Detection in Remote Verification Systems for Non-enrolled Users


Identity verification systems are widely used in daily life. Most of these systems rely on official documents containing identifying information about a person (i.e. passport, ID card, driving licence, membership cards, and social services card, among others). These documents usually include a face image of the person which is used to validate identity.

Recent advances in computer vision techniques have explored the use of biometric information for identity verification [6]. The face [27], fingerprints, and iris [18] are amongst the most used and reliable features for automatic identity verification. However, most systems usually require the physical presence of the individual in order to capture the information from face, fingerprint, or iris images.

The recent massive increase in the use of mobile phones has opened a new form of remote authentication. In these situations, authentication is mainly based on comparing the input data of the user (i.e. selfie or fingerprint) with the information previously registered from the same individual (database).

These systems have been applied in several industries including banking. However, most of the methods require all the users to be registered in a database. The information captured by the mobile device is then matched with the existing information of the user previously saved in a database (Figure 10.1).

The enrolling requirement for all users of the system limits the use of this kind of authentication. Activities such as opening a new account in a bank, for instance, would always require the presence of the user in the bank to be enrolled.

In order to overcome this limitation and build a fully remote system, an authentication system based on the verification of users against the information provided by an official identity document (i.e. ID card, passport, or driver’s license, amongst others) is proposed.1 This system uses the biometric information obtained from a self-taken photo (selfie) and compares it against the photo on the user’s official identity document. This process is known as a biometric match [17]. Such systems do not require a database to be consulted, as all the data needed are provided by the official ID card and the selfie image; since no biometric database is stored, there is no central repository exposed to hacking and theft of private information.
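At its core, the biometric match described above is a similarity comparison between two face-embedding vectors, one from the selfie and one from the ID document photo. The sketch below is illustrative only: the `biometric_match` function, the toy vectors, and the 0.5 threshold are assumptions; in practice the embeddings would come from a trained face-recognition model.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def biometric_match(selfie_emb: np.ndarray, id_photo_emb: np.ndarray,
                    threshold: float = 0.5) -> bool:
    """Return True when the two embeddings are close enough to be
    considered the same person. The threshold is a tunable assumption."""
    return cosine_similarity(selfie_emb, id_photo_emb) >= threshold

# Toy vectors standing in for embeddings from a face-recognition model.
same = np.array([0.9, 0.1, 0.2])
close = np.array([0.85, 0.15, 0.25])
different = np.array([-0.7, 0.6, -0.3])

print(biometric_match(same, close))      # similar embeddings -> True
print(biometric_match(same, different))  # dissimilar embeddings -> False
```

Note that the decision depends only on the two inputs presented at verification time, which is what allows the system to operate without a user database.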


FIGURE 10.1 Graphical representation of remote authentication (1) and remote verification (2) using selfie face images.

The key challenge of this authentication system is to ensure that neither the ID card nor the selfie has been manipulated by the user.

This work studies several algorithms used to detect whether ID cards have been altered. Two typical scenarios of image manipulation were studied: (i) physical and (ii) digital (see Figure 10.2). There are other cases of possible manipulation of ID cards, including more extreme scenarios, such as ‘fake’ identities with soft-biometric features generated using algorithms such as Generative Adversarial Networks (GANs) [3]. However, this ongoing research only focuses on the two scenarios mentioned above.

The remainder of the chapter is organised as follows. Related work is reviewed in Section 10.2. The proposed method to detect ID card manipulation is described in Section 10.3. Experiments and results are reported in Section 10.4. Finally, the conclusions of this work are presented in Section 10.5.


Remote Authentication Framework Using Biometrics

As technologies progress, the financial, government, and other sectors have increasingly opted for online services due to their lower cost compared to in-office services. As a result, remote authentication systems have become critical in order to ensure user identity during transactions. The accelerated evolution of consumer smartphone cameras has brought with it an increased interest from industry in mobile biometric verification systems. The capacity to reach the customer remotely for services such as e-commerce, digital banking, and general fintech requires robust systems for automatic identity verification. Remote biometric authentication systems based on fingerprints [2,16,29] and the face [11,19,20,24] are amongst the most popular.

In the case of authentication systems based on faces, there are two main categories: (i) remote authentication for enrolled users and (ii) remote authentication for non-enrolled users. Most of the literature addresses the first scenario, where biometric data from individuals (users of the system) are previously captured and stored in a database [9,20,24]. The main goal of the system is to ensure that the input data from the user match the biometric information previously stored. Stokkenes et al. [24], for instance, proposed online banking authentication based on features extracted from faces using Bloom filters. This information is encoded and used as a key for opening banking services. Similar work, which involves fusing biometric information, has also been explored by Czyzewski et al. [9].

This scenario requires an enrolling process that sometimes can limit the application of such systems. Storing sensitive information from users such as biometric data can also be risky for companies due to regulations concerning personal data. Several approaches have been proposed to enforce security in such systems. Perera et al. [20], for instance, proposed an Active Authentication system that attempts to continuously monitor user identity after access has been initially granted. A similar approach has recently been reported by Oza and Patel [19]. Those approaches are a step towards security but do not solve other problems such as spoofing attacks.

The second category, (ii) remote authentication systems for non-enrolled users, uses two inputs: a selfie face image and an additional proof of identity. The most common proofs of identity are national ID cards and driving licences, amongst others. In this approach, the data contained in the embedded chip in an ID card can be read remotely by a Near Field Communication (NFC)-enabled mobile device and then matched with a frontal face photograph (selfie) of the person in question. Unfortunately, this approach is limited since only a few countries provide national ID cards that include embedded chips with user identity information. In countries such as Brazil, for instance, with a population of over 210 million people, the national ID card does not contain such an embedded chip. Furthermore, the ID card may vary from state to state.

In such cases, an additional challenge added to the remote authentication system is to validate the presented document as a proof of identity.

Remote authentication systems for non-enrolled users are computationally less expensive as they just match the information between the two inputs to the system. They neither require previous enrolment of the users nor store any private information.

Image Manipulation and Deep Learning Techniques

As discussed in previous sections, most 2D face-based biometric authentication systems use an image (selfie) as input information. In the case of non-enrolled users, the picture of an identification document is also required by the system. Altering a face photo or an ID document to trick an authentication system is a threat that needs to be detected in order to protect such systems and people’s identity. Spoofing can directly attack biometric systems, affecting people’s security through fake biometric data [6,17,18]. Existing antispoofing methods generally follow one of the following directions: analysing the texture of the image captured by the sensor, detecting evidence of liveness in the image [4], or combining both approaches [12,18].
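The texture-analysis direction mentioned above is often implemented with descriptors such as the local binary pattern (LBP), whose histogram is fed to a classifier that separates genuine captures from printed or replayed ones. The following is a minimal, illustrative sketch (the 3x3 neighbourhood, helper names, and the toy ‘flat’ image are assumptions, not the chapter’s method):

```python
import numpy as np

def lbp_code(patch: np.ndarray) -> int:
    """8-bit LBP code of a 3x3 patch: each neighbour is compared
    against the centre pixel and the results form one byte."""
    center = patch[1, 1]
    # Clockwise neighbour order starting at the top-left pixel.
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum((1 << i) for i, n in enumerate(neighbours) if n >= center)

def lbp_histogram(img: np.ndarray) -> np.ndarray:
    """Histogram of LBP codes over an image; this histogram is the
    texture feature a classifier would consume."""
    h, w = img.shape
    codes = [lbp_code(img[i - 1:i + 2, j - 1:j + 2])
             for i in range(1, h - 1) for j in range(1, w - 1)]
    return np.bincount(codes, minlength=256)

# A perfectly flat region (as in a low-quality print or replay) yields a
# single dominant LBP code; real skin texture spreads mass across codes.
flat = np.full((8, 8), 128, dtype=np.uint8)
print(int(np.count_nonzero(lbp_histogram(flat))))  # 1
```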

Image manipulation, on the other hand, has been a widely studied topic in the image processing and computer vision fields. Algorithms for tampering, in-painting, texture, and colour transformation, amongst others, have all been reported in the literature [30]. There are several algorithms to detect attacks on image-based biometric systems. The state-of-the-art technique for image analysis is the Convolutional Neural Network (CNN), which learns hierarchical representations of the most discriminative features directly from the data [5,12,26].

One of the first applications of CNN was perhaps the LeNet-5 network described in Ref. [15] for optical character recognition. Compared to modern deep CNNs, their network was relatively modest due to the limited computational resources of the era and the algorithmic challenges of training bigger networks. Although deeper CNN architectures (networks with more layers) have long held much potential, only recently have they become prevalent, following the dramatic increase in computational power due to the availability of Graphics Processing Units (GPUs); the amount of training data readily available on the Internet; and the development of more effective methods for training such complex models. One recent and notable example is the use of deep CNNs for image classification on the challenging ImageNet benchmark [10]. Deep CNNs have additionally been successfully applied to human pose estimation, facial key-point detection, speech recognition, and action classification, amongst others [13,25]. However, there are smaller networks, such as small-VGG [23], that represent a trade-off between a shallow and a deeper CNN.
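The shallow-versus-deep trade-off behind VGG-style designs can be made concrete by counting parameters: two stacked 3x3 convolutions cover the same 5x5 receptive field as a single 5x5 convolution, but with fewer parameters (and an extra non-linearity). A back-of-the-envelope check, where the 64-channel width is an illustrative assumption:

```python
def conv_params(k: int, c_in: int, c_out: int) -> int:
    """Parameter count of a k x k convolution layer (weights + biases)."""
    return (k * k * c_in + 1) * c_out

c = 64  # illustrative channel width
stacked_3x3 = conv_params(3, c, c) + conv_params(3, c, c)  # two 3x3 layers
single_5x5 = conv_params(5, c, c)                          # one 5x5 layer

print(stacked_3x3, single_5x5)  # 73856 102464
```

The stacked 3x3 design is cheaper despite being deeper, which is one reason compact VGG-style networks remain attractive when compute is limited.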
