'Masterprints' used to hack fingerprint systems
16 November 2018 09:26 GMT

NYU researchers have demonstrated the vulnerability of fingerprint recognition systems to dictionary attacks based on 'masterprints'.

MasterPrints are real or synthetic fingerprints that can fortuitously match a large number of fingerprints, thereby undermining the security afforded by fingerprint-based systems.

While the fingerprints in the database used by the researchers had a one-in-1,000 chance of falsely matching a random fingerprint, the MasterPrints the team generated falsely matched one out of every five fingerprints.
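To see why that gap matters for a dictionary attack, a back-of-the-envelope calculation helps. The sketch below is illustrative only (it is not from the paper, and it assumes each impostor comparison is an independent trial, which real matchers only approximate):

```python
def match_probability(fmr, attempts):
    """Probability that at least one of `attempts` impostor
    comparisons produces a false match, assuming independence."""
    return 1 - (1 - fmr) ** attempts

# A single try with a random fingerprint at a 1/1,000 false match rate:
p_random = match_probability(1 / 1000, 1)      # 0.001

# A single try with one MasterPrint at a 1/5 false match rate:
p_master = match_probability(1 / 5, 1)         # 0.2

# A small dictionary of five MasterPrints tried in sequence:
p_dictionary = match_probability(1 / 5, 5)     # ~0.67
```

Under these assumptions, a handful of MasterPrints tried against one subject already gives better-than-even odds of a false match, which is the essence of the dictionary attack.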

Previous work by Roy et al. generated synthetic MasterPrints at the feature level. In the new work, a team - including Philip Bontrager, Aditi Roy, Julian Togelius, Nasir Memon, and MSU expert Arun Ross - generated complete image-level MasterPrints known as DeepMasterPrints, whose attack accuracy is far superior to that of previous methods.

The proposed method, referred to as Latent Variable Evolution, is based on training a Generative Adversarial Network on a set of real fingerprint images.

Stochastic search, in the form of the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), is then used to find latent input variables to the generator network that maximize the number of impostor matches, as assessed by a fingerprint recognizer.
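The overall loop can be sketched in a few lines. Note the heavy hedging: the GAN generator and the fingerprint recognizer are stubbed out with toy stand-ins here, and a simple (1+λ) evolution strategy is used as a simplified substitute for the CMA-ES the paper actually employs - only the shape of the search is preserved:

```python
import random

LATENT_DIM = 8  # toy latent size; the paper's generator is far larger

def generator(z):
    """Stand-in for the pre-trained GAN generator: maps a latent
    vector to a 'fingerprint image' (here just a tuple of values)."""
    return tuple(v * v for v in z)

def count_impostor_matches(image, enrolled):
    """Stand-in for the fingerprint recognizer: counts how many
    enrolled templates the image falsely matches, using a toy
    squared-distance threshold in place of minutiae matching."""
    return sum(
        1 for template in enrolled
        if sum((a - b) ** 2 for a, b in zip(image, template)) < 2.0
    )

def evolve_masterprint(enrolled, generations=200, lam=10, sigma=0.3, seed=0):
    """Search the generator's latent space for a vector whose output
    matches as many enrolled templates as possible (the LVE idea)."""
    rng = random.Random(seed)
    best = [rng.gauss(0, 1) for _ in range(LATENT_DIM)]
    best_score = count_impostor_matches(generator(best), enrolled)
    for _ in range(generations):
        for _ in range(lam):
            # Mutate the current best latent vector and keep any
            # candidate that fools the recognizer more often.
            candidate = [v + rng.gauss(0, sigma) for v in best]
            score = count_impostor_matches(generator(candidate), enrolled)
            if score > best_score:
                best, best_score = candidate, score
    return best, best_score
```

The key design point, reflected even in this toy version, is that the search never touches pixels directly: it optimizes the generator's latent input, so every candidate remains a plausible fingerprint image by construction.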

Experiments demonstrate the efficacy of the proposed method in generating DeepMasterPrints. The team said the underlying method is likely to have broad applications in fingerprint security as well as fingerprint synthesis.

DeepMasterPrints can be used to spoof a system that requires fingerprint authentication without any information about the target user's fingerprints. As the paper noted about the application of the fake prints:

"Therefore, they can be used to launch a dictionary attack against a specific subject that can compromise the security of a fingerprint-based recognition system".