NIST scientists make advances in automated fingerprint analysis
16 August 2017 13:47 GMT

Scientists from the National Institute of Standards and Technology (NIST) and Michigan State University have developed an algorithm that automates a key step in the fingerprint analysis process.

The research, titled "Latent Fingerprint Value Prediction: Crowd-based Learning" and published in IEEE Transactions on Information Forensics and Security, shows that the team was able to reduce the human factor in crime scene fingerprinting.

One of the first steps in manual latent processing is for a fingerprint examiner to perform triage by assigning one of three values to a query latent: Value for Individualization (VID), Value for Exclusion Only (VEO) or No Value (NV).
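The triage step can be pictured as mapping an overall print-quality assessment to one of these three categories. The sketch below is purely illustrative: the thresholds and the single `quality_score` input are assumptions for demonstration, not the study's actual learned model, which is trained on crowdsourced examiner labels.

```python
# Hypothetical triage: map a latent print quality score in [0, 1]
# to one of the three standard value categories. The thresholds
# here are illustrative placeholders only.

def triage_latent(quality_score: float) -> str:
    """Assign VID, VEO, or NV based on an overall quality score."""
    if quality_score >= 0.7:
        return "VID"  # Value for Individualization
    if quality_score >= 0.4:
        return "VEO"  # Value for Exclusion Only
    return "NV"       # No Value

print(triage_latent(0.85))  # VID
print(triage_latent(0.50))  # VEO
print(triage_latent(0.10))  # NV
```

In practice the decision boundary would come from a classifier trained on many latents, not hand-picked cutoffs.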

However, latent value determination by examiners is known to be subjective, resulting in large intra-examiner and inter-examiner variations. Furthermore, in spite of the guidelines available, the underlying bases that examiners implicitly use for value determination are unknown.

“We know that when humans analyze a crime scene fingerprint, the process is inherently subjective,” said Elham Tabassi, a computer engineer at NIST and a co-author of the study. “By reducing the human subjectivity, we can make fingerprint analysis more reliable and more efficient.”

If all fingerprints were high-quality, matching them would be a breeze. For instance, computers can easily match two sets of rolled prints—those that are collected under controlled conditions, as when you roll all 10 fingers onto a fingerprint card or scanner.

“But at a crime scene, there’s no one directing the perpetrator on how to leave good prints,” said Anil Jain, a computer scientist at Michigan State University and a co-author of the study. As a result, fingerprints left at a crime scene—so-called latent prints—are often partial, distorted and smudged. Also, if the print is left on something with a confusing background pattern such as a twenty-dollar bill, it may be difficult to separate the print from the background.

That’s why, when an examiner receives latent prints from a crime scene, their first step is to judge how much useful information they contain.

“This first step is standard practice in the forensic community,” said Jain. “This is the step we automated.”

Following that step, if the print contains sufficient usable information, it can be submitted to an Automated Fingerprint Identification System. The AFIS (pronounced AY-fiss) then searches its database and returns a list of potential matches, which the examiner evaluates to look for a conclusive match.
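The AFIS search described above can be sketched as a ranked nearest-neighbour lookup. This is a toy illustration under stated assumptions: real AFIS matchers compare minutiae features with specialized algorithms, whereas here prints are stand-in numeric vectors and similarity is a simple negative Euclidean distance.

```python
# Illustrative AFIS-style search: compare a query print's feature
# vector against a database and return a ranked candidate list
# for the examiner to review. Feature vectors and the similarity
# measure are placeholders, not a real minutiae matcher.

def afis_search(query, database, top_k=3):
    """Return the top_k subject IDs ranked by similarity to the query."""
    def similarity(a, b):
        # Toy similarity: negative Euclidean distance.
        return -sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    ranked = sorted(database.items(),
                    key=lambda item: similarity(query, item[1]),
                    reverse=True)
    return [subject_id for subject_id, _ in ranked[:top_k]]

db = {"subject_A": [0.90, 0.10, 0.40],
      "subject_B": [0.20, 0.80, 0.50],
      "subject_C": [0.88, 0.12, 0.41]}
print(afis_search([0.90, 0.10, 0.40], db, top_k=2))
# ['subject_A', 'subject_C']
```

The examiner then reviews this candidate list by hand, which is why submitting only sufficiently informative prints matters.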

But the initial decision on fingerprint quality is critical.

“If you submit a print to AFIS that does not have sufficient information, you’re more likely to get erroneous matches,” Tabassi said. On the other hand, “If you don’t submit a print that actually does have sufficient information, the perpetrator gets off the hook.”

Currently, the process of judging print quality is subjective, and different examiners come to different conclusions. Automating that step makes the results consistent. “That means we will be able to study the errors and find ways to fix them over time,” Tabassi said.

The main conclusion of the study is that crowdsourced latent value is a more robust predictor of AFIS performance than the prevailing value determinations (VID, VEO and NV) or latent fingerprint image quality (LFIQ) measures.
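One simple way to picture a crowdsourced value is to pool the determinations of several examiners into a single consensus score. The ordinal mapping and plain averaging below are assumptions for illustration, not the paper's exact aggregation method.

```python
# Hedged sketch: aggregate multiple examiners' value labels for
# the same latent into one consensus score. Mapping NV/VEO/VID to
# 0/1/2 and averaging is an illustrative choice, not the study's
# published scheme.

VALUE_SCALE = {"NV": 0, "VEO": 1, "VID": 2}

def crowd_value(labels):
    """Mean ordinal value of a set of examiner labels."""
    return sum(VALUE_SCALE[label] for label in labels) / len(labels)

print(crowd_value(["VID", "VID", "VEO", "NV"]))  # (2+2+1+0)/4 = 1.25
```

A continuous consensus score like this smooths out individual disagreement, which is one intuition for why a crowd-based value could predict AFIS performance better than any single examiner's three-way call.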