What is the Fingerprint Verification Competition (FVC)?
The FVC (Fingerprint Verification Competition) is the world's largest competition for fingerprint verification algorithms. It is organized every two years by the University of Bologna (Italy), San Jose State University (USA), Michigan State University (USA), and the Universidad Autónoma de Madrid (Spain). The last FVC took place in the fall of 2006, with seventy algorithms submitted by fifty-three industry and academic participants. FVC runs the algorithms over four databases of fingerprint images from different sources and computes a set of biometric parameters describing their accuracy and performance. Each of the four image databases has distinct quality and characteristics depending on the source sensor type. The FVC 2006 databases were obtained from the following sources:
Figure 1 shows an example image from each database. Note the diversity in size, background, gray-level variation, contrast, and ridge (papillary) properties.
FVC has two categories: Open and Light. The Light category imposes strict limits on fingerprint processing time, memory usage, and template size. The Open category essentially limits only the fingerprint processing time, to what is viable for a commercial algorithm.
How is FVC organized and scheduled?
The FVC competition begins with a call for participants. On registration, each participant receives a confidential identifier for its algorithms. The Griaule Biometrics algorithm identifier is P066.
After registration, participants receive a sample set of images from each database. The samples let participants refine and tune their algorithms' performance; they are not included in the final contest. The images in figure 1 belong to the database samples.
Participants then submit their algorithms, which are tested over several months. When all the results are compiled, participants are confidentially informed of their algorithms' performance. Finally, they must decide whether or not they want to publish their names together with their algorithms' performance. If a participant decides to remain anonymous, the label "Anonymous organization" is used and the real identity is not revealed.
What are the evaluation metrics in FVC? What is the difference between average results and database results?
FVC publishes two types of results: average results and database results. Average results are computed over all the databases together; database results are computed for each database separately.
Average results evaluate an algorithm's overall robustness, because they are computed over images from sensors of diverse quality and characteristics (figure 1). They describe how well an algorithm solves the fingerprint verification problem universally, i.e., what the impact on its performance is when the fingerprint sensor is changed. Average results are the most important metrics for the academic and industrial fingerprint recognition community.
Database results evaluate how well an algorithm solves the fingerprint verification problem for each type of sensor separately. They are the basis for computing the average results.
What is the main evaluation metric in FVC?
The most important metric for fingerprint algorithm evaluation is the Equal Error Rate (EER). It estimates the probability that the algorithm makes a mistake when deciding whether two fingerprint images belong to the same finger, i.e., the verification error rate. The FVC database results include the EER for each database.
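To make the EER concrete, here is a minimal sketch, not the official FVC evaluation protocol: given hypothetical similarity scores for genuine (same-finger) and impostor (different-finger) comparisons, it sweeps the decision threshold and returns the error rate where the false acceptance rate (FAR) and false rejection rate (FRR) are closest to equal. All scores below are invented for illustration.

```python
# Illustrative sketch only (not the official FVC protocol): estimating
# the Equal Error Rate (EER) from two lists of comparison scores, where
# a higher score means the two fingerprints look more alike.
def equal_error_rate(genuine_scores, impostor_scores):
    """Sweep the decision threshold and return the error rate at the
    point where FAR (false acceptance) and FRR (false rejection) are
    closest to equal."""
    best_gap, eer = None, None
    for t in sorted(set(genuine_scores) | set(impostor_scores)):
        # FAR: impostor pairs wrongly accepted (score at or above threshold)
        far = sum(s >= t for s in impostor_scores) / len(impostor_scores)
        # FRR: genuine pairs wrongly rejected (score below threshold)
        frr = sum(s < t for s in genuine_scores) / len(genuine_scores)
        if best_gap is None or abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer

# Toy scores, invented for the example
genuine = [0.9, 0.8, 0.75, 0.6, 0.55]   # same-finger comparisons
impostor = [0.65, 0.5, 0.4, 0.35, 0.2]  # different-finger comparisons
print(equal_error_rate(genuine, impostor))  # 0.2, i.e. a 20% EER
```

A lower EER means a better trade-off between falsely accepting impostors and falsely rejecting genuine users; at the EER threshold both error rates coincide.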
The accuracy of a fingerprint algorithm is very sensitive to the sensor because of the diversity of properties of the source images. Therefore, an algorithm can achieve an excellent EER on one sensor but perform very poorly on another.
Consequently, the Average Equal Error Rate (AvgEER) is the average result that evaluates the overall accuracy of an algorithm. It describes the mean verification error rate over the four image databases taken together. The AvgEER is therefore a metric of the robustness and stability of an algorithm across different sensors, i.e., it shows the algorithm's capacity to adapt to differences between images and to keep its accuracy across the widest range of fingerprint image sources.
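The relationship between the per-database EERs and the AvgEER can be sketched as a simple mean. The per-database values below are invented for illustration and are not actual FVC 2006 results:

```python
# Hypothetical per-database EERs, in percent (numbers invented for
# illustration; they are not actual FVC 2006 results).
db_eer = {"DB1": 5.5, "DB2": 0.1, "DB3": 1.5, "DB4": 0.3}

# AvgEER is the mean of the four per-database EERs.
avg_eer = sum(db_eer.values()) / len(db_eer)
print(round(avg_eer, 2))  # 1.85
```

Note how a single difficult database (DB1 here) can dominate the average, which is why the AvgEER rewards algorithms that stay accurate across all sensor types rather than excelling on just one.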
The AvgEER is the most important metric evaluated in FVC and is the criterion used to rank participants in the FVC results. The main goal of academic and industrial researchers is to improve the AvgEER of their algorithms.
The Griaule Biometrics fingerprint recognition algorithm (P066) achieved first place in AvgEER at FVC 2006.