An Optimized Feature-Level Fusion Framework for Multimodal Biometric Authentication Using ML Classifiers
DOI: https://doi.org/10.64252/jmz3x190

Keywords: Multimodal biometric authentication, Feature-level fusion, Machine learning classifiers, Face, iris, and fingerprint recognition, Biometric security

Abstract
Ensuring secure and accurate identity verification remains a central challenge in biometric authentication systems, particularly when they rely on unimodal inputs. This paper proposes a novel hybrid multimodal biometric authentication framework that integrates facial, fingerprint, and iris modalities to enhance performance, security, and robustness. Unlike previous works that rely on isolated biometric traits, the proposed system uses a custom-compiled dataset that combines two publicly available sources (one providing face and iris data, the other fingerprint images), creating a rich multimodal input space. To optimize feature representation, feature-level fusion is applied, enabling the system to learn complementary biometric patterns across modalities. Machine learning classifiers, including Support Vector Machines (SVM), Random Forest, and K-Nearest Neighbors (KNN), are evaluated, with Optuna-based hyperparameter tuning employed to maximize predictive performance. Experimental results show that the SVM and Random Forest classifiers achieve the highest accuracies (98.28% and 97.24%, respectively), outperforming unimodal models in both recognition accuracy and robustness to environmental variations. This study establishes a scalable and resilient framework for multimodal biometric verification and offers insights into the synergy of data fusion and intelligent optimization in biometric security.
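
For illustration only, the following minimal sketch (not the authors' implementation) shows how feature-level fusion by concatenation and Optuna-based hyperparameter tuning of an SVM could be set up; the feature matrices, dimensions, label counts, and search ranges below are assumed placeholders rather than the study's actual settings.

    # Minimal sketch (assumed, not the authors' code): feature-level fusion by
    # concatenating per-modality feature vectors, then Optuna-tuned SVM evaluation.
    # face_feats, iris_feats, finger_feats are placeholder matrices standing in
    # for features extracted from face, iris, and fingerprint images.
    import numpy as np
    import optuna
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n_samples = 200
    face_feats = rng.normal(size=(n_samples, 128))    # placeholder face features
    iris_feats = rng.normal(size=(n_samples, 64))     # placeholder iris features
    finger_feats = rng.normal(size=(n_samples, 96))   # placeholder fingerprint features
    labels = rng.integers(0, 10, size=n_samples)      # placeholder identity labels

    # Feature-level fusion: concatenate modality features into one vector per sample.
    fused = np.hstack([face_feats, iris_feats, finger_feats])

    def objective(trial):
        # Optuna searches the SVM hyperparameter space to maximize cross-validated accuracy.
        c = trial.suggest_float("C", 1e-2, 1e2, log=True)
        gamma = trial.suggest_float("gamma", 1e-4, 1.0, log=True)
        clf = make_pipeline(StandardScaler(), SVC(C=c, gamma=gamma, kernel="rbf"))
        return cross_val_score(clf, fused, labels, cv=3).mean()

    study = optuna.create_study(direction="maximize")
    study.optimize(objective, n_trials=30)
    print("Best CV accuracy:", study.best_value, "Best params:", study.best_params)

The same fused feature matrix can be passed to Random Forest or KNN classifiers by swapping the estimator inside the objective function, which is how the classifier comparison described above would typically be organized.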