Causal Fusion of Multimodal Wearable Sensor Streams for Explainable In Vivo Biomedical Diagnostics
DOI: https://doi.org/10.64252/5zgnc342

Keywords: Attention mechanisms, Biomedical diagnostics, Causal inference, Data robustness, Edge computing, Explainable AI, Multimodal fusion, Real-time processing, Wearable healthcare, Uncertainty quantification.

Abstract
This study introduces a causal fusion framework for multimodal wearable sensor data that integrates causal inference, attention-guided fusion, and uncertainty-aware decision refinement to enable explainable in vivo biomedical diagnostics. The system employs Lasso-regularized vector autoregression to generate causal graphs, which guide an attention mechanism for feature integration across heterogeneous sensor modalities. By aligning attention weights with physiological dependencies and embedding saliency-driven interpretability, the framework delivers both predictive accuracy and transparent reasoning. Empirical validation demonstrates that the proposed approach achieves 96.3% accuracy, 94.6% precision, 93.9% recall, and a 94.2% F1-score, while sustaining a low inference latency of 17.4 ms and energy efficiency of 0.82 J/inference. It also records a temporal stability score of 0.89, a causal clarity score of 0.91, and high explainability and interpretability indices of 0.94 and 0.93, respectively. Importantly, the model exhibits superior resilience, with an imputation robustness score of 0.91, maintaining diagnostic reliability under noisy or incomplete data streams. These results highlight the method's potential for real-time, personalized, and resource-constrained healthcare environments.
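To illustrate the first stage of the pipeline described above, the sketch below shows one common way a Lasso-regularized vector autoregression can be fit per sensor channel to recover a causal adjacency graph, whose nonzero lagged coefficients could then serve as priors for attention-guided fusion. This is a minimal illustration under stated assumptions, not the authors' implementation: the helper names (`build_lagged_design`, `causal_graph_from_var`), the lag order, and the regularization strength are hypothetical choices.

```python
# Minimal sketch (assumed, not the paper's code): Lasso-regularized VAR that
# estimates a causal adjacency graph from multimodal wearable sensor streams.
# Lag order p=3 and alpha=0.05 are illustrative values, not reported settings.
import numpy as np
from sklearn.linear_model import Lasso

def build_lagged_design(X, p):
    """Stack p lags of the multivariate series X (T x d) into a design matrix."""
    T, d = X.shape
    rows = [X[t - p:t][::-1].reshape(-1) for t in range(p, T)]  # lags 1..p, most recent first
    return np.asarray(rows), X[p:]

def causal_graph_from_var(X, p=3, alpha=0.05):
    """Fit one Lasso regression per target channel; the largest absolute lagged
    coefficient per source channel forms a d x d causal adjacency matrix."""
    T, d = X.shape
    Z, Y = build_lagged_design(X, p)
    A = np.zeros((d, d))  # A[i, j]: evidence that channel j causally drives channel i
    for i in range(d):
        model = Lasso(alpha=alpha, max_iter=10000).fit(Z, Y[:, i])
        coefs = model.coef_.reshape(p, d)   # rows: lag 1..p, columns: source channel
        A[i] = np.abs(coefs).max(axis=0)    # strongest influence across lags
    return A

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((500, 4))       # e.g. four synthetic sensor channels
    X[1:, 1] += 0.8 * X[:-1, 0]             # inject a lag-1 dependency: channel 0 -> channel 1
    print(np.round(causal_graph_from_var(X), 2))
```

In a sketch of this kind, the resulting adjacency matrix would be normalized and used to bias or mask cross-modal attention weights so that fused features follow the recovered physiological dependency structure.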