Optimal Transport Theory in Machine Learning: Applications To Generative Modelling and Domain Adaptation
DOI: https://doi.org/10.64252/zk3nx611

Keywords: Optimal Transport, Generative Modeling, Domain Adaptation, Distribution Alignment, Wasserstein Distance, Machine Learning

Abstract
Optimal Transport (OT) theory has emerged as a powerful mathematical framework in machine learning, particularly for problems involving distribution alignment and transformation. This paper explores the integration of OT into two major application domains: generative modeling and domain adaptation. In generative models, OT facilitates learning mappings between latent and data distributions, enhancing model expressiveness and stability. In domain adaptation, OT aligns feature distributions across source and target domains, thereby improving generalization in non-i.i.d. settings. We provide a comprehensive review of recent advancements, present key algorithmic formulations, and highlight empirical benchmarks demonstrating the superiority of OT-based approaches over traditional divergence measures. Furthermore, we discuss computational challenges and scalability solutions such as entropic regularization and sliced OT. Through theoretical insights and experimental evidence, this study emphasizes OT’s critical role in bridging geometric reasoning with statistical learning, opening new directions for interpretable and principled machine learning algorithms.
