Volume 18, Issue 2
Convergence Analysis of OT-Flow for Sample Generation

Yang Jing & Lei Li

Numer. Math. Theor. Meth. Appl., 18 (2025), pp. 325-352.

Published online: 2025-05

  • Abstract

Deep generative models aim to learn the underlying distribution of data and to generate new samples from it. Despite the diversity of generative models and their high-quality generation performance in practice, most of them lack rigorous theoretical convergence guarantees. In this work, we establish convergence results for OT-Flow, one such deep generative model. First, by reformulating the OT-Flow framework, we establish the $\Gamma$-convergence of the OT-Flow formulation to the corresponding optimal transport (OT) problem as the regularization parameter $\alpha$ goes to infinity. Second, since the loss function is approximated by the Monte Carlo method in training, we also establish the convergence of the discrete loss function to the continuous one as the sample number $N$ goes to infinity. Meanwhile, the approximation capability of the neural network yields an upper bound on the discrete loss at the minimizers. Together, these results provide convincing assurances of the stability of OT-Flow.
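Schematically, and in our own notation rather than the paper's, the two limits studied are the following: the regularized OT-Flow functional $J_\alpha$ $\Gamma$-converges to the optimal transport cost $J_{\mathrm{OT}}$ as $\alpha \to \infty$, and the $N$-sample Monte Carlo estimate of the loss converges to its population counterpart as $N \to \infty$:

$$ J_\alpha \;\xrightarrow{\;\Gamma\;}\; J_{\mathrm{OT}} \quad (\alpha \to \infty), \qquad J_\alpha^N(\theta) \;=\; \frac{1}{N}\sum_{i=1}^{N} \ell_\alpha(x_i;\theta) \;\longrightarrow\; J_\alpha(\theta) \;=\; \mathbb{E}_{x\sim\rho_0}\!\left[\ell_\alpha(x;\theta)\right] \quad (N \to \infty), $$

where $x_1,\dots,x_N$ are i.i.d. samples from the reference distribution $\rho_0$ and $\ell_\alpha$ denotes the per-sample loss; the precise functionals are defined in the paper.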

  • Keywords

Generative models, continuous normalizing flows, OT-Flow, Benamou-Brenier functional, $\Gamma$-convergence.

  • AMS Subject Headings

49Q22, 68T07

  • Copyright

COPYRIGHT: © Global Science Press

  • BibTex
@Article{NMTMA-18-325,
  author  = {Jing, Yang and Li, Lei},
  title   = {Convergence Analysis of OT-Flow for Sample Generation},
  journal = {Numerical Mathematics: Theory, Methods and Applications},
  year    = {2025},
  volume  = {18},
  number  = {2},
  pages   = {325--352},
  issn    = {2079-7338},
  doi     = {10.4208/nmtma.OA-2024-0114},
  url     = {http://global-sci.org/intro/article_detail/nmtma/24068.html}
}