Convergence of the operator extrapolation method

Authors

V. Semenov, D. Siryk, O. Kharkov

DOI:

https://doi.org/10.15407/dopovidi2021.04.028

Keywords:

variational inequality, pseudo-monotonicity, sequential weak continuity, Bregman divergence, operator extrapolation, adaptivity, weak convergence

Abstract

One of the popular areas of modern applied nonlinear analysis is the study of variational inequalities and the development of methods for approximating their solutions. Many important problems of operations research, optimal control theory, and mathematical physics can be written in the form of variational inequalities. Non-smooth optimization problems can be solved efficiently if they are reformulated as saddle point problems and modern approximate algorithms for solving variational inequalities are applied to the resulting saddle point problems. With the advent of generative adversarial networks (GANs), strong interest in the use and study of iterative algorithms for solving variational inequalities has arisen in the machine learning community. This paper is devoted to the study of two new approximate algorithms with Bregman projection for solving variational inequalities in a Hilbert space. The first algorithm, which we call the operator extrapolation algorithm, is obtained by replacing the Euclidean metric in the Malitsky–Tam method with the Bregman divergence. An attractive feature of the algorithm is that it requires only one computation of the Bregman projection onto the feasible set per iteration. The second algorithm is an adaptive version of the first, in which the step size update rule requires neither knowledge of the Lipschitz constants nor computation of the operator values at additional points. For variational inequalities with pseudo-monotone, Lipschitz-continuous, and sequentially weakly continuous operators acting in a Hilbert space, weak convergence theorems are proved.
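
To make the structure of the method concrete, the Python sketch below illustrates the Euclidean Malitsky–Tam operator extrapolation step that the paper generalizes. This is a minimal illustration, not the paper's algorithm: the names F, project_C, lam, x0, and the toy problem are placeholders chosen here, the paper's method replaces the Euclidean projection with a Bregman projection, and the adaptive step-size rule of the second algorithm is not reproduced.

    import numpy as np

    def operator_extrapolation(F, project_C, x0, lam, n_iters=1000):
        # Euclidean sketch of the Malitsky-Tam operator extrapolation step:
        #   x_{k+1} = P_C( x_k - lam * (2*F(x_k) - F(x_{k-1})) ),
        # which evaluates F and the projection only once per iteration.
        # For an L-Lipschitz monotone F, lam is typically taken below 1/(2L).
        x_prev = x0
        F_prev = F(x_prev)
        x = project_C(x_prev - lam * F_prev)  # plain projected step to initialize
        for _ in range(n_iters):
            F_cur = F(x)
            x_next = project_C(x - lam * (2.0 * F_cur - F_prev))  # one projection per step
            x_prev, F_prev, x = x, F_cur, x_next
        return x

    # Toy usage: F(x) = A x with a skew-symmetric (monotone, 1-Lipschitz) matrix,
    # feasible set C = closed unit ball, so project_C is the radial projection.
    A = np.array([[0.0, 1.0], [-1.0, 0.0]])
    F = lambda x: A @ x
    project_C = lambda x: x / max(1.0, np.linalg.norm(x))
    x_approx = operator_extrapolation(F, project_C, np.array([1.0, 1.0]), lam=0.2, n_iters=2000)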

References

Nemirovski, A. (2004). Prox-method with rate of convergence O(1/T) for variational inequalities with Lipschitz continuous monotone operators and smooth convex-concave saddle point problems. SIAM J. Optimization, 15, pp. 229-251. https://doi.org/10.1137/S1052623403425629

Gidel, G., Berard, H., Vincent, P. & Lacoste-Julien, S. (2018). A Variational Inequality Perspective on Generative Adversarial Networks. arXiv preprint arXiv:1802.10551.

Korpelevich, G. M. (1976). An extragradient method for finding saddle points and for other problems. Matecon, 12, No. 4, pp. 747-756.

Tseng, P. (2000). A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control and Optimization, 38, pp. 431-446. https://doi.org/10.1137/S0363012998338806

Verlan, D. A., Semenov, V. V., Chabak, L. M. (2015). A Strongly Convergent Modified Extragradient Method for Variational Inequalities with Non-Lipschitz Operators. J. Automation and Inform. Sci., 47, No. 7, pp. 31-46. https://doi.org/10.1615/JAutomatInfScien.v47.i7.40

Nesterov, Yu. (2007). Dual extrapolation and its applications to solving variational inequalities and related problems. Math. Programming, 109, Iss. 2-3, pp. 319-344. https://doi.org/10.1007/s10107-006-0034-z

Denisov, S. V., Semenov, V. V., Stetsyuk, P. I. (2019). Bregman Extragradient Method with Monotone Rule of Step Adjustment. Cybern. Syst. Analysis, 55, Iss. 3, pp. 377-383. https://doi.org/10.1007/s10559-019-00144-5

Beck, A. (2017). First-Order Methods in Optimization. Philadelphia: SIAM, 479 p. https://doi.org/10.1137/1.9781611974997

Popov, L. D. (1980). A modification of the Arrow-Hurwicz method for search of saddle points. Mathematical notes of the Academy of Sciences of the USSR, 28, Iss. 5, pp. 845-848. https://doi.org/10.1007/BF01141092

Malitsky, Yu. V. & Semenov, V. V. (2014). An Extragradient Algorithm for Monotone Variational Inequalities. Cybern. Syst. Analysis, 50, Iss. 2, pp. 271-277. https://doi.org/10.1007/s10559-014-9614-8

Lyashko, S. I. & Semenov, V. V. (2016). A New Two-Step Proximal Algorithm of Solving the Problem of Equilibrium Programming. In: Goldengorin, B. (ed.) Optimization and Its Applications in Control and Data Sciences. Springer Optimization and Its Applications, vol. 115. Springer, pp. 315-325. https://doi.org/10.1007/978-3-319-42056-1_10

Semenov, V. V. (2017). A Version of the Mirror Descent Method to Solve Variational Inequalities. Cybern. Syst. Analysis, 53, Iss. 2, pp. 234-243. https://doi.org/10.1007/s10559-017-9923-9

Chabak, L., Semenov, V. & Vedel, Y. (2019). A New Non-Euclidean Proximal Method for Equilibrium Problems. In: Chertov, O., Mylovanov, T., Kondratenko, Y., Kacprzyk, J., Kreinovich, V., Stefanuk, V. (eds.) Recent Developments in Data Science and Intelligent Analysis of Information. ICDSIAI 2018. Advances in Intelligent Systems and Computing, vol. 836. Springer, Cham, pp. 50-58. https://doi.org/10.1007/978-3-319-97885-7_6

Malitsky, Y. & Tam, M. K. (2020). A Forward-Backward Splitting Method for Monotone Inclusions Without Cocoercivity. SIAM J. Optimization, 30, No. 2, pp. 1451-1472. https://doi.org/10.1137/18M1207260

Csetnek, E. R., Malitsky, Y. & Tam, M. K. (2019). Shadow Douglas-Rachford Splitting for Monotone Inclusions. Appl. Math. Optim., 80, pp. 665-678. https://doi.org/10.1007/s00245-019-09597-8

Published

26.08.2021

How to Cite

Semenov, V., Siryk, D., & Kharkov, O. (2021). Convergence of the operator extrapolation method. Reports of the National Academy of Sciences of Ukraine, (4), 28–35. https://doi.org/10.15407/dopovidi2021.04.028

Section

Information Science and Cybernetics
