# Publications of Boikanyo, Oganeditse A.

Strong convergence of the method of alternating resolvents

Abstract. In this paper, we present a generalization of the method of alternating resolvents introduced in the authors' previous paper [On the method of alternating resolvents, Nonlinear Anal. 74 (2011), 5147-5160]. It is shown that the sequence generated by this method converges strongly under weaker conditions on the control parameters. Concerning the error sequences, a much wider range of conditions is considered here than in the above-quoted paper.

Multi parameter proximal point algorithms

The aim of this paper is to prove a strong convergence result for an algorithm introduced by Y. Yao and M. A. Noor in 2008 under a new condition on one of the parameters involved. Further, convergence properties of a generalized proximal point algorithm which was introduced in [5] are analyzed. The results in this paper are proved under the general condition that errors tend to zero in norm. These results extend and improve several previous results on the regularization method and the proximal point algorithm.

A generalization of the regularization proximal point method

This paper deals with the generalized regularization proximal point method which was introduced by the authors in [Four parameter proximal point algorithms, Nonlinear Anal. 74 (2011), 544-555]. It is shown that sequences generated by it converge strongly under minimal assumptions on the control parameters involved. Thus the main result of this paper unifies several results related to the prox-Tikhonov method, the contraction proximal point algorithm and/or the regularization method as well as some results of the above quoted paper.

The method of alternating resolvents revisited

The purpose of this article is to prove a strong convergence result associated with a generalization of the method of alternating resolvents introduced by the authors in "Strong convergence of the method of alternating resolvents" [4] under minimal assumptions on the control parameters involved. Thus, this article represents a significant improvement of the article mentioned above.

A contraction proximal point algorithm with two monotone operators

It is a known fact that the method of alternating projections introduced long ago by von Neumann fails to converge strongly for two arbitrary nonempty, closed and convex subsets of a real Hilbert space. In this paper, a new iterative process for finding common zeros of two maximal monotone operators is introduced and strong convergence results associated with it are proved. For the case when the two operators are subdifferentials of indicator functions, this new algorithm coincides with the old method of alternating projections. Several other important algorithms, such as the contraction proximal point algorithm, occur as special cases of our algorithm. Hence our main results generalize and unify many results that occur in the literature.
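As the abstract notes, when the two operators are subdifferentials of indicator functions of closed convex sets, their resolvents reduce to metric projections and the algorithm contains von Neumann's method of alternating projections as a special case. A minimal sketch of that special case, using two illustrative sets of our own choosing (the x-axis and the line y = x in R^2, whose intersection is the origin), not taken from the paper:

```python
# Sketch: for indicator-function subdifferentials, the resolvent J_b of each
# operator is the metric projection onto the corresponding set, so alternating
# resolvents becomes von Neumann's alternating projections.
# Illustrative sets (our assumption): the x-axis and the line y = x in R^2.

def project_x_axis(p):
    """Metric projection onto the x-axis {(x, 0)}."""
    x, y = p
    return (x, 0.0)

def project_diagonal(p):
    """Metric projection onto the line {(t, t)}."""
    x, y = p
    m = (x + y) / 2.0
    return (m, m)

def alternating_projections(p0, cycles=60):
    p = p0
    for _ in range(cycles):
        p = project_diagonal(project_x_axis(p))
    return p

p = alternating_projections((1.0, 3.0))
print(p)  # iterates approach the common point (0, 0)
```

Here each resolvent step is computed exactly; the choice of sets and the fixed cycle count are assumptions made for illustration only.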

Four parameter proximal point algorithms

Several strong convergence results involving two distinct four-parameter proximal point algorithms are proved under different sets of assumptions on these parameters and the general condition that the error sequence converges to zero in norm. Thus our results address the two important problems related to the proximal point algorithm: that of strong convergence (instead of weak convergence) and that of acceptable errors. One of the algorithms discussed was introduced by Yao and Noor (2008) [7], while the other one is new and is a generalization of the regularization method initiated by Lehdili and Moudafi (1996) [9] and later developed by Xu (2006) [8]. The new algorithm is also well suited to estimating the convergence rate of a sequence that approximates minimum values of certain functionals. Although these algorithms are distinct, it turns out that in a particular case they are equivalent. The results of this paper extend and generalize several existing ones in the literature.

Inexact Halpern-type proximal point algorithm

We present several strong convergence results for the modified Halpern-type proximal point algorithm x_{n+1} = a_n u + (1 − a_n) J_{b_n} x_n + e_n (n = 0, 1, …; u, x_0 given, and J_{b_n} = (I + b_n A)^{−1} for a maximal monotone operator A) in a real Hilbert space, under new sets of conditions on a_n and b_n. These conditions are weaker than those known to us, and our results extend and improve some recent results such as those of H. K. Xu. We also show how to apply our results to approximate minimizers of convex functionals. In addition, we give convergence rate estimates for a sequence approximating the minimum value of such a functional.
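The iteration displayed above can be sketched concretely. The fragment below is an illustrative instance only: we take the maximal monotone operator A(x) = x on the real line, whose resolvent is J_b(x) = x/(1 + b), choose the admissible parameters a_n = 1/(n + 2) and b_n = 1, and set the errors e_n = 0; none of these particular choices come from the paper.

```python
# Sketch of the Halpern-type proximal point iteration
#   x_{n+1} = a_n * u + (1 - a_n) * J_{b_n}(x_n) + e_n
# for the illustrative operator A(x) = x, whose resolvent is
# J_b = (I + b*A)^{-1}, i.e. J_b(x) = x / (1 + b).

def resolvent(x, b):
    """Resolvent J_b(x) = x / (1 + b) for A(x) = x on the real line."""
    return x / (1.0 + b)

def halpern_ppa(u, x0, steps=200):
    x = x0
    for n in range(steps):
        a_n = 1.0 / (n + 2)   # a_n -> 0 with divergent sum (one admissible choice)
        b_n = 1.0             # bounded away from zero
        x = a_n * u + (1.0 - a_n) * resolvent(x, b_n)  # exact step: e_n = 0
    return x

# The unique zero of A(x) = x is 0, so the iterates should approach 0.
print(halpern_ppa(u=1.0, x0=5.0))
```

The parameter schedule above is just one sequence satisfying the usual conditions; the paper's point is precisely that weaker conditions on a_n and b_n still yield strong convergence.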

On the method of alternating resolvents

The work of H. Hundal (Nonlinear Anal. 57 (2004), 35-61) has revealed that the sequence generated by the method of alternating projections converges weakly, but not strongly in general. This paper seeks to design strongly convergent algorithms by means of alternating the resolvents of two maximal monotone operators, A and B, that can be used to approximate common zeroes of A and B. In particular, methods of alternating projections which generate sequences that converge strongly are obtained. A particular case of such algorithms enables one to approximate minimum values of certain convex functionals under less restrictive conditions on the regularization parameters involved.

A proximal point algorithm converging strongly for general errors

In this paper a proximal point algorithm (PPA) for maximal monotone operators with appropriate regularization parameters is considered. A strong convergence result for the PPA is stated and proved under the general condition that the error sequence tends to zero in norm. Note that Rockafellar (SIAM J Control Optim 14:877–898, 1976) assumed summability of the error sequence to derive weak convergence of the PPA in its initial form, and this restrictive condition on errors has been used extensively so far for different versions of the PPA. Thus this Note provides a solution to a long-standing open problem and, in particular, offers new possibilities towards the approximation of the minimum points of convex functionals.
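A hedged sketch of the error condition emphasized here: an inexact proximal step x_{n+1} = J_{c_n}(x_n) + e_n run with an error sequence that tends to zero in norm but is not summable, e.g. e_n = 1/(n + 2). The operator A(x) = x and all parameter choices below are our illustrative assumptions, not the paper's.

```python
# Sketch of an inexact proximal point step x_{n+1} = J_{c_n}(x_n) + e_n
# with errors that tend to zero in norm but are NOT summable, the general
# error condition highlighted in the abstract (vs. Rockafellar's summability).
# Illustrative operator (our assumption): A(x) = x, so J_c(x) = x / (1 + c).

def inexact_ppa(x0, steps=2000):
    x = x0
    for n in range(steps):
        c_n = 1.0               # regularization parameter (illustrative)
        e_n = 1.0 / (n + 2)     # non-summable error, but e_n -> 0
        x = x / (1.0 + c_n) + e_n
    return x

print(inexact_ppa(5.0))  # iterates slowly approach the zero of A, i.e. 0
```

With summable errors the classical theory already applies; the point of this toy run is that the iterates still drift toward the zero of A even though the errors here only vanish in norm.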