A New Projection Method for Variational Inequality Problems

Type: Article
Publication Date: 1999-01-01
Citations: 518
DOI: https://doi.org/10.1137/s0363012997317475

Abstract

We propose a new projection algorithm for solving the variational inequality problem, where the underlying function is continuous and satisfies a certain generalized monotonicity assumption (e.g., it can be pseudomonotone). The method is simple and admits a nice geometric interpretation. It consists of two steps. First, we construct an appropriate hyperplane which strictly separates the current iterate from the solutions of the problem. This procedure requires a single projection onto the feasible set and employs an Armijo-type linesearch along a feasible direction. Then the next iterate is obtained as the projection of the current iterate onto the intersection of the feasible set with the halfspace containing the solution set. Thus, in contrast with most other projection-type methods, only two projection operations per iteration are needed. The method is shown to be globally convergent to a solution of the variational inequality problem under minimal assumptions. Preliminary computational experience is also reported.
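The two-step scheme described in the abstract can be sketched compactly. The following is a hypothetical, simplified Python rendering, not the authors' code: it uses the common variant that steps from the current iterate to the separating hyperplane and then projects back onto the feasible set (rather than projecting onto their exact intersection), and the operator `F`, projector `proj_C`, and parameter values are illustrative assumptions.

```python
import numpy as np

def solve_vi(F, proj_C, x0, sigma=0.3, gamma=0.5, max_iter=500, tol=1e-8):
    """Simplified two-step hyperplane-projection sketch for VI(F, C)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        z = proj_C(x - F(x))                 # single projection onto the feasible set
        r = x - z                            # natural residual; zero exactly at a solution
        if np.linalg.norm(r) < tol:
            break
        # Armijo-type linesearch along the feasible direction x -> z;
        # terminates since <F(x), r> >= ||r||^2 by the projection property
        eta = 1.0
        while F(x - eta * r) @ r < sigma * (r @ r):
            eta *= gamma
        y = x - eta * r                      # point defining the separating hyperplane
        Fy = F(y)                            # assumed nonzero away from solutions
        # step onto the hyperplane {u : <F(y), u - y> = 0}, then back onto C
        x = proj_C(x - (Fy @ (x - y)) / (Fy @ Fy) * Fy)
    return x
```

For instance, with F(x) = x over the box C = {x : x >= 1}, the iterates approach the unique solution (1, 1).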

Locations

  • CiteSeerX (The Pennsylvania State University)
  • SIAM Journal on Control and Optimization

In this paper, a double projection algorithm for solving pseudomonotone variational inequalities is introduced. Based on an Armijo-type line search procedure, a new class of hyperplanes is constructed which strictly separate the current iterate from the solution set of the variational inequality. Using this separation property, the new algorithm is proved to be globally convergent under very mild assumptions. Numerical experiments show that the new algorithm is effective.
In this paper, we propose a new incremental constraint projection algorithm for solving variational inequalities, where the underlying function is monotone plus and Lipschitz continuous. The algorithm consists of two steps. In the first step, we compute a predictor point. This procedure requires a single random projection onto some set and employs an Armijo-type linesearch along a feasible direction. In the second step, an iterate is obtained as the random projection of some point onto the set used in the first step. The incremental constraint projection algorithm is considered for random selection and for cyclic selection of the samples; accordingly, it is named the random projection algorithm or the cyclic projection algorithm, respectively. Both the random and cyclic projection methods are shown to be globally convergent to a solution of the variational inequality problem in the almost sure sense. We provide some computational experiments and compare the efficiency of the random and cyclic projection methods with some known algorithms.
We introduce and study the convergence properties of a projection-type algorithm for solving the variational inequality problem for point-to-set operators. No monotonicity assumption is used in our analysis. The operator defining the problem is only assumed to be continuous in the point-to-set sense, i.e., inner- and outer-semicontinuous. Additionally, we assume non-emptiness of the so-called dual solution set. We prove that the whole sequence of iterates converges to a solution of the variational inequality. Moreover, we provide numerical experiments illustrating the behavior of our iterates. Through several examples, we provide a comparison with a recent similar algorithm.
A plethora of applications from mathematical programming, such as minimax, penalization and fixed point problems, can be framed as variational inequality problems. Most methods used to solve such problems are iterative; for this reason, in this paper we introduce a new extragradient-like method to solve pseudomonotone variational inequalities in a real Hilbert space. The proposed method has the advantage of a variable step size rule that is updated at each iteration based on previous iterations. The main advantage of this method is that it operates without prior knowledge of the Lipschitz constant of the operator. A strong convergence theorem for the proposed method is proved under mild conditions on an operator G. Numerical experiments are presented to validate the numerical performance of the proposed method and to compare it with existing methods.
In this paper, a projection-type approximation method is introduced for solving a variational inequality problem. The proposed method involves only one projection per iteration, and the underlying operator is pseudo-monotone and L-Lipschitz-continuous. The strong convergence of the iterative sequence generated by the proposed method is established, under mild conditions, in real Hilbert spaces. Computational experiments comparing our newly proposed method with the existing state of the art on multiple realistic test problems are given.
We present two explicit methods for solving variational inequalities. Each iteration of the proposed methods consists of a projection onto a halfspace containing the feasible set. Our projection is easy to calculate. Moreover, we find the step size through an Armijo-like search instead of defining it exogenously. Our methods are proved to be globally convergent under pseudomonotonicity and continuity of the operator. No coerciveness, paramonotonicity or Lipschitz continuity assumption is imposed; thus we generalize some recent results in the literature. Mathematics Subject Classification (2010): 90C25, 90C30. Key words: Variational inequalities, Pseudomonotone operator, Explicit projection algorithm, Global convergence.
We propose new methods for solving the variational inequality problem where the underlying function F is monotone. These methods may be viewed as projection-type methods in which the projection direction is modified by a strongly monotone mapping of the form $I - \alpha F$ or, if F is affine with underlying matrix M, of the form $I + \alpha M^T $, with $\alpha \in (0,\infty )$. We show that these methods are globally convergent, and if in addition a certain error bound based on the natural residual holds locally, the convergence is linear. Computational experience with the new methods is also reported.
Numerous attempts have been made to develop efficient methods for solving the system of constrained nonlinear equations due to its widespread use in diverse engineering applications. In this article, we present a family of inertial-based derivative-free projection methods with a correction step for solving such systems, in which the selection of the derivative-free search direction is flexible. This family does not require computation of the corresponding Jacobian matrix or an approximate matrix at every iteration and possesses the following theoretical properties: (i) the inertial-based corrected direction framework always automatically satisfies the sufficient descent and trust region properties without specific search directions, and is independent of any line search; (ii) the global convergence of the proposed family is proven under a weaker monotonicity condition on the mapping, without the typical monotonicity or pseudo-monotonicity assumption; (iii) results on the convergence rate of the proposed family are established under slightly stronger assumptions. Furthermore, we propose two effective inertial-based derivative-free projection methods, each embedding a specific search direction into the proposed family. We present preliminary numerical experiments on certain test problems to demonstrate the effectiveness and superiority of the proposed methods in comparison with existing ones. Additionally, we utilize these methods for solving sparse signal restorations and image restorations in compressive sensing applications.
The gradient projection algorithm plays an important role in solving constrained convex minimization problems. In general, the gradient projection algorithm has only weak convergence in infinite-dimensional Hilbert spaces. Recently, H. K. Xu (2011) provided two modified gradient projection algorithms which have strong convergence. Motivated by Xu's work, in the present paper we suggest three simpler variants of the gradient projection method for which strong convergence is guaranteed.
We introduce a new ergodic algorithm for solving equilibrium problems over the fixed point set of a nonexpansive mapping. In contrast to the existing one in Kim [The Bruck's ergodic iteration method for the Ky Fan inequality over the fixed point set. Int. J. Comput. Math. 94 (2017), pp. 2466–2480], our algorithm uses self-adaptive step sizes. Thanks to this, the proposed algorithm converges under milder conditions. Moreover, at each step of our algorithm, instead of solving strongly convex problems, we only have to compute a subgradient of a convex function. Hence, our algorithm has lower computational cost.
We present a unified framework for the design and convergence analysis of a class of algorithms based on approximate solution of proximal point subproblems. Our development further enhances the constructive approximation approach of the recently proposed hybrid projection–proximal and extragradient–proximal methods. Specifically, we introduce an even more flexible error tolerance criterion, as well as provide a unified view of these two algorithms. Our general method possesses global convergence and local (super)linear rate of convergence under standard assumptions, while using a constructive approximation criterion suitable for a number of specific implementations. For example, we show that close to a regular solution of a monotone system of semismooth equations, two Newton iterations are sufficient to solve the proximal subproblem within the required error tolerance. Such systems of equations arise naturally when reformulating the nonlinear complementarity problem.
The Josephy–Newton method for solving a nonlinear complementarity problem consists of solving, possibly inexactly, a sequence of linear complementarity problems. Under appropriate regularity assumptions, this method is known to be locally (superlinearly) convergent. To enlarge the domain of convergence of the Newton method, some globalization strategy based on a chosen merit function is typically used. However, to ensure global convergence to a solution, some additional restrictive assumptions are needed. These assumptions imply boundedness of level sets of the merit function and often even (global) uniqueness of the solution. We present a new globalization strategy for monotone problems which is not based on any merit function. Our linesearch procedure utilizes the regularized Newton direction and the monotonicity structure of the problem to force global convergence by means of a (computationally explicit) projection step which reduces the distance to the solution set of the problem. The resulting algorithm is truly globally convergent in the sense that the subproblems are always solvable, and the whole sequence of iterates converges to a solution of the problem without any regularity assumptions. In fact, the solution set can even be unbounded. Each iteration of the new method has the same order of computational cost as an iteration of the damped Newton method. Under natural assumptions, the local superlinear rate of convergence is also achieved.
We propose a new approach to globalizing the Josephy-Newton algorithm for solving the monotone variational inequality problem. Known globalization strategies rely either on minimization of a suitable merit function, or on a projection-type approach. The technique proposed here is based on a linesearch in the regularized Josephy-Newton direction which finds a trial point and a proximal point subproblem (i.e., subproblem with suitable parameters), for which this trial point is an acceptable approximate solution. We emphasize that this requires only checking a certain approximation criterion, and in particular, does not entail actually solving any nonlinear proximal point subproblems. The method converges globally under very mild assumptions. Furthermore, an easy modification of the method secures the local superlinear rate of convergence under standard conditions.
We develop a new stochastic algorithm with variance reduction for solving pseudo-monotone stochastic variational inequalities. Our method builds on Tseng's forward-backward-forward (FBF) algorithm, which is known in the deterministic literature to be a valuable alternative to Korpelevich's extragradient method when solving variational inequalities over a convex and closed set governed by pseudo-monotone, Lipschitz continuous operators. The main computational advantage of Tseng's algorithm is that it relies only on a single projection step and two independent queries of a stochastic oracle. Our algorithm incorporates a variance reduction mechanism and leads to almost sure (a.s.) convergence to an optimal solution. To the best of our knowledge, this is the first stochastic look-ahead algorithm achieving this by using only a single projection at each iteration.
In this paper, four extragradient-type algorithms with inertial terms are presented for solving the variational inequality problem with a pseudomonotone and non-Lipschitz continuous operator in real Hilbert spaces. Strong convergence theorems of the suggested methods are established under some suitable conditions imposed on the parameters. Finally, several computational tests and applications in optimal control problems are given to illustrate the efficiency and advantages of the proposed iterative schemes over some known ones.
We develop a new stochastic algorithm for solving pseudomonotone stochastic variational inequalities. Our method builds on Tseng's forward-backward-forward algorithm, which is known in the deterministic literature to be a valuable alternative to Korpelevich's extragradient method when solving variational inequalities over a convex and closed set governed by pseudomonotone Lipschitz continuous operators. The main computational advantage of Tseng's algorithm is that it relies only on a single projection step and two independent queries of a stochastic oracle. Our algorithm incorporates a minibatch sampling mechanism and leads to almost sure convergence to an optimal solution. To the best of our knowledge, this is the first stochastic look-ahead algorithm achieving this by using only a single projection at each iteration.
We propose a projection algorithm for solving an equilibrium problem (EP) where the bifunction is pseudomonotone with respect to its solution set. The algorithm is further combined with a cutting technique for minimizing the norm over the solution set of an EP whose bifunction is pseudomonotone with respect to the solution set.
By using the metric projection onto a closed self-dual cone of the Euclidean space, M. S. Gowda, R. Sznajder and J. Tao have defined generalized lattice operations, which in the particular case of the nonnegative orthant of a Cartesian reference system reduce to the lattice operations of the coordinate-wise ordering. The aim of the present note is twofold: to give a geometric characterization of the closed convex sets which are invariant with respect to these operations, and to relate this invariance property to the isotonicity of the metric projection onto these sets. As concrete examples the Lorentz cone and the nonnegative orthant are considered. Old and recent results on closed convex Euclidean sublattices due to D. M. Topkis, A. F. Veinott and to M. Queyranne and F. Tardella, respectively, are obtained as particular cases. The topic is related to variational inequalities, where the isotonicity of the metric projection is an important technical tool. For Euclidean sublattices this approach was considered by G. Isac, H. Nishimura and E. A. Ok.
Tseng's forward-backward-forward algorithm is a valuable alternative to Korpelevich's extragradient method when solving variational inequalities over a convex and closed set governed by monotone and Lipschitz continuous operators, as it requires only one projection operation in every step. However, it is well known that Korpelevich's method converges, and can therefore also be used, for solving variational inequalities governed by pseudo-monotone and Lipschitz continuous operators. In this paper, we first associate to a pseudo-monotone variational inequality a forward-backward-forward dynamical system and carry out an asymptotic analysis of the generated trajectories. The explicit time discretization of this system results in Tseng's forward-backward-forward algorithm with relaxation parameters, which we prove to converge also when applied to pseudo-monotone variational inequalities. In addition, we show that linear convergence is guaranteed under strong pseudo-monotonicity. Numerical experiments are carried out for pseudo-monotone variational inequalities over polyhedral sets and fractional programming problems.
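The single-projection structure of Tseng's forward-backward-forward iteration is easy to state: one projection followed by a corrective forward step. Below is a minimal sketch, assuming a fixed stepsize below 1/L for an L-Lipschitz operator; the operator, projector, and parameter values are illustrative assumptions, not data from the paper.

```python
import numpy as np

def tseng_fbf(F, proj_C, x0, lam=0.5, max_iter=200, tol=1e-10):
    """Tseng's forward-backward-forward iteration; requires lam < 1/L."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        y = proj_C(x - lam * F(x))       # the single projection per iteration
        if np.linalg.norm(x - y) < tol:  # fixed point of the projected step
            break
        x = y + lam * (F(x) - F(y))      # forward correction, no second projection
    return x
```

Note that the corrected iterate may leave the feasible set C; only the intermediate point y is guaranteed feasible, which is the price of saving the second projection.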
The Hestenes-Stiefel (HS) conjugate gradient method is very effective in solving large-scale smooth optimization tasks due to its low computational requirements and high computational efficiency. The algorithm has also been employed in practical applications to address image restoration and machine learning problems. In this paper, the authors propose an improved Hestenes-Stiefel conjugate gradient algorithm with the following characteristics: (i) the algorithm satisfies the sufficient descent and trust region properties without additional conditions; (ii) the algorithm is globally convergent; (iii) the algorithm can be applied to the image restoration problem, monotone nonlinear equations, and machine learning problems. Numerical results show that the proposed technique is competitive.
In this paper, we introduce an inertial Tseng's extragradient method for solving multi-valued variational inequalities, in which only one projection is needed at each iteration. We also obtain strong convergence results for the proposed algorithm, provided that the multi-valued mapping is continuous and pseudomonotone with nonempty compact convex values. Moreover, numerical simulation results illustrate the efficiency of our method when compared to existing methods.
Recently, some proximal-based alternating direction methods and alternating projection-based prediction–correction methods were proposed to solve structured variational inequalities in Euclidean space Rn. We note that the proximal-based alternating direction methods need to solve their subproblems exactly. However, the subproblems of the proximal-based alternating direction methods are too difficult to be solved exactly in many practical applications. We also note that the existing alternating projection-based prediction–correction methods can only cope with the case where the underlying mappings are Lipschitz continuous. However, it could be difficult to verify the Lipschitz continuity condition when the available information is only the mapping values. In this paper, we present a new alternating projection-based prediction–correction method for solving structured variational inequalities, where the underlying mappings are continuous. In each iteration, we first employ a new Armijo linesearch to derive the predictors, and then update the next iterate via some minor computations. Under some mild assumptions, we establish the global convergence theorem of the proposed method. Preliminary numerical results are also reported to illustrate the effectiveness of the proposed method.
In this paper, a new projection-type algorithm for solving multi-valued mixed variational inequalities without monotonicity is presented. Under some suitable assumptions, it is shown that the sequence generated by the proposed algorithm converges globally to a solution of the multi-valued mixed variational inequality considered. The algorithm exploited in this paper is based on the generalized f-projection operator due to Wu and Huang [The generalized f-projection operator with an application. Bull Austral Math Soc. 2006;73:307–317] rather than the well-known resolvent operator. Preliminary computational experience is also reported. The results presented in this paper generalize and improve some known results given in the literature.
In this paper, new numerical algorithms are introduced for finding the solution of a variational inequality problem whose constraint set is the common elements of the set of fixed points of a demicontractive mapping and the set of solutions of an equilibrium problem for a monotone mapping in a real Hilbert space. The strong convergence of the iterates generated by these algorithms is obtained by combining a viscosity approximation method with an extragradient method. First, this is done when the basic iteration comes directly from the extragradient method, under a Lipschitz-type condition on the equilibrium function. Then, it is shown that this rather strong condition can be omitted when an Armijo-backtracking linesearch is incorporated into the extragradient iteration. The particular case of variational inequality problems is also examined.
Recently Han and Lou proposed a highly parallelizable decomposition algorithm for minimizing a strongly convex cost over the intersection of closed convex sets. It is shown that their algorithm is in fact a special case of a splitting algorithm analyzed by Gabay for finding a zero of the sum of two maximal monotone operators. Gabay's convergence analysis for the splitting algorithm is sharpened, and new applications of this algorithm to variational inequalities, convex programming, and the solution of linear complementarity problems are proposed. For convex programs with a certain separable structure, a multiplier method that is closely related to the alternating direction method of multipliers of Gabay–Mercier and of Glowinski–Marrocco, but which uses both ordinary and augmented Lagrangians, is obtained.
Convexity plays a very important role in various branches of the mathematical, natural and social sciences. In an effort to extend existing results depending on convexity, there has been a steady interest over the years in its generalizations. Many excellent books, monographs and numerous articles have been written pursuing generalizations of the basic concepts and their characterizations, and studying their properties under different conditions. The purpose of this contribution is to gather information on certain generalizations of convexity and their applications to duality theory and optimality conditions.
(1981). Variational Inequalities and Complementarity Problems — Theory and Application. Journal of the Operational Research Society: Vol. 32, No. 9, pp. 848-848.
We present a variant of Korpelevich's method for variational inequality problems with monotone operators. Instead of a fixed and exogenously given stepsize, possible only when a Lipschitz constant for the operator exists and is known beforehand, we find an appropriate stepsize in each iteration through an Armijo-type search. Differently from other similar schemes, we perform only two projections onto the feasible set in each iteration, rather than one projection for each tentative step during the search, which represents a considerable saving when the projection is computationally expensive. A full convergence analysis is given, without any Lipschitz continuity assumption. Keywords: Variational inequalities; Armijo search.
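As a baseline for the two-projection variant just described, the classical Korpelevich extragradient step with a fixed stepsize (usable only when a Lipschitz constant is known beforehand) also takes exactly two projections per iteration. The following is a hypothetical sketch of that classical baseline with illustrative problem data, not the paper's Armijo-search variant:

```python
import numpy as np

def extragradient(F, proj_C, x0, lam=0.5, max_iter=200, tol=1e-10):
    """Korpelevich's extragradient method; requires lam < 1/L for monotone L-Lipschitz F."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        y = proj_C(x - lam * F(x))       # first projection: predictor step
        if np.linalg.norm(x - y) < tol:  # natural residual small => approximate solution
            break
        x = proj_C(x - lam * F(y))       # second projection: corrector step
    return x
```

The Armijo-search variant replaces the fixed `lam` with a backtracking search on F values only, so the count of two projections per iteration is preserved even when the search tries several tentative stepsizes.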