Abstract:

We develop a derivative-free preconditioned residual method for
solving nonlinear steady fluid flows. The scheme is based on a
variable implicit preconditioning technique combined with the
globalized spectral residual method. It is robust and allows the
numerical computation of the steady state of the two-dimensional
incompressible Navier-Stokes equations (NSE), which we consider here in
both primary variables and streamfunction-vorticity formulations. The
results are encouraging and agree with those reported in the literature.

Key words. nonlinear systems of equations, residual methods, globalization strategies, preconditioning, Navier-Stokes equations.

AMS subject classifications. 76D05, 65H10, 76M20, 90C30


Article Type:

Report

Subject:

Residuals (Statistics) (Research)

Navier-Stokes equations (Research)


Authors:

Chehab, Jean-Paul

Raydan, Marcos


Pub Date:

12/01/2008

Publication:

Name: Electronic Transactions on Numerical Analysis Publisher: Institute of Computational Mathematics Audience: Academic Format: Magazine/Journal Subject: Computers; Mathematics Copyright: COPYRIGHT 2008 Institute of Computational Mathematics ISSN: 1068-9613

Issue:

Date: Dec, 2008 Source Volume: 34



1. Introduction. Preconditioning is a widely used approach to
accelerating numerical methods for solving both linear and nonlinear
problems. For linear systems, preconditioning is well developed and very
well understood. For nonlinear problems, however, preconditioning
iterative methods remains a challenge and is far less well understood.

The emergence of non-monotone residual methods, such as the one introduced by Barzilai and Borwein in optimization [2, 13, 26] and globalized versions which enhance its robustness and effectiveness [12, 22, 23, 27], gives the possibility of efficiently solving large-scale nonlinear problems, incorporating in a natural way a preconditioning strategy. Non-monotone globalization strategies for nonlinear problems have become popular in the last few years. These strategies make it possible to define globally convergent algorithms without monotone decrease requirements. The main idea behind non-monotone strategies is that, frequently, the first choice of a trial point along the search direction hides significant information about the problem structure, and that such knowledge can be destroyed by imposing a decrease condition.

In this work we adapt and extend the ideas introduced in [12, 22] for large-scale nonlinear problems like those that appear in the solution of the steady fluid flow problem. In particular, we add a preconditioning strategy fully described in [10]. The lid-driven cavity problem is a numerically challenging standard benchmark problem for nonlinear solvers that corresponds to computing either evolving or steady flows of the two-dimensional incompressible Navier-Stokes equations in a rectangular cavity. To compute steady states, two approaches are commonly considered: time-dependent methods that compute the steady state by applying a time marching scheme to the evolutionary NSE (for Reynolds numbers that are lower than the bifurcation value), and methods that solve the steady-state NSE by fixed point or Newton-like schemes. It is well known that direct solution of the steady-state NSE is more difficult than time marching, and that it requires very robust schemes, especially as the Reynolds number Re increases. The literature on this topic is very rich, from, e.g., the relaxation schemes proposed by Crouzeix [11] to the more recent defect-correction methods; see [31] and the references therein. However, these methods are very closely related to the structure of the NSE and use a linearization of the equation at each step.

Our aim in this article is to compute the solution of the steady-state NSE by an implicit preconditioned version of the spectral residual method, with globalization. The method we introduce here uses only the solution of the linear part of the equation, which can be obtained efficiently with a fast solver (e.g., FFT and multigrid).

The article is organized as follows. First, in Section 2, after describing the inverse preconditioning strategy and the globalization strategy for the residual scheme, we derive our new algorithm combining the dynamical and the optimization approach. Then, in Section 3, we adapt the discretization of the steady-state two-dimensional incompressible Navier-Stokes equations to the framework of the nonlinear scheme. Finally, in Section 4, as a numerical illustration, we present the solution of the steady-state NSE for different Reynolds numbers (up to Re = 5000). We solve the problem in the primary variables as well as in stream function-vorticity formulation. Our results agree with the ones in the literature and show the robustness of the proposed method.

2. The basic algorithm and its ingredients. In a general framework,
let us consider the nonlinear system of equations

F (x)=0, (2.1)

where $F : \mathbb{R}^n \to \mathbb{R}^n$ is a continuously differentiable mapping. This framework generalizes the nonlinear systems that appear, for example, after discretizing the steady-state models for fluid flows, to be discussed later in this work.

To solve (2.1), some iterative schemes have been recently presented that systematically use the residual vectors as search directions [12, 22]; i.e., the iterations are defined as

$x_{k+1} = x_k \pm \lambda_k F(x_k)$, (2.2)

where $\lambda_k > 0$ is the step length and the search direction is either $F(x_k)$ or $-F(x_k)$, depending on which one is a descent direction for the merit function

$f(x) = \|F(x)\|_2^2 = F(x)^T F(x)$. (2.3)

These schemes are effective and competitive with Newton-Krylov schemes for large-scale nonlinear systems [5, 6, 21] when the step lengths are suitably chosen. The convergence of (2.2) is associated with a derivative-free non-monotone line search, fully described in [12], which will be discussed in the forthcoming subsections.

There are many choices of the step length $\lambda_k$ for which convergence is guaranteed. The well-known non-monotone spectral choice has interesting properties; it is defined as the absolute value of

$\frac{s_{k-1}^T s_{k-1}}{s_{k-1}^T y_{k-1}}$, (2.4)

where $s_{k-1} = x_k - x_{k-1}$ and $y_{k-1} = F(x_k) - F(x_{k-1})$. Obtaining the step length from (2.4) requires little computational work, accelerates the convergence of the process, and involves the last two iterates in a way that incorporates first-order information into the search direction [2, 13, 17, 26].
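As a concrete illustration of iteration (2.2) with the spectral step (2.4), the following self-contained Python sketch solves a small nonlinear system whose Jacobian has positive-definite symmetric part everywhere. The mapping `F`, the starting point, and the initial step are illustrative choices, not taken from the paper.

```python
import math

def F(x):
    # Hypothetical test mapping with root (1, 1); its Jacobian is
    # symmetric positive definite everywhere, so -F(x) is a descent
    # direction for the merit function f(x) = ||F(x)||^2.
    return [3*x[0] + x[1] - 4 + 0.05*(x[0]**3 - 1),
            x[0] + 2*x[1] - 3 + 0.05*(x[1]**3 - 1)]

def norm(v):
    return math.sqrt(sum(vi*vi for vi in v))

def spectral_residual(F, x0, lam0=0.2, tol=1e-10, max_iter=500):
    """Residual iteration x_{k+1} = x_k - lam_k F(x_k), where lam_k is
    the absolute value of the spectral (Barzilai-Borwein) quotient
    s^T s / s^T y of (2.4)."""
    x, Fx, lam = list(x0), F(x0), lam0
    for _ in range(max_iter):
        if norm(Fx) < tol:
            break
        x_new = [xi - lam*fi for xi, fi in zip(x, Fx)]
        Fx_new = F(x_new)
        s = [a - b for a, b in zip(x_new, x)]
        y = [a - b for a, b in zip(Fx_new, Fx)]
        sty = sum(a*b for a, b in zip(s, y))
        if sty != 0.0:
            lam = abs(sum(a*a for a in s) / sty)  # spectral step (2.4)
        x, Fx = x_new, Fx_new
    return x

root = spectral_residual(F, [0.0, 0.0])
```

For strongly nonlinear problems this pure iteration may fail far from the root; that is precisely what the globalization of Section 2.4 addresses.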

In the forthcoming subsections, we discuss the preconditioned version of the basic scheme (2.2) and the specific inverse preconditioner to be used in this work, the globalization strategy to guarantee global convergence, and how to obtain the step lengths that accelerate the convergence while improving the stability of the process.

2.1. The preconditioned scheme. In order to present the preconditioned version of (2.2), we extend the ideas discussed in [23] for unconstrained minimization to the solution of (2.1). The well-known and somewhat ideal Newton's method for solving (2.1) from an initial guess $x_0$ can be written as

$x_{k+1} = x_k - J_k^{-1} F(x_k)$, (2.5)

where $J_k = J(x_k)$ and $J(x)$ is the Jacobian of $F$ evaluated at $x$.

Recently, a preconditioned scheme, associated with the gradient direction, was proposed to solve unconstrained minimization problems [23]. When solving (2.1), the preconditioned version of (2.2) produces the iterates

$x_{k+1} = x_k + \lambda_k z_k$, (2.6)

where $z_k = \pm C_k F(x_k)$, $C_k$ is a nonsingular approximation to $J_k^{-1}$, and the scalar $\lambda_k$ is given by

$\lambda_k = \lambda_{k-1} \frac{z_{k-1}^T z_{k-1}}{z_{k-1}^T y_{k-1}}$. (2.7)

In (2.6), if $C_k = I$ (the identity matrix) for all $k$, then $z_k = \pm F(x_k)$, and since $s_{k-1} = \lambda_{k-1} z_{k-1}$, $\lambda_k$ coincides with (2.4). Therefore, if $C_k = I$ the method reduces to (2.2). On the other hand, if the sequence of iterates converges to $x^*$ and we improve the quality of the preconditioner so that $C(x_k)$ converges to $J^{-1}(x^*)$, then, as discussed in [10], $\lambda_k$ tends to one and we recover Newton's method, which possesses fast local convergence under standard assumptions [14]. In that sense, the iterative scheme (2.6) is flexible and allows intermediate options by choosing suitable approximations $C_k$ between the identity matrix and the inverse of the Jacobian. For building suitable approximations to $J^{-1}(x_k) F(x_k)$, we will test implicit preconditioning schemes that do not require the explicit computation of $C_k$, as described in Section 2.2.

2.2. Inverse preconditioning schemes. By adapting our recent work on approximating the Newton direction using an ordinary differential equation (ODE) model, we will obtain the inverse preconditioner for (2.1) in the framework of the iterative global preconditioned residual algorithm presented in the next subsection. To that end, we develop an automatic and implicit scheme to directly approximate the preconditioned direction [z.sub.k] at every step in (2.6) without a priori knowledge of the Jacobian of F and with only a reduced and controlled amount of storage and computation. As we will discuss later, this new scheme avoids as much as possible the cost of any calculations involving matrices, and also allows us to obtain asymptotically the Newton direction by improving the accuracy in the ODE solver.

The method we introduce here starts from the numerical integration of the Newton flow aimed at computing the root of F as the stable steady state of

$\frac{dx}{dt} = -(J(x))^{-1} F(x)$. (2.8)

Notice that the preconditioned gradient method can be obtained by applying a forward Euler method to (2.8); see, e.g., [1]. The value $\|F(x)\|$ decreases along the integral curves and converges at an exponential rate to zero; for details, see [1, 20] and the references therein. Introducing the decoupling

$\frac{dx}{dt} = -z$, (2.9)

$J(x) z = F(x)$, (2.10)

we see that the algebraic condition that links z to x is, in fact, a preconditioning equation. In order to relax its resolution, a time derivative in z is added as

$\frac{dx}{dt} = -z$, (2.11)

$\frac{dz}{dt} = F(x) - J(x) z$. (2.12)

This last system allows us to compute numerically the root of $F$ by an explicit time marching scheme, since the steady state is asymptotically stable; see [10] for more details. Let $t_k$ be discrete times, and denote by $x_k$ and $z_k$ the numerical approximations to $x(t_k)$ and $z(t_k)$. Applying the simple forward Euler method to (2.11)-(2.12) gives

$x_{k+1} = x_k - (t_{k+1} - t_k) z_k$, (2.13)

$z_{k+1} = z_k + (t_{k+1} - t_k) (F(x_k) - J(x_k) z_k)$. (2.14)

Remark 2.1. As stated above, we want to avoid the computation of the Jacobian matrix, so $J(x)z$ is classically approximated by the finite difference

$J(x) z \approx \frac{F(x + \tau z) - F(x)}{\tau}$,

for a small given positive real number $\tau$.
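Remark 2.1 is straightforward to realize in code: the product $J(x)z$ costs one extra evaluation of $F$. The sketch below uses a hypothetical two-dimensional mapping (not from the paper) to check the directional difference against the exact Jacobian-vector product.

```python
def jacobian_vector_fd(F, x, z, tau=1e-7):
    """Approximate J(x) z by the forward difference (F(x + tau*z) - F(x)) / tau,
    as in Remark 2.1; the Jacobian is never formed."""
    Fx = F(x)
    Fxz = F([xi + tau*zi for xi, zi in zip(x, z)])
    return [(a - b) / tau for a, b in zip(Fxz, Fx)]

# Toy mapping (hypothetical, for illustration): F(x) = (x0^2, x0*x1)
F = lambda x: [x[0]**2, x[0]*x[1]]
approx = jacobian_vector_fd(F, [2.0, 3.0], [1.0, 1.0])
# Exact J(x)z at x = (2, 3), z = (1, 1) is (2*x0*z0, x1*z0 + x0*z1) = (4, 5)
```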

Notice that the dynamics of the differential system (2.11)-(2.12) can be very slow. As proposed in [10], a way to speed up the convergence to the steady state is to artificially introduce two time scales, by computing for every discrete time $t_k$ an approximation of the steady state of the equation in $z$. More precisely, we write

Step 1. With optimization method 1, compute $z_k$ as an approximation of the steady state of

$\frac{dz}{dt} = F(x_k) - \frac{F(x_k + \tau z) - F(x_k)}{\tau}$,

$z(0) = z_{k-1}$.

Step 2. With optimization method 2, compute $x_{k+1}$ from $x_k$ by

$x_{k+1} = x_k + \lambda_k z_k$.

The preconditioning relies on the accuracy of the solution of Step 1. As optimization method 1, we proposed in [10] to apply some iterations of Cauchy-like schemes, which we describe in Section 2.3. As optimization method 2, which defines the step $\lambda_k$, we use the spectral gradient method. Promising results were obtained in [10] on some classical optimization problems. However, the solution of the steady-state NSE needs a more robust scheme for the time marching of $x_k$; the scheme IPR described below becomes crucial in practical cases. We now present the general form of the scheme.

ALGORITHM 2.2 (Implicit Preconditioned Residual Method (IPR)).

Step 1. With Cauchy-like minimization, compute $z_k$ as an approximation of the steady state of

$\frac{dz}{dt} = F(x_k) - \frac{F(x_k + \tau z) - F(x_k)}{\tau}$,

$z(0) = z_{k-1}$.

Step 2. With GPR, compute $x_{k+1}$ from $x_k$ by

$x_{k+1} = x_k + \lambda_k z_k$.
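A minimal Python sketch of the two-step structure of Algorithm 2.2, under simplifying assumptions: the inner iterations use the minimal-residual (Cauchy, $p = 1$) step, the outer update is fixed at $x_{k+1} = x_k - z_k$ (a Newton-like step) instead of the spectral/GPR choice, and the test mapping is illustrative, not from the paper.

```python
def dot(u, v):
    return sum(a*b for a, b in zip(u, v))

def Jv(F, x, v, tau=1e-7):
    # Finite-difference Jacobian action (F(x + tau*v) - F(x)) / tau (Remark 2.1).
    Fx, Fxv = F(x), F([xi + tau*vi for xi, vi in zip(x, v)])
    return [(a - b)/tau for a, b in zip(Fxv, Fx)]

def ipr_sketch(F, x0, nprec=5, lam=1.0, tol=1e-7, max_outer=100):
    """Step 1: nprec Cauchy (minimal-residual) iterations drive z toward
    J(x_k)^{-1} F(x_k), warm-started at z_{k-1}; Step 2: x_{k+1} = x_k - lam*z_k."""
    x, z = list(x0), [0.0]*len(x0)
    for _ in range(max_outer):
        Fx = F(x)
        if dot(Fx, Fx)**0.5 < tol:
            break
        for _ in range(nprec):                      # Step 1
            r = [fi - ai for fi, ai in zip(Fx, Jv(F, x, z))]
            Ar = Jv(F, x, r)
            denom = dot(Ar, Ar)
            if denom == 0.0:
                break
            dt = dot(r, Ar)/denom                   # Cauchy step (p = 1)
            z = [zi + dt*ri for zi, ri in zip(z, r)]
        x = [xi - lam*zi for xi, zi in zip(x, z)]   # Step 2
    return x

# Illustrative mapping with root (1, 1) (not from the paper):
F = lambda x: [3*x[0] + x[1] - 4 + 0.05*(x[0]**3 - 1),
               x[0] + 2*x[1] - 3 + 0.05*(x[1]**3 - 1)]
sol = ipr_sketch(F, [0.0, 0.0])
```

With a modest `nprec`, the inner loop already delivers a direction close to the Newton direction, which is the flexibility the text describes: the quality of the implicit preconditioner is tuned by the inner iteration count alone.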

2.3. Generalized Cauchy-type minimization for $z_k$. We now describe several ways of computing Cauchy-type step lengths for building $z_k$ in Step 1 of the IPR method. The computation of a steady state by an explicit scheme can be accelerated by enlarging the stability domain of the scheme, since this allows the use of larger time steps. In that sense, the accuracy of a time marching scheme is not a priority. A simple way to derive more stable methods is to use parametrized one-step schemes and to fit the parameters not to increase the accuracy, as in classical Runge-Kutta methods, but to improve the stability. For example, in [4, 9], a method was proposed to iteratively compute fixed points with larger descent parameters, starting from a specific numerical time scheme. More precisely, defining $G(z) = F(x_k) - \frac{F(x_k + \tau z) - F(x_k)}{\tau}$, this method consists of integrating the differential equation

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], (2.15)

by the p-step scheme

Here $\sum_{i=0}^{p} \beta_i = 1$. At convergence we obtain $\bar{y}$, and we set $z_k = \bar{y}$, which will be used as our preconditioned direction at iteration $k$.

Classically, the convergence can be accelerated by computing at each iteration the step length that minimizes the Euclidean norm of the current residual. This gives rise to variants of the Cauchy scheme [8]. Of course, the minimizing parameter becomes harder to compute as $p$ increases. We now list the optimal values of the parameters for $p = 1, 2, 3$. The case $p = 3$ represents, as far as we know, a new choice of step length for residual methods. In what follows, for any two vectors $v$ and $w$, $(v, w) = v^T w$, $r^k = F(x_k)$, and

$Av = \frac{F(x_k + \tau v) - F(x_k)}{\tau} \approx J(x_k) v$.

* p = 1 (Cauchy method)

$\beta_1^k = 1$, $\Delta t_k = \frac{(r^k, A r^k)}{(A r^k, A r^k)}$.

* p = 2 (Enhanced Cauchy 1 (EC1) see [9, 10])

We set

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII].

* p = 3 (Enhanced Cauchy 2 (EC2))

We set

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII].
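For $p = 1$ the optimal parameter can be checked directly: the squared residual norm $\|r - \Delta t \, A r\|^2$ is a quadratic in $\Delta t$, minimized at $\Delta t = (r, Ar)/(Ar, Ar)$. The sketch below verifies this on a hypothetical 2x2 matrix (illustrative data, not from the paper).

```python
def dot(u, v):
    return sum(a*b for a, b in zip(u, v))

def residual_norm2(r, Ar, dt):
    """Squared norm of r - dt*A r, the residual after one step of size dt."""
    return dot(r, r) - 2.0*dt*dot(r, Ar) + dt*dt*dot(Ar, Ar)

# Hypothetical 2x2 example: A r computed directly for illustration.
A = [[3.0, 1.0], [1.0, 2.0]]
r = [1.0, -2.0]
Ar = [A[0][0]*r[0] + A[0][1]*r[1], A[1][0]*r[0] + A[1][1]*r[1]]

dt_star = dot(r, Ar) / dot(Ar, Ar)    # Cauchy (p = 1) minimal-residual step
best = residual_norm2(r, Ar, dt_star)
```

Perturbing `dt_star` in either direction increases the residual, confirming that the quadratic is minimized exactly there; the $p = 2, 3$ variants optimize the analogous polynomial over two and three parameters.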

2.4. Globalization strategy. In order to guarantee that the implicit preconditioned residual (IPR) method converges from any initial guess, we need to add a globalization strategy. This is an important feature, especially when dealing with highly nonlinear flow problems and high Reynolds numbers. Indeed, for highly nonlinear problems, pure methods (i.e., without a globalization strategy) will diverge unless the initial guess is very close to the solution, which is not a practical assumption. To avoid the derivatives of the merit function, which are not available, we adapt the recently developed globalization strategy of La Cruz et al. [12] to our preconditioned version.

Assume that $\{\eta_k\}$ is a sequence such that $\eta_k > 0$ for all $k \in \mathbb{N}$ and

$\sum_{k=0}^{\infty} \eta_k < \infty$. (2.16)

Assume also that $0 < \gamma < 1$ and $0 < \sigma_{\min} < \sigma_{\max} < \infty$. Let $M$ be a positive integer, and let $\tau_{\min}, \tau_{\max}$ be such that $0 < \tau_{\min} < \tau_{\max} < 1$. Given an arbitrary initial point $x_0 \in \mathbb{R}^n$, an algorithm that allows us to obtain $x_{k+1}$ starting from $x_k$ is given below.

ALGORITHM 2.3 (Global Implicit Preconditioned Residual Method (GIPR)).

Step 1.

Choose $\lambda_k$ such that $|\lambda_k| \in [\sigma_{\min}, \sigma_{\max}]$ (described in the next subsection).

Build the vector $C_k F(x_k)$ (inverse preconditioned direction).

Compute $\bar{f}_k = \max\{f(x_k), \ldots, f(x_{\max\{0, k-M+1\}})\}$.

Set $z \leftarrow -\lambda_k C_k F(x_k)$.

Set $\alpha_+ \leftarrow 1$, $\alpha_- \leftarrow 1$.

Step 2.

REMARK 2.4. As discussed in [12], the algorithm is well defined; i.e., the backtracking process (choosing $\alpha_{+\mathrm{new}}$ and $\alpha_{-\mathrm{new}}$) is guaranteed to terminate successfully in a finite number of trials. Moreover, global convergence is also established in [12]. Indeed, if the symmetric part of the Jacobian of $F$ at $x_k$ is positive (or negative) definite for all $k$, then the sequence $\{f(x_k)\}$ tends to zero.
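The backtracking referred to in Remark 2.4 can be sketched as follows. This is a simplified rendering of the derivative-free non-monotone condition of La Cruz, Martinez, and Raydan [12], accepting a trial point when $f(x \pm \alpha d) \le \bar{f}_k + \eta_k - \gamma \alpha^2 f(x_k)$; plain step halving stands in for the safeguarded interpolation on $[\tau_{\min}\alpha, \tau_{\max}\alpha]$, and all names and constants are illustrative.

```python
def nonmonotone_search(f, x, d, f_hist, eta_k, gamma=1e-4, max_trials=60):
    """Try +alpha and -alpha along d; accept the first trial point whose
    merit value satisfies the non-monotone condition, else halve alpha."""
    f_bar, fx = max(f_hist), f_hist[-1]   # max over the last M merit values
    alpha = 1.0
    for _ in range(max_trials):
        for sign in (1.0, -1.0):
            xt = [xi + sign*alpha*di for xi, di in zip(x, d)]
            if f(xt) <= f_bar + eta_k - gamma*alpha*alpha*fx:
                return xt, sign*alpha
        alpha *= 0.5   # simplified stand-in for safeguarded interpolation
    return x, 0.0

# Illustrative merit function f = ||F||^2 with F(x) = (x0 - 1, x1 - 2):
f = lambda x: (x[0] - 1.0)**2 + (x[1] - 2.0)**2
x, d = [0.0, 0.0], [1.0, 2.0]             # d = -F(x), the residual direction
x_new, alpha = nonmonotone_search(f, x, d, f_hist=[f(x)], eta_k=0.1)
```

The forgiving threshold $\bar{f}_k + \eta_k$ is what lets early iterates climb temporarily, which the text identifies as essential for preserving problem structure along the search direction.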

3. Application to the solution of the steady 2D lid-driven cavity.

3.1. The problem. The equilibrium state of a driven square cavity is described by the steady-state Navier-Stokes equation, which, in primary variables, can be written as

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]. (3.1)

Here $U = (u, v)$ is the velocity field, $P$ is the pressure, and $f$ is the external force. For our applications, we consider the so-called driven cavity case, so $f = 0$ and the fluid is driven by an appropriate boundary condition. We denote the sides of the unit square $\Omega$ by $\Gamma_i$, $i = 1, \ldots, 4$: $\Gamma_1$ is the lower horizontal side, $\Gamma_3$ is the upper horizontal side, $\Gamma_2$ is the left vertical side, and $\Gamma_4$ is the right vertical side. In Figure 3.1, we show the location of the main vortices of the steady state of the lid-driven cavity: the Primary Vortex, the Top Left vortex (TL1), the Bottom Right vortices (BR1 and BR2), and the Bottom Left vortex (BL1). For additional details on this benchmark application, see [25].

We distinguish two different driven flows, according to the choice of the boundary conditions on the velocity. More precisely, $U = V = 0$ on the boundary, except for $U$ on $\Gamma_3$, where we have two options:

* $U(x, 1) = 1$: Cavity A (lid-driven cavity);

* $U(x, 1) = (1 - (1 - 2x)^2)^2$: Cavity B (regularized lid-driven cavity).
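The two lid profiles are simple to express in code; the sketch below (Python, with illustrative function names) checks that the Cavity B profile vanishes at the corners, which removes the corner singularities present in Cavity A.

```python
def lid_velocity_A(x):
    """Cavity A: unit lid velocity, discontinuous at the corners x = 0, 1."""
    return 1.0

def lid_velocity_B(x):
    """Cavity B: regularized profile (1 - (1 - 2x)^2)^2, which vanishes
    together with its derivative at the corners x = 0 and x = 1."""
    return (1.0 - (1.0 - 2.0*x)**2)**2
```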

3.2. Discretization and implementation in primary variables.

3.2.1. Discretization. The discretization is performed on staggered grids of MAC type in order to satisfy a discrete inf-sup (or Babuska-Brezzi) condition that guarantees stability; see [25]. Taking $N$ discretization points in each direction on the pressure grid, we obtain the system

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], (3.2)

where $U, V \in \mathbb{R}^{N(N-1)}$, $P \in \mathbb{R}^{N \times N}$, and $A_u$ and $A_v$ are the discretization matrices of minus the Laplacian on the $U$ and $V$ grids, respectively. Similarly, $B_x$ and $B_y$ are the discretization matrices of the first derivatives in $x$ and $y$ of the pressure on the velocity grids. The operators $L_u$ and $L_v$ are the nonlinear terms in $U$ and $V$, respectively; finally, $F_1$ and $F_2$ are the discrete external forces for the horizontal and vertical velocities. Equation (3.2) is then a square system in $2N(N-1) + N^2$ unknowns.

[FIGURE 3.1 OMITTED]

3.2.2. Implementation. The discrete problem (3.2) can be written as a nonlinear system F (U,V,P ) = 0,

with the obvious notation.

Now, let S be the Stokes solution operator defined by

$S : (F, G, 0) \mapsto (U, V, P)$,

where (U, V, P) solves the linear part of (3.2), i.e., the Stokes problem

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]. (3.3)

Finally, we introduce the functional $G$:

$G(U, V, P) = S(F(U, V, P))$.

The scheme consists of applying the implicit globalized preconditioned gradient method to the differential system

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], (3.4)

where $X = (U, V, P)$ and where $H$ is an approximation of the gradient of $G(X)$, as described in Remark 2.1.

3.3. The $w$-$\psi$ formulation. One of the advantages of the $w$-$\psi$ (vorticity-streamfunction) formulation is that the NSE decouple into two problems: a convection-diffusion equation and a Poisson problem. In particular, we can use the FFT to solve the linear problems, as described in this section.

3.3.1. The formulation. The $w$-$\psi$ formulation is obtained by taking the curl of the NSE [16, 25]. Let $w = \frac{\partial u}{\partial y} - \frac{\partial v}{\partial x}$, and write $u = \frac{\partial \psi}{\partial y}$, $v = -\frac{\partial \psi}{\partial x}$, where $\Delta \psi = w$. Then we have the equations

$-\frac{1}{Re} \Delta w + \frac{\partial \psi}{\partial y} \frac{\partial w}{\partial x} - \frac{\partial \psi}{\partial x} \frac{\partial w}{\partial y} = 0$, (3.5)

$\Delta \psi = w$, (3.6)

$w(x, 0) = w_0(x)$. (3.7)

The boundary conditions on w are derived using the standard approach based on the discretization of [DELTA][psi] on the boundaries; see [16, 25]. With the conditions on u and v [16, 25], we have

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII].

Therefore, since $\psi_{|\partial\Omega} = 0$ and $u = \frac{\partial \psi}{\partial y}$, $v = -\frac{\partial \psi}{\partial x}$, we obtain by using Taylor expansions

$w_{i,0} = \frac{1}{2h^2} (8\psi_{i,1} - \psi_{i,2})$, (3.8)

$w_{i,N+1} = \frac{1}{2h^2} (-\psi_{i,N-1} + 8\psi_{i,N} - 6hg(ih))$, (3.9)

$w_{0,j} = \frac{1}{2h^2} (8\psi_{1,j} - \psi_{2,j})$, (3.10)

$w_{N+1,j} = \frac{1}{2h^2} (-\psi_{N-1,j} + 8\psi_{N,j})$. (3.11)

Here $g(x)$ denotes the boundary condition function for the horizontal velocity on the boundary $\Gamma_3$. Homogeneous Dirichlet boundary conditions apply to $\psi$, and the operators are discretized by second-order centered schemes on a uniform mesh of $N$ points in each direction of the domain, with step size $h = 1/(N+1)$. The total number of unknowns is then $2N^2$.

The boundary conditions on w are implemented iteratively according to the relations (3.8)-(3.11), making the finite difference scheme second order accurate.

3.3.2. Implementation. With (3.8)-(3.11), we can compute the boundary conditions for $w$. We denote by $\partial_x^h(\psi)$, $\partial_y^h(\psi)$, and $\partial_\Delta^h(\psi)$ the contributions of the boundary conditions to the discretization operators of $\partial_x$, $\partial_y$, and $-\Delta$. The problem to solve is

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII].

Here $A$ is the discretization matrix of $-\Delta$, and $D_x$ and $D_y$ are the discretization matrices of $\partial_x$ and $\partial_y$, respectively. The problem to solve is then

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII].

For convenience, we set X = (w, [psi]). Now, as we did for the primary variables formulation, we return to the dynamical system framework of the method: we set

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII],

and we consider the evolutionary system

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], (3.12)

where $HZ$ is an approximation of the gradient of $G(X)$ at $Z$, as described in Remark 2.1. Here, $A$ is the classical pentadiagonal finite difference matrix for the Laplace operator on a square, and linear systems with $A$ can be solved cheaply using fast solvers such as FFT or multigrid. In this paper, we use the FFT.

4. Numerical results.

4.1. General implementation of the algorithm. We now list the information required by the GIPR method:

* The positive integer M.

* The parameters [gamma] and [[eta].sub.k].

* The initial value of the descent parameter [[lambda].sub.0].

* The merit function. We use the Euclidean norm of the residual, $\|F(X)\|_2$.

* The accuracy of the global method: the solution is considered accurate when $\|F(X)\|_2 < 10^{-6}$.

* The accuracy imposed for solving the preconditioning equation

$\frac{F(x_k + \tau z) - F(x_k)}{\tau} - F(x_k) = 0$,

which is characterized by

--The choice of optimization method 1. In our implementation, we have used the Enhanced Cauchy 2 scheme discussed above.

--The number $\tau$. We set $\tau = 10^{-8}$.

--The number of iterations nprec, which can vary at each step. We choose to increase nprec as the norm of the residual $r^k = F(x_k)$ decreases, in order to improve the preconditioning near the solution:

Adaptive computation of nprec:

The reasoning behind the decision to improve the quality of the preconditioner near the solution is this: near an isolated solution, the merit function f is strictly convex, and hence it makes sense to reduce the condition number of the approximated positive-definite Hessian of f in the neighborhood of the solution. This is achieved by increasing nprec as the norm of the residual, i.e., the value of the merit function, is reduced.
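The adaptive rule can be sketched as a simple threshold schedule; the thresholds and iteration counts below are illustrative placeholders, not the values used in the paper.

```python
def adaptive_nprec(res_norm, levels=((1e-1, 1), (1e-3, 3), (1e-5, 6)), n_max=10):
    """Hypothetical schedule: spend more inner preconditioning iterations
    (nprec) as the outer residual norm decreases, as suggested in Sec. 4.1.
    'levels' maps residual thresholds to iteration counts (illustrative)."""
    for threshold, n in levels:
        if res_norm >= threshold:
            return n
    return n_max
```

Cheap, loosely preconditioned steps are thus used far from the solution, while nearly-Newton directions are produced close to it, where the merit function is locally convex and the extra inner work pays off.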

4.2. Computation of steady states of the NSE. We now present the numerical solution of the steady state of the two-dimensional driven cavity for different Reynolds numbers. Our results agree with those in the literature [3, 7, 15, 18, 19, 24, 28-30] (see the figures and tables below). To demonstrate the robustness of the solution method, we take as the initial guess the solution of the Stokes problem, which lies farther from the steady state as the Reynolds number increases. We pay special attention to the solution of the NSE in the $w$-$\psi$ formulation; however, the scheme also applies to the NSE in primary variables ($U$-$P$), the linear solver then being a Stokes solver. The crucial practical point is to have at our disposal a fast solver for the linear problems: FFT or multigrid for the $w$-$\psi$ formulation, and a multigrid Uzawa method [7] for the $U$-$P$ formulation.

As we see in our results, the globalization strategy is important while the residual is not small enough. Furthermore, the preconditioning makes sense "close to the solution." For that reason, we choose to activate the preconditioning progressively as the residual decreases by increasing the number of inner iterations in the solution of the preconditioning step (step 1 of the scheme). This allows us to obtain fast convergence at the end while saving computational time at the beginning.

We observe that the number of outer iterations increases with the Reynolds number but not so much with the dimension of the problem. In all cases, the first part of the convergence process is devoted to "maintaining" the iterates in a neighborhood of the solution. All the computations have been done with MATLAB 7 on a 2Ghz dual core PC with two gigabytes of RAM.

We now present the parameters of the scheme that we used for solving the flow in cavity B and also in cavity A for the stream function-vorticity formulation of NSE. N is the number of discretization points in each direction of the domain. We give the parameter values in Tables 4.1-4.2.

The results are reported in Figures 4.1-4.3. Most of the work is done at the beginning of the iterations, while the globalization acts to stabilize the iterates. This phenomenon is amplified as the Reynolds number Re becomes large. An acceleration of the convergence is obtained once the residual is small enough, since nprec increases. The shapes of the solutions are identical to those in the literature, such as in [3, 7, 15, 16, 18, 19, 24, 28-30]; in particular, the special values agree.

[FIGURE 4.1 OMITTED]

We also report some special values of the solution for Cavity B and Re = 5000 in Table 4.3, where we also compare them with results in the literature.

4.3. Solution of the NSE in primary variables. We now present the numerical results for the solution of the steady-state NSE in primary variables. We change the value of $\gamma$ during the iterations in order to increase the non-monotonicity of the GIPR as follows:

if $\|r^k\| < 10^{-3}$ then $\gamma = 0.9$.

[FIGURE 4.2 OMITTED]

This practical choice reduces the number of line searches (backtrackings) near the solution, and so it reduces the overall computational effort for convergence. In Table 4.4, we present the numerical solution of the cavity B problem for Re = 400 and Re = 1000. The level curves of the pressure, the vorticity, the kinetic energy, and the stream function agree with those in the literature. Notice that fewer iterations are required for convergence than are needed for the same example using the streamfunction-vorticity formulation. This is due mainly to the fact that the boundary conditions are implemented exactly for the primary variables formulation, whereas they are approximated iteratively for the streamfunction-vorticity formulation. However, the computational effort required at each iteration of the primary variables formulation is greater than that needed for the streamfunction-vorticity formulation. Indeed, in the first case a coupled system of $3N^2$ variables needs to be solved, whereas two systems of $N^2$ variables need to be solved in the second case; see Section 3.

5. Concluding remarks. We have presented a robust scheme to solve steady fluid flows that involves an implicit preconditioned search direction to approximate the Newton direction directly. Our approach is in sharp contrast with the classical Newton type schemes (e.g., Newton-Krylov [21], defect corrections [31]) in which the nonlinear problem is linearized at every Newton iteration and a preconditioner is developed for the inner linear solver in the standard way. Here, we propose to avoid the linearization process by preconditioning (implicitly) the nonlinear problem. The efficiency of the scheme is increased when a fast solver is used for the preconditioned linear system. The results we obtain on the numerical solution of NSE, by adding a globalization strategy, show that the proposed method is robust. As it has been already established, it is harder to solve directly the steady-state NSE than to compute the steady state by time marching schemes applied to the evolutionary equation.

[FIGURE 4.3 OMITTED]

[FIGURE 4.4 OMITTED]

The new method is also flexible since the choice of the preconditioning step is completely free. We would like to stress that the preconditioned globalized spectral residual method can be applied to a large number of scientific computing problems, especially when no (simple) preconditioner can be built, such as in Computational Fluid Dynamics (CFD), and also in numerical linear algebra when solving Riccati matrix equations or some other nonlinear matrix problems. These are topics that deserve further investigation.

Dedicated to Victor Pereyra on the occasion of his 70th birthday

Acknowledgments. This work was supported by SIMPAF project from INRIA futurs.

REFERENCES

[1] F. ALVAREZ, H. ATTOUCH, J. BOLTE, AND P. REDONT, A second-order gradient-like dissipative dynamical system with Hessian-driven damping. Application to optimization and mechanics, J. Math. Pures Appl., 81 (2002), pp. 747-779.

[2] J. BARZILAI AND J. M. BORWEIN, Two-point step size gradient methods, IMA J. Numer. Anal., 8 (1988), pp. 141-148.

[3] M. BEN-ARTZI, J.-P. CROISILLE, D. FISHELOV, AND S. TRACHTENBERG, A pure-compact scheme for the streamfunction formulation of Navier-Stokes equations, J. Comput. Phys., 205 (2005), pp. 640-664.

[4] C. BREZINSKI AND J.-P. CHEHAB, Nonlinear hybrid procedures and fixed point iterations, Numer. Funct. Anal. Optim., 19 (1998), pp. 465-487.

[5] P. BROWN AND Y. SAAD, Hybrid Krylov methods for nonlinear systems of equations, SIAM J. Sci. Comput., 11 (1990), pp. 450-481.

[6] P. BROWN AND Y. SAAD, Convergence theory of nonlinear Newton-Krylov algorithms, SIAM J. Optim., 4 (1994), pp. 297-330.

[7] C.-H. BRUNEAU AND C. JOURON, An efficient scheme for solving steady incompressible Navier-Stokes equations, J. Comput. Phys., 89 (1990), pp. 389-413.

[8] A. CAUCHY, Méthode générale pour la résolution des systèmes d'équations simultanées, C. R. Acad. Sci. Paris, 25 (1847), pp. 536-538.

[9] J.-P. CHEHAB AND J. LAMINIE, Differential equations and solution of linear systems, Numer. Algorithms, 40 (2005), pp. 103-124.

[10] J.-P. CHEHAB AND M. RAYDAN, Inverse and adaptive preconditioned gradient methods for nonlinear problems, Appl. Numer. Math., 55 (2005), pp. 32-47.

[11] M. CROUZEIX, Approximation et méthodes itératives de résolution d'inéquations variationnelles et de problèmes non linéaires, IRIA, 12 (1974), pp. 139-244.

[12] W. LA CRUZ, J. M. MARTINEZ, AND M. RAYDAN, Spectral residual method without gradient information for solving large-scale nonlinear systems, Math. Comp., 75 (2006), pp. 1449-1466.

[13] Y. H. DAI AND L. Z. LIAO, R-linear convergence of the Barzilai and Borwein gradient method, IMA J. Numer. Anal., 22 (2002), pp. 1-10.

[14] J. E. DENNIS, JR. AND R. B. SCHNABEL, Numerical Methods for Unconstrained Optimization and Nonlinear Equations, Prentice-Hall, Englewood Cliffs, NJ, 1983.

[15] E. ERTURK, T. C. CORKE, AND C. GOKCOL, Numerical solutions of 2-D steady incompressible driven cavity flow at high Reynolds numbers, Internat. J. Numer. Methods Fluids, 48 (2005), pp. 747-774.

[16] D. EUVRARD, Résolution numérique des équations aux dérivées partielles, third ed., Dunod, Paris, 1994.

[17] R. FLETCHER, On the Barzilai-Borwein method, in Optimization and Control with Applications, L. Qi, K. L. Teo, and X. Q. Yang, eds., Springer, New York, 2005, pp. 235-256.

[18] U. GHIA, K. N. GHIA, AND C. T. SHIN, High-Re solutions of incompressible flow using the Navier-Stokes equations and the multigrid method, J. Comput. Phys., 48 (1982), pp. 387-411.

[19] O. GOYON, High-Reynolds number solutions of Navier-Stokes equations using incremental unknowns, Comput. Methods Appl. Mech. Engrg., 130 (1996), pp. 319-335.

[20] U. HELMKE AND J. B. MOORE, Optimization and Dynamical Systems, Springer, Berlin, 1994.

[21] C. T. KELLEY, Iterative Methods for Linear and Nonlinear Equations, SIAM, Philadelphia, 1995.

[22] W. LA CRUZ AND M. RAYDAN, Nonmonotone spectral methods for large-scale nonlinear systems, Optim. Methods and Softw., 18 (2003), pp. 583-599.

[23] F. LUENGO, M. RAYDAN, W. GLUNT, AND T. HAYDEN, Preconditioned spectral gradient method, Numer. Algorithms, 30 (2002), pp. 241-258.

[24] F. PASCAL, Méthodes de Galerkin non linéaires en discrétisation par éléments finis et pseudo-spectrale. Application à la mécanique des fluides, Ph.D. thesis, Département de Mathématiques, Université Paris XI Orsay, Jan. 1992.

[25] R. PEYRET AND R. TAYLOR, Computational Methods for Fluid Flows, Springer, New York, 1983.

[26] M. RAYDAN, On the Barzilai and Borwein choice of the steplength for the gradient method, IMA J. Numer. Anal., 13 (1993), pp. 321-326.

[27] M. RAYDAN, The Barzilai and Borwein gradient method for the large scale unconstrained minimization problem, SIAM J. Optim., 7 (1997), pp. 26-33.

[28] J. SHEN, Numerical simulation of the regularized driven cavity flows at high Reynolds numbers, Comput. Methods Appl. Mech. Engrg., 80 (1990), pp. 273-280.

[29] J. SHEN, Hopf bifurcation of the unsteady regularized driven cavity flow, J. Comput. Phys., 95 (1991), pp. 228-245.

[30] L. SONKE TABUGUIA, Étude numérique des équations de Navier-Stokes en milieux multiplement connexes, en formulation vitesse-tourbillon, par une approche multidomaine, Ph.D. thesis, Département de Mathématiques, Université Paris XI Orsay, 1989.

[31] S. TUREK, Efficient Solvers for Incompressible Flow Problems, Springer, Berlin, 1999.

JEAN-PAUL CHEHAB ([dagger]) and MARCOS RAYDAN ([double dagger])

* Received April 4, 2008. Accepted April 6, 2009. Published online on September 17, 2009. Recommended by Jose Castillo.

([dagger]) Laboratoire Amiénois de Mathématiques Fondamentales et Appliquées, UMR 6140, UFR de mathématiques et informatique, Université de Picardie Jules Verne, 33 rue Saint Leu, 80037 Amiens, France; INRIA Nord-Europe, project SIMPAF, 59655 Villeneuve d'Ascq, France; and Laboratoire de Mathématiques d'Orsay, bât. 425, CNRS 6148, ANEDP, Univ. Paris-Sud Orsay, F-91405 (jean-paul.chehab@u-picardie.fr).

([double dagger]) Departamento de Cómputo Científico y Estadísticas, Universidad Simón Bolívar, Ap. 89000, Caracas 1080-A, Venezuela, and Departamento de Computación, Facultad de Ciencias, Universidad Central de Venezuela, Ap. 47002, Caracas 1041-A, Venezuela (marcos.raydan@ciens.ucv.ve).


In this work we adapt and extend the ideas introduced in [12, 22] to large-scale nonlinear problems such as those that appear when solving steady fluid flow problems. In particular, we add a preconditioning strategy fully described in [10]. The lid-driven cavity problem is a numerically challenging standard benchmark problem for nonlinear solvers that corresponds to computing either evolving or steady flows of the two-dimensional incompressible Navier-Stokes equation on a rectangular cavity. To compute steady states, two approaches are commonly considered: time-dependent methods that compute the steady state by applying a time marching scheme to the evolutionary NSE (for Reynolds numbers that are lower than the bifurcation value), and methods that solve the steady-state NSE by fixed point or Newton-like schemes. It is well known that direct solution of the steady-state NSE is more difficult than time marching, and that it requires very robust schemes, especially as the Reynolds number Re increases. The literature on this topic is very rich, from, e.g., the relaxation schemes proposed by Crouzeix [11] to the more recent defect-correction methods; see [31] and the references therein. However, these methods are very closely related to the structure of the NSE and use a linearization of the equation at each step.

Our aim in this article is to compute the solution of the steady-state NSE by an implicit preconditioned version of the spectral residual method, with globalization. The method we introduce here uses only the solution of the linear part of the equation, which can be obtained efficiently with a fast solver (e.g., FFT and multigrid).

The article is organized as follows. First, in Section 2, after describing the inverse preconditioning strategy and the globalization strategy for the residual scheme, we derive our new algorithm combining the dynamical and the optimization approach. Then, in Section 3, we adapt the discretization of the steady-state two-dimensional incompressible Navier-Stokes equations to the framework of the nonlinear scheme. Finally, in Section 4, as a numerical illustration, we present the solution of the steady-state NSE for different Reynolds numbers (up to Re = 5000). We solve the problem in the primary variables as well as in stream function-vorticity formulation. Our results agree with the ones in the literature and show the robustness of the proposed method.

2. The preconditioned residual method. We consider the nonlinear system of equations

F(x) = 0, (2.1)

where F : [R.sup.n] [right arrow] [R.sup.n] is a continuously differentiable mapping. This framework generalizes the nonlinear systems that appear, for example, after discretizing the steady-state models for fluid flows, to be discussed later in this work.

To solve (2.1), some iterative schemes have been recently presented that systematically use the residual vectors as search directions [12, 22]; i.e., the iterations are defined as

[x.sub.k+1] = [x.sub.k] [+ or -] [[lambda].sub.k] F([x.sub.k]), (2.2)

where [[lambda].sub.k] > 0 is the step-length and the search direction is either F([x.sub.k]) or -F([x.sub.k]) depending on which one is a descent direction for the merit function

f(x) = [[parallel]F(x)[parallel].sup.2.sub.2] = F[(x).sup.T] F(x). (2.3)

These schemes are effective and competitive with Newton-Krylov schemes for large-scale nonlinear systems [5, 6, 21] when the step lengths are suitably chosen. The convergence of (2.2) is associated with a derivative-free non-monotone line search, fully described in [12], which will be discussed in the forthcoming subsections.

There are many choices of the step length [[lambda].sub.k] for which convergence is guaranteed. The well-known non-monotone spectral choice has interesting properties; it is defined as the absolute value of

[s.sup.T.sub.k-1][s.sub.k-1]/[s.sup.T.sub.k-1][y.sub.k-1], (2.4)

where [s.sub.k-1] = [x.sub.k] - [x.sub.k-1], and [y.sub.k-1] = F([x.sub.k]) - F([x.sub.k-1]). Obtaining the step length using (2.4) requires a reduced amount of computational work, accelerates the convergence of the process, and involves the last two iterations in a way that incorporates first-order information into the search direction [2, 13, 17, 26].
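As an illustration, the spectral step length (2.4) takes only a few lines to compute. The sketch below (Python/NumPy, illustrative names) adds a safeguard for a vanishing denominator, which the formula itself does not address.

```python
import numpy as np

def spectral_steplength(x_k, x_km1, F_k, F_km1):
    """Spectral choice (2.4): |s^T s / s^T y| with
    s = x_k - x_{k-1} and y = F(x_k) - F(x_{k-1})."""
    s = x_k - x_km1
    y = F_k - F_km1
    denom = s @ y
    if denom == 0.0:
        return 1.0  # safeguard (not in (2.4)): fall back to a unit step
    return abs((s @ s) / denom)
```

Only the last two iterates and residuals are needed, which is what makes the choice inexpensive.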

In the forthcoming subsections, we discuss the preconditioned version of the basic scheme (2.2) and the specific inverse preconditioner to be used in this work, the globalization strategy to guarantee global convergence, and how to obtain the step lengths that accelerate the convergence while improving the stability of the process.

2.1. The preconditioned scheme. In order to present the preconditioned version of (2.2) we extend the ideas discussed in [23] for unconstrained minimization to the solution of (2.1). The well-known and somehow ideal Newton's method for solving (2.1) from an initial guess [x.sub.0] can be written as

[x.sub.k+1] = [x.sub.k] - [J.sup.-1.sub.k]F([x.sub.k]), (2.5)

where [J.sub.k] = J([x.sub.k]) and J(x) is the Jacobian of F evaluated at x.

Recently, a preconditioned scheme, associated to the gradient direction, was proposed to solve unconstrained minimization problems [23]. When solving (2.1), the preconditioned version of (2.2) produces the iterates

[x.sub.k+1] = [x.sub.k] + [[lambda].sub.k][z.sub.k], (2.6)

where [z.sub.k] = [+ or -][C.sub.k] F([x.sub.k]), [C.sub.k] is a nonsingular approximation to [J.sup.-1.sub.k], and the scalar [[lambda].sub.k] is given by

[[lambda].sub.k] = ([[lambda].sub.k-1]) [z.sup.T.sub.k-1][z.sub.k-1]/[z.sup.T.sub.k-1][y.sub.k-1]. (2.7)

In (2.6), if [C.sub.k] = I (the identity matrix) for all k, then [z.sub.k] = [+ or -]F([x.sub.k]), and since [s.sub.k-1] = [[lambda].sub.k-1][z.sub.k-1], [[lambda].sub.k] coincides with (2.4). Therefore, if [C.sub.k] = I the method reduces to (2.2). On the other hand, if the sequence of iterates converges to [x.sup.*] and we improve the quality of the preconditioner so that [C.sub.k] converges to [J.sup.-1]([x.sup.*]), then, as discussed in [10], [[lambda].sub.k] tends to one and we recover Newton's method, which possesses fast local convergence under standard assumptions [14]. In that sense, the iterative scheme (2.6) is flexible, and allows intermediate options by choosing suitable approximations [C.sub.k] between the identity matrix and the inverse of the Jacobian matrix. For building suitable approximations to [J.sup.-1]([x.sub.k])F([x.sub.k]), we will test implicit preconditioning schemes that do not require the explicit computation of [C.sub.k], as described in Section 2.2.
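A minimal sketch of iteration (2.6)-(2.7) follows (Python/NumPy, illustrative names; no globalization, and the sign of the direction is fixed to z_k = -C_k F(x_k), so convergence requires a well-behaved F).

```python
import numpy as np

def preconditioned_residual(F, x0, C=None, tol=1e-10, maxit=200):
    """Iteration (2.6)-(2.7): x_{k+1} = x_k + lam_k z_k with
    z_k = -C_k F(x_k).  With C = None (i.e., C_k = I) this
    reduces to the basic residual scheme (2.2)."""
    x = np.asarray(x0, dtype=float)
    z_prev = r_prev = None
    lam = 1.0
    for _ in range(maxit):
        r = F(x)
        if np.linalg.norm(r) < tol:
            break
        z = -(C(r) if C is not None else r)           # z_k = -C_k F(x_k)
        if z_prev is not None:
            y = r - r_prev                            # y_{k-1} = F(x_k) - F(x_{k-1})
            denom = z_prev @ y
            if denom != 0.0:
                lam = lam * (z_prev @ z_prev) / denom  # step length (2.7)
        x = x + lam * z
        z_prev, r_prev = z, r
    return x
```

On an affine F the step (2.7) recovers the exact Newton step after one warm-up iteration, which illustrates the remark above that improving C_k drives lam_k toward one.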

2.2. Inverse preconditioning schemes. By adapting our recent work on approximating the Newton direction using an ordinary differential equation (ODE) model, we will obtain the inverse preconditioner for (2.1) in the framework of the iterative global preconditioned residual algorithm presented in the next subsection. To that end, we develop an automatic and implicit scheme to directly approximate the preconditioned direction [z.sub.k] at every step in (2.6) without a priori knowledge of the Jacobian of F and with only a reduced and controlled amount of storage and computation. As we will discuss later, this new scheme avoids as much as possible the cost of any calculations involving matrices, and also allows us to obtain asymptotically the Newton direction by improving the accuracy in the ODE solver.

The method we introduce here starts from the numerical integration of the Newton flow aimed at computing the root of F as the stable steady state of

dx/dt = -[(J(x)).sup.-1]F(x). (2.8)

Notice that the preconditioned gradient method can be obtained by applying a forward Euler method to (2.8); see, e.g., [1]. The value [parallel]F(x)[parallel] is decreasing along the integral curves and converges at an exponential rate to zero; for details, see [1, 20] and references therein. Introducing the decoupling

dx/dt = -z (2.9)

J(x)z = F(x), (2.10)

we see that the algebraic condition that links z to x is, in fact, a preconditioning equation. In order to relax its resolution, a time derivative in z is added as

dx/dt = -z, (2.11)

dz/dt = F(x) - J(x)z. (2.12)

This last system allows us to compute numerically the root of F by an explicit time marching scheme since the steady state is asymptotically stable; see [10] for more details. Let [t.sub.k] be discrete times, and denote by [x.sub.k] and [z.sub.k] the numerical approximations to x([t.sub.k]) and z([t.sub.k]). The application of the simple forward Euler method to (2.11)-(2.12) reads

[x.sub.k+1] = [x.sub.k] - ([t.sub.k+1] - [t.sub.k])[z.sub.k], (2.13)

[z.sub.k+1] = [z.sub.k] + ([t.sub.k+1] - [t.sub.k]) (F([x.sub.k]) - J([x.sub.k])[z.sub.k]). (2.14)

Remark 2.1. As stated above, we want to avoid the computation of the Jacobian matrix, so J(x)z is classically approximated by the finite difference scheme

J(x)z [approximately equal to] (F(x + [tau]z) - F(x))/[tau],

for a small given positive real number [tau].

Notice that the dynamics of the differential system (2.11)-(2.12) can be very slow. As proposed in [10], a way to speed up the convergence to the steady state is to artificially introduce two time scales by computing for every discrete time [t.sub.k] an approximation of the steady state of the equation in z. More precisely, we write

Step 1. With optimization method 1, compute [z.sub.k] as an approximation of the steady state of

dz/dt = F([x.sub.k]) - (F([x.sub.k] + [tau]z) - F([x.sub.k]))/[tau],

z(0) = [z.sub.k-1].

Step 2. With optimization method 2, compute [x.sub.k+1] from [x.sub.k] by

[x.sub.k+1] = [x.sub.k] + [[lambda].sub.k][z.sub.k].

The preconditioning relies on the accuracy for solving Step 1. As optimization method 1, we proposed in [10] to apply some iterations of Cauchy-like schemes, which we describe in Section 2.3. As optimization method 2, which defines the step [[lambda].sub.k], we use the spectral gradient method. Promising results were obtained in [10] on some classical optimization problems. However, the solution of the steady-state NSE needs a more robust scheme for the time marching of [x.sub.k]. The IPR scheme described below becomes crucial in practical cases. We now present the general form of the scheme.

ALGORITHM 2.2 (Implicit Preconditioned Residual Method (IPR)).

Step 1. With Cauchy-like minimization, compute [z.sub.k] as an approximation of the steady state of

dz/dt = F([x.sub.k]) - (F([x.sub.k] + [tau]z) - F([x.sub.k]))/[tau],

z(0) = [z.sub.k-1].

Step 2. With GPR, compute [x.sub.k+1] from [x.sub.k] by

[x.sub.k+1] = [x.sub.k] + [[lambda].sub.k][z.sub.k].
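To fix ideas, here is a bare sketch of IPR (Python/NumPy, illustrative names). For Step 1 it uses a simple one-step residual-minimizing (Cauchy-like) inner iteration rather than the EC1/EC2 variants of Section 2.3, and Step 2 keeps lam_k fixed instead of using the GPR step, so this is only a skeleton of Algorithm 2.2, not the authors' full method.

```python
import numpy as np

def inner_precondition(F, x, z0, nprec, tau=1e-8):
    """Step 1 sketch: approximate z ~ J(x)^{-1} F(x) by nprec
    residual-minimizing iterations on J(x) z = F(x), using the
    matrix-free product J(x)v ~ (F(x + tau*v) - F(x))/tau."""
    r = F(x)
    z = z0.copy()
    Av = lambda v: (F(x + tau * v) - F(x)) / tau
    for _ in range(nprec):
        g = r - Av(z)            # residual of the preconditioning equation
        Ag = Av(g)
        denom = Ag @ Ag
        if denom == 0.0:
            break
        z = z + ((g @ Ag) / denom) * g   # step minimizing ||r - A(z + dt*g)||
    return z

def ipr(F, x0, nprec=5, lam=1.0, tol=1e-10, maxit=100):
    """Bare IPR skeleton: Step 1 builds z_k, Step 2 updates the iterate.
    Since z_k ~ J^{-1} F, the update approximates a (damped) Newton step."""
    x = np.asarray(x0, dtype=float)
    z = np.zeros_like(x)
    for _ in range(maxit):
        if np.linalg.norm(F(x)) < tol:
            break
        z = inner_precondition(F, x, z, nprec)
        x = x - lam * z
    return x
```

Increasing nprec makes z_k a better approximation of the Newton direction, at the price of more residual evaluations per outer iteration.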

2.3. Generalized Cauchy-type minimization for [z.sub.k]. We now describe several ways of computing Cauchy-type step lengths for building [z.sub.k] in Step 1 of the IPR method. The computation of a steady state by an explicit scheme can be accelerated by enhancing the stability domain of the scheme, since this allows the use of larger time steps. In that sense, the accuracy of a time marching scheme is not a priority. A simple way to derive more stable methods is to use parametrized one-step schemes and to fit the parameters not to increase the accuracy, as in classical Runge-Kutta methods, but to improve the stability. For example, in [4, 9], a method was proposed to iteratively compute fixed points with a larger descent parameter, starting from a specific numerical time scheme. More precisely, defining G(z) = F([x.sup.k]) - (F([x.sup.k] + [tau]z) - F([x.sup.k]))/[tau], this method consists of integrating the differential equation

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], (2.15)

by the p-step scheme

Here [[summation].sup.p.sub.i=0] [[beta].sub.i] = 1. At convergence, we obtain [bar.y], and we set [z.sub.k] = [bar.y], which will be used as our preconditioned direction at iteration k.

Classically, the convergence can be accelerated by computing at each iteration the step length that minimizes the Euclidean norm of the current residual. This gives rise to variants of the Cauchy scheme [8]. Of course, the minimizing parameter becomes harder to compute as p increases. We now list the optimal values of the parameters for p = 1, 2, 3. The case p = 3 represents, as far as we know, a new choice of step length for residual methods. In what follows, for any two vectors v and w, (v,w) = [v.sup.T]w, [r.sup.k] = F([x.sub.k]), and

Av = (F([x.sub.k] + [tau]v) - F([x.sub.k]))/[tau] [approximately equal to] J([x.sub.k])v.

* p = 1 (Cauchy method)

[[beta].sup.k.sub.i] = 1, [DELTA][t.sub.k] = ([r.sup.k], A[r.sup.k])/(A[r.sup.k], A[r.sup.k]).

* p = 2 (Enhanced Cauchy 1 (EC1); see [9, 10])

We set

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII].

* p = 3 (Enhanced Cauchy 2 (EC2))

We set

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII].

2.4. Globalization strategy. In order to guarantee that the implicit preconditioned residual (IPR) method converges from any initial guess, we need to add a globalization strategy. This is an important feature, especially when dealing with highly nonlinear flow problems and high Reynolds numbers. Indeed, for highly nonlinear problems, pure methods (i.e., without a globalization strategy) will diverge unless the initial guess is very close to the solution, which is not a practical assumption. To avoid the derivatives of the merit function, which are not available, we will adapt the recently developed globalization strategy of La Cruz et al. [12] to our preconditioned version.

Assume that {[[eta].sub.k]} is a sequence such that [[eta].sub.k] > 0 for all k [member of] N, and

[summation over (k=0 to [infinity])] [[eta].sub.k] < [infinity]. (2.16)

Assume also that 0 < [gamma] < 1 and 0 < [[sigma].sub.min] < [[sigma].sub.max] < [infinity]. Let M be a positive integer. Let [[tau].sub.min], [[tau].sub.max] be such that 0 < [[tau].sub.min] < [[tau].sub.max] < 1. Given an arbitrary initial point [x.sub.0] [member of] [R.sup.n], an algorithm that allows us to obtain [x.sub.k+1] starting from [x.sub.k] is given below.

ALGORITHM 2.3 (Global Implicit Preconditioned Residual Method (GIPR)).

Step 1.

Choose [[lambda].sub.k] such that [absolute value of [[lambda].sub.k]] [member of] [[[sigma].sub.min], [[sigma].sub.max]] (described in the next subsection).

Build the vector [C.sub.k] F([x.sub.k]) (inverse preconditioned direction).

Compute [[bar.f].sub.k] = max{f([x.sub.k]), ..., f([x.sub.max{0,k-M+1}])}.

Set z [left arrow] -[[lambda].sub.k][C.sub.k] F ([x.sub.k]).

Set [[alpha].sub.+] [left arrow] 1, [[alpha].sub.-] [left arrow] 1.

Step 2.

If f([x.sub.k] + [[alpha].sub.+]z) [less than or equal to] [[bar.f].sub.k] + [[eta].sub.k] - [gamma][[alpha].sup.2.sub.+]f([x.sub.k]), set [x.sub.k+1] = [x.sub.k] + [[alpha].sub.+]z and stop Step 2.

Else, if f([x.sub.k] - [[alpha].sub.-]z) [less than or equal to] [[bar.f].sub.k] + [[eta].sub.k] - [gamma][[alpha].sup.2.sub.-]f([x.sub.k]), set [x.sub.k+1] = [x.sub.k] - [[alpha].sub.-]z and stop Step 2.

Otherwise, choose [[alpha].sub.+new] [member of] [[[tau].sub.min][[alpha].sub.+], [[tau].sub.max][[alpha].sub.+]] and [[alpha].sub.-new] [member of] [[[tau].sub.min][[alpha].sub.-], [[tau].sub.max][[alpha].sub.-]], set [[alpha].sub.+] [left arrow] [[alpha].sub.+new], [[alpha].sub.-] [left arrow] [[alpha].sub.-new], and repeat Step 2.

REMARK 2.4. As discussed in [12], the algorithm is well defined, i.e., the backtracking process (choosing [[alpha].sub.+new] and [[alpha].sub.-new]) is guaranteed to terminate successfully in a finite number of trials. Moreover, global convergence is also established in [12]. Indeed, if the symmetric part of the Jacobian of F at any [x.sub.k] is positive (or negative) definite for all k, then the sequence {f([x.sub.k])} tends to zero.
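The nonmonotone line search of [12] can be sketched compactly (Python/NumPy, illustrative names and default parameter values; the fixed shrink factor below is one admissible choice inside [tau_min*a, tau_max*a]).

```python
import numpy as np

def gipr_line_search(F, x, z, f_bar, eta, gamma=1e-4,
                     tau_min=0.1, tau_max=0.5, max_back=30):
    """Derivative-free nonmonotone line search in the spirit of
    La Cruz, Martinez, and Raydan [12]: accept x + a*z or x - a*z
    as soon as the merit function f = ||F||^2 satisfies
    f(x_new) <= f_bar + eta - gamma * a^2 * f(x)."""
    f = lambda v: float(np.dot(F(v), F(v)))
    fx = f(x)
    a_plus = a_minus = 1.0
    for _ in range(max_back):
        x_try = x + a_plus * z
        if f(x_try) <= f_bar + eta - gamma * a_plus**2 * fx:
            return x_try
        x_try = x - a_minus * z
        if f(x_try) <= f_bar + eta - gamma * a_minus**2 * fx:
            return x_try
        # backtrack: shrink both trial steps into [tau_min*a, tau_max*a]
        a_plus *= 0.5 * (tau_min + tau_max)
        a_minus *= 0.5 * (tau_min + tau_max)
    return x  # line search failed; caller should stop or restart
```

Because f_bar is the maximum of the last M merit values and eta > 0, the first trial step is accepted far more often than under a monotone decrease condition, which is the point of the strategy.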

3. Application to the solution of the steady 2D lid-driven cavity.

3.1. The problem. The equilibrium state of a driven square cavity is described by the steady-state Navier-Stokes equation, which, in primary variables, can be written as

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]. (3.1)

Here U = (u,v) is the velocity field, P is the pressure, and f is the external force. For our applications, we will consider the so-called driven cavity case, so f = 0 and the fluid is driven by a proper boundary condition BC. We denote the sides of the unit square [OMEGA] by [[GAMMA].sub.i], i = 1, ..., 4: [[GAMMA].sub.1] is the lower horizontal side, [[GAMMA].sub.3] is the upper horizontal side, [[GAMMA].sub.2] is the left vertical side, and [[GAMMA].sub.4] is the right vertical side. In Figure 3.1, we show the location of the main vortices of the steady state of the lid-driven cavity: Primary Vortex, Top Left Vortex (TL1), Bottom Right Vortices (BR1 and BR2), and Bottom Left Vortex (BL1). For additional details on this benchmark application, see [25].

We distinguish two different driven flows, according to the choice of the boundary conditions on the velocity. More precisely U = V = 0 at the boundary except for U on [[GAMMA].sub.3], where we have two options:

* U(x, 1) = 1 : Cavity A (lid-driven cavity)

* U(x, 1) = [(1 - [(1 - 2x).sup.2]).sup.2] : Cavity B (regularized lid driven cavity)

3.2. Discretization and implementation in primary variables.

3.2.1. Discretization. The discretization is performed on staggered grids of MAC type in order to satisfy a discrete inf-sup (or Babuska-Brezzi) condition that guarantees stability; see [25]. Taking N discretization points in each direction on the pressure grid, we obtain the system

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], (3.2)

where U,V [member of] [R.sup.N(N-1)], P [member of] [R.sup.NxN], and [A.sub.u] and [A.sub.v] are the discretization matrices of minus the Laplacian operator on the U and V grids, respectively. Similarly, [B.sub.x] and [B.sub.y] are the discretization matrices of the first derivative in x and y for the pressure in the velocity grids. The operators [L.sub.u] and [L.sub.v] are the nonlinear terms on U and V, respectively; and, finally, [F.sub.1] and [F.sub.2] are the discrete external forces for the horizontal and the vertical velocities. Equation (3.2) is then a square nonlinear system in 2 x N(N - 1) + [N.sup.2] unknowns.

[FIGURE 3.1 OMITTED]

3.2.2. Implementation. The discrete problem (3.2) can be written as a nonlinear system F (U,V,P ) = 0,

with the obvious notation.

Now, let S be the Stokes solution operator defined by

S(F, G, 0) = (U, V, P),

where (U, V, P) solves the linear part of (3.2), i.e., the Stokes problem

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]. (3.3)

Finally we introduce the functional G

G(U, V, P) = S(F(U, V, P)).

The scheme consists of applying the implicit globalized preconditioned gradient method to the differential system

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], (3.4)

where X = (U, V, P) and where H is an approximation to the gradient of G(X), as described in Remark 2.1 of Section 2.3.

3.3. The w - [psi] formulation. One of the advantages of the w - [psi] formulation is that the NSE are decoupled into two problems: a convection-diffusion equation and a Poisson problem. In particular, we can use the FFT to solve the linear problems, as described in this section.

3.3.1. The formulation. The w - [psi] formulation is obtained by taking the curl of the NSE [16, 25]. Let w = [partial derivative]u/[partial derivative]y - [partial derivative]v/[partial derivative]x, and write u = [partial derivative][psi]/[partial derivative]y, v = -[partial derivative][psi]/[partial derivative]x, so that [DELTA][psi] = w. Then we have the equations

-1/Re [DELTA]w + [partial derivative][psi]/[partial derivative]y [partial derivative]w/[partial derivative]x - [partial derivative][psi]/[partial derivative]x [partial derivative]w/[partial derivative]y = 0, (3.5)

[DELTA][psi] = w, (3.6)

w(x, 0) = [w.sub.0](x). (3.7)

The boundary conditions on w are derived using the standard approach based on the discretization of [DELTA][psi] on the boundaries; see [16, 25]. With the conditions on u and v [16, 25], we have

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII].

Therefore, since [[psi].sub.[partial derivative][OMEGA]] = 0 and u = [partial derivative][psi]/[partial derivative]y, v = -[partial derivative][psi]/[partial derivative]x, we obtain by using Taylor expansions

[w.sub.i,0] = 1/(2[h.sup.2]) (-[[psi].sub.i,2] + 8[[psi].sub.i,1]), (3.8)

[w.sub.i,N+1] = 1/(2[h.sup.2]) (-[[psi].sub.i,N-1] + 8[[psi].sub.i,N] - 6hg(ih)), (3.9)

[w.sub.0,j] = 1/(2[h.sup.2]) (-[[psi].sub.2,j] + 8[[psi].sub.1,j]), (3.10)

[w.sub.N+1,j] = 1/(2[h.sup.2]) (-[[psi].sub.N-1,j] + 8[[psi].sub.N,j]). (3.11)

Here g(x) denotes the boundary condition function for the horizontal velocity on the boundary [[GAMMA].sub.3]. Homogeneous Dirichlet boundary conditions apply to [psi], and the operators are discretized by second-order centered schemes on a uniform mesh composed of N points in each direction of the domain, with step-size h = 1/(N + 1). The total number of unknowns is then 2[N.sup.2].

The boundary conditions on w are implemented iteratively according to the relations (3.8)-(3.11), making the finite difference scheme second order accurate.

3.3.2. Implementation. With (3.8)-(3.11), we can compute the boundary conditions for w. We denote by [[partial derivative].sup.h.sub.x]([psi]), [[partial derivative].sup.h.sub.y]([psi]), and [[partial derivative].sup.h.sub.[DELTA]]([psi]) the contributions of the boundary conditions to the discretization operators of [[partial derivative].sub.x], [[partial derivative].sub.y], and -[DELTA]. The problem to solve is

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII].

Here A is the discretization matrix of -[DELTA], and [D.sub.x] and [D.sub.y] are the discretization matrices of [[partial derivative].sub.x] and [[partial derivative].sub.y] respectively. The problem to solve is then

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII].

For convenience, we set X = (w, [psi]). Now, as we did for the primary variables formulation, we return to the dynamical system framework of the method: we set

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII],

and we consider the evolutionary system

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], (3.12)

where HZ is an approximation of the gradient of G(X) at Z, as described in Remark 2.1. Here, A is the classical pentadiagonal finite difference matrix for the Laplace operator on a square, and linear systems with A can be cheaply solved using fast solvers such as FFT or multigrid. In this paper, we will use the FFT.
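The fast Poisson solve exploits the fact that A is diagonalized by the discrete sine basis. The sketch below (Python/NumPy, illustrative names) uses dense sine-matrix products for clarity; a practical implementation replaces them with an FFT-based discrete sine transform, giving O(N^2 log N) cost per solve.

```python
import numpy as np

def poisson_dirichlet(w, h):
    """Solve Delta(psi) = w on the unit square with homogeneous
    Dirichlet BCs, 5-point stencil, N interior points per direction,
    h = 1/(N+1), by diagonalization in the sine basis."""
    N = w.shape[0]
    j = np.arange(1, N + 1)
    S = np.sin(np.pi * np.outer(j, j) / (N + 1))              # sine eigenvectors
    lam = -4.0 / h**2 * np.sin(np.pi * j / (2 * (N + 1)))**2  # 1D eigenvalues of Delta_h
    w_hat = S @ w @ S                                         # forward transform
    psi_hat = w_hat / (lam[:, None] + lam[None, :])           # divide by 2D eigenvalues
    return (2.0 / (N + 1))**2 * (S @ psi_hat @ S)             # inverse (S^{-1} = 2/(N+1) S)
```

Applying the discrete Laplacian to the result recovers the right-hand side to machine precision, since the sine vectors are exact eigenvectors of the 5-point stencil.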

4. Numerical results.

4.1. General implementation of the algorithm. We now list the information required by the GIPR method:

* The positive integer M.

* The parameters [gamma] and [[eta].sub.k].

* The initial value of the descent parameter [[lambda].sub.0].

* The merit function. We use the Euclidean norm of the residual [[parallel]F(X)[parallel].sub.2].

* The accuracy of the global method: the solution is considered accurate when [[parallel]F(X)[parallel].sub.2] < [10.sup.-6].

* The accuracy imposed for solving the preconditioning equation

(F([x.sup.k] + [tau]z) - F([x.sup.k]))/[tau] - F([x.sup.k]) = 0,

which is characterized by

--The choice of the optimization method 1. In our implementation we have used the Enhanced Cauchy 2 as discussed above.

--The number [tau]. We set [tau] = [10.sup.-8].

--The number of iterations nprec, which can vary at each step. We choose to increase nprec as the norm of the residual [r.sup.k] = F([x.sub.k]) decreases, in order to improve the preconditioning near the solution:

Adaptive computation of nprec:

The reasoning behind the decision to improve the quality of the preconditioner near the solution is this: near an isolated solution, the merit function f is strictly convex, and hence it makes sense to reduce the condition number of the approximated positive-definite Hessian of f in the neighborhood of the solution. This is achieved by increasing nprec as the norm of the residual, i.e., the value of the merit function, is reduced.
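The paper's exact rule for nprec is not reproduced in this text; a hypothetical schedule in this spirit (more inner preconditioning iterations as the residual shrinks, so the direction approaches the Newton direction near the solution) might look like the following sketch.

```python
import math

def nprec_schedule(res_norm, n0=2, n_max=20):
    """Hypothetical adaptive rule (illustrative, not the authors'):
    grow the inner iteration count roughly with the number of digits
    gained in the residual norm, capped at n_max."""
    if res_norm <= 0.0:
        return n_max
    if res_norm >= 1.0:
        return n0
    return min(n_max, n0 + int(-math.log10(res_norm)))
```

Any monotone rule with the same qualitative behavior serves the stated purpose.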

4.2. Computation of steady states of NSE. We now present the numerical solution of the steady state of the two-dimensional driven cavity for different Reynolds numbers. Our results agree with those in the literature [3, 7, 15, 18, 19, 24, 28-30] (see figures and tables below); and to demonstrate the robustness of the solution method, we take as the initial guess the solution of the Stokes problem, which becomes farther from the steady state as the Reynolds number increases. We will pay special attention to the solution of the NSE in the w - [psi] formulation. However, let us mention that the scheme also applies to the NSE in primary variables (U - P), the linear solver being a Stokes solver. The crucial practical point is to have at our disposal a fast solver for the linear problems: FFT or multigrid for the w - [psi] formulation, and multigrid Uzawa [7] for the U - P formulation.

As we see in our results, the globalization strategy is important while the residual is not small enough. Furthermore, the preconditioning makes sense "close to the solution." For that reason, we choose to activate the preconditioning progressively as the residual decreases by increasing the number of inner iterations in the solution of the preconditioning step (step 1 of the scheme). This allows us to obtain fast convergence at the end while saving computational time at the beginning.

We observe that the number of outer iterations increases with the Reynolds number but not so much with the dimension of the problem. In all cases, the first part of the convergence process is devoted to "maintaining" the iterates in a neighborhood of the solution. All the computations have been done with MATLAB 7 on a 2Ghz dual core PC with two gigabytes of RAM.

We now present the parameters of the scheme that we used for solving the flow in cavity B and also in cavity A for the stream function-vorticity formulation of NSE. N is the number of discretization points in each direction of the domain. We give the parameter values in Tables 4.1-4.2.

The results are reported in Figures 4.1-4.3. Most of the work is done at the beginning of the iterations, while the globalization is acting to stabilize the iterates. This phenomenon is amplified as the Reynolds number Re becomes large. An acceleration of the convergence is obtained when the residual is small enough, since nprec increases. The shapes of the solutions are identical to those in the literature [3, 7, 15, 16, 18, 19, 24, 28-30]; in particular, the special values agree.

[FIGURE 4.1 OMITTED]

We also report some special values of the solution for Cavity B and Re = 5000 in Table 4.3, where we also compare them with results in the literature.

4.3. Solution of NSE in primary variables. We now present the numerical results for the solution of the steady-state NSE in primary variables. We change the value of [gamma] during the iterations in order to increase the non-monotonicity of the GIPR as follows:

if ||r^k|| < 10^(-3), then [gamma] = 0.9.

[FIGURE 4.2 OMITTED]

This practical choice reduces the number of line searches (backtracking steps) near the solution, and hence the overall computational effort for convergence. In Table 4.4, we present the numerical solution of the cavity B problem for Re = 400 and Re = 1000. The level curves of the pressure, the vorticity, the kinetic energy, and the stream function agree with those in the literature. Notice that fewer iterations are required for convergence than are needed for the same example with the streamfunction-vorticity formulation. This is mainly because the boundary conditions are implemented exactly in the primary variables formulation, whereas they are approximated iteratively in the streamfunction-vorticity formulation. However, the computational effort per iteration is greater for the primary variables formulation: a coupled system of 3N^2 variables must be solved in the first case, whereas two systems of N^2 variables are solved in the second; see Section 3.

5. Concluding remarks. We have presented a robust scheme for solving steady fluid flows that uses an implicit preconditioned search direction to approximate the Newton direction directly. Our approach contrasts sharply with classical Newton-type schemes (e.g., Newton-Krylov [21], defect correction [31]), in which the nonlinear problem is linearized at every Newton iteration and a preconditioner is built for the inner linear solver in the standard way. Here, we avoid the linearization process by preconditioning the nonlinear problem (implicitly). The efficiency of the scheme increases when a fast solver is used for the preconditioned linear system. The results we obtain for the numerical solution of NSE, after adding a globalization strategy, show that the proposed method is robust. As has already been established, it is harder to solve the steady-state NSE directly than to compute the steady state by time-marching schemes applied to the evolutionary equation.

[FIGURE 4.3 OMITTED]

[FIGURE 4.4 OMITTED]

The new method is also flexible since the choice of the preconditioning step is completely free. We would like to stress that the preconditioned globalized spectral residual method can be applied to a large number of scientific computing problems, especially when no (simple) preconditioner can be built, such as in Computational Fluid Dynamics (CFD), and also in numerical linear algebra when solving Riccati matrix equations or some other nonlinear matrix problems. These are topics that deserve further investigation.

Dedicated to Victor Pereyra on the occasion of his 70th birthday

Acknowledgments. This work was supported by SIMPAF project from INRIA futurs.

REFERENCES

[1] F. ALVAREZ, H. ATTOUCH, J. BOLTE, AND P. REDONT, A second-order gradient-like dissipative dynamical system with Hessian-driven damping. Application to optimization and mechanics, J. Math. Pures Appl., 81 (2002), pp. 747-779.

[2] J. BARZILAI AND J. M. BORWEIN, Two-point step size gradient methods, IMA J. Numer. Anal., 8 (1988), pp. 141-148.

[3] M. BEN-ARTZI, J.-P. CROISILLE, D. FISHELOV, AND S. TRACHTENBERG, A pure-compact scheme for the streamfunction formulation of Navier-Stokes equations, J. Comput. Phys., 205 (2005), pp. 640-664.

[4] C. BREZINSKI AND J.-P. CHEHAB, Nonlinear hybrid procedures and fixed point iterations, Numer. Funct. Anal. Optim., 19 (1998), pp. 465-487.

[5] P. BROWN AND Y. SAAD, Hybrid Krylov methods for nonlinear systems of equations, SIAM J. Sci. Comput., 11 (1990), pp. 450-481.

[6]--, Convergence theory of nonlinear Newton-Krylov algorithms, SIAM J. Optim., 4 (1994), pp. 297-330.

[7] C.-H. BRUNEAU AND C. JOURON, An efficient scheme for solving steady incompressible Navier-Stokes equations, J. Comput. Phys., 89 (1990), pp. 389-413.

[8] A. CAUCHY, Methodes generales pour la resolution des systemes d'equations simultanees, C. R. Acad. Sci. Par., 25 (1847), pp. 536-538.

[9] J.-P. CHEHAB AND J. LAMINIE, Differential equations and solution of linear systems, Numer. Algorithms, 40 (2005), pp. 103-124.

[10] J.-P. CHEHAB AND M. RAYDAN, Inverse and adaptive preconditioned gradient methods for nonlinear problems, Appl. Numer. Math., 55 (2005), pp. 32-47.

[11] M. CROUZEIX, Approximation et methodes iteratives de resolution d'inequations variationelles et de problemes non lineaires, IRIA, 12 (1974), pp. 139-244.

[12] W. L. CRUZ, J. M. MARTINEZ, AND M. RAYDAN, Spectral residual method without gradient information for solving large-scale nonlinear systems, Math. Comp., 75 (2006), pp. 1449-1466.

[13] Y. H. DAI AND L. Z. LIAO, r-linear convergence of the Barzilai and Borwein gradient method, IMA J. Numer. Anal., 22 (2002), pp. 1-10.

[14] J. E. DENNIS, JR. AND R. B. SCHNABEL, Numerical Methods for Unconstrained Optimization and Nonlinear Equations, Prentice-Hall, Englewood Cliffs, NJ, 1983.

[15] E. ERTURK, T. C. CORKE, AND C. GOKCOL, Numerical solutions of 2-D steady incompressible driven cavity flow at high Reynolds numbers, Internat. J. Numer. Methods Fluids, 48 (2005), pp. 747-774.

[16] D. EUVRARD, Resolution numerique des equations aux derivees partielles, third ed., Dunod, Paris, 1994.

[17] R. FLETCHER, On the Barzilai-Borwein method, in Optimization and Control with Applications, L. Qi, K. L. Teo, and X. Q. Yang, eds., Springer, New York, 2005, pp. 235-256.

[18] U. GHIA, K. N. GHIA, AND C. T. SHIN, High-Re solutions of incompressible flow using the Navier-Stokes equations and the multigrid method, J. Comput. Phys., 48 (1982), pp. 387-411.

[19] O. GOYON, High-Reynolds number solutions of Navier-Stokes equations using incremental unknowns, Comput. Methods Appl. Mech. Engrg., 130 (1996), pp. 319-335.

[20] U. HELMKE AND J. B. MOORE, Optimization and Dynamical Systems, Springer, Berlin, 1994.

[21] C. T. KELLEY, Iterative Methods for Linear and Nonlinear Equations, SIAM, Philadelphia, 1995.

[22] W. LA CRUZ AND M. RAYDAN, Nonmonotone spectral methods for large-scale nonlinear systems, Optim. Methods and Softw., 18 (2003), pp. 583-599.

[23] F. LUENGO, M. RAYDAN, W. GLUNT, AND T. HAYDEN, Preconditioned spectral gradient method, Numer. Algorithms, 30 (2002), pp. 241-258.

[24] F. PASCAL, Methodes de Galerkin non lineaires en discretisation par elements finis et pseudo-spectrale. Application a la mecanique des fluides, Ph.D. thesis, Departement de Mathematiques, Universite Paris XI Orsay, Jan. 1992.

[25] R. PEYRET AND R. TAYLOR, Computational Methods for Fluid Flows, Springer, New York, 1983.

[26] M. RAYDAN, On the Barzilai and Borwein choice of the steplength for the gradient method, IMA J. Numer. Anal., 13 (1993), pp. 321-326.

[27]--, The Barzilai and Borwein gradient method for the large scale unconstrained minimization problem, SIAM J. Optim., 7 (1997), pp. 26-33.

[28] J. SHEN, Numerical simulation of the regularized driven cavity flows at high Reynolds numbers, Comput. Methods Appl. Mech. Engrg., 80 (1990), pp. 273-280.

[29]--, Hopf bifurcation of the unsteady regularized driven cavity flow, J. Comput. Phys., 95 (1991), pp. 228-245.

[30] L. SONKE TABUGUIA, Etude numerique des equations de Navier-Stokes en milieux multiplement connexes, en formulation vitesse-tourbillon, par une approche multidomaine, Ph.D. thesis, Departement de Mathematiques, Universite Paris XI Orsay, 1989.

[31] S. TUREK, Efficient Solvers for Incompressible Flow Problems, Springer, Berlin, 1999.

JEAN-PAUL CHEHAB ([dagger]) and MARCOS RAYDAN ([double dagger])

* Received April 4, 2008. Accepted April 6, 2009. Published online on September 17, 2009. Recommended by Jose Castillo.

([dagger]) Laboratoire Amienois de Mathematiques Fondamentales et Appliquees, UMR 6140, UFR de mathematiques et informatique, Universite de Picardie Jules Verne, 33 rue Saint Leu, 80037 Amiens, France; INRIA Nord-Europe, project SIMPAF, 59655 Villeneuve d'Ascq, France; and Laboratoire de Mathematiques d'Orsay bat. 425, CNRS 6148, ANEDP, Univ. Paris-Sud Orsay, F-91405 (jean-paul.chehab@u-picardie.fr).

([double dagger]) Departamento de Computo Cientifico y Estadisticas, Universidad Simon Bolivar, Ap. 89000, Caracas, 1080-A, Venezuela, and Departamento de Computacion, Facultad de Ciencias, Universidad Central de Venezuela, Ap. 47002, Caracas 1041-A, Venezuela (marcos.raydan@ciens.ucv.ve).

Given y_0 = z_(k-1)
For j = 0, 1, ..., until convergence:
    Set K_1 = G(y_j)
    For m = 2, ..., p:
        Set K_m = G(x_k + [DELTA]t K_(m-1))
    Set y_(j+1) = y_j + [DELTA]t (sum over i = 1, ..., p of [beta]_i K_i)
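A runnable sketch of this Runge-Kutta-type inner iteration follows. The map `G`, the stage number `p`, the step `dt`, the iteration count, and the equal weights `beta` are illustrative placeholders, not the tuned values used in the experiments.

```python
import numpy as np

def enhanced_cauchy_step(G, x_k, z_prev, dt=0.1, p=2, beta=None, n_iter=5):
    """Sketch of the RK-type inner preconditioning iteration above."""
    if beta is None:
        beta = np.full(p, 1.0 / p)       # placeholder equal weights
    y = np.asarray(z_prev, dtype=float).copy()
    for _ in range(n_iter):
        K = [G(y)]                       # first stage at the inner iterate
        for m in range(1, p):
            K.append(G(x_k + dt * K[m - 1]))  # later stages recentred at x_k
        y = y + dt * sum(b * k for b, k in zip(beta, K))
    return y
```

With a contractive map such as G(y) = -y, the inner iterates are damped toward the fixed point, which is the smoothing behavior the preconditioning step relies on.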

If f(x_k + [alpha]_+ z) <= fbar_k + [eta]_k - [gamma] [alpha]_+^2 ||z||_2^2, then
    define z_k = z, [alpha]_k = [alpha]_+, x_(k+1) = x_k + [alpha]_k z_k;
else if f(x_k - [alpha]_- z) <= fbar_k + [eta]_k - [gamma] [alpha]_-^2 ||z||_2^2, then
    define z_k = -z, [alpha]_k = [alpha]_-, x_(k+1) = x_k + [alpha]_k z_k;
else
    choose [alpha]_+new in [[tau]_min [alpha]_+, [tau]_max [alpha]_+] and
           [alpha]_-new in [[tau]_min [alpha]_-, [tau]_max [alpha]_-];
    replace [alpha]_+ <- [alpha]_+new and [alpha]_- <- [alpha]_-new, and go to Step 2.
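The two-sided nonmonotone backtracking above can be sketched as follows. The fixed shrinking factor `tau` stands in for a choice inside [tau_min, tau_max], and the unit initial trial steps are illustrative simplifications.

```python
import numpy as np

def two_sided_linesearch(f, x, z, fbar, eta, gamma=1e-4, tau=0.5,
                         max_backtracks=50):
    """Sketch: try +z and -z against the nonmonotone sufficient-decrease
    bound fbar + eta - gamma * alpha^2 * ||z||^2, shrinking on failure."""
    a_plus = a_minus = 1.0
    zz = float(np.dot(z, z))             # ||z||_2^2
    for _ in range(max_backtracks):
        if f(x + a_plus * z) <= fbar + eta - gamma * a_plus**2 * zz:
            return x + a_plus * z, a_plus, +1     # accept forward step
        if f(x - a_minus * z) <= fbar + eta - gamma * a_minus**2 * zz:
            return x - a_minus * z, a_minus, -1   # accept backward step
        a_plus *= tau                    # shrink both trial steps, retry
        a_minus *= tau
    raise RuntimeError("line search did not find an acceptable step")
```

Because the bound allows a controlled increase of f through [eta]_k, the first trial step is often accepted, which is what keeps the scheme cheap away from pathological regions.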

Given nprec_0,
for k = 0, 1, ..., until convergence:
    if ||r^k||_2 < 0.1, then nprec = ceil(-log_10(||r^k||_2) + 1) x nprec_0.
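In code, this adaptation rule amounts to the following small helper (a sketch; the name `adapted_nprec` is ours):

```python
import math

def adapted_nprec(res_norm, nprec0):
    """Inner-iteration count from the rule above: nprec0 until the
    residual drops below 0.1, then one extra multiple of nprec0 per
    decade of further residual decrease."""
    if res_norm >= 0.1:
        return nprec0
    return math.ceil(-math.log10(res_norm) + 1) * nprec0
```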

TABLE 4.1
Cavity B

  Re     N     [gamma]     M   nprec_0   Prec. Method        Adapted nprec   [alpha]_0
  1000   127   9 x 10^2    2   4         Enhanced Cauchy 2   yes             10
  2000   127   9 x 10^11   2   4         Enhanced Cauchy 2   yes             10^4
  5000   255   9 x 10^2    2   4         Enhanced Cauchy 2   yes             10^4

TABLE 4.2
Cavity A

  Re     N     [gamma]     M   nprec_0   Prec. Method        Adapted nprec   [alpha]_0
  1000   127   9 x 10^2    2   4         Enhanced Cauchy 2   yes             10
  3200   255   9 x 10^6    2   4         Enhanced Cauchy 2   yes             10^4

TABLE 4.3
Cavity B, Re = 5000

                           IGPR                IGPR                Shen [29]           Pascal [24]
  Formulation              w-[psi]             w-[psi]             U, P                U, P
  Discretization           FD                  FD                  Spectral            FEM
  Grid/Mesh                127 x 127           255 x 255           33 x 33             129 x 129
  [DELTA]t                 --                  --                  0.03                0.05
  Vortex          x        0.5234              0.51953             0.516               0.5390
                  y        0.539               0.539               0.531               0.5313
                  [psi]    -0.07761            -0.085211           -0.08776            -0.0975
                  w        -1.2687             -1.3866             --                  -2.169
  Vortex (B L)    x        0.078125            0.078125            0.094               0.0859
                  y        0.125               0.125               0.094               0.1172
                  [psi]    6.83393 x 10^-4     7.95 x 10^-4        7.5268 x 10^-4      6.723 x 10^-4
                  w        0.7468              0.844               --                  0.7310
  Vortex (B R)    x        0.8203              0.8164              0.922               0.8047
                  y        0.08593             0.082               0.094               0.0781
                  [psi]    1.8528 x 10^-3      2.041 x 10^-3       0.77475 x 10^-3     2.42 x 10^-3
                  w        1.42177             1.58687             --                  2.009
  Vortex (T L)    x        0.07812             0.0859              0.078               0.0781
                  y        0.9062              0.9101              0.92                0.906
                  [psi]    5.6645 x 10^-4      7.149 x 10^-4       6.778 x 10^-4       7.86 x 10^-4
                  w        0.88813             1.098               --                  1.159

TABLE 4.4
Cavity B

  Re     N     [gamma]_0   M   nprec_0   Prec. Method        Adapted nprec   [alpha]_0
  400    63    10^4        2   5         Enhanced Cauchy 2   yes             10^2
  1000   127   10^4        2   4         Enhanced Cauchy 2   yes             10^4
