
Simple square smoothing regularization operators.
Abstract:
Tikhonov regularization of linear discrete ill-posed problems often is applied with a finite difference regularization operator that approximates a low-order derivative. These operators generally are represented by a banded rectangular matrix with fewer rows than columns. They therefore cannot be applied in iterative methods that are based on the Arnoldi process, which requires the regularization operator to be represented by a square matrix. This paper discusses two approaches to circumvent this difficulty: zero-padding the rectangular matrices to make them square and extending the rectangular matrix to a square circulant. We also describe how to combine these operators by weighted averaging and with orthogonal projection. Applications to Arnoldi and Lanczos bidiagonalization-based Tikhonov regularization, as well as to truncated iteration with a range-restricted minimal residual method, are presented.

Key words. ill-posed problem, regularization operator, Tikhonov regularization, truncated iteration.

AMS subject classifications. 65F10, 65F22, 65R32

Electronic Transactions on Numerical Analysis, 33 (2008). ISSN 1068-9613.
1. Introduction. This paper is concerned with the computation of an approximate solution of linear systems of equations of the form

\[
 Ax = b, \qquad A \in \mathbb{R}^{n\times n}, \qquad x,\, b \in \mathbb{R}^n, \tag{1.1}
\]

with a matrix A of ill-determined rank. In particular, A is severely ill-conditioned and may be singular. These kinds of linear systems of equations often are referred to as linear discrete ill-posed problems. They stem, e.g., from the discretization of ill-posed problems, such as Fredholm integral equations of the first kind with a smooth kernel. The system (1.1) is not required to be consistent.

Linear discrete ill-posed problems arise when one seeks to determine the cause of an observed effect. The latter is represented by the right-hand side $b$, which in applications often is contaminated by an unknown measurement error $e \in \mathbb{R}^n$, i.e.,

\[
 b = \hat b + e, \tag{1.2}
\]

where $\hat b$ denotes the unknown error-free right-hand side vector associated with $b$. We will refer to the error $e$ as "noise."

The present paper discusses methods for the solution of (1.1) that are applicable when the norm of the noise,

\[
 \varepsilon := \|e\|, \tag{1.3}
\]

or an accurate approximation thereof, is available. However, the regularization operators considered in this paper also can be applied in methods that do not require knowledge of the error norm (1.3). Throughout this paper $\|\cdot\|$ denotes the Euclidean vector norm. Introduce the linear system of equations

\[
 Ax = \hat b \tag{1.4}
\]

associated with (1.1) with the unknown error-free right-hand side. The system (1.4) is assumed to be consistent and its solution of minimal Euclidean norm is denoted by $\hat x$. We would like to determine an approximation of $\hat x$ and seek to achieve this by computing an approximate solution of the available linear discrete ill-posed problem (1.1). Note that due to the severe ill-conditioning of $A$ and the error $e$ in the right-hand side $b$, the least-squares solution of minimal Euclidean norm of (1.1) typically is not a meaningful approximation of $\hat x$. In order to be able to obtain an accurate approximation of $\hat x$, the system (1.1) has to be replaced by a nearby system whose solution is less sensitive to perturbations in $b$. This replacement is known as regularization.

One of the most popular regularization methods, known as Tikhonov regularization, replaces the linear system (1.1) by the minimization problem

\[
 \min_{x \in \mathbb{R}^n} \bigl\{ \|Ax - b\|^2 + \mu \|Lx\|^2 \bigr\}, \tag{1.5}
\]

where the matrix $L \in \mathbb{R}^{k\times n}$, $k \le n$, is referred to as the regularization operator and the scalar $\mu \ge 0$ as the regularization parameter; see, e.g., Engl et al. [14] and Hansen [19] for discussions on Tikhonov regularization.

Let $\mathcal{N}(M)$ and $\mathcal{R}(M)$ denote the null space and range of the matrix $M$, respectively. We assume throughout this paper that the matrices $A$ and $L$ satisfy

\[
 \mathcal{N}(A) \cap \mathcal{N}(L) = \{0\}. \tag{1.6}
\]

Then the Tikhonov minimization problem (1.5) has the unique solution

\[
 x_\mu := \bigl(A^T A + \mu L^T L\bigr)^{-1} A^T b \tag{1.7}
\]

for any $\mu > 0$.

Our computational task is to determine a suitable positive value of the regularization parameter $\mu$ and the associated solution $x_\mu$ of (1.5). We will determine $\mu$ with the aid of the discrepancy principle, which requires that the norm of the noise (1.3), or a fairly accurate estimate thereof, be available. This approach of determining $\mu$ generally calls for the evaluation of the norm of the residual error

\[
 r_\mu := b - A x_\mu \tag{1.8}
\]

for several values of $\mu$; see Section 3 for details.

Common choices of regularization operators for problems in one space dimension are the identity matrix and scaled finite difference approximations of a derivative, such as

\[
 L_1 := \frac{1}{2}
 \begin{bmatrix}
  1 & -1 & & \\
  & \ddots & \ddots & \\
  & & 1 & -1
 \end{bmatrix} \in \mathbb{R}^{(n-1)\times n}, \tag{1.9}
\]

\[
 L_2 := \frac{1}{4}
 \begin{bmatrix}
  -1 & 2 & -1 & & \\
  & \ddots & \ddots & \ddots & \\
  & & -1 & 2 & -1
 \end{bmatrix} \in \mathbb{R}^{(n-2)\times n}, \tag{1.10}
\]

and

\[
 L_3 := \frac{1}{8}
 \begin{bmatrix}
  -1 & 3 & -3 & 1 & & \\
  & \ddots & \ddots & \ddots & \ddots & \\
  & & -1 & 3 & -3 & 1
 \end{bmatrix} \in \mathbb{R}^{(n-3)\times n}. \tag{1.11}
\]
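Under the stencils and scalings reconstructed above (an assumption; the surrounding text pins down only the null spaces, cf. (2.2) below), these operators are one-line constructions in MATLAB. A minimal sketch for an arbitrary problem size n:

    n  = 200;
    e  = ones(n,1);
    L1 = 0.5   * spdiags([ e,   -e],           0:1, n-1, n);   % cf. (1.9)
    L2 = 0.25  * spdiags([-e, 2*e,   -e],      0:2, n-2, n);   % cf. (1.10)
    L3 = 0.125 * spdiags([-e, 3*e, -3*e,  e],  0:3, n-3, n);   % cf. (1.11)
    % Null-space check: the constant, linear, and quadratic vectors
    % n_1, n_2, n_3 introduced in (2.1) below are annihilated.
    n1 = ones(n,1);  n2 = (1:n)';  n3 = ((1:n)').^2;
    disp([norm(L1*n1), norm(L2*n2), norm(L3*n3)])   % all exactly zero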

When the matrices A and L are of small to moderate size, the solution of (1.5) and the norm of the residual error (1.8) conveniently can be determined for several values of the regularization parameter with the aid of the generalized singular value decomposition (GSVD) of the matrix pair {A, L}. However, for large-scale problems, the computation of the GSVD is too expensive to be attractive. This paper is concerned with iterative solution methods that can be applied to the solution of large-scale problems.

Large-scale problems typically are transformed to standard form before solution. Let $L^\dagger \in \mathbb{R}^{n\times k}$ denote the Moore-Penrose pseudoinverse of the regularization operator $L$ in (1.5). The A-weighted pseudoinverse of $L$ is defined by

\[
 L_A^\dagger := \bigl(I - (A(I - L^\dagger L))^\dagger A\bigr)\, L^\dagger \in \mathbb{R}^{n\times k}; \tag{1.12}
\]

see, e.g., Eldén [13] or Hansen [19, Section 2.3]. Introduce the matrix

\[
 \bar A := A L_A^\dagger \tag{1.13}
\]

and the vectors

\[
 x^{(0)} := \bigl(A(I - L^\dagger L)\bigr)^\dagger b, \tag{1.14}
\]

\[
 \bar b := b - A x^{(0)}. \tag{1.15}
\]

Let $\bar x := Lx$. Then the Tikhonov minimization problem (1.5) can be expressed in standard form,

\[
 \min_{\bar x \in \mathbb{R}^k} \bigl\{ \|\bar A \bar x - \bar b\|^2 + \mu \|\bar x\|^2 \bigr\}. \tag{1.16}
\]

The solution $x_\mu$ of (1.5), given by (1.7), can be recovered from the solution $\bar x_\mu$ of (1.16) according to

\[
 x_\mu = L_A^\dagger \bar x_\mu + x^{(0)}; \tag{1.17}
\]

see, e.g., [13] or [19, Section 2.3] for details. We note for future reference that if $\bar x$ is any vector in $\mathbb{R}^k$ and $x := L_A^\dagger \bar x + x^{(0)}$, then the associated residual vectors $\bar r := \bar b - \bar A \bar x$ and $r := b - Ax$ satisfy

\[
 \|\bar r\| = \|r\|. \tag{1.18}
\]

If $L$ is a square nonsingular matrix, then $x^{(0)} = 0$, and the above transformation amounts to $\bar x := Lx$ and $\bar A := A L^{-1}$.

When the regularization operator $L$ approximates a derivative, its pseudoinverse approximates an integral operator and therefore is a low-pass filter. This filter is applied to the computed solution $\bar x_\mu$ of (1.16) when evaluating the associated solution $x_\mu$ of (1.5); cf. (1.17). The smoothing achieved by filtering is beneficial when the desired solution $\hat x$ is smooth. Regularization operators of the form (1.9)-(1.11) often are referred to as smoothing operators.

We remark that despite the complexity of the A-weighted pseudoinverse (1.12), matrix-vector products with the matrices $L_A^\dagger$ and $AL_A^\dagger$ can be evaluated quite inexpensively when $L$ is a banded matrix with small bandwidth, a circulant, or an orthogonal projection; see Section 2 or [13], [19, Section 2.3], and [24] for further details. Commonly used iterative methods for the solution of Tikhonov minimization problems in standard form (1.16) are based on partial Lanczos bidiagonalization of the matrix (1.13); see, e.g., [3, 4, 7, 8, 15, 17, 22]. However, recent numerical results reported in [23] show that Tikhonov regularization based on the range-restricted Arnoldi (RR-Arnoldi) process can be competitive with Tikhonov regularization based on Lanczos bidiagonalization. The RR-Arnoldi process is used to reduce $\bar A$ to a small upper Hessenberg matrix; see Section 3. Advantages of RR-Arnoldi-based Tikhonov regularization, compared with Lanczos bidiagonalization-based Tikhonov regularization, include:

i) Many problems require a smaller number of matrix-vector product evaluations. When the matrix $A$ is large, these evaluations constitute the dominant computational work; see [23] for a comparison.

ii) The methods do not require the evaluation of matrix-vector products with $A^T$. They therefore are attractive to use for problems for which matrix-vector products with $A^T$ are difficult to evaluate. This situation arises, e.g., when solving large nonlinear problems by Krylov subspace methods; see [11] for a discussion. It also arises when matrix-vector products are evaluated by multipole methods.

iii) The methods deliver more accurate approximations of the desired solution $\hat x$ for some problems; see [23] for a few illustrations.

A difficulty with solution methods based on the RR-Arnoldi process is that they require the regularization operator $L$ to be represented by a square matrix. In particular, the operators (1.9)-(1.11) cannot be applied. Several approaches to circumvent this difficulty have been discussed in the literature. Calvetti et al. [8, 9] propose to append or prepend rows to the regularization operators $L_j$ to yield square and nonsingular regularization operators $L$; the nonsingularity simplifies the computations. For suitable extensions of the operators (1.9)-(1.11) this approach yields high accuracy. However, to achieve the most accurate approximations of $\hat x$, the choice of extension often should depend on the behavior of the solution near the boundary. If this behavior is not known a priori, one can first solve the problem with $L := I$ to determine the boundary behavior of the solution, and then solve again with a suitably chosen square regularization operator $L$ to achieve higher accuracy.

An alternative to solving the transformed Tikhonov minimization problem (1.16) is to apply an iterative method to the linear system of equations

\[
 \bar A \bar x = \bar b. \tag{1.19}
\]

Regularization is achieved by terminating the iteration process sufficiently early. Denote the $k$th iterate generated by the iterative method by $\bar x_k$, with $\bar x_0 = 0$, and define the associated residual vector

\[
 \bar r_k := \bar b - \bar A \bar x_k. \tag{1.20}
\]

The iteration number, $k$, can be thought of as a regularization parameter. Application of the discrepancy principle to determine when to terminate the iterations requires that the norm of the residual vector $\bar r_k$ be available; see Section 3. In this paper, we discuss application of the range-restricted GMRES (RR-GMRES) iterative method to the solution of (1.19). RR-GMRES is based on the RR-Arnoldi process; its use and comparison with standard GMRES are illustrated in [5]. RR-GMRES is better suited than standard GMRES for the computation of an approximate solution of a linear discrete ill-posed problem whose desired solution $\hat x$ is smooth.

Hansen and Jensen [20] recently described how nonsquare regularization operators can be used in conjunction with RR-GMRES by augmenting the linear system (1.1) and multiplying the augmented system from the left-hand side by a matrix which contains $(L_A^\dagger)^T$ as a submatrix. This approach is interesting, but it is not well suited for use with the discrepancy principle, because the norm of the residual vector (1.20) is not explicitly available.

Morigi et al. [24] proposed to use orthogonal projections as regularization operators. These operators perform well for many linear discrete ill-posed problems, but they are not smoothing. They are well suited for problems for which the desired solution $\hat x$ is not smooth. The application of regularization operators without any particular structure is considered by Kilmer et al. [21].

The present paper discusses several approaches to determining square regularization operators from regularization operators of the form (1.9)-(1.11). We consider appending zero rows to obtain square regularization operators. Numerical examples illustrate the feasibility of this approach. Problems with a periodic solution may benefit from the use of circulant regularization operators, obtained by appending suitable rows to regularization operators of the form (1.9)-(1.11). The spectral decomposition of circulant matrices is easily computable, and numerical examples illustrate that it may be beneficial to set certain eigenvalues to zero. The latter observation is believed to be new. Finally, we discuss how these regularization operators can be combined by weighted averaging or with an orthogonal projection to deal with problems with non-periodic solutions.

This paper is organized as follows. Section 2 discusses some properties of square zero-padded and circulant regularization operators obtained by modifying operators of the form (1.9)-(1.11). A weighted average of the two is proposed in order to circumvent the restrictions of each approach. We also describe how orthogonal projection regularization operators can be combined with other regularization operators. The RR-Arnoldi process, the discrepancy principle, and RR-GMRES are reviewed in Section 3. Computed examples are presented in Section 4, and Section 5 contains concluding remarks.

2. Regularization operators. We first discuss general properties of regularization operators $L \in \mathbb{R}^{k\times n}$, $k \le n$, with a nontrivial null space, and then consider square regularization operators related to operators of the form (1.9)-(1.11). This is followed by a discussion of properties of orthogonal projection regularization operators and how they can be combined with square smoothing regularization operators.

Many regularization operators of interest have a nontrivial null space. Let

\[
 n_1 := [1, 1, \ldots, 1, 1]^T, \qquad
 n_2 := [1, 2, \ldots, n-1, n]^T, \qquad
 n_3 := [1, 2^2, \ldots, (n-1)^2, n^2]^T. \tag{2.1}
\]

Then

\[
 \mathcal{N}(L_1) = \mathrm{span}\{n_1\}, \qquad
 \mathcal{N}(L_2) = \mathrm{span}\{n_1, n_2\}, \qquad
 \mathcal{N}(L_3) = \mathrm{span}\{n_1, n_2, n_3\}. \tag{2.2}
\]

Let the columns of the matrix $U \in \mathbb{R}^{n\times\ell}$ form an orthonormal basis of $\mathcal{N}(L)$. Generally, the dimension $\ell$ of the null spaces of regularization operators of interest is fairly small; for the operators (1.9)-(1.11), we have $\dim \mathcal{N}(L_j) = j$. Introduce the QR-factorization

\[
 AU = QR, \tag{2.3}
\]

where $Q \in \mathbb{R}^{n\times\ell}$ satisfies $Q^T Q = I$ and $R \in \mathbb{R}^{\ell\times\ell}$ is upper triangular.

Proposition 2.1. Let the columns of $U \in \mathbb{R}^{n\times\ell}$ form an orthonormal basis of the null space of the regularization operator $L \in \mathbb{R}^{k\times n}$, $k \le n$. Thus, $\ell \ge n - k$. Let the matrices $Q$ and $R$ be determined by (2.3). Then

\[
 L_A^\dagger = (I - U R^{-1} Q^T A)\, L^\dagger, \tag{2.4}
\]

\[
 A L_A^\dagger = (I - Q Q^T)\, A L^\dagger, \tag{2.5}
\]

\[
 \bar b = (I - Q Q^T)\, b. \tag{2.6}
\]

Proof. The relation (2.4) follows by substituting $I - L^\dagger L = UU^T$ and the QR-factorization (2.3) into (1.12). Multiplication of (2.4) by $A$ yields (2.5). Finally, relation (2.6) is a consequence of $A(A(I - L^\dagger L))^\dagger = QQ^T$. For further details, see [13] or [19, Section 2.3].
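The formulas (2.4)-(2.6) translate directly into code. The following MATLAB sketch assembles $L_A^\dagger$, $AL_A^\dagger$, and $\bar b$ for the zero-padded operator $L_{1,0}$ of Section 2.1, whose null space is spanned by $n_1$. It is a sketch only: it assumes A and b come from the baart test problem of [18], and uses a dense pinv for clarity where the banded QR approach of Section 2.1 would be used for large n.

    [A, b] = baart(200);                    % any test problem from [18] will do
    n  = size(A,1);  e = ones(n,1);
    L  = [0.5*spdiags([e,-e], 0:1, n-1, n); sparse(1,n)];   % L_{1,0}, cf. (2.7)
    U  = ones(n,1)/sqrt(n);                 % orthonormal basis of N(L)
    [Q, R] = qr(A*U, 0);                    % skinny QR-factorization (2.3)
    Ldag   = pinv(full(L));                 % Moore-Penrose pseudoinverse of L
    LAdag  = (eye(n) - U*(R\(Q'*A))) * Ldag;   % (2.4)
    Abar   = A * LAdag;                        % = (I - QQ')*A*Ldag, cf. (2.5)
    bbar   = b - Q*(Q'*b);                     % (2.6)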

It is clear from (1.5) that the component of the solution $x_\mu$ in $\mathcal{N}(L)$ is not affected by the value of the regularization parameter $\mu$. The following result is a consequence of this observation.

Proposition 2.2. ([24]) Let the orthonormal columns of the matrix $U \in \mathbb{R}^{n\times\ell}$ span the null space of the regularization operator $L \in \mathbb{R}^{k\times n}$ and assume that (1.6) holds. Let $\mu > 0$ and let $x_\mu$ be the unique solution (1.7) of the Tikhonov minimization problem (1.5). Consider the discrete ill-posed problem with modified right-hand side

\[
 Ax = b', \qquad b' := b + AUy,
\]

for some $y \in \mathbb{R}^\ell$. Then the unique solution of the associated Tikhonov minimization problem

\[
 \min_{x \in \mathbb{R}^n} \bigl\{ \|Ax - b'\|^2 + \mu \|Lx\|^2 \bigr\}
\]

is given by $x_\mu' := x_\mu + Uy$.

The above proposition suggests that the regularization operator $L$ should be chosen so that known features of the desired solution $\hat x$ can be represented by vectors in $\mathcal{N}(L)$. For instance, when $\hat x$ is known to be monotonically increasing and smooth, $L := L_2$ or $L := L_3$ may be suitable regularization operators, since their null spaces contain discretizations of linearly and quadratically increasing functions; cf. (2.2).

2.1. Square regularization operators by zero-padding. We discuss the use of square regularization operators obtained by zero-padding matrices that represent scaled finite difference operators, such as (1.9)-(1.11). Let $L_{1,0}$ denote the $n\times n$ matrix obtained by appending a zero row-vector to the operator $L_1$ defined by (1.9). Then

\[
 L_{1,0}^\dagger = [L_1^\dagger,\, 0] \in \mathbb{R}^{n\times n}. \tag{2.7}
\]

Substituting $L := L_{1,0}$ and $L^\dagger := L_{1,0}^\dagger$ into (1.12) yields a square A-weighted pseudoinverse $L_A^\dagger$. The matrix $AL_A^\dagger$ also is square and can be reduced to an upper Hessenberg matrix by the RR-Arnoldi process. Note that for any vector $v \in \mathbb{R}^n$, $L_{1,0}^\dagger v$ can be evaluated efficiently by first computing the QR-factorization of $L_{1,0}^T$ with the aid of Givens rotations, and then solving a least-squares problem. The least-squares solution of minimal norm can be determined easily, because $\mathcal{N}(L_{1,0})$ is explicitly known; see [13], [19, Section 2.3] for details.

The RR-Arnoldi process may break down when applied to a singular matrix. The matrix (2.5) with $L^\dagger$ defined by (2.7) is singular; we have $L_A^\dagger e_n = 0$. Here and elsewhere in this paper, $e_j = [0, \ldots, 0, 1, 0, \ldots, 0]^T$ denotes the $j$th axis vector of appropriate dimension. The matrix (2.5) also can be singular because $A$ may be singular. A variant of the Arnoldi process that avoids breakdown when applied to a singular matrix is described in [27] and should be used in black-box software. However, in our experience, it is rare to encounter breakdown or near-breakdown of the RR-Arnoldi process before the computations are terminated by the discrepancy principle. Indeed, we have not encountered near-breakdown in any of our numerical experiments.

The regularization operators (1.10) and (1.11) can be extended to square regularization operators in a similar manner as $L_1$; the square operator $L_{j,0}$ is defined by appending $j$ zero row-vectors to $L_j$.

2.2. Square regularization operators by circulant extension. Circulant regularization operators are well suited for use in problems with a periodic solution. We consider the following circulant extensions of the matrices (1.9) and (1.10),

\[
 C_1 := \frac{1}{2}
 \begin{bmatrix}
  1 & -1 & & \\
  & \ddots & \ddots & \\
  & & 1 & -1 \\
  -1 & & & 1
 \end{bmatrix} \in \mathbb{R}^{n\times n} \tag{2.8}
\]

and

\[
 C_2 := \frac{1}{4}
 \begin{bmatrix}
  2 & -1 & & & -1 \\
  -1 & 2 & -1 & & \\
  & \ddots & \ddots & \ddots & \\
  & & -1 & 2 & -1 \\
  -1 & & & -1 & 2
 \end{bmatrix} \in \mathbb{R}^{n\times n}.
\]

Circulants have the advantage of being normal matrices with a known spectral decomposition. Introduce the unitary "FFT-matrix" with the columns ordered according to increasing frequency,

\[
 W := [w_0, w_1, w_{-1}, w_2, w_{-2}, \ldots] \in \mathbb{C}^{n\times n}, \qquad
 w_k := \frac{1}{\sqrt n}\,\bigl[1,\, e^{-2\pi i k/n},\, e^{-4\pi i k/n},\, \ldots,\, e^{-2\pi i k(n-1)/n}\bigr]^T,
\]

where $i := \sqrt{-1}$, the last column index is $(-1)^n \lfloor n/2 \rfloor$, and $\lfloor \alpha \rfloor$ denotes the integer part of $\alpha \ge 0$. The columns satisfy $w_{-k} = \bar w_k$, where the bar denotes complex conjugation. Each column $w_k$ represents rotations, with the frequency increasing with $|k|$. The $w_k$ are eigenvectors of all circulants; see, e.g., [12]. In particular, the circulants $C_j$ have the spectral decompositions

\[
 C_j = W \Lambda_j W^*, \qquad
 \Lambda_j := \mathrm{diag}\bigl[\lambda_0^{(j)}, \lambda_1^{(j)}, \lambda_{-1}^{(j)}, \lambda_2^{(j)}, \lambda_{-2}^{(j)}, \ldots\bigr], \qquad j = 1, 2, \tag{2.9}
\]

where the superscript $*$ denotes transposition and complex conjugation. The eigenvalues satisfy

\[
 \lambda_k^{(1)} = \frac{1}{2}\bigl(1 - e^{-2\pi i k/n}\bigr), \qquad
 \lambda_k^{(2)} = \frac{1}{2}\bigl(1 - \cos(2\pi k/n)\bigr),
\]

and therefore can be computed efficiently by the fast Fourier transform method.
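Since a circulant is determined by its first column, all of its eigenvalues are available from a single FFT of that column. A minimal MATLAB sketch, under the sign conventions reconstructed above:

    n  = 100;
    c1 = [ 1/2; zeros(n-2,1); -1/2];          % first column of C_1
    c2 = [ 1/2; -1/4; zeros(n-3,1); -1/4];    % first column of C_2
    ev1 = fft(c1);            % eigenvalues of C_1, on the circle |z - 1/2| = 1/2
    ev2 = real(fft(c2));      % eigenvalues of C_2, real and in [0,1]
    % lambda_0 = 0 for both, so n_1 = ones(n,1) lies in both null spaces
    disp([abs(ev1(1)), abs(ev2(1))])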

Proposition 2.3. The spectral decomposition (2.9) expresses a circulant as the sum of $n$ Hermitian rank-one circulants,

\[
 C_j = \sum_k \lambda_k^{(j)} w_k w_k^*, \tag{2.10}
\]

where summation is over the index set $\{0, 1, -1, 2, -2, \ldots, (-1)^n \lfloor n/2 \rfloor\}$ with $n$ components.

Proof. Straightforward computations show that the rank-one matrices $w_k w_k^*$ are circulants.

The eigenvalues of $C_1$ are distributed equidistantly on a circle of radius $\frac12$ centered at $\frac12$, with $\lambda_0^{(1)} = 0$. All, or all but one, depending on the parity of $n$, of the remaining eigenvalues appear in complex conjugate pairs; we have $\lambda_{-k}^{(1)} = \bar\lambda_k^{(1)}$. The magnitude of the eigenvalues increases with increasing magnitude of their index. The null space is given by $\mathcal{N}(C_1) = \mathrm{span}\{n_1\}$, where $n_1$ is defined by (2.1).

[FIGURE 2.1 OMITTED]

The matrix $C_2$ is the symmetric part of $C_1$. Its eigenvalues are the real parts of the eigenvalues of $C_1$. They live in the interval $[0,1]$ with $\lambda_0^{(2)} = 0$. All, or all but one, depending on the parity of $n$, of the remaining eigenvalues appear pairwise; we have $\lambda_{-k}^{(2)} = \lambda_k^{(2)}$. The eigenvalues increase with increasing magnitude of their index. Moreover, $\mathcal{N}(C_2) = \mathcal{N}(C_1)$. Figure 2.1(a) displays the magnitude of the eigenvalues of the circulants $C_1$ and $C_2$ of order $n = 100$.

Since the $n\times n$ matrix $C_2$ is real and symmetric, it has a real orthonormal eigenvector basis, such as

\[
 \{w_0\} \cup \Bigl\{ \tfrac{1}{\sqrt 2}\,(w_k + w_{-k}),\ \tfrac{1}{i\sqrt 2}\,(w_k - w_{-k}) : k = 1, 2, \ldots \Bigr\}.
\]

For $n > 2$, real linear combinations of these basis-vectors do not yield a real circulant eigenvector matrix.

The Moore-Penrose pseudoinverse of $C_j$ is given by

\[
 C_j^\dagger = W \Lambda_j^\dagger W^*,
\]

where

\[
 \Lambda_j^\dagger := \mathrm{diag}\bigl[(\lambda_0^{(j)})^\dagger, (\lambda_1^{(j)})^\dagger, (\lambda_{-1}^{(j)})^\dagger, \ldots\bigr], \qquad
 \lambda^\dagger := \begin{cases} 1/\lambda, & \lambda \ne 0, \\ 0, & \lambda = 0. \end{cases}
\]

The smoothing effect of the circulant regularization operators $C_j$ stems from the fact that the seminorm $\|C_j x\|$ weighs high-frequency components of the vector $x$, when expanded in terms of the eigenvectors $w_k$, more than low-frequency components. Recall that the frequency as well as the magnitude of the eigenvalues increase with the magnitude of the eigenvector index. Figure 2.1(a) shows that $C_1$ weighs low-frequency components of $x$ more than $C_2$.

Figure 2.1(b) differs from Figure 2.1(a) in the scaling of the vertical axis and in that the zero eigenvalues are not displayed. The figure provides better resolution for eigenvalues of small magnitude. In particular, Figure 2.1(b) shows that while both circulants $C_1$ and $C_2$ weigh high-frequency components the most, they also provide some weighting of low frequencies. The latter may not be beneficial in situations when the desired solution is periodic with low frequency. We may want to modify the circulants $C_j$ by setting one or more of their eigenvalues to zero and thereby avoid damping of the associated frequencies; cf. Proposition 2.2. The benefit of this is illustrated in Section 4. Here we show that the modified matrix obtained is a real circulant.

Corollary 2.4. Let the matrix $\tilde C_2$ be obtained by setting the $p$ pairs of eigenvalues $\{\lambda_1^{(2)}, \lambda_{-1}^{(2)}\}, \{\lambda_2^{(2)}, \lambda_{-2}^{(2)}\}, \ldots, \{\lambda_p^{(2)}, \lambda_{-p}^{(2)}\}$ in the decomposition (2.10) of $C_2$ to zero, where $p < n/2$. The matrix $\tilde C_2$ so obtained is a real circulant, and so is its Moore-Penrose pseudoinverse $\tilde C_2^\dagger$.

Proof. Let $1 \le k \le p$. Then $\lambda_{-k}^{(2)} = \lambda_k^{(2)}$. Setting $\lambda_{-k}^{(2)}$ and $\lambda_k^{(2)}$ to zero for $1 \le k \le p$ yields the matrix

\[
 \tilde C_2 = \sum_{|k| > p} \lambda_k^{(2)} w_k w_k^*.
\]

Since $\lambda_{-k}^{(2)} = \lambda_k^{(2)}$ and $w_{-k} w_{-k}^* = \overline{w_k w_k^*}$, the terms for the indices $k$ and $-k$ are complex conjugates of each other, which shows that $\tilde C_2$ is real. It is easy to verify that $w_k w_k^*$ is a circulant, and so is $\bar w_k \bar w_k^*$. Therefore $\tilde C_2$ is a circulant. It follows that $\tilde C_2^\dagger$ also is a real circulant.
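In computations, $\tilde C_2^\dagger$ never needs to be formed explicitly: applying it to a vector costs two FFTs. A minimal MATLAB sketch, using the ordering of MATLAB's fft output (index 1 holds $\lambda_0$; indices $k+1$ and $n-k+1$ hold the pair $\{\lambda_k^{(2)}, \lambda_{-k}^{(2)}\}$):

    n  = 1000;  p = 2;
    ev = real(fft([1/2; -1/4; zeros(n-3,1); -1/4]));  % eigenvalues of C_2
    ev([2:p+1, n-p+1:n]) = 0;      % zero the p smallest nonvanishing pairs
    evdag = zeros(n,1);
    evdag(ev ~= 0) = 1 ./ ev(ev ~= 0);    % lambda^dagger; lambda_0 = 0 stays 0
    C2tilde_pinv = @(v) real(ifft(evdag .* fft(v)));  % v -> C2tilde^dagger * v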

We have discussed two circulant regularization operators. Other circulants can be investigated similarly and applied as regularization operators when appropriate.

2.3. Weighted average of circulant and zero-padded regularization operators. The null space of the regularization operator $L_2$, defined by (1.10), contains the discretization of linear functions; however, the circulant extension $C_2$ does not. While the circulant $C_2$, and its modification $\tilde C_2$ described in Corollary 2.4, are well suited for the solution of problems with a periodic solution, they damp non-periodic solution components, such as linear growth. We are interested in deriving regularization operators that neither damp slowly oscillatory solution components nor linear growth. Namely, the null space of the regularization operator should contain discretizations of both linear and slowly oscillatory functions, or at least the complements of the orthogonal projections of these functions into the null space should be small. This section describes how regularization operators with this property can be defined.

First, consider the regularization operator $L := D_\delta C_2$, where

\[
 D_\delta := \mathrm{diag}[\delta, 1, \ldots, 1, \delta], \tag{2.11}
\]

and $\delta > 0$ is small. Thus, $L$ blends the circulant operator $C_2$ with the zero-padded operator that is obtained by prepending and appending a zero row to $L_2$; the latter is recovered in the limit $\delta \to 0$. Discrete linear functions, represented by $\alpha n_1 + \beta n_2$ for $\alpha, \beta \in \mathbb{R}$, are approximately in the null space of $L$, in the sense that they are hardly damped by $L$ in the context of Tikhonov regularization. Unfortunately, this operator $L$ is computationally difficult to use in large-scale problems, as a formula for $L^\dagger$ that is inexpensive to evaluate is not obvious; see also Section 2.4 for further comments. However, $C_2^\dagger D_\delta^{-1}$ may be considered a rough approximation of $L^\dagger$. This leads us to consider the regularization operator

\[
 L := \bigl( C_2^\dagger D_\delta^{-1} \bigr)^\dagger,
\]

which differs from $D_\delta C_2$, but has some similar desirable properties, as we shall discuss below.

More generally, as in Section 2.2, we can set some small eigenvalues of $C_2$ to zero, and thereby avoid the damping of low-frequency solution components. Recall that $\tilde C_2$ is obtained from $C_2$ by setting the $p$ pairs of eigenvalues $\{\lambda_1^{(2)}, \lambda_{-1}^{(2)}\}, \{\lambda_2^{(2)}, \lambda_{-2}^{(2)}\}, \ldots, \{\lambda_p^{(2)}, \lambda_{-p}^{(2)}\}$ in the decomposition (2.10) to zero, where $0 \le p < n/2$ (with the convention that $\tilde C_2 = C_2$ if $p = 0$). We will use the regularization operator defined implicitly by

\[
 L := \bigl( \tilde C_2^\dagger D_\delta^{-1} \bigr)^\dagger. \tag{2.12}
\]

Note that only $L^\dagger = \tilde C_2^\dagger D_\delta^{-1}$ is needed in computations and is readily available; $L$ itself is not directly used. We show that the operator (2.12) inherits the low-pass property of $\tilde C_2$ and hardly damps discrete linear functions $\alpha n_1 + \beta n_2$, $\alpha, \beta \in \mathbb{R}$.
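Since only $L^\dagger$ enters the standard-form transformation, its application reduces to a diagonal scaling followed by the FFT-based solve of the previous sketch. A minimal MATLAB sketch, with $p = 2$ and $\delta = 10^{-8}$ as in the examples of Section 4:

    n = 1000;  p = 2;  delta = 1e-8;
    ev = real(fft([1/2; -1/4; zeros(n-3,1); -1/4]));  % eigenvalues of C_2
    ev([2:p+1, n-p+1:n]) = 0;                 % modification of Corollary 2.4
    evdag = zeros(n,1);  evdag(ev ~= 0) = 1 ./ ev(ev ~= 0);
    d = [delta; ones(n-2,1); delta];          % diagonal of D_delta, cf. (2.11)
    Ldag = @(v) real(ifft(evdag .* fft(v ./ d)));  % v -> C2tilde^dagger D_delta^{-1} v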

Theorem 2.5. Let $L$ be defined by (2.12) with $D_\delta$ given by (2.11). Then, for any $v \in \mathbb{R}^n$,

\[
 \|Lv\| \le \|D_\delta \tilde C_2 v\|.
\]

In particular, $Lw_k = 0$ for $0 \le |k| \le p$. Thus, $n_1 \in \mathcal{N}(L)$. Moreover,

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.]

Proof. Let $u := Lv$. Then $L^\dagger L v = L^\dagger u = \tilde C_2^\dagger D_\delta^{-1} u$. Since $\mathcal{R}(L^\dagger) = \mathcal{R}(\tilde C_2^\dagger) = \mathrm{span}\{w_{p+1}, w_{-(p+1)}, w_{p+2}, w_{-(p+2)}, \ldots\}$, the matrix $L^\dagger L$ is the orthogonal projector onto this subspace. It follows from this and from $\tilde C_2 w_k = 0$ for $0 \le |k| \le p$ that $\tilde C_2 L^\dagger L = \tilde C_2$. Writing $u_0 := D_\delta (I - \tilde C_2 \tilde C_2^\dagger) D_\delta^{-1} u$, we have

\[
 u - u_0 = D_\delta \tilde C_2 \tilde C_2^\dagger D_\delta^{-1} u
         = D_\delta \tilde C_2 L^\dagger L v
         = D_\delta \tilde C_2 v.
\]

Now, $L^\dagger u_0 = \tilde C_2^\dagger (I - \tilde C_2 \tilde C_2^\dagger) D_\delta^{-1} u = 0$, i.e., $u_0 \in \mathcal{N}(L^\dagger)$. Therefore, $u_0 \perp u \in \mathcal{R}(L)$, and it follows that

\[
 \|Lv\| = \|u\| \le \|u - u_0\| = \|D_\delta \tilde C_2 v\|.
\]

This inequality and $\tilde C_2 w_k = 0$ show that $Lw_k = 0$ for $0 \le |k| \le p$. Furthermore, since

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.]

we have

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.]

The bound for $\|L n_2\| / \|n_2\|$ now follows from $\|n_2\| = \bigl(n(n+1)(2n+1)/6\bigr)^{1/2}$.

2.4. Orthogonal projection regularization operators. Let the matrix $U \in \mathbb{R}^{n\times\ell}$, $\ell \ll n$, have orthonormal columns. Then the regularization operator

\[
 L := I - UU^T \tag{2.13}
\]

is an orthogonal projector onto $\mathcal{R}(U)^\perp$. Application of orthogonal projection regularization operators is discussed in [24]. This section describes how to combine regularization by orthogonal projection and smoothing operators. This kind of regularization yields the smoothing properties of the latter and allows the choice of a sufficiently large null space to represent important features of the desired solution $\hat x$.

While the regularization operator introduced in Section 2.3 has some of these properties, its null space only approximately contains discretizations of linear functions and it requires $n$ to be large. In addition, there are situations where it might be desirable to include other solution components, such as quadratic or exponential growth functions, in the null space. This can be achieved by combining an orthogonal projection regularization operator with a circulant smoothing regularization operator.

We first review relevant properties of orthogonal projection regularization operators and discuss the equivalence of these operators to a decomposition method described in [2]. The latter provides a convenient way of implementing regularization with two regularization operators for large-scale problems.

Proposition 2.6. Let the regularization operator $L \in \mathbb{R}^{n\times n}$ be defined by (2.13) and let the matrices $Q$ and $R$ be determined by (2.3). Then

\[
 L_A^\dagger = I - U R^{-1} Q^T A, \tag{2.14}
\]

\[
 A L_A^\dagger = (I - QQ^T)\, A. \tag{2.15}
\]

Proof. A proof follows from Proposition 2.1 and straightforward computations. Details and related results can be found in [24].
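For an orthogonal projection regularization operator, (2.14)-(2.15) make all standard-form quantities available through a few matrix-vector products; no pseudoinverse of a banded matrix is needed. A minimal MATLAB sketch, assuming A and b are given and choosing $\mathcal{R}(U) = \mathrm{span}\{n_1, n_2\}$ as in Example 4.5:

    n = size(A,1);
    U = orth([ones(n,1), (1:n)']);          % orthonormal basis of span{n_1, n_2}
    [Q, R] = qr(A*U, 0);                    % skinny QR-factorization (2.3)
    LAdag = @(x) x - U*(R\(Q'*(A*x)));      % x -> L_A^dagger x, cf. (2.14)
    Abar  = @(x) A*x - Q*(Q'*(A*x));        % x -> (I - QQ')A x, cf. (2.15)
    bbar  = b - Q*(Q'*b);                   % projected right-hand side, cf. (2.6)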

For small to medium-sized problems, for which the GSVD can be applied, we may choose regularization operators of the form $SL$, where $S$ is a smoothing regularization operator and $L$ is an orthogonal projector (2.13) with a suitably chosen null space. However, products of regularization operators are difficult to handle in large-scale problems that are transformed to standard form (1.16). The reason for this is that one of the factors of $L_A^\dagger$ is the pseudoinverse of the regularization operator, cf. (2.4), and for most pairs $\{S, L\}$ of regularization operators of interest, the pseudoinverse $(SL)^\dagger$ is difficult to evaluate efficiently; see, e.g., [10, Section 1.4] and [16] for discussions on the pseudoinverse of a product of matrices.

We therefore propose to apply pairs of regularization operators sequentially. This can be carried out in a conceptually simple manner by partitioning the linear system of equations (1.1) as follows. Introduce the orthogonal projectors

\[
 P_U := UU^T, \qquad P_U^\perp := I - UU^T, \qquad
 P_Q := QQ^T, \qquad P_Q^\perp := I - QQ^T,
\]

and partition the vector $x = P_U x + P_U^\perp x$ and the linear system of equations (1.1) according to

\[
 P_Q A (P_U x) + P_Q A (P_U^\perp x) = P_Q\, b, \tag{2.16}
\]

\[
 P_Q^\perp A (P_U^\perp x) = P_Q^\perp\, b. \tag{2.17}
\]

Application of partitioning to the solution of linear discrete ill-posed problems is discussed in [2].

Theorem 2.7. Let $L$ be defined by (2.13) and let $Q$ be given by (2.3). Then equation (2.16) is equivalent to (1.17), and equation (2.17) is equivalent to (1.19).

Proof. We first establish the equivalence of (2.17) and (1.19). Note that

\[
 P_Q^\perp A P_U = 0. \tag{2.18}
\]

It follows from (2.15) and (2.6) that

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.]

This shows the equivalence.

We turn to (2.16). Denote the computed approximate solution of (2.17) by $\bar x$. In view of (2.18), the system of equations (2.16)-(2.17) is block upper triangular and $P_U x$ satisfies

\[
 P_Q A (P_U x) = P_Q\, b - P_Q A \bar x,
\]

which can be written as

\[
 Q R\, U^T x = Q Q^T (b - A \bar x),
\]

i.e.,

\[
 P_U x = U R^{-1} Q^T (b - A \bar x).
\]

We may choose

\[
 x := \bar x + U R^{-1} Q^T (b - A \bar x). \tag{2.19}
\]

Note that (1.14) can be expressed as $x^{(0)} = U R^{-1} Q^T b$. Substituting this expression and (2.14) into $x = L_A^\dagger \bar x + x^{(0)}$, which is analogous to (1.17), yields

\[
 x = \bar x - U R^{-1} Q^T A \bar x + U R^{-1} Q^T b = \bar x + U R^{-1} Q^T (b - A \bar x).
\]

This expression is equivalent to (2.19), which shows the theorem.

Theorem 2.7 shows that partitioning is equivalent to the application of an orthogonal projection regularization operator, and that the partitioned system (2.16)-(2.17) is block upper triangular. We propose to apply an initial partitioning in situations when the null space of an otherwise suitable smoothing regularization operator is not large enough to be able to represent pertinent features of the desired solution $\hat x$. Having chosen a suitable orthogonal projection regularization operator (2.13), and therefore the matrix $U$, we solve (2.17) using a smoothing regularization operator. Note that in view of (2.18), we have $P_Q^\perp A P_U^\perp = P_Q^\perp A$, so equation (2.17) simplifies to

\[
 P_Q^\perp A x = P_Q^\perp\, b. \tag{2.20}
\]

For example, if $\hat x$ has a linear growth function component, then the matrix $U$ that defines the orthogonal projection regularization operator (2.13) should be chosen so that $\mathcal{R}(U) = \mathrm{span}\{n_1, n_2\}$. We then may select the circulant $\tilde C_2$ of Corollary 2.4, whose null space contains slowly oscillating discrete functions, as a smoothing regularization operator. Computations with this combination of regularization operators are presented in Example 4.5 of Section 4.

The following result shows that in order to determine the residual error norm associated with (1.1), it suffices to compute the norm of the residual error of the reduced system (2.20).

Corollary 2.8. Let $\bar x \in \mathbb{R}^n$ and define the residual vector

\[
 \bar r := P_Q^\perp\, b - P_Q^\perp A \bar x.
\]

Let $x$ be the associated approximate solution of (1.1) defined by (2.19) in the proof of Theorem 2.7, and let

\[
 r := b - Ax.
\]

Then $\|r\| = \|\bar r\|$.

Proof. The result follows from (1.18) by using the equivalence of partitioning and regularization by an orthogonal projection operator. The corollary also can be shown directly by using the partitioning (2.16)-(2.17); see [2] for details on the latter approach.
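Corollary 2.8 is easy to verify numerically. Continuing the MATLAB sketch given after Proposition 2.6 (which defines A, b, U, Q, R, and bbar):

    xbar = randn(n,1);                          % any trial solution of (2.20)
    rbar = bbar - (A*xbar - Q*(Q'*(A*xbar)));   % projected residual
    x = xbar + U*(R\(Q'*(b - A*xbar)));         % recovery formula (2.19)
    r = b - A*x;
    disp(norm(r) - norm(rbar))                  % zero up to rounding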

We remark that instead of partitioning (1.1) and regularizing (2.20), we can carry out repeated transformations of the Tikhonov minimization problem (1.5) to standard form. The latter approach is more general. However, since we are not exploiting this generality, we will not pursue this approach further in the present paper. Note that the regularization technique of this section also can be used with nonsquare smoothing regularization operators.

3. Krylov subspace methods and the discrepancy principle. Application of $k < n$ steps of the range-restricted Arnoldi process to the matrix $\bar A \in \mathbb{R}^{n\times n}$ in (1.16) or (1.19) with initial vector $\bar b$ determines the decomposition

\[
 \bar A \bar V_k = \bar V_{k+1} \bar H_{k+1,k}, \tag{3.1}
\]

where the orthonormal columns of the matrix $\bar V_{k+1} \in \mathbb{R}^{n\times(k+1)}$ form a basis of the Krylov subspace

\[
 \mathbb{K}_{k+1}(\bar A, \bar A \bar b) := \mathrm{span}\bigl\{ \bar A \bar b, \bar A^2 \bar b, \ldots, \bar A^{k+1} \bar b \bigr\}, \tag{3.2}
\]

and the matrix $\bar H_{k+1,k} \in \mathbb{R}^{(k+1)\times k}$ is of upper Hessenberg form. The matrix $\bar V_k$ consists of the first $k$ columns of $\bar V_{k+1}$.

The decomposition (3.1) is the basis for the Arnoldi-Tikhonov method for the solution of (1.16), as well as for the RR-GMRES iterative method for the solution of the linear systems of equations (1.19). We first outline the latter scheme.

The $k$th iterate, $\bar x_k$, determined by RR-GMRES when applied to the solution of (1.19) satisfies

\[
 \|\bar A \bar x_k - \bar b\| = \min_{\bar x \in \mathbb{K}_k(\bar A, \bar A \bar b)} \|\bar A \bar x - \bar b\|,
 \qquad \bar x_k \in \mathbb{K}_k(\bar A, \bar A \bar b), \tag{3.3}
\]

and is computed by first finding the solution $\bar y_k$ of the minimization problem

\[
 \min_{\bar y \in \mathbb{R}^k} \bigl\| \bar H_{k+1,k}\, \bar y - \bar V_{k+1}^T \bar b \bigr\|,
\]

and then setting $\bar x_k := \bar V_k \bar y_k$. The associated approximate solution of (1.1) is given by

\[
 x_k := L_A^\dagger \bar x_k + x^{(0)};
\]

cf. (1.17). The residual error $r_k := b - A x_k$ associated with $x_k$ can be expressed as

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.]

and it follows that

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.]

where $\bar Q_{k+1} \in \mathbb{R}^{(k+1)\times(k+1)}$ is the orthogonal matrix in a QR-factorization of $\bar H_{k+1,k}$. The squared residual error norm $\|r_k\|^2$ can be evaluated inexpensively by updating $\|r_{k-1}\|^2$.

Let $\eta$ be a constant strictly larger than one. The iterate $x_k$ is said to satisfy the discrepancy principle if

\[
 \|r_k\| \le \eta \varepsilon, \tag{3.4}
\]

where $\varepsilon$ is the norm of the noise (1.3) or an estimate thereof. If the available estimate is known to be accurate, then we choose $\eta$ close to unity. It follows from (3.3) and $\mathbb{K}_{k-1}(\bar A, \bar A \bar b) \subset \mathbb{K}_k(\bar A, \bar A \bar b)$ that $\|r_k\| \le \|r_{k-1}\|$ for $k = 1, 2, 3, \ldots$, where $r_0 := b - Ax^{(0)}$. We terminate the iterations as soon as an iterate $x_k$ has been determined that satisfies the discrepancy principle. Thus, $r_{\hat k}$ is the first residual vector such that (3.4) holds. An analysis of this stopping rule for standard GMRES is provided in [6]; the analysis there carries over to RR-GMRES. The evaluation of $x_{\hat k}$ requires $\hat k + \ell + 1$ matrix-vector product evaluations with the matrix $A$, $\ell$ of which are required to compute the QR-factorization (2.3). For large-scale problems, the matrix-vector product evaluations dominate the computational work. We therefore tabulate this number in the computed examples of Section 4.

We turn to Tikhonov regularization. Let $\hat k$ be defined as above. In the numerical examples of Section 4, we carry out $k$ Arnoldi steps, where $k := \hat k$ or $k := \hat k + 1$. Substituting the decomposition (3.1) into (1.16) yields a minimization problem of fairly small size, whose solution $\bar x_{k,\mu}$ we compute. Let $x_{k,\mu} := L_A^\dagger \bar x_{k,\mu} + x^{(0)}$. We choose $\mu$ so that, analogously to (3.4), the associated residual error $r_{k,\mu} := b - A x_{k,\mu}$ satisfies $\|r_{k,\mu}\| = \eta\varepsilon$; see [23] for further details.
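For a small problem, the value of $\mu$ prescribed by the discrepancy principle can be computed directly from (1.7) by a scalar root-finder, rather than via the projected problem. A minimal MATLAB sketch using bisection on $\log_{10}\mu$, assuming A, b, L, the noise norm eps_noise of (1.3), and eta > 1 are given; it exploits the monotone growth of the residual norm with $\mu$, and the bracket endpoints are assumptions:

    x_of_mu = @(mu) (A'*A + mu*(L'*L)) \ (A'*b);           % cf. (1.7)
    phi     = @(mu) norm(b - A*x_of_mu(mu)) - eta*eps_noise;
    lo = -20;  hi = 20;        % bracket for log10(mu); assumes a sign change
    for it = 1:60
      md = (lo + hi)/2;
      if phi(10^md) > 0, hi = md; else, lo = md; end
    end
    mu = 10^((lo + hi)/2);  x_mu = x_of_mu(mu);  % ||b - A*x_mu|| ~ eta*eps_noise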

4. Computed examples. We illustrate the performance of the regularization operators discussed with some numerical examples. The noise vector $e$ has in all examples normally distributed pseudorandom entries with mean zero, and is normalized to correspond to a chosen noise level

\[
 \nu := \frac{\|e\|}{\|\hat b\|}.
\]

Here $\hat b$ denotes the noise-free right-hand side vector in (1.4). We let $\eta := 1.01$ in (3.4) in all examples. The computations are carried out in MATLAB with about 16 significant decimal digits. We reorthogonalize the columns of the matrix $\bar V_{k+1}$ in the decomposition (3.1). Since typically the dimension $k+1$ of the Krylov subspaces (3.2) used is fairly small, reorthogonalization is inexpensive. Reorthogonalization may reduce the size of the Krylov subspace required for determining an approximate solution that satisfies the discrepancy principle; see [26, Example 4.1] for an illustration with a symmetric matrix.
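The noise model is easy to reproduce. A minimal MATLAB sketch, with xhat the discrete solution and nu the chosen noise level (both assumed given):

    bhat = A*xhat;                       % noise-free right-hand side (1.4)
    e = randn(size(bhat));
    e = e * (nu*norm(bhat)/norm(e));     % now ||e||/||bhat|| = nu
    b = bhat + e;                        % contaminated right-hand side (1.2)
    eps_noise = norm(e);                 % epsilon of (1.3), used in (3.4)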

[FIGURE 4.1 OMITTED]

Example 4.1. The Fredholm integral equation of the first kind,

\[
 \int_0^{\pi} \kappa(\sigma, \tau)\, x(\tau)\, d\tau = b(\sigma), \qquad 0 \le \sigma \le \frac{\pi}{2}, \tag{4.1}
\]

with $\kappa(\sigma,\tau) := \exp(\sigma \cos\tau)$, $b(\sigma) := 2\sinh(\sigma)/\sigma$, and solution $x(\tau) := \sin\tau$, is discussed by Baart [1]. We use the MATLAB code baart from [18] to discretize (4.1) by a Galerkin method with 200 orthonormal box functions as test and trial functions. The code produces the matrix $A \in \mathbb{R}^{200\times200}$ and a scaled discrete approximation of $x(\tau)$. Adding $50 n_1$ to the latter yields the vector $\hat x \in \mathbb{R}^{200}$, with which we compute the noise-free right-hand side $\hat b := A \hat x$.

Let the entries of the error vector $e \in \mathbb{R}^{200}$ be normally distributed with zero mean, and be normalized to yield the noise level $\nu = 5 \cdot 10^{-5}$. This corresponds to an absolute error of $\|e\| = 9.8 \cdot 10^{-2}$. The right-hand side $b$ in the system (1.1) is obtained from (1.2).

Table 4.1 displays results obtained with RR-GMRES for several regularization operators, and Figure 4.1 shows two computed solutions. The continuous curve in Figure 4.1(a) displays the computed solution obtained without explicit use of a regularization operator ($L := I$); the continuous curve in Figure 4.1(b) depicts the corresponding solution determined with $L := L_{3,0}$. The iterations are terminated by the discrepancy principle (3.4). The dashed curves in Figures 4.1(a) and (b) show the solution $\hat x$ of the noise-free problem (1.4).

This example shows RR-GMRES without explicit use of a regularization operator to perform poorly. The reason for this is that the desired solution is a small relative perturbation of the vector $50 n_1$. We note that this vector lies in $\mathcal{N}(L_{j,0})$, $1 \le j \le 3$, and therefore does not affect the Krylov subspaces used when one of the regularization operators $L_{j,0}$ is applied. When $L := L_{3,0}$, the norm of the initial residual $r_0 := b - Ax^{(0)}$ satisfies the discrepancy principle (3.4) and no iterations are carried out.

[FIGURE 4.2 OMITTED]

Example 4.2. Consider the Fredholm integral equation of the first kind

\[
 \int_{-6}^{6} \kappa(\sigma, \tau)\, x(\tau)\, d\tau = b(\sigma), \qquad -6 \le \sigma \le 6, \tag{4.2}
\]

with kernel and solution given by

\[
 \kappa(\sigma, \tau) := \phi(\sigma - \tau), \qquad x(\tau) := \phi(\tau), \qquad
 \phi(\tau) := \begin{cases} 1 + \cos(\pi\tau/3), & |\tau| < 3, \\ 0, & \text{otherwise}. \end{cases}
\]

The right-hand side $b(\sigma)$ is defined by (4.2). This integral equation is discussed by Phillips [25]. The MATLAB code phillips in [18] determines a discretization by a Galerkin method with orthonormal box functions. A discretization of a scaled solution also is provided. Let the matrix $A \in \mathbb{R}^{200\times200}$ determined by phillips represent the discretized integral operator, and let $\hat x$ be the sum of the scaled discrete solution provided by phillips and the vector $n_1$. The noise-free right-hand side is given by $\hat b := A\hat x$. A noise vector $e$ similar to the one in Example 4.1, and scaled to correspond to a noise level of $1\cdot10^{-3}$, is added to $\hat b$ to yield the right-hand side $b$ of the linear system (1.1).

Tables 4.2 and 4.3 report the performance of several Tikhonov regularization methods with the regularization operators $L := I$ and $L := L_{1,0}$, respectively. The latter regularization operator gives more accurate results for the present problem. For many ill-posed problems the Arnoldi-Tikhonov method yields higher accuracy by applying one more step of the Arnoldi process than the $\hat k$ steps necessary to satisfy the discrepancy principle; see the last paragraph of Section 3. This approach is denoted by Arnoldi-Tikhonov +1 in the tables. LBD-Tikhonov is the Lanczos bidiagonalization-based Tikhonov regularization method described in [7]. Lanczos bidiagonalization is implemented with reorthogonalization. We remark that the operators $L_{1,0}$ and $L_1$ are equivalent when applied in LBD-Tikhonov. Figure 4.2 displays the most accurate computed solutions of Tables 4.2 and 4.3 for $L := I$ and $L := L_{1,0}$.

The tables show the Arnoldi-Tikhonov method to yield about the same accuracy as LBD-Tikhonov and to require fewer matrix-vector product evaluations. We therefore omit graphs for the approximate solutions determined by LBD-Tikhonov.


Example 4.3. RR-GMRES would in Example 4.1 be able to determine a quite accurate approximation of $\hat x$ without the use of the regularization operator $L_{1,0}$, if before iterative solution $50 A n_1$ is subtracted from the right-hand side $b$, and after iterative solution the vector $50 n_1$ is added to the approximate solution determined by RR-GMRES. However, it is not always obvious how a problem can be modified in order for RR-GMRES to be able to achieve high accuracy without applying a regularization operator. This is illustrated by the present example, for which subtracting a multiple of $A n_1$ before iterative solution with RR-GMRES, and adding the same multiple of $n_1$ after iterative solution, does not yield an accurate approximation of $\hat x$.

Consider

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII.]

where the kernel and the right-hand side are the same as in (4.1). We discretize in a similar fashion as in Example 4.1 to obtain the matrix $A \in \mathbb{R}^{500\times500}$ and the noise-free right-hand side $\hat b \in \mathbb{R}^{500}$. Adding a noise vector $e$ to $\hat b$ yields the right-hand side $b$ of (1.1). The vector $e$ is scaled to correspond to the noise level $\nu = 2\cdot10^{-3}$. We solve the system (1.1) by RR-GMRES. Table 4.4 reports results for the cases when no regularization operator is used ($L := I$) and when the regularization operator $L := L_{3,0}$ is applied. Figure 4.3 displays the computed solutions. The operator $L_{3,0}$ is seen to improve the quality of the computed solution.

[FIGURE 4.3 OMITTED]

Example 4.4. We modify the integral equation of Example 4.2 in two ways: the integral equation is discretized with finer resolution to obtain the matrix $A \in \mathbb{R}^{1000\times1000}$, and instead of adding a discretization of the function $1$ to the scaled discrete solution determined by the MATLAB code phillips, we add a discretization of the function $2\cos(\pi(1 + \sigma/6))$. This defines $\hat x$. The noise-free right-hand side is given by $\hat b := A\hat x$. A noise vector $e$ with normally distributed entries with mean zero, and scaled to correspond to the noise level $1\cdot10^{-2}$, is added to $\hat b$ to yield the right-hand side $b$ of the linear system (1.1).

Table 4.5 reports the performance of RR-GMRES without and with several regularization operators. The table shows that the regularization operator $\tilde C_2$, obtained from $C_2$ by setting the two smallest nonvanishing eigenvalues, $\lambda_1^{(2)}$ and $\lambda_{-1}^{(2)}$, to zero, yields the best approximation of the desired solution $\hat x$. In particular, the regularization operator $\tilde C_2$ yields higher accuracy than $L_{2,0}$. Table 4.5 also displays the number of iterations and the number of matrix-vector product evaluations required. Figure 4.4 shows $\hat x$ (dashed curves) and the computed approximate solutions determined with the regularization operators $L_{2,0}$ and $\tilde C_2$ (continuous curves). The latter curves differ primarily at their endpoints.

[FIGURE 4.4 OMITTED]

Example 4.5. The matrix in this example is the same as in Example 4.4, and we add a discretization of the linear function $\ell(\sigma) := 1 + \sigma/6$ to the vector $\hat x$. The noise-free and noise-contaminated right-hand sides are determined similarly as in Example 4.4; the noise level is $1\cdot10^{-2}$.

Table 4.6 reports the performance of RR-GMRES without and with several regularization operators. Because of the oscillatory behavior of the desired solution $\hat x$, the regularization operator $L_{2,0}$ does not perform well, and due to the linear term in $\hat x$, neither do the operators $C_2$ and $\tilde C_2$. The latter operator is the same as in Example 4.4. We therefore consider the approaches of Sections 2.3 and 2.4, which yield clearly better results. For the weighted average regularization operator $L$ in (2.12), the matrix $D_\delta$ is given by (2.11) with $\delta := 1\cdot10^{-8}$. In the method of Section 2.4, we first carry out an orthogonal projection onto the complement of the discrete linear functions. Hence, we let $U \in \mathbb{R}^{1000\times2}$ in (2.13) be such that $\mathcal{R}(U) = \mathrm{span}\{n_1, n_2\}$. This projection is referred to as "OP" in Table 4.6. RR-GMRES is applied to the projected equations (2.20), either with the regularization operator $\tilde C_2$ or without further regularization. Results for the latter approach are displayed in the last line of the table. Table 4.6 shows orthogonal projection followed by regularization with $\tilde C_2$ to give the best approximation of $\hat x$. The computed solution is shown in Figure 4.5(b). Also the regularization operator $(\tilde C_2^\dagger D_\delta^{-1})^\dagger$ is seen to determine an accurate approximation of $\hat x$. Furthermore, Table 4.6 illustrates that the smoothing regularization operator $L_{2,0}$ yields a better approximation than the orthogonal projection regularization operator (OP) with the same null space. This is because the desired solution $\hat x$ is smooth and $L_{2,0}$ is smoothing, while the orthogonal projector is not. Figure 4.5(a) displays the solution determined with $L_{2,0}$. This computed solution differs from $\hat x$ the most at the endpoints.

[FIGURE 4.5 OMITTED]

5. Conclusion. We have presented several square extensions of some standard regularization operators based on finite difference discretization of derivatives. The numerical examples illustrate that the square regularization operators discussed here can improve the quality of the computed approximate solution determined by Arnoldi-based Tikhonov regularization and minimal residual methods. Moreover, they are quite simple to implement.

Dedicated to Gerard Meurant on the occasion of his 60th birthday

* Received May 13, 2008. Accepted October 13, 2008. Published online May 7, 2009. Recommended by A. Salam.

REFERENCES

[1] M. L. Baart, The use of auto-correlation for pseudo-rank determination in noisy ill-conditioned least-squares problems, IMA J. Numer. Anal., 2 (1982), pp. 241-247.

[2] J. Baglama and L. Reichel, Decomposition methods for large linear discrete ill-posed problems, J. Comput. Appl. Math., 198 (2007), pp. 332-342.

[3] Å. Björck, A bidiagonalization algorithm for solving large sparse ill-posed systems of linear equations, BIT, 28 (1988), pp. 659-670.

[4] D. Calvetti, G. H. Golub, and L. Reichel, Estimation of the L-curve via Lanczos bidiagonalization, BIT, 39 (1999), pp. 603-619.

[5] D. Calvetti, B. Lewis, and L. Reichel, On the choice of subspace for iterative methods for linear discrete ill-posed problems, Int. J. Appl. Math. Comput. Sci., 11 (2001), pp. 1069-1092.

[6] -, On the regularizing properties of the GMRES method, Numer. Math., 91 (2002), pp. 605-625.

[7] D. Calvetti and L. Reichel, Tikhonov regularization of large linear problems, BIT, 43 (2003), pp. 263-283.

[8] D. Calvetti, L. Reichel, and A. Shuibi, L-curve and curvature bounds for Tikhonov regularization, Numer. Algorithms, 35 (2004), pp. 301-314.

[9] -, Invertible smoothing preconditioners for linear discrete ill-posed problems, Appl. Numer. Math., 54 (2005), pp. 135-149.

[10] S. L. Campbell and C. D. Meyer, Generalized Inverses of Linear Transformations, Dover, Mineola, 1991.

[11] T. F. Chan and K. R. Jackson, Nonlinearly preconditioned Krylov subspace methods for discrete Newton algorithms, SIAM J. Sci. Statist. Comput., 5 (1984), pp. 533-542.

[12] P. J. Davis, Circulant Matrices, Chelsea, New York, 1994.

[13] L. Eldén, A weighted pseudoinverse, generalized singular values, and constrained least squares problems, BIT, 22 (1982), pp. 487-501.

[14] H. W. Engl, M. Hanke, and A. Neubauer, Regularization of Inverse Problems, Kluwer, Dordrecht, 1996.

[15] G. H. Golub and U. von Matt, Tikhonov regularization for large scale problems, in Workshop on Scientific Computing, eds. G. H. Golub, S. H. Lui, F. Luk, and R. Plemmons, Springer, New York, 1997, pp. 3-26.

[16] T. N. E. Greville, Note on the generalized inverse of a matrix product, SIAM Rev., 8 (1966), pp. 518-521.

[17] M. Hanke and P. C. Hansen, Regularization methods for large-scale problems, Surveys Math. Indust., 3 (1993), pp. 253-315.

[18] P. C. Hansen, Regularization tools: A Matlab package for analysis and solution of discrete ill-posed problems, Numer. Algorithms, 6 (1994), pp. 1-35. Software is available at http://www.netlib.org.

[19] -, Rank-Deficient and Discrete Ill-Posed Problems, SIAM, Philadelphia, 1998.

[20] P. C. Hansen and T. K. Jensen, Smoothing norm preconditioning for regularizing minimum residual methods, SIAM J. Matrix Anal. Appl., 29 (2006), pp. 1-14.

[21] M. E. Kilmer, P. C. Hansen, and M. I. Español, A projection-based approach to general-form Tikhonov regularization, SIAM J. Sci. Comput., 29 (2007), pp. 315-330.

[22] M. E. Kilmer and D. P. O'Leary, Choosing regularization parameters in iterative methods for ill-posed problems, SIAM J. Matrix Anal. Appl., 22 (2001), pp. 1204-1221.

[23] B. Lewis and L. Reichel, Arnoldi-Tikhonov regularization methods, J. Comput. Appl. Math., 226 (2009), pp. 92-102.

[24] S. Morigi, L. Reichel, and F. Sgallari, Orthogonal projection regularization operators, Numer. Algorithms, 44 (2007), pp. 99-114.

[25] D. L. Phillips, A technique for the numerical solution of certain integral equations of the first kind, J. ACM, 9 (1962), pp. 84-97.

[26] L. Reichel, H. Sadok, and A. Shyshkov, Greedy Tikhonov regularization for large linear ill-posed problems, Int. J. Comput. Math., 84 (2007), pp. 1151-1166.

[27] L. Reichel and Q. Ye, Breakdown-free GMRES for singular systems, SIAM J. Matrix Anal. Appl., 26 (2005), pp. 1001-1021.

LOTHAR REICHEL and QIANG YE

(†) Department of Mathematical Sciences, Kent State University, Kent, OH 44242, USA (reichel@math.kent.edu).

(††) Department of Mathematics, University of Kentucky, Lexington, KY 40506, USA (qye@ms.uky.edu). Supported in part by NSF under grant DMS-0411502.
Table 4.1
Example 4.1: Relative error in approximate solutions $x_k$ determined by truncated iteration with RR-GMRES.

  regularization operator   # iterations k   # mat.-vec. prod.   $\|x_k - \hat x\| / \|\hat x\|$
  $I$                       3                4                   $1.6\cdot10^{-3}$
  $L_{1,0}$                 2                4                   $2.2\cdot10^{-4}$
  $L_{2,0}$                 1                4                   $2.0\cdot10^{-4}$
  $L_{3,0}$                 0                3                   $1.7\cdot10^{-4}$

Table 4.2
Example 4.2: Errors in approximate solutions of a modification of (4.2) determined by several Tikhonov regularization methods with regularization operator $L := I$.

  method                 # iterations k   # mat.-vec. prod.   $\|x_k - \hat x\|$
  Arnoldi-Tikhonov       9                10                  $1.5\cdot10^{-2}$
  Arnoldi-Tikhonov +1    10               11                  $1.2\cdot10^{-2}$
  LBD-Tikhonov           11               22                  $1.2\cdot10^{-2}$

Table 4.3
Example 4.2: Errors in approximate solutions of a modification of (4.2) determined by several Tikhonov regularization methods with regularization operator $L_{1,0}$.

  method                 # iterations k   # mat.-vec. prod.   $\|x_k - \hat x\|$
  Arnoldi-Tikhonov       6                8                   $5.7\cdot10^{-3}$
  Arnoldi-Tikhonov +1    7                9                   $2.9\cdot10^{-3}$
  LBD-Tikhonov           7                15                  $3.6\cdot10^{-3}$

Table 4.4
Example 4.3: Relative error in approximate solutions $x_k$ determined by truncated iteration with RR-GMRES.

  regularization operator   # iterations k   # mat.-vec. prod.   $\|x_k - \hat x\| / \|\hat x\|$
  $I$                       2                3                   $1.3\cdot10^{-1}$
  $L_{3,0}$                 0                3                   $5.0\cdot10^{-2}$

Table 4.5
Example 4.4: Errors in approximate solutions computed by RR-GMRES without and with several regularization operators. The operator $L_{2,0}$ is defined by zero-padding the operator (1.10), $C_2$ is the circulant extension of Section 2.2, and $\tilde C_2$ is obtained by setting the two smallest nonvanishing eigenvalues of $C_2$ to zero.

  regularization operator   # iterations k   # mat.-vec. prod.   $\|x_k - \hat x\|$
  --                        5                6                   $5.7\cdot10^{-2}$
  $L_{2,0}$                 5                8                   $2.4\cdot10^{-2}$
  $C_2$                     3                5                   $5.7\cdot10^{-3}$
  $\tilde C_2$              1                5                   $2.4\cdot10^{-3}$

Table 4.6
Example 4.5: Errors in approximate solutions computed by RR-GMRES without and with several regularization operators. The operator $L_{2,0}$ is defined by zero-padding the operator (1.10), $C_2$ is the circulant extension of Section 2.2, and $\tilde C_2$ is obtained by setting the two smallest nonvanishing eigenvalues of $C_2$ to zero. The diagonal matrix $D_\delta$ has $\delta = 1\cdot10^{-8}$. OP stands for orthogonal projection onto a subspace not containing discretized linear functions, i.e., $\mathcal{R}(U) = \mathrm{span}\{n_1, n_2\}$ in (2.13).

  regularization operator                        # iterations k   # mat.-vec. prod.   $\|x_k - \hat x\|$
  --                                             8                9                   $6.0\cdot10^{-2}$
  $L_{2,0}$                                      5                8                   $1.8\cdot10^{-2}$
  $C_2$                                          7                9                   $1.1\cdot10^{-1}$
  $\tilde C_2$                                   3                7                   $1.1\cdot10^{-1}$
  $(\tilde C_2^\dagger D_\delta^{-1})^\dagger$   2                6                   $5.6\cdot10^{-3}$
  OP and $\tilde C_2$                            3                7                   $4.3\cdot10^{-3}$
  OP                                             4                7                   $1.0\cdot10^{-2}$