# A rational SHIRA method for the Hamiltonian eigenvalue problem


Peter Benner, Cedric Effenberger

A rational SHIRA method for the Hamiltonian eigenvalue problem

CSC/08-08

Chemnitz Scientific Computing Preprints

Impressum:

Chemnitz Scientific Computing Preprints — ISSN 1864-0087
(1995–2005: preprint series of the Chemnitz SFB393)

Publisher: Chairs of Numerical and Applied Mathematics at the Faculty of Mathematics, Technische Universität Chemnitz

Postal address: TU Chemnitz, Fakultät für Mathematik, 09107 Chemnitz
Office: Reichenhainer Str. 41, 09126 Chemnitz

http://www.tu-chemnitz.de/mathematik/csc/


CSC/08-08 ISSN 1864-0087 December 2008

Abstract

The SHIRA method of Mehrmann and Watkins belongs to the structure-preserving Krylov subspace methods for solving skew-Hamiltonian eigenvalue problems. It can also be applied to Hamiltonian eigenproblems by considering a suitable transformation. Structure-induced shift-and-invert techniques are employed to steer the algorithm towards the interesting region of the spectrum. However, the shift cannot be altered in the middle of the computation without discarding the information that has been accumulated so far. This paper shows how SHIRA can be combined with ideas from Ruhe's Rational Krylov algorithm to yield a method that permits an adjustment of the shift after every step of the computation, adding greatly to the flexibility of the algorithm. We call this new method rational SHIRA. A numerical example is presented to demonstrate its efficiency.

Contents

1 Introduction
  1.1 The Hamiltonian eigenvalue problem
  1.2 The isotropic Arnoldi process
  1.3 Contributions by this paper

2 Rational SHIRA
  2.1 Rational Krylov transformation for real or purely imaginary shifts
  2.2 Rational Krylov transformation for general complex shifts
  2.3 Applying the operators
  2.4 Implicit restarts, locking and purging
  2.5 Postprocessing
  2.6 Complete algorithm

3 Numerical Experiments

4 Conclusions

Authors' addresses:

Peter Benner benner@mathematik.tu-chemnitz.de
Cedric Effenberger ece@hrz.tu-chemnitz.de

TU Chemnitz, Fakultät für Mathematik
Mathematik in Industrie und Technik
D-09107 Chemnitz

http://www.tu-chemnitz.de/mathematik/industrie-technik/

1 Introduction

1.1 The Hamiltonian eigenvalue problem

We consider the standard eigenvalue problem

Hx = λx, (1.1)

for a Hamiltonian matrix H. Hamiltonian matrices H ∈ R^{2n×2n} feature the explicit block structure

$$H = \begin{bmatrix} A & B \\ C & -A^{T} \end{bmatrix}, \qquad B = B^{T}, \quad C = C^{T},$$

where A, B, C are real n × n matrices.

Hamiltonian matrices and eigenproblems arise in a variety of applications. They are ubiquitous in control theory, where they play an important role in various control design procedures (linear-quadratic optimal control, Kalman filtering, H2- and H∞-control, etc., see, e.g., [2, 17, 24, 30] and most textbooks on control theory), in system analysis problems like stability radius, pseudo-spectra, and H∞-norm computations [8, 9, 10], and in model reduction [1, 5, 6, 14, 26, 29]. Another source of eigenproblems exhibiting Hamiltonian structure is the linearization of certain quadratic eigenvalue problems [4, 18, 20, 28]. Further applications can be found in computational physics and chemistry, e.g. symplectic integrators for molecular dynamics [12, 16], methods for the random phase approximation (RPA) [19], etc.

Hamiltonian matrices may equivalently be characterized as those that are skew-adjoint with respect to the bilinear form 〈x, y〉_J := y^T J x induced by the skew-symmetric matrix

$$J = J_n = \begin{bmatrix} 0 & I_n \\ -I_n & 0 \end{bmatrix},$$

where I_n denotes the n × n identity matrix. This definition is advantageous in that it leads us directly to two other types of structured matrices which play an important role when dealing with Hamiltonian eigenvalue problems, namely skew-Hamiltonian and symplectic matrices. Skew-Hamiltonian matrices are those that are self-adjoint with respect to 〈·, ·〉_J, and symplectic matrices S ∈ R^{2n×2n} fulfill the relation

$$S^{T} J S = J,$$

which means they are orthogonal with respect to 〈·, ·〉_J. Similarity transformations using symplectic matrices can be shown to preserve all of these structures.
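These three characterizations are easy to check numerically. The following sketch (NumPy assumed; the matrices are random and purely illustrative) assembles a Hamiltonian matrix and verifies the adjointness relations induced by J:

```python
import numpy as np

n = 4
rng = np.random.default_rng(0)

# Assemble a Hamiltonian matrix H = [[A, B], [C, -A^T]] with symmetric B, C.
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n)); B = B + B.T
C = rng.standard_normal((n, n)); C = C + C.T
H = np.block([[A, B], [C, -A.T]])

# Skew-symmetric J inducing the bilinear form <x, y>_J = y^T J x.
J = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.eye(n), np.zeros((n, n))]])

# H is Hamiltonian (skew-adjoint w.r.t. <.,.>_J):  J H is symmetric.
assert np.allclose((J @ H).T, J @ H)

# H^2 is skew-Hamiltonian (self-adjoint w.r.t. <.,.>_J):  J H^2 is skew-symmetric.
assert np.allclose((J @ H @ H).T, -(J @ H @ H))

# J itself is symplectic:  J^T J J = (J^T J) J = J.
assert np.allclose(J.T @ J @ J, J)
```

The second assertion anticipates a fact used below: the square of a Hamiltonian matrix is skew-Hamiltonian.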

Hamiltonian matrices arising in the aforementioned applications are often large and sparse, and only a small portion of their spectrum is sought. Krylov subspace methods have proven to be a viable tool for this kind of task. They extract certain spectral information by projection of the original matrix onto a sequence of expanding Krylov subspaces. Recall that the Krylov subspace of order m associated with the matrix H and the starting vector u is defined to be

$$K_m(H, u) := \operatorname{span}\{u,\, Hu,\, H^2 u,\, \ldots,\, H^{m-1} u\}.$$

Note that the choice of the starting vector u may critically influence the outcome of the method. In practice, the process is frequently restarted to gradually improve the quality of the starting vector [3, 25].
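For reference, an orthonormal basis of K_m(H, u) is typically built with the Arnoldi process. The following is a minimal sketch (NumPy assumed; no restarting, no breakdown handling), not a production implementation:

```python
import numpy as np

def arnoldi(H, u, m):
    """Build an orthonormal basis U of K_m(H, u) together with the
    projected (upper Hessenberg) matrix T = U^T H U."""
    n = H.shape[0]
    U = np.zeros((n, m))
    T = np.zeros((m, m))
    U[:, 0] = u / np.linalg.norm(u)
    for j in range(m - 1):
        w = H @ U[:, j]
        for i in range(j + 1):           # modified Gram-Schmidt
            T[i, j] = U[:, i] @ w
            w = w - T[i, j] * U[:, i]
        T[j + 1, j] = np.linalg.norm(w)  # zero here would signal breakdown
        U[:, j + 1] = w / T[j + 1, j]
    T[:, m - 1] = U.T @ (H @ U[:, m - 1])  # complete the last column of the projection
    return U, T
```

The columns of U satisfy the Arnoldi relation H U[:, :m-1] = U T[:, :m-1] up to roundoff, so eigenvalues of T approximate eigenvalues of H.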

The spectra of Hamiltonian matrices cannot be arbitrary. Real and purely imaginary eigenvalues are bound to occur in pairs {λ, −λ}, while general complex eigenvalues always occur in quadruples {λ, −λ, λ̄, −λ̄}. That is, the spectrum of a Hamiltonian matrix is symmetric with respect to both the real and the imaginary axis. We will refer to this property as the Hamiltonian spectral symmetry. Numerical methods that take this symmetry into account are capable of preserving the eigenvalue pairings despite the presence of roundoff errors and thus return physically meaningful results. Moreover, exploiting the structure usually leads to more efficient and sometimes more accurate algorithms.
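This spectral symmetry can be observed directly. A small experiment (NumPy assumed; the matrix is random and illustrative) checks that the computed spectrum of a Hamiltonian matrix is closed under negation and conjugation, up to roundoff:

```python
import numpy as np

n = 3
rng = np.random.default_rng(1)
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n)); B = B + B.T
C = rng.standard_normal((n, n)); C = C + C.T
H = np.block([[A, B], [C, -A.T]])   # a random Hamiltonian matrix

ev = np.linalg.eigvals(H)
for lam in ev:
    # -lambda is an eigenvalue as well ...
    assert np.min(np.abs(ev + lam)) < 1e-8
    # ... and so is the complex conjugate of lambda.
    assert np.min(np.abs(ev - np.conj(lam))) < 1e-8
```

Note that an unstructured eigensolver only respects these pairings up to rounding error; structure-preserving methods enforce them exactly.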

Krylov subspace methods can be redesigned to preserve the Hamiltonian structure. This is done by rearranging the computations such that J-orthogonal bases for the underlying Krylov subspaces are produced. Projections of the original matrix onto these spaces will then amount to partial symplectic similarity transformations and hence will inherit the Hamiltonian structure.

1.2 The isotropic Arnoldi process

Skew-Hamiltonian matrices are somewhat easier to handle in this context as we can take advantage of the following result.

Lemma 1.1. Let L ∈ R^{2n×2n} be skew-Hamiltonian. Then for every starting vector u ∈ R^{2n}, the Krylov subspace K = K_m(L, u) of order m ∈ N, m ≤ n, is isotropic, i.e., y^T J x = 0 for all x, y ∈ K.

Proof. See [18, Prop. 3.3].

Hence, at least in theory, there is no need for J-orthogonalization and we are free to orthogonalize the basis vectors with respect to the standard inner product instead, i.e., we can apply the Arnoldi algorithm. In practice, however, this isotropy is distorted by roundoff error and has to be enforced numerically. This gives rise to the isotropic Arnoldi process of Mehrmann and Watkins [18], where each new basis vector u_{j+1} is orthogonalized not only against the previously computed basis vectors u_1, ..., u_j, but also against Ju_1, ..., Ju_j, to yield the augmented Arnoldi relation

$$L U_j = U_j T_j + J U_j S_j + u_{j+1} t_{j+1,j}\, e_j^{T}. \qquad (1.2)$$

As was already mentioned, S_j would vanish in exact arithmetic. In finite precision it may happen to deviate from zero, but can still be assumed to be tiny and is therefore neglected in our considerations. Moreover, since U_j is orthogonal and isotropic, the extended matrix [U_j, J^T U_j] is both orthogonal and J-orthogonal. By projection we find

$$[U_j,\, J^{T} U_j]^{T}\, L\, [U_j,\, J^{T} U_j] = \begin{bmatrix} T_j & * \\ -S_j & T_j^{T} \end{bmatrix} \approx \begin{bmatrix} T_j & * \\ 0 & T_j^{T} \end{bmatrix},$$

meaning that the eigenvalues of T_j are approximations to double eigenvalues of the projected matrix.
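The isotropic Arnoldi process can be sketched as follows (NumPy assumed; names illustrative, no restarting or breakdown handling). The only change over the standard Arnoldi process is the extra projection that removes the components along Ju_1, ..., Ju_j:

```python
import numpy as np

def isotropic_arnoldi(L, J, u, m):
    """Arnoldi process for a skew-Hamiltonian operator L that enforces
    isotropy of the basis: each new vector is orthogonalized against
    u_1, ..., u_j and additionally against J u_1, ..., J u_j."""
    n = L.shape[0]
    U = np.zeros((n, m))
    T = np.zeros((m, m))
    U[:, 0] = u / np.linalg.norm(u)
    for j in range(m - 1):
        w = L @ U[:, j]
        for i in range(j + 1):           # standard Gram-Schmidt step
            T[i, j] = U[:, i] @ w
            w = w - T[i, j] * U[:, i]
        JU = J @ U[:, : j + 1]           # isotropy enforcement: remove the
        w = w - JU @ (JU.T @ w)          # components along J u_1, ..., J u_j
        T[j + 1, j] = np.linalg.norm(w)
        U[:, j + 1] = w / T[j + 1, j]
    return U, T
```

For a skew-Hamiltonian L (e.g. L = H^2 with H Hamiltonian), the computed basis satisfies U^T U = I and U^T J U ≈ 0, so the correction term S_j in (1.2) stays negligible.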

The use of this procedure is not restricted to skew-Hamiltonian eigenproblems, though. It is valuable in the Hamiltonian case as well if applied to an appropriate modification of the original matrix. For example, one might think of employing H^2, whose eigensystem is the same as that of H except that the eigenvalues have been squared, and which is skew-Hamiltonian whenever H is Hamiltonian. This is a proper choice as long as extremal eigenvalues of H are required, but might be rather slow when it comes to interior eigenvalues. In [18], Mehrmann and Watkins propose the use of the compound shift-and-invert operators

$$L_1(\mu) = (H - \mu I)^{-1} (H + \mu I)^{-1}$$

for real and purely imaginary target shifts µ and

$$L_2(\mu) = (H - \mu I)^{-1} (H + \mu I)^{-1} (H - \bar{\mu} I)^{-1} (H + \bar{\mu} I)^{-1}$$

for general complex target shifts. Both of these operators are skew-Hamiltonian for any Hamiltonian matrix H and accelerate the convergence of eigenvalues near the pair {µ,−µ} or the quadruple {µ,−µ, µ,−µ}, respectively, by simultaneously mapping them to eigen- values of large modulus.

If the isotropic Arnoldi algorithm applied to either L1(µ) or L2(µ) is complemented with implicit restarts [25], we obtain the SHIRA method [18].
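In practice the operator L_1(µ) is never formed explicitly; each matrix-vector product amounts to two linear solves with H + µI and H − µI. A minimal sketch (NumPy assumed; dense solves for brevity, whereas in practice one would factor the two shifted matrices once, e.g. by a sparse LU decomposition, and reuse the factors for every Arnoldi step):

```python
import numpy as np

def apply_L1(H, mu, v):
    """Evaluate L1(mu) v = (H - mu I)^(-1) (H + mu I)^(-1) v.
    Dense solves are used for brevity; in practice one factors
    H - mu I and H + mu I once and reuses the factorizations."""
    I = np.eye(H.shape[0])
    y = np.linalg.solve(H + mu * I, v)
    return np.linalg.solve(H - mu * I, y)
```

If Hx = λx, then L_1(µ)x = x / (λ² − µ²), which is exactly the mapping to large modulus described above for eigenvalues near the pair {µ, −µ}.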
