
MATH0033 Numerical Method

Theoretical exercise sheet 2

Linear systems

Exercises 1 and 5 (marked *) to be submitted via Crowdmark in PDF format (either handwritten and scanned, or typeset using LaTeX).

Deadline: 23:59hrs Sunday 13th November

EXERCISE 1(*) Let.

(a) The Jacobi and Gauss-Seidel methods for the approximation of the solution to the linear

system Ax = b can both be written in the form

Pxk+1 = Nxk + b.

Write down the matrices P and N for each of the two methods. What are the associated

iteration matrices BJ and BGS?

(b) Compute the vector x1 obtained after one iteration with the Jacobi method starting from the initial vector x0.

(c) Do both methods converge? Which gives the iteration matrix with the smallest spectral

radius?

(d) Prove that both methods converge linearly with respect to the norm ‖ · ‖∞, and compare

the convergence constants.
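The splittings in part (a) can be checked numerically. The Python sketch below uses a hypothetical diagonally dominant 2-by-2 matrix (the sheet's actual A and b are not reproduced above), builds P and N for both methods, and compares the spectral radii of BJ and BGS:

```python
import numpy as np

# Hypothetical example (the sheet's A and b are not shown above);
# this A is strictly diagonally dominant, so both methods converge.
A = np.array([[4.0, 1.0],
              [2.0, 5.0]])
b = np.array([1.0, 1.0])

D = np.diag(np.diag(A))   # diagonal part of A
L = np.tril(A, k=-1)      # strictly lower triangular part
U = np.triu(A, k=1)       # strictly upper triangular part

# Jacobi: P = D, N = -(L + U); Gauss-Seidel: P = D + L, N = -U.
P_J, N_J = D, -(L + U)
P_GS, N_GS = D + L, -U

# Iteration matrices B = P^{-1} N.
B_J = np.linalg.solve(P_J, N_J)
B_GS = np.linalg.solve(P_GS, N_GS)

def spectral_radius(B):
    return max(abs(np.linalg.eigvals(B)))

def step(P, N, x):
    # One iteration of P x_{k+1} = N x_k + b.
    return np.linalg.solve(P, N @ x + b)

x1 = step(P_J, N_J, np.zeros(2))  # one Jacobi step from x0 = 0
```

For this example ρ(BGS) = ρ(BJ)², consistent with Gauss-Seidel converging faster per iteration.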

EXERCISE 2 Let.

(a) Write the Gauss-Seidel method for the solution of the linear system Ax = b. What is the

associated iteration matrix BGS?

(b) What can be said about the convergence of the Gauss-Seidel method for this system?

(c) Now consider the iteration

D(xk+1 − xk) = αrk, k ≥ 0,

where D is the diagonal of A and rk = b − Axk is the residual. This method is sometimes

called the Jacobi over-relaxation (JOR) method; note that for α = 1 the method is

equivalent to the Jacobi method.

Write down the associated iteration matrix BJOR, and find the optimal choice of α minimising the spectral radius ρ(BJOR).

(d) Starting from the initial vector x0 = (1, 1)T, compute the first iteration x1 of the JOR

method using the optimal choice of α you derived in part (c).
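The JOR iteration above can be sketched numerically. A minimal sketch with a placeholder matrix (the sheet's A and b are not reproduced): it implements D(xk+1 − xk) = αrk and, assuming D−1A has real positive eigenvalues μ1 ≤ … ≤ μn, takes α* = 2/(μmin + μmax):

```python
import numpy as np

# Placeholder matrix and right-hand side (the sheet's data are not shown above).
A = np.array([[4.0, 1.0, 1.0],
              [1.0, 3.0, 1.0],
              [1.0, 1.0, 2.0]])
b = np.array([1.0, 0.0, 1.0])
D = np.diag(np.diag(A))

def jor_step(x, alpha):
    # D (x_{k+1} - x_k) = alpha * r_k,  with r_k = b - A x_k.
    r = b - A @ x
    return x + alpha * np.linalg.solve(D, r)

def rho_jor(alpha):
    # Spectral radius of B_JOR = I - alpha * D^{-1} A.
    B = np.eye(3) - alpha * np.linalg.solve(D, A)
    return max(abs(np.linalg.eigvals(B)))

# If D^{-1}A has real positive eigenvalues, max_i |1 - alpha mu_i|
# is minimised at alpha* = 2 / (mu_min + mu_max).
mu = np.sort(np.linalg.eigvals(np.linalg.solve(D, A)).real)
alpha_star = 2.0 / (mu[0] + mu[-1])
```

Setting alpha = 1 in `jor_step` recovers the plain Jacobi step, as noted in part (c).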


EXERCISE 3 Consider the linear system Ax = b where

(a) Write down the Jacobi method for this system and establish if the method is convergent.

(b) Now consider the iterative method defined by

Lxk+1 = Lxk + δrk, k ≥ 0, (1)

where δ > 0 is a parameter, rk = b − Axk is the residual, and

Rewrite (1) in the form xk+1 = Bδxk + cδ, k ≥ 0, giving explicit expressions for the matrix Bδ and the vector cδ.

(c) For what values of the parameter δ > 0 does the method (1) converge? Find the optimal

value of δ which minimises the spectral radius ρ(Bδ).

(d) Take δ = 4/3. By using the results from part (c), establish if (1) converges. If yes, which

method do you expect to have the faster convergence, (1) or the Jacobi method? Why?
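As a sanity check on parts (b)–(d), the map δ ↦ ρ(Bδ) can be explored numerically. The matrices below are placeholders (the sheet's A, b and L are not reproduced; L is assumed here to be the lower triangular part of A); Bδ = I − δL−1A and cδ = δL−1b follow directly from rearranging (1):

```python
import numpy as np

# Placeholder data (the sheet's A, b and L are not shown above);
# taking L = tril(A) is an assumption for illustration only.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, 0.0])
L = np.tril(A)

def B_c(delta):
    # L x_{k+1} = L x_k + delta r_k  =>
    # x_{k+1} = (I - delta L^{-1} A) x_k + delta L^{-1} b.
    B = np.eye(2) - delta * np.linalg.solve(L, A)
    c = delta * np.linalg.solve(L, b)
    return B, c

def rho(delta):
    B, _ = B_c(delta)
    return max(abs(np.linalg.eigvals(B)))

# Sweep delta and locate the minimiser of the spectral radius.
deltas = np.linspace(0.01, 2.0, 400)
delta_opt = deltas[np.argmin([rho(d) for d in deltas])]
```

For this example the eigenvalues of L−1A are 1 and 5/6, so the sweep lands close to δ = 2/(1 + 5/6) = 12/11, mirroring the optimal-parameter computation asked for in part (c).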

EXERCISE 4 Consider the linear system Ax = b where

and κ, β, γ ∈ R are three real parameters.

(Warning: Before answering this question, make sure you know the difference between “necessary” and “sufficient” conditions.)

(a) Without constructing the iteration matrices, give a sufficient condition on the coefficients

κ, β and γ for the Jacobi and Gauss-Seidel methods to be convergent.

(b) Write down the iteration matrix BJ for the Jacobi method and give a necessary and

sufficient condition on κ, β, γ for the method to be convergent.

(c) Now suppose we want to solve the system Ax = b using the JOR method (cf. Exercise 2)

D(xk+1 − xk) = αrk, k ≥ 0, (2)

where D is the diagonal of A. This method corresponds to applying the stationary

Richardson method (from lectures) to the modified system D−1Ax = D−1b.

Determine for which γ, β, κ the matrix D−1A is symmetric positive definite. Assuming this

holds, apply an appropriate theorem from lectures to determine for which α the method

(2) converges, and compute the optimal value α = α? giving the fastest convergence,

along with the associated error reduction factor C > 0 such that

‖xk+1 − x‖2 ≤ C‖xk − x‖2, k ≥ 0.


(d) Given that ‖x0 − x‖2 = 1, estimate the smallest integer k for which ‖xk − x‖2 ≤ (1/2)^9, in the case when γ = 1, β = 2^−4, κ = √(3β).
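The arithmetic in part (d) follows a standard pattern: once the error reduction factor C from part (c) is known, ‖xk − x‖2 ≤ C^k ‖x0 − x‖2 = C^k, and one counts how many factors of C are needed. A sketch with assumed eigenvalue bounds (the true λmin, λmax depend on γ, β, κ and the theorem from lectures, and are not reproduced here):

```python
import math

# Assumed eigenvalue bounds for the preconditioned matrix; the actual
# values follow from gamma, beta, kappa in the exercise.
lam_min, lam_max = 0.5, 1.5

alpha_star = 2.0 / (lam_min + lam_max)          # optimal Richardson parameter
C = (lam_max - lam_min) / (lam_max + lam_min)   # error reduction factor

# Smallest k with C**k <= (1/2)**9, given ||x0 - x||_2 = 1.
target = 0.5 ** 9
k, err = 0, 1.0
while err > target:
    err *= C
    k += 1
```

With these assumed bounds C = 1/2, so exactly k = 9 iterations are needed; with the exercise's actual parameters C differs and so does k.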

EXERCISE 5(*) Let A be an n-by-n symmetric positive definite matrix, and define the function ‖ · ‖A : Rn → R by ‖x‖A = √(xTAx).

(a) Let A1/2 denote the unique symmetric positive definite square root of A, which satisfies (A1/2)2 = A (you may assume without proof that A1/2 exists). Show that ‖x‖A = ‖A1/2x‖2, and use this fact to check that the function ‖ · ‖A defines a norm on Rn. Determine constants 0 < c ≤ C such that

c‖x‖2 ≤ ‖x‖A ≤ C‖x‖2, ∀x ∈ Rn.

(b) Now consider the stationary Richardson method for the solution of the linear system

Ax = b, with iteration matrix Bα = I − αA. Show that A1/2Bα = BαA1/2. Use this to

prove that

‖ek+1‖A ≤ ρ(Bα)‖ek‖A,

where ek = x − xk denotes the solution error after the kth iteration.
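Part (a) can be checked numerically: the sketch below builds A1/2 from the eigendecomposition of a hypothetical SPD matrix (any symmetric positive definite A will do) and verifies ‖x‖A = ‖A1/2x‖2, with equivalence constants c = √λmin and C = √λmax:

```python
import numpy as np

# Hypothetical SPD matrix, chosen only for illustration.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Symmetric positive definite square root via A = Q diag(w) Q^T.
w, Q = np.linalg.eigh(A)          # w ascending, Q orthogonal
A_half = Q @ np.diag(np.sqrt(w)) @ Q.T

def norm_A(x):
    # ||x||_A = sqrt(x^T A x)
    return np.sqrt(x @ A @ x)

x = np.array([1.0, -2.0])
val = norm_A(x)
# val equals ||A^{1/2} x||_2, and
# sqrt(w_min) ||x||_2 <= val <= sqrt(w_max) ||x||_2.
```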

EXERCISE 6 Consider the (unpreconditioned) gradient method for the solution of a linear

system Ax = b.

(a) Show that the acceleration parameter αk in the gradient method is the unique solution

to the minimisation problem

αk = arg min Φ(xk + αrk),

where Φ(y) = (1/2)yTAy − yTb denotes the energy of the system Ax = b.

(b) Show that the residuals in the gradient method satisfy (rk+1, rk) = 0 for each k.

(c) Give a geometric interpretation of the gradient method.
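Parts (a) and (b) can be illustrated with a short sketch: for an SPD system the minimiser of Φ along the residual direction is αk = (rk, rk)/(Ark, rk), and consecutive residuals come out orthogonal. The system below is a placeholder (no A or b is specified in the exercise):

```python
import numpy as np

# Placeholder SPD system; the gradient method requires A symmetric
# positive definite for Phi to have a unique minimiser.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, 1.0])

def gradient_step(x):
    # alpha_k = (r_k, r_k) / (A r_k, r_k) minimises Phi(x_k + alpha r_k),
    # where Phi(y) = (1/2) y^T A y - y^T b.
    r = b - A @ x
    alpha = (r @ r) / (r @ (A @ r))
    return x + alpha * r, r

x0 = np.zeros(2)
x1, r0 = gradient_step(x0)   # r0 is the residual at x0
x2, r1 = gradient_step(x1)   # r1 is the residual at x1
# (r_{k+1}, r_k) = 0: each residual is orthogonal to the previous one,
# which is the "zig-zag" geometry asked about in part (c).
```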


