
MATH0033 Numerical Methods

Computational homework 2

Nonlinear equations

Both exercises 1 and 2 (marked *) are to be submitted.

At least one of these will be marked.

Deadline: 23:59hrs Sunday 27th November

It is recommended that you use some of the Matlab .m files that are provided on Moodle.

Please submit your solutions via Crowdmark. You should submit a single pdf file, created using the Matlab publish command and formatted as per the template provided on the Moodle page. Your file should show the code you used, its output (including figures) and your comments. Please do not include the code of any of the .m files I gave you. Note that before submitting to Crowdmark you should "flatten" your pdf file by opening it in a pdf viewer (like Adobe Reader) and printing it to a new pdf using the Print dialogue (see instructions on Moodle); otherwise Crowdmark may not display it properly.

EXERCISE 1(*) Consider the linear system Ax = b, where, for ε ∈ [0, 1] and n ≥ 5, the pentadiagonal n × n matrix A_ε and the n-vector b_ε are defined by a formula given on the problem sheet (the displayed definition is not reproduced in this copy). For a given size n and a given value epsi of ε, you can construct the matrix A_ε and the vector b_ε using the command [A,b] = matrix(n,epsi) (code available on the Moodle page).

(a) We know from lectures that if A_ε is strictly diagonally dominant then the Jacobi and Gauss-Seidel methods both converge. For which ε does this hold? (Assume that n ≥ 5.)
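As a sanity check on part (a), strict diagonal dominance is easy to test numerically. A minimal sketch in Python/NumPy (a stand-in for Matlab; the helper name is an assumption, and since the definition of A_ε is only available via the Moodle code, the demo uses generic tridiagonal matrices):

```python
import numpy as np

def is_strictly_diagonally_dominant(A):
    """True if |a_ii| > sum over j != i of |a_ij| holds for every row i."""
    A = np.asarray(A, dtype=float)
    diag = np.abs(np.diag(A))
    off_diag = np.sum(np.abs(A), axis=1) - diag   # row sums without the diagonal
    return bool(np.all(diag > off_diag))

# Example: a strictly diagonally dominant matrix vs. one that is not.
print(is_strictly_diagonally_dominant([[4, 1, 0], [1, 4, 1], [0, 1, 4]]))  # True
print(is_strictly_diagonally_dominant([[1, 2, 0], [2, 1, 2], [0, 2, 1]]))  # False
```

For the assignment itself you would apply the same row-wise inequality to the output of matrix(n,epsi) for varying epsi.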

(b) Compute the solution of the system in the case n=5 with ε = 0.3, using both the Jacobi and Gauss-Seidel methods. To do this, use the command itermeth (code available on the Moodle page) and choose first 'J' and then 'G' as the method. Use the tolerance tol = 10^(-10), maximum number of iterations 10^3, and initial guess x(0) = (0, 0, 0, 0, 0)^T. Report the number of iterations necessary for convergence in both cases.
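The itermeth code itself lives on Moodle and is not reproduced here, but both iterations can be viewed as preconditioned Richardson updates x ← x + P^(-1)(b - Ax), with P = D for Jacobi and P = D + L for Gauss-Seidel. A rough Python/NumPy stand-in (the function name and the relative-residual stopping rule are assumptions, not the Moodle implementation):

```python
import numpy as np

def iter_solve(A, b, x0, tol=1e-10, nmax=1000, method='J'):
    """Jacobi ('J') or Gauss-Seidel ('G') iteration, written as the
    preconditioned Richardson update x <- x + P^{-1}(b - A x).
    Stops when the relative residual drops below tol; returns (x, k)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    D = np.diag(np.diag(A))
    L = np.tril(A) - D                     # strictly lower triangular part
    P = D if method == 'J' else D + L      # splitting matrix
    x = np.array(x0, dtype=float)
    bnorm = np.linalg.norm(b)
    for k in range(nmax + 1):
        r = b - A @ x
        if np.linalg.norm(r) <= tol * bnorm:
            return x, k
        x = x + np.linalg.solve(P, r)
    return x, nmax

# Small demo on a diagonally dominant system with known solution (1, 2, 3).
A = np.array([[4., 1, 0], [1, 4, 1], [0, 1, 4]])
b = A @ np.array([1., 2., 3.])
for m in ('J', 'G'):
    x, k = iter_solve(A, b, np.zeros(3), method=m)
    print(m, k, x)
```

On this demo Gauss-Seidel needs fewer iterations than Jacobi, which is the qualitative comparison part (b) asks you to report for A_ε.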

(c) Still using n=5, use the built-in Matlab command eig to compute the spectral radius of the iteration matrices of the Jacobi method B_J(ε) and the Gauss-Seidel method B_GS(ε) for ε = 0, 0.01, 0.02, ..., 1. Recall that

B_J = -D^(-1)(L + U),    B_GS = -(D + L)^(-1) U,

where D is the diagonal part of A_ε, L is the lower triangular part of A_ε (excluding the diagonal) and U is the upper triangular part of A_ε (excluding the diagonal). Plot the spectral radius against the value of ε for both methods on the same axes (the hold command might be useful).


For which (approximate) range of ε do you expect the two methods to converge? How does your answer compare to the answer you obtained in part (a)? When both methods converge, which do you expect to be faster? Which method would you recommend using if n=5 and ε = 0.5? Run both methods for this case and check that the outcomes are consistent with your recommendation.
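The spectral-radius computation in part (c) amounts to forming B_J and B_GS from the splitting and taking the largest eigenvalue modulus. A hedged Python/NumPy sketch of that step (mirroring the eig-based Matlab approach; the loop over ε would call the Moodle matrix(n,epsi), which is not reproduced here, so the demo uses a generic tridiagonal matrix):

```python
import numpy as np

def spectral_radii(A):
    """Spectral radii of B_J = -D^{-1}(L+U) and B_GS = -(D+L)^{-1}U."""
    D = np.diag(np.diag(A))
    L = np.tril(A) - D
    U = np.triu(A) - D
    BJ  = -np.linalg.solve(D, L + U)
    BGS = -np.linalg.solve(D + L, U)
    rho = lambda B: np.max(np.abs(np.linalg.eigvals(B)))
    return rho(BJ), rho(BGS)

# Demo on tridiag(-1, 2, -1); for this matrix rho(B_GS) = rho(B_J)^2.
A = np.array([[2., -1, 0], [-1, 2, -1], [0, -1, 2]])
rJ, rG = spectral_radii(A)
print(rJ, rG)
```

For the plot against ε you would collect rJ and rG in two arrays and overlay them on the same axes (matplotlib's plot with hold-style layering).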

EXERCISE 2(*) Consider the boundary value problem: find y : [0, 1] → R such that

-y''(x) = sin(πx) in I := [0, 1],  y(0) = y(1) = 0,  (1)

the exact solution of which is y(x) = π^(-2) sin(πx).

In lectures we are soon going to discuss how the solution y can be approximated using a finite difference method on a set of nodes {x_n}_{n=0}^N defined by x_n = nh, n = 0, ..., N, where h = 1/N is the step size. We seek a set of N + 1 numbers {u_n}_{n=0}^N approximating the values of the solution y at the nodes. To impose the boundary conditions we set u_0 = u_N = 0.

To approximate the ODE we replace the second derivative operator by the finite difference operator

D^2 u_n := (u_{n+1} - 2u_n + u_{n-1}) / h^2.

We find that the numerical approximation u = (u_1, ..., u_{N-1})^T to the interior values satisfies the linear system Au = b, for a certain matrix A and vector b. For a given N ∈ N these may be constructed in Matlab using the script

>> h=1/N;
>> A = (2/h^2)*diag(ones(N-1,1)) - (1/h^2)*diag(ones(N-2,1),1)...
       - (1/h^2)*diag(ones(N-2,1),-1);
>> b = transpose(sin(pi*h*(1:N-1)));
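For readers working outside Matlab, the same assembly can be sketched in Python/NumPy (a stand-in mirroring the script above; build_system is an assumed name, not part of the course code):

```python
import numpy as np

def build_system(N):
    """Assemble A u = b for -y'' = sin(pi x), y(0) = y(1) = 0, on N
    subintervals; A is the (N-1)x(N-1) tridiagonal difference matrix and
    b samples the right-hand side at the interior nodes."""
    h = 1.0 / N
    A = ((2 / h**2) * np.diag(np.ones(N - 1))
         - (1 / h**2) * np.diag(np.ones(N - 2), 1)
         - (1 / h**2) * np.diag(np.ones(N - 2), -1))
    b = np.sin(np.pi * h * np.arange(1, N))
    return A, b

A, b = build_system(5)
print(A.shape, b.shape)   # (4, 4) (4,)
```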

(If you are interested in knowing more about the background theory, then the section of the notes where this material is discussed is §5.9. But you don't need to look at §5.9 to answer this question - everything you need is given to you here.)

(a) Use the built-in Matlab command eig to compute the spectral radii of the iteration matrices B_J and B_GS of the Jacobi and Gauss-Seidel methods applied to the system above, for each N ∈ {5, 10, 20, 40, 80}. Report the results and store them in two vectors rhoBJ and rhoBGS.

Hint: the matrices D, L, U needed to construct the iteration matrices for the two methods can be constructed using the commands

>> D = diag(diag(A));
>> L = tril(A)-D;
>> U = triu(A)-D;
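Putting the hint together, the whole loop over N can be sketched in Python/NumPy (a stand-in for the Matlab version; variable names follow the text, and the splitting lines mirror the three Matlab commands above):

```python
import numpy as np

Nvec = [5, 10, 20, 40, 80]
rhoBJ, rhoBGS = [], []
for N in Nvec:
    h = 1.0 / N
    A = ((2 / h**2) * np.eye(N - 1)
         - (1 / h**2) * np.eye(N - 1, k=1)
         - (1 / h**2) * np.eye(N - 1, k=-1))
    D = np.diag(np.diag(A))   # mirrors D = diag(diag(A))
    L = np.tril(A) - D        # mirrors L = tril(A)-D
    U = np.triu(A) - D        # mirrors U = triu(A)-D
    BJ  = -np.linalg.solve(D, L + U)
    BGS = -np.linalg.solve(D + L, U)
    rhoBJ.append(np.max(np.abs(np.linalg.eigvals(BJ))))
    rhoBGS.append(np.max(np.abs(np.linalg.eigvals(BGS))))

print(rhoBJ)
print(rhoBGS)
```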

Do you predict that the two iterative methods should be convergent for these values of N? If so, which should converge faster? What seems to be happening to ρ(B_J) and ρ(B_GS) as the system size N increases? As a result, what do you expect to happen to the performance of the Jacobi and Gauss-Seidel methods?

To investigate this behaviour further, present the plots produced by the commands


>> loglog(Nvec,1-rhoBJ)
>> hold
>> loglog(Nvec,1-rhoBGS)

where Nvec=[5,10,20,40,80] is the vector of matrix sizes.

Use your results to propose an approximate relationship between the spectral radii and the size of the matrix N that you conjecture to hold in the limit as N → ∞. Give estimated values for any constants that appear in your relationships. (I.e. if you conjecture that ρ(B) ≈ 1 - Cα^N or ρ(B) ≈ 1 - CN^(-α) then give numerical estimates for C and α.)
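One way to extract such constants is a least-squares fit of log(1 - ρ) against log N: the slope gives -α and the intercept gives log C. A sketch assuming the algebraic-decay form 1 - ρ ≈ CN^(-α) (fit_power_law is an assumed name; the synthetic data below is only to check the fit recovers known constants):

```python
import numpy as np

def fit_power_law(Nvec, rho):
    """Fit 1 - rho ≈ C * N**(-alpha) by least squares in log-log
    coordinates; returns (C, alpha)."""
    logN = np.log(np.asarray(Nvec, dtype=float))
    logg = np.log(1.0 - np.asarray(rho, dtype=float))
    slope, intercept = np.polyfit(logN, logg, 1)   # slope = -alpha
    return float(np.exp(intercept)), float(-slope)

# Synthetic check: data generated with C = 3, alpha = 2 is recovered.
Nvec = np.array([5., 10., 20., 40., 80.])
C, alpha = fit_power_law(Nvec, 1.0 - 3.0 * Nvec**-2.0)
print(C, alpha)
```

Feeding in your computed rhoBJ and rhoBGS instead of the synthetic data gives the estimates the question asks for.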

Assuming that the error in both methods is approximately proportional to ρ(B)^k, and using the relationship you derived above, can you predict (roughly) how the number of iterations k_tol required to achieve a fixed solution accuracy tol will grow as N → ∞, for both methods? Comment on the implications for the practicality (or otherwise) of using the Jacobi and Gauss-Seidel methods (compared to e.g. direct methods) for this problem when N is large.

(b) Solve the system for N = 5, 10, 20, 40, 80 using the Gauss-Seidel method (use the command itermeth as in Exercise 1, with a tolerance 10^(-10) and an increased maximum iteration number 10^5), starting from an appropriate initial guess. Plot the resulting solutions on the same axes (don't forget u_0 and u_N), along with a plot of the exact solution y(x) (formula stated at the start of this exercise), evaluated on the finest mesh (N = 80). Do the solutions appear to converge as N increases?

For each N compute the error

e(N) = max_{n=0,...,N} |u_n - y(x_n)|,  (2)

and store your results in a vector error_vect.

In lectures we will show that e(N) ≤ CN^(-2) as N → ∞ (equivalently, e(N) ≤ Ch^2 as h = 1/N → 0), where the constant C is proportional to max_{x∈[0,1]} |y''''(x)|. (This comes from the error in replacing d^2/dx^2 by D^2 - recall theoretical exercise sheet 1.) Do your numerical results support or contradict this theoretical estimate?

Hint: you may find it useful to plot your results using the command

>> loglog(hvect, error_vect)

where hvect=1./Nvec is a vector of step sizes. Recall that if E(h) = Ch^p then a loglog plot of E against h will produce a straight line with slope p and offset log C. Once you have determined appropriate values of p and C you can check them by adding an extra plot to your figure using hold and the command

>> loglog(hvect, C*hvect.^p)

(c) Now consider the equation (1), but with the right hand side sin(πx) replaced by 1. The exact solution in this case is y(x) = (1/2)x(1 - x). Once again, solve the system for N = 5, 10, 20, 40, 80 (you will need to modify the vector b in your code), plot the numerical solutions and the exact solution on the same axes, and compute the corresponding errors (2). You might initially be surprised by what you see when you plot

>> loglog(hvect, error_vect)

Can you explain what is going on here?
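To reproduce the part (c) experiment, only the right-hand side changes. A direct-solve sketch in Python/NumPy (again a stand-in for itermeth) that prints the errors so you can inspect their sizes yourself:

```python
import numpy as np

def max_error_const_rhs(N):
    """Error (2) for -y'' = 1, y(0)=y(1)=0, exact solution y(x) = x(1-x)/2."""
    h = 1.0 / N
    A = ((2 / h**2) * np.eye(N - 1)
         - (1 / h**2) * np.eye(N - 1, k=1)
         - (1 / h**2) * np.eye(N - 1, k=-1))
    b = np.ones(N - 1)                     # modified right-hand side f = 1
    u = np.concatenate(([0.0], np.linalg.solve(A, b), [0.0]))
    x = np.linspace(0.0, 1.0, N + 1)
    return np.max(np.abs(u - 0.5 * x * (1.0 - x)))

for N in [5, 10, 20, 40, 80]:
    print(N, max_error_const_rhs(N))
```

Compare the sizes of these errors with those from part (b) before drawing your loglog plot.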

