DSA2102 Essential Data Analytics Tools: Numerical Computation
Semester 2, AY 2023/2024
Homework 2 / Computer Assignment
Instructions. Due date: 19 Mar (Tuesday) 11:59pm. Electronic submission only: please upload a single zipped file to Canvas. Rename your file with your name and student ID. 2 (out of 50) marks will be deducted for each day of late submission. Strictly no submission after 26 Mar 2024.
1. [8 points] For each of the following square matrices, either prove that it does not exist, or give an example in R^{2×2}.
Matrix | Strictly diagonally dominant? | Symmetric? | Positive definite?
A      | Yes                           | Yes        | Yes
B      | Yes                           | Yes        | No
C      | Yes                           | No         | Yes
D      | Yes                           | No         | No
E      | No                            | Yes        | Yes
F      | No                            | Yes        | No
G      | No                            | No         | Yes
H      | No                            | No         | No
2. [8 points] The spectral radius ρ(A) of a square matrix A is the maximum magnitude of its eigenvalues. We have the following theorem:
Theorem. If the n × n matrix A has spectral radius ρ(A) < 1 and b is an arbitrary vector, then, for any initial vector x_0, the iteration x_{k+1} = A x_k + b converges. In fact, there exists a unique x* such that lim_{k→∞} x_k = x* and x* = A x* + b.
Use the theorem above to prove the convergence theorem of the Gauss-Seidel method:
If the n × n matrix A is strictly diagonally dominant, then (a) (Optional.) A is a nonsingular matrix, and (b) for every vector b and every starting guess, the Gauss-Seidel method applied to Ax = b converges to a solution.
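For orientation (the proof itself is left to you): write A = L + D + U with L strictly lower triangular, D diagonal, and U strictly upper triangular. Gauss-Seidel computes x_{k+1} = −(D + L)^{−1} U x_k + (D + L)^{−1} b, which is exactly the fixed-point form of the theorem with iteration matrix −(D + L)^{−1} U. A minimal Python sketch of the component-wise sweep, using a made-up strictly diagonally dominant matrix rather than anything from the assignment:

```python
import numpy as np

def gauss_seidel(A, b, x0, n_sweeps=50):
    """One Gauss-Seidel sweep updates each component in turn, immediately
    reusing the components already updated earlier in the same sweep."""
    x = x0.astype(float).copy()
    n = len(b)
    for _ in range(n_sweeps):
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x[i] = (b[i] - s) / A[i, i]
    return x

# A strictly diagonally dominant example, so the theorem guarantees convergence.
A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([1.0, 2.0])
print(gauss_seidel(A, b, np.zeros(2)))   # agrees with np.linalg.solve(A, b)
```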
3. [34 points] Consider the following algorithm for finding the smallest eigenvalue of a symmetric positive definite matrix:
Algorithm minEig: finding the smallest eigenvalue of a symmetric positive definite matrix A ∈ R^{n×n}.
Note that in line 4, we need to solve a linear system with the same coefficient matrix but a different right-hand-side vector in every iteration. We will use this algorithm to find the minimum eigenvalue of the matrix A that is downloadable from Canvas (‘matrixA.csv’). You may also use the Python skeleton provided (‘minEig.py’).
Set x_0 = [1, 1, ..., 1]^T and tol = 0.5 × 10^{−2}.
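The pseudocode of Algorithm minEig is referenced above but not reproduced here. Assuming it is the standard inverse power iteration (so that line 4 is the solve A y = x_k), a minimal sketch follows; the function name, the Rayleigh-quotient estimate, and the stopping rule are illustrative assumptions, and the generic np.linalg.solve call is what parts (a)-(c) ask you to replace:

```python
import numpy as np

def min_eig(A, x0, tol=0.5e-2, max_iter=1000):
    """Inverse power iteration for a symmetric positive definite A: each step
    solves A y = x (the role of line 4), normalizes, and estimates the
    smallest eigenvalue by the Rayleigh quotient x^T A x."""
    x = x0 / np.linalg.norm(x0)
    lam_old = np.inf
    for _ in range(max_iter):
        y = np.linalg.solve(A, x)   # parts (a)-(c): swap in SPP / Cholesky / SOR
        x = y / np.linalg.norm(y)
        lam = x @ A @ x             # converges to the smallest eigenvalue
        if abs(lam - lam_old) < tol:
            break
        lam_old = lam
    return lam

A = np.array([[3.0, 1.0], [1.0, 2.0]])
print(min_eig(A, np.ones(2)))       # ~ (5 - sqrt(5))/2 ≈ 1.38
```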
(a) Solve the linear system in line 4 by Gaussian elimination with scaled partial pivoting. Report the eigenvalue found and the overall runtime needed. (A sketch of scaled partial pivoting follows part (d).)
(b) Solve the linear system in line 4 by Cholesky factorization. Report the eigenvalue found and the overall runtime needed. (A sketch of the factor-once, solve-many approach follows part (d).)
(c) Solve the linear system in line 4 by the Successive Over-Relaxation (SOR) method. Terminate the SOR algorithm when ‖A x̂^(i) − b‖ is small enough (instead of using a fixed number of iterations), where x̂^(i) is the i-th iterate of the SOR algorithm. Justify your choice of the relaxation parameter ω, the tolerance, and the initial guess x̂^(0) for the SOR algorithm. Report the eigenvalue found as well as the overall runtime and the average number of SOR iterations needed. (A sketch of the SOR sweep follows part (d).)
(d) Briefly comment on the three implementations in terms of efficiency and accuracy.
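A possible dense-numpy layout for part (a); the function name solve_spp is illustrative, and in the assignment b would be the current iterate from minEig:

```python
import numpy as np

def solve_spp(A, b):
    """Gaussian elimination with scaled partial pivoting: at step k, pivot on
    the row maximizing |a_ik| / s_i, where s_i = max_j |a_ij| over row i."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    s = np.max(np.abs(A), axis=1)              # row scale factors
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]) / s[k:])
        A[[k, p]], b[[k, p]], s[[k, p]] = A[[p, k]], b[[p, k]], s[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.zeros(n)                            # back substitution
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x
```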
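For part (b), the relevant observation is that A is fixed across minEig iterations, so the O(n^3) Cholesky factorization can be computed once and each subsequent solve reduced to two O(n^2) triangular solves. A sketch using numpy's built-in factorization (cholesky_solver is an illustrative name):

```python
import numpy as np

def cholesky_solver(A):
    """Factor A = L L^T once (L lower triangular), then return a closure that
    solves A x = b for any b via forward and back substitution."""
    L = np.linalg.cholesky(A)
    n = A.shape[0]

    def solve(b):
        y = np.zeros(n)                  # forward substitution: L y = b
        for i in range(n):
            y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
        x = np.zeros(n)                  # back substitution: L^T x = y
        for i in range(n - 1, -1, -1):
            x[i] = (y[i] - L[i+1:, i] @ x[i+1:]) / L[i, i]
        return x

    return solve

# Factor once, outside the eigenvalue loop; call solve(x_k) inside it.
```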
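For part (c), a minimal residual-terminated SOR sweep. The values omega = 1.2 and tol = 1e-8 below are placeholders only, since part (c) asks you to justify these choices yourself:

```python
import numpy as np

def sor(A, b, x0, omega=1.2, tol=1e-8, max_iter=10_000):
    """SOR: a Gauss-Seidel update blended with the previous value via the
    relaxation parameter omega (omega = 1 recovers Gauss-Seidel exactly).
    Terminates on the residual norm ||A x - b|| rather than an iteration cap."""
    x = x0.astype(float).copy()
    n = len(b)
    for it in range(1, max_iter + 1):
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x[i] = (1.0 - omega) * x[i] + omega * (b[i] - s) / A[i, i]
        if np.linalg.norm(A @ x - b) < tol:
            break
    return x, it                         # iteration count feeds the average in (c)
```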
Instructions: Note that your implementation should be as efficient as possible. You may make use of the code given in the lecture notes. Include your source code (.py, .ipynb, .m, etc.) in the zipped file.