David Šiška, School of Mathematics, University of Edinburgh
2018/19 Semester 2
Object Oriented Programming with Applications
Problem Sheet 4 - Wednesday 24th October 2018 (last updated 9th October 2018)
Exercise 4.1. Get a working implementation of the CompositeIntegrator class.
Use the lecture material.
You will notice that the class, as presented in lectures, is not robust. Consider the following situations and modify your code accordingly (i.e. throw appropriate exceptions); a sketch of the kind of checks intended follows the list.
1. What if newtonCotesOrder in the constructor is a number other than 1, 2, 3, 4?
2. What if N in the Integrate method is less than or equal to 0?
3. Does the code work if a > b in the Integrate method?
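A minimal sketch of the checks intended above, assuming the lecture class has a constructor taking newtonCotesOrder and an Integrate method taking (f, a, b, N); these signatures are assumptions based on the questions, not the exact lecture API:

using System;

public class CompositeIntegrator
{
    private readonly int newtonCotesOrder;

    public CompositeIntegrator(int newtonCotesOrder)
    {
        // Question 1: only Newton-Cotes orders 1 to 4 are supported
        if (newtonCotesOrder < 1 || newtonCotesOrder > 4)
            throw new ArgumentOutOfRangeException(nameof(newtonCotesOrder),
                "newtonCotesOrder must be 1, 2, 3 or 4.");
        this.newtonCotesOrder = newtonCotesOrder;
    }

    public double Integrate(Func<double, double> f, double a, double b, int N)
    {
        // Question 2: a non-positive number of subintervals makes no sense
        if (N <= 0)
            throw new ArgumentOutOfRangeException(nameof(N), "N must be positive.");
        // Question 3: one reasonable choice is to orient the interval, using
        // the convention that the integral from a to b is minus the integral
        // from b to a
        if (a > b)
            return -Integrate(f, b, a, N);
        /* ... composite Newton-Cotes rule as in lectures ... */
        return 0; // placeholder for the lecture implementation
    }
}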
Exercise 4.2. Newton's method for approximating solutions x to f(x) = 0, where f : R → R is assumed to be differentiable, is an iterative method where, given an initial guess x_0 for the solution, the next entry in the sequence of approximations is calculated as:
\[ x_n := x_{n-1} - \frac{f(x_{n-1})}{f'(x_{n-1})}, \qquad n = 1, 2, \ldots \]
Do the following:
1. To see that you understand how this works, calculate the first three approximations to 0 = x^2 - 2 =: f(x) with x_0 = 2 (this is a nice way of approximating √2 if you ever have to do this by hand).
2. Create a method with the following “signature”:
static double NewtonSolver(Func<double, double> f,
Func<double, double> fPrime,
double x0,
double maxError, int maxIter)
It should have the following properties: the return value should be the approximation x_n of a solution to f(x) = 0 such that |f(x_n)| < maxError. If the number of iterations has reached or exceeded maxIter then an exception should be thrown. If fPrime != null then it should be used, otherwise the derivative should be
approximated using symmetric finite differences. That is, for a given δ > 0 we say that f'(x_0) is approximately
\[ \frac{1}{2\delta}\big(f(x_0 + \delta) - f(x_0 - \delta)\big). \]
3. Test it by approximating the solution to 0 = x^2 - 2 with x_0 = 2. (A worked calculation for part 1 and a sketch for parts 2 and 3 follow.)
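Working part 1 by hand with f(x) = x^2 - 2, f'(x) = 2x and x_0 = 2: x_1 = 2 - 2/4 = 3/2, x_2 = 3/2 - (1/4)/3 = 17/12 ≈ 1.4167 and x_3 = 17/12 - (1/144)/(17/6) = 577/408 ≈ 1.41422, already close to √2. For parts 2 and 3, a minimal sketch of one possible implementation (the loop structure and the exception message are choices of this sketch, not prescribed):

static double NewtonSolver(Func<double, double> f,
                           Func<double, double> fPrime,
                           double x0, double maxError, int maxIter)
{
    const double delta = 1e-8; // step for the symmetric finite difference
    double x = x0;
    for (int n = 0; n < maxIter; n++)
    {
        if (Math.Abs(f(x)) < maxError)
            return x;
        // use the exact derivative if supplied, otherwise approximate it
        double derivative = (fPrime != null)
            ? fPrime(x)
            : (f(x + delta) - f(x - delta)) / (2 * delta);
        x = x - f(x) / derivative;
    }
    if (Math.Abs(f(x)) < maxError)
        return x;
    throw new Exception("NewtonSolver: maxIter iterations reached without convergence.");
}

A test in the spirit of part 3: NewtonSolver(x => x * x - 2, null, 2.0, 1e-10, 100) should return approximately 1.41421356.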
Exercise 4.3. Newton's method generalises to higher dimensions. Indeed, consider a differentiable F : R^d → R^d. Let J_F(x) denote the Jacobian matrix of this function evaluated at x. That is:
\[ (J_F(x))_{ij} = \frac{\partial F_i}{\partial x_j}(x), \qquad i, j = 1, \ldots, d. \]
Given an initial guess x_0 ∈ R^d we obtain successive approximations to x such that F(x) = 0 by solving
\[ J_F(x_{n-1})(x_n - x_{n-1}) = -F(x_{n-1}), \qquad n = 1, 2, \ldots \]
Your task is to complete the class below:
using MathNet.Numerics.LinearAlgebra; // provides Vector<double> and Matrix<double>

public class NewtonSolver
{
private const double delta = 1e-8; // for approximating partial derivatives
private double tol;
private int maxIt;
public NewtonSolver(double tolerance, int maximumIterations)
{
tol = tolerance;
maxIt = maximumIterations;
}
public Matrix<double> ApproximateJacobian(Func<Vector<double>,
Vector<double>> F, Vector<double> x)
{
/* ... write the code ...*/
}
public Vector<double> Solve(Func<Vector<double>, Vector<double>> F,
Func<Vector<double>, Matrix<double>> J_F, Vector<double> x_0)
{
/* ... write the code ...*/
}
}
The Newton method should stop if either maxIt is reached or if the ℓ^2 norm (i.e. the usual Euclidean norm) of F(x_n) is smaller than tol.
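One possible shape of the Solve loop, as a sketch (it relies on MathNet.Numerics: L2Norm() for the Euclidean norm and Matrix.Solve for the linear system; falling back to ApproximateJacobian when J_F == null is this sketch's reading of the exercise):

public Vector<double> Solve(Func<Vector<double>, Vector<double>> F,
                            Func<Vector<double>, Matrix<double>> J_F,
                            Vector<double> x_0)
{
    Vector<double> x = x_0;
    for (int n = 0; n < maxIt; n++)
    {
        Vector<double> Fx = F(x);
        if (Fx.L2Norm() < tol)
            return x;
        // exact Jacobian if supplied, finite differences otherwise
        Matrix<double> J = (J_F != null) ? J_F(x) : ApproximateJacobian(F, x);
        // solve J_F(x_{n-1})(x_n - x_{n-1}) = -F(x_{n-1}) and update
        Vector<double> step = J.Solve(-Fx);
        x = x + step;
    }
    throw new Exception("NewtonSolver.Solve: maxIt iterations reached without convergence.");
}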
Now test it by approximating the solution to F(x, y) = 0, where
\[ F(x, y) = \begin{pmatrix} x^2 + y^2 - 2xy - 1 \\ x^2 - y^2 - 7 \end{pmatrix}, \]
with x_0 = (1, -1)^T. Note that it is easy to see that at least one solution is x = 4, y = 3.
Hints.
To define something like F : R^d → R^d in C# use:
Func<Vector<double>, Vector<double>> F = (x) =>
{
Vector<double> y = Vector<double>.Build.Dense(x.Count);
y[0] = x[0]*x[0] + x[1]*x[1] - 2*x[0]*x[1] - 1;
y[1] = x[0]*x[0] - x[1]*x[1] - 7;
return y;
};
To define something like the Jacobian Matrix use:
Func<Vector<double>,Matrix<double>> J_F = (x) => {
int d = x.Count;
Matrix<double> J_F_vals = Matrix<double>.Build.Dense (d, d);
J_F_vals[0,0] = 2*x[0] -2*x[1];
J_F_vals[0,1] = 2*x[1]-2*x[0];
J_F_vals[1,0] = 2*x[0];
J_F_vals[1,1] = -2*x[1];
return J_F_vals;
};
You can see above how to define an empty vector and matrix respectively.
To fill in a whole jth column in a matrix you can use: M.SetColumn(j,v) assuming
M is a matrix and v is a vector of appropriate length.
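Using SetColumn, a forward-difference version of ApproximateJacobian might look like this (a sketch; delta is the class constant, and forward rather than symmetric differences are a choice made here to save function evaluations):

public Matrix<double> ApproximateJacobian(Func<Vector<double>, Vector<double>> F,
                                          Vector<double> x)
{
    int d = x.Count;
    Matrix<double> J = Matrix<double>.Build.Dense(d, d);
    Vector<double> Fx = F(x);
    for (int j = 0; j < d; j++)
    {
        Vector<double> xShifted = x.Clone();
        xShifted[j] += delta;
        // column j holds the partial derivatives with respect to x_j
        J.SetColumn(j, (F(xShifted) - Fx) / delta);
    }
    return J;
}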
To solve a linear system Ax = b (with A a d × d matrix, b a vector in R^d and x unknown) use: Vector<double> x = A.Solve(b);.
To find the determinant of a matrix A use double determinant = A.Determinant();. Note that A.Solve(b) might also fail when the determinant is very small rather than exactly 0.
Exercise 4.4. In fact our interest in Newton's method is also for minimization problems. A differentiable function f : R^d → R will have a local minimum (or maximum or saddle point) at x if the gradient ∇f(x) is equal to zero. If f is convex then this will be the global minimum. So we are approximating solutions to F(x) = ∇f(x) = 0 using Newton's method. The Jacobian matrix of F is now
\[ (J_F(x))_{ij} = \frac{\partial^2 f}{\partial x_i \partial x_j}(x), \qquad i, j = 1, \ldots, d, \]
which is the Hessian matrix of f.
In some cases one would have to approximate the Hessian using finite differences. Let us define T_{δ,i} : R^d → R^d as
\[ T_{\delta,i}\, y = (y_1, \ldots, y_{i-1}, y_i + \delta, y_{i+1}, \ldots, y_d)^T. \]
Then, for a given h > 0, entry (i, j) of the Hessian can be approximated by the nested symmetric difference
\[ \frac{\partial^2 f}{\partial x_i \partial x_j}(x) \approx \frac{1}{2h} \left[ \frac{1}{2h} \big( f(T_{h,j} T_{h,i}\, x) - f(T_{-h,j} T_{h,i}\, x) \big) - \frac{1}{2h} \big( f(T_{h,j} T_{-h,i}\, x) - f(T_{-h,j} T_{-h,i}\, x) \big) \right]. \]
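As a sketch of how the nested difference above translates to code (shift is a hypothetical helper introduced here, returning a copy of x with coordinate k moved by d; it is not part of the exercise):

static double HessianEntry(Func<Vector<double>, double> f,
                           Vector<double> x, int i, int j, double h)
{
    Func<Vector<double>, double, int, Vector<double>> shift = (y, d, k) =>
    {
        Vector<double> z = y.Clone();
        z[k] += d;
        return z; // z = T_{d,k} y in the notation above
    };
    // inner symmetric differences in x_j, taken at x_i shifted up and down
    double forward = (f(shift(shift(x, h, i), h, j))
                    - f(shift(shift(x, h, i), -h, j))) / (2 * h);
    double backward = (f(shift(shift(x, -h, i), h, j))
                     - f(shift(shift(x, -h, i), -h, j))) / (2 * h);
    // outer symmetric difference in x_i
    return (forward - backward) / (2 * h);
}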
Your task is to develop a Newton-based minimizer by completing the class below:
public class NewtonMinimizer
{
private const double delta = 1e-7; // for approximating grad
private NewtonSolver solver;
public NewtonMinimizer(double tol, int maxIt)
{
solver = new NewtonSolver(tol, maxIt);
}
private Vector<double> ApproximateGrad(Func<Vector<double>, double> f,
Vector<double> x)
{
/* ... write the code ... */
}
private Matrix<double> ApproximateHessian(Func<Vector<double>, double> f,
Func<Vector<double>, Vector<double>> grad_f,
Vector<double> x)
{
/* ... write the code, allow for grad_f == null ... */
}
public Vector<double> Minimize(Func<Vector<double>, double> f,
Func<Vector<double>, Vector<double>> grad_f,
Func<Vector<double>, Matrix<double>> hessian_f,
Vector<double> x_0)
{
/* ... write the code,
allow for grad_f == null and also hessian_f == null ... */
}
}
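One way to wire the pieces together, as a sketch (Minimize here leans on NewtonSolver.Solve accepting a null Jacobian, as in the sketch for Exercise 4.3, rather than calling ApproximateHessian directly; that is a design choice of this sketch):

private Vector<double> ApproximateGrad(Func<Vector<double>, double> f,
                                       Vector<double> x)
{
    int d = x.Count;
    Vector<double> grad = Vector<double>.Build.Dense(d);
    for (int i = 0; i < d; i++)
    {
        Vector<double> xPlus = x.Clone();
        Vector<double> xMinus = x.Clone();
        xPlus[i] += delta;
        xMinus[i] -= delta;
        // symmetric finite difference for the i-th partial derivative
        grad[i] = (f(xPlus) - f(xMinus)) / (2 * delta);
    }
    return grad;
}

public Vector<double> Minimize(Func<Vector<double>, double> f,
                               Func<Vector<double>, Vector<double>> grad_f,
                               Func<Vector<double>, Matrix<double>> hessian_f,
                               Vector<double> x_0)
{
    // minimizing f means finding a zero of its gradient
    Func<Vector<double>, Vector<double>> F = grad_f;
    if (F == null)
        F = x => ApproximateGrad(f, x);
    return solver.Solve(F, hessian_f, x_0);
}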
Test it by finding the global minimum of the function f given by f(x, y) = x^2 + y^2 by running the following code:
NewtonMinimizer minimizer = new NewtonMinimizer (1e-5, 100);
Func<Vector<double>, double> f = (x) => x [0] * x [0] + x [1] * x [1];
Vector<double> startPt = Vector<double>.Build.Dense (2);
startPt[0] = 1; startPt[1] = -1;
Console.WriteLine ("With approximate grad of f and hessian of f using f.d.");
Console.WriteLine(minimizer.Minimize(f, null, null, startPt));
Console.WriteLine ("With exact grad_f and approximate hessian f using f.d.");
Func<Vector<double>, Vector<double>> grad_f = (x) => {
Vector<double> grad = Vector<double>.Build.Dense(x.Count);
grad[0] = 2*x[0]; grad[1] = 2*x[1];
return grad;
};
Console.WriteLine(minimizer.Minimize(f, grad_f, null, startPt));
Console.WriteLine ("With exact grad of f and hessian of f.");
Func<Vector<double>, Matrix<double>> hessian_f = (x) => {
Matrix<double> hessian = Matrix<double>.Build.Dense(x.Count,x.Count);
hessian[0,0] = 2; hessian[0,1] = 0;
hessian[1,0] = 0; hessian[1,1] = 2;
return hessian;
};
Console.WriteLine(minimizer.Minimize(f, grad_f, hessian_f, startPt));
Console.ReadKey ();
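All three runs should print a vector that is numerically zero: f(x, y) = x^2 + y^2 has its unique global minimum at the origin, and since f is quadratic the variant with the exact gradient and Hessian reaches it in a single Newton step.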