Introduction

Exercise 1

\[\begin{split}F(x_1, x_2) = \begin{pmatrix} x_1^2 - x_2^2 - 3 x_2 - 2\\ x_1^3 + 2 - x_2^4 \end{pmatrix}\end{split}\]
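For illustration, a root of \(F\) can be computed with a plain Newton iteration; the analytic Jacobian follows directly from \(F\), while the starting guess and tolerance below are arbitrary choices, and Newton's method is only locally convergent:

[ ]:
import numpy

def F(x):
    x1, x2 = x
    return numpy.array([x1**2 - x2**2 - 3*x2 - 2,
                        x1**3 + 2 - x2**4])

def J(x):
    # Analytic Jacobian of F.
    x1, x2 = x
    return numpy.array([[2*x1, -2*x2 - 3],
                        [3*x1**2, -4*x2**3]])

x = numpy.array([4.0, 3.0])  # arbitrary starting guess
for _ in range(50):
    step = numpy.linalg.solve(J(x), F(x))
    x = x - step
    if numpy.linalg.norm(step) < 1e-12:
        break
print(x, F(x))  # x solves F(x) = 0 up to round-off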

Exercise 2

This is a data-fitting problem. Since \(f(t)\) is nonlinear in its parameters, a nonlinear least-squares formulation is appropriate. The variables to optimize over are the amplitude, frequency, and displacement.
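As a minimal sketch, assume a model of the form \(f(t) = x_1 \sin(x_2 t) + x_3\) with amplitude \(x_1\), frequency \(x_2\), and displacement \(x_3\); the exercise's exact parameterization may differ. The fit can then be computed with scipy.optimize.least_squares on synthetic data:

[ ]:
import numpy
from scipy.optimize import least_squares

# Hypothetical model: amplitude A, frequency w, displacement d.
def residuals(p, t, y):
    A, w, d = p
    return A * numpy.sin(w * t) + d - y

# Synthetic data standing in for the engineer's measurements.
rng = numpy.random.default_rng(0)
t = numpy.linspace(0, 10, 50)
y = 2.0 * numpy.sin(1.5 * t) + 0.5 + 0.1 * rng.standard_normal(t.size)

# The frequency enters nonlinearly, so a reasonable initial guess matters.
fit = least_squares(residuals, x0=[1.0, 1.4, 0.0], args=(t, y))
print(fit.x)  # estimated (A, w, d), near (2.0, 1.5, 0.5)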

Exercise 3

(a)

This problem is a constrained minimization problem. Since the number of housing starts is not concentrated around \(\{0, 1, 2\}\), this variable can be treated as continuous: estimate it with floating-point computation and round to the nearest integer at the end.

(b)

The model is complex, so the economist most likely has no analytic formula for its derivatives, much less a guarantee that the model is continuous. Since the variables are already constrained to be non-negative, adding upper bounds would further reduce the search space.

(c)

Assuming the model is not expensive to evaluate, this problem is cheap to solve because there are only three variables to optimize over.

Exercise 4

The infinite-precision computer would yield \(128.3 + 24.57 + 3.163 + 0.4825 = 156.5155\).

Adding in ascending order on the exercise's four-digit truncating computer would yield

\[\begin{split}0.4825 + 3.163 &= 3.6455 \rightarrow 3.645\\ 3.645 + 24.57 &= 28.215 \rightarrow 28.21\\ 28.21 + 128.3 &= 156.51 \rightarrow 156.5.\end{split}\]

The relative error is \(\frac{\left|156.5155 - 156.5\right|}{\left|156.5155\right|} = 9.90e{-5}\).

Adding in descending order would yield

\[\begin{split}128.3 + 24.57 &= 152.87 \rightarrow 152.8\\ 152.8 + 3.163 &= 155.963 \rightarrow 155.9\\ 155.9 + 0.4825 &= 156.3825 \rightarrow 156.3.\end{split}\]

The relative error is \(\frac{\left|156.5155 - 156.3\right|}{\left|156.5155\right|} = 1.38e{-3}\).

The foregoing demonstrates that adding in ascending order yields a smaller round-off error: the small terms accumulate before they can be swallowed by a large partial sum.
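The experiment can be reproduced with Python's decimal module; the context below is an assumed stand-in for the exercise's hypothetical base-10, four-digit, truncating computer:

[ ]:
from decimal import Decimal, Context, ROUND_DOWN

# Emulate a base-10, four-digit, truncating machine.
fp4 = Context(prec=4, rounding=ROUND_DOWN)

terms = [Decimal('0.4825'), Decimal('3.163'), Decimal('24.57'), Decimal('128.3')]

def chopped_sum(values):
    total = Decimal(0)
    for v in values:
        total = fp4.add(total, v)  # truncate each partial sum to four digits
    return total

exact = Decimal('156.5155')
for order, s in [('ascending', chopped_sum(terms)),
                 ('descending', chopped_sum(reversed(terms)))]:
    print(order, s, abs(exact - s) / exact)  # 156.5 versus 156.3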

Exercise 5

Computing \(\frac{\frac{1}{3} - 0.3300}{0.3300}\) with an infinite-precision computer would produce \(0.0101010101\ldots\)

On the computer of Exercise 4, most of the significant digits cancel in the subtraction:

\[\begin{split}1 \div 3 &= 0.3333\\ 0.3333 - 0.3300 &= 0.0033\\ 0.0033 \div 0.3300 &= 0.01.\end{split}\]

Truncating \(\frac{1}{3}\) to \(0.3333\) introduces a relative error of only about \(10^{-4}\), yet after subtracting the nearly identical \(0.3300\) only two significant digits survive, so the final answer \(0.01\) carries a relative error of about \(10^{-2}\). This shows that subtracting almost identical numbers destroys most of the significant digits of a computation.
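The cancellation can be reproduced with the same decimal emulation of the four-digit truncating machine:

[ ]:
from decimal import Decimal, Context, ROUND_DOWN

fp4 = Context(prec=4, rounding=ROUND_DOWN)  # four-digit truncating machine

third = fp4.divide(Decimal(1), Decimal(3))     # 0.3333
diff = fp4.subtract(third, Decimal('0.3300'))  # 0.0033: only two digits survive
result = fp4.divide(diff, Decimal('0.3300'))   # 0.01
print(third, diff, result)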

Exercise 6

The relative and absolute error of the result in Exercise 5 are respectively \(\frac{\left|0.0101010101 - 0.01\right|}{\left|0.0101010101\right|} = 1.00e{-2}\) and \(\left|0.0101010101 - 0.01\right| = 1.01e{-4}\).

If the problem is changed to \(\frac{\frac{100}{3} - 33}{33}\), the infinite-precision computer would produce the same value \(0.0101010101\ldots\) because both expressions equal \(\frac{1}{99}\), and the computer in Exercise 4 would yield

\[\begin{split}100 \div 3 &= 33.33\\ 33.33 - 33 &= 0.33\\ 0.33 \div 33 &= 0.01.\end{split}\]

The final relative and absolute errors are unchanged, but the absolute error of the intermediate subtraction grows from \(3.3 \times 10^{-5}\) to \(3.3 \times 10^{-3}\) while its relative error stays near \(10^{-2}\). This shows that absolute error is dependent on the scale of the results, whereas relative error is not.
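Continuing the decimal emulation, the errors of the rescaled problem can be checked directly:

[ ]:
from decimal import Decimal, Context, ROUND_DOWN

fp4 = Context(prec=4, rounding=ROUND_DOWN)
exact = Decimal(1) / Decimal(99)          # 0.010101..., the true value of both versions

q = fp4.divide(Decimal(100), Decimal(3))  # 33.33
computed = fp4.divide(fp4.subtract(q, Decimal(33)), Decimal(33))  # 0.33 / 33 = 0.01

print(abs(exact - computed) / abs(exact))  # relative error, about 1.00e-2
print(abs(exact - computed))               # absolute error, about 1.01e-4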

Exercise 7

Macheps can vary by a factor of two depending on whether rounding or truncating arithmetic is used: with \(t\) base-\(\beta\) digits, truncating arithmetic gives \(\beta^{1-t}\) while rounding gives \(\frac{1}{2}\beta^{1-t}\). In the run below, the final iteration prints \(1.0\) because \(1 + 2^{-53}\) rounds back to exactly \(1\) under IEEE round-to-nearest-even; the loop therefore reports \(2^{-53}\), half of numpy's \(\epsilon = 2^{-52}\), which is defined as the gap between \(1\) and the next larger double.

[1]:
import numpy

# Halve eps until 1 + eps is no longer distinguishable from 1 in double
# precision; macheps counts the halvings, so eps == 2**-macheps on exit.
eps = 1
macheps = 0
while 1 + eps > 1:
    eps /= 2
    macheps += 1
    print(1 + eps)
print('Default Machine Epsilon {0} versus computed from 2^-{1} = {2}'.format(numpy.finfo(float).eps, macheps, eps))
1.5
1.25
1.125
1.0625
1.03125
1.015625
1.0078125
1.00390625
1.001953125
1.0009765625
1.00048828125
1.000244140625
1.0001220703125
1.00006103515625
1.000030517578125
1.0000152587890625
1.0000076293945312
1.0000038146972656
1.0000019073486328
1.0000009536743164
1.0000004768371582
1.000000238418579
1.0000001192092896
1.0000000596046448
1.0000000298023224
1.0000000149011612
1.0000000074505806
1.0000000037252903
1.0000000018626451
1.0000000009313226
1.0000000004656613
1.0000000002328306
1.0000000001164153
1.0000000000582077
1.0000000000291038
1.000000000014552
1.000000000007276
1.000000000003638
1.000000000001819
1.0000000000009095
1.0000000000004547
1.0000000000002274
1.0000000000001137
1.0000000000000568
1.0000000000000284
1.0000000000000142
1.000000000000007
1.0000000000000036
1.0000000000000018
1.0000000000000009
1.0000000000000004
1.0000000000000002
1.0
Default Machine Epsilon 2.220446049250313e-16 versus computed from 2^-53 = 1.1102230246251565e-16
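A quick check on the factor of two: under IEEE round-to-nearest-even, \(1 + 2^{-53}\) is a halfway case that rounds back to \(1.0\), while numpy's \(\epsilon\) reports the gap between \(1\) and the next larger double:

[ ]:
import numpy

print(1 + 2**-53 == 1)                # True: the halfway case rounds back to 1.0
print(numpy.nextafter(1.0, 2.0) - 1)  # 2.220446049250313e-16, i.e. 2**-52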

Exercise 8

(a)

Given \(b = 1, c = 10^{-50}\), the square \(c^2 = 10^{-100}\) underflows, but it is negligible next to \(b^2 = 1\), so \(c\) can safely be set to zero: the computed \(\sqrt{b^2 + c^2} = 1\) agrees with the true value to machine precision.

(b)

When \(b = c = 10^{-50}\), both squares underflow and the naive computation of \(\sqrt{b^2 + c^2}\) returns zero, even though the true value \(\sqrt{2} \times 10^{-50}\) is representable. Here underflow is harmful; the standard remedy is to factor out the larger magnitude and compute \(\left|b\right| \sqrt{1 + (c / b)^2}\).
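A double-precision sketch of this remedy (with \(10^{-200}\) standing in for \(10^{-50}\), since an IEEE double does not underflow until roughly \(10^{-308}\)):

[ ]:
import math

b = c = 1e-200
naive = math.sqrt(b * b + c * c)               # b*b and c*c flush to 0.0
scaled = abs(b) * math.sqrt(1 + (c / b) ** 2)  # factor out the larger magnitude
print(naive)                     # 0.0
print(scaled, math.hypot(b, c))  # both about 1.414e-200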

(c)

Given \(w = 10^{-30}, x = 10^{-60}, y = 10^{-40}, z = 10^{-50}\), computing \(\frac{wx}{yz}\) analytically resolves to \(1\), yet both products \(wx = yz = 10^{-90}\) can underflow. Hence it is not feasible to substitute zero for any variable or intermediate result; reordering the operations as \(\frac{w}{y} \cdot \frac{x}{z} = 10^{10} \times 10^{-10}\) keeps every intermediate within range.