2  Week 2: Approximating Functions

2.1 Floating Point Arithmetic

# calculating binary of a number + bit size of a variable
a = 7
bin(a)
a.bit_length()

Practice

What are the mantissa, sign bit, and exponent for the numbers \(7_{10}\), \(-7_{10}\), and \((0.1)_{10}\)?
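As a sanity check in Python (this helper is not from the notes, just a suggested sketch), the standard-library `struct` module can expose the actual IEEE 754 double-precision bit fields of these numbers:

```python
import struct

def float_bits(x):
    """Return the sign, exponent, and fraction bit strings of a 64-bit IEEE 754 float."""
    # pack the float into 8 bytes, then format those bytes as a 64-bit binary string
    bits = format(struct.unpack('>Q', struct.pack('>d', x))[0], '064b')
    return bits[0], bits[1:12], bits[12:]

for value in (7.0, -7.0, 0.1):
    sign, exponent, fraction = float_bits(value)
    print(value, '-> sign:', sign, 'exponent:', exponent, 'fraction:', fraction)
```

Note that \(7 = 1.11_2 \times 2^2\) terminates, while \(0.1\) does not have a finite binary expansion, so its fraction field is a rounded repeating pattern.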

Practice

To make all of these ideas concrete, let’s consider a small computer system where each number is stored in the following format:

\[ s\; E\; b_1 b_2 b_3 \]

The first entry is a bit for the sign (\(0=+\) and \(1=-\)). The second entry, \(E\), is the exponent, and we’ll assume in this example that the exponent can be \(-1\), \(0\), or \(1\). The three bits on the right represent the significand of the number. Hence, every number in this number system takes the form

\[ (-1)^s \times (1+ 0.b_1b_2b_3) \times 2^{E} \]

  • What is the smallest positive number that can be represented in this form?
  • What is the largest positive number that can be represented in this form?
  • What is the machine precision in this number system?
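One way to check your answers is brute force: enumerate every positive number of the form \((1 + 0.b_1b_2b_3) \times 2^E\) and read off the extremes. This sketch is not part of the worksheet, just a way to verify your hand calculations (here machine precision is taken as the gap between 1 and the next representable number, \(2^{-3}\)):

```python
from itertools import product

# enumerate every positive number in the toy system: (1 + 0.b1 b2 b3) * 2^E
values = sorted(
    (1 + (4*b1 + 2*b2 + b3) / 8) * 2**E
    for b1, b2, b3 in product((0, 1), repeat=3)
    for E in (-1, 0, 1)
)

smallest = values[0]      # (1 + 0.000_2) * 2^-1
largest = values[-1]      # (1 + 0.111_2) * 2^1
machine_eps = 1 / 8       # gap between 1 and the next representable number, 2^-3
print(smallest, largest, machine_eps)
```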

2.2 Approximating Functions

Practice

In this problem we’re going to make a wish list of the things a computer should do when approximating a function. We’re going to complete the following sentence: If we are going to approximate \(f(x)\) near the point \(x=x_0\) with a simpler function \(g(x)\) then …

(I’ll get us started with the first item that seems natural to wish for. The rest of the wish list is for you to complete.)

  • the functions \(f(x)\) and \(g(x)\) should agree at \(x=x_0\). In other words, \(f(x_0) = g(x_0)\)
  • if \(f(x)\) is increasing/decreasing to the right of \(x=x_0\), then \(g(x)\)
  • if \(f(x)\) is increasing/decreasing to the left of \(x=x_0\), then \(g(x)\)
  • if \(f(x)\) is concave up/down to the right of \(x=x_0\), then \(g(x)\)
  • if \(f(x)\) is concave up/down to the left of \(x=x_0\), then \(g(x)\)
  • … is there anything else you would add?

Practice

Let \(f(x) = e^x\). Based on the lecture notes, build a cubic polynomial (a truncated Taylor series) that approximates \(f(x)=e^x\) near \(x_0 = 0\).

\[ e^x \approx 1 + x + \frac{x^2}{2} + \frac{x^3}{6} \]

Now, we will practice plotting some of our approximations.

# import packages
import numpy as np
from plotly import graph_objs as go

# data points along the x axis
# your comments go here
xdata = np.linspace(start = -3, stop = 3, num = 50)

# data points along the y axis (y = e^x)
yexp = np.exp(xdata)
y1 = 1 + xdata + (xdata**2)/2 + (xdata**3)/6
y1
# define a single plot
fig = go.Figure(go.Scatter(x=xdata, y=yexp, name='$f(x) = e^x$'))

# adding a trace
fig.add_trace(go.Scatter(x=xdata, y=y1, name=r'$1 + x + \frac{x^2}{2} + \frac{x^3}{6}$'))

fig.update_layout(title = 'Plot of f(x)')
fig.show()

Practice Consider the function

\[ f(x) = \frac{1}{1-x} \]

and build a Taylor Series centered at \(x_0 = 0\).

Plot the function, and at least 2 approximations. What do you notice?

# define the new function (note the vertical asymptote at x = 1)
fx = 1/(1-xdata)
# define the quadratic Taylor approximation
fx_approx = 1 + xdata + xdata**2
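The Taylor series of \(1/(1-x)\) at \(x_0 = 0\) is the geometric series \(1 + x + x^2 + \cdots\). A quick numeric experiment (a sketch, not part of the worksheet) hints at what your plot should show: adding terms helps only inside the radius of convergence \(|x| < 1\).

```python
def geometric_partial_sum(x, n):
    """Sum of 1 + x + ... + x^n, the nth-order Taylor polynomial of 1/(1-x) at 0."""
    return sum(x**k for k in range(n + 1))

# inside the radius of convergence (|x| < 1) the error shrinks as the order grows
for n in (2, 5, 10):
    print(n, abs(1/(1 - 0.5) - geometric_partial_sum(0.5, n)))

# outside it (|x| > 1) adding more terms makes the approximation worse
for n in (2, 5, 10):
    print(n, abs(1/(1 - 2.0) - geometric_partial_sum(2.0, n)))
```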

2.3 Approximation Error (with Taylor Series)

Discussion Q: Using the Taylor polynomials for \(f(x) = e^x\) from the earlier exercise, what are the absolute and expected errors of the 0th- through 3rd-order approximations of \(e^{0.1}\) (np.exp(0.1))?

  • Complete the table below.
  • Add a plot (with appropriate labels) of your approximations.
| Taylor Series | Approx. | Absolute Error | Expected Error      |
|---------------|---------|----------------|---------------------|
| 0th order     | 1       | 0.10517        | \(O(x) = 0.1\)      |
| 1st order     | 1.1     | 0.00517        | \(O(x^2) = 0.01\)   |
| 2nd order     |         |                |                     |
| 3rd order     |         |                |                     |
abs(np.exp(0.1) - 1)
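Rather than computing each row by hand as in the line above, one possible loop (a suggestion, not prescribed by the notes) builds the polynomial term by term and fills the absolute-error column; the expected-error column then follows the \(O(x^{n+1})\) pattern.

```python
import numpy as np
from math import factorial

x = 0.1
exact = np.exp(x)

# build the Taylor polynomial of e^x at 0 one term at a time: x^n / n!
approx = 0.0
for n in range(4):
    approx += x**n / factorial(n)
    print(f'{n}th order: approx = {approx:.6f}, abs error = {abs(exact - approx):.6f}')
```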

Practice How does the error change when you approximate \(f(x) = e^x\) centered at \(x_0=1\)?

  • Add a table to support your conclusions.
  • Add a plot (with appropriate labels) of your approximations.

Practice Write the Taylor Series for \(f(x) = \sin(x)\) centered at \(x_0=0\). How is this error different from the one when \(f(x) = e^x\)?

  • Add a table to support your conclusions.
  • Add a plot (with appropriate labels) of your approximations.
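For \(\sin(x)\) the even-degree terms all vanish, since \(\sin(x) = x - \frac{x^3}{6} + \frac{x^5}{120} - \cdots\). A sketch you could adapt for your table (the evaluation point \(x = 0.1\) is an assumption chosen to match the earlier exercise):

```python
import numpy as np

x = 0.1
exact = np.sin(x)

# Taylor polynomials of sin(x) at 0: only odd powers appear
p1 = x                           # 1st order (also 2nd order, since the x^2 term is 0)
p3 = x - x**3/6                  # 3rd order (also 4th order)
p5 = x - x**3/6 + x**5/120       # 5th order (also 6th order)

for name, p in (('x', p1), ('x - x^3/6', p3), ('x - x^3/6 + x^5/120', p5)):
    print(f'{name}: abs error = {abs(exact - p):.2e}')
```

Because each even-degree term is zero, every polynomial picks up an extra order of accuracy for free, which is one concrete way this error differs from the \(e^x\) case.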