import numpy as np

3 Week 3: Numerical Calculus Intro
In this module, we build some of the common techniques for approximating the two primary computations in calculus: taking derivatives and evaluating definite integrals.
3.1 Numerical Calculus
Practice
In statistics, the function known as the normal distribution (the bell curve) is defined as
\[ N(x) = \frac{1}{\sqrt{2 \pi}} e^{-x^2/2}. \]
One of the primary computations of introductory statistics is to find the area under a portion of this curve since this area gives the probability of some event
\[ P(a < x < b) = \int^b_a \frac{1}{\sqrt{2 \pi}} e^{-x^2/2}\;dx. \]
The trouble is that there is no known antiderivative of this function.
Propose a method for approximating this area.
- …
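One possible answer, sketched below: sample \(N(x)\) on a fine grid and add up rectangle areas, i.e. a left Riemann sum. The function name `riemann_prob` and the grid size are illustrative choices, not part of the notes.

```python
import numpy as np

def riemann_prob(a, b, n=10_000):
    # approximate P(a < x < b) for the standard normal with a left Riemann sum
    x = np.linspace(a, b, n, endpoint=False)   # left endpoints of n rectangles
    h = (b - a) / n                            # width of each rectangle
    N = 1/np.sqrt(2*np.pi) * np.exp(-x**2/2)   # heights N(x)
    return np.sum(N * h)

print(riemann_prob(-1, 1))  # close to 0.6827, the one-standard-deviation probability
```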
from plotly import graph_objs as go
# plotting N(x) by first defining xdata
xdata = np.linspace(-3,3)
N = 1/(np.sqrt(2*np.pi)) * np.exp(-xdata**2/2)
fig=go.Figure()
fig.add_trace(go.Scatter(x=xdata, y=N))

Practice
Give a list of five functions for which an exact algebraic derivative is relatively easy but an exact antiderivative is either very hard or maybe impossible. Be prepared to compare with your peers.
Some functions could be:

- \(\sin(x^2)\)
- \(e^{x^2}\)
- \(\log_{10}(x) \ln(x)\)
- \(\frac{1}{\ln(x)}\)
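As a quick sanity check (assuming `sympy` is available), asking a computer algebra system for an antiderivative of the first function returns a non-elementary special function rather than a closed form:

```python
import sympy as sp

x = sp.symbols('x')
antideriv = sp.integrate(sp.sin(x**2), x)  # sympy expresses this via the Fresnel S function
print(antideriv)                           # involves fresnels(...), not elementary functions
```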
3.2 Differentiation
In this section we’ll build several approximations of first and second derivatives. The idea behind each approximation is:
- Partition the interval \([a,b]\) into \(N\) points.
- Define the distance between two points in the partition as \(h\).
- Approximate the derivative at the point \(x \in [a,b]\) by using linear combinations of \(f(x-h)\), \(f(x)\), \(f(x+h)\), and/or other points in the partition.
This converts a continuous problem to a discrete problem.
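The three steps above can be sketched directly; the function `f`, the interval, and the partition size below are illustrative choices:

```python
import numpy as np

f = np.sin                      # an example function to differentiate
a, b, N = 0.0, np.pi, 10        # illustrative interval [a,b] and partition size

x = np.linspace(a, b, N)        # step 1: partition [a,b] into N points
h = x[1] - x[0]                 # step 2: spacing between neighboring points
fx = f(x)                       # values of f at every partition point

# step 3 (preview): a linear combination of neighboring values, here (f(x+h) - f(x))/h
dfdx = (fx[1:] - fx[:-1]) / h   # forward-difference approximation on the partition
print(dfdx[:3])
```

Note that the result has one fewer entry than the partition, since the last point has no right neighbor.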
Practice
Let’s take a close look at partitions before moving on to more details about numerical differentiation.
If we partition the interval \([0,1]\) into 3 equal subintervals, each with length \(h\), then:
- \(h = \)
- \([0,1] = [0,-] \cup [-,-] \cup [-, 1]\)
- There are four total points that define the partition. They are \(0, - , - , 1\)
Note: In general, for a closed interval \([a,b]\) with \(N\) equal subintervals, we have
\[ h = \frac{b - a}{N} \]
# using numpy to do this for us: 4 points give the 3 equal subintervals of [0,1]
np.linspace(0,1,4)

# note that num counts points, not subintervals; num=5 gives 4 equal subintervals
np.linspace(0,1,5)

# Q: how would we partition [5,10] into 100 equal subintervals?
np.linspace(start=5, stop=10, num=101)

If we recall that the definition of the first derivative of a function is
\[\begin{align} \frac{df}{dx} = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}, \end{align}\]
our first approximation for the first derivative is naturally
\[\begin{align} \frac{df}{dx} \approx \frac{f(x+h) - f(x)}{h}. \end{align}\]
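This forward-difference formula is easy to try out numerically. The sketch below applies it to \(f(x) = \sin(x)\), whose exact derivative \(\cos(x)\) we can compare against; the point and step size are illustrative choices:

```python
import numpy as np

def forward_diff(f, x, h):
    # first-order forward-difference approximation of f'(x)
    return (f(x + h) - f(x)) / h

x, h = 1.0, 0.01
approx = forward_diff(np.sin, x, h)
print(approx, np.cos(x))  # the two values agree to roughly the size of h
```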
From Taylor’s Theorem we know that for an infinitely differentiable function \(f(x)\),
\[\begin{align*} f(x) &= f(x_0) + \frac{f'(x_0)}{1!} (x-x_0)^1\\ &+ \frac{f''(x_0)}{2!}(x-x_0)^2 + \frac{f^{(3)}(x_0)}{3!}(x-x_0)^3\\ &+ \frac{f^{(4)}(x_0)}{4!}(x-x_0)^4 + \cdots. \end{align*}\]
What do we get if we replace \(x\) (in the Taylor Series) with \(x+h\) and replace every \(x_0\) in the series with \(x\)? In other words, what is
\[ f(x + h) = \]
Practice
Solve the result from the previous problem for \(f'(x)\) to create an approximation for \(f'(x)\) using \(f(x+h), f(x)\) and higher-order terms (fill in the ?),
\[ f'(x) = \frac{??? - ???}{?} + ??? \]
3.3 Error Analysis
Practice
Consider the function \(f(x) = \sin(x)(1-x)\). The goal of this problem is to make sense of the discussion of the “order” of the derivative approximation.
- Find \(f'(x)\) by hand
- Verify that \(f'(1) = -\sin(1) \approx -0.841471\)
- To approximate the first derivative at \(x=1\) numerically with our first order approximation formula, we calculate
| \(h\) | Approx. of \(f'(1)\) | Exact Value of \(f'(1)\) | Abs. % Error |
|---|---|---|---|
| \(2^{-1}\) | | -0.841471 | 18.54181% |
| \(2^{-2}\) | | -0.841471 | |
| \(2^{-3}\) | | -0.841471 | |
The following code will help automate this process.

import numpy as np
import matplotlib.pyplot as plt

exact = -np.sin(1)              # what does this line do?
H = 2.0**(-np.arange(1,10))     # what does this line do?
AbsPctError = []                # start off with a blank list of errors
for h in H:
    approx =                    # FINISH THIS LINE OF CODE
    AbsPctError.append( np.abs( (approx - exact)/exact ) )
    if h==H[0]:
        print("h=",h,"\t Absolute Pct Error=", AbsPctError[-1])
    else:
        err_reduction_factor = AbsPctError[-2]/AbsPctError[-1]
        print("h=",h,"\t Absolute Pct Error=", AbsPctError[-1],
              "with error reduction",err_reduction_factor)
plt.loglog(H,AbsPctError,'b-*') # Why are we building a loglog plot?
plt.grid()
plt.show()
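Once the loop is completed, the slope of the loglog plot reveals the order of the method. The self-contained sketch below, using \(f(x) = \sin(x)(1-x)\) from the practice problem, fits that slope with `np.polyfit`; for this first-order formula it should come out close to 1:

```python
import numpy as np

f = lambda x: np.sin(x) * (1 - x)
exact = -np.sin(1)
H = 2.0**(-np.arange(1, 10))
errors = np.abs(((f(1 + H) - f(1)) / H - exact) / exact)

# slope of log(error) vs log(h); a slope near 1 indicates first-order convergence
slope = np.polyfit(np.log(H), np.log(errors), 1)[0]
print(slope)
```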