Monthly Archives: September 2015

Linear Approximations and Differentials

Linear Approximation

Let $y=f(x)$ be a differentiable function. The function $f(x)$ can be approximated by the tangent line to $y=f(x)$ at $a$ if $x$ is near $a$. Such an approximation is called a linear approximation.

If $x\approx a$ then $\Delta x=x-a\approx 0$, so we have
\begin{align*}
\frac{\Delta y}{\Delta x}&\approx \frac{dy}{dx}\\
&=f'(a).
\end{align*}
This means that
$$\frac{f(x)-f(a)}{x-a}\approx f'(a),$$
i.e.
\begin{equation}
\label{eq:lineapprox}
f(x)\approx f(a)+f'(a)(x-a).
\end{equation}
The equation \eqref{eq:lineapprox} is called the linear approximation or tangent line approximation of $f$ at $a$. The linear function
\begin{equation}
L(x):=f(a)+f'(a)(x-a)
\end{equation}
is called the linearization of $f$ at $a$. Notice that $y=L(x)$ is the equation of the tangent line to $y=f(x)$ at $a$.

Example. Find the linearization of $f(x)=\sqrt{x+3}$ at $a=1$ and use it to approximate $\sqrt{3.98}$ and $\sqrt{4.05}$.

Solution.

Linear approximation of f(x)=sqrt(x+3) at a=1

$f'(x)=\frac{1}{2\sqrt{x+3}}$, so
\begin{align*}
L(x)&=f(1)+f'(1)(x-1)\\
&=2+\frac{1}{4}(x-1)\\
&=\frac{x}{4}+\frac{7}{4}.
\end{align*}
When $x\approx 1$, we have the approximation
$$\sqrt{x+3}\approx \frac{x}{4}+\frac{7}{4}.$$
$\sqrt{3.98}$ can be written as $\sqrt{0.98+3}$, so $x=0.98$. Hence,
\begin{align*}
\sqrt{3.98}&\approx \frac{0.98}{4}+\frac{7}{4}\\
&=1.995.
\end{align*}
$\sqrt{4.05}$ can be written as $\sqrt{1.05+3}$, so $x=1.05$. Hence,
\begin{align*}
\sqrt{4.05}&\approx \frac{1.05}{4}+\frac{7}{4}\\
&=2.0125.
\end{align*}
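The accuracy of these two estimates is easy to check numerically. Below is a minimal Python sketch (the function name `L` is ours) comparing the linearization with the exact square roots:

```python
import math

def L(x):
    # Linearization of f(x) = sqrt(x + 3) at a = 1: L(x) = x/4 + 7/4
    return x / 4 + 7 / 4

# sqrt(3.98) corresponds to x = 0.98, sqrt(4.05) to x = 1.05.
for x in (0.98, 1.05):
    approx = L(x)
    exact = math.sqrt(x + 3)
    print(f"sqrt({x + 3:.2f}): approx {approx:.6f}, exact {exact:.6f}")
```

Both estimates agree with the exact values to four decimal places, as expected for $x$ this close to $a=1$.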

Differentials

Differentials

As seen in the above figure, when $\Delta x\approx 0$, $dx=\Delta x$ and $\Delta y\approx dy$. On the other hand, $\frac{dy}{dx}=f'(x)$, i.e. $dy=f'(x)dx$. Hence, we obtain
\begin{equation}
\label{eq:differential}
\Delta y\approx f’(x)\Delta x.
\end{equation}

Example. The radius of a sphere was measured and found to be 21 cm with a possible error in measurement of at most 0.05 cm. What is the maximum error in using this value of the radius to compute the volume of the sphere?

Solution. Let $V$ denote the volume of a sphere of radius $r$. Then $V=\frac{4}{3}\pi r^3$. What we are trying to find is $\Delta V$ with $\Delta r=0.05$ cm. As seen in \eqref{eq:differential}, $\Delta V\approx dV$, so we find $dV$ instead because finding $dV$ is easier than finding the exact error $\Delta V$. Differentiating $V$ with respect to $r$, we obtain
\begin{align*}
dV&=4\pi r^2 dr\\
&=4\pi r^2\Delta r\\
&=4\pi\cdot(21)^2\cdot 0.05\\
&\approx 277.
\end{align*}
So the maximum error in the calculated volume is about 277 $\mbox{cm}^3$.
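The differential estimate can also be compared against the exact change in volume; a short Python sketch (variable names are ours):

```python
import math

r, dr = 21.0, 0.05  # measured radius (cm) and maximum measurement error (cm)

# Differential estimate: dV = 4*pi*r^2 * dr
dV = 4 * math.pi * r**2 * dr

# Exact change for comparison: Delta V = V(r + dr) - V(r)
V = lambda r: 4 / 3 * math.pi * r**3
dV_exact = V(r + dr) - V(r)

print(f"dV = {dV:.2f} cm^3, exact Delta V = {dV_exact:.2f} cm^3")
```

The differential and the exact change differ by less than 1 $\mbox{cm}^3$ here, which is why the easier computation $dV$ is a good stand-in for $\Delta V$.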

Implicit Differentiation

We have often seen functions defined as $y=f(x)$. This clearly shows that $y$ is a function of the independent variable $x$. But functions are often defined implicitly. For instance, consider the equation $x^2+y^2=25$. Of course this is the equation of the circle centered at the origin $(0,0)$ with radius $5$, and a circle is not the graph of a function. But if we require $y\geq 0$, then the equation describes the upper half-circle, which is the graph of the function $y=\sqrt{25-x^2}$. Functions defined by equations like $x^2+y^2=25$ are called implicit functions. In some cases, such as $x^2+y^2=25$, we can easily write an implicit function explicitly as $y=f(x)$, but in many cases, such as $x^3+y^3=6xy$, we cannot. So we need a way to differentiate an implicit function without writing it as $y=f(x)$. This can indeed be done by the chain rule: assume that $y$ is a function of $x$ and apply the chain rule. For example,
\begin{align*}
\frac{d}{dx}y^n&=\frac{d}{dy}(y^n)\frac{dy}{dx}\ (y\ \mbox{is the innermost function})\\
&=ny^{n-1}\frac{dy}{dx}.
\end{align*}
Let us take a look at another example.
\begin{align*}
\frac{d}{dx}\cos y&=\frac{d}{dy}(\cos y)\frac{dy}{dx}\ (y\ \mbox{is the innermost function})\\
&=-\sin y\frac{dy}{dx}.
\end{align*}
Here come more examples.

Example. If $x^2+y^2=25$, find $\frac{dy}{dx}$.

Solution. Differentiating the equation with respect to $x$, we obtain
$$2x+2y\frac{dy}{dx}=0.$$
Solving the resulting equation for $\frac{dy}{dx}$, we obtain
$$\frac{dy}{dx}=-\frac{x}{y}.$$

Example.

1. Find $y'$ if $x^3+y^3=6xy$.

Solution. Differentiate the equation with respect to $x$. Then we obtain
$$3x^2+3y^2\frac{dy}{dx}=6y+6x\frac{dy}{dx}.$$
Solving the resulting equation for $\frac{dy}{dx}$, we obtain
$$\frac{dy}{dx}=\frac{2y-x^2}{y^2-2x}.$$

2. Find the tangent to $x^3+y^3=6xy$ at $(3,3)$.

Solution. The equation of tangent is
$$y-3=\left[\frac{dy}{dx}\right]_{(3,3)}(x-3).$$
$$\left[\frac{dy}{dx}\right]_{(3,3)}=\frac{2\cdot 3-(3)^2}{3^2-2\cdot 3}=-1.$$ Therefore, the tangent is given by $y=-x+6$.
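As a quick sanity check, the point $(3,3)$ does lie on the curve, and the formula for $\frac{dy}{dx}$ derived above gives slope $-1$ there; a small Python sketch (the helper names `on_curve` and `slope` are ours):

```python
# Check the implicit differentiation result for the curve x^3 + y^3 = 6xy:
# dy/dx = (2y - x^2) / (y^2 - 2x), derived above.

def on_curve(x, y, tol=1e-9):
    return abs(x**3 + y**3 - 6 * x * y) < tol

def slope(x, y):
    return (2 * y - x**2) / (y**2 - 2 * x)

assert on_curve(3, 3)   # (3, 3) lies on the curve
print(slope(3, 3))      # slope of the tangent at (3, 3) -> -1.0
```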

The Chain Rule

Let us consider the function $y=\sqrt{x^2+1}$. Notice that this is a composite function: $y=\sqrt{u}$ where $u=x^2+1$. In general, a composite function can be written as $y=f(u)$ where $u$ is itself a function of $x$, $u=g(x)$. While we know how to differentiate $y=\sqrt{u}$ (i.e. find $\frac{dy}{du}$) and $u=x^2+1$ (i.e. find $\frac{du}{dx}$), we do not yet know how to differentiate $y=\sqrt{x^2+1}$ (i.e. find $\frac{dy}{dx}$). In this lecture, we would like to devise a way to differentiate a composite function. This is very important because most of the differentiable functions we encounter are composite functions.

Let $y=f(u)$ and $u=g(x)$ and assume that both $\frac{dy}{du}$ and $\frac{du}{dx}$ exist. Now,
\begin{align*}
\frac{\Delta y}{\Delta x}&=\frac{\Delta y}{\Delta u}\cdot\frac{\Delta u}{\Delta x}\\
&=\frac{f(u+\Delta u)-f(u)}{\Delta u}\cdot\frac{g(x+\Delta x)-g(x)}{\Delta x}.
\end{align*}
Hence,
\begin{align*}
\frac{dy}{dx}&=\lim_{\Delta x\to 0}\frac{\Delta y}{\Delta x}\\
&=\lim_{\Delta u\to 0}\frac{\Delta y}{\Delta u}\cdot\lim_{\Delta x\to 0}\frac{\Delta u}{\Delta x}\ (\Delta u\to 0\ \mbox{as}\ \Delta x\to 0)\\
&=\frac{dy}{du}\cdot\frac{du}{dx}
\end{align*}
or
\begin{align*}
\frac{dy}{dx}&=\lim_{\Delta u\to 0}\frac{f(u+\Delta u)-f(u)}{\Delta u}\cdot\lim_{\Delta x\to 0}\frac{g(x+\Delta x)-g(x)}{\Delta x}\\
&=f'(u)g'(x).
\end{align*}

Theorem. [The Chain Rule]
Let $y=f(u)$ and $u=g(x)$. If both $\frac{dy}{du}$ and $\frac{du}{dx}$ exist, then $\frac{dy}{dx}$ exists and
\begin{align*}
\frac{dy}{dx}&=\frac{dy}{du}\cdot\frac{du}{dx}\\
&=f'(u)g'(x).
\end{align*}

Remark. The derivation of the chain rule shown above is not rigorously correct. The reason is that $\Delta u$ may become $0$, in which case the quotient $\frac{\Delta y}{\Delta u}$ is undefined. There is a more rigorous proof of the chain rule, but we will not discuss it here.

Remark. Students commonly have difficulty applying the chain rule when they learn it for the first time. The difficulty usually is not with understanding the chain rule itself but with identifying the function $u=g(x)$. The candidate for $u$ is usually the function inside parentheses (or brackets) or the innermost function.

Example. We are now ready to find $\frac{dy}{dx}$ when $y=\sqrt{x^2+1}$. In this case, we don’t see parentheses or brackets but the innermost function is $x^2+1$. Let $u=x^2+1$. Then $y=\sqrt{u}$. Now,
\begin{align*}
\frac{dy}{du}&=\frac{1}{2\sqrt{u}}\\
&=\frac{1}{2\sqrt{x^2+1}},\\
\frac{du}{dx}&=2x.
\end{align*}
So, by the chain rule, we have
$$\frac{dy}{dx}=\frac{dy}{du}\cdot\frac{du}{dx}=\frac{x}{\sqrt{x^2+1}}.$$
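The result can be verified numerically with a central difference quotient; a minimal Python sketch (the names `y` and `dydx` are ours):

```python
import math

# Numerically verify dy/dx = x / sqrt(x^2 + 1) for y = sqrt(x^2 + 1)
# using a central difference quotient (a sanity check, not a proof).

def y(x):
    return math.sqrt(x**2 + 1)

def dydx(x):
    return x / math.sqrt(x**2 + 1)

h = 1e-6
for x in (-2.0, 0.5, 3.0):
    numeric = (y(x + h) - y(x - h)) / (2 * h)
    assert abs(numeric - dydx(x)) < 1e-8
print("chain rule check passed")
```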

Example. Differentiate $y=(x^3-1)^{100}$.

Solution. The function inside parentheses is $x^3-1$. So, it is our candidate. Let $u=x^3-1$. Then $y=u^{100}.$
By the chain rule,
\begin{align*}
\frac{dy}{dx}&=\frac{dy}{du}\cdot\frac{du}{dx}\\
&=100u^{99}\cdot(3x^2)\\
&=300x^2(x^3-1)^{99}.
\end{align*}

Example. Find the derivative of each function.

1. $y=\sin 4x$.

Solution. The innermost function is $4x$. Let $u=4x$. Then $y=\sin u$. By the chain rule,
\begin{align*}
\frac{dy}{dx}&=\frac{dy}{du}\cdot\frac{du}{dx}\\
&=\cos u\cdot4\\
&=4\cos 4x.
\end{align*}

2. $y=\sqrt{\sin x}$.

Solution. The innermost function is $\sin x$. Let $u=\sin x$. Then $y=\sqrt{u}$. By the chain rule,
\begin{align*}
\frac{dy}{dx}&=\frac{dy}{du}\cdot\frac{du}{dx}\\
&=\frac{1}{2\sqrt{u}}\cdot\cos x\\
&=\frac{\cos x}{2\sqrt{\sin x}}.
\end{align*}

Fourier Series

d’Alembert (1717-1783) studied a partial differential equation (the wave equation) that describes the motion of a vibrating string, and Daniel Bernoulli (1700-1782) showed that its solution can be represented as a trigonometric series. Fourier (1768-1830) also showed that the solution of a heat conduction problem is represented as a trigonometric series. Suppose that $f(\theta)$ satisfies $f(\theta+2\pi)=f(\theta)$ for all $\theta$; that is, $f(\theta)$ is a periodic function with period $2\pi$. Assume that $f$ is Riemann integrable on every bounded interval. Can $f$ then be expanded in a series (a trigonometric series)
\begin{equation}
\label{eq:fourier}
f(\theta)=\frac{1}{2}a_0+\sum_{n=1}^\infty(a_n\cos n\theta+b_n\sin n\theta)
\end{equation}
? The answer is yes. The series \eqref{eq:fourier} can be written as
\begin{equation}
\label{eq:fourier2}
f(\theta)=\sum_{n=-\infty}^\infty c_ne^{in\theta},
\end{equation}
where $c_0=\frac{1}{2}a_0$, $c_n=\frac{1}{2}(a_n-ib_n)$, and $c_{-n}=\frac{1}{2}(a_n+ib_n)$ for $n=1,2,3,\cdots$. Multiplying \eqref{eq:fourier2} by $e^{-ik\theta}$ and integrating term by term over $[-\pi,\pi]$, we obtain
\begin{align*}
\int_{-\pi}^{\pi}f(\theta)e^{-ik\theta}d\theta&=\sum_{n=-\infty}^\infty c_n\int_{-\pi}^{\pi}e^{i(n-k)\theta}d\theta\\
&=2\pi\sum_{n=-\infty}^\infty c_n\delta_{nk},
\end{align*}
where $\delta_{nk}$ denotes Kronecker’s delta. Hence we obtain
\begin{equation}
\label{eq:fc}
c_n=\frac{1}{2\pi}\int_{-\pi}^{\pi}f(\theta)e^{-in\theta}d\theta,
\end{equation}
$n=0,\pm 1,\pm 2,\cdots$. $a_n$ and $b_n$ are then given by
\begin{align}
\label{eq:fc2}
a_n&=\frac{1}{\pi}\int_{-\pi}^{\pi}f(\theta)\cos n\theta d\theta,\ n=0,1,2,\cdots,\\
\label{eq:fc3}
b_n&=\frac{1}{\pi}\int_{-\pi}^{\pi}f(\theta)\sin n\theta d\theta,\ n=1,2,\cdots.
\end{align}
The series of the form \eqref{eq:fourier} or \eqref{eq:fourier2} is called a Fourier series and $c_n$ or $a_n$, $b_n$ are called the Fourier coefficients of $f$.
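The orthogonality relation $\int_{-\pi}^{\pi}e^{i(n-k)\theta}d\theta=2\pi\delta_{nk}$ used in deriving \eqref{eq:fc} can be checked numerically; a midpoint-rule sketch in Python (the function name `integral` is ours):

```python
import cmath
import math

# Numerically check the orthogonality relation:
# integral over [-pi, pi] of e^{i(n-k)theta} dtheta = 2*pi if n == k, else 0.

def integral(n, k, steps=20000):
    total = 0j
    h = 2 * math.pi / steps
    for j in range(steps):
        theta = -math.pi + (j + 0.5) * h  # midpoint of the j-th subinterval
        total += cmath.exp(1j * (n - k) * theta) * h
    return total

assert abs(integral(3, 3) - 2 * math.pi) < 1e-6
assert abs(integral(3, 5)) < 1e-6
print("orthogonality check passed")
```

The midpoint rule is essentially exact here because the integrand is periodic over the full interval.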

Lemma. If $F$ is periodic with period $P$ then $\int_a^{a+P} F(x)dx$ is independent of $a$.

Proof. Define
\begin{align*}
g(a)&:=\int_a^{a+P} F(x)dx\\
&=\int_0^{a+P}F(x)dx-\int_0^a F(x)dx.
\end{align*}
Then $g'(a)=F(a+P)-F(a)=0$ for all $a$. This means that $g$ is a constant function.

Lemma. Suppose that $f$ is periodic with period $2\pi$ and integrable on $[-\pi,\pi]$. If $f$ is even,
$$a_n=\frac{2}{\pi}\int_0^\pi f(\theta)\cos n\theta d\theta,\ b_n=0.$$
If $f$ is odd,
$$a_n=0,\ b_n=\frac{2}{\pi}\int_0^\pi f(\theta)\sin n\theta d\theta.$$

Remark. $c_0=\frac{1}{2}a_0=\frac{1}{2\pi}\int_{-\pi}^{\pi}f(\theta)d\theta$. Notice that this is the mean value of $f$ on $[-\pi,\pi]$.

If $f(x)$ is a periodic function with period $2L$, then it can be represented on $[-L,L]$ as
\begin{equation}
\label{eq:fourier3}
f(x)=\frac{1}{2}a_0+\sum_{n=1}^\infty\left\{a_n\cos\left(\frac{n\pi}{L}x\right)+b_n\sin\left(\frac{n\pi}{L}x\right)\right\},
\end{equation}
\begin{align}
\label{eq:fc4}
a_n&=\frac{1}{L}\int_{-L}^L f(x)\cos\left(\frac{n\pi}{L}x\right)dx,\ n=0,1,2,\cdots,\\
\label{eq:fc5}
b_n&=\frac{1}{L}\int_{-L}^L f(x)\sin\left(\frac{n\pi}{L}x\right)dx,\ n=1,2,\cdots.
\end{align}

Example. [Sawtooth Function] Let $f$ be defined by
$$f(x)=x,\ -L<x<L$$
and $f(x+2L)=f(x)$.

Sawtooth Function with L=1

Since $x$ is an odd function,
$$a_n=\frac{1}{L}\int_{-L}^L x\cos\left(\frac{n\pi}{L}x\right)dx=0,\ n=0,1,2,\cdots.$$
For $n=1,2,\cdots$,
\begin{align*}
b_n&=\frac{1}{L}\int_{-L}^L x\sin\left(\frac{n\pi}{L}x\right)dx\\
&=\frac{2}{L}\int_0^L x\sin\left(\frac{n\pi}{L}x\right)dx\\
&=-\frac{2L(-1)^n}{n\pi}.
\end{align*}
Hence, $f(x)$ is represented as the Fourier series
$$f(x)=-\frac{2L}{\pi}\sum_{n=1}^\infty\frac{(-1)^n}{n}\sin\left(\frac{n\pi}{L}x\right)$$
on the interval $[-L,L]$.
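The partial sums of this series can be evaluated numerically to watch the convergence; a Python sketch taking $L=1$, matching the plotted figures (the function name `partial_sum` is ours):

```python
import math

# Partial sums of the sawtooth Fourier series with L = 1:
# f(x) = -(2/pi) * sum_{n>=1} (-1)^n sin(n*pi*x) / n.

def partial_sum(x, N, L=1.0):
    s = 0.0
    for n in range(1, N + 1):
        s += (-1) ** n / n * math.sin(n * math.pi * x / L)
    return -2 * L / math.pi * s

# Inside (-L, L) the series converges to f(x) = x, but slowly (error ~ 1/N).
x = 0.5
print(partial_sum(x, 1000))   # close to 0.5
```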

n-th Partial Sum of Fourier Series with n=5

n-th Partial Sum of Fourier Series with n=10

n-th Partial Sum of Fourier Series with n=30

n-th Partial Sum of Fourier Series with n=100

Example. [Square Wave] Let $f$ be defined by
$$f(x)=\left\{\begin{array}{ccc}
-k & \mbox{if} & -\pi<x<0\\
k & \mbox{if} & 0<x<\pi
\end{array}\right.$$
and $f(x+2\pi)=f(x)$.

Square Wave

The Fourier coefficients are computed to be
\begin{align*}
a_n&=\frac{1}{\pi}\int_{-\pi}^{\pi}f(x)\cos nxdx=0,\ n=0,1,2,\cdots,\\
b_n&=\frac{1}{\pi}\int_{-\pi}^{\pi}f(x)\sin nxdx\\
&=\frac{2k}{n\pi}[1-(-1)^n],\ n=1,2,\cdots.
\end{align*}
So, $b_n=0$ if $n$ is even. Now,
$$b_{2n-1}=\frac{4k}{(2n-1)\pi},\ n=1,2,\cdots$$
and
$$f(x)=\frac{4k}{\pi}\sum_{n=1}^\infty\frac{\sin(2n-1)x}{2n-1}.$$

n-th Partial Sum of Fourier Series with n=5

n-th Partial Sum of Fourier Series with n=10

n-th Partial Sum of Fourier Series with n=30

n-th Partial Sum of Fourier Series with n=100

Since $0<\frac{\pi}{2}<\pi$, $f\left(\frac{\pi}{2}\right)=k$. On the other hand,
\begin{align*}
f\left(\frac{\pi}{2}\right)&=\frac{4k}{\pi}\sum_{n=1}^\infty\frac{\sin\left(\frac{(2n-1)\pi}{2}\right)}{2n-1}\\
&=\frac{4k}{\pi}\sum_{n=1}^\infty\frac{(-1)^{n+1}}{2n-1}\\
&=\frac{4k}{\pi}\left(1-\frac{1}{3}+\frac{1}{5}-\frac{1}{7}+\cdots\right).
\end{align*}
Hence, we obtain
$$\frac{\pi}{4}=1-\frac{1}{3}+\frac{1}{5}-\frac{1}{7}+\cdots$$
i.e.
$$\pi=4\sum_{n=1}^\infty\frac{(-1)^{n+1}}{2n-1}.$$
This is a famous result obtained by Gottfried Wilhelm Leibniz in 1673 from geometric considerations.
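The slow convergence of this series is easy to observe numerically; a Python sketch (the function name `leibniz` is ours):

```python
import math

# Leibniz series: pi = 4 * sum_{n>=1} (-1)^{n+1} / (2n - 1).
# It converges slowly: the error after N terms is bounded by the
# first omitted term, 4 / (2N + 1).

def leibniz(N):
    return 4 * sum((-1) ** (n + 1) / (2 * n - 1) for n in range(1, N + 1))

print(leibniz(100000))  # agrees with pi to within about 2e-5
```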

Pi as Leibniz series

Gibbs Phenomenon

The Gibbs Phenomenon is an overshoot, a peculiarity of the Fourier series and other eigenfunction series at a simple discontinuity: the $n$th partial sum of the Fourier series has large oscillations near the jump, which may increase the maximum of the partial sum above that of the function itself. The Gibbs phenomenon is observed in the above two examples. The overshoot does not die out as the frequency increases, but approaches a finite limit. It is a consequence of trying to approximate a discontinuous function with a finite Fourier sum, i.e. a partial sum of continuous functions, which is itself always continuous.

Gibbs Phenomenon

The Fundamental Theorem of Calculus

First we begin with the Mean Value Theorem for Definite Integrals.

Theorem. If $f$ is continuous on $[a,b]$, then at some point $c\in [a,b]$,
$$f(c)(b-a)=\int_a^b f(x)dx$$
or
\begin{equation}
\label{eq:mvt}
f(c)=\frac{1}{b-a}\int_a^b f(x)dx.
\end{equation}
Notice that the RHS of \eqref{eq:mvt} is the average of $f(x)$ on $[a,b]$.

Example. Find the average value of $f(x)=4-x$ on $[0,3]$ and $c\in[0,3]$ at which $f$ actually takes on this value.

Solution. The region under $y=4-x$ on $[0,3]$ is a trapezoid and its area is $\frac{15}{2}$. So,
\begin{align*}
av(f)&=\frac{1}{3-0}\int_0^3(4-x)dx\\
&=\frac{5}{2}.
\end{align*}
Let $f(c)=\frac{5}{2}$. Then $4-c=\frac{5}{2}$ and $c=\frac{3}{2}$.

Example. Show that if $f$ is continuous on $[a,b]$ ($a\ne b$) and $\int_a^b f(x)dx=0$, then $f(x)=0$ at least once in $[a,b]$.

Solution. By the MVT, there exists $c\in[a,b]$ such that
$$f(c)=\frac{1}{b-a}\int_a^b f(x)dx=0.$$

Suppose that $f(t)$ is an integrable function on a finite interval $I$. Let $a\in I$. Then
$$F(x):=\int_a^x f(t)dt$$
defines a function on the interval $I$. Let $f(x)$ be a continuous function on $[a,b]$. Then

Claim 1: $F(x)=\int_a^x f(t)dt$ is continuous on $[a,b]$.

Claim 2: $F(x)$ is differentiable on $(a,b)$ and $F'(x)=\frac{d}{dx}\int_a^xf(t)dt=f(x)$.

Claims 1 and 2 together constitute:

The Fundamental Theorem of Calculus (Part I)

If $f$ is continuous on $[a,b]$, then $F(x)=\int_a^xf(t)dt$ is continuous on $[a,b]$ and is differentiable on $(a,b)$. Moreover, $F'(x)=\frac{d}{dx}\int_a^xf(t)dt=f(x)$.

The Fundamental Theorem of Calculus (Part II) relates the definite integral to an antiderivative of a function $f(x)$.

If $f$ is continuous on $[a,b]$ and $F$ is any antiderivative of $f$ on $[a,b]$, then
$$\int_a^b f(x)dx=F(b)-F(a).$$

Note: The usual notation for $F(b)-F(a)$ is $F(x)|_a^b$ or $[F(x)]_a^b$.
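Both parts of the theorem can be illustrated numerically with a midpoint-rule integral; a Python sketch for $f(x)=\cos x$ (the helper names are ours):

```python
import math

# Numerical illustration of both parts of the Fundamental Theorem for
# f(x) = cos(x), using a simple midpoint-rule integral.

def integrate(f, a, b, steps=100000):
    h = (b - a) / steps
    return sum(f(a + (j + 0.5) * h) for j in range(steps)) * h

F = lambda x: integrate(math.cos, 0.0, x)   # F(x) = integral of cos on [0, x]

# Part I: F'(x) = f(x), checked by a central difference quotient at x = 0.5.
h = 1e-4
assert abs((F(0.5 + h) - F(0.5 - h)) / (2 * h) - math.cos(0.5)) < 1e-6

# Part II: integral of cos on [0, 1] equals sin(1) - sin(0).
assert abs(integrate(math.cos, 0.0, 1.0) - math.sin(1.0)) < 1e-9
print("FTC checks passed")
```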

Heat Equation and Schrödinger Equation

There is an intriguing relationship between the Schrödinger equation for a free particle and the homogeneous heat equation.

The 1-dimensional Schrödinger equation for a free particle is
\begin{equation}
\label{eq:se}
i\hbar\frac{\partial\psi(x,t)}{\partial t}=-\frac{\hbar^2}{2m}\frac{\partial^2\psi(x,t)}{\partial x^2}.
\end{equation}
Take the Wick rotation $t\mapsto \tau=it$. Then the Schrödinger equation \eqref{eq:se} turns into
\begin{equation}
\label{eq:hheq}
\frac{\partial\phi(x,\tau)}{\partial\tau}=\frac{\hbar}{2m}\frac{\partial^2\phi(x,\tau)}{\partial x^2},
\end{equation}
where $\phi(x,\tau)=\psi\left(x,\frac{\tau}{i}\right)$. \eqref{eq:hheq} is a homogeneous heat equation with diffusion coefficient $\alpha^2=\frac{\hbar}{2m}$. Conversely, apply the Wick rotation $t\mapsto\tau=-it$ to the 1-dimensional homogeneous heat equation
\begin{equation}
\label{eq:hhe2}
\frac{\partial u(x,t)}{\partial t}=\alpha^2\frac{\partial^2 u(x,t)}{\partial x^2}.
\end{equation}
Then the resulting equation is
\begin{equation}
\label{eq:se2}
i\hbar\frac{\partial w(x,\tau)}{\partial\tau}=-\alpha^2\hbar\frac{\partial^2 w(x,\tau)}{\partial x^2},
\end{equation}
where $w(x,\tau)=u\left(x,-\frac{\tau}{i}\right)$. \eqref{eq:se2} is a Schrödinger equation for a free particle with $m=\frac{1}{2\alpha^2}$. The solution of \eqref{eq:hhe2} with homogeneous boundary conditions takes the form
$$u(x,t)=\sum_{n=0}^\infty A_ne^{-\lambda_n^2\alpha^2 t}X_n(x).$$
Its Wick rotated solution is
\begin{align*}
w(x,\tau)&=\sum_{n=0}^\infty w_n(x,\tau)\\
&=\sum_{n=0}^\infty A_ne^{-i\lambda_n^2\alpha^2\tau}X_n(x).
\end{align*}
For each $n$, $i\hbar\frac{\partial w_n(x,\tau)}{\partial\tau}=\lambda_n^2\alpha^2\hbar w_n(x,\tau)$. Thus for each $n=0,1,2,\cdots$, $E_n=\lambda_n^2\alpha^2\hbar$ is the energy and $\omega_n=\lambda_n^2\alpha^2$ is the frequency of the wave $w_n(x,\tau)$.
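The Wick-rotated plane wave can be checked against the heat equation \eqref{eq:hheq} by finite differences; a Python sketch with $\hbar=m=1$ (the wavenumber $k$ is a hypothetical choice of ours):

```python
import cmath

# Finite-difference check of the Wick rotation claim. With hbar = m = 1
# the diffusion coefficient is alpha^2 = hbar / (2m) = 1/2.
hbar, m, k = 1.0, 1.0, 2.0
alpha2 = hbar / (2 * m)
omega = hbar * k**2 / (2 * m)

# phi(x, tau) = e^{ikx - omega*tau} is the Wick-rotated free-particle plane wave.
phi = lambda x, tau: cmath.exp(1j * k * x - omega * tau)

x, tau, h = 0.3, 0.2, 1e-4
d_tau = (phi(x, tau + h) - phi(x, tau - h)) / (2 * h)
d_xx = (phi(x + h, tau) - 2 * phi(x, tau) + phi(x - h, tau)) / h**2

# phi should satisfy the heat equation phi_tau = (hbar / 2m) phi_xx.
assert abs(d_tau - alpha2 * d_xx) < 1e-5
print("Wick rotation check passed")
```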

Definite Integral

If $f: [a,b]\longrightarrow\mathbb{R}$ is a continuous function, the limits from the left-end point, midpoint, and right-end point methods in Areas under Curves exist and they are identical. The limit is denoted by $\int_a^b f(x)dx$ and called the definite integral of $f(x)$ on the closed interval $[a,b]$. It is not required that $f(x)\geq 0$ for all $x\in [a,b]$, but if $f(x)\geq 0$ for all $x\in [a,b]$, then $\int_a^b f(x)dx$ is the area under the curve $y=f(x)$ on the interval $[a,b]$. It can be proven that the definite integral $\int_a^b f(x)dx$ does not depend on the choice of partition of $[a,b]$ or the choice of point in each subinterval at which $f(x)$ is evaluated. More specifically, let $P: a=x_0<x_1<x_2<\cdots<x_n=b$ be an arbitrary partition (or subdivision) of $[a,b]$ with $\Delta x_k=x_k-x_{k-1}$, $k=1,2,\cdots,n$. Also let $x_k'$ be any point in the subinterval $[x_{k-1},x_k]$, $k=1,2,\cdots,n$. Then
$$\int_a^b f(x)dx=\lim_{n\to\infty}\sum_{k=1}^nf(x_k')\Delta x_k,$$
where the limit is taken so that $\max_k\Delta x_k\to 0$.

The definite integral $\int_a^b f(x)dx$ satisfies the following properties.

Theorem. 1. $\int_a^b f(x)dx=-\int_b^a f(x)dx$.

2. $\int_a^a f(x)dx=0$.

3. $\int_a^b cf(x)dx=c\int_a^b f(x)dx$, where $c$ is a constant.

4. $\int_a^b (f(x)+g(x))dx=\int_a^b f(x)dx+\int_a^b g(x)dx$.

The properties 3 and 4 tell us that the definite integral $\int_a^b f(x)dx$ is linear.

5. $\int_a^c f(x) dx+\int_c^b f(x)dx=\int_a^b f(x)dx$.

6. Let $m$ and $M$ be the minimum and the maximum values of $f(x)$ on $[a,b]$. Then
$$m(b-a)\leq\int_a^b f(x)dx\leq M(b-a).$$

7. If $f(x)\leq g(x)$ on $[a,b]$, then
$$\int_a^b f(x)dx\leq\int_a^b g(x)dx.$$
As a special case, if $f(x)\geq 0$, then $\int_a^b f(x)dx\geq 0$.

Example. Suppose that
$$\int_{-1}^1 f(x)dx=5,\ \int_1^4 f(x)dx=-2,\ \int_{-1}^1 h(x)dx=7.$$
Find

1. $\int_4^1 f(x)dx$.

Solution.
$$\int_4^1 f(x)dx=-\int_1^4 f(x)dx=2.$$

2. $\int_{-1}^1(2f(x)+3h(x))dx$.

Solution. \begin{align*}
\int_{-1}^1(2f(x)+3h(x))dx&=2\int_{-1}^1f(x)dx+3\int_{-1}^1h(x)dx\\
&=31.
\end{align*}

3. $\int_{-1}^4 f(x)dx$.

Solution.
\begin{align*}
\int_{-1}^4 f(x)dx&=\int_{-1}^1 f(x)dx+\int_1^4 f(x)dx\\
&=3.
\end{align*}

Example. Show that the value of $\int_0^1\sqrt{1+\cos x}dx$ is less than $\frac{3}{2}$.

Solution. Since $-1\leq\cos x\leq 1$ and $\sqrt{x}$ is an increasing function on $[0,\infty)$, $\sqrt{1+\cos x}\leq \sqrt{2}$. Hence,
\begin{align*}
\int_0^1\sqrt{1+\cos x}dx&\leq\int_0^1\sqrt{2}dx\\
&=\sqrt{2}\\
&\approx 1.4142136\\
&<\frac{3}{2}.
\end{align*}
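The integral itself can be approximated numerically to confirm the bound; a midpoint-rule Python sketch (the variable names are ours):

```python
import math

# Midpoint-rule estimate of the integral of sqrt(1 + cos x) over [0, 1],
# to confirm that it is below sqrt(2) and below 3/2.
steps = 100000
h = 1.0 / steps
integral = sum(math.sqrt(1 + math.cos((j + 0.5) * h)) for j in range(steps)) * h

print(integral)            # about 1.36, indeed less than 3/2
assert integral < math.sqrt(2)
assert integral < 1.5
```

In fact $\sqrt{1+\cos x}=\sqrt{2}\,\cos\frac{x}{2}$ on $[0,1]$, so the exact value is $2\sqrt{2}\sin\frac{1}{2}\approx 1.356$.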

Using the symmetries of even functions and odd functions, we obtain the following properties.

Theorem. Let $f:[-a,a]\longrightarrow\mathbb{R}$ be a continuous function. Then

1. If $f$ is an even function,
$$\int_{-a}^a f(x)dx=2\int_0^a f(x)dx.$$

2. If $f$ is an odd function,
$$\int_{-a}^a f(x)dx=0.$$
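These symmetry properties are easy to confirm numerically; a midpoint Riemann-sum sketch in Python with $f(x)=x^2$ (even) and $f(x)=x^3$ (odd), the helper name `integrate` being ours:

```python
import math

# Check the even/odd symmetry properties on [-a, a] with a midpoint Riemann sum.

def integrate(f, a, b, steps=100000):
    h = (b - a) / steps
    return sum(f(a + (j + 0.5) * h) for j in range(steps)) * h

a = 2.0
# Even function: integral over [-a, a] equals twice the integral over [0, a].
assert abs(integrate(lambda x: x**2, -a, a) - 2 * integrate(lambda x: x**2, 0, a)) < 1e-6
# Odd function: integral over [-a, a] vanishes.
assert abs(integrate(lambda x: x**3, -a, a)) < 1e-8
print("symmetry checks passed")
```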

An Application of Definite Integral: Average Value of a Continuous Function

Let $f: [a,b]\longrightarrow\mathbb{R}$ be a continuous function. Divide the closed interval $[a,b]$ into $n$ equal subintervals. Choose $n$ samples of $f(x)$ on $[a,b]$
$$f(c_1), f(c_2),\cdots,f(c_n)$$
such that $c_k\in[x_{k-1},x_k]$, $k=1,2,\cdots,n$. Then the average value of these $n$ samples is
\begin{align*}
\frac{f(c_1)+f(c_2)+\cdots+f(c_n)}{n}&=\frac{1}{b-a}(f(c_1)+f(c_2)+\cdots+f(c_n))\frac{(b-a)}{n}\\
&=\frac{1}{b-a}\sum_{k=1}^nf(c_k)\frac{(b-a)}{n}.
\end{align*}
Now we increase the number of sample points to infinity:
\begin{align*}
\lim_{n\to\infty}\frac{f(c_1)+f(c_2)+\cdots+f(c_n)}{n}&=\lim_{n\to\infty}\frac{1}{b-a}\sum_{k=1}^nf(c_k)\frac{(b-a)}{n}\\
&=\frac{1}{b-a}\int_a^b f(x)dx.
\end{align*}

Definition. The average or mean value of a continuous function $f(x)$ on $[a,b]$ is given by
\begin{equation}
\label{eq:mv}
\mathrm{av}(f)=\frac{1}{b-a}\int_a^b f(x)dx.
\end{equation}

Example. Find the average value of $f(x)=\sqrt{4-x^2}$ on $[-2,2]$.

Solution. Notice that the graph of $y=\sqrt{4-x^2}$ on $[-2,2]$ is the upper semicircle centered at the origin with radius 2, whose area is $\frac{1}{2}\pi(2)^2=2\pi$. Hence,
\begin{align*}
\mathrm{av}(f)&=\frac{1}{2-(-2)}\int_{-2}^2\sqrt{4-x^2}dx\\
&=\frac{2\pi}{4}\\
&=\frac{\pi}{2}.
\end{align*}
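As a sanity check, the average can be approximated by a midpoint Riemann sum; a Python sketch (the variable names are ours):

```python
import math

# Midpoint-rule check of the average value of f(x) = sqrt(4 - x^2) on [-2, 2].
steps = 200000
h = 4.0 / steps
integral = sum(math.sqrt(4 - (-2 + (j + 0.5) * h) ** 2) for j in range(steps)) * h
average = integral / 4

print(average)   # close to pi/2, about 1.5708
```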