Integration by Parts

Let $f(x)$ and $g(x)$ be differentiable functions. Then the product rule
$$(f(x)g(x))'=f'(x)g(x)+f(x)g'(x)$$
leads to the integration formula

\begin{equation}\label{eq:intpart}
\int f(x)g'(x)dx=f(x)g(x)-\int f'(x)g(x)dx.
\end{equation}

The formula \eqref{eq:intpart} is called integration by parts. If we set $u=f(x)$ and $v=g(x)$, then \eqref{eq:intpart} can also be written as

\begin{equation}\label{eq:intpart2}
\int udv=uv-\int vdu.
\end{equation}

Example. Evaluate $\int x\cos xdx$.

Solution. Let $u=x$ and $dv=\cos xdx$. Then $du=dx$ and $v=\sin x$. So,
\begin{align*}
\int x\cos xdx&=x\sin x-\int\sin xdx\\
&=x\sin x+\cos x+C,
\end{align*}
where $C$ is a constant.
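As a quick numerical sanity check (an illustration, not part of the method; the function names below are ours), the antiderivative found by parts should differentiate back to the integrand:

```python
import math

# Check that F(x) = x sin x + cos x differentiates back to x cos x.
def F(x):
    return x * math.sin(x) + math.cos(x)

def integrand(x):
    return x * math.cos(x)

h = 1e-6
for x in [0.3, 1.1, 2.5]:
    central_diff = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(central_diff - integrand(x)) < 1e-5
print("antiderivative verified")
```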

Example. Evaluate $\int\ln xdx$.

Solution. Let $u=\ln x$ and $dv=dx$. Then $du=\frac{1}{x}dx$ and $v=x$. So,
\begin{align*}
\int\ln xdx&=x\ln x-\int x\cdot\frac{1}{x}dx\\
&=x\ln x-x+C,
\end{align*}
where $C$ is a constant.

Often it is required to apply integration by parts more than once to evaluate a given integral. In that case, it is convenient to use a table as shown in the following example.

Example. Evaluate $\int x^2e^xdx$.

Solution. In the following table, the first column represents $x^2$ and its derivatives, and the second column represents $e^x$ and its integrals.
$$\begin{array}{ccc} x^2 & & e^x\\ &\stackrel{+}{\searrow}&\\ 2x & & e^x\\ &\stackrel{-}{\searrow}&\\ 2 & & e^x\\ &\stackrel{+}{\searrow}&\\ 0 & & e^x. \end{array}$$
This table shows the repeated application of integration by parts. Following the table, the final answer is given by
$$\int x^2e^xdx=x^2e^x-2xe^x+2e^x+C,$$
where $C$ is a constant.
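Collecting the terms, the tabular answer is $F(x)=(x^2-2x+2)e^x$. A numerical check of this claim (an illustration only; the helper name is ours):

```python
import math

# The tabular answer F(x) = (x^2 - 2x + 2) e^x should differentiate to x^2 e^x.
def F(x):
    return (x * x - 2 * x + 2) * math.exp(x)

h = 1e-6
for x in [-1.0, 0.5, 2.0]:
    central_diff = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(central_diff - x * x * math.exp(x)) < 1e-4
print("tabular result verified")
```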

Example. Evaluate $\int x^3\sin xdx$.

Solution. In the following table, the first column represents $x^3$ and its derivatives, and the second column represents $\sin x$ and its integrals.
$$\begin{array}{ccc} x^3 & & \sin x\\ &\stackrel{+}{\searrow}&\\ 3x^2 & & -\cos x\\ &\stackrel{-}{\searrow}&\\ 6x & & -\sin x\\ &\stackrel{+}{\searrow}&\\ 6 & & \cos x\\ &\stackrel{-}{\searrow}&\\ 0 & & \sin x. \end{array}$$
Following the table, the final answer is given by
$$\int x^3\sin xdx=-x^3\cos x+3x^2\sin x+6x\cos x-6\sin x+C,$$
where $C$ is a constant.

Example. Evaluate $\int e^x\cos xdx$.

Solution. In the following table, the first column represents $e^x$ and its derivatives, and the second column represents $\cos x$ and its integrals.
$$\begin{array}{ccc} e^x & & \cos x\\ &\stackrel{+}{\searrow}&\\ e^x & & \sin x\\ &\stackrel{-}{\searrow}&\\ e^x & & -\cos x. \end{array}$$
Now, this is different from the previous two examples. While the first column repeats the same function $e^x$, the functions in the second column change from $\cos x$ to $\sin x$ and back to $\cos x$, up to sign. In this case, we stop there, write the answer as we have done in the previous two examples, and add to it $\int e^x(-\cos x)dx$. (Notice that the integrand is the product of the functions in the last row.) That is,
$$\int e^x\cos xdx=e^x\sin x-e^x\cos x-\int e^x\cos xdx.$$
For now we do not worry about the constant of integration. Solving this for $\int e^x\cos xdx$, we obtain the final answer
$$\int e^x\cos xdx=\frac{1}{2}e^x\sin x-\frac{1}{2}e^x\cos x+C,$$
where $C$ is a constant.

Example. Evaluate $\int e^x\sin xdx$.

Solution. In the following table, the first column represents $e^x$ and its derivatives, and the second column represents $\sin x$ and its integrals.
$$\begin{array}{ccc} e^x & & \sin x\\ &\stackrel{+}{\searrow}&\\ e^x & & -\cos x\\ &\stackrel{-}{\searrow}&\\ e^x & & -\sin x. \end{array}$$
This is similar to the above example. The first column repeats the same function $e^x$, and the functions in the second column change from $\sin x$ to $\cos x$ and back to $\sin x$, up to sign. So we stop there and write
$$\int e^x\sin xdx=-e^x\cos x+e^x\sin x-\int e^x\sin xdx.$$
Solving this for $\int e^x\sin xdx$, we obtain
$$\int e^x\sin xdx=-\frac{1}{2}e^x\cos x+\frac{1}{2}e^x\sin x+C,$$
where $C$ is a constant.

Example. Evaluate $\int e^{5x}\cos 8xdx$.

Solution. In the following table, the first column represents $e^{5x}$ and its derivatives, and the second column represents $\cos 8x$ and its integrals.
$$\begin{array}{ccc} e^{5x} & & \cos 8x\\ &\stackrel{+}{\searrow}&\\ 5e^{5x} & & \frac{1}{8}\sin 8x\\ &\stackrel{-}{\searrow}&\\ 25e^{5x} & & -\frac{1}{64}\cos 8x. \end{array}$$
The first column repeats the same function $e^{5x}$ up to a constant multiple, and the functions in the second column change from $\cos 8x$ to $\sin 8x$ and back to $\cos 8x$, again up to constant multiples. In this case we do the same.
$$\int e^{5x}\cos 8xdx=\frac{1}{8}e^{5x}\sin 8x+\frac{5}{64}e^{5x}\cos 8x-\frac{25}{64}\int e^{5x}\cos 8xdx.$$
Solving this for $\int e^{5x}\cos 8xdx$, we obtain
$$\int e^{5x}\cos 8xdx=\frac{8}{89}e^{5x}\sin 8x+\frac{5}{89}e^{5x}\cos 8x+C,$$
where $C$ is a constant.
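The coefficients $\frac{8}{89}$ and $\frac{5}{89}$ are easy to get wrong, so a numerical check is worthwhile (an illustration only; the helper name is ours):

```python
import math

# G(x) = (8/89) e^{5x} sin 8x + (5/89) e^{5x} cos 8x should
# differentiate back to e^{5x} cos 8x.
def G(x):
    return math.exp(5 * x) * (8 * math.sin(8 * x) + 5 * math.cos(8 * x)) / 89

h = 1e-6
for x in [0.0, 0.2, 0.5]:
    central_diff = (G(x + h) - G(x - h)) / (2 * h)
    target = math.exp(5 * x) * math.cos(8 * x)
    assert abs(central_diff - target) < 1e-4
print("coefficients verified")
```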

For a definite integral, integration by parts takes the form

\begin{equation}\label{eq:intpart3}
\int_a^b f(x)g'(x)dx=[f(x)g(x)]_a^b-\int_a^b f'(x)g(x)dx.
\end{equation}

Example. Find the area of the region bounded by $y=xe^{-x}$ and the x-axis from $x=0$ to $x=4$.

The graph of y=xexp(-x), x=0..4

Solution. Let $u=x$ and $dv=e^{-x}dx$. Then $du=dx$ and $v=-e^{-x}$. Hence,
\begin{align*}
A&=\int_0^4 xe^{-x}dx\\
&=[-xe^{-x}]_0^4+\int_0^4 e^{-x}dx\\
&=-4e^{-4}+[-e^{-x}]_0^4\\
&=1-5e^{-4}.
\end{align*}
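As a cross-check (an illustration only, not part of the solution), the area can be approximated numerically, e.g. with Simpson's rule:

```python
import math

def f(x):
    return x * math.exp(-x)

# Composite Simpson's rule on [0, 4]
n = 1000  # must be even
a, b = 0.0, 4.0
h = (b - a) / n
s = f(a) + f(b)
for i in range(1, n):
    s += (4 if i % 2 else 2) * f(a + i * h)
area = s * h / 3

# Compare with the exact answer 1 - 5 e^{-4}
assert abs(area - (1 - 5 * math.exp(-4))) < 1e-9
print("area matches 1 - 5 e^{-4}")
```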

A Convergence Theorem for Fourier Series

Here, we have seen that if a function $f$ is Riemann integrable on every bounded interval, it can be expanded as a trigonometric series, called a Fourier series, by assuming that the series converges to $f$. So, it is natural to pose the following question: if $f$ is a periodic function, does its Fourier series always converge to $f$? The answer is affirmative if $f$ is, in addition, piecewise smooth.

Let $S_N^f(\theta)$ denote the $N$-th partial sum of the Fourier series of a $2\pi$-periodic function $f(\theta)$. Then

\begin{equation}\label{eq:partsum}
\begin{aligned}
S_N^f(\theta)&=\sum_{-N}^N c_ne^{in\theta}\\
&=\frac{1}{2\pi}\sum_{-N}^N\int_{-\pi}^\pi f(\psi)e^{in(\theta-\psi)}d\psi\\
&=\frac{1}{2\pi}\sum_{-N}^N\int_{-\pi}^\pi f(\psi)e^{in(\psi-\theta)}d\psi.
\end{aligned}
\end{equation}

Let $\phi=\psi-\theta$. Then
\begin{align*}
S_N^f(\theta)&=\frac{1}{2\pi}\sum_{-N}^N\int_{-\pi+\theta}^{\pi+\theta} f(\phi+\theta)e^{in\phi}d\phi\\
&=\frac{1}{2\pi}\sum_{-N}^N\int_{-\pi}^\pi f(\phi+\theta)e^{in\phi}d\phi\\
&=\int_{-\pi}^\pi f(\theta+\phi)D_N(\phi)d\phi,
\end{align*}
where

\begin{equation}\label{eq:dkernel}
\begin{aligned}
D_N(\phi)&=\frac{1}{2\pi}\sum_{-N}^N e^{in\phi}\\
&=\frac{1}{2\pi}\frac{e^{i(N+1)\phi}-e^{-iN\phi}}{e^{i\phi}-1}\\
&=\frac{1}{2\pi}\frac{\sin\left(N+\frac{1}{2}\right)\phi}{\sin\frac{1}{2}\phi}.
\end{aligned}
\end{equation}

$D_N(\phi)$ is called the $N$-th Dirichlet kernel. Note that the Dirichlet kernel can be used to realize the Dirac delta function $\delta(x)$, i.e.
$$\delta(x)=\lim_{n\to\infty}\frac{1}{2\pi}\frac{\sin\left(n+\frac{1}{2}\right)x}{\sin\frac{1}{2}x}.$$

Dirichlet kernel D_n(x), n=1..10, x=-pi..pi

Note that
$$\frac{1}{2}+\frac{\sin\left(N+\frac{1}{2}\right)\theta}{2\sin\frac{1}{2}\theta}=1+\sum_{n=1}^N\cos n\theta\ (0<\theta<2\pi)$$
Using this identity, one can easily show that:

Lemma. For any $N$,
$$\int_{-\pi}^0 D_N(\theta)d\theta=\int_0^{\pi}D_N(\theta)d\theta=\frac{1}{2}.$$
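The Lemma can be checked numerically using the closed form of $D_N$ from \eqref{eq:dkernel}; this is a small illustrative sketch (the midpoint rule is used to avoid the removable singularity at $\theta=0$):

```python
import math

def D(N, phi):
    # N-th Dirichlet kernel, closed form
    return math.sin((N + 0.5) * phi) / (2 * math.pi * math.sin(0.5 * phi))

# Midpoint rule on (0, pi); by symmetry the integral over (-pi, 0) is the same.
M = 20000
h = math.pi / M
for N in range(1, 6):
    total = sum(D(N, (k + 0.5) * h) for k in range(M)) * h
    assert abs(total - 0.5) < 1e-6
print("lemma verified for N = 1..5")
```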

Now, we are ready to prove the following convergence theorem.

Theorem. If $f$ is $2\pi$-periodic and piecewise smooth on $\mathbb{R}$, then
$$\lim_{N\to\infty} S_N^f(\theta)=\frac{1}{2}[f(\theta-)+f(\theta+)]$$
for every $\theta$. Here, $f(\theta-)=\lim_{\stackrel{h\to 0}{h>0}}f(\theta-h)$ and $f(\theta+)=\lim_{\stackrel{h\to 0}{h>0}}f(\theta+h)$. In particular, $\lim_{N\to\infty}S_N^f(\theta)=f(\theta)$ for every $\theta$ at which $f$ is continuous.

Proof. By the Lemma,
$$\frac{1}{2}f(\theta-)=f(\theta-)\int_{-\pi}^0 D_N(\phi)d\phi,\ \frac{1}{2}f(\theta+)=f(\theta+)\int_0^\pi D_N(\phi)d\phi.$$
So,
\begin{align*}
S_N^f(\theta)-\frac{1}{2}[f(\theta-)+f(\theta+)]&=\int_{-\pi}^0[f(\theta+\phi)-f(\theta-)]D_N(\phi)d\phi+\\
&\int_0^\pi[f(\theta+\phi)-f(\theta+)]D_N(\phi)d\phi\\
&=\frac{1}{2\pi}\int_{-\pi}^0[f(\theta+\phi)-f(\theta-)]\frac{e^{i(N+1)\phi}-e^{-iN\phi}}{e^{i\phi}-1}d\phi\\
&+\frac{1}{2\pi}\int_0^\pi[f(\theta+\phi)-f(\theta+)]\frac{e^{i(N+1)\phi}-e^{-iN\phi}}{e^{i\phi}-1}d\phi.
\end{align*}
Since $f$ is piecewise smooth, the one-sided limits
$$\lim_{\phi\to 0+}\frac{f(\theta+\phi)-f(\theta+)}{e^{i\phi}-1}=\frac{f'(\theta+)}{i},\ \lim_{\phi\to 0-}\frac{f(\theta+\phi)-f(\theta-)}{e^{i\phi}-1}=\frac{f'(\theta-)}{i}$$
exist.
Hence, the function
$$g(\phi):=\left\{\begin{aligned} &\frac{f(\theta+\phi)-f(\theta-)}{e^{i\phi}-1},\ -\pi<\phi<0,\\ &\frac{f(\theta+\phi)-f(\theta+)}{e^{i\phi}-1},\ 0<\phi<\pi \end{aligned}\right.$$
is piecewise continuous on $[-\pi,\pi]$. By the corollary to Bessel’s inequality,
$$c_n=\frac{1}{2\pi}\int_{-\pi}^\pi g(\phi)e^{-in\phi}d\phi\to 0$$
as $n\to\pm\infty$. Therefore,
\begin{align*}
S_N^f(\theta)-\frac{1}{2}[f(\theta-)+f(\theta+)]&=\frac{1}{2\pi}\int_{-\pi}^\pi g(\phi)[e^{i(N+1)\phi}-e^{-iN\phi}]d\phi\\
&=c_{-(N+1)}-c_N\\
&\to 0
\end{align*}
as $N\to\infty$. This completes the proof.
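To illustrate the theorem numerically, consider (as a made-up example, not from the text) the $2\pi$-periodic square wave equal to $1$ on $(0,\pi)$ and $0$ on $(-\pi,0)$. Its coefficients work out to $c_0=\frac{1}{2}$ and $c_n=\frac{1}{i\pi n}$ for odd $n$ (zero for even $n\neq 0$), so the partial sums have the real form used below:

```python
import math

def S(N, theta):
    # Partial Fourier sum of the square wave f = 1 on (0, pi), 0 on (-pi, 0):
    # pairing n with -n gives S_N = 1/2 + sum over odd n of 2 sin(n theta)/(pi n).
    total = 0.5
    for n in range(1, N + 1, 2):  # odd n only
        total += 2 * math.sin(n * theta) / (math.pi * n)
    return total

# At the jump theta = 0 every partial sum sits exactly at the average 1/2,
# and at the continuity point theta = pi/2 the sums approach f(pi/2) = 1.
assert abs(S(200, 0.0) - 0.5) < 1e-12
assert abs(S(200, math.pi / 2) - 1.0) < 0.01
print("partial sums converge to the jump average")
```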

Corollary. If $f$ and $g$ are $2\pi$-periodic and piecewise smooth, and $f$ and $g$ have the same Fourier coefficients, then $f=g$.

Proof. If $f$ and $g$ have the same Fourier coefficients, then their Fourier series are the same. Due to the conditions on $f$ and $g$, the Fourier series of $f$ and $g$ converge to $f$ and $g$ respectively by the above convergence theorem. Hence, $f=g$.

The Curvature of a Curve in Euclidean 3-space $\mathbb{R}^3$

The quantity curvature is intended to be a measurement of the bending or turning of a curve. Let $\alpha: I\longrightarrow\mathbb{R}^3$ be a regular curve (i.e. a smooth curve whose derivative never vanishes). Suppose first that $\alpha$ has unit speed, i.e.

\begin{equation}\label{eq:unitspped}
||\dot\alpha(t)||^2=\dot\alpha(t)\cdot\dot\alpha(t)=1.
\end{equation}

Differentiating \eqref{eq:unitspped}, we see that $\dot\alpha(t)\cdot\ddot\alpha(t)=0$, i.e. the acceleration is normal to the velocity which is tangent to $\alpha$. Hence, measuring the acceleration is measuring the curvature. So, if we denote the curvature by $\kappa$, then

\begin{equation}\label{eq:curvature}
\kappa=||\ddot\alpha(t)||.
\end{equation}

Remember that the definition of curvature \eqref{eq:curvature} requires the curve $\alpha$ to be a unit speed curve, but that is not always the case. What we do know is that we can always reparametrize a curve, and reparametrization does not change the curve itself but only its speed. There is one particular parametrization that we are interested in, as it results in a unit speed curve. It is called parametrization by arc-length. This time let us assume that $\alpha$ is not a unit speed curve and define

\begin{equation}\label{eq:arclength}
s(t)=\int_a^t||\dot\alpha(u)||du,
\end{equation}

where $a\in I$. Since $\frac{ds}{dt}>0$, $s(t)$ is an increasing function and so it is one-to-one. This means that we can solve \eqref{eq:arclength} for $t$ and this allows us to reparametrize $\alpha(t)$ by the arc-length parameter $s$.

Example. Let $\alpha: (-\infty,\infty)\longrightarrow\mathbb{R}^3$ be given by
$$\alpha(t)=(a\cos t,a\sin t,bt)$$
where $a>0$, $b\ne 0$. $\alpha$ is a right circular helix. Its speed is
$$||\dot\alpha(t)||=\sqrt{a^2+b^2}\ne 1.$$
$s(t)=\sqrt{a^2+b^2}t$, so $t=\frac{s}{\sqrt{a^2+b^2}}$. The reparametrization of $\alpha(t)$ by $s$ is given by
$$\alpha(s)=\left(a\cos\frac{s}{\sqrt{a^2+b^2}},a\sin\frac{s}{\sqrt{a^2+b^2}},\frac{bs}{\sqrt{a^2+b^2}}\right).$$
Hence the curvature $\kappa$ is
$$\kappa=\frac{a}{a^2+b^2}.$$
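The formula $\kappa=\frac{a}{a^2+b^2}$ can be checked numerically: estimate $\ddot\alpha(s)$ of the arc-length parametrization by central differences and take its norm (an illustration with sample values of $a$ and $b$ chosen by us):

```python
import math

a, b = 2.0, 1.0
c = math.sqrt(a * a + b * b)

def alpha(s):
    # helix reparametrized by arc length (hence unit speed)
    return (a * math.cos(s / c), a * math.sin(s / c), b * s / c)

# curvature = norm of the second derivative, estimated by central differences
h = 1e-4
s = 0.7
p0, p1, p2 = alpha(s - h), alpha(s), alpha(s + h)
acc = [(p0[i] - 2 * p1[i] + p2[i]) / h**2 for i in range(3)]
kappa = math.sqrt(sum(x * x for x in acc))

assert abs(kappa - a / (a * a + b * b)) < 1e-4
print("kappa matches a/(a^2+b^2)")
```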

Bessel’s Inequality

Bessel’s inequality is important in studying Fourier series.

Theorem. If $f$ is $2\pi$-periodic and Riemann integrable on $[-\pi,\pi]$ and if the Fourier coefficients $c_n$ are defined by
$$c_n=\frac{1}{2\pi}\int_{-\pi}^{\pi}f(\theta)e^{-in\theta}d\theta,$$
then

\begin{equation}\label{eq:besselinequality}
\sum_{n=-\infty}^\infty|c_n|^2\leq\frac{1}{2\pi}\int_{-\pi}^\pi|f(\theta)|^2d\theta.
\end{equation}

Proof. (Assume for simplicity that $f$ is real-valued.)
\begin{align*}
0&\leq|f(\theta)-\sum_{-N}^Nc_ne^{in\theta}|^2\\
&=f(\theta)^2-\sum_{-N}^Nf(\theta)[c_ne^{in\theta}+\overline{c_n}e^{-in\theta}]+\sum_{m,n=-N}^Nc_m\overline{c_n}e^{i(m-n)\theta}
\end{align*}
By integrating,
\begin{align*}
\frac{1}{2\pi}\int_{-\pi}^\pi|f(\theta)-\sum_{-N}^Nc_ne^{in\theta}|^2d\theta&=\frac{1}{2\pi}\int_{-\pi}^\pi f(\theta)^2d\theta-\sum_{-N}^N\left[c_n\frac{1}{2\pi}\int_{-\pi}^\pi f(\theta)e^{in\theta}d\theta\right.\\
\left.+\overline{c_n}\frac{1}{2\pi}\int_{-\pi}^\pi f(\theta)e^{-in\theta}d\theta\right]+&\sum_{m,n=-N}^Nc_m\overline{c_n}\frac{1}{2\pi}\int_{-\pi}^\pi e^{i(m-n)\theta}d\theta\\
&=\frac{1}{2\pi}\int_{-\pi}^\pi f(\theta)^2d\theta-\sum_{-N}^N|c_n|^2.
\end{align*}
Hence, for each $N=1,2,\cdots$,
$$\sum_{-N}^N|c_n|^2\leq\frac{1}{2\pi}\int_{-\pi}^\pi f(\theta)^2d\theta.$$
Taking the limit $N\to\infty$, we obtain
$$\sum_{-\infty}^\infty|c_n|^2\leq\frac{1}{2\pi}\int_{-\pi}^\pi f(\theta)^2d\theta.$$

Note that $|a_0|^2=4|c_0|^2$, $|a_n|^2+|b_n|^2=2(|c_n|^2+|c_{-n}|^2)$, $n\geq 1$. So, in terms of the real coefficients, Bessel’s inequality can be written as

\begin{equation}\label{eq:besselinequality2}
\frac{1}{4}|a_0|^2+\frac{1}{2}\sum_1^\infty(|a_n|^2+|b_n|^2)\leq\frac{1}{2\pi}\int_{-\pi}^\pi f(\theta)^2d\theta.
\end{equation}

Bessel’s inequality implies that $\sum|a_n|^2$, $\sum|b_n|^2$, $\sum|c_n|^2$ are convergent, and, as we learned in undergraduate calculus, the terms of a convergent series must tend to zero. Hence the following corollary holds.

Corollary. The Fourier coefficients $a_n$, $b_n$, $c_n$ tend to zero as $n\to\infty$ (and also as $n\to -\infty$ for $c_{-n}$).

Spectrum

Let us recall Hooke’s law

\begin{equation}\label{eq:hooke}
F=-kx.
\end{equation}

Newton’s second law of motion is

\begin{equation}\label{eq:newton}
F=ma=m\ddot{x},
\end{equation}

where $\ddot{x}=\frac{d^2 x}{dt^2}$. The equations \eqref{eq:hooke} and \eqref{eq:newton} result in the equation of a simple harmonic oscillator

\begin{equation}\label{eq:ho}
m\ddot{x}+kx=0.
\end{equation}

Integrating \eqref{eq:ho} with respect to $x$, we have
$$\int(m\ddot{x}dx+kxdx)=E_0,$$
where $E_0$ is a constant. $d\dot{x}=\ddot{x}dt$ and $\dot{x}d\dot{x}=\dot{x}\ddot{x}dt=\ddot{x}dx$. So,
\begin{align*}
\int(m\ddot{x}dx+kxdx)&=\int(m\dot{x}d\dot{x}+kxdx)\\
&=\frac{1}{2}m\dot{x}^2+\frac{1}{2}kx^2.
\end{align*}
Hence, we obtain the conservation law of energy

\begin{equation}\label{eq:energy}
\frac{1}{2}m\dot{x}^2+\frac{1}{2}kx^2=E_0.
\end{equation}

The general solution of \eqref{eq:ho} is

\begin{equation}\label{eq:hosol}
\begin{aligned}
x(t)&=a\cos\omega t+b\sin\omega t\\
&=\sqrt{a^2+b^2}\sin(\omega t+\theta),
\end{aligned}
\end{equation}

where $a$ and $b$ are constants, $\omega=\sqrt{\frac{k}{m}}$ and $\theta=\tan^{-1}\left(\frac{a}{b}\right)$. From \eqref{eq:energy} and \eqref{eq:hosol}, the total energy $E_0$ is computed to be
$$E_0=\frac{1}{2}m\omega^2(a^2+b^2).$$
This tells us that the total energy of a simple harmonic oscillator is proportional to $a^2+b^2$, the squared amplitude. As we have seen, the sawtooth function $f(x)$ is represented as the Fourier series
\begin{align*}
f(x)&=-\frac{2L}{\pi}\sum_{n=1}^\infty\frac{(-1)^n}{n}\sin\left(\frac{n\pi x}{L}\right)\\
&=\frac{2L}{\pi}\left\{\sin\left(\frac{\pi x}{L}\right)-\frac{1}{2}\sin\left(\frac{2\pi x}{L}\right)+\frac{1}{3}\sin\left(\frac{3\pi x}{L}\right)-\cdots\right\}.
\end{align*}
The amplitudes $c_n=\frac{2L}{n\pi}$, $n=1,2,3,\cdots$, are twice the reciprocals of the angular frequencies $\omega_n=\frac{n\pi}{L}$. $\{c_n\}$ is called the frequency spectrum or the amplitude spectrum.
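The conservation law $E_0=\frac{1}{2}m\omega^2(a^2+b^2)$ can be verified numerically by evaluating the energy along the general solution \eqref{eq:hosol} at several times (an illustration; the values of $m$, $k$, $a$, $b$ are sample choices of ours):

```python
import math

m, k = 1.5, 6.0
omega = math.sqrt(k / m)
a, b = 0.8, -0.3

def x(t):
    return a * math.cos(omega * t) + b * math.sin(omega * t)

def xdot(t):
    return -a * omega * math.sin(omega * t) + b * omega * math.cos(omega * t)

E0 = 0.5 * m * omega**2 * (a * a + b * b)
for t in [0.0, 0.4, 1.3, 2.9]:
    # kinetic + potential energy should stay constant
    E = 0.5 * m * xdot(t)**2 + 0.5 * k * x(t)**2
    assert abs(E - E0) < 1e-12
print("energy is conserved")
```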

The First and Second Derivative Tests

The First Derivative Test

The derivative $f'(x)$ can tell us a lot about the function $y=f(x)$. It can tell us where the critical points, i.e. the points at which $f'(x)=0$, are located; these are the likely places at which $y=f(x)$ assumes a local maximum or a local minimum value. By further examining the properties of $f'(x)$, we can also determine at which critical points $f(x)$ assumes a local maximum, a local minimum, or neither. But first we see that $f'(x)$ can tell us where $y=f(x)$ is increasing or decreasing.

Theorem. Increasing/Decreasing Test

1. If $f’(x)>0$ on an open interval, $f$ is increasing on that interval.
2. If $f’(x)<0$ on an open interval, $f$ is decreasing on that interval.

Example. Find where $f(x)=3x^4-4x^3-12x^2+5$ is increasing and where it is decreasing.

Solution.
\begin{align*}
f’(x)&=12x^3-12x^2-24x\\
&=12x(x^2-x-2)\\
&=12x(x-2)(x+1).
\end{align*}
The critical points are $x=-1,0,2$. Using, for instance, the test point method (which is the easiest method of solving an inequality), we obtain the following table.
$$\begin{array}{|c|c|c|c|c|c|c|c|} \hline x & x<-1 & -1 & -1<x<0 & 0 & 0<x<2 & 2 & x>2\\ \hline f’(x) & – & 0 & + & 0 & – & 0 & +\\ \hline f(x) & \searrow & f(-1) & \nearrow & f(0) & \searrow & f(2) &\nearrow\\ \hline \end{array}$$
So we find that $f$ is increasing on $(-1,0)\cup(2,\infty)$ and $f$ is decreasing on $(-\infty,-1)\cup(0,2)$.
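The sign pattern in the table can be confirmed by sampling $f'(x)=12x(x-2)(x+1)$ at one test point per interval (an illustrative sketch of the test-point method):

```python
# Sample the sign of f'(x) = 12x(x - 2)(x + 1) on each interval of the table.
def fprime(x):
    return 12 * x * (x - 2) * (x + 1)

# test point -> expected sign of f' on that interval
signs = {-2: -1, -0.5: 1, 1: -1, 3: 1}
for pt, expected in signs.items():
    value = fprime(pt)
    assert (value > 0) == (expected > 0)
print("sign table confirmed")
```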

Now, local maximum values and local minimum values can be identified by observing the change of sign of $f’(x)$ at each critical point.

Theorem. [The First Derivative Test] Suppose that $c$ is a critical point of a differentiable function $f(x)$.

1. If the sign of $f’(x)$ changes from $+$ to $-$ at $c$, $f(c)$ is a local maximum.
2. If the sign of $f’(x)$ changes from $-$ to $+$ at $c$, $f(c)$ is a local minimum.
3. If the sign of $f'(x)$ does not change at $c$, $f$ has neither a local maximum nor a local minimum at $c$.

Example. In the previous example, the sign of $f’(x)$ changes from $+$ to $-$ at $0$, so $f(0)=5$ is a local maximum. The sign of $f’(x)$ changes from $-$ to $+$ at $-1$ and at $2$, so $f(-1)=0$ and $f(2)=-27$ are local minimum values.

The following figure confirms our findings from the above two examples.

The graph of f(x)=3x^4-4x^3-12x^2+5

The Second Derivative Test

The second order derivative $f^{\prime\prime}(x)$ can provide us with an additional piece of information about $y=f(x)$, namely the concavity of the graph of $y=f(x)$.

Definition. If the graph of $f$ lies above all of its tangents on an open interval $I$, it is called concave upward on $I$. If the graph of $f$ lies below all of its tangents on $I$, it is called concave downward on $I$.

From here on, $\smile$ denotes “concave up” and $\frown$ denotes “concave down”.

Definition. A point $(d,f(d))$ on the graph of $y=f(x)$ is called a point of inflection if the concavity of the graph of $f$ changes from $\smile$ to $\frown$ or from $\frown$ to $\smile$ at $(d,f(d))$. The candidates for the points of inflection may be found by solving the equation $f^{\prime\prime}(x)=0$ as shown in the example below.

Theorem. [Concavity Test]

1. If $f^{\prime\prime}(x)>0$ for all x in an open interval $I$, the graph of $f$ is concave up on $I$.
2. If $f^{\prime\prime}(x)<0$ for all x in an open interval $I$, the graph of $f$ is concave down on $I$.

Theorem. [The Second Derivative Test] Suppose that $f’(c)=0$ i.e. $c$ is a critical point of $f$. Suppose that $f^{\prime\prime}$ is continuous near $c$.

1. If $f^{\prime\prime}(c)>0$ then $f(c)$ is a local minimum.
2. If $f^{\prime\prime}(c)<0$ then $f(c)$ is a local maximum.

Example. Let $f(x)=-x^4+2x^2+2$.

1. Find and identify all local maximum and local minimum values of $f(x)$ using the Second Derivative Test.
2. Find the intervals on which the graph of $f(x)$ is concave up or concave down. Find all points of inflection.

Solution. 1. First we find the critical points of $f(x)$ by solving the equation $f’(x)=0$:
$$f'(x)=-4x^3+4x=-4x(x^2-1)=-4x(x+1)(x-1)=0.$$ So $x=-1,0,1$ are critical points of $f(x)$. Next, $f^{\prime\prime}(x)=-12x^2+4$. Since $f^{\prime\prime}(0)=4>0$ and $f^{\prime\prime}(-1)=f^{\prime\prime}(1)=-8<0$, by the Second Derivative Test, $f(0)=2$ is a local minimum value and $f(-1)=f(1)=3$ is a local maximum value.

2. First we need to solve the equation $f^{\prime\prime}(x)=0$:
$$f^{\prime\prime}(x)=-12x^2+4=-12\left(x^2-\frac{1}{3}\right)=-12\left(x+\frac{1}{\sqrt{3}}\right)\left(x-\frac{1}{\sqrt{3}}\right)=0.$$ So $f^{\prime\prime}(x)=0$ at $x=\pm\displaystyle\frac{1}{\sqrt{3}}$. By using the test-point method we find the following table:
$$\begin{array}{|c||c|c|c|c|c|} \hline x & x<-\frac{1}{\sqrt{3}} & -\frac{1}{\sqrt{3}} & -\frac{1}{\sqrt{3}}<x<\frac{1}{\sqrt{3}} & \frac{1}{\sqrt{3}} & x>\frac{1}{\sqrt{3}}\\ \hline f^{\prime\prime}(x) & – & 0 & + & 0 & -\\ \hline f(x) & \frown & f\left(-\frac{1}{\sqrt{3}}\right)=\frac{23}{9} & \smile & f\left(\frac{1}{\sqrt{3}}\right)=\frac{23}{9} & \frown\\ \hline \end{array}$$
The graph of $f(x)$ is concave down on the intervals $\left(-\infty,-\frac{1}{\sqrt{3}}\right)\cup\left(\frac{1}{\sqrt{3}},\infty\right)$ and is concave up on the interval $\left(-\frac{1}{\sqrt{3}},\frac{1}{\sqrt{3}}\right)$. The points of inflection are $\left(-\frac{1}{\sqrt{3}},\frac{23}{9}\right)$ and $\left(\frac{1}{\sqrt{3}},\frac{23}{9}\right)$.
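The concavity table and the inflection values can be double-checked numerically (an illustration only; the helper names are ours):

```python
import math

def f(x):
    return -x**4 + 2 * x**2 + 2

def fpp(x):
    return -12 * x**2 + 4

r = 1 / math.sqrt(3)
# f'' is negative outside (-1/sqrt(3), 1/sqrt(3)) and positive inside,
# and f at the inflection points equals 23/9.
assert fpp(-2 * r) < 0 and fpp(0) > 0 and fpp(2 * r) < 0
assert abs(f(r) - 23 / 9) < 1e-12 and abs(f(-r) - 23 / 9) < 1e-12
print("inflection points confirmed")
```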

The following figure confirms our findings from the above example.

The graph of f(x)=-x^4+2x^2+2 with points of inflection (in blue)

The Substitution Rule

If a given integral takes the form $\int f(g(x))g'(x)dx$, then it can be converted to a simpler integral that we may be able to evaluate by the substitution $u=g(x)$. In fact, the integral is given in terms of the new variable $u$ as
$$\int f(g(x))g’(x)dx=\int f(u)du.$$

Example. Evaluate $\int x\sqrt{1+x^2}dx$.

Solution. Let $u=1+x^2$. Then $du=2xdx$. So,
\begin{align*}
\int x\sqrt{1+x^2}dx&=\frac{1}{2}\int\sqrt{u}du\\
&=\frac{1}{3}u^{\frac{3}{2}}+C\\
&=\frac{1}{3}(1+x^2)^{\frac{3}{2}}+C,
\end{align*}
where $C$ is an arbitrary constant.

Example. Evaluate $\int\cos (7\theta+5)d\theta$.

Solution. Let $u=7\theta+5$. Then $du=7d\theta$. So,
\begin{align*}
\int\cos (7\theta+5)d\theta&=\frac{1}{7}\int\cos udu\\
&=\frac{1}{7}\sin u+C\\
&=\frac{1}{7}\sin(7\theta+5)+C,
\end{align*}
where $C$ is an arbitrary constant.

Example. Evaluate $\int x^2\sin(x^3)dx$.

Solution. Let $u=x^3$. Then $du=3x^2dx$. So,
\begin{align*}
\int x^2\sin(x^3)dx&=\frac{1}{3}\int \sin udu\\
&=-\frac{1}{3}\cos u+C\\
&=-\frac{1}{3}\cos(x^3)+C,
\end{align*}
where $C$ is an arbitrary constant.

How do we evaluate a definite integral of the form $\int_a^b f(g(x))g’(x)dx$? The following example shows you how.

Example. Evaluate $\int_{-1}^1 3x^2\sqrt{x^3+1}dx$.

Solution. First let us calculate the indefinite integral $\int 3x^2\sqrt{x^3+1}dx$. Let $u=x^3+1$. Then $du=3x^2dx$. So,
\begin{align*}
\int 3x^2\sqrt{x^3+1}dx&=\int\sqrt{u}du\\
&=\frac{2}{3}u^{\frac{3}{2}}+C\\
&=\frac{2}{3}(x^3+1)^{\frac{3}{2}}+C,
\end{align*}
where $C$ is an arbitrary constant. Now by the Fundamental Theorem of Calculus,
\begin{align*}
\int_{-1}^1 3x^2\sqrt{x^3+1}dx&=\frac{2}{3}[(x^3+1)^{\frac{3}{2}}]_{-1}^1\\
&=\frac{4\sqrt{2}}{3}.
\end{align*}

But there is a better way to do this as shown in the following theorem. Its proof is straightforward.

Theorem. If $u’$ is continuous on $[a,b]$ and $f$ is continuous on the range of $u$, then
$$\int_a^bf(u(x))u’(x)dx=\int_{u(a)}^{u(b)}f(u)du.$$

Example. Let us replay the previous example using this theorem. Again let $u=x^3+1$. Then $du=3x^2dx$ and $u(-1)=0$, $u(1)=2$. Now, by the above theorem,
\begin{align*}
\int_{-1}^1 3x^2\sqrt{x^3+1}dx&=\int_0^2\sqrt{u}du\\
&=\frac{2}{3}[u^{\frac{3}{2}}]_0^2\\
&=\frac{4\sqrt{2}}{3}.
\end{align*}
I believe you will find this simpler than the previous method.
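Either way, the value $\frac{4\sqrt{2}}{3}$ can be cross-checked by approximating the original integral numerically (an illustration only):

```python
import math

def g(x):
    return 3 * x * x * math.sqrt(x**3 + 1)

# midpoint rule for the original integral over [-1, 1]
M = 20000
h = 2.0 / M
approx = sum(g(-1 + (k + 0.5) * h) for k in range(M)) * h

assert abs(approx - 4 * math.sqrt(2) / 3) < 1e-5
print("integral matches 4*sqrt(2)/3")
```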

I will finish this lecture with the following nice properties.

Theorem. Let $f$ be a continuous function on $[-a,a]$.

(a) If $f$ is an even function, then $\int_{-a}^a f(x)dx=2\int_0^a f(x)dx$.

(b) If $f$ is an odd function, then $\int_{-a}^a f(x)dx=0$.

This can be easily understood from pictures using the symmetries of even and odd functions. But the theorem can be proved using substitution. I will leave it to you.
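While the proof is left to you, both properties are easy to illustrate numerically with sample even and odd functions of our choosing:

```python
import math

def riemann(f, a, b, n=20000):
    # simple midpoint-rule approximation of the integral of f over [a, b]
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

# even function: x^2 over [-2, 2] equals twice the integral over [0, 2]
even = riemann(lambda x: x * x, -2, 2)
assert abs(even - 2 * riemann(lambda x: x * x, 0, 2)) < 1e-6

# odd function: x^3 cos x over [-2, 2] integrates to zero
odd = riemann(lambda x: x**3 * math.cos(x), -2, 2)
assert abs(odd) < 1e-6
print("symmetry properties confirmed")
```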

Mean Value Theorem

The following theorem is something one can easily picture intuitively.

Theorem. [Rolle's Theorem]
Let $f$ be continuous on the closed interval $[a,b]$ and differentiable on the open interval $(a,b)$. If $f(a)=f(b)$, then there exists a number $c$ in $(a,b)$ such that $f’(c)=0$.

Example. Show that the equation $x^3+x-1=0$ has exactly one real root.

Solution. Let $f(x)=x^3+x-1$. Note that $f(0)=-1$ and $f(1)=1$. So by the Intermediate Value Theorem, there exists at least one root of the equation $x^3+x-1=0$ in the interval $(0,1)$. Now suppose that there are two different roots $a$ and $b$ of the equation $x^3+x-1=0$. Without loss of generality, we may assume that $a<b$. Then $f(x)$ is continuous on $[a,b]$ and differentiable on $(a,b)$. By Rolle's Theorem, there exists a number $c$ in $(a,b)$ such that $f'(c)=0$. However, $f'(x)=3x^2+1\geq 1$ for all real numbers $x$. This is a contradiction. Therefore, there is only one root of the equation.

The graph of f(x)=x^3+x-1

Let $f$ be continuous on $[a,b]$ and differentiable on $(a,b)$. Define $g(x)$ to be the difference between $f(x)$ and the secant line through $(a,f(a))$ and $(b,f(b))$, i.e.
$$g(x)=f(x)-\frac{f(b)-f(a)}{b-a}(x-a)-f(a).$$ Then $g(x)$ is continuous on $[a,b]$ and differentiable on $(a,b)$. Since $g(a)=g(b)=0$, by Rolle’s theorem there exists a number $c$ in $(a,b)$ such that $g’(c)=f’(c)-\frac{f(b)-f(a)}{b-a}=0$. Therefore, we proved the following theorem.


Theorem. [Mean Value Theorem]
Let $f$ be continuous on $[a,b]$ and differentiable on $(a,b)$. Then there exists a number $c$ in $(a,b)$ such that
$$f’(c)=\frac{f(b)-f(a)}{b-a}.$$

The following example is an application of the Mean Value Theorem.

Example. Suppose that $f(0)=-3$ and $f’(x)\leq 5$ for all values of $x$. How large can $f(2)$ possibly be?

Solution. By the Mean Value Theorem, there exists a number $c$ in $(0,2)$ such that
$$f’(c)=\frac{f(2)-f(0)}{2-0}=\frac{f(2)+3}{2}.$$
Since $f’(c)\leq 5$,
\begin{align*}
f(2)&=2f’(c)-3\\
&\leq 2\cdot 5-3=7.
\end{align*}
Hence, $7$ is the largest possible value of $f(2)$.

Using the Mean Value Theorem, one can prove the following theorem.

Theorem. If $f’(x)=0$ for all $x$ in the open interval $(a,b)$, then $f$ is constant on $(a,b)$.

Maximum and Minimum


There are two different types of extremum (maximum or minimum) values of a function $y=f(x)$. We may consider a value of $y$ that is an extremum globally on the domain or we may also consider a value of $y$ that is an extremum locally around an $x$ value.

A function $f$ has an absolute maximum at $c$ if $f(c)\geq f(x)$ for all $x$ in the domain of $f$. Similarly, $f$ has an absolute minimum at $c$ if $f(c)\leq f(x)$ for all $x$ in the domain of $f$.

A function $f$ has a local maximum (or relative maximum) at $c$ if $f(c)\geq f(x)$ in some neighborhood of $c$ (i.e an open interval that contains $c$). Similarly, $f$ has a local minimum (or relative minimum) at $c$ if $f(c)\leq f(x)$ in some neighborhood of $c$.

Example.

The graph of f(x)=3x^4-16x^3+18x^2 on [-1,4]

The above figure shows the graph of $f(x)=3x^4-16x^3+18x^2$, $-1\leq x\leq 4$. It has a local maximum at $x=1$ and a local minimum at $x=3$. The local minimum $f(3)=-27$ is also an absolute minimum. $f$ has an absolute maximum $f(-1)=37$. Note that $f(-1)=37$ is not a local maximum, because the domain $[-1,4]$ contains no open interval around $x=-1$.

A natural question one may ask is whether a function always has an absolute maximum and an absolute minimum. You can easily find many examples that show that a function does not necessarily have an absolute maximum or an absolute minimum value. For instance, $y=x$ on $(-\infty,\infty)$ has neither an absolute maximum nor an absolute minimum. The function $y=x^2$ on $[0,1)$ has an absolute minimum 0 at $x=0$ but has no absolute maximum.

Theorem. [Max-Min Theorem]
If $f$ is continuous on a closed interval $[a,b]$, then $f$ attains an absolute maximum and an absolute minimum on $[a,b]$.

The following theorem is due to Fermat.

Theorem. If $f$ has a local maximum or a local minimum at $c$ and if $f’(c)$ exists, then $f’(c)=0$.

The converse of this theorem is not necessarily true i.e. $f’(c)=0$ does not necessarily mean that $f(c)$ is a local maximum or a local minimum. For example, consider $f(x)=x^3$. $f’(0)=0$ but $f(x)$ has neither a local maximum nor a local minimum at $x=0$ as shown in figure below.

The graph of f(x)=x^3

The above theorem is important because an absolute maximum and an absolute minimum may be found among the local maximum values, the local minimum values, and the values of $f$ at the endpoints, $f(a)$ and $f(b)$. To find local maximum and local minimum values, we first find points $c$ such that $f'(c)=0$. Such points are called critical points. They are likely places at which the graph of a function changes from increasing to decreasing or from decreasing to increasing.

Definition. A critical point of a function $f(x)$ is a number $c$ in the domain of $f$ such that either $f’(c)=0$ or $f’(c)$ does not exist.

Recipe for Finding the Absolute Maximum and the Absolute Minimum

Let $f$ be a continuous function on a closed interval $[a,b]$.

Step 1. Find all critical points of $f$ in $(a,b)$.

Step 2. Evaluate $f$ at each critical point obtained in Step 1.

Step 3. Find $f(a)$ and $f(b)$.

Step 4. Compare all the values obtained in Steps 2 and 3. The largest value is the absolute maximum and the smallest value is the absolute minimum.

Example. Find the absolute maximum and the absolute minimum values of
$$f(x)=x^3-3x^2+1,\ -\frac{1}{2}\leq x\leq 4.$$

Solution.

Step 1. Find all critical points of $f$ in $\left(-\frac{1}{2},4\right)$.

$f’(x)=3x^2-6x$. Set $f’(x)=0$ i.e. $3x^2-6x=0$. $3x^2-6x$ is factored as $3x(x-2)$. So we find two critical points $0, 2$.

Step 2. Evaluate $f$ at each critical point obtained in Step 1.

$f(0)=1$ and $f(2)=-3$.

Step 3. Find $f\left(-\frac{1}{2}\right)$ and $f(4)$.

$f\left(-\frac{1}{2}\right)=\frac{1}{8}$ and $f(4)=17$.

Step 4. Compare all the values obtained in Steps 2 and 3.

The largest value is $f(4)=17$ so this is the absolute maximum value of $f$ on $\left[-\frac{1}{2},4\right]$. The smallest value is $f(2)=-3$ so this is the absolute minimum of $f$ on $\left[-\frac{1}{2},4\right]$.
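The four steps of the recipe translate directly into a short computation; this sketch replays the example above:

```python
# Steps 2-4 of the recipe: evaluate f at the critical points and the
# endpoints, then compare.
def f(x):
    return x**3 - 3 * x**2 + 1

# endpoints -1/2 and 4, critical points 0 and 2 (from f'(x) = 3x(x - 2))
candidates = [-0.5, 0, 2, 4]
values = {x: f(x) for x in candidates}

assert max(values.values()) == f(4) == 17   # absolute maximum
assert min(values.values()) == f(2) == -3   # absolute minimum
print("absolute max 17 at x = 4, absolute min -3 at x = 2")
```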