The Curvature of a Surface in Euclidean 3-Space $\mathbb{R}^3$

As we have seen previously, the curvature of a unit speed parametric curve $\alpha(t)$ in $\mathbb{R}^3$ can be measured by its acceleration $\ddot\alpha(t)$; in that case the acceleration happens to be a normal vector field along the curve. We now turn our attention to surfaces in Euclidean 3-space $\mathbb{R}^3$ and would like to devise a way to measure the bending of a surface in $\mathbb{R}^3$. This can be achieved by studying the change of a unit normal vector field on the surface, and to study that change we need to be able to differentiate vector fields. But first let us review the directional derivative you learned in multivariable calculus. Let $f:\mathbb{R}^3\longrightarrow\mathbb{R}$ be a differentiable function and $\mathbf{v}$ a tangent vector to $\mathbb{R}^3$ at $\mathbf{p}$. Then the directional derivative of $f$ in the $\mathbf{v}$ direction at $\mathbf{p}$ is defined by
\begin{equation}
\label{eq:directderiv}
\nabla_{\mathbf{v}}f=\left.\frac{d}{dt}f(\mathbf{p}+t\mathbf{v})\right|_{t=0}.
\end{equation}
By the chain rule, the directional derivative can be written as
\begin{equation}
\label{eq:directderiv2}
\nabla_{\mathbf{v}}f=\nabla f(\mathbf{p})\cdot\mathbf{v},
\end{equation}
where $\nabla f(\mathbf{p})$ denotes the gradient of $f$ at $\mathbf{p}$,
$$\nabla f(\mathbf{p})=\frac{\partial f}{\partial x_1}(\mathbf{p})E_1(\mathbf{p})+\frac{\partial f}{\partial x_2}(\mathbf{p})E_2(\mathbf{p})+\frac{\partial f}{\partial x_3}(\mathbf{p})E_3(\mathbf{p}),$$
where $E_1, E_2, E_3$ denote the standard orthonormal frame in $\mathbb{R}^3$. The directional derivative satisfies the following properties.

Theorem. Let $f,g$ be real-valued differentiable functions on $\mathbb{R}^3$, $\mathbf{v},\mathbf{w}$ tangent vectors to $\mathbb{R}^3$ at $\mathbf{p}$, and $a,b\in\mathbb{R}$. Then

  1. $\nabla_{a\mathbf{v}+b\mathbf{w}}f=a\nabla_{\mathbf{v}}f+b\nabla_{\mathbf{w}}f$
  2. $\nabla_{\mathbf{v}}(af+bg)=a\nabla_{\mathbf{v}}f+b\nabla_{\mathbf{v}}g$
  3. $\nabla_{\mathbf{v}}(fg)=(\nabla_{\mathbf{v}}f)g(\mathbf{p})+f(\mathbf{p})\nabla_{\mathbf{v}}g$

Properties 1 and 2 express linearity and property 3 is the Leibniz rule. The directional derivative \eqref{eq:directderiv} can be generalized to the covariant derivative $\nabla_{\mathbf{v}}X$ of a vector field $X$ in the direction of a tangent vector $\mathbf{v}$ at $\mathbf{p}$:
\begin{equation}
\label{eq:covderiv}
\nabla_{\mathbf{v}}X=\left.\frac{d}{dt}X(\mathbf{p}+t\mathbf{v})\right|_{t=0}.
\end{equation}
Let $X=x_1E_1+x_2E_2+x_3E_3$ in terms of the standard orthonormal frame $E_1,E_2,E_3$. Then $\nabla_{\mathbf{v}}X$ can be written as
\begin{equation}
\label{eq:covderiv2}
\nabla_{\mathbf{v}}X=\sum_{j=1}^3\nabla_{\mathbf{v}}x_jE_j.
\end{equation}
Here, $\nabla_{\mathbf{v}}x_j$ is the directional derivative of the $j$-th component function of the vector field $X$ in the $\mathbf{v}$ direction as defined in \eqref{eq:directderiv}. The covariant derivative satisfies the following properties.

Theorem. Let $X,Y$ be vector fields on $\mathbb{R}^3$, $\mathbf{v},\mathbf{w}$ tangent vectors at $\mathbf{p}$, $f$ a real-valued function on $\mathbb{R}^3$, and $a,b$ scalars. Then

  1. $\nabla_{\mathbf{v}}(aX+bY)=a\nabla_{\mathbf{v}}X+b\nabla_{\mathbf{v}}Y$
  2. $\nabla_{a\mathbf{v}+b\mathbf{w}}X=a\nabla_{\mathbf{v}}X+b\nabla_{\mathbf{w}}X$
  3. $\nabla_{\mathbf{v}}(fX)=(\nabla_{\mathbf{v}}f)X(\mathbf{p})+f(\mathbf{p})\nabla_{\mathbf{v}}X$
  4. $\nabla_{\mathbf{v}}(X\cdot Y)=(\nabla_{\mathbf{v}}X)\cdot Y+X\cdot\nabla_{\mathbf{v}}Y$

Properties 1 and 2 express linearity and properties 3 and 4 are Leibniz rules.
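As a concrete illustration (my own addition, not part of the original discussion), here is a small sympy sketch that computes $\nabla_{\mathbf{v}}f$ via \eqref{eq:directderiv2} and $\nabla_{\mathbf{v}}X$ via \eqref{eq:covderiv2}; the particular choices of $f$, $X$, $\mathbf{p}$, and $\mathbf{v}$ below are arbitrary, and the sketch assumes sympy is available.

```python
# A minimal sketch; f, X, p, v are arbitrary sample choices.
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
coords = (x1, x2, x3)
p = {x1: 1, x2: 2, x3: 3}          # the point p
v = sp.Matrix([1, 0, -1])          # the tangent vector v at p

# Directional derivative of f at p in the direction v: grad f(p) . v
f = x1**2*x2 + sp.sin(x3)
grad_f = sp.Matrix([sp.diff(f, xi) for xi in coords])
nabla_v_f = (grad_f.T * v)[0].subs(p)

# Covariant derivative of X = x1*x2 E1 + x2*x3 E2 + x3*x1 E3 at p in the
# direction v: apply the directional derivative to each component function.
X = sp.Matrix([x1*x2, x2*x3, x3*x1])
nabla_v_X = sp.Matrix([(sp.Matrix([sp.diff(Xj, xi) for xi in coords]).T * v)[0]
                       for Xj in X]).subs(p)

print(nabla_v_f)        # 4 - cos(3)
print(list(nabla_v_X))  # [2, -2, 2]
```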

Hereafter, I assume that surfaces are orientable and have nonvanishing normal vector fields. Let $\mathcal{M}\subset\mathbb{R}^3$ be a surface and $p\in\mathcal{M}$. For each $\mathbf{v}\in T_p\mathcal{M}$, define
\begin{equation}
\label{eq:shape}
S_p(\mathbf{v})=-\nabla_{\mathbf{v}}N,
\end{equation}
where $N$ is a unit normal vector field on a neighborhood of $p\in\mathcal{M}$. Since $N\cdot N=1$, we have $0=\nabla_{\mathbf{v}}(N\cdot N)=2(\nabla_{\mathbf{v}}N)\cdot N=-2S_p(\mathbf{v})\cdot N$, so $S_p(\mathbf{v})\cdot N=0$. This means that $S_p(\mathbf{v})\in T_p\mathcal{M}$. Thus, \eqref{eq:shape} defines a linear map $S_p: T_p\mathcal M\longrightarrow T_p\mathcal{M}$. $S_p$ is called the shape operator of $\mathcal{M}$ at $p$ (derived from $N$). For each $p\in\mathcal{M}$, $S_p$ is a symmetric operator, i.e.,
$$\langle S_p(\mathbf{v}),\mathbf{w}\rangle=\langle S_p(\mathbf{w}),\mathbf{v}\rangle$$
for any $\mathbf{v},\mathbf{w}\in T_p\mathcal{M}$.

Let us assume that $\mathcal{M}\subset\mathbb{R}^3$ is a regular surface so that any differentiable curve $\alpha: (-\epsilon,\epsilon)\longrightarrow\mathcal{M}$ is a regular curve, i.e., $\dot\alpha(t)\ne 0$ for every $t\in(-\epsilon,\epsilon)$. If $\alpha$ is a differentiable curve in $\mathcal{M}\subset\mathbb{R}^3$, then
\begin{equation}
\label{eq:acceleration}
\langle\ddot\alpha,N\rangle=\langle S(\dot\alpha),\dot\alpha\rangle.
\end{equation}
$\langle\ddot\alpha,N\rangle$ is the normal component of the acceleration $\ddot\alpha$ to the surface $\mathcal{M}$. \eqref{eq:acceleration} says that the normal component of $\ddot\alpha$ depends only on the shape operator $S$ and the velocity $\dot\alpha$. If $\alpha$ is parametrized by arc-length, i.e., $|\dot\alpha|=1$, then we get a measurement of the way $\mathcal{M}$ is bent in the $\dot\alpha$ direction. Hence we have the following definition:

Definition. Let $\mathbf{u}$ be a unit tangent vector to $\mathcal{M}\subset\mathbb{R}^3$ at $p$. Then the number $\kappa(\mathbf{u})=\langle S(\mathbf{u}),\mathbf{u}\rangle$ is called the normal curvature of $\mathcal{M}$ in the $\mathbf{u}$ direction. The normal curvature $\kappa$ can be considered as a continuous function on the unit circle, $\kappa: S^1\longrightarrow\mathbb{R}$. Since $S^1$ is compact (closed and bounded), $\kappa$ attains a maximum value and a minimum value, say $\kappa_1$ and $\kappa_2$, respectively. $\kappa_1$, $\kappa_2$ are called the principal curvatures of $\mathcal{M}$ at $p$. The principal curvatures $\kappa_1$, $\kappa_2$ are the eigenvalues of the shape operator $S$, and with respect to an orthonormal basis of corresponding eigenvectors (the principal directions) $S$ can be written as the $2\times 2$ matrix
\begin{equation}
\label{eq:shape2}
S=\begin{pmatrix}
\kappa_1 & 0\\
0 & \kappa_2
\end{pmatrix}.
\end{equation}
The arithmetic mean $H$ and the product $K$ (the square of the geometric mean) of $\kappa_1$, $\kappa_2$
\begin{align}
\label{eq:mean}
H&=\frac{\kappa_1+\kappa_2}{2}=\frac{1}{2}\mathrm{tr}S,\\
\label{eq:gauss}
K&=\kappa_1\kappa_2=\det S
\end{align}
are called, respectively, the mean and the Gaußian curvatures of $\mathcal{M}$. The definitions \eqref{eq:mean} and \eqref{eq:gauss} themselves, however, are not very helpful for calculating the mean and the Gaußian curvatures of a surface. We can compute the mean and the Gaußian curvatures of a parametric regular surface $\varphi: D(u,v)\longrightarrow\mathbb{R}^3$ using Gauß’ celebrated formulas
\begin{align}
\label{eq:mean2}
H&=\frac{G\ell+En-2Fm}{2(EG-F^2)},\\
\label{eq:gauss2}
K&=\frac{\ell n-m^2}{EG-F^2},
\end{align}
where
\begin{align*}
E&=\langle\varphi_u,\varphi_u\rangle,\ F=\langle\varphi_u,\varphi_v\rangle,\ G=\langle\varphi_v,\varphi_v\rangle,\\
\ell&=\langle N,\varphi_{uu}\rangle,\ m=\langle N,\varphi_{uv}\rangle,\ n=\langle N,\varphi_{vv}\rangle.
\end{align*}
It is straightforward to verify that
\begin{equation}
\label{eq:normal}
|\varphi_u\times\varphi_v|^2=EG-F^2.
\end{equation}

Example. Compute the Gaußian and the mean curvatures of the helicoid
$$\varphi(u,v)=(u\cos v,u\sin v, bv),\ b\ne 0.$$

Figure: a helicoid.

Solution. \begin{align*}
\varphi_u&=(\cos v,\sin v,0),\ \varphi_v=(-u\sin v,u\cos v,b),\\
\varphi_{uu}&=0,\ \varphi_{uv}=(-\sin v,\cos v,0),\ \varphi_{vv}=(-u\cos v,-u\sin v,0).
\end{align*}
$E$, $F$ and $G$ are calculated to be
$$E=1,\ F=0,\ G=b^2+u^2.$$
$\varphi_u\times\varphi_v=(b\sin v,-b\cos v,u)$, so the unit normal vector field $N$ is given by
$$N=\frac{\varphi_u\times\varphi_v}{\sqrt{EG-F^2}}=\frac{(b\sin v,-b\cos v,u)}{\sqrt{b^2+u^2}}.$$
Next, $\ell, m,n$ are calculated to be
$$\ell=0,\ m=-\frac{b}{\sqrt{b^2+u^2}},\ n=0.$$
Finally we find the Gaußian curvature $K$ and the mean curvature $H$:
\begin{align*}
K&=\frac{\ell n-m^2}{EG-F^2}=-\frac{b^2}{(b^2+u^2)^2},\\
H&=\frac{G\ell+En-2Fm}{2(EG-F^2)}=0.
\end{align*}
Surfaces with $H=0$ are called minimal surfaces.
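As a sanity check (my own addition, assuming sympy is available), the following sketch recomputes $E$, $F$, $G$, $\ell$, $m$, $n$ for the helicoid and then $K$ and $H$ from \eqref{eq:mean2} and \eqref{eq:gauss2}.

```python
import sympy as sp

u, v, b = sp.symbols('u v b', real=True)
phi = sp.Matrix([u*sp.cos(v), u*sp.sin(v), b*v])   # the helicoid

phi_u, phi_v = phi.diff(u), phi.diff(v)
E, F, G = phi_u.dot(phi_u), phi_u.dot(phi_v), phi_v.dot(phi_v)

N = phi_u.cross(phi_v) / sp.sqrt(E*G - F**2)       # unit normal N = (phi_u x phi_v)/sqrt(EG - F^2)
l = N.dot(phi.diff(u, 2))
m = N.dot(phi.diff(u).diff(v))
n = N.dot(phi.diff(v, 2))

K = sp.simplify((l*n - m**2) / (E*G - F**2))
H = sp.simplify((G*l + E*n - 2*F*m) / (2*(E*G - F**2)))
print(K)   # expected: -b**2/(b**2 + u**2)**2
print(H)   # expected: 0
```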

For further reading on the topic I discussed here, I recommend:

Barrett O’Neill, Elementary Differential Geometry, Academic Press, 1967

Trigonometric Integrals

Let us attempt to calculate $\int\cos^n xdx$ where $n$ is a positive integer. In the following table, the first column represents $\cos^{n-1}x$ and its derivative, and the second column represents $\cos x$ and its integral.
$$\begin{array}{ccc}
\cos^{n-1}x & & \cos x\\
&\stackrel{+}{\searrow}&\\
-(n-1)\cos^{n-2}x\sin x & \stackrel{-}{\longrightarrow} & \sin x\\
\end{array}$$
By integration by parts, we have
\begin{align*}
\int\cos^n xdx&=\cos^{n-1}x\sin x+(n-1)\int\cos^{n-2}x\sin^2xdx\\
&=\cos^{n-1}x\sin x+(n-1)\int\cos^{n-2}xdx-(n-1)\int\cos^nxdx+C’
\end{align*}
where $C’$ is a constant. Solving this for $\int\cos^nxdx$, we obtain
\begin{equation}
\label{eq:cosred}
\int\cos^n xdx=\frac{1}{n}\cos^{n-1}x\sin x+\frac{n-1}{n}\int\cos^{n-2}xdx+C
\end{equation}
where $C=\frac{C’}{n}$. A formula such as \eqref{eq:cosred} is called a reduction formula. Similarly, we obtain the following reduction formulae.
\begin{align}
\int\sin^n xdx&=-\frac{1}{n}\sin^{n-1}x\cos x+\frac{n-1}{n}\int\sin^{n-2}xdx\\
\int\tan^nxdx&=\frac{1}{n-1}\tan^{n-1}x-\int\tan^{n-2}xdx,\ n\ne 1\\
\int\sec^nxdx&=\frac{1}{n-1}\sec^{n-2}x\tan x+\frac{n-2}{n-1}\int\sec^{n-2}xdx,\ n\ne 1
\end{align}
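The following quick check (my own addition, assuming sympy is available) differentiates the right-hand side of \eqref{eq:cosred} for a few small $n$ and confirms numerically that it reproduces $\cos^n x$.

```python
import sympy as sp

x = sp.symbols('x', real=True)
for n in range(2, 6):
    rhs = sp.cos(x)**(n - 1)*sp.sin(x)/n + sp.Rational(n - 1, n)*sp.integrate(sp.cos(x)**(n - 2), x)
    residual = sp.diff(rhs, x) - sp.cos(x)**n
    # the residual should vanish identically; check it numerically at a few points
    assert all(abs(residual.subs(x, x0).evalf()) < 1e-12 for x0 in (0.3, 1.1, 2.7))
print("reduction formula checked for n = 2, 3, 4, 5")
```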
Example. Use the reduction formula \eqref{eq:cosred} to evaluate $\int\cos^3xdx$.

Solution.
\begin{align*}
\int\cos^3xdx&=\frac{1}{3}\cos^2x\sin x+\frac{2}{3}\int\cos xdx\\
&=\frac{1}{3}\cos^2x\sin x+\frac{2}{3}\sin x+C,
\end{align*}
where $C$ is a constant.

Integrals like the one in the following example are rather tricky.

Example. Evaluate $\int\sec xdx$.

Solution.
\begin{align*}
\int\sec xdx&=\int\sec x\frac{\sec x+\tan x}{\sec x+\tan x}dx\\
&=\int\frac{\sec^2x+\sec x\tan x}{\sec x+\tan x}dx\\
&=\int\frac{du}{u}\ (\mbox{substitution}\ u=\sec x+\tan x)\\
&=\ln|u|+C\\
&=\ln|\sec x+\tan x|+C,
\end{align*}
where $C$ is a constant.

Example. Evaluate $\int\csc xdx$.

Solution. It can be done similarly to the previous example.
\begin{align*}
\int\csc xdx&=\int\csc x\frac{\csc x+\cot x}{\csc x+\cot x}dx\\
&=-\ln|\csc x+\cot x|+C,
\end{align*}
where $C$ is a constant.

Evaluating Integrals of the Type $\int\sin^mx\cos^nxdx$ Where $m,n$ Are Positive Integers

Case 1. One of the integer powers, say $m$, is odd.

$m=2k+1$ for some integer $k$. So,
\begin{align*}
\sin^mx&=\sin^{2k+1}x\\
&=(\sin^2x)^k\sin x\\
&=(1-\cos^2x)^k\sin x.
\end{align*}
Use the substitution $u=\cos x$ in this case.

Example. Evaluate $\int\sin^3x\cos^2xdx$.

Solution.
\begin{align*}
\int\sin^3x\cos^2xdx&=\int \sin^2x\sin x\cos^2xdx\\
&=\int(1-\cos^2x)\cos^2x\sin xdx\\
&=-\int(1-u^2)u^2du\ (\mbox{substitution}\ u=\cos x)\\
&=\frac{u^5}{5}-\frac{u^3}{3}+C\\
&=\frac{\cos^5x}{5}-\frac{\cos^3x}{3}+C,
\end{align*}
where $C$ is a constant.

Example. Evaluate $\int\cos^3xdx$.

Solution.
\begin{align*}
\int\cos^3xdx&=\int\cos^2x\cos xdx\\
&=\int(1-\sin^2x)\cos xdx\\
&=\int(1-u^2)du\ (\mbox{substitution}\ u=\sin x)\\
&=u-\frac{u^3}{3}+C\\
&=\sin x-\frac{\sin^3x}{3}+C,
\end{align*}
where $C$ is a constant.

Case 2. Both $m$ and $n$ are even.

In this case, use the trigonometric identities
$$\sin^2x=\frac{1-\cos 2x}{2},\ \cos^2x=\frac{1+\cos 2x}{2}.$$

Example. Evaluate $\int\sin^2x\cos^4xdx$.

Solution. Left to readers for exercise. The answer is
$$\frac{1}{16}\left(x-\frac{1}{4}\sin 4x+\frac{1}{3}\sin^32x\right)+C,$$
where $C$ is a constant.
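Readers who want to check their work can differentiate the stated answer; here is one way to do it with sympy (my own addition, assuming sympy is available).

```python
import sympy as sp

x = sp.symbols('x', real=True)
answer = (x - sp.sin(4*x)/4 + sp.sin(2*x)**3/3) / 16
residual = sp.diff(answer, x) - sp.sin(x)**2*sp.cos(x)**4
# the residual should be identically zero; check numerically at sample points
print(max(abs(residual.subs(x, x0).evalf()) for x0 in (0.2, 0.9, 1.7, 2.5)))  # essentially 0
```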

Integrals of Powers of $\tan x$ and $\sec x$

Integrals of this type can mostly be done by using the trigonometric identity
$$1+\tan^2x=\sec^2x.$$

Example. Evaluate $\int\tan^4xdx$.

Solution.
\begin{align*}
\int\tan^4xdx&=\int\tan^2x\tan^2xdx\\
&=\int\tan^2x(\sec^2x-1)dx\\
&=\int\tan^2x\sec^2xdx-\int\tan^2xdx\\
&=\int u^2du-\int(\sec^2x-1)dx\ (\mbox{substitution}\ u=\tan x)\\
&=\frac{\tan^3x}{3}-\tan x+x+C,
\end{align*}
where $C$ is a constant.

Example. Evaluate $\int\sec^3xdx$.

Solution.
\begin{align*}
\int\sec^3xdx&=\int\sec x\sec^2xdx\\
&=\sec x\tan x-\int\tan^2x\sec xdx\ (\mbox{integration by parts})\\
&=\sec x\tan x-\int(\sec^2x-1)\sec xdx\\
&=\sec x\tan x-\int\sec^3xdx+\int\sec xdx+C’\\
&=\sec x\tan x-\int\sec^3xdx+\ln|\sec x+\tan x|+C’,
\end{align*}
where $C’$ is a constant. Hence,
$$\int\sec^3xdx=\frac{1}{2}\sec x\tan x+\frac{1}{2}\ln|\sec x+\tan x|+C,$$
where $C=\frac{C’}{2}$.

Products of Sines and Cosines

This type of integral includes $\int\sin mx\sin nxdx$, $\int\sin mx\cos nxdx$, and $\int\cos mx\cos nxdx$. In this case use the identities
\begin{align*}
\sin mx\sin nx&=\frac{1}{2}[\cos(m-n)x-\cos(m+n)x]\\
\sin mx\cos nx&=\frac{1}{2}[\sin(m-n)x+\sin(m+n)x]\\
\cos mx\cos nx&=\frac{1}{2}[\cos(m-n)x+\cos(m+n)x]
\end{align*}
Example. Evaluate $\int\sin 3x\cos5xdx$.

Solution.
\begin{align*}
\int\sin 3x\cos5xdx&=-\frac{1}{2}\int\sin 2xdx+\frac{1}{2}\int\sin 8xdx\\
&=\frac{1}{4}\cos 2x-\frac{1}{16}\cos 8x+C,
\end{align*}
where $C$ is a constant.

Example. Evaluate $\int_0^1\sin m\pi x\sin n\pi xdx$ and $\int_0^1\cos m\pi x\cos n\pi xdx$ where $m$ and $n$ are positive integers.

Solution. If $m=n$, then
\begin{align*}
\int_0^1\sin m\pi x\sin n\pi xdx&=\int_0^1\sin^2m\pi xdx\\
&=\int_0^1\frac{1-\cos 2m\pi x}{2}dx\\
&=\frac{1}{2}\int_0^1dx-\frac{1}{2}\int_0^1\cos 2m\pi xdx\\
&=\frac{1}{2}-\frac{1}{4m\pi}[\sin 2m\pi x]_0^1\\
&=\frac{1}{2}.
\end{align*}
Now we assume that $m\ne n$. Then
\begin{align*}
\int_0^1\sin m\pi x\sin n\pi xdx&=\frac{1}{2}\int_0^1\cos(m-n)\pi xdx-\frac{1}{2}\int_0^1\cos(m+n)\pi xdx\\
&=0.
\end{align*}
So, we can simply write the integral as
\begin{equation}
\label{eq:orthofunct}
\int_0^1\sin m\pi x\sin n\pi xdx=\frac{1}{2}\delta_{mn},
\end{equation}
where
$$\delta_{mn}=\left\{\begin{array}{ccc}
1 & \mbox{if} & m=n,\\
0 & \mbox{if} & m\ne n.
\end{array}\right.$$
$\delta_{mn}$ is called the Kronecker delta.
Similarly, we also have
\begin{equation}
\label{eq:orthofunct2}
\int_0^1\cos m\pi x\cos n\pi xdx=\frac{1}{2}\delta_{mn}.
\end{equation}
The integrals \eqref{eq:orthofunct} and \eqref{eq:orthofunct2} play an important role in studying boundary value problems for the heat equation and the wave equation. They also appear in different branches of mathematics and physics such as functional analysis, Fourier analysis, electromagnetism, and quantum mechanics. In mathematics and physics, functions like $\sin n\pi x$ and $\cos n\pi x$ are often treated as vectors, and integrals like \eqref{eq:orthofunct} and \eqref{eq:orthofunct2} can be considered as inner products $\langle\sin m\pi x,\sin n\pi x\rangle$ and $\langle\cos m\pi x,\cos n\pi x\rangle$, respectively. In this sense, we can say that $\sin m\pi x$ and $\sin n\pi x$ are orthogonal if $m\ne n$. For this reason, the functions $\sin n\pi x$ and $\cos n\pi x$, $n=1,2,\cdots$, are called orthogonal functions.
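As a quick symbolic confirmation (my own addition, assuming sympy is available), the following loop verifies \eqref{eq:orthofunct} and \eqref{eq:orthofunct2} for small $m$ and $n$.

```python
import sympy as sp

x = sp.symbols('x', real=True)
for m in range(1, 4):
    for n in range(1, 4):
        s = sp.integrate(sp.sin(m*sp.pi*x)*sp.sin(n*sp.pi*x), (x, 0, 1))
        c = sp.integrate(sp.cos(m*sp.pi*x)*sp.cos(n*sp.pi*x), (x, 0, 1))
        expected = sp.Rational(1, 2) if m == n else 0
        assert sp.simplify(s - expected) == 0 and sp.simplify(c - expected) == 0
print("orthogonality relations hold for m, n = 1, 2, 3")
```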

Integration by Parts

Let $f(x)$ and $g(x)$ be differentiable functions. Then the product rule
$$(f(x)g(x))’=f’(x)g(x)+f(x)g’(x)$$
leads, upon integrating both sides, to
\begin{equation}
\label{eq:intpart}
\int f(x)g’(x)dx=f(x)g(x)-\int f’(x)g(x)dx.
\end{equation}
The formula \eqref{eq:intpart} is called integration by parts. If we set $u=f(x)$ and $v=g(x)$, then \eqref{eq:intpart} can be also written as
\begin{equation}
\label{eq:intpart2}
\int udv=uv-\int vdu.
\end{equation}

Example. Evaluate $\int x\cos xdx$.

Solution. Let $u=x$ and $dv=\cos xdx$. Then $du=dx$ and $v=\sin x$. So,
\begin{align*}
\int x\cos xdx&=x\sin x-\int\sin xdx\\
&=x\sin x+\cos x+C,
\end{align*}
where $C$ is a constant.

Example. Evaluate $\int\ln xdx$.

Solution. Let $u=\ln x$ and $dv=dx$. Then $du=\frac{1}{x}dx$ and $v=x$. So,
\begin{align*}
\int\ln xdx&=x\ln x-\int x\cdot\frac{1}{x}dx\\
&=x\ln x-x+C,
\end{align*}
where $C$ is a constant.

Often it is required to apply integration by parts more than once to evaluate a given integral. In that case, it is convenient to use a table as shown in the following example.

Example. Evaluate $\int x^2e^xdx$.

Solution. In the following table, the first column represents $x^2$ and its derivatives, and the second column represents $e^x$ and its integrals.
$$\begin{array}{ccc}
x^2 & & e^x\\
&\stackrel{+}{\searrow}&\\
2x & & e^x\\
&\stackrel{-}{\searrow}&\\
2 & & e^x\\
&\stackrel{+}{\searrow}&\\
0 & & e^x.
\end{array}$$
This table shows the repeated application of integration by parts. Following the table, the final answer is given by
$$\int x^2e^xdx=x^2e^x-2xe^x+2e^x+C,$$
where $C$ is a constant.
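As a check (my own addition, assuming sympy is available), direct symbolic integration agrees with the tabular result.

```python
import sympy as sp

x = sp.symbols('x')
tabular = x**2*sp.exp(x) - 2*x*sp.exp(x) + 2*sp.exp(x)         # result read off the table
print(sp.simplify(sp.integrate(x**2*sp.exp(x), x) - tabular))  # expected: 0
```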

Example. Evaluate $\int x^3\sin xdx$.

Solution. In the following table, the first column represents $x^3$ and its derivatives, and the second column represents $\sin x$ and its integrals.
$$\begin{array}{ccc}
x^3 & & \sin x\\
&\stackrel{+}{\searrow}&\\
3x^2 & & -\cos x\\
&\stackrel{-}{\searrow}&\\
6x & & -\sin x\\
&\stackrel{+}{\searrow}&\\
6 & & \cos x\\
&\stackrel{-}{\searrow}&\\
0 & & \sin x.
\end{array}$$
Following the table, the final answer is given by
$$\int x^3\sin xdx=-x^3\cos x+3x^2\sin x+6x\cos x-6\sin x+C,$$
where $C$ is a constant.

Example. Evaluate $\int e^x\cos xdx$.

Solution. In the following table, the first column represents $e^x$ and its derivatives, and the second column represents $\cos x$ and its integrals.
$$\begin{array}{ccc}
e^x & & \cos x\\
&\stackrel{+}{\searrow}&\\
e^x & & \sin x\\
&\stackrel{-}{\searrow}&\\
e^x & & -\cos x.
\end{array}$$
Now, this is different from the previous two examples. While the first column repeats the same function $e^x$, the functions in the second column change from $\cos x$ to $\sin x$ and to $\cos x$ again up to sign. In this case, we stop there, write the answer as we have done in the previous two examples, and add to it $\int e^x(-\cos x)dx$. (Notice that the integrand is the product of the functions in the last row.) That is,
$$\int e^x\cos xdx=e^x\sin x+e^x\cos x-\int e^x\cos xdx.$$
For now we do not worry about the constant of integration. Solving this for $\int e^x\cos xdx$, we obtain the final answer
$$\int e^x\cos xdx=\frac{1}{2}e^x\sin x+\frac{1}{2}e^x\cos x+C,$$
where $C$ is a constant.

Example. Evaluate $\int e^x\sin xdx$.

Solution. In the following table, the first column represents $e^x$ and its derivatives, and the second column represents $\sin x$ and its integrals.
$$\begin{array}{ccc}
e^x & & \sin x\\
&\stackrel{+}{\searrow}&\\
e^x & & -\cos x\\
&\stackrel{-}{\searrow}&\\
e^x & & -\sin x.
\end{array}$$
This is similar to the above example. The first column repeats the same function $e^x$, and the functions in the second column change from $\sin x$ to $\cos x$ and to $\sin x$ again up to sign. So we stop there and write
$$\int e^x\sin xdx=-e^x\cos x+e^x\sin x-\int e^x\sin xdx.$$
Solving this for $\int e^x\sin xdx$, we obtain
$$\int e^x\sin xdx=-\frac{1}{2}e^x\cos x+\frac{1}{2}e^x\sin x+C,$$
where $C$ is a constant.

Example. Evaluate $\int e^{5x}\cos 8xdx$.

Solution. In the following table, the first column represents $e^{5x}$ and its derivatives, and the second column represents $\cos 8x$ and its integrals.
$$\begin{array}{ccc}
e^{5x} & & \cos 8x\\
&\stackrel{+}{\searrow}&\\
5e^{5x} & & \frac{1}{8}\sin 8x\\
&\stackrel{-}{\searrow}&\\
25e^{5x} & & -\frac{1}{64}\cos 8x.
\end{array}$$
The first column repeats the same function $e^{5x}$ up to a constant multiple, and the functions in the second column change from $\cos 8x$ to $\sin 8x$ and to $\cos 8x$ again up to a constant multiple. In this case we do the same.
$$\int e^{5x}\cos 8xdx=\frac{1}{8}e^{5x}\sin 8x+\frac{5}{64}e^{5x}\cos 8x-\frac{25}{64}\int e^{5x}\cos 8xdx.$$
Solving this for $\int e^{5x}\cos 8xdx$, we obtain
$$\int e^{5x}\cos 8xdx=\frac{8}{89}e^{5x}\sin 8x+\frac{5}{89}e^{5x}\cos 8x+C,$$
where $C$ is a constant.
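Because the coefficients $\frac{8}{89}$ and $\frac{5}{89}$ are easy to get wrong, here is a quick derivative check with sympy (my own addition, assuming sympy is available).

```python
import sympy as sp

x = sp.symbols('x')
answer = sp.Rational(8, 89)*sp.exp(5*x)*sp.sin(8*x) + sp.Rational(5, 89)*sp.exp(5*x)*sp.cos(8*x)
print(sp.simplify(sp.diff(answer, x) - sp.exp(5*x)*sp.cos(8*x)))  # expected: 0
```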

The evaluation of a definite integral by parts can be done as
\begin{equation}
\label{eq:intpart3}
\int_a^b f(x)g’(x)dx=[f(x)g(x)]_a^b-\int_a^b f’(x)g(x)dx.
\end{equation}

Example. Find the area of the region bounded by $y=xe^{-x}$ and the x-axis from $x=0$ to $x=4$.

Figure: the graph of $y=xe^{-x}$, $0\le x\le 4$.

Solution. Let $u=x$ and $dv=e^{-x}dx$. Then $du=dx$ and $v=-e^{-x}$. Hence,
\begin{align*}
A&=\int_0^4 xe^{-x}dx\\
&=[-xe^{-x}]_0^4+\int_0^4 e^{-x}dx\\
&=-4e^{-4}+[-e^{-x}]_0^4\\
&=1-5e^{-4}.
\end{align*}
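A one-line sympy check of this area (my own addition, assuming sympy is available):

```python
import sympy as sp

x = sp.symbols('x')
A = sp.integrate(x*sp.exp(-x), (x, 0, 4))
print(sp.simplify(A - (1 - 5*sp.exp(-4))))  # expected: 0
```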

A Convergence Theorem for Fourier Series

Previously, we have seen that if a function $f$ is Riemann integrable on every bounded interval, it can be expanded as a trigonometric series, called a Fourier series, by assuming that the series converges to $f$. So it is natural to pose the following question: if $f$ is a periodic function, does its Fourier series always converge to $f$? The answer is affirmative if $f$ is, in addition, piecewise smooth.

Let $S_N^f(\theta)$ denote the $N$-th partial sum of the Fourier series of a $2\pi$-periodic function $f(\theta)$. Then
\begin{equation}
\label{eq:partsum}
\begin{aligned}
S_N^f(\theta)&=\sum_{-N}^N c_ne^{in\theta}\\
&=\frac{1}{2\pi}\sum_{-N}^N\int_{-\pi}^\pi f(\psi)e^{in(\theta-\psi)}d\psi\\
&=\frac{1}{2\pi}\sum_{-N}^N\int_{-\pi}^\pi f(\psi)e^{in(\psi-\theta)}d\psi.
\end{aligned}
\end{equation}
Here the last equality follows from replacing $n$ by $-n$, which leaves the symmetric sum over $-N\leq n\leq N$ unchanged. Now let $\phi=\psi-\theta$. Then
\begin{align*}
S_N^f(\theta)&=\frac{1}{2\pi}\sum_{-N}^N\int_{-\pi+\theta}^{\pi+\theta} f(\phi+\theta)e^{in\phi}d\phi\\
&=\frac{1}{2\pi}\sum_{-N}^N\int_{-\pi}^\pi f(\phi+\theta)e^{in\phi}d\phi\\
&=\int_{-\pi}^\pi f(\theta+\phi)D_N(\phi)d\phi,
\end{align*}
where
\begin{equation}
\label{eq:dkernel}
\begin{aligned}
D_N(\phi)&=\frac{1}{2\pi}\sum_{-N}^N e^{in\phi}\\
&=\frac{1}{2\pi}\frac{e^{i(N+1)\phi}-e^{-iN\phi}}{e^{i\phi}-1}\\
&=\frac{1}{2\pi}\frac{\sin\left(N+\frac{1}{2}\right)\phi}{\sin\frac{1}{2}\phi}.
\end{aligned}
\end{equation}
$D_N(\phi)$ is called the $N$-th Dirichlet kernel. Note that the Dirichlet kernel can be used to realize the Dirac delta function $\delta(x)$ in the sense of distributions, i.e.
$$\delta(x)=\lim_{n\to\infty}\frac{1}{2\pi}\frac{\sin\left(n+\frac{1}{2}\right)x}{\sin\frac{1}{2}x}.$$

Figure: the Dirichlet kernels $D_n(x)$, $n=1,\dots,10$, $-\pi\le x\le\pi$.

Note that
$$\frac{1}{2}+\frac{\sin\left(N+\frac{1}{2}\right)\theta}{2\sin\frac{1}{2}\theta}=1+\sum_{n=1}^N\cos n\theta\ (0<\theta<2\pi)$$
Using this identity, one can easily show that:

Lemma. For any $N$,
$$\int_{-\pi}^0 D_N(\theta)d\theta=\int_0^{\pi}D_N(\theta)d\theta=\frac{1}{2}.$$
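The Lemma is easy to confirm with sympy (my own addition, assuming sympy is available) using the real form $D_N(\theta)=\frac{1}{2\pi}\left(1+2\sum_{n=1}^N\cos n\theta\right)$ of the Dirichlet kernel; by symmetry it suffices to check the integral over $[0,\pi]$.

```python
import sympy as sp

theta = sp.symbols('theta', real=True)
for N in range(1, 5):
    # real form of the N-th Dirichlet kernel
    D_N = (1 + 2*sum(sp.cos(n*theta) for n in range(1, N + 1))) / (2*sp.pi)
    print(N, sp.integrate(D_N, (theta, 0, sp.pi)))  # expected: 1/2 for every N
```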

Now, we are ready to prove the following convergence theorem.

Theorem. If $f$ is $2\pi$-periodic and piecewise smooth on $\mathbb{R}$, then
$$\lim_{N\to\infty} S_N^f(\theta)=\frac{1}{2}[f(\theta-)+f(\theta+)]$$
for every $\theta$. Here, $f(\theta-)=\lim_{\stackrel{h\to 0}{h>0}}f(\theta-h)$ and $f(\theta+)=\lim_{\stackrel{h\to 0}{h>0}}f(\theta+h)$. In particular, $\lim_{N\to\infty}S_N^f(\theta)=f(\theta)$ for every $\theta$ at which $f$ is continuous.

Proof. By Lemma,
$$\frac{1}{2}f(\theta-)=f(\theta-)\int_{-\pi}^0 D_N(\phi)d\phi,\ \frac{1}{2}f(\theta+)=f(\theta+)\int_0^\pi D_N(\phi)d\phi.$$
So,
\begin{align*}
S_N^f(\theta)-\frac{1}{2}[f(\theta-)+f(\theta+)]&=\int_{-\pi}^0[f(\theta+\phi)-f(\theta-)]D_N(\phi)d\phi+\\
&\int_0^\pi[f(\theta+\phi)-f(\theta+)]D_N(\phi)d\phi\\
&=\frac{1}{2\pi}\int_{-\pi}^0[f(\theta+\phi)-f(\theta-)]\frac{e^{i(N+1)\phi}-e^{-iN\phi}}{e^{i\phi}-1}d\phi\\
&+\frac{1}{2\pi}\int_0^\pi[f(\theta+\phi)-f(\theta+)]\frac{e^{i(N+1)\phi}-e^{-iN\phi}}{e^{i\phi}-1}d\phi.
\end{align*}
Since $f$ is piecewise smooth and $e^{i\phi}-1\sim i\phi$ as $\phi\to 0$, we have
$$\lim_{\phi\to 0+}\frac{f(\theta+\phi)-f(\theta+)}{e^{i\phi}-1}=\frac{f’(\theta+)}{i},\ \lim_{\phi\to 0-}\frac{f(\theta+\phi)-f(\theta-)}{e^{i\phi}-1}=\frac{f’(\theta-)}{i}.$$
Hence, the function
$$g(\phi):=\left\{\begin{aligned}
&\frac{f(\theta+\phi)-f(\theta+)}{e^{i\phi}-1},\ -\pi<\phi<0,\\
&\frac{f(\theta+\phi)-f(\theta-)}{e^{i\phi}-1},\ 0<\phi<\pi
\end{aligned}\right.$$
is piecewise continuous on $[-\pi,\pi]$. By the corollary to Bessel’s inequality,
$$c_n=\frac{1}{2\pi}\int_{-\pi}^\pi g(\phi)e^{-in\phi}d\phi\to 0$$
as $n\to\pm\infty$. Therefore,
\begin{align*}
S_N^f(\theta)-\frac{1}{2}[f(\theta-)+f(\theta+)]&=\frac{1}{2\pi}\int_{-\pi}^\pi g(\phi)[e^{i(N+1)\phi}-e^{-iN\phi}]d\phi\\
&=c_{-(N+1)}-c_N\\
&\to 0
\end{align*}
as $N\to\infty$. This completes the proof.

Corollary. If $f$ and $g$ are $2\pi$-periodic and piecewise smooth, and $f$ and $g$ have the same Fourier coefficients, then $f=g$.

Proof. If $f$ and $g$ have the same Fourier coefficients, then their Fourier series are the same. Due to the conditions on $f$ and $g$, the Fourier series of $f$ and $g$ converge to $f$ and $g$, respectively, by the above convergence theorem. Hence, $f=g$.
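A small numerical illustration of the convergence theorem (my own addition, assuming numpy is available): for the $2\pi$-periodic square wave equal to $-1$ on $(-\pi,0)$ and $+1$ on $(0,\pi)$, whose Fourier series is $\frac{4}{\pi}\sum_{n\ \mathrm{odd}}\frac{\sin n\theta}{n}$, the partial sums at the jump $\theta=0$ stay at the midpoint value $0$, while away from the jump they approach $f(\theta)$.

```python
import numpy as np

def S_N(theta, N):
    """Fourier partial sum of the square wave: (4/pi) * sum over odd n <= N of sin(n*theta)/n."""
    n = np.arange(1, N + 1, 2)                  # odd harmonics only
    return (4/np.pi) * np.sum(np.sin(n*theta)/n)

for N in (10, 100, 1000):
    print(N, S_N(0.0, N), S_N(1.0, N))          # first value stays 0, second tends to 1
```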

The Curvature of a Curve in Euclidean 3-space $\mathbb{R}^3$

The quantity curvature is intended to be a measurement of the bending or turning of a curve. Let $\alpha: I\longrightarrow\mathbb{R}^3$ be a regular curve (i.e. a smooth curve whose derivative never vanishes). Suppose first that $\alpha$ has unit speed, i.e.
\begin{equation}
\label{eq:unitspped}
||\dot\alpha(t)||^2=\dot\alpha(t)\cdot\dot\alpha(t)=1.
\end{equation}
Differentiating \eqref{eq:unitspped}, we see that $\dot\alpha(t)\cdot\ddot\alpha(t)=0$, i.e. the acceleration is normal to the velocity which is tangent to $\alpha$. Hence, measuring the acceleration is measuring the curvature. So, if we denote the curvature by $\kappa$, then
\begin{equation}
\label{eq:curvature}
\kappa=||\ddot\alpha(t)||.
\end{equation}
Remember that the definition of curvature \eqref{eq:curvature} requires the curve $\alpha$ to be a unit speed curve, but that is not always the case. What we do know is that we can always reparametrize a curve, and a reparametrization does not change the curve itself but only its speed. There is one particular parametrization that we are interested in, as it results in a unit speed curve. It is called parametrization by arc-length. This time let us assume that $\alpha$ is not a unit speed curve and define
\begin{equation}
\label{eq:arclength}
s(t)=\int_a^t||\dot\alpha(u)||du,
\end{equation}
where $a\in I$. Since $\frac{ds}{dt}>0$, $s(t)$ is an increasing function and so it is one-to-one. This means that we can solve \eqref{eq:arclength} for $t$ and this allows us to reparametrize $\alpha(t)$ by the arc-length parameter $s$.

Example. Let $\alpha: (-\infty,\infty)\longrightarrow\mathbb{R}^3$ be given by
$$\alpha(t)=(a\cos t,a\sin t,bt)$$
where $a>0$, $b\ne 0$. $\alpha$ is a right circular helix. Its speed is
$$||\dot\alpha(t)||=\sqrt{a^2+b^2},$$ which in general is not equal to $1$.
$s(t)=\sqrt{a^2+b^2}t$, so $t=\frac{s}{\sqrt{a^2+b^2}}$. The reparametrization of $\alpha(t)$ by $s$ is given by
$$\alpha(s)=\left(a\cos\frac{s}{\sqrt{a^2+b^2}},a\sin\frac{s}{\sqrt{a^2+b^2}},\frac{bs}{\sqrt{a^2+b^2}}\right).$$
Differentiating twice with respect to $s$,
$$\ddot\alpha(s)=\left(-\frac{a}{a^2+b^2}\cos\frac{s}{\sqrt{a^2+b^2}},-\frac{a}{a^2+b^2}\sin\frac{s}{\sqrt{a^2+b^2}},0\right).$$
Hence the curvature $\kappa$ is
$$\kappa=||\ddot\alpha(s)||=\frac{a}{a^2+b^2}.$$
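Here is a sympy verification of this curvature (my own addition, assuming sympy is available), taking $a,b>0$ for simplicity and differentiating the arc-length parametrization twice.

```python
import sympy as sp

s, a, b = sp.symbols('s a b', positive=True)
c = sp.sqrt(a**2 + b**2)
alpha = sp.Matrix([a*sp.cos(s/c), a*sp.sin(s/c), b*s/c])   # arc-length parametrization
ddalpha = alpha.diff(s, 2)
kappa = sp.simplify(sp.sqrt(sum(comp**2 for comp in ddalpha)))
print(kappa)   # expected: a/(a**2 + b**2)
```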

Bessel’s Inequality

Bessel’s inequality is important in studying Fourier series.

Theorem. If $f$ is $2\pi$-periodic and Riemann integrable on $[-\pi,\pi]$ and if the Fourier coefficients $c_n$ are defined by
$$c_n=\frac{1}{2\pi}\int_{-\pi}^{\pi}f(\theta)e^{-in\theta}d\theta,$$
then
\begin{equation}
\label{eq:besselinequality}
\sum_{n=-\infty}^\infty|c_n|^2\leq\frac{1}{2\pi}\int_{-\pi}^\pi|f(\theta)|^2d\theta.
\end{equation}

Proof. Assume for simplicity that $f$ is real-valued; the complex case is similar.
\begin{align*}
0&\leq|f(\theta)-\sum_{-N}^Nc_ne^{in\theta}|^2\\
&=f(\theta)^2-\sum_{-N}^Nf(\theta)[c_ne^{in\theta}+\overline{c_n}e^{-in\theta}]+\sum_{m,n=-N}^Nc_m\overline{c_n}e^{i(m-n)\theta}
\end{align*}
By integrating,
\begin{align*}
\frac{1}{2\pi}\int_{-\pi}^\pi|f(\theta)-\sum_{-N}^Nc_ne^{in\theta}|^2d\theta&=\frac{1}{2\pi}\int_{-\pi}^\pi f(\theta)^2d\theta-\sum_{-N}^N\left[c_n\frac{1}{2\pi}\int_{-\pi}^\pi f(\theta)e^{in\theta}d\theta\right.\\
\left.+\overline{c_n}\frac{1}{2\pi}\int_{-\pi}^\pi f(\theta)e^{-in\theta}d\theta\right]+&\sum_{m,n=-N}^Nc_m\overline{c_n}\frac{1}{2\pi}\int_{-\pi}^\pi e^{i(m-n)\theta}d\theta\\
&=\frac{1}{2\pi}\int_{-\pi}^\pi f(\theta)^2d\theta-\sum_{-N}^N|c_n|^2.
\end{align*}
Hence, for each $N=1,2,\cdots$,
$$\sum_{-N}^N|c_n|^2\leq\frac{1}{2\pi}\int_{-\pi}^\pi f(\theta)^2d\theta.$$
Taking the limit $N\to\infty$, we obtain
$$\sum_{-\infty}^\infty|c_n|^2\leq\frac{1}{2\pi}\int_{-\pi}^\pi f(\theta)^2d\theta.$$

Note that $|a_0|^2=4|c_0|^2$, $|a_n|^2+|b_n|^2=2(|c_n|^2+|c_{-n}|^2)$, $n\geq 1$. So, in terms of the real coefficients, Bessel’s inequality can be written as
\begin{equation}
\label{eq:besselinequality2}
\frac{1}{4}|a_0|^2+\frac{1}{2}\sum_1^\infty(|a_n|^2+|b_n|^2)\leq\frac{1}{2\pi}\int_{-\pi}^\pi f(\theta)^2d\theta.
\end{equation}
Bessel’s inequality implies that the series $\sum|a_n|^2$, $\sum|b_n|^2$, $\sum|c_n|^2$ are convergent, and hence their terms must tend to zero, as we learned in undergraduate calculus. Therefore the following corollary holds.

Corollary. The Fourier coefficients $a_n$, $b_n$, $c_n$ tend to zero as $n\to\infty$ (and also as $n\to -\infty$ for $c_{-n}$).
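As a numerical illustration of \eqref{eq:besselinequality2} (my own addition, assuming numpy is available), take the $2\pi$-periodic square wave $f(\theta)=\mathrm{sign}(\theta)$ on $(-\pi,\pi)$, whose real Fourier coefficients are $a_n=0$ and $b_n=\frac{4}{n\pi}$ for odd $n$ (and $b_n=0$ for even $n$); the right-hand side is $\frac{1}{2\pi}\int_{-\pi}^\pi f(\theta)^2d\theta=1$.

```python
import numpy as np

n = np.arange(1, 100001, 2)                # odd harmonics
lhs = 0.5*np.sum((4/(np.pi*n))**2)         # (1/2) * sum of |a_n|^2 + |b_n|^2, with a_0 = a_n = 0
print(lhs, "<=", 1.0)                      # lhs approaches 1 from below
```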

Spectrum

Let us recall Hooke’s law
\begin{equation}
\label{eq:hooke}
F=-kx.
\end{equation}
Newton’s second law of motion is
\begin{equation}
\label{eq:newton}
F=ma=m\ddot{x},
\end{equation}
where $\ddot{x}=\frac{d^2 x}{dt^2}$. The equations \eqref{eq:hooke} and \eqref{eq:newton} together give the equation of a simple harmonic oscillator
\begin{equation}
\label{eq:ho}
m\ddot{x}+kx=0.
\end{equation}
Integrating \eqref{eq:ho} with respect to $x$, we have
$$\int(m\ddot{x}dx+kxdx)=E_0,$$
where $E_0$ is a constant. $d\dot{x}=\ddot{x}dt$ and $\dot{x}d\dot{x}=\dot{x}\ddot{x}dt=\ddot{x}dx$. So,
\begin{align*}
\int(m\ddot{x}dx+kxdx)&=\int(m\dot{x}d\dot{x}+kxdx)\\
&=\frac{1}{2}m\dot{x}^2+\frac{1}{2}kx^2.
\end{align*}
Hence, we obtain the conservation law of energy
\begin{equation}
\label{eq:energy}
\frac{1}{2}m\dot{x}^2+\frac{1}{2}kx^2=E_0.
\end{equation}
The general solution of \eqref{eq:ho} is
\begin{equation}
\label{eq:hosol}
\begin{aligned}
x(t)&=a\cos\omega t+b\sin\omega t\\
&=\sqrt{a^2+b^2}\sin(\omega t+\theta),
\end{aligned}
\end{equation}
where $a$ and $b$ are constants, $\omega=\sqrt{\frac{k}{m}}$ and $\theta=\tan^{-1}\left(\frac{a}{b}\right)$. From \eqref{eq:energy} and \eqref{eq:hosol}, the total energy $E_0$ is computed to be
$$E_0=\frac{1}{2}m\omega^2(a^2+b^2).$$
This tells us that the total energy of a simple harmonic oscillator is proportional to $a^2+b^2$, the square of the amplitude. As seen in a previous post, the sawtooth function $f(x)$ is represented as the Fourier series
\begin{align*}
f(x)&=-\frac{2L}{\pi}\sum_{n=1}^\infty\frac{(-1)^n}{n}\sin\left(\frac{n\pi x}{L}\right)\\
&=\frac{2L}{\pi}\left\{\sin\left(\frac{\pi x}{L}\right)-\frac{1}{2}\sin\left(\frac{2\pi x}{L}\right)+\frac{1}{3}\sin\left(\frac{3\pi x}{L}\right)-\cdots\right\}.
\end{align*}
The amplitude of the $n$-th harmonic is $c_n=\frac{2L}{n\pi}$, $n=1,2,3,\cdots$, which is twice the reciprocal of the angular frequency $\omega_n=\frac{n\pi}{L}$. $\{c_n\}$ is called the frequency spectrum or the amplitude spectrum.
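Assuming the sawtooth here is $f(x)=x$ on $(-L,L)$ (which is consistent with the series displayed above), the following sympy sketch (my own addition) recomputes the coefficients $b_n=\frac{1}{L}\int_{-L}^{L}f(x)\sin\frac{n\pi x}{L}dx$ and hence the spectrum $c_n=|b_n|$.

```python
import sympy as sp

x, L = sp.symbols('x L', positive=True)
for n in range(1, 6):
    b_n = sp.simplify(sp.integrate(x*sp.sin(n*sp.pi*x/L), (x, -L, L)) / L)
    print(n, b_n)   # expected: (-1)**(n+1) * 2*L/(n*pi), so |b_n| = 2L/(n*pi)
```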

The First and Second Derivative Tests

The First Derivative Test

The derivative $f’(x)$ can tell us a lot about the function $y=f(x)$. It can tell us where the critical points are, i.e. the points at which $f’(x)=0$, and the critical points are likely places at which $y=f(x)$ assumes a local maximum or a local minimum value. By further examining the properties of $f’(x)$, we can also determine at which critical points $f(x)$ assumes a local maximum, a local minimum, or neither. But first we see that $f’(x)$ can tell us where $y=f(x)$ is increasing or decreasing.

Theorem. Increasing/Decreasing Test

  1. If $f’(x)>0$ on an open interval, $f$ is increasing on that interval.
  2. If $f’(x)<0$ on an open interval, $f$ is decreasing on that interval.

Example. Find where $f(x)=3x^4-4x^3-12x^2+5$ is increasing and where it is decreasing.

Solution.
\begin{align*}
f’(x)&=12x^3-12x^2-24x\\
&=12x(x^2-x-2)\\
&=12x(x-2)(x+1).
\end{align*}
The critical points are $x=-1,0,2$. Using, for instance, the test point method (which is the easiest method of solving an inequality), we obtain the following table.
$$
\begin{array}{|c|c|c|c|c|c|c|c|}
\hline
x & x<-1 & -1 & -1<x<0 & 0 & 0<x<2 & 2 & x>2\\
\hline
f’(x) & – & 0 & + & 0 & – & 0 & +\\
\hline
f(x) & \searrow & f(-1) & \nearrow & f(0) & \searrow & f(2) &\nearrow\\
\hline
\end{array}
$$
So we find that $f$ is increasing on $(-1,0)\cup(2,\infty)$ and $f$ is decreasing on $(-\infty,-1)\cup(0,2)$.

Now, local maximum values and local minimum values can be identified by observing the change of sign of $f’(x)$ at each critical point.

Theorem. [The First Derivative Test] Suppose that $c$ is a critical point of a differentiable function $f(x)$.

  1. If the sign of $f’(x)$ changes from $+$ to $-$ at $c$, $f(c)$ is a local maximum.
  2. If the sign of $f’(x)$ changes from $-$ to $+$ at $c$, $f(c)$ is a local minimum.
  3. If the sign of $f’(x)$ does not change at $c$, $f$ has neither a local maximum nor a local minimum at $c$.

Example. In the previous example, the sign of $f’(x)$ changes from $+$ to $-$ at $0$, so $f(0)=5$ is a local maximum. The sign of $f’(x)$ changes from $-$ to $+$ at $-1$ and at $2$, so $f(-1)=0$ and $f(2)=-27$ are local minimum values.

The following figure confirms our findings from the above two examples.

Figure: the graph of $f(x)=3x^4-4x^3-12x^2+5$.

The Second Derivative Test

The second order derivative $f^{\prime\prime}(x)$ can provide us with an additional piece of information about $y=f(x)$, namely the concavity of the graph of $y=f(x)$.

Definition. If the graph of $f$ lies above all of its tangents on an open interval $I$, it is called concave upward on $I$. If the graph of $f$ lies below all of its tangents on $I$, it is called concave downward on $I$.

From here on, $\smile$ denotes “concave up” and $\frown$ denotes “concave down”.

Definition. A point $(d,f(d))$ on the graph of $y=f(x)$ is called a point of inflection if the concavity of the graph of $f$ changes from $\smile$ to $\frown$ or from $\frown$ to $\smile$ at $(d,f(d))$. The candidates for the points of inflection may be found by solving the equation $f^{\prime\prime}(x)=0$ as shown in the example below.

Theorem. [Concavity Test]

  1. If $f^{\prime\prime}(x)>0$ for all x in an open interval $I$, the graph of $f$ is concave up on $I$.
  2. If $f^{\prime\prime}(x)<0$ for all x in an open interval $I$, the graph of $f$ is concave down on $I$.

Theorem. [The Second Derivative Test] Suppose that $f’(c)=0$ i.e. $c$ is a critical point of $f$. Suppose that $f^{\prime\prime}$ is continuous near $c$.

  1. If $f^{\prime\prime}(c)>0$ then $f(c)$ is a local minimum.
  2. If $f^{\prime\prime}(c)<0$ then $f(c)$ is a local maximum.

Example. Let $f(x)=-x^4+2x^2+2$.

  1. Find and identify all local maximum and local minimum values of $f(x)$ using the Second Derivative Test.
  2. Find the intervals on which the graph of $f(x)$ is concave up or concave down. Find all points of inflection.

Solution. 1. First we find the critical points of $f(x)$ by solving the equation $f’(x)=0$:
$$f’(x)=-4x^3+4x=-4x(x^2-1)=-4x(x+1)(x-1)=0.$$ So $x=-1,0,1$ are the critical points of $f(x)$. Next, $f^{\prime\prime}(x)=-12x^2+4$. Since $f^{\prime\prime}(0)=4>0$ and $f^{\prime\prime}(-1)=f^{\prime\prime}(1)=-8<0$, by the Second Derivative Test, $f(0)=2$ is a local minimum value and $f(-1)=f(1)=3$ is a local maximum value.

2. First we need to solve the equation $f^{\prime\prime}(x)=0$:
$$f^{\prime\prime}(x)=-12x^2+4=-12\left(x^2-\frac{1}{3}\right)=-12\left(x+\frac{1}{\sqrt{3}}\right)\left(x-\frac{1}{\sqrt{3}}\right)=0.$$ So $f^{\prime\prime}(x)=0$ at $x=\pm\displaystyle\frac{1}{\sqrt{3}}$. By using the test-point method we find the following table:
$$
\begin{array}{|c||c|c|c|c|c|}
\hline
x & x<-\frac{1}{\sqrt{3}} & -\frac{1}{\sqrt{3}} & -\frac{1}{\sqrt{3}}<x<\frac{1}{\sqrt{3}} & \frac{1}{\sqrt{3}} & x>\frac{1}{\sqrt{3}}\\
\hline
f^{\prime\prime}(x) & – & 0 & + & 0 & -\\
\hline
f(x) & \frown & f\left(-\frac{1}{\sqrt{3}}\right)=\frac{23}{9} & \smile & f\left(\frac{1}{\sqrt{3}}\right)=\frac{23}{9} & \frown\\
\hline
\end{array}
$$
The graph of $f(x)$ is concave down on the intervals $\left(-\infty,-\frac{1}{\sqrt{3}}\right)\cup\left(\frac{1}{\sqrt{3}},\infty\right)$ and is concave up on the interval $\left(-\frac{1}{\sqrt{3}},\frac{1}{\sqrt{3}}\right)$. The points of inflection are $\left(-\frac{1}{\sqrt{3}},\frac{23}{9}\right)$ and $\left(\frac{1}{\sqrt{3}},\frac{23}{9}\right)$.

The following figure confirms our findings from the above example.

Figure: the graph of $f(x)=-x^4+2x^2+2$ with the points of inflection (in blue).
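The computations in this example are easy to reproduce with sympy (my own addition, assuming sympy is available):

```python
import sympy as sp

x = sp.symbols('x', real=True)
f = -x**4 + 2*x**2 + 2
crit = sp.solve(sp.Eq(f.diff(x), 0), x)              # critical points: [-1, 0, 1]
for c in crit:
    print(c, f.diff(x, 2).subs(x, c), f.subs(x, c))  # sign of f'' classifies each point
print(sp.solve(sp.Eq(f.diff(x, 2), 0), x))           # inflection candidates: ±1/sqrt(3)
```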

The Substitution Rule

If a given integral takes the form $\int f(g(x))g’(x)dx$, then it can be converted to a simpler integral that we may be able to evaluate by the substitution $u=g(x)$. In fact, the integral is given in terms of the new variable $u$ as
$$\int f(g(x))g’(x)dx=\int f(u)du.$$

Example. Evaluate $\int x\sqrt{1+x^2}dx$.

Solution. Let $u=1+x^2$. Then $du=2xdx$. So,
\begin{align*}
\int x\sqrt{1+x^2}dx&=\frac{1}{2}\int\sqrt{u}du\\
&=\frac{1}{3}u^{\frac{3}{2}}+C\\
&=\frac{1}{3}(1+x^2)^{\frac{3}{2}}+C,
\end{align*}
where $C$ is an arbitrary constant.

Example. Evaluate $\int\cos (7\theta+5)d\theta$.

Solution. Let $u=7\theta+5$. Then $du=7d\theta$. So,
\begin{align*}
\int\cos (7\theta+5)d\theta&=\frac{1}{7}\int\cos udu\\
&=\frac{1}{7}\sin u+C\\
&=\frac{1}{7}\sin(7\theta+5)+C,
\end{align*}
where $C$ is an arbitrary constant.

Example. Evaluate $\int x^2\sin(x^3)dx$.

Solution. Let $u=x^3$. Then $du=3x^2dx$. So,
\begin{align*}
\int x^2\sin(x^3)dx&=\frac{1}{3}\int \sin udu\\
&=-\frac{1}{3}\cos u+C\\
&=-\frac{1}{3}\cos(x^3)+C,
\end{align*}
where $C$ is an arbitrary constant.

How do we evaluate a definite integral of the form $\int_a^b f(g(x))g’(x)dx$? The following example shows you how.

Example. Evaluate $\int_{-1}^1 3x^2\sqrt{x^3+1}dx$.

Solution. First let us calculate the indefinite integral $\int 3x^2\sqrt{x^3+1}dx$. Let $u=x^3+1$. Then $du=3x^2dx$. So,
\begin{align*}
\int 3x^2\sqrt{x^3+1}dx&=\int\sqrt{u}du\\
&=\frac{2}{3}u^{\frac{3}{2}}+C\\
&=2(x^3+1)^{\frac{3}{2}}+C,
\end{align*}
where $C$ is an arbitrary constant. Now, by the Fundamental Theorem of Calculus,
\begin{align*}
\int_{-1}^1 3x^2\sqrt{x^3+1}dx&=\frac{2}{3}[(x^3+1)^{\frac{3}{2}}]_{-1}^1\\
&=\frac{4\sqrt{2}}{3}.
\end{align*}

But there is a better way to do this as shown in the following theorem. Its proof is straightforward.

Theorem. If $u’$ is continuous on $[a,b]$ and $f$ is continuous on the range of $u$, then
$$\int_a^bf(u(x))u’(x)dx=\int_{u(a)}^{u(b)}f(u)du.$$

Example. Let us redo the previous example using this theorem. Again let $u=x^3+1$. Then $du=3x^2dx$ and $u(-1)=0$, $u(1)=2$. Now, by the above theorem,
\begin{align*}
\int_{-1}^1 3x^2\sqrt{x^3+1}dx&=\int_0^2\sqrt{u}du\\
&=\frac{2}{3}[u^{\frac{3}{2}}]_0^2\\
&=\frac{4\sqrt{2}}{3}.
\end{align*}
I believe you will find this simpler than the previous method.
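To double-check (my own addition, assuming sympy is available), sympy evaluates both forms of the integral and gets the same value.

```python
import sympy as sp

x, u = sp.symbols('x u')
direct = sp.integrate(3*x**2*sp.sqrt(x**3 + 1), (x, -1, 1))   # original variable
substituted = sp.integrate(sp.sqrt(u), (u, 0, 2))             # after u = x^3 + 1
print(sp.simplify(direct), sp.simplify(substituted))          # both should equal 4*sqrt(2)/3
```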

I will finish this lecture with the following nice properties.

Theorem. Let $f$ be a continuous function on $[-a,a]$.

(a) If $f$ is an even function, then $\int_{-a}^a f(x)dx=2\int_0^a f(x)dx$.

(b) If $f$ is an odd function, then $\int_{-a}^a f(x)dx=0$.

This can be easily understood from pictures using the symmetries of even and odd functions. But the theorem can be proved using substitution. I will leave it to you.

Mean Value Theorem

The following theorem is something one can easily picture intuitively.

Theorem. [Rolle's Theorem]
Let $f$ be continuous on the closed interval $[a,b]$ and differentiable on the open interval $(a,b)$. If $f(a)=f(b)$, then there exists a number $c$ in $(a,b)$ such that $f’(c)=0$.

Example. Show that the equation $x^3+x-1=0$ has exactly one real root.

Solution. Let $f(x)=x^3+x-1$. Note that $f(0)=-1$ and $f(1)=1$. So by the Intermediate Value Theorem, there exists at least one root of the equation $x^3+x-1=0$ in the interval $(0,1)$. Now suppose that there are two different roots $a$ and $b$ of the equation $x^3+x-1=0$. Without loss of generality, we may assume that $a<b$. Then $f(x)$ is continuous on $[a,b]$ and differentiable on $(a,b)$. By Rolle’s Theorem, there exists a number $c$ in $(a,b)$ such that $f’(c)=0$. However, $f’(x)=3x^2+1\geq 1$ for all real numbers $x$. This is a contradiction. Therefore, the equation has exactly one real root.
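As a numerical follow-up (my own addition, assuming sympy is available), sympy locates the single real root whose existence and uniqueness we just established.

```python
import sympy as sp

x = sp.symbols('x')
print(sp.real_roots(x**3 + x - 1))        # exactly one real root
print(sp.nsolve(x**3 + x - 1, x, 0.5))    # approximately 0.6823278
```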

Figure: the graph of $f(x)=x^3+x-1$.

Let $f$ be continuous on $[a,b]$ and differentiable on $(a,b)$. Define $g(x)$ to be the vertical distance (with sign) between the graph of $f$ and the line segment from $(a,f(a))$ to $(b,f(b))$, i.e.
$$g(x)=f(x)-\frac{f(b)-f(a)}{b-a}(x-a)-f(a).$$ Then $g(x)$ is continuous on $[a,b]$ and differentiable on $(a,b)$. Since $g(a)=g(b)=0$, by Rolle’s theorem there exists a number $c$ in $(a,b)$ such that $g’(c)=f’(c)-\frac{f(b)-f(a)}{b-a}=0$. Therefore, we have proved the following theorem.

Figure: the Mean Value Theorem.

Theorem. [Mean Value Theorem]
Let $f$ be continuous on $[a,b]$ and differentiable on $(a,b)$. Then there exists a number $c$ in $(a,b)$ such that
$$f’(c)=\frac{f(b)-f(a)}{b-a}.$$

The following example is an application of the Mean Value Theorem.

Example. Suppose that $f(0)=-3$ and $f’(x)\leq 5$ for all values of $x$. How large can $f(2)$ possibly be?

Solution. By the Mean Value Theorem, there exists a number $c$ in $(0,2)$ such that
$$f’(c)=\frac{f(2)-f(0)}{2-0}=\frac{f(2)+3}{2}.$$
Since $f’(c)\leq 5$,
\begin{align*}
f(2)&=2f’(c)-3\\
&\leq 2\cdot 5-3=7.
\end{align*}
Hence, $7$ is the largest possible value of $f(2)$.

Using Mean Value Theorem, one can prove the following theorem.

Theorem. If $f’(x)=0$ for all $x$ in the open interval $(a,b)$, then $f$ is constant on $(a,b)$.