# Group Theory 2: Preliminaries (Functions)

In my previous notes here, I mentioned logical symbols. The logical symbols I will use often are $\forall$, which means “for all”, “for any”, “for each”, or “for every” depending on the context; $\exists$, which means “there exists”; and $\ni$, which means “such that” (not to be confused with $\in$, which means “is an element of”). We also write s.t. for “such that.” There are also $\Longrightarrow$, which means “implies”, and $\Longleftrightarrow$, which means “if and only if.” These pretty much cover what we use most of the time.

Now let us review functions in a more formal way. Let $X$ and $Y$ be two non-empty sets. The Cartesian product $X\times Y$ of $X$ and $Y$ is defined as the set
$$X\times Y=\{(x,y): x\in X,\ y\in Y\}.$$
A subset $f$ of the Cartesian product $X\times Y$ (we write $f\subset X\times Y$) is called a graph from $X$ to $Y$. A graph $f\subset X\times Y$ is called a function from $X$ to $Y$ (we write $f: X\longrightarrow Y$) if whenever $(x,y_1),(x,y_2)\in f$, $y_1=y_2$. If $f: X\longrightarrow Y$ and $(x,y)\in f$, we also write $y=f(x)$. A function $f: X\longrightarrow Y$ is said to be one-to-one or injective if whenever $(x_1,y),(x_2,y)\in f$, $x_1=x_2$. This is equivalent to saying that $f(x_1)=f(x_2)$ implies $x_1=x_2$. A function $f: X\longrightarrow Y$ is said to be onto or surjective if $\forall y\in Y$ $\exists x\in X$ s.t. $(x,y)\in f$. A function $f: X\longrightarrow Y$ is said to be one-to-one and onto (or bijective) if it is both one-to-one and onto (both injective and surjective).
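These definitions can be tested directly on small finite sets, with a graph represented as a set of ordered pairs. The following is an illustrative sketch; the helper names (`is_function`, `is_injective`, `is_surjective`) are made up for these notes, and here we also require each $x\in X$ to appear exactly once, so that $f$ is defined on all of $X$.

```python
# A finite "graph" f ⊂ X × Y represented as a set of ordered pairs.

def is_function(f, X, Y):
    # each x in X appears exactly once as a first coordinate,
    # and every second coordinate lies in Y
    firsts = [x for (x, _) in f]
    return all(firsts.count(x) == 1 for x in X) and all(y in Y for (_, y) in f)

def is_injective(f):
    # no y is hit twice
    seconds = [y for (_, y) in f]
    return len(seconds) == len(set(seconds))

def is_surjective(f, Y):
    # every y in Y is hit
    return {y for (_, y) in f} == set(Y)

X, Y = {1, 2, 3}, {'a', 'b', 'c'}
f = {(1, 'a'), (2, 'b'), (3, 'c')}   # a bijection
g = {(1, 'a'), (2, 'a'), (3, 'b')}   # neither injective nor surjective

print(is_function(f, X, Y), is_injective(f), is_surjective(f, Y))
print(is_function(g, X, Y), is_injective(g), is_surjective(g, Y))
```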

Let $f: X\longrightarrow Y$ and $g: Y\longrightarrow Z$ be two functions. Then the composition or composite function $g\circ f: X\longrightarrow Z$ is defined by $(g\circ f)(x)=g(f(x))$ $\forall x\in X$. The function composition $\circ$ may be considered as an operation, and it is associative.

Lemma. If $h: X\longrightarrow Y$, $g: Y\longrightarrow Z$ and $f: Z\longrightarrow W$, then $f\circ(g\circ h)=(f\circ g)\circ h$.

Note that $\circ$ is not commutative i.e. it is not necessarily true that $f\circ g=g\circ f$ even when both $f\circ g$ and $g\circ f$ are defined.
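Both facts, associativity and the failure of commutativity, are easy to see with concrete functions. A small sketch (the particular functions $f$, $g$, $h$ are chosen just for illustration):

```python
# f∘g vs g∘f with concrete functions on the integers.
def f(x):
    return x + 1      # f: x ↦ x + 1

def g(x):
    return 2 * x      # g: x ↦ 2x

def h(x):
    return x * x      # h: x ↦ x^2

compose = lambda p, q: (lambda x: p(q(x)))   # (p∘q)(x) = p(q(x))

fg, gf = compose(f, g), compose(g, f)
print(fg(3), gf(3))   # 7 and 8, so f∘g ≠ g∘f

# associativity: f∘(g∘h) = (f∘g)∘h pointwise
assoc = all(compose(f, compose(g, h))(x) == compose(compose(f, g), h)(x)
            for x in range(-10, 11))
print(assoc)
```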

The following lemmas will be useful when we study group theory later.

Lemma. If both $f: X\longrightarrow Y$ and $g: Y\longrightarrow Z$ are one-to-one, so is $g\circ f: X\longrightarrow Z$.

Lemma. If both $f: X\longrightarrow Y$ and $g: Y\longrightarrow Z$ are onto, so is $g\circ f: X\longrightarrow Z$.

As an immediate consequence of combining these two lemmas, we obtain

Lemma. If both $f: X\longrightarrow Y$ and $g: Y\longrightarrow Z$ are bijective, so is $g\circ f: X\longrightarrow Z$.

If $f\subset X\times Y$, then the inverse graph $f^{-1}\subset Y\times X$ is defined by
$$f^{-1}=\{(y,x)\in Y\times X: (x,y)\in f\}.$$
If $f: X\longrightarrow Y$ is one-to-one and onto (bijective) then its inverse graph $f^{-1}$ is a function $f^{-1}: Y\longrightarrow X$. The inverse $f^{-1}$ is also one-to-one and onto.

Lemma. If $f: X\longrightarrow Y$ is a bijection, then $f\circ f^{-1}=\imath_Y$ and $f^{-1}\circ f=\imath_X$, where $\imath_X$ and $\imath_Y$ are the identity mappings of $X$ and $Y$, respectively.

# Analytic Continuation

The function $f(z)=\displaystyle\frac{1}{1+z}$ has an isolated singularity at $z=-1$. It has the Maclaurin series representation

$$f(z)=\sum_{n=0}^\infty(-1)^nz^n$$
for $|z|<1$. The power series $f_1(z)=\displaystyle\sum_{n=0}^\infty(-1)^nz^n$ converges only on the open unit disk $D_1:\ |z|<1$. For instance, the series diverges at $z=\frac{3}{2}i$, i.e. $f_1\left(\frac{3}{2}i\right)$ is not defined. The first 25 partial sums of the series $f_1\left(\frac{3}{2}i\right)$ are listed below, and they do not appear to approach any limit.

S[1] = 1.
S[2] = 1. - 1.500000000 I
S[3] = -1.250000000 - 1.500000000 I
S[4] = -1.250000000 + 1.875000000 I
S[5] = 3.812500000 + 1.875000000 I
S[6] = 3.812500000 - 5.718750000 I
S[7] = -7.578125000 - 5.718750000 I
S[8] = -7.578125000 + 11.36718750 I
S[9] = 18.05078125 + 11.36718750 I
S[10] = 18.05078125 - 27.07617188 I
S[11] = -39.61425781 - 27.07617188 I
S[12] = -39.61425781 + 59.42138672 I
S[13] = 90.13208008 + 59.42138672 I
S[14] = 90.13208008 - 135.1981201 I
S[15] = -201.7971802 - 135.1981201 I
S[16] = -201.7971802 + 302.6957703 I
S[17] = 455.0436554 + 302.6957703 I
S[18] = 455.0436554 - 682.5654831 I
S[19] = -1022.848225 - 682.5654831 I
S[20] = -1022.848225 + 1534.272337 I
S[21] = 2302.408505 + 1534.272337 I
S[22] = 2302.408505 - 3453.612758 I
S[23] = -5179.419137 - 3453.612758 I
S[24] = -5179.419137 + 7769.128706 I
S[25] = 11654.69306 + 7769.128706 I
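These partial sums are easy to regenerate. The following short sketch (written for these notes) recomputes $S_N=\sum_{n=0}^{N-1}(-1)^n z^n$ at $z=\frac{3}{2}i$ in floating-point arithmetic:

```python
# Partial sums of f_1(z) = sum (-1)^n z^n at z = (3/2)i,
# a point outside the disk of convergence |z| < 1.
z = 1.5j
S = []
s = 0
for n in range(25):
    s += (-1) ** n * z ** n
    S.append(s)

print(S[0])          # S[1] in the table: 1
print(S[24])         # S[25] in the table: about 11654.7 + 7769.1i
print(abs(S[24]))    # the magnitudes blow up
```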

Also shown below are the graphics of partial sums of the series $f_1\left(\frac{3}{2}i\right)$.

The first 10 partial sums

The first 20 partial sums

The first 30 partial sums

Let us expand $f(z)=\displaystyle\frac{1}{1+z}$ at $z=i$. Then we obtain
\begin{align*}
f(z)&=\frac{1}{1+z}\\
&=\frac{1}{1+i}\cdot\frac{1}{1+\frac{z-i}{1+i}}\\
&=\sum_{n=0}^\infty (-1)^n\frac{(z-i)^n}{(1+i)^{n+1}}
\end{align*}
for $|z-i|<\sqrt{2}$. Let $f_2(z)=\displaystyle\sum_{n=0}^\infty (-1)^n\frac{(z-i)^n}{(1+i)^{n+1}}$. This series converges only on the open disk $D_2:\ |z-i|<\sqrt{2}$, in particular at $z=\frac{3}{2}i$ and $f_2\left(\frac{3}{2}i\right)=f\left(\frac{3}{2}i\right)=\frac{4}{13}-\frac{6}{13}i$. The first 25 partial sums of the series $f_2\left(\frac{3}{2}i\right)$ are listed below and it appears that they are approaching a number. In fact, they are approaching the complex number $f\left(\frac{3}{2}i\right)=\frac{4}{13}-\frac{6}{13}i$.

S[1] = 0.5000000000 - 0.5000000000 I
S[2] = 0.2500000000 - 0.5000000000 I
S[3] = 0.3125000000 - 0.4375000000 I
S[4] = 0.3125000000 - 0.4687500000 I
S[5] = 0.3046875000 - 0.4609375000 I
S[6] = 0.3085937500 - 0.4609375000 I
S[7] = 0.3076171875 - 0.4619140625 I
S[8] = 0.3076171875 - 0.4614257812 I
S[9] = 0.3077392578 - 0.4615478516 I
S[10] = 0.3076782227 - 0.4615478516 I
S[11] = 0.3076934814 - 0.4615325928 I
S[12] = 0.3076934814 - 0.4615402222 I
S[13] = 0.3076915741 - 0.4615383148 I
S[14] = 0.3076925278 - 0.4615383148 I
S[15] = 0.3076922894 - 0.4615385532 I
S[16] = 0.3076922894 - 0.4615384340 I
S[17] = 0.3076923192 - 0.4615384638 I
S[18] = 0.3076923043 - 0.4615384638 I
S[19] = 0.3076923080 - 0.4615384601 I
S[20] = 0.3076923080 - 0.4615384620 I
S[21] = 0.3076923075 - 0.4615384615 I
S[22] = 0.3076923077 - 0.4615384615 I
S[23] = 0.3076923077 - 0.4615384616 I
S[24] = 0.3076923077 - 0.4615384615 I
S[25] = 0.3076923077 - 0.4615384615 I
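These convergent partial sums can likewise be regenerated; the sketch below sums the first 25 terms of $f_2$ at $z=\frac{3}{2}i$ and compares with the exact value $\frac{4}{13}-\frac{6}{13}i$:

```python
# Partial sums of f_2(z) = sum (-1)^n (z-i)^n/(1+i)^(n+1) at z = (3/2)i,
# a point inside the disk of convergence |z - i| < sqrt(2).
z = 1.5j
s = 0
for n in range(25):
    s += (-1) ** n * (z - 1j) ** n / (1 + 1j) ** (n + 1)

exact = 4 / 13 - 6j / 13    # f(3i/2) = 1/(1 + 3i/2)
print(s)                    # ≈ 0.3076923077 - 0.4615384615i
print(exact)
```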

The following graphics shows that the real parts of the partial sums of the series $f_2\left(\frac{3}{2}i\right)$ are approaching $\frac{4}{13}$ (blue line).

The real parts of the first 25 partial sums

The next graphics shows that the imaginary parts of the partial sums of the series $f_2\left(\frac{3}{2}i\right)$  are approaching $-\frac{6}{13}$ (blue line).

The imaginary parts of the first 25 partial sums

Also shown below is the graphics of the first 25 partial sums of the series $f_2\left(\frac{3}{2}i\right)$. They are approaching the complex number $f\left(\frac{3}{2}i\right)=\frac{4}{13}-\frac{6}{13}i$ (the intersection of horizontal and vertical blue lines).

The first 25 partial sums

Note that $f_1(z)=f_2(z)$ on $D_1\cap D_2$. Define $F(z)$ as

$$F(z)=\left\{\begin{array}{ccc} f_1(z) & \mbox{if} & z\in D_1,\\ f_2(z) & \mbox{if} & z\in D_2. \end{array}\right.$$

Analytic continuation

Then $F(z)$ is analytic in $D_1\cup D_2$. The function $F(z)$ is called the analytic continuation into $D_1\cup D_2$ of either $f_1$ or $f_2$, and $f_1$ and $f_2$ are called elements of $F$. The function $f_1(z)$ can be continued analytically to the punctured plane $\mathbb{C}\setminus\{-1\}$, and the function $f(z)=\frac{1}{1+z}$ is indeed the analytic continuation into $\mathbb{C}\setminus\{-1\}$ of $f_1$. In general, whenever an analytic continuation exists, it is unique.

# Functional Analysis 1: Metric Spaces

This is the first of a series of lecture notes I intend to write for a graduate Functional Analysis course I am teaching in the Fall.

What is functional analysis? Functional analysis is an abstract branch of mathematics, especially of analysis, concerned with the study of vector spaces of functions. These vector spaces of functions arise naturally when we study linear differential equations as solutions of a linear differential equation form a vector space. Functional analytic methods and results are important in various fields of mathematics (for example, differential geometry, ergodic theory, integral geometry, noncommutative geometry, partial differential equations, probability, representation theory etc.) and its applications, in particular, in economics, finance, quantum mechanics, quantum field theory, and statistical physics. Topics in this introductory functional analysis course include metric spaces, Banach spaces, Hilbert spaces, bounded linear operators, the spectral theorem, and unbounded linear operators.

While functional analysis is a branch of analysis, due to its nature linear algebra is heavily used. So, it would be a good idea to brush up on linear algebra among other things you need to study functional analysis.

In functional analysis, we study analysis on an abstract space $X$ rather than the familiar $\mathbb{R}$ or $\mathbb{C}$. In order to consider fundamental notions in analysis such as limits and convergence, we need to have distance defined on $X$ so that we can speak of nearness or closeness. A distance on $X$ can be defined as a function, called a distance function or a metric, $d: X\times X\longrightarrow\mathbb{R}^+\cup\{0\}$ satisfying the following properties:

(M1) $d(x,y)=0$ if and only if $x=y$.

(M2) $d(x,y)=d(y,x)$ (Symmetry)

(M3) $d(x,y)\leq d(x,z)+d(z,y)$ (Triangle Inequality)

Here $\mathbb{R}^+$ denotes the set of all positive real numbers. You can easily see how mathematicians came up with this definition of a metric. (M1)-(M3) are the properties that the familiar distance on $\mathbb{R}$, $d(x,y)=|x-y|$ satisfies. The space $X$ with a metric $d$ is called a metric space and we usually write it as $(X,d)$.

Example. Let $x=(\xi_1,\cdots,\xi_n), y=(\eta_1,\cdots,\eta_n)\in\mathbb{R}^n$. Define
$$d(x,y)=\sqrt{(\xi_1-\eta_1)^2+\cdots+(\xi_n-\eta_n)^2}.$$
Then $d$ is a metric on $\mathbb{R}^n$ called the Euclidean metric.

This time, let $x=(\xi_1,\cdots,\xi_n), y=(\eta_1,\cdots,\eta_n)\in\mathbb{C}^n$ and define
$$d(x,y)=\sqrt{|\xi_1-\eta_1|^2+\cdots+|\xi_n-\eta_n|^2}.$$
Then $d$ is a metric on $\mathbb{C}^n$ called the Hermitian metric. Here $|\xi_i-\eta_i|^2=(\xi_i-\eta_i)\overline{(\xi_i-\eta_i)}$.

Of course these are pretty familiar examples. If these were the only examples, there would be no point in considering an abstract space. In fact, the abstraction allows us to discover other examples of metrics that are not so intuitive.

Example. Let $X$ be the set of all bounded sequences of complex numbers
$$X=\{(\xi_j): \xi_j\in\mathbb{C},\ \sup_{j\in\mathbb{N}}|\xi_j|<\infty\}.$$
For $x=(\xi_j), y=(\eta_j)\in X$, define
$$d(x,y)=\sup_{j\in\mathbb{N}}|\xi_j-\eta_j|.$$
Then $d$ is a metric on $X$. The metric space $(X,d)$ is denoted by $\ell^\infty$.

Example. Let $X$ be the set of continuous real-valued functions defined on the closed interval $[a,b]$. Let $x, y:[a,b]\longrightarrow\mathbb{R}$ be continuous and define
$$d(x,y)=\max_{t\in [a,b]}|x(t)-y(t)|.$$
Then $d$ is a metric on $X$. The metric space $(X,d)$ is denoted by $\mathcal{C}[a,b]$.
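To make these metrics concrete, here is a small numerical sketch. Infinite sequences are truncated and the maximum over $[a,b]$ is taken over a grid, so the computed values only approximate the sup and max metrics; the particular sequences and functions are chosen just for illustration.

```python
import math

# sup metric on (truncations of) bounded sequences, as in l^infinity
def d_sup(x, y):
    return max(abs(xi - yi) for xi, yi in zip(x, y))

x = [(-1) ** j for j in range(100)]       # 1, -1, 1, -1, ...
y = [1 / (j + 1) for j in range(100)]     # 1, 1/2, 1/3, ...
print(d_sup(x, y))    # 3/2, attained at the second term |-1 - 1/2|

# max metric on C[a, b], approximated by sampling a uniform grid
def d_max(x, y, a, b, n=10001):
    ts = [a + (b - a) * k / (n - 1) for k in range(n)]
    return max(abs(x(t) - y(t)) for t in ts)

print(d_max(math.sin, math.cos, 0, 2 * math.pi))   # ≈ sqrt(2)
```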

In a metric space $(X,d)$, nearness or closeness can be described by a neighbourhood called an $\epsilon$-ball ($\epsilon>0$) centered at $x\in X$
$$B(x,\epsilon)=\{y\in X: d(x,y)<\epsilon\}.$$
These $\epsilon$-balls form a base for the topology on $X$, called the topology on $X$ induced by the metric $d$.

Next time, we will discuss two more examples of metric spaces, $\ell^p$ and $L^p$. These examples are particularly important in functional analysis; they are Banach spaces, and for $p=2$ they are Hilbert spaces.

# Group Theory 1: An Overview

This is the first of a series of lecture notes on group theory I intend to write for an undergraduate Modern Algebra I course I am teaching in the fall semester. Before we begin to discuss the subject, I would like to give an overview of what we study in group theory, or more generally in algebra.

Algebra (as a subject) is the study of algebraic structures. So, what is an algebraic structure? An algebraic structure, or an algebra for short, $\underline{A}$ is a non-empty set $A$ with a binary operation $f$. $\underline{A}$ is usually written as the ordered pair
$$\underline{A}=(A,f).$$
A binary operation $f$ on a set $A$ is a function $f: A\times A\longrightarrow A$. An example of a binary operation is addition $+$ on the set of integers $\mathbb{Z}$: $+$ is a function $+:\mathbb{Z}\times\mathbb{Z}\longrightarrow\mathbb{Z}$ defined by $+(1,1)=2$, $+(1,2)=3$, and so on. We usually write $+(1,1)=2$ as $1+1=2$. In general, one may consider an $n$-ary operation $f:\prod_{i=1}^n A\longrightarrow A$, where $\prod_{i=1}^n A$ denotes the product of $n$ copies of $A$, $A\times A\times\cdots\times A$.

There are many different kinds of algebras. Let me mention some algebras with a binary operation here. For starters, $(A,\cdot)$, a non-empty set $A$ with a binary operation $\cdot$, is called a groupoid. A groupoid $(A,\cdot)$ satisfying the associative law
$$(ab)c=a(bc)$$
for any $a,b,c\in A$ is called a semigroup. If the semigroup has an identity element $e\in A$, i.e.
$$ae=ea=a$$
for any $a\in A$, it is called a monoid. If for every element $a$ of the monoid $A$, there exists an inverse element $a^{-1}\in A$ such that $aa^{-1}=a^{-1}a=e$, the monoid is called a group. A group $(A,\cdot)$ with commutative law i.e.
$$ab=ba,$$
for any $a,b\in A$ is called an abelian group, named after the Norwegian mathematician Niels Abel. Note that the inverse ${}^{-1}$ can be regarded as an operation on $A$, a unary operation ${}^{-1}: A\longrightarrow A$ defined by ${}^{-1}(a)=a^{-1}$ for each $a\in A$. The identity element $e$ can also be regarded as an operation, a nullary operation $e:\{\varnothing\}\longrightarrow A$. Thus, formally a group can be written as $(A,\cdot,{}^{-1},e)$, a quadruple of a non-empty set, a binary operation, a unary operation, and a nullary operation.
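The group axioms can be verified by brute force for a small example such as $(\mathbb{Z}_6,+\bmod 6)$. This checking code is a sketch written for these notes:

```python
# Brute-force check that (Z_6, + mod 6) is an abelian group.
n = 6
A = list(range(n))
op = lambda a, b: (a + b) % n

associative = all(op(op(a, b), c) == op(a, op(b, c))
                  for a in A for b in A for c in A)
e = 0                                     # identity element
identity = all(op(a, e) == a == op(e, a) for a in A)
inv = {a: (-a) % n for a in A}            # inverses: a^{-1} = -a mod n
inverses = all(op(a, inv[a]) == e == op(inv[a], a) for a in A)
commutative = all(op(a, b) == op(b, a) for a in A for b in A)

print(associative, identity, inverses, commutative)   # all True
```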

Now we know what a group is, and apparently group theory is the study of groups. But what exactly are we studying there? What I am about to say is not really limited to group theory but applies to studying other algebraic structures as well. Briefly, there are two main objectives in studying groups. One is the classification of groups. This becomes particularly interesting for groups of finite order. Here the order of a group means the number of elements of the group. We would like to answer the question “how many different groups of order $n$ are there for each $n$, and what are they?” The classification gets harder as $n$ gets larger. There are groups of the same order that appear to be different, but don’t be deceived by appearances: they may actually be the same group. What do we mean by same here? We say two groups of the same order are the same if there is a one-to-one and onto map (a bijection) between them that preserves the operations. Such a map is called an isomorphism. It turns out that if a map $\psi: G\longrightarrow G'$ from a group $G$ to another group $G'$ preserves the binary operation, it automatically preserves the unary and nullary operations. Here, preserving the binary operation means
$$\psi(ab)=\psi(a)\psi(b)$$
for any $a,b\in G$. If you have taken linear algebra (and I believe you have), you will notice the analogy with a linear map, which is a map that preserves vector addition and scalar multiplication. A map $\psi: G\longrightarrow G'$ which preserves the binary operation is called a homomorphism. If a homomorphism $\psi: G\longrightarrow G'$ is one-to-one and onto, it is an isomorphism. An isomorphism $\psi: G\longrightarrow G$ from a group $G$ onto itself is called an automorphism. In group theory, if there is an isomorphism from one group to another, we do not distinguish them, no matter how different they may appear. The other objective is to discover new groups from old groups. Some of the new groups may be smaller in size than the old ones, meaning they have fewer elements, i.e. smaller order; examples are subgroups and quotient groups. Some of the new groups are larger in size than the old ones; an example is direct products. Subgroups, quotient groups (also called factor groups), and direct products are the things we will study as means to get new groups from old ones.
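For instance, $\psi(k)=i^k$ is an isomorphism from $(\mathbb{Z}_4,+\bmod 4)$ onto the multiplicative group $\{1,i,-1,-i\}$ of fourth roots of unity. The check below is a small sketch using complex arithmetic, confirming that $\psi$ preserves the binary operation and is bijective:

```python
# psi: Z_4 -> {1, i, -1, -i}, psi(k) = i^k
psi = lambda k: 1j ** k
Z4 = range(4)

# homomorphism: psi(a + b mod 4) = psi(a) * psi(b)
homomorphism = all(abs(psi((a + b) % 4) - psi(a) * psi(b)) < 1e-12
                   for a in Z4 for b in Z4)
# bijection: the four images 1, i, -1, -i are distinct
bijective = len({psi(k) for k in Z4}) == 4

print(homomorphism, bijective)   # True True
```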

Group theory has significance in geometry, where symmetry plays an important role. There are different types of symmetries: reflections, rotations, and translations. An interesting connection between geometry and group theory is that these symmetries form groups (symmetry groups). The most general symmetry group of finite order is called a symmetric group. In mathematics, embedding theorems are conceptually and philosophically important (though they may be practically less important). When we study mathematics, we often feel that the structures we study are highly abstract, as if they exist only in our consciousness and not in the physical world. An embedding theorem tells us that those abstract structures are in fact substructures of a larger structure with which we are familiar. This suggests that we are not making up abstract mathematical structures but merely discovering structures that already exist in the universe. This viewpoint is called Mathematical Platonism. It turns out that there is an embedding theorem in finite group theory, namely that every group of finite order is a subgroup of a symmetric group. This embedding theorem is called Cayley's theorem. It means that the study of finite groups boils down to studying symmetric groups.

Remark. There is a mathematical structure called an algebra over a field $K$ (usually $K=\mathbb{R}$ or $K=\mathbb{C}$). An algebra $\mathcal{A}$ over a field $K$ is a vector space over $K$ with a product $\cdot:\mathcal{A}\times\mathcal{A}\longrightarrow\mathcal{A}$ which is distributive over addition:
$$a(b+c)=ab+ac,\ (a+b)c=ac+bc,\ \forall\ a,b,c\in\mathcal{A}.$$
(Here, the symbol $\forall$ is a logical symbol which means “for each”, “for any”, “for every”, or “for all” depending on the context. I will talk more about logical symbols next time as I will use them often.) Note that an algebra $\mathcal{A}$ over a field $K$ is not an algebra in the sense defined above, because the scalar product is not an operation on $\mathcal{A}$. The scalar product is in fact an action of the multiplicative group $K\setminus\{0\}$ on $\mathcal{A}$. Algebras over a field $K$ are important structures in functional analysis.

# Applications of Residues: Evaluation of Improper Integrals 2

In this lecture, we study improper integrals of the form $\int_{-\infty}^\infty f(x)\sin ax\,dx$ or $\int_{-\infty}^\infty f(x)\cos ax\,dx$, where $a$ denotes a positive constant. These integrals appear in Fourier analysis. Assume that $f(x)=\frac{p(x)}{q(x)}$, where $p(x)$ and $q(x)$ are polynomials with real coefficients and no factors in common. Also, we assume that $q(z)$ has no real zeros. We discuss how to evaluate improper integrals of the above type through the following example.

Example. Evaluate $\int_{-\infty}^\infty\frac{\cos 3x}{(x^2+1)^2}dx$.

Solution. Let $f(z)=\frac{1}{(z^2+1)^2}$. Then $f(z)e^{3iz}$ is analytic everywhere on and above the real axis except at $z=i$. Let $C_R$ be the upper semi-circle centered at the origin with radius $R>1$. Then by Cauchy’s Residue Theorem,
$$\int_{-R}^R\frac{e^{i3x}}{(x^2+1)^2}dx=2\pi i B_1-\int_{C_R}f(z)e^{i3z}dz,$$
where $B_1=\mathrm{Res}_{z=i}[f(z)e^{i3z}]$. $f(z)e^{i3z}$ can be written as
$$f(z)e^{i3z}=\frac{\phi(z)}{(z-i)^2},$$
where $\phi(z)=\frac{e^{i3z}}{(z+i)^2}$. Since $z=i$ is a pole of order $2$ of $f(z)$,
$$B_1=\phi'(i)=\frac{1}{ie^3}.$$
On $C_R$, $|z|=R$ and so by the triangle inequality we obtain
$$|z^2+1|\geq ||z|^2-1|=R^2-1$$
and thereby
$$|f(z)|\leq\frac{1}{(R^2-1)^2}.$$
$|e^{i3z}|=e^{-3y}\leq 1$ for all $y\geq 0$. Hence, we find that
$$\left|\mathrm{Re}\int_{C_R}f(z)e^{i3z}dz\right|\leq\left|\int_{C_R}f(z)e^{i3z}dz\right|\leq\frac{\pi R}{(R^2-1)^2}\to 0$$
as $R\to\infty$. Therefore,
$$\int_{-\infty}^\infty\frac{e^{i3x}}{(x^2+1)^2}dx=\frac{2\pi}{e^3},$$
and taking the real part,
$$\int_{-\infty}^\infty\frac{\cos 3x}{(x^2+1)^2}dx=\frac{2\pi}{e^3}.$$
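The value $2\pi/e^3\approx 0.3128$ can be checked numerically. The sketch below applies composite Simpson's rule on a large finite interval; truncating the tails at $|x|=50$ introduces an error on the order of $10^{-6}$.

```python
import math

# Numerical check: integral of cos(3x)/(x^2+1)^2 over the real line
# should be close to 2*pi/e^3.
def integrand(x):
    return math.cos(3 * x) / (x * x + 1) ** 2

a, b, n = -50.0, 50.0, 200000        # n must be even for Simpson's rule
h = (b - a) / n
total = integrand(a) + integrand(b)
for k in range(1, n):
    total += (4 if k % 2 else 2) * integrand(a + k * h)
approx = total * h / 3

print(approx)                    # ≈ 0.3128
print(2 * math.pi / math.e ** 3)
```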

# Applications of Residues: Evaluation of Improper Integrals

Recall the definition of improper integrals in calculus:
\begin{align*}
\int_0^\infty f(x)dx&=\lim_{R\to\infty}\int_0^R f(x)dx,\\
\int_{-\infty}^\infty f(x)dx&=\lim_{R_1\to\infty}\int_{-R_1}^0 f(x)dx+\lim_{R_2\to\infty}\int_0^{R_2}f(x)dx.
\end{align*}
The Cauchy Principal Value (P.V.) is given by
$$\mathrm{P.V.}\int_{-\infty}^\infty f(x)dx=\lim_{R\to\infty}\int_{-R}^R f(x)dx.$$
The Cauchy principal value of an improper integral is not necessarily the same as the improper integral. For example,
$$\mathrm{P.V.}\int_{-\infty}^\infty xdx=\lim_{R\to\infty}\int_{-R}^R xdx=0,$$
while
$$\int_{-\infty}^\infty xdx=\lim_{R_1\to\infty}\int_{-R_1}^0xdx+\lim_{R_2\to\infty}\int_0^{R_2}xdx=-\infty+\infty$$
is undefined. In general, if $\int_{-\infty}^\infty f(x)dx$ converges, then $\mathrm{P.V.}\int_{-\infty}^\infty f(x)dx$ exists and the two coincide, but the converse need not be true. Suppose that $f(x)$ is an even function. Then
\begin{align*}
\int_0^R f(x)dx&=\frac{1}{2}\int_{-R}^R f(x)dx,\\
\int_{-R_1}^0 f(x)dx&=\int_0^{R_1}f(x) dx.
\end{align*}
So,
$$\mathrm{P.V.}\int_{-\infty}^\infty f(x)dx=\int_{-\infty}^\infty f(x)dx=2\int_0^\infty f(x)dx.$$

Let us consider an even function $f(x)$ of the form $f(x)=\frac{p(x)}{q(x)}$, where $p(x)$, $q(x)$ are polynomials with real coefficients and no factors in common. Furthermore, we assume that $q(z)$ has no real zeros but has at least one zero above the real axis. Let us consider a positively oriented upper semicircle $C_R$ whose radius $R$ is large enough to contain all the zeros above the real axis, as shown in the figure below.

$C_R$ together with the interval $[-R,R]$ forms a positively oriented simple closed contour. Then by Cauchy’s Residue Theorem we have
$$\int_{-R}^R f(x)dx+\int_{C_R} f(z)dz=2\pi i\sum_{k=1}^n\mathrm{Res}_{z=z_k}f(z),$$
i.e.
$$\int_{-R}^R f(x)dx=2\pi i\sum_{k=1}^n\mathrm{Res}_{z=z_k}f(z)-\int_{C_R} f(z)dz.$$
If $\lim_{R\to\infty}\int_{C_R} f(z)dz=0$ then
$$\mathrm{P.V.}\int_{-\infty}^\infty f(x)dx=2\pi i\sum_{k=1}^n\mathrm{Res}_{z=z_k}f(z).$$
If in addition $f(x)$ is even,
$$\int_{-\infty}^\infty f(x)dx=2\pi i\sum_{k=1}^n\mathrm{Res}_{z=z_k}f(z)$$
or
$$\int_0^\infty f(x)dx=\pi i\sum_{k=1}^n\mathrm{Res}_{z=z_k}f(z).$$

Example. Let us consider the improper integral
$$\int_0^\infty\frac{x^2}{x^6+1}dx.$$
Let $f(z)=\frac{z^2}{z^6+1}$. It has isolated singularities at the zeros of $z^6+1$ and is analytic everywhere else. $z^6=-1$ has solutions (the sixth roots of $-1$)
$$c_k=\exp\left[i\left(\frac{\pi}{6}+\frac{2k\pi}{6}\right)\right],\ k=0,1,\cdots,5.$$
The first three roots
$$c_0=e^{i\frac{\pi}{6}},\ c_1=i,\ c_2=e^{i\frac{5\pi}{6}}$$
lie in the upper half plane. Let us consider a positively oriented upper semicircle $C_R$ whose radius $R$ is greater than $1$.

Then
$$\int_{-R}^Rf(x)dx=2\pi i(B_0+B_1+B_2)-\int_{C_R}f(z)dz,$$
where $B_k$ is the residue of $f(z)$ at $c_k$, $k=0,1,2$. $B_k$ can be found as we studied here
$$B_k=\mathrm{Res}_{z=c_k}\frac{z^2}{z^6+1}=\frac{c_k^2}{6c_k^5}=\frac{1}{6c_k^3},\ k=0,1,2.$$
Thus, we obtain
$$2\pi i(B_0+B_1+B_2)=2\pi i\left(\frac{1}{6i}-\frac{1}{6i}+\frac{1}{6i}\right)=\frac{\pi}{3}$$
and hence,
$$\int_{-R}^R f(x)dx=\frac{\pi}{3}-\int_{C_R}f(z)dz.$$
On $C_R$, $|z|=R$ so
$$|z^6+1|\geq ||z|^6-1|=|R^6-1|=R^6-1$$
and thereby we obtain
$$|f(z)|\leq\frac{R^2}{R^6-1}.$$
Since the length of $C_R$ is $\pi R$,
$$\left|\int_{C_R} f(z)dz\right|\leq\frac{R^2}{R^6-1}\cdot\pi R\to 0$$
as $R\to\infty$. Hence,
$$\mathrm{P.V.}\int_{-\infty}^\infty\frac{x^2}{x^6+1}dx=\lim_{R\to\infty}\int_{-R}^R\frac{x^2}{x^6+1}dx=\frac{\pi}{3}.$$
Since the integrand is even,
$$\int_{-\infty}^\infty\frac{x^2}{x^6+1}dx=\frac{\pi}{3}$$
and
$$\int_0^\infty\frac{x^2}{x^6+1}dx=\frac{\pi}{6}.$$
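A quick numerical sanity check: the substitution $u=x^3$ gives $\frac{1}{3}\int_0^\infty\frac{du}{u^2+1}=\frac{\pi}{6}$, and direct quadrature agrees. The sketch below truncates the integral at $x=100$, which costs only about $10^{-7}$.

```python
import math

# Numerical check: integral from 0 to infinity of x^2/(x^6+1) dx = pi/6.
def integrand(x):
    return x * x / (x ** 6 + 1)

a, b, n = 0.0, 100.0, 200000         # composite Simpson's rule, n even
h = (b - a) / n
total = integrand(a) + integrand(b)
for k in range(1, n):
    total += (4 if k % 2 else 2) * integrand(a + k * h)
approx = total * h / 3

print(approx, math.pi / 6)   # both ≈ 0.5236
```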

# Zeros and Poles

Zeros and poles are closely related, and their relationship may be used to calculate residues. First we introduce two theorems without proof. (Their proofs can be found, for instance, in [1].)

Theorem 1. A function $f$ that is analytic at a point $z_0$ has a zero of order $m$ there if and only if there is a function $g$, which is analytic and nonzero at $z_0$, such that
$$f(z)=(z-z_0)^mg(z).$$

Theorem 2. Suppose that:

(i) two functions $p$ and $q$ are analytic at a point $z_0$;

(ii) $p(z_0)\ne 0$ and $q$ has a zero of order $m$ at $z_0$.

Then $\frac{p(z)}{q(z)}$ has a pole of order $m$ at $z_0$.

Now we discuss our main theorem in this lecture.

Theorem. Let two functions $p$ and $q$ be analytic at a point $z_0$. If
$$p(z_0)\ne 0,\ q(z_0)=0,\ \mbox{and}\ q'(z_0)\ne 0,$$
then $z_0$ is a simple pole of $\frac{p(z)}{q(z)}$ and
$$\mathrm{Res}_{z=z_0}\frac{p(z)}{q(z)}=\frac{p(z_0)}{q'(z_0)}.$$

Proof. From the conditions, we see that $q(z)$ has a zero of order $1$, so by theorem 1 it can be written as
$$q(z)=(z-z_0)g(z)$$
where $g(z)$ is analytic at $z=z_0$ and $g(z_0)\ne 0$. This can in fact be readily seen without quoting theorem 1. Since $q(z)$ is analytic at $z_0$, it can be written as a Taylor series expansion
\begin{align*}
q(z)&=q(z_0)+\frac{q'(z_0)}{1!}(z-z_0)+\frac{q^{\prime\prime}(z_0)}{2!}(z-z_0)^2+\cdots\\
&=\frac{q'(z_0)}{1!}(z-z_0)+\frac{q^{\prime\prime}(z_0)}{2!}(z-z_0)^2+\cdots\\
&=(z-z_0)\left\{\frac{q'(z_0)}{1!}+\frac{q^{\prime\prime}(z_0)}{2!}(z-z_0)+\cdots\right\}.
\end{align*}
Set $g(z)=\frac{q'(z_0)}{1!}+\frac{q^{\prime\prime}(z_0)}{2!}(z-z_0)+\cdots$. Then $g(z)$ is analytic at $z=z_0$ and $g(z_0)=q'(z_0)\ne 0$.

Now, theorem 2 implies that $\frac{p(z)}{q(z)}$ has a simple pole at $z=z_0$. This can also be seen without quoting theorem 2: since $\frac{p(z)}{q(z)}=\frac{p(z)/g(z)}{z-z_0}$ and $\frac{p(z)}{g(z)}$ is analytic and nonzero at $z=z_0$, $\frac{p(z)}{q(z)}$ has a simple pole at $z=z_0$ as we studied here. Thus,
$$\mathrm{Res}_{z=z_0}\frac{p(z)}{q(z)}=\frac{p(z_0)}{g(z_0)}=\frac{p(z_0)}{q'(z_0)}.$$
This completes the proof.

Example. Find the residue of the function
$$f(z)=\frac{z}{z^4+4}$$
at the isolated singularity $z_0=\sqrt{2}e^{\frac{i\pi}{4}}=1+i$.

Solution. Let $p(z)=z$ and $q(z)=z^4+4$. Then $p(z_0)=p(1+i)=1+i\ne 0$, $q(z_0)=0$, and $q'(z_0)=4z_0^3=4(1+i)^3\ne 0$. Hence, $f(z)$ has a simple pole at $z_0$. The residue $B_0$ is found by
$$B_0=\mathrm{Res}_{z=z_0}f(z)=\frac{p(z_0)}{q'(z_0)}=\frac{z_0}{4z_0^3}=\frac{1}{4z_0^2}=-\frac{i}{8}.$$
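Both sides of the residue formula can be compared numerically. The sketch below also approximates the residue as $(z-z_0)f(z)$ for $z$ close to $z_0$, which is valid at a simple pole; the step size $10^{-7}$ is an arbitrary illustrative choice.

```python
# Residue of f(z) = z/(z^4 + 4) at the simple pole z0 = 1 + i.
z0 = 1 + 1j
f = lambda z: z / (z ** 4 + 4)

formula = z0 / (4 * z0 ** 3)          # p(z0)/q'(z0) = 1/(4 z0^2)
eps = 1e-7
numeric = eps * f(z0 + eps)           # (z - z0) f(z) just off the pole

print(formula)   # ≈ -i/8
print(numeric)
```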

References:

[1] James Brown and Ruel Churchill, Complex Variables and Applications, 8th Edition, McGraw-Hill, 2008

# Residues at Poles

When $f(z)$ has a pole of order $m$ at $z_0$, we may be able to find the residue of $f(z)$ at $z_0$ without expanding $f(z)$ into a Laurent series at $z=z_0$. This gives a great computational advantage.

Suppose that $z_0$ is a pole of order $m$ of $f(z)$. Then $f(z)$ has a Laurent series expansion
$$f(z)=\sum_{n=0}^\infty a_n(z-z_0)^n+\frac{b_1}{z-z_0}+\frac{b_2}{(z-z_0)^2}+\cdots+\frac{b_m}{(z-z_0)^m}\ (b_m\ne 0)$$
valid in a punctured disk $0<|z-z_0|<R$.
Define
$$\phi(z)=\left\{\begin{array}{ccc} (z-z_0)^mf(z) & \mbox{if} & z\ne z_0,\\ b_m & \mbox{if} & z=z_0. \end{array}\right.$$
Then $\phi(z)$ has a power series representation
$$\phi(z)=b_m+b_{m-1}(z-z_0)+\cdots+b_2(z-z_0)^{m-2}+b_1(z-z_0)^{m-1}+\sum_{n=0}^\infty a_n(z-z_0)^{m+n},$$
for $|z-z_0|<R$. That is, $\phi(z)$ is analytic at $z=z_0$ and in the disk $|z-z_0|<R$. We find that
$$b_1=\left\{\begin{array}{ccc}\frac{\phi^{(m-1)}(z_0)}{(m-1)!} & \mbox{if} & m\geq 2,\\ \phi(z_0) & \mbox{if} & m=1. \end{array}\right.$$

Conversely, suppose that $f(z)$ can be written in the form
$$f(z)=\frac{\phi(z)}{(z-z_0)^m},$$
where $\phi(z)$ is analytic and nonzero at $z=z_0$. $\phi(z)$ has a Taylor series expansion at $z_0$
\begin{align*}
\phi(z)=&\phi(z_0)+\frac{\phi'(z_0)}{1!}(z-z_0)+\frac{\phi^{\prime\prime}(z_0)}{2!}(z-z_0)^2\\
&+\cdots+\frac{\phi^{(m-1)}(z_0)}{(m-1)!}(z-z_0)^{m-1}+\sum_{n=m}^\infty\frac{\phi^{(n)}(z_0)}{n!}(z-z_0)^n
\end{align*}
in some neighbourhood $|z-z_0|<\epsilon$. Since $\phi(z_0)\ne 0$, $z_0$ is a pole of order $m$ of $f(z)$. Clearly, we have
$$\mathrm{Res}_{z=z_0}f(z)=\left\{\begin{array}{ccc} \phi(z_0) & \mbox{if} & m=1,\\ \frac{\phi^{(m-1)}(z_0)}{(m-1)!} & \mbox{if} & m\geq 2. \end{array}\right.$$
Therefore, the following theorem holds.

Theorem. An isolated singularity $z_0$ of a function $f$ is a pole of order $m$ if and only if $f(z)$ can be written as
$$f(z)=\frac{\phi(z)}{(z-z_0)^m},$$
where $\phi(z)$ is analytic and nonzero at $z=z_0$. Moreover,
$$\mathrm{Res}_{z=z_0}f(z)=\left\{\begin{array}{ccc} \phi(z_0) & \mbox{if} & m=1,\\ \frac{\phi^{(m-1)}(z_0)}{(m-1)!} & \mbox{if} & m\geq 2. \end{array}\right.$$

Example. $f(z)=\frac{z+1}{z^2+9}$ has an isolated singularity at $z=3i$. $f(z)$ can be written as
$$f(z)=\frac{\frac{z+1}{z+3i}}{z-3i}.$$
Then $\phi(z)=\frac{z+1}{z+3i}$ is analytic at $z=3i$ and $\phi(3i)=\frac{3-i}{6}\ne 0$. Hence, $z=3i$ is a simple pole of $f(z)$ and
$$\mathrm{Res}_{z=3i}f(z)=\phi(3i)=\frac{3-i}{6}.$$
$z=-3i$ is also a simple pole of $f(z)$ and
$$\mathrm{Res}_{z=-3i}f(z)=\frac{3+i}{6}.$$

Example. Let us consider $f(z)=\frac{z^3+2z}{(z-i)^3}$. $\phi(z)=z^3+2z$ is analytic at $z=i$ and $\phi(i)=i\ne 0$. Hence, $z=i$ is a pole of order $3$ and
$$b_1=\mathrm{Res}_{z=i}f(z)=\frac{\phi^{\prime\prime}(i)}{2!}=3i.$$
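The residue $3i$ can also be recovered numerically from the defining contour integral $\frac{1}{2\pi i}\oint f(z)\,dz$ over a small circle around $z=i$; the trapezoid rule on a circle is extremely accurate for this purpose. A sketch (the radius $\frac{1}{2}$ and node count are arbitrary choices):

```python
import cmath
import math

# Residue of f(z) = (z^3 + 2z)/(z - i)^3 at z = i via
# (1/(2 pi i)) * contour integral over the circle |z - i| = 1/2.
f = lambda z: (z ** 3 + 2 * z) / (z - 1j) ** 3

N, r = 4096, 0.5
total = 0.0
for k in range(N):
    t = 2 * math.pi * k / N
    w = cmath.exp(1j * t)
    z = 1j + r * w                        # point on the circle
    dz = 1j * r * w * (2 * math.pi / N)   # z'(t) dt for this node
    total += f(z) * dz
residue = total / (2j * math.pi)

print(residue)   # ≈ 3i
```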

# The Three Types of Isolated Singularities

Recall that if $f(z)$ has an isolated singularity at $z=z_0$, it may be represented by a Laurent series
$$f(z)=\sum_{n=0}^\infty a_n(z-z_0)^n+\frac{b_1}{z-z_0}+\frac{b_2}{(z-z_0)^2}+\cdots+\frac{b_n}{(z-z_0)^n}+\cdots$$
in a punctured disk $0<|z-z_0|<R$. The part of the series that contains negative powers of $z-z_0$,
$$\frac{b_1}{z-z_0}+\frac{b_2}{(z-z_0)^2}+\cdots+\frac{b_n}{(z-z_0)^n}+\cdots$$
is called the principal part of $f(z)$ at $z_0$.

Suppose that there exists $m\in\mathbb{N}$ such that $b_m\ne 0$ and $b_{m+1}=b_{m+2}=\cdots=0$. Then
$$f(z)=\sum_{n=0}^\infty a_n(z-z_0)^n+\frac{b_1}{z-z_0}+\frac{b_2}{(z-z_0)^2}+\cdots+\frac{b_m}{(z-z_0)^m},$$
where $0<|z-z_0|<R$. In this case, the isolated singularity $z_0$ is called a pole of order $m$. A pole of order 1 is usually called a simple pole.

Example.
\begin{align*}
\frac{z^2-2z+3}{z-2}&=z+\frac{3}{z-2}\\
&=2+(z-2)+\frac{3}{z-2},\ 0<|z-2|<\infty
\end{align*}
has a simple pole at $z_0=2$. The residue at $z_0=2$ is 3.

Example.
\begin{align*}
\frac{\sinh z}{z^4}&=\frac{1}{z^4}\left(z+\frac{z^3}{3!}+\frac{z^5}{5!}+\cdots\right)\\
&=\frac{1}{z^3}+\frac{1}{3!z}+\frac{z}{5!}+\frac{z^3}{7!}+\cdots,\ 0<|z|<\infty
\end{align*}
has a pole of order $m=3$ at $z_0=0$. The residue at $z_0=0$ is $\frac{1}{6}$.

When $b_n=0$ for all $n\geq 1$, so that
$$f(z)=\sum_{n=0}^\infty a_n(z-z_0)^n,$$
the point $z_0$ is called a removable singularity.

Example.
\begin{align*}
f(z)&=\frac{1-\cos z}{z^2}\\
&=\frac{1}{z^2}\left[1-\left(1-\frac{z^2}{2!}+\frac{z^4}{4!}-\frac{z^6}{6!}+\cdots\right)\right]\\
&=\frac{1}{2!}-\frac{z^2}{4!}+\frac{z^4}{6!}-\cdots,\ 0<|z|<\infty.
\end{align*}
Thus $z_0=0$ is a removable singularity. Define
$$g(z)=\left\{\begin{array}{ccc} f(z) & \mbox{if} & z\ne 0,\\ \frac{1}{2!} & \mbox{if} & z=0. \end{array}\right.$$
Then $g(z)$ is entire.

When an infinite number of the $b_n$ are nonzero, $z_0$ is called an essential singularity.

Example.
\begin{align*}
\exp\left(\frac{1}{z}\right)&=\sum_{n=0}^\infty\frac{1}{n! z^n}\\
&=1+\frac{1}{1!z}+\frac{1}{2!z^2}+\cdots,\ 0<|z|<\infty
\end{align*}
has an essential singularity at $z_0=0$.

Example. $e^z=-1$ when $z=(2n+1)\pi i$ $(n=0,\pm 1,\pm 2,\cdots)$. So $e^{\frac{1}{z}}=-1$ when $\frac{1}{z}=(2n+1)\pi i$, i.e. $z=-\frac{i}{(2n+1)\pi}$ $(n=0,\pm 1,\pm 2,\cdots)$, and an infinite number of these points lie in any neighbourhood of the essential singularity $z_0=0$.

Picard’s Theorem. In each neighbourhood of an essential singularity, a function assumes every finite value, with one possible exception, an infinite number of times.

In the above example, since $e^{\frac{1}{z}}\ne 0$ for all $z$, the value $0$ is the exceptional value in Picard’s theorem.

# More on Residues

Here and here, we studied how to evaluate the contour integral $\oint_C f(z)dz$ when $f(z)$ is analytic everywhere within and on the positively oriented simple closed contour $C$ except for a finite number of isolated singularities interior to $C$. The calculation of residues, however, can be a pain if there are many isolated singularities of $f(z)$ interior to $C$. It turns out that by slightly modifying the function, we may need to deal with only one isolated singularity, regardless of how many isolated singularities of $f(z)$ there are interior to $C$. This gives a great advantage from a computational viewpoint.

Theorem. If a function $f$ is analytic everywhere except for a finite number of isolated singularities interior to a positively oriented simple closed contour $C$, then
$$\oint_C f(z)dz=2\pi i\mathrm{Res}_{z=0}\left[\frac{1}{z^2}f\left(\frac{1}{z}\right)\right].$$

Proof.

From the above picture, we see that the function $f(z)$ has a Laurent series expansion
$$f(z)=\sum_{n=-\infty}^\infty c_n z^n\ (R_1<|z|<\infty)$$
where
$c_n=\frac{1}{2\pi i}\oint_{C_0}\frac{f(z)}{z^{n+1}}dz$ $(n=0,\pm 1,\pm 2,\cdots)$. In particular, we have
$$\oint_{C_0}f(z)dz=2\pi ic_{-1}.$$
Since the condition of validity of the representation is not of the form $0<|z|<R_2$, $c_{-1}$ is not the residue of $f$ at $z=0$. Let us replace $z$ by $\frac{1}{z}$ in the representation. Then
$$\frac{1}{z^2}f\left(\frac{1}{z}\right)=\sum_{n=-\infty}^\infty\frac{c_n}{z^{n+2}}=\sum_{n=-\infty}^\infty\frac{c_{n-2}}{z^n}\ \left(0<|z|<\frac{1}{R_1}\right).$$
Hence,
$$c_{-1}=\mathrm{Res}_{z=0}\left[\frac{1}{z^2}f\left(\frac{1}{z}\right)\right]$$
and
$$\oint_{C_0}f(z)dz=2\pi i\mathrm{Res}_{z=0}\left[\frac{1}{z^2}f\left(\frac{1}{z}\right)\right].$$
Since $f$ is analytic throughout the region bounded by $C$ and $C_0$ (topologically speaking $C_0$ is homotopic to $C$),
$$\oint_C f(z)dz=\oint_{C_0}f(z)dz.$$

Example. Evaluate $\oint_C\frac{5z-2}{z(z-1)}dz$ where $C:\ |z|=2$.

Solution. Let $f(z)=\frac{5z-2}{z(z-1)}$. Then
\begin{align*}
\frac{1}{z^2}f\left(\frac{1}{z}\right)&=\frac{5-2z}{z(1-z)}\\
&=\frac{5-2z}{z}\cdot\frac{1}{1-z}\\
&=\left(\frac{5}{z}-2\right)(1+z+z^2+\cdots)\\
&=\frac{5}{z}+3+3z+\cdots\ (0<|z|<1).
\end{align*}
Thus,
$$\oint_C\frac{5z-2}{z(z-1)}dz=2\pi i(5)=10\pi i.$$
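As a sanity check, the contour integral can be computed numerically from the parametrization $z=2e^{it}$, $0\le t\le 2\pi$. This is a sketch using the trapezoid rule, which is extremely accurate for periodic integrands; the node count is an arbitrary choice.

```python
import cmath
import math

# Numerical check: contour integral of (5z-2)/(z(z-1)) over |z| = 2
# should be 10*pi*i.
f = lambda z: (5 * z - 2) / (z * (z - 1))

N = 4096
total = 0.0
for k in range(N):
    t = 2 * math.pi * k / N
    w = cmath.exp(1j * t)
    z = 2 * w                         # point on |z| = 2
    dz = 2j * w * (2 * math.pi / N)   # z'(t) dt for this node
    total += f(z) * dz

print(total)            # ≈ 10 pi i ≈ 31.4159i
print(10j * math.pi)
```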