*Definition*. A subset $U\subset X$ is said to be *open* if $\forall x\in U$ $\exists\epsilon>0$ s.t. $B(x,\epsilon)\subset U$.

If $U\subset X$ is open then $U$ can be expressed as a union of open balls $B(x,\epsilon)$. Hence, the set of all open balls in $X$, $\mathcal{B}=\{B(x,\epsilon): x\in X,\ \epsilon>0\}$, forms a *basis* for a *topology* (a *metric topology*, the *topology induced by the metric* $d$) on $X$. Those who have not studied topology before may simply understand the topology as the set of all open sets in $X$.

*Definition*. A subset $F\subset X$ is said to be *closed* if its complement, $F^c=X\setminus F$ is open in $X$.

The following is the definition of a continuous function that you are familiar with from calculus. The definition is written in terms of metrics.

*Definition*. Let $(X,d_X)$ and $(Y,d_Y)$ be metric spaces. A mapping $T:X\longrightarrow Y$ is said to be *continuous at* $x_0\in X$ if $\forall\epsilon>0$ $\exists\delta>0$ s.t. $d_Y(Tx,Tx_0)<\epsilon$ whenever $d_X(x,x_0)<\delta$.

$T$ is said to be *continuous* if it is continuous at every point of $X$.

The above definition can be generalized in terms of open sets as follows.

*Theorem*. A mapping $T: (X,d_X)\longrightarrow(Y,d_Y)$ is continuous if and only if $\forall$ open set $U$ in $Y$, $T^{-1}U$ is open in $X$.

*Proof*. (Only if, $\Rightarrow$) Suppose that $T:X\longrightarrow Y$ is continuous. Let $U$ be open in $Y$. We show that $T^{-1}U$ is open in $X$. Let $x_0\in T^{-1}U$. Then $Tx_0\in U$. Since $U$ is open in $Y$, $\exists\epsilon>0$ s.t. $B(Tx_0,\epsilon)\subset U$. By the continuity of $T$, for this $\epsilon>0$ $\exists\delta>0$ s.t. whenever $d_X(x,x_0)<\delta$, $d_Y(Tx,Tx_0)<\epsilon$. This means that

$$TB(x_0,\delta)\subset B(Tx_0,\epsilon)\subset U\Longrightarrow B(x_0,\delta)\subset T^{-1}(TB(x_0,\delta))\subset T^{-1}U.$$ Hence, $T^{-1}U$ is open in $X$.

(If, $\Leftarrow$) Suppose that $\forall$ open set $U$ in $Y$, $T^{-1}U$ is open in $X$. We show that $T$ is continuous. Let $x_0\in X$ and let $\epsilon>0$ be given. Then $B(Tx_0,\epsilon)$ is open in $Y$. So by the assumption, $T^{-1}B(Tx_0,\epsilon)$ is open in $X$ and contains $x_0$. This means that $\exists\delta>0$ s.t.

$$B(x_0,\delta)\subset T^{-1}B(Tx_0,\epsilon)\Longrightarrow TB(x_0,\delta)\subset T(T^{-1}B(Tx_0,\epsilon))\subset B(Tx_0,\epsilon).$$ This is equivalent to saying that $\exists\delta>0$ s.t. whenever $d_X(x,x_0)<\delta$, $d_Y(Tx,Tx_0)<\epsilon$. That is, $T$ is continuous at $x_0$. Since the choice of $x_0\in X$ was arbitrary, the proof is complete.

Let $A\subset X$. $x\in X$ is called an *accumulation point* or a *limit point* of $A$ if $\forall$ open set $U(x)$ in $X$, $(U(x)\setminus\{x\})\cap A\ne\emptyset$. Here the notation $U(x)$ means an open set containing $x$. The set of all accumulation points of $A$ is denoted by $A'$ and is called the *derived set* of $A$. $\bar A:=A\cup A'$ is called the *closure* of $A$. $\bar A$ is the smallest closed set containing $A$.

*Theorem*. Let $A\subset X$. Then $x\in\bar A$ if and only if $\forall$ open set $U(x)$, $U(x)\cap A\ne\emptyset$.

*Definition*. $D\subset X$ is said to be *dense* if $\bar D=X$. This means that every non-empty open set $U$ in $X$ satisfies $U\cap D\ne\emptyset$.

*Definition*. $X$ is said to be *separable* if it has a countable dense subset.

*Examples*. The real line $\mathbb{R}$ is separable, since $\mathbb{Q}$ is countable and dense in $\mathbb{R}$. The complex plane $\mathbb{C}$ is also separable, with countable dense subset $\mathbb{Q}+i\mathbb{Q}$.

*Theorem*. The space $\ell^\infty$ is not separable.

*Proof*. Let $y=(\eta_1,\eta_2,\eta_3,\cdots)$ be a sequence of zeros and ones. Then $y\in\ell^\infty$. We can then associate $y$ with the binary representation

$$\hat y=\frac{\eta_1}{2}+\frac{\eta_2}{2^2}+\frac{\eta_3}{2^3}+\cdots\in [0,1].$$ Each $\hat y\in [0,1]$ has a binary representation, so the map $y\mapsto\hat y$ is onto $[0,1]$; since $[0,1]$ is uncountable, there are uncountably many sequences of zeros and ones. If $y$ and $z$ are distinct sequences of zeros and ones, then $d(y,z)=1$. This means that for any two distinct sequences $y$ and $z$ of zeros and ones, $B\left(y,\frac{1}{3}\right)\cap B\left(z,\frac{1}{3}\right)=\emptyset$. Let $A$ be a dense subset of $\ell^\infty$. Then each ball $B\left(y,\frac{1}{3}\right)$ contains at least one element of $A$, and since these balls are pairwise disjoint, those elements are all distinct. This means that $A$ cannot be countable.

*Theorem*. The space $\ell^p$ with $1\leq p<\infty$ is separable.

*Proof*. Let $A$ be the set of all sequences $y$ of the form

$$y=(\eta_1,\eta_2,\cdots,\eta_n,0,0,\cdots),$$ where $n$ is a positive integer and the $\eta_j$’s are rational. For each $n=1,2,\cdots$, the number of sequences of the form $y=(\eta_1,\eta_2,\cdots,\eta_n,0,0,\cdots)$ is the same as the number of functions from $\{1,2,3,\cdots,n\}$ to $\mathbb{Q}$, the set of all rational numbers. $\mathbb{Q}$ has cardinality $\aleph_0$, and so the number is $\aleph_0^n=\aleph_0$. The cardinality of $A$ is then $\aleph_0\cdot\aleph_0=\aleph_0$, i.e. $A$ is countable. Now we show that $A$ is dense in $\ell^p$. Let $x=(\xi_j)\in\ell^p$. Let $\epsilon>0$ be given. Since $\displaystyle\sum_{j=1}^\infty|\xi_j|^p<\infty$, $\exists$ a positive integer $N$ s.t. $\displaystyle\sum_{j=N+1}^\infty|\xi_j|^p<\frac{\epsilon^p}{2}$. Since the rationals are dense in $\mathbb{R}$, one can find $y=(\eta_1,\eta_2,\cdots,\eta_N,0,0,\cdots)\in A$ s.t. $\displaystyle\sum_{j=1}^N|\xi_j-\eta_j|^p<\frac{\epsilon^p}{2}$. Hence,

$$[d(x,y)]^p=\sum_{j=1}^N|\xi_j-\eta_j|^p+\sum_{j=N+1}^\infty|\xi_j|^p<\epsilon^p,$$

i.e. $d(x,y)<\epsilon$. This means that $y\in B(x,\epsilon)\cap A$, so $B(x,\epsilon)\cap A\ne\emptyset$. This completes the proof.
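To make the density argument concrete, here is a minimal Python sketch (the test sequence $x_j=1/j^2$ with $p=2$, the tolerance, and all names are our own choices, not part of the proof): it picks $N$ so that the tail falls below $\epsilon^p/2$ and then replaces the first $N$ terms by nearby rationals.

```python
from fractions import Fraction

# A minimal sketch of the density argument for p = 2, using the test
# sequence x_j = 1/j^2 (an element of l^2), truncated for computation.
p, eps = 2, 1e-3
x = [1.0 / j**2 for j in range(1, 10001)]

# Choose N so that the tail sum_{j>N} |x_j|^p falls below eps^p / 2.
tail = sum(t**p for t in x)
N = 0
while tail >= eps**p / 2:
    tail -= x[N]**p
    N += 1

# Approximate the first N terms by rationals; the head error is then tiny.
y = [Fraction(t).limit_denominator(10**6) for t in x[:N]]
head = sum(abs(x[j] - float(y[j]))**p for j in range(N))
print(N, (head + tail)**(1 / p) < eps)  # prints N and True
```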

Let $\mathbb{Z}$ denote the set of integers. $\mathbb{Z}$ satisfies the *well-ordering principle*, namely any non-empty set of nonnegative integers has a smallest member.

One of the most fundamental theorems regarding numbers is *Euclid’s Algorithm*. Although we will not discuss its proof, it can be proved using the well-ordering principle.

*Theorem*. [Euclid's Algorithm] If $m$ and $n$ are integers with $n>0$, then $\exists$ integers $q$ and $r$ with $0\leq r<n$ such that $m=qn+r$.
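For illustration, Python's built-in divmod returns exactly this pair $(q,r)$ when $n>0$; a minimal sketch (the wrapper name division is ours):

```python
def division(m: int, n: int) -> tuple[int, int]:
    """Return (q, r) with m = q*n + r and 0 <= r < n, assuming n > 0."""
    q, r = divmod(m, n)  # Python's floor division gives 0 <= r < n for n > 0
    return q, r

print(division(17, 5))   # (3, 2)  since 17 = 3*5 + 2
print(division(-17, 5))  # (-4, 3) since -17 = (-4)*5 + 3
```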

Euclid’s algorithm hints at how we can define the notion that one integer divides another.

*Definition*. Given $m\ne 0, n\in\mathbb{Z}$, we say $m$ divides $n$ and write $m|n$ if $n=cm$ for some $c\in\mathbb{Z}$.

*Examples*. $2|14$, $(-7)|14$, $4|(-16)$.

If $m|n$, we call $m$ a *divisor* or a *factor* of $n$, and $n$ a *multiple* of $m$. To indicate that $m$ is not a divisor of $n$, we write $m\nmid n$. For example, $3\nmid 5$.

*Lemma*. The following properties hold.

(a) $1|n$ $\forall n$.

(b) If $m\ne 0$ then $m|0$.

(c) If $m|n$ and $n|q$, then $m|q$.

(d) If $m|n$ and $m|q$ then $m|(\mu n+\nu q)$ $\forall \mu,\nu$.

(e) If $m|1$ then $m=\pm 1$.

(f) If $m|n$ and $n|m$ then $m=\pm n$.

*Definition*. Given $a,b$ (not both 0), their *greatest common divisor* (in short gcd) $c$ is defined by the following properties:

(a) $c>0$

(b) $c|a$ and $c|b$

(c) If $d|a$ and $d|b$ then $d|c$.

If $c$ is the gcd of $a$ and $b$, we write $c=(a,b)$.

For example, $(24,9)=3$. Note that the gcd $3$ can be written in terms of $24$ and $9$ as $3\cdot 9+1\cdot (-24)$ or $(-5)\cdot 9+2\cdot 24$. In general, the following theorem holds.

*Theorem*. If $a,b$ are not both $0$, their gcd $c=(a,b)$ exists uniquely. Moreover, $\exists m,n\in\mathbb{Z}$ s.t. $c=ma+nb$.

Now let us talk about how to find the gcd of two positive integers $a$ and $b$. W.L.O.G. (Without Loss Of Generality), we may assume that $b<a$. Then by Euclid’s algorithm we have

$$a=bq+r,\ \mbox{where}\ 0\leq r<b.$$

Let $c=(a,b)$. Then $c|r$, since $r=a-bq$, so $c$ is a common divisor of $b$ and $r$. Conversely, if $d$ is a common divisor of $b$ and $r$, it is also a common divisor of $a$ and $b$. This implies that $d\leq c$, and so $c=(b,r)$. Finding $(b,r)$ is of course easier because one of the numbers is smaller than before.
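The recursion $(a,b)=(b,r)$ translates directly into code; a minimal Python sketch (the function name is ours), which reproduces the worked example below:

```python
def gcd(a: int, b: int) -> int:
    """Greatest common divisor of nonnegative integers via (a, b) = (b, r)."""
    while b != 0:
        a, b = b, a % b  # replace (a, b) by (b, r), where a = bq + r
    return a

print(gcd(100, 28))  # 4
```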

*Example*. [Finding GCD]

$$

\begin{aligned}

(100,28)&=(28,16)\ &(100&=28\cdot 3+16)\\

&=(16,12)\ &(28&=16\cdot 1+12)\\

&=(12,4)\ &(16&=12\cdot 1+4)\\

&=4.

\end{aligned}

$$

By working backward, we can also find integers $m$ and $n$ such that

$$4=m\cdot 100+n\cdot 28.$$

\begin{align*}

4&=16+12(-1)\\

&=16+(-1)[28+(-1)16]\\

&=(-1)28+2\cdot 16\\

&=(-1)28+2[100+(-3)28]\\

&=2\cdot 100+(-7)28.

\end{align*}

Therefore, $m=2$ and $n=-7$.
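The back-substitution can be mechanized as the extended Euclidean algorithm; a minimal Python sketch (names ours) returning $g=(a,b)$ together with $m,n$ such that $g=ma+nb$:

```python
def extended_gcd(a: int, b: int) -> tuple[int, int, int]:
    """Return (g, m, n) with g = gcd(a, b) and g = m*a + n*b."""
    if b == 0:
        return a, 1, 0
    g, m, n = extended_gcd(b, a % b)  # g = m*b + n*(a % b)
    return g, n, m - (a // b) * n     # rewrite g in terms of a and b

print(extended_gcd(100, 28))  # (4, 2, -7), i.e. 4 = 2*100 + (-7)*28
```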

*Definition*. We say that $a$ and $b$ are *relatively prime* if $(a,b)=1$.

*Theorem*. The integers $a$ and $b$ are relatively prime if and only if $1=ma+nb$ for some $m$ and $n$.

*Theorem*. If $(a,b)=1$ and $a|bc$ then $a|c$.

*Theorem*. If $b$ and $c$ are both relatively prime to $a$, then $bc$ is also relatively prime to $a$.

*Definition*. A *prime number*, or shortly *prime*, is an integer $p>1$ such that $\forall a\in\mathbb{Z}$, either $p|a$ or $(p,a)=1$.

Suppose that $p$ is a prime as defined above and $p=ab$, where $1\leq a<p$. Then $p|ab$, and $p\nmid a$ since $a<p$, so $(p,a)=1$. This implies that $p|b$. On the other hand, $b|p(=ab)$, and hence $p=b$ and $a=1$. So, the above definition coincides with the definition of a prime we are familiar with.

*Theorem*. If $p$ is a prime and $p|a_1a_2\cdots a_n$, then $p|a_i$ for some $i$ with $1\leq i\leq n$.

*Proof*. If $p|a_1$, we are done. If not, $(p,a_1)=1$, and so by the preceding theorem $p|a_2a_3\cdots a_n$. Continuing this, we see that $p|a_i$ for some $i$.

Regarding primes, we have the following theorems.

*Theorem*. If $n>1$, then $n$ is either a prime or a product of primes.

*Theorem*. [Unique Factorization Theorem] Given $n>1$, there is a unique way to write $n$ in the form $n=p_1^{a_1}p_2^{a_2}\cdots p_k^{a_k}$, where $p_1<p_2<\cdots<p_k$ are primes and the exponents $a_1,\cdots,a_k$ are all positive.
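For modest $n$, the factorization guaranteed by the theorem can be computed by trial division; a minimal Python sketch (names ours):

```python
def factorize(n: int) -> dict[int, int]:
    """Prime factorization n = p1^a1 * ... * pk^ak by trial division."""
    factors: dict[int, int] = {}
    p = 2
    while p * p <= n:
        while n % p == 0:
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:  # whatever remains after trial division is itself prime
        factors[n] = factors.get(n, 0) + 1
    return factors

print(factorize(360))  # {2: 3, 3: 2, 5: 1}, i.e. 360 = 2^3 * 3^2 * 5
```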

*Theorem*. [Euclid] There is an infinite number of primes.

Let $p\geq 1$ be a fixed real number and let $\ell^p$ denote the set of all sequences $x=(\xi_j)$ with $\displaystyle\sum_{j=1}^\infty|\xi_j|^p<\infty$:

$$\ell^p=\left\{x=(\xi_j): \sum_{j=1}^\infty|\xi_j|^p<\infty\right\}.$$

Define $d:\ell^p\times\ell^p\longrightarrow\mathbb{R}^+\cup\{0\}$ by

$$d(x,y)=\left(\sum_{j=1}^\infty|\xi_j-\eta_j|^p\right)^{\frac{1}{p}},$$

where $x=(\xi_j)$ and $y=(\eta_j)$.

Then $(\ell^p,d)$ is a metric space. The properties (M1) and (M2) are clearly satisfied. We prove the remaining property (M3), the triangle inequality. The case $p=1$ follows easily from the triangle inequality for numbers, so assume $p>1$. We need a few steps to do this. First we prove the following inequality, known as *Young's inequality*: $\forall\alpha>0,\beta>0$,

$$\alpha\beta\leq\frac{\alpha^p}{p}+\frac{\beta^q}{q},$$

where $p>1$ and $\frac{1}{p}+\frac{1}{q}=1$. The numbers $p$ and $q$ are called *conjugate exponents*. It follows from $\frac{1}{p}+\frac{1}{q}=1$ that $(p-1)(q-1)=1$, i.e. $\frac{1}{p-1}=q-1$. If we let $u=t^{p-1}$, then $t=u^{\frac{1}{p-1}}=u^{q-1}$. Comparing the area $\alpha\beta$ of the rectangle $[0,\alpha]\times[0,\beta]$ with the areas under the curves $u=t^{p-1}$ and $t=u^{q-1}$, we obtain

$$\alpha\beta\leq\int_0^{\alpha}t^{p-1}dt+\int_0^{\beta}u^{q-1}du=\frac{\alpha^p}{p}+\frac{\beta^q}{q}.$$
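As a quick numerical sanity check (the sampling parameters are our own choices), one can draw random conjugate exponents and verify Young's inequality:

```python
import random

# Spot-check Young's inequality a*b <= a^p/p + b^q/q for conjugate p, q.
random.seed(0)
for _ in range(1000):
    p = random.uniform(1.1, 5.0)
    q = p / (p - 1)  # conjugate exponent: 1/p + 1/q = 1
    a, b = random.uniform(0.0, 10.0), random.uniform(0.0, 10.0)
    assert a * b <= a**p / p + b**q / q + 1e-9
print("Young's inequality held on all samples")
```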

Next, using Young's inequality we prove the *Hölder inequality*

$$\sum_{j=1}^\infty|\xi_j\eta_j|\leq\left(\sum_{k=1}^\infty|\xi_k|^p\right)^{\frac{1}{p}}\left(\sum_{m=1}^\infty|\eta_m|^q\right)^{\frac{1}{q}}$$

where $p>1$ and $\frac{1}{p}+\frac{1}{q}=1$. When $p=2$ and $q=2$, we obtain the well-known *Cauchy–Schwarz inequality*.

*Proof*. Let $(\tilde\xi_j)$ and $(\tilde\eta_j)$ be two sequences such that

$$\sum_{j=1}^\infty|\tilde\xi_j|^p=1,\ \sum_{j=1}^\infty|\tilde\eta_j|^q=1.$$

For each $j$, let $\alpha=|\tilde\xi_j|$ and $\beta=|\tilde\eta_j|$. Then by Young's inequality,

$$|\tilde\xi_j\tilde\eta_j|\leq\frac{|\tilde\xi_j|^p}{p}+\frac{|\tilde\eta_j|^q}{q}$$

and so we obtain

$$\sum_{j=1}^\infty|\tilde\xi_j\tilde\eta_j|\leq\sum_{j=1}^\infty\frac{|\tilde\xi_j|^p}{p}+\sum_{j=1}^\infty\frac{|\tilde\eta_j|^q}{q}=1.$$

Now take any nonzero $x=(\xi_j)\in\ell^p$, $y=(\eta_j)\in\ell^q$. Setting

$$\tilde\xi_j=\frac{\xi_j}{\left(\displaystyle\sum_{k=1}^\infty|\xi_k|^p\right)^{\frac{1}{p}}},\ \tilde\eta_j=\frac{\eta_j}{\left(\displaystyle\sum_{m=1}^\infty|\eta_m|^q\right)^{\frac{1}{q}}}$$

in the inequality above yields the Hölder inequality.
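A quick numerical spot-check of the Hölder inequality on truncated sequences (values and names are ours):

```python
import random

random.seed(1)
p = 3.0
q = p / (p - 1)  # conjugate exponent
xs = [random.uniform(-1, 1) for _ in range(100)]
ys = [random.uniform(-1, 1) for _ in range(100)]
lhs = sum(abs(a * b) for a, b in zip(xs, ys))
rhs = (sum(abs(a)**p for a in xs)**(1 / p)
       * sum(abs(b)**q for b in ys)**(1 / q))
print(lhs <= rhs)  # True
```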

Next, we prove the *Minkowski inequality*

$$\left(\sum_{j=1}^\infty|\xi_j+\eta_j|^p\right)^{\frac{1}{p}}\leq\left(\sum_{k=1}^\infty|\xi_k|^p\right)^{\frac{1}{p}}+\left(\sum_{m=1}^\infty|\eta_m|^p\right)^{\frac{1}{p}}$$

where $x=(\xi_j),\ y=(\eta_j)\in\ell^p$ and $p\geq 1$. The case $p=1$ follows from the triangle inequality for numbers. Let $p>1$. Then

\begin{align*}

|\xi_j+\eta_j|^p&=|\xi_j+\eta_j||\xi_j+\eta_j|^{p-1}\\

&\leq(|\xi_j|+|\eta_j|)|\xi_j+\eta_j|^{p-1}\ (\mbox{triangle inequality for numbers}).

\end{align*}

For a fixed $n$, we have

$$\sum_{j=1}^n|\xi_j+\eta_j|^p\leq\sum_{j=1}^n|\xi_j||\xi_j+\eta_j|^{p-1}+\sum_{j=1}^n|\eta_j||\xi_j+\eta_j|^{p-1}.$$

Using the Hölder inequality, we get the following inequality

\begin{align*}

\sum_{j=1}^n|\xi_j||\xi_j+\eta_j|^{p-1}&\leq \sum_{j=1}^\infty |\xi_j||\xi_j+\eta_j|^{p-1}\\

&\leq\left(\sum_{k=1}^\infty |\xi_k|^p\right)^{\frac{1}{p}}\left(\sum_{m=1}^\infty(|\xi_m+\eta_m|^{p-1})^q\right)^{\frac{1}{q}}\ (\mbox{Hölder})\\

&=\left(\sum_{k=1}^\infty|\xi_k|^p\right)^{\frac{1}{p}}\left(\sum_{m=1}^\infty|\xi_m+\eta_m|^p\right)^{\frac{1}{q}}.

\end{align*}

Similarly we also get the inequality

$$\sum_{j=1}^n|\eta_j||\xi_j+\eta_j|^{p-1}\leq \left(\sum_{k=1}^\infty|\eta_k|^p\right)^{\frac{1}{p}}\left(\sum_{m=1}^\infty|\xi_m+\eta_m|^p\right)^{\frac{1}{q}}.$$

Combining these two inequalities, we get

$$\sum_{j=1}^n|\xi_j+\eta_j|^p\leq\left\{\left(\sum_{k=1}^\infty|\xi_k|^p\right)^{\frac{1}{p}}+\left(\sum_{k=1}^\infty|\eta_k|^p\right)^{\frac{1}{p}}\right\}\left(\sum_{m=1}^\infty|\xi_m+\eta_m|^p\right)^{\frac{1}{q}}$$

and by taking the limit $n\to \infty$ on the left hand side, we get

$$\sum_{j=1}^\infty|\xi_j+\eta_j|^p\leq\left\{\left(\sum_{k=1}^\infty|\xi_k|^p\right)^{\frac{1}{p}}+\left(\sum_{k=1}^\infty|\eta_k|^p\right)^{\frac{1}{p}}\right\}\left(\sum_{m=1}^\infty|\xi_m+\eta_m|^p\right)^{\frac{1}{q}}.$$

Note that $\displaystyle\sum_{m=1}^\infty|\xi_m+\eta_m|^p<\infty$, since $|\xi_j+\eta_j|^p\leq 2^p(|\xi_j|^p+|\eta_j|^p)$. If this sum is $0$, the Minkowski inequality is trivial; otherwise, dividing the last inequality by $\displaystyle\left(\sum_{m=1}^\infty|\xi_m+\eta_m|^p\right)^{\frac{1}{q}}$ and using $1-\frac{1}{q}=\frac{1}{p}$ yields the Minkowski inequality. The Minkowski inequality tells us that

$$d(x,y)=\left(\sum_{j=1}^\infty|\xi_j-\eta_j|^p\right)^{\frac{1}{p}}<\infty$$

for $x,y\in\ell^p$. Let $x=(\xi_j), y=(\eta_j),\ z=(\zeta_j)\in\ell^p$. Then

\begin{align*}

d(x,y)&=\left(\sum_{j=1}^\infty|\xi_j-\eta_j|^p\right)^{\frac{1}{p}}\\

&\leq\left(\sum_{j=1}^\infty[|\xi_j-\zeta_j|+|\zeta_j-\eta_j|]^p\right)^{\frac{1}{p}}\\

&\leq\left(\sum_{j=1}^\infty|\xi_j-\zeta_j|^p\right)^{\frac{1}{p}}+\left(\sum_{j=1}^\infty|\zeta_j-\eta_j|^p\right)^{\frac{1}{p}}\\

&=d(x,z)+d(z,y).

\end{align*}
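This verifies (M3), so $(\ell^p,d)$ is indeed a metric space. A numerical spot-check of the triangle inequality on truncated sequences (values and names are ours):

```python
import random

def lp_dist(x, y, p):
    """l^p distance between two finitely supported sequences."""
    return sum(abs(a - b)**p for a, b in zip(x, y))**(1 / p)

random.seed(2)
p = 3
x = [random.gauss(0, 1) for _ in range(50)]
y = [random.gauss(0, 1) for _ in range(50)]
z = [random.gauss(0, 1) for _ in range(50)]
# Minkowski's inequality is exactly the triangle inequality for d.
print(lp_dist(x, y, p) <= lp_dist(x, z, p) + lp_dist(z, y, p))  # True
```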

A measurable function $f$ on a closed interval $[a,b]$ is said to belong to $L^p$ if $\int_a^b|f(t)|^p dt<\infty$. $L^p$ is a vector space. For functions $f,g\in L^p$, we define

$$d(f,g)=\left\{\int_a^b|f(t)-g(t)|^pdt\right\}^{\frac{1}{p}}.$$

Then clearly (M2), symmetry, is satisfied, and one can also prove that (M3), the triangle inequality, holds. However, (M1) is not satisfied: if $d(f,g)=0$, we can only conclude that $f=g$ a.e. (almost everywhere), i.e. the set $\{t\in[a,b]: f(t)\ne g(t)\}$ has measure $0$. It turns out that equality a.e. is an equivalence relation on $L^p$, so by identifying $f\in L^p$ with its equivalence class $[f]$, $d$ becomes a metric on $L^p$ (more precisely, on the quotient space of $L^p$). Later, we will be particularly interested in the case $p=2$, in which $L^p$ as well as $\ell^p$ become Hilbert spaces. Those of you who want to know the details about $L^p$ spaces are referred to

H. L. Royden, *Real Analysis*, 3rd Edition, Macmillan Publishing Company, 1988.

Now let us review functions in a more formal way. Let $X$ and $Y$ be two non-empty sets. The *Cartesian product* $X\times Y$ of $X$ and $Y$ is defined as the set

$$X\times Y=\{(x,y): x\in X,\ y\in Y\}.$$

A subset $f$ of the Cartesian product $X\times Y$ (we write $f\subset X\times Y$) is called a *graph* from $X$ to $Y$. A graph $f\subset X\times Y$ is called a *function* from $X$ to $Y$ (we write $f: X\longrightarrow Y$) if whenever $(x,y_1),(x,y_2)\in f$, $y_1=y_2$. If $f: X\longrightarrow Y$ and $(x,y)\in f$, we also write $y=f(x)$. A function $f: X\longrightarrow Y$ is said to be *one-to-one* or *injective* if whenever $(x_1,y),(x_2,y)\in f$, $x_1=x_2$. This is equivalent to saying $f(x_1)=f(x_2)$ implies $x_1=x_2$. A function $f: X\longrightarrow Y$ is said to be *onto* or *surjective* if $\forall y\in Y$ $\exists x\in X$ s.t. $(x,y)\in f$. A function $f: X\longrightarrow Y$ is said to be *one-to-one and onto* (or *bijective*) if it is both one-to-one and onto (or both injective and surjective).

Let $f: X\longrightarrow Y$ and $g: Y\longrightarrow Z$ be two functions. Then the *composition* or the *composite function* $g\circ f: X\longrightarrow Z$ is defined by $g\circ f(x)=g(f(x))$ $\forall x\in X$. The function composition $\circ$ may be considered as an operation and it is associative.

*Lemma*. If $h: X\longrightarrow Y$, $g:Y\longrightarrow Z$ and $f:Z\longrightarrow W$, then $f\circ(g\circ h)=(f\circ g)\circ h$.

Note that $\circ$ is not commutative i.e. it is not necessarily true that $f\circ g=g\circ f$ even when both $f\circ g$ and $g\circ f$ are defined.

The following lemmas will be useful when we study group theory later.

*Lemma*. If both $f: X\longrightarrow Y$ and $g: Y\longrightarrow Z$ are one-to-one, so is $g\circ f: X\longrightarrow Z$.

*Lemma*. If both $f: X\longrightarrow Y$ and $g: Y\longrightarrow Z$ are onto, so is $g\circ f: X\longrightarrow Z$.

As an immediate consequence of combining these two lemmas, we obtain

*Lemma*. If both $f: X\longrightarrow Y$ and $g: Y\longrightarrow Z$ are bijective, so is $g\circ f: X\longrightarrow Z$.

If $f\subset X\times Y$, then the *inverse graph* $f^{-1}\subset Y\times X$ is defined by

$$f^{-1}=\{(y,x)\in Y\times X: (x,y)\in f\}.$$

If $f: X\longrightarrow Y$ is one-to-one and onto (bijective) then its inverse graph $f^{-1}$ is a function $f^{-1}: Y\longrightarrow X$. The *inverse* $f^{-1}$ is also one-to-one and onto.

*Lemma*. If $f: X\longrightarrow Y$ is a bijection, then $f\circ f^{-1}=\imath_Y$ and $f^{-1}\circ f=\imath_X$, where $\imath_X$ and $\imath_Y$ are the *identity mappings* of $X$ and $Y$, respectively.

Let $A(X)$ be the set of all one-to-one functions of $X$ onto $X$ itself. Then $(A(X),\circ)$ is a group. If $X$ is a finite set of $n$ elements (we may conveniently say $X=\{1,2,\cdots,n\}$), then $(A(X),\circ)$ is a finite group of order $n!$, called the *symmetric group of degree* $n$. The symmetric group of degree $n$ is denoted by $S_n$ and the elements of $S_n$ are called *permutations*.
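For small $n$, one can enumerate $S_n$ and check the group structure directly; a minimal Python sketch with $n=3$ (permutations of $\{0,1,\cdots,n-1\}$ rather than $\{1,\cdots,n\}$, names ours):

```python
from itertools import permutations

n = 3
Sn = list(permutations(range(n)))  # all n! permutations of {0, ..., n-1}

def compose(f, g):
    """(f o g)(x) = f(g(x)), permutations stored as tuples of images."""
    return tuple(f[g[x]] for x in range(n))

identity = tuple(range(n))
print(len(Sn))  # 6 = 3!
# Closure and inverses: compositions stay in Sn, and every f has an inverse.
print(all(compose(f, g) in Sn for f in Sn for g in Sn))              # True
print(all(any(compose(f, g) == identity for g in Sn) for f in Sn))   # True
```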

Consider the function $f(z)=\displaystyle\frac{1}{1+z}$. Its Taylor series expansion at $z=0$ is

$$f(z)=\sum_{n=0}^\infty(-1)^nz^n$$

for $|z|<1$. The power series $f_1(z)=\displaystyle\sum_{n=0}^\infty(-1)^nz^n$ converges only on the open unit disk $D_1:\ |z|<1$. For instance, the series diverges at $z=\frac{3}{2}i$, i.e. $f_1\left(\frac{3}{2}i\right)$ is not defined. The first 25 partial sums of the series $f_1\left(\frac{3}{2}i\right)$ are listed below, and they do not appear to approach any limit.

S[1] = 1.

S[2] = 1. - 1.500000000 I

S[3] = -1.250000000 - 1.500000000 I

S[4] = -1.250000000 + 1.875000000 I

S[5] = 3.812500000 + 1.875000000 I

S[6] = 3.812500000 - 5.718750000 I

S[7] = -7.578125000 - 5.718750000 I

S[8] = -7.578125000 + 11.36718750 I

S[9] = 18.05078125 + 11.36718750 I

S[10] = 18.05078125 - 27.07617188 I

S[11] = -39.61425781 - 27.07617188 I

S[12] = -39.61425781 + 59.42138672 I

S[13] = 90.13208008 + 59.42138672 I

S[14] = 90.13208008 - 135.1981201 I

S[15] = -201.7971802 - 135.1981201 I

S[16] = -201.7971802 + 302.6957703 I

S[17] = 455.0436554 + 302.6957703 I

S[18] = 455.0436554 - 682.5654831 I

S[19] = -1022.848225 - 682.5654831 I

S[20] = -1022.848225 + 1534.272337 I

S[21] = 2302.408505 + 1534.272337 I

S[22] = 2302.408505 - 3453.612758 I

S[23] = -5179.419137 - 3453.612758 I

S[24] = -5179.419137 + 7769.128706 I

S[25] = 11654.69306 + 7769.128706 I

Also shown below are the graphics of partial sums of the series $f_1\left(\frac{3}{2}i\right)$.
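The partial sums above are easily regenerated (output formatting aside); a minimal Python sketch:

```python
# Partial sums S[k] of f1(z) = sum_{n>=0} (-1)^n z^n at z = (3/2)i.
z = 1.5j
S = 0
for k in range(1, 26):
    S += (-1)**(k - 1) * z**(k - 1)  # add the (k-1)-st term
    print(f"S[{k}] = {S}")           # magnitudes grow: the series diverges
```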

Let us expand $f(z)=\displaystyle\frac{1}{1+z}$ at $z=i$. Then we obtain

\begin{align*}

f(z)&=\frac{1}{1+z}\\

&=\frac{1}{1+i}\cdot\frac{1}{1+\frac{z-i}{1+i}}\\

&=\sum_{n=0}^\infty (-1)^n\frac{(z-i)^n}{(1+i)^{n+1}}

\end{align*}

for $|z-i|<\sqrt{2}$. Let $f_2(z)=\displaystyle\sum_{n=0}^\infty (-1)^n\frac{(z-i)^n}{(1+i)^{n+1}}$. This series converges only on the open disk $D_2:\ |z-i|<\sqrt{2}$, in particular at $z=\frac{3}{2}i$ and $f_2\left(\frac{3}{2}i\right)=f\left(\frac{3}{2}i\right)=\frac{4}{13}-\frac{6}{13}i$. The first 25 partial sums of the series $f_2\left(\frac{3}{2}i\right)$ are listed below and it appears that they are approaching a number. In fact, they are approaching the complex number $f\left(\frac{3}{2}i\right)=\frac{4}{13}-\frac{6}{13}i$.

S[1] = 0.5000000000 - 0.5000000000 I

S[2] = 0.2500000000 - 0.5000000000 I

S[3] = 0.3125000000 - 0.4375000000 I

S[4] = 0.3125000000 - 0.4687500000 I

S[5] = 0.3046875000 - 0.4609375000 I

S[6] = 0.3085937500 - 0.4609375000 I

S[7] = 0.3076171875 - 0.4619140625 I

S[8] = 0.3076171875 - 0.4614257812 I

S[9] = 0.3077392578 - 0.4615478516 I

S[10] = 0.3076782227 - 0.4615478516 I

S[11] = 0.3076934814 - 0.4615325928 I

S[12] = 0.3076934814 - 0.4615402222 I

S[13] = 0.3076915741 - 0.4615383148 I

S[14] = 0.3076925278 - 0.4615383148 I

S[15] = 0.3076922894 - 0.4615385532 I

S[16] = 0.3076922894 - 0.4615384340 I

S[17] = 0.3076923192 - 0.4615384638 I

S[18] = 0.3076923043 - 0.4615384638 I

S[19] = 0.3076923080 - 0.4615384601 I

S[20] = 0.3076923080 - 0.4615384620 I

S[21] = 0.3076923075 - 0.4615384615 I

S[22] = 0.3076923077 - 0.4615384615 I

S[23] = 0.3076923077 - 0.4615384616 I

S[24] = 0.3076923077 - 0.4615384615 I

S[25] = 0.3076923077 - 0.4615384615 I

The following graphics shows that the real parts of the partial sums of the series $f_2\left(\frac{3}{2}i\right)$ are approaching $\frac{4}{13}$ (blue line).

The next graphics shows that the imaginary parts of the partial sums of the series $f_2\left(\frac{3}{2}i\right)$ are approaching $-\frac{6}{13}$ (blue line).

Also shown below is the graphics of the first 25 partial sums of the series $f_2\left(\frac{3}{2}i\right)$. They are approaching the complex number $f\left(\frac{3}{2}i\right)=\frac{4}{13}-\frac{6}{13}i$ (the intersection of horizontal and vertical blue lines).
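These convergent partial sums can likewise be regenerated; a minimal Python sketch:

```python
# Partial sums of f2(z) = sum_{n>=0} (-1)^n (z-i)^n / (1+i)^(n+1) at z = (3/2)i.
z = 1.5j
ratio = -(z - 1j) / (1 + 1j)  # consecutive terms differ by this factor
term = 1 / (1 + 1j)           # the n = 0 term
S = 0
for k in range(1, 26):
    S += term
    term *= ratio
print(S, 4 / 13 - 6j / 13)    # both approximately 0.3076923 - 0.4615385j
```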

Note that $f_1(z)=f_2(z)$ on $D_1\cap D_2$. Define $F(z)$ as

$$F(z)=\left\{\begin{array}{ccc}

f_1(z) & \mbox{if} & z\in D_1,\\

f_2(z) & \mbox{if} & z\in D_2.

\end{array}\right.$$

Then $F(z)$ is analytic in $D_1\cup D_2$. The function $F(z)$ is called the *analytic continuation* into $D_1\cup D_2$ of either $f_1$ or $f_2$, and $f_1$ and $f_2$ are called *elements* of $F$. The function $f_1(z)$ can be continued analytically to the punctured plane $\mathbb{C}\setminus\{-1\}$, and the function $f(z)=\frac{1}{1+z}$ is indeed the analytic continuation into $\mathbb{C}\setminus\{-1\}$ of $f_1$. In general, whenever an analytic continuation exists, it is unique.

What is functional analysis? Functional analysis is an abstract branch of mathematics, especially of analysis, concerned with the study of vector spaces of functions. These vector spaces of functions arise naturally when we study linear differential equations as solutions of a linear differential equation form a vector space. Functional analytic methods and results are important in various fields of mathematics (for example, differential geometry, ergodic theory, integral geometry, noncommutative geometry, partial differential equations, probability, representation theory etc.) and its applications, in particular, in economics, finance, quantum mechanics, quantum field theory, and statistical physics. Topics in this introductory functional analysis course include metric spaces, Banach spaces, Hilbert spaces, bounded linear operators, the spectral theorem, and unbounded linear operators.

While functional analysis is a branch of analysis, due to its nature, linear algebra is heavily used. So, it would be a good idea to brush up on linear algebra, among other things you need to study functional analysis.

In functional analysis, we study analysis on an abstract space $X$ rather than the familiar $\mathbb{R}$ or $\mathbb{C}$. In order to consider fundamental notions in analysis such as limits and convergence, we need to have distance defined on $X$ so that we can speak of nearness or closeness. A distance on $X$ can be defined as a function, called a *distance function* or a *metric*, $d: X\times X\longrightarrow\mathbb{R}^+\cup\{0\}$ satisfying the following properties:

(M1) $d(x,y)=0$ if and only if $x=y$.

(M2) $d(x,y)=d(y,x)$ (Symmetry)

(M3) $d(x,y)\leq d(x,z)+d(z,y)$ (Triangle Inequality)

Here $\mathbb{R}^+$ denotes the set of all positive real numbers. You can easily see how mathematicians came up with this definition of a metric: (M1)-(M3) are the properties that the familiar distance $d(x,y)=|x-y|$ on $\mathbb{R}$ satisfies. The space $X$ with a metric $d$ is called a *metric space* and we usually write it as $(X,d)$.

*Example*. Let $x=(\xi_1,\cdots,\xi_n), y=(\eta_1,\cdots,\eta_n)\in\mathbb{R}^n$. Define

$$d(x,y)=\sqrt{(\xi_1-\eta_1)^2+\cdots+(\xi_n-\eta_n)^2}.$$

Then $d$ is a metric on $\mathbb{R}^n$ called the *Euclidean metric*.

This time, let $x=(\xi_1,\cdots,\xi_n), y=(\eta_1,\cdots,\eta_n)\in\mathbb{C}^n$ and define

$$d(x,y)=\sqrt{|\xi_1-\eta_1|^2+\cdots+|\xi_n-\eta_n|^2}.$$

Then $d$ is a metric on $\mathbb{C}^n$ called the *Hermitian metric*. Here $|\xi_i-\eta_i|^2=(\xi_i-\eta_i)\overline{(\xi_i-\eta_i)}$.

Of course, these are pretty familiar examples. If these familiar examples were the only ones, there would be little point in considering an abstract space. In fact, the abstraction allows us to discover other examples of metrics that are not so intuitive.

*Example*. Let $X$ be the set of all bounded sequences of complex numbers

$$X=\left\{(\xi_j): \xi_j\in\mathbb{C},\ \sup_{j\in\mathbb{N}}|\xi_j|<\infty\right\}.$$

For $x=(\xi_j), y=(\eta_j)\in X$, define

$$d(x,y)=\sup_{j\in\mathbb{N}}|\xi_j-\eta_j|.$$

Then $d$ is a metric on $X$. The metric space $(X,d)$ is denoted by $\ell^\infty$.

*Example*. Let $X$ be the set of continuous real-valued functions defined on the closed interval $[a,b]$. Let $x, y:[a,b]\longrightarrow\mathbb{R}$ be continuous and define

$$d(x,y)=\max_{t\in [a,b]}|x(t)-y(t)|.$$

Then $d$ is a metric on $X$. The metric space $(X,d)$ is denoted by $\mathcal{C}[a,b]$.

In a metric space $(X,d)$, nearness or closeness can be described by a neighbourhood called an *$\epsilon$-ball* ($\epsilon>0$) centered at $x\in X$:

$$B(x,\epsilon)=\{y\in X: d(x,y)<\epsilon\}.$$

These $\epsilon$-balls form a *base* for the *topology* on $X$, called the *topology on $X$ induced by the metric $d$*.

Next time, we will discuss two more examples of metric spaces $\ell^p$ and $L^p$. These examples are particularly important in functional analysis as they become *Banach spaces*. In particular, they become *Hilbert spaces* when $p=2$.

Algebra (as a subject) is the study of algebraic structures. So, what is an algebraic structure? An *algebraic structure*, or an *algebra* for short, $\underline{A}$, is a non-empty set $A$ with a binary operation $f$. $\underline{A}$ is usually written as the ordered pair

$$\underline{A}=(A,f).$$

A *binary operation* $f$ on a set $A$ is a function $f: A\times A\longrightarrow A$. An example of a binary operation is addition $+$ on the set of integers $\mathbb{Z}$: $+$ is a function $+:\mathbb{Z}\times\mathbb{Z}\longrightarrow\mathbb{Z}$ defined by $+(1,1)=2$, $+(1,2)=3$, and so on. We usually write $+(1,1)=2$ as $1+1=2$. In general, one may consider an *$n$-ary operation* $f:\prod_{i=1}^n A\longrightarrow A$, where $\prod_{i=1}^n A$ denotes the product of $n$ copies of $A$, $A\times A\times\cdots\times A$.

There are many different kinds of algebras. Let me mention some algebras with a single binary operation here. For starters, $(A,\cdot)$, a non-empty set $A$ with a binary operation $\cdot$, is called a *groupoid*. A groupoid $(A,\cdot)$ satisfying the associative law

$$(ab)c=a(bc)$$

for any $a,b,c\in A$ is called a *semigroup*. If the semigroup has an identity element $e\in A$, i.e.

$$ae=ea=a$$

for any $a\in A$, it is called a *monoid*. If for every element $a$ of the monoid $A$, there exists an inverse element $a^{-1}\in A$ such that $aa^{-1}=a^{-1}a=e$, the monoid is called a *group*. A group $(A,\cdot)$ with commutative law i.e.

$$ab=ba,$$

for any $a,b\in A$ is called an *abelian group*, named after the Norwegian mathematician Niels Abel. Note that the inverse ${}^{-1}$ can be regarded as an operation on $A$, a *unary operation* ${}^{-1}: A\longrightarrow A$ defined by ${}^{-1}(a)=a^{-1}$ for each $a\in A$. The identity element $e$ can also be regarded as an operation, a *nullary operation* $e:\{\varnothing\}\longrightarrow A$. Thus, formally a group can be written as $(A,\cdot,{}^{-1},e)$, a quadruple of a non-empty set, a binary operation, a unary operation, and a nullary operation.
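These axioms are easy to check by brute force for a small example such as $(\mathbb{Z}_6,+\bmod 6)$; a minimal Python sketch (names ours):

```python
# Brute-force check of the (abelian) group axioms for Z_6 under + mod 6.
n = 6
A = range(n)
op = lambda a, b: (a + b) % n

closed   = all(op(a, b) in A for a in A for b in A)
assoc    = all(op(op(a, b), c) == op(a, op(b, c))
               for a in A for b in A for c in A)
e = 0
identity = all(op(a, e) == a == op(e, a) for a in A)
inverses = all(any(op(a, b) == e for b in A) for a in A)
abelian  = all(op(a, b) == op(b, a) for a in A for b in A)
print(closed, assoc, identity, inverses, abelian)  # True True True True True
```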

Now we know what a group is, and apparently group theory is the study of groups. But what exactly are we studying there? What I am about to say is not really limited to group theory but commonly applies to studying other algebraic structures as well. Briefly, there are two main objectives in studying groups. One is the classification of groups. This becomes particularly interesting with groups of finite order. Here *the order of a group* means the number of elements of the group. We would like to answer the question “*how many different groups of order $n$ are there for each $n$ and what are they?*” The classification gets harder as $n$ gets larger. There are groups with the same order that appear to be different. But don’t be deceived by the appearance. They may actually be the same group. What do we mean by *same* here? We say two groups of the same order are the same if there is a one-to-one and onto map (a *bijection*) between them that preserves the operations. Such a map is called an *isomorphism*. It turns out that if a map $\psi: G\longrightarrow G’$ from a group $G$ to another group $G’$ preserves the binary operation, it automatically preserves the unary and nullary operations. Here, by *preserving the binary operation* we mean

$$\psi(ab)=\psi(a)\psi(b)$$

for any $a,b\in G$. If you have taken linear algebra (and I believe you have), you will notice that a linear map is a map that preserves vector addition and scalar multiplication. A map $\psi: G\longrightarrow G’$ which preserves the binary operation is called a *homomorphism*. If a homomorphism $\psi: G\longrightarrow G’$ is one-to-one and onto, it is an isomorphism. An isomorphism $\psi: G\longrightarrow G$ from a group $G$ onto itself is called an *automorphism*. In group theory, if there is an isomorphism from a group to another group, we do not distinguish them, no matter how different they may appear. The other objective is to discover new groups from old groups. Some of the new groups may be smaller in size than the old ones; here, by *smaller in size* we mean having a smaller number of elements, i.e. a smaller order. Examples include *subgroups* and *quotient groups*. Some of the new groups are larger in size than the old ones; an example is *direct products*. Subgroups, quotient groups (also called *factor groups*), and direct products are the things we will study as means to get new groups from old groups.

Group theory has a significance in geometry. In geometry, symmetry plays an important role. There are different types of symmetries: reflections, rotations, and translations. An interesting connection between geometry and group theory is that these symmetries form groups (symmetry groups). The most general symmetry group of finite order is called a *symmetric group*. In mathematics, an *embedding theorem* is conceptually and philosophically important (though it may be practically less important). When we study mathematics, we often feel that the structures we study are highly abstract, and we feel like they exist only in our consciousness but not in the physical world. An embedding theorem tells us that those abstract structures we study are indeed substructures of a larger structure that we are familiar with in the physical world. The embedding theorem suggests that we are not making up those abstract mathematical structures but are merely discovering them as they already exist in the universe. This kind of viewpoint is called *Mathematical Platonism*. It turns out that there is an embedding theorem in finite group theory, namely *every group of finite order is isomorphic to a subgroup of a symmetric group*. This embedding theorem is called *Cayley’s theorem*. It means that the study of finite groups boils down to studying symmetric groups.

*Remark*. There is a mathematical structure called an *algebra over a field* $K$ (usually $K=\mathbb{R}$ or $K=\mathbb{C}$). An algebra $\mathcal{A}$ over a field $K$ is a vector space over $K$ with a product $\cdot:\mathcal{A}\times\mathcal{A}\longrightarrow\mathcal{A}$ which is distributive over addition:

$$a(b+c)=ab+ac,\ (a+b)c=ac+bc,\ \forall\ a,b,c\in\mathcal{A}.$$

(Here, the symbol $\forall$ is a logical symbol which means “for each”, “for any”, “for every”, or “for all” depending on the context. I will talk more about logical symbols next time as I will use them often.) Note that an algebra $\mathcal{A}$ over a field $K$ is not an algebra in the sense defined above, because scalar multiplication is not an operation on $\mathcal{A}$. Scalar multiplication is in fact an *action* of the multiplicative group $K\setminus\{0\}$ on $\mathcal{A}$. Algebras over a field $K$ are important structures in functional analysis.

*Example*. Evaluate $\int_{-\infty}^\infty\frac{\cos 3x}{(x^2+1)^2}dx$.

*Solution*. Let $f(z)=\frac{1}{(z^2+1)^2}$. Then $f(z)e^{3iz}$ is analytic everywhere on and above the real axis except at $z=i$. Let $C_R$ be the upper semi-circle centered at the origin with radius $R>1$. Then by Cauchy’s Residue Theorem,

$$\int_{-R}^R\frac{e^{i3x}}{(x^2+1)^2}dx=2\pi i B_1-\int_{C_R}f(z)e^{i3z}dz,$$

where $B_1=\mathrm{Res}_{z=i}[f(z)e^{i3z}]$. $f(z)e^{i3z}$ can be written as

$$f(z)e^{i3z}=\frac{\phi(z)}{(z-i)^2},$$

where $\phi(z)=\frac{e^{i3z}}{(z+i)^2}$. Since $z=i$ is a pole of order $2$ of $f(z)$,

$$B_1=\phi'(i)=\frac{1}{ie^3}.$$

On $C_R$, $|z|=R$, and so by the triangle inequality $|z^2+1|\geq ||z|^2-1|=R^2-1$. Hence we obtain

$$|(z^2+1)^2|\geq (R^2-1)^2$$

and thereby

$$|f(z)|\leq\frac{1}{(R^2-1)^2}.$$

Also, $|e^{i3z}|=e^{-3y}\leq 1$ for all $y\geq 0$. Hence, we find that

$$\left|\mathrm{Re}\int_{C_R}f(z)e^{i3z}dz\right|\leq\left|\int_{C_R}f(z)e^{i3z}dz\right|\leq\frac{\pi R}{(R^2-1)^2}\to 0$$

as $R\to\infty$. Therefore,

$$\int_{-\infty}^\infty\frac{e^{i3x}}{(x^2+1)^2}dx=\frac{2\pi}{e^3},$$

and taking real parts, $\displaystyle\int_{-\infty}^\infty\frac{\cos 3x}{(x^2+1)^2}dx=\frac{2\pi}{e^3}$.
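The value can be verified numerically; a minimal Python sketch using SciPy's quad routine (assuming SciPy is available):

```python
import numpy as np
from scipy.integrate import quad

# Numerical check: the integral should equal 2*pi/e^3 ~ 0.31284.
val, err = quad(lambda x: np.cos(3 * x) / (x**2 + 1)**2,
                -np.inf, np.inf, limit=200)
print(val, 2 * np.pi / np.e**3)
```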

The improper integrals of a function $f$ over $[0,\infty)$ and $(-\infty,\infty)$ are defined by

\begin{align*}

\int_0^\infty f(x)dx&=\lim_{R\to\infty}\int_0^R f(x)dx,\\

\int_{-\infty}^\infty f(x)dx&=\lim_{R_1\to\infty}\int_{-R_1}^0 f(x)dx+\lim_{R_2\to\infty}\int_0^{R_2}f(x)dx.

\end{align*}

The *Cauchy principal value* (P.V.) of the improper integral $\displaystyle\int_{-\infty}^\infty f(x)dx$ is defined by

$$\mathrm{P.V.}\int_{-\infty}^\infty f(x)dx=\lim_{R\to\infty}\int_{-R}^R f(x)dx.$$

The Cauchy principal value of an improper integral is not necessarily the same as the improper integral. For example,

$$\mathrm{P.V.}\int_{-\infty}^\infty xdx=\lim_{R\to\infty}\int_{-R}^R xdx=0,$$

while

$$\int_{-\infty}^\infty xdx=\lim_{R_1\to\infty}\int_{-R_1}^0xdx+\lim_{R_2\to\infty}\int_0^{R_2}xdx=-\infty+\infty$$

is undefined. In general, if $\displaystyle\int_{-\infty}^\infty f(x)dx$ exists, then $\mathrm{P.V.}\displaystyle\int_{-\infty}^\infty f(x)dx$ exists and the two values coincide, but the converse need not be true. Suppose that $f(x)$ is an even function. Then

\begin{align*}

\int_0^R f(x)dx&=\frac{1}{2}\int_{-R}^R f(x)dx,\\

\int_{-R_1}^0 f(x)dx&=\int_0^{R_1}f(x) dx.

\end{align*}

So,

$$\mathrm{P.V.}\int_{-\infty}^\infty f(x)dx=\int_{-\infty}^\infty f(x)dx=2\int_0^\infty f(x)dx.$$

Let us consider an even function $f(x)$ of the form $f(x)=\frac{p(x)}{q(x)}$, where $p(x)$ and $q(x)$ are polynomials with real coefficients and no factors in common. Furthermore, we assume that $q(z)$ has no real zeros but has at least one zero above the real axis. Let us consider a positively oriented upper semicircle $C_R$ whose radius $R$ is large enough to contain all the zeros above the real axis, as shown in the figure below.

$C_R$ together with the interval $[-R,R]$ forms a positively oriented simple closed contour. Then by Cauchy’s Residue Theorem we have

$$\int_{-R}^R f(x)dx+\int_{C_R} f(z)dz=2\pi i\sum_{k=1}^n\mathrm{Res}_{z=z_k}f(z),$$

i.e.

$$\int_{-R}^R f(x)dx=2\pi i\sum_{k=1}^n\mathrm{Res}_{z=z_k}f(z)-\int_{C_R} f(z)dz.$$

If $\lim_{R\to\infty}\int_{C_R} f(z)dz=0$ then

$$\mathrm{P.V.}\int_{-\infty}^\infty f(x)dx=2\pi i\sum_{k=1}^n\mathrm{Res}_{z=z_k}f(z).$$

If in addition $f(x)$ is even,

$$\int_{-\infty}^\infty f(x)dx=2\pi i\sum_{k=1}^n\mathrm{Res}_{z=z_k}f(z)$$

or

$$\int_0^\infty f(x)dx=\pi i\sum_{k=1}^n\mathrm{Res}_{z=z_k}f(z).$$

*Example*. Let us consider the improper integral

$$\int_0^\infty\frac{x^2}{x^6+1}dx.$$

The function $f(z)=\frac{z^2}{z^6+1}$ has isolated singularities at the zeros of $z^6+1$ and is analytic everywhere else. The equation $z^6=-1$ has the solutions (the sixth roots of $-1$)

$$c_k=\exp\left[i\left(\frac{\pi}{6}+\frac{2k\pi}{6}\right)\right],\ k=0,1,\cdots,5.$$

The first three roots

$$c_0=e^{i\frac{\pi}{6}},\ c_1=i,\ c_2=e^{i\frac{5\pi}{6}}$$

lie in the upper half plane. Let us consider a positively oriented upper semicircle $C_R$ whose radius $R$ is greater than $1$.

Then

$$\int_{-R}^Rf(x)dx=2\pi i(B_0+B_1+B_2)-\int_{C_R}f(z)dz,$$

where $B_k$ is the residue of $f(z)$ at $c_k$, $k=0,1,2$. Since each $c_k$ is a simple pole of $f(z)$, $B_k$ can be found by the formula for residues at simple poles:

$$B_k=\mathrm{Res}_{z=c_k}\frac{z^2}{z^6+1}=\frac{c_k^2}{6c_k^5}=\frac{1}{6c_k^3},\ k=0,1,2.$$

Thus, we obtain

$$2\pi i(B_0+B_1+B_2)=2\pi i\left(\frac{1}{6i}-\frac{1}{6i}+\frac{1}{6i}\right)=\frac{\pi}{3}$$

and hence,

$$\int_{-R}^R f(x)dx=\frac{\pi}{3}-\int_{C_R}f(z)dz.$$

On $C_R$, $|z|=R$ so

$$|z^6+1|\geq ||z|^6-1|=|R^6-1|=R^6-1$$

and thereby we obtain

$$|f(z)|\leq\frac{R^2}{R^6-1}.$$

Since the length of $C_R$ is $\pi R$,

$$\left|\int_{C_R} f(z)dz\right|\leq\frac{R^2}{R^6-1}\cdot\pi R\to 0$$

as $R\to\infty$. Hence,

$$\mathrm{P.V.}\int_{-\infty}^\infty\frac{x^2}{x^6+1}dx=\lim_{R\to\infty}\int_{-R}^R\frac{x^2}{x^6+1}dx=\frac{\pi}{3}.$$

Since the integrand is even,

$$\int_{-\infty}^\infty\frac{x^2}{x^6+1}dx=\frac{\pi}{3}$$

and

$$\int_0^\infty\frac{x^2}{x^6+1}dx=\frac{\pi}{6}.$$
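This value can be confirmed symbolically; a minimal Python sketch using SymPy (assuming SymPy is available and can close this rational-function integral):

```python
from sympy import symbols, integrate, oo

x = symbols('x', real=True)
print(integrate(x**2 / (x**6 + 1), (x, 0, oo)))  # pi/6
```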

*Theorem* 1. A function $f$ that is analytic at a point $z_0$ has a zero of order $m$ there if and only if there is a function $g$, which is analytic and nonzero at $z_0$, such that

$$f(z)=(z-z_0)^mg(z).$$

*Theorem* 2. Suppose that:

(i) two functions $p$ and $q$ are analytic at a point $z_0$;

(ii) $p(z_0)\ne 0$ and $q$ has a zero of order $m$ at $z_0$.

Then $\frac{p(z)}{q(z)}$ has a pole of order $m$ at $z_0$.

Now we discuss our main theorem in this lecture.

*Theorem*. Let two functions $p$ and $q$ be analytic at a point $z_0$. If

$$p(z_0)\ne 0,\ q(z_0)=0,\ \mbox{and}\ q'(z_0)\ne 0,$$

then $z_0$ is a simple pole of $\frac{p(z)}{q(z)}$ and

$$\mathrm{Res}_{z=z_0}\frac{p(z)}{q(z)}=\frac{p(z_0)}{q'(z_0)}.$$

*Proof*. From the conditions, we see that $q(z)$ has a zero of order $1$ at $z_0$, so by Theorem 1 it can be written as

$$q(z)=(z-z_0)g(z)$$

where $g(z)$ is analytic at $z=z_0$ and $g(z_0)\ne 0$. This can be in fact readily seen without quoting theorem 1. Since $q(z)$ is analytic at $z_0$, it can be written as a Taylor series expansion

\begin{align*}

q(z)&=q(z_0)+\frac{q'(z_0)}{1!}(z-z_0)+\frac{q^{\prime\prime}(z_0)}{2!}(z-z_0)^2+\cdots\\

&=\frac{q'(z_0)}{1!}(z-z_0)+\frac{q^{\prime\prime}(z_0)}{2!}(z-z_0)^2+\cdots\\

&=(z-z_0)\left\{\frac{q'(z_0)}{1!}+\frac{q^{\prime\prime}(z_0)}{2!}(z-z_0)+\cdots\right\}.

\end{align*}

Set $g(z)=\frac{q'(z_0)}{1!}+\frac{q^{\prime\prime}(z_0)}{2!}(z-z_0)+\cdots$. Then $g(z)$ is analytic at $z=z_0$ and $g(z_0)=q'(z_0)\ne 0$.

Now, Theorem 2 implies that $\frac{p(z)}{q(z)}$ has a simple pole at $z=z_0$. Alternatively, without quoting Theorem 2, write $\frac{p(z)}{q(z)}=\frac{\frac{p(z)}{g(z)}}{z-z_0}$, where $\frac{p(z)}{g(z)}$ is analytic and nonzero at $z=z_0$; so $\frac{p(z)}{q(z)}$ has a simple pole at $z=z_0$. Thus,

$$\mathrm{Res}_{z=z_0}\frac{p(z)}{q(z)}=\frac{p(z_0)}{g(z_0)}=\frac{p(z_0)}{q'(z_0)}.$$

This completes the proof.

*Example*. Find the residue of the function

$$f(z)=\frac{z}{z^4+4}$$

at the isolated singularity $z_0=\sqrt{2}e^{\frac{i\pi}{4}}=1+i$.

*Solution*. Let $p(z)=z$ and $q(z)=z^4+4$. Then $p(z_0)=p(1+i)=1+i\ne 0$, $q(z_0)=0$, and $q'(z_0)=4z_0^3=4(1+i)^3\ne 0$. Hence, $f(z)$ has a simple pole at $z_0$. The residue $B_0$ is found by

$$B_0=\mathrm{Res}_{z=z_0}f(z)=\frac{p(z_0)}{q'(z_0)}=\frac{z_0}{4z_0^3}=\frac{1}{4z_0^2}=-\frac{i}{8}.$$
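The residue can be double-checked with SymPy's residue function (assuming SymPy is available):

```python
from sympy import I, residue, simplify, symbols

z = symbols('z')
print(simplify(residue(z / (z**4 + 4), z, 1 + I)))  # -I/8
```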

