
Chapter 1 CA1: Maclaurin Series

Section 1.1 Introduction

Have you ever wondered how computers actually calculate things? Computers can only ever do finitely many steps with finitely many bits of information. The inputs in a calculation are numbers represented by finitely many (decimal or binary) digits, for example \(x=0.3\) and \(y=1.744\text{.}\) You probably learned in primary school how to do long addition or multiplication with pencil and paper, and you could imagine that your computer might follow a similar algorithm and compute \(x+y = 2.044\) in a finite number of steps. But then how does it calculate \(e^x\) or \(\sin(x)\text{,}\) or other non-polynomial functions?
The solution is to approximate the function using a power series. For example, the computer approximates \(e^{0.3}\) by adding finitely many terms of
\begin{equation} e^{0.3} = 1 + 0.3 + \frac{1}{2}(0.3)^2 + \frac{1}{6}(0.3)^3 + \frac{1}{24}(0.3)^4 + \cdots \tag{1.1} \end{equation}
In other words, it approximates the function \(e^x\) via the power series
\begin{align*} e^x \amp = 1 + x + \frac{1}{2}x^2 + \frac{1}{6}x^3 + \frac{1}{24}x^4 + \cdots \\ \amp = \sum_{n=0}^{\infty} \frac{1}{n!}x^n. \end{align*}
How does the computer know how many of these terms to add? The short answer is: as many as needed to get the required level of accuracy. The long answer is much more complicated, and is the subject of a whole other area of Mathematics called Numerical Analysis.
How does the computer know to use this particular series? And what even is a series? That’s what we’ll learn in this chapter!

Section 1.2 Infinite Series

Let’s look carefully at the calculation (1.1). We will add the terms one by one and keep an eye on the cumulative total, more formally known as the partial sums:
Number of Terms | Partial Sum | Value
\(1\) | \(1\) | \(1\)
\(2\) | \(1+0.3\) | \(1.3\)
\(3\) | \(1+0.3+\dfrac{0.3^2}{2!}\) | \(1.345\)
\(4\) | \(1+0.3+\dfrac{0.3^2}{2!}+\dfrac{0.3^3}{3!}\) | \(1.3495\)
\(5\) | \(1+0.3+\dfrac{0.3^2}{2!}+\dfrac{0.3^3}{3!}+\dfrac{0.3^4}{4!}\) | \(1.3498375\)
\(6\) | \(1+0.3+\dfrac{0.3^2}{2!}+\dfrac{0.3^3}{3!}+\dfrac{0.3^4}{4!}+\dfrac{0.3^5}{5!}\) | \(1.34985775\)
If you’re viewing this online, you can compute this yourself by running the following Sage cell:
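A minimal sketch of what that cell might contain (the cell itself is interactive online):

# Compute the partial sums of the series for e^0.3, building
# every term from scratch with a power and a factorial.
x = 0.3
total = 0
for k in range(6):
    total += x^k / factorial(k)
    print(k + 1, total)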
This computation is actually quite inefficient, since we’re calculating each term from scratch. Notice that the \(k^{\mathrm{th}}\) term is obtained by multiplying the previous term by \(\frac{x}{k}\text{:}\)
\begin{equation*} \frac{x^k}{k!} = \left(\frac{x^{k-1}}{(k-1)!} \right) \cdot \frac{x}{k}. \end{equation*}
Firstly, this allows us to calculate the sum more efficiently:
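Here is a sketch of the more efficient version (again, our own reconstruction of the missing cell):

# Same partial sums, but each term is obtained from the previous
# one by multiplying by x/k, so no powers or factorials are
# recomputed.
x = 0.3
term = 1
total = 0
for k in range(6):
    total += term
    print(k + 1, total)
    term *= x / (k + 1)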
Secondly, as we add more terms, \(k\) gets ever bigger while \(x\) stays the same, so the multiplier \(\frac{x}{k}\) we use to get from one term to the next gets ever smaller. This means that the terms shrink to zero faster than any geometric sequence, and we will see in the next chapter (Chapter 2) that the series converges for every value of \(x\text{.}\)
In the above calculation, we have only computed finitely many terms, which is all one ever needs for applications. Mathematically, however, (1.1) is an infinite series, and its value represents the sum of all infinitely many terms. What can this mean?
We have seen that the partial sums tend to \(1.34985880757600\ldots\text{,}\) and this limit is defined to be the sum of the infinite series:
\begin{equation*} e^{0.3} = \sum_{k=0}^{\infty} \frac{(0.3)^k}{k!} := \lim_{N\rightarrow\infty} \sum_{k=0}^{N} \frac{(0.3)^k}{k!} \end{equation*}

Definition 1.1.

Given a sequence \(a_0, a_1, a_2, \ldots\) of real numbers (the terms), we define the series
\begin{equation} \sum_{k=0}^{\infty} a_k := \lim_{N\rightarrow\infty} \sum_{k=0}^N a_k\tag{1.2} \end{equation}
to be the limit of the partial sums (running totals) of the terms, provided that this limit exists.
If the limit (1.2) exists, we say that the series converges. Otherwise, it diverges.
We will learn more about convergence and divergence in the next chapter. For now, we’re interested in using infinite series to represent useful functions, like \(e^x\text{,}\) \(\sin(x)\) or \(\cos(x)\text{.}\)

Section 1.3 Power Series

Our first example above shows that the function \(e^x\) can be expressed as an infinite series in the following form:
\begin{equation} e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots = \sum_{k=0}^{\infty} \frac{x^k}{k!}.\tag{1.3} \end{equation}
Note that each term consists of a coefficient (for example \(\frac{1}{3!}\)) multiplied by some power of the variable (for example \(x^3\)), so this is an example of what we call a power series. Power series are particularly useful, as we can approximate them by the sum of the first few terms, which is then a polynomial. We’re very comfortable with polynomials. Figure 1.2 shows how the first few partial sums of the series (1.3) approximate the function \(e^x\text{.}\)
Figure 1.2. Approximation to \(e^x\) by various partial sums of its power series: a plot of \(y=e^x\) together with the partial sums of degrees \(0, 1, 2\) and \(3\text{.}\)
Assuming for now that the series (1.3) converges, how do we know that its sum really equals \(e^x\text{?}\) And how does one find such a series in the first place?

Section 1.4 Maclaurin Series

Let’s suppose for the moment that a power series representation for a function \(f(x)\) exists, but we don’t know what the coefficients should be:
\begin{equation} f(x) = c_0 + c_1x + c_2x^2 + \cdots.\tag{1.4} \end{equation}
We can find the value of the first coefficient by the clever trick of plugging \(x=0\) into (1.4):
\begin{equation*} f(0) = c_0 + c_1\cdot 0 + c_2\cdot 0^2 + \cdots = c_0. \end{equation*}
What about the next coefficient? We use an even cleverer trick: differentiating the series kills the constant term and lowers the exponent in every other term:
\begin{align*} f'(x) \amp = \frac{d}{dx} \big( c_0 + c_1x + c_2x^2 + c_3x^3 + \cdots \big) \\ \amp = c_1 + 2c_2x + 3c_3x^2 + \cdots. \end{align*}
Plugging \(x=0\) into this gives us \(c_1 = f'(0)\text{.}\) Let’s differentiate another time:
\begin{align*} f''(x) \amp = \frac{d}{dx} \big( c_1 + 2c_2x + 3c_3x^2 + 4c_4x^3 + \cdots \big)\\ \amp = 2c_2 + 6c_3x + 12 c_4x^2 + \cdots\\ \Rightarrow f''(0) \amp = 2c_2 \end{align*}
so \(c_2 = \frac{1}{2}f''(0)\text{,}\) next \(c_3 = \frac{1}{6}f'''(0)\text{,}\) etc. In general, we find:

Theorem 1.3.

If a function \(f(x)\) has a power series representation (1.4), then its coefficients are given by \(c_k = \frac{f^{(k)}(0)}{k!}\text{,}\) that is,
\begin{equation} f(x) = f(0) + f'(0)x + \frac{f''(0)}{2!}x^2 + \frac{f'''(0)}{3!}x^3 + \cdots = \sum_{k=0}^{\infty} \frac{f^{(k)}(0)}{k!}x^k.\tag{1.5} \end{equation}

Checkpoint 1.4.

Use induction on \(k\) to prove that the expression in Theorem 1.3 is correct.
The series expansion (1.5) is called the Maclaurin series for the function \(f(x)\text{.}\)
So now that we know how to create power series for functions using their derivatives, let’s do the same for two more important functions:

Example 1.5.

Compute the Maclaurin series for \(\sin(x)\text{.}\)
Answer.
\begin{equation*} \sin(x) = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots = \sum_{k=0}^{\infty} (-1)^k\frac{x^{2k+1}}{(2k+1)!} \end{equation*}
Solution.
We compute the derivatives of \(f(x)=\sin(x)\text{:}\)
\begin{align*} f(x) \amp = \sin(x) \amp f'(x) \amp = \cos(x) \\ f''(x) \amp = -\sin(x) \amp f'''(x) \amp = -\cos(x) \end{align*}
thereafter, the pattern repeats, so \(f^{(4)}(x) = \sin(x)\) again, etc.
Plugging in \(x=0\text{,}\) we get
\begin{align*} f(0) \amp = 0 \amp f'(0) \amp = 1 \\ f''(0) \amp = 0 \amp f'''(0) \amp = -1, \quad\text{etc.} \end{align*}
Now we put it all together:
\begin{align*} f(x) \amp = f(0) + f'(0)x + \frac{f''(0)x^2}{2!} + \cdots \\ \amp = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots \\ \amp = \sum_{k=0}^{\infty} (-1)^k\frac{x^{2k+1}}{(2k+1)!}. \end{align*}
To get the last expression, note that the non-zero terms correspond to odd powers of \(x\text{,}\) and odd numbers are of the form \(2k+1\text{.}\) The \((-1)^k\) then ensures the alternating signs.

Example 1.6.

Compute the Maclaurin series for \(\cos(x)\text{.}\)
Answer.
\begin{equation*} \cos(x) = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \cdots = \sum_{k=0}^{\infty} (-1)^k\frac{x^{2k}}{(2k)!} \end{equation*}
Solution.
We compute the derivatives of \(f(x)=\cos(x)\text{:}\)
\begin{align*} f(x) \amp = \cos(x) \amp f'(x) \amp = -\sin(x) \\ f''(x) \amp = -\cos(x) \amp f'''(x) \amp = \sin(x) \end{align*}
thereafter, the pattern repeats, so \(f^{(4)}(x) = \cos(x)\) again, etc.
Plugging in \(x=0\text{,}\) we get
\begin{align*} f(0) \amp = 1 \amp f'(0) \amp = 0 \\ f''(0) \amp = -1 \amp f'''(0) \amp = 0, \quad\text{etc.} \end{align*}
Now we put it all together:
\begin{align*} f(x) \amp = f(0) + f'(0)x + \frac{f''(0)x^2}{2!} + \cdots \\ \amp = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \cdots \\ \amp = \sum_{k=0}^{\infty} (-1)^k\frac{x^{2k}}{(2k)!}. \end{align*}
To get the last expression, note that the non-zero terms correspond to even powers of \(x\text{,}\) and even numbers are of the form \(2k\text{.}\) The \((-1)^k\) then ensures the alternating signs.
Let’s record the Maclaurin series we have found thus far:
\begin{align*} e^x \amp = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots = \sum_{k=0}^{\infty} \frac{x^k}{k!}\\ \sin(x) \amp = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots = \sum_{k=0}^{\infty} (-1)^k\frac{x^{2k+1}}{(2k+1)!} \\ \cos(x) \amp = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \cdots = \sum_{k=0}^{\infty} (-1)^k\frac{x^{2k}}{(2k)!}. \end{align*}

Section 1.5 Euler’s Formula

You may have noticed that the Maclaurin series for \(e^x, \sin(x)\) and \(\cos(x)\) all look very similar - the series for \(\sin(x) + \cos(x)\) has exactly the same terms as the series for \(e^x\text{,}\) except for the differences in signs.
It turns out we can fix these signs if we substitute \(x = i\theta\text{,}\) where \(i=\sqrt{-1}\) is the imaginary unit. Using the fact that
\begin{align*} \amp i^0 = 1, \amp \amp i^1 = i, \amp \amp i^2 = -1, \amp \amp i^3 = -i, \amp \amp i^4 = i^0 = 1, \amp \amp i^5 = i^1 = i, \; \text{etc.} \end{align*}
we find that
\begin{align*} e^{i\theta} \amp = 1 + i\theta + i^2\frac{\theta^2}{2!} + i^3\frac{\theta^3}{3!} + i^4\frac{\theta^4}{4!} + \cdots \\ \amp = \big[1 - \frac{\theta^2}{2!} + \frac{\theta^4}{4!} - \cdots\big] + i\big[\theta - \frac{\theta^3}{3!} + \frac{\theta^5}{5!} - \cdots\big]\\ \amp = \cos(\theta) + i\sin(\theta). \end{align*}
In particular, if we set \(\theta = \pi\text{,}\) we obtain the following amazing formula, which combines the five most important constants in Mathematics:
\begin{equation} e^{\pi i} + 1 = 0\tag{1.6} \end{equation}
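If you're viewing this online, you can check this numerically by summing the first few terms of the series for \(e^{i\pi}\) in Sage (a minimal sketch; the cutoff of 20 terms is our choice):

# Approximate e^(i*pi) by the first 20 terms of its power series.
z = CDF(0, pi)          # the complex number i*pi, as a complex double
s = sum(z^k / factorial(k) for k in range(20))
print(s)                # very close to -1 + 0*I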

Section 1.6 Maclaurin Polynomials

Now we have a method for obtaining power series expressions for functions. But these are infinite series, and for most applications we want to approximate our functions with finite polynomials. That’s really easy - we just cut off the series after a finite number of terms:

Definition 1.8.

The degree \(n\) Maclaurin polynomial of a function \(f(x)\) is
\begin{equation*} T_n f(x) := f(0) + f'(0)x + \frac{f''(0)x^2}{2!} + \cdots + \frac{f^{(n)}(0)x^n}{n!}. \end{equation*}
Note that the degree \(n\) Maclaurin polynomial will generally consist of \(n+1\) terms. However, it will have degree less than \(n\) if the coefficient of \(x^n\) equals zero, for example
\begin{equation*} T_3\cos(x) = 1 - \frac{1}{2}x^2. \end{equation*}
The degree 0 Maclaurin polynomial of \(f(x)\) is not very interesting: it's just the constant function \(f(0)\text{.}\)
You'll recognise the degree 1 Maclaurin polynomial \(T_1f(x) = f(0) + f'(0)x\text{:}\) its graph is the tangent to \(y=f(x)\) at \(x=0\text{.}\) We also call it the linear approximation to \(f(x)\text{:}\) it is the linear function that most closely approximates \(f(x)\) near \(x=0\text{.}\)
Similarly, the degree 2 Maclaurin polynomial \(T_2f(x) = f(0) + f'(0)x + \frac{f''(0)}{2}x^2\) is called the quadratic approximation to \(f(x)\text{.}\) It is the quadratic function closest to \(f(x)\) near \(x=0\text{.}\)
The following Sage cell lets you plot the Maclaurin polynomials for \(e^x, \sin(x)\) and \(\cos(x)\text{.}\) Notice that when we increase the order (i.e. the number of terms in the Maclaurin polynomial), the approximation of the function about \(x=0\) improves:
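A sketch of what such a plotting cell might contain (the choice of \(\sin(x)\) and of the degrees shown is ours):

# Plot sin(x) against its Maclaurin polynomials of increasing degree.
f = sin(x)
G = plot(f, (x, -4, 4), color='black', thickness=2)
for n, col in [(1, 'red'), (3, 'green'), (5, 'blue'), (7, 'orange')]:
    G += plot(f.taylor(x, 0, n), (x, -4, 4), color=col)
G.show(ymin=-3, ymax=3)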

Remark 1.9. Computer algebra systems.

Computer Algebra Systems usually have a command for calculating Maclaurin series. For example, Figure 1.10 shows part of the output from a Wolfram Alpha (www.wolframalpha.com/) query.
Figure 1.10.
We have already seen Sage code to compute Maclaurin series; the simplest version is a one-liner using Sage's built-in taylor command, sketched below:
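# taylor(f, x, a, n) with centre a = 0 returns the degree-n
# Maclaurin polynomial directly.
taylor(e^x, x, 0, 5)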

Section 1.7 Advanced Topics

Subsection 1.7.1 Computing \(\pi\)

\(\pi = 3.14159265358979323846264338327950288419716939937510582097\ldots\)
It’s not hard to show, using repeated differentiation, that the Maclaurin series for \(\arctan(x)\) is
\begin{equation} \arctan(x) = x - \frac{x^3}{3} + \frac{x^5}{5} - \frac{x^7}{7} + \cdots = \sum_{k=0}^{\infty} (-1)^k\frac{x^{2k+1}}{2k+1}.\tag{1.7} \end{equation}
Since \(\arctan(1) = \frac{\pi}{4}\text{,}\) this gives us an infinite series converging to \(\pi\text{:}\)
\begin{equation} \pi = 4 - \frac{4}{3} + \frac{4}{5} - \frac{4}{7} + \cdots.\tag{1.8} \end{equation}
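The Sage cell below computes the partial sums of this series up to some \(N\) (a minimal sketch of the missing cell):

# Partial sums of the series (1.8); increase N and watch the
# (slow) convergence to pi.
N = 1000
s = sum((-1)^k * 4 / (2*k + 1) for k in range(N))
print(s.n())
print(pi.n())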
As you can see from this computation, convergence is extremely slow. See how big you must make N in the above code before you get two correct digits after the decimal point.
One can compute \(\pi\) much faster by using identities such as the following, due to John Machin (in 1706):
\begin{equation} \frac{\pi}{4} = 4\arctan\frac{1}{5} - \arctan\frac{1}{239}.\tag{1.9} \end{equation}
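You can try this identity in Sage; here is a sketch (the helper arctan_series is our own, not from the text):

# Approximate arctan by its truncated Maclaurin series (1.7).
def arctan_series(t, N):
    return sum((-1)^k * t^(2*k + 1) / (2*k + 1) for k in range(N))

# Machin's identity (1.9): ten terms of each series already give
# pi to many decimal places.
machin = 4 * (4*arctan_series(1/5, 10) - arctan_series(1/239, 10))
print(machin.n(digits=20))
print(pi.n(digits=20))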
Modern algorithms to compute \(\pi\) are more sophisticated and much faster still; see https://en.wikipedia.org/wiki/Pi. We mention only one method, the BBP (Bailey-Borwein-Plouffe) Algorithm:
\begin{equation} \pi = \sum_{k=0}^{\infty} \frac{1}{16^k} \left( \frac{4}{8k+1} - \frac{2}{8k+4} - \frac{1}{8k+5} - \frac{1}{8k+6} \right)\tag{1.10} \end{equation}
This formula not only converges extremely quickly, but it also allows you to compute specific hexadecimal digits of \(\pi\) without first computing the previous digits.
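A sketch of a Sage computation with this formula (the cutoff of ten terms is our choice):

# Partial sums of the BBP formula (1.10): ten terms already give
# roughly a dozen correct digits.
s = sum(1/16^k * (4/(8*k+1) - 2/(8*k+4) - 1/(8*k+5) - 1/(8*k+6))
        for k in range(10))
print(s.n(digits=15))
print(pi.n(digits=15))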

Subsection 1.7.2 A function without Maclaurin series

Consider the function
\begin{equation*} f(x) = \begin{cases} e^{-1/x^2} \amp \text{if} \; x\neq 0 \\ 0 \amp \text{if} \; x=0. \end{cases} \end{equation*}
This function is well-defined and continuous at \(x=0\text{,}\) since you can compute the limit
\begin{equation*} \lim_{x \rightarrow 0} e^{-1/x^2} = 0. \end{equation*}
Figure 1.11. Plot of \(y=e^{-1/x^2}\)
One can show that this function is differentiable, and indeed has derivatives of all orders. For \(x\neq 0\) we have (check this yourself):
\begin{align*} \frac{d}{dx} f(x) \amp = \frac{2 \, e^{\left(-\frac{1}{x^{2}}\right)}}{x^{3}} \\ \frac{d^{ 2 }}{dx^{ 2 }} f(x) \amp = -\frac{6 \, e^{\left(-\frac{1}{x^{2}}\right)}}{x^{4}} + \frac{4 \, e^{\left(-\frac{1}{x^{2}}\right)}}{x^{6}}\\ \frac{d^{ 3 }}{dx^{ 3 }} f(x) \amp = \frac{24 \, e^{\left(-\frac{1}{x^{2}}\right)}}{x^{5}} - \frac{36 \, e^{\left(-\frac{1}{x^{2}}\right)}}{x^{7}} + \frac{8 \, e^{\left(-\frac{1}{x^{2}}\right)}}{x^{9}} \end{align*}
The plot in Figure 1.11 is suspiciously flat near \(x=0\text{,}\) and in fact we have
\begin{equation*} \left.\frac{d^k}{dx^k}f(x)\right|_{x=0} = 0, \quad \text{for all} \; k \geq 0. \end{equation*}
Therefore, the Maclaurin series for \(f(x)\) is identically zero! So this function does not equal its Maclaurin series.
Replacing \(x\) by \(-1/x^2\) in the power series for \(e^x\text{,}\) we get
\begin{equation*} e^{-1/x^2} = \sum_{k=0}^{\infty} (-1)^k\frac{x^{-2k}}{k!} = 1 - x^{-2} + \frac{1}{2}x^{-4} - \frac{1}{6}x^{-6} + \cdots \end{equation*}
This is not strictly a power series, but a Laurent series, which converges for \(x\neq 0\text{.}\) The Sage cell below plots a Laurent polynomial approximation to \(e^{-1/x^2}\text{:}\)
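A sketch of what that plotting cell might contain (the degree of the Laurent polynomial and the plot window are our choices):

# Plot e^(-1/x^2) against a partial sum of its Laurent series.
f = e^(-1/x^2)
L = sum((-1)^k * x^(-2*k) / factorial(k) for k in range(4))
G = plot(f, (x, 0.3, 5), color='black', thickness=2)
G += plot(L, (x, 0.3, 5), color='red')
G.show(ymin=-0.5, ymax=1.1)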
Here we see that the partial sums approximate \(f(x)\) well for large \(x\text{,}\) but not for small \(x\text{.}\)

Subsection 1.7.3 Other types of series

For \(s\gt 1\) one can show that the following infinite series converges:
\begin{equation*} \zeta(s) := \sum_{n=1}^{\infty} \frac{1}{n^s} = \frac{1}{1^s} + \frac{1}{2^s} + \frac{1}{3^s} + \cdots. \end{equation*}
This defines the zeta-function, a fascinating function defined by a series other than a power series. Later we will see that \(\zeta(s)\) is related to the distribution of prime numbers. When \(s\) is an even integer, it takes on very interesting values:
\begin{align*} \zeta(2) \amp = \frac{1}{1^2} + \frac{1}{2^2} + \frac{1}{3^2} + \cdots = \frac{\pi^2}{6} \\ \zeta(4) \amp = \frac{1}{1^4} + \frac{1}{2^4} + \frac{1}{3^4} + \cdots = \frac{\pi^4}{90}\\ \zeta(6) \amp = \frac{1}{1^6} + \frac{1}{2^6} + \frac{1}{3^6} + \cdots = \frac{\pi^6}{945}. \amp \end{align*}
Its values at the odd integers are much more mysterious.
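If you're viewing this online, you can check the first of these values numerically (a sketch; the cutoff of 1000 terms is arbitrary):

# Partial sum of the series for zeta(2); convergence to pi^2/6 is slow.
s = sum(1/n^2 for n in range(1, 1001))
print(s.n())
print((pi^2/6).n())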
Checkpoint 1.12.
Show that
\begin{equation*} \lim_{s \rightarrow 1^+} \zeta(s) = \infty, \end{equation*}
in other words, the harmonic series
\begin{equation*} \frac{1}{1} + \frac{1}{2} + \frac{1}{3} + \cdots \end{equation*}
diverges.
Hint.
Show that the partial sum \(\sum_{k=1}^N \frac{1}{k}\) is greater than the integral \(\int_1^{N+1}\frac{1}{x}\,dx = \ln(N+1)\text{.}\)

Subsection 1.7.4 Infinite products

Instead of adding infinitely many terms, one might also multiply them. Thus one may express some functions as infinite products, for example
\begin{align*} \sin x \amp = x\prod_{n=1}^{\infty} \left(1 - \frac{x^2}{\pi^2n^2}\right) \\ \cos x \amp = \prod_{n=1}^{\infty} \left(1 - \frac{x^2}{\pi^2\left(n-\frac{1}{2}\right)^2}\right) \end{align*}
Such products can only converge if the factors tend to \(1\) (not \(0\)!) as \(n\) tends to infinity.
You can plot the finite products against \(\sin(x)\) and \(\cos(x)\) in the following Sage cells:
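Here is a sketch of what the sine cell might contain (the cutoff of five factors is our choice; swap in the cosine product to compare with \(\cos(x)\)):

# Plot sin(x) against a finite piece of its infinite product.
P = x * prod(1 - x^2/(pi^2*n^2) for n in range(1, 6))
G = plot(sin(x), (x, -2*pi, 2*pi), color='black', thickness=2)
G += plot(P, (x, -2*pi, 2*pi), color='red')
G.show()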
Try out your own Sage computations here: