
Chapter 20 LA6: Some Applications of Eigenvalues and Eigenvectors

We concluded the previous lecture in this series by outlining a practical problem (ranking pages from a web search) whose solution involved finding eigenvalues and eigenvectors of a matrix. In this lecture we are going to look at two mathematical problems where eigenvalues and eigenvectors are useful. (These problems also have practical applications, but we won’t look at any of those in detail.)

Section 20.1 Powers of Matrices

Given a square matrix \(A\) of order \(k\text{,}\) the problem is to calculate the matrix \(A^n\) where \(n \in \mathbb{N}\text{.}\) For small values of \(n\) the calculation can be done by brute force, but for large values of \(n\) this becomes intractable. When \(A\) has \(k\) distinct eigenvalues the problem can be solved as follows.
Let the \(k\) distinct eigenvalues of matrix \(A\) be \(\lambda_1, \lambda_2, \ldots, \lambda_k\) and let the associated eigenvectors be \(\mathbf{v_1}, \mathbf{v_2}, \ldots, \mathbf{v_k}\text{.}\) Now let \(P\) be the matrix whose columns are these eigenvectors, i.e.
\begin{equation*} P = \begin{pmatrix} \mathbf{v_1} \amp \mathbf{v_2} \amp \cdots \amp \mathbf{v_k} \end{pmatrix} \end{equation*}
Then
\begin{equation*} A = PDP^{-1} \quad \text{ (or equivalently } P^{-1}AP = D) \end{equation*}
where
\begin{equation*} D = \begin{pmatrix} \lambda_1 \amp 0 \amp \cdots \amp 0 \\ 0 \amp \lambda_2 \amp \cdots \amp 0 \\ \vdots \amp \vdots \amp \ddots \amp \vdots \\ 0 \amp 0 \amp \cdots \amp \lambda_k \end{pmatrix}\text{.} \end{equation*}
Thus
\begin{align*} A^n \amp = \left( PDP^{-1} \right) \left( PDP^{-1} \right) \ldots \left( PDP^{-1} \right)\\ \amp = PD \left( P^{-1} P \right)D \left( P^{-1} P \right) \ldots DP^{-1}\\ \amp = PD^n P^{-1}\text{.} \end{align*}
Since
\begin{equation*} D^n = \begin{pmatrix} \lambda_1^n \amp 0 \amp \cdots \amp 0 \\ 0 \amp \lambda_2^n \amp \cdots \amp 0 \\ \vdots \amp \vdots \amp \ddots \amp \vdots \\ 0 \amp 0 \amp \cdots \amp \lambda_k^n \end{pmatrix}\text{,} \end{equation*}
this provides a relatively easy way to calculate \(A^n\) for large values of \(n\text{.}\)
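For those who like to experiment, here is a minimal computational sketch of this idea (in Python with NumPy; the test matrix and the value of \(n\) are arbitrary illustrations, not part of the lecture):

```python
import numpy as np

# Illustrative matrix with distinct eigenvalues (an arbitrary choice).
A = np.array([[3.0, 1.0],
              [1.0, 3.0]])
n = 10

# Columns of P are eigenvectors of A; the eigenvalues go on the diagonal of D.
eigvals, P = np.linalg.eig(A)
Dn = np.diag(eigvals ** n)          # D^n: raise each diagonal entry to the n-th power

# A^n = P D^n P^{-1}
An = P @ Dn @ np.linalg.inv(P)

# Sanity check against direct repeated multiplication.
assert np.allclose(An, np.linalg.matrix_power(A, n))
print(An)
```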

Example 20.1.

Calculate \(A^3\) for
\begin{equation*} A = \begin{pmatrix} 3 \amp 1 \\ 1 \amp 3 \end{pmatrix}\text{.} \end{equation*}
Answer.
\(A^3 = \begin{pmatrix} 36 \amp 28 \\ 28 \amp 36 \end{pmatrix}\)
Solution.
Since \(n\) is small here, it is easiest just to do the matrix multiplications to obtain
\begin{equation*} A^3 = \begin{pmatrix} 3 \amp 1 \\ 1 \amp 3 \end{pmatrix} \begin{pmatrix} 3 \amp 1 \\ 1 \amp 3 \end{pmatrix} \begin{pmatrix} 3 \amp 1 \\ 1 \amp 3 \end{pmatrix} = \begin{pmatrix} 3 \amp 1 \\ 1 \amp 3 \end{pmatrix} \begin{pmatrix} 10 \amp 6 \\ 6 \amp 10 \end{pmatrix} = \begin{pmatrix} 36 \amp 28 \\ 28 \amp 36 \end{pmatrix} \end{equation*}
However, let’s illustrate the above method, leaving out the working for finding the eigenvalues, eigenvectors and the matrix inverse. \(A\) has two distinct eigenvalues \(\lambda_1 = 2\) and \(\lambda_2 = 4\) with corresponding eigenvectors \(\mathbf{v}_1 = \begin{pmatrix} 1 \\ -1 \end{pmatrix}\) and \(\mathbf{v}_2 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}\text{,}\) so let
\begin{equation*} P = \begin{pmatrix} 1 \amp 1 \\ -1 \amp 1 \end{pmatrix}\text{.} \end{equation*}
Thus
\begin{equation*} P^{-1} = \dfrac{1}{2} \begin{pmatrix} 1 \amp -1 \\ 1 \amp 1 \end{pmatrix} \end{equation*}
and therefore
\begin{align*} A^3 \amp = PD^3 P^{-1}\\ \amp = \begin{pmatrix} 1 \amp 1 \\ -1 \amp 1 \end{pmatrix} \begin{pmatrix} 2^3 \amp 0 \\ 0 \amp 4^3 \end{pmatrix} \begin{pmatrix} 1/2 \amp -1/2 \\ 1/2 \amp 1/2 \end{pmatrix}\\ \amp = \begin{pmatrix} 1 \amp 1 \\ -1 \amp 1 \end{pmatrix} \begin{pmatrix} 4 \amp -4 \\ 32 \amp 32 \end{pmatrix}\\ \amp = \begin{pmatrix} 36 \amp 28 \\ 28 \amp 36 \end{pmatrix} \end{align*}
As an aside, note that
\begin{align*} P^{-1} AP \amp = \dfrac{1}{2} \begin{pmatrix} 1 \amp -1 \\ 1 \amp 1 \end{pmatrix} \begin{pmatrix} 3 \amp 1 \\ 1 \amp 3 \end{pmatrix} \begin{pmatrix} 1 \amp 1 \\ -1 \amp 1 \end{pmatrix}\\ \amp = \dfrac{1}{2} \begin{pmatrix} 1 \amp -1 \\ 1 \amp 1 \end{pmatrix} \begin{pmatrix} 2 \amp 4 \\ -2 \amp 4 \end{pmatrix}\\ \amp = \begin{pmatrix} 2 \amp 0 \\ 0 \amp 4 \end{pmatrix} \end{align*}
To see why \(A = PDP^{-1}\text{,}\) note that, since the \(\mathbf{v_i}\) are eigenvectors of \(A\text{,}\)
\begin{align*} AP \amp = A \begin{pmatrix} \mathbf{v_1} \amp \mathbf{v_2} \amp \cdots \amp \mathbf{v_k} \end{pmatrix}\\ \amp = \begin{pmatrix} A\mathbf{v_1} \amp A\mathbf{v_2} \amp \cdots \amp A\mathbf{v_k} \end{pmatrix}\\ \amp = \begin{pmatrix} \lambda_1 \mathbf{v_1} \amp \lambda_2 \mathbf{v_2} \amp \cdots \amp \lambda_k \mathbf{v_k} \end{pmatrix} \end{align*}
and by matrix multiplication
\begin{align*} PD \amp = \begin{pmatrix} \mathbf{v_1} \amp \mathbf{v_2} \amp \cdots \amp \mathbf{v_k} \end{pmatrix} \begin{pmatrix} \lambda_1 \amp 0 \amp \cdots \amp 0 \\ 0 \amp \lambda_2 \amp \cdots \amp 0 \\ \vdots \amp \vdots \amp \ddots \amp \vdots \\ 0 \amp 0 \amp \cdots \amp \lambda_k \end{pmatrix}\\ \amp = \begin{pmatrix} \lambda_1 \mathbf{v_1} \amp \lambda_2 \mathbf{v_2}\amp \cdots \amp \lambda_k \mathbf{v_k} \end{pmatrix} \end{align*}
Since \(AP = PD\) and \(P\) is invertible (its columns are eigenvectors corresponding to distinct eigenvalues and so are linearly independent), it follows that \(A = PDP^{-1}\text{.}\)
To summarise the above (and introduce some associated terminology):

Definition 20.2. Diagonal Matrices.

  • A square matrix \(D\) of order \(k\) is called diagonal if all of its off-diagonal entries are 0, i.e. it is of the form
    \begin{equation*} D = \begin{pmatrix} d_1 \amp 0 \amp \cdots \amp 0 \\ 0 \amp d_2 \amp \cdots \amp 0 \\ \vdots \amp \vdots \amp \ddots \amp \vdots \\ 0 \amp 0 \amp \cdots \amp d_k \end{pmatrix}\text{.} \end{equation*}
  • A square matrix \(A\) of order \(k\) is called diagonalisable if there exists an invertible matrix \(P\) such that \(D = P^{-1} AP\) is a diagonal matrix.
  • A square matrix \(A\) of order \(k\) is diagonalisable if it has \(k\) distinct eigenvalues (a numerical check of this criterion is sketched after this definition).
  • If a square matrix \(A\) of order \(k\) is diagonalisable then
    \begin{equation*} A^n = PD^n P^{-1} \: \text{for } n \in \mathbb{N}\text{.} \end{equation*}
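Here is a small sketch of the distinct-eigenvalues criterion mentioned above (Python with NumPy; the helper name is our own, and note the condition is sufficient but not necessary):

```python
import numpy as np

def has_distinct_eigenvalues(A, tol=1e-9):
    """Return True if all eigenvalues of A are (numerically) distinct.

    This is a sufficient condition for diagonalisability, but not a
    necessary one: e.g. the identity matrix is diagonal (so certainly
    diagonalisable) yet has a repeated eigenvalue.
    """
    eigvals = np.sort_complex(np.linalg.eigvals(A))
    return bool(np.all(np.abs(np.diff(eigvals)) > tol))

print(has_distinct_eigenvalues(np.array([[3.0, 1.0], [1.0, 3.0]])))  # True
print(has_distinct_eigenvalues(np.eye(2)))                           # False
```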

Example 20.3.

A town contains \(51000\) inhabitants, \(2000\) of whom are initially sick. Each month the population changes: of those who are well, \(\frac{3}{4}\) remain well and \(\frac{1}{4}\) become sick; of those who are sick, \(\frac{1}{2}\) recover and \(\frac{1}{2}\) remain sick. What is the long term prognosis for this town?
Answer.
The long term prognosis for the town is that there will be twice as many well people as sick people.
Solution.
Let \(w_n\) denote the number of people in the town who are well after \(n\) months and let \(s_n\) denote the number of people in the town who are sick after \(n\) months. Then
\begin{align*} w_0 \amp = 49000, \, \, s_0 = 2000 \, \text{ and}\\ w_n \amp = \frac{3}{4}w_{n-1} + \frac{1}{2} s_{n-1}, \, \, s_n = \frac{1}{4} w_{n-1} + \frac{1}{2} s_{n-1}\text{,} \end{align*}
or in matrix notation
\begin{equation*} \mathbf{w_n} = A \mathbf{w_{n-1}}, \, \, \mathbf{w_0} = \begin{pmatrix} 49000 \\ 2000 \end{pmatrix} \end{equation*}
where
\begin{equation*} \mathbf{w_n} = \begin{pmatrix} w_n \\ s_n \end{pmatrix}, \, \, A = \begin{pmatrix} 3/4 \amp 1/2 \\ 1/4 \amp 1/2 \end{pmatrix}\text{.} \end{equation*}
Now
\begin{equation*} \mathbf{w_n} = A \mathbf{w_{n-1}} = A \left( A \mathbf{w_{n-2}} \right) = A \left( A \left( A \mathbf{w_{n-3}} \right) \right) = \ldots = A^n \mathbf{w_0}\text{.} \end{equation*}
Thus we need to calculate powers of \(A\text{.}\) The eigenvalues of \(A\) are \(1\) and \(1/4\) with corresponding eigenvectors \(\begin{pmatrix} 2 \\ 1 \end{pmatrix}\) and \(\begin{pmatrix} -1 \\ 1 \end{pmatrix}\text{,}\) and so
\begin{align*} A^n \amp = P D^n P^{-1}\\ \amp = \begin{pmatrix} 2 \amp -1 \\ 1 \amp 1 \end{pmatrix} \begin{pmatrix} 1^n \amp 0 \\ 0 \amp (1/4)^n \end{pmatrix} \dfrac{1}{3} \begin{pmatrix} 1 \amp 1 \\ -1 \amp 2 \end{pmatrix}\\ \amp = \dfrac{1}{3} \begin{pmatrix} 2+(1/4)^n \amp 2-2(1/4)^n \\ 1-(1/4)^n \amp 1+2(1/4)^n \end{pmatrix}\text{.} \end{align*}
Notice that
\begin{align*} \lim_{n \to \infty} A^n \amp = \lim_{n \to \infty} \dfrac{1}{3} \begin{pmatrix} 2+(1/4)^n \amp 2-2(1/4)^n \\ 1-(1/4)^n \amp 1+2(1/4)^n \end{pmatrix}\\ \amp = \dfrac{1}{3} \begin{pmatrix} 2 \amp 2 \\ 1 \amp 1 \end{pmatrix} \end{align*}
so that as \(n \to \infty\)
\begin{equation*} \mathbf{w_n} \to \dfrac{1}{3} \begin{pmatrix} 2 \amp 2 \\ 1 \amp 1 \end{pmatrix} \begin{pmatrix} 49000 \\ 2000 \end{pmatrix} = \begin{pmatrix} 34000 \\ 17000 \end{pmatrix}\text{.} \end{equation*}
We conclude that the long term prognosis for the town is that there will be twice as many well people as sick people.
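As an aside, the model is also easy to simulate directly. The following sketch (Python with NumPy, not part of the lecture) iterates \(\mathbf{w_n} = A \mathbf{w_{n-1}}\) and shows the rapid approach to the limit, driven by the decaying \((1/4)^n\) term:

```python
import numpy as np

A = np.array([[0.75, 0.50],
              [0.25, 0.50]])
w = np.array([49000.0, 2000.0])     # (well, sick) at month 0

# Iterate w_n = A w_{n-1}; the (1/4)^n term dies away quickly,
# so convergence to the limit is rapid.
for month in range(1, 13):
    w = A @ w
    print(month, np.round(w))
# After about a year the populations settle at (34000, 17000):
# twice as many well people as sick people.
```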

Exercises Example Tasks

1.
Calculate \(A^{100}\) if
\begin{equation*} A = \begin{pmatrix} 1 \amp 2 \\ 2 \amp 4 \end{pmatrix}\text{.} \end{equation*}
2.
Find a matrix \(P\) such that \(P^{-1} AP\) is diagonal if
\begin{equation*} A = \begin{pmatrix} 0 \amp 3 \amp 0 \\ 1 \amp 0 \amp -1 \\ 0 \amp 2 \amp 0 \end{pmatrix}\text{.} \end{equation*}

Section 20.2 Coupled Linear Differential Equations

Recall that the linear differential equation
\begin{equation*} \dfrac{dx}{dt} = ax, \, \, a \in \mathbb{R}\text{,} \end{equation*}
has the solution (via separation of variables)
\begin{equation*} x(t) = C e^{at}\text{,} \end{equation*}
where \(C\) is an arbitrary constant. Consider now the system of two linear differential equations
\begin{align*} \dot{x}_1 \amp = \dfrac{dx_1}{dt} = ax_1 + bx_2\\ \dot{x}_2 \amp = \dfrac{dx_2}{dt} = cx_1 + dx_2 \end{align*}
where \(a, \, b, \, c, \, d \in \mathbb{R}\text{,}\) which can be written in matrix notation as
\begin{equation} \dot{\mathbf{x}} = A \mathbf{x}\tag{20.1} \end{equation}
where \(\mathbf{x} = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}\text{,}\) \(\dot{\mathbf{x}} = \begin{pmatrix} \dot{x}_1 \\ \dot{x}_2 \end{pmatrix}\) and \(A = \begin{pmatrix} a \amp b \\ c \amp d \end{pmatrix}\text{.}\) These equations are “coupled”, i.e. the derivative of \(x_1(t)\) depends on both \(x_1(t)\) and \(x_2(t)\) and likewise for the derivative of \(x_2(t)\text{.}\) Thus we can’t solve the first equation unless we can solve the second and vice versa. Note that if \(A\) is diagonal then the equations become uncoupled and we could solve each separately.
If the matrix \(A\) has 2 distinct eigenvalues, \(\lambda_1\) and \(\lambda_2\text{,}\) by making the change of variable \(\mathbf{y}= P^{-1} \mathbf{x}\text{,}\) where \(P\) is the matrix whose columns are the eigenvectors \(\mathbf{v_1}\) and \(\mathbf{v_2}\) of \(A\text{,}\) we can transform (20.1) into a system where the matrix is diagonal. By solving that system and converting back to our original variables we find that the general solution to (20.1) is
\begin{equation} \mathbf{x} = C_1 e^{\lambda_1 t} \mathbf{v_1} + C_2 e^{\lambda_2 t} \mathbf{v_2}\tag{20.2} \end{equation}
where \(C_1\) and \(C_2\) are arbitrary constants. We can check that (20.2) is indeed a solution to (20.1). From (20.2)
\begin{equation*} \dot{\mathbf{x}} = C_1 \lambda_1 e^{\lambda_1 t} \mathbf{v_1} + C_2 \lambda_2 e^{\lambda_2 t} \mathbf{v_2} \end{equation*}
and
\begin{align*} A \mathbf{x} \amp = A \left( C_1 e^{\lambda_1 t} \mathbf{v_1} + C_2 e^{\lambda_2 t} \mathbf{v_2} \right)\\ \amp = C_1 e^{\lambda_1 t} A \mathbf{v_1} + C_2 e^{\lambda_2 t} A \mathbf{v_2}\\ \amp = C_1 \lambda_1 e^{\lambda_1 t} \mathbf{v_1} + C_2 \lambda_2 e^{\lambda_2 t} \mathbf{v_2}\text{,} \end{align*}
which agrees with the expression for \(\dot{\mathbf{x}}\) above, so (20.2) does indeed satisfy (20.1).
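This check can also be carried out numerically. The sketch below (Python, assuming NumPy and SciPy are available; the matrix and constants are arbitrary choices for illustration) builds the eigen-solution (20.2) and compares it with the output of a general-purpose ODE solver:

```python
import numpy as np
from scipy.integrate import solve_ivp

# An illustrative coupled system x' = Ax (the matrix and the constants
# C_1, C_2 are arbitrary choices, not from the lecture).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])         # eigenvalues -1 and -2
eigvals, V = np.linalg.eig(A)        # columns of V are the eigenvectors
C = np.array([1.0, 1.0])

def eigen_solution(t):
    # x(t) = C_1 e^{lambda_1 t} v_1 + C_2 e^{lambda_2 t} v_2
    return V @ (C * np.exp(eigvals * t))

# Integrate the same system from the same initial condition...
x0 = eigen_solution(0.0)
sol = solve_ivp(lambda t, x: A @ x, (0.0, 2.0), x0,
                dense_output=True, rtol=1e-10, atol=1e-12)

# ...and confirm that the two solutions agree at sample times.
for t in np.linspace(0.0, 2.0, 5):
    assert np.allclose(eigen_solution(t), sol.sol(t), atol=1e-6)
```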

Example 20.4.

Find the solution to the initial value problem
\begin{align*} \dfrac{dx_1}{dt} \amp = x_1 + 2x_2\\ \dfrac{dx_2}{dt} \amp = 2x_1 + x_2\text{,} \end{align*}
where \(x_1(0) = 2\) and \(x_2(0) = 3\text{.}\)
Answer.
\(x_1(t) = \dfrac{5}{2} e^{3t} - \dfrac{1}{2} e^{-t}\) and \(x_2(t) = \dfrac{5}{2} e^{3t} + \dfrac{1}{2} e^{-t}\)
Solution.
In matrix notation this system is
\begin{equation*} \dot{\mathbf{x}} = \begin{pmatrix} 1 \amp 2 \\ 2 \amp 1 \end{pmatrix} \mathbf{x}, \quad \mathbf{x}(0) = \begin{pmatrix} 2 \\ 3 \end{pmatrix}\text{.} \end{equation*}
The eigenvalues of \(\begin{pmatrix} 1 \amp 2 \\ 2 \amp 1 \end{pmatrix}\) turn out to be \(\lambda_1 = -1\) and \(\lambda_2 = 3\) with associated eigenvectors \(\mathbf{v_1} = \begin{pmatrix} 1 \\ -1 \end{pmatrix}\) and \(\mathbf{v_2} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}\text{.}\) Thus, from (20.2) the general solution is
\begin{equation*} \mathbf{x} = C_1 e^{-t} \begin{pmatrix} 1 \\ -1 \end{pmatrix} + C_2 e^{3t} \begin{pmatrix} 1 \\ 1 \end{pmatrix}\text{.} \end{equation*}
From the initial conditions we have
\begin{equation*} \begin{pmatrix} 2 \\ 3 \end{pmatrix} = C_1 \begin{pmatrix} 1 \\ -1 \end{pmatrix} + C_2 \begin{pmatrix} 1 \\ 1 \end{pmatrix}\text{.} \end{equation*}
Solving this system of linear equations (by Gauss-Jordan elimination say) gives
\begin{equation*} C_1 = -\dfrac{1}{2} \, \text{ and } \, C_2 = \dfrac{5}{2}\text{.} \end{equation*}
Thus, the solution to the initial value problem is
\begin{equation*} \mathbf{x} = \dfrac{5}{2} e^{3t} \begin{pmatrix} 1 \\ 1 \end{pmatrix} - \dfrac{1}{2} e^{-t} \begin{pmatrix} 1 \\ -1 \end{pmatrix} \end{equation*}
or equivalently
\begin{align*} x_1(t) \amp = \dfrac{5}{2} e^{3t} - \dfrac{1}{2} e^{-t}\\ x_2(t) \amp = \dfrac{5}{2} e^{3t} + \dfrac{1}{2} e^{-t} \end{align*}
Note that you can always check your answer by checking that the functions do indeed satisfy the original equations.
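In that spirit, here is one way such a check might be automated (a Python/NumPy sketch; the derivative is computed by hand and compared with \(A\mathbf{x}\) at sample times):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])

def x(t):
    # The claimed solution of the initial value problem.
    return np.array([2.5 * np.exp(3 * t) - 0.5 * np.exp(-t),
                     2.5 * np.exp(3 * t) + 0.5 * np.exp(-t)])

def xdot(t):
    # Its derivative, differentiated by hand.
    return np.array([7.5 * np.exp(3 * t) + 0.5 * np.exp(-t),
                     7.5 * np.exp(3 * t) - 0.5 * np.exp(-t)])

assert np.allclose(x(0.0), [2.0, 3.0])          # initial conditions hold
for t in np.linspace(0.0, 2.0, 9):
    assert np.allclose(xdot(t), A @ x(t))       # x' = Ax holds
print("solution verified")
```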
Figure 20.5 shows the graph of these solutions. Notice that as \(t\) increases the \(e^{-t}\) term in each solution dies away, so the two functions draw closer together, while the \(e^{3t}\) term makes both solutions grow exponentially. Thus the eigenvalues of the matrix \(A\) give us some idea of the qualitative nature of the solutions.
Figure 20.5.

Example 20.6.

Solve the initial value problem
\begin{equation*} \dot{\mathbf{x}} = A \mathbf{x}, \: A = \begin{pmatrix} 0 \amp 1 \\ -1 \amp 0 \end{pmatrix}, \: \mathbf{x}(0) = \begin{pmatrix} -4 \\ 8 \end{pmatrix}\text{.} \end{equation*}
Answer.
\(\mathbf{x} = \begin{pmatrix} 8 \sin(t) - 4 \cos(t) \\ 8 \cos(t) + 4 \sin(t) \end{pmatrix}\)
Solution.
The eigenvalues of \(A\) turn out to be purely imaginary with \(\lambda_1 = i\) and \(\lambda_2 = -i\text{.}\) The associated eigenvectors are \(\mathbf{v_1} = \begin{pmatrix} -i \\ 1 \end{pmatrix}\) and \(\mathbf{v_2}= \begin{pmatrix} i \\ 1 \end{pmatrix}\text{.}\) Thus, from (20.2) the general solution is
\begin{equation*} \mathbf{x} = C_1 e^{it} \begin{pmatrix} -i \\ 1 \end{pmatrix} + C_2 e^{-it} \begin{pmatrix} i \\ 1 \end{pmatrix}\text{.} \end{equation*}
From the initial conditions we have
\begin{equation*} \begin{pmatrix} -4 \\ 8 \end{pmatrix} = C_1 \begin{pmatrix} -i \\ 1 \end{pmatrix} + C_2 \begin{pmatrix} i \\ 1 \end{pmatrix}\text{,} \end{equation*}
which upon solving gives
\begin{equation*} C_1 = 4-2i \, \text{ and } \, C_2 = 4+2i\text{.} \end{equation*}
Thus, the solution to the initial value problem is
\begin{equation*} \mathbf{x} = (4-2i) e^{it} \begin{pmatrix} -i \\ 1 \end{pmatrix} + (4+2i) e^{-it} \begin{pmatrix} i \\ 1 \end{pmatrix}\text{.} \end{equation*}
We can simplify this solution by using Euler’s equation
\begin{equation*} e^{i \theta} = \cos( \theta ) + i \sin( \theta )\text{.} \end{equation*}
Thus
\begin{equation*} \mathbf{x} = (4-2i) \left( \cos(t) + i\sin(t) \right) \begin{pmatrix} -i \\ 1 \end{pmatrix} + (4+2i) \left( \cos(t) - i \sin(t) \right) \begin{pmatrix} i \\ 1 \end{pmatrix}\text{,} \end{equation*}
which simplifies to
\begin{equation*} \mathbf{x} = \begin{pmatrix} 8 \sin(t) - 4 \cos(t) \\ 8 \cos(t) + 4 \sin(t) \end{pmatrix}\text{.} \end{equation*}
This is a real solution! As explained below, because all of the entries in \(A\) are real and the initial conditions are real, the solution will also be real.
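Both claims, that the imaginary parts cancel and that the complex expression collapses to the stated real solution, can be confirmed numerically with the following Python/NumPy sketch (not part of the lecture):

```python
import numpy as np

v1 = np.array([-1j, 1.0])
v2 = np.array([1j, 1.0])
C1, C2 = 4 - 2j, 4 + 2j

for t in np.linspace(0.0, 2 * np.pi, 9):
    complex_form = C1 * np.exp(1j * t) * v1 + C2 * np.exp(-1j * t) * v2
    real_form = np.array([8 * np.sin(t) - 4 * np.cos(t),
                          8 * np.cos(t) + 4 * np.sin(t)])
    assert np.allclose(complex_form.imag, 0.0)        # imaginary parts cancel
    assert np.allclose(complex_form.real, real_form)  # matches the stated solution
print("real solution confirmed")
```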
As shown in Figure 20.7, where these solutions are graphed, purely imaginary eigenvalues are associated with periodic solutions. The period of these solutions is \(\dfrac{2 \pi}{| \operatorname{Im}(\lambda) |}\text{.}\)
Figure 20.7.
Consider the system of coupled linear differential equations
\begin{equation} \dot{\mathbf{x}} = A \mathbf{x}\tag{20.3} \end{equation}
where the entries in \(A\) are all real. Now imagine that this system has a complex solution given by
\begin{equation} \mathbf{x}(t) = \mathbf{x_1}(t) + i \mathbf{x_2}(t)\text{.}\tag{20.4} \end{equation}
Taking the complex conjugate of both sides of (20.3) gives
\begin{equation*} \bar{\dot{\mathbf{x}}} = \overline{A \mathbf{x}} = \bar{A} \bar{\mathbf{x}}\text{.} \end{equation*}
Since \(\bar{\dot{\mathbf{x}}} = \dot{\bar{\mathbf{x}}}\) and \(\bar{A} = A\) (as the entries in \(A\) are all real),
\begin{equation*} \dot{\bar{\mathbf{x}}} = A \bar{\mathbf{x}} \end{equation*}
i.e.
\begin{equation} \bar{\mathbf{x}}(t) =\mathbf{x_1}(t) - i \mathbf{x_2} (t)\tag{20.5} \end{equation}
will also be a solution to (20.3). Substituting (20.4) into (20.3) gives
\begin{equation} \dot{\mathbf{x}}_1 + i \dot{\mathbf{x}}_2 = A \mathbf{x_1} + i A \mathbf{x_2}\tag{20.6} \end{equation}
while substituting (20.5) into (20.3) gives
\begin{equation} \dot{\mathbf{x}}_1 - i \dot{\mathbf{x}}_2 = A \mathbf{x_1} - i A \mathbf{x_2}\text{.}\tag{20.7} \end{equation}
Now, adding equations (20.6) and (20.7) gives
\begin{equation*} \dot{\mathbf{x}}_1 = A \mathbf{x_1}\text{,} \end{equation*}
while subtracting (20.7) from (20.6) gives
\begin{equation*} \dot{\mathbf{x}}_2 = A \mathbf{x_2}\text{.} \end{equation*}
Thus if we have a complex solution to (20.3) then the real and imaginary parts of this complex solution must separately be solutions, and hence a general solution to (20.3) is
\begin{equation*} \mathbf{x}(t) = C_1 \mathbf{x_1}(t) + C_2 \mathbf{x_2}(t)\text{.} \end{equation*}
This gives us another way of proceeding when the eigenvalues of \(A\) are complex.

Example 20.8.

Find the general solution to
\begin{equation*} \dot{\mathbf{x}} = A \mathbf{x}, \: A = \begin{pmatrix} 1 \amp -5 \\ 2 \amp 3 \end{pmatrix}\text{.} \end{equation*}
Answer.
\(\mathbf{x} = C_1 e^{2t} \begin{pmatrix} -5\cos(3t) \\ \cos(3t) - 3 \sin(3t) \end{pmatrix} + C_2 e^{2t} \begin{pmatrix} -5\sin(3t) \\ \sin(3t) + 3\cos(3t) \end{pmatrix}\)
Solution.
Here the eigenvalues of \(A\) are complex with \(\lambda_1 = 2+3i\) and \(\lambda_2 = 2-3i\text{.}\) The eigenvector associated with \(\lambda_1\) is \(\mathbf{v_1} = \begin{pmatrix} -5 \\ 1+3i \end{pmatrix}\text{.}\) Thus, one solution to the system is
\begin{equation*} \mathbf{x} = e^{(2+3i)t} \begin{pmatrix} -5 \\ 1+3i \end{pmatrix}\text{.} \end{equation*}
Simplifying this solution using Euler’s equation gives
\begin{equation*} \mathbf{x} = e^{2t} \left \{ \begin{pmatrix} -5\cos(3t) \\ \cos(3t) - 3 \sin(3t) \end{pmatrix} + i \begin{pmatrix} -5\sin(3t) \\ \sin(3t) + 3\cos(3t) \end{pmatrix} \right \}\text{.} \end{equation*}
Since we know that both the real part and the imaginary part are solutions to the system we know that the general solution is
\begin{equation*} \mathbf{x} = C_1 e^{2t} \begin{pmatrix} -5\cos(3t) \\ \cos(3t) - 3 \sin(3t) \end{pmatrix} + C_2 e^{2t} \begin{pmatrix} -5\sin(3t) \\ \sin(3t) + 3\cos(3t) \end{pmatrix}\text{.} \end{equation*}
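This recipe is easy to check numerically: take an eigenvector for one complex eigenvalue, form the complex solution \(e^{\lambda t} \mathbf{v}\text{,}\) and verify that its real and imaginary parts each satisfy \(\dot{\mathbf{x}} = A \mathbf{x}\text{.}\) A Python/NumPy sketch:

```python
import numpy as np

A = np.array([[1.0, -5.0],
              [2.0, 3.0]])
eigvals, V = np.linalg.eig(A)
lam, v = eigvals[0], V[:, 0]        # one complex eigenpair (NumPy's scaling
                                    # of v may differ from the lecture's)

for t in np.linspace(0.0, 2.0, 9):
    x = np.exp(lam * t) * v         # the complex solution e^{lambda t} v
    xdot = lam * x                  # its exact derivative
    # The real and imaginary parts are each real solutions of x' = Ax.
    assert np.allclose(xdot.real, A @ x.real)
    assert np.allclose(xdot.imag, A @ x.imag)
print("real and imaginary parts both solve the system")
```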
Figure 20.9 shows a plot of this solution when \(C_1 = C_2 = 1\text{.}\)
Figure 20.9.
Note the solutions to the system are periodic with the period determined from the imaginary part of the eigenvalue. However, since the real part of the eigenvalue is positive the amplitude of the solutions grows without bound.
The discussion so far has concentrated on systems of two coupled first-order linear differential equations, but the ideas carry over to systems with more equations.
A qualitative description of the solutions to the system can be determined from the eigenvalues of \(A\text{,}\) as summarised in the following remark (and illustrated in the sketch after it).

Remark 20.11.

  • If \(A\) has a positive real eigenvalue then the corresponding solution grows without bound.
  • If \(A\) has a negative real eigenvalue then the corresponding solution decays.
  • If \(A\) has a zero eigenvalue then the corresponding solution is constant.
  • If \(A\) has a pair of complex conjugate eigenvalues then the corresponding solution oscillates with period \(2\pi / | \operatorname{Im}(\lambda) |\) and with the amplitude either growing \((\operatorname{Re}(\lambda) > 0)\text{,}\) decaying \((\operatorname{Re}(\lambda) < 0 )\) or staying the same \((\operatorname{Re}(\lambda) = 0)\text{.}\)
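To make the remark concrete, here is a small helper (Python with NumPy; the function name and the wording of the report are our own) that classifies each eigenvalue of \(A\) according to the cases above:

```python
import numpy as np

def describe_modes(A, tol=1e-12):
    """Report the qualitative behaviour of solutions of x' = Ax, eigenvalue by eigenvalue."""
    for lam in np.linalg.eigvals(A):
        if abs(lam.imag) > tol:
            behaviour = f"oscillates with period {2 * np.pi / abs(lam.imag)}"
            if lam.real > tol:
                behaviour += ", amplitude growing"
            elif lam.real < -tol:
                behaviour += ", amplitude decaying"
            else:
                behaviour += ", amplitude constant"
        elif lam.real > tol:
            behaviour = "grows without bound"
        elif lam.real < -tol:
            behaviour = "decays"
        else:
            behaviour = "is constant"
        print(f"eigenvalue {lam}: solution {behaviour}")

describe_modes(np.array([[1.0, -5.0], [2.0, 3.0]]))   # the matrix from Example 20.8
```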

Exercises Example Tasks

1.
Describe the long term behaviour of the solutions to the system \(\dot{\mathbf{x}} = A \mathbf{x}\text{,}\) where
\begin{equation*} A = \begin{pmatrix} -1 \amp 2 \amp 3 \\ 0 \amp -2 \amp 4 \\ 0 \amp 0 \amp 0 \end{pmatrix}\text{.} \end{equation*}