
Chapter 19 LA5: Eigenvalues and Eigenvectors

Section 19.1 Introduction

Definition 19.1.

Let A be a square matrix of order \(n\text{.}\) The scalar \(\lambda\) is called an eigenvalue of \(A\) if there is a non-zero vector \(\mathbf{x}\) for which
\begin{equation*} A\mathbf{x}=\lambda \mathbf{x} \end{equation*}
\(\mathbf{x}\) is called an eigenvector of \(A\) associated with the eigenvalue \(\lambda\text{.}\)

Example 19.2.

For the matrix
\begin{equation*} A=\begin{pmatrix} 0 \amp 4 \amp 3 \\ \frac{1}{2} \amp 0 \amp 0 \\ 0 \amp \frac{1}{4} \amp 0 \end{pmatrix} \end{equation*}
determine if any of \(\mathbf{x}=\begin{pmatrix} 18 \amp 6 \amp 1 \end{pmatrix}^T \text{,}\) \(\mathbf{y}=\begin{pmatrix} -2 \amp 3 \amp 4 \end{pmatrix}^T \) or \(\mathbf{z}=\begin{pmatrix} 36 \amp 12 \amp 2 \end{pmatrix}^T \) are eigenvectors of \(A\text{.}\)
Answer.
\(\mathbf{x}\) and \(\mathbf{z}\) are eigenvectors of \(A\text{.}\)
Solution.
Since
\begin{equation*} A\mathbf{x}=\begin{pmatrix} 0 \amp 4 \amp 3 \\ \frac{1}{2} \amp 0 \amp 0 \\ 0 \amp \frac{1}{4} \amp 0 \end{pmatrix} \begin{pmatrix} 18 \\ 6 \\ 1 \end{pmatrix} = \begin{pmatrix} 27 \\ 9 \\ \frac{3}{2} \end{pmatrix} = \frac{3}{2}\begin{pmatrix} 18 \\ 6 \\ 1 \end{pmatrix} \end{equation*}
\(\mathbf{x}\) is an eigenvector of \(A\) associated with the eigenvalue \(\frac{3}{2}\text{.}\) Since
\begin{equation*} A\mathbf{y}=\begin{pmatrix} 0 \amp 4 \amp 3 \\ \frac{1}{2} \amp 0 \amp 0 \\ 0 \amp \frac{1}{4} \amp 0 \end{pmatrix} \begin{pmatrix} -2 \\ 3 \\ 4 \end{pmatrix} = \begin{pmatrix} 24 \\ -1 \\ \frac{3}{4} \end{pmatrix} \neq \lambda\begin{pmatrix} -2 \\ 3 \\ 4 \end{pmatrix} \end{equation*}
\(\mathbf{y}\) is not an eigenvector of \(A\text{.}\) Since
\begin{equation*} A\mathbf{z}=\begin{pmatrix} 0 \amp 4 \amp 3 \\ \frac{1}{2} \amp 0 \amp 0 \\ 0 \amp \frac{1}{4} \amp 0 \end{pmatrix} \begin{pmatrix} 36 \\ 12 \\ 2 \end{pmatrix} = \begin{pmatrix} 54 \\ 18 \\ 3 \end{pmatrix} = \frac{3}{2}\begin{pmatrix} 36 \\ 12 \\ 2 \end{pmatrix} \end{equation*}
\(\mathbf{z}\) is also an eigenvector of \(A\) associated with the eigenvalue \(\frac{3}{2}\text{.}\)
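As an aside, checks like this are easy to automate. Here is a minimal Python/NumPy sketch (illustrative only, not part of the original example):

import numpy as np

A = np.array([[0, 4, 3],
              [0.5, 0, 0],
              [0, 0.25, 0]])
x = np.array([18, 6, 1])
y = np.array([-2, 3, 4])

print(A @ x)   # [27.  9.  1.5]  = 1.5 * x, so x is an eigenvector
print(A @ y)   # [24. -1.  0.75] is not a scalar multiple of y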

Example 19.3.

The matrix
\begin{equation*} A=\begin{pmatrix} 0 \amp 1 \\ 1 \amp 0 \end{pmatrix} \end{equation*}
is the matrix for the plane transformation of a reflection in the line \(y=x\text{.}\) Use this geometric interpretation of the matrix to determine the possible eigenvectors and eigenvalues of \(A\text{.}\)
Answer.
\(\mathbf{x}=t\begin{pmatrix} 1 \\ 1 \end{pmatrix}, \textrm{ where } t\in \mathbb{R}\) with eigenvalue \(1\text{.}\)
\(\mathbf{x}=t\begin{pmatrix} 1 \\ -1 \end{pmatrix}, \textrm{ where } t\in \mathbb{R}\) with eigenvalue \(-1\text{.}\)
Solution.
By definition eigenvectors satisfy the equation
\begin{equation*} A\mathbf{x}=\lambda\mathbf{x} \end{equation*}
Thinking in terms of transformations, this equation says that an eigenvector is a vector that is mapped by the transformation to a scalar multiple of itself (i.e. to a vector parallel to itself). As illustrated in Figure 19.4, any vector lying on the line \(y=x\) will be mapped to itself by a reflection in that line, i.e. for such vectors
\begin{equation*} A\mathbf{x}=\mathbf{x} \end{equation*}
Thus vectors of the form
\begin{equation*} \mathbf{x}=\begin{pmatrix} t \\ t \end{pmatrix}=t\begin{pmatrix} 1 \\ 1 \end{pmatrix}, \textrm{ where } t\in \mathbb{R} \end{equation*}
will be eigenvectors of \(A\) with an eigenvalue of \(1\text{.}\) (Note that we could check this algebraically if so desired.)
As also illustrated in Figure 19.4, any vector perpendicular to the line \(y=x\) will be mapped by a reflection in that line to a vector of the same length but in the opposite direction, i.e. for such vectors
\begin{equation*} A\mathbf{x}=-\mathbf{x} \end{equation*}
Thus vectors of the form
\begin{equation*} \mathbf{x}=\begin{pmatrix} t \\ -t \end{pmatrix}=t\begin{pmatrix} 1 \\ -1 \end{pmatrix}, \textrm{ where } t\in \mathbb{R} \end{equation*}
will be eigenvectors of \(A\) with an eigenvalue of \(-1\text{.}\)
Figure 19.4.
Since no other vectors will be mapped by a reflection in the line \(y=x\) into scalar multiples of themselves, \(A\) will not have any more eigenvectors.
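The algebraic check mentioned above is also easily done with software. A short SymPy sketch (illustrative only):

import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[0, 1], [1, 0]])   # reflection in the line y = x

v1 = sp.Matrix([t, t])
v2 = sp.Matrix([t, -t])

print(A * v1)   # Matrix([[t], [t]])   =  1 * v1
print(A * v2)   # Matrix([[-t], [t]])  = -1 * v2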

Example 19.5.

Given that \(\lambda=5\) is an eigenvalue for the matrix
\begin{equation*} A=\begin{pmatrix} 1 \amp 2 \\ 4 \amp 3 \end{pmatrix} \end{equation*}
find all eigenvectors associated with this eigenvalue.
Answer.
The eigenvectors associated with the eigenvalue \(\lambda=5\) are all scalar multiples of the vector \(\begin{pmatrix} 1 \amp 2 \end{pmatrix}^T\text{.}\)
Solution.
An eigenvector, \(\mathbf{x}\text{,}\) associated with the eigenvalue \(\lambda=5\) will satisfy the system of linear equations
\begin{equation*} A\mathbf{x}=5\mathbf{x} \end{equation*}
which can be rewritten as
\begin{align*} \begin{pmatrix} 1 \amp 2 \\ 4 \amp 3 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} -5\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \amp =\begin{pmatrix} 0 \\ 0 \end{pmatrix}\\ \begin{pmatrix} x_1+2x_2-5x_1 \\ 4x_1+3x_2-5x_2 \end{pmatrix} \amp =\begin{pmatrix} 0 \\ 0 \end{pmatrix}\\ \begin{pmatrix} -4 \amp 2 \\ 4 \amp -2 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \amp =\begin{pmatrix} 0 \\ 0 \end{pmatrix} \end{align*}
Now, using Gauss-Jordan elimination to solve this homogeneous system
\begin{equation*} \begin{pmatrix} -4 \amp 2 \amp 0 \\ 4 \amp -2 \amp 0 \end{pmatrix} \sim \begin{pmatrix} 2 \amp -1 \amp 0 \\ 0 \amp 0 \amp 0 \end{pmatrix} \begin{matrix} \hspace{5mm} R_1'=\frac{R_1}{-2} \\ \hspace{5mm} R_2'=R_2+R_1 \end{matrix} \end{equation*}
we see that this system has an infinite number of solutions which take the form
\begin{equation*} \mathbf{x}=\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \textrm{ where } 2x_1-x_2=0 \end{equation*}
or equivalently
\begin{equation*} \mathbf{x}=\begin{pmatrix} t \\ 2t \end{pmatrix}=t\begin{pmatrix} 1 \\ 2 \end{pmatrix} \end{equation*}
Thus the eigenvectors associated with the eigenvalue \(\lambda=5\) are all scalar multiples of the vector \(\begin{pmatrix} 1 \amp 2 \end{pmatrix}^T\text{.}\)
Of course, it is a good idea to check this result. Since
\begin{equation*} A\mathbf{x}=\begin{pmatrix} 1 \amp 2 \\ 4 \amp 3 \end{pmatrix} \begin{pmatrix} 1 \\ 2 \end{pmatrix}=\begin{pmatrix} 5 \\ 10 \end{pmatrix}=5\begin{pmatrix} 1 \\ 2 \end{pmatrix}=\lambda\mathbf{x} \end{equation*}
it is correct.
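For those using software, the same eigenvectors can be obtained as the null space of \(A-5I\text{.}\) A short SymPy sketch (illustrative only):

import sympy as sp

A = sp.Matrix([[1, 2], [4, 3]])

# The eigenvectors for lambda = 5 span the null space of A - 5I.
print((A - 5 * sp.eye(2)).nullspace())   # [Matrix([[1/2], [1]])], a scalar multiple of (1, 2)^T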

Section 19.2 Finding Eigenvalues and Eigenvectors

For the square matrix \(A\text{,}\) the eigenvalues and associated eigenvectors are defined by the equation
\begin{equation} A\mathbf{x}=\lambda\mathbf{x}\tag{19.1} \end{equation}
Thus, we want to find values of \(\lambda\text{,}\) and associated vectors \(\mathbf{x}\text{,}\) that satisfy this equation. To do this, rearrange (19.1) to obtain
\begin{equation*} A\mathbf{x}-\lambda\mathbf{x}=\mathbf{0} \end{equation*}
which can be written as
\begin{equation*} A\mathbf{x}-\lambda I\mathbf{x}=\mathbf{0} \end{equation*}
or
\begin{equation} (A-\lambda I)\mathbf{x}=\mathbf{0}\tag{19.2} \end{equation}
where \(I\) is the identity matrix of order \(n\text{.}\) Since (19.2) is a homogeneous system of linear equations it will only have a non-zero solution when
\begin{equation} \det(A-\lambda I)=0\tag{19.3} \end{equation}

Definition 19.6.

The eigenvalues, \(\lambda\text{,}\) of the \(n\times n\) matrix \(A\) are found by solving the characteristic equation
\begin{equation*} \det(A-\lambda I)=0 \end{equation*}
The eigenvectors of the matrix \(A\) are found by solving the homogeneous linear system of equations
\begin{equation*} (A-\lambda I)\mathbf{x}=\mathbf{0} \end{equation*}
for each eigenvalue \(\lambda\text{.}\)
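The whole procedure in this definition can also be carried out symbolically with software. A short SymPy sketch (illustrative only), using the matrix from Example 19.5:

import sympy as sp

lam = sp.symbols('lambda')
A = sp.Matrix([[1, 2], [4, 3]])

char_poly = sp.expand(sp.det(A - lam * sp.eye(2)))
print(char_poly)                            # lambda**2 - 4*lambda - 5
print(sp.solve(sp.Eq(char_poly, 0), lam))   # [-1, 5]  (order may vary)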

Example 19.7.

Find all eigenvalues and associated eigenvectors for
\begin{equation*} A=\begin{pmatrix} 2 \amp 1 \\ 1 \amp 2 \end{pmatrix} \end{equation*}
Answer.
For \(\lambda=1\text{,}\) the eigenvectors take the form \(\mathbf{x}=t\begin{pmatrix} -1 \\ 1 \end{pmatrix}\text{.}\)
For \(\lambda=3\text{,}\) the eigenvectors take the form \(\mathbf{x}=t\begin{pmatrix} 1 \\ 1 \end{pmatrix}\text{.}\)
Solution.
The eigenvalues, \(\lambda\text{,}\) of \(A\) satisfy the characteristic equation
\begin{equation*} \det(A-\lambda I)=0 \end{equation*}
Since
\begin{equation*} A-\lambda I=\begin{pmatrix} 2\amp 1 \\ 1\amp 2 \end{pmatrix}-\lambda \begin{pmatrix} 1\amp 0 \\ 0\amp 1 \end{pmatrix}=\begin{pmatrix} 2-\lambda \amp 1 \\ 1\amp 2-\lambda \end{pmatrix} \end{equation*}
the characteristic equation is
\begin{align*} \det\begin{pmatrix} 2-\lambda \amp 1 \\ 1\amp 2-\lambda \end{pmatrix} \amp =0\\ (2-\lambda)^2-1 \amp =0\\ \lambda^2-4\lambda+3 \amp =0 \end{align*}
Thus the eigenvalues for \(A\) are
\begin{equation*} \lambda=1 \textrm{ and } \lambda=3 \end{equation*}
To find the eigenvectors associated with each of these eigenvalues we follow the procedure illustrated in Example 19.5, i.e. solve the homogeneous system of linear equations
\begin{equation*} (A-\lambda I)\mathbf{x}=\mathbf{0} \end{equation*}
for each eigenvalue.
When \(\lambda=1\) the augmented matrix and its reduced row echelon form are
\begin{equation*} \begin{pmatrix} 1 \amp 1 \amp 0 \\ 1 \amp 1 \amp 0 \end{pmatrix} \sim \begin{pmatrix} 1 \amp 1 \amp 0 \\ 0 \amp 0 \amp 0 \end{pmatrix} \begin{matrix} \\ \hspace{5mm} R_2'=R_2-R_1 \end{matrix} \end{equation*}
Thus, the eigenvectors take the form
\begin{align*} \mathbf{x} \amp =\begin{pmatrix}x_1 \\ x_2\end{pmatrix} \textrm{ where } x_1+x_2=0\\ \Rightarrow \mathbf{x} \amp =\begin{pmatrix} -t \\ t \end{pmatrix}=t\begin{pmatrix} -1 \\ 1 \end{pmatrix} \end{align*}
When \(\lambda=3\) the augmented matrix and an equivalent row echelon form are
\begin{equation*} \begin{pmatrix} -1 \amp 1 \amp 0 \\ 1 \amp -1 \amp 0 \end{pmatrix} \sim \begin{pmatrix} -1 \amp 1 \amp 0 \\ 0 \amp 0 \amp 0 \end{pmatrix} \begin{matrix} \\ \hspace{5mm} R_2'=R_2+R_1 \end{matrix} \end{equation*}
Thus, the eigenvectors take the form
\begin{align*} \mathbf{x} \amp =\begin{pmatrix}x_1 \\ x_2\end{pmatrix} \textrm{ where } -x_1+x_2=0\\ \Rightarrow \mathbf{x} \amp =\begin{pmatrix} t \\ t \end{pmatrix}=t\begin{pmatrix} 1 \\ 1 \end{pmatrix} \end{align*}
As always, you can very easily check that these vectors are indeed eigenvectors by performing the matrix multiplication \(A\mathbf{x}\text{.}\)
As an aside, we can get some geometric insight if we consider \(A\) as the matrix of a plane transformation (even though \(A\) isn't the matrix of any well-known transformation). Figure 19.8 shows the result of applying \(A\) to a square centred on the origin.
Figure 19.8.
Informally, the effect of \(A\) is to pull the square out along the diagonal joining vertices \(B\) and \(D\text{.}\) Thus vectors along this diagonal are mapped to scalar multiples of themselves (eigenvalue \(3\)), while vectors along the diagonal joining vertices \(A\) and \(C\) are left unchanged (eigenvalue \(1\)).
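As a further check, the eigenvalues and eigenvectors can be computed numerically. A short NumPy sketch (illustrative only):

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
vals, vecs = np.linalg.eig(A)
print(vals)   # eigenvalues 3 and 1 (order may vary)
print(vecs)   # columns are unit-length eigenvectors, parallel to (1, 1) and (-1, 1)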

Example 19.9.

Find all eigenvalues and associated eigenvectors for
\begin{equation*} A=\begin{pmatrix} 1 \amp 2 \\ 3 \amp 4 \end{pmatrix} \end{equation*}
Answer.
For \(\lambda=\dfrac{5+\sqrt{33}}{2}\text{,}\) the eigenvectors take the form \(\mathbf{x}=t\begin{pmatrix} 4 \\ 3+\sqrt{33} \end{pmatrix}\text{.}\)
For \(\lambda=\dfrac{5-\sqrt{33}}{2}\text{,}\) the eigenvectors take the form \(\mathbf{x}=t\begin{pmatrix} 4 \\ 3-\sqrt{33} \end{pmatrix}\text{.}\)
Solution.
The characteristic equation for \(A\) is
\begin{align*} \det\begin{pmatrix} 1-\lambda \amp 2 \\ 3\amp 4-\lambda \end{pmatrix} \amp =0\\ (1-\lambda)(4-\lambda)-6 \amp =0\\ \lambda^2-5\lambda-2 \amp =0 \end{align*}
Thus the eigenvalues for \(A\) are
\begin{equation*} \lambda=\frac{5\pm\sqrt{33}}{2}\approx 5.37, -0.37 \end{equation*}
Clearly the eigenvalues don’t always turn out to be nice numbers, even when the matrix is quite simple. The eigenvectors will also turn out to be rather complicated.
When \(\lambda=\dfrac{5+\sqrt{33}}{2}\) the augmented matrix and an equivalent row echelon form are
\begin{equation*} \begin{pmatrix} 1-\left(\frac{5+\sqrt{33}}{2} \right) \amp 2 \amp 0 \\ 3 \amp 4-\left(\frac{5+\sqrt{33}}{2} \right) \amp 0 \end{pmatrix} \sim \begin{pmatrix} \frac{-3-\sqrt{33}}{2} \amp 2 \amp 0 \\ 0 \amp 0 \amp 0 \end{pmatrix} \end{equation*}
Thus, the eigenvectors take the form
\begin{align*} \mathbf{x} \amp =\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \textrm{ where } \left(\frac{-3-\sqrt{33}}{2}\right)x_1+2x_2=0 \\ \Rightarrow \mathbf{x} \amp =\begin{pmatrix} 4t \\ t(3+\sqrt{33}) \end{pmatrix} = t\begin{pmatrix} 4 \\ 3+\sqrt{33} \end{pmatrix} \end{align*}
When \(\lambda=\dfrac{5-\sqrt{33}}{2}\) the augmented matrix and an equivalent row echelon form are
\begin{equation*} \begin{pmatrix} 1-\left(\frac{5-\sqrt{33}}{2} \right) \amp 2 \amp 0 \\ 3 \amp 4-\left(\frac{5-\sqrt{33}}{2} \right) \amp 0 \end{pmatrix} \sim \begin{pmatrix} \frac{-3+\sqrt{33}}{2} \amp 2 \amp 0 \\ 0 \amp 0 \amp 0 \end{pmatrix} \end{equation*}
Thus, the eigenvectors take the form
\begin{align*} \mathbf{x} \amp =\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \textrm{ where } \left(\frac{-3+\sqrt{33}}{2}\right)x_1+2x_2=0 \\ \Rightarrow \mathbf{x} \amp =\begin{pmatrix} 4t \\ t(3-\sqrt{33}) \end{pmatrix} = t\begin{pmatrix} 4 \\ 3-\sqrt{33} \end{pmatrix} \end{align*}
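A quick numerical check of these exact values (a NumPy sketch, illustrative only):

import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
vals, _ = np.linalg.eig(A)
print(vals)                      # approximately -0.372 and 5.372 (order may vary)
print((5 + np.sqrt(33)) / 2)     # 5.372281...
print((5 - np.sqrt(33)) / 2)     # -0.372281...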

Example 19.10.

Find all eigenvalues and associated eigenvectors for
\begin{equation*} A=\begin{pmatrix} 1 \amp 2 \\ 2 \amp 4 \end{pmatrix} \end{equation*}
Answer.
For \(\lambda=0\text{,}\) the eigenvectors take the form \(\mathbf{x}=t\begin{pmatrix} -2 \\ 1 \end{pmatrix}\text{.}\)
For \(\lambda=5\text{,}\) the eigenvectors take the form \(\mathbf{x}=t\begin{pmatrix} 1 \\ 2 \end{pmatrix}\text{.}\)
Solution.
The characteristic equation for \(A\) is
\begin{align*} \det\begin{pmatrix} 1-\lambda \amp 2 \\ 2\amp 4-\lambda \end{pmatrix} \amp =0\\ (1-\lambda)(4-\lambda)-4 \amp =0\\ \lambda^2-5\lambda \amp =0 \end{align*}
Thus the eigenvalues for \(A\) are
\begin{equation*} \lambda=0, 5 \end{equation*}
When \(\lambda=0\) the augmented matrix and its reduced row echelon form are
\begin{equation*} \begin{pmatrix} 1 \amp 2 \amp 0 \\ 2 \amp 4 \amp 0 \end{pmatrix} \sim \begin{pmatrix} 1 \amp 2 \amp 0 \\ 0 \amp 0 \amp 0 \end{pmatrix} \end{equation*}
Thus, the eigenvectors take the form
\begin{align*} \mathbf{x} \amp =\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \textrm{ where } x_1+2x_2=0 \\ \Rightarrow \mathbf{x} \amp =\begin{pmatrix} -2t \\ t \end{pmatrix} = t\begin{pmatrix} -2 \\ 1 \end{pmatrix} \end{align*}
When \(\lambda=5\) the augmented matrix and an equivalent row echelon form are
\begin{equation*} \begin{pmatrix} -4 \amp 2 \amp 0 \\ 2 \amp -1 \amp 0 \end{pmatrix} \sim \begin{pmatrix} 2 \amp -1 \amp 0 \\ 0 \amp 0 \amp 0 \end{pmatrix} \end{equation*}
Thus, the eigenvectors take the form
\begin{align*} \mathbf{x} \amp =\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \textrm{ where } 2x_1-x_2=0 \\ \Rightarrow \mathbf{x} \amp =\begin{pmatrix} t \\ 2t \end{pmatrix} = t\begin{pmatrix} 1 \\ 2 \end{pmatrix} \end{align*}
Notice that when \(0\) is an eigenvalue of \(A\) the system of equations that are solved to find the associated eigenvectors, i.e.
\begin{equation*} (A-\lambda I)\mathbf{x}=\mathbf{0} \end{equation*}
reduces to
\begin{equation*} A\mathbf{x}=\mathbf{0} \end{equation*}
Since \(0\) is an eigenvalue of \(A\) we know that this system must have non-zero solutions which in turn tells us that \(A\) is not invertible. Thus, if \(A\) has \(0\) as an eigenvalue it will not be invertible and vice versa.
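This connection is easy to confirm with software. A short SymPy sketch (illustrative only):

import sympy as sp

A = sp.Matrix([[1, 2], [2, 4]])
print(A.det())          # 0, so A is not invertible
print(A.eigenvals())    # {0: 1, 5: 1}  (eigenvalue: multiplicity)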

Example 19.11.

Find all eigenvalues and associated eigenvectors for
\begin{equation*} A=\begin{pmatrix} 0 \amp 1 \\ -1 \amp 0 \end{pmatrix} \end{equation*}
Answer.
For \(\lambda=i\text{,}\) the eigenvectors take the form \(\mathbf{x}=t\begin{pmatrix} 1 \\ i \end{pmatrix}\text{.}\)
For \(\lambda=-i\text{,}\) the eigenvectors take the form \(\mathbf{x}=t\begin{pmatrix} -1 \\ i \end{pmatrix}\text{.}\)
Solution.
The characteristic equation for \(A\) is
\begin{align*} \det\begin{pmatrix} 0-\lambda \amp 1 \\ -1 \amp 0-\lambda \end{pmatrix} \amp =0 \\ \lambda^2+1 \amp =0 \end{align*}
This equation has no real solutions and so \(A\) does not have any real eigenvalues. (Notice that \(A\) is the matrix for a rotation in the plane about the origin through \(-\frac{\pi}{2}^c\text{.}\)) However, there are applications where it is advantageous to allow the eigenvalues and eigenvectors to be complex. So, proceeding as before, the eigenvalues for \(A\) are
\begin{equation*} \lambda=\pm i \end{equation*}
When \(\lambda=i\) the augmented matrix and its reduced row echelon form are
\begin{equation*} \begin{pmatrix} -i \amp 1 \amp 0 \\ -1 \amp -i \amp 0 \end{pmatrix} \sim \begin{pmatrix} 1 \amp i \amp 0 \\ 0 \amp 0 \amp 0 \end{pmatrix} \end{equation*}
Thus, the eigenvectors take the form
\begin{align*} \mathbf{x} \amp =\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \textrm{ where } x_1+ix_2=0 \\ \Rightarrow \mathbf{x} \amp =\begin{pmatrix} t \\ ti \end{pmatrix} = t\begin{pmatrix} 1 \\ i \end{pmatrix} \end{align*}
When \(\lambda=-i\) the augmented matrix and its reduced row echelon form are
\begin{equation*} \begin{pmatrix} i \amp 1 \amp 0 \\ -1 \amp i \amp 0 \end{pmatrix} \sim \begin{pmatrix} 1 \amp -i \amp 0 \\ 0 \amp 0 \amp 0 \end{pmatrix} \end{equation*}
Thus, the eigenvectors take the form
\begin{align*} \mathbf{x} \amp =\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \textrm{ where } x_1-ix_2=0 \\ \Rightarrow \mathbf{x} \amp =\begin{pmatrix} -t \\ ti \end{pmatrix} = t\begin{pmatrix} -1 \\ i \end{pmatrix} \end{align*}
As we have seen, the characteristic equation for a \(2\times 2\) matrix is always a polynomial equation of degree \(2\text{.}\) Once we allow complex solutions we can see that every \(2\times 2\) matrix has exactly \(2\) eigenvalues (so long as we count repeated roots). In fact, it can be shown that this result holds more generally: every square matrix of order \(n\) has \(n\) eigenvalues (counting repeated roots).
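Numerical routines handle the complex case without any fuss. A short NumPy sketch (illustrative only):

import numpy as np

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
vals, vecs = np.linalg.eig(A)
print(vals)   # approximately [0.+1.j  0.-1.j]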

Example 19.12.

Find all eigenvalues and associated eigenvectors for
\begin{equation*} A=\begin{pmatrix} 7 \amp 1 \amp -2 \\ -3 \amp 3 \amp 6 \\ 2 \amp 2\amp 2 \end{pmatrix} \end{equation*}
Answer.
For \(\lambda=0\text{,}\) the eigenvectors take the form \(\mathbf{x}=t\begin{pmatrix} 1 \\ -3 \\ 2 \end{pmatrix}\text{.}\)
For \(\lambda=6\text{,}\) the eigenvectors take the form \(\mathbf{x}=s\begin{pmatrix} -1 \\ 1 \\ 0 \end{pmatrix}+t\begin{pmatrix} 2 \\ 0 \\ 1 \end{pmatrix}\text{.}\)
Solution.
The characteristic equation for \(A\) is
\begin{equation*} \det\begin{pmatrix} 7-\lambda \amp 1 \amp -2 \\ -3 \amp 3-\lambda \amp 6 \\ 2 \amp 2\amp 2-\lambda \end{pmatrix}=0 \end{equation*}
Now, using the minors method and expanding along row 1,
\begin{align*} \det\begin{pmatrix} 7-\lambda \amp 1 \amp -2 \\ -3 \amp 3-\lambda \amp 6 \\ 2 \amp 2\amp 2-\lambda \end{pmatrix} \amp =(7-\lambda)\begin{vmatrix} 3-\lambda \amp 6 \\ 2 \amp 2-\lambda \end{vmatrix}-1\begin{vmatrix} -3 \amp 6 \\ 2 \amp 2-\lambda \end{vmatrix}+(-2)\begin{vmatrix} -3 \amp 3-\lambda \\ 2 \amp 2 \end{vmatrix} \\ \amp = (7-\lambda)\{(3-\lambda)(2-\lambda)-12\}\\ \amp \quad \quad \quad \quad -\{(-3)(2-\lambda)-12\}-2\{-6-2(3-\lambda)\}\\ \amp = -\lambda^3+12\lambda^2-36\lambda \end{align*}
Thus the eigenvalues for \(A\) satisfy
\begin{align*} \lambda^3-12\lambda^2+36\lambda \amp =0\\ \lambda(\lambda-6)^2 \amp =0\\ \lambda \amp = 0, 6 \end{align*}
Since \(\lambda=6\) is a repeated root of the characteristic equation we say that this is an eigenvalue of multiplicity \(2\text{.}\) Also, since \(\lambda=0\) is an eigenvalue of \(A\) we know that \(A\) is not invertible.
When \(\lambda=0\) the augmented matrix and its reduced row echelon form are
\begin{equation*} \begin{pmatrix} 7 \amp 1 \amp -2 \amp 0 \\ -3 \amp 3 \amp 6 \amp 0 \\ 2 \amp 2 \amp 2 \amp 0 \end{pmatrix} \sim \begin{pmatrix} 1 \amp 0 \amp -\frac{1}{2} \amp 0 \\ 0 \amp 1 \amp \frac{3}{2} \amp 0 \\ 0 \amp 0 \amp 0 \amp 0 \end{pmatrix} \end{equation*}
Thus, the eigenvectors take the form
\begin{align*} \mathbf{x} \amp =\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} \textrm{ where } \begin{matrix} \hspace{3.5mm} x_1-x_3/2=0 \\ \hspace{2mm} x_2+3x_3/2=0 \end{matrix} \\ \Rightarrow \mathbf{x} \amp =\begin{pmatrix} t \\ -3t \\ 2t \end{pmatrix} = t\begin{pmatrix} 1 \\ -3 \\ 2 \end{pmatrix} \end{align*}
When \(\lambda=6\) the augmented matrix and its reduced row echelon form are
\begin{equation*} \begin{pmatrix} 1 \amp 1 \amp -2 \amp 0 \\ -3 \amp -3 \amp 6 \amp 0 \\ 2 \amp 2 \amp -4 \amp 0 \end{pmatrix} \sim \begin{pmatrix} 1 \amp 1 \amp -2 \amp 0 \\ 0 \amp 0 \amp 0 \amp 0 \\ 0 \amp 0 \amp 0 \amp 0 \end{pmatrix} \end{equation*}
Thus, the eigenvectors take the form
\begin{align*} \mathbf{x} \amp =\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} \textrm{ where } x_1+x_2-2x_3=0 \\ \Rightarrow \mathbf{x} \amp =\begin{pmatrix} -s+2t \\ s \\ t \end{pmatrix} = s\begin{pmatrix} -1 \\ 1 \\ 0 \end{pmatrix}+t\begin{pmatrix} 2 \\ 0 \\ 1 \end{pmatrix} \end{align*}
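Computer algebra systems report the multiplicities and eigenspace bases directly. A short SymPy sketch (illustrative only):

import sympy as sp

A = sp.Matrix([[7, 1, -2],
               [-3, 3, 6],
               [2, 2, 2]])
for val, mult, basis in A.eigenvects():
    print(val, mult, basis)
# Expected output (basis vectors may differ by a scalar multiple):
#   0 1 [Matrix([[1/2], [-3/2], [1]])]
#   6 2 [Matrix([[-1], [1], [0]]), Matrix([[2], [0], [1]])]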

Example 19.13.

Find all eigenvalues and associated eigenvectors for
\begin{equation*} A=\begin{pmatrix} 7 \amp 1 \amp -2 \\ 0 \amp 3 \amp 6 \\ 0 \amp 0 \amp 2 \end{pmatrix} \end{equation*}
Answer.
For \(\lambda=2\text{,}\) the eigenvectors take the form \(\mathbf{x}=t\begin{pmatrix} 8 \\ -30 \\ 5 \end{pmatrix}\text{.}\)
For \(\lambda=3\text{,}\) the eigenvectors take the form \(\mathbf{x}=t\begin{pmatrix} -1 \\ 4 \\ 0 \end{pmatrix}\text{.}\)
For \(\lambda=7\text{,}\) the eigenvectors take the form \(\mathbf{x}=t\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}\text{.}\)
Solution.
The characteristic equation for \(A\) is
\begin{equation*} \det\begin{pmatrix} 7-\lambda \amp 1 \amp -2 \\ 0 \amp 3-\lambda \amp 6 \\ 0 \amp 0 \amp 2-\lambda \end{pmatrix}=0 \end{equation*}
Now, using the minors method and expanding down column 1,
\begin{align*} \det\begin{pmatrix} 7-\lambda \amp 1 \amp -2 \\ 0 \amp 3-\lambda \amp 6 \\ 0 \amp 0\amp 2-\lambda \end{pmatrix} \amp =(7-\lambda)\begin{vmatrix} 3-\lambda \amp 6 \\ 0 \amp 2-\lambda \end{vmatrix}-0\begin{vmatrix} 1 \amp -2 \\ 0 \amp 2-\lambda \end{vmatrix}+0\begin{vmatrix} 1 \amp -2 \\ 3-\lambda \amp 6 \end{vmatrix} \\ \amp = (7-\lambda)\{(3-\lambda)(2-\lambda)-0\}-0+0\\ \amp = (7-\lambda)(3-\lambda)(2-\lambda) \end{align*}
Thus the eigenvalues for \(A\) are
\begin{equation*} \lambda=2,3,7 \end{equation*}
When \(\lambda=2\) the augmented matrix and an equivalent row echelon form are
\begin{equation*} \begin{pmatrix} 5 \amp 1 \amp -2 \amp 0 \\ 0 \amp 1 \amp 6 \amp 0 \\ 0 \amp 0 \amp 0 \amp 0 \end{pmatrix} \sim \begin{pmatrix} 5 \amp 0 \amp -8 \amp 0 \\ 0 \amp 1 \amp 6 \amp 0 \\ 0 \amp 0 \amp 0 \amp 0 \end{pmatrix} \end{equation*}
Thus, the eigenvectors take the form
\begin{align*} \mathbf{x} \amp =\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} \textrm{ where } \begin{matrix} \hspace{2mm} 5x_1-8x_3=0 \\ \hspace{4mm} x_2+6x_3=0 \end{matrix} \\ \Rightarrow \mathbf{x} \amp =\begin{pmatrix} 8t \\ -30t \\ 5t \end{pmatrix} = t\begin{pmatrix} 8 \\ -30 \\ 5 \end{pmatrix} \end{align*}
When \(\lambda=3\) the augmented matrix and an equivalent row echelon form are
\begin{equation*} \begin{pmatrix} 4 \amp 1 \amp -2 \amp 0 \\ 0 \amp 0 \amp 6 \amp 0 \\ 0 \amp 0 \amp -1 \amp 0 \end{pmatrix} \sim \begin{pmatrix} 4 \amp 1 \amp 0 \amp 0 \\ 0 \amp 0 \amp 1 \amp 0 \\ 0 \amp 0 \amp 0 \amp 0 \end{pmatrix} \end{equation*}
Thus, the eigenvectors take the form
\begin{align*} \mathbf{x} \amp =\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} \textrm{ where } \begin{matrix} \hspace{2mm} 4x_1+x_2=0 \\ \hspace{14.5mm} x_3=0 \end{matrix} \\ \Rightarrow \mathbf{x} \amp =\begin{pmatrix} -t \\ 4t \\ 0 \end{pmatrix} = t\begin{pmatrix} -1 \\ 4 \\ 0 \end{pmatrix} \end{align*}
When \(\lambda=7\) the augmented matrix and its reduced row echelon form are
\begin{equation*} \begin{pmatrix} 0 \amp 1 \amp -2 \amp 0 \\ 0 \amp -4 \amp 6 \amp 0 \\ 0 \amp 0 \amp -5 \amp 0 \end{pmatrix} \sim \begin{pmatrix} 0 \amp 1 \amp 0 \amp 0 \\ 0 \amp 0 \amp 1 \amp 0 \\ 0 \amp 0 \amp 0 \amp 0 \end{pmatrix} \end{equation*}
Thus, the eigenvectors take the form
\begin{align*} \mathbf{x} \amp =\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} \textrm{ where } \begin{matrix} \hspace{2mm} x_2=0 \\ \hspace{2mm} x_3=0 \end{matrix} \\ \Rightarrow \mathbf{x} \amp =\begin{pmatrix} t \\ 0 \\ 0 \end{pmatrix} = t\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} \end{align*}
As illustrated in Example 19.13, for an upper triangular matrix the eigenvalues are the entries on the main diagonal. Unfortunately, we cannot use row reduction to find eigenvalues because row equivalent matrices do not necessarily have the same eigenvalues. Interestingly though, it can be shown that the sum of the eigenvalues of any square matrix is equal to the sum of the entries on the main diagonal (i.e. to the trace of the matrix).
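A quick numerical check of the trace property (a NumPy sketch, illustrative only), using the matrix from Example 19.13:

import numpy as np

A = np.array([[7.0, 1.0, -2.0],
              [0.0, 3.0, 6.0],
              [0.0, 0.0, 2.0]])
vals = np.linalg.eigvals(A)
print(np.sort(vals.real))              # eigenvalues 2, 3 and 7
print(vals.sum().real, np.trace(A))    # both equal 12, the trace of A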
To close this section, we can once again add one more statement to our theorem connecting the ideas that we have studied so far.

Remark 19.15.

We can visualise the relationship between a square matrix \(A\) and its eigenvectors using the Sage cell below. This code plots the unit vector with angle \(\theta\) in blue, i.e. the vector \(\mathbf{v}=(\cos(\theta),\sin(\theta))^T\text{.}\) It also plots in red the vector \(A\mathbf{v}\text{.}\) When these vectors are parallel, \(\mathbf{v}\) is an eigenvector of \(A\text{.}\)
Here is a better visualisation of eigenvectors: since the property of being an eigenvector of a given matrix depends only on the direction (not the magnitude) of the vector, we draw a collection of (blue) unit vectors \(\mathbf{v}\text{,}\) as well as the output vectors \(A\mathbf{v}\) in red. Where a red vector is parallel to the blue vector it came from, that blue vector is an eigenvector. The factor by which its red vector is longer or shorter is the corresponding eigenvalue.
The Sage cell below shows this same concept in three dimensions.
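The interactive Sage cells themselves are not reproduced here. As a rough, static approximation of the two-dimensional picture described above, a Python/Matplotlib sketch (illustrative only) might look like this:

import numpy as np
import matplotlib.pyplot as plt

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])       # the matrix from Example 19.7

fig, ax = plt.subplots(figsize=(6, 6))
for theta in np.linspace(0, 2 * np.pi, 24, endpoint=False):
    v = np.array([np.cos(theta), np.sin(theta)])    # blue unit vector
    Av = A @ v                                       # its image under A, in red
    ax.arrow(0, 0, v[0], v[1], color='blue',
             head_width=0.05, length_includes_head=True)
    ax.arrow(0, 0, Av[0], Av[1], color='red',
             head_width=0.05, length_includes_head=True)

# Where a red arrow lies along the blue arrow it came from, that blue vector
# is an eigenvector; the ratio of their lengths is the eigenvalue.
ax.set_aspect('equal')
ax.set_xlim(-3.5, 3.5)
ax.set_ylim(-3.5, 3.5)
plt.show()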

Exercises Example Tasks

1.
Find all eigenvalues and associated eigenvectors for
\begin{equation*} A=\begin{pmatrix} -2 \amp -3 \\ 3 \amp 4 \end{pmatrix} \end{equation*}
2.
Find all eigenvalues and associated eigenvectors for
\begin{equation*} A=\begin{pmatrix} 4 \amp 0 \amp 1 \\ 2 \amp 3 \amp 2 \\ -1 \amp 0 \amp 2 \end{pmatrix} \end{equation*}
3.
Show that if \(\lambda\) is an eigenvalue of the matrix \(A\) then \(\lambda^2\) is an eigenvalue of the matrix \(A^2\text{.}\)

Section 19.3 A Simple Application

Imagine that a query into an internet search engine finds \(5\) websites. The engine wants to report these sites to the user in some sort of order relating to how useful each site might be, i.e. the engine wants to work out a ranking for each site.
One approach to this problem is to look at the links between the sites. For the sake of discussion, let the links between the sites be as shown in Figure 19.16.
Figure 19.16.
We can represent the information in this diagram via a matrix \(A\) where the entries \(a_{ij}\) satisfy
\begin{equation*} a_{ij}=\begin{cases} 1 \amp \textrm{ if website } i \textrm{ has a link to it from website } j \\ 0 \amp \textrm{ otherwise } \end{cases} \end{equation*}
Thus
\begin{equation*} A=\begin{pmatrix} 0 \amp 0 \amp 1 \amp 0 \amp 0 \\ 1 \amp 0 \amp 0 \amp 0 \amp 0 \\ 0 \amp 1 \amp 0 \amp 0 \amp 1 \\ 1 \amp 1 \amp 1 \amp 0 \amp 0 \\ 1 \amp 1 \amp 0 \amp 1 \amp 0 \end{pmatrix} \end{equation*}
Now let \(r_i\) be the ranking of website \(i\text{.}\) Then we require
\begin{equation*} 0\leq r_i\leq 1 \end{equation*}
\begin{equation*} \sum_{i=1}^{5} r_i =1 \end{equation*}
and
\begin{equation*} r_i \propto \textrm{ sum of the rankings of the sites which have links to it.} \end{equation*}
The idea here is that if a site has links to it from an important site (i.e. one with a high ranking) then that is better than having a link to it from an obscure site (i.e. one with a low ranking). So, letting the constant of proportionality be \(k\text{,}\) for the websites linked according to Figure 19.16 we have
\begin{align*} r_1 \amp =kr_3\\ r_2 \amp =kr_1\\ r_3 \amp =k(r_2+r_5)\\ r_4 \amp =k(r_1+r_2+r_3)\\ r_5 \amp =k(r_1+r_2+r_4) \end{align*}
or, in matrix form
\begin{equation*} \begin{pmatrix} r_1 \\ r_2 \\ r_3 \\ r_4 \\ r_5 \end{pmatrix} =k\begin{pmatrix} 0 \amp 0 \amp 1 \amp 0 \amp 0 \\ 1 \amp 0 \amp 0 \amp 0 \amp 0 \\ 0 \amp 1 \amp 0 \amp 0 \amp 1 \\ 1 \amp 1 \amp 1 \amp 0 \amp 0 \\ 1 \amp 1 \amp 0 \amp 1 \amp 0 \end{pmatrix}\begin{pmatrix} r_1 \\ r_2 \\ r_3 \\ r_4 \\ r_5 \end{pmatrix} \end{equation*}
Thus the rankings satisfy the equation
\begin{equation*} \mathbf{r}=kA\mathbf{r} \end{equation*}
or
\begin{equation*} A\mathbf{r}=\frac{1}{k}\mathbf{r}=\lambda\mathbf{r} \end{equation*}
i.e. an eigenvalue problem. For the matrix \(A\) as given above it turns out that the ranking vector is
\begin{equation*} \mathbf{r}=\begin{pmatrix} 0.14 \amp 0.08 \amp 0.22 \amp 0.27 \amp 0.29\end{pmatrix}^T \end{equation*}
and so the sites would be listed in the order \(5, 4, 3, 1, 2\text{.}\)
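A short NumPy sketch (illustrative only) of how such a ranking vector might be computed: take an eigenvector associated with the dominant eigenvalue of \(A\) and scale it so that its entries sum to \(1\text{.}\)

import numpy as np

A = np.array([[0, 0, 1, 0, 0],
              [1, 0, 0, 0, 0],
              [0, 1, 0, 0, 1],
              [1, 1, 1, 0, 0],
              [1, 1, 0, 1, 0]], dtype=float)

vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)        # index of the dominant eigenvalue
r = np.real(vecs[:, k])
r = r / r.sum()                 # scale the rankings so that they sum to 1
print(np.round(r, 2))           # approximately [0.14 0.08 0.22 0.27 0.29]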
Of course, Google’s page ranking system is much more sophisticated than that described above, but the idea is essentially the same. There is also much more advanced mathematics at work behind the scenes than indicated above. For example, how do we know which eigenvalue and associated eigenvector to take? What happens if all of the eigenvalues are complex? And so on.