Recall that a square matrix is a matrix with the same number of rows as columns. We call an \(n\times n \) matrix a square matrix of order \(n \text{.}\) When we add or multiply two square matrices of order \(n \) we always obtain a square matrix of order \(n \text{.}\) The zero matrix, \(0 \text{,}\) of order \(n \) is the matrix with all entries \(0 \text{,}\) i.e.
\begin{equation*}
0 =
\begin{pmatrix}
0 \amp 0 \amp \cdots \amp 0 \\
0 \amp 0 \amp \cdots \amp 0 \\
\vdots \amp \vdots \amp \ddots \amp \vdots \\
0 \amp 0 \amp \cdots \amp 0
\end{pmatrix}
\end{equation*}
and has the properties
\(\displaystyle 0A=A0=0 \)
\(\displaystyle A+0=A \)
\(\displaystyle A-A=0 \)
where \(A \) is any square matrix of order \(n \text{.}\) The identity matrix, \(I, \) of order \(n \) is the \(n\times n \) matrix with \(1\text{'s} \) on the main diagonal and all other entries \(0 \text{,}\) i.e.
\begin{equation*}
I =
\begin{pmatrix}
1 \amp 0 \amp \cdots \amp 0 \\
0 \amp 1 \amp \cdots \amp 0 \\
\vdots \amp \vdots \amp \ddots \amp \vdots \\
0 \amp 0 \amp \cdots \amp 1
\end{pmatrix}
\end{equation*}
The identity matrix has the property that for any square matrix \(A \) of order \(n \text{,}\)
\begin{equation*}
AI = IA = A.
\end{equation*}
\(I \) is the only matrix that satisfies this property.
Section 18.1 Inverse Matrices
Definition 18.1. Inverse Matrix.
Given the square matrix \(A \text{,}\) if there exists a square matrix \(B \) such that
\begin{equation*}
AB=BA=I
\end{equation*}
then we call the matrix \(B \) the inverse of \(A \) and write \(B=A^{-1} \text{.}\)
Note:
If matrix
\(B \) is the inverse of matrix
\(A \) then matrix
\(A \) is the inverse of matrix
\(B \text{,}\) i.e.
\begin{equation*}
(A^{-1})^{-1}=A.
\end{equation*}
If matrix \(A \) has an inverse then we say that \(A \) is invertible or non-singular.
The inverse of a matrix (if it exists) is unique.
For a square matrix \(A \text{,}\) if there exists a matrix \(B \) such that \(AB=I \) then it follows that \(BA=I \) as well.
Example 18.2.
Let \(A=\begin{pmatrix} 1 \amp 1 \\ 1 \amp 2 \end{pmatrix} \quad \mbox{and} \quad B=\begin{pmatrix} 2 \amp -1 \\ -1 \amp 1 \end{pmatrix}.\) Calculate \(AB\; \text{and}\; BA. \)
Answer.
\(AB=BA=I. \)
Solution.
\(\displaystyle AB=\begin{pmatrix} 1 \amp 1 \\ 1 \amp 2 \end{pmatrix} \begin{pmatrix} 2 \amp -1 \\ -1 \amp 1 \end{pmatrix} =\begin{pmatrix} 1 \amp 0 \\ 0 \amp 1 \end{pmatrix}=I \)
\(\displaystyle BA=\begin{pmatrix} 2 \amp -1 \\ -1 \amp 1 \end{pmatrix} \begin{pmatrix} 1 \amp 1 \\ 1 \amp 2 \end{pmatrix} = \begin{pmatrix} 1 \amp 0 \\ 0 \amp 1 \end{pmatrix}=I \)
Thus
\(A^{-1}=B\; \text{and}\; B^{-1}=A. \)
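If a numerical package is available these products can also be checked by machine. The following sketch assumes Python with NumPy and is purely illustrative:
\begin{verbatim}
import numpy as np

A = np.array([[1, 1], [1, 2]])
B = np.array([[2, -1], [-1, 1]])

# Both products should be the 2x2 identity matrix.
print(A @ B)   # [[1 0]
               #  [0 1]]
print(B @ A)   # [[1 0]
               #  [0 1]]
\end{verbatim}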
Example 18.3.
Show that \(A=\begin{pmatrix} 1 \amp -1 \\ 1 \amp -1 \end{pmatrix} \) is not invertible.
Solution.
Assume that
\(A^{-1} \) exists. Then, since
\(A^{2}=0, \) we have that
\begin{equation*}
A=IA=(A^{-1}A)A=A^{-1}A^{2}=A^{-1}0=0,
\end{equation*}
which is a contradiction. Thus we conclude that
\(A^{-1} \) does not exist.
Given a square matrix \(A \text{,}\) to find its inverse we need to find a matrix \(A^{-1} \) such that \(AA^{-1}=I \text{.}\) Let’s begin by considering the \(2\times 2 \) case. Let
\begin{equation*}
A=\begin{pmatrix} a \amp b \\ c \amp d \end{pmatrix},
\end{equation*}
where \(a\text{,}\) \(b\text{,}\) \(c\) and \(d\) are given. We want to find the entries in
\begin{equation*}
A^{-1}=\begin{pmatrix} x_{1} \amp y_{1} \\ x_{2} \amp y_{2} \end{pmatrix}.
\end{equation*}
Since \(AA^{-1}=I \) we have that
\begin{equation*}
\begin{pmatrix} a \amp b \\ c \amp d \end{pmatrix} \begin{pmatrix} x_{1} \amp y_{1} \\ x_{2} \amp y_{2} \end{pmatrix}=\begin{pmatrix} 1 \amp 0 \\ 0 \amp 1 \end{pmatrix},
\end{equation*}
or equivalently,
\begin{equation*}
\begin{cases}
ax_{1} + bx_{2} = 1\\
cx_{1} + dx_{2} = 0
\end{cases}
\;\;\; \text{ and } \;\;\;
\begin{cases}
ay_{1} + by_{2} = 0\\
cy_{1} + dy_{2} = 1
\end{cases}
\end{equation*}
Both systems of equations have the same coefficient matrix, i.e.
\begin{equation*}
\begin{pmatrix} a \amp b \\ c \amp d \end{pmatrix}.
\end{equation*}
The augmented matrices for these systems are
\begin{equation*}
\begin{pmatrix} a \amp b \amp 1 \\ c \amp d \amp 0 \end{pmatrix}
\;\;\; \text{ and } \;\;\;
\begin{pmatrix} a \amp b \amp 0 \\ c \amp d \amp 1 \end{pmatrix}
\end{equation*}
and since these have the same coefficient matrix we can combine the augmented matrices to get
\begin{equation*}
\left(\begin{array}{c c | c c} a \amp b \amp 1 \amp 0 \\c \amp d \amp 0 \amp 1 \end{array}\right)
\end{equation*}
By reducing this matrix to reduced row echelon form we can solve both sets of equations at the same time. If \(A \) has an inverse then the reduced row-echelon form will be
\begin{equation*}
\left(\begin{array}{c c | c c} 1 \amp 0 \amp \alpha \amp \beta \\ 0 \amp 1 \amp \chi \amp \delta \end{array}\right)
\end{equation*}
and hence \(x_{1}=\alpha,\; x_{2}=\chi,\; y_{1}=\beta \) and \(\; y_{2}=\delta \text{.}\) Thus, the augmented section of this matrix will contain \(A^{-1} \text{.}\)
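Equivalently, the two systems say that the columns of \(A^{-1} \) solve \(A\mathbf{x}=\mathbf{e}_{1} \) and \(A\mathbf{y}=\mathbf{e}_{2} \text{,}\) where \(\mathbf{e}_{1} \) and \(\mathbf{e}_{2} \) are the columns of \(I \text{.}\) A short numerical sketch of this idea (assuming Python with NumPy; the matrix is an illustrative example only):
\begin{verbatim}
import numpy as np

A = np.array([[2.0, 1.0], [5.0, 3.0]])   # illustrative matrix, not from the text

# Solve A x = e1 and A y = e2; the solutions are the columns of the inverse.
col1 = np.linalg.solve(A, np.array([1.0, 0.0]))
col2 = np.linalg.solve(A, np.array([0.0, 1.0]))
A_inv = np.column_stack([col1, col2])
print(A_inv)   # [[ 3. -1.]
               #  [-5.  2.]]
\end{verbatim}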
Example 18.4.
Find the inverse of matrix \(A=\begin{pmatrix} 1 \amp 1 \\ 1 \amp 2 \end{pmatrix}. \)
Answer.
\(A^{-1}=\begin{pmatrix} 2 \amp -1 \\ -1 \amp 1 \end{pmatrix}. \)
Solution.
Begin by augmenting matrix
\(A \) with the identity matrix
\(I, \)
\begin{equation*}
\left(\begin{array}{c c | c c} 1 \amp 1 \amp 1 \amp 0 \\1 \amp 2 \amp 0 \amp 1 \end{array}\right)
\end{equation*}
Now use the elementary row operations to reduce this to reduced row echelon form
\begin{align*}
\left(\begin{array}{c c | c c} 1 \amp 1 \amp 1 \amp 0 \\1 \amp 2 \amp 0 \amp 1 \end{array}\right) \amp \sim\left(\begin{array}{c c | c c} 1 \amp 1 \amp 1 \amp 0 \\0 \amp 1 \amp -1 \amp 1 \end{array}\right) \hspace{8mm} R'_{2}= R_{2}-R_{1} \\
\amp \sim \left(\begin{array}{c c | c c} 1 \amp 0 \amp 2 \amp -1 \\0 \amp 1 \amp -1 \amp 1 \end{array}\right) \hspace{4mm} R'_{1}= R_{1}-R_{2}
\end{align*}
We can read off the inverse as
\begin{equation*}
A^{-1}=\begin{pmatrix} 2 \amp -1 \\ -1 \amp 1 \end{pmatrix}.
\end{equation*}
Example 18.5.
Find the inverse of matrix \(A=\begin{pmatrix} 1 \amp -1 \\ 1 \amp -1 \end{pmatrix}. \)
Answer.
\(A \) has no inverse.
Solution.
Using the same procedure as in
Example 18.4 begin by augmenting matrix
\(A \) with the identity matrix
\(I \text{,}\)
\begin{equation*}
\left(\begin{array}{c c | c c} 1 \amp -1 \amp 1 \amp 0 \\1 \amp -1 \amp 0 \amp 1 \end{array}\right)
\end{equation*}
Now use the elementary row operations to reduce this to reduced row echelon form.
\begin{equation*}
\left(\begin{array}{c c | c c} 1 \amp -1 \amp 1 \amp 0 \\1 \amp -1 \amp 0 \amp 1 \end{array}\right)\sim \left(\begin{array}{c c | c c} 1 \amp -1 \amp 1 \amp 0 \\0 \amp 0 \amp -1 \amp 1 \end{array}\right) \hspace{5mm} R'_{2}= R_{2}-R_{1}
\end{equation*}
Since there is a row of
\(0 \)’s in the coefficient part of the reduced row echelon form while the remainder of the row is non-zero, we can see that there is no solution to the equations for finding the entries in the inverse matrix for
\(A \text{.}\) Thus, we conclude that matrix
\(A \) is not invertible, i.e. has no inverse.
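Numerical software reaches the same conclusion. A minimal sketch (assuming Python with NumPy, whose inv routine raises an error for a singular matrix):
\begin{verbatim}
import numpy as np

A = np.array([[1.0, -1.0], [1.0, -1.0]])
print(np.linalg.det(A))       # 0.0, so A is singular

try:
    np.linalg.inv(A)
except np.linalg.LinAlgError:
    print("A is not invertible")
\end{verbatim}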
The reasoning applied above to find a procedure for finding the inverse of a \(2\times 2 \) matrix applies equally well to any sized square matrix. Thus we have a general procedure for finding the inverse of a square matrix.
Theorem 18.6.
Given the square matrix \(A \text{,}\) to find its inverse \(A^{-1} \text{:}\)
Form the matrix \(\begin{pmatrix}
A \bigm | I \\
\end{pmatrix} \) by augmenting \(A \) with the identity matrix \(I, \)
Row reduce \(\begin{pmatrix}
A \bigm | I \\
\end{pmatrix} \) to reduced row echelon form,
If the reduced row echelon matrix is of the form \(\begin{pmatrix}
I \bigm | A^{-1} \\
\end{pmatrix} \) read off \(A^{-1} \text{.}\) Otherwise the inverse does not exist.
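The steps in Theorem 18.6 translate almost directly into code. The following function is a minimal sketch only (it assumes Python with NumPy, and adds partial pivoting and a small tolerance, details the theorem leaves implicit):
\begin{verbatim}
import numpy as np

def inverse_by_row_reduction(A, tol=1e-12):
    """Row reduce (A | I) to (I | A^{-1}); raise an error if A is not invertible."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    M = np.hstack([A, np.eye(n)])          # form the augmented matrix (A | I)
    for col in range(n):
        pivot = col + np.argmax(np.abs(M[col:, col]))   # choose a pivot row
        if np.abs(M[pivot, col]) < tol:
            raise ValueError("matrix is not invertible")
        M[[col, pivot]] = M[[pivot, col]]  # swap rows if necessary
        M[col] /= M[col, col]              # scale the pivot row
        for row in range(n):               # clear the rest of the column
            if row != col:
                M[row] -= M[row, col] * M[col]
    return M[:, n:]                        # the augmented part now holds A^{-1}

print(inverse_by_row_reduction([[1, 1], [1, 2]]))   # [[ 2. -1.]
                                                    #  [-1.  1.]]
\end{verbatim}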
Example 18.7.
Find the inverse, if it exists, of \(A=\begin{pmatrix} 2 \amp 1 \amp 6 \\ -4 \amp 5 \amp -3 \\ 2 \amp -1 \amp 3 \end{pmatrix}. \)
Answer.
\(A^{-1}=\begin{pmatrix} -2 \amp \frac{3}{2} \amp \frac{11}{2} \\ -1 \amp 1 \amp 3 \\ 1 \amp -\frac{2}{3} \amp -\frac{7}{3} \end{pmatrix}. \)
Solution.
Form the augmented matrix and row reduce to reduced row echelon form:
\begin{align*}
\amp \left(\begin{array}{c c c | c c c} 2 \amp 1 \amp 6 \amp 1 \amp 0 \amp 0 \\-4 \amp 5 \amp -3 \amp 0 \amp 1 \amp 0 \\2 \amp -1 \amp 3 \amp 0 \amp 0 \amp 1 \end{array}\right)\\
\amp \sim \left(\begin{array}{c c c | c c c} 1 \amp \frac{1}{2} \amp 3 \amp \frac{1}{2} \amp 0 \amp 0 \\0 \amp 7 \amp 9 \amp 2 \amp 1 \amp 0 \\0 \amp -2 \amp -3 \amp -1 \amp 0 \amp 1 \end{array}\right)
\begin{matrix}
R'_{1} = \amp \frac{R_{1}}{2}\;\;\;\qquad\\
R'_{2} = \amp R_{2}+4R'_{1}\\
R'_{3} = \amp R_{3}-R_{1}
\end{matrix} \\
\amp \sim \left(\begin{array}{c c c | c c c} 1 \amp 0 \amp \frac{33}{14} \amp \frac{5}{14} \amp -\frac{1}{14} \amp 0 \\0 \amp 1 \amp \frac{9}{7} \amp \frac{2}{7} \amp \frac{1}{7} \amp 0 \\0 \amp 0 \amp -\frac{3}{7} \amp -\frac{3}{7} \amp \frac{2}{7} \amp 1 \end{array}\right)
\begin{matrix}
R'_{1} = \amp R_{1}-\frac{R'_{2}}{2} \\
R'_{2} = \amp \frac{R_{2}}{7}\;\;\;\qquad\\
R'_{3} = \amp R_{3}+2R'_{2}
\end{matrix} \\
\amp \sim \left(\begin{array}{c c c | c c c} 1 \amp 0 \amp 0 \amp -2 \amp \frac{3}{2} \amp \frac{11}{2} \\0 \amp 1 \amp 0 \amp -1 \amp 1 \amp 3 \\0 \amp 0 \amp 1 \amp 1 \amp -\frac{2}{3} \amp -\frac{7}{3} \end{array}\right)
\begin{matrix}
R'_{1} = \amp R_{1}-\frac{33R'_{3}}{14} \\
R'_{2} = \amp R_{2}-\frac{9R'_{3}}{7} \\
R'_{3} = \amp -\frac{7R_{3}}{3} \;\;\; \qquad
\end{matrix}
\end{align*}
Thus
\begin{equation*}
A^{-1}=\begin{pmatrix} -2 \amp \frac{3}{2} \amp \frac{11}{2} \\ -1 \amp 1 \amp 3 \\ 1 \amp -\frac{2}{3} \amp -\frac{7}{3} \end{pmatrix}.
\end{equation*}
Of course we can always check our answer by confirming that
\(AA^{-1}=I \text{.}\)
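For example, a quick numerical check (a sketch assuming Python with NumPy; np.allclose allows for floating point round-off):
\begin{verbatim}
import numpy as np

A = np.array([[2, 1, 6], [-4, 5, -3], [2, -1, 3]], dtype=float)
A_inv = np.array([[-2, 3/2, 11/2], [-1, 1, 3], [1, -2/3, -7/3]])

print(np.allclose(A @ A_inv, np.eye(3)))   # True
\end{verbatim}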
Example 18.8.
Find the inverse, if it exists, of \(A=\begin{pmatrix} 2 \amp 1 \amp 6 \\ -4 \amp 5 \amp -3 \\ 2 \amp 8 \amp 15 \end{pmatrix}. \)
Answer.
\(A \) has no inverse.
Solution.
Form the augmented matrix and row reduce to reduced row echelon form:
\begin{align*}
\amp \left(\begin{array}{c c c | c c c} 2 \amp 1 \amp 6 \amp 1 \amp 0 \amp 0 \\-4 \amp 5 \amp -3 \amp 0 \amp 1 \amp 0 \\2 \amp 8 \amp 15 \amp 0 \amp 0 \amp 1\end{array}\right)\\
\amp \sim\left(\begin{array}{c c c | c c c} 1 \amp \frac{1}{2} \amp 3 \amp \frac{1}{2} \amp 0 \amp 0 \\0 \amp 7 \amp 9 \amp 2 \amp 1 \amp 0 \\0 \amp 7 \amp 9 \amp -1 \amp 0 \amp 1\end{array}\right)
\begin{matrix}
R'_{1} = \amp \frac{R_{1}}{2}\;\;\;\qquad\\
R'_{2} = \amp R_{2}+4R'_{1}\\
R'_{3} = \amp R_{3}-R_{1}
\end{matrix} \\
\amp \sim\left(\begin{array}{c c c | c c c} 1 \amp 0 \amp \frac{33}{14} \amp \frac{5}{14} \amp -\frac{1}{14} \amp 0 \\0 \amp 1 \amp \frac{9}{7} \amp \frac{2}{7} \amp \frac{1}{7} \amp 0 \\0 \amp 0 \amp 0 \amp -3 \amp -1 \amp 1\end{array}\right)
\begin{matrix}
R'_{1} = \amp R_{1}-\frac{R'_{2}}{2} \\
R'_{2} = \amp \frac{R_{2}}{7}\;\;\;\qquad\\
R'_{3} = \amp R_{3}-7R'_{2}
\end{matrix}
\end{align*}
Since the coefficient part of the row reduced matrix contains a row of zeros it can never be reduced to the identity matrix, and so \(A \) does not have an inverse.
For later reference, some properties of the inverse of a matrix are listed below.
Theorem 18.9. Properties of the Matrix Inverse.
Let \(A \) and \(B \) be square invertible matrices of order \(n \) and let \(k \) be a real number. Then
\(\displaystyle (A^{-1})^{-1}=A \)
\(\displaystyle (A^{T})^{-1}=(A^{-1})^{T} \)
\(\displaystyle (kA)^{-1}=\frac{1}{k}A^{-1}\;\;\;\; \text{where}\;\; k \neq 0 \)
\(\displaystyle (AB)^{-1}=B^{-1}A^{-1} \)
\(\displaystyle (A^{r})^{-1}=(A^{-1})^{r}\;\;\;\; \text{where}\;\; r\in \mathbb{N} \)
Example 18.10.
Confirm that \((AB)^{-1}=B^{-1}A^{-1} \) holds for the matrices.
\begin{equation*}
A=\begin{pmatrix} 3 \amp 1 \\ -1 \amp 2 \end{pmatrix} \;\;\; \text{and}\;\;\; B=\begin{pmatrix} 1 \amp 5 \\ 0 \amp -2 \end{pmatrix}
\end{equation*}
Solution.
Firstly,
\begin{equation*}
AB=\begin{pmatrix} 3 \amp 1 \\ -1 \amp 2 \end{pmatrix} \begin{pmatrix} 1 \amp 5 \\ 0 \amp -2 \end{pmatrix} = \begin{pmatrix} 3 \amp 13 \\ -1 \amp -9 \end{pmatrix},
\end{equation*}
and so
\begin{equation*}
(AB)^{-1}=\begin{pmatrix} 3 \amp 13 \\ -1 \amp -9 \end{pmatrix}^{-1} = -\frac{1}{14}\begin{pmatrix} -9 \amp -13 \\ 1 \amp 3 \end{pmatrix}.
\end{equation*}
Next
\begin{equation*}
A^{-1}=\begin{pmatrix} 3 \amp 1 \\ -1 \amp 2 \end{pmatrix}^{-1} = \frac{1}{7}\begin{pmatrix} 2 \amp -1 \\ 1 \amp 3 \end{pmatrix},
\end{equation*}
\begin{equation*}
B^{-1}=\begin{pmatrix} 1 \amp 5 \\ 0 \amp -2 \end{pmatrix}^{-1} = -\frac{1}{2}\begin{pmatrix} -2 \amp -5 \\ 0 \amp 1 \end{pmatrix},
\end{equation*}
and so
\begin{equation*}
B^{-1}A^{-1}= -\frac{1}{2}\begin{pmatrix} -2 \amp -5 \\ 0 \amp 1 \end{pmatrix} \frac{1}{7}\begin{pmatrix} 2 \amp -1 \\ 1 \amp 3 \end{pmatrix} = - \frac{1}{14}\begin{pmatrix} -9 \amp -13 \\ 1 \amp 3 \end{pmatrix}.
\end{equation*}
Since this agrees with \((AB)^{-1} \) calculated above, we have confirmed that \((AB)^{-1}=B^{-1}A^{-1} \) for these matrices.
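The same confirmation can be carried out numerically (a sketch assuming Python with NumPy):
\begin{verbatim}
import numpy as np

A = np.array([[3, 1], [-1, 2]], dtype=float)
B = np.array([[1, 5], [0, -2]], dtype=float)

lhs = np.linalg.inv(A @ B)                 # (AB)^{-1}
rhs = np.linalg.inv(B) @ np.linalg.inv(A)  # B^{-1} A^{-1}
print(np.allclose(lhs, rhs))               # True
\end{verbatim}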
The idea of a matrix inverse can be related to the problem of solving systems of linear equations in the case where the number of equations in the system is the same as the number of variables. As we have seen previously, we can write the system of \(n \) linear equations in \(n \) unknowns
\begin{align*}
a_{11} x_{1}+a_{12}x_{2} + \dots +a_{1n} x_{n}= \amp b_{1}\\
a_{21} x_{1}+a_{22}x_{2} + \dots +a_{2n} x_{n}= \amp b_{2}\\
\vdots \amp\\
a_{n1} x_{1}+a_{n2}x_{2} + \dots +a_{nn} x_{n}= \amp b_{n}
\end{align*}
\begin{equation}
A \mathbf{x} = \mathbf{b}\tag{18.1}
\end{equation}
where \(A \) is the \(n\times n \) matrix of coefficients, \(\mathbf{x} \) is the \(n\times 1 \) column vector of unknowns and \(\mathbf{b} \) is the \(n\times 1 \) column vector of constants. If \(A \) is invertible then we can solve (18.1) by multiplying both sides on the left by \(A^{-1} \text{:}\)
\begin{align*}
A^{-1}(A \mathbf{x})= \amp A^{-1} \mathbf{b},\\
(A^{-1}A) \mathbf{x}= \amp A^{-1} \mathbf{b},\\
\mathbf{x}= \amp A^{-1}\mathbf{b}.
\end{align*}
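In code this calculation reads as follows. The sketch assumes Python with NumPy and an illustrative \(2\times 2 \) system; in practice np.linalg.solve is preferred to forming the inverse explicitly, but both give the same answer here:
\begin{verbatim}
import numpy as np

A = np.array([[2, 1], [5, 3]], dtype=float)   # illustrative system, not from the text
b = np.array([4, 11], dtype=float)

x = np.linalg.inv(A) @ b       # x = A^{-1} b
print(x)                       # [1. 2.]
print(np.linalg.solve(A, b))   # same result without forming A^{-1}
\end{verbatim}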
Example 18.11.
Solve the system of equations
\begin{align*}
2x + y + 6z = \amp 9,\\
-4x + 5y - 3z = \amp -7,\\
2x - y + 3z = \amp 5.
\end{align*}
Answer.
\(x=-1, \; y=-1,\; \text{and} \; z=2. \)
Solution.
In matrix notation this system can be written as
\begin{equation*}
\begin{pmatrix}
2 \amp 1 \amp 6 \\
-4 \amp 5 \amp -3 \\
2 \amp -1 \amp 3
\end{pmatrix}
\begin{pmatrix}
x \\
y \\
z
\end{pmatrix}=
\begin{pmatrix}
9 \\
-7 \\
5
\end{pmatrix}
\end{equation*}
We found the inverse of the coefficient matrix in
Example 18.7 and, using that result, we have
\begin{equation*}
\begin{pmatrix}
x \\
y \\
z
\end{pmatrix}=
\begin{pmatrix}
-2 \amp \frac{3}{2} \amp \frac{11}{2} \\
-1 \amp 1 \amp 3 \\
1 \amp -\frac{2}{3} \amp -\frac{7}{3}
\end{pmatrix}
\begin{pmatrix}
9 \\
-7 \\
5
\end{pmatrix}=
\begin{pmatrix}
-1 \\
-1 \\
2
\end{pmatrix}
\end{equation*}
Thus the solution is
\(x=-1, \; y=-1,\; \text{and} \; z=2. \)
The connection between the invertibility of \(A \) and the solutions of a system of \(n \) linear equations in \(n \) variables is summarised in the following theorem.
Theorem 18.12.
Consider the non-homogeneous system of \(n \) linear equations in \(n \) variables
\begin{equation*}
A \mathbf{x} = \mathbf{b} ,\; \mathbf{b} \neq 0.
\end{equation*}
The following statements are equivalent:
The system has a unique solution,
\(A \mathbf{x} =0 \) has only the trivial solution \(\mathbf{x} =0, \)
The columns of \(A \) are linearly independent,
\(A \) is invertible.
Exercises Example Tasks
1.
Find the inverse, if it exists, of
\begin{align*}
A = \amp \begin{pmatrix}
2 \amp 1 \amp 3 \\
-1 \amp 2 \amp 4 \\
8 \amp -1 \amp 1
\end{pmatrix}\\
B = \amp \begin{pmatrix}
1 \amp 1 \amp 2 \\
-1 \amp 2 \amp -1 \\
1 \amp -1 \amp 1
\end{pmatrix}
\end{align*}
2.
Find the matrix for a rotation in the plane about the origin through \(\frac{\pi}{4}^{c} \text{.}\) Find the inverse of this matrix and interpret it geometrically.
Section 18.2 Determinants
If we attempted to find the inverse of the general \(2\times 2 \) matrix
\begin{equation*}
A= \begin{pmatrix}
a \amp b \\
c \amp d
\end{pmatrix}
\end{equation*}
we would find that, if \(ad-bc\neq 0 \) the inverse is
\begin{equation*}
A^{-1}=\frac{1}{ad-bc} \begin{pmatrix}
d \amp -b \\
-c \amp a
\end{pmatrix},
\end{equation*}
and if \(ad-bc=0 \) then \(A \) does not have an inverse. Thus for a \(2\times 2\) matrix, \(A \text{,}\) calculating the quantity \(ad-bc \) can act as a test for the invertibility of \(A \text{.}\) This quantity is called the determinant of \(A \) and is denoted by
\begin{equation*}
\det(A) \;\text{or}\; \vert A\vert.
\end{equation*}
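As a check on the formula, multiplying \(A \) by the claimed inverse does give the identity matrix whenever \(ad-bc\neq 0 \text{:}\)
\begin{equation*}
\begin{pmatrix} a \amp b \\ c \amp d \end{pmatrix} \frac{1}{ad-bc}\begin{pmatrix} d \amp -b \\ -c \amp a \end{pmatrix} = \frac{1}{ad-bc}\begin{pmatrix} ad-bc \amp -ab+ba \\ cd-dc \amp -cb+da \end{pmatrix} = \begin{pmatrix} 1 \amp 0 \\ 0 \amp 1 \end{pmatrix}.
\end{equation*}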
Example 18.13.
Find the determinant of
\begin{equation*}
A=\begin{pmatrix} 2 \amp -1 \\ 3 \amp 1 \end{pmatrix}.
\end{equation*}
Answer.
\(\det(A)=5. \)
Solution.
\begin{equation*}
\begin{vmatrix}
2 \amp -1 \\
3 \amp 1
\end{vmatrix}=2\times 1 -(3\times (-1))=5.
\end{equation*}
Note that since the determinant is not zero this matrix is invertible.
We can also think about the determinant of a
\(2\times 2\) matrix geometrically. We know (see
Theorem 18.12) that a matrix has an inverse when its column vectors are linearly independent. Thus, the
\(2\times 2\) matrix
\begin{equation*}
A= \begin{pmatrix}
a \amp b \\
c \amp d
\end{pmatrix}
\end{equation*}
will have an inverse when the vectors
\((a,c)^{T} \) and
\((b,d)^{T} \) are linearly independent. Now we also know (from
Chapter 16) that two vectors in the plane are linearly independent if they define a parallelogram with non-zero area. Finally, from Math1110, we know that the area of the parallelogram defined by the vectors
\(\mathbf{u} = (u_{1},u_{2})^{T} \) and
\(\mathbf{v} = (v_{1},v_{2})^{T} \) is
\begin{equation*}
Area=\vert u_{1}v_{2} - u_{2}v_{1}\vert.
\end{equation*}
Thus matrix \(A \) will have an inverse when \(ad-bc\neq 0 \) , i.e. \(\det(A)\neq 0. \)
Let’s now apply the same geometric argument to the general \(3 \times 3\) matrix
\begin{equation*}
A=\begin{pmatrix}
u_{1} \amp v_{1} \amp w_{1} \\
u_{2} \amp v_{2} \amp w_{2} \\
u_{3} \amp v_{3} \amp w_{3}
\end{pmatrix}=(\mathbf{u}, \mathbf{v}, \mathbf{w}).
\end{equation*}
Three vectors in space are linearly independent if they define a parallelepiped with non-zero volume. Now, the volume of the parallelepiped formed by the vectors
\begin{equation*}
\mathbf{u} = (u_{1},u_{2},u_{3})^{T} ,\; \mathbf{v} = (v_{1},v_{2},v_{3})^{T}, \; \text{and} \; \mathbf{w} =(w_{1},w_{2},w_{3})^{T}
\end{equation*}
is
\begin{equation*}
Volume=\vert \mathbf{u} \cdot \mathbf{v} \times \mathbf{w} \vert,
\end{equation*}
i.e. the absolute value of the scalar triple product of the vectors (again see Math1110). Thus the matrix \(A \) will have an inverse when \(\mathbf{u} \cdot \mathbf{v} \times \mathbf{w} \neq 0. \) Hence for a \(3\times 3 \) matrix its determinant is defined as
\begin{equation}
\det(A)=\mathbf{u} \cdot \mathbf{v} \times \mathbf{w}\tag{18.2}
\end{equation}
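A short numerical sketch of definition (18.2) (assuming Python with NumPy; the matrix is an illustrative example whose columns are \(\mathbf{u} \text{,}\) \(\mathbf{v} \) and \(\mathbf{w} \)):
\begin{verbatim}
import numpy as np

A = np.array([[1, 2, 0], [0, 1, 3], [2, 0, 1]], dtype=float)   # illustrative example
u, v, w = A[:, 0], A[:, 1], A[:, 2]    # the column vectors of A

print(np.dot(u, np.cross(v, w)))       # 13.0, the scalar triple product
print(np.linalg.det(A))                # 13.0 (up to round-off)
\end{verbatim}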
Example 18.14.
Find the determinant of
\begin{equation*}
A= \begin{pmatrix}
2 \amp 4 \amp 6 \\
3 \amp 2 \amp 1 \\
1 \amp 1 \amp 2
\end{pmatrix}.
\end{equation*}
Answer.
\(\det(A)=-8. \)
Solution.
Let
\(\mathbf{u}=(2,3,1)^{T},\; \mathbf{v}=(4,2,1)^{T},\; \mathbf{w}=(6,1,2)^{T}. \) Then
\begin{equation*}
\mathbf{v} \times \mathbf{w}=(4,2,1) ^{T} \times (6,1,2)^{T}=(3,-2,-8)^{T}
\end{equation*}
and hence
\begin{equation*}
\det(A) = (2,3,1) ^{T} \cdot (3,-2,-8)^{T}=6-6-8=-8.
\end{equation*}
While we can calculate the determinant of a
\(3\times 3\) matrix using formula
(18.2) other algorithms have been derived and have the advantage that they easily generalise to matrices of orders higher than
\(3 \text{.}\)
Theorem 18.15.
For the \(2\times 2\) matrix \(\begin{pmatrix}
a \amp b \\
c \amp d
\end{pmatrix}\text{,}\) \(\det \begin{pmatrix} a \amp b \\ c \amp d \end{pmatrix}=ad-bc \text{.}\)
For the \(n\times n\) matrix \(A=\begin{pmatrix}
a_{ij}
\end{pmatrix}\) we define the minor, \(M_{ij} \) as the determinant of the \((n-1)\times (n-1) \) matrix obtained by deleting the \(i\)th row and the \(j\)th column of \(A \text{.}\) Then
\begin{equation*}
\det(A)=\sum_{j=1}^{n}(-1)^{i+j}a_{ij}M_{ij}\qquad \text{for any}\qquad i=1,2,3,\ldots,n
\end{equation*}
or
\begin{equation*}
\det(A)=\sum_{i=1}^{n}(-1)^{i+j}a_{ij}M_{ij}\qquad \text{for any}\qquad j=1,2,3, \ldots, n.
\end{equation*}
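The expansion in Theorem 18.15 can be written as a short recursive function. This is a sketch only (it assumes Python with NumPy, always expands along the first row, and is far too slow for large matrices, but it mirrors the formula directly):
\begin{verbatim}
import numpy as np

def det_by_minors(A):
    """Determinant by cofactor expansion along the first row (i = 1)."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        # The minor: delete row 0 and column j.
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        total += (-1) ** j * A[0, j] * det_by_minors(minor)
    return total

print(det_by_minors([[2, 4, 6], [3, 2, 1], [1, 1, 2]]))   # -8.0
\end{verbatim}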
Example 18.16. (Example 18.14 revisited).
Find the determinant of
\begin{equation*}
A=\begin{pmatrix}
2 \amp 4 \amp 6 \\
3 \amp 2 \amp 1 \\
1 \amp 1 \amp 2
\end{pmatrix}.
\end{equation*}
Answer.
\(\det ( A ) = -8\)
Solution.
Using the first of the formulas given above with
\(i=1: \)
\begin{align*}
\det(A) = \amp (-1)^{1+1} 2 \begin{vmatrix}
2 \amp 1 \\
1 \amp 2
\end{vmatrix} + (-1)^{1+2} 4 \begin{vmatrix}
3 \amp 1 \\
1 \amp 2
\end{vmatrix}+ (-1)^{1+3} 6 \begin{vmatrix}
3 \amp 2 \\
1 \amp 1
\end{vmatrix}\\\\
=\amp 2(4-1)-4(6-1)+6(3-2)\\ \\
=\amp -8.
\end{align*}
Example 18.17.
Find the determinant of
\begin{equation*}
A=\begin{pmatrix}
2 \amp 4 \amp 6 \\
0 \amp 3 \amp 1 \\
0 \amp 0 \amp -4
\end{pmatrix}.
\end{equation*}
Answer.
\(\det ( A ) = -24\)
Solution.
Using the second of the formulas given above with \(j=1: \)
\begin{align*}
\det(A) = \amp (-1)^{1+1} 2 \begin{vmatrix}
3 \amp 1 \\
0 \amp -4
\end{vmatrix} + (-1)^{2+1} 0 \begin{vmatrix}
4 \amp 6 \\
0 \amp -4
\end{vmatrix}+ (-1)^{3+1} 0 \begin{vmatrix}
4 \amp 6 \\
3 \amp 1
\end{vmatrix}\\\\
=\amp 2(-12-0)-0+0\\ \\
=\amp -24.
\end{align*}
Notice that for a matrix that is upper triangular the determinant is just the product of the entries on the main diagonal.
Calculating the determinant of a \(3 \times 3 \) matrix via minors is relatively easy. However for matrices of higher orders the calculation can become very tedious. For example, calculating the determinant of a \(4\times 4 \) matrix potentially involves calculating the determinants of four \(3\times 3\) matrices. Thus for large matrices the preferred strategy for calculating the determinant is based on the observation that for an upper triangular matrix the determinant is just the product of the entries on the main diagonal.
Theorem 18.18.
When calculating the determinant of an \(n \times n \) matrix \(A \text{,}\) the elementary row operations have the following effects.
If matrix \(B \) is obtained from matrix \(A \) by interchanging \(2 \) rows then
\begin{equation*}
\det(B)=-\det(A).
\end{equation*}
If matrix \(B \) is obtained from matrix \(A \) by multiplying a row of \(A \) by a scalar \(k \) then
\begin{equation*}
\det(B)=k\det(A).
\end{equation*}
If matrix \(B \) is obtained from matrix \(A \) by adding a multiple of one row of \(A \) to another then
\begin{equation*}
\det(B)=\det(A).
\end{equation*}
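These three facts give an efficient way to compute a determinant: row reduce \(A \) to an upper triangular matrix, keep track of the operations used, and multiply the diagonal entries. A minimal sketch (assuming Python with NumPy; it uses only row swaps and the addition of multiples of rows, so only the sign needs adjusting):
\begin{verbatim}
import numpy as np

def det_by_row_reduction(A, tol=1e-12):
    """Determinant via row reduction to an upper triangular matrix."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    sign = 1.0
    for col in range(n):
        pivot = col + np.argmax(np.abs(A[col:, col]))
        if np.abs(A[pivot, col]) < tol:
            return 0.0                     # no usable pivot: determinant is 0
        if pivot != col:
            A[[col, pivot]] = A[[pivot, col]]
            sign = -sign                   # interchanging two rows flips the sign
        for row in range(col + 1, n):      # adding a multiple of a row: no change
            A[row] -= (A[row, col] / A[col, col]) * A[col]
    return sign * np.prod(np.diag(A))      # product of the main diagonal entries

print(det_by_row_reduction([[2, 4, 6], [3, 2, 1], [1, 1, 2]]))   # -8.0
\end{verbatim}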
Example 18.19. (Example 18.14 revisited).
Find the determinant of
\begin{equation*}
A=\begin{pmatrix}
2 \amp 4 \amp 6 \\
3 \amp 2 \amp 1 \\
1 \amp 1 \amp 2
\end{pmatrix}.
\end{equation*}
Answer.
\(\det ( A ) = -8\)
Solution.
Using the row reduction method, first reduce
\(A \) to an equivalent upper triangular matrix.
\begin{align*}
\begin{pmatrix}
2 \amp 4 \amp 6 \\
3 \amp 2 \amp 1 \\
1 \amp 1 \amp 2
\end{pmatrix}
\amp \sim
\begin{pmatrix}
2 \amp 4 \amp 6 \\
0 \amp -4 \amp -8 \\
0 \amp -1 \amp -1
\end{pmatrix} \;\;\;
\begin{matrix}
\amp \\
R'_{2} \amp = R_{2}-3\frac{R_{1}}{2} \\
R'_{3} \amp = R_{3}-\frac{R_{1}}{2}
\end{matrix}\\
\amp \sim
\begin{pmatrix}
2 \amp 4 \amp 6 \\
0 \amp -4 \amp -8 \\
0 \amp 0 \amp 1
\end{pmatrix} \;\;\;
\begin{matrix}
\amp \\
\amp \\
R'_{3} \amp = R_{3}-\frac{R_{2}}{4}
\end{matrix}
\end{align*}
Since the only elementary row operation used here was that of adding a multiple of one row to another the determinant of the reduced matrix will be the same as the determinant of
\(A \text{.}\) Thus
\begin{equation*}
\det(A)=2\times (-4)\times 1 = -8.
\end{equation*}
Theorem 18.20. Properties of the Determinant.
Let \(A \) and \(B \) be square invertible matrices of order \(n \) and let \(k \) be a real number. Then
\(\displaystyle \det(A^{-1})=\frac{1}{\det(A)}\)
\(\displaystyle \det(A^{T})=\det(A)\)
\(\displaystyle \det(AB)=\det(A) \det(B)\)
\(\displaystyle \det(kA)=k^{n} \det(A)\)
Example 18.21.
Calculate the determinant of the following matrices. Which property of determinants does this illustrate?
\begin{equation*}
A=\begin{pmatrix}
-1 \amp 2 \\
3 \amp -4
\end{pmatrix},\;
B=\begin{pmatrix}
-2 \amp 4 \\
6 \amp -8
\end{pmatrix}
\end{equation*}
Solution.
Firstly,
\begin{equation*}
\det(A)=(-1)\times (-4)-3\times 2=4-6=-2.
\end{equation*}
Next
\begin{equation*}
\det(B)=(-2)\times (-8)-4\times 6=16-24=-8.
\end{equation*}
Since
\(A \) and
\(B \) are square matrices of order
\(2 \) and
\(B=2A \) the fact that
\(\det(B)=4\det(A) \) illustrates Property
\(4.\) of
Theorem 18.20 above.
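The other properties in Theorem 18.20 can be checked numerically in the same way (a sketch assuming Python with NumPy; the matrices are illustrative):
\begin{verbatim}
import numpy as np

A = np.array([[-1, 2], [3, -4]], dtype=float)
B = np.array([[1, 5], [0, -2]], dtype=float)   # any invertible matrix will do

print(np.isclose(np.linalg.det(np.linalg.inv(A)), 1 / np.linalg.det(A)))      # True
print(np.isclose(np.linalg.det(A.T), np.linalg.det(A)))                       # True
print(np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B)))  # True
\end{verbatim}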
Theorem 18.22.
Consider the non-homogeneous system of \(n \) linear equations in \(n \) variables
\begin{equation*}
A \mathbf{x} = \mathbf{b},\; \mathbf{b} \neq 0.
\end{equation*}
The following statements are equivalent:
The system has a unique solution,
\(A \mathbf{x} =0 \) has only the trivial solution \(\mathbf{x} =0, \)
The columns of \(A \) are linearly independent,
\(A \) is invertible.
\(\det(A)\neq 0 \text{.}\)
Exercises Example Tasks
1.
Find the determinant of
\begin{equation*}
M=\begin{pmatrix}
1 \amp 1 \amp 2 \\
1 \amp -1 \amp 1 \\
0 \amp 2 \amp 4
\end{pmatrix}
\end{equation*}
Using the minor formula.
Using row reduction to an upper triangular matrix.