# Inverse matrix

An n × n matrix, A, is invertible if there exists an n × n matrix, A^{-1}, called the inverse of A, such that

A^{-1}A = AA^{-1} = I_{n}

where I_{n} is the n × n identity matrix. We will denote the identity matrix simply as I from now on since it will be clear what size I should be in the context of each problem. Remember that I is special because for any other matrix A,

IA = AI = A

Note that for an n × n matrix, A, the following conditions are equivalent to A being invertible (they are either all true, or all false):

- rref(A) = I
- det(A) ≠ 0
- x = 0 is the only solution to Ax = 0, where 0 is the n-dimensional 0-vector

A matrix that has an inverse is said to be invertible or nonsingular. A matrix that is not invertible is called singular. It is also worth noting that only square matrices have inverses, but not all square matrices are invertible.
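These definitions are easy to check numerically. The sketch below uses NumPy with an assumed 2 × 2 matrix (not one from the text) to verify that the computed inverse satisfies A^{-1}A = AA^{-1} = I:

```python
import numpy as np

# Assumed example matrix (illustrative, not from the text)
A = np.array([[2.0, 1.0],
              [5.0, 3.0]])

det_A = np.linalg.det(A)   # nonzero, so A is invertible
A_inv = np.linalg.inv(A)   # raises LinAlgError for a singular matrix

# A^{-1}A = AA^{-1} = I, up to floating-point round-off
I = np.eye(2)
print(np.allclose(A_inv @ A, I) and np.allclose(A @ A_inv, I))  # True
```

`np.linalg.inv` only accepts square matrices, mirroring the note above that only square matrices can have inverses.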

## Inverse of a 2 × 2 matrix

The inverse of a 2 × 2 matrix can be calculated using a formula, as shown below. If

$$A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$$

then

$$A^{-1} = \frac{1}{ad - bc} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}$$

provided that ad - bc ≠ 0. To confirm that this is true,

$$A^{-1}A = \frac{1}{ad - bc} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix} \begin{bmatrix} a & b \\ c & d \end{bmatrix} = \frac{1}{ad - bc} \begin{bmatrix} ad - bc & 0 \\ 0 & ad - bc \end{bmatrix} = I$$

and

$$AA^{-1} = \begin{bmatrix} a & b \\ c & d \end{bmatrix} \cdot \frac{1}{ad - bc} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix} = I$$

Note that (ad - bc) is also the determinant of A, the given 2 × 2 matrix, so the formula for A^{-1} can also be written as:

$$A^{-1} = \frac{1}{\det(A)} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}$$
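The 2 × 2 formula translates directly into code. This is a minimal sketch in pure Python; `inverse_2x2` is our own helper name, not from the text:

```python
# Direct transcription of the 2 x 2 inverse formula (no libraries needed)
def inverse_2x2(a, b, c, d):
    """Return the inverse of [[a, b], [c, d]] as a nested list."""
    det = a * d - b * c          # the determinant, ad - bc
    if det == 0:
        raise ValueError("matrix is singular (ad - bc = 0)")
    return [[d / det, -b / det],
            [-c / det, a / det]]

print(inverse_2x2(2, 1, 5, 3))   # [[3.0, -1.0], [-5.0, 2.0]]
```

Note how the diagonal entries are swapped and the off-diagonal entries are negated before dividing by the determinant.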

## Inverse of a 3 × 3 matrix

There are a few methods for determining the inverse of a 3 × 3 or larger matrix.

### Using augmented matrices

One method for finding the inverse of a matrix, A, is to use its augmented matrix, [A|I]. The method involves using Gaussian elimination to find the reduced row echelon form of [A|I]. Refer to the matrix page if necessary for a review of Gaussian elimination; the individual row operations will not be shown in this example. Consider the following matrix

and its augmented matrix [A|I]:

To find the inverse of A, we perform Gaussian elimination on its augmented matrix to get:

where rref([A|I]) stands for the "reduced row echelon form of [A|I]." The left 3 columns of rref([A|I]) form rref(A) which also happens to be the identity matrix, so rref(A) = I. Therefore, we claim that the right 3 columns form the inverse A^{-1} of A, so:

We can then confirm this by checking that A^{-1}A = AA^{-1} = I.

When we calculate rref([A|I]), we are essentially solving the systems Ax_{1} = e_{1}, Ax_{2} = e_{2}, and Ax_{3} = e_{3}, where e_{1}, e_{2}, and e_{3} are the standard basis vectors, simultaneously. When rref(A) = I, the solution vectors x_{1}, x_{2} and x_{3} are uniquely defined and form a new matrix [x_{1} x_{2} x_{3}] that appears on the right half of rref([A|I]). [x_{1} x_{2} x_{3}] satisfies A[x_{1} x_{2} x_{3}] = [e_{1} e_{2} e_{3}]. But since [e_{1} e_{2} e_{3}] = I, A[x_{1} x_{2} x_{3}] = [e_{1} e_{2} e_{3}] = I, and by definition of inverse, [x_{1} x_{2} x_{3}] = A^{-1}.
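The procedure described above (row-reduce [A|I] and read A^{-1} off the right half) can be sketched as follows. The function name and the test matrix are illustrative, not from the text:

```python
import numpy as np

def inverse_via_rref(A):
    """Invert A by row-reducing the augmented matrix [A | I]
    (Gauss-Jordan elimination with partial pivoting)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    M = np.hstack([A, np.eye(n)])          # build [A | I]
    for col in range(n):
        # choose the largest pivot in this column for numerical stability
        pivot = col + np.argmax(np.abs(M[col:, col]))
        if np.isclose(M[pivot, col], 0.0):
            raise ValueError("matrix is singular: rref(A) != I")
        M[[col, pivot]] = M[[pivot, col]]  # swap rows
        M[col] /= M[col, col]              # scale the pivot row to 1
        for r in range(n):
            if r != col:
                M[r] -= M[r, col] * M[col] # clear the rest of the column
    return M[:, n:]                        # right half is A^{-1}

print(inverse_via_rref([[2, 1], [5, 3]]))
```

Partial pivoting is not needed for the exact-arithmetic argument above, but it keeps the floating-point elimination stable.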

As an example, let us also consider the case of a singular (noninvertible) matrix, B:

We form the augmented matrix [B|I]:

However, when we calculate rref([B|I]), we get:

Notice that the first 3 columns do not form the identity matrix. Instead, they form:

which has all 0's on the 3^{rd} row. Therefore, B is not invertible. No matter what we do, we will never find a matrix B^{-1} that satisfies BB^{-1} = B^{-1}B = I. A noninvertible matrix is usually called singular.

which has all 0's in the 3^{rd} row. Therefore, B is not invertible: no matter what we do, we will never find a matrix B^{-1} that satisfies BB^{-1} = B^{-1}B = I.

Note: The form of rref(B) says that the 3^{rd} column of B is 1 times the 1^{st} column of B plus -3 times the 2^{nd} column of B; in other words, the columns of B are linearly dependent.

#### Determinants and invertibility

It can be proven that if a matrix A is invertible, then det(A) ≠ 0. The converse is also true: if det(A) ≠ 0, then A is invertible. The proof has to do with the property that each row operation we use to get from A to rref(A) can only multiply the determinant by a nonzero number. Though the proof is not provided here, we can see that the above holds for our previous examples.
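This relationship is easy to observe numerically. In the sketch below the matrices are assumed examples (not the A and B of the text); B's 3^{rd} column equals its 1^{st} column minus 3 times its 2^{nd} column, so B is singular:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])
# B's columns are linearly dependent: col3 = col1 - 3 * col2
B = np.array([[1.0, 2.0, -5.0],
              [0.0, 1.0, -3.0],
              [2.0, 0.0,  2.0]])

print(np.linalg.det(A))   # nonzero, so A is invertible
print(np.linalg.det(B))   # essentially 0, so B is singular
```

Because floating-point determinants of singular matrices come out as tiny round-off values rather than exact zeros, real code compares against a tolerance instead of testing `det == 0`.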

In the examples above, we found that

was invertible while

was singular. If we calculate the determinants of A and B, we find that det(A) ≠ 0 while det(B) = 0, consistent with the equivalence above.

### Using the adjoint of the matrix

Another method for finding the inverse of larger matrices is to use its determinant along with the adjoint of the matrix. If A is an invertible n × n matrix, the cofactor, C_{ij}, for the corresponding entry in A is

$$C_{ij} = (-1)^{i+j} M_{ij}$$

where M_{ij} is the determinant of the matrix that results from removing the i^{th} row and j^{th} column of A. M_{ij} is referred to as the minor for the corresponding entry in A. The matrix formed by all of the cofactors of A is called its cofactor matrix, or matrix of cofactors:

$$C = \begin{bmatrix} C_{11} & C_{12} & \cdots & C_{1n} \\ C_{21} & C_{22} & \cdots & C_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ C_{n1} & C_{n2} & \cdots & C_{nn} \end{bmatrix}$$

The transpose of matrix A's cofactor matrix is referred to as the adjoint of matrix A, or its adjoint matrix. Thus:

$$\text{adj}(A) = C^{T}$$

The adjoint of matrix A can be used to find its inverse using the following formula:

$$A^{-1} = \frac{1}{\det(A)} \, \text{adj}(A)$$
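The adjoint-based formula can be sketched as code. `inverse_via_adjoint` is an illustrative name of our own; note that this method is much slower than elimination for large matrices and is mainly of theoretical interest:

```python
import numpy as np

def inverse_via_adjoint(A):
    """Invert A using A^{-1} = adj(A) / det(A)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    det_A = np.linalg.det(A)
    if np.isclose(det_A, 0.0):
        raise ValueError("det(A) = 0, so A is not invertible")
    C = np.empty((n, n))                   # cofactor matrix
    for i in range(n):
        for j in range(n):
            # minor M_ij: delete row i and column j, then take the determinant
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T / det_A                     # adj(A) = C^T

print(inverse_via_adjoint([[2, 1], [5, 3]]))
```

Each entry requires an (n-1) × (n-1) determinant, which is why Gauss-Jordan elimination is preferred in practice.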

#### Example

Find the inverse of the matrix A, if it exists, using its adjoint.

First determine whether A is invertible by finding its determinant (recall that if det(A) = 0, the matrix is not invertible). In this example, we use cofactor expansion along the second row of A to find the determinant. Refer to the determinant page to review cofactor expansion or other methods of computing the determinant.

Since det(A) ≠ 0, we know that A has an inverse, and proceed to find its adjoint. From finding the determinant, we know C_{21} = 1, C_{22} = -1, and C_{23} = -½, so we only need to compute the cofactors for the first and third rows:

Thus, collecting the cofactors gives the cofactor matrix C, and transposing C gives adj(A). Plugging adj(A) and det(A) into the formula above yields A^{-1}.