Solving linear systems

In addition to methods such as substitution, elimination, and graphing, systems of linear equations can be both described and solved using matrices. Three such methods are Gaussian elimination, Cramer's rule, and the use of inverse matrices.

Using Gaussian elimination

Consider the following system of n linear equations and n unknowns:


2x1 + 3x2 - x3  =  4
x1 - x2 + 3x3  =  -5
2x2 - 5x3  =  3

In this case, n = 3. Whenever we have missing variables, such as in the last equation, 2x2 - 5x3 = 3, we can artificially introduce x1 into the equation by setting the coefficient of x1 to 0. In other words,


2x2 - 5x3 = 0x1 + 2x2 - 5x3 = 3


If we organize the coefficients of x1, x2, and x3 into a matrix A, we get:

A = [ 2   3  -1 ]
    [ 1  -1   3 ]
    [ 0   2  -5 ]

If we organize the constants on the right-hand side of each equation into a column vector b, we get

b = [  4 ]
    [ -5 ]
    [  3 ],

such that we can organize the system in matrix form as Ax = b, where the column vector x = [x1 x2 x3]^T:

[ 2   3  -1 ] [ x1 ]   [  4 ]
[ 1  -1   3 ] [ x2 ] = [ -5 ]
[ 0   2  -5 ] [ x3 ]   [  3 ]

Then, finding the column vector x is the same as solving for x1, x2, and x3, which we can do by using Gaussian elimination on the augmented matrix constructed from A and b:

[ 2   3  -1 |  4 ]
[ 1  -1   3 | -5 ]
[ 0   2  -5 |  3 ]

After performing Gaussian elimination, the reduced row echelon form of the augmented matrix is:

[ 1  0  0 | -45/11 ]
[ 0  1  0 |  49/11 ]
[ 0  0  1 |  13/11 ]

Thus, the solution to the system of equations is as follows:

x1 = -45/11,  x2 = 49/11,  x3 = 13/11

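As a quick numerical check (not part of the original worked solution), the same system can be handed to a linear algebra library. The sketch below assumes NumPy is available and simply calls numpy.linalg.solve on the A and b defined above.

    import numpy as np

    # Coefficient matrix and constant vector from the example above
    A = np.array([[2.0, 3.0, -1.0],
                  [1.0, -1.0, 3.0],
                  [0.0, 2.0, -5.0]])
    b = np.array([4.0, -5.0, 3.0])

    # Solve Ax = b; NumPy uses an LU factorization, i.e. Gaussian elimination
    x = np.linalg.solve(A, b)
    print(x)  # [-4.0909...  4.4545...  1.1818...] = [-45/11, 49/11, 13/11]
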
Let's look at another example in which Gaussian elimination is performed step by step.

Example

Use Gaussian elimination to solve the following system of linear equations:

First, write the system of equations in the form Ax = b:



From here on, the rows will be referred to as R1 (row 1), R2, and R3, and the row operations will be written in terms of these rows. For example, 2R1 - R2 → R2 means that row 2 is subtracted from 2 times row 1, and the original row 2 in the matrix is replaced by the result. Each step operates on the matrix produced by the previous step, so step 1 operates on the original augmented matrix.
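
For readers who want to experiment, each of these row operations maps directly onto array arithmetic. The snippet below is only an illustration of the notation; the matrix M is a made-up augmented matrix, not the one used in this example, and NumPy is assumed.

    import numpy as np

    # A hypothetical 3 x 4 augmented matrix [A | b], used only to illustrate the notation.
    # Note that R1 is M[0], because NumPy indices start at 0.
    M = np.array([[1.0,  2.0,  1.0,  3.0],
                  [2.0,  1.0, -1.0,  0.0],
                  [1.0, -1.0,  2.0,  4.0]])

    M[1] = 2 * M[0] - M[1]   # 2R1 - R2 -> R2
    M[2] = -M[0] + M[2]      # -R1 + R3 -> R3
    M[1] = -1/3 * M[1]       # -1/3 R2 -> R2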


Step 1. -2R1 + R2 → R2:



Step 2. -R1 + R3 → R3:



Step 3. -⅓R2 → R2:



Step 4. 2R2 + R3 → R3:



Step 5. -R2 + R1 → R1:



Step 6. -½R3 → R3:



Step 7. -R3 + R1 → R1:



Thus, the solution to the system of equations is:


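The whole procedure can also be automated. Below is a rough NumPy sketch of Gauss-Jordan elimination (reduction to reduced row echelon form) for an n × (n+1) augmented matrix; it assumes the system has a unique solution, and the simple partial pivoting is there only to avoid dividing by a zero pivot.

    import numpy as np

    def gauss_jordan(aug):
        """Reduce an n x (n+1) augmented matrix [A | b] to reduced row echelon form."""
        M = aug.astype(float).copy()
        n = M.shape[0]
        for i in range(n):
            # Simple partial pivoting: swap in the row with the largest pivot candidate
            p = i + np.argmax(np.abs(M[i:, i]))
            M[[i, p]] = M[[p, i]]
            M[i] = M[i] / M[i, i]                 # scale the pivot row, e.g. -1/3 R2 -> R2
            for j in range(n):
                if j != i:
                    M[j] = M[j] - M[j, i] * M[i]  # eliminate, e.g. -2R1 + R2 -> R2
        return M

    # Reusing the augmented matrix from the first example:
    aug = np.array([[2, 3, -1, 4],
                    [1, -1, 3, -5],
                    [0, 2, -5, 3]])
    print(gauss_jordan(aug))  # last column is the solution [-45/11, 49/11, 13/11]
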

Using Cramer's rule

Cramer's rule is a method for solving a system of linear equations that uses determinants. Given a system of n linear equations in n variables in matrix form Ax = b,

[ a11  a12  ...  a1n ] [ x1 ]   [ b1 ]
[ a21  a22  ...  a2n ] [ x2 ] = [ b2 ]
[  .    .         .  ] [ .  ]   [ .  ]
[ an1  an2  ...  ann ] [ xn ]   [ bn ]

Cramer's rule states that the solution to the system of linear equations is

xj = det(Aj) / det(A),   for j = 1, 2, ..., n,

where the square matrix Aj is formed by replacing the jth column of matrix A with the column vector b. For example, A2 is:

A2 = [ a11  b1  a13  ...  a1n ]
     [ a21  b2  a23  ...  a2n ]
     [  .   .    .         .  ]
     [ an1  bn  an3  ...  ann ]

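As a concrete illustration of the rule, here is a small NumPy sketch that builds each Aj by swapping the jth column of A for b and then divides the determinants. It reuses the system from the Gaussian elimination example above (not the example that follows), and NumPy is assumed.

    import numpy as np

    def cramer(A, b):
        """Solve Ax = b by Cramer's rule; assumes A is square and det(A) != 0."""
        det_A = np.linalg.det(A)
        x = np.empty(len(b))
        for j in range(len(b)):
            Aj = A.copy()
            Aj[:, j] = b                      # replace the jth column of A with b
            x[j] = np.linalg.det(Aj) / det_A  # xj = det(Aj) / det(A)
        return x

    A = np.array([[2.0, 3.0, -1.0],
                  [1.0, -1.0, 3.0],
                  [0.0, 2.0, -5.0]])
    b = np.array([4.0, -5.0, 3.0])
    print(cramer(A, b))  # [-45/11, 49/11, 13/11], matching the earlier result
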
Example

Use Cramer's rule to solve the following system of linear equations:

The system of equations in matrix form is:



Then:

Plugging the determinants into the formula xj = det(Aj) / det(A) yields the solution:





Using the inverse matrix

Provided that the coefficient matrix A has an inverse, it is possible to solve a system of linear equations in matrix form Ax = b using the following:

Ax = b
A^(-1)Ax = A^(-1)b
x = A^(-1)b

The above holds true for any invertible square matrix, but for the sake of simplicity, we will use a 2 × 2 matrix.
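
Here is a minimal NumPy sketch of this approach for a 2 × 2 system; the matrix and vector are made-up values, not those of the example that follows.

    import numpy as np

    # A hypothetical 2 x 2 system Ax = b (not the example below)
    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])
    b = np.array([5.0, 10.0])

    A_inv = np.linalg.inv(A)  # exists only when det(A) != 0
    x = A_inv @ b             # x = A^(-1) b
    print(x)                  # [1. 3.] for this particular A and b

In practice, numpy.linalg.solve(A, b) is usually preferred over forming the inverse explicitly, but computing A^(-1) makes the algebra above easy to see.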

Example

Solve the following system of linear equations using the inverse of its coefficient matrix.

The above system of equations, written in the form Ax = b, is:



The inverse of matrix A is:


 
 
 
 

Then, the solution to the system of equations can be found as:


 

In other words, p = 4 and q = -3.