Vector Spaces and Linear Systems:
- Definition 3.1
The row space of a matrix is the span of the set of its rows. The row rank is the dimension of the row space, the number of linearly independent rows.
- Example 3.2
If
$$A=\begin{pmatrix}2&3\\4&6\end{pmatrix}$$
then the row space of $A$ is this subspace of the space of two-component row vectors.
$$\{c_1\cdot\begin{pmatrix}2&3\end{pmatrix}+c_2\cdot\begin{pmatrix}4&6\end{pmatrix}\mid c_1,c_2\in\mathbb{R}\}$$
The linear dependence of the second row on the first is obvious and so we can simplify this description to $\{c\cdot\begin{pmatrix}2&3\end{pmatrix}\mid c\in\mathbb{R}\}$.
- Lemma 3.3
If the matrices $A$ and $B$ are related by a row operation
$$A\xrightarrow{\rho_i\leftrightarrow\rho_j}B
\qquad\text{or}\qquad
A\xrightarrow{k\rho_i}B
\qquad\text{or}\qquad
A\xrightarrow{k\rho_i+\rho_j}B$$
(for $i\neq j$ and $k\neq 0$) then their row spaces are equal. Hence, row-equivalent matrices have the same row space and therefore the same row rank.
- Proof
By the Linear Combination Lemma's corollary, each row of $B$ is in the row space of $A$. Further, $\operatorname{Rowspace}(B)\subseteq\operatorname{Rowspace}(A)$ because a member of the set $\operatorname{Rowspace}(B)$ is a linear combination of the rows of $B$, which means it is a combination of combinations of the rows of $A$, and hence, by the Linear Combination Lemma, is also a member of $\operatorname{Rowspace}(A)$.
For the other containment, recall that row operations are reversible: $A\longrightarrow B$ if and only if $B\longrightarrow A$. With that, $\operatorname{Rowspace}(A)\subseteq\operatorname{Rowspace}(B)$ also follows from the prior paragraph, and so the two sets are equal.
Thus, row operations leave the row space unchanged. But of course, Gauss' method performs the row operations systematically, with a specific goal in mind: echelon form.
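This invariance can be checked computationally. The sketch below, assuming the sympy library (any exact-arithmetic package would do), applies a row operation and verifies that the row spaces agree by checking that stacking the two matrices does not raise the rank.

```python
from sympy import Matrix

# A, and the result B of the row operation -rho_1 + rho_2
A = Matrix([[1, 3, 1],
            [1, 4, 1]])
B = A.copy()
B[1, :] = B[1, :] - B[0, :]

# two sets of rows span the same space exactly when stacking them
# together does not increase the rank
stacked = A.col_join(B)
assert A.rank() == B.rank() == stacked.rank()
```

Here the matrix and row operation are illustrative; the same check works for any row operation from Lemma 3.3.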
- Lemma 3.4
The nonzero rows of an echelon form matrix make up a linearly independent set.
- Proof
We showed earlier that in an echelon form matrix, no nonzero row is a linear combination of the other rows. This lemma is a restatement of that result in the new terminology.
Thus, in the language of this chapter, Gaussian reduction works by eliminating linear dependences among rows, leaving the span unchanged, until no nontrivial linear relationships remain (among the nonzero rows). That is, Gauss' method produces a basis for the row space.
- Example 3.5
From any matrix, we can produce a basis for the row space by performing Gauss' method and taking the nonzero rows of the resulting echelon form matrix. For instance,
$$\begin{pmatrix}1&3&1\\1&4&1\\2&0&5\end{pmatrix}
\xrightarrow{\substack{-\rho_1+\rho_2\\-2\rho_1+\rho_3}}
\begin{pmatrix}1&3&1\\0&1&0\\0&-6&3\end{pmatrix}
\xrightarrow{6\rho_2+\rho_3}
\begin{pmatrix}1&3&1\\0&1&0\\0&0&3\end{pmatrix}$$
produces the basis
$$\langle\begin{pmatrix}1&3&1\end{pmatrix},\begin{pmatrix}0&1&0\end{pmatrix},\begin{pmatrix}0&0&3\end{pmatrix}\rangle$$
for the row space. This is a basis for the row space of both the starting and ending matrices, since the two row spaces are equal.
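Echoing this recipe in code: a minimal sketch, assuming the sympy library, that reduces a matrix and keeps the nonzero rows as a row-space basis. (sympy's `rref` computes the reduced echelon form, which has the same number of nonzero rows as any echelon form of the matrix.)

```python
from sympy import Matrix

M = Matrix([[1, 3, 1],
            [1, 4, 1],
            [2, 0, 5]])

# reduce, then keep the nonzero rows: they are a basis for the row space
R, _pivots = M.rref()
basis = [R.row(i) for i in range(R.rows) if any(R.row(i))]

# the number of basis vectors is the row rank
assert len(basis) == M.rank()
```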
Using this technique, we can also find bases for spans not directly involving row vectors.
- Definition 3.6
The column space of a matrix is the span of the set of its columns. The column rank is the dimension of the column space, the number of linearly independent columns.
Our interest in column spaces stems from our study of linear systems. An example is that this system
$$\begin{aligned}
c_1+3c_2+7c_3&=d_1\\
2c_1+3c_2+8c_3&=d_2\\
c_2+2c_3&=d_3\\
4c_1\phantom{{}+3c_2}+4c_3&=d_4
\end{aligned}$$
has a solution if and only if the vector of $d$'s is a linear combination of the other column vectors,
$$c_1\begin{pmatrix}1\\2\\0\\4\end{pmatrix}+c_2\begin{pmatrix}3\\3\\1\\0\end{pmatrix}+c_3\begin{pmatrix}7\\8\\2\\4\end{pmatrix}=\begin{pmatrix}d_1\\d_2\\d_3\\d_4\end{pmatrix}$$
meaning that the vector of $d$'s is in the column space of the matrix of coefficients.
- Example 3.7
Given this matrix,
$$\begin{pmatrix}1&3&7\\2&3&8\\0&1&2\\4&0&4\end{pmatrix}$$
to get a basis for the column space, temporarily turn the columns into rows and reduce.
$$\begin{pmatrix}1&2&0&4\\3&3&1&0\\7&8&2&4\end{pmatrix}
\xrightarrow{\substack{-3\rho_1+\rho_2\\-7\rho_1+\rho_3}}
\begin{pmatrix}1&2&0&4\\0&-3&1&-12\\0&-6&2&-24\end{pmatrix}
\xrightarrow{-2\rho_2+\rho_3}
\begin{pmatrix}1&2&0&4\\0&-3&1&-12\\0&0&0&0\end{pmatrix}$$
Now turn the rows back to columns.
$$\langle\begin{pmatrix}1\\2\\0\\4\end{pmatrix},\begin{pmatrix}0\\-3\\1\\-12\end{pmatrix}\rangle$$
The result is a basis for the column space of the given matrix.
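The transpose-reduce-transpose recipe can be sketched mechanically, assuming the sympy library; the particular matrix below is just an illustration.

```python
from sympy import Matrix

M = Matrix([[1, 3, 7],
            [2, 3, 8],
            [0, 1, 2],
            [4, 0, 4]])

# transpose, reduce, keep the nonzero rows ...
Rt, _ = M.T.rref()
# ... then turn the rows back into columns
col_basis = [Rt.row(i).T for i in range(Rt.rows) if any(Rt.row(i))]

# the result spans the column space, so its size is the column rank
assert len(col_basis) == len(M.columnspace())
```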
- Definition 3.8
The transpose of a matrix is the result of interchanging the rows and columns of that matrix. That is, column $j$ of the matrix $A$ is row $j$ of $A^{\mathsf{T}}$, and vice versa.
So the instructions for the prior example are "transpose, reduce, and transpose back".
We can even, at the price of tolerating the as-yet-vague idea of vector spaces being "the same", use Gauss' method to find bases for spans in other types of vector spaces.
- Example 3.9
To get a basis for the span of $\{x^2+x^4,\;2x^2+3x^4,\;-x^2-3x^4\}$ in the space $\mathcal{P}_4$, think of these three polynomials as "the same" as the row vectors $\begin{pmatrix}0&0&1&0&1\end{pmatrix}$, $\begin{pmatrix}0&0&2&0&3\end{pmatrix}$, and $\begin{pmatrix}0&0&-1&0&-3\end{pmatrix}$, apply Gauss' method
$$\begin{pmatrix}0&0&1&0&1\\0&0&2&0&3\\0&0&-1&0&-3\end{pmatrix}
\xrightarrow{\substack{-2\rho_1+\rho_2\\ \rho_1+\rho_3}}
\begin{pmatrix}0&0&1&0&1\\0&0&0&0&1\\0&0&0&0&-2\end{pmatrix}
\xrightarrow{2\rho_2+\rho_3}
\begin{pmatrix}0&0&1&0&1\\0&0&0&0&1\\0&0&0&0&0\end{pmatrix}$$
and translate back to get the basis $\langle x^2+x^4,\;x^4\rangle$. (As mentioned earlier, we will make the phrase "the same" precise at the start of the next chapter.)
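The translation step can be automated: encode each polynomial of $\mathcal{P}_4$ by its row of coefficients, reduce, and read off the surviving rows. A sketch assuming sympy, using the polynomials $x^2+x^4$, $2x^2+3x^4$, $-x^2-3x^4$ for illustration:

```python
from sympy import Matrix

# coefficients of a + bx + cx^2 + dx^3 + ex^4, stored as (a, b, c, d, e)
polys = [(0, 0, 1, 0, 1),    # x^2 + x^4
         (0, 0, 2, 0, 3),    # 2x^2 + 3x^4
         (0, 0, -1, 0, -3)]  # -x^2 - 3x^4

R, _ = Matrix(polys).rref()
basis_rows = [tuple(R.row(i)) for i in range(R.rows) if any(R.row(i))]

# the span is two-dimensional; reduced echelon form gives the
# basis x^2, x^4 after translating back
assert basis_rows == [(0, 0, 1, 0, 0), (0, 0, 0, 0, 1)]
```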
Thus, our first point in this subsection is that the tools of this chapter give us a more conceptual understanding of Gaussian reduction.
For the second point of this subsection, consider the effect on the column space of this row reduction.
$$\begin{pmatrix}1&2\\2&4\end{pmatrix}
\xrightarrow{-2\rho_1+\rho_2}
\begin{pmatrix}1&2\\0&0\end{pmatrix}$$
The column space of the left-hand matrix contains vectors with a second component that is nonzero. But the column space of the right-hand matrix is different because it contains only vectors whose second component is zero. It is this knowledge that row operations can change the column space that makes the next result surprising.
- Lemma 3.10
Row operations do not change the column rank.
- Proof
Restated, if $A$ reduces to $B$ then the column rank of $B$ equals the column rank of $A$.
We will be done if we can show that row operations do not affect linear relationships among columns (e.g., if the fifth column is twice the second plus the fourth before a row operation then that relationship still holds afterwards), because the column rank is just the size of the largest set of unrelated columns. But this is exactly the first theorem of this book: in a relationship among columns
$$c_1\begin{pmatrix}a_{1,1}\\a_{2,1}\\\vdots\\a_{m,1}\end{pmatrix}+\cdots+c_n\begin{pmatrix}a_{1,n}\\a_{2,n}\\\vdots\\a_{m,n}\end{pmatrix}=\begin{pmatrix}0\\0\\\vdots\\0\end{pmatrix}$$
row operations leave unchanged the set of solutions $(c_1,\ldots,c_n)$.
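The proof's key observation, that a row operation preserves every linear relationship among the columns, can be spot-checked. In the sketch below (assuming sympy, with an illustrative matrix), the relationships among columns are exactly the homogeneous solutions, i.e. the null space:

```python
from sympy import Matrix

A = Matrix([[1, 3, 1, 6],
            [2, 6, 3, 16],
            [1, 3, 1, 6]])

# apply the row operation -2*rho_1 + rho_2
B = A.copy()
B[1, :] = B[1, :] - 2 * B[0, :]

# the set of column relationships (the null space) is unchanged,
# so the column rank is unchanged too
assert A.nullspace() == B.nullspace()
assert len(A.columnspace()) == len(B.columnspace())
```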
Another way, besides the prior result, to state that Gauss' method has something to say about the column space as well as about the row space is to consider again Gauss-Jordan reduction. Recall that it ends with the reduced echelon form of a matrix, as here.
$$\begin{pmatrix}1&3&1&6\\2&6&3&16\\1&3&1&6\end{pmatrix}
\longrightarrow\cdots\longrightarrow
\begin{pmatrix}1&3&0&2\\0&0&1&4\\0&0&0&0\end{pmatrix}$$
Consider the row space and the column space of this result. Our first point made above says that a basis for the row space is easy to get: simply collect together all of the rows with leading entries. However, because this is a reduced echelon form matrix, a basis for the column space is just as easy: take the columns containing the leading entries, that is, $\langle\vec{e}_1,\vec{e}_2\rangle$. (Linear independence is obvious. The other columns are in the span of this set, since they all have a third component of zero.) Thus, for a reduced echelon form matrix, bases for the row and column spaces can be found in essentially the same way: by taking the parts of the matrix, the rows or columns, containing the leading entries.
- Theorem 3.11
The row rank and column rank of a matrix are equal.
- Proof
First bring the matrix to reduced echelon form. At that point, the row rank equals the number of leading entries, since each equals the number of nonzero rows. Also at that point, the number of leading entries equals the column rank, because the set of columns containing leading entries consists of some of the $\vec{e}_i$'s from a standard basis, and that set is linearly independent and spans the set of columns. Hence, in the reduced echelon form matrix, the row rank equals the column rank, because each equals the number of leading entries.
But Lemma 3.3 and Lemma 3.10 show that the row rank and column rank are not changed by using row operations to get to reduced echelon form. Thus the row rank and the column rank of the original matrix are also equal.
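The theorem is easy to exercise numerically, since the column rank of a matrix is the row rank of its transpose. A small sketch assuming sympy:

```python
from sympy import Matrix

M = Matrix([[2, 0, 3, 4],
            [0, 1, 1, -1],
            [2, 1, 4, 3]])  # third row = first + second

# row rank of M equals the row rank of its transpose,
# which is the column rank of M
assert M.rank() == M.T.rank()
```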
- Definition 3.12
The rank of a matrix is its row rank or, equivalently, its column rank.
So our second point in this subsection is that the column space and row space of a matrix have the same dimension. Our third and final point is that the concepts that we've seen arising naturally in the study of vector spaces are exactly the ones that we have studied with linear systems.
- Theorem 3.13
For linear systems with $n$ unknowns and with matrix of coefficients $A$, the statements
- the rank of $A$ is $r$
- the space of solutions of the associated homogeneous system has dimension $n-r$
are equivalent.
So if the system has at least one particular solution then for the set of solutions, the number of parameters equals $n-r$, the number of variables minus the rank of the matrix of coefficients.
- Proof
The rank of $A$ is $r$ if and only if Gaussian reduction on $A$ ends with $r$ nonzero rows. That's true if and only if echelon form matrices row equivalent to $A$ have $r$-many leading variables. That in turn holds if and only if there are $n-r$ free variables.
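In code, the theorem says that the rank plus the dimension of the homogeneous solution space equals the number of unknowns. A sketch assuming sympy, with an illustrative coefficient matrix:

```python
from sympy import Matrix

A = Matrix([[1, 2, 1, 0],
            [2, 4, 0, 2]])  # coefficient matrix, n = 4 unknowns

n = A.cols
r = A.rank()
solution_basis = A.nullspace()  # basis of the homogeneous solution space

# number of parameters in the general solution = n - r
assert len(solution_basis) == n - r
```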
- Remark 3.14
- (Munkres 1964)
Sometimes that result is mistakenly remembered to say that the general solution of an $n$ unknown system of $m$ equations uses $n-m$ parameters. The number of equations is not the relevant figure; rather, what matters is the number of independent equations (the number of equations in a maximal independent set). Where there are $r$ independent equations, the general solution involves $n-r$ parameters.
- Corollary 3.15
Where the matrix $A$ is $n\times n$, the statements
- the rank of $A$ is $n$
- $A$ is nonsingular
- the rows of $A$ form a linearly independent set
- the columns of $A$ form a linearly independent set
- any linear system whose matrix of coefficients is $A$ has one and only one solution
are equivalent.
- Proof
Clearly the first four statements are equivalent. The last statement is equivalent to them because a set of $n$ column vectors is linearly independent if and only if it is a basis for $\mathbb{R}^n$, but the system
$$c_1\begin{pmatrix}a_{1,1}\\\vdots\\a_{n,1}\end{pmatrix}+\cdots+c_n\begin{pmatrix}a_{1,n}\\\vdots\\a_{n,n}\end{pmatrix}=\begin{pmatrix}d_1\\\vdots\\d_n\end{pmatrix}$$
has a unique solution for all choices of $d_1,\ldots,d_n\in\mathbb{R}$ if and only if the vectors of $a$'s form a basis.
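The corollary's equivalences can all be exercised on one small nonsingular matrix; a sketch assuming sympy, with an illustrative right-hand side:

```python
from sympy import Matrix

A = Matrix([[2, 1],
            [1, 1]])
n = A.rows

assert A.rank() == n            # full rank
assert A.det() != 0             # nonsingular
assert A.nullspace() == []      # columns (and rows) independent
# any right-hand side gives one and only one solution
d = Matrix([3, 2])
x = A.solve(d)
assert A * x == d
```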