- Spanning Sets & Linear Independence
- Lemma 1.1
Where $S$ is a subset of a vector space $V$,
$$[S] = [S \cup \{\vec{v}\}] \quad\text{if and only if}\quad \vec{v} \in [S]$$
for any $\vec{v} \in V$.
- Proof
The left to right implication is easy. If $[S] = [S \cup \{\vec{v}\}]$ then, since $\vec{v} \in [S \cup \{\vec{v}\}]$, the equality of the two sets gives that $\vec{v} \in [S]$.
For the right to left implication assume that $\vec{v} \in [S]$ to show that $[S] = [S \cup \{\vec{v}\}]$ by mutual inclusion. The inclusion $[S] \subseteq [S \cup \{\vec{v}\}]$ is obvious. For the other inclusion $[S] \supseteq [S \cup \{\vec{v}\}]$, write an element of $[S \cup \{\vec{v}\}]$ as $d_0\vec{v} + d_1\vec{s}_1 + \cdots + d_m\vec{s}_m$ and substitute $\vec{v}$'s expansion as a linear combination of members of the same set, getting
$$d_0(c_0\vec{t}_0 + \cdots + c_k\vec{t}_k) + d_1\vec{s}_1 + \cdots + d_m\vec{s}_m$$
where each $\vec{t}_i \in S$. This is a linear combination of linear combinations and so distributing $d_0$ results in a linear combination of vectors from $S$. Hence each member of $[S \cup \{\vec{v}\}]$ is also a member of $[S]$.
- Example 1.2
In $\mathbb{R}^3$, where
$$\vec{v}_1 = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} \qquad \vec{v}_2 = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} \qquad \vec{v}_3 = \begin{pmatrix} 2 \\ 1 \\ 0 \end{pmatrix}$$
the spans $[\{\vec{v}_1, \vec{v}_2\}]$ and $[\{\vec{v}_1, \vec{v}_2, \vec{v}_3\}]$ are equal since $\vec{v}_3$ is in the span $[\{\vec{v}_1, \vec{v}_2\}]$.
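As a computational sketch (assuming NumPy, and using the vectors of this example), membership in the span is just solvability of a linear system: $\vec{v}_3 \in [\{\vec{v}_1, \vec{v}_2\}]$ exactly when $a\vec{v}_1 + b\vec{v}_2 = \vec{v}_3$ has a solution.

```python
import numpy as np

# Vectors from Example 1.2.
v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
v3 = np.array([2.0, 1.0, 0.0])

# v3 is in the span of {v1, v2} exactly when a*v1 + b*v2 = v3
# has a solution.  The columns of A are v1 and v2.
A = np.column_stack([v1, v2])
coeffs, *_ = np.linalg.lstsq(A, v3, rcond=None)

print(coeffs)                       # [2. 1.]  so v3 = 2*v1 + 1*v2
print(np.allclose(A @ coeffs, v3))  # True: v3 is in the span
```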
The lemma says that if we have a spanning set then we can remove a $\vec{v}$ to get a new set $S$ with the same span if and only if $\vec{v}$ is a linear combination of vectors from $S$. Thus, under the second sense described above, a spanning set is minimal if and only if it contains no vectors that are linear combinations of the others in that set. We have a term for this important property.
- Definition 1.3
A subset of a vector space is linearly independent if none of its elements is a linear combination of the others. Otherwise it is linearly dependent.
Here is an important observation: although this way of writing one vector as a combination of the others,
$$\vec{s}_0 = c_1\vec{s}_1 + c_2\vec{s}_2 + \cdots + c_n\vec{s}_n$$
visually sets $\vec{s}_0$ off from the other vectors, algebraically there is nothing special in that equation about $\vec{s}_0$. For any $\vec{s}_i$ with a coefficient $c_i$ that is nonzero, we can rewrite the relationship to set off $\vec{s}_i$:
$$\vec{s}_i = (1/c_i)\vec{s}_0 + (-c_1/c_i)\vec{s}_1 + \cdots + (-c_n/c_i)\vec{s}_n$$
(with the $\vec{s}_i$ term omitted from the right side).
When we don't want to single out any vector by writing it alone on one side of the equation we will instead say that $\vec{s}_0, \vec{s}_1, \ldots, \vec{s}_n$ are in a linear relationship and write the relationship with all of the vectors on the same side. The next result rephrases the linear independence definition in this style. It gives what is usually the easiest way to compute whether a finite set is dependent or independent.
- Lemma 1.4
A subset $S$ of a vector space is linearly independent if and only if for any distinct $\vec{s}_1, \ldots, \vec{s}_n \in S$ the only linear relationship among those vectors
$$c_1\vec{s}_1 + \cdots + c_n\vec{s}_n = \vec{0} \qquad c_1, \ldots, c_n \in \mathbb{R}$$
is the trivial one: $c_1 = 0, \ldots, c_n = 0$.
- Proof
This is a direct consequence of the observation above.
If the set $S$ is linearly independent then no vector $\vec{s}_i$ can be written as a linear combination of the other vectors from $S$ so there is no linear relationship where some of the $\vec{s}$'s have nonzero coefficients. If $S$ is not linearly independent then some $\vec{s}_i$ is a linear combination
$$\vec{s}_i = c_1\vec{s}_1 + \cdots + c_{i-1}\vec{s}_{i-1} + c_{i+1}\vec{s}_{i+1} + \cdots + c_n\vec{s}_n$$
of other vectors from $S$, and subtracting $\vec{s}_i$ from both sides of that equation gives a linear relationship involving a nonzero coefficient, namely the $-1$ in front of $\vec{s}_i$.
- Example 1.5
In the vector space of two-wide row vectors, the two-element set $\{\begin{pmatrix} 40 & 15 \end{pmatrix}, \begin{pmatrix} -50 & 25 \end{pmatrix}\}$ is linearly independent. To check this, set
$$c_1 \begin{pmatrix} 40 & 15 \end{pmatrix} + c_2 \begin{pmatrix} -50 & 25 \end{pmatrix} = \begin{pmatrix} 0 & 0 \end{pmatrix}$$
and solving the resulting system
$$\begin{aligned} 40c_1 - 50c_2 &= 0 \\ 15c_1 + 25c_2 &= 0 \end{aligned}$$
shows that both $c_1$ and $c_2$ are zero. So the only linear relationship between the two given row vectors is the trivial relationship.
In the same vector space, $\{\begin{pmatrix} 40 & 15 \end{pmatrix}, \begin{pmatrix} 20 & 7.5 \end{pmatrix}\}$ is linearly dependent since we can satisfy
$$c_1 \begin{pmatrix} 40 & 15 \end{pmatrix} + c_2 \begin{pmatrix} 20 & 7.5 \end{pmatrix} = \begin{pmatrix} 0 & 0 \end{pmatrix}$$
with $c_1 = 1$ and $c_2 = -2$.
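Checks like this one can be run mechanically. Here is a minimal sketch in Python (NumPy assumed; the helper name is ours): a finite set of vectors is independent exactly when the matrix with those vectors as rows has rank equal to the number of rows, which is Lemma 1.4 restated in matrix terms.

```python
import numpy as np

# The two pairs of row vectors from Example 1.5.
independent_pair = np.array([[40.0, 15.0],
                             [-50.0, 25.0]])
dependent_pair = np.array([[40.0, 15.0],
                           [20.0, 7.5]])

def is_independent(rows: np.ndarray) -> bool:
    """True when the row vectors are linearly independent, i.e. when
    the only linear relationship among them is the trivial one."""
    return np.linalg.matrix_rank(rows) == rows.shape[0]

print(is_independent(independent_pair))  # True
print(is_independent(dependent_pair))    # False: (20 7.5) = (1/2)(40 15)
```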
- Remark 1.6
Recall the Statics example that began this book. We first set the unknown-mass objects at $40$ cm and $15$ cm and got a balance, and then we set the objects at $-50$ cm and $25$ cm and got a balance. With those two pieces of information we could compute values of the unknown masses. Had we instead first set the unknown-mass objects at $40$ cm and $15$ cm, and then at $20$ cm and $7.5$ cm, we would not have been able to compute the values of the unknown masses (try it). Intuitively, the problem is that the $\begin{pmatrix} 20 & 7.5 \end{pmatrix}$ information is a "repeat" of the $\begin{pmatrix} 40 & 15 \end{pmatrix}$ information (that is, $\begin{pmatrix} 20 & 7.5 \end{pmatrix}$ is in the span of the set $\{\begin{pmatrix} 40 & 15 \end{pmatrix}\}$), and so we would be trying to solve a two-unknowns problem with what is essentially one piece of information.
- Example 1.7
The set $\{1 + x, 1 - x\}$ is linearly independent in $\mathcal{P}_2$, the space of quadratic polynomials with real coefficients, because
$$0 + 0x + 0x^2 = c_1(1 + x) + c_2(1 - x) = (c_1 + c_2) + (c_1 - c_2)x + 0x^2$$
gives
$$\begin{aligned} c_1 + c_2 &= 0 \\ c_1 - c_2 &= 0 \end{aligned}$$
since polynomials are equal only if their coefficients are equal. The only solution is $c_1 = c_2 = 0$. Thus, the only linear relationship between these two members of $\mathcal{P}_2$ is the trivial one.
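Polynomial examples reduce to the same matrix check once each quadratic $a + bx + cx^2$ is encoded as its coefficient vector $(a, b, c)$; equality of polynomials is equality of coefficients. A minimal sketch, with the coefficient encoding as our own convention:

```python
import numpy as np

# Encode a + b*x + c*x^2 as the coefficient vector (a, b, c).
p1 = np.array([1.0, 1.0, 0.0])   # 1 + x
p2 = np.array([1.0, -1.0, 0.0])  # 1 - x

# {1+x, 1-x} is independent in P2 exactly when the 2x3 matrix of
# coefficient rows has rank 2.
print(np.linalg.matrix_rank(np.vstack([p1, p2])) == 2)  # True
```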
- Example 1.8
In $\mathbb{R}^3$, where
$$\vec{v}_1 = \begin{pmatrix} 3 \\ 4 \\ 5 \end{pmatrix} \qquad \vec{v}_2 = \begin{pmatrix} 2 \\ 9 \\ 2 \end{pmatrix} \qquad \vec{v}_3 = \begin{pmatrix} 4 \\ 18 \\ 4 \end{pmatrix}$$
the set $S = \{\vec{v}_1, \vec{v}_2, \vec{v}_3\}$ is linearly dependent because this is a relationship
$$0 \cdot \vec{v}_1 + 2 \cdot \vec{v}_2 - 1 \cdot \vec{v}_3 = \vec{0}$$
where not all of the scalars are zero (the fact that some of the scalars are zero doesn't matter).
- Remark 1.9
That example illustrates why, although Definition 1.3 is a clearer statement of what independence is, Lemma 1.4 is more useful for computations. Working straight from the definition, someone trying to compute whether $S$ is linearly independent would start by setting $\vec{v}_1 = c_2\vec{v}_2 + c_3\vec{v}_3$ and concluding that there are no such $c_2$ and $c_3$. But knowing that the first vector is not dependent on the other two is not enough. This person would have to go on to try $\vec{v}_2 = c_1\vec{v}_1 + c_3\vec{v}_3$ to find the dependence $c_1 = 0$, $c_3 = 1/2$. Lemma 1.4 gets the same conclusion with only one computation.
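In matrix terms, that single computation is one null-space calculation: any nonzero vector of coefficients in the null space of the matrix whose columns are the vectors is a nontrivial linear relationship. A minimal sketch with SymPy for exact arithmetic, using the vectors as given in Example 1.8:

```python
from sympy import Matrix

# Columns are v1, v2, v3 from Example 1.8.
A = Matrix([[3, 2, 4],
            [4, 9, 18],
            [5, 2, 4]])

# Each null-space basis vector (c1, c2, c3) is a nontrivial
# relationship c1*v1 + c2*v2 + c3*v3 = 0 (Lemma 1.4).
print(A.nullspace())  # [Matrix([[0], [-2], [1]])]: 0*v1 - 2*v2 + v3 = 0
```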
- Example 1.10
The empty subset of a vector space is linearly independent. There is no nontrivial linear relationship among its members as it has no members.
- Example 1.11
In any vector space, any subset containing the zero vector is linearly dependent. For example, in the space $\mathcal{P}_2$ of quadratic polynomials, consider the subset $\{1 + x, x + x^2, 0\}$.
One way to see that this subset is linearly dependent is to use Lemma 1.4: we have $0 \cdot (1 + x) + 0 \cdot (x + x^2) + 1 \cdot 0 = 0$, and this is a nontrivial relationship as not all of the coefficients are zero. Another way to see that this subset is linearly dependent is to go straight to Definition 1.3: we can express the third member of the subset as a linear combination of the first two, namely, $c_1(1 + x) + c_2(x + x^2) = 0$ is satisfied by taking $c_1 = 0$ and $c_2 = 0$ (in contrast to the lemma, the definition allows all of the coefficients to be zero).
(There is still another way to see that this subset is dependent that is subtler. The zero vector is equal to the trivial sum, that is, it is the sum of no vectors. So in a set containing the zero vector, there is an element that can be written as a combination of a collection of other vectors from the set, specifically, the zero vector can be written as a combination of the empty collection.)
The above examples, especially Example 1.5, underline the discussion that begins this section. The next result says that given a finite set, we can produce a linearly independent subset by discarding what Remark 1.6 calls "repeats".
- Theorem 1.12
In a vector space, any finite subset has a linearly independent subset with the same span.
- Proof
If the set $S = \{\vec{s}_1, \ldots, \vec{s}_n\}$ is linearly independent then $S$ itself satisfies the statement, so assume that it is linearly dependent.
By the definition of dependence, there is a vector $\vec{v}$ that is a linear combination of the others. Call that vector $\vec{v}_1$. Discard it: define the set $S_1 = S - \{\vec{v}_1\}$. By Lemma 1.1, the span does not shrink: $[S_1] = [S]$.
Now, if $S_1$ is linearly independent then we are finished. Otherwise iterate the prior paragraph: take a vector $\vec{v}_2$ that is a linear combination of other members of $S_1$ and discard it to derive $S_2 = S_1 - \{\vec{v}_2\}$ such that $[S_2] = [S_1]$. Repeat this until a linearly independent set $S_j$ appears; one must appear eventually because $S$ is finite and the empty set is linearly independent. (Formally, this argument uses induction on $n$, the number of elements in the starting set. Problem 20 asks for the details.)
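The proof is effectively an algorithm: repeatedly discard a vector that is a linear combination of the others until none remains. Here is a minimal Python sketch of that procedure, assuming NumPy; the helper names are ours.

```python
import numpy as np

def in_span(v: np.ndarray, others: list) -> bool:
    """True when v is a linear combination of `others` (the empty
    collection spans only the zero vector)."""
    if not others:
        return np.allclose(v, 0)
    A = np.column_stack(others)
    coeffs, *_ = np.linalg.lstsq(A, v, rcond=None)
    return np.allclose(A @ coeffs, v)

def independent_subset(vectors: list) -> list:
    """Theorem 1.12 as a procedure: discard dependent vectors one at
    a time.  Each discard leaves the span unchanged (Lemma 1.1), and
    the loop ends because the starting set is finite."""
    current = list(vectors)
    changed = True
    while changed:
        changed = False
        for i, v in enumerate(current):
            rest = current[:i] + current[i + 1:]
            if in_span(v, rest):
                current = rest      # discard it; the span does not shrink
                changed = True
                break
    return current
```

Applied to the five vectors of the next example, this returns a three-element independent set with the same span, though which three survive depends on the discard order.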
- Example 1.13
This set spans $\mathbb{R}^3$:
$$S = \{\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 2 \\ 0 \end{pmatrix}, \begin{pmatrix} 1 \\ 2 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ -1 \\ 1 \end{pmatrix}, \begin{pmatrix} 3 \\ 3 \\ 0 \end{pmatrix}\}$$
Looking for a linear relationship
$$c_1 \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} + c_2 \begin{pmatrix} 0 \\ 2 \\ 0 \end{pmatrix} + c_3 \begin{pmatrix} 1 \\ 2 \\ 0 \end{pmatrix} + c_4 \begin{pmatrix} 0 \\ -1 \\ 1 \end{pmatrix} + c_5 \begin{pmatrix} 3 \\ 3 \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}$$
gives a three equations/five unknowns linear system whose solution set can be parametrized in this way:
$$\{\begin{pmatrix} c_1 \\ c_2 \\ c_3 \\ c_4 \\ c_5 \end{pmatrix} = c_3 \begin{pmatrix} -1 \\ -1 \\ 1 \\ 0 \\ 0 \end{pmatrix} + c_5 \begin{pmatrix} -3 \\ -3/2 \\ 0 \\ 0 \\ 1 \end{pmatrix} \mid c_3, c_5 \in \mathbb{R}\}$$
So $S$ is linearly dependent. Setting $c_3 = 0$ and $c_5 = 1$ shows that the fifth vector is a linear combination of the first two. Thus, Lemma 1.1 says that discarding the fifth vector
$$S_1 = \{\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 2 \\ 0 \end{pmatrix}, \begin{pmatrix} 1 \\ 2 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ -1 \\ 1 \end{pmatrix}\}$$
leaves the span unchanged: $[S_1] = [S]$. Now, the third vector of $S_1$ is a linear combination of the first two and we get
$$S_2 = \{\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 2 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ -1 \\ 1 \end{pmatrix}\}$$
with the same span as $S_1$, and therefore the same span as $S$, but with one difference. The set $S_2$ is linearly independent (this is easily checked), and so discarding any of its elements will shrink the span.
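Both the parametrization and the final set can be verified mechanically. A minimal check with SymPy (exact arithmetic), using the vectors as reconstructed above:

```python
from sympy import Matrix

# Columns are the five vectors of S from Example 1.13.
S = Matrix([[1, 0, 1, 0, 3],
            [0, 2, 2, -1, 3],
            [0, 0, 0, 1, 0]])

print(S.rank() == 3)  # True: the columns span R^3

# The null space is the set of all linear relationships; two basis
# vectors means two free parameters, matching the parametrization
# by c3 and c5 above.
print(S.nullspace())
# [Matrix([[-1], [-1], [1], [0], [0]]), Matrix([[-3], [-3/2], [0], [0], [1]])]

# S2 keeps the first, second, and fourth columns; full column rank
# means S2 is linearly independent.
S2 = S[:, [0, 1, 3]]
print(S2.rank() == 3)  # True
```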