Linear Independence & Subset Relations

Theorem 1.12 describes producing a linearly independent set by shrinking, that is, by taking subsets. We finish this subsection by considering how linear independence and dependence, which are properties of sets, interact with the subset relation between sets.
Lemma 1.14
Any subset of a linearly independent set is also linearly independent. Any superset of a linearly dependent set is also linearly dependent.
Proof
This is clear.
Restated, independence is preserved by the subset operation and dependence is preserved by the superset operation.
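(As a numerical aside, the lemma can be checked on particular sets. The sketch below assumes NumPy is available; the helper is_independent and the sample vectors are our own illustration, not part of the text. It uses the fact that a finite list of vectors is linearly independent exactly when the matrix having those vectors as columns has rank equal to the number of vectors.)

```python
import numpy as np

def is_independent(vectors):
    """Return True when the given vectors are linearly independent."""
    A = np.column_stack(vectors)              # the vectors become the columns of A
    return np.linalg.matrix_rank(A) == len(vectors)

# An independent set and one of its subsets: both are independent.
e1, e2, e3 = np.eye(3)
assert is_independent([e1, e2, e3])
assert is_independent([e1, e2])               # the subset stays independent

# A dependent set and one of its supersets: both are dependent.
assert not is_independent([e1, 2 * e1])
assert not is_independent([e1, 2 * e1, e2])   # the superset stays dependent
```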
Those are two of the four possible cases of interaction that we can consider. The third case, whether linear dependence is preserved by the subset operation, is covered by Example 1.13, which gives a linearly dependent set S with a subset S_1 that is linearly dependent and another subset S_2 that is linearly independent.
That leaves one case, whether linear independence is preserved by superset. The next example shows what can happen.
Example 1.15
In each of these three paragraphs the subset S is linearly independent.
For the set

S =\{\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}\}
the span  [S]  is the  x  axis. Here are two supersets of S, one linearly dependent and the other linearly independent.
dependent:  \{\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} -3 \\ 0 \\ 0 \end{pmatrix}\}       independent:  \{\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}\}
Checking the dependence or independence of these sets is easy.
For

S =\{\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}\}
the span  [S]  is the  xy  plane. These are two supersets.
dependent:  \{\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}, \begin{pmatrix} 3 \\ -2 \\ 0 \end{pmatrix} \}       independent:  \{\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} \}
If

S =\{\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} \}
then  [S]=\mathbb{R}^3 . A linearly dependent superset is
dependent:  \{\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}, \begin{pmatrix} 2 \\ -1 \\ 3 \end{pmatrix} \}
but there are no linearly independent supersets of S. The reason is that for any vector that we would add to make a superset, the linear dependence equation


\begin{pmatrix} x \\ y \\ z \end{pmatrix} =c_1\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} +c_2\begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} +c_3\begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}

has a solution  c_1=x ,  c_2=y , and  c_3=z .
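(As a quick computational companion to this last point, the following sketch, assuming NumPy and using our own randomly chosen test vectors, confirms that no matter which vector is appended to S, the resulting four-element subset of \mathbb{R}^3 is linearly dependent.)

```python
import numpy as np

rng = np.random.default_rng(0)
S = np.eye(3)                               # columns: the standard basis of R^3

# Any candidate vector (x, y, z) is already c1*e1 + c2*e2 + c3*e3 with
# c1 = x, c2 = y, c3 = z, so every four-vector superset is dependent:
# a 3x4 matrix can never have rank 4.
for _ in range(5):
    v = rng.normal(size=3)
    A = np.column_stack([S, v])             # the three basis vectors plus v
    assert np.linalg.matrix_rank(A) < A.shape[1]
```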
So, in general, a linearly independent set may have a superset that is dependent. And, in general, a linearly independent set may have a superset that is independent. We can characterize when the superset is one and when it is the other.
Lemma 1.16
Where  S  is a linearly independent subset of a vector space V ,

S\cup\{\vec{v}\}\text{ is linearly dependent}
\quad\text{if and only if}\quad
\vec{v}\in[S]
for any  \vec{v}\in V  with  \vec{v}\not\in S .
Proof
One implication is clear: if  \vec{v}\in[S]  then  \vec{v}=c_1\vec{s}_1+c_2\vec{s}_2+\cdots +c_n\vec{s}_n  where each  \vec{s}_i\in S  and  c_i\in\mathbb{R} , and so  \vec{0}=c_1\vec{s}_1+c_2\vec{s}_2+\cdots +c_n\vec{s}_n+(-1)\vec{v}  is a nontrivial linear relationship among elements of  S\cup\{\vec{v}\} .
The other implication requires the assumption that  S  is linearly independent. With  S\cup\{\vec{v}\}  linearly dependent, there is a nontrivial linear relationship  c_0\vec{v}+c_1\vec{s}_1+c_2\vec{s}_2+\cdots +c_n\vec{s}_n=\vec{0} , and the independence of S implies that  c_0\neq 0 , since otherwise the relationship would be a nontrivial one among members of  S  alone. Rewriting the equation as  \vec{v}=-(c_1/c_0)\vec{s}_1-\dots-(c_n/c_0)\vec{s}_n  shows that  \vec{v}\in[S] .
(Compare this result with Lemma 1.1. Both say, roughly, that \vec{v} is a "repeat" if it is in the span of S. However, note the additional hypothesis here of linear independence.)
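The lemma is also easy to test on concrete vectors. The sketch below assumes NumPy; the helpers is_independent and in_span and the sample vectors are our own illustration, not part of the text. Membership \vec{v}\in[S] is checked by solving a least-squares problem and verifying that the residual is numerically zero.

```python
import numpy as np

def is_independent(vectors):
    """Return True when the given vectors are linearly independent."""
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == len(vectors)

def in_span(v, vectors):
    """Return True when v is a linear combination of the given vectors."""
    A = np.column_stack(vectors)
    c, *_ = np.linalg.lstsq(A, v, rcond=None)
    return np.allclose(A @ c, v)

e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 0.0])
S = [e1, e2]                                    # linearly independent

for v in (np.array([3.0, -2.0, 0.0]),           # lies in [S]
          np.array([0.0, 0.0, 1.0])):           # does not lie in [S]
    # Lemma 1.16: S U {v} is dependent exactly when v is in the span of S.
    assert (not is_independent(S + [v])) == in_span(v, S)
```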
Corollary 1.17
A subset  S=\{\vec{s}_1,\dots,\vec{s}_n\}  of a vector space is linearly dependent if and only if some  \vec{s}_i  is a linear combination of the vectors  \vec{s}_1, \dots, \vec{s}_{i-1}  listed before it.
Proof
Consider  S_0=\{\} ,  S_1=\{\vec{s}_1\} ,  S_2=\{\vec{s}_1,\vec{s}_2\} , etc. Some index  i\geq 1  is the first one with  S_{i-1}\cup\{\vec{s}_i\}  linearly dependent; since  S_{i-1}  is linearly independent, Lemma 1.16 gives  \vec{s}_i\in[ S_{i-1} ] . For the converse, if some  \vec{s}_i  is a linear combination of the vectors listed before it then moving everything to one side of the equation gives a nontrivial relationship, so  S  is linearly dependent.
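The proof describes a scan of the list that can be written out directly. Here is a sketch assuming NumPy; the function name first_redundant_index and the example list are our own. It returns the first position whose vector lies in the span of the vectors listed before it, or None when the list is linearly independent.

```python
import numpy as np

def first_redundant_index(vectors):
    """First position (0-based) whose vector lies in the span of the earlier ones."""
    for i, v in enumerate(vectors):
        if i == 0:
            # The span of the empty set is {0}, so s_1 qualifies only if it is zero.
            if np.allclose(v, 0):
                return i
            continue
        A = np.column_stack(vectors[:i])
        c, *_ = np.linalg.lstsq(A, v, rcond=None)
        if np.allclose(A @ c, v):
            return i
    return None                              # no such vector: the set is independent

vs = [np.array([1.0, 0.0, 0.0]),
      np.array([0.0, 1.0, 0.0]),
      np.array([3.0, -2.0, 0.0])]            # equals 3*vs[0] - 2*vs[1]
print(first_redundant_index(vs))             # prints 2: the third vector is the repeat
```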
Lemma 1.16 can be restated in terms of independence instead of dependence: if  S  is linearly independent and  \vec{v}\not\in S  then the set  S\cup\{\vec{v}\}  is also linearly independent if and only if  \vec{v}\not\in[S].  Applying Lemma 1.1, we conclude that if  S is linearly independent and  \vec{v}\not\in S  then  S\cup\{\vec{v}\}  is also linearly independent if and only if  [S\cup\{\vec{v}\}]\neq[S] . Briefly, when passing from S to a superset S_1, to preserve linear independence we must expand the span [S_1]\supset[S].
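In computational terms (again a sketch with NumPy and our own sample vectors), "expand the span" means that the rank of the matrix of vectors goes up by one when \vec{v} is appended; it stays the same exactly when independence is lost.

```python
import numpy as np

S = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]   # linearly independent

for v in (np.array([3.0, -2.0, 0.0]),        # leaves the span unchanged: dependence
          np.array([0.0, 0.0, 1.0])):        # enlarges the span: independence kept
    rank_S  = np.linalg.matrix_rank(np.column_stack(S))
    rank_Sv = np.linalg.matrix_rank(np.column_stack(S + [v]))
    print(rank_Sv == rank_S + 1)             # False, then True
```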
Example 1.15 shows that some linearly independent sets are maximal, that is, they have as many elements as possible in that they have no supersets that are linearly independent. By the prior paragraph, a linearly independent set is maximal if and only if it spans the entire space, because then no vector exists that is not already in the span.