22/09/2014

Spanning Sets & Linear Independence



Lemma 1.1
Where  S  is a subset of a vector space V,

[S]=[S\cup\{\vec{v}\}]
\quad\text{if and only if}\quad
\vec{v}\in[S]
for any \vec{v}\in V.
Proof
The left to right implication is easy. If [S]=[S\cup\{\vec{v}\}] then, since  \vec{v}\in[S\cup\{\vec{v}\}] , the equality of the two sets gives that  \vec{v}\in[S] .
For the right to left implication assume that  \vec{v}\in [S]  to show that  [S]=[S\cup\{\vec{v}\}]  by mutual inclusion. The inclusion  [S]\subseteq[S\cup\{\vec{v}\}]  is obvious. For the other inclusion  [S]\supseteq[S\cup\{\vec{v}\}] , write an element of  [S\cup\{\vec{v}\}]  as  d_0\vec{v}+d_1\vec{s}_1+\dots+d_m\vec{s}_m  and substitute for  \vec{v}  its expansion as a linear combination of members of  S , giving  d_0(c_0\vec{t}_0+\dots+c_k\vec{t}_k)+d_1\vec{s}_1+\dots+d_m\vec{s}_m . This is a linear combination of linear combinations, so distributing  d_0  results in a linear combination of vectors from  S . Hence each member of  [S\cup\{\vec{v}\}]  is also a member of  [S] .
Example 1.2
In  \mathbb{R}^3 , where

\vec{v}_1=\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}\quad
\vec{v}_2=\begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}\quad
\vec{v}_3=\begin{pmatrix} 2 \\ 1 \\ 0 \end{pmatrix}
the spans  [\{\vec{v}_1,\vec{v}_2\}]  and  [\{\vec{v}_1,\vec{v}_2,\vec{v}_3\}]  are equal since  \vec{v}_3  is in the span  [\{\vec{v}_1,\vec{v}_2\}] .
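As a quick computational cross-check of this membership (a minimal sketch assuming NumPy; the names v1, v2, v3 simply mirror the example and are not part of the text), we can solve  c_1\vec{v}_1+c_2\vec{v}_2=\vec{v}_3  for the coefficients.

# Sketch: confirm that v3 lies in the span [{v1, v2}] from Example 1.2
# by solving c1*v1 + c2*v2 = v3 and checking that the fit is exact.
import numpy as np

v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
v3 = np.array([2.0, 1.0, 0.0])

A = np.column_stack([v1, v2])          # columns are the spanning vectors
coeffs, *_ = np.linalg.lstsq(A, v3, rcond=None)
print(coeffs)                          # [2. 1.], so v3 = 2*v1 + 1*v2
print(np.allclose(A @ coeffs, v3))     # True: v3 is in the span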
The lemma says that if we have a spanning set then we can remove a \vec{v} to get a new set S with the same span if and only if \vec{v} is a linear combination of vectors from S. Thus, under the second sense described above, a spanning set is minimal if and only if it contains no vectors that are linear combinations of the others in that set. We have a term for this important property.
Definition 1.3
A subset of a vector space is linearly independent if none of its elements is a linear combination of the others. Otherwise it is linearly dependent.
Here is an important observation:

\vec{s}_0=c_1\vec{s}_1+c_2\vec{s}_2+\cdots +c_n\vec{s}_n
although this way of writing one vector as a combination of the others visually sets  \vec{s}_0  off from the other vectors, algebraically there is nothing special in that equation about  \vec{s}_0 . For any  \vec{s}_i  with a coefficient c_i that is nonzero, we can rewrite the relationship to set off  \vec{s}_i .

\vec{s}_i=(1/c_i)\vec{s}_0+(-c_1/c_i)\vec{s}_1
+\dots+(-c_{i-1}/c_i)\vec{s}_{i-1}+(-c_{i+1}/c_i)\vec{s}_{i+1}+\dots+(-c_n/c_i)\vec{s}_n
When we don't want to single out any vector by writing it alone on one side of the equation we will instead say that \vec{s}_0,\vec{s}_1,\dots,\vec{s}_n  are in a linear relationship and write the relationship with all of the vectors on the same side. The next result rephrases the linear independence definition in this style. It gives what is usually the easiest way to compute whether a finite set is dependent or independent.
Lemma 1.4
A subset  S  of a vector space is linearly independent if and only if for any distinct  \vec{s}_1,\dots,\vec{s}_n\in S the only linear relationship among those vectors

c_1\vec{s}_1+\dots+c_n\vec{s}_n=\vec{0}
\qquad c_1,\dots,c_n\in\mathbb{R}
is the trivial one:  c_1=0,\dots,\,c_n=0 .
Proof
This is a direct consequence of the observation above.
If the set  S  is linearly independent then no vector \vec{s}_i can be written as a linear combination of the other vectors from S so there is no linear relationship where some of the \vec{s}\,'s have nonzero coefficients. If  S  is not linearly independent then some  \vec{s}_i  is a linear combination \vec{s}_i=c_1\vec{s}_1+\dots+c_{i-1}\vec{s}_{i-1} +c_{i+1}\vec{s}_{i+1}+\dots+c_n\vec{s}_n of other vectors from  S , and subtracting  \vec{s}_i  from both sides of that equation gives a linear relationship involving a nonzero coefficient, namely the  -1  in front of  \vec{s}_i .
Example 1.5
In the vector space of two-wide row vectors, the two-element set  \{ \begin{pmatrix} 40 &15 \end{pmatrix},\begin{pmatrix} -50 &25 \end{pmatrix}\}  is linearly independent. To check this, set

c_1\cdot\begin{pmatrix} 40 &15 \end{pmatrix}+c_2\cdot\begin{pmatrix} -50 &25 \end{pmatrix}=\begin{pmatrix} 0 &0 \end{pmatrix}
Solving the resulting system

\begin{array}{*{2}{rc}r}
40c_1 &- &50c_2 &= &0 \\
15c_1 &+ &25c_2 &= &0
\end{array}
\;\xrightarrow[]{-(15/40)\rho_1+\rho_2}\;
\begin{array}{*{2}{rc}r}
40c_1 &- &50c_2    &= &0 \\
& &(175/4)c_2 &= &0
\end{array}
shows that both  c_1  and  c_2  are zero. So the only linear relationship between the two given row vectors is the trivial relationship.
In the same vector space,  \{ \begin{pmatrix} 40 &15 \end{pmatrix},\begin{pmatrix} 20 &7.5 \end{pmatrix}\}  is linearly dependent since we can satisfy

c_1\cdot\begin{pmatrix} 40 &15 \end{pmatrix}+c_2\cdot\begin{pmatrix} 20 &7.5 \end{pmatrix}=\begin{pmatrix} 0 &0 \end{pmatrix}
with  c_1=1  and  c_2=-2 .
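Both checks in this example come down to asking whether a small homogeneous system has only the trivial solution. As a minimal computational sketch (assuming NumPy; the helper name independent is illustrative, not from the text), the Lemma 1.4 test can be phrased as a rank condition.

# Sketch: a finite set of vectors is linearly independent exactly when the
# matrix having those vectors as columns has full column rank, i.e. when
# c1*s1 + ... + cn*sn = 0 forces every ci to be zero.
import numpy as np

def independent(vectors):
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == len(vectors)

print(independent([np.array([40.0, 15.0]), np.array([-50.0, 25.0])]))  # True
print(independent([np.array([40.0, 15.0]), np.array([20.0, 7.5])]))    # False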
Remark 1.6
Recall the Statics example that began this book. We first set the unknown-mass objects at  40  cm and  15  cm and got a balance, and then we set the objects at  -50  cm and  25  cm and got a balance. With those two pieces of information we could compute values of the unknown masses. Had we instead first set the unknown-mass objects at  40  cm and  15  cm, and then at  20  cm and  7.5  cm, we would not have been able to compute the values of the unknown masses (try it). Intuitively, the problem is that the  \begin{pmatrix} 20 &7.5 \end{pmatrix}  information is a "repeat" of the \begin{pmatrix} 40 &15 \end{pmatrix} information— that is, \begin{pmatrix} 20 &7.5 \end{pmatrix} is in the span of the set \{\begin{pmatrix} 40 &15 \end{pmatrix}\}— and so we would be trying to solve a two-unknowns problem with what is essentially one piece of information.
Example 1.7
The set  \{1+x,1-x\}  is linearly independent in \mathcal{P}_2 , the space of quadratic polynomials with real coefficients, because

0+0x+0x^2
=
c_1(1+x)+c_2(1-x)
=
(c_1+c_2)+(c_1-c_2)x+0x^2
gives
\begin{array}{rcl}
\begin{array}{*{2}{rc}r}
c_1 &+ &c_2 &= &0 \\
c_1 &- &c_2 &= &0
\end{array}
&\xrightarrow[]{-\rho_1+\rho_2}
&\begin{array}{*{2}{rc}r}
c_1 &+ &c_2 &= &0 \\
&  &-2c_2 &= &0
\end{array}
\end{array}
since polynomials are equal only if their coefficients are equal. Thus, the only linear relationship between these two members of \mathcal{P}_2 is the trivial one.
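The computation above proceeds by equating coefficients, and that step can be mechanized. Here is a minimal sketch assuming SymPy (the symbols c1, c2 are only for illustration and are not part of the text).

# Sketch: in P2 the relationship c1*(1+x) + c2*(1-x) = 0 forces every
# coefficient of the expanded polynomial to be zero.
import sympy as sp

x, c1, c2 = sp.symbols('x c1 c2')
relation = sp.expand(c1*(1 + x) + c2*(1 - x))   # (c1 + c2) + (c1 - c2)*x
coeffs = sp.Poly(relation, x).all_coeffs()      # each coefficient must be zero
print(sp.solve(coeffs, [c1, c2], dict=True))    # [{c1: 0, c2: 0}]: only the trivial relationship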
Example 1.8
In  \mathbb{R}^3 , where

\vec{v}_1=\begin{pmatrix} 3 \\ 4 \\ 5 \end{pmatrix}
\quad
\vec{v}_2=\begin{pmatrix} 2 \\ 9 \\ 2 \end{pmatrix}
\quad
\vec{v}_3=\begin{pmatrix} 4 \\ 18 \\ 4 \end{pmatrix}
the set  S=\{\vec{v}_1,\vec{v}_2,\vec{v}_3\}  is linearly dependent because this is a relationship

0\cdot\vec{v}_1
+2\cdot\vec{v}_2
-1\cdot\vec{v}_3
=\vec{0}
where not all of the scalars are zero (the fact that some of the scalars are zero doesn't matter).
Remark 1.9
That example illustrates why, although Definition 1.3 is a clearer statement of what independence is, Lemma 1.4 is more useful for computations. Working straight from the definition, someone trying to compute whether S is linearly independent would start by setting  \vec{v}_1=c_2\vec{v}_2+c_3\vec{v}_3  and concluding that there are no such c_2 and c_3. But knowing that the first vector is not dependent on the other two is not enough. This person would have to go on to try  \vec{v}_2=c_1\vec{v}_1+c_3\vec{v}_3  to find the dependence c_1=0, c_3=1/2. Lemma 1.4 gets the same conclusion with only one computation.
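To make the remark concrete, here is a sketch of that single computation for the set of Example 1.8, assuming SymPy (not part of the text): a nonzero null-space vector of the matrix whose columns are  \vec{v}_1,\vec{v}_2,\vec{v}_3  hands us the coefficients of a nontrivial relationship directly.

# Sketch: one null-space computation exposes the dependence in Example 1.8.
import sympy as sp

A = sp.Matrix([[3, 2,  4],
               [4, 9, 18],
               [5, 2,  4]])     # columns are v1, v2, v3
print(A.nullspace())            # [Matrix([[0], [-2], [1]])], i.e. 0*v1 - 2*v2 + 1*v3 = 0,
                                # the relationship from the example scaled by -1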
Example 1.10
The empty subset of a vector space is linearly independent. There is no nontrivial linear relationship among its members as it has no members.
Example 1.11
In any vector space, any subset containing the zero vector is linearly dependent. For example, in the space \mathcal{P}_2 of quadratic polynomials, consider the subset \{1+x,x+x^2,0\}.
One way to see that this subset is linearly dependent is to use Lemma 1.4: writing \vec{v}_1=1+x and \vec{v}_2=x+x^2 for the first two members, we have 0\cdot\vec{v}_1+0\cdot\vec{v}_2+1\cdot\vec{0}=\vec{0}, and this is a nontrivial relationship as not all of the coefficients are zero. Another way to see that this subset is linearly dependent is to go straight to Definition 1.3: we can express the third member of the subset as a linear combination of the first two, namely, c_1\vec{v}_1+c_2\vec{v}_2=\vec{0} is satisfied by taking c_1=0 and c_2=0 (in contrast to the lemma, the definition allows all of the coefficients to be zero).
(There is still another way to see that this subset is dependent that is subtler. The zero vector is equal to the trivial sum, that is, it is the sum of no vectors. So in a set containing the zero vector, there is an element that can be written as a combination of a collection of other vectors from the set, specifically, the zero vector can be written as a combination of the empty collection.)
The above examples, especially Example 1.5, underline the discussion that begins this section. The next result says that given a finite set, we can produce a linearly independent subset by discarding what Remark 1.6 calls "repeats".

Theorem 1.12
In a vector space, any finite subset has a linearly independent subset with the same span.
Proof
If the set  S=\{ \vec{s}_1,\dots,\vec{s}_n\}  is linearly independent then S itself satisfies the statement, so assume that it is linearly dependent.
By the definition of dependence, there is a vector  \vec{s}_i  that is a linear combination of the others. Call that vector  \vec{v}_1 . Discard it— define the set  S_1=S-\{\vec{v}_1\} . By Lemma 1.1, the span does not shrink  [S_1]=[S] .
Now, if  S_1  is linearly independent then we are finished. Otherwise iterate the prior paragraph: take a vector \vec{v}_2 that is a linear combination of other members of S_1 and discard it to derive  S_2=S_1-\{\vec{v}_2\}  such that  [S_2]=[S_1] . Repeat this until a linearly independent set S_j appears; one must appear eventually because  S  is finite and the empty set is linearly independent. (Formally, this argument uses induction on n, the number of elements in the starting set. Problem 20 asks for the details.)
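The proof outlines a procedure that can be carried out mechanically. Below is a minimal computational sketch of one variant, assuming NumPy (the name prune_to_independent is an illustration, not from the text): instead of repeatedly discarding a dependent vector, it makes one pass and keeps a vector only when it enlarges the span of the vectors already kept, which likewise produces a linearly independent subset with the same span.

# Sketch of the discard-the-repeats idea behind Theorem 1.12.  A vector is
# kept only if adding it raises the rank, that is, only if it is not already
# a linear combination of the vectors kept so far.
import numpy as np

def prune_to_independent(vectors):
    kept = []
    for v in vectors:
        candidate = kept + [v]
        A = np.column_stack(candidate)
        if np.linalg.matrix_rank(A) == len(candidate):
            kept.append(v)
    return kept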
Example 1.13
This set spans  \mathbb{R}^3 .

S=\{\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix},
\begin{pmatrix} 0 \\ 2 \\ 0 \end{pmatrix},
\begin{pmatrix} 1 \\ 2 \\ 0 \end{pmatrix},
\begin{pmatrix} 0 \\ -1 \\ 1 \end{pmatrix},
\begin{pmatrix} 3 \\ 3 \\ 0 \end{pmatrix} \}
Looking for a linear relationship

c_1\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}
+c_2\begin{pmatrix} 0 \\ 2 \\ 0 \end{pmatrix}
+c_3\begin{pmatrix} 1 \\ 2 \\ 0 \end{pmatrix}
+c_4\begin{pmatrix} 0 \\ -1 \\ 1 \end{pmatrix}
+c_5\begin{pmatrix} 3 \\ 3 \\ 0 \end{pmatrix}
=\begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}
gives a linear system of three equations in five unknowns whose solution set can be parametrized in this way.


\{\begin{pmatrix} c_1 \\ c_2 \\ c_3 \\ c_4 \\ c_5 \end{pmatrix}=
c_3\begin{pmatrix} -1 \\ -1 \\ 1 \\ 0 \\ 0 \end{pmatrix}
+c_5\begin{pmatrix} -3 \\ -3/2 \\ 0 \\ 0 \\ 1 \end{pmatrix}
\,\big|\, c_3,c_5\in\mathbb{R} \}
So S is linearly dependent. Setting  c_3=0  and  c_5=1  shows that the fifth vector is a linear combination of the first two. Thus, Lemma 1.1 says that discarding the fifth vector

S_1=\{\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix},
\begin{pmatrix} 0 \\ 2 \\ 0 \end{pmatrix},
\begin{pmatrix} 1 \\ 2 \\ 0 \end{pmatrix},
\begin{pmatrix} 0 \\ -1 \\ 1 \end{pmatrix} \}
leaves the span unchanged [S_1]=[S]. Now, the third vector of  S_1  is a linear combination of the first two and we get

S_2=\{\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix},
\begin{pmatrix} 0 \\ 2 \\ 0 \end{pmatrix},
\begin{pmatrix} 0 \\ -1 \\ 1 \end{pmatrix} \}
with the same span as S_1, and therefore the same span as S, but with one difference. The set S_2 is linearly independent (this is easily checked), and so discarding any of its elements will shrink the span.
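As a computational cross-check of this conclusion (a sketch assuming NumPy, with S and S2 mirroring the example), the rank comparison below confirms both claims: S_2 is linearly independent, and since every vector of S_2 comes from S, equal rank means [S_2]=[S]. Running the prune_to_independent sketch given after Theorem 1.12 on these five vectors likewise keeps exactly the first, second, and fourth vectors.

# Sketch: confirm that S2 is linearly independent and spans the same
# subspace as S.  Because S2 is a subset of S, span(S2) lies inside span(S),
# so equal rank means the two spans coincide.
import numpy as np

S  = np.column_stack([[1, 0, 0], [0, 2, 0], [1, 2, 0], [0, -1, 1], [3, 3, 0]])
S2 = np.column_stack([[1, 0, 0], [0, 2, 0], [0, -1, 1]])

print(np.linalg.matrix_rank(S2) == S2.shape[1])                # True: S2 is independent
print(np.linalg.matrix_rank(S)  == np.linalg.matrix_rank(S2))  # True: same span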