14/09/2014

Vector Space

A vector space is a mathematical structure formed by a collection of elements called vectors, which may be added together and multiplied ("scaled") by numbers, called scalars in this context. Scalars are often taken to be real numbers, but there are also vector spaces with scalar multiplication by complex numbers, rational numbers, or generally any field. The operations of vector addition and scalar multiplication must satisfy certain requirements, called axioms, listed below. An example of a vector space is that of Euclidean vectors, which may be used to represent physical quantities such as forces: any two forces (of the same type) can be added to yield a third, and the multiplication of a force vector by a real multiplier is another force vector. In the same vein, but in a more geometric sense, vectors representing displacements in the plane or in three-dimensional space also form vector spaces. Vectors in vector spaces do not necessarily have to be arrow-like objects as they appear in the mentioned examples: vectors are best thought of as abstract mathematical objects with particular properties, which in some cases can be visualized as arrows.
Definition:
A vector space over a field F is a set V together with two operations that satisfy the eight axioms listed below. Elements of V are called vectors. Elements of F are called scalars. In this article, vectors are distinguished from scalars by boldface. The first operation, vector addition, takes any two vectors v and w and assigns to them a third vector which is commonly written as v + w, and called the sum of these two vectors. The second operation takes any scalar a and any vector v and gives another vector av. In view of the first example, where the multiplication is done by rescaling the vector v by a scalar a, the multiplication is called scalar multiplication of v by a.
To qualify as a vector space, the set V and the operations of addition and multiplication must adhere to a number of requirements called axioms. In the list below, let u, v and w be arbitrary vectors in V, and a and b scalars in F.
Axioms:

1) Associativity of addition: (u + v) + w = u + (v + w)

2) Commutativity of addition:  u + v = v + u

3) Identity element of addition: There exists an element 0 ∈ V, called the zero vector, such that v + 0 = v for all v ∈ V

4) Inverse elements of addition: For every v ∈ V, there exists an element −v ∈ V, called the additive inverse of v, such that v + (−v) = 0

5) Compatibility of scalar multiplication with field multiplication: a(bv) = (ab)v

6) Identity element of scalar multiplication: 1v = v, where 1 denotes the multiplicative identity in F

7) Distributivity of scalar multiplication with respect to field addition: (a + b)v = av + bv

8) Distributivity of scalar multiplication with respect to vector addition: a(u + v) = au + av
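
As a quick illustration (a worked check, not part of the axiom list itself), axiom 7 can be verified directly for R2 with componentwise operations:
(a + b)(v_1, v_2) = ((a+b)v_1,\; (a+b)v_2) = (a v_1 + b v_1,\; a v_2 + b v_2) = a(v_1, v_2) + b(v_1, v_2).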


Bases and dimension:

A vector v in R2 (blue) expressed in terms of different bases: using the standard basis of R2, v = xe1 + ye2 (black), and using a different, non-orthogonal basis: v = f1 + f2 (red).

Bases allow the introduction of coordinates as a means to represent vectors. A basis is a (finite or infinite) set B = {bi}i ∈ I of vectors bi, for convenience often indexed by some index set I, that spans the whole space and is linearly independent. "Spanning the whole space" means that any vector v can be expressed as a finite sum (called a linear combination) of the basis elements:
\mathbf{v} = a_1 \mathbf{b}_{i_1} + a_2 \mathbf{b}_{i_2} + \cdots + a_n \mathbf{b}_{i_n},    (1)
where the ak are scalars, called the coordinates of the vector v with respect to the basis B, and the bik (k = 1, ..., n) are elements of B. Linear independence means that the coordinates ak are uniquely determined for any vector in the vector space.
For example, the coordinate vectors e1 = (1, 0, ..., 0), e2 = (0, 1, 0, ..., 0), up to en = (0, 0, ..., 0, 1), form a basis of Fn, called the standard basis, since any vector (x1, x2, ..., xn) can be uniquely expressed as a linear combination of these vectors:
(x1, x2, ..., xn) = x1(1, 0, ..., 0) + x2(0, 1, 0, ..., 0) + ... + xn(0, ..., 0, 1) = x1e1 + x2e2 + ... + xnen.
The corresponding coordinates x1, x2, ..., xn are just the Cartesian coordinates of the vector.
Every vector space has a basis. This follows from Zorn's lemma, an equivalent formulation of the Axiom of Choice. Given the other axioms of Zermelo–Fraenkel set theory, the existence of bases is equivalent to the axiom of choice. The ultrafilter lemma, which is weaker than the axiom of choice, implies that all bases of a given vector space have the same number of elements, or cardinality (cf. Dimension theorem for vector spaces). It is called the dimension of the vector space, denoted dim V. If the space is spanned by finitely many vectors, the above statements can be proven without such fundamental input from set theory.
The dimension of the coordinate space Fn is n, by the basis exhibited above. The dimension of the polynomial ring F[x] introduced above is countably infinite; a basis is given by 1, x, x2, ... A fortiori, the dimension of more general function spaces, such as the space of functions on some (bounded or unbounded) interval, is infinite. Under suitable regularity assumptions on the coefficients involved, the dimension of the solution space of a homogeneous ordinary differential equation equals the order of the equation. For example, the solution space of the equation f′′ − 2f′ + f = 0 is generated by ex and xex. These two functions are linearly independent over R, so the dimension of this space is two, as is the order of the equation.
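As a quick check (using the equation named above), substituting f = xex confirms that it lies in the solution space:
f = x e^x, \quad f' = (x+1) e^x, \quad f'' = (x+2) e^x, \quad f'' - 2f' + f = (x + 2 - 2(x+1) + x) e^x = 0.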
A field extension over the rationals Q can be thought of as a vector space over Q (by defining vector addition as field addition, defining scalar multiplication as field multiplication by elements of Q, and otherwise ignoring the field multiplication). The dimension (or degree) of the field extension Q(α) over Q depends on α. If α satisfies some polynomial equation
q_n \alpha^n + q_{n-1} \alpha^{n-1} + \cdots + q_1 \alpha + q_0 = 0, with rational coefficients q_n, ..., q_0.
("α is algebraic"), the dimension is finite. More precisely, it equals the degree of the minimal polynomial having α as a root.For example, the complex numbers C are a two-dimensional real vector space, generated by 1 and the imaginary unit i. The latter satisfies i2 + 1 = 0, an equation of degree two. Thus, C is a two-dimensional R-vector space (and, as any field, one-dimensional as a vector space over itself, C). If α is not algebraic, the dimension of Q(α) over Q is infinite. For instance, for α =Ï€ there is no such equation, in other words Ï€ is transcendental.

Linear maps and matrices:

The relation of two vector spaces can be expressed by a linear map or linear transformation. These are functions that reflect the vector space structure; that is, they preserve sums and scalar multiplication:
f(x + y) = f(x) + f(y) and f(a · x) = a · f(x) for all x and y in V, and all a in F.
An isomorphism is a linear map f : V → W such that there exists an inverse map g : W → V, that is, a map such that the two possible compositions f ∘ g : W → W and g ∘ f : V → V are identity maps. Equivalently, f is both one-to-one (injective) and onto (surjective). If there exists an isomorphism between V and W, the two spaces are said to be isomorphic; they are then essentially identical as vector spaces, since all identities holding in V are, via f, transported to similar ones in W, and vice versa via g.
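A minimal concrete example (the map is chosen here for illustration): f : R2 → R2, f(x, y) = (x + y, x − y) is linear, and the map g(s, t) = ((s + t)/2, (s − t)/2) inverts it, so f is an isomorphism:
g(f(x, y)) = g(x + y,\; x - y) = \left( \tfrac{(x+y)+(x-y)}{2},\; \tfrac{(x+y)-(x-y)}{2} \right) = (x, y).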


Describing an arrow vector v by its coordinates x and y yields an isomorphism of vector spaces.
For example, the "arrows in the plane" and "ordered pairs of numbers" vector spaces in the introduction are isomorphic: a planar arrow v departing at the origin of some (fixed) coordinate system can be expressed as an ordered pair by considering the x- and y-component of the arrow, as shown in the image at the right. Conversely, given a pair (xy), the arrow going by x to the right (or to the left, if x is negative), and y up (down, if y is negative) turns back the arrow v.
Linear maps V → W between two fixed vector spaces form a vector space HomF(V, W), also denoted L(V, W). The space of linear maps from V to F is called the dual vector space, denoted V∗. Via the injective natural map V → V∗∗, any vector space can be embedded into its bidual; the map is an isomorphism if and only if the space is finite-dimensional.
Once a basis of V is chosen, linear maps f : V → W are completely determined by specifying the images of the basis vectors, because any element of V is expressed uniquely as a linear combination of them. If dim V = dim W, a 1-to-1 correspondence between fixed bases of V and W gives rise to a linear map that maps any basis element of V to the corresponding basis element of W. It is an isomorphism, by its very definition. Therefore, two vector spaces are isomorphic if and only if their dimensions agree. Another way to express this is that any vector space is completely classified (up to isomorphism) by its dimension, a single number. In particular, any n-dimensional F-vector space V is isomorphic to Fn. There is, however, no "canonical" or preferred isomorphism; actually, an isomorphism φ : Fn → V is equivalent to the choice of a basis of V, by mapping the standard basis of Fn to V via φ. The freedom of choosing a convenient basis is particularly useful in the infinite-dimensional context.
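For instance, the space of polynomials over F of degree less than n is n-dimensional (with basis 1, x, ..., xn−1), so by the classification above it is isomorphic to Fn via the coordinate map
a_0 + a_1 x + \cdots + a_{n-1} x^{n-1} \mapsto (a_0, a_1, \ldots, a_{n-1}).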

Matrices:

A typical matrix
Matrices are a useful notion to encode linear maps. They are written as a rectangular array of scalars, as in the image at the right. Any m-by-n matrix A gives rise to a linear map from Fn to Fm by the following assignment:
\mathbf x = (x_1, x_2, \cdots, x_n) \mapsto \left(\sum_{j=1}^n a_{1j}x_j, \sum_{j=1}^n a_{2j}x_j, \cdots, \sum_{j=1}^n a_{mj}x_j \right), where \sum denotes summation,
or, using the matrix multiplication of the matrix A with the coordinate vector x:
x ↦ Ax.
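For example (a matrix chosen here for illustration), the 2-by-3 matrix
A = \begin{pmatrix} 1 & 0 & 2 \\ 0 & 1 & 1 \end{pmatrix}
sends (x_1, x_2, x_3) \mapsto (x_1 + 2x_3,\; x_2 + x_3), a linear map from F3 to F2.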
Moreover, after choosing bases of V and W, any linear map f : V → W is uniquely represented by a matrix via this assignment.
The volume of this parallelepiped is the absolute value of the determinant of the 3-by-3 matrix formed by the vectors r1, r2, and r3.
The determinant det(A) of a square matrix A is a scalar that tells whether the associated map is an isomorphism or not: the map is an isomorphism if and only if the determinant is nonzero. The linear transformation of Rn corresponding to a real n-by-n matrix is orientation preserving if and only if its determinant is positive.
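As a quick check (example matrix chosen for illustration):
A = \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix}, \qquad \det(A) = 2 \cdot 1 - 1 \cdot 1 = 1,
which is nonzero, so x ↦ Ax is an isomorphism of R2; since the determinant is positive, the map also preserves orientation.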

Eigenvalues and eigenvectors:

Endomorphisms, linear maps f : V → V, are particularly important since in this case vectors v can be compared with their image under f, f(v). Any nonzero vector v satisfying λv = f(v), where λ is a scalar, is called an eigenvector of f with eigenvalue λ. Equivalently, v is an element of the kernel of the difference f − λ · Id (where Id is the identity map V → V). If V is finite-dimensional, this can be rephrased using determinants: f having eigenvalue λ is equivalent to
det(f − λ · Id) = 0.
By spelling out the definition of the determinant, the expression on the left hand side can be seen to be a polynomial function in λ, called the characteristic polynomial of f. If the field F is large enough to contain a zero of this polynomial (which automatically happens for F algebraically closed, such as F = C), any linear map has at least one eigenvector. The vector space V may or may not possess an eigenbasis, a basis consisting of eigenvectors. This phenomenon is governed by the Jordan canonical form of the map. The set of all eigenvectors corresponding to a particular eigenvalue of f forms a vector space known as the eigenspace corresponding to the eigenvalue (and f) in question. To achieve the spectral theorem, the corresponding statement in the infinite-dimensional case, the machinery of functional analysis is needed.
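A worked two-dimensional example (matrix chosen for illustration):
A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}, \qquad \det(A - \lambda \cdot \mathrm{Id}) = (2 - \lambda)^2 - 1 = (\lambda - 1)(\lambda - 3),
so the eigenvalues are λ = 1 and λ = 3, with eigenvectors (1, −1) and (1, 1) respectively; here these eigenvectors even form an eigenbasis of R2.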


Subspaces and quotient spaces:


A line passing through the origin (blue, thick) in R3 is a linear subspace. It is the intersection of two planes (green and yellow).
A nonempty subset W of a vector space V that is closed under addition and scalar multiplication (and therefore contains the 0-vector of V) is called a subspace of V. Subspaces of V are vector spaces (over the same field) in their own right. The intersection of all subspaces containing a given set S of vectors is called its span, and it is the smallest subspace of V containing the set S. Expressed in terms of elements, the span is the subspace consisting of all the linear combinations of elements of S.
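For example, in R3 the span of the standard vectors e1 and e2 is the xy-plane, a two-dimensional subspace:
\mathrm{span}\{ \mathbf{e}_1, \mathbf{e}_2 \} = \{\, x \mathbf{e}_1 + y \mathbf{e}_2 : x, y \in \mathbf{R} \,\} = \{\, (x, y, 0) : x, y \in \mathbf{R} \,\} \subset \mathbf{R}^3.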
The counterpart to subspaces are quotient vector spaces. Given any subspace W ⊂ V, the quotient space V/W ("V modulo W") is defined as follows: as a set, it consists of v + W = {v + w : w ∈ W}, where v is an arbitrary vector in V. The sum of two such elements v1 + W and v2 + W is (v1 + v2) + W, and scalar multiplication is given by a · (v + W) = (a · v) + W. The key point in this definition is that v1 + W = v2 + W if and only if the difference of v1 and v2 lies in W. This way, the quotient space "forgets" information that is contained in the subspace W.
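A simple illustration: take V = R2 and let W be the x-axis. The coset (a, b) + W is the horizontal line at height b, two vectors lie in the same coset exactly when their y-coordinates agree, and
\mathbf{R}^2 / W \cong \mathbf{R}, \qquad (a, b) + W \mapsto b.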
The kernel ker(f) of a linear map f : V → W consists of vectors v that are mapped to 0 in W. Both kernel and image im(f) = {f(v) : v ∈ V} are subspaces of V and W, respectively. The existence of kernels and images is part of the statement that the category of vector spaces (over a fixed field F) is an abelian category, i.e. a corpus of mathematical objects and structure-preserving maps between them (a category) that behaves much like the category of abelian groups. Because of this, many statements such as the first isomorphism theorem (also called the rank–nullity theorem in matrix-related terms)
V / ker(f) ≡ im(f)
and the second and third isomorphism theorems can be formulated and proven in a way very similar to the corresponding statements for groups.
An important example is the kernel of a linear map x ↦ Ax for some fixed matrix A, as above. The kernel of this map is the subspace of vectors x such that Ax = 0, which is precisely the set of solutions to the system of homogeneous linear equations belonging to A. This concept also extends to linear differential equations
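As a concrete sketch (matrix chosen for illustration), for the rank-one matrix
A = \begin{pmatrix} 1 & 2 \\ 2 & 4 \end{pmatrix}, \qquad \ker(A) = \mathrm{span}\{ (2, -1) \}, \qquad \mathrm{im}(A) = \mathrm{span}\{ (1, 2) \},
and, in line with the rank–nullity theorem above, dim ker(A) + dim im(A) = 1 + 1 = 2 = dim R2.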
a_0 f + a_1 \frac{d f}{d x} + a_2 \frac{d^2 f}{d x^2} + \cdots + a_n \frac{d^n f}{d x^n} = 0, where the coefficients ai are functions in x, too.
In the corresponding map
f \mapsto D(f) = \sum_{i=0}^n a_i \frac{d^i f}{d x^i},
the derivatives of the function f appear linearly (as opposed to f′′(x)2, for example). Since differentiation is a linear procedure (i.e., (f + g)′ = f′ + g′ and (c·f)′ = c·f′ for a constant c), this assignment is linear, called a linear differential operator. In particular, the solutions to the differential equation D(f) = 0 form a vector space (over R or C).
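For instance, taking D(f) = f′′ + f (constant coefficients a0 = a2 = 1, a1 = 0), the solution space of D(f) = 0 is spanned by sine and cosine:
D(\sin) = -\sin + \sin = 0, \qquad D(\cos) = -\cos + \cos = 0,
so every solution has the form a sin x + b cos x, and the space is two-dimensional, matching the order of the equation.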

Direct product and direct sum:

The direct product of vector spaces and the direct sum of vector spaces are two ways of combining an indexed family of vector spaces into a new vector space.
The direct product \textstyle{\prod_{i \in I} V_i} of a family of vector spaces Vi consists of the set of all tuples (vi)i ∈ I, which specify for each index i in some index set I an element vi of Vi. Addition and scalar multiplication are performed componentwise. A variant of this construction is the direct sum \oplus_{i \in I} V_i (also called coproduct and denoted \textstyle{\coprod_{i \in I} V_i}), where only tuples with finitely many nonzero vectors are allowed. If the index set I is finite, the two constructions agree, but in general they are different.
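For example, with I = N and every Vi = R, the direct product is the space of all real sequences, while the direct sum is the strictly smaller subspace of sequences having only finitely many nonzero terms:
\bigoplus_{i \in \mathbf{N}} \mathbf{R} = \{\, (v_1, v_2, \ldots) : v_i = 0 \text{ for all but finitely many } i \,\} \subset \prod_{i \in \mathbf{N}} \mathbf{R}.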

Tensor product:

The tensor product V ⊗F W, or simply V ⊗ W, of two vector spaces V and W is one of the central notions of multilinear algebra, which deals with extending notions such as linear maps to several variables. A map g : V × W → X is called bilinear if g is linear in both variables v and w. That is to say, for fixed w the map v ↦ g(v, w) is linear in the sense above, and likewise for fixed v.
The tensor product is a particular vector space that is a universal recipient of bilinear maps g, as follows. It is defined as the vector space consisting of finite (formal) sums of symbols called tensors
v1 ⊗ w1 + v2 ⊗ w2 + ... + vn ⊗ wn,
subject to the rules
a · (v ⊗ w) = (a · v) ⊗ w = v ⊗ (a · w), where a is a scalar,
(v1 + v2) ⊗ w = v1 ⊗ w + v2 ⊗ w, and
v ⊗ (w1 + w2) = v ⊗ w1 + v ⊗ w2.
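These rules imply, for instance, that if {ei} is a basis of V and {fj} is a basis of W, the elements ei ⊗ fj form a basis of V ⊗ W, so for finite-dimensional spaces
\dim (V \otimes W) = \dim V \cdot \dim W.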
Commutative diagram depicting the universal property of the tensor product.
These rules ensure that the map f from V × W to V ⊗ W that maps a pair (v, w) to v ⊗ w is bilinear. The universality states that given any vector space X and any bilinear map g : V × W → X, there exists a unique map u, shown in the diagram with a dotted arrow, whose composition with f equals g: u(v ⊗ w) = g(v, w). This is called the universal property of the tensor product, an instance of the method, much used in advanced abstract algebra, of indirectly defining objects by specifying maps from or to this object.
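A standard illustration: the dot product g(v, w) = v · w on Rn × Rn is bilinear, so by the universal property it factors through the tensor product as a unique linear map
u : \mathbf{R}^n \otimes \mathbf{R}^n \to \mathbf{R}, \qquad u(\mathbf{v} \otimes \mathbf{w}) = v_1 w_1 + \cdots + v_n w_n.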