Euclidean subspace

From Wikipedia, the free encyclopedia

Three one-dimensional subspaces (red, green and blue lines) of R2.

In linear algebra, a Euclidean subspace (or subspace of Rn) is a set of vectors that is closed under addition and scalar multiplication. Geometrically, a subspace is a flat in n-dimensional Euclidean space that passes through the origin. Examples of subspaces include the solution set to a homogeneous system of linear equations, the subset of Euclidean space described by a system of homogeneous linear parametric equations, the span of a collection of vectors, and the null space, column space, and row space of a matrix.[1]

In abstract linear algebra, Euclidean subspaces are important examples of vector spaces. In this context, a Euclidean subspace is simply a linear subspace of a Euclidean space.


Note on vectors and Rn

In mathematics, Rn denotes the set of all vectors with n real components:

\textbf{R}^n = \left\{(x_1, x_2, \ldots, x_n) : x_1,x_2,\ldots,x_n \in \textbf{R} \right\}[2]

Here the word vector refers to any ordered list of numbers. Vectors can be written as either ordered tuples or as columns of numbers:

(x_1, x_2, \ldots, x_n) = \left[\!\! \begin{array}{c} x_1 \\ x_2 \\ \vdots \\ x_n \end{array} \!\!\right][3]

Geometrically, we regard vectors with n components as points in an n-dimensional space. That is, we identify the set Rn with n-dimensional Euclidean space. Any subset of Rn can be thought of as a geometric object (namely the object consisting of all the points in the subset). Using this mode of thought, a line in three-dimensional space is the same as the set of points on the line, and is therefore just a subset of R3.

Definition

A Euclidean subspace is a subset S of Rn with the following properties:

  1. The zero vector 0 is an element of S.
  2. If u and v are elements of S, then u + v is an element of S.
  3. If v is an element of S and c is a scalar, then cv is an element of S.

There are several common variations on these requirements, all of which are logically equivalent to the list above.[4] [5]

Because subspaces are closed under both addition and scalar multiplication, any linear combination of vectors from a subspace is again in the subspace. That is, if v1, v2, ..., vk are elements of a subspace S, and c1, c2, ..., ck are scalars, then

c1 v1 + c2 v2 + · · · + ck vk

is again an element of S.
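As a concrete sanity check, the closure property can be verified numerically. The plane x + y + z = 0 in R3 below is a hypothetical example subspace chosen for illustration; `lin_comb` is our own helper, not a library function:

```python
# S = {(x, y, z) : x + y + z = 0} is a subspace of R^3; u and v both lie in S.
u = (1, -1, 0)
v = (2, 3, -5)

def lin_comb(c1, w1, c2, w2):
    """Componentwise c1*w1 + c2*w2."""
    return tuple(c1 * a + c2 * b for a, b in zip(w1, w2))

# Every linear combination of u and v stays in S: its components still sum to 0.
for c1, c2 in [(0, 0), (1, 1), (-3, 7), (2, -2)]:
    assert sum(lin_comb(c1, u, c2, v)) == 0
```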

Geometric description

Three two-dimensional subspaces of R3. The center point is the zero vector.

Geometrically, a subspace of Rn is simply a flat through the origin, i.e. a copy of a lower dimensional (or equi-dimensional) Euclidean space sitting in n dimensions. For example, there are four different types of subspaces in R3:

  1. The singleton set { (0, 0, 0) } is a zero-dimensional subspace of R3.
  2. Any line through the origin is a one-dimensional subspace of R3.
  3. Any plane through the origin is a two-dimensional subspace of R3.
  4. The entire set R3 is a three-dimensional subspace of itself.

In n-dimensional space, there are subspaces of every dimension from 0 to n.

The geometric dimension of a subspace is the same as the number of vectors required for a basis (see below).

Systems of linear equations

The solution set to any homogeneous system of linear equations with n variables is a subspace of Rn:

\left\{ \left[\!\! \begin{array}{c} x_1 \\ x_2 \\ \vdots \\ x_n \end{array} \!\!\right] \in \textbf{R}^n : \begin{aligned} a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n &= 0 \\ a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n &= 0 \\ &\;\;\vdots \\ a_{m1} x_1 + a_{m2} x_2 + \cdots + a_{mn} x_n &= 0 \end{aligned} \right\}

For example, the set of all vectors (x, y, z) satisfying the equations

x + 3y + 2z = 0 \;\;\;\;\text{and}\;\;\;\; 2x - 4y + 5z = 0

is a one-dimensional subspace of R3. More generally, the solution set of m independent homogeneous linear equations in n variables is a subspace of Rn of dimension n − m: it is the null space of the m × n coefficient matrix of the system.
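This example can be checked numerically. The direction vector (−23, 1, 10) below was obtained by eliminating variables by hand from the two equations; `in_subspace` is an illustrative helper of our own:

```python
def in_subspace(v):
    """Membership test for the solution set of x + 3y + 2z = 0 and 2x - 4y + 5z = 0."""
    x, y, z = v
    return x + 3*y + 2*z == 0 and 2*x - 4*y + 5*z == 0

d = (-23, 1, 10)                           # a direction vector for the line of solutions
assert in_subspace((0, 0, 0))              # the subspace contains the origin
assert in_subspace(d)
assert in_subspace(tuple(4*c for c in d))  # closed under scalar multiplication
```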


Null space of a matrix

In linear algebra, a homogeneous system of linear equations can be written as a single matrix equation:

A\textbf{x} = \textbf{0}

The set of solutions to this equation is known as the null space of the matrix. For example, the subspace of R3 described above is the null space of the matrix

A = \left[ \begin{array}{rrr} 1 & 3 & 2 \\ 2 & -4 & 5 \end{array} \right]\text{.}

Every subspace of Rn can be described as the null space of some matrix (see algorithms, below).

Linear parametric equations

The subset of Rn described by a system of homogeneous linear parametric equations is a subspace:

\left\{ \left[\!\! \begin{array}{c} x_1 \\ x_2 \\ \vdots \\ x_n \end{array} \!\!\right] \in \textbf{R}^n : \begin{aligned} x_1 &= a_{11} t_1 + a_{12} t_2 + \cdots + a_{1m} t_m \\ x_2 &= a_{21} t_1 + a_{22} t_2 + \cdots + a_{2m} t_m \\ &\qquad\vdots \\ x_n &= a_{n1} t_1 + a_{n2} t_2 + \cdots + a_{nm} t_m \end{aligned} \text{ for some } t_1,\ldots,t_m\in\textbf{R} \right\}

For example, the set of all vectors (x, y, z) parameterized by the equations

x = 2t_1 + 3t_2,\;\;\;\;y = 5t_1 - 4t_2,\;\;\;\;\text{and}\;\;\;\;z = -t_1 + 2t_2

is a two-dimensional subspace of R3.
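A minimal check of this parametrization in Python (the function `point` is our own name for the map (t1, t2) ↦ (x, y, z)):

```python
def point(t1, t2):
    """The point (x, y, z) given by the parametric equations above."""
    return (2*t1 + 3*t2, 5*t1 - 4*t2, -t1 + 2*t2)

assert point(1, 0) == (2, 5, -1)    # the first spanning vector
assert point(0, 1) == (3, -4, 2)    # the second spanning vector
assert point(0, 0) == (0, 0, 0)     # the subspace contains the origin
# The sum of two points of the subspace is again in the subspace:
p, q = point(1, 2), point(-3, 5)
assert tuple(a + b for a, b in zip(p, q)) == point(1 - 3, 2 + 5)
```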

Span of vectors

In linear algebra, the system of parametric equations can be written as a single vector equation:

\left[ \begin{array}{c} x \\ y \\ z \end{array} \right] \;=\; t_1 \left[ \begin{array}{r} 2 \\ 5 \\ -1 \end{array} \right] + t_2 \left[ \begin{array}{r} 3 \\ -4 \\ 2 \end{array} \right]

The expression on the right is called a linear combination of the vectors (2, 5, −1) and (3, −4, 2). These two vectors are said to span the resulting subspace.

In general, a linear combination of vectors v1, v2, . . . , vk is any vector of the form

t_1 \textbf{v}_1 + \cdots + t_k \textbf{v}_k\text{.}

The set of all possible linear combinations is called the span:

\text{Span} \{ \textbf{v}_1, \ldots, \textbf{v}_k \} = \left\{ t_1 \textbf{v}_1 + \cdots + t_k \textbf{v}_k : t_1,\ldots,t_k\in\mathbf{R} \right\}

If the vectors v1,...,vk have n components, then their span is a subspace of Rn. Geometrically, the span is the flat through the origin in n-dimensional space determined by the points v1,...,vk.

Example
The xz-plane in R3 can be parameterized by the equations
x = t_1, \;\;\; y = 0, \;\;\; z = t_2
As a subspace, the xz-plane is spanned by the vectors (1, 0, 0) and (0, 0, 1). Every vector in the xz-plane can be written as a linear combination of these two:
(t_1, 0, t_2) = t_1(1,0,0) + t_2(0,0,1)\text{.}\,
Geometrically, this corresponds to the fact that every point on the xz-plane can be reached from the origin by first moving some distance in the direction of (1, 0, 0) and then moving some distance in the direction of (0, 0, 1).

Column space and row space

A system of linear parametric equations can also be written as a single matrix equation:

\textbf{x} = A\textbf{t}\;\;\;\;\text{where}\;\;\;\;A = \left[ \begin{array}{rr} 2 & 3 \\ 5 & -4 \\ -1 & 2 \end{array} \right]\text{.}

In this case, the subspace consists of all possible values of the vector x. In linear algebra, this subspace is known as the column space (or image) of the matrix A. It is precisely the subspace of Rn spanned by the column vectors of A.

The row space of a matrix is the subspace spanned by its row vectors. The row space is interesting because it is the orthogonal complement of the null space (see below).

Independence, basis, and dimension

The vectors u and v are a basis for this two-dimensional subspace of R3.

In general, a subspace of Rn determined by k parameters (or spanned by k vectors) has dimension k, but there are exceptions when the spanning vectors are linearly dependent. For example, the subspace of R3 spanned by the three vectors (1, 0, 0), (0, 0, 1), and (2, 0, 3) is just the xz-plane, with each point on the plane described by infinitely many different values of the parameters t1, t2, t3.

In general, vectors v1,...,vk are called linearly independent if

t_1 \textbf{v}_1 + \cdots + t_k \textbf{v}_k \;\ne\; u_1 \textbf{v}_1 + \cdots + u_k \textbf{v}_k

for (t1, t2, ..., tk) ≠ (u1, u2, ..., uk).[6] If v1, ..., vk are linearly independent, then the coordinates t1, ..., tk for a vector in the span are uniquely determined.

A basis for a subspace S is a set of linearly independent vectors whose span is S. The number of elements in a basis is always equal to the geometric dimension of the subspace. Any spanning set for a subspace can be changed into a basis by removing redundant vectors (see algorithms, below).

Example
Let S be the subspace of R4 defined by the equations
x_1 = 2 x_2\;\;\;\;\text{and}\;\;\;\;x_3 = 5x_4
Then the vectors (2, 1, 0, 0) and (0, 0, 5, 1) are a basis for S. In particular, every vector that satisfies the above equations can be written uniquely as a linear combination of the two basis vectors:
(2t_1, t_1, 5t_2, t_2) = t_1(2, 1, 0, 0) + t_2(0, 0, 5, 1)\,
The subspace S is two-dimensional. Geometrically, it is the plane in R4 passing through the points (0, 0, 0, 0), (2, 1, 0, 0), and (0, 0, 5, 1).
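A short numerical check of this example (`in_S` and `comb` are illustrative names of our own):

```python
def in_S(v):
    """Membership in S: x1 = 2*x2 and x3 = 5*x4."""
    x1, x2, x3, x4 = v
    return x1 == 2*x2 and x3 == 5*x4

b1, b2 = (2, 1, 0, 0), (0, 0, 5, 1)
assert in_S(b1) and in_S(b2)          # both basis vectors satisfy the equations

def comb(t1, t2):
    """The linear combination t1*b1 + t2*b2."""
    return tuple(t1*a + t2*b for a, b in zip(b1, b2))

assert comb(3, -2) == (6, 3, -10, -2)
assert in_S(comb(3, -2))              # linear combinations stay in S
```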

Algorithms

Most algorithms for dealing with subspaces involve row reduction. This is the process of applying elementary row operations to a matrix until it reaches either row echelon form or reduced row echelon form. Row reduction has the following important properties:

  1. The reduced matrix has the same null space as the original.
  2. Row reduction does not change the span of the row vectors, i.e. the reduced matrix has the same row space as the original.
  3. Row reduction does not affect the linear dependence of the column vectors.

Basis for a row space

Input An m × n matrix A.
Output A basis for the row space of A.
  1. Use elementary row operations to put A into row echelon form.
  2. The nonzero rows of the echelon form are a basis for the row space of A.

See the article on row space for an example.

If we instead put the matrix A into reduced row echelon form, then the resulting basis for the row space is uniquely determined. This provides an algorithm for checking whether two row spaces are equal and, by extension, whether two subspaces of Rn are equal.
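This uniqueness can be checked directly. The sketch below is a minimal Gauss–Jordan routine over exact fractions (`rref` is our own illustrative function, not a standard library API); two matrices with the same row space reduce to the same matrix:

```python
from fractions import Fraction

def rref(rows):
    """Gauss-Jordan elimination over exact fractions.
    Returns (reduced matrix, list of pivot columns)."""
    M = [[Fraction(x) for x in row] for row in rows]
    pivots, r = [], 0
    for c in range(len(M[0])):
        pr = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if pr is None:
            continue                          # no pivot in this column
        M[r], M[pr] = M[pr], M[r]             # move the pivot row up
        M[r] = [x / M[r][c] for x in M[r]]    # scale the pivot to 1
        for i in range(len(M)):
            if i != r and M[i][c] != 0:       # clear the rest of the column
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    return M, pivots

# A and B have the same row space: the first row of B is the sum of the rows of A.
A = [[1, 3, 2], [2, -4, 5]]
B = [[3, -1, 7], [1, 3, 2]]
assert rref(A)[0] == rref(B)[0]
# The nonzero rows of the reduced form are the canonical basis for the row space.
```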

Subspace membership

Input A basis {b1, b2, ..., bk} for a subspace S of Rn, and a vector v with n components.
Output Determines whether v is an element of S
  1. Create a (k + 1) × n matrix A whose rows are the vectors b1,...,bk and v.
  2. Use elementary row operations to put A into row echelon form.
  3. If the echelon form has a row of zeroes, then the vectors {b1, ..., bk, v} are linearly dependent, and therefore v ∈ S. Otherwise the vectors are linearly independent, and v ∉ S.
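The steps above can be sketched in Python using a small exact-arithmetic rank routine (`rank` and `in_subspace` are our own illustrative helpers, not library calls):

```python
from fractions import Fraction

def rank(rows):
    """Number of pivots after forward Gaussian elimination."""
    M = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(M[0])):
        p = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if p is None:
            continue
        M[r], M[p] = M[p], M[r]
        for i in range(r + 1, len(M)):
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def in_subspace(basis, v):
    """v lies in span(basis) iff appending v leaves the rank unchanged."""
    return rank(basis + [list(v)]) == len(basis)

xz_plane = [[1, 0, 0], [0, 0, 1]]       # a basis for the xz-plane in R^3
assert in_subspace(xz_plane, (4, 0, -7))
assert not in_subspace(xz_plane, (1, 2, 3))
```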

Basis for a column space

Input An m × n matrix A
Output A basis for the column space of A
  1. Use elementary row operations to put A into row echelon form.
  2. Determine which columns of the echelon form have pivots. The corresponding columns of the original matrix are a basis for the column space.

See the article on column space for an example.

This produces a basis for the column space that is a subset of the original column vectors. It works because the columns with pivots are a basis for the column space of the echelon form, and row reduction does not change the linear dependence relationships between the columns.
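A sketch of this algorithm in Python, again over exact fractions (`rref` is our own Gauss–Jordan helper, not a library function); the pivot columns of the *original* matrix give the basis:

```python
from fractions import Fraction

def rref(rows):
    """Gauss-Jordan elimination; returns (reduced matrix, pivot columns)."""
    M = [[Fraction(x) for x in row] for row in rows]
    pivots, r = [], 0
    for c in range(len(M[0])):
        pr = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if pr is None:
            continue
        M[r], M[pr] = M[pr], M[r]
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    return M, pivots

A = [[2, 3, 5],
     [5, -4, 1],
     [-1, 2, 1]]                # third column = first column + second column
_, pivots = rref(A)
assert pivots == [0, 1]         # pivots appear in the first two columns
basis = [[row[c] for row in A] for c in pivots]   # take those columns of the ORIGINAL A
assert basis == [[2, 5, -1], [3, -4, 2]]
```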

Coordinates for a vector

Input A basis {b1, b2, ..., bk} for a subspace S of Rn, and a vector v ∈ S
Output Numbers t1, t2, ..., tk such that v = t1b1 + ··· + tkbk
  1. Create an augmented matrix A whose columns are b1,...,bk , with the last column being v.
  2. Use elementary row operations to put A into reduced row echelon form.
  3. Express the final column of the reduced echelon form as a linear combination of the first k columns. The coefficients used are the desired numbers t1, t2, ..., tk. (These should be precisely the first k entries in the final column of the reduced echelon form.)

If the final column of the reduced row echelon form contains a pivot, then the input vector v does not lie in S.
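This procedure can be sketched for the basis from the earlier R4 example (`rref` is our own exact-arithmetic Gauss–Jordan routine, not a library API):

```python
from fractions import Fraction

def rref(rows):
    """Gauss-Jordan elimination; returns (reduced matrix, pivot columns)."""
    M = [[Fraction(x) for x in row] for row in rows]
    pivots, r = [], 0
    for c in range(len(M[0])):
        pr = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if pr is None:
            continue
        M[r], M[pr] = M[pr], M[r]
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    return M, pivots

b1, b2 = (2, 1, 0, 0), (0, 0, 5, 1)            # basis for S = {x1 = 2*x2, x3 = 5*x4}
v = (6, 3, -10, -2)                            # the vector whose coordinates we want
aug = [list(col) for col in zip(b1, b2, v)]    # columns b1, b2, v: a 4x3 augmented matrix
R, pivots = rref(aug)
assert 2 not in pivots                         # no pivot in the last column, so v is in S
t1, t2 = R[0][-1], R[1][-1]                    # first k entries of the final column
assert (t1, t2) == (3, -2)                     # indeed v = 3*b1 - 2*b2
```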

Basis for a null space

Input An m × n matrix A.
Output A basis for the null space of A
  1. Use elementary row operations to put A in reduced row echelon form.
  2. Using the reduced row echelon form, determine which of the variables x1, x2, ..., xn are free. Write equations for the dependent variables in terms of the free variables.
  3. For each free variable xi, choose a vector in the null space for which xi = 1 and the remaining free variables are zero. The resulting collection of vectors is a basis for the null space of A.

See the article on null space for an example.
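The three steps above can be sketched in Python (`rref` and `null_space_basis` are illustrative helpers of our own, built on exact fractions):

```python
from fractions import Fraction

def rref(rows):
    """Gauss-Jordan elimination; returns (reduced matrix, pivot columns)."""
    M = [[Fraction(x) for x in row] for row in rows]
    pivots, r = [], 0
    for c in range(len(M[0])):
        pr = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if pr is None:
            continue
        M[r], M[pr] = M[pr], M[r]
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    return M, pivots

def null_space_basis(A):
    """One basis vector per free variable: set that variable to 1, the rest to 0."""
    n = len(A[0])
    R, pivots = rref(A)
    basis = []
    for f in (c for c in range(n) if c not in pivots):
        v = [Fraction(0)] * n
        v[f] = Fraction(1)
        for i, p in enumerate(pivots):
            v[p] = -R[i][f]        # dependent variables read off the reduced form
        basis.append(v)
    return basis

A = [[1, 3, 2], [2, -4, 5]]        # the example system from earlier in the article
(b,) = null_space_basis(A)         # one free variable, so a one-dimensional null space
assert all(sum(a * x for a, x in zip(row, b)) == 0 for row in A)
```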

Equations for a subspace

Input A basis {b1, b2, ..., bk} for a subspace S of Rn
Output An (n − k) × n matrix whose null space is S.
  1. Create a matrix A whose rows are b1, b2, ..., bk.
  2. Use elementary row operations to put A into reduced row echelon form.
  3. Let c1, c2, ..., cn be the columns of the reduced row echelon form. For each column without a pivot, write an equation expressing the column as a linear combination of the columns with pivots.
  4. This results in a homogeneous system of n − k linear equations involving the variables x1, ..., xn. The (n − k) × n matrix corresponding to this system is the desired matrix with null space S.
Example
If the reduced row echelon form of A is
\left[ \begin{array}{rrrrrr} 1 & 0 & -3 & 0 & 2 & 0 \\ 0 & 1 & 5 & 0 & -1 & 4 \\ 0 & 0 & 0 & 1 & 7 & -9 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{array} \right]
then the column vectors c1, ..., c6 satisfy the equations
 \begin{aligned} \textbf{c}_3 &= -3\textbf{c}_1 + 5\textbf{c}_2 \\ \textbf{c}_5 &= 2\textbf{c}_1 - \textbf{c}_2 + 7\textbf{c}_4 \\ \textbf{c}_6 &= 4\textbf{c}_2 - 9\textbf{c}_4 \end{aligned}\text{.}
It follows that the row vectors of A satisfy the equations
 \begin{aligned} x_3 &= -3x_1 + 5x_2 \\ x_5 &= 2x_1 - x_2 + 7x_4 \\ x_6 &= 4x_2 - 9x_4 \end{aligned}\text{.}
In particular, the row vectors of A are a basis for the null space of the corresponding matrix.
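A quick check that every row of the reduced matrix shown above satisfies the relations x3 = −3x1 + 5x2, x5 = 2x1 − x2 + 7x4, and x6 = 4x2 − 9x4:

```python
# Rows of the reduced row echelon form from the example above.
rows = [(1, 0, -3, 0, 2, 0),
        (0, 1, 5, 0, -1, 4),
        (0, 0, 0, 1, 7, -9),
        (0, 0, 0, 0, 0, 0)]
for x1, x2, x3, x4, x5, x6 in rows:
    assert x3 == -3*x1 + 5*x2
    assert x5 == 2*x1 - x2 + 7*x4
    assert x6 == 4*x2 - 9*x4
```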

Operations on subspaces

In R3, the intersection of two-dimensional subspaces is one-dimensional.

Intersection

If U and V are subspaces of Rn, their intersection is also a subspace:

U \cap V = \left\{ \textbf{x}\in\textbf{R}^n : \textbf{x}\in U\text{ and }\textbf{x}\in V \right\}

The dimension of the intersection satisfies the inequality

\dim(U) + \dim(V) - n \leq \dim(U \cap V) \leq \min(\dim U,\,\dim V)\text{.}

The lower bound is attained in the generic case[7], while the upper bound occurs only when one subspace is contained in the other. For example, the intersection of two two-dimensional subspaces in R3 has dimension one or two (with two possible only if they are the same plane). The intersection of three-dimensional subspaces in R5 has dimension one, two, or three, with most pairs intersecting along a line.

The codimension of a subspace U in Rn is the difference n − dim(U). Using codimension, the inequality above can be written

\max(\text{codim } U,\,\text{codim } V) \leq \text{codim}(U \cap V) \leq \text{codim}(U) + \text{codim}(V) \text{.}

Sum

If U and V are subspaces of Rn, their sum is the subspace

U + V = \left\{ \textbf{u} + \textbf{v} : \textbf{u}\in U\text{ and }\textbf{v}\in V \right\}\text{.}

For example, the sum of two lines is the plane that contains them both. The dimension of the sum satisfies the inequality

\max(\dim U,\dim V) \leq \dim(U + V) \leq \dim(U) + \dim(V)\text{.}

Here the lower bound is attained only when one subspace is contained in the other, while the upper bound is the generic case.[8] The dimensions of the intersection and the sum are related:

\dim(U+V) = \dim(U) + \dim(V) - \dim(U \cap V)
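This identity can be spot-checked numerically with a small exact-rank helper (`rank` and `dim` are our own illustrative names; the dimension of a span equals the rank of the matrix of spanning vectors):

```python
from fractions import Fraction

def rank(rows):
    """Number of pivots after forward Gaussian elimination over exact fractions."""
    M = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(M[0])):
        p = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if p is None:
            continue
        M[r], M[p] = M[p], M[r]
        for i in range(r + 1, len(M)):
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def dim(spanning):
    """Dimension of the span of a list of vectors."""
    return rank([list(v) for v in spanning])

U = [(1, 0, 0), (0, 1, 0)]      # the xy-plane in R^3
V = [(0, 1, 0), (0, 0, 1)]      # the yz-plane in R^3
# Pooling the spanning vectors spans U + V.
assert dim(U) == 2 and dim(V) == 2 and dim(U + V) == 3
assert dim(U) + dim(V) - dim(U + V) == 1   # dim(U ∩ V): the planes meet in the y-axis
```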

Orthogonal complement

The orthogonal complement of a subspace U is the subspace

U^\bot = \left\{\textbf{x}\in\textbf{R}^n : \textbf{x} \cdot \textbf{u}=0\text{ for every }\textbf{u}\in U \right\}

Here x · u denotes the dot product of x and u. For example, if U is a plane through the origin in R3, then U⊥ is the line perpendicular to this plane at the origin.

If b1, b2, ..., bk is a basis for U, then a vector x is in the orthogonal complement of U if and only if it is orthogonal to each bi. It follows that the null space of a matrix is the orthogonal complement of the row space.
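This relationship can be verified directly for the matrix used earlier in the article, whose null space is spanned by (−23, 1, 10):

```python
A = [[1, 3, 2], [2, -4, 5]]
n = (-23, 1, 10)        # spans the null space of A: it solves both equations
for row in A:
    # n is orthogonal to every row, hence to the whole row space.
    assert sum(a * x for a, x in zip(row, n)) == 0
```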

The dimension of a subspace and its orthogonal complement are related by the equation

\dim(U) + \dim(U^\bot) = n

That is, the dimension of U⊥ is equal to the codimension of U. The intersection of U and U⊥ is the origin, and the sum of U and U⊥ is all of Rn.

Orthogonal complements satisfy a version of De Morgan's laws:

(U + V)^\bot = U^\bot \cap V^\bot\;\;\;\;\text{and}\;\;\;\;(U \cap V)^\bot = U^\bot + V^\bot\text{.}

In fact, intersection, sum, and orthogonal complement on the subspaces of Rn behave much like AND, OR, and NOT in a Boolean algebra, although the distributive law fails in general, so the subspaces form an orthocomplemented lattice rather than a true Boolean algebra.


Notes

  1. ^ Linear algebra, as discussed in this article, is a very well-established mathematical discipline for which there are many sources. Almost all of the material in this article can be found in Lay 2005, Meyer 2001, and Strang 2005.
  2. ^ This equation uses set-builder notation. The same notation will be used throughout this article.
  3. ^ To add to the confusion, there is also an object called a row vector, usually written [x1  x2  ···   xn]. Some books identify ordered tuples with row vectors instead of column vectors.
  4. ^ The requirement that S contains the zero vector is equivalent to requiring that S is nonempty. (Once S contains any single vector v it must contain 0v by property 3, and therefore must contain the zero vector.)
  5. ^ The second and third requirements can be combined into the following statement: If u and v are elements of S and b and c are scalars, then bu + cv is an element of S.
  6. ^ This definition is often stated differently: vectors v1,...,vk are linearly independent if t1v1 + ··· + tkvk ≠ 0 for (t1, t2, ..., tk) ≠ (0, 0, ..., 0). The two definitions are equivalent.
  7. ^ That is, the intersection of generic subspaces U, V ⊆ Rn has dimension dim(U) + dim(V) − n, or dimension zero if this number is negative.
  8. ^ That is, the sum of two generic subspaces U, V ⊆ Rn has dimension dim(U) + dim(V), or dimension n if this number exceeds n.

References

Textbooks

  • Axler, Sheldon Jay (1997), Linear Algebra Done Right (2nd ed.), Springer-Verlag, ISBN 0387982590  
  • Lay, David C. (August 22, 2005), Linear Algebra and Its Applications (3rd ed.), Addison Wesley, ISBN 978-0321287137  
  • Meyer, Carl D. (February 15, 2001), Matrix Analysis and Applied Linear Algebra, Society for Industrial and Applied Mathematics (SIAM), ISBN 978-0898714548, http://www.matrixanalysis.com/DownloadChapters.html  
  • Poole, David (2006), Linear Algebra: A Modern Introduction (2nd ed.), Brooks/Cole, ISBN 0-534-99845-3  
  • Anton, Howard (2005), Elementary Linear Algebra (Applications Version) (9th ed.), Wiley International  
  • Leon, Steven J. (2006), Linear Algebra With Applications (7th ed.), Pearson Prentice Hall  
