Coordinate vector

In linear algebra, a coordinate vector is an explicit representation of a vector in an abstract vector space as an ordered list of numbers or, equivalently, as an element of the coordinate space F^n. Coordinate vectors allow calculations with abstract objects to be transformed into calculations with blocks of numbers (matrices and column vectors).



Let V be a vector space of dimension n over a field F and let

 B = \{ b_1, b_2, \ldots, b_n \}

be an ordered basis for V. Then for every  v \in V there is a unique linear combination of the basis vectors that equals v:

 v = \alpha _1 b_1 + \alpha _2 b_2 + \cdots + \alpha _n b_n

By the defining properties of a basis, the coefficients \alpha_i are uniquely determined by v and B. We now define the coordinate vector of v relative to B to be the following sequence of coordinates:

 v_B = (\alpha_1, \alpha_2, \ldots, \alpha_n)

This is also called the representation of v with respect to B, or the B representation of v. The coefficients \alpha_i are called the coordinates of v.

Typically, but not necessarily, the coordinates are represented as elements of a column vector, so that they can be easily manipulated using matrix multiplication:

 [ v ]_B = \begin{bmatrix} \alpha _1 \\ \vdots \\ \alpha _n \end{bmatrix}.

For instance, vector or basis transformations are obtained with a pre-multiplication of the column vector by a transformation matrix (see below). Some authors prefer using row vectors:

 [ v ]_B = \begin{bmatrix} \alpha _1 & \cdots & \alpha _n \end{bmatrix}.

In this case, transformations are obtained with a post-multiplication by a transformation matrix.
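The two conventions can be sketched numerically. In the example below, the transformation matrix T (a 90-degree rotation) and the coordinate values are illustrative assumptions, not taken from the text; the point is only how pre- and post-multiplication relate.

```python
import numpy as np

# Illustrative transformation matrix (rotation by 90 degrees) -- an
# assumption for demonstration, not taken from the text.
T = np.array([[0.0, -1.0],
              [1.0,  0.0]])

# Column-vector convention: coordinates as a column, pre-multiplied by T.
alpha_col = np.array([[1.0],
                      [2.0]])
new_col = T @ alpha_col

# Row-vector convention: coordinates as a row, post-multiplied; the
# transformation matrix is then the transpose of T.
alpha_row = alpha_col.T
new_row = alpha_row @ T.T

# Both conventions encode the same transformed coordinates.
assert np.allclose(new_row, new_col.T)
```

Whichever convention is used, the information content is identical; only the shape of the arrays and the side of the multiplication change.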

The standard representation

We can mechanize the above transformation by defining a function φB, called the standard representation of V with respect to B, that takes every vector to its coordinate representation: φB(v) = [v]B. Then φB is a linear transformation from V to Fn. In fact, it is an isomorphism, and its inverse \phi_B^{-1}:\mathbf{F}^n\to V is simply

\phi_B^{-1}(\alpha_1,\ldots,\alpha_n)=\alpha_1 b_1+\cdots+\alpha_n b_n.

Alternatively, we could have defined \phi_B^{-1} to be the above function from the beginning, realized that \phi_B^{-1} is an isomorphism, and defined φB to be its inverse.
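A minimal sketch of \phi_B and its inverse, assuming NumPy and a hypothetical basis of R^2 stored as the columns of a matrix (both choices are illustrative, not from the original text):

```python
import numpy as np

# Hypothetical basis of R^2: b1 = (1, 0), b2 = (1, 2), stored as columns.
B = np.array([[1.0, 1.0],
              [0.0, 2.0]])

def phi_B(v):
    """Standard representation: maps v to [v]_B by solving B @ alpha = v."""
    return np.linalg.solve(B, v)

def phi_B_inv(alpha):
    """Inverse map: (alpha_1, ..., alpha_n) -> alpha_1 b_1 + ... + alpha_n b_n."""
    return B @ alpha

# phi_B is an isomorphism: composing with its inverse recovers v.
v = np.array([3.0, 4.0])
assert np.allclose(phi_B_inv(phi_B(v)), v)
```

Here the "abstract" space is already R^2, so \phi_B reduces to solving a linear system; for a genuinely abstract space the same idea applies once vectors are expressed through the basis.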



Example 1

Let P3 be the space of all algebraic polynomials of degree less than 4 (i.e. the highest exponent of x can be 3). This space is spanned by the following basis:

B_P = \{ 1, x, x^2, x^3 \}

Identifying each basis polynomial with a standard basis column vector,


 1 := \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix} \quad ; \quad x := \begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \end{bmatrix} \quad ; \quad x^2 := \begin{bmatrix} 0 \\ 0 \\ 1 \\ 0 \end{bmatrix} \quad ; \quad x^3 := \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix} \quad

the coordinate vector corresponding to the polynomial

 p \left( x \right) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 is  \begin{bmatrix} a_0 \\ a_1 \\ a_2 \\ a_3 \end{bmatrix} .

In this representation, the differentiation operator d/dx, which we shall denote by D, is represented by the following matrix:

 Dp(x) = p'(x) \quad ; \quad [D] = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \\ 0 & 0 & 0 & 0 \end{bmatrix}

Using this method it is easy to explore properties of the operator, such as invertibility, whether it is Hermitian, anti-Hermitian, or neither, its spectrum and eigenvalues, and more.
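Example 1 can be sketched in NumPy. The matrix [D] is the one given above; the sample polynomial coefficients are an illustrative assumption.

```python
import numpy as np

# Matrix of d/dx on P3 in the basis {1, x, x^2, x^3}, as in Example 1.
D = np.array([[0, 1, 0, 0],
              [0, 0, 2, 0],
              [0, 0, 0, 3],
              [0, 0, 0, 0]], dtype=float)

# Illustrative polynomial: p(x) = 1 + 2x + 3x^2 + 4x^3.
p = np.array([1.0, 2.0, 3.0, 4.0])

# Applying D to the coordinate vector differentiates the polynomial:
# the result encodes p'(x) = 2 + 6x + 12x^2.
dp = D @ p

# Exploring operator properties from the matrix: D is strictly upper
# triangular, hence nilpotent (D^4 = 0) and not invertible, and all of
# its eigenvalues are 0.
assert np.allclose(np.linalg.matrix_power(D, 4), 0)
```

The same recipe works for any linear operator on P3: write out its action on each basis polynomial, collect the coordinate vectors as columns, and study the resulting matrix.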

Example 2

The Pauli matrices represent the spin operator when the spin eigenstates are transformed into vector coordinates.

Basis transformation matrix

Let B and C be two different bases of a vector space V, and denote by  [M]_{C}^{B} the matrix whose columns consist of the C representations of the basis vectors b1, b2, ..., bn:

 [M]_{C}^{B} = \begin{bmatrix} [b_1]_C & \cdots & [b_n]_C \end{bmatrix}

This matrix is referred to as the basis transformation matrix from B to C, and can be used for transforming any vector v from a B representation to a C representation, according to the following theorem:

 [v]_C = [M]_{C}^{B} [v]_B.

If E is the standard basis, the transformation from B to E can be represented with the following simplified notation:

 v = [M]^B [v]_B, \,

where

 v = [v]_E \quad and \quad [M]^B = [M]_{E}^{B}.


The matrix M is invertible, and M^{-1} is the basis transformation matrix from C to B. In other words,

 [M]_{C}^{B} [M]_{B}^{C} = [M]_{C}^{C} = \mathrm{Id}
 [M]_{B}^{C} [M]_{C}^{B} = [M]_{B}^{B} = \mathrm{Id}


  1. The basis transformation matrix can be regarded as the matrix of an automorphism of V.
  2. To easily remember the theorem
 [v]_C = [M]_{C}^{B} [v]_B,
notice that M's superscript and v's subscript "cancel" each other, and M's subscript becomes v's new subscript. This "canceling" of indices is not a genuine cancellation but rather a convenient and intuitively appealing manipulation of symbols, permitted by an appropriately chosen notation.
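The theorem and the inverse relation can be checked numerically. In the sketch below, the bases B and C of R^2 are arbitrary illustrative choices (stored as matrix columns in the standard basis), not taken from the text.

```python
import numpy as np

# Two illustrative bases of R^2, columns expressed in the standard basis.
B = np.array([[1.0, 1.0],
              [0.0, 2.0]])
C = np.array([[2.0, 0.0],
              [1.0, 1.0]])

# Column j of M is [b_j]_C, found by solving C @ x = b_j for each column,
# so M = C^{-1} B is the basis transformation matrix from B to C.
M = np.linalg.solve(C, B)

v_B = np.array([1.0, 2.0])      # coordinates of some v relative to B
v = B @ v_B                     # the same v in the standard basis
v_C = M @ v_B                   # theorem: [v]_C = M [v]_B
assert np.allclose(C @ v_C, v)  # consistency: C-coordinates rebuild v

# The inverse matrix transforms C coordinates back to B coordinates.
M_inv = np.linalg.solve(B, C)
assert np.allclose(M @ M_inv, np.eye(2))
```

Composing the two transformation matrices in either order yields the identity, matching the two displayed identities above.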

