Linear Algebra: Wikis



Encyclopedia


From Wikipedia, the free encyclopedia

A line passing through the origin (blue, thick) in R^3 is a linear subspace, a common object of study in linear algebra.

Linear algebra is a branch of mathematics concerned with the study of vectors, with families of vectors called vector spaces or linear spaces, and with functions that input one vector and output another, according to certain rules. These functions are called linear maps or linear transformations and are often represented by matrices. Linear algebra is central to modern mathematics and its applications. An elementary application of linear algebra is the solution of systems of linear equations in several unknowns. More advanced applications are ubiquitous, in areas as diverse as abstract algebra and functional analysis. Linear algebra has a concrete representation in analytic geometry and is generalized in operator theory. It has extensive applications in the natural sciences and the social sciences. Nonlinear mathematical models can often be approximated by linear ones.


History

Many of the basic tools of linear algebra, particularly those concerned with the solution of systems of linear equations, date to antiquity; see, for example, the history of Gaussian elimination. But the abstract study of vectors and vector spaces did not begin until the 1600s. The origin of many of these ideas is discussed in the article on determinants. The method of least squares, first used by Gauss in the 1790s, is an early and significant application of the ideas of linear algebra.

The subject began to take its modern form in the mid-19th century, which saw many ideas and methods of previous centuries generalized as abstract algebra. Matrices and tensors were introduced and well understood by the turn of the 20th century. The use of these objects in special relativity, statistics, and quantum mechanics did much to spread the subject of linear algebra beyond pure mathematics.

Main structures

The main structures of linear algebra are vector spaces and linear maps between them. A vector space is a set whose elements can be added together and multiplied by scalars, or numbers. In many physical applications, the scalars are real numbers, R. More generally, the scalars may form any field F, so one can consider vector spaces over the field Q of rational numbers, the field C of complex numbers, or a finite field F_q. These two operations must behave similarly to the usual addition and multiplication of numbers: addition is commutative and associative, multiplication distributes over addition, and so on. More precisely, the two operations must satisfy a list of axioms chosen to emulate the properties of addition and scalar multiplication of Euclidean vectors in the coordinate n-space R^n. One of the axioms stipulates the existence of a zero vector, which behaves analogously to the number zero with respect to addition. Elements of a general vector space V may be objects of any nature, for example, functions or polynomials, but when viewed as elements of V, they are frequently called vectors.

Given two vector spaces V and W over a field F, a linear transformation is a map

 T:V\to W

which is compatible with addition and scalar multiplication:

 T(u+v)=T(u)+T(v), \quad T(rv)=rT(v)

for any vectors u, v ∈ V and any scalar r ∈ F.

A fundamental role in linear algebra is played by the notions of linear combination, span, and linear independence of vectors, and by the notions of basis and dimension of a vector space. Given a vector space V over a field F, an expression of the form

 r_1 v_1 + r_2 v_2 + \ldots + r_k v_k,

where v1, v2, …, vk are vectors and r1, r2, …, rk are scalars, is called a linear combination of the vectors v1, v2, …, vk with coefficients r1, r2, …, rk. The set of all linear combinations of vectors v1, v2, …, vk is called their span. A linear combination of any system of vectors with all coefficients zero is the zero vector of V. If this is the only way to express the zero vector as a linear combination of v1, v2, …, vk, then these vectors are linearly independent. A linearly independent set of vectors that spans a vector space V is a basis of V. If a vector space admits a finite basis, then any two bases have the same number of elements (called the dimension of V) and V is a finite-dimensional vector space. This theory can be extended to infinite-dimensional spaces.
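
Linear independence can be tested numerically: the vectors v1, v2, …, vk are linearly independent exactly when the matrix having them as columns has rank k. The following is a minimal sketch, assuming NumPy is available; the three vectors are made up for illustration, with the third chosen as the sum of the first two.

    import numpy as np

    # Put the candidate vectors into the columns of a matrix.
    V = np.column_stack([[1.0, 0.0, 2.0],
                         [0.0, 1.0, 1.0],
                         [1.0, 1.0, 3.0]])   # third column = first + second

    k = V.shape[1]
    rank = np.linalg.matrix_rank(V)
    print(rank == k)   # False: the columns are linearly dependent
    print(rank)        # 2: the span of these vectors is 2-dimensional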

There is an important distinction between the coordinate n-space R^n and a general finite-dimensional vector space V. While R^n has a standard basis {e1, e2, …, en}, a vector space V typically does not come equipped with a preferred basis, and many different bases exist (although they all consist of the same number of elements, equal to the dimension of V). Having a particular basis {v1, v2, …, vn} of V allows one to construct a coordinate system in V: the vector with coordinates (r1, r2, …, rn) is the linear combination

 r_1 v_1 + r_2 v_2 + \ldots + r_n v_n.

The condition that v1, v2, …, vn span V guarantees that each vector v can be assigned coordinates, whereas the linear independence of v1, v2, …, vn further ensures that these coordinates are determined in a unique way (i.e. there is only one linear combination of the basis vectors that is equal to v). In this way, once a basis of a vector space V over F has been chosen, V may be identified with the coordinate n-space F^n. Under this identification, addition and scalar multiplication of vectors in V correspond to addition and scalar multiplication of their coordinate vectors in F^n. Furthermore, if V and W are an n-dimensional and m-dimensional vector space over F, and a basis of V and a basis of W have been fixed, then any linear transformation T : V → W may be encoded by an m × n matrix A with entries in the field F, called the matrix of T with respect to these bases. Therefore, by and large, the study of linear transformations, which were defined axiomatically, may be replaced by the study of matrices, which are concrete objects. This is a major technique in linear algebra.
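
To make this identification concrete, here is a small sketch (assuming NumPy is available; the basis and vector are made up for illustration) that computes the coordinates of a vector of R^2 with respect to a non-standard basis by solving a linear system, and then reconstructs the vector as the corresponding linear combination.

    import numpy as np

    # Columns of B are the chosen basis vectors v1 = (1, 1) and v2 = (1, -1).
    B = np.array([[1.0,  1.0],
                  [1.0, -1.0]])
    v = np.array([3.0, 1.0])

    # The coordinates (r1, r2) of v with respect to this basis satisfy B @ r = v.
    r = np.linalg.solve(B, v)
    print(r)        # [2. 1.]  so v = 2*v1 + 1*v2
    print(B @ r)    # [3. 1.]  reconstructing v from its coordinates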

Vector spaces over the complex numbers

Remarkably, the 2 × 2 complex matrices were studied before 2 × 2 real matrices. Early topics of interest included biquaternions and Pauli algebra. Investigation of 2 × 2 real matrices revealed the less common split-complex numbers and dual numbers, which are at variance with the Euclidean nature of the ordinary complex number plane.

Some useful theorems

  • Every vector space has a basis.[1]
  • Any two bases of the same vector space have the same cardinality; equivalently, the dimension of a vector space is well-defined.[2]
  • A matrix is invertible if and only if its determinant is nonzero.

For more information regarding the invertibility of a matrix, consult the invertible matrix article.

Generalizations and related topics

Since linear algebra is a successful theory, its methods have been developed in other parts of mathematics. In module theory one replaces the field of scalars by a ring. In multilinear algebra one considers multivariable linear transformations, that is, mappings which are linear in each of a number of different variables. This line of inquiry naturally leads to the idea of the tensor product. Functional analysis mixes the methods of linear algebra with those of mathematical analysis.

Notes

  1. ^ The existence of a basis is straightforward for finitely generated vector spaces, but in full generality it is logically equivalent to the axiom of choice.
  2. ^ Dimension theorem for vector spaces
  3. ^ Pragma's Playground: Matrices for Dummies, http://www.pragmaware.net/articles/matrices/index.php

References

Textbooks

  • Axler, Sheldon Jay (1997), Linear Algebra Done Right (2nd ed.), Springer-Verlag, ISBN 0387982590 
  • Lay, David C. (August 22, 2005), Linear Algebra and Its Applications (3rd ed.), Addison Wesley, ISBN 978-0321287137 
  • Meyer, Carl D. (February 15, 2001), Matrix Analysis and Applied Linear Algebra, Society for Industrial and Applied Mathematics (SIAM), ISBN 978-0898714548, http://www.matrixanalysis.com/DownloadChapters.html 
  • Poole, David (2006), Linear Algebra: A Modern Introduction (2nd ed.), Brooks/Cole, ISBN 0-534-99845-3 
  • Anton, Howard (2005), Elementary Linear Algebra (Applications Version) (9th ed.), Wiley International 
  • Leon, Steven J. (2006), Linear Algebra With Applications (7th ed.), Pearson Prentice Hall 

History

  • Fearnley-Sander, Desmond, "Hermann Grassmann and the Creation of Linear Algebra" (via JSTOR), American Mathematical Monthly 86 (1979), pp. 809–817.
  • Grassmann, Hermann, Die lineale Ausdehnungslehre ein neuer Zweig der Mathematik: dargestellt und durch Anwendungen auf die übrigen Zweige der Mathematik, wie auch auf die Statik, Mechanik, die Lehre vom Magnetismus und die Krystallonomie erläutert, O. Wigand, Leipzig, 1844.

Further reading

Introductory textbooks
  • Axler, Sheldon (February 26, 2004), Linear Algebra Done Right (2nd ed.), Springer, ISBN 978-0387982588 
  • Bretscher, Otto (June 28, 2004), Linear Algebra with Applications (3rd ed.), Prentice Hall, ISBN 978-0131453340 
  • Farin, Gerald; Hansford, Dianne (December 15, 2004), Practical Linear Algebra: A Geometry Toolbox, AK Peters, ISBN 978-1568812342 
  • Friedberg, Stephen H.; Insel, Arnold J.; Spence, Lawrence E. (November 11, 2002), Linear Algebra (4th ed.), Prentice Hall, ISBN 978-0130084514 
  • Kolman, Bernard; Hill, David R. (May 3, 2007), Elementary Linear Algebra with Applications (9th ed.), Prentice Hall, ISBN 978-0132296540 
  • Strang, Gilbert (July 19, 2005), Linear Algebra and Its Applications (4th ed.), Brooks Cole, ISBN 978-0030105678 
Advanced textbooks
  • Bhatia, Rajendra (November 15, 1996), Matrix Analysis, Graduate Texts in Mathematics, Springer, ISBN 978-0387948461 
  • Demmel, James W. (August 1, 1997), Applied Numerical Linear Algebra, SIAM, ISBN 978-0898713893 
  • Golan, Johnathan S. (January 2007), The Linear Algebra a Beginning Graduate Student Ought to Know (2nd ed.), Springer, ISBN 978-1402054945 
  • Golub, Gene H.; Van Loan, Charles F. (October 15, 1996), Matrix Computations, Johns Hopkins Studies in Mathematical Sciences (3rd ed.), The Johns Hopkins University Press, ISBN 978-0801854149 
  • Greub, Werner H. (October 16, 1981), Linear Algebra, Graduate Texts in Mathematics (4th ed.), Springer, ISBN 978-0801854149 
  • Hoffman, Kenneth; Kunze, Ray (April 25, 1971), Linear Algebra (2nd ed.), Prentice Hall, ISBN 978-0135367971 
  • Halmos, Paul R. (August 20, 1993), Finite-Dimensional Vector Spaces, Undergraduate Texts in Mathematics, Springer, ISBN 978-0387900933 
  • Horn, Roger A.; Johnson, Charles R. (February 23, 1990), Matrix Analysis, Cambridge University Press, ISBN 978-0521386326 
  • Horn, Roger A.; Johnson, Charles R. (June 24, 1994), Topics in Matrix Analysis, Cambridge University Press, ISBN 978-0521467131 
  • Lang, Serge (March 9, 2004), Linear Algebra, Undergraduate Texts in Mathematics (3rd ed.), Springer, ISBN 978-0387964126 
  • Roman, Steven (March 22, 2005), Advanced Linear Algebra, Graduate Texts in Mathematics (2nd ed.), Springer, ISBN 978-0387247663 
  • Shilov, Georgi E. (June 1, 1977), Linear algebra, Dover Publications, ISBN 978-0486635187 
  • Shores, Thomas S. (December 6, 2006), Applied Linear Algebra and Matrix Analysis, Undergraduate Texts in Mathematics, Springer, ISBN 978-0387331942 
  • Smith, Larry (May 28, 1998), Linear Algebra, Undergraduate Texts in Mathematics, Springer, ISBN 978-0387984551 
Study guides and outlines
  • Leduc, Steven A. (May 1, 1996), Linear Algebra (Cliffs Quick Review), Cliffs Notes, ISBN 978-0822053316 
  • Lipschutz, Seymour; Lipson, Marc (December 6, 2000), Schaum's Outline of Linear Algebra (3rd ed.), McGraw-Hill, ISBN 978-0071362009 
  • Lipschutz, Seymour (January 1, 1989), 3,000 Solved Problems in Linear Algebra, McGraw-Hill, ISBN 978-0070380233 
  • McMahon, David (October 28, 2005), Linear Algebra Demystified, McGraw-Hill Professional, ISBN 978-0071465793 
  • Zhang, Fuzhen (April 7, 2009), Linear Algebra: Challenging Problems for Students, The Johns Hopkins University Press, ISBN 978-0801891250 


Study guide


From Wikiversity

Material covered in these notes is designed to span 12-16 weeks. Each subpage contains about 3 hours of material to read through carefully, plus additional time to properly absorb it.


Introduction - linear equations

Let us illustrate through examples what linear equations are. We will also be introducing new notation wherever appropriate.

For example:

3x − y = 14
2x + y = 11

If you add these two equations together, you can see that the y's cancel each other out, leaving 5x = 25, or x = 5. Substituting back into either of the original equations, we find that y = 1. Note that this is the only solution to the system of equations. The method used above is called linear combination, or elimination.
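
The elimination steps can be checked with a few lines of Python (plain arithmetic, nothing beyond the steps above):

    # Adding 3x - y = 14 and 2x + y = 11 cancels y, giving 5x = 25.
    x = (14 + 11) / (3 + 2)      # x = 5
    y = 11 - 2 * x               # substitute back into 2x + y = 11, so y = 1
    print(x, y)                              # 5.0 1.0
    print(3 * x - y == 14, 2 * x + y == 11)  # True True: both equations hold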

Solving Linear Systems Algebraically

One was mentioned above, but there are other ways to solve a system of linear equations without graphing.

Substitution

If you get a system of equations that looks like this:

2x + y = 11
−4x + 3y = 13

You can switch around some terms in the first to get this:

y = −2x + 11

Then you can substitute that into the bottom one so that it looks like this:

−4x + 3(−2x + 11) = 13
−4x − 6x + 33 = 13
−10x + 33 = 13
−10x = −20
x = 2

Then, you can substitute x = 2 into either equation and solve for y. It's usually easier to substitute into the equation in which y was already isolated. In this case, after substituting 2 for x, you would find that y = 7.
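
The same substitution steps can be mirrored in a short Python sketch (plain arithmetic again, with the intermediate algebra shown as comments):

    # From 2x + y = 11 we isolate y = -2x + 11 and substitute into -4x + 3y = 13:
    #   -4x + 3(-2x + 11) = 13  ->  -10x + 33 = 13  ->  -10x = -20
    x = (13 - 33) / (-10)        # x = 2
    y = -2 * x + 11              # y = 7
    print(x, y)                          # 2.0 7.0
    print(2 * x + y, -4 * x + 3 * y)     # 11.0 13.0: both original equations hold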

Thinking in terms of matrices

Much of the finite element method revolves around forming matrices and solving systems of linear equations using matrices. This learning resource gives you a brief review of matrices.

Matrices

Suppose that you have a linear system of equations

 \begin{align} a_{11} x_1 + a_{12} x_2 + a_{13} x_3 + a_{14} x_4 &= b_1 \\ a_{21} x_1 + a_{22} x_2 + a_{23} x_3 + a_{24} x_4 &= b_2 \\ a_{31} x_1 + a_{32} x_2 + a_{33} x_3 + a_{34} x_4 &= b_3 \\ a_{41} x_1 + a_{42} x_2 + a_{43} x_3 + a_{44} x_4 &= b_4 \end{align} ~.

Matrices provide a simple way of expressing these equations. Thus, we can instead write

 \begin{bmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \\ a_{41} & a_{42} & a_{43} & a_{44} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \\ b_3 \\ b_4 \end{bmatrix} ~.

An even more compact notation is

 \left[\mathsf{A}\right] \left[\mathsf{x}\right] = \left[\mathsf{b}\right] ~~~~\text{or}~~~~ \mathbf{A} \mathbf{x} = \mathbf{b} ~.

Here \mathbf{A} is a 4\times 4 matrix while \mathbf{x} and \mathbf{b} are 4\times 1 matrices (column vectors). In general, an m \times n matrix \mathbf{A} is an array of numbers arranged in m rows and n columns:

 \mathbf{A} = \begin{bmatrix} a_{11} & a_{12} & a_{13} & \dots & a_{1n} \\ a_{21} & a_{22} & a_{23} & \dots & a_{2n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & a_{m3} & \dots & a_{mn} \end{bmatrix}~.
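
As a concrete sketch (assuming NumPy is available; the entries are made up for illustration), a 4 × 4 system in the form \mathbf{A}\mathbf{x} = \mathbf{b} can be set up and solved in a few lines:

    import numpy as np

    A = np.array([[2.0, 1.0, 0.0, 0.0],
                  [1.0, 3.0, 1.0, 0.0],
                  [0.0, 1.0, 3.0, 1.0],
                  [0.0, 0.0, 1.0, 2.0]])
    b = np.array([1.0, 2.0, 2.0, 1.0])

    x = np.linalg.solve(A, b)        # solve A x = b
    print(np.allclose(A @ x, b))     # True: x satisfies all four equations at once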

Types of Matrices

Common types of matrices that we encounter in finite elements are:

  • a row vector that has one row and n columns.
 \mathbf{v} = \begin{bmatrix} v_1 & v_2 & v_3 & \dots & v_n \end{bmatrix}
  • a column vector that has n rows and one column.
 \mathbf{v} = \begin{bmatrix} v_1 \\ v_2 \\ v_3 \\ \vdots \\ v_n \end{bmatrix}
  • a square matrix that has an equal number of rows and columns.
  • a diagonal matrix, which is a square matrix in which only the diagonal elements (aii) may be nonzero.

 \mathbf{A} = \begin{bmatrix} a_{11} & 0 & 0 & \dots & 0 \\ 0 & a_{22} & 0 & \dots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \dots & a_{nn} \end{bmatrix}~.
  • the identity matrix (\mathbf{I}), which is a diagonal matrix with each of its diagonal elements (aii) equal to 1.

 \mathbf{I} = \begin{bmatrix} 1 & 0 & 0 & \dots & 0 \\ 0 & 1 & 0 & \dots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \dots & 1 \end{bmatrix}~.
  • a symmetric matrix, which is a square matrix with elements such that aij = aji.

 \mathbf{A} = \begin{bmatrix} a_{11} & a_{12} & a_{13} & \dots & a_{1n} \\ a_{12} & a_{22} & a_{23} & \dots & a_{2n} \\ a_{13} & a_{23} & a_{33} & \dots & a_{3n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ a_{1n} & a_{2n} & a_{3n} & \dots & a_{nn} \end{bmatrix}~.
  • a skew-symmetric matrix, which is a square matrix with elements such that aij = −aji.

 \mathbf{A} = \begin{bmatrix} a_{11} & a_{12} & a_{13} & \dots & a_{1n} \\ -a_{12} & a_{22} & a_{23} & \dots & a_{2n} \\ -a_{13} & -a_{23} & a_{33} & \dots & a_{3n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ -a_{1n} & -a_{2n} & -a_{3n} & \dots & a_{nn} \end{bmatrix}~.

Note that the diagonal elements of a skew-symmetric matrix have to be zero: a_{ii} = -a_{ii} \Rightarrow a_{ii} = 0.
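
The matrix types listed above are easy to construct and check numerically; a minimal sketch, assuming NumPy is available (the entries are illustrative):

    import numpy as np

    n = 4
    D = np.diag([1.0, 2.0, 3.0, 4.0])    # diagonal matrix
    I = np.eye(n)                        # identity matrix
    S = np.random.rand(n, n)
    sym  = (S + S.T) / 2                 # symmetric:       sym[i, j] ==  sym[j, i]
    skew = (S - S.T) / 2                 # skew-symmetric: skew[i, j] == -skew[j, i]

    print(np.allclose(D @ I, D))             # True: multiplying by the identity changes nothing
    print(np.allclose(sym, sym.T))           # True
    print(np.allclose(skew, -skew.T))        # True
    print(np.allclose(np.diag(skew), 0.0))   # True: diagonal of a skew-symmetric matrix is zero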

Matrix addition

Let \mathbf{A} and \mathbf{B} be two m \times n matrices with components aij and bij, respectively. Then

 \mathbf{C} = \mathbf{A} + \mathbf{B} \implies c_{ij} = a_{ij} + b_{ij}

Multiplication by a scalar

Let \mathbf{A} be an m \times n matrix with components aij and let λ be a scalar quantity. Then,

 \mathbf{C} = \lambda\mathbf{A} \implies c_{ij} = \lambda a_{ij}

Multiplication of matrices

Let \mathbf{A} be an m \times n matrix with components aij. Let \mathbf{B} be a p \times q matrix with components bij.

The product \mathbf{C} = \mathbf{A} \mathbf{B} is defined only if n = p. The matrix \mathbf{C} is an m \times q matrix with components cij. Thus,

 \mathbf{C} = \mathbf{A} \mathbf{B} \implies c_{ij} = \sum^n_{k=1} a_{ik} b_{kj}

Similarly, the product \mathbf{D} = \mathbf{B} \mathbf{A} is defined only if q = m. The matrix \mathbf{D} is a p \times n matrix with components dij. We have

 \mathbf{D} = \mathbf{B} \mathbf{A} \implies d_{ij} = \sum^m_{k=1} b_{ik} a_{kj}

Clearly, \mathbf{C} \ne \mathbf{D} in general, i.e., the matrix product is not commutative.

However, matrix multiplication is distributive. That means

 \mathbf{A} (\mathbf{B} + \mathbf{C}) = \mathbf{A} \mathbf{B} + \mathbf{A} \mathbf{C} ~.

The product is also associative. That means

 \mathbf{A} (\mathbf{B} \mathbf{C}) = (\mathbf{A} \mathbf{B}) \mathbf{C} ~.
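
These multiplication rules can be verified directly. The sketch below (NumPy assumed, matrices made up for illustration) computes the product from the component formula c_ij = sum_k a_ik b_kj and then checks commutativity, distributivity, and associativity:

    import numpy as np

    A = np.array([[1.0, 2.0],
                  [3.0, 4.0]])
    B = np.array([[0.0, 1.0],
                  [1.0, 0.0]])

    # c_ij = sum_k a_ik * b_kj, written out explicitly.
    C = np.zeros((2, 2))
    for i in range(2):
        for j in range(2):
            for k in range(2):
                C[i, j] += A[i, k] * B[k, j]

    print(np.allclose(C, A @ B))                      # True: matches the built-in product
    print(np.allclose(A @ B, B @ A))                  # False: not commutative in general
    print(np.allclose(A @ (B + C), A @ B + A @ C))    # True: distributive
    print(np.allclose(A @ (B @ C), (A @ B) @ C))      # True: associative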

Transpose of a matrix

Let \mathbf{A} be an m \times n matrix with components aij. Then the transpose of the matrix is defined as the n \times m matrix \mathbf{B} = \mathbf{A}^T with components bij = aji. That is,

 \mathbf{B} = \mathbf{A}^T = \begin{bmatrix} a_{11} & a_{12} & a_{13} & \dots & a_{1n} \\ a_{21} & a_{22} & a_{23} & \dots & a_{2n} \\ a_{31} & a_{32} & a_{33} & \dots & a_{3n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & a_{m3} & \dots & a_{mn} \end{bmatrix}^T = \begin{bmatrix} a_{11} & a_{21} & a_{31} & \dots & a_{m1} \\ a_{12} & a_{22} & a_{32} & \dots & a_{m2} \\ a_{13} & a_{23} & a_{33} & \dots & a_{m3} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ a_{1n} & a_{2n} & a_{3n} & \dots & a_{mn} \end{bmatrix}

An important identity involving the transpose of matrices is

 { (\mathbf{A} \mathbf{B})^T = \mathbf{B}^T \mathbf{A}^T }~.
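
A quick numerical check of this identity (NumPy assumed, random matrices for illustration):

    import numpy as np

    A = np.random.rand(3, 4)
    B = np.random.rand(4, 2)
    print(np.allclose((A @ B).T, B.T @ A.T))   # True: (AB)^T = B^T A^T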

Determinant of a matrix

The determinant of a matrix is defined only for square matrices.

For a 2 \times 2 matrix \mathbf{A}, we have

 \mathbf{A} = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \implies \det(\mathbf{A}) = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22}\end{vmatrix} = a_{11} a_{22} - a_{12} a_{21} ~.

For an n \times n matrix, the determinant is calculated by expanding in minors as

\begin{align} \det(\mathbf{A}) &= \begin{vmatrix} a_{11} & a_{12} & a_{13} & \dots & a_{1n} \\ a_{21} & a_{22} & a_{23} & \dots & a_{2n} \\ a_{31} & a_{32} & a_{33} & \dots & a_{3n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & a_{n3} & \dots & a_{nn} \end{vmatrix} \\ &= a_{11} \begin{vmatrix} a_{22} & a_{23} & \dots & a_{2n} \\ a_{32} & a_{33} & \dots & a_{3n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n2} & a_{n3} & \dots & a_{nn} \end{vmatrix} - a_{12} \begin{vmatrix} a_{21} & a_{23} & \dots & a_{2n} \\ a_{31} & a_{33} & \dots & a_{3n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n3} & \dots & a_{nn} \end{vmatrix} + \dots \pm a_{1n} \begin{vmatrix} a_{21} & a_{22} & \dots & a_{2(n-1)} \\ a_{31} & a_{32} & \dots & a_{3(n-1)} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \dots & a_{n(n-1)} \end{vmatrix} \end{align}

In short, the determinant of a matrix \mathbf{A} can be computed by expanding along any row i:

 { \det(\mathbf{A}) = \sum^n_{j=1} (-1)^{i+j} a_{ij} M_{ij} }

where Mij is the determinant of the submatrix of \mathbf{A} formed by eliminating row i and column j from \mathbf{A}; the analogous formula holds for expansion along any column.
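
The expansion in minors translates directly into a short recursive function. The sketch below (NumPy assumed) expands along the first row, i.e. the i = 1 case of the formula above; it is meant only as an illustration, since this approach takes on the order of n! operations.

    import numpy as np

    def det_by_minors(A):
        """Determinant by expansion along the first row (illustrative, not efficient)."""
        n = A.shape[0]
        if n == 1:
            return A[0, 0]
        total = 0.0
        for j in range(n):
            # Minor M_1j: delete row 1 and column j (0-based: row 0, column j).
            minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
            total += (-1) ** j * A[0, j] * det_by_minors(minor)
        return total

    A = np.random.rand(4, 4)
    print(np.isclose(det_by_minors(A), np.linalg.det(A)))   # True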

Some useful identities involving the determinant are given below.

  • If \mathbf{A} is an n \times n matrix, then
 \det(\mathbf{A}) = \det(\mathbf{A}^T)~.
  • If λ is a constant and \mathbf{A} is an n \times n matrix, then
 \det(\lambda\mathbf{A}) = \lambda^n\det(\mathbf{A}) \implies \det(-\mathbf{A}) = (-1)^n\det(\mathbf{A}) ~.
  • If \mathbf{A} and \mathbf{B} are two n \times n matrices, then
 \det(\mathbf{A}\mathbf{B}) = \det(\mathbf{A})\det(\mathbf{B})~.

Inverse of a matrix

Let \mathbf{A} be an n \times n matrix. The inverse of \mathbf{A} is denoted by \mathbf{A}^{-1} and is defined such that

 { \mathbf{A} \mathbf{A}^{-1} = \mathbf{I} }

where \mathbf{I} is the n \times n identity matrix.

The inverse exists if and only if \det(\mathbf{A}) \ne 0. A square matrix that does not have an inverse is called singular.

An important identity involving the inverse is

 { (\mathbf{A}\mathbf{B})^{-1} = \mathbf{B}^{-1} \mathbf{A}^{-1}, }

since this leads to:  { (\mathbf{A} \mathbf{B})^{-1} (\mathbf{A} \mathbf{B}) = (\mathbf{B}^{-1} \mathbf{A}^{-1}) (\mathbf{A} \mathbf{B} ) = \mathbf{B}^{-1} \mathbf{A}^{-1} \mathbf{A} \mathbf{B} = \mathbf{B}^{-1} (\mathbf{A}^{-1} \mathbf{A}) \mathbf{B} = \mathbf{B}^{-1} \mathbf{I} \mathbf{B} = \mathbf{B}^{-1} \mathbf{B} = \mathbf{I}. }

Some other identities involving the inverse of a matrix are given below.

  • The determinant of a matrix is equal to the multiplicative inverse of the determinant of its inverse:

 \det(\mathbf{A}) = \cfrac{1}{\det(\mathbf{A}^{-1})}~.
  • The determinant of a similarity transformation of a matrix is equal to the determinant of the original matrix:

 \det(\mathbf{B} \mathbf{A} \mathbf{B}^{-1}) = \det(\mathbf{A}) ~.

We usually use numerical methods such as Gaussian elimination to compute the inverse of a matrix.
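
The inverse and the identities above can be checked numerically; a minimal sketch, assuming NumPy is available (the matrices are made up for illustration and are nonsingular):

    import numpy as np

    A = np.array([[4.0, 7.0],
                  [2.0, 6.0]])
    B = np.array([[1.0, 2.0],
                  [3.0, 5.0]])

    A_inv = np.linalg.inv(A)   # computed internally by Gaussian elimination (LU factorization)

    print(np.allclose(A @ A_inv, np.eye(2)))                              # True: A A^{-1} = I
    print(np.allclose(np.linalg.inv(A @ B), np.linalg.inv(B) @ A_inv))    # True: (AB)^{-1} = B^{-1} A^{-1}
    print(np.isclose(np.linalg.det(A), 1.0 / np.linalg.det(A_inv)))       # True: det(A) = 1 / det(A^{-1})
    print(np.isclose(np.linalg.det(B @ A @ np.linalg.inv(B)),
                     np.linalg.det(A)))                                   # True: similarity preserves det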

Eigenvalues and eigenvectors

A thorough explanation of this material can be found in the article Eigenvalue, eigenvector and eigenspace. For practice, let us consider the following examples:

  • Let  \mathbf{A} = \begin{bmatrix} 1 & 6 \\ 5 & 2 \end{bmatrix} , \mathbf{v} = \begin{bmatrix} 6 \\ -5 \end{bmatrix} , \mathbf{t} = \begin{bmatrix} 7 \\ 4 \end{bmatrix}~.

Which of these vectors is an eigenvector of  \mathbf{A} ?

We have  \mathbf{A}\mathbf{v} = \begin{bmatrix} 1 & 6 \\ 5 & 2 \end{bmatrix}\begin{bmatrix} 6 \\ -5 \end{bmatrix} = \begin{bmatrix} -24 \\ 20 \end{bmatrix} = -4\begin{bmatrix} 6 \\ -5 \end{bmatrix} , and  \mathbf{A}\mathbf{t} = \begin{bmatrix} 1 & 6 \\ 5 & 2 \end{bmatrix}\begin{bmatrix} 7 \\ 4 \end{bmatrix} = \begin{bmatrix} 31 \\ 43 \end{bmatrix}~.

Thus,  \mathbf{v} is an eigenvector of \mathbf{A} with eigenvalue -4, while \mathbf{t} is not, since \mathbf{A}\mathbf{t} is not a scalar multiple of \mathbf{t}.

  • Is  \mathbf{u} = \begin{bmatrix} 1 \\ 4 \end{bmatrix} an eigenvector for  \mathbf{A} = \begin{bmatrix} -3 & -3 \\ 1 & 8 \end{bmatrix} ?

Since  \mathbf{A}\mathbf{u} = \begin{bmatrix} -3 & -3 \\ 1 & 8 \end{bmatrix}\begin{bmatrix} 1 \\ 4 \end{bmatrix} = \begin{bmatrix} -15 \\ 33 \end{bmatrix} is not a scalar multiple of \mathbf{u}, the vector  \mathbf{u} = \begin{bmatrix} 1 \\ 4 \end{bmatrix} is not an eigenvector of  \mathbf{A} = \begin{bmatrix} -3 & -3 \\ 1 & 8 \end{bmatrix}~.
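
Both examples can be checked with NumPy's eigenvalue routine (a sketch, assuming NumPy is available):

    import numpy as np

    A = np.array([[1.0, 6.0],
                  [5.0, 2.0]])
    v = np.array([6.0, -5.0])

    eigenvalues, eigenvectors = np.linalg.eig(A)
    print(eigenvalues)                     # 7 and -4 (order may vary)
    print(np.allclose(A @ v, -4 * v))      # True: v is an eigenvector with eigenvalue -4

    A2 = np.array([[-3.0, -3.0],
                   [ 1.0,  8.0]])
    u = np.array([1.0, 4.0])
    w = A2 @ u                             # [-15. 33.]
    print(w[0] * u[1] == w[1] * u[0])      # False: w is not a scalar multiple of u, so u is not an eigenvector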



Wikibooks


From Wikibooks, the open-content textbooks collection

Linear algebra is a branch of algebra in mathematics concerned with the study of vectors, vector spaces, linear transformations, and systems of linear equations. Vector spaces are very important in modern mathematics. Linear algebra is widely used in abstract algebra and functional analysis. It has extensive applications in natural and social sciences, for both linear systems and linear models of nonlinear systems.

It is part of the study of Abstract Algebra.


General Information and MoS

This book is meant for students who wish to study linear algebra from scratch. The approach will not be entirely informal. Every result in the book is intended to be either proved or justified by some mathematical procedure. Links to tedious proofs can be made to Famous Theorems of Mathematics/Algebra after the proof is written there.

Exercises

Learning to think is extremely important in mathematics. Therefore, exercises form an important component of this book and should by no means be ignored. Many important concepts of linear algebra are developed via the exercises, and the student should do the exercises of each chapter before proceeding to the next. Links to hints and solutions to many of the exercises are provided, but they should be used only in cases of difficulty.

Table of Contents

Linear Systems

  1. Solving Linear Systems
    1. Gauss' Method
    2. Describing the Solution Set
    3. General = Particular + Homogeneous
    4. Comparing Set Descriptions
    5. Automation
  2. Linear Geometry of n-Space
    1. Vectors in Space
    2. Length and Angle Measures
  3. Reduced Echelon Form
    1. Gauss-Jordan Reduction
    2. Row Equivalence
  4. Topic: Computer Algebra Systems
  5. Topic: Input-Output Analysis
  6. Input-Output Analysis M File
  7. Topic: Accuracy of Computations
  8. Topic: Analyzing Networks
  9. Topic: Speed of Gauss' Method

Vector Spaces

Vector Spaces
  1. Definition of Vector Space
    1. Definition and Examples
    2. Subspaces and Spanning sets
  2. Linear Independence
    1. Definition and Examples
  3. Basis and Dimension
    1. Basis
    2. Dimension
    3. Vector Spaces and Linear Systems
    4. Combining Subspaces
  4. Topic: Fields
  5. Topic: Crystals
  6. Topic: Voting Paradoxes
  7. Topic: Dimensional Analysis

Maps Between Spaces

  1. Isomorphisms
    1. Definition and Examples
    2. Dimension Characterizes Isomorphism
  2. Homomorphisms
    1. Definition of Homomorphism
    2. Rangespace and Nullspace
  3. Computing Linear Maps
    1. Representing Linear Maps with Matrices
    2. Any Matrix Represents a Linear Map
  4. Matrix Operations
    1. Sums and Scalar Products
    2. Matrix Multiplication
    3. Mechanics of Matrix Multiplication
    4. Inverses
  5. Change of Basis
    1. Changing Representations of Vectors
    2. Changing Map Representations
  6. Projection
    1. Orthogonal Projection Into a Line
    2. Gram-Schmidt Orthogonalization
    3. Projection Into a Subspace
  7. Topic: Line of Best Fit
  8. Topic: Geometry of Linear Maps
  9. Topic: Markov Chains
  10. Topic: Orthonormal Matrices

Determinants

Determinants
  1. Definition
    1. Exploration
    2. Properties of Determinants
    3. The Permutation Expansion
    4. Determinants Exist
  2. Geometry of Determinants
    1. Determinants as Size Functions
  3. Other Formulas for Determinants
    1. Laplace's Expansion
  4. Topic: Cramer's Rule
  5. Topic: Speed of Calculating Determinants
  6. Topic: Projective Geometry

Similarity

Introduction to Similarity
  1. Complex Vector Spaces
    1. Factoring and Complex Numbers; A Review
    2. Complex Representations
  2. Similarity
    1. Diagonalizability
    2. Eigenvalues and Eigenvectors
  3. Nilpotence
    1. Self-Composition
    2. Strings
  4. Jordan Form
    1. Polynomials of Maps and Matrices
    2. Jordan Canonical Form
  5. Topic: Geometry of Eigenvalues
  6. Topic: The Method of Powers
  7. Topic: Stable Populations
  8. Topic: Linear Recurrences

Appendix

Resources And Licensing

  • Licensing And History
  • Resources
  • Bibliography
  • Index



Simple English

Linear algebra describes ways to solve and manipulate (rearrange) systems of linear equations.

For example, consider the following equations:

\begin{matrix} x &+& y &=& 0 \\ x &-& 2y &=& 3 \end{matrix}

These two equations form a system of linear equations. It is linear because none of the variables is raised to a power greater than one. The graph of a linear equation is a straight line. The solution to this system is:

\begin{matrix} x &=& 1 \\ y &=& -1 \end{matrix}

since it makes all of the original equations valid; that is, the value on the left side of the equals sign is exactly the same as the value on the right side for both equations.

Linear algebra uses a compact system of notation, called a matrix, to describe such systems. For the previous example, the coefficients of the equations can be stored in a coefficient matrix.
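
For instance, with NumPy (assumed available) the coefficient matrix of the example above can be built and the system solved in one call:

    import numpy as np

    # Coefficient matrix and right-hand side for
    #   x +  y = 0
    #   x - 2y = 3
    A = np.array([[1.0,  1.0],
                  [1.0, -2.0]])
    b = np.array([0.0, 3.0])

    print(np.linalg.solve(A, b))   # [ 1. -1.]  that is, x = 1 and y = -1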

