Linear algebra is a branch of mathematics concerned with the study of vectors, with families of vectors called vector spaces or linear spaces, and with functions that take one vector as input and produce another as output, according to certain rules. These functions are called linear maps or linear transformations and are often represented by matrices. Linear algebra is central to modern mathematics and its applications. An elementary application of linear algebra is the solution of systems of linear equations in several unknowns. More advanced applications are ubiquitous, in areas as diverse as abstract algebra and functional analysis. Linear algebra has a concrete representation in analytic geometry and is generalized in operator theory. It has extensive applications in the natural and social sciences. Nonlinear mathematical models can often be approximated by linear ones.
Many of the basic tools of linear algebra, particularly those concerned with the solution of systems of linear equations, date to antiquity. See, for example, the history of Gaussian elimination. But the abstract study of vectors and vector spaces did not begin until the 1600s. The origin of many of these ideas is discussed in the article on determinants. The method of least squares, first used by Gauss in the 1790s, is an early and significant application of the ideas of linear algebra.
The subject began to take its modern form in the mid-19th century, which saw many ideas and methods of previous centuries generalized as abstract algebra. Matrices and tensors were introduced and well understood by the turn of the 20th century. The use of these objects in special relativity, statistics, and quantum mechanics did much to spread the subject of linear algebra beyond pure mathematics.
The main structures of linear algebra are vector spaces and linear maps between them. A vector space is a set whose elements can be added together and multiplied by scalars, or numbers. In many physical applications, the scalars are real numbers, R. More generally, the scalars may form any field F — thus one can consider vector spaces over the field Q of rational numbers, the field C of complex numbers, or a finite field F_{q}. These two operations must behave similarly to the usual addition and multiplication of numbers: addition is commutative and associative, multiplication distributes over addition, and so on. More precisely, the two operations must satisfy a list of axioms chosen to emulate the properties of addition and scalar multiplication of Euclidean vectors in the coordinate n-space R^{n}. One of the axioms stipulates the existence of a zero vector, which behaves analogously to the number zero with respect to addition. Elements of a general vector space V may be objects of any nature, for example, functions or polynomials, but when viewed as elements of V, they are frequently called vectors.
Given two vector spaces V and W over a field F, a linear transformation is a map

T : V → W

which is compatible with addition and scalar multiplication:

T(u + v) = T(u) + T(v),  T(r·u) = r·T(u)

for any vectors u, v ∈ V and a scalar r ∈ F.
A fundamental role in linear algebra is played by the notions of linear combination, span, and linear independence of vectors, and by the basis and the dimension of a vector space. Given a vector space V over a field F, an expression of the form

r_{1}v_{1} + r_{2}v_{2} + … + r_{k}v_{k},

where v_{1}, v_{2}, …, v_{k} are vectors and r_{1}, r_{2}, …, r_{k} are scalars, is called a linear combination of the vectors v_{1}, v_{2}, …, v_{k} with coefficients r_{1}, r_{2}, …, r_{k}. The set of all linear combinations of vectors v_{1}, v_{2}, …, v_{k} is called their span. A linear combination of any system of vectors with all zero coefficients is the zero vector of V. If this is the only way to express the zero vector as a linear combination of v_{1}, v_{2}, …, v_{k}, then these vectors are linearly independent. A linearly independent set of vectors that spans a vector space V is a basis of V. If a vector space admits a finite basis, then any two bases have the same number of elements (called the dimension of V) and V is a finite-dimensional vector space. This theory can be extended to infinite-dimensional spaces.
There is an important distinction between the coordinate n-space R^{n} and a general finite-dimensional vector space V. While R^{n} has a standard basis {e_{1}, e_{2}, …, e_{n}}, a vector space V typically does not come equipped with a basis, and many different bases exist (although they all consist of the same number of elements, equal to the dimension of V). Having a particular basis {v_{1}, v_{2}, …, v_{n}} of V allows one to construct a coordinate system in V: the vector with coordinates (r_{1}, r_{2}, …, r_{n}) is the linear combination

r_{1}v_{1} + r_{2}v_{2} + … + r_{n}v_{n}.
The condition that v_{1}, v_{2}, …, v_{n} span V guarantees that each vector v can be assigned coordinates, whereas the linear independence of v_{1}, v_{2}, …, v_{n} further assures that these coordinates are determined in a unique way (i.e. there is only one linear combination of the basis vectors that is equal to v). In this way, once a basis of a vector space V over F has been chosen, V may be identified with the coordinate n-space F^{n}. Under this identification, addition and scalar multiplication of vectors in V correspond to addition and scalar multiplication of their coordinate vectors in F^{n}. Furthermore, if V and W are an n-dimensional and an m-dimensional vector space over F, and a basis of V and a basis of W have been fixed, then any linear transformation T: V → W may be encoded by an m × n matrix A with entries in the field F, called the matrix of T with respect to these bases. Therefore, by and large, the study of linear transformations, which were defined axiomatically, may be replaced by the study of matrices, which are concrete objects. This is a major technique in linear algebra.
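This correspondence can be sketched in a few lines of Python. The rotation map used here is a hypothetical example, not from the original text; the point is that the j-th column of the matrix of T is the coordinate vector of T(e_{j}):

```python
# Hypothetical linear map T: R^2 -> R^2 (rotation by 90 degrees).
def T(v):
    x, y = v
    return [-y, x]

# Column j of the matrix of T (with respect to the standard basis)
# is the coordinate vector of T(e_j).
e1, e2 = [1, 0], [0, 1]
cols = [T(e1), T(e2)]
A = [[cols[j][i] for j in range(2)] for i in range(2)]

def mat_vec(A, v):
    # (A v)_i = sum over j of a_ij * v_j
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

# Applying T is now the same as multiplying by A.
v = [3, 4]
assert mat_vec(A, v) == T(v)
```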
Remarkably, the 2 × 2 complex matrices were studied before 2 × 2 real matrices. Early topics of interest included biquaternions and Pauli algebra. Investigation of 2 × 2 real matrices revealed the less common split-complex numbers and dual numbers, which are at variance with the Euclidean nature of the ordinary complex number plane.
For more information regarding the invertibility of a matrix, consult the invertible matrix article.
Since linear algebra is a successful theory, its methods have been developed in other parts of mathematics. In module theory one replaces the field of scalars by a ring. In multilinear algebra one considers multivariable linear transformations, that is, mappings which are linear in each of a number of different variables. This line of inquiry naturally leads to the idea of the tensor product. Functional analysis mixes the methods of linear algebra with those of mathematical analysis.


The material covered in these notes is designed to span 12–16 weeks. Each subpage will contain about 3 hours of material to read through carefully, plus additional time to properly absorb the material.
Let us illustrate through examples what linear equations are. We will also be introducing new notation wherever appropriate.
For example:

2x + y = 11
3x − y = 14

If you add these two equations together, you can see that the y's cancel each other out. When this happens, you will get 5x = 25, or x = 5. Substituting back into either equation, we find that y = 1. Note that this is the only solution to the system of equations. The above method of solving is called linear combination, or elimination.
One method was mentioned above, but there are other ways to solve a system of linear equations without graphing.
If you get a system of equations that looks like this:

x + y = 9
2x + 3y = 25

you can rearrange some terms in the first to get this:

y = 9 − x

Then you can substitute that into the bottom one so that it looks like this:

2x + 3(9 − x) = 25

Solving this single equation gives x = 2. Then, you can substitute 2 for x in either equation and solve for y. It's usually easier to substitute it in the one that had the single y. In this case, after substituting 2 for x, you would find that y = 7.
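The elimination idea can be packaged into a short Python sketch. The helper name `solve_2x2` and the sample coefficients are illustrative, not from the original text:

```python
def solve_2x2(a1, b1, c1, a2, b2, c2):
    # Solves  a1*x + b1*y = c1  and  a2*x + b2*y = c2.
    # Eliminate y: multiply the first equation by b2 and the second by b1,
    # then subtract -- the y terms cancel, leaving one equation in x.
    x = (c1 * b2 - c2 * b1) / (a1 * b2 - a2 * b1)
    # Substitute x back into the first equation (assumes b1 != 0).
    y = (c1 - a1 * x) / b1
    return x, y

# Hypothetical system: x + y = 9 and 2x + 3y = 25.
x, y = solve_2x2(1, 1, 9, 2, 3, 25)
assert (x, y) == (2.0, 7.0)
```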
Much of finite elements revolves around forming matrices and solving systems of linear equations using matrices. This learning resource gives you a brief review of matrices.
Suppose that you have a linear system of equations

a_{11} x_{1} + a_{12} x_{2} + … + a_{1n} x_{n} = b_{1}
a_{21} x_{1} + a_{22} x_{2} + … + a_{2n} x_{n} = b_{2}
  ⋮
a_{m1} x_{1} + a_{m2} x_{2} + … + a_{mn} x_{n} = b_{m}

Matrices provide a simple way of expressing these equations. Thus, we can instead write

[ a_{11} a_{12} … a_{1n} ] [ x_{1} ]   [ b_{1} ]
[ a_{21} a_{22} … a_{2n} ] [ x_{2} ] = [ b_{2} ]
[   ⋮      ⋮       ⋮     ] [   ⋮  ]   [   ⋮  ]
[ a_{m1} a_{m2} … a_{mn} ] [ x_{n} ]   [ b_{m} ]

An even more compact notation is

A x = b

Here A is an m × n matrix while x and b are n × 1 and m × 1 matrices, respectively. In general, an m × n matrix A is a set of numbers arranged in m rows and n columns.
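As a small Python sketch (the 2 × 2 system shown is a made-up example), the compact form A x = b just says that multiplying the coefficient matrix by the vector of unknowns reproduces the right-hand sides:

```python
def mat_vec(A, x):
    # (A x)_i = sum over j of a_ij * x_j
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

# Hypothetical system:  x1 + 2*x2 = 5  and  3*x1 + 4*x2 = 11.
A = [[1, 2], [3, 4]]
b = [5, 11]
x = [1, 2]            # its solution
assert mat_vec(A, x) == b
```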
Common types of matrices that we encounter in finite elements are:
A diagonal matrix, which has only its diagonal elements (a_{ii}) nonzero.
The identity matrix I, a diagonal matrix with each of its nonzero elements (a_{ii}) equal to 1.
A symmetric matrix, such that a_{ij} = a_{ji}.
A skew-symmetric matrix, such that a_{ij} = − a_{ji}.
Note that the diagonal elements of a skew-symmetric matrix have to be zero: a_{ii} = − a_{ii} implies a_{ii} = 0.
Let A and B be two m × n matrices with components a_{ij} and b_{ij}, respectively. Then the sum C = A + B is the m × n matrix with components

c_{ij} = a_{ij} + b_{ij}
Let A be a matrix with components a_{ij} and let λ be a scalar quantity. Then the product λA is the matrix with components λa_{ij}.
Let A be an m × n matrix with components a_{ij}. Let B be a p × q matrix with components b_{ij}.

The product C = AB is defined only if n = p. The matrix C is an m × q matrix with components c_{ij}. Thus,

c_{ij} = Σ_{k=1}^{n} a_{ik} b_{kj}

Similarly, the product D = BA is defined only if q = m. The matrix D is a p × n matrix with components d_{ij}. We have

d_{ij} = Σ_{k=1}^{q} b_{ik} a_{kj}
Clearly, AB ≠ BA in general, i.e., the matrix product is not commutative.
However, matrix multiplication is distributive. That means

A(B + C) = AB + AC
The product is also associative. That means

A(BC) = (AB)C
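These properties (non-commutativity, distributivity, associativity) can be checked directly with a small pure-Python multiplication routine; this is a sketch, and the sample matrices are arbitrary:

```python
def mat_mul(A, B):
    # c_ij = sum over k of a_ik * b_kj ; defined when cols(A) == rows(B).
    n = len(B)
    assert len(A[0]) == n
    return [[sum(A[i][k] * B[k][j] for k in range(n))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_add(A, B):
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))] for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
C = [[2, 0], [0, 2]]

assert mat_mul(A, B) != mat_mul(B, A)                                       # not commutative
assert mat_mul(A, mat_add(B, C)) == mat_add(mat_mul(A, B), mat_mul(A, C))   # distributive
assert mat_mul(A, mat_mul(B, C)) == mat_mul(mat_mul(A, B), C)               # associative
```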
Let A be an m × n matrix with components a_{ij}. Then the transpose of the matrix, denoted A^{T}, is defined as the n × m matrix with components b_{ij} = a_{ji}. That is,

[A^{T}]_{ij} = [A]_{ji}
An important identity involving the transpose of matrices is

(AB)^{T} = B^{T} A^{T}
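A quick way to see the identity (AB)^{T} = B^{T} A^{T} at work is a small Python sketch (the sample matrices are arbitrary):

```python
def transpose(A):
    # b_ij = a_ji
    return [[A[j][i] for j in range(len(A))] for i in range(len(A[0]))]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2, 3], [4, 5, 6]]        # 2 x 3
B = [[1, 0], [0, 1], [2, 2]]      # 3 x 2
# (A B)^T equals B^T A^T.
assert transpose(mat_mul(A, B)) == mat_mul(transpose(B), transpose(A))
```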
The determinant of a matrix is defined only for square matrices.
For a 2 × 2 matrix A, we have

det(A) = a_{11}a_{22} − a_{12}a_{21}
For a 3 × 3 matrix, the determinant is calculated by expanding into minors as

det(A) = a_{11}(a_{22}a_{33} − a_{23}a_{32}) − a_{12}(a_{21}a_{33} − a_{23}a_{31}) + a_{13}(a_{21}a_{32} − a_{22}a_{31})
In short, the determinant of an n × n matrix A has the value

det(A) = Σ_{j=1}^{n} (−1)^{i+j} a_{ij} M_{ij}   (expansion along any fixed row i)

where M_{ij} is the determinant of the submatrix of A formed by eliminating row i and column j from A.
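The cofactor expansion translates almost directly into a recursive Python function. This is a sketch for small matrices, expanding along the first row (with 0-based indices, the sign (−1)^{i+j} becomes (−1)^{j}):

```python
def det(A):
    # Laplace (cofactor) expansion along the first row:
    # det(A) = sum over j of (-1)**j * a_0j * M_0j
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # Minor M_0j: delete row 0 and column j.
        minor = [row[:j] + row[j+1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(minor)
    return total

assert det([[1, 2], [3, 4]]) == -2
assert det([[2, 0, 0], [0, 3, 0], [0, 0, 4]]) == 24
```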
Some useful identities involving the determinant are given below. The determinant of the transpose of a matrix equals the determinant of the matrix: det(A^{T}) = det(A). The determinant of a product equals the product of the determinants: det(AB) = det(A) det(B).
Let A be an n × n matrix. The inverse of A is denoted by A^{−1} and is defined such that

A A^{−1} = A^{−1} A = I

where I is the n × n identity matrix.
The inverse exists only if det(A) ≠ 0. A singular matrix does not have an inverse.
An important identity involving the inverse is

(AB)^{−1} = B^{−1} A^{−1}

since this leads to:

(AB)(B^{−1} A^{−1}) = A(B B^{−1})A^{−1} = A A^{−1} = I
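The identity (AB)^{−1} = B^{−1} A^{−1} can be spot-checked numerically. The sketch below uses the adjugate formula for 2 × 2 inverses; the sample matrices are arbitrary:

```python
def inv2(A):
    # Inverse of a 2x2 matrix via the adjugate formula.
    (a, b), (c, d) = A
    det = a * d - b * c
    assert det != 0, "singular matrix has no inverse"
    return [[d / det, -b / det], [-c / det, a / det]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def close(X, Y, tol=1e-12):
    return all(abs(X[i][j] - Y[i][j]) < tol for i in range(2) for j in range(2))

A = [[1, 2], [3, 4]]
B = [[2, 0], [1, 1]]
# (A B)^(-1) equals B^(-1) A^(-1).
assert close(inv2(mat_mul(A, B)), mat_mul(inv2(B), inv2(A)))
```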
Some other identities involving the inverse of a matrix are given below.
The determinant of a matrix is equal to the reciprocal of the determinant of its inverse: det(A) = 1 / det(A^{−1}).
The inverse of the inverse of a matrix is equal to the original matrix: (A^{−1})^{−1} = A.
We usually use numerical methods such as Gaussian elimination to compute the inverse of a matrix.
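A minimal pure-Python sketch of this approach is Gauss-Jordan elimination with partial pivoting on the augmented matrix [A | I]; the function name and the 2 × 2 example are illustrative:

```python
def inverse(A):
    # Invert an n x n matrix by Gauss-Jordan elimination on [A | I].
    n = len(A)
    aug = [list(map(float, A[i])) + [1.0 if j == i else 0.0 for j in range(n)]
           for i in range(n)]
    for col in range(n):
        # Partial pivoting: pick the row with the largest pivot element.
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        if abs(aug[pivot][col]) < 1e-12:
            raise ValueError("matrix is singular")
        aug[col], aug[pivot] = aug[pivot], aug[col]
        p = aug[col][col]
        aug[col] = [v / p for v in aug[col]]        # scale pivot row to 1
        for r in range(n):
            if r != col:
                f = aug[r][col]                     # eliminate column col
                aug[r] = [v - f * w for v, w in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]                 # right half is A^(-1)

A = [[4.0, 7.0], [2.0, 6.0]]
Ainv = inverse(A)
# A @ Ainv should be (numerically) the identity matrix.
```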
A thorough explanation of this material can be found at Eigenvalue, eigenvector and eigenspace. For further study, consider how a candidate vector is tested: a nonzero vector v is an eigenvector of a matrix A if A v is a scalar multiple of v, that is, if A v = λv for some scalar λ (the eigenvalue). If A v is not a scalar multiple of v, then v is not an eigenvector of A.
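In code, the eigenvector test amounts to checking whether A v is a scalar multiple of v. A small Python sketch follows; the diagonal matrix used is a made-up example:

```python
def mat_vec(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

def is_eigenvector(A, v):
    # v (nonzero) is an eigenvector of A iff A v is a scalar multiple of v.
    Av = mat_vec(A, v)
    # Candidate eigenvalue from the first nonzero component of v.
    k = next(i for i, x in enumerate(v) if x != 0)
    lam = Av[k] / v[k]
    return all(abs(Av[i] - lam * v[i]) < 1e-12 for i in range(len(v)))

A = [[2, 0], [0, 3]]                  # hypothetical diagonal matrix
assert is_eigenvector(A, [1, 0])      # eigenvalue 2
assert not is_eigenvector(A, [1, 1])  # A v = [2, 3] is not a multiple of v
```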
Linear algebra is a branch of algebra in mathematics concerned with the study of vectors, vector spaces, linear transformations, and systems of linear equations. Vector spaces are very important in modern mathematics. Linear algebra is widely used in abstract algebra and functional analysis. It has extensive applications in natural and social sciences, for both linear systems and linear models of nonlinear systems.
It is part of the study of Abstract Algebra.
This book is meant for students who wish to study linear algebra from scratch. The approach will not be entirely informal. Every result in the book is intended to be either proved or justified by some mathematical procedure. Links to tedious proofs can be made to Famous Theorems of Mathematics/Algebra after the proof is written there.
Learning to think is extremely important in mathematics. Therefore in this book exercises form an important component and by no means should be ignored. Many important concepts of linear algebra are developed via the exercises in the book. It is necessary that the student does the exercises before proceeding to the next chapter. Links to hints and solutions to many of the exercises are provided, but they should only be used in cases of difficulty.
Linear algebra describes ways to solve and manipulate (rearrange) systems of linear equations.
For example, consider the following equations:
\begin{matrix}
x &+& y &=& 0 \\
x &-& 2y &=& 3
\end{matrix}
These two equations form a system of linear equations. It is linear because none of the variables is raised to a power greater than one. The graph of a linear equation is a straight line. The solution to this system is:
\begin{matrix}
x &=& 1 \\
y &=& -1
\end{matrix}
since it makes all of the original equations valid; that is, the value on the left side of the equals sign is exactly the same as the value on the right side for both equations.
Linear algebra uses a system of notation, called a matrix, for describing the behavior of such systems. For the previous example, the coefficients of the equations can be stored in a coefficient matrix.
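For instance, here is a Python sketch of the coefficient matrix for the system x + y = 0 and x − 2y = 3, with each row holding the left-hand-side coefficients of one equation:

```python
# Coefficient matrix for:  x + y = 0  and  x - 2y = 3.
A = [[1, 1],
     [1, -2]]
b = [0, 3]          # right-hand sides

def mat_vec(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

# The solution (x, y) = (1, -1) satisfies A [x, y] = b.
assert mat_vec(A, [1, -1]) == b
```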
